You have a schema containing lots of MyISAM tables that may be stale because code no longer updates them. You can rank them by the date of their most recent update.

mysql -D PriceComparison -NBe "show table status like '%'" \
  | cut -f1,2,11,12 \
  | sort -k 5,6

I omitted a grep MyISAM because the InnoDB tables are grouped together with NULL dates, so they aren’t a problem. The sneaky gotcha is that sort sees a datetime as two whitespace-separated fields (date and time), which is why the update timestamp is keys 5 and 6. To sort the list by creation date you would use sort -k 3,4.


For reporting-style stats we often need to round ‘now’ down to the nearest 5 minutes. For instance, my current project requires the code to identify the current ‘rolling twenty minutes’ using such rounding. An easy way to do this, which works even in MySQL v4.0, is the following.

SELECT FROM_UNIXTIME(300 * FLOOR( UNIX_TIMESTAMP()/300 ))

which gives the most recently passed 5-minute boundary, taking 13:52:50 down to 13:50:00.
Of course, if you need rounding to the nearest 15 minutes instead, change the 300 to 900 (both are seconds).
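
As a sketch of the ‘rolling twenty minutes’ use, the rounded boundary anchors the window; the stats table and its ts column here are hypothetical stand-ins for your own schema.

SELECT COUNT(*)
FROM stats   -- hypothetical table with a `ts` DATETIME column
WHERE ts >= FROM_UNIXTIME(300 * FLOOR(UNIX_TIMESTAMP()/300)) - INTERVAL 20 MINUTE
  AND ts <  FROM_UNIXTIME(300 * FLOOR(UNIX_TIMESTAMP()/300));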

If your tables are all using InnoDB then you can get a snapshot of the master with minimal locking using

schemata="one two three"
mysqldump --opt --no-data --routines --skip-triggers \
  --databases $schemata >schema__all.sql
mysqldump --opt --no-create-info --skip-routines --triggers \
  --flush-logs --single-transaction --master-data=2 \
  --databases $schemata >data__all.sql

Otherwise, if you have some non-transactional tables (e.g. MyISAM), omit --single-transaction and use table locking (--lock-all-tables) instead.
A comment in the dump file will tell you what settings to use for master.info, but thanks to --flush-logs you’d be able to guess them anyway.

change master to master_log_file='..', master_log_pos=..  # probably 107
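
Spelled out with hypothetical host, user and coordinates (the real file and position come from the commented CHANGE MASTER line that --master-data=2 writes near the top of the dump; credentials are elided):

CHANGE MASTER TO
  MASTER_HOST='master.example.com',    -- hypothetical
  MASTER_USER='repl',                  -- hypothetical
  MASTER_LOG_FILE='mysql-bin.000001',  -- first log after --flush-logs
  MASTER_LOG_POS=107;
START SLAVE;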

Directory objects let Oracle database code interact with the filesystem, e.g. via UTL_FILE, external tables, or Data Pump.

create or replace directory EXPORT_DIR as '/var/local/instance0/export';
create or replace directory IMPORT_DIR as '/var/local/instance0/import';
grant READ on directory IMPORT_DIR to APPUSER;
grant WRITE on directory EXPORT_DIR to APPUSER;
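
As a minimal sketch of what those grants enable, APPUSER can now write through the directory with UTL_FILE; the file name example.csv is hypothetical.

-- run as APPUSER
declare
  fh utl_file.file_type;
begin
  fh := utl_file.fopen('EXPORT_DIR', 'example.csv', 'w');
  utl_file.put_line(fh, 'id,name');
  utl_file.fclose(fh);
end;
/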

To check the directory grants from SQL*Plus:

set linesize 200
col grantee format a7
col table_schema format a12
col table_name format a10
col privilege format a9
select grantee,table_schema,table_name,privilege
from all_tab_privs
where table_name = 'EXPORT_DIR' or table_name = 'IMPORT_DIR';
Switching to RMAN, here is a quick sequence to check backup health and then restore and recover the database:

rman target /
crosscheck backup;
crosscheck copy;
list failure;
advise failure;
report schema;
restore database preview summary;
startup force mount;
restore database;
recover database;
alter database open;

or

...
recover database until cancel;
alter database open resetlogs;
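
If you’d rather drive the incomplete recovery from within RMAN itself, a sketch with a hypothetical cut-off time:

run {
  set until time "to_date('2011-06-21 05:00:00','yyyy-mm-dd hh24:mi:ss')";
  restore database;
  recover database;
}
alter database open resetlogs;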

If you need to rebuild your temp tablespace, e.g. because the existing one is corrupt or the wrong size, the trick is to temporarily give Oracle a new default temp tablespace to use while you replace the real one, then switch back.

create temporary tablespace TEMP_TMP
 tempfile '/oracle/oradata/IFSS/temp02.dbf'
 size 1G reuse
 autoextend on next 100M maxsize unlimited;
alter database default temporary tablespace TEMP_TMP;
drop tablespace TEMP;
! rm /oracle/oradata/IFSS/temp01.dbf
create temporary tablespace TEMP
 tempfile '/oracle/oradata/IFSS/temp01.dbf'
 size 1G reuse
 autoextend on next 100M maxsize unlimited;
alter database default temporary tablespace TEMP;
drop tablespace TEMP_TMP;
! rm /oracle/oradata/IFSS/temp02.dbf
select * from dba_temp_files;
select * from dba_tablespaces;
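
To confirm which tablespace is the current default at each step, this query is handy:

select property_value
from database_properties
where property_name = 'DEFAULT_TEMP_TABLESPACE';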

Jowan points out in the comments that “reuse” is better omitted.

This is very useful for copying, say, your Live instance to Staging or another tier. Doing a disconnected duplication means there is absolutely no chance of upsetting your Live instance, and I also find it more intuitive. (It’s only available from 11g.)

To begin, you need a backup of your source instance, including a control file snapshot and any archive logs you want to include, transferred to the destination server. (The copied backup files don’t need to be in any particular structure, nor in the same directory.) I rsync the files I need into /opt/oracle/tmp/{backupset,archivelog,autobackup}/2011_06_21/.... The gotcha I need to remember is that the archive logs must be gunzipped before the first attempt at import.

If you’ve taken care of all the steps for creating the destination instance, you’re ready to kick off the import.

sqlplus / as sysdba
startup nomount
quit
rman auxiliary /
duplicate database to DEST
  until time "TO_DATE('2011-06-21 05:00:00', 'YYYY-MM-DD HH24:MI:SS')"
  backup location '/path/to/tree/of/backup/files';
quit

(where DEST is the name of the destination instance.)

sqlplus / as sysdba
alter database open;
quit
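
A quick sanity check once it’s open:

select name, open_mode, resetlogs_time from v$database;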

This often fails for one reason or another; simply re-run the rman step and it should work the second time. This imports everything under /path/to/tree/of/backup/files up to the specified cut-off time.
You may then want to disable archive logs if you don’t want them on the new instance.
Most likely you’ll also need to rebuild your TEMP tablespace(s).
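
Disabling archive logging, if you go that route, is the usual mount-mode toggle from SQL*Plus as sysdba:

shutdown immediate
startup mount
alter database noarchivelog;
alter database open;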
