'Offline' logminer and encryption



This week I've been using LogMiner to do some analysis on one of our live systems: we're seeing a pattern of data we don't expect, and we want to track down all the transactions that modify a particular table.

The database in question is supported by a third party, so we don't have the admin rights to do any of the investigation on live directly and are having to do it elsewhere. The system also uses an encryption wallet to encrypt some SecureFile LOBs - this didn't seem relevant when I started, but it turned out that it was.

For those of you not familiar with LogMiner, it allows you to 'mine' the redo logs for information about all the transactions that have gone through the database and modified it. It won't work by default, though - you need some degree of additional supplemental logging. In our case we had previously used Streams in this database, so the extra logging was already in place.
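
For reference, checking whether the minimal level of supplemental logging is in place (and enabling it if it isn't) looks something like this:

SQL> SELECT supplemental_log_data_min FROM v$database;

SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;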

LogMiner can work directly against the live instance (and it's easier that way) - but we didn't have that option.

The first step of the analysis is to create a LogMiner dictionary - essentially a mapping of object IDs to the names they have in the data dictionary. Everything in the redo logs is keyed on IDs rather than names, so we need to be able to map them back.

There are three ways to do this: read the data dictionary directly from the source database, dump the data dictionary into the redo stream to be read back in later, or dump it to a flat file. The first option was out for me, as we're mining the logs on a different system. The flat file would have been the safest choice, since the redo log option takes out a complete DDL lock while it runs, so no object changes can be made.
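
For reference, the three variants look roughly like this (the file name and directory in the last one are just illustrative):

-- online catalog: only an option when mining on the source database itself
EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);

-- dump the dictionary into the redo stream
EXECUTE DBMS_LOGMNR_D.BUILD(OPTIONS => DBMS_LOGMNR_D.STORE_IN_REDO_LOGS);

-- dump the dictionary to a flat file (the directory must be listed in utl_file_dir)
EXECUTE DBMS_LOGMNR_D.BUILD('dictionary.ora', '/some/directory', DBMS_LOGMNR_D.STORE_IN_FLAT_FILE);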

However, when I got the provider to run the statement it wouldn't work, as utl_file_dir was set to a null string - so without a restart that option was off the table.

I decided to tell them to use the redo log option instead - in tests it was very quick, and during the day there's generally no DDL happening anyway.

So we dump that out with

SQL> EXECUTE DBMS_LOGMNR_D.BUILD( OPTIONS=> DBMS_LOGMNR_D.STORE_IN_REDO_LOGS);

PL/SQL procedure successfully completed.

We then run a little bit of code to see which redo logs it ended up in

SQL> SELECT NAME FROM V$ARCHIVED_LOG WHERE DICTIONARY_BEGIN='YES';

NAME
--------------------------------------------------------------------------------
/oracle/DB_NAME/oraarch/DB_NAME_1_202466_729785513.arc

SQL> SELECT NAME FROM V$ARCHIVED_LOG WHERE DICTIONARY_END='YES';

NAME
--------------------------------------------------------------------------------
/oracle/DB_NAME/oraarch/DB_NAME_1_202466_729785513.arc 

So in our case just one file.

The developers have given me the time window of interest, so I just need to identify the logfiles that span that gap. This simple query lists everything from the last 4 hours, and I can then pick out the ones I'm interested in - shown below:


select sequence#, to_char(first_time,'dd-mon-yyyy hh24:mi:ss')
from v$log_history
where first_time > sysdate - 4/24;


    202440 04-feb-2015 11:04:09
    202441 04-feb-2015 11:10:12
    202442 04-feb-2015 11:14:54
    202443 04-feb-2015 11:16:15
    202444 04-feb-2015 11:22:18
    202445 04-feb-2015 11:28:21
    202446 04-feb-2015 11:34:23

So I now know I need those 7 logseq numbers, plus the extra one containing the dictionary mappings.

So how do I get those files copied over to my server? I could just ask the supplier to restore them for me, but I don't want them making a mistake on the live server (the logs are already on tape by this point). Instead I decided to reuse the Tivoli Storage Manager setup from the live server - in our case everything lives under /tsm/DB.

So I get them to tar that directory up and copy it over for me. I then untar it, and I now have the ability to pretend I am the live DB server - as long as I have the correct DBID from the live server to use (which I do).
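
Roughly speaking the copy looks like this (paths and hostname illustrative - ours all lived under /tsm):

# on the live server - tar up the TSM client directory
tar -cf /tmp/tsm_DB_NAME.tar /tsm/DB_NAME

# copy it across to the test box
scp /tmp/tsm_DB_NAME.tar oracle@testserver:/tmp/

# on the test server - unpack it into the same location
cd / && tar -xf /tmp/tsm_DB_NAME.tar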

To be able to restore the logfiles, though, I need an instance with the same name as the live DB. It doesn't need to be a real database - literally just an instance.

So I just create an initDB.ora file with a single line (db_name=LIVEDB_NAME) and start it up nomount.
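
In other words, something like this (the instance name has to match the live db_name):

export ORACLE_SID=DB_NAME
echo "db_name=DB_NAME" > $ORACLE_HOME/dbs/initDB_NAME.ora

sqlplus / as sysdba
SQL> startup nomount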

I can now connect to this dummy instance and use it to restore files to my test box

So I connect in and set the live DBID:

[oracle@server-name]:DB_NAME:/oracle/11.2.0.3.DB/dbs# rman catalog=rman/rmanpass@rcat

Recovery Manager: Release 11.2.0.3.0 - Production on Wed Feb 4 17:35:56 2015

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

connected to recovery catalog database

RMAN> set dbid = 1860324777;

executing command: SET DBID
database name is "DB_NAME" and DBID is 1860324777

I then connect to the local dummy target

RMAN> connect target

connected to target database: DB_NAME (not mounted)

Then I can bring back the files I want to investigate:

RMAN> run {
2>  allocate channel t1 type 'sbt_tape' parms 'ENV=(TDPO_OPTFILE=/tsm/DB_NAME/conf/tdpo.opt)';
3>  restore archivelog from logseq 202440 until logseq 202446;
4> }

allocated channel: t1
channel t1: SID=429 device type=SBT_TAPE
channel t1: Data Protection for Oracle: version 6.3.0.0

Starting restore at 04-FEB-15

channel t1: starting archived log restore to default destination
channel t1: restoring archived log
archived log thread=1 sequence=202440
channel t1: restoring archived log
archived log thread=1 sequence=202441
channel t1: reading from backup piece redolog_DB_NAME_870780093_20150204_166955_1.rman
channel t1: piece handle=redolog_DB_NAME_870780093_20150204_166955_1.rman tag=TAG20150204T111738
channel t1: restored backup piece 1
channel t1: restore complete, elapsed time: 00:00:45
channel t1: starting archived log restore to default destination
channel t1: restoring archived log
archived log thread=1 sequence=202442
channel t1: restoring archived log
archived log thread=1 sequence=202443
channel t1: restoring archived log
archived log thread=1 sequence=202444
channel t1: restoring archived log
archived log thread=1 sequence=202445
channel t1: restoring archived log
archived log thread=1 sequence=202446
channel t1: reading from backup piece redolog_DB_NAME_870783559_20150204_166957_1.rman
channel t1: piece handle=redolog_DB_NAME_870783559_20150204_166957_1.rman tag=TAG20150204T121502
channel t1: restored backup piece 1
channel t1: restore complete, elapsed time: 00:00:45
Finished restore at 04-FEB-15
released channel: t1

Followed by the one with the dictionary in

RMAN> run {
2>  restore archivelog from logseq 202466 until logseq 202466;}

Starting restore at 04-FEB-15
allocated channel: ORA_SBT_TAPE_1
channel ORA_SBT_TAPE_1: SID=429 device type=SBT_TAPE
channel ORA_SBT_TAPE_1: Data Protection for Oracle: version 6.3.0.0
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=5 device type=DISK

channel ORA_SBT_TAPE_1: starting archived log restore to default destination
channel ORA_SBT_TAPE_1: restoring archived log
archived log thread=1 sequence=202466
channel ORA_SBT_TAPE_1: reading from backup piece redolog_DB_NAME_870787778_20150204_166961_1.rman
channel ORA_SBT_TAPE_1: piece handle=redolog_DB_NAME_870787778_20150204_166961_1.rman tag=TAG20150204T132641
channel ORA_SBT_TAPE_1: restored backup piece 1
channel ORA_SBT_TAPE_1: restore complete, elapsed time: 00:00:15
Finished restore at 04-FEB-15

RMAN>

The files have been restored to the default directory, which is $ORACLE_HOME/dbs - we can see them here:

[oracle@server-name]:DB_NAME:/oracle/11.2.0.3.DB/dbs# ls -lrt

-rw-r--r-- 1 oracle oinstall        15 2015-02-04 17:34 initDB_NAME.ora
-rw-rw---- 1 oracle oinstall      1544 2015-02-04 17:35 hc_DB_NAME.dat
-rw-r----- 1 oracle oinstall  93298688 2015-02-04 17:38 arch1_202441_729785513.dbf
-rw-r----- 1 oracle oinstall 128683520 2015-02-04 17:38 arch1_202440_729785513.dbf
-rw-r----- 1 oracle oinstall 100395520 2015-02-04 17:39 arch1_202442_729785513.dbf
-rw-r----- 1 oracle oinstall 123347968 2015-02-04 17:39 arch1_202446_729785513.dbf
-rw-r----- 1 oracle oinstall 125743616 2015-02-04 17:39 arch1_202445_729785513.dbf
-rw-r----- 1 oracle oinstall 125374464 2015-02-04 17:39 arch1_202444_729785513.dbf
-rw-r----- 1 oracle oinstall 154757632 2015-02-04 17:39 arch1_202443_729785513.dbf
-rw-r----- 1 oracle oinstall  23934976 2015-02-04 17:43 arch1_202466_729785513.dbf

So now we're looking good - we just need to do the LogMiner analysis. For this I can't use my dummy instance, as it's nothing more than memory and processes - there is really nothing there.

So I choose another DB I have available on the same box, which is at least the same version as the live one I extracted from.

I then log on to this and run the LogMiner setup steps:

EXECUTE DBMS_LOGMNR.ADD_LOGFILE(-
   LOGFILENAME => '/oracle/11.2.0.3.DB/dbs/arch1_202466_729785513.dbf', OPTIONS => DBMS_LOGMNR.NEW);
EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '/oracle/11.2.0.3.DB/dbs/arch1_202440_729785513.dbf');
EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '/oracle/11.2.0.3.DB/dbs/arch1_202441_729785513.dbf');
EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '/oracle/11.2.0.3.DB/dbs/arch1_202442_729785513.dbf');
EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '/oracle/11.2.0.3.DB/dbs/arch1_202443_729785513.dbf');
EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '/oracle/11.2.0.3.DB/dbs/arch1_202444_729785513.dbf');
EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '/oracle/11.2.0.3.DB/dbs/arch1_202445_729785513.dbf');
EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '/oracle/11.2.0.3.DB/dbs/arch1_202446_729785513.dbf');
PL/SQL procedure successfully completed.

(the same confirmation comes back for each of the eight ADD_LOGFILE calls)

That loads the files ready to go; then I just need to start the analysis:

SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.DICT_FROM_REDO_LOGS + DBMS_LOGMNR.COMMITTED_DATA_ONLY + DBMS_LOGMNR.PRINT_PRETTY_SQL);

PL/SQL procedure successfully completed.

So all looking good.

until......

SQL> select table_space from V$LOGMNR_CONTENTS where username='USER_OF_INTEREST';
ERROR:
ORA-28365: wallet is not open

Now this is annoying, as the user I'm interested in doesn't have anything encrypted - but LogMiner isn't happy because some of the redo in those logs is encrypted, so we need the wallet.

I get the provider to send me the ewallet.p12 from live.

Now I need to hook this up to my test DB. In this particular case there was already a wallet specified in the sqlnet.ora for another database. I didn't want to mess with that, so I copied the sqlnet.ora file to a separate directory and added in these lines (replacing the original content):

ENCRYPTION_WALLET_LOCATION =
    (SOURCE=
        (METHOD=FILE)
        (METHOD_DATA=
            (DIRECTORY=/oracle/home/oracle/wallet)
        )
    )

Then I set TNS_ADMIN to this separate directory, so we pick up this sqlnet.ora file and not any other:

 export TNS_ADMIN=/oracle/home/oracle/wallet

I then restart the database so the new wallet location is picked up, and I can then open the wallet using the appropriate password:

SQL> ALTER SYSTEM SET ENCRYPTION WALLET OPEN IDENTIFIED BY "topsecret!";

System altered.
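
At this point it's worth a quick sanity check that the wallet really is open:

SQL> select wrl_parameter, status from v$encryption_wallet;

If all is well, STATUS comes back as OPEN.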

So now let's try the query again:

SQL> select table_space from V$LOGMNR_CONTENTS where username='USER_OF_INTEREST';
select table_space from V$LOGMNR_CONTENTS where username='USER_OF_INTEREST'
                        *
ERROR at line 1:
ORA-01306: dbms_logmnr.start_logmnr() must be invoked before selecting from
v$logmnr_contents


Hmm, that totally failed - it turns out a LogMiner session is session-specific: everything you load up is private to your own database session, and the instance restart threw it all away.

So the restart means redoing the LogMiner setup again:

EXECUTE DBMS_LOGMNR.ADD_LOGFILE(-
   LOGFILENAME => '/oracle/11.2.0.3.DB/dbs/arch1_202466_729785513.dbf',OPTIONS => DBMS_LOGMNR.NEW);
EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '/oracle/11.2.0.3.DB/dbs/arch1_202440_729785513.dbf');
EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '/oracle/11.2.0.3.DB/dbs/arch1_202441_729785513.dbf');
EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '/oracle/11.2.0.3.DB/dbs/arch1_202442_729785513.dbf');
EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '/oracle/11.2.0.3.DB/dbs/arch1_202443_729785513.dbf');
EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '/oracle/11.2.0.3.DB/dbs/arch1_202444_729785513.dbf');
EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '/oracle/11.2.0.3.DB/dbs/arch1_202445_729785513.dbf');
EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '/oracle/11.2.0.3.DB/dbs/arch1_202446_729785513.dbf');

EXECUTE DBMS_LOGMNR.START_LOGMNR( OPTIONS => DBMS_LOGMNR.DICT_FROM_REDO_LOGS + DBMS_LOGMNR.COMMITTED_DATA_ONLY +DBMS_LOGMNR.PRINT_PRETTY_SQL);

That all ran fine - now we try the select once more:

select table_space from V$LOGMNR_CONTENTS where username='USER_OF_INTEREST';

and that now works! hoorah
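
The sort of query I actually wanted is more along these lines (the table name here is made up, obviously) - pulling back the redo SQL for everything touching the table in question - and once finished it's worth tidying the session up:

select scn, timestamp, operation, sql_redo
from v$logmnr_contents
where seg_owner = 'USER_OF_INTEREST'
and table_name = 'TABLE_OF_INTEREST';

EXECUTE DBMS_LOGMNR.END_LOGMNR;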

Slightly convoluted, but nice to know it works - and some useful learning along the way.








