Pete Finnigan's Oracle Security Weblog


PFCLATK - Audit Trail Toolkit - Checksums



We have a toolkit called PFCLATK that we use in customer engagements to help our customers create comprehensive and useful audit trails for their databases. The toolkit is used in consulting engagements at the moment, but it will be added to PFCLScan, our vulnerability scanner for Oracle databases, in the next release.

In these audit trail design and implementation engagements we sit down with a customer and help them design useful audit trails for the Oracle database engine (i.e. to capture actual abuse of the Oracle database) and also for data access and controls. We start by creating a table of all of the events that we would like to capture and classify each event by severity and by its need for reporting, alerting and escalation. From this we can decide how fast the customer must react when an event becomes true: if we decide to react in real time or semi-real time, an alert is raised straight away; if we decide to react once per day, a report is produced or an alert is raised instead. We then map the design document (the events table) onto the rules, factors and settings of the PFCLATK policies and deploy rapidly.
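As an illustration only (this is not the actual PFCLATK schema; the table and column names here are hypothetical), such an events table from a design session could be captured along these lines:

create table atk_event_design (
  event_id    number primary key,           -- unique ID for the designed event
  event_name  varchar2(128) not null,       -- e.g. 'Failed logon as SYS'
  severity    number(1) not null,           -- 1 = critical .. 5 = informational
  report_freq varchar2(20) not null,        -- 'REALTIME', 'DAILY', ...
  alert       char(1) default 'N' not null, -- raise an alert when the event fires?
  escalate    char(1) default 'N' not null  -- escalate to management?
);

-- example: a critical event that must raise an alert immediately
insert into atk_event_design
  values (1, 'Failed logon as SYS', 1, 'REALTIME', 'Y', 'Y');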

This means that we will sell licenses for PFCLATK inside of PFCLScan. Currently we only use it as a consulting tool, but from version 3.0 of PFCLScan PFCLATK will be part of that product. Two major interfaces will be added to PFCLScan for PFCLATK: the first is a dashboard showing real-time monitoring of Oracle databases and indications of the alerts and issues located; the second is an admin interface that allows deployment of the toolkit as well as maintenance and configuration of all of the toolkit's settings.

The power of adding PFCLATK to PFCLScan is that we can now combine activity monitoring with vulnerability and data security analysis. This means that when audit trail events fire and are captured in, say, 2 of your 60 databases, you will immediately be able to go to the vulnerability section and, with a click, run a quick vulnerability audit of all of the databases to see whether they have an issue in the area captured in those 2 databases, or perform complete security audits.

Conversely, after a vulnerability scan you will be able to click and immediately deploy an audit trail policy set for the issues located during the scan. This gives a strong synergy between looking for security holes in the Oracle database configuration and immediately enabling audit trails to capture abuse of those holes, and, from the other side, detecting attacks or anomalies and then running scans to see how the rest of the estate fares with the same issues.

The audit toolkit PFCLATK also includes centralised logging and auditing. You can deploy the toolkit to all target databases and also to one or more central databases, which automatically collect the audit trails from the target databases to allow centralised management and reporting.
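As a rough sketch of the plumbing involved (the link name, account and TNS alias here are illustrative, not the toolkit's actual objects), the central database reaches each target over a database link, for example:

-- on the central database: a link to one target; the connecting account
-- would need access to the target's audit tables
create database link atk_target_db
  connect to atk_audit identified by "example_password"
  using 'TARGET01';

-- the central database can then read that target's audit rows, e.g.:
select count(*) from aud$@atk_target_db;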

Anyway, that's a quick intro to PFCLATK; now to the topic of the post. Part of the centralised audit is the PUL mechanism that runs automagically to PUL (yes, one L) the data from each target database to the central database. At a high level the process does these steps for each of the audit tables in the target database (a simplified sketch follows the list):

  1. Get a count of the records in the previous hour

  2. Checksum the records of the previous hour using a SHA1 checksum

  3. Copy the records from the target to the central database, using the DBID as a unique ID since we store records from multiple databases

  4. Checksum the same group of records in the central database

  5. If the checksums match, delete the records from the target database
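
To make the flow concrete, here is a simplified, hypothetical sketch of that loop for one audit table. The link name atk_target_db, the central table atk_central_aud and the reduced column list are illustrative, not the toolkit's real objects (the real toolkit checksums the full column list); pfclatk.checksum is the toolkit function shown in the test cases below:

declare
  lv_cnt     pls_integer;
  lv_src_chk varchar2(200);
  lv_dst_chk varchar2(200);
  -- predicate for the previous clock hour, shared by all of the steps
  -- (note: assumes the two databases' clocks agree on the hour window)
  lv_win     varchar2(400) :=
    ' where cast(from_tz(ntimestamp#,''00:00'') at local as date)'
    || ' >= trunc(sysdate,''HH24'') - 1/24'
    || ' and cast(from_tz(ntimestamp#,''00:00'') at local as date)'
    || ' < trunc(sysdate,''HH24'')';
begin
  -- 1. count the target's audit records for the previous hour
  execute immediate
    'select count(*) from aud$@atk_target_db' || lv_win into lv_cnt;

  -- 2. checksum the same window on the target side (remote call, local SQL)
  lv_src_chk := pfclatk.checksum@atk_target_db(
    'select sessionid,entryid,dbid from aud$' || lv_win
    || ' order by sessionid,entryid');

  -- 3. copy the window to the central store; the DBID keeps records from
  --    different source databases distinct
  execute immediate
    'insert into atk_central_aud (sessionid,entryid,dbid) '
    || 'select sessionid,entryid,dbid from aud$@atk_target_db' || lv_win;

  -- 4. checksum the copied records in the central database
  --    (a real implementation would also filter on the source dbid)
  lv_dst_chk := pfclatk.checksum(
    'select sessionid,entryid,dbid from atk_central_aud' || lv_win
    || ' order by sessionid,entryid');

  -- 5. only delete from the target once both checksums agree
  if lv_src_chk = lv_dst_chk then
    execute immediate 'delete from aud$@atk_target_db' || lv_win;
    commit;
  else
    rollback;  -- discard the copy if verification failed
  end if;
end;
/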


Because the toolkit deploys the same code to the target and the central database, and because the PUL runs from the central database, we at first ran the checksum in the central database both for the remote records in the target's audit tables and for the local records in the central database itself. We recently became aware of a performance issue, and this was tracked down to the fact that the audit records were checksummed across a database link: the checksum PL/SQL function is in the central database and the SQL that feeds it is run across a link. A bit of testing reveals that if we instead run the checksum in the target database, with the SQL also local to the target database, the performance is improved. This is purely because a checksum done with remote SQL via a link means that all of the records that feed into the checksum are returned over the link, whereas when we call the checksum function remotely only the checksum itself is returned over the link. Two test cases follow.

1 - The checksum as it is now: the checksum function is in the central database and the data being checksummed is returned via SQL across a link:

declare
  lv_aud$_sql varchar2(32767);
  lv_aud$_ch  varchar2(200);
  lg_date_fmt varchar2(30) := 'DD-MON-YYYY HH24:MI:SS.FF';
begin
  -- build the SQL that selects one hour of aud$ records across the link
  lv_aud$_sql:='select sessionid,entryid,statement,timestamp#,userid,userhost,terminal,action#,'
    ||'returncode,obj$creator,obj$name,auth$privileges,auth$grantee,new$owner,'
    ||'new$name,ses$actions,ses$tid,logoff$lread,logoff$pread,logoff$lwrite,logoff$dead,'
    ||'logoff$time,comment$text,clientid,spare1,spare2,obj$label,ses$label,priv$used,'
    ||'sessioncpu,ntimestamp#,proxy$sid,user$guid,instance#,process#,xid,auditid,'
    ||'scn,dbid,sqlbind,sqltext,obj$edition from aud$@atk_12 '
    ||'where cast(from_tz(ntimestamp#,''00:00'') at local as date) between to_date(''05-JUN-2019 15:00:00'', ''DD-MON-YYYY HH24:MI:SS'') '
    ||'and to_date(''05-JUN-2019 15:59:59'', ''DD-MON-YYYY HH24:MI:SS'') order by sessionid,entryid asc';
  dbms_output.put_line('['||to_char(SYSTIMESTAMP,lg_date_fmt)||'] Before the checksum');
  -- local checksum function; every record is pulled over the link to feed it
  lv_aud$_ch:=pfclatk.checksum(lv_aud$_sql);
  dbms_output.put_line('['||to_char(SYSTIMESTAMP,lg_date_fmt)||'] Checksum local checksum() sql across the link: '||lv_aud$_ch);
end;
/

2 - The checksum is now in the remote database and the SQL is local to the remote database:

declare
  lv_aud$_sql varchar2(32767);
  lv_aud$_ch  varchar2(200);
  lg_date_fmt varchar2(30) := 'DD-MON-YYYY HH24:MI:SS.FF';
begin
  -- same one-hour window, but the SQL now references the local aud$ table
  -- because it will be executed in the remote (target) database
  lv_aud$_sql:='select sessionid,entryid,statement,timestamp#,userid,userhost,terminal,action#,'
    ||'returncode,obj$creator,obj$name,auth$privileges,auth$grantee,new$owner,'
    ||'new$name,ses$actions,ses$tid,logoff$lread,logoff$pread,logoff$lwrite,logoff$dead,'
    ||'logoff$time,comment$text,clientid,spare1,spare2,obj$label,ses$label,priv$used,'
    ||'sessioncpu,ntimestamp#,proxy$sid,user$guid,instance#,process#,xid,auditid,'
    ||'scn,dbid,sqlbind,sqltext,obj$edition from aud$ '
    ||'where cast(from_tz(ntimestamp#,''00:00'') at local as date) between to_date(''05-JUN-2019 15:00:00'', ''DD-MON-YYYY HH24:MI:SS'') '
    ||'and to_date(''05-JUN-2019 15:59:59'', ''DD-MON-YYYY HH24:MI:SS'') order by sessionid,entryid asc';
  dbms_output.put_line('['||to_char(SYSTIMESTAMP,lg_date_fmt)||'] Before the checksum');
  -- remote checksum call; only the checksum value comes back over the link
  lv_aud$_ch:=pfclatk.checksum@atk_12(lv_aud$_sql);
  dbms_output.put_line('['||to_char(SYSTIMESTAMP,lg_date_fmt)||'] Checksum remote checksum() across the link local sql in the ATK: '||lv_aud$_ch);
end;
/

The difference is subtle: the first example has the link in the SQL and the second on the PL/SQL API. As I said, in the second case only the checksum is transmitted across the wire. The performance is better (albeit I don't have many audit records in my database, so the gain is small but measurable):
1 - run the remote SQL

SQL> set serveroutput on
SQL> @run_chk_remote
[06-JUN-2019 09:10:52.356646000] Before the checksum
[06-JUN-2019 09:10:52.681866000] Checksum local checksum() sql across the link: 1628EC464B590F82FE98A7588492FC0D54145695

PL/SQL procedure successfully completed.

SQL>

2 - remote PL/SQL

SQL> @run_chk_rem_chk.sql
[06-JUN-2019 09:12:00.175714000] Before the checksum
[06-JUN-2019 09:12:00.343161000] Checksum remote checksum() across the link local sql in the ATK: 1628EC464B590F82FE98A7588492FC0D54145695

PL/SQL procedure successfully completed.

SQL>

So the remote SQL takes 0.33 of a second to process and the remote checksum takes 0.17 of a second. With much bigger data sets the savings are much bigger. With the remote checksum and remote SQL (local to the checksum) we still need to process all of the records and feed them into the checksum, but we no longer send the data across the wire.

Lesson learned: be aware of how the data flows so that the work can be reduced. The toolkit has now been improved to do the checksum remotely for each target database.

If anyone is interested in our help to design and implement an audit trail very quickly, then please let me know via our contact details.