I often take a show of hands at conference talks or trainings and ask the room: "How many people have audit enabled in the database?" Usually between 10% and 30% of the room will put their hands up. I then say: "Those of you with your hands up, keep them up if you actually do anything with the audit trails except store them and never look at them." Again only 10% to 30% of those may keep their hands up; in a room of more than 100 people I have seen one or two people keep their hand up. I have asked these questions for years, in different settings, countries and more. The answer is always similar, and the fact is that it's not changing over the years. My guess (not a complete guess, but based only on the people I have had the chance to ask!) is that 5% to maybe 15% have auditing of the database engine itself.
This is not good enough. I spend quite a lot of my time helping companies create useful audit trails in their databases so that they know what's happening and who is trying to do wrong to their data. I do this with a toolkit I have created called PFCLATK. This toolkit helps map a policy/event-driven plan into settings in the actual databases and includes management and audit of audit, audit of security, and even audit of the security of audit. GDPR has created an interest in getting suitable audit trails up and running quickly, as these are a core benefit in helping with GDPR; i.e. to know who accessed what, to know if you have been breached and more.
With GDPR and other existing data laws and data breach laws we need to rely more on audit trails these days, not less. I said to someone in the last few weeks: what is the point of planning for a database breach and creating an incident response process and a plan to deal with forensics - to allow you to swiftly, calmly and objectively deal with a data breach if/when it happens - IF you take no steps now to protect the data in your database? I went on to discuss that you can create an Oracle database security policy (a document that describes what secure data in an Oracle database looks like to you) and implement it across all databases. You should still create your incident response process BUT also start to plan and implement Oracle security in your databases at the same time; as I say, there is no point planning for the breach if you do not bolt down the database.
As part of that bolting down of the database it makes absolute sense for one of the first countermeasures to be a rich and useful audit trail.
Audit trails in Oracle have a lot of nuances, issues and strange ways. I will cover some of these here on this blog over the coming weeks in detail and show you what I mean; keep a watch for new posts.
You may say: what about Unified Audit in 12c? It has been around now since 2013, BUT I have come across only a small number of people who use Unified Audit rather than the standard core audit - and bear in mind how few people actually have an audit trail set up and react to it; see above! As part of the show of hands I always also ask if any of the people with their hand up use Unified Audit; usually it is a resounding no, or perhaps one person says they use it. On the face of it Unified Audit seems to offer advantages - policies, filtered audit - BUT it also turns off some existing audit features if you enable it in PURE mode, such as syslog and writing to the OS. It also seemed to have some teething issues in 12.1, but the move back to a real table instead of a securefile in 12.2 maybe makes that better.
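To show the policy-based style Unified Audit offers, here is a minimal sketch; the policy name, schema and table are hypothetical examples, and the user excluded from the policy is invented for illustration:

```sql
-- Hypothetical Unified Audit policy (12c+): audit changes to one table
CREATE AUDIT POLICY cards_dml_pol
  ACTIONS DELETE ON scott.cards,
          UPDATE ON scott.cards;

-- Enable the policy for everyone except a (hypothetical) batch account
AUDIT POLICY cards_dml_pol EXCEPT batch_user;

-- The records then appear in the unified trail
SELECT event_timestamp, dbusername, action_name, object_name
  FROM unified_audit_trail
 WHERE unified_audit_policies = 'CARDS_DML_POL';
```

The attraction is that the policy is a named, reusable object that can be enabled, disabled or filtered per user, rather than a scatter of individual AUDIT statements.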
PFCLATK also has policies and filtered results, and it had them a few years before 12cR1 was released; I was not on the beta of that release so I didn't know what was coming. I presented at UKOUG many years ago about audit trails and mused there about policies and context-based audit in the database for the core audit.
There are naysayers who say that auditing in the database with its core audit presents two main risks: 1) the DBA or someone else with power can delete or turn off the audit trails, and 2) there is a massive performance impact to using database audit.
Both of these are not true (well, a caveated not true). The attacker (the DBA) can turn off audit or delete or update audit records, BUT as part of the audit trail we include audit triggers and also audit of audit so that we can react if that event happens. We can always get deleted audit records back from the redo via flashback query or using LogMiner. If the DBA turned off audit trail creation we can also react swiftly, extract current actions using the incident process we created, and use flashback or LogMiner to see what changes he made. I believe the risk of holding audit in the database (for a short time - I do advocate moving it to central storage as quickly as possible) is outweighed by the benefit of using SQL and PL/SQL to report on and process the audit, rather than terse file search tools and the consolidation of thousands of files. Also, if we compare to the alternatives - network sniffer based technologies - then these also have issues. Firstly, they only see what is flying past on the wire. If you decide to monitor the CREDIT_CARD table and it is accessed in SQL that appears in the TNS network packets then fine, BUT if you access the table via a package called SCOTT.CARDS then either you have to know in advance what SCOTT.CARDS does and analyse all code to find the access, or you place a tap on the database server to try and get that detail from the database! Secondly, these network based technologies may syphon off details that could include data, and now this data is stored in a box outside of the database. Database level audit can also include SQL that includes data, BUT this is still stored in dictionary tables accessible to SYS, and if it is deleted we can get it back from the redo.
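As a sketch of the LogMiner route, this is roughly what hunting for deleted audit records could look like; the time window is invented and the exact options vary by version (CONTINUOUS_MINE, for example, is deprecated in later releases), so treat this as an assumption-laden outline rather than a recipe:

```sql
-- Hypothetical sketch: use LogMiner to see who deleted rows from SYS.AUD$
BEGIN
  DBMS_LOGMNR.START_LOGMNR(
    STARTTIME => SYSDATE - 1,
    ENDTIME   => SYSDATE,
    OPTIONS   => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG
               + DBMS_LOGMNR.CONTINUOUS_MINE);
END;
/

-- SQL_UNDO holds the INSERT statements needed to reinstate deleted rows
SELECT username, timestamp, sql_redo, sql_undo
  FROM v$logmnr_contents
 WHERE seg_owner = 'SYS'
   AND seg_name  = 'AUD$'
   AND operation = 'DELETE';

EXEC DBMS_LOGMNR.END_LOGMNR;
```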
The second issue people always cite for not turning on audit in the database, as I said, is performance, and this is also cited as a reason to use a network blade / software TNS sniffer solution - although if you have a lot of network traffic that causes different issues. My answer to the performance issue of enabling audit is that we should only audit that which should not happen; i.e. things that do not matter for performance. If someone accesses static configuration data then we audit DELETE and UPDATE (perhaps INSERT, dependent on the design of the data) but not SELECT, as SELECT is likely to be the major action performed thousands of times a day, whereas DELETE and UPDATE should hardly ever happen.
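In traditional core audit terms that idea is a one-liner; the schema and table name here are hypothetical examples:

```sql
-- Audit the changes that should (almost) never happen on static
-- configuration data - but deliberately NOT the constant SELECT traffic
AUDIT DELETE, UPDATE, INSERT ON appowner.app_config BY ACCESS;
```

Because the audited actions are rare by design, the audit overhead on normal running is effectively nil.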
We should start our audit trail design / policy with a list of events - "I want to know!" items - and then flesh that out; a table is a good structure. The table should include the list of audit events and information on whether we just capture each event and hold it, or react to it. If we react, how quickly? And finally, do we create a report (or include the event in a report) or do we raise an alert and escalate?
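If you want to keep that design inside the database itself, a minimal sketch of such a table might look like this; the table and column names are hypothetical, not part of any toolkit:

```sql
-- Hypothetical structure to hold the "I want to know!" audit event design
CREATE TABLE audit_event_design (
  event_name   VARCHAR2(200),  -- e.g. "failed logon as SYS"
  capture_only CHAR(1),        -- Y = capture and hold, N = react
  react_within VARCHAR2(30),   -- e.g. minutes, hours, next working day
  output_type  VARCHAR2(30)    -- report, alert, alert + escalate
);
```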
This design drives the technical solution; PFCLATK is deliberately written to encapsulate these ideas and to use policy and event based audit that is easy to define and implement quickly in a database. That was the reason to create it: to allow customers to design their events and THEN to translate those into events and alerts in the toolkit.
Back to the performance issue for a second. People often cite performance as a major reason not to implement database level audit, BUT at the same time they have triggers creating BEFORE and AFTER images of data changes at the application level. These can create 300% or even 500% overheads on the original action; this heavily depends on the code written into the triggers - I have seen some monsters in my time! Core database audit that audits things which should not happen has a tiny, tiny impact.
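For comparison, this is the kind of application-level history trigger whose cost is routinely accepted; the table and columns are invented for illustration:

```sql
-- Hypothetical before/after image trigger of the sort applications
-- already run - every change pays for an extra INSERT
CREATE OR REPLACE TRIGGER trg_orders_history
AFTER UPDATE OR DELETE ON appowner.orders
FOR EACH ROW
BEGIN
  INSERT INTO appowner.orders_history
    (order_id, old_status, new_status, changed_by, changed_at)
  VALUES
    (:OLD.order_id, :OLD.status, :NEW.status, USER, SYSTIMESTAMP);
END;
/
```

A per-row trigger like this fires on every affected row of every statement, which is where the large percentage overheads come from; core audit of rare events simply does not sit on that hot path.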
OK, I will return to audit trail events and nuances and issues very soon as I have a big list of things I would like to talk about here.