Pete Finnigan's Oracle Security Weblog

This is the weblog for Pete Finnigan. Pete works in the area of Oracle security and he specialises in auditing Oracle databases for security issues. This weblog is aimed squarely at those interested in the security of their Oracle databases.

PFCLScan - Version 3.0

PFCLScan Version 3.0 Dashboard and Single Page report

We are very excited to announce that we are currently working on version 3.0 of PFCLScan, our flagship security scanner for the Oracle database. It will be ready for sale in September and the development is going really well. I just wanted to give a quick update on progress and a first discussion of the new updates / changes to be added for version 3.0.

The first major change is one that will not affect many people - in the sense that they will not really notice it: we re-wrote major parts of the Windows interface to use more up to date screen components; this covers the Ribbon menus, the dockable windows and the results grids. The reason for this major re-write was stability and continued support of the interface. PFCLScan was working fine but some customers using Windows 10 builds 1709 and 1803 experienced issues that could not be fixed as the components we used unfortunately were no longer supported. So we committed quickly to a major re-write and this part is now complete.

Another major change to the interface / tool is the addition of plugins. Again most users will not notice this as it's used in the background BUT we needed it to enable other features to be added more easily and we hope that most users will benefit from the plugins we plan to add. Actually a first version of plugins is already there in version 2.0 but most people will not have noticed, and in version 2.0 it was not flexible enough to allow new plugins to be run. The test database connection and the test server connection in the respective "manage database connections" and "manage server connections" screens were actually a first version of plugins. We have now re-written the plugin architecture and enabled it properly. So the two connection plugins for database and server are now proper plugins and could be dynamically replaced - why would you..:-) but you could.

A plugin in PFCLScan is actually just a normal PFCLScan project; so end users can create a new project to do something, add it to the plugins screen and then run it, or they could substitute an existing plugin with their own to customise what is run at plugin time. We have added a number of "hook-points" to the interface and these are accessible via the plugins screen to configure. Each "hook-point" represents a place in the interface where plugins can run. We provide standard plugins for each hook point BUT end users can create their own if they wish and the interface will run them.

This creates a powerful interface as it's possible to extend the scanner without changing it. Some of the hook points we have already added are:

  • Test database connection

  • Test server connection

  • Before a scan

  • After a scan

  • After Error



We have provided plugins for these places and you can view them or change them; it's up to you. For instance the before a scan hook point is used to run a plugin that tests that the Oracle database user to be used for the scan has the correct permissions to run the scan. This is great as we get customers raising a ticket to say a scan errored when actually the database user that they used was missing a permission. The permissions are covered in the manual of course.
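As a flavour of what such a permissions check might do, here is a minimal sketch in PL/SQL. This is not the actual plugin code (a real plugin is a PFCLScan project) and SELECT ANY DICTIONARY is just an assumed example; the real list of required permissions is in the manual:

declare
  lv_cnt number;
begin
  -- an assumed example privilege; the real required list is in the manual
  select count(*) into lv_cnt
  from session_privs
  where privilege = 'SELECT ANY DICTIONARY';
  if lv_cnt = 0 then
    dbms_output.put_line('WARNING: scan user is missing SELECT ANY DICTIONARY');
  else
    dbms_output.put_line('OK: scan user can read the dictionary');
  end if;
end;
/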

Going forwards we plan to add many more plugins in many areas. A flavour is here:

  • Compare databases security - we can do this now but a plugin will be better

  • Compare scans - we can do this now but a plugin will be better

  • The new dashboard will be populated by a plugin

  • Convert a project to a plugin template

  • Allow us to implement more hands on tools - Forensics timeline, PL/SQL Code analysis, Audit Trail toolkit interface....

  • Find things, projects, policies, reports....

  • Many more


We have also added a new dashboard to the scanner to review all of the scans of databases that have taken place and all of the databases that have been scanned. This is a one stop shop to see the current average security state of all databases across the enterprise or of a single database. The security is also broken down into 8 distinct security categories and you can view the security progress in each category across the whole estate or for individual databases. You can also view the number of errors broken down across the whole database or by category, and by severity from critical to medium.

The second major improvement to help you manage the security of all databases or individual databases is a new single page score report for a single database. This is a major tool for management to see at a very high level the security of a single database and how that security breaks down across 8 distinct categories of security issues in your databases. The report is also clever as it can suggest where the best improvements to overall database security can be gained by targeting your fixes - all of this in one page!!

The single page score report and the dashboard of course show the same details for each database.

Of course PFCLScan supports creating projects that allow you to scan multiple databases in one scan / project and as such there is also a multi-database version of the single page database security score report.

We have also been adding hundreds of new Oracle security checks to our scanner policies. We are also adding new reports and some new project types - more details to follow.

Due to customer feedback - and whilst our reporting language is very simple and it is easy to create and write your own reports even if you are not a programmer - we will also add the ability to choose some simple customisations on the main database security report.

Version 3.0 will be released in September so if you have a valid license you will get a copy. Version 4.0 is also being developed and the roadmap for version 4.0 will include integration of our audit trail toolkit for activity monitoring of database engine use; this is called PFCLATK and is discussed here. We will also include SQL Server, PostgreSQL, MySQL, MariaDB etc in version 4.0. This will be released in January 2020.

We have also been looking for new resellers for PFCLScan and some are signed up and will be added to the reseller page soon BUT we are interested to talk to any company who would like to represent PFCLScan in their region. Please contact me to discuss details and to become a reseller for us. One of the big advantages of reselling a database vulnerability scanner if you are a service company is that you can easily upsell services such as help with policy, custom reports and of course help with correcting the issues located. It's a win-win for us and also for resellers.

Our license fees for Engagement, Pro and Enterprise licenses have been held at their current levels for a number of years and we will be applying a reasonable price increase in September to coincide with the release of version 3.0. So buy now to get the lower license fees; if you buy a Pro or Enterprise license now you will get the version 3.0 and version 4.0 updates when they are released.

Contact us now to book your online demo via WebEx or to purchase a license.

Thanks for listening!

PFCLATK - Audit Trail Toolkit - Checksums

We have a toolkit called PFCLATK that is used in customer engagements to assist our customers to create comprehensive and useful audit trails for their databases. The toolkit is used in consulting engagements at the moment but will be added into PFCLScan, our vulnerability scanner for Oracle databases, in the next release.

In these audit trail design and implementation engagements we sit down with a customer and help them design useful audit trails for the Oracle database engine (i.e. capture actual Oracle database abuse) and also data access and controls. We initially create a table of all of the events that we would like to capture and classify them by severity level and need for reporting, alerting and escalation. From this we can decide how fast the customer must react to an event becoming true: if we decide to react in real time or semi real time then an alert is raised straight away; if we decide to react once per day then a report is produced or an alert is raised daily. Of course we then map the design document - the events table - into the rules, factors and settings of the PFCLATK policies and deploy rapidly.

This means that we will sell licenses for PFCLATK inside of PFCLScan. Currently we only use it as a consulting tool but from version 3.0 of PFCLScan, PFCLATK will be part of that product. This means that we will add two major interfaces into PFCLScan for PFCLATK. The first is a dashboard to show real-time monitoring of Oracle databases and indications of the alerts and issues located, and the second is an admin interface to allow the deployment of the toolkit and also maintenance and configuration of all of the toolkit's settings.

The power of adding PFCLATK into PFCLScan is that we can now combine activity monitoring with vulnerability and data security analysis. This means that as audit trail events fire and are captured in the audit part - say for 2 of your 60 databases - you will immediately be able to go to the vulnerability section and just click and perform a quick vulnerability audit of all of the databases to see if they have an issue in the area captured in those 2 databases, or perform complete security audits.

Conversely, after a vulnerability scan you will be able to click and immediately deploy an audit trail policy set for the issues located during the vulnerability scans. This gives a strong synergy between looking for security holes in the Oracle database configuration and immediately enabling audit trails to capture abuse of those holes, and also, from the other side, detecting attacks or anomalies and then running scans to see how the rest of the estate fares with the same issues.

The audit toolkit PFCLATK also includes centralised logging and auditing. You can deploy the toolkit to all target databases and then also deploy to a single or multiple central database that automatically collects the audit trails from the target databases into the central database to allow centralised management and reporting.

Anyway, that's a quick intro to PFCLATK and now to the topic of the post. Part of the centralised audit is the PUL mechanism that runs automagically to PUL (yes, one L) the data from each target database to the central database. At a high level the process does these steps for each of the audit tables in the target database (a minimal sketch of the checksum step follows the list):

  1. Get a count of records in the previous hour

  2. Checksum the records of the previous hour using a SHA1 checksum

  3. Copy the records from the target to the central database, using the DBID as a unique ID as we will store records from multiple databases

  4. Checksum the same group of records in the central database

  5. If the checksums match then delete the records from the target database
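To make the checksum step concrete, here is a minimal sketch of one way to SHA1 a result set in PL/SQL. This is not the real pfclatk.checksum code (which is not shown here); it assumes the passed SQL projects a single string column and that the owner has execute on DBMS_CRYPTO:

create or replace function checksum_rows (pv_sql in varchar2)
  return varchar2
is
  lv_cur  sys_refcursor;
  lv_row  varchar2(32767);
  lv_hash raw(20);
begin
  open lv_cur for pv_sql;  -- assumes the SQL projects one string column
  loop
    fetch lv_cur into lv_row;
    exit when lv_cur%notfound;
    -- chain the hash: SHA1(previous hash || current row)
    lv_hash := dbms_crypto.hash(
                 utl_raw.concat(lv_hash,
                   utl_i18n.string_to_raw(lv_row, 'AL32UTF8')),
                 dbms_crypto.hash_sh1);
  end loop;
  close lv_cur;
  return rawtohex(lv_hash);
end;
/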


Because the toolkit deploys the same code to the target and the central database, and because the PUL runs from the central database, we at first ran the checksum in the central database both for the remote records in the audit tables and for the local records in the same central database. We recently became aware of a performance issue and this was tracked down to the fact that the audit records were checksummed across a database link. The checksum PL/SQL function is in the central database and the SQL used to feed into the checksum is run across a link. A bit of testing reveals that if we instead run the checksum in the target database, so that the SQL is also local to the target database, the performance is improved. This is purely because the check done with remote SQL via a link means that all of the records that feed into the checksum are returned over the link. When we instead call the checksum function remotely then only the checksum itself is returned over the link. Two test cases are here.

1 - checksum as now, where the checksum is in the central database and the data to be checksummed is returned with SQL across a link:

declare
  lv_aud$_sql varchar2(32767);
  lv_aud$_ch  varchar2(200);
  lg_date_fmt varchar2(32767) := 'DD-MON-YYYY HH24:MI:SS.FF';
begin
  -- one hour of AUD$ records selected across the ATK_12 database link
  lv_aud$_sql:='select sessionid,entryid,statement,timestamp#,userid,userhost,terminal,action#,'
    ||'returncode,obj$creator,obj$name,auth$privileges,auth$grantee,new$owner,'
    ||'new$name,ses$actions,ses$tid,logoff$lread,logoff$pread,logoff$lwrite,logoff$dead,'
    ||'logoff$time,comment$text,clientid,spare1,spare2,obj$label,ses$label,priv$used,'
    ||'sessioncpu,ntimestamp#,proxy$sid,user$guid,instance#,process#,xid,auditid,'
    ||'scn,dbid,sqlbind,sqltext,obj$edition from aud$@atk_12 '
    ||'where cast(from_tz(ntimestamp#,''00:00'') at local as date) between to_date(''05-JUN-2019 15:00:00'', ''DD-MM-YYYY HH24:MI:SS'') '
    ||'and to_date(''05-JUN-2019 15:59:59'', ''DD-MM-YYYY HH24:MI:SS'') order by sessionid,entryid asc';
  dbms_output.put_line('['||to_char(SYSTIMESTAMP,lg_date_fmt)||'] Before the checksum');
  -- local checksum() call; every row is pulled back over the link to be hashed
  lv_aud$_ch:=pfclatk.checksum(lv_aud$_sql);
  dbms_output.put_line('['||to_char(SYSTIMESTAMP,lg_date_fmt)||'] Checksum local checksum() sql across the link: '||lv_aud$_ch);
end;
/

2 - The checksum is now in the remote database and the SQL is local to the remote database:

declare
  lv_aud$_sql varchar2(32767);
  lv_aud$_ch  varchar2(200);
  lg_date_fmt varchar2(32767) := 'DD-MON-YYYY HH24:MI:SS.FF';
begin
  -- the same hour of AUD$ records, but selected locally in the target
  lv_aud$_sql:='select sessionid,entryid,statement,timestamp#,userid,userhost,terminal,action#,'
    ||'returncode,obj$creator,obj$name,auth$privileges,auth$grantee,new$owner,'
    ||'new$name,ses$actions,ses$tid,logoff$lread,logoff$pread,logoff$lwrite,logoff$dead,'
    ||'logoff$time,comment$text,clientid,spare1,spare2,obj$label,ses$label,priv$used,'
    ||'sessioncpu,ntimestamp#,proxy$sid,user$guid,instance#,process#,xid,auditid,'
    ||'scn,dbid,sqlbind,sqltext,obj$edition from aud$ '
    ||'where cast(from_tz(ntimestamp#,''00:00'') at local as date) between to_date(''05-JUN-2019 15:00:00'', ''DD-MM-YYYY HH24:MI:SS'') '
    ||'and to_date(''05-JUN-2019 15:59:59'', ''DD-MM-YYYY HH24:MI:SS'') order by sessionid,entryid asc';
  dbms_output.put_line('['||to_char(SYSTIMESTAMP,lg_date_fmt)||'] Before the checksum');
  -- remote checksum() call; only the checksum value crosses the link
  lv_aud$_ch:=pfclatk.checksum@atk_12(lv_aud$_sql);
  dbms_output.put_line('['||to_char(SYSTIMESTAMP,lg_date_fmt)||'] Checksum remote checksum() across the link local sql in the ATK: '||lv_aud$_ch);
end;
/

The difference is subtle: the first example has the link in the SQL and the second on the PL/SQL API. The difference, as I said, is that in the second one only the checksum is transmitted across the wire. The performance is better (albeit I don't have many audit records in my database - so it's subtle, but better):

1 - run the remote SQL

SQL> set serveroutput on
SQL> @run_chk_remote
[06-JUN-2019 09:10:52.356646000] Before the checksum
[06-JUN-2019 09:10:52.681866000] Checksum local checksum() sql across the link: 1628EC464B590F82FE98A7588492FC0D54145695

PL/SQL procedure successfully completed.

SQL>

2 - remote PL/SQL

SQL> @run_chk_rem_chk.sql
[06-JUN-2019 09:12:00.175714000] Before the checksum
[06-JUN-2019 09:12:00.343161000] Checksum remote checksum() across the link local sql in the ATK: 1628EC464B590F82FE98A7588492FC0D54145695

PL/SQL procedure successfully completed.

SQL>

So for the remote SQL it's 0.33 of a second to process and for the remote checksum it's 0.17 of a second. With much bigger data sets the savings are much bigger. We still need to process all the records in the remote checksum case - the SQL is local to the checksum and the records still feed into it - but we don't feed the data across the wire anymore.

Lesson learned: be aware of how the data flows so that we can reduce the work. The toolkit has now been improved to do the checksum remotely for each target database.

If anyone is interested in us helping design and implement an audit trail for them very quickly then please let me know via our contact details.

3200 Clever hackers are in my PC; wow!!

Hackers are clever people; they must be to hack other people, take over their private data and steal identities and money. But I have to draw the line at the number of hackers who claim to be in my PC: since October 2018 it is now over 3200 of them. They claim to have videos of me watching websites I didn't visit. My main PC doesn't even have a webcam and my laptop usually has the webcam covered with a little blocker window, so they must be fantastic hackers to be able to video with no webcam or with a cover over it. They claim to have videos of the victim and that they have installed backdoors in my PC and have access to all social media and email accounts, and if you don't pay them large amounts of money via bitcoin they will send out the video to your contacts.

I have collected 3200 of these scam emails from October 2018 till now, mid May 2019 - that's 8 months, so an average of 14 hackers a day in my PC. It's madness; these top hackers must actually struggle not to fall over each other when they exploit my PC, and each of them has left a backdoor so there must now be 3200 backdoors in my PC. The emails are a joke and some are so badly written that the victim must struggle to actually understand what's being said.

In the beginning most of these emails included a password. These people claim that they know my email password but in fact none of these passwords were ever for my email; they are in fact passwords used on various websites that must have been hacked over the years and published on various sites that list people's passwords. None of the passwords shown are current for me; they are actually from many years ago, some from 20 years ago.

The first emails were a little more convincing, actually sent to pete at petefinnigan dot com, but after a short while the attackers could not be bothered to even send a password that I might recognise. Also they started to send emails to random strings at my domain such as enssw at petefinnigan dot com. Some sent the sender address as the password and claimed that this shows how fantastic a hacker they are. Some simply sent from random, most likely spoofed, addresses. In the beginning they tried to be more professional and send from my own email address (spoofed of course) and include the password to show me how fantastic they are and how they must have my password to my email - actually it was a password from a website from 20 years ago, not my email. I now even receive these same emails in German, Chinese and Russian.

Some have written that they have looked into the bitcoin addresses and people do pay these emails. Don't pay; they are not real; they do not have videos of you. Based on fear, some people do pay them. It is probably a lucrative business generating thousands (millions) of emails and sending them out; a scatter gun approach.

The most interesting one for me was a password quoted in one of these emails that was more recent - 7 years ago. I used that password only once; all my passwords are unique anyway. I knew I used it to buy one item on a website in the UK and never visited that website again. This password only showed up 4 times whilst others showed up hundreds and hundreds of times. The PC that this password was used on once is long since gone. It was never used on any current PC and the password can only ever have come from an attack on that company's website. I contacted them and let them know; they said they were looking into it and asked their web host to check but would not admit (or could not find) a breach. It has to have come from there as the PC that I used went 6 years ago and the password was used once.

This is an area of interest to me. I wrote a book published by Apress on Oracle incident response and forensics last year and I have been involved in the aftermath of many breaches of Oracle databases and also in helping secure those databases. For me it's always interesting to find evidence of a breach in an Oracle database: how the attacker got in, who they were connected as, what they saw and did, what they could have done with the rights they had and more skills, and most importantly what was the range of the attack - when did it start and when did it end. What is usually interesting is that the attacks usually lasted much longer than the client thought; they may have believed that the breach started a week or so ago but it started a year ago. It's also interesting to see patterns and evidence of multiple attackers in an Oracle database that has been breached; I have seen this many times and it has parallels with these ridiculous emails - if we were to believe them then I had 3200 attackers in my PC. But in real life in Oracle databases it's often possible to see that there have indeed been multiple attackers - we can see the style and the patterns to establish this.

I have also been involved in performing security audits of Oracle databases where the client was not aware they had been breached but I found evidence of breaches in the near and sometimes distant past.

So, 3200 hackers are not in my PC - I would never have any resources left to use - and there are not 3200 backdoors, as the attackers would fall over each other installing 3200 backdoors in one PC; madness. But there are parallels to Oracle database breaches: evidence that points to the source of the leak - i.e. this site that I used once with one password. GDPR comes into play as well here!

DBID Is Not Definitive When Used As An Identifier

Our audit trail toolkit PFCLATK has some brief documentation on the page that's linked here but in summary it is a comprehensive toolkit that allows quick and easy deployment of an audit trail into a customer's database. We are currently selling this as software as a service (SaaS) and the process is that we help customers design an audit trail policy; the main part of that policy is a table of audit events that are useful to capture what the customer needs to know about abuse of their Oracle database engines.

This could include users sharing passwords, access to critical resources, 0-day attacks, escalation of duties and more. This document forms the basis for the core events and as part of that table we assess how and when events should be reacted to and which audit events warrant an alert being raised. The next step is to implement that documented policy as events in the database and this is where PFCLATK comes in. We use this pre-built toolkit to easily map documented audit events into events in the database to audit actual user actions on real systems and data. The toolkit includes policy based events and also triggered alerts, so that for a particular event we can decide when we are alerted - immediately, or maybe once a day, or maybe never, in which case it just falls to collection of raw audit data.

One other big element of the PFCLATK toolkit is the fact that it's very quick and easy to deploy this audit trail toolkit to every Oracle database in the organisation and then the toolkit manages centralised audit to a single (or multiple) database. The same toolkit is installed to each target database (ATK) and also to the central database (ATC); then a simple setup call is made from the central (ATC) database to each target database (ATK) and automatically all of the audit trails and alerts generated in each target (ATK) are transported to the central database. This means centralised alerts and reporting are easy to manage.

Currently this is offered as software as a service: we charge the client consultancy to help design and write the policy and then implement the events into all databases and help with test and go live. We provide the toolkit as part of this service. Going forwards we will also license the toolkit as a software product without the consulting by us, and we are developing a new GUI dashboard and admin console for management and reporting.

Anyway, to the point of this post. As part of the centralised audit trails we copy the audit trails once per hour automatically from each database to the central storage, and we need a way to identify each target database's audit trails in the central database. In older databases such as 11.2.0.4 it's simple: we use the DBID column of V$DATABASE and that's fine, but once we start to talk about the multi-tenant architecture and CDBs and PDBs it unfortunately doesn't work. If we look at an 18c database and log into the CDB then we see:

Peters-MacBook-Pro:____atk pxf$ sqlplus sys/oracle1@//192.168.56.78:1523/xe as sysdba

SQL*Plus: Release 11.2.0.3.0 Production on Tue Mar 12 20:16:04 2019

Copyright (c) 1982, 2012, Oracle. All rights reserved.


Connected to:
Oracle Database 18c Express Edition Release 18.0.0.0.0 - Production

SQL> select sys_context('userenv','con_name') from dual;

SYS_CONTEXT('USERENV','CON_NAME')
--------------------------------------------------------------------------------
CDB$ROOT

SQL> select dbid from v$database;

DBID
----------
2883395441

SQL>

And then go to the PDB and do the same:

SQL> alter session set container=XEPDB1;

Session altered.

SQL> select sys_context('userenv','con_name') from dual;

SYS_CONTEXT('USERENV','CON_NAME')
--------------------------------------------------------------------------------
XEPDB1

SQL> select dbid from v$database;

DBID
----------
2883395441

SQL>

Hmm, it's the same DBID in the root container as in each PDB. This means that we can use DBID in an 11g or earlier database (unfortunately some clients don't get to run everything on the latest versions) to identify the source database, and indeed we can do the same in 12c and above with a legacy installation (whilst we can still create legacy installs), but the problem lies in the pluggable architecture. In this case we also need to look at the CON_DBID column of V$DATABASE so that we can detect if the database is a CDB and if so replace the DBID as the identifier with the CON_DBID. So for the PDB in my 18c database:

SQL> select sys_context('userenv','con_name') from dual;

SYS_CONTEXT('USERENV','CON_NAME')
--------------------------------------------------------------------------------
XEPDB1

SQL> select dbid,con_dbid from v$database;

DBID CON_DBID
---------- ----------
2883395441 1392023073

SQL>

So now in the PDB we have a different CON_DBID to the DBID and in the root container we have:

SQL> alter session set container=CDB$ROOT;

Session altered.

SQL> select dbid,con_dbid from v$database;

DBID CON_DBID
---------- ----------
2883395441 2883395441

SQL>

The CDB has the same CON_DBID as the DBID. So for identification purposes we must resort to the CON_DBID, not the DBID, in a pluggable database. PFCLATK easily checks the database version, whether it's a CDB or not and whether we are in the root container or a PDB, and uses the correct value.
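As an illustration only - not the PFCLATK source - a decision along these lines could look like this; note that V$DATABASE.CDB only exists from 12c, so on 11g and earlier this block would not compile and the real version check has to happen first:

declare
  lv_cdb varchar2(3);
  lv_id  number;
begin
  select cdb into lv_cdb from v$database;
  if lv_cdb = 'YES' then
    -- multitenant: CON_DBID is distinct per PDB (and equals DBID in the root)
    select con_dbid into lv_id from v$database;
  else
    -- legacy install: DBID is fine as the unique identifier
    select dbid into lv_id from v$database;
  end if;
  dbms_output.put_line('database identifier: '||lv_id);
end;
/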

It would be nice if Oracle would use the DBID as a unique identifier across PDB / CDB so that it's consistent with legacy installs as well; but they don't..:-(

If anyone would like more details of PFCLATK and our audit trail service then please contact me.

Hardening and Securing The Oracle Database Training in London

I posted last week that I will teach my two day class "How to Perform a Security Audit of an Oracle Database" with Oracle University in London on the 29th and 30th April 2019.

We have now added a third day on the 1st May 2019 for my one day class "Hardening and Securing Oracle". This is a great fast paced one day class. In this class, at a high level, I start by showing why my Oracle database and sample application are vulnerable to attack and data breach by hacking them as various different actors (web user, database user with just CREATE SESSION, DBA user and more); then we analyse why the attacks work and why the data is insecure. The biggest part of the day is then taken up by walking through all of the core areas where we can improve database security: patching, hardening, operating system security related to the database, user security, data access controls, context based security and audit trails.

During the day I show demos and lock down and secure my sample database and applications. At the end of the day we hack it again and show that it's not possible to steal the data. Of course in real life you would not hack your own database to prove it's insecure and I would not do that at client sites, but it's a good graphical way to emphasise the reality of the current security of the data and the security of the database. Also you would not simply dive in and start fixing things without first planning and documenting and creating a security policy for all of your Oracle databases (we cover that) BUT I do dive in and secure in this class to show what can be achieved in a very short time in terms of securing data in your own Oracle databases.

This is a one day class and can be attended on its own BUT it is intended to be taught along with the two day class "How to perform a security audit of an Oracle database" so attending all three days is a great idea.

To register for this one day class with Oracle University then please click this link. The details for the two day class are in the link at the start of this post.

I look forwards to meeting you all in London in 2019.

Stop The DBA Reading Data in Subtle Ways

The Problem:

Dan asked me a question about whether the DBA can be stopped from accessing views such as V$SQL or V$SQL_BIND_CAPTURE with Database Vault because these views can be used to read data from the SGA.

I have covered the issue of reading data from outside of the planned data model - i.e. access outside of SELECT, INSERT, UPDATE and DELETE grants on data tables or via PL/SQL execute privileges - many, many times in the past on this blog and in my trainings and consulting.

A blog entry here called Accessing data outside of the data model is a good example. I cover this very issue in my two day class "How to perform a security audit of an Oracle Database" and also in "Secure coding in PL/SQL" from the viewpoint of stealing data, and I also cover the SGA and binds from the other side - needing that information to track a hacker - in my "Oracle Incident response and forensics" class.

The Problem Stated Even More:

Dan's point that a DBA can read data by simply sampling SQL statements and PL/SQL statements or bind data is very valid and indeed this is the problem he wants to solve. If a DBA has access to the database then they need to do DBA-like work (we haven't defined what that is exactly) but of course the DBA should not be able to see data; why in this data breach age should the DBA have carte blanche to read data alongside his normal job? Business people are limited in what data they can see so why not limit the DBA as well.
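To make the problem concrete, this is the kind of thing Dan is worried about - nothing exotic, just a SELECT against the bind capture view, where any captured bind values (which may be salaries, card numbers and so on) come back in the clear:

select sql_id, name, value_string
from   v$sql_bind_capture
where  value_string is not null
order  by last_captured desc;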

So the first thought is to limit access to all tables and views and code so that the DBA cannot select data from business tables or views. The next thought is to limit PL/SQL procedures for the same reason so that they cannot run business code and indirectly access data. The permissions on data are not very granular; they are at the table, view and procedure level. This is why Oracle brought out tools such as Virtual Private Database to limit data by row or column or combinations, if you care to do the setup and write the rules. It is possible to also do a lot with the old fashioned methods of using views and triggers and PL/SQL code to limit and enforce rules, but these can be bypassed if you can access the tables directly for select and not via the views that have had the security implemented in them. Of course Oracle also has Label Security, TSDP etc.
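As a minimal sketch of the VPD route just mentioned - the schema, table and account names here are hypothetical, not a recommendation - a policy function returns a predicate that hides every row from named DBA accounts and is attached with DBMS_RLS:

create or replace function app_sec.no_dba_rows (
  pv_schema in varchar2,
  pv_object in varchar2) return varchar2
is
begin
  -- hide all rows from the named DBA accounts (hypothetical names)
  if sys_context('userenv','session_user') in ('DBA_JONES','DBA_SMITH') then
    return '1=0';
  end if;
  return null; -- everyone else is unrestricted
end;
/

begin
  dbms_rls.add_policy(
    object_schema   => 'APP',
    object_name     => 'CUSTOMERS',
    policy_name     => 'NO_DBA_ROWS',
    function_schema => 'APP_SEC',
    policy_function => 'NO_DBA_ROWS',
    statement_types => 'SELECT');
end;
/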

Then there is Database Vault, which can be used to protect data, commands and roles via realms, command rules and secure application roles. The de-facto position is to block the so called SYSTEM ANY rights and this is the next logical thing to mention for DBAs: they tend to get the DBA role, the IMP_FULL_DATABASE or DATAPUMP versions or the CATALOG roles or, worse, the SYSDBA privilege. Included in all of this mess are usually most SYSTEM ANY rights. This includes SELECT ANY TABLE, so the DBA can bypass the direct grants that he no longer has against business data or business code and access tables, views and code again. Now Database Vault out of the box stops the SYSTEM ANY rights when a realm is made around data. This is fine BUT as I said in my presentation "You don't have database vault; what can you do instead?" a simpler fix is possible without buying Database Vault. That is: do not have the SYSTEM ANY grants in the first place.

Why does a DBA need SYSTEM ANY grants? In general for three main reasons:


  1. Access to data to troubleshoot application issues: this is the fundamental issue we are trying to resolve. The DBA should not have these ANY rights and if the DBA needs to troubleshoot then create a role with the specific direct grants needed and grant it only when completing that task.

  2. Access to deploy code/applications: this is normally because the DBA is also the release team and they then have the choice of connecting to the schema and running code that says create table ... or connecting as a DBA with CREATE ANY like rights and running create schema.table.... There is no need for the DBA to connect directly to the schema or to have CREATE ANY rights. The better approach is to use proxy: connect as the DBA account and proxy to the schema (see the sketch after this list). In this way we can also have a logon trigger that executes AUDIT ALL IN SESSION CURRENT and AUDIT ALL STATEMENTS IN SESSION CURRENT and this will create a log of all actions by the DBA as a proxy to the schema during a release.

  3. Create other objects or changes: for instance create a directory object in another schema. The approach here is to create these objects once; a good approach is to isolate these rights into a shared security manager and access this via proxy - so no direct logins - and audit in the same way.
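A quick sketch of the proxy approach from point 2 (account names hypothetical): the DBA authenticates as himself but the session runs as the schema owner, so no schema password is shared and no CREATE ANY rights are needed:

-- one-off setup: allow the DBA account to proxy to the schema
alter user app_owner grant connect through dba_jones;

-- the DBA then connects with his own password:
--   sqlplus dba_jones[app_owner]/dba_jones_password@mydb
-- the session runs as APP_OWNER but the proxy user is recorded,
-- so a logon trigger can enable auditing for the release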



So already it is complex and we have not talked about sneakily accessing data via lots of other means:


  • Select code and the data embedded in it from the SGA; i.e. V$SQL, V$SQLAREA et al, plus all of the other methods such as via statspack or tuning pack or Quest products....the list of possible ways is very big

  • Use tools such as Log Miner, CDC, Streams, Goldengate to extract data from redo or archive logs

  • Dump data with ALTER SESSION and ALTER SYSTEM in many formats and ways

  • Access data via export files

  • Access data files

  • The list goes on to many many ways to get at data without selecting it from tables



There are then more subtle ways to read data outside of the areas we have just discussed:


  • Utilise ALTER USER, change the SYS password and connect as SYS and read data!

  • Do SQL Injection as a database user on vulnerable business logic written in PL/SQL and then use that avenue to read data!

  • Utilise CREATE ANY PROCEDURE and overwrite the F procedure of APEX (it's usually installed, in my experience), steal the ALTER USER privilege of APEX, change the SYS password, log in as SYS and read the data!

  • There are many, many more ways...



The Real Problem; What is a DBA and what can he do:

All of this leads us to the conclusion that we need to solve two problems:


  1. What is the DBA and what should he be allowed to do (in the database or on the server)?

  2. What data are we protecting and just how many ways are there to access it?



So Dan's question of how to protect access to V$SQL and V$SQL_BIND_CAPTURE is a good one but it must be extended to include all possible ways that data can be read outside of normal select statements. Firstly it must be defined what the DBA is, what access he should have, for what purposes and when. In my view all DBAs should have their own accounts in the database and they must not use any built-in Oracle roles such as DBA or the IMPORT roles or grants of all rights or SYSDBA. A DBA account must be designed for purpose, and to design for purpose we must understand what the DBA is allowed to do; then we can start to make plans to limit access to only that which is necessary.

At a simple level that would mean no direct grants on tables, views, procedures etc, and no SYSTEM ANY rights including DML and DDL rights. Also no rights to bypass the restrictions in place; i.e. Dan's example of no access to V$SQL, but also no access to grant roles, no access to ALTER USER and many more since, as we have stated, these could be used to bypass the rules already in place.

This is a contradiction waiting to happen though.

As soon as we start to limit rights that clearly also have DBA-like needs then we have problems. ALTER USER is a good example. A DBA would need access to this BUT we would not want the DBA accessing SYS or SYSTEM on a daily basis, so we would need to also add a DDL trigger that stops the DBA from changing the SYS or SYSTEM password. This then brings in the next layer. We also don't want the DBA to be able to disable the DDL trigger, so he should not have ALTER TRIGGER; and because our DDL trigger must be a database wide system trigger he would need ADMINISTER DATABASE TRIGGER to disable it, so we don't grant that to the DBA either. But also the parameter _system_trig_enabled could be turned off, preventing system triggers from firing, so the DBA should not have ALTER SYSTEM or access to stop and start the database and edit the parameter file. Stopping him having ALTER SYSTEM could be an issue; stopping him getting to the box on a daily basis to edit files can be managed.
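A minimal sketch of such a DDL trigger - the name is mine, and a real deployment needs more thought about who is allowed through and how the trigger itself is protected:

create or replace trigger block_sys_alter
before alter on database
begin
  -- stop anyone except SYS issuing ALTER USER against SYS or SYSTEM
  if ora_dict_obj_type = 'USER'
     and ora_dict_obj_name in ('SYS','SYSTEM')
     and ora_login_user <> 'SYS' then
    raise_application_error(-20001,
      'changing the SYS or SYSTEM accounts is not allowed');
  end if;
end;
/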

12c brought lockdown profiles and these are a fantastic tool to limit things like ALTER SYSTEM in a fine grained way. So we could stop him turning on trace and stop him setting the _system_trig_enabled parameter.
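For example, a hedged sketch (12.2 onwards; the profile name is hypothetical) of a lockdown profile that blocks exactly those two avenues in a PDB:

create lockdown profile limit_dba;
-- block setting the underscore parameter that disables system triggers
alter lockdown profile limit_dba disable statement = ('ALTER SYSTEM')
  clause = ('SET') option = ('_SYSTEM_TRIG_ENABLED');
-- block turning on session trace
alter lockdown profile limit_dba disable statement = ('ALTER SESSION')
  clause = ('SET') option = ('SQL_TRACE');
-- then activate it for the PDB
alter system set pdb_lockdown = 'LIMIT_DBA';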

We can also use layered automated security; we can test all these key elements in a regular way externally so that we can determine if the security of the database has been compromised. PFCLScan is a good tool to allow you to monitor this. A video showing a quick demo of PFCLScan is here.

We could even go as far as automatically re-applying the security if it is found to have been reduced. This can be tricky and needs to be managed properly but it is the start of the path towards adaptive security, and we can also do adaptive auditing; i.e. we can have a database go from DEFCON 5 to DEFCON 1.

Other Issues:

So far it seems complex and it gets worse. Since 12c we also have the problem of multitenant databases - I say problem purely in the sense that we now need to manage at least two databases, the CDB and the PDB; where it works out well is when we have multiple PDBs in a single CDB; then the administration can be centralised. This brings in the issue of COMMON or LOCAL users, privileges and profiles, and the side issue of COMMON grants to LOCAL and LOCAL grants to COMMON.

In this scenario of stopping the DBA team looking at data, in one sense having a centralised DBA (a COMMON user) makes sense and indeed Oracle provides some tools to assist in fine grained security around this. So imagine that the COMMON DBA only accesses the CDB and looks at data in V$SQL from the CDB; then we can use CONTAINER_DATA to limit access to individual PDBs and also views within those. So we can stop the DBA from viewing the PDBs' contents of V$SQL from the CDB, BUT we have the secondary problem that the DBA is a COMMON user and we would then need to stop him accessing the PDB directly. This could be achieved via lack of CREATE SESSION for instance, BUT as a DBA he may need to be in the PDB at some point to do something.
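A hedged illustration of the CONTAINER_DATA idea, run in CDB$ROOT with a hypothetical common user C##DBA: restrict the rows that user sees through V$SQL to the root container only, so statements (and the data in them) from the PDBs are not visible:

alter user c##dba set container_data = (cdb$root)
  for sys.v_$sql container = current;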

Clearly if a DBA has his own account and it's not SYSDBA we can probably limit his access to data with some careful design and also layered security such as DDL triggers - but not limited to those - and maybe tools like Database Vault if you have licensed it. Also we must make good use of audit trails to capture all actions performed by the DBA. We must also accept that in some circumstances he/she may need to be logged in as SYSDBA, BUT not all day every day; and we should audit that access and those actions and control them with proper processes.

Securing Oracle properly and limiting the access of a DBA can also use tools like VPD, OLS and of course Database Vault, but we can do a lot with normal rights and ideas and audit trails, and of course management of people and processes and knowing what people are supposed to do and not do.

We need to analyse the privileges of all users and they must be least rights for all jobs, and that includes the DBA. Oracle's Privilege Analysis is now free for 18c and above and back ported to 12c so that's a useful tool. You can also use all of my scripts, who_has_priv.sql, who_can_access.sql and find_all_privs.sql, available on my Oracle security tools page.
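As a taster, a hedged sketch of Privilege Analysis (the capture name is mine): capture database-wide privilege use for a while, then generate and review the results:

begin
  -- start capturing which privileges actually get used, database wide
  dbms_privilege_capture.create_capture(
    name => 'DBA_USAGE',
    type => dbms_privilege_capture.g_database);
  dbms_privilege_capture.enable_capture('DBA_USAGE');
end;
/
-- run the normal workload for a representative period, then:
begin
  dbms_privilege_capture.disable_capture('DBA_USAGE');
  dbms_privilege_capture.generate_result('DBA_USAGE');
end;
/
-- used and unused privileges are then visible in views such as
-- DBA_USED_PRIVS and DBA_UNUSED_PRIVS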

Limit core grants such as Oracle roles, ANY rights, SELECT ANY DICTIONARY and more. We clearly need lots of thought, careful design and layered security. I did a post a while ago about super locking an Oracle database and this post enhances that post in part.

I will return to this discussion as time goes on here in this blog as I have just created a new 18c XE database to form the new basis of my training sample database - most of the cost options such as Database Vault are available for free in 18c XE - and I have started to create a simple application (a CRM) as a sample for my training classes. I am also creating a new two or possibly three day class where I want to super lock a database as a step by step process and show what's possible in a short period of time utilising many, many layers and tools and techniques in an Oracle database.

This would be great as a follow along (hands on) class as I plan to use XE so everyone can download and use it.

More later on this; if anyone is interested in registering an interest in attending the first sitting of this class later this year, contact me via email or my social media accounts.

Oracle Security Training in London with Oracle University

I have just agreed some training dates with Oracle University in London and I will be teaching my very popular two day class "How to Perform a security audit of an Oracle database" on the 29th and 30th April 2019. This class teaches you how to secure data in all of your Oracle databases. The class covers Oracle 11g, 12.1, 12.2 and 18c. The focus of the class is to look at why data is insecure and how people can steal and hack it, and then to use the vehicle of an audit of an Oracle database to walk through the process of assessing all aspects of an Oracle database for security issues, discussing solutions as we go.

The class includes a download of a lot of real world useful tools and scripts that you can take away and use in your own security assessments of your own Oracle databases.

I made a video of the outline of the class a while ago and this is a good 5 minute discussion of what to expect in the class.

The details from Oracle to enable you to register for this class are here.

I will also be teaching my one day class "Hardening and securing Oracle" on the 1st May, also at the same venue with Oracle in London, so it will be possible to take both classes as a 3 day event. I will have the registration details for the 3rd day from Oracle soon.

I look forwards to meeting everyone in London in April and being able to teach and discuss securing data in your Oracle databases.