AI for Oracle Security
The two most obvious uses would be to analyse audit data to find things we would not spot ourselves, and to use AI to check configurations. I also talked about why the rise of AI happened in the last few years: the hardware is now available to build large neural networks (LLMs) in memory and process them with matrix calculations on graphics cards, and the wide availability of training data - from search engine indexes, books and many other sources - arrived at around the same time.
My view is also that these potential learning data sources are being corrupted by the web-based get-rich-quick crowd, who use AI to generate huge amounts of content for websites and social media. We can all spot fake posts, but wrong or inaccurate facts within posts are harder to spot. If the AI training data gets corrupted, the AI models learn from it, and people who do not know better then use that AI output to generate more bad data, where are we left in the end?
The AI could be trained on inaccurate data in the first place, and the data used to train AI can be corrupted by spam and inaccurate data that was itself generated by AI, devaluing the content generated by prompts. It is like a self-fulfilling prophecy in reverse.
Another aspect is copyright. I see emails, adverts and articles everywhere (again often driven by get-rich-quick schemes) about giving a generative AI an idea and asking it to make an app - including hosting, deploying the full stack and payments - generate a website and more. In this case it is less of an issue for me, as the code would never become public - maybe, maybe not.
But if you use a code assistant in your development environment and then deploy that code as part of a commercial application, an Oracle database or anything else, then what is the source - the real source - of that code generated by AI, and who owns it? Did the AI learning phases check the licence of every code snippet that is later used to generate new code? I think this is something managers need to consider before their commercial code base is updated with generated code. Yes, it makes simple things easy and quick and saves research time and document discovery tasks, BUT is it legally the company's code when it was not written keyword for keyword by a developer?
If you generate code for something, it often does not compile. As time goes on and the generated code gets better, developers could be replaced by generative AI (let me be very clear, I do not agree with this), BUT the skills needed to work out why something does not compile, or why there is a logical bug in compiled code, will be lost. Suddenly your commercial application and database are developed and supported by AI, but the skill base is gone, and when something critical happens you cannot support it or fix it.
I think that AI that uses specific, reliable input data via RAG (retrieval-augmented generation) may be the best way to assist in finding and searching that data and compiling answers, BUT it depends on the quality of the input data. This can be FAQs, manuals, previous tickets and bugs and more. The creation of this data cannot be by generative AI if it does not already exist. For instance, imagine we create a database, queries, tables, PL/SQL code and more, and build a finance, CRM or ERP system on top of Oracle. Generative AI cannot document it and teach itself; it needs input from designers, developers and more. Yes, I know AI could mine and learn from the source code, designs and so on, but it is unlikely to test the system, use it and create data that can answer any question. Well, maybe we could do exactly that, and if the code generation is exclusively from our own code (we own the copyright) that can solve the previous issue.
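To make the RAG idea concrete, here is a minimal sketch (not from the post) of the retrieve-then-prompt loop over a tiny curated knowledge base of FAQs, manual extracts and past tickets. The document names, their contents and the word-overlap scoring are all illustrative assumptions; a real system would use embeddings for retrieval and an LLM to answer from the composed prompt.

```python
# Minimal RAG sketch: rank curated documents against a question,
# then build the context-plus-question prompt an LLM would receive.
# All documents and the scoring scheme are illustrative assumptions.

documents = {
    "faq-audit": "Enable unified auditing and create audit policies for key actions.",
    "manual-profiles": "Password profiles limit FAILED_LOGIN_ATTEMPTS and PASSWORD_LIFE_TIME.",
    "ticket-1042": "Account locked after repeated failed logins from an application server.",
}

def retrieve(question: str, docs: dict, top_n: int = 2) -> list:
    """Rank documents by simple word overlap with the question."""
    q_words = set(question.lower().split())
    scored = sorted(
        ((len(q_words & set(text.lower().split())), name)
         for name, text in docs.items()),
        reverse=True,
    )
    return [name for score, name in scored[:top_n] if score > 0]

def build_prompt(question: str, docs: dict) -> str:
    """Compose the prompt: retrieved context first, then the question."""
    context = "\n".join(docs[name] for name in retrieve(question, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

prompt = build_prompt("why was the account locked after failed logins", documents)
print(prompt)
```

The key point the sketch illustrates is that the answer quality is bounded by the curated documents: if the ticket or FAQ text is wrong or missing, no amount of generation fixes it.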
In terms of using AI to mine data, this is clearer. We can point audit data at AI and ask it general or specific questions to find anomalies, edge cases and potential violations in our audit trail. This can work as a viable assist to security.
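As a hedged illustration of the kind of anomaly-spotting an AI assist could automate, here is a crude frequency-based sketch over hypothetical audit records. The field names (user, action, hour) and the two rules (rare user/action pairs, activity outside working hours) are assumptions for the example only; a real feed would come from something like the unified audit trail, and a model would replace these hand-written rules.

```python
from collections import Counter

# Sketch: flag audit rows that are rare or happen out of hours.
# Record fields and thresholds are illustrative assumptions.

audit_rows = [
    {"user": "APP", "action": "SELECT", "hour": 10},
    {"user": "APP", "action": "SELECT", "hour": 11},
    {"user": "APP", "action": "SELECT", "hour": 14},
    {"user": "APP", "action": "SELECT", "hour": 9},
    {"user": "SYS", "action": "ALTER USER", "hour": 3},  # rare, at 03:00
]

def find_anomalies(rows, min_count=2, work_hours=range(8, 18)):
    """Flag rare (user, action) pairs and activity outside working hours."""
    counts = Counter((r["user"], r["action"]) for r in rows)
    flagged = []
    for r in rows:
        reasons = []
        if counts[(r["user"], r["action"])] < min_count:
            reasons.append("rare user/action combination")
        if r["hour"] not in work_hours:
            reasons.append("outside working hours")
        if reasons:
            flagged.append((r, reasons))
    return flagged

for row, reasons in find_anomalies(audit_rows):
    print(row, "->", ", ".join(reasons))
```

Even this toy version shows the shape of the task: the value is in asking the questions ("what is unusual here?") against good, complete audit data, which is exactly where an AI assist earns its keep.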
I have watched and read quite a lot on AI recently and there are some interesting discussions going on. For instance, Steven Bartlett interviewed Dr. Roman Yampolskiy, who made a number of statements that I did not agree with. He said that by 2027/2030/2045, 99% of people will be redundant and only 5 jobs will exist, because jobs will be replaced by AI LLMs. This does not make sense. Yes, in principle this could be a worst-case Armageddon, and companies will want to replace people with generative AI, but saying 99% of people will be replaced ignores the practicalities. If everything went to AI, where are all the servers, training systems, bots and so on hosted? How do all companies transition to this quickly, and how does AI capacity increase to cope with the demand of 99% of jobs? He also stated that cars and lorries will be replaced by self-driving versions. Who is going to make all these cars and lorries very quickly and replace the people driving them? The scale of manufacturing required is immense.
Imagine a world in a few years where millions of lorries and cars are self-driving with no one in them. How is that going to work? Imagine 99% of people are redundant; they will no longer need shopping delivered from supermarkets - plenty of time to walk and buy - no more online retailers, as people have no money to buy, no more takeaway deliveries... Would the takeaways need to cook burgers automatically and send them down a chute to a self-driving car waiting outside, with the poor redundant people having to go outside their homes to collect the burger from the same self-driving cars? Or do these drive up to the house and an accurate AI slingshot sends the food to the letter box - a bit like the reverse of the trains in the past picking up mail bags from hangers as they passed without stopping?
Others are saying that the AI bubble might burst like Web 1.0 and the dot-com crash. I don't know; even if it did, there would still be AI at some level.
What is needed is AI models built and trained for hundreds of dollars on small devices, not in massive data centre installations costing billions or hundreds of billions. Maybe we ought to have specialised AI models and even create AI people: tens, hundreds or thousands of models, all taught slightly differently so they are like people!! We could have an AI Dinesh, an AI Gilfoyle, an AI Pete doing Oracle security, AI Oracle tuning, AI... the first two are from the excellent Silicon Valley comedy.
Yes, AI can be good, can speed things up and reduce costs, but there is a big risk that its use is being imagined too far ahead and far too end-of-days.
For Oracle security: yes, if we have good data that can be learned from, and good, clean sources to use in other tasks, then AI can work.
#oracleace #sym_42 #ukoug #ai #UKOUGDiscover25 #OracleCommunity #JoelKallmanDay #oracle #database #AI