Oracle DB 23ai is available for Exadata and I've been spending a lot of time working on building some demos in my lab environment. Below is the architecture.
To help you get started, below are the preparation steps I completed to create this demo.
Download and install DB 23ai (the latest version, which was 23.6 when I created my demo).
Install APEX within the database. Most existing demos use APEX, and it makes it easy to build a simple application. Here is a link to a blog I used that explains the install process and the ORDS setup for the web server.
Optional - Install the embedding model in your database to convert text to its vector representation. Here is a link to how to do this. You can also use an external model with Ollama.
Optional - Install DBMS_CLOUD to access object storage. Most demos read their documents from object storage. Here is a link to my blog on how to install it. I actually used ZFS for my object storage after installing DBMS_CLOUD. You can use OCI, or even a pre-authenticated request (PAR) against any object storage.
Install Ollama. Ollama is used to host the LLM, and within Ollama you can download any open-source model. For my demo, I downloaded and installed llama3.2.
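The Ollama setup can be sketched in a few commands. This is a setup fragment, not part of the original demo instructions; the install URL and model tag are the standard ones at the time of writing, so verify them against the Ollama documentation:

```shell
# Install Ollama on Linux using the official install script
curl -fsSL https://ollama.com/install.sh | sh

# Pull the model used in this demo
ollama pull llama3.2

# Quick smoke test from the command line
ollama run llama3.2 "Summarize what a RAG architecture is in one sentence."

# Ollama listens on port 11434 by default; the database will call this endpoint
curl http://localhost:11434/api/tags
```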
The demo I started with was the Texas Legislation demo, which can be found here. The link points to a video showing the demo, and the video description contains a link to the code and instructions, located on GitHub, for recreating the demo in your environment.
The majority of the application is written in APEX and can be downloaded using the instructions on GitHub, which can be found here.
The major changes I had to make to get this demo working on-premises had to do with using Ollama rather than OCI for the LLM.
The biggest challenge was the LLM calls. The embedding and document search used the same DBMS_VECTOR calls regardless of the model. The demo, however, uses DBMS_CLOUD.SEND_REQUEST, which does not support Ollama.
I changed the functions to call DBMS_VECTOR_CHAIN.UTL_TO_GENERATE_TEXT instead, and I built a "prompt" instead of a message. This is outlined below.
Call LLM with chat history and results
  Demo request:   dbms_cloud.send_request, passing a message ("Question: ...")
  Ollama request: DBMS_VECTOR_CHAIN.UTL_TO_GENERATE_TEXT, passing a prompt ("Question: ... Chat History: ... Context: ...")
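The replacement call can be sketched roughly as follows. This is a minimal sketch, assuming Ollama is running locally on its default port (11434) with llama3.2 pulled; the variable names and sample values are illustrative, and the parameter JSON follows the documented DBMS_VECTOR_CHAIN Ollama provider format:

```sql
DECLARE
  l_question CLOB := 'What bills were filed about water rights?';
  l_history  CLOB := '...';   -- prior chat turns, concatenated
  l_context  CLOB := '...';   -- document chunks returned by the vector search
  l_prompt   CLOB;
  l_response CLOB;
  l_params   CLOB := '{
     "provider": "ollama",
     "host"    : "local",
     "url"     : "http://localhost:11434/api/generate",
     "model"   : "llama3.2"
  }';
BEGIN
  -- Build a single prompt instead of the message used by DBMS_CLOUD.SEND_REQUEST
  l_prompt := 'Question: '     || l_question || CHR(10) ||
              'Chat History: ' || l_history  || CHR(10) ||
              'Context: '      || l_context;
  l_response := DBMS_VECTOR_CHAIN.UTL_TO_GENERATE_TEXT(l_prompt, JSON(l_params));
  DBMS_OUTPUT.PUT_LINE(l_response);
END;
/
```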
SUMMARY: This RAG demo is a great place to start learning how to create a RAG architecture, and with just a few changes, many of the demos created for Autonomous Database can be used on-premises as well!
In this blog post I am sharing a script I wrote that lists the databases running on a DB node. The script provides the following information:
DB_UNIQUE_NAME
ORACLE_SID
DB_HOME
WHY
I have been working on a script to automatically configure OKV for all of the Oracle databases running on a DB host. In order to install OKV in a RAC cluster, I want to ensure each database's OKV software files are in the same location on every host when I set the WALLET_ROOT variable for the database. The optimal location is under $ORACLE_BASE/admin/${DB_NAME}, which should exist on both single-instance and RAC nodes.
Easy right?
I thought it would be easy to determine the name of all of the databases on a host so that I could make sure the install goes into $ORACLE_BASE/admin/{DB_NAME}/okv directory on each DB node.
The first thing I realized is that the directory structure under $ORACLE_BASE/admin actually uses the DB_UNIQUE_NAME rather than the DB_NAME. This allows two different instances of the same DB_NAME (primary and standby) to run on the same DB node without any conflicts.
Along with determining the DB_UNIQUE_NAME, I wanted to take the following items into account:
A RAC environment with, or without srvctl properly configured
A non-RAC environment
Exclude directories under $ORACLE_BASE/admin that are not a DB_UNIQUE_NAME running on the host.
Don't match on ORACLE_SID. The ORACLE_SID name on a DB node can be completely different from the DB_UNIQUE_NAME.
Answer:
After searching Google and not finding a good answer, I checked with my colleagues. Still no good answer. There were just suggestions like "srvctl config", which would only work on a RAC node where all databases are properly registered.
The way I decided to do this was to:
Identify the possible DB_UNIQUE_NAME entries by looking in $ORACLE_BASE/admin
Match each possible DB_UNIQUE_NAME with its ORACLE_SIDs by looking in $ORACLE_BASE/diag/rdbms/${DB_UNIQUE_NAME} to find the ORACLE_SID name. I only include DB_UNIQUE_NAMEs that exist in this directory structure and have a subdirectory.
Find the likely ORACLE_HOME by matching the ORACLE_SID against /etc/oratab. If there is no entry in /etc/oratab, still include the database.
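The steps above can be sketched as a small shell function. This is an illustrative sketch of the approach, not the author's full script; the paths are parameterized (defaulting to a typical $ORACLE_BASE and /etc/oratab) so the logic is easy to test:

```shell
#!/bin/bash
# List databases on this node as: DB_UNIQUE_NAME ORACLE_SID DB_HOME
list_dbs() {
  local obase="${1:-/u01/app/oracle}"
  local oratab="${2:-/etc/oratab}"
  local dir siddir dbun sid home diag
  # Step 1: candidate DB_UNIQUE_NAMEs are the directories under $ORACLE_BASE/admin
  for dir in "$obase"/admin/*/; do
    [ -d "$dir" ] || continue
    dbun=$(basename "$dir")
    # Step 2: keep only candidates with a diag destination, which holds the SIDs
    diag="$obase/diag/rdbms/$(echo "$dbun" | tr '[:upper:]' '[:lower:]')"
    [ -d "$diag" ] || continue
    for siddir in "$diag"/*/; do
      [ -d "$siddir" ] || continue
      sid=$(basename "$siddir")
      # Step 3: look up ORACLE_HOME in oratab; keep the DB even with no entry
      home=$(awk -F: -v s="$sid" '$1==s {print $2}' "$oratab" 2>/dev/null)
      printf '%s %s %s\n' "$dbun" "$sid" "${home:-UNKNOWN}"
    done
  done
}
```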
Script:
Below is the script I came up with; it displays a report of the databases on the host. It can be changed to store the output in a temporary file and read it into a script that loops through the databases.
Output:
Below is sample output from the script. You can see that it doesn't require the database to exist in the /etc/oratab file.
The ZDLRA introduced a new feature with release 23.1 that can both encrypt backups (if they are not already encrypted with TDE) and compress them. The combination of encryption and compression in this feature is unique to the ZDLRA.
I talked about this exciting new feature in a blog post on Oracle.com, which you can find here.
What I am going to cover in this blog post is how to audit the RMAN catalog on the ZDLRA to validate that your backups are completely RMAN encrypted.
There are two big advantages to ensuring your backups are fully encrypted:
1) With the prevalence of data exfiltration, and the advent of new regulations in many industries, full encryption of backups is mandatory
2) When sending a backup to the Oracle cloud (either in OCI or to object storage on ZFS) full encryption is required to protect the backup data.
The question I often get asked about this feature is:
"How do you tell if your backups are encrypted ?"
You can determine that your backups are encrypted by looking at the RMAN catalog.
The RC_BACKUP_PIECE view contains a column identifying if the backup is encrypted. This column is set to "YES" only when the backup piece is encrypted.
Keep in mind that there are multiple types of backup pieces contained in the catalog:
Controlfile backups
Spfile backups
Archive log sweeps
Archive log backups from real-time redo
Datafile backups
Virtual Full backups created from incremental backups.
All of these backups except for two are sent from RMAN with "encryption on", and the backup set will be marked as encrypted based on the RMAN encryption setting.
The two that are not set by RMAN directly are:
Real-time redo backups. Real-time redo backups are identified in the RMAN catalog as encrypted when the destination setting on the protected database has ENCRYPTION=ENABLE set.
Virtual Full backups. Virtual full backups are identified, for each datafile backup set, as encrypted ONLY after a new L0 is taken with RMAN encryption on, and all subsequent L1 backups are encrypted. I know that is a lot of stipulations on identifying the virtual full backup as encrypted. Only when a new FULL encrypted backup is taken, and all future incremental backups are encrypted can the ZDLRA be sure the backup has remained completely encrypted.
Checking the catalog
The script below takes two parameters (&db_name and &days_to_compare). It checks the RMAN catalog and displays the status of the backups by backup type, making it easier to identify any backup pieces that may not be encrypted.
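The core of such a check can be sketched with a query along these lines. This is not the author's exact script (which also classifies backup-piece types); it is a simplified sketch using the documented ENCRYPTED and COMPRESSED columns of the RC_BACKUP_PIECE recovery catalog view, joined to RC_DATABASE to filter by database name:

```sql
-- Sketch: summarize backup pieces by encryption/compression status for one
-- database over the last &days_to_compare days.
SELECT bp.encrypted,
       bp.compressed,
       COUNT(*) AS backup_pieces
FROM   rc_backup_piece bp
       JOIN rc_database d ON d.db_key = bp.db_key
WHERE  d.name = UPPER('&db_name')
AND    bp.completion_time > SYSDATE - &days_to_compare
GROUP  BY bp.encrypted, bp.compressed
ORDER  BY bp.encrypted, bp.compressed;
```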
This provides a nicely formatted output as you can see below.
Database backup summary for last 15 days database: DBSG23AI
Encrypted Compressed Backup
Yes or No Yes or No pieces Backup piece type
========== ========== ====== ========================================
YES YES 69 Full backup
YES NO 39 Archive Log - log sweep
NO YES 1 Incremental L1 backup
YES NO 3958 Archive Log - real-time redo
YES YES 67 Incremental L1 backup
NO YES 3 Full backup
NO NO 1 Controlfile/SPFILE backup
YES NO 26 Controlfile/SPFILE backup
YES NO 221 Incremental L1 backup
In the report you can see that there are a few backups that are not encrypted, along with some controlfile/spfile backups.
NOTE: In order to run this report, I created a REPORT user on the ZDLRA database as a "monitor" user. A monitor user has enough permissions to create this report.
OKV and ZDLRA
Previously, when sending backups to the cloud (which includes OCI object storage on ZFSSA), OKV was required. When using Space Efficient Encrypted Backups, you can ensure that EVERY backup piece is fully encrypted and that RMAN recognizes it as encrypted.
If you follow the information in the blog, and what I have posted in the past, you will no longer need to configure OKV when sending backups to the cloud.
If all backup pieces are encrypted, and the RMAN catalog reports that all backups are encrypted, you can create archival backups using DBMS_RA.CREATE_ARCHIVAL_BACKUP, setting the "encryption_algorithm" to "CLIENT" or "ENC_CLIENT". This tells the ZDLRA not to utilize OKV to encrypt the backups; however, if any backup pieces are NOT encrypted, the archival backups will fail.
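A call along these lines illustrates the idea. This is a sketch only: the database name, attribute set, and retention values are illustrative, and I am showing just a subset of the procedure's parameters, so check the DBMS_RA.CREATE_ARCHIVAL_BACKUP reference for the full signature:

```sql
-- Sketch: create an archival (KEEP) backup that relies on the client-side
-- RMAN encryption validated above, rather than ZDLRA/OKV re-encryption.
BEGIN
  DBMS_RA.CREATE_ARCHIVAL_BACKUP(
    db_unique_name       => 'DBSG23AI',       -- illustrative database name
    encryption_algorithm => 'CLIENT',         -- or 'ENC_CLIENT'
    attribute_set_name   => 'BRONZE_POOL',    -- illustrative attribute set
    comments             => 'Client-encrypted archival backup');
END;
/
```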