Expert insights on Oracle Database, Zero Data Loss Recovery Appliance (ZDLRA), backup and recovery strategies, database security, and enterprise database solutions by Bryan Grenn.
Oracle TDE (Transparent Data Encryption) is a critical piece of data protection, and online encryption is a great tool you can use to simplify the process of encrypting or rekeying your tablespaces. In this post I will go through the process of
Encrypting an unencrypted tablespace online with TDE
Restarting encryption of a tablespace after the database crashed mid-process
Re-encrypting tablespaces with a stronger encryption key (AES128 --> AES256)
What happens to a standby database during encryption and rekey operations
Throughout this blog post I will be using queries from my previous blog, which you can find here.
Environment
First, my environment.
Primary DB : ORCLCDB - DB 19.28
Standby DB : ORCLSTBY - DB 19.28
Both databases are using OMF to simplify naming files
Encryption - Using Local wallet file
The parameters that affect encryption are using the default values.
Parameter Name                 Value                Default?
------------------------------ -------------------- ----------
tablespace_encryption          MANUAL_ENABLE        TRUE
encrypt_new_tablespaces        CLOUD_ONLY           TRUE
Using these parameters, new tablespaces will not be encrypted unless the DDL explicitly mentions encrypting the tablespace.
NOTE: Since I am using 19c, the default encryption algorithm is AES128. Set _tablespace_encryption_default_algorithm to 'AES256' to change the default.
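For example, with these defaults you would either name the algorithm in the DDL or change the default first. A minimal sketch (the tablespace name and sizing are illustrative):
-- Hidden parameters must be double-quoted in ALTER SYSTEM
alter system set "_tablespace_encryption_default_algorithm"='AES256' scope=both;
-- Or name the algorithm explicitly in the DDL
create tablespace secure_data datafile size 100M encryption using 'AES256' encrypt;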
Encrypting my tablespace
The ENCRYPTION clause of the "ALTER TABLESPACE" command is what you use to manage encrypted tablespaces.
For this post, I will concentrate on ONLINE encryption.
New tablespace
I am going to create a new tablespace in my PDB and I want any new tablespaces to be encrypted.
alter system set encrypt_new_tablespaces=always;
Create tablespace encrypt_test;
Running my query against the database, I can see that the tablespace was created and, by default, encrypted with AES128.
PDB Name   Tablespace Name Enc.  Enc. alg   Master Key ID             Key ID                              tablespace Encrypt key (trunc)
---------- --------------- ----- ---------- ------------------------- ----------------------------------- ------------------------------
CDB$ROOT SYSAUX NO AZTq/iBVq0/Kv5Es4oNsgQI= 94EAFE2055AB4FCABF912CE2836C8102 603EC31649CDC3684FE68D3DB376F6
SYSTEM NO AZTq/iBVq0/Kv5Es4oNsgQI= 94EAFE2055AB4FCABF912CE2836C8102 603EC31649CDC3684FE68D3DB376F6
TEMP NO AZTq/iBVq0/Kv5Es4oNsgQI= 94EAFE2055AB4FCABF912CE2836C8102 603EC31649CDC3684FE68D3DB376F6
UNDOTBS1 NO AZTq/iBVq0/Kv5Es4oNsgQI= 94EAFE2055AB4FCABF912CE2836C8102 603EC31649CDC3684FE68D3DB376F6
USERS NO AQAAAAAAAAAAAAAAAAAAAAA= 00000000000000000000000000000000 000000000000000000000000000000
ENCRYPT_TEST YES AES128 AZTq/iBVq0/Kv5Es4oNsgQI= 94EAFE2055AB4FCABF912CE2836C8102 5635777F4E7A6ACA229FEA10369967
NOTE: I did not change the parameter in my standby database, yet the tablespace in my standby is also encrypted. The DDL sent to the standby from the primary ensured that the standby also encrypted the tablespace.
Existing tablespaces
Now I am going to take an existing tablespace and encrypt it online.
Prior to performing this operation (even though it is online), I am going to take a backup of the database.
To make this more realistic, I am using a tablespace with 4 datafiles, each around 10 GB.
I am going to encrypt this tablespace on the primary database.
SQL> alter tablespace ENCTEST ENCRYPTION ONLINE ENCRYPT;
My session will not come back until the tablespace is encrypted.
Observations
Alert log and encrypting standby database.
When looking at the alert log for the primary and the standby, I can see the changes occurring.
Primary Database Alert log
ORCLPDB1(3):alter tablespace ENCTEST ENCRYPTION ONLINE ENCRYPT
2026-01-20T10:44:14.923114-07:00
ORCLPDB1(3):About to encrypt tablespace ENCTEST (tsn 3/6)
2026-01-20T10:44:15.020101-07:00
ORCLPDB1(3):TDE converting datafile /home/db19c/oradata/ORCLCDB/ORCLCDB/3CE209FE44CDECAEE06394A5500AE99C/datafile/o1_mf_enctest_npz8393s_.dbf (13) to /home/db19c/oradata/ORCLCDB/ORCLCDB/3CE209FE44CDECAEE06394A5500AE99C/datafile/o1_mf_enctest_%u_.dbf
2026-01-20T10:48:44.473301-07:00 --> Start Time for Encryption
ORCLPDB1(3):Blocks TDE converted for file /home/db19c/oradata/ORCLCDB/ORCLCDB/3CE209FE44CDECAEE06394A5500AE99C/datafile/o1_mf_enctest_npzhqh0w_.dbf size 1894528
2026-01-20T10:48:44.564605-07:00
ORCLPDB1(3):TDE convert operation committed for file /home/db19c/oradata/ORCLCDB/ORCLCDB/3CE209FE44CDECAEE06394A5500AE99C/datafile/o1_mf_enctest_npzhqh0w_.dbf
2026-01-20T10:48:46.584235-07:00
ORCLPDB1(3):About to zero out original file "/home/db19c/oradata/ORCLCDB/ORCLCDB/3CE209FE44CDECAEE06394A5500AE99C/datafile/o1_mf_enctest_npz8393s_.dbf"
2026-01-20T10:50:06.377754-07:00
ORCLPDB1(3):Successfully zero'ed out original file "/home/db19c/oradata/ORCLCDB/ORCLCDB/3CE209FE44CDECAEE06394A5500AE99C/datafile/o1_mf_enctest_npz8393s_.dbf"
2026-01-20T10:50:06.380150-07:00
ORCLPDB1(3):Successfully deleted original file "/home/db19c/oradata/ORCLCDB/ORCLCDB/3CE209FE44CDECAEE06394A5500AE99C/datafile/o1_mf_enctest_npz8393s_.dbf"
2026-01-20T10:50:06.388074-07:00
Standby Database Alert log
ORCLPDB1(3):TDE converting datafile /home/db19c/oradata/ORCLSTBY/ORCLSTBY/3CE209FE44CDECAEE06394A5500AE99C/datafile/o1_mf_enctest_npz83b9x_.dbf (13) to /home/db19c/oradata/ORCLCDB/ORCLSTBY/3CE209FE44CDECAEE06394A5500AE99C/datafile/o1_mf_enctest_%u_.dbf
2026-01-20T10:48:44.976604-07:00 ORCLPDB1(3):Blocks TDE converted for file /home/db19c/oradata/ORCLCDB/ORCLSTBY/3CE209FE44CDECAEE06394A5500AE99C/datafile/o1_mf_enctest_npzhqj8h_.dbf size 1894528
2026-01-20T10:48:44.983884-07:00 --> Start Time for Encryption
ORCLPDB1(3):TDE convert operation committed for file /home/db19c/oradata/ORCLCDB/ORCLSTBY/3CE209FE44CDECAEE06394A5500AE99C/datafile/o1_mf_enctest_npzhqj8h_.dbf
2026-01-20T10:48:46.994337-07:00
ORCLPDB1(3):About to zero out original file "/home/db19c/oradata/ORCLSTBY/ORCLSTBY/3CE209FE44CDECAEE06394A5500AE99C/datafile/o1_mf_enctest_npz83b9x_.dbf"
2026-01-20T10:50:06.371051-07:00
ORCLPDB1(3):Successfully zero'ed out original file "/home/db19c/oradata/ORCLSTBY/ORCLSTBY/3CE209FE44CDECAEE06394A5500AE99C/datafile/o1_mf_enctest_npz83b9x_.dbf"
2026-01-20T10:50:08.012125-07:00
ORCLPDB1(3):Successfully deleted original file "/home/db19c/oradata/ORCLSTBY/ORCLSTBY/3CE209FE44CDECAEE06394A5500AE99C/datafile/o1_mf_enctest_npz83b9x_.dbf"
2026-01-20T10:50:08.088219-07:00
Notice that the conversion of the first datafile on the primary started at
2026-01-20T10:48:44.473301-07:00
And the standby database started converting the first datafile at
2026-01-20T10:48:44.976604-07:00
The difference is milliseconds, due to network latency.
Observation #1 - With a standby database, the standby immediately encrypts through redo apply, at the same time as the primary.
This was the most interesting item I found when going through the alert log.
After creating a new encrypted copy of a datafile, the process "Zero'ed out" the old datafile prior to deletion.
This was something I had not thought of before.
When migrating to Encrypted datafiles, this ensures that the original datafile (unencrypted) is overwritten and cannot be "undeleted" and read.
ORCLPDB1(3):About to zero out original file "/home/db19c/oradata/ORCLSTBY/ORCLSTBY/3CE209FE44CDECAEE06394A5500AE99C/datafile/o1_mf_enctest_npz83b9x_.dbf"
2026-01-20T10:50:06.371051-07:00
ORCLPDB1(3):Successfully zero'ed out original file "/home/db19c/oradata/ORCLSTBY/ORCLSTBY/3CE209FE44CDECAEE06394A5500AE99C/datafile/o1_mf_enctest_npz83b9x_.dbf"
2026-01-20T10:50:08.012125-07:00
ORCLPDB1(3):Successfully deleted original file "/home/db19c/oradata/ORCLSTBY/ORCLSTBY/3CE209FE44CDECAEE06394A5500AE99C/datafile/o1_mf_enctest_npz83b9x_.dbf"
2026-01-20T10:50:08.088219-07:00
NOTE: This same process occurs when rekeying also.
26ai New Feature: DB_RECOVERY_AUTO_REKEY
ON, which is the default for the standby, enables the standby to automatically perform an online tablespace rekey operation as part of standby media recovery. Be aware that this setting pauses media recovery; if the tablespace is large, you could observe extended standby apply lag.
OFF, which is the default for the primary, does not perform the tablespace rekey operation inline; standby recovery only records the tablespace key information. This avoids extended standby apply lag during an online conversion and is designed for Oracle Data Guard deployments with large tablespaces. Once the STATUS column value for the corresponding tablespace becomes REKEYING or ENCRYPTING in the V$ENCRYPTED_TABLESPACES dynamic view on the standby database, you can use a standby SQL session to issue an ALTER TABLESPACE ENCRYPTION ONLINE FINISH REKEY|ENCRYPT command to perform the rekey in parallel with media recovery.
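Based on that description, with DB_RECOVERY_AUTO_REKEY=OFF on a 26ai standby the flow would look something like this sketch (using my ENCTEST tablespace as the example):
-- On the standby: check whether a conversion is pending
select ts#, status from v$encrypted_tablespaces where status in ('ENCRYPTING','REKEYING');
-- Complete the conversion in parallel with media recovery
alter tablespace ENCTEST encryption online finish rekey;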
Observation #2 - When encrypting/rekeying a tablespace, the original datafiles are overwritten with zeros to ensure they no longer contain any usable data.
Backing up the newly encrypted database
I am now going to perform a new Level 1 backup of the database and look at what is being backed up, using block change tracking to show which blocks are included.
What you can see is that as far as RMAN is concerned, since this database is idle, only the header block changed. RMAN did not back up the datafile blocks even though every block changed through the encryption process. Only the user contents of the blocks changed, not the metadata about the block.
This is a critical observation.
If you perform a weekly full/daily incremental backup, you will need a new Level 0 backup to ensure the datafiles are encrypted in your backup.
If you utilize the "incremental merge" process, your backup will not be encrypted. If you perform a "switch to copy", your datafiles will not be encrypted. After encrypting, you must restart this strategy with a new full backup.
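For example, resetting the strategy is simply a new level 0:
RMAN> BACKUP INCREMENTAL LEVEL 0 DATABASE;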
NOTE: One of the features of the ZDLRA is that it is "encryption aware", and you will not need to perform a new Level 0 backup to ensure you restore encrypted datafiles.
Observation #3 - The database does not recognize the blocks as changed. The block header, containing the SCN and other metadata, remains untouched.
Datafile/Tablespace information
Original Datafiles
Tablespace Name FILE_ID File Name Size
--------------- ---------- ------------------------------ ----------
ENCTEST 13 o1_mf_enctest_npz8393s_.dbf 14.454 GB
ENCTEST 14 o1_mf_enctest_npz8gvz0_.dbf 8.589 GB
ENCTEST 15 o1_mf_enctest_npz8oy0p_.dbf 10.831 GB
ENCTEST 16 o1_mf_enctest_npz8yr6p_.dbf 12.793 GB
New Datafiles
Tablespace Name FILE_ID File Name Size
--------------- ---------- ------------------------------ ----------
ENCTEST 13 o1_mf_enctest_npzhqh0w_.dbf 14.454 GB
ENCTEST 14 o1_mf_enctest_npzj2gdb_.dbf 8.589 GB
ENCTEST 15 o1_mf_enctest_npzj9f4k_.dbf 10.831 GB
ENCTEST 16 o1_mf_enctest_npzjl4dz_.dbf 12.793 GB
Tablespace encryption Status
PDB Name   Tablespace Name Enc.  Enc. alg   Master Key ID             Key ID                              tablespace Encrypt key (trunc)
---------- --------------- ----- ---------- ------------------------- ----------------------------------- ------------------------------
ORCLPDB1 SYSAUX NO AawCP+ykIE/2v9kPpgqvHOk= AC023FECA4204FF6BFD90FA60AAF1CE9 0C836501DE2C28286F4DFF45EDB563
SYSTEM NO AawCP+ykIE/2v9kPpgqvHOk= AC023FECA4204FF6BFD90FA60AAF1CE9 0C836501DE2C28286F4DFF45EDB563
TEMP NO AawCP+ykIE/2v9kPpgqvHOk= AC023FECA4204FF6BFD90FA60AAF1CE9 0C836501DE2C28286F4DFF45EDB563
UNDOTBS1 NO AawCP+ykIE/2v9kPpgqvHOk= AC023FECA4204FF6BFD90FA60AAF1CE9 0C836501DE2C28286F4DFF45EDB563
USERS NO AQAAAAAAAAAAAAAAAAAAAAA= 00000000000000000000000000000000 000000000000000000000000000000
ENCTEST YES AES128 AawCP+ykIE/2v9kPpgqvHOk= AC023FECA4204FF6BFD90FA60AAF1CE9 604F0A3F91F73875D0BC9325FA87C3
Above, you can see that a newly encrypted datafile copy was created for each datafile in the tablespace. The size is exactly the same as the original datafile. You can also see that since I didn't specify an algorithm, the tablespace was encrypted with AES128 (the default).
Observation #4 - A new datafile copy is created for each datafile. You need to make sure there is enough space for a second copy of the largest datafile.
Re-encrypting tablespaces from AES128 to AES256
The next process I am going to go through is common as customers rekey existing tablespaces from AES128 to utilize the AES256 encryption algorithm.
In this example, I am not only going to re-encrypt the tablespaces, I am also going to crash the database in the middle of this process to see how to handle this possibility.
SQL> alter tablespace ENCTEST ENCRYPTION ONLINE USING 'AES256' rekey;
For this example, I monitored the alert log and watched it encrypt the datafiles. As soon as it finished encrypting the first datafile (file# 13), I performed a "shutdown abort" on the database.
Primary Alert log
ORCLPDB1(3):About to rekey tablespace ENCTEST (tsn 3/6)
2026-01-20T11:46:45.353069-07:00
ORCLPDB1(3):TDE converting datafile /home/db19c/oradata/ORCLCDB/ORCLCDB/3CE209FE44CDECAEE06394A5500AE99C/datafile/o1_mf_enctest_npzhqh0w_.dbf (13) to /home/db19c/oradata/ORCLCDB/ORCLCDB/3CE209FE44CDECAEE06394A5500AE99C/datafile/o1_mf_enctest_%u_.dbf
2026-01-20T11:51:50.367605-07:00
ORCLPDB1(3):Blocks TDE converted for file /home/db19c/oradata/ORCLCDB/ORCLCDB/3CE209FE44CDECAEE06394A5500AE99C/datafile/o1_mf_enctest_npzmdoc8_.dbf size 1894528
2026-01-20T11:51:50.415541-07:00
ORCLPDB1(3):TDE convert operation committed for file /home/db19c/oradata/ORCLCDB/ORCLCDB/3CE209FE44CDECAEE06394A5500AE99C/datafile/o1_mf_enctest_npzmdoc8_.dbf
2026-01-20T11:51:52.481666-07:00
ORCLPDB1(3):About to zero out original file "/home/db19c/oradata/ORCLCDB/ORCLCDB/3CE209FE44CDECAEE06394A5500AE99C/datafile/o1_mf_enctest_npzhqh0w_.dbf"
2026-01-20T11:52:25.125992-07:00
Shutting down ORACLE instance (abort) (OS id: 827465)
2026-01-20T11:52:25.158712-07:00
Shutdown is initiated by sqlplus@bgrenn-19-28 (TNS V1-V3).
License high water mark = 12
USER (ospid: 827465): terminating the instance
2026-01-20T11:52:26.213187-07:00
Instance terminated by USER, pid = 827465
2026-01-20T11:52:27.343819-07:00
Standby Alert log
ORCLPDB1(3):TDE converting datafile /home/db19c/oradata/ORCLCDB/ORCLSTBY/3CE209FE44CDECAEE06394A5500AE99C/datafile/o1_mf_enctest_npzhqj8h_.dbf (13) to /home/db19c/oradata/ORCLCDB/ORCLSTBY/3CE209FE44CDECAEE06394A5500AE99C/datafile/o1_mf_enctest_%u_.dbf
2026-01-20T11:51:51.143252-07:00
ORCLPDB1(3):Blocks TDE converted for file /home/db19c/oradata/ORCLCDB/ORCLSTBY/3CE209FE44CDECAEE06394A5500AE99C/datafile/o1_mf_enctest_npzmdphy_.dbf size 1894528
2026-01-20T11:51:51.150126-07:00
ORCLPDB1(3):TDE convert operation committed for file /home/db19c/oradata/ORCLCDB/ORCLSTBY/3CE209FE44CDECAEE06394A5500AE99C/datafile/o1_mf_enctest_npzmdphy_.dbf
2026-01-20T11:51:53.202013-07:00
ORCLPDB1(3):About to zero out original file "/home/db19c/oradata/ORCLCDB/ORCLSTBY/3CE209FE44CDECAEE06394A5500AE99C/datafile/o1_mf_enctest_npzhqj8h_.dbf"
2026-01-20T18:52:25.254096+00:00
rfs (PID:823518): Possible network disconnect with primary database
2026-01-20T18:52:25.258563+00:00
rfs (PID:823518): while processing B-1209738101.T-1.S-412 BNUM:110516 BCNT:1
2026-01-20T18:52:25.259701+00:00
rfs (PID:823516): Possible network disconnect with primary database
rfs (PID:823518): Current process action IDLE, elapsed idle time 0
2026-01-20T18:52:25.266055+00:00
rfs (PID:823516): while processing B-1209738101.T-1.S-412 BNUM:0 BCNT:0
2026-01-20T18:52:25.267216+00:00
rfs (PID:823518): RFS client ASYNC ORL SINGLE (PID:823296)
2026-01-20T18:52:25.268790+00:00
rfs (PID:823516): Current process action IDLE, elapsed idle time 32
rfs (PID:823516): RFS client GAP MANAGER (PID:822694)
2026-01-20T11:53:04.841560-07:00
ORCLPDB1(3):Successfully zero'ed out original file "/home/db19c/oradata/ORCLCDB/ORCLSTBY/3CE209FE44CDECAEE06394A5500AE99C/datafile/o1_mf_enctest_npzhqj8h_.dbf"
2026-01-20T11:53:06.478227-07:00
ORCLPDB1(3):Successfully deleted original file "/home/db19c/oradata/ORCLCDB/ORCLSTBY/3CE209FE44CDECAEE06394A5500AE99C/datafile/o1_mf_enctest_npzhqj8h_.dbf"
2026-01-20T11:53:06.486711-07:00
ORCLPDB1(3):TDE converting datafile /home/db19c/oradata/ORCLCDB/ORCLSTBY/3CE209FE44CDECAEE06394A5500AE99C/datafile/o1_mf_enctest_npzj2j4t_.dbf (14) to /home/db19c/oradata/ORCLCDB/ORCLSTBY/3CE209FE44CDECAEE06394A5500AE99C/datafile/o1_mf_enctest_%u_.dbf
2026-01-20T11:54:47.101862-07:00
ORCLPDB1(3):Blocks TDE converted for file /home/db19c/oradata/ORCLCDB/ORCLSTBY/3CE209FE44CDECAEE06394A5500AE99C/datafile/o1_mf_enctest_npzmrlhf_.dbf size 1125744
2026-01-20T11:54:47.108829-07:00
ORCLPDB1(3):TDE convert operation committed for file /home/db19c/oradata/ORCLCDB/ORCLSTBY/3CE209FE44CDECAEE06394A5500AE99C/datafile/o1_mf_enctest_npzmrlhf_.dbf
2026-01-20T11:54:49.120370-07:00
ORCLPDB1(3):About to zero out original file "/home/db19c/oradata/ORCLCDB/ORCLSTBY/3CE209FE44CDECAEE06394A5500AE99C/datafile/o1_mf_enctest_npzj2j4t_.dbf"
2026-01-20T11:55:28.313091-07:00
ORCLPDB1(3):Successfully zero'ed out original file "/home/db19c/oradata/ORCLCDB/ORCLSTBY/3CE209FE44CDECAEE06394A5500AE99C/datafile/o1_mf_enctest_npzj2j4t_.dbf"
2026-01-20T11:55:37.980759-07:00
ORCLPDB1(3):Successfully deleted original file "/home/db19c/oradata/ORCLCDB/ORCLSTBY/3CE209FE44CDECAEE06394A5500AE99C/datafile/o1_mf_enctest_npzj2j4t_.dbf"
2026-01-20T11:55:37.989404-07:00
ORCLPDB1(3):TDE converting datafile /home/db19c/oradata/ORCLCDB/ORCLSTBY/3CE209FE44CDECAEE06394A5500AE99C/datafile/o1_mf_enctest_npzj9kff_.dbf (15) to /home/db19c/oradata/ORCLCDB/ORCLSTBY/3CE209FE44CDECAEE06394A5500AE99C/datafile/o1_mf_enctest_%u_.dbf
What is most interesting in the alert logs is that the standby continued to rekey the datafiles for the tablespace. This occurred independently of the primary.
Observation #5 - The primary sends the ALTER TABLESPACE ... REKEY command in the redo, and the standby database performs the process independently, even after the primary crashed.
Startup Database after crash
After starting up the primary database, I took a look at the encryption status of the tablespace in both the primary and the standby. In the primary it shows as "REKEYING"; the standby successfully finished the operation and shows as normal AES256 encrypted.
-rw-r-----+ 1 oracle oracle 15519981568 Jan 20 11:52 o1_mf_enctest_npzhqh0w_.dbf
-rw-r-----+ 1 oracle oracle 9222103040 Jan 20 12:10 o1_mf_enctest_npzj2gdb_.dbf
-rw-r-----+ 1 oracle oracle 11629633536 Jan 20 12:10 o1_mf_enctest_npzj9f4k_.dbf
-rw-r-----+ 1 oracle oracle 13736353792 Jan 20 12:10 o1_mf_enctest_npzjl4dz_.dbf
-rw-r-----+ 1 oracle oracle 15519981568 Jan 20 12:10 o1_mf_enctest_npzmdoc8_.dbf
Since I aborted the database while it was zeroing out the datafile, there are two copies of that datafile.
At this point, since the rekey operation had started, my only choice is to "FINISH" the operation. This is a manual step to restart the process.
alter tablespace ENCTEST ENCRYPTION FINISH rekey;
Observation #6 - Once you start an encryption operation, it must finish. There is no backing out.
Once the rekey operation finished, I again looked at the datafiles. I can see that the original datafile that was in process when I aborted the database is still there. If an online encryption operation is in progress during a crash, you might be left with an orphan file that has to be manually deleted.
-rw-r-----+ 1 oracle oracle 15519981568 Jan 20 11:52 o1_mf_enctest_npzhqh0w_.dbf
-rw-r-----+ 1 oracle oracle 15519981568 Jan 20 12:10 o1_mf_enctest_npzmdoc8_.dbf
-rw-r-----+ 1 oracle oracle 9222103040 Jan 20 12:27 o1_mf_enctest_npzopwfg_.dbf
-rw-r-----+ 1 oracle oracle 11629633536 Jan 20 12:30 o1_mf_enctest_npzovdhd_.dbf
-rw-r-----+ 1 oracle oracle 13736353792 Jan 20 12:34 o1_mf_enctest_npzp0zd5_.dbf
Observation #7 - After a crash, there may be cleanup necessary.
Cloud Protect is a new offering announced at AI World 2025. Cloud Protect allows you to leverage the Oracle Database Zero Data Loss Autonomous Recovery Service for your on-premises Linux x64 databases. Cloud Protect even allows you to enable real-time redo.
I've been testing it out for the past week and I wanted to share everything I learned about using it.
First, there are a few things to understand about Cloud Protect Version 1.
You can only restore the database back to the original host/cluster you configured backups from. All nodes in a RAC cluster can be used for restoration.
Cloud Protect can only be used from non-ExaCC on-premises databases.
You cannot use a dual point-in-time backup strategy. Cloud Protect must be your primary backup location.
I have seen a lot of changes with the ZDLRA over the last 10 years or so, and because of that I wanted to document the current (2025) best practices, which are helpful for both new and existing customers.
Because of the changes that have occurred as the product has matured, I wanted to write a blog post on what existing customers should think about changing, and what new customers should think about from the start.
1. Channel Configuration
There are a few items to talk about with channel configuration. First, let's break down the pieces of the channel configuration.
a) Library
Version
Up until version 19.26, you could use a shared version of the library (libra.so) based on the OS. Updates to the library could be downloaded directly from MOS.
Starting with version 19.27 on Linux, you must use the version that is stored in $ORACLE_HOME and patched using OPatch. More information can be found in the MOS note below.
Location (channel setting)
In the previous section I mentioned that the library is tied to the version of Oracle starting with 19.27. Prior to this release, the default channel setting for the library with the ZDLRA was
PARMS 'SBT_LIBRARY=$ORACLE_HOME/lib/libra.so'
In order to always utilize the current library for the $ORACLE_HOME, the recommendation is to set the channel setting to
"SBT_LIBRARY=libra.so" - No path; it will default to the current $ORACLE_HOME/lib
Or
"SBT_LIBRARY=oracle.zdlra" - This will default to the current $ORACLE_HOME/lib/libra.so
b) Environment
In the environment section, you specify the SEPS wallet location. Because there could be conflicts with other Oracle products that use a wallet (like OUD), the default location for the SEPS wallet should be the WALLET_ROOT location, rather than within the $ORACLE_HOME location.
In the past, the channel setting for ZDLRA was
"RA_WALLET=location=file:${ORACLE_HOME}/dbs/ra_wallet credential_alias={SEPS entry in wallet}"
The recommendation is to set the WALLET_ROOT in the spfile, and store the SEPS wallet within the server_seps directory under this location. If WALLET_ROOT is set to $ORACLE_BASE/admin/{$DB_NAME}/wallet, the setting would be
"RA_WALLET=location=file:${ORACLE_BASE}/admin/${DB_NAME}/wallet/server_seps/ credential_alias={SEPS entry in wallet}"
c) TLS or non-TLS
If a customer has the ZDLRA configured to encrypt backups with TLS/SSL, the library, by default, will attempt to encrypt the backup using SSL/HTTPS. If TLS/SSL is configured as optional, and the client does not have the certificate information, backups will fail. To avoid this, it is recommended to add the following setting to the channel configuration. This setting will allow you to send backups even if optional TLS/SSL is configured.
_RA_NO_SSL=TRUE
d) Space Efficient Encrypted Backups
In order to use Space Efficient Encrypted Backups, you must set the following in your channel configuration. This is only available on Linux with DB version 19.18+. Keep in mind this setting will only compress datafile backup pieces.
RA_FORMAT=TRUE
Example:
RMAN> CONFIGURE CHANNEL DEVICE TYPE 'SBT_TAPE' PARMS 'SBT_LIBRARY=oracle.zdlra,ENV=(_RA_NO_SSL=TRUE,RA_WALLET=location=file:/u01/app/oracle/admin/orcl/wallet/server_seps credential_alias="tcps://pmdra1-scan1.us.oracle.com:2484/ra1db", RA_FORMAT=true)';
oracle.zdlra - Use the libra.so from the $ORACLE_HOME for the database
_RA_NO_SSL=TRUE - Do not send backups encrypted if TLS is optionally enabled.
/u01/app/oracle/admin/orcl/wallet/server_seps - Wallet location from WALLET_ROOT
RA_FORMAT=true - Use space efficient encrypted backups
2. RMAN encryption
One misconception with Space Efficient Encrypted Backups is that by setting RA_FORMAT=TRUE you are both compressing and encrypting backups. This is not true. You are only compressing the backups. If the tablespace is encrypted with TDE, the backup will remain encrypted, but any non-TDE tablespace backups will be unencrypted. In order to encrypt the backup pieces (datafiles, controlfile, archive logs and spfile) you need to both set an encryption key (if not already set), and set RMAN encryption on. If you are backing up any tablespaces that are not TDE, you must also ensure you set RA_FORMAT=TRUE.
You should also set the default encryption algorithm to AES256, as most customers use this today rather than the default of AES128.
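A minimal example of turning this on, assuming the master key is already set:
RMAN> CONFIGURE ENCRYPTION FOR DATABASE ON;
RMAN> CONFIGURE ENCRYPTION ALGORITHM 'AES256';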
NOTE: Use of RA_FORMAT=TRUE (RMAN Compression) and RMAN encryption is included in the license for ZDLRA and you do not need the ACO or ASO license for backups to the ZDLRA.
3. Real-time redo settings.
a) Default SEPS wallet location
Wallets are becoming more and more common with databases, and this increases the chances of a conflict with other Oracle features that also utilize a wallet. To avoid this, there is a new default location for the SEPS wallet used by real-time redo. The database will first look in the {WALLET_ROOT}/server_seps location for the wallet. It is recommended to set WALLET_ROOT for all databases and use this default location for the SEPS wallet. Of course, you may use links to simplify management.
b) Encrypt Redo
If you have an encryption key set and you wish to fully encrypt your backups, you need to ensure that real-time redo is also encrypted. You need to set encryption on the destination for real-time redo:
ENCRYPTION=ON
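A sketch of what that destination might look like (the connect descriptor and DB_UNIQUE_NAME are placeholders, and the ENCRYPTION attribute is specific to Recovery Appliance destinations):
SQL> alter system set log_archive_dest_3='SERVICE="<ZDLRA connect descriptor>" ASYNC VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=<ZDLRA db_unique_name> ENCRYPTION=ON' scope=both;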
NOTE: I have a previous blog post on wallets you can find here. I have worked with a few customers that have run into issues configuring real-time redo because they have other products that are affected by WALLET_OVERRIDE=TRUE. What I tell customers is that if they want to continue to use the SEPS wallet for RMAN scripts (i.e. rman target / catalog /@{wallet entry}), then they should create a separate/custom sqlnet.ora file in a different location, and set TNS_ADMIN in their RMAN backup script.
4. Reserve Space
Reserve space has been difficult to explain, and it has become even more important when implementing a compliance window. If no compliance window is set, then the reserve space is only interrogated when the ZDLRA runs out of space. When a compliance window is set, the reserve space is interrogated during backups to ensure that compliance-locked backups fit within the reserved space. If they do not, backups are rejected. Because of this, I always recommend setting "auto-tune reserve space" for the policies, especially when setting a compliance window. This setting will automatically adjust the reserve space for you as databases grow.
5. When encrypting backups set "secure mode"
If you need to fully encrypt your backups with RMAN encryption, it is recommended to use "secure mode" on your protection policy. This setting checks all backup pieces (including real-time redo) to ensure that they are RMAN encrypted. Any unencrypted backup pieces will be rejected.
6. Use racli commands
One of the biggest changes is the increase in the functionality available with racli commands vs. executing DBMS_RA PL/SQL packages. It is recommended to utilize RACLI commands when possible, although when trying to automate onboarding databases, it can be easier to utilize the PL/SQL packages.
7. Pair ZDLRAs when using replication
Another change is the ability to pair ZDLRAs that have replication configured. This can be done with the "racli add ra_partner" command. Using the ZDLRA pairing commands for replication greatly simplifies the configuration of replication. Existing configurations can be converted to the pairing model.
8. Review types of DB users
There have been a few changes to DB users. There are now 4 types of DB users.
ADMIN - Used to administer ZDLRA configuration settings
MONITOR - Read-only account that can view metadata
REPLICATION - Used to replicate backups between ZDLRAs
VPC - Used by protected databases to send/receive backups
INSECURE=TRUE - The VPC user's password will expire, but it can be reset to the same password. This type of user does not support password rollover. Alternatively, you can set the profile of the VPC user to "NO_EXPIRE". This is less secure, but avoids having the password expire.
INSECURE=FALSE - The VPC user's password will expire, but this user can leverage password rollover (STIG) to manage password rotation without having backups fail.
NOTE: If you need to rotate passwords for VPC users, you should leverage the STIG/password rollover function, which allows you to have two passwords associated with the same VPC user while you update wallets.
Summary:
Set WALLET_ROOT in the database and store the SEPS wallet in the {WALLET_ROOT}/server_seps directory
Ensure the library location in the channel configuration defaults to the current $ORACLE_HOME
Set _RA_NO_SSL=TRUE to ensure that converting to TLS/SSL will not cause existing backups to fail
Set RA_FORMAT=TRUE to leverage space efficient encrypted backups --> Linux only
Enable RMAN encryption, and set algorithm to AES256 --> Linux only
Encrypt Real-time redo if applicable
If you require fully encrypted backups, set SECURE_MODE on the policy
Enable auto-tune of reserved space for all databases, especially those using a compliance window.
Use RACLI commands to manage the ZDLRA
Pair ZDLRAs when configuring replication
Leverage password rollover for VPC users if you require password rotation
One area that I have been spending a lot of time on is accessing data stored in object storage from within Oracle DB 23ai using external table definitions.
Parquet objects
I started by creating external tables on Parquet objects, and recently I have been testing accessing objects that are cataloged in Apache Iceberg.
Alexey Filanovskiy wrote a great blog post on Oracle tables and parquet objects, which you can find here.
Reading through that blog post gives you a good foundation for why parquet formatted objects make sense for external tables. Oracle DB is able to leverage the metadata in parquet objects to optimize queries.
Defining external tables on parquet objects
Below is an example table definition for an external table accessing parquet objects. In the below example, the parquet objects were stored on ZFSSA using the OCI API.
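A definition along these lines (the credential name, endpoint, namespace, and bucket are placeholders, and I have abbreviated the column list):
CREATE TABLE tripdata_parquet
(
  VendorID              NUMBER,
  tpep_pickup_datetime  TIMESTAMP,
  trip_distance         NUMBER,
  total_amount          NUMBER
)
ORGANIZATION EXTERNAL
(
  TYPE ORACLE_BIGDATA
  ACCESS PARAMETERS
  (
    com.oracle.bigdata.credential.name=ZFSSA_OCI_CRED
    com.oracle.bigdata.fileformat=parquet
  )
  LOCATION ('oraclebmc://<zfssa endpoint>/n/<namespace>/b/<bucket>/o/tripdata/*')
)
REJECT LIMIT UNLIMITED;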
What you will notice from this table definition is
It uses the "ORACLE_BIGDATA" type. This is necessary to create external tables on big data object types, which are currently [csv|textfile|avro|parquet|orc].
I am specifying a credential. I had already created a credential that contains the login credentials for the OCI API on my ZFSSA.
The object type for my objects is parquet.
The location of the object(s) is specified. In my case I am using an "*" as there are multiple objects within this path that make up the table.
The format of the URL in this case is "ORACLEBMC", which specifies OCI API object storage.
The URL format is not well documented; I also found that in some cases "s3a" can be used for non-AWS S3 object storage.
DBMS_CLOUD does not allow you to use URLs other than "https". You will most likely have to create tables using the above syntax rather than using the DBMS_CLOUD package.
Apache Iceberg
Once I was able to successfully access parquet objects directly, I began testing cataloging parquet objects with Apache Iceberg using Spark.
I found the most difficult part of this was properly creating an Apache Iceberg manifest file that could be used to build an external table definition against Iceberg.
Environment to build manifest
My testing environment to build the Apache Iceberg manifest contains the following.
Python
Python 3.9.18
Below is a list of the component versions installed:
PySpark version: 4.0.0
Java version: 21.0.7
Scala version: 2.13.16
Welcome to
____ __
/ __/__ ___ _____/ /__
_\ \/ _ \/ _ `/ __/ '_/
/___/ .__/\_,_/_/ /_/\_\ version 4.0.0
/_/
Using Scala version 2.13.16, OpenJDK 64-Bit Server VM, 21.0.7
Branch HEAD
Compiled by user wenchen on 2025-05-19T07:58:03Z
Revision fa33ea000a0bda9e5a3fa1af98e8e85b8cc5e4d4
Url https://github.com/apache/spark
Python script to create namespace and create table (Part #2)
spark.sql("""
CREATE NAMESPACE IF NOT EXISTS oci_db
""")
import pyarrow.parquet as pq
spark.sql("""
CREATE NAMESPACE IF NOT EXISTS oci.tripdata
""")
spark.sql("""
CREATE TABLE IF NOT EXISTS oci.tripdata.yellow (
`VendorID` INT,
`tpep_pickup_datetime` TIMESTAMP,
`tpep_dropoff_datetime` TIMESTAMP,
`passenger_count` BIGINT,
`trip_distance` DOUBLE,
`RatecodeID` BIGINT,
`store_and_fwd_flag` STRING,
`PULocationID` INT,
`DOLocationID` INT,
`payment_type` BIGINT,
`fare_amount` DOUBLE,
`extra` DOUBLE,
`mta_tax` DOUBLE,
`tip_amount` DOUBLE,
`tolls_amount` DOUBLE,
`improvement_surcharge` DOUBLE,
`total_amount` DOUBLE,
`congestion_surcharge` DOUBLE,
`Airport_fee` DOUBLE
)
USING iceberg
PARTITIONED BY (months(tpep_pickup_datetime))
""")
Python script to read parquet object/set types/append to iceberg table (Part #3)
sdf = spark.read.parquet("{parquet object}") # Location and name of parquet object to load
from pyspark.sql import functions as F
sdf_cast = (
sdf
.withColumn("VendorID", F.col("VendorID").cast("int"))
.withColumn("tpep_pickup_datetime", F.col("tpep_pickup_datetime").cast("timestamp"))
.withColumn("tpep_dropoff_datetime", F.col("tpep_dropoff_datetime").cast("timestamp"))
.withColumn("passenger_count", F.col("passenger_count").cast("bigint"))
.withColumn("trip_distance", F.col("trip_distance").cast("double"))
.withColumn("RatecodeID", F.col("RatecodeID").cast("bigint"))
.withColumn("store_and_fwd_flag", F.col("store_and_fwd_flag").cast("string"))
.withColumn("PULocationID", F.col("PULocationID").cast("int"))
.withColumn("DOLocationID", F.col("DOLocationID").cast("int"))
.withColumn("payment_type", F.col("payment_type").cast("bigint"))
.withColumn("fare_amount", F.col("fare_amount").cast("double"))
.withColumn("extra", F.col("extra").cast("double"))
.withColumn("mta_tax", F.col("mta_tax").cast("double"))
.withColumn("tip_amount", F.col("tip_amount").cast("double"))
.withColumn("tolls_amount", F.col("tolls_amount").cast("double"))
.withColumn("improvement_surcharge", F.col("improvement_surcharge").cast("double"))
.withColumn("total_amount", F.col("total_amount").cast("double"))
.withColumn("congestion_surcharge", F.col("congestion_surcharge").cast("double"))
.withColumn("Airport_fee", F.col("Airport_fee").cast("double"))
)
sdf_cast.writeTo("oci.tripdata.yellow").append()
Summary:
Combining the 3 parts of the script, filling in the credentials, endpoint, and bucket, and specifying the parquet object to load, will properly populate object storage, including the manifest files.
Investigating the resulting manifest and objects
Looking at the objects created in the metadata directory, you can see that I updated the data and added more data, and each time a new version of the manifest was created along with snapshot information.
When writing the data, I chose to partition it. Looking at the partitioned data, you can see that when Apache Iceberg wrote the data, it automatically created parquet objects in directories for each partition, adding new objects alongside the existing data.
Creating an Oracle table on top of Iceberg
Now that I have created my iceberg table, and it is stored in object storage, I can create an Oracle table that will read the manifest file.
In my example, the most recent manifest object is "iceberg_warehouse/tripdata/yellow/metadata/v7.metadata.json".
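A sketch of that definition, reusing the placeholder credential and endpoint from the parquet example. Note that LOCATION points at the manifest object rather than the parquet objects; the fileformat value for Iceberg is my assumption here, so verify the exact access parameter in the 23ai documentation:
-- NOTE: fileformat=iceberg is an assumption; check the 23ai docs for the exact parameter
CREATE TABLE tripdata_yellow_iceberg
(
  VendorID              NUMBER,
  tpep_pickup_datetime  TIMESTAMP,
  trip_distance         NUMBER,
  total_amount          NUMBER
)
ORGANIZATION EXTERNAL
(
  TYPE ORACLE_BIGDATA
  ACCESS PARAMETERS
  (
    com.oracle.bigdata.credential.name=ZFSSA_OCI_CRED
    com.oracle.bigdata.fileformat=iceberg
  )
  LOCATION ('oraclebmc://<zfssa endpoint>/n/<namespace>/b/<bucket>/o/iceberg_warehouse/tripdata/yellow/metadata/v7.metadata.json')
)
REJECT LIMIT UNLIMITED;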
I can now select from my new table and it will read the Iceberg manifest file.
Conclusion:
Oracle Database 23ai not only supports creating external tables on top of Parquet objects, but it also supports creating external tables on top of Apache Iceberg manifest objects.
When building a cyber vault, one of the most important items to manage is encryption keys. Encrypting your data is a fundamental pillar of ransomware protection, but encryption key management is often forgotten.
Ensuring your Cyber Vault has a current copy of your encryption keys associated with your Oracle Databases is critically important for a successful restore and recover after an attack.
Starting with the Oracle Key Vault (OKV) 21.11 release, Oracle Key Vault includes a preview of the Key Transfer Across Oracle Key Vault Clusters Integration Accelerator. You can use this feature to transfer security objects from one OKV cluster to another.
You can find more detail on this feature here, and I will describe its benefits in this post.
The diagram below shows a typical cyber vault architecture to protect Oracle Databases with the Zero Data Loss Recovery Appliance (ZDLRA) and OKV.
Transferring keys into a cyber vault with OKV
Encryption Key Architecture
Encryption key management is a critical piece of data protection, and it is important to properly manage your keys. Good cyber protection begins with proper encryption key management.
Local Wallets
With Oracle databases, the default (and simplest) location for encryption keys is a locally managed wallet file. The keys are often stored in an auto-login wallet, which is automatically opened by the database at startup, making key management transparent and simple, but not very secure.
Why aren't wallets secure?
They are often auto-login (cwallet.sso), which allows the database to open the wallet without requiring a password. This wallet file gives full access to the encryption keys.
The wallet file is stored with the database files. A privileged administrator, such as a DBA, has access to both the database and the keys, and can decrypt the data directly from the hosts. They also have the ability to delete the wallet file.
Often the wallet file is backed up with the database, including the auto-login wallet. This allows anyone who has access to the backups to also decrypt the data.
Securely backing up the wallet file separately from the database is often forgotten, especially when ASM is used as the wallet location. Not having the wallet file when restoring the database makes restoration and recovery impossible.
Steps you can take
Create both a passworded wallet (ewallet.p12) and a local auto-login wallet (cwallet.sso). With a local auto-login wallet, the wallet can only be opened on the host where it was created.
Back up only the passworded wallet (ewallet.p12). The auto-login wallet can always be recreated from the passworded wallet.
Properly store the password for your passworded encryption wallet. You will need the password to rotate your encryption keys and to recreate the auto-login wallet.
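A sketch of the first two steps, assuming WALLET_ROOT is already set and the wallet lives under {WALLET_ROOT}/tde (the path and password are placeholders):
-- Create the passworded wallet (ewallet.p12)
ADMINISTER KEY MANAGEMENT CREATE KEYSTORE IDENTIFIED BY "MyWalletPwd#1";
-- Create a LOCAL auto-login wallet (cwallet.sso) from it; LOCAL binds it to this host
ADMINISTER KEY MANAGEMENT CREATE LOCAL AUTO_LOGIN KEYSTORE
  FROM KEYSTORE '/u01/app/oracle/admin/ORCLCDB/wallet/tde'
  IDENTIFIED BY "MyWalletPwd#1";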
Oracle Key Vault (OKV)
The best way to securely manage encryption keys is with OKV.
Why ?
Keys are managed outside of the database and cannot be accessed locally on the database host.
Access to keys is granted to a specific database instance.
OKV is clustered for High Availability, and the OKV cluster can be securely backed up.
Key access is audited, allowing for early detection of unauthorized access.
Encryption Keys in a cyber vault architecture
Below is a diagram of a typical cyber vault architecture using ZDLRA. Because the backups are encrypted, either because the databases are using TDE and/or they are creating RMAN encrypted backups sent to the ZDLRA, the keys need to be available in the vault also.
Not only is the cyber vault a separate, independent database and backup architecture, the vault also contains a separate, independent OKV cluster.
This isolates the cyber vault from any attack to the primary datacenter, including any attack that could compromise encryption key availability.
Encryption Keys in an advanced cyber vault architecture
Below is a diagram of an advanced cyber vault architecture using ZDLRA. Not only are the backups replicated to a separate ZDLRA in the vault, they are internally replicated to an Isolated Recovery Environment (IRE). In this architecture, the recovery area is further isolated, and the OKV cluster is even further isolated from the primary datacenter. This provides the highest level of protection.
OKV Encryption Key Transfer
This blog post highlights the benefit of the newly released (21.11) OKV feature to allow for the secure transfer of encryption keys.
Periodic rotation of encryption keys is a required practice to protect encryption keys, and ensuring you have the current key available in a cyber vault is challenging.
OKV solves this challenge by providing the ability to transfer any new or changed keys between clusters.
Implementing OKV in a cyber Vault
When building a cyber vault, it is recommended to build an independent OKV cluster. The OKV cluster in the vault is isolated from the primary datacenter and protected by an airgap. The nodes in this cluster are not able to communicate with the OKV cluster outside of the vault.
The OKV cluster in the vault can be created using a full, secure, backup from the OKV cluster in the primary datacenter. The backup can be transferred into the vault, and then restored to the new, independent OKV cluster providing a current copy of the encryption keys.
Keeping OKV managed keys updated in a cyber Vault
The challenge, once you create an isolated OKV cluster, has been keeping the encryption keys within the cluster current as new keys are created. This was typically accomplished by transferring a full backup of OKV into the vault and rebuilding the cluster from this backup.
OKV 21.11 provides the solution with secure Encryption Key Transfer. Leveraging this feature, you can securely transfer just the keys that have recently changed, allowing you to manage independent OKV clusters that are synchronized on a regular basis.
The diagram below shows the flow of the secure Encryption Key Transfer package from the primary OKV cluster into the vault when the air-gap is opened.
This new OKV feature provides a much better way to securely manage encryption keys in a Cyber Vault.
Summary
As ransomware attacks increase, it is critical to protect a backup copy of your critical databases in a cyber vault. It is also critical to protect a copy of your encryption keys to ensure you can recover those databases. OKV provides the architecture for key management in both your primary datacenter and in a cyber vault. The new secure key transfer feature within OKV allows you to synchronize keys across independent OKV clusters.
Data Encryption Keys (DEK) are used by encryption algorithms (typically AES256) to protect data at rest in tablespaces, backups and even datapump exports. Since DEKs are stored with the data, they need to be protected. Master Encryption Keys (MEK) are stored outside the database in local wallets or in an external keystore, and MEKs are used to encrypt the DEK which provides a separation between the data that is encrypted and an external key.
DEKs are not only found in tablespaces (TDE), they are also found in RMAN encrypted backups, Space Efficient Encrypted Backups to ZDLRA/Recovery Service, and encrypted datapump exports.
The diagram below shows the relationship between MEKs and DEKs.
You can see in the diagram that MEKs are not only used to encrypt tablespace encryption keys to support TDE, they are also used to encrypt RMAN backups to disk, tape, or ZDLRA/Recovery Service, along with datapump encryption keys.
NOTE: When implementing encryption, you should make sure you meet the ASO licensing restrictions below.
Creating encrypted tablespaces or encrypted columns in tables requires ASO, unless you are leveraging an OCI Cloud Database environment (including multicloud).
Creating RMAN encrypted disk backups requires licensing.
Creating RMAN encrypted tape backups is license included and only supported when writing to Oracle products.
OSB - Oracle Secure Backup is license included for creating encrypted backups.
OCI Object Storage - The OCI cloud library requires encryption and can only be leveraged for writing backups directly to OCI object storage in OCI, or writing backups from a ZDLRA to OCI object storage in cloud or on ZFSSA
ZDLRA - Creating Space Efficient Encrypted backups is license included on ZDLRA and will create encrypted backups on ZDLRA.
MEKs are commonly called "TDE keys", but they are much more than that, and I will refer to them as MEKs throughout this post.
MEK with a Wallet File
In this first section I am going to talk about how the Oracle Database manages an MEK when using a Wallet file.
Wallet settings
With wallet-based MEKs, the database stores and reads keys from a wallet file on disk (or +ASM or Exascale).
There are 2 parameters in the database (spfile) that help guide the database to the wallet file.
WALLET_ROOT - This replaces the "ENCRYPTION_WALLET_LOCATION" parameter that was previously set in the sqlnet.ora file. Having it set at the database level allows multiple databases to independently share the same $ORACLE_HOME. It even allows PDBs to have individual wallet files. When using a wallet file, the WALLET_ROOT points to the top directory where the encryption keys will be kept.
TDE_CONFIGURATION - This tells the database what kind of MEK will be used. For wallets you would specify TDE_CONFIGURATION="KEYSTORE_CONFIGURATION=FILE". This tells the database to find the encryption keys in an encryption wallet.
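As a sketch (the directory is a placeholder; WALLET_ROOT is static, so it requires a restart):
SQL> alter system set wallet_root='/u01/app/oracle/admin/ORCLCDB/wallet' scope=spfile;
-- restart the database, then:
SQL> alter system set tde_configuration='KEYSTORE_CONFIGURATION=FILE' scope=both;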
Encryption key location
Within the WALLET_ROOT directory, the encryption wallet containing the MEKs must be stored in a subdirectory named "tde".
Within that directory you will find 2 wallet files
ewallet.p12 - The password-protected wallet (contains the keys; required for updates).
cwallet.sso - The auto-login wallet (allowing no password prompt on open). The auto-login wallet can also be created as a "local" auto-login wallet bound to a single host.
On startup (or when you open the keystore), the database retrieves the CDB/PDB MEKs from the wallet file and caches them in memory.
Database interaction with MEK
When using a wallet file, the auto-login wallet is opened when the database is opened, and the database constantly interacts with this file. If no auto-login wallet exists, you must open the wallet manually by providing the password.
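For example, opening the passworded keystore manually (the password is a placeholder):
SQL> ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY "MyWalletPwd#1";
-- In a multitenant database, open it in all PDBs as well
SQL> ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY "MyWalletPwd#1" CONTAINER = ALL;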
Viewing V$ENCRYPTION_WALLET
What you’ll typically see for FILE based wallets:
PDB_NAME       WALLET_TYPE STATUS WRL_PARAMETER                KEYSTORE
-------------- ----------- ------ ---------------------------- --------
CDB$ROOT       AUTOLOGIN   OPEN   {WALLET_ROOT}/tde            NONE
DB23AI_PDB     AUTOLOGIN   OPEN                                UNITED
MEK with OKV/OCI KMS
When using an external keystore, the database communicates with the OKV/OCI KMS client libraries installed on the host in order to retrieve the MEKs.
OKV/OCI KMS settings
There are 2 parameters in the database (spfile) that help guide the database to the OKV/OCI KMS installation.
WALLET_ROOT - When using OKV/OCI KMS, the WALLET_ROOT points to the top directory of the client install of the OKV/OCI KMS software.
TDE_CONFIGURATION - For OKV/OCI KMS, specify TDE_CONFIGURATION="KEYSTORE_CONFIGURATION=OKV" or TDE_CONFIGURATION="KEYSTORE_CONFIGURATION=HSM". It is possible to combine wallets and OKV/OCI KMS if there are older keys that are still stored in a wallet file. In that case you would see "KEYSTORE_CONFIGURATION=OKV|FILE" for example, and the database would read older encryption keys from a wallet file along with reading the keys from OKV/OCI KMS.
Encryption key location
Within the WALLET_ROOT directory, the software to communicate with the HSM is typically installed in the "okv" or "hsm" subdirectory. The keys themselves are stored externally, and may be securely cached locally depending on your configuration.
The credential used to access OKV/OCI KMS is typically stored under the /tde directory in an auto-login wallet. This allows the database to connect to OKV/OCI KMS at startup. The external password store entries (if used) live under /tde_seps, which allows us to manage keys without entering a password.
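For example, storing the keystore password in the auto-login wallet typically looks like this sketch (the password and path are placeholders; 'OKV_PASSWORD' is the documented client name for OKV, so verify it against your OKV client documentation):
SQL> ADMINISTER KEY MANAGEMENT ADD SECRET 'okv_endpoint_password'
  FOR CLIENT 'OKV_PASSWORD'
  TO LOCAL AUTO_LOGIN KEYSTORE '/u01/app/oracle/admin/ORCLCDB/wallet/tde';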
Database interaction with MEK
The MEK is always permanently stored externally. The database interacts with the external keystore through a library named liborapkcs.so. The library itself is vendor specific and is used by the database to interact with different vendors' keystores. Because there is a single library, you cannot use multiple external keystore types on the same DB host.
When the database starts up, it reads the password for the external keystore out of the wallet, opens a connection to the keystore (after an authentication and encryption handshake), and caches the MEK. The database periodically communicates with the external keystore through that library.
OKV/OCI KMS in Practice: What You'll See
When both FILE and OKV configurations exist (e.g., during migration), you may observe dual rows in V$ENCRYPTION_WALLET showing the FILE wallet (no active master key) and the active OKV keystore:
PDB_NAME TYPE STATUS WALLET_TYPE WRL_PARAMETER
-------------- ---- ----------------- ----------- ---------------------------
CDB$ROOT FILE OPEN_NO_MASTERKEY AUTOLOGIN {WALLET_ROOT}/tde
CDB$ROOT OKV OPEN OKV
DB23AI_PDB FILE OPEN_NO_MASTERKEY AUTOLOGIN
DB23AI_PDB OKV OPEN OKV
Best Practices & Notes
Back up only the passworded wallet (ewallet.p12) when using a wallet file, and store it securely, separate from database backups.
Use local autologin wallets if you require host binding for additional security.
When migrating to OKV/OCI KMS, consider a staged approach (OKV|FILE) so you can fall back if needed.
Always test wallet/keystore open/close operations and RMAN/Data Pump encryption workflows in lower environments before production.
Keep client libraries (liborapkcs.so and OKV client) patched and consistent across RAC nodes.