Cloud Protect is a new offering announced at AI World 2025. It allows you to leverage the Oracle Database Zero Data Loss Autonomous Recovery Service for your on-premises Linux x64 databases, and it even allows you to enable real-time redo.
I've been testing it out for the past week and I wanted to share everything I learned about using it.
First, there are a few things to understand about Cloud Protect Version 1.
You can only restore the database back to the original host/cluster you configured backups from. All nodes in a RAC cluster can be used for restoration.
Cloud Protect can only be used with non-ExaCC on-premises databases.
You cannot use a dual point-in-time backup strategy. Cloud Protect must be your primary backup location.
I have seen a lot of changes with the ZDLRA over the last 10 years or so, and as the product has matured, the best practices have changed along with it. Because of that, I wanted to document the current (2025) recommendations: what existing customers should think about doing now, and what new customers should plan for from the start.
1. Channel Configuration
There are a few items to talk about with channel configuration. First, let's break down the pieces of the channel configuration.
a) Library
Version
Up until version 19.26, you could use a shared version of the library (libra.so) based on the OS. Updates to the library could be downloaded directly from MOS.
Starting with version 19.27 on Linux, you must use the version that is stored in $ORACLE_HOME and patched using OPatch. More information can be found in the MOS note below.
Location (channel setting)
In the previous section I mentioned that, starting with version 19.27, the library is tied to the version of Oracle. Prior to this release, the default channel setting for the library with the ZDLRA was
"PARMS 'SBT_LIBRARY=$ORACLE_HOME/lib/libra.so"
In order to always utilize the current library for the $ORACLE_HOME, the recommendation is to set the channel setting to
"SBT_LIBRARY=libra.so" - No path, and it will default to the current $ORACLE_HOME/lib"
Or
"SBT_LIBRARY=oracle.zldlra" - This will default to the current $ORACLE_HOME/lib/libra.so
b) Environment
In the environment section, you specify the SEPS wallet location. Because there could be conflicts with other Oracle products that use a wallet (like OUD), the default location for the SEPS wallet should be set to the WALLET_ROOT location, rather than within the $ORACLE_HOME location.
In the past, the channel setting for ZDLRA was
"RA_WALLET=location=file:${ORACLE_HOME}/dbs/ra_wallet credential_alias={SEPS entry in wallet}"
The recommendation is to set the WALLET_ROOT in the spfile, and store the SEPS wallet within the server_seps directory under this location. If WALLET_ROOT is set to $ORACLE_BASE/admin/{$DB_NAME}/wallet, the setting would be
"RA_WALLET=location=file:${ORACLE_BASE}/admin/${DB_NAME}/wallet/server_seps/ credential_alias={SEPS entry in wallet}"
c) TLS or non-TLS
If a customer has the ZDLRA configured to encrypt backups with TLS/SSL, the library, by default, will attempt to encrypt the backup using SSL/HTTPS. If TLS/SSL is configured as optional, and the client does not have the certificate information, backups will fail. To avoid this, it is recommended to add the following setting to the channel configuration. This setting will allow you to send backups even if optional TLS/SSL is configured.
_RA_NO_SSL=TRUE
d) Space Efficient Encrypted Backups
In order to use Space Efficient Encrypted Backups, you must set the following in your channel configuration. This is only available on Linux with DB version 19.18+. Keep in mind this setting will only compress datafile backup pieces.
RA_FORMAT=TRUE
Example:
RMAN> CONFIGURE CHANNEL DEVICE TYPE 'SBT_TAPE' PARMS 'SBT_LIBRARY=oracle.zdlra,ENV=(_RA_NO_SSL=TRUE,RA_WALLET=location=file:/u01/app/oracle/admin/orcl/wallet/server_seps credential_alias="tcps://pmdra1-scan1.us.oracle.com:2484/ra1db", RA_FORMAT=true)';
oracle.zdlra - Use the libra.so from the $ORACLE_HOME for the database
_RA_NO_SSL=TRUE - Do not send backups encrypted if TLS is optionally enabled.
/u01/app/oracle/admin/orcl/wallet/server_seps - Wallet location from WALLET_ROOT
RA_FORMAT=true - Use space efficient encrypted backups
2. RMAN encryption
One misconception with Space Efficient Encrypted Backups is that by setting RA_FORMAT=TRUE you are both compressing and encrypting backups. This is not true. You are only compressing the backups. If the tablespace is encrypted with TDE, the backup will remain encrypted, but any non-TDE tablespace backups will be unencrypted. In order to encrypt the backup pieces (datafiles, controlfile, archive logs and spfile) you need to both set an encryption key (if not already set) and turn RMAN encryption on. If you are backing up any tablespaces that are not TDE, you must also ensure you set RA_FORMAT=TRUE.
You should also set the default encryption algorithm to AES256, as most customers use this today rather than the default of AES128.
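A quick sketch of those settings in RMAN (this assumes an encryption key is already set in the keystore):
RMAN> CONFIGURE ENCRYPTION FOR DATABASE ON;
RMAN> CONFIGURE ENCRYPTION ALGORITHM 'AES256';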
NOTE: Use of RA_FORMAT=TRUE (RMAN Compression) and RMAN encryption is included in the license for ZDLRA and you do not need the ACO or ASO license for backups to the ZDLRA.
3. Real-time redo settings
a) Default SEPS wallet location
Wallets are becoming more and more common with databases, and this increases the chances of a conflict with other Oracle features that also utilize a wallet. To avoid this, there is a new default location for the SEPS wallet used by real-time redo. The database will first look in the {WALLET_ROOT}/server_seps location for the wallet. It is recommended to set WALLET_ROOT for all databases and use this default location for the SEPS wallet. Of course, you may use links to simplify management.
b) Encrypt Redo
If you have an encryption key set, and you wish to fully encrypt your backups, you also need to ensure that real-time redo is also encrypted. You need to set encryption on the destination for real-time redo:
ENCRYPTION=ON
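As an illustration, ENCRYPTION=ON is added to the existing archive destination for the ZDLRA (the destination number and the values in braces are placeholders; keep whatever other attributes your destination already uses):
SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_3='SERVICE={ZDLRA scan}:{port}/{service} ASYNC DB_UNIQUE_NAME={ZDLRA db_unique_name} VALID_FOR=(ALL_LOGFILES,ALL_ROLES) ENCRYPTION=ON';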
NOTE: I have a previous blog post on wallets you can find here. I have worked with a few customers that have run into issues configuring real-time redo because they have other products that are affected by WALLET_OVERRIDE=TRUE. What I tell customers is that if they want to continue to use the SEPS wallet for RMAN scripts (i.e. rman target / catalog /@{wallet entry}), then they should create a separate/custom sqlnet.ora file in a different location, and set TNS_ADMIN in their RMAN backup script.
4. Reserve Space
Reserve space has been difficult to explain, and it has become even more important when implementing a compliance window. If no compliance window is set, the reserve space is only interrogated when the ZDLRA runs out of space. When a compliance window is set, the reserve space is interrogated at backup time to ensure that compliance-locked backups fit within the reserved space. If they do not, backups are rejected. Because of this, I always recommend setting "auto-tune reserve space" for the policies, especially when setting a compliance window. This setting will automatically adjust the reserve space for you as databases grow.
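As a sketch using the PL/SQL interface (the policy name is made up, and autotune_reserved_space is my reading of the DBMS_RA parameter name; check the documentation for your ZDLRA version):
SQL> BEGIN
       DBMS_RA.UPDATE_PROTECTION_POLICY(
         protection_policy_name  => 'GOLD',
         autotune_reserved_space => 'YES');
     END;
     /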
5. When encrypting backups set "secure mode"
If you need to fully encrypt your backups with RMAN encryption, it is recommended to use "secure mode" on your protection policy. This setting checks all backup pieces (including real-time redo) to ensure that they are RMAN encrypted. Any unencrypted backup pieces will be rejected.
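Again, as a hedged sketch (secure_mode as a protection policy attribute is my assumption of the parameter name; verify against your version's DBMS_RA or racli documentation):
SQL> BEGIN
       DBMS_RA.UPDATE_PROTECTION_POLICY(
         protection_policy_name => 'GOLD',
         secure_mode            => 'YES');
     END;
     /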
6. Use racli commands
One of the biggest changes is the increase in the functionality available with racli commands vs. executing DBMS_RA PL/SQL packages. It is recommended to utilize racli commands when possible, although I know that when trying to automate onboarding databases, it is often easier to utilize the PL/SQL packages.
7. Pair ZDLRAs when using replication
Another change is the ability to pair ZDLRAs that have replication configured. This can be done with the "racli add ra_partner" command. Using the ZDLRA pairing commands for replication greatly simplifies the configuration of replication. Existing configurations can be converted to the pairing model.
8. Review types of DB users
There have been a few changes to DB users. There are now 4 types of DB users.
ADMIN - Used to administer ZDLRA configuration settings
MONITOR - Read-only account that can view metadata
REPLICATION - Used to replicate backups between ZDLRAs
VPC - Used by protected databases to send/receive backups
insecure=true - The VPC user's password will expire, but it can be reset to the same password. This type of user does not support password rollover. Alternately, you can set the profile of the VPC user to "NO_EXPIRE". This is less secure, but avoids having the password expire.
insecure=false - The VPC user's password will expire, but this user can leverage password rollover (STIG) to manage password rotation without having backups fail.
NOTE: If you need to rotate passwords for VPC users, you should leverage the STIG/password rollover function, which allows you to have two passwords associated with the same VPC user while you update wallets.
Summary:
Set WALLET_ROOT in the database and store the SEPS wallet in the {WALLET_ROOT}/server_seps directory
Ensure the library location in the channel configuration defaults to the current $ORACLE_HOME
Set _RA_NO_SSL=TRUE to ensure that converting to TLS/SSL will not cause existing backups to fail
Set RA_FORMAT=TRUE to leverage space efficient encrypted backups --> Linux only
Enable RMAN encryption, and set algorithm to AES256 --> Linux only
Encrypt Real-time redo if applicable
If you require fully encrypted backups, set SECURE_MODE on the policy
Enable auto-tune of reserved space for all databases, especially those using a compliance window.
Use RACLI commands to manage the ZDLRA
Pair ZDLRAs when configuring replication
Leverage password rollover for VPC users if you require password rotation
One area that I have been spending a lot of time on is accessing data stored in object storage from within Oracle DB 23ai using external table definitions.
Parquet objects
I started by creating external tables on Parquet objects, and recently I have been testing accessing objects that are cataloged in Apache Iceberg.
Alexey Filanovskiy wrote a great blog post on Oracle Tables and parquet objects you can find here.
Reading through that blog post gives you a good foundation for why parquet formatted objects make sense for external tables. Oracle DB is able to leverage the metadata in parquet objects to optimize queries.
Defining external tables on parquet objects
Below is an example table definition for an external table accessing parquet objects. In the below example, the parquet objects were stored on ZFSSA using the OCI API.
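A definition along these lines (the table and credential names, host, namespace, and bucket are placeholders, and only a few of the parquet columns are shown):
CREATE TABLE tripdata_parquet (
  "VendorID"             NUMBER,
  "tpep_pickup_datetime" TIMESTAMP,
  "trip_distance"        NUMBER,
  "total_amount"         NUMBER
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_BIGDATA
  ACCESS PARAMETERS (
    com.oracle.bigdata.credential.name=ZFSSA_CRED
    com.oracle.bigdata.fileformat=parquet
  )
  LOCATION ('oraclebmc://{zfssa host}/n/{namespace}/b/{bucket}/o/tripdata/yellow/*')
)
REJECT LIMIT UNLIMITED
PARALLEL;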
What you will notice from this table definition is:
It uses the "ORACLE_BIGDATA" type. This is necessary to create external tables on big data object types, which currently are [csv|textfile|avro|parquet|orc].
I am specifying a credential. I had already created a credential that contains the login credentials for the OCI API on my ZFSSA.
The object type for my objects is parquet.
The location of the object(s) is specified. In my case I am using an "*" as there are multiple objects within this path that make up the table.
The format of the URL in this case is "ORACLEBMC". The options are:
ORACLEBMC - Specifies that this is OCI API object storage.
The format of the URL is not well documented; I also found that in some cases "s3a" can be used for non-AWS S3 object storage.
DBMS_CLOUD does not allow you to use URLs other than "https", so you will most likely have to create tables using the above syntax rather than using the DBMS_CLOUD package.
Apache Iceberg
Once I was able to successfully access parquet objects directly, I began testing cataloging parquet objects with Apache Iceberg using Spark.
I found the most difficult part of this was properly creating an Apache Iceberg manifest file that could be used to build an external table definition against Iceberg.
Environment to build manifest
My testing environment to build the Apache Iceberg manifest contains the following.
Python 3.9.18
PySpark 4.0.0 (Spark 4.0.0)
Java: OpenJDK 64-Bit Server VM 21.0.7
Scala 2.13.16
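Python script to create the Spark session and Iceberg catalog (Part #1)
This part is environment-specific, so below is only a sketch of its shape: it builds a Spark session with an Iceberg catalog named "oci" backed by S3-compatible object storage. The endpoint, keys, and bucket in braces are placeholders you must fill in, and the iceberg-spark-runtime and hadoop-aws packages must be on the classpath.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("iceberg-oci")
    # Enable the Iceberg SQL extensions and define the "oci" catalog
    .config("spark.sql.extensions",
            "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions")
    .config("spark.sql.catalog.oci", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.oci.type", "hadoop")
    .config("spark.sql.catalog.oci.warehouse", "s3a://{bucket}/iceberg_warehouse")
    # S3-compatible endpoint and credentials for the object storage (placeholders)
    .config("spark.hadoop.fs.s3a.endpoint", "{object storage endpoint}")
    .config("spark.hadoop.fs.s3a.access.key", "{access key}")
    .config("spark.hadoop.fs.s3a.secret.key", "{secret key}")
    .config("spark.hadoop.fs.s3a.path.style.access", "true")
    .getOrCreate()
)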
Python script to create namespace and create table (Part #2)
spark.sql("""
CREATE NAMESPACE IF NOT EXISTS oci_db
""")
import pyarrow.parquet as pq
spark.sql("""
CREATE NAMESPACE IF NOT EXISTS oci.tripdata
""")
spark.sql("""
CREATE TABLE IF NOT EXISTS oci.tripdata.yellow (
`VendorID` INT,
`tpep_pickup_datetime` TIMESTAMP,
`tpep_dropoff_datetime` TIMESTAMP,
`passenger_count` BIGINT,
`trip_distance` DOUBLE,
`RatecodeID` BIGINT,
`store_and_fwd_flag` STRING,
`PULocationID` INT,
`DOLocationID` INT,
`payment_type` BIGINT,
`fare_amount` DOUBLE,
`extra` DOUBLE,
`mta_tax` DOUBLE,
`tip_amount` DOUBLE,
`tolls_amount` DOUBLE,
`improvement_surcharge` DOUBLE,
`total_amount` DOUBLE,
`congestion_surcharge` DOUBLE,
`Airport_fee` DOUBLE
)
USING iceberg
PARTITIONED BY (months(tpep_pickup_datetime))
""")
Python script to read parquet object/set types/append to iceberg table (Part #3)
sdf = spark.read.parquet("{parquet object}") # Location and name of parquet object to load
from pyspark.sql import functions as F
sdf_cast = (
sdf
.withColumn("VendorID", F.col("VendorID").cast("int"))
.withColumn("tpep_pickup_datetime", F.col("tpep_pickup_datetime").cast("timestamp"))
.withColumn("tpep_dropoff_datetime", F.col("tpep_dropoff_datetime").cast("timestamp"))
.withColumn("passenger_count", F.col("passenger_count").cast("bigint"))
.withColumn("trip_distance", F.col("trip_distance").cast("double"))
.withColumn("RatecodeID", F.col("RatecodeID").cast("bigint"))
.withColumn("store_and_fwd_flag", F.col("store_and_fwd_flag").cast("string"))
.withColumn("PULocationID", F.col("PULocationID").cast("int"))
.withColumn("DOLocationID", F.col("DOLocationID").cast("int"))
.withColumn("payment_type", F.col("payment_type").cast("bigint"))
.withColumn("fare_amount", F.col("fare_amount").cast("double"))
.withColumn("extra", F.col("extra").cast("double"))
.withColumn("mta_tax", F.col("mta_tax").cast("double"))
.withColumn("tip_amount", F.col("tip_amount").cast("double"))
.withColumn("tolls_amount", F.col("tolls_amount").cast("double"))
.withColumn("improvement_surcharge", F.col("improvement_surcharge").cast("double"))
.withColumn("total_amount", F.col("total_amount").cast("double"))
.withColumn("congestion_surcharge", F.col("congestion_surcharge").cast("double"))
.withColumn("Airport_fee", F.col("Airport_fee").cast("double"))
)
sdf_cast.writeTo("oci.tripdata.yellow").append()
Summary:
Combining the three parts of the script, filling in the credentials, endpoint, and bucket, and specifying the parquet object to load will populate object storage properly, including the manifest file.
Investigating the resulting manifest and objects
Below is a list of the objects created in the metadata directory.
You can see that I updated the data and added more data, and each time, it created a new version of the manifest along with snapshot information.
When writing the data, I chose to partition it; below is the partitioned data. You can see that when Apache Iceberg wrote the data, it automatically created parquet objects in directories for each partition, and added new objects to the existing directories as more data was appended.
Creating an Oracle table on top of Iceberg
Now that I have created my Iceberg table and it is stored in object storage, I can create an Oracle table that will read the manifest file.
In my example, the most recent manifest object is "iceberg_warehouse/tripdata/yellow/metadata/v7.metadata.json".
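The definition follows the same ORACLE_BIGDATA pattern as the parquet table, except the location points at that metadata object. A sketch (the fileformat value of iceberg, the credential name, and the URL are assumptions based on my earlier examples, and only a few columns are shown):
CREATE TABLE tripdata_iceberg (
  "VendorID"             NUMBER,
  "tpep_pickup_datetime" TIMESTAMP,
  "trip_distance"        NUMBER,
  "total_amount"         NUMBER
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_BIGDATA
  ACCESS PARAMETERS (
    com.oracle.bigdata.credential.name=ZFSSA_CRED
    com.oracle.bigdata.fileformat=iceberg
  )
  LOCATION ('oraclebmc://{zfssa host}/n/{namespace}/b/{bucket}/o/iceberg_warehouse/tripdata/yellow/metadata/v7.metadata.json')
)
REJECT LIMIT UNLIMITED
PARALLEL;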
I can now select from my new table and it will read the Iceberg manifest file.
Conclusion:
Oracle Database 23ai not only supports creating external tables on top of Parquet objects, but it also supports creating external tables on top of Apache Iceberg manifest objects.