Friday, April 10, 2026

Automating cloning of your Exadata Database Service on Dedicated Infrastructure database

One of the questions I often get from customers is 

"How do I automate the cloning of a production database backup to a non-prod copy?  This is something we do often."

There are three different OCI commands that appear to do the exact same thing. Specifically, when it comes to restoring a database from a backup, the OCI CLI gives us three primary paths.

The "secret sauce" to choosing the right command is understanding where the database is going and ensuring you have the right OCIDs for your target infrastructure. Let's break down the full parameter sets you need to keep your automation from failing.

These are all "oci db database" subcommands.


The Restore Matrix: Choosing Your Command

Command                        Infrastructure Target   Required Target ID   Primary Use Case
create-database-from-backup    Exadata / C@C           --db-home-id         Restoring into an existing Exadata Home.
create-from-backup             Base DB (VM/BM)         --db-system-id       Adding a DB to an existing DB System.
create --source DB_BACKUP      Base DB (VM/BM)         --compartment-id     Building a NEW DB System from a backup.

1. The Exadata Full Set: create-database-from-backup

This command uses a JSON object for the --database flag. This is where you define the identity of the clone within the Exadata rack along with the backup you want to use to create the new database.

{
  "adminPassword": "YourPassword123#",
  "backupId": "ocid1.dbbackup.oc1...",
  "backupTDEPassword": "SourceWalletPassword",
  "dbName": "EXACLON",
  "dbUniqueName": "EXACLON_PRD",
  "sidPrefix": "EXACL",
  "pluggableDatabases": ["PDB1", "PDB2"],
  "dbHomeId": "ocid1.dbhome.oc1...",
  "storageSizeDetails": {
    "dataStorageSizeInGBs": 256,
    "recoStorageSizeInGBs": 512
  },
  "sourceEncryptionKeyLocationDetails": {
    "providerType": "AWS|AZURE|GCP|EXTERNAL",
    "awsEncryptionKeyId": "string",
    "hsmPassword": "string"
  }
}
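Putting that JSON to work, a minimal driver script might look like the sketch below. The OCIDs and passwords are placeholders, the payload is trimmed to the identity fields, and the final command is echoed (a dry run) so you can review it before removing the echo.

```shell
#!/usr/bin/env bash
# Sketch: restore a backup into an existing Exadata DB Home.
# All OCIDs and passwords are placeholders -- substitute your own values.
set -euo pipefail

cat > clone.json <<'EOF'
{
  "adminPassword": "YourPassword123#",
  "backupId": "ocid1.dbbackup.oc1..example",
  "backupTDEPassword": "SourceWalletPassword",
  "dbName": "EXACLON",
  "dbUniqueName": "EXACLON_PRD",
  "sidPrefix": "EXACL"
}
EOF

# Dry run: remove the leading "echo" to actually submit the request.
echo "oci db database create-database-from-backup --db-home-id ocid1.dbhome.oc1..example --database file://clone.json" | tee request.txt
```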

2. The VM/BM In-Place Clone: create-from-backup


This is for standard Virtual Machine (and Bare Metal) shapes. You must provide the dbSystemId (the OCID of the target DB System) to tell OCI exactly where to deploy the restored data.

{
  "adminPassword": "NewAdminPassword123#",
  "backupId": "ocid1.dbbackup.oc1...",
  "backupTdePassword": "SourceWalletPassword",
  "dbSystemId": "ocid1.dbsystem.oc1.iad.example_vm_ocid",
  "dbName": "VMCLON",
  "dbUniqueName": "VMCLON_DEV",
  "sidPrefix": "VMCL",
  "kmsKeyId": "ocid1.key.oc1...",
  "dataStorageSizeInGbs": 256,
  "recoStorageSizeInGbs": 512,
  "databaseSoftwareImageId": "ocid1.dbsoftwareimage.oc1...",
  "isUnifiedAuditingEnabled": true,
  "waitForState": ["AVAILABLE"],
  "maxWaitSeconds": 3600
}
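To make automation block until the clone is usable, the OCI CLI's generic wait flags can be attached to the call. This is a dry-run sketch: the OCIDs are placeholders, and the flag names should be verified against `oci db database create-from-backup --help`.

```shell
#!/usr/bin/env bash
# Dry run: the command is echoed for review; remove "echo" to execute.
# OCIDs and passwords below are placeholders.
set -euo pipefail

echo "oci db database create-from-backup \
  --db-system-id ocid1.dbsystem.oc1..example \
  --backup-id ocid1.dbbackup.oc1..example \
  --admin-password NewAdminPassword123# \
  --backup-tde-password SourceWalletPassword \
  --db-name VMCLON \
  --wait-for-state AVAILABLE \
  --max-wait-seconds 3600" | tee vm_clone_cmd.txt
```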

3. Provisioning New Infra: create with --source

This is the "All-In-One" command. It creates the VM Cluster or DB System infrastructure from scratch. Because of this, it requires networking IDs (VCN/Subnet) and hardware shapes.

{
  "source": "DB_BACKUP",
  "backupId": "ocid1.dbbackup.oc1...",
  "tdeWalletPassword": "SourceWalletPassword",
  "compartmentId": "ocid1.compartment.oc1...",
  "subnetId": "ocid1.subnet.oc1...",
  "vmClusterId": "ocid1.vmcluster.oc1...",
  "dbSystemId": "ocid1.dbsystem.oc1...",
  "dbHomeId": "ocid1.dbhome.oc1...",
  "dbName": "NEWDB",
  "dbUniqueName": "NEWDB_U",
  "shape": "VM.Standard.E4.Flex",
  "vaultId": "ocid1.vault.oc1...",
  "kmsKeyId": "ocid1.key.oc1...",
  "dbWorkload": "OLTP",
  "autoBackupEnabled": true,
  "waitForState": ["AVAILABLE"]
}

Key Identification Checklist:
  • Exadata: You must have the --db-home-id of an existing home on the rack.
  • VM In-Place: You need the --db-system-id of the running VM instance.
  • Identity: Every command requires a dbName (8 chars max) and dbUniqueName. For automation, use the sidPrefix to prevent instance ID collisions.
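A small pre-flight check in your pipeline can catch naming mistakes before the CLI rejects them. This is a sketch, not an official tool; it enforces the limit called out above (dbName at most 8 characters).

```shell
#!/usr/bin/env bash
# Pre-flight naming check for clone automation (a sketch, not an official tool).
# Enforces the rule above: dbName must be 8 characters or fewer.
check_db_name() {
  local name="$1"
  if [ "${#name}" -gt 8 ]; then
    echo "FAIL: dbName '$name' is ${#name} chars (max 8)"
    return 1
  fi
  echo "OK: dbName '$name'"
}

check_db_name "EXACLON"              # 7 chars -> prints OK
check_db_name "TOOLONGNAME" || true  # 11 chars -> prints the FAIL message
```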



Mastering TDE & Key Management

One of the biggest hurdles in database cloning is handling the Transparent Data Encryption (TDE) layer. If your source backup was encrypted using a key from a different cloud provider or a local HSM, you must tell OCI how to decrypt it during the restore process.

1. Cross-Cloud & External Key Providers

When using the create-database-from-backup command (Exadata), you use the sourceEncryptionKeyLocationDetails parameter. This is a JSON object where you must specify the providerType and the corresponding Key OCID or ID from the source provider.

Provider Type   Required Parameter                     Description
AWS             awsEncryptionKeyId                     The ARN of the AWS KMS key used on the source.
AZURE           azureEncryptionKeyId                   The Azure Key Vault key URI.
GCP             googleCloudProviderEncryptionKeyId     The fully qualified resource name of the GCP KMS key.
EXTERNAL        hsmPassword                            Used for backups protected by an on-premises Hardware Security Module.
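For example, restoring a backup whose TDE master key lives in AWS KMS would include a fragment like this inside the --database JSON (the ARN shown is a placeholder):

```json
"sourceEncryptionKeyLocationDetails": {
  "providerType": "AWS",
  "awsEncryptionKeyId": "arn:aws:kms:us-east-1:111122223333:key/example-key-id"
}
```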

2. Native OCI Vault Integration

For native OCI restores, you have two choices: use the standard Oracle-managed keys (default) or use your own keys via OCI Vault (KMS). If you want to use your own keys, you must provide the kmsKeyId and, in some cases, the vaultId.

  • kmsKeyId: The OCID of the Master Encryption Key in the OCI Vault.
  • kmsKeyVersionId: (Optional) Use this if you need to pin the restore to a specific version of your key.
  • vaultId: Required by the create command to identify which Vault the key resides in.
Important Security Note: If you are restoring a database into a different compartment or tenancy than the source, your Dynamic Group for the target DB System must have READ and USE permissions for the Vault and Key. Without these IAM policies, the restore will fail immediately with a "Not Authorized" error.
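As a sketch, a minimal pair of policy statements might look like the following; the dynamic-group and compartment names here are hypothetical and should be replaced with your own:

```
Allow dynamic-group clone-target-dg to read vaults in compartment nonprod
Allow dynamic-group clone-target-dg to use keys in compartment nonprod
```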

By correctly mapping these key parameters, you ensure that your data remains encrypted and compliant throughout its entire lifecycle, even as it moves across cloud boundaries.


Automating with Infrastructure as Code (Terraform)

While the CLI is great for one-off tasks, most of my customers eventually want to bake these clones into their CI/CD pipelines. In Terraform, we use the oci_database_database resource. The "magic" happens in the source attribute and the database_details block.

resource "oci_database_database" "cloned_db" {
    # This maps to the --source flag in the CLI
    source = "DB_BACKUP"

    database {
        admin_password      = var.database_admin_password
        db_name             = "CLONEDB"
        db_unique_name      = "CLONEDB_IAD"
        character_set       = "AL32UTF8"
        ncharacter_set      = "AL16UTF16"
        db_workload         = "OLTP"
        
        # TDE Management
        tde_wallet_password = var.source_tde_password
        kms_key_id          = var.target_vault_key_ocid
    }

    # Target Infrastructure IDs
    db_home_id   = var.target_db_home_ocid
    database_id  = var.source_database_backup_ocid

    # Best Practice: Ignore password changes after initial provision
    lifecycle {
        ignore_changes = [database[0].admin_password]
    }
}

Terraform Pro-Tip: Always use the ignore_changes lifecycle hook for the admin_password. Once the database is restored, security policies often require a password rotation. Without this hook, Terraform will try to revert the password to the plain-text value in your .tfvars every time you run an update!

Thursday, April 2, 2026

Autonomous Recovery Service Live Lab available

One of the latest additions to Oracle's Live Labs is a workshop on the Autonomous Recovery Service. This lab allows you to understand how to utilize the Autonomous Recovery Service for backing up your Oracle database in the cloud, even in a multicloud environment.

If you haven't used it, Live Labs is Oracle's free, hands-on platform that allows you to go through a workshop or lab to learn more about Oracle's products.

Start Here <-------- Link to this lab


The nice thing about this lab is that you can utilize Oracle's sandbox to learn about the Autonomous Recovery Service (RCV) without the requirement of accessing your OCI/Multicloud tenancy.

Also, the features that are demonstrated in this lab are the same regardless of whether you use the Autonomous Recovery Service in OCI or in a multicloud environment.

NOTE:

Keep in mind that it does take time to configure your lab environment, since the provisioning process performs an initial backup. In my case it took about 60 minutes. You can follow the progress on the "My Reservations" page; once provisioning completes, the environment is immediately available.

Setup:

Once your tenancy is configured for the lab, you need to log in using the supplied credentials and change the initial password. Be sure to follow the directions and screenshots in the setup portion of the lab before beginning.

Also be sure to note the region and compartment that you will be using, and after logging into the tenancy, switch to the correct region.

Lab 1: Onboarding a database

Once you log into the tenancy and region, you can go through the steps to configure a database to use the Recovery Service.

NOTE: The lab uses the "Base DB service" for the demo but the steps would be the same regardless of the Oracle Database server utilized or the location (OCI, AWS, GCP, Azure, etc.).

In this section you will 

Create a protection policy - There are default protection policies you can use, but most customers choose to create their own for the following reasons.

  • You can choose the exact retention period between 14 and 95 days. Since the service is incremental forever, backup usage is not dependent on a weekly full backup.
  • You can choose the backup location if using multicloud. The default is OCI, and you need to create a protection policy if you want to change the location from the default.
  • You can configure a retention lock. Setting a retention lock is only available when creating your own protection policy.
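For automation, the same kind of policy can be created from the CLI. This is a dry-run sketch: the command is echoed rather than executed, the compartment OCID is a placeholder, and the flag names should be checked against `oci recovery protection-policy create --help`.

```shell
#!/usr/bin/env bash
# Dry run: remove the leading "echo" to actually create the policy.
set -euo pipefail

echo "oci recovery protection-policy create \
  --compartment-id ocid1.compartment.oc1..example \
  --display-name custom-14day-policy \
  --backup-retention-period-in-days 14" | tee policy_cmd.txt
```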

Configure backups for the existing database - In this section you will view the backup configuration for the database. When the lab environment was provisioned, backups were configured; in this step you will change the protection policy and enable real-time data protection.
Once the configuration changes are saved, you will monitor the update progress.
Lastly, you will view the backup information for this database.

Lab 2: Perform point-in-time restores

The next section of the lab will walk through a point-in-time restore.
You will be guided through connecting to the database directly through "Cloud Shell", and in Cloud Shell you will:
  • Create a new table and insert data into it.
  • Determine the current SCN at this point (with the new table).
  • Delete the table.
  • Abort the database (demonstrating real-time data protection).
  • Delete the database files.
  • Restore the database to the SCN captured in the second step.
This does take a while, and you are encouraged to continue to Lab 3 while this occurs.

Lab 3: Create an on-demand backup

This lab walks you through the process of creating an on-demand backup.
On-demand backups can be either:
  1. Kept for the current retention period. This is useful when upgrading, or rolling out a new release, and you want to create a known restore point. This type of backup is stored in the Recovery Service and will age out with the retention period.
  2. Kept for a long-term retention period. This type of backup goes to Oracle-managed infrequent-access object storage, and you specify how long the backups are kept.

Lab 4: Monitor & Create Alarms

This section of the lab walks you through two additional features that are available with the Autonomous Recovery Service.

Observability - In this section of the lab you explore the metrics that are available to view. The lab demonstrates viewing the data loss exposure in either a chart or a table.

Alarms - This section shows you how to create an alarm that will send out an alert on data loss exposure (for example).


Summary:

This lab is a great way to learn more about the Autonomous Recovery Service by going through the features in Oracle's tenancy.








Tuesday, March 24, 2026

MCP Server for Autonomous Recovery Service

Wouldn't it be nice if I could just use AI and ask my tenancy to tell me about my Autonomous Recovery Service database backups? I want to simply ask questions like ....

Well you can, and the screenshot below is just part of the answer I got!



Thanks to Harini Gavisiddappa, who created an MCP server within the Oracle MCP server GitHub project, you can do just that.
You can find the project at https://github.com/oracle/mcp

Installing


Here is the streamlined path to getting this running on your local Windows environment.
I chose my Windows environment because I love using the Claude chat tool, so I can ask questions in a UI and get nicely formatted reports/output back.


Phase 1: Repository & Environment Setup

A) Download repository onto my PC


We start by pulling the source from the [Oracle MCP repository](https://github.com/oracle/mcp/tree/main/src/oci-recovery-mcp-server).




B) Initialize a scoped Python 3.13 environment

 To keep the footprint small and the execution fast, we’ll use `uv` for our Python toolchain.


C) Install the Recovery Server in development mode


Phase 2: Identity & Access Management

Before the server can bridge to your tenancy, you need an active OCI session. This ensures the MCP server inherits your existing IAM permissions.


Phase 3: Client Integration

Phase 4: Validation

Restart your client. You should now see the oracle-recovery toolset available. You can verify the connection immediately with a simple natural language query:

Now that it is installed, let's see what it can do

**************************************************

Question 1


List the databases that are actively using the recovery service in my tenancy and include their status, how much storage they are using, whether there are any issues, and break it down by both compartment and tag if tags are assigned.

Total databases: 8 (across 2 compartments)
Storage used: 354.6 GB (all databases)
Protected: 4 (healthy, within SLA)
Alert: 4 (recovery window = 0)

Prod_databases — 2 databases · 139.9 GB
  • db19vdb19v_837_iad — Alert
    20.3 GB used · est. 14.6 GB · 14-day policy · Org: HR · Prod
    Recovery window 0 of 14 days — redo logs not streaming or recently enrolled
  • tt2srctt2src_iad1mz — Protected
    119.6 GB used · est. 112.9 GB · 14-day policy · Org: Finance · Prod
    14.3d retention · redo enabled

Nonprod_databases — 6 databases · 214.7 GB
  • db23cdb23c_npb_iad — Alert
    3.0 GB used · est. 9.1 GB · 14-day policy · Org: Finance · Pre-prod
    ⚠ Unprotected window: 70.5 days — no recent backup activity. Requires immediate attention.
  • ORCL_TESTORCL — Alert
    4.1 GB used · est. 11.3 GB · 14-day policy · Environment: Trial
    ⚠ Unprotected window: 24.9 days — recovery point significantly stale.
  • clprotclprot — Alert
    10.5 GB used · est. 22.1 GB · 14-day policy · Fleet agent managed
    Recovery window 0 of 14 days · retention = 1 sec · min recovery needed 21 days
  • db23aidb23ai_6rm_iad — Protected
    75.6 GB used · est. 68.1 GB · 14-day policy · Org: Finance · Dev
    15.0d retention · policy locked 2023-11-16
  • test2test2_rjh_iad — Protected
    77.2 GB used · est. 74.6 GB · 35-day policy · Org: HR · Dev
    35.8d retention · redo enabled
  • demodbdemodb_tfj_iad — Protected
    44.4 GB used · est. 42.7 GB · 35-day policy · No Oracle-Standard tags
    35.9d retention

By tag — Oracle-Standard · Org
Org         Databases   Storage     Status
Finance     4           198.2 GB    2 ok, 2 alert
HR          2           97.5 GB     1 ok, 1 alert
Untagged    3           59.0 GB     1 ok, 2 alert

By tag — Oracle-Standard · Environment
Environment   Databases   Storage    Status
Prod          2           139.9 GB   1 ok, 1 alert
Dev           2           152.8 GB   2 ok
Pre-prod      1           3.0 GB     1 alert
Trial         1           4.1 GB     1 alert
Untagged      2           54.9 GB    1 ok, 1 alert

**************************************************

 Question 2


List out the databases that I have and look at the usage for the last 30 days. Based on that usage, give a line chart showing my space usage by database, and based on the current growth, show what the projected usage will be for the next year.


Question 3

Using the tags create a pie chart showing the space usage for all of my backups and show the usage for each tag.




Summary:

Above are some examples of what you can do with the new OCI Recovery Service MCP server with Claude.

Below is a list of the tools available to you to use.

Here are the 19 tools available through the Recovery Service MCP server, grouped by what they do:

Protected Databases

  • list_protected_databases — list databases enrolled in Recovery Service, with metrics and subnet details, filtered by compartment, policy, lifecycle state, etc.
  • get_protected_database — get full details for a single protected database by OCID
  • summarize_protected_database_health — count of healthy / warning / alert / unknown databases in a compartment
  • summarize_protected_database_backup_destination — how databases in a compartment are backed up (Recovery Service vs other destinations)
  • summarize_protected_database_redo_status — how many databases have redo transport on or off

Protection Policies

  • list_protection_policies — list policies in a compartment
  • get_protection_policy — get a single policy by OCID

Recovery Service Subnets

  • list_recovery_service_subnets — list subnets in a compartment
  • get_recovery_service_subnet — get a single subnet by OCID

Backups

  • list_backups — list backups with flexible filters and optional auto-paging
  • get_backup — get a single backup by OCID

Metrics

  • get_recovery_service_metrics — time-series metrics for a compartment or single database; supported metrics are SpaceUsedForRecoveryWindow, ProtectedDatabaseSize, ProtectedDatabaseHealth, and DataLossExposure; resolutions of 1m, 5m, 1h, 1d; aggregations of mean, sum, max, min, count

Storage Summaries

  • summarize_backup_space_used — total backup space in GB across databases in a compartment
  • summarize_protected_database_backup_destination — breakdown by backup destination type

DB Systems & Homes (for enrollment context)

  • list_databases — list databases across DB Homes in a compartment, with backup settings and linked protection policy
  • list_db_homes — list DB Homes in a compartment
  • get_db_home — get a single DB Home by OCID
  • list_db_systems — list DB systems in a compartment
  • get_db_system — get a single DB system by OCID



Friday, March 20, 2026

How many IP addresses do I need for the Autonomous Recovery Service

 One of the most common questions that comes up is "How many IP addresses do I need to set aside for the Autonomous Recovery Service" or "How big does the CIDR block need to be for my Recovery Service subnet"?

In this blog post, I will explain how IPs are used by the service, but the exact number of IP addresses you will need is hard to pin down.

First below is a diagram showing how this works.


Recovery Service Subnet(s)

The first piece to understand is how the Autonomous Recovery Service uses the subnet(s) that are registered.

First, you might be wondering why I have the "(s)" on the end. When you register a Recovery Service subnet there are two levels.

You register a name for the "Recovery Service Subnet", and this is actually a group of subnets. You can register multiple subnets as eligible to be used for a "Recovery Service Subnet".


 The screenshot above is what you will see in OCI. 

When you register a Recovery Service subnet,

  • You give it a name for the "Recovery Service subnet group".
  • You identify the VCN that this subnet is registered for. Each VCN will have its own registered subnet group.
  • You add one or more subnets within that VCN that can be used for endpoint IP addresses.

Any of these registered subnets can be used for Autonomous Recovery Service IP addresses.

Subnets can also be added to, and removed from, the group.


How many IP addresses for a Database backup?

I am going to start with a single database before I explain what happens when you have multiple databases using the service. In order to support Oracle Database backups, the Autonomous Recovery Service uses endpoint IP addresses that map to a pair of ZDLRAs that store the backups as a service. The pair of ZDLRAs provides an always-available service.
For a single database, below is what you would see for the endpoints that get created. In my example, you can see that there are 3 IP addresses per RA in the "Recovery Service Group".


Above, this shows the 6 private endpoint IP addresses that are created for the database backups being sent to two ZDLRAs (RA-018 and RA-020). There are also FQDNs created for each of the endpoints, and you can see that the names map to the specific ZDLRAs that are storing the backups.

NOTE: There are also some 4-node ZDLRAs in some regions. In that case there will be 4 endpoint IP addresses for each ZDLRA in the pair, for a total of 8 IP addresses.

How many IP addresses do I need for multiple databases?

This is where the answer is "It depends". The simple example above shows what happens for a single database. When you add another database, it might not end up on the same "Recovery Service group". It is possible the new database's backups could end up on another "Recovery Service group", needing additional IP addresses.
There are a number of factors that affect how many "Recovery Service groups" are used when backing up multiple databases.
  • Number of databases - If you have a large number of databases, this increases the chances that more backup locations will be used to spread out the backups across multiple groups.
  • Size of the database backups - If your backups are very large, the Recovery Service tries to balance larger database backups across more groups.
  • Number of groups in the region - Some regions contain more "Recovery Service groups" than other regions. If you are backing up in a larger region, there is a higher chance that more groups will be utilized to support many databases.
The diagram I started with, shown again below, illustrates 3 databases that are storing their backups across two different Recovery Service groups.


The first database is sending its backups to a Recovery Service Group containing two 2-DB-node ZDLRAs, and it is utilizing 6 IP addresses.
The second and third databases are using the same Recovery Service Group, which consists of two 3-DB-node ZDLRAs, and they share the same 8 IP addresses.

How to interpret this?

The recommendation for Recovery Service subnets is to create a separate subnet with a /24 CIDR block, which provides 254 private endpoint IP addresses. This allows for at least 31 different Recovery Service groups.
If you only have a few databases, this may be bigger than you need, and you may be able to use a smaller CIDR block, or multiple subnets with smaller CIDR blocks.
The recommendation of a /24 CIDR block ensures you will not run out of IP addresses.
As you decrease the number of available IP addresses, you increase the chances that you will not have enough IP addresses to add another database to be backed up to the Autonomous Recovery Service.
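The arithmetic behind that recommendation, sketched out:

```shell
#!/usr/bin/env bash
# /24 sizing math: 254 usable endpoint IPs, and at worst 8 IPs per
# Recovery Service group, leaves room for at least 31 groups.
prefix=24
usable=$(( 2 ** (32 - prefix) - 2 ))  # 256 addresses minus network/broadcast
groups=$(( usable / 8 ))              # integer division
echo "usable=$usable groups=$groups"  # prints: usable=254 groups=31
```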

What happens if I don't register enough free IPs?

Once a database is configured for backups, it will not create a need for additional free IP addresses. The only time you will have an issue with free IP addresses for the Recovery Service is when you add a new database to be backed up. If the onboarding process decides that the backups need to reside on a new Recovery Service Group of ZDLRAs, and there are not enough free IP addresses, you will receive an error when configuring backups. At that point you can add more subnets to the Recovery Service subnet group registered with the VCN.

Do I have to worry about space since databases are assigned to Recovery Service groups?

No. The Recovery Service will automatically manage the underlying storage for the database backups and move backups from one group to another if needed in order to ensure there is enough space. Because of this, you may find that the names of the ZDLRAs where the backups reside change over time. This is one of the reasons why the service dynamically creates the TNSNAMES entry as needed. The FQDN used for backups of a database will change if the database is moved because of space constraints.

Summary

There is no set number of IP addresses that need to be registered with the Recovery Service and freely available to be assigned for backups. It depends on the size of your environment, and the number of IP addresses utilized could grow as your environment adds more databases to be backed up.
If you start with a smaller number of IP addresses, keep an eye on the number of available IP addresses in the subnets registered with the Recovery Service to ensure you have room to grow.

Thursday, March 5, 2026

Recovery Service failure checks

When using the Autonomous Recovery Service there are some prerequisites that need to be met. I have a checklist that goes through these requirements, and you can find that checklist here.


This blog post will help you perform some basic debugging and demonstrate what errors you will see if you miss some of the steps.

I want to point out that Billy Zou created a great post that will help you work through issues with cross-region restore. Billy's post has some great information to use for debugging, and you can find it here.

This post is broken into two possible places where you will have issues.

  1. Unable to submit the request. This can be caused by
    • Policy issues
    • Limits issues
  2. You submitted the backup, but it failed to configure the Recovery Service. This can be caused by
    • DNS issues resolving the FQDNs used by the Recovery Service
    • Routing/port issues accessing the Recovery Service or Object Storage

Unable to submit Autonomous Recovery Service as a backup location


Policies for the tenancy

The first step is to ensure that you have configured policies for the recovery service.  The easiest way to do this is by utilizing Policy Builder.

NOTE: There is a policy that grants access to the "ADMIN" group. If your administrator group is a different group, you would need to adjust that policy to reference your group's name.

Visible Issue

If policies are not configured properly, you will find that "Recovery Service" is greyed out as an option.


Limits for the Recovery Service

By default, if you are not in a multicloud environment, your paid tenancy will have a limit of:
  • 10 databases
  • 10 TB of backup storage
If you are using multicloud, and your database is in a partner cloud, there are no default limits (they default to 0!), which means you have to apply for a limit increase!

This is the most common issue I see with multicloud. You need to set the limit specifically for the multicloud subscription.

Visible Issue

If limits are not configured properly, you will find that "Recovery Service" is greyed out as an option.

Below the choice for "Recovery Service", you will see a warning telling you that you have exceeded your limits.


Thursday, February 12, 2026

Sudden ORA-12578: TNS:wallet open failed when logging in as SYS


This blog post covers a possible cause of an "ORA-12578: TNS:wallet open failed" error when trying to log into your database using

>sqlplus / as sysdba



I have seen this issue a few times with DB 19.x. I noticed the behavior changed with AI DB 26, where it is much less likely to happen.

What causes this ?

The most likely reason why you would suddenly see this error code when trying to log in using "sys as sysdba" is a change to the sqlnet.ora file.

When logging in using "sys as sysdba", the sqlnet.ora file used by your environment will be parsed as part of the authentication process.  If the sqlnet.ora in your environment has any issues during the parsing, this will cause your login using "sys as sysdba" to fail with the above error.

Fortunately, this does not happen in AI DB 26.

How to test for the sqlnet.ora being the cause

The easiest way to test is by using the TNS_ADMIN environmental variable setting.
The steps I would follow are.
  1. cd to your $ORACLE_HOME/network/admin directory on the server
  2. Execute mkdir to create a new directory named "test"
  3. cd to that new directory "test"
  4. set TNS_ADMIN with "export TNS_ADMIN=$ORACLE_HOME/network/admin/test"
  5. Try logging in using "/ as sysdba"
Since there is no sqlnet.ora file in this new directory, if you can successfully login we have proven that the issue is the sqlnet.ora file.
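Those steps can be sketched as a script. Note the ORACLE_HOME fallback to a scratch path is my addition so the steps can be rehearsed anywhere; on a real database server ORACLE_HOME will already be set, and step 5 is shown as a dry run.

```shell
#!/usr/bin/env bash
# Sketch of the TNS_ADMIN isolation test described above.
# ORACLE_HOME falls back to a scratch path so this can be rehearsed anywhere.
set -euo pipefail
ORACLE_HOME="${ORACLE_HOME:-/tmp/fake_oracle_home}"

mkdir -p "$ORACLE_HOME/network/admin/test"          # steps 1-2: new empty dir
cd "$ORACLE_HOME/network/admin/test"                # step 3
export TNS_ADMIN="$ORACLE_HOME/network/admin/test"  # step 4

# Step 5 (dry run): on the database server you would now try to connect.
echo "sqlplus / as sysdba"
```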

Now that we have proven it is the sqlnet.ora (or ruled it out; sorry, I couldn't help myself), we can look at the causes.

Finding the issue

Now that you have a new directory, $ORACLE_HOME/network/admin/test, we can start working through the possible causes.

Step 1- copy the sqlnet.ora from the default location to this new directory so that we can update it and find the issue without affecting other users. "cp ../*.ora ."

Below is my sqlnet.ora, which I will use to demonstrate the different issues.

# sqlnet3189722425551944721.ora Network Configuration File: /tmp/sqlnet3189722425551944721.ora
# Generated by Oracle configuration tools.

SQLNET.WALLET_OVERRIDE = true

NAMES.DIRECTORY_PATH= (TNSNAMES, ONAMES, HOSTNAME)

WALLET_LOCATION =
 (SOURCE =
 (METHOD = FILE)
 (METHOD_DATA =
 (DIRECTORY = /opt/oracle/admin/ORCLCDB/wallet1/server_seps)
 )
)


Cause 1 - Wallet file

The first possible cause is that the wallet location is not correct. The same issue will most likely occur if you are setting the encryption_wallet_location in the sqlnet.ora file.

Both of these must be true when looking at the wallet file.

  1. The directory in the sqlnet.ora file MUST exist. If the directory location is incorrect, you will have an issue opening the wallet
  2. There must be a wallet file in that directory. Not only must the directory exist, but there must also be a wallet file within the directory to read.

Cause 2 - Syntax in sqlnet.ora file

When the database parses the sqlnet.ora file, it can be sensitive to any issues within the sqlnet.ora file.  Simple issues can cause parsing failures, and cause your login to fail.

Some of the most common issues are:
  1. Hidden characters in the file. This can happen when copying across platforms (windows to Linux for example). If there are any characters in the file that cause parsing to fail, your login will fail.
  2. Missing "(" or ")". This can cause parsing to fail, and your login will also fail.
  3. Starting "(" in the first column.  Unfortunately this causes a parsing failure. This can be the most annoying, and difficult to find cause. 
Below is an example of a sqlnet.ora file that fails, and you can compare to my sqlnet.ora at the beginning of this blog.


# sqlnet3189722425551944721.ora Network Configuration File: /tmp/sqlnet3189722425551944721.ora
# Generated by Oracle configuration tools.

SQLNET.WALLET_OVERRIDE = true

NAMES.DIRECTORY_PATH= (TNSNAMES, ONAMES, HOSTNAME)

WALLET_LOCATION =
 (SOURCE =
 (METHOD = FILE)
 (METHOD_DATA =
(DIRECTORY = /opt/oracle/admin/ORCLCDB/wallet/server_seps)
 )
)

Can you spot the difference?
...
...
It's the "(" before the word "DIRECTORY". When it is in the first column, parsing fails.
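Two quick shell checks will catch both hidden characters and the column-one "(" defect. Here they run against a generated sample file that reproduces the problem:

```shell
#!/usr/bin/env bash
# Generate a sample sqlnet.ora fragment with the column-one "(" defect,
# then run the two checks against it.
set -euo pipefail

printf '%s\n' \
  'WALLET_LOCATION =' \
  ' (SOURCE =' \
  ' (METHOD = FILE)' \
  ' (METHOD_DATA =' \
  '(DIRECTORY = /opt/oracle/admin/ORCLCDB/wallet/server_seps)' \
  ' )' \
  ')' > sqlnet_check.ora

cat -A sqlnet_check.ora        # makes hidden characters visible (a CR shows as ^M)
grep -n '^(' sqlnet_check.ora  # flags any "(" in column one, with its line number
```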


Prevention-

What I tell customers to avoid any issues like this is the following:

  • NEVER change the default sqlnet.ora for all databases. This is the copy that is stored in the $ORACLE_HOME/network/admin directory.
  • ALWAYS set the WALLET_ROOT parameter in the database. This is interpreted first by the database, and replaces the encryption_wallet_location in the sqlnet.ora file.
  • ALWAYS put the SEPS wallet for Real-time redo with the ZDLRA in the {WALLET_ROOT}/server_seps directory. Even if it is a symbolic link.
  • ALWAYS use TNS_ADMIN when it is necessary to customize the sqlnet.ora.  When backing up using the ZDLRA I recommend customers create a customized sqlnet.ora file and use TNS_ADMIN when executing backup/restore scripts.
This will prevent any issues with the shared sqlnet.ora that may cause unexpected issues with logging in as sys.

SUMMARY:

If you suddenly receive an ORA-12578: TNS:wallet open failed when logging in as SYS, the first place to look is your sqlnet.ora file for parsing errors.



Tuesday, January 20, 2026

Oracle Database Encryption (TDE)

Oracle TDE (Transparent Data Encryption) is a critical piece of data protection, and online encryption is a great tool to simplify the process of encrypting or rekeying your tablespaces. In this post I will go through the process of:
  • Online encrypting an unencrypted tablespace with TDE
  • Restarting encryption of a tablespace in a database that crashed during the process
  • Re-encrypting tablespaces with a stronger encryption key (AES128 --> AES256)
  • What happens on a standby database during encryption and rekey operations

Throughout this blog post I will be using queries from my previous blog, which you can find here.

Environment


First my environment. 
  • Primary DB : ORCLCDB - DB 19.28 
  • Standby DB : ORCLSTBY - DB 19.28 
  • Both databases are using OMF to simplify naming files
  • Encryption - Using a local wallet file
The parameters that affect encryption are using the default values.
Parameters
Name                           Value                Default?
------------------------------ -------------------- ----------
tablespace_encryption          MANUAL_ENABLE        TRUE
encrypt_new_tablespaces        CLOUD_ONLY           TRUE

Using these parameters, new tablespaces will not be encrypted unless the DDL explicitly mentions encrypting the tablespace.

NOTE: Since I am using 19c, the default encryption algorithm is AES128. Set _tablespace_encryption_default_algorithm to 'AES256' to change the default.
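As a sketch (this is a hidden/underscore parameter, so treat it as an assumption to validate with Oracle Support for your patch level):

```sql
-- Change the default tablespace encryption algorithm on 19c
-- (underscore parameters must be double-quoted in ALTER SYSTEM)
ALTER SYSTEM SET "_tablespace_encryption_default_algorithm" = 'AES256' SCOPE = BOTH;
```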

Encrypting my tablespace


Above is the Encryption clause that you can add to "ALTER TABLESPACE" to manage encrypted tablespaces.
For this post, I will concentrate on ONLINE encryption.

New tablespace

I am going to create a new tablespace in my PDB, and I want any new tablespaces to be encrypted.
  • ALTER SYSTEM SET encrypt_new_tablespaces=ALWAYS;
  • CREATE TABLESPACE encrypt_test;
Running my query against the database, I can see that the tablespace was created and, by default, encrypted with AES128.
PDB Name   Tablespace Name Enc.  Enc. alg   Master Key ID             Key ID                              Tablespace Encrypt Key (trunc)
---------- --------------- ----- ---------- ------------------------- ----------------------------------- ------------------------------
CDB$ROOT   SYSAUX          NO               AZTq/iBVq0/Kv5Es4oNsgQI=  94EAFE2055AB4FCABF912CE2836C8102    603EC31649CDC3684FE68D3DB376F6
           SYSTEM          NO               AZTq/iBVq0/Kv5Es4oNsgQI=  94EAFE2055AB4FCABF912CE2836C8102    603EC31649CDC3684FE68D3DB376F6
           TEMP            NO               AZTq/iBVq0/Kv5Es4oNsgQI=  94EAFE2055AB4FCABF912CE2836C8102    603EC31649CDC3684FE68D3DB376F6
           UNDOTBS1        NO               AZTq/iBVq0/Kv5Es4oNsgQI=  94EAFE2055AB4FCABF912CE2836C8102    603EC31649CDC3684FE68D3DB376F6
           USERS           NO               AQAAAAAAAAAAAAAAAAAAAAA=  00000000000000000000000000000000    000000000000000000000000000000
           ENCRYPT_TEST    YES   AES128     AZTq/iBVq0/Kv5Es4oNsgQI=  94EAFE2055AB4FCABF912CE2836C8102    5635777F4E7A6ACA229FEA10369967


NOTE: I did not change the parameter in my standby database and the tablespace in my standby is also encrypted. The DDL sent to the standby from the primary ensured that the standby also encrypts the tablespace.

Existing tablespaces

Now I am going to take an existing tablespace and encrypt it online. 
Prior to performing this operation (even though it is online), I am going to take a backup of the database.
To make this more realistic, I am using a tablespace with 4 datafiles of roughly 10 GB each.

Tablespace Name    FILE_ID File Name                      Size
--------------- ---------- ------------------------------ ----------
ENCTEST                 13 o1_mf_enctest_npz8393s_.dbf     14.454 GB
ENCTEST                 14 o1_mf_enctest_npz8gvz0_.dbf      8.589 GB
ENCTEST                 15 o1_mf_enctest_npz8oy0p_.dbf     10.831 GB
ENCTEST                 16 o1_mf_enctest_npz8yr6p_.dbf     12.793 GB


I am going to encrypt this tablespace on the primary database.

SQL> alter tablespace ENCTEST ENCRYPTION ONLINE ENCRYPT;

My session will not come back until the tablespace is encrypted.
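While the statement runs, a second session can watch the conversion using the same view I query later in this post:

```sql
-- STATUS shows ENCRYPTING while the online conversion is in flight,
-- then returns to NORMAL when it completes.
SELECT ts#, encryptionalg, encryptedts, status, con_id
FROM   v$encrypted_tablespaces;
```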

Observations

Alert log and encrypting standby database.

When looking at the alert log for the primary and the standby, I can see the changes occurring.

Primary Database Alert log

ORCLPDB1(3):alter tablespace ENCTEST ENCRYPTION ONLINE ENCRYPT
2026-01-20T10:44:14.923114-07:00
ORCLPDB1(3):About to encrypt tablespace ENCTEST (tsn 3/6)
2026-01-20T10:44:15.020101-07:00
ORCLPDB1(3):TDE converting datafile /home/db19c/oradata/ORCLCDB/ORCLCDB/3CE209FE44CDECAEE06394A5500AE99C/datafile/o1_mf_enctest_npz8393s_.dbf (13) to /home/db19c/oradata/ORCLCDB/ORCLCDB/3CE209FE44CDECAEE06394A5500AE99C/datafile/o1_mf_enctest_%u_.dbf
2026-01-20T10:48:44.473301-07:00  --> Start Time for Encryption
ORCLPDB1(3):Blocks TDE converted for file /home/db19c/oradata/ORCLCDB/ORCLCDB/3CE209FE44CDECAEE06394A5500AE99C/datafile/o1_mf_enctest_npzhqh0w_.dbf size 1894528
2026-01-20T10:48:44.564605-07:00
ORCLPDB1(3):TDE convert operation committed for file /home/db19c/oradata/ORCLCDB/ORCLCDB/3CE209FE44CDECAEE06394A5500AE99C/datafile/o1_mf_enctest_npzhqh0w_.dbf
2026-01-20T10:48:46.584235-07:00
ORCLPDB1(3):About to zero out original file "/home/db19c/oradata/ORCLCDB/ORCLCDB/3CE209FE44CDECAEE06394A5500AE99C/datafile/o1_mf_enctest_npz8393s_.dbf"
2026-01-20T10:50:06.377754-07:00
ORCLPDB1(3):Successfully zero'ed out original file "/home/db19c/oradata/ORCLCDB/ORCLCDB/3CE209FE44CDECAEE06394A5500AE99C/datafile/o1_mf_enctest_npz8393s_.dbf"
2026-01-20T10:50:06.380150-07:00
ORCLPDB1(3):Successfully deleted original file "/home/db19c/oradata/ORCLCDB/ORCLCDB/3CE209FE44CDECAEE06394A5500AE99C/datafile/o1_mf_enctest_npz8393s_.dbf"
2026-01-20T10:50:06.388074-07:00


Standby Database Alert log

ORCLPDB1(3):TDE converting datafile /home/db19c/oradata/ORCLSTBY/ORCLSTBY/3CE209FE44CDECAEE06394A5500AE99C/datafile/o1_mf_enctest_npz83b9x_.dbf (13) to /home/db19c/oradata/ORCLCDB/ORCLSTBY/3CE209FE44CDECAEE06394A5500AE99C/datafile/o1_mf_enctest_%u_.dbf
2026-01-20T10:48:44.976604-07:00   --> Start Time for Encryption
ORCLPDB1(3):Blocks TDE converted for file /home/db19c/oradata/ORCLCDB/ORCLSTBY/3CE209FE44CDECAEE06394A5500AE99C/datafile/o1_mf_enctest_npzhqj8h_.dbf size 1894528
2026-01-20T10:48:44.983884-07:00
ORCLPDB1(3):TDE convert operation committed for file /home/db19c/oradata/ORCLCDB/ORCLSTBY/3CE209FE44CDECAEE06394A5500AE99C/datafile/o1_mf_enctest_npzhqj8h_.dbf
2026-01-20T10:48:46.994337-07:00
ORCLPDB1(3):About to zero out original file "/home/db19c/oradata/ORCLSTBY/ORCLSTBY/3CE209FE44CDECAEE06394A5500AE99C/datafile/o1_mf_enctest_npz83b9x_.dbf"
2026-01-20T10:50:06.371051-07:00
ORCLPDB1(3):Successfully zero'ed out original file "/home/db19c/oradata/ORCLSTBY/ORCLSTBY/3CE209FE44CDECAEE06394A5500AE99C/datafile/o1_mf_enctest_npz83b9x_.dbf"
2026-01-20T10:50:08.012125-07:00
ORCLPDB1(3):Successfully deleted original file "/home/db19c/oradata/ORCLSTBY/ORCLSTBY/3CE209FE44CDECAEE06394A5500AE99C/datafile/o1_mf_enctest_npz83b9x_.dbf"
2026-01-20T10:50:08.088219-07:00

Notice that the conversion of the first datafile on the first node started at 
    2026-01-20T10:48:44.473301-07:00
And the standby database started converting the first datafile at
    2026-01-20T10:48:44.976604-07:00
The difference is milliseconds due to latency. 

Observation #1 - With a standby database, the standby begins encrypting immediately through redo apply, and it encrypts at the same time as the primary.

This was the most interesting item I found when going through the alert log.
After creating a new encrypted copy of a datafile, the process "Zero'ed out" the old datafile prior to deletion.
This was something I had not thought of before.  
When migrating to Encrypted datafiles, this ensures that the original datafile (unencrypted) is overwritten and cannot be "undeleted" and read.
ORCLPDB1(3):About to zero out original file "/home/db19c/oradata/ORCLSTBY/ORCLSTBY/3CE209FE44CDECAEE06394A5500AE99C/datafile/o1_mf_enctest_npz83b9x_.dbf"
2026-01-20T10:50:06.371051-07:00
ORCLPDB1(3):Successfully zero'ed out original file "/home/db19c/oradata/ORCLSTBY/ORCLSTBY/3CE209FE44CDECAEE06394A5500AE99C/datafile/o1_mf_enctest_npz83b9x_.dbf"
2026-01-20T10:50:08.012125-07:00
ORCLPDB1(3):Successfully deleted original file "/home/db19c/oradata/ORCLSTBY/ORCLSTBY/3CE209FE44CDECAEE06394A5500AE99C/datafile/o1_mf_enctest_npz83b9x_.dbf"
2026-01-20T10:50:08.088219-07:00

NOTE: This same process occurs when rekeying also.

26ai New feature : DB_RECOVERY_AUTO_REKEY 

ON, which is the default for the standby, enables the standby to automatically perform an online tablespace rekey operation as part of standby media recovery. Be aware that this setting pauses media recovery; if the tablespace is large, you could observe extended standby apply lag.

OFF, which is the default for the primary, does not perform the tablespace rekey operation inline. This setting is designed for large Oracle Data Guard deployments and allows the standby database to avoid apply lag during an online conversion: standby recovery only records the tablespace key information rather than performing the rekey. Once the STATUS column for the corresponding tablespace in the V$ENCRYPTED_TABLESPACES dynamic view becomes REKEYING or ENCRYPTING on the standby database, you can use a standby SQL session to issue an ALTER TABLESPACE ENCRYPTION ONLINE FINISH REKEY|ENCRYPT command to perform the rekey in parallel with media recovery.
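Based on that description, the standby-side completion looks like this (ENCTEST is the tablespace from my example):

```sql
-- On the standby, after V$ENCRYPTED_TABLESPACES.STATUS shows REKEYING
-- (or ENCRYPTING) for the tablespace:
ALTER TABLESPACE ENCTEST ENCRYPTION ONLINE FINISH REKEY;
-- or, for an initial encryption:
-- ALTER TABLESPACE ENCTEST ENCRYPTION ONLINE FINISH ENCRYPT;
```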

Observation #2 - When encrypting or rekeying a tablespace, the original datafiles are overwritten with zeros to ensure they no longer contain any usable data.

Backing up newly encrypted database


I am now going to perform a new Level 1 backup of the database and look at what is being backed up. Below is the output from block change tracking showing the blocks being backed up.

    FILE# INCREMENTAL_LEVEL COMPLETION_TIME       BLOCKS DATAFILE_BLOCKS PCT_CHANGED
---------- ----------------- ----------------- ---------- --------------- -----------

Level 0 backup of datafiles
        13                 1 01-20-26 10:30:00    1793783         1894528       94.68
        14                 1 01-20-26 10:27:31    1081471         1125744       96.07
        15                 1 01-20-26 10:28:43    1368079         1419632       96.37
        16                 1 01-20-26 10:29:32    1573511         1676800       93.84

Level 1 backup of datafiles prior to encryption
        13                 1 01-20-26 10:30:15          1         1894528           0
        14                 1 01-20-26 10:30:15          1         1125744           0
        15                 1 01-20-26 10:30:15          1         1419632           0
        16                 1 01-20-26 10:30:15          1         1676800           0

Level 1 backup of datafiles post encryption
        13                 1 01-20-26 11:12:14          1         1894528           0
        14                 1 01-20-26 11:12:16          1         1125744           0
        15                 1 01-20-26 11:12:15          1         1419632           0
        16                 1 01-20-26 11:12:14          1         1676800           0

What you can see is that, as far as RMAN is concerned, since this database is idle only the header block changed. RMAN did not back up the datafile blocks even though every block was rewritten by the encryption process: only the user contents of the blocks changed, not the metadata about the blocks.

This is a critical observation.
  • If you perform a weekly full/daily incremental backup strategy, you will need a new Level 0 backup to ensure the datafiles in your backup are encrypted.
  • If you utilize the 9i-style "incremental merge" process, your backup copy will not be encrypted, and if you perform a "switch to copy", your datafiles will not be encrypted. After encrypting, you must restart this strategy with a new full backup.
NOTE: One of the features of the ZDLRA is that it is "encryption aware", and you will not need to perform a new Level 0 backup to ensure you restore encrypted datafiles.
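For the weekly full/daily incremental strategy, the restart amounts to taking a fresh Level 0; a minimal RMAN sketch (the tags are illustrative, not required):

```
RMAN> BACKUP INCREMENTAL LEVEL 0 DATABASE TAG 'POST_TDE_L0';
RMAN> BACKUP INCREMENTAL LEVEL 1 DATABASE TAG 'POST_TDE_INCR';
```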

Observation #3 - The database does not recognize the blocks as changed. The block header information, containing the SCN and other metadata, remains untouched.


Datafile/Tablespace information 

Original Datafiles

Tablespace Name    FILE_ID File Name                      Size
--------------- ---------- ------------------------------ ----------
ENCTEST                 13 o1_mf_enctest_npz8393s_.dbf     14.454 GB
ENCTEST                 14 o1_mf_enctest_npz8gvz0_.dbf      8.589 GB
ENCTEST                 15 o1_mf_enctest_npz8oy0p_.dbf     10.831 GB
ENCTEST                 16 o1_mf_enctest_npz8yr6p_.dbf     12.793 GB

New Datafiles

Tablespace Name     FILE_ID File Name                      Size
--------------- ---------- ------------------------------ ----------
ENCTEST                 13 o1_mf_enctest_npzhqh0w_.dbf     14.454 GB
ENCTEST                 14 o1_mf_enctest_npzj2gdb_.dbf      8.589 GB
ENCTEST                 15 o1_mf_enctest_npzj9f4k_.dbf     10.831 GB
ENCTEST                 16 o1_mf_enctest_npzjl4dz_.dbf     12.793 GB

Tablespace encryption Status

PDB Name   Tablespace Name Enc.  Enc. alg   Master Key ID             Key ID                              Tablespace Encrypt Key (trunc)
---------- --------------- ----- ---------- ------------------------- ----------------------------------- ------------------------------
ORCLPDB1   SYSAUX          NO               AawCP+ykIE/2v9kPpgqvHOk=  AC023FECA4204FF6BFD90FA60AAF1CE9    0C836501DE2C28286F4DFF45EDB563
           SYSTEM          NO               AawCP+ykIE/2v9kPpgqvHOk=  AC023FECA4204FF6BFD90FA60AAF1CE9    0C836501DE2C28286F4DFF45EDB563
           TEMP            NO               AawCP+ykIE/2v9kPpgqvHOk=  AC023FECA4204FF6BFD90FA60AAF1CE9    0C836501DE2C28286F4DFF45EDB563
           UNDOTBS1        NO               AawCP+ykIE/2v9kPpgqvHOk=  AC023FECA4204FF6BFD90FA60AAF1CE9    0C836501DE2C28286F4DFF45EDB563
           USERS           NO               AQAAAAAAAAAAAAAAAAAAAAA=  00000000000000000000000000000000    000000000000000000000000000000
           ENCTEST         YES   AES128     AawCP+ykIE/2v9kPpgqvHOk=  AC023FECA4204FF6BFD90FA60AAF1CE9    604F0A3F91F73875D0BC9325FA87C3


Above, you can see that a newly encrypted datafile copy was created for each datafile member of the tablespace. The size is exactly the same as the original datafile. You can also see that since I didn't specify an algorithm, the tablespace was encrypted with AES128 (the default).

Observation #4 - A new datafile copy is created for each member of the tablespace. Make sure there is enough free space for a second copy of the largest datafile.
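A quick pre-flight check against the standard DBA views; free space must cover at least one extra copy of the largest datafile in the tablespace being converted:

```sql
-- Largest datafile in the tablespace (ENCTEST is my example tablespace)
SELECT MAX(bytes)/1024/1024/1024 AS largest_file_gb
FROM   dba_data_files
WHERE  tablespace_name = 'ENCTEST';
```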

Re-encrypting tablespaces from AES128 to AES256

The next process is a common one, as customers rekey existing tablespaces from AES128 to the stronger AES256 encryption algorithm.
In this example, I am not only going to re-encrypt the tablespace, I am also going to crash the database in the middle of the process to see how to handle that possibility.

SQL> alter tablespace ENCTEST ENCRYPTION ONLINE USING 'AES256' rekey;

For this example, I monitored the alert log and watched it encrypt the datafiles. As soon as it finished encrypting the first datafile (File# 13), I performed a "shutdown abort" on the database.

Primary Alert log
ORCLPDB1(3):About to rekey tablespace ENCTEST (tsn 3/6)
2026-01-20T11:46:45.353069-07:00
ORCLPDB1(3):TDE converting datafile /home/db19c/oradata/ORCLCDB/ORCLCDB/3CE209FE44CDECAEE06394A5500AE99C/datafile/o1_mf_enctest_npzhqh0w_.dbf (13) to /home/db19c/oradata/                      ORCLCDB/ORCLCDB/3CE209FE44CDECAEE06394A5500AE99C/datafile/o1_mf_enctest_%u_.dbf
2026-01-20T11:51:50.367605-07:00
ORCLPDB1(3):Blocks TDE converted for file /home/db19c/oradata/ORCLCDB/ORCLCDB/3CE209FE44CDECAEE06394A5500AE99C/datafile/o1_mf_enctest_npzmdoc8_.dbf size 1894528
2026-01-20T11:51:50.415541-07:00
ORCLPDB1(3):TDE convert operation committed for file /home/db19c/oradata/ORCLCDB/ORCLCDB/3CE209FE44CDECAEE06394A5500AE99C/datafile/o1_mf_enctest_npzmdoc8_.dbf
2026-01-20T11:51:52.481666-07:00
ORCLPDB1(3):About to zero out original file "/home/db19c/oradata/ORCLCDB/ORCLCDB/3CE209FE44CDECAEE06394A5500AE99C/datafile/o1_mf_enctest_npzhqh0w_.dbf"
2026-01-20T11:52:25.125992-07:00
Shutting down ORACLE instance (abort) (OS id: 827465)
2026-01-20T11:52:25.158712-07:00
Shutdown is initiated by sqlplus@bgrenn-19-28 (TNS V1-V3).
License high water mark = 12
USER (ospid: 827465): terminating the instance
2026-01-20T11:52:26.213187-07:00
Instance terminated by USER, pid = 827465
2026-01-20T11:52:27.343819-07:00

Standby Alert log
ORCLPDB1(3):TDE converting datafile /home/db19c/oradata/ORCLCDB/ORCLSTBY/3CE209FE44CDECAEE06394A5500AE99C/datafile/o1_mf_enctest_npzhqj8h_.dbf (13) to /home/db19c/oradata/ORCLCDB/ORCLSTBY/3CE209FE44CDECAEE06394A5500AE99C/datafile/o1_mf_enctest_%u_.dbf
2026-01-20T11:51:51.143252-07:00
ORCLPDB1(3):Blocks TDE converted for file /home/db19c/oradata/ORCLCDB/ORCLSTBY/3CE209FE44CDECAEE06394A5500AE99C/datafile/o1_mf_enctest_npzmdphy_.dbf size 1894528
2026-01-20T11:51:51.150126-07:00
ORCLPDB1(3):TDE convert operation committed for file /home/db19c/oradata/ORCLCDB/ORCLSTBY/3CE209FE44CDECAEE06394A5500AE99C/datafile/o1_mf_enctest_npzmdphy_.dbf
2026-01-20T11:51:53.202013-07:00
ORCLPDB1(3):About to zero out original file "/home/db19c/oradata/ORCLCDB/ORCLSTBY/3CE209FE44CDECAEE06394A5500AE99C/datafile/o1_mf_enctest_npzhqj8h_.dbf"
2026-01-20T18:52:25.254096+00:00
 rfs (PID:823518): Possible network disconnect with primary database
2026-01-20T18:52:25.258563+00:00
 rfs (PID:823518): while processing B-1209738101.T-1.S-412 BNUM:110516 BCNT:1
2026-01-20T18:52:25.259701+00:00
 rfs (PID:823516): Possible network disconnect with primary database
 rfs (PID:823518): Current process action IDLE, elapsed idle time 0
2026-01-20T18:52:25.266055+00:00
 rfs (PID:823516): while processing B-1209738101.T-1.S-412 BNUM:0 BCNT:0
2026-01-20T18:52:25.267216+00:00
 rfs (PID:823518): RFS client ASYNC ORL SINGLE (PID:823296)
2026-01-20T18:52:25.268790+00:00
 rfs (PID:823516): Current process action IDLE, elapsed idle time 32
 rfs (PID:823516): RFS client GAP MANAGER (PID:822694)
2026-01-20T11:53:04.841560-07:00
ORCLPDB1(3):Successfully zero'ed out original file "/home/db19c/oradata/ORCLCDB/ORCLSTBY/3CE209FE44CDECAEE06394A5500AE99C/datafile/o1_mf_enctest_npzhqj8h_.dbf"
2026-01-20T11:53:06.478227-07:00
ORCLPDB1(3):Successfully deleted original file "/home/db19c/oradata/ORCLCDB/ORCLSTBY/3CE209FE44CDECAEE06394A5500AE99C/datafile/o1_mf_enctest_npzhqj8h_.dbf"
2026-01-20T11:53:06.486711-07:00
ORCLPDB1(3):TDE converting datafile /home/db19c/oradata/ORCLCDB/ORCLSTBY/3CE209FE44CDECAEE06394A5500AE99C/datafile/o1_mf_enctest_npzj2j4t_.dbf (14) to /home/db19c/oradata/ORCLCDB/ORCLSTBY/3CE209FE44CDECAEE06394A5500AE99C/datafile/o1_mf_enctest_%u_.dbf
2026-01-20T11:54:47.101862-07:00
ORCLPDB1(3):Blocks TDE converted for file /home/db19c/oradata/ORCLCDB/ORCLSTBY/3CE209FE44CDECAEE06394A5500AE99C/datafile/o1_mf_enctest_npzmrlhf_.dbf size 1125744
2026-01-20T11:54:47.108829-07:00
ORCLPDB1(3):TDE convert operation committed for file /home/db19c/oradata/ORCLCDB/ORCLSTBY/3CE209FE44CDECAEE06394A5500AE99C/datafile/o1_mf_enctest_npzmrlhf_.dbf
2026-01-20T11:54:49.120370-07:00
ORCLPDB1(3):About to zero out original file "/home/db19c/oradata/ORCLCDB/ORCLSTBY/3CE209FE44CDECAEE06394A5500AE99C/datafile/o1_mf_enctest_npzj2j4t_.dbf"
2026-01-20T11:55:28.313091-07:00
ORCLPDB1(3):Successfully zero'ed out original file "/home/db19c/oradata/ORCLCDB/ORCLSTBY/3CE209FE44CDECAEE06394A5500AE99C/datafile/o1_mf_enctest_npzj2j4t_.dbf"
2026-01-20T11:55:37.980759-07:00
ORCLPDB1(3):Successfully deleted original file "/home/db19c/oradata/ORCLCDB/ORCLSTBY/3CE209FE44CDECAEE06394A5500AE99C/datafile/o1_mf_enctest_npzj2j4t_.dbf"
2026-01-20T11:55:37.989404-07:00
ORCLPDB1(3):TDE converting datafile /home/db19c/oradata/ORCLCDB/ORCLSTBY/3CE209FE44CDECAEE06394A5500AE99C/datafile/o1_mf_enctest_npzj9kff_.dbf (15) to /home/db19c/oradata/ORCLCDB/ORCLSTBY/3CE209FE44CDECAEE06394A5500AE99C/datafile/o1_mf_enctest_%u_.dbf


What is most interesting in the alert logs is that the standby continued to rekey the datafiles for the tablespace. This occurred independently of the primary.

Observation #5 - The primary sends the rekey command through the redo, and the standby database performs the process independently, even after the primary crashed.

Startup Database after crash

After starting up the primary database, I took a look at the encryption status of the tablespace in both the primary and the standby. The primary shows "REKEYING", while the standby successfully finished the operation and shows as NORMAL with AES256 encryption.


select ts#,
       ENCRYPTIONALG,
       ENCRYPTEDTS,
       status,
       con_id
from v$encrypted_tablespaces;

Primary Database (aborted)

      TS# ENCRYPT ENC STATUS         CON_ID
---------- ------- --- ---------- ----------
         6 AES256  YES REKEYING            3

Standby Database (continued operation)

      TS# ENCRYPT ENC STATUS         CON_ID
---------- ------- --- ---------- ----------
         6 AES256  YES NORMAL              3


Now I am going to look at the datafiles 
-rw-r-----+ 1 oracle oracle 15519981568 Jan 20 11:52 o1_mf_enctest_npzhqh0w_.dbf
-rw-r-----+ 1 oracle oracle  9222103040 Jan 20 12:10 o1_mf_enctest_npzj2gdb_.dbf
-rw-r-----+ 1 oracle oracle 11629633536 Jan 20 12:10 o1_mf_enctest_npzj9f4k_.dbf
-rw-r-----+ 1 oracle oracle 13736353792 Jan 20 12:10 o1_mf_enctest_npzjl4dz_.dbf
-rw-r-----+ 1 oracle oracle 15519981568 Jan 20 12:10 o1_mf_enctest_npzmdoc8_.dbf

Since I aborted the database while it was zeroing out the datafile, there are two copies of that datafile.
At this point, since the rekey operation has started, my only choice is to "FINISH" the operation. This is a manual step that restarts the process.

alter tablespace ENCTEST ENCRYPTION FINISH rekey;

Observation #6 - Once you start an encryption operation, it must finish. There is no backing out.

Once the rekey operation finished, I again looked at the datafiles. The original datafile that was in process when I aborted the database is still there. If an online encryption operation is in flight during a crash, you may be left with an orphan file that must be manually deleted.

-rw-r-----+ 1 oracle oracle 15519981568 Jan 20 11:52 o1_mf_enctest_npzhqh0w_.dbf
-rw-r-----+ 1 oracle oracle 15519981568 Jan 20 12:10 o1_mf_enctest_npzmdoc8_.dbf
-rw-r-----+ 1 oracle oracle  9222103040 Jan 20 12:27 o1_mf_enctest_npzopwfg_.dbf
-rw-r-----+ 1 oracle oracle 11629633536 Jan 20 12:30 o1_mf_enctest_npzovdhd_.dbf
-rw-r-----+ 1 oracle oracle 13736353792 Jan 20 12:34 o1_mf_enctest_npzp0zd5_.dbf