Friday, April 10, 2026

Automating cloning of your Exadata Database Service on Dedicated Infrastructure database

One of the questions I often get from customers is 

"How do I automate the cloning of a production database backup to a non-prod copy?  This is something we do often."

There are three different OCI CLI commands that appear to do the exact same thing. Specifically, when it comes to restoring a database from a backup, the OCI CLI gives us three primary paths.

The "secret sauce" to choosing the right command is understanding where the database is going and ensuring you have the right OCIDs for your target infrastructure. Let's break down the full parameter sets you need to keep your automation from failing.

These are all "oci db database" commands.


The Restore Matrix: Choosing Your Command

Command                     | Infrastructure Target | Required Target ID | Primary Use Case
create-database-from-backup | Exadata / C@C         | --db-home-id       | Restoring into an existing Exadata Home.
create-from-backup          | Base DB (VM/BM)       | --db-system-id     | Adding a DB to an existing DB System.
create --source DB_BACKUP   | Base DB (VM/BM)       | --compartment-id   | Building a NEW DB System from a backup.

1. The Exadata Full Set: create-database-from-backup

This command uses a JSON object for the --database flag. This is where you define the identity of the clone within the Exadata rack along with the backup you want to use to create the new database.

{
  "adminPassword": "YourPassword123#",
  "backupId": "ocid1.dbbackup.oc1...",
  "backupTDEPassword": "SourceWalletPassword",
  "dbName": "EXACLON",
  "dbUniqueName": "EXACLON_PRD",
  "sidPrefix": "EXACL",
  "pluggableDatabases": ["PDB1", "PDB2"],
  "dbHomeId": "ocid1.dbhome.oc1...",
  "storageSizeDetails": {
    "dataStorageSizeInGBs": 256,
    "recoStorageSizeInGBs": 512
  },
  "sourceEncryptionKeyLocationDetails": {
    "providerType": "AWS|AZURE|GCP|EXTERNAL",
    "awsEncryptionKeyId": "string",
    "hsmPassword": "string"
  }
}
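As a minimal invocation sketch (the OCIDs and passwords are the placeholders from the JSON above), the payload can be written to a file and passed with the CLI's file:// syntax. The command itself is printed rather than executed so the sketch is safe to run anywhere:

```shell
# Write the --database payload to a file (placeholder OCIDs from above).
cat > /tmp/exa_clone.json <<'EOF'
{
  "adminPassword": "YourPassword123#",
  "backupId": "ocid1.dbbackup.oc1...",
  "backupTDEPassword": "SourceWalletPassword",
  "dbName": "EXACLON",
  "dbUniqueName": "EXACLON_PRD",
  "dbHomeId": "ocid1.dbhome.oc1..."
}
EOF

# Print the command; drop the leading 'echo' to execute it for real.
echo "oci db database create-database-from-backup --database file:///tmp/exa_clone.json"
```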

2. The VM/BM In-Place Clone: create-from-backup

This is for standard Virtual Machine and Bare Metal shapes. You must provide the dbSystemId (the OCID of the DB System) to tell OCI exactly where to deploy the restored data.

{
  "adminPassword": "NewAdminPassword123#",
  "backupId": "ocid1.dbbackup.oc1...",
  "backupTdePassword": "SourceWalletPassword",
  "dbSystemId": "ocid1.dbsystem.oc1.iad.example_vm_ocid",
  "dbName": "VMCLON",
  "dbUniqueName": "VMCLON_DEV",
  "sidPrefix": "VMCL",
  "kmsKeyId": "ocid1.key.oc1...",
  "dataStorageSizeInGbs": 256,
  "recoStorageSizeInGbs": 512,
  "databaseSoftwareImageId": "ocid1.dbsoftwareimage.oc1...",
  "isUnifiedAuditingEnabled": true,
  "waitForState": ["AVAILABLE"],
  "maxWaitSeconds": 3600
}
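The same call can also be sketched with individual flags instead of a JSON body. A sketch only, assuming the CLI's usual camelCase-to-kebab-case flag mapping (verify with `oci db database create-from-backup --help`); the command is printed, not executed:

```shell
# Flag names are assumed from the camelCase keys above; check --help first.
# 'echo ... | tee' prints the command instead of running it.
echo oci db database create-from-backup \
  --backup-id "ocid1.dbbackup.oc1..." \
  --db-system-id "ocid1.dbsystem.oc1.iad.example_vm_ocid" \
  --admin-password "NewAdminPassword123#" \
  --backup-tde-password "SourceWalletPassword" \
  --wait-for-state AVAILABLE --max-wait-seconds 3600 | tee /tmp/vm_clone_cmd.txt
```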

3. Provisioning New Infra: create with --source

This is the "All-In-One" command. It creates the VM Cluster or DB System infrastructure from scratch. Because of this, it requires networking IDs (VCN/Subnet) and hardware shapes.

{
  "source": "DB_BACKUP",
  "backupId": "ocid1.dbbackup.oc1...",
  "tdeWalletPassword": "SourceWalletPassword",
  "compartmentId": "ocid1.compartment.oc1...",
  "subnetId": "ocid1.subnet.oc1...",
  "vmClusterId": "ocid1.vmcluster.oc1...",
  "dbSystemId": "ocid1.dbsystem.oc1...",
  "dbHomeId": "ocid1.dbhome.oc1...",
  "dbName": "NEWDB",
  "dbUniqueName": "NEWDB_U",
  "shape": "VM.Standard.E4.Flex",
  "vaultId": "ocid1.vault.oc1...",
  "kmsKeyId": "ocid1.key.oc1...",
  "dbWorkload": "OLTP",
  "autoBackupEnabled": true,
  "waitForState": ["AVAILABLE"]
}
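Because this command's parameter set is so large, it can help to let the CLI generate a skeleton JSON with every supported parameter and then fill it in. The commands are printed rather than executed so the sketch runs without the OCI CLI installed:

```shell
# --generate-full-command-json-input and --from-json are standard OCI CLI
# features for working with large parameter sets.
tee /tmp/newdb_cmds.txt <<'EOF'
oci db database create --generate-full-command-json-input > create_db.json
# ...edit create_db.json, then:
oci db database create --from-json file://create_db.json --wait-for-state AVAILABLE
EOF
```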

Key Identification Checklist:
  • Exadata: You must have the --db-home-id of an existing home on the rack.
  • VM In-Place: You need the --db-system-id of the running VM instance.
  • Identity: Every command requires a dbName (8 chars max) and dbUniqueName. For automation, use the sidPrefix to prevent instance ID collisions.
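The checklist above can be folded into a small dispatch helper. A hypothetical sketch (pure string logic, so it is safe to run anywhere) that picks the subcommand from the OCID type of the target you were handed:

```shell
# Hypothetical helper: map the target OCID type to the right subcommand.
pick_restore_command() {
  case "$1" in
    ocid1.dbhome.*)      echo "oci db database create-database-from-backup" ;;
    ocid1.dbsystem.*)    echo "oci db database create-from-backup" ;;
    ocid1.compartment.*) echo "oci db database create --source DB_BACKUP" ;;
    *)                   echo "unknown target type" >&2; return 1 ;;
  esac
}

pick_restore_command "ocid1.dbhome.oc1..exampleexadatahome"
```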



Mastering TDE & Key Management

One of the biggest hurdles in database cloning is handling the Transparent Data Encryption (TDE) layer. If your source backup was encrypted using a key from a different cloud provider or a local HSM, you must tell OCI how to decrypt it during the restore process.

1. Cross-Cloud & External Key Providers

When using the create-database-from-backup command (Exadata), you use the sourceEncryptionKeyLocationDetails parameter. This is a JSON object where you must specify the providerType and the corresponding Key OCID or ID from the source provider.

Provider Type | Parameter Required                 | Description
AWS           | awsEncryptionKeyId                 | The ARN of the AWS KMS key used on the source.
AZURE         | azureEncryptionKeyId               | The Azure Key Vault key URI.
GCP           | googleCloudProviderEncryptionKeyId | The fully qualified resource name of the GCP KMS key.
EXTERNAL      | hsmPassword                        | Used for backups protected by an on-premises Hardware Security Module.

2. Native OCI Vault Integration

For native OCI restores, you have two choices: use the standard Oracle-managed keys (default) or use your own keys via OCI Vault (KMS). If you want to use your own keys, you must provide the kmsKeyId and, in some cases, the vaultId.

  • kmsKeyId: The OCID of the Master Encryption Key in the OCI Vault.
  • kmsKeyVersionId: (Optional) Use this if you need to pin the restore to a specific version of your key.
  • vaultId: Required by the create command to identify which Vault the key resides in.
Important Security Note: If you are restoring a database into a different compartment or tenancy than the source, your Dynamic Group for the target DB System must have READ and USE permissions for the Vault and Key. Without these IAM policies, the restore will fail immediately with a "Not Authorized" error.
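As a hedged sketch of what those IAM policies look like (the dynamic-group and compartment names here are hypothetical placeholders):

```text
Allow dynamic-group target-db-dynamic-group to read vaults in compartment source-compartment
Allow dynamic-group target-db-dynamic-group to use keys in compartment source-compartment
```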

By correctly mapping these key parameters, you ensure that your data remains encrypted and compliant throughout its entire lifecycle, even as it moves across cloud boundaries.


Automating with Infrastructure as Code (Terraform)

While the CLI is great for one-off tasks, most of my customers eventually want to bake these clones into their CI/CD pipelines. In Terraform, we use the oci_database_database resource. The "magic" happens in the source attribute and the database_details block.

resource "oci_database_database" "cloned_db" {
    # This maps to the --source flag in the CLI
    source = "DB_BACKUP"

    database {
        admin_password      = var.database_admin_password
        db_name             = "CLONEDB"
        db_unique_name      = "CLONEDB_IAD"
        character_set       = "AL32UTF8"
        ncharacter_set      = "AL16UTF16"
        db_workload         = "OLTP"
        
        # TDE Management
        tde_wallet_password = var.source_tde_password
        kms_key_id          = var.target_vault_key_ocid
    }

    # Target Infrastructure IDs
    db_home_id   = var.target_db_home_ocid
    database_id  = var.source_database_backup_ocid

    # Best Practice: Ignore password changes after initial provision
    lifecycle {
        ignore_changes = [database[0].admin_password]
    }
}

Terraform Pro-Tip: Always use the ignore_changes lifecycle hook for the admin_password. Once the database is restored, security policies often require a password rotation. Without this hook, Terraform will try to revert the password to the plain-text value in your .tfvars every time you run an update!
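If you would rather not hard-code a backup OCID in your pipeline, the provider can look backups up for you. A hypothetical sketch using what I believe is the oci_database_backups data source; verify the attribute names against the oci provider documentation before relying on it:

```hcl
# Hypothetical sketch: discover the source database's backups instead of
# hard-coding a backup OCID; var.source_database_ocid is an assumption.
data "oci_database_backups" "src" {
  database_id = var.source_database_ocid
}
```

The newest element of the returned backups list could then replace a hard-coded variable like var.source_database_backup_ocid.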

Thursday, April 2, 2026

Autonomous Recovery Service Live Lab available

One of the latest additions to Oracle's Live Labs is the Autonomous Recovery Service. This lab allows you to understand how to utilize the Autonomous Recovery Service for backing up your Oracle database in the cloud, even in a multicloud environment.

If you haven't used it, Live Labs is Oracle's free, hands-on platform which allows you to go through a workshop or lab to learn more about Oracle's products.

Start Here <-------- Link to this lab


The nice thing about this lab is that you can utilize Oracle's sandbox to learn about the Autonomous Recovery Service (RCV) without the requirement of accessing your OCI/Multicloud tenancy.

Also, the features that are demonstrated in this lab are the same regardless of whether you use the Autonomous Recovery Service in OCI or in a multicloud environment.

NOTE:

Keep in mind that it does take time to configure your lab, since the provisioning process performs an initial backup. In my case it took about 60 minutes. You can follow the progress on the "My Reservations" page; once the build completes, the environment is immediately available.

Setup:

Once your tenancy is configured for the lab, you need to log in using the supplied credentials and change the initial password. Be sure to follow the directions and screenshots in the setup portion of the lab before beginning.

Also be sure to note the region and compartment that you will be using, and change the region after logging into the tenancy to the correct region.

Lab 1: Onboarding a database

Once you log into the tenancy and region, you can go through the steps to configure a database to use the Recovery Service.

NOTE: The lab uses the "Base DB service" for the demo but the steps would be the same regardless of the Oracle Database server utilized or the location (OCI, AWS, GCP, Azure, etc.).

In this section you will 

Create a protection policy - There are default protection policies you can use, but most customers choose to create their own for the following reasons:

  • You can choose the exact retention period between 14 and 95 days. Since the service is incremental forever, backup usage is not dependent on a weekly full backup.
  • You can choose the backup location if using multicloud. The default is OCI, and you need to create a protection policy if you want to change the location from the default.
  • You can configure a retention lock. Setting a retention lock is only available when creating your own protection policy.

Configure backups for the existing database - In this section you will view the backup configuration for the database. When the lab environment was provisioned, backups were configured; in this step you will change the protection policy and enable real-time data protection.
Once the configuration changes are saved, you will monitor the update progress.
Lastly, you will view the backup information for this database.

Lab 2: Perform point-in-time restores

The next section of the lab will walk through a point-in-time restore.
You will be guided through connecting to the database directly through "Cloud Shell", and in Cloud Shell you will:
  • Create a new table and insert data into it.
  • Determine the current SCN at this point (with the new table).
  • Delete the table.
  • Abort the database (demonstrating real-time data protection).
  • Delete the database files.
  • Restore the database to the SCN from the second step.
This does take a bit, and you are encouraged to continue to Lab 3 while the restore runs.
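The Cloud Shell portion of that flow can be sketched as the commands you would issue. They are written to a scratch file rather than run; the SQL is standard, but the table name is a hypothetical placeholder:

```shell
# Lab 2's flow as a scratch script; review before running for real.
cat > /tmp/pitr_steps.txt <<'EOF'
-- In SQL*Plus: create test data, note the SCN, then delete the table
CREATE TABLE restore_test (id NUMBER);
INSERT INTO restore_test VALUES (1);
COMMIT;
SELECT current_scn FROM v$database;   -- record this SCN as the restore target
DROP TABLE restore_test;
-- Abort the instance (real-time redo protection covers the recent changes)
SHUTDOWN ABORT
EOF
cat /tmp/pitr_steps.txt
```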

Lab 3: Create an on-demand backup

This lab walks you through the process of dynamically creating an on-demand backup.
On-demand backups can be either:
  1. Kept for the current retention period. This is useful when upgrading or rolling out a new release and you want to create a known restore point. This type of backup is stored in the Recovery Service and will age out with the retention period.
  2. Kept for a long-term retention period. This type of backup goes to Oracle-managed infrequent-access object storage, and you specify how long the backup is kept.

Lab 4: Monitor & Create Alarms

This section of the lab walks you through two additional features that are available with the Autonomous Recovery Service.

Observability - In this section of the lab you explore the metrics that are available to view. The lab demonstrates viewing the data loss exposure in either a chart or a table.

Alarms - This section shows you how to create an alarm that will send out an alert on data loss exposure (for example).


Summary:

This lab is a great way to learn more about the Autonomous Recovery Service by going through the features in Oracle's tenancy.








Tuesday, March 24, 2026

MCP Server for Autonomous Recovery Service

Wouldn't it be nice if I could just use AI and ask my tenancy to tell me about my Autonomous Recovery Service database backups? I want to simply ask questions like ....

Well, you can, and the screenshot below is just part of the answer I got!



Thanks to Harini Gavisiddappa, who created an MCP server within the Oracle MCP server GitHub project, you can do just that.
You can find the project at https://github.com/oracle/mcp

Installing


Here is the streamlined path to getting this running on your local Windows environment.
I chose to put it on my Windows environment because I love using the Claude Chat tool, so I can ask questions in a UI and get nicely formatted reports/output back.


Phase 1: Repository & Environment Setup

A) Download the repository onto my PC


We start by pulling the source from the Oracle MCP repository: https://github.com/oracle/mcp/tree/main/src/oci-recovery-mcp-server




B) Initialize a scoped Python 3.13 environment

 To keep the footprint small and the execution fast, we’ll use `uv` for our Python toolchain.


C) Install the Recovery Server in development mode
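Steps A through C can be sketched as the shell commands involved. The repo path below follows the oracle/mcp project tree; the steps are printed rather than executed so the sketch needs no network access:

```shell
REPO="https://github.com/oracle/mcp.git"
SRV_DIR="mcp/src/oci-recovery-mcp-server"

# Print each step; run them yourself once the paths look right.
printf '%s\n' \
  "git clone $REPO" \
  "cd $SRV_DIR" \
  "uv venv --python 3.13" \
  "uv pip install -e ." | tee /tmp/mcp_setup.txt
```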


Phase 2: Identity & Access Management

Before the server can bridge to your tenancy, you need an active OCI session. This ensures the MCP server inherits your existing IAM permissions.
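A token-based CLI session is one way to satisfy this. The region and profile below are examples, and the commands are printed rather than executed, since `oci session authenticate` opens a browser-based login:

```shell
# 'oci session authenticate' writes a short-lived token profile that the
# MCP server can reuse; 'oci session validate' confirms it is still active.
tee /tmp/oci_session.txt <<'EOF'
oci session authenticate --region us-ashburn-1 --profile-name DEFAULT
oci session validate --profile-name DEFAULT
EOF
```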


Phase 3: Client Integration
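For Claude Desktop, client integration means registering the server in claude_desktop_config.json. A hedged sketch only; the server name, command, and args here are assumptions, so follow the project's README for the exact entry:

```json
{
  "mcpServers": {
    "oracle-recovery": {
      "command": "uv",
      "args": ["--directory", "C:\\path\\to\\mcp\\src\\oci-recovery-mcp-server", "run", "oci-recovery-mcp-server"]
    }
  }
}
```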

Phase 4: Validation

Restart your client. You should now see the oracle-recovery toolset available, and you can verify the connection immediately with a simple natural language query.

Now that it is installed, let's see what it can do

**************************************************

Question 1


List the databases that are actively using the recovery service in my tenancy and include their status, how much storage they are using, whether there are any issues, and break it down by both compartment and tag if tags are assigned.

Total databases: 8 (across 2 compartments)
Storage used: 354.6 GB (all databases)
Protected: 4 (healthy, within SLA)
Alert: 4 (recovery window = 0)

Prod_databases — 2 databases · 139.9 GB
  • db19vdb19v_837_iad [Alert] 20.3 GB used · est. 14.6 GB · 14-day policy · Org: HR · Prod · Recovery window 0 of 14 days (redo logs not streaming or recently enrolled)
  • tt2srctt2src_iad1mz [Protected] 119.6 GB used · est. 112.9 GB · 14-day policy · Org: Finance · Prod · 14.3d retention · redo enabled

Nonprod_databases — 6 databases · 214.7 GB
  • db23cdb23c_npb_iad [Alert] 3.0 GB used · est. 9.1 GB · 14-day policy · Org: Finance · Pre-prod · ⚠ Unprotected window: 70.5 days; no recent backup activity. Requires immediate attention.
  • ORCL_TESTORCL [Alert] 4.1 GB used · est. 11.3 GB · 14-day policy · Environment: Trial · ⚠ Unprotected window: 24.9 days; recovery point significantly stale.
  • clprotclprot [Alert] 10.5 GB used · est. 22.1 GB · 14-day policy · Fleet agent managed · Recovery window 0 of 14 days · retention = 1 sec · min recovery needed 21 days
  • db23aidb23ai_6rm_iad [Protected] 75.6 GB used · est. 68.1 GB · 14-day policy · Org: Finance · Dev · 15.0d retention · policy locked 2023-11-16
  • test2test2_rjh_iad [Protected] 77.2 GB used · est. 74.6 GB · 35-day policy · Org: HR · Dev · 35.8d retention · redo enabled
  • demodbdemodb_tfj_iad [Protected] 44.4 GB used · est. 42.7 GB · 35-day policy · No Oracle-Standard tags · 35.9d retention

By tag — Oracle-Standard · Org
Org      | Databases | Storage  | Status
Finance  | 4         | 198.2 GB | 2 ok, 2 alert
HR       | 2         | 97.5 GB  | 1 ok, 1 alert
Untagged | 3         | 59.0 GB  | 1 ok, 2 alert

By tag — Oracle-Standard · Environment
Environment | Databases | Storage  | Status
Prod        | 2         | 139.9 GB | 1 ok, 1 alert
Dev         | 2         | 152.8 GB | 2 ok
Pre-prod    | 1         | 3.0 GB   | 1 alert
Trial       | 1         | 4.1 GB   | 1 alert
Untagged    | 2         | 54.9 GB  | 1 ok, 1 alert

**************************************************

 Question 2


List out the databases that I have and look at the usage for the last 30 days. Based on that usage, give a line chart showing my space usage by database, and based on the current growth show what the projected usage will be for the next year.


Question 3

Using the tags create a pie chart showing the space usage for all of my backups and show the usage for each tag.




Summary:

Above are some examples of what you can do with the new OCI Recovery Service MCP server with Claude.

Below is a list of the tools available to you to use.

Here are the 19 tools available through the Recovery Service MCP server, grouped by what they do:

Protected Databases

  • list_protected_databases — list databases enrolled in Recovery Service, with metrics and subnet details, filtered by compartment, policy, lifecycle state, etc.
  • get_protected_database — get full details for a single protected database by OCID
  • summarize_protected_database_health — count of healthy / warning / alert / unknown databases in a compartment
  • summarize_protected_database_backup_destination — how databases in a compartment are backed up (Recovery Service vs other destinations)
  • summarize_protected_database_redo_status — how many databases have redo transport on or off

Protection Policies

  • list_protection_policies — list policies in a compartment
  • get_protection_policy — get a single policy by OCID

Recovery Service Subnets

  • list_recovery_service_subnets — list subnets in a compartment
  • get_recovery_service_subnet — get a single subnet by OCID

Backups

  • list_backups — list backups with flexible filters and optional auto-paging
  • get_backup — get a single backup by OCID

Metrics

  • get_recovery_service_metrics — time-series metrics for a compartment or single database; supported metrics are SpaceUsedForRecoveryWindow, ProtectedDatabaseSize, ProtectedDatabaseHealth, and DataLossExposure; resolutions of 1m, 5m, 1h, 1d; aggregations of mean, sum, max, min, count

Storage Summaries

  • summarize_backup_space_used — total backup space in GB across databases in a compartment
  • summarize_protected_database_backup_destination — breakdown by backup destination type

DB Systems & Homes (for enrollment context)

  • list_databases — list databases across DB Homes in a compartment, with backup settings and linked protection policy
  • list_db_homes — list DB Homes in a compartment
  • get_db_home — get a single DB Home by OCID
  • list_db_systems — list DB systems in a compartment
  • get_db_system — get a single DB system by OCID