The latest ZFS release (8.8.63) contains two new features associated with File Retention Lock:
File retention (deletion or hold) after file expiration
Allow permission changes on retained files.
File retention on expiry policy
This new setting for projects/shares defaults to "off", which preserves the normal behavior: when a file's retention lock expires the file is unlocked but remains on the filesystem. To delete a file you must wait until its retention period expires and then delete it yourself.
There are two new settings you can use to change this behavior for locked files.
Delete
When set to "Delete", files will be immediately deleted when their retention lock expires. This can be very useful if you want files to be automatically cleaned up at the end of their retention without having to create a deletion process.
There are a few items to be aware of with the automatic deletion process.
DO NOT use this with an RMAN retention window. Customers typically use a weekly full/daily incremental backup strategy with RMAN. With this strategy, a week's worth of backups (all dependent on the oldest full backup) are deleted together. Deleting backup pieces as soon as their individual locks expire would remove backups too soon. Even with archival backups I recommend letting RMAN perform the deletions (see the sketch after these notes); otherwise you risk having a file deleted too early.
Be careful changing this setting on an existing share. The setting takes effect immediately and affects ALL files that have a retention lock. Any files that were locked but whose retention lock has already expired will be deleted as soon as this setting is applied.
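For reference, here is a minimal sketch of letting RMAN own the deletions: configure an RMAN recovery window that is at least as long as the file retention period on the share, and run DELETE OBSOLETE periodically, so that a full backup and all of its dependent incrementals fall out of the recovery window (and out of their retention locks) together. The 14-day window below is just an example value.
RMAN> CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 14 DAYS;
RMAN> DELETE NOPROMPT OBSOLETE;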
Hold
When set to "Hold", any files that have, or have had a retention lock will be affected. This setting immediately prevents the deletion of all retention locked files until the hold is removed regardless of when the lock is set to expire. Keep in mind that while a hold is in place, the files still have a retention lock with an expiration date.
Removing the hold: When you remove the hold, the normal expiration date takes effect again. If you remove the hold by changing the expiry policy to "Delete", ALL files with an expired retention lock are immediately deleted. If you change the expiry policy to "Off", the files remain, and you must delete them manually.
NOTE: Be very careful when changing the Expiry Policy. Unlike the other file retention settings, the new setting immediately affects existing files, not just new files going forward.
Allow permission changes on retained files
What happens normally: When a file has a retention lock set, the file is protected both from being deleted AND from being updated. Because you are not allowed to update the file's permissions, you cannot change them from the default of -r--r-----+ while the file is locked.
This could be an issue depending on what type of file you are protecting. There are some cases where you want to make the file either:
executable, not just read-only, or
readable by any user.
When you attempted to make such a change to a locked file, the update would fail with "Operation not permitted".
[oracle@ssh-server rmanbackups]$ ls -al testfile
-r--r-----+ 1 oracle oinstall 792 Dec 4 20:26 testfile
[oracle@ssh-server rmanbackups]$ chmod 550 testfile
chmod: changing permissions of 'testfile': Operation not permitted
[oracle@ssh-server rmanbackups]$ chmod 444 testfile
chmod: changing permissions of 'testfile': Operation not permitted
[oracle@ssh-server rmanbackups]$
What this setting does: When you check the setting "Allow permission changes on retained files", you are IMMEDIATELY able to change the permissions on files that are locked. The files are still protected from being made writable, but you can adjust both the "r" (read) and "x" (execute) bits for all users.
NOTE: this setting takes effect immediately and affects all currently locked files, regardless of when they were created.
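For illustration, once this setting is enabled the chmod that failed above should succeed, while any attempt to add write permission should still be refused. A hypothetical session (based on the behavior described above, not a captured transcript) would look like:
[oracle@ssh-server rmanbackups]$ chmod 550 testfile
[oracle@ssh-server rmanbackups]$ ls -al testfile
-r-xr-x---+ 1 oracle oinstall 792 Dec 4 20:26 testfile
[oracle@ssh-server rmanbackups]$ chmod 660 testfile
chmod: changing permissions of 'testfile': Operation not permitted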
One topic that has been coming up a lot as customers look at options for offsite protected backups is the use of the Oracle Database Backup Cloud Service. This service can be used either directly from the database itself, leveraging an RMAN tape library, or by performing a copy-to-cloud from the ZDLRA. In this post I will try to consolidate all the information I can find on this topic to get you started.
Overview
The best place to start is by downloading and reading through this technical brief.
This document walks you through what the service is and how to implement it. Before you go forward with the Backup Cloud Service I suggest you download the install package and go through how to install it.
The key points I saw in this document are:
RMAN encryption is mandatory - In this brief you will see that backups sent to OCI MUST be encrypted, and the brief explains how to create an encrypted backup. The Backup Cloud Service includes the use of encryption and compression (beyond basic compression) without requiring the ASO or ACO license.
How to install the client files - The brief explains the parameters that are needed to install the client files, and what the client files are that get installed. I will go into more detail later on explaining additional features that have been added recently.
Config file settings including host - The document explains the contents of the configuration file used by the Backup Cloud Service library. It also explains how to determine the name of the host (OCI endpoint) based on the region you are sending the backups to.
Channel configuration example - There is an example channel configuration to show you how to connect to the service.
Best practices - The document includes sample scripts and best practices to use when using the Backup Cloud Service.
Lifecycle policies and storage tiers - This is an important feature of the Backup Cloud Service, especially for long-term archival backups. You most likely want to have backups automatically moved to low-cost archival storage after they are uploaded to OCI.
NOTE: When using lifecycle policies to manage the storage tiers, it is best to set the "-enableArchiving" and "-archiveAfterBackup" parameters when installing the backup module for a new bucket. There are small metadata files that MUST remain in standard storage, and the install module creates a lifecycle rule on the bucket that properly archives backup pieces while leaving the metadata in standard storage.
Download
The version of the library on OTN (at the time I am writing this) is NOT the current release of the library, and that version does not support retention lock of objects.
Documentation on the newer features can be found here, documentation on using retention lock can be found here, and there is an oci_readme.txt file that contains all the available parameters.
Updates
There have been a few updates since the tech brief was written, and I will summarize the important ones here. I also spoke with the PM, who is working on an updated brief that will contain this new information.
newRSAKeyPair - The installer is now able to generate the key pair for you, making it much easier to create a new key pair. To have the installer ONLY create a new key pair, just pass it the "walletDir" parameter. The installer will generate both a public and a private key and place them in the walletDir (see below).
/u01/app/oracle/product/19c/dbhome_1/jdk/bin/java -jar oci_install.jar -newRSAKeyPair -walletDir /home/oracle/oci/wallet
Oracle Database Cloud Backup Module Install Tool, build 19.18.0.0.0DBBKPCSBP_2023-09-21
OCI API signing keys are created:
PRIVATE KEY --> /home/oracle/oci/wallet/oci_pvt
PUBLIC KEY --> /home/oracle/oci/wallet/oci_pub
Please upload the public key in the OCI console.
Once you generate the public/private key pair, you can upload the public key to the OCI console. The console will show you the fingerprint, and you can then execute the installer using the private key file.
"immutable-bucket" and "temp-metadata-bucket" - The biggest addition to library is the ability to support the use of retention rules on buckets containing backups. The uploading of backups is monitored by using a "heartbeat" file, and this file is deleted when the upload is successful. Because all objects in a bucket are locked, the "heartbeat" object must be managed from a second bucket without retention rules. This is the temp-metadata-bucket. When using retention rules you MUST have both buckets set in the config file.
NOTE: I ran into 2 issues when executing this script.
1) When trying to execute the jar file, I used the default java version in my OCI tenancy, located in "/usr/bin". The installer returned a java error.
In order to execute the installer properly, I used the java executable located in $ORACLE_HOME/jdk/bin.
2) When executing the jar file with my own RSA key, which I had previously used with OCI object storage, I received a java error.
Exception in thread "main" java.lang.RuntimeException: Could not produce a private key
        at oracle.backup.util.FileDownload.encode(FileDownload.java:823)
        at oracle.backup.util.FileDownload.addBmcAuthHeader(FileDownload.java:647)
        at oracle.backup.util.FileDownload.addHttpAuthHeader(FileDownload.java:169)
        at oracle.backup.util.FileDownload.addHttpAuthHeader(FileDownload.java:151)
        at oracle.backup.opc.install.BmcConfig.initBmcConnection(BmcConfig.java:437)
        at oracle.backup.opc.install.BmcConfig.initBmcConnection(BmcConfig.java:428)
        at oracle.backup.opc.install.BmcConfig.testConnection(BmcConfig.java:393)
        at oracle.backup.opc.install.BmcConfig.doBmcConfig(BmcConfig.java:250)
        at oracle.backup.opc.install.BmcConfig.main(BmcConfig.java:242)
Caused by: java.security.spec.InvalidKeySpecException: java.security.InvalidKeyException: IOException : algid parse error, not a sequence
I found that this was caused by the PKCS format. I was using a PKCS1 key, and the java installer was looking for a PKCS8 key. The header in my private key file contained "BEGIN RSA PRIVATE KEY".
In order to convert my private PKCS1 key "oci_api_key.pem" to a PKCS8 key "pkcs8.key", I ran the command below.
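This is the standard openssl PKCS1-to-PKCS8 conversion (note that it assumes the private key has no passphrase):
openssl pkcs8 -topk8 -inform PEM -outform PEM -nocrypt -in oci_api_key.pem -out pkcs8.key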
The next step is to execute the install. For my install I also wanted to configure a lifecycle rule that would archive backups after 14 days. To implement this, I had the script create a new bucket, "bsgtest". Below are the parameters I used (note that I used "..." to obfuscate the OCIDs).
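The invocation follows the same pattern as the key-generation step. A sketch of the command is shown below; the region endpoint, OCIDs, fingerprint, and bucket name are placeholders, and you should check the oci_readme.txt for the full parameter list and exact syntax of each option.
/u01/app/oracle/product/19c/dbhome_1/jdk/bin/java -jar oci_install.jar \
  -host https://objectstorage.us-ashburn-1.oraclecloud.com \
  -pvtKeyFile /home/oracle/oci/wallet/oci_pvt \
  -pubFingerPrint ... \
  -tOCID ocid1.tenancy.oc1... \
  -uOCID ocid1.user.oc1... \
  -cOCID ocid1.compartment.oc1... \
  -bucket bsgtest \
  -walletDir /home/oracle/oci/wallet \
  -libDir /home/oracle/oci/lib \
  -configFile /home/oracle/oci/config/backupconfig.ora \
  -enableArchiving TRUE \
  -archiveAfterBackup 14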
This created a new bucket "bsgtest" containing a lifecycle rule.
I then added a 14-day retention rule to this bucket and created a second bucket, "bsgtest_meta", for the temporary metadata. If you want to make this rule permanent, you enable the retention rule lock, which I highlighted in the screenshot below.
I then updated the config file to use the metadata bucket, since I had set a retention rule on the main bucket. Note that there is also a parameter that determines how long archived objects are cached in standard storage before they are returned to archival storage.
Once you execute the installer you will be able to begin backing up to OCI object storage. Don't forget that you need to:
Change the default device type to SBT_TAPE
Change the compression algorithm. I recommend "medium" compression.
Configure encryption for database ON.
Configure the device type SBT_TAPE to send COMPRESSED BACKUPSET to optimize throughput and storage in OCI.
Create a default channel configuration for SBT_TAPE (or allocate channels manually) that uses the downloaded library and points to the configuration file for the database.
If you do not use ACO and don't have a wallet, manually set an encryption password in your session.
I recommend sending a "small" backup piece first to ensure that everything is properly configured. My favorite command is
RMAN>backup incremental level 0 datafile 1;
Datafile 1 is always the system tablespace.
Below is what my RMAN configuration looks like, showing specifically what I changed to use the Backup Cloud Service.
CONFIGURE BACKUP OPTIMIZATION ON;
CONFIGURE DEFAULT DEVICE TYPE TO 'SBT_TAPE';
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE SBT_TAPE TO '%F'; # default
CONFIGURE DEVICE TYPE 'SBT_TAPE' PARALLELISM 4 BACKUP TYPE TO COMPRESSED BACKUPSET;
CONFIGURE CHANNEL DEVICE TYPE 'SBT_TAPE' PARMS 'SBT_LIBRARY=/home/oracle/oci/lib/libopc.so ENV=(OPC_PFILE=/home/oracle/oci/config/backupconfig.ora)';
CONFIGURE ENCRYPTION FOR DATABASE ON;
CONFIGURE ENCRYPTION ALGORITHM 'AES256'; # default
CONFIGURE COMPRESSION ALGORITHM 'MEDIUM' AS OF RELEASE 'DEFAULT' OPTIMIZE FOR LOAD TRUE;
Network Performance
One of the big areas that comes up when using the Backup Cloud Service is understanding the network capabilities.
The best place to start is with this MOS note. Below is an example of running a NETTEST through an allocated channel; note that the results come back wrapped in an RMAN error stack, but the KBHS messages show the test completed successfully.
RMAN> run {
2> allocate channel foo device type sbt PARMS 'SBT_LIBRARY=/home/oracle/oci/lib/libopc.so ENV=(OPC_PFILE=/home/oracle/oci/config/backupconfig.ora)';
3> send channel foo 'NETTEST 1000M';
4> }
allocated channel: foo
channel foo: SID=431 device type=SBT_TAPE
channel foo: Oracle Database Backup Service Library VER=19.0.0.1
released channel: foo
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of send command at 11/22/2023 14:12:04
ORA-19559: error sending device command: NETTEST 1000M
ORA-19557: device error, device type: SBT_TAPE, device name:
ORA-27194: skgfdvcmd: sbtcommand returned error
ORA-19511: non RMAN, but media manager or vendor specific failure, error text:
KBHS-00402: NETTEST sucessfully completed
KBHS-00401: NETTEST RESTORE: 1048576000 bytes received in 15068283 microseconds
KBHS-00400: NETTEST BACKUP: 1048576000 bytes sent
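For this run, the restore leg of the NETTEST moved 1,048,576,000 bytes in 15,068,283 microseconds, which works out to roughly 66 MiB/s (just over half a gigabit per second) between this host and OCI object storage.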
Executing Backups
Now, to put it all together, I am going to execute a backup of datafile 1. My database is encrypted, so I am going to set a password along with the encryption key.
set encryption on identified by oracle;
executing command: SET encryption
RMAN> backup incremental level 0 datafile 1;
Starting backup at 22-NOV-23
allocated channel: ORA_SBT_TAPE_1
channel ORA_SBT_TAPE_1: SID=404 device type=SBT_TAPE
channel ORA_SBT_TAPE_1: Oracle Database Backup Service Library VER=19.0.0.1
allocated channel: ORA_SBT_TAPE_2
channel ORA_SBT_TAPE_2: SID=494 device type=SBT_TAPE
channel ORA_SBT_TAPE_2: Oracle Database Backup Service Library VER=19.0.0.1
allocated channel: ORA_SBT_TAPE_3
channel ORA_SBT_TAPE_3: SID=599 device type=SBT_TAPE
channel ORA_SBT_TAPE_3: Oracle Database Backup Service Library VER=19.0.0.1
allocated channel: ORA_SBT_TAPE_4
channel ORA_SBT_TAPE_4: SID=691 device type=SBT_TAPE
channel ORA_SBT_TAPE_4: Oracle Database Backup Service Library VER=19.0.0.1
channel ORA_SBT_TAPE_1: starting incremental level 0 datafile backup set
channel ORA_SBT_TAPE_1: specifying datafile(s) in backup set
input datafile file number=00001 name=/u01/app/oracle/oradata/ACMEDBP/system01.dbf
channel ORA_SBT_TAPE_1: starting piece 1 at 22-NOV-23
channel ORA_SBT_TAPE_1: finished piece 1 at 22-NOV-23
piece handle=8t2c4fmi_1309_1_1 tag=TAG20231122T150554 comment=API Version 2.0,MMS Version 19.0.0.1
channel ORA_SBT_TAPE_1: backup set complete, elapsed time: 00:00:35
Finished backup at 22-NOV-23
Starting Control File and SPFILE Autobackup at 22-NOV-23
piece handle=c-1654679317-20231122-01 comment=API Version 2.0,MMS Version 19.0.0.1
Finished Control File and SPFILE Autobackup at 22-NOV-23
Restoring
Restoring is very easy as long as you have the entries in your controlfile. If you don't, there is a script included in the installation that can catalog the backup pieces, and I go through that process here. The script also allows you to display what's in the bucket.
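If the backup pieces are already cataloged, a restore is just a normal RMAN restore through an SBT channel. Here is a minimal sketch, reusing the channel settings from above and the password set at backup time, with the database mounted (datafile 1 is the SYSTEM tablespace, so it cannot be restored while the database is open):
RMAN> set decryption identified by oracle;
RMAN> run {
2> allocate channel foo device type sbt PARMS 'SBT_LIBRARY=/home/oracle/oci/lib/libopc.so ENV=(OPC_PFILE=/home/oracle/oci/config/backupconfig.ora)';
3> restore datafile 1;
4> recover datafile 1;
5> }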
Buckets 1 vs many
If you look at what was created when executing the backup, you will see that there is a set format for the backup pieces. Below are the 2 backup pieces that I created:
8t2c4fmi_1309_1_1 - This is the backup of datafile 1 for my database ACMEDBP
c-1654679317-20231122-01 - This is the controlfile backup for this database
Notice that the DB name is not in the name of the backup pieces, or in the visible nesting.
If you think about a medium-sized database (let's say 100 datafiles) that has 2 weeks of backups (14 days), you would have 1,400 different backup pieces for the datafiles within the "sbt_catalog" directory.
My recommendation is to group small databases together in the same bucket, keeping the number of backup pieces at a manageable level.
For a large database (1,000+ datafiles), you can see how a 30-day retention could become 30,000+ backup pieces.
Having a large number of objects within a bucket increases the time it takes to report the available backup pieces, and there is no way to determine which database an object belongs to without looking at its metadata.
Keep this in mind when considering how many buckets to create.
Oracle DB Recovery Service recently added a new feature to protect backups from being prematurely deleted, even by a tenancy administrator. This new feature adds a retention lock to the Backup Retention Period at the policy level. The image below shows the new settings that you see within the protection policy.
Enabling retention lock
The recovery service comes with some default policies that appear as "oracle defined" policy types:
Name       Backup retention period
Platinum   95 days
Gold       65 days
Silver     35 days
Bronze     14 days
These policies can't be changed, and they do not enable retention lock.
In order to implement a retention lock you need to create a new protection policy or update an existing user defined protection policy.
Step #1 Set/Adjust "Backup retention period"
If you are creating a new "user defined" protection policy, you need to set the backup retention period to a number of days between 14 and 95. You should also take this opportunity to adjust the backup retention of an existing policy, if appropriate, before it is locked.
NOTE: Once a retention lock on the protection policy is activated (discussed in step #3), the backup retention period cannot be decreased, it can only be increased.
Step #2 Click on "enable retention lock"
This step is pretty straightforward, but the most important item to know is that the retention lock is not immediately in effect. Much like the retention lock that can be set on object storage, there is a minimum period of at least 14 days before the lock becomes "active".
Note: Once the grace period has expired for the policy (explained later in this blog post), the retention lock is permanent and cannot be removed.
Step #3 Set "Scheduled lock time"
As I said in the previous step, the lock isn't immediately active. In this step you set the future date/time at which the lock becomes active, and this date/time must be at least 14 days in the future. This provides a grace period that delays when the lock on the policy becomes active. Up until the lock activation date/time, you can push the scheduled lock time further into the future if it becomes necessary to delay activation.
Grace Period
I wanted to make sure I explain what happens with this grace period so that you can plan accordingly.
If you change an existing "user defined" policy to enable the retention lock, any databases that are a member of this policy will not have locked backups until the scheduled lock date/time activates the lock.
If you add databases to a protection policy that has a retention lock enabled, the backups will not be locked until whichever of the following is later:
Scheduled lock time for the policy if the retention lock has not yet activated.
14 days after the database is added to the protection policy.
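For example, if the policy's scheduled lock time is January 20th and a database is added to that policy on January 10th, the database's backups become locked on January 24th (14 days after it was added), since that is the later of the two dates.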
Databases can be removed from a retention locked protection policy during this grace period.
If the policy itself is still within its grace period before the lock activates, the backup retention period can still be adjusted down for the protection policy.
NOTE: This 14 day grace period allows you to review the estimated space needed. On the protected database summary page, for each database, you can see the "projected space for policy" in the Space Usage section. This value can be used to estimate the "locked backup" utilization.
What happens with a retention lock?
Once the grace period expires the backups for the protected database are time locked and can't be prematurely deleted.
The backups are protected by the following rules.
1. The database cannot be moved to another policy. No user within the tenancy, including an administrator, can remove a database from its retention-locked policy. If it becomes necessary to move a database to another policy, an SR needs to be raised, and security policies are followed to ensure that this is an approved change.
2. There is always a 14 day grace period in which changes can be made before the backups become locked. This is your window to verify the backup storage usage required before the lock activates.
3. Even if you check the "72 hour termination option" on the database, backups are locked throughout the retention window.
Comments:
This is a great new feature that protects backups from being deleted by anyone in the tenancy, including tenancy administrators. It provides an extra layer of security against an attack with compromised credentials. Because the lock is permanent, always use the 14-day grace period to ensure the usage and duration are appropriate for your database.