Tuesday, February 26, 2019

ZDLRA - Space management





When backing up to a ZDLRA you want to optimize your storage allocation to ensure each database can meet its recovery window goal.

You also want to make sure that the ZDLRA doesn't need to aggressively reclaim space.

***  NOTE - This was updated on 9/13/19 as I've learned more.


Values you can set to control space and purging

Mandatory values

  • Recovery Window Goal (RWG) - How far back (in days) do I need to ensure a point in time recovery for my databases. This is set at the policy level.
        ** NOTE. This is a GOAL, not a guarantee.

  • Disk Reserved Space - How much storage do I need to ensure I keep my recovery window goal for this database. This is set at the database level and should be adjusted as the database grows. The sum of Disk_Reserved_Space for all databases cannot be greater than the amount of space available on the storage location.
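Since the reservations for all databases must fit within the storage location, a quick sanity check can be scripted. This is only an illustrative sketch (not a ZDLRA API); the database names and sizes below are hypothetical.

```python
# Check that the summed Disk_Reserved_Space values fit in a storage location.
# Illustrative only -- not an appliance API. All names/numbers are made up.

def reserved_space_fits(reserved_gb_by_db, storage_location_gb):
    """Return (fits, total_reserved_gb) for a set of databases."""
    total = sum(reserved_gb_by_db.values())
    return total <= storage_location_gb, total

# Three databases sharing a 100 TB (102400 GB) storage location.
reserved = {"FINDB": 40000, "HRDB": 25000, "SALESDB": 30000}
fits, total = reserved_space_fits(reserved, 102400)
print(fits, total)   # True 95000 -- 95000 GB reserved of 102400 GB
```

A check like this is worth running whenever you add a database or grow an existing reservation.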

Optional values


  • Max_Retention_Window -  What is the maximum number of days I want to keep backups for.  The MOS note on purging does a great job of documenting what the recommended value is (if you want to set it), and what the current default value is based on RA version.
  • Guaranteed_Copy -  This is set at the policy level and has 2 possible values
    • NO (default) - Follow purge rules regardless of whether backups have been replicated or copied to tape/downstream.
    • YES - Backups that have not been successfully copied to tape/replicated are NOT eligible for deletion.  If the space usage for the database is above the Disk_Reserved_Space, incoming backups (both through RMAN and real-time redo) will be rejected.

Derived values that control space and purging


Each storage location has the following values that are used to control purging.

  • Free_Space - The amount of space (GB) that is available for storing backups. This is tracked for each storage location.
  • Free_Space_Goal - The estimated amount of space that is needed to support incoming backups to the storage location. This is evaluated every 3 hours. The estimate is the sum of space needed for the next 6 hours, taking into account 2 types of incoming backups.
    • Daily backups from databases that have historical data.  The space required to support the normal incoming backup volume is included in the Free_Space_Goal.
    • Newly added databases. In the case of databases that do not have good historical data, the Disk_Reserved_Space is used to calculate how much space needs to be available to support incoming backups.
  • Recovery_Window_Space - The amount of space needed by the database to support the RWG. This is available for each database.
  • Space_Usage - The amount of space currently being used by this protected database.
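The Free_Space_Goal behavior described above can be sketched as a simple model: established databases contribute their typical incoming-backup size, while newly added databases (with no history) contribute their Disk_Reserved_Space. This is just an illustration of the concept with hypothetical numbers, not the appliance's actual formula, and it ignores the 6-hour horizon for simplicity.

```python
# Simplified model of the Free_Space_Goal estimate described above.
# Not the RA's real algorithm -- purely a conceptual sketch.

def free_space_goal(databases):
    goal = 0.0
    for db in databases:
        if db["has_history"]:
            # established database: use its typical daily ingest
            goal += db["avg_daily_ingest_gb"]
        else:
            # new database with no history: assume its full reservation
            goal += db["disk_reserved_space_gb"]
    return goal

dbs = [
    {"name": "FINDB", "has_history": True,
     "avg_daily_ingest_gb": 150, "disk_reserved_space_gb": 4000},
    {"name": "NEWDB", "has_history": False,
     "avg_daily_ingest_gb": 0, "disk_reserved_space_gb": 2500},
]
print(free_space_goal(dbs))   # 2650.0 GB
```

Notice how a single new database dominates the goal until it builds up history, which is exactly why Rule 2 purging can kick in right after you add databases.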


Purge Process


How backups are prioritized for purging is described in this MOS note.
Zero Data Loss Recovery Appliance Purging Rules (Doc ID 2353188.1)


There are 3 rules of purging and this is outlined in the MOS note.




  • Rule 1 - This is the normal housekeeping purging and occurs on a regular basis for all databases.  Any backups whose recovery window is greater than the Max_Retention_Window are periodically removed from the ZDLRA to make space for incoming backups. This is a low priority background process.
The way this rule works is that the retention window for each database (the maximum based on its policy) grows until the number of days set for Max_Retention_Window is reached for the database.  When this occurs, the retention is purged down to the Recovery_Window_Goal for that database.  For example, if my database has a Recovery_Window_Goal of 15 days and a Max_Retention_Window of 20 days, when the database reaches 20 days of retention, the oldest 5 days of backups will be removed and I will have 15 days of backups after purging.  This makes the purge more of a bulk process, and ensures that the purge workload is spread out across all databases.

  • Rule 2 - When the Free_Space for a storage location falls below the Free_Space_Goal, regular purging begins.  The database whose current retention is most above its RWG is added to the purging queue, and purging begins as a low priority process.

The way this rule works is similar to Rule 1.  Beginning with the database whose current retention is the largest number of days beyond its Recovery_Window_Goal, each database is purged down to its Recovery_Window_Goal until enough space is released. As I said, the processing is similar to Rule 1, except it doesn't wait until the Max_Retention_Window is reached.

  • Rule 3 - When the Free_Space is not enough to allow incoming backups, a more aggressive purging begins, starting with the database that is most above (by percentage) its allocated Disk_Reserved_Space.  This could impact the RWG if the Disk_Reserved_Space is not large enough to support the RWG.

** NOTE there are a few exceptions to removing backups.
  • Any backup that is currently being restored by a client is exempt from the purging rules. This ensures that any restore in process is successful
  • Any replication or copy-to-tape process that is currently executing is exempt.  This ensures that the replication/tape copy completes successfully.

How this works.

As long as there is enough space in the storage location to support the Max_Retention_Window for all databases, there is nothing to note on purging.  Rule 1 executes on a periodic basis.
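The Rule 1 cycle described above (retention grows to the Max_Retention_Window, then a bulk purge drops it back to the Recovery_Window_Goal) can be sketched as a toy simulation. The 15-day goal and 20-day maximum come from the example in the text; everything else is illustrative.

```python
# Toy simulation of Rule 1: retention accumulates one day at a time until
# Max_Retention_Window is reached, then a bulk purge drops it back down to
# the Recovery_Window_Goal. Values follow the 15/20-day example above.

def simulate_rule1(days, rwg=15, max_retention=20):
    retention = 0
    history = []
    for _ in range(days):
        retention += 1                # one more day of backups accumulates
        if retention >= max_retention:
            retention = rwg           # bulk purge back down to the goal
        history.append(retention)
    return history

h = simulate_rule1(25)
print(h)   # climbs 1..19, purges to 15, climbs again, purges again
```

The sawtooth pattern is the point: Rule 1 purges in bulk rather than trimming one day at a time, spreading the purge workload across databases.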

Rule 2 - Free_Space_Goal.


As I tried to explain above, Free_Space_Goal is an estimate of how much space may be needed to ingest new backups.
If the RA believes that it will need more space to ingest backups than what is currently available, it will start purging older backups that are not needed to support the RWG for databases.
This purging will continue until the Free_Space is more than the Free_Space_Goal, or all databases only have backups that support their RWG.

This makes sense for a couple of reasons.

  • It takes into account newly added databases that don't have history.  This ensures that when you send the first few backups to the RA, there is sufficient space to ingest these backups.
  • If space has become tight on the RA, and it can no longer support the Max_Retention_Window for databases, there is enough space available to keep ingesting new backups.
Rule 2 only goes as far as removing backups that are not needed to support the RWG for all databases.
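The ordering Rule 2 uses can be sketched as follows: start with the database furthest past its Recovery_Window_Goal, purge each one down to its goal, and stop once Free_Space again exceeds the Free_Space_Goal. This is purely illustrative logic with hypothetical sizes, not the appliance's implementation.

```python
# Sketch of Rule 2: purge databases down to their RWG, starting with the
# one furthest beyond its goal, until Free_Space exceeds Free_Space_Goal.
# Hypothetical databases and sizes; not the RA's real code.

def rule2_purge(dbs, free_space_gb, free_space_goal_gb):
    purged = []
    # the database furthest beyond its RWG goes first
    for db in sorted(dbs, key=lambda d: d["retention_days"] - d["rwg_days"],
                     reverse=True):
        if free_space_gb >= free_space_goal_gb:
            break                      # enough space has been released
        days_over = db["retention_days"] - db["rwg_days"]
        if days_over <= 0:
            continue                   # already at (or below) its goal
        free_space_gb += days_over * db["gb_per_day"]
        purged.append(db["name"])
    return purged, free_space_gb

dbs = [
    {"name": "A", "retention_days": 30, "rwg_days": 14, "gb_per_day": 50},
    {"name": "B", "retention_days": 20, "rwg_days": 14, "gb_per_day": 100},
]
purged, free = rule2_purge(dbs, free_space_gb=500, free_space_goal_gb=1200)
print(purged, free)   # ['A'] 1300 -- purging A alone frees enough space
```

Note that once every database is down to its RWG, Rule 2 has nothing left to purge, which is where Rule 3 takes over.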

As you can imagine this might not free enough space if your Disk_Reserved_Space is not adequate for all databases.


Rule 3 - Disk_Reserved_Space

Rule 3 kicks in when you run out of Free_Space.  This is the most aggressive purging, and its priority is higher than most processes that occur on the ZDLRA.  Accepting incoming backups is one of the highest priorities, and freeing up space for backups is critical.

From this document, you can see that this is where the Disk Reserved Space allocation for each database becomes important.

The amount of Disk Reserved Space allocated is compared to the amount of space used by each database.  The percentage by which the space used exceeds the space reserved determines the order in which database backups are purged.

Backups will continue to be purged to free up space.  
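The Rule 3 ordering described above can be sketched as ranking databases by how far their Space_Usage sits above their Disk_Reserved_Space, as a percentage. The numbers here are hypothetical.

```python
# Sketch of the Rule 3 purge ordering: rank databases by the percentage
# their Space_Usage exceeds their Disk_Reserved_Space. Hypothetical data.

def rule3_order(dbs):
    def pct_over(db):
        return (db["space_usage_gb"] - db["disk_reserved_gb"]) / db["disk_reserved_gb"]
    return [db["name"] for db in sorted(dbs, key=pct_over, reverse=True)]

dbs = [
    {"name": "A", "space_usage_gb": 1200, "disk_reserved_gb": 1000},  # 20% over
    {"name": "B", "space_usage_gb": 550,  "disk_reserved_gb": 500},   # 10% over
    {"name": "C", "space_usage_gb": 450,  "disk_reserved_gb": 500},   # 10% under
]
print(rule3_order(dbs))   # ['A', 'B', 'C']
```

This is why an undersized reservation is dangerous: a database whose Disk_Reserved_Space is set too low sits near the top of this list even when it only holds backups needed for its RWG.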

A good recommendation is to set the Disk_Reserved_Space to at least 10% greater than what is needed to support your RWG. This needs to be updated on a regular basis.

*** NOTE   -- Also, you need to keep in mind that you might have some databases with seasonal peaks, e.g. Cyber Monday.  For databases that have periodic peak periods, the Disk_Reserved_Space should reflect the amount of space needed to support the peak.  These should be exempt from any automated adjustment using the RWG needed space.

Keep in mind that the combined Disk Reserved Space for all databases cannot be larger than the amount of space available in the storage pool.


Important notes :

It is critical that you accurately set the Disk_Reserved_Space for all databases.  If you consistently set the Disk_Reserved_Space to be 20% greater than what is needed for the RWG for ALL databases, this ensures that the ZDLRA will not get more than roughly 80% full, and gives you some buffer when space becomes tight.

Incorrectly setting the Disk_Reserved_Space for an existing database may cause the RWG to be lost.  If the Disk_Reserved_Space is less than what is needed, Rule 3 may purge backups needed to support the RWG to free up space. The RWG is NOT GUARANTEED.

Correctly setting the Disk_Reserved_Space for newly added databases is important.  This value is used in the Free_Space_Goal calculation, and setting it too high may force unnecessary purging to make room for the incoming backups.


Adjusting RWG


There is a package you can use to estimate how much storage is needed to support a given recovery window goal.  This can be used to determine if a database's reserved space is enough to meet the goal.  DBMS_RA.ESTIMATE_SPACE is the function you call.

ESTIMATE_SPACE

This procedure estimates the amount of storage in GB required for recovery of a given database and a desired recovery window.
Syntax:

  FUNCTION estimate_space (
     db_unique_name IN VARCHAR2,
     target_window IN DSINTERVAL_UNCONSTRAINED)
  RETURN NUMBER;

Parameters (Table 12-23, ESTIMATE_SPACE Parameters):

  • db_unique_name - The name of the database needing the storage estimate.
  • target_window - The desired recovery window for the database. Specify the goal as any valid INTERVAL DAY TO SECOND expression, such as INTERVAL '2' DAY (2 days), INTERVAL '4' HOUR (4 hours), and so on.
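The estimate returned by ESTIMATE_SPACE can feed the sizing rule recommended earlier (Disk_Reserved_Space at least 10% above what the RWG needs). Here is just that buffer arithmetic sketched in Python; the 3200 GB estimate is a hypothetical value, and the real number would come from calling DBMS_RA.ESTIMATE_SPACE on the appliance.

```python
# Turn an ESTIMATE_SPACE result into a Disk_Reserved_Space recommendation
# with a safety buffer. The estimate value below is hypothetical; the real
# value comes from DBMS_RA.ESTIMATE_SPACE on the ZDLRA.

def recommended_reserved_gb(estimated_rwg_space_gb, buffer_pct=10):
    return round(estimated_rwg_space_gb * (1 + buffer_pct / 100.0), 1)

# e.g. ESTIMATE_SPACE returned 3200 GB for the desired recovery window
print(recommended_reserved_gb(3200))       # 3520.0 GB with the 10% buffer
print(recommended_reserved_gb(3200, 20))   # 3840.0 GB with a 20% buffer
```

Scheduling this calculation (with a fresh estimate) on a regular basis keeps reservations tracking database growth.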


Recommendations on Purging and Space Management.


  • Correctly setting the Disk_Reserved_Space is critical to managing space
  • Periodically adjusting the Disk_Reserved_Space as the database grows or shrinks is important.  This can be done through the PL/SQL package, and can be scheduled on a regular basis.
  • Know which databases, if any, have seasonal peaks and ensure the Disk_Reserved_Space reflects any peak.
  • Properly set the Disk_Reserved_Space for new databases.  This will help prevent unnecessary purging that could affect the ingestion of backups.
  • Use the estimate space package to adjust Disk_Reserved_Space if adjusting RWG for a policy.
  • Create an alert on the Disk_Reserved_Space falling below the Recovery_Window_Space. This tells you that the RWG may be lost if the ZDLRA needs to reclaim space.
  • Keep track of the Space_Usage vs Disk_Reserved_Space vs Recovery_Window_Space to understand how each database's space will be managed.
  • Periodically check the SAR report (MOS note 2275176.1) for issues.  Process issues, etc. that cause ordering waits affect the estimated "needed space for RWG".  This may quickly inflate the Disk_Reserved_Space when using an automated process.
  • Use the Size_Estimate from the RA_DATABASE view to ensure the space utilization makes sense.  This can be a check to verify that the storage utilization of backups on the ZDLRA seems reasonable for the size of the database.
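The alerting recommendation above can be expressed as a simple check: flag any database whose Disk_Reserved_Space has fallen below its Recovery_Window_Space, meaning the RWG could be lost if Rule 3 reclaims space. The data below is hypothetical; in practice these values would come from the RA_DATABASE view.

```python
# Sketch of the suggested alert: any database whose reservation is below
# the space its RWG requires is at risk. Hypothetical values standing in
# for columns you would query from the RA_DATABASE view.

def rwg_at_risk(dbs):
    return [db["name"] for db in dbs
            if db["disk_reserved_gb"] < db["recovery_window_space_gb"]]

dbs = [
    {"name": "FINDB", "disk_reserved_gb": 4000, "recovery_window_space_gb": 3600},
    {"name": "HRDB",  "disk_reserved_gb": 1500, "recovery_window_space_gb": 1800},
]
print(rwg_at_risk(dbs))   # ['HRDB'] needs its reserved space raised
```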

Thursday, January 31, 2019

TDE Implementation first 30 days

This blog post is on the immediate effect of implementing TDE on your database backup storage.


In the last blog post I mentioned that after implementing TDE you need to explicitly perform a full backup on all tablespaces you encrypt.
  • If you are performing an incremental strategy (level 0 and level 1) then explicitly perform a level 0 backup.
  • If you are using an incremental merge backup (BACKUP INCREMENTAL LEVEL 1 FOR RECOVER OF COPY), you must create a new image copy and start a new divergent backup
The reason a full explicit backup is needed is that the SCN numbers of the underlying data blocks do not change when you implement encryption.  Because of this, any backup strategy you are using does not recognize that the blocks have changed.  This is true even with the ZDLRA.
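The point above can be shown with a toy model: an incremental backup selects blocks by SCN, and (per the text) encrypting a tablespace rewrites every block without advancing its SCN, so the rewritten blocks are invisible to the incremental. This is illustrative only, not how RMAN is implemented internally.

```python
# Toy model: incrementals pick blocks by SCN; TDE rewrites block contents
# without changing SCNs, so encrypted blocks are missed. Illustrative only.

def incremental_backup(blocks, since_scn):
    """Return ids of blocks an incremental would capture."""
    return [b["id"] for b in blocks if b["scn"] > since_scn]

blocks = [{"id": 1, "scn": 100, "encrypted": False},
          {"id": 2, "scn": 250, "encrypted": False}]

last_level0_scn = 200
for b in blocks:              # TDE rewrites every block...
    b["encrypted"] = True     # ...but the SCN stays the same

print(incremental_backup(blocks, last_level0_scn))  # [2] -- block 1 missed
```

Block 1 was rewritten by encryption but its SCN never moved past the last level 0, so only a new explicit full backup picks it up.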

Above is the reason for this post.  If you have an unencrypted database, and you implement TDE on all tablespaces, then perform a full backup of all tablespaces, you will have 2 copies of your backups for a period of time.

I am going to take a closer look at what this means to my backup storage when implementing TDE.

For my example, I'm going to use the same example I used in the last post.
Below is what it looks like.
Using my example dataset for just a weekly full backup and a daily incremental backup (using deduplication) you can see that

The data in my database is 35 GB ( the fully allocated size is 50 GB).
The amount of change over 30 days is 70 GB.
Compressing both of these, the final size is 41.5 GB used.

Prior to implementing TDE, I am using 41.5 GB consistently to store 30 days of compressed backups for my 50 GB database.

The next column shows what happens after implementing TDE.  The full backup size compressed is now 43.9 GB and the amount of change over 30 days is 54 GB compressed.
After implementing TDE, I am using 97.9 GB consistently to store 30 days of compressed backups for my 50 GB database.
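The 97.9 GB figure is just the post-TDE compressed full plus 30 days of compressed change, and the same arithmetic gives the pre-TDE total. A quick worked version using the numbers from this example dataset:

```python
# Steady-state backup storage for the example dataset, before and after TDE.
# All sizes in GB, taken from the figures quoted in the post.

pre_tde_total = 41.5                  # compressed full + 30 days change, unencrypted
post_tde_total = round(43.9 + 54, 1)  # compressed encrypted full + 30 days change

print(post_tde_total)                            # 97.9
print(round(post_tde_total / pre_tde_total, 2))  # 2.36 -- well over double
```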

Now let's see what the first 30 days looks like.



This shows the unencrypted backups (purple) and how they start to age off and get removed on day 31.  You can also see how the encrypted backups (gray) grow over time and will stabilize at the size you see on day 31.

You can see that there is an immediate impact to the backup usage. That growth continues until the old backups finally age off.

You need to plan for TDE ahead of time.  For my example dataset, the storage needed more than doubled.



Friday, January 25, 2019

Implementing TDE

This blog post is about what happens when you implement TDE (Transparent Data Encryption) in your database.

I started with 2 other blog posts, but I soon realized that this is a HUGE topic with ramifications throughout the database system.

Well let's start at the beginning.
I wanted to know what happens when you implement TDE.
The 3 items that I keep hearing are

  1. Encrypted data will not compress
  2. Encrypted data will not dedupe
  3. Implementing Advanced Compression in the database will mitigate the compression loss caused by encryption, by compressing the data before it is encrypted


Now in order to investigate this further, I had some restrictions that affected my testing.

  • I had no access to a deduplicating appliance to test point 2
  • I didn't have a real data set to use.  I ended up using the SOE schema in swingbench 
  • The only way I could test the effect of compression was to compress the backupset.  Most appliances (including the ZDLRA) break the file up into pieces and compress each piece. I couldn't emulate the effect of doing that.
Now for my testing dataset.  I created a dataset in the SOE tablespace that is uncompressed.  I then took that dataset and created a second copy of the schema with advanced compression in place in the SOE_COMPRESSED tablespace.

When I finished this I had 2 tablespaces comprised of 10 x 5 GB datafiles.
SOE                              --> uncompressed copy of data
SOE_COMPRESSED --> Advanced compression (ACO) copy of data

In the graphs above you can see that out of the 50 GB of allocated space:
uncompressed --> 35.8 GB of used space
compressed   --> 24.9 GB of used space

These are the datasets I am going to use throughout my testing.

I started by taking these 2 datasets and seeing what happens with the 2 full backup types
L0               --> Full backups saved as backupset
image copy --> Full copy of datafile.

Below is what I found when I compressed both these backup types.
This was very helpful to see what happens.

For my uncompressed dataset (35 GB), I was able to compress the file down to 20 GB.
For my uncompressed image copy (35 GB used) I was able to compress the file down to 18 GB
For my compressed dataset (24 GB), I was able to compress the file down to 18 GB
For my compressed image copy (24 GB), I was able to compress the file down to 18 GB.

So what I learned is that no matter how I compressed the data, in the database (ACO) or just compressing the backup, the final size is still ~18 GB.
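The measurements above can be recapped as ratios: wherever the compression happens, the floor for this dataset was about 18 GB. Sizes in GB, taken from the tests described in the post.

```python
# Compression results from the post, recapped as (before, after) sizes in GB.

results = {
    "uncompressed L0 backupset": (35, 20),
    "uncompressed image copy":   (35, 18),
    "ACO-compressed backupset":  (24, 18),
    "ACO-compressed image copy": (24, 18),
}

for name, (before, after) in results.items():
    print(f"{name}: {before} GB -> {after} GB ({before / after:.1f}x)")
```

However the data is compressed, the dataset bottoms out around 18 GB, which is why ACO alone didn't change the final backup footprint much.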

Now we have some benchmarks on what happens when I just compress the backups.

I started with encrypting the image copy.  This was one area that I wanted to learn more about for 2 reasons.

  1. Many of the newer backup strategies use the "incremental merge" process that started with Oracle 10g. I was curious what happens to image copies when you implement TDE.
  2. Many of the Flash arrays use compression to get more usable storage from the smaller SSD drives.  Looking at what happens with compressing the datafile copies gives a lot of insight into how those flash arrays will be affected by TDE.
You can see from the above testing there was a very significant impact on the compressed size of the image copy.  Both compressed and uncompressed data compressed nicely when unencrypted.  After encrypting the dataset, the compressed size is about 3x larger than the compressed size of the unencrypted copy.

This gives me the answer on how it will affect these 2 areas.


  1. If using an "incremental merge" based backup that is stored on compressed storage, you will need 3x the amount of storage for your backups.
  2. If using Flash storage for your database files that utilizes compression, you will need 3x the amount of database storage.
OK.. There's the first answer I was looking for.

Now let's see what happens with my Level 0 backups.
With level 0 backups, I can see the benefit of compressing the data first.
I can also see the effect of encryption.

Compressing the data first saved me about 1/3 of the size of the data, and about 1/3 of the size of the compressed backupset (once encrypted).


Now let's take a look at the effect of encryption on the change rate.  I updated one of the tables in the database to create a change rate, and I looked at both the archive logs and the incremental backup.
Wow.. What stood out to me in this testing is the size of the archive logs.  Adding Advanced Compression to the database decreased the size of the incremental backups (expected), but increased the size of the archive logs.  And when I looked at the compressed size (both with and without encryption), compressing the data in the database increased the size of the archive logs.

This showed me a lot about what happens with TDE.  What I learned through this was:


  1. Encrypted data indeed does not compress, in fact it can get much bigger.
  2. Implementing Advanced Compression does not always mitigate the effects of encryption. Depending on the dataset, it might have very limited effect.
  3. The compressed size of datafiles and image copy backups may increase by 3x or more depending on the amount of free space you typically have in your datafiles.