Thursday, January 3, 2019

Verifying you have a recoverable backup

As you probably know, the best way to validate that you can recover your database to a specific point-in-time is to perform a point-in-time recovery of the database and successfully open it. That isn't always possible, due to the size of the database, the infrastructure necessary, and the time it takes to go through the process.  To verify recoverability there are several commands that give you all the pieces necessary, but it is important to understand what each command does.

Restore Validate -  The restore validate command can be performed for any database object, including the whole database, to verify that the backup files are valid.  This command can be used to restore as of the current time (default), but you can also specify a previous point-in-time.  The validation verifies that the files are available and also checks for corruption.  By default it checks for physical corruption, but it can also check for logical corruption. This command reads through the backup pieces without actually restoring anything.
Examples of restore validate for different cases.
verify current backup -
RMAN> RESTORE DATABASE VALIDATE;
RMAN> RESTORE ARCHIVELOG ALL VALIDATE;
This will verify that the last full backup (level 0) does not contain corruption, and it verifies that archivelogs exist throughout the recovery window, and that they do not contain corruption. 
verify previous backup -
RMAN> RESTORE DATABASE VALIDATE UNTIL TIME "to_date('xx/xx/xx xx:xx:xx','mm/dd/yy hh24:mi:ss')";
This will verify that the full backup performed prior to the "until time" is available and does not contain corruption.
Restore Preview - The restore preview can be used to identify the backup pieces necessary for a restore AND recovery of the database.  The preview will show all the datafile backups (both full and incremental), along with the archive logs needed.  The restore preview does not check for corruption, it simply lists the backup pieces necessary.
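A preview needs no extra setup; for example (the UNTIL TIME clause is optional, as with restore validate):
RMAN> RESTORE DATABASE PREVIEW;
RMAN> RESTORE DATABASE PREVIEW SUMMARY;
The SUMMARY option condenses the listing to one line per backup set, which makes checking the pieces easier.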

Validate -  The validate command can be performed on any database objects along with backupsets. Unlike restore, you cannot give it an "until time", you must identify the object you want to validate.  By default it checks for physical corruption, but it can also check for logical corruption.
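Some example forms of the command (the backupset key here is just a placeholder):
RMAN> VALIDATE DATABASE;
RMAN> VALIDATE DATAFILE 5;
RMAN> VALIDATE BACKUPSET 1234;
RMAN> VALIDATE CHECK LOGICAL DATABASE;
The CHECK LOGICAL option extends the check from physical to logical corruption.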

It is critical to understand exactly what all three commands do to ensure you can efficiently recover to a previous point-in-time.  It takes a combination of 2 commands (plus an extra to be sure).

So let's see how I can test if I can recover to midnight on the last day of the previous month.  My backup strategy is "Weekly full/Daily incremental" backups.  I perform my full backups on Saturday night and keep my backups for 90 days. My archive logs are kept on disk for 20 days.

It is now 12/15/18 and I want to verify that I can recover my database to 23:59:59 on 11/30.

First I want to perform a restore validate of my database using until time.
RMAN > restore validate database UNTIL TIME "to_date('11/30/18 23:59:59','mm/dd/yy hh24:mi:ss')";
By looking through the logs, I can see that this command performed a validation of the Level 0 (full) backup I performed on 11/24/18.  It did not validate any of the incremental backups or archive logs that I will need to recover the database.
Now I want to perform a restore validate of my archive logs using until time.
RMAN > restore validate archivelog UNTIL TIME "to_date('11/30/18 23:59:59','mm/dd/yy hh24:mi:ss')";
By looking through the logs, I can see that this command validated the backup of all my older archive logs (older than 20 days), and it validated the FRA copy of my archive logs that were 20 days or newer.
There are a couple of issues with this that jump out at me.
1) My incremental backups are NOT validated.  Yes, I know I can recover to the point-in-time I specified, but all I verified is that it is possible by applying 6 days of archive logs.
2) I don't know if any of my newer backups of archive logs are corrupt.  The validation of archive logs only validated backups of archive logs that are NOT on disk. I still need to validate the archive log backups for the archive logs that will age off the FRA in a few days, to be sure.
3) The process doesn't check or validate a backup of the controlfile.  If I am restoring to another server (or if I lose my database completely), I want to make sure I have a backup of the controlfile to restore.
The only way to be sure that I have an efficient, successful recovery to this point in time is to combine the 2 commands and use that information to validate all the pieces necessary for recovery.
Here are the steps to ensure I have valid backups of all the pieces necessary to restore efficiently.
1) Perform a restore preview and write the output to a log file.
2) Go through the output from step 1 and identify all the "BS KEY" (backupset key) values.
3) Perform a "validate backupset xx;" using the keys from step 2 to ensure all datafile backups (both full and incremental backups) are valid.
4) Go through the output from step 1 and identify the archive log sequences needed.
5) Identify the backupsets that contain all log sequences needed (my restore preview pointed to the archive logs on disk).
6) Perform a "validate backupset xx;" using the keys from step 5.
7) Perform a restore controlfile to a dummy location using the UNTIL time.
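Steps 1 through 6 lend themselves to scripting. Here is a minimal Python sketch, assuming the preview log uses the typical tabular "BS Key" listing (the sample fragment below is hypothetical):

```python
import re

def extract_bs_keys(preview_log):
    """Pull the unique BS Key values out of a 'restore ... preview' log.

    Assumes the usual RMAN listing, where each backup set row starts
    with a numeric BS Key in the first column under a 'BS Key' header.
    """
    keys = set()
    in_listing = False
    for line in preview_log.splitlines():
        if line.lstrip().startswith("BS Key"):
            in_listing = True
            continue
        if in_listing:
            m = re.match(r"\s*(\d+)\s", line)
            if m:
                keys.add(int(m.group(1)))
            elif line.strip() and not set(line) <= {"-", " "}:
                in_listing = False  # left the tabular section
    return sorted(keys)

def validate_commands(keys):
    """One VALIDATE BACKUPSET command per key (steps 3 and 6)."""
    return ["VALIDATE BACKUPSET %d;" % k for k in keys]

# Hypothetical fragment of restore preview output:
sample = """\
BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
5       Incr 0  1.52G      DISK        00:00:45     24-NOV-18
9       Incr 1  220.10M    DISK        00:00:05     28-NOV-18
"""
print(validate_commands(extract_bs_keys(sample)))
# → ['VALIDATE BACKUPSET 5;', 'VALIDATE BACKUPSET 9;']
```

The generated commands can then be pasted into an RMAN session (or run via a script) to validate every backupset the preview identified.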
As you can see, the "restore validate" is not enough to ensure an efficient recovery. It only ensures that you have a valid RESTORE, not a valid RECOVERY.
The ZDLRA does constant recovery checks using the virtual full backups and archive logs.
It ensures you can quickly AND successfully recover to any point-in-time within your retention window.


Friday, November 30, 2018

TDE–How to implement TDE in your database and what to think about (part 2)

This is the second part in a series on implementing TDE and what happens to the sizing.

At first my plan was to encrypt the dataset I created in my first post, but instead I compressed it.

At this point (and throughout this post), I am working with an un-encrypted dataset.

One of the first things to understand about Encryption is that encrypted data DOES NOT compress.
This is critical when understanding what happens when you implement TDE.

One way to save storage when implementing TDE is to implement encryption AND compression together.

In order to break down the effects of encryption on compressed data VS uncompressed data, I took my dataset (the SOE dataset from Swingbench) and compressed it.  I implemented Advanced Compression on the tables, and I compressed and rebuilt the indexes.
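For reference, the statements I mean are along these lines (12c syntax; the table and index names are from the SOE schema shown below, and Advanced Compression is a separately licensed option):
SQL> ALTER TABLE soe.order_items MOVE ROW STORE COMPRESS ADVANCED;
SQL> ALTER INDEX soe.item_order_ix REBUILD COMPRESS ADVANCED LOW;
Note that moving a table invalidates its indexes, so the rebuild is needed in any case.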

I now have 2 copies of the same dataset. 1 is compressed, and 1 is not.

Now let's take a look at the sizing of the data sets, and I will go through the same backup procedures to see what happens.

SEGMENT_NAME              SEGMENT_TYPE Space Used uncompressed   Space Used Compressed SPACE_SAVINGS
------------------------- ------------ ------------------------- -------------------- -------------
ADDRESSES                 TABLE           3,392 MB                  3,264 MB                      3
ADDRESS_CUST_IX           INDEX             703 MB                    728 MB                     -3
ADDRESS_PK                INDEX             662 MB                    888 MB                    -34
CARDDETAILS_CUST_IX       INDEX             703 MB                    562 MB                     20
CARD_DETAILS              TABLE           2,048 MB                  1,600 MB                     21
CARD_DETAILS_PK           INDEX             662 MB                      0 MB                    100
CUSTOMERS                 TABLE           3,328 MB                  2,880 MB                     13
CUSTOMERS_PK              INDEX             443 MB                      0 MB                    100
CUST_ACCOUNT_MANAGER_IX   INDEX             417 MB                    272 MB                     34
CUST_DOB_IX               INDEX             528 MB                    280 MB                     47
CUST_EMAIL_IX             INDEX             975 MB                    280 MB                     71
CUST_FUNC_LOWER_NAME_IX   INDEX             683 MB                    280 MB                     58
INVENTORIES               TABLE             176 MB                    176 MB                      0
INVENTORY_PK              INDEX              18 MB                      0 MB                    100
INV_PRODUCT_IX            INDEX              16 MB                     12 MB                     24
INV_WAREHOUSE_IX          INDEX              16 MB                     12 MB                     24
ITEM_ORDER_IX             INDEX           2,000 MB                  1,770 MB                     11
ITEM_PRODUCT_IX           INDEX           1,768 MB                  1,301 MB                     26
LOGON                     TABLE           1,728 MB                  1,728 MB                      0
ORDERENTRY_METADATA       TABLE               0 MB                      0 MB                      0
ORDERS                    TABLE           3,968 MB                  2,816 MB                     29
ORDER_ITEMS               TABLE           6,976 MB                  4,992 MB                     28
ORDER_ITEMS_PK            INDEX           2,234 MB                      0 MB                    100
ORDER_PK                  INDEX             632 MB                      0 MB                    100
ORD_CUSTOMER_IX           INDEX             671 MB                    480 MB                     28
ORD_ORDER_DATE_IX         INDEX             752 MB                    439 MB                     41
ORD_SALES_REP_IX          INDEX             594 MB                    438 MB                     26
ORD_WAREHOUSE_IX          INDEX             709 MB                    438 MB                     38
PRD_DESC_PK               INDEX               0 MB                      0 MB                    100
PRODUCT_DESCRIPTIONS      TABLE               0 MB                      0 MB                      0
PRODUCT_INFORMATION       TABLE               0 MB                      0 MB                      0
PRODUCT_INFORMATION_PK    INDEX               0 MB                      0 MB                    100
PROD_CATEGORY_IX          INDEX               0 MB                      0 MB                      0
PROD_NAME_IX              INDEX               0 MB                      0 MB                      0
PROD_SUPPLIER_IX          INDEX               0 MB                      0 MB                   -100
WAREHOUSES                TABLE               0 MB                      0 MB                      0
WAREHOUSES_PK             INDEX               0 MB                      0 MB                    100
WHS_LOCATION_IX           INDEX               0 MB                      0 MB                   -100


Here is the total savings by compressing both tables and indexes with advanced compression.

Space Used uncompressed   Space Used Compressed  SPACE_SAVINGS
------------------------- ---------------------- -------------
  36,804 MB                 25,636 MB                       30



Now, to compare this with the previous uncompressed data, I am going to back up by tablespace.  Below is the sizing of the backups. I used a tag to identify the backups.


-rw-rw----. 1 oracle oracle 26773323776 Nov 29 17:02 COMPRESSED_SOE.bkp
-rw-rw----. 1 oracle oracle 38441140224 Nov 29 17:04 UNCOMPRESSED_SOE.bkp
-rw-rw----. 1 oracle oracle 10987765760 Nov 29 18:35 BASIC_COMPRESSED_SOE.bkp
-rw-rw----. 1 oracle oracle 11135655936 Nov 29 18:36 BASIC_COMPRESSED_SOE_COMPRESSED.bkp
-rw-rw----. 1 oracle oracle 13360308224 Nov 29 20:12 MEDIUM_COMP_SOE_COMPRESSED.bkp
-rw-rw----. 1 oracle oracle 14383603712 Nov 29 20:12 MEDIUM_COMPRESSED_SOE_.bkp
-rw-rw----. 1 oracle oracle  9420791808 Nov 30 00:12 HIGH_COMP_SOE_COMPRESSED.bkp
-rw-rw----. 1 oracle oracle  9112944640 Nov 30 00:23 HIGH_COMPRESSED_SOE.bkp


Now I'm going to put that in a table and a chart to compare.

First the table of sizes



Now the chart


Now, by looking at the chart, it is apparent what compression does to the data.


  • Compression in the database reduced the size of the data by 30%
  • An uncompressed backupset matched the size of the data
  • Once I compressed the backupset, the difference in size was minimal.

** Bottom line - Compressing the data in the database saved on the uncompressed backup size. Once the backupset is compressed, the final size is about the same.

** Final conclusion -- Most modern backup appliances (ZDLRA, ZFS, DD) compress the backups.  When using those appliances with unencrypted data, the final size is the same regardless of whether the data is compressed in the Database.

Now that I've looked at both compressed and uncompressed data, at the DB level and in the backupset, I am going to encrypt the data.  Next post.


Thursday, November 15, 2018

TDE–How to implement TDE in your database and what to think about (part 1)

This is the first part in a series of blog posts on TDE.
Many organizations are moving to TDE, and this can have a dramatic effect on your systems.
TDE impacts 2 areas:
1) Post-encryption compression goes away.  Encrypted data can't be compressed.  Now why do I mention "post-encryption"? This is because data can be compressed before encrypting.  Compressed data in your database (HCC, OLTP, basic, etc.) is compressed PRIOR to encryption.  Utilizing compression in your database not only saves you disk space on your storage system, but it also saves disk space for your backups.  The loss of compression post-encryption can have many consequences you might not immediately think of:
    • If you are using SSD storage that compresses blocks, you need to take into account the extra storage needed.
    • If you are using a de-duplication appliance, you will lose most of the benefits of de-duplication.
    • If you are compressing your backups, you will lose the benefits gained from compression (smaller backups and lower network traffic).
2) Moving to TDE requires more space during the migration. Rebuilding the tablespaces with a newly encrypted copy is done by creating a second, encrypted copy of each datafile and then removing the pre-encryption copy.  The database switches to the new datafile when the process is complete.  This utilizes additional storage for the second copy of the datafiles.  The other migration impact is an increase in backup storage.  After encrypting tablespaces, a new level 0 backup is needed to ensure you are restoring to an encrypted copy of the data. Remember, the encryption process changes all the blocks in the datafiles. I will discuss the backup implications more later.
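In 12.2 and later, this copy-and-switch can be done online, one tablespace at a time. A sketch of the command (assuming the keystore is already created and open, and using the SOE tablespace from my test dataset):
SQL> ALTER TABLESPACE soe ENCRYPTION ONLINE USING 'AES256' ENCRYPT;
This is exactly where the extra space is consumed: each datafile in the tablespace is rewritten as an encrypted copy before the original is dropped.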

Now I’m going to start by describing the dataset I used for testing.
In order to create this dataset I used the oewizard from Swingbench

Here are the objects and the sizes.

SEGMENT_NAME              SEGMENT_TYPE TABLESPACE_NAME SPACE_USED
------------------------- ------------ --------------- ------------
ADDRESSES                 TABLE        SOE                3,392 MB
ADDRESS_CUST_IX           INDEX        SOE                  703 MB
ADDRESS_PK                INDEX        SOE                  662 MB
CARDDETAILS_CUST_IX       INDEX        SOE                  703 MB
CARD_DETAILS              TABLE        SOE                2,048 MB
CARD_DETAILS_PK           INDEX        SOE                  662 MB
CUSTOMERS                 TABLE        SOE                3,328 MB
CUSTOMERS_PK              INDEX        SOE                  443 MB
CUST_ACCOUNT_MANAGER_IX   INDEX        SOE                  417 MB
CUST_DOB_IX               INDEX        SOE                  528 MB
CUST_EMAIL_IX             INDEX        SOE                  975 MB
CUST_FUNC_LOWER_NAME_IX   INDEX        SOE                  683 MB
INVENTORIES               TABLE        SOE                  176 MB
INVENTORY_PK              INDEX        SOE                   18 MB
INV_PRODUCT_IX            INDEX        SOE                   16 MB
INV_WAREHOUSE_IX          INDEX        SOE                   16 MB
ITEM_ORDER_IX             INDEX        SOE                2,000 MB
ITEM_PRODUCT_IX           INDEX        SOE                1,768 MB
LOGON                     TABLE        SOE                1,728 MB
ORDERENTRY_METADATA       TABLE        SOE                    0 MB
ORDERS                    TABLE        SOE                3,968 MB
ORDER_ITEMS               TABLE        SOE                6,976 MB
ORDER_ITEMS_PK            INDEX        SOE                2,234 MB
ORDER_PK                  INDEX        SOE                  632 MB
ORD_CUSTOMER_IX           INDEX        SOE                  671 MB
ORD_ORDER_DATE_IX         INDEX        SOE                  752 MB
ORD_SALES_REP_IX          INDEX        SOE                  594 MB
ORD_WAREHOUSE_IX          INDEX        SOE                  709 MB
PRD_DESC_PK               INDEX        SOE                    0 MB
PRODUCT_DESCRIPTIONS      TABLE        SOE                    0 MB
PRODUCT_INFORMATION       TABLE        SOE                    0 MB
PRODUCT_INFORMATION_PK    INDEX        SOE                    0 MB
PROD_CATEGORY_IX          INDEX        SOE                    0 MB
PROD_NAME_IX              INDEX        SOE                    0 MB
PROD_SUPPLIER_IX          INDEX        SOE                    0 MB
WAREHOUSES                TABLE        SOE                    0 MB
WAREHOUSES_PK             INDEX        SOE                    0 MB
WHS_LOCATION_IX           INDEX        SOE                    0 MB


TOTAL                                                36,804 MB

Here is the total size for the datafiles:

TABLESPACE_NAME   FILE_ID FILE_NAME            SPACE_USED   TOTAL_ALLOCATED
--------------- --------- -------------------- ------------ --------------------
SYSTEM                  1 system01.dbf              819 MB       830 MB
SYSAUX                  3 sysaux01.dbf              809 MB       860 MB
UNDOTBS1                4 undotbs01.dbf             369 MB    29,180 MB
SOE                     5 soe_1.dbf               3,600 MB     5,120 MB
USERS                   7 users01.dbf                 5 MB         5 MB
SOE                     8 soe_2.dbf               3,841 MB     5,120 MB
SOE                     9 soe_3.dbf               3,822 MB     5,120 MB
SOE                    10 soe_4.dbf               3,825 MB     5,120 MB
SOE                    11 soe_5.dbf               3,806 MB     5,120 MB
SOE                    12 soe_6.dbf               3,728 MB     5,120 MB
SOE                    13 soe_7.dbf               3,781 MB     5,120 MB
SOE                    14 soe_8.dbf               3,442 MB     5,120 MB
SOE                    15 soe_9.dbf               3,464 MB     5,120 MB
SOE                    16 soe_10.dbf              3,495 MB     5,120 MB


===================================================================================================================

Total                                     38,303 MB    60,820 MB   (22,517 MB free)

From above I can see that I am using 38 GB of space, out of the 61 GB of space allocated.
Now I created a backup set. With no compression, the size of the backup set is about the size of the data used.
[oracle@oracle-server]$ ls -al
total 38910628
drwxrwx---. 2 oracle oracle          58 Nov 15 16:34 .
drwxrwx---. 3 oracle oracle          24 Nov 15 16:23 ..
-rw-rw----. 1 oracle oracle 39844478976 Nov 15 16:37 o1_mf_nnnd0_TAG20181115T163432_fyvsm8rz_.bkp
[oracle@oracle-server]$

Just to save my spot, I'm going to create a restore point to make this the starting point of all my testing.
SQL> create restore point new_database;

Restore point created.



Now let’s look at what happens when I compress the backup of this database.

 oracle oracle 39844478976 Nov 15 16:37 o1_mf_nnnd0_TAG20181115T163432_fyvsm8rz_.bkp   ---> Original backup
 oracle oracle 11424759808 Nov 15 17:09 o1_mf_nnndf_TAG20181115T165247_fyvtoj7m_.bkp   ---> Basic Compression
 oracle oracle  9468592128 Nov 15 18:33 o1_mf_nnndf_TAG20181115T174452_fyvxq4s2_.bkp   ---> High Compression
 oracle oracle 14488240128 Nov 15 18:44 o1_mf_nnndf_TAG20181115T183319_fyw0l08k_.bkp   ---> Medium Compression
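As a quick sanity check on those byte counts, here is a short Python calculation that converts them to decimal GB and computes the savings relative to the uncompressed backupset (the numbers are copied from the ls output above):

```python
# Backup sizes in bytes, copied from the ls listing above.
sizes = {
    "No Compression":     39844478976,
    "Basic Compression":  11424759808,
    "Medium Compression": 14488240128,
    "High Compression":    9468592128,
}

full = sizes["No Compression"]
for name, nbytes in sizes.items():
    savings = 100 * (1 - nbytes / full)
    print("%-20s %5.1f GB  saves %4.1f%%" % (name, nbytes / 1e9, savings))
```

Basic compression saves roughly 71%, medium about 64%, and high about 76%, which lines up with the summary table later in this post.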

Finally I took an incremental merge backup to see what happens with that.

ls -al
total 62300276
drwxrwx---. 2 oracle oracle       4096 Nov 16 09:47 .
drwxrwx---. 7 oracle oracle         92 Nov 14 13:24 ..
-rw-rw----. 1 oracle oracle 5368717312 Nov 16 09:38 o1_mf_soe_fyxolco8_.dbf
-rw-rw----. 1 oracle oracle 5368717312 Nov 16 09:39 o1_mf_soe_fyxon2o4_.dbf
-rw-rw----. 1 oracle oracle 5368717312 Nov 16 09:40 o1_mf_soe_fyxoohtl_.dbf
-rw-rw----. 1 oracle oracle 5368717312 Nov 16 09:41 o1_mf_soe_fyxopwy8_.dbf
-rw-rw----. 1 oracle oracle 5368717312 Nov 16 09:41 o1_mf_soe_fyxorb2f_.dbf
-rw-rw----. 1 oracle oracle 5368717312 Nov 16 09:42 o1_mf_soe_fyxosq7c_.dbf
-rw-rw----. 1 oracle oracle 5368717312 Nov 16 09:43 o1_mf_soe_fyxov49x_.dbf
-rw-rw----. 1 oracle oracle 5368717312 Nov 16 09:44 o1_mf_soe_fyxowklr_.dbf
-rw-rw----. 1 oracle oracle 5368717312 Nov 16 09:44 o1_mf_soe_fyxoxyl8_.dbf
-rw-rw----. 1 oracle oracle 5368717312 Nov 16 09:45 o1_mf_soe_fyxozcq9_.dbf
-rw-rw----. 1 oracle oracle 5368717312 Nov 16 09:46 o1_mf_soe_fyxp0rwd_.dbf
-rw-rw----. 1 oracle oracle  692068352 Nov 16 09:47 o1_mf_sysaux_fyxp3gc4_.dbf
-rw-rw----. 1 oracle oracle  859840512 Nov 16 09:46 o1_mf_system_fyxp2z72_.dbf
-rw-rw----. 1 oracle oracle 3187679232 Nov 16 09:46 o1_mf_undotbs1_fyxp2666_.dbf


Backup Method            Backup Size
----------------------   -----------
Image Copy                     62 GB
No Compression                 40 GB
Basic Compression              11 GB
Medium Compression             14 GB
High Compression              9.5 GB


My next Blog will cover taking this data set and compressing it.