Tuesday, September 3, 2019

Oracle : Dataguard vs Backups

Anyone who has been around Oracle databases for a long time should be familiar with MAA (Maximum Availability Architecture).
The latest version of the MAA best-practices documentation is available on Oracle's website.

As you probably know, MAA is described as "best practices" for implementing the Oracle Database.

What I am going to explain in this blog post is why you need the different pieces.
Questions often come up about why each piece of availability (for lack of a better word) is important.
These pieces include:
  • Data Guard
  • Active Data Guard
  • Flashback database
  • Point-in-time backups
  • Archive Backups
  • DR site

Data Guard


Well, let's start with the first piece, Data Guard.  I'm not going to address Data Guard in the same datacenter, just Data Guard between datacenters.
There is a difference between "just Data Guard" and "Active Data Guard".

Data Guard  -  A mounted copy of the primary database that is constantly applying redo changes. It can be thought of as always being in recovery. This copy can be gracefully "switched" to without data loss (all transactions are applied before opening). In the event of an emergency, it can be "failed" to, which allows it to be opened without applying all transactions.
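As a rough sketch of what the role change itself looks like (the standby name prodb_dr is hypothetical, and in practice many shops drive this through the Data Guard broker instead), a switchover from the primary in 12c and later can be done in SQL:

SQL> alter database switchover to prodb_dr verify;
SQL> alter database switchover to prodb_dr;

The VERIFY option checks that the standby is ready before any role change happens. In an emergency, a failover can be performed on the standby instead (for example, alter database failover to prodb_dr;), which opens it without all of the primary's transactions, and that is where data loss can occur.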

Active Data Guard -  A Data Guard copy of the data (like above), BUT this is a read-only copy of the primary database.  Note that this is an extra-cost option requiring a license. Having the database open read-only has many advantages:
  • Reporting and read-only workloads can be sent to the Data Guard copy of the database, taking advantage of resources that would typically sit idle.
  • Automatic block repair from the primary database.
  • Block Change Tracking can be used on the standby to speed up incremental backups (see the example after this list).
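As an example of that last point, with an Active Data Guard license block change tracking can be enabled directly on the standby, so incremental backups taken there only read the changed blocks. A minimal sketch; the file location is just an illustration:

SQL> alter database enable block change tracking using file '/u01/app/oracle/bct_standby.chg';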

Data Guard, regardless of which of the 2 types you use, is often configured between datacenters with limited bandwidth.


This becomes important for a few reasons:
  • Cloning across datacenters, or even rebuilding a copy of the database in the opposite datacenter, can take a long time, even days for a very large database.
  • The application tier that accesses the database needs to move with the database. In the event of a switchover/failover of the database, the application must also switch over.

NOTE :  Making your Data Guard database your primary database involves moving the application tier to your DR data center.  The testing and changes to move the application may be risky and involve more time than simply restoring the database.

Flashback Database


Flashback Database allows you to query the database as of a previous point-in-time, and even flash back the entire database to a previous point-in-time.
You can also use flashback to "recover" a table that was accidentally dropped.
With Flashback Database, the changes are kept in the FRA for a period of time, typically only a few hours or less depending on workload.
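You can see how far back your flashback window actually reaches at any moment with a query like this (a sketch; the window is governed by db_flashback_retention_target and available FRA space):

SQL> select oldest_flashback_scn, oldest_flashback_time from v$flashback_database_log;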

Flashback can be helpful in a few situations.
  1. Logical corruption of a subset of data.  If a logical corruption occurs, such as data being accidentally deleted or incorrectly updated, you can view the data as it appeared before the corruption occurred. This, of course, assumes that the corruption is found within the flashback window.  To correct the corruption, the pre-corruption data can be selected from the database and saved to a file.  The application team familiar with the data can then create scripts to surgically correct the data.
  2. Logical corruption of much of the data.  If much of the data is corrupted, you can flash back the database to a point-in-time prior to the corruption, within the flashback window.  This is a FULL flashback of ALL data.  After the flashback occurs, a resetlogs is performed, creating a new incarnation of the database.  This can be useful if an application release causes havoc with the data, and it is better to suffer data loss than to deal with the corruption.
  3. Iterative testing.  Flashback Database can be a great tool when testing, to repeat the same test over and over.  Typically a restore point is created before beginning a test, and the database is flashed back to that state after each run (see the sketch after this list).  This allows an application team to be assured that the data is in the same state at the beginning of each iterative test.
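Here is a minimal sketch of that iterative-testing flow, assuming a hypothetical restore point named before_test and a database that can be briefly bounced between test runs:

SQL> create restore point before_test guarantee flashback database;

-- run the test workload, then reset the database to the restore point

SQL> shutdown immediate
SQL> startup mount
SQL> flashback database to restore point before_test;
SQL> alter database open resetlogs;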

Backups 


Backups allow you to restore the database to a point-in-time prior to the current point-in-time. ZDLRA is the best product to use for backups.

ZDLRA offers

  • Near Zero RPO
  • Very low RTO by using the proprietary incremental forever strategy.  Incremental forever reduces recovery time by creating a virtual full for each incremental backup taken.

The most recent backups are kept local to the database to ensure quick restore times. There are typically 2 types of backups.

Archive backups - These are often "keep" backups.  They are special because they are a self-contained backupset that allows you to recover only to a specific prior point-in-time. An example would be the creation of an archival backup on the last day of the month that allows you to recover only to the point when that backup completed: a single recovery point for each month, rather than any point during the month.

Point-in-time backups - These are backups that include the archive logs needed to recover to any point-in-time.  Typically these backups allow you to recover to any point in time within the recovery window, for example, to any point in the last 30 days.
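To make the two types concrete, here is a hedged RMAN sketch: a 30-day recovery window for point-in-time backups, and a month-end archival backup created with KEEP (the tag, restore point name, and retention period are only illustrations):

RMAN> configure retention policy to recovery window of 30 days;

RMAN> backup database tag 'eom_aug2019' keep until time 'sysdate+365' restore point eom_aug2019;

Note that KEEP (archival) backups cannot be written to the fast recovery area, so they need their own destination.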

The advantages of backups are:
  • You can recover to any point-in-time within the recovery window.  This window is much longer than Flashback Database can provide.  Any logical corruption can be corrected by restoring portions of "good" data, or the whole database.
  • You can restore the database within the same data center, returning the database to availability without affecting the application tier.
  • Backups are very important if you are using Active Data Guard. The issue may be with the read-only copy of the database.  Since both the primary and the Data Guard copy are used by the application, restoring from a backup is the quickest method to rebuild either environment.

Disaster Recovery


Disaster Recovery is typically a documented, structured approach to dealing with unplanned incidents.  These incidents include a complete site failure involving the loss of all data centers within a geographical area.

The 2 major risk areas addressed by any approach are:

RTO - Recovery Time Objective.  How long does it take to return the application to full availability?

RPO - Recovery Point Objective.  How much data loss can be tolerated in the event of a disaster?


Comparison



The bottom line when comparing the different technologies:


Flashback Database - Correct logical corruption that is found quickly, within the flashback window.

Data Guard - A full disaster requiring a move of the entire environment; it keeps the application available in the event of a disaster.

Backups - Restore the current environment to availability (production site or Data Guard site).


The ZDLRA, by offering a very low RTO and RPO, provides a solution that can be used to return availability in all but a full datacenter disaster.




Tuesday, August 27, 2019

Oracle incremental merge and corruption



In the last couple of years I have seen a lot of backup vendors (Commvault, Netbackup, Rubrik, etc.) using the incremental merge backup process that Oracle introduced in 10g.

These vendors have combined the incremental merge with snapshots of each new merged backup to provide a daily restore point.  The process is to perform an incremental backup, use the DB software to merge in the changes, and then "snap" the storage.  Rinse and repeat.
The archive logs are also backed up to provide a recovery point in time.

This process has a few flaws.

  1. You are leveraging DB resources to perform the merge.  Only the DB software knows the proprietary format of files to merge in the changes.
  2. The merge process is a sequential process that can be slow. You are essentially "recovering" the database one day forward using the merge process. We all know how slow a recovery can be.
  3. The storage used for these backups is typically not tier 1 storage, and this also affects the speed of the merge process.
  4. The RMAN catalog only knows about the last incremental merge.  In order to recover to anything other than the last merge, you need to catalog the older backups (see the sketch after this list).
  5. These are FULL SIZE datafile copies.  RMAN backup sets have many features to optimize the size of backupsets (exclude unused blocks, compress, etc.) that can't be leveraged with this type of backup.
  6. Lastly, there is no inherent checking of backups for corruption.  If there is any corruption in the backup, it may not be found.
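To illustrate point 4, recovering from anything older than the latest merge typically means mounting the older storage snapshot and re-cataloging its contents; the mount path here is purely hypothetical:

RMAN> catalog start with '/mnt/snap_2019_08_26/' noprompt;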

I am going to walk through an example showing you the last point.

But first, I want to point out how the ZDLRA addresses all of these points.


  1. The ZDLRA uses the same proprietary software that RMAN uses to merge in the changes.  Rather than using DB resources (and licensed CPUs) to merge in the changes, the ZDLRA offloads this workload and performs the merge internally.
  2. The ZDLRA simply keeps track of block changes and indexes them.  This process is much more efficient than a recovery process, and scales much better.
  3. The ZDLRA uses tier 1 storage and flash, allowing the performance to match or exceed that of the databases it is protecting.
  4. The RMAN catalog is within the ZDLRA, allowing it to automatically catalog new full backups as they are created.
  5. These are backupsets, and don't include unused space, optimizing space utilization.
  6. There are many layers of error checking.  This includes not only during the backup, but also within the ZDLRA (and when replicated).  The ZDLRA offloads the "restore validate" process to ensure that DB blocks that haven't been touched for a long time are periodically checked for corruption.  An "incremental forever" strategy is risky if you don't periodically check blocks for corruption.

Now to show why periodically checking for corruption is so important.

I am going to schedule an incremental merge of a single datafile (to keep it simple), inject corruption into the datafile copy, and then continue to merge into the backup.

First, I'm going to create a new tablespace for my testing.

SYS:DB10 > create tablespace bsg datafile '/tmp/bsg.dbf' size 10m;

Tablespace created.


Now that we have a tablespace, let's create a table in it that we can use to demonstrate the corruption.


create table corruption_test tablespace bsg as select * from dba_users;

Table created.

SQL> select  file_id, block_id from dba_extents where segment_name = 'CORRUPTION_TEST';

   FILE_ID   BLOCK_ID
---------- ----------
        16        128


Now I have a copy of dba_users in my new tablespace, and I know where the data is.

I am going to run my script (below) and perform a couple of incremental merges.


run
{
allocate channel disk1 device type disk format '/tmp/bkup%U';
recover copy of datafile 16  with tag 'incr_merge';
backup incremental level 1 for recover of copy with tag 'incr_merge' datafile 16;
}


Now here is the output; everything looks good.  I backed it up, and the changes are getting merged into the image copy of the datafile.


Starting recover at 08/27/19 17:05:00
no copy of datafile 16 found to recover
Finished recover at 08/27/19 17:05:00

Starting backup at 08/27/19 17:05:00
channel disk1: starting incremental level 1 datafile backup set
channel disk1: specifying datafile(s) in backup set
input datafile file number=00016 name=/home/oracle/app/oracle/oradata/BSG18/bsg.dbf
channel disk1: starting piece 1 at 08/27/19 17:05:01
channel disk1: finished piece 1 at 08/27/19 17:05:02
piece handle=/tmp/bkup52ua97lt_1_1 tag=INCR_MERGE comment=NONE
channel disk1: backup set complete, elapsed time: 00:00:01
Finished backup at 08/27/19 17:05:02

Starting Control File and SPFILE Autobackup at 08/27/19 17:02:49
piece handle=/home/oracle/app/oracle/fast_recovery_area/bsg18/BSG18/autobackup/2019_08_27/o1_mf_s_1017421369_gpc6mt45_.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 08/27/19 17:02:52
released channel: disk1

RMAN> run
{
allocate channel disk1 device type disk format '/tmp/bkup%U';
recover copy of datafile  16 with tag 'incr_merge';
backup incremental level 1 for recover of copy with tag 'incr_merge' datafile 16;
}


Starting recover at 08/27/19 17:07:35
channel disk1: starting incremental datafile backup set restore
channel disk1: specifying datafile copies to recover
recovering datafile copy file number=00016 name=/tmp/bkupdata_D-BSG18_I-101380451_TS-BSG_FNO-16_4vua97ho
channel disk1: reading from backup piece /tmp/bkup52ua97lt_1_1
channel disk1: piece handle=/tmp/bkup52ua97lt_1_1 tag=INCR_MERGE_BSG
channel disk1: restored backup piece 1
channel disk1: restore complete, elapsed time: 00:00:01
Finished recover at 08/27/19 17:07:37

Starting backup at 08/27/19 17:07:37
channel disk1: starting incremental level 1 datafile backup set
channel disk1: specifying datafile(s) in backup set
input datafile file number=00016 name=/home/oracle/app/oracle/oradata/BSG18/bsg.dbf
channel disk1: starting piece 1 at 08/27/19 17:07:37
channel disk1: finished piece 1 at 08/27/19 17:07:38
piece handle=/tmp/bkup54ua97qp_1_1 tag=INCR_MERGE_BSG comment=NONE
channel disk1: backup set complete, elapsed time: 00:00:01
Finished backup at 08/27/19 17:07:38


Now let's corrupt the datafile.
I am going to use "sed" and change the user "SYSTEM" to "      " in the image copy of the datafile.


oracle@/tmp [18c] $ sed -i 's/SYSTEM/      /g' /tmp/bkupdata_D-BSG18_I-101380451_TS-BSG_FNO-16_4vua97ho



Now I'm going to run the same incremental merge again (and again).
Here's the output.  Everything looks fine, since the database never touched the block that I corrupted.
The merge process simply replaces blocks that have changed.

Starting recover at 08/27/19 17:13:05
channel disk1: starting incremental datafile backup set restore
channel disk1: specifying datafile copies to recover
recovering datafile copy file number=00016 name=/tmp/bkupdata_D-BSG18_I-101380451_TS-BSG_FNO-16_4vua97ho
channel disk1: reading from backup piece /tmp/bkup54ua97qp_1_1
channel disk1: piece handle=/tmp/bkup54ua97qp_1_1 tag=INCR_MERGE
channel disk1: restored backup piece 1
channel disk1: restore complete, elapsed time: 00:00:01
Finished recover at 08/27/19 17:13:07

Starting backup at 08/27/19 17:13:07
channel disk1: starting incremental level 1 datafile backup set
channel disk1: specifying datafile(s) in backup set
input datafile file number=00016 name=/home/oracle/app/oracle/oradata/BSG18/bsg.dbf
channel disk1: starting piece 1 at 08/27/19 17:13:07
channel disk1: finished piece 1 at 08/27/19 17:13:08
piece handle=/tmp/bkup56ua9853_1_1 tag=INCR_MERGE comment=NONE
channel disk1: backup set complete, elapsed time: 00:00:01
Finished backup at 08/27/19 17:13:08



Now that I have performed some incremental merges, let's see what happens when I go to restore that datafile.
I took the datafile offline, removed it, and now will try to restore it from the image copy.

RMAN> alter database datafile 16 offline;

Statement processed
RMAN> exit
[oracle@oracle-server] rm /home/oracle/app/oracle/oradata/BSG18/bsg.dbf

[oracle@oracle-server]  rman target /

RMAN> restore datafile 16;



Starting restore at 08/27/19 17:15:01
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=30 device type=DISK

channel ORA_DISK_1: restoring datafile 00016
input datafile copy RECID=53 STAMP=1017421986 file name=/tmp/bkupdata_D-BSG18_I-101380451_TS-BSG_FNO-16_4vua97ho
destination for restore of datafile 00016: /home/oracle/app/oracle/oradata/BSG18/bsg.dbf
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of restore command at 08/27/2019 17:15:02
ORA-19566: exceeded limit of 0 corrupt blocks for file /tmp/bkupdata_D-BSG18_I-101380451_TS-BSG_FNO-16_4vua97ho
ORA-19600: input file is datafile-copy 53 (/tmp/bkupdata_D-BSG18_I-101380451_TS-BSG_FNO-16_4vua97ho)
ORA-19601: output file is datafile 16 (/home/oracle/app/oracle/oradata/BSG18/bsg.dbf)



Now, even though the incremental merges all look good, the corruption isn't caught until I go to restore the datafile.

This is why it is so important to execute a restore validate on a regular basis, and keep a secondary backup.
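As a sketch, either of the following RMAN commands run against this environment would have read the blocks of the image copy and surfaced the corruption well before a real restore was needed (the datafile-copy name is the one from the output above):

RMAN> validate datafilecopy '/tmp/bkupdata_D-BSG18_I-101380451_TS-BSG_FNO-16_4vua97ho';

RMAN> restore datafile 16 validate;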

With the incremental merge process, there is only 1 copy of each unique block.  For historical data that doesn't change, those blocks will never get checked for corruption.





Tuesday, July 30, 2019

Block change tracking and the ZDLRA (part 2)

Oracle Block Change Tracking (BCT) and the ZDLRA



One of the most commonly asked questions I get when explaining how the ZDLRA works has to do with Block Change Tracking.

The question is usually something like...

"If the the Block Change Tracking file only keeps 7 days of history by default, how does it work with the ZDLRA where you only perform a full backup once"?

And the second question is

"If I only perform incremental cumulative backups, how does it use the BCT if it get's recreated/invalidated ?"

So how is the BCT file used?


First, I am assuming you understand what's in the BCT file; this is explained in my previous post.
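Before walking through how RMAN uses the BCT file, you can confirm that change tracking is enabled, and where the file lives, with a quick query (just a sketch):

SQL> select status, filename, bytes from v$block_change_tracking;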

For full backups -

The BCT file is not used, but it is updated prior to the backup if any changes took place.
Since the datafiles are "fuzzy" (still being updated), the next BCT record starts with an SCN prior to the checkpoint SCN of the backup.


For Incremental Cumulative/Differential backups -

The BCT file is updated prior to the incremental cumulative if any changes took place.


RMAN then determines the starting SCN and ending SCN for the backup, and whether the BCT file can possibly be used.





Once RMAN determines the starting/stopping SCN for the backup and verifies that the BCT is in use, it can then determine whether the BCT file was capturing changes for the backup period, and whether those changes fall within the BCT capture window.



Now at this point we have determined:

  • BCT was in use
  • The backup period falls within the window that the BCT has complete, valid records for.

Now we need to determine if any changes took place, and if so, build the list of blocks to back up.  Keep in mind that if there are no version records, this means that no changes were made.
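After a backup completes, you can check whether it actually used block change tracking, and how few blocks it had to read, with a query along these lines against V$BACKUP_DATAFILE (a sketch):

select file#, used_change_tracking, blocks_read, datafile_blocks
from v$backup_datafile
order by completion_time desc;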








NOTE :  If you are using multiple backup strategies, and TAGS to identify backups, this will complicate the process.


ZDLRA PROCESS


For the ZDLRA, the process is very simple.  As you go through the steps, you see that new version records are created for each datafile that has any changes since the last backup (regardless of the backup type).

When an incremental cumulative backup is executed and sent to the ZDLRA, the RMAN catalog on the ZDLRA gets updated with the virtual full.  The RMAN client, which uses this catalog, compares against the checkpoint SCN of the last full backup in the catalog (the virtual full).

Because the RMAN client is using the RMAN catalog from the ZDLRA (containing the virtual fulls), the RMAN client always compares to the last virtual full.

Keeping 7 days of BCT history isn't an issue, because the current RMAN incremental cumulative backup always compares to the previous incremental cumulative backup, which has been virtualized.
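For reference, the recurring backup a protected database sends to the ZDLRA in this strategy is just a level 1 cumulative over an SBT channel pointing at the ZDLRA backup module (libra.so). The library path and wallet/credential settings below are placeholders for whatever your environment actually uses:

run
{
allocate channel sbt1 device type sbt parms "SBT_LIBRARY=/u01/app/oracle/lib/libra.so, ENV=(RA_WALLET='location=file:/u01/app/oracle/wallet credential_alias=zdlra-scan:1521/zdlra:dedicated')";
backup cumulative incremental level 1 database;
backup archivelog all not backed up;
}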