Tuesday, November 26, 2019

Dataguard and LOG_ARCHIVE_MAX_PROCESSES


Any DBA who has set up Dataguard has seen the standard diagram of how redo logs are sent to the standby database.

What I wanted to go through is the use of LOG_ARCHIVE_MAX_PROCESSES and how it affects the sending of redo logs.

If you have set this parameter in your environment it is typically because you are seeing issues with the database cleaning out redo logs.
The primary reason this gets changed from the default (4) is to add more processes capable of clearing out redo logs and turning them into archive logs.  The more processes running in parallel to clear out redo logs, the faster the database can move on to start a new redo log.
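If you do need to raise it, the parameter is dynamic, so a sketch like the following works (the value of 10 here is just an example):

```sql
-- Dynamic parameter: takes effect immediately, no restart required
ALTER SYSTEM SET log_archive_max_processes = 10 SCOPE = BOTH;

-- Verify the change
SHOW PARAMETER log_archive_max_processes
```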

But did you know that LOG_ARCHIVE_MAX_PROCESSES also affects how many processes are used on your standby database to fill any gaps?

In my example I have a primary database (BSG18) which is on 18.3.0.  I also have a standby database (BSG18d).

In my primary database I set the parameter to 10.

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
log_archive_max_processes            integer     10


I then shut down my standby database and performed many log switches. I needed at least 10 log switches to demonstrate what happens.
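To force the log switches, a simple PL/SQL loop like this does the job (run on the primary while the standby is shut down; the count of 12 is just an example):

```sql
-- Generate enough log switches to build an archive-log gap on the standby
BEGIN
  FOR i IN 1 .. 12 LOOP
    EXECUTE IMMEDIATE 'ALTER SYSTEM SWITCH LOGFILE';
  END LOOP;
END;
/
```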

Now once I start up my standby database, LOG_ARCHIVE_MAX_PROCESSES is used to determine how many processes on my standby database are used to fill the gap.

Here is my query against V$DATAGUARD_PROCESS:

select name,
           pid "standby PID",
           role, client_pid "Primary DB PID",
           client_role
    from V$DATAGUARD_PROCESS
    where type='NET'
order by 2;


NAME  standby PID              ROLE                     Primary DB PID CLIENT_ROLE     
----- ------------------------ ------------------------ -------------- ----------------
rfs   18116                    RFS async                          4165 async ORL multi 
rfs   26337                    RFS ping                           4153 gap manager     
rfs   26401                    RFS archive                        4159 archive gap     
rfs   26404                    RFS archive                        8752 archive gap     
rfs   26408                    RFS archive                        8738 archive gap     
rfs   26411                    RFS archive                        8750 archive gap     
rfs   26413                    RFS archive                        8740 archive gap     
rfs   26415                    RFS archive                        8748 archive gap     
rfs   26417                    RFS archive                        8742 archive gap     
rfs   26419                    RFS archive                        8744 archive gap     
rfs   26421                    RFS archive                        8746 archive gap     


If you look at the output, you can see all the pieces fit together.


Real Time redo process


The first process sends the redo information to the standby database in async mode, where it is written to the standby redo logs.


oracle    4165     1  0 Nov05 ?        00:05:57 ora_tt02_bsg18


We can see on the primary database it is a "tt" process and starts up with the database.

oracle   18116     1  0 Nov12 ?        00:07:58 oraclebsg18d (LOCAL=NO)


and on the standby database, it is a SQL*Net process.

Gap Manager process


Now let's look at the second process.

oracle    4153     1  0 Nov05 ?        00:00:26 ora_tt00_bsg18


We can see on the primary database it is also a "tt" process and also starts up with the database.

oracle   26337     1  0 Nov12 ?        00:00:18 oraclebsg18d (LOCAL=NO)

and on the standby database, it is a SQL*Net process.


Gap fill processes


Now let's look at the remaining processes.


oracle    4159     1  0 Nov05 ?        00:00:14 ora_arc1_bsg18
oracle    8738     1  0 Nov05 ?        00:00:13 ora_arc2_bsg18
oracle    8740     1  0 Nov05 ?        00:00:14 ora_arc3_bsg18
oracle    8742     1  0 Nov05 ?        00:00:13 ora_arc4_bsg18
oracle    8744     1  0 Nov05 ?        00:00:13 ora_arc5_bsg18
oracle    8746     1  0 Nov05 ?        00:00:13 ora_arc6_bsg18
oracle    8748     1  0 Nov05 ?        00:00:13 ora_arc7_bsg18
oracle    8750     1  0 Nov05 ?        00:00:13 ora_arc8_bsg18
oracle    8752     1  0 Nov05 ?        00:00:13 ora_arc9_bsg18


When I look at the primary database, I can see 9 "arc" processes.
Why 9 processes? Because a LOG_ARCHIVE_MAX_PROCESSES of 10 tells the primary database that it can use 10 processes to fill the gap on the standby.
The 1st process is the gap manager ("tt"), and the remaining 9 "arc" processes are used to send over the archive logs.

Increasing LOG_ARCHIVE_MAX_PROCESSES therefore increases the parallelism available for sending archive logs to fill any gaps.



Monday, October 21, 2019

Oracle Active Dataguard - More than just a read-only copy.


NOTE: I updated this on 1/16/20 with additional information.

Anyone that knows me knows I'm a stickler for properly explaining technical topics and ensuring I use the correct terms.

Dataguard vs Active Dataguard is a topic that drives me crazy sometimes, as people tend to use the two terms interchangeably.

The differences appear to be subtle on the surface, but there are some major differences (other than the obvious) that you might not know about.

What I am hoping you get out of this blog post is... If you have the license for Active Dataguard, then turn on portions of it, even if the application isn't using it for queries. There are more benefits to Active Dataguard than just having a read-only database copy.


Dataguard  -  First let's talk about normal Dataguard.


Basically, a Dataguard database is an exact copy of the primary/protected database that is constantly in recovery.  Redo log information (used for recovery) is automatically sent from the primary/protected database by specifying the name/location of the Dataguard copy as an additional destination for the redo logs.
This is a simple explanation, but Dataguard concepts really are that simple.

Dataguard - What Dataguard does for me to protect my primary database.

Dataguard is used for a number of purposes.  The most common of which is to have a "Disaster Recovery" copy of the primary database available in the event of the loss of the primary copy.
For a "Disaster Recovery" copy, best practice is to have this copy be geographically isolated from the primary.  This ensures a disaster to the primary datacenter (flood, earthquake, etc.) doesn't affect the Dataguard copy.

When moving the application to use the Dataguard copy, there are 2 different ways to bring up the dataguard copy as the primary.

1) Switchover - In the case of a switchover, all transactions on the primary are applied to the Dataguard copy before it is opened up.  This ensures no data loss. This isn't always possible, since transactions on the primary database need to be "drained" and transferred to the Dataguard copy.

2) Failover - In the case of a failover, it is not possible to "drain" transactions from the primary database.  All outstanding transactions that have been received from the primary database are applied on the Dataguard copy, and it is opened with "resetlogs"; there may be data loss.
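For reference, both role changes can be driven with a single SQL statement in recent releases (12c and later); the standby name below is just a placeholder:

```sql
-- Switchover: graceful role change, no data loss (run on the primary)
ALTER DATABASE SWITCHOVER TO bsg18d;

-- Failover: emergency role change (run on the standby when the primary is lost)
ALTER DATABASE FAILOVER TO bsg18d;
```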

Other uses for Dataguard

  • Dataguard can be used to create an up-to-date copy of production for testing/QA, etc.
  • A Dataguard database can be opened for write (snapshot standby) to test code releases, etc., and then flashed back to being a Dataguard copy again.
  • A Dataguard database can have a delay in log apply, essentially providing a time gap, allowing data to be recovered within the time gap in case of user error.
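As a sketch of the snapshot standby workflow mentioned above (run on the standby, with redo apply stopped first):

```sql
-- Convert the physical standby to an updatable snapshot standby;
-- an implicit guaranteed restore point is created automatically
ALTER DATABASE CONVERT TO SNAPSHOT STANDBY;

-- ... open the database and run write tests ...

-- Discard the test changes and return to being a physical standby
ALTER DATABASE CONVERT TO PHYSICAL STANDBY;
```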

Active Dataguard Option (Licensable option) -

This option now contains many features.

Original feature  -- the Dataguard copy is open as read-only.

  That is all it takes to use Active Dataguard (if you have the license).  Before starting the apply of redo log information, the database is opened read-only.

  The main advantage of Active Dataguard is that you can now use the DR copy of the database for queries.  This not only offloads activity from the primary to the mostly idle Dataguard copy, it also ensures that there is a readable copy of the data even while the primary is not available (patching, etc.).

What I wanted to go through in this post is all the other features that come with Active Dataguard that you might not realize.

Additional Active Dataguard Features.

First I am going to separate the features into 2 requirements.

Features that are available when the database is in mount mode (read-only not required).


  1. Far Sync -  Far sync allows you to create a shell instance that is used to capture real-time redo from a database and send it on to a standby database.  You can have multiple far sync instances for redundancy, and they are typically local to the primary to provide a synchronous destination with very little network lag.
  2. BCT (Block Change Tracking). You can create a BCT file on your standby database, and it will be used for incremental backups. 
  3. Real-time redo - This allows you to cascade redo from the standby to a destination in real time, in the same manner that the primary DB does with standby redo logs.
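As an example of the BCT feature above, enabling it on the standby looks the same as on a primary (the file path here is just an example):

```sql
-- Requires the Active Dataguard license when run on a standby
ALTER DATABASE ENABLE BLOCK CHANGE TRACKING
  USING FILE '/u01/app/oracle/bct/bsg18d_bct.f';

-- Confirm it is enabled
SELECT status, filename FROM v$block_change_tracking;
```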
Features that are available when the database is in read-only mode
  1. Automatic Block repair -  Corrupted blocks on the primary are repaired by automatically applying the "clean" block from the Dataguard copy.
  2. DML redirection - Occasional updates executed against the Dataguard copy are redirected to the primary database.
  3. Preserve Buffer cache during Role change - When a Dataguard database becomes the primary, the mode change is done without bouncing the database, thus preserving what is in memory.
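For the DML redirection feature (a 19c addition), a sketch at the session level; the table name here is hypothetical:

```sql
-- 19c: allow DML issued on the read-only standby to be redirected to the primary
ALTER SESSION ENABLE ADG_REDIRECT_DML;

UPDATE app_config SET last_refresh = SYSDATE WHERE id = 1;  -- hypothetical table
COMMIT;
```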

Additional Active Dataguard Features affecting ZDLRA.



I wanted to call out 2 features specific to Active Dataguard that have an effect on ZDLRA recoverability.

  1. Block Change Tracking File - With Active Dataguard, the BCT file gets updated with the changes and is used for incremental backups. This can be extremely important when using a ZDLRA to back up your Dataguard database.  With the ZDLRA, only incremental backups are performed.  Without an active BCT file, incremental backups will scan all database blocks with every incremental backup.  If you have the license for Active Dataguard, be sure to create a BCT file anyway, and ensure it is used.
  2. Far Sync Support -  You are probably wondering what this has to do with ZDLRA.  The Far Sync support (starting with 12c) is more than just support for Far Sync.  This feature changes when the applied updates are written to the destinations.  Prior to 12c, changes were written to the destinations AFTER the log switch.  This means that the downstream Dataguard databases, and the ZDLRA, only got full archive logs.  With 12c, an Active Dataguard database, just like the primary, sends changes to the destinations from memory as they are applied.  This can make a big difference in the recovery point objective (RPO) of a Dataguard database backed up by the ZDLRA.  This feature, using real-time redo from a standby database to a ZDLRA, is allowed as an exception under licensing.

Key takeaways.


Active Dataguard has a couple of great features that the ZDLRA takes advantage of. If you have the license for it, you should turn on BCT, even if the application isn't using any of the other features.


Tuesday, September 3, 2019

Oracle : Dataguard vs Backups

For anyone that has been around Oracle databases for a long time, they should be familiar with MAA (Maximum Availability Architecture).
Here is a link to the latest version of it.

As you probably know, MAA is described as "best practices" for implementing the Oracle Database.

What I am going to explain in this blog post, is why you need the different pieces.
Questions often come up around why each piece of availability (for lack of a better word) is important.
These pieces include:
  • Data Guard
  • Active Data Guard
  • Flashback database
  • Point-in-time backups
  • Archive Backups
  • DR site

Data Guard


Well, let's start with the first piece: Data Guard.  I'm not going to address Data Guard in the same datacenter, just Data Guard between datacenters.
There is a difference between "just Data Guard" and "Active Data Guard".

Data Guard  -  Mounted copy of the primary database that is constantly applying redo changes. It can be thought of as always in recovery. This copy can be gracefully "switched" to without data loss (all transactions are applied before opening). In the event of an emergency, it can be "failed" to which allows for opening it without applying all transactions.

Active Data Guard -  A Data Guard copy of the data (like above), BUT this is a read-only copy of the primary database.  Note this is an extra cost option requiring a license. Having the database open read-only has many advantages.
  • Reporting, and read-only workload can be sent to the Data Guard copy of the database taking advantage of the resources available that would typically sit idle.
  • Automatic block repair from the primary database.
  • A Block Change Tracking file can be used to speed up incremental backups.

Data Guard, regardless of the 2 types, is often implemented between datacenters with limited bandwidth.


This becomes important for a few reasons
  •  Cloning across datacenters, or even rebuilding a copy of the database in the opposite datacenter can take a long time.  Even days for a very large database.
  • The application tier that accesses the database needs to move with the database. In the event of a switchover/failover of the database, the application must also switchover.

NOTE :  Making your Data Guard database your primary database involves moving the application tier to your DR data center.  The testing and changes to move the application may be risky and involve more time than simply restoring the database.

Flashback Database


Flashback database allows you to query the database as of a previous point-in-time, and even flashback the entire database to a previous point-in-time.
You can also use flashback database to "recover" a table that was accidentally dropped.
With flashback database, the changes are kept in the FRA for a period of time, typically only a few hours or less depending on workload.

Flashback can be helpful in a few situations.
  1. Logical corruption of a subset of data.  If a logical corruption occurs, like data is accidentally deleted, or incorrectly updated, you can view the data as it appeared before the corruption occurred. This of course, is assuming that the corruption is found within the flashback window.  To correct corruption, the pre-corrupt data can be selected from the database and saved to a file.  The application team familiar with the data, can then create scripts to surgically correct the data.
  2. Logical corruption of much of the data.  If much of the data is corrupted, you can flashback the database to a point-in-time prior to the corruption, within the flashback window.  This is a FULL flashback of ALL data.  After the flashback occurs, a resetlogs is performed, creating a new incarnation of the database.  This can be useful if an application release causes havoc with the data, and it is better to suffer data loss than deal with the corruption.
  3. Iterative testing.  Flashback database can be a great tool when testing, to repeat the same test over and over.  Typically a restore point is created before beginning a test, and the database is flashed back to that state.  This allows an application team to be assured that the data is in the same state at the beginning of each iterative test.
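The iterative-testing pattern above is usually driven with a guaranteed restore point; as a sketch:

```sql
-- Before the test run
CREATE RESTORE POINT before_test GUARANTEE FLASHBACK DATABASE;

-- ... run the test ...

-- Rewind to the restore point and open with a new incarnation
SHUTDOWN IMMEDIATE
STARTUP MOUNT
FLASHBACK DATABASE TO RESTORE POINT before_test;
ALTER DATABASE OPEN RESETLOGS;
```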

Backups 


Backups allow you to restore the database to a point-in-time prior to the current point-in-time. ZDLRA is the best product to use for backups.

ZDLRA offers

  • Near Zero RPO
  • Very low RTO by using the proprietary incremental-forever strategy.  Incremental forever reduces recovery time by creating a virtual full restore point for each incremental backup taken.

The most recent backups are kept local to the database to ensure quick restore time. There are typically 2 types of backups.

Archive backups - These are often "keep" backups.  They are special because they are a self-contained backupset that allows you to recover only to a specific prior point-in-time. An example would be the creation of an archival backup on the last day of the month, which allows you to recover only to the end of that backup's time.  A single recovery point for each month, rather than any point during the month.

Point-in-time backups - These are backups that include the archive logs needed to recover to any point-in-time.  Typically these types of backups allow you to recover to any point in time from the beginning of the recovery window to the current point in time. For example, they allow you to recover to any point in the last 30 days.
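As a sketch of an archival backup, in RMAN the KEEP clause makes the backupset self-contained; the tag and retention period here are just examples:

```sql
-- RMAN archival ("keep") backup, restorable only to its own point-in-time
BACKUP DATABASE TAG 'EOM_NOV2019'
  KEEP UNTIL TIME 'SYSDATE + 365'
  RESTORE POINT EOM_NOV2019;
```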

The advantages of backups are
  • You can recover to any point-in-time within the recovery window.  This window is much longer than flashback database can provide.  Any logical corruptions can be corrected by restoring portions of "good" data, or the whole database.
  • You can restore the database within the same data center returning the database to availability without affecting the application tier.
  • Backups are very important if using Active Data Guard. The issue may be with the read-only copy of the database.  Since both the primary and the Data Guard copy are used by the application, restoring from a backup is the quickest method to rebuild either environment.

Disaster Recovery


 Disaster Recovery is typically a documented, structured approach to dealing with unplanned incidents.  These incidents include planning for a complete site failure involving the loss of all data centers within a geographical area.

The 2 major risks areas addressed by any approach are

RTO - Recovery Time Objective.  How long does it take to return the application to full availability?

RPO - Recovery Point Objective. How much data loss can be tolerated in the event of a disaster?


Comparison



The bottom line when comparing the different technologies


Flashback database - Correct logical corruption that is immediately found.

Dataguard - Full disaster requiring a move of the entire environment; keeps application continuity in the event of a disaster.

Backups - Restore current environment to availability (production site or Data Guard site).


ZDLRA, by offering a very low RTO and RPO, provides a solution that can be used to return availability in all but a datacenter disaster.