Wednesday, April 15, 2020

ZDLRA, Real-time redo compression


  ZDLRA: Changes in the Protection Policy Compression Algorithms (Doc ID 2654539.1)

First I wanted to go through a little background on how compression works for archive logs.


As you probably know, database backups sent to the ZDLRA are automatically compressed.  That is not necessarily the case for archive logs.

Archive logs can reach the ZDLRA by 2 different methods:

  • Real-time redo
  • Log sweeps
Regardless of how the archive logs reach the ZDLRA, they are stored as backup files.

Log Sweeps


Now for log sweeps, the backupsets of archive logs are stored exactly as they are sent.
If you follow the best practices for backups, you send over archive logs with
   - plus archivelog not backed up filesperset 32;
As you can figure out, this creates one or more backupsets, with each backupset containing up to 32 archive logs.
Also note that by default the archive log backupsets are NOT compressed.  You can RMAN compress the archive log backups (if you have the ACO license), but be careful, because it is not recommended to compress the datafile backups.
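For illustration, here is a minimal sketch of what such a log sweep could look like in RMAN (channel configuration is omitted, and you should verify the exact syntax against your release):

run {
  # Typical log sweep: back up the database plus any archive logs
  # that have not already been backed up, capping each archive log
  # backupset at 32 logs.
  backup device type sbt
    database
    plus archivelog not backed up filesperset 32;
}

# Optionally, the archive logs (and only the archive logs) can be
# RMAN compressed, for example:
#   backup device type sbt as compressed backupset
#     archivelog not backed up filesperset 32;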

Real-time redo

Real-time redo is where this MOS note takes effect.  Up to the current release of ZDLRA, archive logs sent via real-time redo were compressed using basic compression.  A "standby redo log" would mirror the redo log for the protected database, and a "BACKUP_ARCH" task would be queued to create a backupset from the "standby redo log".  In the process of creating the RMAN backupset, basic compression was applied and a compressed RMAN backupset was created.

This does a few things that I want to point out.
  • During the creation of the archivelog backupset, CPU is used to compress the backupset, and the final result is a smaller backupset.
  • During the validation of backups for a database, CPU is used to uncompress the RMAN backupset to validate it. NOTE: this would also occur if you RMAN compress the archive logs during a log sweep.
  • When recovering a database, the compressed RMAN backupset is sent over the network (less network traffic), and the Database Host uncompresses the backupset to read the archive logs.

If you read this carefully, you will see that there is a CPU cost for compression, and there is a space savings when using it.

New Feature to control log compression!


The new feature allows you to control the log compression for archive logs sent through real-time redo.
NOTE: When using compression on the ZDLRA, you don't need the ACO license.

Below is the information from the documentation.

log_compression_algorithm

The setting for the archivelog compression feature. 
This setting is used to adjust the compression level of NZDL/polled archivelog backups.


OFF means that the archivelogs will not be compressed. 
BASIC means the BASIC compression algorithm will be used to compress the backups. 
LOW means the LOW compression algorithm will be used to compress the backups. 
MEDIUM means the MEDIUM compression algorithm will be used to compress the backups. 
HIGH means the HIGH compression algorithm will be used to compress the backups.



This is a great lever to control what happens with archive logs.

MEDIUM compression can be used if you have ample CPU and are concerned about space; OFF can be used if you have CPU concerns and ample space.
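As a sketch of how this might be set, assuming the protection policy is updated through DBMS_RA.UPDATE_PROTECTION_POLICY (the parameter name below is taken from the documentation excerpt above, and 'GOLD' is a hypothetical policy name; verify both against the ZDLRA documentation for your release):

-- Run as RASYS on the ZDLRA; the policy name and the parameter
-- usage here are assumptions based on the documentation excerpt.
BEGIN
  DBMS_RA.UPDATE_PROTECTION_POLICY(
    protection_policy_name    => 'GOLD',
    log_compression_algorithm => 'MEDIUM');
END;
/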

Remember, the deeper the compression, the higher the CPU cost. Like most things in IT, there are tradeoffs.


Monday, February 24, 2020

What are those processes in V$DATAGUARD_PROCESS on the ZDLRA?


With 12.2 of Oracle, there is now more detail in the V$DATAGUARD_PROCESS view on the downstream database.  Keep in mind that the "downstream database" may not be just a standby database; it can also be a Far Sync instance, or a ZDLRA receiving real-time redo.

Below is a table of the processes that I've seen in this view on a ZDLRA, and I will go through what each one of these is.
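The output can be produced with a query along these lines (my reconstruction; the original query isn't shown):

SELECT name, action, client_role, COUNT(*)
FROM   v$dataguard_process
GROUP  BY name, action, client_role
ORDER  BY client_role;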

NAME ACTION       CLIENT_ROLE        COUNT(*)
---- ------------ ---------------- ----------
rfs  IDLE         async SRL single         10
rfs  IDLE         async SRL multi          20
rfs  IDLE         async ORL single         30
rfs  IDLE         async ORL multi          40
rfs  IDLE         gap manager             100
rfs  IDLE         archive gap              10



First off, I want to point out what happens in a RAC environment.  As you can imagine, each thread (node) in a RAC environment independently sends its redo to the downstream.  The same thing happens with a standby database sending its redo to a downstream.
For example, if my primary DB cluster is a 4 node RAC cluster and I have a single-node standby database, redo sent from both the primary AND the standby to a ZDLRA will arrive as 4 independent streams (one per thread).

Now to walk through what each of these types of processes is, I will first define the terms.
Since I am seeing multiple types, this example is from a ZDLRA (rather than a standby database), so I will define the terms from a ZDLRA standpoint.


SRL – Standby Redo Logs are sending the changes. These are standby databases.
ORL – Online Redo Logs are sending the changes. These are primary databases.

Single – The protected database is using the same log buffer to write both archive logs and redo to the ZDLRA (and possibly another standby).
Multi  – The protected database is using a separate buffer for redo to the ZDLRA, spawned because the ZDLRA could not keep up with the archiver/standby apply.

Async       – Real-time redo processes.
Gap manager – Process that keeps track of any gaps in redo and can send archive logs to fill the gap.
Archive gap – Additional processes that send over any gaps in redo. The gap manager will send over the first archive log, but archive gap processes will be started in parallel with the gap manager if more than 1 archive log is needed. The number of these processes is controlled by the LOG_ARCHIVE_MAX_PROCESSES parameter on the protected database, which has a default value of 4 (see the example below).
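For example, to allow more of these parallel gap-fill processes, you could raise the parameter on the protected database (the value 8 below is purely illustrative):

-- Run on the protected database; the default is 4.
ALTER SYSTEM SET log_archive_max_processes=8 SCOPE=BOTH SID='*';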

So for my example, this is what we are seeing:

  • “Async SRL single” processes of 10 is telling us that there are 10 standby threads being applied and sending real-time redo using the same buffer on the protected database that the archiver process is using. 
  • “Async SRL multi” processes of 20 is telling us that there are 20 standby threads being applied and sending real-time redo that could not keep up with the archiver process and spawned an additional buffer. 
  • A total of 30 combined processes for "Async SRL" is telling us that there are 30 threads sending real-time redo in total.
**** Notice I used the term "threads" above.  The number of threads is the number of RAC nodes on the primary database regardless of how many instances are on the standby applying redo.


  • “Async ORL single” processes of 30 is telling us that there are 30 primary database instances sending real-time redo using the same buffer on the protected database that the archiver process is using. 

  • “Async ORL multi” processes of 40 is telling us that there are 40 primary database instances sending real-time redo that could not keep up with the archiver process and spawned an additional buffer. 
  • A total of 70 combined processes for "Async ORL" is telling us there are 70 primary instances sending real-time redo.
**** Notice I used the term "instances" above.  The number of instances is the number of RAC nodes on the primary database rather than the number of databases.
  • "Gap Manager" process of 100 is telling us that there are 100 process that are monitoring the async process.  Notice that the total number of "Gap Manager" processes matches the total number of async processes.
  • “archive gap” processes are temporary and should be killed once they are idle for 2 minutes since they completed their work. We should only see these processes if the ZDLRA falls behind in collecting redo.

I hope this helps explain what the processes are that are in use to manage the real-time redo.

Monday, December 23, 2019

Oracle - Alternate Destinations, FRA, and balancing 2 destinations


My previous post was on using Groups/Priorities when specifying an alternate destination for sending redo to a remote destination. You can find it here.

This post covers 2 topics that come up whenever I talk about alternate destinations.

  • Does it balance 2 destinations with the same group/priority? And once it chooses a destination, does it remain sticky to that destination and ignore other destinations with the same group/priority?
  • Does my FRA know that both destinations can be considered the same for "shipped" status?


First I will show you how I configured my tests.

In my example, I actually have both a standby database and a Far Sync instance for my primary database.

bsg18   --> Primary database
bsg18d --> Dataguard database
bsg18f  --> Far Sync database

This is what my 2 remote destinations look like in my configuration.


log_archive_dest_2 ='service="bsg18d" ASYNC NOAFFIRM max_failure=0 db_unique_name="bsg18d" group=1 priority=1 valid_for=(online_logfile,all_roles)';

log_archive_dest_3 ='service="bsg18f" SYNC AFFIRM max_failure=1 db_unique_name="bsg18f" group=1 priority=1 valid_for=(online_logfile,all_roles)';


You can see that both Dataguard (bsg18d) and Far Sync (bsg18f) are in "group=1 priority=1".

Multiple Destinations with the same Group and Priority - Does it balance them?


First I'm going to look at where the logs are going.
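The listings that follow appear to come from v$archived_log. Since the original query isn't shown and the column headings are truncated, this is only my reconstruction:

SELECT dest_id, sequence#, status, applied, deleted, fal
FROM   v$archived_log
ORDER  BY sequence#, dest_id;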

   DEST_ID  SEQUENCE# S SHI APPLIED   DEL FAL
---------- ---------- - --- --------- --- ---
         2       1751 A YES NO        NO  NO
         2       1752 A YES NO        NO  NO
         2       1753 A YES NO        NO  NO


I can see that they are going to DEST_2, which is my dataguard instance.  I am going to shut it down, do a few log switches and see what happens.

   DEST_ID  SEQUENCE# S SHI APPLIED   DEL FAL
---------- ---------- - --- --------- --- ---
         2       1751 A YES YES       NO  NO
         1       1751 A YES NO        NO  NO
         1       1752 A YES NO        NO  NO
         2       1752 A YES YES       NO  NO
         1       1753 A YES NO        NO  NO
         2       1753 A YES YES       NO  NO
         1       1754 A YES NO        NO  NO
         2       1754 A YES NO        NO  NO
         1       1755 A YES NO        NO  NO
         1       1756 A YES NO        NO  NO
         1       1757 A YES NO        NO  NO
         1       1758 A YES NO        NO  NO
         1       1759 A YES NO        NO  NO
         1       1760 A YES NO        NO  NO


I've included DEST_1 this time to show that my sequence# has moved forward, but the sending of redo to my standby has stopped.

Now I started it back up, and did a few log switches.

  DEST_ID  SEQUENCE# S SHI APPLIED   DEL FAL
---------- ---------- - --- --------- --- ---
         2       1751 A YES YES       NO  NO
         2       1752 A YES YES       NO  NO
         2       1753 A YES YES       NO  NO
         2       1754 A YES YES       NO  NO
         2       1761 A YES NO        NO  NO
         3       1755 A YES YES       NO  YES
         3       1758 A YES NO        NO  YES
 .....
         3       1754 A YES NO        NO  YES
         3       1752 A YES NO        NO  YES
         3       1762 A YES NO        NO  NO
         3       1763 A YES NO        NO  NO


After restarting the database, I can see that it is now using DEST_3 (my Far Sync instance) instead of DEST_2 (Dataguard).

Now for a final test on this, I am going to STOP my Far Sync instance.

   DEST_ID  SEQUENCE# S SHI APPLIED   DEL FAL
---------- ---------- - --- --------- --- ---
         2       1751 A YES YES       NO  NO
         2       1752 A YES YES       NO  NO
         2       1753 A YES YES       NO  NO
         2       1754 A YES YES       NO  NO
         2       1761 A YES YES       NO  NO
         3       1755 A YES YES       NO  YES
         3       1758 A YES NO        NO  YES
 ....
         3       1754 A YES YES       NO  YES
         3       1752 A YES YES       NO  YES
         3       1762 A YES NO        NO  NO
         3       1763 A YES NO        NO  NO
         2       1764 A YES NO        NO  YES
         2       1765 A YES NO        NO  NO
         2       1766 A YES NO        NO  NO


I stopped bsg18f (Far Sync) when it was on sequence 1764.  I can see that the database automatically switched to sending logs to bsg18d (Dataguard) when DEST_3 became unavailable.

This is exactly what I would expect to happen when I have 2 destinations with the same Group/Priority:
  • It chooses one of the 2 destinations and becomes "sticky" once chosen.
  • If the "sticky" destination becomes unavailable, it then automatically switches to the second destination.

FRA - If you use an FRA (Fast Recovery Area), how does it understand that my redo can go to an alternate location?

Now after all this, you can see from my previous tests that some logs were sent to DEST_2 and some were sent to DEST_3.  Both of these are members of the same group.
Let's see if the FRA sees them as shipped (per my retention policy).
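The report below matches the columns of v$recovery_area_usage, so it was presumably produced by a query along these lines:

SELECT file_type, percent_space_used, percent_space_reclaimable,
       number_of_files, con_id
FROM   v$recovery_area_usage;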

FILE_TYPE               PERCENT_SPACE_USED PERCENT_SPACE_RECLAIMABLE NUMBER_OF_FILES     CON_ID
----------------------- ------------------ ------------------------- --------------- ----------
CONTROL FILE                             0                         0               0          0
REDO LOG                                 0                         0               0          0
ARCHIVED LOG                         21.78                     21.78             555          0
BACKUP PIECE                          7.54                       .45              38          0
IMAGE COPY                               0                         0               0          0
FLASHBACK LOG                            0                         0               0          0
FOREIGN ARCHIVED LOG                     0                         0               0          0
AUXILIARY DATAFILE COPY                  0                         0               0          0



Perfect!  All the archive log space is reclaimable.  When I have different remote destinations that are members of the same group, as long as the redo log was successfully shipped to a destination in the group, the log is eligible for deletion.