Tuesday, February 2, 2021

ZDLRA - Using Protection Policies to manage databases that have been migrated or are to be retired

One of the questions that keeps coming up with ZDLRA is how to manage the backups for a database that has either

  • Been migrated to another ZDLRA
  • Been retired, but the backup needs to be kept for a period of time

The best way to deal with this is through the use of Protection Policies.

How Protection Policies work:


If you remember right, Protection Policies are a way of grouping together databases that have the same basic characteristics.

The most important of these are:

  • Name/Description - Used to identify the Protection Policy
  • Recovery Window Goal - The minimum number of days of recovery data you want to store
  • Max Retention Window - (Optional) The maximum number of days of backups you want to keep
  • Unprotected Window - (Optional) Used to raise alerts for databases for which the ZDLRA is no longer receiving recovery data
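
For reference, policies like this are created on the ZDLRA with the DBMS_RA package (run as the RASYS user). This is only a minimal sketch, assuming the default storage location name of DELTA; the values match the GOLD policy used in the example that follows.

BEGIN
  DBMS_RA.CREATE_PROTECTION_POLICY(
    policy_name           => 'GOLD',
    description           => 'Tier 1 - 20 day recovery window goal',
    storage_location_name => 'DELTA',
    recovery_window_goal  => INTERVAL '20' DAY,
    max_retention_window  => INTERVAL '40' DAY);
END;
/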

One of the common questions I get is: what happens if I change the Protection Policy associated with my database?

Answer: By changing the Protection Policy a database is associated with, you are only changing metadata. Once the change is made, the database follows the rules of the Protection Policy it is now associated with, and it is no longer associated with the old Protection Policy.
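
As a minimal sketch of what that reassignment might look like, moving a database to a different policy is a single DBMS_RA call on the ZDLRA (again run as RASYS; the database and policy names are the ones used in the example below):

BEGIN
  DBMS_RA.UPDATE_DB(
    db_unique_name         => 'PRODDB',
    protection_policy_name => 'SILVER');
END;
/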

Here is how this plays out with a real example.
My Database (PRODDB) is a member of a Protection Policy (GOLD) which has a Recovery Window Goal of 20 days, and a Max Retention Window of 40 days (the default value being 2x the Recovery Window Goal).
My Database (PRODDB) currently has 30 days of backups, which is right in the middle. 



What would normally happen for this database (given enough space) is that backups will continue to be kept until PRODDB has 40 days of backups. On day 41, a maintenance job (which runs daily) will execute and find that PRODDB has exceeded its Max Retention Window. This job will then remove all backups (in a batch process, for efficiency) that are older than the 20-day Recovery Window Goal.

BUT ........................

Today, I moved my database, PRODDB, to a new Protection Policy (SILVER) which has only a 10-day Recovery Window Goal and a Max Retention Window of 20 days.


As I pointed out, the characteristics of the NEW Protection Policy will be used. The next time the daily purge occurs, this database will be flagged, and all backups older than the new Recovery Window Goal will be purged.





Retiring databases:

One very common question is how to handle the retiring of a database. As you might know, when you remove a database from the ZDLRA, ALL of its backups are removed from the ZDLRA.
When a database is no longer sending backups to the ZDLRA, its backups will continue to be purged until only a single Level 0 backup remains. This ensures that at least one backup is kept, regardless of the Max Retention Window.
The best way to deal with retiring a database (and still keep the last Level 0 backup) is through the use of Protection Policies.
In my example for my database PRODDB, I am going to retire the database instead of moving it to the SILVER policy. My company's standard is to keep the final backup for my database available for 90 days, and on day 91 all backups can be removed.

These are the requirements from the above information:
  • At least 1 backup is kept for 90 days, even though my Max Retention Window was 40 days.
  • I want to know when my database has been retired for 90 days, so I can remove it from the ZDLRA.
In order to accomplish both of these items, I am going to create a Protection Policy named RETIRED_DB with the following attributes:
  • Recovery Window Goal of 2 days
  • Max Retention Window of 3 days
  • Unprotected Data Window of 90 days
  • A new alert in OEM to tell me when a database in this policy violates its Unprotected Data Window
If you look closely at the attributes, you will notice that I decreased the Recovery Window Goal and Max Retention Window so that older backups are removed after only 3 days. I also set the Unprotected Data Window to 90 days. A sketch of creating and assigning this policy is shown below.
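
As a minimal sketch (again using DBMS_RA as RASYS, and again assuming DELTA as the storage location name), creating the RETIRED_DB policy and moving PRODDB into it would look something like this:

BEGIN
  DBMS_RA.CREATE_PROTECTION_POLICY(
    policy_name           => 'RETIRED_DB',
    description           => 'Retired databases - keep last level 0, alert at 90 days',
    storage_location_name => 'DELTA',
    recovery_window_goal  => INTERVAL '2' DAY,
    max_retention_window  => INTERVAL '3' DAY,
    unprotected_window    => INTERVAL '90' DAY);

  DBMS_RA.UPDATE_DB(
    db_unique_name         => 'PRODDB',
    protection_policy_name => 'RETIRED_DB');
END;
/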
What this looks like over time is:




As you can see, by moving it to the new policy, within a few days all backups except the most recent full (Level 0) backup are removed. You can also see that on day 91 (when it is time to remove this database) I will be getting an alert.

Migrating Databases:

Migrating databases is very similar to retiring databases, except that I don't want to remove the old backups until they naturally expire. For my example of PRODDB with a Recovery Window Goal of 20 days, as soon as I have a new Level 0 on the new ZDLRA, I will move this database to a new policy (GOLD_MIGRATED) with the following attributes.
  • Recovery Window Goal of 20 days, since I still need to preserve the old backups
  • Max Retention Window of 21 days, which will remove the old backups as they age off
  • Unprotected Data Window of 21 days, which will alert me that it is time to remove this database
How this would look over time is:




Conclusion:

When retiring or migrating databases, Protection Policies can be leveraged to both
  • Ensure backups are removed as they age out until only a single L0 (Full) remains
  • Alert you when it is time to remove the database from the ZDLRA.

Thursday, January 21, 2021

ZFS as a Swift object store

This blog post goes through a feature of the ZFS Appliance that has been around for at least 3 years now: the OpenStack Swift object store.


When looking at the S3 API and the OCI API, I forgot all about where it started: with the Swift API.

I will go through the 3 APIs and how they came about (from what I can find by reading through articles).

It all started with the Swift API. Swift (V1) has simple authentication and a simple interface.

A URI to manage/access objects has the format of

HTTP://{object store server}/object/v1/{Account}/{bucket name}/{object name}.

In the case of ZFS, 

  • Account - this is the share name, "/export/swiftshare" for example
  • Bucket name - the name of the bucket that was created
  • Object name - the name of the object.
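
For example, an object named file.txt in a bucket named mybucket on that share would be addressed as (the bucket and object names here are purely illustrative):

HTTP://{object store server}/object/v1/export/swiftshare/mybucket/file.txt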
Authentication with Swift while using curl is typically a two-step process.
First, the authentication URI is called:
HTTP://{object store server}/auth/v1.0/{Account}

The username and password are sent with this request. The call returns an auth token, which is then used on the curl command line to manage buckets and objects.

Username/password authentication (v1.0) is one of the 3 authentication choices:
  1. A local username/password created on the ZFS.
  2. An LDAP user that the ZFS is tied to.
  3. A Keystone authentication server.
For all the testing I am doing on my ZFS simulator, I use a local user.

Before I go into how to configure and use the Swift interface on ZFS, I'll share what I was able to find out.

The Swift API has some limitations, and these limitations are what drove the move to S3.
As you probably noticed, the authentication and tracking of objects do not provide enough detail to support segregation of users and billing.
The S3 API takes the Swift API and adds the ability to create separate tenants, set up billing, and so on: all the things an enterprise needs to do.

With S3, you probably noticed that the authentication layer changed. It is based on an access key and secret rather than a username/password that returns an auth token.

Well, let's go through what it takes to configure the Swift interface.

First, most of the steps for configuring ZFS as an object store were documented in my previous blog posts.
If you look at the post below, you will see the steps for configuring a share, creating a local user on ZFS, and configuring the HTTP service.

ZFS Appliance - Your on-premise cloud store


For Swift, I will just go through the steps specific to Swift.

All I need to do is enable Swift. That's it!



Swift gets enabled just like the S3 API and the OCI API. Because I do not have a Keystone Authentication Server (which would be the OpenStack Identity Service), I didn't fill those values in.

NOTE: Authentication for Swift is a little different from S3 or OCI. Neither of the other APIs ties directly to the local user: S3 uses "secrets", and OCI uses a PEM file and a fingerprint.

Accessing my Swift bucket.

First some links to documentation that will give you examples of these ways of connecting.
Swift Guide for ZFS OS 8.8 release  -- Current release as of writing.
Using ZFS as an object store  -- This is old, but has a lot of great detail
API Guide OS 8.8 for Swift Docs -- Current documentation guide

Also, since the Swift implementation is OpenStack, there are a lot of examples and documentation (non-Oracle) available across the web.


I was able to access my bucket in one of 3 ways.

The first 2 ways are very similar:

Swift command line tool - a Python-based tool to connect to Swift and manage buckets

curl - a command line tool similar to the swift tool.



In order to create a bucket and upload an object...

First I execute the curl command to get the authentication token.


NOTE: my ZFS emulator is 10.0.0.110 and my share is /export/short

curl -i http://10.0.0.110/auth/v1.0/export/short -X GET -H "X-Auth-User: oracle" -H "X-Auth-Key: oracle123"

HTTP/1.1 200 OK
Date: Thu, 21 Jan 2021 18:54:38 GMT
Server: Apache
X-Content-Type-Options: nosniff
X-Storage-Url: http://10.0.0.110:80/object/v1/export/short
X-Auth-Token: ZFSSA_522d6355-9056-4a95-9060-c88648007993
X-Storage-Token: ZFSSA_522d6355-9056-4a95-9060-c88648007993
Content-Length: 0
X-Trans-Id: tx62e2f031f21640c29a2bf-006009cdee
Content-Type: text/html; charset=utf-8

Next I create a bucket.

From the output above I can get the "Auth Token" and the Storage URL used to manage the object store with curl. Note that the Auth Token will expire.

Create a container in curl

curl -i http://10.0.0.110:80/object/v1/export/short/bucketswift -X PUT -H "Content-Length: 0" -H "X-Auth-Token: ZFSSA_522d6355-9056-4a95-9060-c88648007993"

Create a container in swift

swift post container -A http://10.0.0.110:80/object/v1/export/short -U oracle -K oracle123
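
To upload an object, something like the following should work. This is only a sketch: it reuses the auth token and storage URL returned above and assumes a local file named file.txt in the bucket created with curl.

Upload an object in curl

curl -i http://10.0.0.110:80/object/v1/export/short/bucketswift/file.txt -X PUT -T file.txt -H "X-Auth-Token: ZFSSA_522d6355-9056-4a95-9060-c88648007993"

Upload an object in swift

swift upload bucketswift file.txt -A http://10.0.0.110:80/object/v1/export/short -U oracle -K oracle123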


That's all there is to it with the Swift Object Store on ZFS.




Tuesday, January 5, 2021

Managing an Object Store on ZFS

 This blog post will cover how to access the object store on ZFS to create buckets and upload files. For S3, I am using Cloudberry, which I downloaded here. For OCI, I am using the OCI cli tool.

S3 access to ZFS

This is the easiest, since the S3 object store on ZFS is an S3-compatible interface.

In Cloudberry, add a new account and use the following for input.





Note that you need to enter the following 4 fields.

  • Display Name -- What you want to call the new account entry
  • Service Point -- This is the ZFS interface for S3, in the form of
HTTP://{the IP of the ZFS}/s3/v1/{share name}
  • Access Key -- This is the name you gave the S3 access key when you added it to the ZFS
  • Secret Key -- This is the long string of characters that was returned by ZFS when you created the key.
That's it! You can now use Cloudberry to create buckets, upload files, sync object stores, etc.
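
Cloudberry is not the only option here. Any S3 client that lets you point at a custom endpoint should work against the ZFS S3 interface. Below is a sketch using the AWS CLI; the access key name and secret are placeholders for the values described above, and the endpoint follows the Service Point format using my ZFS simulator's address and share.

# Credentials are the S3 access key name and secret created on the ZFS (placeholders here)
export AWS_ACCESS_KEY_ID=mys3key
export AWS_SECRET_ACCESS_KEY=mys3secret

# The ZFS S3 interface expects path-style URLs rather than virtual-hosted-style URLs
aws configure set default.s3.addressing_style path

# Create a bucket, upload a file, and list the bucket against the ZFS S3 endpoint
aws --endpoint-url http://10.0.0.110/s3/v1/export/short s3 mb s3://mys3bucket
aws --endpoint-url http://10.0.0.110/s3/v1/export/short s3 cp ./file.txt s3://mys3bucket/
aws --endpoint-url http://10.0.0.110/s3/v1/export/short s3 ls s3://mys3bucket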


OCI access to ZFS

Install the CLI

Like the Oracle Cloud, there is currently (as of writing this blog post) no GUI tool like Cloudberry that will connect to an OCI object store. When connecting to the Oracle Cloud, you can access the OCI object store through the S3 interface, but this is not possible on ZFS. The OCI and S3 object stores are independent and cannot access buckets, etc., in the other object store.

In order to access ZFS through OCI, we start by downloading the OCI CLI tool. Documentation on how to do this can be found here.

In my install, I took the easy route (since I had an Ubuntu client with root access to play with) and installed it directly using "sudo pip install oci-cli".

Create a config file.


Once you have the OCI CLI installed, we need to set up a configuration file for it to use.
The default file is ~/.oci/config, but this location can be overridden on the command line if you access multiple OCI installations.

This is the contents of my file.


[DEFAULT]
user=ocid1.user.oc1..oracle
fingerprint=1e:6e:0e:79:38:f5:08:ee:7d:87:86:01:13:54:46:c6
key_file=/home/oracle/opc/oracle_private.pem
tenancy=ocid1.tenancy.oc1..nobody
region=us-phoenix-1
pass_phrase = oracle

Now to walk through each of the 7 lines of the config file.

1. This identifies the entry. Since the config file can contain entries for multiple OCI locations, this entry is identified as the default entry to use (if I don't specify one).
2. This is the user ID. Since I am using ZFS, the format is "ocid1.user.oc1..{zfs user}".
3. This is the fingerprint. I mentioned in the last blog post that this will be needed. The fingerprint identifies which API public key entry on the ZFS to match against the private API key being sent.
4. This is the private key file. It contains the private API key that matches the public key that was added to the ZFS.
5. This is unimportant to ZFS, but it is required to be set. Use the entry above.
6. Like #5, this is not used by ZFS but is needed by the OCI client.
7. This is optional. If the API private key was created with a pass_phrase, this is the pass_phrase that matches the private key.


Create a bucket on OCI.

Almost there now! We have everything in place for authentication, and we are ready to create an OCI bucket on ZFS for storing data.

The command is 

oci os bucket create --endpoint {OCI object store location} --namespace-name {location on the object store} --compartment-id {compartment in OCI} --name {new bucket name}


Now let's walk through what the parameters will be for ZFS:

  • --endpoint -> For my ZFS appliance, this is the URL plus /oci
  • --namespace-name -> This is the share on the ZFS; "export/short" in my config
  • --compartment-id -> This is also the share on the ZFS; "export/short" in my config
  • --name -> The name of the bucket I want to create

For my configuration, below are the command and the output. I now have a bucket created, and I am able to upload data!


oci os bucket create --endpoint http://10.0.0.110/oci --namespace-name export/short --compartment-id export/short --name mynewbucket 

{
  "data": {
    "approximate-count": null,
    "approximate-size": null,
    "compartment-id": "export/short",
    "created-by": "oracle",
    "defined-tags": null,
    "etag": "a51c8ecbf1429f95b446c4413df9f494",
    "freeform-tags": null,
    "id": null,
    "is-read-only": null,
    "kms-key-id": null,
    "metadata": null,
    "name": "mynewbucket",
    "namespace": "export/short",
    "object-events-enabled": null,
    "object-lifecycle-policy-etag": null,
    "public-access-type": "NoPublicAccess",
    "replication-enabled": null,
    "storage-tier": "Standard",
    "time-created": "2021-01-05T16:15:05+00:00",
    "versioning": null
  },
  "etag": "a51c8ecbf1429f95b446c4413df9f494"
}



ADVANCED TOPIC -- SSL with OCI CLI


Now let's say I want to encrypt my connections to OCI and use the HTTPS server available on ZFS.
First I need to create a file containing the certificate. I can get the certificate by executing:

openssl s_client -showcerts -connect 10.0.0.110:443

This returns a lot of information, but within the output I can see the certificate, which I can copy and paste into a file.

Certificate chain
 0 s:CN = 10.0.0.110, description = https://10.0.0.110:215/#cert
   i:CN = 10.0.0.110, description = https://10.0.0.110:215/#cert
-----BEGIN CERTIFICATE-----
MIIDXDCCAkSgAwIBAgIIW+387wAAAAIwDQYJKoZIhvcNAQELBQAwPDETMBEGA1UE
AwwKMTAuMC4wLjExMDElMCMGA1UEDQwcaHR0cHM6Ly8xMC4wLjAuMTEwOjIxNS8j
Y2VydDAeFw0wNjAyMTUxODAwMDBaFw0zODAxMTkwMzE0MDdaMDwxEzARBgNVBAMM
CjEwLjAuMC4xMTAxJTAjBgNVBA0MHGh0dHBzOi8vMTAuMC4wLjExMDoyMTUvI2Nl
cnQwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCyfgnTMtxgPEtrmCpB
J4gHngdqpRQWnUXA/OtWGymXME/+gAd5Q/8LZ74VpkHIwk3T7z/+tJVgL1HFmmbi
ZRSsXfSUgOBHm0doPn3VGbykz5MHWm3HHwjpPwvVhyeuVEkUfs/yiZ9B1WZrkr6U
ePNKlkbdL1VN5q2zuLdJ7+jn3HIiSS9j10i7HQVFEuzUAGdt3q0rp2MwaxSP6+cZ
hzMaI5IGBHuVkw2fGX1RdDB6uZpFEEhRSHURr5/3d+UgOprkMKp8Wph3kH0E2Nha
tGpSn2/6NM/Up/nDjfu2Dxm9A2aCwC56ShTckTTxE2HrgfSE9r/vEnkJEdSemH+X
9BuRAgMBAAGjYjBgMCYGCWCGSAGG+EIBDQQZFhdBdXRvbWF0aWNhbGx5IGdlbmVy
YXRlZDA2BgNVHREBAf8ELDAqggoxMC4wLjAuMTEwhwQKAABuhhZodHRwczovLzEw
LjAuMC4xMTA6MjE1MA0GCSqGSIb3DQEBCwUAA4IBAQAqxZk2knSBinWvTADkrvuS
C3vkeLyOLCRwABnGzZV80AAZ3tSVZt2JPXtg8uAVEj29J4VFw/I7HuneGL/faW9q
Qr9h+2WjvoT+m6lIfwELeaomZhkrLmJomGqSP1wfw5jaw3cpt0yOeS4RWUYb9eEe
bTH6laFBtSdbaI/uHslxpJwNRDwn8zBpAWmZk83UQ5CytH37yrFPRoHQWp+OqF+V
GYTPA4drxQ00nuelNfpHWMCjjMr0WxFz5rNJPMOAe2W1Xcr/MM1h04kGVwRtYsC0
4izqKtfiOHt0wMkSbYuSj1tIzdEzjVmxNSS7nv/znrMt+6SsdYQHMmaJ4+wHlJo4
-----END CERTIFICATE-----
---
Server certificate
subject=CN = 10.0.0.110, description = https://10.0.0.110:215/#cert

I want to copy the certificate, including the "BEGIN CERTIFICATE" and "END CERTIFICATE" lines, into a file.
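
Rather than copying and pasting by hand, a one-liner along these lines should also capture the server certificate into the file (the output path is just where I chose to keep it):

openssl s_client -connect 10.0.0.110:443 -showcerts </dev/null 2>/dev/null | openssl x509 -outform PEM > /home/oracle/opc/wallet_cloud/zfs.cer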

I now need to set my environment to point to the certificate file so it will be used. In my case it is "/home/oracle/opc/wallet_cloud/zfs.cer".

export REQUESTS_CA_BUNDLE=/home/oracle/opc/wallet_cloud/zfs.cer

I can now view the buckets in my object store and upload files over an encrypted connection.

oci os bucket list --endpoint https://10.0.0.110/oci --namespace-name export/short --compartment-id export/short 
{
  "data": [
    {
      "compartment-id": "export/short",
      "created-by": "oracle",
      "defined-tags": null,
      "etag": "a51c8ecbf1429f95b446c4413df9f494",
      "freeform-tags": null,
      "name": "mynewbucket",
      "namespace": "export/short",
      "time-created": "2021-01-05T16:15:05+00:00"
    }
  ]
}


The OCI documentation should give you everything you need to upload/download objects within a bucket.
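
As a sketch, uploading and then downloading an object in the bucket created above should look something like this (the file names are purely illustrative; the endpoint and namespace are the same ones used earlier):

oci os object put --endpoint https://10.0.0.110/oci --namespace-name export/short --bucket-name mynewbucket --file ./backup_piece.bkp --name backup_piece.bkp

oci os object list --endpoint https://10.0.0.110/oci --namespace-name export/short --bucket-name mynewbucket

oci os object get --endpoint https://10.0.0.110/oci --namespace-name export/short --bucket-name mynewbucket --name backup_piece.bkp --file ./backup_piece.copy.bkp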