Sunday, March 13, 2011

Social media and being an oracle professional

First I have to admit I'm old. I went to college at a time when card decks were still being used. Those were simpler times. You would submit your program on a deck of punch cards, go out for the night, and get the output the next day. There were numerous advantages to this.

The professors didn't expect you to code and debug through the night. It just wasn't possible.

You couldn't be sloppy. If you didn't syntax check and debug your code in your head, you would get lots of output back that required rework. Rework with a 24-hour turnaround time gets old fast.

There were very few resources and very few people to turn to.

Times have changed since then. There are a lot of places to find great information. Here are just a few.

LinkedIn groups. There is a great Exadata group.

Blogs. Most people who are famous in our industry blog.

Twitter. Follow the people you like on Twitter.

All this social media really helps you keep up to date on what is happening. You don't have to feel all alone in figuring out whether HugePages is worth it, or how to deal with the problems of a new feature.

I have turned to these media a couple of times lately and been amazed that some of the top people have answered my questions. Most recently Tanel Poder answered my question on HugePages, and tweeted to me that he is happy to answer any question I tweet him. Here is his answer.

Wednesday, March 9, 2011

Configuration of an Exadata Part II

After seeing how popular Configuration of An Exadata Part I was, I figured I would follow up with a part II.

First off, there are a couple of things I'd like to point out. Along with an Exadata waiting to be configured at this point, I have a Netezza TwinFin waiting to be configured. Many of the reasons it is taking so long aren't specific to its being an Exadata.

We did do a POC on an Exadata, and Oracle was able to bring it in quickly. It took a day or so to fill out the configuration sheet, and Oracle had the machine up within a week. One onsite person for a couple of days, and we were off and running. Although the Netezza video http://www.youtube.com/watch?v=jB7rlChCG-E is very humorous, it doesn't tell the whole story. I would guess that if you are coming from SQL Server, as some people are, then the video is probably a little more realistic.

Anyway, as many of you can guess, a POC is nothing like bringing a new production box into your production datacenter, to be supported by your production support staff. In the POC, there was no Data Guard to instantiate, no backups to take, everyone on the POC knew the root and oracle passwords, and the machine sat in the middle of a lab.

Bringing the box into a production environment is where all the questions begin. In a company that needs approval from support teams, the Exadata brings up a lot of questions. An appliance like the Netezza brings up fewer questions, but there are still some hurdles.

At this point we are still coming up with a backup strategy that gives everyone some degree of comfort without constantly being in hot backup mode. Backing up over 1GbE can be very painful. The configuration worksheets are finally completed, and ACS will be scheduled to come in once we get some new switches.

In case you missed Part I in the Netezza slam Exadata series it's here.

Tuesday, March 8, 2011

How to utilize AWR beyond what grid gives you

Like many of you, I use AWR reports all the time to see what is happening in the database, and especially to compare different points in time. Whenever the users tell me the system is "running slow", I always try to find a comparison time: "what other day recently had a similar workload to today?", and run an AWR comparison between the two periods. Through Grid (or dbconsole), this can be frustrating for a RAC database. Sometimes you want to compare between nodes for the same period, or different periods across nodes. And this is only the beginning: I also want to compare across databases.

As you can guess, the reports available through Grid don't give you these choices. But guess what? DBReplay does. In order to support DBReplay, Oracle has added some pretty nifty features to AWR.

First, a non-DBReplay feature: the "colored SQL".

As many of you know, Oracle only saves the top 50 SQL of interest. Of course you can change the number of top SQL saved with the MODIFY_SNAPSHOT_SETTINGS procedure.

dbms_workload_repository.modify_snapshot_settings(
   retention IN NUMBER DEFAULT NULL,
   interval  IN NUMBER DEFAULT NULL,
   topnsql   IN NUMBER DEFAULT NULL,
   dbid      IN NUMBER DEFAULT NULL);


The problem is, what happens if you are interested in a SQL statement that isn't in the top SQL? If you have a very active system, you might find that the SQL isn't in the top 50, or even the top 100. So what is there to do? "Color the SQL." Use this procedure to mark a SQL_ID so that information about it is gathered at every snapshot, even if it isn't a top SQL. Here is the description:

"Routine to add a colored SQL ID. If a SQL ID is colored, it will always be captured in every snapshot, independent of its level of activity (i.e. it does not have to be a TOP SQL). Capturing will occur if the SQL is found in the cursor cache at snapshot time."

dbms_workload_repository.add_colored_sql(
   sql_id IN VARCHAR2,
   dbid   IN NUMBER DEFAULT NULL);
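Putting the two together, a quick sketch of how I'd use these (the SQL_ID below is just a placeholder; substitute one from your own v$sql):

```sql
-- Capture the top 100 SQL statements per snapshot instead of the default
exec dbms_workload_repository.modify_snapshot_settings(topnsql => 100);

-- Color a specific SQL_ID (placeholder value) so it is captured in every
-- snapshot, whether or not it makes the top-N list
exec dbms_workload_repository.add_colored_sql(sql_id => 'abcd1234efgh5');

-- When you no longer care about it, remove the coloring
exec dbms_workload_repository.remove_colored_sql(sql_id => 'abcd1234efgh5');
```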

Now onto my original problem.

How do I do comparisons across instances, or even databases? Well, when I took a good look at the AWR_DIFF_REPORT_HTML function, I noticed that you can call it with two full sets of parameters (like instance number and DBID).

dbms_workload_repository.awr_diff_report_html(
   dbid1     IN NUMBER,
   inst_num1 IN NUMBER,
   bid1      IN NUMBER,
   eid1      IN NUMBER,
   dbid2     IN NUMBER,
   inst_num2 IN NUMBER,
   bid2      IN NUMBER,
   eid2      IN NUMBER)
RETURN awrrpt_html_type_table PIPELINED;

If you run AWR_DIFF_REPORT_HTML with two different instance numbers, you compare those instances against each other. Now how do you compare databases? Look at MetaLink note 872733.1 (yes, I still call it MetaLink). This note tells you how to export the AWR information to a file and import it into a new database.
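Since it's a pipelined function, you can spool the output straight from SQL*Plus. A sketch comparing instance 1 and instance 2 of the same RAC database (the DBID and snapshot IDs are placeholders; look yours up in dba_hist_snapshot):

```sql
set pagesize 0 linesize 32767 long 2000000 trimspool on heading off
spool awr_diff.html

select output
  from table(dbms_workload_repository.awr_diff_report_html(
         dbid1 => 1234567890, inst_num1 => 1, bid1 => 100, eid1 => 110,
         dbid2 => 1234567890, inst_num2 => 2, bid2 => 100, eid2 => 110));

spool off
```

Swap in a second DBID for dbid2 once you've imported another database's AWR data, and the same call compares across databases.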

Here is my plan to handle all this.

1) Create a centralized repository of all AWR information for all databases in the infrastructure.

2) Follow the MetaLink note to export/import the data into the centralized repository.

3) In the centralized repository, create a table mapping each DBID to its source database.

4) Configure APEX (Application Express) to pick DBIDs, instances, and snapshot ranges to run a comparison report. Display the report in APEX as a web page.
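For step 3, the mapping table can be very simple. A rough sketch (the table and column names are my own invention, and the sample row is made up):

```sql
-- Map each imported DBID back to the database it came from
create table awr_dbid_map (
   dbid    number       primary key,
   db_name varchar2(30) not null,
   host    varchar2(64)
);

insert into awr_dbid_map (dbid, db_name, host)
values (1234567890, 'PRODDB', 'prodhost01');
commit;
```

The APEX pages in step 4 can then join this table to dba_hist_snapshot to present friendly database names instead of raw DBIDs.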

This is my summer project, and I will let everyone know how it goes.