Wednesday, October 6, 2010

Exadata and indexes

This has been a very interesting topic around my shop. Some people say you can get rid of all your indexes; some people say no.

Well, first let's look at why you have indexes, and rule those out as removal candidates.

1) Indexes that support primary keys. Gotta keep those, right?

2) Indexes that support RI (referential integrity), to avoid locking on foreign keys. OLTP? Gotta keep those.

For a lot of OLTP applications, just the two criteria above are enough to keep most of your indexes in play. But what about everything else?
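To make that concrete, here's a minimal sketch of the two kinds of indexes I'm talking about (the table and index names are made up):

-- 1) The PK constraint builds (or uses) a unique index automatically.
create table orders (
  order_id   number primary key,
  order_date date
);

create table order_items (
  item_id  number primary key,
  order_id number not null references orders  -- RI back to the parent
);

-- 2) The FK index. Without it, DML on the parent can take a full
-- table lock on the child table, which kills OLTP concurrency.
create index order_items_fk_ix on order_items (order_id);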

Here is what I've been seeing. The Exadata can scan like crazy, but there is a limit (20.8 GB/second on a full rack; do the math for your configuration). If you do a FTS on a table containing 50 GB, you are tying up ALL of the I/O for almost 3 seconds. If you have any concurrency, you can imagine what happens.
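Back-of-the-envelope, using the advertised rates (plug in your own configuration):

select 50 / 20.8 as full_rack_secs,  -- ~2.4 seconds for one 50 GB FTS
       50 / 10.4 as half_rack_secs   -- ~4.8 seconds on a half rack
from   dual;

And that is ONE query, with nothing else running.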

So in my mind the answer is to keep indexes where they can significantly limit the data access, especially under concurrency.

Now that I've had a few beers and a few cups of coffee, I've had time to arrange my brain cells in the right trays. This is what I've found on concurrency with a table doing a FTS.

First, a single query against 33 GB. A 1/2 rack does 10.4 GB/second as advertised, so the single query doing a FTS runs in 3.3 seconds (or so).

Now scale up to 10 processes. The 10 processes all scour 33 GB apiece, and the time goes up. The secret is to cut down the I/O requests at the DB layer, to limit the data scoured.
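Here's the kind of before/after I mean, as a sketch (big_table and cust_id are made-up names, not my actual test table):

-- Before: all 10 concurrent executions scour the full 33 GB
-- off the storage cells.
select /*+ full(t) parallel(t, 32) */ *
from   big_table t
where  cust_id = :b1;

-- After: with an index on the filter column, each execution reads
-- only the blocks it needs instead of competing for scan bandwidth.
create index big_table_cust_ix on big_table (cust_id);

select /*+ index(t big_table_cust_ix) */ *
from   big_table t
where  cust_id = :b1;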

Monday, October 4, 2010

Concurrency on the Exadata

Now that I have some benchmarks, I'm starting to delve into testing to find out how it scales up. I started with a large table, 200+ million rows.

My base query did a FTS and returned one row of data.


1 execution runs in 3 seconds (DOP 32).
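The query looked something like this (names invented, but this is the shape of the test):

-- A FTS at DOP 32 that returns a single row.
select /*+ full(t) parallel(t, 32) */ count(*)
from   big_table t;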

Once I scale up to 100 simultaneous executions, it runs longer, but I can't figure out the average execution time: parallel query skews the numbers.
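The obvious way to get an average, and why it's skewed here: elapsed_time in v$sql rolls up the time from all the PX slaves, so at DOP 32 it badly over-reports the wall-clock time per execution. A sketch (:b1 stands in for the test query's sql_id):

select sql_id,
       executions,
       elapsed_time / executions / 1e6 as avg_elapsed_secs  -- inflated by PX slave time
from   v$sql
where  sql_id = :b1;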

Looking at the resource usage for both the database nodes and the storage nodes, I found the database nodes are almost idle, and the storage nodes (7 of them) are producing about 10 GB of data/second. The CPU usage is about 7% user and 30% wait. Looking at the AWR information, all the time is still going to I/O waits: 399 seconds out of the 444 are I/O wait time (about 90%). It appears that the Exadata does fantastic for a single query. Once you execute that single query 100 times simultaneously, the times start to slow down.
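If you want to see the wait breakdown yourself, this is roughly where those numbers come from (v$system_event is cumulative since startup, so snapshot it before and after the test and diff):

select event,
       total_waits,
       time_waited_micro / 1e6 as secs_waited
from   v$system_event
where  wait_class = 'User I/O'
order  by time_waited_micro desc;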

I'm going to do more experiments to see how I can get it to scale up more nicely :)