Exadata smart flash logging
Exadata storage software 11.2.2.4 introduced the Smart Flash Logging feature. Its intent is to reduce overall redo log sync times - especially outliers - by allowing the Exadata flash storage to serve as a secondary destination for redo log writes. During a redo log sync, Oracle writes to disk and flash simultaneously and allows the redo log sync operation to complete as soon as the first device finishes.
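Conceptually, this "first device wins" behavior amounts to taking the minimum of the two device latencies. Here's a minimal sketch of the idea - the latency distributions are invented for illustration, not Oracle's actual implementation or measured values:

```python
import random

def disk_write_us():
    # Invented spinning-disk redo write latency: usually fast via the
    # controller write cache, occasionally a very slow outlier.
    if random.random() < 0.02:
        return random.uniform(1000, 5000)   # slow outlier
    return random.uniform(350, 550)

def flash_write_us():
    # Invented flash write latency: fairly consistent, rarely extreme.
    return random.uniform(450, 600)

def redo_log_sync_us():
    # With smart flash logging, the write is issued to both devices and
    # the redo log sync completes when the first device finishes.
    return min(disk_write_us(), flash_write_us())
```

The point of the race is not to make the average case faster but to cap the outliers: even when the disk write is pathologically slow, the sync should finish in roughly flash-write time.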
Jason Arneil reported some initial observations here, and Luis Moreno Campos summarized it here.
I’ve reported in the past on using SSD for redo - including on Exadata - and generally I’ve found that SSD is a poor fit for redo-log-style sequential write IO. But this architecture should at least do no harm, and on the assumption that the SSD would at least occasionally complete faster than a spinning disk, I tried it out.
My approach used the same workload as in my earlier tests. I ran 20 concurrent processes, each of which performed 200,000 updates and commits - a total of 4,000,000 redo log sync operations. I captured every redo log sync wait from 10046 traces and loaded them into R for analysis.
I turned flash logging on or off with an ALTER IORMPLAN command like the following, where $1 (supplied by my test script) is either on or off (my DB is called SPOT):

ALTER IORMPLAN dbplan=((name='SPOT', flashlog=$1), (name=other, flashlog=on))
And I ran “list metriccurrent where objectType='FLASHLOG'” before and after each run so I could be sure that flash logging was on or off.
When flash logging was on, I saw data like this:
Before:

FL_DISK_FIRST          FLASHLOG  32,669,310 IO requests
FL_FLASH_FIRST         FLASHLOG   7,318,741 IO requests
FL_PREVENTED_OUTLIERS  FLASHLOG     774,146 IO requests

After:

FL_DISK_FIRST          FLASHLOG  33,201,462 IO requests
FL_FLASH_FIRST         FLASHLOG   7,337,931 IO requests
FL_PREVENTED_OUTLIERS  FLASHLOG     774,146 IO requests
So for this particular cell the flash disk “won” only about 3.5% of the time ((7,337,931 - 7,318,741) * 100 / (7,337,931 - 7,318,741 + 33,201,462 - 32,669,310)) and prevented no additional “outliers”. Outliers are defined as redo log syncs that would have taken longer than 500 ms to complete.
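Working through the arithmetic on the metric deltas makes the win rate explicit:

```python
# Deltas between the "after" and "before" FLASHLOG metrics for this cell
flash_first_delta = 7_337_931 - 7_318_741    # syncs where flash finished first
disk_first_delta = 33_201_462 - 32_669_310   # syncs where disk finished first
total = flash_first_delta + disk_first_delta

win_pct = 100 * flash_first_delta / total
print(f"flash wins: {flash_first_delta:,} of {total:,} ({win_pct:.1f}%)")
```

Note that FL_PREVENTED_OUTLIERS did not move at all between the two snapshots, which is why no outliers can be credited to the flash log during the run.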
Looking at my 4 million redo log sync times, I saw that the average and median times were statistically significantly higher when smart flash logging was involved:
> summary(flashon.data$synctime_us) #Smart flash logging ON
Min. 1st Qu. Median Mean 3rd Qu. Max.
1.0 452.0 500.0 542.4 567.0 3999.0
> summary(flashoff.data$synctime_us) #Smart flash logging OFF
Min. 1st Qu. Median Mean 3rd Qu. Max.
29.0 435.0 481.0 508.7 535.0 3998.0
> t.test(flashon.data$synctime_us, flashoff.data$synctime_us, paired=FALSE)

	Welch Two Sample t-test
data: flashon.data$synctime_us and flashoff.data$synctime_us
t = 263.2139, df = 7977922, p-value < 2.2e-16
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
33.43124 33.93285
sample estimates:
mean of x mean of y
542.3583 508.6763
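As a quick sanity check on the t-test output above, the difference of the two sample means does fall inside the reported 95% confidence interval:

```python
mean_on, mean_off = 542.3583, 508.6763    # sample means from the t.test output
diff = mean_on - mean_off
ci_low, ci_high = 33.43124, 33.93285      # reported 95% confidence interval

# The flash-logging-on runs averaged roughly 33.7 us slower per sync
print(f"mean difference: {diff:.4f} us")
assert ci_low < diff < ci_high
```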
Plotting the distribution of redo log sync times we can pretty easily see that there’s actually a small “hump” in times when flash logging is on (note logarithmic scale):
This is of course the exact opposite of what we expect, and I checked my data very carefully to make sure that I had not somehow switched samples. And I repeated the test many times and always saw the same pattern.
It may be that there is a slight overhead to running the race between disk and flash, and that this overhead makes redo log sync times slightly higher. That overhead might become negligible on a busy system. But for now I personally can’t confirm that smart flash logging provides the intended optimization - in fact, I observed a small but statistically significant degradation in redo log sync times when it is enabled.
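To see how a fixed per-race cost could produce exactly this pattern, consider a simulation in which flash rarely wins the race but the duplexed write always pays a small setup overhead. All the latency distributions and the 30 us overhead here are invented for illustration, not measured values:

```python
import random

random.seed(42)

def disk_write_us():
    # Invented disk latency: ~480 us typically, a rare slow outlier.
    if random.random() < 0.01:
        return random.uniform(1000, 4000)
    return random.gauss(480, 40)

def flash_write_us():
    # Invented flash latency: slightly slower than a cached disk write.
    return random.gauss(520, 30)

RACE_OVERHEAD_US = 30  # invented cost of issuing the duplexed write

def sync_flashlog_off():
    return disk_write_us()

def sync_flashlog_on():
    # Both writes are issued; the sync completes when the first finishes,
    # but the race itself costs a little extra on every sync.
    return RACE_OVERHEAD_US + min(disk_write_us(), flash_write_us())

n = 100_000
off = sorted(sync_flashlog_off() for _ in range(n))
on = sorted(sync_flashlog_on() for _ in range(n))
print(f"median off: {off[n // 2]:.0f} us, median on: {on[n // 2]:.0f} us")
```

Under these made-up numbers the median sync time is higher with the race enabled, even though the race still caps the worst outliers - qualitatively the same shape as the measurements above.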