(This appeared in the March 2001 issue of Oracle magazine. I was particularly happy with the lede.)
Database benchmarks abound. What do they mean and how well do they address real-world performance questions?
The human desire to measure things is as old as civilization itself. In 3000 B.C. Egypt, measurement of the cubit was so accurate that the pyramids were built within 0.005 percent of geometric perfection. Five thousand years ago, the Mayans had developed a calendar that precisely accounted for leap years. Chinese astronomical “Oracle Bones” from 1302 B.C. were used by NASA to determine that the length of a day was 47 thousandths of a second shorter then than it is now. (Oracle Bones is NASA’s name, not ours.)
But measuring anything can be fraught with subjectivity and politics. Take the precise Egyptian cubit: It was based on the distance from Pharaoh Khufu’s elbow to his fingertip. Our obsession with measuring continues to this day, though now we measure the distances between stars and the weight of subatomic particles. And, of course, in the database industry we measure performance. We want to know how fast a database is and how much it costs to run, so we can determine which one is the best value.
Database benchmarking attempts to measure these and other factors. But as with any sort of measurement, the challenge is to devise a test that’s accurate and fair—and that gives truly useful numbers. Sometimes the process seems as complicated and difficult as building the pyramids.