Wednesday, May 27, 2009

Bottom-up PostgreSQL benchmarking and PGCon2009

Last week I got a lot of positive feedback from my PGCon presentation in Ottawa about how to benchmark systems at a low level when the intended application is to run a database. There were three main topics I was trying to cover in it:
  1. Why you should always run your own hardware benchmarks on every piece of hardware you can
  2. Examples of the simplest benchmarks I've found to be accurate
  3. How to organize your tests and your vendor interactions to support performance measurement as a purchasing requirement
There was one slide missing from the set I presented. I've uploaded a version of the slides that fixes that (along with a typo in the sysbench seeks slide) to my home page. For those who missed it, a couple of people have put their notes from the talk as part of PGCon coverage on the PG wiki, and video of many talks from the conference is already available from FOSSLC.

Also available on my web page now is a presentation I did last month at PG East 2009. Titled "Using and Abusing pgbench", that talk also has three things it tries to convey:
  1. How do pgbench and its internal scripting language work? (Most people aren't even aware there is such a scripting language available)
  2. What should you do in order to get good results from the built-in pgbench tests?
  3. How can you use pgbench as a test harness for writing your own tests?
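To give a taste of that scripting language, here's a minimal sketch of a custom pgbench script along the lines of the built-in select-only test. (Details match the 8.3-era pgbench this post discusses: the \setrandom meta-command and the "accounts" table name both changed in later releases, so adjust for your version.)

```
\set naccounts 100000 * :scale
\setrandom aid 1 :naccounts
SELECT abalance FROM accounts WHERE aid = :aid;
```

You feed a file like this to pgbench with its -f option, and it reports transactions per second for your custom script the same way it does for the built-in tests.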
The hardware benchmarking presentation ends where the pgbench one starts, with a bit of overlap. That's intentional--I always consider pgbench tests to be something you should do only after confirming all of your hardware does the right thing, top to bottom. A perfect example just came out recently: even someone who's done as much benchmarking work as Joshua Drake can end up measuring the wrong thing, because he skipped the step I suggest for confirming expected commit rate before moving on to higher-level pgbench tests. Since not many people saw the pgbench talk at PG East I'm hoping to repeat that one in the near future to a larger audience.
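To make that commit-rate check concrete: on a drive with no write cache (or with caching disabled), each commit has to wait for the platter to rotate under the head, so the spindle speed puts a hard ceiling on single-client commits per second. A back-of-the-envelope calculation--the 7200 RPM figure here is just a common consumer-drive example, not a number from the talk:

```python
# Spindle speed bounds how fast a single client can fsync commits
# when no write cache is absorbing the writes.
rpm = 7200                          # assumed: a typical consumer SATA drive
max_commits_per_sec = rpm / 60.0    # at best, one commit per rotation
print(max_commits_per_sec)          # 120.0
```

If pgbench reports a single client committing thousands of times per second on hardware like this, something--usually a volatile write cache--is lying about commits being durable, and that's exactly what the low-level tests are meant to catch before you trust any higher-level numbers.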

As part of putting that presentation together, I did more work on a toolchain I've been using for a couple of years now (since I was working on 8.3 development), which I've named pgbench-tools. The current 0.4 release posted to my home page is the first to benefit from having some users, which has gotten me an enormous amount of feedback toward making the program bug-free and more usable. Thanks in particular to Robert Treat and Jignesh Shah for their contributions. I think it's finally mature enough that it might be useful for others who want to automate running large numbers of pgbench tests too.

Documentation is still minimal, but I have written some, and what's there is accurate--both of which put me ahead of a lot of open-source projects, I guess. There is an intro README in the tar file, and the presentation tries to give some examples of usage too. When I get more time I'll be putting the source code into the PostgreSQL git repository (the repo is already there, I just haven't pushed to it yet), where it will be easier for other people to work with and on. There's a growing need in the PG community for regression testing of performance results, and at the yearly PGCon Developer Meeting I volunteered to see if an improved version of this pgbench-tools package might be useful in that role. I hope the ideas in my presentations and the suggested practices demonstrated by these tools turn out to be helpful to others.

The approach taken in pgbench-tools--parse the results from pgbench, save them to a database, and then graph the lot of them using SQL to summarize as needed--is only partially mine. I stole the first rev of the graphing code and several other ideas from the work Mark Wong and others did on the dbt2 program (here's an intro to using dbt2). Now that I've got something useful for my purposes and am free from conferences for a while, I'm hoping to spend some time investigating how to integrate the unique things I'm doing with some of the tools he's already written. The biggest thing the dbt tests have that I haven't provided for pgbench yet is a framework for measuring I/O and similar statistics during the test run. Given that the PostgreSQL development process already has a heavy requirement on Perl, I really should fall into line and adopt that myself too--despite my strong personal preference for Python in this role.
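That parse-then-store flow is simple to sketch. To be clear, this is not the pgbench-tools code itself--just a hypothetical illustration of the idea: pull the TPS number out of the summary line pgbench prints at the end of a run, stash it in SQLite alongside the test parameters, and let SQL summarize many runs later for graphing.

```python
import re
import sqlite3

# A sample of the summary line pgbench prints at the end of a run
# (wording as of the pgbench versions this post discusses).
report = "tps = 2345.678901 (including connections establishing)"

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE results (clients INTEGER, scale INTEGER, tps REAL)")

# Extract the TPS figure; clients=8 and scale=100 stand in for the
# parameters of this particular hypothetical run.
match = re.search(r"tps = ([\d.]+) \(including", report)
if match:
    db.execute("INSERT INTO results VALUES (?, ?, ?)",
               (8, 100, float(match.group(1))))
    db.commit()

# Once many runs are stored, SQL does the summarizing for the graphs:
for clients, avg_tps in db.execute(
        "SELECT clients, avg(tps) FROM results GROUP BY clients"):
    print(clients, avg_tps)
```

The appeal of the database over a pile of log files is exactly that GROUP BY step: averaging, filtering outliers, and comparing configurations all become one-line queries.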

1 comment:

Baron said...

Greg, hello again and thanks for the slides. I think your approach and method are a great direction to push things. It matches what we do a lot at my company ;-) I also wanted to make you aware of something we've been using for benchmarking MySQL, which ties a lot of things together (I/O stats, for example). Maybe it's useful or inspirational.