Posts in Performance Testing
benerator
benerator supports load and performance testing by providing a framework for generating high-volume test data. According to the tool's website, out of the box benerator supports database systems, XML, XML Schema, CSV, flat files and Excel. Domain packages provide reusable generators for creating domain-specific data such as names and addresses, internationalizable by language and region (via nestable datasets).



A commercial version is available in addition to the free GPL version.
Some HTTP GET headers
Today's tip comes from High Performance Web Sites by Steve Souders. Before the author jumps into web tuning tips, he provides a very brief introduction to some of the features of the HTTP GET headers, including:

  • compression, which uses the Accept-Encoding and Content-Encoding headers to reduce the size of the response using common compression techniques

  • conditional requests, which use the If-Modified-Since or If-None-Match headers to send the last modified date or ETag back to the server - if the server returns a 304 status, it skips sending the body of the response (see the sketch after this list)

  • expiring requests, which use the Expires header to save the expiration date with the component in its cache

  • persistent connections which use the Keep-Alive header to keep the same TCP connection open to the same server (reducing the overhead of opening and closing multiple socket connections)


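If you want to see the conditional-request headers in action, here's a minimal sketch using Python's requests library against a hypothetical URL. The second request sends the validators from the first back to the server; an unchanged resource comes back as a 304 with no body.

    import requests

    url = "http://example.com/logo.png"  # hypothetical resource

    # First request: the server returns the full body plus its validators.
    first = requests.get(url)
    etag = first.headers.get("ETag")
    last_modified = first.headers.get("Last-Modified")

    # Second request: send the validators back. If the resource hasn't
    # changed, the server answers 304 Not Modified and skips the body.
    conditional = {}
    if etag:
        conditional["If-None-Match"] = etag
    if last_modified:
        conditional["If-Modified-Since"] = last_modified

    second = requests.get(url, headers=conditional)
    print(second.status_code)   # 304 if the cached copy is still valid
    print(len(second.content))  # 0 on a 304, since the body was skipped
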
If you're a regular performance tester, there's no news there... but I thought it was a nice summary for those who might be just breaking in.

I'm a couple chapters into the book and like it quite a bit. The format reminds me of the "How to Break Software" series of books, only instead of attacks the book provides rules. I'm not willing to summarize any of the rules without contacting the author, but I recommend the book. It's well written and covers some great fundamentals of front-end performance optimization.
Just hit refresh
I was taking a look at Google product search and wanted to think of the simplest test I could that might reveal a lot of information. After the page loads, if you simply hit the refresh button in your browser repeatedly, you'll be able to notice the following behaviors:

  • query response times change each time

  • some sponsors change each time, while others don't

  • column width (between product description and price) changes based on sponsor size


This gives me several ideas for testing and learning about the product. First, I feel like I could quickly program a script to track sponsor results and performance over time. If I varied the search criteria for similar products, this could quickly be used to start to verify the accuracy of ads and the rules for displaying them. This could also become a good no-load baseline for the performance of whatever environment you're testing in.
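
As a rough sketch (the URL and the markup the regex looks for are assumptions, not the product's actual page structure), something like this would be a starting point:

    import re
    import time
    import requests

    URL = "http://example.com/products?q=digital+camera"      # hypothetical
    SPONSOR = re.compile(r'class="sponsored"[^>]*>([^<]+)<')  # hypothetical markup

    # Fetch the page repeatedly, recording response time and which
    # sponsored results appear on each refresh.
    for i in range(50):
        start = time.time()
        body = requests.get(URL).text
        elapsed = time.time() - start
        print(f"{i:02d}  {elapsed:.3f}s  {SPONSOR.findall(body)}")
        time.sleep(1)  # a no-load baseline, not a load test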

Understanding the relationship between sponsor text (number of characters) and column widths would be worth looking into. Might be an issue, might not (likely not an issue). But it's also something that can be verified quickly and repeatedly with the script that's pulled together.
Percent connection-pool utilization
When performance testing, a lot of time gets spent calibrating your tests. To do this effectively, you often have to calibrate using multiple methods. One method I use is to look at percent connection-pool utilization.

This is a specific example of a general metric. For any finite resource that might be important to your system, look at how that resource is utilized over your run and compare that to your target numbers. For example, if the production environment never uses more than 60% of its available connections, but your tests get utilization of up to 90%, you might need to adjust your tests. Other things you might look at include CPU utilization, memory utilization and average queue depth.
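
Sampling that over a run is easy to script. Here's a hedged sketch; get_pool_stats() is a hypothetical stand-in for however your app server exposes its pool counters (JMX, SNMP, a monitoring API), and the dummy values exist only so the sketch runs:

    import random
    import time

    def get_pool_stats():
        # Hypothetical stand-in: replace with a real query against your
        # app server's counters. Returns (active connections, pool max).
        return random.randint(20, 55), 100  # dummy values

    # Sample percent utilization at a fixed interval across the run.
    samples = []
    for _ in range(12):  # e.g. one minute at 5-second intervals
        active, maximum = get_pool_stats()
        samples.append(100.0 * active / maximum)
        time.sleep(5)

    print(f"peak utilization: {max(samples):.1f}%")
    print(f"mean utilization: {sum(samples) / len(samples):.1f}%")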

You'll need to have some idea of what connection usage might look like in production (both actual and forecasted).
Concurrent live and active sessions
When performance testing, a lot of time gets spent calibrating your tests. To do this effectively, you often have to calibrate using multiple methods. One method I use is to look at concurrent live and active sessions.

I happen to do a lot of web testing, so sessions can be a big deal. Looking at the number of concurrent live and active sessions generated by my load test and comparing that to the production environment can give me an idea of whether or not I've got the right number of users in the test at a given period of time or if I've got the right amount of user session abandonment.

For your application it might be important to recognize that different users might have different session sizes, abandonment rates, and time-out rates. You'll need to have some idea of what sessions might look like in production (both actual and forecasted). If your tool allows it, try to build in a way to programmatically track and throttle these numbers as needed. It might save you a lot of time.
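
To make "track and throttle" concrete, here's a hedged sketch of the feedback loop; count_live_sessions() is a hypothetical hook into whatever session counter your app server exposes, and the target numbers are made up:

    import random
    import time

    TARGET_LIVE = 200  # live sessions production typically holds (assumed)
    TOLERANCE = 0.10   # allow +/- 10% drift before adjusting

    def count_live_sessions():
        # Hypothetical stand-in: query the app server's session counter.
        return random.randint(150, 250)  # dummy value so the sketch runs

    vusers = 50  # virtual users currently simulated
    for _ in range(20):
        live = count_live_sessions()
        if live > TARGET_LIVE * (1 + TOLERANCE):
            vusers = max(1, vusers - 5)  # too many live sessions: back off
        elif live < TARGET_LIVE * (1 - TOLERANCE):
            vusers += 5                  # under the target: ramp up
        print(f"live={live}  target={TARGET_LIVE}  vusers={vusers}")
        time.sleep(30)  # re-check each sampling interval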