Posts in Performance Testing
Performance testing in context
A while ago I answered the following question on SearchSoftwareQuality.com’s Ask The Software Quality Expert: Questions & Answers.


Our company markets a solution that monitors production client devices from the actual end user's perspective, providing measures of QoS, performance, availability, etc. My question is: Would it not make sense for QA to use this current and historical information to determine the level of QoS and whether efficiency increased as a result of a new release in the production environment? I see many testing products and procedures that emulate the production environment or create synthetic transactions, but they don't have actual production data from end users' real experiences to accurately gauge whether QA testing of a new or modified application will ultimately meet user acceptance.


Here is a clip from my answer:


However, saying that is not without its problems. Let's look at just a handful of situations where Web analytics may not add a lot of value to our performance testing:

  1. When the performance testing is focused on transactions and not on the end user response time.
    Imagine my surprise when a fellow performance tester came up to me one day and burst my end-user-focused bubble. He didn't really care about end user response times. When he did his testing, he only cared about transactions and how they affected the system environment. After a long conversation about how our approaches could be so different, I came to understand that not everyone shares my context, and his concerns happened to be different from mine.

    His product had service level agreements and alerts in place that focused on resource utilization and throughput, specified in transactions per minute and percent usage. His company didn't get in trouble if the 95th-percentile user experienced a five-second response time; they got in trouble if the 101st transaction fell off the queue or failed to process in under 0.5 seconds. The end user was not his end goal. (The sketch after this list illustrates the difference between those two kinds of checks.)

    When your performance testing risk centers on contractual requirements like those, the value you can derive from Web analytics may be limited. I'm certain that someone along the way could have benefited from that information when the contract was specified, but from a testing perspective, that type of feedback might be too late. That said, I'm by no means suggesting that the kind of information you're referring to won't be valuable in that context. It just may not be as valuable given the different goals of the testing.

  2. When the performance testing is focused on new features and systems and not on enhancements to existing systems.
    An obvious context where detailed Web analytics might not be too helpful is when the performance testing is focused on new development. If users can't do something today, then you obviously can't get real information on how they use it. It seems obvious to say, but it's an area where I've seen some teams struggle. I've been on projects where management wanted to take "real" numbers from a legacy system and apply them to a new system that had a different workflow, different screens, and sometimes even different data. Sometimes we become anchored to production numbers, even when we don't really have a compelling argument for using them.

  3. When the test being modeled is not similar to current production usage.
    A third scenario where this information might not be overly useful is one where your test models something that doesn't happen in production today. Perhaps the simplest example is a Super Bowl commercial for a company with an online product. If you've never run a Super Bowl ad before, and never had 50 million people navigate to your site in sixty seconds to see that monkey commercial again, then how do you know what they will really do when they get there? Well, you figure it out the same way you would have without the Web analytics. You might hold focus groups, build in fixed navigation, scale down the features, or perhaps just guess. But odds are, current usage models won't offer much help.
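
To make the contrast in the first item concrete, here is a minimal Python sketch of those two kinds of checks. It is illustrative only: the timing samples are invented, and the 0.5-second per-transaction threshold and the five-second, 95th-percentile target simply echo the numbers used in the example above.

# Two ways of judging the same measurements:
#   1) a contractual, per-transaction check (no single transaction may exceed a threshold)
#   2) an end-user check (the 95th-percentile response time must stay under a target)

def percentile(samples, pct):
    """Simple nearest-rank percentile of a list of numbers."""
    ordered = sorted(samples)
    rank = max(1, round(pct / 100.0 * len(ordered)))
    return ordered[rank - 1]

# Hypothetical per-transaction processing times, in seconds.
transaction_times = [0.21, 0.34, 0.48, 0.29, 0.61, 0.33, 0.27, 0.44, 0.52, 0.30]

# Contractual view: did any single transaction miss the 0.5-second threshold?
breaches = [t for t in transaction_times if t > 0.5]
print("Transactions over 0.5s:", len(breaches))

# End-user view: is the 95th-percentile response time under five seconds?
p95 = percentile(transaction_times, 95)
print("95th percentile response time: %.2fs" % p95,
      "(within target)" if p95 <= 5.0 else "(over target)")

Notice that the same run can fail the contractual check while comfortably passing the end-user one, which is exactly why the two testers in the story cared about such different things.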


All that said, I think real data like that is invaluable in many situations. It allows a performance tester to check assumptions, model more accurately, and gain insight into potential future use by trending the data over time. If you have access to information like that and you're a performance tester, it's most likely in your best interest to at least review the data to see whether there's a way you can use it to make your testing more accurate or more valuable.
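
As a rough illustration of what using that data for modeling can look like, here is a small Python sketch that turns page-view counts from an analytics export into the percentage mix you might assign to scripts in a load test scenario. The page names and counts below are hypothetical, not taken from any real system.

# Hypothetical monthly page views pulled from a Web analytics export.
page_views = {
    "search": 420000,
    "view_product": 310000,
    "add_to_cart": 95000,
    "checkout": 40000,
}

total = sum(page_views.values())

# Express each business process as a share of the overall workload.
workload_mix = {page: views / total for page, views in page_views.items()}

for page, share in sorted(workload_mix.items(), key=lambda item: -item[1]):
    print("%-15s %5.1f%%" % (page, share * 100))

# Those percentages become the weighting for the corresponding virtual-user
# scripts; comparing the mix release over release is one way to trend the
# data and spot shifts in how the system is actually used.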


You can find the full posting here.
WOPR 12 - Resource Monitoring During Performance Testing
Through much arm-twisting and whining on my part, I've convinced the Workshop on Performance and Reliability (WOPR) organizers to host the next workshop, WOPR 12, here in Indianapolis. The theme for the workshop is "Resource Monitoring During Performance Testing." You can find more details here.

WOPR is one of my favorite workshops, both for the quality of its attendees and for its selection of topics. I'm looking forward to both hosting and attending. If you're interested in more details or have any questions, I'm more than happy to help.