    Performance Testing
    What a performance.


Performance testing delivers controlled, repeatable results based on user concurrency, throughput and target response times. As a service, it comprises planning, scripting, execution, analysis and reporting stages, like other testing activities, but the analysis is more subjective and execution typically runs in iterative cycles to support troubleshooting.

Performance testing achieves this through several complementary test types:

  • Load testing verifies peak-hour user behaviour, ensuring key target SLA response times are not breached
  • Stress testing establishes the capacity of a system and identifies the primary bottleneck at the point of failure
  • Soak testing is classified as operational acceptance testing but is normally executed as an extended load test, identifying issues such as memory leaks, database tablespace growth, and audit and logging problems
  • Business volumetrics need to be peer reviewed and the workload profile signed off by stakeholders; the accuracy of the simulation correlates strongly with confidence in the results
  • Agile projects often adopt component performance testing at different levels of integration; open source performance testing tools such as JMeter are preferred
  • Enterprise customers usually have a wide range of protocol requirements and typically use LoadRunner (best of breed) to record and simulate different thin client, middleware, CRM and database technologies
  • Other specialised performance test types include single thread benchmarking, volume, spike and endurance
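The load-testing approach above can be sketched in code. The following is a minimal, hypothetical illustration (not a production tool): it simulates concurrent users, collects response times, and checks the 95th percentile against an assumed SLA target. The `call_endpoint` stub stands in for a real HTTP request.

```python
import concurrent.futures
import random
import statistics
import time

def call_endpoint():
    """Stand-in for a real request (e.g. an HTTP GET against the system
    under test); here it just sleeps for a simulated service time."""
    time.sleep(random.uniform(0.01, 0.05))

def run_load_test(concurrency=10, requests_per_user=5):
    """Run `concurrency` simulated users in parallel and collect the
    response time of every request they make."""
    def user_session(_):
        times = []
        for _ in range(requests_per_user):
            start = time.perf_counter()
            call_endpoint()
            times.append(time.perf_counter() - start)
        return times

    timings = []
    with concurrent.futures.ThreadPoolExecutor(max_workers=concurrency) as pool:
        for session_times in pool.map(user_session, range(concurrency)):
            timings.extend(session_times)
    return timings

def p95(timings):
    """95th-percentile response time."""
    return statistics.quantiles(timings, n=20)[-1]

if __name__ == "__main__":
    timings = run_load_test()
    sla = 0.5  # assumed SLA target, in seconds
    verdict = "met" if p95(timings) <= sla else "BREACHED"
    print(f"p95 = {p95(timings):.3f}s, SLA {verdict}")
```

In practice a dedicated tool such as JMeter or LoadRunner handles protocol simulation, ramp-up profiles and reporting; the sketch only shows the underlying idea of driving concurrency and measuring percentile response times against an SLA.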

Key Benefits

  • Peak hour load testing will expose response time issues and ensure systems are performant for go-live
  • Stress testing will identify concurrency and hardware ceilings and enable a strategic, geographically phased roll-out
  • Load and stress testing simulating future user concurrency and database volumes helps predict the longevity of a software release
  • Soak testing will increase confidence in system availability and performance, for sustained throughput over a specified number of weeks
  • Regular baselines and benchmarking provide a powerful capability for verifying the performance impact of single key changes; e.g. functional release, operating system or database patches
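The baselining idea in the last point can be made concrete. The sketch below (a hypothetical illustration, with made-up response times and an assumed 10% regression threshold) compares mean response times from two benchmark runs to flag whether a single change has degraded performance.

```python
import statistics

def compare_baselines(before, after, threshold_pct=10.0):
    """Compare mean response times between two benchmark runs.

    Returns (delta_pct, regressed): delta_pct is the percentage change
    in mean response time, and regressed is True when the 'after' run
    is more than threshold_pct slower than the baseline."""
    mean_before = statistics.mean(before)
    mean_after = statistics.mean(after)
    delta_pct = (mean_after - mean_before) / mean_before * 100.0
    return delta_pct, delta_pct > threshold_pct

# Example: response times (seconds) before and after an OS patch
baseline = [0.20, 0.22, 0.21, 0.19]
patched = [0.25, 0.27, 0.26, 0.24]
delta, regressed = compare_baselines(baseline, patched)
verdict = "regression" if regressed else "ok"
print(f"mean response time changed by {delta:+.1f}% -> {verdict}")
```

Because only one change is varied between runs, any shift outside the threshold can be attributed to that change, which is what makes regular baselining so powerful.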

  • e: [email protected]
  • t: +44(0)161 240 3603
  • US (Tampa): +1 (813) 9061585
  • US (New York): +1 (929) 4746696


WeWork, Moorgate, 1 Fore Street, London, EC2Y 9DT


iTest Hub, XYZ Building, Spinningfields, M3 3EB