Whitepaper: Build API Performance From the Ground Up – Use Unit Tests to Benchmark API Component Performance

Get the best of both worlds: execute unit-level tests with traditional performance testing tools.

Learn the advantages of evaluating API performance with unit tests, plus why unit-level performance testing is frequently overlooked.


Download the whitepaper to understand the strategy for measuring and benchmarking performance of components your team can integrate into your target application.

Focus areas:
  • Establishing a component benchmarking workflow
  • Introducing a component benchmarking example
  • Creating unit tests for benchmarked components
  • Compensating for the testing framework
  • Selecting and configuring benchmark performance parameters
  • Understanding concurrency, intensity and test duration
  • Configuring concurrency, intensity and test duration
  • Analyzing benchmark outputs
  • Performance at target load levels
  • Component scalability assessment
Unit-level tests bring several advantages to performance testing. They:
  • Offer a flexible yet standardized way of testing at the component level.
  • Are well understood and widely used in development environments.
  • Typically require only a fraction of the hardware resources necessary for testing the entire application. This means you can test the components at the maximum projected “Stress” level (see Fig. 1) early and more often with the hardware resources available in development environments.
At various stages of development, teams face performance questions such as:
  • Which of the available third-party components not only satisfies the functional requirements but also delivers the best performance? Should I use component A, B, or C, or implement a new one? (Design and prototyping stages)
  • Which of the alternative code implementations performs best? (Development stage/related to code developed internally)

A properly configured and executed component benchmark can help answer these questions. A typical component benchmark workflow consists of:

  1. Creating unit tests for the benchmarked components
  2. Selecting benchmark performance parameters (the same for all components)
  3. Executing performance tests
  4. Comparing performance profiles of different components
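As a sketch of step 1, a unit-level benchmark can be as simple as a timed loop with a warm-up phase. The sketch below is in Python for brevity; in a JVM project the same structure would live in a JUnit test method. `component_call` is a hypothetical stand-in for the component under benchmark:

```python
import time

def component_call(payload):
    # Hypothetical stand-in for the component under benchmark,
    # e.g. a parser, serializer, or API client call.
    return sorted(payload)

def benchmark(fn, payload, warmup=1_000, measured=10_000):
    """Time `measured` invocations of `fn` after a warm-up phase.

    The warm-up lets caches (and, on a JVM, the JIT compiler)
    stabilize before any samples are recorded.
    """
    for _ in range(warmup):
        fn(payload)
    samples = []
    for _ in range(measured):
        start = time.perf_counter()
        fn(payload)
        samples.append(time.perf_counter() - start)
    return samples

samples = benchmark(component_call, list(range(100)))
avg_ms = 1000 * sum(samples) / len(samples)
print(f"{len(samples)} invocations, avg {avg_ms:.4f} ms")
```

Comparing component A against component B (step 4) then amounts to swapping the function passed to `benchmark` while keeping every other parameter identical.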

Reproduce the target deployment environment as closely as possible. This includes the operating system, JVM version, and JVM options such as GC settings and server mode. It might not always be possible to reproduce every deployment parameter in the test environment, but the closer the match, the smaller the chance that component performance in the test will differ from the target environment.
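For a JVM-based component, matching these parameters can be as simple as launching the benchmark with the same flags as production. A hypothetical example — the flag values are placeholders to be copied from the actual deployment scripts, and `com.example.ComponentBenchmark` is an assumed class name:

```shell
# Mirror the production JVM configuration in the test environment.
# Heap size and GC flags below are placeholders, not recommendations.
java -server -Xms2g -Xmx2g \
     -XX:+UseG1GC -XX:MaxGCPauseMillis=200 \
     -cp build/classes com.example.ComponentBenchmark
```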

  • Major performance test parameters that determine the conditions under which the components will be tested are load level and load duration.
  • To shape generic performance test parameters into concrete forms, start by examining performance requirements of the target application where you anticipate using this component. Application-level performance requirements can provide concrete or approximate characteristics of the load level to which the component should be subjected.
  • Translating application performance requirements into component performance requirements, however, presents multiple challenges.
  • If there’s an older version of the application, make an educated guess by tracing a few application-level calls or examining call trace statistics collected by an APM tool. If neither option is available, the answer can come from examining the application design.
  • If the component load parameters can’t be deduced from the target application performance specifications, an alternative is to run at the maximum load level that can be achieved on the hardware available for the test. Note: the risk of this approach is that benchmark results may not be relevant in the context of the target application.
  • Be aware of how resource overutilization impacts the test results.
  • Often, the aggregate performance testing parameter ‘load level’ is not explicitly separated into its major parts: intensity and concurrency. This can lead to an inadequate performance test configuration.

Load intensity: The rate of requests or test invocations at which the component will be tested.

Load concurrency: The degree of parallelism with which a load of a given intensity is applied. Concurrency level can be configured by the number of virtual users or threads in a load test scenario.

Test duration: One of the major factors in determining the test duration is the statistical significance of the load test data set. However, a rigorous statistical treatment may be too complicated for everyday practical purposes; running the test long enough for throughput and response times to stabilize is usually a workable substitute.
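To make the distinction between the three parameters concrete, the sketch below applies a fixed intensity with a fixed number of concurrent workers for a fixed duration. It is a minimal Python illustration (a real load test tool would do this with virtual users); `component_call` is a hypothetical component invocation:

```python
import threading
import time

def component_call():
    # Hypothetical component invocation under test.
    time.sleep(0.001)

def apply_load(fn, intensity_rps, concurrency, duration_s):
    """Drive `fn` at roughly `intensity_rps` total invocations per second,
    spread across `concurrency` worker threads, for `duration_s` seconds."""
    per_thread_interval = concurrency / intensity_rps  # seconds between calls per worker
    counts = [0] * concurrency                         # each worker writes only its own slot
    deadline = time.perf_counter() + duration_s

    def worker(idx):
        next_call = time.perf_counter()
        while time.perf_counter() < deadline:
            fn()
            counts[idx] += 1
            next_call += per_thread_interval
            delay = next_call - time.perf_counter()
            if delay > 0:
                time.sleep(delay)   # pace the worker to hold the target intensity

    threads = [threading.Thread(target=worker, args=(i,)) for i in range(concurrency)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sum(counts)

total = apply_load(component_call, intensity_rps=200, concurrency=4, duration_s=1.0)
print(f"completed {total} calls in 1 s (target: 200)")
```

Note that the same intensity can be reached with very different concurrency levels (for example, 200 requests/s from 4 workers or from 40), and a component may behave quite differently under the two configurations — which is exactly why the two parameters should be set separately.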

  • When concrete performance test parameters have been established, you can use them to configure the load test application. After you run the performance tests with different components, you can start analyzing the benchmark outputs.
  • Key performance statistics
  • Efficiency
  • Reliability
  • Scalability
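As an illustration of the "key performance statistics" item, the sketch below computes throughput, average latency, and a nearest-rank 90th-percentile latency from a set of recorded samples. The sample values are made up for the example:

```python
def percentile(samples, p):
    """Nearest-rank percentile: smallest sample with at least p% of
    all samples at or below it."""
    ordered = sorted(samples)
    rank = max(1, round(p / 100 * len(ordered)))  # 1-based nearest rank
    return ordered[rank - 1]

# Made-up latency samples in milliseconds from a 2-second test run.
latencies_ms = [12, 11, 13, 12, 45, 11, 12, 14, 12, 90]
test_duration_s = 2.0

throughput = len(latencies_ms) / test_duration_s   # invocations per second
avg_ms = sum(latencies_ms) / len(latencies_ms)
p90_ms = percentile(latencies_ms, 90)

print(f"throughput: {throughput:.1f}/s, avg: {avg_ms:.1f} ms, p90: {p90_ms} ms")
```

Even in this tiny data set, the two outliers pull the average (23.2 ms) well above the typical sample (~12 ms); comparing components on average latency alone can hide exactly this kind of tail behavior, which is why percentile statistics belong in the benchmark output.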

This type of testing will improve developers’ understanding of the performance implications of various programming patterns, eliminate guesswork related to code performance, and help establish a culture where software performance is as much a concern as software functionality. The use of a common method, testing standard, and performance testing tool can help organizations implement component benchmarking systematically.

About Parasoft

Parasoft helps organizations continuously deliver quality software with its market-proven, integrated suite of automated software testing tools. Supporting the embedded, enterprise, and IoT markets, Parasoft’s technologies reduce the time, effort, and cost of delivering secure, reliable, and compliant software by integrating everything from deep code analysis and unit testing to web UI and API testing, plus service virtualization and complete code coverage, into the delivery pipeline. Bringing all this together, Parasoft’s award-winning reporting and analytics dashboard delivers a centralized view of quality, enabling organizations to deliver with confidence and succeed in today’s most strategic ecosystems and development initiatives — security, safety-critical, Agile, DevOps, and continuous testing.