In today's market, customers' product expectations are changing rapidly: everyone wants products that are rich, fast, and intuitive to use, so the kinds of testing that are imperative are very different from what they were a few years ago. With the excellent internet bandwidth now available, products and applications scale well in their performance, and customers have come to expect this from the products they use. That said, such fast and flawless products are not as easy to deliver as it sounds. Meeting a product's performance goals involves a number of challenges, including:
- Increasing complexity of product architecture
- A growing number of interfaces between the product and external systems and dependencies
- Increased user base, leading to heavy load on the system
- Performance goals that keep changing, owing to market competition and demanding customers
- Faster time to market for product release, which shrinks available time for performance testing
- A shortage of testers who specialize in performance testing
- The investment required in performance testing infrastructure and tools, which are often very expensive and can become obsolete quickly
Looking at the list above, some of these are product- and user-specific challenges, while others are test process challenges. The key to solving the product- and user-specific challenges lies in developing a robust performance test strategy and adequately testing the product's performance before release. So what we really need to examine is how to address the performance testing challenges; this forms the core of the discussion below.
Performance testing has long been in existence; however, only in recent years has it rightfully been given serious thought around specialization, domain-centric testing, using the right infrastructure to mimic the end user environment, and so on. As product development technology advances with secured protocols, rich internet applications, service-oriented architecture, web services, etc., developing performance test scripts has also become more complex. A regular record-and-play tool will no longer work across a majority of products. Very advanced performance testing tools are entering the market from commercial players such as HP, Microsoft, and IBM, and are constantly being enhanced to keep pace with changing product technology. However, taking advantage of such tools does not come free. These are expensive Commercial Off-The-Shelf (COTS) tools, which you need to evaluate very carefully before investing in. Such an investment may make sense for a product company, but less so for a testing services company, because a services company is often required to use whatever tool its clients have aligned their product with.

On the other end of the spectrum, some excellent open source performance testing tools have become available lately, and they give the COTS tools a run for their money. Many companies, including product companies, have started looking at these as good alternatives, from both a cost and a feature standpoint, for their performance testing efforts. JMeter is one such tool that many companies have been leveraging in recent years. Talking of trade-offs again: JMeter is open source, free of cost, and easy for a tester to ramp up on, but its feature set is not as comprehensive as that of the COTS tools. It has quite a few limitations around its reporting capabilities, which is a very important feature in performance testing, especially when the tester is handling large volumes of data. This is also data that management and business teams will want to view, so a performance testing tool certainly has to offer rich reporting functions.
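To make the scripting point concrete, the sketch below shows roughly what even a minimal hand-written load script involves: firing concurrent virtual users against an endpoint and recording response times. This is a hedged illustration rather than a substitute for a real tool; the target URL, user count, and request count are placeholder assumptions.

```python
# Minimal load-generation sketch (illustrative only).
# Assumptions: the target URL, user count, and request count are placeholders.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

TARGET_URL = "http://localhost:8080/health"  # hypothetical endpoint
CONCURRENT_USERS = 10
REQUESTS_PER_USER = 20

def one_request(url: str) -> float:
    """Issue a single GET and return its response time in seconds."""
    start = time.perf_counter()
    with urllib.request.urlopen(url) as resp:
        resp.read()
    return time.perf_counter() - start

def one_user(url: str, n: int) -> list[float]:
    """Simulate one virtual user issuing n sequential requests."""
    return [one_request(url) for _ in range(n)]

if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
        futures = [pool.submit(one_user, TARGET_URL, REQUESTS_PER_USER)
                   for _ in range(CONCURRENT_USERS)]
        timings = [t for f in futures for t in f.result()]
    timings.sort()
    print(f"requests: {len(timings)}")
    print(f"median:   {timings[len(timings) // 2] * 1000:.1f} ms")
    print(f"p95:      {timings[int(len(timings) * 0.95)] * 1000:.1f} ms")
```

Even this toy version has to deal with concurrency, timing accuracy, and percentile reporting; real products add authentication, dynamic session data, and correlated requests on top, which is exactly why plain record-and-play falls short.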
This is where a company can strike a balance: go with an open source tool to take advantage of what it offers, but also invest in building reusable intellectual property (IP) on top of it to address the tool's limitations (a sketch of such reporting IP follows the list below). Doing so benefits all the entities involved:
- Clients (product companies) get performance testing done at a lower cost without compromising on quality
- Vendors (test services companies) can offer good value-added services to their clients, differentiate themselves from the competition in the services market, and give their employees interesting out-of-project opportunities to work on building such core IP
- Employees of test services companies get very challenging and interesting opportunities, and thus good career progression avenues
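As one hedged example of such reusable IP, the sketch below post-processes a JMeter results file into the kind of per-transaction summary that management and business teams typically ask for. It assumes JMeter's default CSV output (a `.jtl` file with a header row and its standard `label`, `elapsed` (ms), and `success` columns); the file name is a placeholder, and this is a starting point rather than a finished reporting layer.

```python
# Reusable reporting sketch on top of JMeter's CSV results (.jtl).
# Assumes JMeter's default CSV output with a header row and its standard
# 'label', 'elapsed' (milliseconds), and 'success' columns.
import csv
from collections import defaultdict

def summarize(jtl_path: str) -> None:
    by_label = defaultdict(list)   # sampler label -> elapsed times (ms)
    errors = defaultdict(int)      # sampler label -> failed sample count
    with open(jtl_path, newline="") as f:
        for row in csv.DictReader(f):
            by_label[row["label"]].append(int(row["elapsed"]))
            if row["success"].lower() != "true":
                errors[row["label"]] += 1
    for label, times in sorted(by_label.items()):
        times.sort()
        n = len(times)
        print(f"{label}: samples={n}, "
              f"avg={sum(times) / n:.0f} ms, "
              f"p90={times[int(n * 0.90)]} ms, "
              f"errors={errors[label]}")

if __name__ == "__main__":
    summarize("results.jtl")  # hypothetical results file path
```

Recent JMeter versions can also generate an HTML dashboard from the same file (`jmeter -n -t plan.jmx -l results.jtl -e -o report/`); a custom layer like this earns its keep mainly when the built-in dashboard does not match what the business teams want to see.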
Such reusable IP and frameworks often serve as productivity enhancers too. For example, data generation, and user data creation in particular, is quite an overhead in a performance testing effort compared to regular functional testing. If the performance testing tool you choose does not offer good user data generation features out of the box, this is an area worth investing some time in upfront, as it will save time both in the current test cycle and in future cycles (a small data-generation sketch follows at the end of this section). Automating such time-consuming and monotonous jobs also frees up the tester to focus on more important areas such as performance benchmarking, system analysis, and competitive data analysis.

Also, to maximize the performance testing effort, the product company should involve the performance testing team right from the early stages of product design. This helps the performance test team work in unison with the business team to understand end users' performance expectations of the product under development; that understanding lets the team chalk out measurable performance goals for the product and gets the entire product development team on the same page about them. Another benefit of such early involvement is the suggestions the performance test team can offer on the product architecture from a performance angle (e.g. the number of interfaces the product has, how certain web service calls are made, the threshold values being set, production system configuration parameters, etc.), which helps reduce the number of performance bugs found in the test cycle. Such interactions between developer and tester truly help build a quality product in the limited release time available.
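Coming back to the data generation point above, here is a minimal sketch of the kind of helper worth building once and reusing: it writes a CSV of unique test users that a tool such as JMeter can consume through its CSV Data Set Config element. The output file name, column names, and password scheme are all illustrative assumptions.

```python
# User data generation sketch for load tests (illustrative assumptions:
# the output file name, columns, and password scheme are placeholders).
import csv
import uuid

def generate_users(path: str, count: int) -> None:
    """Write `count` unique test users to a CSV usable by, e.g.,
    JMeter's CSV Data Set Config element."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["username", "email", "password"])  # header row
        for i in range(count):
            token = uuid.uuid4().hex[:8]  # keeps usernames unique across runs
            writer.writerow([
                f"loaduser_{i:06d}_{token}",
                f"loaduser_{i:06d}_{token}@example.com",
                f"Pw-{token}",            # throwaway test credential
            ])

if __name__ == "__main__":
    generate_users("users.csv", 10_000)  # one row per virtual user
```

Generating tens of thousands of unique users takes seconds this way, versus hours of manual preparation, and the same script carries over unchanged to every future test cycle.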