IT projects: Contractually safeguarding performance
Users expect software to work. This includes, for example, good response times, a permanently and reliably available application, no program crashes, and timely reports. This sounds simple, but it is by no means a matter of course. When purchasing software, users usually have no concrete idea of how long it should take to achieve certain results. Only in the course of an implementation project do they discover that the application falls short of their expectations. If performance criteria have not been agreed in writing by then, trouble looms. In the worst case, such disputes can even lead to the failure of the implementation project.
How can performance be measured?
In computer science, the term performance describes the ability of software to execute a defined functionality in a certain way: for example, quickly, without interruption, or concurrently.
The performance of a software application includes the following aspects:
• Availability: The software application must offer the highest possible uptime from the user's point of view. Good availability results from the increased reliability and robustness of the hardware components and the software.
• Reliability: The software application should respond correctly and predictably in terms of its functionalities.
• Response time behavior: The software application must respond within an agreed time threshold for each defined business transaction.
• Scalability: The resource requirements of the software application and system components must grow at most linearly as input volumes increase. It must be ensured that response time behavior remains consistently good as end-user requests vary or increase.
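Response time behavior, in particular, only becomes enforceable once it is actually measured. As a minimal illustration, assuming a hypothetical threshold and a placeholder transaction (neither taken from a real agreement), a measurement could look like this in Python:

```python
import time

AGREED_THRESHOLD_SECONDS = 3.0  # hypothetical contractual threshold

def run_business_transaction():
    """Placeholder for a defined business transaction (e.g. opening a report)."""
    time.sleep(0.05)  # simulate the work of the transaction

def measure_response_time(transaction):
    """Return the wall-clock duration of a single transaction in seconds."""
    start = time.perf_counter()
    transaction()
    return time.perf_counter() - start

duration = measure_response_time(run_business_transaction)
print(f"response time: {duration:.3f}s, "
      f"{'within' if duration <= AGREED_THRESHOLD_SECONDS else 'exceeds'} the agreed threshold")
```

The point of such a sketch is not the tooling but the principle: a threshold that is written down as a number can be checked automatically, while a formulation such as "the application responds quickly" cannot.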
How usable is the software?
Performance is essentially about the usability of a software application. Usability is specified in more detail from the viewpoints of effectiveness, efficiency and satisfaction.
• Effectiveness refers to the accuracy and completeness with which a user achieves a particular goal.
• Efficiency captures the effort used to effectively achieve the goal.
• Satisfaction characterizes the positive attitude and freedom from interference when using an application.
Performance also depends to a large extent on the satisfaction of the end users. If the end users are dissatisfied, considerable difficulties and multiple notices of defects after the go-live are almost inevitable.
What are the criteria?
But how can criteria for good performance be defined? The complexity becomes particularly apparent when performance is obviously insufficient from the user's point of view, yet determining the actual cause of the poor response times takes a great deal of effort.
If attempts are then made to compensate for the lack of software performance with additional hardware, this can increase dissatisfaction even further, and further unplanned costs may arise for the user. Additional expenditure on hardware and/or CPU resources quickly leads to discussions about claims for damages.
This can only be avoided if the contracting parties stipulate two things from the outset:
• The performance criteria of the software application to be introduced
• The requirements for the hardware to be used as well as the other IT infrastructure to ensure the agreed performance criteria.
The contracting parties are therefore only helped by a concrete contractual agreement: a specification of the performance criteria that can, and should, cover much more than mere response times for certain dialogs. After all, the performance of a software application also determines its usability and thus the satisfaction of the end users.
The performance criteria that the contractual partners must define naturally depend on the software application and the specific project. It is crucial that the agreements are not limited to vague and imprecise formulations, but that the criteria are measurable and controllable. In case of doubt, the user must be able to substantiate non-compliance with the agreed performance criteria. The following quality criteria can be applied to the agreement of performance criteria:
• Appropriate scope and structure
• Understandability, unambiguity and consistency
• Feasibility and testability
Inadequacies in the description of requirements raise a multitude of risks in the project. This also applies to the description of performance criteria. Ideally, these criteria are discussed at the beginning of the contract negotiations.
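One way to keep agreed criteria measurable and controllable is to record them in a machine-readable form that both parties can test against. The following Python sketch illustrates the idea; the criteria names and limits are invented for illustration and are not taken from any real contract:

```python
# Hypothetical, machine-checkable record of agreed performance criteria.
# Names and limits are illustrative only, not a real contract.
AGREED_CRITERIA = {
    "availability_percent_min": 99.5,      # minimum uptime per month
    "response_time_p95_seconds_max": 3.0,  # 95th percentile for defined dialogs
    "error_rate_percent_max": 1.0,         # failed transactions under load
}

def check_compliance(measured):
    """Compare measured values against the agreed limits.

    Keys ending in '_min' require measured >= limit;
    keys ending in '_max' require measured <= limit.
    """
    results = {}
    for name, limit in AGREED_CRITERIA.items():
        value = measured[name]
        results[name] = value >= limit if name.endswith("_min") else value <= limit
    return results

measured = {
    "availability_percent_min": 99.7,
    "response_time_p95_seconds_max": 2.4,
    "error_rate_percent_max": 0.3,
}
print(check_compliance(measured))
```

A record of this kind directly satisfies the quality criteria listed above: its scope is explicit, each entry is unambiguous, and compliance can be tested mechanically.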
Arrange performance and load tests
Test scenarios can be used to generate a (possibly extreme) load to be expected during productive use and to examine the behavior of the tested software application. On the one hand, this allows errors to be uncovered that were not found in the functionally oriented system or integration test; on the other hand, the fulfillment of non-functional requirements (including agreed response times) can be tested.
Such performance and load tests are downstream of the functional tests and include, for example, the following steps:
• Determining the requirements for the business transactions to be executed
• Designing the test scripts
• Setting up a load-generation system
• Performing the load test(s)
• Monitoring the system components during the load test(s)
• Analyzing the load test results and defining the resulting measures for performance optimization
• Reporting the results
The software application must already be in a functionally stable state in order to be tested for load handling at all. Performance and load tests therefore only make sense towards the end of the project. In order not to be faced, shortly before go-live, with the realization that the application does not meet the agreed performance criteria, the contractual partners can also agree to carry out performance measurements during the course of the project. These repeat selected test cases and individual processes under a base load and thus test individual functions for compliance with the performance criteria. Performance and load tests, on the other hand, cover entire process chains and business processes.
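The steps above can be sketched in miniature. The following Python snippet is a simplified stand-in for real load-test tooling: it fires concurrent requests at a placeholder transaction, records the latencies, and evaluates the 95th percentile against an assumed threshold. All parameters (user count, request count, threshold) are hypothetical:

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

P95_THRESHOLD_SECONDS = 3.0   # assumed agreed threshold
CONCURRENT_USERS = 20         # assumed load profile
REQUESTS_PER_USER = 10

def business_transaction():
    """Placeholder for a real end-to-end business transaction."""
    time.sleep(0.02)

def timed_call():
    """Execute one transaction and return its latency in seconds."""
    start = time.perf_counter()
    business_transaction()
    return time.perf_counter() - start

def run_load_test():
    """Run the transaction under concurrent load and collect latencies."""
    with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
        futures = [pool.submit(timed_call)
                   for _ in range(CONCURRENT_USERS * REQUESTS_PER_USER)]
        latencies = [f.result() for f in futures]
    p95 = statistics.quantiles(latencies, n=100)[94]  # 95th percentile
    return latencies, p95

latencies, p95 = run_load_test()
print(f"{len(latencies)} requests, p95 = {p95:.3f}s, "
      f"{'meets' if p95 <= P95_THRESHOLD_SECONDS else 'misses'} the threshold")
```

Real projects will use dedicated tools rather than a hand-rolled script, but the structure is the same: defined transactions, a defined load profile, recorded measurements, and an automated comparison against the agreed criteria, which makes the results reportable and, if necessary, usable as evidence.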
It is best for the contracting parties to consider at an early stage of the contract negotiations what requirements will be placed on performance as part of usability. It is important to specify the criteria and the prerequisites of the IT infrastructure required to meet them. The provisions on testing and acceptance that are contractually required anyway should also include explanations of performance and load tests. What to do when IT projects fail and what options are available in the worst case is described in my article IT Acceptance.
How the performance criteria are defined usually depends on the complexity of the project and, unfortunately, also on the time available to the contracting parties. Both contracting parties should focus on good preparation and ensure that sufficient time and resources are available. If in doubt, the client and the IT service provider should seek external help. We advise contracting parties on the identification of performance criteria and help them to identify and regulate the relevant issues. We also support them with specific questions about the implementation of the project.