As a system is developed, it is likely that the number of entities will grow, and with this the number of interactions between those entities. As complexity increases, it can become increasingly difficult to maintain a complete understanding of those interactions, and therefore what is actually happening within the system when a particular event occurs. This in turn increases the time that it can take to implement new functionality and to fix bugs, while also increasing the likelihood of new bugs being introduced while fixing another.
Measures of software complexity have been developed since the 1970s in an attempt to quantify the internal workings of a software system rather than relying on a software developer’s subjective opinion.
Some measures of complexity attempt to model the communication between different modules of code, and consider how heavily each module relies on internal knowledge of another module (known as ‘coupling’) in order to function. Lower coupling results in a more modularised system, which should be easier to test, maintain and enhance.
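As a rough illustration of the difference, the sketch below contrasts a tightly coupled function, which reaches into another module’s internal data layout, with a loosely coupled one that depends only on a small public interface. The Database class, the report functions and the data are all invented for this example.

```python
class Database:
    """Stands in for a data-access module whose internal layout may change."""

    def __init__(self):
        self._rows = [
            {"name": "Alice", "total": 120},
            {"name": "Bob", "total": 80},
        ]

    def totals(self):
        """Small public interface: yields (name, total) pairs."""
        for row in self._rows:
            yield row["name"], row["total"]


def report_tight(db: Database) -> str:
    # Tightly coupled: reaches into Database's private attribute, so any change
    # to how rows are stored internally breaks this function.
    return "\n".join(f"{r['name']}: {r['total']}" for r in db._rows)


def report_loose(db: Database) -> str:
    # Loosely coupled: depends only on the public totals() interface, so the
    # Database internals can change without this function needing to.
    return "\n".join(f"{name}: {total}" for name, total in db.totals())


if __name__ == "__main__":
    print(report_loose(Database()))
```

Only the loosely coupled version survives a change to how the rows are stored, which is exactly the property that makes low-coupling systems easier to maintain and enhance.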
Another such metric is ‘cyclomatic complexity’, developed by Thomas J. McCabe. Cyclomatic complexity is a measure of the number of potential paths of execution through a system; the more paths there are, the more ways the system could take a wrong path and produce an incorrect result. High cyclomatic complexity also increases both the testing and the maintenance burden, because each possible path has to be tested and maintained.
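For a single function, McCabe’s measure works out to the number of decision points plus one. The sketch below uses an invented shipping_cost function and made-up pricing rules purely to show how each branch adds another path that must be tested.

```python
def shipping_cost(weight_kg: float, express: bool, international: bool) -> float:
    """Invented pricing rules, used only to count decision points."""
    cost = 5.0
    if weight_kg > 10:        # decision 1
        cost += 10.0
    elif weight_kg > 2:       # decision 2
        cost += 4.0
    if express:               # decision 3
        cost *= 2
    if international:         # decision 4
        cost += 15.0
    return cost

# Four decision points give a cyclomatic complexity of 5, so at least five
# test cases are needed to exercise every independent path through the code.
```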
Of the many ways to measure complexity within a software system, the ones that provide the greatest value to a project will depend on the business requirements and the development environment. A software system built on top of a legacy platform may carry a certain amount of essential complexity because of the nature of the existing system, whereas a greenfield build may have more freedom to employ whichever best practices the development team deems most valuable.
In 2013, $542 billion was spent on software, with $132.2 billion of that on custom-built software alone, and considerable attention has been devoted to controlling software costs. Historically, this has been achieved by focusing on tools and techniques designed to make software development as rapid and inexpensive as possible. The focus is, however, shifting from the development phase of the software lifecycle to the maintenance phase, because for every $1 spent on development, $3 is spent on maintenance and enhancements.
Software complexity has been widely regarded as a major contributor to software maintenance costs because increased complexity means that maintenance and enhancement projects will take longer, cost more, and result in more errors.
Sajeel Chaudhry, consultant at Brickendon, says: “Developing with an aim to reduce complexity will lead to a longer development phase, but this will be more than compensated for by the huge savings during the maintenance phase by reducing labour, improving lead times for bug fixes, enhancements and critical changes.”
What Factors Need to Be Considered?
In recent years the focus has shifted towards software development approaches designed to improve a system’s maintainability: introducing automated testing at the earliest stage, writing small, modularised units of code, and working in short ‘sprints’, often with a sprint set aside for ‘refactoring’, that is, reworking an area of code whose design has been rethought or which has become overly complex.
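As a minimal sketch of how early automated testing supports later refactoring, the example below pins down the behaviour of an invented discounted_price function with unit tests; its implementation can then be reworked with confidence that the business rule still holds. The function, its discount rule and the test values are all assumptions made for illustration.

```python
import unittest


def discounted_price(price: float, loyalty_years: int) -> float:
    """5% off per loyalty year, capped at 25% (invented business rule)."""
    discount = min(loyalty_years * 0.05, 0.25)
    return round(price * (1 - discount), 2)


class DiscountTests(unittest.TestCase):
    # These tests fix the expected behaviour, so the body of discounted_price
    # can later be refactored without silently changing the rules.
    def test_cap_applies_after_five_years(self):
        self.assertEqual(discounted_price(100.0, 10), 75.0)

    def test_no_discount_for_new_customers(self):
        self.assertEqual(discounted_price(100.0, 0), 100.0)


if __name__ == "__main__":
    unittest.main()
```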
Additionally, there are now improved debugging tools, integrated refactoring functions, static analysis tools and continuous integration platforms, all of which help developers make changes with confidence to a system that is under development. If, however, a system already has a high level of complexity, these tools are less helpful and can provide a false sense of security.
As this all highlights, reducing the complexity of a software system during the build can have a positive impact on both the cost and the time it takes to enhance the system further.