Changing requirements
The hard reality for anyone engaged in software development is this: changing requirements occur from day one. While some are easy to handle, others are far harder to tame. How much a change ends up costing depends heavily on the point in the system's lifecycle at which it arrives.
The only way is up
Back in the 1980s, the IT industry was alerted to the cost of change over time by Barry Boehm in his pioneering book Software Engineering Economics. He warned that the cost of a change grows exponentially the longer it goes unaddressed. In other words, a requirement that could have been “nipped in the bud” at the start of development can demand a hundred times the effort five years into production. And although agile methodologies often flatten the curve, typically reducing the effort ratio from 1:100 to around 1:4, no approach completely reverses the upward trend.
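To make those ratios concrete, here is a toy model of the curve. It is an illustration only, assuming a fixed cost multiplier per lifecycle phase, not Boehm's actual data: if a change costs one unit of effort when caught at requirements time, an exponential curve multiplies that cost at every later phase.

```typescript
// Toy model of the cost-of-change curve (illustrative assumption, not Boehm's data).
// If a change costs `baseline` units when caught immediately, and the cost multiplies
// by `ratioPerPhase` at each subsequent lifecycle phase, the total after n phases is:
function relativeCost(baseline: number, ratioPerPhase: number, phasesLater: number): number {
  return baseline * Math.pow(ratioPerPhase, phasesLater);
}

// A steep "traditional" curve reaches the often-quoted 1:100 multiplier over five phases,
// while a flatter "agile" curve ends up closer to 1:4.
const steepRatio = Math.pow(100, 1 / 5); // ≈ 2.51x per phase
const flatRatio = Math.pow(4, 1 / 5);    // ≈ 1.32x per phase

console.log(relativeCost(1, steepRatio, 5)); // ≈ 100
console.log(relativeCost(1, flatRatio, 5));  // ≈ 4
```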
Unsurprisingly, the burden of legacy software systems results in a cost-of-change curve that’s anything but flat. This is rarely the fault of a single department; more often it comes down to a lack of cohesion between teams. Legacy systems are typically left to degrade over many years, sometimes even decades. At any rate, it’s simply not possible to foresee the impact each aspect of a system will have ten years down the line. Let’s not forget that changing requirements have a knock-on effect, leading to breaking changes in technologies, architecture and, of course, functionalities. Here are some examples of breaking changes:
- A system used to serve thousands of clients in one country, but now has to support millions of users around the globe on a 24/7 basis.
- An intranet system becomes part of a public web application.
- A system was architected to support batch processing only. Now it’s expected to operate in near real-time mode.
As you can see, these examples belong in the “non-functional” requirements category, which describes the operational characteristics of a system. This is different from the “functional” requirements group, which outlines specific behaviours. While changes to functional requirements are usually manageable, non-functional requirements are often far more intractable because they affect the foundations of a system across both technology and architecture.
Dig deep before you build
Just as each system is unique, so is each situation. Best practice is to rebuild parts of the solution in phases. Before launching a step-by-step migration, all relevant friction points must be identified. New functionalities are then built from scratch while the overall architecture and its ability to support existing requirements are preserved. The former components are mediated through an API layer, which can only be built once the system’s existing code has been refactored upfront.
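Below is a minimal sketch of how such an API layer can mediate between old and new components. The names and the routing rule are illustrative assumptions, not part of any specific framework; the point is that callers depend only on the facade, so each functionality can be cut over from the legacy implementation to its rebuilt replacement independently.

```typescript
// Sketch of a phased migration behind an API layer (names are hypothetical).
interface Order {
  id: string;
  total: number;
}

interface OrderService {
  getOrder(id: string): Promise<Order>;
}

// Thin adapter around the existing, refactored legacy component.
class LegacyOrderAdapter implements OrderService {
  async getOrder(id: string): Promise<Order> {
    // ...delegate to the legacy module or its extracted interface here...
    return { id, total: 0 };
  }
}

// Newly built replacement for the same functionality.
class RebuiltOrderService implements OrderService {
  async getOrder(id: string): Promise<Order> {
    // ...new implementation, built from scratch...
    return { id, total: 0 };
  }
}

// The API layer decides, per request, which implementation serves the call.
class OrderFacade implements OrderService {
  constructor(
    private legacy: OrderService,
    private rebuilt: OrderService,
    private useRebuilt: (id: string) => boolean, // e.g. a feature flag or rollout rule
  ) {}

  getOrder(id: string): Promise<Order> {
    return this.useRebuilt(id) ? this.rebuilt.getOrder(id) : this.legacy.getOrder(id);
  }
}

// Usage: start with all traffic on the legacy path, then widen the rollout step by step.
const service: OrderService = new OrderFacade(
  new LegacyOrderAdapter(),
  new RebuiltOrderService(),
  () => false, // flip per functionality once the rebuilt piece is verified
);

service.getOrder("ORD-1").then((order) => console.log(order.id));
```

Starting with everything on the legacy path and widening the rollout one functionality at a time keeps the migration reversible at every step.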
A Legacy Systems Assessment is the first mandatory step for any company serious about modernising its legacy systems. It involves a four-phase analysis of software engineering practices. Be aware that this analysis engages everyone, not just IT but also the business side, meaning every team member with a connection to the system. An assessment can only be completed successfully when all parties collaborate closely (including third-party vendors where required).
— Our services —
Profinit Modernisation Framework
We apply a proven framework of best practices to modernise your legacy system enterprise-wide.
Legacy Systems Assessment
We identify the friction points to help you take an informed approach to legacy systems modernisation.
Legacy Systems Takeover
Our expert software engineers will oversee a smooth transition and successful A-Z takeover of your legacy applications.