Our Modernisation Journey
For a long time the default response to an aging legacy core business application was wholesale replacement - those were the golden days of the ERP and the ‘off the shelf’ package solution.
More recently organisations have started to think about what they could do differently to leverage the investment they have made in their ‘legacy’ applications and how they can better manage the risks associated with wholesale replacement. There’s certainly no shortage of horror stories about large-scale system replacement projects going off the rails, and the price tag of some of those projects is eye-watering - out of the price range of most organisations in New Zealand.
So what are the alternatives? The key to understanding what alternatives are available to you is a clear understanding of your ‘why’. Why do you need to do anything about your legacy application at all? Perhaps it’s about supportability - the application mostly does what you need from a business perspective but the software and hardware it runs on are no longer supported. Perhaps it’s about usability - maybe you need it to run on a mobile device but it’s been written to run on a desktop computer. Is it a workforce thing - are you struggling to find people who ‘code your language’? Whatever the reason, the good news is that you have a range of options available to you, including modularising, modernising, replatforming, extending and replacing. The picture below shows our ‘Modernisation on a Page’ view, explained in more detail below.

For us the ‘why’ was centred around enabling our organisational transformation. The context in which we operate is changing rapidly and we need to ensure we have fit-for-future capabilities and platforms in place to navigate those changes.

Our legacy application is over a decade old and is a monolithic application made up of 11M+ lines of Java code. To get it future-ready we needed to focus first on modularising the application as a lead-in to then modernising or replacing the various components.

We have invested a lot in building the right capabilities and culture within our teams to support the shift we need to make.
Modularising is as much about changing the way our teams work as it is about technology changes such as microservices, APIs and changing code repository structures. Moving towards a Feature Driven Development process, and bringing along the right DevOps, Agile and CI/CD building blocks, represented a change to our working practices, processes, tools and the way we think about things.
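To make the modularisation idea concrete, here is a deliberately simplified Java sketch - the names are hypothetical, not from our actual codebase. The point is that other modules depend on a published interface rather than reaching into a module’s internal classes, which is what makes it possible to modernise or replace the implementation behind the boundary later:

```java
// Hypothetical sketch of carving a module boundary out of a monolith.
// Before modularisation, callers reached directly into internal classes;
// after, they depend only on a published interface for the module.

// The published contract that other modules (or services) are allowed to use.
interface CustomerLookup {
    String displayNameFor(String customerId);
}

// Internal implementation; in a modular build this lives in its own
// module/repository and is not visible to callers outside the boundary.
class LegacyCustomerStore implements CustomerLookup {
    @Override
    public String displayNameFor(String customerId) {
        // Stand-in for a call into the legacy data layer.
        return "Customer-" + customerId;
    }
}

public class ModuleBoundaryDemo {
    public static void main(String[] args) {
        // Callers hold the interface type, never the concrete class.
        CustomerLookup lookup = new LegacyCustomerStore();
        System.out.println(lookup.displayNameFor("42")); // prints "Customer-42"
    }
}
```

Because the caller only sees `CustomerLookup`, the `LegacyCustomerStore` behind it can later be swapped for a microservice client without touching the calling modules.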
One of the key challenges facing organisations who move towards delivering online services is agility and speed to market. Customers don’t want to (and won’t) wait weeks or months for changes or fixes to your application. That may have been ‘ok’ in a traditional enterprise IT context but it won’t work when you shift to external customer delivery. You need to be able to respond to change in a timely manner and that takes investment in infrastructure and capability. For us modularisation is a critical enabler of the move towards a build pipeline model which allows us to better meet those challenges associated with being an online service delivery organisation.
The delivery pipelines also underpin our move towards a product-centric team structure, where each team can run its own product-based delivery at its own cadence.

As part of modernising the application we also had to make some changes to the underpinning infrastructure. Our shift towards being more evidence-based and intelligence-led as an organisation means we need to make some big changes at the data layer. Whilst we run enterprise-grade databases today we know that over the coming few years the volume and velocity of data we deal with will increase significantly. We need to ensure we have a suitable data infrastructure that can not only support advanced analytics but over time allows us to get further into the realms of machine learning and AI.

We are also looking at what investments in business intelligence and data analytics capabilities we need to be making which will build on this modernised data layer.
All of our infrastructure is currently either virtualised (on-premise model) or cloud based. Our current batch-based/key-date-driven model means we have a few high capacity days a year - for example, around 100,000+ users visit the website in one day in January each year. We have traditionally capacity managed to those peak workloads but the hybrid model we’re adopting allows us more choice around where we run our workloads and the ability to burst during high peak usage times. Our shift towards being event-driven - rather than batch-based - will also help to alleviate some of those peaks over time.
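As a rough Java sketch of that shift - with purely illustrative names, not our production design - the batch model lets work queue up until a key date and processes it all in one large run, while the event model handles each change as it arrives, spreading the load across the year:

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Illustrative contrast between key-date batch processing and
// event-driven handling. All names here are made up for the example.
public class EventVsBatchDemo {
    // Batch model: changes accumulate until the key date, then are
    // processed in one large run - producing the capacity peaks above.
    static int runBatch(Queue<String> queued) {
        int processed = 0;
        while (!queued.isEmpty()) {
            queued.poll();          // drain the whole backlog in one window
            processed++;
        }
        return processed;
    }

    // Event model: each change is handled immediately on arrival,
    // so there is no backlog and no single peak processing day.
    static int handled = 0;
    static void onEvent(String event) {
        handled++;                  // process right away
    }

    public static void main(String[] args) {
        Queue<String> backlog = new ArrayDeque<>();
        for (int i = 0; i < 5; i++) backlog.add("change-" + i);
        System.out.println("batch run processed: " + runBatch(backlog));

        for (int i = 0; i < 5; i++) onEvent("change-" + i);
        System.out.println("events handled on arrival: " + handled);
    }
}
```

The same five changes get processed either way; the difference is when the work happens, which is what flattens the capacity peaks.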

Using a mix of virtualisation, containerisation, serverless and software as a service allows us maximum flexibility in relation to the platforms and infrastructure we use to deliver our products and services. The adoption of cloud and ‘as a service’ based offerings means your finance model is going to change - we are adopting a FinOps-based model to work through these changes and implement the right risk and cost control points.
We have made good progress on our modernisation journey - having delivered a major modularisation and upgrade release earlier in the year - and we are already starting to see the benefits of a more flexible, responsive architecture.
At the same time we are working on the development of business to business (B2B) and business to consumer (B2C) APIs, sorting out our Digital Identity Management (IDM) platform, replatforming legacy line of business applications, looking at our business intelligence and data analytics capabilities as well as shifting our operating model to be DevOps-esque.
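For a flavour of what a minimal HTTP API endpoint looks like in Java - purely illustrative, using only the JDK’s built-in HTTP server (Java 11+), and not a reflection of our actual B2B/B2C API design - the sketch below stands up an endpoint, calls it once, and shuts down:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

// Illustrative sketch of a tiny JSON-over-HTTP endpoint using only the JDK;
// the path and payload are invented for the example.
public class MiniApiDemo {
    static String callOnce() throws Exception {
        // Bind to an ephemeral port so the demo doesn't clash with anything.
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/api/status", exchange -> {
            byte[] body = "{\"status\":\"ok\"}".getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().add("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) { os.write(body); }
        });
        server.start();
        try {
            // Call our own endpoint once and return the response body.
            int port = server.getAddress().getPort();
            HttpResponse<String> resp = HttpClient.newHttpClient().send(
                HttpRequest.newBuilder(
                    URI.create("http://localhost:" + port + "/api/status")).build(),
                HttpResponse.BodyHandlers.ofString());
            return resp.body();
        } finally {
            server.stop(0);
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(callOnce()); // prints {"status":"ok"}
    }
}
```

A real B2B/B2C API would of course sit behind the identity platform and an API gateway, but the request/response shape is the same idea at any scale.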
Piece by piece - step by step - we are putting in place the building blocks which will support our organisational transformation into the digital age.