Our Modernisation Journey

For a long time the default response to an aging legacy core business application was wholesale replacement - those were the golden days of the ERP and the ‘off the shelf’ package solution.

More recently organisations have started to think about what they could do differently to leverage the investment they have made in their ‘legacy’ applications and how they can better manage the risks associated with wholesale replacement. There’s certainly no shortage of horror stories about large-scale system replacement projects going off the rails, and the price tag of some of those projects is eye-watering - out of the price range of most organisations in New Zealand.

So what are the alternatives? The key to understanding what alternatives are available to you is a clear understanding of your ‘why’. Why do you need to do anything about your legacy application at all? Perhaps it’s about supportability - the application mostly does what you need from a business perspective but the software and hardware it runs on is no longer supported? Perhaps it’s about usability - maybe you need it to run on a mobile device but it’s been written to run on a desktop computer? Is it a workforce thing - are you struggling to find people who ‘code your language’? Whatever the reason, the good news is that you have a range of options available to you, including modularising, modernising, replatforming, extending and replacing. The picture below shows our ‘Modernisation on a Page’ view, explained in more detail below.

[Image: ‘Modernisation on a Page’]

For us the ‘why’ was centred around enabling our organisational transformation. The context in which we operate is changing rapidly and we need to ensure we have fit for future capabilities and platforms in place to navigate those changes.


Our legacy application is over a decade old and is a monolithic application made up of 11M+ lines of Java code. To get it future ready we needed to focus on first modularising the application as a lead-in to then modernising or replacing the various components.


We have invested a lot in building the right capabilities and culture within our teams to support the shift we need to make. Modularising is as much about changing the way our teams work as it is about technology changes such as microservices, APIs and changing code repository structures. Moving towards a Feature Driven Development process, and bringing along the right DevOps, Agile and CI/CD building blocks, represented a change to our working practices, processes, tools and the way we think about things.
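To make the modularising idea concrete, here's a minimal sketch (the names and data are invented, not our actual code): a first step is often to put an interface in front of a legacy module so callers stop depending on its internals, which means the implementation can later be swapped for a microservice or SaaS offering without touching the callers.

```java
// Hypothetical sketch: hiding a legacy module behind an interface so the
// implementation can later be swapped without changing the calling code.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

interface CustomerLookup {                  // the module's published contract
    String nameFor(String customerId);
}

// Today: a thin adapter over the existing in-process legacy code.
class LegacyCustomerLookup implements CustomerLookup {
    private final Map<String, String> legacyStore =
            new ConcurrentHashMap<>(Map.of("C-100", "Aroha Ltd"));

    public String nameFor(String customerId) {
        return legacyStore.getOrDefault(customerId, "unknown");
    }
}

public class ModuleBoundaryDemo {
    public static void main(String[] args) {
        // Tomorrow this could be an HTTP client to an extracted microservice;
        // callers only ever see the CustomerLookup interface.
        CustomerLookup lookup = new LegacyCustomerLookup();
        System.out.println(lookup.nameFor("C-100"));
    }
}
```

The design point is the boundary, not the implementation: once teams own contracts rather than shared internals, each module can change at its own pace.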

One of the key challenges facing organisations who move towards delivering online services is agility and speed to market. Customers don’t want to (and won’t) wait weeks or months for changes or fixes to your application. That may have been ‘ok’ in a traditional enterprise IT context but it won’t work when you shift to external customer delivery. You need to be able to respond to change in a timely manner and that takes investment in infrastructure and capability. For us modularisation is a critical enabler of the move towards a build pipeline model which allows us to better meet those challenges associated with being an online service delivery organisation.

The delivery pipelines also underpin our move towards a product-centric team structure, where each team can run its own product-based delivery at its own cadence.


As part of modernising the application we also had to make some changes to the underpinning infrastructure. Our shift towards being more evidence-based and intelligence-led as an organisation means we need to make some big changes at the data layer. Whilst we run enterprise grade databases today we know that over the coming few years the volume and velocity of data we deal with will increase significantly. We need to ensure we have a suitable data infrastructure that can not only support advanced analytics but over time allows us to get further into the realms of machine learning and AI.


We are also looking at what investments in business intelligence and data analytics capabilities we need to be making which will build on this modernised data layer.

All of our infrastructure is currently either virtualised (on-premise model) or cloud based. Our current batch-based/key-date-driven model means we have a few high capacity days a year - for example, around 100,000+ users visit the website in one day in January each year. We have traditionally capacity managed to those peak workloads but the hybrid model we’re adopting allows us more choice around where we run our workloads and the ability to burst during high peak usage times. Our shift towards being event-driven - rather than batch-based - will also help to alleviate some of those peaks over time.
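The batch vs event-driven distinction can be shown in a toy Java sketch (names and data invented, purely illustrative): the batch style concentrates all the work into one high-capacity run on a key date, while the event style handles each change as it happens, spreading the same work over time.

```java
// Illustrative only: a key-date batch concentrates load into one run,
// while event-driven handling spreads the same work out over time.
import java.util.ArrayList;
import java.util.List;

public class EventVsBatchDemo {
    // Batch style: every queued record is processed in one burst,
    // so capacity has to be sized for the peak.
    static int runNightlyBatch(List<String> queued) {
        int processed = 0;
        for (String rec : queued) {
            processed++;                    // all the work lands at once
        }
        return processed;
    }

    // Event style: each change is handled when it happens.
    static final List<String> audit = new ArrayList<>();

    static void onEvent(String event) {
        audit.add("handled:" + event);      // small units of work, spread out
    }

    public static void main(String[] args) {
        System.out.println(runNightlyBatch(List.of("a", "b", "c")));
        onEvent("customer-updated");
        System.out.println(audit);
    }
}
```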


Using a mix of virtualisation, containerisation, serverless and software as a service allows us maximum flexibility in relation to the platforms and infrastructure we use to deliver our products and services. The adoption of cloud and ‘as a service’ based offerings means your finance model is going to change - we are adopting a FinOps-based model to work through these changes and implement the right risk and cost control points.

We have made good progress on our modernisation journey - having delivered a major modularisation and upgrade release earlier in the year - and we are already starting to see the benefits of a more flexible, responsive architecture.

At the same time we are working on the development of business to business (B2B) and business to consumer (B2C) APIs, sorting out our Digital Identity Management (IDM) platform, replatforming legacy line of business applications, looking at our business intelligence & data analytics capabilities as well as shifting our operating model to be DevOps-esque.

Piece by piece - step by step - we are putting in place the building blocks which will support our organisational transformation into the digital age.

Pathways to the Cloud

Over the past couple of years we have helped and advised a range of organisations (both public and commercial) in relation to cloud adoption, largely based on our experiences in that space to date. At the recent AWS re:Invent there was a lot of chatter amongst execs about the approach to cloud adoption - unsurprisingly most organisations still seem to follow the ‘lift and shift’ approach to getting out into the cloud.

Broadly speaking there are two pathways to cloud - one is the lift & shift and the second is re-engineering for cloud. Each pathway comes with its own set of considerations as well as risks & benefits. We have experience with both approaches and I thought it might be useful to capture highlights from each approach here.

The lift and shift approach requires little up front work and can get you into the cloud faster. On the downside this pathway can require a fair amount of clean up work. Post shift you will most likely need to remediate things like network & security configurations to make best use of cloud environments & tools as well as a bunch of cost optimisation work to ensure you’re getting best bang for buck in terms of CPU, storage etc. costs. Looking at things like machine sizes/types, reserved vs non-reserved instances and the like requires someone who understands cloud infrastructure to optimise your application/infrastructure environment.
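As a back-of-envelope example of the reserved vs non-reserved question (the hourly rates below are made up, not real pricing), the decision often reduces to a simple break-even calculation: a reserved instance is billed around the clock, so it only wins once the machine is actually running more than a certain fraction of the time.

```java
// Toy break-even sketch with invented prices: reserved capacity is billed
// 24x7 at a discount, on-demand is billed only for hours actually used.
public class ReservedBreakEven {
    // Reserved wins once utilisation exceeds reserved/onDemand of the time.
    static double breakEvenUtilisation(double onDemandPerHour, double reservedPerHour) {
        return reservedPerHour / onDemandPerHour;
    }

    public static void main(String[] args) {
        double onDemand = 0.10;   // assumed on-demand rate, $/hour
        double reserved = 0.06;   // assumed effective reserved rate, $/hour
        System.out.printf("Reserve when the machine runs more than %.0f%% of the time%n",
                breakEvenUtilisation(onDemand, reserved) * 100);
    }
}
```

With these assumed rates the break-even is 60% utilisation - which is why an always-on core workload and a bursty peak-day workload usually deserve different purchasing models.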

Problems tend to start cropping up when organisations complete the first part of the lift and shift and then don’t do the follow up actions. You essentially end up with the worst of both worlds, and the benefits of neither. I think a lot of people who are disillusioned with cloud adoption are in this camp. Like I’ve said countless times - cloud adoption isn’t simply about someone else running your infrastructure and yet so many people still seem to get stuck there.

On the flipside a re-engineering for cloud approach requires more time and investment up front. In effect this involves converting the existing application/infrastructure to take full advantage of cloud based technologies such as serverless and ‘as a service’ components such as databases. This approach simplifies the migration and helps you realise the benefits associated with cloud adoption quicker. It’s also the more future-proof of the two approaches.

Either approach is a valid pathway to the cloud but you need to understand the implications of both pathways and select the one that makes sense for your organisation and strategy. Whichever way you approach it you do need to ensure you have a robust plan and sufficient funding/resources allocated to the effort.

We have kicked off the modernisation of our core line of business application (actually roughly 17 apps in one monolithic stack, thousands of lines of Java code, approaching 15 years old) and as part of that we are replatforming the database, implementing a container based infrastructure and converting the monolithic applications to microservices & APIs. This is squarely down the re-engineering pathway.
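As a rough sketch of the microservices & APIs direction (a hypothetical endpoint, using only the JDK's built-in HTTP server - a real service would add security, JSON handling, logging and so on), an extracted module ends up exposed behind a small HTTP API rather than being called in-process:

```java
// Hypothetical sketch: one extracted module exposed as a tiny HTTP API
// using the JDK's built-in com.sun.net.httpserver (illustrative only).
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class ModuleApiDemo {
    // Start the module's API on the given port (0 = pick a free port).
    public static HttpServer start(int port) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        server.createContext("/status", exchange -> {
            byte[] body = "ok".getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
        return server;
    }

    public static void main(String[] args) throws Exception {
        HttpServer server = start(0);
        System.out.println("module API listening on port " + server.getAddress().getPort());
        server.stop(0);
    }
}
```

Once a module sits behind an HTTP contract like this it can be deployed, scaled and released independently of the rest of the stack - which is the point of the conversion.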

We are doing all of that on a hybrid of public and private cloud but operating under a ‘cloud first’ principle. The plan is to modularise the application to create more choices (some of the modules might for example be replaced by SaaS offerings) before modernising components and eventually rebuilding the UI/UX elements.

Part of this modernisation will include a move towards a more DevOps-style operating model and adoption of continuous integration and continuous delivery (CI/CD). We are investing a lot in building capability in that space and helping people & teams transition from the current operating model and tools to the future operating model.

Modernisation of a legacy application is always inherently risky (and expensive) - you only need to look at other modernisation efforts like Kiwibank to see examples of that. Utilising cloud-based technologies helps us accelerate the change and trial/test options to find the best way through the modernisation effort.

We expect our modernisation efforts to take 18-24 months and they are an integral part of our wider organisational digital transformation efforts. We will continue to migrate other workloads, and build new applications, on public cloud infrastructure during that time.