Our Modernisation Journey

For a long time the default response to an aging legacy core business application was wholesale replacement - those were the golden days of the ERP and the ‘off the shelf’ package solution.

More recently organisations have started to think about what they could do differently to leverage the investment they have made in their ‘legacy’ applications and how they can better manage the risks associated with wholesale replacement. There’s certainly no shortage of horror stories about large-scale system replacement projects going off the rails, and the price tag of some of those projects is eye-watering - out of the price range of most organisations in New Zealand.

So what are the alternatives? The key to understanding what alternatives are available to you is a clear understanding of your ‘why’. Why do you need to do anything about your legacy application at all? Perhaps it’s about supportability - the application mostly does what you need from a business perspective but the software and hardware it runs on is no longer supported? Perhaps it’s about usability - maybe you need it to run on a mobile device but it’s been written to run on a desktop computer? Is it a workforce thing - are you struggling to find people who ‘code your language’? Whatever the reason, the good news is that you have a range of options available to you, including modularising, modernising, replatforming, extending and replacing. The picture below shows our ‘Modernisation on a Page’ view, explained in more detail below.

image

For us the ‘why’ was centred around enabling our organisational transformation. The context in which we operate is changing rapidly and we need to ensure we have fit for future capabilities and platforms in place to navigate those changes.

image

Our legacy application is over a decade old and is a monolithic application made up of 11M+ lines of Java code. To get it future-ready we needed to focus first on modularising the application as a lead-in to then modernising or replacing the various components.

image

We have invested a lot in building the right capabilities and culture within our teams to support the shift we need to make. Modularising is as much about changing the way our teams work as it is about technology changes such as microservices, APIs and code repository structures. Moving towards a Feature Driven Development process, and bringing along the right DevOps, Agile and CI/CD building blocks, represented a change to our working practices, processes, tools and the way we think about things.
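The first modularisation step - carving a component out of the monolith behind a clean interface - can be sketched roughly like this. The names (`CustomerLookup`, `LegacyCustomerStore`) are illustrative only, not taken from our codebase:

```java
import java.util.Map;
import java.util.Optional;

public class ModularisationSketch {
    // The new module boundary: callers depend on this interface,
    // never on the monolith's internals.
    interface CustomerLookup {
        Optional<String> findName(String customerId);
    }

    // First step: an adapter wrapping the existing legacy code path.
    // Later it can be swapped for a call to a standalone service or API
    // without changing any caller.
    static class LegacyCustomerStore implements CustomerLookup {
        private final Map<String, String> records;

        LegacyCustomerStore(Map<String, String> records) {
            this.records = records;
        }

        public Optional<String> findName(String customerId) {
            return Optional.ofNullable(records.get(customerId));
        }
    }

    public static void main(String[] args) {
        CustomerLookup lookup = new LegacyCustomerStore(Map.of("c1", "Aroha"));
        System.out.println(lookup.findName("c1").orElse("unknown"));
    }
}
```

The value of the adapter is that consumers only ever see the interface, so the implementation behind it can be modernised or replaced piece by piece.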

One of the key challenges facing organisations who move towards delivering online services is agility and speed to market. Customers don’t want to (and won’t) wait weeks or months for changes or fixes to your application. That may have been ‘ok’ in a traditional enterprise IT context but it won’t work when you shift to external customer delivery. You need to be able to respond to change in a timely manner, and that takes investment in infrastructure and capability. For us modularisation is a critical enabler of the move towards a build pipeline model which allows us to better meet the challenges associated with being an online service delivery organisation.

The delivery pipelines also underpin our move towards a product-centric team structure, where each team can run its own product-based delivery at its own cadence.

image

As part of modernising the application we also had to make some changes to the underpinning infrastructure. Our shift towards being more evidence-based and intelligence-led as an organisation means we need to make some big changes at the data layer. Whilst we run enterprise-grade databases today, we know that over the coming few years the volume and velocity of data we deal with will increase significantly. We need to ensure we have a suitable data infrastructure that can not only support advanced analytics but, over time, take us further into the realms of machine learning and AI.

image

We are also looking at what investments in business intelligence and data analytics capabilities we need to be making which will build on this modernised data layer.

All of our infrastructure is currently either virtualised (on-premise model) or cloud based. Our current batch-based/key-date-driven model means we have a few high-capacity days a year - for example, around 100,000+ users visit the website in a single day in January each year. We have traditionally capacity-managed to those peak workloads, but the hybrid model we’re adopting allows us more choice around where we run our workloads and the ability to burst during high peak usage times. Our shift towards being event driven - rather than batch based - will also help to alleviate some of those peaks over time.
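As a rough illustration of the batch-to-event shift, here is a minimal in-process sketch - the tiny event bus stands in for a real queue or stream, and the event and class names are made up for the example. Instead of accumulating a year of work for one peak-day batch run, each change is handled as it arrives:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

public class EventDrivenSketch {
    // An illustrative domain event; in practice this would arrive
    // from a message queue or event stream.
    record IncomeUpdated(String customerId, double amount) {}

    // A tiny in-process stand-in for a real event bus.
    static class EventBus {
        private final List<Consumer<IncomeUpdated>> handlers = new ArrayList<>();

        void subscribe(Consumer<IncomeUpdated> handler) {
            handlers.add(handler);
        }

        void publish(IncomeUpdated event) {
            handlers.forEach(h -> h.accept(event));
        }
    }

    public static void main(String[] args) {
        EventBus bus = new EventBus();
        // Work is spread across the year as events arrive, rather than
        // queued up for a single high-capacity batch window.
        bus.subscribe(e -> System.out.println("recalculated for " + e.customerId()));
        bus.publish(new IncomeUpdated("c1", 1200.0));
    }
}
```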

image

Using a mix of virtualisation, containerisation, serverless and software as a service allows us maximum flexibility in relation to the platforms and infrastructure we use to deliver our products and services. The adoption of cloud and ‘as a service’ based offerings means your finance model is going to change - we are adopting a FinOps-based model to work through these changes and implement the right risk and cost control points.

We have made good progress on our modernisation journey - having delivered a major modularisation and upgrade release earlier in the year - and we are already starting to see the benefits of a more flexible, responsive architecture.

At the same time we are working on the development of business to business (B2B) and business to consumer (B2C) APIs, sorting out our Digital Identity Management (IDM) platform, replatforming legacy line of business applications, looking at our business intelligence and data analytics capabilities, as well as shifting our operating model to be DevOps-esque.

Piece by piece - step by step - we are putting in place the building blocks which will support our organisational transformation into the digital age.

Reflections on ReInvent 2017

This year’s AWS ReInvent was bigger than ever before - the conference spanned five different hotels/conference centres along the Las Vegas strip, so getting your daily allocation of steps in was no challenge. Roughly 43,000 people attended the event this year! If you’ve attended ReInvent in previous years you would know that there is a fair amount of walking involved - this year really kicked it up a notch, with people shredding shoes in a matter of days.

Physical exercise and footwear aside, this year’s event provided what seemed like an endless list of new services in almost every category. CEO Andy Jassy kicked things off with his keynote presentation, which riffled through a bunch of new service announcements. The key ones for me were:

  • Amazon Elastic Container Service for Kubernetes (EKS), a managed Kubernetes service running on top of AWS that simplifies running and managing containers.
  • Aurora Serverless - on-demand, auto-scaling Amazon Aurora. This service eliminates the need to provision instances, scaling up/down and starting up and shutting down automatically. It was very clear that AWS is keen to liberate customers from the tyranny of their existing database vendors (I’ll leave you to guess who they mean…)
  • In the Machine Learning space Andy introduced Amazon SageMaker (which leverages the open-source Jupyter project). SageMaker provides built-in, high-performance algorithms, but doesn’t prevent users from bringing their own algorithms and frameworks. SageMaker also greatly simplifies training and tuning, and helps automate the deployment/operation of machine learning in production.
  • DeepLens, the world’s first HD video camera with built-in machine learning support. This technology is incredible - I attended the workshop session and walked away with a DeepLens unit, so expect more detail on this front in the coming few weeks/months.
  • Amazon Translate, which does real-time language translation as well as batch translation.

Andy’s keynote focused on what ‘builders’ wanted and how they would build the organisations and societies of the future. It’s very clear that AWS is trying to take the heavy lifting out of technology, making it simpler for anyone to be a builder.

It’s very clear that AWS is sticking to its ‘customer obsessed’ mantra, not only in terms of how it delivers services to its customers but also in the types/range of services it’s bringing to market for AWS users to utilise to improve the experience of their own customers. Investments in voice technologies, AI and machine learning are all geared towards re-inventing how organisations interact with their customers.

In contrast, Werner Vogels’ keynote was light on service announcements and more focused on 21st century architectures and how technology will shape (and be shaped by) the world in the coming 5-10 years. Werner’s presentation also showcased a number of female techies doing some impressive things in their respective organisations/industries - pretty inspiring stuff.

Werner did announce a couple of key services which stood out for me:

  • Alexa for Business is a fully managed service for Alexa voice-controlled devices at work.
  • AWS Cloud9, a cloud-based IDE which AWS acquired last year. Cloud9 is a clean and feature-rich IDE, but the ‘killer app’ is collaboration. You can invite other AWS users to join your project for pair programming sessions with a nice little chat box to help you work through bugs (it comes with a full debugger for solo projects as well).
  • Lambda language support for .NET and Go, meeting a long-standing feature request.

Serverless architectures and services were definitely a headline topic this year. A number of the presentations included case studies of AWS customers leveraging serverless technologies to deliver on-demand applications and services. This is consistent with the AWS strategy of ‘business rules being the only thing you will need to code’ in the future.
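A hedged sketch of that ‘business rules only’ idea: the function below holds the rule, and in a serverless model the provisioning, scaling and request routing would all be the platform’s concern. It mirrors the shape of a Lambda handler but deliberately uses no AWS SDK so it runs standalone, and the eligibility rule is invented purely for illustration:

```java
public class ServerlessSketch {
    // The only code the team maintains: a pure business rule.
    // In a real serverless setup the platform would invoke this
    // per request; everything around it is managed infrastructure.
    static String handleRequest(int age) {
        return age >= 65 ? "eligible" : "not-eligible";
    }

    public static void main(String[] args) {
        System.out.println(handleRequest(70));
    }
}
```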

One of my special interest categories this year was around artificial intelligence and machine learning. It’s clear that AI/ML will bring about unprecedented workforce/job changes in the coming decade. I think a lot of people assume that AI is coming when in reality it’s already here and getting better every day.

A number of services announced were intended to make AI/ML accessible to a wider user base - to take it out of labs and into the hands of people building front-line products and services. These AI/ML developments - paired with things like DeepLens - will pave the way for potentially changing the way we interact with technology in every aspect of our lives.

Cloud adoption still seems to be variable - based on the people I talked to and the round-table sessions I attended. Many organisations are still pursuing the ‘lift and shift’ approach with mixed benefits. There are organisations re-engineering their processes and applications as part of the move to cloud, but they are still the exception. Worryingly, I was actually part of a couple of round-table sessions where some people seemed to be advocating for the on-premise model as a better option.

On a global scale what we are doing around cloud adoption in New Zealand still seems to be on par with what leaders in other parts of the world are doing.

In terms of logistics, you could perhaps argue that ReInvent got too large this year. The travel times between venues were high, and I know lots of people missed sessions they wanted to attend due to travel times or popular sessions not offering any walk-up spots. From what I remember, the 2016 event (which was all at the Venetian) seemed to flow more smoothly with fewer frustrations from attendees. Perhaps it’s time to split ReInvent into two events - one in the US and one in Europe?