The CTO Advisor


Google Anthos Migrate vs. Complexity

At what point do you have to worry about snowflakes becoming a snowstorm? That’s the challenge presented to Architectural Review Boards in most enterprises: at what point is a solution worth the added complexity? A discussion among career IT professionals took place on Twitter around Google’s Anthos Migrate announcement.

Google Anthos Migrate

Google puts forth the idea that enterprises take a two-step approach to application modernization. Step one is to migrate applications to either Google Compute Engine (GCE) VM-based compute or Google Kubernetes Engine (GKE) container-based compute. Step two is to redevelop the app to leverage cloud-native services.

At Google Cloud Next, Google moved GKE into the spotlight for step one of the application modernization journey. Anthos Migrate enables an enterprise to abstract the application away from the operating system. System administrators simply point the tool at an application running in a Linux system, and Anthos Migrate moves the application into a container. The container then runs on GKE or any other Kubernetes-based cloud service.
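The end state of such a migration looks, in broad strokes, like any other Kubernetes workload. As a rough sketch (the app name, image path, and labels below are hypothetical illustrations, not actual Anthos Migrate output), a lifted-and-shifted app might land in a Deployment like this:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: legacy-crm            # hypothetical app name
spec:
  replicas: 1                 # stateful legacy apps often start at a single replica
  selector:
    matchLabels:
      app: legacy-crm
  template:
    metadata:
      labels:
        app: legacy-crm
    spec:
      containers:
      - name: legacy-crm
        # image produced by extracting the app and its dependencies from the VM
        image: gcr.io/example-project/legacy-crm:migrated
        ports:
        - containerPort: 8080
```

The point of the sketch: once the app is in a container, Kubernetes (GKE or otherwise) treats it like any other workload, which is both the appeal and, as discussed below, the source of new operational questions.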


While the announcement was new, the concept behind Google Anthos Migrate is not. Docker has championed application modernization via this method for a few years now. So, why haven’t enterprises embraced the approach for modernizing applications?

Scale Breaks Everything

If an organization has a few dozen applications running in Linux, Anthos Migrate may very well be a sound method to migrate applications first to containers and then to a cloud-native design. However, if an organization has thousands of applications spread across both Windows and Linux, then Anthos Migrate doesn’t look as appealing.

Technology is only one leg of the three-legged stool of enterprise complexity; people and process apply equally. Simplified patch management is one of the stated benefits of moving applications directly from VMs to containers. On the surface, this is true. Once you’ve moved the application and its dependencies from an OS into a container, you only have the container host’s OS to manage.

However, you still have the underlying dependencies to manage within the container image. If the version of the LAMP stack used in a container image has a vulnerability, then you must patch the image. It may very well be easier to update LAMP in a container image than in a full OS. Still, a few operational questions jump to mind.
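To make the patching comparison concrete, here is a minimal, hypothetical sketch. If the LAMP pieces live in an image built from a Dockerfile like the one below, patching means bumping the base image and rebuilding, rather than running package updates across a fleet of VMs (the `php:8.2-apache` base is a real Docker Hub image, but the app paths and registry names are illustrative):

```dockerfile
# Hypothetical image for a migrated LAMP app
FROM php:8.2-apache

# Application code and config copied into the image at build time
COPY ./app /var/www/html
COPY ./php.ini /usr/local/etc/php/

# Patching workflow: when the base image ships a CVE fix, rebuild against
# the updated tag and redeploy, e.g.:
#   docker build -t registry.example.com/legacy-app:2024-06-patch .
#   docker push registry.example.com/legacy-app:2024-06-patch
EXPOSE 80
```

The rebuild-and-redeploy loop is simpler per image, but it must be repeated for every image that embeds the vulnerable stack, which is exactly where the questions below come in.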

  1. How do you maintain an inventory for versioning of LAMP running across your enterprise?

  2. How do you track patch and change management?

  3. Is a shared kernel a less challenging problem than patch management?

  4. How does the conversation with auditors change?
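The first two questions are, at their core, inventory problems. As a minimal sketch of what answering them requires (the image names and version data below are invented for illustration), an inventory check might compare the stack version baked into each deployed image against a known-patched minimum:

```python
# Sketch: flag container images whose bundled PHP version is below a
# known-patched minimum. Image names and versions are illustrative.

def parse_version(v: str) -> tuple[int, ...]:
    """Turn a dotted version string like '8.2.7' into (8, 2, 7)."""
    return tuple(int(part) for part in v.split("."))

def unpatched_images(inventory: dict[str, str], minimum: str) -> list[str]:
    """Return images whose PHP version is older than `minimum`."""
    floor = parse_version(minimum)
    return [image for image, version in inventory.items()
            if parse_version(version) < floor]

# Hypothetical inventory: image -> PHP version found inside the image
inventory = {
    "registry.example.com/crm:migrated": "8.1.2",
    "registry.example.com/billing:migrated": "8.2.7",
    "registry.example.com/intranet:migrated": "7.4.33",
}

print(unpatched_images(inventory, "8.2.0"))
# -> ['registry.example.com/crm:migrated', 'registry.example.com/intranet:migrated']
```

The comparison itself is trivial; the hard part the questions point at is keeping the `inventory` mapping accurate across thousands of images, which is precisely the operational burden that doesn’t disappear when VMs become containers.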

Customers must answer all of these questions as they move to cloud-native platforms. Google Anthos Migrate doesn’t present a new set of questions in that regard. The problem is operational complexity. At the risk of oversimplifying the Anthos Migrate environment, the application landscape includes virtual machines, VMs converted to containers, and cloud-native applications.

Is there enough value for the added layer of complexity?

Final Take

There’s plenty of appeal to the idea of moving legacy applications to containers, especially if your strategy is to lift and shift to the cloud. Don’t underestimate the value of cost control: containerizing applications enables more granular units of cost inside cloud providers. Netflix noted this as one of the advantages of its Titus platform.

Some may argue that moving a traditional app to a container isn’t modernization so much as a shift in operations. One could say the established processes around patch management offer operational stability compared with the unknowns of container image management in this multi-step cloud migration approach.

The final recommendation is to start the discussion with a vision of the to-be operational state. If the desire is to end with containerized applications, what is the best functional path for your organization? If the end state is cloud-native, based on a combination of cloud-native services and serverless applications, does moving the apps from VMs to containers add value, or just complexity?