
Application Workloads in the Cloud Will Be the Top Choice in 2023

Azul made a series of predictions for Java and technology in 2023. One of the predictions is “Cloud Will Be Deemed Optimal for More Than Half of All Application Workloads.” Azul Deputy CTO Simon Ritter builds on that premise in this blog post. 

Many enterprise IT users think of the cloud as just someone else’s data center to which they can lift-and-shift applications. Voilà! Cloud resiliency, elasticity, and the utility pricing model instantly reduce costs. 

The reality is somewhat different. Determining the best way to migrate an application from the data center to the cloud takes time, effort, and skill. Re-architecting an application to a microservice architecture is often the best approach, enabling the most heavily used parts of the application to scale independently of the rest. 


 “By 2027, 35% of application workloads will not be optimal or ready for cloud delivery, down from 55% in 2022.”

Gartner, 2023 Planning Guide for Cloud, Data Center and Edge Infrastructure, page 2, October 13, 2022 


Currently, 55% of applications are not considered optimal for deployment to the cloud (Gartner, 2023 Planning Guide for Cloud, Data Center and Edge Infrastructure, page 2, October 13, 2022). If they were deployed as is, they would cost the same or even more than when running in a private data center. This will change as more enterprises understand the reality of cloud deployment and adapt their applications to be cloud optimal.


The illusion of automatic cost savings in the cloud 

With very few exceptions, every enterprise of any size is investigating or already in the process of moving its application workloads to the cloud.   

Cloud computing is the logical progression of how best to provision computing resources to run mission-critical enterprise applications. Electricity generation moved from localized, incompatible suppliers to centralized, standardized ones, and computing is doing the same. The appeal of cloud computing is obvious: a utility-based pricing model, only paying for what you use. There is no capital expenditure to provision new services or operating expenses for maintaining a data center. 

When explained like this, it seems obvious that moving an application from the corporate data center to a cloud provider will immediately result in cost savings.  

Except that is most definitely not the case. 


This situation has been dubbed the trillion-dollar cloud paradox by the venture capital firm Andreessen Horowitz. Essentially, it observes that running applications in the cloud can cost more than leaving them in the data center. 

How can this be? 




Many people assume that the cloud can be viewed simply as someone else’s data center.  From that perspective, migrating an application takes a lift-and-shift approach. You simply transfer your application files from your data center to a cloud instance. After changing the necessary network details, your application continues precisely as it did before. 

Applications like this have most likely been developed using a monolithic architecture, and all the functionality resides in a single executable, which runs as a single process. 

The need for microservices 

To take advantage of the power of the cloud, it is necessary to re-architect the application to become cloud native. This changes the fundamental architecture to one that uses microservices, breaking the monolith into discrete, loosely coupled yet highly cohesive services. 

Emily Jiang, a cloud-native advocate at IBM, describes seven critical features of this architecture:

7 Critical Features of a Microservice Architecture

  1. The application consists of multiple microservices communicating using REST 
  2. The services are configurable 
  3. The services are fault tolerant (through redundancy and detected failure and restart) 
  4. The services can be discovered and orchestrated (via frameworks like Kubernetes) 
  5. The services are secure 
  6. The services provide traceability and observability outside of their core functionality 
  7. The services can communicate with the cloud infrastructure 

Using this approach, the application becomes much more flexible in how it can operate and scale. However, if one service is being used by many others, it can become a bottleneck that degrades the overall application performance. In a monolithic architecture, the only solution is to increase the resources provided for the whole application (leading to significantly increased costs).  In a microservice architecture, new instances of the heavily used service can be started, and the bottleneck can be eliminated through load balancing of requests. Making services fully dynamic allows for instances to be started and stopped as required to address the current load. In this case, only paying for the computing resources you need becomes a reality. 
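The load-balancing idea can be sketched as a simple round-robin distributor. This is a hypothetical, in-process illustration (real deployments use a cloud load balancer or a Kubernetes Service): requests to a heavily used service are spread across however many instances are currently running, so starting another instance under load immediately relieves the bottleneck.

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical round-robin balancer over the currently running service instances.
public class RoundRobinBalancer {
    private final List<String> instances; // e.g. host:port of each running instance
    private final AtomicInteger next = new AtomicInteger();

    public RoundRobinBalancer(List<String> instances) {
        this.instances = List.copyOf(instances);
    }

    // Pick the next instance in rotation; thread-safe via the atomic counter.
    public String pick() {
        // Math.floorMod keeps the index valid even after the counter wraps around.
        return instances.get(Math.floorMod(next.getAndIncrement(), instances.size()));
    }
}
```

The key point is that the list of instances can grow or shrink with demand, which is exactly what makes pay-for-what-you-use pricing real.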

However, this is not the only consideration when looking to reduce costs as far as possible. 

Lower costs through a high-performance Java runtime 

Even with a well-engineered architecture, microservices may require more resources due to limitations in other parts of the system. Java is a ubiquitous platform for developing and deploying services, often using popular frameworks like Spring Boot. More complex services can take advantage of Java-based infrastructure components like Kafka, Cassandra, and Lucene. The Java Virtual Machine is very powerful, delivering high performance and internet-level scalability. Significant cost reductions can be achieved, not by changing the code involved, but by deploying a higher-performing Java runtime. 

One example of this is Azul Platform Prime. It delivers lower latency through advanced pauseless garbage collection and higher throughput through more heavily optimized code from the Falcon just-in-time compiler. This frequently means fewer nodes in clusters and smaller cloud instances for each node. 
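One way to compare runtimes on your own workload, regardless of vendor, is to measure how much time the JVM spends in garbage collection using the standard `java.lang.management` MXBeans. This is a sketch of that measurement, not a benchmark harness; collector names and results will differ between JVMs and GC configurations.

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

// Sums the cumulative time this JVM has spent in GC across all collectors.
// Run the same load on each candidate runtime and compare the totals.
public class GcTimeProbe {
    public static long totalGcMillis() {
        long total = 0;
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            long t = gc.getCollectionTime(); // -1 if undefined for this collector
            if (t > 0) {
                total += t;
            }
        }
        return total;
    }

    public static void main(String[] args) {
        System.out.println("Cumulative GC time: " + totalGcMillis() + " ms");
    }
}
```

Pair a measurement like this with application-level latency percentiles: GC time is only a proxy, and pause behavior matters more than totals for tail latency.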

Currently, over half of enterprise applications are not considered optimal for cloud deployment because the changes outlined above have not yet been made. Over the next 12 months that will change, and I predict that more than half of all enterprise applications will be ready for deployment to the cloud. 

If you’re looking to deploy JVM-based services to the cloud, why not try Azul Platform Prime and see how it can save you money and solve the cloud paradox? 
