Comparison Google Cloud Run (GCR) vs Azure Container Instances (ACI) vs AWS ECS with Fargate — With Pricing

Lonare
9 min read · Mar 21, 2022


Photo by Ian Taylor on Unsplash

Should we use managed containers as a service or not?

That must be the most crucial question we should try to answer. Unfortunately, it is hard to provide a universal answer, since the solutions differ significantly from one provider to another.

Currently, in March 2022, managed containers as a service (CaaS) can be described as a wild west of IT solutions, ranging from amazing to useless.

Before we attempt to answer the big question, let's go through some of the things we learned by exploring Google Cloud Run, AWS ECS with Fargate, and Azure Container Instances. We can compare those three from different angles.

One of those can be simplicity.

After all, ease of use is one of the most essential benefits of serverless computing. It is supposed to allow engineers to provide code or binaries in whichever form with the reasonable expectation that the platform of choice will do most of the rest of the work.

From the simplicity perspective, both Google Cloud Run and Azure Container Instances are exceptional.

They allow us to deploy our container images with almost no initial setup. Google needs only a project, Azure requires only a resource group.

On the other hand, AWS requires over 20 different bits and pieces of infrastructure to be assembled before we can even start thinking about deploying anything.

Even after all the infrastructure is set up, we still need to create a task definition, a service, and a container definition. If simplicity is what you're looking for, ECS is not it.
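To illustrate that verbosity, here is roughly what a minimal Fargate task definition looks like. Every name, the account ID, and the image below are made-up placeholders, and this is only one piece: a cluster, VPC networking, security groups, IAM roles, and a service definition still have to exist on top of it.

```json
{
  "family": "my-app",
  "requiresCompatibilities": ["FARGATE"],
  "networkMode": "awsvpc",
  "cpu": "256",
  "memory": "512",
  "executionRoleArn": "arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
  "containerDefinitions": [
    {
      "name": "my-app",
      "image": "myrepo/my-app:1.0.0",
      "portMappings": [{ "containerPort": 8080 }]
    }
  ]
}
```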

AWS ECS — It is horrifyingly complicated.

And it is far from the "give us a container image, and we will take care of the rest" type of approach we are all looking for when switching to serverless deployments.

Surprisingly, a company that provides such an amazing Functions as a Service solution, Lambda, did not do something similar with ECS.

If AWS took the same approach with ECS as it did with Lambda, it would likely be the winner, but it didn't. So I'm going to give it a huge negative point.

From the simplicity of setup and deployment perspective, Azure and Google are the clear winners.

Now that we have mentioned infrastructure in the context of the initial setup, we might want to take that as a criterion as well.

There is no infrastructure for us to manage when using CaaS in Google Cloud or Azure.

They take care of all the details. AWS, on the other hand, forces us to create a full-blown cluster.

That alone can disqualify AWS ECS with Fargate from being considered a serverless solution.

I'm not even sure whether we could qualify it as containers as a service. As a matter of fact, I would prefer using Elastic Kubernetes Service (EKS); it is just as easy, if not easier, than ECS, and at least it adheres to widely accepted standards and does not lock us into a suboptimal proprietary solution from which there is no escape.

How about scalability? Do our applications scale when deployed to a managed containers as a service solution?

The answer to that question changes the rhythm of the story.

Google Cloud Run is scalable by design. It is based on Knative, a Kubernetes-based platform designed for serverless workloads.

It scales without us even specifying anything. Unless we override the default behaviour, it will create a replica of our application for every 100 concurrent requests.

If there are no requests, no replicas will run. If traffic jumps to 300 concurrent requests, it scales to three replicas. It queues requests when none of the replicas can handle them, and scales up and down to accommodate fluctuations in traffic.
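The default behaviour described above boils down to simple arithmetic. Here is a rough sketch of it; the concurrency target of 100 comes from the text, while the real autoscaler also factors in CPU utilisation, startup time, and configured limits:

```python
import math

def desired_replicas(concurrent_requests: int, concurrency_target: int = 100) -> int:
    """Concurrency-based autoscaling sketch: one replica per
    `concurrency_target` in-flight requests, scaling down to zero when idle."""
    if concurrent_requests <= 0:
        return 0
    return math.ceil(concurrent_requests / concurrency_target)

print(desired_replicas(0))    # no traffic: scale to zero
print(desired_replicas(300))  # 300 concurrent requests: three replicas
```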

All that happens without us providing any specific information. It comes with sane defaults while still providing the ability to fine-tune the behaviour to match our particular needs. Applications deployed to ECS are scalable as well, but it is not easy.

Scaling applications deployed to ECS is complicated and limiting. And even if you can overlook those issues, it does not scale to zero replicas: at least one replica of our application needs to run at all times, since there is no built-in mechanism to queue requests and spin up new replicas on demand.

From that perspective, scaling applications in ECS is not what we would expect from serverless computing. It is similar to what we would get from the HorizontalPodAutoscaler in Kubernetes: it can go up and down, but never to zero replicas.
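For comparison, the HorizontalPodAutoscaler's core formula can be sketched as follows; the parameter names mirror the HPA spec, and the key point is the floor of one replica:

```python
import math

def hpa_desired_replicas(current_replicas: int, current_metric: float,
                         target_metric: float, min_replicas: int = 1,
                         max_replicas: int = 10) -> int:
    """Sketch of the Kubernetes HPA formula:
    desired = ceil(currentReplicas * currentMetric / targetMetric),
    clamped between minReplicas and maxReplicas. Because minReplicas
    is at least 1, it never scales to zero."""
    desired = math.ceil(current_replicas * current_metric / target_metric)
    return max(min_replicas, min(max_replicas, desired))

print(hpa_desired_replicas(2, 200, 100))  # load doubled: 4 replicas
print(hpa_desired_replicas(2, 0, 100))    # no load: still 1 replica, never 0
```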

Given that, there is a scaling mechanism of sorts, but it cannot go down to zero replicas, and it is limited in what it can actually do.

I can only say that ECS only partially fulfils the scalability needs of our applications, at least in the context of serverless computing.

How about Azure Container Instances? Unlike Google Cloud Run and ECS, it does not use a scheduler. There is no scaling of any kind.

All we can do is run single-replica containers isolated from each other.

That alone means that Azure Container Instances cannot be used in production for anything but small businesses. Maybe even in those cases, it is still not a good idea to use ACI for production workloads.

The only use case I can imagine, and that would require a lot of imagination, would be for situations in which your application cannot scale.

If you have one of those old, often stateful applications that can run only in single-replica mode, you might consider Azure Container Instances. For anything else, the inability to scale is a showstopper.

Simply put, Azure Container Instances provide a way to run Docker containers in the cloud. There is not much more to it. And we know that Docker alone is not enough for anything but development purposes; I would say that even development with Docker alone is not a good idea.

But that would open a discussion that I want to leave for another time. Another potentially important criterion is the level of lock-in. ECS, with or without Fargate, is fully proprietary and forces us to rely entirely on AWS.

The amount of resources we need to create and the format for writing application definitions ensure that we are locked into AWS. If you choose to use it, you will not be able to move anywhere else, at least not easily.

That does not necessarily mean that the benefits do not outweigh the potential cost of being locked in, but rather that we need to be aware of it when deciding whether to use it or not.

The issue with ECS is not lock-in itself. There is nothing wrong with using proprietary solutions that solve problems in a better way than alternatives. The problem is that ECS is, by no means, any better than Kubernetes.

As a matter of fact, it is the reverse. The issue with ECS lock-in is that you are locked into a service that is not as good as the more open counterpart provided by the same company: AWS EKS.

That does not mean that EKS is the best managed Kubernetes service. It is not. But within the AWS ecosystem, it is probably the better solution.

Azure Container Instances are also fully proprietary, but given that all the investment is in creating container images and running a single command, you will not be locked in.

The investment is very low.

So if you choose to switch to another solution or a different provider, you should be able to do that with relative ease.

Google Cloud Run is based on Kubernetes, which is open source and an open standard, and Google is only providing a layer on top of it.

You can even deploy to it using Knative definitions, which can be installed in any Kubernetes cluster.
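As an example, a minimal Knative Service manifest looks something like the sketch below (the image path is a placeholder). The same kind of definition can be applied to Cloud Run or to any cluster with Knative Serving installed:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: my-app
spec:
  template:
    metadata:
      annotations:
        # concurrency target per replica, as described in the text
        autoscaling.knative.dev/target: "100"
    spec:
      containers:
        - image: gcr.io/my-project/my-app:1.0.0
```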

From the lock-in perspective, there is close to none.

How about high availability?

Google Cloud Run was the only solution that did not produce 100% availability in our tests. So far, that is the first negative point it gets.

It is a potentially huge downside. That does not mean that it is not highly available, but rather that it tends to produce only a few nines after the decimal, like 99.99%. That is not a bad result by any means.

If we did more serious testing, over a more extended period and with a higher number of requests, we would likely see the other solutions also drop below 100% availability.

Nevertheless, with a small sample, Azure Container instances and AWS ECS did produce better results than Google Cloud run.

And that is not something we should ignore. Azure Container Instances, on the other hand, can handle only limited traffic.

The inability to scale horizontally inevitably leads to failure to be highly available.

Production Readiness:

We did not experience that in our tests with siege, mostly because a single replica was able to handle 1,000 concurrent requests.

If we increased the load, it would start collapsing once it reached the limit one replica can handle.

On the other hand, ECS provides the highest availability, as long as we set up horizontal scaling.

Finally, the most important question to answer is whether any of those services is production-ready.

We already saw that Azure Container Instances should not be used in production, except for very specific use cases.

Google Cloud Run and AWS ECS, on the other hand, are production-ready.

Both provide all the features you might need when running production workloads. The significant difference is that ECS has existed for much longer, while Google Cloud Run is a relatively new service, at least currently.

Nevertheless, it is based on Google Kubernetes Engine (GKE), which is considered the most mature and stable managed Kubernetes service we can use today.

Even though Google Cloud Run is only a layer on top of GKE, we can safely assume that it is stable enough.

The bigger potential problem is in Knative itself. It is a relatively new project that has not yet reached its first GA release.

At the time of this recording, the latest release is 0.16.0. Nevertheless, major software vendors are behind it, and even though it might not be battle-tested, it is getting fairly close to being the preferable way to run serverless computing in Kubernetes.

To summarize, Azure Container Instances are not, and I repeat, are not and never will be production-ready.

AWS ECS is fully there, and Google Cloud Run is very close to being production-ready.

Finally, can any of these services be qualified as serverless? To answer that question, let's define the features we expect from managed serverless computing. It is supposed to remove the need to manage infrastructure, or at least to simplify it greatly.

It should provide scalability and high availability, and it should charge us for what our users use, while making sure that our apps are running only when needed.

We can summarize those as follows: no need to manage infrastructure, out-of-the-box scalability and high availability, and a "pay for what your users use" model.

If we take those three as the base evaluation of whether something is serverless or not, we can easily discard both Azure Container Instances and AWS ECS with Fargate. Azure Container Instances do not provide out-of-the-box scalability and high availability.

As a matter of fact, it has no scalability of any kind, and therefore it cannot be highly available. On top of that, we do not pay for what our users use: since it cannot scale to zero replicas, our app is always running, no matter whether someone is consuming it.

Billing:

Our bill will be based on the amount of pre-assigned resources like memory and CPU. The only major serverless computing feature that it does provide is freedom from infrastructure management.
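The billing difference is easy to see with a little arithmetic. The sketch below compares "billed for pre-assigned resources around the clock" against "billed only while serving"; the per-second prices are made-up placeholders, not real list prices from any provider:

```python
# Placeholder per-second prices (NOT real provider prices).
VCPU_PER_SEC = 0.0000135  # price per vCPU-second
GB_PER_SEC = 0.0000015    # price per GB-second of memory

def always_on_cost(vcpu: float, memory_gb: float, hours: float) -> float:
    """One replica billed for every second it exists, busy or idle
    (the ACI / minimum-one-replica ECS situation)."""
    seconds = hours * 3600
    return seconds * (vcpu * VCPU_PER_SEC + memory_gb * GB_PER_SEC)

def scale_to_zero_cost(vcpu: float, memory_gb: float, busy_hours: float) -> float:
    """Billed only for the time replicas are actually serving requests
    (the scale-to-zero situation)."""
    return always_on_cost(vcpu, memory_gb, busy_hours)

MONTH_HOURS = 730
print(f"Always on:            ${always_on_cost(1, 2, MONTH_HOURS):.2f}")
print(f"Busy 10% of the time: ${scale_to_zero_cost(1, 2, MONTH_HOURS * 0.1):.2f}")
```

With these placeholder numbers, an app that is busy only 10% of the time costs ten times less under a scale-to-zero model, which is the whole point of the "pay for what your users use" criterion.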

AWS ECS with Fargate does provide some sort of scalability. It's not necessarily an out-of-the-box experience, but it is there.

Nevertheless, it fails to abstract infrastructure management, and it suffers from the same problem as ACI where billing is concerned. Given that we cannot scale our applications to zero replicas when they are not used, we have to pay for the resources they consume independently of our users' needs. Google Cloud Run is, by all accounts, a serverless implementation of containers as a service.

It removes the need to manage infrastructure. It provides horizontal scaling out of the box while still allowing us to fine-tune the behaviour, and it scales to zero when not in use.

So it does adhere to the "pay for what your users use" model. Google Cloud Run is, without doubt, the best of the three.

Before we jump into my personal thoughts about managed CaaS, let me summarize all the findings.

But here we go:
