This page is part of my digital garden.

This page might be unfinished or have typos. These pages are meant as part of a public living notebook to be edited over time. For more, visit the page explaining the concept of a digital garden.

Amazon Web Services (AWS)

The original “Cloud” provider and current market leader.

Serverless

“Serverless” is the marketing term for services which are fully managed and scale automatically. The primary example of this is AWS Lambda, but it also includes other services like their managed Aurora DB, DynamoDB, S3, and others.

For my purposes, simply outsourcing work to third-party APIs does not count as serverless. Serverless is kind of a misnomer: it sounds like the code doesn’t run on servers, but it merely means those servers have been abstracted away.

For both Fargate (AWS’s serverless container service) and Lambda, the actual server the service runs on is unknown and does not matter to the developer1.

The point of serverless is an economic one: it tries to solve the problem of low-traffic or spiky services. An example: this website is static, served via a CDN2, and has no backend code. It’s common for web developers to ask “what about comments and contact forms?” in response to making a website static. The answer used to be either 1) run a server just for this or 2) use an external service via an iFrame or some JS code. With Lambda you can keep everything in-house without needing to pay for an entire server around the clock. If hundreds of people suddenly start contacting you or commenting on a website which never received traffic before, those Lambdas will scale to handle the load. Spikes like that are usually temporary (e.g. the “HN hug of death”, getting “slashdotted”, etc.).
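A contact-form Lambda like that can be tiny. Here is a minimal sketch using the real Python handler signature (`lambda_handler(event, context)` behind API Gateway); the field names are made up, and the actual delivery step (SES, SNS, etc.) is left as a comment so the sketch stays self-contained:

```python
import json

def lambda_handler(event, context):
    """Entry point AWS Lambda invokes for each API Gateway request."""
    try:
        body = json.loads(event.get("body") or "{}")
    except json.JSONDecodeError:
        return {"statusCode": 400, "body": json.dumps({"error": "invalid JSON"})}

    # Basic validation only. A real handler would forward the message
    # somewhere (e.g. SES or SNS) -- omitted here to keep this runnable anywhere.
    missing = [f for f in ("email", "message") if not body.get(f)]
    if missing:
        return {"statusCode": 400,
                "body": json.dumps({"error": f"missing: {', '.join(missing)}"})}

    return {"statusCode": 200, "body": json.dumps({"ok": True})}
```

You pay nothing while nobody submits the form, and AWS runs as many copies as the spike demands.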

Serverless functions also often only make sense if they run for very short amounts of time. Long-running serverless functions which run frequently are probably better executed on dedicated servers, which could be reserved or requested from discounted spot pools.
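That break-even can be sketched with back-of-envelope arithmetic. The prices below are illustrative on-demand list prices (they vary by region and change over time -- check the current pricing pages), not a pricing guide:

```python
# Rough break-even: when does a steady workload make Lambda cost more
# than a small always-on instance? All prices are illustrative assumptions.
LAMBDA_GB_SECOND = 0.0000166667      # $ per GB-second of execution
LAMBDA_PER_REQUEST = 0.20 / 1_000_000
EC2_HOURLY = 0.0104                  # e.g. a small burstable instance

def lambda_monthly_cost(requests_per_month, avg_ms, memory_gb=0.5):
    gb_seconds = requests_per_month * (avg_ms / 1000) * memory_gb
    return gb_seconds * LAMBDA_GB_SECOND + requests_per_month * LAMBDA_PER_REQUEST

ec2_monthly = EC2_HOURLY * 730  # roughly hours per month

# A spiky contact form (10k requests/month at 200 ms) is pennies on Lambda,
# while a busy service (50M requests/month) costs far more than the box.
print(round(lambda_monthly_cost(10_000, 200), 2))
print(round(lambda_monthly_cost(50_000_000, 200), 2), round(ec2_monthly, 2))
```

The crossover point depends entirely on your duration, memory, and request volume, which is exactly why frequent long-running work tends to belong on dedicated capacity.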

  • Serverless requires tracing infrastructure to be in place
    • Just like with containers, you cannot attach a debugger to a Lambda. You need to log, and sometimes you need to log a LOT to get the visibility you need.
  • Just because you’re not managing the server doesn’t mean you don’t manage the service (and related services)
    • You still need to monitor graphs. You still need a notification group to send emails when a metric goes above or below where it’s supposed to be.
    • Also remember that infinite scale means infinite bills. Lambda and related services can out-scale your wallet if you let them.
      • This is also how competitors, or their jealous fans, can cause severe economic damage.
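Since you can’t attach a debugger, structured logging is the usual substitute. One common pattern (a generic sketch, not an AWS-specific API) is to emit one JSON object per line, since Lambda ships stdout to CloudWatch Logs and structured fields can then be filtered and aggregated instead of grepped:

```python
import json
import time

def log(level, msg, **fields):
    """Emit one JSON object per line. Lambda sends stdout to CloudWatch
    Logs; consistent fields make queries (e.g. in Logs Insights) trivial."""
    line = json.dumps({"ts": time.time(), "level": level, "msg": msg, **fields})
    print(line)
    return line

# Inside a handler you'd tag every line with the request id so one
# invocation's logs can be pulled out of the interleaved stream:
log("INFO", "request received", request_id="abc-123", path="/contact")
```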

Fargate

Fargate is a serverless service from AWS which allows you to run containers without worrying about provisioning, monitoring, maintaining, or securing the underlying hosts.

In exchange it is considerably more expensive than running the equivalent compute yourself, but it also provides a dead-simple way to launch a process, put it behind a load balancer, auto scale it, and do rolling deployments.
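To put a rough number on that premium, compare Fargate’s per-vCPU and per-GB rates against an instance of the same shape. The figures below are illustrative us-east-1 on-demand list prices at one point in time, not authoritative:

```python
# Back-of-envelope: Fargate task sized like an m5.large (2 vCPU, 8 GiB)
# vs the instance itself. Prices are illustrative assumptions -- check
# the current AWS pricing pages.
FARGATE_VCPU_HOUR = 0.04048
FARGATE_GB_HOUR = 0.004445
M5_LARGE_HOUR = 0.096

fargate_hour = 2 * FARGATE_VCPU_HOUR + 8 * FARGATE_GB_HOUR
premium = fargate_hour / M5_LARGE_HOUR - 1
print(f"Fargate ${fargate_hour:.4f}/hr vs EC2 ${M5_LARGE_HOUR}/hr (~{premium:.0%} more)")
```

Whether that markup is worth it depends on how much host provisioning, patching, and capacity management it lets you skip.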

Plus, because it’s just Docker containers, running one on your own machine, including your laptop, is not notably different from running it in Fargate, making it far easier to test locally and then push with confidence.

Fargate is serverless, not “ECS”

Fargate runs on servers managed by AWS which is what makes it “serverless.” As the operator you do not have access or even visibility into the server your containers are running on. At most you get some average metrics on CPU and Memory usage across a service.

But Fargate is presented as part of the ECS service, both in the APIs and in the Console, so all I’m trying to clarify is that you should understand it’s a distinct thing.

Auto Scaling

AWS provides a wizard in the console which creates the pieces you need to auto scale a Fargate service: CloudWatch alarms tied to scaling policies on the service’s desired task count.

It’s important to understand that while you’re still running in a “cluster”, since you no longer manage the servers, you are not using the auto scaling features which used to scale the servers your containers ran on. You’re auto-scaling each specific service you have.

For the EC2 instances you would previously use to run your containers in an ECS cluster, that kind of scaling is pretty simple to set up, but it is not the same thing as scaling the service itself.
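Target-tracking scaling, the usual policy type here, is roughly a proportional rule: pick a target (say, 50% average CPU) and the task count is adjusted so utilization lands near it, clamped to the service’s min/max capacity. A rough sketch of that math (an approximation for intuition, not the exact AWS algorithm):

```python
import math

def desired_tasks(current_tasks, current_cpu_pct, target_cpu_pct,
                  min_tasks=1, max_tasks=10):
    """Scale the task count proportionally so average utilization lands
    near the target, then clamp to the configured capacity bounds."""
    wanted = math.ceil(current_tasks * current_cpu_pct / target_cpu_pct)
    return max(min_tasks, min(max_tasks, wanted))

# 4 tasks at 90% average CPU with a 50% target -> scale out
print(desired_tasks(4, 90, 50))  # -> 8
```

The clamp is why setting a sane max capacity matters: it’s the same “infinite scale means infinite bills” guardrail mentioned above, applied per service.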


  1. Can every app ever made ignore what server it’s running on and go serverless without any changes? No. But, just like with EC2 instances which die, the answer is to design an app/server which can be restarted on a new underlying server. It’s the job of AWS to ensure they’re running Lambdas and containers on servers which aren’t failing for some reason, exactly as it is for EC2 instances. ↩︎

  2. There are no servers! It’s serverless!… Eh, I think that’s not useful. Let’s avoid making serverless impossible to understand. ↩︎
