We’ll analyse instance costs for various workloads and usage patterns. All prices are given in dollars per month (720 hours) for servers located in Europe (eu-west-1).
Shared CPU Instances
Shared CPU instances only get a fraction of a CPU. The physical processor is over-allocated and shared with many other instances running on the same host. A shared CPU instance may burst to 100% CPU usage for short periods, but it may also be starved of CPU and paused. These instances are cheap, but they are not reliable for non-negligible continuous workloads.
The smallest instance on both clouds has 500 MB of memory and a few percent of a CPU. It’s the cheapest instance, usable for testing and minimal needs (you can’t do much with only 5% of a CPU and 500 MB of memory).
The infamous t2.small and its rival the g1-small are usually the most common instance types in use. They come with 2 GB of memory and a bit of CPU. They’re cheap and good enough for many use cases (excluding production and time-critical processing, which need dedicated CPU time).
The Cheapest Production Instances
Production instances are all the instances with dedicated CPU time (i.e. everything but the shared CPU instances).
Most services will just run on the cheapest production instance available. That instance matters a lot because it determines the entry price and the baseline specifications for everything else.
The cheapest production instance on Google Cloud is the n1-standard-1 which gives 1 CPU and 4 GB of memory.
AWS is more complex. The m3.medium is 1 CPU and 4 GB of memory. The c4.large is 2 CPU and 4 GB of memory.
m3/c3 are the previous family generation (pre-2015), using older hardware and an ancient virtualisation technology. c4/m4 are the current generation, with enhanced networking and reserved bandwidth for EBS, among other system improvements.
Either way, the Google entry-level instance is significantly cheaper than both AWS entry-level instances. There will be a lot of these running, so expect massive cost savings by using Google Cloud.
I’m a believer that one should optimize for manageability and not raw costs. That means adopting c4/m4 as the standard for deployments (instead of c3/m3).
Given this decision, the smallest production instance on AWS is the c4.large (2 CPU, 4GB memory), a rather big instance when compared to the n1-standard-1 (1 CPU, 4GB memory). Why are we forced to pay for two CPUs as the minimal choice on AWS? That does set a high base price.
Not only is Google cheaper because it’s more competitive, it also offers more tailored options. The result is a massive 68% discount on the most commonly used production instance.
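To make the gap concrete, here is the arithmetic behind that 68% figure. The monthly prices are assumptions of this sketch (approximate 2016 eu-west figures), not authoritative price-list values:

```python
# Assumed monthly prices (illustrative; check the current price lists):
aws_c4_large = 86.0       # c4.large: 2 CPU, 4 GB memory
gcp_n1_standard_1 = 27.5  # n1-standard-1: 1 CPU, 4 GB, with sustained use

def discount(aws_price, gcp_price):
    """How much cheaper the Google instance is, as a whole percentage."""
    return round(100 * (1 - gcp_price / aws_price))

print(discount(aws_c4_large, gcp_n1_standard_1))  # -> 68
```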
Personal Note: I would criticize AWS’s choice to discontinue the 1 CPU instance line in the m4 generation (there is no m4.medium).
Instances by usage
A server has 3 dimensions of specifications: CPU performance, memory size and network speed.
Most applications only have a hard requirement in a single dimension. We’ll analyse the pricing separately for each usage pattern.
Network
Typical Consumers: load balancers, file transfers, uploads/downloads, backups and generally speaking everything that uses the network.
What should we order to have 1Gbps and how much will it be?
The minimum on Google Cloud is the n1-highcpu-4 instance (4 CPU, 4 GB memory).
The minimum on AWS is the c4.4xlarge instance (16 CPU, 30 GB memory).
AWS bandwidth allowance is limited and correlated to the instance size. The big instances, the ones with decent bandwidth, are incredibly expensive.
To give a point of comparison, the c4|m4|r3.large instances have a hard cap at 220 Mbit/s of network traffic (note: it also applies internally, within a VPC).
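To put such a cap in perspective, here is a back-of-the-envelope calculation (plain arithmetic, decimal units assumed) of how long moving a 100 GB backup takes at 220 Mbit/s versus a full 1 Gbps:

```python
def transfer_hours(gigabytes, mbit_per_s):
    """Hours needed to move `gigabytes` at a sustained `mbit_per_s` rate."""
    bits = gigabytes * 8 * 1000**3          # decimal GB -> bits
    return bits / (mbit_per_s * 1000**2) / 3600

print(round(transfer_hours(100, 220), 1))   # capped AWS instance -> 1.0
print(round(transfer_hours(100, 1000), 1))  # a real 1 Gbps link -> 0.2
```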
All Google instances have significantly faster network than the equivalent [and even bigger] AWS instances, to the point where they’re not even playing in the same league.
Google has been designing networks and manufacturing their own equipment for decades. It’s fair to assume that AWS doesn’t have the technology to compete.
CPU
Typical Consumers: web servers, data analysis, simulations, processing and computations.
Google is cheaper per CPU.
Google CPU instances have half the memory of AWS CPU instances. While that could have justified a 10% price difference, it doesn’t justify double.
Note: The performance per CPU is equivalent on both clouds (though the CPU models and serial numbers may vary).
A sane design decision. Most CPU-bound workloads don’t need much memory. (Note: if they do, they can run on “standard” instances).
Pricing is mostly linked to CPU count. Additional memory is cheap.
Memory
Typical Consumers: databases, caches and in-memory workloads.
Google is cheaper per GB of memory.
Google memory instances have 15% less memory than AWS memory instances. While that could have justified a few percent of difference, it sure as hell doesn’t justify double.
Pricing is mostly linked to CPU count. Additional memory is cheap.
Local SSD and Scaling Up
There is software that can only scale up, typically SQL databases. A database holding tons of data requires fast local disks and truckloads of memory to operate non-sluggishly.
Scaling up is the most typical use case for beefy dedicated servers, but we’re not gonna rent a single server in another place just for one application. The cloud provider will have to accommodate that need.
Google allows attaching local 400 GB SSDs to any instance type ($85 a month per disk).
Some AWS instances come with a small local SSD (16-160 GB); you’re out of luck if you need more space than that. The only option to get big local SSDs is the special i2 instance family, whose specifications come in multiples of 800 GB local SSD + 4 CPU + 15 GB RAM (for $655 a month).
The Google SSD model is superior. It’s significantly more modular and cheaper (and more performant but that’s a different topic).
Disk Intensive Load: A job that requires high volume fast disks (i.e. local SSD) but not much memory.
AWS forces you to buy a big instance (i2.xlarge) to get enough SSD space whereas Google allows you to attach an SSD to a small instance (n1-highcpu-4). The lack of flexibility from AWS has a measurable impact: the AWS setup costs 406% of the Google setup to achieve the same need.
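Here is a rough reconstruction of that 406% figure. The i2.xlarge and local SSD prices come from this article; the n1-highcpu-4 sustained-use price is an estimate of ours, not an official number:

```python
aws_i2_xlarge = 655.0    # 4 CPU, 15 GB, 800 GB local SSD (quoted above)
gcp_n1_highcpu_4 = 76.0  # 4 CPU, 4 GB, sustained use (our estimate)
gcp_local_ssd = 85.0     # 400 GB local SSD (quoted above)

gcp_setup = gcp_n1_highcpu_4 + gcp_local_ssd
ratio = int(100 * aws_i2_xlarge / gcp_setup)  # AWS cost as % of Google cost
print(gcp_setup, ratio)  # -> 161.0 406
```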
Database: A typical database. Fast storage and sizeable memory.
Bigger Database: Sometimes there is no choice but to scale up, to whatever resources are commanded by the application.
On AWS (i2.8xlarge): 32 cores, 244 GB memory, 2 x 800 GB local SSD in RAID1 (+ 6 unused SSDs you still have to pay for).
On Google Cloud (n1-highmem-32): 32 cores, 208 GB memory, 4 x 375 GB local SSD in RAID10.
This last number is meant to show that the lack of flexibility of AWS can (and will) snowball quickly. Only one very particular instance can fulfil the requirements, and it comes with many cores and 4800 GB of unnecessary local SSD. The AWS bill is $4k (273%) higher than the equivalent setup on Google Cloud.
Google offers custom machine types. You can pick how much CPU and memory you want, you’ll get that exact instance with a tailored pricing.
It is quite flexible. For instance, we could recreate any instance from AWS on Google Cloud.
Of course, there are physical bounds inherent to hardware (e.g. you can’t have a single core with 100 GB of memory).
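The pricing model behind custom machine types is simple: you pay per vCPU and per GB of memory. The per-unit rates below are placeholders we made up for illustration (check the current price list); the shapes show how AWS instances could be recreated:

```python
HOURS = 720  # one month, as used throughout this article

def custom_monthly(vcpus, memory_gb, cpu_rate=0.035, mem_rate=0.005):
    """Monthly cost of a custom machine type, given assumed $/hour unit rates."""
    return HOURS * (vcpus * cpu_rate + memory_gb * mem_rate)

print(round(custom_monthly(2, 4)))   # a c4.large-shaped instance (2 CPU, 4 GB)
print(round(custom_monthly(4, 30)))  # an r3.xlarge-shaped instance (4 CPU, 30 GB)
```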
Reserved Instances are bullshit!
Reserving capacity is a dangerous and antiquated pricing model that belongs to the era of the datacenter.
The numbers given in this article do not account for any AWS reservation. However, they all account for Google sustained use discount (30% automatic discount on instances that ran for the entire month).
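For reference, here is how the sustained use discount reaches 30%, as we understand the pricing documentation of the time: each successive quarter of the month is billed at a lower rate (100%, 80%, 60%, 40% of base price), so a full month averages out at 70% of the list price:

```python
TIER_RATES = [1.0, 0.8, 0.6, 0.4]  # billing rate per quarter of the month

def effective_rate(fraction_of_month):
    """Average billing rate for an instance that ran this fraction of a month."""
    billed = 0.0
    for i, rate in enumerate(TIER_RATES):
        # Usage falling into this quarter of the month, capped at 25%.
        used = min(max(fraction_of_month - i * 0.25, 0.0), 0.25)
        billed += used * rate
    return billed / fraction_of_month

print(round(effective_rate(1.0), 2))  # full month -> 0.7 (a 30% discount)
print(round(effective_rate(0.5), 2))  # half month -> 0.9 (a 10% discount)
```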
My first encounter with Docker goes back to early 2015. We experimented with Docker to find out whether it could benefit us. At the time it wasn’t possible to run a container [in the background] and there wasn’t any command to see what was running, debug or ssh into the container. The experiment was quick: Docker was useless, closer to an alpha prototype than a release.
Fast forward to 2016. New job, new company, and the Docker hype is growing like mad. Developers here have pushed Docker into production projects, so we’re stuck with it. On the bright side, the run command finally works, we can start, stop and see containers. It is functional.
We have 12 dockerized applications running in production as we write this article, spread over 31 hosts on AWS (1 docker app per host [note: keep reading to know why]).
The following article narrates our journey with Docker, an adventure full of dangers and unexpected turns.
Production Issues with Docker
Docker Issue: Breaking changes and regressions
We ran all these versions (or tried to):
1.6 => 1.7 => 1.8 => 1.9 => 1.10 => 1.11 => 1.12
Each new version came with breaking changes. We started on docker 1.6 early this year to run a single application.
We updated 3 months later because we needed a fix only available in later versions. The 1.6 branch was already abandoned.
Versions 1.7 and 1.8 couldn’t run. We moved to 1.9 only to find a critical bug in it two weeks later, so we upgraded (again!) to 1.10.
There are all kind of subtle regressions between Docker versions. It’s constantly breaking unpredictable stuff in unexpected ways.
The trickiest regressions we had to debug were network-related. Docker entirely abstracts the host networking: a big mess of port redirection, DNS tricks and virtual networks.
Bonus: Docker was removed from the official Debian repository last year, then the package got renamed from docker.io to docker-engine. Documentation and resources predating this change are obsolete.
Docker Issue: Can’t clean old images
The most requested and most lacking feature in Docker is a command to clean older images (older than X days or not used for X days, whatever). Space is a critical issue given that images are renewed frequently and may take more than 1 GB each.
The only way to clean space is to run this hack, preferably in cron every day:
docker images -q -a | xargs --no-run-if-empty docker rmi
It enumerates all images and removes them. The ones currently in use by running containers cannot be removed (it just gives an error). It is dirty but it gets the job done.
The docker journey begins with a clean up script. It is an initiation rite every organization has to go through.
Many attempts can be found on the internet, none of which works well. There is no API to list images with dates; sometimes there is, but it’s deprecated within 6 months. One common strategy is to read the date attribute from image files and call ‘docker rmi‘, but it fails when the naming changes. Another strategy is to read date attributes and delete files directly, but that causes corruption if not done perfectly, and it cannot be done perfectly except by Docker itself.
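For what it’s worth, a sketch of the age-based strategy. It assumes `docker images --format '{{.ID}}\t{{.CreatedAt}}'` is available in your Docker version and that the date format doesn’t change under you, which, as explained above, is exactly the kind of assumption that breaks:

```python
from datetime import datetime, timedelta

def old_image_ids(lines, days, now):
    """Return IDs of images created more than `days` days before `now`.

    `lines` is the output of: docker images --format '{{.ID}}\t{{.CreatedAt}}'
    The IDs could then be fed to `docker rmi`.
    """
    cutoff = now - timedelta(days=days)
    old = []
    for line in lines:
        image_id, created_at = line.split("\t")
        # CreatedAt looks like "2016-01-10 08:00:00 +0000 UTC"; keep the
        # first 19 characters and ignore the timezone suffix for simplicity.
        created = datetime.strptime(created_at[:19], "%Y-%m-%d %H:%M:%S")
        if created < cutoff:
            old.append(image_id)
    return old

sample = [
    "1a2b3c4d\t2016-01-10 08:00:00 +0000 UTC",
    "5e6f7a8b\t2016-09-20 12:00:00 +0000 UTC",
]
print(old_image_ids(sample, days=30, now=datetime(2016, 9, 27)))  # -> ['1a2b3c4d']
```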
Docker Issue: Kernel support (or lack thereof)
There are endless issues related to the interactions between the kernel, the distribution, Docker and the filesystem.
We are using Debian stable with backports, in production. We started out on Debian Jessie 3.16.7-ckt20-1 (released November 2015). This one suffers from a major critical bug that crashes hosts erratically (every few hours on average).
Linux 3.x: Unstable storage drivers
Docker has various storage drivers. The only one (allegedly) widely supported is AUFS.
The AUFS driver is unstable. It suffers from critical bugs provoking kernel panics and corrupting data.
It’s broken on [at least] all “linux-3.16.x” kernels. There is no cure.
We follow Debian and kernel updates very closely. Debian published special patches outside the regular cycle. There was one major bugfix to AUFS around March 2016. We thought it was THE TRUE ONE FIX but it turned out that it wasn’t. The kernel panics happened less frequently afterwards (every week instead of every day) but they were still loud and present.
Once during this summer, a regression in a major update brought back a previous critical issue. It started killing CI servers one by one, with 2 hours on average between murders. An emergency patch was quickly released to fix the regression.
There were multiple fixes to AUFS published along the year 2016. Some critical issues were fixed but there are many more still left. AUFS is unstable on [at least] all “linux-3.16.x” kernels.
Debian stable is stuck on kernel 3.16. It’s unstable. There is nothing to do about it except switching to Debian testing (which can use kernel 4).
Ubuntu LTS is running kernel 3.19. There is no guarantee that this latest update fixes the issue. Changing our main OS would be a major disruption but we were so desperate that we considered it for a while.
RHEL/CentOS-6 is on kernel 2.x and RHEL/CentOS-7 is on kernel 3.10 (with many later backports done by Red Hat).
Linux 4.x: The kernel officially dropped docker support
It is well-known that AUFS has endless issues and it’s regarded as dead weight by the developers. As a long-standing goal, the AUFS filesystem was finally dropped in kernel version 4.
There is no unofficial patch to support it, there is no optional module, there is no backport whatsoever, nothing. AUFS is entirely gone.
How does docker work without AUFS then? Well, it doesn’t.
So, the docker guys wrote a new filesystem, called overlay.
“OverlayFS is a modern union filesystem that is similar to AUFS. In comparison to AUFS, OverlayFS has a simpler design, has been in the mainline Linux kernel since version 3.18 and is potentially faster.” — Docker OverlayFS driver
Note that it’s not backported to existing distributions. Docker never cared about [backward] compatibility.
Update after comments: Overlay is the name of both the kernel module to support it (developed by linux maintainers) and the docker storage driver to use it (part of docker, developed by docker). They are two different components [with a possible overlap of history and developers]. The issues seem mostly related to the docker storage driver, not the filesystem itself.
The debacle of Overlay
A filesystem driver is a complex piece of software and it requires a very high level of reliability. Long-time readers will remember the Linux migration from ext3 to ext4. It took time to write, more time to debug and an eternity to be shipped as the default filesystem in popular distributions.
Making a new filesystem in 1 year is an impossible mission. It’s actually laughable when considering that the task is assigned to Docker, which has a track record of instability and disastrous breaking changes, exactly what we don’t want in a filesystem.
Long story short. That did not go well. You can still find horror stories with Google.
Overlay development was abandoned within 1 year of its initial release.
Making a new filesystem in 1 year is still an impossible mission. Docker just tried and failed. Yet they’re trying again! We’ll see how it turns out in a few years.
Right now it’s not supported on any systems we run. We can’t use it, we can’t even test it.
Lesson learnt: As you can see with Overlay then Overlay2: no backport, no patch, no retro-compatibility. Docker only moves forward and breaks things. If you want to adopt Docker, you’ll have to move forward as well, following the releases from Docker, the kernel, the distribution, the filesystems and some dependencies.
Docker Issue: The broken official repository
One day, the official Docker repository (apt.dockerproject.org) was published in a broken state. As a direct consequence, any run of “apt-get update” (or equivalent) on a system configured with the broken repo fails with the error “Error https://apt.dockerproject.org/ Hash Sum mismatch”.
This issue is worldwide. It affects ALL systems on the planet configured with the Docker repository. It is confirmed on all Debian and Ubuntu versions, independent of Docker version.
All CI pipelines in the world which rely on docker setup/update or a system setup/update are broken. It is impossible to run a system update or upgrade on an existing system. It’s impossible to create a new system and install docker on it.
After a while, we get an update from a Docker employee: “To give an update; I raised this issue internally, but the people needed to fix this are in the San Francisco timezone [8 hours behind London], so they’re not present yet.”
I personally announce the situation internally to our developers. Today, there is no Docker CI and we can’t create new systems nor update existing systems which have a dependency on Docker. All our hope lies on a dude in San Francisco, currently sleeping.
[pause waiting for the fix, that’s when free food and drinks come in handy]
An update is posted from a Docker guy in Florida at around 3pm (London Time). He’s awake, he’s found out the issue and he’s working on the fix.
Keys and packages are republished later.
We try and confirm the fix at around 5pm (London Time).
That was a 7-hour interplanetary outage because of Docker. All that’s left from the outage is a few messages on a GitHub issue. There was no postmortem. It got little (if any) coverage in tech news, in spite of the catastrophic failure.
The Docker Registry
The docker registry stores and serves docker images.
Automatic CI build ===> (on success) push the image to ===> docker registry
Deploy command <=== pull the image from <=== docker registry
There is a public registry operated by docker. As an organization, we also run our own internal docker registry. It’s a docker image running inside docker on a docker host (that’s quite meta). The docker registry is the most used docker image.
There are 3 versions of the docker registry, and the client can pull indifferently from any of them:
The registry v1, the original implementation, now deprecated
The registry v2, a full rewrite
The Trusted Registry, a (paid?) service mentioned everywhere in the doc; not sure what it is, just ignore it
Docker Registry Issue: Abandon and Extinguish
The docker registry v2 is a full rewrite. The registry v1 was retired soon after the v2 release.
We had to install a new thing (again!) just to keep docker working. They changed the configuration, the URLs, the paths, the endpoints.
The transition to the registry v2 was not seamless. We had to fix our setup, our builds and our deploy scripts.
Lesson learnt: Do not rely on any docker tool or API. They are constantly abandoned and extinguished.
One of the goals of the registry v2 was to bring a better API. It’s documented here, in a documentation that we don’t remember existing 9 months ago.
Docker Registry Issue: Can’t clean images
It’s impossible to remove images from the docker registry. There is no garbage collection either, the doc mentions one but it’s not real. (The images do have compression and de-duplication but that’s a different matter).
The registry just grows forever. Our registry can grow by 50 GB per week.
We can’t have a server with an unlimited amount of storage. Our registry ran out of space a few times, unleashing hell in our build pipeline, so we moved the image storage to S3.
Lesson learnt: Use S3 to store images (it’s supported out-of-the-box).
We performed a manual clean-up 3 times in total. In all cases we had to stop the registry, erase all the storage and start a new registry container. (Luckily, we can re-build the latest docker images with our CI).
Lesson learnt: Deleting any file or folder manually from the docker registry storage WILL corrupt it.
To this day, it’s not possible to remove an image from the docker registry. There is no API for it either. (One of the points of the v2 was to have a better API. Mission failed.)
Docker Issue: The release cycle
The docker release cycle is the only constant in the Docker ecosystem:
Abandon whatever exists
Make new stuff and release
Ignore existing users and retro compatibility
The release cycle applies to, but is not limited to: docker versions, features, filesystems, the docker registry, all APIs…
Judging by the past history of Docker, we can approximate that anything made by Docker has a half-life of about 1 year, meaning that half of what exists now will be abandoned [and extinguished] in 1 year. There will usually be a replacement available, which is not fully compatible with what it’s supposed to replace, and may or may not run on the same ecosystem (if at all).
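Taken literally, that half-life model predicts how much of today’s Docker ecosystem is still supported after a given time (a deliberately crude exponential-decay sketch):

```python
def surviving_fraction(years, half_life=1.0):
    """Fraction of the ecosystem still alive after `years`, given its half-life."""
    return 0.5 ** (years / half_life)

for years in (1, 2, 3):
    print(years, surviving_fraction(years))  # -> 0.5, 0.25, 0.125
```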
“We make software not for people to use but because we like to make new stuff.” — Future Docker Epitaph
The current status quo on Docker in our organization
Growing in web and micro services
Docker first came in through a web application. At the time, it was an easy way for the developers to package and deploy it. They tried it and adopted it quickly. Then it spread to some micro services, as we started to adopt a micro services architecture.
Web applications and micro services are similar. They are stateless applications, they can be started, stopped, killed, restarted without thinking. All the hard stuff is delegated to external systems (databases and backend systems).
The docker adoption started with minor new services. At first, everything worked fine in dev, in testing and in production. The kernel panics slowly began to happen as more web services and web applications were dockerized. The stability issues became more prominent and impactful as we grew.
A few patches and regressions were published over the year. We’ve been playing catch-up and workaround with Docker for a while now. It is a pain but it doesn’t seem to discourage people from adopting Docker. Support and demand are still growing inside the organisation.
Note: None of the failures ever affected any customer or funds. We are quite successful at containing Docker.
Banned from the core
We have some critical applications running in Erlang, managed by a few guys in the ‘core’ team.
They tried to run some of their applications in Docker. It didn’t work. For some reason, Erlang applications and Docker didn’t get along.
It was done a long time ago and we don’t remember all the details. Erlang has particular ideas about how the system and networking should behave, and the expected load was in thousands of requests per second. Any instability or incompatibility could explain an outstanding failure. (We know for sure now that the versions used during the trial suffered from multiple major instability issues.)
The trial raised a red flag. Docker is not ready for anything critical. It was the right call. The later crashes and issues managed to confirm it.
We only use Erlang for critical applications. For example, the core guys are responsible for a payment system that handled $96,544,800 in transactions this month. It includes a couple of applications and databases, all of which are under their responsibility.
Docker is a dangerous liability that could put millions at risk. It is banned from all core systems.
Banned from the DBA
Docker is meant to be stateless. Containers have no permanent disk storage; whatever happens is ephemeral and is gone when the container stops. Containers are not meant to store data. Actually, they are meant by design NOT to store data. Any attempt to go against this philosophy is bound to end in disaster.
Moreover, Docker locks away processes and files behind its abstraction; they are unreachable as if they didn’t exist. It prevents any sort of recovery if something goes wrong.
Long story short. Docker SHALL NOT run databases in production, by design.
It gets worse than that. Remember the ongoing kernel panics with docker?
A crash would destroy the database and affect all systems connecting to it. It is an erratic bug, triggered more frequently under intensive usage. A database is the ultimate IO intensive load, that’s a guaranteed kernel panic. Plus, there is another bug that can corrupt the docker mount (destroying all data) and possibly the system filesystem as well (if they’re on the same disk).
Nightmare scenario: The host is crashed and the disk gets corrupted, destroying the host system and all data in the process.
Conclusion: Docker MUST NOT run any databases in production, EVER.
Every once in a while, someone comes and asks “why don’t we put these databases into docker?” and we tell some of our numerous war stories. So far, no one has asked twice.
Note: We started going over our Docker history as an integral part of our onboarding process. That’s the new damage control philosophy: kill the very idea of Docker before it gets any chance to grow and kill us.
A Personal Opinion
Docker is gaining momentum, there is some crazy fanatic support out there. The docker hype is not only a technological liability any more, it has evolved into a sociological problem as well.
The perimeter is controlled at the moment, limited to some stateless web applications and micro services. It’s unimportant stuff, they can be dockerized and crash once a day, I do not care.
So far, everyone who wanted to use Docker for important stuff has stopped after a quick discussion. My biggest fear is that one day a Docker fanatic will not listen to reason and keep pushing. I’ll be forced to stand in his way, and it might not be pretty.
Nightmare scenario: The future accounting cluster revamp, currently holding $23M in customer funds (the M is for million dollars). There is already one guy who genuinely asked the architect “why don’t you put these databases into docker?“, there is no word to describe the face of the architect.
My duty is to customers. Protecting them and their money.
Surviving Docker in Production
Follow releases and change logs
Track versions and change logs closely for kernel, OS, distributions, docker and everything in between. Look for bugs, hope for patches, read everything with attention.
A quick way to check which kernel runs where across the fleet:
ansible '*' -m shell -a "uname -a"
Let docker crash
Let docker crash. Self-explanatory.
Once in a while, we look at which servers are dead and we force reboot them.
Have 3 instances of everything
High availability requires at least 2 instances per service, to survive one instance failure.
When using Docker for anything remotely important, we should have 3 instances of it. Docker dies all the time; we need a margin of error to survive 2 crashes in a row on the same service.
Most of the time, it’s CI or test instances that crash. (They run lots of intensive tests and the issues are particularly outstanding there.) We’ve got a lot of these. Sometimes three of them crash in a row in an afternoon.
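A back-of-the-envelope justification for the rule of 3. The per-instance failure probability is purely illustrative, and assuming independent failures is optimistic (a kernel bug tends to hit many hosts at once):

```python
def full_outage_probability(p_instance_down, instances):
    """Probability that every instance of a service is down at the same time,
    assuming independent failures."""
    return p_instance_down ** instances

p = 0.05  # illustrative: chance a given instance is down at any moment
for n in (1, 2, 3):
    # With 3 instances the service survives 2 simultaneous crashes,
    # and a full outage needs all three down at once (p**3).
    print(n, full_outage_probability(p, n))
```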
Don’t put data in Docker
Services which store data cannot be dockerized.
Docker is designed to NOT store data. Don’t go against it, it’s a recipe for disaster.
On top of that, there are current issues that can kill the server and potentially destroy the data, so that’s really a big no-go.
Don’t run anything important in Docker
Docker WILL crash. Docker WILL destroy everything it touches.
It must be limited to applications which can crash without causing downtime. That means mostly stateless applications, that can just be restarted somewhere else.
Put docker in auto scaling groups
Docker applications should be run in auto-scaling groups. (Note: We’re not fully there yet).
Whenever an instance is crashed, it’s automatically replaced within 5 minutes. No manual action required. Self healing.
The impossible challenge with Docker is to come up with a working combination of kernel + distribution + docker version + filesystem.
Right now, we don’t know of ANY combination that is stable (maybe there isn’t any?). We actively look for one, constantly testing new systems and patches.
Goal: Find a stable ecosystem to run docker.
It takes 5 years to make good and stable software. Docker v1.0 is only 28 months old; it didn’t have time to mature.
The hardware renewal cycle is 3 years, the distribution release cycle is 18-36 months. Docker didn’t exist in the previous cycle so systems couldn’t consider compatibility with it. To make matters worse, it depends on many advanced system internals that are relatively new and didn’t have time to mature either, nor reach the distributions.
That could be a decent software in 5 years. Wait and see.
Goal: Wait for things to get better. Try to not go bankrupt in the meantime.
Use auto scaling groups
Docker is limited to stateless applications. If an application can be packaged as a Docker Image, it can be packaged as an AMI. If an application can run in Docker, it can run in an auto scaling group.
Most people ignore it but Docker is useless on AWS and it is actually a step back.
First, the whole point of containers is to save resources by running many containers on the same [big] host. (Let’s ignore for a minute the current Docker bug that crashes the host [and all containers running on it], forcing us to run only 1 container per host for reliability.)
Thus containers are useless on cloud providers. There is always an instance of the right size. Just create one with appropriate memory/CPU for the application. (The minimum on AWS is t2.nano which is $5 per month for 512MB and 5% of a CPU).
Second, the biggest gain of containers comes when there is a complete orchestration system around them to automatically manage creation/stop/start/rolling-update/canary-release/blue-green-deployment. The orchestration systems to achieve that do not currently exist. (That’s where Nomad/Mesos/Kubernetes will eventually come in; they are not good enough in their present state.)
AWS has auto scaling groups to manage the orchestration and life cycle of instances. It’s a tool completely unrelated to the Docker ecosystem yet it can achieve a better result with none of the drawbacks and fuck-ups.
Create an auto-scaling group per service and build an AMI per version (tip: use Packer to build AMI). People are already familiar with managing AMI and instances if operations are on AWS, there isn’t much more to learn and there is no trap. The resulting deployment is golden and fully automated. A setup with auto scaling groups is 3 years ahead of the Docker ecosystem.
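A minimal sketch of the auto-scaling-group-per-service idea, using hypothetical names (the group name, launch configuration name and sizes are ours). With boto3, these parameters would be passed to `create_auto_scaling_group`; here we only build them:

```python
def asg_params(service, size=3):
    """Parameters for one auto scaling group running one service.

    `size` follows the "3 instances of everything" rule from above. A real
    call would also need AvailabilityZones or a VPCZoneIdentifier.
    """
    return {
        "AutoScalingGroupName": service + "-asg",
        "LaunchConfigurationName": service + "-lc",  # points at the AMI baked per version
        "MinSize": size,
        "MaxSize": size,
        "DesiredCapacity": size,
        "HealthCheckType": "EC2",  # dead instances get replaced automatically
    }

params = asg_params("web-frontend")
print(params["AutoScalingGroupName"], params["DesiredCapacity"])  # -> web-frontend-asg 3
# With boto3: boto3.client("autoscaling").create_auto_scaling_group(**params)
```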
Goal: Put docker services in auto scaling groups to have failures automatically handled.
CoreOS
Update after comments: Docker and CoreOS are made by separate companies.
To give some slack to Docker for once, it requires and depends on a lot of new advanced system internals. A classic distribution cannot upgrade system internals outside of major releases, even if it wanted to.
It makes sense for docker to have (or be?) a special purpose OS with an appropriate update cycle. It may be the only way to have a working bundle of kernel and operating system able to run Docker.
Goal: Trial the CoreOS ecosystem and assess stability.
In the grand scheme of operations, it’s doable to separate servers for running containers (on CoreOS) from normal servers (on Debian). Containers are not supposed to know (or care) about what operating systems they are running.
The hassle will be to manage the new OS family (setup, provisioning, upgrade, user accounts, logging, monitoring). No clue how we’ll do that or how much work it might be.
Goal: Deploy CoreOS at large.
Container Orchestration
One of the [future] major breakthroughs is the ability to manage fleets of containers abstracted away from the machines they end up running on, with automatic start/stop/rolling-update and capacity adjustment.
The issue with Docker is that it doesn’t do any of that. It’s just a dumb container system. It has the drawbacks of containers without the benefits.
There are currently no good, battle-tested, production-ready orchestration systems in existence.
Mesos is not meant for Docker
Docker Swarm is not trustworthy
Nomad has only the most basic features
Kubernetes is new and experimental
Kubernetes is the only project that intends to solve the hard problems [around containers]. It is backed by resources that none of the other projects have (i.e. Google has long experience of running containers at scale, Googley amounts of resources at their disposal, and they know how to write working software).
Right now, Kubernetes is young and experimental and it’s lacking documentation. The barrier to entry is painful and it’s far from perfect. Nonetheless, it is [somewhat] working and already benefiting a handful of people.
In the long-term, Kubernetes is the future. It’s a major breakthrough (or to be accurate, it’s the final brick that is missing for containers to be a major [r]evolution in infrastructure management).
The question is not whether to adopt Kubernetes; the question is when to adopt it.
Goal: Keep an eye on Kubernetes.
Note: Kubernetes needs Docker to run, so it is affected by all the Docker issues. (For example, do not try Kubernetes on anything other than CoreOS.)
Google Cloud: Google Container Engine
As we said before, there is no known stable combination of OS + kernel + distribution + Docker version, thus there is no stable ecosystem to run Kubernetes on. That’s a problem.
There is a potential workaround: Google Container Engine. It is a hosted Kubernetes (and Docker) as a service, part of Google Cloud.
Google has to solve the Docker issues to offer what they are offering; there is no alternative. Incidentally, they might be the only ones who can find a stable ecosystem around Docker, fix the bugs, and sell it ready-to-use as a managed cloud service. We might have a shared goal for once.
They already offer the service so that should mean that they already worked around the Docker issues. Thus the simplest way to have containers working in production (or at-all) may be to use Google Container Engine.
Goal: Move to Google Cloud, starting with our subsidiaries not locked in on AWS. Ignore the rest of the roadmap as it’s made irrelevant.
Google Container Engine: One more reason why Google Cloud is the future and AWS is the past (on top of instances that are 33% cheaper with 3 times the network speed and IOPS, on average).
A bit of context missing from the article: we are a small shop with a few hundred servers. At core, we’re running a financial system moving multiple millions of dollars per day (billions per year).
It’s fair to say that we have higher expectations than average and we take production issues rather (too?) seriously.
Overall, it’s “normal” that you didn’t experience all of these issues if you aren’t using Docker at scale in production and/or haven’t used it for long.
I’d like to point out that these issues and workarounds happened over a period of [more than] a year, summarized together in a 10-minute read. That does amplify the dramatic and painful aspect.
Anyway, whatever happened in the past is already in the past. The most important section is the Roadmap. That’s what you need to know to run Docker (or use auto scaling groups instead).
We’ll focus only on GCE and AWS in this article. They are the two major, fully featured, shared-infrastructure IaaS offerings.
They both provide everything needed in a typical datacenter.
Infrastructure and Hardware:
Get servers with various hardware specifications
In multiple datacenters across the planet
Remote and local storage
Networking (VPC, subnets, firewalls)
Start, stop, delete anything in a few clicks
Pay as you go
Additional Managed Services (optional):
SQL Database (RDS, Cloud SQL)
NoSQL Database (DynamoDB, Bigtable)
CDN (CloudFront, Google CDN)
Load balancer (ELB, Google Load Balancer)
Long term storage (S3, Google Storage)
Things you must know about Amazon
GCE vs AWS pricing: Good vs Evil
Real costs on the AWS side:
Base instance plus storage cost
Add provisioned IOPS for databases (normal EBS IO is not reliable enough)
Add local SSD ($675 per 800 GB + 4 CPU + 30 GB, ALWAYS ALL together)
Add 10% on top of everything for Premium Support (mandatory)
Add 10% for dedicated instances or dedicated hosts (if subject to regulations)
Real costs on the GCE side:
Base instance plus storage cost
Enjoy fast and dependable IOPS out-of-the-box on remote SSD volumes
Add local SSD ($82 per 375 GB, attachable to any existing instance)
Enjoy automatic discount for sustained usage (~30% for instances running 24/7)
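Putting the two local-SSD figures quoted above side by side, as a back-of-the-envelope sketch (variable names are ours, prices as quoted in this post):

```python
# Per-GB monthly cost of local SSD, using the figures quoted above.
aws_local_ssd_per_gb = 675 / 800   # $675 for 800 GB, bundled with an i2 instance
gce_local_ssd_per_gb = 82 / 375    # $82 for 375 GB, attachable to any instance

print(round(aws_local_ssd_per_gb, 3))  # ~0.844 $/GB/month
print(round(gce_local_ssd_per_gb, 3))  # ~0.219 $/GB/month
print(round(aws_local_ssd_per_gb / gce_local_ssd_per_gb, 1))  # AWS is ~3.9x pricier
```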
AWS IO are expensive and inconsistent
EBS SSD volumes: IOPS, and P-IOPS
We are forced to pay for Provisioned-IOPS whenever we need dependable IO.
The P-IOPS are NOT really faster. They are slightly faster, but most importantly they have a lower variance (i.e. lower 90%-99.9% percentile latency). This is critical for some workloads (e.g. databases) because normal IOPS are too inconsistent.
Overall, P-IOPS can get very expensive, and they are pathetic compared to what any drive can do nowadays ($720/month for 10k P-IOPS, in addition to $0.14 per GB).
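Using only the figures above, a back-of-the-envelope sketch of what a database volume actually costs (the function is our own, prices as quoted):

```python
# Monthly cost of a provisioned-IOPS EBS volume, using the figures quoted
# above: $720 per 10k P-IOPS plus $0.14 per GB of storage.
def piops_volume_monthly_cost(size_gb, piops):
    return size_gb * 0.14 + piops * (720 / 10_000)

# A 1 TB volume with 10k P-IOPS comes out around $860/month.
print(piops_volume_monthly_cost(1000, 10_000))
```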
Local SSD storage
Local SSD storage is only available via the i2 instance family, which contains the most expensive instances on AWS (and across all clouds).
There is no granularity possible: CPU, memory and SSD storage all DOUBLE between the few i2.xxx instance types available. They grow in units of 4 CPU + 30 GB memory + 800 GB SSD, and each unit costs $765/month.
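A sketch of that ladder, using the unit figures from the text (the helper is illustrative, not an AWS API):

```python
# The i2 family doubles at each size. One "unit" is 4 CPU + 30 GB memory
# + 800 GB SSD at $765/month, per the figures quoted above.
def i2_specs(units):  # units = 1, 2, 4, 8 for the four i2.xxx sizes
    return {"cpu": 4 * units, "memory_gb": 30 * units,
            "ssd_gb": 800 * units, "monthly_usd": 765 * units}

# Needing 1.6 TB of SSD forces you to also buy 8 CPU and 60 GB of memory.
print(i2_specs(2))
```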
These limitations make local SSD storage expensive to use and special to manage.
AWS Premium Support is mandatory
The premium support is +10% on top of the total AWS bill (i.e. EC2 instances + EBS volumes + S3 storage + traffic fees + everything).
Handling spikes in traffic
ELB cannot handle sudden spikes in traffic. They need to be scaled manually by support beforehand.
An unplanned event is a guaranteed 5 minutes of unreachable site with 503 errors.
All resources are artificially limited by hardcoded quotas, which are very low by default. Limits can only be increased manually, one by one, by sending a ticket to support.
I cannot fully express the frustration when trying to spawn two c4.large instances (we already got 15) only to fail because “limit exhaustion: 15 c4.large in eu-central region”. Message support and wait a day of back-and-forth emails. Then try again, and fail again because “limit exhaustion: 5TB of EBS GP2 in eu-central region”.
This circus goes on every few weeks, sometimes hitting 3 limits in a row. There are limits for all resources: by region, by availability zone, by resource type and by resource-specific criteria.
Paying guarantees a 24h SLA for a reply to a limit ticket. Free-tier users might have to wait a week (maybe more), unable to work in the meantime. It is an absurd yet very real reason to pay for premium support.
Handling failures on the AWS side
There is NO log and NO indication of what’s going on in the infrastructure. Support is required whenever something goes wrong.
For example: an ELB started dropping requests erratically. After contacting support, they acknowledged having no idea what was going on, then took action: “Thank you for your request. One of the ELB was acting weird, we stopped it and replaced it with a new one”.
The issue was fixed. Sadly, they don’t provide any insight or meaningful information. This is a strong pain point for debugging and planning future failures.
Note: We are barring further managed services from being introduced into our stack. At first they were tried because they were easy to set up (read: limited human time and a bit of curiosity). They soon proved to cause periodic issues while being impossible to debug and troubleshoot.
ELB are unsuitable for many workloads
[updated paragraph after comments on HN]
ELB are only accessible via a hostname. The underlying IPs have a TTL of 60s and can change at any minute.
This makes ELB unsuitable for all services requiring a fixed IP and all services resolving the IP only once at startup.
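The failure mode can be sketched with a toy TTL-respecting resolver (the class and IPs are illustrative, not a real DNS client): a client that caches the first answer forever keeps sending traffic to an IP the ELB may have already abandoned.

```python
import time

class TtlResolver:
    """Cache a resolved IP only for its TTL, then resolve again."""
    def __init__(self, resolve_fn, ttl=60):
        self.resolve_fn = resolve_fn  # e.g. a real DNS lookup
        self.ttl = ttl                # ELB answers carry a 60s TTL
        self.cache = {}               # hostname -> (ip, expiry)

    def resolve(self, hostname, now=None):
        now = time.time() if now is None else now
        ip, expiry = self.cache.get(hostname, (None, 0.0))
        if now >= expiry:             # TTL expired: ask DNS again
            ip = self.resolve_fn(hostname)
            self.cache[hostname] = (ip, now + self.ttl)
        return ip
```

Services that resolve the IP once at startup implement the opposite of this, which is exactly why they break behind an ELB.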
ELB are impossible to debug when they fail (and they do fail), they can’t handle sudden spikes, and the CloudWatch graphs are terrible. (Truth be told, we are paying Datadog $18/month per node to entirely replace CloudWatch.)
Load balancing is a core aspect of high-availability and scalable design. Redundant load balancing is the next one. ELB are not up to the task.
The alternative to ELB is to deploy our own HAProxy pairs with VRRP/keepalived. It takes multiple weeks to set up properly and deploy in production.
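For reference, the VRRP half of that setup is conceptually small; a minimal keepalived sketch (interface name, router ID and VIP are placeholders) looks like:

```
vrrp_instance haproxy_vip {
    state MASTER            # BACKUP on the second node
    interface eth0          # illustrative interface name
    virtual_router_id 51
    priority 101            # lower (e.g. 100) on the backup node
    advert_int 1
    virtual_ipaddress {
        10.0.0.100/24       # the floating VIP in front of both HAProxy nodes
    }
}
```

The weeks go into everything around it: HAProxy configuration, health checks, provisioning, and actually testing failover in production.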
By comparison, we can achieve the same with Google load balancers in a few hours. A Google load balancer can have a single fixed IP. That IP can go from 1k/s to 10k/s requests instantly without losing traffic. It just works.
Note: Today, we’ve seen one service in production go from 500 requests/s to 15000 requests/s in less than 3 seconds. We don’t trust an ELB to be in the middle of that.
“Dedicated instances are Amazon EC2 instances that run in a virtual private cloud (VPC) on hardware that’s dedicated to a single customer. Your Dedicated instances are physically isolated at the host hardware level from your instances that aren’t Dedicated instances and from instances that belong to other AWS accounts.”
Dedicated instances/hosts may be mandatory for some services because of legal compliance, regulatory requirements and not-having-neighbours.
We have to comply with a few regulations, so we have a few dedicated options here and there. It’s 10% on top of the instance price (plus a $1500 fixed monthly fee per region).
Note: Amazon doesn’t explain in great detail what “dedicated” entails and doesn’t commit to anything clear. Strangely, no regulator has pointed that out so far.
Answer to HN comments: Google doesn’t provide “GCE dedicated instances”. There is no need for them. The trick is that regulators and engineers don’t complain about lacking something that doesn’t exist; they just live without it, and our operations get simpler.
Reserved Instances are bullshit
A reservation is attached to a specific region, availability zone, instance type, tenancy, and more. In theory a reservation can be edited; in practice that depends on what you want to change. Some combinations of parameters are editable, most are not.
Plan carefully and get it right on the first try, there is no room for errors. Every hour of a reservation will be paid along the year, no matter whether the instance is running or not.
For the most common instance types, it takes 8-10 months to break even on a yearly reservation. Think of it as a gambling game in a casino: a right reservation is -20% and a wrong reservation is +80% on the bill. You have to be right MORE than 4 out of 5 times to save any money.
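The expected value of that bet, using the -20%/+80% figures above (a sketch, not financial advice):

```python
# Expected saving from a 1-year reservation, per the figures in the text:
# a right reservation saves 20%, a wrong one costs 80% extra.
def expected_saving(p_right):
    return 0.20 * p_right - 0.80 * (1 - p_right)

print(expected_saving(0.8))   # ~0: break-even at exactly 4 right out of 5
print(expected_saving(0.9))   # positive: 9 out of 10 right finally saves money
print(expected_saving(0.5))   # a coin flip loses ~30% of the bill
```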
Keep in mind that reserved instances will NOT benefit from the regular price drop happening every 6-12 months. If there is a price drop early on, you’re automatically losing money.
Critical Safety Notice: a 3-year reservation is the most dramatic way to lose money on AWS. We’re talking a potential 5-digit loss here, per click. Do not go this route. Do not let your co-workers go this route without a warning.
What GCE does by comparison is a PURELY AWESOME MONTHLY AUTOMATIC DISCOUNT. Instance hours are counted at the end of every month and the discount is applied automatically (e.g. 30% for instances running 24/7). The algorithm also accounts for started/stopped/renewed instances, in a way that is STRONGLY in your favour.
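For reference, the sustained-use tiers as Google documented them at the time (each successive quarter of the month billed at 100%, 80%, 60% and 40% of the base rate) work out like this; the function is our own sketch:

```python
# GCE sustained-use discount, per the tiers documented at the time:
# each successive quarter of the month is billed at a lower rate.
def sustained_use_bill(base_monthly_price, fraction_of_month_used):
    rates = [1.00, 0.80, 0.60, 0.40]   # per quarter-month tier
    bill, remaining = 0.0, fraction_of_month_used
    for rate in rates:
        in_tier = min(remaining, 0.25)
        bill += base_monthly_price * in_tier * rate
        remaining -= in_tier
        if remaining <= 0:
            break
    return bill

print(sustained_use_bill(100, 1.0))   # full month: $70, i.e. 30% off automatically
print(sustained_use_bill(100, 0.5))   # half a month: $45 instead of $50
```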
Reserving capacity does not belong to the age of Cloud, it belongs to the age of data centers.
AWS Networking is sub-par
Network bandwidth allowance is correlated with the instance size.
The 1-2 core instances peak around 100-200 Mbps. This is very little in an ever more connected world where so many things rely on the network.
Typical things slowed down by the rate-limited networking:
Instance provisioning, OS install and upgrade
Docker/Vagrant image deployment
rsync/sftp/ftp file copying
Backups and snapshots
Load balancers and gateways
General disk read/writes (EBS is network storage)
Our most important backup takes 97 seconds to copy from the production host to another site. Half the time is spent saturating the network bandwidth (130 Mbps bandwidth cap); the other half is spent saturating the EBS volume on the receiving host (the file is buffered in memory during the initial transfer, then it’s 100% iowait against the EBS bandwidth cap).
The same backup operation would only take 10-20 seconds on GCE with the same hardware.
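A quick sanity check of those numbers (the 1 GB size and the gigabit figure are illustrative assumptions, not measurements from our systems):

```python
# Time to push a file through a bandwidth cap.
def transfer_seconds(size_gb, cap_mbps):
    return size_gb * 8000 / cap_mbps  # GB -> megabits, divided by Mbps

print(round(transfer_seconds(1.0, 130)))   # ~62 s for 1 GB at AWS's 130 Mbps cap
print(round(transfer_seconds(1.0, 1000)))  # ~8 s at gigabit-class speeds
```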
This post wouldn’t be complete without an instance to instance price comparison.
Hidden fees everywhere + unreliable capabilities = human time wasted in workarounds
Capacity planning and day to day operations
Capacity planning is unnecessarily hard with non-scalable resources, unreliable performance, insufficient granularity, and hidden constraints everywhere. Cost planning is a nightmare.
Every time we have to add an instance, we have to read the instances page, the pricing page and the EBS page again. There are way too many choices, some of which are hard to change later. Printed on paper, it could cover a 4x7-foot table. By comparison, it takes only one double-sided page to pick an appropriate instance from Google.
Optimizing usage is doomed to fail
The time spent optimizing reserved instances costs about as much as the savings it produces.
Between CPU count, memory size, EBS volume size, IOPS and P-IOPS, everything is over-provisioned on AWS: partly because there are too many things for a human being to follow and optimize, partly as a workaround for the inconsistent capabilities, and partly because some things are hard to fix later once an instance is live in production.
All these issues are directly related to the underlying AWS platform itself: it is not neat, and it cannot scale horizontally in a clean way, neither in hardware options, nor in hardware capabilities, nor cost-wise.
Every time we think about changing something to reduce costs, it is usually more expensive than NOT doing anything (when accounting for engineering time).
AWS has a lot of hidden costs and limitations. System capabilities are unsatisfying and cannot scale consistently. Choosing AWS was a mistake. GCE is always a better choice.
GCE is systematically 20% to 50% cheaper for the equivalent infrastructure, without having to do any thinking or optimization. Last but not least it is also faster, more reliable and easier to use day-to-day.
The future of our company
Unfortunately, our infrastructure on AWS is working and migrating is a serious undertaking.
I learned recently that we are a profitable company, more so than I thought. Ranked by revenue per employee, we’d be in the top 10. We are stuck with AWS for the near future and the issues will have to be worked around with lots of money. The company can cover the expenses, and cost optimisation isn’t a top priority at the moment.
There’s a saying, “throwing money at a problem”. We shall say “throwing houses at the problem” from now on, as it better represents the status quo.
If we get to keep growing at the current pace, we’ll have to scale vertically, and by that we mean “throwing buildings at Amazon” 😀