Docker in Production: A History of Failure


Introduction

My first encounter with docker goes back to early 2015. We experimented with Docker to find out whether it could benefit us. At the time it wasn’t possible to run a container [in the background] and there wasn’t any command to see what was running, debug or ssh into the container. The experiment was quick: Docker was useless, closer to an alpha prototype than a release.

Fast forward to 2016. New job, new company, and the docker hype is growing like mad. Developers here have pushed docker into production projects, so we’re stuck with it. On the bright side, the run command finally works; we can start, stop and see containers. It is functional.

We have 12 dockerized applications running in production as we write this article, spread over 31 hosts on AWS (1 docker app per host [note: keep reading to know why]).

The following article narrates our journey with Docker, an adventure full of dangers and unexpected turns.

so it begins, the greatest fuck up of our time

Production Issues with Docker

Docker Issue: Breaking changes and regressions

We ran all these versions (or tried to):

1.6 => 1.7 => 1.8 => 1.9 => 1.10 => 1.11 => 1.12

Each new version came with breaking changes. We started on docker 1.6 early this year to run a single application.

We updated 3 months later because we needed a fix only available in later versions. The 1.6 branch was already abandoned.

Versions 1.7 and 1.8 couldn’t run. We moved to 1.9 only to find a critical bug in it two weeks later, so we upgraded (again!) to 1.10.

There are all kinds of subtle regressions between Docker versions. It’s constantly breaking unpredictable stuff in unexpected ways.

The trickiest regressions we had to debug were network related. Docker entirely abstracts the host networking. It’s a big mess of port redirection, DNS tricks and virtual networks.

Bonus: Docker was removed from the official Debian repository last year, then the package got renamed from docker.io to docker-engine. Documentation and resources predating this change are obsolete.

Docker Issue: Can’t clean old images

The most requested and most lacking feature in Docker is a command to clean older images (older than X days or not used for X days, whatever). Space is a critical issue given that images are renewed frequently and they may take more than 1GB each.

The only way to clean space is to run this hack, preferably in cron every day:

docker images -q -a | xargs --no-run-if-empty docker rmi

It enumerates all images and removes them. The ones currently in use by running containers cannot be removed (the command simply errors out for those). It is dirty but it gets the job done.
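
For reference, a minimal sketch of what the daily cron entry could look like (assuming a cron.d style entry run as root and docker on the default path; adjust to taste):

# /etc/cron.d/docker-cleanup (hypothetical file name): remove every unused image, once a day at 3am
0 3 * * * root docker images -q -a | xargs --no-run-if-empty docker rmi 2>> /var/log/docker-cleanup.log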

The docker journey begins with a clean up script. It is an initiation rite every organization has to go through.

Many attempts can be found on the internet, none of which works well. There is no API to list images with dates; sometimes there is one, but it gets deprecated within 6 months. One common strategy is to read the date attribute from image files and call ‘docker rmi‘, but it fails when the naming changes. Another strategy is to read date attributes and delete files directly, but it causes corruption if not done perfectly, and it cannot be done perfectly except by Docker itself.

Docker Issue: Kernel support (or lack thereof)

There are endless issues related to the interactions between the kernel, the distribution, docker and the filesystem.

We are using Debian stable with backports, in production. We started running on Debian Jessie 3.16.7-ckt20-1 (released November 2015). This one suffers from a major critical bug that crashes hosts erratically (every few hours on average).

Linux 3.x: Unstable storage drivers

Docker has various storage drivers. The only one (allegedly) widely supported is AUFS.

The AUFS driver is unstable. It suffers from critical bugs provoking kernel panics and corrupting data.

It’s broken on [at least] all “linux-3.16.x” kernels. There is no cure.
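
A quick sanity check for which combination a host is actually running (docker info reports the storage driver; this only diagnoses, it doesn’t fix anything):

uname -r                                  # kernel version, e.g. 3.16.x on Debian Jessie
docker info | grep -i 'storage driver'    # aufs, devicemapper, overlay...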

We follow Debian and kernel updates very closely. Debian published special patches outside the regular cycle. There was one major bugfix to AUFS around March 2016. We thought it was THE TRUE ONE FIX but it turned out that it wasn’t. The kernel panics happened less frequently afterwards (every week, instead of every day) but they were still loud and present.

Once during the summer, a major update carried a regression that brought back a previous critical issue. It started killing CI servers one by one, with 2 hours on average between murders. An emergency patch was quickly released to fix the regression.

There were multiple fixes to AUFS published along the year 2016. Some critical issues were fixed but there are many more still left. AUFS is unstable on [at least] all “linux-3.16.x” kernels.

  • Debian stable is stuck on kernel 3.16. It’s unstable. There is nothing to do about it except switching to Debian testing (which can use kernel 4).
  • Ubuntu LTS is running kernel 3.19. There is no guarantee that the latest update fixes the issue. Changing our main OS would be a major disruption but we were so desperate that we considered it for a while.
  • RHEL/CentOS-6 is on kernel 2.x and RHEL/CentOS-7 is on kernel 3.10 (with many later backports done by RedHat).

Linux 4.x: The kernel officially dropped docker support

It is well-known that AUFS has endless issues and it’s regarded as dead weight by the developers. As a long-standing goal, the AUFS filesystem was finally dropped in kernel version 4.

There is no unofficial patch to support it, there is no optional module, there is no backport whatsoever, nothing. AUFS is entirely gone.

[dramatic pause]

.

.

.

How does docker work without AUFS then? Well, it doesn’t.

[dramatic pause]

.

.

.

So, the docker guys wrote a new storage driver, called overlay, on top of the OverlayFS filesystem.

“OverlayFS is a modern union filesystem that is similar to AUFS. In comparison to AUFS, OverlayFS has a simpler design, has been in the mainline Linux kernel since version 3.18 and is potentially faster.” — Docker OverlayFS driver

Note that it’s not backported to existing distributions. Docker never cared about [backward] compatibility.

Update after comments: Overlay is the name of both the kernel module to support it (developed by linux maintainers) and the docker storage driver to use it (part of docker, developed by docker). They are two different components [with a possible overlap of history and developers]. The issues seem mostly related to the docker storage driver, not the filesystem itself.
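
For what it’s worth, selecting the storage driver is a single setting. A sketch, assuming docker 1.12 or later for the daemon.json file and the dockerd binary (older Debian/Ubuntu packages take the same flag through DOCKER_OPTS instead):

# /etc/docker/daemon.json (docker 1.12+):  {"storage-driver": "overlay2"}
# equivalent daemon flag:
dockerd --storage-driver=overlay2
# DOCKER_OPTS="--storage-driver=aufs"     # in /etc/default/docker for sysvinit/upstart packages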

The debacle of Overlay

A filesystem driver is a complex piece of software and it requires a very high level of reliability. Long-time readers will remember the Linux migration from ext3 to ext4. It took time to write, more time to debug and an eternity to be shipped as the default filesystem in popular distributions.

Making a new filesystem in 1 year is an impossible mission. It’s actually laughable considering that the task is assigned to Docker, a company with a track record of instability and disastrous breaking changes: exactly what we don’t want in a filesystem.

Long story short. That did not go well. You can still find horror stories with Google.

Overlay development was abandoned within 1 year of its initial release.

[dramatic pause]

.

.

.

Then comes Overlay2.

“The overlay2 driver addresses overlay limitations, but is only compatible with Linux kernel 4.0 [or later] and docker 1.12” — Overlay vs Overlay2 storage drivers

Making a new filesystem in 1 year is still an impossible mission. Docker just tried and failed. Yet they’re trying again! We’ll see how it turns out in a few years.

Right now it’s not supported on any systems we run. We can’t use it, we can’t even test it.

Lesson learnt: As you can see with Overlay and then Overlay2: no backport, no patch, no backward compatibility. Docker only moves forward and breaks things. If you want to adopt Docker, you’ll have to move forward as well, following the releases from docker, the kernel, the distribution, the filesystems and some dependencies.

Bonus: The worldwide docker outage

On 02 June 2016, at approximately 9am (London time), new repository keys are pushed to the docker public repository.

As a direct consequence, any run of “apt-get update” (or equivalent) on a system configured with the broken repo will fail with the error “Error https://apt.dockerproject.org/ Hash Sum mismatch”.

This issue is worldwide. It affects ALL systems on the planet configured with the docker repository. It is confirmed on all Debian and Ubuntu versions, independent of OS and docker versions.

All CI pipelines in the world which rely on docker setup/update or a system setup/update are broken. It is impossible to run a system update or upgrade on an existing system. It’s impossible to create a new system and install docker on it.

After a while, we get an update from a docker employee: “To give an update; I raised this issue internally, but the people needed to fix this are in the San Francisco timezone [8 hours difference with London], so they’re not present yet.”

I personally announce that internally to our developers. Today, there is no Docker CI and we can’t create new systems nor update existing systems which have a dependency on docker. All our hope lies with a dude in San Francisco, currently sleeping.

[pause waiting for the fix, that’s when free food and drinks come in handy]

An update is posted from a Docker guy in Florida at around 3pm (London Time). He’s awake, he’s found out the issue and he’s working on the fix.

Keys and packages are republished later.

We try and confirm the fix at around 5pm (London Time).

That was a 7-hour interplanetary outage because of Docker. All that’s left from the outage is a few messages on a GitHub issue. There was no postmortem. It received little (if any) tech news or press coverage, in spite of the catastrophic failure.

Docker Registry

The docker registry stores and serves docker images.

Automatic CI build  ===> (on success) push the image to ===> docker registry
Deploy command <=== pull the image from <=== docker registry

There is a public registry operated by docker. As an organization, we also run our own internal docker registry. It’s a docker image running inside docker on a docker host (that’s quite meta). The docker registry is the most used docker image.

There are 3 versions of the docker registry. The client can pull indifferently from any of them.
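
For the record, running the internal registry and wiring it into the flow above only takes a handful of commands. A sketch; the hostname registry.internal and the image name myapp are made up, and a real setup needs TLS and authentication on top:

# run the registry v2 as a container on the docker host
docker run -d -p 5000:5000 --restart=always --name registry registry:2

# CI (on success): tag the freshly built image and push it
docker build -t registry.internal:5000/myapp:1.42 .
docker push registry.internal:5000/myapp:1.42

# deploy: pull and run the image on the target host
docker pull registry.internal:5000/myapp:1.42
docker run -d registry.internal:5000/myapp:1.42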

Docker Registry Issue: Abandon and Extinguish

The docker registry v2 is a full rewrite. The registry v1 was retired soon after the v2 release.

We had to install a new thing (again!) just to keep docker working. They changed the configuration, the URLs, the paths, the endpoints.

The transition to the registry v2 was not seamless. We had to fix our setup, our builds and our deploy scripts.

Lesson learnt: Do not rely on any docker tool or API. They are constantly abandoned and extinguished.

One of the goals of the registry v2 is to bring a better API. It’s documented here, documentation that we don’t remember existing 9 months ago.

Docker Registry Issue: Can’t clean images

It’s impossible to remove images from the docker registry. There is no garbage collection either; the doc mentions one but it’s not real. (The images do have compression and de-duplication but that’s a different matter).

The registry just grows forever. Our registry can grow by 50 GB per week.

We can’t have a server with an unlimited amount of storage. Our registry ran out of space a few times, unleashing hell in our build pipeline, then we moved the image storage to S3.

Lesson learnt: Use S3 to store images (it’s supported out-of-the-box).
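
A sketch of what that looks like with the registry v2, which can be configured through environment variables (the bucket and region below are placeholders, credentials are better provided via an IAM role; double check the exact variable names against the registry documentation for your version):

# same registry container, but backed by S3 instead of the local disk
docker run -d -p 5000:5000 --restart=always --name registry \
  -e REGISTRY_STORAGE=s3 \
  -e REGISTRY_STORAGE_S3_REGION=eu-west-1 \
  -e REGISTRY_STORAGE_S3_BUCKET=our-docker-images \
  registry:2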

We performed a manual clean-up 3 times in total. In all cases we had to stop the registry, erase all the storage and start a new registry container. (Luckily, we can re-build the latest docker images with our CI).

Lesson learnt: Deleting any file or folder manually from the docker registry storage WILL corrupt it.

To this day, it’s not possible to remove an image from the docker registry. There is no API for it either. (One of the points of the v2 was to have a better API. Mission failed).

Docker Issue: The release cycle

The docker release cycle is the only constant in the Docker ecosystem:

  1. Abandon whatever exists
  2. Make new stuff and release
  3. Ignore existing users and backward compatibility

The release cycle applies to, but is not limited to: docker versions, features, filesystems, the docker registry, all APIs…

Judging by the past history of Docker, we can approximate that anything made by Docker has a half-life of about 1 year, meaning that half of what exists now will be abandoned [and extinguished] in 1 year. There will usually be a replacement available that is not fully compatible with what it’s supposed to replace, and that may or may not run on the same ecosystem (if at all).

“We make software not for people to use but because we like to make new stuff.” — Future Docker Epitaph

The current status quo on Docker in our organization

Growing in web and micro services

Docker first came in through a web application. At the time, it was an easy way for the developers to package and deploy it. They tried it and adopted it quickly. Then it spread to some micro services, as we started to adopt a micro services architecture.

Web applications and micro services are similar. They are stateless applications, they can be started, stopped, killed, restarted without thinking. All the hard stuff is delegated to external systems (databases and backend systems).

The docker adoption started with minor new services. At first, everything worked fine in dev, in testing and in production. The kernel panics slowly began to happen as more web services and web applications were dockerized. The stability issues became more prominent and impactful as we grew.

A few patches and regressions were published over the year. We’ve been playing catchup & workaround with Docker for a while now. It is a pain but it doesn’t seem to discourage people from adopting Docker. Support and demand are still growing inside the organisation.

Note: None of the failures ever affected any customer or funds. We are quite successful at containing Docker.

Banned from the core

We have some critical applications running in Erlang, managed by a few guys in the ‘core’ team.

They tried to run some of their applications in Docker. It didn’t work. For some reason, Erlang applications and docker didn’t get along.

It was done a long time ago and we don’t remember all the details. Erlang has particular ideas about how the system/networking should behave and the expected load was in thousands of requests per second. Any instability or incompatibility could cause an outstanding failure. (We know for sure now that the versions used during the trial suffered from multiple major instability issues).

The trial raised a red flag. Docker is not ready for anything critical. It was the right call. The later crashes and issues managed to confirm it.

We only use Erlang for critical applications. For example, the core guys are responsible for a payment system that handled $96,544,800 in transactions this month. It includes a couple of applications and databases, all of which are under their responsibility.

Docker is a dangerous liability that could put millions at risk. It is banned from all core systems.

Banned from the DBA

Docker is meant to be stateless. Containers have no permanent disk storage, whatever happens is ephemeral and is gone when the container stops. Containers are not meant to store data. Actually, they are meant by design to NOT store data. Any attempt to go against this philosophy is bound to disaster.

Moreover, Docker locks away processes and files behind its abstraction; they are unreachable as if they didn’t exist. It prevents any sort of recovery if something goes wrong.

Long story short. Docker SHALL NOT run databases in production, by design.

It gets worse than that. Remember the ongoing kernel panics with docker?

A crash would destroy the database and affect all systems connecting to it. It is an erratic bug, triggered more frequently under intensive usage. A database is the ultimate IO-intensive load; that’s a guaranteed kernel panic. Plus, there is another bug that can corrupt the docker mount (destroying all data) and possibly the system filesystem as well (if they’re on the same disk).

Nightmare scenario: The host is crashed and the disk gets corrupted, destroying the host system and all data in the process.

Conclusion: Docker MUST NOT run any databases in production, EVER.

Every once in a while, someone will come and ask “why don’t we put these databases into docker?” and we’ll tell some of our numerous war stories. So far, no-one has asked twice.

Note: We started going over our Docker history as an integral part of our onboarding process. That’s the new damage control philosophy: kill the very idea of docker before it gets any chance to grow and kill us.

A Personal Opinion

Docker is gaining momentum, there is some crazy fanatic support out there. The docker hype is not only a technological liability any more, it has evolved into a sociological problem as well.

The perimeter is controlled at the moment, limited to some stateless web applications and micro services. It’s unimportant stuff; it can be dockerized and crash once a day, I do not care.

So far, all people who wanted to use docker for important stuff have stopped after a quick discussion. My biggest fear is that one day, a docker fanatic will not listen to reason and keep pushing. I’ll be forced to block him and it might not be pretty.

Nightmare scenario: The future accounting cluster revamp, currently holding $23M in customer funds (the M is for million dollars). There is already one guy who genuinely asked the architect “why don’t you put these databases into docker?”; there is no word to describe the face of the architect.

My duty is to customers. Protecting them and their money.

Surviving Docker in Production

[GIF: What docker pretends to be.]
[GIF: What docker really is.]

Follow releases and change logs

Track versions and change logs closely for kernel, OS, distributions, docker and everything in between. Look for bugs, hope for patches, read everything with attention.

ansible '*' -m shell -a "uname -a"

Let docker crash

Let docker crash. Self-explanatory.

Once in a while, we look at which servers are dead and we force reboot them.
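
In practice that’s a couple of ansible one-liners (a sketch; the group name docker-hosts and the host placeholder are ours to substitute):

# find the dead ones: hosts that stop answering the ping module are reboot candidates
ansible docker-hosts -m ping

# force reboot a wedged host (privilege escalation required)
ansible <dead-host> -b -m shell -a "reboot"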

Have 3 instances of everything

High availability requires at least 2 instances per service, to survive a single instance failure.

When using docker for anything remotely important, we should have 3 instances of it. Docker dies all the time; we need a margin of error to survive 2 crashes in a row on the same service.

Most of the time, it’s CI or test instances that crash. (They run lots of intensive tests, so the issues are particularly visible there.) We’ve got a lot of these. Sometimes there are 3 of them crashing in a row in an afternoon.

Don’t put data in Docker

Services which store data cannot be dockerized.

Docker is designed to NOT store data. Don’t go against it, it’s a recipe for disaster.

On top of that, there are current issues that kill the server and potentially destroy the data, so it’s really a big no-go.

Don’t run anything important in Docker

Docker WILL crash. Docker WILL destroy everything it touches.

It must be limited to applications which can crash without causing downtime. That means mostly stateless applications, that can just be restarted somewhere else.

Put docker in auto scaling groups

Docker applications should be run in auto-scaling groups. (Note: We’re not fully there yet).

Whenever an instance crashes, it’s automatically replaced within 5 minutes. No manual action required. Self healing.

Future roadmap

Docker

The impossible challenge with Docker is to come up with a working combination of kernel + distribution + docker version + filesystem.

Right now, we don’t know of ANY combination that is stable (maybe there isn’t any?). We actively look for one, constantly testing new systems and patches.

Goal: Find a stable ecosystem to run docker.

It takes 5 years to make good and stable software. Docker v1.0 is only 28 months old; it didn’t have time to mature.

The hardware renewal cycle is 3 years, the distribution release cycle is 18-36 months. Docker didn’t exist in the previous cycle so systems couldn’t consider compatibility with it. To make matters worse, it depends on many advanced system internals that are relatively new and didn’t have time to mature either, nor reach the distributions.

It could be decent software in 5 years. Wait and see.

Goal: Wait for things to get better. Try to not go bankrupt in the meantime.

Use auto scaling groups

Docker is limited to stateless applications. If an application can be packaged as a Docker Image, it can be packaged as an AMI. If an application can run in Docker, it can run in an auto scaling group.

Most people ignore this, but Docker is useless on AWS and it is actually a step back.

First, the point of containers is to save resources by running many containers on the same [big] host. (Let’s ignore for a minute the current docker bug that is crashing the host [and all running containers on it], forcing us to run only 1 container per host for reliability).

Thus containers are useless on cloud providers. There is always an instance of the right size. Just create one with appropriate memory/CPU for the application. (The minimum on AWS is t2.nano which is $5 per month for 512MB and 5% of a CPU).

Second, the biggest gain of containers comes when there is a complete orchestration system around them to automatically manage creation/stop/start/rolling-update/canary-release/blue-green-deployment. The orchestration systems to achieve that currently do not exist. (That’s where Nomad/Mesos/Kubernetes will eventually come in; they are not good enough in their present state).

AWS has auto scaling groups to manage the orchestration and life cycle of instances. It’s a tool completely unrelated to the Docker ecosystem yet it can achieve a better result with none of the drawbacks and fuck-ups.

Create an auto-scaling group per service and build an AMI per version (tip: use Packer to build AMI). People are already familiar with managing AMI and instances if operations are on AWS, there isn’t much more to learn and there is no trap. The resulting deployment is golden and fully automated. A setup with auto scaling groups is 3 years ahead of the Docker ecosystem.
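
A rough sketch of that setup with the AWS CLI, assuming an AMI already baked by Packer (every name and ID below is a placeholder):

# one launch configuration per application version, from the Packer-built AMI
aws autoscaling create-launch-configuration \
  --launch-configuration-name myapp-1-42 \
  --image-id ami-0123456789abcdef0 \
  --instance-type t2.small

# one auto scaling group per service: 3 instances, spread over zones, self healing
aws autoscaling create-auto-scaling-group \
  --auto-scaling-group-name myapp \
  --launch-configuration-name myapp-1-42 \
  --min-size 3 --max-size 3 --desired-capacity 3 \
  --vpc-zone-identifier "subnet-aaaa,subnet-bbbb,subnet-cccc"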

Goal: Put docker services in auto scaling groups to have failures automatically handled.

CoreOS

Update after comments: Docker and CoreOS are made by separate companies.

To cut Docker some slack for once, it requires and depends on a lot of new advanced system internals. A classic distribution cannot upgrade system internals outside of major releases, even if it wanted to.

It makes sense for docker to have (or be?) a special purpose OS with an appropriate update cycle. It may be the only way to have a working bundle of kernel and operating system able to run Docker.

Goal: Trial the CoreOS ecosystem and assess stability.

In the grand scheme of operations, it’s doable to separate servers for running containers (on CoreOS) from normal servers (on Debian). Containers are not supposed to know (or care) about what operating systems they are running.

The hassle will be to manage the new OS family (setup, provisioning, upgrade, user accounts, logging, monitoring). No clue how we’ll do that or how much work it might be.

Goal: Deploy CoreOS at large.

Kubernetes

One of the [future] major breakthroughs is the ability to manage fleets of containers abstracted away from the machines they end up running on, with automatic start/stop/rolling-update and capacity adjustment.

The issue with Docker is that it doesn’t do any of that. It’s just a dumb container system. It has the drawbacks of containers without the benefits.

There is currently no good, battle-tested, production-ready orchestration system in existence.

  • Mesos is not meant for Docker
  • Docker Swarm is not trustworthy
  • Nomad has only the most basic features
  • Kubernetes is new and experimental

Kubernetes is the only project that intends to solve the hard problems [around containers]. It is backed by resources that none of the other projects have (i.e. Google has long experience of running containers at scale, a Googley amount of resources at its disposal, and it knows how to write working software).

Right now, Kubernetes is young & experimental and it’s lacking documentation. The barrier to entry is painful and it’s far from perfection. Nonetheless, it is [somewhat] working and already benefiting a handful of people.

In the long-term, Kubernetes is the future. It’s a major breakthrough (or to be accurate, it’s the final brick that is missing for containers to be a major [r]evolution in infrastructure management).

The question is not whether to adopt Kubernetes; the question is when to adopt it.

Goal: Keep an eye on Kubernetes.

Note: Kubernetes needs docker to run. It’s gonna be affected by all docker issues. (For example, do not try Kubernetes on anything other than CoreOS).

Google Cloud: Google Container Engine

As we said before, there is no known stable combination of OS + kernel + distribution + docker version, thus there is no stable ecosystem to run Kubernetes on. That’s a problem.

There is a potential workaround: Google Container Engine. It is a hosted Kubernetes (and Docker) as a service, part of Google Cloud.

Google has to solve the Docker issues to offer what they are offering; there is no alternative. Incidentally, they might be the only guys who can find a stable ecosystem around Docker, fix the bugs, and sell that ready-to-use as a cloud managed service. We might have a shared goal for once.

They already offer the service, which should mean that they have already worked around the Docker issues. Thus the simplest way to have containers working in production (or at all) may be to use Google Container Engine.
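
Spinning up a hosted Kubernetes cluster there is nearly a one-liner with the gcloud CLI (a sketch; the cluster name and zone are placeholders, and the exact flags may differ across gcloud versions):

# create a 3-node Container Engine cluster and point kubectl at it
gcloud container clusters create our-cluster --zone europe-west1-b --num-nodes 3
gcloud container clusters get-credentials our-cluster --zone europe-west1-b
kubectl get nodes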

Goal: Move to Google Cloud, starting with our subsidiaries not locked in on AWS. Ignore the rest of the roadmap as it’s made irrelevant.

Google Container Engine: One more reason why Google Cloud is the future and AWS is the past (on top of 33% cheaper instances with 3 times the network speed and IOPS, on average).


Why docker is not yet succeeding in production, July 2015, from the Lead Production Engineer at Shopify.

Docker is not ready for primetime, August 2016.

Docker in Production: A retort, November 2016, a response to this article.

How to deploy an application with Docker… and without Docker, An introduction to application deployment, The HFT Guy.


Disclaimer (please read before you comment)

A bit of context missing from the article. We are a small shop with a few hundred servers. At the core, we’re running a financial system moving around several million dollars per day (or billions per year).

It’s fair to say that we have higher expectations than average and we take production issues rather (too?) seriously.

Overall, it’s “normal” that you didn’t experience all of these issues if you’re not using docker at scale in production and/or if you didn’t use it for long.

I’d like to point out that these are issues and workarounds that happened over a period of [more than] a year, summarized all together in a 10-minute read. That does amplify the dramatic and painful aspect.

Anyway, whatever happened in the past is already in the past. The most important section is the Roadmap. That’s what you need to know to run Docker (or use auto scaling groups instead).

139 thoughts on “Docker in Production: A History of Failure”

  1. Our stable setup: we’ve been running docker in production for months at AWS with all images based on alpine linux with s6-overlay and s6 secure supervision software suite. Images are stored in ECR. We have 5 ECS clusters managed with AutoScaling. An EC2 host may run multiple docker containers. All public facing services use ephemeral ports, so we can deploy a new version which will temporarily run 2 versions of the same service without any port conflicts. We use ELB Application load balancers. All logs (from s6-log) and all docker logging goes straight to CloudWatch Logs. The data stores are managed by AWS (RDS Aurora, Elasticache, Redis). Everything is encrypted at rest and in flight. Everything is self healing — I can forcefully kill any service (even the database writer node), and it will survive and return to a healthy stable state within minutes. We have an additional requirement that our software also needs to be able to run on site for various customers, which couldn’t have been achieved (easily) weren’t it for docker. Certainly not if you go down the AMI path…

    I believe you went through a lot of pain, but I think you unfortunately didn’t dig in the right direction for reasons I can only guess at.

    > Docker WILL crash. Docker WILL destroy everything it touches.

    It never crashes for us. Even then, yes, you should architect that it WILL crash, that’s right. We crash it on purpose during tests, in staging we have a “chaos monkey” process (from Netflix) that randomly kills a node every now and then, and with proper management the system survives those crashes. Has nothing to do with docker though.

  2. @Hein Bloed We off-load management of the database(s) to the customer. So we say “you own the database”. Which imo is the only valid option: the customer *must* own their own data and take full responsibility for it — like this article pointed out, you’d be crazy if you tried to deliver this via a docker container. Not because of docker, but simply because of the principle of it. So basically they provide the credentials and optionally the SSL certs to the docker containers that need a connection with those database(s). RDS Aurora is binary compatible with mysql and mariadb, so on site customers use that. Elasticache is a very simple service that we deliver via an additional docker image. Same for load balancers: additional docker images with nginx. Some customers run their own load balancers already, so it’s only a matter of adding a small bit of configuration to their existing infra.

  3. By the way, locally in development we do run mariadb via a docker container (as that gets a new devhead up-and-running in no-time), and all the other necessary components. So completely outside of AWS. Uptime of my local mariadb container is currently at 2 months, never had an issue. But, as said, I would not run a database inside a docker container in production mode. Though, who knows, maybe AWS run their RDS services using docker containers.

  4. Hear, hear! Every goddamned docker storage driver is a joke. Every. Single. One.

    Docker hub is a joke. “It’s not a reliable service,” says the rebuttal. But it’s a PAID service so it should be reliable.

    Then there are the random bugs which get fixed, but not backported.

    I shudder to consider anyone who considers trusting their data to a Docker volume. Have you read the migration guides from aufs -> devicemapper, or from overlay to overlay2?

    I just…don’t get Docker. Devs love it because they can run docker-machine and docker-compose on their laptop and everything is “peachy” because links. Well, until you mix Linux lovers with Mac Lovers. Then you have an entirely different mix of port forwarding failures. But it doesn’t solve the real problems that developers praise Docker for solving: that of forcing artifacts into your workflow.

    Just force artifacts into your workflow. Build packages. Create AMIs. Make immutable objects. But trust your systems guys when it comes to wholesale adoption of Docker. It’s not ready, and indeed it’s a step back.

    Orchestration just doesn’t exist. Yes, there’s Kubernetes. But it’s a six-month endeavor to wrap your brain around Kubernetes on anything other than kube-up in Google Compute Cloud. Perhaps the recent versions post-1.3 fix that, but I doubt it. Docker Swarm? LOL.

    ECS doesn’t require special software-defined networking but it is horrendously slow at blue/green rollout. It also revs very quickly, just like Docker.

    CoreOS uses (or at least used) btrfs. I have yet to recover a corrupted btrfs filesystem and it’s only a matter of time before something goes catastrophically wrong.

    The problem with Docker is that it doesn’t solve a problem seamlessly; especially if you can automate all of your systems on AWS. Consider the last couple of major, “disruptive” technologies that shifted IT. VMware was widely adopted and normal businesses succeeded with VMware because it seamlessly solved problems. You could P2V a hardware server and it would just work. VMFS just worked. VMware DRS just worked. VMware HA just…worked. You got a basic PC with 1990s Intel BX440 hardware with American Megatrends BIOS. With SRV-IO, etc., things only got better.

    Then came the API! AWS requires a steeper learning curve but if you stick things into ASGs and CloudFormation then your infrastructure just works. You get run-of-the-mill x86 EC2 instances (or Lambda, if you want to do the serverless thing), programmable with easy languages like Ruby and Python. Seriously, we can stop here.

    With Docker, you have to bend everything to fit into a container, and Dockerfiles are like bad JCL or BATCH scripts from the 1980s. And it doesn’t just work. It works on the developer’s machine. When you move it to production, you run into a horde of networking, storage and resource management problems (and, maybe, at higher scale, kernel panics, though I have yet to find them). Your average dev hasn’t a f*cking clue about networking or storage design.

    Plus running containers on an ECS cluster basically means one big game of whack-a-mole, but instead of having semi-predictable server names, you’re hopelessly stuck trying to figure out which UUID belongs to which task, which belongs to which service definition, and ultimately where the hell the container is even *running*. It’s like we literally reinvented the hypervisor and did a poorer job of it.

    Don’t get me started on logs and monitoring. Both deplorable.

    Configuration management, bemoaned by many a dev (and quite a few sysadmins) doesn’t go away. You still need it for your Docker hosts. You could use it with your containers (though I haven’t done that yet; just seems weird). And you’re absolutely right about banning Docker from the core and the DBA. You need configuration management there, too.

    I’ve been supporting Docker in production for about seven or eight months now and it’s been dreadful. Like I-want-to-look-for-another-job dreadful. Too bad all the $JOBS mention Docker.

  5. I see docker as really an application deployment solution that is geared toward immutable, non-super-critical workload. It is quite good at application deployment and tie nicely to orchestration.

    For OP use case, it doesn’t make much sense because his #1 concern is stability; docker ain’t it because it means you introduce more complexity and more point of failure.

    • We have some deployment problems to solve -like everyone- and we’re an environment where downtime is bad -like everyone who operate real world software-. Whatever we adopt will have some challenges to make it work nicely, it’s fine, DevOps/SRE is a job for a reason.

      None of this is an excuse for Docker -or any application- to crash every day or to break on each mandatory update.

  6. Nice article. I’ve been feeling the pressure to find a problem that docker can solve for a while now and every time I look at it, it just seems like using a real vm is easier and more manageable. At the same time it does look like cloud services will eventually force containers on all of us so I hope these problems are just early growing pains, and not just the new normal.

    • A Docker image is really just a glorified .zip or .deb file. Except it requires a massive amount of tooling and experience to be usable.

      There was supposed to be some Docker tooling to help with hard problems that zip/deb files don’t handle, like multi-system deployments and auto-scaling. Except it’s totally immature and experimental work in progress. It doesn’t live up to the promises. It will be available some days, hopefully.

    • I’m not sure why you’d feel pressure to find a problem Docker can solve; it does what it does and nothing is useful for everything.

      The main problem Docker solves for me in my little corner of the world is unusual, but perhaps instructive. As well as a fairly standard cloud infrastructure (in AWS and GCP), my company has a number of racks in various countries with physical servers and dedicated connections directly to other companies in those racks. Applications are partitioned by whom they talk to: applications A1 and A2 talk to company A over a dedicated link, application B1 talks to company B over a separate dedicated link, and so on. It’s considerably easier and less resource intensive to have a single VM for apps A1 and A2, since they share a network configuration (connectivity to the Internet and company A, but definitely not connectivity to company B or any other companies) and then partition the two apps themselves using Docker. That gives me separate deployment for the apps, without them interfering with each other, while using fewer VMs and having to do less VM configuration.

  7. Docker daemon doesn’t even run on a hardened host kernel, unless you lower the security of the host by disabling MPROTECT either on bin and lib, or for the whole system. Both are not desirable with docker running as root. For me (personally), docker is a “fancy chroot” that does a bad job at almost anything. It’s nice that it uses zfs features with snapshots etc. but a simple bash/ksh-script is easier to debug and can provide the same result for smaller installations. For larger installations: virtual machines are well-understood, maybe throw out useless stuff from distribution systems (alsa in a server? come on…) rather than re-inventing the wheel. There are already tools for managing VMs, been there for many years. Even google has one: ganeti.

    Crashing instances: No-Go for me. It’s either the host crashing, then we usually have a very serious hardware (which usually boils down to parts failing) or kernel issue, or the application crashing, which we can debug (or rather, the devs can). Imagine servers from a specific vendor randomly crash all the time, they wouldn’t stay in the racks for too long and be replaced by something that works. Oh, looks like we’re back at bare metal, virtual machines and LPARs. Too bad the decisions are not made by the people who are on-call.

    • People on-call have been tricked by Docker too.

      Seems fine during a quick evaluation (i.e. not crashing). Then the projects is too deep in Docker when it goes to production and the major issues are popping up one by one.

  8. I guess ( not 100% sure ) Google uses rkt their own open container initiative implementation ( Google is major invenstor of CoreOS ) on their own Linux OS supported by the Linux Kernel Which btw Google contributes most of the code ( can be easily checked that they are one of the top contributors ) so I agree with the comment Google CaaS but as Google advertises Google Infrastructure For Everybody Else ( GIFEE ) my recommendation play with CoreOS more seriously and Tektonik for your solution if you prefer on premises infra to have a better competitive advantage. For the debugging the use of systemd is beneficial and also magnificent tools like sysdig , Prometheus etc just simply work. I feel your pain you work on a demanding industry perhaps a visit to KubeCon or DockerCon can be helpful to speak with others which they have massive apps since big companies also from financial institutes also use similar technologies with thousands of nodes not to mention even platforms built in top like meteor framework ( everything in containers ) , Pokémon , Bloomberg , SAP, some cable operators in US etc.

    Also please note containers fit well with microservices architectures so the 12factor rule is beneficial to have it in mind also in such architectures . There is a reason why chaos monkeys have been developed by companies which use micro service apps . It is a different mentality.

    Lastly I have met personally Solomon Hykes , Jessie Frazelle and the CoreOS team Brandon Phillips , Brian Redbread and Kelsey Hightower and legends like Loris Degioanni ( tcpdump winpcap sysdig ) and many more from genuine contributors.

    Behind the name and the companies there are people and believe me having worked in the industry for 20+ years it is rare to see so much passionate people which have Linux in their blood , mind and DNA but most importantly in their heart !!! Combined with open source mentality. They are true hackers so it is not fair imho such a negative attitude. The whole idea of open source is that people work together to improve things mentioning VMware is similar to the early days where people where comparing Linux with Windows. Thanks for your time for reading my small 2cents !

    • Pokemon is running on GKE and they get a lot of custom support from Google employees. They are not powered by Docker.

      SAP, there are employees from SAP who commented on this article, one who is trying stuff on Docker, one who is trying stuff on Cloud Foundry. Like all big companies, they are running a billion things and a random quote of a technology is not representative of what runs the actual business.

      Bloomberg, who is also in London have none of their core technologies on Docker, for good reasons. I could argue that the whole point of this article is to help serious companies, like Bloomberg, stay away from Docker because it’s way too immature and it’s gonna backfire pretty bad. (Worst case scenario: I’m pretty clear on what’s known to break catastrophically, you play leading edge crazy if you want, you’ve been warned what you’re heading to).

      The conferences I attended were disappointing. It’s a shit show of vendors trying to shovel their things onto you.

Worst of all, the ecosystem is so immature and so competitive that the companies are constantly fighting each other to redo the tools and undo the marketing from their competitors.

      Have fun explaining that you are gonna run Docker and Kubernetes, while the Docker CEO goes on stage to say that Kubernetes is a component that is not needed anymore, now that Docker has Swarm and THAT IS THE NEXT BIG THING that everyone is already using in production at big-co (#read: a guy from SAP ran docker compose once).

      • I will for sure there is no point to continue answering with facts with this attitude. Have a nice day.

  9. For docker cleanup there is: https://github.com/meltwater/docker-cleanup which you can run as a single container per cluster. And as of docker 1.13 there is also `docker system prune`: https://docs.docker.com/engine/reference/commandline/system_prune/

    For registry we also use ECR similar to @Bernard Soh. ECR image cleanup can be done via container based approach such as https://github.com/trek10inc/ecr-cleaner or lambdas: https://github.com/awslabs/ecr-cleanup-lambda

    • >>> And as of docker 1.13 there is also `docker system prune`

You can thank this article and the 100k readers for finally pushing Docker into providing that 😉

      As usual, it didn’t get back-ported, as usual, it’s still missing in the docker registry.

    • Disclaimer: You work at Pivotal.

      It’s complicated. It was mentioned then some people complained that this doesn’t run on Docker, it can but it doesn’t have to, as they have containerization technology that predates Docker, that one may or may not use, voluntarily or non voluntarily. So I removed Cloud Foundry from the article, then people are now complaining that it’s not mentioned. The word “people” includes both employees and non-employees, both in various flavors.

      I lack the extensive experience on Cloud Foundry to clarify anything about it so I’m keeping it off for now. The article is long and there is enough competitions and confusion around container ecosystems to not bother adding more. Sorry guys.

      This is surely making some marketing guys crazy @ Pivotal because they love to target big institutions, including banks and finance, which is also a target of this article, that doesn’t bother mentioning Cloud Foundry :p

  10. Also I agree the Docker the tool is immature in the sense that they break API contracts quite frequently I don’t agree that the container technology itself is immature and “does not solve any problems”.
    You say
    “A Docker image is really just a glorified .zip or .deb file. Except it requires a massive amount of tooling and experience to be usable.”

    So what’s the problem? Even if you don’t want to use Docker (the software), you will be still able to use the docker images in the future even without docker.

    The point is also that with containers you can ship services which need to be co-located on the same host , for performance reasons for example, or because they need to the same hardware resources (GPU?) in an efficient way (layers, where are those with VMs?).
    You also argument that AWS provides better solutions (currently) for some problems, which is fine, but get locked in into thier eco system. If you don’t mind fine go and use AWS, but others want to have some chance to run their containers somewhere else without too much effort (maybe even on-premise.

    And yes in principle VMs can be used to ship services in a unified way, Netflix AFAIK did that for a while, but the overhead is so much bigger compared to containers that IMHO it just doesn’t make sense to not use containers for am application based on a micro service architecture.

    • >>> You also argument that AWS provides better solutions (currently) for some problems, which is fine, but get locked in into thier eco system. If you don’t mind fine go and use AWS, but get locked in into thier eco system.

      The tooling on AWS and Google is many years ahead compared to what is available elsewhere, it takes much less effort to run, it’s better debugged and supported.

      If you can’t use them (maybe running on your own private datacenters?), you’ll need a lot of tooling and not just for orchestration. It’s a lot of hard work.

      >>> The point is also that with containers you can ship services which need to be co-located on the same host

      You don’t need containers to ship services, let alone on the same host.

      • My point was that switching between AWS and google is difficult if you use their proprietary services.

        “You don’t need containers to ship services, let alone on the same host.”
        Sure you don’t need containers to do that, but it is much easier. For example no problems with overlapping ports, much less scripting needed (chef, ansible etc), because for developers you can just “hardcode” hostnames for the services in your docker-compose file (if needed).

        As bmullan already mentioned, it is not that important whether Docker itself survives, what is more important is that a common container image format survives.

        • > For example no problems with overlapping ports, much less scripting needed

          Docker solves none of that. You need to specify ports to expose and write the script to setup the software (inside the container).

          It seems you already have a lot of engineering done around Docker to make it work.

          • For development purposes running several services on one machine I do not need to map all the ports. E.g. I run for example one tomcat for the frontend at port 8080 and I run another tomcat for an internal service on 8080 as well inside containers. There is no need to have some scripting that assigns non overlapping ports to the different services. I can also run an older version of a service easily and even in parallel. devops Tools such as Chef do not have very good support for running different versions of a service.

          • (This is a reply to Markus Kohler’s comment; I can’t seem to reply to it directly.)

            You’re right that Chef does not have good support for running different versions of a service, but you shouldn’t be making Chef a part of your application anyway. If you have services (such as web servers) that need a specific configuration for your application, that’s part of your application code, should be in the repo for your application code, and that will be used by your test framework anyway.

            Docker can be a useful work-around if you’re new to this idea, but it’s hardly the only way of doing it and people have been doing this for many years before Docker existed.

          • in reply to Curt J Sampson :
            My use case might be a bit different than the use cases others have. I started to use chef a few years ago to setup both, production servers and development environments. The simple reason is that a I have to support a significant number of developers (hundreds) and it looked like a good idea to automate the installation of services such as an application server using the same scripts. It actually worked out quite well, but one problem is that I cannot really easily support installing two different versions of tomcat or the JVM, which is a requirements for developers because they need to be able to develop patches for older releases, potentially requiring an older revision of some component.
            Docker solves this problem easily. Potentially technologies such as flatpak could also help me with that but that still would not solve the problem with the overlapping ports I mentioned before.

    • News is that Google & Red Hat have forked Docker and are now going a different direction.
      I’ve seen this posted in several places this past week.

      https://www.certdepot.net/death-of-docker/

      The hype momentum is with Docker but obviously if these reports are true then its going to start having people make some hard choices what with Google/Kubernetes & Red Hat working on an alternative reference application container.

  11. I feel your pain. You can get ahead with SmartOS. Good old Solaris zones, as stable, secure, and as performant as ever. And you sure can run databases in them, we did for years (four Oracle databases with 4 GB SGA per 32 GB T2000 machine, each DB in her own zone).

    Good luck. As I’m starting to work on backtesting algorithms, sounds like I could help you guys get to where you want to go.

  12. Great writeup, thanks. I was surprised to see that you said there are no good, battle-tested, production orchestration systems in existence. That’s how I would describe Mesos, I work on it :).

    While we don’t have the user experience as flushed out as docker or kubernetes, it is a very stable and reliable system. Twitter runs all of its microservices atop of Mesos (via the Aurora scheduler) [1], Apple runs Siri atop of Mesos (via a custom scheduler called Jarvis) [2], Netflix runs a mix of batch, streaming and services atop Mesos (via Mantis, Titus, and Meson) [3], Uber runs Cassandra on top of Mesos [4], and there are many more production users.

    With respect to “mesos was not built for docker”, I’m not sure what this means. It’s important to understand that containers existed before docker, and mesos (among other things) were creating containers before docker came into popularity. We added support for using docker as the containerization component of Mesos, but we, like you, also ran into a lot of stability issues around Docker. We now provide support for Docker images, while using the containerization built into Mesos, which gives some of the benefit of Docker without the stability issues.

    I also wouldn’t say that kubernetes is the only project tackling the hard problems around containers, we in the Mesos community have been doing this for some time and have been trying to convince the industry that we need a datacenter operating system. Now, I think the industry is at a point where they understand this, and it’s a matter of choice, which I’m personally very happy to see. 🙂

    Thanks again for the article, hope this sheds some light from the Mesos perspective.

    Ben

    [1] https://www.wired.com/2013/03/google-borg-twitter-mesos/
    [2] http://www.businessinsider.com/apple-siri-uses-apache-mesos-2015-8
    [3] https://medium.com/netflix-techblog/distributed-resource-scheduling-with-apache-mesos-32bd9eb4ca38
    [4] http://highscalability.com/blog/2016/9/28/how-uber-manages-a-million-writes-per-second-using-mesos-and.html

  13. Docker did not rename itself to Moby. They extracted Docker internals into an open-source umbrella project called Moby to better allow open-source collaboration. Docker is still docker, and it’s built using components that now live under the Moby project.

    Using the Moby move as an argument to “warn the world about unexpected Docker changes and breakages” is (at best) just very uninformed, as it’s really quite the opposite. For your own credibility, I recommend you redact that update.

    That said, thanks for sharing your experience. It’s great to learn from the lessons learned from your journey.

  14. Wish I had found this piece sooner.
    Caused quite a bit of muted chuckles, since we went through some of this nightmare ourselves, though to a much lesser extent.

    Our story started out badly as well, but had a much happier ending.

    We started with docker in late 2013 and have had our fair share of crashes, incompatibility issues (we ran docker at AWS).
    I don’t even dare count the amount of time we’ve sunk into dealing with “it’s broken again!”.

    To this day we have some front-end apps at AWS, and alerts go off every single day about some networking thing glitching, ELBs + docker being flaky etc.
    Most everything else we moved to GKE/GCP, including all critical services and data from our own data centers, and I couldn’t be happier.

    Instead of spending 80% of my day keeping the lights on (baby sitting Docker + Linux + AWS + our self hosted services) which was the case 2-3 years ago, I now spend 90% of the time on new products and services, stuff that matters. All thanks to a fairly solid container management platform, mostly off the shelf (GKE).

    Containers have been a complete success and blessing for us, since after we went the k8s route that is. Everyone likes it; development, ops, qa, research and .. finance alike.

    K8s gives everyone the visibility they need, they can log into their “services” (containers) to trouble shoot if needed (kubectl exec) or look at live logs (kubectl logs).
    With VMs, access was gated since it was a shared environment, and we didn’t want people mucking around in a way that affected others. If a developer screws up her or his own service container, I don’t really care. The burden is on them to fix it. Their services, their containers, they are fully responsible for them. In this sense containerization has transformed ownership at the company. From silos to verticals, and attitudes from “its ‘s problem” to “it’s my problem”.

    What I appreciate most from the docker to k8s migration are:
    * The Pod (scheduling one or more containers together). It makes so much more sense as the atom for scheduling, when compared to trying to compose (no pun intended) systems out of individual containers.

    * A flat network with distinct IPs. Much more intuitive than port mapping hell. Not addressing this issue was a huuuge design mistake by docker inc imo. Kube got it right. Services are addressed by … Addresses. Ports should be the standard _expected_ service port for the service’s nature (https service, that means 443, everywhere, period!).

    * Rolling updates and Health checks. CI/CD is much simplified when one can roll out [network] namespaced versions of new services, running in parallel with released ones, have the platform report on the health of the service before stealing people’s time for a code review. Why even review code that doesn’t have a chance of working properly in production?

    * Health checks for services. Programmatically define what it means to have a healthy service. It’s not productive to have people spend their time staring at dashboards, putting out fires, detecting known signs of failure, reacting to them. Much better to codify the procedure everyone anyway has on the bookshelf or the wiki, and let the system mitigate automatically. [1]

    * Health checks for nodes (machines). Similar to services, if they show signs of trouble, GKE will move workloads to healthy machines and recycle the bad ones. [2]

    * Automatic storage provisioning. People now just declare what storage, quality and life-cycle policy they need and that will be provisioned along with their apps.

    * Secrets sharing. The primitive for this has allowed our employees to simply declare which other services or resources they depend on, be it our own or some external service / resource the company has contracted, and the proper credentials and configuration get injected into their containers (after automatic policy checks of course). Or when possible, we just schedule the apps with a standard “resource sidecar” which does whatever is needed to expose the resource/service to the application, so that the app can access it through localhost or the local file system. Dirt simple for people to *leverage* other services and resources. [3]

    * Declarative resource model. Kubernetes is so much more than just a container runner. The real potential lies in its interface abstractions and mechanisms for *system* state sharing and notification. This provides us all the primitives we need to automate more or less the entire set of Ops related tasks. [4]
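
    To make the rolling update and health check points above concrete, here is a minimal command-level sketch (the deployment and image names are hypothetical, and it assumes the Deployment’s containers define readinessProbe/livenessProbe checks as described):

      kubectl set image deployment/my-service my-service=registry.example.com/my-service:1.2.4
      kubectl rollout status deployment/my-service   # blocks until the new pods pass their readiness probes
      kubectl rollout undo deployment/my-service     # roll back if the new version never becomes healthy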

    We started out trying to find a better unit of composition than VMs, by running docker. Found it lacking and suffered. Moved to a managed k8s which freed up so much time that we could seriously start focusing on the company’s future. As part of that transition we discovered that what we migrated to offered an unprecedented potential for automation, not simply another flavor of a workload scheduler.

    A long comment, but I wanted to share a contrasting example of a company which also started out with docker challenges, yet didn’t write off the entire containerization concept as a fad; after a slight course correction, it turned out to be one of our best investments.

    [1] Sure dashboards are still useful, but for a different purpose. Mainly for discovery of new types of problems, for signals that affect future roadmapping, not constantly looking for small fires (traditional operations).
    [2] ELB + ASG sort of did this at AWS, for web services at least, and was the one feature I enjoyed most from that vendor. K8S health checks go further; they are first-class properties of workloads, not dependent on external services or in fact on being a service at all. I can have a batch job with health checks, which trigger recovery actions. Don’t think I could even do that in AWS without a lot of custom developed boilerplate infrastructure.
    [3] I understand Docker copied this concept, but I haven’t tested it since we’d already left the core docker ecosystem behind once that feature came out in swarm.
    [4] E.g. we can write a spec for a “Virtual DC”, in which key architectural properties can be defined, and a piece of code that reads such specs and tries to fulfill them. Like _that_ we’d have automated virtual data center creation, would have an up-to-date view of all of them, including real-time states, and could have them decommissioned as soon as the associated spec (resource) was removed from the “management namespace”. The {desired world} ↔ {real world} feedback loop is the bit that’s been missing previously when I’ve attempted to realize this vision at other companies. With this now in place through kube, there’s no limit to the amount of stuff that can be automated. (A rough sketch follows these notes.)
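
    A very rough sketch of that feedback loop (the “virtualdc” custom resource and the “management” namespace are hypothetical, and a real controller would use the watch API rather than polling):

      while true; do
        kubectl get virtualdcs -n management -o name | while read -r vdc; do
          echo "reconciling ${vdc}"   # compare the desired spec with observed state and act on the difference
        done
        sleep 30
      done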

    • So you basically just listed why you should use a managed service for containers and not attempt to run your own orchestration. Honestly, I somewhat agree, but I blame K8S for this current state. Frankly I think k8s is tremendously over-hyped and the root of a lot of issues.

      Today, most people will immediately link Docker containers to Kubernetes, with an intrinsic understanding that containers need orchestration, and that said orchestration must therefore be the new Openstack of the container world: an amalgamation of complex components creating a Rube Goldberg machine under the guise of a single product (k8s). Reality: most companies would be better off with three simple HashiCorp products: Nomad (scheduling/orchestration), Consul (service discovery/KV store), and Vault (secrets management). Three binaries, three json configs, done. Instead they rush out this complex install that forces them to consider things they don’t need (do we really need an abstracted SDN for every app?) and yet fails to address things they actually do need (RBAC and Kubernetes are basically oil and water, to the point you end up standing up entire k8s farms).
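
      For what it’s worth, the “three binaries” part is literally true, at least in dev mode (these are the real dev-mode flags, not production configurations):

        consul agent -dev &   # service discovery + KV store
        vault server -dev &   # secrets management
        nomad agent -dev &    # scheduling / orchestration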

      At any rate, Google and Microsoft already have fully managed K8S clusters available now, and AWS finally announced the same will be ready in 2018 … so it seems the winning approach to container management in 2018 is to pay the big three to handle it for you. K8S in every datacenter is going about as well as Openstack in every datacenter (read: you keep hearing it’s true, but hard numbers tell a different story). Those who are determined to run managed container stacks locally are still in for a lot of pain (unless they simply admit being cool is less important than being stable … then just run a hashi-stack).

      • Docker is unstable. Adding Kubernetes or Nomad on top will not make it work better.

        Half the value in fully managed services is to handle Docker breakages for you.

  15. I think that now in 2018 docker is a much more stable environment than it used to be two years ago. Nevertheless, problems still happen, like volumes or folders becoming inaccessible, or system crashes. Services that offer redundancy, monitoring and automatic replacement/restart are a good but sometimes costly way to mitigate this.

  16. “They already offer the service so that should mean that they already worked around the Docker issues.”

    You’d think so, but this kind of logic doesn’t apply to Google.

  17. Sir, has your experience changed in these years? It has been 4 years and I think docker has changed a lot. How do you see it now? It would be great if you made an update.

  18. Reliability is better; I don’t know that I’d trust it to be as reliable as Debian, but it’s not bad. You probably don’t want to do things like upgrading Docker with apt at the start of every CI build before you start up a Docker container; I don’t know why you’d do that in the first place.

    Many of the issues here are based on a fundamental misunderstanding of containers and Docker: a container is a process tree, which is nothing at all like a VM.[1] (Many things that people think are provided by “Docker” are not at all; it’s just Docker configuring the Linux kernel to set up processes.)

    So “ssh into a container” makes as much sense as “ssh into this Bash process I started in another window.” You can easily exec a shell that shares the same configuration as the rest of the process tree, though, if a shell binary is available in the layers you set up for that container.
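
    Concretely (the container name is made up):

      docker exec -it my-container /bin/sh   # joins the container's namespaces; only works if the image contains a shell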

    And of course you don’t want to store persistent data in a container’s filesystem layer; you wouldn’t want to store persistent data in a process’ memory space, either. You can easily bind parts or all of a machine’s “global” filesystem space into the Docker process, or use volumes, or whatever.
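
    For example (paths, image and volume names are made up):

      docker run -d -v /srv/appdata:/var/lib/app my-image   # bind-mount a host directory into the container
      docker volume create appdata                          # or let Docker manage a named volume
      docker run -d -v appdata:/var/lib/app my-image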

    The current default networking configuration it builds (virtual interfaces onto an internal virtual bridge, with the kernel doing NAT between the network on the bridge and the external network, etc.) is reasonable enough for many Docker use cases, but it’s certainly also reasonable to strip that down a bit, or just to turn it all off and have your “container” processes use the same networking that any other process on that host uses. I do this all the time.
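
    For example (the image name is made up):

      docker run -d --network host my-image   # no bridge, no NAT, no port mapping: the process uses the host's network stack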

    In short, understand how containerisation actually works at the process and kernel level, and especially be clear on what is being done by Docker and what is being done by the Linux kernel, and you’ll encounter a lot less misery when trying to use it.

    [1]: https://stackoverflow.com/a/56606244/107294
