You’re doing it wrong: 5 common Docker mistakes

The newer the tool, the tougher it is to use correctly. Sometimes nobody, not even the toolmaker, knows how to use it right.

As Docker moves from a hyped newcomer to a battle-tested technology, early adopters have developed best practices and ideal setups to get the most out of it. Along the way, they’ve identified what works — and what doesn’t.

Here are five mistakes that come with using Docker, along with some advice on how to steer clear of them.

Using quick-and-dirty hacks to store secrets

“Secrets” cover anything that you would not want outsiders to see: passwords, keys, one-way hashes, and so on. The Docker team has enumerated some of the hacks people use to store secrets, including environment variables, tricks with container layers or volumes, and manually built containers.

Many of these start as quick hacks done for the sake of convenience, but they can quickly become enshrined as standard procedure or, worse, leak private information to the world at large.
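
For example, a secret passed in as an environment variable is visible in plain text to anyone who can inspect the container. The image name and credential below are purely illustrative, and the output is approximate:

    # Pass a secret the "convenient" way:
    $ docker run -d -e DB_PASSWORD=hunter2 --name app myapp

    # Anyone with access to the Docker daemon can read it right back:
    $ docker inspect --format '{{.Config.Env}}' app
    [DB_PASSWORD=hunter2 PATH=/usr/local/sbin:/usr/local/bin:...]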

Part of the problem stems from Docker not handling these issues natively. A couple of earlier proposals were closed for being too general, but one possibility currently under discussion is creating a pluggable system that can be leveraged by third-party products like Vault.

Keywhiz, another recommended secrets store, can be used in conjunction with volumes. Alternatively, users can fetch keys over SSH. But environment variables and other “leaky” methods should be ruled out entirely.
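
A minimal sketch of the volume approach, assuming the secret lives in a file on the host that never enters the image (the paths and image name here are illustrative):

    # Mount the secret read-only at runtime rather than baking it into
    # the image or passing it through the environment:
    $ docker run -d \
        -v /etc/myapp/secrets/db_password:/run/secrets/db_password:ro \
        myapp

    # The application reads /run/secrets/db_password at startup; the
    # secret's value never ends up in an image layer or in the
    # container's environment.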

Taking the “one process per container” rule as gospel

Running one process per container is a good rule of thumb — it’s in Docker’s own best practices document — but it’s not an absolute law. Another way to think about it is to have one responsibility per container, where all the processes that relate to a given role — Web server, database, and so on — are gathered together because they belong together.

Sometimes that requires having multiple processes in a single container, especially if you need instances of syslog or cron running inside the container. Baseimage-docker was developed to provide a baseline Linux image (and sane defaults) with those services.
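
As a rough sketch of how that looks (the service name and script are hypothetical, and the tag is only an example), baseimage-docker supervises any runit service dropped under /etc/service, alongside syslog and cron:

    # Dockerfile
    FROM phusion/baseimage:0.9.18

    # baseimage-docker's init supervises every runit service defined
    # under /etc/service, plus syslog and cron.
    RUN mkdir -p /etc/service/webapp
    COPY webapp-run.sh /etc/service/webapp/run
    RUN chmod +x /etc/service/webapp/run

    # Run the image's init as PID 1 so all of the above are supervised.
    CMD ["/sbin/my_init"]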

If your reason for having a one-process container is to keep the container lean, but you still need some kind of caretaker functionality (startup control, logging), Chaperone might help, as it provides those functions with minimal overhead. It’s not yet recommended for production use, but according to the GitHub page, “if you are currently starting up your container services with Bash scripts, Chaperone is probably a much better choice.”

Ignoring the consequences of caching with Dockerfiles

If images are taking forever to build from Dockerfiles, there’s a good chance misuse or misunderstanding of the build cache is the culprit. Docker provides a few notes about how the cache behaves, and the folks at devo.ps detail specific behaviors that can inadvertently invalidate the cache. (ADD, VOLUME, and RUN instructions are the biggest culprits.)
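
The most common fix is ordering the Dockerfile so that steps that change rarely come before steps that change often. A minimal sketch, assuming a Node.js application (the base image and filenames are illustrative):

    FROM node:4
    WORKDIR /app

    # The dependency manifest changes rarely, so this layer and the
    # expensive install step after it stay cached across most builds.
    COPY package.json /app/
    RUN npm install

    # The application source changes often; putting it last means an
    # edit only invalidates the layers from this point down.
    COPY . /app
    CMD ["node", "server.js"]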

The reverse can also be true: Sometimes, you don’t want the cache to preserve everything, but purging the whole cache is impractical. The folks at CenturyLink have useful notes on when and how to selectively invalidate the cache.
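
One technique along those lines (a sketch; the argument name and repository are arbitrary) is to introduce a build argument partway through the Dockerfile. Layers above it stay cached, while everything after it is rebuilt whenever the value changes:

    FROM ubuntu:14.04
    RUN apt-get update && apt-get install -y git    # stays cached

    ARG CACHEBUST=1
    # Everything below reruns whenever CACHEBUST is given a new value.
    RUN git clone https://github.com/example/app.git /app

    # At build time, pass a fresh value only when a new clone is wanted:
    #   $ docker build --build-arg CACHEBUST=$(date +%s) -t app .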

Using Docker when a package manager will do

“Today Docker is usually used to distribute applications instead of just [used] for easier scaling,” says software developer Marc Scholten. “We’re using containers to avoid the downsides of bad package managers.”

If the goal is to simply grab a version of an application and try it out in a disposable form, Docker’s fine for that. But there are times when you really need a package manager. A package manager operates at a lower level of abstraction than a Docker image, provides more granularity, and automatically deals with issues like dependency resolution between packages.
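
To make the contrast concrete (the package and image names are just examples):

    # Disposable: try a specific version in a throwaway container that
    # vanishes when it exits.
    $ docker run --rm redis:3.2 redis-server --version

    # Installed: let the package manager resolve dependencies and put
    # files, services, and config where the rest of the system expects them.
    $ apt-get install redis-server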

Here and there, work is being done to determine how containers could be used to replace conventional package management altogether. CoreOS, for instance, employs containers as a basic unit of system management. But for now, containers (meaning Docker) are best suited for situations where the real issues are scale and the need to encapsulate multiple versions of apps without side effects.

Building mission-critical infrastructure without laying a foundation first

This ought to be obvious, but it always bears repeating: Docker, like any other tool, works best when used in conjunction with other best practices for creating mission-critical infrastructure. It’s a puzzle piece, not the whole puzzle.

Matt Jaynes of Valdhaus (formerly DevOps University) has noted that he sees “too many folks trying to use Docker prematurely,” without first setting up all the vital details around Docker. “Using and managing [Docker] becomes complex very quickly beyond the tiny examples shown in most articles promoting [it],” says Jaynes.

Automated setup, deployment, and provisioning tools, along with monitoring, least-privilege access, and documentation of the arrangement, ought to be in place before Docker is brought in. If that sounds nontrivial, it ought to.

 
