I’ll start:

When I was first learning to use Docker, I didn’t realize that most tutorials that include a database don’t configure the database to persist. Imagine my surprise when I couldn’t figure out why the database kept getting wiped!
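For anyone who hit the same wall: the usual cure is a named volume mapped over the database's data directory. A minimal compose sketch (the service/volume names and Postgres are just an example, not from any particular tutorial):

```yaml
services:
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: example  # local use only, not a real secret
    volumes:
      # Without this mapping, data lands in the container's writable
      # layer and is wiped when the container is removed/recreated.
      - db-data:/var/lib/postgresql/data

volumes:
  db-data: # named volume managed by Docker; survives container removal
```

Note that a plain `docker compose down` keeps the named volume, but `docker compose down -v` deletes it.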


Spending hours and hours debugging an issue where files weren’t being written, before finally realising I was looking at my host file system, not the container’s… fml.


I still don’t really know how to get mounted folders not to wreck the permissions on the files. You can use a non-root user, but that requires users to have UID 1000 when you distribute an image.


The closest thing I’ve found is to let users specify the UID and GID to run with, but unfortunately there’s no good way to just auto-detect that at runtime.
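For what it’s worth, the CLI side of that workaround looks roughly like this (image name and paths are hypothetical, and it only works if the image doesn’t hard-code a user):

```shell
# Run the container as the invoking user so files written to the
# bind mount come out owned by you instead of root (or UID 1000).
docker run --rm \
  --user "$(id -u):$(id -g)" \
  -v "$PWD/output:/output" \
  some/image:latest  # hypothetical image
```

The catch mentioned above still applies: nothing detects this automatically; the caller has to pass it.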

  • Docker swarm does not respect its own compose spec, exposes services on all interfaces and bypasses firewall rules [1], [2]
  • 1 million SLOC daemon running as root [1]
  • Buggy network implementation, sometimes requires restarting the daemon to release bridges [1]
  • Requires frequent rebuilds to keep up to date with security patches [1] [2] [3]
  • No proper support for external config files/templating, not possible to do graceful reloads, requires full restarts/downtime for simple configuration changes [1]
  • Buggy NAT implementation [1]
  • Buggy overlay network implementation, causes TCP resets [1]
  • No support for PID limits/fork bomb protection [1], no support for I/O limits [2]
  • No sane/safe garbage collection mechanism: docker system prune --all deletes all unused volumes, including named volumes that are “unused” only because the container/swarm service using them happens to be stopped at that moment. Eats disk space like mad [1] [2]
  • Requires heavy tooling if you’re serious about it (CI, container scanning tools, highly-available registry…) [1]
  • Docker development and infrastructure are fully controlled by Docker Inc. [1] [2] [3] [4] [5] [6]
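On the garbage-collection point: if it helps anyone, a more conservative cleanup than prune --all looks like this (flags as in the current Docker CLI; double-check on your version, since prune behaviour has changed over the years):

```shell
# Reclaims stopped containers, dangling images and build cache,
# but does not touch volumes at all.
docker system prune

# Remove specific volumes deliberately, instead of letting a
# blanket prune decide what counts as "unused".
docker volume ls
docker volume rm <name>
```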

Be really careful when building images that require secrets for build configuration. Secrets can be passed in as build args, but you MUST UNSET THEM IN THE DOCKERFILE and then re-pass them as environment variables at runtime (or else you are leaking your secrets with your image).

Also, image != container. An image is the thing you publish to a registry (e.g. Docker Hub); a container is an instance of an image.


This is no longer true with BuildKit - you can use the --secret flag to pass a secret in securely at build time.
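For reference, the BuildKit pattern looks roughly like this (the secret id, base image, and build step are illustrative):

```dockerfile
# syntax=docker/dockerfile:1
FROM alpine
# The secret is mounted only for this RUN step; it never ends up
# in an image layer or in `docker history`.
RUN --mount=type=secret,id=npm_token \
    TOKEN="$(cat /run/secrets/npm_token)" && \
    echo "token has ${#TOKEN} characters"
```

Built with something like `docker build --secret id=npm_token,src=token.txt .` (BuildKit enabled).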


Thanks for sharing! I’ll need to look deeper into BuildKit. Containers aren’t my main artifacts, unfortunately, so it sounds like I’m still building them the old way.


Early in the history of Docker, a lot of bits and bobs hadn’t been worked out yet, and I had a bug land on my desk where a service was leaking memory until it crashed, but only when running in a container. It turned out the JVM at the time effectively never collected in a container, because /proc was mounted from the host, so the JVM saw the host’s memory rather than the limits set by the k8s scheduler. It would only collect if it did not receive a second allocation request during the GC.


DevOps

!devops@programming.dev
