A container is basically a process. There is no technical issue with running 500 processes on a decent-sized Linux system, although they will have to share the CPU(s) and memory.
The cost of a container over a plain process is some extra kernel resources to manage namespaces, filesystems and control groups, and some management structures inside the Docker daemon, particularly to handle `stdout` and `stderr`.
The namespaces are introduced to provide isolation, so that one container does not interfere with any others. If your group of 5 containers forms a unit that does not need this isolation, you can share the network namespace between them using `--net=container:<name>`. There is no feature at present to share cgroups, AFAIK.
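As a rough sketch of what that looks like (the image and container names here are made-up placeholders), a second container can join the first one's network namespace:

```sh
# start the "main" container normally (hypothetical image/name)
docker run -d --name app-main my-image:latest

# the second container joins app-main's network namespace, so both
# share the same interfaces and can reach each other over localhost
docker run -d --name app-sidecar --net=container:app-main my-sidecar-image:latest
```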
What is wrong with what you suggest:
- it is not “the Docker way”, which may not be important to you
- you have to maintain the scripting to get it to work, worry about process restarts, and so on, as opposed to using an orchestrator designed for the task (see the sketch after this list)
- you will have to manage conflicts in the filesystem, e.g. two processes need different versions of a library, or they both write to the same output file
- `stdout` and `stderr` will be intermingled for the five processes
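For comparison, here is a minimal sketch of the same five processes run as five separate containers (image and container names are again placeholders); each one gets its own restart policy and its own log stream:

```sh
# hypothetical image/names: one container per process instead of five in one
for i in 1 2 3 4 5; do
  docker run -d --name worker-$i --restart=on-failure my-worker-image:latest
done

# each container keeps its own stdout/stderr, viewable separately
docker logs worker-3
```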