If you intend to use Docker from within a container, you should clearly understand the security implications.
Accessing Docker from within the container is simple:
- Use the official `docker` image or install Docker inside the container. Alternatively, you may download an archive with the docker client binary as described here.
- Expose the Docker unix socket from the host to the container.
That’s why

```
docker run -v /var/run/docker.sock:/var/run/docker.sock \
    -ti docker
```

should do the trick.
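To verify that the mount works, a minimal check (the `docker` image name is the official client image; any image with a docker client would do):

```shell
# Start a throwaway container with the host's Docker socket mounted
# and list containers from inside it. If the mount works, this prints
# the HOST's containers, because the client talks to the host daemon.
docker run --rm \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker \
  docker ps
```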
Alternatively, you may expose the Docker daemon into the container (over the unix socket or a TCP port) and use the Docker REST API.
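For the REST API route, a minimal sketch, assuming the socket is mounted at the default path and `curl` is available inside the container:

```shell
# Query the Docker Engine REST API directly over the unix socket.
# Works from inside any container that has /var/run/docker.sock mounted.
curl --unix-socket /var/run/docker.sock http://localhost/version

# Alternatively, if the host daemon was started listening on TCP
# (insecure without TLS -- anyone who can reach the port owns the host):
#   dockerd -H unix:///var/run/docker.sock -H tcp://0.0.0.0:2375
# then from the container:
#   curl http://<host-ip>:2375/version
```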
UPD: An earlier version of this answer (based on a previous version of jpetazzo's post) advised bind-mounting the docker binary from the host into the container. This is no longer reliable, because the Docker Engine is no longer distributed as an (almost) static binary.
Considerations:
- All of the host's containers will be accessible to the container, so it can stop them, delete them, and run any command as any user inside the top-level Docker's containers.
- All containers it creates are created by the top-level Docker, i.e. as siblings of the container itself, not as children.
- Of course, you should understand that if a container has access to the host's Docker daemon, it effectively has privileged access to the entire host system. Depending on the container and system configuration (e.g. AppArmor), it may be more or less dangerous.
- Other warnings here: dont-expose-the-docker-socket
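To see the "siblings, not children" point in action, a sketch (the container name `sibling` and the `alpine` image are illustrative):

```shell
# From inside a socket-mounted container, start another container...
docker run --rm -v /var/run/docker.sock:/var/run/docker.sock docker \
  docker run -d --name sibling alpine sleep 60

# ...then, back on the host, it shows up next to your other containers,
# because it was created by the host's daemon, not nested inside:
docker ps --filter name=sibling

docker rm -f sibling   # clean up
```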
Other approaches, like exposing /var/lib/docker to the container, are likely to cause data corruption. See do-not-use-docker-in-docker-for-ci for more details.
Note for users of the official Jenkins CI container
In this container (and probably in many others) the jenkins process runs as a non-root user, so it has no permission to interact with the docker socket. A quick & dirty solution is to run

```
docker exec -u root ${NAME} /bin/chmod -v a+s $(which docker)
```

after starting the container. This sets the setuid bit on the docker binary, allowing all users in the container to run it with root permissions. A better approach would be to allow running the docker binary via passwordless sudo, but the official Jenkins CI image seems to lack the sudo subsystem.
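A somewhat cleaner alternative to the setuid trick is to give the jenkins user a group whose GID matches that of the mounted socket. A sketch, where `${NAME}` is your Jenkins container and the group name `docker-sock` is arbitrary:

```shell
# Find the GID that owns the docker socket on the host.
SOCK_GID=$(stat -c '%g' /var/run/docker.sock)

# Create a matching group inside the container and add jenkins to it.
docker exec -u root "${NAME}" groupadd -g "${SOCK_GID}" docker-sock
docker exec -u root "${NAME}" usermod -aG docker-sock jenkins

# jenkins needs a fresh session to pick up the new group membership,
# so restart the container afterwards:
docker restart "${NAME}"
```

This avoids making the docker binary runnable as root by every user in the container; only members of the matching group can talk to the daemon.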