Docker is a container management tool.
Docker Engine, popularly known simply as Docker, is similar in concept to a virtual machine, except that it is much more lightweight. Instead of running an entire separate operating system, Docker runs containers, which share the host operating system and virtualize only at the software level: each container gets its own filesystem structure, and many container instances can be created from the same image. Docker provides the ability to package and run an application in an isolated environment, i.e. a container. This isolation and security allow you to run many containers simultaneously on a given host.
Docker is widely used for several reasons:
- It reduces cost, since Docker's image concept lets work proceed in a distributed fashion.
- It saves time, since code can be deployed quickly.
- It improves software quality.
- It is portable across machines: after testing, you can deploy your containerized program to any other system that runs Docker.
- It is lightweight: the portability and performance advantages of containers help make your development process more fluid and responsive.
- It works in an isolated manner: a running container is not impacted by the host OS's security policies or unique setup, unlike a virtual machine or a non-containerized environment.
- It is quite scalable too: if demand for your apps requires it, you can quickly spin up new containers.
In simple terms, a container is a sandboxed process on your machine that is isolated from all other processes on the host machine. That isolation leverages kernel features called namespaces and cgroups. To summarize, a container:
Is a runnable instance of an image. You can create, start, stop, move, or delete a container using the Docker API or CLI.
Can be run on local machines, on virtual machines, or deployed to the cloud.
Is portable (can be run on any OS).
Is isolated from other containers and runs its own software, binaries, and configuration.
Containers do not carry any Guest OS with them the way a VM must.
A containerized application is bundled with all its dependencies as a single deployable unit. By leveraging the features and capabilities of the host OS, containers let these applications run in any environment. Docker Engine asks the kernel to create independent computational resources (namespaces for the network, PIDs, mounts, users, and so on); combined, these resources form the runtime environment for the application, which is a single container.
The key difference between an OS kernel boot and a container is the filesystem stack. Because a container does not boot its own kernel, it skips the boot filesystem layer:
BOOT FILESYSTEM --> ROOT FILESYSTEM --> USER FILESYSTEM --> APP FILESYSTEM (OS kernel)
ROOT FILESYSTEM --> USER FILESYSTEM --> APP FILESYSTEM (container)
Docker provides a convenience script at get.docker.com.
Download and run the script, then enable and start the Docker service:
$ curl -fsSL get.docker.com -o get-docker.sh
$ sudo sh get-docker.sh
$ sudo systemctl enable docker
$ sudo systemctl start docker
Verify that Docker is installed correctly by running the hello-world image:
$ sudo docker run hello-world
Docker has four main components:
Docker client and server - This is a command-line-driven solution in which you use the terminal on your Mac or Linux system to issue commands from the Docker client to the Docker daemon. Communication between the Docker client and the Docker host happens via a REST API.
Docker image - A Docker image is a template that contains the instructions for a Docker container. An image is built from a Dockerfile, a text file of build instructions, and can then be hosted in a Docker registry.
Docker registry - The Docker registry is where you host various types of images and where you distribute images from. A repository itself is just a collection of Docker images, which are easily stored and shared.
Docker container - A Docker container is an executable package of an application and its dependencies bundled together. A container is created from a Docker image.
Two more advanced components of Docker are-
Docker Compose - Docker Compose is designed for running multiple containers as a single service. It does so by running each container in isolation while allowing the containers to interact with one another. As noted earlier, you write the Compose environment in YAML.
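A minimal sketch of such a Compose setup (the service names and images here are illustrative assumptions, not from the text):

```shell
# Write a minimal docker-compose.yml describing two cooperating services
cat > docker-compose.yml <<'EOF'
services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"
  cache:
    image: redis:alpine
EOF

# Validate the file with: docker compose config
# Start both containers as a single service, in the background, with:
# docker compose up -d
```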
Docker Swarm - Docker Swarm is a service for containers that allows IT administrators and developers to create and manage a cluster of swarm nodes within the Docker platform. Each node of a Docker swarm is a Docker daemon, and all Docker daemons interact using the Docker API. A swarm consists of two types of nodes: manager nodes and worker nodes. A manager node maintains cluster management tasks. Worker nodes receive and execute tasks from the manager node.
A Docker container goes through different stages, known together as the Docker container lifecycle. Some of the states are:
- Create phase
Create a container: docker create --name <container-name> <image>
Start a container: docker start <container-name>
- Running phase
docker run --name <container-name> <image>
In an interactive manner:
docker run -it --name <container-name> <image>
- Paused/unpaused phase
Pause a container: docker pause <container-name>
Unpause a container: docker unpause <container-name>
- Stopped phase
Stop a container: docker stop <container-name>
To stop all running containers: docker stop $(docker container ls -aq)
- Killed phase
Delete a container: docker rm <container-name>
Delete all containers with a single command: docker rm $(docker ps -aq)
Kill a container: docker kill <container-name>
The docker pause command suspends all processes in the specified containers. On Linux, this uses the freezer cgroup. Traditionally, when suspending a process the SIGSTOP signal is used, which is observable by the process being suspended. With the freezer cgroup the process is unaware, and unable to capture, that it is being suspended, and subsequently resumed. On Windows, only Hyper-V containers can be paused.
Ex: docker pause <container-name>
The docker unpause command un-suspends all processes in the specified containers. On Linux, it does this using the freezer cgroup.
Ex: docker unpause <container-name>
The docker stop command stops one or more running containers.
The main process inside the container receives SIGTERM and, after a grace period, SIGKILL. If the process shuts down cleanly on SIGTERM, the session ends with exit code 0 and the container stops gracefully.
Ex: docker stop <container-name>
The docker kill command is similar to docker stop, but it sends the SIGKILL signal to the running container's process. SIGKILL immediately shuts the container down without any grace period. It terminates the session forcefully with exit code 137.
First, start a container to attach to:
docker run -itd ubuntu /bin/bash
Then, to get inside the Docker container, you can use a shell and run the docker exec command:
docker exec <container-id> ls (any other Linux command can also be used)
The container can be accessed from outside by using network commands:
- docker inspect 71afb466705d -- to inspect the container's parameters
- docker ps -- to check information about running containers
- curl http://172.17.0.1 -- to check whether the success message ("It works") is returned
- ping 172.17.0.1 -- to check connectivity by sending ping data packets
PID 1 inside the container has to keep running something (there is no kernel in the container); the container lives only as long as PID 1 has not exited.
By default, the container runs in the foreground unless it is explicitly detached using the -d flag.
The container runs as long as the specified command keeps running, and then stops. There is one exception: in an attached session you can press CTRL+P followed by CTRL+Q to detach from the shell without terminating its process, so the container keeps running.