Jawahar Lal Soni

Docker Assignment Day 1

What is Docker?
Docker is a software platform that enables developers to create, deploy, and run applications in containers. A container is a lightweight, standalone executable package that includes everything needed to run an application, including code, libraries, and system tools. Docker containers are designed to be portable and easily scalable, allowing developers to move their applications from development to production environments with ease. Docker is based on open-source technology and has become increasingly popular in recent years, particularly in cloud computing and microservices architectures. Docker also provides a way to isolate applications from the underlying infrastructure, increasing security and reducing the risk of compatibility issues.

Why Docker?

Criteria     | Virtual Machine                                                  | Docker
OS support   | Occupies a lot of memory space                                   | Containers occupy less space
Boot-up time | Long boot-up time                                                | Short boot-up time
Performance  | Running multiple virtual machines leads to unstable performance | Containers perform better because they are hosted in a single Docker engine
Scaling      | Difficult to scale up                                            | Easy to scale up
Efficiency   | Low efficiency                                                   | High efficiency
Portability  | Compatibility issues while porting across different platforms   | Easy porting across different platforms

1. Consistency: Docker enables developers to create consistent environments for their applications, regardless of where they are deployed. This ensures that the application runs the same way across different environments, reducing the risk of bugs and compatibility issues.

2. Portability: Docker containers are portable, meaning that they can be deployed on any machine that has Docker installed, regardless of the underlying infrastructure. This makes it easy to move applications between development, testing, and production environments.

3. Scalability: Docker makes it easy to scale applications by allowing developers to spin up new containers as needed. This means that applications can easily handle increases in traffic or demand.

4. Efficiency: Docker enables developers to package all of the dependencies and components needed to run an application in a single container. This simplifies the deployment process and reduces the risk of configuration errors.

5. Isolation: Docker provides a way to isolate applications from the underlying infrastructure, reducing the risk of security breaches and other issues.

 Overall, Docker helps developers and organizations to streamline their application development and deployment processes, while also improving consistency, portability, scalability, efficiency, and security.

What is a container?

 A container is a lightweight, standalone executable package that includes everything needed to run an application, including code, libraries, and system tools. Containers provide a way to package software into a single unit that can be run consistently across different environments.

 Containers are similar to virtual machines, but they are more lightweight and efficient. While virtual machines require a separate operating system and allocate resources such as CPU and memory, containers share the same operating system kernel and only allocate the resources needed to run the specific application. This makes containers more efficient and faster to start up and shut down than virtual machines.
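
 One quick way to see this shared kernel in practice (a minimal sketch for a Linux host, assuming Docker is installed and can pull the public alpine image):

uname -r                          # kernel release reported by the host
docker run --rm alpine uname -r   # the same kernel release, reported from inside a container

 Both commands print the same kernel version, because the container has no kernel of its own.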

 Containers are also designed to be portable, meaning that they can be deployed on any machine that has the container runtime installed, regardless of the underlying infrastructure. This makes it easy to move applications between development, testing, and production environments.

 Containers are a key technology in modern software development and are commonly used in microservices architectures, cloud computing, and DevOps practices. They provide a way to isolate applications from the underlying infrastructure, making it easier to manage and deploy complex software systems.

How does the Docker Container work?

 Docker images: A Docker image is a read-only template that contains the application and all its dependencies. Docker images are built using a Dockerfile, which specifies the instructions to build the image. Docker images are stored in a registry, such as Docker Hub, where they can be easily downloaded and used.
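
 For example (a minimal sketch, assuming access to Docker Hub), an image can be downloaded from the registry and inspected locally:

docker pull nginx     # download the nginx image from Docker Hub
docker images nginx   # list the locally stored nginx image, its tag, and its size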

 Docker container: A Docker container is an instance of a Docker image that is running. Docker containers are isolated from the host system and from other containers, so they can run multiple applications on the same machine without any conflicts.
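
 As an illustration (assuming the nginx image pulled above), several isolated containers can run from the same image on one machine:

docker run -d --name web1 nginx   # first container from the image
docker run -d --name web2 nginx   # second, independent container from the same image
docker ps                         # both containers are listed, each with its own ID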

 Docker engine: The Docker engine is the core component of Docker that runs on the host system. It manages the creation, running, and deletion of Docker containers. The Docker client communicates with the Docker daemon, which is responsible for managing Docker objects such as images, containers, networks, and volumes.
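
 You can see this client/daemon split by running the command below, which prints separate Client and Server (Engine) sections:

docker version   # the Server section describes the daemon the client is talking to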

 Dockerfile: A Dockerfile is a script that contains instructions for building a Docker image. It specifies the base image, the application code, and the dependencies required to run the application. The Dockerfile is used to build the Docker image, which can then be used to create Docker containers.
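
 A minimal sketch (the Python base image, the app.py file, and the myapp tag are illustrative assumptions, and app.py must exist in the current directory):

cat > Dockerfile <<'EOF'
# base image
FROM python:3.12-slim
# copy the application code into the image
COPY app.py /app/app.py
# command run when a container starts from this image
CMD ["python", "/app/app.py"]
EOF
docker build -t myapp .   # build the image from the Dockerfile in the current directory
docker run --rm myapp     # create and run a container from the freshly built image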

 Docker network: A Docker network is a virtual network that enables communication between Docker containers. Docker networks can be created and managed using Docker commands.
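
 A small sketch (the network and container names are illustrative): create a user-defined network and let one container reach another by name:

docker network create mynet                            # create a user-defined bridge network
docker run -d --name web --network mynet nginx         # attach a container to it
docker run --rm --network mynet alpine ping -c 1 web   # a second container resolves "web" by name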

 Docker volume: A Docker volume is a way to store data outside of a Docker container. Docker volumes can be used to share data between Docker containers or to persist data between Docker container restarts.
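
 For example (the volume name and paths are illustrative), data written through a volume by one container persists and is visible to the next:

docker volume create mydata
docker run --rm -v mydata:/data alpine sh -c 'echo hello > /data/greeting.txt'
docker run --rm -v mydata:/data alpine cat /data/greeting.txt   # prints "hello"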

 Overall, Docker containers provide a way to package and run applications in a self-contained environment, making it easy to deploy and scale applications across different environments.

How to Install Docker?

 Installing Docker on Linux (the steps below target Ubuntu/Debian-based distributions)
 Update the package index: sudo apt-get update
 Install the required dependencies: sudo apt-get install apt-transport-https ca-certificates curl gnupg lsb-release
 Add the Docker GPG key: curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
 Add the Docker repository to your system: echo "deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
 Update the package index again: sudo apt-get update
 Install Docker: sudo apt-get install docker-ce docker-ce-cli containerd.io
 Verify that Docker is installed correctly: sudo docker run hello-world

What are Docker Components?

  1. Docker Engine: The Docker engine is the core component of Docker that runs on the host system. It provides the runtime environment for Docker containers, manages container creation, networking, storage, and other aspects of containerization.

  2. Docker Images: A Docker image is a read-only, standalone package that includes everything needed to run an application. It is created from a Dockerfile that specifies the application's dependencies and configurations.

  3. Docker Registry: A Docker registry is a repository for storing and sharing Docker images. The Docker Hub is the default registry provided by Docker, but private registries can also be set up for enterprise use.

  4. Docker Containers: A Docker container is a lightweight, standalone, and executable package that includes everything needed to run an application, including the application code, libraries, and dependencies. Docker containers are created from Docker images and can be run on any system that supports Docker.

  5. Docker Compose: Docker Compose is a tool for defining and running multi-container Docker applications. It allows you to define the services, networks, and volumes required for your application in a single configuration file (a minimal example appears below, after this list).

  6. Docker Swarm: Docker Swarm is a clustering and orchestration tool for managing a cluster of Docker hosts. It allows you to deploy, manage, and scale Docker containers across multiple hosts.

Overall, these components work together to provide a powerful and flexible platform for containerization and application deployment.
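
A minimal sketch of the Docker Compose configuration file mentioned above (service names, images, and ports are illustrative assumptions):

cat > docker-compose.yml <<'EOF'
services:
  web:
    image: nginx
    ports:
      - "8080:80"   # host port 8080 -> container port 80
  cache:
    image: redis
EOF
docker compose up -d   # start both services in the background
docker compose down    # stop and remove them when done

On older installations the command is docker-compose rather than docker compose.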

What are container lifecycle commands?
What is Docker pause/unpause?
What is Docker Kill?

  1. docker create: This command creates a new Docker container based on a Docker image, but does not start it.

  2. docker start: This command starts an existing Docker container that has been created but is not running.

  3. docker stop: This command stops a running Docker container gracefully.

  4. docker restart: This command restarts a running Docker container.

  5. docker pause: This command pauses a running Docker container, suspending all processes inside the container.

  6. docker unpause: This command resumes a paused Docker container.

  7. docker kill: This command sends a SIGKILL signal to a running Docker container, forcing it to immediately stop.

  8. docker rm: This command removes a stopped Docker container from the host system.

  9. docker update: This command updates the configuration of a running Docker container.

These commands cover the full container lifecycle, letting you create, start, pause, stop, restart, and remove containers as needed; a typical sequence is sketched below.
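
A short walk-through of that lifecycle (the container name demo and the nginx image are illustrative):

docker create --name demo nginx   # create the container without starting it
docker start demo                 # start it
docker pause demo                 # freeze every process inside it
docker unpause demo               # resume those processes
docker stop demo                  # graceful stop (SIGTERM, then SIGKILL after a timeout)
docker rm demo                    # remove the stopped container
# docker kill demo would have stopped it immediately with SIGKILL instead of docker stop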

How to get inside the container?

 First, you need to identify the container you want to enter. You can use the docker ps command to list all running containers:

docker ps

 This command will list all running containers with their container IDs, names, and other details.

 Once you have identified the container you want to enter, you can use the docker exec command to enter the container. The basic syntax of the command is:

docker exec -it <container_id> <command>

 Here, the -it flag allocates a pseudo-TTY and allows interactive input, <container_id> is the ID of the container you want to enter, and <command> is the command you want to execute inside the container.

For example - to enter a container with ID 1234567890ab and run a shell inside the container, you can use the following command:

 docker exec -it 1234567890ab /bin/bash
 This command will open a shell inside the container, allowing you to run commands as if you were logged into the container.

 Once you are inside the container, you can run commands just like you would on a regular command line. To exit the container, type exit and press enter.
 Note: If you want to enter a container that is not running, you can use the docker start command to start the container and then use the docker exec command to enter it.
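
 For example, using the container ID from the example above:

docker start 1234567890ab
docker exec -it 1234567890ab /bin/bash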

How to access the Container from outside?

First, you need to start the container with the -p option.
The syntax of the command is: docker run -p <host_port>:<container_port> <image_name>

Here, <host_port> is the port on the host system that you want to map to the container's <container_port>, and <image_name> is the name of the Docker image you want to run.

For example, if you want to run a container based on the nginx image and map the container's port 80 to the host's port 8080, you can use the following command:

docker run -p 8080:80 nginx

This command will start a new container based on the nginx image and map the container's port 80 to the host's port 8080.

Once the container is running, you can access it from outside the host system using a web browser or a command-line tool like curl.
In this example, you can access the container by entering http://localhost:8080 in a web browser or running the following command in a terminal:
curl http://localhost:8080

This command will retrieve the content of the default page served by the nginx container.

By mapping the container's ports to the host's ports, you can access the container from outside the host system. Note that the container must be running and the ports must be mapped correctly for external access to work.
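
To confirm the mapping is in place, you can check it from the host (using whatever container ID docker ps reports for the nginx container):

docker ps                       # the PORTS column should show 0.0.0.0:8080->80/tcp
docker port <container_id> 80   # prints the host address and port mapped to container port 80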

What is the rule for a container to keep running?

A container keeps running only as long as its main process (PID 1 inside the container) is running; when that process exits, the container stops.
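
A quick demonstration (the container name and image are illustrative): give a container a main process that exits after ten seconds and watch it stop:

docker run -d --name shortlived alpine sleep 10   # the main process is "sleep 10"
docker ps                                         # the container is listed while sleep is running
# roughly ten seconds later, sleep exits and the container stops:
docker ps -a --filter "name=shortlived"           # STATUS now shows Exited (0)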
