Have you heard of Docker but are still having trouble understanding it? Don’t worry, you are not alone. Let’s look together at what Docker is and how it works.
What is Docker?
Docker is a tool designed to make it easier to build, deploy, and run applications using containers. Containers allow a developer to package an application with all the parts it needs, such as libraries and other dependencies, and distribute it as a single package. This way, thanks to the container, the developer can be sure that the application will run on any other machine, regardless of any custom settings that machine may have that differ from the machine used to write and test the code.
In some ways, Docker is a bit like a virtual machine. But unlike a virtual machine, rather than creating an entire virtual operating system, Docker allows applications to use the same kernel as the system they’re running on. This provides a significant performance boost and reduces the size of the application.
Also, Docker is an open source project. This means that anyone can contribute to Docker and extend it to meet their needs if they need additional functionality that isn’t available off the shelf.
What is it used for?
Docker is a tool designed to benefit both developers and system administrators. For developers, it means they can focus on writing code without worrying about the system it will run on. It also allows them to get a head start by using one of the thousands of programs already designed to run in a Docker container as part of their application. For system administrators, Docker offers flexibility and potentially reduces the number of systems needed due to its small footprint.
What are containers?
One of the goals of modern software development is to keep applications on the same host or cluster isolated from each other so that they do not interfere with each other’s operation or maintenance. This can be difficult because of the packages, libraries, and other software components each application needs to run. One solution to this problem has been virtual machines, which keep applications on the same hardware completely separate and minimize conflicts between software components and competition for hardware resources. But virtual machines are bulky, each requiring its own operating system, so they typically occupy several gigabytes and are difficult to maintain and upgrade.
Containers, on the other hand, isolate application execution environments, but share the underlying operating system kernel. They are typically measured in megabytes, use far fewer resources than VMs, start up almost immediately and can be packed much more densely. Containers provide a highly efficient mechanism for combining software components into the types of applications and services needed in a modern enterprise and for keeping those software components updated and maintained.
How does Docker work?
To understand how Docker works, let’s look at some of the components you would use to build Docker applications.
Dockerfile
Every Docker image starts with a Dockerfile, a plain text file that contains the instructions for building the image. A Dockerfile specifies the base operating system that will underpin the container, along with the languages, environment variables, file locations, network ports, and other components it needs, and of course what the container will actually do once executed.
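As a minimal sketch, a Dockerfile for a small Python web service might look like this (the file names and the app itself are illustrative assumptions, not taken from any specific project):

```dockerfile
# Start from an official base image that provides the OS layer and runtime
FROM python:3.12-slim

# Set the working directory inside the image
WORKDIR /app

# Copy the dependency list first so this layer is cached between builds
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code
COPY . .

# Document the network port the application listens on
EXPOSE 8000

# Define what the container will actually do once executed
CMD ["python", "app.py"]
```

Each instruction adds a layer to the image, which is why frequently changing files (your code) are copied after rarely changing ones (your dependencies).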
Docker Image
After you write the Dockerfile, you’ll call the docker build command to build an image from it. While the Dockerfile is the set of instructions that tells the build process what to do, a Docker image is the result: a read-only template for creating containers that run on the Docker platform. It provides a convenient way to package preconfigured server environments and applications, and unlike containers, it is immutable.
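Assuming a Dockerfile sits in the current directory, building and tagging an image is a single command (the image name myapp is a placeholder):

```shell
# Build an image from the Dockerfile in the current directory (.)
# and tag it so it can be referenced by name and version later
docker build -t myapp:1.0 .

# List local images to confirm the build succeeded
docker images myapp
```

The tag after -t is how you will refer to the image in every later docker run command.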
Docker run
The docker run command is what actually starts a container. Each container is an instance of an image. Containers are designed to be temporary, but they can be stopped and restarted, which resumes the container in the same state it was stopped in. Additionally, multiple container instances of the same image can run concurrently (as long as each container has a unique name).
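A hypothetical session might look like this; the image name myapp and the container names are placeholders, and the port mapping assumes the application listens on port 8000 inside the container:

```shell
# Start a container from the image in the background (-d),
# with a unique name and a host-to-container port mapping
docker run -d --name web1 -p 8080:8000 myapp:1.0

# A second instance of the same image needs a different name and host port
docker run -d --name web2 -p 8081:8000 myapp:1.0

# Stop a container, then restart it in the state it was stopped in
docker stop web1
docker start web1
```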
Docker Hub
While creating containers is easy, you don’t have to create each image from scratch. Docker Hub is a repository for sharing and managing containers, where you’ll find official Docker images from open source projects and unofficial images from the general public. You can download container images containing useful code or upload your own, share them openly or make them private.
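For example, pulling an official image and publishing your own looks like this (replace youruser with your own Docker Hub account name):

```shell
# Download the official nginx image from Docker Hub
docker pull nginx:latest

# Tag a local image under your Docker Hub namespace, then upload it
docker tag myapp:1.0 youruser/myapp:1.0
docker push youruser/myapp:1.0
```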
Benefits of using Docker
The advantages of using Docker are many. Let’s look at them together.
Portability
Once you’ve tested your containerized application, you can deploy it to any other system running Docker, and you can be sure that your application will perform exactly as it did when you tested it.
Performance
While virtual machines are an alternative to containers, the fact that containers don’t contain an operating system (whereas virtual machines do) makes them faster to create and faster to boot.
Agility
The portability and performance benefits that containers offer can help you make your development process more agile and responsive.
Isolation
A Docker container that contains one of your applications also includes the relevant versions of any supporting software required by the application. If other Docker containers contain applications that require different versions of the same supporting software, that’s not a problem because the different Docker containers are totally independent from each other.
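As a small sketch of this isolation, two containers can run different versions of the same runtime side by side, with neither affecting the host or the other container:

```shell
# Each container carries its own interpreter version;
# the two versions never conflict because the containers are independent
docker run --rm python:3.8-slim python --version
docker run --rm python:3.12-slim python --version
```

The --rm flag removes each container as soon as it exits, which suits this kind of throwaway use.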
Flexibility
If you need to upgrade during a product release cycle, you can easily make the necessary changes to your Docker containers, test them, and deploy new containers. This kind of flexibility is another key benefit of using Docker. Docker really lets you build, test, and release images that can be deployed across multiple servers.
Scalability
The Docker containerization method allows you to segment an application so that you can update, clean up, or repair individual components without taking the entire application apart. Furthermore, Docker makes it possible to build an architecture of applications composed of small processes that communicate with each other through APIs.
Deployment and orchestration
When running just a few containers, it’s fairly simple to manage an application within Docker Engine, but for deployments comprising thousands of containers and hundreds of services, it’s nearly impossible to manage the workflow without the help of some purpose-built tools.
Docker Compose
Docker Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a YAML file to configure your application’s services. Then, with a single command, you create and start all the services from your configuration.
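A minimal docker-compose.yml for a two-service application might look like this (the service names, images, and ports are illustrative assumptions):

```yaml
services:
  web:
    build: .            # build the image from the local Dockerfile
    ports:
      - "8080:8000"     # host:container port mapping
    depends_on:
      - db              # start the database before the web service
  db:
    image: postgres:16  # use an official image from Docker Hub
    environment:
      POSTGRES_PASSWORD: example
```

With this file in place, docker compose up creates and starts both services with one command, and docker compose down tears them back down.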
Docker Swarm
Docker Swarm is a container orchestration tool built and managed by Docker; it is the native clustering tool for Docker. Swarm uses the standard Docker API: containers can be launched using normal docker run commands, and Swarm takes care of selecting an appropriate host to run each container on. This also means that other tools that use the Docker API, such as Compose, can use Swarm without any changes and take advantage of running on a cluster rather than a single host.
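As a sketch, turning a host into a Swarm manager and scheduling a replicated service takes only a few commands (the service name web is a placeholder):

```shell
# Initialize this host as a Swarm manager node
docker swarm init

# Run a service with three replicas; Swarm picks the hosts for each one
docker service create --name web --replicas 3 -p 8080:80 nginx:latest

# Inspect where the replicas were scheduled
docker service ps web
```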
Kubernetes
Kubernetes is an open source container orchestration platform descended from a project developed for internal use at Google. Kubernetes schedules and automates tasks integral to the management of container based architectures, including container deployment, updates, service discovery, storage provisioning, load balancing, health monitoring, and more.
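For comparison, the Kubernetes counterpart of a replicated service is a Deployment manifest; this sketch uses illustrative names and the public nginx image:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3              # Kubernetes keeps three pods running at all times
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:latest
          ports:
            - containerPort: 80
```

Applying this file with kubectl apply -f tells the cluster the desired state, and Kubernetes continuously works to maintain it.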
Conclusion
Docker is a powerful tool that can help you automate the deployment of your applications. If you’re looking for a tool that can help you streamline your workflow and make your life as a developer or sysadmin easier, then Docker is a must-have addition to your toolbox.