Docker is an open-source engine that automates the deployment of any application as a lightweight, portable, self-sufficient container that will run virtually anywhere.
Docker containers can encapsulate any payload, and will run consistently on and between virtually any server. The same container that a developer builds and tests on a laptop will run at scale, in production, on VMs, bare-metal servers, OpenStack clusters, public instances, or combinations of the above.
Common use cases for Docker include:
Fifteen years ago, virtually all applications were written using well-defined stacks of services and deployed on a single monolithic, proprietary server. Today, developers build and assemble applications using a multiplicity of the best available services, and must be prepared for those applications to be deployed across a multiplicity of different hardware environments, including public, private, and virtualized servers.
Figure 1: The Evolution of IT
This sets up the possibility for:
Figure 2: The Challenge of Multiple Stacks and Multiple Hardware Environments
Or, viewed as a matrix, we can see that there is a huge number of combinations and permutations of applications/services and hardware environments that need to be considered every time an application is written or rewritten. This creates a difficult situation both for the developers who are writing applications and for the folks in operations who are trying to create a scalable, secure, and high-performance operations environment.
Figure 3: Dynamic Stacks and Dynamic Hardware Environments Create an NxN Matrix
How to solve this problem? A useful analogy can be drawn from the world of shipping. Before 1960, most cargo was shipped break bulk. Shippers and carriers alike needed to worry about bad interactions between different types of cargo (e.g. if a shipment of anvils fell on a sack of bananas). Similarly, transitions between different modes of transport were painful. Up to half the time to ship something could be taken up by unloading and reloading ships in port, and by waiting for the same shipment to be reloaded onto trains, trucks, etc. Along the way, losses due to damage and theft were large. And there was an n x n matrix between a multiplicity of different goods and a multiplicity of different transport mechanisms.
Figure 4: Analogy: Shipping Pre-1960
Fortunately, an answer was found in the form of a standard shipping container. Any type of goods, from pistachios to Porsches, can be packaged inside a standard shipping container. The container can then be sealed, and not re-opened until it reaches its final destination. In between, the containers can be loaded and unloaded, stacked, transported, and efficiently moved over long distances. The transfer from ship to gantry crane to train to truck can be automated, without requiring a modification of the container. Many authors credit the shipping container with revolutionizing both transportation and world trade in general. Today, 18 million standard containers carry 90% of world trade.
Figure 5: Solution to Shipping Challenge Was a Standard Container
To some extent, Docker can be thought of as an intermodal shipping container system for code.
Figure 6: The Solution to Software Shipping is Also a Standard Container System
Docker enables any application and its dependencies to be packaged up as a lightweight, portable, self-sufficient container. Containers have standard operations, thus enabling automation. And they are designed to run on virtually any Linux server. The same container that a developer builds and tests on a laptop will run at scale, in production, on VMs, bare-metal servers, OpenStack clusters, public instances, or combinations of the above.
In other words, developers can build their application once, and then know that it can run consistently anywhere. Operators can configure their servers once, and then know that they can run any application.
"Docker interests me because it allows simple environment isolation and repeatability. I can create a run-time environment once, package it up, then run it again on any other machine. Furthermore, everything that runs in that environment is isolated from the underlying host (much like a virtual machine). And best of all, everything is fast and simple."
It is useful to compare the main features of Docker to those of shipping containers. (See the analogy above).
| | Physical Containers | Docker |
| --- | --- | --- |
| Content Agnostic | The same container can hold almost any kind of cargo | Can encapsulate any payload and its dependencies |
| Hardware Agnostic | Standard shape and interface allow the same container to move from ship to train to semi-truck to warehouse to crane without being modified or opened | Using operating system primitives (e.g. LXC), can run consistently on virtually any hardware - VMs, bare metal, OpenStack, public IaaS, etc. - without modification |
| Content Isolation and Interaction | No worry about anvils crushing bananas. Containers can be stacked and shipped together | Resource, network, and content isolation. Avoids dependency hell |
| Automation | Standard interfaces make it easy to automate loading, unloading, moving, etc. | Standard operations to run, start, stop, commit, search, etc. Perfect for devops: CI, CD, autoscaling, hybrid clouds |
| Highly Efficient | No opening or modification, quick to move between waypoints | Lightweight, virtually no performance or start-up penalty, quick to move and manipulate |
| Separation of Duties | Shipper worries about the inside of the box, carrier worries about the outside of the box | Developer worries about code, Ops worries about infrastructure |
Figure 7: Main Docker Features
For a more technical view of features, please see the following:
Docker makes it easy to build, modify, publish, search, and run containers. The diagram below should give you a good sense of the Docker basics. With Docker, a container comprises both an application and all of its dependencies. Containers can either be created manually or, if a source code repository contains a Dockerfile, automatically. Subsequent modifications to a baseline Docker image can be committed to a new image using docker commit and then pushed to a central registry.
Images can be found in a Docker registry (either public or private) using docker search. They can be pulled from the registry using docker pull, and containers can be run, started, stopped, etc. using docker run and related commands. Notably, the target of a run command can be your own servers, public instances, or a combination.
Figure 8: Basic Docker Functions
For a full list of functions, please go to: http://docs.docker.io/en/latest/commandline/
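As a rough sketch of the lifecycle described above, the commands below search for, pull, run, modify, commit, and publish a container image. The ubuntu base image is real; the changes made inside the container and the myorg/ubuntu-modified name are purely illustrative.

```
docker search ubuntu                  # find images in a registry
docker pull ubuntu                    # pull an image to the local host
docker run -i -t ubuntu /bin/bash     # start an interactive container
# ...install packages or change files inside the container, then exit...
docker ps -a                          # look up the stopped container's ID
docker commit <container_id> myorg/ubuntu-modified   # save the changes as a new image
docker push myorg/ubuntu-modified     # publish the new image to a registry
```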
Docker runs three ways:

* as a daemon to manage LXC containers on your Linux host (sudo docker -d)
* as a CLI which talks to the daemon's REST API (docker run ...)
* ...
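A short sketch of the first two modes follows; the socket path and example image are assumptions, and recent Docker releases start the daemon via a separate dockerd binary rather than sudo docker -d.

```
# Daemon: manages containers on the Linux host.
sudo docker -d &

# CLI: the docker client sends commands to the daemon's REST API.
docker run -i -t ubuntu /bin/bash

# The same REST API can also be queried directly over the daemon's
# Unix socket (requires curl 7.40+):
curl --unix-socket /var/run/docker.sock http://localhost/version
```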