Getting Started with Container Orchestration

While you don’t need orchestration to run containers, you do need orchestration to run containers efficiently and at scale.

That’s why one of the first steps you’ll want to take after learning the basics of containers is to learn about container orchestration. Keep reading for an overview of how container orchestration works and what to consider when getting started with orchestration tools.

What is container orchestration?

Container orchestration is the use of automated tools to manage the tasks required to run a containerized application.

These tasks include processes like deploying containers, adding or removing instances of a container in response to changes in load, restarting failed containers and moving containers from one server to another in order to balance the load across a cluster.
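In Kubernetes, for example, many of these tasks are expressed declaratively: you state a desired number of container instances, and the orchestrator starts, restarts and reschedules containers to match it. A minimal Deployment manifest sketch (the names and image here are illustrative, not prescriptive) looks like this:

```yaml
# Illustrative Deployment manifest; "web" and nginx:1.25 are placeholder names.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3            # the orchestrator maintains this many instances
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25
```

If one of the three containers fails, Kubernetes replaces it automatically; changing `replicas` and reapplying the manifest scales the application up or down.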

Why is container orchestration important?

You could handle all of these tasks manually, of course. Tools like the Docker CLI provide all of the commands you need to start, stop and otherwise manage containers by hand.
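As a rough sketch of what managing a container "by hand" looks like, the basic Docker lifecycle runs along these lines (the container name and image are placeholders; the commands are wrapped in an opt-in guard because they require a running Docker daemon and pull an image over the network):

```shell
# Opt-in demo: set RUN_DOCKER_DEMO=1 to actually execute.
# Requires a running Docker daemon; "demo-web" is a placeholder name.
if [ "${RUN_DOCKER_DEMO:-0}" = "1" ]; then
  docker run -d --name demo-web -p 8080:80 nginx:alpine  # start a container
  docker ps --filter name=demo-web                       # check that it is running
  docker stop demo-web                                   # stop it
  docker rm demo-web                                     # remove it
fi
```

Multiply those four commands by every container on every server, and the case for automation makes itself.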

But manually managing containers is not practical when you have more than just a few containers in your environment. It’s also very difficult when your containers are spread across multiple servers, in which case you’d have to log into each server separately to manage containers on it.

That’s why container orchestration is so important. It enables the use of containers at scale. By automating tasks that would take a long time to perform manually, container orchestration makes it possible to run dozens, hundreds or even thousands of containers at once.

Container orchestration tools

There are a variety of container orchestration tools available today.

The most popular by far is Kubernetes, whose market share was 77 percent as of 2019 (and if you consider OpenShift to be a form of Kubernetes, which it arguably is, then the market share of Kubernetes is closer to 90 percent). Kubernetes is widely used because it’s highly extensible and open source, making it easy to adapt to a variety of use cases. It also benefits from a thriving development ecosystem that includes a number of add-on tools to help manage Kubernetes and extend its functionality.

On the other hand, Kubernetes is a very complex orchestration solution. For small-scale container orchestration needs, you may prefer to use an orchestration tool like Docker Swarm, which is easier (but not trivial) to configure, or AWS ECS, which requires very little configuration and management compared to other orchestrators.
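To illustrate the difference in ceremony, turning a single Docker host into a one-node Swarm and running a replicated service takes only a couple of commands. This is a sketch (the service name and image are illustrative, and the commands sit behind an opt-in guard because they change the Docker host's state):

```shell
# Opt-in demo: set RUN_SWARM_DEMO=1 to actually execute.
# These commands modify the Docker host's state, so they are a no-op by default.
if [ "${RUN_SWARM_DEMO:-0}" = "1" ]; then
  docker swarm init                                   # make this host a Swarm manager
  docker service create --name web --replicas 3 \
    -p 8080:80 nginx:alpine                           # Swarm keeps 3 replicas running
  docker service ls                                   # inspect the running service
fi
```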

Alternatively, if you want to use Kubernetes but worry that it is too complex, you can lower the Kubernetes learning curve by choosing a managed Kubernetes service, such as AWS EKS, Azure AKS or Platform9. These services simplify some aspects of Kubernetes configuration and management, such as the provisioning of the servers required to set up a Kubernetes cluster and the collection of metrics to aid in Kubernetes performance management. They still require a fair amount of effort and expertise to use, but they are simpler than setting up and managing Kubernetes totally on your own.

What container orchestration doesn’t do

While container orchestration makes it much easier to deploy containers at scale, it’s important to understand that container orchestrators don’t address every management task related to containers.

For one, although orchestrators can automatically restart failed containers, they don’t monitor containers for performance issues or alert you about problems like a container that repeatedly fails to start. To gain visibility into these types of issues, you’ll need to use a monitoring tool or service that supports containers. Most modern Application Performance Management (APM) and observability platforms do.
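You can dig into such failures manually, of course. In Kubernetes, for instance, you might inspect restart counts and recent events by hand, which is exactly the kind of work a monitoring tool automates (the pod name below is a placeholder, and the commands are behind an opt-in guard because they need a configured cluster):

```shell
# Opt-in demo: set RUN_K8S_INSPECT_DEMO=1 when a cluster is available.
# "my-pod" is a placeholder pod name.
if [ "${RUN_K8S_INSPECT_DEMO:-0}" = "1" ]; then
  kubectl get pods                               # RESTARTS column hints at crash loops
  kubectl get events --sort-by=.lastTimestamp    # recent cluster events
  kubectl describe pod my-pod || true            # per-pod detail (pod may not exist)
fi
```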

Likewise, container orchestration does little to address security needs. Although orchestrators like Kubernetes provide some basic security-related tools, like access-control frameworks and the ability to enforce isolation between containers at the network level, they are not full-blown security suites. Here again, you’ll need to leverage external tools (such as Falco, an open source threat detection platform designed for Kubernetes) to enhance the security of your containerized environment.
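As an example of the basic tools Kubernetes does provide, a NetworkPolicy can enforce isolation between containers at the network level. The sketch below (all labels are illustrative) allows ingress traffic to pods labeled app: web only from pods labeled role: frontend:

```yaml
# Illustrative NetworkPolicy: only pods labeled role: frontend may reach
# pods labeled app: web; all other ingress to those pods is denied.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: web-allow-frontend
spec:
  podSelector:
    matchLabels:
      app: web
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: frontend
```

Useful, but a far cry from threat detection or vulnerability scanning, which is where external tools come in.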

Container orchestrators also don’t set up storage or networking resources for you. They can typically integrate with external services to provide these resources, but a platform like Kubernetes does not include any native storage media or specific networking configuration.
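In Kubernetes, for instance, an application asks for storage through a PersistentVolumeClaim, but the claim is only satisfied if some provisioner outside the orchestrator actually supplies the disk. A sketch (the storage class name is illustrative and varies by environment):

```yaml
# Illustrative PersistentVolumeClaim: Kubernetes records the request,
# but a provisioner behind the "standard" storage class must supply the disk.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: standard
```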

Finally, note that container orchestrators don’t help you manage container images, which are the blueprints on which running containers are based. For that task, you’ll need a container registry.
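The typical workflow pairs the two: you tag an image with your registry’s address and push it there, and the orchestrator pulls it when deploying. Roughly (the registry host and image names are placeholders; the commands are behind an opt-in guard because they need a real registry and a locally built image):

```shell
# Opt-in demo: set RUN_REGISTRY_DEMO=1 to actually execute.
# registry.example.com and myapp are placeholders.
if [ "${RUN_REGISTRY_DEMO:-0}" = "1" ]; then
  docker tag myapp:1.0 registry.example.com/myapp:1.0   # tag with the registry address
  docker push registry.example.com/myapp:1.0            # upload to the registry
fi
```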

First steps with container orchestration

Now that you understand how container orchestration works and which orchestration tools are available, you can begin incorporating container orchestration into your software stack.

A simple way to get started is to set up a basic container environment on a virtual machine or spare PC, then install an orchestrator inside it. If you use Docker to manage your containers, Docker Swarm should be available in your environment by default. If you want to test out Kubernetes, consider using a “lightweight” variant, like K3s, which is designed to run well on a single computer and can be installed on most Linux distributions with just a few commands.
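For reference, K3s publishes a one-line installer script. The sketch below wraps it in an opt-in guard, since it installs a system service and requires root (and, as with any piped installer, it’s worth reviewing the script at get.k3s.io before running it):

```shell
# Opt-in: set RUN_K3S_INSTALL=1 to actually install K3s.
# The installer sets up a system service and requires root privileges.
if [ "${RUN_K3S_INSTALL:-0}" = "1" ]; then
  curl -sfL https://get.k3s.io | sh -     # download and run the K3s installer
  sudo k3s kubectl get nodes              # verify the single-node cluster is up
fi
```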

To experiment with ECS, which is a hosted service available only in the AWS cloud, you’ll have to set up an AWS account. But minimal usage of the service for testing purposes should not cost you much, if anything.

Use your test environment to compare different orchestrators, experiment with plugins and, if you wish, scale up your cluster to include multiple VMs or PCs instead of just one. From there, it’s a steady march toward deploying a container orchestration platform for production use.
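Once a Kubernetes test cluster is running, for instance, scaling a workload up or down is a single command (the deployment name is a placeholder; the commands sit behind an opt-in guard because they need a configured cluster):

```shell
# Opt-in demo: set RUN_SCALE_DEMO=1 when a cluster is available.
# "web" is a placeholder deployment name.
if [ "${RUN_SCALE_DEMO:-0}" = "1" ]; then
  kubectl scale deployment web --replicas=5   # ask the orchestrator for 5 instances
  kubectl get deployment web                  # confirm the new replica count
fi
```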