Creating containers is quite different from deploying traditional servers, and to achieve the best chance of success you should use tools that are meant to work in the container world.
In a traditional environment, deploying a server means specifying its storage layout, network interface cards (NICs), CPU, and other configuration details, and that's on top of choosing a vendor, ordering the hardware, and waiting for it to arrive.
Containers, on the other hand, are deployed as they’re needed, on demand, and can be created without specifying their storage and NIC requirements.
There are many for-pay container tools, but there are also quite a few powerful open-source options. Here are five that my team and I use for our clients. There are of course plenty of other tools out there, but these form the core of our work and are the ones we keep coming back to.
1. Kubernetes

Kubernetes is a Linux-based open-source tool that orchestrates containers across a cluster. One of its most appealing qualities is how much it simplifies life for DevOps pros who manage containerized applications: you build your app, declare the desired workload in configuration, and Kubernetes keeps your environment running in that state. You'll have the same level of reliability and agility with your container-based applications that you have with traditional applications.
As you deploy your applications, Kubernetes surfaces events and health-check results, so there's little need to monitor or manage individual containers by hand. Once an application is deployed, you're free to scale it, migrate it, or shut it down.
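To make the declarative model concrete, here is a minimal sketch of a Kubernetes Deployment manifest. The app name and image are hypothetical, chosen only for illustration:

```yaml
# Hypothetical example: a Deployment that keeps three replicas of an
# nginx container running; Kubernetes reschedules any pod that fails.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app              # assumed name for illustration
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
      - name: web
        image: nginx:1.25    # any container image works here
        ports:
        - containerPort: 80
```

Applying this with `kubectl apply -f deployment.yaml` is all it takes; Kubernetes continuously reconciles the cluster toward the declared state.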
A major pain point for companies has been protecting the integrity of the data in their Kubernetes-based deployments. This is why many choose to deploy to private clouds, run on-premises instances, or depend on managed services. Orchestrating security for these workloads on a public cloud provider may be straightforward, but implementing the same security in a private cloud, or even in a hypervisor-based private cloud, can prove more difficult.
Many of these headaches are side effects of Kubernetes' rapid growth, but they shouldn't be reasons for not adopting it. Yes, you'll need to solve the problems associated with scaling and security. But once you do, Kubernetes lets organizations shift to the cloud and reap the benefits of container-based applications without worrying about those growing pains.
2. Docker

Docker is a containerization technology that works hand in hand with the Kubernetes platform; many teams adopt it to build applications that run across multiple containers.
The first thing to note is that you need a Kubernetes cluster to orchestrate Docker containers at scale. The good news is that there is a tool, called kops, that automates installing a production-grade cluster, on top of which you can run your Docker workloads. Setting up a cluster can be a tricky operation, and while kops makes it easier, some teams prefer to run it on servers they own rather than on a cloud provider's managed offering, out of concern about locking in to a vendor whose platform may change. Whichever route you take, the nodes need a Linux distribution and the Docker engine installed.
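As a sketch of what provisioning with kops looks like, assuming AWS and placeholder names throughout:

```shell
# Hypothetical values: kops keeps cluster state in an S3 bucket you own.
export KOPS_STATE_STORE=s3://my-kops-state-bucket

# Define the cluster, then apply the configuration to create it.
kops create cluster \
  --name=demo.k8s.local \
  --zones=us-east-1a \
  --node-count=2
kops update cluster --name=demo.k8s.local --yes
```

The bucket, cluster name, and zone are illustrative; the point is that kops turns cluster provisioning into a couple of declarative commands rather than a manual build-out.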
How many containers does it take to run an app? The good news is that creating them is fast: containers start in seconds, so if you are working on a big app and need hundreds of containers, you will be pleased with how quickly you can spin them up. If you are just testing something, though, you don't need many. My recommendation is to keep the count small and actively manage the containers you do run, making sure each one is earning its place. One caveat: scaling container-to-container networking so that many containers can communicate simultaneously without slowing down is still a work in progress.
3. Apache Mesos
Apache Mesos is an open-source cluster manager that pools CPU, memory, and storage across large clusters and offers those resources to application frameworks; it became a top-level Apache project in 2013. Long-running container workloads are typically handled by Marathon, an orchestration framework that runs on top of Mesos and takes care of deployment, scaling, and health checking. It is a customizable platform that can be used in applications ranging from data-processing workflows to business intelligence.
In practice, Mesos uses two-level scheduling: the Mesos master makes resource offers to frameworks, and each framework's scheduler decides which offers to accept and what tasks to launch on them. Tasks run as isolated processes, often inside Docker containers, with container-level details managed by the Mesos resource manager. The cluster is composed of one or more lightweight agent nodes and is designed to scale to many thousands of machines.
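To make the Marathon model concrete, here is a sketch of a Marathon app definition, with all names and sizes hypothetical (the exact schema varies somewhat by Marathon version):

```json
{
  "id": "/web-app",
  "instances": 2,
  "cpus": 0.5,
  "mem": 256,
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "nginx:1.25",
      "network": "BRIDGE",
      "portMappings": [{ "containerPort": 80, "hostPort": 0 }]
    }
  }
}
```

Posting a definition like this to Marathon's REST API asks it to keep two instances of the container running, restarting them on healthy agents if they fail.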
People who create distributed applications often use Docker to define the underlying runtime of the container. For example, a lot of large enterprises use Docker for packaging applications for delivery in a custom-branded, secure, private cloud deployment.
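As a sketch of that packaging step, a Dockerfile for a hypothetical small Python service might look like this:

```dockerfile
# Hypothetical example: package a small Python service as an image.
FROM python:3.11-slim
WORKDIR /app

# Install dependencies first so this layer is cached between builds.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code and define how the container starts.
COPY . .
CMD ["python", "app.py"]
```

Building this with `docker build` produces a self-contained image that can be shipped to any environment that runs Docker.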
In this approach, an application doesn't need access to all of the infrastructure at once; each container needs access only to the piece of infrastructure it is using at a given moment. That configuration has a number of benefits:
- The applications are built in isolation, so if there are problems with any one container, they don’t affect the whole system.
- The applications are deployed across many different servers, and one server may not need to touch any of the data that the container has created.
- If a problem arises in one server, it doesn’t impact the rest of the servers in the cluster.
- The distributed nature of Docker means it doesn’t have to trust the underlying operating system. Each container can be as self-sufficient as it wants to be.
However, in practice, Docker isn't the only technology that can be used to run a distributed program. For a program to be considered “distributed,” it must be composed of services that are themselves reachable as services across the network.
Mesos has an extremely active community and is maintained as a project of the Apache Software Foundation. The Mesos community often hosts meetups and educational events around the world, and users can participate in a weekly chat.
4. OpenShift

OpenShift is a cloud container platform from Red Hat that both builds containers and manages them. With the power of Kubernetes under the hood, you can deploy and manage distributed applications using Docker containers, whether on-premises or in the public cloud.
The open-source alternative to OpenShift is OKD. With it, you get most of the same features and functionality that OpenShift provides. Key features include the ability to:
- Automatically deploy Kubernetes on Azure, AWS, Google Cloud Platform, and other leading cloud providers, as well as on bare metal, Red Hat OpenStack, and other virtualization providers
- Quickly build applications with integrated service discovery and persistent storage
- Scale applications to manage increased demand
- Support automatic high availability, load balancing, health checking, and failover
- Provide CI/CD tooling and a console for building containerized applications on Kubernetes
The bottom line is that OKD is an effective alternative to OpenShift, with a strong and thriving open-source community.
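A typical workflow with the `oc` CLI shows how little ceremony is involved; the repository URL and app name below are hypothetical:

```shell
# Hypothetical repo and name: oc builds an image from the source
# repository (Source-to-Image) and deploys it to the cluster.
oc new-app https://github.com/example/my-app.git --name=my-app

# Expose the service through a route so it is reachable externally,
# then scale it up to handle more traffic.
oc expose service/my-app
oc scale --replicas=3 deployment/my-app
```

Behind these three commands, OKD handles the image build, service discovery, routing, and replica management described in the feature list above.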
5. Operator Hub
Operator Hub (OperatorHub.io) is a registry, launched by Red Hat, for finding and sharing Kubernetes Operators. An Operator packages the operational knowledge needed to run an application on Kubernetes: it extends the Kubernetes API with custom resources and automates tasks such as installation, upgrades, backups, and failure recovery for the software it manages.
To get started, browse the catalog at OperatorHub.io, pick an operator, and follow its install instructions. Most operators install through the Operator Lifecycle Manager (OLM), which deploys the operator into your cluster and keeps it up to date. If you build your own operator, typically with the Operator SDK, you can submit it to the catalog so that others can discover and install it.
For more information about Operator Hub and what you can do with it, check out the Operator Hub GitHub repository.
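On clusters running the Operator Lifecycle Manager (OLM), installing an operator from the catalog usually comes down to applying a Subscription resource; the operator name here is a placeholder:

```yaml
# Hypothetical example: subscribe to an operator from the
# OperatorHub.io catalog; OLM then installs and upgrades it.
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: my-operator          # assumed name for illustration
  namespace: operators
spec:
  channel: stable            # update channel published by the operator
  name: my-operator          # package name as listed in the catalog
  source: operatorhubio-catalog
  sourceNamespace: olm
```

Once applied, OLM resolves the package, installs the operator, and tracks the chosen channel for updates.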
A changing landscape
The rapid evolution of containers has continued with Kubernetes and OpenShift. Combining Docker with an orchestrator such as Kubernetes or Mesos provides an efficient way to run modern microservices in production, allowing companies to develop, test, and deploy their services quickly while monitoring performance and keeping applications secure.
Publisher: TechBeacon
Author: Matthew David