Take container cluster management to the next level: learn how to administer and configure Kubernetes on CoreOS, and apply suitable management design patterns such as ConfigMaps, autoscaling, elastic resource usage, and high availability. Other features discussed include logging, scheduling, rolling updates, volumes, service types, and multiple cloud provider zones. The atomic unit of modular container service in Kubernetes is a Pod: a group of containers with a common filesystem and networking. The Kubernetes Pod abstraction enables design patterns for containerized applications similar to object-oriented design patterns. Containers provide some of the same benefits as software objects, such as modularity (packaging), abstraction, and reuse.
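As an illustration of the Pod abstraction, two containers can share a volume and the Pod's network namespace; a minimal sketch (the Pod name, container names, and images are illustrative, not from the book):

```yaml
# Hypothetical two-container Pod: both containers share the Pod's network
# namespace (they can reach each other on localhost) and an emptyDir volume.
apiVersion: v1
kind: Pod
metadata:
  name: shared-pod                  # illustrative name
spec:
  volumes:
  - name: shared-data
    emptyDir: {}                    # scratch volume, lives as long as the Pod
  containers:
  - name: web
    image: nginx
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/nginx/html
  - name: content-generator
    image: debian
    command: ["/bin/sh", "-c", "echo hello > /pod-data/index.html && sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /pod-data          # same volume, different mount path
```

Here the second container writes a file that the nginx container serves, a composition that would be awkward with two standalone containers.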
CoreOS Linux is used in the majority of the chapters; other platforms discussed are CentOS with OpenShift, Debian 8 (Jessie) on AWS, and Debian 7 for Google Container Engine.
CoreOS is the main focus because Docker is pre-installed on CoreOS out of the box. CoreOS:
- Supports most cloud providers (including Amazon AWS EC2 and Google Cloud Platform) and virtualization platforms (such as VMware and VirtualBox)
- Provides cloud-config for declaratively configuring OS items such as network configuration (flannel), distributed key-value storage (etcd), and user accounts
- Provides a production-level infrastructure for containerized applications including automation, security, and scalability
- Leads the drive for container industry standards and founded appc
- Provides the most advanced container registry, Quay
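The cloud-config mechanism mentioned above is a YAML file processed at boot; a minimal sketch (the discovery token and user name are placeholders, not values from the book):

```yaml
#cloud-config
# Hypothetical CoreOS cloud-config: starts etcd2 and flannel and adds a user.
coreos:
  etcd2:
    # Placeholder discovery URL; a real token comes from https://discovery.etcd.io/new
    discovery: https://discovery.etcd.io/<token>
    advertise-client-urls: http://$private_ipv4:2379
    initial-advertise-peer-urls: http://$private_ipv4:2380
    listen-client-urls: http://0.0.0.0:2379
    listen-peer-urls: http://$private_ipv4:2380
  units:
    - name: etcd2.service       # distributed key-value store
      command: start
    - name: flanneld.service    # overlay networking
      command: start
users:
  - name: demo                  # illustrative user account
    groups:
      - sudo
```

The `$private_ipv4` variables are substituted by CoreOS with the instance's address at boot, which is what makes the same declarative file reusable across cluster nodes.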
Docker was made available as open source in March 2013 and has become the most commonly used containerization platform. Kubernetes was open-sourced in June 2014 and has become the most widely used container cluster manager. The first stable version of CoreOS Linux was released in July 2014, and CoreOS has since become one of the most commonly used operating systems for containers.
What You'll Learn
- Use Kubernetes with Docker
- Create a Kubernetes cluster on CoreOS on AWS
- Apply cluster management design patterns
- Use multiple cloud provider zones
- Work with Kubernetes and tools like Ansible
- Discover the Kubernetes-based PaaS platform OpenShift
- Create a high availability website
- Build a high availability Kubernetes master cluster
- Use volumes, configmaps, services, autoscaling, and rolling updates
- Manage compute resources
- Configure logging and scheduling
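To give a flavor of the design patterns covered, a Horizontal Pod Autoscaler can scale a Deployment between replica bounds based on CPU load; a minimal sketch (the target Deployment name and thresholds are illustrative):

```yaml
# Hypothetical HorizontalPodAutoscaler (autoscaling/v1) targeting a
# Deployment named "php-apache"; names and thresholds are assumptions.
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: php-apache
spec:
  scaleTargetRef:
    apiVersion: extensions/v1beta1
    kind: Deployment
    name: php-apache                 # illustrative Deployment to scale
  minReplicas: 1                     # never scale below one replica
  maxReplicas: 10                    # cap the scale-out
  targetCPUUtilizationPercentage: 50 # scale out when average CPU exceeds 50%
```

The autoscaler adjusts the replica count automatically, which pairs naturally with the elastic resource usage and high-availability patterns listed above.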
Who This Book Is For
Linux admins, CoreOS admins, application developers, and container-as-a-service (CaaS) developers. Some prerequisite knowledge of Linux and Docker is required, along with introductory knowledge of Kubernetes, such as creating a cluster, creating a Pod, creating a service, and creating and scaling a replication controller. For introductory Docker and Kubernetes information, refer to Pro Docker (Apress) and Kubernetes Microservices with Docker (Apress). Some prerequisite knowledge of Amazon Web Services (AWS) EC2, CloudFormation, and VPC is also required.
Kubernetes Management Design Patterns: With Docker, CoreOS Linux, and Other Platforms
Introduction

Section I: Platforms
1 Kubernetes on AWS
1.1 Installing a Kubernetes Cluster on AWS
1.2 Creating a Deployment
1.3 Creating a Service
1.4 Accessing the Service
1.5 Scaling the Deployment
1.6 Summary
2 Kubernetes on CoreOS
2.1 Setting the Environment
2.2 Configuring AWS Credentials
2.3 Installing Kube-aws
2.4 Setting Up Cluster Parameters
  2.4.1 Creating a KMS Key
  2.4.2 Setting Up an External DNS Name
2.5 Creating the Cluster CloudFormation
  2.5.1 Creating an Asset Directory
  2.5.2 Initializing the Cluster CloudFormation
  2.5.3 Rendering Contents of the Asset Directory
  2.5.4 Customizing the Cluster
  2.5.5 Validating the CloudFormation Stack
  2.5.6 Launching the Cluster CloudFormation
2.6 Configuring DNS
2.7 Accessing the Cluster
2.8 Testing the Cluster
2.9 Summary
3 Kubernetes on Google Cloud Platform
3.1 Setting the Environment
3.2 Creating a Project on Google Cloud Platform
3.3 Enabling Permissions
3.4 Enabling the Compute Engine API
3.5 Creating a VM Instance
3.6 Connecting to the VM Instance
3.7 Reserving a Static Address
3.8 Creating a Kubernetes Cluster
3.9 Creating a Kubernetes Application and Service
3.10 Stopping the Cluster
3.11 Summary

Section II: Administration and Configuration
4 Using Multiple Zones
4.1 Setting the Environment
4.2 Initializing a CloudFormation
4.3 Configuring cluster.yaml for Multiple Zones
4.4 Launching the CloudFormation
4.5 Configuring External DNS
4.6 Running a Kubernetes Application
4.7 Using Multiple Zones on AWS
4.8 Summary
5 Using the Tectonic Console
5.1 Setting the Environment
5.2 Downloading the Pull Secret and the Tectonic Console Manifest
5.3 Installing the Pull Secret and the Tectonic Console
5.4 Accessing the Tectonic Console
5.5 Using the Tectonic Console
5.6 Removing the Tectonic Console
5.7 Summary
6 Using Volumes
6.1 Setting the Environment
6.2 Creating an AWS Volume
6.3 Using an awsElasticBlockStore Volume
6.4 Creating a Git Repo
6.5 Using a gitRepo Volume
6.6 Summary
7 Using Services
7.1 Setting the Environment
7.2 Creating a ClusterIP Service
7.3 Creating a NodePort Service
7.4 Creating a LoadBalancer Service
7.5 Summary
8 Using Rolling Updates
8.1 Setting the Environment
8.2 Rolling Update with an RC Definition File
8.3 Rolling Update by Updating the Container Image
8.4 Rolling Back an Update
8.5 Using Only Either File or Image
8.6 Rolling Update on a Deployment with a Deployment File
9 Scheduling Pods
9.1 Scheduling Policy
9.2 Setting the Environment
9.3 Using the Default Scheduler
9.4 Scheduling Pods without a Node Selector
9.5 Setting Node Labels
9.6 Scheduling Pods with a Node Selector
9.7 Setting Node Affinity
  9.7.1 Setting requiredDuringSchedulingIgnoredDuringExecution
  9.7.2 Setting preferredDuringSchedulingIgnoredDuringExecution
10 Configuring Compute Resources
10.1 Types of Compute Resources
10.2 Resource Requests and Limits
10.3 Quality of Service
10.4 Setting the Environment
10.5 Finding Node Capacity
10.6 Creating a Pod with Resources Specified
10.7 Overcommitting Resource Limits
10.8 Reserving Node Resources
11 Using ConfigMaps
11.1 The kubectl create configmap Command
11.2 Setting the Environment
11.3 Creating ConfigMaps from Directories
11.4 Creating ConfigMaps from Files
11.5 Creating ConfigMaps from Literal Values
11.6 Consuming a ConfigMap in a Volume
12 Setting Resource Quotas
12.1 Setting the Environment
12.2 Defining Compute Resource Quotas
12.3 Exceeding Compute Resource Quotas
12.4 Defining Object Quotas
12.5 Exceeding Object Resource Quotas
12.6 Defining Best-Effort Quotas
12.7 Using Quotas
12.8 Exceeding Object Quotas
12.9 Exceeding the ConfigMaps Quota
13 Using Autoscaling
13.1 Setting the Environment
13.2 Running a PHP Apache Server Deployment
13.3 Creating a Service
13.4 Creating a Horizontal Pod Autoscaler
13.5 Increasing Load
14 Configuring Logging
14.1 Setting the Environment
14.2 Getting the Logs Generated by the Default Logger
14.3 Docker Log Files
14.4 Cluster-Level Logging with Elasticsearch and Kibana
  14.4.1 Starting Elasticsearch
  14.4.2 Starting a Replication Controller
  14.4.3 Starting Fluentd Elasticsearch to Collect Logs
  14.4.4 Starting Kibana

Section III: High Availability
15 Using an HA Master with OpenShift
15.1 Setting the Environment
15.2 Installing the Credentials
15.3 Installing the Network Manager
15.4 Installing OpenShift Ansible
15.5 Configuring Ansible
15.6 Running the Ansible Playbook
15.7 Testing the Cluster
15.8 Testing the HA
15.9 Summary
16 Developing a Highly Available Web Site
16.1 Setting the Environment
16.2 Creating Multiple CloudFormations
16.3 Configuring External DNS
16.4 Creating a Kubernetes Service
16.5 Creating an AWS Route 53
  16.5.1 Creating a Hosted Zone
  16.5.2 Configuring Name Servers
  16.5.3 Creating Record Sets
16.6 Testing HA
16.7 Summary