Huawei Cloud Stack 8.3.1 Solution Description 04
Basic Concepts
CCE provides highly scalable, high-performance, enterprise-class Kubernetes clusters and supports Docker containers. With CCE, you can easily deploy, manage, and scale containerized applications in the cloud with an easy-to-use graphical console.
In addition, CCE supports native Kubernetes APIs and kubectl. Before using CCE, you are advised to learn about related basic concepts.
Cluster
A cluster is a group of cloud servers (also known as nodes) in the same subnet. It has all the cloud resources (including VPCs and compute resources) required for running containers.
Node
A node is a cloud server (virtual or physical machine) running an instance of the Docker Engine. Containers are deployed, run, and managed on nodes. The node agent (kubelet) runs on each node to manage containers on the node. The number of nodes in a cluster can be scaled.
Node Pool
A node pool contains one node or a group of nodes with identical configuration in a cluster.
Virtual Private Cloud (VPC)
A VPC provides a secure and logically isolated network environment. VPCs provide the same network functions as physical networks plus advanced network services, such as elastic IP addresses and security groups.
Security Group
A security group is a collection of access control rules for ECSs that have the same security protection requirements and are mutually trusted in a VPC. After a security group is created, you can create different access rules for the security group to protect the ECSs associated with this security group.
Relationship Between Clusters, VPCs, Security Groups, and Nodes
Clusters, VPCs, and subnets can be combined in any of the following ways:
- Different clusters created in different VPCs.
- Different clusters created in the same subnet of a VPC.
- Different clusters created in different subnets of the same VPC.
Pod
A pod is the smallest and simplest unit in the Kubernetes object model that you create or deploy. A pod encapsulates an application container (or, in some cases, multiple containers), storage resources, a unique network IP address, and options that govern how the containers should run.
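To make this concrete, here is a minimal, illustrative Pod manifest; the pod name, image, and port are assumptions for the example, not values from this document:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod          # illustrative name
spec:
  containers:
  - name: nginx
    image: nginx:1.25      # illustrative image
    ports:
    - containerPort: 80    # port the container listens on
```

In practice, pods are rarely created directly; they are usually managed through workloads such as Deployments, described below.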
Container
A container is a runtime instance of a Docker image. Multiple containers can run on one node. A container is essentially a software process, but it runs in its own isolated namespaces rather than directly in the host environment.
Workload
A workload is an abstract model of a group of pods in Kubernetes. Kubernetes classifies workloads into Deployment, StatefulSet, DaemonSet, job, and cron job.
- Deployment: Pods are completely independent of each other and functionally identical. They feature auto scaling and rolling upgrade. Typical examples include Nginx and WordPress.
- StatefulSet: Pods are not completely independent of each other. They have stable persistent storage and feature orderly deployment and deletion. Typical examples include MySQL-HA and etcd.
- DaemonSet: A DaemonSet ensures that all or some nodes run one pod. You can use DaemonSets if you want your pods to run on every node. Typical examples include Ceph, Fluentd, and Prometheus Node Exporter.
- Job: A job is a one-time task that runs to completion. It can be executed immediately after being created. Before creating a workload using an image, you can execute a job to upload the image to the image repository.
- Cron job: A cron job runs periodically on a given schedule. Cron jobs can also schedule individual tasks on all nodes for a specific time.
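As a sketch of the most common workload type, the following Deployment manifest runs three functionally identical pods; the names, label, and image are illustrative assumptions:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment   # illustrative name
spec:
  replicas: 3              # number of identical pods to maintain
  selector:
    matchLabels:
      app: nginx           # must match the pod template's labels
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.25  # illustrative image
```

Changing `replicas` (or the image tag) and re-applying the manifest triggers the auto scaling and rolling upgrade behavior described above.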
Orchestration Template
An orchestration template describes the definitions and dependencies between a group of container services. You can use orchestration templates to deploy and manage multi-container applications.
Image
Docker creates an industry standard for packaging containerized applications. A Docker image is a special file system that includes everything needed to run containers: programs, libraries, resources, and configuration files. It also contains configuration parameters (such as anonymous volumes, environment variables, and users) required within a container runtime. An image does not contain any dynamic data. Its content remains unchanged after being built. When deploying containerized applications, you can use images from Software Repository for Container (SWR) or your private image registries. For example, a Docker image can contain a complete Ubuntu operating system, in which only the required programs and dependencies are installed.
Images become containers at runtime. That is, containers are created from images. Containers can be created, started, stopped, deleted, and suspended.
Namespace
A namespace is a collection of resources and objects. Multiple namespaces can be created in a single cluster with data isolated from each other. This enables namespaces to share the same cluster services without affecting each other. Examples:
- You can deploy workloads in a development environment into one namespace, and deploy workloads in a testing environment into another namespace.
- Pods, Services, ReplicationControllers, and Deployments belong to a namespace (named default, by default), whereas nodes and PersistentVolumes do not belong to any namespace.
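A namespace itself is an ordinary Kubernetes object. A minimal sketch, with an assumed namespace name:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: development   # illustrative name, e.g. for the development environment above
```

Workloads are then placed into it by setting `metadata.namespace: development` in their manifests or by passing `-n development` to kubectl.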
Service
A Service is an abstract method that exposes a group of applications running on pods as networked services.
Kubernetes provides a service discovery mechanism that requires no modification to your applications. Kubernetes gives pods their own IP addresses, assigns a single DNS name to a group of pods, and load-balances traffic across them.
Kubernetes allows you to specify a Service of a required type. The values and actions of different types of Services are as follows:
- ClusterIP: The default Service type. The Service is exposed on an internal cluster IP address and can be accessed only from within the cluster.
- NodePort: The Service is exposed on a static port of each node's IP address. A ClusterIP Service, to which the NodePort Service routes, is automatically created. You can access a NodePort Service from outside the cluster by requesting <NodeIP>:<NodePort>.
- LoadBalancer (ELB): The Service is exposed through a cloud provider's load balancer. The external load balancer routes requests to the automatically created NodePort and ClusterIP Services.
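As an illustration of the NodePort type, the sketch below exposes a hypothetical workload labeled `app: nginx`; the selector, names, and port numbers are assumptions for the example:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service   # illustrative name
spec:
  type: NodePort
  selector:
    app: nginx          # pods carrying this label receive the traffic
  ports:
  - port: 80            # ClusterIP port inside the cluster
    targetPort: 80      # container port on the pods
    nodePort: 30080     # static port on every node (default range 30000-32767)
```

With this manifest applied, the Service is reachable inside the cluster at its cluster IP on port 80 and from outside at <NodeIP>:30080.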
Layer-7 Load Balancing (Ingress)
An ingress is a set of routing rules for requests entering a cluster. It provides Services with URLs, load balancing, SSL termination, and HTTP routing for external access to the cluster.
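A hedged example of such a routing rule, assuming a backend Service named `nginx-service` and an illustrative host name:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress      # illustrative name
spec:
  rules:
  - host: example.com        # illustrative host; requests for this host match the rule
    http:
      paths:
      - path: /
        pathType: Prefix     # match all paths under /
        backend:
          service:
            name: nginx-service   # assumed backend Service
            port:
              number: 80
```

An ingress controller (in CCE, typically backed by ELB) watches these rules and performs the actual Layer-7 routing.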
Network Policy
Network policies provide policy-based network control to isolate applications and reduce the attack surface. A network policy uses label selectors to simulate traditional segmented networks, controlling traffic between pods and traffic from external sources.
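As a sketch, the policy below allows only pods labeled `app: frontend` to reach pods labeled `app: backend` on one TCP port; all labels, names, and the port are illustrative assumptions:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend       # illustrative name
spec:
  podSelector:
    matchLabels:
      app: backend           # the pods this policy protects
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend      # only these pods may connect
    ports:
    - protocol: TCP
      port: 8080             # illustrative port
```

Once any NetworkPolicy selects a pod, all ingress traffic to that pod not explicitly allowed is denied.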
ConfigMap
A ConfigMap is used to store configuration data or configuration files as key-value pairs. ConfigMaps are similar to secrets, but provide a means of working with strings that do not contain sensitive information.
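A small illustrative ConfigMap holding both a single value and a whole configuration file; the names and values are assumptions:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config        # illustrative name
data:
  LOG_LEVEL: info         # a plain key-value pair
  app.properties: |       # an entire configuration file as one value
    max.connections=100
```

Containers can consume these entries as environment variables (via `configMapKeyRef`) or as files mounted through a volume.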
Secret
Secrets resolve the configuration problem of sensitive data such as passwords, tokens, and keys, and will not expose the sensitive data in images or pod specs. A secret can be used as a volume or an environment variable.
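A minimal sketch of a secret, using the `stringData` convenience field so the illustrative values can be written in plain text (Kubernetes stores them base64-encoded):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials    # illustrative name
type: Opaque
stringData:
  username: admin         # illustrative credentials, never commit real ones
  password: changeit
```

A container can then reference an entry through `secretKeyRef` in its `env` section, or mount the whole secret as a volume, without the values ever appearing in the image or pod spec.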
Label
A label is a key-value pair and is associated with an object, for example, a pod. Labels are used to identify special features of objects and are meaningful to users. However, labels have no direct meaning to the kernel system.
Label Selector
A label selector is the core grouping mechanism of Kubernetes. Through a label selector, a client or user can identify a group of resource objects that share the same characteristics or attributes.
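As a fragment-level sketch (this is part of a workload spec, not a complete manifest), a selector can match by equality or by set-based expressions; the keys and values are illustrative:

```yaml
# Fragment of a Deployment or similar spec: selects pods that carry
# app: nginx AND whose environment label is production or staging.
selector:
  matchLabels:
    app: nginx
  matchExpressions:
  - key: environment
    operator: In
    values: [production, staging]
```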
Annotation
Annotations, like labels, are defined as key-value pairs.
Labels have strict naming rules. They define the metadata of Kubernetes objects and are used by label selectors.
Annotations carry additional, user-defined information that external tools can use when working with a resource object.
PersistentVolume
A PersistentVolume (PV) is a piece of network storage in a cluster. Like a node, it is a cluster resource.
PersistentVolumeClaim
A PersistentVolumeClaim (PVC) is a request for a PV. PVCs are similar to pods. Pods consume node resources, and PVCs consume PV resources. Pods request CPU and memory resources, and PVCs request data volumes of a specific size and access mode.
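The pod-to-node analogy can be seen in a minimal PVC manifest; the claim name and size are illustrative assumptions:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim        # illustrative name
spec:
  accessModes:
  - ReadWriteOnce         # volume mountable read-write by a single node
  resources:
    requests:
      storage: 10Gi       # requested size, analogous to a pod's CPU/memory request
```

A pod then consumes the claim by listing it under `volumes` with `persistentVolumeClaim.claimName: data-claim`.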
Auto Scaling - HPA
Horizontal Pod Autoscaling (HPA) horizontally scales the number of pods in Kubernetes. Built on the scaling mechanism of ReplicationControllers and Deployments, HPA automatically adjusts the number of pod replicas based on observed metrics such as CPU utilization.
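A hedged sketch of an HPA targeting an assumed Deployment named `nginx-deployment` (the API version shown is `autoscaling/v2`; older cluster versions may expose earlier versions):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-hpa             # illustrative name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx-deployment    # assumed target workload
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70   # scale out when average CPU usage exceeds 70%
```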
Affinity and Anti-Affinity
If an application is not containerized, multiple components of the application may run on the same virtual machine and processes communicate with each other. However, in the case of containerization, software processes are packed into different containers and each container has its own lifecycle. For example, the transaction process is packed into a container whereas the monitoring/logging process and local storage process are packed into other containers. If closely related container processes run on distant nodes, routing between them will be costly and slow.
- Affinity: Containers are scheduled onto the nearest node. For example, if application A and application B frequently interact with each other, it is necessary to use the affinity feature to keep the two applications as close as possible or even let them run on the same node. In this way, no performance loss will occur due to slow routing.
- Anti-affinity: Instances of the same application are spread across different nodes to achieve higher availability. Once a node is down, instances on other nodes are not affected. For example, if an application has multiple replicas, it is necessary to use the anti-affinity feature to deploy the replicas on different nodes. In this way, a single point of failure (SPOF) will not affect service running.
Node Affinity
By setting affinity labels, you can have pods scheduled to specific nodes.
Node Anti-Affinity
By setting anti-affinity labels, you can prevent pods from being scheduled to specific nodes.
Pod Affinity
You can deploy pods onto the same node to reduce latency and the consumption of network resources.
Pod Anti-Affinity
You can deploy pods of a workload onto different nodes to reduce the impact of system breakdowns. Anti-affinity deployment is also recommended for workloads that may interfere with each other.
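The four affinity variants above all live in a pod template's `affinity` field. A fragment-level sketch combining node affinity and pod anti-affinity, with illustrative label keys and values:

```yaml
# Fragment of a pod template spec.
affinity:
  nodeAffinity:                 # schedule only onto nodes labeled disktype=ssd
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: disktype
          operator: In
          values: [ssd]
  podAntiAffinity:              # keep replicas of app=nginx on different nodes
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchLabels:
          app: nginx
      topologyKey: kubernetes.io/hostname
```

The `topologyKey` defines the failure domain: with `kubernetes.io/hostname`, no two matching pods share a node, which is the anti-affinity behavior described above.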
Resource Quota
Resource quotas are used to limit the resource usage of users.
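For example, a ResourceQuota can cap the total consumption of a namespace; the namespace name and limits below are illustrative assumptions:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota     # illustrative name
  namespace: dev          # illustrative namespace
spec:
  hard:
    pods: "20"            # at most 20 pods in this namespace
    requests.cpu: "4"     # total CPU requests may not exceed 4 cores
    requests.memory: 8Gi  # total memory requests may not exceed 8 GiB
```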
Resource Limit (LimitRange)
By default, all containers in Kubernetes have no CPU or memory limit. LimitRange (limits for short) is used to add a resource limit to a namespace, including the minimum, maximum, and default amounts of resources. When a pod is created, resources are allocated according to the limits parameters.
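A sketch of a LimitRange for one namespace; all values are illustrative:

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: default-limits    # illustrative name
  namespace: dev          # illustrative namespace
spec:
  limits:
  - type: Container
    default:              # limit applied when a container specifies none
      cpu: 500m
      memory: 512Mi
    defaultRequest:       # request applied when a container specifies none
      cpu: 250m
      memory: 256Mi
    max:                  # hard upper bound per container
      cpu: "1"
      memory: 1Gi
```

Pods created in the namespace without explicit resource settings receive the default request and limit; pods exceeding `max` are rejected.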
Environment Variable
An environment variable is a variable whose value can affect the behavior of a running container. Up to 30 environment variables can be defined for a container. You can modify environment variables even after workloads are deployed, which increases flexibility in workload configuration.
Setting environment variables on CCE is the same as specifying ENV in a Dockerfile.
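A fragment-level sketch of both styles of environment variable (the image name, variable names, and referenced ConfigMap are illustrative assumptions):

```yaml
# Fragment of a pod template spec.
containers:
- name: app
  image: my-app:1.0            # illustrative image
  env:
  - name: LOG_LEVEL            # a literal value, equivalent to "ENV LOG_LEVEL=info"
    value: info                # in a Dockerfile
  - name: DB_HOST              # a value read from an assumed ConfigMap
    valueFrom:
      configMapKeyRef:
        name: app-config
        key: DB_HOST
```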
Istio-based Application Service Mesh (ASM)
Istio is an open platform that connects, secures, controls, and observes microservices.
Istio-based ASM is integrated into CCE and provides a non-intrusive approach to microservice governance. It supports complete lifecycle management and traffic management, and is compatible with Kubernetes and Istio ecosystems. You can start ASM in just a few clicks. Then ASM intelligently controls the flow of traffic by using a variety of features including load balancing, circuit breakers, and rate limiting. The built-in support for canary release, blue-green deployment, and other forms of grayscale releases enables you to automate release management all in one place. Based on the monitoring data that is collected non-intrusively, ASM works closely with Application Performance Management (APM) to provide a panoramic view of your services, including real-time traffic topology, tracing, performance monitoring, and runtime diagnosis.
Mappings Between CCE and Kubernetes Terms
Kubernetes (K8s) is an open-source system for automating deployment, scaling, and management of container clusters. It is a container orchestration tool and a leading solution based on the distributed architecture of the container technology. Kubernetes is built on the open-source Docker technology that automates deployment, resource scheduling, service discovery, and dynamic scaling of containerized applications.
This topic describes the mappings between CCE and Kubernetes terms.
CCE | Kubernetes
---|---
Cluster | Cluster
Node | Node
Node pool | NodePool
Container | Container
Image | Image
Namespace | Namespace
Deployment | Deployment
StatefulSet | StatefulSet
DaemonSet | DaemonSet
Job | Job
Cron job | CronJob
Pod | Pod
Service | Service
ClusterIP | ClusterIP
NodePort | NodePort
LoadBalancer | LoadBalancer
Layer-7 load balancing | Ingress
Network policy | NetworkPolicy
Chart | Template
ConfigMap | ConfigMap
Secret | Secret
Label | Label
Label selector | LabelSelector
Annotation | Annotation
Volume | PersistentVolume
PersistentVolumeClaim | PersistentVolumeClaim
Auto scaling | HPA
Node affinity | NodeAffinity
Node anti-affinity | NodeAntiAffinity
Pod affinity | PodAffinity
Pod anti-affinity | PodAntiAffinity
Webhook | Webhook
Endpoint | Endpoint
Quota | ResourceQuota
Resource limit | LimitRange