To make things clearer, here is an example kubeadm configuration file kubeadm-config.yaml for the single-stack control plane node.
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
networking:
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/16
What's next
• Validate IPv4/IPv6 dual-stack networking
• Read about Dual-stack cluster networking
• Learn more about the kubeadm configuration format
Turnkey Cloud Solutions
This page provides a list of Kubernetes certified solution providers. From each provider page,
you can learn how to install and set up production-ready clusters.
Best practices
• Considerations for large clusters
• Running in multiple zones
• Validate node setup
• Enforcing Pod Security Standards
• PKI certificates and requirements
Considerations for large clusters
A cluster is a set of nodes (physical or virtual machines) running Kubernetes agents, managed
by the control plane. Kubernetes v1.29 supports clusters with up to 5,000 nodes. More
specifically, Kubernetes is designed to accommodate configurations that meet all of the
following criteria:
• No more than 110 pods per node
• No more than 5,000 nodes
• No more than 150,000 total pods
• No more than 300,000 total containers
You can scale your cluster by adding or removing nodes. The way you do this depends on how
your cluster is deployed.
Cloud provider resource quotas
To avoid running into cloud provider quota issues when creating a cluster with many nodes,
consider:
• Requesting a quota increase for cloud resources such as:
  ◦ Compute instances
  ◦ CPUs
  ◦ Storage volumes
  ◦ In-use IP addresses
  ◦ Packet filtering rule sets
  ◦ Number of load balancers
  ◦ Network subnets
  ◦ Log streams
• Gating the cluster scaling actions to bring up new nodes in batches, with a pause between
batches, because some cloud providers rate limit the creation of new instances.
Control plane components
For a large cluster, you need a control plane with sufficient compute and other resources.
Typically you would run one or two control plane instances per failure zone, scaling those
instances vertically first and then scaling horizontally after reaching the point of diminishing
returns to (vertical) scaling.
You should run at least one instance per failure zone to provide fault-tolerance. Kubernetes
nodes do not automatically steer traffic towards control-plane endpoints that are in the same
failure zone; however, your cloud provider might have its own mechanisms to do this.
For example, using a managed load balancer, you can configure the load balancer to send traffic
that originates from the kubelet and Pods in failure zone A only to the control plane hosts that
are also in zone A. If a single control-plane host, or an endpoint in failure zone A, goes offline,
that means that all the control-plane traffic for nodes in zone A is now being sent between
zones. Running multiple control plane hosts in each zone makes that outcome less likely.
etcd storage
To improve performance of large clusters, you can store Event objects in a separate dedicated
etcd instance.
When creating a cluster, you can (using custom tooling):
• start and configure an additional etcd instance
• configure the API server to use it for storing events (see the sketch below)
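As a minimal sketch of that second step: the kube-apiserver flag --etcd-servers-overrides routes a
single resource to a different etcd. The etcd-events endpoint below is a hypothetical address for
your dedicated instance.
# Fragment of a kube-apiserver invocation (other flags omitted).
# Event objects go to the dedicated etcd; everything else stays on the main one.
kube-apiserver \
  --etcd-servers=https://127.0.0.1:2379 \
  --etcd-servers-overrides=/events#https://etcd-events:2379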
See Operating etcd clusters for Kubernetes and Set up a High Availability etcd cluster with
kubeadm for details on configuring and managing etcd for a large cluster.
Addon resources
Kubernetes resource limits help to minimize the impact of memory leaks and other ways that
pods and containers can impact other components. These resource limits apply to addon
resources just as they apply to application workloads.
For example, you can set CPU and memory limits for a logging component:
...
containers:
- name: fluentd-cloud-logging
  image: fluent/fluentd-kubernetes-daemonset:v1
  resources:
    limits:
      cpu: 100m
      memory: 200Mi
Addons' default limits are typically based on data collected from experience running each
addon on small or medium Kubernetes clusters. When running on large clusters, addons often
consume more of some resources than their default limits. If a large cluster is deployed without
adjusting these values, the addon(s) may continuously get killed because they keep hitting the
memory limit. Alternatively, the addon may run but with poor performance due to CPU time
slice restrictions.
To avoid running into cluster addon resource issues, when creating a cluster with many nodes,
consider the following:
• Some addons scale vertically - there is one replica of the addon for the cluster or serving
a whole failure zone. For these addons, increase requests and limits as you scale out your
cluster.
• Many addons scale horizontally - you add capacity by running more pods - but with a
very large cluster you may also need to raise CPU or memory limits slightly. The
VerticalPodAutoscaler can run in recommender mode to provide suggested figures for
requests and limits (see the sketch after this list).
• Some addons run as one copy per node, controlled by a DaemonSet: for example, a node-
level log aggregator. Similar to the case with horizontally-scaled addons, you may also
need to raise CPU or memory limits slightly.
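As a minimal sketch of the recommender-mode suggestion above, assuming the
VerticalPodAutoscaler CRDs and components are installed in your cluster; the target Deployment
name coredns is illustrative:
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: coredns-vpa          # illustrative name
  namespace: kube-system
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: coredns            # the addon whose resource usage you want analyzed
  updatePolicy:
    updateMode: "Off"        # recommender mode: compute suggestions, never evict or update Pods
You can then read the suggested requests from the object's status, for example with
kubectl describe vpa coredns-vpa -n kube-system.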
What's next
• VerticalPodAutoscaler is a custom resource that you can deploy into your cluster to help
you manage resource requests and limits for pods.
• Learn more about Vertical Pod Autoscaler and how you can use it to scale cluster
components, including cluster-critical addons.
• The cluster autoscaler integrates with a number of cloud providers to help you run the
right number of nodes for the level of resource demand in your cluster.
• The addon resizer helps you in resizing the addons automatically as your cluster's scale
changes.
Running in multiple zones
This page describes running Kubernetes across multiple zones.
Background
Kubernetes is designed so that a single Kubernetes cluster can run across multiple failure zones,
typically where these zones fit within a logical grouping called a region . Major cloud providers
define a region as a set of failure zones (also called availability zones ) that provide a consistent
set of features: within a region, each zone offers the same APIs and services.
Typical cloud architectures aim to minimize the chance that a failure in one zone also impairs
services in another zone.
Control plane behavior
All control plane components support running as a pool of interchangeable resources,
replicated per component.
When you deploy a cluster control plane, place replicas of control plane components across
multiple failure zones. If availability is an important concern, select at least three failure zones
and replicate each individual control plane component (API server, scheduler, etcd, cluster
controller manager) across at least three failure zones. If you are running a cloud controller
manager then you should also replicate this across all the failure zones you selected.
Note: Kubernetes does not provide cross-zone resilience for the API server endpoints. You can
use various techniques to improve availability for the cluster API server, including DNS round-
robin, SRV records, or a third-party load balancing solution with health checking.
Node behavior
Kubernetes automatically spreads the Pods for workload resources (such as Deployment or
StatefulSet ) across different nodes in a cluster. This spreading helps reduce the impact of
failures.
When nodes start up, the kubelet on each node automatically adds labels to the Node object
that represents that specific kubelet in the Kubernetes API. These labels can include zone
information.
If your cluster spans multiple zones or regions, you can use node labels in conjunction with Pod
topology spread constraints
to control how Pods are spread across your cluster among fault
domains: regions, zones, and even specific nodes. These hints enable the scheduler to place Pods
for better expected availability, reducing the risk that a correlated failure affects your whole
workload.
For example, you can set a constraint to make sure that the 3 replicas of a StatefulSet are all
running in different zones from each other, whenever that is feasible. You can define this
declaratively without explicitly defining which availability zones are in use for each workload.
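A minimal sketch of such a constraint, assuming Pods labeled app: my-app (the names and image
are illustrative); whenUnsatisfiable: ScheduleAnyway keeps the spread best-effort, matching
"whenever that is feasible":
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-app                                    # illustrative name
spec:
  serviceName: my-app
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      topologySpreadConstraints:
      - maxSkew: 1                                # zones may differ by at most one replica
        topologyKey: topology.kubernetes.io/zone  # spread across the kubelet-reported zone label
        whenUnsatisfiable: ScheduleAnyway         # prefer spreading, but still schedule if impossible
        labelSelector:
          matchLabels:
            app: my-app
      containers:
      - name: app
        image: registry.k8s.io/pause:3.9          # placeholder workload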
Distributing nodes across zones
Kubernetes' core does not create nodes for you; you need to do that yourself, or use a tool such
as the Cluster API to manage nodes on your behalf.
Using tools such as the Cluster API you can define sets of machines to run as worker nodes for
your cluster across multiple failure domains, and rules to automatically heal the cluster in case
of whole-zone service disruption.
Manual zone assignment for Pods
You can apply node selector constraints to Pods that you create, as well as to Pod templates in
workload resources such as Deployment, StatefulSet, or Job.
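For example, a minimal sketch of a Pod pinned to one zone through a node selector; the zone
value us-east-1a is hypothetical and must match a label actually present on your Nodes:
apiVersion: v1
kind: Pod
metadata:
  name: zonal-pod                              # illustrative name
spec:
  nodeSelector:
    topology.kubernetes.io/zone: us-east-1a    # hypothetical zone label value
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9           # placeholder workload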
Storage access for zones
When persistent volumes are created, Kubernetes automatically adds zone labels to any
PersistentVolumes that are linked to a specific zone. The scheduler then ensures, through its
NoVolumeZoneConflict predicate, that pods which claim a given PersistentVolume are only
placed into the same zone as that volume.
Please note that the method of adding zone labels can depend on your cloud provider and the
storage provisioner you’re using. Always refer to the specific documentation for your
environment to ensure correct configuration.
You can specify a StorageClass for PersistentVolumeClaims that specifies the failure domains
(zones) that the storage in that class may use. To learn about configuring a StorageClass that is
aware of failure domains or zones, see Allowed topologies .
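For illustration, a sketch of a zone-restricted StorageClass; the provisioner and zone names are
placeholders you would replace for your environment:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: zoned-standard                     # illustrative name
provisioner: example.com/csi-driver        # placeholder; use your environment's provisioner
volumeBindingMode: WaitForFirstConsumer    # delay binding until a Pod schedules, so zones match
allowedTopologies:
- matchLabelExpressions:
  - key: topology.kubernetes.io/zone
    values:
    - example-zone-a                       # hypothetical zone names
    - example-zone-b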
Networking
By itself, Kubernetes does not include zone-aware networking. You can use a network plugin to
configure cluster networking, and that network solution might have zone-specific elements. For
example, if your cloud provider supports Services with type=LoadBalancer , the load balancer
might only send traffic to Pods running in the same zone as the load balancer element
processing a given connection. Check your cloud provider's documentation for details.
For custom or on-premises deployments, similar considerations apply. Service and Ingress
behavior, including handling of different failure zones, does vary depending on exactly how
your cluster is set up.
Fault recovery
When you set up your cluster, you might also need to consider whether and how your setup
can restore service if all the failure zones in a region go off-line at the same time. For example,
do you rely on there being at least one node able to run Pods in a zone?
Make sure that any cluster-critical repair work does not rely on there being at least one healthy
node in your cluster. For example: if all nodes are unhealthy, you might need to run a repair Job
with a special toleration so that the repair can complete enough to bring at least one node into
service.
Kubernetes doesn't come with an answer for this challenge; however, it's something to consider.
What's next
To learn how the scheduler places Pods in a cluster, honoring the configured constraints, visit
Scheduling and Eviction .
Validate node setup
Node Conformance Test
Node conformance test is a containerized test framework that provides a system verification and
functionality test for a node. The test validates whether the node meets the minimum
requirements for Kubernetes; a node that passes the test is qualified to join a Kubernetes
cluster.
Node Prerequisite
To run node conformance test, a node must satisfy the same prerequisites as a standard
Kubernetes node. At a minimum, the node should have the following daemons installed:
• A CRI-compatible container runtime, such as Docker, containerd, or CRI-O
• kubelet
Running Node Conformance Test
To run the node conformance test, perform the following steps:
1. Work out the value of the --kubeconfig option for the kubelet; for example:
   --kubeconfig=/var/lib/kubelet/config.yaml. Because the test framework starts a local
   control plane to test the kubelet, use http://localhost:8080 as the URL of the API server.
   There are some other kubelet command line parameters you may want to use:
   • --cloud-provider: If you are using --cloud-provider=gce, you should remove the flag to run the test.
2. Run the node conformance test with command:
# $CONFIG_DIR is the pod manifest path of your kubelet.
# $LOG_DIR is the test output path.
sudo docker run -it --rm --privileged --net=host \
  -v /:/rootfs -v $CONFIG_DIR:$CONFIG_DIR -v $LOG_DIR:/var/result \
  registry.k8s.io/node-test:0.2
Running Node Conformance Test for Other Architectures
Kubernetes also provides node conformance test docker images for other architectures:
Arch Image
amd64 node-test-amd64
arm node-test-arm
arm64 node-test-arm64
Running Selected Test
To run specific tests, overwrite the environment variable FOCUS with the regular expression of
tests you want to run.
# Only run the MirrorPod test
sudo docker run -it --rm --privileged --net=host \
  -v /:/rootfs:ro -v $CONFIG_DIR:$CONFIG_DIR -v $LOG_DIR:/var/result \
  -e FOCUS=MirrorPod \
  registry.k8s.io/node-test:0.2
To skip specific tests, overwrite the environment variable SKIP with the regular expression of
tests you want to skip.
# Run all conformance tests but skip the MirrorPod test
sudo docker run -it --rm --privileged --net=host \
  -v /:/rootfs:ro -v $CONFIG_DIR:$CONFIG_DIR -v $LOG_DIR:/var/result \
  -e SKIP=MirrorPod \
  registry.k8s.io/node-test:0.2
Node conformance test is a containerized version of node e2e test . By default, it runs all
conformance tests.
Theoretically, you can run any node e2e test if you configure the container and mount required
volumes properly. But it is strongly recommended to only run conformance tests, because
running non-conformance tests requires much more complex configuration.
Caveats
• The test leaves some docker images on the node, including the node conformance test
image and images of containers used in the functionality test.
• The test leaves dead containers on the node. These containers are created during the
functionality test.
Enforcing Pod Security Standards
This page provides an overview of best practices when it comes to enforcing Pod Security
Standards .
Using the built-in Pod Security Admission Controller
FEATURE STATE: Kubernetes v1.25 [stable]
The Pod Security Admission Controller intends to replace the deprecated PodSecurityPolicies.
Configure all cluster namespaces
Namespaces that lack any configuration at all should be considered significant gaps in your
cluster security model. We recommend taking the time to analyze the types of workloads
occurring in each namespace, and by referencing the Pod Security Standards, decide on an
appropriate level for each of them. Unlabeled namespaces should only indicate that they've yet
to be evaluated.
In the scenario that all workloads in all namespaces have the same security requirements, we
provide an example below that illustrates how the PodSecurity labels can be applied in bulk.
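As one hedged sketch of such bulk application, you could label every namespace at once with
kubectl; the audit mode and baseline level here are only an example starting point:
# Apply (or overwrite) an audit-mode baseline label on all namespaces.
kubectl label --overwrite namespaces --all \
  pod-security.kubernetes.io/audit=baseline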
Embrace the principle of least privilege
In an ideal world, every pod in every namespace would meet the requirements of the restricted
policy. However, this is neither possible nor practical, as some workloads will require elevated
privileges for legitimate reasons.
• Namespaces allowing privileged workloads should establish and enforce appropriate
access controls (see the sketch after this list).
• For workloads running in those permissive namespaces, maintain documentation about
their unique security requirements. If at all possible, consider how those requirements
could be further constrained.
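One way to keep such exceptions explicit is the admission controller's own configuration file,
passed to the API server via --admission-control-config-file. A minimal sketch, assuming a single
permissive namespace named privileged-infra (a hypothetical name):
apiVersion: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
plugins:
- name: PodSecurity
  configuration:
    apiVersion: pod-security.admission.config.k8s.io/v1
    kind: PodSecurityConfiguration
    defaults:
      enforce: "baseline"               # cluster-wide default level
      enforce-version: "latest"
    exemptions:
      usernames: []
      runtimeClasses: []
      namespaces: ["privileged-infra"]  # hypothetical namespace exempted for documented reasons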
Adopt a multi-mode strategy
The audit and warn modes of the Pod Security Standards admission controller make it easy to
collect important security insights about your pods without breaking existing workloads.
It is good practice to enable these modes for all namespaces, setting them to the desired level
and version you would eventually like to enforce . The warnings and audit annotations
generated in this phase can guide you toward that state. If you expect workload authors to
make changes to fit within the desired level, enable the warn mode. If you expect to use audit
logs to monitor/drive changes to fit within the desired level, enable the audit mode.
When you have the enforce mode set to your desired value, these modes can still be useful in a
few different ways:
• By setting warn to the same level as enforce, clients will receive warnings when
attempting to create Pods (or resources that have Pod templates) that do not pass
validation. This will help them update those resources to become compliant.
• In Namespaces that pin enforce to a specific non-latest version, setting the audit and
warn modes to the same level as enforce, but to the latest version, gives visibility into
settings that were allowed by previous versions but are not allowed per current best
practices (see the sketch after this list).
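Putting those ideas together, a sketch of namespace labels that pin enforce to a specific version
while warning and auditing against the latest one; the namespace name and baseline level are
illustrative:
apiVersion: v1
kind: Namespace
metadata:
  name: example-app                                     # illustrative name
  labels:
    pod-security.kubernetes.io/enforce: baseline
    pod-security.kubernetes.io/enforce-version: v1.29   # pinned version
    pod-security.kubernetes.io/warn: baseline
    pod-security.kubernetes.io/warn-version: latest     # surface future incompatibilities
    pod-security.kubernetes.io/audit: baseline
    pod-security.kubernetes.io/audit-version: latest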
Third-party alternatives
Note: This section links to third party projects that provide functionality required by
Kubernetes. The Kubernetes project authors aren't responsible for these projects, which are
listed alphabetically. To add a project to this list, read the content guide before submitting a
change. More information.
Other alternatives for enforcing security profiles are being developed in the Kubernetes
ecosystem:
• Kubewarden
• Kyverno
• OPA Gatekeeper
The decision to go with a built-in solution (e.g. PodSecurity admission controller) versus a third-
party tool is entirely dependent on your own situation. When evaluating any solution, trust of
your supply chain is crucial. Ultimately, using any of the aforementioned approaches will be
better than doing nothing.
PKI certificates and requirements
Kubernetes requires PKI certificates for authentication over TLS. If you install Kubernetes with
kubeadm , the certificates that your cluster requires are automatically generated. You can also
generate your own certificates -- for example, to keep your private keys more secure by not
storing them on the API server. This page explains the certificates that your cluster requires.
How certificates are used by your cluster
Kubernetes requires PKI for the following operations:
• Client certificates for the kubelet to authenticate to the API server
• Kubelet server certificates for the API server to talk to the kubelets
• Server certificate for the API server endpoint
• Client certificates for administrators of the cluster to authenticate to the API server
• Client certificates for the API server to talk to the kubelets
• Client certificate for the API server to talk to etcd
• Client certificate/kubeconfig for the controller manager to talk to the API server
• Client certificate/kubeconfig for the scheduler to talk to the API server
• Client and server certificates for the front-proxy
Note: front-proxy certificates are required only if you run kube-proxy to support an extension
API server .
etcd also implements mutual TLS to authenticate clients and peers.
Where certificates are stored
If you install Kubernetes with kubeadm, most certificates are stored in /etc/kubernetes/pki . All
paths in this documentation are relative to that directory, with the exception of user account
certificates which kubeadm places in /etc/kubernetes.
Configure certificates manually
If you don't want kubeadm to generate the required certificates, you can create them using a
single root CA or by providing all certificates. See Certificates for details on creating your own
certificate authority. See Certificate Management with kubeadm for more on managing
certificates.
Single root CA
You can create a single root CA, controlled by an administrator. This root CA can then create
multiple intermediate CAs, and delegate all further creation to Kubernetes itself.
Required CAs:
path                    Default CN                 description
ca.crt,key              kubernetes-ca              Kubernetes general CA
etcd/ca.crt,key         etcd-ca                    For all etcd-related functions
front-proxy-ca.crt,key  kubernetes-front-proxy-ca  For the front-end proxy
On top of the above CAs, it is also necessary to get a public/private key pair for service account
management, sa.key and sa.pub. The following example illustrates the CA key and certificate
files shown in the previous table:
/etc/kubernetes/pki/ca.crt
/etc/kubernetes/pki/ca.key
/etc/kubernetes/pki/etcd/ca.crt
/etc/kubernetes/pki/etcd/ca.key
/etc/kubernetes/pki/front-proxy-ca.crt
/etc/kubernetes/pki/front-proxy-ca.key
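For illustration, a minimal sketch of creating one such CA and the service account key pair with
openssl; the key sizes and lifetimes are example choices, not requirements:
# Kubernetes general CA (repeat with an adjusted CN and paths for the etcd and front-proxy CAs)
openssl genrsa -out ca.key 2048
openssl req -x509 -new -nodes -key ca.key -subj "/CN=kubernetes-ca" -days 3650 -out ca.crt
# Service account signing key pair (sa.key / sa.pub)
openssl genrsa -out sa.key 2048
openssl rsa -in sa.key -pubout -out sa.pub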
All certificates
If you don't wish to copy the CA private keys to your cluster, you can generate all certificates
yourself.
Required certificates:
Default CN                     Parent CA                  O (in Subject)  kind            hosts (SAN)
kube-etcd                      etcd-ca                                    server, client  <hostname>, <Host_IP>, localhost, 127.0.0.1
kube-etcd-peer                 etcd-ca                                    server, client  <hostname>, <Host_IP>, localhost, 127.0.0.1
kube-etcd-healthcheck-client   etcd-ca                                    client
kube-apiserver-etcd-client     etcd-ca                                    client
kube-apiserver                 kubernetes-ca                              server          <hostname>, <Host_IP>, <advertise_IP>, [1]
kube-apiserver-kubelet-client  kubernetes-ca              system:masters  client
front-proxy-client             kubernetes-front-proxy-ca                  client
Note: Instead of using the super-user group system:masters for kube-apiserver-kubelet-client a
less privileged group can be used. kubeadm uses the kubeadm:cluster-admins group for that
purpose.
[1]: any other IP or DNS name you contact your cluster on (as used by kubeadm the load
balancer stable IP and/or DNS name, kubernetes , kubernetes.default , kubernetes.default.svc ,
kubernetes.default.svc.cluster , kubernetes.default.svc.cluster.local )
where kind maps to one or more of the x509 key usage, which is also documented in
the .spec.usages of a CertificateSigningRequest type:
kind Key usage
server digital signature, key encipherment, server auth
client digital signature, key encipherment, client auth
Note: Hosts/SAN listed above are the recommended ones for getting a working cluster; if
required by a specific setup, it is possible to add additional SANs on all the server certificates.
Note:
For kubeadm users only:
• The scenario where you are copying to your cluster CA certificates without private keys
is referred to as external CA in the kubeadm documentation.
• If you are comparing the above list with a kubeadm generated PKI, please be aware that
kube-etcd, kube-etcd-peer and kube-etcd-healthcheck-client certificates are not generated
in case of external etcd.
Certificate paths
Certificates should be placed in a recommended path (as used by kubeadm). Paths should be
specified using the given argument regardless of location.
Default CN                     recommended key path          recommended cert path         command                  key argument                cert argument
etcd-ca                        etcd/ca.key                   etcd/ca.crt                   kube-apiserver                                       --etcd-cafile
kube-apiserver-etcd-client     apiserver-etcd-client.key     apiserver-etcd-client.crt     kube-apiserver           --etcd-keyfile              --etcd-certfile
kubernetes-ca                  ca.key                        ca.crt                        kube-apiserver                                       --client-ca-file
kubernetes-ca                  ca.key                        ca.crt                        kube-controller-manager  --cluster-signing-key-file  --client-ca-file, --root-ca-file, --cluster-signing-cert-file
kube-apiserver                 apiserver.key                 apiserver.crt                 kube-apiserver           --tls-private-key-file      --tls-cert-file
kube-apiserver-kubelet-client  apiserver-kubelet-client.key  apiserver-kubelet-client.crt  kube-apiserver           --kubelet-client-key        --kubelet-client-certificate
front-proxy-ca                 front-proxy-ca.key            front-proxy-ca.crt            kube-apiserver                                       --requestheader-client-ca-file
front-proxy-ca                 front-proxy-ca.key            front-proxy-ca.crt            kube-controller-manager                              --requestheader-client-ca-file
front-proxy-client             front-proxy-client.key        front-proxy-client.crt        kube-apiserver           --proxy-client-key-file     --proxy-client-cert-file
etcd-ca                        etcd/ca.key                   etcd/ca.crt                   etcd                                                 --trusted-ca-file, --peer-trusted-ca-file
kube-etcd                      etcd/server.key               etcd/server.crt               etcd                     --key-file                  --cert-file
kube-etcd-peer                 etcd/peer.key                 etcd/peer.crt                 etcd                     --peer-key-file             --peer-cert-file
etcd-ca                                                      etcd/ca.crt                   etcdctl                                              --cacert
kube-etcd-healthcheck-client   etcd/healthcheck-client.key   etcd/healthcheck-client.crt   etcdctl                  --key                       --cert
Same considerations apply for the service account key pair:
private key path  public key path  command                  argument
sa.key                             kube-controller-manager  --service-account-private-key-file
                  sa.pub           kube-apiserver           --service-account-key-file
The following example illustrates the file paths from the previous tables you need to provide if
you are generating all of your own keys and certificates:
/etc/kubernetes/pki/etcd/ca.key
/etc/kubernetes/pki/etcd/ca.crt
/etc/kubernetes/pki/apiserver-etcd-client.key
/etc/kubernetes/pki/apiserver-etcd-client.crt
/etc/kubernetes/pki/ca.key
/etc/kubernetes/pki/ca.crt
/etc/kubernetes/pki/apiserver.key
/etc/kubernetes/pki/apiserver.crt
/etc/kubernetes/pki/apiserver-kubelet-client.key
/etc/kubernetes/pki/apiserver-kubelet-client.crt
/etc/kubernetes/pki/front-proxy-ca.key
/etc/kubernetes/pki/front-proxy-ca.crt
/etc/kubernetes/pki/front-proxy-client.key
/etc/kubernetes/pki/front-proxy-client.crt
/etc/kubernetes/pki/etcd/server.key
/etc/kubernetes/pki/etcd/server.crt
/etc/kubernetes/pki/etcd/peer.key
/etc/kubernetes/pki/etcd/peer.crt
/etc/kubernetes/pki/etcd/healthcheck-client.key
/etc/kubernetes/pki/etcd/healthcheck-client.crt
/etc/kubernetes/pki/sa.key
/etc/kubernetes/pki/sa.pub
Configure certificates for user accounts
You must manually configure these administrator account and service accounts:
filename                 credential name             Default CN                          O (in Subject)
admin.conf               default-admin               kubernetes-admin                    <admin-group>
super-admin.conf         default-super-admin         kubernetes-super-admin              system:masters
kubelet.conf             default-auth                system:node:<nodeName> (see note)   system:nodes
controller-manager.conf  default-controller-manager  system:kube-controller-manager
scheduler.conf           default-scheduler           system:kube-scheduler
Note: The value of <nodeName> for kubelet.conf must match precisely the value of the node
name provided by the kubelet as it registers with the apiserver. For further details, read the
Node Authorization .
Note:
In the above example <admin-group> is implementation specific. Some tools sign the certificate
in the default admin.conf to be part of the system:masters group. system:masters is a break-
glass, super user group that can bypass the authorization layer of Kubernetes, such as RBAC.
Also, some tools do not generate a separate super-admin.conf with a certificate bound to this
super user group.
kubeadm generates two separate administrator certificates in kubeconfig files. One is in
admin.conf and has Subject: O = kubeadm:cluster-admins, CN = kubernetes-admin .
kubeadm:cluster-admins is a custom group bound to the cluster-admin ClusterRole. This file is
generated on all kubeadm managed control plane machines.
Another is in super-admin.conf that has Subject: O = system:masters, CN = kubernetes-super-
admin . This file is generated only on the node where kubeadm init was called.
1. For each config, generate an x509 cert/key pair with the given CN and O.
2. Run kubectl as follows for each config:
KUBECONFIG=<filename> kubectl config set-cluster default-cluster --server=https://<host ip>:6443 --certificate-authority <path-to-kubernetes-ca> --embed-certs
KUBECONFIG=<filename> kubectl config set-credentials <credential-name> --client-key <path-to-key>.pem --client-certificate <path-to-cert>.pem --embed-certs
KUBECONFIG=<filename> kubectl config set-context default-system --cluster default-cluster --user <credential-name>
KUBECONFIG=<filename> kubectl config use-context default-system
These files are used as follows:
filename                 command                  comment
admin.conf               kubectl                  Configures administrator user for the cluster
super-admin.conf         kubectl                  Configures super administrator user for the cluster
kubelet.conf             kubelet                  One required for each node in the cluster.
controller-manager.conf  kube-controller-manager  Must be added to manifest in manifests/kube-controller-manager.yaml
scheduler.conf           kube-scheduler           Must be added to manifest in manifests/kube-scheduler.yaml
The following files illustrate full paths to the files listed in the previous table:
/etc/kubernetes/admin.conf
/etc/kubernetes/super-admin.conf
/etc/kubernetes/kubelet.conf
/etc/kubernetes/controller-manager.conf
/etc/kubernetes/scheduler.conf
Tutorials
This section of the Kubernetes documentation contains tutorials. A tutorial shows how to
accomplish a goal that is larger than a single task. Typically a tutorial has several sections, each
of which has a sequence of steps. Before walking through each tutorial, you may want to
bookmark the Standardized Glossary page for later references.
Basics
Kubernetes Basics is an in-depth interactive tutorial that helps you understand the
Kubernetes system and try out some basic Kubernetes features.
Introduction to Kubernetes (edX)
Hello Minikube
Configuration
Example: Configuring a Java Microservice
Configuring Redis Using a ConfigMap
Stateless Applications
Exposing an External IP Address to Access an Application in a Cluster
Example: Deploying PHP Guestbook application with Redis
Stateful Applications
StatefulSet Basics
Example: WordPress and MySQL with Persistent Volumes
Example: Deploying Cassandra with Stateful Sets
Running ZooKeeper, A CP Distributed System
Services
Connecting Applications | 8,235 |
with Services
Using Source IP
Security
Apply Pod Security Standards at Cluster level
Apply Pod Security Standards at Namespace level
AppArmor
Seccomp
What's next
If you would like to write a tutorial, see Content Page Types for information about the tutorial
page type.
Hello Minikube
This tutorial shows you how to run a sample app on Kubernetes using minikube. The tutorial
provides a container image that uses NGINX to echo back all the requests.
Objectives
• Deploy a sample application to minikube.
• Run the app.
• View application logs.
Before you begin
This tutorial assumes that you have already set up minikube . See Step 1 in minikube start for
installation instructions.
Note: Only execute the instructions in Step 1, Installation . The rest is covered on this page.
You also need to install kubectl . See Install tools for installation instructions.
Create a minikube cluster
minikube start
Open the Dashboard
Open the Kubernetes dashboard. You can do this two different ways:
Launch a browser
URL copy and paste
Open a new terminal, and run:
# Start a new terminal, and leave this running.
minikube dashboard
Now, switch back to the terminal where you ran minikube start .
Note:
The dashboard command enables the dashboard add-on and opens the proxy in the default web
browser. You can create Kubernetes resources on the dashboard such as Deployment and
Service.
To find out how to avoid directly invoking the browser from the terminal and get a URL for the
web dashboard, see the "URL copy and paste" tab.
By default, the dashboard is only accessible from within the internal Kubernetes virtual
network. The dashboard command creates a temporary proxy to make the dashboard accessible
from outside the Kubernetes virtual network.
To stop the proxy, run Ctrl+C to exit the process. After the command exits, the dashboard
remains running in the Kubernetes cluster. You can run the dashboard command again to create
another proxy to access the dashboard.
If you don't want minikube to open a web browser for you, run the dashboard subcommand
with the --url flag. minikube outputs a URL that you can open in the browser you prefer.
Open a new terminal, and run:
# Start a new terminal, and leave this running.
minikube dashboard --url
Now, you can use this URL and switch back to the terminal where you ran minikube start .
Create a Deployment
A Kubernetes Pod is a group of one or more Containers, tied together for the purposes of
administration and networking. The Pod in this tutorial has only one Container. A Kubernetes
Deployment checks on the health of your Pod and restarts the Pod's Container if it terminates.
Deployments are the recommended way to manage the creation and scaling of Pods.
1. Use the kubectl create command to create a Deployment that manages a Pod. The Pod
runs a Container based on the provided Docker image.
# Run a test container image that includes a webserver
kubectl create deployment hello-node --image=registry.k8s.io/e2e-test-images/agnhost:2.39 -- /agnhost netexec --http-port=8080
2. View the Deployment:
kubectl get deployments
The output is similar to:
NAME READY UP-TO-DATE AVAILABLE AGE
hello-node 1/1 1 1 1m
3. View the Pod:
kubectl get pods
The output is similar to:
NAME READY STATUS RESTARTS AGE
hello-node-5f76cf6ccf-br9b5 1/1 Running 0 1m
4. View cluster events:
kubectl get events
5. View the kubectl configuration:
kubectl config view
6. View application logs for a container in a pod.
kubectl logs hello-node-5f76cf6ccf-br9b5
The output is similar to:
I0911 09:19:26.677397 1 log.go:195] Started HTTP server on port 8080
I0911 09:19:26.677586 1 log.go:195] Started UDP server on port 8081
Note: For more information about kubectl commands, see the kubectl overview .
Create a Service
By default, the Pod is only accessible by its internal IP address within the Kubernetes cluster. To
make the hello-node Container accessible from outside the Kubernetes virtual network, you
have to expose the Pod as a Kubernetes Service .
1. Expose the Pod to the public internet using the kubectl expose command:
kubectl expose deployment hello-node --type=LoadBalancer --port=8080
The --type=LoadBalancer flag indicates that you want to expose your Service outside of
the cluster.
The application code inside the test image only listens on TCP port 8080. If you used
kubectl expose to expose a different port, clients could not connect to that other port.
2. View the Service you created:
kubectl get services
The output is similar to:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hello-node LoadBalancer 10.108.144.78 <pending> 8080:30369/TCP 21s
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 23m
On cloud providers that support load balancers, an external IP address would be
provisioned to access the Service. On minikube, the LoadBalancer type makes the Service
accessible through the minikube service command.
3. Run the following command:
minikube service hello-node
This opens up a browser window that serves your app and shows the app's response.
Enable addons
The minikube tool includes a set of built-in addons that can be enabled, disabled and opened in
the local Kubernetes environment.
1. List the currently supported addons:
minikube addons list
The output is similar to:
addon-manager: enabled
dashboard: enabled
default-storageclass: enabled
efk: disabled
freshpod: disabled
gvisor: disabled
helm-tiller: disabled
ingress: disabled
ingress-dns: disabled
logviewer: disabled
metrics-server: disabled
nvidia-driver-installer: disabled
nvidia-gpu-device-plugin: disabled
registry: disabled
registry-creds: disabled
storage-provisioner: enabled
storage-provisioner-gluster: disabled
2. Enable an addon, for example, metrics-server:
minikube addons enable metrics-server
The output is similar to:
The 'metrics-server' addon is enabled
3. View the Pod and Service you created by installing that addon:
kubectl get pod,svc -n kube-system
The output is similar to:
NAME READY STATUS RESTARTS AGE
pod/coredns-5644d7b6d9-mh9ll 1/1 Running 0 34m
pod/coredns-5644d7b6d9-pqd2t 1/1 Running 0 34m
pod/metrics-server-67fb648c5            1/1     Running   0          26s
pod/etcd-minikube 1/1 Running 0 34m
pod/influxdb-grafana-b29w8 2/2 Running 0 26s
pod/kube-addon-manager-minikube 1/1 Running 0 34m
pod/kube-apiserver-minikube 1/1 Running 0 34m
pod/kube-controller-manager-minikube 1/1 Running 0 34m
pod/kube-proxy-rnlps 1/1 Running 0 34m
pod/kube-scheduler-minikube 1/1 Running 0 34m
pod/storage-provisioner 1/1 Running 0 34m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/metrics-server ClusterIP 10.96.241.45 <none> 80/TCP 26s
service/kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP 34m
service/monitoring-grafana    NodePort    10.99.24.54     <none>   80:30002/TCP        26s
service/monitoring-influxdb   ClusterIP   10.111.169.94   <none>   8083/TCP,8086/TCP   26s
4. Check the output from metrics-server:
kubectl top pods
The output is similar to:
NAME CPU(cores) MEMORY(bytes)
hello-node-ccf4b9788-4jn97 1m 6Mi
If you see the following message, wait, and try again:
error: Metrics API not available
5. Disable metrics-server:
minikube addons disable metrics-server
The output is similar to:
metrics-server was successfully disabled
Clean up
Now you can clean up the resources you created in your cluster:
kubectl delete service hello-node
kubectl delete deployment hello-node
Stop the Minikube cluster
minikube stop
Optionally, delete the Minikube VM:
# Optional
minikube delete
If you want to use minikube again to learn more about Kubernetes, you don't need to delete it.
Conclusion
This page covered the basic aspects to get a minikube cluster up and running. You are now
ready to deploy applications.
What's next
• Tutorial to deploy your first app on Kubernetes with kubectl.
• Learn more about Deployment objects.
• Learn more about Deploying applications.
• Learn more about Service objects.
Learn Kubernetes Basics
Kubernetes Basics
This tutorial provides a walkthrough of the basics of the Kubernetes cluster orchestration
system. Each module contains some background information on major Kubernetes features and
concepts, and a tutorial for you to follow along.
Using the tutorials, you can learn to:
• Deploy a containerized application on a cluster.
• Scale the deployment.
• Update the containerized application with a new software version.
• Debug the containerized application.
What can Kubernetes do for you?
With modern web services, users expect applications to be available 24/7, and developers expect
to deploy new versions of those applications several times a day. Containerization helps
package software to serve these goals, enabling applications to be released and updated without
downtime. Kubernetes helps you make sure those containerized applications run where and
when you want, and helps them find the resources and tools they need to work. Kubernetes is a
production-ready, open source platform designed with Google's accumulated experience in
container orchestration, combined with best-of-breed ideas from the community.
Kubernetes Basics Modules
1. Create a Kubernetes cluster
2. Deploy an app
3. Explore your app
4. Expose your app publicly
5. Scale up your app
6. Update your app
Create a Cluster
Learn about Kubernetes clusters and create a simple cluster using Minikube.
Using Minikube to Create a Cluster
Learn what a Kubernetes cluster is. Learn what Minikube is. Start a Kubernetes cluster.
Objectives
• Learn what a Kubernetes cluster is.
• Learn what Minikube is.
• Start a Kubernetes cluster on your computer.
Kubernetes Clusters
Kubernetes coordinates a highly available cluster of computers that are connected to
work as a single unit. The abstractions in Kubernetes allow you to deploy containerized
applications to a cluster without tying them specifically to individual machines. To make use of
this new model of deployment, applications need to be packaged in a way that decouples them
from individual hosts: they need to be containerized. Containerized applications are more
flexible and available than in past deployment models, where applications were installed
directly onto specific machines as packages deeply integrated into the host. Kubernetes
automates the distribution and scheduling of application containers across a cluster in
a more efficient way. Kubernetes is an open-source platform and is production-ready.
A Kubernetes cluster consists of two types of resources:
• The Control Plane coordinates the cluster
• Nodes are the workers that run applications
Summary:
• Kubernetes cluster
• Minikube
Kubernetes is a production-grade, open-source platform that orchestrates the placement
(scheduling) and execution of application containers within and across computer clusters.
Cluster Diagram
The Control Plane is responsible for managing the cluster. The Control Plane coordinates
all activities in your cluster, such as scheduling applications, maintaining applications' desired
state, scaling applications, and rolling out new updates.
A node is a VM or a physical computer that serves as a worker machine in a
Kubernetes cluster. Each node has a Kubelet, which is an agent for managing the node and
communicating with the Kubernetes control plane. The node should also have tools for
handling container operations, such as containerd or CRI-O . A Kubernetes cluster that handles
production traffic should have a minimum of three nodes because if one node goes down, both
an etcd member and a control plane instance are lost, and redundancy is compromised. You can
mitigate this risk by adding more control plane nodes.
Control Planes manage the cluster and the nodes that are used to host the running applications.
When you deploy applications on Kubernetes, you tell the control plane to start the application
containers. The control plane schedules the containers to run on the cluster's nodes. Node-
level components, such as the kubelet, communicate with the control plane using the
Kubernetes API , which the control plane exposes. End users can also use the Kubernetes API
directly to interact with the cluster.
A Kubernetes cluster can be deployed on either physical or virtual machines. To get started
with Kubernetes development, you can use Minikube. Minikube is a lightweight Kubernetes
implementation that creates a VM on your local machine and deploys a simple cluster
containing only one node. Minikube is available for Linux, macOS, and Windows systems. The
Minikube CLI provides basic bootstrapping operations for working with your cluster, including
start, stop, status, and delete.
Now that you know more about what Kubernetes is, visit Hello Minikube to try this out on
your computer.
Deploy an App
Using kubectl to Create a Deployment
Learn about application Deployments. Deploy your first app on Kubernetes with kubectl.
Objectives
• Learn about application Deployments.
• Deploy your first app on Kubernetes with kubectl.
Kubernetes Deployments
Note: This tutorial uses a container that requires the AMD64 architecture. If you are using
minikube on a computer with a different CPU architecture, you could try using minikube with
a driver that can emulate AMD64. For example, the Docker Desktop driver can do this.
Once you have a running Kubernetes cluster , you can deploy your containerized applications
on top of it. To do so, you create a Kubernetes Deployment . The Deployment instructs
Kubernetes how to create and update instances of your application. Once you've created a
Deployment, the Kubernetes control plane schedules the application instances included in that
Deployment to run on individual Nodes in the cluster.
Once the application instances are created, a Kubernetes Deployment controller continuously
monitors those instances. If the Node hosting an instance goes down or is deleted, the
Deployment controller replaces the instance with an instance on another Node in the cluster.
This provides a self-healing mechanism to address machine failure or maintenance.
In a pre-orchestration world, installation scripts would often be used to start applications, but
they did not allow recovery from machine failure. By both creating your application instances
and keeping them running across Nodes, Kubernetes Deployments provide a fundamentally
different approach to application management.
Summary:
• Deployments
• Kubectl
A Deployment is responsible for creating and updating instances of your application
Deploying your first app on Kubernetes
You can create and manage a Deployment by using the Kubernetes command line interface,
kubectl . Kubectl uses the Kubernetes API to interact with the cluster. In this module, you'll
learn the most common kubectl commands needed to create Deployments that run your
applications on a Kubernetes cluster.
When you create a Deployment, you'll need to specify the container image for your application
and the number of replicas that you want to run. You can change that information later by
updating your Deployment; Modules 5 and 6 of the bootcamp discuss how you can scale and
update your Deployments.
Applications need to be packaged into one of the supported container formats in order to be
deployed on Kubernetes.
For your first Deployment, you'll use a hello-node application packaged in a Docker container
that uses NGINX to echo back all the requests. (If you didn't already try creating a hello-node
application and deploying it using a container, you can do that first by following the
instructions from the Hello Minikube tutorial ).
You will need to have installed kubectl as well. If you need to install it, visit install tools .
Now that you know what Deployments are, let's deploy our first app!
kubectl basics
The common format of a kubectl command is: kubectl action resource
This performs the specified action (like create , describe or delete ) on the specified resource (like
node or deployment ). You can use --help after the subcommand to get additional info about
possible parameters (for example: kubectl get nodes --help ).
Check that kubectl is configured to talk to your cluster, by running the kubectl version
command.
Check that kubectl is installed and you can see both the client and the server versions.
To view the nodes in the cluster, run the kubectl get nodes command.
You see the available nodes. Later, Kubernetes will choose where to deploy our application
based on Node available resources.
Deploy an app
Let’s deploy our first app on Kubernetes with the kubectl create deployment command. We
need to provide the deployment name and app image location (include the full repository url
for images hosted outside Docker Hub).
kubectl create deployment kubernetes-bootcamp --image=gcr.io/google-samples/kubernetes-bootcamp:v1
Great! You just deployed your first application by creating a deployment. This performed a few
things for you:
• searched for a suitable node where an instance of the application could be run (we have
only 1 available node)
• scheduled the application to run on that Node
• configured the cluster to reschedule the instance on a new Node when needed
To list your deployments use the kubectl get deployments command:
kubectl get deployments
We see that there is 1 deployment running a single instance of your app. The instance is
running inside a container on your node.
View the app
Pods that are running inside Kubernetes are running on a private, isolated network. By default
they are visible from other pods and services within the same Kubernetes cluster, but not
outside that network. When we use kubectl , we're interacting through an API endpoint to
communicate with our application.
We will cover other options on how to expose your application outside the Kubernetes cluster
later, in Module 4. Also, as a basic tutorial, we're not explaining what Pods are in any detail here;
it will be covered in later topics.
The kubectl proxy command can create a proxy that will forward communications into the
cluster-wide, private network. The proxy can be terminated by pressing control-C and won't
show any output while it's running.
You need to open a second terminal window to run the proxy.
kubectl proxy
We now have a connection between our host (the terminal) and the Kubernetes cluster. The
proxy enables direct access to the API from these terminals.
You can see all those APIs hosted through the proxy endpoint. For example, we can query the
version directly through the API using the curl command:
curl http://localhost:8001/version
Note: If port 8001 is not accessible, ensure that the kubectl proxy that you started above is
running in the second terminal.
The API server will automatically create an endpoint for each pod, based on the pod name, that
is also accessible through the proxy.
First we need to get the Pod name, and we'll store it in the environment variable POD_NAME :
export POD_NAME=$(kubectl get pods -o go-template --template '{{range .items}}
{{.metadata.name}}{{"\n"}}{{end}}')
echo Name of the Pod: $POD_NAME
You can access the Pod through the proxied API, by running:
curl http://localhost:8001/api/v1/namespaces/default/pods/$POD_NAME/
In order for the new Deployment to be accessible without using the proxy, a Service is required
which will be explained in Module 4 .
Once you're ready, move on to Viewing Pods and Nodes .
Explore Your App
Viewing Pods and Nodes
Learn how to troubleshoot Kubernetes applications using kubectl get, kubectl describe, kubectl
logs and kubectl exec.
Objectives
• Learn about Kubernetes Pods.
• Learn about Kubernetes Nodes.
• Troubleshoot deployed applications.
Kubernetes Pods
When you created a Deployment in Module 2, Kubernetes created a Pod to host your
application instance. A Pod is a Kubernetes abstraction that represents a group of one or more
application containers (such as Docker), and some shared resources for those containers. Those
resources include:
• Shared storage, as Volumes
• Networking, as a unique cluster IP address
• Information about how to run each container, such as the container image version or
specific ports to use
A Pod models an application-specific "logical host" and can contain different application
containers which are relatively tightly coupled. For example, a Pod might include both the
container with your Node.js app as well as a different container that feeds the data to be
published by the Node.js webserver. The containers in a Pod share an IP Address and port
space, are always co-located and co-scheduled, and run in a shared context on the same Node.
Pods are the atomic unit on the Kubernetes platform. When we create a Deployment on
Kubernetes, that Deployment creates Pods with containers inside them (as opposed to creating
containers directly). Each Pod is tied to the Node where it is scheduled, and remains there until
termination (according to restart policy) or deletion. In case of a Node failure, identical Pods are
scheduled on other available Nodes in the cluster.
Summary:
• Pods
• Nodes
• Kubectl main commands
A Pod is a group of one or more application containers (such as Docker) and includes shared
storage (volumes), IP address and information about how to run them.
Pods overview
Nodes
A Pod always runs on a Node . A Node is a worker machine in Kubernetes and may be either a
virtual or a physical machine, depending on the cluster. Each Node is managed by the control
plane. A Node can have multiple pods, and the Kubernetes control plane automatically handles
scheduling the pods across the Nodes in the cluster. The control plane's automatic scheduling
takes into account the available resources on each Node.
Every Kubernetes Node runs at least:
• Kubelet, a process responsible for communication between the Kubernetes control plane
and the Node; it manages the Pods and the containers running on a machine.
• A container runtime (like Docker) responsible for pulling the container image from a
registry, unpacking the container, and running the application.
Containers should only be scheduled together in a single Pod if they are tightly coupled and need to
share resources such as disk.
Node overview
Troubleshooting with kubectl
In Module 2, you used the kubectl command-line interface. You'll continue to use it in Module 3
to get information about deployed applications and their environments. The most common
operations can be done with the following kubectl subcommands:
• kubectl get - list resources
• kubectl describe - show detailed information about a resource
• kubectl logs - print the logs from a container in a pod
• kubectl exec - execute a command on a container in a pod
You can use these commands to see when applications were deployed, what their current
statuses are, where they are running and what their configurations are.
Now that we know more about our cluster components and the command line, let's explore our
application.
A node is a worker machine in Kubernetes and may be a VM or physical machine, depending on
the cluster. Multiple Pods can run on one Node.
Check application configuration
Let's verify that the application we deployed in the previous scenario is running. We'll use the
kubectl get command and look for existing Pods:
kubectl get pods
If no pods are running, please wait a couple of seconds and list the Pods again. You can
continue once you see one Pod running.
Next, to view what containers are inside that Pod and what images are used to build those
containers we run the kubectl describe pods command:
kubectl describe pods
We see here details about the Pod’s container: IP address, the ports used and a list of events
related to the lifecycle of the Pod.
The output of the describe subcommand is extensive and covers some concepts that we didn’t
explain yet, but don’t worry, they will become familiar by the end of this bootcamp.
Note: the describe subcommand can be used to get detailed information about most of the
Kubernetes primitives, including Nodes, Pods, and Deployments. The describe output is designed to
be human readable, not to be scripted against.
Show the app in the terminal
Recall that Pods are running in an isolated, private network - so we need to proxy access to
them so we can debug and interact with them. To do this, we'll use the kubectl proxy command
to run a proxy in a second terminal . Open a new terminal window, and in that new terminal,
run:
kubectl proxy
Now again, we'll get the Pod name and query that pod directly through the proxy. To get the
Pod name and store it in the POD_NAME environment variable:
export POD_NAME="$(kubectl get pods -o go-template --template '{{range .items}}
{{.metadata.name}}{{"\n"}}{{end}}')"
echo Name of the Pod: $POD_NAME
To see the output of our application, run a curl request:
curl http://localhost:8001/api/v1/namespaces/default/pods/$POD_NAME:8080/proxy/
The URL is the route to the API of the Pod.
View the container logs
Anything that the application would normally send to standard output becomes logs for the
container within the Pod. We can retrieve these logs using the kubectl logs command:
kubectl logs "$POD_NAME"
Note: We don't need to specify the container name, because we only have one container inside the
pod.
Executing command on the container
We can execute commands directly on the container once the Pod is up and running. For this,
we use the exec subcommand and use the name of the Pod as a parameter. Let’s list the
environment variables:
kubectl exec "$POD_NAME" -- env
Again, it's worth mentioning that the name of the container itself can be omitted since we only
have a single container in the Pod.
Next let’s start a bash session in the Pod’s container:
kubectl exec -ti "$POD_NAME" -- bash
We now have an open console on the container where we run our NodeJS application. The
source code of the app is in the server.js file:
cat server.js
You can check that the application is up by running a curl command:
curl http://localhost:8080
Note: here we used localhost because we executed the command inside the NodeJS Pod. If you
cannot connect to localhost:8080, check to make sure you have run the kubectl exec command and
are launching the command from within the Pod.
To close your container connection, type exit.
Once you're ready, move on to Using A Service To Expose Your App .
Expose Your App Publicly
Using a Service to Expose Your App
Learn about a Service in Kubernetes. Understand how labels and selectors relate to a Service.
Expose an application outside a Kubernetes cluster.
Objectives
• Learn about a Service in Kubernetes
• Understand how labels and selectors relate to a Service
• Expose an application outside a Kubernetes cluster using a Service
Overview of Kubernetes Services
Kubernetes Pods are mortal. Pods have a lifecycle . When a worker node dies, the Pods running
on the Node are also lost. A ReplicaSet might then dynamically drive the cluster back to the
desired state via the creation of new Pods to keep your application running. As another
example, consider an image-processing backend with 3 replicas. Those replicas are
exchangeable; the front-end system should not care about backend replicas or even if a Pod is
lost and recreated. That said, each Pod in a Kubernetes cluster has a unique IP address, even
Pods on the same Node, so there needs to be a way of automatically reconciling changes among
Pods so that your applications continue to function.
A Service in Kubernetes is an abstraction which defines a logical set of Pods and a policy by
which to access them. Services enable a loose coupling between dependent Pods. A Service is
defined using YAML or JSON, like all Kubernetes object manifests. The set of Pods targeted by a
Service is usually determined by a label selector (see below for why you might want a Service
without including a selector in the spec).
Although each Pod has a unique IP address, those IPs are not exposed outside the cluster
without a Service. Services allow your applications to receive traffic. Services can be exposed in
different ways by specifying a type in the spec of the Service (a minimal manifest sketch follows the list below):
ClusterIP (default) - Exposes the Service on an internal IP in the cluster. This type makes
the Service only reachable from within the cluster.
NodePort - Exposes the Service on the same port of each selected Node in the cluster
using NAT. Makes a Service accessible from outside the cluster using
<NodeIP>:<NodePort> . Superset of ClusterIP.
LoadBalancer - Creates an external load balancer in the current cloud (if supported) | 8,273 |
and
assigns a fixed, external IP to the Service. Superset of NodePort.
ExternalName - Maps the Service to the contents of the externalName field (e.g.
foo.bar.example.com ), by returning a CNAME record with its value. No proxying of any
kind is set up. This type requires v1.7 or higher of kube-dns , or CoreDNS version 0.0.8 or
higher.
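As a sketch of what a NodePort Service manifest looks like (the name, label and port below mirror this tutorial's kubernetes-bootcamp app; adjust them for your own workload):
apiVersion: v1
kind: Service
metadata:
  name: kubernetes-bootcamp
spec:
  type: NodePort
  selector:
    app: kubernetes-bootcamp
  ports:
    - port: 8080
      targetPort: 8080
Applying a manifest like this with kubectl apply -f is roughly equivalent to the kubectl expose command used later in this tutorial.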
More information about the different types of Services can be found in the Using Source IP
tutorial. Also see Connecting Applications with Services .
Additionally, note that there are some use cases with Services that involve not defining a
selector in the spec. A Service created without a selector will also not create the corresponding
Endpoints object. This allows users to manually map a Service to specific endpoints. Another
reason there may be no selector is that you are strictly using type: ExternalName.
Summary
Exposing Pods to external traffic
Load balancing traffic across multiple Pods
Using labels
A Kubernetes Service is an abstraction layer which defines a logical set of Pods and enables
external traffic exposure, load balancing and service discovery for those Pods.
Services and Labels
A Service routes traffic across a set of Pods. Services are the abstraction that allows pods to die
and replicate in Kubernetes without impacting your application. Discovery and routing among
dependent Pods (such as the frontend and backend components in an application) are handled
by Kubernetes Services.
Services match a set of Pods using labels and selectors , a grouping primitive that allows logical
operation on objects in Kubernetes. Labels are key/value pairs attached to objects and can be
used in any number of ways:
Designate objects for development, test, and production
Embed version tags
Classify an object using tags
Labels can be attached to objects at creation time or later on. They can be modified at any time.
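As a sketch of both halves of that relationship (the label keys and values are illustrative):
# Labels set on a Pod (or on the Pod template inside a Deployment):
metadata:
  labels:
    app: kubernetes-bootcamp
    environment: production
# A Service spec selecting those Pods by one of their labels:
spec:
  selector:
    app: kubernetes-bootcamp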
Let's expose our application now using a Service and apply some labels.
Step 1: Creating a new Service
Let’s verify that our application is running. We’ll use the kubectl get command and look for
existing Pods:
kubectl get pods
If no Pods are running then it means the objects from the previous tutorials were cleaned up. In
this case, go back and recreate the deployment from the Using kubectl to create a Deployment
tutorial. Please wait a couple of seconds and list the Pods again. You can continue once you see
one Pod running.
Next, let’s list the current Services from our cluster:
kubectl get services
We have a Service called kubernetes that is created by default when minikube starts the cluster.
To create a new service and expose it to external traffic we'll use the expose command with
NodePort as parameter.
kubectl expose deployment/kubernetes-bootcamp --type="NodePort" --port 8080
Let's run again the get services subcommand:
kubectl get services
We now have a running Service called kubernetes-bootcamp. Here we see that the Service
received a unique cluster-IP, an internal port and an external-IP (the IP of the Node).
To find out what port was opened externally (for the type: NodePort Service) we’ll run the
describe service subcommand:
kubectl describe services/kubernetes-bootcamp
Create an environment variable called NODE_PORT that has the value of the Node port
assigned:
export NODE_PORT="$(kubectl get services/kubernetes-bootcamp -o go-
template='{{(index .spec.ports 0).nodePort}}')"
echo "NODE_PORT=$NODE_PORT"
Now we can test that the app is exposed outside of the cluster using curl, the IP address of the
Node and the externally exposed port:
curl http://"$(minikube ip):$NODE_PORT"
Note:
If you're running minikube with Docker Desktop as the container driver, a minikube tunnel is
needed. This is because containers inside Docker Desktop are isolated from your host computer.
In a separate terminal window, execute:
minikube | 8,277 |
service kubernetes-bootcamp --url
The output looks like this:
http://127.0.0.1:51082
! Because you are using a Docker driver on darwin, the terminal needs to be open to
run it.
Then use the given URL to access the app:
curl 127.0.0.1:51082
And we get a response from the server. The Service is exposed.
Step 2: Using labels
The Deployment automatically created a label for our Pod. With the describe deployment
subcommand you can see the name (the key) of that label:
kubectl describe deployment
Let’s use this label to query our list of Pods. We’ll use the kubectl get pods command with -l as
a parameter, followed by the label values:
kubectl get pods -l app=kubernetes-bootcamp
You can do the same to list the existing Services:
kubectl get services -l app=kubernetes-bootcamp
Get the name of the Pod and store it in the POD_NAME environment variable:
export POD_NAME="$(kubectl get pods -o go-template --template '{{range .items}}
{{.metadata.name}}{{"\n"}}{{end}}')"
echo "Name of the Pod: $POD_NAME"
To apply a new label we use the label subcommand followed by the object type, object name
and the new label:
kubectl label pods "$POD_NAME" version=v1
This will apply a new label to our Pod (we pinned the application version to the Pod), and we
can check it with the describe pod command:
kubectl describe pods "$POD_NAME"
We see here that the label is attached now to our Pod. And we can query now the list of pods
using the new label:
kubectl get pods -l version=v1
And we see the Pod.
Step 3: Deleting a service
To delete Services you can use the delete service subcommand. Labels can be used also here:
kubectl delete service -l app=kubernetes-bootcamp
Confirm that the Service is gone:
kubectl get services
This confirms that our Service was removed. To confirm that the route is not exposed anymore,
you can curl the previously exposed IP and port:
curl http://"$(minikube ip):$NODE_PORT"
This proves that the application is not reachable anymore from outside of the cluster. You can
confirm that the app is still running with a curl from inside the pod:
kubectl exec -ti "$POD_NAME" -- curl http://localhost:8080
We see here that the application is up. This is because the Deployment is managing the
application. To shut down the application, you would need to delete the Deployment as well.
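A sketch of that final cleanup step (note that this also terminates the remaining Pod, so only run it when you are done experimenting):
kubectl delete deployments/kubernetes-bootcamp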
Once you're ready, move on to Running Multiple Instances of Your App .
Scale Your App
Running Multiple Instances of Your App
Scale an existing app manually using kubectl.
Objectives
Scale an app using kubectl.
Scaling an application
Previously we created a Deployment , and then exposed it publicly via a Service . The
Deployment created only one Pod for running our application. When traffic increases, we will
need to scale the application to keep up with user demand.
If you haven't worked through the earlier sections, start from Using minikube to create a
cluster .
Scaling is accomplished by changing the number of replicas in a Deployment.
Summary:
Scaling a Deployment
You can create a Deployment with multiple instances from the start, using the --replicas parameter
of the kubectl create deployment command (see the sketch below).
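A sketch of that one-step form (the image name is assumed from earlier in this tutorial series; substitute your own):
kubectl create deployment kubernetes-bootcamp --image=gcr.io/google-samples/kubernetes-bootcamp:v1 --replicas=3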
Note:
If you are trying this after the previous section , you may have deleted the Service exposing the
Deployment. In that case, please expose the Deployment again using the fol | 8,281 |
lowing command:
kubectl expose deployment/kubernetes-bootcamp --type="NodePort" --port 8080
Scaling overview
Scaling out a Deployment will ensure new Pods are created and scheduled to Nodes with
available resources. Scaling will increase the number of Pods to the new desired state.
Kubernetes also supports autoscaling of Pods, but it is outside of the scope of this tutorial.
Scaling to zero is also possible, and it will terminate all Pods of the specified Deployment.
Running multiple instances of an application will require a way to distribute the traffic to all of
them. Services have an integrated load-balancer that will distribute network traffic to all Pods
of an exposed Deployment. Services continuously monitor the running Pods using endpoints,
to ensure the traffic is sent only to available Pods.
Scaling is accomplished by changing the number of replicas in a Deployment.
Once you have multiple instances of an application running, you would be able to do Rolling
updates without downtime. We'll cover that in the next section of the tutorial. Now, let's go to
the terminal and scale our application.
Scaling a Deployment
To list your Deployments, use the get deployments subcommand:
kubectl get deployments
The output should be similar to:
NAME READY UP-TO-DATE AVAILABLE AGE
kubernetes-bootcamp 1/1 1 1 11m
We should have 1 Pod. If not, run the command again. This shows:
NAME lists the names of | 8,283 |
the Deployments in the cluster.
READY shows the ratio of CURRENT/DESIRED replicas
UP-TO-DATE displays the number of replicas that have been updated to achieve the
desired state.
AVAILABLE displays how many replicas of the application are available to your users.
AGE displays the amount of time that the application has been running.
To see the ReplicaSet created by the Deployment, run:
kubectl get rs
Notice that the name of the ReplicaSet is always formatted as [DEPLOYMENT-NAME]-
[RANDOM-STRING] . The random string is randomly generated and uses the pod-template-hash
as a seed.
Two important columns of this output are:
DESIRED displays the desired number of replicas of the application, which you define
when you create the Deployment. This is the desired state.
CURRENT displays how many replicas are currently running.
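The random string mentioned above is also attached to the ReplicaSet and its Pods as the pod-template-hash label, which you can display directly (a sketch):
kubectl get rs --show-labels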
Next, let’s scale the Deployment to 4 replicas. We’ll use the kubectl scale command, followed by
the Deployment type, name and desired number of instances:
kubectl scale deployments/kubernetes-bootcamp --replicas=4
To list your Deployments once again, use get deployments :
kubectl get deployments
The change was applied, and we have 4 instances of the application available. Next, let’s check
if the number of Pods changed:
kubectl get pods -o wide
There are 4 Pods now, with different IP addresses. The change was registered in the Deployment
events log. To check that, use the describe subcommand:
kubectl describe deployments/kubernetes-bootcamp
You can also view in the output of this command that there are 4 replicas now.
Load Balancing
Let's check that the Service is load-balancing the traffic. To find out the exposed IP and Port we
can use the describe service as we learned in the previous part of the tutorial:
kubectl describe services/kubernetes-bootcamp
Create an environment variable called NODE_PORT that has the value of the Node port assigned:
export NODE_PORT="$(kubectl get services/kubernetes-bootcamp -o go-
template='{{(index .spec.ports 0).nodePort}}')"
echo "NODE_PORT=$NODE_PORT"
Next, we’ll do a curl to the exposed IP address and port. Execute the command multiple times:
curl http://"$(minikube ip):$NODE_PORT"
We hit a different Pod with every request. This demonstrates that the load-balancing is
working.
Note:
If you're running minikube with Docker Desktop as the container driver, a minikube tunnel is
needed. This is because containers inside Docker Desktop are isolated from your host computer.
In a separate terminal window, execute:
minikube service kubernetes-bootcamp --url
The output looks like this:
http://127.0.0.1:51082
! Because you are using a Docker driver on darwin, the terminal needs to be open to
run it.
Then use the given URL to access the app:
curl 127.0.0.1:51082
Scale Down
To scale down the Deployment to 2 replicas, run again the scale subcommand:
kubectl scale deployments/kubernetes-bootcamp --replicas=2
List the Deployments to check if the change was applied with the get deployments
subcommand:
kubectl get deployments
The number of replicas decreased to 2. List the number of Pods, with get pods :
kubectl get pods -o wide
This confirms that 2 Pods were terminated.
Once you're ready, move on to Performing a Rolling Update .
Update Your App
Performing a Rolling Update
Perform a rolling update using kubectl.
Objectives
Perform a rolling update using kubectl.
Updating an application
Users expect applications to be available all the time, and developers are expected to deploy
new versions of them several times a day. In Kubernetes this is done with rolling updates. A
rolling update allows a Deployment update to take place with zero downtime. It does this by
incrementally replacing the current Pods with new ones. The new Pods are scheduled on Nodes
with available resources, and Kubernetes waits for those new Pods to start before removing the
old Pods.
In the previous module we scaled our application to run multiple instances. This is a
requirement for performing updates without affecting application availability. By default, the
maximum number of Pods that can be unavailable during the update and the maximum number
of new Pods that can be created, is one. Both options can be configured to either numbers or
percentages (of Pods). In Kubernetes, updates are versioned and any Deployment update can be
reverted to a previous (stable) version.
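Those two settings correspond to the maxUnavailable and maxSurge fields of a Deployment's update strategy. A minimal sketch of how they appear in a manifest, using the values described above:
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # at most one Pod may be unavailable during the update
      maxSurge: 1         # at most one extra Pod may be created above the desired count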
Summary:
Updating an app
Rolling updates allow Deployments' update to take place with zero downtime by incrementally
updating Pods instances with new ones.
Rolling updates overview
Similar to application Scaling, if a Deployment is exposed publicly, the Service will load-balance
the traffic only to available Pods during the update. An available Pod is an instance that is
available to the users of the application.
Rolling updates allow the following actions:
Promote an application from one environment to another (via container image updates)
Rollback to previous versions
Continuous Integration and Continuous Delivery of applications with zero downtime
In the following interactive tutorial, we'll update our application to a new version, and also
perform a rollback.
Update the version of the app
To list your Deployments, run the get deployments subcommand:
kubectl get deployments
To list the running Pods, run the get pods subcommand:
kubectl get pods
To view the current image version of the app, run the describe pods subcommand and look for
the Image field:
kubectl describe pods
To update the image of the application to version 2, use the set image subcommand, followed
by the deployment name and the new image version:
kubectl set image deployments/kubernetes-bootcamp kubernetes-
bootcamp=docker.io/jocatalin/kubernetes-bootcamp:v2
The command notified the Deployment to use a different image for your app and initiated a
rolling update. Check the status of the new Pods, and view the old one terminating with the
get pods subcommand:
kubectl get pods
Verify an update
First, check that the app is running. To find the exposed IP address and port, run the describe
service command:
kubectl describe services/kubernetes-bootcamp
Create an environment variable called NODE_PORT that has the value of the Node port
assigned:
export NODE_PORT="$(kubectl get services/kubernetes-bootcamp -o go-
template='{{(index .spec.ports 0).nodePort}}')"
echo "NODE_PORT=$NODE_PORT"
Next, do a curl to the exposed IP and port:
curl http://"$(minikube ip):$NODE_PORT"
Every time you run the curl command, you will hit a different Pod. Notice that all Pods are now
running the latest version (v2).
You can also confirm the update by running the rollout status subcommand:
kubectl rollout status deployments/kubernetes-bootcamp
To view the current image version of the app, run the describe pods subcommand:
kubectl describe pods
In the Image field of the output, verify that you are running the latest image version (v2).
Roll back an update
Let’s perform another update, and try to deploy an image tagged with v10:
kubectl set image deployments/kubernetes-bootcamp kubernetes-bootcamp=gcr.io/
google-samples/kubernetes-bootcamp:v10
Use get deployments to see the status of the deployment:
kubectl get deployments
Notice that the output doesn't list the desired number of available Pods. Run the get pods
subcommand to list all Pods:
kubectl get pods
Notice that some of the Pods have a status of ImagePullBackOff.
To get more insight into the problem, run the describe pods subcommand:
kubectl describe pods
In the Events section of the output for the affected Pods, notice that the v10 image version did
not exist in the repository.
To roll back the deployment to your last working version, use the rollout undo subcommand:
kubectl rollout undo deployments/kubernetes-bootcamp
The rollout undo command reverts the deployment to the previous known state (v2 of the
image). Updates are versioned and you can revert to any previously known state of a
Deployment.
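Related kubectl rollout subcommands let you inspect those recorded revisions and target one explicitly. A sketch (the revision number is illustrative; check the history output for the real ones):
kubectl rollout history deployments/kubernetes-bootcamp
kubectl rollout undo deployments/kubernetes-bootcamp --to-revision=1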
Use the get pods subcommand to list the Pods again:
kubectl get pods
Four Pods are running. To check the image deployed on these Pods, use the describe pods
subcommand:
kubectl describe pods
The Deployment is once again using a stable version of the app (v2). The rollback was
successful.
Remember to clean up your local cluster:
kubectl delete deployments/kubernetes-bootcamp services/kubernetes-bootcamp
Configuration
Example: Configuring a Java Microservice
Configuring Redis using a ConfigMap
Example: Configuring a Java Microservice
Externalizing config using MicroProfile, ConfigMaps and Secrets
In this tutorial you will learn how and why to externalize your microservice's configuration.
Specifically, you will learn how to use Kubernetes ConfigMaps and Secrets to set environment
variables and then consume them using MicroProfile Config.
Before you begin
Creating Kubernetes ConfigMaps & Secrets
There are several ways to set environment variables for a Docker container in Kubernetes,
including: Dockerfile, kubernetes.yml, Kubernetes ConfigMaps, and Kubernetes Secrets. In this
tutorial, you will learn how to use the latter two for setting your environment variables whose
values will be injected into your microservices. One of the benefits of using ConfigMaps and
Secrets is that they can be re-used across multiple containers, including being assigned to
different environment variables for the different containers.
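As a sketch of what that wiring looks like inside a container spec (the ConfigMap name, Secret name and keys below are hypothetical placeholders, not names from this tutorial):
env:
  - name: APP_NAME
    valueFrom:
      configMapKeyRef:
        name: app-config          # hypothetical ConfigMap
        key: app.name
  - name: APP_PASSWORD
    valueFrom:
      secretKeyRef:
        name: app-credentials     # hypothetical Secret
        key: password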
ConfigMaps are API Objects that store non-confidential key-value pairs. In the Interactive
Tutorial you will learn how to use a ConfigMap to store the application's name. For more
information regarding ConfigMaps, you can find the documentation here.
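A minimal ConfigMap sketch along those lines (the name and key/value are illustrative):
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  app.name: my-microservice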
Although Secrets are also used to store key-value pairs, they differ from ConfigMaps in that
they're intended for confidential/sensitive information and are stored | 8,296 |
using Base64 encoding.
This makes secrets the appropriate choice for storing such things as credentials, keys, and
tokens, the former of which you'll do in the Interactive Tutorial. For more information on
Secrets, you can find the documentation here.
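A matching Secret sketch (written with stringData so that Kubernetes performs the Base64 encoding on your behalf when the object is stored; the values are illustrative only):
apiVersion: v1
kind: Secret
metadata:
  name: app-credentials
type: Opaque
stringData:
  username: admin
  password: change-me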
Externalizing Config from Code
Externalized application configuration is useful because configuration usually changes
depending on your environment. In order to accomplish this, we'll use Java's Contexts and
Dependency Injection (CDI) and MicroProfile Config. MicroProfile Config is a feature of
MicroProfile, a set of open Java technologies for developing and deploying cloud-native
microservices.
CDI provides a standard dependency injection capability enabling an application to be
assembled from collaborating, loosely-coupled beans. MicroProfile Config provides apps and
microservices a standard way to obtain config properties from various sources, including the
application, runtime, and environment. Based on the source's defined priority, | 8,297 |
the properties are
automatically combined into a single set of properties that the application can access via an
API. Together, CDI & MicroProfile will be used in the Interactive Tutorial to retrieve the
externally provided properties from the Kubernetes ConfigMaps and Secrets and have them
injected into your application code.
Many open source frameworks and runtimes implement and support MicroProfile Config.
Throughout the interactive tutorial, you'll be using Open Liberty, a flexible open-source Java
runtime for building and running cloud-native apps and microservices. However, any
MicroProfile compatible runtime could be used instead.
Objectives
Create a Kubernetes ConfigMap and Secret
Inject microservice configuration using MicroProfile Config
Example: Externalizing config using MicroProfile,
ConfigMaps and Secrets
Configuring Redis using a ConfigMap
This page provides a real world example of how to configure Redis using a ConfigMap and
builds upon the Configure a Pod to Use a ConfigMap task.
Objectives
Create a ConfigMap with Redis configuration values
Create a Redis Pod that mounts and uses the created ConfigMap
Verify that the configuration was correctly applied.
Before you begin
You need to have a Kubernetes cluster, and the kubectl command-line tool must be configured
to communicate with your cluster. It is recommended to run this tutorial on a cluster with at
least two nodes that are not acting as control plane hosts. If you do not already have a cluster,
you can create one by using minikube or you can use one of these Kubernetes playgrounds:
Killercoda
Play with Kubernetes
To check the version, enter kubectl version .
The example shown on this page works with kubectl 1.14 and above.
Under | 8,299 |