When zk-0 is fully terminated, use CTRL-C to terminate kubectl.
zk-2 1/1 Terminating 0 9m
zk-0 1/1 Terminating 0 11m
zk-1 1/1 Terminating 0 10m
zk-2 0/1 Terminating 0 9m
zk-2 0/1 Terminating 0 9m
zk-2 0/1 Terminating 0 9m
zk-1 0/1 Terminating 0 10m
zk-1 0/1 Terminating 0 10m
zk-1 0/1 Terminating 0 10m
zk-0 0/1 Terminating 0 11m
zk-0 0/1 Terminating 0 11m
zk-0 0/1 Terminating 0 11m
Reapply the manifest in zookeeper.yaml.
kubectl apply -f https://k8s.io/examples/application/zookeeper/zookeeper.yaml
This creates the zk StatefulSet object, but the other API objects in the manifest are not modified
because they already exist.
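If you want to confirm that the pre-existing objects really were left unchanged, one quick check (a sketch; it assumes the Services and PodDisruptionBudget in the manifest carry the app=zk label) is to list them and compare their AGE against the newly created StatefulSet:
kubectl get sts,svc,pdb -l app=zk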
Watch the StatefulSet controller recreate the StatefulSet's Pods.
kubectl get pods -w -l app=zk
Once the zk-2 Pod is Running and Ready, use CTRL-C to terminate kubectl.
NAME READY STATUS RESTARTS AGE
zk-0 0/1 Pending 0 0s
zk-0 0/1 Pending 0 0s
zk-0 0/1 ContainerCreating 0 0s
zk-0 0/1 Running 0 19s
zk-0 1/1 Running 0 40s
zk-1 0/1 Pending 0 0s
zk-1 0/1 Pending 0 0s
zk-1 0/1 ContainerCreating 0 0s
zk-1 0/1 Running 0 18s
zk-1 1/1 Running 0 40s
zk-2 0/1 Pending 0 0s
zk-2 0/1 Pending 0 0s
zk-2 0/1 ContainerCreating 0 0s
zk-2 0/1 Running 0 19s
zk-2 1/1 Running 0 40s
Use the command below to get the value you entered during the sanity test, from the zk-2 Pod.
kubectl exec zk-2 -- zkCli.sh get /hello
Even though you terminated and recreated all of the Pods in the zk StatefulSet, the ensemble
still serves the original value.
WATCHER::
WatchedEvent state:SyncConnected type:None path:null
world
cZxid = 0x100000002
ctime = Thu Dec 08 15:13:30 UTC 2016
mZxid = 0x100000002
mtime = Thu Dec 08 15:13:30 UTC 2016
pZxid = 0x100000002
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 5
numChildren = 0
The volumeClaimTemplates field of the zk StatefulSet's spec specifies a PersistentVolume
provisioned for each Pod.
volumeClaimTemplates:
- metadata:
    name: datadir
    annotations:
      volume.alpha.kubernetes.io/storage-class: anything
  spec:
    accessModes: [ "ReadWriteOnce" ]
    resources:
      requests:
        storage: 20Gi
The StatefulSet controller generates a PersistentVolumeClaim for each Pod in the StatefulSet .
Use the following command to get the StatefulSet 's PersistentVolumeClaims .
kubectl get pvc -l app=zk
When the StatefulSet recreates its Pods, it remounts the Pods' PersistentVolumes.
NAME STATUS VOLUME CAPACITY ACCESSMODES AGE
datadir-zk-0 Bound pvc-bed742cd-bcb1-11e6-994f-42010a800002 20Gi RWO 1h
datadir-zk-1 Bound pvc-bedd27d2-bcb1-11e6-994f-42010a800002 20Gi RWO 1h
datadir-zk-2 Bound pvc-bee0817e-bcb1-11e6-994f-42010a800002 20Gi RWO 1h
The volumeMounts section of the StatefulSet 's container template mounts the
PersistentVolumes in the ZooKeeper servers' data directories.
volumeMounts:
- name: datadir
  mountPath: /var/lib/zookeeper
When a Pod in the zk StatefulSet is (re)scheduled, it will always have the same
PersistentVolume mounted to the ZooKeeper server's data directory. Even when the Pods are
rescheduled, all the writes made to the ZooKeeper servers' WALs, and all their snapshots,
remain durable.
Ensuring consistent configuration
As noted in the Facilitating Leader Election and Achieving Consensus sections, the servers in a
ZooKeeper ensemble require consistent configuration to elect a leader and form a quorum. They
also require consistent configuration of the Zab protocol in order for the protocol to work
correctly over a network. In our example we achieve consistent configuration by embedding the
configuration directly into the manifest.
Get the zk StatefulSet.
kubectl get sts zk -o yaml
...
command:
- sh
- -c
- "start-zookeeper \
--servers=3 \
--data_dir=/var/lib/zookeeper/data \
--data_log_dir=/var/lib/zookeeper/data/log \
--conf_dir=/opt/zookeeper/conf \
--client_port=2181 \
--election_port=3888 \
--server_port=2888 \
--tick_time=2000 \
--init_limit=10 \
--sync_limit=5 \
--heap=512M \
--max_client_cnxns=60 \
--snap_retain_count=3 \
--purge_interval=12 \
--max_session_timeout=40000 \
--min_session_timeout=4000 \
--log_level=INFO"
...
The command used to start the ZooKeeper servers passed the configuration as command line
parameters. You can also use environment variables to pass configuration to the ensemble.
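As a sketch of the environment-variable approach (the variable name ZK_HEAP is hypothetical, not one the start-zookeeper script is known to read), you could define the value once in the container spec and expand it in the command, since Kubernetes substitutes $(VAR) references in command and args:
env:
- name: ZK_HEAP
  value: "512M"
command:
- sh
- -c
- "start-zookeeper --heap=$(ZK_HEAP) ..."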
Configuring logging
One of the files generated by the zkGenConfig.sh script controls ZooKeeper's logging.
ZooKeeper uses Log4j , and, by default, it uses a time and size based rolling file appender for its
logging configuration.
Use the command below to get the logging configuration from one of the Pods in the zk StatefulSet.
kubectl exec zk-0 -- cat /usr/etc/zookeeper/log4j.properties
The logging configuration below will cause the ZooKeeper process to write all of its logs to the
standard output file stream.
zookeeper.root.logger=CONSOLE
zookeeper.console.threshold=INFO
log4j.rootLogger=${zookeeper.root.logger}
log4j.appender.CONSOLE=org.apache.log4j.ConsoleAppender
log4j.appender.CONSOLE.Threshold=${zookeeper.console.threshold}
log4j.appender.CONSOLE.layout=org.apache.log4j.PatternLayout
log4j.appender.CONSOLE.layout.ConversionPattern=%d{ISO8601} [myid:%X{myid}] - %-5p [%t:
%C{1}@%L] - %m%n
This is the simplest possible way to safely log inside the container. Because the applications
write logs to standard out, Kubernetes will handle log rotation for you. Kubernetes also
implements a sane retention policy that ensures application logs written to standard out and
standard error do not exhaust local storage media.
Use kubectl logs to retrieve the last 20 log lines from one of the Pods.
kubectl logs zk-0 --tail 20
You can view application logs written to standard out or standard error using kubectl logs and
from the Kubernetes Dashboard.
2016-12-06 19:34:16,236 [myid:1] - INFO [NIOServerCxn.Factory:
0.0.0.0/0.0.0.0:2181:NIOServerCnxn@827] - Processing ruok command from /127.0.0.1:52740
2016-12-06 19:34:16,237 [myid:1] - INFO [Thread-1136:NIOServerCnxn@1008] - Closed socket
connection for client /127.0.0.1:52740 (no session established for client)
2016-12-06 19:34:26,155 [myid:1] - INFO [NIOServerCxn.Factory:
0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@192] - Accepted socket connection from /
127.0.0.1:52749
2016-12-06 19:34:26,155 [myid:1] - INFO [NIOServerCxn.Factory:
0.0.0.0/0.0.0.0:2181:NIOServerCnxn@827] - Processing ruok command from /127.0.0.1:52749
2016-12-06 19:34:26,156 [myid:1] - INFO [Thread-1137:NIOServerCnxn@1008] - Closed socket
connection for client /127.0.0.1:52749 (no session established for client)
2016-12-06 19:34:26,222 [myid:1] - INFO [NIOServerCxn.Factory:
0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@192] - Accepted socket connection from /
127.0.0.1:52750
2016-12-06 19:34:26,222 [myid:1] - INFO [NIOServerCxn.Factory:
0.0.0.0/0.0.0.0:2181:NIOServerCnxn@827] - Processing ruok command from /127.0.0.1:52750
2016-12-06 19:34:26,226 [myid:1] - INFO [Thread-1138:NIOServerCnxn@1008] - Closed socket
connection for client /127.0.0.1:52750 (no session established for client)
2016-12-06 19:34:36,151 [myid:1] - INFO [NIOServerCxn.Factory:
0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@192] - Accepted socket connection from /
127.0.0.1:52760
2016-12-06 19:34:36,152 [myid:1] - INFO [NIOServerCxn.Factory:
0.0.0.0/0.0.0.0:2181:NIOServerCnxn@827] - Processing ruok command from /127.0.0.1:52760
2016-12-06 19:34:36,152 [myid:1] - INFO [Thread-1139:NIOServerCnxn@1008] - Closed socket
connection for client /127.0.0.1:52760 (no session established for client)
2016-12-06 19:34:36,230 [myid:1] - INFO [NIOServerCxn.Factory:
0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@192] - Accepted socket connection from /
127.0.0.1:52761
2016-12-06 19:34:36,231 [myid:1] - INFO [NIOServerCxn.Factory:
0.0.0.0/0.0.0.0:2181:NIOServerCnxn@827] - Processing ruok command from /127.0.0.1:52761
2016-12-06 19:34:36,231 [myid:1] - INFO [Thread-1140:NIOServerCnxn@1008] - Closed socket
connection for client /127.0.0.1:52761 (no session established for client)
2016-12-06 19:34:46,149 [myid:1] - INFO [NIOServerCxn.Factory:
0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@192] - Accepted socket connection from /
127.0.0.1:52767
2016-12-06 19:34:46,149 [myid:1] - INFO [NIOServerCxn.Factory:
0.0.0.0/0.0.0.0:2181:NIOServerCnxn@827] - Processing ruok command from /127.0.0.1:52767
2016-12-06 19:34:46,149 [myid:1] - INFO [Thread-1141:NIOServerCnxn@1008] - Closed socket
connection for client /127.0.0.1:52767 (no session established for client)
2016-12-06 19:34:46,230 [myid:1] - INFO [NIOServerCxn.Factory:
0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@192] - Accepted socket connection from /
127.0.0.1:52768
2016-12-06 19:34:46,230 [myid:1] - INFO [NIOServerCxn.Factory:
0.0.0.0/0.0.0.0:2181:NIOServerCnxn@827] - Processing ruok command from /127.0.0.1:52768
2016-12-06 19:34:46,230 [myid:1] - INFO [Thread-1142:NIOServerCnxn@1008] - Closed socket
connection for client /127.0.0.1:52768 (no session established for client)
Kubernetes integrates with many logging solutions. You can choose a logging solution that best
fits your cluster and applications. For cluster-level logging and aggregation, consider deploying
a sidecar container to rotate and ship your logs.
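A minimal sketch of that pattern, assuming a hypothetical example/log-shipper image and that the ZooKeeper container writes its log files into a shared emptyDir volume:
containers:
- name: kubernetes-zookeeper
  # ... the existing ZooKeeper container, writing logs to /var/log/zookeeper ...
  volumeMounts:
  - name: logs
    mountPath: /var/log/zookeeper
- name: log-shipper
  image: example/log-shipper:latest  # hypothetical image
  volumeMounts:
  - name: logs
    mountPath: /var/log/zookeeper
    readOnly: true
volumes:
- name: logs
  emptyDir: {}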
Configuring a non-privileged user
The best practices to allow an application to run as a privileged user inside of a container are a
matter of debate. If your organization requires that applications run as a non-privileged user,
you can use a SecurityContext to control the user that the entry point runs as.
The zk StatefulSet 's Pod template contains a SecurityContext .
securityContext:
  runAsUser: 1000
  fsGroup: 1000
In the Pods' containers, UID 1000 corresponds to the zookeeper user and GID 1000 corresponds
to the zookeeper group.
Get the ZooKeeper process information from the zk-0 Pod.
kubectl exec zk-0 -- ps -elf
As the runAsUser field of the securityContext object is set to 1000, instead of running as root,
the ZooKeeper process runs as the zookeeper user.
F S UID PID PPID C PRI NI ADDR SZ WCHAN STIME TTY TIME CMD
4 S zookeep+ 1 0 0 80 0 - 1127 - 20:46 ? 00:00:00 sh -c zkGenConfig.sh &&
zkServer.sh start-foreground
0 S zookeep+ 27 1 0 80 0 - 1155556 - 20:46 ? 00:00:19 /usr/lib/jvm/java-8-openjdk-
amd64/bin/java -Dzookeeper.log.dir=/var/log/zookeeper -
Dzookeeper.root.logger=INFO,CONSOLE -cp /usr/bin/../build/classes:/usr/bin/../build/lib/*.jar:/
usr/bin/../share/zookeeper/zookeeper-3.4.9.jar:/usr/bin/../share/zookeeper/slf4j-
log4j12-1.6.1.jar:/usr/bin/../share/zookeeper/slf4j-api-1.6.1.jar:/usr/bin/../share/zookeeper/
netty-3.10.5.Final.jar:/usr/bin/../share/zookeeper/log4j-1.2.16.jar:/usr/bin/../share/zookeeper/
jline-0.9.94.jar:/usr/bin/../src/java/lib/*.jar:/usr/bin/../etc/zookeeper: -Xmx2G -Xms2G -
Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.local.only=false
org.apache.zookeeper.server.quorum.QuorumPeerMain /usr/bin/../etc/zookeeper/zoo.cfg
By default, when the Pod's PersistentVolume is mounted to the ZooKeeper server's data
directory, it is accessible only by the root user. This configuration prevents the ZooKeeper
process from writing to its WAL and storing its snapshots.
Use the command below to get the file permissions of the ZooKeeper data directory on the zk-0
Pod.
kubectl exec -ti zk-0 -- ls -ld /var/lib/zookeeper/data
Because the fsGroup field of the securityContext object is set to 1000, the ownership of the
Pods' PersistentVolumes is set to the zookeeper group, and the ZooKeeper process is able to
read and write its data.
drwxr-sr-x 3 zookeeper zookeeper 4096 Dec 5 20:45 /var/lib/zookeeper/data
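If you want to double-check the effective identity from inside the container, one option (assuming the image ships the standard id utility) is:
kubectl exec zk-0 -- id
which should report uid=1000 and gid=1000, the zookeeper user and group.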
Managing the ZooKeeper process
The ZooKeeper documentation mentions that "You will want to have a supervisory process that
manages each of your ZooKeeper server processes (JVM)." Utilizing a watchdog (supervisory
process) to restart failed processes in a distributed system is a common pattern. When
deploying an application in Kubernetes, rather than using an external utility as a supervisory
process, you should use Kubernetes as the watchdog for your application.
Updating the ensemble
The zk StatefulSet is configured to use the RollingUpdate update strategy.
You can use kubectl patch to update the number of cpus allocated to the servers.
kubectl patch sts zk --type='json' -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/resources/requests/cpu", "value":"0.3"}]'
statefulset.apps/zk patched
Use kubectl rollout status to watch the status of the update.
kubectl rollout status sts/zk
waiting for statefulset rolling update to complete 0 pods at revision zk-5db4499664...
Waiting for 1 pods to be ready...
Waiting for 1 pods to be ready...
waiting for statefulset rolling update to complete 1 pods at revision zk-5db4499664...
Waiting for 1 pods to be ready...
Waiting for 1 pods to be ready...
waiting for statefulset rolling update to complete 2 pods at revision zk-5db4499664...
Waiting for 1 pods to be ready...
Waiting for 1 pods to be ready...
statefulset rolling update complete 3 pods at revision zk-5db4499664...
This terminates the Pods, one at a time, in reverse ordinal order, and recreates them with the
new configuration. This ensures that quorum is maintained during a rolling update.
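To confirm that a restarted Pod picked up the new request, you can read it back from the API; a sketch (the jsonpath expression assumes a single container in the Pod template):
kubectl get pod zk-0 -o jsonpath='{.spec.containers[0].resources.requests.cpu}'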
Use the kubectl rollout history command to view a history of previous configurations.
kubectl rollout history sts/zk
The output is similar to this:
statefulsets "zk"
REVISION
1
2
Use the kubectl rollout undo command to roll back the modification.
kubectl rollout undo sts/zk
The output is similar to this:
statefulset.apps/zk rolled back
Handling process failure
Restart Policies control how Kubernetes handles process failures for the entry point of the
container in a Pod. For Pods in a StatefulSet , the only appropriate RestartPolicy is Always, and
this is the default value. For stateful applications you should never override the default policy.
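For reference, the policy lives in the Pod template; a minimal sketch that states the default explicitly would look like this:
spec:
  template:
    spec:
      restartPolicy: Always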
Use the following command to examine the process tree for the ZooKeeper server running in
the zk-0 Pod.
kubectl exec zk-0 -- ps -ef
The command used as the container's entry point has PID 1, and the ZooKeeper process, a child
of the entry point, has PID 27.
UID PID PPID C STIME TTY TIME CMD
zookeep+ 1 0 0 15:03 ? 00:00:00 sh -c zkGenConfig.sh && zkServer.sh start-foreground
zookeep+ 27 1 0 15:03 ? 00:00:03 /usr/lib/jvm/java-8-openjdk-amd64/bin/java -
Dzookeeper.log.dir=/var/log/zookeeper -Dzookeeper.root.logger=INFO,CONSOLE -cp /usr/
bin/../build/classes:/usr/bin/../build/lib/*.jar:/usr/bin/../share/zookeeper/zookeeper-3.4.9.jar:/usr/
bin/../share/zookeeper/slf4j-log4j12-1.6.1.jar:/usr/bin/../share/zookeeper/slf4j-api-1.6.1.jar:/usr/
bin/../share/zookeeper/netty-3.10.5.Final.jar:/usr/bin/../share/zookeeper/log4j-1.2.16.jar:/usr/
bin/../share/zookeeper/jline-0.9.94.jar:/usr/bin/../src/java/lib/*.jar:/usr/bin/../etc/zookeeper: -
Xmx2G -Xms2G -Dcom.sun.management.jmxremote -
Dcom.sun.management.jmxremote.local.only=false
org.apache.zookeeper.server.quorum.QuorumPeerMain /usr/bin/../etc/zookeeper/zoo.cfg
In another terminal, watch the Pods in the zk StatefulSet with the following command.
kubectl get pod -w -l app=zk
In another terminal, terminate the ZooKeeper process in Pod zk-0 with the following command.
kubectl exec zk-0 -- pkill java
The termination of the ZooKeeper process caused its parent process to terminate. Because the
RestartPolicy of the container is Always, it restarted the parent process.
NAME READY STATUS RESTARTS AGE
zk-0 1/1 Running 0 21m
zk-1 1/1 Running 0 20m
zk-2 1/1 Running 0 19m
NAME READY STATUS RESTARTS AGE
zk-0 0/1 Error 0 29m
zk-0 0/1 Running 1 29m
zk-0 1/1 Running 1 29m
If your application uses a script (such as zkServer.sh ) to launch the process that implements the
application's business logic, the script must terminate with the child process. This ensures that
Kubernetes will restart the application's container when the process implementing the
application's business logic fails.
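One common way to satisfy this, sketched here as a general pattern rather than taken from this tutorial's image, is to exec the final process so that it replaces the wrapper shell and its exit is observed directly by Kubernetes:
command:
- sh
- -c
- "zkGenConfig.sh && exec zkServer.sh start-foreground"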
Testing for liveness
Configuring your application to restart failed processes is not enough to keep a distributed
system healthy. There are scenarios where a system's processes can be both alive and
unresponsive, or otherwise unhealthy. You should use liveness probes to notify Kubernetes that
your application's processes are unhealthy and it should restart them.
The Pod template for the zk StatefulSet specifies a liveness probe.
livenessProbe:
  exec:
    command:
    - sh
    - -c
    - "zookeeper-ready 2181"
  initialDelaySeconds: 15
  timeoutSeconds: 5
The probe calls a bash script that uses the ZooKeeper ruok four letter word to test the server's
health.
OK=$(echo ruok | nc 127.0.0.1 $1)
if [ "$OK" == "imok" ]; then
exit 0
else
exit 1
fi
In one terminal window, use the following command to watch the Pods in the zk StatefulSet.
kubectl get pod -w -l app=zk
In another window, use the following command to delete the zookeeper-ready script from the
file system of Pod zk-0.
kubectl exec zk-0 -- rm /opt/zookeeper/bin/zookeeper-ready
When the liveness probe for the ZooKeeper process fails, Kubernetes will automatically restart
the process for you, ensuring that unhealthy processes in the ensemble are restarted.
kubectl get pod -w -l app=zk
NAME READY STATUS RESTARTS AGE
zk-0 1/1 Running 0 1h
zk-1 1/1 Running 0 1h
zk-2 1/1 Running 0 1h
NAME READY STATUS RESTARTS AGE
zk-0 0/1 Running 0 1h
zk-0 0/1 Running 1 1h
zk-0 1/1 Running 1 1h
Testing for readiness
Readiness is not the same as liveness. If a process is alive, it is scheduled and healthy. If a
process is ready, it is able to process input. Liveness is a necessary, but not sufficient, condition
for readiness. There are cases, particularly during initialization and termination, when a process
can be alive but not ready.
If you specify a readiness probe, Kubernetes will ensure that your application's processes will
not receive network traffic until their readiness checks pass.
For a ZooKeeper server, liveness implies readiness. Therefore, the readiness probe from the
zookeeper.yaml manifest is identical to the liveness probe.
readinessProbe:
  exec:
    command:
    - sh
    - -c
    - "zookeeper-ready 2181"
  initialDelaySeconds: 15
  timeoutSeconds: 5
Even though the liveness and readiness probes are identical, it is important to specify both. This
ensures that only healthy servers in the ZooKeeper ensemble receive network traffic.
Tolerating Node failure
ZooKeeper needs a quorum of servers to successfully commit mutations to data. For a three
server ensemble, two servers must be healthy for writes to succeed. In quorum based systems,
members are deployed across failure domains to ensure availability. To avoid an outage due to
the loss of an individual machine, best practices preclude co-locating multiple instances of the
application on the same machine.
By default, Kubernetes may co-locate Pods in a StatefulSet on the same node. For the three
server ensemble you created, if two servers are on the same node, and that node fails, the
clients of your ZooKeeper service will experience an outage until at least one of the Pods can be
rescheduled.
You should always provision additional capacity to allow the processes of critical systems to be
rescheduled in the event of node failures. If you do so, then the outage will only last until the
Kubernetes scheduler reschedules one of the ZooKeeper servers. However, if you want your
service to tolerate node failures with no downtime, you should set podAntiAffinity.
Use the command below to get the nodes for Pods in the zk StatefulSet .
for i in 0 1 2; do kubectl get pod zk-$i --template {{.spec.nodeName}}; echo ""; done
All of the Pods in the zk StatefulSet are deployed on different nodes.
kubernetes-node-cxpk
kubernetes-node-a5aq
kubernetes-node-2g2d
This is because the Pods in the zk StatefulSet have a PodAntiAffinity specified.
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
            - key: "app"
              operator: In
              values:
                - zk
        topologyKey: "kubernetes.io/hostname"
The requiredDuringSchedulingIgnoredDuringExecution field tells the Kubernetes Scheduler
that it should never co-locate two Pods with the label app=zk in the domain defined by the
topologyKey . The topologyKey kubernetes.io/hostname indicates that the domain is an
individual node. Using different rules, labels, and selectors, you
can extend this technique to
spread your ensemble across physical, network, and power failure domains.
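For example, a sketch that spreads the ensemble across zones rather than individual nodes (assuming your nodes carry the standard topology.kubernetes.io/zone label) changes only the topologyKey:
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
            - key: "app"
              operator: In
              values:
                - zk
        topologyKey: "topology.kubernetes.io/zone"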
Surviving maintenance
In this section you will cordon and drain nodes. If you are using this tutorial on a shared
cluster, be sure that this will not adversely affect other tenants.
The previous section showed you how to spread your Pods across nodes to survive unplanned
node failures, but you also need to plan for temporary node failures that occur due to planned
maintenance.
Use this command to get the nodes in your cluster.
kubectl get nodes
This tutorial assumes a cluster with at least four nodes. If the cluster has more than four, use
kubectl cordon to cordon all but four nodes. Constraining to four nodes will ensure Kubernetes
encounters affinity and PodDisruptionBudget constraints when scheduling zookeeper Pods in
the following maintenance simulation.
kubectl cordon <node-name>
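If your cluster has many nodes, a sketch for cordoning everything after the first four (it assumes all listed nodes are interchangeable workers and that xargs supports the -r flag, as GNU xargs does):
kubectl get nodes -o name | tail -n +5 | xargs -r -I {} kubectl cordon {}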
Use this command to get the zk-pdb PodDisruptionBudget .
kubectl get pdb zk-pdb
The max-unavailable field indicates to Kubernetes that at most one Pod from zk StatefulSet can
be unavailable at any time.
NAME MIN-AVAILABLE MAX-UNAVAILABLE ALLOWED-DISRUPTIONS AGE
zk-pdb N/A 1 1
In one terminal, use this command to watch the Pods in the zk StatefulSet .
kubectl get pods -w -l app=zk
In another terminal, use this command to get the nodes that the Pods are currently scheduled
on.
for i in 0 1 2; do kubectl get pod zk-$i --template {{.spec.nodeName}}; echo ""; done
The output is similar to this:
kubernetes-node-pb41
kubernetes-node-ixsl
kubernetes-node-i4c4
Use kubectl drain to cordon and drain the node on which the zk-0 Pod is scheduled.
kubectl drain $(kubectl get pod zk-0 --template {{.spec.nodeName}}) --ignore-daemonsets --force --delete-emptydir-data
The output is similar to this:
node "kubernetes-node-pb41" cordoned
WARNING: Deleting pods not managed by ReplicationController, ReplicaSet, Job, or
DaemonSet: fluentd-cloud-logging-kubernetes-node-pb41, kube-proxy-kubernetes-node-pb41;
Ignoring DaemonSet-managed pods: node-problem-detector-v0.1-o5elz
pod "zk-0" deleted
node "kubernetes-node-pb41" drained
As there are four nodes in your cluster, kubectl drain succeeds and zk-0 is rescheduled to
another node.
NAME READY STATUS RESTARTS AGE
zk-0 1/1 Running 2 1h
zk-1 1/1 Running 0 1h
zk-2 1/1 Running 0 1h
NAME READY STATUS RESTARTS AGE
zk-0 1/1 Terminating 2 2h
zk-0 0/1 Terminating 2 2h
zk-0 0/1 Terminating 2 2h
zk-0 0/1 Terminating 2 2h
zk-0 0/1 Pending 0 0s
zk-0 0/1 Pending 0 0s
zk-0 0/1 ContainerCreating 0 0s
zk-0 0/1 Running 0 51s
zk-0 1/1 Running 0 1m
Keep watching the StatefulSet 's Pods in the first terminal and drain the node on which zk-1 is
scheduled.
kubectl drain $(kubectl get pod zk-1 --template {{.spec.nodeName}}) --ignore-daemonsets --force --delete-emptydir-data
The output is similar to this:
"kubernetes-node-ixsl" cordoned
WARNING: Deleting pods not managed by ReplicationController, ReplicaSet, Job, or
DaemonSet: fluentd-cloud-logging-kubernetes-node-ixsl, kube-proxy-kubernetes-node-ixsl;
Ignoring DaemonSet-managed pods: node-problem-detector-v0.1-voc74
pod "zk-1" deleted
node "kubernetes-node-ixsl" drained
The zk-1 Pod cannot be scheduled because the zk StatefulSet contains a PodAntiAffinity rule
preventing co-location of the Pods, and as only two nodes are schedulable, the Pod will remain
in a Pending state.
kubectl get pods -w -l app=zk
The output is similar to this:
NAME READY STATUS RESTARTS AGE
zk-0 1/1 Running 2 1h
zk-1 1/1 Running 0 1h
zk-2 1/1 Running 0 1h
NAME READY STATUS RESTARTS AGE
zk-0 1/1 Terminating 2 2h
zk-0 0/1 Terminating 2 2h
zk-0 0/1 Terminating 2 2h
zk-0 0/1 Terminating 2 2h
zk-0 0/1 Pending 0 0s
zk-0 0/1 Pending 0 0s
zk-0 0/1 ContainerCreating 0 0s
zk-0 0/1 Running 0 51s
zk-0 1/1 Running 0 1m
zk-1 1/1 Terminating 0 2h
zk-1 0/1 Terminating 0 2h
zk-1 0/1 Terminating 0 2h
zk-1 0/1 Terminating 0 2h
zk-1 0/1 Pending 0 0s
zk-1 0/1 Pending 0 0s
Continue to watch the Pods of the StatefulSet, and drain the node on which zk-2 is scheduled.
kubectl drain $(kubectl get pod zk-2 --template {{.spec.nodeName}}) --ignore-daemonsets --force --delete-emptydir-data
The output is similar to this:
node "kubernetes-node-i4c4" cordoned
WARNING: Deleting pods not managed by ReplicationController, ReplicaSet, Job, or
DaemonSet: fluentd-cloud-logging-kubernetes-node-i4c4, kube-proxy-kubernetes-node-i4c4;
Ignoring DaemonSet-managed pods: node-problem-detector-v0.1-dyrog
WARNING: Ignoring DaemonSet-managed pods: node-problem-detector-v0.1-dyrog; Deleting
pods not managed by ReplicationController, ReplicaSet, Job, or DaemonSet: fluentd-cloud-
logging-kubernetes-node-i4c4, kube-proxy-kubernetes-node-i4c4
There are pending pods when an error occurred: Cannot evict pod as it would violate the pod's
disruption budget.
pod/zk-2
Use CTRL-C to terminate kubectl.
You cannot drain the third node because evicting zk-2 would violate zk-pdb. However, the
node will remain cordoned.
Use zkCli.sh to retrieve the value you entered during the sanity test from zk-0.
kubectl exec zk-0 -- zkCli.sh get /hello
The service is still available because its PodDisruptionBudget is respected.
WatchedEvent state:SyncConnected type:None path:null
world
cZxid = 0x200000002
ctime = Wed Dec 07 00:08:59 UTC 2016
mZxid = 0x200000002
mtime = Wed Dec 07 00:08:59 UTC 2016
pZxid = 0x200000002
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 5
numChildren = 0
Use kubectl uncordon to uncordon the first node.
kubectl uncordon kubernetes-node-pb41
The output is similar to this:
node "kubernetes-node-pb41" uncordoned
zk-1 is rescheduled on this node. Wait until zk-1 is Running and Ready.
kubectl get pods -w -l app=zk
The output is similar to this:
NAME READY STATUS RESTARTS AGE
zk-0 1/1 Running 2 1h
zk-1 1/1 Running 0 1h
zk-2 1/1 Running 0 1h
NAME READY STATUS RESTARTS AGE
zk-0 1/1 Terminating 2 2h
zk-0 0/1 Terminating 2 2h
zk-0 0/1 Terminating 2 2h
zk-0 0/1 Terminating 2 2h
zk-0 0/1 Pending 0 0s
zk-0 0/1 Pending 0 0s
zk-0 0/1 ContainerCreating 0 0s
zk-0 0/1 Running 0 51s
zk-0 1/1 Running 0 1m
zk-1 1/1 Terminating 0 2h
zk-1 0/1 Terminating 0 2h
zk-1 0/1 Terminating 0 2h
zk-1 0/1 Terminating 0 2h
zk-1 0/1 Pending 0 0s
zk-1 0/1 Pending 0 0s
zk-1 0/1 Pending 0 12m
zk-1 0/1 ContainerCreating 0 12m
zk-1 0/1 Running 0 13m
zk-1 1/1 Running 0 13m
Attempt to drain the node on which zk-2 is scheduled.
kubectl drain $(kubectl get pod zk-2 --template {{.spec.nodeName}}) --ignore-daemonsets --force --delete-emptydir-data
The output is similar to this:
node "kubernet | 8,534 |
es-node-i4c4" already cordoned
WARNING: Deleting pods not managed by ReplicationController, ReplicaSet, Job, or
DaemonSet: fluentd-cloud-logging-kubernetes-node-i4c4, kube-proxy-kubernetes-node-i4c4;
Ignoring DaemonSet-managed pods: node-problem-detector-v0.1-dyrog
pod "heapster-v1.2.0-2604621511-wht1r" deleted
pod "zk-2" deleted
node "kubernetes-node-i4c4" drained
This time kubectl drain succeeds.
Uncordon the second node to allow zk-2 to be rescheduled.
kubectl uncordon kubernetes-node-ixsl
The output is similar to this:
node "kubernetes-node-ixsl" uncordoned
You can use kubectl drain in conjunction with PodDisruptionBudgets to ensure that your
services remain available during maintenance. If drain is used to cordon nodes and evict pods
prior to taking the node offline for maintenance, services that express a disruption budget will
have that budget respected. You should always allocate additional capacity for critical services
so that their Pods can be immediately rescheduled.
Cleaning up
Use kubectl uncordon to uncordon all the nodes in your cluster.
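A sketch that uncordons every node at once (reasonable on a dedicated test cluster; on a shared cluster, uncordon only the nodes you cordoned yourself):
kubectl get nodes -o name | xargs -r -I {} kubectl uncordon {}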
You must delete the persistent storage media for the PersistentVolumes used in this
tutorial. Follow the necessary steps, based on your environment, storage configuration,
and provisioning method, to ensure that all storage is reclaimed.
Services
Connecting Applications with Services
Using Source IP
Explore Termination Behavior for Pods And Their Endpoints
Connecting Applications with Services
The Kubernetes model for connecting containers
Now that you have a continuously running, replicated application you can expose it on a
network.
Kubernetes assumes that pods can communicate with other pods, regardless of which host they
land on. Kubernetes gives every pod its own cluster-private IP address, so you do not need to
explicitly create links between pods or map container ports to host ports. This means that
containers within a Pod can all reach each other's ports on localhost, and all pods in a cluster
can see each other without NAT. The rest of this document elaborates on how you can run
reliable services on such a networking model.
This tutorial uses a simple nginx web server to demonstrate the concept.
Exposing pods to the cluster
We did this in a previous example, but let's do it once again and focus on the networking
perspective. Create an nginx Pod, and note that it has a container port specification:
service/networking/run-my-nginx.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  selector:
    matchLabels:
      run: my-nginx
  replicas: 2
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      containers:
      - name: my-nginx
        image: nginx
        ports:
        - containerPort: 80
This makes it accessible from any node in your cluster. Check the nodes the Pod is running on:
kubectl apply -f ./run-my-nginx.yaml
kubectl get pods -l run=my-nginx -o wide
NAME READY STATUS RESTARTS AGE IP NODE
my-nginx-3800858182-jr4a2 1/1 Running 0 13s 10.244.3.4 kubernetes-minion-905m
my-nginx-3800858182-kna2y 1/1 Running 0 13s 10.244.2.5 kubernetes-minion-ljyd
Check your pods' IPs:
kubectl get pods -l run=my-nginx -o custom-columns=POD_IP:.status.podIPs
POD_IP
[map[ip:10.244.3.4]]
[map[ip:10.244.2.5]]
You should be able to ssh into any node in your cluster and use a tool such as curl to make
queries against both IPs. Note that the containers are not using port 80 on the node, nor are
there any special NAT rules to route traffic to the pod. This means you can run multiple nginx
pods on the same node all using the same containerPort , and access them from any other pod
or node in your cluster using the assigned IP address for the pod. If you want to arrange for a
specific port on the host Node to be forwarded to backing Pods, you can - but the networking
model should mean that you do not need to do so.
You can read more about the Kubernetes Networking Model if you're curious.
Creating a Service
So we have pods running nginx in a flat, cluster-wide address space. In theory, you could talk
to these pods directly, but what happens when a node dies? The pods die with it, and the
ReplicaSet inside the Deployment will create new ones, with different IPs. This is the problem a
Service solves.
A Kubernetes Service is an abstraction which defines a logical set of Pods running somewhere
in your cluster, that all provide the same functionality. When created, each Service is assigned a
unique IP address (also called clusterIP). This address is tied to the lifespan of the Service, and
will not change while the Service is alive. Pods can be configured to talk to the Service, and
know that communication to the Service will be automatically load-balanced out to some pod
that is a member of the Service.
You can create a Service for your 2 nginx replicas with kubectl expose :
kubectl expose deployment/my-nginx
service/my-nginx exposed
This is equivalent to kubectl apply -f the following yaml:
service/networking/nginx-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: my-nginx
  labels:
    run: my-nginx
spec:
  ports:
  - port: 80
    protocol: TCP
  selector:
    run: my-nginx
This specification will create a Service which targets TCP port 80 on any Pod with the run: my-nginx
label, and expose it on an abstracted Service port (targetPort: is the port the container
accepts traffic on, port: is the abstracted Service port, which can be any port other pods use to
access the Service). View the Service API object to see the list of supported fields in the service
definition. Check your Service:
kubectl get svc my-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
my-nginx ClusterIP 10.0.162.149 <none> 80/TCP 21s
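Because the Service above does not set targetPort, it defaults to the value of port. A sketch of an assumed variation (not part of this tutorial's manifests) that makes the distinction explicit, exposing the Service on port 8080 while still sending traffic to the container's port 80:
ports:
- port: 8080
  targetPort: 80
  protocol: TCP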
As mentioned previously, a Service is backed by a group of Pods. These Pods are exposed
through EndpointSlices. The Service's selector will be evaluated continuously and the results
will be POSTed to an EndpointSlice that is connected to the Service using labels. When a Pod
dies, it is automatically removed from the EndpointSlices that contain it as an endpoint. New
Pods that match the Service's selector will automatically get added to an EndpointSlice for that
Service. Check the endpoints, and note that the IPs are the same as the Pods created in the first
step:
kubectl describe svc my-nginx
Name: my-nginx
Namespace: default
Labels: run=my-nginx
Annotations: <none>
Selector: run=my-nginx
Type: ClusterIP
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.0.162.149
IPs: 10.0.162.149
Port: <unset> 80/TCP
TargetPort: 80/TCP
Endpoints: 10.244.2.5:80,10.244.3.4:80
Session Affinity: None
Events: <none>
kubectl get endpointslices -l kubernetes.io/service-name=my-nginx
NAME ADDRESSTYPE PORTS ENDPOINTS AGE
my-nginx-7vzhx IPv4 80 10.244.2.5,10.244.3.4 21s
You should now be able to curl the nginx Service on <CLUSTER-IP>:<PORT> from any node in
your cluster. Note that the Service IP is completely virtual; it never hits the wire. If you're
curious about how this works you can read more about the service proxy .
Accessing the Service
Kubernetes supports 2 primary modes of finding a Service - environment variables and DNS.
The former works out of the box while the latter requires the CoreDNS cluster addon .
Note: If the service environment variables are not desired (because of possible clashes with
expected program variables, too many variables to process, only using DNS, etc.) you can disable
this mode by setting the enableServiceLinks flag to false on the pod spec.
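A minimal sketch of a Pod spec with the variables disabled (the Pod name is hypothetical):
apiVersion: v1
kind: Pod
metadata:
  name: nginx-no-links
spec:
  enableServiceLinks: false
  containers:
  - name: my-nginx
    image: nginx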
Environment Variables
When a Pod runs on a Node, the kubelet adds a set of environment variables for each active
Service. This introduces an ordering problem. To see why, inspect the environment of your
running nginx Pods (your Pod name will be different):
kubectl exec my-nginx-3800858182-jr4a2 -- printenv | grep SERVICE
KUBERNETES_SERVICE_HOST=10.0.0.1
KUBERNETES_SERVICE_PORT=443
KUBERNETES_SERVICE_PORT_HTTPS=443
Note there's no mention of your Service. This is because you created the replicas before the
Service. Another disadvantage of doing this is that the scheduler might put both Pods on the
same machine, which will take your entire Service down if it dies. We can do this the right way
by killing the 2 Pods and waiting for the Deployment to recreate them. This time the Service
exists before the replicas. This will give you scheduler-level Service spreading of your Pods
(provided all your nodes have equal capacity), as well as the right environment variables:
kubectl scale deployment my-nginx --replicas=0; kubectl scale deployment my-nginx --replicas=2;
kubectl get pods -l run=my-nginx -o wide
NAME READY STATUS RESTARTS AGE IP NODE
my-nginx-3800858182-e9ihh 1/1 Running 0 5s 10.244.2.7 kubernetes-minion-ljyd
my-nginx-3800858182-j4rm4 1/1 Running 0 5s 10.244.3.8 kubernetes-minion-905m
You may notice that the pods have different names, since they are killed and recreated.
kubectl exec my-nginx-3800858182-e9ihh -- printenv | grep SERVICE
KUBERNETES_SERVICE_PORT=443
MY_NGINX_SERVICE_HOST=10.0.162.149
KUBERNETES_SERVICE_HOST=10.0.0.1
MY_NGINX_SERVICE_PORT=80
KUBERNETES_SERVICE_PORT_HTTPS=443
DNS
Kubernetes offers a DNS cluster addon Service that automatically assigns DNS names to other
Services. You can check if it's running on your cluster:
kubectl get services kube-dns --namespace=kube-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kube-dns ClusterIP 10.0.0.10 <none> 53/UDP,53/TCP 8m
The rest of this section will assume you have a Service with a long lived IP (my-nginx), and a
DNS server that has assigned a name to that IP. Here we use the CoreDNS cluster addon
(application name kube-dns ), so you can talk to the Service from any pod in your cluster using
standard methods (e.g. gethostbyname() ). If CoreDNS isn't running, you can enable it referring
to the CoreDNS README or Installing CoreDNS. Let's run another curl application to test this:
kubectl run curl --image=radial/busyboxplus:curl -i --tty --rm
Waiting for pod default/curl-131556218-9fnch to be running, status is Pending, pod ready: false
Hit enter for command prompt
Then, hit enter and run nslookup my-nginx :
[ root@curl-131556218-9fnch:/ ]$ nslookup my-nginx
Server: 10.0.0.10
Address 1: 10.0.0.10
Name: my-nginx
Address 1: 10.0.162.149
Securing the Service
Till now we have only accessed the nginx server from within the cluster. Before exposing the
Service to the internet, you want to make sure the communication channel is secure. For this,
you will need:
Self signed certificates for https (unless you already have an identity certificate)
An nginx server configured to use the certificates
A secret that makes the certificates accessible to pods
You can acquire all these from the nginx https example . This requires having go and make tools
installed. If you don't want to install those, then follow the manual steps later. In short:
make keys KEY=/tmp/nginx.key CERT=/tmp/nginx.crt
kubectl create secret tls nginxsecret --key /tmp/nginx.key --cert /tmp/nginx.crt
secret/nginxsecret created
kubectl get secrets
NAME TYPE DATA AGE
nginxsecret kubernetes.io/tls 2 1m
And also the configmap:
kubectl create configmap nginxconfigmap --from-file=default.conf
You can find an example for default.conf in the Kubernetes examples project repo .
configmap/nginxconfigmap created
kubectl get configmaps
NAME DATA AGE
nginxconfigmap 1 114s
You can view the details of the nginxconfigmap ConfigMap using the following command:
kubectl describe configmap nginxconfigmap
The output is similar to:
Name: nginxconfigmap
Namespace: default
Labels: <none>
Annotations: <none>
Data
====
default.conf:
----
server {
listen 80 default_server;
listen [::]:80 default_server ipv6only=on;
listen 443 ssl;
root /usr/share/nginx/html;
index index.html;
server_name localhost;
ssl_certificate /etc/nginx/ssl/tls.crt;
ssl_certificate_key /etc/nginx/ssl/tls.key;
location / {
try_files $uri $uri/ =404;
}
}
BinaryData
====
Events: <none>
Following are the manual steps to follow in case you run into problems running make (on
Windows, for example):
# Create a public private key pair
openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /d/tmp/nginx.key -out /d/tmp/
nginx.crt -subj "/CN=my-nginx/O=my-nginx"
# Convert the keys to base64 encoding
cat /d/tmp/nginx.crt | base64
cat /d/tmp/nginx.key | base64
Use the output from the previous commands to create a yaml file as follows. The base64
encoded value should all be on a single line.
apiVersion : "v1"
kind: "Secret"
metadata :
name : "nginxsecret"
namespace : "default"
type: kubernetes.io/tls
data:
tls.crt : "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURIekNDQWdlZ0F3SUJBZ0lKQUp5
M3lQK0pzMlpJTUEwR0NTcUdTSWIzRFFFQkJRVUFNQ1l4RVRBUEJnTlYKQkFNVENHNW5hV
zU0YzNaak1SRXdEd1lEVlFRS0V3aHVaMmx1ZUhOMll6QWVGdzB4TnpFd01qWXdOekEzTVRK
YQpGdzB4T0RFd01qWXdOekEzTVRKYU1DWXhFVEFQQmdOVkJBTVRDRzVuYVc1NGMzWm
pNUkV3RHdZRFZRUUtFd2h1CloybHVlSE4yWXpDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFB
RGdnRVBBRENDQVFvQ2dnRUJBSjFxSU1SOVdWM0IKMlZIQlRMRmtobDRONXljMEJxYUhIQ
ktMSnJMcy8vdzZhU3hRS29GbHlJSU94NGUrMlN5ajBFcndCLzlYTnBwbQppeW1CL3JkRldkOXg
5UWhBQUxCZkVaTmNiV3NsTVFVcnhBZW50VWt1dk1vLzgvMHRpbGhjc3paenJEYVJ4NEo5C
i82UVRtVVI3a0ZTWUpOWTVQZkR3cGc3dlVvaDZmZ1Voam92VG42eHNVR0M2QURVODBp
NXFlZWhNeVI1N2lmU2YKNHZpaXdIY3hnL3lZR1JBRS9mRTRqakxCdmdONjc2SU90S01rZXV3
R0ljNDFhd05tNnNTSzRqYUNGeGpYSnZaZQp2by9kTlEybHhHWCtKT2l3SEhXbXNhdGp4WTR
aNVk3R1ZoK0QrWnYvcW1mMFgvbVY0Rmo1NzV3ajFMWVBocWtsCmdhSXZYRyt4U1FVQ0F3
RUFBYU5RTUU0d0hRWURWUjBPQkJZRUZPNG9OWkI3YXc1OUlsYkROMzhIYkduYnhFVjcKT
UI4R0ExVWRJd1FZTUJhQUZPNG9OWkI3YXc1OUlsYkROMzhIYkduYnhFVjdNQXdHQTFVZE
V3UUZNQU1CQWY4dwpEUVlKS29aSWh2Y05BUUVGQlFBRGdnRUJBRVhTMW9FU0lFaXdyM
DhWcVA0K2NwTHI3TW5FMTducDBvMm14alFvCjRGb0RvRjdRZnZqeE04Tzd2TjB0clcxb2pGS
W0vWDE4ZnZaL3k4ZzVaWG40Vm8zc3hKVmRBcStNZC9jTStzUGEKNmJjTkNUekZqeFpUV0U
rKzE5NS9zb2dmOUZ3VDVDK3U2Q3B5N0M3MTZvUXRUakViV05VdEt4cXI0Nk1OZWNCMAp
wRFhWZmdWQTRadkR4NFo3S2RiZDY5eXM3OVFHYmg5ZW1PZ05NZFlsSUswSGt0ejF5WU4v
bVpmK3FqTkJqbWZjCkNnMnlwbGQ0Wi8rUUNQZjl3SkoybFIrY2FnT0R4elBWcGxNSEcybzgvT
HFDdnh6elZPUDUxeXdLZEtxaUMwSVEKQ0I5T2wwWW5scE9UNEh1b2hSUzBPOStlMm9KdF
ZsNUIyczRpbDlhZ3RTVXFxUlU9Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K"
tls.key : "LS0tLS1CRUdJTiBQUklWQVRFIEtFWS0tLS0tCk1JSUV2UUlCQURBTkJna3Foa2lHOXc
wQkFRRUZBQVNDQktjd2dnU2pBZ0VBQW9JQkFRQ2RhaURFZlZsZHdkbFIKd1V5eFpJWmVE
ZWNuTkFhbWh4d1NpeWF5N1AvOE9ta3NVQ3FCWmNpQ0RzZUh2dGtzbzlCSzhBZi9WemFh
Wm9zcApnZjYzUlZuZmNmVUlRQUN3WHhHVFhHMXJKVEVGSzhRSHA3VkpMcnpLUC9QO
UxZcFlYTE0yYzZ3MmtjZUNmZitrCkU1bEVlNUJVbUNUV09UM3c4S1lPNzFLSWVuNEZJWTZ
MMDUrc2JGQmd1Z0ExUE5JdWFubm9UTWtlZTRuMG4rTDQKb3NCM01ZUDhtQmtRQlAzeE9
JNHl3YjREZXUraURyU2pKSHJzQmlIT05Xc0RadXJFaXVJMmdoY1kxeWIyWHI2UAozVFVOcNSbC9pVG9zQngxcHJHclk4V09HZVdPeGxZZmcvbWIvNnBuOUYvNWxlQlkrZStjSTlTMkQ0YX
BKWUdpCkwxeHZzVWtGQWdNQkFBRUNnZ0VBZFhCK0xkbk8ySElOTGo5bWRsb25IUGlHW
WVzZ294RGQwci9hQ1Zkank4dlEKTjIwL3FQWkUxek1yall6Ry9kVGhTMmMwc0QxaTBXSjdw
R1lGb0xtdXlWTjltY0FXUTM5SjM0VHZaU2FFSWZWNgo5TE1jUHhNTmFsNjRLMFRVbUFQZy
tGam9QSFlhUUxLOERLOUtnNXNrSE5pOWNzMlY5ckd6VWlVZWtBL0RBUlBTClI3L2ZjUFBac
DRuRWVBZmI3WTk1R1llb1p5V21SU3VKdlNyblBESGtUdW1vVlVWdkxMRHRzaG9reUxiTWV
tN3oKMmJzVmpwSW1GTHJqbGtmQXlpNHg0WjJrV3YyMFRrdWtsZU1jaVlMbjk4QWxiRi9DS
mRLM3QraTRoMTVlR2ZQegpoTnh3bk9QdlVTaDR2Q0o3c2Q5TmtEUGJvS2JneVVHOXBYamZ
hRGR2UVFLQmdRRFFLM01nUkhkQ1pKNVFqZWFKClFGdXF4cHdnNzhZTjQyL1NwenlUYmtG
cVFoQWtyczJxWGx1MDZBRzhrZzIzQkswaHkzaE9zSGgxcXRVK3NHZVAKOWRERHBsUWV0
ODZsY2FlR3hoc0V0L1R6cEdtNGFKSm5oNzVVaTVGZk9QTDhPTm1FZ3MxMVRhUldhNzZxelR
yMgphRlpjQ2pWV1g0YnRSTHVwSkgrMjZnY0FhUUtCZ1FEQmxVSUUzTnNVOFBBZEYvL25sQ
VB5VWs1T3lDdWc3dmVyClUycXlrdXFzYnBkSi9hODViT1JhM05IVmpVM25uRGpHVHBWaE9J
eXg5TEFrc2RwZEFjVmxvcG9HODhXYk9lMTAKMUdqbnkySmdDK3JVWUZiRGtpUGx1K09IYn
RnOXFYcGJMSHBzUVpsMGhucDBYSFNYVm9CMUliQndnMGEyOFVadApCbFBtWmc2d1BRS0
JnRHVIUVV2SDZHYTNDVUsxNFdmOFhIcFFnMU16M2VvWTBPQm5iSDRvZUZKZmcraEppS
XlnCm9RN3hqWldVR3BIc3AyblRtcHErQWlSNzdyRVhsdlhtOElVU2FsbkNiRGlKY01Pc29RdFBZ
NS9NczJMRm5LQTQKaENmL0pWb2FtZm1nZEN0ZGtFMXNINE9MR2lJVHdEbTRpb0dWZGIw
MllnbzFyb2htNUpLMUI3MkpBb0dBUW01UQpHNDhXOTVhL0w1eSt5dCsyZ3YvUHM2VnBvMj
ZlTzRNQ3lJazJVem9ZWE9IYnNkODJkaC8xT2sybGdHZlI2K3VuCnc1YytZUXRSTHlhQmd3MUt
pbGhFZDBKTWU3cGpUSVpnQWJ0LzVPbnlDak9OVXN2aDJjS2lrQ1Z2dTZsZlBjNkQKckliT2ZI
aHhxV0RZK2Q1TGN1YSt2NzJ0RkxhenJsSlBsRzlOZHhrQ2dZRUF5elIzT3UyMDNRVVV6bUlCR
kwzZAp4Wm5XZ0JLSEo3TnNxcGFWb2RjL0d5aGVycjFDZzE2MmJaSjJDV2RsZkI0VEdtUjZZdm
xTZEFOOFRwUWhFbUtKCnFBLzVzdHdxNWd0WGVLOVJmMWxXK29xNThRNTBxMmk1NVd
UTThoSDZhTjlaMTltZ0FGdE5VdGNqQUx2dFYxdEYKWSs4WFJkSHJaRnBIWll2NWkwVW1Vb
Gc9Ci0tLS0tRU5EIFBSSVZBVEUgS0VZLS0tLS0K"
Now create the secrets using the file:
kubectl apply -f nginxsecrets.yaml
kubectl get secrets
NAME TYPE DATA AGE
nginxsecret kubernetes.io/tls 2 1m
Now modify your nginx replicas to start an https server using the certificate in the secret, and
the Service, to expose both ports (80 and 443):
service/networking/nginx-secure-app.yaml
apiVersion: v1
kind: Service
metadata:
  name: my-nginx
  labels:
    run: my-nginx
spec:
  type: NodePort
  ports:
  - port: 8080
    targetPort: 80
    protocol: TCP
    name: http
  - port: 443
    protocol: TCP
    name: https
  selector:
    run: my-nginx
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  selector:
    matchLabels:
      run: my-nginx
  replicas: 1
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      volumes:
      - name: secret-volume
        secret:
          secretName: nginxsecret
      - name: configmap-volume
        configMap:
          name: nginxconfigmap
      containers:
      - name: nginxhttps
        image: bprashanth/nginxhttps:1.0
        ports:
        - containerPort: 443
        - containerPort: 80
        volumeMounts:
        - mountPath: /etc/nginx/ssl
          name: secret-volume
        - mountPath: /etc/nginx/conf.d
          name: configmap-volume
Noteworthy points about the nginx-secure-app manifest:
It contains both Deployment and Service specification in the same file.
The nginx server serves HTTP traffic on port 80 and HTTPS traffic on 443, and the nginx
Service exposes both ports.
Each container has access to the keys through a volume mounted at /etc/nginx/ssl . This is
set up before the nginx server is started.
kubectl delete deployments,svc my-nginx; kubectl create -f ./nginx-secure-app.yaml
At this point you can reach the nginx server from any node.
kubectl get pods -l run=my-nginx -o custom-columns=POD_IP:.status.podIPs
POD_IP
[map[ip:10.244.3.5]]
node $ curl -k https://10.244.3.5
...
<h1>Welcome to nginx!</h1>
Note how we supplied the -k parameter to curl in the last step; this is because we don't know
anything about the pods running nginx at certificate generation time, so we have to tell curl to
ignore the CName mismatch. By creating a Service we linked the CName used in the certificate
with the actual DNS name used by pods during Service lookup. Let's test this from a pod (the
same secret is being reused for simplicity, the pod only needs nginx.crt to access the Service):
service/networking/curlpod.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: curl-deployment
spec:
  selector:
    matchLabels:
      app: curlpod
  replicas: 1
  template:
    metadata:
      labels:
        app: curlpod
    spec:
      volumes:
      - name: secret-volume
        secret:
          secretName: nginxsecret
      containers:
      - name: curlpod
        command:
        - sh
        - -c
        - while true; do sleep 1; done
        image: radial/busyboxplus:curl
        volumeMounts:
        - mountPath: /etc/nginx/ssl
          name: secret-volume
kubectl apply -f ./curlpod.yaml
kubectl get pods -l app=curlpod
NAME READY STATUS RESTARTS AGE
curl-deployment-1515033274-1410r 1/1 Running 0 1m
kubectl exec curl-deployment-1515033274-1410r -- curl https://my-nginx --cacert /etc/nginx/ssl/tls.crt
...
<title>Welcome to nginx!</title>
...
Exposing the Service
For some parts of your applications you may want to expose a Service onto an external IP
address. Kubernetes supports two ways of doing this: NodePorts and LoadBalancers. The
Service created in the last section already used NodePort , so your nginx HTTPS replica is ready
to serve traffic on the internet if your node has a public IP.
kubectl get svc my-nginx -o yaml | grep nodePort -C 5
uid: 07191fb3-f61a-11e5-8ae5-42010af00002
spec:
clusterIP: 10.0.162.149
ports:
- name: http
nodePort: 31704
port: 8080
protocol: TCP
targetPort: 80
- name: https
nodePort: 32453
port: 443
protocol: TCP
targetPort: 443
selector:
run: my-nginx
kubectl get nodes -o yaml | grep ExternalIP -C 1
- address: 104.197.41.11
type: ExternalIP
allocatable:
--
- address: 23.251.152.56
type: ExternalIP
allocatable:
...
$ curl https://<EXTERNAL-IP>:<NODE-PORT> -k
...
<h1>Welcome to nginx!</h1>
Let's now recreate the Service to use a cloud load balancer. Change the Type of my-nginx
Service from NodePort to LoadBalancer :
kubectl edit svc my-nginx
kubectl get svc my-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
my-nginx LoadBalancer 10.0.162.149 xx.xxx.xxx.xxx 8080:30163/TCP 21s
curl https://<EXTERNAL-IP> -k
...
<title>Welcome to nginx!</title>
The IP address in the EXTERNAL-IP column is the one that is available on the public internet.
The CLUSTER-IP is only available inside your cluster/private cloud network.
Note that on AWS, type LoadBalancer creates an ELB, which uses a (long) hostname, not an IP.
It's too long to fit in the standard kubectl get svc output, in fact, so you'll need to do kubectl
describe service my-nginx to see it. You'll see something like this:
kubectl describe service my-nginx
...
LoadBalancer Ingress: a320587ffd19711e5a37606cf4a74574-1142138393.us-east-1.elb.amazonaws.com
...
What's next
Learn more about Using a Service to Access an Application in a Cluster
Learn more about Connecting a Front End to a Back End Using a Service
Learn more about Creating an External Load Balancer
Using Source IP
Applications running in a Kubernetes cluster find and communicate with each other, and the
outside world, through the Service abstraction. This document explains what happens to the
source IP of packets sent to different types of Services, and how you can toggle this behavior
according to your needs.
Before you begin
Terminology
This document makes use of the following terms:
NAT
Network address translation
Source NAT
Replacing the source IP on a packet; in this page, that usually means replacing with the IP
address of a node.
Destination NAT
Replacing the destination IP on a packet; in this page, that usually means replacing with
the IP address of a Pod.
VIP
A virtual IP address, such as the one assigned to every Service in Kubernetes
kube-proxy
A network daemon that orchestrates Service VIP management on every node
Prerequisites
You need to have a Kubernetes cluster, and the kubectl command-line tool must be configured
to communicate with your cluster. It is recommended to run this tutorial on a cluster with at
least two nodes that are not acting as control plane hosts. If you do not already have a cluster,
you can create one by using minikube or you can use one of these Kubernetes playgrounds:
Killercoda
Play with Kubernetes
The examples use a small nginx webserver that echoes back the source IP of requests it receives
through an HTTP header. You can create it as follows:
Note: The image in the following command only runs on AMD64 architectures.
kubectl create deployment source-ip-app --image=registry.k8s.io/echoserver:1.4
The output is:
deployment.apps/source-ip-app created
Objectives
Expose a simple application through various types of Services
Understand how each Service type handles source IP NAT
Understand the tradeoffs involved in preserving source IP
Source IP for Services with Type=ClusterIP
Packets sent to ClusterIP from within the cluster are never source NAT'd if you're running
kube-proxy in iptables mode (the default). You can query the kube-proxy mode by fetching
http://localhost:10249/proxyMode on the node where kube-proxy is running.
kubectl get nodes
The output is similar to this:
NAME STATUS ROLES AGE VERSION
kubernetes-node-6jst Ready <none> 2h v1.13.0
kubernetes-node-cx31 Ready <none> 2h v1.13.0
kubernetes-node-jj1t Ready <none> 2h v1.13.0
Get the proxy mode on one of the nodes (kube-proxy listens on port 10249):
# Run this in a shell on the node you want to query.
curl http://localhost:10249/proxyMode
The output is:
iptables
You can test source IP preservation by creating a Service over the source IP app:
kubectl expose deployment source-ip-app --name=clusterip --port=80 --target-port=8080
The output is:
service/clusterip exposed
kubectl get svc clusterip
The output is similar to:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
clusterip ClusterIP 10.0.170.92 <none> 80/TCP 51s
And hitting the ClusterIP from a pod in the same cluster:
kubectl run busybox -it --image=busybox:1.28 --restart=Never --rm
The output is similar to this:
Waiting for pod default/busybox to be running, status is Pending, pod ready: false
If you don't see a command prompt, try pressing enter.
You can then run a command inside that Pod:
# Run this inside the terminal from "kubectl run"
ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
3: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1460 qdisc noqueue
link/ether 0a:58:0a:f4:03:08 brd ff:ff:ff:ff:ff:ff
inet 10.244.3.8/24 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::188a:84ff:feb0:26a5/64 scope link
valid_lft forever preferred_lft forever
...then use wget to query the local webserver
# Replace "10.0.170.92" with the IPv4 address of the Service named "clusterip"
wget -qO - 10.0.170.92
CLIENT VALUES:
client_address=10.244.3.8
command=GET
...
The client_address is always the client pod's IP address, whether the client pod and server pod
are in the same node or in different nodes.
Source IP for Services with Type=NodePort
Packets sent to Services with Type=NodePort are source NAT'd by default. You can test this by
creating a NodePort Service:
kubectl expose deployment source-ip-app --name=nodeport --port=80 --target-port=8080 --type=NodePort
The output is:
service/nodeport exposed
NODEPORT=$(kubectl get -o jsonpath="{.spec.ports[0].nodePort}" services nodeport)
NODES=$(kubectl get nodes -o jsonpath='{ $.items[*].status.addresses[?(@.type=="InternalIP")].address }')
If you're running on a cloud provider, you may need to open up a firewall-rule for the
nodes:nodeport reported above. Now you can try reaching the Service from outside the cluster
through the node port allocated above.
for node in $NODES; do curl -s $node:$NODEPORT | grep -i client_address; done
The output is similar to:
client_address=10.180.1.1
client_address=10.240.0.5
client_address=10.240.0.3
Note that these are not the correct client IPs; they're cluster-internal IPs. This is what happens:
Client sends packet to node2:nodePort
node2 replaces the source IP address (SNAT) in the packet with its own IP address
node2 replaces the destination IP on the packet with the pod IP
packet is routed to node 1, and then to the endpoint
the pod's reply is routed back to node2
the pod's reply is sent back to the client
Visually:
source IP nodeport figure 01
Figure. Source IP Type=NodePort using SNAT
To avoid this, Kubernetes has a feature to preserve the client source IP. If you set
service.spec.externalTrafficPolicy to the value Local, kube-proxy only proxies requests to
local endpoints, and does not forward traffic to other nodes. This approach preserves the
original source IP address. If there are no local endpoints, packets sent to the node are dropped,
so you can rely on the correct source-ip in any packet processing rules you might apply to a
packet that makes it through to the endpoint.
Set the service.spec.externalTrafficPolicy field as follows:
kubectl patch svc nodeport -p '{"spec":{"externalTrafficPolicy":"Local"}}'
The output is:
service/nodeport patched
Now, re-run the test:
for node in $NODES; do curl --connect-timeout 1 -s $node:$NODEPORT | grep -i client_address; done
The output is similar to:
client_address=198.51.100.79
Note that you only got one reply, with the right client IP, from the one node on which the
endpoint pod is running.
This is what happens:
client sends packet to node2:nodePort , which doesn't have any endpoints
packet is dropped
client sends packet to node1:nodePort , which does have endpoints
node1 routes packet to endpoint with the correct source IP
Visually:
source IP nodeport figure 02
Figure. Source IP Type=NodePort preserves client source IP address
Source IP for Services with Type=LoadBalancer
Packets sent to Services with Type=LoadBalancer are source NAT'd by default, because all
schedulable Kubernetes nodes in the Ready state are eligible for load-balanced traffic. So if
packets arrive at a node without an endpoint, the system proxies it to a node with an endpoint,
replacing the source IP on the packet with the IP of the node (as described in the previous
section).
You can test this by exposing the source-ip-app through a load balancer:
kubectl expose deployment source-ip-app --name=loadbalancer --port=80 --target-port=8080 --type=LoadBalancer
The output is:
service/loadbalancer exposed
Print out the IP addresses of the Service:
kubectl get svc loadbalancer
The output is similar to this:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
loadbalancer LoadBalancer 10.0.65.118 203.0.113.140 80/TCP 5m
Next, send a request to this Service's external-ip:
curl 203.0.113.140
The output is similar to this:
CLIENT VALUES:
client_address=10.240.0.5
...
However, if you're running on Google Kubernetes Engine/GCE, setting the same service.spec.externalTrafficPolicy field to Local forces nodes without Service endpoints to remove themselves from the list of nodes eligible for load-balanced traffic by deliberately failing health checks.
Visually:
Figure: Source IP with externalTrafficPolicy
You can test this by setting the field:
kubectl patch svc loadbalancer -p '{"spec":{"externalTrafficPolicy":"Local"}}'
You should immediately see the service.spec.healthCheckNodePort field allocated by
Kubernetes:
kubectl get svc loadbalancer -o yaml | grep -i healthCheckNodePort
The output is similar to this:
healthCheckNodePort: 32122
The service.spec.healthCheckNodePort field points to a port on every node serving the health
check at /healthz . You can test this:
kubectl get pod -o wide -l app=source-ip-app
The output is similar to this:
NAME                            READY   STATUS    RESTARTS   AGE   IP             NODE
source-ip-app-826191075-qehz4   1/1     Running   0          20h   10.180.1.136   kubernetes-node-6jst
Use curl to fetch the /healthz endpoint on various nodes:
# Run this locally on a node you choose
curl localhost:32122/healthz
1 Service Endpoints found
On a different node you might get a different result:
# Run this locally on a node you choose
curl localhost:32122/healthz
No Service Endpoints Found
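Rather than logging in to each node, you can sketch the same probe from outside the cluster, assuming your firewall allows external traffic to the health check port; the loop reuses the NODES variable defined earlier and reads the port from the Service:

# Read the allocated health check port, then probe /healthz on every node
HEALTH_PORT=$(kubectl get svc loadbalancer -o jsonpath='{.spec.healthCheckNodePort}')
for node in $NODES; do
  echo "--- $node ---"
  curl --connect-timeout 1 -s "$node:$HEALTH_PORT/healthz" || echo "unreachable"
done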
A controller running on the control plane is responsible for allocating the cloud load balancer.
The same controller also allocates HTTP health checks pointing to this port/path on each node.
Wait about 10 seconds for the two nodes without endpoints to fail health checks, then use curl to query the IPv4 address of the load balancer:
curl 203.0.113.140
The output is similar to this:
CLIENT VALUES:
client_address=198.51.100.79
...
Cross-platform support
Only some cloud providers offer support for source IP preservation through Services with Type=LoadBalancer. The cloud provider you're running on might fulfill the request for a load balancer in a few different ways:
1. With a proxy that terminates the client connection and opens a new connection to your nodes/endpoints. In such cases the source IP will always be that of the cloud LB, not that of the client.
2. With a packet forwarder, such that requests from the client sent to the load balancer VIP end up at the node with the source IP of the client, not an intermediate proxy.
Load balancers in the first category must use an agreed-upon protocol between the load balancer and backend to communicate the true client IP, such as the HTTP Forwarded or X-FORWARDED-FOR headers, or the proxy protocol. Load balancers in the second category can leverage the feature described above by creating an HTTP health check pointing at the port stored in the service.spec.healthCheckNodePort field on the Service.
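For a load balancer in the first category, you can check whether the true client IP is being relayed by inspecting what the echo server receives; a hedged sketch, assuming the example external IP from earlier and that source-ip-app echoes request headers back:

# Grep the echoed request for any forwarding header added by the proxy
curl -s 203.0.113.140 | grep -i forwarded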
Cleaning up
Delete the Services:
kubectl delete svc -l app=source-ip-app
Delete the Deployment, ReplicaSet and Pod:
kubectl delete deployment source-ip-app
What's next
- Learn more about connecting applications via services
- Read how to Create an External Load Balancer

Explore Termination Behavior for Pods And Their Endpoints
Once you have connected your application to a Service by following steps like those outlined in Connecting Applications with Services, you have a continuously running, replicated application that is exposed on a network. This tutorial helps you look at the termination flow for Pods and explore ways to implement graceful connection draining.
Termination process for Pods and their endpoints
There are often cases when you need to terminate a Pod, be it for an upgrade or a scale down. In order to improve application availability, it may be important to implement proper draining of active connections.
This tutorial explains the flow of Pod termination in connection with the corresponding
endpoint state and removal by using a simple nginx web server to demonstrate the concept.
Example flow with endpoint termination
The following is an example of the flow described in the Termination of Pods document.
Let's say you have a Deployment containing a single nginx replica (just for demonstration purposes) and a Service:
service/pod-with-graceful-termination.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      terminationGracePeriodSeconds: 120 # extra long grace period
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
        lifecycle:
          preStop:
            exec:
              # Real life termination may take any time up to terminationGracePeriodSeconds.
              # In this example - just hang around for at least the duration of terminationGracePeriodSeconds,
              # at 120 seconds the container will be forcibly terminated.
              # Note, all this time nginx will keep processing requests.
              command: ["/bin/sh", "-c", "sleep 180"]
service/explore-graceful-termination-nginx.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
Now create the Deployment (and its Pod) and the Service using the above files:
kubectl apply -f pod-with-graceful-termination.yaml
kubectl apply -f explore-graceful-termination-nginx.yaml
Once the Pod and Service are running, you can get the name of any associated EndpointSlices:
kubectl get endpointslice
The output is similar to this:
NAME ADDRESSTYPE PORTS ENDPOINTS AGE
nginx-service-6tjbr IPv4 80 10.12.1.199,10.12.1.201 22m
You can see its status, and validate that there is one endpoint registered:
kubectl get endpointslices -o json -l kubernetes.io/service-name =nginx-service
The output is similar to this:
{
    "addressType": "IPv4",
    "apiVersion": "discovery.k8s.io/v1",
    "endpoints": [
        {
            "addresses": [
                "10.12.1.201"
            ],
            "conditions": {
                "ready": true,
                "serving": true,
                "terminating": false
            },
            ...
Now let's terminate the Pod and validate that the Pod is being terminated respecting the
graceful termination period configuration:
kubectl delete pod nginx-deployment-7768647bf9-b4b9s
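As an aside, the grace period in the manifest can be overridden for a one-off deletion with kubectl's --grace-period flag; a minimal sketch (the Pod name below is whatever your Deployment generated):

# Optional: cut the 120-second budget down to 30 seconds for this one deletion
kubectl delete pod <pod-name> --grace-period=30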
List all Pods:
kubectl get pod | 8,581 |
The output is similar to this:
NAME READY STATUS RESTARTS AGE
nginx-deployment-7768647bf9-b4b9s 1/1 Terminating 0 4m1s
nginx-deployment-7768647bf9-rkxlw 1/1 Running 0 8s
You can see that the new pod got scheduled.
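If you want to watch the replacement happen live, kubectl can stream Pod updates (optional; press Ctrl-C to stop):

# Stream state changes for Pods matching the Deployment's label
kubectl get pods --watch -l app=nginx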
While the new endpoint is being created for the new Pod, the old endpoint is still around in the
terminating state:
kubectl get endpointslice -o json nginx-service-6tjbr
The output is similar to this:
{
    "addressType": "IPv4",
    "apiVersion": "discovery.k8s.io/v1",
    "endpoints": [
        {
            "addresses": [
                "10.12.1.201"
            ],
            "conditions": {
                "ready": false,
                "serving": true,
                "terminating": true
            },
            "nodeName": "gke-main-default-pool-dca1511c-d17b",
            "targetRef": {
                "kind": "Pod",
                "name": "nginx-deployment-7768647bf9-b4b9s",
                "namespace": "default",
                "uid": "66fa831c-7eb2-407f-bd2c-f96dfe841478"
            },
            "zone": "us-central1-c"
        },
        {
            "addresses": [
                "10.12.1.202"
            ],
            "conditions": {
                "ready": true,
                "serving": true,
                "terminating": false
            },
            "nodeName": "gke-main-default-pool-dca1511c-d17b",
            "targetRef": {
                "kind": "Pod",
                "name": "nginx-deployment-7768647bf9-rkxlw",
                "namespace": "default",
                "uid": "722b1cbe-dcd7-4ed4-8928-4a4d0e2bbe35"
            },
            "zone": "us-central1-c"
        }
    ],
    ...
This allows applications to communicate their state during termination, and clients (such as load balancers) to implement connection draining functionality. These clients may detect terminating endpoints and implement special logic for them.
In Kubernetes, endpoints that are terminating always have their ready condition set to false. This needs to happen for backward compatibility, so existing load balancers will not use them for regular traffic. If traffic draining on a terminating Pod is needed, the actual readiness can be checked via the serving condition.
When the Pod is deleted, the old endpoint will also be deleted.
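A drain-aware client could therefore select on the serving condition rather than ready. A hedged sketch using jq (assuming jq is installed) to list the addresses that can still answer in-flight requests:

# serving=true endpoints (including terminating ones) can still answer requests;
# ready=false merely removes them from new-traffic rotation
kubectl get endpointslices -l kubernetes.io/service-name=nginx-service -o json \
  | jq '[.items[].endpoints[] | select(.conditions.serving == true) | .addresses[0]]'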
What's next
- Learn how to Connect Applications with Services
- Learn more about Using a Service to Access an Application in a Cluster
- Learn more about Connecting a Front End to a Back End Using a Service
- Learn more about Creating an External Load Balancer