prompt (string, 65 to 38.7k chars) | response (string, 41 to 29.1k chars) |
---|---|
<p>My architecture looks like this:</p>
<p><a href="https://i.stack.imgur.com/vxm5o.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/vxm5o.jpg" alt="enter image description here" /></a></p>
<p>Here, the HTTPS requests first go to the route53 service for DNS resolution. Route53 forwards the request to the <a href="https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.2/guide/service/nlb/" rel="nofollow noreferrer">Network Load balancer</a>. This service redirects the traffic to HAProxy pods running inside a Kubernetes cluster.</p>
<p>The HAProxy servers are required to read a specific request header and, based on its value, route the traffic to the appropriate backend. To keep things simple, I have shown a single K8s backend cluster, but assume that there is more than one such backend cluster running.</p>
<p>Considering this architecture:</p>
<ul>
<li>What is the best place to perform <a href="https://security.stackexchange.com/questions/20803/how-does-ssl-tls-work">TLS termination</a>? Should we do it at NLB (green box) or implement it at HAProxy (Orange box)?</li>
<li>What are the advantages and disadvantages of each scenario?</li>
</ul>
| <p>Since you are using an NLB, you can also achieve <strong>end-to-end</strong> HTTPS; however, that forces the backend service (HAProxy in your case) to handle TLS as well.</p>
<p>You can terminate at the load-balancer level if you have multiple <strong>LBs</strong> backed by clusters; leveraging <strong>AWS Certificate Manager</strong> with the <strong>LB</strong> is an easy way to manage certificates across multiple setups.</p>
<p>There is no guarantee that someone who gets into your network cannot exploit a bug capable of intercepting traffic between services; the Software Defined Network (SDN) in your VPC is secure and protects against spoofing, but it is not an absolute guarantee.</p>
<p>So there is an advantage to using <strong>TLS/SSL</strong> inside the VPC as well.</p>
|
<p>I am trying to specify the windows location for the .kube/config file for the Go client application without specifying the absolute path.</p>
<pre class="lang-golang prettyprint-override"><code>kubeconfig := flag.String("kubeconfig", "%USERPROFILE%/.kube/config", "location to the Kube config file")
</code></pre>
<p>Output :</p>
<pre class="lang-golang prettyprint-override"><code>panic: runtime error: invalid memory address or nil pointer dereference
</code></pre>
<p>When I use <code>echo %USERPROFILE%</code> in a cmd, the output is <code>C:\Users\<username></code>, so I thought this was caused by the different usage of <code>\</code> and <code>/</code> in the code and the path.
I tried to specify the path using <code>\</code> instead of <code>/</code>, but that gives a syntax error.</p>
<p>Can anyone suggest a solution for using Windows environment variables to specify paths in a Go application?
Thanks in advance.</p>
| <p>The output of <code>flag.String</code> itself does not expand environment variables, but you can use <code>os.ExpandEnv</code> to do that. However, <code>os.ExpandEnv</code> requires you to use the Unix notation for environment variables, i.e. <code>$USERPROFILE</code> or <code>${USERPROFILE}</code>. You can then get a clean file path for your specific OS (Windows in your case) using <code>filepath.Clean</code>.</p>
<p>Example:</p>
<pre class="lang-golang prettyprint-override"><code>kubeconfig := flag.String("kubeconfig", "$USERPROFILE/.kube/config", "location to the Kube config file")
fmt.Println(*kubeconfig)
fmt.Println(os.ExpandEnv(*kubeconfig))
fmt.Println(filepath.Clean(os.ExpandEnv(*kubeconfig)))
</code></pre>
<p>This will output the following on Windows:</p>
<pre><code>$USERPROFILE/.kube/config
C:\Users\username/.kube/config
C:\Users\username\.kube\config
</code></pre>
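<p>For reference, a minimal runnable sketch that puts this together (note that <code>flag.Parse()</code> has to run before the flag value is used):</p>
<pre class="lang-golang prettyprint-override"><code>package main

import (
    "flag"
    "fmt"
    "os"
    "path/filepath"
)

func main() {
    // The default uses Unix-style notation so os.ExpandEnv can resolve it on Windows too.
    kubeconfig := flag.String("kubeconfig", "$USERPROFILE/.kube/config", "location to the Kube config file")
    flag.Parse()

    // Expand the environment variable, then normalize the separators for the current OS.
    path := filepath.Clean(os.ExpandEnv(*kubeconfig))
    fmt.Println(path)
}
</code></pre>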
|
<p>I am experiencing a lot of CPU throttling (see nginx graph below, other pods often 25% to 50%) in my Kubernetes cluster (k8s v1.18.12, running 4.9.0-11-amd64 #1 SMP Debian 4.9.189-3+deb9u2 (2019-11-11) x86_64 GNU/Linux).</p>
<p>Due to backports, I do not know whether my cluster contains the Linux kernel bug described in <a href="https://lkml.org/lkml/2019/5/17/581" rel="nofollow noreferrer">https://lkml.org/lkml/2019/5/17/581</a>. How can I find out? Is there a simple way to check or measure?</p>
<p>If I have the bug, what is the best approach to get the fix? Or should I mitigate otherwise, e.g. not use CFS quota (<code>--cpu-cfs-quota=false</code> or no CPU limits) or reduce <code>cfs_period_us</code> and <code>cfs_quota_us</code>?</p>
<p>CPU Throttling Percentage for nginx (scaling horizontally around 15:00 and removing CPU limits around 19:30): <a href="https://i.stack.imgur.com/M1oi3.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/M1oi3.png" alt="enter image description here" /></a></p>
| <p>Recently I have been working on debugging the CPU throttling issue, and with the following five tests I was able to reproduce the bug in the kernel (Linux version 4.18.0-041800rc4-generic).</p>
<p>These test cases are intended to hit 100% throttling for the test's 5000 ms / 100 ms = 50 periods. A kernel without this bug should report a CPU usage of about 500 ms.</p>
<p>You can try these tests to check whether your kernel is affected by the throttling bug.</p>
<p>[Multi Thread Test 1]</p>
<pre><code>./runfibtest 1; ./runfibtest
From <https://github.com/indeedeng/fibtest>
</code></pre>
<p>[Result]</p>
<pre><code>Throttled
./runfibtest 1
Iterations Completed(M): 465
Throttled for: 52
CPU Usage (msecs) = 508
./runfibtest 8
Iterations Completed(M): 283
Throttled for: 52
CPU Usage (msecs) = 327
</code></pre>
<p>[Multi Thread Test 2]</p>
<pre><code>docker run -it --rm --cpu-quota 10000 --cpu-period 100000 hipeteryang/fibtest:latest /bin/sh -c "runfibtest 8 && cat /sys/fs/cgroup/cpu,cpuacct/cpu.stat && cat /sys/fs/cgroup/cpu,cpuacct/cpuacct.usage && cat /sys/fs/cgroup/cpu,cpuacct/cpuacct.usage_percpu"
</code></pre>
<p>[Result]</p>
<pre><code>Throttled
Iterations Completed(M): 192
Throttled for: 53
CPU Usage (msecs) = 227
nr_periods 58
nr_throttled 56
throttled_time 10136239267
267434463
209397643 2871651 8044402 4226146 5891926 5532789 27939741 4104364
</code></pre>
<p>[Multi Thread Test 3]</p>
<pre><code>docker run -it --rm --cpu-quota 10000 --cpu-period 100000 hipeteryang/stress-ng:cpu-delay /bin/sh -c "stress-ng --taskset 0 --cpu 1 --timeout 5s & stress-ng --taskset 1-7 --cpu 7 --cpu-load-slice -1 --cpu-delay 10 --cpu-method fibonacci --timeout 5s && cat /sys/fs/cgroup/cpu,cpuacct/cpu.stat && cat /sys/fs/cgroup/cpu,cpuacct/cpuacct.usage && cat /sys/fs/cgroup/cpu,cpuacct/cpuacct.usage_percpu"
</code></pre>
<p>Result</p>
<pre><code>Throttled
nr_periods 56
nr_throttled 53
throttled_time 7893876370
379589091
330166879 3073914 6581265 724144 706787 5605273 29455102 3849694
</code></pre>
<p>For the following Kubernetes tests, we can use <code>kubectl logs pod-name</code> to get the result once the job is done.</p>
<p>[Multi Thread Test 4]</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
  name: fibtest
  namespace: default
spec:
  template:
    spec:
      containers:
        - name: fibtest
          image: hipeteryang/fibtest
          command: ["/bin/bash", "-c", "runfibtest 8 && cat /sys/fs/cgroup/cpu,cpuacct/cpu.stat && cat /sys/fs/cgroup/cpu,cpuacct/cpuacct.usage && cat /sys/fs/cgroup/cpu,cpuacct/cpuacct.usage_percpu"]
          resources:
            requests:
              cpu: "50m"
            limits:
              cpu: "100m"
      restartPolicy: Never
</code></pre>
<p>Result</p>
<pre><code>Throttled
Iterations Completed(M): 195
Throttled for: 52
CPU Usage (msecs) = 230
nr_periods 56
nr_throttled 54
throttled_time 9667667360
255738571
213621103 4038989 2814638 15869649 4435168 5459186 4549675 5437010
</code></pre>
<p>[Multi Thread Test 5]</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
  name: stress-ng-test
  namespace: default
spec:
  template:
    spec:
      containers:
        - name: stress-ng-test
          image: hipeteryang/stress-ng:cpu-delay
          command: ["/bin/bash", "-c", "stress-ng --taskset 0 --cpu 1 --timeout 5s & stress-ng --taskset 1-7 --cpu 7 --cpu-load-slice -1 --cpu-delay 10 --cpu-method fibonacci --timeout 5s && cat /sys/fs/cgroup/cpu,cpuacct/cpu.stat && cat /sys/fs/cgroup/cpu,cpuacct/cpuacct.usage && cat /sys/fs/cgroup/cpu,cpuacct/cpuacct.usage_percpu"]
          resources:
            requests:
              cpu: "50m"
            limits:
              cpu: "100m"
      restartPolicy: Never
</code></pre>
<p>Result</p>
<pre><code>Throttled
nr_periods 53
nr_throttled 50
throttled_time 6827221694
417331622
381601924 1267814 8332150 3933171 13654620 184120 6376208 2623172
</code></pre>
<p>Feel free to leave any comment, I’ll reply as soon as possible.</p>
|
<p>Does anyone know what am I doing wrong with my kubernetes secret yaml and why its not able to successfully create one programatically?</p>
<p>I am trying to programmatically create a secret in Kubernetes cluster with credentials to pull an image from a private registry but it is failing with the following:</p>
<pre><code>"Secret "secrettest" is invalid: data[.dockerconfigjson]: Invalid value: "<secret contents redacted>": invalid character 'e' looking for beginning of value"
</code></pre>
<p>This is the yaml I tried to use to create the secret with. It is yaml output from a secret previously created in my kubernetes cluster using the command line except without a few unnecessary properties. So I know this is valid yaml:</p>
<pre><code>apiVersion: v1
data:
  .dockerconfigjson: eyJhdXRocyI6eyJoZWxsb3dvcmxkLmF6dXJlY3IuaW8iOnsidXNlcm5hbWUiOiJoZWxsbyIsInBhc3N3b3JkIjoid29ybGQiLCJhdXRoIjoiYUdWc2JHODZkMjl5YkdRPSJ9fX0=
kind: Secret
metadata:
  name: secrettest
  namespace: default
type: kubernetes.io/dockerconfigjson
</code></pre>
<p>This is the decoded value of the ".dockerconfigjson" property which seems to be throwing the error but not sure why if the value is supposed to be encoded per documentation:</p>
<pre><code>{"auths":{"helloworld.azurecr.io":{"username":"hello","password":"world","auth":"aGVsbG86d29ybGQ="}}}
</code></pre>
<p>According to the documentation, my yaml is valid, so I'm not sure what the issue is:
<a href="https://i.stack.imgur.com/Cdl2H.png" rel="nofollow noreferrer">Customize secret yaml</a></p>
<p><strong>Note: I tried creating the Secret using the Kubernetes client and "PatchNamespacedSecretWithHttpMessagesAsync" in C#</strong></p>
<p>Referenced documentaion: <a href="https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/</a></p>
| <p>I was getting a similar error when I was trying to create a secret using client-go.
The error actually says that the decoded string has an invalid 'e' at the beginning of the value (it is expecting '{', i.e. the start of the JSON, there). To solve this, do not encode the value into base64 yourself. Just pass the raw JSON as it is and it will be encoded later.</p>
|
<p>As the <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#components" rel="noreferrer">documentation</a> states:</p>
<blockquote>
<p>For each VolumeClaimTemplate entry defined in a StatefulSet, each Pod
receives one PersistentVolumeClaim. In the nginx example above, each
Pod receives a single PersistentVolume with a StorageClass of
my-storage-class and 1 Gib of provisioned storage. If no StorageClass
is specified, then the default StorageClass will be used. When a Pod
is (re)scheduled onto a node, its volumeMounts mount the
PersistentVolumes associated with its PersistentVolume Claims. Note
that, the PersistentVolumes associated with the Pods' PersistentVolume
Claims are not deleted when the Pods, or StatefulSet are deleted. This
must be done manually.</p>
</blockquote>
<p>The part I'm interested in is this: <code>If no StorageClass is specified, then the default StorageClass will be used</code></p>
<p>I create a StatefulSet like this:</p>
<pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
  namespace: ches
  name: ches
spec:
  serviceName: ches
  replicas: 1
  selector:
    matchLabels:
      app: ches
  template:
    metadata:
      labels:
        app: ches
    spec:
      serviceAccountName: ches-serviceaccount
      nodeSelector:
        ches-worker: "true"
      volumes:
        - name: data
          hostPath:
            path: /data/test
      containers:
        - name: ches
          image: [here I have the repo]
          imagePullPolicy: Always
          securityContext:
            privileged: true
          args:
            - server
            - --console-address
            - :9011
            - /data
          env:
            - name: MINIO_ACCESS_KEY
              valueFrom:
                secretKeyRef:
                  name: ches-keys
                  key: access-key
            - name: MINIO_SECRET_KEY
              valueFrom:
                secretKeyRef:
                  name: ches-keys
                  key: secret-key
          ports:
            - containerPort: 9000
              hostPort: 9011
          resources:
            limits:
              cpu: 100m
              memory: 200Mi
          volumeMounts:
            - name: data
              mountPath: /data
      imagePullSecrets:
        - name: edge-storage-token
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 1Gi
</code></pre>
<p>Of course I have already created the secrets, imagePullSecrets etc and I have labeled the node as ches-worker.</p>
<p>When I apply the yaml file, the pod is in Pending status and <code>kubectl describe pod ches-0 -n ches</code> gives the following error:</p>
<blockquote>
<p>Warning FailedScheduling 6s default-scheduler 0/1 nodes are
available: 1 pod has unbound immediate PersistentVolumeClaims.
preemption: 0/1 nodes are available: 1 Preemption is not helpful for
scheduling</p>
</blockquote>
<p>Am I missing something here?</p>
| <p>You need to create a PV in order to get a PVC bound. If you want PVs to be created automatically from PVC claims, you need a provisioner installed in your cluster.</p>
<p>First create a PV with at least the amount of space needed by your PVC.
Then you can apply your deployment yaml, which contains the PVC claim.</p>
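<p>For illustration, a minimal hostPath PersistentVolume that could satisfy the 1Gi claim from the <code>volumeClaimTemplates</code> above (the name and path are just examples, not taken from your setup):</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: ches-data-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /data/ches          # example path on the labeled worker node
</code></pre>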
|
<p>I see the following error when I run my deployment:</p>
<p><code>Error from server (NotFound): error when creating "n3deployment.yaml": namespaces "n2" not found</code></p>
<p>My n3deployment.yaml has no reference to n2?</p>
<p><strong>Step By Step</strong></p>
<ol>
<li>Ensure everything is empty</li>
</ol>
<pre><code>c:\temp\k8s>kubectl get pods
No resources found.
c:\temp\k8s>kubectl get svc
No resources found.
c:\temp\k8s>kubectl get deployments
No resources found.
c:\temp\k8s>kubectl get namespaces
NAME STATUS AGE
default Active 20h
docker Active 20h
kube-public Active 20h
kube-system Active 20h
</code></pre>
<ol start="2">
<li>Create files</li>
</ol>
<pre><code>n3namespace.yaml

apiVersion: v1
kind: Namespace
metadata:
  name: n3

n3service.yaml

apiVersion: v1
kind: Service
metadata:
  name: my-app-n3
  namespace: n3
  labels:
    app: my-app-n3
spec:
  type: LoadBalancer
  ports:
    - name: http
      port: 80
      targetPort: http
      protocol: TCP
  selector:
    app: my-app-n3

n3deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-n3
  labels:
    app: my-app-n3
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app-n3
  template:
    metadata:
      labels:
        app: my-app-n3
    spec:
      containers:
        - name: waiter
          image: waiter:v1
          ports:
            - containerPort: 80
</code></pre>
<ol start="3">
<li>Apply configuration</li>
</ol>
<pre><code>c:\temp\k8s>kubectl apply -f n3namespace.yaml
namespace "n3" created
c:\temp\k8s>kubectl apply -f n3service.yaml
service "my-app-n3" created
c:\temp\k8s>kubectl apply -f n3deployment.yaml
Error from server (NotFound): error when creating "n3deployment.yaml": namespaces "n2" not found
</code></pre>
<p>I used to have a namespace called <code>n2</code> but, as you can see, it no longer exists.</p>
| <p>In my case, I had forgotten to run this first:</p>
<pre><code>kubectl create namespace n3
</code></pre>
|
<p>I am running load tests using K6 as a tool. I am sending 1000 requests per second to a .NET API (without database) that I deployed in Kubernetes. Sending this many requests impacts the performance of my application. This is noticeable in the request duration(as expected).</p>
<p>The K8s cluster has two nodes and one of them has high CPU usage because of all the requests I am sending (which is also expected). However, the second node has low CPU usage. While monitoring the CPU usage of the pods I am seeing the same results. Both pods use about 10% - 20% of their CPU. How can these metrics be explained? Shouldn't the load balancer (ingress) balance the load evenly? And why am I seeing a decrease in performance if I still have resources left?</p>
<p>Any advice/help would be greatly appreciated.</p>
<pre><code>Node 1, CPU usage: https://i.stack.imgur.com/r0QPF.png
Node 2, CPU usage: https://i.stack.imgur.com/Ovub5.png
Pod 1, CPU usage: https://i.stack.imgur.com/ofUmm.png
Pod 2, CPU usage: https://i.stack.imgur.com/Jj3sC.png
</code></pre>
| <p>Did you make sure that the pods are scheduled one per node? Otherwise, they could both be scheduled on the same node.
For your test with one pod per node, a DaemonSet would be sufficient.
Otherwise, consider topology spread constraints (see the sketch after the link):</p>
<pre><code>https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/
</code></pre>
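<p>For example, a hedged sketch of such a constraint, placed in the Deployment's pod template spec (the label selector is an assumption; it must match your pod labels):</p>
<pre><code>spec:
  topologySpreadConstraints:
    - maxSkew: 1
      topologyKey: kubernetes.io/hostname
      whenUnsatisfiable: ScheduleAnyway
      labelSelector:
        matchLabels:
          app: my-dotnet-api   # assumption: replace with your actual pod labels
</code></pre>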
|
<p>I'm trying to deploy my code, placed in a Bitbucket repo, to Kubernetes clusters available in Google Cloud. I've created a trigger which reacts when I push new changes to the Bitbucket repo.</p>
<p>I mean:</p>
<p><a href="https://i.stack.imgur.com/YkK1h.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/YkK1h.png" alt="enter image description here" /></a></p>
<p>Based on guidance: <a href="https://cloud.google.com/build/docs/deploying-builds/deploy-gke#automating_deployments" rel="nofollow noreferrer">https://cloud.google.com/build/docs/deploying-builds/deploy-gke#automating_deployments</a> I've created: <code>cloudbuild.yaml</code> file with content:</p>
<pre><code>steps:
  - name: "gcr.io/cloud-builders/gke-deploy"
    args:
      - run
      - --filename=${_TECH_RADAR_KUB_CONFIG_LOCATION}
      - --image=gcr.io/java-kubernetes-clusters-test/bitbucket.org/intivetechradar/technology-radar-be:${SHORT_SHA}
      - --location=${_TECH_RADAR_GKE_LOCATION}
      - --cluster=${_TECH_RADAR_CLUSTER_NAME}
    env:
      - 'SHORT_SHA=$SHORT_SHA'
options:
  logging: CLOUD_LOGGING_ONLY
</code></pre>
<p>In my repo I additionally have a <code>deployment.yaml</code> file with the Kubernetes config:</p>
<pre><code>apiVersion: "apps/v1"
kind: "Deployment"
metadata:
name: "java-kubernetes-clusters-test"
namespace: "default"
labels:
app: "java-kubernetes-clusters-test"
spec:
replicas: 3
selector:
matchLabels:
app: "java-kubernetes-clusters-test"
template:
metadata:
labels:
app: "java-kubernetes-clusters-test"
spec:
containers:
- name: "technology-radar-be-1"
image: "gcr.io/java-kubernetes-clusters-test/bitbucket.org/intivetechradar/technology-radar-be:SHORT_SHA"
</code></pre>
<p>My build starts but gets stuck on a MANIFEST UNKNOWN issue like below:
<a href="https://i.stack.imgur.com/Egm3i.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Egm3i.png" alt="enter image description here" /></a></p>
<p>How can I solve this problem? Can I somehow let the Kubernetes engine know that such a manifest exists, via an additional pre-step in the cloudbuild.yaml file?
I would be grateful for help.
Thanks!</p>
<p><strong>UPDATE</strong>:</p>
<p>I'm trying to use <strong>cloudbuild.yaml</strong> like below:</p>
<pre><code>substitutions:
  _CLOUDSDK_COMPUTE_ZONE: us-central1-c # default value
  _CLOUDSDK_CONTAINER_CLUSTER: kubernetes-cluster-test # default value

steps:
  - id: 'build test image'
    name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/${_TECH_RADAR_PROJECT_ID}/technology-radar-be/master:$SHORT_SHA', '.']
  - id: 'push test core image'
    name: 'gcr.io/cloud-builders/docker'
    args: ['push', 'gcr.io/${_TECH_RADAR_PROJECT_ID}/technology-radar-be/master:$SHORT_SHA:$SHORT_SHA']
  - id: 'set test core image in yamls'
    name: 'ubuntu'
    args: ['bash','-c','sed -i "s,${_TECH_CONTAINER_IMAGE},gcr.io/$PROJECT_ID/technology-radar-be/$master:$SHORT_SHA," deployment.yaml']
  - name: 'gcr.io/cloud-builders/kubectl'
    args: ['apply', '-f', 'deployment.yaml']
    env:
      - 'CLOUDSDK_COMPUTE_ZONE=${_CLOUDSDK_COMPUTE_ZONE}'
      - 'CLOUDSDK_CONTAINER_CLUSTER=${_CLOUDSDK_CONTAINER_CLUSTER}'

options:
  logging: CLOUD_LOGGING_ONLY
</code></pre>
<p>The last error message is:
<a href="https://i.stack.imgur.com/eAoah.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/eAoah.png" alt="enter image description here" /></a></p>
<p>Is there something wrong with my <code>Dockerfile</code> definition? :</p>
<pre><code>FROM maven:3.8.2-jdk-11 as builder
ARG JAR_FILE=target/docker-demo.jar
COPY ${JAR_FILE} app.jar
ENTRYPOINT ["java", "-jar", "/app.jar"]
</code></pre>
<p>pom.xml:</p>
<pre><code><?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>2.7.6</version>
        <relativePath/> <!-- lookup parent from repository -->
    </parent>
    <groupId>com.example</groupId>
    <artifactId>docker-kubernetes</artifactId>
    <version>0.0.1</version>
    <name>docker-kubernetes</name>
    <description>Demo project for Spring Boot</description>
    <properties>
        <java.version>11</java.version>
    </properties>
    <dependencies>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-actuator</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-devtools</artifactId>
            <scope>runtime</scope>
            <optional>true</optional>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-test</artifactId>
            <scope>test</scope>
        </dependency>
    </dependencies>
    <build>
        <plugins>
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
            </plugin>
        </plugins>
        <finalName>docker-demo</finalName>
    </build>
</project>
</code></pre>
<p>Screenshot from project tree:
<a href="https://i.stack.imgur.com/PkcyF.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/PkcyF.png" alt="enter image description here" /></a></p>
<p>Local deployment works:</p>
<p><a href="https://i.stack.imgur.com/5vf3i.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5vf3i.png" alt="enter image description here" /></a></p>
| <p>Not sure, but if you look at the error, it's trying to fetch</p>
<p><code>gcr.io/java-kubernetes-clusters-test/bitbucket.org/intivetechradar/technology-radar-be/SHORT_SHA</code> instead of <code>gcr.io/java-kubernetes-clusters-test/bitbucket.org/intivetechradar/technology-radar-be:SHORT_SHA</code>; it's not using <code>:</code> before the <strong>TAG</strong>. Are you building the images?</p>
<p><strong>Update</strong> :</p>
<p>I just tested this file with Cloud Build and GKE and it is working for me: <a href="https://github.com/harsh4870/basic-ci-cd-cloudbuild/blob/main/cloudbuild-gke.yaml" rel="nofollow noreferrer">https://github.com/harsh4870/basic-ci-cd-cloudbuild/blob/main/cloudbuild-gke.yaml</a></p>
<p>However you can use the <strong>cloudbuild.yaml</strong></p>
<pre><code>substitutions:
  _CLOUDSDK_COMPUTE_ZONE: us-central1-c # default value
  _CLOUDSDK_CONTAINER_CLUSTER: standard-cluster-1 # default value

steps:
  - id: 'build test image'
    name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/$REPO_NAME/$BRANCH_NAME:$SHORT_SHA', '.']
  - id: 'push test core image'
    name: 'gcr.io/cloud-builders/docker'
    args: ['push', 'gcr.io/$PROJECT_ID/$REPO_NAME/$BRANCH_NAME:$SHORT_SHA']
  - id: 'set test core image in yamls'
    name: 'ubuntu'
    args: ['bash','-c','sed -i "s,TEST_IMAGE_NAME,gcr.io/$PROJECT_ID/$REPO_NAME/$BRANCH_NAME:$SHORT_SHA," deployment.yaml']
  - name: 'gcr.io/cloud-builders/kubectl'
    args: ['apply', '-f', 'deployment.yaml']
    env:
      - 'CLOUDSDK_COMPUTE_ZONE=${_CLOUDSDK_COMPUTE_ZONE}'
      - 'CLOUDSDK_CONTAINER_CLUSTER=${_CLOUDSDK_CONTAINER_CLUSTER}'
</code></pre>
<p><strong>deployment.yaml</strong></p>
<pre><code>apiVersion: "apps/v1"
kind: "Deployment"
metadata:
name: "java-kubernetes-clusters-test"
namespace: "default"
labels:
app: "java-kubernetes-clusters-test"
spec:
replicas: 3
selector:
matchLabels:
app: "java-kubernetes-clusters-test"
template:
metadata:
labels:
app: "java-kubernetes-clusters-test"
spec:
containers:
- name: "technology-radar-be-1"
image: TEST_IMAGE_NAME
</code></pre>
|
<p>I am new to Kubernetes and stuck on an issue. I was trying to renew a Let's Encrypt SSL certificate, but when I try to get the certificate by running the following command</p>
<pre><code>kubectl get certificate
</code></pre>
<p>System throwing this exception</p>
<pre><code>Error from server: conversion webhook for cert-manager.io/v1alpha2, Kind=Certificate failed: Post https://cert-manager-webhook.default.svc:443/convert?timeout=30s: x509: certificate signed by unknown authority (possibly because of "x509: ECDSA verification failure" while trying to verify candidate authority certificate "cert-manager-webhook-ca")
</code></pre>
<p>I have checked the pods also</p>
<p><a href="https://i.stack.imgur.com/R94Wn.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/R94Wn.png" alt="enter image description here" /></a></p>
<p>The "cert-manager-webhook" is in running state. When I check logs of this pod, I get the following response</p>
<p><a href="https://i.stack.imgur.com/BF0AR.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/BF0AR.png" alt="enter image description here" /></a></p>
<p>I have also tried to apply the cluster-issuer after deleting it, but I face the same issue:</p>
<pre><code>kubectl apply -f cluster-issuer.yaml
</code></pre>
<p><a href="https://i.stack.imgur.com/hzNQa.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/hzNQa.png" alt="enter image description here" /></a></p>
<p>I have also done some research about this but could not find a suitable solution. What's the issue here? Can someone please help me with this? Thanks.</p>
| <p>In my case, I was attempting to install an older version of <code>cert-manager</code> onto my cluster, and simply pulling the latest version of <code>cert-manager</code> (1.10.1 at the time of writing) and installing that worked.</p>
<hr />
<p>When attempting to install an older version of <code>cert-manager</code> I saw the following error from the <code>cainjector</code> pod.</p>
<blockquote>
<p>error registering secret controller: no matches for kind "MutatingWebhookConfiguration" in version "admissionregistration.k8s.io/v1beta1"</p>
</blockquote>
<p>I assume that the API <strong>admissionregistration.k8s.io/v1beta1</strong> has been removed between K8s versions 1.21 and 1.24, and that's why I encountered an issue.</p>
|
<p>I have three servers:</p>
<pre><code>1 master: 192.168.1.131 k8s
1 node: 192.168.1.132 k8s
1 rancher: 192.168.1.133 rancher 2.6
</code></pre>
<p>I have created a Docker image in a private registry on the node, at <code>192.168.1.132:5000/test</code>.
Both master and node can push and pull the image. But when Rancher deploys with the image set to 192.168.1.132:5000/test, I get this error:</p>
<pre><code>Failed to pull image “192.168.1.132:5000/test-demo”: rpc error: code = Unknown desc = failed to pull and unpack image “192.168.1.132:5000/test-demo:latest”: failed to resolve reference “192.168.1.132:5000/test-demo:latest”: failed to do request: Head “https://192.168.1.132:5000/v2/test-demo/manifests/latest”: http: server gave HTTP response to HTTPS client.
</code></pre>
<p>My registry uses
<code>http</code>, not <code>https</code>, but Rancher sends HTTPS.
<a href="https://ibb.co/t2wxbQ8" rel="nofollow noreferrer">Here is a screenshot of the problem</a></p>
| <p>You may try to fix this by adding one entry to the <strong>daemon.json</strong> file.</p>
<p>The configuration file can be found at <em><strong>'C:\ProgramData\Docker\config\daemon.json'</strong></em>. You can create this file if it doesn't already exist.</p>
<pre><code>"insecure-registries": [
"192.168.1.132:5000"
]
</code></pre>
<p>Then restart the Docker service; hopefully that solves the issue.</p>
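<p>Note that the path above is for Docker on Windows. If your node runs the Docker engine on Linux (an assumption; if the cluster uses containerd as its runtime, the insecure-registry setting lives in the containerd config instead), the equivalent file is <code>/etc/docker/daemon.json</code> with the same content:</p>
<pre><code>{
  "insecure-registries": ["192.168.1.132:5000"]
}
</code></pre>
<p>followed by <code>sudo systemctl restart docker</code> on that node.</p>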
<p>Ref:</p>
<ol>
<li><a href="https://docs.docker.com/config/daemon/#configure-the-docker-daemon" rel="nofollow noreferrer">Configure the Docker daemon</a></li>
<li><a href="https://learn.microsoft.com/en-us/virtualization/windowscontainers/manage-docker/configure-docker-daemon#configure-docker-with-a-configuration-file" rel="nofollow noreferrer">Configure Docker with a configuration file.</a></li>
</ol>
|
<p>I'm using a Kubernetes server with API version 1.25.2. When I try to run a kubectl command I get the below error:</p>
<pre><code>TRONBQQ2:~$ kubectl get nodes
error: unknown flag: --environment
error: unknown flag: --environment
error: unknown flag: --environment
error: unknown flag: --environment
error: unknown flag: --environment
Unable to connect to the server: getting credentials: exec: executable kubelogin failed with exit code 1
</code></pre>
<p>From the same terminal I'm able to access the Kubernetes server with version 1.23.12.</p>
<p>Is this due to an old kubectl client version?</p>
<pre><code>TRONBQQ2:~$ kubectl version --client
Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.4", GitCommit:"d360454c9bcd1634cf4cc52d1867af5491dc9c5f", GitTreeState:"clean",
BuildDate:"2020-11-11T13:17:17Z", GoVersion:"go1.15.2", Compiler:"gc", Platform:"linux/amd64"}
TRONBQQ2:~$ sudo apt-get install -y kubectl
Reading package lists... Done
Building dependency tree
Reading state information... Done
</code></pre>
<p>kubectl is already the newest version (1.19.4-00).
0 upgraded, 0 newly installed, 0 to remove and 313 not upgraded.</p>
<p>I even tried to upgrade <a href="https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/#install-kubectl-binary-with-curl-on-linux" rel="nofollow noreferrer">kubectl</a>. Even after the upgrade, the version remains v1.19.4. I am not sure whether this is the reason for the above-mentioned error.</p>
| <p>Try to check whether the following command prints the options below.</p>
<pre><code>>>>kubelogin -h
Login to azure active directory and populate kubeconfig with AAD tokens

Usage:
  kubelogin [flags]
  kubelogin [command]

Available Commands:
  completion          Generate the autocompletion script for the specified shell
  convert-kubeconfig  convert kubeconfig to use exec auth module
  get-token           get AAD token
  help                Help about any command
  remove-tokens       Remove all cached tokens from filesystem

Flags:
  -h, --help          help for kubelogin
      --logtostderr   log to standard error instead of files (default true)
  -v, --v Level       number for the log level verbosity
      --version       version for kubelogin
</code></pre>
<p>It seems I had a different kubelogin that was missing the command options listed above, so I installed the new version of kubelogin using</p>
<pre><code> az aks install-cli
</code></pre>
<p>If that doesn't work, then you can refer to the screenshot below to get the kubelogin brew package: <a href="https://i.stack.imgur.com/bB2N7.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/bB2N7.png" alt="enter image description here" /></a></p>
<p>Also export the below path</p>
<pre><code>export PATH="/usr/local/bin:$PATH"
</code></pre>
<p>Once kubelogin is available, run the below command to convert your kubeconfig:</p>
<pre><code>kubelogin convert-kubeconfig ./kube/config
</code></pre>
|
<p>I am getting - when installing <a href="https://cilium.io" rel="nofollow noreferrer">Cilium</a>:</p>
<pre><code>Warning FailedScheduling 4m21s (x17 over 84m) default-scheduler 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules.
</code></pre>
<p>How can I see the rule and can I change it?</p>
<p>If I do <code>kubectl describe node</code>, I do not have any <code>nodeAffinity</code> settings, and the node has <code>Taints:<none></code>.</p>
| <p>Run <code>$ kubectl get pods</code>; it shows the Pending status (<code>kubectl get pods -o wide</code>).</p>
<p>To describe the pod, run <code>$ kubectl describe pod POD_NAME</code>; it shows the warning as part of the events. If that doesn't help, try, as suggested by <strong>Chris</strong>, <code>kubectl get pod <name> -o yaml</code>. There you'll find <code>spec.affinity</code>.</p>
<p>After identifying which anti-affinity rule triggers the warning, you can choose to</p>
<blockquote>
<p>either rectify the rule or make some changes in the cluster to support
the rule</p>
</blockquote>
<p><strong>For example:</strong> suppose you try to deploy 4 replicas of an Nginx Deployment with a <strong>podAntiAffinity</strong> rule in a 3-node cluster. The last replica cannot be scheduled because there are no available nodes left.</p>
<blockquote>
<p>You can choose to reduce the number of replicas, increase the number
of Nodes, adjust the rule to use soft/preference requirements or
remove the <strong>podAntiAffinity</strong> rule.</p>
</blockquote>
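<p>As a rough sketch of the "soft/preference" option (the labels are placeholders, not taken from your chart), a hard rule can be relaxed like this in the pod spec:</p>
<pre><code>affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          topologyKey: kubernetes.io/hostname
          labelSelector:
            matchLabels:
              app: my-app   # assumption: match your actual pod labels
</code></pre>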
|
<h3>First, I have created a service account <code>jenkins</code></h3>
<pre><code>> kubectl create serviceaccount jenkins
serviceaccount/jenkins created
</code></pre>
<h3>Second, create a token for this service account.</h3>
<pre><code>> kubectl create token jenkins
eyJhbGc****************iQS-AVXfIzA
</code></pre>
<h3>Then, I run <code>kubectl describe serviceaccount jenkins</code> command to check the tokens of newly created service account.</h3>
<p>But the output shows None tokens.</p>
<pre><code>> kubectl describe serviceaccount jenkins
Name: jenkins
Namespace: default
Labels: <none>
Annotations: <none>
Image pull secrets: <none>
Mountable secrets: <none>
Tokens: <none> <===== look at this!
Events: <none>
</code></pre>
<h3>Questions</h3>
<ol>
<li><p>The output shows "None" tokens, how do I know there is a token associated with this service account?</p>
</li>
<li><p>If I run <code>kubectl create token jenkins</code> command multiple times, will kubernetes create multiple tokens for this account? or the latest one will overwrite the previous one?</p>
<pre><code># first time
kubectl create token jenkins
# second time
kubectl create token jenkins
# third time
kubectl create token jenkins
</code></pre>
</li>
<li><p>What's the mapping relation between the service account and the token? is it <code>1:n</code> ?</p>
</li>
</ol>
| <p>The mechanism of service accounts and tokens has been changed (Moved to stable in v1.22).</p>
<p>In the past, you have created a service account. Then in the first time you ran a pod with that service account, Kubernetes created a long-lived, never expired, token that resided in secret of type <code>kubernetes.io/service-account-token</code>. Kubernetes attached this token to the pod via volume mount.</p>
<p>Due to the unsecured nature of long-lived tokens this has been changed to something called <code>Bound Service Account Token Volumes</code>.</p>
<p>In short, Kubernetes no more creates a secret with a token for the service account but instead Kubelet injects a short-lived token with a default timespan of 1 hour to the pod and refresh it when it's expired.</p>
<p>When you ran <code>kubectl describe serviceaccount jenkins</code> you saw <code><none></code> in the <code>Tokens</code> section because it represents the 'old-fashioned' static tokens, which, as mentioned, are no longer created by default.</p>
<p>You can manually create such a static token with</p>
<pre><code>> cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Secret
type: kubernetes.io/service-account-token
metadata:
  name: jenkins
  annotations:
    kubernetes.io/service-account.name: jenkins
EOF
</code></pre>
<p>and then when you run <code>describe</code> again you will see the new token:</p>
<pre><code>> kubectl describe serviceaccount jenkins
Name: jenkins
Namespace: jenkins-demo
Labels: <none>
Annotations: <none>
Image pull secrets: <none>
Mountable secrets: <none>
Tokens: jenkins
Events: <none>
</code></pre>
<p>You can create multiple tokens with different names and you will see all of them in the <code>describe</code> output.</p>
<p><strong>BUT</strong> This is a bad practice to create these static tokens because they never expired. You should use short-lived token that you can create with the command you mentioned <code>kubectl create token jenkins</code></p>
<p>You can control the duration with the <code>--duration</code> flag (for example <code>--duration=600s</code>) and create a token with an expiration of up to 48h. The default is 1h.</p>
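<p>For example, requesting a token for the <code>jenkins</code> service account that is valid for 24 hours:</p>
<pre><code>kubectl create token jenkins --duration=24h
</code></pre>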
<p>The creation of a new token doesn't overwrite the previous one. These tokens are JWTs, meaning they are signed and self-contained, and are not kept on the server. If you want to see the content of a token, you can paste the output
of <code>kubectl create token jenkins</code> into <a href="https://jwt.io/" rel="noreferrer">jwt.io</a>.</p>
<p>Same as with the static token. You can run
<code>kubectl get secret jenkins --output=jsonpath='{.data.token}' | base64 -d</code>
and paste the output in <a href="https://jwt.io/" rel="noreferrer">jwt.io</a>. You can notice this token doesn't have an expiration date.</p>
<p>Reference:</p>
<ol>
<li><a href="https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/" rel="noreferrer">https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/</a></li>
<li><a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#manually-create-an-api-token-for-a-serviceaccount" rel="noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#manually-create-an-api-token-for-a-serviceaccount</a></li>
<li><a href="https://github.com/kubernetes/enhancements/tree/master/keps/sig-auth/1205-bound-service-account-tokens" rel="noreferrer">https://github.com/kubernetes/enhancements/tree/master/keps/sig-auth/1205-bound-service-account-tokens</a></li>
<li><a href="https://github.com/kubernetes/enhancements/issues/542" rel="noreferrer">https://github.com/kubernetes/enhancements/issues/542</a></li>
</ol>
|
<p>I'm currently using the Kubernetes Python client CoreV1 API.
There is a way to list node details, but I could not find which one is the master.</p>
<pre class="lang-py prettyprint-override"><code>k8s_api = client.CoreV1Api()
logger.info("Getting k8s nodes...")
response = k8s_api.list_node()
</code></pre>
<pre><code>k8s_api = client.CoreV1Api()
response = k8s_api.list_node()

# Loop through the nodes and check for the control-plane/master role label
for node in response.items:
    labels = node.metadata.labels or {}
    # Newer clusters (v1.20+) label control-plane nodes with "node-role.kubernetes.io/control-plane";
    # older clusters use "node-role.kubernetes.io/master", so check both.
    if "node-role.kubernetes.io/control-plane" in labels or "node-role.kubernetes.io/master" in labels:
        print(f"Master node: {node.metadata.name}")
</code></pre>
|
<p>I have my application <a href="https://myapp.com" rel="nofollow noreferrer">https://myapp.com</a> deployed on K8S, with an nginx ingress controller. HTTPS is resolved at nginx.</p>
<p>Now there is a need to expose one service on a specific port for example <a href="https://myapp.com:8888" rel="nofollow noreferrer">https://myapp.com:8888</a>. Idea is to keep <a href="https://myapp.com" rel="nofollow noreferrer">https://myapp.com</a> secured inside the private network and expose only port number 8888 to the internet for integration.</p>
<p>Is there a way all traffic can be handled by the ingress controller, including tls termination, and it can also expose 8888 port and map it to a service?</p>
<p>Or
do I need another nginx terminating TLS and exposed on a NodePort? I am not sure if I can access services like <a href="https://myapp.com" rel="nofollow noreferrer">https://myapp.com</a>:<node_port> over HTTPS.</p>
<p>Is using multiple ingress controllers an option?</p>
<p>What is the best practice to do this in Kubernetes?</p>
| <p>Use sidecar proxy pattern to add HTTPS support to the application running inside the pod.</p>
<p><strong>Refer the below diagram as a reference</strong></p>
<p><a href="https://i.stack.imgur.com/aym8f.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/aym8f.png" alt="enter image description here" /></a></p>
<p>Run nginx as a sidecar proxy container fronting the application container inside the same pod. Access the application through port 8888 on nginx proxy. nginx would route the traffic to the application.</p>
<p>Find below the post showing how it can be implemented</p>
<p><strong><a href="https://vorozhko.net/kubernetes-sidecar-pattern-nginx-ssl-proxy-for-nodejs" rel="nofollow noreferrer">https://vorozhko.net/kubernetes-sidecar-pattern-nginx-ssl-proxy-for-nodejs</a></strong></p>
|
<p><strong>Intro</strong></p>
<p>There is an option <code>--platform</code> for Docker image to be run and config <code>platform</code> for docker-compose.</p>
<p>Also, almost in all official Docker images in hub.docker.com there is some of supported architectures in one tag.</p>
<p>Example, Ubuntu official image:</p>
<p><a href="https://i.stack.imgur.com/Yuxul.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Yuxul.png" alt="enter image description here" /></a></p>
<p>Most of Servers (also in Kubernetes) are <code>linux/amd64</code>.</p>
<p>I updated my MacBook to new one with their own Silicon chip (M1/M2...) and now Docker Desktop showing me message:</p>
<p><a href="https://i.stack.imgur.com/2hitQ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2hitQ.png" alt="enter image description here" /></a></p>
<p>For official images (you can see them without the yellow note) it automatically downloads the needed platform (I guess).</p>
<p>But for custom-built images (in a private repository like Nexus or an artifact registry) I have no influence. Yes, I can build appropriate images (e.g. with buildx) for different platforms and push them to the private repository, but in companies where repos are managed by DevOps it is tricky to do so. They say that the server architecture is linux/amd64, and if I develop web-oriented software (PHP etc.) on a different platform, then even if the version (tag) is the same, the environment is different, and there is no guarantee that it will work on the server.</p>
<p>I assumed that it is only the difference in interpretation of instructions between the software and the hardware.</p>
<p>I would like to understand the subject better. There is a lot of superficial information on the web, no details.</p>
<p><strong>Questions</strong></p>
<ol>
<li>what "platform/architecture" for Docker image does it really means? Like core basics.</li>
<li>Will you really get different code for interpreted programming languages?</li>
<li>It seems to me that if the wrong platform is specified, the containers run very slowly. But how can I measure this (script performance, interaction with the host file system, etc.)?</li>
</ol>
| <h2>TLDR</h2>
<ul>
<li>Build multi-arch images supporting multiple architectures</li>
<li>Always ensure that the image you're trying to run has compatible architecture</li>
</ul>
<hr />
<blockquote>
<ol>
<li>what "platform/architecture" for docker image does it really means? Like core basics. Links would be appreciated.</li>
</ol>
</blockquote>
<p>It means that some of the compiled binary code within the image contains CPU instructions exclusive to that specific architecture.
If you run that image on the incorrect architecture, it'll either be slower due to the incompatible code needing to run through an emulator, or it might even not work at all.
Some images are "multi-arch", where your Docker installation selects the most suitable architecture of the image to run.</p>
<blockquote>
<p>Will you really get different code for interpreted programming languages?</p>
</blockquote>
<p>Different machine code, yes. But it will be functionally equivalent.</p>
<blockquote>
<p>It seems to me that if the wrong platform is specified, the containers work very slowly. But how to measure this (script performance, interaction with the host file system, etc.)</p>
</blockquote>
<p>I recommend to always ensure you're running images meant for your machine's infrastructure.</p>
<p>For the sake of science, you could do an experiment.
You can build an image that is meant to run a simple batch job for two different architectures, and then you can try running them both on your machine. Compare the time it takes the containers to finish.</p>
<p>Sources:</p>
<ul>
<li><a href="https://serverfault.com/questions/1066298/why-are-docker-images-architecture-specific#:%7E:text=Docker%20images%20obviously%20contain%20processor,which%20makes%20them%20architecture%20dependent">https://serverfault.com/questions/1066298/why-are-docker-images-architecture-specific#:~:text=Docker%20images%20obviously%20contain%20processor,which%20makes%20them%20architecture%20dependent</a>.</li>
<li><a href="https://www.docker.com/blog/multi-arch-build-and-images-the-simple-way/" rel="nofollow noreferrer">https://www.docker.com/blog/multi-arch-build-and-images-the-simple-way/</a></li>
<li><a href="https://www.reddit.com/r/docker/comments/o7u8uy/run_linuxamd64_images_on_m1_mac/" rel="nofollow noreferrer">https://www.reddit.com/r/docker/comments/o7u8uy/run_linuxamd64_images_on_m1_mac/</a></li>
</ul>
|
<p>I try to install <code>gke-gcloud-auth-plugin</code> on a Mac M1 with zsh, following the gcloud <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/cluster-access-for-kubectl" rel="noreferrer">docs</a>.</p>
<p>The installation ran without issue and trying to re-run <code>gcloud components install gke-gcloud-auth-plugin</code> I get the <code>All components are up to date.</code> message.</p>
<p>However, <code>gke-gcloud-auth-plugin --version</code> returns <code>zsh: command not found: gke-gcloud-auth-plugin</code>. <code>kubectl</code>, installed the same way, works properly.</p>
<p>I tried to install <code>kubectl</code> using <code>brew</code>, with no more success.</p>
| <p>I had the same error and here's how I fixed it.</p>
<pre><code>brew info google-cloud-sdk
</code></pre>
<p>which produces:</p>
<pre><code>To add gcloud components to your PATH, add this to your profile:

  for bash users
    source "$(brew --prefix)/share/google-cloud-sdk/path.bash.inc"

  for zsh users
    source "$(brew --prefix)/share/google-cloud-sdk/path.zsh.inc"
    source "$(brew --prefix)/share/google-cloud-sdk/completion.zsh.inc"

  for fish users
    source "$(brew --prefix)/share/google-cloud-sdk/path.fish.inc"
</code></pre>
<p>Grab the code for your terminal and then run it (e.g., for zsh)</p>
<pre><code>source "$(brew --prefix)/share/google-cloud-sdk/path.zsh.inc"
</code></pre>
<p>Add the above line to your <code>.zshrc</code> profile to make sure that it is loaded every time you open a new terminal.</p>
<p>I had originally installed gcloud sdk with homebrew <code>brew install google-cloud-sdk</code>. At that time, I read the Caveats, which tell you how to add gcloud components to your PATH.</p>
<p>I installed both <code>kubectl</code> and <code>gke-gcloud-auth-plugin</code>, and neither of them could be found from the command line. I got the same error as the OP "command not found"</p>
|
<p>I am trying to run the following command on a pod from my local machine.</p>
<pre><code>kubectl -n myns exec mypod -- /bin/bash -c "err=$(tar -cvzf /tmp/logs_aj.tgz ${all_files} 2>&1) || ( export ret=$?; [[ $err == *"No such file or directory"* ]] || exit "$ret" )"
</code></pre>
<p>The command stores the output of the tar command in a variable; if the output contains "No such file or directory" it ignores the error, otherwise it exits with the return code.</p>
<p>The command itself: <code>err=$(tar -cvzf /tmp/bludr-logs_aj.tgz ${all_files} 2>&1) || ( export ret=$?; [[ $err == *"No such file or directory"* ]] || exit "$ret" )</code> runs fine when I run it manually inside the pod. But when I try to run this remotely using exec, it seems it won't accept <code>*</code>. Here is what I get:</p>
<pre><code>root:~# kubectl -n myns exec mypod -- /bin/bash -c "err=$(tar -cvzf /tmp/logs_aj.tgz ${all_files} 2>&1) || ( export ret=$?; [[ $err == *"No such file or directory"* ]] || exit "$ret" )"
such: Cowardly: command not found
such: -c: line 1: conditional binary operator expected
such: -c: line 1: syntax error near `*No'
such: -c: line 1: `Try 'tar --help' or 'tar --usage' for more information. || ( export ret=2; [[ == *No'
command terminated with exit code 1
</code></pre>
<p>I tried to replace * with <code>&2A</code> but that did not work.</p>
| <p>Your command contains nested double quotes, so use single quotes instead. With the outer double quotes, your local shell expands <code>$(...)</code>, <code>${all_files}</code> and <code>$?</code> before the command is ever sent to the pod, which is why the error shows fragments of locally expanded output; single quotes defer the expansion to the shell inside the pod:</p>
<pre><code>kubectl -n myns exec mypod -- /bin/bash -c 'err=$(tar -cvzf /tmp/logs_aj.tgz ${all_files} 2>&1) || ( export ret=$?; [[ $err == *"No such file or directory"* ]] || exit "$ret" )'
</code></pre>
|
<p>Auto-scaling is OFF and cluster upgrade is OFF, but I still don't know why the cluster got restarted this morning. All nodes got replaced/changed and all pods got restarted.</p>
<p>The reason is that it got restarted because the nodes were upgraded from version 1.22 to 1.23.</p>
<p>I am on the Regular release channel in my GKE cluster.</p>
| <p>You should double-check with Google support (if you have a support plan), but I know from painful experience that if you're running a version of GKE that falls out of support, they may force-upgrade you to keep you within support, even if you have cluster upgrade turned off -- unless you use a maintenance exclusion.</p>
<p>The REGULAR channel release notes are here: <a href="https://cloud.google.com/kubernetes-engine/docs/release-notes-regular" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/release-notes-regular</a></p>
<p>The December 5th one is probably the one that affected you.</p>
|
<h2>Background</h2>
<p>I have been playing around with GCP free plan. I am trying to learn how to apply gitops with IaC. To do so I'm trying to create the infrastructure for a kubernetes cluster using terraform, using google as the cloud provider.</p>
<p>Managed to configure the Github Actions to apply the changes when a push is made. However I get the following error:</p>
<pre><code>│ Error: googleapi: Error 403: Insufficient regional quota to satisfy request: resource "SSD_TOTAL_GB": request requires '300.0' and is short '50.0'. project has a quota of '250.0' with '250.0' available. View and manage quotas at https://console.cloud.google.com/iam-admin/quotas?usage=USED&project=swift-casing-370717., forbidden
│
│ with google_container_cluster.primary,
│ on main.tf line 26, in resource "google_container_cluster" "primary":
│ 26: resource "google_container_cluster" "primary" ***
</code></pre>
<h2>Configuration</h2>
<p>The above mentioned terraform configuration file is the following:</p>
<pre><code># https://registry.terraform.io/providers/hashicorp/google/latest/docs
provider "google" {
  project = "redacted"
  region  = "europe-west9"
}

# https://www.terraform.io/language/settings/backends/gcs
terraform {
  backend "gcs" {
    bucket = "redacted"
    prefix = "terraform/state"
  }

  required_providers {
    google = {
      source  = "hashicorp/google"
      version = "~> 4.0"
    }
  }
}

resource "google_service_account" "default" {
  account_id   = "service-account-id"
  display_name = "We still use master"
}

resource "google_container_cluster" "primary" {
  name     = "k8s-cluster"
  location = "europe-west9"

  # We can't create a cluster with no node pool defined, but we want to only use
  # separately managed node pools. So we create the smallest possible default
  # node pool and immediately delete it.
  remove_default_node_pool = true
  initial_node_count       = 1
}

resource "google_container_node_pool" "primary_preemptible_nodes" {
  name       = "k8s-node-pool"
  location   = "europe-west9"
  cluster    = google_container_cluster.primary.name
  node_count = 1

  node_config {
    preemptible  = true
    machine_type = "e2-small"

    # Google recommends custom service accounts that have cloud-platform scope and permissions granted via IAM Roles.
    service_account = google_service_account.default.email
    oauth_scopes = [
      "https://www.googleapis.com/auth/cloud-platform"
    ]
  }
}
</code></pre>
<h2>Problem</h2>
<p>It seems that I need to limit the resources so they use up to 250 GB, how can I do so?</p>
<h2>What I have tried</h2>
<h4>Reducing <a href="https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/container_cluster#disk_size_gb" rel="nofollow noreferrer">node_pool size</a>.</h4>
<p>According to the documentation default size is 100GB and changed it to 50 as follows:</p>
<pre><code>resource "google_container_node_pool" "primary_preemptible_nodes" {
name = "k8s-node-pool"
location = "europe-west9"
cluster = google_container_cluster.primary.name
node_count = 1
node_config {
preemptible = true
machine_type = "e2-small"
disk_size_gb = 50
# Google recommends custom service accounts that have cloud-platform scope and permissions granted via IAM Roles.
service_account = google_service_account.default.email
oauth_scopes = [
"https://www.googleapis.com/auth/cloud-platform"
]
}
}
</code></pre>
<p>Despite reducing the size the error message did not change at all.</p>
| <p>The 300 GB in the error most likely comes from the default node pool that is created (and then deleted) with the cluster itself: in a regional location such as <code>europe-west9</code> it spans three zones, and each node gets a 100 GB boot disk by default, which is why changing only your separate node pool did not help. The <code>google_container_cluster</code> resource also lets you set the boot disk size for that default pool. Update the configuration as follows:</p>
<pre><code>resource "google_container_cluster" "primary" {
name = "k8s-cluster"
location = "europe-west9"
# We can't create a cluster with no node pool defined, but we want to only use
# separately managed node pools. So we create the smallest possible default
# node pool and immediately delete it.
remove_default_node_pool = true
initial_node_count = 1
node_config {
disk_size_gb = 50
}
}
</code></pre>
|
<p>I want to write some tests for Kubernetes with python. This is a sample of my deployment file in Kubernetes:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: test-service
namespace: test
creationTimestamp: '2022-12-08T18:18:49Z'
labels:
app: test-service
team: teamA
.
.
</code></pre>
<p>what I want to do is get access to the items inside the deployment file:</p>
<pre class="lang-py prettyprint-override"><code>some codes here....
result = set()
some codes here.....
with open(deoloyment_file, "r") as stream:
for data in yaml.safe_load_all(stream.read().replace('\t', ' ')):
if data and data['kind'] == 'Deployment':
result.add(f"{data['metadata']['namespace']}:{data['metadata']['name']}:{data['metadata']['labels']['team']}")
</code></pre>
<p>This throws an exception because some deployment files have no labels or team. I'm wondering how I can conditionally access items with Python.</p>
| <p>You can specify a default value for the <code>dict.get</code> method to make it fall back to the value when a given key does not exist.</p>
<p>Assuming you want an empty string when either <code>labels</code> or <code>team</code> doesn't exist, you can change:</p>
<pre><code>data['metadata']['labels']['team']
</code></pre>
<p>to:</p>
<pre><code>data['metadata'].get('labels', {}).get('team', '')
</code></pre>
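<p>One edge case worth noting (an assumption about your manifests): if a file contains an explicit <code>labels: null</code>, then <code>get('labels', {})</code> returns <code>None</code> instead of the default, so a slightly more defensive variant is:</p>
<pre><code>labels = data['metadata'].get('labels') or {}
team = labels.get('team', '')
</code></pre>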
|
<p>I'm trying to set up a Harbor Registry in Kubernetes cluster locally in Minikube. I am using <a href="https://github.com/goharbor/harbor-helm" rel="nofollow noreferrer">https://github.com/goharbor/harbor-helm</a> and trying to set it up with the ingress controller. At the moment, I have not changed any defaults (the default is of type 'ingress') but have installed the Nginx Ingress Controller via <code>minikube addons enable ingress</code>.</p>
<p>Right now, the Helm chart installs, and I added an entry to my hosts file for <code>192.168.49.2 core.harbor.domain</code>, but the page simply returns a 503 error.</p>
<p>When reviewing the pod logs for the Harbor portal, I get this: <code>nginx: [emerg] socket() [::]:8080 failed (97: Address family not supported by protocol)</code></p>
<p>I'm pretty sure that the reason why I'm getting this is because I need some default values set when installing the chart, but I'm not really sure what those may be.</p>
| <p>I found out what the issue was: I needed to set the <code>ipFamily.ipv6.enabled</code> value to false:</p>
<p><code>helm upgrade -i my-release harbor/harbor --set ipFamily.ipv6.enabled=false</code></p>
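<p>The equivalent in a values file (the key maps one-to-one from the <code>--set</code> flag above):</p>
<pre><code>ipFamily:
  ipv6:
    enabled: false
</code></pre>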
|
<p>I have a k8s cluster which contains 2 nodes, and in the cluster I deployed 2 pods for the same application. For certain reasons I have to deploy a service (NodePort IP) for each pod, so in total I have 2 services for the application; for example, the service NodePort IPs are 192.142.1.11 and 192.142.1.12. Using these 2 IPs I can access the application from any node.</p>
<p>Now I am going to use keepalived to set up HA for the application. So:</p>
<ol>
<li>What's the best practice to install the keepalived service? On each k8s node or deploy it as pod?</li>
<li>How to configure the <code>interface</code> in the <strong>keepalived.conf</strong> file? You know the NodePort ips are configured on <em>kube-ipvs0</em> interface created by k8s and its status is down. Seems it cannot be used as the interface in keepalived.conf. Should I use the Node external interface if I start keepalived service on each node?</li>
</ol>
<p>Thanks for your help.</p>
| <p>If your final goal is HA for the masters / load balancing for user services in an on-prem environment, then you can take a look at these two projects:</p>
<p><a href="https://kube-vip.io/" rel="nofollow noreferrer">Kubevip</a>: can do both (HA masters + LoadBalancer type for user workload).</p>
<p><a href="https://github.com/metallb/metallb" rel="nofollow noreferrer">Metallb</a>:
user workload LoadBalancer</p>
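<p>As a rough sketch of the MetalLB route (layer-2 mode, assuming 192.142.1.11-192.142.1.12 is an address range you control and <code>app: my-app</code> / port 8080 stand in for your real labels and ports), you would define an address pool and then expose the application with a single Service of type LoadBalancer instead of two NodePort services:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: app-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.142.1.11-192.142.1.12
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: app-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - app-pool
---
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: LoadBalancer
  selector:
    app: my-app        # assumed pod label
  ports:
    - port: 80
      targetPort: 8080 # assumed container port
</code></pre>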
|
<p>I'm building an app that spawns Jobs (batch/v1), I need to update my Custom Resource status with the Job status.</p>
<p>I setup the controller with the following:</p>
<pre class="lang-golang prettyprint-override"><code>func (r *JobsManagedByRequestedBackupActionObserver) SetupWithManager(mgr ctrl.Manager) error {
return ctrl.NewControllerManagedBy(mgr).
For(&riotkitorgv1alpha1.RequestedBackupAction{}).
Owns(&batchv1.Job{}).
Owns(&batchv1.CronJob{}).
WithEventFilter(predicate.Funcs{
DeleteFunc: func(e event.DeleteEvent) bool {
return false
},
}).
Complete(r)
}
</code></pre>
<p>During <code>Reconcile(ctx context.Context, req ctrl.Request)</code> I fetch my RequestedBackupAction object (based on "req") and then I fetch the Jobs from the API using a dedicated tracking label.</p>
<pre class="lang-golang prettyprint-override"><code>list, err := kj.client.Jobs(namespace).List(ctx, metav1.ListOptions{LabelSelector: v1alpha1.LabelTrackingId + "=" + trackingId})
</code></pre>
<p>When I iterate over objects with:</p>
<pre class="lang-golang prettyprint-override"><code>for _, job := range list.Items {
logrus.Errorf("[++++++++++++] JOB name=%s, failed=%v, active=%v, succeeded=%v", job.Name, job.Status.Failed, job.Status.Active, job.Status.Succeeded)
}
</code></pre>
<p>Then I get multiple entries like this:</p>
<pre class="lang-bash prettyprint-override"><code>time="2022-12-12T20:00:55Z" level=error msg="[++++++++++++] JOB name=app1-backup-vmqrp, failed=0, active=1, succeeded=0"
</code></pre>
<p>But I never get a final entry where failed=1, active=0, succeeded=0, even though the Job actually finished - the point is that the controller is not being notified.</p>
<p>That's the final Job status:</p>
<pre class="lang-yaml prettyprint-override"><code> status:
conditions:
- lastProbeTime: "2022-12-12T20:00:56Z"
lastTransitionTime: "2022-12-12T20:00:56Z"
message: Job has reached the specified backoff limit
reason: BackoffLimitExceeded
status: "True"
type: Failed
failed: 1
ready: 0
startTime: "2022-12-12T20:00:50Z"
uncountedTerminatedPods: {}
</code></pre>
<p>What could be wrong?</p>
| <p>The solution was really dead simple - when the object is not ready, requeue it, which for a Job means waiting until it has finished. I still don't understand why the controller is not notified about the state change from active=1 to active=0 and from failed=0 to failed=1.</p>
<p><strong>Example:</strong></p>
<pre class="lang-golang prettyprint-override"><code>if healthStatus.Running {
return ctrl.Result{Requeue: true}, nil
}
</code></pre>
|
<p>I am trying to pass an entire set of YAML from <code>values.yaml</code> in Helm to the <code>templates</code>, so that whatever YAML input I put in <code>values.yaml</code> goes into the template's YAML as-is:</p>
<p>For example :</p>
<p><code>values.yaml</code></p>
<pre><code>...
...
metallbConfig: |-
apiVersion: metallb.io/v1beta2
kind: BGPPeer
metadata:
creationTimestamp: null
name: peer1
namespace: metallb-system
spec:
holdTime: 3s
keepaliveTime: 0s
myASN: 64026
passwordSecret: {}
peerASN: 65227
peerAddress: 10.252.254.194
status: {}
</code></pre>
<p><code>templates/resources.yaml</code> :</p>
<pre><code>{{ toYaml .Values.metallbConfig }}
</code></pre>
<p>Essentially what I want to achieve is for the whole <code>BGPPeer</code> section to be present in <code>resources.yaml</code> when I deploy the chart.</p>
<p>Currently I am getting this error :</p>
<pre><code># helm template metallbcnf . --output-dir outputs --debug
...
...
Error: YAML parse error on metallb/templates/resources.yaml: error unmarshaling JSON: while decoding JSON: json: cannot unmarshal string into Go value of type releaseutil.SimpleHead
helm.go:84: [debug] error unmarshaling JSON: while decoding JSON: json: cannot unmarshal string into Go value of type releaseutil.SimpleHead
</code></pre>
<p>Kindly help me resolve the same.</p>
| <p>If you want to embed the YAML wholesale, you don't need the <code>|-</code> block-scalar marker.</p>
<p>For example, I have this in <code>values.yaml</code></p>
<pre class="lang-yaml prettyprint-override"><code>...
probes:
livenessProbe:
httpGet:
path: /ping
port: 80
initialDelaySeconds: 15
periodSeconds: 60
successThreshold: 1
timeoutSeconds: 5
failureThreshold: 3
readinessProbe:
httpGet:
path: /ping
port: 80
initialDelaySeconds: 15
periodSeconds: 60
successThreshold: 1
timeoutSeconds: 5
failureThreshold: 3
...
</code></pre>
<p>Then use this in my helm deployment:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
...
spec:
...
template:
...
spec:
...
containers:
- name: {{ .Chart.Name }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag}}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
{{- toYaml .Values.probes | nindent 10 }}
...
...
</code></pre>
<p>You'll notice I need to be explicit about the indentation using <code>nindent</code>; otherwise Helm pastes the YAML exactly as it appears in <code>values.yaml</code>, which breaks the parsing.</p>
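<p>Applied to the original example, a minimal sketch would be to drop the <code>|-</code> so <code>metallbConfig</code> is a map rather than a string, and render it at the top level of the template (no <code>nindent</code> needed there, since it is a whole document):</p>
<pre class="lang-yaml prettyprint-override"><code># values.yaml
metallbConfig:
  apiVersion: metallb.io/v1beta2
  kind: BGPPeer
  metadata:
    name: peer1
    namespace: metallb-system
  spec:
    holdTime: 3s
    keepaliveTime: 0s
    myASN: 64026
    peerASN: 65227
    peerAddress: 10.252.254.194

# templates/resources.yaml
{{ toYaml .Values.metallbConfig }}
</code></pre>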
|
<pre><code>rules:
- verbs:
- get
- list
apiGroups:
- ''
resources:
- namespaces
- pods
- pods/log
- verbs:
- get
- list
apiGroups:
- apps
resources:
- deployments
</code></pre>
<p>I want to know the difference between</p>
<p>apiGroups:</p>
<ul>
<li>''</li>
</ul>
<p>and</p>
<p>apiGroups:</p>
<ul>
<li>apps</li>
</ul>
<p>What is the importance of apiGroups in manifests?</p>
| <p>The API group in a Kubernetes RBAC rule identifies which API group the rule targets. This is necessary because different API groups can expose resources with the same names and verbs, and Kubernetes is highly extensible, allowing new APIs whose verbs and resource names clash with those of other APIs.</p>
<p>In the manifest file, the API group <code>""</code> (empty string) represents the core Kubernetes API; core resources such as Pods use <code>apiGroups: [""]</code>. For Deployments, the group is <code>apps</code> (older versions also served them from <code>extensions</code>).</p>
<p>Refer to this <a href="https://kubernetes.io/docs/reference/using-api/#api-groups" rel="nofollow noreferrer">API Group official doc</a></p>
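<p>A quick way to see which API group a resource belongs to is <code>kubectl api-resources</code>; for example:</p>
<pre class="lang-bash prettyprint-override"><code># resources in the core group (the empty-string group, e.g. pods, namespaces)
kubectl api-resources --api-group=""

# resources in the apps group (e.g. deployments, statefulsets, daemonsets)
kubectl api-resources --api-group=apps
</code></pre>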
|
<p>I am using an <code>Ubuntu 22.04</code> machine to run and test Kubernetes locally. I need some functionality like <code>Docker-Desktop</code>. I mean it seems both <code>master</code> and <code>worker</code> nodes/machines will be installed by <code>Docker-Desktop</code> on the same machine. But when I try to install Kubernetes and following the instructions like <a href="https://www.cloudsigma.com/how-to-install-and-use-kubernetes-on-ubuntu-20-04/" rel="nofollow noreferrer">this</a>, at some points it says run the following codes on <code>master</code> node:</p>
<pre><code>sudo hostnamectl set-hostname kubernetes-master
</code></pre>
<p>Or run the following comands on the <code>worker</code> node machine:</p>
<pre><code>sudo hostnamectl set-hostname kubernetes-worker
</code></pre>
<p>I don't know how to specify <code>master</code>/<code>worker</code> nodes if I have only my local Ubuntu machine?</p>
<p>Or should I run <code>join</code> command after <code>kubeadm init</code> command? Because I can't understand the commands I run in my terminal will be considered as a command for which <code>master</code> or <code>worker</code> machine?</p>
<p>I am a little bit confused about this <code>master</code>/<code>worker</code> nodes or <code>client</code>/<code>server</code> machine stuff while I am just using one machine for both client and server machines.</p>
| <p>Prerequisites for installing kubernetes in cluster:</p>
<ol>
<li>Ubuntu instance with 4 GB RAM - Master Node - (with ports open to all traffic)</li>
<li>Ubuntu instance with at least 2 GB RAM - Worker Node - (with ports open to all traffic)</li>
</ol>
<p>It means you need to create these instances (one master and at least one worker) with any cloud provider like Google (GCP), Amazon (AWS), Atlantic.Net Cloud Platform, or CloudSigma, as per your convenience.</p>
<p>For creating an instance in gcp follow this <a href="https://cloud.google.com/compute/docs/create-linux-vm-instance" rel="nofollow noreferrer">guide</a>. If you don’t have an account create a new account ,New customers also get $300 in free credits to run, test, and deploy workloads.</p>
<p>After creating the instances you will get their IPs; using them you can SSH into each instance from a terminal on your local machine with the command: <code>ssh root@<ip address></code></p>
<p>From there you can follow any guide for installing kubernetes by using worker and master nodes.</p>
<p>example:</p>
<pre><code>sudo hostnamectl set-hostname <host name>
</code></pre>
<p>The above should be executed over SSH on the master node; similarly, you need to execute it (with the appropriate hostname) on the worker node.</p>
|
<p>In Kubernetes, I want to start a PostgreSQL pod, create a database and a user, and grant the user all privileges on the database. So I came up with the following command:</p>
<pre><code>kubectl run somepod --rm -i --restart=Never --image postgres:alpine --tty -- psql "postgresql://postgres:somepass@localhost" -c "CREATE DATABASE testdb" -c "CREATE USER myUser WITH PASSWORD 'password'" -c "GRANT ALL PRIVILEGES ON DATABASE testdb TO myUser"
</code></pre>
<p>It already works, but unfortunately the pod isn't terminated after the command. In Kubernetes, the pod is still alive and its status is "Succeeded". I already tried adding <code>-c '\q'</code> to the command line because I assumed that psql was still running, but it didn't help. Any ideas how I can remove the pod after the command has succeeded?</p>
| <p>You can try something like this:</p>
<pre><code>kubectl run test4 -i --tty --image=postgres --restart=Never --rm --command -- echo "hello"
hello
pod "test4" deleted
</code></pre>
<p>In your example, you are just executing a command in a running container where postgres has already started.</p>
<p>You need to override the command that is executed when the container starts.</p>
<p>Executing your command (the error below appears only because I don't have a postgres reachable):</p>
<pre><code>kubectl run test4 -i --tty --image=postgres --restart=Never --rm --command -- psql "postgresql://postgres:somepass@localhost" -c "CREATE DATABASE testdb" -c "CREATE USER myUser WITH PASSWORD 'password'" -c "GRANT ALL PRIVILEGES ON DATABASE testdb TO myUser"
</code></pre>
<p>This will work if your postgres on localhost has the right credentials.</p>
<p>Or it will return like this, because the connection couldn't be established:</p>
<pre><code>kubectl run test4 -i --tty --image=postgres --restart=Never --rm --command -- psql "postgresql://postgres:somepass@localhost" -c "CREATE DATABASE testdb" -c "CREATE USER myUser WITH PASSWORD 'password'" -c "GRANT ALL PRIVILEGES ON DATABASE testdb TO myUser"
psql: error: connection to server at "localhost" (127.0.0.1), port 5432 failed: Connection refused
Is the server running on that host and accepting TCP/IP connections?
connection to server at "localhost" (::1), port 5432 failed: Cannot assign requested address
Is the server running on that host and accepting TCP/IP connections?
pod "test4" deleted
</code></pre>
<p>But however the pod is beeing deleted.</p>
<p>Hint: to avoid doing that manually every time, you can put an initContainer in your deployment or run a Job, both with the same image and command.</p>
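<p>For completeness, a minimal sketch of the Job variant (same image and command as above; the connection string and credentials are the ones from the question and are assumptions about your setup):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: batch/v1
kind: Job
metadata:
  name: create-testdb
spec:
  backoffLimit: 3
  ttlSecondsAfterFinished: 60   # clean the Job up automatically after it finishes
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: psql
          image: postgres:alpine
          command:
            - psql
            - postgresql://postgres:somepass@localhost
            - -c
            - CREATE DATABASE testdb
            - -c
            - CREATE USER myUser WITH PASSWORD 'password'
            - -c
            - GRANT ALL PRIVILEGES ON DATABASE testdb TO myUser
</code></pre>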
|
<p>I got a task, to create a helm-chart and use a kubernetes database operator that creates automatically credentials for the database.</p>
<p>I don't really understand how this is possible.<br />
I found this operator in Operator hub: <a href="https://operatorhub.io/operator/postgresql-operator-dev4devs-com" rel="nofollow noreferrer">https://operatorhub.io/operator/postgresql-operator-dev4devs-com</a></p>
<p>This is an operator for postgreSQL. It is mentioned that :</p>
<ul>
<li>Allow you setup the operator to get the values required for its enviroment variables (user, password and database name) in an ConfigMap applied in the cluster already. <em>NOTE: It is very useful to centralize and share this information accross the cluster for your solutions. Also, you are able to configure each configMap keys that contains each EnvVar</em> required for the PostgreSQL database image.</li>
</ul>
<p>If someone has experience with kubernetes operators, could I get a hint, which one should I use? (easiest, it doesn't have to be Postgres)</p>
| <p>I am not much of a fan of operators; I avoid writing one unless it is required, or keep it as a last option.</p>
<p>To understand it easily, consider that there are different types of operators:</p>
<ul>
<li>Helm-based operator</li>
<li>Ansible operator</li>
<li>Go, python other language-based operators</li>
</ul>
<p>All of these are built with the <a href="https://github.com/operator-framework" rel="nofollow noreferrer">Operator Framework</a>.</p>
<p>You could create a Helm-based operator that keeps watch on a secret; if the secret is removed or does not exist, the operator creates a new one for the database.</p>
<p>Steps :</p>
<ol>
<li>Install the Operator SDK first</li>
<li><code>operator-sdk new db-operator --api-version=harsh.com/v1alpha1 --kind=DbSecret --type=helm --helm-chart=db-secret --helm-chart-repo=<repo url></code></li>
<li>Add/Edit the helm chart YAML with DB secret creation as you are planning</li>
<li>Consider <strong>kind:DbSecret</strong> as values.yaml in helm</li>
<li>Build and deploy the Operator (operator-sdk build) it will watch for configmap/secret if not there it will create it</li>
</ol>
<p>you might have to create the necessary service account, role, and role binding.</p>
<p>Ref document to create the helm operator : <a href="https://sdk.operatorframework.io/docs/building-operators/helm/tutorial/" rel="nofollow noreferrer">https://sdk.operatorframework.io/docs/building-operators/helm/tutorial/</a></p>
|
<p>I have my application <a href="https://myapp.com" rel="nofollow noreferrer">https://myapp.com</a> deployed on K8S, with an nginx ingress controller. HTTPS is resolved at nginx.</p>
<p>Now there is a need to expose one service on a specific port for example <a href="https://myapp.com:8888" rel="nofollow noreferrer">https://myapp.com:8888</a>. Idea is to keep <a href="https://myapp.com" rel="nofollow noreferrer">https://myapp.com</a> secured inside the private network and expose only port number 8888 to the internet for integration.</p>
<p>Is there a way all traffic can be handled by the ingress controller, including tls termination, and it can also expose 8888 port and map it to a service?</p>
<p>Or do I need another nginx terminating TLS and exposed on a NodePort? I am not sure if I can access services like <a href="https://myapp.com" rel="nofollow noreferrer">https://myapp.com</a>:<node_port> with HTTPS.</p>
<p>Is using multiple ingress controllers an option?</p>
<p>What is the best practice to do this in Kubernetes?</p>
| <p>It is not a best practice to expose a custom port over the internet.</p>
<p>Instead, create a sub-domain (e.g. <a href="https://custom.myapp.com" rel="nofollow noreferrer">https://custom.myapp.com</a>) which points to the internal service on port 8888.</p>
<p>Then create a separate nginx Ingress (not an ingress controller) which points to that "custom.myapp.com" sub-domain.</p>
<p>Example manifest file as follows:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress-myapp-service
namespace: abc
rules:
- host: custom.myapp.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: myapp-service
port:
number: 8888
</code></pre>
<p>Hope this helps.</p>
|
<h2>Problem statement</h2>
<p>The kubernetes metrics server starts, but doesn't collect metrics.
When running <code>$ kubectl top pods</code> it returns <code>error: Metrics not available for pod <namespace>/<deployment></code>.</p>
<h2>Artifacts</h2>
<p>I'm using the following <code>metrics.yml</code></p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: ServiceAccount
metadata:
labels:
k8s-app: metrics-server
name: metrics-server
namespace: metrics-server
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
k8s-app: metrics-server
rbac.authorization.k8s.io/aggregate-to-admin: "true"
rbac.authorization.k8s.io/aggregate-to-edit: "true"
rbac.authorization.k8s.io/aggregate-to-view: "true"
name: system:aggregated-metrics-reader
rules:
- apiGroups:
- metrics.k8s.io
resources:
- pods
- nodes
verbs:
- get
- list
- watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
k8s-app: metrics-server
name: system:metrics-server
rules:
- apiGroups:
- ""
resources:
- nodes/metrics
verbs:
- get
- apiGroups:
- ""
resources:
- pods
- nodes
- nodes/stats
- namespaces
- configmaps
verbs:
- get
- list
- watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
labels:
k8s-app: metrics-server
name: metrics-server-auth-reader
namespace: metrics-server
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
name: metrics-server
namespace: metrics-server
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
labels:
k8s-app: metrics-server
name: metrics-server:system:auth-delegator
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:auth-delegator
subjects:
- kind: ServiceAccount
name: metrics-server
namespace: metrics-server
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
labels:
k8s-app: metrics-server
name: system:metrics-server
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:metrics-server
subjects:
- kind: ServiceAccount
name: metrics-server
namespace: metrics-server
---
apiVersion: v1
kind: Service
metadata:
labels:
k8s-app: metrics-server
name: metrics-server
namespace: metrics-server
spec:
ports:
- name: https
port: 443
protocol: TCP
targetPort: https
selector:
k8s-app: metrics-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
k8s-app: metrics-server
name: metrics-server
namespace: metrics-server
spec:
selector:
matchLabels:
k8s-app: metrics-server
strategy:
rollingUpdate:
maxUnavailable: 0
template:
metadata:
labels:
k8s-app: metrics-server
spec:
hostNetwork: true
containers:
- args:
- /metrics-server
- --cert-dir=/tmp
- --secure-port=4443
- --kubelet-preferred-address-types=InternalIP
- --kubelet-use-node-status-port
- --kubelet-insecure-tls
image: k8s.gcr.io/metrics-server/metrics-server:v0.6.2
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 3
httpGet:
path: /livez
port: https
scheme: HTTPS
periodSeconds: 10
name: metrics-server
ports:
- containerPort: 4443
name: https
protocol: TCP
readinessProbe:
failureThreshold: 3
httpGet:
path: /readyz
port: https
scheme: HTTPS
initialDelaySeconds: 20
periodSeconds: 10
resources:
requests:
cpu: 100m
memory: 200Mi
securityContext:
allowPrivilegeEscalation: false
readOnlyRootFilesystem: true
runAsNonRoot: true
runAsUser: 1000
volumeMounts:
- mountPath: /tmp
name: tmp-dir
nodeSelector:
kubernetes.io/os: linux
priorityClassName: system-cluster-critical
serviceAccountName: metrics-server
volumes:
- emptyDir: {}
name: tmp-dir
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
labels:
k8s-app: metrics-server
name: v1beta1.metrics.k8s.io
spec:
group: metrics.k8s.io
groupPriorityMinimum: 100
insecureSkipTLSVerify: true
service:
name: metrics-server
namespace: metrics-server
version: v1beta1
versionPriority: 100
</code></pre>
<p>Once deployed on my local machine, the deployment looks like this:</p>
<pre><code>$ kubectl describe deployment.apps/metrics-server -n metrics-server
Name: metrics-server
Namespace: metrics-server
CreationTimestamp: Tue, 13 Dec 2022 12:38:39 +0100
Labels: k8s-app=metrics-server
Annotations: deployment.kubernetes.io/revision: 1
Selector: k8s-app=metrics-server
Replicas: 1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 0 max unavailable, 25% max surge
Pod Template:
Labels: k8s-app=metrics-server
Service Account: metrics-server
Containers:
metrics-server:
Image: k8s.gcr.io/metrics-server/metrics-server:v0.6.2
Port: 4443/TCP
Host Port: 4443/TCP
Args:
/metrics-server
--cert-dir=/tmp
--secure-port=4443
--kubelet-preferred-address-types=InternalIP
--kubelet-use-node-status-port
--kubelet-insecure-tls
Requests:
cpu: 100m
memory: 200Mi
Liveness: http-get https://:https/livez delay=0s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get https://:https/readyz delay=20s timeout=1s period=10s #success=1 #failure=3
Environment: <none>
Mounts:
/tmp from tmp-dir (rw)
Volumes:
tmp-dir:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
Priority Class Name: system-cluster-critical
Conditions:
Type Status Reason
---- ------ ------
Available True MinimumReplicasAvailable
Progressing True NewReplicaSetAvailable
OldReplicaSets: <none>
NewReplicaSet: metrics-server-5988cd75cb (1/1 replicas created)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ScalingReplicaSet 2m2s deployment-controller Scaled up replica set metrics-server-5988cd75cb to 1
</code></pre>
<h2>Findings</h2>
<p>This exact same setup (without <code>--kubelet-insecure-tls</code>) is working on another DigitalOcean hosted cluster. It has the following kubernetes version:</p>
<pre><code>$ kubectl version
Client Version: version.Info{Major:"1", Minor:"23+", GitVersion:"v1.23.14-dispatcher-dirty", GitCommit:"35498c8a8d141664928467fda116cd500d09bc21", GitTreeState:"dirty", BuildDate:"2022-11-16T20:14:00Z", GoVersion:"go1.17.13", Compiler:"gc", Platform:"windows/amd64"}
Server Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.9", GitCommit:"c1de2d70269039fe55efb98e737d9a29f9155246", GitTreeState:"clean", BuildDate:"2022-07-13T14:19:57Z", GoVersion:"go1.17.11", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
<p>My machine, where the metrics server isn't working has the version:</p>
<pre><code>$ kubectl version
Client Version: version.Info{Major:"1", Minor:"23+", GitVersion:"v1.23.14-dispatcher-dirty", GitCommit:"35498c8a8d141664928467fda116cd500d09bc21", GitTreeState:"dirty", BuildDate:"2022-11-16T20:14:00Z", GoVersion:"go1.17.13", Compiler:"gc", Platform:"windows/amd64"}
Server Version: version.Info{Major:"1", Minor:"24", GitVersion:"v1.24.0", GitCommit:"4ce5a8954017644c5420bae81d72b09b735c21f0", GitTreeState:"clean", BuildDate:"2022-05-03T13:38:19Z", GoVersion:"go1.18.1", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
<h2>Possible solutions?</h2>
<ol>
<li>Try to install the same kubernetes version on my local machine? --> Tried this, but Docker Desktop wouldn't let me change the server version.</li>
</ol>
| <p>I think I found the solution. I overlooked the following comment: <a href="https://github.com/kubernetes-sigs/metrics-server/issues/1061#issuecomment-1239227118" rel="nofollow noreferrer">https://github.com/kubernetes-sigs/metrics-server/issues/1061#issuecomment-1239227118</a></p>
<p>It turns out that Docker-Desktop 4.12 has this fixed. I was still on 4.9 but running a new Kubernetes version (1.24) where Dockershim has been removed. Several bugs have been fixed since 4.9. Updating Docker-Desktop to 4.15 (currently the latest version) fixed the issue for me.</p>
<p>So to summarize:</p>
<ol>
<li>Update docker desktop to 4.12 or later</li>
<li>Reset the Kubernetes cluster</li>
<li>Check if cluster was upgraded using <code>kubectl version</code></li>
<li>Apply <code>metrics.yml</code></li>
</ol>
<p><a href="https://i.stack.imgur.com/24L4m.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/24L4m.png" alt="enter image description here" /></a></p>
<p>This is the final metrics.yml</p>
<pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
labels:
k8s-app: metrics-server
name: metrics-server
namespace: metrics-server
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
k8s-app: metrics-server
rbac.authorization.k8s.io/aggregate-to-admin: "true"
rbac.authorization.k8s.io/aggregate-to-edit: "true"
rbac.authorization.k8s.io/aggregate-to-view: "true"
name: system:aggregated-metrics-reader
rules:
- apiGroups:
- metrics.k8s.io
resources:
- pods
- nodes
verbs:
- get
- list
- watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
k8s-app: metrics-server
name: system:metrics-server
rules:
- apiGroups:
- ""
resources:
- nodes/metrics
verbs:
- get
- apiGroups:
- ""
resources:
- pods
- nodes
- nodes/stats
- namespaces
- configmaps
verbs:
- get
- list
- watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
labels:
k8s-app: metrics-server
name: metrics-server-auth-reader
namespace: metrics-server
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
name: metrics-server
namespace: metrics-server
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
labels:
k8s-app: metrics-server
name: metrics-server:system:auth-delegator
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:auth-delegator
subjects:
- kind: ServiceAccount
name: metrics-server
namespace: metrics-server
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
labels:
k8s-app: metrics-server
name: system:metrics-server
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:metrics-server
subjects:
- kind: ServiceAccount
name: metrics-server
namespace: metrics-server
---
apiVersion: v1
kind: Service
metadata:
labels:
k8s-app: metrics-server
name: metrics-server
namespace: metrics-server
spec:
ports:
- name: https
port: 443
protocol: TCP
targetPort: https
selector:
k8s-app: metrics-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
k8s-app: metrics-server
name: metrics-server
namespace: metrics-server
spec:
selector:
matchLabels:
k8s-app: metrics-server
strategy:
rollingUpdate:
maxUnavailable: 0
template:
metadata:
labels:
k8s-app: metrics-server
spec:
containers:
- args:
- --cert-dir=/tmp
- --secure-port=4443
- --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
- --kubelet-use-node-status-port
- --metric-resolution=40s
- --kubelet-insecure-tls
image: k8s.gcr.io/metrics-server/metrics-server:v0.6.2
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 3
httpGet:
path: /livez
port: https
scheme: HTTPS
periodSeconds: 10
name: metrics-server
ports:
- containerPort: 4443
name: https
protocol: TCP
readinessProbe:
failureThreshold: 3
httpGet:
path: /readyz
port: https
scheme: HTTPS
initialDelaySeconds: 20
periodSeconds: 10
resources:
requests:
cpu: 100m
memory: 200Mi
securityContext:
allowPrivilegeEscalation: false
readOnlyRootFilesystem: true
runAsNonRoot: true
runAsUser: 1000
volumeMounts:
- mountPath: /tmp
name: tmp-dir
nodeSelector:
kubernetes.io/os: linux
priorityClassName: system-cluster-critical
serviceAccountName: metrics-server
volumes:
- emptyDir: {}
name: tmp-dir
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
labels:
k8s-app: metrics-server
name: v1beta1.metrics.k8s.io
spec:
group: metrics.k8s.io
groupPriorityMinimum: 100
insecureSkipTLSVerify: true
service:
name: metrics-server
namespace: metrics-server
version: v1beta1
versionPriority: 100
</code></pre>
|
<p>I am trying to create an application in Kubernetes (Minikube) and expose its service to other applications in the same cluster, but I get connection refused if I try to access this service from a Kubernetes node.</p>
<p>This application just listens on the HTTP address <code>127.0.0.1:9897</code> and sends a response.</p>
<p>Below is my yaml file:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: exporter-test
namespace: datenlord-monitoring
labels:
app: exporter-test
spec:
replicas: 1
selector:
matchLabels:
app: exporter-test
template:
metadata:
labels:
app: exporter-test
spec:
containers:
- name: prometheus
image: 34342/hello_world
ports:
- containerPort: 9897
---
apiVersion: v1
kind: Service
metadata:
name: exporter-test-service
namespace: datenlord-monitoring
annotations:
prometheus.io/scrape: 'true'
prometheus.io/port: '9897'
spec:
selector:
app: exporter-test
type: NodePort
ports:
- port: 8080
targetPort: 9897
nodePort: 30001
</code></pre>
<p>After I apply this yaml file, the pod and the service are deployed correctly, and I am sure this pod works correctly, since when I log in to the pod with
<code>kubectl exec -it exporter-test-* -- sh</code> and just run <code>curl 127.0.0.1:9897</code>, I get the correct response.</p>
<p>Also, if I run <code>kubectl port-forward exporter-test-* -n datenlord-monitoring 8080:9897</code>, I can get correct response from <code>localhost:8080</code>. So this application should work well.</p>
<p>However, when I try to access this service from another application in the same K8s cluster via <code>exporter-test-service.datenlord-monitoring.svc:30001</code>, or just run <code>curl nodeIp:30001</code> or <code>curl clusterIp:8080</code> on a k8s node, I get <code>Connection refused</code>.</p>
<p>Has anyone had the same issue before? I appreciate any help! Thanks!</p>
| <p>You need to change <code>127.0.0.1:9897</code> to <code>0.0.0.0:9897</code> so that the application listens on all interfaces for incoming requests.</p>
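<p>Once the application binds to 0.0.0.0, one way to verify the fix from inside the cluster (a sketch; note that the service DNS name uses the service port 8080, while 30001 is only the NodePort on the node IPs) could be:</p>
<pre class="lang-bash prettyprint-override"><code># run a throwaway curl pod and hit the service by its cluster DNS name
kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl --command -- \
  curl http://exporter-test-service.datenlord-monitoring.svc:8080
</code></pre>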
|
<p>From what I understand, with the above configuration, it is possible to have 2 pods that exist in the cluster associated with the deployment. However, the old Pod is guaranteed to be in the 'Terminated' state. An example scenario is updating the image version associated with the deployment.</p>
<p>There should not be any scenario where there are 2 Pods that are associated with the deployment and both are in the 'Running' phase. Is this correct?</p>
<p>In the scenarios I tried, for example, Pod eviction or updating the Pod spec. The existing Pod enters 'Terminating' state and a new Pod is deployed.</p>
<p>This is what I expected. Just wanted to make sure that all possible scenarios around updating Pod spec or Pod eviction cannot end up with two Pods in the 'Running' state as it would violate the replica count = 1 config.</p>
| <p>It depends on your <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#strategy" rel="nofollow noreferrer">update strategy</a>. Many times it's desired to have the new pod running and healthy before you shut down the old pod, otherwise you have downtime which may not be acceptable as per business requirements. By default, it's doing rolling updates.</p>
<p>The defaults look like the below, so if you don't specify anything, that's what will be used.</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
spec:
strategy:
type: RollingUpdate
rollingUpdate:
maxSurge: 25%
maxUnavailable: 25%
</code></pre>
<p>So usually, you would have a moment where both pods are <code>running</code>. But Kubernetes will terminate the old pod as soon as the new pod becomes <code>ready</code>, so it will be hard, if not impossible, to literally see both in the state <code>ready</code>.</p>
<p>You can read about it in the docs: <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#updating-a-deployment" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#updating-a-deployment</a></p>
<blockquote>
<p>Deployment ensures that only a certain number of Pods are down while they are being updated. By default, it ensures that at least 75% of the desired number of Pods are up (25% max unavailable).</p>
<p>Deployment also ensures that only a certain number of Pods are created above the desired number of Pods. By default, it ensures that at most 125% of the desired number of Pods are up (25% max surge).</p>
<p>For example, if you look at the above Deployment closely, you will see that it first creates a new Pod, then deletes an old Pod, and creates another new one. It does not kill old Pods until a sufficient number of new Pods have come up, and does not create new Pods until a sufficient number of old Pods have been killed. It makes sure that at least 3 Pods are available and that at max 4 Pods in total are available. In case of a Deployment with 4 replicas, the number of Pods would be between 3 and 5.</p>
</blockquote>
<p>This is also explained here: <a href="https://kubernetes.io/docs/tutorials/kubernetes-basics/update/update-intro/" rel="nofollow noreferrer">https://kubernetes.io/docs/tutorials/kubernetes-basics/update/update-intro/</a></p>
<blockquote>
<p>Users expect applications to be available all the time and developers are expected to deploy new versions of them several times a day. In Kubernetes this is done with rolling updates. Rolling updates allow Deployments' update to take place with zero downtime by incrementally updating Pods instances with new ones. The new Pods will be scheduled on Nodes with available resources.</p>
</blockquote>
<p>To get the behaviour you described, you would set <code>.spec.strategy.type</code> to <code>Recreate</code>, as sketched after the quote below.</p>
<blockquote>
<p>All existing Pods are killed before new ones are created when .spec.strategy.type==Recreate.</p>
</blockquote>
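<p>A minimal sketch of that strategy setting (only the relevant fields shown):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
spec:
  replicas: 1
  strategy:
    type: Recreate   # old pod is terminated before the new one is created
</code></pre>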
|
<p>I'm trying to containerize a python application, where I used the Kubernetes package. My Dockerfile is:</p>
<pre class="lang-bash prettyprint-override"><code>FROM python:3.10.6
ADD App_pod.py .
ADD config.yml ~/.kube/config
RUN pip3 install requests beautifulsoup4
RUN pip3 install kubernetes
RUN apt-get install -y ca-certificates curl
CMD ["python3","./App_pod.py"]
</code></pre>
<p>When I enter <code>docker run -i -t run</code>, it shows me the following error:</p>
<pre class="lang-none prettyprint-override"><code>> Traceback (most recent call last):
> File "//./App_pod.py", line 9, in <module>
> config.load_kube_config()
> File "/usr/local/lib/python3.10/site-packages/kubernetes/config/kube_config.py", line 813, in load_kube_config
> loader = _get_kube_config_loader(
> File "/usr/local/lib/python3.10/site-packages/kubernetes/config/kube_config.py", line 770, in _get_kube_config_loader
> raise ConfigException(
> kubernetes.config.config_exception.ConfigException: Invalid kube-config file. No configuration found.
</code></pre>
<p>I tried to change the appearing error from raise exeption but it still shows me the same error, so I guess maybe it doesn't detect the changes I do.</p>
<p>I used the ADD command to copy the config file to the specific path, but it still shows the same error. I also tried to mount the volume, but it can't be mounted since there is no container yet, because the config file needs to be created first. Has anyone faced this problem and knows how to fix it?</p>
| <p>A couple of issues:</p>
<ul>
<li><code>~</code> has a special meaning on Linux; it is a shortcut to the user's home directory. I think <code>~</code> is not available for the destination path of a Dockerfile <code>ADD</code> (or <code>COPY</code>) command. You should use a specific path instead</li>
<li>It's not advisable to include a file that includes credentials (such as a kubeconfig file) in a container. You should instead mount data such as this.</li>
<li>You're renaming the file <code>config.yml</code> is the source but <code>config</code> is the destination which may be problematic.</li>
</ul>
<p>You don't include the full Dockerfile so it's unclear what your <code>FROM</code> image is and you don't include the source of your <code>App_pod.py</code> file so it's unclear where it looks for the kubeconfig file, but...</p>
<ol>
<li>Generalize your code's use of the kubeconfig file by providing it as an argument or using an environment variable. This way, the location is dynamic and the code can work when run outside of a container or within</li>
<li>Mount the file into the container when you <code>run</code> the container</li>
</ol>
<p>Let's assume you change your code to accept a command-line argument for <code>config.yml</code> in the working directory, e.g.:</p>
<pre class="lang-bash prettyprint-override"><code>python3 App_pod.py ~/.kube/config
</code></pre>
<p>Then, when you run the container, you will need to mount the <code>config.yml</code> into the container and reference it. In the following, I'm using different source and destination folders to demonstrate the point:</p>
<pre class="lang-bash prettyprint-override"><code>docker run \
--interactive --tty --rm \
--volume=~/.kube/config:/somewhere/.kube/config \
your-container /somewhere/.kube/config
</code></pre>
<p>You can use <code>~</code> in the <code>--volume</code> flag because <code>~</code> is meaningful on your (Linux) host. The file is mapped to <code>/somewhere/.kube/config</code> in the container and so your Python file needs to point to the container location (!) when you invoke it.</p>
<p>Also, so that you may use command-line parameters, I encourage you to use <code>ENTRYPOINT</code> instead of <code>CMD</code> to run your Python program:</p>
<pre><code>...
ENTRYPOINT ["python3","./App_pod.py"]
</code></pre>
|
<p>I need to execute <a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#-em-token-em-" rel="nofollow noreferrer">this operation</a> from my (.NET) application. How do I figure out which <a href="https://kubernetes.io/docs/reference/" rel="nofollow noreferrer">k8s API to map</a> this command to? Ideally, how do I use <a href="https://github.com/kubernetes-client/csharp" rel="nofollow noreferrer">the .NET SDK</a> to make this call?</p>
| <p>You can read the API docs: <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.23/#tokenrequest-v1-authentication-k8s-io" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.23/#tokenrequest-v1-authentication-k8s-io</a></p>
<p>You can also inspect what kubectl is doing by increasing the verbosity level: <a href="https://kubernetes.io/docs/reference/kubectl/cheatsheet/#kubectl-output-verbosity-and-debugging" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/kubectl/cheatsheet/#kubectl-output-verbosity-and-debugging</a></p>
<blockquote>
<p>Kubectl verbosity is controlled with the -v or --v flags followed by an integer representing the log level. General Kubernetes logging conventions and the associated log levels are described <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/sig-instrumentation/logging.md" rel="nofollow noreferrer">here</a>.</p>
</blockquote>
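<p>As a concrete way to find the call to map, a sketch (this assumes a 1.24+ cluster where <code>kubectl create token</code> exists): run the command with high verbosity and note the endpoint it hits, which is the TokenRequest subresource of the service account; that is the same endpoint your .NET code needs to call.</p>
<pre class="lang-bash prettyprint-override"><code># request a token for the "default" service account and show the REST traffic
kubectl create token default -v=8
# the verbose output includes a POST to:
#   /api/v1/namespaces/<namespace>/serviceaccounts/default/token
# i.e. the TokenRequest subresource documented above
</code></pre>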
|
<p>I am following the Kubernetes documentation on <a href="https://kubernetes.io/docs/concepts/configuration/secret/" rel="nofollow noreferrer">secrets</a>. I have this <code>secret.yaml</code> file:</p>
<pre><code>apiVersion: v1
kind: Secret
metadata:
name: mysecret
type: Opaque
data:
val1: YXNkZgo=
stringData:
val1: asdf
</code></pre>
<p>and <code>secret-pod.yaml</code>:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: mysecretpod
spec:
containers:
- name: mypod
image: nginx
volumeMounts:
- name: myval
mountPath: /etc/secret
readOnly: true
volumes:
- name: myval
secret:
secretName: val1
items:
- key: val1
path: myval
</code></pre>
<p>I use <code>kubectl apply -f</code> on both of these files. Then using <code>kubectl exec -it mysecretpod -- cat /etc/secret/myval</code>, I can see the value <code>asdf</code> in the file <code>/etc/secret/myval</code> of <code>mysecretpod</code>.</p>
<p>However I want the mounted path to be <code>/etc/myval</code>. Thus I make the following change in <code>secret-pod.yaml</code>:</p>
<pre><code> volumeMounts:
- name: myval
mountPath: /etc
readOnly: true
</code></pre>
<p>After using <code>kubectl apply -f</code> on that file again, I check pod creation with <code>kubectl get pods --all-namespaces</code>. This is what I see:</p>
<pre><code>NAMESPACE NAME READY STATUS RESTARTS AGE
default mysecretpod 0/1 CrashLoopBackOff 2 (34s ago) 62s
</code></pre>
<p>Looking into that pod using <code>kubectl describe pods mysecretpod</code>, this is what I see:</p>
<pre><code>Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 35s default-scheduler Successfully assigned default/mysecretpod to minikube
Normal Pulled 32s kubelet Successfully pulled image "nginx" in 2.635766453s
Warning Failed 31s kubelet Error: failed to start container "mypod": Error response from daemon: OCI runtime create failed: container_linux.go:380: starting container process caused: process_linux.go:545: container init caused: rootfs_linux.go:76: mounting "/var/lib/docker/containers/c84a8d278dc2f131daf9f322d26ff8c54d68cea8cd9c0ce209f68d7a9b677b3c/resolv.conf" to rootfs at "/etc/resolv.conf" caused: open /var/lib/docker/overlay2/4aaf54c61f7c80937a8edc094b27d6590538632e0209165e0b8c96e9e779a4b6/merged/etc/resolv.conf: read-only file system: unknown
Normal Pulled 28s kubelet Successfully pulled image "nginx" in 3.313846185s
Warning Failed 28s kubelet Error: failed to start container "mypod": Error response from daemon: OCI runtime create failed: container_linux.go:380: starting container process caused: process_linux.go:545: container init caused: rootfs_linux.go:76: mounting "/var/lib/docker/containers/c84a8d278dc2f131daf9f322d26ff8c54d68cea8cd9c0ce209f68d7a9b677b3c/resolv.conf" to rootfs at "/etc/resolv.conf" caused: open /var/lib/docker/overlay2/34af5138f14d192ade7e53211476943ea82cd2c8186d69ca79a3adf2abbc0978/merged/etc/resolv.conf: read-only file system: unknown
Warning BackOff 24s kubelet Back-off restarting failed container
Normal Pulling 9s (x3 over 34s) kubelet Pulling image "nginx"
Normal Created 7s (x3 over 32s) kubelet Created container mypod
Normal Pulled 7s kubelet Successfully pulled image "nginx" in 2.73055072s
Warning Failed 6s kubelet Error: failed to start container "mypod": Error response from daemon: OCI runtime create failed: container_linux.go:380: starting container process caused: process_linux.go:545: container init caused: rootfs_linux.go:76: mounting "/var/lib/docker/containers/c84a8d278dc2f131daf9f322d26ff8c54d68cea8cd9c0ce209f68d7a9b677b3c/resolv.conf" to rootfs at "/etc/resolv.conf" caused: open /var/lib/docker/overlay2/01bfa6b2c35d5eb12ad7ad204a5acc58688c1e04d9b5891382e48c26d2e7077f/merged/etc/resolv.conf: read-only file system: unknown
</code></pre>
<p>Why does this fail? Is it possible to have a secret mounted at the <code>/etc</code> level instead of <code>/etc/something</code> level? If yes, how can I achieve that? Thank you so much!</p>
| <pre><code>volumeMounts:
- name: myval
mountPath: /etc
readOnly: true
</code></pre>
<p>Instead of mounting over the /etc <strong>directory</strong>, try mounting the secret as a single file:</p>
<pre><code>apiVersion: v1
kind: Secret
metadata:
name: nginx
type: Opaque
stringData:
val1: asdf
---
apiVersion: v1
kind: Pod
metadata:
name: nginx
spec:
containers:
- name: nginx
image: nginx:alpine
volumeMounts:
- name: myval
mountPath: /etc/myval
subPath: myval
volumes:
- name: myval
secret:
secretName: nginx
items:
- key: val1
path: myval
...
</code></pre>
|
<p>I understand that a service of type ExternalName will point to a specified deployment that is not exposed externally using the specified external name as the DNS name. I am using minikube in my local machine with docker drive. I created a deployment using a custom image. When I created a service with default type (Cluster IP) and Load Balancer for the specific deployment I was able to access it after port forwarding to the local ip address. This was possible for service type ExternalName also but accessible using the ip address and not the specified <strong>external name</strong>.</p>
<p>According to my understanding service of type ExternalName should be accessed when using the specified <strong>external name</strong>. But I wasn't able to do it. Can anyone say how to access an external name service and whether my understanding is correct?</p>
<p>This is the <code>externalName.yaml</code> file I used.</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: k8s-hello-test
spec:
selector:
app: k8s-yaml-hello
ports:
- port: 3000
targetPort: 3000
type: ExternalName
externalName: k8s-hello-test.com
</code></pre>
<p>After port forwarding using <code>kubectl port-forward service/k8s-hello-test 3000:3000</code>, the particular deployment was accessible at <code>http://127.0.0.1:3000</code>.
But even after adding it to the <strong>/etc/hosts</strong> file, it cannot be accessed using <code>http://k8s-hello-test.com</code>.</p>
| <blockquote>
<p>According to my understanding service of type ExternalName should be
accessed when using the specified external name. But I wasn't able to
do it. Can anyone say how to access it using its external name?</p>
</blockquote>
<p>You are wrong; an ExternalName service is for making external connections. Suppose you are using a third-party geolocation API like <a href="https://findmeip.com" rel="nofollow noreferrer">https://findmeip.com</a>; you can leverage the <strong>ExternalName</strong> service for that.</p>
<blockquote>
<p>An ExternalName Service is a special case of Service that does not
have selectors and uses DNS names instead. For more information, see
the ExternalName section later in this document.</p>
</blockquote>
<p>For example</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: geolocation-service
spec:
type: ExternalName
externalName: api.findmeip.com
</code></pre>
<p>So your application can connect to <strong>geolocation-service</strong>, which forwards requests to the external DNS name mentioned in the service.</p>
<p>As an <strong>ExternalName</strong> service does not have selectors, you cannot use port-forwarding with it, since port-forwarding connects to a Pod and forwards the request.</p>
<p>Read more at : <a href="https://kubernetes.io/docs/concepts/services-networking/service/#externalname" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/service/#externalname</a></p>
|
<h1>edited</h1>
<p>I find myself frequently looking to see if something has stopped happening. To do this, it helps to see events in chronological order...</p>
<p><a href="https://stackoverflow.com/a/55858337/8656552">This solution</a> <em>seems</em> to work, but the formatting still drives me insane...</p>
<p>The solution I have "sort of" working -</p>
<pre><code>kubectl get events |
sed -E '/^[6789][0-9]s/{h; s/^(.).*/\1/; y/6789/0123/; s/^(.)/01m\1/;
x; s/^.(.*)/\1/; H;
x; s/\n//; };
s/^10([0-9]s)/01m4\1/; s/^11([0-9]s)/01m5\1/; s/^([0-9]s)/00m0\1/; s/^([0-9]+s)/00m\1/;
s/^([0-9]m)/0\1/; s/^([0-9]+m)([0-9]s)/\10\2/;
s/^L/_L/;' | sort -r
</code></pre>
<p>...this seems a bit like overkill to me.</p>
<p>The whitespace-delimited left-justified fields have no leading zeroes, report only seconds up to 2m as <code>[0-9]+s</code>, then report as <code>[0-9]+m[0-9]+s</code> up to 5m, after which it seems to report only <code>[0-9]+m</code>.</p>
<p>Anyone have a short, maybe even simple-ish, <em>easier to read</em> solution that works?<br />
No preference of tool (<code>sed</code>, <code>awk</code>, <code>perl</code>, native <code>bash</code>, etc), as long as it works and is likely to be already installed anywhere I need to work.</p>
<p>It's not a high priority, but seemed like a fun little challenge I thought I'd share.</p>
<p>My test data:</p>
<pre><code>$: cat sample
LAST ...
28s ...
2m22s ...
46m ...
7s ...
75s ...
119s ...
</code></pre>
<p>Result with desired output -</p>
<pre><code>$: sed -E '/^[6789][0-9]s/{h; s/^(.).*/\1/; y/6789/0123/; s/^(.)/01m\1/;
x; s/^.(.*)/\1/; H;
x; s/\n//; };
s/^10([0-9]s)/01m4\1/; s/^11([0-9]s)/01m5\1/; s/^([0-9]s)/00m0\1/; s/^([0-9]+s)/00m\1/;
s/^([0-9]m)/0\1/; s/^([0-9]+m)([0-9]s)/\10\2/;
s/^L/_L/;' sample | sort -r
_LAST ...
46m ...
02m22s ...
01m59s ...
01m15s ...
00m28s ...
00m07s ...
</code></pre>
<p>I've arbitrarily converted to a standardized version of the existing general output format just to keep it easily transferable to other members of the team. Either way, it's only being used for "eyeballing" the data, so other formats are not a problem as long as it's easy to read.</p>
<p>While there could theoretically include hours and days, such old events are usually not reported by this tool and are out of scope for this problem, and if needed I can likely extrapolate whatever solutions are presented. since I <em>can</em> get the order from <a href="https://stackoverflow.com/a/55858337/8656552">this approach</a> I'm really only looking for elegant formatting options.</p>
<p>A clumsy adaptation of Daweo's <code>awk</code> solution with formatting -</p>
<pre><code>$: awk '/^[0-9]/{ if($1!~/m/){$1="0m" $1}; split($1,arr,/m/);
t=arr[1]*60+arr[2]; m=(t-(t%60))/60; s=t-(m*60);
m=sprintf("%02dm",m); if(s){ s=sprintf("%02ds",s) } else s="";
$1=sprintf("%s%s",m,s); print; } /^L/{print "_"$0}' sample |
sort -r
_LAST ...
46m ...
02m22s ...
01m59s ...
01m15s ...
00m28s ...
00m07s ...
</code></pre>
<p>Others still appreciated.</p>
| <p>I'd convert everything to seconds first and then print it as <code>HH:MM:SS</code>, e.g.:</p>
<pre><code>$ cat tst.awk
BEGIN {
split("h m s",denoms)
fmts["s"] = fmts["m"] = fmts["h"] = "%02d"
mults["s"] = 1
mults["m"] = 60
mults["h"] = 60 * 60
}
sub(/^L/,"_L") {
print
next
}
{
time = $1
secs = 0
while ( match(time,/[0-9]+./) ) {
value = substr(time,1,RLENGTH-1)
denom = substr(time,RLENGTH)
time = substr(time,RLENGTH+1)
secs += value * mults[denom]
}
for ( i=1; i in denoms; i++ ) {
denom = denoms[i]
out = (i>1 ? out ":" : "") sprintf(fmts[denom],int(secs/mults[denom]))
secs %= mults[denom]
}
$1 = out
print | "sort -r"
}
</code></pre>
<p>Running it against the sample gives:</p>
<pre><code>$ awk -f tst.awk sample
_LAST ...
00:46:00 ...
00:02:22 ...
00:01:59 ...
00:01:15 ...
00:00:28 ...
00:00:07 ...
</code></pre>
<p>Obviously add the definitions for <code>"d"</code> in the <code>BEGIN</code> section if you want to include days and similarly for other longer durations.</p>
|
<p>I am using an <code>Ubuntu 22.04</code> machine to run and test Kubernetes locally. I need some functionality like <code>Docker-Desktop</code>. I mean it seems both <code>master</code> and <code>worker</code> nodes/machines will be installed by <code>Docker-Desktop</code> on the same machine. But when I try to install Kubernetes and following the instructions like <a href="https://www.cloudsigma.com/how-to-install-and-use-kubernetes-on-ubuntu-20-04/" rel="nofollow noreferrer">this</a>, at some points it says run the following codes on <code>master</code> node:</p>
<pre><code>sudo hostnamectl set-hostname kubernetes-master
</code></pre>
<p>Or run the following comands on the <code>worker</code> node machine:</p>
<pre><code>sudo hostnamectl set-hostname kubernetes-worker
</code></pre>
<p>I don't know how to specify <code>master</code>/<code>worker</code> nodes if I have only my local Ubuntu machine?</p>
<p>Or should I run <code>join</code> command after <code>kubeadm init</code> command? Because I can't understand the commands I run in my terminal will be considered as a command for which <code>master</code> or <code>worker</code> machine?</p>
<p>I am a little bit confused about this <code>master</code>/<code>worker</code> nodes or <code>client</code>/<code>server</code> machine stuff while I am just using one machine for both client and server machines.</p>
| <p>The hostname has nothing to do with node roles.</p>
<p>If you do <code>kubeadm init</code>, the node will be a <code>master</code> node (currently called <code>control plane</code>).</p>
<p>This node can also be used as a <code>worker</code> node (currently called just a <code>node</code>), but by default, Pods cannot be scheduled on the control plane node.</p>
<p>You can turn off this restriction by removing its taints with the following command:</p>
<pre><code>kubectl taint nodes --all node-role.kubernetes.io/master-
kubectl taint nodes --all node-role.kubernetes.io/control-plane-
</code></pre>
<p>and then you can use this node as both <code>control-plane</code> and <code>node</code>.</p>
<p>But I guess some lightweight Kubernetes distributions like <code>k0s</code>, <code>k3s</code>, and <code>microk8s</code> are better options for your use case than kubeadm.</p>
|
<p>My team and I are new to Kubernetes and are experimenting with running a few applications on it.</p>
<p>For proof of concept, we have a running <a href="https://docs.k3s.io/" rel="nofollow noreferrer">Lightweight Kubernetes (K3s)</a> install, which presumably does not have the full range of CRDs available on a standard Kubernetes. While trying to install Envoy proxy via <a href="https://artifacthub.io/packages/helm/cloudnativeapp/envoy" rel="nofollow noreferrer">Helm Chart</a>, we ran into the below error:</p>
<pre><code># helm install my-envoy cloudnativeapp/envoy --version 1.5.0
Error: INSTALLATION FAILED: unable to build kubernetes objects from release manifest: resource mapping not found for name: "my-envoy" namespace: "" from "": no matches for kind "PodDisruptionBudget" in version "policy/v1beta1"
ensure CRDs are installed first
</code></pre>
<p>Presumably the message <code>ensure CRDs are installed first</code> refers to components that are missing in K3s. Is there a way to get these components installed (via Helm or some other methods)?</p>
| <p>Most likely the problem is not related to missing CRDs but to the kubernetes version. I assume you are using the latest K3S version, which is <a href="https://github.com/k3s-io/k3s/releases/tag/v1.25.4%2Bk3s1" rel="nofollow noreferrer">v1.25.4</a>. <code>PodDisruptionBudget</code> was moved from <code>policy/v1beta1</code> to <code>policy/v1</code> in version <a href="https://kubernetes.io/docs/reference/using-api/deprecation-guide/#poddisruptionbudget-v125" rel="nofollow noreferrer">v1.25</a>. As the Envoy helm chart that you are using does not seem to be actively maintained, probably you will have to downgrade K3S or find a different chart.</p>
|
<p>I deployed the Consul client on my k8s cluster using Helm.</p>
<pre><code>sudo helm install hi-consul hashicorp/consul -n consul-client -f config.yaml
</code></pre>
<p><a href="https://i.stack.imgur.com/43o2k.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/43o2k.png" alt="enter image description here" /></a></p>
<p>One of the pods is not working now. Is there a way to restart that pod with Helm?</p>
<p>Thanks</p>
| <p>You can delete the pod with <code>kubectl delete pod <POD name> -n <Namespace name></code></p>
<p>If you want all pods to be restarted you can use the <code>kubectl rollout restart deployment <deployment name> -n <Namespace name></code></p>
|
<p>I'm using a GKE cluster on the Regular release channel. I noticed that the pods are restarting due to GKE's auto-upgrade feature. In my cluster, I have a deployment of NiFi that can't tolerate continuous restarts because of some internal functionality. I know that's not the expected behavior of containers, but we are working on that. Until that is done, I would like my cluster not to reboot nodes or restart pods. Is there a way to do that? I found that using the 'Stable version' instead of the 'Regular' release channel would work, but in the documentation I noticed that:</p>
<blockquote>
<p>Even if the cluster is not enrolled in a release channel, GKE will
still regularly upgrade the cluster's control plane to newer versions.
GKE will upgrade control planes to the next minor version on its
scheduled upgrades start date.</p>
</blockquote>
<p>from this documentation <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/release-channels#no_channel" rel="nofollow noreferrer">link</a></p>
<p>Is there a way, if I want to stop/pause the auto-upgrade (security/patch update) in the GKE cluster?</p>
<p>I also noticed 'Maintenance exclusions'. If I exclude upgrades for a couple of weeks, does that works?</p>
| <p>Yes. Maintenance exclusions will stop upgrades to your cluster, depending on what you specify. If you specify NO upgrades allowed during the interval, then you are only allowed to set a 30-day window max (the window can be longer if you specify certain parts can be upgraded -- e.g. allow patch upgrades, allow minor upgrades, etc.)</p>
<p>The window can be moved but you will need to make sure you're still covering the current day or you may get upgraded.</p>
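<p>As a sketch of how this could be set up from the CLI (flag names are from recent gcloud versions and should be verified with <code>gcloud container clusters update --help</code>; the cluster name, region, and dates are placeholders):</p>
<pre class="lang-bash prettyprint-override"><code># block all upgrades for (up to) 30 days on the cluster
gcloud container clusters update CLUSTER_NAME \
  --region REGION \
  --add-maintenance-exclusion-name block-upgrades \
  --add-maintenance-exclusion-start 2023-01-01T00:00:00Z \
  --add-maintenance-exclusion-end   2023-01-30T00:00:00Z \
  --add-maintenance-exclusion-scope no_upgrades
</code></pre>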
|
<p>I am trying to enable kubernetes for Docker Desktop. Kubernetes is however failing to start.</p>
<p>My log file shows:</p>
<pre><code>cannot get lease for master node: Get "https://kubernetes.docker.internal:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/docker-desktop": x509: certificate signed by unknown authority: Get "https://kubernetes.docker.internal:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/docker-desktop": x509: certificate signed by unknown authority
</code></pre>
<p>I have NO_PROXY env var set already, and my hosts file has
<code>127.0.0.1 kubernetes.docker.internal</code> at the end, as was suggested <a href="https://stackoverflow.com/questions/66364587/docker-for-windows-stuck-at-kubernetes-is-starting">here</a></p>
<p>I appreciate any help</p>
| <p>The workarounds below may help you resolve your issue.</p>
<p>You can solve this by:</p>
<ul>
<li>Open <code>%USERPROFILE%\.kube\config</code> in a text editor</li>
<li>Replace <a href="https://kubernetes.docker.internal:6443" rel="nofollow noreferrer">https://kubernetes.docker.internal:6443</a> with https://localhost:6443</li>
<li>Try connecting again</li>
</ul>
<p>From this <a href="https://forums.docker.com/t/waiting-for-kubernetes-to-be-up-and-running/47009" rel="nofollow noreferrer">issue</a>:</p>
<ul>
<li>Reset Docker to factory settings</li>
<li>Quit Docker</li>
<li>Set the KUBECONFIG environment variable to <code>%USERPROFILE%\.kube\config</code></li>
<li>Restart Docker and enable Kubernetes (it may still take a few minutes to start)</li>
</ul>
<p>Attaching troubleshooting <a href="https://bobcares.com/blog/docker-x509-certificate-signed-by-unknown-authority/" rel="nofollow noreferrer">blog1</a>, <a href="https://velaninfo.com/rs/techtips/docker-certificate-authority/" rel="nofollow noreferrer">blog2</a> for your reference.</p>
|
<p>We want to get all the pods in the cluster so we are using something like following:</p>
<pre><code>pods, err := client.CoreV1().Pods("").List(context.Background(), metav1.ListOptions{})
</code></pre>
<p>This code will receive all the pods in the cluster.</p>
<p>My question is: is there code or a lib which will bring all the pods with their <code>owner reference</code>?
i.e. if pods are owned by a <code>deployment</code> or <code>statefulset</code> etc. you get the whole hierarchy. The <code>trick</code> here is that I may need to go additional levels up, like some recursion: for example a <code>statefulset</code> which is owned by a controller which has a custom kind</p>
| <p>As @CoolNetworking suggested, there is no single lib or call that will get you all the pods with their full owner hierarchy, but you can use the k8s API to retrieve the owner references for each pod. You can then use the k8s API to retrieve the owner object for each owner reference. This will allow you to create a hierarchical structure of all the pods in the cluster.</p>
<p>The Kubernetes API is a resource-based (RESTful) programmatic interface provided via HTTP. It supports retrieving, creating, updating, and deleting primary resources via the standard HTTP verbs (POST, PUT, PATCH, DELETE, GET).</p>
<p>Most Kubernetes API resource types are objects: they represent a concrete instance of a concept on the cluster, like a pod or namespace.</p>
<p>Refer the document on <a href="https://kubernetes.io/docs/reference/using-api/api-concepts/#retrieving-large-results-sets-in-chunks" rel="nofollow noreferrer">kubernetes API</a> for more information</p>
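<p>A minimal sketch with client-go, building on the snippet in the question: it prints each pod's direct owner and, for the common ReplicaSet case, the owner one level further up. For arbitrary or custom owner kinds you would resolve each ownerReference's <code>apiVersion</code>/<code>kind</code> with the dynamic client plus a RESTMapper instead of the typed clientset used here.</p>
<pre><code>package main

import (
    "context"
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
)

// printOwners lists all pods and walks one level of owner references,
// plus a second hop for ReplicaSets (ReplicaSet -> Deployment).
func printOwners(client *kubernetes.Clientset) error {
    pods, err := client.CoreV1().Pods("").List(context.Background(), metav1.ListOptions{})
    if err != nil {
        return err
    }
    for _, pod := range pods.Items {
        for _, owner := range pod.OwnerReferences {
            fmt.Printf("%s/%s owned by %s %s\n", pod.Namespace, pod.Name, owner.Kind, owner.Name)
            if owner.Kind != "ReplicaSet" {
                continue
            }
            rs, err := client.AppsV1().ReplicaSets(pod.Namespace).Get(context.Background(), owner.Name, metav1.GetOptions{})
            if err != nil {
                continue
            }
            for _, rsOwner := range rs.OwnerReferences {
                fmt.Printf("  %s owned by %s %s\n", rs.Name, rsOwner.Kind, rsOwner.Name)
            }
        }
    }
    return nil
}
</code></pre>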
|
<p>I have a kubernetes cluster in Amazon EKS, Autoscaling is set. So when there is load increase a new node spin-up in the cluster and spin-down with respect to load-running. We are monitoring it with Prometheus and send desired alerts with Alertmanager.</p>
<p>So help me with a query that will send alerts whenever Autoscaling is performed in my Cluster.</p>
| <p>The logic is not so great, but this works for me in a non-EKS Self Hosted Kubernetes Cluster on AWS EC2s.</p>
<pre><code>(group by (kubernetes_io_hostname, kubernetes_io_role) (container_memory_working_set_bytes ) * 0
</code></pre>
<p>The above query fetches the currently up nodes and multiplies them by 0,</p>
<pre><code>or group by (kubernetes_io_hostname, kubernetes_io_role) (delta ( container_memory_working_set_bytes[1m]))) == 1
</code></pre>
<p>Here, it adds all nodes that existed in the last 1 minute through the <code>delta()</code> function. The default value of the nodes in the <code>delta()</code> function output will be 1, but the existing nodes will be overridden by the value 0, because of the <code>OR</code> precedence. So finally, only the newly provisioned node(s) will have the value 1, and they will get filtered by the equality condition. You can also extract whether the new node is master/worker by the <code>kubernetes_io_role</code> label</p>
<p>Full Query:</p>
<pre><code>(group by (kubernetes_io_hostname, kubernetes_io_role) (container_memory_working_set_bytes ) * 0 or group by (kubernetes_io_hostname, kubernetes_io_role) (delta ( container_memory_working_set_bytes[1m]))) == 1
</code></pre>
<p>You can reverse this query for downscaling of nodes, although that will collide with the cases in which your Kubernetes node Shuts Down Abruptly due to reasons other than AutoScaling</p>
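<p>If you run the Prometheus Operator / kube-prometheus-stack, a hedged sketch of wrapping the full query into an alert rule would look like this (the rule name, labels and annotations are placeholders):</p>
<pre><code>apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: node-autoscaling-alerts
spec:
  groups:
    - name: autoscaling
      rules:
        - alert: NodeScaledUp
          expr: (group by (kubernetes_io_hostname, kubernetes_io_role) (container_memory_working_set_bytes) * 0 or group by (kubernetes_io_hostname, kubernetes_io_role) (delta(container_memory_working_set_bytes[1m]))) == 1
          labels:
            severity: info
          annotations:
            summary: "New node {{ $labels.kubernetes_io_hostname }} joined the cluster"
</code></pre>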
|
<p>For 3 days, I have had a problem with a deployment in kubernetes. I would like to deploy two react-js applications: the first at <a href="http://app.my-domain.com" rel="nofollow noreferrer">http://app.my-domain.com</a> and the other at <a href="http://app.my-domain.com/shop" rel="nofollow noreferrer">http://app.my-domain.com/shop</a></p>
<p>NB: the two url is fake, it's just for example.</p>
<p>The first react application was very well deployed in <a href="http://app.my-domain.com" rel="nofollow noreferrer">http://app.my-domain.com</a> . Now when i want to put the second react application in <a href="http://app.my-domain.com/shop" rel="nofollow noreferrer">http://app.my-domain.com/shop</a>, it's not working.</p>
<p>NB: The two react applications are in different namespaces and exposed with two ingress configurations.</p>
<p>Here are my two ingress configuration which expose my clusterIP services relative to the two react-js applications
:</p>
<pre><code>spec:
rules:
- host: app.my-domain.com
http:
paths:
- path: /
backend:
serviceName: first-service
servicePort: 80
spec:
rules:
- host: app.my-domain.com
http:
paths:
- path: /shop
backend:
serviceName: second-service
servicePort: 80
</code></pre>
<p>So I have a blank page with the second application.</p>
<p>Please I would like your help for deploying this two react-js applications in this same domain.</p>
| <p>I think the simple solution for this is to use one ingress resource with this domain (app.my-domain.com) and reference the two services by their FQDNs: first-service.my-namespace-1.svc.cluster.local and second-service.my-namespace-2.svc.cluster.local.</p>
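<p>Since an Ingress backend can only reference a Service in its own namespace, one way to make the FQDN reference work is an ExternalName Service in the Ingress's namespace. A rough sketch, using the newer networking.k8s.io/v1 syntax (service names are taken from the question, the namespaces are assumptions, and exact behaviour with ExternalName backends depends on your ingress controller):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: second-service-proxy
  namespace: my-namespace-1
spec:
  type: ExternalName
  externalName: second-service.my-namespace-2.svc.cluster.local
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app
  namespace: my-namespace-1
spec:
  rules:
    - host: app.my-domain.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: first-service
                port:
                  number: 80
          - path: /shop
            pathType: Prefix
            backend:
              service:
                name: second-service-proxy
                port:
                  number: 80
</code></pre>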
<p>I am very interested in another solution.</p>
|
<p>I have <code>yaml</code> with lines:</p>
<pre><code>accounts:
service:
- id: account1
username: foo
password: bar
- id: account2
username: foo1
password: bar1
</code></pre>
<p>It's default values for my app. But I want to change it in <code>ConfigMap</code>. I do something like that:</p>
<pre><code>accounts.service[0].id: id0
accounts.service[0].username: username0
accounts.service[0].password: password0
accounts.service[1].id: id1
accounts.service[1].username: username1
accounts.service[1].password: password1
accounts.service[2].id: id2
accounts.service[2].username: username2
accounts.service[2].password: password2
</code></pre>
<p>It doesn't work. How can I do it correctly? The list of objects can be with random size.</p>
<p>And, of course, password I'd like to storage in secrets.</p>
| <p>If your app works and can read the yaml as in your example, you can create a configMap from your yaml file like so:</p>
<p><code>kubectl create configmap myconfigmap --from-file=myfile.yaml --dry-run=client -o yaml > output-configmap-file.yaml</code></p>
<p>where myfile.yaml is your yaml file.</p>
<p>This will create output-configmap-file.yaml locally, in the right format for Kubernetes.
You can edit this, add namespaces, and whatever else you need;
<code>kubectl apply -f output-configmap-file.yaml</code> will then create your configmap in the cluster.</p>
<p>Inside your deployment of the app, create a volume with that configMap and mount it inside your container specs.
Like so:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: mydeployment
namespace: mynamespace
labels:
app: api
spec:
selector:
matchLabels:
app: api
template:
metadata:
labels:
app: api
spec:
containers:
- name: api
volumeMounts:
- name: myyaml
mountPath: /path/to-your-yaml/myyaml.yaml
subPath: myyaml.yaml
image: myimage
.........
........
volumes:
- name: myyaml
configMap:
name: myconfigMap
</code></pre>
<p>This will take the configMap and mount it as your yaml file inside the container.
So you don't even need to put the file in your image.</p>
<p>Hint: Passwords don't belong inside a configMap. A Secret is the better place for them.</p>
|
<p>We are using the following to authenticate:</p>
<pre><code>import base64
import boto3
import string
import random
from botocore.signers import RequestSigner


class EKSAuth(object):

    METHOD = 'GET'
    EXPIRES = 60
    EKS_HEADER = 'x-k8s-aws-id'
    EKS_PREFIX = 'k8s-aws-v1.'
    STS_URL = 'sts.amazonaws.com'
    STS_ACTION = 'Action=GetCallerIdentity&Version=2011-06-15'

    def __init__(self, cluster_id, region='us-east-1'):
        self.cluster_id = cluster_id
        self.region = region

    def get_token(self):
        """
        Return bearer token
        """
        session = boto3.session.Session()
        # Get ServiceID required by class RequestSigner
        client = session.client("sts", region_name=self.region)
        service_id = client.meta.service_model.service_id
        signer = RequestSigner(
            service_id,
            session.region_name,
            'sts',
            'v4',
            session.get_credentials(),
            session.events
        )
        params = {
            'method': self.METHOD,
            'url': 'https://' + self.STS_URL + '/?' + self.STS_ACTION,
            'body': {},
            'headers': {
                self.EKS_HEADER: self.cluster_id
            },
            'context': {}
        }
        signed_url = signer.generate_presigned_url(
            params,
            region_name=session.region_name,
            expires_in=self.EXPIRES,
            operation_name=''
        )
        return (
            self.EKS_PREFIX +
            base64.urlsafe_b64encode(
                signed_url.encode('utf-8')
            ).decode('utf-8')
        )
</code></pre>
<p>And then we call this by</p>
<pre><code>import os

import boto3
import yaml
from kubernetes import client, config

import auth  # module containing the EKSAuth class above

KUBE_FILEPATH = '/tmp/kubeconfig'
CLUSTER_NAME = 'cluster'
REGION = 'us-east-2'

if not os.path.exists(KUBE_FILEPATH):
    kube_content = dict()
    # Get data from EKS API
    eks_api = boto3.client('eks', region_name=REGION)
    cluster_info = eks_api.describe_cluster(name=CLUSTER_NAME)
    certificate = cluster_info['cluster']['certificateAuthority']['data']
    endpoint = cluster_info['cluster']['endpoint']

    kube_content = dict()
    kube_content['apiVersion'] = 'v1'
    kube_content['clusters'] = [
        {
            'cluster':
                {
                    'server': endpoint,
                    'certificate-authority-data': certificate
                },
            'name': 'kubernetes'
        }]
    kube_content['contexts'] = [
        {
            'context':
                {
                    'cluster': 'kubernetes',
                    'user': 'aws'
                },
            'name': 'aws'
        }]
    kube_content['current-context'] = 'aws'
    kube_content['Kind'] = 'config'
    kube_content['users'] = [
        {
            'name': 'aws',
            'user': 'lambda'
        }]

    # Write kubeconfig
    with open(KUBE_FILEPATH, 'w') as outfile:
        yaml.dump(kube_content, outfile, default_flow_style=False)

# Get Token
eks = auth.EKSAuth(CLUSTER_NAME)
token = eks.get_token()
print("Token here:")
print(token)

# Configure
config.load_kube_config(KUBE_FILEPATH)
configuration = client.Configuration()
configuration.api_key['authorization'] = token
configuration.api_key_prefix['authorization'] = 'Bearer'

# API
api = client.ApiClient(configuration)
v1 = client.CoreV1Api(api)

print("THIS IS GETTING 401!!")
ret = v1.list_namespaced_pod(namespace='default')
</code></pre>
<p>However, this is getting the error in the Lambda:</p>
<blockquote>
<p>[ERROR] ApiException: (401) Reason: Unauthorized</p>
</blockquote>
<p>Is there some type of way I have to generate the ~/.aws/credentials or config? I believe this might be why it is not able to authenticate?</p>
| <p>Your <code>EKSAuth</code> class works. Just checked it with my cluster.</p>
<p>Here is a working (simpler) snippet instead of the second one.</p>
<pre><code>import base64
import tempfile
import kubernetes
import boto3
from auth import EKSAuth
cluster_name = "my-cluster"
# Details from EKS
eks_client = boto3.client('eks')
eks_details = eks_client.describe_cluster(name=cluster_name)['cluster']
# Saving the CA cert to a temp file (working around the Kubernetes client limitations)
fp = tempfile.NamedTemporaryFile(delete=False)
ca_filename = fp.name
cert_bs = base64.urlsafe_b64decode(eks_details['certificateAuthority']['data'].encode('utf-8'))
fp.write(cert_bs)
fp.close()
# Token for the EKS cluster
eks_auth = EKSAuth(cluster_name)
token = eks_auth.get_token()
# Kubernetes client config
conf = kubernetes.client.Configuration()
conf.host = eks_details['endpoint']
conf.api_key['authorization'] = token
conf.api_key_prefix['authorization'] = 'Bearer'
conf.ssl_ca_cert = ca_filename
k8s_client = kubernetes.client.ApiClient(conf)
# Doing something with the client
v1 = kubernetes.client.CoreV1Api(k8s_client)
print(v1.list_pod_for_all_namespaces())
</code></pre>
<p>* Most of the code is taken from <a href="https://github.com/boto/boto3/issues/2309#issue-571897943" rel="nofollow noreferrer">here</a></p>
<p>You also have to make sure you've granted permissions inside the EKS cluster to the IAM role your lambda runs with.
For that, run:</p>
<pre><code>kubectl edit -n kube-system configmap/aws-auth
</code></pre>
<p>Add these lines under <code>mapRoles</code>. <code>rolearn</code> is the arn of your role. <code>username</code> is the name you want to give to that role inside the k8s cluster.</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: aws-auth
namespace: kube-system
data:
mapRoles: |
# Add this #######################################
- rolearn: arn:aws:iam::111122223333:role/myLambda-role-z71amo5y
username: my-lambda-mapped-user
####################################################
</code></pre>
<p>And create a <code>clusterrolebinding</code> or <code>rolebinding</code> to grant this user permissions inside the cluster.</p>
<pre><code>kubectl create clusterrolebinding --clusterrole cluster-admin --user my-lambda-mapped-user my-clusterrolebinding
</code></pre>
|
<p>Can somebody explain the difference in tooling between Crossplane and Cluster API from the perspective of a managed kubernetes platform provision ?</p>
<p><a href="https://github.com/kubernetes-sigs/cluster-api" rel="nofollow noreferrer">https://github.com/kubernetes-sigs/cluster-api</a></p>
<p>I was using crossplane for sometime to create k8s clusters and recently got to know that Cluster API is also having the same capability to provision k8s clusters.</p>
| <p><strong>Cluster API</strong></p>
<p>Cluster API is one of the CNCF projects that you can use to create and manage <strong>Kubernetes</strong> clusters. It's more in the family of kubeadm, kind, k3s, minikube (the latter not for prod use cases), driven by <strong>YAML</strong> config & a CLI.</p>
<p>You create a YAML file, provision the Kubernetes cluster, and manage it with that. There are different providers (AWS, AKS, GCP) available, so you supply a provider service account or access key secret, and once you apply the YAML to Cluster API it creates a K8s cluster based on the config and provider.</p>
<p>So with this, you can manage multiple Kubernetes clusters.</p>
<p>With Cluster API YAML config you can create/manage K8s clusters on AWS, GCP, on-prem, etc.</p>
<p><strong>Crossplane</strong></p>
<p>Consider <strong>Crossplane</strong> as a deployment or service you are running on Kubernetes first.</p>
<p>You pass YAML config to the Crossplane service and, based on rules, it creates/manages resources outside the cluster on cloud providers. It can create/manage RDS and CloudSQL instances, Kubernetes clusters, and other resources that the cloud provider supports.</p>
<p>It also has the concept of providers (AWS, GCP, AKS).</p>
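<p>For a feel of the Cluster API side, a trimmed-down <code>Cluster</code> object looks roughly like this (the <code>controlPlaneRef</code>/<code>infrastructureRef</code> kinds depend on the provider you install; names here are placeholders):</p>
<pre><code>apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: my-cluster
spec:
  clusterNetwork:
    pods:
      cidrBlocks: ["192.168.0.0/16"]
  controlPlaneRef:
    apiVersion: controlplane.cluster.x-k8s.io/v1beta1
    kind: KubeadmControlPlane
    name: my-cluster-control-plane
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
    kind: AWSCluster
    name: my-cluster
</code></pre>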
|
<p>I need help with an EKS managed node group.
I've created a cluster with one additional SG. Inside this cluster I've created a managed node group. All code is written in Terraform. Once the managed node group creates a new instance, only one security group is attached (the SG created by AWS). Is there some way to attach the additional security group to the instances as well?</p>
<p>Thanks in advance for help!</p>
| <p>By default, an EKS managed node group will have the default cluster security group attached, which is created by AWS. Even if you provide an additional security group to the EKS cluster during creation, that additional security group will not be attached to the node instances. So, to get this working, you have to use a launch template.</p>
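<p>A rough Terraform sketch of that (resource names and the extra security group are assumptions; note that when you set security groups on the launch template yourself, you must include the cluster security group, or the nodes won't be able to reach the control plane and join):</p>
<pre><code>resource "aws_launch_template" "nodes" {
  name_prefix = "eks-nodes-"

  # cluster SG (node <-> control plane traffic) + your additional SG
  vpc_security_group_ids = [
    aws_eks_cluster.this.vpc_config[0].cluster_security_group_id,
    aws_security_group.additional.id,
  ]
}

resource "aws_eks_node_group" "nodes" {
  # ... cluster_name, node_role_arn, subnet_ids, scaling_config ...

  launch_template {
    id      = aws_launch_template.nodes.id
    version = aws_launch_template.nodes.latest_version
  }
}
</code></pre>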
|
<p>In CI, with the gcp auth plugin I was using gcloud auth activate-service-account ***@developer.gserviceaccount.com --key-file ***.json prior to executing kubectl commands.
Now with gke-gcloud-auth-plugin I can’t find any equivalent to use a gcp service account key file.
I've installed <code>gke-gcloud-auth-plugin</code> and <code>gke-gcloud-auth-plugin --version</code> is giving me <code>Kubernetes v1.25.2-alpha+ae91c1fc0c443c464a4c878ffa2a4544483c6d1f</code>
Would you know if there’s a way?</p>
<p>I tried to add this command:
<code>kubectl config set-credentials my-user --auth-provider=gcp</code>
But I still get:</p>
<pre><code>error: The gcp auth plugin has been removed. Please use the "gke-gcloud-auth-plugin" kubectl/client-go credential plugin instead.
</code></pre>
| <p>You will need to set the env variable to use the new plugin before doing the <code>get-credentials</code>:</p>
<pre class="lang-bash prettyprint-override"><code>export USE_GKE_GCLOUD_AUTH_PLUGIN=True
gcloud container clusters get-credentials $CLUSTER \
--region $REGION \
--project $PROJECT \
--internal-ip
</code></pre>
<p>I would not have expected the env variable to still be required (now that the gcp auth plugin is completely deprecated) - but it seems it still is.</p>
<p>Your kubeconfig will end up looking like this if the new auth provider is in use.</p>
<pre class="lang-yaml prettyprint-override"><code>...
- name: $NAME
user:
exec:
apiVersion: client.authentication.k8s.io/v1beta1
command: gke-gcloud-auth-plugin
installHint: Install gke-gcloud-auth-plugin for use with kubectl by following
https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke
provideClusterInfo: true
</code></pre>
|
<p>I have a GCP GKE cluster with version '1.22.12-gke.300'. There are around 20 nodes distributed across 4 node pools. The cluster was created a month ago, and yesterday I noticed that all of my nodes had been rebooted/restarted. When I check the details of the nodes using <code>kubectl get nodes</code>, I see this result (below). The age of all the nodes is 16 or 17 hours.</p>
<pre><code>gke-company-name-gke-clust-company-name-default-n-97c8e50a-d63m Ready <none> 17h v1.22.12-gke.300
gke-company-name-gke-clust-company-name-default-n-97c8e50a-l8zw Ready <none> 17h v1.22.12-gke.300
gke-company-name-gke-clust-company-name-demo-app--d251216f-2uou Ready <none> 16h v1.22.12-gke.300
gke-company-name-gke-clust-company-name-demo-app--d251216f-3mj1 Ready <none> 16h v1.22.12-gke.300
gke-company-name-gke-clust-company-name-demo-app--d251216f-doml Ready <none> 16h v1.22.12-gke.300
gke-company-name-gke-clust-company-name-prod-app--5ae07853-7mwd Ready <none> 17h v1.22.12-gke.300
gke-company-name-gke-clust-company-name-prod-app--5ae07853-gzxy Ready <none> 17h v1.22.12-gke.300
gke-company-name-gke-clust-company-name-prod-app--5ae07853-lgvo Ready <none> 17h v1.22.12-gke.300
gke-company-name-gke-clust-company-name-prod-nifi-fd3e5533-27gf Ready <none> 16h v1.22.12-gke.300
gke-company-name-gke-clust-company-name-prod-nifi-fd3e5533-7r4q Ready <none> 16h v1.22.12-gke.300
gke-company-name-gke-clust-company-name-prod-nifi-fd3e5533-e680 Ready <none> 16h v1.22.12-gke.300
gke-company-name-gke-clust-company-name-prod-nifi-fd3e5533-m2vf Ready <none> 17h v1.22.12-gke.300
gke-company-name-gke-clust-company-name-prod-nifi-fd3e5533-mtvg Ready <none> 17h v1.22.12-gke.300
gke-company-name-gke-clust-company-name-prod-nifi-fd3e5533-mwiy Ready <none> 16h v1.22.12-gke.300
gke-company-name-gke-clust-company-name-prod-nifi-fd3e5533-rwf9 Ready <none> 16h v1.22.12-gke.300
gke-company-name-gke-clust-company-name-prod-nifi-fd3e5533-tqe0 Ready <none> 17h v1.22.12-gke.300
gke-company-name-gke-clust-company-name-prod-nifi-fd3e5533-weai Ready <none> 17h v1.22.12-gke.300
gke-company-name-gke-clust-company-name-prod-nifi-fd3e5533-xmss Ready <none> 16h v1.22.12-gke.300
</code></pre>
<p>Is there any way to identify the reason behind this reboot/restart? This unexpected reboot caused some issues in my system. I just want to identify the reason for these reboots so that I know whether these types of reboots are to be expected in future.</p>
<p>Any help is appreciated.</p>
| <p>I received the answer to my question on this <a href="https://serverfault.com/questions/1078073/find-out-why-a-kubernetes-cluster-was-restarted-on-google-cloud">link</a> from the above comment. I'll just explain the scenario for others to benefit.</p>
<p>I created a cluster on GKE with the release channel set to "Regular". Because of that choice, GKE automatically upgrades the cluster and nodes whenever a new security or patch update is released.</p>
<p>You can find the upgrade status by using this command</p>
<pre><code>gcloud container operations list --filter="TYPE:UPGRADE_MASTER"
</code></pre>
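<p>Node pool upgrades show up as a separate operation type, so (assuming the same filter syntax) you can also check those explicitly:</p>
<pre><code>gcloud container operations list --filter="TYPE:UPGRADE_NODES"
</code></pre>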
|
<p>I am not able to install Kibana with simple helm command which used to work earlier.</p>
<p>"helm install kibana elastic/kibana -n kibana"</p>
<p>Are there any recent changes in kibana helm? Do we need to create elasticseach-master-certs and elasticsearch-credentials secrets prior to kibana install now.</p>
<p><a href="https://artifacthub.io/packages/helm/elastic/kibana" rel="nofollow noreferrer">https://artifacthub.io/packages/helm/elastic/kibana</a></p>
| <p>As per the latest release notes, for Kibana 8.x they have enabled authentication and TLS by default. In my case, Elasticsearch was already installed and serving over plain HTTP, so TLS would have to be enabled in Elasticsearch first, because Kibana-Elasticsearch communication must be TLS-enabled as per the latest Kibana release.</p>
<p>For testing purposes (we are not working on production anyway) we are okay with accessing Kibana using port-forward, so I installed the 7.x version of the helm chart and moved ahead.</p>
<p><a href="https://github.com/elastic/helm-charts/blob/7.17/kibana/README.md" rel="nofollow noreferrer">https://github.com/elastic/helm-charts/blob/7.17/kibana/README.md</a></p>
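<p>For reference, pinning the chart version looked roughly like this for me (the exact chart version and the Elasticsearch host are assumptions, adjust them to your setup):</p>
<pre><code>helm install kibana elastic/kibana -n kibana --version 7.17.3 \
  --set elasticsearchHosts=http://elasticsearch-master:9200
</code></pre>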
<p>If you want to enable tls for ES, kibana i found below link helpful.</p>
<p><a href="https://github.com/elastic/helm-charts/tree/main/kibana/examples/security" rel="nofollow noreferrer">https://github.com/elastic/helm-charts/tree/main/kibana/examples/security</a> (this example is from kibana GIT repo itself)</p>
<p><a href="https://www.lisenet.com/2022/deploy-elasticsearch-and-kibana-on-kubernetes-with-helm/" rel="nofollow noreferrer">https://www.lisenet.com/2022/deploy-elasticsearch-and-kibana-on-kubernetes-with-helm/</a></p>
<p><a href="https://thomasdecaux.medium.com/setup-https-tls-for-kibana-with-helm-cert-manager-io-fd1a326085fe" rel="nofollow noreferrer">https://thomasdecaux.medium.com/setup-https-tls-for-kibana-with-helm-cert-manager-io-fd1a326085fe</a></p>
|
<p>I'm trying to follow their docs and create this pod monitoring.
I apply it and I see nothing in metrics.</p>
<p>What am I doing wrong?</p>
<pre><code>apiVersion: monitoring.googleapis.com/v1
kind: ClusterPodMonitoring
metadata:
name: monitoring
spec:
selector:
matchLabels:
app: blah
namespaceSelector:
any: true
endpoints:
- port: metrics
interval: 30s
</code></pre>
| <p>As mentioned in the official <a href="https://cloud.google.com/stackdriver/docs/managed-prometheus/setup-managed" rel="nofollow noreferrer">documentation</a>:</p>
<p>The following manifest defines a PodMonitoring resource, prom-example, in the NAMESPACE_NAME namespace. The resource uses a Kubernetes label selector to find all pods in the namespace that have the label app with the value prom-example. The matching pods are scraped on a port named metrics, every 30 seconds, on the /metrics HTTP path.</p>
<pre><code>apiVersion: monitoring.googleapis.com/v1
kind: PodMonitoring
metadata:
name: prom-example
spec:
selector:
matchLabels:
app: prom-example
endpoints:
- port: metrics
interval: 30s
</code></pre>
<p>To apply this resource, run the following command:</p>
<pre><code>kubectl -n NAMESPACE_NAME apply -f https://raw.githubusercontent.com/GoogleCloudPlatform/prometheus-engine/v0.5.0/examples/pod-monitoring.yaml
</code></pre>
<p>Also check the document on <a href="https://cloud.google.com/stackdriver/docs/solutions/gke/observing" rel="nofollow noreferrer">Observing your GKE clusters</a>.</p>
<p><strong>UPDATE:</strong></p>
<p>After applying the manifests, the managed collection will be running but no metrics will be generated. You must deploy a PodMonitoring resource that scrapes a valid metrics endpoint to see any data in the Query UI.</p>
<p>Check the logs by running the below commands:</p>
<pre><code>kubectl logs -f -ngmp-system -lapp.kubernetes.io/part-of=gmp
kubectl logs -f -ngmp-system -lapp.kubernetes.io/name=collector -c prometheus
</code></pre>
<p>If you see any error follow this <a href="https://cloud.google.com/stackdriver/docs/managed-prometheus/troubleshooting#ingest-problems" rel="nofollow noreferrer">link</a> to troubleshoot.</p>
|
<p>I'm still a Kubernetes newbie but I am already confronted with what I think is a mammoth task. My company is hosting a Kubernetes cluster. Because of internal policies, we are obliged to have everything georedundant. Which means we are supposed to build a second cluster (identical to the already existing one) in a different data center. Only ever one of them is active and the remaining one acts as a backup.</p>
<p>My task is to come up with different approaches on how we can synchronize those two clusters in real time, so that in case of failover operations can continue without much interruption.</p>
<p>From what I've heard such a replication can happen on different levels (hypervisor, storage, application...). I want to find out more about these different approaches but I lack the proper lingo to find any good online sources for my problem.
Do you guys know any sources who cover my question? Is there any literature on this? Or any personal experience you might want to share? Thanks in advance!</p>
| <p>Kubernetes is already a distributed system, by design.</p>
<p>It is more common to run a Kubernetes cluster by using <strong>3</strong> data centers - since it is built upon consensus algorithms like <a href="https://raft.github.io/" rel="nofollow noreferrer">raft</a>. This replaces older ways to run systems by using <strong>2</strong> data centers in an <a href="https://www.jscape.com/blog/active-active-vs-active-passive-high-availability-cluster" rel="nofollow noreferrer">active-passive</a> fashion.</p>
|
<p>I'd like to import the output of:</p>
<pre><code>kubectl get pods -o json
</code></pre>
<p>into a python pandas dataframe. This should also contain all containers and their resource requests and limits.</p>
<p>My code starts as follows:</p>
<pre><code>import json
import numpy as np
import pandas as pd
import os
pods_raw = os.popen('kubectl get pods -o json').read()
pods_json = json.loads(pods_raw)['items']
</code></pre>
<p>from here on I struggle to get the data in a correct way in a dataframe, especially the 'spec.containers' should be split up when multiple containers exist.</p>
| <p>Here is an example how you can extract the data of interest to the dataframe. The output is only an example (as you didn't specify the required output in the question):</p>
<pre class="lang-py prettyprint-override"><code>import json
import pandas as pd
# open the Json data from file (or use os.popen):
with open("data.json", "r") as f_in:
data = json.load(f_in)
df = pd.DataFrame(data["items"])
# metadata:
df = pd.concat(
[df, df.pop("metadata").apply(pd.Series).add_prefix("meta_")], axis=1
)
# spec:
df = pd.concat(
[df, df.pop("spec").apply(pd.Series).add_prefix("spec_")], axis=1
)
# status:
df = pd.concat(
[df, df.pop("status").apply(pd.Series).add_prefix("status_")], axis=1
)
# keep only columns of interests:
df = df[["meta_name", "meta_namespace", "status_phase", "spec_containers"]]
# explode spec_containers column
df = df.explode("spec_containers")
df = pd.concat(
[
df,
df.pop("spec_containers")
.apply(pd.Series)
.add_prefix("spec_")[["spec_image", "spec_name"]],
],
axis=1,
)
print(df)
</code></pre>
<p>Prints:</p>
<pre class="lang-none prettyprint-override"><code> meta_name meta_namespace status_phase spec_image spec_name
0 apache-lb-648c5cb8cb-mw5zh default Running httpd apache
0 apache-lb-648c5cb8cb-mw5zh default Running index.docker.io/istio/proxyv2:1.13.4 istio-proxy
1 csi-cephfsplugin-fc79l default Running rocks.canonical.com:443/cdk/sig-storage/csi-node-driver-registrar:v2.0.1 driver-registrar
1 csi-cephfsplugin-fc79l default Running rocks.canonical.com:443/cdk/cephcsi/cephcsi:v3.3.1 csi-cephfsplugin
1 csi-cephfsplugin-fc79l default Running rocks.canonical.com:443/cdk/cephcsi/cephcsi:v3.3.1 liveness-prometheus
...and so on.
</code></pre>
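<p>Since you also wanted the resource requests and limits per container, here is a minimal sketch of extending the above. Run it in place of the final <code>pd.concat</code> block that keeps only image/name, and note it assumes the containers actually define <code>resources.requests</code>/<code>resources.limits</code> (containers without them will just produce NaN or extra columns):</p>
<pre class="lang-py prettyprint-override"><code># flatten each container dict, then flatten its "resources" field as well
containers = df.pop("spec_containers").apply(pd.Series)
resources = containers["resources"].apply(pd.Series)

df = pd.concat(
    [
        df,
        containers[["name", "image"]].add_prefix("spec_"),
        resources["requests"].apply(pd.Series).add_prefix("request_"),
        resources["limits"].apply(pd.Series).add_prefix("limit_"),
    ],
    axis=1,
)
print(df.head())
</code></pre>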
|
<p>I'm trying to connect to a kubernetes cluster running on my Windows PC from my Mac. This is so I can continue to develop from my Mac but run everything on a machine with more resources. I know that to do this I need to change the kubectl context on my Mac to point towards my Windows PC but don't know how to manually do this.</p>
<p>When I've connected to a cluster before on AKS, I would use <code>az aks get-credentials</code> and this would correctly an entry to .kube/config and change the context to it. I'm basically trying to do this but on a local network.</p>
<p>I've tried to add an entry into kubeconfig but get <code>The connection to the server 192.168.1.XXX:6443 was refused - did you specify the right host or port?</code>. I've also checked my antivirus on the Windows computer and no requests are getting blocked.</p>
<pre><code>apiVersion: v1
clusters:
- cluster:
certificate-authority-data: {CERT}
server: https://192.168.1.XXX:6443
name: windows-docker-desktop
current-context: windows-docker-desktop
kind: Config
preferences: {}
users:
- name: windows-docker-desktop
user:
client-certificate-data: {CERT}
client-key-data: {KEY}
</code></pre>
<p>I've also tried using <code>kubectl --insecure-skip-tls-verify --context=windows-docker-desktop get pods</code> which results in the same error: <code>The connection to the server 192.168.1.XXX:6443 was refused - did you specify the right host or port?</code>.</p>
<p>Many thanks.</p>
| <p>From your Mac, check if the port is open, e.g. with <code>nc -zv 192.168.yourwindowsIp 6443</code>. If it doesn't respond "open", you have a network problem.
Try this.</p>
<pre><code>clusters:
- cluster:
server: https://192.168.1.XXX:6443
name: windows-docker-desktop
insecure-skip-tls-verify: true
</code></pre>
<p>directly in the configfile</p>
<p>You don't need to specify the set-context, as you have only one.
To be sure it is not your firewall, disable it for a very short period, only to test the connection.</p>
<p>Last thing: it seems you are using Kubernetes in Docker Desktop. If not, and you have a local cluster with more than 1 node, you need to install a network fabric in your cluster like Flannel or Calico.
<a href="https://projectcalico.docs.tigera.io/about/about-calico" rel="nofollow noreferrer">https://projectcalico.docs.tigera.io/about/about-calico</a>
<a href="https://github.com/flannel-io/flannel" rel="nofollow noreferrer">https://github.com/flannel-io/flannel</a></p>
|
<p>I have a problem <strong>deploying with Terraform a node group in an EKS cluster</strong>. The error looks like one plugin is having problems but I do not know how to resolve it.</p>
<p>If I see the EC2 in the AWS console (web), I can see the instance of the cluster but I have this error in the cluster.</p>
<p>The error was shown in my <strong>pipeline</strong>:</p>
<blockquote>
<p>Error: waiting for EKS Node Group (UNIR-API-REST-CLUSTER-DEV:node_sping_boot) creation: NodeCreationFailure: Instances failed to join the kubernetes cluster. Resource IDs: [i-05ed58f8101240dc8]<br />
on EKS.tf line 17, in resource "aws_eks_node_group" "nodes":<br />
17: resource "aws_eks_node_group" "nodes"<br />
2020-06-01T00:03:50.576Z [DEBUG] plugin: plugin process exited: path=/home/ubuntu/.jenkins/workspace/shop_infraestucture_generator_pipline/shop-proyect-dev/.terraform/plugins/linux_amd64/terraform-provider-aws_v2.64.0_x4 pid=13475<br />
2020-06-01T00:03:50.576Z [DEBUG] plugin: plugin exited</p>
</blockquote>
<p>And the error is printed in <strong>AWS console</strong>:</p>
<p><a href="https://i.stack.imgur.com/Q7wUm.png" rel="nofollow noreferrer">Link</a></p>
<p>This is the code in Terraform I use to create my project:</p>
<p><strong>EKS.tf</strong> for creating the cluster and de nodes</p>
<pre><code>resource "aws_eks_cluster" "CLUSTER" {
name = "UNIR-API-REST-CLUSTER-${var.SUFFIX}"
role_arn = "${aws_iam_role.eks_cluster_role.arn}"
vpc_config {
subnet_ids = [
"${aws_subnet.unir_subnet_cluster_1.id}","${aws_subnet.unir_subnet_cluster_2.id}"
]
}
depends_on = [
"aws_iam_role_policy_attachment.AmazonEKSWorkerNodePolicy",
"aws_iam_role_policy_attachment.AmazonEKS_CNI_Policy",
"aws_iam_role_policy_attachment.AmazonEC2ContainerRegistryReadOnly",
]
}
resource "aws_eks_node_group" "nodes" {
cluster_name = "${aws_eks_cluster.CLUSTER.name}"
node_group_name = "node_sping_boot"
node_role_arn = "${aws_iam_role.eks_nodes_role.arn}"
subnet_ids = [
"${aws_subnet.unir_subnet_cluster_1.id}","${aws_subnet.unir_subnet_cluster_2.id}"
]
scaling_config {
desired_size = 1
max_size = 5
min_size = 1
}
# instance_types is mediumt3 by default
# Ensure that IAM Role permissions are created before and deleted after EKS Node Group handling.
# Otherwise, EKS will not be able to properly delete EC2 Instances and Elastic Network Interfaces.
depends_on = [
"aws_iam_role_policy_attachment.AmazonEKSWorkerNodePolicy",
"aws_iam_role_policy_attachment.AmazonEKS_CNI_Policy",
"aws_iam_role_policy_attachment.AmazonEC2ContainerRegistryReadOnly",
]
}
output "eks_cluster_endpoint" {
value = "${aws_eks_cluster.CLUSTER.endpoint}"
}
output "eks_cluster_certificat_authority" {
value = "${aws_eks_cluster.CLUSTER.certificate_authority}"
}
</code></pre>
<p><strong>securityAndGroups.tf</strong></p>
<pre><code>resource "aws_iam_role" "eks_cluster_role" {
name = "eks-cluster-${var.SUFFIX}"
assume_role_policy = <<POLICY
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "eks.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
POLICY
}
resource "aws_iam_role" "eks_nodes_role" {
name = "eks-node-${var.SUFFIX}"
assume_role_policy = <<POLICY
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "ec2.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
POLICY
}
resource "aws_iam_role_policy_attachment" "AmazonEKSClusterPolicy" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
role = "${aws_iam_role.eks_cluster_role.name}"
}
resource "aws_iam_role_policy_attachment" "AmazonEKSServicePolicy" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEKSServicePolicy"
role = "${aws_iam_role.eks_cluster_role.name}"
}
resource "aws_iam_role_policy_attachment" "AmazonEKSWorkerNodePolicy" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"
role = "${aws_iam_role.eks_nodes_role.name}"
}
resource "aws_iam_role_policy_attachment" "AmazonEKS_CNI_Policy" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"
role = "${aws_iam_role.eks_nodes_role.name}"
}
resource "aws_iam_role_policy_attachment" "AmazonEC2ContainerRegistryReadOnly" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
role = "${aws_iam_role.eks_nodes_role.name}"
}
</code></pre>
<p><strong>VPCAndRouting.tf</strong> to create my routing, VPC, and Subnets</p>
<pre><code>resource "aws_vpc" "unir_shop_vpc_dev" {
cidr_block = "${var.NET_CIDR_BLOCK}"
enable_dns_hostnames = true
enable_dns_support = true
tags = {
Name = "UNIR-VPC-SHOP-${var.SUFFIX}"
Environment = "${var.SUFFIX}"
}
}
resource "aws_route_table" "route" {
vpc_id = "${aws_vpc.unir_shop_vpc_dev.id}"
route {
cidr_block = "0.0.0.0/0"
gateway_id = "${aws_internet_gateway.unir_gat_shop_dev.id}"
}
tags = {
Name = "UNIR-RoutePublic-${var.SUFFIX}"
Environment = "${var.SUFFIX}"
}
}
data "aws_availability_zones" "available" {
state = "available"
}
resource "aws_subnet" "unir_subnet_aplications" {
vpc_id = "${aws_vpc.unir_shop_vpc_dev.id}"
cidr_block = "${var.SUBNET_CIDR_APLICATIONS}"
availability_zone = "${var.ZONE_SUB}"
depends_on = ["aws_internet_gateway.unir_gat_shop_dev"]
map_public_ip_on_launch = true
tags = {
Name = "UNIR-SUBNET-APLICATIONS-${var.SUFFIX}"
Environment = "${var.SUFFIX}"
}
}
resource "aws_subnet" "unir_subnet_cluster_1" {
vpc_id = "${aws_vpc.unir_shop_vpc_dev.id}"
cidr_block = "${var.SUBNET_CIDR_CLUSTER_1}"
map_public_ip_on_launch = true
availability_zone = "${var.ZONE_SUB_CLUSTER_2}"
tags = {
"kubernetes.io/cluster/UNIR-API-REST-CLUSTER-${var.SUFFIX}" = "shared"
}
}
resource "aws_subnet" "unir_subnet_cluster_2" {
vpc_id = "${aws_vpc.unir_shop_vpc_dev.id}"
cidr_block = "${var.SUBNET_CIDR_CLUSTER_2}"
availability_zone = "${var.ZONE_SUB_CLUSTER_1}"
map_public_ip_on_launch = true
tags = {
"kubernetes.io/cluster/UNIR-API-REST-CLUSTER-${var.SUFFIX}" = "shared"
}
}
resource "aws_internet_gateway" "unir_gat_shop_dev" {
vpc_id = "${aws_vpc.unir_shop_vpc_dev.id}"
tags = {
Environment = "${var.SUFFIX}"
Name = "UNIR-publicGateway-${var.SUFFIX}"
}
}
</code></pre>
<p>My variables:</p>
<pre><code>SUFFIX="DEV"
ZONE="eu-west-1"
TERRAFORM_USER_ID=
TERRAFORM_USER_PASS=
ZONE_SUB="eu-west-1b"
ZONE_SUB_CLUSTER_1="eu-west-1a"
ZONE_SUB_CLUSTER_2="eu-west-1c"
NET_CIDR_BLOCK="172.15.0.0/24"
SUBNET_CIDR_APLICATIONS="172.15.0.0/27"
SUBNET_CIDR_CLUSTER_1="172.15.0.32/27"
SUBNET_CIDR_CLUSTER_2="172.15.0.64/27"
SUBNET_CIDR_CLUSTER_3="172.15.0.128/27"
SUBNET_CIDR_CLUSTER_4="172.15.0.160/27"
SUBNET_CIDR_CLUSTER_5="172.15.0.192/27"
SUBNET_CIDR_CLUSTER_6="172.15.0.224/27"
MONGO_SSH_KEY=
KIBANA_SSH_KEY=
CLUSTER_SSH_KEY=
</code></pre>
<p>Will be more logs necesary?</p>
| <p>There is also a quick way to troubleshoot such issues. You could use the "AWSSupport-TroubleshootEKSWorkerNode" Runbook. This Runbook is designed to help troubleshoot an EKS worker node that failed to join an EKS cluster.
You need to go to AWS Systems Manager -> Automation -> Select the Runbook -> Execute the Runbook with ClusterName and Instance-id.</p>
<p>This is pretty helpful in troubleshooting and provides a nice summary at the end of the execution.</p>
<p>You could also refer the documentation <a href="https://docs.aws.amazon.com/systems-manager-automation-runbooks/latest/userguide/automation-awssupport-troubleshooteksworkernode.html" rel="nofollow noreferrer">here</a>.</p>
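<p>You can also kick it off from the CLI with something like the following, using the cluster name and instance ID from your error. The parameter names here are assumptions from memory; double-check them against the runbook's listed input parameters before running:</p>
<pre><code>aws ssm start-automation-execution \
  --document-name "AWSSupport-TroubleshootEKSWorkerNode" \
  --parameters "ClusterName=UNIR-API-REST-CLUSTER-DEV,WorkerID=i-05ed58f8101240dc8"
</code></pre>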
|
<p>Hello Fellow Developers,</p>
<p>I am new to software development and looking forward to doing transitioning to software development.</p>
<p>I am learning new skills and tools to scale up and I came across DOCKER and KUBERNETES</p>
<p>I have completed my backend with Spring Boot, Java and MySQL. I just wanted to know how to dockerize the Spring Boot application the way it's done in real-world industry projects, just to get the hang of it.</p>
<p>Things to image</p>
<p>Environment: JAVA 17</p>
<p>Dependencies: POM.xml</p>
<pre><code><?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<parent>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-parent</artifactId>
<version>2.7.5</version>
<relativePath /> <!-- lookup parent from repository -->
</parent>
<groupId>com.greatlearning.employeemanagementservice</groupId>
<artifactId>employeemanagementservice</artifactId>
<version>0.0.1-SNAPSHOT</version>
<name>employeemanagementservice</name>
<description>Demo project for Spring Boot</description>
<properties>
<java.version>17</java.version>
</properties>
<dependencies>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-devtools</artifactId>
<scope>runtime</scope>
<optional>true</optional>
</dependency>
<dependency>
<groupId>org.projectlombok</groupId>
<artifactId>lombok</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-data-jpa</artifactId>
</dependency>
<dependency>
<groupId>mysql</groupId>
<artifactId>mysql-connector-java</artifactId>
<version>8.0.19</version>
</dependency>
<!-- spring security -->
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-security</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-test</artifactId>
<scope>test</scope>
<exclusions>
<exclusion>
<groupId>org.junit.vintage</groupId>
<artifactId>junit-vintage-engine</artifactId>
</exclusion>
</exclusions>
</dependency>
</dependencies>
<build>
<plugins>
<plugin>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-maven-plugin</artifactId>
<configuration>
<excludes>
<exclude>
<groupId>org.projectlombok</groupId>
<artifactId>lombok</artifactId>
</exclude>
</excludes>
</configuration>
</plugin>
</plugins>
</build>
</project>
</code></pre>
<p>Application-properties:</p>
<pre><code>server.port=8082
spring.datasource.driver-class-name=com.mysql.cj.jdbc.Driver
spring.datasource.url=jdbc:mysql://localhost:3306/empdb
spring.datasource.username=root
spring.datasource.password=*s12
spring.jpa.show-sql=true
spring.jpa.hibernate.ddl-auto=update
spring.main.allow-bean-definition-overriding=true
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.MySQL8Dialect
</code></pre>
<p>This is an employee management project with simple CRUD operation I have done. It would be helpful if someone can guide me through Docker image settings.</p>
<p>Pardon me if it is inappropriate to post such questions on Stack Overflow.</p>
| <p>Try the following Dockerfile. It performs a multi-stage build where in the first stage, your code is compiled to an executable. Then in the next stage, only the parts necessary for running your application are copied so that your Docker image has less size than if you copied all your uncompiled and compiled code in the Docker image. For more information about multi-stage builds, consult the <a href="https://docs.docker.com/build/building/multi-stage/" rel="nofollow noreferrer">official documentation</a> or one of the countless internet tutorials.</p>
<pre><code>FROM maven:3.8-openjdk-17 as maven
WORKDIR /app
COPY ./pom.xml .
RUN mvn dependency:go-offline
COPY ./src ./src
RUN mvn package -DskipTests=true
WORKDIR /app/target/dependency
RUN jar -xf ../employeemanagementservice.jar
FROM ibm-semeru-runtimes:open-17-jre-centos7
ARG DEPENDENCY=/app/target/dependency
COPY --from=maven ${DEPENDENCY}/BOOT-INF/lib /app/lib
COPY --from=maven ${DEPENDENCY}/META-INF /app/META-INF
COPY --from=maven ${DEPENDENCY}/BOOT-INF/classes /app
CMD java -server -Xmx1024m -Xss1024k -XX:MaxMetaspaceSize=135m -XX:CompressedClassSpaceSize=28m -XX:ReservedCodeCacheSize=13m -XX:+IdleTuningGcOnIdle -Xtune:virtualized -cp app:app/lib/* com.greatlearning.employeemanagementservice.Application
</code></pre>
<p>I made the following assumptions:</p>
<ol>
<li>Your build artifact is named <code>employeemanagementservice.jar</code>. Check your <code>target</code> directory after your maven build to verify. I always configure it like this in the pom.xml</li>
</ol>
<pre class="lang-xml prettyprint-override"><code> <build>
<finalName>${project.artifactId}</finalName>
</build>
</code></pre>
<ol start="2">
<li><p>You run your tests in a CI/CD pipeline and don't want to run them in the Docker image build step. If you wanted to run your tests you'd have to remove the <code>-DskipTests=true</code> from the <code>RUN mvn package -DskipTests=true</code> comannd.</p>
</li>
<li><p>Your main class is called <code>Application</code> and it resides in the <code>com.greatlearning.employeemanagementservice</code> package. If not, change it in the Docker <code>CMD</code> command.</p>
</li>
<li><p>You want your service to use at most 1GB of RAM. If not, change the <code>-Xmx1024m</code> to your desired amount. The other <code>java</code> arguments are for optimization purposes, you can look them up online.</p>
</li>
</ol>
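<p>To try it out locally, a build and run could look like this (the tag is arbitrary; note that <code>localhost</code> in your <code>spring.datasource.url</code> refers to the container itself, so you'd override it, e.g. with <code>host.docker.internal</code> on Docker Desktop or with the name of a MySQL container/service):</p>
<pre><code>docker build -t employeemanagementservice:local .
docker run -p 8082:8082 \
  -e SPRING_DATASOURCE_URL=jdbc:mysql://host.docker.internal:3306/empdb \
  employeemanagementservice:local
</code></pre>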
|
<p>I'm configuring app to works in Kubernetes google cloud cluster. I'd like to pass parameters to <code>application.properties</code> in Spring boot application via <code>configMap</code>. I'm trying to pass value by <strong>Environment Variable</strong>.</p>
<p>I've created config map in google cloud Kubernetes cluster in namespace <code>default</code> like below:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: app-config
data:
config-key: 12345789abde
kubectl create -f app-config.yaml -n default
configmap/app-config created
</code></pre>
<p>I'm checking if configMap has been created:</p>
<p><a href="https://i.stack.imgur.com/qYEoJ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qYEoJ.png" alt="enter image description here" /></a></p>
<p>key value pair looks fine:</p>
<p><a href="https://i.stack.imgur.com/gPgBK.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/gPgBK.png" alt="enter image description here" /></a></p>
<p>I'm trying to deploy Spring boot app with using <code>cloudbuild.yaml</code>(it works when I not use configMap). The content is:</p>
<pre><code>substitutions:
_CLOUDSDK_COMPUTE_ZONE: us-central1-c # default value
_CLOUDSDK_CONTAINER_CLUSTER: kubernetes-cluster-test # default value
steps:
- id: 'Build docker image'
name: 'gcr.io/cloud-builders/docker'
args: ['build', '-t', 'gcr.io/${_TECH_RADAR_PROJECT_ID}/${_TECH_CONTAINER_IMAGE}:$SHORT_SHA', '.']
- id: 'Push image to Container Registry'
name: 'gcr.io/cloud-builders/docker'
args: ['push', 'gcr.io/${_TECH_RADAR_PROJECT_ID}/${_TECH_CONTAINER_IMAGE}:$SHORT_SHA']
- id: 'Set image in yamls'
name: 'ubuntu'
args: ['bash','-c','sed -i "s,${_TECH_CONTAINER_IMAGE},gcr.io/${_TECH_RADAR_PROJECT_ID}/${_TECH_CONTAINER_IMAGE}:$SHORT_SHA," deployment.yaml']
- id: 'Create or update cluster based on last docker image'
name: 'gcr.io/cloud-builders/kubectl'
args: ['apply', '-f', 'deployment.yaml']
env:
- 'CLOUDSDK_COMPUTE_ZONE=${_CLOUDSDK_COMPUTE_ZONE}'
- 'CLOUDSDK_CONTAINER_CLUSTER=${_CLOUDSDK_CONTAINER_CLUSTER}'
- id: 'Expose service to outside world via load balancer'
name: 'gcr.io/cloud-builders/kubectl'
args: [ 'apply', '-f', 'service-load-balancer.yaml' ]
env:
- 'CLOUDSDK_COMPUTE_ZONE=${_CLOUDSDK_COMPUTE_ZONE}'
- 'CLOUDSDK_CONTAINER_CLUSTER=${_CLOUDSDK_CONTAINER_CLUSTER}'
options:
logging: CLOUD_LOGGING_ONLY
</code></pre>
<p><code>deployment.yaml</code> with reference to config map (container is also in default namespace) is:</p>
<pre><code>apiVersion: "apps/v1"
kind: "Deployment"
metadata:
name: "java-kubernetes-clusters-test"
namespace: "default"
labels:
app: "java-kubernetes-clusters-test"
spec:
replicas: 3
selector:
matchLabels:
app: "java-kubernetes-clusters-test"
template:
metadata:
labels:
app: "java-kubernetes-clusters-test"
spec:
containers:
- name: "app"
image: "kubernetes-cluster-test-image"
env:
- name: CONFIG_KEY
valueFrom:
configMapKeyRef:
name: app-config
key: config-key
</code></pre>
<p>Spring Boot is not able to read the placeholder and I'm getting an error when I attempt to deploy the app to google cloud, like below:</p>
<p><a href="https://i.stack.imgur.com/igoGi.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/igoGi.png" alt="enter image description here" /></a></p>
<p>I'm trying to reference to env with name CONFIG_KEY in <code>application.properties</code>:</p>
<pre><code>com.example.dockerkubernetes.property.value=${CONFIG_KEY}
</code></pre>
<p>and then in Spring Boot controller:</p>
<pre><code>@RestController
@RequiredArgsConstructor
public class MyController {
@Value("${com.example.dockerkubernetes.property.value}")
private String testValue;
public String getConfigMapTestKey() {
return this.testValue;
}
}
</code></pre>
<p>Has anyone any idea why it doesn't work? Maybe some permissions are missing?
I would be grateful for help.</p>
| <p>You can read properties from the environment, but that could require a bit of coding & refactoring (depending on the rest of your code) to import and use such values, i.e. into a <code>Properties</code> object.</p>
<p>If you want to use an <code>application.properties</code> file directly you're better off mounting the single file into the app's container by <a href="https://kubernetes.io/docs/concepts/storage/volumes/#using-subpath" rel="nofollow noreferrer">using SubPath</a>.</p>
<p>Make sure you mount the file exactly as specified in the linked official docs or you are likely to get errors related to the container's FS permissions</p>
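<p>A minimal sketch of that approach (the ConfigMap key and mount path are assumptions; Spring Boot will pick the file up if you point it there, e.g. via <code>SPRING_CONFIG_ADDITIONAL_LOCATION=/config/</code>):</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  application.properties: |
    com.example.dockerkubernetes.property.value=12345789abde
---
# relevant parts of the Deployment's pod spec:
spec:
  containers:
    - name: app
      image: kubernetes-cluster-test-image
      env:
        - name: SPRING_CONFIG_ADDITIONAL_LOCATION
          value: /config/
      volumeMounts:
        - name: app-config
          mountPath: /config/application.properties
          subPath: application.properties
  volumes:
    - name: app-config
      configMap:
        name: app-config
</code></pre>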
|
<p>I know that the kubelet reports that the node is in diskpressure if there is not enough space on the node.<br />
But I want to know the exact threshold of diskpressure.<br />
Please let me know where in the kubelet source code this is handled, if you could.<br />
I would also really appreciate a pointer to the official k8s documentation or anything else on this.<br />
Thanks again!!</p>
| <p>Disk pressure is a condition indicating that a node is using too much disk space or is using disk space too fast, according to the thresholds you have set in your Kubernetes configuration.</p>
<p>DaemonSet can deploy apps to multiple nodes in a single step. Like deployments, DaemonSets must be applied using kubectl before they can take effect.</p>
<p>Since Kubernetes runs on Linux, this is easily checked by running the <code>du</code> command. You can either manually ssh into each Kubernetes node, or use a DaemonSet as follows:</p>
<pre><code>apiVersion: apps/v1
kind: DaemonSet
metadata:
name: disk-checker
labels:
app: disk-checker
spec:
selector:
matchLabels:
app: disk-checker
template:
metadata:
labels:
app: disk-checker
spec:
hostPID: true
hostIPC: true
hostNetwork: true
containers:
- resources:
requests:
cpu: 0.15
securityContext:
privileged: true
image: busybox
imagePullPolicy: IfNotPresent
name: disk-checked
command: ["/bin/sh"]
args: ["-c", "du -a /host | sort -n -r | head -n 20"]
volumeMounts:
- name: host
mountPath: "/host"
volumes:
- name: host
hostPath:
path: "/"
</code></pre>
<p>The condition means that available disk space and inodes on either the node's root filesystem or image filesystem have satisfied an eviction threshold; check the complete <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/node-pressure-eviction/#node-conditions" rel="nofollow noreferrer">Node Conditions</a> for more details.</p>
<p><strong>Ways to set Kubelet options :</strong></p>
<p>1) Command line options like --eviction-hard.</p>
<p>2) Config file.</p>
<p>3) More recent is dynamic configuration.</p>
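<p>The documented defaults for the hard-eviction thresholds give you the "exact" numbers you are after. Expressed as a KubeletConfiguration snippet they look like this (a sketch of the defaults; verify against your own kubelet config, since distributions can override them):</p>
<pre><code>apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
evictionHard:
  memory.available: "100Mi"
  nodefs.available: "10%"
  nodefs.inodesFree: "5%"
  imagefs.available: "15%"
</code></pre>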
<p>When you experience an issue with node disk pressure, your immediate suspects should be garbage collection errors or log files. Of course, the better answer here is to clean up unused files (free up some disk space).</p>
<p>So monitor your clusters and get notified of any node disks approaching pressure, and get the issue resolved before it starts killing other pods inside the cluster.</p>
<p><strong>Edit:</strong>
AFAIK there is no magic trick to know the exact threshold of disk pressure. You need to start with reasonable values (limits & requests) and refine using trial and error.</p>
<p>Refer to this <a href="https://serverfault.com/questions/1031424/gke-kill-pod-when-monitoring-tool-still-show-that-we-have-memory">SO</a> for more information on how to set the threshold of diskpressure.</p>
|
<p>I'm trying to troubleshoot an issue I'm having in kubernetes where after a job fails, the associated pod seemingly disappears and I can't view the logs. The job still exists though.</p>
<p>But that's not what my question is about. In reading through the documentation, it seemingly uses the terms "terminated" and "deleted" interchangably. This is leading me to be very confused. I would <em>assume</em> that terminated pods are not necessarily deleted, but the way the documentation is written, it implies that a terminated pod and a deleted pod are the same thing.</p>
<p><strong>Example 1:</strong> <a href="https://kubernetes.io/docs/concepts/workloads/controllers/job/#job-termination-and-cleanup" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/controllers/job/#job-termination-and-cleanup</a></p>
<blockquote>
<p>When a Job completes, no more Pods are created, but the Pods are usually not deleted either</p>
</blockquote>
<p>"usually" then links to <a href="https://kubernetes.io/docs/concepts/workloads/controllers/job/#pod-backoff-failure-policy" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/controllers/job/#pod-backoff-failure-policy</a> which then describes the logic by which pods will be <em>terminated</em>. So here, a link to a section which purports to describe the logic by which pods will be <em>deleted</em>, instead describes the logic by which pods will be <em>terminated</em>, implying they are one and the same.</p>
<p><strong>Example 2:</strong> <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination-forced" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination-forced</a></p>
<p>This section is titled "Forced Pod <em>termination</em>" and proceeds to explain what happens when you attempt to force <em>delete</em> a pod. Again implying that terminating and deleting a pod are one-and-the-same.</p>
<p><strong>Example 3:</strong>
<a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination</a></p>
<p>This section, titled "Termination of Pods", describes what happens when the user requests deletion of a pod.</p>
<p>The job in question is failing due to <code>DeadlineExceeded.</code> The documentation states "Once a Job reaches <code>activeDeadlineSeconds</code>, all of its running Pods are terminated and the Job status will become type: Failed with reason: <code>DeadlineExceeded</code>." If <em>terminated</em> and <em>deleted</em> mean the same thing, then that would explain why my pods are gone. I find that a strange design choice but it at least would explain my problem.</p>
<p>The kubernetes documentation asked me if the documentation was helpful, I said "no" and it told me to create a question on stack overflow, so that's what I'm doing :)</p>
| <p>As @karthikeayan said, Delete and Terminate are the same. And yes, your pods got deleted because activeDeadlineSeconds exceeded.</p>
<p>If your Job hits an error and its restartPolicy is not Never, the pods your Job created will be deleted.</p>
<p>restartPolicy can be set to OnFailure: this indicates that Kubernetes will keep restarting until the Job completes successfully. However, the number of failures does not rise with each in-place retry, so to prevent an endlessly failing loop you can set activeDeadlineSeconds to a value.</p>
<p>You have already researched and gathered pretty good information. To find the logs of a deleted pod, follow this <a href="https://stackoverflow.com/questions/68572218/how-to-review-logs-of-a-deleted-pod">stack link</a>; or, even better, have your logs centralized via logging agents or pushed directly into an external service, as suggested by @Jonas.</p>
|
<p>I am learning kubernetes in my home network, which is configured like this:</p>
<pre><code>-192.168.1.1(router)
-192.168.1.30(ubuntu machine 1, master node)
-192.168.1.71(ubuntu machine 2, worker node)
</code></pre>
<p>My router ip 192.168.1.1 can not be modified. When I execute <code>sudo kubeadm init --pod-network-cidr=192.168.1.0/24</code> on the master node, the master node's iptables adds cni0 like below, and other nodes inside the network (like ubuntu machine 2) become unreachable</p>
<pre><code>361: cni0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default qlen 1000
link/ether 32:61:29:d2:1d:16 brd ff:ff:ff:ff:ff:ff
inet 192.168.1.1/24 brd 192.168.1.255 scope global cni0
valid_lft forever preferred_lft forever
inet6 fe80::3061:29ff:fed2:1d16/64 scope link
valid_lft forever preferred_lft forever
</code></pre>
<p>what could be the problem? thanks in advance</p>
| <p>Use another (private) IP range for your cluster, like 10.10.x.x.
Do not use the network on which your nodes are running.</p>
<pre><code>--pod-network-cidr=10.10.10.0/24
</code></pre>
<p>This will create a range of 255 IP addresses that the nodes can use for the pods when they start, the so-called cluster IPs.</p>
<p>You also need a network fabric like Flannel or Calico.</p>
<pre><code>kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/v0.20.2/Documentation/kube-flannel.yml
</code></pre>
<p>Otherwise you won't be able to connect to your cluster.</p>
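<p>For example (the flannel manifest above defaults its pod network to 10.244.0.0/16, so either match that on kubeadm init, as below, or edit the net-conf.json in the kube-flannel ConfigMap to your own range):</p>
<pre><code>sudo kubeadm init --pod-network-cidr=10.244.0.0/16
kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/v0.20.2/Documentation/kube-flannel.yml
</code></pre>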
|
<p>I have AKS with nginx load balancer (ingress controller) installed with helm as the entry point to my cluster.</p>
<p>I want to preserve the real IP addresses from clients for further whitelisting with <code>nginx.ingress.kubernetes.io/whitelist-source-range</code> on certain paths of specific services.</p>
<p>I found the solution here:</p>
<p><a href="https://stackoverflow.com/questions/60968806/nginx-ingress-controller-forward-source-ip">nginx ingress controller forward source ip</a></p>
<p>which works, but also I found in certain blogs and in the Kubernetes documentation, that setting externalTrafficPolicy=Local is not good for proper load balancing.</p>
<p>I tried different variations of settings in my nginx configmap, but with no luck.
I also found that it can be done somehow with the X-Forwarded-For header, but I couldn't get it to work.</p>
| <p>I also made a github issue with the same question here:
<a href="https://github.com/kubernetes/ingress-nginx/issues/9402" rel="nofollow noreferrer">https://github.com/kubernetes/ingress-nginx/issues/9402</a></p>
<p>And the discussion on the kuberenetes stack took me to the following link: <a href="https://www.asykim.com/blog/deep-dive-into-kubernetes-external-traffic-policies" rel="nofollow noreferrer">https://www.asykim.com/blog/deep-dive-into-kubernetes-external-traffic-policies</a></p>
<p>where he says that: "if you have a service receiving external traffic from an LB (using NodePorts), you almost always want to use externalTrafficPolicy: Local (with pod anti-affinity to alleviate imbalanced traffic)."</p>
<p>I didn't find a way to preserve the client IP without an external LB or proxy, so this looks like the only possible solution. If you find a better one, please comment.</p>
<p>For more info see the discussion on kubernetes stack: <a href="https://kubernetes.slack.com/archives/CANQGM8BA/p1670925074384859" rel="nofollow noreferrer">https://kubernetes.slack.com/archives/CANQGM8BA/p1670925074384859</a></p>
|
<p>It would be useful for me to be able to define, at a higher level, a way to determine the set of nodes that a set of pods should run on top of.</p>
<p>Is there a way to use kustomize so that i can specify what nodeSelectors a deployment should have?</p>
| <p>OP asks for a way for all pods to run on a pre-defined set of nodes. So, adding to @yee379, you may define a patch to be applied to multiple generic resources at once like this:</p>
<pre class="lang-yaml prettyprint-override"><code>patches:
- patch: |-
kind: not-important
metadata:
name: not-important
spec:
template:
spec:
nodeSelector:
key: value
target:
kind: (StatefulSet|Deployment|Job)
# ... Include more resources
# If you'd like to include Pods specifically:
- patch: |-
kind: not-important
metadata:
name: not-important
spec:
nodeSelector:
key: value
target:
kind: Pod
</code></pre>
|
<p>I am trying to set up Grafana on Kubernetes and need to use MySQL as its backend to store dashboards and its config instead of a persistent volume.
Is there any way to achieve this?</p>
<p>I tried the Grafana Helm chart expecting a variable for using MySQL as the backend but could not find any.</p>
| <p>Yes, Grafana supports MySQL and Postgres as its backing database:</p>
<p><a href="https://grafana.com/docs/grafana/latest/setup-grafana/configure-grafana/#database" rel="nofollow noreferrer">https://grafana.com/docs/grafana/latest/setup-grafana/configure-grafana/#database</a></p>
<p>You can set the value at <strong>grafana.ini</strong></p>
<p><a href="https://github.com/grafana/helm-charts/blob/main/charts/grafana/values.yaml#L682" rel="nofollow noreferrer">https://github.com/grafana/helm-charts/blob/main/charts/grafana/values.yaml#L682</a></p>
<pre><code>grafana.ini:
database:
type: mysql
host: ${host}
user: ${username}
password: ${password}
</code></pre>
<p>If you prefer to set environment variables instead:</p>
<pre><code>GF_DATABASE_TYPE=mysql
GF_DATABASE_HOST=host
GF_DATABASE_USER=username
GF_DATABASE_PASSWORD=password
</code></pre>
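<p>If you go the environment-variable route with the Helm chart, the chart also exposes an <code>env</code> map in values.yaml. This is only a sketch assuming the official grafana/grafana chart; the host and credentials are placeholders:</p>
<pre><code>env:
  GF_DATABASE_TYPE: mysql
  GF_DATABASE_HOST: mysql.default.svc.cluster.local:3306   # placeholder
  GF_DATABASE_USER: grafana                                # placeholder
  GF_DATABASE_PASSWORD: changeme                           # better: pull this from a Secret (e.g. envFromSecret)
</code></pre>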
|
<pre><code>Warning SyncLoadBalancerFailed 54s (x4 over 91s) service-controller Error syncing load balancer: failed to ensure load balancer: Multiple tagged security groups found for instance i-05f3a11329a20bb93; ensure only the k8s security group is tagged; the tagged groups were sg-08ca90265d3402e6c(education-eks-ooHfNJwm-node-20221205083117267100000007) sg-04ad04b5d3bb35e66(eks-cluster-sg-education-eks-ooHfNJwm-1857011925)
Normal EnsuringLoadBalancer 14s (x5 over 94s) service-controller Ensuring load balancer
Warning SyncLoadBalancerFailed 13s service-controller Error syncing load balancer: failed to ensure load balancer: Multiple tagged security groups found for instance i-046c2cc46714af250; ensure only the k8s security group is tagged; the tagged groups were sg-08ca90265d3402e6c(education-eks-ooHfNJwm-node-20221205083117267100000007) sg-04ad04b5d3bb35e66(eks-cluster-sg-education-eks-ooHfNJwm-1857011925)
</code></pre>
<p>I created the cluster with this code: <a href="https://developer.hashicorp.com/terraform/tutorials/kubernetes/eks" rel="nofollow noreferrer">eks using terraform</a></p>
| <p>Adding the following to the eks module will fix the issue:</p>
<pre><code>node_security_group_tags = {
"kubernetes.io/cluster/${var.cluster_name}" = null
}
</code></pre>
<h2>Ref:</h2>
<p><a href="https://github.com/terraform-aws-modules/terraform-aws-eks/issues/1986" rel="nofollow noreferrer">https://github.com/terraform-aws-modules/terraform-aws-eks/issues/1986</a>
<a href="https://github.com/terraform-aws-modules/terraform-aws-eks/issues/1810" rel="nofollow noreferrer">https://github.com/terraform-aws-modules/terraform-aws-eks/issues/1810</a></p>
|
<p>When we use <code>PM2</code> in cluster mode, we can find out the instance number inside the node app using <code>process.env.NODE_APP_INSTANCE</code>, but how can we find that inside a Kubernetes cluster without <code>pm2</code>? I'm looking for something similar, like finding the replica instance number.</p>
<p>Imagine a node app with 2 or more replicas where we need to run the <code>node-cron</code> scheduler only inside one of the pods.</p>
| <p>I found that when you use a <code>StatefulSet</code> instead of a <code>Deployment</code>, it's possible to inject the deterministic pod name as an environment variable.</p>
<pre><code> ...
containers:
...
env:
- name: "POD_NAME"
valueFrom:
fieldRef:
fieldPath: metadata.name
</code></pre>
<p>The <code>POD_NAME</code> variable then has a value like pod-0, pod-1 and so on,
so you can derive the instance number from that ordinal suffix (for example, only start the <code>node-cron</code> scheduler when the name ends in <code>-0</code>).</p>
|
<p>I need the URI of requests that reach my <code>myapp</code> pods to be rewritten to remove the prefix <code>/foo</code> from the path. For example, a URI <code>/foo/bar</code> should be received as <code>/bar</code>. I am using a GCP load balancer that routes traffic directly to pods. I am not using Istio ingress, so Istio has no control over the load balancer's behavior.</p>
<p>I tried creating a <code>VirtualService</code> to handle the path rewrite:</p>
<pre><code>apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
name: myapp-route
spec:
hosts:
- myapp
http:
- match:
- uri:
prefix: "/foo/"
rewrite:
uri: "/"
route:
- destination:
host: myapp
</code></pre>
<p>(This may not be <em>exactly</em> correct as I adapted/simplified what I tried for the question.)</p>
<p>This works when sending requests from a pod with an Istio sidecar to the <code>myapp</code> service, but not from the load balancer. I can see the URI is being rewritten as it goes out from any other pod, not when it's coming into a <code>myapp</code> pod.</p>
<p>How can I get URI rewriting as an incoming rule?</p>
| <p>I found <a href="https://github.com/istio/istio/issues/22290#issuecomment-1317595537" rel="nofollow noreferrer">https://github.com/istio/istio/issues/22290#issuecomment-1317595537</a> which shows how to write a custom <code>EnvoyFilter</code> to do path rewriting and adapted it to my needs. I'm not at all sure if the directives to specify how and where the filter should be applied are correct, but it at least does the prefix rewrite as an inbound rule.</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
name: myapp-rewrite-filter
spec:
workloadSelector:
labels:
app: myapp
configPatches:
# The first patch adds the lua filter to the listener/http connection manager
- applyTo: HTTP_FILTER
match:
context: SIDECAR_INBOUND
listener:
filterChain:
filter:
name: "envoy.filters.network.http_connection_manager"
subFilter:
name: "envoy.filters.http.router"
patch:
operation: INSERT_BEFORE
value: # lua filter specification
name: envoy.filters.http.lua
typed_config:
"@type": "type.googleapis.com/envoy.extensions.filters.http.lua.v3.Lua"
inlineCode: |
function remove_prefix(s, prefix)
return (s:sub(0, #prefix) == prefix) and s:sub(#prefix+1) or s
end
function envoy_on_request(request_handle)
local filter_name = "SIMPLE_PATH_REWRITER"
request_handle:logDebug(filter_name .. ": Starting")
local prefix = "/foo"
local path = request_handle:headers():get(":path")
local unprefixed = remove_prefix(path, prefix)
request_handle:headers():replace(":path", unprefixed)
local msg = filter_name .. ": Path " .. path .. " rewritten to " .. unprefixed
request_handle:logDebug(msg)
end
</code></pre>
|
<p>I am following <a href="https://www.datapacket.com/blog/build-kubernetes-cluster#chapter6" rel="nofollow noreferrer">this guide</a> to expose a service running on my bare metal k8s cluster to the world.</p>
<p>The guide suggests using <a href="https://github.com/metallb/metallb" rel="nofollow noreferrer">metallb</a> for giving external access. The problem is, during the setup process of metallb, I am asked to give a range of available IP addresses.</p>
<p>The hosting provider I am using is very basic, and all I have is the IP address of the Linux instance that is running my K8s node. So my question is, how can I provision an IP address for assigning to my application? Is this possible with a single IP?</p>
<p>Alternatively I'd love to get this done with a NodePort, but I need to support HTTPS traffic and I am not sure that's possible if I go that way.</p>
| <p>Specify a single IP using CIDR notation: your-ip/32 (192.168.10.0/32, for example).</p>
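<p>For example, with the CRD-based configuration (assuming MetalLB v0.13+; the address below is a placeholder for your instance's public IP):</p>
<pre><code>apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: single-ip-pool
  namespace: metallb-system
spec:
  addresses:
    - 203.0.113.10/32        # placeholder: your node's public IP as a /32
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: single-ip-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - single-ip-pool
</code></pre>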
|
<p>I am new to Kubernetes, so this might be obvious, but in a deployment yaml, why do we have to define the labels in the deployment metadata, then define the same labels in the template metadata, but match those separately in the selector?</p>
<p>Shouldn't it be obvious that the template belongs to the deployment it's under?
Is there a use case for a deployment to have a template it doesn't match?</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: api-backend
spec:
replicas: 2
selector:
matchLabels:
app: api-backend
template:
metadata:
labels:
app: api-backend
spec:
#...etc
</code></pre>
<p>I might be missing some key understanding of k8s or yamls.</p>
<p>I tried having the template with no labels, and it seems to work, but I don't understand why; Kubernetes could be auto-magically inserting the labels.</p>
| <p>Let's simplify labels, selectors and template labels first.</p>
<ul>
<li>The Labels in the metadata section are assigned to the deployment itself.</li>
<li>The Labels in the <code>.spec.template</code> section are assigned to the pods created by the deployment. These are actually called PodTemplate labels.</li>
<li>The selectors provide uniqueness to your resource. It is used to identify the resources that match the labels in <code>.spec.selector.matchLabels</code> section.</li>
</ul>
<p>Now, it is not mandatory to have all the podTemplate labels in the matchLabels section. A pod can have many labels, but only one of the matchLabels is enough to identify the pods. Here's a use case to understand why it has to be used:</p>
<p>"Let’s say you have deployment X which creates two pods with label <code>nginx-pods</code> and image <code>nginx</code>, and another deployment Y which applies to the pods with the same label <code>nginx-pods</code> but uses image <code>nginx:alpine</code>. If deployment X is running and you run deployment Y afterwards, it will not create new pods; instead, it will replace the existing pods with the <code>nginx:alpine</code> image. Both deployments will identify the pods, as the labels on the pods match the labels in both deployments' <code>.spec.selector.matchLabels</code>."</p>
|
<p>My <code>values.yaml</code> for the etcd dependency looks like this:</p>
<pre><code>etcd:
  enabled: true
  name: etcd
  replicaCount: 3
  pdb:
    create: false
  image:
    repository: "milvusdb/etcd"
    tag: "3.5.0-r7"
    pullPolicy: IfNotPresent

  service:
    type: ClusterIP
    port: 2379
    peerPort: 2380

  auth:
    rbac:
      enabled: false

  persistence:
    enabled: true
    storageClass:
    accessMode: ReadWriteOnce
    size: 10Gi

  ## Enable auto compaction
  ## compaction by every 1000 revision
  ##
  autoCompactionMode: revision
  autoCompactionRetention: "1000"

  ## Increase default quota to 4G
  ##
  extraEnvVars:
    - name: ETCD_QUOTA_BACKEND_BYTES
      value: "4294967296"
    - name: ETCD_HEARTBEAT_INTERVAL
      value: "500"
    - name: ETCD_ELECTION_TIMEOUT
      value: "2500"

## Configuration values for the pulsar dependency
## ref: https://github.com/apache/pulsar-helm-chart
</code></pre>
<p><a href="https://i.stack.imgur.com/h1hLr.png" rel="nofollow noreferrer">enter image description here</a></p>
<p>I am trying to run the Milvus cluster using Kubernetes on an Ubuntu server.
I used the Helm chart from <a href="https://milvus-io.github.io/milvus-helm/" rel="nofollow noreferrer">https://milvus-io.github.io/milvus-helm/</a></p>
<p>Values.yaml
<a href="https://raw.githubusercontent.com/milvus-io/milvus-helm/master/charts/milvus/values.yaml" rel="nofollow noreferrer">https://raw.githubusercontent.com/milvus-io/milvus-helm/master/charts/milvus/values.yaml</a></p>
<p>I checked the PersistentVolumeClaim and there was an error:
<code>no persistent volumes available for this claim and no storage class is set</code></p>
| <p>This error occurs because you don't have a PersistentVolume.
A PVC needs a PV with at least the same capacity as the claim.</p>
<p>This can be done manually or with a volume provisioner.</p>
<p>The easiest way, some would say, is to use the local storageClass, which uses disk space from the node where the pod is instantiated and pins the pod to that node so it always starts there and can use the volume on that disk. In your case you are using 3 replicas. Although it is possible to start all 3 instances on the same node, that is most likely not what you want to achieve with Kubernetes: if that node breaks, you won't have any other instance running on another node.</p>
<p>You first need to think about the infrastructure of your cluster. Where should the data of the volumes be stored?</p>
<p>A Network File System (NFS) might be a good solution.
In this case you have an NFS server somewhere in your infrastructure and all the nodes can reach it.</p>
<p>That way you can create a PV which is accessible from all your nodes.</p>
<p>To avoid allocating a PV manually every time, you can install a volume provisioner inside your cluster.</p>
<p>In some clusters I use this one:
<a href="https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner" rel="nofollow noreferrer">https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner</a></p>
<p>As said, you must already have an NFS server and configure the provisioner YAML with its path.</p>
<p>It looks like this:</p>
<pre><code># patch_nfs_details.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: nfs-client-provisioner
name: nfs-client-provisioner
spec:
template:
spec:
containers:
- name: nfs-client-provisioner
env:
- name: NFS_SERVER
value: <YOUR_NFS_SERVER_IP>
- name: NFS_PATH
value: <YOUR_NFS_SERVER_SHARE>
volumes:
- name: nfs-client-root
nfs:
server: <YOUR_NFS_SERVER_IP>
path: <YOUR_NFS_SERVER_SHARE>
</code></pre>
<p>If you use NFS without a provisioner, you need to define a StorageClass which is linked to your NFS share.</p>
<p>There are a lot of solutions for backing persistent volumes.</p>
<p>Here you can find a list of storage classes and provisioners:
<a href="https://kubernetes.io/docs/concepts/storage/storage-classes/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/storage/storage-classes/</a></p>
<p>In the end it also depends on where your cluster is provisioned, if you are not managing it yourself.</p>
|
<p>I am new to Kubernetes, so this might be obvious, but in a deployment yaml, why do we have to define the labels in the deployment metadata, then define the same labels in the template metadata, but match those separately in the selector?</p>
<p>Shouldn't it be obvious that the template belongs to the deployment it's under?
Is there a use case for a deployment to have a template it doesn't match?</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: api-backend
spec:
replicas: 2
selector:
matchLabels:
app: api-backend
template:
metadata:
labels:
app: api-backend
spec:
#...etc
</code></pre>
<p>I might be missing some key understanding of k8s or yamls.</p>
<p>I tried having the template with no labels, and it seems to work, but I don't understand why; Kubernetes could be auto-magically inserting the labels.</p>
| <p>Technically, the parameter <code>matchLabels</code> decides on which Pods belongs to the given Deployment (and the underlying ReplicaSet). In practice, I have never seen a Deployment with different <code>labels</code> than <code>matchLabels</code>. So, the reason might be the uniformity between other Kubernetes resources (like Service where the <code>matchLabels</code> makes more sense).</p>
<p>I recommend reading the blog post <a href="https://medium.com/@zwhitchcox/matchlabels-labels-and-selectors-explained-in-detail-for-beginners-d421bdd05362" rel="nofollow noreferrer">matchLabels, labels, and selectors explained in detail, for beginners</a>.</p>
|
<p>I'm running the example in the chapter "Service Discovery" of the book "Kubernetes up and running". The original command to run a deployment is <code>kubectl run alpaca-prod --image=gcr.io/kuar-demo/kuard-amd64:blue --replicas=3 --port=8080 --labels="ver=1,app=alpaca,env=prod"</code>, however in K8s version 1.25 the <code>--replicas</code> parameter of the run command is not supported any more. I planned to run without replicas and then use "kubectl scale" to scale the deployment later. The problem is that the run command only creates a pod, not a deployment (the scale command expects a deployment). So how do I promote my pod to a deployment? My kubernetes version is 1.25.</p>
| <p>There is no way to promote it. You could change labels and similar settings, but instead of that, create a new deployment and delete the existing pod.</p>
<p>As an easy first step, dump the spec of the existing running pod to a YAML file:</p>
<pre><code>kubectl get pod <POD name> -o yaml > pod-spec.yaml
</code></pre>
<p>Now create a deployment spec YAML file:</p>
<pre><code>kubectl create deployment deploymentname --image=imagename --dry-run=client -o yaml > deployment-spec.yaml
</code></pre>
<p>Edit the <strong>deployment-spec.yaml</strong> file,</p>
<p>and with <strong>pod-spec.yaml</strong> open in another tab, copy the <code>spec</code> part from the pod file into the new deployment file.</p>
<p>Once <strong>deployment-spec.yaml</strong> is ready you can apply it. If a Service is involved, make sure its selector labels match the deployment's pod labels properly.</p>
<pre><code>kubectl apply -f deployment-spec.yaml
</code></pre>
<p>Delete the single running POD</p>
<pre><code>kubectl delete pod <POD name>
</code></pre>
|
<p>I have Gitlab Kubernetes integration in my <code>Project 1</code> and I am able to access the <code>kube-context</code> within that project's pipelines without any issues.</p>
<p>I have another project, <code>Project 2</code> where I want to use the same cluster that I integrated in my <code>Project 1</code>.</p>
<p>This i my agent config file:</p>
<pre><code># .gitlab/agents/my-agent/config.yaml
ci_access:
projects:
- id: group/project-2
</code></pre>
<p>When I try to add a Kubernetes cluster in my <code>Project 2</code>, I am expecting to see the cluster name that I set up for <code>Project 1</code> in the dropdown, but I don't see it:</p>
<p><a href="https://i.stack.imgur.com/LYZfr.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/LYZfr.png" alt="gitlab-agent" /></a></p>
| <p><a href="https://about.gitlab.com/releases/2022/12/22/gitlab-15-7-released/#share-cicd-access-to-the-agent-within-a-personal-namespace" rel="nofollow noreferrer">GitLab 15.7</a> (December 2022) suggests an alternative approach, which does not involve creating a new agent per project</p>
<blockquote>
<h2>Share CI/CD access to the agent within a personal namespace</h2>
<p>The GitLab agent for Kubernetes provides a more secure solution for managing your clusters with GitLab CI/CD.
You can use a single agent with multiple projects and groups by sharing access
to the agent connection. In previous releases, you could not share access with personal
namespaces. This release adds support for CI/CD connection sharing to personal namespaces.
You can now use a single agent from any of the projects under your personal namespace.</p>
<p><a href="https://i.stack.imgur.com/lGha6.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/lGha6.png" alt="https://about.gitlab.com/images/15_7/configure-allow-agent-cicd-access-sharing-within-a-personal-namesp.png -- Share CI/CD access to the agent within a personal namespace" /></a></p>
<p>See <a href="https://docs.gitlab.com/ee/user/clusters/agent/ci_cd_workflow.html#authorize-the-agent" rel="nofollow noreferrer">Documentation</a> and <a href="https://gitlab.com/gitlab-org/gitlab/-/issues/356831" rel="nofollow noreferrer">Issue</a>.</p>
</blockquote>
<p>That means from a single agent, you should be able to access an agent connection from multiple projects (personal namespace or not).</p>
|
<p>I am trying to deploy a pod in kubernetes and link a service to it so I can access it via web browser.</p>
<p>The pod specification is the following:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: visualizer-nautic
labels:
app: visualizer
spec:
replicas: 1
selector:
matchLabels:
app: visualizer
template:
metadata:
annotations:
k8s.v1.cni.cncf.io/networks: zenoh-network
labels:
app: visualizer
spec:
containers:
- name: api
image: hielito/visualizer:latest
ports:
- containerPort: 8100
</code></pre>
<p>It deploys a web app, its frontend is exposed at port 8100, so then I add a NodePort service to make it visible in my network:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
name: visualizer-service
spec:
type: NodePort
selector:
app: visualizer
ports:
- port: 8100
targetPort: 8100
nodePort: 30012
</code></pre>
<p>Once I have this service deployed, I try to access localhost:30012, but the web browser just gets stuck loading and eventually throws a timeout error.</p>
<p>If anyone is wondering, the deployed app is working fine: if I execute a port-forward (<code>microk8s kubectl port-forward pods/visualizer-nautic-77957b94c9-mww6p 8100:8100 </code>) I can then access the app perfectly at localhost:8100.</p>
<p>EDIT: I forgot to add that I am using multus CNI add-on and have a MACVLAN network interface added to the pod. When I remove the additional CNI it works fine again, so the MACVLAN network might be interfering with the service.</p>
<p>EDIT2: As requested, I leave specs and versions here:<br />
OS - Ubuntu 20.04 LTS<br />
Kubernetes - Microk8s 1.23</p>
| <p>The NodePort is exposed on your Kubernetes node, not on your local machine, so localhost won't work.
Use http://masternodeipaddress:nodeport instead.</p>
<p>You can add a hostentry in your local hosts file like:</p>
<pre><code>masternodeipaddress mytestdomain.local
</code></pre>
<p>then access <a href="http://mytestdomain.local:nodeport" rel="nofollow noreferrer">http://mytestdomain.local:nodeport</a>.</p>
<p>Update</p>
<p>With service type LoadBalancer:<br>
get the IP from your master node.<br>
I tried <code>microk8s enable host-access</code>, but had issues, so I put the node's IP address in <code>externalIPs</code>, as below.
Then open http://masternodeipaddress in the browser.</p>
<p>deployment.yaml</p>
<pre><code>---
apiVersion: v1
kind: Service
metadata:
name: app
spec:
ports:
- name: tcp
port: 80
protocol: TCP
targetPort: 80
externalIPs:
- 10.165.39.165 # master node ip
selector:
app: app
type: LoadBalancer
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: app
name: app
spec:
replicas: 1
selector:
matchLabels:
app: app
strategy:
type: Recreate
template:
metadata:
labels:
app: app
spec:
containers:
- image: nginx
name: app
ports:
- containerPort: 80
protocol: TCP
restartPolicy: Always
</code></pre>
<p>with nodePort</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: app
spec:
ports:
- name: tcp
port: 80
protocol: TCP
targetPort: 80
nodePort: 31000 # optional, use kubectl get svc -o wide if you let it blank
externalIPs:
- 10.165.39.165 # master node ip
selector:
app: app
type: NodePort
</code></pre>
<p>To set a domain local:</p>
<pre><code>sudo vi /etc/hosts
</code></pre>
<p>add a line with masternode ip like:</p>
<pre><code>10.165.39.165 mydomain.local
</code></pre>
<p>Now you can access in your browser with</p>
<p><a href="http://mydomain.local" rel="nofollow noreferrer">http://mydomain.local</a></p>
<p>Note: I have tested this with:</p>
<ul>
<li>Ubuntu 22.04</li>
<li>multipass, 3 nodes with microk8s 1.24</li>
<li>the microk8s dns and ingress add-ons enabled</li>
</ul>
|
<p>When there's more than one search result from <code>/</code> filter command, how do you navigate to next item? Basically I'm looking for F3 (next search result) equivalent in k9s. Commands listed <a href="https://k9scli.io/topics/commands/" rel="noreferrer">here</a> does not seem to include what I'm looking for...</p>
| <p>Narain's answer works when searching in a list of resources but when looking at yaml the interaction is slightly different:</p>
<ol>
<li>Press <code>/</code></li>
<li>Type search term, press enter</li>
<li>Press <code>n</code> to go to next, <code>shift-n</code> to go to previous match</li>
</ol>
<p>Edit: I realize now that this is shown in the header. I suppose this explanation is good for other blind bats like myself.</p>
<p><a href="https://i.stack.imgur.com/3av2P.png" rel="noreferrer"><img src="https://i.stack.imgur.com/3av2P.png" alt="enter image description here" /></a></p>
|
<p>I'm new to k8s and I'm trying to build a distributed system. The idea is that a stateful pod will be spawned for each user.</p>
<p>Main services are two Python applications <code>MothershipService</code> and <code>Ship</code>. MothershipService's purpose is to keep track of ship-per-user, do health checks, etc. <code>Ship</code> is running some (untrusted) user code.</p>
<pre><code>MothershipService Ship-user1
| | ---------- | |---vol1
|..............| -----. |--------|
\
\ Ship-user2
'- | |---vol2
|--------|
</code></pre>
<p>I can manage fine to get up the ship service</p>
<pre><code>> kubectl get all -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/ship-0 1/1 Running 0 7d 10.244.0.91 minikube <none> <none>
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/ship ClusterIP None <none> 8000/TCP 7d app=ship
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 7d <none>
NAME READY AGE CONTAINERS IMAGES
statefulset.apps/ship 1/1 7d ship ship
</code></pre>
<p>My question is how do I go about testing this via <code>curl</code> or a browser? These are all backend services so <code>NodePort</code> seems not the right approach since none of this should be accessible to the public. Eventually I will build a test-suite for all this and deploy on GKE.</p>
<p>ship.yml (pseudo-spec)</p>
<pre><code>kind: Service
metadata:
name: ship
spec:
ports:
- port: 8000
name: ship
clusterIP: None # headless service
..
---
kind: StatefulSet
metadata:
name: ship
spec:
serviceName: "ship"
replicas: 1
template:
spec:
containers:
- name: ship
image: ship
imagePullPolicy: IfNotPresent
ports:
- containerPort: 8000
name: ship
..
</code></pre>
| <p>One possibility is to use the <code>kubectl port-forward</code> command to expose the pod port locally on your system. For example, if I'm use this deployment to run a simple web server listening on port 8000:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: example
name: example
spec:
selector:
matchLabels:
app: example
template:
metadata:
labels:
app: example
spec:
containers:
- args:
- --port
- "8000"
image: docker.io/alpinelinux/darkhttpd
name: web
ports:
- containerPort: 8000
name: http
</code></pre>
<p>I can expose that on my local system by running:</p>
<pre><code>kubectl port-forward deploy/example 8000:8000
</code></pre>
<p>As long as that <code>port-forward</code> command is running, I can point my browser (or <code>curl</code>) at <code>http://localhost:8000</code> to access the service.</p>
<hr />
<p>Alternately, I can use <code>kubectl exec</code> to run commands (like <code>curl</code> or <code>wget</code>) inside the pod:</p>
<pre><code>kubectl exec -it deploy/example -- wget -O- http://127.0.0.1:8000
</code></pre>
|
<p>I have deployed a Cloud SQL proxy and a service attached to it in Kubernetes. Both the proxy and the service are listening on port 5432. Other pods are not able to establish a connection to the Cloud SQL database through the proxy, but when I port forward to the pod of the proxy or to the service, I am able to connect to the database without any issue from my localhost.</p>
<p>The Kubernetes cluster and the Cloud SQL instance are both private. I have checked the service and deployment labels, the service and pod network configuration, the firewall rules, and the network configuration, but I am still unable to resolve the issue.</p>
<p>All pods are in the same namespace and the logs of the proxy show no errors. When I run <code>nc -v $service_name $port</code> in other pods, it yields no error and shows no sign of malfunctioning, but it doesn't even print that the connection was successful. The problem is that these pods are not able to establish a TCP connection to the service.</p>
<p>Here is an example of an error message:</p>
<pre><code>Caused by: com.zaxxer.hikari.pool.HikariPool$PoolInitializationException: Failed to initialize pool: Connection to airbyte-db-svc:5432 refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections.
</code></pre>
<p>If needed here is the manifest that deploys the service and the proxy:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: airbyte-db-svc
spec:
type: ClusterIP
ports:
- port: 5432
targetPort: 5432
protocol: TCP
selector:
airbyte: db
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: airbyte-db-proxy
spec:
replicas: 1
selector:
matchLabels:
airbyte: db
template:
metadata:
labels:
airbyte: db
spec:
serviceAccountName: airbyte-admin
containers:
- name: cloud-sql-proxy
image: gcr.io/cloudsql-docker/gce-proxy:latest
command:
- "/cloud_sql_proxy"
- "-enable_iam_login"
- "-ip_address_types=PRIVATE"
- "-instances=PROJECT_ID_HERE:REGION_HERE:INSTANCE_CONNECTION_HERE=tcp:5432"
ports:
- containerPort: 5432
securityContext:
runAsNonRoot: true
</code></pre>
<p>The serviceAccount <code>airbyte-admin</code> has the good 'Cloud SQL Client' and the workloadIdentity configured in the GCP Project.</p>
<p>What could be the problem and how can I fix it?</p>
| <p>CloudSQL Proxy listens on localhost by default. If you want to expose it via a service, you'll want to add <code>--address 0.0.0.0</code> to your cloud_sql_proxy command options.</p>
|
<p>I am trying to install PGO (Postgres Operator) in k8s. I am following <a href="https://access.crunchydata.com/documentation/postgres-operator/5.3.0/quickstart/" rel="nofollow noreferrer">this</a> documentation.
At the 2nd step, when I run the following command</p>
<pre><code>kubectl apply --server-side -k kustomize/install/default
</code></pre>
<p>I see error</p>
<blockquote>
<p>master-k8s@masterk8s-virtual-machine:~/postgres-operator-examples-main$ kubectl apply --server-side -k kustomize/install/default<br />
error: containers path is not of type []interface{} but map[string]interface {}</p>
</blockquote>
<p><strong>System Specifications:</strong></p>
<ul>
<li>I have a 2-node k8s cluster with one master node.</li>
<li>All nodes are running Ubuntu 20.04</li>
</ul>
<p><strong>What I have tried:</strong></p>
<ul>
<li><p>I downloaded the repository again (without cloning) and uploaded the directory to the master node</p>
</li>
<li><p>I tried to provide the full path, and this time I received the same error</p>
</li>
<li><p>I checked the default directory; there are 2 files present</p>
</li>
<li><p>I tried to run this command inside the directory.</p>
</li>
</ul>
<p><strong>What do I need?</strong></p>
<p>I am looking for a solution, and to understand why I am not able to follow the 2nd step of the document.
Please help me find what I missed or am doing wrong.</p>
<p>I am really thankful.</p>
<p><strong>Update Question:</strong></p>
<p>I updated the version of k8s and kustomize and still see the same issue.</p>
<pre><code>master-k8s@masterk8s-virtual-machine:~/test/postgres-operator-examples-main$ kubectl apply --server-side -k kustomize/install/default
error: containers path is not of type []interface{} but map[string]interface {}
</code></pre>
<p><strong>Kustomize version:</strong></p>
<pre><code>{Version:kustomize/v4.5.7 GitCommit:56d82a8xxxxxxxxxxxxx BuildDate:2022-08-02T16:35:54Z GoOs:linux GoArch:amd64}
</code></pre>
<p><strong>To fix your issue, make sure your kubectl/kustomize integration matches the versions below:</strong></p>
<p>As <strong>@Ralle</strong> commented, check versions: <strong>Kustomize v2.1.0 vs v3.0.0+, and the kustomize that is included in kubectl 1.21+</strong>; for more information please look at <a href="https://github.com/kubernetes/kubernetes/issues/107751" rel="nofollow noreferrer">Kustomize doesn't work with CRDs when specifying images</a>.</p>
<p><strong>Update:</strong> The kustomize build flow at v2.0.3 was added to kubectl v1.14. The kustomize flow in kubectl remained frozen at v2.0.3 until kubectl v1.21, which <a href="https://github.com/kubernetes/kubernetes/blob/4d75a6238a6e330337526e0513e67d02b1940b63/CHANGELOG/CHANGELOG-1.21.md#kustomize-updates-in-kubectl" rel="nofollow noreferrer">updated it to v4.0.5</a>. It will be updated on a regular basis going forward; check your versions and updates in the Kubernetes release notes.
<a href="https://i.stack.imgur.com/kgvkQ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/kgvkQ.png" alt="enter image description here" /></a></p>
<p>For examples and guides for using the kubectl integration please see the <a href="https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/" rel="nofollow noreferrer">kubernetes documentation</a>.</p>
<p>Also check <a href="https://kubernetes.io/blog/2021/08/06/server-side-apply-ga/" rel="nofollow noreferrer">Kubernetes 1.22: Server Side Apply moves to GA</a> for more information.</p>
|
<p>I know that deployment uses replicaset underneath it, has revision control, creates another replicaset during rolling upgrade/downgrade.</p>
<p>I want to know what is the scenario in which only replicaset can be used and deployment can't be used.</p>
| <p>A ReplicaSet's purpose is to maintain a stable set of replica Pods running at any given time: it checks how many pods need to be maintained, and creates or deletes Pods as needed to reach the desired number. ReplicaSets can be used independently. With a ReplicaSet you define the number of replicas you want to run for a particular service, and that many replicas will be kept running.</p>
<p>A Deployment, on the other hand, is a higher-level abstraction over ReplicaSets. When you use Deployments you don't have to worry about managing the ReplicaSets that they create: Deployments own and manage their ReplicaSets. As such, it is recommended to use Deployments when you want ReplicaSets. While a ReplicaSet only takes care of creating and deleting pods, a Deployment is recommended for application services because it also gives you rolling upgrades and rollbacks, for example updating an image from v1 to v2.</p>
<p>Refer to this <a href="https://stackoverflow.com/questions/69448131/kubernetes-whats-the-difference-between-deployment-and-replica-set">SO1</a> , <a href="https://stackoverflow.com/questions/55437390/k8s-why-we-need-replicaset-when-we-have-deployments">SO2</a> and official documentation of <a href="https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/" rel="nofollow noreferrer">Replicasets</a> and <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="nofollow noreferrer">Deployment</a>.</p>
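<p>For illustration, a minimal standalone ReplicaSet (names and image are placeholders) looks almost identical to a Deployment, just without rollout management:</p>
<pre><code>apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: frontend
  labels:
    app: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
        - name: nginx
          # changing this image will NOT roll existing pods, unlike a Deployment
          image: nginx:1.25
</code></pre>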
|
<p>When I run <code>kubectl version</code> I get:</p>
<pre><code>/home/lenovo/.local/bin/kubectl: line 1: syntax error near unexpected token `<'
/home/lenovo/.local/bin/kubectl: line 1: `<Code>NoSuchKey</Code>The specified key does not exist.No such object: kubernetes-release/release//bin/linux/amd64/kubectl'
</code></pre>
| <p>I cleared kubectl from <code>/usr/local/bin</code></p>
<p>and also from <code>/home/$USER/.local/bin</code></p>
<p>And ran the commands below:</p>
<pre class="lang-bash prettyprint-override"><code>curl -LO "https://dl.k8s.io/release/v1.24.7/bin/linux/amd64/kubectl"
curl -LO "https://dl.k8s.io/v1.24.7/bin/linux/amd64/kubectl.sha256"
echo "$(cat kubectl.sha256) kubectl" | sha256sum --check
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
sudo install -o root -g root -m 0755 kubectl /home/$USER/.local/bin/kubectl
</code></pre>
|
<p>One of the things I noticed is that Helm charts that require persistence (e.g. Postgres) contain a persistent volume claim but not a default persistent volume.</p>
<p>It seems that it is assumed that this should not be part of the Helm chart and should be provisioned separately.</p>
<p>Is there some best practice or idea behind the lack of a pv (maybe with templated options) or at least a default one? This would allow running the chart out of the box.</p>
| <p>Stateful apps' helm charts assume volumes were provisioned beforehand using storage static provisioning or on-demand using dynamic provisioning.</p>
<p>Storage provisioning includes storage allocation, creation of PV object and binding it to the storage. This is usually done manually by storage admins for static provisioning or automatically by default storage class for dynamic provisioning.</p>
<p>If you install a helm chart in managed Kubernetes cluster on cloud or even Minikube, there's default storage class which will dynamically provision the storage based on helm chart's PVC and bind the provisioned PV to it. (request for storage)</p>
<p>If you prefer the chart to use statically provisioned PVs, you should disable using default storage class in chart's <code>values.yaml</code> or even specify the volume to be used there (the exact syntax and options may change from chart to chart, but the idea is the same).</p>
<p>Using dynamic provisioning has many advantages over static provisioning which I summarized in my blog post on <a href="https://www.rokpoto.com/kubernetes-storage-dynamic-provisioning/" rel="nofollow noreferrer">Dynamic Provisioning of Kubernetes Storage</a></p>
|
<p>I have been trying to install kubernetes in Oracle cloud server.<br />
I followed <a href="https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/" rel="noreferrer">the kubernetes instruction</a>.</p>
<p>Output of <code>kubectl config view</code> :</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
clusters:
- cluster:
certificate-authority-data: DATA+OMITTED
server: https://10.0.0.208:6443
name: kubernetes
contexts:
- context:
cluster: kubernetes
user: kubernetes-admin
name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
user:
client-certificate-data: DATA+OMITTED
client-key-data: DATA+OMITTED
</code></pre>
<p>Error when exucuting <code>sudo kubectl apply -f</code> :</p>
<pre><code>The connection to the server localhost:8080 was refused - did you specify the right host or port?
</code></pre>
<p>in addition, if I wanted to see the pods then came out another error message too.</p>
<p>Error when executing <code>kubectl get pods</code> :</p>
<pre><code>E1228 00:53:51.858542 44216 memcache.go:238] couldn't get current server API group list: Get "https://10.0.0.208:6443/api?timeout=32s": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")
Unable to connect to the server: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")
</code></pre>
| <p><a href="https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm/#tls-certificate-errors" rel="noreferrer">TLS certificate errors</a></p>
<p>The following error indicates a possible certificate mismatch.</p>
<pre><code># kubectl get pods
Unable to connect to the server: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")
</code></pre>
<ul>
<li><p>Verify that the <code>$HOME/.kube/config</code> file contains a valid certificate, and regenerate a certificate if necessary. The certificates in a kubeconfig file are base64 encoded. The <code>base64 --decode</code> command can be used to decode the certificate and <code>openssl x509 -text -noout</code> can be used for viewing the certificate information.</p>
</li>
<li><p>Unset the <code>KUBECONFIG</code> environment variable using:</p>
<p>unset KUBECONFIG</p>
</li>
</ul>
<p>Or set it to the default <code>KUBECONFIG</code> location:</p>
<pre><code>export KUBECONFIG=/etc/kubernetes/admin.conf
</code></pre>
<p>Another workaround is to overwrite the existing <code>kubeconfig</code> for the "admin" user:</p>
<pre><code>mv $HOME/.kube $HOME/.kube.bak
mkdir $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
</code></pre>
<p>This error should only come up if you have <strong>no contexts configured in your client</strong>, so <strong>make sure you are executing the command as the right user</strong>. If you run <code>kubectl config view</code> and you get something like this:</p>
<pre><code>The connection to the server localhost:8080 was refused - did you specify the right host or port?
</code></pre>
<p>Refer to <a href="https://github.com/Hawaiideveloper/Infastructure-as-Code-Sample_Env/issues/15#issuecomment-811377749" rel="noreferrer">github link</a> & <a href="https://discuss.kubernetes.io/t/the-connection-to-the-server-localhost-8080-was-refused-did-you-specify-the-right-host-or-port/1464" rel="noreferrer">Kubernetes community Forums</a> for more information to resolve your issue.</p>
|
<p>I'm a beginner in Kubernetes. When I was reading the book, I found that it is not recommended to use hostPath as the volume type in production environments because it leads to binding between pod and node. But if you don't use hostPath and use other volume types instead, does reading and writing files lead to extra network IO, and does performance suffer? Will this have an additional performance impact?</p>
| <p>hostpath is, as the name suggests, reading and writing from a place on the host where the pod is running. If the host goes down, or the pod gets evicted or otherwise removed from the node, that data is (normally) lost. This is why the "binding" is mentioned -- the pod must stay on that same node otherwise it will lose that data.</p>
<p>Using a volume type and having volumes provisioned is better as the disk and the pod can be reattached together on another node and you will not lose the data.</p>
<p>In terms of I/O, there would indeed be a minuscule difference, since you're no longer talking to the node's local disk but to a mounted disk.</p>
<p>hostPath volumes are generally used for temporary files or storage that can be lost without impact to the pod, in much the same way you would use <code>/tmp</code> on a desktop machine.</p>
|
<ol>
<li>Figure out what is the correct way to scale up the remote function.</li>
<li>Figure out the scaling relations between replicas of the remote function, the Flink <code>parallelism.default</code> configuration, and ingress topic partition counts together with message partition keys. What are the design intentions behind this?</li>
</ol>
<p>As the docs suggest, one of the benefits of flink statefun remote functions is that the remote function can scale differently with the flink workers and task parallelism. To understand more about how these messages are sent to the remote function processes. I have tried following scenarios.</p>
<p><strong>Preparation</strong></p>
<ol>
<li>Use <a href="https://github.com/apache/flink-statefun-playground/blob/main/deployments/k8s" rel="nofollow noreferrer">https://github.com/apache/flink-statefun-playground/blob/main/deployments/k8s</a> this for my experiment.</li>
<li>Modify the <a href="https://github.com/apache/flink-statefun-playground/blob/main/deployments/k8s/03-functions/functions.py" rel="nofollow noreferrer">https://github.com/apache/flink-statefun-playground/blob/main/deployments/k8s/03-functions/functions.py</a> to the following to check the logs how things are parallelized in practice</li>
</ol>
<pre class="lang-py prettyprint-override"><code>...
functions = StatefulFunctions()
@functions.bind(typename="example/hello")
async def hello(context: Context, message: Message):
arg = message.raw_value().decode('utf-8')
hostname = os.getenv('HOSTNAME')
for _ in range(10):
print(f"{datetime.utcnow()} {hostname}: Hello from {context.address.id}: you wrote {arg}!", flush=True)
time.sleep(1)
...
</code></pre>
<ol start="3">
<li>Play around with <code>parallelism.default</code> in the flink.conf, the replica count in the function's deployment configuration, as well as different partitioning configurations in the ingress topic: <code>names</code></li>
</ol>
<p><strong>Observations</strong></p>
<ol>
<li>When sending messages with the same partition key, everything seems to run sequentially. Meaning if I send 5 messages like "key1:message1", "key1:message2", "key1:message3", "key1:message4", "key1:message5", I can see that only one of the pods is getting requests even though I have more replicas (5 configured) of the remote function in the deployment. Regardless of how I configure the parallelism or increase the ingress topic partition count, the behavior stays the same.</li>
<li>When sending messages with 10 partition keys (the topic is configured with 5 partitions, parallelism is configured to 5, and the replicas of the remote function are configured to 5), which replicas of the remote function receive the requests seems to be random. Sometimes 5 of them receive requests at the same time so that 5 of them can run some tasks together, but sometimes only 2 of them are utilized and the other 3 are just waiting.</li>
<li>Parallelism seems to determine the number of consumers in the same consumer group subscribing to the ingress topic. I suspect that if I configure more parallelism than the number of partitions in the ingress topic, the extra parallelism will just stay idle.</li>
</ol>
<p><strong>My Expectations</strong></p>
<ol>
<li>What I really expect is that the 5 remote-function replicas should always be fully utilized if there is still a backlog in the ingress topic.</li>
<li>When the ingress topic is configured with multiple partitions, each partition should be batched separately and multiplexed with the batches of other parallel consumers, so that all of the remote function processes are utilized.</li>
</ol>
<p>Can some Flink expert help me understand the above behavior and design intentions more?</p>
| <p>There are two things happening here...</p>
<ol>
<li>Each partition in your topic is assigned to a sub-task. This is done round-robin, so if you have 5 topic partitions and 5 sub-tasks (your parallelism) then every sub-task is reading from a single different topic partition.</li>
<li>Records being read from the topic are keyed and distributed (what Flink calls partitioning). If you only have one unique key, then every record it sent to the same sub-task, and thus only one sub-task is getting any records. Any time you have low key cardinality relative to the number of sub-tasks you can get skewed distribution of data.</li>
</ol>
<p>Usually in Statefun you'd scale up processing by having more parallel functions, versus scaling up the number of task managers that are running.</p>
|
<p>I have a task to automate the uploading of AKS logs (control plane and workload) to the Azure storage account so that they can be viewed later or may be an alert notification to the email/teams channel in case of any failure. It would have been an easy task if the log analytics workspace would have been used however, to save the cost we have kept it disabled.</p>
<p>I have tried using the below cronjob which would upload the pod logs to storage account on a regular basis, but it is throwing me the below errors[1]</p>
<pre><code>apiVersion: batch/v1
kind: CronJob
metadata:
name: log-uploader
spec:
schedule: "0 0 * * *" # Run every day at midnight
jobTemplate:
spec:
template:
spec:
containers:
- name: log-uploader
image: mcr.microsoft.com/azure-cli:latest
command:
- bash
- "-c"
- |
az aks install-cli
# Set environment variables for Azure Storage Account and Container
export AZURE_STORAGE_ACCOUNT=test-101
export AZURE_STORAGE_CONTAINER=logs-101
# Iterate over all pods in the cluster and upload their logs to Azure Blob Storage
for pod in $(kubectl get pods --all-namespaces -o jsonpath='{range .items[*]}{.metadata.name} {.metadata.namespace}{"\n"}{end}'); do
namespace=$(echo $pod | awk '{print $2}')
pod_name=$(echo $pod | awk '{print $1}')
# Use the Kubernetes logs API to retrieve the logs for the pod
logs=$(kubectl logs -n $namespace $pod_name)
# Use the Azure CLI to upload the logs to Azure Blob Storage
echo $logs | az storage blob upload --file - --account-name $AZURE_STORAGE_ACCOUNT --container-name $AZURE_STORAGE_CONTAINER --name "$namespace/$pod_name_`date`.log"
done
restartPolicy: OnFailure
</code></pre>
<p>Errors[1]</p>
<pre><code>error: expected 'logs [-f] [-p] (POD | TYPE/NAME) [-c CONTAINER]'.
POD or TYPE/NAME is a required argument for the logs command
See 'kubectl logs -h' for help and examples
</code></pre>
<blockquote>
<p>The same commands are running fine outside the container.</p>
</blockquote>
<p>Any thoughts/suggestions would be highly appreciated.</p>
<p>Regards,</p>
<p>Piyush</p>
| <p>A better approach for achieving this would be deploying a <a href="https://github.com/fluent/fluentd-kubernetes-daemonset" rel="nofollow noreferrer">fluentd daemonset</a> in your cluster and use the <a href="https://github.com/htgc/fluent-plugin-azurestorage" rel="nofollow noreferrer">azure storage plugin</a> to upload logs to a storage account.</p>
<p>This tool was built for this specific purpose and will probably serve you better for this purpose.</p>
|
<p>I have a problem. In my kubernetes cluster, I am trying to run my Rails application. I got the image loaded, but now I want to write a custom command. The default command in my Dockerfile is:</p>
<pre><code>CMD ["bundle", "exec", "rails", "server", "-b", "0.0.0.0"]
</code></pre>
<p>But I want to also run <code>rails assets:precompile</code> at startup for production. I tried those commands using this config:</p>
<pre><code>command: ["bundle", "exec", "rails", "assets:precompile", "&&", "bundle", "exec", "rails", "server"]
</code></pre>
<p>But after the first command has been executed, I get the error:</p>
<pre><code>rails aborted!
Don't know how to build task '&&' (See the list of available tasks with `rails --tasks`)
</code></pre>
<p>I also tried the following with <code>args</code>:</p>
<pre><code>command: ["/bin/sh -c"]
args:
- bundle exec rails assets:precompile;
bundle exec rails server;
</code></pre>
<p>But that results in a very long error which basicly says that the format of args is incorrect. Can someone explain to me how I can run both commands at startup?</p>
| <p>Use <code>entrypoint</code> for that:</p>
<pre><code>services:
app:
build: .
entrypoint: ./entrypoint.sh
command: bundle exec rails s -p 3000 -b 0
ports:
- 3000:3000
</code></pre>
<pre class="lang-bash prettyprint-override"><code># entrypoint.sh
#!/bin/bash
set -e
# is this still an issue?
rm -f /myapp/tmp/pids/server.pid
# do what you have to do
bin/rails assets:precompile
# pass the torch to `command:`
exec "$@"
</code></pre>
<hr />
<p>Also the lazy way:</p>
<pre><code> command: bash -c "bin/rails assets:precompile && bin/rails s -p 3000 -b 0"
</code></pre>
<hr />
<p>You can also use <code>ENTRYPOINT</code> in <em>Dockerfile</em> and build it into the image:</p>
<pre><code>COPY entrypoint.sh /usr/bin/
RUN chmod +x /usr/bin/entrypoint.sh
ENTRYPOINT ["entrypoint.sh"]
</code></pre>
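<p>Since the question itself is about a Kubernetes pod spec, the same "lazy way" translates to <code>command</code>/<code>args</code> roughly like this (container name and image are placeholders):</p>
<pre><code>containers:
  - name: rails
    image: myapp:latest        # placeholder
    command: ["/bin/sh", "-c"]
    args:
      - bundle exec rails assets:precompile && bundle exec rails server -b 0.0.0.0
</code></pre>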
|
<p>In Jenkins, I want to understand why "docker ps" is not running inside my container despite I redact my Jenkins file like this :</p>
<pre><code>podTemplate(serviceAccount: 'jenkins', containers: [
containerTemplate(
name: 'docker',
image: 'docker',
resourceRequestCpu: '100m',
resourceLimitCpu: '300m',
resourceRequestMemory: '300Mi',
resourceLimitMemory: '500Mi',
command: 'cat',
privileged: true,
ttyEnabled: true
)
],
volumes: [
hostPathVolume(mountPath: '/var/run/docker.sock', hostPath: '/var/run/docker.sock')
]
) {
node(POD_LABEL) {
stage('Check running containers') {
container('docker') {
sh 'hostname'
sh 'hostname -i'
sh 'docker --version'
sh 'docker ps'
}
}
</code></pre>
<p>I always get this kind of message after I start my pipeline:
<code>unix:///var/run/docker.sock</code>.</p>
<p>Is the docker daemon running?
Thanks</p>
<p>It should output something like the result of <code>docker ps</code>.</p>
| <p>Even if you run Jenkins inside a Docker container, you must install Docker inside it.</p>
<p>The best way is to create a new Dockerfile and install Docker in it:</p>
<pre><code>FROM jenkins/jenkins
ARG HOST_UID=1004
ARG HOST_GID=999
USER root
RUN apt-get -y update && \
apt-get -y install apt-transport-https ca-certificates curl gnupg-agent software-properties-common && \
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add - && \
add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/$(. /etc/os-release; echo "$ID") $(lsb_release -cs) stable" && \
apt-get update && \
apt-get -y install docker-ce docker-ce-cli containerd.io
RUN curl -L "https://github.com/docker/compose/releases/download/1.26.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose \
&& chmod +x /usr/local/bin/docker-compose \
&& ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose \
&& curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
RUN echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | tee /etc/apt/sources.list.d/kubernetes.list \
&& apt-get -y update \
&& apt install -y kubectl
RUN usermod -u $HOST_UID jenkins
RUN groupmod -g $HOST_GID docker
RUN usermod -aG docker jenkins
USER jenkins
</code></pre>
<p>For the sake of clarity: if you deploy to Kubernetes or use docker-compose, you will also need kubectl and docker-compose inside your container. If not, you can remove those installations from the Dockerfile above.</p>
|
<p>I'm working in a company that uses Github Actions and Argocd.(using argocd helm chart).
Needless to say that the Github repo is private and argocd is in an internal network that used by the company only.</p>
<p>The flow of what we want to do is that when we deploy the app and the deployment succeeded - Trigger another workflow that will run tests on the deployed environment.
Basically, the deployment will be the trigger for another workflow.</p>
<p>I have been trying to configure webhook from argocd to github but with no success.
What is the best approach to this situation, will be happy to provide more context if needed.</p>
<p>Edit:
The test workflow I'm trying to trigger uses workflow_dispatch.</p>
<pre><code>name: workflow_02
on:
push:
branches: [ argo-github-trigger ]
workflow_dispatch:
jobs:
log-the-inputs:
runs-on: ubuntu-latest
steps:
- run: |
echo "Worked"
</code></pre>
<p>I'm expecting to see a "Run workflow" button on GitHub but it doesn't appear. On another repo, where I have Admin privileges and can work on the main branch, I tried the same workflow and it worked.</p>
<p><a href="https://i.stack.imgur.com/qZmmO.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qZmmO.png" alt="enter image description here" /></a></p>
| <p>The best approach would be to use a <a href="https://argo-cd.readthedocs.io/en/stable/user-guide/resource_hooks/" rel="nofollow noreferrer">post sync hook</a>.</p>
<p>Why is it necessary to run the tests in a GitHub actions workflow? Since the application is already deployed to the cluster, wouldn't it make more sense to run the tests directly on the cluster, instead of going through the trouble of communicating with Github?</p>
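<p>As a rough sketch, a post sync hook is just a Kubernetes Job annotated with <code>argocd.argoproj.io/hook: PostSync</code>, which Argo CD runs after a successful sync; the test image and command below are placeholders:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: batch/v1
kind: Job
metadata:
  name: post-deploy-tests
  annotations:
    argocd.argoproj.io/hook: PostSync
    argocd.argoproj.io/hook-delete-policy: HookSucceeded
spec:
  backoffLimit: 0
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: tests
          image: registry.example.com/e2e-tests:latest   # placeholder test image
          command: ["sh", "-c", "./run-tests.sh"]        # placeholder test entrypoint
</code></pre>
<p>If you still prefer the GitHub Actions route, such a hook Job could instead call the GitHub REST API's <code>workflow_dispatch</code> endpoint with a token, but running the tests in-cluster avoids that extra hop.</p>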
|
<p>We need to configure health check for our service which uses apscheduler to schedule and run the jobs. The idea is to check if apscheduler is running jobs fine at the specified time or working as expected.</p>
<p>We tried scheduler.running, but it shows true even when it's not able to pick up the next jobs. Any suggestions here?</p>
| <p>You can use add_listener() and listen to only particular types of events by giving the appropriate mask argument to add_listener(), OR’ing the different event constants together. The listener callable is called with one argument, the event object.</p>
<p>Example:</p>
<pre><code>from apscheduler.events import EVENT_JOB_ERROR, EVENT_JOB_EXECUTED

def my_listener(event):
    if event.exception:
        print('The job crashed :(')
    else:
        print('The job worked :)')

scheduler.add_listener(my_listener, EVENT_JOB_EXECUTED | EVENT_JOB_ERROR)
</code></pre>
<p>Refer to this <a href="https://apscheduler.readthedocs.io/en/3.x/userguide.html#scheduler-events" rel="nofollow noreferrer">document</a> for more information.</p>
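<p>A minimal sketch of how a listener could feed an actual health check (the staleness threshold is an assumption you would tune to your schedule): record the time of the last successful run and have your health endpoint verify it is recent enough, instead of relying on <code>scheduler.running</code> alone.</p>
<pre class="lang-python prettyprint-override"><code>import time

from apscheduler.events import EVENT_JOB_ERROR, EVENT_JOB_EXECUTED
from apscheduler.schedulers.background import BackgroundScheduler

scheduler = BackgroundScheduler()
last_success = {"ts": time.time()}  # timestamp of the last successful job run


def track_success(event):
    # EVENT_JOB_EXECUTED events without an exception mean the job finished fine
    if not event.exception:
        last_success["ts"] = time.time()


scheduler.add_listener(track_success, EVENT_JOB_EXECUTED | EVENT_JOB_ERROR)


def healthy(max_staleness_seconds=120):
    # Unhealthy if no job has completed successfully recently,
    # even when scheduler.running is still True.
    return scheduler.running and (time.time() - last_success["ts"]) < max_staleness_seconds
</code></pre>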
|
<p>I have a container with pytorch. The container is 3GB. When I run 20 instances of this container, it takes 60GB of disk space. I basically have 20 copies of the same library. Is it possible not to have separate copies of the filesystem of the container? If the container fs is set to read-only, would that help? I'm running in k8s.</p>
| <blockquote>
<p>Is it possible not to have separate copies of the filesystem of the<br />
container.</p>
</blockquote>
<p>Yes, you can do that.</p>
<p>You can use a volume with the <code>ReadOnlyMany</code> or <code>ReadWriteMany</code> access mode and mount it into all the Pods.</p>
<p><strong>Hostpath</strong></p>
<p>You can also leverage a <strong><a href="https://kubernetes.io/docs/concepts/storage/volumes/#hostpath" rel="nofollow noreferrer">hostPath</a></strong> volume to keep the files on the node; when a POD is scheduled on that node it will use the files from there.</p>
<p>Keep in mind that if the node goes down you might lose the files saved on it, so one of the shared-filesystem options below is usually the better choice.</p>
<p><strong>EFS or NFS</strong></p>
<p>If you are on GCP or AWS you can use the managed file-storage offering (Filestore or EFS) to mount the same file system into multiple PODs.</p>
<p>You can also run your own NFS server and save the files there; when a POD gets scheduled it will use the files from the shared NFS file system.</p>
<p><a href="https://stackoverflow.com/a/68586922/5525824">GCP Filestore</a></p>
|
<p>I am trying to familiarise myself with kubernetes, and want to run a k8s stack on some low spec hw (think raspberry pi).</p>
<p>I found what <a href="https://www.linuxtechi.com/install-kubernetes-on-ubuntu-22-04/" rel="nofollow noreferrer">appears to be a great guide to set up kubernetes un Ubuntu</a> but I ran into issues, causing me to reinstall the OS several times to ensure that I had not made a fundamental mistake which was poisoning my attempt.</p>
<p>Getting fed up with waiting for the basics, <a href="https://github.com/JoSSte/k8s-playground" rel="nofollow noreferrer">I tried to set it up in a vagrant environment</a>, which does allow me to skip some of the time-consuming and tedious steps regarding reinstalls, but still seems like a fragile process. Looking at udemy and youtube, as well as getting started articles, a lot of focus appears to be in minikube... as I read it, that is essentially a vm with a ready to go kubernetes set up already.</p>
<p>My question is: is the overhead using minikube small enough to use on servers with minimal resources? Or is it only usable for testing and demonstration? Since I have issues getting a working cluster, I can't test and verify it myself...</p>
| <p>From minikube <a href="https://minikube.sigs.k8s.io/docs/start/" rel="nofollow noreferrer">documentation</a>: <code>minikube is local Kubernetes, focusing on making it easy to learn and develop for Kubernetes</code></p>
<p>If you want to learn more about Kubernetes, I suggest reading and implementing <a href="https://github.com/kelseyhightower/kubernetes-the-hard-way" rel="nofollow noreferrer">this</a> repository.</p>
<p>In the end, if you want to use Kubernetes in production, please forget <code>minikube</code> and run Kubernetes.</p>
|
<p>I am trying to add ELK to my project which is running on Kubernetes. I want the logs to go through Filebeat -> Logstash and then Elasticsearch. I prepared my filebeat.yml file, and in my company Filebeat is configured as an agent in the cluster, which I don't really know what it means. I want to know how to configure Filebeat in this case: do I just add the file to the project and it will be taken into consideration once the pod starts, or how does it work?</p>
| <p>You can configure the Filebeat in some ways.</p>
<p>1 - You can configure it using a DaemonSet, meaning each node of your Kubernetes cluster will have one Filebeat POD. Usually, in this architecture, you'll need only one filebeat.yaml configuration file and set the inputs, filters, outputs (output to Logstash, Elasticsearch, etc.), etc. In this case, your Filebeat will need root access inside your cluster.</p>
<p>2 - Using Filebeat as a Sidecar with your application k8s resource. You can configure an emptyDir in the Deployment/StatefulSet, share it with the Filebeat Sidecar, and set the Filebeat to monitor this directory.</p>
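<p>A minimal sketch of the sidecar option (the Filebeat version, log path and ConfigMap name are assumptions; the <code>filebeat.yml</code> itself would point its inputs at <code>/var/log/app</code> and its output at Logstash):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: app
          image: my-app:latest                  # your application image
          volumeMounts:
            - name: app-logs
              mountPath: /var/log/app           # the app writes its log files here
        - name: filebeat
          image: docker.elastic.co/beats/filebeat:8.6.2
          args: ["-c", "/etc/filebeat.yml", "-e"]
          volumeMounts:
            - name: app-logs
              mountPath: /var/log/app
              readOnly: true
            - name: filebeat-config
              mountPath: /etc/filebeat.yml
              subPath: filebeat.yml
      volumes:
        - name: app-logs
          emptyDir: {}                          # shared between app and Filebeat
        - name: filebeat-config
          configMap:
            name: filebeat-sidecar-config       # ConfigMap holding your filebeat.yml
</code></pre>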
|
<p>I have a Kubernetes cluster, with Artifactory as my internal registry and proxy to pull images from external private registries. How do I authenticate to these external private registries when I want to pull an image from my Kubernetes cluster? Normally in Kubernetes, this is done by using image pull secrets; however, it is not clear if Artifactory is able to handle the secret to authenticate to the external registry. What alternatives do I have?</p>
| <p>Taking the comment by @Ivonet as the base - you configure the auth against the remote repository source in Artifactory itself. You can see the <a href="https://www.jfrog.com/confluence/display/JFROG/Advanced+Settings#AdvancedSettings-RemoteCredentials" rel="nofollow noreferrer">docs here</a>.</p>
<p>Once Artifactory is set up, you set your <code>imagePullSecret</code> to authenticate against Artifactory itself. See some examples in <a href="https://jfrog.com/knowledge-base/artifactory-installing-and-managing-kubernetes-registries/" rel="nofollow noreferrer">this knowledge-base article</a>.</p>
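<p>For example, the pull secret pointing at Artifactory could be created roughly like this (registry hostname and credentials are placeholders) and then referenced via <code>imagePullSecrets</code> in the Pod spec or attached to the ServiceAccount:</p>
<pre><code>kubectl create secret docker-registry artifactory-pull-secret \
  --docker-server=myartifactory.example.com \
  --docker-username=<artifactory-user> \
  --docker-password=<artifactory-password-or-api-key> \
  --docker-email=<email>
</code></pre>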
|
<p>I am trying to dynamically look up the available namespaces and create secrets in those namespaces using the Helm chart below.</p>
<blockquote>
<p>templates/secrets.yaml</p>
</blockquote>
<pre><code>{{ range $index, $namespace := (lookup "v1" "Namespace" "" "").items }}
apiVersion: v1
kind: Secret
metadata:
name: myregcred
namespace: {{ $namespace.metadata.name }}
type: kubernetes.io/dockerconfigjson
data:
.dockerconfigjson: {{ template "imagePullSecret" . }}
{{- end}}
</code></pre>
<blockquote>
<p>values.yaml</p>
</blockquote>
<pre><code>imageCredentials:
registry: quay.io
username: someone
password: sillyness
email: [email protected]
</code></pre>
<blockquote>
<p>_helpers.tpl</p>
</blockquote>
<pre><code>{{- define "imagePullSecret" }}
{{- with .Values.imageCredentials }}
{{- printf "{\"auths\":{\"%s\":{\"username\":\"%s\",\"password\":\"%s\",\"email\":\"%s\",\"auth\":\"%s\"}}}" .registry .username .password .email (printf "%s:%s" .username .password | b64enc) | b64enc }}
{{- end }}
{{- end }}
</code></pre>
<p>When i run this helm chart, i get below error</p>
<blockquote>
<p>Error: INSTALLATION FAILED: template: secrets/templates/_helpers.tpl:2:16: executing "imagePullSecret" at <.Values.imageCredentials>: nil pointer evaluating interface {}.imageCredentials</p>
</blockquote>
<p>I don't know what I am doing wrong here.</p>
| <p>When you reference the named template "imagePullSecret" inside the range, the context "." you are providing refers to the current element of the loop (a Namespace object), which does not have the "Values" attribute.</p>
<p>Try providing the root context instead:</p>
<pre class="lang-yaml prettyprint-override"><code>{{ range $index, $namespace := (lookup "v1" "Namespace" "" "").items }}
apiVersion: v1
kind: Secret
metadata:
name: myregcred
namespace: {{ $namespace.metadata.name }}
type: kubernetes.io/dockerconfigjson
data:
.dockerconfigjson: {{ template "imagePullSecret" $ }}
---
{{- end}}
</code></pre>
|
<p>I want to create a service account in Kubernetes v1.25.5 using this tutorial <a href="https://devopscube.com/kubernetes-api-access-service-account/" rel="nofollow noreferrer">https://devopscube.com/kubernetes-api-access-service-account/</a></p>
<p>Full log:</p>
<pre><code>root@node1:~# kubectl create namespace devops-tools
namespace/devops-tools created
root@node1:~# kubectl create serviceaccount api-service-account -n devops-tools
serviceaccount/api-service-account created
root@node1:~# cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
name: api-service-account
namespace: devops-tools
EOF
Warning: resource serviceaccounts/api-service-account is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
serviceaccount/api-service-account configured
root@node1:~# cat <<EOF | kubectl apply -f -
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: api-cluster-role
namespace: devops-tools
rules:
- apiGroups:
- ""
- apps
- autoscaling
- batch
- extensions
- policy
- rbac.authorization.k8s.io
resources:
- pods
- componentstatuses
- configmaps
- daemonsets
- deployments
- events
- endpoints
EOF verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
clusterrole.rbac.authorization.k8s.io/api-cluster-role created
root@node1:~# kubectl api-resources
NAME SHORTNAMES APIVERSION NAMESPACED KIND
bindings v1 true Binding
componentstatuses cs v1 false ComponentStatus
configmaps cm v1 true ConfigMap
endpoints ep v1 true Endpoints
events ev v1 true Event
limitranges limits v1 true LimitRange
namespaces ns v1 false Namespace
nodes no v1 false Node
persistentvolumeclaims pvc v1 true PersistentVolumeClaim
persistentvolumes pv v1 false PersistentVolume
pods po v1 true Pod
podtemplates v1 true PodTemplate
replicationcontrollers rc v1 true ReplicationController
resourcequotas quota v1 true ResourceQuota
secrets v1 true Secret
serviceaccounts sa v1 true ServiceAccount
services svc v1 true Service
mutatingwebhookconfigurations admissionregistration.k8s.io/v1 false MutatingWebhookConfiguration
validatingwebhookconfigurations admissionregistration.k8s.io/v1 false ValidatingWebhookConfiguration
customresourcedefinitions crd,crds apiextensions.k8s.io/v1 false CustomResourceDefinition
apiservices apiregistration.k8s.io/v1 false APIService
controllerrevisions apps/v1 true ControllerRevision
daemonsets ds apps/v1 true DaemonSet
deployments deploy apps/v1 true Deployment
replicasets rs apps/v1 true ReplicaSet
statefulsets sts apps/v1 true StatefulSet
tokenreviews authentication.k8s.io/v1 false TokenReview
localsubjectaccessreviews authorization.k8s.io/v1 true LocalSubjectAccessReview
selfsubjectaccessreviews authorization.k8s.io/v1 false SelfSubjectAccessReview
selfsubjectrulesreviews authorization.k8s.io/v1 false SelfSubjectRulesReview
subjectaccessreviews authorization.k8s.io/v1 false SubjectAccessReview
horizontalpodautoscalers hpa autoscaling/v2 true HorizontalPodAutoscaler
cronjobs cj batch/v1 true CronJob
jobs batch/v1 true Job
certificatesigningrequests csr certificates.k8s.io/v1 false CertificateSigningRequest
leases coordination.k8s.io/v1 true Lease
bgpconfigurations crd.projectcalico.org/v1 false BGPConfiguration
bgppeers crd.projectcalico.org/v1 false BGPPeer
blockaffinities crd.projectcalico.org/v1 false BlockAffinity
caliconodestatuses crd.projectcalico.org/v1 false CalicoNodeStatus
clusterinformations crd.projectcalico.org/v1 false ClusterInformation
felixconfigurations crd.projectcalico.org/v1 false FelixConfiguration
globalnetworkpolicies crd.projectcalico.org/v1 false GlobalNetworkPolicy
globalnetworksets crd.projectcalico.org/v1 false GlobalNetworkSet
hostendpoints crd.projectcalico.org/v1 false HostEndpoint
ipamblocks crd.projectcalico.org/v1 false IPAMBlock
ipamconfigs crd.projectcalico.org/v1 false IPAMConfig
ipamhandles crd.projectcalico.org/v1 false IPAMHandle
ippools crd.projectcalico.org/v1 false IPPool
ipreservations crd.projectcalico.org/v1 false IPReservation
kubecontrollersconfigurations crd.projectcalico.org/v1 false KubeControllersConfiguration
networkpolicies crd.projectcalico.org/v1 true NetworkPolicy
networksets crd.projectcalico.org/v1 true NetworkSet
endpointslices discovery.k8s.io/v1 true EndpointSlice
events ev events.k8s.io/v1 true Event
flowschemas flowcontrol.apiserver.k8s.io/v1beta2 false FlowSchema
prioritylevelconfigurations flowcontrol.apiserver.k8s.io/v1beta2 false PriorityLevelConfiguration
ingressclasses networking.k8s.io/v1 false IngressClass
ingresses ing networking.k8s.io/v1 true Ingress
networkpolicies netpol networking.k8s.io/v1 true NetworkPolicy
runtimeclasses node.k8s.io/v1 false RuntimeClass
poddisruptionbudgets pdb policy/v1 true PodDisruptionBudget
clusterrolebindings rbac.authorization.k8s.io/v1 false ClusterRoleBinding
clusterroles rbac.authorization.k8s.io/v1 false ClusterRole
rolebindings rbac.authorization.k8s.io/v1 true RoleBinding
roles rbac.authorization.k8s.io/v1 true Role
priorityclasses pc scheduling.k8s.io/v1 false PriorityClass
csidrivers storage.k8s.io/v1 false CSIDriver
csinodes storage.k8s.io/v1 false CSINode
csistoragecapacities storage.k8s.io/v1 true CSIStorageCapacity
storageclasses sc storage.k8s.io/v1 false StorageClass
volumeattachments storage.k8s.io/v1 false VolumeAttachment
root@node1:~# cat <<EOF | kubectl apply -f -
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: api-cluster-role-binding
subjects:
- namespace: devops-tools
kind: ServiceAccount
name: api-service-account
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: api-cluster-role
EOF
clusterrolebinding.rbac.authorization.k8s.io/api-cluster-role-binding created
root@node1:~# kubectl auth can-i get pods --as=system:serviceaccount:devops-tools:api-service-account
yes
root@node1:~# kubectl auth can-i delete deployments --as=system:serviceaccount:devops-tools:api-service-account
yes
root@node1:~# kubectl get serviceaccount api-service-account -o=jsonpath='{.secrets[0].name}' -n devops-tools
root@node1:~#
root@node1:~# kubectl get secrets api-service-account-token-pgtrr -o=jsonpath='{.data.token}' -n devops-tools | base64 -D
base64: invalid option -- 'D'
Try 'base64 --help' for more information.
Error from server (NotFound): secrets "api-service-account-token-pgtrr" not found
root@node1:~# kubectl get endpoints | grep kubernetes
kubernetes 192.168.1.24:6443 3d22h
</code></pre>
<p>As you can see, I should get the token secret name but I get an empty result. Do you know what the correct steps should be?</p>
| <p>The blog you are following is from <strong>June 5, 2021</strong> and targets an older Kubernetes release, while you are using <strong>1.25</strong>.</p>
<p>Since the <strong>1.24</strong> release, Kubernetes no longer auto-creates a token Secret when you create a <strong>ServiceAccount</strong>.</p>
<p><strong>Steps to follow</strong></p>
<p>First create the ServiceAccount if it does not already exist:</p>
<pre><code>kubectl create sa <Service account name>
</code></pre>
<p>Then create the token Secret manually: create a YAML file like the one below and update the Secret name and the ServiceAccount name.</p>
<pre><code>apiVersion: v1
kind: Secret
type: kubernetes.io/service-account-token
metadata:
name: token-secret
annotations:
kubernetes.io/service-account.name: "<Service Account name>"
</code></pre>
<p><code>kubectl apply -f <File created above>.yaml</code></p>
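<p>Once the token controller has populated the Secret, you should be able to read the token the same way the tutorial does, using the Secret name from the manifest above (note <code>-d</code>, not <code>-D</code>, for GNU base64, and that the Secret must live in the same namespace as the ServiceAccount):</p>
<pre><code>kubectl get secret token-secret -n devops-tools -o jsonpath='{.data.token}' | base64 -d
</code></pre>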
<p><strong>Extra option</strong></p>
<p>You can also use kubectl directly in case you just want to create a token (the argument is the name of the ServiceAccount):</p>
<pre><code>kubectl create token jwt-tk --duration=999999h
</code></pre>
<p><strong>A few references</strong></p>
<p><a href="https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.24.md" rel="nofollow noreferrer">Release note</a> & <a href="https://github.com/kubernetes/kubernetes/pull/108309" rel="nofollow noreferrer">Pull request</a></p>
|
<p>My query is pretty much what the title says, I have a local file say <code>file.txt</code> and I want to copy it into <code>pod1</code>'s container <code>container1</code>.</p>
<p>If I was to do it using kubectl, the appropriate command would be :</p>
<p><code>kubectl cp file.txt pod1:file.txt -c container1</code></p>
<p>However, how do I do it using the Go client of kubectl?</p>
<p>I tried 2 ways but none of them worked :</p>
<pre><code>import (
"fmt"
"context"
"log"
"os"
"path/filepath"
g "github.com/sdslabs/katana/configs"
v1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/labels"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/rest"
"k8s.io/client-go/tools/clientcmd"
//"k8s.io/kubectl/pkg/cmd/exec"
)
func CopyIntoPod(namespace string, podName string, containerName string, srcPath string, dstPath string) {
// Create a Kubernetes client
config, err := GetKubeConfig()
if err != nil {
log.Fatal(err)
}
client, err := kubernetes.NewForConfig(config)
if err != nil {
log.Fatal(err)
}
// Build the command to execute
cmd := []string{"cp", srcPath, dstPath}
// Use the PodExecOptions struct to specify the options for the exec request
options := v1.PodExecOptions{
Container: containerName,
Command: cmd,
Stdin: false,
Stdout: true,
Stderr: true,
TTY: false,
}
log.Println("Options set!")
// Use the CoreV1Api.Exec method to execute the command inside the container
req := client.CoreV1().RESTClient().Post().
Namespace(namespace).
Name(podName).
Resource("pods").
SubResource("exec").
VersionedParams(&options, metav1.ParameterCodec)
log.Println("Request generated")
exec, err := req.Stream(context.TODO())
if err != nil {
log.Fatal(err)
}
defer exec.Close()
// Read the response from the exec command
var result []byte
if _, err := exec.Read(result); err != nil {
log.Fatal(err)
}
fmt.Println("File copied successfully!")
}
</code></pre>
<p>This gave me the error message :</p>
<p><code>no kind is registered for the type v1.PodExecOptions in scheme "pkg/runtime/scheme.go:100"</code></p>
<p>I couldn't figure it out, so I tried another way :</p>
<pre><code>type PodExec struct {
RestConfig *rest.Config
*kubernetes.Clientset
}
func NewPodExec(config *rest.Config, clientset *kubernetes.Clientset) *PodExec {
config.APIPath = "/api" // Make sure we target /api and not just /
config.GroupVersion = &schema.GroupVersion{Version: "v1"} // this targets the core api groups so the url path will be /api/v1
config.NegotiatedSerializer = serializer.WithoutConversionCodecFactory{CodecFactory: scheme.Codecs}
return &PodExec{
RestConfig: config,
Clientset: clientset,
}
}
func (p *PodExec) PodCopyFile(src string, dst string, containername string, podNamespace string) (*bytes.Buffer, *bytes.Buffer, *bytes.Buffer, error) {
ioStreams, in, out, errOut := genericclioptions.NewTestIOStreams()
copyOptions := cp.NewCopyOptions(ioStreams)
copyOptions.Clientset = p.Clientset
copyOptions.ClientConfig = p.RestConfig
copyOptions.Container = containername
copyOptions.Namespace = podNamespace
err := copyOptions.Run()
if err != nil {
return nil, nil, nil, fmt.Errorf("could not run copy operation: %v", err)
}
return in, out, errOut, nil
}
</code></pre>
<p>However, there were some issues with the <code>copyOptions.Run()</code> command, it tried to look for <code>o.args[0] and o.args[0]</code> inside copyOptions but <code>o</code> is not imported so it couldn't be modified.</p>
<p>Context : <a href="https://pkg.go.dev/k8s.io/kubectl/pkg/cmd/cp#CopyOptions.Run" rel="nofollow noreferrer">https://pkg.go.dev/k8s.io/kubectl/pkg/cmd/cp#CopyOptions.Run</a></p>
<p>So, now I'm really lost and confused. Any help would be appreciated. Thanks.</p>
<p>Edit : I did think of a viable method where we can just call <code>cmd.exec()</code> and run the <code>kubectl cp</code> command directly but it seems kinda hacky and I'm not sure whether it would work, any thoughts?</p>
| <p>Here's how I finally managed to do it :</p>
<pre><code>package main
import (
"context"
"fmt"
"os"
"path/filepath"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/kubernetes/scheme"
corev1 "k8s.io/api/core/v1"
"k8s.io/client-go/tools/clientcmd"
"k8s.io/client-go/util/homedir"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/client-go/tools/remotecommand"
)
func CopyIntoPod(podName string, namespace string, containerName string, srcPath string, dstPath string) {
// Get the default kubeconfig file
kubeConfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
// Create a config object using the kubeconfig file
config, err := clientcmd.BuildConfigFromFlags("", kubeConfig)
if err != nil {
fmt.Printf("Error creating config: %s\n", err)
return
}
// Create a Kubernetes client
client, err := kubernetes.NewForConfig(config)
if err != nil {
fmt.Printf("Error creating client: %s\n", err)
return
}
// Open the file to copy
localFile, err := os.Open(srcPath)
if err != nil {
fmt.Printf("Error opening local file: %s\n", err)
return
}
defer localFile.Close()
pod, err := client.CoreV1().Pods(namespace).Get(context.TODO(), podName, metav1.GetOptions{})
if err != nil {
fmt.Printf("Error getting pod: %s\n", err)
return
}
// Find the container in the pod
var container *corev1.Container
for _, c := range pod.Spec.Containers {
if c.Name == containerName {
container = &c
break
}
}
if container == nil {
fmt.Printf("Container not found in pod\n")
return
}
// Create a stream to the container
req := client.CoreV1().RESTClient().Post().
Resource("pods").
Name(podName).
Namespace(namespace).
SubResource("exec").
Param("container", containerName)
req.VersionedParams(&corev1.PodExecOptions{
Container: containerName,
Command: []string{"bash", "-c", "cat > " + dstPath},
Stdin: true,
Stdout: true,
Stderr: true,
}, scheme.ParameterCodec)
exec, err := remotecommand.NewSPDYExecutor(config, "POST", req.URL())
if err != nil {
fmt.Printf("Error creating executor: %s\n", err)
return
}
// Stream the local file into the container
err = exec.StreamWithContext(context.TODO(), remotecommand.StreamOptions{
Stdin: localFile,
Stdout: os.Stdout,
Stderr: os.Stderr,
Tty: false,
})
if err != nil {
fmt.Printf("Error streaming: %s\n", err)
return
}
fmt.Println("File copied successfully")
}
</code></pre>
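<p>A hypothetical call in the same <code>package main</code>, mirroring <code>kubectl cp file.txt pod1:/tmp/file.txt -c container1</code> (pod, namespace and paths are placeholders):</p>
<pre class="lang-golang prettyprint-override"><code>func main() {
	// podName, namespace, containerName, srcPath, dstPath
	CopyIntoPod("pod1", "default", "container1", "file.txt", "/tmp/file.txt")
}
</code></pre>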
|
<p>SpringBoot 2.3 introduced a feature to create OCI/Docker images by running <code>./gradlew bootBuildImage</code> instead of having a <code>Dockerfile</code> and execute <code>docker build .</code></p>
<p>When building on a Gitlab build server that is running inside a Kubernetes cluster there is no Docker daemon available though (no docker-in-docker service for security reasons). Instead images have to be build and uploaded using Google's "Kaniko" tool.</p>
<p>Is it possible to combine both somehow i.e. use "bootBuildImage" without a running Docker daemon?</p>
| <p>It is possible by using <a href="https://podman.io/" rel="nofollow noreferrer">Podman</a>. Podman can expose a Docker-compatible API via its <code>podman system service</code> command. On a local machine this can be started via <code>podman system service --time 0 tcp://0.0.0.0:2375</code>.</p>
<p>When running in Kubernetes (or generally in a container) you can use the container image from Quay: <code>quay.io/containers/podman</code>. Start the service in the background and then run your build. Something like this should work:</p>
<pre class="lang-yaml prettyprint-override"><code>build:
image: my-java-builder
services:
- name: quay.io/containers/podman:v4.2.1
alias: docker
command: ["podman", "system", "service", "--time=0", "tcp://0.0.0.0:2375"]
variables:
DOCKER_HOST: tcp://docker:2375
script:
- ./gradlew bootBuildImage
</code></pre>
|
<p>I have a kubernetes cluster in Azure (AKS) with the log analytics enabled. I can see that a lot of pods are being killed by OOMKilled message but I want to troubleshoot this with the log analytics from Azure. My question is how can I track or query, from the log analytics, all the pods that are killed by the <code>OOMKilled</code> reason?</p>
<p>Thanks!</p>
| <p>The reason is somewhat hidden in the <code>ContainerLastStatus</code> field (JSON) of the <code>KubePodInventory</code> table. A query to get all pods killed with reason <code>OOMKilled</code> could be:</p>
<pre><code>KubePodInventory
| where PodStatus != "running"
| extend ContainerLastStatusJSON = parse_json(ContainerLastStatus)
| extend FinishedAt = todatetime(ContainerLastStatusJSON.finishedAt)
| where ContainerLastStatusJSON.reason == "OOMKilled"
| distinct PodUid, ControllerName, ContainerLastStatus, FinishedAt
| order by FinishedAt asc
</code></pre>
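<p>If you want an overview per workload rather than individual pods, a variant along these lines (assuming the standard <code>Name</code> and <code>Namespace</code> columns of <code>KubePodInventory</code>) counts the OOM kills per hour:</p>
<pre><code>KubePodInventory
| extend ContainerLastStatusJSON = parse_json(ContainerLastStatus)
| where ContainerLastStatusJSON.reason == "OOMKilled"
| summarize OOMKilledPods = dcount(PodUid) by Name, Namespace, bin(TimeGenerated, 1h)
| order by OOMKilledPods desc
</code></pre>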
|