prompt | response
---|---
<p>I am tasked with building a new relic chart to show gitlab runner job count over time.
I am trying to determine what type of object a GitLab runner job is. Is it a deployment, a pod or a statefulset?</p>
<p>Any assistance with being able to visualize gitlab runner pods in new relic would be appreciated.</p>
| <p>As mentioned in "<a href="https://docs.gitlab.com/runner/executors/kubernetes.html" rel="nofollow noreferrer">Kubernetes executor for GitLab Runner</a>":</p>
<blockquote>
<p>The Kubernetes executor, when used with GitLab CI, connects to the Kubernetes API in the cluster creating <strong>a Pod for each GitLab CI Job</strong>.</p>
<p>This Pod is made up of, at the very least:</p>
<ul>
<li>a build container,</li>
<li>a helper container, and</li>
<li>an additional container for each service defined in the <code>.gitlab-ci.yml</code> or <code>config.toml</code> files.</li>
</ul>
</blockquote>
<p>Since those are pods, you should see them in a <a href="https://docs.newrelic.com/docs/kubernetes-pixie/kubernetes-integration/understand-use-data/kubernetes-cluster-explorer/" rel="nofollow noreferrer">NewRelic Kubernetes cluster explorer</a>.</p>
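<p>To chart the job count over time, a NRQL query against the Kubernetes integration data could look roughly like this. This is only a sketch: it assumes the standard <code>K8sPodSample</code> event type and that the runner pods live in a <code>gitlab-runner</code> namespace, so adjust the attribute names and the filter to what your account actually reports.</p>
<pre><code>SELECT uniqueCount(podName) FROM K8sPodSample
WHERE namespaceName = 'gitlab-runner'
TIMESERIES
</code></pre>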
|
<p>I have installed Prometheus-adapter along with the default metrics-server that comes with k3s securely on port 443.</p>
<p>Unfortunately, I get no resources when I query custom.metrics.k8s.io</p>
<pre><code>$ kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1" | jq .
{
"kind": "APIResourceList",
"apiVersion": "v1",
"groupVersion": "custom.metrics.k8s.io/v1beta1",
"resources": []
}
</code></pre>
<p>When I look at the logs of Prometheus-adapter I get <code>unable to update list of all metrics: unable to fetch metrics for query ...: x509: certificate is valid for localhost, localhost, not metrics-server.kube-system</code></p>
<p>How can I resolve this issue?</p>
| <p>To solve this issue, I had to create a <a href="https://kubernetes.io/docs/tasks/tls/managing-tls-in-a-cluster/" rel="nofollow noreferrer">separate certificate</a> for both metrics-server and the adapter. The adapter also has an <a href="https://github.com/kubernetes-sigs/prometheus-adapter/issues/169" rel="nofollow noreferrer">issue</a> about adding the capability to ignore cert validation, which wasn't merged.</p>
<p>For the adapter and metrics-server certificate requests I used the following:</p>
<pre><code>{
"hosts": [
"prometheus-adapter",
"prometheus-adapter.monitoring",
"prometheus-adapter.monitoring.svc",
"prometheus-adapter.monitoring.pod",
"prometheus-adapter.monitoring.svc.cluster.local",
"prometheus-adapter.monitoring.pod.cluster.local",
"<pod ip>",
"<service ip>"
],
"CN": "prometheus-adapter.monitoring.pod.cluster.local",
"key": {
"algo": "ecdsa",
"size": 256
}
}
</code></pre>
<pre><code>{
"hosts": [
"metrics-server",
"metrics-server.kube-system",
"metrics-server.kube-system.svc",
"metrics-server.kube-system.pod",
"metrics-server.kube-system.svc.cluster.local",
"metrics-server.kube-system.pod.cluster.local",
"<service ip>",
"<pod ip>"
],
"CN": "metrics-server.kube-system",
"key": {
"algo": "ecdsa",
"size": 256
}
}
</code></pre>
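<p>To turn those CSR configs into signed certificates, a <code>cfssl</code> invocation along these lines can be used (a sketch; the CA and output file names are assumptions):</p>
<pre><code>cfssl gencert -ca=ca.pem -ca-key=ca-key.pem prometheus-adapter-csr.json | cfssljson -bare prometheus-adapter
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem metrics-server-csr.json | cfssljson -bare metrics-server
</code></pre>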
<p>For the CA, you can create your own certificate authority or use the Kubernetes signers as indicated <a href="https://kubernetes.io/docs/reference/access-authn-authz/certificate-signing-requests/#kubernetes-signers" rel="nofollow noreferrer">here</a>.
The only point worth noting is that if you use either of the signers, you should mount the CA bundle yourself into your deployments.</p>
<p>Finally, mount the TLS keys and CA bundle into your deployment:</p>
<pre><code> extraArguments:
- --tls-cert-file=/var/run/serving-cert/tls.crt
- --tls-private-key-file=/var/run/serving-cert/tls.key
- --client-ca-file=/etc/ssl/certs/ca.crt
</code></pre>
|
<p>I have an Apple MacBook Pro with an M1 chip, where I have a local Kubernetes cluster running through <code>Kind</code>. The thing is, I don't understand how <code>Kind</code> deals with Docker images from different platforms/architectures. I have an application installed via <code>Helm</code> that points to some Docker images with <code>linux/amd64</code> architecture, and when I install it through <code>helm</code> (<code>helm install -n [namespace] [repo] [app]</code>), it works like a charm (I can actually look at the downloaded images and see that their architecture is <code>amd64</code>). However, when I download the same image to my local Docker registry with <code>docker pull [my-image]</code>, and then try to load it into the <code>kind</code> cluster with <code>kind load docker-image [my-image] [my-cluster]</code>, it doesn't work and I get the following error:</p>
<pre><code>Image: "[my-image]" with ID "sha256:d3a8..." not yet present on node "[my-cluster]-control-plane", loading...
ERROR: failed to load image: command "docker exec --privileged -i [my-cluster]-control-plane ctr --namespace=k8s.io images import --digests --snapshotter=overlayfs -" failed with error: exit status 1
Command Output: ctr: image might be filtered out
</code></pre>
<p>And after googling the error a little bit, I could see that it is due to the mismatch of architectures between the image and the <code>kind</code> cluster.</p>
<p>Could someone explain to me the reason for these different behaviors?</p>
| <p>After asking in the <a href="https://kubernetes.slack.com/archives/CEKK1KTN2" rel="nofollow noreferrer">Kind Slack channel in the Kubernetes workspace</a> I could finally find the answer to my question: <a href="https://kubernetes.slack.com/archives/CEKK1KTN2/p1665033551816149" rel="nofollow noreferrer">whole thread here</a>.</p>
<p>TL,DR; <code>Kind</code> was unable to load the images with architectures that don't match the host architecture because it lacked a required <code>--all-platforms</code> argument in the call to the <code>ctr</code> tool used by <code>kind load docker-image</code> to load the docker images into the cluster. A <a href="https://github.com/kubernetes-sigs/kind/pull/2957" rel="nofollow noreferrer">PR to fix the issue</a> was filed and it will be fixed in future releases of <code>Kind</code>.</p>
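<p>Until a release containing that fix is available, a possible workaround is to run the same import manually with <code>--all-platforms</code> added. This is only a sketch, based on the command <code>kind</code> runs internally:</p>
<pre><code>docker save [my-image] | docker exec --privileged -i [my-cluster]-control-plane \
  ctr --namespace=k8s.io images import --all-platforms --digests --snapshotter=overlayfs -
</code></pre>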
|
<p>I am looking for the python Kubernetes client equivalent for</p>
<pre><code>kubectl get deploy -n app -l team=blue
</code></pre>
<p>I'm aware of this function <code>deployment = api.read_namespaced_deployment(name='foo', namespace='bar')</code>, but how can I do the same thing with a label?</p>
| <p><code>read_namespaced_deployment</code> expects a deployment name, so a <code>label</code> does not make sense there; you should use <a href="https://github.com/kubernetes-client/python/blob/master/kubernetes/docs/AppsV1Api.md#list_namespaced_deployment" rel="nofollow noreferrer">list_namespaced_deployment</a> instead.</p>
<p>You can use <code>label_selector</code>:</p>
<blockquote>
<p><code>:param str label_selector</code>: A selector to restrict the list of returned objects by their labels. Defaults to everything.</p>
</blockquote>
<pre class="lang-py prettyprint-override"><code>from kubernetes import client, config
config.load_kube_config()
kube_client = client.AppsV1Api()
resp = kube_client.list_namespaced_deployment(namespace="app", label_selector="team=blue")
for deployment in resp.items:
print(deployment)
</code></pre>
<p>To list all deployments with their labels:</p>
<pre><code> kubectl get deployment --show-labels
</code></pre>
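<p>and in the Python client the labels are available on each item's metadata, for example:</p>
<pre class="lang-py prettyprint-override"><code>for deployment in resp.items:
    # print only the name and the label map of each matching deployment
    print(deployment.metadata.name, deployment.metadata.labels)
</code></pre>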
|
<p>Forgive me for asking a stupid question but I can't seem to find anywhere in the Kubernetes API reference how to query logs via the REST API if there's more than one container running inside the pod?</p>
<p><code>cURL -k -H Authorization: Bearer my-super-secret-token https://kubernetes/api/v1/namespaces/default/pods/my-app-1/log</code></p>
<p>Returns:</p>
<blockquote>
<p>{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"a
container name must be specified for pod my-app-1,
choose one of: [nginx php-fpm]","reason":"BadRequest","code":400}</p>
</blockquote>
<p>I tried:</p>
<p><code>cURL -k -H Authorization: Bearer my-super-secret-token https://kubernetes/api/v1/namespaces/default/pods/my-app-1/nginx/log</code></p>
<p>and it results in an error that the resource can't be found.</p>
<p>How do I specify the container name when making an HTTP request to the API?</p>
| <p>Figured it out - I needed to add <strong>container</strong> using a query parameter:</p>
<p><em>?container=nginx</em></p>
<p><strong>Working Example:</strong></p>
<p><code>curl -k -H "Authorization: Bearer my-super-secret-token" "https://kubernetes/api/v1/namespaces/default/pods/my-app-1/log?container=nginx"</code></p>
|
<p>I was wondering if it is really relevant to set “requests” (CPU/MEM) values if I’m not using HPA?</p>
<p>If those values are not used to scale pods up or down, what is the point?</p>
| <p>It's fine and it will work if you don't provide <strong>requests</strong> (CPU/MEM) for your workloads.</p>
<p>But consider this scenario: suppose you have 1-2 nodes with a capacity of <strong>1 GB</strong> each and you have not set any <strong>requests</strong>.</p>
<p>An already running application is using about <strong>0.5 GB</strong> of a node. Your new app needs <strong>1 GB</strong> to start, yet K8s may still schedule its PODs onto that node because, without requests, the scheduler is not aware of the minimum resources the application needs to start.</p>
<p>After that, whatever happens, we call it a <strong>crash</strong>.</p>
<p>If you have spare resources in the cluster, set affinity, and have confidence in the application code, you can go without setting requests (<strong>not a best practice</strong>).</p>
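<p>For reference, a minimal sketch of what setting requests on a container looks like in the pod spec (the values are placeholders to adapt to your workload):</p>
<pre><code>containers:
  - name: my-app
    image: my-app:latest
    resources:
      requests:
        cpu: "250m"
        memory: "256Mi"
</code></pre>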
|
<p>I have a .NET Core Web API hosted in Kubernetes as a Pod. It is also exposed as a Service.
I have created a Dev SSL certificate, and it produced an aspnetapp.pfx file.</p>
<p>Here is a snippet of my Docker file:</p>
<pre><code>FROM mcr.microsoft.com/dotnet/aspnet:5.0 AS base
WORKDIR /app
EXPOSE 443
ENV ASPNETCORE_URLS=https://+:443
ENV ASPNETCORE_HTTPS_PORT=443
ENV ASPNETCORE_Kestrel__Certificates__Default__Password={password}
ENV ASPNETCORE_Kestrel__Certificates__Default__Path=APIGateway/Certificates/aspnetapp.pfx
</code></pre>
<p>When I run the app in Kubernetes I receive an error in the container logs, and the container is failing to start:</p>
<pre><code>error:2006D002:BIO routines:BIO_new_file:system lib
</code></pre>
<p>I know it's able to find the SSL certificate, but it's throwing the above error.</p>
<p>Please help!:)</p>
| <p>I just ran into this same problem and even though things were working fine previously, <em>something</em> was updated (possibly .NET 6.0.402) which caused a problem.</p>
<p>What I noticed is that my exported dev cert pfx in the Docker container had its permissions set to:</p>
<pre><code>-rw------- 1 root root 2383 Oct 18 14:40 cert.pfx
</code></pre>
<p>In my Dockerfile, I export the dotnet dev cert and run a chmod to add read permissions for everyone:</p>
<pre><code>RUN dotnet dev-certs https --clean && dotnet dev-certs https --export-path /app/publish/cert.pfx -p {password}
RUN chmod 644 /app/publish/cert.pfx
</code></pre>
<p>This resulted in permissions which were the same as my appsettings files:</p>
<pre><code>-rw-r--r-- 1 root root 535 Oct 18 14:11 appsettings.Development.json
-rw-r--r-- 1 root root 331 Sep 27 18:13 appsettings.json
-rw-r--r-- 1 root root 2383 Oct 18 14:40 cert.pfx
</code></pre>
<p>That fixed the error for me.</p>
|
<p>I want to access a value with a dotted name in Helm, to use it in a ConfigMap. The value is something like this:</p>
<pre><code>valuenum:
joji.json: zok
</code></pre>
<p>I want to use it in a ConfigMap with Helm like this:</p>
<pre><code>{{ toYaml .Values.valuenum.joji.json }}
</code></pre>
<p>It returns a syntax error, and I could not find a fix for it.</p>
| <p>I found the answer myself: with <code>index</code> we can look up keys that contain dots by quoting them.</p>
<pre><code>{{ index .Values.valuenum "joji.json" }}
</code></pre>
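<p>For example, used inside a ConfigMap template it could look like this (a sketch; the resource name and data key are assumptions):</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-config
data:
  # "joji.json" is looked up with index because of the dot in the key
  app-config: {{ index .Values.valuenum "joji.json" | quote }}
</code></pre>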
<p><a href="https://helm.sh/docs/chart_template_guide/variables/" rel="nofollow noreferrer">link for helm doc about index and more</a></p>
|
<p>I'm developing a kubernetes operator that represents a very simple api and a controller.
I would like to cap the number of CustomResources that can belong to the specific CustomResourceDefinition the operator defines. (Specifically, I would like to allow just one CR; if one is already defined, the operator should emit an error message and skip reconciling the new one.)
If I generate the API, there is a KindList struct generated by default, and if I understand correctly, it should keep track of the CRs already defined for my CRD. It is also added to the scheme by default. See the example from the kubebuilder documentation:</p>
<pre><code>//+kubebuilder:object:root=true
//+kubebuilder:subresource:status
// CronJob is the Schema for the cronjobs API
type CronJob struct {
metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`
Spec CronJobSpec `json:"spec,omitempty"`
Status CronJobStatus `json:"status,omitempty"`
}
//+kubebuilder:object:root=true
// CronJobList contains a list of CronJob
type CronJobList struct {
metav1.TypeMeta `json:",inline"`
metav1.ListMeta `json:"metadata,omitempty"`
Items []CronJob `json:"items"`
}
func init() {
SchemeBuilder.Register(&CronJob{}, &CronJobList{})
}
</code></pre>
<p>Unfortunately, I cannot find out how to access this List from the controller. I have tried it like this, but r.Get cannot accept the list:</p>
<pre><code>cronjob/cronjob_controller.go
package controllers
import (
"context"
"k8s.io/apimachinery/pkg/runtime"
ctrl "sigs.k8s.io/controller-runtime"
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/log"
cronjobv1alpha1 "github.com/example/cronjob-operator/api/v1alpha1"
)
func (r *CronJobReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
_ = log.FromContext(ctx)
// TODO(user): your logic here
cronjob := cronjobv1alpha1.Memcached{}
if err := r.Get(ctx, req.NamespacedName, &cronjob); err != nil {
return ctrl.Result{}, err
}
cronjobList := cachev1alpha1.MemcachedList{}
if err := r.Get(ctx, req.NamespacedName, &cronjobList); err != nil {
return ctrl.Result{}, err
}
return ctrl.Result{}, nil
}
</code></pre>
<p>If I get the list, I could validate the length of it, and do or skip the reconcile.</p>
<p>Is it even a correct approach? Is there a better way to achieve my goal? Should I create a webhook instead?</p>
| <p>Assuming you are using the default <code>sigs.k8s.io/controller-runtime/pkg/client</code>'s <code>client.Client</code>, you get access to the <code>List()</code> function.</p>
<p>In your case <code>r.List(...)</code>.</p>
<h2>Usage:</h2>
<h3>case 1: list by label</h3>
<pre class="lang-golang prettyprint-override"><code>func (r *CronJobReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
cronjobList := cronjobv1alpha1.CronJobList{}
err = r.List(ctx, &cronjobList, client.MatchingLabels{"foo": "bar"})
if err != nil {
return ctrl.Result{}, err
}
}
</code></pre>
<h3>case 2: list all in namespace</h3>
<pre class="lang-golang prettyprint-override"><code>func (r *CronJobReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
cronjobList := cronjobv1alpha1.CronJobList{}
err = r.List(ctx, &cronjobList, client.InNamespace("default"))
if err != nil {
return ctrl.Result{}, err
}
}
</code></pre>
<h3>case 3: list by field i.e. metadata.name</h3>
<pre class="lang-golang prettyprint-override"><code>// in your Reconciler Setup function create an index
func SetupWithManager(mgr ctrl.Manager) error {
r := &CronJobReconciler{
Client: mgr.GetClient(),
}
mgr.GetFieldIndexer().IndexField(context.TODO(), &cronjobv1alpha1.CronJob{}, "metadata.name", NameIndexer)
return ctrl.NewControllerManagedBy(mgr).
For(&cronjobv1alpha1.CronJob{}).
Complete(r)
}
func NameIndexer(o client.Object) []string {
m := o.(*cronjobv1alpha1.CronJob)
return []string{m.ObjectMeta.Name}
}
func (r *CronJobReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
cronjobList := cronjobv1alpha1.CronJobList{}
err = r.List(ctx, &cronjobList, client.MatchingFields{"metadata.name": "test"})
if err != nil {
return ctrl.Result{}, err
}
}
</code></pre>
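<p>To enforce the "only one CR" rule from the question, the length of the returned list can then be checked inside <code>Reconcile</code>. This is only a sketch (the threshold and error message are placeholders, and <code>fmt</code> must be imported):</p>
<pre class="lang-golang prettyprint-override"><code>cronjobList := cronjobv1alpha1.CronJobList{}
if err := r.List(ctx, &cronjobList, client.InNamespace(req.Namespace)); err != nil {
    return ctrl.Result{}, err
}
if len(cronjobList.Items) > 1 {
    // More than one CR exists: skip reconciling and surface the problem.
    return ctrl.Result{}, fmt.Errorf("expected at most one CronJob resource, found %d", len(cronjobList.Items))
}
</code></pre>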
|
<p>My asp.net core web application has redirection applied on statup.cs as:</p>
<pre><code>app.UseHttpsRedirection();
</code></pre>
<p>The problem arises when I host it on Kubernetes, where this redirection does not happen. The app opens on both the HTTP and HTTPS URLs.</p>
<p><strong>I am hosting the app on a local Kubernetes cluster on my PC, which runs Kali Linux.</strong></p>
<p>This is the ingress which i am using:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: first-ingress
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
tls:
- secretName: ssl
rules:
- http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: first-service
port:
number: 8080
</code></pre>
<p>The ingress opens 2 ports: 80 and 443.</p>
<p>How should the redirection be done in such a case?</p>
<p>The deployment is:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: first-dep
labels:
app: aspnet-core-app
spec:
replicas: 1
selector:
matchLabels:
component: web
template:
metadata:
labels:
component: web
spec:
containers:
- name: csimpleweb
image: simpleweb
imagePullPolicy: Never
ports:
- containerPort: 80
</code></pre>
<p>The Service is:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: first-service
spec:
type: NodePort
selector:
component: web
ports:
- port: 8080
targetPort: 80
</code></pre>
<p>So in summary I want to ask the following:</p>
<ol>
<li>How redirection works in k8s + ASP.NET Core. The redirection does not work in my case where I have applied the redirection on the app.</li>
<li>Should the redirection be done on the ingress, and if so, how? In that case I think we have to remove <strong>app.UseHttpsRedirection()</strong> from the app.</li>
<li>Which Docker container ports (the ones the pods will expose) do we have to open, and how?</li>
</ol>
<p>Thank you</p>
| <p>According to my research, the following approach works.</p>
<ol>
<li>Remove the redirection from the ASP.NET Core app when hosting it on Kubernetes.</li>
<li>Apply the redirect on the Ingress itself, by adding the following 2 annotations to it:</li>
</ol>
<blockquote>
<p>nginx.ingress.kubernetes.io/force-ssl-redirect: "true"<br />
nginx.ingress.kubernetes.io/ssl-redirect: "true"</p>
</blockquote>
<p>So my ingress code becomes:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: first-ingress
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
nginx.ingress.kubernetes.io/ssl-redirect: "true"
spec:
tls:
- secretName: ssl
rules:
- http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: first-service
port:
number: 8080
</code></pre>
<p>Let me know your thoughts. If you have a better answer, kindly add a new answer to the question.</p>
|
<p>I'm using fluent-bit to collect logs and pass it to fluentd for processing in a Kubernetes environment. Fluent-bit instances are controlled by DaemonSet and read logs from docker containers.</p>
<pre><code> [INPUT]
Name tail
Path /var/log/containers/*.log
Parser docker
Tag kube.*
Mem_Buf_Limit 5MB
Skip_Long_Lines On
</code></pre>
<p>There is a fluent-bit service also running</p>
<pre><code>Name: monitoring-fluent-bit-dips
Namespace: dips
Labels: app.kubernetes.io/instance=monitoring
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=fluent-bit-dips
app.kubernetes.io/version=1.8.10
helm.sh/chart=fluent-bit-0.19.6
Annotations: meta.helm.sh/release-name: monitoring
meta.helm.sh/release-namespace: dips
Selector: app.kubernetes.io/instance=monitoring,app.kubernetes.io/name=fluent-bit-dips
Type: ClusterIP
IP Families: <none>
IP: 10.43.72.32
IPs: <none>
Port: http 2020/TCP
TargetPort: http/TCP
Endpoints: 10.42.0.144:2020,10.42.1.155:2020,10.42.2.186:2020 + 1 more...
Session Affinity: None
Events: <none>
</code></pre>
<p>Fluentd service description is as below</p>
<pre><code>Name: monitoring-logservice
Namespace: dips
Labels: app.kubernetes.io/instance=monitoring
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=logservice
app.kubernetes.io/version=1.9
helm.sh/chart=logservice-0.1.2
Annotations: meta.helm.sh/release-name: monitoring
meta.helm.sh/release-namespace: dips
Selector: app.kubernetes.io/instance=monitoring,app.kubernetes.io/name=logservice
Type: ClusterIP
IP Families: <none>
IP: 10.43.44.254
IPs: <none>
Port: http 24224/TCP
TargetPort: http/TCP
Endpoints: 10.42.0.143:24224
Session Affinity: None
Events: <none>
</code></pre>
<p>But the fluent-bit logs don't reach fluentd, and I am getting the following error:</p>
<pre><code>[error] [upstream] connection #81 to monitoring-fluent-bit-dips:24224 timed out after 10 seconds
</code></pre>
<p>I tried several things, like:</p>
<ul>
<li>re-deploying the fluent-bit pods</li>
<li>re-deploying the fluentd pod</li>
<li>upgrading the fluent-bit version from 1.7.3 to 1.8.10</li>
</ul>
<p>This is a Kubernetes environment where fluent-bit was able to communicate with fluentd in the very early stage of the deployment. Apart from that, the same fluent versions work when I deploy locally in a docker-desktop environment.</p>
<p>My guesses are</p>
<ul>
<li>fluent-bit cannot manage the volume of logs it has to process</li>
<li>fluent services are unable to communicate once the services are restarted</li>
</ul>
<p>Does anyone have experience with this, or any idea how to debug this issue more deeply?</p>
<hr />
<p>Updated: following is the description of the running fluentd pod</p>
<pre><code>Name: monitoring-logservice-5b8864ffd8-gfpzc
Namespace: dips
Priority: 0
Node: sl-sy-k3s-01/10.16.1.99
Start Time: Mon, 29 Nov 2021 13:09:13 +0530
Labels: app.kubernetes.io/instance=monitoring
app.kubernetes.io/name=logservice
pod-template-hash=5b8864ffd8
Annotations: kubectl.kubernetes.io/restartedAt: 2021-11-29T12:37:23+05:30
Status: Running
IP: 10.42.0.143
IPs:
IP: 10.42.0.143
Controlled By: ReplicaSet/monitoring-logservice-5b8864ffd8
Containers:
logservice:
Container ID: containerd://102483a7647fd2f10bead187eddf69aa4fad72051d6602dd171e1a373d4209d7
Image: our.private.repo/dips/logservice/splunk:1.9
Image ID: our.private.repo/dips/logservice/splunk@sha256:531f15f523a251b93dc8a25056f05c0c7bb428241531485a22b94896974e17e8
Ports: 24231/TCP, 24224/TCP
Host Ports: 0/TCP, 0/TCP
State: Running
Started: Mon, 29 Nov 2021 13:09:14 +0530
Ready: True
Restart Count: 0
Liveness: exec [/bin/healthcheck.sh] delay=0s timeout=1s period=10s #success=1 #failure=3
Readiness: exec [/bin/healthcheck.sh] delay=0s timeout=1s period=10s #success=1 #failure=3
Environment:
SOME_ENV_VARS
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from monitoring-logservice-token-g9kwt (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
monitoring-logservice-token-g9kwt:
Type: Secret (a volume populated by a Secret)
SecretName: monitoring-logservice-token-g9kwt
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
</code></pre>
| <p><a href="https://docs.fluentbit.io/manual/pipeline/filters/kubernetes" rel="nofollow noreferrer">https://docs.fluentbit.io/manual/pipeline/filters/kubernetes</a></p>
<pre><code> filters: |
[FILTER]
Name kubernetes
Match kube.*
Kube_URL https://kubernetes.default:443
tls.verify Off
</code></pre>
<p>In my case, the issue was a Kubernetes API server SSL error; setting <code>tls.verify Off</code> in the kubernetes filter, as above, resolved it.</p>
|
<p>I'm using HashiCorp Vault in Kubernetes. I'm trying to mount a secret file into the main folder where my application resides. It would look like this: <code>/usr/share/nginx/html/.env</code>, while the application files are in <code>/usr/share/nginx/html</code>. But the container is not starting because of that. I suspect that <code>/usr/share/nginx/html</code> was overwritten by Vault (annotation: <code>vault.hashicorp.com/secret-volume-path</code>). How can I mount only the file <code>/usr/share/nginx/html/.env</code>?</p>
<p>My annotations:</p>
<pre><code>vault.hashicorp.com/agent-init-first: "true"
vault.hashicorp.com/agent-inject: "true"
vault.hashicorp.com/agent-inject-secret-.env: configs/data/app/dev
vault.hashicorp.com/agent-inject-template-.env: |
{{- with secret (print "configs/data/app/dev") -}}{{- range $k, $v := .Data.data -}}
{{ $k }}={{ $v }}
{{ end }}{{- end -}}
vault.hashicorp.com/role: app
vault.hashicorp.com/secret-volume-path: /usr/share/nginx/html
</code></pre>
| <p>I tried to replicate the use case, but I got an error</p>
<pre><code>2022/10/21 06:42:12 [error] 29#29: *9 directory index of "/usr/share/nginx/html/" is forbidden, client: 20.1.48.169, server: localhost, request: "GET / HTTP/1.1", host: "20.1.55.62:80"
</code></pre>
<p>So it seems like Vault changed the directory permissions as well when it created <code>.env</code> in that path. Here is the config:</p>
<pre><code> vault.hashicorp.com/agent-init-first: "true"
vault.hashicorp.com/agent-inject: "true"
vault.hashicorp.com/agent-inject-secret-.env: kv/develop/us-west-2/app1-secrets
vault.hashicorp.com/agent-inject-template-.env: |
"{{ with secret "kv/develop/us-west-2/app1-secrets" }}
{{ range $k, $v := .Data.data }}
{{ $k }} = "{{ $v }}"
{{ end }}
{{ end }} "
vault.hashicorp.com/agent-limits-ephemeral: ""
vault.hashicorp.com/secret-volume-path: /usr/share/nginx/html/
vault.hashicorp.com/agent-inject-file-.env: .env
vault.hashicorp.com/auth-path: auth/kubernetes/develop/us-west-2
vault.hashicorp.com/role: rolename
</code></pre>
<p>The workaround was to override the <code>command</code> of the desired container; for this use case, I used <code>nginx</code>:</p>
<pre><code>command: ["bash", "-c", "cat /vault/secret/.env > /usr/share/nginx/html/.env && nginx -g 'daemon off;' "]
</code></pre>
<p>Here is the complete example, with the dummy value <code>my-app</code>:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: debug-app
spec:
replicas: 1
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
annotations:
vault.hashicorp.com/agent-init-first: "true"
vault.hashicorp.com/agent-inject: "true"
vault.hashicorp.com/agent-inject-secret-.env: kv/my-app/develop/us-west-2/develop-my-app
vault.hashicorp.com/agent-inject-template-.env: |
"{{ with secret "kv/my-app/develop/us-west-2/develop-my-app" }}
{{ range $k, $v := .Data.data }}
{{ $k }} = "{{ $v }}"
{{ end }}
{{ end }} "
vault.hashicorp.com/agent-limits-ephemeral: ""
vault.hashicorp.com/secret-volume-path: /vault/secret/
vault.hashicorp.com/agent-inject-file-.env: .env
vault.hashicorp.com/auth-path: auth/kubernetes/develop/us-west-2
vault.hashicorp.com/role: my-app-develop-my-app
spec:
serviceAccountName: develop-my-app
containers:
- name: debug
image: nginx
command: ["bash", "-c", "cat /vault/secret/.env > /usr/share/nginx/html/.env && nginx -g 'daemon off;' "]
ports:
- name: http
containerPort: 80
protocol: TCP
livenessProbe:
httpGet:
path: /
port: http
readinessProbe:
httpGet:
path: /
port: http
</code></pre>
|
<p>The Keycloak operator uses Quarkus: <a href="https://github.com/keycloak/keycloak/tree/main/operator" rel="nofollow noreferrer">https://github.com/keycloak/keycloak/tree/main/operator</a></p>
<p>In <code>application.properties</code> (<a href="https://github.com/keycloak/keycloak/blob/main/operator/src/main/resources/application.properties" rel="nofollow noreferrer">https://github.com/keycloak/keycloak/blob/main/operator/src/main/resources/application.properties</a>) we can set environment variables:
<a href="https://quarkus.io/guides/deploying-to-kubernetes#environment-variables-from-keyvalue-pairs" rel="nofollow noreferrer">https://quarkus.io/guides/deploying-to-kubernetes#environment-variables-from-keyvalue-pairs</a></p>
<p>For example:</p>
<pre><code>quarkus.kubernetes.env.vars.kc-hostname=localhost
quarkus.kubernetes.env.vars.kc-proxy=edge
quarkus.kubernetes.env.vars.proxy-address-forwarding=true
</code></pre>
<p>In the Kubernetes manifests which are generated, these environment variables appear in the <em>operator</em> container:</p>
<pre><code> spec:
containers:
- env:
...
- name: KC_HOSTNAME
value: localhost
- name: PROXY_ADDRESS_FORWARDING
value: "true"
...
- name: KC_PROXY
value: edge
image: keycloak/keycloak-operator:19.0.2
imagePullPolicy: Always
</code></pre>
<p>However, I need them to be set in the <em>application</em> container, instead.</p>
<hr />
<p>Here is another verification of this. The running operator container:</p>
<pre><code>$ kubectl describe pod keycloak-operator --namespace=keycloak
Name: keycloak-operator-6479dbc544-2wl4d
...
Controlled By: ReplicaSet/keycloak-operator-6479dbc544
Containers:
keycloak-operator:
Image: keycloak/keycloak-operator:19.0.2
...
Environment:
KUBERNETES_NAMESPACE: keycloak (v1:metadata.namespace)
KC_HOSTNAME: localhost
PROXY_ADDRESS_FORWARDING: true
OPERATOR_KEYCLOAK_IMAGE: quay.io/keycloak/keycloak:nightly
KC_PROXY: edge
</code></pre>
<p>and using the application manifest provided here (<a href="https://www.keycloak.org/operator/basic-deployment#_deploying_keycloak" rel="nofollow noreferrer">https://www.keycloak.org/operator/basic-deployment#_deploying_keycloak</a>) here is the running application pod:</p>
<pre><code>$ kubectl describe pod example-kc --namespace=keycloak
Name: example-kc-0
Namespace: keycloak
...
Containers:
keycloak:
Container ID:
Image: quay.io/keycloak/keycloak:nightly
...
Environment:
KC_CACHE_STACK: kubernetes
KC_HEALTH_ENABLED: true
KC_CACHE: ispn
KC_DB: postgres
KC_DB_URL_HOST: postgres-db
KC_DB_USERNAME: <set to the key 'username' in secret 'keycloak-db-secret'> Optional: false
KC_DB_PASSWORD: <set to the key 'password' in secret 'keycloak-db-secret'> Optional: false
KEYCLOAK_ADMIN: <set to the key 'username' in secret 'example-kc-initial-admin'> Optional: false
KEYCLOAK_ADMIN_PASSWORD: <set to the key 'password' in secret 'example-kc-initial-admin'> Optional: false
jgroups.dns.query: example-kc-discovery.keycloak
KC_HOSTNAME: test.keycloak.org
KC_HTTPS_CERTIFICATE_FILE: /mnt/certificates/tls.crt
KC_HTTPS_CERTIFICATE_KEY_FILE: /mnt/certificates/tls.key
KC_PROXY: passthrough
</code></pre>
<p>Modifying the manifest (<a href="https://www.keycloak.org/operator/basic-deployment#_deploying_keycloak" rel="nofollow noreferrer">https://www.keycloak.org/operator/basic-deployment#_deploying_keycloak</a>) to replace <code>hostname: test.keycloak.org</code> with <code>hostname: localhost</code> <em>does</em> work. But how about for the other environment variables <code>KC_PROXY</code> and <code>PROXY_ADDRESS_FORWARDING</code>? Thank you</p>
| <p>You should use the <code>unsupported</code> resource property. For instance:</p>
<pre><code>apiVersion: k8s.keycloak.org/v2alpha1
kind: Keycloak
metadata:
...
spec:
unsupported:
podTemplate:
spec:
containers:
- name: keycloak
env:
- name: PROXY_ADDRESS_FORWARDING
value: "true"
</code></pre>
<p>See <code>kubectl explain Keycloak.spec.unsupported.podTemplate</code></p>
<blockquote>
<pre><code>KIND: Keycloak
VERSION: k8s.keycloak.org/v2alpha1
RESOURCE: podTemplate <Object>
DESCRIPTION:
You can configure that will be merged with the one configured by default by the operator.
...
FIELDS:
metadata <Object>
spec <Object>
</code></pre>
</blockquote>
|
<p>Using the SeldonDeployment CRD, I want to configure HPA following this <a href="https://docs.seldon.io/projects/seldon-core/en/latest/examples/autoscaling_example.html" rel="nofollow noreferrer">example</a>.
The Seldon deployment config is:</p>
<pre><code>apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
name: seldon-model
spec:
name: test-deployment
predictors:
- componentSpecs:
- hpaSpec:
maxReplicas: 3
metrics:
- resource:
name: cpu
targetAverageUtilization: 10
type: Resource
minReplicas: 1
spec:
containers:
- image: seldonio/mock_classifier:1.5.0-dev
imagePullPolicy: IfNotPresent
name: classifier
resources:
requests:
cpu: '0.5'
terminationGracePeriodSeconds: 1
graph:
children: []
name: classifier
type: MODEL
name: example
</code></pre>
<p>When the above is applied this generates a deployment and a declarative HPA (along with other components). The deployment file is identical for QA, Alpha and Prod environments leading to the number of max / min replicas being the same. <strong>1. Can the number of replicas be made configurable for different environments?</strong> i.e different values for max / min replicas for different envs (in the CRD SeldonDeployment config file).</p>
<p>Note: for other (non-Seldon) deployments, we use declarative HPA for each environment. But this is not possible in this case since the Seldon controller manager reverts any diff between the applied CRD and current (generated from the CRD) deployment.</p>
<p><strong>2. More generalized question: how would we inject dynamic variables in CRD-based deployments?</strong></p>
| <p>I don't think that's a seldon-specific question.</p>
<p>The straightforward option is to define 3 copies of this config for QA, Alpha and Prod, each with different values, and that's it!</p>
<p>The downside is that this duplicates the code: your static config would be copied 3 times with just the HPA params changed. One solution for that is using <code>helm</code> charts. In this case, you write the chart <code>template</code> with placeholders for the HPA values, and you write 3 <code>values</code> files. That way you have one chart and 3 <code>values</code> files that dynamically set the HPA values per environment.</p>
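<p>A rough sketch of that approach (file names and values are placeholders): the chart template references the values, and each environment gets its own values file.</p>
<pre><code># templates/seldondeployment.yaml (excerpt)
- hpaSpec:
    minReplicas: {{ .Values.hpa.minReplicas }}
    maxReplicas: {{ .Values.hpa.maxReplicas }}

# values-prod.yaml
hpa:
  minReplicas: 2
  maxReplicas: 10
</code></pre>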
|
<p>Basically, I have a problem deleting my <code>spoc-volume-spoc-ihm-kube-test</code> PVC. I tried:</p>
<pre><code>kubectl delete -f file.yml
kubectl delete PVC
</code></pre>
<p>but every time I get the same Terminating status. Also, when I delete the PVC, the console gets stuck in the deleting process.</p>
<p>Capacity: 10Gi
Storage Class: rook-cephfs
Access Modes: RWX</p>
<p>Here is the status in my terminal:</p>
<pre><code>kubectl get pvc
NAME                             STATUS        VOLUME    CAPACITY   ACCESS MODES   STORAGECLASS   AGE
spoc-volume-spoc-ihm-kube-test   Terminating   pvc-_-_   10Gi       RWX            rook-cephfs    3d19h
</code></pre>
<p>Thank You for your answers,
Stack Community :)</p>
| <p>You need to first check if the volume is attached to a resource using <code>kubectl get volumeattachments</code>. If your volume is in the list, it means you have a resource, i.e. a pod or deployment, that is attached to that volume. The reason why it's not terminating is that the PVC and PV metadata <code>finalizers</code> are set to <code>kubernetes.io/pvc-protection</code> and <code>kubernetes.io/pv-protection</code> respectively.</p>
<h3>Solution 1:</h3>
<p>Delete the resources that are attached to/using the volume, i.e. pods, deployments, statefulsets etc. After you delete them, the stuck PV and PVC will terminate.</p>
<h3>Solution 2</h3>
<p>If you are not sure where the volume is attached, you can delete/patch the PV and PVC metadata <code>finalizers</code> to null as follows:</p>
<p>a) Edit the PV and PVC and delete or set to null the <code>finalizers</code> in the metadata</p>
<pre><code>kubectl edit pv {PV_NAME}
kubectl edit pvc {PVC_NAME}
</code></pre>
<p>b) Simply patch the PV and PVC as shown below:</p>
<pre><code>kubectl patch pv {PV_NAME} -p '{"metadata":{"finalizers":null}}'
kubectl patch pvc {PVC_NAME} -p '{"metadata":{"finalizers":null}}'
</code></pre>
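<p>If it is not obvious which workload still mounts the claim, one way to find it is to list each pod together with the claims it uses, for example (a sketch):</p>
<pre><code>kubectl get pods --all-namespaces -o jsonpath='{range .items[*]}{.metadata.namespace}{"\t"}{.metadata.name}{"\t"}{.spec.volumes[*].persistentVolumeClaim.claimName}{"\n"}{end}' | grep spoc-volume-spoc-ihm-kube-test
</code></pre>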
<p>Hope it helps.</p>
|
<p>I am trying to make gRPC work in a <code>digitalocean</code> <code>Kubernetes</code> cluster, but have yet to succeed.</p>
<blockquote>
<p>PS: Apologies for the long content</p>
</blockquote>
<p>I have found some content regarding this, but it revolves around ingresses. For me, these are internal services.</p>
<p>I have a <code>.proto</code> defined as such:</p>
<pre><code>syntax = "proto3";
package imgproto;
option go_package = ".;imgproto";
import "google/protobuf/duration.proto";
import "google/protobuf/timestamp.proto";
service ImaginaryServer {
rpc Ping(PingPayload) returns (PongPayload) {}
}
message PingPayload {
google.protobuf.Timestamp ts = 1;
}
message PongPayload {
google.protobuf.Timestamp ts = 1;
}
</code></pre>
<p>After running <code>proto-gen-go</code>, I populate the <code>implementation</code> with:</p>
<pre><code>type ImaginaryServerImpl struct {
imgproto.UnimplementedImaginaryServer
}
func (s *ImaginaryServerImpl) Ping(_ context.Context, in *imgproto.PingPayload) (*imgproto.PongPayload, error) {
fmt.Printf("ImaginaryServerImpl.Ping: %v\n", in)
return &imgproto.PongPayload{
Ts: timestamppb.New(time.Now()),
}, nil
}
</code></pre>
<p>Create and register it to a GRPC server:</p>
<pre><code>grpcServer := grpc.NewServer()
imgproto.RegisterImaginaryServer(grpcServer, &ImaginaryServerImpl{})
</code></pre>
<p>And start the server:</p>
<pre><code>grpcListener, err := net.Listen("tcp", fmt.Sprintf(":%d", constants.PORT_GRPC))
if err != nil {
return err
}
go func() {
if err := grpcServer.Serve(grpcListener); err != nil {
fmt.Println("GRPC Server startup failed with", err)
}
}()
<-ctx.Done()
grpcServer.GracefulStop()
</code></pre>
<p>I wrote up the client as:</p>
<pre><code>grpcConnection, err := grpc.Dial(
endpoint,
grpc.WithBlock(),
grpc.WithTransportCredentials(insecure.NewCredentials()),
grpc.WithUnaryInterceptor(func(ctx context.Context, method string, req, reply interface{}, cc *grpc.ClientConn, invoker grpc.UnaryInvoker, opts ...grpc.CallOption) error {
fmt.Println("Calling GRPC:", method, req, reply, invoker)
return nil
}))
if err != nil {
return ctx, err
}
client := pmqproto.NewImaginaryClient(grpcConnection)
fmt.Println(" >>>>>>>>>>> PING:")
pong, pingErr := client.Ping(ctx, &imgproto.PingPayload{Ts: timestamppb.Now()}, grpc.WaitForReady(false))
if pingErr != nil {
fmt.Println(pingErr)
}
fmt.Println(" >>>>>>>>>>> PONG: ", pong.Ts.AsTime().String())
</code></pre>
<p>But it looks like the client is returning without actually invoking the RPC.</p>
<p><code>Log</code> that I am seeing in the client:</p>
<pre><code>>>>>>>>>>>> PING:
Calling GRPC: /imgproto.ImaginaryServer/Ping ts:{seconds:1666113879 nanos:778900352} 0x127aa60
>>>>>>>>>>> PONG: 1970-01-01 00:00:00 +0000 UTC
</code></pre>
<blockquote>
<p>There are no logs in the server.</p>
</blockquote>
<p>My <code>k8s</code> yaml is as such:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: reality
namespace: reality-ns
spec:
type: ClusterIP
selector:
app: reality
ports:
- name: grpc
protocol: TCP
port: 6772
targetPort: 6772
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: reality
namespace: reality-ns
labels:
app: reality
spec:
selector:
matchLabels:
app: reality
template:
metadata:
labels:
app: reality
spec:
containers:
- name: reality
image: registry.example.com/binaek/reality
imagePullPolicy: Always
command: ["cmd"]
ports:
- name: grpc
containerPort: 6772
</code></pre>
<p>I am not able to locate what I am doing wrong. Desperately need help at this point.</p>
<blockquote>
<p>The image uses <code>gcr.io/distroless/base</code> as its base.</p>
</blockquote>
| <p>Your interceptor usage is likely preventing execution of the RPC.</p>
<pre><code>// When a unary interceptor(s) is set on a ClientConn, gRPC
// delegates all unary RPC invocations to the interceptor, and it is the
// responsibility of the interceptor to call invoker to complete the processing
// of the RPC.
</code></pre>
<p><a href="https://github.com/grpc/grpc-go/blob/9127159caf5a3879dad56b795938fde3bc0a7eaa/interceptor.go#L31-L34" rel="nofollow noreferrer">https://github.com/grpc/grpc-go/blob/9127159caf5a3879dad56b795938fde3bc0a7eaa/interceptor.go#L31-L34</a></p>
<p>So your interceptor function should instead look like:</p>
<pre><code>grpc.WithUnaryInterceptor(func(ctx context.Context, method string, req, reply interface{}, cc *grpc.ClientConn, invoker grpc.UnaryInvoker, opts ...grpc.CallOption) error {
fmt.Println("Calling GRPC:", method, req, reply, invoker)
return invoker(ctx, method, req, reply, cc, opts...)
}))
</code></pre>
|
<p>Is it possible to apply <code>restartPolicy</code> for the pod based on the Docker exit code? What I mean is: if a container exits with code 1 then the pod should be restarted (restartPolicy = Always), but when the container exits with code 2 then the pod shouldn't be restarted (restartPolicy = Never)?</p>
<p>I have a program in the container which has 2 exit codes:</p>
<ul>
<li><p>exit 1 - where a failure is recoverable (e.g. disconnection from a server as server can be down/restarted),</p>
</li>
<li><p>exit 2 - where a failure is not recoverable (passed wrong argument).</p>
<p>I want Kubernetes to restart the pod when the exit code is 1, but not when the exit code is 2, as there will be no chance of recovery.</p>
</li>
</ul>
<p>I think I can trick Kubernetes by setting the exit code for a non-recoverable failure to 0 (normal termination) and the exit code for a recoverable failure to 1, then setting <code>restartPolicy</code> to <code>OnFailure</code>, but I don't like this solution.</p>
| <p>It is possible from Kubernetes 1.25: <a href="https://kubernetes.io/docs/concepts/workloads/controllers/job/#pod-failure-policy" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/controllers/job/#pod-failure-policy</a></p>
<p>It only exists for Kubernetes Jobs, and it requires the Job's pod template to use <code>restartPolicy: Never</code>.</p>
<p>In the case of your initial question, the syntax would be:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: batch/v1
kind: Job
spec:
podFailurePolicy:
rules:
- action: Count
onExitCodes:
operator: In
values: [1]
- action: FailJob
onExitCodes:
operator: In
values: [2]
</code></pre>
|
<p>Let's say we are running some services in a Kubernetes cluster and one of them requires a PostgreSQL instance, expected to persist data reliably. Should the DB live in the cluster or be configured separately?</p>
<p>Imagine that the DB is deployed in the cluster. This probably means one of the following:</p>
<ol>
<li>We need a process for migrating the data to another node in case the current one goes down. This sounds like a non-trivial task. Or:</li>
<li>The node where the DB lives has to be treated in a special way. Horizontal scaling must be constrained to the other nodes and the cluster ceases to be homogeneous. This might be seen as a design flaw, going against the spirit of maintaining disposable, replaceable containers.</li>
</ol>
<p>Point (1) applies only to self-managed clusters where all the storage we have at our disposal is tied to machines where the nodes run. If we are using a managed cloud, we can use persistent volume claims and a new instance can pick up the data automatically. Still, this means that if the node with the DB is removed, we will suffer a database downtime until a new instance comes up. So point (2) remains valid also for managed K8s offerings.</p>
<p>Therefore I can well understand the argument for keeping the DB outside of Kubernetes. What would some counterarguments look like? There are a lot of official helm charts for various DBs which suggests that people keep their DBs in Kubernetes clusters after all.</p>
<p>Happy to learn some critical thoughts!</p>
| <p>This is not an anti-pattern. It is just difficult to implement and manage.</p>
<p><strong>Point 1</strong></p>
<ul>
<li>Even in a self-hosted cluster you can have persistent volume storage provisioned through GlusterFS or Ceph, so you don't always have to use ephemeral, node-local storage. Point 1 is therefore not fully valid.</li>
<li>The DBs are generally created as <a href="https://devopscube.com/deploy-postgresql-statefulset/" rel="nofollow noreferrer">statefulsets</a>, where every instance gets its own copy of the data.</li>
</ul>
<p><strong>Point 2</strong></p>
<ul>
<li>When your DB cluster scales horizontally, the 'init' container of the new DB pod, or a CRD provided by the DB, needs to register the 'secondary' DB pod so it becomes part of your DB cluster.</li>
<li>A <a href="https://devopscube.com/deploy-postgresql-statefulset/" rel="nofollow noreferrer">statefulset</a> also needs to run behind a <a href="https://chamszamouri.medium.com/why-stateful-applications-in-k8s-need-a-headless-service-20d3db993872" rel="nofollow noreferrer">headless service</a> so the IP of each endpoint is known at all times, for cluster health checks, primary-to-secondary data sync, and electing a new primary in case the primary node goes down (see the sketch after this list).</li>
<li>So, as long as the new pods register themselves with the DB cluster, you will be okay running your DB workload inside a Kubernetes cluster.</li>
</ul>
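<p>A minimal sketch of the headless-service part (names, image and replica count are placeholders; volume claim templates are omitted for brevity):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: postgres-headless
spec:
  clusterIP: None        # headless: DNS returns the individual pod IPs
  selector:
    app: postgres
  ports:
    - port: 5432
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres
spec:
  serviceName: postgres-headless   # ties the StatefulSet to the headless service
  replicas: 3
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:15
          ports:
            - containerPort: 5432
</code></pre>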
<p>Further reading: <a href="https://devopscube.com/deploy-postgresql-statefulset/" rel="nofollow noreferrer">https://devopscube.com/deploy-postgresql-statefulset/</a></p>
|
<p>To get the schema of, for example, <code>secretproviderclasses.secrets-store.csi.x-k8s.io</code>, I would use the command <code>kubectl describe crd secretproviderclasses.secrets-store.csi.x-k8s.io</code> and get as a result:</p>
<pre><code>Name: secretproviderclasses.secrets-store.csi.x-k8s.io
Namespace:
Labels: <none>
Annotations: controller-gen.kubebuilder.io/version: v0.9.0
helm.sh/resource-policy: keep
API Version: apiextensions.k8s.io/v1
Kind: CustomResourceDefinition
Metadata:
Creation Timestamp: 2022-10-11T15:27:15Z
Generation: 1
Managed Fields:
API Version: apiextensions.k8s.io/v1
Fields Type: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
.:
f:controller-gen.kubebuilder.io/version:
f:spec:
f:conversion:
.:
f:strategy:
f:group:
f:names:
f:kind:
f:listKind:
f:plural:
f:singular:
f:scope:
f:versions:
Manager: helm
Operation: Update
Time: 2022-10-11T15:27:15Z
API Version: apiextensions.k8s.io/v1
Fields Type: FieldsV1
fieldsV1:
f:status:
f:acceptedNames:
f:kind:
f:listKind:
f:plural:
f:singular:
f:conditions:
k:{"type":"Established"}:
.:
f:lastTransitionTime:
f:message:
f:reason:
f:status:
f:type:
k:{"type":"NamesAccepted"}:
.:
f:lastTransitionTime:
f:message:
f:reason:
f:status:
f:type:
Manager: kube-apiserver
Operation: Update
Subresource: status
Time: 2022-10-11T15:27:15Z
API Version: apiextensions.k8s.io/v1
Fields Type: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
f:kubectl.kubernetes.io/last-applied-configuration:
Manager: kubectl-client-side-apply
Operation: Update
Time: 2022-10-11T15:27:38Z
API Version: apiextensions.k8s.io/v1
Fields Type: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
f:helm.sh/resource-policy:
Manager: kubectl-patch
Operation: Update
Time: 2022-10-12T16:02:53Z
Resource Version: 123907610
UID: 4a251e0a-97fc-4369-903f-9aa9a13469c1
Spec:
Conversion:
Strategy: None
Group: secrets-store.csi.x-k8s.io
Names:
Kind: SecretProviderClass
List Kind: SecretProviderClassList
Plural: secretproviderclasses
Singular: secretproviderclass
Scope: Namespaced
Versions:
Name: v1
Schema:
openAPIV3Schema:
Description: SecretProviderClass is the Schema for the secretproviderclasses API
Properties:
API Version:
Description: APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
Type: string
Kind:
Description: Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
Type: string
Metadata:
Type: object
Spec:
Description: SecretProviderClassSpec defines the desired state of SecretProviderClass
Properties:
Parameters:
Additional Properties:
Type: string
Description: Configuration for specific provider
Type: object
Provider:
Description: Configuration for provider name
Type: string
Secret Objects:
Items:
Description: SecretObject defines the desired state of synced K8s secret objects
Properties:
Annotations:
Additional Properties:
Type: string
Description: annotations of k8s secret object
Type: object
Data:
Items:
Description: SecretObjectData defines the desired state of synced K8s secret object data
Properties:
Key:
Description: data field to populate
Type: string
Object Name:
Description: name of the object to sync
Type: string
Type: object
Type: array
Labels:
Additional Properties:
Type: string
Description: labels of K8s secret object
Type: object
Secret Name:
Description: name of the K8s secret object
Type: string
Type:
Description: type of K8s secret object
Type: string
Type: object
Type: array
Type: object
Status:
Description: SecretProviderClassStatus defines the observed state of SecretProviderClass
Properties:
By Pod:
Items:
Description: ByPodStatus defines the state of SecretProviderClass as seen by an individual controller
Properties:
Id:
Description: id of the pod that wrote the status
Type: string
Namespace:
Description: namespace of the pod that wrote the status
Type: string
Type: object
Type: array
Type: object
Type: object
Served: true
Storage: true
Deprecated: true
Deprecation Warning: secrets-store.csi.x-k8s.io/v1alpha1 is deprecated. Use secrets-store.csi.x-k8s.io/v1 instead.
Name: v1alpha1
Schema:
openAPIV3Schema:
Description: SecretProviderClass is the Schema for the secretproviderclasses API
Properties:
API Version:
Description: APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
Type: string
Kind:
Description: Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
Type: string
Metadata:
Type: object
Spec:
Description: SecretProviderClassSpec defines the desired state of SecretProviderClass
Properties:
Parameters:
Additional Properties:
Type: string
Description: Configuration for specific provider
Type: object
Provider:
Description: Configuration for provider name
Type: string
Secret Objects:
Items:
Description: SecretObject defines the desired state of synced K8s secret objects
Properties:
Annotations:
Additional Properties:
Type: string
Description: annotations of k8s secret object
Type: object
Data:
Items:
Description: SecretObjectData defines the desired state of synced K8s secret object data
Properties:
Key:
Description: data field to populate
Type: string
Object Name:
Description: name of the object to sync
Type: string
Type: object
Type: array
Labels:
Additional Properties:
Type: string
Description: labels of K8s secret object
Type: object
Secret Name:
Description: name of the K8s secret object
Type: string
Type:
Description: type of K8s secret object
Type: string
Type: object
Type: array
Type: object
Status:
Description: SecretProviderClassStatus defines the observed state of SecretProviderClass
Properties:
By Pod:
Items:
Description: ByPodStatus defines the state of SecretProviderClass as seen by an individual controller
Properties:
Id:
Description: id of the pod that wrote the status
Type: string
Namespace:
Description: namespace of the pod that wrote the status
Type: string
Type: object
Type: array
Type: object
Type: object
Served: true
Storage: false
Status:
Accepted Names:
Kind: SecretProviderClass
List Kind: SecretProviderClassList
Plural: secretproviderclasses
Singular: secretproviderclass
Conditions:
Last Transition Time: 2022-10-11T15:27:15Z
Message: no conflicts found
Reason: NoConflicts
Status: True
Type: NamesAccepted
Last Transition Time: 2022-10-11T15:27:15Z
Message: the initial names have been accepted
Reason: InitialNamesAccepted
Status: True
Type: Established
Stored Versions:
v1
</code></pre>
<p>Events: </p>
<p>Which is really verbose and hard to read, so is there some command that returns a json schema with examples?</p>
| <p>I am not aware of anything that returns an example directly. But you could use <code>kubectl explain</code> to get information about the fields of the api-resource's schema:</p>
<p>e.g.</p>
<pre><code>$ kubectl explain secretproviderclasses.secrets-store.csi.x-k8s.io
</code></pre>
<p>To get the fields listed in a recursive way, use:</p>
<pre><code>kubectl explain secretproviderclasses.secrets-store.csi.x-k8s.io --recursive
</code></pre>
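<p>If you specifically want the schema as JSON, you can also read it straight from the CRD object, for example (a sketch using <code>jq</code> for filtering and pretty-printing):</p>
<pre><code>kubectl get crd secretproviderclasses.secrets-store.csi.x-k8s.io -o json \
  | jq '.spec.versions[] | select(.name=="v1") | .schema.openAPIV3Schema'
</code></pre>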
<p>Some operators also provide examples of their provided CRDs in the <code>alm-examples</code> annotation of their ClusterServiceVersion. So maybe this could be another way for you.</p>
|
<p>I have read <a href="https://stackoverflow.com/questions/55965342/kubernetes-load-balancing">this question</a> which is very similar to what I am asking, but still wanted to write a new question since the accepted answer there seems very incomplete and also potentially wrong.</p>
<p>Basically, it seems like there is some missing or contradictory information regarding built in load-balancing for regular Kubernetes Services (I am not talking about LoadBalancer services). For example, the official <a href="https://cilium.io/get-started/" rel="nofollow noreferrer">Cilium documentation</a> states that "Kubernetes doesn't come with an implementation of Load Balancing". In addition, I couldn't find any information in the official Kubernetes documentation about load balancing for internal services (there was only <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/#load-balancing" rel="nofollow noreferrer">a section discussing this under ingresses</a>).</p>
<p>So my question is - how does load balancing or distribution of requests work when we make a request from within a Kubernetes cluster to the internal address of a Kubernetes service?</p>
<p>I know there's a Kubernetes proxy on each node that creates the DNS records for such services, but what about services that span multiple pods and nodes? There's got to be some form of request distribution or load-balancing, or else this just wouldn't work at all, no?</p>
| <p>A standard Kubernetes <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">Service</a> provides basic load-balancing. Even for a ClusterIP-type Service, the Service has its own cluster-internal IP address and DNS name, and forwards requests to the collection of Pods specified by its <code>selector:</code>.</p>
<p>In normal use, it is enough to create a multiple-replica Deployment, set a Service to point at its Pods, and send requests only to the Service. All of the replicas will receive requests.</p>
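<p>As a concrete sketch of that setup (names and ports are placeholders):</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-app:latest
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: my-app          # in-cluster clients call http://my-app:80
spec:
  selector:
    app: my-app         # requests are spread across all matching pods
  ports:
    - port: 80
      targetPort: 8080
</code></pre>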
<p>The documentation discusses <a href="https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies" rel="nofollow noreferrer">the implementation of internal load balancing</a> in more detail than an application developer normally needs. Unless your cluster administrator has done extra setup, you'll probably get round-robin request routing – the first Pod will receive the first request, the second Pod the second, and so on.</p>
<blockquote>
<p>... the official Cilium documentation states ...</p>
</blockquote>
<p>This is almost certainly a statement about <em>external</em> load balancing. As a cluster administrator (not a programmer) a "plain" Kubernetes installation doesn't include an external load-balancer implementation, and a LoadBalancer-type Service behaves identically to a NodePort-type Service.</p>
<p>There are obvious deficiencies to round-robin scheduling, most notably if you do wind up having individual network requests that take a long time and a lot of resource to service. As an application developer the best way to address this is to make these very-long-running requests run asynchronously; return something like an HTTP 201 Created status with a unique per-job URL, and do the actual work in a separate queue-backed worker.</p>
|
<p>Currently, my Kubernetes cluster is provisioned via <code>GKE</code>.</p>
<p>I use <code>GCE Persistent Disks</code> to persist my data.</p>
<p>In <code>GCE</code>, persistent storage is provided via <code>GCE Persistent Disks</code>. Kubernetes supports adding them to <code>Pods</code> or <code>PersistenVolumes</code> or <code>StorageClasses</code> via the <code>gcePersistentDisk</code> volume/provisioner type.</p>
<p>What if I would like to transfer my cluster from <code>Google</code> to, let's say, <code>Azure</code> or <code>AWS</code>?
Then I would have to change the value of the volume type to <code>azureFile</code> or <code>awsElasticBlockStore</code> respectively in all occurrences in the manifest files.</p>
<p>I hoped the <code>CSI</code> drivers would solve that problem; unfortunately, they also use a different volume type for each cloud provider, for example <code>pd.csi.storage.gke.io</code> for <code>GCP</code> or <code>disk.csi.azure.com</code> for <code>Azure</code>.</p>
<p>Is there any convenient way to make the Kubernetes volumes to be cloud agnostic? In which I wouldn't have to make any changes in manifest files before K8s cluster migration.</p>
| <p>You cannot have cloud-agnostic storage by using the CSI drivers or the native VolumeClaims in Kubernetes. That's because these APIs are the upstream way of provisioning storage, and each cloud provider has to integrate with them to translate the requests into its cloud-specific API (Persistent Disks for Google, EBS for AWS...).</p>
<p>The exception is a self-managed storage system that you can access via an NFS driver or a specific driver for one of the tools mentioned above. Even then, the self-managed storage solution itself is going to be backed by a cloud-provider-specific volume, so you are just shifting the issue to a different place.</p>
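<p>For illustration only, the "self-managed storage" route mentioned above looks like this in manifest terms; the NFS server address and export path here are assumptions you would replace with your own:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: shared-data
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: nfs.internal.example.com   # assumed self-managed NFS endpoint
    path: /exports/data
</code></pre>
<p>The manifest itself contains nothing cloud-specific, but the disks behind that NFS server still are, which is exactly the shifted issue described above.</p>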
|
<p>Auto scaler failing with the below error.</p>
<p>Procedure followed for <a href="https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler" rel="nofollow noreferrer">here</a></p>
<pre><code>aws_cloud_provider.go:369] Failed to generate AWS EC2 Instance Types: WebIdentityErr: failed to retrieve credentials
cluster autoscaler failed to generate aws ec2 instance types: unable to load ec2 instance type list
</code></pre>
| <p>Add the <code>"ec2:DescribeInstanceTypes"</code> permission to your <code><AmazonEKSClusterAutoscalerPolicy></code> IAM policy.
Then restart the cluster-autoscaler deployment:</p>
<pre><code>kubectl -n kube-system rollout restart deployment.apps/cluster-autoscaler
</code></pre>
<p>This worked for me.</p>
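<p>For reference, the relevant statement in the policy could look roughly like this (a sketch only; your policy will normally contain the other cluster-autoscaler actions as well):</p>
<pre class="lang-json prettyprint-override"><code>{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "autoscaling:DescribeAutoScalingGroups",
        "autoscaling:DescribeAutoScalingInstances",
        "autoscaling:SetDesiredCapacity",
        "autoscaling:TerminateInstanceInAutoScalingGroup",
        "ec2:DescribeInstanceTypes",
        "ec2:DescribeLaunchTemplateVersions"
      ],
      "Resource": "*"
    }
  ]
}
</code></pre>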
|
<p>I am able to exec into a pod , and in that pod i will also like to run a stolonctl command at the same time into the pod i just exec into. Here is an example of what i will like to achieve. I tried using the first command first and then tried to see if i write the second command if it will work based on the first but it didnt.</p>
<p><code>Special execution command ${cluster_name} kubectl exec -it pod -c container ${cluster_name} -- /bin/bash</code>
then in the bash i want to also run this
<code>stolonctl --cluster-name [cluster_name] --store-backend [store_backend] --store-endpoints kubernetes status</code>
i want to be able to achieve something like this in robot. be able to do something similar to ls in the pod</p>
<pre><code>controlplane $ kubectl run --image=nginx web --restart=Never
pod/web created
controlplane $ kubectl get po
NAME READY STATUS RESTARTS AGE
web 0/1 ContainerCreating 0 4s
controlplane $ kubectl exec -it web -- /bin/bash
root@web:/# ls
bin dev docker-entrypoint.sh home lib64 mnt proc run srv tmp var
boot docker-entrypoint.d etc lib media opt root sbin sys usr
</code></pre>
| <p>You don't need to open a bash in the container first. You can run the <code>stolonctl</code> command directly:</p>
<pre><code>$ kubectl exec -it pod -c container ${cluster_name} -- stolonctl --cluster-name [cluster_name] --store-backend [store_backend] --store-endpoints kubernetes status
</code></pre>
|
<p>I'm trying to use the <a href="https://github.com/kubernetes/client-go" rel="nofollow noreferrer">K8S Go client</a> to list the <code>ClusterServiceVersion</code>s.
It could be enough to have the raw response body.</p>
<p>I've tried this:</p>
<pre class="lang-golang prettyprint-override"><code>data, err := clientset.RESTClient().Get().Namespace(namespace).
Resource("ClusterServiceVersions").
DoRaw(context.TODO())
if err != nil {
panic(err.Error())
}
fmt.Printf("%v", string(data))
</code></pre>
<p>But it returns the following error:</p>
<pre><code>panic: the server could not find the requested resource (get ClusterServiceVersions.meta.k8s.io)
</code></pre>
<p>How do I specify to use the <code>operators.coreos.com</code> group?</p>
<p>Looking at <a href="https://github.com/operator-framework/operator-lifecycle-manager/blob/master/pkg/api/client/clientset/versioned/typed/operators/v1alpha1/clusterserviceversion.go#L90" rel="nofollow noreferrer">some existing code</a> I've also tried to add</p>
<pre class="lang-golang prettyprint-override"><code>VersionedParams(&v1.ListOptions{}, scheme.ParameterCodec)
</code></pre>
<p>But it result in this other error:</p>
<pre><code>panic: v1.ListOptions is not suitable for converting to "meta.k8s.io/v1" in scheme "pkg/runtime/scheme.go:100"
</code></pre>
| <p>It is possible to do a raw request using the <code>AbsPath()</code> method.</p>
<pre class="lang-golang prettyprint-override"><code>path := fmt.Sprintf("/apis/operators.coreos.com/v1alpha1/namespaces/%s/clusterserviceversions", namespace)
data, err := clientset.RESTClient().Get().
AbsPath(path).
    DoRaw(context.TODO())
</code></pre>
<p>Also notice that if you want to define <code>clientset</code> using the interface (<code>kubernetes.Interface</code>) instead of the concrete type (<code>*kubernetes.Clientset</code>) the method <code>clientset.RESTClient()</code> is not directly accessible, but you can use the following one:</p>
<pre><code>clientset.Discovery().RESTClient()
</code></pre>
|
<p>I want to make a <code>YAML</code> file with Deployment, Ingress, and Service (maybe with clusterissuer, issuer and cert) on one file, how can I do that? I tried</p>
<pre><code>kubectl apply -f (name_file.yaml)
</code></pre>
| <p>You can do it by separating the resources with three dashes (<code>---</code>) in your YAML file,
like this:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: mock
spec:
...
---
apiVersion: v1
kind: ReplicationController
metadata:
name: mock
spec:
</code></pre>
<p>Source : <a href="https://levelup.gitconnected.com/kubernetes-merge-multiple-yaml-into-one-e8844479a73a" rel="nofollow noreferrer">https://levelup.gitconnected.com/kubernetes-merge-multiple-yaml-into-one-e8844479a73a</a></p>
|
<p>I have the following setup in a kubernetes cluster:</p>
<pre><code>Node Name Node Label
worker-zone-a-1 topology.kubernetes.io/zone=a
worker-zone-a-2 topology.kubernetes.io/zone=a
worker-zone-a-3 topology.kubernetes.io/zone=a
worker-zone-a-4 topology.kubernetes.io/zone=a
worker-zone-b-1 topology.kubernetes.io/zone=b
worker-zone-b-2 topology.kubernetes.io/zone=b
worker-zone-b-3 topology.kubernetes.io/zone=b
</code></pre>
<p>And I have the following deployment:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
labels:
app: nginx
spec:
replicas: 5
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
topologySpreadConstraints:
- maxSkew: 1
topologyKey: topology.kubernetes.io/zone
whenUnsatisfiable: DoNotSchedule
labelSelector:
matchLabels:
app: nginx
containers:
- name: nginx
image: nginx:1.14.2
ports:
- containerPort: 80
</code></pre>
<p>The purpose of using <code>topologySpreadConstraints</code> is to distribute the Deployment in a way that zone-a will run 3 Pods and zone-b will run 2 Pods.</p>
<p>When applying the Deployment several times, sometimes the pods are distributed 3 for zone-a and 2 for zone-b and sometimes the pods are distributed 2 for zone-a and 3 for zone-b.</p>
<p>Is there a way to set the Pods distribution of the Deployment so it will always run 3 Pods in zone-a and 2 pods in zone-b?</p>
<p>Thanks :-)</p>
| <p>There is a solution.</p>
<p>First I want to thank @yair-elmaliah for providing the solution in a different channel.
<a href="https://stackoverflow.com/users/16521581/yair-elmaliah">https://stackoverflow.com/users/16521581/yair-elmaliah</a></p>
<p>So, my adjusted setup is as follows.</p>
<p>The nodes in the Kubernetes cluster have the following labels:</p>
<pre><code>Node Name Node Labels
worker-zone-a-1 topology.kubernetes.io/zone=a,topology.kubernetes.io/zone-a=true,node-role.kubernetes.io/data=data
worker-zone-a-2 topology.kubernetes.io/zone=a,topology.kubernetes.io/zone-a=true,node-role.kubernetes.io/data=data
worker-zone-a-3 topology.kubernetes.io/zone=a,topology.kubernetes.io/zone-a=true,node-role.kubernetes.io/data=data
worker-zone-a-4 topology.kubernetes.io/zone=a,topology.kubernetes.io/zone-a=true,node-role.kubernetes.io/data=data
worker-zone-b-1 topology.kubernetes.io/zone=b,topology.kubernetes.io/zone-b=true,node-role.kubernetes.io/data=data
worker-zone-b-2 topology.kubernetes.io/zone=b,topology.kubernetes.io/zone-b=true,node-role.kubernetes.io/data=data
worker-zone-b-3 topology.kubernetes.io/zone=b,topology.kubernetes.io/zone-b=true,node-role.kubernetes.io/data=data
</code></pre>
<p>And this is the adjusted deployment:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
labels:
app: nginx
spec:
replicas: 5
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
nodeSelector:
node-role.kubernetes.io/data: data
topologySpreadConstraints:
- topologyKey: topology.kubernetes.io/zone
maxSkew: 1
whenUnsatisfiable: DoNotSchedule
labelSelector:
matchLabels:
app: nginx
- topologyKey: topology.kubernetes.io/zone-a
maxSkew: 1
whenUnsatisfiable: ScheduleAnyway
labelSelector:
matchLabels:
app: nginx
containers:
- name: nginx
image: nginx:1.14.2
ports:
- containerPort: 80
</code></pre>
<p>The first constraint (<code>topologyKey: topology.kubernetes.io/zone</code>) will distribute the 5 pods between zone <code>a</code> and zone <code>b</code> using a 3/2 or 2/3 ratio. This is good, but we cannot control where the 3 pods will be allocated.</p>
<p>The second constraint (<code>topologyKey: topology.kubernetes.io/zone-a</code>) will try to schedule one of the pods on a node that has the <code>zone-a</code> label.</p>
<p>Since all constraints are evaluated and combined together, this setup will tell the scheduler to deploy 4 pods based on zone name and the fifth pod based on zone-type.</p>
|
<p>I have an ArgoCD App which is generating secrets:</p>
<pre><code>apiVersion: v1
kind: Secret
metadata:
labels:
{{- include "myapp.labels" . | nindent 4 }}
annotations:
helm.sh/hook: pre-install,post-delete
name: {{ include "myapp.fullname" . }}
type: Opaque
data:
{{- if .Values.password }}
password: {{ .Values.password | b64enc | quote }}
{{- else }}
password: {{ randAlphaNum 10 | b64enc | quote }}
{{- end }}
</code></pre>
<p>A second service adds additional values to the secrets later. I don't want these secrets managed in my chart.</p>
<p>Now when something is changed on the app, the secrets are recreated.</p>
<p>How can i change this behaviour in ArgoCD?</p>
| <p>Add the following annotation to that particular object and it should work:</p>
<pre><code>apiVersion: v1
kind: Secret
metadata:
labels:
{{- include "myapp.labels" . | nindent 4 }}
annotations:
helm.sh/hook: pre-install,post-delete
argocd.argoproj.io/sync-options: Prune=false
</code></pre>
<blockquote>
<p>Some Sync Options can be defined as annotations on a specific resource. Most of the Sync Options are configured in the Application resource <code>spec.syncPolicy.syncOptions</code> attribute. Multiple Sync Options which are configured with the <code>argocd.argoproj.io/sync-options</code> annotation can be concatenated with a comma in the annotation value; white spaces will be trimmed.</p>
</blockquote>
<p><a href="https://argo-cd.readthedocs.io/en/stable/user-guide/sync-options/#no-prune-resources" rel="nofollow noreferrer">no-prune-resources</a></p>
<p>Or, if you don't want auto-sync applied to this object, then you can try <code>ApplyOutOfSyncOnly=false</code></p>
<p><a href="https://argo-cd.readthedocs.io/en/release-2.0/user-guide/sync-options/#selective-sync" rel="nofollow noreferrer">selective-sync</a></p>
|
<p>I'm having a problem with a simple template-in-template case and I cannot get a working solution. Here's the roundup:</p>
<p>This is a standard template created by Helm itself:</p>
<pre><code>{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "app.fullname" -}}
{{- if .Values.fullnameOverride }}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- $name := default .Chart.Name .Values.nameOverride }}
{{- if contains $name .Release.Name }}
{{- .Release.Name | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }}
{{- end }}
{{- end }}
{{- end }}
</code></pre>
<p>Later I'd like to re-use <code>app.fullname</code> in this template:</p>
<pre><code>{{- define "app.userSessionSelector" -}}
workload.user.cattle.io/workloadselector: deployment-{{ .Release.Namespace }}-{{ include "app.fullname" . }}-usersession
{{- end }}
</code></pre>
<p>When I test this configuration I get:</p>
<pre><code>Error: Failed to render chart: exit status 1: install.go:178: [debug] Original chart version: ""
install.go:195: [debug] CHART PATH: /home/user/helm/app
Error: template: app/templates/_helpers.tpl:70:81: executing "app.userSessionSelector" at <include "app.fullname" .>: error calling include: template: app/templates/_helpers.tpl:18:16: executing "app.fullname" at <$name>: invalid value; expected string
helm.go:84: [debug] template: app/templates/_helpers.tpl:70:81: executing "app.userSessionSelector" at <include "app.fullname" .>: error calling include: template: app/templates/_helpers.tpl:18:16: executing "app.fullname" at <$name>: invalid value; expected string
</code></pre>
<p>What puzzles me is why there's a problem with <code>$name</code> while <code>app.fullname</code> template evaluates and is used in various places in the project. I bet this is explained in the documentation somewhere [or it's a bug] but I was unable to find the cause of this.</p>
<p>BTW: I'm using Helm v3.9.2.</p>
| <p>It may be caused by the scope (the context passed to the template).</p>
<p><code>$</code> always refers to the root context, the one that contains <code>.Values</code>, <code>.Chart</code>, <code>.Release</code>, etc.</p>
<p>Inside a <code>range</code> loop (or a <code>with</code> block, etc.), <code>.</code> refers to the current element, so <code>.Values</code> and <code>.Chart</code> are no longer available on it. Passing <code>$</code> instead of <code>.</code> to <code>include</code> makes sure the named template receives the root context:</p>
<pre class="lang-yaml prettyprint-override"><code>{{- define "app.userSessionSelector" -}}
workload.user.cattle.io/workloadselector: deployment-{{ .Release.Namespace }}-{{ include "app.fullname" $ }}-usersession
{{- end }}
</code></pre>
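<p>As a short illustration of why the scope matters (a sketch, assuming a hypothetical <code>.Values.users</code> list):</p>
<pre class="lang-yaml prettyprint-override"><code>{{- range .Values.users }}
{{- /* here "." is the current list element, so it has no .Values, .Chart or .Release,
       and passing it to the helper fails */}}
selector-broken: {{ include "app.userSessionSelector" . }}
{{- /* "$" always refers to the root context, so this works */}}
selector-ok: {{ include "app.userSessionSelector" $ }}
{{- end }}
</code></pre>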
|
<p>I was going by this update for EKS <a href="https://aws.amazon.com/about-aws/whats-new/2020/03/amazon-eks-adds-envelope-encryption-for-secrets-with-aws-kms/" rel="nofollow noreferrer">https://aws.amazon.com/about-aws/whats-new/2020/03/amazon-eks-adds-envelope-encryption-for-secrets-with-aws-kms/</a> and this blog from AWS <a href="https://aws.amazon.com/blogs/containers/using-eks-encryption-provider-support-for-defense-in-depth/" rel="nofollow noreferrer">https://aws.amazon.com/blogs/containers/using-eks-encryption-provider-support-for-defense-in-depth/</a>.</p>
<p>This is a very cryptic line which never confirms whether EKS encrypts secrets or not by default</p>
<blockquote>
<p>In EKS, we operate the etcd volumes encrypted at disk-level using AWS-managed encryption keys.</p>
</blockquote>
<p>I did understand that:-</p>
<ul>
<li>KMS with EKS will provide envelope encryption,like encrypting the DEK using CMK.</li>
<li>But it never mentioned that if I don't use this feature ( of course KMS will cost ), does EKS encrypts data by default?</li>
</ul>
<p>Because Kubernetes by default does not encrypt data . <a href="https://kubernetes.io/docs/concepts/configuration/secret/" rel="nofollow noreferrer">Source</a></p>
<blockquote>
<p>Kubernetes Secrets are, by default, stored unencrypted in the API server's underlying data store (etcd). Anyone with API access can retrieve or modify a Secret, and so can anyone with access to etcd. Additionally, anyone who is authorized to create a Pod in a namespace can use that access to read any Secret in that namespace; this includes indirect access such as the ability to create a Deployment.</p>
</blockquote>
| <p>I think I found it, the blog and update post by aws are very cryptic.</p>
<p>According to <a href="https://docs.aws.amazon.com/eks/latest/userguide/clusters.html" rel="nofollow noreferrer">docs</a> and console :-</p>
<blockquote>
<p>All of the data stored by the etcd nodes and associated Amazon EBS volumes is encrypted using AWS KMS.</p>
</blockquote>
<p>Using KMS with EKS adds a further layer of encryption on top of that (envelope encryption of the secrets themselves). It lets you deploy a defense-in-depth strategy for Kubernetes applications by encrypting Kubernetes Secrets with a KMS key that you define and manage.</p>
<p><a href="https://i.stack.imgur.com/CVMJG.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/CVMJG.png" alt="enter image description here" /></a></p>
|
<p>This question concerns kubernetes v1.24 and up</p>
<p>So I can create tokens for service accounts with</p>
<pre><code>kubectl create token myserviceaccount
</code></pre>
<p>The created token works and serves the purpose, but what I find confusing is that when I <code>kubectl get sa</code> SECRETS field of myserviceaccount is still 0. The token doesn't appear in <code>kubectl get secrets</code> either.</p>
<p>I've also seen that I can pass <code>--bound-object-kind</code> and <code>--bound-object-name</code> to <code>kubectl create token</code> but this doesn't seem to do anything (visible) either...</p>
<p>Is there a way to see created token? And what is the purpose of --bound.. flags?</p>
| <p>Thanks to the <a href="https://kubernetes.io/docs/reference/access-authn-authz/authentication/#service-account-tokens" rel="nofollow noreferrer">docs link</a> I've stumbled upon today (I don't know how I've missed it when asking the question because I've spent quite some time browsing through the docs...) I found the information I was looking for. I feel like providing this answer because I find v1d3rm3's answer incomplete and not fully accurate.</p>
<p>The kubernetes docs confirm v1d3rm3's claim (which is btw the key to answering my question):</p>
<blockquote>
<p>The created token is a signed JSON Web Token (JWT).</p>
</blockquote>
<p>Since the token is a JWT, the server can verify whether it signed it, hence there is no need to store it. The expiry time is set not because the token is not associated with an object (it actually is, as we'll see below) but because the server has no way of invalidating a token (it would have to keep track of invalidated tokens, since tokens aren't stored anywhere and any token with a good signature is valid). To reduce the damage if a token gets stolen, there is an expiry time.</p>
<p>A signed JWT contains all the necessary information inside of it.</p>
<p>The decoded token (created with <code>kubectl create token test-sa</code> where test-sa is service account name) looks like this:</p>
<pre><code>{
"aud": [
"https://kubernetes.default.svc.cluster.local"
],
"exp": 1666712616,
"iat": 1666709016,
"iss": "https://kubernetes.default.svc.cluster.local",
"kubernetes.io": {
"namespace": "default",
"serviceaccount": {
"name": "test-sa",
"uid": "dccf5808-b29b-49da-84bd-9b57f4efdc0b"
}
},
"nbf": 1666709016,
"sub": "system:serviceaccount:default:test-sa"
}
</code></pre>
<p>Contrary to v1d3rm3's answer, <strong>this token IS associated with a service account automatically</strong>, as the Kubernetes docs link confirms and as we can also see from the token content above.</p>
<p>Suppose I have a secret I want to bind my token to (for example <code>kubectl create token test-sa --bound-object-kind Secret --bound-object-name my-secret</code>, where test-sa is the service account name and my-secret is the secret I'm binding the token to), the decoded token will look like this:</p>
<pre><code>{
"aud": [
"https://kubernetes.default.svc.cluster.local"
],
"exp": 1666712848,
"iat": 1666709248,
"iss": "https://kubernetes.default.svc.cluster.local",
"kubernetes.io": {
"namespace": "default",
"secret": {
"name": "my-secret",
"uid": "2a44872f-1c1c-4f18-8214-884db5f351f2"
},
"serviceaccount": {
"name": "test-sa",
"uid": "dccf5808-b29b-49da-84bd-9b57f4efdc0b"
}
},
"nbf": 1666709248,
"sub": "system:serviceaccount:default:test-sa"
}
</code></pre>
<p>Notice that the binding happens inside the token, under the <strong>kubernetes.io</strong> key, and if you describe my-secret you will still not see the token. So the --bound-... flags weren't visibly doing anything to the Secret object because the binding happens inside the token itself...</p>
<p>Instead of decoding JWT tokens, we can also see the details in the TokenRequest object with:</p>
<pre><code>kubectl create token test-sa -o yaml
</code></pre>
|
<p>I have Elasticsearch Data pods that are currently running on an AKS and are connected to Persistent Volumes that is using a Premium SSD Managed Disk Storage Class and I want to downgrade it to Standard SSD Managed Disk without losing the data I have on the currently used Persistent Volume.
I've created a new Storage Class that is defined with Standard SSD Managed Disk, but if I create a new PV from that it obviously doesn't keep the old data and I need to copy it somehow, so I was wondering what would be best practice switching PV's Storage Class.</p>
| <p>Unfortunately, once a PVC is created and a PV is provisioned for it, the only thing you can change without creating a new one is the volume's size.</p>
<p>The only straightforward way I could think of without leveraging CSI snapshots/clones, which you might not have access to (depends on how you created PVCs/PVs AFAIK), would be to create a new PVC and mount both volumes on a Deployment whose Pod has root access and the <code>rsync</code> command.</p>
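<p>A minimal sketch of such a helper Pod follows; the PVC names <code>old-premium-pvc</code> and <code>new-standard-pvc</code> are assumptions, and the image choice is only an example:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Pod
metadata:
  name: volume-migration
spec:
  restartPolicy: Never
  containers:
  - name: migrate
    image: alpine:3.18
    command: ["sleep", "infinity"]   # keep the Pod alive so you can exec into it
    volumeMounts:
    - name: old-volume
      mountPath: /old
    - name: new-volume
      mountPath: /new
  volumes:
  - name: old-volume
    persistentVolumeClaim:
      claimName: old-premium-pvc     # assumed name of the existing Premium SSD PVC
  - name: new-volume
    persistentVolumeClaim:
      claimName: new-standard-pvc    # assumed name of the new Standard SSD PVC
</code></pre>
<p>(With an Alpine image you would first install rsync inside the Pod, e.g. <code>apk add rsync</code>.)</p>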
<p>Running <code>rsync -a /old/volume/mount/path /new/volume/mount/path</code> on such a Pod should get you what you want.</p>
<p>However, you should make sure that you do so <strong>BEFORE</strong> deleting PVCs or any other resource using your PVs. By default, most of the default storage classes create volumes with reclaim policies that immediately delete the PV as soon as all resources using it are gone, so there's a small risk of data loss.</p>
|
<p>The following chain describes how Pods that define an API are reached from the outside.</p>
<pre><code>Client -> Route53 (.example.com)
-> LoadBalancer -> Nginx
-> Service -> Pod
</code></pre>
<p>Some pods, in addition to defining an API, communicate and use the API of others in the same cluster k8s. To allow communication between pods I can do it using the internal dns: eg. <code>api1.ns.svc.cluster.local</code> or using the Route53 <code>api1.example.com</code> domain.</p>
<p>The first case is more efficient but on the other hand I need to keep a list of the necessary services and namespaces for each pod.</p>
<p>The second case is easier to manage. I know that each API responds to <code>* .example.com</code> so I only need to know the subdomain to call. This approach is extremely inefficient:</p>
<pre><code>Pod1 -> Route53 (api2.example.com)
-> LoadBalancer -> Nginx
-> Service -> Pod2
</code></pre>
<p>In this scenario I would like to know if there are known solutions for which a pod to communicate with another can use the same domain managed by Route53 but without leaving the cluster and maintaining internal traffic.</p>
<p>I know I can use a core dns rewrite but in that case I should still keep an updated list, also Route53 holds subdomains pointing to services outside the cluster, e.g. <code>db.example.com</code></p>
<p>So the idea is an autodiscovery of the ingress and keep internal traffic if possible:</p>
<pre><code>Pod1 -> k8sdns with api2.example.com ingress
-> Nginx -> Service
-> Pod2
</code></pre>
<p>Or</p>
<pre><code>Pod1 -> k8sdns without db.example.com ingress
-> Route53 -> LoadBalancer
-> DB
</code></pre>
<p>Thanks</p>
| <p>Yes, you can do it using the CoreDNS <code>rewrite plugin</code>. This is the <a href="https://coredns.io/plugins/rewrite/" rel="nofollow noreferrer">official documentation</a> and I'll give you an example how to implement it.</p>
<ol>
<li>Edit the CoreDNS ConfigMap</li>
</ol>
<pre class="lang-bash prettyprint-override"><code>kubectl edit cm -n kube-system coredns
</code></pre>
<ol start="2">
<li>Add this line inside the config:</li>
</ol>
<pre class="lang-yaml prettyprint-override"><code>rewrite name regex (.*)\.yourdomain\.com {1}.default.svc.cluster.local
</code></pre>
<p>Your <code>cm</code> is going to look like:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: ConfigMap
metadata:
name: coredns
namespace: kube-system
data:
Corefile: |
.:53 {
errors
health
kubernetes cluster.local in-addr.arpa ip6.arpa {
pods insecure
upstream
fallthrough in-addr.arpa ip6.arpa
}
rewrite name regex (.*)\.yourdomain\.com {1}.default.svc.cluster.local
prometheus :9153
proxy . /etc/resolv.conf
cache 30
loop
reload
loadbalance
}
</code></pre>
<ol start="3">
<li>Save the edit and delete your CoreDNS pods</li>
</ol>
<pre class="lang-bash prettyprint-override"><code>kubectl delete pod -n kube-system --selector k8s-app=kube-dns
</code></pre>
<ol start="4">
<li>And test it inside a dummy pod or query directly to the CoreDNS</li>
</ol>
<pre class="lang-bash prettyprint-override"><code># dig app1.yourdomain.com
; <<>> DiG 9.16.33-Debian <<>> app1.yourdomain.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 51020
;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; WARNING: recursion requested but not available
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
; COOKIE: c06f814fbf04a827 (echoed)
;; QUESTION SECTION:
;app1.yourdomain.com. IN A
;; ANSWER SECTION:
app1.default.svc.cluster.local. 30 IN A 10.110.113.195
;; Query time: 5 msec
;; SERVER: 10.96.0.10#53(10.96.0.10)
;; WHEN: Wed Oct 26 04:49:47 UTC 2022
;; MSG SIZE rcvd: 107
</code></pre>
|
<p>I am getting the error</p>
<pre><code>dry-run failed, reason: Invalid, error: Deployment.apps "server" is invalid: spec.template.spec.containers[0].env[0].valueFrom: Invalid value:
"": may not be specified when `value` is not empty
</code></pre>
<p>when trying to override a value in <em>flux Kustomization</em> using <code>patchesStrategicMerge</code> like:</p>
<pre><code> env:
- name: DATABASE_URL
valueFrom:
secretKeyRef:
name: server-db-url
key: dburl
</code></pre>
<p>The secret <code>server-db-url</code> exists and contains the key <code>dburl</code></p>
<p>I am not sure if this has something to do with <a href="https://github.com/kubernetes/kubernetes/issues/46861" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/46861</a></p>
| <p>As a workaround (mentioned in the GitHub issue you linked), you can add <code>value: null</code> to the configuration:</p>
<pre><code> env:
- name: DATABASE_URL
value: null
valueFrom:
secretKeyRef:
name: server-db-url
key: dburl
</code></pre>
|
<p>I have my app deployed to Kubernetes and it's producing some logs. I can see the logs by running <code>kubectl logs -f <pod-id> -n staging</code>, but I can't find where the logs are physically located on the pod. The <code>/var/log/</code> folder is empty, and I can't find the logs anywhere else on the pod either.</p>
<p>Why is this happening, and where should the logs be?</p>
| <p>As @Achraf Bentabib said,</p>
<p>Kubernetes creates a directory structure to help you find logs based on Pods, so you can find the container logs for each Pod running on a node at</p>
<pre><code>/var/log/pods/<namespace>_<pod_name>_<pod_id>/<container_name>/
</code></pre>
<ol>
<li><p>Identify the node on which the Pod is running:</p>
<p><code>kubectl get pod pod-name -o wide</code></p>
</li>
<li><p>SSH into that node; you can check which logging driver is being used by the node with:</p>
</li>
</ol>
<p>If you are using docker then:</p>
<pre><code>docker info | grep -i logging
</code></pre>
<p>If you are using kubernetes:</p>
<pre><code>kubectl ssh node NODE_NAME
</code></pre>
<ol start="3">
<li><p>If the logging driver writes to a file, you can check the current output for a specific Pod by looking up the container id of that Pod. To do so, on a control-plane node:</p>
<p><code>kubectl get pod pod-name -o jsonpath='{.status.containerStatuses[0].containerID}'</code></p>
</li>
</ol>
<p>Example:</p>
<pre><code>/var/log/containers/<pod-name>_<namespace>_<container-name>-<container-id>.log -> /var/log/pods/<some-uuid>/<container-name>_0.log
</code></pre>
|
<p>I get the following message when I run the <code>skaffold dev</code> command:</p>
<blockquote>
<p>Build Failed. Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Check if docker is running.</p>
</blockquote>
<p>Tools versions:</p>
<ol>
<li>MacOS Desktop Docker: 4.13.0 (89412)</li>
<li>Kubernetes: v1.25.2</li>
<li>Skaffold: v2.0.0</li>
</ol>
<p>Docker runs correctly in fact I can create resources on the cluster and create containers with the docker-cli commands. I successfully launch both docker info and docker version.</p>
<p>The command <code>/Applications/Docker.app/Contents/MacOS/com.docker.diagnose check</code></p>
<p>reports</p>
<blockquote>
<p>"No fatal errors detected."</p>
</blockquote>
<p>(all tests pass).</p>
<p>I also tried setting the <code>DOCKER_HOST</code> variable:
<code>DOCKER_HOST = /Users/<my folder>/.docker/run/docker.sock skaffold dev</code></p>
<p>Result:</p>
<pre><code>invalid skaffold config: error getting docker client: unable to parse docker host `/Users/<my folder>/.docker/run/docker.sock`
</code></pre>
<p>My Skaffold.yaml file</p>
<pre><code>apiVersion: skaffold/v3
kind: Config
metadata:
name: test
build:
local:
push: false
artifacts:
- image: <myimage>
context: <folder>
docker:
dockerfile: Dockerfile
manifests:
rawYaml:
- infra/k8s/deployment.yaml
</code></pre>
<p>How can I solve?</p>
| <p>The solution was to set the variable DOCKER_HOST before launching the <code>skaffold dev</code> command:</p>
<pre><code>DOCKER_HOST="unix:///Users/<you>/.docker/run/docker.sock" skaffold dev
</code></pre>
|
<p>If I understand correctly the conditions for Kubernetes to OOM kill a pod or container (<a href="https://komodor.com/learn/how-to-fix-oomkilled-exit-code-137/" rel="nofollow noreferrer">from komodor.com</a>):</p>
<blockquote>
<p>If a container uses more memory than its memory limit, it is terminated with an OOMKilled status. Similarly, if overall memory usage on all containers, or all pods on the node, exceeds the defined limit, one or more pods may be terminated.</p>
</blockquote>
<p>This means that if a container in the pod exceeds the total memory it will be killed (the container) but not the pod itself. Similarly, if there are multiple containers in a pod and the pod itself exceeds its memory limitation, which is the sum of memory limits of all the containers in that pod - the pod will be OOM killed. However, the latter only seems possibly if one of the containers exceeds its memory allowance. In this case - wouldn't the container be killed first?</p>
<p>I'm trying to understand the actual conditions in which a pod is OOM killed instead of a container.</p>
<p>I've also noticed that when there is one container in the pod and that container is exceeding its memory allowance repeatedly - the pod and container are killed <strong>intermittently</strong>. I observed this - <em>the container</em> would restart, which would be observable by watching the logs from the pod, and every second time - <em>the pod</em> is killed and restarted, incrementing its restart count.</p>
<p>If it helps to understand the behavior - the QOS class of the pod is <code>Burstable</code>.</p>
| <p>Pods aren't OOM killed at all. OOMKilled is a status ultimately caused by a kernel process (OOM Killer) that kills processes (containers are processes), which is then recognised by the kubelet which sets the status on the container. If the main container in a pod is killed then by default the pod will be restarted by the kubelet. A pod cannot be terminated, because a pod is a data structure rather than a process. Similarly, it cannot have a memory (or CPU) limit itself, rather it is limited by the sum of its component parts.</p>
<p>The article you reference uses imprecise language and I think this is causing some confusion. There is a <a href="https://medium.com/tailwinds-navigator/kubernetes-tip-how-does-oomkilled-work-ba71b135993b" rel="nofollow noreferrer">better, shorter, article on medium</a> that covers this more accurately, and <a href="https://mihai-albert.com/2022/02/13/out-of-memory-oom-in-kubernetes-part-4-pod-evictions-oom-scenarios-and-flows-leading-to-them/#oom-scenario-3-node-available-memory-drops-below-the-eviction-hard-flag-value" rel="nofollow noreferrer">a longer and much more in depth article here</a>.</p>
|
<p>I wan to create service account with token in Kubernetes. I tried this:</p>
<p>Full log:</p>
<pre><code>root@vmi1026661:~# ^C
root@vmi1026661:~# kubectl create sa cicd
serviceaccount/cicd created
root@vmi1026661:~# kubectl get sa,secret
NAME SECRETS AGE
serviceaccount/cicd 0 5s
serviceaccount/default 0 16d
NAME TYPE DATA AGE
secret/repo-docker-registry-secret Opaque 3 16d
secret/sh.helm.release.v1.repo.v1 helm.sh/release.v1 1 16d
root@vmi1026661:~# cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
name: cicd
spec:
serviceAccount: cicd
containers:
- image: nginx
name: cicd
EOF
pod/cicd created
root@vmi1026661:~# kubectl exec cicd cat /run/secrets/kubernetes.io/serviceaccount/token && echo
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
error: unable to upgrade connection: container not found ("cicd")
root@vmi1026661:~# kubectl exec cicd cat /run/secrets/kubernetes.io/serviceaccount/token && echo
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
error: unable to upgrade connection: container not found ("cicd")
root@vmi1026661:~# kubectl create token cicd
eyJhbGciOiJSUzI1NiIsImtpZCI6IlUyQzNBcmx3RFhBeGdWRjlibEtfZkRPMC12Z0RpU1BHYjFLaWN3akViVVUifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jY WwiXSwiZXhwIjoxNjY2NzkyNTIxLCJpYXQiOjE2NjY3ODg5MjEsImlzcyI6Imh0dHBzOi8va3ViZXJuZ XRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiO iJkZWZhdWx0Iiwic2VydmljZWFjY291bnQiOnsibmFtZSI6ImNpY2QiLCJ1aWQiOiI3ODhmNzUwMS0xZ WFjLTQ0YzktOWQ3Ni03ZjVlN2FlM2Q4NzIifX0sIm5iZiI6MTY2Njc4ODkyMSwic3ViIjoic3lzdGVtO nNlcnZpY2VhY2NvdW50OmRlZmF1bHQ6Y2ljZCJ9.iBkpVDQ_w_UZmbr3PnpouwtQlLz9FzJs_cJ7IYbY WUphBM4NO4o8gPgBfnHGPG3uFVbEDbgdY2TsuxHKss0FosiCdjYBiLn8dp_SQd1Rdk0TMYGCLAOWRgZE XjpmXMLBcHtC5TexJY-bIpvw7Ni4Xls5XPbGpfqL_fcPuUQR3Gurkmk7gPSly77jRKSaF-kzj0oq78MPtwHu92g5hnIZs7ZLaMLzo9EvDRT092RVZXiVF0FkmflnUPNiyKxainrfvWTiTAlYSZreX6JfGjimklTAKCue4w9CqWZGNyGGumqH02ucMQ
xjAiHS6J_Goxyaho8QEvFsEhkVqNFndzbw
root@vmi1026661:~# kubectl create token cicd --duration=999999h
eyJhbGciOiJSUzI1NiIsImtpZCI6IlUyQzNBcmx3RFhBeGdWRjlibEtfZkRPMC12Z0RpU1BHYjFLaWN3akViVVUifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jY WwiXSwiZXhwIjo1MjY2Nzg1MzI2LCJpYXQiOjE2NjY3ODg5MjYsImlzcyI6Imh0dHBzOi8va3ViZXJuZ XRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiO iJkZWZhdWx0Iiwic2VydmljZWFjY291bnQiOnsibmFtZSI6ImNpY2QiLCJ1aWQiOiI3ODhmNzUwMS0xZ WFjLTQ0YzktOWQ3Ni03ZjVlN2FlM2Q4NzIifX0sIm5iZiI6MTY2Njc4ODkyNiwic3ViIjoic3lzdGVtO nNlcnZpY2VhY2NvdW50OmRlZmF1bHQ6Y2ljZCJ9.N1V7i0AgW3DihJDWcGbM0kDvFH_nWodPlqZjLSHM KvaRAfmujOxSk084mrmjkZwIzWGanA6pkTQHiBIAGh8UhR7ijo4J6S58I-5Dj4gu2UWVOpaBzDBrKqBD SapFw9PjKpZYCHjsXTCzx6Df8q-bAEk_lpc0CsfpbXQl2jpJm3TTtQp1GKuIc53k5VKz9ON8MXcHY8lEfNs78ew8GiaoX6M4_5LmjSNVMHtyRy-Z_oIH9yK8LcHLxh0wqMS7RyW9UKN_9-qH1h01NwrFFOQWpbstFVuQKAnI-RyNEZDc9FZMNwYd_n
MwaKv54oNLx4TniOSOWxS7ZcEyP5b7U8mgBw
root@vmi1026661:~# cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Secret
type: kubernetes.io/service-account-token
metadata:
name: cicd
annotations:
kubernetes.io/service-account.name: "cicd"
EOF
secret/cicd created
root@vmi1026661:~# cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: ClusterRoleBind
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: cicd
namespace: default
EOF
clusterrolebinding.rbac.authorization.k8s.io/ClusterRoleBind created
root@vmi1026661:~# kubectl get sa,secret
NAME SECRETS AGE
serviceaccount/cicd 0 60s
serviceaccount/default 0 16d
NAME TYPE DATA AGE
secret/cicd kubernetes.io/service-account-token 3 12s
secret/repo-docker-registry-secret Opaque 3 16d
secret/sh.helm.release.v1.repo.v1 helm.sh/release.v1 1 16d
root@vmi1026661:~# kubectl describe secret cicd
Name: cicd
Namespace: default
Labels: <none>
Annotations: kubernetes.io/service-account.name: cicd
kubernetes.io/service-account.uid: 788f7501-1eac-44c9-9d76-7f5e7ae3d872
Type: kubernetes.io/service-account-token
Data
====
ca.crt: 1099 bytes
namespace: 7 bytes
token: eyJhbGciOiJSUzI1NiIsImtpZCI6IlUyQzNBcmx3RFhBeGdWRjlibEtfZkRPMC12Z0RpU1BHYjFLaWN3akViVVUifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZ XRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZ XJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6ImNpY2QiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2Nvd W50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiY2ljZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291b nQvc2VydmljZS1hY2NvdW50LnVpZCI6Ijc4OGY3NTAxLTFlYWMtNDRjOS05ZDc2LTdmNWU3YWUzZDg3M iIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDpkZWZhdWx0OmNpY2QifQ.Uqpr96YyYgdCHQ-GLP lDMYgF_kzO7LV5B92voDjIPlXa_IQxAL9BdQyFAQmSRS71tLxbm9dvQt8h6mCsfPE_-ixgcpStuNcPtw GLAvVqrALVW5Qb9e2o1oraMq2w9s1mNSF-J4UaaKvaWJY_2X7pYgSdiiWp7AZg6ygMsJEjVWg2-dLroM-lp1VDMZB_lJPjZ90-lkbsnxh7f_zUeI8GqSBXcomootRmDOZyCywFAeBeWqkLTb149VNPJpYege4nH7A1ASWg-_rCfxvrq_92V2vGFBSvQ
T6-uzl_pOLZ452rZmCsd5fkOY17sbXXCOcesnQEQdRlw4-GENDcv7IA
root@vmi1026661:~# kubectl describe sa cicd
Name: cicd
Namespace: default
Labels: <none>
Annotations: <none>
Image pull secrets: <none>
Mountable secrets: <none>
Tokens: cicd
Events: <none>
root@vmi1026661:~# kubectl get sa cicd -oyaml
apiVersion: v1
kind: ServiceAccount
metadata:
creationTimestamp: "2022-10-26T12:54:45Z"
name: cicd
namespace: default
resourceVersion: "2206462"
uid: 788f7501-1eac-44c9-9d76-7f5e7ae3d872
root@vmi1026661:~# kubectl get sa,secret
NAME SECRETS AGE
serviceaccount/cicd 0 82s
serviceaccount/default 0 16d
NAME TYPE DATA AGE
secret/cicd kubernetes.io/service-account-token 3 34s
secret/repo-docker-registry-secret Opaque 3 16d
secret/sh.helm.release.v1.repo.v1 helm.sh/release.v1 1 16d
root@vmi1026661:~# ^C
root@vmi1026661:~# kubectl describe secret cicd
Name: cicd
Namespace: default
Labels: <none>
Annotations: kubernetes.io/service-account.name: cicd
kubernetes.io/service-account.uid: 788f7501-1eac-44c9-9d76-7f5e7ae3d872
Type: kubernetes.io/service-account-token
Data
====
ca.crt: 1099 bytes
namespace: 7 bytes
token: eyJhbGciOiJSUzI1NiIsImtpZCI6IlUyQzNBcmx3RFhBeGdWRjlibEtfZkRPMC12Z0RpU1BHYjFLaWN3akViVVUifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW5
0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6ImNpY2QiLCJrdWJlc
m5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiY2ljZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6Ijc4OG
Y3NTAxLTFlYWMtNDRjOS05ZDc2LTdmNWU3YWUzZDg3MiIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDpkZWZhdWx0OmNpY2QifQ.Uqpr96YyYgdCHQ-GLPlDMYgF_kzO7LV5-02voDjIP
lXa_IQxAL9BdQyFAQmSRS71tLxbm9dvQt8h6mCsfPE_-ixgcpStuNcPtwGLAvVqrALVW5Qb9e2o1oraMq2w9s1mNSF-J4UaaKvaWJY_2X7pYgSdiiWp7AZg6ygMsJEjVWg2-dLroM-lp1VDMZ
B_lJPjZ9DtBblkbsnxh7f_zUeI8GqSBXcomootRmDOZyCywFAeBeWqkLTb149VNPJpYege4nH7A1ASWg-_rCfxvrq_92V2vGFBSvQT6-uzl_pOLZ452rZmCsd5fkOY17sbXXCOcesnQEQdRlw4-GENDcv7IA
root@vmi1026661:~#
root@vmi1026661:~#
</code></pre>
<p>As you can see I get error:</p>
<pre><code>root@vmi1026661:~# kubectl exec cicd cat /run/secrets/kubernetes.io/serviceaccount/token && echo
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
error: unable to upgrade connection: container not found ("cicd")
root@vmi1026661:~# kubectl exec cicd cat /run/secrets/kubernetes.io/serviceaccount/token && echo
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
error: unable to upgrade connection: container not found ("cicd")
root@vmi1026661:~# kubectl create token cicd
</code></pre>
<p>Do you kno0w what should be the appropriate command to complete this step?</p>
<p>EDIT: here is the result from troubleshooting</p>
<pre><code>root@vmi1026661:~# kubectl get pods
NAME READY STATUS RESTARTS AGE
cicd 1/1 Running 0 67m
repo-docker-registry-78d6c5bdb5-r68jb 0/1 Pending 0 16d
root@vmi1026661:~# kubectl describe pod cicd
Name: cicd
Namespace: default
Priority: 0
Service Account: cicd
Node: vmi1026660/38.242.240.39
Start Time: Wed, 26 Oct 2022 14:54:57 +0200
Labels: <none>
Annotations: <none>
Status: Running
IP: 10.244.1.13
IPs:
IP: 10.244.1.13
Containers:
cicd:
Container ID: containerd://ab44fc463f97316ba807efce0c82e276cf06326e1d03846c1f6186484ff9fcbb
Image: nginx
Image ID: docker.io/library/nginx@sha256:47a8d86548c232e44625d813b45fd92e81d07c639092cd1f9a49d98e1fb5f737
Port: <none>
Host Port: <none>
State: Running
Started: Wed, 26 Oct 2022 14:55:22 +0200
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xggpn (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
kube-api-access-xggpn:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
root@vmi1026661:~#
</code></pre>
| <p>This is how you can complete your desired commands:</p>
<pre><code>kubectl exec -it <POD_NAME> -c <CONTAINER_NAME> -- /bin/bash
cat /run/secrets/kubernetes.io/serviceaccount/token && echo
</code></pre>
<p>NOTE: if you only have 1 container in your pod, you can omit <code>-c <CONTAINER_NAME></code></p>
<p><code>-it</code> is short for <code>-i</code> (keep stdin open) and <code>-t</code> (allocate a tty), which gives you an interactive session</p>
<p><code>/bin/bash</code> is your [COMMAND] here; you can pass further arguments after it</p>
<p><code>--</code> separates kubectl's own flags from the command (and its arguments) that you want to run inside the container</p>
<p>After the first command is run, you're inside a bash shell and can run whatever other commands you need inside the container.</p>
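<p>If you only need the token and not an interactive shell, you can also run the command in one shot:</p>
<pre><code>kubectl exec <POD_NAME> -c <CONTAINER_NAME> -- cat /run/secrets/kubernetes.io/serviceaccount/token
</code></pre>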
|
<p>I have an application designed to run as a K8s application, and it imports some dependencies (that I don't own) that run <code>exec.Cmd</code>s. This is fine, except I want to capture those logs. For some reason, when I do:</p>
<pre><code>r := bufio.NewReader(os.Stdout)
...
line, err := r.ReadString('\n')
</code></pre>
<p>An error is thrown saying that <code>/dev/stdout</code> is a <code>bad file descriptor</code>. How can this be? Isn't that the standard local destination for console output?</p>
<p><code>kubectl logs</code> seems to be able to capture the output, and more specifically, our central log forwarder is able to capture it as well. But trying to capture logs from the kube API server inside the container that's actually generating those logs seems kinda silly... Is there a better way to do this?</p>
| <p>Generally, <code>stdin</code> is a read-only stream for retrieving input written to your program, while <code>stdout</code> is a write-only stream for sending output written by your program. <em>In other words, nobody can read from /dev/stdout, except Chuck Norris.</em></p>
<p>By default, <code>stdout</code> is "pointing" to your terminal. But it is possible to redirect <code>stdout</code> from your terminal to a file. This redirection is set up before your program is started.</p>
<p>What usually happens, is the following: The container runtime redirects <code>stdout</code> of the process of your container to a file on the node where your container is running (e.g., <code>/var/log/containers/<container-name>-<container-id>.log</code>). When you request logs with <code>kubectl logs</code>, kubectl connects to kube-apiserver, which connects to the kubelet on the node running your container and asks it to send back the content from the log file.</p>
<p>Also take a look at <a href="https://kubernetes.io/docs/concepts/cluster-administration/logging/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/cluster-administration/logging/</a> which explains the various logging design approaches.</p>
<hr />
<p>A solution, which from a security and portability perspective you would definitely NOT implement, is to add a <code>hostPath</code> mount in your container mounting the <code>/var/log/containers</code> directory of your node and to access the container log directly.</p>
<hr />
<p>A proper solution might be to change the command of your image and to write output to <code>stdout</code> of your container and also to a local file within your container. This can be achieved using the <code>tee</code> command. Your application can then read back the log from this file. But keep in mind, that without proper rotation, the log file will grow until your container is terminated.</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: log-to-stdout-and-file
spec:
containers:
- image: bash:latest
name: log-to-stdout-and-file
command:
- bash
- -c
- '(while true; do date; sleep 10; done) | tee /tmp/test.log'
</code></pre>
<hr />
<p>A slightly more complex solution would be to replace the log file in the container with a named pipe created with <code>mkfifo</code>. This avoids the growing file size problem (as long as your application is continuously reading the log from the named pipe file).</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: log-to-stdout-and-file
spec:
# the init container creates the fifo in an empty dir mount
initContainers:
- image: bash:latest
name: create-fifo
command:
- bash
- -c
- mkfifo /var/log/myapp/log
volumeMounts:
- name: ed
mountPath: /var/log/myapp
# the actual app uses tee to write the log to stdout and to the fifo
containers:
- image: bash:latest
name: log-to-stdout-and-fifo
command:
- bash
- -c
- '(while true; do date; sleep 10; done) | tee /var/log/myapp/log'
volumeMounts:
- name: ed
mountPath: /var/log/myapp
# this sidecar container is only for testing purposes, it reads the
# content written to the fifo (this is usually done by the app itself)
#- image: bash:latest
# name: log-reader
# command:
# - bash
# - -c
# - cat /var/log/myapp/log
# volumeMounts:
# - name: ed
# mountPath: /var/log/myapp
volumes:
- name: ed
emptyDir: {}
</code></pre>
|
<p>For example I have StatefulSet with custom labels/annotations applied on it.</p>
<p>Everyone mentions these two metrics should be used, because <code>kube-state-metrics</code> should generate labels/annotations as well.</p>
<pre><code>kube_statefulset_annotations
kube_statefulset_labels
</code></pre>
<p>The thing is, I can see only default ones (<code>job</code>,<code>instance</code>, <code>namespace</code>,...) but not additionally added labels/annotations.</p>
<p>Example of the manifest I am testing with:</p>
<pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
annotations:
label_network: "111"
labels:
app: testing-labels
label_STATEFULSET_LABEL: "111"
label_network: "111"
name: testing-labels
namespace: sre-test
spec:
selector:
matchLabels:
app: testing-labels
serviceName: testing-labels-headless
template:
metadata:
labels:
app: testing-labels
label_network: "111"
</code></pre>
<p>I've added so many different labels/annotations but the <code>kube_statefulset_labels{statefulset='testing-labels'}</code> returns:</p>
<blockquote>
<p>kube_statefulset_labels{container="kube-rbac-proxy-main", instance="10.2.23.229:8443", job="kube-state-metrics", namespace="sre-test", prometheus="aws-monitoring/k8s", prometheus_replica="prometheus-k8s-1", statefulset="testing-labels"}</p>
</blockquote>
<p>...which certainly doesn't contain any extra label. Any idea would be helpful? 🙏🏼</p>
<h3>Versions:</h3>
<ul>
<li>kube-state-metrics:v2.5.0</li>
<li>kube-rbac-proxy:v0.12.0</li>
</ul>
| <p>In order to get custom labels within <code>kube_statefulset_labels</code> we need to add</p>
<p><code>--metric-labels-allowlist</code> as a flag in kube-state-metrics. (<a href="https://github.com/kubernetes/kube-state-metrics/blob/master/docs/cli-arguments.md#available-options" rel="nofollow noreferrer">docs</a>)</p>
<p>In this particular example it would be:</p>
<pre><code>--metric-labels-allowlist=statefulsets=[label_network]
</code></pre>
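<p>The flag goes on the kube-state-metrics container itself; a sketch of the relevant part of its Deployment (adapt it to however you deploy kube-state-metrics):</p>
<pre class="lang-yaml prettyprint-override"><code>containers:
  - name: kube-state-metrics
    args:
      - --metric-labels-allowlist=statefulsets=[label_network,label_STATEFULSET_LABEL]
</code></pre>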
|
<p>Basically, my kubeconfig file has:</p>
<pre><code>apiVersion: v1
clusters:
- cluster:
server: <OAM ip address> this is what I want
(...)
</code></pre>
<p>I want to get the server address.
Previously searching , I've found this solution:</p>
<pre><code>config, err := clientcmd.BuildConfigFromFlags("", *kubeconfig)
if err != nil {
panic(err.Error())
}
// creates the clientset
clientset, err := kubernetes.NewForConfig(config)
if err != nil {
panic(err.Error())
}
nodes, err := clientset.CoreV1().Nodes().List(metav1.ListOptions{})
if err != nil {
panic(err)
}
nodeip := []corev1.NodeAddress{}
for i := 0; i < len(nodes.Items); i++ {
nodeip = nodes.Items[i].Status.Addresses
fmt.Println(nodeip[0].Address)
}
fmt.Println(nodes.Items[0].Status.Addresses)
</code></pre>
<p>But it gives me the Internal IP, not the OAM server IP (which is inside the Kubernetes config file)</p>
| <p>If you want the server address from the <code>kubeconfig</code> file, just read it from your <code>config</code> variable:</p>
<pre><code>package main
import (
"flag"
"fmt"
"path/filepath"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/tools/clientcmd"
"k8s.io/client-go/util/homedir"
)
func main() {
var kubeconfig *string
if home := homedir.HomeDir(); home != "" {
kubeconfig = flag.String("kubeconfig", filepath.Join(home, ".kube", "config"), "(optional) absolute path to the kubeconfig file")
} else {
kubeconfig = flag.String("kubeconfig", "", "absolute path to the kubeconfig file")
}
flag.Parse()
config, err := clientcmd.BuildConfigFromFlags("", *kubeconfig)
if err != nil {
panic(err)
}
fmt.Printf("server: %s\n", config.Host)
}
</code></pre>
<hr />
<p>If you're curious what other fields are available on the <code>rest.Config</code> object, a quick solution is to print out the <code>config</code> variable using the <code>%+v</code> format specifier:</p>
<pre><code>fmt.Printf("%+v\n", config)
</code></pre>
<p>For more details, look at the <a href="https://pkg.go.dev/k8s.io/client-go/rest#Config" rel="nofollow noreferrer">reference documentation</a>.</p>
|
<p>I went through the steps listed here: <a href="https://kubernetes.io/docs/setup/production-environment/tools/kops/" rel="nofollow noreferrer">https://kubernetes.io/docs/setup/production-environment/tools/kops/</a></p>
<p>After moving the kops file to /usr/local/bin/ and renaming to kops, I tried to confirm if it was in fact installed and executable by trying 'kops --help' and 'kops --version'/'kops version' and neither command worked. Any idea what the issue might be?</p>
<p>Edit: Here's what I did step by step</p>
<ol>
<li><p>curl -LO https://github.com/kubernetes/kops/releases/download/$(curl -s <a href="https://api.github.com/repos/kubernetes/kops/releases/latest" rel="nofollow noreferrer">https://api.github.com/repos/kubernetes/kops/releases/latest</a> | grep tag_name | cut -d '"' -f 4)/kops-darwin-amd64</p>
</li>
<li><p>sudo chmod +x kops-darwin-amd64</p>
</li>
<li><p>sudo mv kops-darwin-amd64 /usr/local/bin/kops</p>
</li>
</ol>
<p>It's a t2.micro Ubuntu 20.04 EC2 Instance.</p>
<p>Tried to confirm if kops was properly installed and executable by entering 'kops --help' and 'kops --version' and also 'kops version' but they all return this error:</p>
<p><code>-bash: /usr/local/bin/kops: cannot execute binary file: Exec format error</code></p>
| <p>I think it's because you are using <code>kops-darwin-amd64</code>. That binary is for macOS. Since you are on an Ubuntu EC2 instance, you should be using <code>kops-linux-amd64</code> instead.</p>
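<p>For example, on your Ubuntu instance the same steps with the Linux binary would be:</p>
<pre class="lang-bash prettyprint-override"><code>curl -LO https://github.com/kubernetes/kops/releases/download/$(curl -s https://api.github.com/repos/kubernetes/kops/releases/latest | grep tag_name | cut -d '"' -f 4)/kops-linux-amd64
sudo chmod +x kops-linux-amd64
sudo mv kops-linux-amd64 /usr/local/bin/kops
kops version
</code></pre>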
|
<p>I get the below error in my helm upgrade stage. I did the following change <code>apiVersion: networking.k8s.io/v1beta1</code> to <code>apiVersion: networking.k8s.io/v1</code> Could someone kindly let me know the reason why I encounter this issue and the fix for the same. Any help is much appreciated</p>
<pre><code>Error: UPGRADE FAILED: current release manifest contains removed kubernetes api(s) for
this kubernetes version and it is therefore unable to build the kubernetes objects for
performing the diff. error from kubernetes: unable to recognize "": no matches for
kind "Ingress" in version "networking.k8s.io/v1beta1"
</code></pre>
<p><a href="https://i.stack.imgur.com/0iq1z.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/0iq1z.png" alt="enter image description here" /></a></p>
| <p>The reason why you encounter the issue is Helm attempts to create a diff patch between the current deployed release (which contains the Kubernetes APIs that are removed in your current Kubernetes version) against the chart you are passing with the updated/supported API versions. So when Kubernetes removes an API version, the Kubernetes Go client library can no longer parse the deprecated objects and Helm therefore fails when calling the library.</p>
<p>Helm has the official documentation on how to recover from that scenario:
<a href="https://helm.sh/docs/topics/kubernetes_apis/#updating-api-versions-of-a-release-manifest" rel="nofollow noreferrer">https://helm.sh/docs/topics/kubernetes_apis/#updating-api-versions-of-a-release-manifest</a></p>
|
<p>I'm looking to use the Kubernetes python client to delete a deployment, but then block and wait until all of the associated pods are deleted as well. A lot of the examples I'm finding recommend using the watch function something like follows.</p>
<pre><code>try:
# try to delete if exists
AppsV1Api(api_client).delete_namespaced_deployment(namespace="default", name="mypod")
except Exception:
# handle exception
# wait for all pods associated with deployment to be deleted.
for e in w.stream(
v1.list_namespaced_pod, namespace="default",
label_selector='mylabel=my-value",
timeout_seconds=300):
pod_name = e['object'].metadata.name
print("pod_name", pod_name)
if e['type'] == 'DELETED':
w.stop()
break
</code></pre>
<p>However, I see two problems with this.</p>
<ol>
<li>If the pod is already gone (or if some other process deletes all pods before execution reaches the watch stream), then the watch will find no events and the for loop will get stuck until the timeout expires. Watch does not seem to generate activity if there are no events.</li>
<li>Upon seeing events in the event stream for the pod activity, how do know all the pods got deleted? Seems fragile to count them.</li>
</ol>
<p>I'm basically looking to replace the <code>kubectl delete --wait</code> functionality with a python script.</p>
<p>Thanks for any insights into this.</p>
| <pre class="lang-py prettyprint-override"><code>import json
def delete_pod(pod_name):
return v1.delete_namespaced_pod(name=pod_name, namespace="default")
def delete_pod_if_exists(pod_name):
def run():
delete_pod(pod_name)
while True:
try:
run()
except ApiException as e:
has_deleted = json.loads(e.body)['code'] == 404
if has_deleted:
return
</code></pre>
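<p>If you also want to wait for all Pods behind the Deployment (rather than a single named Pod), a polling variant along the same lines could look like this; it is a sketch, and the label selector is whatever your Deployment's Pods actually carry:</p>
<pre class="lang-py prettyprint-override"><code>import time

def wait_for_pods_deleted(label_selector, namespace="default", timeout=300):
    # poll until no Pods matching the selector are left, or the timeout expires
    deadline = time.time() + timeout
    while time.time() < deadline:
        pods = v1.list_namespaced_pod(namespace=namespace, label_selector=label_selector)
        if not pods.items:
            return
        time.sleep(2)
    raise TimeoutError(f"pods matching {label_selector} still present after {timeout}s")
</code></pre>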
|
<p>I recently changed the docker daemon from my local Docker Desktop to local minikube following these <a href="https://minikube.sigs.k8s.io/docs/handbook/pushing/#1-pushing-directly-to-the-in-cluster-docker-daemon-docker-env" rel="nofollow noreferrer">instructions</a>.</p>
<pre><code>@FOR /f "tokens=*" %i IN ('minikube -p minikube docker-env --shell cmd') DO @%i
</code></pre>
<p>After running some tests, I want to change it back to my previous setup. I already tried to change some environment variable but it did not succeeded.</p>
<pre><code>SET DOCKER_HOST=tcp://127.0.0.1:2375
</code></pre>
| <p>Run the command below to get the list of Docker contexts:</p>
<pre><code>docker context ls
</code></pre>
<p>The output will be something like below:</p>
<pre><code>NAME DESCRIPTION DOCKER ENDPOINT KUBERNETES ENDPOINT ORCHESTRATOR
default * Current DOCKER_HOST based configuration unix:///var/run/docker.sock swarm
desktop-linux unix:///home/sc2302/.docker/desktop/docker.sock
rootless Rootless mode unix:///run/user/1001/docker.sock
</code></pre>
<p>Now, from the output, select the context you want to use. For example, to switch to the default context:</p>
<pre><code>docker context use default
</code></pre>
|
<p>I am using Kubectl run command to execute pod on a specific node.</p>
<pre><code>kubectl --namespace=ns run pod1 --image image1 \
--overrides='{"spec":{"nodeSelector":{"appgroup":"app1"}}}' \
--command python3 script.py
</code></pre>
<p>Sometimes pod is going on pending status and it cannot progress to running phase.</p>
<p>How can I add tolerations to the run command?</p>
<p>Note: I do not have a yaml file.</p>
<p>Any help is appreciated</p>
| <p>You can do this by adding a "tolerations" array to the override.</p>
<p>If you want to tolerate every taint, you can do this with an <code>"operator": "Exists"</code> condition (with no key), which matches all taints.</p>
<pre><code>kubectl --namespace=$your_ns run $your_pod --image $your_image \
--overrides='{"spec":{"nodeSelector":{"appgroup":"app1"},"tolerations":[{"operator":"exists"}]}}' \
--command python3 script.py \
[--dry-run=client -o yaml]
</code></pre>
<p>Also note: if you want to use a YAML file, the optional "--dry-run=client" and "-o yaml" flags shown above will generate a first copy for you; you might find it more comfortable to test your changes against a plain-text file.</p>
|
<p>For one of the test AKS clusters I am trying to update, it gives the following error.</p>
<p>Error:</p>
<pre><code>SkuNotAvailable
Message: The requested VM size for resource 'Following SKUs have failed for capacity restrictions: Standard_D4s_v4' is currently not available in location 'SouthAfricaNorth'. Please try another size or deploy to a different location or different size.
</code></pre>
<p>I have checked and found that quota is available in the subscription for the selected SKU and region.
Now the cluster and node pools have gone into a Failed state.</p>
| <p>As far as I know, the "SkuNotAvailable" error indicates either a capacity issue in the region or that your subscription doesn't have access to that specific size.</p>
<p>You can verify that by running the Azure CLI command below (adjust <code>--location</code> to your region):</p>
<pre><code>az vm list-skus --location centralus --size Standard_D --all --output table
</code></pre>
<p>If a SKU isn't available for your subscription in a location or zone that meets your business needs, submit a <a href="https://learn.microsoft.com/en-us/troubleshoot/azure/general/region-access-request-process" rel="nofollow noreferrer">SKU request</a> to Azure Support.</p>
<p>If the subscription doesn't have access, reach out to the Azure subscription and quota management support team through a support case so they can check and, if possible, enable the particular size for your subscription. If they cannot enable it for any reason, they will provide an appropriate explanation.
At that point, there is nothing that can be done on the AKS side.</p>
|
<p>I am trying to install Operator Lifecycle Manager (OLM) — a tool to help manage the Operators running on your cluster — from the <a href="https://operatorhub.io/operator/gitlab-runner-operator" rel="nofollow noreferrer">official documentation</a>, but I keep getting the error below. What could possibly be wrong?</p>
<p>This is the result from the command:</p>
<pre class="lang-bash prettyprint-override"><code>curl -sL https://github.com/operator-framework/operator-lifecycle-manager/releases/download/v0.22.0/install.sh | bash -s v0.22.0
</code></pre>
<pre class="lang-bash prettyprint-override"><code>/bin/bash: line 2: $'\r': command not found
/bin/bash: line 3: $'\r': command not found
/bin/bash: line 5: $'\r': command not found
: invalid option6: set: -
set: usage: set [-abefhkmnptuvxBCEHPT] [-o option-name] [--] [-] [arg ...]
/bin/bash: line 7: $'\r': command not found
/bin/bash: line 9: $'\r': command not found
/bin/bash: line 60: syntax error: unexpected end of file
</code></pre>
<p>I've tried removing the existing curl and downloaded and installed another version but the issue has still persisted. Most solutions online are for Linux users and they all lead to Windows path settings and files issues.</p>
<p>I haven't found one tackling installing a file using <code>curl</code>.</p>
<p>I'll gladly accept any help.</p>
| <p><strong>Using PowerShell <em>on Windows</em></strong>, <strong>you must explicitly ensure that the stdout lines emitted by <code>curl.exe</code> are separated with Unix-format LF-only newlines, <code>\n</code>, <em>when PowerShell passes them on to <code>bash</code></em></strong>, given that <code>bash</code>, like other Unix shells, doesn't recognize Windows-format CRLF newlines, <code>\r\n</code>:</p>
<p>The <strong>simplest way to <em>avoid</em> the problem is to call via <code>cmd /c</code></strong>:</p>
<pre><code>cmd /c 'curl -sL https://github.com/operator-framework/operator-lifecycle-manager/releases/download/v0.22.0/install.sh | bash -s v0.22.0'
</code></pre>
<p><code>cmd.exe</code>'s pipeline (<code>|</code>) (as well as its redirection operator, <code>></code>), unlike PowerShell's (see below), acts as a <em>raw byte conduit</em>, so it simply streams whatever bytes <code>curl.exe</code> outputs to the receiving <code>bash</code> call, unaltered.</p>
<p><strong>Fixing the problem on the <em>PowerShell</em> side</strong> requires more work, and is inherently <em>slower</em>:</p>
<pre><code>(
(
curl -sL https://github.com/operator-framework/operator-lifecycle-manager/releases/download/v0.22.0/install.sh
) -join "`n"
) + "`n" | bash -s v0.22.0
</code></pre>
<p><sup>Note: <code>`n</code> is a <a href="https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_Special_Characters" rel="nofollow noreferrer">PowerShell escape sequence</a> that produces a literal LF character, analogous to <code>\n</code> in certain <code>bash</code> contexts.</sup></p>
<p>Note:</p>
<ul>
<li><p>It is important to note that, <strong>as of PowerShell 7.2.x, passing <em>raw bytes</em> through the <a href="https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_Pipelines" rel="nofollow noreferrer">pipeline</a> is <em>not</em> supported</strong>: external-program stdout output is invariably <em>decoded into .NET strings</em> on <em>reading</em>, and <em>re-encoded</em> based on the <a href="https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_Preference_Variables#outputencoding" rel="nofollow noreferrer"><code>$OutputEncoding</code> preference variable</a> when <em>writing</em> to an(other) external program.</p>
<ul>
<li>See <a href="https://stackoverflow.com/a/59118502/45375">this answer</a> for more information, and <a href="https://github.com/PowerShell/PowerShell/issues/1908" rel="nofollow noreferrer">GitHub issue #1908</a> for potential <em>future</em> support for raw byte streaming between external programs and on redirection to a file.</li>
</ul>
</li>
<li><p>That is, <strong>PowerShell invariably interprets output from external programs, such as <code>curl.exe</code>, as <em>text</em>, and sends it <em>line by line</em> through the pipeline, <em>as .NET string objects</em></strong> (the PowerShell pipeline in general conducts (.NET) <em>objects</em>).</p>
<ul>
<li>Note that these lines (strings) do <em>not</em> have a trailing newline themselves; that is, the information about what specific newline sequences originally separated the lines is <em>lost</em> at that point (PowerShell itself recognizes CRLF and LF newlines interchangeably).</li>
</ul>
</li>
<li><p>However, <strong>if the receiving command is <em>also</em> an external program</strong>, <strong>PowerShell <em>adds a trailing platform-native newline</em> to each line, which on Windows is a CRLF newline</strong> - this is what caused the problem.</p>
</li>
<li><p>By collecting the lines in an array up front, using <code>(...)</code>, they can be sent as a <em>single, LF-separated multi-line string</em>, using the <a href="https://docs.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_Join" rel="nofollow noreferrer"><code>-join</code>operator</a>, as shown above.</p>
<ul>
<li><p>Note that PowerShell appends a trailing platform-native newline to this single, multi-line string too, but a stray <code>\r\n</code> at the <em>very end</em> of the input is in effect ignored by <code>bash</code>, assuming that the last true input line ends in <code>\n</code>, which is what the extra <code>+ "`n"</code> at the end of the expression ensures.</p>
</li>
<li><p>However, there are scenarios where this trailing CRLF newline <em>does</em> cause problems - see <a href="https://stackoverflow.com/a/48372333/45375">this answer</a> for an example and workarounds via the platform-native shell.</p>
</li>
</ul>
</li>
</ul>
|
<p>I'm running PowerDNS recursor inside my k8s cluster. My Python script runs in a different <code>pod</code> and does rDNS lookups against my <code>powerdns</code> <code>recursor</code> app. I have my HPA <code>Max replica</code> set to <code>8</code>. However, I do not think the load is the problem here. I'm unsure what to do to resolve the timeout error I'm getting below. I can increase the replicas to solve the problem temporarily, but then it happens again.</p>
<p><code>[ipmetadata][MainThread][source.py][144][WARNING]: dns_error code=12, message=Timeout while contacting DNS servers</code></p>
<p>It seems like my pods are rejecting incoming traffic, which is why it outputs dns_error code=12.</p>
<p>Here is part of my script that's running the rdns</p>
<pre><code> return_value = {
'rdns': None
}
try:
async for attempt in AsyncRetrying(stop=stop_after_attempt(3)):
with attempt:
try:
if ip:
result = await self._resolver.query(ip_address(ip).reverse_pointer, 'PTR')
return_value['rdns'] = result.name
return return_value
except DNSError as dns_error:
# 1 = DNS server returned answer with no data
# 4 = Domain name not found
# (seems to just be a failure of rdns lookup no sense in retrying)
# 11 = Could not contact DNS servers
if int(dns_error.args[0]) in [1, 4, 11]:
return return_value
LOG.warning('dns_error code=%d, message=%s, ip=%s', dns_error.args[0], dns_error.args[1], ip)
raise
except RetryError as retry_ex:
inner_exception = retry_ex.last_attempt.exception()
if isinstance(inner_exception, DNSError):
# 12 = Timeout while contacting DNS servers
LOG.error('dns_error code=%d, message=%s, ip=%s', inner_exception.args[0], inner_exception.args[1], ip)
else:
            LOG.exception('rdns lookup failed')
return return_value
</code></pre>
| <p>The error code 12 indicates that the PowerDNS recursor did not receive a response from any of the authoritative servers for the queried domain within the configured timeout. This could be due to network issues, firewall rules, rate limiting, or misconfiguration of the recursor or the authoritative servers.</p>
<h2>Possible solutions</h2>
<p>There are a few things you can try to resolve this timeout error:</p>
<ul>
<li>Check the network connectivity and latency between your python pod and your recursor pod, and between your recursor pod and the authoritative servers. You can use tools like <code>ping</code>, <code>traceroute</code>, or <code>dig</code> to diagnose network problems.</li>
<li>Check the firewall rules on your k8s cluster and on the authoritative servers. Make sure they allow UDP and TCP traffic on port 53 for DNS queries and responses. You can use tools like <code>iptables</code>, <code>nftables</code>, or <code>ufw</code> to manage firewall rules.</li>
<li>Check the rate limiting settings on your recursor and on the authoritative servers. Rate limiting is a mechanism to prevent denial-of-service attacks or abuse of DNS resources by limiting the number of queries per second from a given source. You can use tools like <code>pdnsutil</code> or <code>pdns_control</code> to configure rate limiting on PowerDNS recursor and authoritative servers.</li>
<li>Check the configuration of your recursor and the authoritative servers. Make sure they have the correct IP addresses, domain names, and DNSSEC settings. You can use tools like <code>pdnsutil</code> or <code>pdns_control</code> to manage PowerDNS configuration files and settings.</li>
</ul>
<h2>Examples</h2>
<p>Here are some examples of how to use the tools mentioned above to troubleshoot the timeout error:</p>
<ul>
<li>To ping the recursor pod from the python pod, you can use the following command:</li>
</ul>
<pre class="lang-py prettyprint-override"><code>import subprocess
recursor_pod_ip = "10.0.0.1" # replace with the actual IP address of the recursor pod
ping_result = subprocess.run(["ping", "-c", "4", recursor_pod_ip], capture_output=True)
print(ping_result.stdout.decode())
</code></pre>
<p>This will send four ICMP packets to the recursor pod and print the output. You should see something like this:</p>
<pre><code>PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms
64 bytes from 10.0.0.1: icmp_seq=2 ttl=64 time=0.098 ms
64 bytes from 10.0.0.1: icmp_seq=3 ttl=64 time=0.102 ms
64 bytes from 10.0.0.1: icmp_seq=4 ttl=64 time=0.101 ms
--- 10.0.0.1 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3060ms
rtt min/avg/max/mdev = 0.098/0.106/0.123/0.010 ms
</code></pre>
<p>This indicates that the network connectivity and latency between the python pod and the recursor pod are good.</p>
<ul>
<li>To traceroute the authoritative server from the recursor pod, you can use the following command:</li>
</ul>
<pre class="lang-bash prettyprint-override"><code>kubectl exec -it recursor-pod -- traceroute 8.8.8.8
</code></pre>
<p>This will trace the route taken by packets from the recursor pod to the authoritative server at 8.8.8.8 (Google DNS). You should see something like this:</p>
<pre><code>traceroute to 8.8.8.8 (8.8.8.8), 30 hops max, 60 byte packets
1 10.0.0.1 (10.0.0.1) 0.123 ms 0.098 ms 0.102 ms
2 10.0.1.1 (10.0.1.1) 0.456 ms 0.432 ms 0.419 ms
3 10.0.2.1 (10.0.2.1) 0.789 ms 0.765 ms 0.752 ms
4 192.168.0.1 (192.168.0.1) 1.123 ms 1.098 ms 1.085 ms
5 192.168.1.1 (192.168.1.1) 1.456 ms 1.432 ms 1.419 ms
6 192.168.2.1 (192.168.2.1) 1.789 ms 1.765 ms 1.752 ms
7 192.168.3.1 (192.168.3.1) 2.123 ms 2.098 ms 2.085 ms
8 192.168.4.1 (192.168.4.1) 2.456 ms 2.432 ms 2.419 ms
9 192.168.5.1 (192.168.5.1) 2.789 ms 2.765 ms 2.752 ms
10 8.8.8.8 (8.8.8.8) 3.123 ms 3.098 ms 3.085 ms
</code></pre>
<p>This indicates that the route to the authoritative server is clear and there are no firewall blocks or network issues.</p>
<ul>
<li>To dig the domain name from the recursor pod, you can use the following command:</li>
</ul>
<pre class="lang-bash prettyprint-override"><code>kubectl exec -it recursor-pod -- dig example.com
</code></pre>
<p>This will send a DNS query for the domain name example.com to the recursor pod and print the response. You should see something like this:</p>
<pre><code>; <<>> DiG 9.11.5-P4-5.1ubuntu2.1-Ubuntu <<>> example.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 12345
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;example.com. IN A
;; ANSWER SECTION:
example.com. 3600 IN A 93.184.216.34
;; Query time: 12 msec
;; SERVER: 10.0.0.1#53(10.0.0.1)
;; WHEN: Tue Jun 15 12:34:56 UTC 2021
;; MSG SIZE rcvd: 56
</code></pre>
<p>This indicates that the recursor pod received a valid response from the authoritative server for the domain name example.com.</p>
<ul>
<li>To check the rate limiting settings on the recursor pod, you can use the following command:</li>
</ul>
<pre class="lang-bash prettyprint-override"><code>kubectl exec -it recursor-pod -- pdns_control get-all
</code></pre>
<p>This will print all the configuration settings of the recursor pod. You should look for the following settings:</p>
<pre><code>max-cache-entries=1000000
max-packetcache-entries=500000
max-recursion-depth=40
max-tcp-clients=128
max-udp-queries-per-round=1000
max-udp-queries-per-second=10000
</code></pre>
<p>These settings control the maximum number of cache entries, TCP clients, UDP queries, and recursion depth that the recursor pod can handle. You can adjust them according to your needs and resources. You can use the following command to set a new value for a setting:</p>
<pre class="lang-bash prettyprint-override"><code>kubectl exec -it recursor-pod -- pdns_control set max-udp-queries-per-second 20000
</code></pre>
<p>This will set the maximum number of UDP queries per second to 20000.</p>
<ul>
<li>To check the configuration of the authoritative server at 8.8.8.8, you can use the following command:</li>
</ul>
<pre class="lang-bash prettyprint-override"><code>dig +short CHAOS TXT version.bind @8.8.8.8
</code></pre>
<p>This will send a DNS query for the version of the authoritative server at 8.8.8.8. You should see something like this:</p>
<pre><code>"google-public-dns-a.google.com"
</code></pre>
<p>This indicates that the authoritative server is running Google Public DNS, which is a well-known and reliable DNS service. You can check the documentation of Google Public DNS for more information on its configuration and features. You can also use the following command to check the DNSSEC status of the authoritative server:</p>
<pre class="lang-bash prettyprint-override"><code>dig +short CHAOS TXT id.server @8.8.8.8
</code></pre>
<p>This will send a DNS query for the identity of the authoritative server at 8.8.8.8. You should see something like this:</p>
<pre><code>"edns0"
</code></pre>
<p>This indicates that the authoritative server supports EDNS0, which is an extension of the DNS protocol that enables DNSSEC and other features. You can check the documentation of EDNS0 for more information on its functionality and benefits.</p>
|
<p>I'm trying to create a Job to list all resources, because my connection is terrible. Is there any way to give a pod permission to run the command below?</p>
<p>Here is the ClusterRole that I am trying:</p>
<pre><code>kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
namespace: istio-system
name: workaround
rules:
- apiGroups: [""]
resources:
- '*'
verbs:
- '*'
- apiGroups: ['*']
resources:
- '*'
verbs:
- '*'
</code></pre>
<p>The command is:</p>
<pre><code>kubectl api-resources --verbs=list --namespaced -o name | xargs -n 1 kubectl get -n ibm-rancher
</code></pre>
| <p>If you are just looking to give your workload an admin role, you can use the prebuilt <code>cluster-admin</code> cluster role which should be available on every k8s cluster.<br />
See the docs for more details - <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/#user-facing-roles" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/access-authn-authz/rbac/#user-facing-roles</a></p>
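<p>A minimal sketch (the ServiceAccount name here is an assumption, not something from your manifests) that binds the built-in role to the ServiceAccount your Job's pod runs as:</p>
<pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
  name: workaround-sa
  namespace: istio-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: workaround-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: workaround-sa
  namespace: istio-system
</code></pre>
<p>Then set <code>serviceAccountName: workaround-sa</code> in the Job's pod spec, and the pod's <code>kubectl</code> calls will be allowed to list everything.</p>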
|
<p>I am trying to install Operator Lifecycle Manager (OLM) — a tool to help manage the Operators running on your cluster — from the <a href="https://operatorhub.io/operator/gitlab-runner-operator" rel="nofollow noreferrer">official documentation</a>, but I keep getting the error below. What could possibly be wrong?</p>
<p>This is the result from the command:</p>
<pre class="lang-bash prettyprint-override"><code>curl -sL https://github.com/operator-framework/operator-lifecycle-manager/releases/download/v0.22.0/install.sh | bash -s v0.22.0
</code></pre>
<pre class="lang-bash prettyprint-override"><code>/bin/bash: line 2: $'\r': command not found
/bin/bash: line 3: $'\r': command not found
/bin/bash: line 5: $'\r': command not found
: invalid option6: set: -
set: usage: set [-abefhkmnptuvxBCEHPT] [-o option-name] [--] [-] [arg ...]
/bin/bash: line 7: $'\r': command not found
/bin/bash: line 9: $'\r': command not found
/bin/bash: line 60: syntax error: unexpected end of file
</code></pre>
<p>I've tried removing the existing curl and downloaded and installed another version but the issue has still persisted. Most solutions online are for Linux users and they all lead to Windows path settings and files issues.</p>
<p>I haven't found one tackling installing a file using <code>curl</code>.</p>
<p>I'll gladly accept any help.</p>
| <p>To start off, I need to be clear about a few things:</p>
<ol>
<li>Based on the tags to the question, I see we are in PowerShell rather than a linux/unix or even Windows cmd shell</li>
<li>In spite of this, we are using Unix <code>curl</code> (probably <code>curl.exe</code>), and not the PowerShell alias for <code>Invoke-WebRequest</code>. We know this because of the <code>-sL</code> argument. If Powershell was using the alias, we'd see a completely different error.</li>
</ol>
<p>Next, I need to talk briefly about line endings. Instead of just a single LF (<code>\n</code>) character as seen in Unix/Linux and expected by bash, Windows by default uses the two-character CR/LF pair (<code>\r\n</code>) for line endings.</p>
<hr />
<p>With all that background out of the way, I can now explain what's causing the problem. It's this single pipe character:</p>
<pre><code>|
</code></pre>
<p>This is a PowerShell pipe, not a Unix pipe, so the operation puts the output of the <code>curl</code> program on the PowerShell pipeline in order to send it to the <code>bash</code> interpreter. Each line is an individual item on the pipeline, and as such no longer includes any original line breaks. The PowerShell pipeline will "correct" this before calling bash, using the default line ending for the system, which in this case is the CR/LF pair used by Windows. Now when bash tries to interpret the input, it sees an extra <code>\r</code> character after every line and doesn't know what to do with it.</p>
<p>The trick is that most of what we might do in PowerShell to strip out those extra characters would still get sent through another pipe after we're done. We <em>could</em> tell curl to write the file to disk without ever using a pipe, and then tell bash to run the saved file, but that's awkward, extra work, and much slower.</p>
<hr />
<p>But we can do a little better. PowerShell by default treats each line returned by curl as a separate item on the pipeline. We can "trick" it into instead putting one big item on the pipeline using the <code>-join</code> operation. That will give us one big string that can go on the pipeline as a single element. It will still end up with an extra <code>\r</code> character, but by the time bash sees it the script will have done its work.</p>
<p>Code to make this work is found in the other answer, and they deserve all the credit for the solution. The purpose of my post is to do a little better job of explaining what's going on: why we have a problem, and why the solution works, since I had to read through that answer a couple of times to really get it.</p>
|
<p>I am trying to deploy NATS in a Kubernetes cluster, and I need to override the default server config.
I tried creating a ConfigMap with <code>--from-file</code> and attaching it to the deployment, but it gives me the following error:</p>
<pre><code>nats-server: read /etc/nats-server-conf/server.conf: is a directory
</code></pre>
<p>ConfigMap</p>
<pre><code>k describe configmaps nats-server-conf
</code></pre>
<pre><code>Name: nats-server-conf
Namespace: default
Labels: <none>
Annotations: <none>
Data
====
server.conf:
----
accounts: {
\$SYS: {
users: [{user: sys, password: pass}]
}
}
BinaryData
====
Events: <none>
</code></pre>
<p>Following is my deployment file</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: nats-depl
spec:
replicas: 1
selector:
matchLabels:
app: nats
template:
metadata:
labels:
app: nats
spec:
containers:
- name: nats
image: nats
volumeMounts:
- mountPath: /etc/nats-server-conf/server.conf
name: nats-server-conf
args:
[
'-p',
'4222',
'-m',
'8222',
'-js',
'-c',
'/etc/nats-server-conf/server.conf'
]
volumes:
- configMap:
name: nats-server-conf
name: nats-server-conf
</code></pre>
<p>Thank you.</p>
| <pre><code>- mountPath: /etc/nats-server-conf/server.conf
</code></pre>
<p>The above setting makes the Pod mount <code>server.conf</code> as a directory (the ConfigMap's keys become files inside it), so try the below instead:</p>
<pre><code>- mountPath: /etc/nats-server-conf
</code></pre>
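<p>Alternatively, if you want to keep mounting only that single file at the original path, a <code>subPath</code> mount should also work (a sketch based on your manifest; note that subPath mounts do not receive live ConfigMap updates):</p>
<pre><code>volumeMounts:
- mountPath: /etc/nats-server-conf/server.conf
  subPath: server.conf
  name: nats-server-conf
</code></pre>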
|
<pre><code>apiVersion: skaffold/v2alpha3
kind: Config
deploy:
kubectl:
manifests:
- ./infra/k8s/*
build:
local:
push: false
artifacts:
- image: fifapes123/auth
context: auth
docker:
dockerfile: Dockerfile
sync:
manual:
- src: 'src/**/*.ts'
dest: .
</code></pre>
<p>I am getting an error on line number 5. I am using skaffold/v2alpha3, in which <code>manifests</code> under <code>kubectl</code> is allowed, so why am I getting "property manifests is not allowed"?</p>
| <pre><code>apiVersion: skaffold/v3
kind: Config
build:
artifacts:
- image: fifapes123/auth
context: auth
sync:
manual:
- src: src/**/*.ts
dest: .
docker:
dockerfile: Dockerfile
local:
push: false
manifests:
rawYaml:
- ./infra/k8s/*
deploy:
kubectl: {}
</code></pre>
<p>Try the above YAML config. You should <a href="https://skaffold.dev/docs/upgrading/" rel="noreferrer">update</a> your <code>skaffold.yaml</code> to a newer apiVersion (the example above uses <code>skaffold/v3</code>). This can be done easily with the <code>skaffold fix</code> command.</p>
|
<p>I'm trying to create a script in order to run multiple <code>kubectl exec</code> commands against multiple pods with multiple containers. The script seems to generate the command just fine but errors out when attempting to run it.</p>
<p>example command that is generated: <code>kubectl -n <namespace> exec <pod_name> -c <container_name> -- openssl version</code></p>
<p>When I copy the generated command and run it directly it works fine, but if I try to run the command within the script I get an error.</p>
<pre><code>OCI runtime exec failed: exec failed: unable to start container process: exec: "openssl version": executable file not found in $PATH: unknown
</code></pre>
<p>command terminated with exit code 126</p>
<p>snippet from .sh file:</p>
<pre><code>for pod in $PODS; do
CONTAINERS=($(kubectl -n $NAMESPACE get pods $pod -o jsonpath='{.spec.containers[*].name}' | tr -s '[[:space:]]' '\n'))
header "{pod: \"$pod\", containers: \"$(echo $CONTAINERS | tr -d '\n')\"}"
if [ "$DRYRUN" != "true" ]; then
for container in $CONTAINERS; do
echo "COMMAND BEING RUN: \"kubectl -n $NAMESPACE exec $pod -c $container -- $COMMAND\""
kubectl -n $NAMESPACE exec $pod -c $container -- $COMMAND
done
fi
done
</code></pre>
| <p>The command provided to <code>exec</code> is executed directly inside the container, without a shell, so shell features such as quoting, variable expansion, and pipes are not applied. The error indicates that the container runtime tried to execute a binary literally named <code>"openssl version"</code>, which of course is not found in <code>$PATH</code>; only actual executables on the container's <code>$PATH</code> (like those under <code>/bin/</code>) can be run this way.</p>
<p>You can instead start a shell in the container and pass <code>$COMMAND</code> to it as a single string to interpret:</p>
<pre><code>kubectl exec -n $NAMESPACE --container $container -it $pod -- /bin/sh -c "$COMMAND"
</code></pre>
|
<p>I have following configuration of a service:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: academy-backend-service
spec:
selector:
app: academy-backend-app
type: NodePort
ports:
- port: 8081
targetPort: 8081
nodePort: 30081
</code></pre>
<p>Behind this service there is a deployment that runs a Docker image of a Spring Boot application that exposes port 8081.
When I try to reach the application from the browser on http://localhost:30081 I don't get anything (not reachable). However, if I connect from inside the minikube cluster, the application is available on http://{serviceip}:8081.
Any clues as to what is not configured properly? I thought that nodePort was enough.</p>
<pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
academy-backend-service NodePort 10.97.44.87 <none> 8081:30081/TCP 34m
</code></pre>
| <p>A <code>NodePort</code> service exposes the nodePort on each node's IP in the cluster, not on your machine's localhost.</p>
<p>From <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/service/</a></p>
<blockquote>
<p>NodePort: Exposes the Service on each Node's IP at a static port (the NodePort). To make the node port available, Kubernetes sets up a cluster IP address, the same as if you had requested a Service of type: ClusterIP.</p>
</blockquote>
<p>If you want to expose your service outside the cluster, use <code>LoadBalancer</code> type:</p>
<blockquote>
<p>LoadBalancer: Exposes the Service externally using a cloud provider's load balancer.</p>
</blockquote>
<p>or use an ingress controller, which is a reverse proxy that routes traffic from outside into your cluster:
<a href="https://github.com/kubernetes/ingress-nginx" rel="nofollow noreferrer">https://github.com/kubernetes/ingress-nginx</a></p>
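<p>Since you are on minikube, a quick sanity check (a sketch, assuming the default minikube profile) is to ask minikube for the node-level URL of the NodePort service and open or curl what it prints:</p>
<pre><code>minikube service academy-backend-service --url
</code></pre>
<p>Depending on the minikube driver, the node IP is not the same as localhost, which is why http://localhost:30081 does not respond.</p>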
|
<p>I want to make a command which can clear out all old deployments. For example, I have these deployments in a namespace:</p>
<blockquote>
<p>kubectl -n web get deploy --sort-by=.metadata.creationTimestamp</p>
</blockquote>
<pre><code>myproject-static-staging-master 1/1 1 1 54d
myproject-static-staging-task-13373 1/1 1 1 20d
myproject-static-staging-task-13274 1/1 1 1 19d
myproject-static-staging-task-13230 1/1 1 1 19d
myproject-static-staging-task-13323 1/1 1 1 19d
myproject-static-staging-task-13264 1/1 1 1 18d
myproject-static-staging-task-13319 1/1 1 1 13d
myproject-static-staging-task-13470 1/1 1 1 6d20h
myproject-static-staging-task-13179 1/1 1 1 6d20h
myproject-static-staging-task-13453 1/1 1 1 6d4h
myproject-static-staging-moving-to-old 1/1 1 1 6d
myproject-static-staging-moving-test 1/1 1 1 5d20h
</code></pre>
<p>I want to keep only these (the 5 newest):</p>
<pre><code>myproject-static-staging-task-13470 1/1 1 1 6d20h
myproject-static-staging-task-13179 1/1 1 1 6d20h
myproject-static-staging-task-13453 1/1 1 1 6d4h
myproject-static-staging-moving-to-old 1/1 1 1 6d
myproject-static-staging-moving-test 1/1 1 1 5d20h
</code></pre>
<p>I tried that command</p>
<blockquote>
<p>kubectl get deployment -n web --template '{{range
.items}}{{.metadata.name}}{{"\n"}}{{end}}'
--sort-by=.metadata.creationTimestamp | grep -v master | grep myproject-static-staging | head -n 5 | xargs -r kubectl -n web delete
deployment</p>
</blockquote>
<p>but it is not correct.</p>
| <p>You can use the <code>xargs</code> command like this:</p>
<pre><code>command1 | xargs -I{} command2 {}
</code></pre>
<p>With <code>-I{}</code>, xargs substitutes each input line from command1 for the <code>{}</code> placeholder. For example, if command1 outputs the three lines '1', '2' and '3', then xargs will invoke 'command2 1', 'command2 2', and 'command2 3'.</p>
<p>So in your case, you can use</p>
<pre><code>kubectl get deployment -n web --template '{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}' --sort-by=.metadata.creationTimestamp | grep -v master | grep myproject-static-staging | tail -r | tail -n +6 | xargs -I{} kubectl -n web delete deployment {}
</code></pre>
<p><code>tail -r</code> will reverse the order (newest first) and <code>tail -n +6</code> will then select every row except the first 5, i.e. everything except the 5 newest deployments. Note that <code>tail -r</code> is a BSD/macOS option; on GNU/Linux you can use <code>tac</code> instead.</p>
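<p>If you are on GNU/Linux, an equivalent sketch that avoids reversing the list is to drop the last five lines (the five newest deployments) with <code>head -n -5</code> and delete the rest:</p>
<pre><code>kubectl get deployment -n web --template '{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}' --sort-by=.metadata.creationTimestamp | grep -v master | grep myproject-static-staging | head -n -5 | xargs -I{} kubectl -n web delete deployment {}
</code></pre>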
|
<p>I have a GKE Autopilot cluster with one service to publish a message to GCP Pub/Sub topic for testing.</p>
<p>I have created a KSA for the deployment, have used GCP Workload Identity Management to authorize the KSA to act as a GCP Service Account (GSA). Then I gave the GSA the Pub/Sub editor role.</p>
<h3>The following commands are what I used:</h3>
<pre><code>kubectl create serviceaccount KSA_NAME
</code></pre>
<pre><code>gcloud iam service-accounts add-iam-policy-binding GSA_NAME@PROJECT_ID.iam.gserviceaccount.com \
--role roles/iam.workloadIdentityUser \
--member "serviceAccount:PROJECT_ID.svc.id.goog[KSA_NAME]"
</code></pre>
<p>But even after doing all of this, I receive the following error when I try to publish a message to the topic.</p>
<pre class="lang-bash prettyprint-override"><code>ERROR:google.cloud.pubsub_v1.publisher._batch.thread:Failed to publish 1 messages.
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/google/api_core/grpc_helpers.py", line 65, in error_remapped_callable
return callable_(*args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/grpc/_channel.py", line 946, in __call__
return _end_unary_response_blocking(state, call, False, None)
File "/usr/local/lib/python3.7/site-packages/grpc/_channel.py", line 849, in _end_unary_response_blocking
raise _InactiveRpcError(state)
grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
status = StatusCode.PERMISSION_DENIED
details = "User not authorized to perform this action."
debug_error_string = "UNKNOWN:Error received from peer ipv4:142.250.192.42:443 {grpc_message:"User not authorized to perform this action.", grpc_status:7, created_time:"2022-11-01T10:27:21.972013149+00:00"}"
>
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/google/cloud/pubsub_v1/publisher/_batch/thread.py", line 272, in _commit
timeout=self._commit_timeout,
File "/usr/local/lib/python3.7/site-packages/google/pubsub_v1/services/publisher/client.py", line 613, in publish
response = rpc(request, retry=retry, timeout=timeout, metadata=metadata,)
File "/usr/local/lib/python3.7/site-packages/google/api_core/gapic_v1/method.py", line 154, in __call__
return wrapped_func(*args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/google/api_core/retry.py", line 288, in retry_wrapped_func
on_error=on_error,
File "/usr/local/lib/python3.7/site-packages/google/api_core/retry.py", line 190, in retry_target
return target()
File "/usr/local/lib/python3.7/site-packages/google/api_core/timeout.py", line 99, in func_with_timeout
return func(*args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/google/api_core/grpc_helpers.py", line 67, in error_remapped_callable
raise exceptions.from_grpc_error(exc) from exc
google.api_core.exceptions.PermissionDenied: 403 User not authorized to perform this action.
</code></pre>
<h3>Here is the concerned code:</h3>
<pre class="lang-py prettyprint-override"><code>
from flask import Flask, request, jsonify
app = Flask(__name__)
@app.route('/new_pub/<pages>')
def new_pubsub(pages: int):
publish_message(pages)
return json.dumps({'success': True}), 200, {
'ContentType': 'application/json'}
from google.cloud import pubsub_v1
import json
def publish_message(data):
d = {"message": data}
publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("MY_PROJECT", "TOPIC_NAME")
publisher.publish(topic_path, json.dumps(d).encode('utf-8'))
</code></pre>
| <p>Found the issue.</p>
<p>I had missed annotating the KSA with the email address of the IAM service account (GSA).</p>
<p>After these two commands,</p>
<pre class="lang-bash prettyprint-override"><code>kubectl create serviceaccount KSA_NAME
gcloud iam service-accounts add-iam-policy-binding GSA_NAME@PROJECT_ID.iam.gserviceaccount.com \
--role roles/iam.workloadIdentityUser \
--member "GSA_NAME:PROJECT_ID.svc.id.goog[KSA_NAME]"
</code></pre>
<p>I had to run this command:</p>
<pre class="lang-bash prettyprint-override"><code>kubectl annotate serviceaccount KSA_NAME \
--namespace default \
iam.gke.io/gcp-service-account=GSA_NAME@GSA_PROJECT.iam.gserviceaccount.com
</code></pre>
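<p>Also make sure the Deployment's pod template actually runs as that KSA (otherwise the pod uses the <code>default</code> service account and the Workload Identity binding never applies). A minimal sketch:</p>
<pre><code>spec:
  template:
    spec:
      serviceAccountName: KSA_NAME
</code></pre>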
|
<p>Currently, I am developing some back-end APIs; let's call them A, B and C. I'm planning to use Kubernetes for orchestration.</p>
<p>I need authentication and authorization only for the A and B APIs.</p>
<p>How could I implement authentication and authorization for only some of the APIs?</p>
| <p>Kubernetes is an orchestration platform; it doesn't provide application-level authentication/authorization (AuthN/AuthZ) features itself.</p>
<p>You have options like creating an <strong>auth service</strong> in the backend, or using a service mesh or an API gateway to implement AuthN/AuthZ at the edge.</p>
<p>If you are looking for more functionality, I would recommend running the <strong>Kong API gateway</strong> on Kubernetes with Keycloak as the identity provider.</p>
<p>With it you can implement <strong>AuthN/AuthZ</strong>, rate limiting, API key auth, Basic auth, JWT, etc.</p>
<p>You can read my article for implementation : <a href="https://faun.pub/securing-the-application-with-kong-keycloak-101-e25e0ae9ec56" rel="nofollow noreferrer">https://faun.pub/securing-the-application-with-kong-keycloak-101-e25e0ae9ec56</a></p>
<p>The article implements OAuth and OIDC; however, with Kong and Keycloak you can change the auth option as needed, since it's flexible.</p>
<p>At the Kong level, you can create a service and route for each API and attach the security policy that fits your use case, so only A and B get the auth plugins while C stays open. Kong also has a UI dashboard (Konga), which makes it easy to manage.</p>
|
<p>I need to scale a set of pods that run queue-based workers. Jobs for workers can run for a long time (hours) and should not get interrupted. The number of pods is based on the length of the worker queue. Scaling would be either using the horizontal autoscaler using custom metrics, or a simple controller that changes the number of replicas.</p>
<p>The problem with either solution is that, when scaling down, there is no control over which pod(s) get terminated. At any given time, most workers are likely working on short-running jobs, idle, or (more rarely) processing a long-running job. I'd like to avoid killing the long-running-job workers; idle or short-running-job workers can be terminated without issue.</p>
<p>What would be a way to do this with low complexity? One thing I can think of is to do this based on CPU usage of the pods. Not ideal, but it could be good enough. Another method could be that workers somehow expose a priority indicating whether they are the preferred pod to be deleted. This priority could change every time a worker picks up a new job though.</p>
<p>Eventually all jobs will be short running and this problem will go away, but that is a longer term goal for now.</p>
| <p>Since version 1.22 there is a beta feature that helps you do that. You can add the annotation <code>controller.kubernetes.io/pod-deletion-cost</code> with a value in the range <code>[-2147483647, 2147483647]</code>, and pods with a lower value will be killed first when the ReplicaSet scales down. The default is 0, so giving one pod a negative value makes it the preferred candidate for deletion during downscaling, e.g.:</p>
<p><code>kubectl annotate pods my-pod-12345678-abcde controller.kubernetes.io/pod-deletion-cost=-1000</code></p>
<p>Link to discussion about the implementation of this feature: <a href="https://github.com/kubernetes/enhancements/issues/2255" rel="nofollow noreferrer">Scale down a deployment by removing specific pods (PodDeletionCost) #2255 </a></p>
<p>Link to the documentation: <a href="https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/#pod-deletion-cost" rel="nofollow noreferrer">ReplicaSet / Pod deletion cost</a></p>
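<p>Since the cost can change every time a worker picks up a job, one sketch (assuming each worker can learn its own pod name, e.g. via the Downward API) is to have the worker raise its own deletion cost when it starts a long-running job and remove it again when the job finishes:</p>
<pre><code># when a long-running job starts
kubectl annotate pods "$POD_NAME" controller.kubernetes.io/pod-deletion-cost=10000 --overwrite
# when the job completes (removes the annotation again)
kubectl annotate pods "$POD_NAME" controller.kubernetes.io/pod-deletion-cost-
</code></pre>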
|
<p>I'm using kubernetes plugin in jenkins to do the testing on kubernetes cluster through jenkins pipeline as code. The cluster details change frequently, so I'm configuring the kubernetes plugin through groovy scripts just before the testing stage.</p>
<p>Problem: Jenkins is not able to create the pod for testing on the cluster. If I check the configuration of that particular Kubernetes cloud in the Configure System console, it is as expected (the IP, token, Jenkins URL, etc.), and 'Test Connection' is also successful.
I tried adding a sleep after configuring the plugin, but no luck.
Any idea what could be happening here?</p>
<p>Thanks in advance!</p>
<p>If I manually create a new Kubernetes cloud through the console and copy in the same details, the pipeline is able to create the pods and carry out further tasks.</p>
<pre><code>Jenkins logs:
[Pipeline] podTemplate
[Pipeline] {
[Pipeline] node
Still waiting to schedule task
All nodes of label ‘XXXXXX-bdd-runner-21-XXXXXXX’ are offline.
Jenkins version: 2.150.3
Kubernetes plugin version: 1.14.5
</code></pre>
| <p>I had this problem this week and solved it by going to: Manage Jenkins => Manage Nodes and Clouds => Configure Clouds => choose your Kubernetes cluster => Kubernetes Cloud details... => set Concurrency Limit to a bigger number (if your current limit is 10, set it to 20).</p>
|
<p>Create a NetworkPolicy named cka-netpol in the namespace netpol that does the following:</p>
<ol>
<li>Allow the pods to communicate only if they are running on port 8080 within the namespace.</li>
<li>Ensure the NetworkPolicy doesn't allow pods that are running on any port other than 8080.</li>
<li>Allow communication from and to the pods running on port 8080; pods running on port 8080 in other namespaces are not allowed.</li>
</ol>
<p>I want a YAML file with some theoretical description.</p>
| <blockquote>
<p>Allow the pods to communicate if they are running on port 8080 within
the namespace.</p>
</blockquote>
<p>We will only open and accept requests on port <strong>8080</strong> to satisfy the above requirement.</p>
<blockquote>
<p>The communication from and to the pods running on port 8080. No pods
running on port 8080 from other namespaces to allowed.</p>
</blockquote>
<p>We use a <code>namespaceSelector</code> to restrict traffic to the specific namespace.</p>
<blockquote>
<p>Ensure the NetworkPolicy doesn’t allow other pods that are running
other than port 8080.</p>
</blockquote>
<p>The policy is applied namespace-wide (empty <code>podSelector</code>) and lists port 8080 as the only allowed port, so pods serving other ports are not reachable.</p>
<p>Check the namespace label:</p>
<pre><code>kubectl get namespace netpol --show-labels
</code></pre>
<p>Example YAML</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: cka-netpol
namespace: netpol
spec:
podSelector: {}
policyTypes:
- Ingress
ingress:
- from:
- namespaceSelector:
matchLabels:
          namespace: netpol # use the actual label on your namespace, e.g. kubernetes.io/metadata.name: netpol
ports:
- protocol: TCP
port: 8080
</code></pre>
<p>You can find more examples and use this link for reference: <a href="https://github.com/ahmetb/kubernetes-network-policy-recipes/blob/master/09-allow-traffic-only-to-a-port.md" rel="nofollow noreferrer">https://github.com/ahmetb/kubernetes-network-policy-recipes/blob/master/09-allow-traffic-only-to-a-port.md</a></p>
|
<p>Currently I'm trying the following setup:</p>
<p>I have:</p>
<ul>
<li>one cluster</li>
<li>one Ingress Controller</li>
<li>one url (myapp.onazure.com)</li>
<li>two namespaces for two applications default and default-test</li>
<li>two deployments, ingress objects, services for the namespaces</li>
</ul>
<p>I can easily reach my app in the default namespace with path-based routing, using '/' as a prefix rule.
Now I have tried to configure the second namespace with the following rule: /testing should hit the other service.</p>
<p>Unfortunately I get an HTTP 404 when I try to hit the following URL: myapp.onazure.com/testing/openapi.json</p>
<p>What did I miss?</p>
<p><strong>Working Ingress 1</strong></p>
<pre><code>kind: Ingress
apiVersion: networking.k8s.io/v1
metadata:
name: liveapi-ingress-object
namespace: default
annotations:
kubernetes.io/ingress.class: public-nginx
spec:
tls:
- hosts:
- myapp-region1.onazure.com
- myapp-region2.onazure.com
secretName: ingress-tls-csi
rules:
  - host: myapp-region1.onazure.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: liveapi-svc
port:
number: 8080
- host: myapp-region2.onazure.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: liveapi-svc
port:
number: 8080
</code></pre>
<p><strong>Not working Ingress 2</strong></p>
<pre><code>kind: Ingress
apiVersion: networking.k8s.io/v1
metadata:
name: liveapi-ingress-object-testing
namespace: default-testing
annotations:
kubernetes.io/ingress.class: public-nginx
#nginx.ingress.kubernetes.io/rewrite-target: /testing
spec:
tls:
- hosts:
- myapp-region1.onazure.com
- myapp-region2.onazure.com
secretName: ingress-tls-csi-testing
rules:
- host: myapp-region1.onazure.com
http:
paths:
- path: /testing
#pathType: Prefix
backend:
service:
name: liveapi-svc-testing
port:
number: 8080
- host: myapp-region2.onazure.com
http:
paths:
- path: /testing
#pathType: Prefix
backend:
service:
name: liveapi-svc-testing
port:
number: 8080
</code></pre>
<p>Maybe I am missing a rewrite target to simply '/' in the testing namespace ingress?</p>
| <p>Finally I figured out the missing part. I had to add the following annotations to the non-working Ingress object:</p>
<pre><code> annotations:
kubernetes.io/ingress.class: public-nginx
nginx.ingress.kubernetes.io/use-regex: "true"
nginx.ingress.kubernetes.io/rewrite-target: /$1
</code></pre>
<p>Please see the complete ingress object:</p>
<pre><code>kind: Ingress
apiVersion: networking.k8s.io/v1
metadata:
name: liveapi-ingress-object
namespace: default-testing
annotations:
kubernetes.io/ingress.class: public-nginx
nginx.ingress.kubernetes.io/use-regex: "true"
nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
tls:
- hosts:
- myapp.onazure.com
secretName: ingress-tls-csi-testing
rules:
- host: myapp.onazure.com
http:
paths:
- path: /testing/(.*)
pathType: Prefix
backend:
service:
name: liveapi-svc-testing
port:
number: 8000
</code></pre>
|
<p>I have a container written in go. It deploys and runs on my DockerDesktop & on my Kubernetes cluster in DockerDesktop.</p>
<p>I have pushed the same container to Artefact Repository and it fails to deploy.</p>
<p>So I deployed it to CloudRun, and it works! Very confused.</p>
<p>My GKE cluster is Autopilot, so I assume there are no resource issues.</p>
<p>I expected to get a running container, however I got the following events from the GKE console:</p>
<pre><code>PodUnschedulable
Reason: Cannot schedule pods: Insufficient cpu.
Source: gmail-sender-7944d6d4d4-tsdt9
        gmail-sender-7944d6d4d4-pc9xp
        gmail-sender-7944d6d4d4-kdlds
PodUnschedulable
Reason: Cannot schedule pods: Insufficient memory.
</code></pre>
<p>My deployment file is as follows</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: gmail-sender
labels:
app: gmail-sender
spec:
replicas: 1
selector:
matchLabels:
app: gmail-sender
template:
metadata:
labels:
app: gmail-sender
spec:
containers:
- name: gmail-sender
image: europe-west2-docker.pkg.dev/ea-website-359514/gmail-sender/gmail-sender:latest
imagePullPolicy: IfNotPresent
ports:
- containerPort: 8099
---
apiVersion: v1
kind: Service
metadata:
name: gmail-sender-cluster-ip
labels:
app: gmail-sender
spec:
ports:
- port: 8099
protocol: TCP
</code></pre>
| <p>Looking at the error, it is clear that the nodes don't have sufficient memory and CPU to schedule/run the workload. Check the node configuration and ensure that resources are available on the nodes to host the workload.</p>
<pre><code>Cannot schedule pods: Insufficient cpu.
PodUnschedulable Reason Cannot schedule pods: Insufficient cpu.
Learn more Source gmail-sender-7944d6d4d4-tsdt9
gmail-sender-7944d6d4d4-pc9xp gmail-sender-7944d6d4d4-kdlds PodUnschedulable
Cannot schedule pods: Insufficient memory.
</code></pre>
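<p>Since this is an Autopilot cluster, capacity is provisioned based on the resource requests declared on the pod, so it is worth adding explicit requests to the container. A sketch (the request values here are assumptions, not taken from your manifest):</p>
<pre><code>containers:
- name: gmail-sender
  image: europe-west2-docker.pkg.dev/ea-website-359514/gmail-sender/gmail-sender:latest
  ports:
  - containerPort: 8099
  resources:
    requests:
      cpu: "250m"
      memory: "512Mi"
</code></pre>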
|
<p>I am trying to deploy a K8s application using Kustomize. Up to now I have done simple implementations where we have a few of the K8s files such as <code>ingress.yaml</code> with something like the following:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: ingressname
namespace: namespace
labels:
app: appname
spec:
tls:
- hosts:
- $(variable1)
secretName: $(variable2)-tls
</code></pre>
<p>Under my <code>overlays</code> directory for each environment, I then have another <code>kustomize.yaml</code> which gives the values in a configmap:</p>
<pre><code>apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- path
configMapGenerator:
- behavior: merge
literals:
- variable1=foo
- variable2=bar
name: configmapname
images:
- name: imagename
newName: registryurl
</code></pre>
<p>This works well, but now I need to do something more complicated. Say, for example, I have multiple Ingresses. Instead of creating multiple base Ingress YAML files, how can I have one base YAML file that creates every Ingress based on the values in my overlay file? Is that possible?</p>
| <p>Kustomize isn't a templating system and doesn't do variable substitution. It <em>can</em> perform a variety of YAML patching tricks, so one option you have is to start with a base manifest like this:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: ingressname
spec:
tls:
- hosts: []
secretName:
</code></pre>
<p>And then patch it in your <code>kustomization.yaml</code> files:</p>
<pre><code>apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../base
patches:
- target:
kind: Ingress
name: ingressname
patch: |
- op: replace
path: /spec/tls
value:
- hosts:
- host1.example.com
secretName: host1-tls
</code></pre>
<p>What I've shown here works well if you have an application consisting of a single Ingress and you want to produce multiple variants (maybe one per cluster, or per namespace, or something). That is, you have:</p>
<ul>
<li>A Deployment</li>
<li>A Service</li>
<li>An Ingress</li>
<li>(etc.)</li>
</ul>
<p>Then you would have one directory for each variant of the app, giving you a layout something like:</p>
<pre><code>.
├── base
│ ├── deployment.yaml
│ ├── ingress.yaml
│ ├── kustomization.yaml
│ └── service.yaml
└── overlays
├── variant1
│ └── kustomization.yaml
└── variant2
└── kustomization.yaml
</code></pre>
<p>If your application has <em>multiple</em> Ingress resources, and you want to apply the same patch to all of them, <a href="https://github.com/kubernetes-sigs/kustomize/blob/master/examples/patchMultipleObjects.md" rel="nofollow noreferrer">Kustomize can do that</a>. If you were to modify the patch in your <code>kustomization.yaml</code> so that it looks like this instead:</p>
<pre><code>apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../base
patches:
- target:
kind: Ingress
name: ".*"
patch: |
- op: replace
path: /spec/tls
value:
- hosts:
- host1.example.com
secretName: host1-tls
</code></pre>
<p>This would apply the same patch to all matching Ingress resources (which is "all of them", in this case, because we used <code>.*</code> as our match expression).</p>
|
<p>I am running a regional GKE Kubernetes cluster in us-central1-b, us-central1-c and us-central1-f. I am running 1.21.14-gke.700. I am adding a confidential node pool to the cluster with this command.</p>
<pre><code>gcloud container node-pools create card-decrpyt-confidential-pool-1 \
--cluster=svcs-dev-1 \
--disk-size=100GB \
--disk-type=pd-standard \
--enable-autorepair \
--enable-autoupgrade \
--enable-gvnic \
--image-type=COS_CONTAINERD \
--machine-type="n2d-standard-2" \
--max-pods-per-node=8 \
--max-surge-upgrade=1 \
--max-unavailable-upgrade=1 \
--min-nodes=4 \
--node-locations=us-central1-b,us-central1-c,us-central1-f \
--node-taints=dedicatednode=card-decrypt:NoSchedule \
--node-version=1.21.14-gke.700 \
--num-nodes=4 \
--region=us-central1 \
--sandbox="type=gvisor" \
--scopes=https://www.googleapis.com/auth/cloud-platform \
--service-account="card-decrpyt-confidential@corp-dev-project.iam.gserviceaccount.com" \
--shielded-integrity-monitoring \
--shielded-secure-boot \
--tags=testingdonotuse \
--workload-metadata=GKE_METADATA \
--enable-confidential-nodes
</code></pre>
<p>This creates a node pool but there is one problem... I can still SSH to the instances that the node pool creates. This is unacceptable for my use case as these node pools need to be as secure as possible. I went into my node pool and created a new machine template with ssh turned off using an instance template based off the one created for my node pool.</p>
<pre><code>gcloud compute instance-templates create card-decrypt-instance-template \
--project=corp-dev-project \
--machine-type=n2d-standard-2 \
--network-interface=aliases=gke-svcs-dev-1-pods-10a0a3cd:/28,nic-type=GVNIC,subnet=corp-dev-project-private-subnet,no-address \
--metadata=block-project-ssh-keys=true,enable-oslogin=true \
--maintenance-policy=TERMINATE --provisioning-model=STANDARD \
--service-account=card-decrpyt-confidential@corp-dev-project.iam.gserviceaccount.com \
--scopes=https://www.googleapis.com/auth/cloud-platform \
--region=us-central1 --min-cpu-platform=AMD\ Milan \
--tags=testingdonotuse,gke-svcs-dev-1-10a0a3cd-node \
--create-disk=auto-delete=yes,boot=yes,device-name=card-decrpy-instance-template,image=projects/confidential-vm-images/global/images/cos-89-16108-766-5,mode=rw,size=100,type=pd-standard \
--shielded-secure-boot \
--shielded-vtpm \
--shielded-integrity-monitoring \
--labels=component=gke,goog-gke-node=,team=platform --reservation-affinity=any
</code></pre>
<p>When I change the instance templates of the nodes in the node pool the new instances come online but they do not attach to the node pool. The cluster is always trying to repair itself and I can't change any settings until I delete all the nodes in the pool. I don't receive any errors.</p>
<p>What do I need to do to disable SSH into the node pool nodes, either with the original node pool I created or with the new instance template I created? I have tried a bunch of different configurations with a new node pool and the cluster and have not had any luck. I've tried different tags, network configs and images. None of these have worked.</p>
<p>Other info:
The cluster was not originally a confidential cluster. The confidential nodes are the first of their kind added to the cluster.</p>
| <p>One option you have here is to enable private IP addresses for the nodes in your cluster. The <a href="https://cloud.google.com/sdk/gcloud/reference/container/clusters/create#--enable-private-nodes" rel="nofollow noreferrer"><code>--enable-private-nodes</code> flag</a> will make it so the nodes in your cluster get <em>private</em> IP addresses (rather than the default public, internet-facing IP addresses).</p>
<p>Note that in this case, you would still be able to SSH into these nodes, but only from within your VPC network.</p>
<p>Also note that this means you would not be able to access <code>NodePort</code> type services from outside of your VPC network. Instead, you would need to use a <code>LoadBalancer</code> type service (or provide some other way to route traffic to your service from outside of the cluster, if required).</p>
<hr />
<p>If you'd like to prevent SSH access even from within your VPC network, your easiest option would likely be to configure a firewall rule to deny SSH traffic to your nodes (TCP/UDP/SCTP port 22). Use network tags (the <code>--tags</code> flag) to target your GKE nodes.</p>
<p>Something along the lines of:</p>
<pre><code>gcloud compute firewall-rules create fw-d-i-ssh-to-gke-nodes \
--network NETWORK_NAME \
--action deny \
--direction ingress \
--rules tcp:22,udp:22,sctp:22 \
--source-ranges 0.0.0.0/0 \
--priority 65534 \
--target-tags my-gke-node-network-tag
</code></pre>
<hr />
<p>Finally, one last option I'll mention for creating a hardened GKE cluster is to use Google's <a href="https://registry.terraform.io/modules/terraform-google-modules/kubernetes-engine/google/latest/submodules/safer-cluster" rel="nofollow noreferrer"><code>safer-cluster</code></a> Terraform module. This is an opinionated setup of a GKE cluster that follows many of the principles laid out in Google's <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/hardening-your-cluster" rel="nofollow noreferrer">cluster hardening guide</a> and the Terraform module takes care of a lot of the nitty-gritty boilerplate here.</p>
|
<p>I have this KongPlugin on Kubernetes</p>
<pre><code>apiVersion: configuration.konghq.com/v1
kind: KongPlugin
config:
remove:
headers:
- Authorization
append:
headers:
- Authorization:Basic <SECRET>
plugin: request-transformer
</code></pre>
<p>Is it possible to move the header Authorization to a Kubernetes secret?</p>
| <p>If you are using the Enterprise version of the <strong>request-transformer</strong> plugin (request-transformer-advanced), you can do it.</p>
<blockquote>
<p>config.append.headers optional</p>
<p>Type: array of string elements List of headername:value pairs. If the
header is not set, set it with the given value. If it is already set,
a new header with the same name and the new value will be set.</p>
</blockquote>
<blockquote>
<p>This field is referenceable, which means it can be securely stored as
a secret in a vault. References must follow a specific format.</p>
</blockquote>
<p>Ref : <a href="https://docs.konghq.com/hub/kong-inc/request-transformer-advanced/" rel="nofollow noreferrer">https://docs.konghq.com/hub/kong-inc/request-transformer-advanced/</a></p>
<p>As far as I have tried, there is no direct way without the Enterprise version, but you can modify or write the plugin code yourself, build a custom Docker image, and run that.</p>
<p>Here is the example code which appends the headers: <a href="https://github.com/Kong/kong-plugin-request-transformer/blob/master/kong/plugins/request-transformer/access.lua#L228" rel="nofollow noreferrer">https://github.com/Kong/kong-plugin-request-transformer/blob/master/kong/plugins/request-transformer/access.lua#L228</a></p>
<p>You can read this article on building a custom Docker image with the plugin code: <a href="https://faun.pub/building-kong-custom-docker-image-add-a-customized-kong-plugin-2157a381d7fd" rel="nofollow noreferrer">https://faun.pub/building-kong-custom-docker-image-add-a-customized-kong-plugin-2157a381d7fd</a></p>
|
<p>I am using Loki v2.4.2 and have configured S3 as a storage backend for both index and chunk.</p>
<p>I want to ensure that all logs older than 90 days are deleted without risk of corruption. The documentation about retention is confusing, and the steps are not clear. Should I just set a TTL on the object storage root prefix, i.e. <code>/</code>? Or should I configure something like this? I don't want to run the compactor.</p>
<pre><code>table_manager:
retention_deletes_enabled: true
retention_period: 2160h
</code></pre>
<p>Here is my Loki configuration. Please suggest what changes should be made in this configuration and the corresponding S3 TTL. I don't want to run the compactor.</p>
<pre><code>config:
# existingSecret:
auth_enabled: false
ingester:
chunk_idle_period: 3m
chunk_block_size: 262144
chunk_retain_period: 1m
max_transfer_retries: 0
wal:
dir: /data/loki/wal
lifecycler:
ring:
kvstore:
store: inmemory
replication_factor: 1
## Different ring configs can be used. E.g. Consul
# ring:
# store: consul
# replication_factor: 1
# consul:
# host: "consul:8500"
# prefix: ""
# http_client_timeout: "20s"
# consistent_reads: true
limits_config:
max_query_series: 5000
enforce_metric_name: false
reject_old_samples: true
reject_old_samples_max_age: 168h
schema_config:
configs:
- from: 2021-09-27
store: boltdb-shipper
object_store: s3
schema: v11
index:
prefix: index_
period: 24h
server:
http_listen_port: 3100
storage_config:
aws:
s3: s3://ap-southeast-1/loki-s3-bucket
boltdb_shipper:
active_index_directory: /data/loki/boltdb-shipper-active
cache_location: /data/loki/boltdb-shipper-cache
cache_ttl: 24h # Can be increased for faster performance over longer query periods, uses more disk space
shared_store: s3
filesystem:
directory: /data/loki/chunks
chunk_store_config:
max_look_back_period: 0s
table_manager:
retention_deletes_enabled: false
retention_period: 0s
compactor:
working_directory: /data/loki/boltdb-shipper-compactor
shared_store: filesystem
</code></pre>
| <p>Deleting old log and index data seems to be the responsibility of S3, not Loki. You'll need to add one or more lifecycle rules to your buckets to handle this.</p>
<p><a href="https://grafana.com/docs/loki/latest/operations/storage/retention/#table-manager" rel="nofollow noreferrer">https://grafana.com/docs/loki/latest/operations/storage/retention/#table-manager</a></p>
<p>"When using S3 or GCS, the bucket storing the chunks needs to have the expiry policy set correctly. For more details check S3’s documentation or GCS’s documentation."</p>
<p>IMO the Loki documentation is very weak on this topic, I'd like it if they talked about this in more detail.</p>
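<p>As a sketch, a 90-day expiration rule can be attached to the bucket from your config with the AWS CLI (the rule ID and empty prefix are just examples; adjust them to your bucket layout):</p>
<pre><code>aws s3api put-bucket-lifecycle-configuration \
  --bucket loki-s3-bucket \
  --lifecycle-configuration '{
    "Rules": [
      {
        "ID": "expire-loki-objects-after-90-days",
        "Status": "Enabled",
        "Filter": { "Prefix": "" },
        "Expiration": { "Days": 90 }
      }
    ]
  }'
</code></pre>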
|
<p>I have a problem with the start order of the istio-sidecar and the main application in Kubernetes.
When the pod starts, the main application gets "connection refused" errors for external services. Once the istio-envoy proxy is ready, the main application starts correctly on the next attempt.</p>
<p>While Istio is not ready, the main application has time to crash and restart 2-3 times</p>
<p>how do I make the main application wait for istio-sidecar to start and only then start running itself?</p>
| <p>I believe setting <code>holdApplicationUntilProxyStarts</code> to <code>true</code> will solve your issue.
You can get more information about it here:
<a href="https://istio.io/latest/docs/ops/common-problems/injection/" rel="nofollow noreferrer">https://istio.io/latest/docs/ops/common-problems/injection/</a>
Hope it helps.</p>
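<p>For reference, a minimal sketch of how this option can be set, either mesh-wide or per workload (the field and annotation names follow the Istio docs; double-check them against your Istio version):</p>
<pre><code># mesh-wide, via the IstioOperator meshConfig
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  meshConfig:
    defaultConfig:
      holdApplicationUntilProxyStarts: true
---
# or per workload, as a pod-template annotation on the Deployment
spec:
  template:
    metadata:
      annotations:
        proxy.istio.io/config: |
          holdApplicationUntilProxyStarts: true
</code></pre>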
|
<p>I wanted to deploy Strapi CMS to Kubernetes. On my local machine, I am trying to do this with Minikube. The structure of the project is MySQL in a different container outside of the cluster. I want to access the MySQL database from inside the cluster via this IP <code>172.17.0.2:3306</code></p>
<p>The Database is outside of the cluster and lives in a docker container. But the Strapi project lives in a cluster of Kubernetes.</p>
<p>This is my deployment YAML file for doing the Kubernetes stuff:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: cg-api
spec:
selector:
matchLabels:
app: cg-api
replicas: 2
template:
metadata:
labels:
app: cg-api
spec:
containers:
- name: cg-api
image: alirezahd/cg-api
env:
- name: DATABASE_HOST
value: "172.17.0.2"
- name: DATABASE_PORT
value: "3306"
ports:
- containerPort: 1337
</code></pre>
| <p>First of all, you should expose MySQL port 3306 outside of the container.
Assume the host on which Docker runs the MySQL container has the IP address 192.168.0.100. Then you can <strong>telnet 192.168.0.100 3306</strong> to make sure the port is exposed and reachable.</p>
<p>Here is an example from <a href="https://docs.docker.com/engine/reference/commandline/run/" rel="nofollow noreferrer">Dockers documentation</a> :</p>
<pre><code>docker run -p 192.168.0.100:3306:3306/tcp --name mysql-container-name -e MYSQL_ROOT_PASSWORD=<password> -d mysql
</code></pre>
<p>This binds port 3306 of the container to TCP port 3306 on 192.168.0.100 of the host machine.</p>
<p>Then your CMS running inside minikube can connect to it.</p>
<p>Try deploying <strong>dnsutils</strong> in minikube. It provides network tools like nslookup, telnet, etc. and helps a lot when debugging network issues inside minikube. You can connect to it with <code>kubectl exec -it dnsutils -n default -- bash</code>.</p>
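<p>Also note that the Docker bridge IP 172.17.0.2 of the MySQL container may not be reachable from inside minikube at all. Recent minikube versions record the host machine's address as <code>host.minikube.internal</code> on the minikube node, so one way to find the right address to put into <code>DATABASE_HOST</code> is:</p>
<pre><code>minikube ssh "grep host.minikube.internal /etc/hosts"
</code></pre>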
|
<p>I'm trying to assign static IPs for Load Balancers in GKE to services by storing them in the <code>values.yaml</code> file as:</p>
<pre><code>ip:
sandbox:
service1: xxx.xxx.201.74
service2: xxx.xxx.80.114
dev:
service1: xxx.xxx.249.203
service2: xxx.xxx.197.77
test:
service1: xxx.xxx.123.212
service2: xxx.xxx.194.133
prod:
service1: xxx.xx.244.211
service2: xxx.xxx.207.177
</code></pre>
<p>All works fine till I want to deploy to prod and that will fail as:</p>
<pre><code>Error: UPGRADE FAILED: template: chart-v1/templates/service2-service.yaml:24:28: executing "chart-v1/templates/service2-service.yaml" at <.Values.ip.prod.service2>: nil pointer evaluating interface {}.service2
helm.go:94: [debug] template: chart-v1/templates/service2-service.yaml:24:28: executing "chart-v1/templates/service2-service.yaml" at <.Values.ip.prod.service2>: nil pointer evaluating interface {}.service2
</code></pre>
<p>and the part for <code>service2-service.yaml</code> looks like:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
annotations:
appName: {{ include "common.fullname" . }}
componentName: service2
labels:
io.kompose.service: service2
name: service2
spec:
ports:
- name: "{{ .Values.service.service2.ports.name }}"
port: {{ .Values.service.service2.ports.port }}
protocol: {{ .Values.service.service2.ports.protocol }}
targetPort: {{ .Values.service.service2.ports.port }}
type: LoadBalancer
{{ if eq .Values.target.deployment.namespace "sandbox" }}
loadBalancerIP: {{ .Values.ip.sandbox.service2 }}
{{ else if eq .Values.target.deployment.namespace "dev" }}
loadBalancerIP: {{ .Values.ip.dev.service2 }}
{{ else if eq .Values.target.deployment.namespace "test" }}
loadBalancerIP: {{ .Values.ip.test.service2 }}
{{ else if eq .Values.target.deployment.namespace "prod" }}
loadBalancerIP: {{ .Values.ip.prod.service2 }}
{{ else }}
{{ end }}
selector:
io.kompose.service: service2
status:
loadBalancer: {}
</code></pre>
<p>Any clue why is complaining that is <code>nil</code> (empty)?</p>
| <p>It could be due to a function changing the context (scope) inside the template, or to how the values are defined in <code>values.yaml</code>.</p>
<p>Normally, inside a <code>range</code> block we can use <strong>$</strong> for the global scope, e.g. <code>appName: {{ include "common.fullname" $ }}</code>.</p>
<p>When I tested the same template with a static value for <strong>appName</strong> it worked for me, so there is no general problem with accessing <code>values.yaml</code>, unless <strong>nil</strong> actually ends up at <strong>.Values.ip.prod.service2</strong> (for example because a prod-specific values file overrides the <code>ip</code> block without a <code>prod</code> entry).</p>
<p>In the other case, as you mentioned, accessing the nested keys with parentheses, <code>{{ (.Values.ip.prod).service2 }}</code>, will solve the issue.</p>
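<p>As a side note, assuming the values layout from your question, the whole if/else chain can be collapsed into a single lookup on the namespace key (the key still has to exist for the target namespace, otherwise rendering fails; a default would need something like the Sprig <code>dig</code> function):</p>
<pre><code>loadBalancerIP: {{ index .Values.ip .Values.target.deployment.namespace "service2" }}
</code></pre>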
|
<p>I have a k8s operator which works as expected. I need to add a “watch” on another operator's CRD (not mine); to keep it simple, let's call it <code>extCR</code>, and our operator's CR <code>inCR</code>.</p>
<p>I tried the following, but there is an issue with how to correctly trigger the reconcile.</p>
<pre><code>func (r *Insiconciler) SetupWithManager(mgr ctrl.Manager) error {
return ctrl.NewControllerManagedBy(mgr).
For(&Inv1alpha1.Iget{}}).
Watches(&source.Kind{Type: &ext.Se{}}, handler.EnqueueRequestsFromMapFunc(r.FWatch)).
Complete(r)
}
func (r *Insiconciler) FWatch(c client.Object) []reconcile.Request {
val := c.(*ivi.Srv)
req := reconcile.Request{NamespacedName: types.NamespacedName{Name: val.Name, Namespace: val.Namespace}}
return []reconcile.Request{req}
}
</code></pre>
<p>The problem here is that I trigger the reconcile with the <code>extCR</code>; inside <code>FWatch</code> I want to update the <code>inCR</code> and start the reconcile with inCR and not with extCR. How can I do it?</p>
<p>I mean, I want to avoid something like the following code, where sometimes the reconcile runs for the <code>inCR</code> and sometimes for the <code>extCR</code>, and then I can end up with some ugly if's:</p>
<pre><code>func (r *Insiconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
var inCR FOO
var extCR BAR
if err := r.Get(ctx, req.NamespacedName, &inCR); err != nil {
return ctrl.Result{}, err
}
if err := r.Get(ctx, req.NamespacedName, &extCR); err != nil {
return ctrl.Result{}, err
}
</code></pre>
<p><strong>I want to know what is the right/clean way to handle such case</strong></p>
<p>i.e. the case when you need to listen to an external CR (not part of your controller) and also an internal CR (from your controller).</p>
<p>One more thing - the CRs have different GVKs, and the external CR contains a lot of fields that are not required, just some of them; but the <strong>required fields</strong> have the <strong>same names</strong> in both CRs.</p>
<p><strong>update</strong></p>
<pre><code>type inCR struct {
metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`
  Spec inSpec `json:"spec,omitempty"` // ———————— here is the difference
Status InsightTargetStatus `json:"status,omitempty"`
}
</code></pre>
<p>// ————— This is defined in another program which is not owned by us, therefore we cannot “reuse” it:</p>
<pre><code>type Bar struct {
metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`
Spec extSpec `json:"spec,omitempty"` // ———————here is the difference
Status ServiceStatus `json:"status,omitempty"`
}
</code></pre>
<p>And <code>inSpec</code> is having the following fields (subset of extSpec)</p>
<pre><code>type inSpec struct {
name string
age int
}
</code></pre>
<p>and <code>extSpec</code> have those fields and many more which is not related</p>
<pre><code>type extSpec struct {
name string
age int
foo string // not relevant
bar string // not relevant
bazz string // not relevant
}
</code></pre>
<p>In the end, <strong>inside the reconcile</strong>, I need to pass the <code>relevant</code> fields to some functions: <strong>exactly</strong> the same functions, which just take the fields sometimes from the extCR and sometimes from the inCR, according to the event that happened (like updating the extCR, or the inCR being updated by users).</p>
<p><strong>Update2</strong></p>
<pre><code>func sharedLogic(r reconciler, ctx context.Context, c client.Object) (ctrl.Result, error) {
   in := c.(*inCR)
   vPass, e := vps.Get(ctx, r.Client, in.Spec.foo, in.Spec.bar)
   return ctrl.Result{}, nil
}
</code></pre>
<p>But for extCR I should do the following:</p>
<pre><code>func sharedLogic(r reconciler, ctx context.Context, c client.Object) (ctrl.Result, error) {
   ext := c.(*extCR)
   vPass, e := vps.Get(ctx, r.Client, ext.Spec.val.foo, ext.Spec.val.bar)
   return ctrl.Result{}, nil
}
</code></pre>
| <p>Few things to keep in mind:</p>
<ul>
<li>Each controller is responsible for <em>exactly</em> one resource.</li>
<li>Reconcile request contains the information necessary to reconcile a Kubernetes object. This includes the information to uniquely identify the object - its Name and Namespace. It does NOT contain information about any specific Event or the object contents itself.</li>
</ul>
<hr />
<p>You can create a second controller without the resource definition. In your main file, both controllers will be registered.</p>
<p>This could be useful if the CRDs are not related at all or if the external resource references the internal one, so you can make changes to the internal resource in the external reconciler.</p>
<pre class="lang-bash prettyprint-override"><code>kubebuilder create api --group other --version v2 --kind External \
--resource=false --controller=true
</code></pre>
<p>This gives you a controller with a <code>SetupWithManager</code> method that looks like the below.</p>
<pre class="lang-golang prettyprint-override"><code>func (r *ExternalReconciler) SetupWithManager(mgr ctrl.Manager) error {
return ctrl.NewControllerManagedBy(mgr).
// Uncomment the following line adding a pointer to an instance of the controlled resource as an argument
// For().
Complete(r)
}
</code></pre>
<p>Note how the For method is commented out because you need to import the resource to watch from somewhere else and reference it.</p>
<pre class="lang-golang prettyprint-override"><code>import (
...
otherv2 "other.io/external/api/v2"
)
...
func (r *ExternalReconciler) SetupWithManager(mgr ctrl.Manager) error {
return ctrl.NewControllerManagedBy(mgr).
For(&otherv2.External{}).
Complete(r)
}
</code></pre>
<p>If you cannot import the external resource you could fall back to mocking it yourself but this is probably not a very clean way. You should really try to import it from the other controller project.</p>
<pre class="lang-bash prettyprint-override"><code>kubebuilder edit --multigroup=true
kubebuilder create api --group=other --version v2 --kind External \
--resource --controller
</code></pre>
<hr />
<p>Another way is when the resources are related to each other such that the internal resource has a reference in its spec to the external resource and knows how to get the external resource in its spec, when it reconciles. An example of this can be found here <a href="https://book.kubebuilder.io/reference/watching-resources/externally-managed.html" rel="nofollow noreferrer">https://book.kubebuilder.io/reference/watching-resources/externally-managed.html</a></p>
<pre class="lang-golang prettyprint-override"><code>type InternalSpec struct {
// Name of an external resource
ExternalResource string `json:"externalResource,omitempty"`
}
</code></pre>
<p>This means that in each reconciliation loop, the controller will look up the external resource and use it to manage the internal resource.</p>
<pre class="lang-golang prettyprint-override"><code>func (r *InternalReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
_ = log.FromContext(ctx)
internal := examplev1.Internal{}
if err := r.Get(context.TODO(), types.NamespacedName{
Name: req.Name,
Namespace: req.Namespace,
}, &internal); err != nil {
return ctrl.Result{}, err
}
external := otherv2.External{}
if err := r.Get(context.TODO(), types.NamespacedName{
// note how the name is taken from the internal spec
Name: internal.Spec.ExternalResource,
Namespace: req.Namespace,
}, &internal); err != nil {
return ctrl.Result{}, err
}
// do something with internal and external here
return ctrl.Result{}, nil
}
</code></pre>
<p>The problem with this is, that when the internal resource does not change, no reconciliation event will be triggered, even when the external resource has changed. To work around that, we can trigger the reconciliation by watching the external resource. Note the <code>Watches</code> method:</p>
<pre class="lang-golang prettyprint-override"><code>func (r *InternalReconciler) SetupWithManager(mgr ctrl.Manager) error {
return ctrl.NewControllerManagedBy(mgr).
        For(&examplev1.Internal{}).
Watches(
            &source.Kind{Type: &otherv2.External{}},
handler.EnqueueRequestsFromMapFunc(r.triggerReconcileBecauseExternalHasChanged),
builder.WithPredicates(predicate.ResourceVersionChangedPredicate{}),
).
Complete(r)
}
</code></pre>
<p>In order to know for which internal object we should trigger an event, we use a mapping function to look up all the internal that have a reference to the external resource.</p>
<pre class="lang-golang prettyprint-override"><code>func (r *InternalReconciler) triggerReconcileBecauseExternalHasChanged(o client.Object) []reconcile.Request {
usedByInternals := &examplev1.InternalList{}
listOps := &client.ListOptions{
FieldSelector: fields.OneTermEqualSelector(".spec.ExternalResource", o.GetName()),
Namespace: o.GetNamespace(),
}
err := r.List(context.TODO(), usedByInternals, listOps)
if err != nil {
return []reconcile.Request{}
}
requests := make([]reconcile.Request, len(usedByInternals.Items))
for i, item := range usedByInternals.Items {
requests[i] = reconcile.Request{
NamespacedName: types.NamespacedName{
Name: item.GetName(),
Namespace: item.GetNamespace(),
},
}
}
return requests
}
</code></pre>
<hr />
<p>Since you updated your question, I suggest doing something like below.</p>
<p>I am creating a new project and 2 controllers. Note on the second controller command no resource is created along with the controller. this is because the controller
will watch an external resource.</p>
<pre class="lang-bash prettyprint-override"><code>mkdir demo && cd demo
go mod init example.io/demo
kubebuilder init --domain example.io --repo example.io/demo --plugins=go/v4-alpha
kubebuilder create api --group=demo --version v1 --kind Internal --controller --resource
kubebuilder create api --group=other --version v2 --kind External --controller --resource=false
</code></pre>
<pre class="lang-bash prettyprint-override"><code>$ tree controllers
controllers
├── external_controller.go
├── internal_controller.go
└── suite_test.go
</code></pre>
<p>Now we need some shared logic, for example by adding this to the controllers package. We will call this from both reconcilers.</p>
<pre class="lang-golang prettyprint-override"><code>// the interface may need tweaking
// depending on what you want to do with
// the reconciler
type reconciler interface {
client.Reader
client.Writer
client.StatusClient
}
func sharedLogic(r reconciler, kobj *demov1.Internal) (ctrl.Result, error) {
// do your shared logic here operating on the internal object struct
// this works out because the external controller will call this passing the
// internal object
return ctrl.Result{}, nil
}
</code></pre>
<p>Here is an example for the internal reconciler.</p>
<pre class="lang-golang prettyprint-override"><code>func (r *InternalReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
_ = log.FromContext(ctx)
obj := demov1.Internal{}
if err := r.Get(ctx, req.NamespacedName, &obj); err != nil {
return ctrl.Result{}, err
}
return sharedLogic(r, &obj)
}
</code></pre>
<p>And in the external reconciler we do the same.</p>
<pre class="lang-golang prettyprint-override"><code>func (r *ExternalReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
_ = log.FromContext(ctx)
// note, we can use the internal object here as long as the external object
// does contain the same fields we want. That means when unmarshalling the extra
// fields are dropped. If this cannot be done, you could first unmarshal into the external
// resource and then assign the fields you need to the internal one, before passing it down
obj := demov1.Internal{}
if err := r.Get(ctx, req.NamespacedName, &obj); err != nil {
return ctrl.Result{}, err
}
return sharedLogic(r, &obj)
}
func (r *ExternalReconciler) SetupWithManager(mgr ctrl.Manager) error {
return ctrl.NewControllerManagedBy(mgr).
// note the external resource is imported from another project
// you may be able to watch this without import by creating a minimal
// type with the right GKV
        For(&otherv2.External{}).
Complete(r)
}
</code></pre>
|
<p>How to deploy on K8 via Pulumi using the ArgoCD Helm Chart?</p>
<p>Pulumi up Diagnostics:</p>
<pre><code> kubernetes:helm.sh/v3:Release (argocd):
error: failed to create chart from template: chart requires kubeVersion: >=1.22.0-0 which is incompatible with Kubernetes v1.20.0
</code></pre>
<p>THE CLUSTER VERSION IS: v1.23.0 verified on AWS. And NOT 1.20.0</p>
<p>ArgoCD install yaml used with CRD2Pulumi: <a href="https://raw.githubusercontent.com/argoproj/argo-cd/master/manifests/core-install.yaml" rel="nofollow noreferrer">https://raw.githubusercontent.com/argoproj/argo-cd/master/manifests/core-install.yaml</a></p>
<p>Source:</p>
<pre><code>...
cluster = eks.Cluster("argo-example") # version="1.23"
# Cluster provider
provider = k8s.Provider(
"eks",
kubeconfig=cluster.kubeconfig.apply(lambda k: json.dumps(k))
#kubeconfig=cluster.kubeconfig
)
ns = k8s.core.v1.Namespace(
'argocd',
metadata={
"name": "argocd",
},
opts=pulumi.ResourceOptions(
provider=provider
)
)
argo = k8s.helm.v3.Release(
"argocd",
args=k8s.helm.v3.ReleaseArgs(
chart="argo-cd",
namespace=ns.metadata.name,
repository_opts=k8s.helm.v3.RepositoryOptsArgs(
repo="https://argoproj.github.io/argo-helm"
),
values={
"server": {
"service": {
"type": "LoadBalancer",
}
}
},
),
opts=pulumi.ResourceOptions(provider=provider, parent=ns),
)
</code></pre>
<p>Any ideas as to fixing this oddity between the version error and the actual cluster version?</p>
<p>I've tried:</p>
<ul>
<li>Deleting everything and starting over.</li>
<li>Updating to the latest ArgoCD install yaml.</li>
</ul>
| <p>I could reproduce your issue, though I am not quite sure what causes the mismatch between versions. Better open an issue at <a href="https://github.com/pulumi/pulumi-kubernetes" rel="nofollow noreferrer">pulumi's k8s repository</a>.</p>
<p>Looking at the history of <a href="https://github.com/argoproj/argo-helm/blame/main/charts/argo-cd/Chart.yaml" rel="nofollow noreferrer">https://github.com/argoproj/argo-helm/blame/main/charts/argo-cd/Chart.yaml</a>, you can see that the kubeversion requirement has been added after 5.9.1. So using that version successfully deploys the helm chart. E.g.</p>
<pre><code>import * as k8s from "@pulumi/kubernetes";
const namespaceName = "argo";
const namespace = new k8s.core.v1.Namespace("namespace", {
metadata: {
name: namespaceName,
}
});
const argo = new k8s.helm.v3.Release("argo", {
repositoryOpts: {
repo: "https://argoproj.github.io/argo-helm"
},
chart: "argo-cd",
version: "5.9.1",
namespace: namespace.metadata.name,
})
</code></pre>
<p>(Not Recommended) Alternatively, you could also clone the source code of the chart, comment out the kubeVersion requirement in <code>Chart.yaml</code> and install the chart from your local path.</p>
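<p>In the Python program from your question, the same pin goes into <code>ReleaseArgs</code> via its <code>version</code> argument (everything else below is copied from your code):</p>
<pre><code>argo = k8s.helm.v3.Release(
    "argocd",
    args=k8s.helm.v3.ReleaseArgs(
        chart="argo-cd",
        version="5.9.1",
        namespace=ns.metadata.name,
        repository_opts=k8s.helm.v3.RepositoryOptsArgs(
            repo="https://argoproj.github.io/argo-helm"
        ),
        values={
            "server": {
                "service": {
                    "type": "LoadBalancer",
                }
            }
        },
    ),
    opts=pulumi.ResourceOptions(provider=provider, parent=ns),
)
</code></pre>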
|
<p>I have a python program launching a batch job. The job outputs a json file, I'd like to know what is the easiest way to get this result back to the python program that launched it.</p>
<p>So far I thought of these solutions:</p>
<ul>
<li><p>Upload the json file to S3 (pretty heavy)</p>
</li>
<li><p>Display it in the pod logs then read the logs from the python program (pretty hacky/dirty)</p>
</li>
<li><p>Mount a PVC, launch a second pod with the same PVC, and create a shared disk between this pod and the job (pretty overkill)</p>
</li>
</ul>
<p>The json file is pretty lightweight. Isn't there a solution to do something like adding some metadata to the pod when the job completes? The python program can then just poll those metadata.</p>
| <p>An easy way that does not involve any other databases or pods is to run the job's workload as an init container, mount a volume that is shared by both containers, and read the JSON file in the Python program running in the main container. (Also, this approach does not need a persistent volume, just a shared one.) See this example:</p>
<p><a href="https://kubernetes.io/docs/tasks/access-application-cluster/communicate-containers-same-pod-shared-volume/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/access-application-cluster/communicate-containers-same-pod-shared-volume/</a></p>
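<p>A minimal sketch of that pattern (image names and paths are placeholders; the init container writes the file, the main container reads it):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: job-with-result
spec:
  volumes:
    - name: shared-data
      emptyDir: {}
  initContainers:
    - name: batch-job
      image: my-batch-image        # placeholder: this container writes /output/result.json
      volumeMounts:
        - name: shared-data
          mountPath: /output
  containers:
    - name: consumer
      image: python:3.11
      command: ["python", "-c", "import json; print(json.load(open('/output/result.json')))"]
      volumeMounts:
        - name: shared-data
          mountPath: /output
</code></pre>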
<p>Also, depending on the complexity of these jobs, I would recommend taking a look at Argo Workflows or any DAG-based job scheduler.</p>
|
<p>We have a Keycloak 15.1.1 deployment on Kubernetes with multiple replicas with the AWS RDS Postgres 13 Database backend. I did not find any upgrade guide or experience of other people regarding this setup or even with Kubernetes with Postgres using PVC upgrading Keycloak with multiple major version changes.</p>
<p>Does anyone have any experience with the Keycloak upgrade on Kubernetes?</p>
<p>I went through the change log and was able to run Keycloak locally using docker-compose only in HTTP mode as we terminate SSL at the reverse proxy.</p>
<p>From upgrade instructions from Keycloak <a href="https://www.keycloak.org/docs/latest/upgrading/index.html" rel="nofollow noreferrer">documentation</a> is the following strategy is the right one without losing any data</p>
<p>Update the docker image with a new image running only in HTTP mode in our helm charts</p>
<p>Initially start the new deployment with only a single replica so that the database schema changes are applied</p>
<pre><code>kc.sh start --spi-connections-jpa-default-migration-strategy=update
</code></pre>
<p>When I tried to upgrade my local deployment with the above command, Keycloak was not accessible until the next restart.</p>
<p>Restart the deployment with more replicas with command</p>
<pre><code>kc.sh start --optimized
</code></pre>
| <p>I got the answer from the Keycloak GitHub support forums <a href="https://github.com/keycloak/keycloak/discussions/14682" rel="nofollow noreferrer">https://github.com/keycloak/keycloak/discussions/14682</a>.</p>
<p>Running <code>kc.sh start</code> automatically upgrades the DB on the first start, and the first container running this command automatically locks the DB until the migration is complete. So it's not required to change my helm chart.</p>
|
<p>I've been trying for a couple of days to set up TLS on my ingress controller.
My Kubernetes cluster is hosted on Azure (AKS).</p>
<p>When I check the Ingress controller logs, I get this :</p>
<blockquote>
<p>W1104 08:49:29.472478 7 backend_ssl.go:45] Error obtaining X.509 certificate: unexpected error creating SSL Cert: no valid PEM formatted block found</p>
</blockquote>
<blockquote>
<p>W1104 08:49:29.472595 7 controller.go:1334] Error getting SSL certificate "myapp-test/myapp-tls": local SSL certificate myapp-test/myapp-tls was not found. Using default certificate</p>
</blockquote>
<blockquote>
<p>W1104 08:49:29.472611 7 controller.go:1334] Error getting SSL certificate "myapp-test/myapp-tls": local SSL certificate myapp-test/myapp-tls was not found. Using default certificate</p>
</blockquote>
<p>Here is my myapp-ingress.yml</p>
<pre><code>kind: Ingress
apiVersion: networking.k8s.io/v1
metadata:
name: myapp-test
namespace: myapp-test
spec:
ingressClassName: nginx
tls:
- hosts:
- test-app.myapp.io
- test-api.myapp.io
secretName: myapp-tls
rules:
- host: test-app.myapp.io
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: myapp-frontend
port:
number: 80
- host: test-api.myapp.io
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: myapp-backend-monolith
port:
number: 80
</code></pre>
<p>Here is my Secret.yml</p>
<pre><code>kind: Secret
apiVersion: v1
metadata:
name: myapp-tls
namespace: myapp-test
data:
tls.crt: >-
BASE64 ENCODED CRT FILE CONTENT
tls.key: >-
BASE64 ENCODED KEY FILE CONTENT
type: kubernetes.io/tls
</code></pre>
<p>I actually tried creating the ingress and/or secret in every namespace, but the ingress controller still can't find the SSL certificate.</p>
| <p>based on below error, appears your cert format is not right.</p>
<blockquote>
<p>no valid PEM formatted block found</p>
</blockquote>
<p>Is your original cert in <a href="https://www.ssl.com/guide/pem-der-crt-and-cer-x-509-encodings-and-conversions/" rel="nofollow noreferrer">PEM</a> format? You can decode the cert data in the secret and double-check it using a command like the one below (you might need to install the <code>jq</code> command, or you can copy the tls.crt data manually and decode it with the <code>base64 -d</code> command):</p>
<pre><code>kubectl get secret your-secret-name -n your-namespace -o json | jq '."data"."tls.crt"'| sed 's/"//g'| base64 -d -
</code></pre>
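<p>You can also sanity-check the original files on disk before base64-encoding them (the second command assumes an RSA key):</p>
<pre><code>openssl x509 -in tls.crt -noout -text
openssl rsa -in tls.key -check -noout
</code></pre>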
<p>Below is what I did using a self-signed test cert/key file.</p>
<pre><code> kubectl get secret mytest-ssl-secret -o json
</code></pre>
<blockquote>
<pre><code>{
"apiVersion": "v1",
"data": {
"tls.crt": "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUVGVENDQXYyZ0F3SUJBZ0lVWG12blRrcGtqMlhiQkx...tLS0K",
"tls.key": "LS0tLS1CRUdJTiBQUklWQVRFIEtFWS0tLS0tCk1JSUV2Z0lCQURBTkJna3Foa2lHOXcwQkFRRUZBQVNDQkt...tLS0K"
},
"kind": "Secret",
"metadata": {
"creationTimestamp": "2022-11-05T04:22:12Z",
"name": "mytest-ssl-secret",
"namespace": "default",
"resourceVersion": "2024434",
"uid": "d63dce3d-8e5c-478a-be9e-815e59e4bd21"
},
"type": "kubernetes.io/tls"
}
</code></pre>
</blockquote>
<pre><code>kubectl get secret mytest-ssl-secret -o json | jq '."data"."tls.crt"'| sed 's/"//g'| base64 -d -
</code></pre>
<blockquote>
<pre><code>-----BEGIN CERTIFICATE-----
MIIEFTCCAv2gAwIBAgIUXmvnTkpkj2XbBLRJo+mpBfp4mvAwDQYJKoZIhvcNAQEL
BQAwgZkxCzAJBgNVBAYTAkNOMRAwDgYDVQQIDAdKaWFuZ3N1MQ0wCwYDVQQHDARX
dXhpMRowGAYDVQQKDBFUZXN0IGxpbWl0ZWQgSW5jLjELMAkGA1UECwwCSVQxHDAa
BgNVBAMME3Rlc3QwMDguZXhhbXBsZS5jb20xIjAgBgkqhkiG9w0BCQEWE3Rlc3Qw
...
XO8B+zyFRP1PZnCAkeUdvh6rpMbVHWvfM0QOG4m736b9FK1VmjTG4do=
-----END CERTIFICATE-----
</code></pre>
</blockquote>
|
<p>I'm trying to work with Strimzi to create a kafka-connect cluster and I'm encountering the following error:</p>
<pre><code>unable to recognize "kafka-connect.yaml": no matches for kind "KafkaConnect" in
version "kafka.strimzi.io/v1beta2"
</code></pre>
<p>Here's the kafka-connect.yaml I have:</p>
<pre><code>apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
name: kafka-connect
namespace: connect
annotations:
strimzi.io/use-connector-resources: "true"
spec:
version: 2.4.0
replicas: 1
bootstrapServers: host:port
tls:
trustedCertificates:
- secretName: connectorsecret
certificate: cert
config:
group.id: o
offset.storage.topic: strimzi-connect-cluster-offsets
config.storage.topic: strimzi-connect-cluster-configs
status.storage.topic: strimzi-connect-cluster-status
sasl.mechanism: scram-sha-256
security.protocol: SASL_SSL
secretName: connectorsecret
sasl.jaas.config: org.apache.kafka.common.security.scram.ScramLoginModule required username=username password=password
</code></pre>
<p>Then I tried to apply the config via <code>kubectl apply -f kafka-connect.yaml</code></p>
<p>Are there any prerequisites for creating resources with Strimzi, or is there something I'm doing wrong?</p>
| <p>I think there are two possibilities:</p>
<ol>
<li>You did not installed the CRD resources</li>
<li>You are using Strimzi version which is too old and does not support the <code>v1beta2</code> API</li>
</ol>
<p>Judging by the fact that you are trying to use Kafka 2.4.0, I guess the second option is more likely. If you really want to do that, you should make sure to use the documentation, examples and everything else from the Strimzi version you use - they will be using one of the older APIs (<code>v1alpha1</code> or <code>v1beta1</code>).</p>
<p>But in general, I would <strong>strongly</strong> recommend you to use the latest version of Strimzi and not a version which is several years old.</p>
<hr />
<p>One more note: If you want to configure the SASL authentication for your Kafka Connect cluster, you should do it in the <code>.spec.authentication</code> section of the custom resource and not in <code>.spec.config</code>.</p>
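<p>As a sketch of what that could look like (field names follow the Strimzi <code>KafkaConnect</code> schema; double-check them against the API version of the Strimzi release you install):</p>
<pre><code>spec:
  bootstrapServers: host:port
  tls:
    trustedCertificates:
      - secretName: connectorsecret
        certificate: cert
  authentication:
    type: scram-sha-256
    username: username
    passwordSecret:
      secretName: connectorsecret
      password: password-key   # the key inside the secret that holds the password
</code></pre>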
|
<p>I am starting up a cluster using minikube and found the node to be started as a control-plane node, but interestingly I found the taints to be none. Because there are no taints, I see pods being scheduled on this node.</p>
<pre><code>NAME STATUS ROLES AGE VERSION
minikube Ready control-plane 76d v1.24.3
minikube-m02 Ready <none> 18m v1.24.3
</code></pre>
<p>And below is the taint information,</p>
<pre><code> node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Sun, 21 Aug 2022 22:27:29 -0500
Taints: <none>
</code></pre>
<p>Any idea why there is this behaviour ?</p>
| <p>This is the default behavior of minikube, because tainting the control-plane node is not a Kubernetes requirement, especially for dev/test environments.</p>
<p>You may manually taint your master node using:</p>
<pre><code>kubectl taint nodes minikube node-role.kubernetes.io/master:NoSchedule
</code></pre>
<p>After you apply the taint, drain the master node so the pods scheduled on it shift to the worker node.</p>
<pre><code>kubectl drain minikube --ignore-daemonsets
</code></pre>
|
<p>OpenShift ConfigMap for Jenkins pod template:</p>
<pre><code><org.csanchez.jenkins.plugins.kubernetes.PodAnnotation>
<key>vault.hashicorp.com/agent-limits-ephemeral</key>
<value>100Gi</value>
</org.csanchez.jenkins.plugins.kubernetes.PodAnnotation>
</code></pre>
<p>Spits out:</p>
<pre><code>apiVersion: "v1"
kind: "Pod"
metadata:
annotations:
vault.hashicorp.com/agent-limits-ephemeral: "100Gi"
</code></pre>
<p>I'm trying to achieve this:</p>
<pre><code>apiVersion: "v1"
kind: "Pod"
metadata:
annotations:
vault.hashicorp.com/agent-limits-ephemeral: ""
</code></pre>
<p>But I don't know what to put here:</p>
<pre><code><org.csanchez.jenkins.plugins.kubernetes.PodAnnotation>
<key>vault.hashicorp.com/agent-limits-ephemeral</key>
<value>???</value>
</org.csanchez.jenkins.plugins.kubernetes.PodAnnotation>
</code></pre>
<p>These do not seem to work:</p>
<pre><code><value>""</value>
<value>''</value>
<value></value>
<value/>
</code></pre>
| <p>You can remove the <code>value</code> tag:</p>
<pre><code><org.csanchez.jenkins.plugins.kubernetes.PodAnnotation>
<key>vault.hashicorp.com/agent-limits-ephemeral</key>
</org.csanchez.jenkins.plugins.kubernetes.PodAnnotation>
</code></pre>
|
<p>I have set up a backend and a frontend service running on Kubernetes. The frontend would be <code>www.<myDomain>.com</code> and the backend would be <code>api.<myDomain>.com</code>.</p>
<p>I need to expose and secure both services. I wish to use one ingress. I want to use free certificates from let's encrypt + cert manager. I guess a certificate for <code><myDomain>.com</code> should cover both <code>www.</code> and <code>api.</code>.</p>
<p>Pretty normal use case, right? But when all these normal things come together, I couldn't figure out the combined yaml. I was able to get a single service, <code>www.<myDomain>.com</code>, working with https. Things don't work when I try to add <code>api.<myDomain>.com</code>.</p>
<p>I'm using GKE, but this doesn't seem to be a platform-related question. Now creating the ingress takes forever, and the following event is repeated again and again:</p>
<pre><code>Error syncing to GCP: error running load balancer syncing routine: loadbalancer <some id here> does not exist: googleapi: Error 404: The resource 'projects/<project>/global/sslCertificates/<some id here>' was not found, notFound
</code></pre>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: web-ingress
annotations:
kubernetes.io/ingress.class: gce
kubernetes.io/ingress.allow-http: "true"
cert-manager.io/issuer: letsencrypt-staging
spec:
tls:
- secretName: web-ssl
hosts:
- <myDomain>.com
rules:
- host: "www.<myDomain>.com"
http:
paths:
- pathType: Prefix
path: "/"
backend:
service:
name: angular-service
port:
number: 80
- host: "api.<myDomain>.com"
http:
paths:
- pathType: Prefix
path: "/"
backend:
service:
name: spring-boot-service
port:
number: 8080
</code></pre>
| <p>I faced the same requirement as you. Change the <code>tls</code> section from:</p>
<pre><code> tls:
- secretName: web-ssl
hosts:
- <myDomain>.com
</code></pre>
<p>to:</p>
<pre><code> tls:
- hosts:
- www.<myDomain>.com
secretName: web-ssl
- hosts:
- api.<myDomain>.com
secretName: web-ssl
</code></pre>
<p>Hope this helps to solve the issue!</p>
|
<p>I created PV as follows:</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: foo-pv
spec:
storageClassName: "my-storage"
claimRef:
name: foo-pvc
namespace: foo
</code></pre>
<p>Why do we need to give storageClassName in the PV? When the StorageClass creates the PV, why give storageClassName in the PV?</p>
<p>Can someone help me to understand this?</p>
| <p>You can have 2 types of PVs:</p>
<ol>
<li>dynamically provisioned by a StorageClass</li>
<li>manually (statically) created by admins</li>
</ol>
<p><strong>Dynamically</strong> -> this is often used in the cloud, for instance when you want to mount an Azure blob/file share into a pod. In this case you don't have control over the PV name; the StorageClass creates and binds randomly named PVs.</p>
<p><a href="https://i.stack.imgur.com/m5wOi.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/m5wOi.png" alt="enter image description here" /></a></p>
<p><strong>Manually</strong> -> this gives more control: you can assign a specific name to the PV and a specific StorageClass with a Retain reclaim policy (do not delete the PV after it is released by a pod). As a result it is much easier to reuse that PV, knowing its name and StorageClass membership.</p>
<p><a href="https://i.stack.imgur.com/rjj56.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/rjj56.png" alt="enter image description here" /></a></p>
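<p>A minimal sketch of the manual case (capacity, access mode and the hostPath backend are just example values); the <code>storageClassName</code> on the claim has to match the one on the PV for the two to be bound:</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: foo-pv
spec:
  storageClassName: "my-storage"
  persistentVolumeReclaimPolicy: Retain
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /data/foo
  claimRef:
    name: foo-pvc
    namespace: foo
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: foo-pvc
  namespace: foo
spec:
  storageClassName: "my-storage"   # must match the PV
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
</code></pre>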
|
<p>Crontab to run a job every minute from 11pm to 7:30am</p>
<p>I have this so far which is every minute from 11pm to 7:00am
the problem is the half hour.</p>
<pre><code>* 23,0-7 * * *
</code></pre>
<p>You can play around with it here <a href="https://crontab.guru/#*_23,0-7_*_*_*" rel="nofollow noreferrer">crontab_guru</a>
Any ideas?</p>
| <p>@Dunski: I have checked this in many ways; a single expression like <code>*,0-30 23,0-7 * * *</code> still fires until 07:59 and cannot stop at 07:30 am.</p>
<p>As @jordanm suggested, the only way is to split it into two crontab entries:</p>
<p>11 pm to 6:59 am with <code>* 23,0-6 * * *</code> (“At every minute past hour 23 and every hour from 0 through 6.”), and then</p>
<p>7 am to 7:30 am with <code>0-30 7 * * *</code> (“At every minute from 0 through 30 past hour 7.”).</p>
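<p>So the crontab would contain two entries for the same command (<code>/path/to/job</code> is just a placeholder):</p>
<pre><code># every minute from 23:00 to 06:59
* 23,0-6 * * * /path/to/job
# every minute from 07:00 to 07:30
0-30 7 * * * /path/to/job
</code></pre>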
|
<p>We're creating dynamic test environments for our developers. Each environment goes into one namespace called <code>test-<something></code>, where <code><something></code> is entered by the developer when creating the environment (we use Gitlab-CI for the automation).</p>
<p>We want to grant them limited access to the K8s API to see deployments, exec into pods for instance. So the plan is to apply a (cluster)role (yet to decide) like this.</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
namespace: "{{ ns }}"
name: "test-{{ ns }}"
rules:
- apiGroups: ["apps"]
resources: ["deploys"]
verbs: ["get", "list"]
- apiGroups: [""]
resources: ["pods", "pods/log"]
verbs: ["get", "list"]
- apiGroups: [""]
resources: ["pods/exec"]
verbs: ["create"]
</code></pre>
<p>However we preferably don't want to apply it for all namespaces but only the <code>test-*</code> ones.</p>
<p>We could add the creation of the namespaced role and rolebinding during the app deploy, but that would mean granting our Gitlab-CI runner the permission to create and delete roles and rolebindings. We're concerned by the security implications of this and the possible privilege escalations.</p>
<ul>
<li>Is it possible to create a clusterrolebinding limited to a regexp-ed set of namespaces?</li>
<li>Alternatively, if we want to grant the permissions via the automation, is it possible to limit the namespaces where the rolebindings can be created by the runner?</li>
</ul>
<p>We looked at the docs but couldn't find such things</p>
<p>Worst case scenario is probably to go for the clusterrolebinding route and not give too many privileges to the automation. So asking if we can find a better way</p>
<p>Thanks in advance</p>
| <p>I also stumbled into this problem, and <a href="https://kubernetes.io/blog/2020/08/14/introducing-hierarchical-namespaces/" rel="nofollow noreferrer">Hierarchical Namespaces</a> seem like a decent solution: you grant the permissions once in a single "static" parent namespace, and every namespace created under it afterwards inherits them. Hope it helps.</p>
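<p>As a rough sketch (assuming the HNC controller and its <code>kubectl hns</code> plugin are installed; check the HNC docs for your version): keep the Role/RoleBinding in one parent namespace and create each test namespace as a child of it, so the RBAC objects get propagated:</p>
<pre><code># the Role/RoleBinding live in the parent namespace, e.g. "test-envs"
# each dynamic environment is created as a child and inherits them:
kubectl hns create test-something -n test-envs
</code></pre>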
|
<p>I want to create a controller that listens to pod events; when a new pod is created (by a deployment), it should add all labels belonging to the deployment to the created pod. Is this possible at scale with <code>client-go</code>?</p>
| <p>In order to observe pod events, you need to use informers. Informers have built-in optimizations to avoid overloading API servers.</p>
<p>There is a patch method available in the PodInterface that allows you to add a label to a pod.</p>
<p>Here is sample code for your reference. In the main function the informer is set up, and the <code>handleAdd</code> function implements the labelling logic.</p>
<pre><code>package main
import (
"context"
"encoding/json"
"fmt"
"time"
v1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/labels"
"k8s.io/apimachinery/pkg/types"
"k8s.io/apimachinery/pkg/util/wait"
"k8s.io/client-go/informers"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/rest"
"k8s.io/client-go/tools/cache"
)
type patchStringValue struct {
Op string `json:"op"`
Path string `json:"path"`
Value string `json:"value"`
}
func main() {
clientSet := GetK8sClient()
labelOptions := informers.WithTweakListOptions(func(opts *metav1.ListOptions) {
opts.LabelSelector = GetLabelSelectorForDeployment("deployment-name", "namespace-name")
})
    factory := informers.NewSharedInformerFactoryWithOptions(clientSet, 10*time.Second, informers.WithNamespace("namespace-name"), labelOptions)
    podInformer := factory.Core().V1().Pods()
podInformer.Informer().AddEventHandler(
cache.ResourceEventHandlerFuncs{
AddFunc: handleAdd,
},
)
    factory.Start(wait.NeverStop)
    factory.WaitForCacheSync(wait.NeverStop)
}
func GetLabelSelectorForDeployment(Name string, Namespace string) string {
clientSet := GetK8sClient()
k8sClient := clientSet.AppsV1()
deployment, _ := k8sClient.Deployments(Namespace).Get(context.Background(), Name, metav1.GetOptions{})
labelSet := labels.Set(deployment.Spec.Selector.MatchLabels)
return string(labelSet.AsSelector().String())
}
func handleAdd(obj interface{}) {
k8sClient := GetK8sClient().CoreV1()
pod := obj.(*v1.Pod)
fmt.Println("Pod", pod.GetName(), pod.Spec.NodeName, pod.Spec.Containers)
payload := []patchStringValue{{
        Op:    "add",   // "add" works even when the label does not exist yet; "replace" would fail in that case
Path: "/metadata/labels/testLabel",
Value: "testValue",
}}
payloadBytes, _ := json.Marshal(payload)
_, updateErr := k8sClient.Pods(pod.GetNamespace()).Patch(context.Background(), pod.GetName(), types.JSONPatchType, payloadBytes, metav1.PatchOptions{})
if updateErr == nil {
fmt.Println(fmt.Sprintf("Pod %s labelled successfully.", pod.GetName()))
} else {
fmt.Println(updateErr)
}
}
func GetK8sClient() *kubernetes.Clientset {
config, err := rest.InClusterConfig()
if err != nil {
panic(err.Error())
}
// creates the clientset
clientset, err := kubernetes.NewForConfig(config)
if err != nil {
panic(err.Error())
}
return clientset
}
</code></pre>
|
<p>I'm trying to connect to a cluster and I'm getting the following error:</p>
<pre><code>gcloud container clusters get-credentials cluster1 --region europe-west2 --project my-project
Fetching cluster endpoint and auth data.
CRITICAL: ACTION REQUIRED: gke-gcloud-auth-plugin, which is needed for continued use of kubectl, was not found or is not executable.
Install gke-gcloud-auth-plugin for use with kubectl by following https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke
kubeconfig entry generated for dbcell-cluster.
</code></pre>
<p>I have installed Google Cloud SDK 400, kubectl 1.22.12, gke-gcloud-auth-plugin 0.3.0, and also set up <code>~/.bashrc</code> with</p>
<p><code>export USE_GKE_GCLOUD_AUTH_PLUGIN=True</code></p>
<pre><code>gke-gcloud-auth-plugin --version
Kubernetes v1.24.0-alpha+f42d1572e39979f6f7de03bd163f8ec04bc7950d
</code></pre>
<p>but when I try to connect to the cluster I always get the same error. Any idea?</p>
<p>Thanks</p>
<hr />
<p>The cluster exists in that region, and I also verified the env variable</p>
<p>with</p>
<pre><code>echo $USE_GKE_GCLOUD_AUTH_PLUGIN
True
</code></pre>
<p>I installed the <code>gke-gcloud-auth-plugin</code> using <code>gcloud components install</code>... I do not know what more I can check.</p>
<p><a href="https://i.stack.imgur.com/55mjc.png" rel="nofollow noreferrer">gcloud components list</a></p>
| <p>I solved the same problem by removing my current <code>kubeconfig</code> context for GCP.</p>
<ol>
<li><p>Get your context name:</p>
<pre><code>kubectl config get-contexts
</code></pre>
</li>
<li><p>Delete the context:</p>
<pre><code>kubectl config delete-context CONTEXT_NAME
</code></pre>
</li>
<li><p>Reconfigure the credentials:</p>
<pre><code>gcloud container clusters get-credentials CLUSTER_NAME --region REGION --project PROJECT
</code></pre>
</li>
</ol>
<p>The warning message should be gone by now.</p>
|
<p>I am trying to deploy <strong>istio's sample bookinfo application</strong> using the below command:</p>
<pre><code>kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml
</code></pre>
<p>from <a href="https://istio.io/latest/docs/setup/getting-started/#bookinfo" rel="nofollow noreferrer">here</a></p>
<p>but each time I am getting <strong>ImagePullBackoff error</strong> like this:</p>
<pre><code>NAME READY STATUS RESTARTS AGE
details-v1-c74755ddf-m878f 2/2 Running 0 6m32s
productpage-v1-778ddd95c6-pdqsk 2/2 Running 0 6m32s
ratings-v1-5564969465-956bq 2/2 Running 0 6m32s
reviews-v1-56f6655686-j7lb6 1/2 ImagePullBackOff 0 6m32s
reviews-v2-6b977f8ff5-55tgm 1/2 ImagePullBackOff 0 6m32s
reviews-v3-776b979464-9v7x5 1/2 ImagePullBackOff 0 6m32s
</code></pre>
<p>For error details, I have run :</p>
<pre><code>kubectl describe pod reviews-v1-56f6655686-j7lb6
</code></pre>
<p>Which returns these:</p>
<pre><code>Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 7m41s default-scheduler Successfully assigned default/reviews-v1-56f6655686-j7lb6 to minikube
Normal Pulled 7m39s kubelet Container image "docker.io/istio/proxyv2:1.15.3" already present on machine
Normal Created 7m39s kubelet Created container istio-init
Normal Started 7m39s kubelet Started container istio-init
Warning Failed 5m39s kubelet Failed to pull image "docker.io/istio/examples-bookinfo-reviews-v1:1.17.0": rpc error: code = Unknown desc = context deadline exceeded
Warning Failed 5m39s kubelet Error: ErrImagePull
Normal Pulled 5m39s kubelet Container image "docker.io/istio/proxyv2:1.15.3" already present on machine
Normal Created 5m39s kubelet Created container istio-proxy
Normal Started 5m39s kubelet Started container istio-proxy
Normal BackOff 5m36s (x3 over 5m38s) kubelet Back-off pulling image "docker.io/istio/examples-bookinfo-reviews-v1:1.17.0"
Warning Failed 5m36s (x3 over 5m38s) kubelet Error: ImagePullBackOff
Normal Pulling 5m25s (x2 over 7m38s) kubelet Pulling image "docker.io/istio/examples-bookinfo-reviews-v1:1.17.0"
</code></pre>
<p>Do I need to build dockerfile first and push it to the local repository? There are no clear instructions there or I failed to find any.</p>
<p>Can anybody help?</p>
| <p>If you check in dockerhub the image is there:</p>
<p><a href="https://hub.docker.com/r/istio/examples-bookinfo-reviews-v1/tags" rel="nofollow noreferrer">https://hub.docker.com/r/istio/examples-bookinfo-reviews-v1/tags</a></p>
<p>So the error that you need to deal with is <code>context deadline exceeded</code> while trying to pull it from dockerhub. This is likely a networking error (a generic Go error saying it took too long), depending on where your cluster is running you can do manually a docker pull from the nodes and that should work.</p>
<p>EDIT: for minikube do a minikube ssh and then a docker pull</p>
|
<p>I want to use the direct translation from k8s secret-keys to SpringBoot properties.
Therefore I have a helm chart (but similar with plain k8s):</p>
<pre><code>apiVersion: v1
data:
app.entry[0].name: {{.Values.firstEntry.name | b64enc }}
kind: Secret
metadata:
name: my-secret
type: Opaque
</code></pre>
<p>With that my intention is that this behaves as if I'd set the spring property file:</p>
<pre><code>app.entry[0].name: "someName"
</code></pre>
<p>But when I do this I get an error:</p>
<pre><code> Invalid value: "[app.entry[0].name]": a valid config key must consist of alphanumeric characters, '-', '_' or '.' (e.g. 'key.name', or 'KEY_NAME', or 'key-name', regex used for validation is '[-._a-zA-Z0-9]+'),
</code></pre>
<p>So, <code>[0]</code> seems not to be allowed as a key name for the secrets.</p>
<p><strong>Any idea how I can inject an array entry into spring directly from a k8s secret name?</strong></p>
<p>Shooting around wildly I tried these that all failed:</p>
<ul>
<li><code> app.entry[0].name: ...</code> -- k8s rejects '['</li>
<li><code> app.entry__0.name: ...</code> -- k8s ok, but Spring does not recognize this as array (I think)</li>
<li><code> "app.entry[0].name": ...</code> -- k8s rejects '['</li>
<li><code> 'app.entry[0].name': ...</code> -- k8s rejects '['</li>
</ul>
<p>You should be able to use environment variables as described in <a href="https://github.com/spring-projects/spring-boot/wiki/Relaxed-Binding-2.0#lists-1" rel="nofollow noreferrer">spring-boot relaxed binding</a>.</p>
<p>The <code>app.entry[0].name</code> property will be set from the <code>APP_ENTRY_0_NAME</code> environment variable, which can be set in your deployment.</p>
<p>Using secret like:</p>
<pre><code>apiVersion: v1
data:
value: {{.Values.firstEntry.name | b64enc }}
kind: Secret
metadata:
name: my-secret
type: Opaque
</code></pre>
<p>and then use it with</p>
<pre><code> env:
- name: APP_ENTRY_0_NAME
valueFrom:
secretKeyRef:
name: my-secret
key: value
</code></pre>
|
<p>For example, I want to list all the possible values I can put in the yaml file to create a <code>Pod</code>, and also look at the meanings of those values, e.g. the containers section where you put the details of the container.</p>
| <p>You can see field documentation for a Pod with</p>
<pre><code>kubectl explain Pod
</code></pre>
<p>then you can look deeper into the structure with e.g. (or any other field)</p>
<pre><code>kubectl explain Pod.spec
</code></pre>
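<p>You can also drill into nested fields, or print the whole field tree at once with <code>--recursive</code>:</p>
<pre><code>kubectl explain Pod.spec.containers
kubectl explain Pod.spec --recursive
</code></pre>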
|
<p>As said in title, I'm trying to add an AKS cluster to my Azure Machine Learning workspace as <code>Attached computes</code>.</p>
<p>In the wizard that ML studio shows while adding it</p>
<p><a href="https://i.stack.imgur.com/vlVVj.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/vlVVj.png" alt="enter image description here" /></a></p>
<p>there's a link to a guide to <a href="https://learn.microsoft.com/en-gb/azure/machine-learning/how-to-attach-kubernetes-anywhere" rel="nofollow noreferrer">install AzureML extension</a>.</p>
<p>Just 4 steps:</p>
<ol>
<li>Prepare an Azure Kubernetes Service cluster or Arc Kubernetes
cluster.</li>
<li>Deploy the AzureML extension.</li>
<li>Attach Kubernetes cluster to
your Azure ML workspace.</li>
<li>Use the Kubernetes compute target from CLI
v2, SDK v2, and the Studio UI.</li>
</ol>
<p>My issue comes at the 2nd step.</p>
<p>As suggested, I'm trying to <a href="https://learn.microsoft.com/en-gb/azure/machine-learning/how-to-deploy-kubernetes-extension?tabs=deploy-extension-with-cli#azureml-extension-deployment---cli-examples-and-azure-portal" rel="nofollow noreferrer">create a POC</a> through the az CLI:</p>
<p><code>az k8s-extension create --name <extension-name> --extension-type Microsoft.AzureML.Kubernetes --config enableTraining=True enableInference=True inferenceRouterServiceType=LoadBalancer allowInsecureConnections=True inferenceLoadBalancerHA=False --cluster-type managedClusters --cluster-name <your-AKS-cluster-name> --resource-group <your-RG-name> --scope cluster</code></p>
<p>I'm already logged on right subscription (where I'm owner), ad using right cluster name and resource group. as extension-name I've used <code>test-ml-extension</code>, but I keep to get this error</p>
<p><code>(ExtensionOperationFailed) The extension operation failed with the following error: Request failed to https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<rg-name>/providers/Microsoft.ContainerService/managedclusters/<cluster-name>/extensionaddons/test-ml-extension?api-version=2021-03-01. Error code: Unauthorized. Reason: Unauthorized.{"error":{"code":"InvalidAuthenticationToken","message":"The received access token is not valid: at least one of the claims 'puid' or 'altsecid' or 'oid' should be present. If you are accessing as application please make sure service principal is properly created in the tenant."}}. Code: ExtensionOperationFailed Message: The extension operation failed with the following error: Request failed to https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<rg-name>/providers/Microsoft.ContainerService/managedclusters/<cluster-name>/extensionaddons/test-ml-extension?api-version=2021-03-01. Error code: Unauthorized. Reason: Unauthorized.{"error":{"code":"InvalidAuthenticationToken","message":"The received access token is not valid: at least one of the claims 'puid' or 'altsecid' or 'oid' should be present. If you are accessing as application please make sure service principal is properly created in the tenant."}}.</code></p>
<p>Am I missing something?</p>
| <p><em><strong>I tried to reproduce the same issue in my environment and got the below results</strong></em></p>
<p><em>I have created the Kubernetes cluster and launched the AML studio</em></p>
<p><em>In the AML I have created the workspace and created the compute with AKS cluster</em></p>
<p><img src="https://i.imgur.com/Wr6vMIR.png" alt="enter image description here" /></p>
<p><em>Deployed the azureML extension using the below command</em></p>
<pre><code>az k8s-extension create --name Aml-extension --extension-type Microsoft.AzureML.Kubernetes --config enableTraining=True enableInference=True inferenceRouterServiceType=LoadBalancer allowInsecureConnections=True inferenceLoadBalancerHA=False --cluster-type managedClusters --cluster-name my-aks-cluster --resource-group Alldemorg --scope cluster
</code></pre>
<p><img src="https://i.imgur.com/076IslA.png" alt="enter image description here" /></p>
<p><em>I am able to see all the deployed clusters using below commands</em></p>
<pre><code>az k8s-extension show --name <extension_name> --cluster-type connectedClusters --cluster-name <connected_cluster_name> --resource-group <rg_name>
</code></pre>
<p><em>After deploying the AzureML extension I am able to attach the Kubernetes cluster to azureML workspace.</em></p>
<p><img src="https://i.imgur.com/JboOCSZ.png" alt="enter image description here" /></p>
<p><em><strong>NOTE:</strong></em></p>
<p><em>The ExtensionOperationFailed error may occur for the following reasons:</em></p>
<p><em>1) Region restrictions: for some AML clusters a few regions are not allowed, so the request gets blocked.</em></p>
<p><em>2) An outdated version: check the CLI/extension version and upgrade to the latest one.</em></p>
<p><em>3) While creating the extension, make sure the cluster name matches the cluster that will be attached to the AML workspace.</em></p>
<p><em>4) The service principal for the cluster does not exist in the tenant you are trying to access.</em></p>
<p><em>5) Each tenant from which access is needed must consent to the application; this ensures the service principal exists in that tenant and has the required access.</em></p>
|
<p>I have a Yaml file in a folder as <code>abc.yaml</code> and the content is</p>
<pre><code>metadata:
test1: apple
test2: banana
test3: cat
container:
image: foo
text: xyz
variables:
ojb: one
meta: two
</code></pre>
<p>and I have another file <code>values.yaml.j2</code> which needs part of the above content.</p>
<pre><code>metadata:
test4: dog
test5: elephant
{{ .... Here I need test1, test2, test3 from the above (abc.yaml).... }}
container:
name: test
{{ .... Here I need image and text from the above (abc.yaml) ....}}
variables:
ping: pong
{{ ..... Here I need ojb and meta from the above (abc.yaml) .... }}
</code></pre>
<p>When I was exploring Helm go templates, I found, <code>Files.Lines</code> will return line by line. But I need specific lines as I mentioned above.</p>
<p>Any solution with go template wo get the part of other yaml file?</p>
| <p>If you know the other file is YAML, Helm contains a lightly-documented <code>fromYaml</code> extension that can parse it.</p>
<pre class="lang-none prettyprint-override"><code>{{- $abc := .Files.Get "abc.yaml" | fromYaml }}
</code></pre>
<p>From there you have a couple of options on how to proceed. One tool you have is the corresponding, also lightly-documented, <code>toYaml</code> extension that converts an arbitrary structure back to YAML.</p>
<p>So one choice is to directly emit the values you think you need:</p>
<pre class="lang-yaml prettyprint-override"><code>metadata:
test4: dog
test5: elephant
test1: {{ $abc.metadata.test1 }}
test2: {{ $abc.metadata.test2 }}
test3: {{ $abc.metadata.test3 }}
</code></pre>
<p>A second is to emit the new values for each block, plus the existing content:</p>
<pre class="lang-yaml prettyprint-override"><code>metadata:
test4: dog
test5: elephant
{{ $abc.metadata | toYaml | indent 2 -}}
</code></pre>
<p>A third is to modify the structure in-place, and then ask Helm to write out the whole thing as YAML. Unusual for Helm template functions, <a href="https://docs.helm.sh/docs/chart_template_guide/function_list/#set" rel="nofollow noreferrer"><code>set</code></a> modifies the dictionary it's given in place.</p>
<pre class="lang-yaml prettyprint-override"><code>{{- $_ := $abc.metadata | set "test4" "dog" | set "test5" "elephant" -}}
{{- toYaml $abc -}}
</code></pre>
|
<p>I have a network policy which will allow connection only from my IP address to the NGINX ingress controller. The network policy is given below. The ingress is using a load balancer and app is in different namespace (default).</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: ingress-access
namespace: ingress-nginx
spec:
podSelector:
matchLabels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/component: controller
app.kubernetes.io/instance: nginx-ingress
policyTypes:
- Ingress
ingress:
- from:
- ipBlock:
cidr: 11.11.11.11/32
ports:
- protocol: TCP
port: http
- protocol: TCP
port: https
</code></pre>
<p>But after deploying the network policy I can't establish a connection to my application. So I have changed the CIDR range from 11.11.11.11/32 to 0.0.0.0/0 and checked the logs to see from which IP address I am receiving the connection.</p>
<blockquote>
<p>22.22.22.22 - - [10/Nov/2022:05:14:35 +0000] "GET / HTTP/1.1" 200 7250 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7)
AppleWebKit/537.36 (KHTML, like Gecko) Chrome/107.0.0.0 Safari/537.36"
462 0.006 [default-hello-one-80] [] 10.2.0.6:80 7243 0.004 200
4d3fd980e5d07a084b162c1f9305f2d9</p>
</blockquote>
<p>It means the request is coming from one of my Kubernetes worker nodes (22.22.22.22).</p>
<p>I tried adding all the worker node IP addresses and the load balancer IP address under ipBlock, but my application is still not accessible. Can somebody help me?</p>
| <p>You need to set <code>externalTrafficPolicy: Local</code> on the Ingress controller service.</p>
<p>Documentation <a href="https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip" rel="nofollow noreferrer">here</a></p>
<p>Setting this allows the original client IP (which would be outside the cluster) to be preserved, so things like allow/deny lists and network policies work properly.</p>
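<p>For reference, this is a minimal sketch of what that looks like on the ingress-nginx controller Service; the Service name and selector labels are assumptions and depend on how the controller was installed, so adjust them to your own controller's Service:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller   # assumed name of your controller Service
  namespace: ingress-nginx
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local     # preserve the original client source IP
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/component: controller
  ports:
  - name: http
    port: 80
    targetPort: http
  - name: https
    port: 443
    targetPort: https
</code></pre>
<p>Note that with <code>Local</code>, the load balancer only sends traffic to nodes that actually run a controller pod, which is also what removes the extra SNAT hop that was rewriting the source to a node IP (22.22.22.22 in your logs).</p>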
|
<p>I'm using a HPA based on a custom metric on GKE.</p>
<p>The HPA is not working and it's showing me this error log:</p>
<blockquote>
<p>unable to fetch metrics from custom metrics API: the server is currently unable to handle the request</p>
</blockquote>
<p>When I run <code>kubectl get apiservices | grep custom</code> I get</p>
<blockquote>
<p>v1beta1.custom.metrics.k8s.io services/prometheus-adapter False (FailedDiscoveryCheck) 135d</p>
</blockquote>
<p>this is the HPA spec config :</p>
<pre><code>spec:
scaleTargetRef:
kind: Deployment
name: api-name
apiVersion: apps/v1
minReplicas: 3
maxReplicas: 50
metrics:
- type: Object
object:
target:
kind: Service
name: api-name
apiVersion: v1
metricName: messages_ready_per_consumer
targetValue: '1'
</code></pre>
<p>and this is the service's spec config :</p>
<pre><code>spec:
ports:
- name: worker-metrics
protocol: TCP
port: 8080
targetPort: worker-metrics
selector:
app.kubernetes.io/instance: api
app.kubernetes.io/name: api-name
clusterIP: 10.8.7.9
clusterIPs:
- 10.8.7.9
type: ClusterIP
sessionAffinity: None
ipFamilies:
- IPv4
ipFamilyPolicy: SingleStack
</code></pre>
<p>What should I do to make it work ?</p>
| <p>Adding this block in my EKS nodes security group rules solved the issue for me:</p>
<pre><code>node_security_group_additional_rules = {
...
ingress_cluster_metricserver = {
description = "Cluster to node 4443 (Metrics Server)"
protocol = "tcp"
from_port = 4443
to_port = 4443
type = "ingress"
source_cluster_security_group = true
}
...
}
</code></pre>
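<p>Once the API server can reach the metrics backend again, the aggregated API should pass its discovery check. A quick way to confirm, using the resource names from the question (the HPA name is a placeholder):</p>
<pre><code>kubectl get apiservice v1beta1.custom.metrics.k8s.io
# AVAILABLE should now show True instead of False (FailedDiscoveryCheck)
kubectl describe hpa <your-hpa-name>
</code></pre>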
|
<p>I have some pods running that are talking to each other via Kubernetes services and not via the pod IP's and now I want to lock things down using Network Policies but I can't seem to get the egress right.</p>
<p>In this scenario I have two pods:</p>
<ul>
<li><code>sleeper</code>, the client</li>
<li><code>frontend</code>, the server behind a Service called <code>frontend-svc</code> which forwards port 8080 to the pods port 80</li>
</ul>
<p>Both running in the same namespace: <code>ns</code></p>
<p>In the <code>sleeper</code> pod I simply <code>wget</code> a ping endpoint in the <code>frontend</code> pod:</p>
<p><code>wget -qO- http://frontend-svc.ns:8080/api/Ping</code></p>
<p>Here's my egress policy:</p>
<pre><code>kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: allow-to-frontend-egress
namespace: ns
spec:
podSelector:
matchLabels:
app: sleeper
policyTypes:
- Egress
egress:
- to:
- podSelector:
matchLabels:
app: frontend
</code></pre>
<p>As you can see, nothing special; no ports, no namespace selector, just a single label selector for each pod.</p>
<p>Unfortunately, this breaks my ping:<br />
<code>wget: bad address 'frontend-svc.ns:8080'</code></p>
<p>However if I retrieve the pod's ip (using <code>kubectl get po -o wide</code>) and talk to the frontend directly I do get a response:<br />
<code>wget -qO- 10.x.x.x:80/api/Ping</code> (x obviously replaced with values)</p>
<p>My intuition was that it was related to the pod's egress to the Kube-dns being required so I added another egress policy:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: allow-egress-kube-system
namespace: ns
spec:
podSelector: {}
egress:
- to:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: "kube-system"
podSelector: {}
policyTypes:
- Egress
</code></pre>
<p>For now I don't want to bother with the exact pod and port, so I allow all pods from the <code>ns</code> namespace to egress to <code>kube-system</code> pods.</p>
<p>However, this didn't help at all. Even worse: this also breaks the communication by pod IP.</p>
<p>I'm running on Azure Kubernetes with Calico Network Policies.</p>
<p>Any clue what might be the issue, because I'm out of ideas.</p>
<hr />
<p>After getting it up and running, here's a more locked-down version of the DNS egress policy:</p>
<pre><code>kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: allow-all-pods-dns-egress
namespace: ns
spec:
policyTypes:
- Egress
podSelector: {}
egress:
- to:
- namespaceSelector:
matchLabels:
# This label was introduced in version 1.19, if you are running a lower version, label the kube-dns pod manually.
kubernetes.io/metadata.name: "kube-system"
podSelector:
matchLabels:
k8s-app: kube-dns
ports:
- port: 53
protocol: UDP
- port: 53
protocol: TCP
</code></pre>
| <p>I recreated your deployment and the final network policy (egress to <code>kube-system</code> for DNS resolution) solves it for me. Make sure that after applying the last network policy you test the connection against the service's port (8080), not the pod port (80) that you switched to in your <code>wget</code> command when accessing the pod directly.</p>
<p>Since network policies are a drag to manage, my team and I wanted to automate their creation and open sourced a tool that you might be interested in: <a href="https://docs.otterize.com/quick-tutorials/k8s-network-policies" rel="nofollow noreferrer">https://docs.otterize.com/quick-tutorials/k8s-network-policies</a>.</p>
<p>It's a way to manage network policies where you declare your access requirements in a separate, human-readable resource and the labeling is done for you on-the-fly.</p>
|
<p>I currently have a Go app that makes a lot of <code>fmt.printf</code> calls. Whenever that app ran in a pod I was able to get the logs back by doing</p>
<p><code>kubectl logs podname</code></p>
<p>However I also needed to integrate remote debugging. I need to use <code>dlv</code> to allow my IDE (GoLand) to connect remotely to the pod. It connects to the pod at port 40000.
Also, when the pod's image runs it exposes port 40000, i.e. the Dockerfile contains <code>EXPOSE 40000</code>.</p>
<p>I also have a service that looks like this in my minikube</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: mydebug
spec:
type: ClusterIP
selector:
app: fooapp
ports:
- protocol: TCP
port: 40000
targetPort: 40000
name: delve
</code></pre>
<p>Now when I do <code>kubectl logs podname</code> I only get this back</p>
<pre><code>API server listening at: [::]:40000
2022-10-30T21:18:57Z warning layer=rpc Listening for remote connections (connections are not authenticated nor encrypted)
</code></pre>
<p>Is there any way to get my logs back?</p>
| <p>You can use the <code>--continue</code> exec flag, to continue the debugged process on the start, which then will lead to continued logs.</p>
<p>So start delve e.g. with:</p>
<pre><code>dlv --listen=:2345 --headless exec your/app --continue
</code></pre>
<p>Without the <code>--continue</code> flag, delve will wait for remote connections and halt your application. With the <code>--continue</code> flag, the application starts running right away.</p>
<p>From <code>dlv help exec</code>:</p>
<pre><code>...
Usage:
dlv exec <path/to/binary> [flags]
Flags:
--continue Continue the debugged process on start.
...
</code></pre>
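<p>In a Kubernetes setup that usually means adjusting the container command in the pod/deployment spec. A minimal sketch, using port 40000 to match the Service from the question; the image name and the binary path <code>/app/myapp</code> are assumptions, replace them with your own:</p>
<pre><code>containers:
- name: fooapp
  image: your-registry/fooapp:debug    # assumed image
  command: ["dlv"]
  args:
    - "--listen=:40000"
    - "--headless=true"
    - "--api-version=2"
    - "--accept-multiclient"
    - "exec"
    - "/app/myapp"                     # assumed path of the compiled binary
    - "--continue"                     # start the app immediately, so it logs as usual
  ports:
    - containerPort: 40000
      name: delve
</code></pre>
<p>With <code>--continue</code> (paired with <code>--accept-multiclient</code>) the app runs and writes its output right away, so <code>kubectl logs</code> shows your <code>fmt.printf</code> output again, while GoLand can still attach on port 40000 when needed.</p>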
|
<p>I have a small NodeJS script that I want to run inside a container inside a kubernetes cluster as a CronJob. I'm having a bit of a hard time figuring out how to do that, given most examples are simple "run this Bash command" type deals.</p>
<p>package.json:</p>
<pre><code>{
...
"scripts": {
"start": "node bin/path/to/index.js",
"compile": "tsc"
}
}
</code></pre>
<p><code>npm run compile && npm run start</code> works on the command-line. Moving on to the Docker container setup...</p>
<p>Dockerfile:</p>
<pre><code>FROM node:18
WORKDIR /working/dir/
...
RUN npm run compile
CMD [ "npm", "run", "start" ]
</code></pre>
<p>When I build and then docker run this container on the command-line, the script runs successfully. This gives me confidence that most things above are correct and it must be a problem with my CronJob...</p>
<p>my-cron.yaml:</p>
<pre><code>apiVersion: batch/v1
kind: CronJob
metadata:
name: cron-foo
spec:
schedule: "* * * * *"
jobTemplate:
spec:
template:
spec:
containers:
- name: job-foo
image: gcr.io/...
imagePullPolicy: IfNotPresent
restartPolicy: OnFailure
</code></pre>
<p>When I <code>kubectl apply -f my-cron.yaml</code> sure enough I get some pods that run, one per-minute, however they all error out:</p>
<pre><code>% kubectl logs cron-foo-27805019-j8gbp
> [email protected] start
> node bin/path/to/index.js
node:internal/modules/cjs/loader:998
throw err;
^
Error: Cannot find module '/working/dir/bin/path/to/index.js'
at Module._resolveFilename (node:internal/modules/cjs/loader:995:15)
at Module._load (node:internal/modules/cjs/loader:841:27)
at Function.executeUserEntryPoint [as runMain] (node:internal/modules/run_main:81:12)
at node:internal/main/run_main_module:23:47 {
code: 'MODULE_NOT_FOUND',
requireStack: []
}
Node.js v18.11.0
</code></pre>
<p>The fact that it's trying to run the correct command means the correct Docker container is being pulled successfully, but I don't know why the script is not being found...</p>
<p>Any help would be appreciated. Most CronJob examples I've seen have a <code>command:</code> list in the template spec...</p>
| <p>The error you show about the path not being found should appear when you <code>docker run ...</code> - but it didn't!</p>
<p>So, I assume it is related to the <code>imagePullPolicy</code>. Typically something gets fixed, checked locally, and then re-pushed to the given registry for your Kubernetes workloads to use. If it was re-pushed with the <strong>same</strong> tag, then don't forget to tell Kubernetes to query the registry and download the new digest by changing the <a href="https://kubernetes.io/docs/concepts/containers/images/#image-pull-policy" rel="nofollow noreferrer"><code>imagePullPolicy</code></a> to <code>Always</code>.</p>
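<p>Applied to the CronJob from the question that is a one-line addition (shortened sketch, everything else stays as you had it):</p>
<pre><code>jobTemplate:
  spec:
    template:
      spec:
        containers:
        - name: job-foo
          image: gcr.io/...            # same image reference as before
          imagePullPolicy: Always      # re-resolve the tag on every run
        restartPolicy: OnFailure
</code></pre>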
|
<p>I am using node version v17.9.0.
OS is Linux.
I am using Kubernetes and launching a nodejs application to connect with snowflake and execute a query.</p>
<p>version of snowflake-promise is 2.2.0.
version of snowflake-sdk is 1.6.14.</p>
<p>I am getting the below error while connecting to Snowflake. Can you please suggest a fix?</p>
<pre><code>/app/node_modules/snowflake-sdk/lib/agent/ocsp_response_cache.js:157
cache.set(certId, response);
^
TypeError: Cannot read properties of undefined (reading 'set')
at OcspResponseCache.set (/app/node_modules/snowflake-sdk/lib/agent/ocsp_response_cache.js:157:11)
at /app/node_modules/snowflake-sdk/lib/agent/socket_util.js:232:32
at done (/app/node_modules/snowflake-sdk/lib/agent/check.js:142:5)
at ocspResponseVerify (/app/node_modules/snowflake-sdk/lib/agent/check.js:201:7)
at done (/app/node_modules/snowflake-sdk/lib/agent/check.js:71:7)
at IncomingMessage.<anonymous> (/app/node_modules/snowflake-sdk/lib/agent/check.js:99:7)
at IncomingMessage.emit (node:events:539:35)
at IncomingMessage.emit (node:domain:475:12)
at endReadableNT (node:internal/streams/readable:1345:12)
at processTicksAndRejections (node:internal/process/task_queues:83:21)
</code></pre>
| <p>I got this error too.</p>
<p>This error occurs when you are working in an organisation environment.
You can reach out to your admin team to enable access to Snowflake from inside your deployment environment, and if it is happening in a local environment you need to enable proxies.</p>
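<p>If "enable proxies" applies to you, one common approach in Kubernetes is to export the standard proxy environment variables on the application container so the driver's outbound HTTPS (including the OCSP checks visible in the stack trace) goes through the corporate proxy. This is only a sketch; the proxy host/port are placeholders from your network team, and depending on the driver version you may instead need the driver's own proxy connection options (e.g. proxyHost/proxyPort):</p>
<pre><code>containers:
- name: my-node-app                     # your application container
  image: my-registry/my-node-app:1.0    # placeholder image
  env:
  - name: HTTPS_PROXY
    value: "http://proxy.example.internal:3128"   # assumed proxy endpoint
  - name: HTTP_PROXY
    value: "http://proxy.example.internal:3128"
  - name: NO_PROXY
    value: "localhost,127.0.0.1,.svc,.cluster.local"
</code></pre>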
|
<p>Following the YugabyteDB Voyager database migration steps (<a href="https://docs.yugabyte.com/preview/migrate/migrate-steps/" rel="nofollow noreferrer">https://docs.yugabyte.com/preview/migrate/migrate-steps/</a>) going from PostgreSQL to YugabyteDB on a local Kubernetes, on Docker Desktop, on WSL2, on Windows.
Using Ubuntu 22.04 on WSL2 to run yb-voyager, I get an error on the Import Data step:</p>
<pre><code>import of data in "postgres" database started
Target YugabyteDB version: 11.2-YB-2.15.2.1-b0
Error Resolving name=yb-tserver-1.yb-tservers.yb-demo.svc.cluster.local: lookup yb-tserver-1.yb-tservers.yb-demo.svc.cluster.local: no such host
</code></pre>
<p>The Import Schema step worked correctly (from using pgAdmin connected to the YugabyteDB), so I know that the database can be connected to. Command used:</p>
<pre><code>yb-voyager import schema --export-dir ${EXPORT_DIR} --target-db-host ${TARGET_DB_HOST} --target-db-user ${TARGET_DB_USER} --target-db-password ${TARGET_DB_PASSWORD} --target-db-name ${TARGET_DB_NAME}
</code></pre>
<p>The command used to import the data, which fails:</p>
<pre><code>yb-voyager import data --export-dir ${EXPORT_DIR} --target-db-host ${TARGET_DB_HOST} --target-db-user ${TARGET_DB_USER} --target-db-password ${TARGET_DB_PASSWORD} --target-db-name ${TARGET_DB_NAME}
</code></pre>
<p>ENV variables:</p>
<pre><code>EXPORT_DIR=/home/abc/db-export
TARGET_DB_HOST=127.0.0.1
TARGET_DB_USER=ybvoyager
TARGET_DB_PASSWORD=password
TARGET_DB_NAME=postgres
</code></pre>
<p>Why does the import data fail when the import schema works connecting to the same database?</p>
| <p>Putting the solution here in case anybody runs into this issue.
If a load balancer is present and the YugabyteDB server's IP is not resolvable from the voyager machine, the import data command errors out.
Ideally it should use the load balancer for importing the data.</p>
<p>Use <code>--target-endpoints=LB_HOST:LB_PORT</code> to force the server address.</p>
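<p>Based on the flag above, the import command from the question would look roughly like this (LB_HOST/LB_PORT are placeholders for your load balancer's address and YSQL port):</p>
<pre><code>yb-voyager import data --export-dir ${EXPORT_DIR} \
  --target-db-host ${TARGET_DB_HOST} \
  --target-db-user ${TARGET_DB_USER} \
  --target-db-password ${TARGET_DB_PASSWORD} \
  --target-db-name ${TARGET_DB_NAME} \
  --target-endpoints=<LB_HOST>:<LB_PORT>
</code></pre>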
<p>See tickets:<br />
<a href="https://github.com/yugabyte/yb-voyager/issues/553" rel="nofollow noreferrer">Import data 'Error Resolving name' on local kubernetes #553</a><br />
<a href="https://github.com/yugabyte/yb-voyager/issues/585" rel="nofollow noreferrer">Import Data failed if LB is present and cluster servers host is not resolvable #585</a></p>
|
<p>I'm running a kubernetes cluster (bare metal) with a mongodb (version 4, as my server cannot handle newer versions) replicaset (2 replicas), which is initially working, but from time to time (sometimes 24 hours, sometimes 10 days) one or more mongodb pods are failing.</p>
<pre><code>Warning BackOff 2m9s (x43454 over 6d13h) kubelet Back-off restarting failed container
</code></pre>
<p>The relevant part of the logs should be</p>
<pre><code>DBPathInUse: Unable to create/open the lock file: /bitnami/mongodb/data/db/mongod.lock (Read-only file system). Ensure the user executing mongod is the owner of the lock file and has the appropriate permissions. Also make sure that another mongod instance is not already running on the /bitnami/mongodb/data/db directory
</code></pre>
<p>But I haven't changed anything, and initially it works. Also, the second pod is currently running (but it will fail within the next few days).</p>
<p>I'm using longhorn (before I tried nfs) for the storage and I installed mongodb using bitnami helm chart with these values:</p>
<pre><code>image:
registry: docker.io
repository: bitnami/mongodb
digest: "sha256:916202d7af766dd88c2fff63bf711162c9d708ac7a3ffccd2aa812e3f03ae209" # tag: 4.4.15
pullPolicy: IfNotPresent
architecture: replicaset
replicaCount: 2
updateStrategy:
type: RollingUpdate
containerPorts:
mongodb: 27017
auth:
enabled: true
rootUser: root
rootPassword: "password"
usernames: ["user"]
passwords: ["userpass"]
databases: ["db"]
service:
portName: mongodb
ports:
mongodb: 27017
persistence:
enabled: true
accessModes:
- ReadWriteOnce
size: 8Gi
volumePermissions:
enabled: true
livenessProbe:
enabled: false
readinessProbe:
enabled: false
</code></pre>
<p><strong>logs</strong></p>
<pre><code>mongodb 21:25:05.55 INFO ==> Advertised Hostname: mongodb-1.mongodb-headless.mongodb.svc.cluster.local
mongodb 21:25:05.55 INFO ==> Advertised Port: 27017
mongodb 21:25:05.56 INFO ==> Pod name doesn't match initial primary pod name, configuring node as a secondary
mongodb 21:25:05.59
mongodb 21:25:05.59 Welcome to the Bitnami mongodb container
mongodb 21:25:05.60 Subscribe to project updates by watching https://github.com/bitnami/containers
mongodb 21:25:05.60 Submit issues and feature requests at https://github.com/bitnami/containers/issues
mongodb 21:25:05.60
mongodb 21:25:05.60 INFO ==> ** Starting MongoDB setup **
mongodb 21:25:05.64 INFO ==> Validating settings in MONGODB_* env vars...
mongodb 21:25:05.78 INFO ==> Initializing MongoDB...
mongodb 21:25:05.82 INFO ==> Deploying MongoDB with persisted data...
mongodb 21:25:05.83 INFO ==> Writing keyfile for replica set authentication...
mongodb 21:25:05.88 INFO ==> ** MongoDB setup finished! **
mongodb 21:25:05.92 INFO ==> ** Starting MongoDB **
{"t":{"$date":"2022-10-29T21:25:05.961+00:00"},"s":"I", "c":"CONTROL", "id":20698, "ctx":"main","msg":"***** SERVER RESTARTED *****"}
{"t":{"$date":"2022-10-29T21:25:05.963+00:00"},"s":"I", "c":"CONTROL", "id":23285, "ctx":"main","msg":"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'"}
{"t":{"$date":"2022-10-29T21:25:05.968+00:00"},"s":"W", "c":"ASIO", "id":22601, "ctx":"main","msg":"No TransportLayer configured during NetworkInterface startup"}
{"t":{"$date":"2022-10-29T21:25:05.968+00:00"},"s":"I", "c":"NETWORK", "id":4648601, "ctx":"main","msg":"Implicit TCP FastOpen unavailable. If TCP FastOpen is required, set tcpFastOpenServer, tcpFastOpenClient, and tcpFastOpenQueueSize."}
{"t":{"$date":"2022-10-29T21:25:05.969+00:00"},"s":"W", "c":"ASIO", "id":22601, "ctx":"main","msg":"No TransportLayer configured during NetworkInterface startup"}
{"t":{"$date":"2022-10-29T21:25:06.011+00:00"},"s":"I", "c":"STORAGE", "id":4615611, "ctx":"initandlisten","msg":"MongoDB starting","attr":{"pid":1,"port":27017,"dbPath":"/bitnami/mongodb/data/db","architecture":"64-bit","host":"mongodb-1"}}
{"t":{"$date":"2022-10-29T21:25:06.011+00:00"},"s":"I", "c":"CONTROL", "id":23403, "ctx":"initandlisten","msg":"Build Info","attr":{"buildInfo":{"version":"4.4.15","gitVersion":"bc17cf2c788c5dda2801a090ea79da5ff7d5fac9","openSSLVersion":"OpenSSL 1.1.1n 15 Mar 2022","modules":[],"allocator":"tcmalloc","environment":{"distmod":"debian10","distarch":"x86_64","target_arch":"x86_64"}}}}
{"t":{"$date":"2022-10-29T21:25:06.012+00:00"},"s":"I", "c":"CONTROL", "id":51765, "ctx":"initandlisten","msg":"Operating System","attr":{"os":{"name":"PRETTY_NAME=\"Debian GNU/Linux 10 (buster)\"","version":"Kernel 5.15.0-48-generic"}}}
{"t":{"$date":"2022-10-29T21:25:06.012+00:00"},"s":"I", "c":"CONTROL", "id":21951, "ctx":"initandlisten","msg":"Options set by command line","attr":{"options":{"config":"/opt/bitnami/mongodb/conf/mongodb.conf","net":{"bindIp":"*","ipv6":false,"port":27017,"unixDomainSocket":{"enabled":true,"pathPrefix":"/opt/bitnami/mongodb/tmp"}},"processManagement":{"fork":false,"pidFilePath":"/opt/bitnami/mongodb/tmp/mongodb.pid"},"replication":{"enableMajorityReadConcern":true,"replSetName":"rs0"},"security":{"authorization":"disabled","keyFile":"/opt/bitnami/mongodb/conf/keyfile"},"setParameter":{"enableLocalhostAuthBypass":"true"},"storage":{"dbPath":"/bitnami/mongodb/data/db","directoryPerDB":false,"journal":{"enabled":true}},"systemLog":{"destination":"file","logAppend":true,"logRotate":"reopen","path":"/opt/bitnami/mongodb/logs/mongodb.log","quiet":false,"verbosity":0}}}}
{"t":{"$date":"2022-10-29T21:25:06.013+00:00"},"s":"E", "c":"STORAGE", "id":20557, "ctx":"initandlisten","msg":"DBException in initAndListen, terminating","attr":{"error":"DBPathInUse: Unable to create/open the lock file: /bitnami/mongodb/data/db/mongod.lock (Read-only file system). Ensure the user executing mongod is the owner of the lock file and has the appropriate permissions. Also make sure that another mongod instance is not already running on the /bitnami/mongodb/data/db directory"}}
{"t":{"$date":"2022-10-29T21:25:06.013+00:00"},"s":"I", "c":"REPL", "id":4784900, "ctx":"initandlisten","msg":"Stepping down the ReplicationCoordinator for shutdown","attr":{"waitTimeMillis":10000}}
{"t":{"$date":"2022-10-29T21:25:06.014+00:00"},"s":"I", "c":"COMMAND", "id":4784901, "ctx":"initandlisten","msg":"Shutting down the MirrorMaestro"}
{"t":{"$date":"2022-10-29T21:25:06.014+00:00"},"s":"I", "c":"SHARDING", "id":4784902, "ctx":"initandlisten","msg":"Shutting down the WaitForMajorityService"}
{"t":{"$date":"2022-10-29T21:25:06.014+00:00"},"s":"I", "c":"NETWORK", "id":20562, "ctx":"initandlisten","msg":"Shutdown: going to close listening sockets"}
{"t":{"$date":"2022-10-29T21:25:06.014+00:00"},"s":"I", "c":"NETWORK", "id":4784905, "ctx":"initandlisten","msg":"Shutting down the global connection pool"}
{"t":{"$date":"2022-10-29T21:25:06.014+00:00"},"s":"I", "c":"STORAGE", "id":4784906, "ctx":"initandlisten","msg":"Shutting down the FlowControlTicketholder"}
{"t":{"$date":"2022-10-29T21:25:06.014+00:00"},"s":"I", "c":"-", "id":20520, "ctx":"initandlisten","msg":"Stopping further Flow Control ticket acquisitions."}
{"t":{"$date":"2022-10-29T21:25:06.014+00:00"},"s":"I", "c":"REPL", "id":4784907, "ctx":"initandlisten","msg":"Shutting down the replica set node executor"}
{"t":{"$date":"2022-10-29T21:25:06.014+00:00"},"s":"I", "c":"NETWORK", "id":4784918, "ctx":"initandlisten","msg":"Shutting down the ReplicaSetMonitor"}
{"t":{"$date":"2022-10-29T21:25:06.014+00:00"},"s":"I", "c":"SHARDING", "id":4784921, "ctx":"initandlisten","msg":"Shutting down the MigrationUtilExecutor"}
{"t":{"$date":"2022-10-29T21:25:06.014+00:00"},"s":"I", "c":"CONTROL", "id":4784925, "ctx":"initandlisten","msg":"Shutting down free monitoring"}
{"t":{"$date":"2022-10-29T21:25:06.014+00:00"},"s":"I", "c":"STORAGE", "id":4784927, "ctx":"initandlisten","msg":"Shutting down the HealthLog"}
{"t":{"$date":"2022-10-29T21:25:06.014+00:00"},"s":"I", "c":"STORAGE", "id":4784929, "ctx":"initandlisten","msg":"Acquiring the global lock for shutdown"}
{"t":{"$date":"2022-10-29T21:25:06.014+00:00"},"s":"I", "c":"-", "id":4784931, "ctx":"initandlisten","msg":"Dropping the scope cache for shutdown"}
{"t":{"$date":"2022-10-29T21:25:06.014+00:00"},"s":"I", "c":"FTDC", "id":4784926, "ctx":"initandlisten","msg":"Shutting down full-time data capture"}
{"t":{"$date":"2022-10-29T21:25:06.015+00:00"},"s":"I", "c":"CONTROL", "id":20565, "ctx":"initandlisten","msg":"Now exiting"}
{"t":{"$date":"2022-10-29T21:25:06.015+00:00"},"s":"I", "c":"CONTROL", "id":23138, "ctx":"initandlisten","msg":"Shutting down","attr":{"exitCode":100}}
</code></pre>
<p><strong>Update</strong></p>
<p>I checked the syslog and just before the log line <code>Nov 14 23:07:17 k8s-worker2 kubelet[752]: E1114 23:07:17.749057 752 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mongodb\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mongodb pod=mongodb-2_mongodb(314f2776-ced4-4ba3-b90b-f927dc079770)\"" pod="mongodb/mongodb-2" podUID=314f2776-ced4-4ba3-b90b-f927dc079770</code></p>
<p>I find these logs:</p>
<pre><code>Nov 14 23:06:59 k8s-worker2 kernel: [3413829.341806] sd 2:0:0:1: [sda] tag#42 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=11s
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.341866] sd 2:0:0:1: [sda] tag#42 Sense Key : Medium Error [current]
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.341891] sd 2:0:0:1: [sda] tag#42 Add. Sense: Unrecovered read error
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.341899] sd 2:0:0:1: [sda] tag#42 CDB: Write(10) 2a 00 00 85 1f b8 00 00 40 00
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.341912] blk_update_request: critical medium error, dev sda, sector 8724408 op 0x1:(WRITE) flags 0x800 phys_seg 8 prio class 0
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.352012] Aborting journal on device sda-8.
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.354980] EXT4-fs error (device sda) in ext4_reserve_inode_write:5726: Journal has aborted
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.355103] sd 2:0:0:1: [sda] tag#40 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=15s
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.357056] sd 2:0:0:1: [sda] tag#40 Sense Key : Medium Error [current]
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.357061] sd 2:0:0:1: [sda] tag#40 Add. Sense: Unrecovered read error
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.357066] sd 2:0:0:1: [sda] tag#40 CDB: Write(10) 2a 00 00 44 14 88 00 00 10 00
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.357068] blk_update_request: critical medium error, dev sda, sector 4461704 op 0x1:(WRITE) flags 0x800 phys_seg 2 prio class 0
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.357088] EXT4-fs error (device sda): ext4_dirty_inode:5922: inode #131080: comm mongod: mark_inode_dirty error
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.359566] EXT4-fs warning (device sda): ext4_end_bio:344: I/O error 7 writing to inode 131081 starting block 557715)
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.361432] EXT4-fs error (device sda) in ext4_dirty_inode:5923: Journal has aborted
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.362792] Buffer I/O error on device sda, logical block 557713
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.364010] Buffer I/O error on device sda, logical block 557714
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.365222] sd 2:0:0:1: [sda] tag#43 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=8s
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.365228] sd 2:0:0:1: [sda] tag#43 Sense Key : Medium Error [current]
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.365230] sd 2:0:0:1: [sda] tag#43 Add. Sense: Unrecovered read error
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.365233] sd 2:0:0:1: [sda] tag#43 CDB: Write(10) 2a 00 00 44 28 38 00 00 08 00
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.365234] blk_update_request: critical medium error, dev sda, sector 4466744 op 0x1:(WRITE) flags 0x0 phys_seg 1 prio class 0
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.367434] EXT4-fs warning (device sda): ext4_end_bio:344: I/O error 7 writing to inode 131083 starting block 558344)
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.367442] Buffer I/O error on device sda, logical block 558343
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.368593] sd 2:0:0:1: [sda] tag#41 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=15s
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.368597] sd 2:0:0:1: [sda] tag#41 Sense Key : Medium Error [current]
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.368599] sd 2:0:0:1: [sda] tag#41 Add. Sense: Unrecovered read error
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.368602] sd 2:0:0:1: [sda] tag#41 CDB: Write(10) 2a 00 00 44 90 70 00 00 10 00
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.368604] blk_update_request: critical medium error, dev sda, sector 4493424 op 0x1:(WRITE) flags 0x800 phys_seg 2 prio class 0
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.370907] EXT4-fs warning (device sda): ext4_end_bio:344: I/O error 7 writing to inode 131081 starting block 561680)
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.370946] sd 2:0:0:1: [sda] tag#39 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=15s
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.370949] sd 2:0:0:1: [sda] tag#39 Sense Key : Medium Error [current]
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.370952] sd 2:0:0:1: [sda] tag#39 Add. Sense: Unrecovered read error
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.370949] EXT4-fs error (device sda): ext4_journal_check_start:83: comm kworker/u4:0: Detected aborted journal
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.370954] sd 2:0:0:1: [sda] tag#39 CDB: Write(10) 2a 00 00 10 41 98 00 00 08 00
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.372081] blk_update_request: critical medium error, dev sda, sector 1065368 op 0x1:(WRITE) flags 0x800 phys_seg 1 prio class 0
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.374353] EXT4-fs warning (device sda): ext4_end_bio:344: I/O error 7 writing to inode 131080 starting block 133172)
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.374396] Buffer I/O error on device sda, logical block 133171
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.388492] EXT4-fs error (device sda) in __ext4_new_inode:1136: Journal has aborted
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.390763] EXT4-fs error (device sda) in ext4_create:2786: Journal has aborted
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.391732] sd 2:0:0:1: [sda] tag#46 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=0s
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.392941] sd 2:0:0:1: [sda] tag#46 Sense Key : Medium Error [current]
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.392944] sd 2:0:0:1: [sda] tag#46 Add. Sense: Unrecovered read error
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.392948] sd 2:0:0:1: [sda] tag#46 CDB: Write(10) 2a 08 00 00 00 00 00 00 08 00
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.392950] blk_update_request: critical medium error, dev sda, sector 0 op 0x1:(WRITE) flags 0x23800 phys_seg 1 prio class 0
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.395562] Buffer I/O error on dev sda, logical block 0, lost sync page write
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.396945] sd 2:0:0:1: [sda] tag#45 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=0s
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.396953] sd 2:0:0:1: [sda] tag#45 Sense Key : Medium Error [current]
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.396955] sd 2:0:0:1: [sda] tag#45 Add. Sense: Unrecovered read error
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.396958] sd 2:0:0:1: [sda] tag#45 CDB: Write(10) 2a 08 00 84 00 00 00 00 08 00
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.396959] blk_update_request: critical medium error, dev sda, sector 8650752 op 0x1:(WRITE) flags 0x20800 phys_seg 1 prio class 0
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.396930] EXT4-fs (sda): I/O error while writing superblock
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.399771] Buffer I/O error on dev sda, logical block 1081344, lost sync page write
Nov 14 23:06:59 k8s-worker2 kernel: [3413829.403897] JBD2: Error -5 detected when updating journal superblock for sda-8.
Nov 14 23:07:01 k8s-worker2 systemd[1]: run-docker-runtime\x2drunc-moby-d1c0f0dc3e024723707edfc12e023b98fb98f1be971177ecca5ac0cfdc91ab87-runc.w3zzIL.mount: Deactivated successfully.
Nov 14 23:07:05 k8s-worker2 kubelet[752]: E1114 23:07:05.415798 752 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 46.38.252.230 46.38.225.230 2a03:4000:0:1::e1e6"
Nov 14 23:07:06 k8s-worker2 kubelet[752]: E1114 23:07:06.412219 752 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 46.38.252.230 46.38.225.230 2a03:4000:0:1::e1e6"
Nov 14 23:07:06 k8s-worker2 systemd[1]: run-docker-runtime\x2drunc-moby-d1c0f0dc3e024723707edfc12e023b98fb98f1be971177ecca5ac0cfdc91ab87-runc.nK23K3.mount: Deactivated successfully.
Nov 14 23:07:11 k8s-worker2 systemd[1]: run-docker-runtime\x2drunc-moby-d1c0f0dc3e024723707edfc12e023b98fb98f1be971177ecca5ac0cfdc91ab87-runc.L5TkRU.mount: Deactivated successfully.
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.411831] sd 2:0:0:1: [sda] tag#44 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=15s
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.411888] sd 2:0:0:1: [sda] tag#44 Sense Key : Medium Error [current]
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.411898] sd 2:0:0:1: [sda] tag#44 Add. Sense: Unrecovered read error
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.411952] sd 2:0:0:1: [sda] tag#44 CDB: Write(10) 2a 00 00 44 28 40 00 00 50 00
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.411965] blk_update_request: critical medium error, dev sda, sector 4466752 op 0x1:(WRITE) flags 0x0 phys_seg 10 prio class 0
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.419273] EXT4-fs warning (device sda): ext4_end_bio:344: I/O error 7 writing to inode 131083 starting block 558354)
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.430398] sd 2:0:0:1: [sda] tag#47 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=15s
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.430407] sd 2:0:0:1: [sda] tag#47 Sense Key : Medium Error [current]
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.430409] sd 2:0:0:1: [sda] tag#47 Add. Sense: Unrecovered read error
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.430412] sd 2:0:0:1: [sda] tag#47 CDB: Write(10) 2a 08 00 00 00 00 00 00 08 00
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.430415] blk_update_request: critical medium error, dev sda, sector 0 op 0x1:(WRITE) flags 0x23800 phys_seg 1 prio class 0
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.433686] Buffer I/O error on dev sda, logical block 0, lost sync page write
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.436088] EXT4-fs (sda): I/O error while writing superblock
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.444291] sd 2:0:0:1: [sda] tag#32 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=14s
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.444300] sd 2:0:0:1: [sda] tag#32 Sense Key : Medium Error [current]
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.444304] sd 2:0:0:1: [sda] tag#32 Add. Sense: Unrecovered read error
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.444308] sd 2:0:0:1: [sda] tag#32 CDB: Write(10) 2a 00 00 41 01 18 00 00 08 00
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.444313] blk_update_request: critical medium error, dev sda, sector 4260120 op 0x1:(WRITE) flags 0x3000 phys_seg 1 prio class 0
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.449491] Buffer I/O error on dev sda, logical block 532515, lost async page write
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.453591] sd 2:0:0:1: [sda] tag#33 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=0s
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.453600] sd 2:0:0:1: [sda] tag#33 Sense Key : Medium Error [current]
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.453603] sd 2:0:0:1: [sda] tag#33 Add. Sense: Unrecovered read error
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.453607] sd 2:0:0:1: [sda] tag#33 CDB: Write(10) 2a 08 00 00 00 00 00 00 08 00
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.453610] blk_update_request: critical medium error, dev sda, sector 0 op 0x1:(WRITE) flags 0x23800 phys_seg 1 prio class 0
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.459072] Buffer I/O error on dev sda, logical block 0, lost sync page write
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.461189] EXT4-fs (sda): I/O error while writing superblock
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.464347] EXT4-fs (sda): Remounting filesystem read-only
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.466527] EXT4-fs (sda): failed to convert unwritten extents to written extents -- potential data loss! (inode 131081, error -30)
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.470833] Buffer I/O error on device sda, logical block 561678
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.473548] Buffer I/O error on device sda, logical block 561679
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.477384] EXT4-fs (sda): failed to convert unwritten extents to written extents -- potential data loss! (inode 131083, error -30)
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.482014] Buffer I/O error on device sda, logical block 558344
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.484881] Buffer I/O error on device sda, logical block 558345
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.487224] Buffer I/O error on device sda, logical block 558346
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.488837] Buffer I/O error on device sda, logical block 558347
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.490543] Buffer I/O error on device sda, logical block 558348
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.492061] Buffer I/O error on device sda, logical block 558349
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.493494] Buffer I/O error on device sda, logical block 558350
Nov 14 23:07:14 k8s-worker2 kernel: [3413844.494931] Buffer I/O error on device sda, logical block 558351
</code></pre>
<p>Not sure if this is really related to the problem.</p>
| <p>Generally when you see this error message:</p>
<pre><code>"error":"DBPathInUse: Unable to create/open the lock file: /bitnami/mongodb/data/db/mongod.lock (Read-only file system)
</code></pre>
<p>It most probably means that your mongodb pod did not shut down gracefully and had no time to remove the mongod.lock file, so when the pod was re-created on another k8s node the "new" mongod process cannot start because it finds the previous mongod.lock file.</p>
<p>The easiest way to resolve the current availability issue is to immediately scale up and add one more replicaSet member, so the new member can init-sync from the available good member:</p>
<pre><code>helm upgrade mongodb bitnami/mongodb \
--set architecture=replicaset \
--set auth.replicaSetKey=myreplicasetkey \
--set auth.rootPassword=myrootpassword \
--set replicaCount=3
</code></pre>
<p>and elect a primary again.</p>
<p>You can check whether the MongoDB replicaSet has elected a PRIMARY from the mongo shell inside the pod with the command:</p>
<pre><code> rs.status()
</code></pre>
<p>For the affected pod with the issue you can do as follows:</p>
<p>You can plan a maintenance window and scale down (scaling down a StatefulSet is not expected to automatically delete the <a href="https://kubernetes.io/docs/tasks/administer-cluster/change-pv-reclaim-policy/" rel="nofollow noreferrer">pvc/pv</a>, but it's good to make a backup just in case).</p>
<p>After you scale down you can start custom helper pod to mount the pv so you can remove the mongod.lock file:</p>
<p>Temporary pod that you will start to mount the affected dbPath and remove the mongodb.lock file:</p>
<pre><code> kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
name: mongo-pvc-helper
spec:
securityContext:
runAsUser: 0
containers:
- command:
- sh
- -c
- while true ; do echo alive ; sleep 10 ; done
image: busybox
imagePullPolicy: Always
name: mongo-pvc-helper
resources: {}
securityContext:
capabilities:
drop:
- ALL
volumeMounts:
- mountPath: /mongodata
name: mongodata
volumes:
- name: mongodata
persistentVolumeClaim:
claimName: <your_faulty_pod_pvc_name>
EOF
</code></pre>
<p>After you start the pod you can do:</p>
<pre><code>kubectl exec mongo-pvc-helper -it sh
$ chown -R 0:0 /mongodata
$ rm /mongodata/mongod.lock
$ exit
</code></pre>
<p>Or you can completely wipe the entire pv (if you prefer to safely init-sync this member from scratch):</p>
<pre><code>rm -rf /mongodata/*
</code></pre>
<p>And terminate the pod so you can finish the process:</p>
<pre><code> kubectl delete pod mongo-pvc-helper
</code></pre>
<p>And again scale-up:</p>
<pre><code> helm upgrade mongodb bitnami/mongodb \
--set architecture=replicaset \
--set auth.replicaSetKey=myreplicasetkey \
--set auth.rootPassword=myrootpassword \
--set replicaCount=2
</code></pre>
<p>Btw, it's good to have at least 3 data-bearing members in the replicaSet for better redundancy, so that when a single member goes down an election can still keep a PRIMARY up and running...</p>
<p>How to troubleshoot this further:</p>
<ol>
<li><p>Ensure your pods have a terminationGracePeriod set (at least 10-20 sec) so the mongod process has some time to flush data to storage and remove the mongod.lock file (see the sketch after this list).</p>
</li>
<li><p>Depending on the pod memory limits/requests, you can set a safer value for storage.wiredTiger.engineConfig.cacheSizeGB (if not set, WiredTiger allocates ~50% of memory).</p>
</li>
<li><p>Check the kubelet logs on the node where the pod was killed; there may be more details on why the pod was killed.</p>
</li>
</ol>
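<p>A minimal sketch of the first point as it would appear on the pod spec; with the bitnami chart you would normally set this through the chart's values or an override rather than a raw Pod, and whether the chart exposes it directly as a value depends on the chart version:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: mongodb-example
spec:
  terminationGracePeriodSeconds: 30    # give mongod time to flush data and remove mongod.lock
  containers:
  - name: mongodb
    image: bitnami/mongodb:4.4.15      # same image version as in the question's values
    ports:
    - containerPort: 27017
</code></pre>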
|
<p>Since I lost data because I (accidentally) removed a namespace including a persistent volume, I am trying to enable Backup for my GKE cluster.</p>
<p>But after I try to enable it I get the message:</p>
<p>'Etcd alias ip must be enabled for Backup for GKE addon.'</p>
<p>I cannot find anything when searching for this, I do not know (or understand) how to enable Etcd alias ip. Does someone know what I can do?</p>
| <p>Thank you for the replies. I just tried again and now it is working. I don't know why; I did not make any other changes.</p>
|
<p>I use Spark 2.4.0 + K8s cluster deployment mode + python 3.5.</p>
<p>I pack all libraries into zip archive and send it to AWS S3, then attach to context</p>
<pre><code>sc = pyspark.SparkContext(appName=args.job_name, environment=environment)
sc.addPyFile('s3a://.../libs.zip')
sc.addPyFile('s3a://.../code.zip')
</code></pre>
<p>Imports work; I can import any package. But if I import a package that reads some files from package-related folders, I get an error:</p>
<p><code>NotADirectoryError: [Errno 20] Not a directory: '/var/data/spark-ce45d34b-8d2f-4fd0-b3d6-d53ecede8ef1/spark-6ce9d14f-3d90-4c3c-ba2d-9dd6ddf32457/userFiles-08e6e9ec-03fa-447d-930f-bf1bd520f55a/libs.zip/airflow/config_templates/default_airflow.cfg'</code></p>
<p>How could I solve it?</p>
<p>PS. Using <code>sc.addFile('s3a:/..')</code> and unzipping does not work because Spark is running in cluster mode.</p>
<p>UPDATE:</p>
<p>I've temporarily solved this by installing all the packages I need into the Docker image I'm using for the Spark workers.</p>
| <p>Some pip-installed packages are not safe to be compressed into a zip. For example, the <a href="https://github.com/apache/airflow/blob/5786dcdc392f7a2649f398353a0beebef01c428e/setup.py#L685" rel="nofollow noreferrer">Airflow v1.10.15 used here was not zip-safe</a> (not sure about newer versions).</p>
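<p>That matches the workaround from the update in the question: bake the non-zip-safe packages into the worker image instead of shipping them in <code>libs.zip</code>. A rough sketch of such a Dockerfile; the base image name and the package pin are assumptions, adjust them to your setup:</p>
<pre><code># extend the Spark image used for the Kubernetes executors
FROM your-registry/spark-py:v2.4.0

# install packages that read files from their package folders at runtime,
# so they end up on a real filesystem instead of inside the zip archive
RUN pip3 install apache-airflow==1.10.15
</code></pre>
<p>Zip-safe application code can still be shipped via <code>sc.addPyFile(...)</code> as before.</p>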
|
<p>I have a program in which I have set up several jobs. According to the figure, these jobs are executed every day at a certain time and, for example, send an SMS to a group of numbers.
When I deploy this to Kubernetes, multiple replicas are created.
I want to know: do the original and all the replicas run these jobs and send the SMS? Each SMS should be sent to a number only once, not several times to the same number.
My question is, how does Kubernetes handle such programs and how should we deploy them correctly?
<a href="https://i.stack.imgur.com/nlLGN.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/nlLGN.png" alt="enter image description here" /></a></p>
<p>I have read various articles but I don't know which is the right way.</p>
| <p>You can use <a href="https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/" rel="nofollow noreferrer">CronJob</a> for that.</p>
<p>Since you have a single application with all cronjob code. You can modify the c# application to use an environment variable to decide the job type. This way you can create <code>CronJob</code> resource for each job you want.</p>
<h3>Notification Job</h3>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: batch/v1
kind: CronJob
metadata:
name: notification-job
spec:
schedule: "* * * * *"
jobTemplate:
spec:
template:
spec:
containers:
- name: notification-job
image: org/my-csharp-cron-app:latest
imagePullPolicy: IfNotPresent
env:
- name: JOB_TYPE
value: "NotificationJob"
restartPolicy: OnFailure
</code></pre>
<h3>Logger Job</h3>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: batch/v1
kind: CronJob
metadata:
name: logger-job
spec:
schedule: "* * * * *"
jobTemplate:
spec:
template:
spec:
containers:
- name: logger-job
image: org/my-csharp-cron-app:latest
imagePullPolicy: IfNotPresent
env:
- name: JOB_TYPE
value: "LoggerJob"
restartPolicy: OnFailure
</code></pre>
|
<p>I am trying to write a network policy on Kubernetes that works under AWS EKS. What I want to achieve is to allow traffic to pod/pods from the same Namespace and allow external traffic that is forwarded from AWS ALB Ingress.</p>
<p>The AWS ALB Ingress is created in the same namespace, so I was thinking that only using <a href="https://github.com/ahmetb/kubernetes-network-policy-recipes/blob/master/04-deny-traffic-from-other-namespaces.md" rel="nofollow noreferrer">DENY all traffic from other namespaces</a> would suffice, but with that policy traffic from the ALB Ingress load balancer (whose internal IP addresses are in the same namespace as the pod/pods) is not allowed. Then if I add <a href="https://github.com/ahmetb/kubernetes-network-policy-recipes/blob/master/08-allow-external-traffic.md" rel="nofollow noreferrer">ALLOW traffic from external clients</a> it allows the Ingress traffic but ALSO allows other namespaces too.</p>
<p>So my example is like: (this does not work as expected)</p>
<pre><code>---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: deny-from-other-namespaces
namespace: os
spec:
podSelector:
matchLabels:
ingress:
- from:
- podSelector: {}
---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: allow-external
namespace: os
spec:
podSelector:
matchLabels:
app: nginx
tier: prod
customer: os
ingress:
- ports:
- port: 80
from: []
</code></pre>
<p>When using the first policy the ALB Ingress is blocked; after adding the second one, other namespaces are also allowed, which I don't want. I could allow only the internal IP addresses of the AWS ALB Ingress, but they can change over time since they are created dynamically.</p>
| <p>The semantics of the built-in Kubernetes NetworkPolicies are kind of fiddly. There are no deny rules, only allow rules.</p>
<p>The way they work is if no network policies apply to a pod, then all traffic is allowed. Once there is a network policy that applies to a pod, then all traffic <em>not allowed</em> by that policy is blocked.</p>
<p>In other words, you can't say something like "deny this traffic, allow all the rest". You have to effectively say, "allow all the rest".</p>
<p><a href="https://docs.aws.amazon.com/eks/latest/userguide/alb-ingress.html#:%7E:text=The%20AWS%20Load%20Balancer%20Controller%20supports%20the%20following%20traffic%20modes%3A" rel="nofollow noreferrer">The documentation for the AWS ALB Ingress controller states that traffic can either be sent to a NodePort for your service, or directly to pods</a>. This means that the traffic originates from an AWS IP address outside the cluster.</p>
<p>For traffic that has a source that isn't well-defined, such as traffic from AWS ALB, this can be difficult - you don't know what the source IP address will be.</p>
<p>If you are trying to allow traffic from the Internet using the ALB, then it means anyone that can reach the ALB will be able to reach your pods. In that case, there's effectively no meaning to blocking traffic within the cluster, as the pods will be able to connect to the ALB, even if they can't connect directly.</p>
<p>My suggestion then is to just create a network policy that allows all traffic to the pods the Ingress covers, but have that policy as specific as possible - for example, if the Ingress accesses a specific port, then have the network policy only allow that port. This way you can minimize the attack surface within the cluster only to that which is Internet-accessible.</p>
<p>Any other traffic to these pods will need to be explicitly allowed.</p>
<p>For example:</p>
<pre><code>---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: allow-external
spec:
podSelector:
matchLabels:
app: <your-app> # app-label
ingress:
- from: []
ports:
- port: 1234 # the port which should be Internet-accessible
</code></pre>
<p>This is actually a problem we faced when implementing the Network Policy plugin for the Otterize Intents operator - the operator lets you declare which pods you want to connect to within the cluster and block all the rest by automatically creating network policies and labeling pods, but we had to do that without inadvertently blocking external traffic once the first network policy had been created.</p>
<p>We settled on automatically detecting whether a <code>Service</code> resource of type <code>LoadBalancer</code> or <code>NodePort</code> exists, or an <code>Ingress</code> resource, and creating a network policy that allows all traffic to those ports, as in the example above. A potential improvement for that is to support specific Ingress controllers that have in-cluster pods (so, not AWS ALB, but could be nginx ingress controller, for example), and only allowing traffic from the specific ingress pods.</p>
<p>Have a look here: <a href="https://github.com/otterize/intents-operator" rel="nofollow noreferrer">https://github.com/otterize/intents-operator</a>
And the documentation page explaining this: <a href="https://docs.otterize.com/components/intents-operator/#network-policies" rel="nofollow noreferrer">https://docs.otterize.com/components/intents-operator/#network-policies</a></p>
<p>If you wanna use this and add support for a specific Ingress controller you're using, hop onto to the Slack or open an issue and we can work on it together.</p>
|