prompt | response
---|---|
<p>I have created 2 pods within the same cluster. One service is initialized as</p>
<pre><code>kubectl create deployment my-web --image=nginx --port=80
kubectl expose deployment my-web --target-port=80 --type=NodePort
</code></pre>
<p>To my understanding, this creates a deployment with one pod <code>my-web-<string></code> and exposes a port. With <code>kubectl describe services my-web</code>, I find the following information:</p>
<pre><code>Port: <unset> 80/TCP
TargetPort: 80/TCP
NodePort: <unset> 32004/TCP
Endpoints: 10.244.0.10:80
</code></pre>
<p>testing pod:</p>
<pre><code>kubectl run test-pod --image=nginx --restart=Never
</code></pre>
<p>This creates another pod, and I try to curl the nginx of the my-web pod with the command <code>curl 10.244.0.10:32004</code>. That request times out. But somehow it works when I use <code>curl 10.244.0.10:80</code>. Why is that? I thought the service was exposed on port 32004 outside the my-web pod?</p>
<p>Please also let me know what IP and port to curl from my host machine to access the my-web pod. I am running the cluster from minikube on MacOS.</p>
<p>Thanks for the help!</p>
| <p>A NodePort exposes the service on a static port on every <em>node's</em> IP, not on the pod IP. That is why <code>curl 10.244.0.10:32004</code> times out: nothing listens on port 32004 at the pod's address. <code>curl 10.244.0.10:80</code> works because you are hitting the nginx container directly on its pod IP and container port.
If the node port is not reachable from outside, you might need a firewall rule that allows TCP traffic on it. Create a firewall rule that allows TCP traffic on port 32004.
On Ubuntu you can do something like:</p>
<blockquote>
<p>sudo ufw allow 32004/tcp</p>
</blockquote>
<p>And check port status with:</p>
<blockquote>
<p>sudo ufw status</p>
</blockquote>
<p>Once you are sure the port is open, curl the <em>node's</em> IP (not the pod IP) on the node port. On minikube the node IP is reported by <code>minikube ip</code>:</p>
<pre><code>curl http://$(minikube ip):32004
</code></pre>
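<p>Alternatively, let minikube construct a reachable URL for you; with the docker driver on macOS this also sets up the tunnel the host needs:</p>
<pre><code>minikube service my-web --url
</code></pre>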
<p>For further info check the <a href="https://kubernetes.io/docs/tasks/access-application-cluster/service-access-application-cluster/" rel="nofollow noreferrer">Kubernetes official documentation.</a></p>
|
<p>Hello I am trying to deploy a simple tomcat service. Below are the details:</p>
<p>1. minikube version: v1.8.1</p>
<p>2. OS: macOS</p>
<p>3. The <strong>deployment.yaml</strong> file (I am in the directory of the yaml file)</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: tomcat-deployment
spec:
  selector:
    matchLabels:
      app: tomcat
  replicas: 1
  template:
    metadata:
      labels:
        app: tomcat
    spec:
      containers:
        - name: tomcat
          image: tomcat:9.0
          ports:
            - containerPort: 8080
</code></pre>
<p>4. Commands used to deploy and expose the service</p>
<pre><code>kubectl apply -f deployment.yaml
kubectl expose deployment tomcat-deployment --type=NodePort
minikube service tomcat-deployment --url
curl [URL]
</code></pre>
<p>I get a 404 when I curl the URL.
I am unsure if there's an issue with the deployment.yaml file or some minikube settings.</p>
| <p>Sara's answer above pointed me in the right direction. Copying the files works, but it requires a restart of the Tomcat service, which reverts the changes, so the copy has to happen as part of the container's startup command. I had to use <code>cp -r</code> in the deployment YAML as per below:</p>
<pre><code>spec:
  containers:
    - name: tomcat
      image: tomcat
      ports:
        - containerPort: 8080
      volumeMounts:
        - mountPath: /usr/local/tomcat/webapps.dist/manager/META-INF/context.xml
          name: tomcat-configmap
          subPath: context1
        - mountPath: /usr/local/tomcat/webapps.dist/host-manager/META-INF/context.xml
          name: tomcat-configmap
          subPath: context2
        - mountPath: /usr/local/tomcat/conf/tomcat-users.xml
          name: tomcat-configmap
          subPath: tomcat-users
      command: ["/bin/bash"]
      args: [ "-c", "cp -r /usr/local/tomcat/webapps.dist/* /usr/local/tomcat/webapps/ && catalina.sh start; sleep inf" ]
  volumes:
    - name: tomcat-configmap
      configMap:
        name: tomcat-configmap
</code></pre>
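<p>For completeness, a minimal sketch of the referenced ConfigMap (the key names mirror the <code>subPath</code> values above; the actual XML contents are placeholders for whatever your setup needs):</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: tomcat-configmap
data:
  context1: |
    <!-- manager context.xml contents -->
  context2: |
    <!-- host-manager context.xml contents -->
  tomcat-users: |
    <!-- tomcat-users.xml contents -->
</code></pre>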
|
<p>Imagine you want to get kind/struct called <code>KubeadmControlPlane</code> from the kubernetes API server.</p>
<p>This means you need to import the related struct into your code.</p>
<p>A matching import statement for <code>KubeadmControlPlane</code> would be:</p>
<blockquote>
<p>kubeadm "sigs.k8s.io/cluster-api/controlplane/kubeadm/api/v1beta1"</p>
</blockquote>
<p>Up to now it takes me too much time to find a matching import statement.</p>
<p>I use vscode.</p>
<p>How do you get from the CRD kind to an import statement?</p>
| <p>In principle, a <code>go get sigs.k8s.io/cluster-api@<version></code> (done in the folder where your <code>go.mod</code> is) should be enough to:</p>
<ul>
<li>update your <code>go.mod</code>,</li>
<li>add the library in your <code>$GOPATH</code> and</li>
<li>enable VSCode auto-import to work.</li>
</ul>
<p>That means, when you start typing the name of a struct, like <code>KubeadmControlPlane</code>, the <a href="https://code.visualstudio.com/docs/languages/go" rel="nofollow noreferrer">VSCode Go extension</a> should suggest an auto-import if it can find a matching package in your <code>GOPATH</code> or in your project's vendor directory.</p>
<hr />
<p>If not, the manual process would be:</p>
<ol>
<li><p><strong>Identify the API Group and Version of the CRD:</strong> This information is usually found in the <code>apiVersion</code> field of the CRD YAML file. For example, the <code>KubeadmControlPlane</code> is part of the <code>controlplane.cluster.x-k8s.io/v1beta1</code> API group and version.</p>
</li>
<li><p><strong>Find the Go Package for the API Group:</strong> You need to find the corresponding Go package for this API group.<br />
In the case of the <code>KubeadmControlPlane</code>, it is part of the <code>sigs.k8s.io/cluster-api</code> project and the specific package path is <code>sigs.k8s.io/cluster-api/controlplane/kubeadm/api/v1beta1</code>.<br />
A <a href="https://pkg.go.dev/search?q=KubeadmControlPlane" rel="nofollow noreferrer">search in <code>pkg.go.dev</code></a> works too, pending an official API to lookup packages (<a href="https://github.com/golang/go/issues/36785" rel="nofollow noreferrer">issue 36785</a>).</p>
</li>
<li><p><strong>Identify the Go Struct for the CRD:</strong> The Go struct is usually named similarly to the Kind of the CRD. In this case, it is <code>KubeadmControlPlane</code>.</p>
</li>
<li><p><strong>Create the Go Import Statement:</strong> Once you have the package path and struct name, you can create the Go import statement. For example:</p>
</li>
</ol>
<pre class="lang-golang prettyprint-override"><code>import (
kubeadm "sigs.k8s.io/cluster-api/controlplane/kubeadm/api/v1beta1"
)
</code></pre>
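<p>As a quick sanity check that the import resolves, you can fetch the module and print the struct's definition from the terminal (version left unpinned here):</p>
<pre><code>go get sigs.k8s.io/cluster-api@latest
go doc sigs.k8s.io/cluster-api/controlplane/kubeadm/api/v1beta1.KubeadmControlPlane
</code></pre>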
|
<p>I am deploying some Flink jobs which require access to some services under a service mesh implemented via Linkerd and I'm running into this error:</p>
<pre><code>java.lang.NoClassDefFoundError: Could not initialize class foo.bar.Job
</code></pre>
<p>I can confirm that the jar file contains the class that cannot be found apparently, so it's not a problem with the jar itself, but seems to be related to Linkerd. In particular, I'm using the following pod annotations for both the jobmanager and the taskmanager pods (taken from my Helm Chart values file):</p>
<pre><code>podAnnotations:
  linkerd.io/inject: enabled
  config.linkerd.io/skip-outbound-ports: 6123,6124
  config.linkerd.io/proxy-await: enabled
</code></pre>
<p>For what it's worth, I'm using the <a href="https://www.ververica.com/" rel="nofollow noreferrer">Ververica Platform</a> (Community Edition) for deploying my jobs to Kubernetes, although I don't think the issue is VVP-specific:</p>
<pre><code>{{- define "vvp.deployment" }}
kind: Deployment
apiVersion: v1
metadata:
  name: my-job
spec:
  template:
    spec:
      artifact:
        kind: jar
        flinkImageRegistry: {{ .Values.flink.imageRegistry }}
        flinkVersion: "1.15.1"
        flinkImageTag: 1.15.1-stream1-scala_2.12-java11-linkerd
        entryClass: foo.bar.Job
      kubernetes:
        jobManagerPodTemplate:
          metadata:
            {{- with .Values.flink.podAnnotations }}
            annotations:
              {{- toYaml . | nindent 14 }}
            {{- end }}
          spec:
            containers:
              - name: flink-jobmanager
                command:
                  - linkerd-entrypoint.sh
        taskManagerPodTemplate:
          metadata:
            {{- with .Values.flink.podAnnotations }}
            annotations:
              {{- toYaml . | nindent 14 }}
            {{- end }}
{{- end }}
</code></pre>
<p>where the contents of <code>linkerd-entrypoint.sh</code> are:</p>
<pre><code>#!/bin/bash
set -e
exec linkerd-await --shutdown -- "$@"
</code></pre>
<p>For extra context, the VVP and the flink jobs are deployed into different namespaces. Also, for the VVP pods, I'm not using any linkerd annotations whatsoever.</p>
<p>Has anyone encountered similar problems? The closest troubleshooting resource/guide that I've found so far is <a href="https://ververica.zendesk.com/hc/en-us/articles/7233687970460-How-to-integrate-Istio-with-Ververica-Platform" rel="nofollow noreferrer">this one</a>, which targets Istio instead of Linkerd.</p>
| <p>Answering myself after having determined the root cause of the issue.</p>
<p>Regarding Linkerd, everything was set up correctly. The main precaution to take is adding the <code>linkerd-await</code> binary to the Flink image and making sure to override the entrypoint for the jobmanager, since otherwise you will run into issues when upgrading your jobs: the jobmanager won't kill the Linkerd proxy, and because of that it will hang around with <code>NotReady</code> status. Again, that is easily solved by wrapping the main cmd in a <code>linkerd-await</code> call. So, first add the <code>linkerd-await</code> binary to your docker image:</p>
<pre><code># Add linkerd-await and linkerd-entrypoint.sh
USER root
RUN apt-get update && apt-get install -y wget
RUN wget https://github.com/linkerd/linkerd-await/releases/download/release%2Fv0.2.7/linkerd-await-v0.2.7-amd64 -O ./linkerd-await && chmod +x ./linkerd-await
COPY scripts/flink/linkerd-entrypoint.sh ./linkerd-entrypoint.sh
</code></pre>
<p>Then, for the jobmanager only, override the entrypoint like this:</p>
<pre class="lang-yaml prettyprint-override"><code>spec:
  containers:
    - name: flink-jobmanager
      command:
        - linkerd-entrypoint.sh # defined above
</code></pre>
<p>Alternatively one could use the <code>LINKERD_DISABLED</code> or <code>LINKERD_AWAIT_DISABLED</code> env vars for bypassing the <code>linkerd-await</code> wrapper. For more info on using jobs & Linkerd consult the following resources:</p>
<ul>
<li><a href="https://itnext.io/three-ways-to-use-linkerd-with-kubernetes-jobs-c12ccc6d4c7c" rel="nofollow noreferrer">https://itnext.io/three-ways-to-use-linkerd-with-kubernetes-jobs-c12ccc6d4c7c</a> (solution #3 is the one explained here)</li>
<li><a href="https://github.com/linkerd/linkerd-await" rel="nofollow noreferrer">https://github.com/linkerd/linkerd-await</a></li>
</ul>
<p>Also, regarding the annotation</p>
<pre class="lang-yaml prettyprint-override"><code>config.linkerd.io/proxy-await: enabled
</code></pre>
<p>, it does only the waiting but not the shutdown part, so if we are going to manually run <code>linkerd-await --shutdown -- "$@"</code> anyway, that annotation can be safely removed since it's redundant:</p>
<ul>
<li><a href="https://github.com/linkerd/linkerd2/issues/8006" rel="nofollow noreferrer">https://github.com/linkerd/linkerd2/issues/8006</a></li>
</ul>
<p>Finally, regarding:</p>
<pre><code>java.lang.NoClassDefFoundError: Could not initialize class foo.bar.Job
</code></pre>
<p>let me clarify that this had nothing to do with Linkerd. This was mostly a config error along the lines of:</p>
<ul>
<li><a href="https://stackoverflow.com/questions/7325579/java-lang-noclassdeffounderror-could-not-initialize-class-xxx">java.lang.NoClassDefFoundError: Could not initialize class XXX</a></li>
</ul>
<p>Essentially (the specific details are irrelevant), there were some env vars missing in the taskmanager pods. Note that the exception message says "Could not initialize class foo.bar.Job" which is different from "Could not find class...".</p>
<p>Sorry for the confusion!</p>
|
<p>I have this use case:</p>
<blockquote>
<p>When there is much load on a specific queue in RabbitMQ, I want to start more replicas. Let's say my app can handle 5 messages (= tasks) simultaneously and they all take 1 min to complete. When there are more than 10 "ready" messages in the RabbitMQ queue, I want the HPA to start a new replica. When there are 20 "ready" messages, start 2; at 30 "ready" messages, start 3, etc.</p>
</blockquote>
<p>I used this helm chart to install prometheus-adapter:</p>
<p><a href="https://github.com/prometheus-community/helm-charts/tree/main/charts/prometheus-adapter" rel="nofollow noreferrer">https://github.com/prometheus-community/helm-charts/tree/main/charts/prometheus-adapter</a></p>
<p>And in my helm values yaml I added:</p>
<pre class="lang-yaml prettyprint-override"><code>rules:
  default: true
  custom:
    - seriesQuery: '{__name__=~"rabbitmq_detailed_queue_messages_ready"}'
      name:
        matches: "^(.*)"
        as: "open_tasks"
      resources:
        overrides:
          kubernetes_namespace: { resource: "namespace" }
          kubernetes_name: { resource: "service" }
      metricsQuery: sum(<<.Series>>{<<.LabelMatchers>>,queue="my-task-queue"}) by (<<.GroupBy>>)
</code></pre>
<p>Now, this should be exposed, but it isn't:</p>
<pre class="lang-bash prettyprint-override"><code>$ kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1 | jq . | grep open_tasks
</code></pre>
<p>Now this is my main question. After that I could deploy an HPA for my app <code>task-processor</code> like this:</p>
<pre class="lang-yaml prettyprint-override"><code>kind: HorizontalPodAutoscaler
apiVersion: autoscaling/v2
metadata:
  name: task-processor-autoscaler
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: task-processor
  minReplicas: 1
  maxReplicas: 10
  metrics:
    - type: Object
      object:
        metric:
          name: open_tasks
        describedObject:
          apiVersion: "/v1"
          kind: Service
          name: open_tasks
        target:
          type: Value
          value: 10
</code></pre>
<p>Now my questions:</p>
<ol>
<li>Why is the metric not exposed in the raw query?</li>
<li>Is my yaml for the HPA correct? I have the feeling that I'm missing some essential stuff here, but I'm not able to get my head around it.</li>
</ol>
| <p>Your use case is essentially autoscaling based on RabbitMQ queue length, which is a bit challenging due to the limitations of the native Kubernetes HPA controller, as it typically autoscales based on CPU and memory usage.</p>
<p><strong>Alternatively, you may consider utilizing <a href="https://keda.sh/docs/2.10/concepts" rel="nofollow noreferrer">KEDA</a></strong> (Kubernetes Event-Driven Autoscaling):</p>
<p>KEDA is a Kubernetes-based event-driven autoscaler. With KEDA, you can drive the scaling of any deployment based on various criteria (depending on the scaler adapter). It supports <a href="https://keda.sh/docs/2.10/scalers/rabbitmq-queue" rel="nofollow noreferrer">RabbitMQ</a>, which makes it particularly suitable for your case.</p>
<p>Here's a sample <code>ScaledObject</code> for your case:</p>
<pre><code>apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: task-processor-scaler # example name, added for completeness
spec:
  scaleTargetRef:
    apiVersion: apps/v1 # Optional (default: apps/v1)
    kind: Deployment
    name: task-processor
  triggers:
    - type: rabbitmq
      metadata:
        queueName: 'my-task-queue'
        mode: QueueLength # Trigger on number of messages in the queue.
        value: '5' # Target number of tasks per pod.
</code></pre>
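<p>Once the RabbitMQ connection details are supplied (typically a <code>host</code> entry in the trigger metadata or a <code>TriggerAuthentication</code>), KEDA creates and manages the underlying HPA for you; you can verify with:</p>
<pre><code>kubectl get scaledobject
kubectl get hpa
</code></pre>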
|
<p>I am getting the IP address assigned to the pod using <code>kubectl get pods -o custom-columns="POD_IP":.status.podIPs</code> command.</p>
<p>And based on the same approach, I am using the <code>kubectl get pods -o custom-columns="POD_PORT":.spec.containers.ports.containerPort</code> command to get the port number, but it comes back blank.</p>
<pre><code>cloudshell:~$ kubectl get pods -o custom-columns="POD_IP":.status.podIPs
POD_IP
[map[ip:10.32.0.194]]
cloudshell:~$ kubectl get pods -o custom-columns="POD_PORT":.spec.containers.ports.containerPort
POD_PORT
<none>
cloudshell:~$ kubectl get pods -o custom-columns="POD_PORT":.spec.containers
POD_PORT
[map[image:nginx:1.10.1 imagePullPolicy:IfNotPresent name:servic1 ports:[map[containerPort:8080 protocol:TCP]] resources:map[limits:map[cpu:500m ephemeral-storage:1Gi memory:2Gi] requests:map[cpu:500m ephemeral-storage:1Gi memory:2Gi]] securityContext:map[capabilities:map[drop:[NET_RAW]]] terminationMessagePath:/dev/termination-log terminationMessagePolicy:File volumeMounts:[map[mountPath:/var/run/secrets/kubernetes.io/serviceaccount name:kube-api-access-mgk8k readOnly:true]]]]
cloudshell:~$
</code></pre>
<p>I have tried the <code>kubectl get pods -o custom-columns="Port Number of Pod":.spec.containers</code> command, and from its output I can see that my mapping (<code>.spec.containers.ports.containerPort</code>) should be correct, but somehow it is still not working.</p>
<p>I am totally sure that the <code>.spec.containers.ports.containerPort</code> mapping is correct, and the same command format gives the IP address, so I am not able to catch what is wrong.</p>
<p>Is anyone able to catch what is wrong here?</p>
| <p>Try:</p>
<pre class="lang-bash prettyprint-override"><code>kubectl get pods \
--output=custom-columns=\
"POD_PORT":.spec.containers[*].ports[*].containerPort
</code></pre>
<p>You can include <code>.metadata.name</code> too to aid clarity:</p>
<pre class="lang-bash prettyprint-override"><code>kubectl get pods \
--output=custom-columns=\
"NAME":.metadata.name,\
"POD_PORT":.spec.containers[*].ports[*].containerPort
</code></pre>
<p>It's not clearly (!?) <a href="https://kubernetes.io/docs/reference/kubectl/#custom-columns" rel="noreferrer">documented</a> but I suspect the format is kubectl's <a href="https://kubernetes.io/docs/reference/kubectl/jsonpath/" rel="noreferrer">JSONPath</a> and there (appears to be) a subtle distinction between e.g. <code>.spec.containers[]</code> and <code>.spec.containers[*]</code> where the former stops when the property is not found and the latter includes everything.</p>
<p>Because <code>.spec</code> will always include one or more <code>.containers</code>, but each <code>container</code> may not have <code>.ports</code>, you can also:</p>
<pre class="lang-bash prettyprint-override"><code>kubectl get pods \
--output=custom-columns=\
"POD_PORT":.spec.containers[].ports[*].containerPort
</code></pre>
<p>This uses <code>containers[]</code> but <code>ports[*]</code>, to the same effect.</p>
<p><strong>NOTE</strong> as explained in <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.26/#container-v1-core" rel="noreferrer">Container v1 core</a> (see "ports"), ports that are exposed by the container need not be declared through <code>ports</code>; i.e., this command will return the declared ports, but the list may exclude ports that are exposed by the containers without being declared.</p>
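<p>For reference, an equivalent with <code>--output=jsonpath</code> (the same caveat about undeclared ports applies):</p>
<pre class="lang-bash prettyprint-override"><code>kubectl get pods \
--output=jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[*].ports[*].containerPort}{"\n"}{end}'
</code></pre>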
|
<p>I have a deployment with multiple pods in Azure Kubernetes Service.<br>
There is a K8s service that is used to connect deployment pods.<br>
The service has a private IP accessible in Azure Virtual Network. The service type is LoadBalancer.<br>
I want to monitor and see if the service is up. If it is not up, trigger an email alert.</p>
<p>I have identified two options:</p>
<p><strong>Option 1:</strong><br>
I enabled AKS diagnostics so that I get the service logs. When I check the logs with the query below, I can see service failure logs. I think I can use these logs in Azure Monitor to trigger an alert. I still need to verify if it will work in every type of failure.</p>
<pre><code>KubeEvents
| where TimeGenerated > ago(7d)
| where not(isempty(Namespace))
| where ObjectKind == 'Service'
</code></pre>
<p><strong>Option 2:</strong><br>
Create an Azure Function with an HTTPS API enabled so I can call it externally from Pingdom.
Make sure the function uses an App Service plan with a VM so that it can access private IPs and the service (as this uses a VM, it increases the cost). The function checks the private IP; if it gets a 200, it returns 200, otherwise it returns an error code. So Pingdom will keep the uptime details and also alert accordingly when the service is down.</p>
<p><strong>Summary:</strong><br>
I am not 100% sure about option one. For the second option, it seems like doing too much work, and I think that there should be a better and more robust way of doing it.</p>
<p>So I am interested in getting feedback from some Azure and K8s experts who dealt with the problem and solved it in a more robust way.</p>
| <p>Using Azure Application Insights, there are two <a href="https://learn.microsoft.com/en-us/azure/azure-monitor/app/availability-private-test" rel="nofollow noreferrer">private monitoring options</a> described:</p>
<ol>
<li>Allowing limited inbound connectivity</li>
<li>Using Azure Functions, as you have described in your Option 2.</li>
</ol>
<p>Personally I prefer endpoint monitoring to be more independent from the resource that's hosting the service.</p>
|
<p>I have a Kafka cluster deployed in OpenShift in a different namespace; OpenTelemetry is deployed in another namespace.</p>
<p>This is my yaml file for opentelemetry</p>
<pre><code>apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
  name: demo-otlcol-otlp
  namespace: otlcol-demo
spec:
  config: |
    receivers:
      otlp:
        protocols:
          grpc:
          http:
    processors:
      batch:
    exporters:
      logging:
        loglevel: info
      kafka:
        brokers:
          - dev-kafka-kafka-brokers.kafka.svc.cluster.local:9092
        protocol_version: 3.4.0
    service:
      pipelines:
        traces:
          receivers: [otlp]
          processors: [batch]
          exporters: [logging,kafka]
  mode: daemonset
  resources: {}
  targetAllocator: {}
</code></pre>
<p>The error I'm getting is:</p>
<pre><code>Error: failed to get config: cannot unmarshal the configuration: 1 error(s) decoding:
* error decoding 'exporters': unknown type: "kafka" for id: "kafka" (valid values: [logging otlp otlphttp jaeger])
2023/05/31 09:24:46 collector server run finished with error: failed to get config: cannot unmarshal the configuration: 1 error(s) decoding:
* error decoding 'exporters': unknown type: "kafka" for id: "kafka" (valid values: [logging otlp otlphttp jaeger])
</code></pre>
<p>Can anyone explain how to use Kafka in OpenTelemetry exporters?</p>
| <p>I can confirm what <a href="https://stackoverflow.com/users/396567/michael-hausenblas">Michael</a> said. Kafka is atm not supported and not part of Red Hat's OTel distro. <a href="https://github.com/os-observability/redhat-opentelemetry-collector/blob/52c064b83e0531809db6f5faf4057e127cadf5bd/manifest.yaml" rel="nofollow noreferrer">Here</a> you can see which OTel components will be productized in the next release. To bypass your issue, you can <a href="https://opentelemetry.io/docs/collector/distributions/" rel="nofollow noreferrer">create your own distro</a>. The easiest way would be to extend Red Hat's <code>manifest.yaml</code> with the Kafka exporter. For testing, you can also use the contrib image offered by the OpenTelemetry community. There is a section in the OTel CRD to overwrite the collector image.</p>
<p>Example:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
  name: simplest
spec:
  image: ghcr.io/open-telemetry/opentelemetry-collector-releases/opentelemetry-collector-contrib:0.78.0
  config: ...
</code></pre>
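<p>And if you do build your own distro, the entry to add to the builder's <code>manifest.yaml</code> for the Kafka exporter would look roughly like this (a sketch; pin the version that matches your collector release):</p>
<pre class="lang-yaml prettyprint-override"><code>exporters:
  - gomod: github.com/open-telemetry/opentelemetry-collector-contrib/exporter/kafkaexporter v0.78.0
</code></pre>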
|
<p>I've got a .NET 6 service hosted in an AKS cluster with Application Insights and the profiler enabled. Logs appear in App Insights, live metrics are working, and I can see every action in App Insights.</p>
<p>When I click the "Profile now" button in the performance tab, it says a profiling session is in progress and I fire a few requests. Eventually I get a timeout message in App Insights and no session is added to the list. Why could that happen?</p>
| <p>Sadly, the Azure Profiler just does not support .NET 6. There might be other profiling solutions for Azure that work with this .NET version.</p>
|
<p>I have created an EKS cluster with ALB setup. I tried installing superset by following the steps provided in <a href="https://superset.apache.org/docs/installation/running-on-kubernetes/" rel="nofollow noreferrer">https://superset.apache.org/docs/installation/running-on-kubernetes/</a></p>
<p>my-values.yaml</p>
<pre><code>ingress:
  enabled: true
  ingressClassName: ~
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: instance
    # kubernetes.io/tls-acme: "true"
    ## Extend timeout to allow long running queries.
    # nginx.ingress.kubernetes.io/proxy-connect-timeout: "300"
    # nginx.ingress.kubernetes.io/proxy-read-timeout: "300"
    # nginx.ingress.kubernetes.io/proxy-send-timeout: "300"
  path: /
  pathType: ImplementationSpecific
  hosts:
    - chart-example.local
  tls: []
  extraHostsRaw: []
  # - secretName: chart-example-tls
  #   hosts:
  #     - chart-example.local
</code></pre>
<p>When I am running <code>helm upgrade --install --values my-values.yaml superset superset/superset --timeout 10m30s</code>, it takes a lot of time and returns</p>
<pre><code>Error: UPGRADE FAILED: post-upgrade hooks failed: 1 error occurred:
* timed out waiting for the condition
</code></pre>
<p>and when I run</p>
<pre><code>[ec2-user@ip-1**-**-**-*** ~]$ kubectl get pods
NAME READY STATUS RESTARTS AGE
superset-7866fcc8b4-tcpk4 0/1 Init:0/1 8 (6m53s ago) 33m
superset-init-db-6q9dp 0/1 Init:Error 0 5m24s
superset-init-db-7hqz4 0/1 Init:Error 0 7m48s
superset-init-db-jt87x 0/1 Init:Error 0 12m
superset-init-db-rt85r 0/1 Init:Error 0 10m
superset-init-db-zptz6 0/1 Init:Error 0 2m40s
superset-postgresql-0 0/1 Pending 0 33m
superset-redis-master-0 1/1 Running 0 33m
superset-worker-748db75bf7-9kzfp 0/1 Init:0/1 8 (6m56s ago) 33m
</code></pre>
<p>I am new to kubernetes and this is new to me. Please help!</p>
<p>Edit 1:
Added the EBS CSI driver and a StorageClass and went ahead with the Superset installation. Ran the following commands; attaching the responses:</p>
<pre><code>kubectl get pods
NAME READY STATUS RESTARTS AGE
superset-7866fcc8b4-q59nd 0/1 Init:0/1 4 (109s ago) 13m
superset-init-db-gq9b9 0/1 Pending 0 13m
superset-postgresql-0 0/1 Pending 0 13m
superset-redis-master-0 1/1 Running 0 13m
superset-worker-748db75bf7-n7t2r 0/1 Init:0/1 5 (91s ago) 13m
[ec2-user@ip-172-31-23-209 ~]$ kubectl logs superset-worker-748db75bf7-n7t2r
Defaulted container "superset" out of: superset, wait-for-postgres-redis (init)
Error from server (BadRequest): container "superset" in pod "superset-worker-748db75bf7-n7t2r" is waiting to start: PodInitializing
[ec2-user@ip-172-31-23-209 ~]$ kubectl logs superset-7866fcc8b4-q59nd
Defaulted container "superset" out of: superset, wait-for-postgres (init)
Error from server (BadRequest): container "superset" in pod "superset-7866fcc8b4-q59nd" is waiting to start: PodInitializing
kubectl describe pod superset-postgresql-0
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 4m20s (x4 over 16m) default-scheduler 0/1 nodes are available: 1 Too many pods. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.
</code></pre>
| <p>If you're running PostgreSQL, and you're using EKS 1.23 or higher, you'll need to install a CSI driver, e.g. the <a href="https://github.com/kubernetes-sigs/aws-ebs-csi-driver" rel="nofollow noreferrer">EBS CSI driver</a>. Starting with 1.23, EKS no longer ships with a storage driver (the in-tree driver was deprecated). After installing the CSI driver, create a default storage class. Your pods should start shortly thereafter. If you're new to Kubernetes, I'd recommend installing the CSI driver through <a href="https://docs.aws.amazon.com/eks/latest/userguide/managing-add-ons.html" rel="nofollow noreferrer">EKS addons</a>.</p>
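<p>A minimal sketch of that setup (cluster name and IAM role ARN are placeholders you'd substitute):</p>
<pre><code># Install the EBS CSI driver as an EKS addon
aws eks create-addon --cluster-name my-cluster --addon-name aws-ebs-csi-driver \
  --service-account-role-arn arn:aws:iam::111122223333:role/AmazonEKS_EBS_CSI_DriverRole

# Create a default storage class backed by the CSI driver
kubectl apply -f - <<'EOF'
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp3
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: ebs.csi.aws.com
volumeBindingMode: WaitForFirstConsumer
EOF
</code></pre>
<p>Note also the <code>0/1 nodes are available: 1 Too many pods</code> event in your output: the single node has hit its pod limit, so adding a node (or a larger instance type) may be needed as well.</p>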
|
<p>I have deployed my Kubernetes cluster in AWS EKS and set up Prometheus and Grafana to monitor it. How do I find the max number of running pods throughout the day?</p>
| <p>In the query editor of the graph panel, enter the PromQL query to retrieve the maximum number of running pods throughout the day. The query should use the max_over_time function to find the highest value over a time range.
This one worked for me - <code>max_over_time(kube_pod_status_phase{phase="Running"}[1d])</code></p>
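<p>Note that this query yields one series per pod; if you want a single cluster-wide maximum, a subquery that sums first and then takes the max over the day should work (assuming a Prometheus version with subquery support):</p>
<pre><code>max_over_time(sum(kube_pod_status_phase{phase="Running"})[1d:1m])
</code></pre>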
<p>I also found this open-source project on GitHub- <a href="https://github.com/unskript/Awesome-CloudOps-Automation" rel="nofollow noreferrer">Awesome-CloudOps-Automation</a>. They are creating an open-source framework for writing Runbooks using Jupyter Notebooks. They have many <a href="https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Prometheus/legos" rel="nofollow noreferrer">Prometheus</a> and <a href="https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/AWS/legos" rel="nofollow noreferrer">EKS</a> actions that can be used to automate some of the Prometheus and AWS EKS related tasks. You can use the <a href="https://github.com/unskript/Awesome-CloudOps-Automation/tree/master/Prometheus/legos/prometheus_get_metric_statistics" rel="nofollow noreferrer">Prometheus get metrics statistics action</a> to fire your PromQL query.</p>
|
<p>I have a k8s controller which needs to install some resources and update the status and conditions accordingly.</p>
<p>The flow in the reconcile is like the following:</p>
<ol>
<li>Install the resource and don’t wait</li>
<li>Call the function <code>checkAvailability</code> and update the status accordingly: ready / pending install / error</li>
</ol>
<p>I’ve two main questions:</p>
<ol>
<li>This is the first time that I use status and conditions; is it the right way, or am I missing something?</li>
<li>Sometimes when I do the update <code>r.Status().Update</code> I get the error <code>Operation cannot be fulfilled on eds.core.vtw.bmw.com "resouce01": the object has been modified; please apply your changes to the latest version and try again</code>, so I've added the check <code>conditionChanged</code>, which solves the problem. But I'm not sure it's correct: I update the status once, and if it doesn't change I don't touch it, so the user can see a "ready" status from a while ago, and the reconcile doesn't update the date and time for the ready condition since it skips the update when it's already "ready".</li>
</ol>
<p>I use the following:</p>
<pre><code>func (r *ebdReconciler) checkHealth(ctx context.Context, req ctrl.Request, ebd ebdmanv1alpha1.ebd) (bool, error) {
    vfmReady, err := r.mbr.IsReady(ctx, req.Name, req.Namespace)
    condition := metav1.Condition{
        Type:               ebdmanv1alpha1.KubernetesvfmHealthy,
        ObservedGeneration: ebd.Generation,
        LastTransitionTime: metav1.Now(),
    }
    if err != nil {
        // There was an error checking readiness - Set status to false
        condition.Status = metav1.ConditionFalse
        condition.Reason = ebdmanv1alpha1.ReasonError
        condition.Message = fmt.Sprintf("Failed to check vfm readiness: %v", err)
    } else if vfmReady {
        // The vfm is ready - Set status to true
        condition.Status = metav1.ConditionTrue
        condition.Reason = ebdmanv1alpha1.ReasonReady
        condition.Message = "vfm custom resource is ready"
    } else {
        // The vfm is not ready - Set status to false
        condition.Status = metav1.ConditionFalse
        condition.Reason = ebdmanv1alpha1.ResourceProgressing
        condition.Message = "vfm custom resource is not ready"
    }

    // Get the latest version of the ebd
    latestebd := ebdmanv1alpha1.ebd{}
    if err := r.Get(ctx, req.NamespacedName, &latestebd); err != nil {
        return vfmReady, err
    }

    oldConditions := latestebd.Status.Conditions
    meta.SetStatusCondition(&latestebd.Status.Conditions, condition)
    if !conditionChanged(&oldConditions, &latestebd.Status.Conditions, ebdmanv1alpha1.KubernetesvfmHealthy) {
        return vfmReady, nil
    }

    if err := r.Status().Update(ctx, &latestebd); err != nil {
        r.Log.Error(err, "failed to update vfm status")
        return vfmReady, err
    }
    return vfmReady, nil
}

func conditionChanged(oldConditions, newConditions *[]metav1.Condition, conditionType string) bool {
    newCondition := meta.FindStatusCondition(*newConditions, conditionType)
    oldCondition := meta.FindStatusCondition(*oldConditions, conditionType)
    if oldCondition == nil && newCondition == nil {
        return false
    }
    if oldCondition == nil || newCondition == nil {
        return true
    }
    return oldCondition.Status != newCondition.Status || oldCondition.Reason != newCondition.Reason || oldCondition.Message != newCondition.Message
}
</code></pre>
| <p>To your questions:</p>
<ol>
<li><p>This is the first time that I use status and conditions, is it right way or do I miss something?</p>
<p>Your approach to managing the status and conditions of Kubernetes resources is generally fine. The <a href="https://book-v1.book.kubebuilder.io/basics/status_subresource.html" rel="nofollow noreferrer"><code>status</code> subresource</a> in a Kubernetes API object is typically used to represent the current state of the system, and it can include conditions.</p>
<p>A condition is a collection of fields that describe the state of an object in a more detailed way than just <code>true</code> or <code>false</code>. Each condition typically has a <code>type</code>, <code>status</code>, <code>reason</code>, <code>message</code>, and <code>lastTransitionTime</code>. Your code correctly sets these fields based on whether the <code>vfm</code> custom resource is ready or not.</p>
<p>It is good to note that conditions should be leveled - meaning they should be set to their current observed value regardless of their previous value. They should also be set (either <code>true</code>, <code>false</code>, or <code>unknown</code>) for all the significant or user-meaningful aspects of the component's current state. This makes conditions a good mechanism to indicate "transient states" like <code>Progressing</code> or <code>Degraded</code> that might be expected to change over time or based on external state.</p>
</li>
<li><p>Sometimes when I do the update <code>r.Status().Update</code> I got error: <code>Operation cannot be fulfilled on eds.core.vtw.bmw.com “resource01”: the object has been modified; please apply your changes to the latest version and try again</code>.</p>
<p>This error occurs because another client updated the same object while your controller was processing it. This could be another controller or even another instance of the same controller (if you run more than one).</p>
<p>One possible way to handle this is to use a retry mechanism that re-attempts the status update when this error occurs. In your case, you have implemented a <code>conditionChanged</code> check to only attempt the status update if the condition has changed. This is a good approach to avoid unnecessary updates, but it does not completely prevent the error, because another client could still update the object between your <code>Get</code> call and <code>Status().Update</code> call.</p>
<p>You could also consider using <code>Patch</code> instead of <code>Update</code> to modify the status, which reduces the risk of conflicting with other updates. Patching allows for partial updates to an object, so you are less likely to encounter conflicts.</p>
<p>Regarding the timing issue, you could consider updating the <code>LastTransitionTime</code> only when the status actually changes, instead of every time the health check is done. This would mean the <code>LastTransitionTime</code> reflects when the status last changed, rather than the last time the check was performed.</p>
<p>One thing to keep in mind is that frequent updates to the status subresource, even if the status does not change, can cause unnecessary API server load. You should strive to update the status only when it changes.</p>
</li>
</ol>
<p>A possible updated version of your <code>checkHealth</code> function considering those points could be:</p>
<pre class="lang-golang prettyprint-override"><code>func (r *ebdReconciler) checkHealth(ctx context.Context, req ctrl.Request, ebd ebdmanv1alpha1.ebd) (bool, error) {
    vfmReady, err := r.mbr.IsReady(ctx, req.Name, req.Namespace)
    condition := metav1.Condition{
        Type:   ebdmanv1alpha1.KubernetesvfmHealthy,
        Status: metav1.ConditionUnknown, // start with unknown status
    }

    latestebd := ebdmanv1alpha1.ebd{}
    if err := r.Get(ctx, req.NamespacedName, &latestebd); err != nil {
        return vfmReady, err
    }
    oldCondition := meta.FindStatusCondition(latestebd.Status.Conditions, ebdmanv1alpha1.KubernetesvfmHealthy)

    if err != nil {
        // There was an error checking readiness - Set status to false
        condition.Status = metav1.ConditionFalse
        condition.Reason = ebdmanv1alpha1.ReasonError
        condition.Message = fmt.Sprintf("Failed to check vfm readiness: %v", err)
    } else if vfmReady {
        // The vfm is ready - Set status to true
        condition.Status = metav1.ConditionTrue
        condition.Reason = ebdmanv1alpha1.ReasonReady
        condition.Message = "vfm custom resource is ready"
    } else {
        // The vfm is not ready - Set status to false
        condition.Status = metav1.ConditionFalse
        condition.Reason = ebdmanv1alpha1.ResourceProgressing
        condition.Message = "vfm custom resource is not ready"
    }

    // Only update the LastTransitionTime if the status has changed
    if oldCondition == nil || oldCondition.Status != condition.Status {
        condition.LastTransitionTime = metav1.Now()
    } else {
        condition.LastTransitionTime = oldCondition.LastTransitionTime
    }

    meta.SetStatusCondition(&latestebd.Status.Conditions, condition)
    if oldCondition != nil && condition.Status == oldCondition.Status && condition.Reason == oldCondition.Reason && condition.Message == oldCondition.Message {
        return vfmReady, nil
    }

    // Retry on conflict
    retryErr := retry.RetryOnConflict(retry.DefaultRetry, func() error {
        // Retrieve the latest version of ebd before attempting update
        // RetryOnConflict uses exponential backoff to avoid exhausting the apiserver
        if getErr := r.Get(ctx, req.NamespacedName, &latestebd); getErr != nil {
            return getErr
        }
        // Re-apply the condition to the freshly fetched object so the change is not lost
        meta.SetStatusCondition(&latestebd.Status.Conditions, condition)
        if updateErr := r.Status().Update(ctx, &latestebd); updateErr != nil {
            return updateErr
        }
        return nil
    })
    if retryErr != nil {
        r.Log.Error(retryErr, "Failed to update vfm status after retries")
        return vfmReady, retryErr
    }

    return vfmReady, nil
}
</code></pre>
<p>In this updated version:</p>
<ul>
<li><p>The <code>LastTransitionTime</code> field is updated only when the condition's status changes. This will ensure that the <code>LastTransitionTime</code> accurately reflects when the status was last changed rather than when the <code>checkHealth</code> function was last run. This should provide a more accurate timeline of when the resource's status actually changed, rather than when the reconciliation loop was run.</p>
</li>
<li><p>A retry mechanism is added using <code>retry.RetryOnConflict</code> to re-attempt the status update when a conflict error occurs. Note that you'll need to import the <a href="https://pkg.go.dev/k8s.io/client-go/util/retry" rel="nofollow noreferrer">"<code>k8s.io/client-go/util/retry</code>" package</a> for this.<br />
This is a common pattern for dealing with the <code>Operation cannot be fulfilled...</code> error.</p>
</li>
</ul>
<p>These changes should help to address the issues you were facing with updating the status and conditions of your Kubernetes resources.<br />
Remember that you may still occasionally get conflict errors, especially if there are other clients updating the same object. In these cases, the <code>RetryOnConflict</code> function will retry the update with the latest version of the object.</p>
|
| <p>I think I'm going crazy. I had Kubernetes set up to deploy a Postgres database, and it worked fine. I wanted to add an init script, so I made a custom image, deployed it to a private Docker Hub repo, and then updated the deployment to pull the new image. But no matter what I do, Kubernetes keeps deploying the old database. I updated the user, password, and database name to confirm that it's not getting updated. Here's the kubectl file:</p>
<pre><code># PostgreSQL StatefulSet Service
apiVersion: v1
kind: Service
metadata:
  name: postgres-db-lb
spec:
  selector:
    app: postgres-stockticker
  ports:
    - name: "5432"
      port: 5432
      targetPort: 5432
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres-stockticker
spec:
  selector:
    matchLabels:
      app: postgres-stockticker
  replicas: 1
  template:
    metadata:
      labels:
        app: postgres-stockticker
    spec:
      containers:
        - name: postgres-stockticker
          image: goodwinmcd/postgres:1.0
          imagePullPolicy: Always
          ports:
            - containerPort: 5432
          # Data Volume
          volumeMounts:
            - name: postgresql-db-disk
              mountPath: /var/lib/postgresql/data
      volumes:
        - name: postgresql-db-disk
          persistentVolumeClaim:
            claimName: postgres-pv-claim
      imagePullSecrets:
        - name: docker-hub-repo-key
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: postgres-pv-volume # Sets PV's name
  labels:
    type: local # Sets PV's type to local
    app: postgres-stockticker
spec:
  storageClassName: manual
  capacity:
    storage: 5Gi # Sets PV Volume
  accessModes:
    - ReadWriteMany
  hostPath:
    path: "/mnt/data"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: postgres-pv-claim # Sets name of PVC
  labels:
    app: postgres-stockticker
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteMany # Sets read and write access
  resources:
    requests:
      storage: 5Gi # Sets volume size
</code></pre>
<ol>
<li>I've verified the following: pulled down the image locally with Docker and confirmed that it has the correct user, password, database, and tables in it.</li>
<li>Described the pod to make sure it's pulling and using the correct image</li>
<li>Logged onto the pod and confirmed that the new username, password, and database are present on the pod with the correct values.</li>
<li>I tried running psql to connect to the database with the new username and password but no luck. It succeeded with the old credentials</li>
<li>Recreated and mounted new pv and new pvc to the pod</li>
<li>I deleted the service, the deployment, the pv, and the pvc and recreated them with the apply command</li>
<li>I tried restarting the postgres service (service postgres restart) and get the following error: <code>No PostgreSQL clusters exist; see "man pg_createcluster" ... (warning).</code></li>
</ol>
<p>So somehow kubernetes is pulling down this container that has the new username, password, and database baked into the image but then starting up the old postgres server on the cluster. It makes no sense to me. Someone please help. I'm losing my sanity making sure I'm not missing something obvious.</p>
<p>Let me know if you need any other information.</p>
| <p>Figured it out. The volume's reclaim policy was set to <code>Retain</code>. I tried setting it to <code>Delete</code>, but the data still persisted. I ended up logging on to the instance and blowing away everything under the path where I mounted the PV. This forced the pod to restart, and when it did, it actually went through the correct Postgres init process: the official postgres image only runs <code>initdb</code> and the <code>/docker-entrypoint-initdb.d</code> scripts when the data directory is empty, so as long as the old data survived on the volume, the new image kept serving the old database.</p>
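<p>For reference, the reclaim policy can be inspected, and the hostPath data cleared, with something like this (paths per the manifest above; how you reach the node depends on your cluster):</p>
<pre><code># Confirm why the old data survived pod/PV recreation
kubectl get pv postgres-pv-volume -o jsonpath='{.spec.persistentVolumeReclaimPolicy}'

# On the node hosting the hostPath volume, wipe the data directory,
# then delete the pod so the fresh image re-runs its init scripts
sudo rm -rf /mnt/data/*
kubectl delete pod -l app=postgres-stockticker
</code></pre>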
|
<p>I would like to list the pods created within the last 24 hours, but I didn't find any kubectl command or anything else to get them. Could anyone please help me with the kubectl command to get only the pods created in the last 24 hours?</p>
| <p>Previous answers didn't work for me: the regex seems wrong, and the field with the age is the sixth as of today, so:</p>
<pre><code>kubectl get pods -A --sort-by=.metadata.creationTimestamp | awk 'match($6,/^[0-9]+[sm]{1}|^[0-5][0-9]m|^[1-9]h|^1[0-9]*h|^2[0-4]h/) {print $0}'
</code></pre>
<p>It may be also useful to append <code>|grep -v Completed</code> to filter out Completed pods/jobs.</p>
<p><a href="https://regex101.com/r/l5gLKu/1" rel="nofollow noreferrer">https://regex101.com/r/l5gLKu/1</a></p>
<p>Client Version: v1.26.1<br />
Kustomize Version: v4.5.7<br />
Server Version: v1.25.5</p>
<p>Fatality:</p>
<pre><code>alias kpods1day="kubectl get pods -A --sort-by=.metadata.creationTimestamp | awk 'match(\$6,/^[0-9]+[sm]{1}|^[0-5][0-9]m|^[1-9]h|^1[0-9]*h|^2[0-4]h/) {print \$0}' |grep -v \"Completed\|kube-system\""
alias kpods2days="kubectl get pods -A --sort-by=.metadata.creationTimestamp | awk 'match(\$6,/^[0-9]+[sm]{1}|^[0-5][0-9]m|^[1-9]h|^1[0-9]*h|^[2-3][0-9]h|^4[0-8]h/) {print \$0}' |grep -v \"Completed\|kube-system\""
</code></pre>
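<p>If matching on the printed AGE column feels brittle across client versions, a variant that compares <code>creationTimestamp</code> itself (assuming GNU <code>date</code> and <code>jq</code>) is:</p>
<pre><code>kubectl get pods -A -o json | jq -r --arg cutoff "$(date -u -d '24 hours ago' +%Y-%m-%dT%H:%M:%SZ)" \
  '.items[] | select(.metadata.creationTimestamp > $cutoff) | .metadata.namespace + "/" + .metadata.name'
</code></pre>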
|
<p>I just started working with ArgoCD and I have an issue I can't find the answer for.</p>
<p>I have a file called <code>clusters.yaml</code> in my Git repo:</p>
<pre><code>clusters:
  - name: cluster1-eu-k8s-002
    url: https://cluster1.hcp.northeurope.azmk8s.io:443
    values:
      nameOverride: ReplaceWithThis
</code></pre>
</code></pre>
<p>And I am using the following ApplicationSet in order to deploy Opentelemetry-collector on a bunch of clusters grouped under the label <code>group:dev</code>.</p>
<pre><code>apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: opentelemetry-applicationset
  namespace: argocd
spec:
  generators:
    - git:
        repoURL: [email protected]:removed/cloud.git
        revision: HEAD
        files:
          - path: GitOps/argocd-apps/clusters.yaml
    - clusters:
        selector:
          matchLabels:
            argocd.argoproj.io/secret-type: cluster
            group: dev
  template:
    metadata:
      name: 'opentelemetry-{{name}}'
    spec:
      project: default
      sources:
        - repoURL: https://open-telemetry.github.io/opentelemetry-helm-charts
          chart: opentelemetry-collector
          targetRevision: 0.51.3
          helm:
            valueFiles:
              - $values/GitOps/argocd-apps/opentelemetry-collector/values/values-dev.yaml
            parameters:
              - name: nameOverride
                value: '{{ index .Clusters.values "nameOverride" }}'
        - repoURL: [email protected]:removed/cloud.git
          ref: values
      destination:
        server: '{{ server }}'
        namespace: opentelemetry
</code></pre>
<p>I am trying to replace a parameter called <code>nameOverride</code> with my value <code>ReplaceWithThis</code> from <code>clusters.yaml</code>.</p>
<p>ArgoCD is not deploying my app because of this line: <code>value: '{{ index .Clusters.values "nameOverride" }}'</code></p>
<p>The ArgoCD ApplicationSet controller logs throw some nonsense errors. I am sure I identified the problem correctly, because it works as expected if I just hardcode the string.</p>
<p>What exactly is the issue with the way I am trying to pull that value?</p>
| <p>In the <a href="https://argo-cd.readthedocs.io/en/stable/user-guide/application-set/" rel="nofollow noreferrer">Argo CD <code>ApplicationSet</code> controller</a>, you are using <code>{{ index .Clusters.values "nameOverride" }}</code> to access the <code>nameOverride</code> value. However, <code>Clusters</code> is an array in your <code>clusters.yaml</code> file, not a dictionary. So, you should not be trying to directly index it as if it is a dictionary. (In YAML, an array (or list) is denoted by items beginning with a dash (<code>-</code>).)</p>
<p>The <code>.Clusters</code> field will contain an <em>array</em> of clusters from your <code>clusters.yaml</code> file, and you want to access the <code>values.nameOverride</code> field of each cluster. However, your current syntax is treating <code>Clusters</code> as if it were a dictionary that can be indexed directly with <code>.values</code>.</p>
<p>You should instead iterate over the <code>Clusters</code> array to access each <code>values</code> dictionary individually. You may need to use a loop structure to do this, or modify your configuration so that <code>values</code> is not nested within an array.</p>
<p>You can also use a different structure for your <code>clusters.yaml</code> file.<br />
If you only have one cluster, you could structure your <code>clusters.yaml</code> file like this:</p>
<pre class="lang-yaml prettyprint-override"><code>clusters:
  name: cluster1-eu-k8s-002
  url: https://cluster1.hcp.northeurope.azmk8s.io:443
  values:
    nameOverride: ReplaceWithThis
</code></pre>
<p><em>Then</em>, in this case, you can directly access <code>nameOverride</code> with <code>{{ index .Clusters.values "nameOverride" }}</code>.</p>
<p>If you have multiple clusters and need a unique <code>nameOverride</code> for each, you could create a separate file for each cluster in your repository and adjust the <code>files</code> field in your <code>ApplicationSet</code> to match the new file structure.</p>
<p>That would be how a <a href="https://argocd-applicationset.readthedocs.io/en/stable/Generators-Git/#git-generator-files" rel="nofollow noreferrer">Git file generator</a> would be able to read each of those files, and access the <code>values.nameOverride</code> field of each cluster in their respective file.</p>
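<p>A sketch of that per-cluster layout (file path and glob are assumptions): one flat file per cluster, picked up by the Git file generator, so each cluster's fields surface directly as template parameters such as <code>{{values.nameOverride}}</code>:</p>
<pre class="lang-yaml prettyprint-override"><code># GitOps/argocd-apps/clusters/cluster1.yaml (hypothetical path)
name: cluster1-eu-k8s-002
url: https://cluster1.hcp.northeurope.azmk8s.io:443
values:
  nameOverride: ReplaceWithThis

# and in the ApplicationSet's git generator:
# files:
#   - path: GitOps/argocd-apps/clusters/*.yaml
</code></pre>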
|
<p>I have a docker-compose file that connects an app with a database.
I'm using Docker secrets to hide sensitive data, in this case DB credentials. While I can do so with the database service, I'm not able to do the same within the app service, specifically for the ConnectionString.
I couldn't find much in the documentation regarding this, nor did I find such cases online. There is one example in the documentation with the WordPress service that suggests the solution I'm seeking, but that seems to be WordPress-specific.
Where do I go from here?</p>
<p>Wordpress example in documentation:</p>
<pre><code>wordpress:
  depends_on:
    - db
  image: wordpress:latest
  ports:
    - "8000:80"
  environment:
    WORDPRESS_DB_HOST: db:3306
    WORDPRESS_DB_USER: wordpress
    WORDPRESS_DB_PASSWORD_FILE: /run/secrets/db_password
  secrets:
    - db_password
</code></pre>
<p>My docker-compose file:</p>
<pre><code>version: '3.8'

services:
  testapp:
    image: testapp
    depends_on:
      - testapp_db
    build:
      context: .
      dockerfile: TestApp/Dockerfile
    environment:
      - ASPNETCORE_ENVIRONMENT=Development
      - ASPNETCORE_URLS=http://+:80
      # This works
      - ConnectionStrings__DefaultConnection=Server=testapp_db;Database=TestApp;UserId=postgres;Password=postgres;
      # But I want to do something like this
      - ConnectionStrings__DefaultConnection=/run/secrets/connection_string
    ports:
      - "60001:80"
    secrets:
      - connection_string
    networks:
      - testapp-network

  testapp_db:
    image: postgres:latest
    environment:
      POSTGRES_USER_FILE: /run/secrets/db_user
      POSTGRES_PASSWORD_FILE: /run/secrets/db_password
      POSTGRES_DB: TestApp
    ports:
      - "5500:5432"
    volumes:
      - postgres-data:/var/lib/postgresql/data
      - ./Persistence/Scripts/db_init.sql:/docker-entrypoint-initdb.d/create_tables.sql
    restart: always
    secrets:
      - db_user
      - db_password
    networks:
      - testapp-network

volumes:
  postgres-data:

secrets:
  db_password:
    file: ./run/secrets/docker-postgres/db_password
  db_user:
    file: ./run/secrets/docker-postgres/db_user
  connection_string:
    file: ./run/secrets/docker-postgres/connection_string

networks:
  testapp-network:
    driver: bridge
</code></pre>
| <p>It's actually not related to <strong>WordPress</strong>; it's general purpose, and you can use it the same way. Docker mounts the <strong>secret</strong> content at <code>/run/secrets/connection_string</code>.</p>
<p>Your code can read the variable to get the <strong>path</strong> (<code>/run/secrets/connection_string</code>), then read that file at runtime to get the content of the <strong>secret</strong>. As simple as that.</p>
<p>Here you can refer my Github repo for example : <a href="https://github.com/harsh4870/docker-compose-secret" rel="nofollow noreferrer">https://github.com/harsh4870/docker-compose-secret</a></p>
<p>Node js example code : <a href="https://github.com/harsh4870/docker-compose-secret/blob/main/index.js" rel="nofollow noreferrer">https://github.com/harsh4870/docker-compose-secret/blob/main/index.js</a></p>
<p><strong>Update</strong></p>
<p>You can write the docker-compose file with an <strong>entrypoint</strong> that sets the environment variable from the file and then starts the <strong>main</strong> process:</p>
<pre><code>version: '3'
services:
  redis-server:
    image: 'redis'
  node-app:
    secrets:
      - connection_string
    build: .
    restart: "no"
    entrypoint: [ "sh", "-c", "export connection=$(cat /run/secrets/connection_string) && npm start"]
    ports:
      - "4001:8000"
secrets:
  connection_string:
    file: ./connection_string
</code></pre>
<p>Your application or code will be able to use the env var <strong>connection</strong> and access the <strong>value directly</strong>.</p>
|
<p>How do I achieve internal service-to-service communication across multiple Anthos clusters? For example,
service A is deployed in a GKE cluster and service B is deployed in an AKS cluster; how can we call service A from service B (internally)?</p>
| <p>As suggested by @Harsh Manver, you can <a href="https://cloud.google.com/service-mesh/docs/unified-install/off-gcp-multi-cluster-setup" rel="nofollow noreferrer">set up a multi-cluster mesh outside Google Cloud</a> to achieve internal service-to-service communication across Anthos clusters.</p>
<p>As mentioned in the <a href="https://cloud.google.com/service-mesh/docs/unified-install/multi-cloud-hybrid-mesh" rel="nofollow noreferrer">document</a>:</p>
<blockquote>
<p>The cluster's Kubernetes control plane address and the gateway address
need to be reachable from every cluster in the mesh. The Google Cloud
project in which GKE clusters are located should be allowed to create
external load balancing types.</p>
<p>We recommend that you use authorized networks and VPC firewall rules
to restrict access and ensure traffic is not exposed to the
public internet</p>
</blockquote>
|
<p>Is it possible to modify a live Kubernetes manifest, on-the-fly and non-interactively? I know <code>kubectl edit</code> allows for this behavior, but this requires user interaction in an editor that is opened when the command is invoked. I need to be able to do this without user interaction (for example in a script, etc.). Is this possible with a simple command - perhaps a variation of <code>kubectl edit</code>?</p>
<p>The answer is <code>kubectl patch</code> (<a href="https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/" rel="nofollow noreferrer">docs</a>).</p>
<p>For example, given a deployment, you can update the <code>.spec.template.spec.containers[].image</code> field using the following command:</p>
<pre class="lang-bash prettyprint-override"><code>kubectl patch deployment my-deploy \
--patch '{"spec": {"template": {"spec": {"containers": [{"image": "nginx:1.12.5"}]}}}}'
</code></pre>
<p>This way, you only specify the fields of the object that have changed. You may also pass a file with the changed fields instead of inline json, eg:</p>
<pre class="lang-bash prettyprint-override"><code>cat ./patch.yaml
spec:
  template:
    spec:
      containers:
        - image: nginx:1.12.5

kubectl patch deployment my-deploy --patch-file ./patch.yaml
</code></pre>
<p>Both versions support <code>json</code> and <code>yaml</code> to specify the changeset.</p>
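<p>One caveat: <code>containers</code> is a list that the strategic merge patch merges by the <code>name</code> key, so including the container's name makes the target unambiguous (a sketch, with <code>my-container</code> as a placeholder name):</p>
<pre class="lang-bash prettyprint-override"><code>kubectl patch deployment my-deploy \
    --patch '{"spec": {"template": {"spec": {"containers": [{"name": "my-container", "image": "nginx:1.12.5"}]}}}}'
</code></pre>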
|
<p>I've been trying to get the http-01 challenge method working with traefik v2 and cert-manager, both installed through their current helm charts. The LB endpoint can be requested through the ip and hostname, and I've tested that the http host passes on letsdebug (<code>No issues were found with <domain></code>).</p>
<p>Traefik lives in the <code>traefik</code> namespace, while cert-manager lives in its own <code>cert-manager</code> namespace. I've created a <code>ClusterIssuer</code> inside the <code>cert-manager</code> namespace:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    email: [email protected]
    privateKeySecretRef:
      name: letsencrypt-staging
    solvers:
      - http01:
          ingress:
            class: traefik
            ingressTemplate:
              metadata:
                namespace: cert-manager
                annotations:
                  traefik.ingress.kubernetes.io/router.entrypoints: web
</code></pre>
<p>The <code>ingressTemplate</code> part is my attempt at making the randomly created ingress from cert-manager map to the correct traefik endpoint - this hasn't changed anything, but I leave it in in case I've fubared anything here.</p>
<p>I've then created a <code>Certificate</code> and applied it - I've tried using both the cert-manager, traefik and default namespace for this, without any differing luck (the actual domain name has been replaced with domain.example.com):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: domain.example.com
spec:
  secretName: domain-example-com-tls
  issuerRef:
    kind: ClusterIssuer
    name: letsencrypt-staging
  commonName: domain.example.com
  dnsNames:
    - domain.example.com
</code></pre>
<p>Looking at the logs for the cert-manager pod, I can see both a 404 error and then a "DNS A record error" - the DNS record error seems spurious as it can be resolved with other services and has been present for > 24hrs.</p>
<pre><code>I0413 12:37:51.478359 1 conditions.go:201] Setting lastTransitionTime for Certificate "domain.example.com" condition "Issuing" to 2022-04-13 12:37:51.478353098 +0000 UTC m=+6998.327004050
I0413 12:37:51.760018 1 controller.go:161] cert-manager/certificates-key-manager "msg"="re-queuing item due to optimistic locking on resource" "key"="default/domain.example.com" "error"="Operation cannot be fulfilled on certificates.cert-manager.io \"domain.example.com\": the object has been modified; please apply your changes to the latest version and try again"
I0413 12:37:51.769026 1 conditions.go:261] Setting lastTransitionTime for CertificateRequest "domain.example.com-r98k2" condition "Approved" to 2022-04-13 12:37:51.769016958 +0000 UTC m=+6998.617667914
I0413 12:37:51.836517 1 conditions.go:261] Setting lastTransitionTime for CertificateRequest "domain.example.com-r98k2" condition "Ready" to 2022-04-13 12:37:51.836496254 +0000 UTC m=+6998.685147170
I0413 12:37:51.868932 1 conditions.go:261] Setting lastTransitionTime for CertificateRequest "domain.example.com-r98k2" condition "Ready" to 2022-04-13 12:37:51.868921204 +0000 UTC m=+6998.717572135
I0413 12:37:51.888553 1 controller.go:161] cert-manager/certificaterequests-issuer-acme "msg"="re-queuing item due to optimistic locking on resource" "key"="default/domain.example.com-r98k2" "error"="Operation cannot be fulfilled on certificaterequests.cert-manager.io \"domain.example.com-r98k2\": the object has been modified; please apply your changes to the latest version and try again"
E0413 12:37:53.529269 1 controller.go:210] cert-manager/challenges/scheduler "msg"="error scheduling challenge for processing" "error"="Operation cannot be fulfilled on challenges.acme.cert-manager.io \"domain.example.com-r98k2-2809069211-587139531\": the object has been modified; please apply your changes to the latest version and try again" "resource_kind"="Challenge" "resource_name"="domain.example.com-r98k2-2809069211-587139531" "resource_namespace"="default" "resource_version"="v1"
I0413 12:37:55.028477 1 pod.go:71] cert-manager/challenges/http01/ensurePod "msg"="creating HTTP01 challenge solver pod" "dnsName"="domain.example.com" "resource_kind"="Challenge" "resource_name"="domain.example.com-r98k2-2809069211-587139531" "resource_namespace"="default" "resource_version"="v1" "type"="HTTP-01"
I0413 12:37:55.237109 1 pod.go:59] cert-manager/challenges/http01/selfCheck/http01/ensurePod "msg"="found one existing HTTP01 solver pod" "dnsName"="domain.example.com" "related_resource_kind"="Pod" "related_resource_name"="cm-acme-http-solver-k8wl8" "related_resource_namespace"="default" "related_resource_version"="v1" "resource_kind"="Challenge" "resource_name"="domain.example.com-r98k2-2809069211-587139531" "resource_namespace"="default" "resource_version"="v1" "type"="HTTP-01"
I0413 12:37:55.237350 1 service.go:43] cert-manager/challenges/http01/selfCheck/http01/ensureService "msg"="found one existing HTTP01 solver Service for challenge resource" "dnsName"="domain.example.com" "related_resource_kind"="Service" "related_resource_name"="cm-acme-http-solver-gvvkt" "related_resource_namespace"="default" "related_resource_version"="v1" "resource_kind"="Challenge" "resource_name"="domain.example.com-r98k2-2809069211-587139531" "resource_namespace"="default" "resource_version"="v1" "type"="HTTP-01"
I0413 12:37:55.237539 1 ingress.go:99] cert-manager/challenges/http01/selfCheck/http01/ensureIngress "msg"="found one existing HTTP01 solver ingress" "dnsName"="domain.example.com" "related_resource_kind"="Ingress" "related_resource_name"="cm-acme-http-solver-pbs7c" "related_resource_namespace"="default" "related_resource_version"="v1" "resource_kind"="Challenge" "resource_name"="domain.example.com-r98k2-2809069211-587139531" "resource_namespace"="default" "resource_version"="v1" "type"="HTTP-01"
E0413 12:37:55.260608 1 sync.go:186] cert-manager/challenges "msg"="propagation check failed" "error"="wrong status code '404', expected '200'" "dnsName"="domain.example.com" "resource_kind"="Challenge" "resource_name"="domain.example.com-r98k2-2809069211-587139531" "resource_namespace"="default" "resource_version"="v1" "type"="HTTP-01"
I0413 12:37:55.299879 1 pod.go:59] cert-manager/challenges/http01/selfCheck/http01/ensurePod "msg"="found one existing HTTP01 solver pod" "dnsName"="domain.example.com" "related_resource_kind"="Pod" "related_resource_name"="cm-acme-http-solver-k8wl8" "related_resource_namespace"="default" "related_resource_version"="v1" "resource_kind"="Challenge" "resource_name"="domain.example.com-r98k2-2809069211-587139531" "resource_namespace"="default" "resource_version"="v1" "type"="HTTP-01"
I0413 12:37:55.300223 1 service.go:43] cert-manager/challenges/http01/selfCheck/http01/ensureService "msg"="found one existing HTTP01 solver Service for challenge resource" "dnsName"="domain.example.com" "related_resource_kind"="Service" "related_resource_name"="cm-acme-http-solver-gvvkt" "related_resource_namespace"="default" "related_resource_version"="v1" "resource_kind"="Challenge" "resource_name"="domain.example.com-r98k2-2809069211-587139531" "resource_namespace"="default" "resource_version"="v1" "type"="HTTP-01"
I0413 12:37:55.300570 1 ingress.go:99] cert-manager/challenges/http01/selfCheck/http01/ensureIngress "msg"="found one existing HTTP01 solver ingress" "dnsName"="domain.example.com" "related_resource_kind"="Ingress" "related_resource_name"="cm-acme-http-solver-pbs7c" "related_resource_namespace"="default" "related_resource_version"="v1" "resource_kind"="Challenge" "resource_name"="domain.example.com-r98k2-2809069211-587139531" "resource_namespace"="default" "resource_version"="v1" "type"="HTTP-01"
E0413 12:37:55.316802 1 sync.go:186] cert-manager/challenges "msg"="propagation check failed" "error"="wrong status code '404', expected '200'" "dnsName"="domain.example.com" "resource_kind"="Challenge" "resource_name"="domain.example.com-r98k2-2809069211-587139531" "resource_namespace"="default" "resource_version"="v1" "type"="HTTP-01"
I0413 12:38:05.261345 1 pod.go:59] cert-manager/challenges/http01/selfCheck/http01/ensurePod "msg"="found one existing HTTP01 solver pod" "dnsName"="domain.example.com" "related_resource_kind"="Pod" "related_resource_name"="cm-acme-http-solver-k8wl8" "related_resource_namespace"="default" "related_resource_version"="v1" "resource_kind"="Challenge" "resource_name"="domain.example.com-r98k2-2809069211-587139531" "resource_namespace"="default" "resource_version"="v1" "type"="HTTP-01"
I0413 12:38:05.263416 1 service.go:43] cert-manager/challenges/http01/selfCheck/http01/ensureService "msg"="found one existing HTTP01 solver Service for challenge resource" "dnsName"="domain.example.com" "related_resource_kind"="Service" "related_resource_name"="cm-acme-http-solver-gvvkt" "related_resource_namespace"="default" "related_resource_version"="v1" "resource_kind"="Challenge" "resource_name"="domain.example.com-r98k2-2809069211-587139531" "resource_namespace"="default" "resource_version"="v1" "type"="HTTP-01"
I0413 12:38:05.263822 1 ingress.go:99] cert-manager/challenges/http01/selfCheck/http01/ensureIngress "msg"="found one existing HTTP01 solver ingress" "dnsName"="domain.example.com" "related_resource_kind"="Ingress" "related_resource_name"="cm-acme-http-solver-pbs7c" "related_resource_namespace"="default" "related_resource_version"="v1" "resource_kind"="Challenge" "resource_name"="domain.example.com-r98k2-2809069211-587139531" "resource_namespace"="default" "resource_version"="v1" "type"="HTTP-01"
E0413 12:38:25.541964 1 sync.go:386] cert-manager/challenges/acceptChallenge "msg"="error waiting for authorization" "error"="context deadline exceeded" "dnsName"="domain.example.com" "resource_kind"="Challenge" "resource_name"="domain.example.com-r98k2-2809069211-587139531" "resource_namespace"="default" "resource_version"="v1" "type"="HTTP-01"
E0413 12:38:25.542087 1 controller.go:166] cert-manager/challenges "msg"="re-queuing item due to error processing" "error"="context deadline exceeded" "key"="default/domain.example.com-r98k2-2809069211-587139531"
I0413 12:38:30.542803 1 pod.go:59] cert-manager/challenges/http01/selfCheck/http01/ensurePod "msg"="found one existing HTTP01 solver pod" "dnsName"="domain.example.com" "related_resource_kind"="Pod" "related_resource_name"="cm-acme-http-solver-k8wl8" "related_resource_namespace"="default" "related_resource_version"="v1" "resource_kind"="Challenge" "resource_name"="domain.example.com-r98k2-2809069211-587139531" "resource_namespace"="default" "resource_version"="v1" "type"="HTTP-01"
I0413 12:38:30.543062 1 service.go:43] cert-manager/challenges/http01/selfCheck/http01/ensureService "msg"="found one existing HTTP01 solver Service for challenge resource" "dnsName"="domain.example.com" "related_resource_kind"="Service" "related_resource_name"="cm-acme-http-solver-gvvkt" "related_resource_namespace"="default" "related_resource_version"="v1" "resource_kind"="Challenge" "resource_name"="domain.example.com-r98k2-2809069211-587139531" "resource_namespace"="default" "resource_version"="v1" "type"="HTTP-01"
I0413 12:38:30.543218 1 ingress.go:99] cert-manager/challenges/http01/selfCheck/http01/ensureIngress "msg"="found one existing HTTP01 solver ingress" "dnsName"="domain.example.com" "related_resource_kind"="Ingress" "related_resource_name"="cm-acme-http-solver-pbs7c" "related_resource_namespace"="default" "related_resource_version"="v1" "resource_kind"="Challenge" "resource_name"="domain.example.com-r98k2-2809069211-587139531" "resource_namespace"="default" "resource_version"="v1" "type"="HTTP-01"
E0413 12:38:46.682039 1 sync.go:386] cert-manager/challenges/acceptChallenge "msg"="error waiting for authorization" "error"="acme: authorization error for domain.example.com: 400 urn:ietf:params:acme:error:dns: During secondary validation: DNS problem: query timed out looking up A for domain.example.com; DNS problem: query timed out looking up AAAA for domain.example.com" "dnsName"="domain.example.com" "resource_kind"="Challenge" "resource_name"="domain.example.com-r98k2-2809069211-587139531" "resource_namespace"="default" "resource_version"="v1" "type"="HTTP-01"
E0413 12:38:46.888731 1 controller.go:102] ingress 'default/cm-acme-http-solver-pbs7c' in work queue no longer exists
</code></pre>
<p>Looking at Traefik's pod log, I can see that the ingress gets created, but that Traefik is unable to route any requests to it because it can't find the endpoint (this is what I tried to fix with the annotation in the ingressTemplate above):</p>
<pre><code>time="2022-04-13T12:37:57Z" level=error msg="Skipping service: no endpoints found" providerName=kubernetes namespace=default servicePort="&ServiceBackendPort{Name:,Number:8089,}" ingress=cm-acme-http-solver-pbs7c serviceName=cm-acme-http-solver-gvvkt
time="2022-04-13T12:38:46Z" level=error msg="Skipping service: no endpoints found" serviceName=cm-acme-http-solver-gvvkt servicePort="&ServiceBackendPort{Name:,Number:8089,}" providerName=kubernetes ingress=cm-acme-http-solver-pbs7c namespace=default
time="2022-04-13T12:38:46Z" level=error msg="Cannot create service: service not found" servicePort="&ServiceBackendPort{Name:,Number:8089,}" providerName=kubernetes ingress=cm-acme-http-solver-pbs7c namespace=default serviceName=cm-acme-http-solver-gvvkt
time="2022-04-13T12:38:46Z" level=error msg="Cannot create service: service not found" servicePort="&ServiceBackendPort{Name:,Number:8089,}" namespace=default providerName=kubernetes serviceName=cm-acme-http-solver-gvvkt ingress=cm-acme-http-solver-pbs7c
</code></pre>
<p>And that's where I'm stuck currently, since the plan is to use Traefik's <code>IngressRoute</code> CRD for exposing hosts and not use regular ingress entries. Another option would be to test the experimental Gateway support, but as this is the initial setup for a prod cluster I'm not planning to go down that route yet.</p>
<p>Any ideas or further debug information that could be useful?</p>
| <p>We have faced the same issue, and the problem was related to the fact that the Ingress generated by <code>cert-manager</code> contained the Ingress Controller reference via the deprecated annotation <code>kubernetes.io/ingress.class</code>.</p>
<p>What we wanted:</p>
<pre><code>spec:
ingressClassName: my-traefik-controller
</code></pre>
<p>What we got:</p>
<pre><code>annotations:
kubernetes.io/ingress.class: "my-traefik-controller"
</code></pre>
<p>This way, the <code>traefik</code> Ingress Controlelr found the Ingress, but was not able to find the service.
There is a whole discussion on this topic in the <a href="https://github.com/cert-manager/cert-manager/issues/4821" rel="nofollow noreferrer">cert-manger Github repo</a>.</p>
<p>The solution was to use the <a href="https://cert-manager.io/docs/usage/ingress/#supported-annotations" rel="nofollow noreferrer">cert-manager Annotation</a> <code>acme.cert-manager.io/http01-edit-in-place: "true"</code> on an existing Ingress.</p>
<pre><code> annotations:
cert-manager.io/cluster-issuer: my-issuer
acme.cert-manager.io/http01-edit-in-place: "true"
spec:
ingressClassName: my-traefik-controller
</code></pre>
<p>This way, only the existing Ingress (containing the correct <code>ingressClassName</code> reference) gets modified and no new solver Ingress gets created.</p>
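<p>Putting it together, a minimal sketch of the full Ingress (the host, service and secret names are placeholders; the point is that the cert-manager annotations sit on the same Ingress that carries the <code>ingressClassName</code>):</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
  annotations:
    cert-manager.io/cluster-issuer: my-issuer
    acme.cert-manager.io/http01-edit-in-place: "true"
spec:
  ingressClassName: my-traefik-controller
  tls:
    - hosts:
        - domain.example.com
      secretName: domain-example-com-tls   # created and renewed by cert-manager
  rules:
    - host: domain.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app
                port:
                  number: 80
</code></pre>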
|
<p>In my k8s cluster I have pods running whose name contains the word "inventory" (actually 2 pods with this name).
I need to write a shell script that deletes the deployment, re-deploys it, and then checks whether the specific pod is running and displays a message.
Here is what I tried.</p>
<pre><code>#!/bin/bash
cd /apps/application/application_yamls_develop/deployment-artifacts/inventory-service
kubectl delete -f inventory-service.yaml -n back-end
kubectl apply -f inventory-service.yaml -n back-end
sleep 20
pod_name="inventory"
namespace="back-end"
# Get the pod status
pod_status=$(kubectl get pods -n "$namespace" -o jsonpath="{.items[?(@.metadata.name.includes('$pod_name'))].status.phase}")
# Check if any matching pod is running
if [[ -n "$pod_status" ]]; then
echo "Pod running"
else
echo "Pod not running"
fi
</code></pre>
<p>But this gives the following error:</p>
<pre><code>error: error parsing jsonpath {.items[?(@.metadata.name.includes('inventory'))].status.phase}, unclosed array expect ]
</code></pre>
<p>Can someone please point out the issue?</p>
<p>Thanks..!</p>
| <p>Can you try this?</p>
<pre class="lang-bash prettyprint-override"><code>#!/bin/bash
pod_name="devopslk"
namespace="devops"

# Get the pod's phase (empty if the pod does not exist)
pod_status=$(kubectl get pod "$pod_name" -n "$namespace" -o jsonpath='{.status.phase}')

# Check whether the pod is actually in the Running phase
if [[ "$pod_status" == "Running" ]]; then
  echo "Pod running"
else
  echo "Pod not running, current status: $pod_status"
fi
</code></pre>
<p><a href="https://i.stack.imgur.com/UhZZ7.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/UhZZ7.png" alt="enter image description here" /></a></p>
|
<p>Currently, I have two microservices. I want to send message to KubeMQ Queue from first microservice and want it to be received by Second microservice. I am able to send message to a KubeMQ Queue using below code:</p>
<pre><code>Queue queue = new Queue("QueueName", "ClientID", "localhost:50000");
SendMessageResult resSend = queue.SendQueueMessage(new Message()
.setBody(Converter.ToByteArray("some-simple_queue-queue-message"))
.setMetadata("someMeta"));
if (resSend.getIsError()) {
System.out.printf("Message enqueue error, error: %s", resSend.getError());
}
</code></pre>
<p>I need a listener in the second microservice in order to receive the message from the queue.
Below is the code provided by KubeMQ to receive the message:</p>
<pre><code> Queue queue = new Queue("QueueName", "ClientID", "localhost:50000");
ReceiveMessagesResponse resRec = queue.ReceiveQueueMessages(10, 1);
if (resRec.getIsError()) {
System.out.printf("Message dequeue error, error: %s", resRec.getError());
return;
}
System.out.printf("Received Messages %s:", resRec.getMessagesReceived());
for (Message msg : resRec.getMessages()) {
System.out.printf("MessageID: %s, Body:%s", msg.getMessageID(), Converter.FromByteArray(msg.getBody()));
}
</code></pre>
<p>How do I configure the second microservice to receive messages instantly as they are added to the queue?</p>
<p>Please help.</p>
| <blockquote>
<p>I need a listener in the second microservice in order to receive the message from the queue.</p>
</blockquote>
<p>Why polling, when you can be notified through the <a href="https://docs.kubemq.io/learn/message-patterns/pubsub" rel="nofollow noreferrer">KubeMQ Pub/Sub pattern</a>?</p>
<p><a href="https://i.stack.imgur.com/0Y2hd.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/0Y2hd.png" alt="https://3720647888-files.gitbook.io/~/files/v0/b/gitbook-legacy-files/o/assets%2F-M2b9dwAGbMPWty0fPGr%2F-M2cg-nL53EVg5rPnOZB%2F-M2chJWo9-utUu1Q8v2C%2Fpubsub.png?alt=media&token=e003b332-df67-4ffb-be81-6b9218cefc4e" /></a></p>
<p>In the context of message queues, "polling" refers to a process where your application continually checks the queue to see if a new message has arrived. This can be inefficient, as it requires your application to make many requests when there may not be any new messages to process.</p>
<p>On the other hand, a "listener" (also known as a subscriber or a callback) is a function that is automatically called when a new message arrives. This is more efficient because your application does not need to continually check the queue; instead, it can wait and react when a message arrives.</p>
<hr />
<p>The Publish-Subscribe pattern (or pub/sub) is a messaging pattern supported by KubeMQ, and it differs slightly from the queue-based pattern you are currently using.<br />
In the pub/sub pattern, senders of messages (publishers) do not program the messages to be sent directly to specific receivers (subscribers). Instead, the programmer “publishes” messages (events), without any knowledge of any subscribers there may be. Similarly, subscribers express interest in one or more events and only receive messages that are of interest, without any knowledge of any publishers.</p>
<p>In this pattern, KubeMQ provides two types of event handling, <code>Events</code> and <code>Events Store</code>.</p>
<ul>
<li><p>The <code>Events</code> type is an asynchronous real-time Pub/Sub pattern, meaning that messages are sent and received in real-time but only if the receiver is currently connected to KubeMQ. There is no message persistence available in this pattern.</p>
</li>
<li><p>The <code>Events Store</code> type, however, is an asynchronous Pub/Sub pattern with persistence. This means that messages are stored and can be replayed by any receiver, even if they were not connected at the time the message was sent.<br />
The system also supports replaying all events from the first stored event, replaying only the last event, or only sending new events.</p>
</li>
</ul>
<p>However, it is important to note that the uniqueness of a client ID is essential when using Events Store.<br />
At any given time, only one receiver can connect with a unique Client ID.<br />
If two receivers try to connect to KubeMQ with the same Client ID, one of them will be rejected. Messages can only be replayed once per Client ID and Subscription type. If a receiver disconnects and reconnects with any subscription type, only new events will be delivered for this specific receiver with that Client ID. To replay messages, a receiver needs to connect with a different Client ID.</p>
<p>Given these features, if you switch your architecture to a pub/sub pattern using the Events Store type, your second microservice could instantly receive messages as they are added into the channel, and even replay old messages if needed. You would need to ensure each microservice has a unique Client ID and manages its subscriptions appropriately.</p>
<p>However, the pub/sub pattern may require changes in the architecture and coding of your microservices, so you would need to evaluate whether this change is suitable for your use case. It is also important to note that the pub/sub pattern, especially with message persistence, may have different performance characteristics and resource requirements compared to the queue pattern.</p>
<hr />
<p>Here is a high-level overview of the classes that are present and their usage:</p>
<ol>
<li><p><code>Channel.java</code>: This class appears to represent a channel for sending events in a publish-subscribe model.</p>
</li>
<li><p><code>ChannelParameters.java</code>: This class defines the parameters for creating a Channel instance.</p>
</li>
<li><p><code>Event.java</code>: This class represents an event that can be sent via a Channel.</p>
</li>
<li><p><code>EventReceive.java</code>: This class is used to process received events.</p>
</li>
<li><p><code>Result.java</code>: This class contains the result of a sent event.</p>
</li>
<li><p><code>Subscriber.java</code>: This class allows you to subscribe to a channel and handle incoming events.</p>
</li>
</ol>
<p>So here is an example of how you might use the existing classes to publish and subscribe to messages.</p>
<pre class="lang-java prettyprint-override"><code>import io.kubemq.sdk.Channel;
import io.kubemq.sdk.ChannelParameters;
import io.kubemq.sdk.Result;
import io.kubemq.sdk.event.Event;
import io.kubemq.sdk.event.Subscriber;
public class KubeMQExample {
public static void main(String[] args) {
try {
// Initialize ChannelParameters
ChannelParameters params = new ChannelParameters();
params.setChannel("your_channel");
params.setClient("your_client_id");
// Initialize a new Channel
Channel channel = new Channel(params);
// Create a new Event
Event event = new Event();
event.setBody("Your message here".getBytes());
// Send the Event
Result sendResult = channel.SendEvent(event);
System.out.println("Event sent, Result: " + sendResult.getIsError());
// Initialize a new Subscriber
Subscriber subscriber = new Subscriber("localhost:5000");
// Subscribe to the Channel
subscriber.SubscribeToEvents(params, (eventReceive) -> {
System.out.println("Received Event: " + new String(eventReceive.getBody()));
});
} catch (Exception e) {
e.printStackTrace();
}
}
}
</code></pre>
<p>Do note that this code is based on the existing SDK and may not reflect the functionality of the original code. You will need to replace "<code>your_channel</code>" and "<code>your_client_id</code>" with your actual channel name and client ID. The event body can also be replaced with the actual message you want to send.</p>
<p>The <code>Subscriber</code> class is used here to listen for and process incoming events. The <code>SubscribeToEvents</code> method takes a <code>ChannelParameters</code> object and a lambda function that processes received events.</p>
<p>Do also note that the <code>Queue</code> and <code>EventsStore</code> classes seem to have been removed from the SDK. The SDK now seems to primarily use the publish-subscribe model, which differs from queue-based communication in that messages are not stored if no consumer is available to consume them.<br />
Events Store was a hybrid model that allowed for persistence in the pub/sub model, storing events that could be replayed by receivers connecting at a later time.</p>
<p>For your original functionality of reading queue messages and peeking at messages in a queue, unfortunately, it does not seem like the current state of the Kubemq Java SDK on the provided GitHub repository supports these actions.</p>
|
<p>I am trying to do TLS termination at the pod level:
ingress (nlb) --> service (clusterip) --> pod</p>
<p>I have set ingress with ssl passthrough by adding the following annotations,</p>
<pre><code>annotations:
nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
nginx.ingress.kubernetes.io/ssl-passthrough: "true"
nginx.ingress.kubernetes.io/ssl-redirect: "true"
</code></pre>
<p>After this, I am not sure what needs to be done at the pod to terminate TLS. I am unable to find any documentation related to this. Is mTLS the only solution? I need to do TLS termination for only 2 pods that are running in the namespace.</p>
<p>I tried the configuration below. It is not working.
Am I missing anything?</p>
<p>nginx ingress controller:</p>
<pre><code>enabled ssl-passthrough
</code></pre>
<p>Ingress annotations:</p>
<pre><code> nginx.ingress.kubernetes.io/backend-protocol: HTTPS
nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
nginx.ingress.kubernetes.io/ssl-passthrough: "true"
nginx.ingress.kubernetes.io/ssl-redirect: "true"
</code></pre>
<p>Ingress spec:</p>
<pre><code>spec:
rules:
- host: xyz.test.com
http:
paths:
- backend:
service:
name: xyz
port:
number: 443
path: /
pathType: Prefix
</code></pre>
<p>Service - ports section in spec:</p>
<pre><code> ports:
- name: https
port: 443
protocol: TCP
targetPort: 1443
</code></pre>
<p>nginx sidecar container in app pod:</p>
<pre><code>ports:
- containerPort: 1443
  name: https
  protocol: TCP
</code></pre>
<pre><code>volumeMounts:
- name: secret
mountPath: /etc/nginx/ssl
readOnly: true
volumes:
- name: secret
secret:
secretName: xyz-tls
</code></pre>
<p>I could exec into the pod and see that the certificates and key files are present in /etc/nginx/ssl</p>
<p>I also updated /etc/nginx/conf.d with following changes:</p>
<pre><code>server {
listen 1443 default_server ssl;
listen [::]:1443 default_server ipv6only=on;
ssl_certificate /etc/nginx/ssl/tls.crt;
ssl_certificate_key /etc/nginx/ssl/tls.key;
.....
</code></pre>
| <p>My advice: don't do it! You don't want to have to deal with certs inside your pods. Get the TLS done on your load balancers, managed by your cloud provider, or managed in your cluster by <code>cert-manager</code> and a proper cert provider (like Let's Encrypt).</p>
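<p>A minimal sketch of that approach, reusing the names from your manifests (and assuming a ClusterIssuer called <code>letsencrypt-prod</code> already exists):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: xyz
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - xyz.test.com
      secretName: xyz-tls   # cert-manager creates and renews this secret
  rules:
    - host: xyz.test.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: xyz
                port:
                  number: 80   # plain HTTP to the pod; TLS ends at the ingress
</code></pre>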
<p>If you really want to terminate TLS in your pod, then use TCP as the protocol, and that will let encrypted messages through.</p>
|
<p>Im using nginx as a reverse proxy to serve an Express.js backend and a Vue.js frontend. The whole application runs in two Docker Compose images, one for development, and one for production. I'd like to be able to test the production image locally with <strong>HTTPS</strong> (especially because Vue's PWA framework depends on HTTPS to properly register service workers).</p>
<p>The issue is that to configure a let's encrypt certificate with certbot, I need to be running on a server associated with my domain through DNS. What is the correct way to set up a CI/CD workflow where I can test HTTPS locally and also push it to my production server? Do I have to buy a certificate from a different CA and update it manually?</p>
<p>I tried using certbot, but it will not work since I am not on the server that is associated with mine through DNS.</p>
| <p>With <strong>certbot</strong> there are two types of verification methods: <strong>DNS</strong> and <strong>HTTP</strong>.</p>
<p>The <strong>DNS</strong> method verifies a <strong>record</strong> in <strong>DNS</strong>, while <strong>HTTP</strong> checks for a <strong>200</strong> response from your <strong>endpoint</strong>.</p>
<p>In <strong>CI/CD</strong> you can go with the <strong>HTTP</strong> method, which checks the domain's status over HTTP.</p>
<p>Refer repo for more : <a href="https://gitlab.com/gdm85/lets-gitlab/-/blob/master/scripts/letsencrypt/authenticator.sh" rel="nofollow noreferrer">Auth script</a></p>
<pre><code>certbot certonly $CERTBOT_DEBUG --non-interactive --manual --preferred-challenges=http \
-m "$LETSENCRYPT_CONTACT_EMAIL" \
--manual-auth-hook authenticator.sh \
--no-self-upgrade --agree-tos \
$DOMAIN_OPTS
</code></pre>
<p>Refer gist for <a href="https://gist.github.com/gaoyifan/b72f680df3e2a5d760b2f928d62d2a4f" rel="nofollow noreferrer">DNS verification</a></p>
<p><strong>Another option: the manual one</strong></p>
<p>For <strong>CI/CD</strong>, or if you just want to set a cert in Docker, I would suggest creating/downloading a cert once, e.g. with <a href="https://www.sslforfree.com/" rel="nofollow noreferrer">SSL For Free</a>, and reusing it multiple times.</p>
<p>You create the cert locally first and re-use it multiple times by injecting it during the CI/CD process, storing it in a variable or downloading it from a bucket if you are using any cloud.</p>
|
<p>I am trying to mount a Kubernetes secret via a <code>kubernetes_manifest</code> like this; however, <code>port</code> is a number (<code>5342</code>):</p>
<pre><code>resource "kubernetes_manifest" "test" {
manifest = {
"apiVersion" = "secrets-store.csi.x-k8s.io/v1alpha1"
"kind" = "SecretProviderClass"
"metadata" = {
namespace = "test-namespace"
"name" = "test"
}
"spec" = {
"provider" = "aws"
"secretObjects" = [{
"secretName" = "test"
"type" = "Opaque"
data = [{
"objectName" = "test123"
"key" = "port"
}
]
}]
"parameters" = {
objects = yamlencode([{
objectName = aws_secretsmanager_secret.test.name
objectType = "secretsmanager"
objectAlias = "test"
jmesPath = [{
path = "port"
objectAlias = "test123"
}]
}])
}
}
}
}
</code></pre>
<p>When I <code>terraform apply</code> this, I get the error:</p>
<blockquote>
<p>err: rpc error: code = Unknown desc = Invalid JMES search result type for path:port. Only string is allowed</p>
</blockquote>
<p>Is there a way to mount <code>port</code> despite it being a number? Can I convert it to a string somehow?</p>
| <p>JMESPath does have a <a href="https://jmespath.org/specification.html#to-string" rel="nofollow noreferrer"><code>to_string</code></a> function.</p>
<p>So, you can use a JMESPath query in the <code>path</code> field and do:</p>
<pre><code>path = "to_string(port)"
</code></pre>
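<p>Applied to the <code>jmesPath</code> block from the question, that would look like this (a sketch reusing your names):</p>
<pre><code>jmesPath = [{
  path        = "to_string(port)"
  objectAlias = "test123"
}]
</code></pre>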
|
<p>I have a deployment. The pod container's have no readinessProbe(s), because the healtcheck will be configured using a BackendConfig.</p>
<p>The service is as follows:</p>
<pre><code>---
apiVersion: v1
kind: Service
metadata:
name: my-app-service
namespace: my-namespace
annotations:
cloud.google.com/neg: '{"ingress": true}'
cloud.google.com/backend-config: '{"default": "my-app-backendconfig"}'
spec:
type: NodePort
externalTrafficPolicy: Local
ports:
- name: flower-nodeport-port
port: 80
targetPort: 5555
protocol: TCP
selector:
app: my-app
</code></pre>
<p>And this is the BackendConfig to create the health check:</p>
<pre><code>apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
name: my-app-backendconfig
namespace: my-namespace
labels:
app: my-app
spec:
healthCheck:
checkIntervalSec: 20
timeoutSec: 1
healthyThreshold: 1
unhealthyThreshold: 5
type: TCP
</code></pre>
<p>The problem is that Google Cloud is not applying the health check as I described it.
Check the screenshot below:</p>
<p><a href="https://i.stack.imgur.com/gBgiu.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/gBgiu.png" alt="enter image description here" /></a></p>
<p>As you can see, values like "Unhealthy threshold" and "Timeout" are not being taken into account.</p>
<p>What am I doing wrong?</p>
| <p>Your <code>type</code> field on the BackendConfig is <code>TCP</code>; the docs say only HTTP/HTTPS is allowed: <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-configuration#direct_health" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-configuration#direct_health</a></p>
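<p>A sketch of the BackendConfig with a supported protocol (the <code>/healthz</code> path is a placeholder; use whatever path your app actually serves):</p>
<pre><code>apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: my-app-backendconfig
  namespace: my-namespace
spec:
  healthCheck:
    checkIntervalSec: 20
    timeoutSec: 1
    healthyThreshold: 1
    unhealthyThreshold: 5
    type: HTTP            # HTTP, HTTPS or HTTP2; TCP is not supported here
    requestPath: /healthz
    port: 5555            # the serving port of your pods
</code></pre>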
|
<p>In a scenario where a zone/dc dropped and 2 master nodes out of 5 are now offline, I would like to restore etcd on the remaining 3 master nodes.</p>
<p>So far the best I could manage was restoring from etcd backup, but I found myself needing to destroy the remaining 2 and recreating them. Otherwise, I got a split brain issue.</p>
<p>Is there a way to remove the 2 dropped members from etcd and restore quorum for the remaining 3?</p>
<p>(OKD 4.7)</p>
| <p>As per this <a href="https://platform9.com/kb/kubernetes/restore-etcd-cluster-from-quorum-loss" rel="nofollow noreferrer">doc</a> by Platform9:</p>
<p><strong>Restoring ETCD Backup to Recover Cluster From Loss of Quorum</strong>:</p>
<p>Master nodes going offline or a lack of connectivity between the master nodes leading to an unhealthy cluster state could cause a loss of quorum.</p>
<blockquote>
<p>The idea behind etcd restore is to restore etcd from a backup using
etcdctl and reduce the master count to 1, so that it starts up as a
brand-new etcd cluster once that is finished; we may need to make some
adjustments manually. When things are back up, we increment the master
count one node at a time, i.e. we join nodes individually.</p>
<p>If the master nodes are hard offline or unreachable after restoring
from the etcd backup, proceed with deauthorizing those nodes as well.
From the kubectl perspective, the detached master nodes will show up in
the "NotReady" state. Delete these nodes from the cluster.</p>
<p>At this point, the cluster should be back up and running with a single
master node. Verify the same.</p>
<p>Once the nodes are scaled back up, they should have a PMK stack
running on them which will ensure ETCD members will sync amongst each
other.</p>
</blockquote>
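<p>For the member-removal step specifically, the usual <code>etcdctl</code> flow looks like this (a sketch; the endpoint, cert paths and member IDs are placeholders for your cluster's values):</p>
<pre><code># list members from a surviving master and note the IDs of the two offline ones
etcdctl --endpoints=https://HEALTHY_MASTER_IP:2379 \
  --cacert=/etc/etcd/ca.crt --cert=/etc/etcd/server.crt --key=/etc/etcd/server.key \
  member list -w table

# remove each unreachable member by its ID to restore quorum on the remaining nodes
etcdctl --endpoints=https://HEALTHY_MASTER_IP:2379 \
  --cacert=/etc/etcd/ca.crt --cert=/etc/etcd/server.crt --key=/etc/etcd/server.key \
  member remove MEMBER_ID
</code></pre>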
<p>Refer to this <a href="https://docs.openshift.com/container-platform/3.11/admin_guide/assembly_restore-etcd-quorum.html" rel="nofollow noreferrer">doc 1</a> and <a href="https://docs.openshift.com/container-platform/4.8/backup_and_restore/control_plane_backup_and_restore/replacing-unhealthy-etcd-member.html" rel="nofollow noreferrer">doc 2</a> by Red Hat for more information about restoring etcd quorum and replacing an unhealthy etcd member.</p>
|
<p>I've set up a PostgreSQL database with a Helm chart using this ConfigMap, which contains the initial script that creates the databases and schemas I need:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: {{ .Values.postgres.configmap }}
namespace: {{ .Release.Namespace }}
data:
init_script.sh: |+
#!/bin/bash
psql -v <<-EOSQL
CREATE DATABASE my-db;
GRANT ALL PRIVILEGES ON DATABASE my-db to <user>
...
</code></pre>
<p>I want to add an environment variable for the user and one for the database name in order to:</p>
<ul>
<li>use them when I connect to <code>psql</code></li>
<li>use the user name to give the correct rights to access the Database created with the script.</li>
</ul>
<p>I don't know how to set them.</p>
| <p>Update your script so that the Postgres user and database name are passed to the <code>psql</code> invocation as environment variables:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: {{ .Values.postgres.configmap.name }}
namespace: {{ .Release.Namespace }}
data:
init_script.sh: |+
#!/bin/bash
set -e
psql -v ON_ERROR_STOP=1 -U $POSTGRES_USER --dbname $POSTGRES_DB <<-EOSQL
        CREATE DATABASE "my-db";
        GRANT ALL PRIVILEGES ON DATABASE "my-db" TO $POSTGRES_USER;
...
EOSQL
</code></pre>
<p>Then, add the two environment variables into the <code>env</code> section of your Deployment:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
spec:
template:
spec:
containers:
        - name: postgres
          image: {{ .Values.postgres.image }}
env:
- name: POSTGRES_USER
value: {{ .Values.postgres.user }}
- name: POSTGRES_DB
value: {{ .Values.postgres.db }}
</code></pre>
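<p>Assuming your <code>values.yaml</code> exposes these keys, you can then set them at install time, e.g.:</p>
<pre><code>helm install my-postgres ./my-chart \
  --set postgres.user=app_user \
  --set postgres.db=my-db
</code></pre>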
|
<p>I just started working with ArgoCD and I have an issue I can't find the answer for.</p>
<p>I have a file called <code>clusters.yaml</code> in my Git repo:</p>
<pre><code>clusters:
- name: cluster1-eu-k8s-002
url: https://cluster1.hcp.northeurope.azmk8s.io:443
values:
nameOverride: ReplaceWithThis
</code></pre>
<p>And I am using the following ApplicationSet in order to deploy Opentelemetry-collector on a bunch of clusters grouped under the label <code>group:dev</code>.</p>
<pre><code>apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
name: opentelemetry-applicationset
namespace: argocd
spec:
generators:
- git:
repoURL: [email protected]:removed/cloud.git
revision: HEAD
files:
- path: GitOps/argocd-apps/clusters.yaml
- clusters:
selector:
matchLabels:
argocd.argoproj.io/secret-type: cluster
group: dev
template:
metadata:
name: 'opentelemetry-{{name}}'
spec:
project: default
sources:
- repoURL: https://open-telemetry.github.io/opentelemetry-helm-charts
chart: opentelemetry-collector
targetRevision: 0.51.3
helm:
valueFiles:
- $values/GitOps/argocd-apps/opentelemetry-collector/values/values-dev.yaml
parameters:
- name: nameOverride
value: '{{ index .Clusters.values "nameOverride" }}'
- repoURL: [email protected]:removed/cloud.git
ref: values
destination:
server: '{{ server }}'
namespace: opentelemetry
</code></pre>
<p>I am trying to replace a parameter called <code>nameOverride</code> with my value <code>ReplaceWithThis</code> from <code>clusters.yaml</code>.</p>
<p>ArgoCD is not deploying my app because of this line: <code>value: '{{ index .Clusters.values "nameOverride" }}'</code></p>
<p>ArgoCD ApplicationSet controller logs throw some nonsense errors. I am sure I identified the problem correctly, because it works as expected if I just hardcode the string.</p>
<p>What exactly is the issue with the way I am trying to pull that value?</p>
| <p>The issue you're encountering is related to the way you're accessing the <code>nameOverride</code> value from <code>clusters.yaml</code> in the <code>value</code> field of your ArgoCD ApplicationSet.</p>
<p>In your current configuration, you're using the following expression to access the value:</p>
<pre><code>value: '{{ index .Clusters.values "nameOverride" }}'
</code></pre>
<p>However, the problem lies in the fact that the <code>values</code> field is defined at the top level of the <code>clusters.yaml</code> file, not nested under each cluster. Therefore, the correct path to access the value would be:</p>
<pre><code>value: '{{ index .ApplicationSetParameters.values "nameOverride" }}'
</code></pre>
<p>By modifying the expression as shown above, you should be able to access the <code>nameOverride</code> value correctly from <code>clusters.yaml</code> and deploy your application successfully.</p>
|
<p>I've set up a PostgreSQL database with a Helm chart using this ConfigMap, which contains the initial script that creates the databases and schemas I need:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: {{ .Values.postgres.configmap }}
namespace: {{ .Release.Namespace }}
data:
init_script.sh: |+
#!/bin/bash
psql -v <<-EOSQL
CREATE DATABASE my-db;
GRANT ALL PRIVILEGES ON DATABASE my-db to <user>
...
</code></pre>
<p>I want to add an environment variable for the user and one for the database name in order to:</p>
<ul>
<li>use them when I connect to <code>psql</code></li>
<li>use the user name to give the correct rights to access the Database created with the script.</li>
</ul>
<p>I don't know how to set them.</p>
| <p>If you are using the standard <a href="https://hub.docker.com/_/postgres" rel="nofollow noreferrer">Docker Hub <code>postgres</code> image</a> and you are just trying to create a database, the easiest thing to do is to use its environment-variable settings</p>
<pre class="lang-yaml prettyprint-override"><code>image: postgres:15
env:
- name: POSTGRES_USER
value: <user>
- name: POSTGRES_DB
value: my-db
</code></pre>
<p>For these settings you do not need a separate ConfigMap. In the context of a Helm chart, if you want to make these values configurable, you can</p>
<pre class="lang-yaml prettyprint-override"><code>value: {{ .Values.postgres.db | default "my-db" }}
</code></pre>
<p>which will use a value from the configuration (<code>values.yaml</code>, <code>helm install --set</code> option, additional <code>helm install -f</code> files)</p>
<pre class="lang-yaml prettyprint-override"><code>postgres:
db: database-name
</code></pre>
<p>or <code>my-db</code> if it's not set.</p>
<p>If you do specifically want to use an init script, but won't know the database user name until deploy time, you can ask Helm to inject this into the init script. If you name the script <code>*.sql</code> then the image will run it under <code>psql</code> for you so you don't need the credentials.</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: ConfigMap
metadata:
name: {{ .Values.postgres.configmap }}
data:
init_script.sql: |+
CREATE DATABASE my-db;
GRANT ALL PRIVILEGES ON DATABASE my-db to {{ .Values.postgres.user }}
...
</code></pre>
<p>Helm will substitute the templated value before creating the ConfigMap.</p>
<p>In all of these cases, note that the initialization scripts and environment variables are only considered the very first time the database is used, if the corresponding storage is empty. If you change one of the Helm values, it will change the environment-variable or ConfigMap setting, but that won't actually cause a change in the database.</p>
<p>Practically, my experience has been that the best approach here is to use the environment variables to create a database and user, and then to use your application framework's database-migration system to actually create the tables. You'll need the migrations in other contexts so it's good to have a path to run them, and they're useful if the database schema ever changes; you can't just re-run the <code>/docker-entrypoint-initdb.d</code> scripts.</p>
|
<p>I am following this tutorial: <a href="https://cloud.google.com/sql/docs/mysql/connect-instance-kubernetes" rel="nofollow noreferrer">Connect to Cloud SQL for MySQL from Google Kubernetes Engine</a>.
I have created a cluster. I have created a docker image in the repository. I have created a database. I am able to run my application outside of Kubernetes and it connects to the database. But after deploying the application, the pods are not in a valid state and I see this error in the pod logs:</p>
<pre><code>Caused by: java.lang.RuntimeException: [quizdev:us-central1:my-instance] Failed to update metadata for Cloud SQL instance.
...[na:na]
Caused by: com.google.api.client.googleapis.json.GoogleJsonResponseException: 403 Forbidden
GET https://sqladmin.googleapis.com/sql/v1beta4/projects/quizdev/instances/my-instance/connectSettings
{
"code": 403,
"details": [
{
"@type": "type.googleapis.com/google.rpc.ErrorInfo",
"reason": "ACCESS_TOKEN_SCOPE_INSUFFICIENT",
"domain": "googleapis.com",
"metadata": {
"service": "sqladmin.googleapis.com",
"method": "google.cloud.sql.v1beta4.SqlConnectService.GetConnectSettings"
}
}
],
"errors": [
{
"domain": "global",
"message": "Insufficient Permission",
"reason": "insufficientPermissions"
}
],
"message": "Request had insufficient authentication scopes.",
"status": "PERMISSION_DENIED"
}
at com.google.api.client.googleapis.json.GoogleJsonResponseException.from(GoogleJsonResponseException.java:146) ~[google-api-client-2.2.0.jar:2.2.0]
...
2023-06-14T06:57:49.508Z WARN 1 --- [ main] o.h.e.j.e.i.JdbcEnvironmentInitiator : HHH000342: Could not obtain connection to query metadata
</code></pre>
<p>What could be the issue? What can I check to diagnose the problem?</p>
<h2>Edit</h2>
<p>I have created the cluster using this command:</p>
<pre><code> gcloud container clusters create questy-java-cluster \
--num-nodes 2 \
--machine-type n1-standard-1 \
--zone us-central1-c
</code></pre>
| <p>I'm pretty sure that you created your cluster with the defaults. If you did that, you used the Compute Engine default parameters that you can see here:</p>
<p><a href="https://i.stack.imgur.com/FKEyE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/FKEyE.png" alt="enter image description here" /></a></p>
<p>That is, the default service account with the default access scopes. If you kept those, it's normal that you have no access: the minimal scopes do not allow Cloud SQL access.</p>
<hr />
<p>To solve that, you have to select either a user managed service account (the best solution) or still use the default service account but allow full scopes access.</p>
<p>Two solutions to enforce that:</p>
<ul>
<li>Either delete and recreate your cluster with the correct settings,</li>
<li>Or create another node pool with the correct parameters, as sketched below.</li>
</ul>
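<p>For the second option, a sketch based on your cluster-creation command (the pool name is arbitrary; <code>cloud-platform</code> grants full scopes and leaves fine-grained control to IAM):</p>
<pre><code>gcloud container node-pools create sql-enabled-pool \
  --cluster questy-java-cluster \
  --zone us-central1-c \
  --num-nodes 2 \
  --machine-type n1-standard-1 \
  --scopes cloud-platform
</code></pre>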
|
<p>Im using nginx as a reverse proxy to serve an Express.js backend and a Vue.js frontend. The whole application runs in two Docker Compose images, one for development, and one for production. I'd like to be able to test the production image locally with <strong>HTTPS</strong> (especially because Vue's PWA framework depends on HTTPS to properly register service workers).</p>
<p>The issue is that to configure a let's encrypt certificate with certbot, I need to be running on a server associated with my domain through DNS. What is the correct way to set up a CI/CD workflow where I can test HTTPS locally and also push it to my production server? Do I have to buy a certificate from a different CA and update it manually?</p>
<p>I tried using certbot, but it will not work since I am not on the server that is associated with mine through DNS.</p>
| <p>If the servers are independent (you don't want to expose your production system's private key on a development machine), you want different certificates, and the question becomes "How do you use TLS on your development / testing infrastructure?". This is a widely discussed question, e.g.:</p>
<p><a href="https://web.dev/how-to-use-local-https/" rel="nofollow noreferrer">https://web.dev/how-to-use-local-https/</a></p>
<p><a href="https://security.stackexchange.com/questions/121163/how-do-i-run-proper-https-on-an-internal-network">https://security.stackexchange.com/questions/121163/how-do-i-run-proper-https-on-an-internal-network</a></p>
<p>If your testing infrastructure is accessible from the Internet for a verification method, you can set up certbot similarly to a production machine, but you'd have to acquire a domain/subdomain for this.</p>
<p>Since the reverse proxy's config and certificates are likely not part of your software, you can treat them as infrastructure for your tests and just leave them in place between tests. Treating infrastructure concerns independently of your application improves modularity and therefore maintainability.</p>
|
<p>I set up a Kubernetes Cluster on Hetzner following theses steps: <a href="https://github.com/kube-hetzner/terraform-hcloud-kube-hetzner" rel="nofollow noreferrer">https://github.com/kube-hetzner/terraform-hcloud-kube-hetzner</a></p>
<pre><code>Client Version: v1.26.3
Kustomize Version: v4.5.7
Server Version: v1.26.4+k3s1
Mongosh Version: 1.8.1
</code></pre>
<p>I am unable to connect to either my own mongodb server (docker deployment) or a hosted one on <code>mongodb.net</code>:</p>
<pre><code>root@trustsigner-frontend-deployment-59644b6b55-pqgmm:/usr/share/nginx/html# mongosh mongodb+srv://<removed-user>:<removed-password>@cluster0.fdofntt.mongodb.net/test
Current Mongosh Log ID: 6447807561ebcee04b00165d
Connecting to: mongodb+srv://<credentials>@cluster0.fdofntt.mongodb.net/test?appName=mongosh+1.8.1
MongoServerSelectionError: Server selection timed out after 30000 ms
</code></pre>
<p>I get the same error when using my own server with <code>mongodb://</code> instead of <code>mongodb+srv://</code>.</p>
<p>But surprisingly, it is possible to use the same connection string with MongoDB Compass or a mongosh installed on my machine (not in a Kubernetes pod).</p>
<p>Pinging 8.8.8.8 or any other site works, and I can fetch via curl. But there is no way to establish a MongoDB connection...</p>
| <p>Experiencing the same issue while using <a href="https://github.com/kube-hetzner/terraform-hcloud-kube-hetzner" rel="nofollow noreferrer">kube-hetzner</a> as you, I looked at the configuration file and found the culprit: by default, outbound traffic is filtered, except for a few popular ports like HTTP or HTTPS.</p>
<p>There are at least 2 solutions:</p>
<ol>
<li><p>you set the variable <code>restrict_outbound_traffic</code> to <em>false</em> (see the sketch after this list):
<a href="https://github.com/kube-hetzner/terraform-hcloud-kube-hetzner/blob/master/kube.tf.example#L395" rel="nofollow noreferrer">https://github.com/kube-hetzner/terraform-hcloud-kube-hetzner/blob/master/kube.tf.example#L395</a></p>
</li>
<li><p>You add a firewall rule to allow outbound traffic on port 27017.</p>
</li>
</ol>
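<p>For the first option, the change is a single line in your <code>kube.tf</code> (a sketch; the rest of the module block stays as you already have it):</p>
<pre><code>module "kube-hetzner" {
  # ... your existing configuration ...

  # stop filtering outbound traffic, so MongoDB's port 27017 is reachable
  restrict_outbound_traffic = false
}
</code></pre>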
<p>Once done, the connection to a Mongo Atlas cluster is working perfectly!</p>
|