prompt | response
---|---
<p>I am configuring an ingress-nginx load balancer on Azure Kubernetes service. I have installed the load balancer using Helm, and set up ingress. Here is the behavior I'm encountering:</p>
<ul>
<li>When I include a host in my pathing rules in my ingress config, I cannot access the service at that host URL; the request times out
<li>When I don't include a host in my pathing rules, I can access the service at that host URL with no issues
<li>Regardless of whether or not the host is included in the pathing rules, I can successfully access the service at the host URL when I curl it from any pod in the cluster
<li>nslookup successfully resolves the host on my machine
</ul>
<p>I'm trying to figure out why I'm unable to reach my service when the host is included in my ingress configuration. Any ideas? Technical details are below.</p>
<p>Note that the configuration is only pointing to one service currently, but filtering by host will eventually be necessary - I'm planning to have multiple services with different domains running through this load balancer.</p>
<p>Ingress controller configuration:</p>
<pre><code>helm install --replace ingress-nginx ingress-nginx/ingress-nginx \
--create-namespace \
--namespace $NAMESPACE \
--set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-health-probe-request-path"=127.0.0.1 \
--set controller.service.annotations."service\.beta\.kubernetes\.io/azure-dns-label-name"=$DNS_LABEL \
--set controller.service.loadBalancerIP=$IP \
</code></pre>
<p>The ingress configuration:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: hello-world-ingress
annotations:
cert-manager.io/cluster-issuer: letsencrypt
spec:
ingressClassName: nginx
tls:
- hosts:
- my.host.com
secretName: tls-secret
rules:
- host: my.host.com //Removing this item makes the service reachable
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: xrcfrontend
port:
number: 80
</code></pre>
<p>This is the curl command I'm running. It returns the correct results when run inside the pod, and times out when run outside.</p>
<pre><code>curl https://my.host.com --insecure
</code></pre>
| <p>If you are using AKS v1.24 or later, try setting the annotation below with the path <strong>/healthz</strong> instead of <strong>127.0.0.1</strong> when installing the nginx ingress controller (or directly on the ingress controller's service), and keep using host-based routing in your nginx ingress routes:</p>
<blockquote>
<p>service.beta.kubernetes.io/azure-load-balancer-health-probe-request-path = <strong>/healthz</strong></p>
</blockquote>
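<p>Applied to the install command from the question, that would look roughly like this (same release, namespace and variables as before; only the probe path changes):</p>
<pre class="lang-bash prettyprint-override"><code># same values as in the question; only the health-probe path differs
helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx \
  --create-namespace \
  --namespace "$NAMESPACE" \
  --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-health-probe-request-path"=/healthz \
  --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-dns-label-name"="$DNS_LABEL" \
  --set controller.service.loadBalancerIP="$IP"
</code></pre>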
<p>If that fixes it, the natural follow-up is:
<strong>why was it not working with a host earlier?</strong></p>
<ul>
<li>Because the load balancer's backend pool goes unhealthy when the ingress controller's health-probe path is wrong. With a host in the rule, the ingress only accepts traffic for that particular host name, so the health probe against the ingress controller's service (the Azure LB) fails: probing <code>/</code> or <code>127.0.0.1</code> over HTTP returns a 404.</li>
</ul>
<p>Github discussion on changes - <a href="https://github.com/Azure/AKS/issues/2903#issuecomment-1115720970" rel="nofollow noreferrer">https://github.com/Azure/AKS/issues/2903#issuecomment-1115720970</a></p>
<p>More details on installation - <a href="https://learn.microsoft.com/en-us/azure/aks/ingress-basic?tabs=azure-cli#basic-configuration" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/azure/aks/ingress-basic?tabs=azure-cli#basic-configuration</a></p>
|
<p>I'm using Traefik 2.7.0 on an AKS Kubernetes Cluster 1.22.6.
Currently, everything routes to the same service:</p>
<pre><code>apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
name: api
namespace: namespace1
spec:
entryPoints:
- websecure
routes:
- match: Host(`api.my-domain.com`)
kind: Rule
services:
- name: api
namespace: namespace1
port: 80
tls:
secretName: api-my-domain-com-cert
</code></pre>
<p>I'm currently in the process of externalizing an API resource from this service to a dedicated new service ("/users") because there will be other services in the future that will need the same functionality.</p>
<p>What I'm trying (and failing) to do, is to route calls to "/users" to the new service:</p>
<pre><code>apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
name: api
namespace: namespace1
spec:
entryPoints:
- websecure
routes:
- match: Host(`api.my-domain.com`) && Path(`/users`)
kind: Rule
services:
- name: users-api
namespace: namespace2
port: 80
- match: Host(`api.my-domain.com`)
kind: Rule
services:
- name: api
namespace: namespace1
port: 80
tls:
secretName: api-baywa-lusy-com-cert
</code></pre>
<p>I tried <code>Path(..)</code> and <code>PathPrefix(..)</code>. No success. Everything is still routed to the old service. The new service has slightly different output. So I can tell with certainty that it's still routed to the old service.</p>
<p>Adding the priority manually didn't help either:</p>
<pre><code>apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
name: api
namespace: namespace1
spec:
entryPoints:
- websecure
routes:
- match: Host(`api.my-domain.com`) && Path(`/users`)
kind: Rule
priority: 2000
services:
- name: users-api
namespace: namespace2
port: 80
- match: Host(`api.my-domain.com`)
kind: Rule
priority: 1000
services:
- name: api
namespace: namespace1
port: 80
tls:
secretName: api-baywa-lusy-com-cert
</code></pre>
<p>Am I missing something here? Any help is appreciated!</p>
<p>Thanks,</p>
<p>best regards,</p>
<p>Pascal</p>
| <p>By default, you can only expose services in the same namespace as your IngressRoute resource. If you watch the logs of your Traefik pod when you deploy your IngressRoute, you should see something like:</p>
<blockquote>
<p>time="2023-01-26T13:57:17Z" level=error msg="service namespace2/users-api not in the parent resource namespace namespace1" providerName=kubernetescrd ingress=namespace1 namespace=namespace1</p>
</blockquote>
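<p>You can tail the Traefik logs for that message with something like the following (the <code>traefik</code> namespace and deployment name are assumptions; adjust them to wherever Traefik runs in your cluster):</p>
<pre class="lang-bash prettyprint-override"><code># namespace and deployment name are assumptions -- adjust to your Traefik install
kubectl -n traefik logs deploy/traefik -f | grep "not in the parent resource namespace"
</code></pre>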
<p>To do what you want, you need to create two separate IngressRoute resources, one in <code>namespace1</code> and one in <code>namespace2</code>.</p>
<p>In <code>namespace1</code>:</p>
<pre><code>apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
labels:
app: old-api
name: old-api
namespace: namespace1
spec:
entryPoints:
- web
routes:
- kind: Rule
priority: 1000
match: Host(`api.my-domain.com`)
services:
- name: old-api
port: 80
</code></pre>
<p>In <code>namespace2</code>:</p>
<pre><code>apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
labels:
app: new-api
name: new-api
namespace: namespace2
spec:
entryPoints:
- web
routes:
- kind: Rule
priority: 2000
match: Host(`api.my-domain.com`) && PathPrefix(`/users`)
services:
- name: new-api
port: 80
</code></pre>
<p>You can find all the files I used to test this configuration <a href="https://github.com/larsks/so-example-75218872-traefik-ingress" rel="nofollow noreferrer">here</a>.</p>
<p>I don't know if the explicit priorities are necessary or not; it worked for me without them but maybe I was just lucky so I left them there. I would generally assume that a "more specific route" takes precedence over a "less specific route", but I don't know if that's actually true.</p>
|
<p>I need to use <a href="https://github.com/Azure/azure-sdk-for-python" rel="nofollow noreferrer">Azure Python SDK</a> and <a href="https://github.com/kubernetes-client/python" rel="nofollow noreferrer">Kubernetes Python Client</a> to list the Pods CPU limits for a cluster running in AKS.</p>
<p>Although it's straightforward using the CLI/PowerShell, I need to use Python exclusively and
must not use <a href="https://stackoverflow.com/questions/53535855/how-to-get-kubectl-configuration-from-azure-aks-with-python">subprocess calls</a>.</p>
<p>Here is a snippet that gets a <code>KubeConfig</code> object after authenticating with Azure:</p>
<pre class="lang-py prettyprint-override"><code>from azure.identity import DefaultAzureCredential
from azure.mgmt.containerservice import ContainerServiceClient
credential = DefaultAzureCredential(exclude_cli_credential=True)
subscription_id = "XXX"
resource_group_name= 'MY-SUB'
cluster_name = "my-aks-clustername"
container_service_client = ContainerServiceClient(credential, subscription_id)
kubeconfig = container_service_client.managed_clusters. \
list_cluster_user_credentials(resource_group_name, cluster_name). \
kubeconfigs[0]
</code></pre>
<p>But I am unsure how to pass this to the K8s Python client:</p>
<pre class="lang-py prettyprint-override"><code>from kubernetes import client, config
config.load_kube_config() ## How to pass?
v1 = client.CoreV1Api()
print("Listing pods with their IPs:")
ret = v1.list_pod_for_all_namespaces(watch=False)
for i in ret.items:
print("%s\t%s\t%s" % (i.status.pod_ip, i.metadata.namespace, i.metadata.name))
</code></pre>
| <p>You can use the <code>config.load_kube_config</code> method and pass in the kubeconfig object you obtained earlier as a parameter. The method accepts a <code>config_file</code> parameter, which can be a file object, a file-like object, or a string file path.</p>
<p>Since <strong>kubeconfig</strong> is a string, you can pass it as a string file path, like so:</p>
<pre><code>from kubernetes import client, config
# Pass the kubeconfig string as a file path
config.load_kube_config(config_file=kubeconfig)
v1 = client.CoreV1Api()
print("Listing pods with their IPs:")
ret = v1.list_pod_for_all_namespaces(watch=False)
for i in ret.items:
print("%s\t%s\t%s" % (i.status.pod_ip, i.metadata.namespace, i.metadata.name))
</code></pre>
|
<p>I need to use <a href="https://github.com/Azure/azure-sdk-for-python" rel="nofollow noreferrer">Azure Python SDK</a> and <a href="https://github.com/kubernetes-client/python" rel="nofollow noreferrer">Kubernetes Python Client</a> to list the Pods CPU limits for a cluster running in AKS.</p>
<p>Although it's straightforward using the CLI/PowerShell, I need to use Python exclusively and
must not use <a href="https://stackoverflow.com/questions/53535855/how-to-get-kubectl-configuration-from-azure-aks-with-python">subprocess calls</a>.</p>
<p>Here is a snippet that gets a <code>KubeConfig</code> object after authenticating with Azure:</p>
<pre class="lang-py prettyprint-override"><code>from azure.identity import DefaultAzureCredential
from azure.mgmt.containerservice import ContainerServiceClient
credential = DefaultAzureCredential(exclude_cli_credential=True)
subscription_id = "XXX"
resource_group_name= 'MY-SUB'
cluster_name = "my-aks-clustername"
container_service_client = ContainerServiceClient(credential, subscription_id)
kubeconfig = container_service_client.managed_clusters. \
list_cluster_user_credentials(resource_group_name, cluster_name). \
kubeconfigs[0]
</code></pre>
<p>But I am unsure how to pass this to the K8s Python client:</p>
<pre class="lang-py prettyprint-override"><code>from kubernetes import client, config
config.load_kube_config() ## How to pass?
v1 = client.CoreV1Api()
print("Listing pods with their IPs:")
ret = v1.list_pod_for_all_namespaces(watch=False)
for i in ret.items:
print("%s\t%s\t%s" % (i.status.pod_ip, i.metadata.namespace, i.metadata.name))
</code></pre>
| <p>I tried a repro and was able to achieve it as below:</p>
<ol>
<li><p>Getting the kubeconfig in correctly decoded form from <code>kubeconfigs[0]</code>, which is a <a href="https://learn.microsoft.com/en-us/python/api/azure-mgmt-hybridkubernetes/azure.mgmt.hybridkubernetes.models.credentialresults?view=azure-python" rel="nofollow noreferrer">CredentialResults</a> object.</p>
</li>
<li><p>Writing the generated kubeconfig in a file.</p>
</li>
<li><p>Loading the file in config module.</p>
<pre><code>from azure.identity import DefaultAzureCredential
from azure.mgmt.containerservice import ContainerServiceClient
import os
from kubernetes import client, config
credential = DefaultAzureCredential(exclude_cli_credential=True)
subscription_id = "XXX"
resource_group_name= 'MY-SUB'
cluster_name = "my-aks-clustername"
container_service_client = ContainerServiceClient(credential, subscription_id)
# getting kubeconfig in a decoded format from CredentialResult
kubeconfig = container_service_client.managed_clusters. \
list_cluster_user_credentials(resource_group_name, cluster_name). \
kubeconfigs[0].value.decode(encoding='UTF-8')
# writing generated kubeconfig in a file
f=open("kubeconfig","w")
f.write(kubeconfig)
f.close()
# loading the config file
config.load_kube_config('kubeconfig')
# deleting the kubeconfig file
os.remove('kubeconfig')
v1 = client.CoreV1Api()
print("Listing containers with their CPU limits:")
ret = v1.list_pod_for_all_namespaces(watch=False)
for i in ret.items:
for container in i.spec.containers:
if container.resources.limits:
if 'cpu' in container.resources.limits:
print( container.name, container.resources.limits['cpu'])
</code></pre>
</li>
</ol>
|
<p>I've been trying to configure the cluster on AKS for quite a few days now, but I keep jumping between parts of the docs, various questions here on SO, and articles on Medium, and I keep failing at it.</p>
<p>The goal is to get a static IP with a DNS name that I can use to connect my apps to the server deployed on AKS.</p>
<p>I have created the infrastructure via Terraform; it consists of a resource group in which I created a Public IP and the AKS cluster. So far so good.</p>
<p><a href="https://i.stack.imgur.com/eGOrm.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/eGOrm.png" alt="resource groups" /></a></p>
<p><a href="https://i.stack.imgur.com/Vp4CH.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Vp4CH.png" alt="fixi-resource-group" /></a></p>
<p><a href="https://i.stack.imgur.com/saHuz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/saHuz.png" alt="MC_fixit-resource-group" /></a></p>
<p>After trying the ingress controller that gets installed when you use the option <code>http_application_routing_enabled = true</code> at cluster creation, which the docs discourage for production (<a href="https://learn.microsoft.com/en-us/azure/aks/http-application-routing" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/azure/aks/http-application-routing</a>), I'm now trying the recommended way and installing the ingress-nginx controller via Helm (<a href="https://learn.microsoft.com/en-us/azure/aks/ingress-basic?tabs=azure-cli" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/azure/aks/ingress-basic?tabs=azure-cli</a>).</p>
<p>In Terraform I'm installing it all like this:</p>
<h1>resource group and cluster</h1>
<pre><code>resource "azurerm_resource_group" "resource_group" {
name = var.resource_group_name
location = var.location
tags = {
Environment = "Test"
Team = "DevOps"
}
}
resource "azurerm_kubernetes_cluster" "server_cluster" {
name = "server_cluster"
location = azurerm_resource_group.resource_group.location
resource_group_name = azurerm_resource_group.resource_group.name
dns_prefix = "fixit"
kubernetes_version = var.kubernetes_version
# sku_tier = "Paid"
default_node_pool {
name = "default"
node_count = 1
min_count = 1
max_count = 3
# vm_size = "standard_b2s_v5"
# vm_size = "standard_e2bs_v5"
vm_size = "standard_b4ms"
type = "VirtualMachineScaleSets"
enable_auto_scaling = true
enable_host_encryption = false
# os_disk_size_gb = 30
# enable_node_public_ip = true
}
service_principal {
client_id = var.sp_client_id
client_secret = var.sp_client_secret
}
tags = {
Environment = "Production"
}
linux_profile {
admin_username = "azureuser"
ssh_key {
key_data = var.ssh_key
}
}
network_profile {
network_plugin = "kubenet"
load_balancer_sku = "standard"
# load_balancer_sku = "basic"
}
# http_application_routing_enabled = true
http_application_routing_enabled = false
}
</code></pre>
<h1>public ip</h1>
<pre><code>resource "azurerm_public_ip" "public-ip" {
name = "fixit-public-ip"
location = var.location
resource_group_name = var.resource_group_name
allocation_method = "Static"
domain_name_label = "fixit"
sku = "Standard"
}
</code></pre>
<h1>load balancer</h1>
<pre><code>resource "kubernetes_service" "cluster-ingress" {
metadata {
name = "cluster-ingress-svc"
annotations = {
"service.beta.kubernetes.io/azure-load-balancer-resource-group" = "fixit-resource-group"
# Warning SyncLoadBalancerFailed 2m38s (x8 over 12m) service-controller Error syncing load balancer:
# failed to ensure load balancer: findMatchedPIPByLoadBalancerIP: cannot find public IP with IP address 52.157.90.236
# in resource group MC_fixit-resource-group_server_cluster_westeurope
# "service.beta.kubernetes.io/azure-load-balancer-resource-group" = "MC_fixit-resource-group_server_cluster_westeurope"
# kubernetes.io/ingress.class: addon-http-application-routing
}
}
spec {
# type = "Ingress"
type = "LoadBalancer"
load_balancer_ip = var.public_ip_address
selector = {
name = "cluster-ingress-svc"
}
port {
name = "cluster-port"
protocol = "TCP"
port = 3000
target_port = "80"
}
}
}
</code></pre>
<h1>ingress controller</h1>
<pre><code>resource "helm_release" "nginx" {
name = "ingress-nginx"
repository = "https://kubernetes.github.io/ingress-nginx"
chart = "ingress-nginx"
namespace = "default"
set {
name = "rbac.create"
value = "false"
}
set {
name = "controller.service.externalTrafficPolicy"
value = "Local"
}
set {
name = "controller.service.loadBalancerIP"
value = var.public_ip_address
}
set {
name = "controller.service.annotations.service.beta.kubernetes.io/azure-load-balancer-internal"
value = "true"
}
# --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-health-probe-request-path"=/healthz
set {
name = "controller.service.annotations.service\\.beta\\.kubernetes\\.io/azure-load-balancer-health-probe-request-path"
value = "/healthz"
}
}
</code></pre>
<p>but the installation fails with this message from terraform</p>
<pre><code>Warning: Helm release "ingress-nginx" was created but has a failed status. Use the `helm` command to investigate the error, correct it, then run Terraform again.
│
│ with module.ingress_controller.helm_release.nginx,
│ on modules/ingress_controller/controller.tf line 2, in resource "helm_release" "nginx":
│ 2: resource "helm_release" "nginx" {
│
╵
╷
│ Error: timed out waiting for the condition
│
│ with module.ingress_controller.helm_release.nginx,
│ on modules/ingress_controller/controller.tf line 2, in resource "helm_release" "nginx":
│ 2: resource "helm_release" "nginx" {
</code></pre>
<p>The controller print-out:</p>
<pre><code>vincenzocalia@vincenzos-MacBook-Air helm_charts % kubectl describe svc ingress-nginx-controller
Name: ingress-nginx-controller
Namespace: default
Labels: app.kubernetes.io/component=controller
app.kubernetes.io/instance=ingress-nginx
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=ingress-nginx
app.kubernetes.io/part-of=ingress-nginx
app.kubernetes.io/version=1.5.1
helm.sh/chart=ingress-nginx-4.4.2
Annotations: meta.helm.sh/release-name: ingress-nginx
meta.helm.sh/release-namespace: default
service: map[beta:map[kubernetes:map[io/azure-load-balancer-internal:true]]]
service.beta.kubernetes.io/azure-load-balancer-health-probe-request-path: /healthz
Selector: app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
Type: LoadBalancer
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.0.173.243
IPs: 10.0.173.243
IP: 52.157.90.236
Port: http 80/TCP
TargetPort: http/TCP
NodePort: http 31709/TCP
Endpoints:
Port: https 443/TCP
TargetPort: https/TCP
NodePort: https 30045/TCP
Endpoints:
Session Affinity: None
External Traffic Policy: Local
HealthCheck NodePort: 32500
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal EnsuringLoadBalancer 32s (x5 over 108s) service-controller Ensuring load balancer
Warning SyncLoadBalancerFailed 31s (x5 over 107s) service-controller Error syncing load balancer: failed to ensure load balancer: findMatchedPIPByLoadBalancerIP: cannot find public IP with IP address 52.157.90.236 in resource group mc_fixit-resource-group_server_cluster_westeurope
vincenzocalia@vincenzos-MacBook-Air helm_charts % az aks show --resource-group fixit-resource-group --name server_cluster --query nodeResourceGroup -o tsv
MC_fixit-resource-group_server_cluster_westeurope
</code></pre>
<p>Why is it looking in the <code>MC_fixit-resource-group_server_cluster_westeurope</code> resource group and not in the <code>fixit-resource-group</code> I created for the Cluster, Public IP and Load Balancer?</p>
<p>If I change the controller's load balancer IP to the public IP in <code>MC_fixit-resource-group_server_cluster_westeurope</code>, then Terraform still outputs the same error, but the controller reports that it is correctly assigned to the IP and load balancer:</p>
<pre><code>set {
name = "controller.service.loadBalancerIP"
value = "20.73.192.77" #var.public_ip_address
}
</code></pre>
<pre><code>vincenzocalia@vincenzos-MacBook-Air helm_charts % kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
cluster-ingress-svc LoadBalancer 10.0.110.114 52.157.90.236 3000:31863/TCP 104m
ingress-nginx-controller LoadBalancer 10.0.106.201 20.73.192.77 80:30714/TCP,443:32737/TCP 41m
ingress-nginx-controller-admission ClusterIP 10.0.23.188 <none> 443/TCP 41m
kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 122m
vincenzocalia@vincenzos-MacBook-Air helm_charts % kubectl describe svc ingress-nginx-controller
Name: ingress-nginx-controller
Namespace: default
Labels: app.kubernetes.io/component=controller
app.kubernetes.io/instance=ingress-nginx
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=ingress-nginx
app.kubernetes.io/part-of=ingress-nginx
app.kubernetes.io/version=1.5.1
helm.sh/chart=ingress-nginx-4.4.2
Annotations: meta.helm.sh/release-name: ingress-nginx
meta.helm.sh/release-namespace: default
service: map[beta:map[kubernetes:map[io/azure-load-balancer-internal:true]]]
service.beta.kubernetes.io/azure-load-balancer-health-probe-request-path: /healthz
Selector: app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
Type: LoadBalancer
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.0.106.201
IPs: 10.0.106.201
IP: 20.73.192.77
LoadBalancer Ingress: 20.73.192.77
Port: http 80/TCP
TargetPort: http/TCP
NodePort: http 30714/TCP
Endpoints:
Port: https 443/TCP
TargetPort: https/TCP
NodePort: https 32737/TCP
Endpoints:
Session Affinity: None
External Traffic Policy: Local
HealthCheck NodePort: 32538
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal EnsuringLoadBalancer 39m (x2 over 41m) service-controller Ensuring load balancer
Normal EnsuredLoadBalancer 39m (x2 over 41m) service-controller Ensured load balancer
vincenzocalia@vincenzos-MacBook-Air helm_charts %
</code></pre>
<p>Reading here <a href="https://learn.microsoft.com/en-us/azure/aks/faq#why-are-two-resource-groups-created-with-aks" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/azure/aks/faq#why-are-two-resource-groups-created-with-aks</a></p>
<blockquote>
<p>To enable this architecture, each AKS deployment spans two resource groups:
You create the first resource group. This group contains only the Kubernetes service resource. The AKS resource provider automatically creates the second resource group during deployment. An example of the second resource group is MC_myResourceGroup_myAKSCluster_eastus. For information on how to specify the name of this second resource group, see the next section.
The second resource group, known as the node resource group, contains all of the infrastructure resources associated with the cluster. These resources include the Kubernetes node VMs, virtual networking, and storage. By default, the node resource group has a name like MC_myResourceGroup_myAKSCluster_eastus. AKS automatically deletes the node resource group whenever the cluster is deleted, so it should only be used for resources that share the cluster's lifecycle.</p>
</blockquote>
<p>Should I pass the first or the second group depending on what kind of resource I'm creating?
E.g. does <code>kubernetes_service</code> need the 1st RG, while <code>azurerm_public_ip</code> needs the 2nd?</p>
<p>What is it that I'm missing here?
Please explain it like I'm 5 years old, because that's how I feel right now.</p>
<p>Many thanks</p>
| <p>Finally found what the problem was.</p>
<p>Indeed, the <code>Public IP</code> needs to be created in the <code>node resource group</code>, because the ingress controller, with its <code>loadBalancerIP</code> set to the <code>Public IP</code> address, is going to look for it in the <code>node resource group</code>; if you create it in the <code>resource group</code> instead, it fails with the error I was getting.</p>
<p>The node resource group name is assigned at cluster creation, e.g. <code>MC_myResourceGroup_myAKSCluster_eastus</code>, but you can name it as you wish using the parameter <code>node_resource_group = var.node_resource_group_name</code>.</p>
<p>Also, the Public IP <code>sku</code>, <code>"Standard"</code> (must be specified) or <code>"Basic"</code> (the default), has to match the cluster's <code>load_balancer_sku</code>, <code>"standard"</code> or <code>"basic"</code> (no default value here, it needs to be specified).</p>
<p>I also put the Public IP in the cluster module so that it can depend on the cluster; otherwise it would be created before the <code>node resource group</code> exists and fail. I couldn't set that dependency correctly in the <code>main.tf</code> file.</p>
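<p>If you're unsure what the node resource group of an existing cluster is called, the same <code>az</code> query used in the question returns it:</p>
<pre class="lang-bash prettyprint-override"><code># prints the auto-generated (or custom) node resource group name
az aks show \
  --resource-group fixit-resource-group \
  --name server_cluster \
  --query nodeResourceGroup -o tsv
</code></pre>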
<p>So the working configuration is now:</p>
<h1>main</h1>
<pre><code>terraform {
required_version = ">=1.1.0"
required_providers {
azurerm = {
source = "hashicorp/azurerm"
version = "~> 3.0.2"
}
}
}
provider "azurerm" {
features {
resource_group {
prevent_deletion_if_contains_resources = false
}
}
subscription_id = var.azure_subscription_id
tenant_id = var.azure_subscription_tenant_id
client_id = var.service_principal_appid
client_secret = var.service_principal_password
}
provider "kubernetes" {
host = "${module.cluster.host}"
client_certificate = "${base64decode(module.cluster.client_certificate)}"
client_key = "${base64decode(module.cluster.client_key)}"
cluster_ca_certificate = "${base64decode(module.cluster.cluster_ca_certificate)}"
}
provider "helm" {
kubernetes {
host = "${module.cluster.host}"
client_certificate = "${base64decode(module.cluster.client_certificate)}"
client_key = "${base64decode(module.cluster.client_key)}"
cluster_ca_certificate = "${base64decode(module.cluster.cluster_ca_certificate)}"
}
}
module "cluster" {
source = "./modules/cluster"
location = var.location
vm_size = var.vm_size
resource_group_name = var.resource_group_name
node_resource_group_name = var.node_resource_group_name
kubernetes_version = var.kubernetes_version
ssh_key = var.ssh_key
sp_client_id = var.service_principal_appid
sp_client_secret = var.service_principal_password
}
module "ingress-controller" {
source = "./modules/ingress-controller"
public_ip_address = module.cluster.public_ip_address
depends_on = [
module.cluster.public_ip_address
]
}
</code></pre>
<h1>cluster</h1>
<pre><code>resource "azurerm_resource_group" "resource_group" {
name = var.resource_group_name
location = var.location
tags = {
Environment = "test"
Team = "DevOps"
}
}
resource "azurerm_kubernetes_cluster" "server_cluster" {
name = "server_cluster"
### choose the resource goup to use for the cluster
location = azurerm_resource_group.resource_group.location
resource_group_name = azurerm_resource_group.resource_group.name
### decide the name of the cluster "node" resource group, if unset will be named automatically
node_resource_group = var.node_resource_group_name
dns_prefix = "fixit"
kubernetes_version = var.kubernetes_version
# sku_tier = "Paid"
default_node_pool {
name = "default"
node_count = 1
min_count = 1
max_count = 3
vm_size = var.vm_size
type = "VirtualMachineScaleSets"
enable_auto_scaling = true
enable_host_encryption = false
# os_disk_size_gb = 30
}
service_principal {
client_id = var.sp_client_id
client_secret = var.sp_client_secret
}
tags = {
Environment = "Production"
}
linux_profile {
admin_username = "azureuser"
ssh_key {
key_data = var.ssh_key
}
}
network_profile {
network_plugin = "kubenet"
load_balancer_sku = "basic"
}
http_application_routing_enabled = false
depends_on = [
azurerm_resource_group.resource_group
]
}
resource "azurerm_public_ip" "public-ip" {
name = "fixit-public-ip"
location = var.location
# resource_group_name = var.resource_group_name
resource_group_name = var.node_resource_group_name
allocation_method = "Static"
domain_name_label = "fixit"
# sku = "Standard"
depends_on = [
azurerm_kubernetes_cluster.server_cluster
]
}
</code></pre>
<h1>ingress controller</h1>
<pre><code>resource "helm_release" "nginx" {
name = "ingress-nginx"
repository = "ingress-nginx"
chart = "ingress-nginx/ingress-nginx"
namespace = "default"
set {
name = "controller.service.externalTrafficPolicy"
value = "Local"
}
set {
name = "controller.service.annotations.service.beta.kubernetes.io/azure-load-balancer-internal"
value = "true"
}
set {
name = "controller.service.loadBalancerIP"
value = var.public_ip_address
}
set {
name = "controller.service.annotations.service.beta.kubernetes.io/azure-load-balancer-health-probe-request-path"
value = "/healthz"
}
}
</code></pre>
<h1>ingress service</h1>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress-service
# namespace: default
annotations:
nginx.ingress.kubernetes.io/ssl-redirect: "false"
nginx.ingress.kubernetes.io/use-regex: "true"
nginx.ingress.kubernetes.io/rewrite-target: /$2$3$4
spec:
ingressClassName: nginx
rules:
# - host: fixit.westeurope.cloudapp.azure.com #dns from Azure PublicIP
### Node.js server
- http:
paths:
- path: /(/|$)(.*)
pathType: Prefix
backend:
service:
name: server-clusterip-service
port:
number: 80
- http:
paths:
- path: /server(/|$)(.*)
pathType: Prefix
backend:
service:
name: server-clusterip-service
port:
number: 80
...
other services omitted
</code></pre>
<p>Hope this can help others having difficulties in getting the setup right.
Cheers.</p>
|
<p>The certs in Kubernetes are base64 encoded, while on disk they are plain certificates. We might also have certs embedded in a YAML file. I was wondering if we can quickly verify the certs, either with <code>md5sum</code> or by viewing the cert contents, as follows.</p>
<p>The first function tries to store pipe stages in variables and combine them into a final command line; the second processes the intermediate results and uses them in the final command.
The first function doesn't work because of extra quotes. The second function doesn't work because the '\n' in the content is somehow lost.</p>
<p>The variable does not keep the newlines, so openssl doesn't read it properly from stdin.</p>
<p>Any suggestions on how to fix these functions?</p>
<pre class="lang-bash prettyprint-override"><code>function md5cert()
{
md5cert_usage() { echo "md5cert: [-f <file> |-c <cmd>] -p <pat>" 1>&2; exit; }
local OPTIND o a
pflag=0
while getopts ":a:" o; do
case "${o}" in
f)
f="${OPTARG}"
cmd="cat $f"
decode=" base64 -d | "
;;
c)
cmd="${OPTARG}"
decode=" base64 -d | "
;;
o)
o="${OPTARG}"
cmd="cat ${f}"
decode=" "
;;
p)
p="${OPTARG}"
pflag=1
;;
*)
md5cert_usage
;;
esac
done
shift $((OPTIND-1))
if [ $pflag -eq 1 ]; then
greppat="grep $p | awk '{print $2}' | "
fi
$cmd | $greppat $decode md5sum
}
function readcert()
{
readcert_usage() { echo "readcert: [-f <file> |-c <cmd>] -p <pat>" 1>&2; exit; }
local OPTIND o a
pflag=0
while getopts ":f:c:o:p:" o; do
case "${o}" in
f)
f="${OPTARG}"
cmd="cat $f"
decode=" base64 -d | "
;;
c)
cmd="${OPTARG}"
decode=" base64 -d | "
;;
o)
o="${OPTARG}"
cmd="cat ${f}"
decode=" "
;;
p)
p="${OPTARG}"
pflag=1
;;
*)
readcert_usage
;;
esac
done
shift $((OPTIND-1))
if [ $pflag -eq 1 ]; then
greppat="grep --color=auto $p | awk '{print \$2}' | "
content=$($cmd | grep $p | awk '{print $2}')
if [ -n "$decode" ]; then
content="$(echo $content | base64 -d)"
fi
fi
echo -e $content | openssl x509 -in /dev/stdin -noout -text
}
</code></pre>
<p>The commands work as such, but I'm having problems generalizing the approach into functions.</p>
| <p>I'd try hard to avoid building up a command line out of unquoted strings as you show. It will in fact be prone to quoting problems, and if the input contains any sort of character that has special shell syntax the process is prone to behaving unexpectedly (and this can have security implications).</p>
<p>The first change I'd make is to remove the <code>-c</code> and <code>-f</code> options. Don't have these functions try to run the commands themselves; instead, just have them filter whatever input they get.</p>
<pre class="lang-bash prettyprint-override"><code># prone to quoting problems
readcert -c 'kubectl get secret something -o jsonpath="..."'
# one fewer level of quoting
kubectl get secret something -o jsonpath="..." | readcert
</code></pre>
<p>If you can construct a fixed pipeline then many of the problems go away. Say you can put "maybe" as a verb in the pipeline; then after parsing the command line, in concept you could get approximately</p>
<pre class="lang-bash prettyprint-override"><code>md5cert() {
maybe grep something | \
maybe print out some of the line | \
maybe base64 decode it |
md5sum
}
</code></pre>
<p>To pick the "maybe base64 decode" as an example:</p>
<pre class="lang-bash prettyprint-override"><code># Maybe base64 decode the input. Takes one argument. If it is "yes",
# base64 decodes stdin to stdout; otherwise passes stdin to stdout
# unmodified.
maybe_base64_decode() {
if [ "$1" = yes ]; then
base64 -d
else
cat
fi
}
md5cert() {
decode=yes
    while getopts ... o; do
case "$o" in
o)
decode=no
;;
esac
done
... | maybe_base64_decode "$decode" | ...
}
</code></pre>
<p>In similar tricks, <code>grep ''</code> (with an empty regexp) will print out a file unmodified (every line matches), as will <code>awk '{ print $0 }'</code>. You can also include the (possibly empty) regexp in the <code>awk</code> call.</p>
<p>I might simplify the first function to:</p>
<pre class="lang-bash prettyprint-override"><code>maybe_base64_decode() { ... } # as above
md5cert() {
pattern='' # a regexp
words='$0' # an awk expression
decode=yes
while getopts ":op:" o; do
case "${o}" in
o)
decode=no
;;
p)
pattern="$OPTARG"
words="$2"
;;
*)
echo 'Usage: md5cert [-o] [-p PATTERN]' >&2
return 1
;;
esac
done
awk "/$pattern/ { print $words }" | maybe_base64_decode "$decode"
}
</code></pre>
<p>I've also skipped the bash-specific <code>function</code> and <code>local</code> keywords; this setup should work with any POSIX shell.</p>
<p>You don't specifically describe how you're using <code>grep</code> and <code>awk</code> here, but if you're reading a Kubernetes manifest, the <code>yq</code> or <code>jq</code> tools could be more robust ways of processing YAML or JSON output. I hint at <a href="https://kubernetes.io/docs/reference/kubectl/jsonpath/" rel="nofollow noreferrer"><code>kubectl get -o jsonpath</code></a> in an example and that also could be a way to extract a value from an in-cluster object without trying to do text processing on YAML.</p>
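<p>As a usage sketch (the secret name <code>my-cert</code> and the data key <code>tls.crt</code> are hypothetical, and this assumes the refactored functions above are sourced into your shell):</p>
<pre class="lang-bash prettyprint-override"><code># md5 of the base64-encoded cert stored under the (hypothetical) data key tls.crt
kubectl get secret my-cert -o jsonpath='{.data.tls\.crt}' | md5cert

# same idea via -p, picking the value out of the YAML line
# (assumes the key appears on exactly one line of the output)
kubectl get secret my-cert -o yaml | md5cert -p 'tls.crt:'
</code></pre>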
|
<p>I want to remove a few environment variables in a container with Kustomize. Is that possible? When I patch, it just adds new values, as you may know.</p>
<p>If it's not possible, can we replace the environment variable name and the secret name/key pair altogether?</p>
<pre><code> containers:
- name: container1
env:
- name: NAMESPACE
valueFrom:
secretKeyRef:
name: x
key: y
</code></pre>
<p>Any help on this will be appreciated! Thanks!</p>
| <p>If you're looking to remove that <code>NAMESPACE</code> variable from the manifest, you can use the special <code>$patch: delete</code> directive to do so.</p>
<p>If I start with this Deployment:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: example
spec:
template:
spec:
containers:
- name: example
image: docker.io/traefik/whoami:latest
env:
- name: ENV_VAR_1
valueFrom:
secretKeyRef:
name: someSecret
key: someKeyName
- name: ENV_VAR_2
value: example-value
</code></pre>
<p>If I write in my <code>kustomization.yaml</code>:</p>
<pre><code>apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- deployment.yaml
patches:
- patch: |
apiVersion: apps/v1
kind: Deployment
metadata:
name: example
spec:
template:
spec:
containers:
- name: example
env:
- name: ENV_VAR_1
$patch: delete
</code></pre>
<p>Then the output of <code>kustomize build</code> is:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: example
spec:
template:
spec:
containers:
- env:
- name: ENV_VAR_2
value: example-value
image: docker.io/traefik/whoami:latest
name: example
</code></pre>
<p>Using a strategic merge patch like this has an advantage over a JSONPatch style patch like Nijat's answer because it doesn't depend on the order in which the environment variables are defined.</p>
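<p>If you want to double-check the result without applying anything, you can render the overlay and grep for the variable (assuming the two files above sit in the current directory):</p>
<pre class="lang-bash prettyprint-override"><code># no output means ENV_VAR_1 was removed from the rendered Deployment
kustomize build . | grep ENV_VAR_1
</code></pre>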
|
<p>I'm having a hard time pulling an image from a private repository. Here's the drill-down:</p>
<p>The pod:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
[...]
spec:
containers:
- image: gitlab.private:31443/cdn/cdndemo/appcdnmanagerui/main
imagePullPolicy: Always
[...]
imagePullSecrets:
- name: gitlab-dc-cdndemo-2
</code></pre>
<p>And the pull secret:</p>
<pre><code>$ base64 -d <(kubectl -n test-cdndemo get secret gitlab-dc-cdndemo-2 -o json | jq -r '.data.".dockerconfigjson"') | jq
{
"auths": {
"https://gitlab.private:31443": {
"username": "gitlab+deploy-token-22",
"password": "EGDLqGKJwBtfYYf9cDFg",
"email": "[email protected]",
"auth": "Z2l0bGFiK2RlcGxveS10b2tlbi0yMjpFR0RMcUdLSndCdGZZWWY5Y0RGZw=="
}
}
}
</code></pre>
<p>It's a textbook example of how it should be done. But when I deploy this, I get:</p>
<pre><code>Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 14m default-scheduler Successfully assigned test-cdndemo/appcdnmanagerui-68c8f8c6dd-qcxr5 to node-waw107
Normal Pulling 13m (x4 over 14m) kubelet Pulling image "gitlab.private:31443/cdn/cdndemo/appcdnmanagerui/main"
Warning Failed 13m (x4 over 14m) kubelet Failed to pull image "gitlab.private:31443/cdn/cdndemo/appcdnmanagerui/main": rpc error: code = Unknown desc = failed to pull and unpack image "gitlab.private:31443/cdn/cdndemo/appcdnmanagerui/main:latest": failed to resolve referen
ce "gitlab.private:31443/cdn/cdndemo/appcdnmanagerui/main:latest": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden
Warning Failed 13m (x4 over 14m) kubelet Error: ErrImagePull
Warning Failed 12m (x6 over 14m) kubelet Error: ImagePullBackOff
Normal BackOff 4m41s (x43 over 14m) kubelet Back-off pulling image "gitlab.private:31443/cdn/cdndemo/appcdnmanagerui/main"
gitlab.private:31443/cdn/cdndemo/appcdnmanagerui/main
</code></pre>
<p>Notice the error: it's 403 Forbidden, not 401 Unauthorized, so the credentials do work. Despite this, the image cannot be pulled from my private repo. But when I pull manually on a worker node, everything goes smoothly:</p>
<pre><code>$ crictl --debug pull --creds gitlab+deploy-token-22:EGDLqGKJwBtfYYf9cDFg gitlab.private:31443/cdn/cdndemo/appcdnmanagerui/main:latest
DEBU[0000] get image connection
DEBU[0000] PullImageRequest: &PullImageRequest{Image:&ImageSpec{Image:gitlab.private:31443/cdn/cdndemo/appcdnmanagerui/main:latest,Annotations:map[string]string{},},Auth:&AuthConfig{Username:gitlab+deploy-token-22,Password:EGDLqGKJwBtfYYf9cDFg,Auth:,ServerAddress:,IdentityToken:,RegistryToken:,},SandboxConfig:nil,}
DEBU[0006] PullImageResponse: &PullImageResponse{ImageRef:sha256:0c3b5d355c164d02aaa7b6cbe91bbfa12bd35826566472317efac63cb467d260,}
Image is up to date for sha256:0c3b5d355c164d02aaa7b6cbe91bbfa12bd35826566472317efac63cb467d260
$ crictl image
IMAGE TAG IMAGE ID SIZE
[...]
gitlab.private:31443/cdn/cdndemo/appcdnmanagerui/main latest 0c3b5d355c164 105MB```
</code></pre>
<p>It also works when I use the auth data instead (as expected):</p>
<pre><code>crictl --debug pull --auth Z2l0bGFiK2RlcGxveS10b2tlbi0yMjpFR0RMcUdLSndCdGZZWWY5Y0RGZw== gitlab.private:31443/cdn/cdndemo/appcdnmanagerui/main:latest
</code></pre>
<p>Now I'm stuck. The only thing that comes to my mind is some kind of weird bug that's in the k3s release I'm using:</p>
<pre><code># k3s -v
k3s version v1.25.4+k3s1 (0dc63334)
go version go1.19.3
# crictl -v
crictl version v1.25.0-k3s1
</code></pre>
<p>The bug hypothesis is based on previous experience with an older k8s deployment, where this approach is used and works. But on a fresh cluster all I get is a 403 error from k3s, even though crictl pulls the image with no problems.</p>
<p>Anyone had this kind of problem before and solved it?</p>
| <p>OK, I've found the problem. Instead of:</p>
<pre><code>{
"auths": {
"https://gitlab.private:31443": {
"username": "gitlab+deploy-token-22",
"password": "EGDLqGKJwBtfYYf9cDFg",
"email": "[email protected]",
"auth": "Z2l0bGFiK2RlcGxveS10b2tlbi0yMjpFR0RMcUdLSndCdGZZWWY5Y0RGZw=="
}
}
}
</code></pre>
<p>It should have been:</p>
<pre><code> {
"auths": {
"gitlab.private:31443": {
"username": "gitlab+deploy-token-22",
"password": "EGDLqGKJwBtfYYf9cDFg",
"email": "[email protected]",
"auth": "Z2l0bGFiK2RlcGxveS10b2tlbi0yMjpFR0RMcUdLSndCdGZZWWY5Y0RGZw=="
}
}
}
</code></pre>
<p>Apparently the documentation at <a href="https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/</a> is a bit misleading, since the example there does contain https:// in the URL:</p>
<pre><code>{
"auths": {
"https://index.docker.io/v1/": {
"auth": "c3R...zE2"
}
}
}
</code></pre>
<p>And yes, my private repo does work over HTTPS connections.</p>
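<p>One way to avoid hand-editing the JSON at all is to let <code>kubectl</code> generate the pull secret, passing the registry host without a scheme (values taken from the question):</p>
<pre class="lang-bash prettyprint-override"><code># recreate the pull secret; --docker-server must be the bare host:port, no https://
kubectl -n test-cdndemo create secret docker-registry gitlab-dc-cdndemo-2 \
  --docker-server=gitlab.private:31443 \
  --docker-username='gitlab+deploy-token-22' \
  --docker-password='EGDLqGKJwBtfYYf9cDFg'
</code></pre>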
|
<p>Previously I was using</p>
<blockquote>
<p>"google.golang.org/api"=>v0.10.0</p>
</blockquote>
<p>in my go.mod file and I was able to import</p>
<blockquote>
<p>gensupport "google.golang.org/api/gensupport"</p>
</blockquote>
<p>But with new</p>
<blockquote>
<p>"google.golang.org/api"=>v0.103.0</p>
</blockquote>
<p>I was unable to import gensupport, and as my colleague and I investigated, gensupport was moved to</p>
<blockquote>
<p>"google.golang.org/api/internal/gensupport"</p>
</blockquote>
<p>but when using this import there is an error saying</p>
<blockquote>
<p>Use of the internal package is not allowed</p>
</blockquote>
<p>So how can I use gensupport with the new "google.golang.org/api" => v0.103.0 version?</p>
| <p>Your only option is to vendor an older version of the library.
This package has been marked for internal use for a long time and was also marked to allow for breakages. See <a href="https://github.com/googleapis/google-api-go-client/issues/416" rel="nofollow noreferrer">https://github.com/googleapis/google-api-go-client/issues/416</a> for options.</p>
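<p>If you just need the old import to keep compiling for now, pinning the module back is a one-liner (this keeps you on an old, unsupported release, so treat it as a stopgap):</p>
<pre class="lang-bash prettyprint-override"><code># pin google.golang.org/api back to the release that still exposed gensupport
go get google.golang.org/api@v0.10.0
go mod tidy
</code></pre>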
|
<p>I've implemented a controller for Custom Resource (CR) A using the <code>controller-runtime</code> lib. CR A depends on resource B, as in:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: CRD.API.Version
kind: A
metadata:
name: a
spec:
bName: b
otherSpecs: ...
</code></pre>
<p>In my controller I add an <code>ownerReference</code> to B instances when A is reconciled, which enables my controller to watch B (via <code>Watches</code>) and trigger a reconciliation of A if its dependency B is updated. Also, when A is deleted, the related <code>ownerReference</code> on B is removed:</p>
<pre class="lang-golang prettyprint-override"><code>func (r *MyReconciler) SetupWithManager(mgr ctrl.Manager) error {
return ctrl.NewControllerManagedBy(mgr).
For(&A{}).
Watches(&source.Kind{Type: &B{}}, &handler.EnqueueRequestForOwner{OwnerType: &A{}, IsController: false}).
Complete(r)
}
func (r *MyReconciler) Reconcile(request ctrl.Request) (ctrl.Result, error) {
a := &A{}
reconciler.Get(ctx, request.NamespacedName, a)
b := &B{}
reconciler.Get(ctx, NamespacedName{Name: a.Spec.bName, Namespace: a.Namespace}, b)
if !a.DeletionTimestamp.IsZero() {
clearOwnerReference(b)
reconciler.Update(ctx, b)
return ...
}
setOwnerReference(b.OwnerReference, a)
reconciler.Update(ctx, b)
// Other reconciliation logic
}
</code></pre>
<p>But the Kubernetes API server acts more quickly: it detects that B is an orphan and deletes it, i.e., garbage collection.</p>
<p>I also tried adding a finalizer to B. But once my controller removes both the <code>ownerReference</code> and the <code>finalizers</code> from B, it's gone, since it has already been marked as deleted.</p>
<p>Is there any other configuration on B that can prevent such cascading deletion? Or any best practice for handling such a dependency?</p>
| <p>You will need to add a finalizer to A, so that when A is deleted, your finalizer logic removes the OwnerReference from B. This way A's deletion does not orphan B in the eyes of the Kubernetes GC, because by the time A is actually deleted (after the finalizer logic has run) it no longer owns anything.</p>
|
<p>Is there a way to define a Persistent Volume in kubernetes which can grow automatically if it runs out of space?<br />
If not, are there any extensions/workarounds, other than over-provisioning storage to protect our applications?</p>
| <p>This is possible with a controller called the <a href="https://github.com/DevOps-Nirvana/Kubernetes-Volume-Autoscaler" rel="nofollow noreferrer">Kubernetes Volume Autoscaler</a>. It requires that your <code>StorageClasses</code> are defined with the <code>allowVolumeExpansion</code> flag enabled and that you have Prometheus installed. It also requires that your disk controller supports hot-expansion and that your cloud provider supports it. See the README in that repo for more information.</p>
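<p>For the <code>StorageClass</code> prerequisite, expansion can usually be switched on with a one-line patch (the class name <code>standard</code> is just an example, and the underlying storage driver must support expansion):</p>
<pre class="lang-bash prettyprint-override"><code># allowVolumeExpansion is one of the few StorageClass fields that can be changed in place
kubectl patch storageclass standard -p '{"allowVolumeExpansion": true}'
</code></pre>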
<p>I can confirm this tool works perfectly on both AWS and Google Cloud's Kubernetes clusters.</p>
<p><strong>Disclaimer</strong>: I authored this, but it is open source and now has a handful of contributors.</p>
|
<p>I am new to Kubernetes and was trying to apply horizontal pod autoscaling to my existing application. After following other Stack Overflow answers, I learned that I need to install metrics-server, and I was able to, but somehow it's not working and is unable to handle requests.</p>
<ul>
<li>I then followed a few more suggestions but was unable to resolve the issue. I will really appreciate any help here.
Please let me know if you need any further details from me :) Thanks in advance.</li>
</ul>
<p>Steps followed:</p>
<p>kubectl apply -f <a href="https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml" rel="noreferrer">https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml</a></p>
<pre><code>serviceaccount/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
service/metrics-server created
deployment.apps/metrics-server created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
</code></pre>
<p>kubectl get deploy,svc -n kube-system | egrep metrics-server</p>
<pre><code>deployment.apps/metrics-server 1/1 1 1 2m6s
service/metrics-server ClusterIP 10.32.0.32 <none> 443/TCP 2m6s
</code></pre>
<p>kubectl get pods -n kube-system | grep metrics-server</p>
<pre><code>metrics-server-64cf6869bd-6gx88 1/1 Running 0 2m39s
</code></pre>
<p>vi ana_hpa.yaml</p>
<pre><code>apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
name: ana-hpa
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: StatefulSet
name: common-services-auth
minReplicas: 1
maxReplicas: 10
metrics:
- type: Resource
resource:
name: cpu
target:
type: Utilization
averageUtilization: 80
- type: Resource
resource:
name: memory
target:
type: Utilization
averageUtilization: 160
</code></pre>
<p>k apply -f ana_hpa.yaml</p>
<p>horizontalpodautoscaler.autoscaling/ana-hpa created</p>
<p>k get hpa</p>
<pre><code>NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
ana-hpa StatefulSet/common-services-auth <unknown>/160%, <unknown>/80% 1 10 0 4s
</code></pre>
<p>k describe hpa ana-hpa</p>
<pre><code>Name: ana-hpa
Namespace: default
Labels: <none>
Annotations: <none>
CreationTimestamp: Tue, 12 Apr 2022 17:01:25 +0530
Reference: StatefulSet/common-services-auth
Metrics: ( current / target )
resource memory on pods (as a percentage of request): <unknown> / 160%
resource cpu on pods (as a percentage of request): <unknown> / 80%
Min replicas: 1
Max replicas: 10
StatefulSet pods: 3 current / 0 desired
Conditions:
Type Status Reason Message
---- ------ ------ -------
AbleToScale True SucceededGetScale the HPA controller was able to get the target's current scale
ScalingActive False FailedGetResourceMetric the HPA was unable to compute the replica count: failed to get memory utilization: unable to get metrics for resource memory: unable to fetch metrics from resource metrics API: the server is currently unable to handle the request (get pods.metrics.k8s.io)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedGetResourceMetric 38s (x8 over 2m23s) horizontal-pod-autoscaler failed to get cpu utilization: unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server is currently unable to handle the request (get pods.metrics.k8s.io)
Warning FailedComputeMetricsReplicas 38s (x8 over 2m23s) horizontal-pod-autoscaler invalid metrics (2 invalid out of 2), first error is: failed to get memory utilization: unable to get metrics for resource memory: unable to fetch metrics from resource metrics API: the server is currently unable to handle the request (get pods.metrics.k8s.io)
Warning FailedGetResourceMetric 23s (x9 over 2m23s) horizontal-pod-autoscaler failed to get memory utilization: unable to get metrics for resource memory: unable to fetch metrics from resource metrics API: the server is currently unable to handle the request (get pods.metrics.k8s.io)
</code></pre>
<p>kubectl get --raw /apis/metrics.k8s.io/v1beta1</p>
<pre><code>Error from server (ServiceUnavailable): the server is currently unable to handle the request
</code></pre>
<p>kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes"</p>
<pre><code>Error from server (ServiceUnavailable): the server is currently unable to handle the request
</code></pre>
<p>kubectl edit deployments.apps -n kube-system metrics-server</p>
<p>Add hostNetwork: true</p>
<pre><code>deployment.apps/metrics-server edited
</code></pre>
<p>kubectl get pods -n kube-system | grep metrics-server</p>
<p>metrics-server-5dc6dbdb8-42hw9 1/1 Running 0 10m</p>
<p>k describe pod metrics-server-5dc6dbdb8-42hw9 -n kube-system</p>
<pre><code>Name: metrics-server-5dc6dbdb8-42hw9
Namespace: kube-system
Priority: 2000000000
Priority Class Name: system-cluster-critical
Node: pusntyn196.apac.avaya.com/10.133.85.196
Start Time: Tue, 12 Apr 2022 17:08:25 +0530
Labels: k8s-app=metrics-server
pod-template-hash=5dc6dbdb8
Annotations: <none>
Status: Running
IP: 10.133.85.196
IPs:
IP: 10.133.85.196
Controlled By: ReplicaSet/metrics-server-5dc6dbdb8
Containers:
metrics-server:
Container ID: containerd://024afb1998dce4c0bd5f4e58f996068ea37982bd501b54fda2ef8d5c1098b4f4
Image: k8s.gcr.io/metrics-server/metrics-server:v0.6.1
Image ID: k8s.gcr.io/metrics-server/metrics-server@sha256:5ddc6458eb95f5c70bd13fdab90cbd7d6ad1066e5b528ad1dcb28b76c5fb2f00
Port: 4443/TCP
Host Port: 4443/TCP
Args:
--cert-dir=/tmp
--secure-port=4443
--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
--kubelet-use-node-status-port
--metric-resolution=15s
State: Running
Started: Tue, 12 Apr 2022 17:08:26 +0530
Ready: True
Restart Count: 0
Requests:
cpu: 100m
memory: 200Mi
Liveness: http-get https://:https/livez delay=0s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get https://:https/readyz delay=20s timeout=1s period=10s #success=1 #failure=3
Environment: <none>
Mounts:
/tmp from tmp-dir (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-g6p4g (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
tmp-dir:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
kube-api-access-g6p4g:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: Burstable
Node-Selectors: kubernetes.io/os=linux
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 2s
node.kubernetes.io/unreachable:NoExecute op=Exists for 2s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 2m31s default-scheduler Successfully assigned kube-system/metrics-server-5dc6dbdb8-42hw9 to pusntyn196.apac.avaya.com
Normal Pulled 2m32s kubelet Container image "k8s.gcr.io/metrics-server/metrics-server:v0.6.1" already present on machine
Normal Created 2m31s kubelet Created container metrics-server
Normal Started 2m31s kubelet Started container metrics-server
</code></pre>
<p>kubectl get --raw /apis/metrics.k8s.io/v1beta1</p>
<pre><code>Error from server (ServiceUnavailable): the server is currently unable to handle the request
</code></pre>
<p>kubectl get pods -n kube-system | grep metrics-server</p>
<pre><code>metrics-server-5dc6dbdb8-42hw9 1/1 Running 0 10m
</code></pre>
<p>kubectl logs -f metrics-server-5dc6dbdb8-42hw9 -n kube-system</p>
<pre><code>E0412 11:43:54.684784 1 configmap_cafile_content.go:242] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
E0412 11:44:27.001010 1 configmap_cafile_content.go:242] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
k logs -f metrics-server-5dc6dbdb8-42hw9 -n kube-system
I0412 11:38:26.447305 1 serving.go:342] Generated self-signed cert (/tmp/apiserver.crt, /tmp/apiserver.key)
I0412 11:38:26.899459 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
I0412 11:38:26.899477 1 shared_informer.go:240] Waiting for caches to sync for RequestHeaderAuthRequestController
I0412 11:38:26.899518 1 configmap_cafile_content.go:201] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
I0412 11:38:26.899545 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0412 11:38:26.899546 1 configmap_cafile_content.go:201] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0412 11:38:26.899567 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0412 11:38:26.900480 1 dynamic_serving_content.go:131] "Starting controller" name="serving-cert::/tmp/apiserver.crt::/tmp/apiserver.key"
I0412 11:38:26.900811 1 secure_serving.go:266] Serving securely on [::]:4443
I0412 11:38:26.900854 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
W0412 11:38:26.900965 1 shared_informer.go:372] The sharedIndexInformer has started, run more than once is not allowed
I0412 11:38:26.999960 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0412 11:38:26.999989 1 shared_informer.go:247] Caches are synced for RequestHeaderAuthRequestController
I0412 11:38:26.999970 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
E0412 11:38:27.000087 1 configmap_cafile_content.go:242] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
E0412 11:38:27.000118 1 configmap_cafile_content.go:242] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
</code></pre>
<p>kubectl top nodes</p>
<pre><code>Error from server (ServiceUnavailable): the server is currently unable to handle the request (get nodes.metrics.k8s.io)
</code></pre>
<p>kubectl top pods</p>
<pre><code>Error from server (ServiceUnavailable): the server is currently unable to handle the request (get pods.metrics.k8s.io)
</code></pre>
<p>Edit metrics server deployment yaml</p>
<pre><code>Add - --kubelet-insecure-tls
</code></pre>
<p>k apply -f metric-server-deployment.yaml</p>
<pre><code>serviceaccount/metrics-server unchanged
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader unchanged
clusterrole.rbac.authorization.k8s.io/system:metrics-server unchanged
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader unchanged
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator unchanged
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server unchanged
service/metrics-server unchanged
deployment.apps/metrics-server configured
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io unchanged
</code></pre>
<p>kubectl get pods -n kube-system | grep metrics-server</p>
<pre><code>metrics-server-5dc6dbdb8-42hw9 1/1 Running 0 10m
</code></pre>
<p>kubectl top pods</p>
<pre><code>Error from server (ServiceUnavailable): the server is currently unable to handle the request (get pods.metrics.k8s.io)
</code></pre>
<p>Also tried by adding below to metrics server deployment</p>
<pre><code> command:
- /metrics-server
- --kubelet-insecure-tls
- --kubelet-preferred-address-types=InternalIP
</code></pre>
| <p>First of all, execute the following command:</p>
<pre><code>kubectl get apiservices
</code></pre>
<p>And check the availability (status) of the <code>kube-system/metrics-server</code> API service.</p>
<ul>
<li><p>In case the availability is <em>True</em>:
Add <code>hostNetwork: true</code> to the <code>spec</code> of your metrics-server deployment by executing the following command:</p>
<pre class="lang-bash prettyprint-override"><code>kubectl edit deployment -n kube-system metrics-server
</code></pre>
<p>It should look like the following:</p>
<pre class="lang-yaml prettyprint-override"><code>...
spec:
hostNetwork: true
...
</code></pre>
<p>Setting hostNetwork to true means that Pod will have access to
the host where it's running.</p>
</li>
<li><p>In case the availability is <em>False (MissingEndpoints)</em>:</p>
<ol>
<li><p>Download metrics-server:</p>
<pre><code>wget https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.5.0/components.yaml
</code></pre>
</li>
<li><p>Remove (legacy) metrics server:</p>
<pre><code>kubectl delete -f components.yaml
</code></pre>
</li>
<li><p>Edit downloaded file and add <code>- --kubelet-insecure-tls</code> to args list:</p>
<pre><code>...
labels:
k8s-app: metrics-server
spec:
containers:
- args:
- --cert-dir=/tmp
- --secure-port=443
- --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
- --kubelet-use-node-status-port
- --metric-resolution=15s
- --kubelet-insecure-tls # add this line
...
</code></pre>
</li>
<li><p>Create service once again:</p>
<pre><code>kubectl apply -f components.yaml
</code></pre>
</li>
</ol>
</li>
</ul>
|
<p>Hope you can help.</p>
<p>I am trying to enable audit logging in a docker container so if anybody <code>kubectl exec</code> to a docker container and runs any commands, then those commands get logged and we can view them in <code>kubectl logs</code> and capture with <code>fluentd</code> for example.</p>
<p>An option with adding the following line to <code>/etc/profile</code> of a container works for <code>root</code> but not for a <code>non-root</code> user as <code>/proc/1/fd/1</code> is owned and writable by only <code>root</code> user and changing ownership or permissions, is not an option, unfortunately.</p>
<pre><code>trap 'echo "$USER":"$BASH_COMMAND" >> /proc/1/fd/1' DEBUG
</code></pre>
<p>So far have tried the following:</p>
<ul>
<li>A working option would be to run the container as a <code>non-root</code>, but unfortunately this is not an option</li>
<li>Option with just changing permissions/ownership doesn't change permissions/ownership</li>
<li>Also adding <code>mesg y</code> to <code>/etc/profile</code> to allow access to root's didn't work either as when doing <code>su - non-root</code> the permission gets denied (<code>mesg: cannot open /dev/pts/2: Permission denied</code>) - cannot change the permission</li>
<li>An option with adding a special file and trying to redirect the logs from there didn't work either, still the permission gets denied. For example:</li>
</ul>
<pre><code>mkfifo -m 666 /tmp/logpipe #create the special file
trap 'echo "$USER":"$BASH_COMMAND" <> /tmp/logpipe > /proc/1/fd/1' DEBUG # in /etc/profile
</code></pre>
<ul>
<li>Changing to <code>trap 'echo "$USER":"$BASH_COMMAND"' DEBUG</code> won't work either as the logs need to go to <code>/proc/1/fd/1</code> in case of docker</li>
</ul>
<p>How would you enable command line audit logging in docker container or workaround the <code>/proc/1/fd/1</code> permission issue for <code>non-root</code> in a container run as <code>root</code> user?</p>
<p>Any ideas highly appreciated.</p>
| <p>Ah, came across my own question :D
So, <a href="https://falco.org/" rel="nofollow noreferrer">Falco</a> can be used as a HIDS (host-based intrusion detection system), which will alert on any unusual activity based on rules defined in the Falco configuration. It can be deployed as a (privileged) DaemonSet or installed directly on the nodes.</p>
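<p>For reference, a minimal sketch of deploying it with Helm (chart, namespace and label names are the upstream defaults and may need adjusting). The stock ruleset includes, for example, a rule that fires when a terminal shell is spawned in a container, which covers the <code>kubectl exec</code> case; alerts go to Falco's stdout, so <code>fluentd</code> can pick them up like any other container log:</p>
<pre><code>helm repo add falcosecurity https://falcosecurity.github.io/charts
helm repo update
# deploy Falco as a privileged DaemonSet on every node
helm install falco falcosecurity/falco --namespace falco --create-namespace
# watch the alerts (e.g. "A shell was spawned in a container ...")
kubectl logs -n falco -l app.kubernetes.io/name=falco -f
</code></pre>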
|
<p>I'm trying to use internal service DNS for service-to-service HTTP communication.</p>
<p>If I try to curl a pod from another deployment's pod it works, but I am unable to use it in the golang net/http service.</p>
<pre><code>2023/01/27 15:48:37 oauth2.go:90: oauth2 url http://hydra-admin.microservices.svc.cluster.local:4445/oauth2/auth/requests/login/accept
2023/01/27 15:48:37 oauth2.go:101: Unable to make http request Put "http://localhost:4445/admin/oauth2/auth/requests/login/accept?login_challenge=b569006c8b834a298cf1cd72e2424953": dial tcp [::1]:4445: connect: connection refused
</code></pre>
<p>hydra-admin.microservices.svc.cluster.local is resolved to localhost when the API call is made</p>
<p>but curl works as you see below</p>
<pre><code>/ # curl -X PUT http://hydra-admin:4445/admin/oauth2/auth/requests/login/accept?login_challenge=6f51146e49c54b739de8a37b25a72349
{"error":"invalid_request","error_description":"The request is missing a required parameter, includes an invalid parameter value, includes a parameter more than once, or is otherwise malformed. Unable to decode body because: EOF"}
</code></pre>
<p>What am I missing here?</p>
| <p>Per my comment, depending on how you are building your <code>go</code> executable will have an effect on how it behaves within a <code>k8s</code> environment. Are you using a <code>scratch</code> image or a <code>CGO_ENABLED=1</code> image?</p>
<p>From the <code>net</code> package <a href="https://pkg.go.dev/net#hdr-Name_Resolution" rel="nofollow noreferrer">docs</a> there's a caveat on DNS behavior:</p>
<blockquote>
<p>By default the pure Go resolver is used, because a blocked DNS request
consumes only a goroutine, while a blocked C call consumes an
operating system thread. When cgo is available, the cgo-based resolver
is used instead under a variety of conditions:</p>
</blockquote>
<blockquote>
<p>... when /etc/resolv.conf or /etc/nsswitch.conf specify the use of features
that the Go resolver does not implement, <em><strong>and when the name being
looked up ends in .local</strong></em> or is an mDNS name.</p>
</blockquote>
<p>So I would suggest - to maximize your success rate for both external & internal DNS requests - building your <code>go</code> executable for <code>k8s</code> like so:</p>
<pre><code>CGO_ENABLED=1 go build -tags netgo
</code></pre>
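<p>If rebuilding isn't convenient, the same docs also describe a runtime switch via <code>GODEBUG</code>; a quick sketch (the binary name is a placeholder, and forcing <code>cgo</code> only has an effect if the binary was built with cgo available):</p>
<pre><code># force the cgo-based resolver (or use netdns=go to force the pure Go one)
GODEBUG=netdns=cgo ./your-service
# append +1 to also print which resolver gets chosen
GODEBUG=netdns=cgo+1 ./your-service
</code></pre>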
|
<p>I am trying to run a nextflow pipeline but the output file is not created.</p>
<p>The main.nf file looks like this:</p>
<pre><code>#!/usr/bin/env nextflow
nextflow.enable.dsl=2
process my_script {
"""
Rscript script.R
"""
}
workflow {
my_script
}
</code></pre>
<p>In my nextflow.config I have:</p>
<pre><code>
process {
executor = 'k8s'
container = 'rocker/r-ver:4.1.3'
}
</code></pre>
<p>The script.R looks like this:</p>
<pre><code> FUN <- readRDS("function.rds");
input = readRDS("input.rds");
output = FUN(
singleCell_data_input = input[[1]], savePath = input[[2]], tmpDirGC = input[[3]]
);
saveRDS(output, "output.rds")
</code></pre>
<p>After running <strong>nextflow run main.nf</strong> the output.rds is not created</p>
| <p>Nextflow processes are run independently and isolated from each other from inside the working directory. For your script to be able to find the required input files, these must be localized inside the process working directory. This should be done by defining an <a href="https://www.nextflow.io/docs/latest/process.html#inputs" rel="nofollow noreferrer"><code>input</code></a> block and declaring the files using the <a href="https://www.nextflow.io/docs/latest/process.html#input-type-path" rel="nofollow noreferrer"><code>path</code></a> qualifier, for example:</p>
<pre><code>params.function_rds = './function.rds'
params.input_rds = './input.rds'
process my_script {
input:
path my_function_rds
path my_input_rds
output:
path "output.rds"
"""
#!/usr/bin/env Rscript
FUN <- readRDS("${my_function_rds}");
input = readRDS("${my_input_rds}");
output = FUN(
singleCell_data_input=input[[1]], savePath=input[[2]], tmpDirGC=input[[3]]
);
saveRDS(output, "output.rds")
"""
}
workflow {
function_rds = file( params.function_rds )
input_rds = file( params.input_rds )
my_script( function_rds, input_rds )
my_script.out.view()
}
</code></pre>
<p>In the same way, the script itself would need to be localized inside the process working directory. To avoid specifying an absolute path to your R script (which would not make your workflow portable at all), it's possible to simply embed your code, making sure to specify the Rscript shebang. This works because process scripts are not limited to Bash<sup><a href="https://www.nextflow.io/docs/latest/process.html#scripts-a-la-carte" rel="nofollow noreferrer">1</a></sup>.</p>
<p>Another way, would be to make your Rscript executable and move it into a directory called <code>bin</code> in the the root directory of your project repository (i.e. the same directory as your 'main.nf' Nextflow script). Nextflow automatically adds this folder to the <code>$PATH</code> environment variable and your script would become automatically accessible to each of your pipeline processes. For this to work, you'd need some way to pass in the input files as command line arguments. For example:</p>
<pre><code>params.function_rds = './function.rds'
params.input_rds = './input.rds'
process my_script {
input:
path my_function_rds
path my_input_rds
output:
path "output.rds"
"""
script.R "${my_function_rds}" "${my_input_rds}" output.rds
"""
}
workflow {
function_rds = file( params.function_rds )
input_rds = file( params.input_rds )
my_script( function_rds, input_rds )
my_script.out.view()
}
</code></pre>
<p>And your R script might look like:</p>
<pre><code>#!/usr/bin/env Rscript
args <- commandArgs(trailingOnly = TRUE)
FUN <- readRDS(args[1]);
input = readRDS(args[2]);
output = FUN(
singleCell_data_input=input[[1]], savePath=input[[2]], tmpDirGC=input[[3]]
);
saveRDS(output, args[3])
</code></pre>
|
<p>The pinniped CLI is not working on Windows.
pinniped-cli-windows-amd64.exe is downloaded, but when I type pinniped, it's not recognized.</p>
<p>C:\Users\hello>pinniped
pinniped is not recognized as a internal command or external command, operable program or batch file.</p>
<p>It seems Windows is not recognizing this .exe file as published by a valid publisher.</p>
<p>pinniped should show the pinniped CLI options and be recognized as a command. I created a folder called pinniped, copied the .exe file into it and tried ... that did work.</p>
| <p>Hi <code>Chai</code>, I have gone through the link and tried installing the pinniped CLI; it threw me the same error. After troubleshooting I found that the pinniped CLI's executable does not get added to the PATH, so you can only run pinniped commands by executing the .exe directly, and every time you need to go to the directory where your pinniped-cli.exe file is present. To resolve this, add the directory containing the pinniped CLI's .exe to your PATH and it will solve your problem; follow this <a href="https://helpdeskgeek.com/how-to/fix-not-recognized-as-an-internal-or-external-command/" rel="nofollow noreferrer">document</a> for more information.</p>
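<p>For example, a minimal sketch from a Command Prompt, assuming you saved the binary as <code>C:\pinniped\pinniped.exe</code> (i.e. you renamed the downloaded <code>pinniped-cli-windows-amd64.exe</code> so the command name matches):</p>
<pre><code>:: append the folder to the user PATH (note: setx can truncate very long PATH values)
setx PATH "%PATH%;C:\pinniped"
:: open a NEW terminal so the updated PATH is picked up, then verify
pinniped --help
</code></pre>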
|
<p>I have this kind of routing in each namespace :</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: my-ingress
annotations:
janitor/expires: ${EXPIRY_DATE}
nginx.ingress.kubernetes.io/ssl-redirect: "false" # Set to true once SSL is set up.
spec:
ingressClassName: nginx
rules:
- host: api.${KUBE_DEPLOY_HOST}
http:
paths:
- pathType: Prefix
path: /
backend:
service:
name: api-js
port:
number: 111
</code></pre>
<p>Served by ingress-nginx (!= nginx-ingress) 1.2.1 (same issue with 1.5.1) with Kube 1.22 (or 1.23), one deployment in the ingress-nginx namespace, two replicas in the deployment.</p>
<p>When I check my logs I see that sometimes, I think especially when I deploy new ingress rules in new namespaces (during and after the ingress-nginx reload event) I get 502 / 503 / 404 HTTP error responses from the ingress-nginx controller.</p>
<p>When I look into the detailed log, I see :</p>
<pre><code>IP - - [time] "GET API_ROUTE HTTP/1.1" 503 592 "master.frontend.url" UA 449 0.000 [development-branch-api] [] - - - - ID
</code></pre>
<p>Which makes me think the request goes wrong because the master frontend is being served a development API response by the ingress-nginx controller, sometimes when the new api service is not even ready.</p>
<p>When I check the ingress from GKE's view it looks like it is serving 3 pods, corresponding to 3 namespaces that should not overlap / mix requests, instead of the one api pod in the namespace corresponding to the ingress :</p>
<p><a href="https://i.stack.imgur.com/TlQ71.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/TlQ71.png" alt="enter image description here" /></a></p>
<p>So the error is seen here: the ingresses of all 3 namespaces serve 3 pods instead of one pod each, which means it is all mixed up, right.</p>
<p>I am sure there is one pod per deployment in my namespaces :</p>
<p><a href="https://i.stack.imgur.com/pMcyG.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/pMcyG.png" alt="enter image description here" /></a></p>
<p>So if I understand correctly, it seems that the situation is ingress A, ingress B and ingress C, all three of them, serve api A AND api B AND api C instead of serving just the one api pod from their namespace (A, B, C).</p>
<p>But what I don't know is how it is possible that the ingress matches pods from other namespaces when I am not using ExternalName; it is the opposite of what an ingress does by default.</p>
<p>I believe the issue is at the ingress level and not at the service level, as when I look into each service, I see that it just serve the one pod corresponding to its namespace and not 3.</p>
<p>The controller is the default ingress-nginx installation edited to use 2 replicas instead of one.</p>
<p>Example service and deployment (issue happens for all of them) :</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: api-js
labels:
component: api-js
role: api-js
annotations:
janitor/expires: ${EXPIRY_DATE}
spec:
type: ClusterIP
selector:
role: perfmaker-api-js
ports:
- name: httpapi
port: 111
targetPort: 111
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: api-js
annotations:
janitor/expires: ${EXPIRY_DATE}
spec:
replicas: 1
strategy:
type: Recreate
selector:
matchLabels:
app: api-js
template:
metadata:
annotations:
cluster-autoscaler.kubernetes.io/safe-to-evict: "false"
labels:
app: api-js
role: api-js
spec:
containers:
- name: api-js
image: registry/api
</code></pre>
<p>When I change the api name / selectors on one branch, it "untangles" the situation and each branch / namespace's ingress only serves the pod it should serve.</p>
<p>But the errors happen during and after 'reload' event on the ingress-controller, not all the time, an event which is fired when ingress resources are added / removed / updated. In my case it is when there is a new branch in the CI/CD which makes a new namespace and deployment + ingress, or when a finished pipeline triggers a namespace deletion.</p>
| <p>Alas I must admit I just discovered the error does not originate from the kubernetes / ingress-nginx part of the setup but from the testing system, which includes a collision between services at deploy time, because of bad separation in the CI / CD job. Sorry for your time !</p>
<p>So in fact the logs from ingress nginx that stunned me :</p>
<pre><code>IP - - [time] "GET API_ROUTE HTTP/1.1" 503 592 "master.frontend.url" UA 449 0.000 [development-branch-api] [] - - - - ID
</code></pre>
<p>It shows that a service I deploy is overwritten by another environment's deployment with different variables, which makes it start making requests to another namespace. The ingress routing is correct.</p>
|
<p>I have the following chartInflator.yml file:</p>
<pre><code>apiVersion: builtin
kind: ChartInflator
metadata:
name: project-helm-inflator
chartName: helm-k8s
chartHome: ../../../helm-k8s/
releaseName: project-monitoring-chart
values: ../../values.yaml
releaseNamespace: project-monitoring-ns
</code></pre>
<p>When I ran it using this, I got the error message below:</p>
<pre><code>$ kustomize build .
Error: loading generator plugins: failed to load generator: plugin HelmChartInflationGenerator.builtin.[noGrp]/project-helm-inflator.[noNs] fails configuration: chart name cannot be empty
</code></pre>
<p>Here is my project structure:</p>
<pre><code>project
- helm-k8s
- values.yml
- Chart.yml
- templates
- base
- project-namespace.yml
- grafana
- grafana-service.yml
- grafana-deployment.yml
- grafana-datasource-config.yml
- prometheus
- prometheus-service.yml
- prometheus-deployment.yml
- prometheus-config.yml
- prometheus-roles.yml
- kustomization.yml
- prod
- kustomization.yml
- test
- kustomization.yml
</code></pre>
| <p>I think you may have found some outdated documentation for the helm chart generator. The canonical documentation for this is <a href="https://github.com/kubernetes-sigs/kustomize/blob/master/examples/chart.md" rel="nofollow noreferrer">here</a>. Reading that implies several changes:</p>
<ol>
<li><p>Include the inflator directly in your <code>kustomization.yaml</code> in the <code>helmCharts</code> section.</p>
</li>
<li><p>Use <code>name</code> instead of <code>chartName</code>.</p>
</li>
<li><p>Set <code>chartHome</code> in the <code>helmGlobals</code> section rather than per-chart.</p>
</li>
</ol>
<p>That gets us something like this in our <code>kustomization.yaml</code>:</p>
<pre><code>apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
helmGlobals:
chartHome: ../../../helm-k8s/
helmCharts:
- name: helm-k8s
releaseName: project-monitoring-chart
values: ../../values.yaml
releaseNamespace: project-monitoring-ns
</code></pre>
<p>I don't know if this will actually work -- you haven't provided a reproducer in your question, and I'm not familiar enough with Helm to whip one up on the spot -- but I will note that your project layout is highly unusual. You appear to be trying to use Kustomize to deploy a Helm chart that <em>contains</em> your kustomize configuration, and it's not clear what the benefit is of this layout vs. just creating a helm chart and then using kustomize to inflate it from <em>outside</em> of the chart templates directory.</p>
<p>You may need to add <code>--load-restrictor LoadRestrictionsNone</code> when calling <code>kustomize build</code> for this to work; by default, the <code>chartHome</code> location must be contained by the same directory that contains your <code>kustomization.yaml</code>.</p>
<hr />
<p><strong>Update</strong></p>
<p>To make sure things are clear, this is what I'm recommending:</p>
<ol>
<li><p>Remove the kustomize bits from your helm chart, so that it looks <a href="https://github.com/larsks/open-electrons-deployments/tree/less-weird-layout/open-electrons-monitoring" rel="nofollow noreferrer">like this</a>.</p>
</li>
<li><p>Publish your helm charts somewhere. I've set up github pages for that repository and published the charts at <a href="http://oddbit.com/open-electrons-deployments/" rel="nofollow noreferrer">http://oddbit.com/open-electrons-deployments/</a>.</p>
</li>
<li><p>Use kustomize to deploy the chart with transformations. Here we add a <code>-prod</code> suffix to all the resources:</p>
<pre><code>apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
helmCharts:
- name: open-electrons-monitoring
repo: http://oddbit.com/open-electrons-deployments/
nameSuffix: -prod
</code></pre>
</li>
</ol>
|
<p>I have web services running in the GKE Kubernetes Engine. I also have monitoring services running in the cloud that are monitoring these services. Everything is working fine....except that I don't know how to access the Prometheus, and Kibana dashboards. I know I can use <strong>port-forward</strong> to temporarily forward a local port and access that way but that cannot scale with more and more engineers using the system. I was thinking of a way to provide access to these dashboards to engineers but not sure what would be the best way.</p>
<ol>
<li>Should I create a load balancer for each of these?</li>
<li>What about security? I only want a few engineers to have access to these systems.</li>
</ol>
<p>There are other considerations as well, would love to get your thoughts.</p>
| <blockquote>
<p>Should I create a load balancer for each of these?</p>
</blockquote>
<p>No, you can, but it's not a good idea.</p>
<blockquote>
<p>What about security? I only want a few engineers to have access to
these systems.</p>
</blockquote>
<p>You can create an account in Kibana and manage access or else you can use the <strong>IAP</strong> <code>(Identity-Aware Proxy)</code> to restrict access. <a href="https://codelabs.developers.google.com/codelabs/user-auth-with-iap#0" rel="nofollow noreferrer">Ref doc</a></p>
<p>You have multiple options. You could use a LoadBalancer per service as you do now, but that's not a good idea.</p>
<p>A good way to expose different applications is using an <strong>ingress</strong>. So if you are running <strong>Prometheus</strong>, <strong>Jaeger</strong>, and <strong>Kibana</strong> in your <strong>GKE</strong> cluster:</p>
<p>You can create different hosts with the <strong>domains prom.example.com</strong>, <strong>tracing.example.com</strong> and <strong>kibana.example.com</strong>, so there will be a <strong>single ingress controller</strong> service of type <strong>LoadBalancer</strong>, and you map its <strong>IP</strong> to <strong>DNS</strong>.</p>
<p><a href="https://cloud.google.com/kubernetes-engine/docs/tutorials/http-balancer#creating_an_ingress_resource" rel="nofollow noreferrer">Ref doc</a></p>
|
<p>I ran the following command:</p>
<pre><code>kubectl create secret tls nexus-tls --cert cert.crt --key privateKey.pem
</code></pre>
<p>where <code>cert.crt</code> contains my certificate and <code>privateKey.pem</code> contains my private key (provisioned using CloudFlare).</p>
<p>I then installed the <code>stable/sonatype-nexus</code> Helm chart with the following configuration:</p>
<pre><code>nexusProxy:
env:
nexusDockerHost: containers.<<NEXUS_HOST>>
nexusHttpHost: nexus.<<NEXUS_HOST>>
nexusBackup:
enabled: true
nexusAdminPassword: <<PASSWORD>>
env:
targetBucket: gs://<<BACKUP_BUCKET_NAME>>
persistence:
storageClass: standard
ingress:
enabled: true
path: /*
annotations:
kubernetes.io/ingress.allow-http: true
kubernetes.io/tls-acme: true
kubernetes.io/ingress.class: gce
kubernetes.io/ingress.global-static-ip-name: <<STATIC_IP_ADDRESS_NAME>>
tls:
enabled: true
secretName: nexus-tls
persistence:
storageClass: standard
storageSize: 1024Gi
resources:
requests:
cpu: 250m
memory: 4800Mi
</code></pre>
<p>by running the command:</p>
<pre><code>helm install -f values.yaml stable/sonatype-nexus
</code></pre>
<p>The possible configuration values for this chart are documented <a href="https://github.com/helm/charts/tree/master/stable/sonatype-nexus#configuration" rel="nofollow noreferrer">here</a>.</p>
<p>When I visit <code>http://nexus.<<NEXUS_HOST>></code>, I am able to access the Nexus Repository. However, when I access <code>https://nexus.<<NEXUS_HOST>></code>, I receive mixed content warnings, because HTTP resources are being served.</p>
<p>If I set the <code>nexusProxy.env.enforceHttps</code> environment variable to true, when I visit <code>https://nexus.<<NEXUS_HOST>></code>, I get a response back which looks like:</p>
<pre><code>HTTP access is disabled. Click here to browse Nexus securely: https://nexus.<<NEXUS_HOST>>.
</code></pre>
<p>How can I ensure that Nexus is served securely? Have I made a configuration error, or does the issue lie elsewhere?</p>
| <p>For legacy reasons I must stand up nexus on GKE. While this question doesn't directly state it is on Google Cloud, the <code>gs://</code> and <code>ingress.class: gce</code> suggest it was; despite the older answer from Xuan Huy being about AWS.</p>
<p>I had a heck of a time getting Nexus TLS to work on GKE, but I finally managed. Google Ingress resources are not the most stable. If you're iterating, they can wedge up and you might find finalizers unable to complete due to getting stuck on L4 ILB cleanup. Things got so screwed up in GCP with just innocent deploy and delete cycles that I had to trash projects and start new ones to test and finally get to a working combination.</p>
<p>My Helm <code>values.yaml</code> has the following. Note I am using Terraform also, so my <code>${variables}</code> are replaced by Terraform with my particular environment settings before running Helm.</p>
<pre><code>service:
type: ClusterIP
annotations:
cloud.google.com/neg: '{"ingress": true}'
cloud.google.com/backend-config: '{"ports": {"8081":"sonatype-backendcfg"}}'
ingress:
ingressClassName: null # on GCP, null this, and use annotations instead
enabled: true
hostPath: / # don't use /* that is suggested multiple places
hostRepo: ${sonatype_dns_name} # public facing FQDN
annotations:
ingress.gcp.kubernetes.io/pre-shared-cert: "${gce_ssl_cert_name}"
kubernetes.io/ingress.class: "gce-internal"
kubernetes.io/ingress.allow-http: "false"
# unrelated hint - I use external-dns for DNS registration
external-dns.alpha.kubernetes.io/hostname: "${sonatype_dns_name}."
tls:
- secretName: "${tls_secret_name}"
hosts:
- "${sonatype_cluster_dns_name}" # the svc.cluster.local FQDN
</code></pre>
<p>Before running Helm, my installer places the TLS certs in the GCE cert store for the ILB to use.</p>
<p>Also before Helm, <code>${tls_secret_name}</code> kubesecret is prepared with the cert in key names <code>tls.crt</code> and <code>tls.key</code> (many other apps use this pattern).</p>
<p>I also have a <code>backendconfig</code> resource:</p>
<pre><code>apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
name: sonatype-backendcfg
namespace: sonatype
spec:
healthCheck:
checkIntervalSec: 30
healthyThreshold: 1
port: 8081
requestPath: /service/rest/v1/status
timeoutSec: 15
type: HTTP
unhealthyThreshold: 10
</code></pre>
<p>The folks at Nexus are not supporting this scenario much longer, so we're working on moving to Harbor so we can cancel our Nexus license.</p>
|
<p>When using Istio with Kubernetes, a number of different manifests require the same environment-specific values. For example, the host address is required by both the Gateway (under <code>spec/servers/hosts</code>) and VirtualService (under <code>spec/hosts)</code>. The typical approach to changing the address for different environments is to apply Kustomize patches. Is it possible to use a single patch to transform/insert the value into each manifest that needs it, or somehow maintain just one copy of the address that gets inserted where needed? The alternative is having to maintain multiple patch files with the same host address, and I would like to avoid duplication.</p>
<pre><code>---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: foo
spec:
selector:
istio: bar
servers:
- port:
number: 443
name: https
protocol: HTTPS
tls:
mode: SIMPLE
credentialName: secret
hosts:
- test.acme.com
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: bazz
spec:
hosts:
- test.acme.com
gateways:
- foo
http:
- match:
- uri:
prefix: /api/v1
route:
- destination:
host: blarg
port:
number: 80
</code></pre>
| <p>This isn't going to be possible with just Kustomize other than by using multiple patches. Because you're looking to change the value in objects of different types, this can't be done with a single patch. So you could do this:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- manifests.yaml
patches:
- target:
kind: Gateway
name: foo
patch: |
- op: replace
path: /spec/servers/0/hosts/0
value: foo.acme.com
- target:
kind: VirtualService
name: bazz
patch: |
- op: replace
path: /spec/hosts/0
value: foo.acme.com
</code></pre>
<p>If you find you need to do this frequently (maybe you have a bunch of similar services), you could move the manifests into a helm chart and then <a href="https://github.com/kubernetes-sigs/kustomize/blob/master/examples/chart.md" rel="nofollow noreferrer">inflate that with kustomize</a>.</p>
|
<p>The database and the server are not connected.
I am attempting to deploy in a Kubernetes environment.</p>
<p>This is the Deployment and Service of MongoDB and of the Golang HTTP server:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: backend
labels:
app: backend
spec:
selector:
matchLabels:
app: backend
template:
metadata:
labels:
app: backend
spec:
containers:
- image: royroyee/backend:0.8
name: backend
ports:
- containerPort: 9001
---
apiVersion: v1
kind: Service
metadata:
name: backend-service
labels:
run: backend-service
spec:
ports:
- port: 9001
targetPort: 9001
protocol: TCP
selector:
app: backend
type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: mongo
labels:
app: mongo
spec:
selector:
matchLabels:
app: mongo
template:
metadata:
labels:
app: mongo
spec:
containers:
- image: mongo
name: mongo-db
ports:
- containerPort: 27017
---
apiVersion: v1
kind: Service
metadata:
name: mongo-service
labels:
run: mongo-service
spec:
ports:
- port: 27017
targetPort: 27017
protocol: TCP
selector:
app: mongo
</code></pre>
<p>and my golang code ...
mongodb session</p>
<pre><code>func getSession() *mgo.Session {
s, err := mgo.Dial("mongodb://mongo-service:27017/mongo-db")
</code></pre>
<p>pls let me know ..</p>
<p>also I tried something like this.</p>
<pre><code>// mongodb://mongo-service:27017/backend
// mongodb://mongo-service:27017/mongo-db
// mongodb://mongo-service:27017
</code></pre>
| <p>To connect MongoDB with Golang in a Kubernetes environment, you need to follow these steps:</p>
<ol>
<li><p>Deploy MongoDB as a statefulset or a deployment in your Kubernetes cluster.</p></li>
<li><p>Create a Service for MongoDB to access the deployed pods from your Golang application.</p></li>
<li><p>In your Golang application, use the official MongoDB Go driver to establish a connection to the MongoDB service by specifying the service name and port.</p></li>
<li><p>Verify the connection by running a simple test that inserts and retrieves data from the MongoDB database.</p></li>
<li><p>Finally, package the Golang application as a Docker image and deploy it as a deployment in the same Kubernetes cluster.</p></li>
</ol>
<p>Here is a sample Go code to connect to MongoDB:</p>
<pre class="lang-golang prettyprint-override"><code>package main
import (
"context"
"fmt"
"log"
"go.mongodb.org/mongo-driver/mongo"
"go.mongodb.org/mongo-driver/mongo/options"
)
func main() {
// Set client options
clientOptions := options.Client().ApplyURI("mongodb://mongodb-service:27017")
// Connect to MongoDB
client, err := mongo.Connect(context.TODO(), clientOptions)
if err != nil {
log.Fatal(err)
}
// Check the connection
err = client.Ping(context.TODO(), nil)
if err != nil {
log.Fatal(err)
}
fmt.Println("Connected to MongoDB!")
}
</code></pre>
<p>Here's a sample YAML file for deploying MongoDB as a StatefulSet and a Go application as a Deployment:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
name: mongodb
spec:
serviceName: mongodb-service
replicas: 1
selector:
matchLabels:
app: mongodb
template:
metadata:
labels:
app: mongodb
spec:
containers:
- name: mongodb
image: mongo:4.4
ports:
- containerPort: 27017
volumeMounts:
- name: mongodb-data
mountPath: /data/db
volumeClaimTemplates:
- metadata:
name: mongodb-data
annotations:
volume.beta.kubernetes.io/storage-class: standard
spec:
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: 1Gi
---
apiVersion: v1
kind: Service
metadata:
name: mongodb-service
spec:
selector:
app: mongodb
ports:
- name: mongodb
port: 27017
targetPort: 27017
clusterIP: None
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: go-app
spec:
replicas: 1
selector:
matchLabels:
app: go-app
template:
metadata:
labels:
app: go-app
spec:
containers:
- name: go-app
image: <your-go-app-image>
ports:
- containerPort: 8080
</code></pre>
<p>Note: You will need to replace <code>your-go-app-image</code> with the actual Docker image of your Go application.</p>
|
<p>I use the Google Cloud Code extension on VSCode.
I have a minikube running on my macbook (using the virtualbox driver).</p>
<p>I can run <code>skaffold debug</code> from my terminal just fine; the Helm chart gets deployed, but I haven't done the debugger setup so the breakpoints don't hit (as expected).</p>
<p>I want to use the Cloud Code extension to avoid manually doing the debugger setup.
However, if I run "debug on Kubernetes" in the Cloud Code extension, I get a prompt saying "Docker was found in the path but does not appear to be running. Start Docker to continue":
<a href="https://i.stack.imgur.com/0HAI0.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/0HAI0.png" alt="VSCode prompt asking me to start docker" /></a></p>
<p>If I select "start Docker", then Docker Desktop will be started, which I want to avoid. It seems to me that Cloud Code needs to do the equivalent of running <code>eval $(minikube -p minikube docker-env)</code> to use the minikube Docker daemon. Is there a setting to get it to do that?</p>
| <p>My workaround: forward the Docker socket from the minikube VM over SSH and point <code>/var/run/docker.sock</code> at it, so tools that expect the default socket (like Cloud Code) talk to minikube's Docker daemon.</p>
<pre><code># check where the default Docker socket currently points
ls -l /var/run/docker.sock
# /var/run/docker.sock -> $HOME/.docker/run/docker.sock

# remove the existing symlink (it points at Docker Desktop's socket)
sudo rm /var/run/docker.sock

# forward minikube's Docker socket to a local unix socket over SSH
ssh -i ~/.minikube/machines/minikube/id_rsa -L $HOME/.minikube/docker.sock:/var/run/docker.sock docker@$(minikube ip) -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null

# point the default socket path at the forwarded socket
sudo ln -s $HOME/.minikube/docker.sock /var/run/docker.sock

# sanity check: ping the daemon through the socket
if curl -s --unix-socket /var/run/docker.sock http/_ping 2>&1 >/dev/null
then
    echo "Running"
else
    echo "Not running"
fi
# Running
|
<p>I want to list the pods that are owned by the resource <code>X</code> from the Kubernetes cluster using Kubebuilder's <code>List(ctx context.Context, list ObjectList, opts ...ListOption)</code> method. <code>ListOptions</code> contains options for limiting or filtering results. Here is the structure of <code>ListOptions</code>:</p>
<pre class="lang-golang prettyprint-override"><code>type ListOptions struct {
// LabelSelector filters results by label. Use labels.Parse() to
// set from raw string form.
LabelSelector labels.Selector
// FieldSelector filters results by a particular field. In order
// to use this with cache-based implementations, restrict usage to
// a single field-value pair that's been added to the indexers.
FieldSelector fields.Selector
// Namespace represents the namespace to list for, or empty for
// non-namespaced objects, or to list across all namespaces.
Namespace string
// Limit specifies the maximum number of results to return from the server. The server may
// not support this field on all resource types, but if it does and more results remain it
// will set the continue field on the returned list object. This field is not supported if watch
// is true in the Raw ListOptions.
Limit int64
// Continue is a token returned by the server that lets a client retrieve chunks of results
// from the server by specifying limit. The server may reject requests for continuation tokens
// it does not recognize and will return a 410 error if the token can no longer be used because
// it has expired. This field is not supported if watch is true in the Raw ListOptions.
Continue string
// Raw represents raw ListOptions, as passed to the API server. Note
// that these may not be respected by all implementations of interface,
// and the LabelSelector, FieldSelector, Limit and Continue fields are ignored.
Raw *metav1.ListOptions
}
</code></pre>
<p>Now, How can I provide the owner information to this <code>ListOptions</code> so the <code>List</code> method will only list the pods that are owned by <code>X</code>?</p>
<p>Here is an example from the KubeBuilder book that shows how to filter results by a particular field,</p>
<pre class="lang-golang prettyprint-override"><code> listOps := &client.ListOptions{
FieldSelector: fields.OneTermEqualSelector(configMapField, configMap.GetName()),
Namespace: configMap.GetNamespace(),
}
err := r.List(context.TODO(), attachedConfigDeployments, listOps)
</code></pre>
| <p>Unfortunately it's not possible to use field selector for every field of a resource. In your case for example, you can only use <a href="https://github.com/kubernetes/kubernetes/blob/9d577d8a29893062dfbd669997396dbd01ab0e47/pkg/apis/core/v1/conversion.go#L33" rel="nofollow noreferrer">these fields</a> as field selector. It's also stated in <a href="https://stackoverflow.com/a/59443446/11764782">this thread</a>.</p>
<p>Alternatively, you can put labels on the pods that are owned by the custom resource and use label selectors. Or you can list all pods and apply a programmatic filter over their owner references to get the pods you need. (I recommend the first approach, since <code>metadata.ownerReferences</code> is an array and the cost is O(n^2).)</p>
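<p>A sketch of both options, assuming <code>x</code> is the owning custom resource, <code>c</code> is a controller-runtime <code>client.Client</code>, and the label key is illustrative (your controller would have to set it on the pods it creates):</p>
<pre class="lang-golang prettyprint-override"><code>import (
	"context"

	corev1 "k8s.io/api/core/v1"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// listOwnedPods returns the pods owned by the custom resource x.
func listOwnedPods(ctx context.Context, c client.Client, x client.Object) ([]corev1.Pod, error) {
	pods := &corev1.PodList{}

	// Option 1 (preferred): server-side filtering via a label that the owning
	// controller puts on every pod it creates.
	// Option 2: drop the MatchingLabels option, list everything in the namespace,
	// and rely solely on the ownerReferences check below.
	if err := c.List(ctx, pods,
		client.InNamespace(x.GetNamespace()),
		client.MatchingLabels{"myapp.example.com/owned-by": x.GetName()},
	); err != nil {
		return nil, err
	}

	// Client-side ownerReferences check (this is the O(n^2)-ish part).
	owned := make([]corev1.Pod, 0, len(pods.Items))
	for _, p := range pods.Items {
		for _, ref := range p.GetOwnerReferences() {
			if ref.UID == x.GetUID() {
				owned = append(owned, p)
				break
			}
		}
	}
	return owned, nil
}
</code></pre>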
|
<p>Having needed several times in the last few days to upload a 1Gb image after some micro change, I can't help but wonder why there isn't a deploy path built into docker and related tech (e.g. k8s) to push <em>just the application files</em> (Dockerfile, docker-compose.yml and app related code) and have it build out the infrastructure <em>from within</em> the (live) docker host?</p>
<p>In other words, why do I have to upload an entire linux machine whenever I change my app code?</p>
<p>Isn't the whole point of Docker that the configs describe a purely deterministic infrastructure output? I can't even see why one would need to upload the whole container image unless they make changes to it manually, outside of Dockerfile, and then wish to upload that modified image. But that seems like bad practice at the very least...</p>
<p>Am I missing something or this just a peculiarity of the system?</p>
| <p>Good question.</p>
<p>Short answer:</p>
<p>Because storage is cheaper than processing power. Building images "live" would also be complex, time-consuming, and unpredictable.</p>
<p>On your Kubernetes cluster, for example, you just want to pull "cached" layers of your image that you know that it works, and you just run it... In seconds instead of compiling binaries and downloading things (as you would specify in your Dockerfile).</p>
<p>About building images:</p>
<p>You don't have to build these images locally, you can use your CI/CD runners and run the <code>docker build</code> and <code>docker push</code> from the pipelines that run when you push your code to a git repository.</p>
<p>And also, if the image is too big you should look into ways of reducing its size by using <a href="https://docs.docker.com/build/building/multi-stage/" rel="nofollow noreferrer">multi-stage building</a>, using lighter/minimal base images, using few layers (for example multiple <code>RUN apt install</code> can be grouped to one <code>apt install</code> command listing multiple packages), and also by using <code>.dockerignore</code> to not ship unnecessary files to your image. And last read more about <a href="https://docs.docker.com/build/cache/" rel="nofollow noreferrer">caching in docker builds</a> as it may reduce the size of the layers you might be pushing when making changes.</p>
<hr />
<p>Long answer:</p>
<p>Think of the Dockerfile as the source code, and the Image as the final binary. I know it's a classic example.</p>
<p>But just consider how long it would take to build/compile the binary every time you want to use it (either by running it, or importing it as a library in a different piece of software). Then consider how nondeterministic it would be to download the dependencies of that software, or to compile them on different machines, every time you run it.</p>
<p>You can take for example Node.js's Dockerfile:
<a href="https://github.com/nodejs/docker-node/blob/main/16/alpine3.16/Dockerfile" rel="nofollow noreferrer">https://github.com/nodejs/docker-node/blob/main/16/alpine3.16/Dockerfile</a></p>
<p>Which is based on Alpine: <a href="https://github.com/alpinelinux/docker-alpine" rel="nofollow noreferrer">https://github.com/alpinelinux/docker-alpine</a></p>
<p>You don't want your application to perform all the operations specified in these files (and their scripts) at runtime before actually starting, as that would be unpredictable, time-consuming, and more complex than it should be (for example, you'd need firewall exceptions for egress traffic from the cluster to the internet to download dependencies that you don't even know will be available).</p>
<p>You would instead just ship an image based on the base image you tested and built your code to run on. That image is built once and sent to the registry, then k8s runs it as a black box, which is predictable and deterministic.</p>
<p>Then about your point of how annoying it is to push huge docker images every time:</p>
<p>You might cut that size down by following some best practices and designing your Dockerfile well, for example:</p>
<ul>
<li>Reduce your layers; for example, pass multiple arguments to a command whenever possible instead of re-running it multiple times.</li>
<li>Use multi-stage building, so you will only push the final image, not the stages you needed to compile and configure your application (see the sketch after this list).</li>
<li>Avoid injecting data into your images; you can pass it to the containers later at runtime.</li>
<li>Order your layers, so you don't have to re-build untouched layers when making changes.</li>
<li>Don't include unnecessary files, and use <code>.dockerignore</code>.</li>
</ul>
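<p>To make the multi-stage point concrete, here is a sketch for a hypothetical Go service (image tags and paths are placeholders; the same idea applies to any compiled app): only the last stage gets pushed and pulled, so the toolchain and sources never leave the build machine.</p>
<pre><code># build stage: full toolchain, never shipped
FROM golang:1.20-alpine AS build
WORKDIR /src
COPY go.mod go.sum ./
RUN go mod download              # cached as long as go.mod/go.sum don't change
COPY . .
RUN CGO_ENABLED=0 go build -o /out/app ./cmd/app

# runtime stage: only the binary ends up in the pushed image
FROM gcr.io/distroless/static
COPY --from=build /out/app /app
ENTRYPOINT ["/app"]
</code></pre>
<p>Combined with ordering the layers so the rarely-changing ones come first, a small code change only re-pushes the last, small layer instead of the whole image.</p>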
<p>And last but not least:</p>
<p>You don't have to push images from your machine, you can do it with CI/CD runners (for example <a href="https://github.com/docker/build-push-action" rel="nofollow noreferrer">build-push Github action</a>), or you can use your cloud provider's "Cloud Build" products (like <a href="https://cloud.google.com/build/docs/building/build-containers" rel="nofollow noreferrer">Cloud Build for GCP</a> and <a href="https://docs.aws.amazon.com/codebuild/latest/userguide/sample-docker.html" rel="nofollow noreferrer">AWS CodeBuild</a>)</p>
|
<p>We are deploying with Ansible scripts to Openshift 3 using <code>oc apply</code>. When we change the template to add more environment variables, we receive a very vague error: <code>"unrecognized type: string"</code> and status code 500.</p>
<p>Setting <code>--loglevel 10</code> leads to no more details:</p>
<pre><code>$ /usr/local/bin/oc_v3.11.715 apply -f \"/tmp/ansible.YtEqVm_deploy/app.yml.json\" -n test-env --loglevel 10 2&> log.log
(several GET to get secret, deploymentconfigs, etc.)
...
I0127 11:49:05.455217 605 request.go:897] Request Body: {xxxxxxxx}
I0127 11:49:05.455280 605 round_trippers.go:386] curl -k -v -XPATCH -H "User-Agent: oc_v3.11.715/v1.11.0+d4cacc0 (linux/amd64) kubernetes/d4cacc0" -H "Authorization: Bearer xxxxxx" -H "Accept: application/json" -H "Content-Type: application/strategic-merge-patch+json" 'https://test-env:8443/apis/apps.openshift.io/v1/namespaces/test-app/deploymentconfigs/app'
I0127 11:49:05.466278 605 round_trippers.go:405] PATCH https://test-env:8443/apis/apps.openshift.io/v1/namespaces/test-env-app/deploymentconfigs/app 500 Internal Server Error in 10 milliseconds
I0127 11:49:05.466287 605 round_trippers.go:411] Response Headers:
I0127 11:49:05.466291 605 round_trippers.go:414] Content-Length: 118
I0127 11:49:05.466294 605 round_trippers.go:414] Date: Fri, 27 Jan 2023 09:49:05 GMT
I0127 11:49:05.466297 605 round_trippers.go:414] Audit-Id: 1d3f3398-14fc-4bfa-854b-6faf9b105680
I0127 11:49:05.466302 605 round_trippers.go:414] Cache-Control: no-store
I0127 11:49:05.466307 605 round_trippers.go:414] Content-Type: application/json
I0127 11:49:05.466321 605 request.go:897] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"unrecognized type: string","code":500}
I0127 11:49:05.466603 605 helpers.go:201] server response object: [{
"kind": "Status",
"apiVersion": "v1",
"metadata": {},
"status": "Failure",
"message": "unrecognized type: string",
"code": 500
}]
F0127 11:49:05.466618 605 helpers.go:119] Error from server: unrecognized type: string
</code></pre>
<p>The request body is like:</p>
<pre class="lang-json prettyprint-override"><code>{
"metadata": {
"annotations": {
"kubectl.kubernetes.io/last-applied-configuration": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
}
},
"spec": {
"template": {
"spec": {
"$setElementOrder/containers": [{
"name": "app"
}],
"containers": [{
"$setElementOrder/env": [{
"name": "OLD_VAR_1"
}, {
"name": "OLD_VAR_2"
}, {
"name": "OLD_VAR_3"
}, {
"name": "OLD_VAR_4"
}, {
"name": "NEW_VAR_1"
}, {
"name": "NEW_VAR_2"
}, {
"name": "NEW_VAR_3"
}],
"dnsPolicy": "ClusterFirst",
"env": [{
"name": "OLD_VAR_4",
"value": false
}, {
"name": "NEW_VAR_1",
"value": 10
}, {
"name": "NEW_VAR_2",
"value": 20
}, {
"name": "NEW_VAR_3",
"value": 6
}],
"name": "app",
"restartPolicy": "Always",
"terminationGracePeriodSeconds": 300
}]
}
}
}
}
</code></pre>
<p><code>OLD_VAR_x</code> are old environment variables; we want to add <code>NEW_VAR_[1-3]</code>. Notice strangely that not all old vars are in <code>env</code>, only <code>OLD_VAR_4</code>, but all new vars are in <code>env</code>.</p>
<p>This also happens when we use <code>oc patch</code> with the same request body. Same error response.</p>
<p>What is wrong?</p>
<p>A workaround is first, deployment, fail, and add new vars in Openshift manually, and deploy in Openshift webconsole on top of the last, failed deployment. It works.</p>
| <p>Solved by quoting the var values in the template. Environment variable values must be strings, but unquoted values such as <code>false</code> or <code>10</code> get serialized as JSON booleans/numbers (as visible in the request body above), which the API server rejects. So quote them like:</p>
<pre><code>- name: NEW_VAR_X
value: "${NEW_VAR_VALUE_X}"
</code></pre>
<p>No errors ever since.</p>
|
<p>Using <code>microk8s</code> installation. An image is tagged, so should use <code>IfNotPresent</code> policy.</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: lh-graphql
  labels:
    app: lh-graphql
spec:
  selector:
    matchLabels:
      app: lh-graphql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: lh-graphql
    spec:
      containers:
      - image: hasura/graphql-engine:v2.13.2.cli-migrations-v3
        name: lh-graphql
        ports:
        - containerPort: 8080
          name: lh-graphql
        env:
        - name: HASURA_GRAPHQL_DATABASE_URL
          value: postgresql://postgres:postgres@$(ORCH_POSTGRES_IP):5432/lh
<p>Image is already pulled to docker:</p>
<pre><code>light@siddhalok:~$ docker image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
postgres 12 3d6880d04326 2 weeks ago 373MB
hasura/graphql-engine v2.13.2.cli-migrations-v3 4cd490369623 2 months ago 570MB
</code></pre>
<p>However, it keeps pulling after a deployment is deleted and created again.</p>
<pre><code>Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 112s default-scheduler Successfully assigned default/lh-graphql-6db75544cf-j65wp to siddhalok
Normal Pulling 112s kubelet Pulling image "hasura/graphql-engine:v2.13.2.cli-migrations-v3"
</code></pre>
<p><strong>UPD:</strong></p>
<p>The same happens when creating from command line:</p>
<pre><code>microk8s kubectl run det2 --image=registry.dev.mpksoft.ru/lighthouse/lh-detector/lh-detector:current --image-pull-policy=IfNotPresent
</code></pre>
<pre><code>REPOSITORY TAG IMAGE ID CREATED SIZE
postgres 12 3d6880d04326 2 weeks ago 373MB
lh-develop.img latest f26c3c667fbe 5 weeks ago 2.82GB
dpage/pgadmin4 latest 4d5afde0a02e 6 weeks ago 361MB
detector latest e6f7e6567b73 7 weeks ago 3.81GB
lh-detetctor.img latest e6f7e6567b73 7 weeks ago 3.81GB
registry.dev.mpksoft.ru/lighthouse/lh-detector/lh-detector current e6f7e6567b73 7 weeks ago 3.81GB
</code></pre>
| <p>Even if you are running <strong>MicroK8s</strong> and Docker on the same machine, the local Docker daemon is not part of the <strong>MicroK8s</strong> Kubernetes cluster, so <strong>MicroK8s</strong> has no knowledge of the images cached by Docker.</p>
<p>You can <strong>export</strong> the image from Docker and import it into the MicroK8s image <strong>cache</strong>:</p>
<pre><code>docker save <image name> > myimage.tar
microk8s ctr image import myimage.tar
</code></pre>
<p>Ref : <a href="https://microk8s.io/docs/registry-images" rel="nofollow noreferrer">doc</a></p>
|
<p>This is my Dockerfile:</p>
<pre class="lang-sh prettyprint-override"><code>#!/bin/bash
FROM node:10.8.0 as build-stage
WORKDIR /app
COPY package*.json /app/
RUN npm install
COPY ./ /app/
ARG configuration=production
RUN npm run build -- --output-path=./dist/out --configuration $configuration
FROM nginx:1.15
#Copy ci-dashboard-dist
COPY --from=build-stage /app/dist/out/ /usr/share/nginx/html
#Copy default nginx configuration
COPY ./nginx-custom.conf /etc/nginx/conf.d/default.conf
</code></pre>
<p>I am getting the error CrashLoopBackOff. When I try to find the log it says <code>exec /usr/sbin/nginx: exec format error</code>. How can I resolve this and get the app running?</p>
| <p>Try adding <code>--platform=linux/amd64</code> to the <code>FROM</code> lines in your <strong>Dockerfile</strong>:</p>
<pre><code>FROM --platform=linux/amd64 node:18-alpine as builder
</code></pre>
<p>Build the image and run it on <strong>K8s</strong>.</p>
<p>It could be due to building the image on a Mac (Apple Silicon) or another ARM machine while your K8s cluster nodes don't support that architecture.</p>
<blockquote>
<p>The optional <strong>--platform</strong> flag can be used to specify the platform
of the image in case FROM references a multi-platform image. For
example, linux/amd64, linux/arm64, or windows/amd64. By default, the
target platform of the build request is used.</p>
</blockquote>
<p><a href="https://The%20optional%20--platform%20flag%20can%20be%20used%20to%20specify%20the%20platform%20of%20the%20image%20in%20case%20FROM%20references%20a%20multi-platform%20image.%20For%20example,%20linux/amd64,%20linux/arm64,%20or%20windows/amd64.%20By%20default,%20the%20target%20platform%20of%20the%20build%20request%20is%20used." rel="nofollow noreferrer">Doc ref</a></p>
|
<p>What I am going to use:</p>
<ul>
<li>Microk8s</li>
<li>istio addon</li>
<li>Metallb addon</li>
<li>Cert manager (if possible)</li>
</ul>
<hr />
<p>With microk8s, I want to deploy several micro services.</p>
<p>I want to set up istio gateway to check whether rest api requests are using https protocol.</p>
<ul>
<li>Each micro service has its own virtual service.</li>
</ul>
<p>If there is no DNS and I can only use a private IP address (e.g. 192.168.2xx.xxx), what do I have to do first? If this approach is not technically possible, please let me know.</p>
<p>(With dns, letsencrypt would be a solution using cert-manager. Is there any options for a private IP address that works like letsencrypt?)</p>
| <blockquote>
<p>Are there any options for a private IP address that works like
letsencrypt? :</p>
</blockquote>
<p>If you are using a private IP address and do not have DNS, you cannot use LetsEncrypt to obtain a SSL certificate. Instead, you will need to use a certificate from a Certificate Authority (CA) that can generate certificates for private IPs. To do this, you will need to generate a Certificate Signing Request (CSR) and submit it to the CA. The CA will then generate a certificate that is signed with its private key and send it back to you. You will then install this certificate on your Istio gateway and use it to check whether requests to your microservices are using HTTPS protocol. Additionally, you will need to ensure that each microservice has its own virtual service to make sure that the requests are routed to the correct microservice. Refer to this <a href="https://www.digitalocean.com/community/tutorials/a-comparison-of-let-s-encrypt-commercial-and-private-certificate-authorities-and-self-signed-ssl-certificates" rel="nofollow noreferrer">doc</a> for more information.</p>
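<p>A minimal sketch of that flow (file names and the IP are placeholders; <code>-addext</code> requires OpenSSL 1.1.1+), ending with the secret an Istio ingress gateway can reference:</p>
<pre><code># key + CSR for the private IP; the SAN must carry an IP entry
openssl req -new -newkey rsa:2048 -nodes \
  -keyout api.key -out api.csr \
  -subj "/CN=192.168.200.10" \
  -addext "subjectAltName = IP:192.168.200.10"

# submit api.csr to your internal CA; once it returns api.crt, store it
# in the namespace of the ingress gateway:
kubectl create -n istio-system secret tls rest-api-cert \
  --cert=api.crt --key=api.key
</code></pre>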
<blockquote>
<p>To set up istio gateway to check whether rest api requests are using
https protocol.</p>
</blockquote>
<p>To set up an <a href="https://istio.io/latest/docs/tasks/traffic-management/ingress/ingress-control/#configuring-ingress-using-a-gateway" rel="nofollow noreferrer">Istio gateway</a> to check whether REST API requests are using the HTTPS protocol, you need to configure a gateway and virtual service in Istio. The gateway should be configured to route traffic on the HTTPS port to the port where your REST API is running. The virtual service should be configured to match requests that have the X-Forwarded-Proto header set to https and route them to the correct service. You can also configure Istio to reject requests that don't have the X-Forwarded-Proto header set to https or that have the X-Forwarded-Proto header set to http. Once you have configured the gateway and virtual service, you should be able to test that requests to your REST API are using the HTTPS protocol.</p>
<p>Refer to this <a href="https://istio.io/latest/docs/tasks/traffic-management/ingress/secure-ingress/#configure-a-tls-ingress-gateway-for-a-single-host" rel="nofollow noreferrer">doc</a> on configuring a TLS ingress gateway.</p>
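<p>A sketch of a Gateway/VirtualService pair along those lines (names, the route prefix and the destination service are placeholders; <code>rest-api-cert</code> is the TLS secret created above, and here plain-HTTP callers are simply redirected to HTTPS via <code>httpsRedirect</code> rather than matched on a header):</p>
<pre><code>apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: rest-api-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
    tls:
      httpsRedirect: true            # plain HTTP gets a 301 to HTTPS
  - port:
      number: 443
      name: https
      protocol: HTTPS
    hosts:
    - "*"
    tls:
      mode: SIMPLE
      credentialName: rest-api-cert  # the kubernetes TLS secret from above
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: my-service
spec:
  hosts:
  - "*"
  gateways:
  - istio-system/rest-api-gateway
  http:
  - match:
    - uri:
        prefix: /api/my-service
    route:
    - destination:
        host: my-service.default.svc.cluster.local
        port:
          number: 80
</code></pre>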
|
<p>I have the following project structure:</p>
<pre><code>project
- helm-k8s
- values.yml
- Chart.yml
- templates
- base
- project-namespace.yml
- grafana
- grafana-service.yml
- grafana-deployment.yml
- grafana-datasource-config.yml
- prometheus
- prometheus-service.yml
- prometheus-deployment.yml
- prometheus-config.yml
- prometheus-roles.yml
- kustomization.yml
- prod
- kustomization.yml
- test
- kustomization.yml
</code></pre>
<p>I'm trying to build my kustomization file using helm like below:</p>
<pre><code>project/helm-k8s/templates/base/$ kubectl kustomize build . --enable-helm -> dummy.yml
</code></pre>
<p>I get an error message like this:</p>
<pre><code>project/helm-k8s/templates/base$ kubectl kustomize . --enable-helm
error: accumulating resources: accumulation err='accumulating resources from 'project-namespace.yml': missing metadata.name in object {{v1 Namespace} {{ } map[name:] map[]}}': must build at directory: '/home/my-user/project/helm-k8s/templates/base/project-namespace.yml': file is not directory
</code></pre>
<p>Is it not possible for kustomize to use the values.yml which is located directly under helm-k8s folder and create the final manifest for my cluster? What am I doing wrong here?</p>
<p>EDIT: Here is how my kustomization.yml looks like:</p>
<pre><code>apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
metadata:
name: open-electrons-monitoring-kustomization
resources:
# 0. Get the namespaces first
- project-namespace.yml
# 1. Set up monitoring services (prometheus)
#- monitoring/prometheus/prometheus-roles.yml
- prometheus/prometheus-config.yml
- prometheus/prometheus-roles.yml
- prometheus/prometheus-deployment.yml
- prometheus/prometheus-service.yml
# 2. Set up monitoring services (grafana)
- grafana/grafana-datasource-config.yml
- grafana/grafana-deployment.yml
- grafana/grafana-service.yml
</code></pre>
| <p>I think you may have misunderstood the use of the <code>--enable-helm</code> parameter. It does not allow kustomize to perform helm-style templating on files, so when you write:</p>
<pre><code>apiVersion: v1
kind: Namespace
metadata:
name: {{ .Values.app.namespace }}
labels:
name: {{ .Values.app.namespace }}
</code></pre>
<p>That doesn't do anything useful. It just generates invalid YAML output.</p>
<hr />
<p>The <code>--enable-helm</code> option allows you to explode Helm charts using Kustomize; see <a href="https://github.com/kubernetes-sigs/kustomize/blob/master/examples/chart.md" rel="nofollow noreferrer">here</a> for the documentation, but for example it allows you to process a <code>kustomization.yaml</code> file like this:</p>
<pre><code>apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
helmCharts:
- name: traefik
repo: https://helm.traefik.io/traefik
includeCRDs: true
releaseName: example
version: 20.8.0
valuesInline:
deployment:
replicas: 3
logs:
access:
enabled: true
</code></pre>
<p>Running <code>kubectl kustomize --enable-helm</code> will cause kustomize to fetch the helm chart and run <code>helm template</code> on it, producing YAML manifests on stdout.</p>
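<p>If the goal is to have kustomize inflate your local <code>helm-k8s</code> chart, a <code>kustomization.yaml</code> along these lines may work. This is only a sketch: the <code>chartHome</code>, values path and release name are assumptions about your layout (a kustomization placed in the project root, next to the chart directory), not something taken from your repo:</p>
<pre><code>apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
helmGlobals:
  chartHome: .               # directory that contains the helm-k8s chart directory
helmCharts:
  - name: helm-k8s           # must match the chart directory and its Chart.yaml name
    releaseName: monitoring  # placeholder release name
    valuesFile: helm-k8s/values.yml
</code></pre>
<p>Note that Helm expects the chart metadata file to be called <code>Chart.yaml</code>, so a <code>Chart.yml</code> file would also need to be renamed for <code>helm template</code> to accept the chart.</p>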
|
<p>I need to get all resources based on a <strong>label</strong>. I used the following code, <strong>which works</strong>. However, it takes <strong>too much time</strong> (~20 sec) to get the response, even when I restrict it to only one namespace (vrf). Any idea what I'm doing wrong here?</p>
<pre><code>resource.NewBuilder(flags).
Unstructured().
ResourceTypes(res...).
NamespaceParam("vrf").AllNamespaces(false).
LabelSelectorParam("a=b").SelectAllParam(selector == "").
Flatten().
Latest().Do().Object()
</code></pre>
<p><a href="https://pkg.go.dev/k8s.io/[email protected]/pkg/resource#Builder" rel="nofollow noreferrer">https://pkg.go.dev/k8s.io/[email protected]/pkg/resource#Builder</a></p>
<p>As I'm already using a <code>label</code> and <code>ns</code>, I'm not sure what else I should do in this case.</p>
<p>I've checked the cluster connection and it seems that everything is ok; regular <code>kubectl</code> commands get a <strong>very fast</strong> response, only this query takes much time.</p>
| <p>The search may be slow due to the sheer number of resources the query has to scan. Have you looked into this possibility and tried to further reduce the result set with one more label or an additional filter on top of the current one?</p>
<p>Also check the performance of your Kubernetes API server while the operation is being performed, and optimize it if needed.</p>
|
<p>I am re-designing a dotnet backend api using the CQRS approach. This question is about how to handle the Query side in the context of a Kubernetes deployment.</p>
<p>I am thinking of using MongoDb as the Query Database. The app is a dotnet webapi app. So what would be the best approach:</p>
<ol>
<li><p>Create a sidecar Pod which containerizes the dotnet app AND the MongoDb together in one pod. Scale as needed.</p>
</li>
<li><p>Containerize the MongoDb in its own pod and deploy one MongoDb pod PER REGION. And then have the dotnet containers use the MongoDb pod within its own region. Scale the MongoDb by region. And the dotnet pod as needed within and between Regions.</p>
</li>
<li><p>Some other approach I haven't thought of</p>
</li>
</ol>
| <p>It has been almost 2 years since I posted this question. Now with 20-20 hindsight I thought I would post my solution. I ended up simply provisioning an Azure Cosmos Db in the region where my cluster lives, and hitting the Cosmos Db for all my query-side requirements.</p>
<p>(My cluster already lives in the Azure Cloud)</p>
<p>I maintain one Postges Db in my original cluster for my write-side requirements. And my app scales nicely in the cluster.</p>
<p>I have not yet needed to deploy clusters to new regions. When that happens, I will provision a replica of the Cosmos Db to that additional region or regions. But still just one postgres db for write-side requirements. Not going to bother to try to maintain/sync replicas of the postgres db.</p>
<p>Additional insight #1. By provisioning the Cosmos Db separately from my cluster (but in the same region), I am taking the load off of my cluster nodes. In effect, the Cosmos Db has its own dedicated compute resources, backup, etc.</p>
<p>Additional insight #2. It is obvious now, but wasn't back then, that tightly coupling a document db (such as MongoDb) to a particular pod is...a bonkers bad idea. Imagine horizontally scaling your app: with each new instance of your app you would instantiate a new document db. You would quickly bloat up your nodes and crash your cluster. One read-side document db per cluster is an efficient and easy way to roll.</p>
<p>Additional insight #3. The read side of any CQRS can get a nice jolt of adrenaline with the help of an in-memory cache like Redis. You can first see if some data is available in the cache before you hit the document db. I use this approach for data such as a checkout cart, where I will leave data in the cache for 24 hours but then let it expire. You could conceivably use redis for all your read-side requirements, but memory could quickly become bloated. So the idea here is to consider deploying an in-memory cache on your cluster -- only one instance of the cache -- and have all your apps hit it for low latency/high availability, but do not use the cache as a replacement for the document db.</p>
|
<p>to start with - I am a sort of newbie to Kubernetes and I might omit some fundamentals.</p>
<p>I have a working containerized app that is orchestrated with docker-compose (and works alright) and I am rewriting it to deploy into Kubernetes. I've converted it to K8s .yaml files via Kompose and modified it to some degree. I am struggling to set up a connection between a Python app and Kafka that are running on separate pods. The Python app constantly returns NoBrokersAvailable() error no matter what I try to apply - it's quite obvious that it cannot connect to a broker. What am I missing? I've defined proper listeners and network policy. I am running it locally on Minikube with local Docker images registry.</p>
<p>The Python app connects to the following address:
<code>KafkaProducer(bootstrap_servers='kafka-service.default.svc.cluster.local:9092')</code></p>
<p>kafka-deployment.yaml (the Dockerfile image is based on confluentinc/cp-kafka:6.2.0 with a topics setup script added to it):</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
kompose.cmd: C:\ProgramData\chocolatey\lib\kubernetes-kompose\tools\kompose.exe convert
kompose.version: 1.27.0 (b0ed6a2c9)
creationTimestamp: null
labels:
io.kompose.service: kafka
name: kafka-app
spec:
replicas: 1
selector:
matchLabels:
io.kompose.service: kafka
strategy: {}
template:
metadata:
annotations:
kompose.cmd: C:\ProgramData\chocolatey\lib\kubernetes-kompose\tools\kompose.exe convert
kompose.version: 1.27.0 (b0ed6a2c9)
creationTimestamp: null
labels:
io.kompose.network/pipeline-network: "true"
io.kompose.service: kafka
spec:
containers:
- env:
- name: KAFKA_LISTENERS
value: "LISTENER_INTERNAL://0.0.0.0:29092,LISTENER_EXTERNAL://0.0.0.0:9092"
- name: KAFKA_ADVERTISED_LISTENERS
value: "LISTENER_INTERNAL://localhost:29092,LISTENER_EXTERNAL://kafka-service.default.svc.cluster.local:9092"
- name: KAFKA_LISTENER_SECURITY_PROTOCOL_MAP
value: "LISTENER_EXTERNAL:PLAINTEXT,LISTENER_INTERNAL:PLAINTEXT"
- name: KAFKA_INTER_BROKER_LISTENER_NAME
value: "LISTENER_INTERNAL"
- name: KAFKA_BROKER_ID
value: "1"
- name: KAFKA_CONFLUENT_BALANCER_TOPIC_REPLICATION_FACTOR
value: "1"
- name: KAFKA_CONFLUENT_LICENSE_TOPIC_REPLICATION_FACTOR
value: "1"
- name: KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS
value: "0"
- name: KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR
value: "1"
- name: KAFKA_TRANSACTION_STATE_LOG_MIN_ISR
value: "1"
- name: KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR
value: "1"
- name: KAFKA_ZOOKEEPER_CONNECT
value: zookeeper:2181
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
image: finnhub-streaming-data-pipeline-kafka:latest
imagePullPolicy: Never
lifecycle:
postStart:
exec:
command: ["/bin/sh","-c","/kafka-setup-k8s.sh"]
name: kafka-app
ports:
- containerPort: 9092
- containerPort: 29092
resources: {}
restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
name: kafka-service
spec:
selector:
app: kafka
ports:
- protocol: TCP
name: firstport
port: 9092
targetPort: 9092
- protocol: TCP
name: secondport
port: 29092
targetPort: 29092
</code></pre>
<p>finnhub-producer.yaml (aka my Python app deployment):</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
kompose.cmd: C:\ProgramData\chocolatey\lib\kubernetes-kompose\tools\kompose.exe convert
kompose.version: 1.27.0 (b0ed6a2c9)
creationTimestamp: null
labels:
io.kompose.service: finnhubproducer
name: finnhubproducer
spec:
replicas: 1
selector:
matchLabels:
io.kompose.service: finnhubproducer
strategy: {}
template:
metadata:
annotations:
kompose.cmd: C:\ProgramData\chocolatey\lib\kubernetes-kompose\tools\kompose.exe convert
kompose.version: 1.27.0 (b0ed6a2c9)
creationTimestamp: null
labels:
io.kompose.network/pipeline-network: "true"
io.kompose.service: finnhubproducer
spec:
containers:
- env:
- name: KAFKA_PORT
value: "9092"
- name: KAFKA_SERVER
value: kafka-service.default.svc.cluster.local
- name: KAFKA_TOPIC_NAME
value: market
image: docker.io/library/finnhub-streaming-data-pipeline-finnhubproducer:latest
imagePullPolicy: Never
name: finnhubproducer
ports:
- containerPort: 8001
resources: {}
restartPolicy: Always
status: {}
---
apiVersion: v1
kind: Service
metadata:
annotations:
kompose.cmd: C:\ProgramData\chocolatey\lib\kubernetes-kompose\tools\kompose.exe convert
kompose.version: 1.27.0 (b0ed6a2c9)
creationTimestamp: null
labels:
io.kompose.service: finnhubproducer
name: finnhubproducer
spec:
ports:
- name: "8001"
port: 8001
targetPort: 8001
selector:
io.kompose.service: finnhubproducer
status:
loadBalancer: {}
</code></pre>
<p>pipeline-network-networkpolicy.yaml:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
creationTimestamp: null
name: pipeline-network
spec:
ingress:
- from:
- podSelector:
matchLabels:
io.kompose.network/pipeline-network: "true"
podSelector:
matchLabels:
io.kompose.network/pipeline-network: "true"
</code></pre>
<p>EDIT:
Dockerfile for Kafka image:</p>
<pre><code>FROM confluentinc/cp-kafka:6.2.0
COPY ./scripts/kafka-setup-k8s.sh /kafka-setup-k8s.sh
</code></pre>
<p>kafka-setup-k8s.sh:</p>
<pre><code>
# blocks until kafka is reachable
kafka-topics --bootstrap-server localhost:29092 --list
echo -e 'Creating kafka topics'
kafka-topics --bootstrap-server localhost:29092 --create --if-not-exists --topic market --replication-factor 1 --partitions 1
echo -e 'Successfully created the following topics:'
kafka-topics --bootstrap-server localhost:29092 --list
</code></pre>
| <p>I have managed to make it work by deleting the Services from the deployment manifests and running <code>kubectl expose deployment kafka-app</code> instead. The issue comes from Kompose labeling.</p>
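<p>For reference, a Service that matches the labels Kompose puts on the pods would look roughly like this (a sketch based on the labels shown in the manifests above):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: kafka-service
spec:
  selector:
    io.kompose.service: kafka   # must match the pod template labels, not "app: kafka"
  ports:
  - name: firstport
    protocol: TCP
    port: 9092
    targetPort: 9092
  - name: secondport
    protocol: TCP
    port: 29092
    targetPort: 29092
</code></pre>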
|
<h3>Context</h3>
<p>I am running a job-processing task (specifically, <a href="https://github.com/resque/resque" rel="nofollow noreferrer">Resque</a>) in a kubernetes setup. This task sets up one or more workers that take job messages off a queue and process them. A typical invocation is to set your desired worker count, e.g. <code>COUNT=8 QUEUE=* resque:work</code>.</p>
<p>Of course, in Kubernetes, I am going to add autoscaling to the Deployment running this task.</p>
<p>There's a prevailing recommendation to run 1 process per pod (see below). My concern is that doing so can be wasteful when the process I wish to run has a built-in multi-process management system to run identical processes. I am trying to understand the theory & docs to inform decisions and experiments.</p>
<p>My motivation question is: is there any reason to continue setting worker count, or does it make more sense to use only one worker process per pod? I.e. is there significant overhead in each pod instance compared to letting Resque spawn multiple processes?</p>
<h3>Question</h3>
<p>The objective question is: where should I expect / look for overhead in running 1 process per pod vs letting 1 pod's main process spawn multiple children?</p>
<p>E.g. IIUC each pod is running its own version of the OS and other utils installed in the container. So that at least is some memory overhead vs running a single container, single OS, mutli-Resque-worker setup; is that correct? What else should I be looking at, prior to simply benchmarking a bunch of guesses, to model resource consumption for this setup?</p>
<h3>More Context</h3>
<p>I understand that small process count allows for more granular scaling. I don't consider scaling at a finer resolution than, say, 4 processes at a time to be much benefit, so I'd start there if pod overhead should be considered. Am I overthinking it, and should I forget about pod overhead and just use a worker count of 1 per pod?</p>
<p>This question is informed off of many "one process per pod" references out there. Many listed in <a href="https://stackoverflow.com/questions/60758848/does-it-make-sense-to-run-multiple-similar-processes-in-a-container">this similar question</a> and a <a href="https://devops.stackexchange.com/questions/447/why-it-is-recommended-to-run-only-one-process-in-a-container">stack exchange question linked therein</a>.</p>
<ul>
<li>The linked question was concerned with scaling processes inside a pod to optimize node compute usage, which I get is well managed by k8s already</li>
<li>The nested links are more about limiting to one <em>concern</em> per pod, which is the case in my question.</li>
</ul>
<p>My question is about overhead of running 4 identical worker processes in 4 pods vs in 1 pod.</p>
| <p>Either way is fine and I wouldn't expect it to make a huge difference, except maybe at large scale.</p>
<p>There is nothing architecturally wrong with running multiple worker tasks inside a single container, particularly within a framework that's specifically designed to do it. As you note it's usually considered good form to run one concern per container, and there are various issues with running multiple processes (recovering from failed processes, collecting logs); but if you have a system that's specifically designed to launch and manage subprocesses, running it in a container is fine.</p>
<p>The questions I'd start asking here are around how many Resque workers you're planning to run at the same time. If it's "thousands", then you can start hitting limits around the number of pods per node and pressure on the Kubernetes scheduler. In that case, using multiple workers per container to cut down the number of pods can make some sense. If it's "dozens" then limiting it to just one could make it a little easier to visualize and manage.</p>
<p>Starting up a new container can be somewhat expensive (I'm used to seeing 30-60s startup times, but depending heavily on the image) but having a running container isn't especially so. It looks like Resque has a manager process on top of some number of workers so you'll have those extra Rubies, but that's probably not significant memory or storage.</p>
|
<p>I cannot pull artifact registry images to a newly created GKE cluster with Terraform and a user-defined service account.</p>
<p>The terraform used to stand up the cluster is below.</p>
<pre><code>locals {
service = "example"
resource_prefix = format("%s-%s", local.service, var.env)
location = format("%s-b", var.gcp_region)
}
resource "google_service_account" "main" {
account_id = format("%s-sa", local.resource_prefix)
display_name = format("%s-sa", local.resource_prefix)
project = var.gcp_project
}
resource "google_container_cluster" "main" {
name = local.resource_prefix
description = format("Cluster primarily servicing the service %s", local.service)
location = local.location
remove_default_node_pool = true
initial_node_count = 1
}
resource "google_container_node_pool" "main" {
name = format("%s-node-pool", local.resource_prefix)
location = local.location
cluster = google_container_cluster.main.name
node_count = var.gke_cluster_node_count
node_config {
preemptible = true
machine_type = var.gke_node_machine_type
# Google recommends custom service accounts that have cloud-platform scope and permissions granted via IAM Roles.
service_account = google_service_account.main.email
oauth_scopes = [
"https://www.googleapis.com/auth/logging.write",
"https://www.googleapis.com/auth/monitoring",
"https://www.googleapis.com/auth/cloud-platform",
"https://www.googleapis.com/auth/devstorage.read_only",
"https://www.googleapis.com/auth/servicecontrol",
"https://www.googleapis.com/auth/service.management.readonly",
"https://www.googleapis.com/auth/trace.append"
]
}
autoscaling {
min_node_count = var.gke_cluster_autoscaling_min_node_count
max_node_count = var.gke_cluster_autoscaling_max_node_count
}
}
</code></pre>
<p>I run a helm deployment to deploy an application and get the following issue.</p>
<pre><code>default php-5996c7fbfd-d6xf5 0/1 ImagePullBackOff 0 37m
</code></pre>
<pre><code>Normal Pulling 36m (x4 over 37m) kubelet Pulling image "europe-docker.pkg.dev/example-999999/eu.gcr.io/example-php-fpm:latest"
Warning Failed 36m (x4 over 37m) kubelet Failed to pull image "europe-docker.pkg.dev/example-999999/eu.gcr.io/example-php-fpm:latest": rpc error: code = Unknown desc = failed to pull and unpack image "europe-docker.pkg.dev/example-999999/eu.gcr.io/example-php-fpm:latest": failed to resolve reference "europe-docker.pkg.dev/example-999999/eu.gcr.io/example-php-fpm:latest": failed to authorize: failed to fetch oauth token: unexpected status: 403 Forbidden
Warning Failed 36m (x4 over 37m) kubelet Error: ErrImagePull
Warning Failed 35m (x6 over 37m) kubelet Error: ImagePullBackOff
</code></pre>
<p>It seems to me that I've missed something to do with the service account. Using cloud ssh I am able to generate an oauth token, but that also does not work using <code>crictl</code>.</p>
<p><strong>UPDATE: issue resolved</strong></p>
<p>I have been able to resolve my problem with the following additional terraform code.</p>
<pre><code>resource "google_project_iam_member" "artifact_role" {
role = "roles/artifactregistry.reader"
member = "serviceAccount:${google_service_account.main.email}"
project = var.gcp_project
}
</code></pre>
| <p>As the error says: <strong>unexpected status: 403 Forbidden</strong></p>
<p>You might be having an issue with the Deployment's <strong>image pull secret</strong>.</p>
<p>For <strong>GKE</strong> you can use the service account <strong>JSON key</strong>.</p>
<p>Ref doc: <a href="https://cloud.google.com/container-registry/docs/advanced-authentication#json-key" rel="nofollow noreferrer">https://cloud.google.com/container-registry/docs/advanced-authentication#json-key</a></p>
<p>With Terraform you can create a <strong>secret</strong> in <strong>GKE</strong>, which you can then reference in the <em>deployment</em>:</p>
<pre><code>resource "kubernetes_secret" "gcr" {
type = "kubernetes.io/dockerconfigjson"
metadata {
name = "gcr-image-pull"
namespace = "default"
}
data = {
".dockerconfigjson" = jsonencode({
auths = {
"gcr.io" = {
username = "_json_key"
password = base64decode(google_service_account_key.myaccount.private_key)
email = google_service_account.main.email
auth = base64encode("_json_key:${ base64decode(google_service_account_key.myaccount.private_key) }")
}
}
})
  }
}
</code></pre>
<p>Or use the <strong>kubectl</strong> to create the <strong>secret</strong></p>
<pre><code>kubectl create secret docker-registry gcr \
--docker-server=gcr.io \
--docker-username=_json_key \
--docker-password="$(cat google-service-account-key.json)" \
--docker-email=<Email address>
</code></pre>
<p>Now, in your Pod or Deployment <strong>YAML</strong> config, you can reference that secret like this:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: uses-private-registry
spec:
containers:
- name: hello-app
image: <image URI>
imagePullSecrets:
- name: secret-that-you-created
</code></pre>
<p><strong>Update:</strong></p>
<p>As per <strong>Guillaume's</strong> suggestion, for <strong>GKE/GCP</strong> you can follow the <em><a href="https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity" rel="nofollow noreferrer">workload identity</a></em> option as a best practice; with other external repositories it might not work.</p>
<p>Create the <strong>IAM service account</strong> in GCP:</p>
<pre><code>gcloud iam service-accounts create gke-workload-indentity \
--project=<project-id>
</code></pre>
<p>Create a service account in the <strong>K8s</strong> cluster:</p>
<pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
annotations:
iam.gke.io/gcp-service-account: [email protected]
name: gke-sa-workload
namespace: default
</code></pre>
<p>For the IAM policy binding, run the below <strong>gcloud</strong> command:</p>
<pre><code>gcloud iam service-accounts add-iam-policy-binding gke-workload-indentity@PROJECT_ID.iam.gserviceaccount.com \
--role roles/iam.workloadIdentityUser \
--member "serviceAccount:PROJECT_ID.svc.id.goog[default/K8s_SANAME]"
</code></pre>
<p>Now you can create the Deployment/Pod with an image from GCR/Artifact Registry; just set the ServiceAccount:</p>
<pre><code>spec:
containers:
- name: container
image: IMAGE
serviceAccountName: gke-sa-workload
</code></pre>
<p>Read more at : <a href="https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/</a></p>
|
<p>to start with - I am a sort of newbie to Kubernetes and I might omit some fundamentals.</p>
<p>I have a working containerized app that is orchestrated with docker-compose (and works alright) and I am rewriting it to deploy into Kubernetes. I've converted it to K8s .yaml files via Kompose and modified it to some degree. I am struggling to set up a connection between a Python app and Kafka that are running on separate pods. The Python app constantly returns NoBrokersAvailable() error no matter what I try to apply - it's quite obvious that it cannot connect to a broker. What am I missing? I've defined proper listeners and network policy. I am running it locally on Minikube with local Docker images registry.</p>
<p>The Python app connects to the following address:
<code>KafkaProducer(bootstrap_servers='kafka-service.default.svc.cluster.local:9092')</code></p>
<p>kafka-deployment.yaml (the Dockerfile image is based on confluentinc/cp-kafka:6.2.0 with a topics setup script added to it):</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
kompose.cmd: C:\ProgramData\chocolatey\lib\kubernetes-kompose\tools\kompose.exe convert
kompose.version: 1.27.0 (b0ed6a2c9)
creationTimestamp: null
labels:
io.kompose.service: kafka
name: kafka-app
spec:
replicas: 1
selector:
matchLabels:
io.kompose.service: kafka
strategy: {}
template:
metadata:
annotations:
kompose.cmd: C:\ProgramData\chocolatey\lib\kubernetes-kompose\tools\kompose.exe convert
kompose.version: 1.27.0 (b0ed6a2c9)
creationTimestamp: null
labels:
io.kompose.network/pipeline-network: "true"
io.kompose.service: kafka
spec:
containers:
- env:
- name: KAFKA_LISTENERS
value: "LISTENER_INTERNAL://0.0.0.0:29092,LISTENER_EXTERNAL://0.0.0.0:9092"
- name: KAFKA_ADVERTISED_LISTENERS
value: "LISTENER_INTERNAL://localhost:29092,LISTENER_EXTERNAL://kafka-service.default.svc.cluster.local:9092"
- name: KAFKA_LISTENER_SECURITY_PROTOCOL_MAP
value: "LISTENER_EXTERNAL:PLAINTEXT,LISTENER_INTERNAL:PLAINTEXT"
- name: KAFKA_INTER_BROKER_LISTENER_NAME
value: "LISTENER_INTERNAL"
- name: KAFKA_BROKER_ID
value: "1"
- name: KAFKA_CONFLUENT_BALANCER_TOPIC_REPLICATION_FACTOR
value: "1"
- name: KAFKA_CONFLUENT_LICENSE_TOPIC_REPLICATION_FACTOR
value: "1"
- name: KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS
value: "0"
- name: KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR
value: "1"
- name: KAFKA_TRANSACTION_STATE_LOG_MIN_ISR
value: "1"
- name: KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR
value: "1"
- name: KAFKA_ZOOKEEPER_CONNECT
value: zookeeper:2181
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
image: finnhub-streaming-data-pipeline-kafka:latest
imagePullPolicy: Never
lifecycle:
postStart:
exec:
command: ["/bin/sh","-c","/kafka-setup-k8s.sh"]
name: kafka-app
ports:
- containerPort: 9092
- containerPort: 29092
resources: {}
restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
name: kafka-service
spec:
selector:
app: kafka
ports:
- protocol: TCP
name: firstport
port: 9092
targetPort: 9092
- protocol: TCP
name: secondport
port: 29092
targetPort: 29092
</code></pre>
<p>finnhub-producer.yaml (aka my Python app deployment):</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
kompose.cmd: C:\ProgramData\chocolatey\lib\kubernetes-kompose\tools\kompose.exe convert
kompose.version: 1.27.0 (b0ed6a2c9)
creationTimestamp: null
labels:
io.kompose.service: finnhubproducer
name: finnhubproducer
spec:
replicas: 1
selector:
matchLabels:
io.kompose.service: finnhubproducer
strategy: {}
template:
metadata:
annotations:
kompose.cmd: C:\ProgramData\chocolatey\lib\kubernetes-kompose\tools\kompose.exe convert
kompose.version: 1.27.0 (b0ed6a2c9)
creationTimestamp: null
labels:
io.kompose.network/pipeline-network: "true"
io.kompose.service: finnhubproducer
spec:
containers:
- env:
- name: KAFKA_PORT
value: "9092"
- name: KAFKA_SERVER
value: kafka-service.default.svc.cluster.local
- name: KAFKA_TOPIC_NAME
value: market
image: docker.io/library/finnhub-streaming-data-pipeline-finnhubproducer:latest
imagePullPolicy: Never
name: finnhubproducer
ports:
- containerPort: 8001
resources: {}
restartPolicy: Always
status: {}
---
apiVersion: v1
kind: Service
metadata:
annotations:
kompose.cmd: C:\ProgramData\chocolatey\lib\kubernetes-kompose\tools\kompose.exe convert
kompose.version: 1.27.0 (b0ed6a2c9)
creationTimestamp: null
labels:
io.kompose.service: finnhubproducer
name: finnhubproducer
spec:
ports:
- name: "8001"
port: 8001
targetPort: 8001
selector:
io.kompose.service: finnhubproducer
status:
loadBalancer: {}
</code></pre>
<p>pipeline-network-networkpolicy.yaml:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
creationTimestamp: null
name: pipeline-network
spec:
ingress:
- from:
- podSelector:
matchLabels:
io.kompose.network/pipeline-network: "true"
podSelector:
matchLabels:
io.kompose.network/pipeline-network: "true"
</code></pre>
<p>EDIT:
Dockerfile for Kafka image:</p>
<pre><code>FROM confluentinc/cp-kafka:6.2.0
COPY ./scripts/kafka-setup-k8s.sh /kafka-setup-k8s.sh
</code></pre>
<p>kafka-setup-k8s.sh:</p>
<pre><code>
# blocks until kafka is reachable
kafka-topics --bootstrap-server localhost:29092 --list
echo -e 'Creating kafka topics'
kafka-topics --bootstrap-server localhost:29092 --create --if-not-exists --topic market --replication-factor 1 --partitions 1
echo -e 'Successfully created the following topics:'
kafka-topics --bootstrap-server localhost:29092 --list
</code></pre>
| <p>Your Service's selector is <code>app: kafka</code>, but the pods created by the Deployment (named <code>kafka-app</code>) carry the label <code>io.kompose.service: kafka</code> instead, so the Service and the pods aren't connected.</p>
<p>I suggest you use Strimzi (or Confluent for Kubernetes if you want to use their images) rather than converting your existing Docker Compose file using Kompose, as it rarely gets network policies correct. In fact, you can probably remove the network labels and the network policy completely, as they aren't really necessary within the same namespace.</p>
<p>Regarding your Python app, you shouldn't need to separately define Kafka host and port; use one variable for <code>KAFKA_BOOTSTRAP_SERVERS</code>, which can accept multiple brokers, including their ports.</p>
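<p>As a sketch (assuming the application is changed to read a single <code>KAFKA_BOOTSTRAP_SERVERS</code> variable), the producer's <code>env</code> section could then be reduced to:</p>
<pre><code>env:
  - name: KAFKA_BOOTSTRAP_SERVERS
    value: kafka-service.default.svc.cluster.local:9092
  - name: KAFKA_TOPIC_NAME
    value: market
</code></pre>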
|
<p>I use <code>kubectl</code> to list Kubernetes custom resources of a kind <code>mykind</code> with an additional table column <code>LABEL</code> that contains the value of a label <code>a.b.c.com/key</code> if present:</p>
<pre><code>kubectl get mykind -o=custom-columns=LABEL:.metadata.labels.'a\.b\.c\.com/key'
</code></pre>
<p>This works, i.e., the label value is properly displayed.</p>
<p>Subsequently, I wanted to add a corresponding additional printer column to the custom resource definition of <code>mykind</code>:</p>
<pre><code>- description: Label value
jsonPath: .metadata.labels.'a\.b\.c\.com/key'
name: LABEL
type: string
</code></pre>
<p>Although the additional column is added to <code>kubectl get mykind</code>, it is empty and no label value is shown (in contrast to the above <code>kubectl</code> command). My only suspicion was problems with escaping the special characters - but no variation helped.</p>
<p>Are you aware of any difference between the JSON path handling in <code>kubectl</code> and additional printer columns? I expected strongly that they are exactly the same.</p>
| <p>mdaniel's comment works!</p>
<pre class="lang-yaml prettyprint-override"><code>- description: Label value
jsonPath: '.metadata.labels.a\.b\.c\.com/key'
name: LABEL
type: string
</code></pre>
<p>You need to use <code>\.</code> instead of <code>.</code> and use single quotes <code>' '</code>. It doesn't work with double quotes, for reasons I don't understand.</p>
|
<p>I am building a Go Kubernetes operator. I have used kubebuilder to create it.
I want to store some internal details in the CRD status. I have tried :</p>
<ul>
<li>To update the whole resource :</li>
</ul>
<pre><code>if err = r.Client.Update(ctx, upCRD); err != nil {
return ctrl.Result{}, client.IgnoreNotFound(err)
}
</code></pre>
<ul>
<li>And to update only the status :</li>
</ul>
<pre><code>if err = r.Status().Update(ctx, upCRD); err != nil {
return reconcile.Result{}, client.IgnoreNotFound(err)
}
</code></pre>
<p>The status struct is defined as follows :</p>
<pre><code>type HAAuditStatus struct {
ChaosStrategyCron cron.EntryID `json:"chaosStrategyCron,omitempty"`
TestStatus TestStatus `json:"testStatus,omitempty"`
MetricStatus MetricStatus `json:"metricStatus,omitempty"`
RoundRobinStrategy RoundRobinStrategy `json:"roundRobinStrategy,omitempty"`
FixedStrategy FixedStrategy `json:"fixedStrategy,omitempty"`
NextChaosDateTime int64 `json:"nextChaosDateTime,omitempty"`
Created bool `json:"created,default=false"`
}
</code></pre>
<p>No error is raised, and the modified spec fields are actually persisted, but not the status fields, whose values remain the defaults at the next reconciling step.
I have looked at other issues on GitHub and StackOverflow, but none of the suggestions solved my issue and I can't figure out what the problem is. For a bigger picture, you can refer to the <a href="https://github.com/PaulBarrie/ha-audit-operator" rel="nofollow noreferrer">repo</a> where the operator is located.</p>
<p>Any suggestion is very welcomed :)</p>
| <p>I might have found the reason why the status was not updated.</p>
<p>Before updating the status, I was also updating the spec fields (to give some feedback to the user on created resources).</p>
<p>The issue is caused by the fact that the spec update triggers a new reconciliation, and the instructions after this update (among them the status update) were not executed.</p>
<p>I realized that using the spec to give feedback to the user is not suitable, and that events are more appropriate for this purpose.</p>
|
<p>Is calling Delete in the Kubernetes go API an idempotent operation, i.e. can it safely be called twice?
If so, is there any documentation defining this property?
The go code just states</p>
<blockquote>
<p>Delete deletes the given obj from Kubernetes cluster.</p>
</blockquote>
<p>Essentially, this statement is what one would expect anyway when looking at the code.</p>
| <p>The API server is based on HTTP, so you can check <a href="https://www.rfc-editor.org/rfc/rfc7231" rel="nofollow noreferrer">RFC 7231</a> about that:</p>
<blockquote>
<p>A request method is considered "idempotent" if the intended effect on the server of multiple identical requests with that method is the same as the effect for a single such request. Of the request methods defined by this specification, PUT, DELETE, and safe request methods are idempotent.</p>
</blockquote>
<p>If you're using <code>kubectl</code>, the delete command will fail on the second run because the resource cannot be found. You can prevent it from failing by using the <code>--ignore-not-found</code> flag.</p>
<pre class="lang-bash prettyprint-override"><code>$ kubectl run nginx --image nginx
pod/nginx created
$ kubectl delete pod nginx
pod "nginx" deleted
$ kubectl delete pod nginx
Error from server (NotFound): pods "nginx" not found
$ kubectl delete pod nginx --ignore-not-found
</code></pre>
<p>So it's idempotent on the server but not on the client.</p>
|
<p>I have a question about labels and names, in this example manifest file:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
labels:
app: nginx
spec:
replicas: 3
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:1.7.9
ports:
- containerPort: 80
</code></pre>
<p>I can see that the <strong>name of the deployment</strong> is "nginx-deployment", and the <strong>pod name</strong> is "nginx"? Or is that the running container?
Then I see in the console that the pods have a hash attached to the end of their names; I believe this is the revision number?
I just want to tell the <strong>names</strong> apart from the <strong>labels</strong> and the <strong>matchLabels</strong>, so that, for example, I can use this service manifest to expose the pods with a certain label:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: nginx
spec:
type: LoadBalancer
selector:
app: nginx
ports:
- protocol: TCP
port: 60000
targetPort: 80
</code></pre>
<p>Will this service expose all pods matching the <strong>selector</strong> <code>app: nginx</code>?</p>
<p>Thanks</p>
| <p>The Deployment has your specified name "nginx-deployment".</p>
<p>However, you did not define a Pod with a fixed name; you defined a template for the pods managed by this deployment.</p>
<p>The Deployment manages 3 pods (because of your replicas: 3), so it will use the template to build these three pods.</p>
<p>There will also be a ReplicaSet with a hash in its name, and this will manage the Pods; this is better seen by following the example linked below.</p>
<p>Since a deployment can manage multiple pods (like your example with 3 replicas), or needs a new Pod when updating them, it will not use the name specified in the template exactly, but will always append a hash value to keep the pod names unique.</p>
<p>But now you would have the problem of getting all Pods load-balanced behind one Kubernetes Service, because they have different names.</p>
<p>This is why you define a label "app: nginx" in your template, so all 3 Pods will have this label regardless of their names and of other labels set by Kubernetes.</p>
<p>The Service uses the selector to find the correct Pods. In your case it will search for them by the label "app: nginx".</p>
<p>So yes, the Service will expose all 3 Pods of your deployment and will load-balance traffic between them.</p>
<p>You can use <code>--show-labels</code> for <code>kubectl get pods</code> to see the name and the assigned labels.</p>
<p>For a more complete example see:
<a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/controllers/deployment/</a></p>
|
<p>During pod startup Kubernetes is creating some environment variables based on services I created (via the downward API?). The problem is that one of them, MY_APPLICATION_PORT, seems to be initialized incorrectly; it looks like:</p>
<pre><code>MY_APPLICATION_PORT=tcp://192.168.0.5:7777
</code></pre>
<p>whereas i expect it to hold only 7777 value. The problem is that i have a Spring Boot application that has this property in application.properties:</p>
<pre><code>my.application.port=7777
</code></pre>
<p>So when spring resolves it's properties, it prefers value from environment variable over one from .properties file, thus overwriting it with incorrect value.</p>
<p>My question is - do you guys know how to control creation of kubernetes env variables? I can overwrite it in my deployment.yaml, but I wonder if there's another way.</p>
<p>EDIT:</p>
<p>I've found this as a closest description of my issue I've seen online:
<a href="https://github.com/kubernetes/kubernetes/issues/65130" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/65130</a></p>
| <p>This environment variable comes from compatibility with a very old Docker feature. You can disable it in Kubernetes by setting <code>enableServiceLinks: false</code> in the <a href="https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#containers" rel="nofollow noreferrer">Pod</a> spec (it is a field of the pod spec itself, a sibling of <code>containers</code>, not a per-container field), anywhere a pod spec may appear. For example:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
spec:
template:
    spec:
      enableServiceLinks: false
      containers:
        - name: ...
          env: [...]
</code></pre>
<p>In particular the syntax is intended to be compatible with the environment variables generated by container links in first-generation Docker networking. Since then Docker has also introduced a DNS system into its core, and in pure Docker using links at all is now considered obsolete. It should be safe to always set this Kubernetes property, especially if it causes conflicts like what you describe here.</p>
|
<p>I have an Azure kubernetes cluster created using the following Terraform code</p>
<pre><code># Required Provider
terraform {
required_providers {
azurerm = {
source = "hashicorp/azurerm"
version = "~> 3.0.2"
}
}
required_version = ">= 1.1.0"
}
data "azurerm_client_config" "current" {}
provider "azurerm" {
subscription_id = local.subscription_id
tenant_id = local.tenant_id
client_id = local.client_id
client_secret = local.client_secret
features {}
}
resource "random_pet" "rg-name" {
prefix = var.resource_group_name_prefix
}
resource "azurerm_resource_group" "rg" {
name = random_pet.rg-name.id
location = var.resource_group_location
}
resource "azurerm_virtual_network" "test" {
name = var.virtual_network_name
location = azurerm_resource_group.rg.location
resource_group_name = azurerm_resource_group.rg.name
address_space = [var.virtual_network_address_prefix]
subnet {
name = var.aks_subnet_name
address_prefix = var.aks_subnet_address_prefix
}
tags = var.tags
}
data "azurerm_subnet" "kubesubnet" {
name = var.aks_subnet_name
virtual_network_name = azurerm_virtual_network.test.name
resource_group_name = azurerm_resource_group.rg.name
depends_on = [azurerm_virtual_network.test]
}
resource "azurerm_kubernetes_cluster" "k8s" {
name = var.aks_name
location = azurerm_resource_group.rg.location
dns_prefix = var.aks_dns_prefix
private_cluster_enabled = var.private_cluster
resource_group_name = azurerm_resource_group.rg.name
http_application_routing_enabled = false
linux_profile {
admin_username = var.vm_user_name
ssh_key {
key_data = file(var.public_ssh_key_path)
}
}
default_node_pool {
name = "agentpool"
node_count = var.aks_agent_count
vm_size = var.aks_agent_vm_size
os_disk_size_gb = var.aks_agent_os_disk_size
vnet_subnet_id = data.azurerm_subnet.kubesubnet.id
}
service_principal {
client_id = local.client_id
client_secret = local.client_secret
}
network_profile {
network_plugin = "azure"
dns_service_ip = var.aks_dns_service_ip
docker_bridge_cidr = var.aks_docker_bridge_cidr
service_cidr = var.aks_service_cidr
load_balancer_sku = "standard"
}
# Enabled the cluster configuration to the Azure kubernets with RBAC
azure_active_directory_role_based_access_control {
managed = var.azure_active_directory_role_based_access_control_managed
admin_group_object_ids = var.active_directory_role_based_access_control_admin_group_object_ids
azure_rbac_enabled = var.azure_rbac_enabled
}
timeouts {
create = "20m"
delete = "20m"
}
depends_on = [data.azurerm_subnet.kubesubnet,module.log_analytics_workspace]
tags = var.tags
}
</code></pre>
<p>It creates the Load Balancer with Public IP as shown below</p>
<p><a href="https://i.stack.imgur.com/jQTPT.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/jQTPT.png" alt="enter image description here" /></a></p>
<p>However, I don't want to have a public IP for the Load balancer instead it should have the Internal Private IP.</p>
<p><a href="https://i.stack.imgur.com/RV9I8.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/RV9I8.png" alt="enter image description here" /></a></p>
<p>What should I to do have this load balancer with Internal Private IP and Service is not exposed over the Internet using the Public IP?</p>
<p><strong>Note:</strong> As per the <a href="https://learn.microsoft.com/en-us/azure/aks/internal-lb#:%7E:text=internal%2Dapp%20%20%20LoadBalancer%20%20%2010.0.248.59%20%20%2010.240.0.7%20%20%20%2080%3A30555/TCP%20%20%202m" rel="nofollow noreferrer">Microsoft documentation</a>, even if you annotate with <strong>annotations: service.beta.kubernetes.io/azure-load-balancer-internal: "true"</strong>, an external IP is still assigned, which I am trying to avoid.</p>
| <p>The load balancer that gets created with the AKS cluster (usually called kubernetes) is used for egress (not ingress) traffic and is a public LB, and it cannot be private. This is part of the outbound type configuration.</p>
<p>The "outbound type" of the AKS cluster can be set to "LoadBalancer, UserDefinedRouting or managedNatGateway". if you choose any option other than LB, then you would need to configure your network to route the traffic externally. check <a href="https://learn.microsoft.com/en-us/azure/aks/egress-outboundtype" rel="nofollow noreferrer">this doc</a> for more info.</p>
<p>For ingress traffic, you have the choice to use public or private LB. This is configured in the service resource (of type LoadBalancer) under kubernetes, where you would use the annotation that you mentioned to create a private LB. Public rules will use the same public LB created with the cluster.</p>
<p>You can set the private IP of the LB using annotations as well:</p>
<pre><code> annotations:
service.beta.kubernetes.io/azure-load-balancer-ipv4: 10.240.0.25
service.beta.kubernetes.io/azure-load-balancer-internal: "true"
</code></pre>
<p>This is mentioned in the <a href="https://learn.microsoft.com/en-us/azure/aks/internal-lb" rel="nofollow noreferrer">same doc</a> that you shared.</p>
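<p>For completeness, a minimal internal <code>LoadBalancer</code> Service using those annotations could look like this (the name, selector, ports and IP are placeholders, not taken from your setup):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: internal-app
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
    service.beta.kubernetes.io/azure-load-balancer-ipv4: 10.240.0.25   # optional static private IP from the subnet
spec:
  type: LoadBalancer
  selector:
    app: internal-app
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
</code></pre>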
|
<p>I'm trying to set up an RKE cluster. The connection to the nodes goes well, but when it starts to check etcd health it returns:</p>
<pre><code>failed to check etcd health: failed to get /health for host [xx.xxx.x.xxx]: Get "https://xx.xxx.x.xxx:2379/health": remote error: tls: bad certificate
</code></pre>
| <p>If you are trying to upgrade RKE and facing this issue, it could be due to the <code>kube_config_<file>.yml</code> file missing from the local directory when you perform <code>rke up</code>.</p>
<p>A similar kind of issue was reported and reproduced in this <a href="https://github.com/rancher/rke/issues/1244#issuecomment-481644616" rel="nofollow noreferrer">git link</a>. Refer to the workaround, reproduce it by using the steps provided in the link, and let me know if this works.</p>
<p>Refer to this latest <a href="https://stackoverflow.com/questions/71603038/rancher-rke-up-errors-on-etcd-host-health-checks-remote-error-tls-bad-certific">SO</a> and <a href="https://www.claudiokuenzler.com/blog/1043/how-to-solve-kubernetes-upgrade-rancher-2-failing-error-bad-certificate" rel="nofollow noreferrer">doc</a> for more information.</p>
|
<p>Has anyone had success implementing a preStop hook with distroless/java17:nonroot? Here is my (default) deployment:</p>
<pre class="lang-java prettyprint-override"><code># [...]
lifecycle:
preStop:
exec:
command: ["sh", "-c", "sleep 10"]
securityContext:
readOnlyRootFilesystem: true
allowPrivilegeEscalation: false
privileged: false
runAsUser: 65532
capabilities:
drop:
- ALL
volumeMounts:
- name: tmp-volume
mountPath: /tmp
</code></pre>
| <p>Solved by using an actuator endpoint instead of an exec hook (distroless images do not include a shell, so the <code>sh -c sleep</code> command cannot run):</p>
<pre class="lang-java prettyprint-override"><code>@Slf4j
@Component
@ControllerEndpoint(id = "preStopHook")
class WebMvcPreStopHookEndpoint {
@ResponseStatus(OK)
@GetMapping("/{delayInMillis}")
public ResponseEntity<Void> preStopHook(@PathVariable("delayInMillis") final long delayInMillis)
throws InterruptedException {
log.info("[preStopHook] received signal to sleep for {}ms", delayInMillis);
Thread.sleep(delayInMillis);
return null;
}
}
</code></pre>
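<p>The container's <code>lifecycle</code> block can then call this endpoint over HTTP instead of spawning a shell. A sketch (the management port and the default <code>/actuator</code> base path are assumptions about your configuration):</p>
<pre class="lang-yaml prettyprint-override"><code>lifecycle:
  preStop:
    httpGet:
      path: /actuator/preStopHook/10000   # delay in milliseconds, matching the endpoint above
      port: 8080                          # assumed HTTP/management port of the container
</code></pre>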
|
<p>I am seeing a very strange issue trying to start the official <code>postgres:14.6-alpine</code> image on Kubernetes.</p>
<p>For reference the official postgres image allows for configuring the initialization script using the <code>POSTGRES_USER</code>, <code>POSTGRES_PASSWORD</code>, and <code>POSTGRES_DB</code> environment variables.</p>
<p>I have the following secret and configmap defined:</p>
<pre class="lang-yaml prettyprint-override"><code>---
apiVersion: v1
kind: Secret
metadata:
namespace: default
name: postgres-credentials
data:
DATABASE_URL: cG9zdGdyZXM6Ly9sZXRzY2h1cmNoOnBhc3N3b3JkQHBvc3RncmVzOjU0MzIvbGV0c2NodXJjaA==
POSTGRES_USER: bGV0c2NodXJjaA==
POSTGRES_PASSWORD: cGFzc3dvcmQ=
---
apiVersion: v1
kind: ConfigMap
metadata:
namespace: default
name: postgres-config
data:
POSTGRES_DB: letschurch
</code></pre>
<p>The value <code>POSTGRES_USER</code> value of <code>bGV0c2NodXJjaA==</code> decodes to <code>letschurch</code> and the <code>POSTGRES_PASSWORD</code> value of <code>cGFzc3dvcmQ=</code> decodes to <code>password</code>.</p>
<p>I also have the following deployment:</p>
<pre class="lang-yaml prettyprint-override"><code>---
apiVersion: apps/v1
kind: Deployment
metadata:
name: postgres
labels:
app: postgres
spec:
replicas: 1
selector:
matchLabels:
app: postgres
template:
metadata:
labels:
app: postgres
spec:
restartPolicy: Always
containers:
- image: postgres:14.6-alpine
name: postgres
ports:
- containerPort: 5432
envFrom:
- configMapRef:
name: postgres-config
- secretRef:
name: postgres-credentials
</code></pre>
<p>When I shell into the running container, I can echo out the environment variables, and they appear to be intact:</p>
<pre><code>postgres-74f67b778-lsv4c:/# echo $POSTGRES_USER
letschurch
postgres-74f67b778-lsv4c:/# echo $POSTGRES_PASSWORD
password
postgres-74f67b778-lsv4c:/# echo $POSTGRES_DB
letschurch
postgres-74f67b778-lsv4c:/# echo -n $POSTGRES_USER | wc -c
10
postgres-74f67b778-lsv4c:/# echo -n $POSTGRES_PASSWORD | wc -c
8
postgres-74f67b778-lsv4c:/# echo -n $POSTGRES_DB | wc -c
10
postgres-74f67b778-lsv4c:/# [ "$POSTGRES_USER" = "$POSTGRES_DB" ] && echo 'good!'
good!
</code></pre>
<p>However, I am not able to connect with the role <code>letschurch</code>. I can connect as <code>temporal</code> (another role I have set up with an init script), and when I run <code>\l</code> and <code>\du</code> I see that the role (but not the database name) has a <code>+</code> appended:</p>
<pre><code>temporal=> \l
List of databases
Name | Owner | Encoding | Collate | Ctype | Access privileges
---------------------+------------+----------+------------+------------+--------------------------
letschurch | letschurch+| UTF8 | en_US.utf8 | en_US.utf8 |
temporal=> \du
List of roles
Role name | Attributes | Member of
------------+------------------------------------------------------------+-----------
letschurch+| Superuser, Create role, Create DB, Replication, Bypass RLS | {}
| |
temporal | | {}
</code></pre>
<p>At first I thought that the base64-encoded <code>POSTGRES_USER</code> environment variable might have some whitespace or something encoded in it, so I double checked that I was encoding the value properly with <code>echo -n letschurch | base64</code>, and as you can see in the shell output above the resulting value is exactly 10 characters long, no extra whitespace. Also, the <code>POSTGRES_USER</code> and <code>POSTGRES_DB</code> environment variables are equal, but they appear to result in different outcomes in postgres.</p>
<p>Also, this does not happen with <code>docker-compose</code>. Given the following configuration, everything works as expected:</p>
<pre class="lang-yaml prettyprint-override"><code> postgres:
image: postgres:14.6-alpine
environment:
POSTGRES_USER: letschurch
POSTGRES_PASSWORD: password
POSTGRES_DB: letschurch
ports:
- '5432:5432'
</code></pre>
<p>What am I missing here? Why does the <code>letschurch</code> role get a <code>+</code> appended to it? Is there something to do with <code>secret</code>s that doesn't apply to <code>configMap</code>s that I'm missing?</p>
| <p>I think this had to do with a specific setting I had enabled for Docker Desktop for Mac:</p>
<p><a href="https://i.stack.imgur.com/7f60o.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7f60o.png" alt="Docker Desktop Settings" /></a></p>
<p>I had changed from the default value (<code>gRPC FUSE</code>) to <code>VirtioFS</code>. When I changed it back to <code>gRPC FUSE</code> a number of issues resolved themselves, this one included.</p>
|
<p>Almost two years later, we are experiencing the same issue as described in <a href="https://stackoverflow.com/questions/67175908/gke-fails-to-mount-volumes-to-deployments-pods-timed-out-waiting-for-the-condit">this SO post</a>.</p>
<p>Our workloads had been working without any disruption since 2018, and they suddenly stopped because we had to renew certificates. Then we've not been able to start the workloads again... The failure is caused by the fact that pods try to mount a persistent disk via NFS, and the
<code>nfs-server</code> pod (based on <code>gcr.io/google_containers/volume-nfs:0.8</code>) can't mount the persistent disk.</p>
<p>We have upgraded from 1.23 to 1.25.5-gke.2000 (experimenting with a few intermediate versions along the way) and hence have also switched to <code>containerd</code>.</p>
<p>We have recreated everything multiple times with slight variations, but no luck. Pods definitely cannot access any persistent disk.</p>
<p>We've checked basic things such as: the persistent disks and cluster are in the same zone as the GKE cluster, the service account used by the pods has the necessary permissions to access the disk, etc.</p>
<p>No logs are visible on each pod, which is also strange since logging seems to be correctly configured.</p>
<p>Here is the <code>nfs-server.yaml</code>:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
creationTimestamp: null
labels:
role: nfs-server
name: nfs-server
spec:
replicas: 1
selector:
matchLabels:
role: nfs-server
strategy:
rollingUpdate:
maxSurge: 1
maxUnavailable: 1
type: RollingUpdate
template:
metadata:
labels:
role: nfs-server
spec:
containers:
- image: gcr.io/google_containers/volume-nfs:0.8
imagePullPolicy: IfNotPresent
name: nfs-server
ports:
- containerPort: 2049
name: nfs
protocol: TCP
- containerPort: 20048
name: mountd
protocol: TCP
- containerPort: 111
name: rpcbind
protocol: TCP
resources: {}
securityContext:
privileged: true
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /exports
name: webapp-disk
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
volumes:
- gcePersistentDisk:
fsType: ext4
pdName: webapp-data-disk
name: webapp-disk
status: {}
</code></pre>
| <p>OK, fixed. I had to enable the Compute Engine persistent disk CSI driver on our legacy cluster, <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/persistent-volumes/gce-pd-csi-driver" rel="nofollow noreferrer">as described here</a>...</p>
|
<p>Update:</p>
<p>What I am trying to achieve: I have the IP address of a cluster resource (10.1.239.128 in the examples below) and want to figure out to which POD/service/... it belongs.</p>
<hr />
<p>I set up a Kubernetes cluster using Microk8s running, among others, vault and cert-manager via Helm charts. In two of the PODs created for vault and cert-manager, I repeatedly get the following log entry:</p>
<pre><code>> kubectl logs pod/cert-manager-webhook-6787f645b9-s5fxl
http: TLS handshake error from 10.1.239.128:37763: EOF
</code></pre>
<pre><code>> kubectl logs pod/release-name-vault-0
http: TLS handshake error from 10.1.239.128:48609: remote error: tls: unknown certificate authority
</code></pre>
<p>I am struggling with finding out where these requests come from.</p>
<p>I tried:</p>
<ul>
<li><code>kubectl get svc --all-namespaces</code></li>
<li><code>kubectl get endpoints --all-namespaces</code></li>
<li><code>kubectl get endpointslices --all-namespaces</code></li>
<li><code>ping -a 10.1.239.128</code> from a POD created using <a href="https://github.com/nicolaka/netshoot" rel="nofollow noreferrer">netshoot</a></li>
</ul>
<p>to figure out where the requests are coming from, but no luck. The IP address is not listed, nor can it be resolved to a DNS name to investigate further.</p>
<p>Any further ideas of how to get the resource issuing the request or further debug this? Thanks in advance.</p>
| <p>Here is a workaround, not the finest way of doing it, but it may help to get in the right direction:</p>
<pre><code>kubectl api-resources -oname |while read r;
do
echo -n "$r ----> ";
kubectl get $r -A -o yaml |grep -oPz '\d+\.\d+\.\d+\.\d+';
echo "" ;
done
</code></pre>
<p>This assumes the IP you are searching for belongs to a cluster resource.</p>
|
<p>It seems that excluding logs in a pod using the configuration below does not work.</p>
<p>extrascrapeconfig.yaml:</p>
<pre><code> - job_name: kubernetes-pods-app
pipeline_stages:
- docker: {}
kubernetes_sd_configs:
- role: pod
relabel_configs:
- action: drop
regex: .+
source_labels:
- __meta_kubernetes_pod_label_name
###
- action: keep
regex: ambassador
source_labels:
- __meta_kubernetes_namespace
- __meta_kubernetes_pod_namespace
###
</code></pre>
<h2>To Reproduce</h2>
<p>Steps to reproduce the behavior:</p>
<p>Deployed helm loki-stack :</p>
<pre><code>helm install loki grafana/loki-stack --version "${HELM_CHART_VERSION}" \
--namespace=monitoring \
--create-namespace \
-f "loki-stack-values-v${HELM_CHART_VERSION}.yaml"
</code></pre>
<p><strong>loki-stack-values-v2.4.1.yaml:</strong></p>
<pre><code>loki:
enabled: true
config:
promtail:
enabled: true
extraScrapeConfigs: extrascrapeconfig.yaml
fluent-bit:
enabled: false
grafana:
enabled: false
prometheus:
enabled: false
</code></pre>
<p>Attach grafana to loki datasource</p>
<p>Query: {namespace="kube-system"} in Grafana Loki</p>
<p><strong>RESULT:</strong></p>
<p><strong>See logs</strong></p>
<p><strong>Expected behavior:</strong></p>
<p>Not seeing any logs</p>
<p>Environment:</p>
<p>Infrastructure: Kubernetes<br />
Deployment tool: Helm</p>
<p>What am I missing?</p>
| <p>If you need Helm to pick up a specific file and pass it as a value, you should not pass the value itself in the values YAML file, but via another flag when installing or upgrading the release.</p>
<p>The command you are using is just applying the Helm values as-is, since the <code>-f</code> flag does not support parsing other files into the values by itself. Instead, use <code>--set-file</code>, which works similarly to <code>--set</code>, but gets the value content from the passed file.</p>
<p>Your command would now look like this:</p>
<pre><code> helm install loki grafana/loki-stack --version "${HELM_CHART_VERSION}" \
--namespace=monitoring \
--create-namespace \
-f "loki-stack-values-v${HELM_CHART_VERSION}.yaml" \
--set-file promtail.extraScrapeConfigs=extrascrapeconfig.yaml
</code></pre>
|
<p>In my namespace, I have several pods named with the same prefix, followed by a random string. There are also other pods, named differently. The result of <code>kubectl get pods</code> would look something like this:</p>
<pre><code>service-job-12345abc
service-job-abc54321
other-job-54321cba
</code></pre>
<p>I need to find the name of the most recently created pod starting with "service-job-".</p>
<p>I found <a href="https://stackoverflow.com/questions/51829475/get-name-of-most-recently-created-pod">this thread</a>, which helps getting the name of the most recent pod in general. <a href="https://stackoverflow.com/questions/70282381/kubectl-get-pods-how-to-filter-pods-by-partial-name">This one</a> gets me the complete names of pods starting with a specific prefix.</p>
<p>What I struggle with is combining these two methods. With each one, I seem to lose the information I need to perform the other one.</p>
<p>Note: I am not an administrator of the cluster, so I cannot change anything about the naming etc. of the pods. The pods could also be in any possible state.</p>
| <p>This works as you expect (<code>--sort-by</code> sorts in ascending order, so the most recently created pod is the last line, hence <code>tail</code>):</p>
<pre><code>kubectl get pods --sort-by=.metadata.creationTimestamp --template '{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}' | grep service-job- | tail -n 1
</code></pre>
|
<p>I cannot pull artifact registry images to a newly created GKE cluster with Terraform and a user-defined service account.</p>
<p>The terraform used to stand up the cluster is below.</p>
<pre><code>locals {
service = "example"
resource_prefix = format("%s-%s", local.service, var.env)
location = format("%s-b", var.gcp_region)
}
resource "google_service_account" "main" {
account_id = format("%s-sa", local.resource_prefix)
display_name = format("%s-sa", local.resource_prefix)
project = var.gcp_project
}
resource "google_container_cluster" "main" {
name = local.resource_prefix
description = format("Cluster primarily servicing the service %s", local.service)
location = local.location
remove_default_node_pool = true
initial_node_count = 1
}
resource "google_container_node_pool" "main" {
name = format("%s-node-pool", local.resource_prefix)
location = local.location
cluster = google_container_cluster.main.name
node_count = var.gke_cluster_node_count
node_config {
preemptible = true
machine_type = var.gke_node_machine_type
# Google recommends custom service accounts that have cloud-platform scope and permissions granted via IAM Roles.
service_account = google_service_account.main.email
oauth_scopes = [
"https://www.googleapis.com/auth/logging.write",
"https://www.googleapis.com/auth/monitoring",
"https://www.googleapis.com/auth/cloud-platform",
"https://www.googleapis.com/auth/devstorage.read_only",
"https://www.googleapis.com/auth/servicecontrol",
"https://www.googleapis.com/auth/service.management.readonly",
"https://www.googleapis.com/auth/trace.append"
]
}
autoscaling {
min_node_count = var.gke_cluster_autoscaling_min_node_count
max_node_count = var.gke_cluster_autoscaling_max_node_count
}
}
</code></pre>
<p>I run a helm deployment to deploy an application and get the following issue.</p>
<pre><code>default php-5996c7fbfd-d6xf5 0/1 ImagePullBackOff 0 37m
</code></pre>
<pre><code>Normal Pulling 36m (x4 over 37m) kubelet Pulling image "europe-docker.pkg.dev/example-999999/eu.gcr.io/example-php-fpm:latest"
Warning Failed 36m (x4 over 37m) kubelet Failed to pull image "europe-docker.pkg.dev/example-999999/eu.gcr.io/example-php-fpm:latest": rpc error: code = Unknown desc = failed to pull and unpack image "europe-docker.pkg.dev/example-999999/eu.gcr.io/example-php-fpm:latest": failed to resolve reference "europe-docker.pkg.dev/example-999999/eu.gcr.io/example-php-fpm:latest": failed to authorize: failed to fetch oauth token: unexpected status: 403 Forbidden
Warning Failed 36m (x4 over 37m) kubelet Error: ErrImagePull
Warning Failed 35m (x6 over 37m) kubelet Error: ImagePullBackOff
</code></pre>
<p>It seems to me that I've missed something to do with the service account. Using cloud SSH I am able to generate an OAuth token, but pulling also does not work with <code>crictl</code>.</p>
<p><strong>UPDATE: issue resolved</strong></p>
<p>I have been able to resolve my problem with the following additional terraform code.</p>
<pre><code>resource "google_project_iam_member" "artifact_role" {
role = "roles/artifactregistry.reader"
member = "serviceAccount:${google_service_account.main.email}"
project = var.gcp_project
}
</code></pre>
| <p>Turning comment to answer as it resolved @David's issue.</p>
<p>Because the user-defined service account is being used for the node pool, the appropriate roles need to be bound to this service account.</p>
<p>In this case: <code>roles/artifactregistry.reader</code></p>
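<p>If you prefer granting it outside of Terraform, the equivalent one-off binding can be done with gcloud; a minimal sketch, with the project ID and service account email as placeholders:</p>
<pre><code>gcloud projects add-iam-policy-binding PROJECT_ID \
  --member="serviceAccount:SERVICE_ACCOUNT_EMAIL" \
  --role="roles/artifactregistry.reader"
</code></pre>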
<p><a href="https://cloud.google.com/artifact-registry/docs/access-control#permissions" rel="nofollow noreferrer">Configuring artifact registry permissions</a></p>
<p>Best practice is to grant the minimum required roles.</p>
|
<p>I'm deploying my Node.js server on AKS and I'm experiencing a few unexpected behaviours not happening on my local machine. I've tried various things to debug it but I'm not succeeding at all.</p>
<p>I use a Middleware as</p>
<pre><code>exports.clientApiKeyValidation = (req, res, next) => {
try {
const clientApiKey = req.get('api_key');
console.log(`received api_key is ${clientApiKey}
and expected API_KEY is ${process.env.API_KEY}`);
const receivedTypeof = console.log('clientApiKey ', typeof clientApiKey);
const expectedTypeof = console.log('expected ', typeof process.env.API_KEY);
console.log('req.headers is: ', req.headers);
if (!clientApiKey) {
return res.status(400).send({
status: false,
message: "Missing Api Key"
});
}
if (clientApiKey === process.env.API_KEY) {
// console.log('Api key correct');
next();
}
else {
return res.status(400).send({
status: false,
message: "Invalid Api Key"
});
}
} catch (error) {
res.status(401).json({
error: new Error('Invalid request!')
});
}
}
</code></pre>
<p>used on routes like</p>
<pre><code>router.get('/users', auth.clientApiKeyValidation, userController.findUsers);
</code></pre>
<p>In <code>app.js</code> I also set a few things, including a middleware function to allow these params, as follows:</p>
<pre><code>app.disable('x-powered-by');
app.use(express.json({ limit: '50mb' }));
app.use(function (req, res, next) {
// either res.setHeader or res.header works.
// res.setHeader('Access-Control-Allow-Origin', 'http://localhost:5000');
res.setHeader(
'Access-Control-Allow-Origin',
'https://xxx.westeurope.cloudapp.azure.com'
);
res.setHeader(
'Access-Control-Allow-Methods',
'GET, POST, PUT, DELETE'
);
// res.setHeader('Access-Control-Allow-Methods', 'GET, POST, PUT, DELETE');
res.setHeader(
'Access-Control-Allow-Headers',
'content-type, api_key, AuthToken, apikey'
);
// res.setHeader('Access-Control-Allow-Credentials', true);
// res.header('Access-Control-Allow-Origin', 'http://localhost:5000');
// res.header('Access-Control-Allow-Methods', 'GET, POST, OPTIONS, PUT, PATCH, DELETE');
// res.header('Access-Control-Allow-Headers', 'Origin, Content-Type, api_key, Accept');
// console.log(' res headers added: ', res.getHeaders());
next();
});
app.use(express.urlencoded({ extended: true }));
app.use(cors());
app.use(express.json());
app.use(express.json({ type: 'application/json' }));
</code></pre>
<p>I'm sending requests to it from my Flutter app adding headers like</p>
<pre><code> await _firebaseAuth.currentUser?.getIdToken().then((idToken) {
headers = {
'Content-Type': 'application/json',
'api_key': Environment.dbApiKey,
'AuthToken': idToken
};
});
FixitUser? userDetails;
// final Uri uri = Uri.http(Environment.dbUrl, '/api/users');
final Uri uri = Uri.https(Environment.dbUrl, '/server/api/users');
log('headers are $headers');
await get(uri, headers: headers).then((resp) {
log('UserMongoDBRepository.downloadUserDetails resp.body is : ${resp.body}');
...
</code></pre>
<p>When running the server directly, the middleware logs show it all works as expected:</p>
<pre><code>received api_key is hjkdiu-slia7h-si9udd989jw-ols8dh
and expected API_KEY is hjkdiu-slia7h-si9udd989jw-ols8dh
clientApiKey string
expected string
req.headers is: {
'user-agent': 'Dart/2.18 (dart:io)',
'accept-encoding': 'gzip',
api_key: 'hjkdiu-slia7h-si9udd989jw-ols8dh',
host: '192.168.1.48:3000',
'content-type': 'application/json',
authtoken: 'eyJhbGciOiJSUzI1NiIsImtpZCI6IjVhNTA5ZjAxOWY3MGQ3NzlkODBmMTUyZDFhNWQzMzgxMWFiN2NlZjciLCJ0eXAiOiJKV1QifQ.eyJuYW1lIjoidmluY2Vuem8gY2FsaWEiLCJwaWN0dXJlIjoiaHR0cHM6Ly9saDYuZ29vZ2xldXNlcmNvbnRlbnQuY29tLy13TVNwMUxZd2hPZy9BQUFBQUFBQUFBSS9BQUFBQUFBQUFCSS9OdE43TTlTMEVIUS9zOTYtYy9waG90by5qcGciLCJpc3MiOiJodHRwczovL3NlY3VyZXRva2VuLmdvb2dsZS5jb20vZml4LWl0LWI0YjAwIiwiYXVkIjoiZml4LWl0LWI0YjAwIiwiYXV0aF90aW1lIjoxNjc1Nzc1MTg2LCJ1c2VyX2lkIjoiWnhtejJHSmxNUlBXdjBMRGgyRDg4Y0o3T3V6MSIsInN1YiI6Ilp4bXoyR0psTVJQV3YwTERoMkQ4OGNKN091ejEiLCJpYXQiOjE2NzU4NDY4MDAsImV4cCI6MTY3NTg1MDQwMCwiZW1haWwiOiJ2aW5jZW56by5jYWxpYS4xOTc2QGdtYWlsLmNvbSIsImVtYWlsX3ZlcmlmaWVkIjp0cnVlLCJwaG9uZV9udW1iZXIiOiIrMzkzNjYxNDcxMzEzIiwiZmlyZWJhc2UiOnsiaWRlbnRpdGllcyI6eyJnb29nbGUuY29tIjpbIjExNjkyMTg2MTcwOTA0NTM5MzU5MiJdLCJwaG9uZSI6WyIrMzkzNjYxNDcxMzEzIl0sImVtYWlsIjpbInZpbmNlbnpvLmNhbGlhLjE5NzZAZ21haWwuY29tIl19LCJzaWduX2luX3Byb3ZpZGVyIjoiZ29vZ2xlLmNvbSJ9fQ.ZNJFrxWlycMVgg4VAdWt6Q0WR5yrWPar5_UJwhQ9-hVX25aKC69yDpoM2adx3OIQ-hlsGz1bNvVEUSfvWUWRWF-TaX2TVeLb5z0blAtl8A1cGcvnaryffr0jpCtN_nhDg3WTtHw4B2USDe432TxSXx0ICrk8bx_fum8jsfBvAh_xU8bnr6lLsc0pltcmU-zun0rhZcC6jpxua2d0jIwWyWSXurjNBkLkduzOpUbw6KCLur7wVcxz-HmuD67D0qx5dr37malLbOBt5mcfOtdJEJcEq55XqXqUanopT_OWTWxByMrD_CXyZnLu_q1DgSBMM4kaLrSA14ETD_EvIeqHOQ',
}
</code></pre>
<p>When it is instead running on the AKS cluster, <code>api_key</code> is missing from the received request headers, as the logs show, so the middleware responds with <code>{"status":false,"message":"Missing Api Key"}</code>.</p>
<pre><code>received api_key is undefined
and expected API_KEY is some api key
req.headers is: {
host: 'xxx.westeurope.cloudapp.azure.com',
'x-request-id': '2ecc2ec74c808cf40f816921374f72d4',
'x-real-ip': '81.56.11.23',
'x-forwarded-for': '81.56.11.23',
'x-forwarded-host': 'xxx.westeurope.cloudapp.azure.com',
'x-forwarded-port': '443',
'x-forwarded-proto': 'https',
'x-forwarded-scheme': 'https',
'x-scheme': 'https',
'user-agent': 'Dart/2.18 (dart:io)',
'accept-encoding': 'gzip',
'content-type': 'application/json',
authtoken: 'eyJhbGciOiJSUzI1NiIsImtpZCI6IjVhNTA5ZjAxOWY3MGQ3NzlkODBmMTUyZDFhNWQzMzgxMWFiN2NlZjciLCJ0eXAiOiJKV1QifQ.eyJuYW1lIjoidmluY2Vuem8gY2FsaWEiLCJwaWN0dXJlIjoiaHR0cHM6Ly9saDYuZ29vZ2xldXNlcmNvbnRlbnQuY29tLy13TVNwMUxZd2hPZy9BQUFBQUFBQUFBSS9BQUFBQUFBQUFCSS9OdE43TTlTMEVIUS9zOTYtYy9waG90by5qcGciLCJpc3MiOiJodHRwczovL3NlY3VyZXRva2VuLmdvb2dsZS5jb20vZml4LWl0LWI0YjAwIiwiYXVkIjoiZml4LWl0LWI0YjAwIiwiYXV0aF90aW1lIjoxNjcxODc0Nzg2LCJ1c2VyX2lkIjoiWnhtejJHSmxNUlBXdjBMRGgyRDg4Y0o3T3V6MSIsInN1YiI6Ilp4bXoyR0psTVJQV3YwTERoMkQ4OGNKN091ejEiLCJpYXQiOjE2NzU3NjkyNzksImV4cCI6MTY3NTc3Mjg3OSwiZW1haWwiOiJ2aW5jZW56by5jYWxpYS4xOTc2QGdtYWlsLmNvbSIsImVtYWlsX3ZlcmlmaWVkIjp0cnVlLCJwaG9uZV9udW1iZXIiOiIrMzkzNjYxNDcxMzEzIiwiZmlyZWJhc2UiOnsiaWRlbnRpdGllcyI6eyJnb29nbGUuY29tIjpbIjExNjkyMTg2MTcwOTA0NTM5MzU5MiJdLCJwaG9uZSI6WyIrMzkzNjYxNDcxMzEzIl0sImVtYWlsIjpbInZpbmNlbnpvLmNhbGlhLjE5NzZAZ21haWwuY29tIl19LCJzaWduX2luX3Byb3ZpZGVyIjoiZ29vZ2xlLmNvbSJ9fQ.d8emU2BKNBV5oe3YQVHT8M8otFos_RvEmeyutdmYBDhnzyPgMZTAn_l3JikRAbcTNdDOAVoutZgTb5s8d6di3plAoE240OwwZTuSwxVpSaS7fDPt_rjQf9k2RmVsRa-fq1SWIP2ejdEbma_QngLSpXO0-PSPx4wa7mThjv2enP00TpUB9RDsOPK2QKlwOX9i1gc1_7kOPGJwouG3S3W4_kOXIxSoVjAT0P9k2xtHa99W-_gwn-9YqM1UoHrLkEs-ONKpe5SWLIet9r_PvI2l1zqb-1fGBmoeBhyzSijw_cOLJSayEoImtkCOmAA0rhMNYc--Yxuzd8EMyyp1U9dThg'
}
</code></pre>
<p>The headers printed from Flutter show the api_key being correctly set.</p>
<pre><code>headers are {Content-Type: application/json, api_key: some api key, AuthToken: eyJhbGciOiJSUzI1NiIsImtpZCI6IjVhNTA5ZjAxOWY3MGQ3NzlkODBmMTUyZDFhNWQzMzgxMWFiN2NlZjciLCJ0eXAiOiJKV1QifQ.eyJuYW1lIjoidmluY2Vuem8gY2FsaWEiLCJwaWN0dXJlIjoiaHR0cHM6Ly9saDYuZ29vZ2xldXNlcmNvbnRlbnQuY29tLy13TVNwMUxZd2hPZy9BQUFBQUFBQUFBSS9BQUFBQUFBQUFCSS9OdE43TTlTMEVIUS9zOTYtYy9waG90by5qcGciLCJpc3MiOiJodHRwczovL3NlY3VyZXRva2VuLmdvb2dsZS5jb20vZml4LWl0LWI0YjAwIiwiYXVkIjoiZml4LWl0LWI0YjAwIiwiYXV0aF90aW1lIjoxNjcxODc0Nzg2LCJ1c2VyX2lkIjoiWnhtejJHSmxNUlBXdjBMRGgyRDg4Y0o3T3V6MSIsInN1YiI6Ilp4bXoyR0psTVJQV3YwTERoMkQ4OGNKN091ejEiLCJpYXQiOjE2NzU3NjkyNzksImV4cCI6MTY3NTc3Mjg3OSwiZW1haWwiOiJ2aW5jZW56by5jYWxpYS4xOTc2QGdtYWlsLmNvbSIsImVtYWlsX3ZlcmlmaWVkIjp0cnVlLCJwaG9uZV9udW1iZXIiOiIrMzkzNjYxNDcxMzEzIiwiZmlyZWJhc2UiOnsiaWRlbnRpdGllcyI6eyJnb29nbGUuY29tIjpbIjExNjkyMTg2MTcwOTA0NTM5MzU5MiJdLCJwaG9uZSI6WyIrMzkzNjYxNDcxMzEzIl0sImVtYWlsIjpbInZpbmNlbnpvLmNhbGlhLjE5NzZAZ21haWwuY29tIl19LCJzaWduX2luX3Byb3ZpZGVyIjoiZ29vZ2xlLmNvbSJ9fQ.d8emU2BKNBV5oe3YQVHT8M8otFos_RvEmeyutdmYBDhnzyPgMZTAn_l3JikRAbcTNdDOAVoutZgTb5s8d6di3plAoE240OwwZTuSwxVpSaS7fDPt_rjQf9k2RmVsRa-fq1SWIP2ejdEbma_QngLSpXO0-PSPx4wa7mThjv2enP00TpUB9RDsOPK2QKlwOX9i1gc1_7kOPGJwouG3S3W4_kOXIxSoVjAT0P9k2xtHa99W-_gwn-9YqM1UoHrLkEs-ONKpe5SWLIet9r_PvI2l1zqb-1fGBmoeBhyzSijw_cOLJSayEoImtkCOmAA0rhMNYc--Yxuzd8EMyyp1U9dThg}
</code></pre>
<p>So as a test, instead of <code>api_key</code> I used <code>apikey</code> and it does get retrieved in the request, but then the <code>===</code> check fails, so the middleware responds with <code>{"status":false,"message":"Invalid Api Key"}</code> even though their types and values appear identical, as checked here <a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Strict_equality" rel="nofollow noreferrer">https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Strict_equality</a></p>
<pre><code>received api_key is hjkdiu-slia7h-si9udd989jw-ols8dh
and expected API_KEY is hjkdiu-slia7h-si9udd989jw-ols8dh
clientApiKey string
expected string
req.headers is: {
host: 'fixit1.westeurope.cloudapp.azure.com',
'x-request-id': '515ad2a00b1a3db24a69e09f6d181036',
'x-real-ip': '81.56.11.23',
'x-forwarded-for': '81.56.11.23',
'x-forwarded-host': 'fixit1.westeurope.cloudapp.azure.com',
'x-forwarded-port': '443',
'x-forwarded-proto': 'https',
'x-forwarded-scheme': 'https',
'x-scheme': 'https',
'user-agent': 'Dart/2.18 (dart:io)',
'accept-encoding': 'gzip',
'content-type': 'application/json',
authtoken: 'eyJhbGciOiJSUzI1NiIsImtpZCI6IjVhNTA5ZjAxOWY3MGQ3NzlkODBmMTUyZDFhNWQzMzgxMWFiN2NlZjciLCJ0eXAiOiJKV1QifQ.eyJuYW1lIjoidmluY2Vuem8gY2FsaWEiLCJwaWN0dXJlIjoiaHR0cHM6Ly9saDYuZ29vZ2xldXNlcmNvbnRlbnQuY29tLy13TVNwMUxZd2hPZy9BQUFBQUFBQUFBSS9BQUFBQUFBQUFCSS9OdE43TTlTMEVIUS9zOTYtYy9waG90by5qcGciLCJpc3MiOiJodHRwczovL3NlY3VyZXRva2VuLmdvb2dsZS5jb20vZml4LWl0LWI0YjAwIiwiYXVkIjoiZml4LWl0LWI0YjAwIiwiYXV0aF90aW1lIjoxNjc1Nzc1MTg2LCJ1c2VyX2lkIjoiWnhtejJHSmxNUlBXdjBMRGgyRDg4Y0o3T3V6MSIsInN1YiI6Ilp4bXoyR0psTVJQV3YwTERoMkQ4OGNKN091ejEiLCJpYXQiOjE2NzU4NDMxNzgsImV4cCI6MTY3NTg0Njc3OCwiZW1haWwiOiJ2aW5jZW56by5jYWxpYS4xOTc2QGdtYWlsLmNvbSIsImVtYWlsX3ZlcmlmaWVkIjp0cnVlLCJwaG9uZV9udW1iZXIiOiIrMzkzNjYxNDcxMzEzIiwiZmlyZWJhc2UiOnsiaWRlbnRpdGllcyI6eyJnb29nbGUuY29tIjpbIjExNjkyMTg2MTcwOTA0NTM5MzU5MiJdLCJwaG9uZSI6WyIrMzkzNjYxNDcxMzEzIl0sImVtYWlsIjpbInZpbmNlbnpvLmNhbGlhLjE5NzZAZ21haWwuY29tIl19LCJzaWduX2luX3Byb3ZpZGVyIjoiZ29vZ2xlLmNvbSJ9fQ.lDMgmTJkrm6ZnoyEU1F7Sjoo7Y8or8ZKzIrwBJ9ssQiR8yN5KD2ZhyM6yyR_Arscmyg1ZV_6RsTnFgGsVsmjiMzyX6TOXmYkmRlvvMjjjFsV8rW_W_gIdVld6vSg-JMrOlLcCeBknFDJC50bbNGYBSwQ2_C_MZIKlbFWWrqME988MOiUBlyT86t5Oofc5uVMETrpBf0a-wsFRdyEX-3uj-T3MRHza62PTcpHURoptQdIzYsBSc6WxR6WCINVjx__DbWlWrGt612Mw4iLv1XReiGriQEjTDc9cXbG0ngbiRsn0ojvZ-Jb8Pb6kj7gWRYDRsKFg2nxxFMhVeSDuIeO-w',
apikey: 'hjkdiu-slia7h-si9udd989jw-ols8dh'
}
</code></pre>
<p>As a second test I changed the <code>===</code> operator to <code>==</code> in the middleware but still it returns false and responds with <code>{"status":false,"message":"Invalid Api Key"}</code>.</p>
<p>Then I tried to use <code>const serverApiKey = JSON.stringify(process.env.API_KEY);</code> and <code>const clientApiKey = JSON.stringify(req.get('apikey'));</code> for the comparison and they actually do yield different results</p>
<pre><code>clientApiKey json string is: "hjkdiu-slia7h-si9udd989jw-ols8dh"
serverApiKey json string is: "hjkdiu-slia7h-si9udd989jw-ols8dh\n"
</code></pre>
<p><code>env.API_KEY</code> does get its value from a k8s secret, base64-encoded with the <code>echo -n hjkdiu-slia7h-si9udd989jw-ols8dh |base64</code> command. To see if it was a matter of an empty space in the encoded value from the secret, I tried encoding it with and without the <code>-n</code> flag, but they result in the same JSON-encoded string.</p>
<p>I'm not sure whether it could have something to do with the Docker image I create, so here's the <code>Dockerfile</code>:</p>
<pre><code>FROM node:18.3.0
WORKDIR /usr/app
# where available (npm@5+)
COPY ./ package.json ./
ENV PUPPETEER_SKIP_CHROMIUM_DOWNLOAD=true
# Install the dependencies
RUN npm install
# If you are building your code for production
# RUN npm ci --only=production
COPY ./ ./
# Add the following lines for create-react-app bug, workaround
ENV CI=true
ENV WDS_SOCKET_PORT=0
CMD ["npm", "start"]
</code></pre>
<ol>
<li>Why <code>api_key</code> param is not present while <code>apikey</code> param is in the received request headers?</li>
<li>Why the <code>\n</code> in the <code>serverApiKey</code> but not in the <code>clientApiKey</code> ?</li>
<li>Why is this happening only on AKS and not on my local machine?</li>
</ol>
<p>Thank you very much as always.</p>
| <p>I don't know anything about AKS, but some HTTP server implementations (for instance <a href="https://nginx.org/en/docs/http/ngx_http_core_module.html#underscores_in_headers" rel="nofollow noreferrer">nginx</a>) don't accept underscores in HTTP header names.</p>
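<p>If the underscore handling turns out to be the cause, the nginx ingress controller can be told to keep such headers instead of dropping them. A minimal sketch, assuming an ingress-nginx controller whose ConfigMap is named <code>ingress-nginx-controller</code> in the <code>ingress-nginx</code> namespace (adjust both to your installation):</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  # Forward request headers that contain underscores (e.g. api_key)
  enable-underscores-in-headers: "true"
</code></pre>
<p>The simpler alternative is what you already tried: avoid underscores in the header name altogether (e.g. <code>apikey</code> or <code>x-api-key</code>).</p>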
<p>As for why the server API key contains a trailing newline: it is probably an artifact of how the value ended up in the Kubernetes Secret (a stray newline somewhere in the encoded value). I would just be pragmatic about it and remove it before use: <code>process.env.API_KEY.trim()</code></p>
|
<p>According to articles below, it seems we can pull container image to GKE from Artifact Registry without any additional authentication when these in same project.</p>
<p><a href="https://cloud.google.com/artifact-registry/docs/integrate-gke" rel="nofollow noreferrer">https://cloud.google.com/artifact-registry/docs/integrate-gke</a></p>
<p><a href="https://www.youtube.com/watch?v=BfS7mvPA-og" rel="nofollow noreferrer">https://www.youtube.com/watch?v=BfS7mvPA-og</a></p>
<p><a href="https://stackoverflow.com/questions/73205712/error-imagepullbackoff-and-error-errimagepull-errors-with-gke">Error: ImagePullBackOff and Error: ErrImagePull errors with GKE</a></p>
<p>But when I try it, I face an <code>ImagePullBackOff</code> error.<br />
Is there any mistake or misunderstanding on my part, or do I need to use some additional authentication?</p>
<h2>Reproduce</h2>
<p>It's convenient to use Google Cloud Shell in some project on <a href="https://console.cloud.google.com" rel="nofollow noreferrer">https://console.cloud.google.com</a> .</p>
<h3>Create Artifact Registry</h3>
<pre class="lang-bash prettyprint-override"><code>gcloud artifacts repositories create test \
--repository-format=docker \
--location=asia-northeast2
</code></pre>
<h3>Push sample image</h3>
<pre class="lang-bash prettyprint-override"><code>gcloud auth configure-docker asia-northeast2-docker.pkg.dev
docker pull nginx
docker tag nginx asia-northeast2-docker.pkg.dev/${PROJECT_NAME}/test/sample-nginx-image
docker push asia-northeast2-docker.pkg.dev/${PROJECT_NAME}/test/sample-nginx-image
</code></pre>
<h3>Create GKE Autopilot cluster</h3>
<p>Create GKE Autopilot cluster by using GUI console.</p>
<p>Almost all options are default, but I changed the following:</p>
<ul>
<li>Set cluster name as test.</li>
<li>Set region same as registry's one. (In this case, asia-northeast2)</li>
<li>Enabled Anthos Service Mesh.</li>
</ul>
<h3>Deploy container image to GKE from Artifact Registry</h3>
<pre class="lang-bash prettyprint-override"><code>gcloud container clusters get-credentials test --zone asia-northeast2
kubectl run test --image asia-northeast2-docker.pkg.dev/${PROJECT_NAME}/test/sample-nginx-image
</code></pre>
<h3>Check Pod state</h3>
<pre class="lang-bash prettyprint-override"><code>kubectl describe po test
</code></pre>
<pre><code>Name: test
Namespace: default
Priority: 0
Service Account: default
Node: xxxxxxxxxxxxxxxxxxx
Start Time: Wed, 08 Feb 2023 12:38:08 +0000
Labels: run=test
Annotations: autopilot.gke.io/resource-adjustment:
{"input":{"containers":[{"name":"test"}]},"output":{"containers":[{"limits":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"2Gi"},"reque...
seccomp.security.alpha.kubernetes.io/pod: runtime/default
Status: Pending
IP: 10.73.0.25
IPs:
IP: 10.73.0.25
Containers:
test:
Container ID:
Image: asia-northeast2-docker.pkg.dev/${PROJECT_NAME}/test/sample-nginx-image
Image ID:
Port: <none>
Host Port: <none>
State: Waiting
Reason: ErrImagePull
Ready: False
Restart Count: 0
Limits:
cpu: 500m
ephemeral-storage: 1Gi
memory: 2Gi
Requests:
cpu: 500m
ephemeral-storage: 1Gi
memory: 2Gi
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-szq85 (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-szq85:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: Guaranteed
Node-Selectors: <none>
Tolerations: kubernetes.io/arch=amd64:NoSchedule
node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 19s gke.io/optimize-utilization-scheduler Successfully assigned default/test to xxxxxxxxxxxxxxxxxxx
Normal Pulling 16s kubelet Pulling image "asia-northeast2-docker.pkg.dev/${PROJECT_NAME}/test/sample-nginx-image"
Warning Failed 16s kubelet Failed to pull image "asia-northeast2-docker.pkg.dev/${PROJECT_NAME}/test/sample-nginx-image": rpc error: code = Unknown desc = failed to pull and unpack image "asia-northeast2-docker.pkg.dev/${PROJECT_NAME}/test/sample-nginx-image:latest": failed to resolve reference "asia-northeast2-docker.pkg.dev/${PROJECT_NAME}/test/sample-nginx-image:latest": failed to authorize: failed to fetch oauth token: unexpected status: 403 Forbidden
Warning Failed 16s kubelet Error: ErrImagePull
Normal BackOff 15s kubelet Back-off pulling image "asia-northeast2-docker.pkg.dev/${PROJECT_NAME}/test/sample-nginx-image"
Warning Failed 15s kubelet Error: ImagePullBackOff
</code></pre>
<p>then, I got <code>ImagePullBackOff</code>.</p>
| <p>This could be because the GKE Autopilot service account does not have the necessary permissions to access the Artifact Registry. You can grant the needed permissions by adding the <code>roles/artifactregistry.reader</code> role to the service account that the GKE Autopilot node pool is configured to use. Additionally, you may need to adjust the <a href="https://cloud.google.com/container-registry/docs/access-control#permissions_and_roles" rel="nofollow noreferrer">IAM permissions</a> for the service account so that it has access to the private Artifact Registry.</p>
<pre><code>gcloud artifacts repositories add-iam-policy-binding <repository-name> \
--location=<location> \
--member=serviceAccount:<nnn>[email protected] \
--role="roles/artifactregistry.reader"
</code></pre>
<p>You can also try creating a new service account, granting it the necessary permissions, and then pulling the image with it once.</p>
<p>Simple <a href="https://cloud.google.com/container-registry/docs/troubleshooting#imagepullbackoff" rel="nofollow noreferrer">troubleshooting steps</a> are:</p>
<ol>
<li>Ensure that your GKE cluster is configured to allow access to the Artifact Registry. You can do this by going to the GKE dashboard and making sure that the “Allow access to Artifact Registry” option is enabled.</li>
<li>Check that the container image you are trying to pull actually exists in the Artifact Registry, was uploaded correctly, and can be accessed (see the example command after this list).</li>
<li>Look into the error logs to get more information on what is causing the issue. Additionally, you can check the GKE documentation for more information on troubleshooting it.</li>
</ol>
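<p>For the second point, you can verify that the image and tag actually exist; a minimal sketch using the repository path from your commands:</p>
<pre><code>gcloud artifacts docker images list asia-northeast2-docker.pkg.dev/${PROJECT_NAME}/test
</code></pre>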
|
<p>I currently have Trino deployed in my Kubernetes cluster using the official <a href="https://github.com/trinodb/charts" rel="nofollow noreferrer">Trino (trinodb) Helm chart</a>. In the same way I deployed <a href="https://github.com/apache/superset/blob/master/helm/superset/Chart.yaml" rel="nofollow noreferrer">Apache Superset</a>.</p>
<ul>
<li><p>Using port forwarding of trino to 8080 and superset to 8088, I am able to access the UI for both from localhost but also I am able to use the trino command line API to query trino using:</p>
<p>./trino --server http://localhost:8080</p>
</li>
<li><p>I don't have any authentication set</p>
</li>
<li><p>mysql is setup correctly as Trino catalog</p>
</li>
</ul>
<p>When I try to add Trino as a database for Superset using either of the following SQLAlchemy URLs:</p>
<pre><code>trino://trino@localhost:8080/mysql
trino://localhost:8080/mysql
</code></pre>
<p>When I test the connection from Superset UI, I get the following error:</p>
<blockquote>
<p>ERROR: Could not load database driver: TrinoEngineSpec</p>
</blockquote>
<p>Please advise how I could solve this issue.</p>
| <p>You should install <code>sqlalchemy-trino</code> to make the trino driver available.</p>
<p>Add these lines to your <code>values.yaml</code> file:</p>
<pre class="lang-yaml prettyprint-override"><code>additionalRequirements:
- sqlalchemy-trino
bootstrapScript: |
#!/bin/bash
pip install sqlalchemy-trino &&\
if [ ! -f ~/bootstrap ]; then echo "Running Superset with uid {{ .Values.runAsUser }}" > ~/bootstrap; fi
</code></pre>
<p>If you want more details about the problem, see this <a href="https://github.com/apache/superset/issues/13640" rel="nofollow noreferrer">Github issue</a>.</p>
<p>I added two options that do the same thing because in some versions the <code>additionalRequirements</code> doesn't work and you may need the <code>bootstrapScript</code> option to install the driver.</p>
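<p>After adding either option, the release needs to be re-deployed so the driver is actually installed into the running pods. A rough sketch, assuming the chart was installed from the Apache Superset Helm repo with a local <code>values.yaml</code> (release name, namespace and repo alias are placeholders):</p>
<pre><code>helm repo add superset https://apache.github.io/superset
helm upgrade --install superset superset/superset -n superset -f values.yaml
</code></pre>
<p>Also note that, from inside the Superset pod, <code>localhost:8080</code> will not reach Trino; once the driver loads, point the SQLAlchemy URI at the in-cluster service instead, e.g. <code>trino://trino@<trino-service>.<namespace>.svc.cluster.local:8080/mysql</code> (the exact service name depends on your Trino release).</p>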
|
<p>In normal kubernetes CI/CD process , following process occurs :</p>
<ul>
<li>clone code from git</li>
<li>build and push docker image</li>
<li>update kubernetes deployment with updated code</li>
</ul>
<p>According to gitops definition</p>
<p><em>GitOps is a new approach to Continuous Deployment that leverages Git as a single source of truth for declarative infrastructure and applications, providing both revision and change control. With GitOps, a system is run by submitting pull requests (and subsequent merges) to achieve the desired state of the system represented in a Git repository</em></p>
<p>My understanding of GitOps is: when you update anything in Git (as this is the source of truth), the desired state of Kubernetes changes to the latest one and the latest code is deployed.</p>
<p><strong>End Result of traditional CI/CD without gitops</strong>: new code is deployed as kubernetes deployment</p>
<p><strong>End Result of gitops</strong>: new code is deployed as kubernetes deployment</p>
<p>I am unable to figure out the difference. Sorry if it sounds weird; I am quite new to GitOps and still exploring it.</p>
<p>Thanks in advance for your response</p>
| <p>GitOps is essentially extending CI/CD principles beyond application code to infrastructure code. Git stays the single source of truth and is combined, for example, with Terraform (provisioning), Ansible (configuration management) and Kubernetes (orchestration), so that Git remains a 1:1 reflection of your infrastructure. The practical difference from your traditional pipeline is that deployment is driven by what is merged into Git, typically by an agent that continuously reconciles the cluster against the repository, rather than by imperative push steps inside the CI job. It's nothing radically new, so don't be put off by the fancy term.</p>
|
<p>I am trying to update status subresource for a Custom Resource and I see a discrepency with curl and <code>kubectl patch</code> commands. when I use curl call it works perfectly fine but when I use <code>kubectl patch</code> command it says patched but with <code>no change</code>. Here are the command that I used</p>
<p><strong>Using Curl:</strong></p>
<p>When I connect to <code>kubectl proxy</code> and run the below curl call, it's successful and updates status subresource on my CR.</p>
<pre><code>curl -XPATCH -H "Accept: application/json" -H "Content-Type: application/json-patch+json" --data '[{"op": "replace", "path": "/status/state", "value": "newState"}]' 'http://127.0.0.1:8001/apis/acme.com/v1alpha1/namespaces/acme/myresource/default/status'
</code></pre>
<p><strong>Kubectl patch command:</strong></p>
<p>Using kubectl patch says the CR is patched, but with <code>no change</code>, and the status sub-resource is not updated.</p>
<pre><code>$ kubectl -n acme patch myresource default --type='json' -p='[{"op": "replace", "path": "/status/state", "value":"newState"}]'
myresource.acme.com/default patched (no change)
</code></pre>
<p>However when I do the <code>kubectl patch</code> on the other sub-resources like <code>spec</code> it works fine. Am i missing something here?</p>
| <p>As of kubectl v1.24, it is possible to patch subresources with an additional flag e.g. <code>--subresource=status</code>. This flag is considered "Alpha" but does not require enabling the feature.</p>
<p>As an example, with a yaml merge:</p>
<pre><code>kubectl patch MyCrd myresource --type=merge --subresource status --patch 'status: {healthState: InSync}'
</code></pre>
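<p>Applied to the JSON patch from the question, that would look something like the following (assuming kubectl v1.24 or newer on the client side):</p>
<pre><code>kubectl -n acme patch myresource default --subresource=status --type='json' \
  -p='[{"op": "replace", "path": "/status/state", "value": "newState"}]'
</code></pre>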
<p>The <a href="https://sysdig.com/blog/kubernetes-1-24-whats-new/#2590" rel="nofollow noreferrer">Sysdig "What's New?" for v1.24</a> includes some more words about this flag:</p>
<blockquote>
<p>Some kubectl commands like get, patch, edit, and replace will now contain a new flag --subresource=[subresource-name], which will allow fetching and updating status and scale subresources for all API resources.</p>
<p>You now can stop using complex curl commands to directly update subresources.</p>
</blockquote>
<p>The <code>--subresource</code> flag is scheduled for promotion to "Beta" in Kubernetes v1.27 through <a href="https://github.com/kubernetes/enhancements/pull/3729" rel="nofollow noreferrer">KEP-2590: graduate kubectl subresource support to beta</a>. The lifecycle of this feature can be tracked in <a href="https://github.com/kubernetes/enhancements/issues/2590" rel="nofollow noreferrer">#2590 Add subresource support to kubectl</a>.</p>
|
<p>I want to prevent unsafe requests from reaching my application running in GCP GKE behind Google Ingress (not nginx), and I am trying to do this using path rules.
I know nginx Ingress can configure paths using regex, but I don't know the best way to do this with Google Ingress.
Right now I am just duplicating the same rule, changing only the path prefix, like this:</p>
<pre><code>spec:
rules:
- http:
paths:
- backend:
service:
name: my-api-service
port:
number: 80
path: /api
pathType: Prefix
- backend:
service:
name: my-api-service
port:
number: 80
path: /auth
pathType: Prefix
- backend:
service:
name: my-api-service
port:
number: 80
path: /admin
pathType: Prefix
</code></pre>
<p>Is there a better way to do this?</p>
| <p>Everything you're looking for is covered in <a href="https://cloud.google.com/load-balancing/docs/url-map-concepts#pm-constraints" rel="nofollow noreferrer">this</a> document. As GKE ingress is essentially a GCP Load Balancer, the <code>path</code> key is using a <code>url-map</code> to configure and route the traffic to what you've specified in the config. As you'd be able to see there, regexs are not allowed in <code>Path</code> keys.</p>
<p>One option if you're using Helm is to make use of the templates to generate this automatically from a variable. Given the following variable in your <code>values.yaml</code> file:</p>
<pre><code>paths:
- name: /api
- name: /admin
- name: /auth
</code></pre>
<p>Then in your ingress YAML definition you can do the following:</p>
<pre><code>spec:
rules:
- http:
paths:
{{ range $paths := .Values.paths }}
- backend:
service:
name: my-api-service
port:
number: 80
path: {{ .name }}
pathType: Prefix
{{ end }}
</code></pre>
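<p>To sanity-check what the loop renders before applying it, you can render the chart locally; a minimal sketch, with the release name, chart path and template file name as placeholders:</p>
<pre><code>helm template my-release ./my-chart --show-only templates/ingress.yaml
</code></pre>
<p>If the blank lines left by the <code>{{ range }}</code> / <code>{{ end }}</code> lines bother you, the whitespace-trimming form <code>{{- range ... }}</code> / <code>{{- end }}</code> removes them.</p>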
|
<p>I am trying to pass the given part of values.yaml into a Helm template:</p>
<pre><code> receivers:
test1:
test2:
test3:
test4:
</code></pre>
<p>using function:</p>
<p><code>{{ .Values.receivers | toYaml | nindent 2}}</code></p>
<p>The output is placed in the correct format; however, the empty fields get filled with 'null':</p>
<pre><code>receivers:
test1:
test2:
test3: null
test4: null
</code></pre>
<p>Is there any way to prevent this?</p>
<p>I am expecting correct templating without inserted null fields.</p>
| <p>There are no fields inserted. The processor only replaces values that already exist with a different serialization that has the same semantics.</p>
<p><code>test3:</code> in YAML without a value is parsed as having an empty scalar value. The <a href="https://yaml.org/spec/1.2.2/#103-core-schema" rel="nofollow noreferrer">YAML Core Schema</a> defines the following for empty values:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Regular expression</th>
<th>Resolved to tag</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>null | Null | NULL | ~</code></td>
<td>tag:yaml.org,2002:null</td>
</tr>
<tr>
<td><em><code>/* Empty */</code></em></td>
<td>tag:yaml.org,2002:null</td>
</tr>
</tbody>
</table>
</div>
<p>Since the empty value is resolved to have the tag <code>!!null</code> (which is a shorthand for the full form shown above), it is loaded as <strong><code>nil</code></strong> into Go.</p>
<p>When <code>toYaml</code> receives your data, it doesn't know that the <strong><code>nil</code></strong> values originated from empty scalars. It needs to choose one of the possible serializations and chooses <code>null</code>. This adheres to the YAML spec and is therefore correct behavior.</p>
<p>Any downstream processor that supports the Core Schema should process <code>test3: null</code> in the same way it processes <code>test3:</code> without value. Therefore there should be no problem.</p>
<p>If you want <code>test3:</code> to specifically have the <em>empty string</em> as value instead of <code>null</code>, write</p>
<pre class="lang-yaml prettyprint-override"><code>test3: ""
</code></pre>
<p>If you want it to contain an empty mapping, write</p>
<pre class="lang-yaml prettyprint-override"><code>test3: {}
</code></pre>
|
<p>I just follow the Keycloak Documentation for Kubernetes.</p>
<p><a href="https://www.keycloak.org/getting-started/getting-started-kube" rel="nofollow noreferrer">https://www.keycloak.org/getting-started/getting-started-kube</a></p>
<p><a href="https://i.stack.imgur.com/8oCpF.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8oCpF.png" alt="I" /></a></p>
<p>I deployed it exactly as described in the documentation.</p>
<p>But when I try to load the Keycloak page, I'm getting this:</p>
<p><a href="https://i.stack.imgur.com/LglSn.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/LglSn.png" alt="enter image description here" /></a></p>
<p>If you can give me a solution or explain why this is happening, I'd really appreciate it!</p>
<p>My ingress config (keycloak-ingress.yaml) is,</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: keycloak
spec:
tls:
- hosts:
- keycloak.192.168.49.2.nip.io
rules:
- host: keycloak.192.168.49.2.nip.io
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: keycloak
port:
number: 8080
</code></pre>
| <p>Make sure you have updated the ingress file with the proper <strong>IP</strong> of <strong>minikube</strong>.</p>
<p>Also check with <strong>http</strong> instead of <strong>https</strong>, and check the <strong>KEYCLOAK_HOSTNAME</strong> value.</p>
<p>Try below YAML :</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: keycloak
labels:
app: keycloak
spec:
ports:
- name: http
port: 8080
targetPort: 8080
selector:
app: keycloak
type: LoadBalancer
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: keycloak
labels:
app: keycloak
spec:
replicas: 1
selector:
matchLabels:
app: keycloak
template:
metadata:
labels:
app: keycloak
spec:
containers:
- name: keycloak
image: quay.io/keycloak/keycloak:20.0.3
args: ["start-dev"]
env:
- name: KEYCLOAK_ADMIN
value: "admin"
- name: KEYCLOAK_ADMIN_PASSWORD
value: "admin"
- name: KC_PROXY
value: "edge"
ports:
- name: http
containerPort: 8080
readinessProbe:
httpGet:
path: /realms/master
port: 8080
</code></pre>
<p>It will create the <strong>LB</strong> service for you, so you will be able to access it without the <strong>ingress</strong> config. Run <code>kubectl get svc -n <namespace-name></code>, check the <strong>External IP</strong>, and try opening that in the browser.</p>
<p><strong>Extra :</strong></p>
<p>You can refer to this YAML if the default one is not working. i am using Postgres & Dpeloying the Keycloak with that.</p>
<p>GitHub repo path : <a href="https://github.com/harsh4870/Keycloack-postgres-kubernetes-deployment" rel="nofollow noreferrer">https://github.com/harsh4870/Keycloack-postgres-kubernetes-deployment</a></p>
<p>Ref : <a href="https://faun.pub/keycloak-kubernetes-deployment-409d6ccd8a39" rel="nofollow noreferrer">https://faun.pub/keycloak-kubernetes-deployment-409d6ccd8a39</a></p>
|
<p>I am still new to K8S infrastructure but I am trying to convert VM infrastructure to K8S on GCP/GKE and I am stuck at forwarding the logs properly after getting Prometheus metrics forwarded correctly. I am also trying to do this without helm, to better understand K8S.</p>
<p>The logs of the Loki pod look as expected when compared to a Docker-based setup on a VM.
But I do not know how to start the Promtail service without a port, since in a Docker setup Promtail does not have to expose a port. I get the following error:</p>
<blockquote>
<p>The Service "promtail" is invalid: spec.ports: Required value</p>
</blockquote>
<p>My configuration files look like:
<code>loki-config.yml</code></p>
<pre><code>auth_enabled: false
server:
http_listen_port: 3100
grpc_listen_port: 9096
ingester:
wal:
enabled: true
dir: /tmp/wal
lifecycler:
address: 127.0.0.1
ring:
kvstore:
store: inmemory
replication_factor: 1
final_sleep: 0s
chunk_idle_period: 1h # Any chunk not receiving new logs in this time will be flushed
max_chunk_age: 1h # All chunks will be flushed when they hit this age, default is 1h
chunk_target_size: 1048576 # Loki will attempt to build chunks up to 1.5MB, flushing first if chunk_idle_period or max_chunk_age is reached first
chunk_retain_period: 30s # Must be greater than index read cache TTL if using an index cache (Default index read cache TTL is 5m)
max_transfer_retries: 0 # Chunk transfers disabled
schema_config:
configs:
- from: 2020-10-24
store: boltdb-shipper
object_store: filesystem
schema: v11
index:
prefix: index_
period: 24h
storage_config:
boltdb_shipper:
active_index_directory: /tmp/loki/boltdb-shipper-active
cache_location: /tmp/loki/boltdb-shipper-cache
cache_ttl: 24h # Can be increased for faster performance over longer query periods, uses more disk space
shared_store: filesystem
filesystem:
directory: /tmp/loki/chunks
compactor:
working_directory: /tmp/loki/boltdb-shipper-compactor
shared_store: filesystem
limits_config:
reject_old_samples: true
reject_old_samples_max_age: 168h
ingestion_burst_size_mb: 16
ingestion_rate_mb: 16
chunk_store_config:
max_look_back_period: 0s
table_manager:
retention_deletes_enabled: false
retention_period: 0s
ruler:
storage:
type: local
local:
directory: /tmp/loki/rules
rule_path: /tmp/loki/rules-temp
alertmanager_url: http://localhost:9093
ring:
kvstore:
store: inmemory
enable_api: true
</code></pre>
<p><code>promtail-config.yml</code></p>
<pre><code>server:
http_listen_port: 9080
grpc_listen_port: 0
# this is the place where promtail will store the progress about how far it has read the logs
positions:
filename: /tmp/positions.yaml
# address of loki server to which promtail should push the logs
clients:
- url: https://999999:[email protected]/api/prom/push
# which logs to read/scrape
scrape_configs:
- job_name: system
static_configs:
- targets:
- localhost
labels:
job: varlogs
__path__: /var/log/*log
- job_name: node
static_configs:
- targets:
- localhost
labels:
job: node # label-1
host: localhost # label-2
__path__: /var/lib/docker/containers/*/*log
</code></pre>
<p>Then the deployment files:
<code>loki-deploy.yaml</code></p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: loki
spec:
selector:
matchLabels:
app: loki
network: cluster-1
replicas: 1
template:
metadata:
labels:
app: loki
network: cluster-1
spec:
containers:
- name: loki
image: grafana/loki
ports:
- containerPort: 3100
volumeMounts:
- name: loki-config-volume
mountPath: /etc/loki/loki.yml
subPath: loki.yml
volumes:
- name: loki-config-volume
configMap:
name: "loki-config"
---
apiVersion: v1
kind: Service
metadata:
name: loki
namespace: monitoring
spec:
selector:
app: loki
type: NodePort
ports:
- name: loki
protocol: TCP
port: 3100
</code></pre>
<p>And finally <code>promtail-deploy.yaml</code>:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: promtail
spec:
selector:
matchLabels:
app: promtail
network: cluster-1
replicas: 1
template:
metadata:
labels:
app: promtail
network: cluster-1
spec:
containers:
- name: promtail
image: grafana/promtail
volumeMounts:
- name: promtail-config-volume
mountPath: /mnt/config/promtail-config.yml
subPath: promtail.yml
volumes:
- name: promtail-config-volume
configMap:
name: "promtail-config"
---
apiVersion: v1
kind: Service
metadata:
name: promtail
namespace: monitoring
</code></pre>
| <p>The issue you're describing is answered exactly by the error message.</p>
<p>Your second Kubernetes Service manifest, named <code>promtail</code>, does not have any specification. For services, at least <code>spec.ports</code> is required. You should add a label selector as well, so the Service can pick up the Deployment's pods properly.</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: promtail
namespace: monitoring
spec:
selector:
app: promtail
ports:
- port: <ServicePort>
targetPort: <PodPort>
</code></pre>
<p>However, if you do not need to communicate with the Promtail pods from external services, then simply skip creating the Service itself.</p>
<p>May I add, if you need to expose these logs to a service running outside of your cluster, such as Grafana Cloud, you should create a Service of <code>LoadBalancer</code> type for Loki instead. This will request a public IP for it, making it accessible worldwide - assuming your Kubernetes cluster is managed by some cloud provider.</p>
<p>Making Loki public is insecure, but a good first step towards consuming these logs externally.</p>
|
<p>I am creating a custom Builder Image using S2i dotnet core. This will run in OpenShift linux container</p>
<p>I have modified the custom builder image and included few lines to copy few dlls and ".so" files</p>
<p>When running the container in OpenShift I am facing the below error.</p>
<p>The error says:</p>
<pre><code>"unable to load shared library 'CustomCppWrapper' or one of its dependencies. In order to help diagnose loading problems,
consider setting the LD_DEBUG environment variable: libWrapperName: cannot open shared object file: No such file or directory"
</code></pre>
<p>I have set the LD_DEBUG environment variable and found the few errors below:</p>
<pre><code>/lib64/libstdc++.so.6: error: version lookup error: version `CXXABI_1.3.8' not found (required by /opt/app-root/app/libCWrappeNamer.so) (fatal)
/lib64/libstdc++.so.6: error: version lookup error: version `CXXABI_1.3.8' not found (required by ./libCWrappeNamer.so) (fatal)
</code></pre>
<p>I ran the command below and found the following:</p>
<p><code>ldd libCWrappeNamer.so</code></p>
<pre><code>./libCWrappeNamer.so: /lib64/libstdc++.so.6: version `CXXABI_1.3.8' not found (required by ./libCWrappeNamer.so)
./libCWrappeNamer.so: /lib64/libstdc++.so.6: version `GLIBCXX_3.4.20' not found (required by /ab/sdk/customlib/gcc540/lib/libabc.so)
./libCWrappeNamer.so: /lib64/libstdc++.so.6: version `GLIBCXX_3.4.20' not found (required by /ab/sdk/customlib/gcc540/lib/libxmlc.so)
</code></pre>
<p>Below is my custom Dockerfile for the builder image:</p>
<pre><code>FROM dotnet/dotnet-31-runtime-rhel7
# This image provides a .NET Core 3.1 environment you can use to run your .NET
# applications.
ENV PATH=/opt/app-root/src/.local/bin:/opt/app-root/src/bin:/opt/app-root/node_modules/.bin:${PATH} \
STI_SCRIPTS_PATH=/usr/libexec/s2i
LABEL io.k8s.description="Platform for building and running .NET Core 3.1 applications" \
io.openshift.tags="builder,.net,dotnet,dotnetcore,rh-dotnet31"
# Labels consumed by Red Hat build service
LABEL name="dotnet/dotnet-31-rhel7" \
com.redhat.component="rh-dotnet31-container" \
version="3.1" \
release="1" \
architecture="x86_64"
#-------------------------- COPY CPP LIBS
COPY CustomCppWrapper.lib /opt/app-root/app
COPY libCWrappeNamer.so /opt/app-root/app
#----------------------------------
# Labels consumed by Eclipse JBoss OpenShift plugin
LABEL com.redhat.dev-mode="DEV_MODE:false" \
com.redhat.deployments-dir="/opt/app-root/src"
# Switch to root for package installs
USER 0
# Copy the S2I scripts from the specific language image to $STI_SCRIPTS_PATH.
COPY ./s2i/bin/ /usr/libexec/s2i
RUN INSTALL_PKGS="rh-nodejs10-npm rh-nodejs10-nodejs-nodemon rh-dotnet31-dotnet-sdk-3.1 rsync" && \
yum install -y --setopt=tsflags=nodocs --disablerepo=\* \
--enablerepo=rhel-7-server-rpms,rhel-server-rhscl-7-rpms,rhel-7-server-dotnet-rpms \
$INSTALL_PKGS && \
rpm -V $INSTALL_PKGS && \
yum clean all -y && \
# yum cache files may still exist (and quite large in size)
rm -rf /var/cache/yum/*
# Directory with the sources is set as the working directory.
RUN mkdir /opt/app-root/src
WORKDIR /opt/app-root/src
# Trigger first time actions.
RUN scl enable rh-dotnet31 'dotnet help'
# Build the container tool.
RUN /usr/libexec/s2i/container-tool build-tool
# Since $HOME is set to /opt/app-root, the yum install may have created config
# directories (such as ~/.pki/nssdb) there. These will be owned by root and can
# cause actions that work on all of /opt/app-root to fail. So we need to fix
# the permissions on those too.
RUN chown -R 1001:0 /opt/app-root && fix-permissions /opt/app-root
ENV ENABLED_COLLECTIONS="$ENABLED_COLLECTIONS rh-nodejs10" \
# Needed for the `dotnet watch` to detect changes in a container.
DOTNET_USE_POLLING_FILE_WATCHER=true
# Run container by default as user with id 1001 (default)
USER 1001
# Set the default CMD to print the usage of the language image.
CMD /usr/libexec/s2i/usage
</code></pre>
| <p>Your code depends on libstdc++.so.6, but it would seem the version it needs isn't installed.</p>
<p>In your Dockerfile, add the <code>yum install</code> command that should do it. It would depend on what operating system you're using, but for RHEL 7, for example, you could do:</p>
<p><code>RUN yum install -y libstdc++</code></p>
<p>With more details of the operating system I can give a more specific command</p>
<p>In this specific example the Dockerfile could look something like this:</p>
<pre><code>FROM centos:7
RUN yum install -y libstdc++
CMD ["/bin/bash"]
</code></pre>
|
<p>I have an asp.net core 7.0 api application in a docker container hosted in Kubernetes behind an Nginx ingress controller.</p>
<p>To get the client IP address I'm using</p>
<blockquote>
<p>context.HttpContext.Connection.RemoteIpAddress</p>
</blockquote>
<p>For all user requests I get a private IP address like '10.244.0.1'.</p>
<p>In such instances I'm expecting the client's public IP address.</p>
| <p>You need to configure nginx to <a href="https://learn.microsoft.com/en-us/aspnet/core/host-and-deploy/proxy-load-balancer?view=aspnetcore-7.0#nginx-configuration" rel="nofollow noreferrer">forward the <code>X-Forwarded-For</code> and <code>X-Forwarded-Proto</code> headers</a>. Example:</p>
<pre><code>server {
listen 80;
server_name example.com *.example.com;
location / {
proxy_pass http://127.0.0.1:5000;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection keep-alive;
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
}
</code></pre>
<p>Then setup the <code>UseForwardedHeaders</code> middleware. This middleware will update <code>HttpContext.Connection.RemoteIpAddress</code> using the <code>X-Forwarded-For</code> header value.</p>
<pre class="lang-cs prettyprint-override"><code>var builder = WebApplication.CreateBuilder(args);
// ...
builder.Services.Configure<ForwardedHeadersOptions>(options =>
{
options.ForwardedHeaders =
ForwardedHeaders.XForwardedFor | ForwardedHeaders.XForwardedProto;
});
var app = builder.Build();
app.UseForwardedHeaders();
// ...
</code></pre>
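<p>One caveat with the snippet above: by default the forwarded-headers middleware only trusts proxies on the loopback address, and the ingress pod will not be on loopback, so the headers can still be ignored. If that happens, clearing the allow-lists is the usual (if permissive) workaround; a sketch of the adjusted registration:</p>
<pre class="lang-cs prettyprint-override"><code>builder.Services.Configure<ForwardedHeadersOptions>(options =>
{
    options.ForwardedHeaders =
        ForwardedHeaders.XForwardedFor | ForwardedHeaders.XForwardedProto;
    // The nginx ingress pod is not a loopback/known proxy, so without this
    // the X-Forwarded-* headers would not be applied.
    options.KnownNetworks.Clear();
    options.KnownProxies.Clear();
});
</code></pre>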
<p>Check: <a href="https://learn.microsoft.com/en-us/aspnet/core/host-and-deploy/proxy-load-balancer?view=aspnetcore-7.0#forwarded-headers" rel="nofollow noreferrer">Configure ASP.NET Core to work with proxy servers and load balancers</a></p>
|
<p>I have this redis instance in a pod running on 1.21 cluster.</p>
<p>Running <code>kubectl cp -c redis --request-timeout=1m redis-pod:data/dump.rdb /tmp/data/dump.rdb</code></p>
<p>or</p>
<p><code>kubectl exec --request-timeout=1m -i redis-pod -- cat /data/dump.rdb > /tmp/data/dump.rdb</code></p>
<p>I get the following two errors:</p>
<p>The <code>kubectl cp</code> approach generates: <code>error: unexpected EOF</code></p>
<p>The second approach creates the file but incomplete, with a smaller size and when loading it with another redis instance I got a server message like the DB is corrupted and wont start.</p>
<p>I've run the Redis <code>SAVE</code> command beforehand. I also copied the <code>dump.rdb</code> to another directory and tried to copy that one.</p>
<p>Any help would be appreciated.</p>
| <p>You can use <strong>kubectl cp</strong> with <strong>--retries=10</strong> if the connection is timing out while copying the file.</p>
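<p>For example, combined with the command from the question (same paths; the <code>--retries</code> flag needs a reasonably recent kubectl):</p>
<pre><code>kubectl cp -c redis --retries=10 --request-timeout=5m redis-pod:data/dump.rdb /tmp/data/dump.rdb
</code></pre>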
<p>If you are inside the <strong>Redis POD</strong> and just want to <strong>get the data out</strong>, you can install the <strong>aws</strong> or <strong>gcp</strong> <strong>CLI</strong> and upload the whole file from the POD to a bucket.</p>
<p>Touching Redis data files is fine (I have played with this a lot myself), but I would recommend checking out <a href="https://github.com/redis-developer/riot" rel="nofollow noreferrer"><strong>Riot-Redis</strong></a>.</p>
<p>I have used it a lot for migrating Redis data of large production systems from one cluster to another.</p>
<p>Read more about Riot-Redis doc : <a href="https://developer.redis.com/riot/riot-redis/" rel="nofollow noreferrer">https://developer.redis.com/riot/riot-redis/</a></p>
<p><strong>RIOT-Redis</strong> is a <strong>data</strong> <strong>migration</strong> tool for <strong>Redis</strong>.</p>
<blockquote>
<p>Most Redis migration tools available today are offline in nature.
Migrating data from AWS ElastiCache to Redis Enterprise Cloud for
example means backing up your Elasticache data to an AWS S3 bucket and
importing it into Redis Enterprise Cloud using its UI. RIOT-Redis
allows for live data migration between any Redis databases.</p>
</blockquote>
|
<p>What is the correct way of memory handling in OpenShift/Kubernetes?</p>
<p>If I create a project in OKD, how can I determine the optimal memory usage of pods? For example, I use 1 deployment for 1-2 pods and each pod uses 300-500 MB of RAM (Spring Boot apps). So technically, 20 pods use around 6-10 GB of RAM, but as I see it, sometimes each project could have around 100-150 containers, which needs at least 30-50 GB of RAM.</p>
<p>I also tried horizontal scaling and/or requests/limits, but each micro-service still uses a lot of memory.</p>
<p>However, starting a pod requires around 500-700 MB of RAM; after the Spring container has started, it can live with around 300 MB as mentioned.</p>
<p>So, I have 2 questions:</p>
<ul>
<li>Is it able to give extra memory but only for the first X minutes for each pod start?</li>
<li>If not, than what is the best practice to handle memory shortage, if I have limited memory (16GB) and wants to run 35-40 pod?</li>
</ul>
<p>Thanks for the answer in advance!</p>
| <blockquote>
<p>Is it able to give extra memory but only for the first X minutes for each pod start?</p>
</blockquote>
<p>You do get this behavior when you set the <strong>limit</strong> to a higher value than the <strong>request</strong>. This allows pods to burst, unless they all need the memory at the same time.</p>
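<p>A minimal sketch of what that looks like for one Spring Boot container; the numbers are assumptions based on the figures in the question, so tune them to your app:</p>
<pre><code>resources:
  requests:
    memory: "350Mi"   # roughly the steady-state footprint; this is what the scheduler reserves
  limits:
    memory: "700Mi"   # headroom for the startup spike
</code></pre>
<p>The scheduler only reserves the request, so 35-40 such pods can fit on 16GB as long as they do not all hit their limit at the same time; a pod that exceeds its memory limit is OOM-killed.</p>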
<blockquote>
<p>If not, than what is the best practice to handle memory shortage, if I have limited memory (16GB) and wants to run 35-40 pod?</p>
</blockquote>
<p>It is common to use some form of cluster autoscaler to add more nodes to your cluster if it needs more capacity. This is easy if you run in the cloud.</p>
<p>In general, Java and JVM is memory hungry, consider some other technology if you want to use less memory. How much memory an application needs/uses totally depends on your application, e.g what data structures are used.</p>
|
<p>I provisioned Alertmanager using Helm (and Argo CD).
I need to set the smtp_auth_password value, but not as plain text.</p>
<pre><code>smtp_auth_username: 'apikey'
smtp_auth_password: $API_KEY
</code></pre>
<p>How can I achieve this? I have heard about "external secrets", but is that really the easiest way?</p>
| <h3>Solution</h3>
<p>if you use <code>prometheus-community/prometheus</code> which includes this alertmanager <a href="https://github.com/prometheus-community/helm-charts/tree/main/charts/alertmanager" rel="nofollow noreferrer">chart</a> as a dependency, then you can do the following:</p>
<p><strong>create secret</strong> in the same namespace where your alertmanager pod is running:</p>
<pre class="lang-bash prettyprint-override"><code>k create secret generic alertmanager-secrets \
--from-literal="opsgenie-api-key=YOUR-OPSGENIE-API-KEY" \
--from-literal="slack-api-url=https://hooks.slack.com/services/X03R2856W/A14T19TKEGM/...."
</code></pre>
<p><strong>mount that secret</strong> via use of extraSecretMounts</p>
<pre class="lang-yaml prettyprint-override"><code>alertmanager:
enabled: true
service:
annotations:
prometheus.io/scrape: "true"
# contains secret values for opsgenie and slack receivers
extraSecretMounts:
- name: secret-files
mountPath: /etc/secrets
subPath: ""
secretName: alertmanager-secrets
readOnly: true
</code></pre>
<p><strong>use them in your receivers</strong>:</p>
<pre class="lang-yaml prettyprint-override"><code>receivers:
- name: slack-channel
slack_configs:
- channel: '#client-ccf-ccl-alarms'
api_url_file: /etc/secrets/slack-api-url <-------------------THIS
title: '{{ template "default.title" . }}'
text: '{{ template "default.description" . }}'
pretext: '{{ template "slack.pretext" . }}'
color: '{{ template "slack.color" . }}'
footer: '{{ template "slack.footer" . }}'
send_resolved: true
actions:
- type: button
text: "Query :mag:"
url: '{{ template "alert_query_url" . }}'
- type: button
text: "Silence :no_bell:"
url: '{{ template "alert_silencer_url" . }}'
- type: button
text: "Karma UI :mag:"
url: '{{ template "alert_karma_url" . }}'
- type: button
text: "Runbook :green_book:"
url: '{{ template "alert_runbook_url" . }}'
- type: button
text: "Grafana :chart_with_upwards_trend:"
url: '{{ template "alert_grafana_url" . }}'
- type: button
text: "KB :mag:"
url: '{{ template "alert_kb_url" . }}'
- name: opsgenie
opsgenie_configs:
- send_resolved: true
api_key_file: /etc/secrets/opsgenie-api-key <-------------------THIS
message: '{{ template "default.title" . }}'
description: '{{ template "default.description" . }}'
source: '{{ template "opsgenie.default.source" . }}'
priority: '{{ template "opsgenie.default.priority" . }}'
tags: '{{ template "opsgenie.default.tags" . }}'
</code></pre>
<p>If you want to use email functionality of <a href="https://prometheus.io/docs/alerting/latest/configuration/#email_config" rel="nofollow noreferrer">email_config</a>
then simply use the same approach with:</p>
<pre><code>[ auth_password_file: <string> | default = global.smtp_auth_password_file ]
</code></pre>
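<p>A minimal sketch of an email receiver using that, assuming the SMTP password is mounted the same way as the other secrets above; the file name and addresses are placeholders, and <code>auth_password_file</code> needs a reasonably recent Alertmanager version:</p>
<pre class="lang-yaml prettyprint-override"><code>receivers:
  - name: email
    email_configs:
      - to: 'alerts@example.com'
        from: 'alertmanager@example.com'
        smarthost: 'smtp.sendgrid.net:587'
        auth_username: 'apikey'
        auth_password_file: /etc/secrets/smtp-password   # key from the mounted Secret
        require_tls: true
</code></pre>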
|
<p>Below are the setup details:
KEDA is installed,
Prometheus is deployed,
and the application uses the below scaled-object template for the HPA:</p>
<pre><code>keda:
triggers:
- metadata:
metricName: container_memory_usage
query: avg(floor((container_memory_usage_bytes{pod=~"pythonb-.*",container=~"proxy"} / container_spec_memory_limit_bytes != +Inf) * 100))
serverAddress: <serveraddress>
threshold: '70'
type: prometheus
</code></pre>
<p>Basically we want to scale the deployment based on the given Prometheus query (container memory utilisation of particular pods; if it exceeds 70%, the HPA should scale the pods).
When we try the above query in Prometheus it returns results such as 8.*, 10.*, 25.3, i.e. a single-element response.
But through KEDA it gives the result as below:</p>
<pre><code>kubectl get hpa -n integration keda-pythonb
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
keda-pythonb Deployment/pythonb 3500m/70 (avg), 34%/87% + 1 more... 2 10 2 14m
</code></pre>
<p>Instead of a single value it gives 3500m as the current value.
Does KEDA convert the data returned from the Prometheus query? Any pointers would be helpful.
I hope the Prometheus query is correct.</p>
| <p>We just solved this one after a lot of off-and-on hunting. Turns out KEDA has an option called <code>metricType</code> that you can specify under <code>triggers</code>. TLDR you need to set that to <code>"Value"</code>.</p>
<p>To understand why you need to dive into how HPA works in Kubernetes. When you define a <code>kind: HorizontalPodAutoscaler</code> you specify the metrics that are used for scaling. KEDA does this for you and creates an external metric like this:</p>
<pre class="lang-yaml prettyprint-override"><code> metrics:
- external:
metric:
name: ...
selector:
matchLabels:
scaledobject.keda.sh/name: ...
target:
type: AverageValue
averageValue: ...
type: External
</code></pre>
<p>There are <code>Value</code> and <code>AverageValue</code> metric types. <code>AverageValue</code> is the default, meant for metrics like <code>http-requests-per-second</code>, which would need to be divided by the number of replicas before compared to the target. <code>Value</code>, on the other hand, takes the direct value from your metric without averaging it.</p>
<p>Since your Prometheus query is returning an average across pods already, you need to use <code>Value</code>. The clue is in your <code>kubectl get hpa</code> output: <code>3500m/70 (avg)</code>.</p>
<p>See <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/#autoscaling-on-metrics-not-related-to-kubernetes-objects" rel="nofollow noreferrer">docs on HPA with external metrics</a>.</p>
<p>In KEDA that is specified using the <code>metricType</code> option under the <code>triggers</code> field.</p>
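<p>Translated to the trigger from the question, that would look roughly like this; the query and threshold are unchanged, only <code>metricType</code> is added at the trigger level:</p>
<pre><code>keda:
  triggers:
    - type: prometheus
      metricType: Value
      metadata:
        metricName: container_memory_usage
        serverAddress: <serveraddress>
        threshold: '70'
        query: avg(floor((container_memory_usage_bytes{pod=~"pythonb-.*",container=~"proxy"} / container_spec_memory_limit_bytes != +Inf) * 100))
</code></pre>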
<p>See <a href="https://keda.sh/docs/2.9/concepts/scaling-deployments/#triggers" rel="nofollow noreferrer">KEDA: Scaling Deployments</a></p>
|
<p>Let's say you use <a href="https://argoproj.github.io/cd/" rel="nofollow noreferrer">Argocd</a> to deploy helm charts to Kubernetes. Things work great, but you have a Kubernetes resource <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/finalizers/" rel="nofollow noreferrer">finalizer</a> on a resource. Somebody deletes the resource and now Argo just waits in the state of 'Progressing' or 'Deleting'. It can't actually do the delete due to the finalizer. This is a good protection mechanism for very important resources, such as AWS IAM resources.</p>
<p>But I am hopeful somebody can help me figure out. Is there anyway to stop the operation given to argo and instead just let it sync again as normal? Maybe somebody made a mistake and the finalizer worked as intended. Instead of clearing the finalizer and dealing with the consequences. Can the consequences be prevented by undoing argocds operation?</p>
<p>Thank you</p>
| <p>Either you need to delete the corresponding Argo CD application or you need to roll back the deployment. If you delete the application it will remove all the resources created by the application and it will stop the operation. If you roll back to the previous version it will undo the changes you have made in the current deployment and bring all your resources back to their previous versions.
You can use the Argo CD CLI, <code>argocd app rollback APPNAME ID</code> (where ID is the history ID of the revision you want), to roll back to the particular version you want.</p>
<p>You can also roll back from Argo CD UI. If your finalizer is still present you need to manually remove the finalizer and then re-apply the resource definitions.</p>
<p>Please check this <a href="https://argoproj.github.io/argo-rollouts/generated/kubectl-argo-rollouts/kubectl-argo-rollouts_undo/" rel="nofollow noreferrer">document</a></p>
|
<p>I have a use-case where I need to mount a <code>hostpath</code> to a pod but that <code>hostpath should have a subdirectory with PODNAME</code>, I know something like this can be used with statefulsets but due to some constraints a hostpath mount is needed in deployment where the root directory will have sub directories as <code>podnames</code>.</p>
<p>Can this be achieved?</p>
| <p>You can use the <strong><a href="https://kubernetes.io/docs/concepts/workloads/pods/downward-api/" rel="nofollow noreferrer">downward API</a></strong> of Kubernetes to expose the pod name as an <a href="https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/" rel="nofollow noreferrer">environment variable</a>:</p>
<pre><code>- name: pod_name
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.name
</code></pre>
<p>Feel free to refer my GitHub YAML : <a href="https://github.com/harsh4870/OCI-public-logging-uma-agent/blob/main/deployment.yaml" rel="nofollow noreferrer">https://github.com/harsh4870/OCI-public-logging-uma-agent/blob/main/deployment.yaml</a></p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: nginx
spec:
containers:
- name: app
image: <App image>
env:
- name: pod_name
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.name
volumeMounts:
- mountPath: /app_logs
name: app-logs
subPathExpr: $(pod_name)
volumes:
  - name: app-logs # must match the volumeMounts name above
hostPath:
path: /var/log/app_logs
type: DirectoryOrCreate
</code></pre>
<p>Here is full article i have used <strong>downwardAPI</strong> for logging : <a href="https://medium.com/@harsh.manvar111/oke-logging-with-uma-agent-in-oci-d6f55a8bcc02" rel="nofollow noreferrer">https://medium.com/@harsh.manvar111/oke-logging-with-uma-agent-in-oci-d6f55a8bcc02</a></p>
|
<p>I have a kubernetes namespace in <code>phase: Terminating</code> (waiting on finalizers or something) and I want it to stop terminating, without deleting it.</p>
<p>Is there a way to do this?</p>
| <p>You can stop a Kubernetes namespace from terminating by updating the finalizers associated with it.</p>
<p>The finalizers of a namespace determine the conditions that must be met before the namespace can be deleted. If a namespace is in a "Terminating" state, it means that there are still finalizers associated with it that need to be resolved. To stop the termination, you need to remove the finalizers from the namespace's resource definition.</p>
<p>You can do it by:</p>
<ol>
<li>retrieving the current definition of the namespace:</li>
</ol>
<pre><code>kubectl get namespace <namespace-name> -o yaml > namespace.yaml
</code></pre>
<ol start="2">
<li>then in the definition file, look for the finalizers field and remove the items associated with it:</li>
</ol>
<pre><code># vi/nano/emacs namespace.yaml   # open with your favorite editor
# the file will look roughly like this:
apiVersion: v1
kind: Namespace
metadata:
  name: <namespace-name>
spec:
  finalizers:
  - kubernetes
</code></pre>
<p>save it.</p>
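<p>After removing the entries, the relevant part should look roughly like this (a sketch; leave any other fields in your file unchanged):</p>
<pre><code>apiVersion: v1
kind: Namespace
metadata:
  name: <namespace-name>
spec:
  finalizers: []
</code></pre>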
<ol start="3">
<li>and then update the namespace definition with the edited file:</li>
</ol>
<pre><code>kubectl replace -f namespace.yaml
</code></pre>
<p>After that, the namespace should no longer be in a "Terminating" state and will remain intact.</p>
|
<p>I have a basic kubernetes helm template like below</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: test
namespace: {{ .Release.Namespace }}
labels:
app.kubernetes.io/version: {{ .Chart.AppVersion }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/instance: {{ .Release.Name }}
data:
config.tpl: |
{{- default ((tpl .Values.configTpl .) | indent 4) (tpl (.Files.Get "files/config.tpl") . | indent 4) -}}
</code></pre>
<p>and the values.yml file</p>
<pre><code>configTpl: |
{{ x=8gwifi.org }}
</code></pre>
<p>When I apply the helm chart it throws me an error:</p>
<pre><code>❯ helm upgrade mychart . --namespace=test --create-namespace --debug
upgrade.go:142: [debug] preparing upgrade for mychart
Error: UPGRADE FAILED: YAML parse error on 8gwifi.org-test/templates/configmap-logfilegen.yaml: error converting YAML to JSON: yaml: line 11: did not find expected comment or line break
helm.go:84: [debug] error converting YAML to JSON: yaml: line 11: did not find expected comment or line break
</code></pre>
<p>I tried different configuration</p>
<pre><code>config.tpl: |
{{- default (tpl .Values.configTpl . | indent 4) (tpl (.Files.Get "files/config.tpl") . | indent 4) -}}
</code></pre>
<p>still resulting in the same error. Is there a way to specify a config value so that if none is passed, the hardcoded one is used?</p>
<p>I'm sure it's a YAML syntax issue but I couldn't figure it out; I checked all cases.</p>
<p><strong>Based on David suggestion</strong></p>
<p>Template debug is showing this</p>
<pre><code>data:
config.tpl: |-
x := 8gwifi.org
y := "functions"
</code></pre>
<p><strong>I can clearly see that y is not indented and is throwing the YAML syntax error, not sure how to fix this</strong></p>
<p>This is the updated definition</p>
<pre><code>data:
config.tpl: |-
{{ (tpl .Values.configTpl . | indent 4) | default (tpl (.Files.Get "files/config.tpl") . | indent 4) -}}
</code></pre>
<p>values.yml</p>
<pre><code>configTpl: |
x := "8gwifi.org"
y := "function"
</code></pre>
| <p>You're hitting problems with whitespace in the first line of the block scalar. You should check two things:</p>
<ol>
<li>The template block containing <code>indent</code> must not itself be indented, it must start at the first column of its line; and</li>
<li>The template block containing <code>indent</code> must not have a <code>-</code> inside the opening curly braces.</li>
</ol>
<pre class="lang-yaml prettyprint-override"><code>{{- $configTpl := .Values.configTpl | default (.Files.Get "tiles/config.tpl") }}
config.tpl: |
{{ tpl $configTpl . | indent 4 }}
</code></pre>
<p>The templating language isn't really aware of YAML syntax as it runs. If you have spaces in front of the <code>indent</code> line, they will get emitted, and then <code>indent</code> adds its own leading space, resulting in the last output you get where the first line is indented extra. The <code>-</code> whitespace control marker will also consume the preceding newline, resulting in the first line of the output being on the same line as the YAML block scalar marker.</p>
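<p>With the values file from the question, the corrected template should render roughly like this (a sketch of the expected output):</p>
<pre class="lang-yaml prettyprint-override"><code>data:
  config.tpl: |
    x := "8gwifi.org"
    y := "function"
</code></pre>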
|
<p>We use <code>kubectl set image</code> to rollout a new version <code>2.0.0</code> of an existing application. We then use <code>kubectl rollout status</code> to wait for the new pod to become ready so that we can run some basic tests.</p>
<p>The problem is, <code>kubectl rollout status</code> returns (implying the new v2 pod is ready) but when we use <code>kubectl exec</code> we ALWAYS land in the old v1 pod.</p>
<pre class="lang-bash prettyprint-override"><code>$ date
Mon 13 Feb 2023 02:33:50 PM CET
$ k set image deploy/myapp myapp=myapp:2.0.0 && k rollout status deploy/myapp
deployment.apps/myapp image updated
Waiting for deployment "myapp" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "myapp" rollout to finish: 1 old replicas are pending termination...
deployment "myapp" successfully rolled out
</code></pre>
<p>Here, we assume the new version is running. Let's check:</p>
<pre class="lang-bash prettyprint-override"><code>$ k exec deploy/myapp -- show_version
1.0.0
</code></pre>
<p>Nope, it's still the old version.<br />
Check the deplyoment:</p>
<pre class="lang-bash prettyprint-override"><code>$ k get deploy/myapp
NAME READY UP-TO-DATE AVAILABLE AGE
myapp 1/1 1 1 273d
</code></pre>
<p>Looks ready (K9S shows 1 pod "Terminating" and 1 pod ready).<br />
Check again:</p>
<pre class="lang-bash prettyprint-override"><code>$date
Mon 13 Feb 2023 02:34:00 PM CET
$ k exec deploy/myapp -- show_version
1.0.0
</code></pre>
<p>Nope, check the pods:</p>
<pre class="lang-bash prettyprint-override"><code>kubectl get pod | grep myapp-
myapp-79454d746f-zw5kg 1/1 Running 0 31s
myapp-6c484f86d4-2zsk5 1/1 Terminating 0 3m5s
</code></pre>
<p>So our pod is running, we just can't exec into it - it always "picks" the terminating pod:</p>
<pre class="lang-bash prettyprint-override"><code>$date
Mon 13 Feb 2023 02:34:10 PM CET
$ k exec deploy/myapp -- show_version
1.0.0
</code></pre>
<p>Wait 20-30s:</p>
<pre class="lang-bash prettyprint-override"><code>$ date
Mon 13 Feb 2023 02:34:25 PM CET
$ k exec deploy/myapp -- show_version
2.0.0
</code></pre>
<p>Finally we have <code>exec</code> on the correct pod.</p>
<p>Why/how can we wait for the old pod to terminate?
OR
How can we ensure we exec into the correct pod for testing?</p>
| <h2>Update</h2>
<blockquote>
<p>Even better would be to get the new_pod id and exec directly into that.</p>
</blockquote>
<p>Also possible, yes. Try this:</p>
<pre><code>k rollout status deploy/myapp >/dev/null && \
k get po -l app=myapp | grep Running | awk '{print $1}' | xargs -I{} kubectl exec {} -- show_version
</code></pre>
<blockquote>
<p>I would love to know what controls that 30s time.</p>
</blockquote>
<p>This can be configured using the <a href="https://cloud.google.com/blog/products/containers-kubernetes/kubernetes-best-practices-terminating-with-grace" rel="nofollow noreferrer">terminationGracePeriodSeconds</a> field in the pod's spec. The value defaults to, you guessed it right, 30s. If you're not concerned about data loss (due to the immediate shutdown), it can be set to 0. After that you can directly exec into the new pod:</p>
<pre class="lang-yaml prettyprint-override"><code> spec:
terminationGracePeriodSeconds: 0
</code></pre>
<pre><code>k rollout status deploy/myapp >/dev/null && k exec deploy/myapp -- show_version
</code></pre>
<hr />
<p>While being "Terminated" the old pod is still in <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-phase" rel="nofollow noreferrer">phase <em>Running</em></a>, and the <code>kubectl exec deploy/myapp</code> seems to use the <a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#exec" rel="nofollow noreferrer">first <em>Running</em> pod of the deployment </a>.</p>
<p>I would suggest:</p>
<ol>
<li>Retrieve and store the name of the currently running pod in a temp variable prior to deployment (assuming the pod has the label <code>app=myapp</code>)</li>
</ol>
<pre><code>$ old_pod=$(kubectl get pods -l app=myapp -o jsonpath='{.items[0].metadata.name}')
</code></pre>
<ol start="2">
<li>Deploy</li>
</ol>
<pre><code>$ k apply -f Deployment.yaml
</code></pre>
<ol start="3">
<li>Wait <a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#-em-status-em-" rel="nofollow noreferrer">until the rollout is done</a></li>
</ol>
<pre><code>$ k rollout status deploy/myapp
</code></pre>
<ol start="4">
<li>Wait <a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#wait" rel="nofollow noreferrer">until the <code>old_pod</code> is deleted</a></li>
</ol>
<pre><code>$ k wait --for=delete pod/$old_pod --timeout=-1s
</code></pre>
<ol start="5">
<li>Check the new pod</li>
</ol>
<pre><code>$ k exec deploy/myapp -- show_version
</code></pre>
|
<p>I was under the impression that the main point of a cluster-issuer is that it's not namespaced and doesn't have to be recreated across different resources; in general there could be one main cluster-issuer that will manage all ingresses across the cluster.</p>
<p>From what I am seeing, the cluster-issuer can only create one secret, and if it's in use by one ingress the second won't be created properly because it's already taken.</p>
<p>Is there anyway to create one cluster-issuer to manage all ingresses across the cluster?</p>
<p>Code included below</p>
<h3>Cluster-issuer.yaml</h3>
<pre><code>apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
name: letsencrypt-grafana
namespace: cert-manager
spec:
acme:
server: https://acme-v02.api.letsencrypt.org/directory
    email: [email protected]
privateKeySecretRef:
name: letsencrypt-grafana
solvers:
- selector:
dnsZones:
- "foo.com"
dns01:
route53:
region: eu-central-1
hostedZoneID: foo
accessKeyID: foo
secretAccessKeySecretRef:
name: aws-route53-creds
key: password.txt
</code></pre>
<h3>Ingress.yaml</h3>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: grafana-ingress
namespace: loki
annotations:
cert-manager.io/cluster-issuer: letsencrypt-grafana
kubernetes.io/tls-acme: "true"
nginx.ingress.kubernetes.io/ssl-redirect: "true"
nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
nginx.ingress.kubernetes.io/proxy-body-size: "125m"
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
ingressClassName: nginx
tls:
- hosts:
- grafana.foo.com
secretName: letsencrypt-grafana # < cert-manager will store the created certificate in this secret.
rules:
- host: grafana.foo.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: loki-grafana
port:
number: 80
</code></pre>
| <p>I would recommend creating a <strong>wildcard certificate</strong> using an <strong>Issuer/ClusterIssuer</strong>.</p>
<p>That way you will have a <strong>single secret</strong> holding the <strong>wildcard cert</strong>, which you can use across all your ingresses.</p>
<p>As you are already using <strong>DNS</strong> verification this will work well, since <strong>wildcard</strong> certificates do not support the <strong>HTTP</strong> challenge.</p>
<pre><code>apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
name: letsencrypt-prod
spec:
acme:
    email: [email protected]
server: https://acme-v02.api.letsencrypt.org/directory
privateKeySecretRef:
name: letsencrypt-prod
solvers:
- selector:
dnsZones:
- "devops.example.in"
dns01:
route53:
region: us-east-1
hostedZoneID: Z0152EXAMPLE
accessKeyID: AKIA5EXAMPLE
secretAccessKeySecretRef:
name: route53-secret
key: secret-access-key
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
name: le-crt
spec:
secretName: tls-secret
issuerRef:
kind: Issuer
name: letsencrypt-prod
commonName: "*.devops.example.in"
dnsNames:
- "*.devops.example.in"
</code></pre>
<p>Read my full article : <a href="https://medium.com/@harsh.manvar111/wild-card-certificate-using-cert-manager-in-kubernetes-3406b042d5a2" rel="nofollow noreferrer">https://medium.com/@harsh.manvar111/wild-card-certificate-using-cert-manager-in-kubernetes-3406b042d5a2</a></p>
<p><strong>Ingress</strong> & <strong>secret</strong> example</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    cert-manager.io/issuer: "letsencrypt-prod"
  name: ingress-resource-tls
  namespace: default
spec:
  ingressClassName: nginx
  rules:
  - host: "hello.devops.example.in"
    http:
      paths:
      - path: /
        pathType: ImplementationSpecific
        backend:
          service:
            name: hello-app
            port:
              number: 8080
  tls:
  - hosts:
    - "hello.devops.example.in"
    secretName: tls-secret
</code></pre>
|
<p>I have a helm chart A which depends on a third-party subchart B.
Chart B defines some CRDs which are used by chart A.
However, when I install chart A (hence B too), I get an error saying that the CRDs are not recognized. It seems that the CRs are being stored before CRDs.</p>
<p>Helm documentation about <a href="https://helm.sh/docs/chart_best_practices/custom_resource_definitions/" rel="noreferrer">CRD</a>
describes two ways to handle this order, either put the CRDs in a folder called crds, or use two separate charts and install them one after the other.</p>
<p>My questions are the following:</p>
<ol>
<li>Why doesn't Helm apply first the CRDs regardless of where they are? Why is the crds folder needed? What if the CRDs are in a chart that you don't wish to modify (like in my case)?</li>
<li>Doesn't the second option make the dependencies specification useless. Can't there be an order of execution for dependencies?</li>
<li>Is there a way, that I might have overlooked, to still keep 1 chart with a dependency and somehow make sure that the CRDs defined in the dependency are stored before being used? (hooks?)</li>
</ol>
<p>(you don't have to answer all the questions, an answer to any of them is appreciated)</p>
| <p>There's a pretty simple way to let your main chart install the objects defined by your dependency chart's CRDs. Just make them installed and upgraded with the <code>post-install</code> and <code>post-upgrade</code> hooks.</p>
<p>Just a simple example for you. Let's imagine, you need to install a cert-manager as a subchart and then your main chart needs to install an <code>Issuer</code>. Obviously, the initial installation fails, as the CRDs aren't installed yet, so the <code>Issuer</code> doesn't pass validation. But if you use the hook (by adding the following annotation to the template of your <code>Issuer</code>: <code>"helm.sh/hook": post-install,post-upgrade</code>), then the <code>Issuer</code> will be installed only in the very end of the installation procedure, when cert-manager is up and kicking.</p>
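<p>A minimal sketch of what that could look like in chart A (assuming cert-manager as the subchart; the name and the self-signed spec are just placeholders, the important part is the hook annotation):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: my-issuer             # hypothetical name
  annotations:
    "helm.sh/hook": post-install,post-upgrade
spec:
  selfSigned: {}
</code></pre>
<p>One thing to keep in mind with this approach: hook resources are not managed as part of the release like regular manifests, so they are not removed automatically on <code>helm uninstall</code> unless you also set a hook deletion policy.</p>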
|
<p>I'm trying to apply SSL to my kubernetes clusters (production & staging environment), but for now only on staging. I successfully installed the cert-manager, and since I have a 5 subdomains, I want to use wildcards, so I want to configure it with dns01. The problem is, we us GoDaddy for DNS management, but it's currently not supported (I think) by cert-manager. There is an issue (<a href="https://github.com/jetstack/cert-manager/issues/1083" rel="noreferrer">https://github.com/jetstack/cert-manager/issues/1083</a>) and also a PR to support this, but I was wondering if there is a workaround for this to use godaddy with cert-manager since there is not a lot of activity on this subject? I want to use ACME so I can use let's encrypt for certificates.</p>
<p>I'm fairly new to kubernetes, so if I missed something let me know. </p>
<p>Is it possible to use let's encrypt with other type of issuers than ACME? Is there any other way where I can use GoDaddy DNS & let's encrypt with kubernetes?</p>
<p>For now I don't have any Ingresses but only 2 services that are external faced. One frontend and one API gateway as LoadBalancer services.</p>
<p>Thanks in advance!</p>
| <p>The accepted solution does work -- a different issuer is one way to go. Though if you want to use the ACME issuer, you'll need to solve challenges. This can be done via either a <a href="https://cert-manager.io/docs/configuration/acme/http01/" rel="nofollow noreferrer">HTTP01</a> solver or a <a href="https://cert-manager.io/docs/configuration/acme/dns01/" rel="nofollow noreferrer">DNS01</a> solver. If you choose to go with the DNS01 solver, you'll either need:</p>
<ol>
<li>to move your DNS hosting from GoDaddy to one of the supported <a href="https://cert-manager.io/docs/configuration/acme/dns01/#supported-dns01-providers" rel="nofollow noreferrer">providers</a> (see the sketch after this list).</li>
<li>or you can try using this GoDaddy Webhook <a href="https://github.com/snowdrop/godaddy-webhook" rel="nofollow noreferrer">provider</a>, which you may already be aware of. Though I can't guarantee that the project is in working status.</li>
</ol>
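<p>For option 1, the issuer configuration for a supported provider could look roughly like the sketch below (assuming you moved the zone to Cloudflare and stored an API token in a secret; the names and email are placeholders):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-dns
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: [email protected]
    privateKeySecretRef:
      name: letsencrypt-dns
    solvers:
    - dns01:
        cloudflare:
          apiTokenSecretRef:
            name: cloudflare-api-token
            key: api-token
</code></pre>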
|
<p>I need to reduce my 'if else code' in my helm chart template<br />
How can I do that ?.</p>
<pre><code> {{- if .Values.global }}
{{- if .Values.global.namespace }}
namespace: {{ .Values.global.namespace }}
{{- else }}
namespace: {{ .Values.namespace }}
{{- end }}
{{- else }}
namespace: {{ .Values.namespace }}
{{- end}}
name: {{.Values.name}}
</code></pre>
| <p>"If <code>x</code> is truthy, then use its value, otherwise use <code>y</code>" is what the Helm (Sprig) <a href="https://docs.helm.sh/docs/chart_template_guide/function_list/#default" rel="nofollow noreferrer"><code>default</code></a> function does. You could replace the inner conditional with</p>
<pre class="lang-yaml prettyprint-override"><code>namespace: {{ .Values.global.namespace | default .Values.namespace }}
</code></pre>
<p>The outer conditional is trickier. The problem you're trying to work around here is, if <code>.Values.global</code> isn't defined, it will evaluate to <code>nil</code>, and then <code>.Values.global.namespace</code> is an error. The usual approach I use here is to again use <code>default</code> to get an empty dictionary if it isn't defined, at which point you can successfully do a lookup.</p>
<p>So you should be able to replace the entire block with</p>
<pre class="lang-yaml prettyprint-override"><code>{{- $global := .Values.global | default dict }}
namespace: {{ $global.namespace | default .Values.namespace }}
</code></pre>
|
<p>I need to reduce my 'if else code' in my helm chart template<br />
How can I do that ?.</p>
<pre><code> {{- if .Values.global }}
{{- if .Values.global.namespace }}
namespace: {{ .Values.global.namespace }}
{{- else }}
namespace: {{ .Values.namespace }}
{{- end }}
{{- else }}
namespace: {{ .Values.namespace }}
{{- end}}
name: {{.Values.name}}
</code></pre>
| <p>You could use a variable and also <code>{{with}}</code> (which sets the dot), e.g.:</p>
<pre><code>{{- $ns := .Values.namespace -}}
{{- with .Values.global }}{{ with .namespace }}{{ $ns = . }}{{ end }}{{ end -}}
namespace: {{ $ns }}
name: {{.Values.name}}
</code></pre>
|
<p>I'm configuring startup/liveness/readiness probes for kubernetes deployments serving Spring Boot services. According to the Spring Boot documentation, it's best practice to use the corresponding liveness & readiness actuator endpoints, as described here:
<a href="https://spring.io/blog/2020/03/25/liveness-and-readiness-probes-with-spring-boot" rel="nofollow noreferrer">https://spring.io/blog/2020/03/25/liveness-and-readiness-probes-with-spring-boot</a></p>
<p>What do you use for your startup probe?
What are your recommendations for failureThreshold, delay, period and timeout values?
Did you encounter issues when deploying isito sidecars to an existing setup?</p>
| <p>The startup probe is optional.</p>
<p>Originally, there were two types of probes in Kubernetes: readiness and liveness. However, people have encountered issues with slow-start containers. When a container takes a long time to start, Kubernetes does the first check on the liveness probe after initialDelaySeconds. If the check fails, Kubernetes attempts failureThreshold times with an interval of periodSeconds. If the liveness probe still fails, Kubernetes assumes that the container is not alive and restarts it. Unfortunately, the container will likely fail again, resulting in an endless cycle of restarting.</p>
<p>You may want to increase failureThreshold and periodSeconds to avoid the endless restarting, but it can cause longer detection and recovery times in case of a thread deadlock.</p>
<p>You may want to make the initialDelaySeconds longer to allow sufficient time for the container to start. However, it can be challenging to determine the appropriate delay since your application can run on various hardware. For instance, increasing initialDelaySeconds to 60 seconds to avoid this problem in one environment may cause unnecessary slow startup when deploying the service to a more advanced hardware that only requires 20 seconds to start. In such a scenario, Kubernetes waits for 60 seconds for the first liveness check, causing the pod to be idle for 40 seconds, and it still takes 60 seconds to serve.</p>
<p>To address this issue, Kubernetes introduced the startup probe in 1.16, which defers all other probes until a pod completes its startup process. For slow-starting pods, the startup probe can poll at short intervals with a high failure threshold until it is satisfied, at which point the other probes can begin.</p>
<p>If a container’s components take a long time to be ready except for the API component, the container can simply report 200 in the liveness probe, and the startup probe is not needed. Because the API component will be ready and report 200 very soon, Kubernetes will not restart the container endlessly, it will patiently wait until all the readiness probes indicate that the containers are all “ready” then take traffic to the pod.</p>
<p>The startup probe can be implemented in the same way as the liveness probe. Once the startup probe confirms that the container is initialized, the liveness probe will immediately report that the container is alive, leaving no room for Kubernetes to mistakenly restart the container.</p>
<p>Regarding the initialDelaySeconds, periodSeconds, failureThreshold, and timeout, it is really a balance between sensitivity and false positives. e.g., if you have a high failureThreshold and a high periodSeconds for the readiness probe, k8s cannot timely detect issues in the container and your pod continues to take traffic, hence many requests will fail. If you put a low failureThreshold and a low periodSeconds for the readiness probe, a temporary problem could take the pod out of traffic, that's a false positive. I kind of prefer default failureThreshold to 3 and periodSeconds to 5, successThreshold to 1 or 2.</p>
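<p>As a concrete starting point, a probe configuration along those lines could look like the sketch below (the port and the Spring Boot actuator paths are assumptions and require the Kubernetes probes support described in the linked article to be enabled; tune the numbers to your application's startup time):</p>
<pre class="lang-yaml prettyprint-override"><code>startupProbe:
  httpGet:
    path: /actuator/health/liveness
    port: 8080
  periodSeconds: 5
  failureThreshold: 30      # allows up to ~150s of startup before liveness checks take over
livenessProbe:
  httpGet:
    path: /actuator/health/liveness
    port: 8080
  periodSeconds: 5
  failureThreshold: 3
readinessProbe:
  httpGet:
    path: /actuator/health/readiness
    port: 8080
  periodSeconds: 5
  failureThreshold: 3
  successThreshold: 1
</code></pre>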
<p>BTW, don't use the default health check from Spring Boot, you always need to customize them. More details here: <a href="https://danielw.cn/health-check-probes-in-k8s" rel="nofollow noreferrer">https://danielw.cn/health-check-probes-in-k8s</a></p>
|
<p>I'm experimenting with the new Flink Kubernetes operator and I've been able to do pretty much everything that I need besides one thing: getting a JAR file from the S3 file system.</p>
<h2>Context</h2>
<p>I have a Flink application running in a EKS cluster in AWS and have all the information saved in a S3 buckets. Things like savepoints, checkpoints, high availability and JARs files are all stored there.</p>
<p>I've been able to save the savepoints, checkpoints and high availability information in the bucket, but when trying to get the JAR file from the same bucket I get the error:
<code>Could not find a file system implementation for scheme 's3'. The scheme is directly supported by Flink through the following plugins: flink-s3-fs-hadoop, flink-s3-fs-presto.</code></p>
<p>I was able to get to <a href="https://www.mail-archive.com/[email protected]/msg48176.html" rel="nofollow noreferrer">this thread</a>, but I wasn't able to get the resource fetcher to work correctly. Also the solution is not ideal and I was searching for a more direct approach.</p>
<h2>Deployment files</h2>
<p>Here's the files that I'm deploying in the cluster:</p>
<p>deployment.yml</p>
<pre><code>apiVersion: flink.apache.org/v1beta1
kind: FlinkDeployment
metadata:
name: flink-deployment
spec:
podTemplate:
apiVersion: v1
kind: Pod
metadata:
name: pod-template
spec:
containers:
- name: flink-main-container
env:
- name: ENABLE_BUILT_IN_PLUGINS
value: flink-s3-fs-presto-1.15.3.jar;flink-s3-fs-hadoop-1.15.3.jar
volumeMounts:
- mountPath: /flink-data
name: flink-volume
volumes:
- name: flink-volume
hostPath:
path: /tmp
type: Directory
image: flink:1.15
flinkVersion: v1_15
flinkConfiguration:
state.checkpoints.dir: s3://kubernetes-operator/checkpoints
state.savepoints.dir: s3://kubernetes-operator/savepoints
high-availability: org.apache.flink.kubernetes.highavailability.KubernetesHaServicesFactory
high-availability.storageDir: s3://kubernetes-operator/ha
jobManager:
resource:
memory: "2048m"
cpu: 1
taskManager:
resource:
memory: "2048m"
cpu: 1
serviceAccount: flink
</code></pre>
<p>session-job.yml</p>
<pre><code>apiVersion: flink.apache.org/v1beta1
kind: FlinkSessionJob
metadata:
name: flink-session-job
spec:
deploymentName: flink-deployment
job:
jarURI: s3://kubernetes-operator/savepoints/flink.jar
parallelism: 3
upgradeMode: savepoint
savepointTriggerNonce: 0
</code></pre>
<p>The Flink Kubernetes operator version that I'm using is 1.3.1</p>
<p>Is there anything that I'm missing or doing wrong?</p>
| <p>The download of the jar happens in the flink-kubernetes-operator pod. So, when you apply a FlinkSessionJob, the flink-operator recognizes the CRD, tries to download the jar from the jarURI location, constructs a JobGraph and submits the session job to the JobDeployment. The Flink Kubernetes Operator also has Flink running inside it to build the JobGraph.
So you will have to add <strong>flink-s3-fs-hadoop-1.15.3.jar</strong> at the location <strong>/opt/flink/plugins/s3-fs-hadoop/</strong> inside the flink-kubernetes-operator pod.</p>
<p>You can add the jar either by extending the ghcr.io/apache/flink-kubernetes-operator image (curl the jar and copy it to the plugins location)</p>
<p>or</p>
<p>You can write an initContainer which will download the jar to a volume and mount that volume</p>
<pre><code> volumes:
- name: s3-plugin
emptyDir: { }
  initContainers:
    - name: busybox
      image: busybox:latest
      # download the S3 plugin jar into the shared volume; use the version matching your Flink version
      command: [ "wget", "-O", "/opt/flink/plugins/s3-fs-hadoop/flink-s3-fs-hadoop-1.15.3.jar",
                 "https://repo1.maven.org/maven2/org/apache/flink/flink-s3-fs-hadoop/1.15.3/flink-s3-fs-hadoop-1.15.3.jar" ]
      volumeMounts:
        - mountPath: /opt/flink/plugins/s3-fs-hadoop
          name: s3-plugin
containers:
- image: 'ghcr.io/apache/flink-kubernetes-operator:95128bf'
name: flink-kubernetes-operator
volumeMounts:
- mountPath: /opt/flink/plugins/s3-fs-hadoop
name: s3-plugin
</code></pre>
<p>Also, if you are using serviceAccount for S3 authentication, give below config in flinkConfig</p>
<pre><code>fs.s3a.aws.credentials.provider: com.amazonaws.auth.WebIdentityTokenCredentialsProvider
</code></pre>
|
<p>I am using the loki-stack helm chart and I am applying the following configuration to disable kube-system namespace logs in promtail so that Loki doesn't use them:</p>
<pre><code>promtail:
enabled: true
#
# Enable Promtail service monitoring
# serviceMonitor:
# enabled: true
#
# User defined pipeline stages
pipelineStages:
- docker: {}
- drop:
source: namespace
expression: "kube-.*"
</code></pre>
<p>Please help me solve this; inside the container these values are not getting updated.</p>
<p>The configuration is already mentioned above</p>
| <p>I had the same issue with this configuration and it seems like the <code>pipelineStages</code> at this level is being ignored. I solved my problem by moving it to <code>snippets</code>.</p>
<pre><code>promtail:
enabled: true
config:
snippets:
pipelineStages:
- docker: {}
- drop:
source: namespace
expression: "kube-.*"
</code></pre>
<p>This worked for me and I hope it helps someone else who might run into the same problem. For more details, please check out this link: <a href="https://github.com/grafana/helm-charts/blob/main/charts/promtail/values.yaml" rel="nofollow noreferrer">https://github.com/grafana/helm-charts/blob/main/charts/promtail/values.yaml</a></p>
|
<p>Following the k8s/controller-runtime/client example code (see <a href="https://pkg.go.dev/sigs.k8s.io/[email protected]/pkg/client#example-Client-Create" rel="nofollow noreferrer">here</a>), which goes a bit like this</p>
<pre class="lang-golang prettyprint-override"><code>var c client.Client
func main() {
// Using a typed object.
pod := &corev1.Pod{
ObjectMeta: metav1.ObjectMeta{
Namespace: "namespace",
Name: "name",
},
Spec: corev1.PodSpec{
Containers: []corev1.Container{
{
Image: "nginx",
Name: "nginx",
},
},
},
}
// c is a created client.
_ = c.Create(context.Background(), pod) // nil deref here
}
</code></pre>
<p>I get a nil pointer dereference on <code>_ = c.Create(context.Background(), pod)</code>. To me this makes sense, since I declared <code>c</code> but never initialised it. However the example code also does that. What is going on here?</p>
| <p>The correct way to initialise the client can be found here: <a href="https://pkg.go.dev/sigs.k8s.io/[email protected]/pkg/client#example-New" rel="nofollow noreferrer">https://pkg.go.dev/sigs.k8s.io/[email protected]/pkg/client#example-New</a></p>
<pre class="lang-golang prettyprint-override"><code>cl, err := client.New(config.GetConfigOrDie(), client.Options{})
if err != nil {
fmt.Println("failed to create client")
os.Exit(1)
}
</code></pre>
|
<p>I created this chart using Grafana and the query
<code>rate(container_memory_usage_bytes{ image!="", container_name!="POD"}[5m]) /1024 / 1024</code>.</p>
<p>When highlighting the data points I receive all the details about the pod, which is too much information.</p>
<p>How can I show only the <code>pod</code> field?</p>
<p><a href="https://i.stack.imgur.com/OxzcE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/OxzcE.png" alt="enter image description here" /></a></p>
| <p>Found the answer: go to Options / Legend and include the field to be used as legend :) In this case: <code>{{pod}}</code></p>
<p><a href="https://i.stack.imgur.com/ZVA8u.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ZVA8u.png" alt="enter image description here" /></a></p>
<p>Thanks <a href="https://www.linkedin.com/in/fabiotavaresvieira/" rel="nofollow noreferrer">Fabio Vieira</a></p>
|
<p>In Kubernetes, there is deletionTimestamp to signal an ongoing deletion and there are finalizers to model tasks during the process of deletion.
However, it could be, that during the deletion, the specification of a parent object changes in a way that would effective make cancelling the deletion the most desirable solution.</p>
<p>I'd expect a clear and complete documentation of deletionTimestamp and finalization covering the entire lifecycle of deletionTimestamp. It seems that most people seem to assume that it is either zero or nonzero and cannot be changed while it is nonzero. However, there seems to be no documentation on that. I also do not want to "just check", because just check is subject to change and may stop working tomorrow.</p>
| <p>The answer is no.</p>
<p>Finalizers are namespaced keys that tell Kubernetes to wait until specific conditions are met before it fully deletes resources marked for deletion. Finalizers alert controllers to clean up resources the deleted object owned. Documentation is <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/finalizers/" rel="nofollow noreferrer">here</a></p>
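<p>For reference, this is roughly what a finalizer and a deletion timestamp look like on an object (a generic sketch; the PVC and the built-in finalizer key are just examples):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc                                # hypothetical object
  deletionTimestamp: "2023-02-01T10:00:00Z"   # non-zero once deletion has been requested
  finalizers:
  - kubernetes.io/pvc-protection              # deletion is blocked until this key is removed
</code></pre>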
<p>The reason is that garbage collection uses this identifier. In foreground cascading deletion, the owner object you're deleting first enters a "deletion in progress" state. Read through this for a detailed understanding:
<a href="https://kubernetes.io/docs/concepts/architecture/garbage-collection/#foreground-deletion" rel="nofollow noreferrer">Foreground cascading deletion</a></p>
|
<p>I have a redis statefulset running on my cluster, with 3 nodes (master, slave, slave). The master node is for writes only, and the other 2 nodes are for reads; the replication is working fine. I want to connect a backend application to this redis cluster and somehow manage the requests so that the write requests go to the master and the reads go to the slaves. Any idea how I can achieve this? Does it have to be controlled at the application level, or is there any solution in Kubernetes to achieve this?</p>
<p>In addition, I want to know about the situation when I am scaling up the statefulset replicas or scale down, then the traffic direction still should be going to appropriate pod.</p>
<p>My Kubernetes cluster is running on premises.</p>
| <p>It's not clear how you have set up Redis: whether you are running a <strong>Redis cluster</strong> or <strong>Redis master-slave</strong> with the <strong>sentinel</strong> option.</p>
<p><a href="https://redis.io/docs/management/sentinel/" rel="nofollow noreferrer">Official doc</a></p>
<p>If you have used the <strong>helm chart</strong>, it has an option to enable <strong>sentinel</strong>.</p>
<pre><code>sentinel:
enabled: true
</code></pre>
<p>Check this <a href="https://github.com/bitnami/charts/tree/main/bitnami/redis#choose-between-redis-helm-chart-and-redis-cluster-helm-chart" rel="nofollow noreferrer">Github Doc</a>; there are two helm charts (topologies) available, so check which one you have used.</p>
<p>So the idea behind sentinel is that it gives you back the <strong>current IP</strong> of the master and the slaves.</p>
<p>For example, in the Python code snippet below, when you hit the <strong>Redis</strong> service it will give you both the <strong>master</strong> and <strong>slave IPs</strong>, and you can use whichever you need for <strong>write</strong> or <strong>read</strong>.</p>
<pre><code>from redis import Sentinel

sentinel = Sentinel([('<**Single K8s service Name**>', 26379)], socket_timeout=0.1)

sentinel.discover_master('mymaster')
# ('127.0.0.1', 6379)

sentinel.discover_slaves('mymaster')
# [('127.0.0.1', 6380)]
</code></pre>
<p><strong>Reference</strong> : <a href="https://github.com/redis/redis-py#sentinel-support" rel="nofollow noreferrer">https://github.com/redis/redis-py#sentinel-support</a></p>
<p>Ref for few more answer might be helpful :</p>
<ul>
<li><a href="https://stackoverflow.com/a/68551193/5525824">https://stackoverflow.com/a/68551193/5525824</a></li>
<li><a href="https://stackoverflow.com/a/70271427/5525824">https://stackoverflow.com/a/70271427/5525824</a></li>
<li><a href="https://stackoverflow.com/a/70454335/5525824">https://stackoverflow.com/a/70454335/5525824</a></li>
</ul>
|
<p>The pinniped CLI is not working on Windows.
pinniped-cli-windows-amd64.exe is downloaded but when i type pinniped, it's not recognized.</p>
<p>C:\Users\hello>pinniped
pinniped is not recognized as a internal command or external command, operable program or batch file.</p>
<p>Seem windows is not recognizing this .exe file as published by a valid publisher.</p>
<p>pinniped should show the pinniped cli options and be recognized as command. I created a folder called pinniped and copied .exe file and tried ...that did work.</p>
| <p>I have faced the same issue, so I purged the cache here: C:\Users\user\.kube\cache</p>
<p>And then I modified the path of the pinniped program in the config file below, at the <strong>command</strong> line (obviously, the pinniped program itself has to be present at this path):</p>
<p>C:\Users\user\.kube\config</p>
<pre><code>- name: cluster-np-a2op-cluster-pinniped
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      .....
      - --upstream-identity-provider-flow=browser_authcode
      command: C:\Program Files\Pinniped\pinniped.exe
      env: []
      .....
</code></pre>
<p>Hope this will help.</p>
<p>;)</p>
|
<p>I'm using rke2 cluster, i.e. a k8s distribution.</p>
<p>And I want to add a nameserver for '*.example.org' to the cluster DNS system, for which I should change the core file of coredns like below.</p>
<pre><code>.:53 {
errors
health
kubernetes cluster.local in-addr.arpa ip6.arpa {
pods insecure
fallthrough in-addr.arpa ip6.arpa
}
prometheus :9153
forward . 172.16.0.1
cache 30
loop
reload
loadbalance
}
example.org:53 {   # add a block
errors
cache 30
forward . 10.150.0.1
}
</code></pre>
<p>However, rke2 installs coredns with its helm system, so I should change the helm values to add something to the Corefile.</p>
<p>How should I achieve this? Thank you a lot.</p>
| <p>You can edit the coredns configmap to do this.</p>
<p>For example, you can map a domain to a service name using <strong>rewrite</strong>: <code>rewrite name example.io service.default.svc.cluster.local</code></p>
<pre><code>rewrite name example.io service.default.svc.cluster.local
loadbalance round_robin
prometheus {$POD_IP}:9153
forward . /etc/resolv.conf
reload
</code></pre>
<p>YAML for ref</p>
<pre><code>apiVersion: v1
data:
Corefile: |-
.:5353 {
bind {$POD_IP}
cache 30
errors
health {$POD_IP}:8080
kubernetes cluster.local in-addr.arpa ip6.arpa {
pods insecure
fallthrough in-addr.arpa ip6.arpa
}
rewrite name example.io service.default.svc.cluster.local
loadbalance round_robin
prometheus {$POD_IP}:9153
forward . /etc/resolv.conf
reload
}
kind: ConfigMap
metadata:
labels:
app: coredns
k8s-app: coredns
name: coredns
namespace: kube-system
</code></pre>
<p>Other answers for ref :</p>
<ul>
<li><a href="https://stackoverflow.com/a/73078010/5525824">https://stackoverflow.com/a/73078010/5525824</a></li>
<li><a href="https://stackoverflow.com/a/70672297/5525824">https://stackoverflow.com/a/70672297/5525824</a></li>
</ul>
|
<p>I removed other versions of cert-manager. After that I installed the new version with helm.</p>
<p>Installation works fine.</p>
<p>but when i use the command:</p>
<pre><code>$ kubectl get orders,challenges,clusterissuers
Error from server: request to convert CR from an invalid group/version: acme.cert-manager.io/v1alpha2
Error from server: request to convert CR from an invalid group/version: acme.cert-manager.io/v1alpha2
Error from server: request to convert CR from an invalid group/version: cert-manager.io/v1alpha2
</code></pre>
<p>The CRDs:</p>
<pre><code>Name: orders.acme.cert-manager.io
Namespace:
Labels: app=cert-manager
app.kubernetes.io/instance=cert-manager
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=cert-manager
app.kubernetes.io/version=v1.8.2
helm.sh/chart=cert-manager-v1.8.2
Annotations: cert-manager.io/inject-ca-from-secret: cert-manager/cert-manager-webhook-ca
meta.helm.sh/release-name: cert-manager
meta.helm.sh/release-namespace: cert-manager
API Version: apiextensions.k8s.io/v1
Kind: CustomResourceDefinition
....
Last Transition Time: 2022-06-24T15:25:23Z
Message: no conflicts found
Reason: NoConflicts
Status: True
Type: NamesAccepted
Last Transition Time: 2022-06-24T15:25:23Z
Message: the initial names have been accepted
Reason: InitialNamesAccepted
Status: True
Type: Established
Stored Versions:
v1
</code></pre>
<p>I can't find the CRs that are still using cert-manager.io/v1alpha2 and acme.cert-manager.io/v1alpha2.</p>
| <p>First of all, suggest to backup all your objects (certs, orders, issuer, clusterissuer, etc. with <a href="https://velero.io/" rel="nofollow noreferrer">velero</a> for example) !</p>
<p><a href="https://cert-manager.io/docs/installation/upgrading/remove-deprecated-apis/" rel="nofollow noreferrer">Cert-manger documentation</a> suggests using the cert-manager cli as:</p>
<pre><code>cmctl upgrade migrate-api-version
</code></pre>
<p>You may need <code>--skip-stored-version-check</code> if you already tried to fix CRD api manually (<a href="https://github.com/cert-manager/cert-manager/issues/3944#issuecomment-848742996" rel="nofollow noreferrer">like described in this issue</a>):</p>
<pre><code>cmctl upgrade migrate-api-version --skip-stored-version-check
</code></pre>
<p>Finally, if it's still failing with the same message, <a href="https://cert-manager.io/docs/installation/helm/" rel="nofollow noreferrer">install</a> version 1.6.3 (if the CRDs were installed manually, upgrade the CRDs to 1.6.3) and repeat the command <code>cmctl upgrade migrate-api-version</code>.</p>
<p>Lastly, you can upgrade minor versions one by one (1.7, 1.8, etc.) <a href="https://cert-manager.io/docs/installation/upgrading/" rel="nofollow noreferrer">as recommended</a>.</p>
|
<p>I’m quite new to k8s in general, only been using for smaller projects but made it work. I hope btw this is the right channel to ask questions (in this case about ingress-nginx). I’m trying to setup a cluster with a gateway-api and a few microservices (all written in NestJs). To give a little background, I first had everything in docker-compose and my entry was also a Nginx container with letsencrypt. The whole docker, works great locally.</p>
<p>This was the config used for my NGinx Docker:</p>
<pre><code>upstream equmedia-api {
server equmedia-api:3000;
}
server {
listen 80 default_server;
listen [::]:80 default_server;
server_name localhost;
return 301 https://$server_name$request_uri;
}
server {
listen 80;
listen 443 ssl http2 default_server;
listen [::]:443 ssl http2 default_server;
keepalive_timeout 70;
server_name subdomain.example.com;
ssl_session_cache shared:SSR:10m;
ssl_session_timeout 10m;
ssl_certificate /etc/letsencrypt/live/equmedia.pixeliner.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/equmedia.pixeliner.com/privkey.pem;
access_log /var/log/nginx/nginx.access.log;
error_log /var/log/nginx/nginx.error.log;
location / {
proxy_pass http://equmedia-api;
# proxy_redirect off;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
}
}
</code></pre>
<p>As you can see, it upstreamed to my api container.</p>
<p>Eventually I wanted to turn the whole deployment into k8s. Seemed like a good followup practice after the small projects.</p>
<p>I learned about ingress-nginx and gave it my first try, but I seem to have struck a wall.</p>
<p>Here is the setup I'm trying to achieve:</p>
<p><a href="https://i.stack.imgur.com/DH7uz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DH7uz.png" alt="k8s setup" /></a></p>
<p>Through DigitalOcean the setup will be behind a LoadBalancer.</p>
<p>Here is my Ingress resource (handled by the NGinx controller):</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: equmedia-ingress-api
annotations:
kubernetes.io/ingress.class: "nginx"
cert-manager.io/issuer: "letsencrypt-prod"
nginx.ingress.kubernetes.io/rewrite-target: "/"
nginx.ingress.kubernetes.io/proxy-protocol: "true"
nginx.ingress.kubernetes.io/ssl-proxy-headers: "X-Forwarded-Proto: https"
spec:
tls:
- hosts:
- subdomain.example.com
secretName: quickstart-example-tls
rules:
- host: subdomain.example.com
http:
paths:
- path: /api
pathType: Prefix
backend:
service:
name: equmedia-api
port:
number: 3000
</code></pre>
<p>And my api service:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
annotations:
kompose.cmd: kompose convert
kompose.version: 1.22.0 (955b78124)
creationTimestamp: null
labels:
io.kompose.service: equmedia-api
name: equmedia-api
spec:
ports:
- port: 3000
targetPort: 3000
selector:
io.kompose.service: equmedia-api
status:
loadBalancer: {}
</code></pre>
<p>When I try to access "https://subdomain.example.com/api/health", I get a 504 Gateway Time-out. Looking at the ingress controller logs I get the following:</p>
<pre><code>2023/02/17 15:51:44 [error] 356#356: *336457 upstream timed out (110: Operation timed out) while connecting to upstream, client: 164.92.221.107, server: subdomain.example.com, request: "GET /api/health HTTP/2.0", upstream: "http://10.244.0.228:3000/", host: "subdomain.example.com"
2023/02/17 15:51:49 [error] 356#356: *336457 upstream timed out (110: Operation timed out) while connecting to upstream, client: 164.92.221.107, server: subdomain.example.com, request: "GET /api/health HTTP/2.0", upstream: "http://10.244.0.228:3000/", host: "subdomain.example.com"
2023/02/17 15:51:54 [error] 356#356: *336457 upstream timed out (110: Operation timed out) while connecting to upstream, client: 164.92.221.107, server: subdomain.example.com, request: "GET /api/health HTTP/2.0", upstream: "http://10.244.0.228:3000/", host: "subdomain.example.com"
</code></pre>
<p>Anyone that can point me into the right direction, to fix this issue?</p>
<p><strong>EDIT</strong></p>
<p>The outcome for</p>
<p><code>kubectl get pods -l io.kompose.service=equmedia-api</code>:</p>
<pre><code>NAME READY STATUS RESTARTS AGE
equmedia-api 1/1 Running 0 2d2h
</code></pre>
<p><code>kubectl get svc</code>:</p>
<pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
equmedia-api ClusterIP 10.245.173.11 <none> 3000/TCP 23h
equmedia-api-rabbitmq ClusterIP 10.245.17.225 <none> 5672/TCP,15673/TCP 2d17h
equmedia-api-redis ClusterIP 10.245.120.11 <none> 6379/TCP 2d17h
equmedia-auth-db ClusterIP 10.245.94.21 <none> 5432/TCP 2d17h
kubernetes ClusterIP 10.245.0.1 <none> 443/TCP 2d17h
quickstart-ingress-nginx-controller LoadBalancer 10.245.36.216 179.128.139.106 80:31194/TCP,443:31609/TCP 2d16h
quickstart-ingress-nginx-controller-admission ClusterIP 10.245.232.77 <none> 443/TCP 2d16h
</code></pre>
<p>EDIT2:</p>
<p>I've requested my domain <a href="https://subdomain.example.com/api/health" rel="nofollow noreferrer">https://subdomain.example.com/api/health</a> through browser, curl and postman. All of them return timeouts.</p>
<p><code>kubectl get pods -A -o wide | grep 10.244.0.228 </code> returns:</p>
<pre><code>default equmedia-api 1/1 Running 0 2d4h 10.244.0.228 temp-pool-qyhii <none> <none>
</code></pre>
<p><code>kubectl get svc -A | grep 10.244.0.228</code> returns nothing</p>
<p>EDIT3:</p>
<p>Here is the logs of my API:</p>
<pre><code>[Nest] 1 - 02/17/2023, 10:52:27 AM LOG [NestFactory] Starting Nest application...
[Nest] 1 - 02/17/2023, 10:52:27 AM LOG [InstanceLoader] AppModule dependencies initialized +136ms
[Nest] 1 - 02/17/2023, 10:52:27 AM LOG [InstanceLoader] RedisCacheModule dependencies initialized +1ms
[Nest] 1 - 02/17/2023, 10:52:27 AM LOG [InstanceLoader] UtilsModule dependencies initialized +1ms
[Nest] 1 - 02/17/2023, 10:52:27 AM LOG [InstanceLoader] AxiosWrapperModule dependencies initialized +1ms
[Nest] 1 - 02/17/2023, 10:52:27 AM LOG [InstanceLoader] PassportModule dependencies initialized +32ms
[Nest] 1 - 02/17/2023, 10:52:27 AM LOG [InstanceLoader] JwtModule dependencies initialized +3ms
[Nest] 1 - 02/17/2023, 10:52:27 AM LOG [InstanceLoader] ConfigHostModule dependencies initialized +1ms
[Nest] 1 - 02/17/2023, 10:52:27 AM LOG [InstanceLoader] TerminusModule dependencies initialized +2ms
[Nest] 1 - 02/17/2023, 10:52:27 AM LOG [InstanceLoader] DiscoveryModule dependencies initialized +1ms
[Nest] 1 - 02/17/2023, 10:52:27 AM LOG [InstanceLoader] ConfigModule dependencies initialized +2ms
[Nest] 1 - 02/17/2023, 10:52:27 AM LOG [InstanceLoader] ConfigModule dependencies initialized +1ms
[Nest] 1 - 02/17/2023, 10:52:27 AM LOG [InstanceLoader] BullModule dependencies initialized +0ms
[Nest] 1 - 02/17/2023, 10:52:27 AM LOG [InstanceLoader] ScheduleModule dependencies initialized +1ms
[Nest] 1 - 02/17/2023, 10:52:27 AM LOG [InstanceLoader] BullModule dependencies initialized +61ms
[Nest] 1 - 02/17/2023, 10:52:27 AM LOG [InstanceLoader] ClientsModule dependencies initialized +17ms
[Nest] 1 - 02/17/2023, 10:52:27 AM LOG [InstanceLoader] ClientsModule dependencies initialized +1ms
[Nest] 1 - 02/17/2023, 10:52:27 AM LOG [InstanceLoader] ClientsModule dependencies initialized +1ms
[Nest] 1 - 02/17/2023, 10:52:27 AM LOG [InstanceLoader] ClientsModule dependencies initialized +1ms
[Nest] 1 - 02/17/2023, 10:52:27 AM LOG [InstanceLoader] ClientsModule dependencies initialized +7ms
[Nest] 1 - 02/17/2023, 10:52:27 AM LOG [InstanceLoader] ClientsModule dependencies initialized +1ms
[Nest] 1 - 02/17/2023, 10:52:27 AM LOG [InstanceLoader] HealthModule dependencies initialized +8ms
[Nest] 1 - 02/17/2023, 10:52:27 AM LOG [InstanceLoader] CacheModule dependencies initialized +2ms
[Nest] 1 - 02/17/2023, 10:52:27 AM LOG [InstanceLoader] MailModule dependencies initialized +1ms
[Nest] 1 - 02/17/2023, 10:52:27 AM LOG [InstanceLoader] HttpModule dependencies initialized +3ms
[Nest] 1 - 02/17/2023, 10:52:27 AM LOG [InstanceLoader] BullModule dependencies initialized +24ms
[Nest] 1 - 02/17/2023, 10:52:27 AM LOG [InstanceLoader] BullQueueModule dependencies initialized +7ms
[Nest] 1 - 02/17/2023, 10:52:27 AM LOG [InstanceLoader] PaymentModule dependencies initialized +8ms
[Nest] 1 - 02/17/2023, 10:52:27 AM LOG [InstanceLoader] CustomerModule dependencies initialized +1ms
[Nest] 1 - 02/17/2023, 10:52:27 AM LOG [InstanceLoader] ContentModule dependencies initialized +2ms
[Nest] 1 - 02/17/2023, 10:52:27 AM LOG [InstanceLoader] AdserveModule dependencies initialized +3ms
[Nest] 1 - 02/17/2023, 10:52:27 AM LOG [InstanceLoader] AuthModule dependencies initialized +2ms
[Nest] 1 - 02/17/2023, 10:52:27 AM LOG [InstanceLoader] OpenIdModule dependencies initialized +65ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RoutesResolver] HealthController {/api/health}: +18ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/health, GET} route +5ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/health/check-ping, GET} route +2ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/health/check-disk, GET} route +2ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/health/check-memory, GET} route +1ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/health/check-microservice/:name, GET} route +1ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RoutesResolver] OpenIdController {/api/open-id}: +0ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/open-id/login, GET} route +2ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/open-id/user, GET} route +2ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/open-id/callback, GET} route +1ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/open-id/logout, GET} route +2ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RoutesResolver] AuthController {/api/auth}: +1ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/auth, GET} route +2ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/auth/signup, POST} route +1ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/auth/signin, POST} route +1ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/auth/signout, POST} route +2ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/auth/refresh, GET} route +0ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RoutesResolver] UserController {/api/user}: +1ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/user/get-user-id/email?, GET} route +1ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/user/get-authenticated-user, GET} route +1ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/user/:id/change-user-password, PUT} route +1ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/user/:id/delete-user-account, DELETE} route +1ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/user/confirm/:token, GET} route +1ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/user/forgot-password, POST} route +1ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/user/set-new-password/:token, POST} route +1ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RoutesResolver] UsersController {/api/users}: +1ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/users, GET} route +1ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RoutesResolver] PaymentController {/api/payment}: +1ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/payment/:id, GET} route +1ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/payment/create/:id, POST} route +1ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/payment/:id, PUT} route +1ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RoutesResolver] CustomerController {/api/customer}: +0ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/customer, GET} route +0ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/customer/profile/:id, GET} route +1ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/customer/create, POST} route +2ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/customer/delete/:id, DELETE} route +1ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/customer/update/:id, PUT} route +1ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RoutesResolver] ContentController {/api/content}: +0ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/content, GET} route +2ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/content/create, POST} route +0ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/content/update/:contentId, PUT} route +1ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/content/delete/:contentId, DELETE} route +1ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/content/category/:categoryId, GET} route +1ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/content/slug/:slug, GET} route +1ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RoutesResolver] CategoryController {/api/category}: +1ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/category, POST} route +1ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/category/create, POST} route +1ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/category/update/:categoryId, PUT} route +0ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/category/delete/:categoryId, DELETE} route +1ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RoutesResolver] WidgetController {/api/widget}: +1ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/widget, GET} route +1ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/widget/create, POST} route +0ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/widget/update/:widgetId, PUT} route +1ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/widget/delete/:widgetId, DELETE} route +1ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RoutesResolver] AdvertiserController {/api/adserve/advertiser}: +1ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/adserve/advertiser, GET} route +1ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/adserve/advertiser/:advertiserId, GET} route +1ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/adserve/advertiser/create, POST} route +1ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/adserve/advertiser/:advertiserId/campaigns/create, POST} route +1ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/adserve/advertiser/:advertiserId/campaigns/:campaignId, POST} route +1ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/adserve/advertiser/:advertiserId/campaigns/:campaignId/create, POST} route +1ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/adserve/advertiser/:advertiserId/campaigns/:campaignId/assign, POST} route +1ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RoutesResolver] AdserveController {/api/adserve}: +1ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/adserve/serve, GET} route +1ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/adserve/redirect, GET} route +0ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RoutesResolver] PublisherController {/api/adserve}: +1ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/adserve/publisher, GET} route +1ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/adserve/publisher/:publisherId, GET} route +1ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/adserve/publisher/create, POST} route +1ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/adserve/publisher/:publisherId/zone/create, POST} route +1ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RoutesResolver] ReportController {/api/adserve/report}: +1ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [RouterExplorer] Mapped {/api/adserve/report, GET} route +0ms
[Nest] 1 - 02/17/2023, 10:52:28 AM LOG [NestApplication] Nest application successfully started +58ms
-- API GATEWAY RUNNING - PORT: 3000 --
</code></pre>
<p>No errors are logged, and through a port-forward I also see my api working.</p>
<p>EDIT4:</p>
<p>Here is the gist with all pods/services/claims/...</p>
<p><a href="https://gist.github.com/pixeliner/2c89048294197155b0d4833ab4045f3c" rel="nofollow noreferrer">https://gist.github.com/pixeliner/2c89048294197155b0d4833ab4045f3c</a></p>
| <p>Your output text:</p>
<pre><code>2023/02/17 15:51:44 [error] 356#356: *336457 upstream timed out (110: Operation timed out) while connecting to upstream, client: 164.92.221.107, server: subdomain.example.com, request: "GET /api/health HTTP/2.0", upstream: "http://10.244.0.228:3000/", host: "subdomain.example.com"
2023/02/17 15:51:49 [error] 356#356: *336457 upstream timed out (110: Operation timed out) while connecting to upstream, client: 164.92.221.107, server: subdomain.example.com, request: "GET /api/health HTTP/2.0", upstream: "http://10.244.0.228:3000/", host: "subdomain.example.com"
2023/02/17 15:51:54 [error] 356#356: *336457 upstream timed out (110: Operation timed out) while connecting to upstream, client: 164.92.221.107, server: subdomain.example.com, request: "GET /api/health HTTP/2.0", upstream: "http://10.244.0.228:3000/", host: "subdomain.example.com"
</code></pre>
<p>Implies the request is timing out on the IP <code>10.244.0.228:3000</code></p>
<p>Things to check:</p>
<ul>
<li><p>Check what IP the service <code>equmedia-api</code> actually has and whether it matches <code>10.244.0.228</code>: <code>kubectl get svc equmedia-api</code> (it will likely be of type <code>ClusterIP</code>)</p>
</li>
<li><p>Port forward to the service directly: <code>kubectl port-forward svc/equmedia-api 3000:3000</code> and then try to access <code>localhost:3000</code> in another terminal or in your browser. Does it respond, does it error or does it timeout?</p>
</li>
<li><p>Check the pods your service is trying to match: <code>kubectl get pods -l io.kompose.service=equmedia-api</code> -- does this return any pods? If so, are they in <code>Ready</code> state or are they erroring? Do they have a value greater than 0 in the <code>Restarts</code> count?</p>
</li>
<li><p>Check the logs of the pod(s) <code>kubectl logs -f {pod-name}</code> and see if it is unable to start up, or if it is repeatedly starting.</p>
</li>
</ul>
<hr />
<p>UPDATE 1</p>
<p>Please add the output of the following commands to your question. Wrap the output with three backticks (`) on a single line before and after to preserve formatting:</p>
<pre><code>kubectl get pods -l io.kompose.service=equmedia-api
</code></pre>
<pre><code>kubectl get svc
</code></pre>
<hr />
<p>UPDATE 2</p>
<p>Since the IP that your controller is trying to reach is <code>10.244.0.228</code>, see if any of your pods or services actually have that IP. Please add the output of these commands:</p>
<pre><code>kubectl get pods -A -o wide | grep 10.244.0.228
</code></pre>
<pre><code>kubectl get svc -A | grep 10.244.0.228
</code></pre>
<hr />
<p>UPDATE 3</p>
<p>I've yet to try deploying the gist, but I have noticed something.</p>
<p>You have network policies set up, and you have labelled your pod:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
annotations:
kompose.cmd: kompose convert
kompose.version: 1.22.0 (955b78124)
creationTimestamp: null
labels:
io.kompose.network/backend: "true" # <<--- HERE
io.kompose.service: equmedia-api
name: equmedia-api-pod
spec:
...
</code></pre>
<p>This matches your network policy here:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
creationTimestamp: null
name: backend
spec:
ingress:
- from:
- podSelector:
matchLabels:
io.kompose.network/backend: "true"
podSelector:
matchLabels:
io.kompose.network/backend: "true"
</code></pre>
<p>Now, this network policy reads (based on the information in <a href="https://kubernetes.io/docs/concepts/services-networking/network-policies/" rel="nofollow noreferrer">this link</a>):</p>
<p>"Allow connections from Pods with the label <code>io.kompose.network/backend="true"</code> (the <code>ingress.from.podSelector</code> bit) to pods that match the label <code>io.kompose.network/backend="true"</code> (the spec-level <code>podSelector</code>, i.e. the last three lines)."</p>
<p>So, assuming I'm reading this correctly, the reason the ingress controller is not able to talk to the pod is that the controller pod does not have the label <code>io.kompose.network/backend="true"</code>. Since you did not include it in your gist, I'm assuming you're using the ingress controller chart as a subchart/dependency, and out of the box that chart won't have this label. This would explain why we were able to port-forward to the pod and the service directly, but the controller pod was not able to talk to the pod.</p>
<p>An easy way to verify this is to either delete the <code>backend</code> networkpolicy, or modify it to allow all ingress traffic as a test (something like the example <a href="https://kubernetes.io/docs/concepts/services-networking/network-policies/#allow-all-ingress-traffic" rel="nofollow noreferrer">here</a>).</p>
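<p>For reference, a minimal allow-all ingress policy of that kind (adapted from the Kubernetes docs; apply it in the application's namespace as a temporary test only) looks roughly like this:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-all-ingress
spec:
  podSelector: {}
  ingress:
    - {}
  policyTypes:
    - Ingress
</code></pre>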
<p>If this works, it will confirm the networkpolicy is blocking the traffic.</p>
|
<p>Trying to get the <a href="https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/#optional-kubectl-configurations-and-plugins" rel="nofollow noreferrer">auto-complete</a> feature of <code>kubectl</code> working with an alias of <code>k</code>, using <code>Windows PowerShell</code> and <code>PowerShell Core</code> on Windows Server 2022, but can't get it to work</p>
<p>tested with</p>
<pre><code>echo 'Set-Alias -Name k -Value kubectl' >> $PROFILE
kubectl completion powershell >> $PROFILE
</code></pre>
<p>or even simply this to try to get it working in the same session, without worrying about saving it to the PowerShell profile</p>
<pre><code>Set-Alias -Name k -Value kubectl
kubectl completion powershell | Out-String | Invoke-Expression
</code></pre>
<p>Results so far:</p>
<p><strong>Powershell 5.1</strong><br />
Tab completion with <code>kubectl</code>: <strong>OK</strong><br />
Tab completion with <code>k</code>: <strong>Fails</strong></p>
<p><strong>Powershell Core 7.3</strong><br />
Tab completion with <code>kubectl</code>: <strong>Fails</strong><br />
Tab completion with <code>k</code>: <strong>Fails</strong></p>
<p>The results above are tested with both <code>Windows Terminal</code> and <code>cmd</code></p>
<p>Am I missing something obvious? (like I normally do)</p>
<p>Thanks!</p>
<p>PowerShell completion for aliases is supported in cobra only from version 1.6.0, and kubectl depends on cobra 1.6.0 since v1.26. So you should use kubectl v1.26+ and <a href="https://github.com/spf13/cobra/blob/main/shell_completions.md#powershell-completions" rel="nofollow noreferrer">PowerShell 5.0+</a>.</p>
<p>I use it with PowerShell 7.3 and kubectl v1.26.1 on Windows 10,
but it suggests variants only after typing the first letter (it doesn't show a selection menu without typing any letter).</p>
<pre><code>Set-Alias -Name k -Value kubectl
kubectl completion powershell | Out-String | Invoke-Expression
Register-ArgumentCompleter -CommandName k -ScriptBlock $__kubectlCompleterBlock
</code></pre>
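<p>To make this survive new sessions, the same three lines can be appended to your profile. A minimal sketch (assumes the profile file's directory already exists; adjust to your setup):</p>
<pre><code>Add-Content $PROFILE @'
Set-Alias -Name k -Value kubectl
kubectl completion powershell | Out-String | Invoke-Expression
Register-ArgumentCompleter -CommandName k -ScriptBlock $__kubectlCompleterBlock
'@
</code></pre>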
|
<p>How to deploy on K8 via Pulumi using the ArgoCD Helm Chart?</p>
<p>Pulumi up Diagnostics:</p>
<pre><code> kubernetes:helm.sh/v3:Release (argocd):
error: failed to create chart from template: chart requires kubeVersion: >=1.22.0-0 which is incompatible with Kubernetes v1.20.0
</code></pre>
<p>THE CLUSTER VERSION IS: v1.23.0 verified on AWS. And NOT 1.20.0</p>
<p>ArgoCD install yaml used with CRD2Pulumi: <a href="https://raw.githubusercontent.com/argoproj/argo-cd/master/manifests/core-install.yaml" rel="nofollow noreferrer">https://raw.githubusercontent.com/argoproj/argo-cd/master/manifests/core-install.yaml</a></p>
<p>Source:</p>
<pre><code>...
cluster = eks.Cluster("argo-example") # version="1.23"
# Cluster provider
provider = k8s.Provider(
"eks",
kubeconfig=cluster.kubeconfig.apply(lambda k: json.dumps(k))
#kubeconfig=cluster.kubeconfig
)
ns = k8s.core.v1.Namespace(
'argocd',
metadata={
"name": "argocd",
},
opts=pulumi.ResourceOptions(
provider=provider
)
)
argo = k8s.helm.v3.Release(
"argocd",
args=k8s.helm.v3.ReleaseArgs(
chart="argo-cd",
namespace=ns.metadata.name,
repository_opts=k8s.helm.v3.RepositoryOptsArgs(
repo="https://argoproj.github.io/argo-helm"
),
values={
"server": {
"service": {
"type": "LoadBalancer",
}
}
},
),
opts=pulumi.ResourceOptions(provider=provider, parent=ns),
)
</code></pre>
<p>Any ideas as to fixing this oddity between the version error and the actual cluster version?</p>
<p>I've tried:</p>
<ul>
<li>Deleting everything and starting over.</li>
<li>Updating to the latest ArgoCD install yaml.</li>
</ul>
| <p>Upgrade helm. I had a similar issue where my k8s was 1.25 but helm complained it was 1.20. Tried everything else, upgrading helm worked.</p>
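<p>If it helps, a quick sanity check is to compare the Helm client on your PATH before and after the upgrade (the upgrade command depends on how you installed Helm; Homebrew is shown only as an example):</p>
<pre><code># show the Helm client version currently on the PATH
helm version --short

# example upgrade on macOS/Linux with Homebrew - adjust for your package manager
brew upgrade helm
</code></pre>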
|
<p>I was using kubectl with Docker Desktop - everything was fine</p>
<p>I think logged into Openshift via the Command line and kubectl seemed to point to OC cluster</p>
<p>I then logged out of Openshift and now kubectl doesn't seem to point at anything? How do I get to point it at docker desktop again?</p>
<p>Sorted it: run <code>kubectl config use-context docker-desktop</code> to point kubectl back at the Docker Desktop cluster.</p>
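<p>If you want to see which contexts are available before switching:</p>
<pre><code>kubectl config get-contexts
kubectl config use-context docker-desktop
</code></pre>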
|
<p>I am a beginner in Kubernetes and have been using the kubectl command to create pods for several months. However, I recently encountered a problem where Kubernetes did not create a pod after I executed the <code>kubectl create -f mypod.yaml</code> command. When I run kubectl get pods, the mypod does not appear in the list of pods and I am unable to access it by name as if it does not exist. However, if I try to create it again, I receive a message saying that the pod has already been created.</p>
<p>To illustrate my point, let me give you an example. I frequently generate pods using a YAML file called tpcds-25-query.yaml. The contents of this file are as follows:</p>
<pre><code>apiVersion: "sparkoperator.k8s.io/v1beta2"
kind: SparkApplication
metadata:
name: tpcds-25-query
namespace: default
spec:
type: Scala
mode: cluster
image: registry.cn-beijing.aliyuncs.com/kube-ai/ack-spark-benchmark:1.0.1
imagePullPolicy: Always
sparkVersion: 2.4.5
mainClass: com.aliyun.spark.benchmark.tpcds.BenchmarkSQL
mainApplicationFile: "local:///opt/spark/jars/ack-spark-benchmark-assembly-0.1.jar"
arguments:
# TPC-DS data localtion
- "oss://spark/data/tpc-ds-data/150g"
# results location
- "oss://spark/result/tpcds-25-query"
# Path to kit in the docker image
- "/tmp/tpcds-kit/tools"
# Data Format
- "parquet"
# Scale factor (in GB)
- "150"
# Number of iterations
- "1"
# Optimize queries
- "false"
# Filter queries, will run all if empty - "q70-v2.4,q82-v2.4,q64-v2.4"
- "q1-v2.4,q11-v2.4,q14a-v2.4,q14b-v2.4,q16-v2.4,q17-v2.4,q22-v2.4,q23a-v2.4,q23b-v2.4,q24a-v2.4,q24b-v2.4,q25-v2.4,q28-v2.4,q29-v2.4,q4-v2.4,q49-v2.4,q5-v2.4,q51-v2.4,q64-v2.4,q74-v2.4,q75-v2.4,q77-v2.4,q78-v2.4,q80-v2.4,q9-v2.4"
# Logging set to WARN
- "true"
hostNetwork: true
dnsPolicy: ClusterFirstWithHostNet
restartPolicy:
type: Never
timeToLiveSeconds: 86400
hadoopConf:
# OSS
"fs.oss.impl": "OSSFileSystem"
"fs.oss.endpoint": "oss.com"
"fs.oss.accessKeyId": "DFDSMGDNDFMSNGDFMNGCU"
"fs.oss.accessKeySecret": "secret"
sparkConf:
"spark.kubernetes.allocation.batch.size": "200"
"spark.sql.adaptive.join.enabled": "true"
"spark.eventLog.enabled": "true"
"spark.eventLog.dir": "oss://spark/spark-events"
driver:
cores: 4
memory: "8192m"
labels:
version: 2.4.5
spark-app: spark-tpcds
role: driver
serviceAccount: spark
nodeSelector:
beta.kubernetes.io/instance-type: ecs.g6.13xlarge
executor:
cores: 48
instances: 1
memory: "160g"
memoryOverhead: "16g"
labels:
version: 2.4.5
role: executor
nodeSelector:
beta.kubernetes.io/instance-type: ecs.g6.13xlarge
</code></pre>
<p>After I executed <code>kubectl create --validate=false -f tpcds-25-query.yaml</code> command, k8s returns this:</p>
<pre><code>sparkapplication.sparkoperator.k8s.io/tpcds-25-query created
</code></pre>
<p>which means the pod has been created. However, when I executed <code>kubectl get pods</code>, it gave me this:</p>
<pre><code>No resources found in default namespace.
</code></pre>
<p>When I created the pod again, it gave me this:</p>
<pre><code>Error from server (AlreadyExists): error when creating "tpcds-25-query.yaml": sparkapplications.sparkoperator.k8s.io "tpcds-25-query" already exists
</code></pre>
<p>I know the option <code>-v=8</code> can print more detailed logs. So I execute the command <code>kubectl create --validate=false -f tpcds-25-query.yaml -v=8</code>, its output is:</p>
<pre><code>I0219 05:50:17.121661 2148722 loader.go:372] Config loaded from file: /root/.kube/config
I0219 05:50:17.124735 2148722 round_trippers.go:432] GET https://172.16.0.212:6443/apis/metrics.k8s.io/v1beta1?timeout=32s
I0219 05:50:17.124747 2148722 round_trippers.go:438] Request Headers:
I0219 05:50:17.124753 2148722 round_trippers.go:442] Accept: application/json, */*
I0219 05:50:17.124759 2148722 round_trippers.go:442] User-Agent: kubectl/v1.22.3 (linux/amd64) kubernetes/9377577
I0219 05:50:17.132864 2148722 round_trippers.go:457] Response Status: 503 Service Unavailable in 8 milliseconds
I0219 05:50:17.132876 2148722 round_trippers.go:460] Response Headers:
I0219 05:50:17.132881 2148722 round_trippers.go:463] X-Kubernetes-Pf-Prioritylevel-Uid: e75a0286-dd47-4533-a65c-79d95dac5bb1
I0219 05:50:17.132890 2148722 round_trippers.go:463] Content-Length: 20
I0219 05:50:17.132894 2148722 round_trippers.go:463] Date: Sun, 19 Feb 2023 05:50:17 GMT
I0219 05:50:17.132898 2148722 round_trippers.go:463] Audit-Id: 3ab06f73-0c88-469a-834d-54ec06e910f1
I0219 05:50:17.132902 2148722 round_trippers.go:463] Cache-Control: no-cache, private
I0219 05:50:17.132906 2148722 round_trippers.go:463] Content-Type: text/plain; charset=utf-8
I0219 05:50:17.132909 2148722 round_trippers.go:463] X-Content-Type-Options: nosniff
I0219 05:50:17.132913 2148722 round_trippers.go:463] X-Kubernetes-Pf-Flowschema-Uid: 7f136704-82ad-4f6c-8c86-b470a972fede
I0219 05:50:17.134365 2148722 request.go:1181] Response Body: service unavailable
I0219 05:50:17.135255 2148722 request.go:1372] body was not decodable (unable to check for Status): couldn't get version/kind; json parse error: json: cannot unmarshal string into Go value of type struct { APIVersion string "json:\"apiVersion,omitempty\""; Kind string "json:\"kind,omitempty\"" }
I0219 05:50:17.135265 2148722 cached_discovery.go:78] skipped caching discovery info due to the server is currently unable to handle the request
I0219 05:50:17.136050 2148722 request.go:1181] Request Body: {"apiVersion":"sparkoperator.k8s.io/v1beta2","kind":"SparkApplication","metadata":{"name":"tpcds-25-query","namespace":"default"},"spec":{"arguments":["oss://lfpapertest/spark/data/tpc-ds-data/150g","oss://lfpapertest/spark/result/tpcds-runc-150g-48core-160g-1pod-25-query","/tmp/tpcds-kit/tools","parquet","150","1","false","q1-v2.4,q11-v2.4,q14a-v2.4,q14b-v2.4,q16-v2.4,q17-v2.4,q22-v2.4,q23a-v2.4,q23b-v2.4,q24a-v2.4,q24b-v2.4,q25-v2.4,q28-v2.4,q29-v2.4,q4-v2.4,q49-v2.4,q5-v2.4,q51-v2.4,q64-v2.4,q74-v2.4,q75-v2.4,q77-v2.4,q78-v2.4,q80-v2.4,q9-v2.4","true"],"dnsPolicy":"ClusterFirstWithHostNet","driver":{"cores":4,"labels":{"role":"driver","spark-app":"spark-tpcds","version":"2.4.5"},"memory":"8192m","nodeSelector":{"beta.kubernetes.io/instance-type":"ecs.g6.13xlarge"},"serviceAccount":"spark"},"executor":{"cores":48,"instances":1,"labels":{"role":"executor","version":"2.4.5"},"memory":"160g","memoryOverhead":"16g","nodeSelector":{"beta.kubernetes.io/instance-type":"ecs.g6.13xlarge"}},"hadoopConf":{"fs.oss.acce [truncated 802 chars]
I0219 05:50:17.136091 2148722 round_trippers.go:432] POST https://172.16.0.212:6443/apis/sparkoperator.k8s.io/v1beta2/namespaces/default/sparkapplications?fieldManager=kubectl-create
I0219 05:50:17.136098 2148722 round_trippers.go:438] Request Headers:
I0219 05:50:17.136104 2148722 round_trippers.go:442] Accept: application/json
I0219 05:50:17.136108 2148722 round_trippers.go:442] Content-Type: application/json
I0219 05:50:17.136113 2148722 round_trippers.go:442] User-Agent: kubectl/v1.22.3 (linux/amd64) kubernetes/9377577
I0219 05:50:17.144313 2148722 round_trippers.go:457] Response Status: 201 Created in 8 milliseconds
I0219 05:50:17.144327 2148722 round_trippers.go:460] Response Headers:
I0219 05:50:17.144332 2148722 round_trippers.go:463] X-Kubernetes-Pf-Prioritylevel-Uid: e75a0286-dd47-4533-a65c-79d95dac5bb1
I0219 05:50:17.144337 2148722 round_trippers.go:463] Content-Length: 2989
I0219 05:50:17.144341 2148722 round_trippers.go:463] Date: Sun, 19 Feb 2023 05:50:17 GMT
I0219 05:50:17.144345 2148722 round_trippers.go:463] Audit-Id: 8eef9d08-04c0-44f7-87bf-e820853cd9c6
I0219 05:50:17.144349 2148722 round_trippers.go:463] Cache-Control: no-cache, private
I0219 05:50:17.144352 2148722 round_trippers.go:463] Content-Type: application/json
I0219 05:50:17.144356 2148722 round_trippers.go:463] X-Kubernetes-Pf-Flowschema-Uid: 7f136704-82ad-4f6c-8c86-b470a972fede
I0219 05:50:17.144396 2148722 request.go:1181] Response Body: {"apiVersion":"sparkoperator.k8s.io/v1beta2","kind":"SparkApplication","metadata":{"creationTimestamp":"2023-02-19T05:50:17Z","generation":1,"managedFields":[{"apiVersion":"sparkoperator.k8s.io/v1beta2","fieldsType":"FieldsV1","fieldsV1":{"f:spec":{".":{},"f:arguments":{},"f:driver":{".":{},"f:cores":{},"f:labels":{".":{},"f:role":{},"f:spark-app":{},"f:version":{}},"f:memory":{},"f:nodeSelector":{".":{},"f:beta.kubernetes.io/instance-type":{}},"f:serviceAccount":{}},"f:executor":{".":{},"f:cores":{},"f:instances":{},"f:labels":{".":{},"f:role":{},"f:version":{}},"f:memory":{},"f:memoryOverhead":{},"f:nodeSelector":{".":{},"f:beta.kubernetes.io/instance-type":{}}},"f:hadoopConf":{".":{},"f:fs.oss.accessKeyId":{},"f:fs.oss.accessKeySecret":{},"f:fs.oss.endpoint":{},"f:fs.oss.impl":{}},"f:image":{},"f:imagePullPolicy":{},"f:mainApplicationFile":{},"f:mainClass":{},"f:mode":{},"f:restartPolicy":{".":{},"f:type":{}},"f:sparkConf":{".":{},"f:spark.eventLog.dir":{},"f:spark.eventLog.enabled":{},"f:spark.kubernetes. [truncated 1965 chars]
sparkapplication.sparkoperator.k8s.io/tpcds-25-query created
</code></pre>
<p>From the logs, we can see the only error "Response Status: 503 Service Unavailable in 8 milliseconds", I don't know what it means.</p>
<p>So I want to ask what may cause this, and how would I diagnose the problem? Any help is appreciated!</p>
<p>There might be multiple reasons for this; initially let’s check whether the pod is really created or not. Like <code>ehmad11</code> suggested, use <code>kubectl get pods --all-namespaces</code> to list pods in all namespaces. However, in your case it might not help, because your application is deployed directly in the default namespace. Regarding the error “Response Status: 503 Service Unavailable in 8 milliseconds”: once you are able to locate the pod, use <code>kubectl describe <pod></code> to find events and logs specific to your pod, and follow the troubleshooting steps provided in this <a href="https://komodor.com/learn/how-to-fix-kubernetes-service-503-service-unavailable-error/" rel="nofollow noreferrer">document</a> for rectifying it.</p>
<p><strong>Note:</strong> The reference document is from the <code>komodor</code> site, where each troubleshooting step is articulated in a highly detailed and understandable manner.</p>
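<p>One more thing worth noting: a <code>SparkApplication</code> is a custom resource, not a pod, so <code>kubectl get pods</code> can legitimately come back empty until the Spark operator creates the driver pod. A rough way to inspect the resource itself (names taken from your manifest; the label shown is the one the operator typically adds to driver/executor pods, so treat it as an assumption):</p>
<pre><code>kubectl get sparkapplications -n default
kubectl describe sparkapplication tpcds-25-query -n default
kubectl get pods -n default -l sparkoperator.k8s.io/app-name=tpcds-25-query
</code></pre>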
|
<p>I'm new to Spark and working on a POC to download a file and then read it. However, I am facing an issue where the file doesn't exist.</p>
<blockquote>
<p>java.io.FileNotFoundException: File file:/app/data-Feb-19-2023_131049.json does not exist</p>
</blockquote>
<p>But when I printed the path of the file, I found that the file exists and the path is also correct.</p>
<p>This is the output</p>
<pre><code>23/02/19 13:10:46 INFO BlockManagerMasterEndpoint: Registering block manager 10.14.142.21:37515 with 2.2 GiB RAM, BlockManagerId(1, 10.14.142.21, 37515, None)
FILE IS DOWNLOADED
['/app/data-Feb-19-2023_131049.json']
23/02/19 13:10:49 INFO SharedState: Setting hive.metastore.warehouse.dir ('null') to the value of spark.sql.warehouse.dir.
23/02/19 13:10:49 INFO SharedState: Warehouse path is 'file:/app/spark-warehouse'.
23/02/19 13:10:50 INFO InMemoryFileIndex: It took 39 ms to list leaf files for 1 paths.
23/02/19 13:10:51 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 206.6 KiB, free 1048.6 MiB)
23/02/19 13:10:51 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 35.8 KiB, free 1048.6 MiB)
23/02/19 13:10:51 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on experian-el-d41b428669cc1e8e-driver-svc.environments-quin-dev-1.svc:7079 (size: 35.8 KiB, free: 1048.8 MiB)
23/02/19 13:10:51 INFO SparkContext: Created broadcast 0 from json at <unknown>:0
23/02/19 13:10:51 INFO FileInputFormat: Total input files to process : 1
23/02/19 13:10:51 INFO FileInputFormat: Total input files to process : 1
23/02/19 13:10:51 INFO SparkContext: Starting job: json at <unknown>:0
23/02/19 13:10:51 INFO DAGScheduler: Got job 0 (json at <unknown>:0) with 1 output partitions
23/02/19 13:10:51 INFO DAGScheduler: Final stage: ResultStage 0 (json at <unknown>:0)
23/02/19 13:10:51 INFO DAGScheduler: Parents of final stage: List()
23/02/19 13:10:51 INFO DAGScheduler: Missing parents: List()
23/02/19 13:10:51 INFO DAGScheduler: Submitting ResultStage 0 (MapPartitionsRDD[2] at json at <unknown>:0), which has no missing parents
23/02/19 13:10:51 INFO MemoryStore: Block broadcast_1 stored as values in memory (estimated size 9.0 KiB, free 1048.6 MiB)
23/02/19 13:10:51 INFO MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 4.8 KiB, free 1048.5 MiB)
23/02/19 13:10:51 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on experian-el-d41b428669cc1e8e-driver-svc.environments-quin-dev-1.svc:7079 (size: 4.8 KiB, free: 1048.8 MiB)
23/02/19 13:10:51 INFO SparkContext: Created broadcast 1 from broadcast at DAGScheduler.scala:1513
23/02/19 13:10:51 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 0 (MapPartitionsRDD[2] at json at <unknown>:0) (first 15 tasks are for partitions Vector(0))
23/02/19 13:10:51 INFO TaskSchedulerImpl: Adding task set 0.0 with 1 tasks resource profile 0
23/02/19 13:10:51 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0) (10.14.142.21, executor 1, partition 0, PROCESS_LOCAL, 4602 bytes) taskResourceAssignments Map()
23/02/19 13:10:52 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on 10.14.142.21:37515 (size: 4.8 KiB, free: 2.2 GiB)
23/02/19 13:10:52 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 10.14.142.21:37515 (size: 35.8 KiB, free: 2.2 GiB)
23/02/19 13:10:52 WARN TaskSetManager: Lost task 0.0 in stage 0.0 (TID 0) (10.14.142.21 executor 1): java.io.FileNotFoundException: File file:/app/data-Feb-19-2023_131049.json does not exist
at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:779)
at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:1100)
at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:769)
at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:462)
at org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSInputChecker.<init>(ChecksumFileSystem.java:160)
at org.apache.hadoop.fs.ChecksumFileSystem.open(ChecksumFileSystem.java:372)
at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:976)
at org.apache.spark.sql.execution.datasources.CodecStreams$.createInputStream(CodecStreams.scala:40)
at org.apache.spark.sql.execution.datasources.CodecStreams$.createInputStreamWithCloseResource(CodecStreams.scala:52)
at org.apache.spark.sql.execution.datasources.json.MultiLineJsonDataSource$.dataToInputStream(JsonDataSource.scala:195)
at org.apache.spark.sql.execution.datasources.json.MultiLineJsonDataSource$.createParser(JsonDataSource.scala:199)
at org.apache.spark.sql.execution.datasources.json.MultiLineJsonDataSource$.$anonfun$infer$4(JsonDataSource.scala:165)
at org.apache.spark.sql.catalyst.json.JsonInferSchema.$anonfun$infer$3(JsonInferSchema.scala:86)
at org.apache.spark.util.Utils$.tryWithResource(Utils.scala:2763)
at org.apache.spark.sql.catalyst.json.JsonInferSchema.$anonfun$infer$2(JsonInferSchema.scala:86)
at scala.collection.Iterator$$anon$11.nextCur(Iterator.scala:486)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:492)
at scala.collection.Iterator.isEmpty(Iterator.scala:387)
at scala.collection.Iterator.isEmpty$(Iterator.scala:387)
at scala.collection.AbstractIterator.isEmpty(Iterator.scala:1431)
at scala.collection.TraversableOnce.reduceLeftOption(TraversableOnce.scala:249)
at scala.collection.TraversableOnce.reduceLeftOption$(TraversableOnce.scala:248)
at scala.collection.AbstractIterator.reduceLeftOption(Iterator.scala:1431)
at scala.collection.TraversableOnce.reduceOption(TraversableOnce.scala:256)
at scala.collection.TraversableOnce.reduceOption$(TraversableOnce.scala:256)
at scala.collection.AbstractIterator.reduceOption(Iterator.scala:1431)
at org.apache.spark.sql.catalyst.json.JsonInferSchema.$anonfun$infer$1(JsonInferSchema.scala:103)
at org.apache.spark.rdd.RDD.$anonfun$mapPartitions$2(RDD.scala:855)
at org.apache.spark.rdd.RDD.$anonfun$mapPartitions$2$adapted(RDD.scala:855)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:365)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:329)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:136)
at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:548)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1504)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:551)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.base/java.lang.Thread.run(Unknown Source)
23/02/19 13:10:52 INFO TaskSetManager: Starting task 0.1 in stage 0.0 (TID 1) (10.14.142.21, executor 1, partition 0, PROCESS_LOCAL, 4602 bytes) taskResourceAssignments Map()
23/02/19 13:10:52 INFO TaskSetManager: Lost task 0.1 in stage 0.0 (TID 1) on 10.14.142.21, executor 1: java.io.FileNotFoundException (File file:/app/data-Feb-19-2023_131049.json does not exist) [duplicate 1]
23/02/19 13:10:52 INFO TaskSetManager: Starting task 0.2 in stage 0.0 (TID 2) (10.14.142.21, executor 1, partition 0, PROCESS_LOCAL, 4602 bytes) taskResourceAssignments Map()
23/02/19 13:10:52 INFO TaskSetManager: Lost task 0.2 in stage 0.0 (TID 2) on 10.14.142.21, executor 1: java.io.FileNotFoundException (File file:/app/data-Feb-19-2023_131049.json does not exist) [duplicate 2]
23/02/19 13:10:52 INFO TaskSetManager: Starting task 0.3 in stage 0.0 (TID 3) (10.14.142.21, executor 1, partition 0, PROCESS_LOCAL, 4602 bytes) taskResourceAssignments Map()
23/02/19 13:10:52 INFO TaskSetManager: Lost task 0.3 in stage 0.0 (TID 3) on 10.14.142.21, executor 1: java.io.FileNotFoundException (File file:/app/data-Feb-19-2023_131049.json does not exist) [duplicate 3]
23/02/19 13:10:52 ERROR TaskSetManager: Task 0 in stage 0.0 failed 4 times; aborting job
23/02/19 13:10:52 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool
23/02/19 13:10:52 INFO TaskSchedulerImpl: Cancelling stage 0
23/02/19 13:10:52 INFO TaskSchedulerImpl: Killing all running tasks in stage 0: Stage cancelled
23/02/19 13:10:52 INFO DAGScheduler: ResultStage 0 (json at <unknown>:0) failed in 1.128 s due to Job aborted due to stage failure: Task 0 in stage 0.0 failed 4 times, most recent failure: Lost task 0.3 in stage 0.0 (TID 3) (10.14.142.21 executor 1): java.io.FileNotFoundException: File file:/app/data-Feb-19-2023_131049.json does not exist
at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:779)
at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:1100)
at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:769)
at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:462)
at org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSInputChecker.<init>(ChecksumFileSystem.java:160)
at org.apache.hadoop.fs.ChecksumFileSystem.open(ChecksumFileSystem.java:372)
at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:976)
at org.apache.spark.sql.execution.datasources.CodecStreams$.createInputStream(CodecStreams.scala:40)
at org.apache.spark.sql.execution.datasources.CodecStreams$.createInputStreamWithCloseResource(CodecStreams.scala:52)
at org.apache.spark.sql.execution.datasources.json.MultiLineJsonDataSource$.dataToInputStream(JsonDataSource.scala:195)
at org.apache.spark.sql.execution.datasources.json.MultiLineJsonDataSource$.createParser(JsonDataSource.scala:199)
at org.apache.spark.sql.execution.datasources.json.MultiLineJsonDataSource$.$anonfun$infer$4(JsonDataSource.scala:165)
at org.apache.spark.sql.catalyst.json.JsonInferSchema.$anonfun$infer$3(JsonInferSchema.scala:86)
at org.apache.spark.util.Utils$.tryWithResource(Utils.scala:2763)
at org.apache.spark.sql.catalyst.json.JsonInferSchema.$anonfun$infer$2(JsonInferSchema.scala:86)
at scala.collection.Iterator$$anon$11.nextCur(Iterator.scala:486)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:492)
at scala.collection.Iterator.isEmpty(Iterator.scala:387)
at scala.collection.Iterator.isEmpty$(Iterator.scala:387)
at scala.collection.AbstractIterator.isEmpty(Iterator.scala:1431)
at scala.collection.TraversableOnce.reduceLeftOption(TraversableOnce.scala:249)
at scala.collection.TraversableOnce.reduceLeftOption$(TraversableOnce.scala:248)
at scala.collection.AbstractIterator.reduceLeftOption(Iterator.scala:1431)
at scala.collection.TraversableOnce.reduceOption(TraversableOnce.scala:256)
at scala.collection.TraversableOnce.reduceOption$(TraversableOnce.scala:256)
at scala.collection.AbstractIterator.reduceOption(Iterator.scala:1431)
at org.apache.spark.sql.catalyst.json.JsonInferSchema.$anonfun$infer$1(JsonInferSchema.scala:103)
at org.apache.spark.rdd.RDD.$anonfun$mapPartitions$2(RDD.scala:855)
at org.apache.spark.rdd.RDD.$anonfun$mapPartitions$2$adapted(RDD.scala:855)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:365)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:329)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:136)
at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:548)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1504)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:551)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.base/java.lang.Thread.run(Unknown Source)
</code></pre>
<p>This is my code to download the file and and print its path</p>
<pre><code> def find_files(self, filename, search_path):
result = []
        # Walking top-down from the root
for root, dir, files in os.walk(search_path):
if filename in files:
result.append(os.path.join(root, filename))
return result
def downloadData(self, access_token, data):
headers = {
'Content-Type': 'application/json',
'Charset': 'UTF-8',
'Authorization': f'Bearer {access_token}'
}
try:
response = requests.post(self.kyc_url, data=json.dumps(data), headers=headers)
response.raise_for_status()
logger.debug("received kyc data")
response_filename = ("data-" + time.strftime('%b-%d-%Y_%H%M%S', time.localtime()) + ".json")
with open(response_filename, 'w', encoding='utf-8') as f:
json.dump(response.json(), f, ensure_ascii=False, indent=4)
f.close()
print("FILE IS DOWNLOADED")
print(self.find_files(response_filename, "/"))
except requests.exceptions.HTTPError as err:
logger.error("failed to fetch kyc data")
raise SystemExit(err)
return response_filename
</code></pre>
<p>This is my code to read the file and upload to minio</p>
<pre><code> def load(spark: SparkSession, json_file_path: str, destination_path: str) -> None:
df = spark.read.option("multiline", "true").json(json_file_path)
df.write.format("delta").save(f"s3a://{destination_path}")
</code></pre>
<p>I'm running spark in k8s with spark operator.</p>
<p>This is my SparkApplication manifest</p>
<pre><code>apiVersion: "sparkoperator.k8s.io/v1beta2"
kind: SparkApplication
metadata:
name: myApp
namespace: demo
spec:
type: Python
pythonVersion: "3"
mode: cluster
image: "myImage"
imagePullPolicy: Always
mainApplicationFile: local:///app/main.py
sparkVersion: "3.3.1"
restartPolicy:
type: OnFailure
onFailureRetries: 3
onFailureRetryInterval: 10
onSubmissionFailureRetries: 5
onSubmissionFailureRetryInterval: 20
timeToLiveSeconds: 86400
deps:
packages:
- io.delta:delta-core_2.12:2.2.0
- org.apache.hadoop:hadoop-aws:3.3.1
driver:
env:
- name: NAMESPACE
value: demo
cores: 2
coreLimit: "2000m"
memory: "2048m"
labels:
version: 3.3.1
serviceAccount: spark-driver
executor:
cores: 4
instances: 1
memory: "4096m"
coreRequest: "500m"
coreLimit: "4000m"
labels:
version: 3.3.1
dynamicAllocation:
enabled: false
</code></pre>
<p>Can someone please point out what I am doing wrong ?</p>
<p>Thank you</p>
<p>If you are running in cluster mode, then your input files need to live on a shared filesystem like <code>HDFS</code> or <code>S3</code>, not on the local FS, since both the driver and the executors need access to the input file.</p>
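<p>As a rough sketch of one way to do that with your MinIO setup (assuming <code>boto3</code> is available in the image; the endpoint, bucket and credentials below are placeholders, not values from your cluster): download the file on the driver, push it to the bucket, then read it back through <code>s3a://</code> so the executors can reach it.</p>
<pre><code>import boto3

def upload_and_load(spark, local_json_path: str, bucket: str, key: str, destination_path: str) -> None:
    # Push the locally downloaded JSON to object storage so it is visible cluster-wide
    s3 = boto3.client(
        "s3",
        endpoint_url="http://minio.minio.svc.cluster.local:9000",  # assumed MinIO service URL
        aws_access_key_id="<ACCESS_KEY>",
        aws_secret_access_key="<SECRET_KEY>",
    )
    s3.upload_file(local_json_path, bucket, key)

    # Now every executor reads the same shared copy instead of a driver-local path
    df = spark.read.option("multiline", "true").json(f"s3a://{bucket}/{key}")
    df.write.format("delta").save(f"s3a://{destination_path}")
</code></pre>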
|
<p>How do I delete the failed jobs in a Kubernetes cluster using a cron job in GKE? When I tried to delete the failed jobs using the following YAML, it deleted all the jobs (including running ones).</p>
<pre><code>
apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: XXX
namespace: XXX
spec:
schedule: "*/30 * * * *"
failedJobsHistoryLimit: 1
successfulJobsHistoryLimit: 1
jobTemplate:
spec:
template:
spec:
serviceAccountName: XXX
containers:
- name: kubectl-runner
image: bitnami/kubectl:latest
command: ["sh", "-c", "kubectl delete jobs $(kubectl get jobs | awk '$2 ~ 1/1' | awk '{print $1}')"]
restartPolicy: OnFailure
</code></pre>
| <p>This one visually looks better for me:</p>
<pre><code>kubectl delete job --field-selector=status.phase==Failed
</code></pre>
|
<p>I am trying to use VPA for autoscaling my deployed services. Due to limited resources in my cluster I set the min_replica option to 1. The VPA workflow I have seen so far is that it first deletes the existing pod and then re-creates it. This approach causes downtime for my services. What I want is for the VPA to first create the new pod and then delete the old pod, much like rolling updates for deployments. Is there an option or hack to reverse the flow to the desired order in my case?</p>
| <p>This can be achieved by using python script or by using an IAC pipeline, you can get the metrics of the kubernetes cluster and whenever these metrics exceed a certain threshold, trigger this python code for creating new pod with the required resources and shutdown the old pod. Follow this github link for more info on <a href="https://github.com/kubernetes-client/python" rel="nofollow noreferrer">python plugin</a> for kubernetes.</p>
<p>Ansible can also be used for performing this operation. This can be achieved by triggering your ansible playbook whenever the threshold breaches a certain limit and you can specify the new sizes of the pods that need to be created. Follow this official <a href="https://docs.ansible.com/ansible/latest/collections/kubernetes/core/k8s_scale_module.html" rel="nofollow noreferrer">ansible document</a> for more information. However both these procedures involve manual analysis for selecting the desired pod size for scaling. So if you don’t want to use vertical scaling you can go for horizontal scaling.</p>
<p><strong>Note:</strong> The information is gathered from official Ansible and github pages and the urls are referred to in the post.</p>
|
<p>I'm trying to setup <code>Nginx-ingress controller</code> to manage two paths on the same <code>hostname</code> in bare metal based cluster.</p>
<p>In the <strong>app1</strong> namespace i have below nginx resource:-</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: app1-ingress
namespace: app1
spec:
ingressClassName: nginx
rules:
- host: web.example.com
http:
paths:
- path: /app1
pathType: Prefix
backend:
service:
name: app1-service
port:
number: 80
</code></pre>
<p>And in the <strong>app2</strong> namespace i have below nginx resource:-</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: app2-ingress
namespace: app2
spec:
ingressClassName: nginx
rules:
- host: web.example.com
http:
paths:
- path: /app2
pathType: Prefix
backend:
service:
name: app2-service
port:
number: 80
</code></pre>
<p>My <code>app1-service</code> was applied first and it is running fine; now when I applied the second <code>app2-service</code>, it shows the warning below and I am not able to access it in the browser.</p>
<pre><code>Annotations: <none>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning Rejected 54s nginx-ingress-controller All hosts are taken by other resources
Warning Rejected 54s nginx-ingress-controller All hosts are taken by other resources
Warning Rejected 54s nginx-ingress-controller All hosts are taken by other resources
</code></pre>
<p>How do I configure my nginx ingress resources to route multiple service paths on the same hostname?</p>
| <p>Default Nginx Ingress controller doesn't support having different <code>Ingress</code> resources with the same hostname. You can have one <code>Ingress</code> resource that contains multiple paths, but in this case all apps should live in one namespace. Like this:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: app1-ingress
namespace: app1
spec:
ingressClassName: nginx
rules:
- host: web.example.com
http:
paths:
- path: /app1
pathType: Prefix
backend:
service:
name: app1-service
port:
number: 80
- path: /app2
pathType: Prefix
backend:
service:
name: app2-service
port:
number: 80
</code></pre>
<p>Splitting ingresses between namespaces is currently not supported by standard Nginx Ingress controller.</p>
<p>You may however take a look at an alternative implementation of <a href="https://github.com/nginxinc/kubernetes-ingress" rel="nofollow noreferrer">Nginx Ingress</a> by Nginx Inc. They have support for <a href="https://github.com/nginxinc/kubernetes-ingress/tree/main/examples/ingress-resources/mergeable-ingress-types" rel="nofollow noreferrer">Mergeable Ingresses</a>.</p>
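<p>For reference, a rough sketch of what that looks like with their <em>mergeable ingress</em> annotations (the <code>master</code> resource owns the host, each namespace adds a <code>minion</code> with its own paths; names and class are adapted from your manifests, so adjust to your setup):</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-master
  namespace: app1
  annotations:
    nginx.org/mergeable-ingress-type: "master"
spec:
  ingressClassName: nginx
  rules:
    - host: web.example.com
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app2-minion
  namespace: app2
  annotations:
    nginx.org/mergeable-ingress-type: "minion"
spec:
  ingressClassName: nginx
  rules:
    - host: web.example.com
      http:
        paths:
          - path: /app2
            pathType: Prefix
            backend:
              service:
                name: app2-service
                port:
                  number: 80
</code></pre>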
|
<p>I tried the following but failed to get a list of all pods like these:</p>
<pre><code>my-pod-1
my-pod-19
my-pod-24
</code></pre>
<p>etc.</p>
<pre><code>kgp | grep -E my-pod-[0-9]+
kgp | grep -E my-pod-[0-9]
kgp | grep -i -E my-pod-[0-9]
kgp | grep -i -E "my-pod-[0-9]"
kgp | grep -i -E "^my-pod-[0-9]"
kgp | grep "^my-pod-[0-9]"
kgp | grep -v 'my-pod-[0-9]'
kgp == kubectl get pods
</code></pre>
<hr />
<p>Also following kind Timor's suggestion I tried the following:</p>
<pre><code>user@complt130:/mnt/c/Users/usert$ kubectl get pods | grep -E '^my-pod-\d+$'
user@complt130:/mnt/c/Users/usert$ kubectl get pods | grep -F '^my-pod-\d+$'
user@complt130:/mnt/c/Users/usert$ kubectl get pods | grep -G '^my-pod-\d+$'
user@complt130:/mnt/c/Users/usert$ kubectl get pods | grep -e '^my-pod-\d+$'
user@complt130:/mnt/c/Users/usert$ kubectl get pods | grep -P '^my-pod-\d+$'
user@complt130:/mnt/c/Users/usert$ kubectl get pods | grep my-pod
my-pod-0 1/1 Running 0 4d9h
my-pod-1 1/1 Running 0 22h
my-pod-aggregator-cronjob-27946622-qnttj 1/1 Running 0 16m
my-pod-manager-766d97676c-9hb5c 1/1 Running 0 4d8h
</code></pre>
<hr />
<p>Please advise.</p>
| <p>This one worked!!!</p>
<pre><code>kubectl get pods | grep -P 'my-pod-[0-9]+'
</code></pre>
<p>Thank you all (and especially to @Timur who enlightened me about the Perl regexp option).</p>
|
<p>I tried Keda with AKS and I really appreciate that pods are automatically instantiated based on the Azure DevOps queue of release & build jobs.</p>
<p>However, I noticed something strange: AKS/Keda often removes a pod while it is still processing, which makes the workflow fail.
<a href="https://i.stack.imgur.com/vbsm7.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/vbsm7.png" alt="enter image description here" /></a></p>
<p>Message reads
<code>We stopped hearing from agent aks-linux-768d6647cc-ntmh4. Verify the agent machine is running and has a healthy network connection. Anything that terminates an agent process, starves it for CPU, or blocks its network access can cause this error. For more information, see: https://go.microsoft.com/fwlink/?linkid=846610</code></p>
<p>Expected behavior: pods must complete the tasks then Keda/AKS can remove this pod.</p>
<p>I share with you my keda yml file:</p>
<pre><code># deployment.yaml
apiVersion: apps/v1 # The API resource where this workload resides
kind: Deployment # The kind of workload we're creating
metadata:
name: aks-linux # This will be the name of the deployment
spec:
selector: # Define the wrapping strategy
matchLabels: # Match all pods with the defined labels
app: aks-linux # Labels follow the `name: value` template
replicas: 1
template: # This is the template of the pod inside the deployment
metadata: # Metadata for the pod
labels:
app: aks-linux
spec:
nodeSelector:
agentpool: linux
containers: # Here we define all containers
- image: <My image here>
name: aks-linux
env:
- name: "AZP_URL"
value: "<myURL>"
- name: "AZP_TOKEN"
value: "<MyToken>"
- name: "AZP_POOL"
value: "<MyPool>"
resources:
requests: # Minimum amount of resources requested
cpu: 2
memory: 4096Mi
limits: # Maximum amount of resources requested
cpu: 4
memory: 8192Mi
</code></pre>
<p>I used latest version of AKS and Keda. Any idea ?</p>
| <p>Check <a href="https://keda.sh/blog/2021-05-27-azure-pipelines-scaler/" rel="nofollow noreferrer">the official Keda docs</a>:</p>
<blockquote>
<p>When running your agents as a deployment you have no control on which pod gets killed when scaling down.</p>
</blockquote>
<p>So, to solve it you need to use <code>ScaledJob</code>:</p>
<blockquote>
<p>If you run your agents as a Job, KEDA will start a Kubernetes job for each job that is in the agent pool queue. The agents will accept one job when they are started and terminate afterwards. Since an agent is always created for every pipeline job, you can achieve fully isolated build environments by using Kubernetes jobs.</p>
</blockquote>
<p>See there how to implement it.</p>
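<p>A minimal sketch of such a <code>ScaledJob</code>, reusing the image and environment variables from your deployment (the trigger metadata field names come from the KEDA Azure Pipelines scaler docs; treat the rest of the values as placeholders):</p>
<pre><code>apiVersion: keda.sh/v1alpha1
kind: ScaledJob
metadata:
  name: azdevops-agent-job
spec:
  triggers:
    - type: azure-pipelines
      metadata:
        poolName: "<MyPool>"
        organizationURLFromEnv: "AZP_URL"
        personalAccessTokenFromEnv: "AZP_TOKEN"
  jobTargetRef:
    template:
      spec:
        nodeSelector:
          agentpool: linux
        containers:
          - name: aks-linux
            image: <My image here>
            env:
              - name: AZP_URL
                value: "<myURL>"
              - name: AZP_TOKEN
                value: "<MyToken>"
              - name: AZP_POOL
                value: "<MyPool>"
        restartPolicy: Never
</code></pre>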
|
<p>I already have a Google-managed SSL certificate created (with the DNS verification option). I want to use the same certificate in my istio-ingress for SSL. Are there any annotations available for this?</p>
<p>We can create a ManagedCertificate resource in GKE, but it uses the load balancer verification option, which does not support wildcard certificates.</p>
<p>What should I do if I want to create a certificate like (*.example.com) and attach it to an istio-ingress or GKE ingress?</p>
| <p>You can create the wild card certificate with the Cert-manger.</p>
<p>Here is my <a href="https://medium.com/@harsh.manvar111/wild-card-certificate-using-cert-manager-in-kubernetes-3406b042d5a2" rel="nofollow noreferrer">article</a> on requesting a <strong>wildcard</strong> certificate with <strong>DNS</strong> verification, since wildcards are not supported with <strong>HTTP</strong> validation.</p>
<p><a href="https://medium.com/@harsh.manvar111/wild-card-certificate-using-cert-manager-in-kubernetes-3406b042d5a2" rel="nofollow noreferrer">https://medium.com/@harsh.manvar111/wild-card-certificate-using-cert-manager-in-kubernetes-3406b042d5a2</a></p>
<p>For GCP DNS verification you can follow official guide : <a href="https://cert-manager.io/docs/configuration/acme/dns01/google/" rel="nofollow noreferrer">https://cert-manager.io/docs/configuration/acme/dns01/google/</a></p>
<p>Once auth is successful you will be able to request the certificate and it will get stored in K8s secret.</p>
<p><strong>create a service account</strong> :</p>
<pre><code>PROJECT_ID=myproject-id
gcloud iam service-accounts create dns01-solver --display-name "dns01-solver"
</code></pre>
<p><strong>Binding policy</strong> :</p>
<pre><code>gcloud projects add-iam-policy-binding $PROJECT_ID \
--member serviceAccount:dns01-solver@$PROJECT_ID.iam.gserviceaccount.com \
--role roles/dns.admin
</code></pre>
<p><strong>K8s secret</strong> :</p>
<pre><code>gcloud iam service-accounts keys create key.json \
--iam-account dns01-solver@$PROJECT_ID.iam.gserviceaccount.com
kubectl create secret generic clouddns-dns01-solver-svc-acct \
--from-file=key.json
</code></pre>
<p><strong>issuer</strong></p>
<pre><code>apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
name: gcp-issuer
spec:
acme:
...
solvers:
- dns01:
cloudDNS:
# The ID of the GCP project
project: $PROJECT_ID
# This is the secret used to access the service account
serviceAccountSecretRef:
name: clouddns-dns01-solver-svc-acct
key: key.json
---
apiVersion: cert-manager.io/v1alpha2
kind: Certificate
metadata:
name: le-crt
spec:
secretName: tls-secret
issuerRef:
kind: Issuer
    name: gcp-issuer
commonName: "*.devops.example.in"
dnsNames:
- "*.devops.example.in"
</code></pre>
<p>You can attach this newly auto-created <strong>secret</strong> to <strong>Ingress</strong> or <strong>Gateway</strong> in <strong>Istio</strong> as per need. That <strong>secret</strong> will be storing your wild card <strong>certificate</strong>.</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: gateway
annotations:
cert-manager.io/issuer: gcp-issuer
spec:
selector:
istio: ingressgateway
servers:
- port:
number: 443
name: https
protocol: HTTPS
tls:
mode: SIMPLE
credentialName: tls-secret # This should match the Certificate secretName
hosts:
- *.devops.example.in
</code></pre>
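<p>Before pointing the Gateway at it, you can check that the wildcard certificate was actually issued and that the secret exists (names taken from the manifests above):</p>
<pre><code>kubectl get certificate le-crt
kubectl describe certificate le-crt
kubectl get secret tls-secret
</code></pre>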
|
<p>Is it possible to create a Kubernetes service and pod in different namespaces, for example, having myweb-svc pointing to the actual running myweb-pod, while myweb-svc and myweb-pod are in different namespaces?</p>
<p>You can write a YAML manifest to create both the pod and the service in their respective namespaces: specify the ‘namespace’ field in the ‘metadata’ section of both the ‘pod’ and ‘service’ objects to control the namespace in which each of them is created.</p>
<p>Also, if you want to point your Service to a Service in a different namespace or on another cluster you can use service without a <a href="https://kubernetes.io/docs/concepts/services-networking/service/#services-without-selectors" rel="nofollow noreferrer">pod selector</a>.</p>
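<p>For example, one common pattern is an <code>ExternalName</code> service that forwards to a service living in another namespace via its in-cluster DNS name (the names below are hypothetical):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: myweb-svc
  namespace: frontend-ns
spec:
  type: ExternalName
  externalName: myweb-svc.backend-ns.svc.cluster.local
</code></pre>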
<p>Refer to this link on <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/kubernetes-objects/" rel="nofollow noreferrer">Understanding kubernetes Object</a> for more information.</p>
|
<p>I upgraded AKS using the Azure portal from 1.23.5 to 1.24.9. This part finished properly (or so I assumed) based on the status below in the Azure portal.
<a href="https://i.stack.imgur.com/RuT23.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/RuT23.png" alt="enter image description here" /></a></p>
<p>I continued with 1.24.9 to 1.25.5. This time it only partly worked: the Azure portal shows 1.25.5 for the nodepool with provisioning state "Failed", while the nodes are still at 1.24.9.</p>
<p>I found that some nodes were having issues connecting to the network, both outside (e.g. github) and to internal "services". For some reason it is an intermittent issue: on the same node it sometimes works and sometimes does not. (I had pods running on each node with python.)</p>
<p><a href="https://i.stack.imgur.com/Yy6M5.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Yy6M5.png" alt="enter image description here" /></a></p>
<p>Each node has cluster IP in resolv.conf
<a href="https://i.stack.imgur.com/bipoQ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/bipoQ.png" alt="enter image description here" /></a></p>
<p>One of the questions on SO had a hint about <code>ingress-nginx</code> compatibility. I found that I had an incompatible version, so I upgraded it to <a href="https://github.com/kubernetes/ingress-nginx" rel="nofollow noreferrer">1.6.4, which is compatible with both 1.24 and 1.25</a>.</p>
<p>But this network issue still persists. I am not sure if this is because of the AKS provisioning state of "Failed". The connectivity check for this cluster in the Azure portal is Success; the only issue reported in Azure portal diagnostics is the nodepool provisioning state.
<a href="https://i.stack.imgur.com/gPyiy.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/gPyiy.png" alt="enter image description here" /></a></p>
<p>Is there anything I need to do after the <code>ingress-nginx</code> upgrade for all nodes/pods to get the new config?
Or is there a way to re-trigger this upgrade? I am not sure it would help, but I am assuming it may reset the configs on all nodes and might work.</p>
| <p>Ok, posting the solution and journey to it here, so just in case someone comes across similar issue.</p>
<p>There was a network issue in the cluster after the upgrade, which was the reason all pods had DNS issues. Due to these network issues the metrics-server was not in a running state, its PDB allowed disruptions remained at 0, and that was causing <code>PodDrainFailure</code> errors while upgrading the nodes.</p>
<p>I was able to force all nodes to upgrade to 1.25.5 by running the command <code>az aks nodepool upgrade -n agentpool -g rg_name --cluster-name aks_name --node-image-only</code>. However, after executing this, I had to keep on <a href="https://learn.microsoft.com/en-us/troubleshoot/azure/azure-kubernetes/error-code-poddrainfailure" rel="nofollow noreferrer">deleting the PDB</a> to get all the nodes to upgrade.</p>
<p>This helped get the control plane and all nodes to version 1.25.5; however, the overall status still remained in the Failed (Running) state. This was solved by triggering another upgrade with the <code>--control-plane-only</code> flag:</p>
<pre><code>az aks upgrade \
 --resource-group <ResourceGroupName> --name <AKSClusterName> \
--control-plane-only \
--kubernetes-version <KubernetesVersion>
</code></pre>
<p>However, this did not solve the core networking issues: everything from the metrics server to the application pods was failing to resolve hostnames. The interesting thing was that internal services were not reachable at all, while the outside network (e.g. github.com, microsoft.com) would work intermittently.</p>
<p>Based on <a href="https://github.com/Azure/AKS/issues/2903" rel="nofollow noreferrer">AKS issue 2903</a> and the related <a href="https://github.com/kubernetes/ingress-nginx/issues/8501" rel="nofollow noreferrer">ingress-nginx issue 8501</a>, I found that after k8s 1.24 ingress-nginx needs a special annotation to keep health probes running properly, so I had to update the Helm release with the command below:</p>
<pre><code>helm upgrade ingress-nginx ingress-nginx/ingress-nginx \
--reuse-values \
--namespace <NAMESPACE> \
--set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-health-probe-request-path"=/healthz
</code></pre>
<p>This did get the Azure AKS health dashboard and the metrics server to show a running state, but it did not solve the underlying network issue. I used all the scenarios and commands mentioned in all sections of the <a href="https://learn.microsoft.com/en-us/troubleshoot/azure/azure-kubernetes/basic-troubleshooting-outbound-connections" rel="nofollow noreferrer">MS troubleshooting guide for outbound connections</a> to narrow down that the issue was somehow with the <code>kube-dns</code> service and the CoreDNS pods.</p>
<p>DNS resolution would work when pointing the nameserver directly at a CoreDNS pod IP (the test pod needs to run on the same node) or at a public DNS server, but it would fail when using the <code>kube-dns</code> service IP configured in <code>resolv.conf</code>:</p>
<pre><code>jovyan@elyra-web-59f899c447-xw5x2:~$ host -a microsoft.com
Trying "microsoft.com.elyra-airflow.svc.cluster.local"
;; connection timed out; no servers could be reached
jovyan@elyra-web-59f899c447-xw5x2:~$ nslookup microsoft.com
;; connection timed out; no servers could be reached
jovyan@elyra-web-59f899c447-xw5x2:~$ nslookup microsoft.com 1.1.1.1
Server: 1.1.1.1
Address: 1.1.1.1#53
Non-authoritative answer:
Name: microsoft.com
Address: 20.103.85.33
Name: microsoft.com
Address: 20.112.52.29
Name: microsoft.com
Address: 20.81.111.85
Name: microsoft.com
Address: 20.84.181.62
Name: microsoft.com
Address: 20.53.203.50
</code></pre>
<p>I restarted CoreDNS, konnectivity-agent and so on, but it did not help.
In the end I found a <a href="https://github.com/Azure/AKS/issues/1320#issuecomment-555897357" rel="nofollow noreferrer">hint in AKS issue 1320</a> which helped solve the issue. Even though that issue relates to k8s version 1.13, which suggests this was not a version-specific problem, I deleted ALL pods from the <code>kube-system</code> namespace at once. Immediately after these pods were back up and running, the DNS issue was gone and everything was working as before.</p>
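<p>For reference, this is the kind of command that recreates every <code>kube-system</code> pod at once (the controllers recreate them immediately, but expect a short disruption):</p>
<pre><code>kubectl delete pods --all -n kube-system
</code></pre>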
<p>Phew, this was some journey of 5 days to get it solved. Looking forward to next upgrade in March now !</p>
|
<p>I have a k3s (lightweight k8s) cluster running on my Raspberry Pi. So, I am not using any cloud-hosted cluster, but a bare metal one on my Raspberry Pi.</p>
<p>I have deployed a application with this manifest:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: hello-world
namespace: myapp
spec:
replicas: 3
selector:
matchLabels:
app: hello-world
template:
metadata:
labels:
app: hello-world
spec:
containers:
- name: hello-world
image: bashofmann/rancher-demo:1.0.0
imagePullPolicy: Always
resources:
requests:
cpu: 200m
ports:
- containerPort: 8080
name: web
protocol: TCP
</code></pre>
<p>I also created a service to forward traffic to the application pod. Its manifest is:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: demo-app-svc
namespace: myapp
spec:
selector:
app: hello-world
ports:
- name: web
protocol: TCP
port: 31113
targetPort: 8080
</code></pre>
<p>Then, I created an Ingress for the routing rules:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: myapp-ing
namespace: myapp
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
ingressClassName: nginx
rules:
- host: myapp.com
http:
paths:
- pathType: Prefix
path: /
backend:
service:
name: demo-app-svc
port:
number: 31113
</code></pre>
<p>I successfully deployed the above application pod, service, and Ingress to my k3s cluster. As the manifests indicate, they are under the namespace <code>myapp</code>.</p>
<p>The next thing I would like to do is to deploy the <strong>Kubernetes Nginx Ingress Controller</strong> so that clients outside the cluster can access the deployed application.</p>
<p>So, I deployed it by :</p>
<pre><code>kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.1.2/deploy/static/provider/cloud/deploy.yaml
</code></pre>
<p>The above command successfully deployed the <strong>Ingress Controller</strong> under the namespace <code>ingress-nginx</code>, along with other objects, as shown below with the command <code>k get all -n ingress-nginx</code>:
<a href="https://i.stack.imgur.com/QZH7P.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/QZH7P.png" alt="enter image description here" /></a></p>
<p>As you can see above, the external IP of the <code>LoadBalancer</code> type <code>service</code> has the value <code><pending></code>. So, clients outside the cluster still cannot access the application pod.</p>
<p>Why is that, and what am I missing when deploying the Nginx Ingress Controller on a bare-metal machine? The goal is to have an external IP that can be used to access the application pod from outside the cluster; how can I achieve that?</p>
<p><strong>===== Update =====</strong></p>
<p>Based on the answer below from @Dawid Kruk, I decided to use the k3s default Traefik Ingress Controller.</p>
<p>So, I deleted all the deployed Nginx Ingress Controller resources with <code>k delete all --all -n ingress-nginx</code>.</p>
<p>Then, I checked the Traefik Ingress related <code>LoadBalancer</code> type service:
<a href="https://i.stack.imgur.com/gjrbN.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/gjrbN.png" alt="enter image description here" /></a></p>
<p>The <code>external IP</code> of that Traefik service is exactly my Raspberry Pi's IP address!</p>
<p>So, I added this IP to <code>/etc/hosts</code> to map it to the hostname defined in my Ingress object:</p>
<pre><code>192.168.10.203 myapp.com
</code></pre>
<p>I opened a browser and used the address <a href="http://myapp.com" rel="nofollow noreferrer">http://myapp.com</a>; with the routing rules defined in my <code>Ingress</code> object (see the manifest for my ingress above), I hoped I would now see my deployed web application. But I get <code>404 Page Not Found</code>. What am I missing to access my deployed application?</p>
<p><strong>Another side question:</strong> I noticed that when I check the deployed <code>Ingress</code> object, its IP address is empty. Am I supposed to see an IP address for this object once the Traefik Ingress Controller takes effect?
<a href="https://i.stack.imgur.com/3GDNq.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3GDNq.png" alt="enter image description here" /></a></p>
<p><strong>Another issue:</strong> Now, when I re-deploy my ingress manifest with <code>k apply -f ingress.yaml</code>, I get this error:</p>
<pre><code>Resource: "networking.k8s.io/v1, Resource=ingresses", GroupVersionKind: "networking.k8s.io/v1, Kind=Ingress"
...
for: "ingress.yaml": error when patching "ingress.yaml": Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": failed to call webhook: Post "https://ingress-nginx-controller-admission.ingress-nginx.svc:443/networking/v1/ingresses?timeout=10s": service "ingress-nginx-controller-admission" not found
</code></pre>
<p>It looks like even though I decided to use the Traefik Ingress Controller, I still need to install the Nginx Ingress Controller. I'm confused now; can anyone explain this?</p>
<p>I'm not a K3s expert, but I think I found a piece of documentation that addresses your issue.</p>
<p>Take a look:</p>
<blockquote>
<h3>Service Load Balancer</h3>
<p>Any service load balancer (LB) can be used in your K3s cluster. By default, K3s provides a load balancer known as <a href="https://github.com/k3s-io/klipper-lb" rel="nofollow noreferrer">ServiceLB</a> (formerly Klipper Load Balancer) that uses available host ports.</p>
<p>Upstream Kubernetes allows Services of type LoadBalancer to be created, but doesn't include a default load balancer implementation, so these services will remain <code>pending</code> until one is installed. Many hosted services require a cloud provider such as Amazon EC2 or Microsoft Azure to offer an external load balancer implementation. By contrast, the K3s ServiceLB makes it possible to use LoadBalancer Services without a cloud provider or any additional configuration.</p>
<h3>How the Service LB Works</h3>
<p>The ServiceLB controller watches Kubernetes <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">Services</a> with the <code>spec.type</code> field set to <code>LoadBalancer</code>.</p>
<p>For each LoadBalancer Service, a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/" rel="nofollow noreferrer">DaemonSet</a> is created in the <code>kube-system</code> namespace. This DaemonSet in turn creates Pods with a <code>svc-</code> prefix, on each node. <strong>These Pods use iptables to forward traffic from the Pod's NodePort, to the Service's ClusterIP address and port.</strong></p>
<p>If the ServiceLB Pod runs on a node that has an external IP configured, the node's external IP is populated into the Service's <code>status.loadBalancer.ingress</code> address list. Otherwise, the node's internal IP is used.</p>
<p>If multiple LoadBalancer Services are created, a separate DaemonSet is created for each Service.</p>
<p>It is possible to expose multiple Services on the same node, as long as they use different ports.</p>
<p>If you try to create a LoadBalancer Service that listens on port 80, the ServiceLB will try to find a free host in the cluster for port 80. If no host with that port is available, the LB will remain Pending.</p>
<p>-- <em><a href="https://docs.k3s.io/networking#service-load-balancer" rel="nofollow noreferrer">Docs.k3s.io: Networking</a></em></p>
</blockquote>
<p>As a possible solution, I'd recommend to use <code>Traefik</code> as it's a default <code>Ingress</code> controller within <code>K3S</code>.</p>
<p>The <code>Pending</code> status on your <code>LoadBalancer</code> is most likely caused by another service used on that port (<code>Traefik</code>).</p>
<p>If you wish to still use <code>NGINX</code>, the same documentation page explains how you can disable <code>Traefik</code>.</p>
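<p>As a rough way to confirm that the port conflict is indeed the cause (the <code>svclb-</code> pod naming comes from the ServiceLB docs quoted above, so treat this as a sketch rather than exact output):</p>
<pre><code># List the ServiceLB pods and see which host ports they already claim
kubectl get pods -n kube-system -o wide | grep svclb

# List all LoadBalancer Services to see which one got the node IP
kubectl get svc -A | grep LoadBalancer
</code></pre>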
<hr />
<h3>UPDATE</h3>
<p>I'd be more careful about deleting resources the way you did. The following command:</p>
<ul>
<li><code>k delete all --all -n ingress-nginx</code></li>
</ul>
<p>Will not delete all of the resources that were created. A better way, in my opinion, is to reuse the manifest you used for creation; that is, instead of:</p>
<ul>
<li><code>kubectl create -f ...</code></li>
</ul>
<p>Use:</p>
<ul>
<li><code>kubectl delete -f ...</code></li>
</ul>
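<p>In your case, assuming the controller was installed from the manifest you mentioned, that would look like the sketch below. This also removes cluster-scoped objects such as the <code>ValidatingWebhookConfiguration</code>, which is what's causing the <code>ingress-nginx-controller-admission ... not found</code> webhook error:</p>
<pre><code># Delete everything the manifest created, including cluster-scoped resources
kubectl delete -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.1.2/deploy/static/provider/cloud/deploy.yaml

# If the webhook error persists, the leftover webhook can be removed directly
# (the name below is the default used by the ingress-nginx manifests)
kubectl delete validatingwebhookconfiguration ingress-nginx-admission
</code></pre>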
<p>I assume that you did not modify your <code>Ingress</code> definition, hence you receive the error and <code>kubectl get ingress</code> shows incorrect results.</p>
<p>What you will need to do:</p>
<pre class="lang-yaml prettyprint-override"><code>spec:
ingressClassName: nginx # <-- DELETE IT OR CHANGE TO "traefik"
</code></pre>
<p>Either delete or change should work as <code>traefik</code> is set to be a default <code>IngressClass</code> for this setup.</p>
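<p>A rough sketch of how you could verify this afterwards (the exact timing can vary, but the ADDRESS column should eventually be populated once Traefik picks up the Ingress):</p>
<pre><code># Re-apply the Ingress after removing/changing ingressClassName
kubectl apply -f ingress.yaml

# The ADDRESS column should now show the node IP (e.g. 192.168.10.203)
kubectl get ingress -n myapp
</code></pre>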
|