prompt | response
---|---|
<p>I am attempting to migrate my on premises cluster to GKE. In order to facilitate this transition I need to be able to resolve the names of legacy services.</p>
<p>Assume that the networking/VPN is a solved problem.</p>
<p>Is there a way to do this with GKE currently?</p>
<p>Effectively, I am attempting to add a nameserver entry to every pod's /etc/resolv.conf.</p>
| <p>I want to add to what Eric said, and mutate it a bit.</p>
<p>One of the realizations we had during the kubernetes 1.1 "settling period" is that there are not really specs for things like resolv.conf and resolver behavior. Different resolver libraries do different things, and this was causing pain for our users.</p>
<p>Specifically, some common resolvers assume that all <code>nameserver</code>s are fungible and would break if you had nameservers that handled different parts of the DNS namespace. We made a decision that for kube 1.2 we will NOT pass multiple <code>nameserver</code> lines into containers. Instead, we pass only the kube-dns server, which handles <code>cluster.local</code> queries and forwards any other queries to an "upstream" nameserver.</p>
<p>How do we know what "upstream" is? We use the <code>nameservers</code> of the node. There is a per-pod dnsPolicy field that governs this choice. The net result is that containers see a single <code>nameserver</code> in resolv.conf, which we own, and that nameserver handles the whole DNS namespace.</p>
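<p>For illustration (the values here are illustrative, not from the original answer), a pod using the default <code>ClusterFirst</code> DNS policy typically ends up with a resolv.conf along these lines, with the cluster IP of the kube-dns service as its only nameserver:</p>
<pre><code># /etc/resolv.conf inside a pod (illustrative values)
nameserver 10.3.0.10
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
</code></pre>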
<p>What this practically means is that there's not a great hook for you to interject your own nameserver. You could change the <code>--cluster-dns</code> flag to kubelets to point to your own DNS server, which would then forward to the kube-dns, which would then forward to "upstream". The problem is that GKE doesn't really support changing flags that way. If/when the node is updated, the flag will disappear in favor of the default.</p>
<p>Possible solutions:</p>
<ul>
<li><p>Have kubelets read their flags from an in-cluster config. This is already the plan of record, but it is not in v1.2.</p></li>
<li><p>Have kube-dns take a flag indicating what "upstream" is. Kube-dns is a "cluster addon" and as such isn't really mutable by end users (we will update it with your cluster and lose your changes).</p></li>
<li><p>Have kube-dns read its flags from an in-cluster config, and take a flag indicating what "upstream" is. This is a doable idea, but probably not for v1.2 (too late). It <em>might</em> be possible to patch this into a v1.2.x but it's not really a bugfix, it's a feature.</p></li>
<li><p>Get your own DNS server into the resolv.conf on each node so that kube-dns would use you as upstream. I don't think GKE has a way to configure this that won't also get lost on node upgrades. You could write a controller that periodically SSH'ed to VMs and wrote that out, and subsequently checked your kube-dns container for correctness. Blech.</p></li>
</ul>
<p>I think the right answer is to use in-cluster configmaps to inform either kubelets or DNS (or both). If you think these might be workable answers (despite the timeframe issues), it would be great if you opened a GitHub issue to discuss. It will get more visibility there.</p>
|
<p>I don't want to setup another etcd cluster.</p>
<p>How can I access the built-in etcd from kubernetes pod?</p>
<p>I suppose I first need to create a service account and use this account to launch the pod.</p>
<p>Then how can the container in this pod discover the URI of built-in etcd?</p>
<p>Thank you</p>
| <p>The etcd instance used by the Kubernetes apiserver is generally treated as an implementation detail of the apiserver and is not designed to be reused by user applications. By default it is installed to only listen for connections on localhost and run on a machine where no user applications are scheduled.</p>
<p>It isn't difficult to run a second etcd instance for your own use. For example, the DNS cluster add-on includes a <a href="https://github.com/kubernetes/kubernetes/blob/a3c00aadd5da91288cca856dabbefbc9f261be69/cluster/addons/dns/skydns-rc.yaml.in#L23-L48" rel="nofollow noreferrer">private instance of etcd</a> that is separate from the etcd used by the apiserver. </p>
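<p>As a rough sketch (not an official manifest; the name and advertise URL are placeholders), such a private etcd could be run as an ordinary pod, typically fronted by a Service so other pods can reach it by name:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: my-etcd                 # placeholder name
spec:
  containers:
  - name: etcd
    image: gcr.io/google_containers/etcd-amd64:2.2.1
    command:
    - /usr/local/bin/etcd
    - -data-dir=/var/etcd/data
    - -listen-client-urls=http://0.0.0.0:2379
    - -advertise-client-urls=http://my-etcd:2379   # would be the Service name
    ports:
    - containerPort: 2379
    volumeMounts:
    - name: etcd-storage
      mountPath: /var/etcd/data
  volumes:
  - name: etcd-storage
    emptyDir: {}
</code></pre>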
|
<p>I have tried all the basics of Kubernetes, and if you want to update your application, you can use <code>kubectl rolling-update</code> to update the pods one by one without downtime. Now I have read the Kubernetes documentation again and found a new feature called <code>Deployment</code> in version <code>v1beta1</code>. I am confused, since there is a line in the Deployment docs:</p>
<blockquote>
<p>Next time we want to update pods, we can just update the deployment again.</p>
</blockquote>
<p>Isn't this the role for <code>rolling-update</code>? Any inputs would be very useful.</p>
| <p>A Deployment is an object that lets you define a deployment declaratively.
It encapsulates:</p>
<ul>
<li><p>a DeploymentStatus object, which tracks the current number of replicas and their state.</p></li>
<li><p>a DeploymentSpec object, which holds the number of replicas, the pod template spec, selectors, and other data that control deployment behaviour.</p></li>
</ul>
<p>You can get a glimpse of actual code here:
<a href="https://github.com/kubernetes/kubernetes/blob/5516b8684f69bbe9f4688b892194864c6b6d7c08/pkg/apis/extensions/v1beta1/types.go#L223-L253" rel="nofollow">https://github.com/kubernetes/kubernetes/blob/5516b8684f69bbe9f4688b892194864c6b6d7c08/pkg/apis/extensions/v1beta1/types.go#L223-L253</a></p>
<p>You will mostly use Deployments to deploy services/applications, in a <strong>declarative</strong> manner.</p>
<p>If you want to modify your deployment, update the yaml/json you used (keeping the metadata, i.e. the name, the same) and re-apply it.</p>
<p>In contrast, <code>kubectl rolling-update</code> isn't declarative: no yaml/json is involved, and it requires an existing replication controller.</p>
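<p>For illustration (a minimal sketch, not part of the original answer), a Deployment using the <code>extensions/v1beta1</code> API discussed in the question could look like this; editing the image tag and re-running <code>kubectl apply -f deployment.yaml</code> is what "update the deployment again" amounts to:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: web              # placeholder name
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.9   # change this tag and re-apply to roll out an update
        ports:
        - containerPort: 80
</code></pre>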
|
<p>I have specified a certain <a href="http://kubernetes.io/v1.1/docs/user-guide/images.html#specifying-imagepullsecrets-on-a-pod" rel="nofollow">image pull secret</a> for my replication controller, but it doesn't appear to be applied when downloading the Docker image:</p>
<pre><code>$ ~/.local/bin/kubectl --kubeconfig=/etc/kubernetes/kube.conf get events
FIRSTSEEN LASTSEEN COUNT NAME KIND SUBOBJECT REASON SOURCE MESSAGE
8h 8s 3074 web-73na4 Pod spec.containers{web} Pulling {kubelet ip-172-31-29-110.eu-central-1.compute.internal} Pulling image "quay.io/aknuds1/realtime-music"
8h 5s 3074 web-73na4 Pod spec.containers{web} Failed {kubelet ip-172-31-29-110.eu-central-1.compute.internal} Failed to pull image "quay.io/aknuds1/realtime-music": image pull failed for quay.io/aknuds1/realtime-music, this may be because there are no credentials on this request. details: (Error: Status 403 trying to pull repository aknuds1/realtime-music: "{\"error\": \"Permission Denied\"}")
</code></pre>
<p>How do I make the replication controller use the image pull secret "quay.io" when downloading the image? The replication controller spec looks as follows:</p>
<pre><code>{
"kind": "ReplicationController",
"apiVersion": "v1",
"metadata": {
"name": "web",
"labels": {
"app": "web"
}
},
"spec": {
"replicas": 1,
"selector": {
"app": "web"
},
"template": {
"metadata": {
"labels": {
"app": "web"
}
},
"spec": {
"containers": [
{
"name": "web",
"image": "quay.io/aknuds1/realtime-music",
"ports": [
{
"name": "http-server",
"containerPort": 80
}
]
}
],
"imagePullSecrets": [
{
"name": "quay.io"
}
]
}
}
}
}
</code></pre>
<h1>Edit</h1>
<p>I created the quay.io secret like this: <code>~/.local/bin/kubectl --kubeconfig=/etc/kubernetes/kube.conf create -f /tmp/image-pull-secret.yaml</code>. The content of /tmp/image-pull-secret.yaml is basically like this:</p>
<pre><code>apiVersion: v1
kind: Secret
metadata:
name: quay.io
data:
.dockercfg: <base64 encoded dockercfg>
type: kubernetes.io/dockercfg
</code></pre>
<h1>Output from <code>kubectl get pods web-73na4 -o yaml</code>, in response to @PaulMorie</h1>
<pre><code>apiVersion: v1
kind: Pod
metadata:
annotations:
kubernetes.io/created-by: |
{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"default","name":"web","uid":"e1b7a3f0-e349-11e5-a136-02420a022e02","apiVersion":"v1","resourceVersion":"31503"}}
creationTimestamp: 2016-03-06T03:16:56Z
generateName: web-
labels:
app: web
name: web-73na4
namespace: default
resourceVersion: "31516"
selfLink: /api/v1/namespaces/default/pods/web-73na4
uid: e1b89066-e349-11e5-a136-02420a022e02
spec:
containers:
- image: quay.io/aknuds1/realtime-music
imagePullPolicy: IfNotPresent
name: web
ports:
- containerPort: 80
name: http-server
protocol: TCP
resources: {}
terminationMessagePath: /dev/termination-log
volumeMounts:
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: default-token-5s7kd
readOnly: true
dnsPolicy: ClusterFirst
nodeName: ip-172-31-29-110.eu-central-1.compute.internal
restartPolicy: Always
serviceAccount: default
serviceAccountName: default
terminationGracePeriodSeconds: 30
volumes:
- name: default-token-5s7kd
secret:
secretName: default-token-5s7kd
status:
conditions:
- lastProbeTime: null
lastTransitionTime: null
status: "False"
type: Ready
containerStatuses:
- image: quay.io/aknuds1/realtime-music
imageID: ""
lastState: {}
name: web
ready: false
restartCount: 0
state:
waiting:
message: 'image pull failed for quay.io/aknuds1/realtime-music, this may be
because there are no credentials on this request. details: (Error: Status
403 trying to pull repository aknuds1/realtime-music: "{\"error\": \"Permission
Denied\"}")'
reason: PullImageError
hostIP: 172.31.29.110
phase: Pending
podIP: 10.2.2.3
startTime: 2016-03-06T03:16:56Z
</code></pre>
| <p><strong>Update</strong> (this is what I did to use a private image):</p>
<p>First, log in to quay.io:</p>
<pre><code>$ docker login quay.io
Username (username):
Password:
WARNING: login credentials saved in /Users/user/.docker/config.json
Login Succeeded
</code></pre>
<p>Then I created a new file (my-credentials.json) containing only the quay.io credentials that docker had added to the config.json file (I realized that config.json had credentials for other registries apart from quay.io).</p>
<pre><code>$ cat config.json
{
"quay.io": {
"auth": "xxxxxxxxxxxxxxxxxxxxx",
"email": "[email protected]"
}
}
</code></pre>
<p>After that, I generated the base64:</p>
<pre><code> $ cat ./my-credentials.json | base64
<base64-value>
</code></pre>
<p>And I created the secret resource:</p>
<pre><code>apiVersion: v1
kind: Secret
metadata:
name: myregistrykey
data:
.dockercfg: <base64-value>
type: kubernetes.io/dockercfg
$ kubectl create -f image-pull-secret.yaml
</code></pre>
<p>Finally, I created the pod:</p>
<pre><code>{
"kind": "ReplicationController",
"apiVersion": "v1",
"metadata": {
"name": "web",
"labels": {
"app": "web"
}
},
"spec": {
"replicas": 1,
"selector": {
"app": "web"
},
"template": {
"metadata": {
"labels": {
"app": "web"
}
},
"spec": {
"containers": [
{
"name": "web",
"image": "quay.io/username/myimage",
"ports": [
{
"name": "http-server",
"containerPort": 80
}
]
}
],
"imagePullSecrets": [
{
"name": "myregistrykey"
}
]
}
}
}
}
$ kubectl create -f pod.yaml
</code></pre>
<p>I have used <code>myregistrykey</code> as the name of the <code>imagePullSecrets</code> instead of <code>quay.io</code>, but I don't think that is the cause of the issue.</p>
<hr>
<p>The issue seems to be that you didn't create a <code>secret</code> to store your credentials.</p>
<p>Note that the value of the <code>name</code> key in the <code>imagePullSecrets</code> section (in your case "quay.io") should be the same that you specified in your <code>secret</code> resource.</p>
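<p>As an aside (this assumes a newer kubectl than the one in the question), more recent versions can build the dockercfg secret for you instead of hand-encoding the JSON:</p>
<pre><code>kubectl create secret docker-registry myregistrykey \
  --docker-server=quay.io \
  --docker-username=<your-username> \
  --docker-password=<your-password> \
  --docker-email=<your-email>
</code></pre>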
|
<p>I have the following replication controller in Kubernetes on GKE:</p>
<pre><code>apiVersion: v1
kind: ReplicationController
metadata:
name: myapp
labels:
app: myapp
spec:
replicas: 2
selector:
app: myapp
deployment: initial
template:
metadata:
labels:
app: myapp
deployment: initial
spec:
containers:
- name: myapp
image: myregistry.com/myapp:5c3dda6b
ports:
- containerPort: 80
imagePullPolicy: Always
imagePullSecrets:
- name: myregistry.com-registry-key
</code></pre>
<p>Now, if I say</p>
<pre><code>kubectl rolling-update myapp --image=us.gcr.io/project-107012/myapp:5c3dda6b
</code></pre>
<p>the rolling update is performed, but no re-pull. Why?</p>
| <p>Kubernetes will pull upon Pod creation if either (see <a href="https://kubernetes.io/docs/concepts/containers/images/#updating-images" rel="noreferrer">updating-images doc</a>):</p>
<ul>
<li>Using images tagged <code>:latest</code></li>
<li><code>imagePullPolicy: Always</code> is specified</li>
</ul>
<p>This is great if you want to always pull. But what if you want to pull <strong>on demand</strong>? For example, you may want to use <code>some-public-image:latest</code> but only pull a newer version when you explicitly ask for it. You can currently:</p>
<ul>
<li>Set <code>imagePullPolicy</code> to <code>IfNotPresent</code> or <code>Never</code> and <a href="https://kubernetes.io/docs/concepts/containers/images/#pre-pulled-images" rel="noreferrer"><em>pre-pull</em></a>: manually pull the images on each cluster node so the latest is cached, then do a <code>kubectl rolling-update</code> or similar to restart the Pods (an ugly, easily broken hack!)</li>
<li><em>Temporarily</em> change <code>imagePullPolicy</code>, do a <code>kubectl apply</code>, restart the pod (e.g. <code>kubectl rolling-update</code>), revert <code>imagePullPolicy</code>, redo a <code>kubectl apply</code> (ugly!)</li>
<li><em>Pull and push</em> <code>some-public-image:latest</code> to your private repository and do a <code>kubectl rolling-update</code> (heavy!)</li>
</ul>
<p>There is no good solution for on-demand pull at the moment. If that changes, please comment and I'll update this answer.</p>
|
<p>I'm working through the sample tutorial on OpenShift 3. I created the example application nodejs-mongodb-example. But in the "edit parameters" step, I put the wrong URL to my forked repository, and I get a failed build.</p>
<p>I thought maybe I'd be able to extract the buildConfig file (is that a template?) on the command line, but I haven't found a way to do that.</p>
<p>Is there a way to edit and replace this bad buildConfig without deleting all of the application objects and starting over?</p>
| <p>You can use the <code>oc edit</code> command to edit an existing object. For example, <code>oc edit buildconfig/myapp</code> to edit the BuildConfig named myapp.</p>
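<p>If you prefer to edit the object offline (a sketch; substitute your BuildConfig's actual name), you can also dump it, fix the git URL, and replace it:</p>
<pre><code>oc get buildconfig myapp -o yaml > bc.yaml
# edit the repository URL (spec.source.git.uri) in bc.yaml, then:
oc replace -f bc.yaml
oc start-build myapp
</code></pre>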
|
<p>I have implemented basic authentication on my kubernetes api-server, now I am trying to configure my <code>./kube/config</code> file in a way I could simply run, <code>kubectl get pods</code></p>
<pre><code> kubectl config set-cluster digitalocean \
--server=https://SERVER:6443 \
--insecure-skip-tls-verify=true \
--api-version="v1"
kubectl config set-context digitalocean --cluster=digitalocean --user=admin
kubectl config set-credentials admin --password="PASSWORD"
kubectl config use-context digitalocean
</code></pre>
<p>But now it asks for credentials twice, like:</p>
<pre><code>dev@desktop: ~/code/go/src/bitbucket.org/cescoferraro
$ kubectl get pods
Please enter Username: admin
enter Password: PASSWORD
Please enter Username: admin
Please enter Password: PASSWORD
NAME READY STATUS RESTARTS AGE
</code></pre>
<p>or I need to pass the flags like</p>
<pre><code>kubectl get pods --username=admin --password=PASSWORD
</code></pre>
<p>is this the default behavior? I want my config to know me. What can I do?</p>
| <p>Can you provide the output of <code>kubectl config view</code>? I think the problem might be you need to do something like</p>
<p><code>kubectl config set-credentials cluster-admin --username=admin --password=PASSWORD</code></p>
<p>instead of </p>
<p><code>kubectl config set-credentials admin --password="PASSWORD"</code>. </p>
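<p>A sketch of the full corrected sequence (names and password are the ones from your question; the key point is that the name given to <code>set-credentials</code> must match the <code>--user</code> referenced by your context):</p>
<pre><code>kubectl config set-credentials admin --username=admin --password=PASSWORD
kubectl config set-context digitalocean --cluster=digitalocean --user=admin
kubectl config use-context digitalocean
kubectl get pods   # should no longer prompt for credentials
</code></pre>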
|
<p>I have set up DNS in my Kubernetes (v1.1.2+1abf20d) system, on CoreOS/AWS, but I cannot look up services via DNS. I have tried debugging, but cannot for the life of me find out why. This is what happens when I try to look up the kubernetes service, which should always be available:</p>
<pre><code>$ ~/.local/bin/kubectl --kubeconfig=/etc/kubernetes/kube.conf exec busybox-sleep -- nslookup kubernetes.default
Server: 10.3.0.10
Address 1: 10.3.0.10 ip-10-3-0-10.eu-central-1.compute.internal
nslookup: can't resolve 'kubernetes.default'
error: error executing remote command: Error executing command in container: Error executing in Docker Container: 1
</code></pre>
<p>I have installed the DNS addon according to this spec:</p>
<pre><code>apiVersion: v1
kind: ReplicationController
metadata:
name: kube-dns-v10
namespace: kube-system
labels:
k8s-app: kube-dns
version: v10
kubernetes.io/cluster-service: "true"
spec:
replicas: 1
selector:
k8s-app: kube-dns
version: v10
template:
metadata:
labels:
k8s-app: kube-dns
version: v10
kubernetes.io/cluster-service: "true"
spec:
containers:
- name: etcd
image: gcr.io/google_containers/etcd-amd64:2.2.1
resources:
# keep request = limit to keep this container in guaranteed class
limits:
cpu: 100m
memory: 50Mi
requests:
cpu: 100m
memory: 50Mi
command:
- /usr/local/bin/etcd
- -data-dir
- /var/etcd/data
- -listen-client-urls
- http://127.0.0.1:2379,http://127.0.0.1:4001
- -advertise-client-urls
- http://127.0.0.1:2379,http://127.0.0.1:4001
- -initial-cluster-token
- skydns-etcd
volumeMounts:
- name: etcd-storage
mountPath: /var/etcd/data
- name: kube2sky
image: gcr.io/google_containers/kube2sky:1.12
resources:
# keep request = limit to keep this container in guaranteed class
limits:
cpu: 100m
memory: 50Mi
requests:
cpu: 100m
memory: 50Mi
args:
# command = "/kube2sky"
- --domain=cluster.local
- name: skydns
image: gcr.io/google_containers/skydns:2015-10-13-8c72f8c
resources:
# keep request = limit to keep this container in guaranteed class
limits:
cpu: 100m
memory: 50Mi
requests:
cpu: 100m
memory: 50Mi
args:
# command = "/skydns"
- -machines=http://127.0.0.1:4001
- -addr=0.0.0.0:53
- -ns-rotate=false
- -domain=cluster.local.
ports:
- containerPort: 53
name: dns
protocol: UDP
- containerPort: 53
name: dns-tcp
protocol: TCP
livenessProbe:
httpGet:
path: /healthz
port: 8080
scheme: HTTP
initialDelaySeconds: 30
timeoutSeconds: 5
readinessProbe:
httpGet:
path: /healthz
port: 8080
scheme: HTTP
initialDelaySeconds: 1
timeoutSeconds: 5
- name: healthz
image: gcr.io/google_containers/exechealthz:1.0
resources:
# keep request = limit to keep this container in guaranteed class
limits:
cpu: 10m
memory: 20Mi
requests:
cpu: 10m
memory: 20Mi
args:
- -cmd=nslookup kubernetes.default.svc.cluster.local 127.0.0.1 >/dev/null
- -port=8080
ports:
- containerPort: 8080
protocol: TCP
volumes:
- name: etcd-storage
emptyDir: {}
dnsPolicy: Default # Don't use cluster DNS.
---
apiVersion: v1
kind: Service
metadata:
name: kube-dns
namespace: kube-system
labels:
k8s-app: kube-dns
kubernetes.io/cluster-service: "true"
kubernetes.io/name: "KubeDNS"
spec:
selector:
k8s-app: kube-dns
clusterIP: 10.3.0.10
ports:
- name: dns
port: 53
protocol: UDP
- name: dns-tcp
port: 53
protocol: TCP
</code></pre>
<p>Why isn't DNS lookup for services working in my Kubernetes setup? Please let me know what other info I need to provide.</p>
| <p>There were two things I needed to do:</p>
<ol>
<li>Configure kube2sky via kubeconfig, so that it's properly configured for TLS.</li>
<li>Configure kube-proxy via kubeconfig, so that it's properly configured for TLS and finds the master node.</li>
</ol>
<h2>/etc/kubernetes/kube.conf on master node</h2>
<pre><code>apiVersion: v1
kind: Config
clusters:
- name: kube
cluster:
server: https://127.0.0.1:443
certificate-authority: /etc/ssl/etcd/ca.pem
users:
- name: kubelet
user:
client-certificate: /etc/ssl/etcd/master-client.pem
client-key: /etc/ssl/etcd/master-client-key.pem
contexts:
- context:
cluster: kube
user: kubelet
</code></pre>
<h2>/etc/kubernetes/kube.conf on worker node</h2>
<pre><code>apiVersion: v1
kind: Config
clusters:
- name: local
cluster:
certificate-authority: /etc/ssl/etcd/ca.pem
server: https://<master IP>:443
users:
- name: kubelet
user:
client-certificate: /etc/ssl/etcd/worker.pem
client-key: /etc/ssl/etcd/worker-key.pem
contexts:
- context:
cluster: local
user: kubelet
name: kubelet-context
current-context: kubelet-context
</code></pre>
<h2>dns-addon.yaml (install this on master)</h2>
<pre><code>apiVersion: v1
kind: ReplicationController
metadata:
name: kube-dns-v11
namespace: kube-system
labels:
k8s-app: kube-dns
version: v11
kubernetes.io/cluster-service: "true"
spec:
replicas: 1
selector:
k8s-app: kube-dns
version: v11
template:
metadata:
labels:
k8s-app: kube-dns
version: v11
kubernetes.io/cluster-service: "true"
spec:
containers:
- name: etcd
image: gcr.io/google_containers/etcd-amd64:2.2.1
resources:
# TODO: Set memory limits when we've profiled the container for large
# clusters, then set request = limit to keep this container in
# guaranteed class. Currently, this container falls into the
# "burstable" category so the kubelet doesn't backoff from restarting
# it.
limits:
cpu: 100m
memory: 500Mi
requests:
cpu: 100m
memory: 50Mi
command:
- /usr/local/bin/etcd
- -data-dir
- /var/etcd/data
- -listen-client-urls
- http://127.0.0.1:2379,http://127.0.0.1:4001
- -advertise-client-urls
- http://127.0.0.1:2379,http://127.0.0.1:4001
- -initial-cluster-token
- skydns-etcd
volumeMounts:
- name: etcd-storage
mountPath: /var/etcd/data
- name: kube2sky
image: gcr.io/google_containers/kube2sky:1.14
resources:
# TODO: Set memory limits when we've profiled the container for large
# clusters, then set request = limit to keep this container in
# guaranteed class. Currently, this container falls into the
# "burstable" category so the kubelet doesn't backoff from restarting
# it.
limits:
cpu: 100m
# Kube2sky watches all pods.
memory: 200Mi
requests:
cpu: 100m
memory: 50Mi
livenessProbe:
httpGet:
path: /healthz
port: 8080
scheme: HTTP
initialDelaySeconds: 60
timeoutSeconds: 5
volumeMounts:
- name: kubernetes-etc
mountPath: /etc/kubernetes
readOnly: true
- name: etcd-ssl
mountPath: /etc/ssl/etcd
readOnly: true
readinessProbe:
httpGet:
path: /readiness
port: 8081
scheme: HTTP
# we poll on pod startup for the Kubernetes master service and
# only setup the /readiness HTTP server once that's available.
initialDelaySeconds: 30
timeoutSeconds: 5
args:
# command = "/kube2sky"
- --domain=cluster.local.
- --kubecfg-file=/etc/kubernetes/kube.conf
- name: skydns
image: gcr.io/google_containers/skydns:2015-10-13-8c72f8c
resources:
# TODO: Set memory limits when we've profiled the container for large
# clusters, then set request = limit to keep this container in
# guaranteed class. Currently, this container falls into the
# "burstable" category so the kubelet doesn't backoff from restarting
# it.
limits:
cpu: 100m
memory: 200Mi
requests:
cpu: 100m
memory: 50Mi
args:
# command = "/skydns"
- -machines=http://127.0.0.1:4001
- -addr=0.0.0.0:53
- -ns-rotate=false
- -domain=cluster.local
ports:
- containerPort: 53
name: dns
protocol: UDP
- containerPort: 53
name: dns-tcp
protocol: TCP
- name: healthz
image: gcr.io/google_containers/exechealthz:1.0
resources:
# keep request = limit to keep this container in guaranteed class
limits:
cpu: 10m
memory: 20Mi
requests:
cpu: 10m
memory: 20Mi
args:
        - -cmd=nslookup kubernetes.default.svc.cluster.local 127.0.0.1 >/dev/null
- -port=8080
ports:
- containerPort: 8080
protocol: TCP
volumes:
- name: etcd-storage
emptyDir: {}
- name: kubernetes-etc
hostPath:
path: /etc/kubernetes
- name: etcd-ssl
hostPath:
path: /etc/ssl/etcd
dnsPolicy: Default # Don't use cluster DNS.
</code></pre>
<h2>/etc/kubernetes/manifests/kube-proxy.yaml on master node</h2>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: kube-proxy
namespace: kube-system
spec:
hostNetwork: true
containers:
- name: kube-proxy
image: gcr.io/google_containers/hyperkube:v1.1.2
command:
- /hyperkube
- proxy
- --master=https://127.0.0.1:443
- --proxy-mode=iptables
- --kubeconfig=/etc/kubernetes/kube.conf
securityContext:
privileged: true
volumeMounts:
- mountPath: /etc/ssl/certs
name: ssl-certs-host
readOnly: true
- mountPath: /etc/kubernetes
name: kubernetes
readOnly: true
- mountPath: /etc/ssl/etcd
name: kubernetes-certs
readOnly: true
volumes:
- hostPath:
path: /usr/share/ca-certificates
name: ssl-certs-host
- hostPath:
path: /etc/kubernetes
name: kubernetes
- hostPath:
path: /etc/ssl/etcd
name: kubernetes-certs
</code></pre>
<h2>/etc/kubernetes/manifests/kube-proxy.yaml on worker node</h2>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: kube-proxy
namespace: kube-system
spec:
hostNetwork: true
containers:
- name: kube-proxy
image: gcr.io/google_containers/hyperkube:v1.1.2
command:
- /hyperkube
- proxy
- --kubeconfig=/etc/kubernetes/kube.conf
- --proxy-mode=iptables
- --v=2
securityContext:
privileged: true
volumeMounts:
- mountPath: /etc/ssl/certs
name: "ssl-certs"
- mountPath: /etc/kubernetes/kube.conf
name: "kubeconfig"
readOnly: true
- mountPath: /etc/ssl/etcd
name: "etc-kube-ssl"
readOnly: true
volumes:
- name: "ssl-certs"
hostPath:
path: "/usr/share/ca-certificates"
- name: "kubeconfig"
hostPath:
path: "/etc/kubernetes/kube.conf"
- name: "etc-kube-ssl"
hostPath:
path: "/etc/ssl/etcd"
</code></pre>
|
<p>Imagine the common scenario of a memcache (e.g. Redis) running with multiple pods across several nodes. Another service, such as a PHP application, uses Redis, and is configured to use the cluster IP of the Redis service. From my understanding this is routed through to kube-proxy (or in newer Kubernetes versions, handled in iptables) which then pushes the request to a running pod.</p>
<p>My question is, if a running pod is available on the local node, that should be preferred to one running on a remote node, as it reduces the network bandwidth usage. Does kube-proxy do this, or does it blindly RR loadbalance across all available pods?</p>
| <p>As you mentioned, as of Kubernetes 1.1 the load balancing algorithm is a plain Round Robin so the location of the pods is not taken into account.</p>
|
<p>Could someone explain how to set up the external IP on the 'frontend' service? I know that Vagrant doesn't support "type: LoadBalancer" and I don't know how to expose an IP to my host. Thanks</p>
| <p>First of all you should change your service type in the guestbook service definition:</p>
<pre><code>diff --git a/guestbook-service.json b/guestbook-service.json
index cc7640e..fadef78 100644
--- a/guestbook-service.json
+++ b/guestbook-service.json
@@ -17,6 +17,6 @@
"selector":{
"app":"guestbook"
},
- "type": "LoadBalancer"
+ "type": "NodePort"
}
}
</code></pre>
<p>Then stop and restart the service with:</p>
<pre><code>kubectl delete -f guestbook-service.json
kubectl create -f guestbook-service.json
</code></pre>
<p>Look at your node IP address with:</p>
<pre><code>kubectl get nodes
</code></pre>
<p>For example, for me this was the result:</p>
<pre><code>$ kubectl get nodes
NAME LABELS STATUS AGE
172.17.4.99 kubernetes.io/hostname=172.17.4.99 Ready 3h
</code></pre>
<p>Finally, you can find out your service nodeport with:</p>
<pre><code>kubectl describe services guestbook
</code></pre>
<p>For example, for me this was the result:</p>
<pre><code>$ kubectl describe services guestbook
Name: guestbook
Namespace: default
Labels: app=guestbook
Selector: app=guestbook
Type: NodePort
IP: 10.3.0.47
Port: <unnamed> 3000/TCP
NodePort: <unnamed> 32757/TCP
Endpoints: 10.2.76.12:3000,10.2.76.8:3000,10.2.76.9:3000
Session Affinity: None
No events.
</code></pre>
<p>At this point, using the node IP you got earlier, and the NodePort you just found, you should be able to connect:</p>
<pre><code>$ curl 172.17.4.99:32757
<!DOCTYPE html>
<html lang="en">
<head>
<meta content="text/html; charset=utf-8" http-equiv="Content-Type">
<meta charset="utf-8">
<meta content="width=device-width" name="viewport">
<link href="/style.css" rel="stylesheet">
<title>Guestbook</title>
[...]
</code></pre>
<p>Note: the NodePort is usually allocated from a flag-configured range, by default it is 30000-32767.</p>
|
<p>I am using VMware Workstation running a Linux VM, which runs the Vagrant and Kubernetes environment.</p>
<p>I have a problem using Kubernetes with Vagrant. Every time I shut down the Kubernetes cluster with the kube-down.sh tool and restart it with kube-up.sh, I cannot connect to the minions anymore! I think it has something to do with the IP binding. Does somebody know what to do?</p>
<p>The other problem is that if I try to install the guestbook example, I cannot download the redis image. The pods always stay in the PENDING state. Is there a way to download the image manually and add it as a file?</p>
<p>Thank you in advance.</p>
<p>Regards
:)</p>
| <p>With a slow network it will take a while to set up everything behind the scenes; for me it took about 1h before I saw every pod in a running state:</p>
<pre><code>$ kubectl get pods
NAME READY STATUS RESTARTS AGE
guestbook-7ihd2 1/1 Running 0 2h
guestbook-8fjs3 1/1 Running 0 2h
guestbook-fifmm 1/1 Running 0 2h
redis-master-ivebc 1/1 Running 0 3h
redis-slave-6qxga 1/1 Running 0 2h
redis-slave-r8bk4 1/1 Running 0 2h
</code></pre>
<p>But in the end it worked!</p>
|
<p>I hope everyone here is doing well. I am trying to find a way to add entries to the container's /etc/hosts file while spinning up a pod. I was wondering if there is any option/parameter that I could mention in my "pod1.json" which adds the entries to the container's /etc/hosts when it's being created. Something like "--add-host node1.example.com:${node1ip}" serves the same purpose for docker, as shown below.</p>
<pre><code>docker run \
--name mongo \
-v /home/core/mongo-files/data:/data/db \
-v /home/core/mongo-files:/opt/keyfile \
--hostname="node1.example.com" \
--add-host node1.example.com:${node1ip} \
--add-host node2.example.com:${node2ip} \
--add-host node3.example.com:${node3ip} \
-p 27017:27017 -d mongo:2.6.5 \
--smallfiles \
--keyFile /opt/keyfile/mongodb-keyfile \
--replSet "rs0"
</code></pre>
<p>Any pointers are highly appreciated. Thank you.</p>
<p>Regards,
Aj</p>
| <p>You can actually do this in the way that you initially were expecting to.</p>
<p>Thanks to this answer for helping me get there - <a href="https://stackoverflow.com/a/33888424/370364">https://stackoverflow.com/a/33888424/370364</a></p>
<p>You can use the following approach to shove hosts into your container's /etc/hosts file</p>
<pre><code>command: ["/bin/sh","-c"]
args: ["echo '192.168.200.200 node1.example.com' >> /etc/hosts && commandX"]
</code></pre>
<p>If you want to dynamically set the IP at pod start time, you can create the pod from stdin and pass the manifest through sed to perform the substitution before handing it to kubectl.</p>
<p>So the pod yaml would look like the following</p>
<pre><code>command: ["/bin/sh","-c"]
args: ["echo 'NODE_1_IP node1.example.com' >> /etc/hosts && commandX"]
</code></pre>
<p>Then execute it with</p>
<pre><code>cat pod1.yaml | sed -- "s|NODE_1_IP|${node1ip}|" | kubectl create -f -
</code></pre>
<p>I realise that this is not the way Kubernetes intended this kind of thing to be achieved, but we are using it for starting up a test pod locally, and we need to point it at the default network device on the local machine. Creating a service just to satisfy the test pod seems like overkill, so we do this instead.</p>
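<p>For context (a sketch with placeholder values; the flags and volumes from the docker example are omitted for brevity), the <code>command</code>/<code>args</code> snippets above sit under the container entry of the pod spec:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: mongo-node1            # placeholder name
spec:
  containers:
  - name: mongo
    image: mongo:2.6.5
    command: ["/bin/sh", "-c"]
    # append the hosts entry, then exec the real process
    args: ["echo 'NODE_1_IP node1.example.com' >> /etc/hosts && exec mongod --replSet rs0"]
    ports:
    - containerPort: 27017
</code></pre>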
|
| <p>I am using OpenShift 3, and have been trying to get Fabric8 set up.</p>
<p>Things haven't been going too well, so I decided to remove all services and pods.</p>
<p>When I run</p>
<pre><code>oc delete all -l provider=fabric8
</code></pre>
<p>The cli output claims to have deleted a lot of pods, however, they are still showing in the web console, and I can run the same command in the CLI again and get the exact same list of pods that OpenShift cli claims it deleted.</p>
<p>How do I actually delete these pods?
Why is this not working as designed?</p>
<p>Thanks</p>
| <p>Deletion is graceful by default, meaning the pods are given an opportunity to terminate themselves. You can force a graceless delete with <code>oc delete all --grace-period=0 ...</code></p>
|
| <p>After playing with the docker landscape for several months, I still find it really counter-intuitive to use a Kubernetes Pod. I have not encountered any use case where a pod is a more natural fit than a container. When I am asked to use a Pod, I usually just use a single-container Pod. I am trying to do a demo showcasing the strength of the pod concept, but I just couldn't figure out a non-trivial use case.</p>
<p>In my demo, I started a server pod with two service containers listening on different ports, one transcribing letters to upper case and one transcribing letters to lower case. Then I have a client pod with two client containers talking to each server container... This use case seems really forced; I don't see why I need to use the Pod concept.</p>
<p>I have read through lots of tutorials and docs, and they all just touch on WHAT is a pod, without a convincing use case of WHY we must use pod... Am I missing something? What is a solid use case for using a Pod concept? Thanks.</p>
<p>Edit:
To be specific, suppose there are two services A and B that requires co-location and shared network stack, and this is a natural fit for Pod concept. What is the advantage of using the Pod (with two collocated containers running service A and service B) compared to having service A and B running in the same container, which guarantees the collocation and shared network stack? Is there a rule of thumb for the granularity?</p>
<p>My original question is to find out such service A and service B that requires co-location and shared network stack. Thanks to Jared and Robert for the pointers, and I will dig through these use cases.</p>
| <p>Jared pointed to some good examples in his comments above. As Brian Grant mentioned in the linked github issue, pushing log data and loading data are the most common uses inside of Google.</p>
<p>For a concrete example in the Kubernetes repository, you can look at the definition for the <a href="https://github.com/kubernetes/kubernetes/blob/master/cluster/addons/dns/skydns-rc.yaml.in" rel="nofollow">DNS cluster add-on</a>. It uses a pod to co-locate a DNS server (<a href="https://github.com/skynetservices/skydns" rel="nofollow">skyDNS</a>), local storage using etcd, and a simple program to pull Kubernetes API objects down, convert them, and put them into local storage. Rather than building a new custom DNS server, this pod leverages an existing DNS server and adds some customization to it to make it aware of the cluster environment. Since all of the containers are in a pod, they can rely on localhost networking to communicate and don't need any form of sophisticated service discovery.</p>
|
<p>According to <a href="https://github.com/kubernetes/kubernetes/blob/master/docs/proposals/custom-metrics.md" rel="nofollow" title="Kubernetes Custom Metrics">Kubernetes Custom Metrics Proposal</a> containers can expose its app-level metrics in Prometheus format to be collected by Heapster. </p>
<p>Could anyone elaborate, if metrics are <em>pulled</em> by Heapster that means after the container terminates metrics for the last interval are lost? Can app <em>push</em> metrics to Heapster instead?</p>
<p>Or, is there a recommended approach to collect metrics from moderately short-lived containers running in Kubernetes?</p>
| <p>Not to speak for the original author's intent, but I believe that proposal is primarily focused on custom metrics that you want to use for things like scheduling and autoscaling within the cluster, not for general purpose monitoring (for which as you mention, pushing metrics is sometimes critical).</p>
<p>There isn't a single recommended pattern for what to do with custom metrics in general. If your environment has a preferred monitoring stack or vendor, a common approach is to run a second container in each pod (a "sidecar" container) to push relevant metrics about the main container to your monitoring backend.</p>
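<p>As an illustration of that sidecar pattern (the image names and push mechanism are assumptions, not any particular vendor's API):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: worker-with-metrics         # placeholder name
spec:
  containers:
  - name: app                       # the (possibly short-lived) main container
    image: myorg/worker:latest
  - name: metrics-pusher            # sidecar that pushes the app's metrics to your backend
    image: myorg/metrics-pusher:latest
    env:
    - name: METRICS_BACKEND_URL     # hypothetical configuration
      value: https://metrics.example.com
</code></pre>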
|
<p>After switching gcloud projects using ...</p>
<pre><code>gcloud init
</code></pre>
<p>... and then try to do some kubectl command, like this for instance:</p>
<pre><code>kubectl get rc
</code></pre>
<p>... I get this error:</p>
<pre><code>error: couldn't read version from server: Get
https://130.211.59.254/api: x509: certificate signed by unknown authority
</code></pre>
<p>Why is this and how can I solve it?</p>
| <p>This is because the keys to your old cluster are cached. I'm not sure why they are not updated by the gcloud init command (that's what one would intuitively expect; at the very least kubectl could give a kinder error message).</p>
<p>You solve it by simply getting the credentials of the cluster in the new configuration:</p>
<pre><code>gcloud container clusters get-credentials YOURCLUSTERHERE --zone YOURCLUSTERZONEHERE
</code></pre>
|
<p>What are the benefits of a Job with a single Pod over just a single Pod with restart policy OnFailure for reliably executing something once in Kubernetes?</p>
<p>As discussed in <a href="https://github.com/kubernetes/kubernetes/issues/20255" rel="noreferrer">Job being constantly recreated despite RestartPolicy: Never</a>, in the case of a Job a new Pod will be created endlessly if the container returns a non-zero status. The same applies to a single OnFailure Pod, only this time no new pods are created, which is even cleaner.</p>
<p>What are the pros and cons of either approach? Can Pod restart parameters, such as the restart delay or the number of retry attempts, be controlled in either case?</p>
| <p>The difference is that if a Job doesn't complete because the node that its pod was on went offline for some reason, then a new pod will be created to run on a different node. If a single pod doesn't complete because its node became unavailable, it won't be rescheduled onto a different node.</p>
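<p>For reference (a minimal sketch, assuming the batch/v1 Job API that shipped around 1.2), a single-completion Job looks like this; the pod-level <code>restartPolicy</code> still controls in-place container restarts, while the Job controller handles replacing the pod if its node goes away:</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
  name: one-shot                 # placeholder name
spec:
  template:
    spec:
      containers:
      - name: worker
        image: busybox
        command: ["sh", "-c", "echo doing the work && exit 0"]
      restartPolicy: OnFailure   # restart the container on the same node on failure
</code></pre>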
|
<p>I am running a Kubernetes cluster hosted on GKE and would like to write an application (written in Go) that speaks to the Kubernetes API. My understanding is that I can either provide a client certificate, bearer token, or HTTP Basic Authentication in order to authenticate with the apiserver. I have already found the right spot to inject any of these into the <a href="https://github.com/kubernetes/kubernetes/tree/release-1.1/pkg/client/" rel="nofollow">Golang client library</a>.</p>
<p>Unfortunately, the examples I ran across tend to reference to existing credentials stored in my personal kubeconfig file. This seems non-advisable from a security perspective and makes me believe that I should create a new client certificate / token / username-password pair in order to support easy revocation/removal of compromised accounts. However, I could not find a spot in the documentation actually describing how to go about this when running on managed Kubernetes in GKE. (There's <a href="http://kubernetes.io/docs/admin/authentication/" rel="nofollow">this guide on creating new certificates</a> explaining that the apiserver needs to get restarted with updated parameters eventually, something that to my understanding cannot be done in GKE.)</p>
<p>Are my security concerns for reusing my personal Kubernetes credentials in one (or potentially multiple) applications unjustified? If not, what's the right approach to generate a new set of credentials?</p>
<p>Thanks.</p>
| <p>If your application is running inside the cluster, you can use <a href="http://kubernetes.io/docs/user-guide/service-accounts/" rel="nofollow">Kubernetes Service Accounts</a> to authenticate to the API server.</p>
<p>If this is outside of the cluster, things aren't as easy, and I suppose your concerns are justified. Right now, GKE does not allow additional custom identities beyond the one generated for your personal kubeconfig file.</p>
<p>Rather than using your personal credentials, you could grab a service account's token (inside a pod, read it from <code>/var/run/secrets/kubernetes.io/serviceaccount/token</code>) and use that. It's a gross hack, and not a great general solution, but it might be slightly preferable to using your own personal credentials.</p>
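<p>For illustration (a sketch using the standard in-cluster paths and service DNS name), consuming that token from inside a pod looks like this:</p>
<pre><code>TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
     -H "Authorization: Bearer $TOKEN" \
     https://kubernetes.default.svc/api/v1/namespaces/default/pods
</code></pre>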
|
<p>I have a kubernetes single-node setup (see <a href="https://coreos.com/kubernetes/docs/latest/kubernetes-on-vagrant-single.html">https://coreos.com/kubernetes/docs/latest/kubernetes-on-vagrant-single.html</a> )</p>
<p>I have a service and an replication controller creating pods. Those pods need to connect to the other pods in the same service (Note: this is ultimately so that I can get mongo running w/replica sets (non localhost), but this simple example demonstrates the problem that mongo has).</p>
<p>When I connect from any node to the service, it will be distributed (as expected) to one of the pods. This will work until it load balances to itself (the container that I am on). Then it fails to connect. </p>
<p>Sorry to be verbose, but I am going to attach all my files so that you can see what I'm doing in this little example.</p>
<p>Dockerfile:</p>
<pre><code>FROM ubuntu
MAINTAINER Eric H
RUN apt-get update; apt-get install netcat
EXPOSE 8080
COPY ./entry.sh /
ENTRYPOINT ["/entry.sh"]
</code></pre>
<p>Here is the entry point</p>
<pre><code>#!/bin/bash
# wait for a connection, then tell them who we are
while : ; do
echo "hello, the date=`date`; my host=`hostname`" | nc -l 8080
sleep .5
done
</code></pre>
<p>build the dockerfile</p>
<p><code>docker build -t echoserver .</code></p>
<p>tag and upload to my k8s cluster's registry</p>
<pre><code>docker tag -f echoserver:latest 127.0.0.1:5000/echoserver:latest
docker push 127.0.0.1:5000/echoserver:latest
</code></pre>
<p>Here is my Replication Controller</p>
<pre><code>apiVersion: v1
kind: ReplicationController
metadata:
labels:
role: echo-server
app: echo
name: echo-server-1
spec:
replicas: 3
template:
metadata:
labels:
entity: echo-server-1
role: echo-server
app: echo
spec:
containers:
-
image: 127.0.0.1:5000/echoserver:latest
name: echo-server-1
ports:
- containerPort: 8080
</code></pre>
<p>And finally, here is my Service</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
labels:
app: echo
role: echo-server
name: echo-server-1
name: echo-server-1
spec:
selector:
entity: echo-server-1
role: echo-server
ports:
- port: 8080
targetPort: 8080
</code></pre>
<p>Create my service
<code>kubectl create -f echo.service.yaml</code></p>
<p>Create my rc
<code>kubectl create -f echo.controller.yaml</code></p>
<p>Get my PODs</p>
<pre><code>kubectl get po
NAME READY STATUS RESTARTS AGE
echo-server-1-jp0aj 1/1 Running 0 39m
echo-server-1-shoz0 1/1 Running 0 39m
echo-server-1-y9bv2 1/1 Running 0 39m
</code></pre>
<p>Get the service IP</p>
<pre><code>kubectl get svc
NAME CLUSTER_IP EXTERNAL_IP PORT(S) SELECTOR AGE
echo-server-1 10.3.0.246 <none> 8080/TCP entity=echo-server-1,role=echo-server 39m
</code></pre>
<p>Exec into one of the pods
<code>kubectl exec -t -i echo-server-1-jp0aj /bin/bash</code></p>
<p>Now connect to the service multiple times... It will give me the app-message for all pods except for when it gets to itself, whereupon it hangs.</p>
<pre><code>root@echo-server-1-jp0aj:/# nc 10.3.0.246 8080
hello, the date=Mon Jan 11 22:02:38 UTC 2016; my host=echo-server-1-y9bv2
root@echo-server-1-jp0aj:/# nc 10.3.0.246 8080
^C
root@echo-server-1-jp0aj:/# nc 10.3.0.246 8080
hello, the date=Mon Jan 11 22:02:43 UTC 2016; my host=echo-server-1-shoz0
root@echo-server-1-jp0aj:/# nc 10.3.0.246 8080
^C
root@echo-server-1-jp0aj:/# nc 10.3.0.246 8080
hello, the date=Mon Jan 11 22:31:19 UTC 2016; my host=echo-server-1-y9bv2
root@echo-server-1-jp0aj:/# nc 10.3.0.246 8080
hello, the date=Mon Jan 11 22:31:23 UTC 2016; my host=echo-server-1-shoz0
root@echo-server-1-jp0aj:/# nc 10.3.0.246 8080
hello, the date=Mon Jan 11 22:31:26 UTC 2016; my host=echo-server-1-y9bv2
root@echo-server-1-jp0aj:/# nc 10.3.0.246 8080
hello, the date=Mon Jan 11 22:31:27 UTC 2016; my host=echo-server-1-shoz0
root@echo-server-1-jp0aj:/# nc 10.3.0.246 8080
</code></pre>
<p>How can I configure things so that all members of a service can connect to all other members, including itself?</p>
| <p>Thanks to all those who helped on GitHub.<br>
The workaround turned out to be as follows: </p>
<blockquote>
<p>tanen01 commented on Feb 4 Seeing the same problem here on k8s v1.1.7
stable</p>
<p>Issue occurs with:</p>
</blockquote>
<pre><code>kube-proxy --proxy-mode=iptables
</code></pre>
<blockquote>
<p>Once I changed it to:</p>
</blockquote>
<pre><code> --proxy-mode=userspace
</code></pre>
<blockquote>
<p>(also the default), then it works again.</p>
</blockquote>
<p>So, if you are experiencing this, please try setting <code>--proxy-mode=userspace</code> (or leaving the flag unset) when you start <code>kube-proxy</code>.</p>
|
<p>I am using Kubernetes on a CoreOS cluster hosted on DigitalOcean,
using <a href="https://github.com/cescoferraro/kubernetes-do" rel="nofollow noreferrer">this</a> repo to set it up. I started the apiserver with the following line:</p>
<pre><code> /opt/bin/kube-apiserver --runtime-config=api/v1 --allow-privileged=true \
--insecure-bind-address=0.0.0.0 --insecure-port=8080 \
--secure-port=6443 --etcd-servers=http://127.0.0.1:2379 \
--logtostderr=true --advertise-address=${COREOS_PRIVATE_IPV4} \
--service-cluster-ip-range=10.100.0.0/16 --bind-address=0.0.0.0
</code></pre>
<p>The problem is that it accepts requests from anyone! I want to be able to provide simple user/password authentication. I have been reading <a href="https://github.com/kubernetes/kubernetes/blob/master/docs/admin/authentication.md" rel="nofollow noreferrer">this</a> and <a href="https://github.com/kubernetes/kubernetes/blob/master/docs/design/security.md" rel="nofollow noreferrer">this</a>, and it seems that I have to do something like the below, but I cannot afford to take the cluster down for a long period of time, so I need you guys to help with this one. By the way, my pods do not create other pods, so I only need a few users, like 1 or 2 for devs and 1 for CI.</p>
<p>I am thinking of doing something like including the authorization-mode and authorization-policy-file flags, as they seem required, and binding the insecure-bind-address to localhost to make it only available locally. Am I missing something?</p>
<pre><code> /opt/bin/kube-apiserver --runtime-config=api/v1 --allow-privileged=true \
--authorization-mode=ABAC --authorization-policy-file=/access.json \
--insecure-bind-address=127.0.0.1 --insecure-port=8080 \
--secure-port=6443 --etcd-servers=http://127.0.0.1:2379 \
--logtostderr=true --advertise-address=${COREOS_PRIVATE_IPV4} \
--service-cluster-ip-range=10.100.0.0/16 --bind-address=0.0.0.0
</code></pre>
<p>###/access.json</p>
<pre><code>{"user":"admin"}
{"user":"wercker"}
{"user":"dev1"}
{"user":"dev2"}
</code></pre>
<p>But where are the passwords? How do I actually make the request with kubectl and curl or httpie?</p>
| <p>If you want your users to authenticate using HTTP Basic Auth (user:password), you can add:</p>
<pre><code>--basic-auth-file=/basic_auth.csv
</code></pre>
<p>to your kube-apiserver command line, where each line of the file should be <code>password, user-name, user-id</code>. E.g.:</p>
<pre><code>@dm1nP@ss,admin,admin
w3rck3rP@ss,wercker,wercker
etc...
</code></pre>
<p>If you'd rather use access tokens (HTTP Authentication: Bearer), you can specify:</p>
<pre><code>--token-auth-file=/known-tokens.csv
</code></pre>
<p>where each line should be <code>token,user-name,user-id[,optional groups]</code>. E.g.:</p>
<pre><code>@dm1nT0k3n,admin,admin,adminGroup,devGroup
w3rck3rT0k3n,wercker,wercker,devGroup
etc...
</code></pre>
<p>For more info, checkout the <a href="http://kubernetes.io/docs/admin/authentication/" rel="noreferrer">Authentication docs</a>. Also checkout <a href="https://github.com/kubernetes/kubernetes/blob/master/pkg/auth/authorizer/abac/example_policy_file.jsonl" rel="noreferrer">example_policy_file.jsonl</a> for an example ABAC file.</p>
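<p>To address the "how do I actually make the request" part of the question (a sketch; the cluster/context names are placeholders, and the credentials are the sample values from the files above):</p>
<pre><code># kubectl with basic auth
kubectl config set-credentials admin --username=admin --password=@dm1nP@ss
kubectl config set-context my-context --cluster=my-cluster --user=admin
kubectl config use-context my-context
kubectl get pods

# curl with basic auth (httpie has an equivalent -a user:password flag)
curl -k -u 'admin:@dm1nP@ss' https://SERVER:6443/api/v1/pods

# curl with a bearer token
curl -k -H "Authorization: Bearer @dm1nT0k3n" https://SERVER:6443/api/v1/pods
</code></pre>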
|
<p>I've been able to successfully install Kubernetes in a CentOS 7 testing environment using the "virt7-testing" repo as described in the CentOS "<a href="https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/getting-started-guides/centos/centos_manual_config.md" rel="nofollow">Getting Started Guide</a>" in the Kubernetes github repo. My production environment will be running on Oracle Linux 7, and so far enabling "virt7-testing" on OL7 hasn't been working.</p>
<p>Are there any other yum repositories out there that are compatible with OL7 and include Kubernetes? </p>
| <p>It's not the best solution to pull outside of OEL, but I couldn't find an OEL repository with these packages, so I used this:</p>
<pre><code>[]# cat /etc/yum.repos.d/virt7-common.repo
[virt7-common]
name=Extra Packages for Enterprise Linux 7 - $basearch
baseurl=http://mirror.centos.org/centos/7/extras/$basearch/
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
</code></pre>
|
<p>kube-proxy has an option called <code>--proxy-mode</code>, and according to the help message, this option can be <strong>userspace</strong> or <strong>iptables</strong> (see below).</p>
<pre><code># kube-proxy -h
Usage of kube-proxy:
...
--proxy-mode="": Which proxy mode to use: 'userspace' (older, stable) or 'iptables' (experimental). If blank, look at the Node object on the Kubernetes API and respect the 'net.experimental.kubernetes.io/proxy-mode' annotation if provided. Otherwise use the best-available proxy (currently userspace, but may change in future versions). If the iptables proxy is selected, regardless of how, but the system's kernel or iptables versions are insufficient, this always falls back to the userspace proxy.
...
</code></pre>
<p>I can't figure out what <strong>userspace</strong> mode means here.</p>
<p>Can anyone tell me what the working principle is when kube-proxy runs in <strong>userspace</strong> mode?</p>
| <p>Userspace and iptables refer to what actually handles the connection forwarding. In both cases, local iptables rules are installed to intercept outbound TCP connections that have a destination IP address associated with a service. </p>
<p>In the userspace mode, the iptables rule forwards to a local port where a go binary (kube-proxy) is listening for connections. The binary (running in userspace) terminates the connection, establishes a new connection to a backend for the service, and then forwards requests to the backend and responses back to the local process. An advantage of the userspace mode is that because the connections are created from an application, if the connection is refused, the application can retry to a different backend. </p>
<p>In iptables mode, the iptables rules are installed to directly forward packets that are destined for a service to a backend for the service. This is more efficient than moving the packets from the kernel to kube-proxy and then back to the kernel so it results in higher throughput and better tail latency. The main downside is that it is more difficult to debug, because instead of a local binary that writes a log to <code>/var/log/kube-proxy</code> you have to inspect logs from the kernel processing iptables rules. </p>
<p>In both cases there will be a kube-proxy binary running on your machine. In userspace mode it inserts itself as the proxy; in iptables mode it will configure iptables rather than to proxy connections itself. The same binary works in both modes, and the behavior is switched via a flag or by setting an annotation in the apiserver for the node. </p>
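<p>For example (a sketch; chain names can vary between versions), in iptables mode you can inspect the rules kube-proxy installed directly on a node:</p>
<pre><code># dump the NAT rules kube-proxy generated for services
sudo iptables-save -t nat | grep KUBE
# or watch packet counters on the service chains
sudo iptables -t nat -L KUBE-SERVICES -n -v
</code></pre>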
|
<p>Is it possible to schedule upcoming Pods/Containers based on their priorities?
(If container1 is critical and needs resources, the Google orchestrator could kill other low-priority containers.)
If yes, are there specific priority tags (like: critical, monitoring, production...)?</p>
| <p>This use case is described in both the <a href="https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/43438.pdf" rel="noreferrer">Borg paper</a> and the <a href="https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/41684.pdf" rel="noreferrer">Omega paper</a>. However, it is not presently implemented within Kubernetes. Here are some related links to ongoing proposals: </p>
<ul>
<li><a href="https://github.com/kubernetes/kubernetes/issues/147" rel="noreferrer">QoS Tiers</a></li>
<li><a href="https://github.com/kubernetes/kubernetes/issues/22212" rel="noreferrer">Preemption Policy / Scheme</a></li>
<li><a href="https://github.com/kubernetes/kubernetes/blob/master/docs/proposals/resource-qos.md#under-development" rel="noreferrer">Resource Quality of Service</a></li>
</ul>
|
<pre><code>docker run \
--volume=/:/rootfs:ro \
--volume=/sys:/sys:ro \
--volume=/var/lib/docker/:/var/lib/docker:rw \
--volume=/var/lib/kubelet/:/var/lib/kubelet:rw \
--volume=/var/run:/var/run:rw \
--net=host \
--pid=host \
--privileged=true \
-d \
gcr.io/google_containers/hyperkube-amd64:v${K8S_VERSION} \
/hyperkube kubelet \
--containerized \
--hostname-override="127.0.0.1" \
--address="0.0.0.0" \
--api-servers=http://localhost:8080 \
--config=/etc/kubernetes/manifests \
--cluster-dns=10.0.0.10 \
--cluster-domain=cluster.local \
--allow-privileged=true --v=2
</code></pre>
<p>A <code>curl localhost:8080</code> confirms that the API is running.</p>
<p>But trying to access it with the host's IP, like <code>curl dockerHostIp:8080</code>, fails:</p>
<pre><code>Failed to connect to ipOfDockerHost port 8080: Connection refused
</code></pre>
<p>How can I expose k8s to the outside? (The docker host is an Ubuntu server.)
As far as I understand, using --net=host should solve this problem, but it does not work in this case.</p>
| <p>When you start kubernetes with docker, you choose between two models:</p>
<ul>
<li><a href="https://github.com/kubernetes/kubernetes/blob/master/cluster/images/hyperkube/master.json" rel="nofollow">--config=/etc/kubernetes/manifests</a> </li>
<li><a href="https://github.com/kubernetes/kubernetes/blob/master/cluster/images/hyperkube/master-multi.json" rel="nofollow">--config=/etc/kubernetes/manifests-multi</a>.</li>
</ul>
<p>If you look in these files, you will notice one difference: <code>--insecure-bind-address</code> is different.</p>
<p>When you use <code>--config=/etc/kubernetes/manifests</code>, you ask for local access only.</p>
<p>You should start with <code>--config=/etc/kubernetes/manifests-multi</code>.</p>
<p>Note that:</p>
<ul>
<li>you will need to start etcd manually when you use --config=/etc/kubernetes/manifests-multi</li>
<li>follow <a href="https://github.com/kubernetes/kubernetes/issues/4869" rel="nofollow">this post</a> as docker support is not working for now</li>
</ul>
|
<p>I have 3 kubernetes services which are:</p>
<pre><code>service 1:
name: abc
service 2:
name: def
service 3:
name: hgk
</code></pre>
<p>In the application running behind service 1, I successfully use environment variables to get the cluster IPs of the other services.</p>
<pre><code>System.getenv(DEF_SERVICE_HOST); --> success
System.getenv(HGK_SERVICE_HOST); --> success
</code></pre>
<p>However, when I read service 1's own environment variable, it returns null:</p>
<pre><code>System.getenv("ABC_SERVICE_HOST"); --> null
</code></pre>
<p>It looks like the pod cannot get its own service's cluster IP.</p>
<p>Do you guys have any ideas?
Thank you very much!</p>
| <p>The only service environment variables that are populated in a pod are those for services that existed before the pod was created. Environment variables are not injected into running pods once they've already been started.</p>
<p>I'm guessing that you created the <code>abc</code> replication controller / pods before you created the <code>abc</code> service. If you kill the existing pods and let them be recreated, they should have the ABC_SERVICE_HOST environment variable set.</p>
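<p>A sketch of forcing that recreation (the label selector is a placeholder; use whatever labels your <code>abc</code> replication controller sets on its pods):</p>
<pre><code>kubectl delete pods -l app=abc
# the replication controller recreates the pods, and the new ones
# will have ABC_SERVICE_HOST / ABC_SERVICE_PORT populated
</code></pre>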
|
<p>It seems that the <a href="https://cloud.google.com/monitoring/agent/install-agent"><strong>Google Monitoring Agent</strong> (powered by <em>Stackdriver</em>)</a> should be installed on each <em>Node</em> (i.e. each compute instance, i.e. each machine) of a <em>Kubernetes</em> cluster.</p>
<p>However the new <em>plugins</em>, like <a href="https://cloud.google.com/monitoring/agent/plugins/nginx">Nginx</a>, <a href="https://cloud.google.com/monitoring/agent/plugins/redis">Redis</a>, <a href="https://cloud.google.com/monitoring/agent/plugins/elasticsearch">ElasticSearch</a>..., need those agents to know the IPs of these services. This means having <code>kube-proxy</code> running and set up, which should mean running the <em>Google Monitoring Agent</em> in a Pod.</p>
<p>These two conflict: on one side the agent monitors the entire machine, on the other it monitors services running on one or more machines.</p>
<p>Can these Stackdriver plugins work on a <strong>Google Container Engine</strong> (GKE) / Kubernetes cluster?</p>
| <p>To monitor each machine (memory, CPU, disk...) it's possible to install the agent on each node (i.e. on each Compute Instance of your GKE cluster). Note that this won't work with auto-scaling, in the sense that newly created nodes won't have the agent installed.</p>
<p>To monitor services (number of requests/s, client connections...) it's possible to install the agent plugin in another container, so that for example the Nginx Pod runs two containers:</p>
<ul>
<li>Nginx</li>
<li>Google Monitoring Agent together with the Nginx plugin</li>
</ul>
<p>Note: Not fully tested yet.</p>
|
<p>I'm running Kubernetes via <a href="http://kubernetes.io/docs/getting-started-guides/docker/" rel="noreferrer">Docker</a>. Following the tutorial I launched an Nginx POD using <code>kubectl run nginx --image=nginx --port=80</code>. However this seems to create orphaned PODs (without a replication controller). <code>kubectl get rc</code> doesn't return anything and <code>kubectl describe pod nginx-198147104-kqudh</code> shows <strong>Replication Controllers: none</strong> (kubectl version "v1.2.0+5cb86ee" shows Controllers: ReplicaSet/nginx-198147104 but scaling it to 0 just causes a new Nginx pod to be created, and it can't be deleted).</p>
<p>I would like to be able to delete the Kubernetes managed Nginx container from Docker. I haven't had much luck find out how to delete an orphan pod (without it being recreated...).</p>
<p><strong>Client Version</strong>: version.Info{Major:"1", Minor:"0", GitVersion:"v1.0.4", GitCommit:"65d28d5fd12345592405714c81cd03b9c41d41d9", GitTreeState:"clean"}<BR>
<strong>Server Version</strong>: version.Info{Major:"1", Minor:"2", GitVersion:"v1.2.0", GitCommit:"5cb86ee022267586db386f62781338b0483733b3", GitTreeState:"clean"}</p>
| <p>With v1.2 Kubernetes we use <code>ReplicaSet</code> (a newer form of <code>ReplicationController</code>). Given that you have a <code>ReplicaSet</code>, you must have used a v1.2 client to create it. But it doesn't stop there. What 1.2 actually creates for you is a <code>Deployment</code> which itself manages <code>ReplicaSets</code>.</p>
<p>So what you need to know is <code>kubectl scale deployment</code> or <code>kubectl delete deployment</code>.</p>
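<p>For example, with the deployment named <code>nginx</code> that <code>kubectl run nginx ...</code> created:</p>
<pre><code># Scale the deployment down to zero pods
kubectl scale deployment nginx --replicas=0

# Or remove the deployment (and its ReplicaSet and pods) entirely
kubectl delete deployment nginx
</code></pre>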
<p>Which tutorial are you following?</p>
|
<p>We are using kubernetes 1.1.8 (with flannel) but with 1.2 about to drop any input on this topic that is specific to 1.2 is fine.</p>
<p>We run kubernetes in our own datacenters on bare metal which means that we need to do maintenance on worker nodes which take them in and out of production. </p>
<p>We have a process for taking a node out of the cluster to do maintenance on it and I'm wondering if our process can be improved to minimize the potential for user facing downtime.</p>
<p>We are using f5 load balancers. Each service that we deploy is given a static nodePort. For example appXYZ has nodePort 30173. In the F5 pool for service appXYZ all minions in the cluster are added as pool members with a tcp port open check on port 30173. </p>
<p>During maintenance on a node we take the following steps:</p>
<ol>
<li>Set the node to unschedulable = true.</li>
<li>Get the list of pods running on the node and delete each pod. Sometimes this will be 40 pods per node.</li>
<li>Wait for up to two minutes for the pods in step #2 to shutdown.</li>
<li>Reboot the physical node.</li>
</ol>
<p>I'm wondering if this is what other people are doing, or if we are missing one or more steps that would further minimize the amount of traffic that could potentially get sent to a dead or dying pod on the node undergoing maintenance.</p>
<p>When I read through <a href="http://kubernetes.io/docs/user-guide/pods/#termination-of-pods" rel="nofollow">http://kubernetes.io/docs/user-guide/pods/#termination-of-pods</a> it makes me wonder if adding a longer (over 30 seconds) --grace-period= to our delete command and pausing for our reboot for a longer amount of time would ensure all of the kube-proxy's have been updated to remove the node from the list of endpoints.</p>
<p>So, can anyone confirm that what we are doing is a decent practice, or suggest how to improve it? Any tips specific to Kubernetes 1.2 are especially welcome.</p>
<p>TIA!</p>
| <p>Checkout the '<a href="http://kubernetes.io/docs/user-guide/kubectl/kubectl_drain/" rel="nofollow">kubectl drain</a>' command:</p>
<pre><code># Drain node "foo", even if there are pods not managed by a ReplicationController, Job, or DaemonSet on it.
$ kubectl drain foo --force
# As above, but abort if there are pods not managed by a ReplicationController, Job, or DaemonSet, and use a grace period of 15 minutes.
$ kubectl drain foo --grace-period=900
</code></pre>
<p>See also <a href="https://github.com/kubernetes/kubernetes/issues/3885" rel="nofollow">Issue 3885</a> and related linked issues</p>
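<p>Once maintenance is done and the node is back, you can mark it schedulable again (a small sketch, assuming the node is still called <code>foo</code>):</p>
<pre><code>$ kubectl uncordon foo
</code></pre>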
|
<p>I tried creating a new kube cluster via googleapis with oAuth authentication. But I am getting an error that
<strong>"HTTP Load Balancing requires the '<a href="https://www.googleapis.com/auth/compute" rel="noreferrer">https://www.googleapis.com/auth/compute</a>' scope."</strong>.
I found out that Google updated the Kubernetes version to <strong>1.2</strong> in their console the previous night (until then I was able to create clusters using the same method on <strong>v1.0</strong>).
I tried creating one via the API explorer using Google's OAuth, but it failed with the same error.
I think the authscope has been updated, but I could not find the new authscope in any of '<strong>google cloud platform container engine doc</strong>' or '<strong>kubernetes latest release doc</strong>'. Can someone please help me in identifying the new authscope?<a href="https://i.stack.imgur.com/PBzK2.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/PBzK2.jpg" alt="Accessing via google cloud platform - screen shot"></a>
<a href="https://i.stack.imgur.com/iUllT.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/iUllT.jpg" alt="Request and response - screenshot"></a></p>
| <p>That error message was due to an error on our part while rolling out support for Kubernetes 1.2 in Google Container Engine. We've fixed the issues, and you can now create a container cluster using the api explorer. Sorry for the trouble. </p>
|
<p>I am creating an app in Origin 3.1 using my Docker image. </p>
<p>Whenever I create the image, a new pod gets created, but it restarts again and again and finally ends up with the status "CrashLoopBackOff". </p>
<p>I analysed the pod's logs but they show no error; all log data is as expected for a successfully running app. Hence, I am not able to determine the cause.</p>
<p>I came across below link today, which says "running an application inside of a container as root still has risks, OpenShift doesn't allow you to do that by default and will instead run as an arbitrary assigned user ID." </p>
<p><a href="https://stackoverflow.com/questions/35710965/what-is-crashloopbackoff-status-for-openshift-pods">What is CrashLoopBackOff status for openshift pods?</a> </p>
<p>Here my image is running as the root user only; what should I do to make this work? The logs show no error, but the pod keeps restarting.</p>
<p>Could anyone please help me with this.</p>
| <p>You are seeing this because whatever process your image starts isn't a long-running process and finds no TTY, so the container just exits and gets restarted repeatedly, which is a "crash loop" as far as OpenShift is concerned.</p>
<p>Your Dockerfile contains the following:</p>
<p>ENTRYPOINT ["container-entrypoint"]</p>
<p>What is this "container-entrypoint" actually doing? You need to check.</p>
<p>Did you use the <code>-p</code> or <code>--previous</code> flag to <code>oc logs</code> to see if the logs from the previous attempt to start the pod show anything?</p>
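<p>A minimal sketch (the pod name is illustrative):</p>
<pre><code># Logs from the previous, crashed attempt of the container
oc logs -p mypod-1-xyz12

# The pod's events often show why the container could not start
oc describe pod mypod-1-xyz12
</code></pre>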
|
<p>What is the correct way to install <a href="https://github.com/kubernetes/kubernetes/tree/release-1.1/cluster/addons" rel="noreferrer">addons</a> with Kubernetes 1.1? The <a href="https://github.com/kubernetes/kubernetes/tree/release-1.1/cluster/addons" rel="noreferrer">docs</a> aren't as clear as I'd like on this subject; they seem to imply that one should copy addons' yaml files to /etc/kubernetes/addons on master nodes, but I have tried this and nothing happens.</p>
<p>Additionally, for added confusion, the docs imply that addons are bundled with Kubernetes:</p>
<blockquote>
<p>So the only persistent way to make changes in add-ons is to update the manifests on the master server. But still, users are discouraged to do it on their own - they should rather wait for a new release of Kubernetes that will also contain new versions of add-ons.</p>
</blockquote>
<p>So, how should I really install addons, f.ex. <a href="https://github.com/kubernetes/kubernetes/tree/release-1.1/cluster/addons/cluster-loadbalancing" rel="noreferrer">cluster-loadbalancing</a>, with Kubernetes 1.1?</p>
| <blockquote>
<p>... they seem to imply that one should copy addons' yaml files to /etc/kubernetes/addons on master nodes, but I have tried this and nothing happens.</p>
</blockquote>
<p>This is only true if you are using one of the salt-based installation mechanisms. </p>
<blockquote>
<p>So, how should I really install addons, f.ex. cluster-loadbalancing, with Kubernetes 1.1?</p>
</blockquote>
<p>Most of the add-ons can be installed by just running <code>kubectl create -f</code> against the replication controller and service files for the add-on. You need to create the <code>kube-system</code> namespace first if you haven't already, and some of the add-ons (like <a href="https://github.com/kubernetes/kubernetes/blob/master/cluster/addons/dns/skydns-rc.yaml.in" rel="nofollow">dns</a>) require you to fill in a few values in a jinja template that would otherwise be handled by salt. </p>
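<p>A minimal sketch of what that looks like (the manifest file names are illustrative; use the files for the add-on you want after filling in any template values):</p>
<pre><code># Create the kube-system namespace if it does not exist yet
echo '{"kind":"Namespace","apiVersion":"v1","metadata":{"name":"kube-system"}}' | kubectl create -f -

# Then create the add-on's replication controller and service
kubectl create -f skydns-rc.yaml
kubectl create -f skydns-svc.yaml
</code></pre>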
|
<p>We would like to know if there is a way to get service-level monitoring metrics (requests/sec, latency per request, etc.) from a Kubernetes Service.
I understand that if a Kubernetes Service is created with type LoadBalancer, we can leverage the cloud provider interfaces for those metrics; however, I would like to know if there is any provision to get the above metrics at the service or container level without adding latency.</p>
| <p>Not presently. This is being tracked in <a href="https://github.com/kubernetes/kubernetes/issues/9125" rel="nofollow">issue 9215</a>. As is pointed out in the issue, use of iptables makes this non-trivial.</p>
|
<p>With the command line, I can add a label as below:</p>
<pre><code>kubectl label pod POD_NAME KEY1=VALUE1
</code></pre>
<p>How could I do that from kubernetes API?</p>
<p>I guess it can be done by <code>PATCH /api/v1/namespaces/{namespace}/pods/{name}</code></p>
<p>Here is pod.json</p>
<pre><code>{
"apiVersion": "v1",
"kind": "Pod",
"metadata": {
"labels": {
"key1": "value1"
}
}
}
</code></pre>
<p>I tried with following command</p>
<pre><code>KUBE_TOKEN=$(</var/run/secrets/kubernetes.io/serviceaccount/token)
curl --request PATCH --insecure \
--header "Authorization: Bearer $KUBE_TOKEN" \
--data "$(cat pod.json)" \
https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_PORT_443_TCP_PORT/api/v1/namespaces/$POD_NAMESPACE/pods/$POD_NAME
</code></pre>
<p>And it returns </p>
<pre><code>{
"kind": "Status",
"apiVersion": "v1",
"metadata": {},
"status": "Failure",
"message": "the server responded with the status code 415 but did not return more information",
"details": {},
"code": 415
}
</code></pre>
| <p>Set content-type to <code>application/json-patch+json</code> and specify the patch in <a href="http://jsonpatch.org" rel="noreferrer">http://jsonpatch.org</a> format. </p>
<pre><code>$ cat > patch.json <<EOF
[
{
"op": "add", "path": "/metadata/labels/hello", "value": "world"
}
]
EOF
$ curl --request PATCH --data "$(cat patch.json)" -H "Content-Type:application/json-patch+json" https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_PORT_443_TCP_PORT/api/v1/namespaces/$POD_NAMESPACE/pods/$POD_NAME
</code></pre>
|
<p>I am trialling Kubernetes on AWS, and I have a cluster set up, but I'm having trouble creating an application by pulling a docker image from an insecure repo.</p>
<p>When I created the cluster, I ensured that the environment variable <code>KUBE_ENABLE_INSECURE_REGISTRY=true</code> was set to true. But I still don't seem to be able to pull from this repo.</p>
<p>The logs show (edited application name and registry URL):</p>
<blockquote>
<p>Error syncing pod, skipping: failed to "StartContainer" for "<em><em><strong>"
with ErrImagePull: "API error (500): unable to ping registry endpoint
https://docker-registry.</strong></em>.com:5000/v0/\nv2 ping attempt failed with
error: Get https://docker-registry.</em><strong>.com:5000/v2/: EOF\n v1 ping
attempt failed with error: Get
https://docker-registry.</strong>*.com:5000/v1/_ping: EOF\n"</p>
</blockquote>
<p>Can anyone please advise on this?</p>
<p>Thanks</p>
| <p>According to this <a href="https://github.com/kubernetes/kubernetes/blob/release-1.2/cluster/gce/config-default.sh#L85" rel="nofollow">code</a>, it seems that only registries on the 10.0.0.0/8 network can be insecure by default. Is your registry in this range? What about setting <code>EXTRA_DOCKER_OPTS="--insecure-registry YOUR_REGISTRY_IP"</code> manually in the docker environment file? Is that possible for you?</p>
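<p>A minimal sketch of that workaround (the registry host/port and the file path are illustrative; the docker environment file location differs per distro/AMI):</p>
<pre><code># e.g. in /etc/default/docker on each node
DOCKER_OPTS="$DOCKER_OPTS --insecure-registry docker-registry.example.com:5000"
</code></pre>
<p>Then restart the docker daemon on each node so the flag takes effect.</p>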
|
<p>I am attempting to setup Kubernetes locally using a docker instance. I am following <a href="http://kubernetes.io/docs/getting-started-guides/docker/" rel="noreferrer">this documentation</a> but get stuck at the point of creating a new service and exposing the ports.</p>
<p>I have the docker container pulled and I have a <code>kubectl</code> available.</p>
<p>When I run the command <code>kubectl get nodes --show-labels</code> I get the following</p>
<pre>
|NAME | STATUS | AGE | LABELS |
|-----------|---------|--------|--------------------------------------|
|127.0.0.1 | Ready | 1h | kubernetes.io/hostname=127.0.0.1 |
</pre>
<p>I now create a new service with <code>kubectl run nginx --image=nginx --port=80</code> as per the docs. When I run <code>docker ps</code> I see a container that's been created using my local nginx:latest image.</p>
<pre>
CONTAINER_ID: 4192d1b423ec
IMAGE: nginx
COMMAND: "nginx -g 'daemon off'"
CREATED: 37 minutes ago
STATUS: Up 37 minutes
NAMES: k8s_nginx.aab52821_nginx-198147104-qop91_default_f1cf5d8a-ef2d-11e5-b527-f0def1de109f_940ee216
</pre>
<p>The next step is where I'm having problems: <code>kubectl expose rc nginx --port=80</code> is supposed to expose the nginx image as a kubernetes service on port 80.</p>
<p>I get this in the terminal.</p>
<blockquote>
<p><strong>Error</strong> from server: replicationcontrollers "nginx" not found</p>
</blockquote>
<p>So I started reading about replication controllers; I understand the concepts, but I do not know how they are configured or set up. This got me thinking, since I'm following what should be an idiot-proof setup guide. Can anyone help me with this? I have added my docker and kubernetes versions below.</p>
<h1>Version info</h1>
<h3>Docker version (Local Ubuntu 15.10)</h3>
<pre>
Client:
Version: 1.10.3
API version: 1.22
Go version: go1.5.3
Git commit: 20f81dd
Built: Thu Mar 10 15:59:07 2016
OS/Arch: linux/amd64
Server:
Version: 1.10.3
API version: 1.22
Go version: go1.5.3
Git commit: 20f81dd
Built: Thu Mar 10 15:59:07 2016
OS/Arch: linux/amd64
</pre>
<h3>Kubernetes version</h3>
<pre>
Client Version: version.Info{
Major:"1",
Minor:"2",
GitVersion:"v1.2.0",
GitCommit:"5cb86ee022267586db386f62781338b0483733b3",
GitTreeState:"clean"
}
Server Version: version.Info{
Major:"1",
Minor:"2",
GitVersion:"v1.2.0",
GitCommit:"5cb86ee022267586db386f62781338b0483733b3",
GitTreeState:"clean"
}
</pre>
| <p>Kubernetes documentation for docker is outdated for now. Some elements need to be added.</p>
<p>This <a href="https://github.com/kubernetes/kubernetes/issues/4869" rel="nofollow">kubernetes issue</a> is the one to follow.</p>
<p>You can find the setup instructions in <a href="https://github.com/kubernetes/kubernetes/issues/4869#issuecomment-194149162" rel="nofollow">this comment</a>. It's working for me with the DNS/Dashboard addons. I have not done a complete test of all the features, but complex applications (not just hello-world) are running on it.</p>
<p>If you are interested and don't have the time, there are some projects that try to solve this setup, which is not so trivial:</p>
<ul>
<li><a href="https://github.com/tdeheurles/hive" rel="nofollow">hive</a></li>
<li><a href="https://github.com/skippbox/kmachine" rel="nofollow">kmachine</a></li>
</ul>
<p>Note that I am not putting any setup steps here, as they would certainly be outdated soon; the Kubernetes documentation is the right place to look (and, for now, the issue I pointed you to).</p>
|
<p>We use Kubernetes <code>Job</code>s for a lot of batch computing here and I'd like to instrument each Job with a monitoring sidecar to update a centralized tracking system with the progress of a job.</p>
<p>The only problem is, I can't figure out what the semantics are (or are supposed to be) of multiple containers in a job.</p>
<p>I gave it a shot anyways (with an <code>alpine</code> sidecar that printed "hello" every 1 sec) and after my main task completed, the <code>Job</code>s are considered <code>Successful</code> and the <code>kubectl get pods</code> in Kubernetes 1.2.0 shows:</p>
<pre><code>NAME READY STATUS RESTARTS AGE
job-69541b2b2c0189ba82529830fe6064bd-ddt2b 1/2 Completed 0 4m
job-c53e78aee371403fe5d479ef69485a3d-4qtli 1/2 Completed 0 4m
job-df9a48b2fc89c75d50b298a43ca2c8d3-9r0te 1/2 Completed 0 4m
job-e98fb7df5e78fc3ccd5add85f8825471-eghtw 1/2 Completed 0 4m
</code></pre>
<p>And if I describe one of those pods</p>
<pre><code>State: Terminated
Reason: Completed
Exit Code: 0
Started: Thu, 24 Mar 2016 11:59:19 -0700
Finished: Thu, 24 Mar 2016 11:59:21 -0700
</code></pre>
<p>Then <code>GET</code>ing the yaml of the job shows information per container:</p>
<pre><code> status:
conditions:
- lastProbeTime: null
lastTransitionTime: 2016-03-24T18:59:29Z
message: 'containers with unready status: [pod-template]'
reason: ContainersNotReady
status: "False"
type: Ready
containerStatuses:
- containerID: docker://333709ca66462b0e41f42f297fa36261aa81fc099741e425b7192fa7ef733937
image: luigi-reduce:0.2
imageID: docker://sha256:5a5e15390ef8e89a450dac7f85a9821fb86a33b1b7daeab9f116be252424db70
lastState: {}
name: pod-template
ready: false
restartCount: 0
state:
terminated:
containerID: docker://333709ca66462b0e41f42f297fa36261aa81fc099741e425b7192fa7ef733937
exitCode: 0
finishedAt: 2016-03-24T18:59:30Z
reason: Completed
startedAt: 2016-03-24T18:59:29Z
- containerID: docker://3d2b51436e435e0b887af92c420d175fafbeb8441753e378eb77d009a38b7e1e
image: alpine
imageID: docker://sha256:70c557e50ed630deed07cbb0dc4d28aa0f2a485cf7af124cc48f06bce83f784b
lastState: {}
name: sidecar
ready: true
restartCount: 0
state:
running:
startedAt: 2016-03-24T18:59:31Z
hostIP: 10.2.113.74
phase: Running
</code></pre>
<p>So it looks like my sidecar would need to watch the main process (how?) and exit gracefully once it detects it is alone in the pod? If this is correct, then are there best practices/patterns for this (should the sidecar exit with the return code of the main container? but how does it get that?)?</p>
<p><strong>Update</strong><br>
After further experimentation, I've also discovered the following:
if there are two containers in a pod, then it is not considered successful until all containers in the pod return with exit code 0. </p>
<p>Additionally, if <code>restartPolicy: OnFailure</code> is set on the pod spec, then any container in the pod that terminates with non-zero exit code will be restarted in the same pod (this could be useful for a monitoring sidecar to count the number of retries and delete the job after a certain number (to workaround no max-retries currently available in Kubernetes jobs)).</p>
| <p>You can use the <a href="https://kubernetes.io/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/" rel="noreferrer">downward api</a> to figure out your own pod name from within the sidecar, and then retrieve your own pod from the apiserver to look up the main container's exit status. Let me know how this goes.</p>
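<p>A minimal sketch of the sidecar entry in the pod template, using the downward API to expose the pod's own name and namespace as environment variables (container names here are illustrative):</p>
<pre><code>- name: sidecar
  image: alpine
  env:
  - name: POD_NAME
    valueFrom:
      fieldRef:
        fieldPath: metadata.name
  - name: POD_NAMESPACE
    valueFrom:
      fieldRef:
        fieldPath: metadata.namespace
</code></pre>
<p>The sidecar can then poll <code>/api/v1/namespaces/$POD_NAMESPACE/pods/$POD_NAME</code> on the apiserver, watch <code>status.containerStatuses</code> for the main container's <code>state.terminated.exitCode</code>, and exit with that code once it appears.</p>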
|
<p>I've started running Kubernetes on GCE.
I set up a 3x f1-micro cluster and I'm running:</p>
<ul>
<li>influxdb x1</li>
<li>Grafana x1</li>
<li>nginx x1</li>
<li>phpfpm7 x1</li>
<li>golang x2</li>
<li>redis x1</li>
</ul>
<p>I keep having all my containers restart regularly: within 1 hour, grafana restarted 4x, redis 3x, my golang apps 2x, and nginx 4x.
On my local setup they never restart and work perfectly fine...
The logs don't tell me anything about why they restarted.</p>
<ol>
<li>For people using kubernetes, how often do your containers restart?</li>
<li>Could it be a perf issue?</li>
<li>I have also mounted persistentdisk for grafana and influxdb, but it seems after each restart, the data is wiped. Any idea?</li>
</ol>
<p>thanks for your help!</p>
| <p>I actually played around with the resources and limits of each container, deactivated heapster in the kube-system namespace, and everything now runs fine!</p>
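<p>For reference, this is the kind of block I mean in each container spec; the numbers are just an example and need tuning for f1-micro machines:</p>
<pre><code>resources:
  requests:
    cpu: 50m
    memory: 64Mi
  limits:
    cpu: 100m
    memory: 128Mi
</code></pre>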
|
<p>I'm a newbie to Kubernetes, using Google Cloud Container Engine. I just followed the tutorials below:</p>
<p><a href="https://cloud.google.com/container-engine/docs/tutorials/http-balancer" rel="nofollow">https://cloud.google.com/container-engine/docs/tutorials/http-balancer</a>
<a href="http://kubernetes.io/docs/hellonode/#create-your-pod" rel="nofollow">http://kubernetes.io/docs/hellonode/#create-your-pod</a></p>
<p>In these tutorials, I should get a replication controller after I run "kubectl run", but there are no replication controllers, so I cannot run "kubectl expose rc" to open a port.</p>
<p>Here is my result of the commands:</p>
<pre><code>ChangMatthews-MacBook-Pro:frontend changmatthew$ kubectl run nginx --image=nginx --port=80
deployment "nginx" created
ChangMatthews-MacBook-Pro:frontend changmatthew$ kubectl expose rc nginx --target-port=80 --type=NodePort
Error from server: replicationcontrollers "nginx" not found
</code></pre>
<p>Here is my result when I run "kubectl get rc,svc,ingress,deployments,pods":</p>
<pre><code>ChangMatthews-MacBook-Pro:frontend changmatthew$ kubectl get rc,svc,ingress,deployments,pods
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes 10.3.240.1 <none> 443/TCP 12m
NAME RULE BACKEND ADDRESS AGE
basic-ingress - nginx:80 107.178.247.247 12m
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
nginx 1 1 1 1 11m
NAME READY STATUS RESTARTS AGE
nginx-198147104-zgo7m 1/1 Running 0 11m
</code></pre>
<p>One solution would be to create a yaml file that defines the replication controller. But is there any way to create a replication controller via the kubectl run command, as in the tutorials above?</p>
<p>Thanks,</p>
| <p>Now that kubectl run creates a deployment, you specify that the type being exposed in a deployment rather than a replication controller:</p>
<pre><code>kubectl expose deployment nginx --target-port=80 --type=NodePort
</code></pre>
|
<p>I would like to mount a Google storage bucket in Google Container Engine using gcsfuse or any other tool/provision. The container runs under Google Container Engine, so we need to use a yaml file to define a few parameters in it.</p>
<p>Is there anything that can be used in the .yaml file to build a new replication controller/service using privileged and sys_admin (or any other required) parameters?</p>
| <p>We can use gcsfuse or s3fs-fuse to mount a Google Storage bucket in a Kubernetes pod/container. Before starting the installation of fuse in the container, run the container with SYS_ADMIN privileges, like below.</p>
<p><code>$ docker run -it --cap-add SYS_ADMIN --name dev --device /dev/fuse ContainerID/Name /bin/bash</code></p>
<ol>
<li>Install <a href="https://github.com/GoogleCloudPlatform/gcsfuse" rel="noreferrer">gcsfuse</a> or <a href="https://github.com/s3fs-fuse/s3fs-fuse/wiki/Google-Cloud-Storage" rel="noreferrer">s3fuse</a> in pod/Container image.</li>
<li>Create a shell script and add the mount command to it (a minimal sketch of such a script follows this list).</li>
<li><p>Add the privileged parameter to the YAML file to grant admin capabilities to the pod/container,
for example:</p>
<pre><code> securityContext:
capabilities: {}
privileged: true
</code></pre></li>
<li><p>Add a postStart lifecycle hook to the YAML file to mount the bucket after the pod/container starts,
for example:</p>
<pre><code> lifecycle:
postStart:
exec:
command:
- "sh"
- "/usr/local/gcsfusemount.sh"
</code></pre></li>
</ol>
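<p>For step 2, the script can be as small as this (the bucket name and mount point are illustrative):</p>
<pre><code>#!/bin/sh
# gcsfusemount.sh - mount the bucket once the container has started
mkdir -p /mnt/gcs-bucket
gcsfuse my-gcs-bucket /mnt/gcs-bucket
</code></pre>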
|
<p>We have a product which is described in some docker files, which can create the necessary docker containers. Some docker containers will just run some basic apps, while other containers will run clusters (hadoop).</p>
<p>Now the question is which cluster manager I should use:
Kubernetes, Apache Mesos, or both?</p>
<p>I read Kubernetes is good for 100% containerized environments, while Apache Mesos is better for environments which are a bit containerized and a bit not-containerized. But Apache Mesos is better for running hadoop in docker (?).</p>
<p>Our environment is composed only of docker containers, some running a Hadoop cluster and some running apps.</p>
<p>What will be the best?</p>
| <p>Both functionally do the same thing, orchestrate Docker containers, but obviously they do it in different ways, and what you can easily achieve with one might prove difficult in the other and vice versa.</p>
<p>Mesos has a higher complexity and learning curve in my opinion. Kubernetes is relatively simpler and easier to grasp. You can literally spawn your own Kube master and minions by running one command and specifying the provider: Vagrant, AWS, etc. Kubernetes can also be integrated into Mesos, so there is also the possibility of trying both.</p>
<p>For the Hadoop-specific use case you mention, Mesos might have an edge: it may integrate better with the Apache ecosystem, and Mesos and Spark were created by the same minds.</p>
<p>Final thoughts: start with Kube, progressively exploring how to make it work for your use case. Then, after you have a good grasp on it, do the same with Mesos. You might end up liking pieces of each and have them coexist, or find that Kube is enough for what you need.</p>
|
<p>I'm trying to set up a highly available Kubernetes cluster with Packer and Terraform instead of the kube-up.sh scripts. Reason: I want bigger machines, a different setup, etc. Most of my configuration comes from the CoreOS Kubernetes deployment tutorial.</p>
<p>Something about my setup:</p>
<p>CoreOS</p>
<p>Everything runs on GCE.
I've got 3 etcd instances and one SkyDNS instance. They are working and able to reach each other.</p>
<p>I have one instance as kubernetes master instance that is running the kubelet with manifests. </p>
<p>My actual problem right now is that the kube-apiserver is not able to connect to itself. I can run a curl command from my host system and get a valid response from /version and other endpoints.</p>
<p>It is also a little bit strange that 443 and 8080 are not forwarded from docker. Or is this normal behavior?</p>
<p>I thought I had misconfigured some master endpoints, so I tried localhost and the external IP for all manifests. => Not working.</p>
<p><strong>Errors in the kube-api container:</strong></p>
<pre><code>I0925 14:51:47.505859 1 plugins.go:69] No cloud provider specified.
I0925 14:51:47.973450 1 master.go:273] Node port range unspecified. Defaulting to 30000-32767.
E0925 14:51:48.009367 1 reflector.go:136] Failed to list *api.ResourceQuota: Get http://127.0.0.1:8080/api/v1/resourcequotas: dial tcp 127.0.0.1:8080: connection refused
E0925 14:51:48.010730 1 reflector.go:136] Failed to list *api.Secret: Get http://127.0.0.1:8080/api/v1/secrets?fieldSelector=type%3Dkubernetes.io%2Fservice-account-token: dial tcp 127.0.0.1:8080: connection refused
E0925 14:51:48.010996 1 reflector.go:136] Failed to list *api.ServiceAccount: Get http://127.0.0.1:8080/api/v1/serviceaccounts: dial tcp 127.0.0.1:8080: connection refused
E0925 14:51:48.011083 1 reflector.go:136] Failed to list *api.LimitRange: Get http://127.0.0.1:8080/api/v1/limitranges: dial tcp 127.0.0.1:8080: connection refused
E0925 14:51:48.012697 1 reflector.go:136] Failed to list *api.Namespace: Get http://127.0.0.1:8080/api/v1/namespaces: dial tcp 127.0.0.1:8080: connection refused
E0925 14:51:48.012753 1 reflector.go:136] Failed to list *api.Namespace: Get http://127.0.0.1:8080/api/v1/namespaces: dial tcp 127.0.0.1:8080: connection refused
[restful] 2015/09/25 14:51:48 log.go:30: [restful/swagger] listing is available at https://104.155.60.74:443/swaggerapi/
[restful] 2015/09/25 14:51:48 log.go:30: [restful/swagger] https://104.155.60.74:443/swaggerui/ is mapped to folder /swagger-ui/
I0925 14:51:48.136166 1 server.go:441] Serving securely on 0.0.0.0:443
I0925 14:51:48.136248 1 server.go:483] Serving insecurely on 127.0.0.1:8080
</code></pre>
<p>The controller container has nearly the same erros. Every other container is fine.</p>
<p><strong>My config:</strong></p>
<p><code>/etc/kubelet.env</code></p>
<pre><code>KUBE_KUBELET_OPTS="\
--api_servers=http://127.0.0.1:8080 \
--register-node=false \
--allow-privileged=true \
--config=/etc/kubernetes/manifests \
--tls_cert_file=/etc/kubernetes/ssl/apiserver.pem \
--tls_private_key_file=/etc/kubernetes/ssl/apiserver-key.pem \
--cloud-provider=gce \
--cluster_dns=10.10.38.10 \
--cluster_domain=cluster.local \
--cadvisor-port=0"
</code></pre>
<p>/etc/kubernetes/manifests/</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: kube-apiserver
namespace: kube-system
spec:
hostNetwork: true
containers:
- name: kube-apiserver
image: gcr.io/google_containers/hyperkube:v1.0.6
command:
- /hyperkube
- apiserver
- --bind-address=0.0.0.0
- --etcd_servers=http://10.10.125.10:2379,http://10.10.82.201:2379,http://10.10.63.185:2379
- --allow-privileged=true
- --service-cluster-ip-range=10.40.0.0/16
- --secure_port=443
- --advertise-address=104.155.60.74
- --admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota
- --tls-cert-file=/etc/kubernetes/ssl/apiserver.pem
- --tls-private-key-file=/etc/kubernetes/ssl/apiserver-key.pem
- --client-ca-file=/etc/kubernetes/ssl/ca.pem
- --service-account-key-file=/etc/kubernetes/ssl/apiserver-key.pem
ports:
- containerPort: 443
hostPort: 443
name: https
- containerPort: 8080
hostPort: 8080
name: local
volumeMounts:
- mountPath: /etc/kubernetes/ssl
name: ssl-certs-kubernetes
readOnly: true
- mountPath: /etc/ssl/certs
name: ssl-certs-host
readOnly: true
volumes:
- hostPath:
path: /etc/kubernetes/ssl
name: ssl-certs-kubernetes
- hostPath:
path: /usr/share/ca-certificates
name: ssl-certs-host
</code></pre>
<p>/etc/kubernetes/manifests/kube-controller-manager.yml</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: kube-controller-manager
namespace: kube-system
spec:
containers:
- name: kube-controller-manager
image: gcr.io/google_containers/hyperkube:v1.0.6
command:
- /hyperkube
- controller-manager
- --master=https://104.155.60.74:443
- --service-account-private-key-file=/etc/kubernetes/ssl/apiserver-key.pem
- --root-ca-file=/etc/kubernetes/ssl/ca.pem
- --cloud_provider=gce
livenessProbe:
httpGet:
host: 127.0.0.1
path: /healthz
port: 10252
initialDelaySeconds: 15
timeoutSeconds: 1
volumeMounts:
- mountPath: /etc/kubernetes/ssl
name: ssl-certs-kubernetes
readOnly: true
- mountPath: /etc/ssl/certs
name: ssl-certs-host
readOnly: true
hostNetwork: true
volumes:
- hostPath:
path: /etc/kubernetes/ssl
name: ssl-certs-kubernetes
- hostPath:
path: /usr/share/ca-certificates
name: ssl-certs-host
</code></pre>
<p>docker ps</p>
<pre><code>CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
3e37b2ea2277 gcr.io/google_containers/hyperkube:v1.0.6 "/hyperkube controll 31 minutes ago Up 31 minutes k8s_kube-controller-manager.afecd3c9_kube-controller-manager-kubernetes-km0.c.stylelounge-1042.inte
rnal_kube-system_621db46bf7b0764eaa46d17dfba8e90f_519cd0da
43917185d91b gcr.io/google_containers/hyperkube:v1.0.6 "/hyperkube proxy -- 31 minutes ago Up 31 minutes k8s_kube-proxy.a2db3197_kube-proxy-kubernetes-km0.c.stylelounge-1042.internal_kube-system_67c22e99a
eb1ef9c2997c942cfbe48b9_c82a8a60
f548279e90f9 gcr.io/google_containers/hyperkube:v1.0.6 "/hyperkube apiserve 31 minutes ago Up 31 minutes k8s_kube-apiserver.2bcb2c35_kube-apiserver-kubernetes-km0.c.stylelounge-1042.internal_kube-system_8
67c500deb54965609810fd0771fa92d_a306feae
94b1942a09f0 gcr.io/google_containers/hyperkube:v1.0.6 "/hyperkube schedule 31 minutes ago Up 31 minutes k8s_kube-scheduler.603b59f4_kube-scheduler-kubernetes-km0.c.stylelounge-1042.internal_kube-system_3
9e2c582fd067b44ebe8cefaee036c0e_e0ddf6a2
9de4a4264ef6 gcr.io/google_containers/podmaster:1.1 "/podmaster --etcd-s 31 minutes ago Up 31 minutes k8s_controller-manager-elector.89f472b4_kube-podmaster-kubernetes-km0.c.stylelounge-1042.internal_k
ube-system_e23fc0902c7e6da7b315ad34130b9807_7c8d2901
af2df45f4081 gcr.io/google_containers/podmaster:1.1 "/podmaster --etcd-s 31 minutes ago Up 31 minutes k8s_scheduler-elector.608b6780_kube-podmaster-kubernetes-km0.c.stylelounge-1042.internal_kube-syste
m_e23fc0902c7e6da7b315ad34130b9807_b11e601d
ac0e068456c7 gcr.io/google_containers/pause:0.8.0 "/pause" 31 minutes ago Up 31 minutes k8s_POD.e4cc795_kube-controller-manager-kubernetes-km0.c.stylelounge-1042.internal_kube-system_621d
b46bf7b0764eaa46d17dfba8e90f_e9760e28
2773ba48d011 gcr.io/google_containers/pause:0.8.0 "/pause" 31 minutes ago Up 31 minutes k8s_POD.e4cc795_kube-podmaster-kubernetes-km0.c.stylelounge-1042.internal_kube-system_e23fc0902c7e6
da7b315ad34130b9807_4fba9edb
987531f1951d gcr.io/google_containers/pause:0.8.0 "/pause" 31 minutes ago Up 31 minutes k8s_POD.e4cc795_kube-apiserver-kubernetes-km0.c.stylelounge-1042.internal_kube-system_867c500deb549
65609810fd0771fa92d_d15d2d66
f4453b948186 gcr.io/google_containers/pause:0.8.0 "/pause" 31 minutes ago Up 31 minutes k8s_POD.e4cc795_kube-proxy-kubernetes-km0.c.stylelounge-1042.internal_kube-system_67c22e99aeb1ef9c2
997c942cfbe48b9_07e540c8
ce01cfda007e gcr.io/google_containers/pause:0.8.0 "/pause" 31 minutes ago Up 31 minutes k8s_POD.e4cc795_kube-scheduler-kubernetes-km0.c.stylelounge-1042.internal_kube-system_39e2c582fd067
b44ebe8cefaee036c0e_e6cb6500
</code></pre>
<p>Here the curl command:</p>
<pre><code>kubernetes-km0 ~ # docker logs a404a310b55e
I0928 09:14:05.019135 1 plugins.go:69] No cloud provider specified.
I0928 09:14:05.192451 1 master.go:273] Node port range unspecified. Defaulting to 30000-32767.
I0928 09:14:05.192900 1 master.go:295] Will report 10.10.247.127 as public IP address.
E0928 09:14:05.226222 1 reflector.go:136] Failed to list *api.LimitRange: Get http://127.0.0.1:8080/api/v1/limitranges: dial tcp 127.0.0.1:8080: connection refused
E0928 09:14:05.226428 1 reflector.go:136] Failed to list *api.Namespace: Get http://127.0.0.1:8080/api/v1/namespaces: dial tcp 127.0.0.1:8080: connection refused
E0928 09:14:05.226479 1 reflector.go:136] Failed to list *api.Namespace: Get http://127.0.0.1:8080/api/v1/namespaces: dial tcp 127.0.0.1:8080: connection refused
E0928 09:14:05.226593 1 reflector.go:136] Failed to list *api.Secret: Get http://127.0.0.1:8080/api/v1/secrets?fieldSelector=type%3Dkubernetes.io%2Fservice-account-token: dial tcp 127.0.0.1:8080: connection refused
E0928 09:14:05.226908 1 reflector.go:136] Failed to list *api.ServiceAccount: Get http://127.0.0.1:8080/api/v1/serviceaccounts: dial tcp 127.0.0.1:8080: connection refused
[restful] 2015/09/28 09:14:05 log.go:30: [restful/swagger] listing is available at https://10.10.247.127:443/swaggerapi/
[restful] 2015/09/28 09:14:05 log.go:30: [restful/swagger] https://10.10.247.127:443/swaggerui/ is mapped to folder /swagger-ui/
E0928 09:14:05.232632 1 reflector.go:136] Failed to list *api.ResourceQuota: Get http://127.0.0.1:8080/api/v1/resourcequotas: dial tcp 127.0.0.1:8080: connection refused
I0928 09:14:05.368697 1 server.go:441] Serving securely on 0.0.0.0:443
I0928 09:14:05.368788 1 server.go:483] Serving insecurely on 127.0.0.1:8080
kubernetes-km0 ~ # curl http://127.0.0.1:8080/api/v1/limitranges
{
"kind": "LimitRangeList",
"apiVersion": "v1",
"metadata": {
"selfLink": "/api/v1/limitranges",
"resourceVersion": "100"
},
"items": []
}
</code></pre>
<hr>
| <p>You need to register the master as a node if you want the master to actually host any pods, by passing the <code>--register-node=true</code> flag to the kubelet running on the master. The CoreOS tutorial does not register the master as a node because that is the ideal scenario.</p>
|
<p>In the kubectl tool there is the option:</p>
<pre><code>--cascade[=true]: If true, cascade the deletion of the resources managed by this resource (e.g. Pods created by a ReplicationController). Default true.
</code></pre>
<p>But when I do the 'delete' using the API (1.2), the deletion does not cascade by default. </p>
<p>Is there a simple way to do it with the api?</p>
| <p>No, cascading delete is client-side only. There are plans to move that logic server-side in future versions.</p>
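<p>Until that happens, the client has to emulate the cascade itself. A rough sketch against the API for a replication controller named <code>my-rc</code> (the name, namespace and apiserver address are illustrative):</p>
<pre><code># 1. Scale the controller down to zero replicas
curl -X PATCH -H "Content-Type: application/strategic-merge-patch+json" \
  -d '{"spec":{"replicas":0}}' \
  http://localhost:8080/api/v1/namespaces/default/replicationcontrollers/my-rc

# 2. Wait until status.replicas reports 0, then delete the controller itself
curl -X DELETE http://localhost:8080/api/v1/namespaces/default/replicationcontrollers/my-rc
</code></pre>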
|
<p>I have mounted a emptyDir volume with memory medium to a pod. Then I logged into the pod and tried to create a file in that volume path but I got a permission denied error. </p>
<pre><code>touch a.txt
touch: cannot touch `a.txt': Permission denied
</code></pre>
<p>User I used is root. What could be the reason for that?</p>
| <p>Add <code>:z</code> or <code>:Z</code> to your mount path as a workaround so the volume works properly with SELinux:</p>
<pre><code>volumeMounts:
- name: etcd-storage
mountPath: /var/etcd/data:z
</code></pre>
|
<p>I have my Kubernetes cluster set up and I want to check on the nodes from a worker/minion node. Can we run kubectl from a worker/minion node? </p>
| <p>Yes, you just need to have the proper client credentials and you can run kubectl from anywhere that has network access to the apiserver. See <a href="http://kubernetes.io/docs/user-guide/sharing-clusters/" rel="noreferrer">Sharing Cluster Access with kubeconfig</a> for the instructions to get a <code>kubeconfig</code> file onto your worker node. </p>
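<p>For example, once a kubeconfig is on the node (the path is illustrative):</p>
<pre><code>kubectl --kubeconfig=/etc/kubernetes/worker-kubeconfig.yaml get nodes
</code></pre>
<p>or export <code>KUBECONFIG=/etc/kubernetes/worker-kubeconfig.yaml</code> so plain <code>kubectl</code> commands pick it up.</p>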
|
<p>When a Google Container Engine cluster is created, Container Engine creates a Compute Engine managed instance group to manage the created instances. These instances are from Google Compute engine, which means, they are Virtual machines.</p>
<p>But we read in the docs: "VMs are heavyweight and non-portable. The New Way is to deploy containers based on operating-system-level virtualization rather than hardware virtualization". Isn't that a contradiction? Correct me if I'm wrong.
We use containers because they are extremely fast (in both boot time and task execution) compared to VMs, and they save a lot of storage space. So if we have one node (VM) that can support 4 containers max, our clients can rapidly launch 4 containers, but beyond this number the gcloud autoscaler will need to launch a new node (VM) to support the upcoming containers, which incurs some delay. </p>
<p>Is it impossible to launch containers over physical machines?</p>
<p>And what do you recommend for running time-critical tasks? </p>
| <p>It is definitely possible to launch containers on physical machines. In fact, according to the <a href="https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/43438.pdf" rel="nofollow">Borg paper</a> ( the design of which heavily influenced Container Engine/Kubernetes ), this is the norm within Google's own infrastructure:</p>
<blockquote>
<p>Each task maps to a set of Linux processes running in a container on a
machine [62]. The vast majority of the Borg workload does not run
inside virtual machines (VMs), because we don't want to pay the cost
of virtualization. Also, the system was designed at a time when we had
a considerable investment in processors with no virtualization support
in hardware.</p>
</blockquote>
<p>Since Container Engine is hosted within GCP, VMs are used to facilitate dynamic provisioning. However, these VMs are long lived compared to the lifetime of containers scheduled onto them. Pods of containers may be scheduled on and off of these VMs and jobs run to completion. However, VMs are torn down when clusters are upgraded or re-sized. </p>
|
<p>I've created a Kubernetes cluster on my Mac with docker-machine, following the documentation here:</p>
<p><a href="http://kubernetes.io/docs/getting-started-guides/docker/" rel="nofollow">http://kubernetes.io/docs/getting-started-guides/docker/</a></p>
<p>I can access the normal api from inside the instance on 127.0.0.1:8080, but I want to access it externally from my macbook. I know there is a secure port :6443, but I'm unsure how to set up the credentials to access this port.</p>
<p>There are lots of instructions on how to do it on custom installs of kubernetes, but I don't know how to do it inside the docker containers I'm running.</p>
| <p>Likely, you will want to use Virtual Box's <a href="https://www.virtualbox.org/manual/ch06.html#natforward" rel="nofollow">port forwarding</a> capabilities. An example from the documentation:</p>
<pre><code>VBoxManage modifyvm "MyVM" --natpf1 "k8srule,tcp,,6443,,6443"
</code></pre>
<p>This forwards port 6443 on all hosts interfaces to port 6443 of the guest. Port forwarding can also be configured through the VirtualBox UI.</p>
|
<p>I have a Replication Controller whose size is more than one, and I'd like to embed the application monitoring profiler in only a pod in the replication controller. So I want the index or something to determine the pod is chosen only one. Especially in the GKE environment, is there such information?</p>
| <p>Pods started by a replication controller are all treated identically; they don't have any sort of ordinality. </p>
<p>If you want to start a group of identical pods and enable an extra feature in just one of them, you should consider using a master election scheme and having just the elected master run the monitoring profiler. </p>
|
<p>I have a Replication Controller whose size is more than one, and I'd like to embed the application monitoring profiler in only a pod in the replication controller. So I want the index or something to determine the pod is chosen only one. Especially in the GKE environment, is there such information?</p>
| <p>You will be interested in the parametrized set/templating proposal that will allow you to define indices <a href="https://github.com/kubernetes/kubernetes/blob/release-1.2/docs/proposals/templates.md" rel="nofollow">https://github.com/kubernetes/kubernetes/blob/release-1.2/docs/proposals/templates.md</a>. This will most likely be included in 1.3. </p>
|
<p>How to parse the json to retrieve a field from output of </p>
<pre><code>kubectl get pods -o json
</code></pre>
<p>From the command line I need to obtain the system generated container name from a google cloud cluster ... Here are the salient bits of json output from above command :
<a href="https://i.stack.imgur.com/ysqWI.png" rel="noreferrer"><img src="https://i.stack.imgur.com/ysqWI.png" alt="enter image description here"></a></p>
<p><a href="https://gist.github.com/scottstensland/278ce94dc6873aa54e44" rel="noreferrer">click here to see entire json output</a></p>
<p>So the topmost json key is an array, items[], followed by metadata.labels.name, where the search criteria value of that compound key is "web" (see the green marks in the image above). On a match, I then need to retrieve the field </p>
<pre><code>.items[].metadata.name
</code></pre>
<p>which so happens to have value :</p>
<pre><code>web-controller-5e6ij // I need to retrieve this value
</code></pre>
<p><a href="https://kubernetes.io/docs/reference/kubectl/jsonpath/" rel="noreferrer">Here are docs on jsonpath</a></p>
<p>I want to avoid text parsing output of</p>
<pre><code>kubectl get pods
</code></pre>
<p>which is</p>
<pre><code>NAME READY STATUS RESTARTS AGE
mongo-controller-h714w 1/1 Running 0 12m
web-controller-5e6ij 1/1 Running 0 9m
</code></pre>
<p>The following will correctly parse this <code>get pods</code> command, yet I feel it's too fragile: </p>
<pre><code>kubectl get pods | tail -1 | cut -d' ' -f1
</code></pre>
| <p>After much battling this one liner does retrieve the container name :</p>
<pre><code>kubectl get pods -o=jsonpath='{.items[?(@.metadata.labels.name=="web")].metadata.name}'
</code></pre>
<p>when this is the known search criteria :</p>
<pre><code>items[].metadata.labels.name == "web"
</code></pre>
<p>and this is the desired field to retrieve </p>
<pre><code>items[].metadata.name : "web-controller-5e6ij"
</code></pre>
|
<p>We use a separate VPC per environment. Does or will spinnaker support targeting different Kubernetes clusters? Will adding environments ad-hoc be viable?</p>
| <p>Spinnaker supports multiple Kubernetes clusters, each is added as an 'account' in Spinnaker configuration. The configured accounts are presented as options at deployment time, and the Server Groups for each application are rolled up under the account they belong to.</p>
<p>It is possible to change that configuration and refresh it at runtime, but it would involve editing the on-disk yaml file that backs the Clouddriver component of Spinnaker and triggering the /config-refresh endpoint.</p>
|
<p>What I have is</p>
<ul>
<li>Kubernetes: v.1.1.2</li>
<li>iptables v1.4.21</li>
<li>kernel: 3.10.0-327.3.1.el7.x86_64 Centos</li>
<li>Networking is done via flannel udp</li>
<li>no cloud provider</li>
</ul>
<p>What I did:</p>
<p>I enabled it with the <strong>--proxy_mode=iptables</strong> argument, and then checked the iptables rules:</p>
<pre><code>Chain PREROUTING (policy ACCEPT)
target prot opt source destination
KUBE-SERVICES all -- anywhere anywhere /* kubernetes service portals */
DOCKER all -- anywhere anywhere ADDRTYPE match dst-type LOCAL
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
KUBE-SERVICES all -- anywhere anywhere /* kubernetes service portals */
DOCKER all -- anywhere !loopback/8 ADDRTYPE match dst-type LOCAL
Chain POSTROUTING (policy ACCEPT)
target prot opt source destination
MASQUERADE all -- SIDR26KUBEAPMORANGE-005/26 anywhere
MASQUERADE all -- 172.17.0.0/16 anywhere
MASQUERADE all -- anywhere anywhere /* kubernetes service traffic requiring SNAT */ mark match 0x4d415351
Chain DOCKER (2 references)
target prot opt source destination
Chain KUBE-NODEPORTS (1 references)
target prot opt source destination
Chain KUBE-SEP-3SX6E5663KCZDTLC (1 references)
target prot opt source destination
MARK all -- 172.20.10.130 anywhere /* default/nc-service: */ MARK set 0x4d415351
DNAT tcp -- anywhere anywhere /* default/nc-service: */ tcp to:172.20.10.130:9000
Chain KUBE-SEP-Q4LJF4YJE6VUB3Y2 (1 references)
target prot opt source destination
MARK all -- SIDR26KUBEAPMORANGE-001.serviceengage.com anywhere /* default/kubernetes: */ MARK set 0x4d415351
DNAT tcp -- anywhere anywhere /* default/kubernetes: */ tcp to:10.62.66.254:9443
Chain KUBE-SERVICES (2 references)
target prot opt source destination
KUBE-SVC-6N4SJQIF3IX3FORG tcp -- anywhere 172.21.0.1 /* default/kubernetes: cluster IP */ tcp dpt:https
KUBE-SVC-362XK5X6TGXLXGID tcp -- anywhere 172.21.145.28 /* default/nc-service: cluster IP */ tcp dpt:commplex-main
KUBE-NODEPORTS all -- anywhere anywhere /* kubernetes service nodeports; NOTE: this must be the last rule in this chain */ ADDRTYPE match dst-type LOCAL
Chain KUBE-SVC-362XK5X6TGXLXGID (1 references)
target prot opt source destination
KUBE-SEP-3SX6E5663KCZDTLC all -- anywhere anywhere /* default/nc-service: */
Chain KUBE-SVC-6N4SJQIF3IX3FORG (1 references)
target prot opt source destination
KUBE-SEP-Q4LJF4YJE6VUB3Y2 all -- anywhere anywhere /* default/kubernetes: */
</code></pre>
<p>When I make an <strong>nc</strong> request to the service IP from another machine (in my case 10.116.0.2), I get an error like below:</p>
<pre><code>nc -v 172.21.145.28 5000
Ncat: Version 6.40 ( http://nmap.org/ncat )
hello
Ncat: Connection timed out.
</code></pre>
<p>While when I make the request to the 172.20.10.130:9000 server directly, it works fine:</p>
<pre><code>nc -v 172.20.10.130 9000
Ncat: Version 6.40 ( http://nmap.org/ncat )
Ncat: Connected to 172.20.10.130:9000.
hello
yes
</code></pre>
<p>From the dmesg log, I can see</p>
<pre><code>[10153.318195] DBG@OUTPUT: IN= OUT=eth0 SRC=10.62.66.223 DST=172.21.145.28 LEN=60 TOS=0x00 PREC=0x00 TTL=64 ID=62466 DF PROTO=TCP SPT=59075 DPT=5000 WINDOW=29200 RES=0x00 SYN URGP=0
[10153.318282] DBG@OUTPUT: IN= OUT=eth0 SRC=10.62.66.223 DST=172.21.145.28 LEN=60 TOS=0x00 PREC=0x00 TTL=64 ID=62466 DF PROTO=TCP SPT=59075 DPT=5000 WINDOW=29200 RES=0x00 SYN URGP=0
[10153.318374] DBG@POSTROUTING: IN= OUT=flannel0 SRC=10.62.66.223 DST=172.20.10.130 LEN=60 TOS=0x00 PREC=0x00 TTL=64 ID=62466 DF PROTO=TCP SPT=59075 DPT=9000 WINDOW=29200 RES=0x00 SYN URGP=0
</code></pre>
<p>And I found that if I'm on the machine where the Pod is running, I can successfully connect through the service IP: </p>
<pre><code>nc -v 172.21.145.28 5000
Ncat: Version 6.40 ( http://nmap.org/ncat )
Ncat: Connected to 172.21.145.28:5000.
hello
yes
</code></pre>
<p>I am wondering why and how to fix it.</p>
| <p>I meet the same issue exactly, on Kubernetes 1.1.7 and 1.2.0. I start flannel without --ip-masq, and add parameter --masquerade-all=true for kube-proxy, it helps.</p>
|
<p>I tried creating a <strong>Replication Controller</strong> via an JSON file and I have mentioned <strong>restartPolicy</strong> as "Never" for <strong>pod restartPolicy</strong>.</p>
<p>but I am getting an error that,</p>
<p><strong>Error:
The ReplicationController "ngnix-rc" is invalid.
*spec.template.spec.restartPolicy: Unsupported value: "Never": supported values: Always</strong></p>
<p>Is there any change in v1.2 that it supports only "<strong>Always</strong>" as an option for <strong>restartPolicy</strong>? I'm confused.</p>
<p>I tried another scenario where I faced a strange behavior.
I specified the restart policy as "<strong>never</strong>" and I got the error as,</p>
<p><strong>Error:
The ReplicationController "ngnix-rc" is invalid.
*spec.template.spec.restartPolicy: Unsupported value: "never": supported values: Always, OnFailure, Never
*spec.template.spec.restartPolicy: Unsupported value: "never": supported values: Always</strong></p>
<p>As shown, there are two errors in this scenario.
I don't know what the exact problem is.</p>
<p>The JSON file that i used to create RC is given below</p>
<pre><code>{
"kind":"ReplicationController",
"apiVersion":"v1",
"metadata":{
"name":"ngnix-rc",
"labels":{
"app":"webserver"
}
},
"spec":{
"replicas":1,
"selector":{
"app":"webserver1"
},
"template":{
"metadata":{
"name":"ngnix-pod",
"labels":{
"app":"webserver1"
}
},
"spec":{
"containers":[
{
"image":"ngnix",
"name":"nginx"
}
],
"restartPolicy":"Never"
}
}
}
}
</code></pre>
| <p>To expand on zhb's answer: while different restart policies make sense for single pods, or even for run-to-completion jobs, a replication controller's entire purpose is to keep N instances of a pod running, so saying that you don't want the pods restarted doesn't mesh well with the concept.</p>
<p>The part of the docs that explains this is: <a href="http://kubernetes.io/docs/user-guide/pod-states/#restartpolicy" rel="nofollow">http://kubernetes.io/docs/user-guide/pod-states/#restartpolicy</a></p>
|
<p>I'm trying to use the master API to update resources.</p>
<p>In 1.2 to update a deployment resource I'm doing <code>kubectl apply -f new updateddeployment.yaml</code></p>
<p>How can I do the same action with the API? </p>
| <p>I checked the code in <code>pkg/kubectl/cmd/apply.go</code> and I think the following lines of code show what happens behind the scenes when you run <code>kubectl apply -f</code>:</p>
<pre><code>// Compute a three way strategic merge patch to send to server.
patch, err := strategicpatch.CreateThreeWayMergePatch(original, modified, current,
versionedObject, true)
helper := resource.NewHelper(info.Client, info.Mapping)
_, err = helper.Patch(info.Namespace, info.Name, api.StrategicMergePatchType, patch)
</code></pre>
<p>And here is the code <code>helper.Patch</code>:</p>
<pre><code>func (m *Helper) Patch(namespace, name string, pt api.PatchType, data []byte) (runtime.Object, error) {
return m.RESTClient.Patch(pt).
NamespaceIfScoped(namespace, m.NamespaceScoped).
Resource(m.Resource).
Name(name).
Body(data).
Do().
Get()
}
</code></pre>
|
<p>I am working with Kubernetes tutorial and deploying the cluster locally with Vagrant.</p>
<p>After the vagrant machine finishes its loading, I get the following outout:</p>
<blockquote>
<pre><code>Kubernetes cluster is running.
The master is running at:
https://10.245.1.2
Administer and visualize its resources using Cockpit:
https://10.245.1.2:9090
For more information on Cockpit, visit http://cockpit-project.org
The user name and password to use is located in /Users/me/.kube/config
</code></pre>
</blockquote>
<p>When I go to <code>https://10.245.1.2:9090</code> I see the Fedora login screen.
I do the following:</p>
<blockquote>
<pre><code>./cluster/kubectl.sh config view
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: REDACTED
server: https://10.245.1.2
name: vagrant
contexts:
- context:
cluster: vagrant
user: vagrant
name: vagrant
current-context: vagrant
kind: Config
preferences: {}
users:
- name: vagrant
user:
client-certificate-data: REDACTED
client-key-data: REDACTED
password: 9r5V2B2wn6oeaciX
username: admin
</code></pre>
</blockquote>
<p>But the username and password are incorrect.</p>
<p>How am I supposed to connect to Cockpit? </p>
<p>Thanks</p>
| <p>The username and password in the kubeconfig file are used to authenticate to the Kubernetes apiserver running in your cluster. The authentication for Cockpit is entirely separate. </p>
<p>According to the <a href="https://github.com/kubernetes/kubernetes/blob/2e89f555c650c01ba577a015fd1b23dbb71707e3/cluster/vagrant/provision-master.sh#L107-L108">vagrant setup scripts</a>, you should log into Cockpit as the user <code>vagrant</code> with the password <code>vagrant</code>. </p>
|
<p>I upgraded GKE to v1.2 yesterday and started to try out the DaemonSet feature (beta). It didn't work as expected and I wanted to delete it from the cluster. What happened is that the delete operation failed, and now the DaemonSet is in an inconsistent state and restarts all my other pods every 5 minutes.</p>
<p>What can be done without deleting and recreating the whole cluster? I did try to apply the DaemonSet with a busybox image, like this:</p>
<p>And the DS looks like that : </p>
<p><a href="https://i.stack.imgur.com/I4V9u.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/I4V9u.png" alt="enter image description here"></a></p>
<p><a href="https://i.stack.imgur.com/9pP0G.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9pP0G.png" alt="enter image description here"></a></p>
<p>The delete operation fails:</p>
<p><a href="https://i.stack.imgur.com/5smS8.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5smS8.png" alt="enter image description here"></a></p>
| <p>Because the state of the DaemonSet is inconsistent, you can try to delete it using the <code>--cascade=false</code> flag.</p>
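<p>Something along these lines (the DaemonSet name, namespace and pod label are illustrative):</p>
<pre><code># Remove the DaemonSet object itself without waiting on its pods
kubectl delete daemonset my-daemonset --namespace=kube-system --cascade=false

# Then clean up any pods it left behind
kubectl delete pods -l name=my-daemonset --namespace=kube-system
</code></pre>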
|
<p>I'm trying to run Kubernetes on a local Centos server and have had some issues (for example, <a href="https://stackoverflow.com/questions/36228065/kubernetes-dns-fails-in-kubernetes-1-2">with DNS</a>). A version check shows that I'm running Kubernetes 1.2 Alpha 1. Since the full release is now available from the <a href="https://github.com/kubernetes/kubernetes/releases" rel="nofollow noreferrer">Releases Download</a> page, I'd like to upgrade and see if that resolves my issue. The <a href="http://kubernetes.io/docs/getting-started-guides/binary_release/" rel="nofollow noreferrer">documentation</a> for installing a prebuilt binary release states:</p>
<p><em>Download the latest release and unpack this tar file on Linux or OS X, cd to the created kubernetes/ directory, and then follow the getting started guide for your cloud.</em></p>
<p>However, the <a href="http://kubernetes.io/docs/getting-started-guides/centos/centos_manual_config/" rel="nofollow noreferrer">Getting Started Guide for Centos</a> says nothing about using a prebuilt binary. Instead, it tells you to set up a yum repo and run a yum install command:</p>
<pre><code>yum -y install --enablerepo=virt7-docker-common-release kubernetes
</code></pre>
<p>This command downloads and installs the Alpha1 release. In addition, it attempts to install Docker 1.8 (two releases down from the current 1.10), which fails if Docker is already installed.</p>
<p>How can I install from a prebuilt binary and use an existing Docker?</p>
| <p>According to the <a href="http://kubernetes.io/docs/getting-started-guides/#table-of-solutions" rel="nofollow">Table of Solutions</a> for installing Kubernetes, the maintainer of the CentOS getting started guide is <a href="https://github.com/coolsvap" rel="nofollow">@coolsvap</a>. You should reach out to him to ask about getting the pre-built binary updated to the official release. </p>
|
<p>To create kubernetes cluster in AWS, I use the set up script "<a href="https://get.k8s.io" rel="noreferrer">https://get.k8s.io</a>". That script creates a new VPC automatically, but I want to create kubernetes cluster inside an existing VPC in AWS. Is there a way to do it?</p>
<p>I checked /kubernetes/cluster/aws/config-default.sh file, but there doesn't seem to be any environment variables about VPC.</p>
| <p>You can add this ENV variable (we are using ver 1.1.8)</p>
<p><code>export VPC_ID=vpc-YOURID</code></p>
<p>Also Kubernetes creates a VPC with 172.20.0.0/16 and I think it expects this.</p>
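<p>A minimal sketch of how that could be combined with the setup script (the VPC ID below is a placeholder for your own):</p>
<pre><code>export KUBERNETES_PROVIDER=aws
export VPC_ID=vpc-0123abcd
curl -sS https://get.k8s.io | bash
</code></pre>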
|
<p>I am following the example exactly: <a href="http://kubernetes.io/docs/hellonode/" rel="noreferrer">http://kubernetes.io/docs/hellonode/</a></p>
<p>After I run [kubectl run hello-node --image=gcr.io/PROJECT_ID/hello-node:v1 --port=8080
deployment "hello-node" created], the pod does not run OK; I get CrashLoopBackOff status. I have no deployment exec.</p>
<p>any comment is appreciated. </p>
<p>Nobert</p>
<p>==========================================</p>
<pre>
norbert688@kubernete-codelab-1264:~/hellonode$ kubectl get pods
NAME READY STATUS RESTARTS AGE
hello-node-2129762707-hte0f 0/1 CrashLoopBackOff 5 6m
norbert688@kubernete-codelab-1264:~/hellonode$ kubectl describe pod hello
Name: hello-node-2129762707-hte0f
Namespace: default
Node: gke-hello-world-16359f5d-node-zkpf/10.140.0.3
Start Time: Mon, 28 Mar 2016 20:07:53 +0800
Labels: pod-template-hash=2129762707,run=hello-node
Status: Running
IP: 10.16.2.3
Controllers: ReplicaSet/hello-node-2129762707
Containers:
hello-node:
Container ID: docker://dfae3b1e068a5b0e89b1791f1acac56148fc649ea5894d36575ce3cd46a2ae3d
Image: gcr.io/kubernete-codelab-1264/hello-node:v1
Image ID: docker://1fab5e6a9ef21db5518db9bcfbafa52799c38609738f5b3e1c4bb875225b5d61
Port: 8080/TCP
Args:
deployment
hello-node
created
QoS Tier:
cpu: Burstable
memory: BestEffort
Requests:
cpu: 100m
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: ContainerCannotRun
Message: [8] System error: exec: "deployment": executable file not found in $PATH
Exit Code: -1
Started: Mon, 28 Mar 2016 20:14:16 +0800
Finished: Mon, 28 Mar 2016 20:14:16 +0800
Ready: False
Restart Count: 6
Environment Variables:
Conditions:
Type Status
Ready False
Volumes:
default-token-k3zl5:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-k3zl5
Events:
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
6m 6m 1 {kubelet gke-hello-world-16359f5d-node-zkpf} spec.containers{hello-node} Normal Pulling pulling image "gcr.io/kubernete-codelab-1264/hello-node:v1"
6m 6m 1 {default-scheduler } Normal Scheduled Successfully assigned hello-node-2129762707-hte0f to gke-hello-world-16359f5d-node-zkpf
6m 6m 1 {kubelet gke-hello-world-16359f5d-node-zkpf} spec.containers{hello-node} Normal Created Created container with docker id 41c8fde8f94b
6m 6m 1 {kubelet gke-hello-world-16359f5d-node-zkpf} spec.containers{hello-node} Warning Failed Failed to start container with docker id 41c8fde8f94b with error: API error (500): Cannot start container 41c8fde8f94bee697e3f1a3af88e6b347f5b850d9a6a406a5c2e25375e48c87a: [8] System error: exec: "deployment": executable file not found in $PATH
6m 6m 1 {kubelet gke-hello-world-16359f5d-node-zkpf} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "hello-node" with RunContainerError: "runContainer: API error (500): Cannot start container 41c8fde8f94bee697e3f1a3af88e6b347f5b850d9a6a406a5c2e25375e48c87a: [8] System error: exec: \"deployment\": executable file not found in $PATH\n"
6m 6m 1 {kubelet gke-hello-world-16359f5d-node-zkpf} spec.containers{hello-node} Normal Created Created container with docker id a99c8dc5cc8a
6m 6m 1 {kubelet gke-hello-world-16359f5d-node-zkpf} spec.containers{hello-node} Warning Failed Failed to start container with docker id a99c8dc5cc8a with error: API error (500): Cannot start container a99c8dc5cc8a884d35f7c69e9e1ba91643f9e9ef8815b95f80aabdf9995a6608: [8] System error: exec: "deployment": executable file not found in $PATH
6m 6m 1 {kubelet gke-hello-world-16359f5d-node-zkpf} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "hello-node" with RunContainerError: "runContainer: API error (500): Cannot start container a99c8dc5cc8a884d35f7c69e9e1ba91643f9e9ef8815b95f80aabdf9995a6608: [8] System error: exec: \"deployment\": executable file not found in $PATH\n"
6m 6m 1 {kubelet gke-hello-world-16359f5d-node-zkpf} spec.containers{hello-node} Normal Pulled Successfully pulled image "gcr.io/kubernete-codelab-1264/hello-node:v1"
6m 6m 1 {kubelet gke-hello-world-16359f5d-node-zkpf} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "hello-node" with RunContainerError: "runContainer: API error (500): Cannot start container 977b07a9e5dea5256de4e600d6071e3ac5cc6e9a344cb5354851aab587bff952: [8] System error: exec: \"deployment\": executable file not found in $PATH\n"
6m 6m 1 {kubelet gke-hello-world-16359f5d-node-zkpf} spec.containers{hello-node} Normal Created Created container with docker id 977b07a9e5de
6m 6m 1 {kubelet gke-hello-world-16359f5d-node-zkpf} spec.containers{hello-node} Warning Failed Failed to start container with docker id 977b07a9e5de with error: API error (500): Cannot start container 977b07a9e5dea5256de4e600d6071e3ac5cc6e9a344cb5354851aab587bff952: [8] System error: exec: "deployment": executable file not found in $PATH
5m 5m 1 {kubelet gke-hello-world-16359f5d-node-zkpf} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "hello-node" with CrashLoopBackOff: "Back-off 20s restarting failed container=hello-node pod=hello-node-2129762707-hte0f_default(b300b749-f4dd-11e5-83ee-42010af0000e)"
5m 5m 1 {kubelet gke-hello-world-16359f5d-node-zkpf} spec.containers{hello-node} Normal Created Created container with docker id f8ad177306bc
5m 5m 1 {kubelet gke-hello-world-16359f5d-node-zkpf} spec.containers{hello-node} Warning Failed Failed to start container with docker id f8ad177306bc with error: API error (500): Cannot start container f8ad177306bc6154498befbbc876ee4b2334d3842f269f4579f762434effe33a: [8] System error: exec: "deployment": executable file not found in $PATH
5m 5m 1 {kubelet gke-hello-world-16359f5d-node-zkpf} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "hello-node" with RunContainerError: "runContainer: API error (500): Cannot start container f8ad177306bc6154498befbbc876ee4b2334d3842f269f4579f762434effe33a: [8] System error: exec: \"deployment\": executable file not found in $PATH\n"
5m 4m 3 {kubelet gke-hello-world-16359f5d-node-zkpf} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "hello-node" with CrashLoopBackOff: "Back-off 40s restarting failed container=hello-node pod=hello-node-2129762707-hte0f_default(b300b749-f4dd-11e5-83ee-42010af0000e)"
4m 4m 1 {kubelet gke-hello-world-16359f5d-node-zkpf} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "hello-node" with RunContainerError: "runContainer: API error (500): Cannot start container d9218f5385cb020c752c9e78e3eda87f04fa0428cba92d14a1a73c93a01c8d5b: [8] System error: exec: \"deployment\": executable file not found in $PATH\n"
4m 4m 1 {kubelet gke-hello-world-16359f5d-node-zkpf} spec.containers{hello-node} Normal Created Created container with docker id d9218f5385cb
4m 4m 1 {kubelet gke-hello-world-16359f5d-node-zkpf} spec.containers{hello-node} Warning Failed Failed to start container with docker id d9218f5385cb with error: API error (500): Cannot start container d9218f5385cb020c752c9e78e3eda87f04fa0428cba92d14a1a73c93a01c8d5b: [8] System error: exec: "deployment": executable file not found in $PATH
4m 3m 7 {kubelet gke-hello-world-16359f5d-node-zkpf} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "hello-node" with CrashLoopBackOff: "Back-off 1m20s restarting failed container=hello-node pod=hello-node-2129762707-hte0f_default(b300b749-f4dd-11e5-83ee-42010af0000e)"
3m 3m 1 {kubelet gke-hello-world-16359f5d-node-zkpf} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "hello-node" with RunContainerError: "runContainer: API error (500): Cannot start container 7c3c680f18c4cb7fa0fd02f538dcbf2e8f8ba94661fe2703c2fb42ed0c908f59: [8] System error: exec: \"deployment\": executable file not found in $PATH\n"
3m 3m 1 {kubelet gke-hello-world-16359f5d-node-zkpf} spec.containers{hello-node} Warning Failed Failed to start container with docker id 7c3c680f18c4 with error: API error (500): Cannot start container 7c3c680f18c4cb7fa0fd02f538dcbf2e8f8ba94661fe2703c2fb42ed0c908f59: [8] System error: exec: "deployment": executable file not found in $PATH
3m 3m 1 {kubelet gke-hello-world-16359f5d-node-zkpf} spec.containers{hello-node} Normal Created Created container with docker id 7c3c680f18c4
2m 40s 12 {kubelet gke-hello-world-16359f5d-node-zkpf} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "hello-node" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=hello-node pod=hello-node-2129762707-hte0f_default(b300b749-f4dd-11e5-83ee-42010af0000e)"
26s 26s 1 {kubelet gke-hello-world-16359f5d-node-zkpf} spec.containers{hello-node} Warning Failed Failed to start container with docker id dfae3b1e068a with error: API error (500): Cannot start container dfae3b1e068a5b0e89b1791f1acac56148fc649ea5894d36575ce3cd46a2ae3d: [8] System error: exec: "deployment": executable file not found in $PATH
26s 26s 1 {kubelet gke-hello-world-16359f5d-node-zkpf} spec.containers{hello-node} Normal Created Created container with docker id dfae3b1e068a
6m 26s 6 {kubelet gke-hello-world-16359f5d-node-zkpf} spec.containers{hello-node} Normal Pulled Container image "gcr.io/kubernete-codelab-1264/hello-node:v1" already present on machine
3m 14s 3 {kubelet gke-hello-world-16359f5d-node-zkpf} Warning FailedSync (events with common reason combined)
5m 3s 26 {kubelet gke-hello-world-16359f5d-node-zkpf} spec.containers{hello-node} Warning BackOff Back-off restarting failed docker container
3s 3s 1 {kubelet gke-hello-world-16359f5d-node-zkpf} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "hello-node" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=hello-node pod=hello-node-2129762707-hte0f_default(b300b749-f4dd-11e5-83ee-42010af0000e)"
</pre>
<p>==========================================</p>
| <blockquote>
<p>after I run [kubectl run hello-node --image=gcr.io/PROJECT_ID/hello-node:v1 --port=8080 deployment "hello-node" created]</p>
</blockquote>
<p>Do you mean you ran <code>kubectl run hello-node --image=gcr.io/PROJECT_ID/hello-node:v1 --port=8080 deployment "hello-node" created</code>?</p>
<p>If this is the case, then there is no surprise since <code>deployment</code> is not an executable in your PATH.</p>
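<p>The command itself should only be the part below; the trailing words in your paste ("deployment hello-node created") are kubectl's output, not arguments:</p>
<pre><code>kubectl run hello-node --image=gcr.io/PROJECT_ID/hello-node:v1 --port=8080
</code></pre>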
|
<p>I am having this <a href="https://stackoverflow.com/questions/36138636/how-do-i-delete-orphan-kubernetes-pods">issue</a> except I have not created a ReplicaSet as suggested by <a href="https://stackoverflow.com/users/3653449/tim-hockin">Tim Hockin</a>. </p>
<p>Somehow a ReplicaSet was created with the same properties as my ReplicationController. The only difference is the name. The Controller is named 'fp-frontend' and the Set is named 'fp-frontend-389969098'. The appended number suggests that it was automatically created.</p>
<p>Perhaps a race condition or something, who knows.... I would however like to delete it and the pods it spawns.</p>
<p>So I try to delete it:</p>
<pre><code>$ kubectl delete rs fp-frontend-389969098
replicaset "fp-frontend-389969098" deleted
</code></pre>
<p>Command says it was deleted. But...</p>
<pre><code>$ kubectl get rs
NAME                    DESIRED   CURRENT   AGE
fp-frontend-389969098   1         1         4s
</code></pre>
<p>Any suggestions?
I think I am going to delete and recreate the cluster?</p>
<p>I am using google container engine and kubernetes is up to date.</p>
<p>```
Client Version: version.Info{Major:"1", Minor:"2", GitVersion:"v1.2.0", GitCommit:"5cb86ee022267586db386f62781338b0483733b3", GitTreeState:"clean"}
Server Version: version.Info{Major:"1", Minor:"2", GitVersion:"v1.2.0", GitCommit:"5cb86ee022267586db386f62781338b0483733b3", GitTreeState:"clean"}</p>
| <p>You've probably created a deployment that's recreating the replica set for you. Try running <code>kubectl get deployments</code> and deleting the deployment from the output of that command.</p>
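<p>For example (assuming the Deployment carries the same base name as the ReplicaSet):</p>
<pre><code>kubectl get deployments
kubectl delete deployment fp-frontend
</code></pre>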
|
<p>I'm trying to get Kubernetes to download images from a Google Container Registry from another project. According to the <a href="http://kubernetes.io/docs/user-guide/images/#specifying-imagepullsecrets-on-a-pod">docs</a> you should create an image pull secret using: </p>
<pre><code>$ kubectl create secret docker-registry myregistrykey --docker-server=DOCKER_REGISTRY_SERVER --docker-username=DOCKER_USER --docker-password=DOCKER_PASSWORD --docker-email=DOCKER_EMAIL
</code></pre>
<p>But I wonder what <code>DOCKER_USER</code> and <code>DOCKER_PASSWORD</code> I should use for authenticating with Google Container Registry? Looking at the <a href="https://cloud.google.com/container-registry/docs/auth">GCR docs</a> it says that the password is the access token that you can get by running:</p>
<pre><code>$ gcloud auth print-access-token
</code></pre>
<p>This actually works... for a while. The problem seems to be that this access token expires after (what I believe to be) one hour. I need a password (or something) that doesn't expire when creating my image pull secret. Otherwise the Kubernetes cluster can't download the new images after an hour or so. What's the correct way to do this?</p>
| <p>This is really tricky but after a lot of trial and error I think I've got it working.</p>
<ol>
<li><p>Go to the Google Developer Console > Api Manager > Credentials and click "Create credentials" and create a "service account key"</p>
</li>
<li><p>Under "service account" select new and name the new key "gcr" (let the key type be json)</p>
</li>
<li><p>Create the key and store the file on disk (from here on we assume that it was stored under <code>~/secret.json</code>)</p>
</li>
<li><p>Now login to GCR using Docker from command-line:</p>
<p><code>$ docker login -e [email protected] -u _json_key -p "$(cat ~/secret.json)" https://eu.gcr.io</code></p>
</li>
</ol>
<p>This will generate an entry for "https://eu.gcr.io" in your <code>~/.docker/config.json</code> file.</p>
<ol start="6">
<li><p>Copy the JSON structure under "https://eu.gcr.io" into a new file called <code>~/docker-config.json</code> and remove the newlines so it is a single line. For example (the auth value is a placeholder for the base64 string Docker generated for you):</p>
<pre><code>{"https://eu.gcr.io": {"auth": "<base64-auth-string-from-config.json>", "email": "[email protected]"}}
</code></pre>
</li>
</ol>
<ol start="7">
<li><p>Base64 encode this file:</p>
<p><code>$ cat ~/docker-config.json | base64</code></p>
</li>
<li><p>This will print a long base64 encoded string, copy this string and paste it into an image pull secret definition (called <code>~/pullsecret.yaml</code>):</p>
</li>
</ol>
<blockquote>
<pre><code>apiVersion: v1
kind: Secret
metadata:
name: mykey
data:
.dockercfg: <paste base64 encoded string here>
type: kubernetes.io/dockercfg
</code></pre>
</blockquote>
<ol start="9">
<li>Now create the secret:</li>
</ol>
<p><code>$ kubectl create -f ~/pullsecret.yaml</code></p>
<ol start="10">
<li>Now you can use this pull secret from a pod, for example:</li>
</ol>
<blockquote>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: foo
namespace: awesomeapps
spec:
containers:
- image: "janedoe/awesomeapp:v1"
name: foo
imagePullSecrets:
- name: mykey
</code></pre>
</blockquote>
<p>or add it to a <a href="http://kubernetes.io/docs/user-guide/service-accounts/#adding-imagepullsecrets-to-a-service-account" rel="nofollow noreferrer">service account</a>.</p>
|
<p>I'm running some containers on Google Container Engine.
One day everything was fine, and the next day I can't <code>attach</code> to my containers anymore. Or <code>exec</code>, or any other docker command.</p>
<p>I deleted the pods and let new ones be instanced, didn't help.
Then I deleted the node and waited for a new one to be created and the pods deployed, didn't help either.</p>
<pre><code>$ kubectl attach www-controller-dev-xxxxx
Error from server: No SSH tunnels currently open. Were the targets able to accept an ssh-key for user "gke-xxxxxxxxxxxxxxxxxxxxxxxx"?
</code></pre>
<p>What else can I try?</p>
<p>The problem might have started after I've deleted the cluster and recreated it, but I can't be sure. Did that before and it never was a problem.</p>
| <p>Commands like attach rely on the cluster's master being able to talk to the nodes
in the cluster. However, because the master isn't in the same Compute
Engine network as your cluster's nodes, we rely on SSH tunnels to enable secure
communication.</p>
<p>Container Engine puts an SSH public key in your Compute Engine project
<a href="https://cloud.google.com/compute/docs/metadata" rel="noreferrer">metadata</a>. All Compute Engine VMs using
Google-provided images regularly check their project's common metadata
and their instance's metadata for SSH keys to add to the VM's list of
authorized users. Container Engine also adds a firewall rule to your Compute
Engine network allowing SSH access from the master's IP address to each node
in the cluster.</p>
<p>If kubectl attach (or logs, exec, and port-forward) doesn't work, it's likely that it's because the master is unable to open SSH tunnels to the nodes. To
determine what the underlying problem is, you should check for these potential
causes:</p>
<ol>
<li><p>The cluster doesn't have any nodes.</p>
<p>If you've scaled down the number of nodes in your cluster to zero, SSH
tunnels won't work.</p>
<p>To fix it,
<a href="http://kubernetes.io/docs/user-guide/resizing-a-replication-controller/" rel="noreferrer">resize your cluster</a>
to have at least one node.</p></li>
<li><p>Pods in the cluster have gotten stuck in a terminating state and prevented
nodes that no longer exist from being removed from the cluster.</p>
<p>This is an issue that should only affect Kubernetes version 1.1, but could
be caused by repeated resizing of the cluster down and up.</p>
<p>To fix it,
<a href="http://kubernetes.io/docs/user-guide/kubectl/kubectl_delete/" rel="noreferrer">delete the pods</a>
that have been in a terminating state for more than a few minutes.
The old nodes will then be removed from the master's API and replaced
by the new nodes.</p></li>
<li><p>Your network's firewall rules don't allow for SSH access to the master.</p>
<p>All Compute Engine networks are created with a firewall rule called
"default-allow-ssh" that allows SSH access from all IP addresses (requiring
a valid private key, of course). Container Engine also inserts an SSH rule
for each cluster of the form "gke-<cluster-name>-<cluster-hash>-ssh"
that allows SSH access specifically from the cluster's master IP to the
cluster's nodes. If neither of these rules exists, then the master will be
unable to open SSH tunnels.</p>
<p>To fix it,
<a href="https://cloud.google.com/compute/docs/networking#addingafirewall" rel="noreferrer">re-add a firewall rule</a>
allowing access to VMs with the tag that's on all the cluster's nodes from
the master's IP address.</p></li>
<li><p>Your project's common metadata entry for sshKeys is full.</p>
<p>If the project's metadata entry named "sshKeys" is close to the 32KiB size
limit, then Container Engine isn't able to add its own SSH key to let it
open SSH tunnels. You can see your project's metadata by running
<code>gcloud compute project-info describe [--project=PROJECT]</code>, then check the
length of the list of sshKeys.</p>
<p>To fix it,
<a href="https://cloud.google.com/compute/docs/instances/adding-removing-ssh-keys#delete_project-wide_ssh_keys" rel="noreferrer">delete some of the SSH keys</a>
that are no longer needed.</p></li>
<li><p>You have set a metadata field with the key "sshKeys" on the VMs in the
cluster.</p>
<p>The node agent on VMs prefers per-instance sshKeys to project-wide SSH keys,
so if you've set any SSH keys specifically on the cluster's nodes, then the
master's SSH key in the project metadata won't be respected by the nodes.
To check, run <code>gcloud compute instances describe <VM-name></code> and look for
an "sshKeys" field in the metadata.</p>
<p>To fix it,
<a href="https://cloud.google.com/compute/docs/instances/adding-removing-ssh-keys#delete_instance-only_ssh-keys_values" rel="noreferrer">delete the per-instance SSH keys</a>
from the instance metadata.</p></li>
</ol>
<p>It's worth noting that these features are not required for the correct
functioning of the cluster. If you prefer to keep your cluster's network locked
down from all outside access, that's perfectly fine. Just be aware that
features like these won't work as a result.</p>
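<p>For the firewall case (point 3 above), re-creating the rule might look roughly like this; the network, node tag and master IP are placeholders for your own values:</p>
<pre><code>gcloud compute firewall-rules create ssh-from-gke-master \
  --network=<your-network> \
  --allow=tcp:22 \
  --source-ranges=<master-ip>/32 \
  --target-tags=<node-tag>
</code></pre>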
|
<p>Trying to deploy <code>heapster-controller</code> to get Heapster + Grafana + InfluxDB working for Kubernetes. Getting error messages while trying to deploy using the heapster-controller.yaml file:</p>
<p><strong>heapster-controller.yaml</strong></p>
<pre><code>
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: heapster-v1.1.0-beta1
namespace: kube-system
labels:
k8s-app: heapster
kubernetes.io/cluster-service: "true"
spec:
replicas: 1
selector:
matchLabels:
k8s-app: heapster
template:
metadata:
labels:
k8s-app: heapster
kubernetes.io/cluster-service: "true"
spec:
containers:
- image: gcr.io/google_containers/heapster:v1.1.0-beta1
name: heapster
resources:
# keep request = limit to keep this container in guaranteed class
limits:
cpu: 100m
memory: 200m
requests:
cpu: 100m
memory: 200m
command:
- /heapster
- --source=kubernetes.summary_api:''
- --sink=influxdb:http://monitoring-influxdb:8086
- --metric_resolution=60s
- image: gcr.io/google_containers/heapster:v1.1.0-beta1
name: eventer
resources:
# keep request = limit to keep this container in guaranteed class
limits:
cpu: 100m
memory: 200m
requests:
cpu: 100m
memory: 200m
command:
- /eventer
- --source=kubernetes:''
- --sink=influxdb:http://monitoring-influxdb:8086
- image: gcr.io/google_containers/addon-resizer:1.0
name: heapster-nanny
resources:
limits:
cpu: 50m
memory: 100Mi
requests:
cpu: 50m
memory: 100Mi
env:
- name: MY_POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: MY_POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
command:
- /pod_nanny
- --cpu=100m
- --extra-cpu=0m
- --memory=200
- --extra-memory=200Mi
- --threshold=5
- --deployment=heapster-v1.1.0-beta1
- --container=heapster
- --poll-period=300000
- image: gcr.io/google_containers/addon-resizer:1.0
name: eventer-nanny
resources:
limits:
cpu: 50m
memory: 100Mi
requests:
cpu: 50m
memory: 100Mi
env:
- name: MY_POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: MY_POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
command:
- /pod_nanny
- --cpu=100m
- --extra-cpu=0m
- --memory=200
- --extra-memory=200Ki
- --threshold=5
- --deployment=heapster-v1.1.0-beta1
- --container=eventer
- --poll-period=300000
</code></pre>
<p>Deployment goes through, but then I get error:</p>
<pre><code>
[root@node236 influxdb]# kubectl get pods -o wide --namespace=kube-system
NAME READY STATUS RESTARTS AGE NODE
heapster-v1.1.0-beta1-3082378092-t6inb 2/4 RunContainerError 0 1m node262.local.net
[root@node236 influxdb]#
</code></pre>
<p>Display log for the failed container:</p>
<pre><code>
[root@node236 influxdb]# kubectl logs --namespace=kube-system heapster-v1.1.0-beta1-3082378092-t6inb
Error from server: a container name must be specified for pod heapster-v1.1.0-beta1-3082378092-t6inb, choose one of: [heapster eventer heapster-nanny eventer-nanny]
[root@node236 influxdb]#
</code></pre>
<p>Where am I possibly going wrong?</p>
<p>Any feedback appreciated!!</p>
<p>Alex</p>
| <p>The correct syntax is <code>kubectl logs <pod> <container></code>.</p>
<p>In your example, <code>kubectl logs heapster-v1.1.0-beta1-3082378092-t6inb heapster --namespace=kube-system</code> will show the logs of the "heapster" container within the named pod.</p>
|
<p>I'm running kubernetes with docker 1.10 and I want to run a container with <code>--security-opt seccomp=unconfined</code> . I understand from <a href="https://github.com/kubernetes/kubernetes/issues/20870" rel="noreferrer">https://github.com/kubernetes/kubernetes/issues/20870</a> that seccomp in general is not supported by kubernetes yet, but are there any workarounds? </p>
<p>Do I just need to downgrade docker to 1.9 and lose the security profiles altogether, or is there another way to give my container the access it needs?</p>
| <p><a href="https://github.com/kubernetes/kubernetes/pull/21790" rel="nofollow">Seccomp is disabled by default</a> in kubernetes v1.2 for docker v1.10+, so you should not have problems running container with unconfined policy.</p>
|
<p>I want to calculate and show node specific cpu usage in percent in my own web application using Kubernetes API. </p>
<p>I need the same information as Kube UI and Cadvisor displays but I want to use the Kubernetes API.</p>
<p>I have found some cpu metrics under node-ip:10255/stats which contains timestamp, cpu usage: total, user and system in big weird numbers which I do not understand. Also the CPU-Limit is reported as 1024.</p>
<p>How does Kube UI calculate cpu usage and is it possible to do the same via the API?</p>
| <p>If you use Kubernetes v1.2, there is a new, cleaner metrics summary API. From the release note:</p>
<blockquote>
<p>Kubelet exposes a new Alpha metrics API - /stats/summary in a user friendly format with reduced system overhead.</p>
</blockquote>
<p>You can access the endpoint through <code><node-ip>:10255/stats/summary</code> and <a href="https://github.com/kubernetes/kubernetes/blob/225f903ccfcd3040554fa11c94ec2f0488498280/pkg/kubelet/api/v1alpha1/stats/types.go" rel="nofollow noreferrer">detailed API objects is here</a>.</p>
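<p>For example, you could poll it directly (replace the node IP with one of yours). CPU usage there is reported in nanocores, so a rough percentage is <code>usageNanoCores / (number of cores * 1e9) * 100</code>; that's my reading of the API types linked above, so double-check against them:</p>
<pre><code>curl http://<node-ip>:10255/stats/summary
</code></pre>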
|
<p>I'm attempting to set up DNS support in Kubernetes 1.2 on Centos 7. According to the <a href="https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/dns#how-do-i-configure-it" rel="noreferrer">documentation</a>, there's two ways to do this. The first applies to a "supported kubernetes cluster setup" and involves setting environment variables:</p>
<pre><code>ENABLE_CLUSTER_DNS="${KUBE_ENABLE_CLUSTER_DNS:-true}"
DNS_SERVER_IP="10.0.0.10"
DNS_DOMAIN="cluster.local"
DNS_REPLICAS=1
</code></pre>
<p>I added these settings to /etc/kubernetes/config and rebooted, with no effect, so either I don't have a supported kubernetes cluster setup (what's that?), or there's something else required to set its environment.</p>
<p>The second approach requires more manual setup. It adds two flags to kubelets, which I set by updating /etc/kubernetes/kubelet to include:</p>
<pre><code>KUBELET_ARGS="--cluster-dns=10.0.0.10 --cluster-domain=cluster.local"
</code></pre>
<p>and restarting the kubelet with <code>systemctl restart kubelet</code>. Then it's necessary to start a replication controller and a service. The doc page cited above provides a couple of template files for this that require some editing, both for local changes (my Kubernetes API server listens to the actual IP address of the hostname rather than 127.0.0.1, making it necessary to add a --kube-master-url setting) and to remove some Salt dependencies. When I do this, the replication controller starts four containers successfully, but the kube2sky container gets terminated about a minute after completing initialization:</p>
<pre><code>[david@centos dns]$ kubectl --server="http://centos:8080" --namespace="kube-system" logs -f kube-dns-v11-t7nlb -c kube2sky
I0325 20:58:18.516905 1 kube2sky.go:462] Etcd server found: http://127.0.0.1:4001
I0325 20:58:19.518337 1 kube2sky.go:529] Using http://192.168.87.159:8080 for kubernetes master
I0325 20:58:19.518364 1 kube2sky.go:530] Using kubernetes API v1
I0325 20:58:19.518468 1 kube2sky.go:598] Waiting for service: default/kubernetes
I0325 20:58:19.533597 1 kube2sky.go:660] Successfully added DNS record for Kubernetes service.
F0325 20:59:25.698507 1 kube2sky.go:625] Received signal terminated
</code></pre>
<p>I've determined that the termination is done by the healthz container after reporting:</p>
<pre><code>2016/03/25 21:00:35 Client ip 172.17.42.1:58939 requesting /healthz probe servicing cmd nslookup kubernetes.default.svc.cluster.local 127.0.0.1 >/dev/null
2016/03/25 21:00:35 Healthz probe error: Result of last exec: nslookup: can't resolve 'kubernetes.default.svc.cluster.local', at 2016-03-25 21:00:35.608106622 +0000 UTC, error exit status 1
</code></pre>
<p>Aside from this, all other logs look normal. However, there is one anomaly: it was necessary to specify --validate=false when creating the replication controller, as the command otherwise gets the message:</p>
<pre><code>error validating "skydns-rc.yaml": error validating data: [found invalid field successThreshold for v1.Probe, found invalid field failureThreshold for v1.Probe]; if you choose to ignore these errors, turn validation off with --validate=false
</code></pre>
<p>Could this be related? These arguments come directly from the Kubernetes documentation. If not, what's needed to get this running? </p>
<p>Here is the skydns-rc.yaml I used: </p>
<pre><code>apiVersion: v1
kind: ReplicationController
metadata:
name: kube-dns-v11
namespace: kube-system
labels:
k8s-app: kube-dns
version: v11
kubernetes.io/cluster-service: "true"
spec:
replicas: 1
selector:
k8s-app: kube-dns
version: v11
template:
metadata:
labels:
k8s-app: kube-dns
version: v11
kubernetes.io/cluster-service: "true"
spec:
containers:
- name: etcd
image: gcr.io/google_containers/etcd-amd64:2.2.1
resources:
# TODO: Set memory limits when we've profiled the container for large
# clusters, then set request = limit to keep this container in
# guaranteed class. Currently, this container falls into the
# "burstable" category so the kubelet doesn't backoff from restarting it.
limits:
cpu: 100m
memory: 500Mi
requests:
cpu: 100m
memory: 50Mi
command:
- /usr/local/bin/etcd
- -data-dir
- /var/etcd/data
- -listen-client-urls
- http://127.0.0.1:2379,http://127.0.0.1:4001
- -advertise-client-urls
- http://127.0.0.1:2379,http://127.0.0.1:4001
- -initial-cluster-token
- skydns-etcd
volumeMounts:
- name: etcd-storage
mountPath: /var/etcd/data
- name: kube2sky
image: gcr.io/google_containers/kube2sky:1.14
resources:
# TODO: Set memory limits when we've profiled the container for large
# clusters, then set request = limit to keep this container in
# guaranteed class. Currently, this container falls into the
# "burstable" category so the kubelet doesn't backoff from restarting it.
limits:
cpu: 100m
# Kube2sky watches all pods.
memory: 200Mi
requests:
cpu: 100m
memory: 50Mi
livenessProbe:
httpGet:
path: /healthz
port: 8080
scheme: HTTP
initialDelaySeconds: 60
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 5
readinessProbe:
httpGet:
path: /readiness
port: 8081
scheme: HTTP
# we poll on pod startup for the Kubernetes master service and
# only setup the /readiness HTTP server once that's available.
initialDelaySeconds: 30
timeoutSeconds: 5
args:
# command = "/kube2sky"
- --domain="cluster.local"
- --kube-master-url=http://192.168.87.159:8080
- name: skydns
image: gcr.io/google_containers/skydns:2015-10-13-8c72f8c
resources:
# TODO: Set memory limits when we've profiled the container for large
# clusters, then set request = limit to keep this container in
# guaranteed class. Currently, this container falls into the
# "burstable" category so the kubelet doesn't backoff from restarting it.
limits:
cpu: 100m
memory: 200Mi
requests:
cpu: 100m
memory: 50Mi
args:
# command = "/skydns"
- -machines=http://127.0.0.1:4001
- -addr=0.0.0.0:53
- -ns-rotate=false
- -domain="cluster.local"
ports:
- containerPort: 53
name: dns
protocol: UDP
- containerPort: 53
name: dns-tcp
protocol: TCP
- name: healthz
image: gcr.io/google_containers/exechealthz:1.0
resources:
# keep request = limit to keep this container in guaranteed class
limits:
cpu: 10m
memory: 20Mi
requests:
cpu: 10m
memory: 20Mi
args:
- -cmd=nslookup kubernetes.default.svc.cluster.local 127.0.0.1 >/dev/null
- -port=8080
ports:
- containerPort: 8080
protocol: TCP
volumes:
- name: etcd-storage
emptyDir: {}
dnsPolicy: Default # Don't use cluster DNS.
</code></pre>
<p>and skydns-svc.yaml:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: kube-dns
namespace: kube-system
labels:
k8s-app: kube-dns
kubernetes.io/cluster-service: "true"
kubernetes.io/name: "KubeDNS"
spec:
selector:
k8s-app: kube-dns
clusterIP: "10.0.0.10"
ports:
- name: dns
port: 53
protocol: UDP
- name: dns-tcp
port: 53
protocol: TCP
</code></pre>
| <p>I just commented out the lines that contain the <code>successThreshold</code> and <code>failureThreshold</code> values in <code>skydns-rc.yaml</code>, then re-ran the kubectl commands:</p>
<pre><code>kubectl create -f skydns-rc.yaml
kubectl create -f skydns-svc.yaml
</code></pre>
|
<p>I have a Kubernetes cluster running on Azure. How can I access the cluster from a local kubectl command? I referred to <a href="http://kubernetes.io/docs/user-guide/sharing-clusters/" rel="noreferrer">here</a>, but on the Kubernetes master node there is no kube config file. Also, <code>kubectl config view</code> results in </p>
<pre><code>apiVersion: v1
clusters: []
contexts: []
current-context: ""
kind: Config
preferences: {}
users: []
</code></pre>
| <p>Found a way to access a remote Kubernetes cluster without SSH'ing to one of the nodes in the cluster. You need to edit the ~/.kube/config file as below:</p>
<pre><code>apiVersion: v1
clusters:
- cluster:
server: http://<master-ip>:<port>
name: test
contexts:
- context:
cluster: test
user: test
name: test
</code></pre>
<p>Then set context by executing:</p>
<pre><code>kubectl config use-context test
</code></pre>
<p>After this you should be able to interact with the cluster.</p>
<p>Note : To add certification and key use following link : <a href="http://kubernetes.io/docs/user-guide/kubeconfig-file/" rel="noreferrer">http://kubernetes.io/docs/user-guide/kubeconfig-file/</a></p>
<p>Alternately, you can also try following command</p>
<pre><code>kubectl config set-cluster test-cluster --server=http://<master-ip>:<port> --api-version=v1
kubectl config use-context test-cluster
</code></pre>
|
<p>I'm trying to get going with Kubernetes DaemonSets and not having any luck at all. I've searched for a solution to no avail. I'm hoping someone here can help out.</p>
<p>First, I've seen <a href="https://stackoverflow.com/questions/34818198/daemonset-doesnt-create-any-pods-v1-1-2">this ticket</a>. Restarting the controller manager doesn't appear to help. As you can see here, the other kube processes have all been started after the apiserver and the api server has '--runtime-config=extensions/v1beta1=true' set.</p>
<pre><code>kube 31398 1 0 08:54 ? 00:00:37 /usr/bin/kube-apiserver --logtostderr=true --v=0 --etcd_servers=http://dock-admin:2379 --address=0.0.0.0 --allow-privileged=false --portal_net=10.254.0.0/16 --admission_control=NamespaceAutoProvision,LimitRanger,ResourceQuota --runtime-config=extensions/v1beta1=true
kube 12976 1 0 09:49 ? 00:00:28 /usr/bin/kube-controller-manager --logtostderr=true --v=0 --master=http://127.0.0.1:8080 --cloud-provider=
kube 29489 1 0 11:34 ? 00:00:00 /usr/bin/kube-scheduler --logtostderr=true --v=0 --master=http://127.0.0.1:8080
</code></pre>
<p>However api-versions only shows version 1:</p>
<pre><code>$ kubectl api-versions
Available Server Api Versions: v1
</code></pre>
<p>Kubernetes version is 1.2:</p>
<pre><code>$ kubectl version
Client Version: version.Info{Major:"1", Minor:"2", GitVersion:"v1.2.0", GitCommit:"86327329213fed4af2661c5ae1e92f9956b24f55", GitTreeState:"clean"}
Server Version: version.Info{Major:"1", Minor:"2", GitVersion:"v1.2.0", GitCommit:"86327329213fed4af2661c5ae1e92f9956b24f55", GitTreeState:"clean"}
</code></pre>
<p>The DaemonSet has been created, but appears to have no pods scheduled (status.desiredNumberScheduled).</p>
<pre><code>$ kubectl get ds -o json
{
"kind": "List",
"apiVersion": "v1",
"metadata": {},
"items": [
{
"kind": "DaemonSet",
"apiVersion": "extensions/v1beta1",
"metadata": {
"name": "ds-test",
"namespace": "dvlp",
"selfLink": "/apis/extensions/v1beta1/namespaces/dvlp/daemonsets/ds-test",
"uid": "2d948b18-fa7b-11e5-8a55-00163e245587",
"resourceVersion": "2657499",
"generation": 1,
"creationTimestamp": "2016-04-04T15:37:45Z",
"labels": {
"app": "ds-test"
}
},
"spec": {
"selector": {
"app": "ds-test"
},
"template": {
"metadata": {
"creationTimestamp": null,
"labels": {
"app": "ds-test"
}
},
"spec": {
"containers": [
{
"name": "ds-test",
"image": "foo.vt.edu:1102/dbaa-app:v0.10-dvlp",
"ports": [
{
"containerPort": 8080,
"protocol": "TCP"
}
],
"resources": {},
"terminationMessagePath": "/dev/termination-log",
"imagePullPolicy": "IfNotPresent"
}
],
"restartPolicy": "Always",
"terminationGracePeriodSeconds": 30,
"dnsPolicy": "ClusterFirst",
"securityContext": {}
}
}
},
"status": {
"currentNumberScheduled": 0,
"numberMisscheduled": 0,
"desiredNumberScheduled": 0
}
}
]
}
</code></pre>
<p>Here is my yaml file to create the DaemonSet</p>
<pre><code>apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
name: ds-test
spec:
selector:
app: ds-test
template:
metadata:
labels:
app: ds-test
spec:
containers:
- name: ds-test
image: foo.vt.edu:1102/dbaa-app:v0.10-dvlp
ports:
- containerPort: 8080
</code></pre>
<p>Using that file to create the DaemonSet appears to work (I get 'daemonset "ds-test" created'), but no pods are created:</p>
<pre><code>$ kubectl get pods -o json
{
"kind": "List",
"apiVersion": "v1",
"metadata": {},
"items": []
}
</code></pre>
| <p>(I would have posted this as a comment, if I had enough reputation)</p>
<p>I am confused by your output.</p>
<p><code>kubectl api-versions</code> should print out <code>extensions/v1beta1</code> if it is enabled on the server. Since it does not, it looks like extensions/v1beta1 is not enabled.</p>
<p>But <code>kubectl get ds</code> should fail if extensions/v1beta1 is not enabled. So I can not figure out if extensions/v1beta1 is enabled on your server or not. </p>
<p>Can you try GET <code>masterIP/apis</code> and see if extensions is listed there?
You can also go to <code>masterIP/apis/extensions/v1beta1</code> and see if daemonsets is listed there.</p>
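<p>For example, something like this against your apiserver (host and port taken from your process list above):</p>
<pre><code>curl http://<master-ip>:8080/apis
curl http://<master-ip>:8080/apis/extensions/v1beta1
</code></pre>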
<p>Also, I see <code>kubectl version</code> says 1.2, but then <code>kubectl api-versions</code> should not print out the string <code>Available Server Api Versions</code> (that string was removed in 1.1: <a href="https://github.com/kubernetes/kubernetes/pull/15796" rel="noreferrer">https://github.com/kubernetes/kubernetes/pull/15796</a>).</p>
|
<p>To simplify <strong>deployment</strong> and short term roll-back, it's useful to use a new Docker image tag for each new version to deploy on <strong>Kubernetes</strong>. Without clean-up this means that old images:tags are kept forever.</p>
<p>How can I <strong>list all image:tag that are used by a Kubernetes container</strong> so that I can find all old image:tag that are old and not used to delete them automatically from the <strong>Docker Registry</strong>?</p>
<p>My goal is ideally for <em>Google Container Engine</em> (GKE) to delete unused images a <em>Google Container Registry</em>.</p>
| <p>As an alternative approach, you might consider just letting Kubernetes handle reclamation of old images for you.</p>
<p>Presently, the ImageManager handles reclamation of candidate images. See: <a href="http://kubernetes.io/docs/admin/garbage-collection/" rel="noreferrer">Garbage Collection</a></p>
<blockquote>
<p>Garbage collection is a helpful function of kubelet that will clean up
unreferenced images and unused containers. kubelet will perform
garbage collection for containers every minute and garbage collection
for images every five minutes.</p>
</blockquote>
<p>Configuration is controlled via these two <a href="http://kubernetes.io/docs/admin/kubelet/" rel="noreferrer">kubelet CLI parameters</a>:</p>
<pre><code> --image-gc-high-threshold=90: The percent of disk usage after which image garbage collection is always run. Default: 90%
--image-gc-low-threshold=80: The percent of disk usage before which image garbage collection is never run. Lowest disk usage to garbage collect to. Default: 80%
</code></pre>
<p>The high/low thresholds could be tuned to force collection at an interval that works for you.</p>
|
<p>Our current CI deployment phase works like this:</p>
<ol>
<li>Build the containers.</li>
<li>Tag the images as <code>"latest"</code> and <code>< commit hash ></code>.</li>
<li>Push images to repository.</li>
<li>Invoke rolling update on appropriate RC(s).</li>
</ol>
<p>This has been working great for RC based deployments, but now that the <code>Deployment</code> object is becoming more stable and an underlying feature, we want to take advantage of this abstraction over our current deployment schemes and development phases.</p>
<p>What I'm having trouble with is finding a sane way to automate the update of a <code>Deployment</code> with the CI workflow. What I've been experimenting with is splitting up the git repo's and doing something like:</p>
<ol>
<li>[App Build] Build the containers.</li>
<li>[App Build] Tag the images as <code>"latest"</code> and <code>< commit hash ></code>.</li>
<li>[App Build] Push images to repository.</li>
<li>[App Build] Invoke build of the app's <code>Deployment</code> repo, passing through the current commit hash.</li>
<li>[Deployment Build] Interpolate manifest file tokens (currently just the passed commit hash e.g. <code>image: app-%%COMMIT_HASH%%</code>)</li>
<li>[Deployment Build] Apply the updated manifest to the appropriate <code>Deployment</code> resource(s).</li>
</ol>
<p>Surely though there's a better way to handle this. It would be great if the <code>Deployment</code> monitored for hash changes of the image's "latest" tag...maybe it already does? I haven't had success with this. Any thoughts or insights on how to better handle the deployment of <code>Deployment</code> would be appreciated :)</p>
| <p>The <code>Deployment</code> only monitors for pod template (<code>.spec.template</code>) changes. If the image name didn't change, the <code>Deployment</code> won't do the update. You can trigger the rolling update (with <code>Deployment</code>s) by changing the pod template, for example, by labeling it with the commit hash. Also, you'll need to set <code>.spec.template.spec.containers.imagePullPolicy</code> to <code>Always</code> (it's set to <code>Always</code> by default if the <code>:latest</code> tag is specified and cannot be updated), otherwise the image will be reused. </p>
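<p>As a sketch, one way to do that from a CI step (the deployment name and label key here are placeholders):</p>
<pre><code># any change to .spec.template triggers a new rollout,
# e.g. stamping the commit hash into a pod-template label
kubectl patch deployment my-app -p \
  '{"spec":{"template":{"metadata":{"labels":{"commit":"<commit-hash>"}}}}}'
</code></pre>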
|
<p><strong>Environment</strong></p>
<pre><code>Mesos: 0.26
Kubernetes: 1.3.0
</code></pre>
<p>Anyone out there using Kubernetes-Mesos framework?</p>
<p>Kubernetes-Mesos Question: Does current Kubernetes-Mesos support Kubernetes in HA configuration ? Having multiple Kubernetes masters talking to leader.mesos ? I tried to use it but I've got the following error on my secondary kubernetes master (trying to start <code>km scheduler</code>)</p>
<p><code>"mesos-master[25014]: I0405 09:54:07.523236 25020 master.cpp:2324] Framework a979cde6-aa86-4286-b07f-e83e9ae4076e-0005 (Kubernetes) at scheduler(1)@10.9.158.237:42819 failed over"</code></p>
| <p>It is not supported; only one k8sm scheduler should talk to the Mesos master. One option is to use Marathon to manage the k8sm-xxx daemons; it will restart the k8sm master for failover.</p>
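<p>A rough sketch of a Marathon app definition for this (the command is a placeholder for however you normally start the scheduler, and the resource values are arbitrary):</p>
<pre><code>{
  "id": "k8sm-scheduler",
  "cmd": "<your usual km scheduler command and flags>",
  "instances": 1,
  "cpus": 0.5,
  "mem": 512
}
</code></pre>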
|
<p>Maybe my question does not make sense, but this is what I'm trying to do:</p>
<ul>
<li>I have a running Kubernetes cluster running on CoreOS on bare metal. </li>
<li>I am trying to mount block storage from an OpenStack cloud provider with Cinder.</li>
</ul>
<p>From my readings, to be able to connect to the block storage provider, I need <code>kubelet</code> to be configured with <code>cloud-provider=openstack</code>, and use a <code>cloud.conf</code> file for the configuration of credentials.</p>
<p>I did that and the auth part seems to work fine (i.e. I successfully connect to the cloud provider), however <code>kubelet</code> then complains that it cannot find my node on the <code>openstack</code> provider.</p>
<p>I get: </p>
<p><code>Unable to construct api.Node object for kubelet: failed to get external ID from cloud provider: Failed to find object</code></p>
<p>This is similar to this question:</p>
<p><a href="https://stackoverflow.com/questions/32882348/unable-to-construct-api-node-object-for-kubelet-failed-to-get-external-id-from">Unable to construct api.Node object for kubelet: failed to get external ID from cloud provider: Failed to find object</a></p>
<p>However, I know <code>kubelet</code> will not find my node at the OpenStack provider since it is not hosted there! The error makes sense, but how do I avoid it? </p>
<p>In short, how do I tell <code>kubelet</code> not to look for my node there, as I only need it to look up the storage block to mount it?</p>
<p>Is it even possible to mount block storage this way? Am I misunderstanding how this works?</p>
| <p>There seem to be new ways to attach Cinder storage to bare metal, but it's apparently just a PoC:</p>
<p><a href="http://blog.e0ne.info/post/Attach-Cinder-Volume-to-the-Ironic-Instance-without-Nova.aspx" rel="nofollow">http://blog.e0ne.info/post/Attach-Cinder-Volume-to-the-Ironic-Instance-without-Nova.aspx</a></p>
|
<p>I've been trying to get help on <a href="https://stackoverflow.com/questions/36322006/kubernetes-using-openstack-cinder-from-one-cloud-provider-while-nodes-on-anothe">a Kubernetes question</a> and I don't get answers, and one suggestion was to ask on the Kubernetes Slack channel; however, it seems to be invite-only or for google, intel, coreos and redhat email addresses.</p>
<p>So, how am I supposed to get an invite to the channel?
The 'get my invite' option claims they sent me the invite, yet I didn't receive it and there is no option to resend it.</p>
<p>This question is a real question, and it is also meant to attract attention from the Kubernetes team to answer the related question. Since Kubernetes uses Stack Overflow as their support Q&A system and redirects Github questions to SO, I believe it is fair to try to get their attention here.</p>
| <p><a href="http://slack.kubernetes.io" rel="noreferrer">http://slack.kubernetes.io</a> is indeed the way to get yourself invited.</p>
<p>It sounds like there were some issue this morning (perhaps with Slack?), but the invites seem to be working now. See <a href="https://github.com/kubernetes/kubernetes/issues/23823" rel="noreferrer">https://github.com/kubernetes/kubernetes/issues/23823</a>. It's possible you're seeing the same issue (or that you are the same person :) ). Let me know if there is still a problem.</p>
|
<p>I have a few applications that run on regular Compute Engine nodes. In addition I have a Container Cluster that I am migrating applications to. Sooner or later all apps should be in Container Engine, so service discovery is straightforward.
But for now the apps on Compute Engine need to be able to talk to the Container Engine apps. The Container Engine apps are all registered as a service.
For the sake of testing I used the "echoheaders" image:</p>
<pre><code>$ kubectl describe svc echoheaders
Name: echoheaders
Namespace: default
Labels: app=echoheaders
Selector: app=echoheaders
Type: ClusterIP
IP: 10.115.249.140
Port: http 80/TCP
Endpoints: 10.112.1.3:8080
Session Affinity: None
No events.
</code></pre>
<p>The issue now is that I can only access the pod service from the Compute Engine node directly via 10.112.1.3:8080 but not via its clusterip 10.115.249.140:80. That only works from within the actual Compute Engine nodes.</p>
<p>I already tried to create a bastion route pointing to one of the Container Engine nodes but it still doesn't work:</p>
<pre><code>$ gcloud compute routes describe gke-cluster-1-services
creationTimestamp: '2016-04-05T05:39:55.275-07:00'
description: Route to Cluster-1 service IP range
destRange: 10.115.240.0/20
id: '926323215677918452'
kind: compute#route
name: gke-cluster-1-services
network: https://www.googleapis.com/compute/v1/projects/infrastructure-1173/global/networks/infra
nextHopInstance: https://www.googleapis.com/compute/v1/projects/infrastructure-1173/zones/europe-west1-d/instances/gke-cluster-1-5679a61a-node-f7iu
priority: 500
selfLink: https://www.googleapis.com/compute/v1/projects/infrastructure-1173/global/routes/gke-cluster-1-services
</code></pre>
<p>And on the firewall the Compute Node can connect to any.</p>
<p>Does anybody happen to have pointers on what could be missing to allow the Compute Engine nodes to access the Container Engine services by their ClusterIPs?</p>
<p>Thanks</p>
| <p>Kubernetes expects anything within the cluster to be able to talk with everything else. GKE accomplishes this with <a href="http://kubernetes.io/docs/admin/networking/#google-compute-engine-gce" rel="nofollow">advanced routing</a>. By default, this lets GKE containers and GCE nodes on the same network communicate. This is why you could hit your containers directly.</p>
<p>A ClusterIP is only reachable within the Kubernetes cluster. These IPs are managed by iptables on just Kubernetes nodes. This is why you can't hit your service from the GCE nodes, but you can hit it from your containers.</p>
<p>Bastion routes send all traffic destined for the cluster's subnet to a cluster node. The node then routes the flow correctly. Create multiple bastion routes to multiple nodes at the same priority to avoid hotspotting a single node. </p>
<p>Try using the cluster's full /14, which you can find under the cluster's description in the container engine UI.</p>
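<p>For example, an additional bastion route might look like this (the destination range and node are placeholders; create several of these pointing at different nodes):</p>
<pre><code>gcloud compute routes create gke-cluster-1-services-2 \
  --network=infra \
  --destination-range=<cluster /14 CIDR> \
  --next-hop-instance=<another-cluster-node> \
  --next-hop-instance-zone=europe-west1-d \
  --priority=500
</code></pre>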
|
<p>I'm a bit confused about how to set up error reporting in Kubernetes, so that errors are visible in Google Cloud Console / Stackdriver "Error Reporting".</p>
<p>According to documentation
<a href="https://cloud.google.com/error-reporting/docs/setting-up-on-compute-engine" rel="noreferrer">https://cloud.google.com/error-reporting/docs/setting-up-on-compute-engine</a>
we need to enable fluentd's "forward input plugin" and then send exception data from our apps. I think this approach would have worked if we had set up fluentd ourselves, but it's already pre-installed on every node in a pod that just runs the gcr.io/google_containers/fluentd-gcp docker image.</p>
<p>How do we enable forward input on those pods and make sure that the HTTP port is available to every pod on the nodes? We also need to make sure this config is used by default when we add more nodes to our cluster.</p>
<p>Any help would be appreciated, may be I'm looking at all this from a wrong point?</p>
| <p>The basic idea is to start a separate pod that receives structured logs over TCP and forwards it to Cloud Logging, similar to a locally-running fluentd agent. See below for the steps I used.</p>
<p>(Unfortunately, the logging support that is built into Docker and Kubernetes cannot be used - it just forwards individual lines of text from stdout/stderr as separate log entries which prevents Error Reporting from seeing complete stack traces.)</p>
<p>Create a docker image for a fluentd forwarder using a <code>Dockerfile</code> as follows:</p>
<pre><code>FROM gcr.io/google_containers/fluentd-gcp:1.18
COPY fluentd-forwarder.conf /etc/google-fluentd/google-fluentd.conf
</code></pre>
<p>Where <code>fluentd-forwarder.conf</code> contains the following:</p>
<pre><code><source>
type forward
port 24224
</source>
<match **>
type google_cloud
buffer_chunk_limit 2M
buffer_queue_limit 24
flush_interval 5s
max_retry_wait 30
disable_retry_limit
</match>
</code></pre>
<p>Then build and push the image:</p>
<pre><code>$ docker build -t gcr.io/###your project id###/fluentd-forwarder:v1 .
$ gcloud docker push gcr.io/###your project id###/fluentd-forwarder:v1
</code></pre>
<p>You need a replication controller (<code>fluentd-forwarder-controller.yaml</code>):</p>
<pre><code>apiVersion: v1
kind: ReplicationController
metadata:
name: fluentd-forwarder
spec:
replicas: 1
template:
metadata:
name: fluentd-forwarder
labels:
app: fluentd-forwarder
spec:
containers:
- name: fluentd-forwarder
image: gcr.io/###your project id###/fluentd-forwarder:v1
env:
- name: FLUENTD_ARGS
value: -qq
ports:
- containerPort: 24224
</code></pre>
<p>You also need a service (<code>fluentd-forwarder-service.yaml</code>):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: fluentd-forwarder
spec:
selector:
app: fluentd-forwarder
ports:
- protocol: TCP
port: 24224
</code></pre>
<p>Then create the replication controller and service:</p>
<pre><code>$ kubectl create -f fluentd-forwarder-controller.yaml
$ kubectl create -f fluentd-forwarder-service.yaml
</code></pre>
<p>Finally, in your application, instead of using 'localhost' and 24224 to connect to the fluentd agent as described on <a href="https://cloud.google.com/error-reporting/docs/setting-up-on-compute-engine">https://cloud.google.com/error-reporting/docs/setting-up-on-compute-engine</a>, use the values of evironment variables <code>FLUENTD_FORWARDER_SERVICE_HOST</code> and <code>FLUENTD_FORWARDER_SERVICE_PORT</code>.</p>
|
<p>Situation: lots of heavy docker containers that get hit periodically for a while, then stay unused for a longer period.</p>
<p>Wish: start the containers on demand (like systemd starts things through socket activation) and stop them after idling for a given period. No <strong>visible</strong> downtime to the end-user.</p>
<p>Options:</p>
<ul>
<li>Kubernetes has resource controllers which can scale replicas. I suppose it would be possible to keep the number of replicas on 0 and set it to 1 when needed, but how can one achieve that? The <a href="http://kubernetes.io/docs/user-guide/replication-controller/#scaling" rel="nofollow noreferrer">user guide</a> says there is something called an <em>auto-scaling control agent</em> but I don't see any further information on this. Is there a pluggable, programmable agent one can use to track requests and scale based on user-defined logic?</li>
<li>I don't see any solution in Docker Swarm, correct me if I'm wrong though.</li>
<li>Use a custom http server written in chosen language that will have access to the docker daemon. Before routing to correct place it would check for existence of container and ensure it is running. Downside - not a general solution, has to <strong>not</strong> be a container or have access to the daemon.</li>
<li>Use systemd as described <a href="https://developer.atlassian.com/blog/2015/03/docker-systemd-socket-activation/" rel="nofollow noreferrer">here</a>. Same downsides as above, i.e. not general and one has to handle networking tasks themselves (like finding the IP of the spawned container and feeding it into the server/proxy's configuration).</li>
</ul>
<p>Any ideas appreciated!</p>
| <p>You could use Kubernetes' built-in <a href="http://kubernetes.io/docs/user-guide/horizontal-pod-autoscaling/" rel="noreferrer">Horizonal Pod Autoscaling (HPA)</a> to scale up from 1 instance of each container to as many are needed to handle the load, but there's no built-in functionality for 0-to-1 scaling on receiving a request, and I'm not aware of any widely used solution.</p>
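<p>For the 1-to-N part, a minimal sketch with <code>kubectl autoscale</code> (the deployment name and thresholds are placeholders):</p>
<pre><code>kubectl autoscale deployment my-heavy-app --min=1 --max=10 --cpu-percent=80
</code></pre>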
|
<p>If I am running a container in privileged mode, does it have all the Kernel capabilities or do I need to add them separately?</p>
| <p>Running in privileged mode indeed gives the container all capabilities.
But it is good practice to always give a container the minimum requirements it needs.</p>
<p>The <a href="https://github.com/docker/cli/blob/c780f7c4abaf67034ecfaa0611e03695cf9e4a3e/docs/reference/commandline/run.md" rel="noreferrer">Docker run command documentation</a> refers to this flag:</p>
<blockquote>
<p>Full container capabilities (--privileged)</p>
<p>The --privileged flag gives all capabilities to the container, and it also lifts all the limitations enforced by the device cgroup controller. In other words, the container can then do almost everything that the host can do. This flag exists to allow special use-cases, like running Docker within Docker.</p>
</blockquote>
<p>You can give specific capabilities using <code>--cap-add</code> flag. See <a href="http://linux.die.net/man/7/capabilities" rel="noreferrer" title="man 7 capabilities"><code>man 7 capabilities</code></a> for more info on those capabilities. The literal names can be used, e.g. <code>--cap-add CAP_FOWNER</code>.</p>
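<p>For example, instead of <code>--privileged</code> you might grant just what the workload needs (the capability and image here are only illustrative):</p>
<pre><code>docker run --cap-add NET_ADMIN --cap-drop MKNOD my-image
</code></pre>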
|
<p>I've set up a Kubernetes cluster with three nodes. I get all my nodes in Ready status, but the scheduler seems not to find one of them. How could this happen?</p>
<pre><code>[root@master1 app]# kubectl get nodes
NAME LABELS STATUS AGE
172.16.0.44 kubernetes.io/hostname=172.16.0.44,pxc=node1 Ready 8d
172.16.0.45 kubernetes.io/hostname=172.16.0.45 Ready 8d
172.16.0.46 kubernetes.io/hostname=172.16.0.46 Ready 8d
</code></pre>
<p>I use nodeSelect in my RC file like thie:</p>
<pre><code> nodeSelector:
pxc: node1
</code></pre>
<p>describe the rc:</p>
<pre><code>Name: mongo-controller
Namespace: kube-system
Image(s): mongo
Selector: k8s-app=mongo
Labels: k8s-app=mongo
Replicas: 1 current / 1 desired
Pods Status: 0 Running / 1 Waiting / 0 Succeeded / 0 Failed
Volumes:
mongo-persistent-storage:
Type: HostPath (bare host directory volume)
Path: /k8s/mongodb
Events:
FirstSeen LastSeen Count From SubobjectPath Reason Message
  ───────── ──────── ───── ──── ───────────── ────── ───────
25m 25m 1 {replication-controller } SuccessfulCreate Created pod: mongo-controller-0wpwu
</code></pre>
<p>get pods shows the pod is pending:</p>
<pre><code>[root@master1 app]# kubectl get pods mongo-controller-0wpwu --namespace=kube-system
NAME READY STATUS RESTARTS AGE
mongo-controller-0wpwu 0/1 Pending 0 27m
</code></pre>
<p>describe pod mongo-controller-0wpwu:</p>
<pre><code>[root@master1 app]# kubectl describe pod mongo-controller-0wpwu --namespace=kube-system
Name: mongo-controller-0wpwu
Namespace: kube-system
Image(s): mongo
Node: /
Labels: k8s-app=mongo
Status: Pending
Reason:
Message:
IP:
Replication Controllers: mongo-controller (1/1 replicas created)
Containers:
mongo:
Container ID:
Image: mongo
Image ID:
QoS Tier:
cpu: BestEffort
memory: BestEffort
State: Waiting
Ready: False
Restart Count: 0
Environment Variables:
Volumes:
mongo-persistent-storage:
Type: HostPath (bare host directory volume)
Path: /k8s/mongodb
default-token-7qjcu:
Type: Secret (a secret that should populate this volume)
SecretName: default-token-7qjcu
Events:
FirstSeen LastSeen Count From SubobjectPath Reason Message
  ───────── ──────── ───── ──── ───────────── ────── ───────
22m 37s 12 {default-scheduler } FailedScheduling pod (mongo-controller-0wpwu) failed to fit in any node
fit failure on node (172.16.0.46): MatchNodeSelector
fit failure on node (172.16.0.45): MatchNodeSelector
27m 9s 67 {default-scheduler } FailedScheduling pod (mongo-controller-0wpwu) failed to fit in any node
fit failure on node (172.16.0.45): MatchNodeSelector
fit failure on node (172.16.0.46): MatchNodeSelector
</code></pre>
<p>Looking at the IP list in the events, 172.16.0.44 does not seem to be seen by the scheduler. How could this happen?</p>
<p>describe the node 172.16.0.44</p>
<pre><code>[root@master1 app]# kubectl describe nodes --namespace=kube-system
Name: 172.16.0.44
Labels: kubernetes.io/hostname=172.16.0.44,pxc=node1
CreationTimestamp: Wed, 30 Mar 2016 15:58:47 +0800
Phase:
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
  ──── ────── ───────────────── ────────────────── ────── ───────
Ready True Fri, 08 Apr 2016 12:18:01 +0800 Fri, 08 Apr 2016 11:18:52 +0800 KubeletReady kubelet is posting ready status
OutOfDisk Unknown Wed, 30 Mar 2016 15:58:47 +0800 Thu, 07 Apr 2016 17:38:50 +0800 NodeStatusNeverUpdated Kubelet never posted node status.
Addresses: 172.16.0.44,172.16.0.44
Capacity:
cpu: 2
memory: 7748948Ki
pods: 40
System Info:
Machine ID: 45461f76679f48ee96e95da6cc798cc8
System UUID: 2B850D4F-953C-4C20-B182-66E17D5F6461
Boot ID: 40d2cd8d-2e46-4fef-92e1-5fba60f57965
Kernel Version: 3.10.0-123.9.3.el7.x86_64
OS Image: CentOS Linux 7 (Core)
Container Runtime Version: docker://1.10.1
Kubelet Version: v1.2.0
Kube-Proxy Version: v1.2.0
ExternalID: 172.16.0.44
Non-terminated Pods: (1 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits
  ───────── ──── ──────────── ────────── ─────────────── ─────────────
kube-system kube-registry-proxy-172.16.0.44 100m (5%) 100m (5%) 50Mi (0%) 50Mi (0%)
Allocated resources:
(Total limits may be over 100%, i.e., overcommitted. More info: http://releases.k8s.io/HEAD/docs/user-guide/compute-resources.md)
CPU Requests CPU Limits Memory Requests Memory Limits
  ──────────── ────────── ─────────────── ─────────────
100m (5%) 100m (5%) 50Mi (0%) 50Mi (0%)
Events:
FirstSeen LastSeen Count From SubobjectPath Reason Message
  ───────── ──────── ───── ──── ───────────── ────── ───────
59m 59m 1 {kubelet 172.16.0.44} Starting Starting kubelet.
</code></pre>
<p>After SSH login to .44, I can see the disk space is free (I also removed some docker images and containers):</p>
<pre><code>[root@iZ25dqhvvd0Z ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/xvda1 40G 2.6G 35G 7% /
devtmpfs 3.9G 0 3.9G 0% /dev
tmpfs 3.7G 0 3.7G 0% /dev/shm
tmpfs 3.7G 143M 3.6G 4% /run
tmpfs 3.7G 0 3.7G 0% /sys/fs/cgroup
/dev/xvdb 40G 361M 37G 1% /k8s
</code></pre>
<p>The docker logs of the scheduler (v1.3.0-alpha.1) still show this:</p>
<pre><code>E0408 05:28:42.679448 1 factory.go:387] Error scheduling kube-system mongo-controller-0wpwu: pod (mongo-controller-0wpwu) failed to fit in any node
fit failure on node (172.16.0.45): MatchNodeSelector
fit failure on node (172.16.0.46): MatchNodeSelector
; retrying
I0408 05:28:42.679577 1 event.go:216] Event(api.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"mongo-controller-0wpwu", UID:"2d0f0844-fd3c-11e5-b531-00163e000727", APIVersion:"v1", ResourceVersion:"634139", FieldPath:""}): type: 'Warning' reason: 'FailedScheduling' pod (mongo-controller-0wpwu) failed to fit in any node
fit failure on node (172.16.0.45): MatchNodeSelector
fit failure on node (172.16.0.46): MatchNodeSelector
</code></pre>
| <p>Thanks for your reply Robert. I got this resolved by doing the following:</p>
<pre><code>kubectl delete rc
kubectl delete node 172.16.0.44
stop kubelet in 172.16.0.44
rm -rf /k8s/*
restart kubelet
</code></pre>
<p>Now the node is ready, and out of disk is gone.</p>
<pre><code>Name: 172.16.0.44
Labels: kubernetes.io/hostname=172.16.0.44,pxc=node1
CreationTimestamp: Fri, 08 Apr 2016 15:14:51 +0800
Phase:
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
  ──── ────── ───────────────── ────────────────── ────── ───────
Ready True Fri, 08 Apr 2016 15:25:33 +0800 Fri, 08 Apr 2016 15:14:50 +0800 KubeletReady kubelet is posting ready status
Addresses: 172.16.0.44,172.16.0.44
Capacity:
cpu: 2
memory: 7748948Ki
pods: 40
System Info:
Machine ID: 45461f76679f48ee96e95da6cc798cc8
System UUID: 2B850D4F-953C-4C20-B182-66E17D5F6461
Boot ID: 40d2cd8d-2e46-4fef-92e1-5fba60f57965
Kernel Version: 3.10.0-123.9.3.el7.x86_64
OS Image: CentOS Linux 7 (Core)
</code></pre>
<p>I found this <a href="https://github.com/kubernetes/kubernetes/issues/4135" rel="nofollow">https://github.com/kubernetes/kubernetes/issues/4135</a>, but still don't know why my disk space is free and kubelet thinks it is out of disk...</p>
|
<p>The cassandra Filesystem is on a glusterFS, after scaling the number of pods to zero, and back up to 3, the data is not loading up into cassandra.</p>
<p>Is there a way to recover it?</p>
<pre><code>INFO 17:00:52 reading saved cache /cassandra_data/saved_caches/KeyCache-d.db
INFO 17:00:52 Harmless error reading saved cache /cassandra_data/saved_caches/KeyCache-d.db
java.lang.RuntimeException: Cache schema version c2a2bb4f-7d31-3fb8-a216-00b41a643650 does not match current schema version 59adb24e-f3cd-3e02-97f0-5b395827453f
at org.apache.cassandra.cache.AutoSavingCache.loadSaved(AutoSavingCache.java:198) ~[apache-cassandra-3.3.jar:3.3]
at org.apache.cassandra.cache.AutoSavingCache$3.call(AutoSavingCache.java:157) [apache-cassandra-3.3.jar:3.3]
at org.apache.cassandra.cache.AutoSavingCache$3.call(AutoSavingCache.java:153) [apache-cassandra-3.3.jar:3.3]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) [na:1.8.0_77]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_77]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_77]
</code></pre>
| <p>The log you're showing is an <strong>harmless error</strong>. Basically it is just saying that the cache file is no longer in sync.</p>
<blockquote>
<p>INFO 17:00:52 <strong>Harmless error</strong> reading saved cache /cassandra_data/saved_caches/KeyCache-d.db</p>
</blockquote>
|
<p>So I have a Kubernetes cluster, and I am using Flannel for an overlay network. It has been working fine (for almost a year actually). Then I modified a service to have 2 ports, and all of a sudden I get this about a completely different service, one that was working previously and that I did not edit:</p>
<pre><code><Timestamp> <host> flanneld[873]: I0407 18:36:51.705743 00873 vxlan.go:345] L3 miss: <Service's IP>
<Timestamp> <host> flanneld[873]: I0407 18:36:51.705865 00873 vxlan.go:349] Route for <Service's IP> not found
</code></pre>
<p>Is there a common cause to this? I am using Kubernetes 1.0.X and Flannel 0.5.5 and I should mention only one node is having this issue, the rest of the nodes are fine. The bad node's kube-proxy is also saying it can't find the service's endpoint.</p>
| <p>Sometimes flannel will change its subnet configuration... you can tell this if the IP and MTU from <code>cat /run/flannel/subnet.env</code> don't match <code>ps aux | grep docker</code> (or <code>cat /etc/default/docker</code>)... in which case you will need to reconfigure docker to use the new flannel config.</p>
<p>First you have to delete the docker network interface</p>
<pre><code>sudo ip link set dev docker0 down
sudo brctl delbr docker0
</code></pre>
<p>Next you have to reconfigure docker to use the new flannel config.<br>
<em>Note: sometimes this step has to be done manually (i.e. read the contents of /run/flannel/subnet.env and then alter <code>/etc/default/docker</code>)</em></p>
<pre><code>source /run/flannel/subnet.env
echo DOCKER_OPTS=\"-H tcp://127.0.0.1:4243 -H unix:///var/run/docker.sock --bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}\" > /etc/default/docker
</code></pre>
<p>Finally, restart docker</p>
<pre><code>sudo service docker restart
</code></pre>
|
<p>I've done quite a bit of research and have yet to find an answer to this. Here's what I'm trying to accomplish:</p>
<ul>
<li>I have an ELK stack container running in a pod on a k8s cluster in GCE - the cluster also contains a PersistentVolume (format: ext4) and a PersistentVolumeClaim.</li>
<li>In order to scale the ELK stack to multiple pods/nodes and keep persistent data in ElasticSearch, I either need to have all pods write to the same PV (using the node/index structure of the ES file system), or have some volume logic to scale up/create these PVs/PVCs.</li>
<li>Currently what happens is if I spin up a second pod on the replication controller, it can't mount the PV.</li>
</ul>
<p>So I'm wondering if I'm going about this the wrong way, and what is the best way to architect this solution to allow for persistent data in ES when my cluster/nodes autoscale.</p>
| <p>Persistent Volumes have access semantics. On GCE I'm assuming you are using a Persistent Disk, which can either be mounted as writable to a single pod or to multiple pods as read-only. If you want multi-writer semantics, you need to set up NFS or some other storage that lets you write from multiple pods. </p>
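<p>As a sketch of the read-only case, a pre-created disk (the name <code>my-data-disk</code> is a placeholder) can be mounted by many pods as long as every consumer sets <code>readOnly: true</code>:</p>
<pre><code>volumes:
  - name: shared-data
    gcePersistentDisk:
      pdName: my-data-disk
      fsType: ext4
      readOnly: true
</code></pre>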
<p>In case you are interested in running NFS - <a href="https://github.com/kubernetes/kubernetes/blob/release-1.2/examples/nfs/README.md" rel="nofollow">https://github.com/kubernetes/kubernetes/blob/release-1.2/examples/nfs/README.md</a></p>
<p>FYI: We are still working on supporting auto-provisioning of PVs as you scale your deployment. As of now it is a manual process.</p>
|
<p>Sorry for the noob question but from <a href="https://github.com/kubernetes/kubernetes/blob/release-1.1/docs/getting-started-guides/logging-elasticsearch.md" rel="noreferrer">https://github.com/kubernetes/kubernetes/blob/release-1.1/docs/getting-started-guides/logging-elasticsearch.md</a>
it says: </p>
<blockquote>
<p>To use Elasticsearch and Kibana for cluster logging you should set the
following environment variable as shown below:</p>
</blockquote>
<pre><code>KUBE_LOGGING_DESTINATION=elasticsearch
</code></pre>
<p>Where and how do I set this Env Var ? I was thinking that I should use </p>
<pre><code>gcloud container clusters create
</code></pre>
<p>and pass the options there but there is no options...</p>
| <p>As already mentioned in Robert's answer the Elasticsearch/Kibana stack needs to be added manually if the cluster is supposed to run on Google Container Engine (GKE). Using the information given in this <a href="https://groups.google.com/forum/#!topic/google-containers/Q1nvl8IAbqc" rel="noreferrer">post</a>, I was able to get it to work performing the following steps:</p>
<ol>
<li><p>Start a GKE Cluster without cloud logging</p>
<pre><code>gcloud container --project <PROJECT_ID> clusters create <CLUSTER_ID> --no-enable-cloud-logging
</code></pre></li>
<li><p>Add a configured fluentd container to each running node by using a kubernetes DaemonSet.</p>
<pre><code>kubectl create -f fluentd-es.yaml
</code></pre>
<p><strong>fluentd-es.yaml</strong></p>
<pre><code>apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
name: fluentd-elasticsearch
namespace: kube-system
labels:
app: fluentd-logging
spec:
template:
metadata:
labels:
app: fluentd-es
spec:
containers:
- name: fluentd-elasticsearch
image: gcr.io/google_containers/fluentd-elasticsearch:1.15
resources:
limits:
memory: 200Mi
requests:
cpu: 100m
memory: 200Mi
volumeMounts:
- name: varlog
mountPath: /var/log
- name: varlibdockercontainers
mountPath: /var/lib/docker/containers
readOnly: true
volumes:
- name: varlog
hostPath:
path: /var/log
- name: varlibdockercontainers
hostPath:
path: /var/lib/docker/containers
</code></pre></li>
<li><p>Add elasticsearch and kibana pods and services.</p>
<pre><code>kubectl create -f es-controller.yaml
kubectl create -f es-service.yaml
kubectl create -f kibana-controller.yaml
kubectl create -f kibana-service.yaml
</code></pre>
<p><em>Note below that the <code>kubernetes.io/cluster-service: "true"</code> label (present
in the original <a href="https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/fluentd-elasticsearch" rel="noreferrer">files</a>) has been removed.
Having this label in the definitions resulted in termination of the running pods.</em> </p>
<p><strong>es-controller.yaml</strong></p>
<pre><code>apiVersion: v1
kind: ReplicationController
metadata:
name: elasticsearch-logging-v1
namespace: kube-system
labels:
k8s-app: elasticsearch-logging
version: v1
spec:
replicas: 2
selector:
k8s-app: elasticsearch-logging
version: v1
template:
metadata:
labels:
k8s-app: elasticsearch-logging
version: v1
kubernetes.io/cluster-service: "true"
spec:
containers:
- image: gcr.io/google_containers/elasticsearch:1.8
name: elasticsearch-logging
resources:
limits:
cpu: 100m
requests:
cpu: 100m
ports:
- containerPort: 9200
name: db
protocol: TCP
- containerPort: 9300
name: transport
protocol: TCP
volumeMounts:
- name: es-persistent-storage
mountPath: /data
volumes:
- name: es-persistent-storage
emptyDir: {}
</code></pre>
<p><strong>es-service.yaml</strong></p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: elasticsearch-logging
namespace: kube-system
labels:
k8s-app: elasticsearch-logging
kubernetes.io/name: "Elasticsearch"
spec:
ports:
- port: 9200
protocol: TCP
targetPort: db
selector:
k8s-app: elasticsearch-logging
</code></pre>
<p><strong>kibana-controller.yaml</strong></p>
<pre><code>apiVersion: v1
kind: ReplicationController
metadata:
name: kibana-logging-v1
namespace: kube-system
labels:
k8s-app: kibana-logging
version: v1
spec:
replicas: 1
selector:
k8s-app: kibana-logging
version: v1
template:
metadata:
labels:
k8s-app: kibana-logging
version: v1
kubernetes.io/cluster-service: "true"
spec:
containers:
- name: kibana-logging
image: gcr.io/google_containers/kibana:1.3
resources:
limits:
cpu: 100m
requests:
cpu: 100m
env:
- name: "ELASTICSEARCH_URL"
value: "http://elasticsearch-logging:9200"
ports:
- containerPort: 5601
name: ui
protocol: TCP
</code></pre>
<p><strong>kibana-service.yaml</strong></p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: kibana-logging
namespace: kube-system
labels:
k8s-app: kibana-logging
kubernetes.io/name: "Kibana"
spec:
ports:
- port: 5601
protocol: TCP
targetPort: ui
selector:
k8s-app: kibana-logging
</code></pre></li>
<li><p>Create a kubectl proxy</p>
<pre><code>kubectl proxy
</code></pre></li>
<li><p>Watch your logs with kibana at</p>
<p><a href="http://localhost:8001/api/v1/proxy/namespaces/kube-system/services/kibana-logging/" rel="noreferrer">http://localhost:8001/api/v1/proxy/namespaces/kube-system/services/kibana-logging/</a></p></li>
</ol>
|
<p>I have tried all the basics of Kubernetes and if you want to update your application all you can use <code>kubectl rolling-update</code> to update the pods one by one without downtime. Now, I have read the kubernetes documentation again and I have found a new feature called <code>Deployment</code> on version <code>v1beta1</code>. I am confused since I there is a line on the Deployment docs:</p>
<blockquote>
<p>Next time we want to update pods, we can just update the deployment again.</p>
</blockquote>
<p>Isn't this the role for <code>rolling-update</code>? Any inputs would be very useful.</p>
| <p>I have been testing rolling updates of a service using both replication controllers and declarative Deployment objects. I found that with a replication controller there appears to be no downtime from a client perspective. But when the Deployment is doing a rolling update, the client gets some errors for a while until the update stabilizes.</p>
<p>This is with kubernetes 1.2.1</p>
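<p>One thing that may reduce those errors is giving the containers a readiness probe, so the Deployment only sends traffic to pods that are actually serving. A minimal sketch, assuming the app answers health checks on <code>/healthz</code> port 8080 (both are assumptions):</p>
<pre><code>containers:
  - name: web
    image: my-web-image   # placeholder
    readinessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 5
</code></pre>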
|
<p>I want to create a MongoDB replica set across three machines. Do I need to specify the IP addresses of these machines? They run in pods and have dynamic IPs. If I try to specify the DNS name of the MongoDB service, it says</p>
<blockquote>
<p>No host described in new configuration XXXXX for replica set
app_replica maps to this node</p>
</blockquote>
<p>How do I configure a MongoDB replica set for k8s?</p>
<p>I use the DNS add-on for k8s,
and I try to initialize the cluster as follows:</p>
<pre><code>var config = {
"_id" : "app_replica",
"members" : [
{
"_id" : 0,
"host" : "mongodb-node-01"
},
{
"_id" : 1,
"host" : "mongodb-node-02"
},
{
"_id" : 2,
"host" : "mongodb-node-03",
"arbiterOnly" : true
}
]
}
rs.initiate(config)
</code></pre>
<p><strong>Config Service:</strong></p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: "mongodb-node-01"
labels:
app: "mongodb-node-01"
spec:
ports:
- port: 27017
targetPort: 27001
selector:
app: "mongodb-node-01"
</code></pre>
<p><strong>Config Replication Controller:</strong></p>
<pre><code>apiVersion: v1
kind: ReplicationController
metadata:
name: "mongodb-node-01"
labels:
app: "mongodb-node-01"
spec:
replicas: 1
selector:
app: "mongodb-node-01"
template:
metadata:
labels:
app: "mongodb-node-01"
spec:
containers:
- name: "mongodb-node-01"
image: 192.168.0.139:5000/db/mongo
command:
- mongod
- "--replSet"
- "app_replica"
- "--smallfiles"
- "--noprealloc"
env:
- name: ENV
value: "prod"
ports:
- containerPort: 27017
volumeMounts:
- name: mongo-persistent-storage
mountPath: /data/db
readOnly: false
volumes:
- name: mongo-persistent-storage
hostPath:
path: /data/mongo/mongodb-node-01
nodeSelector:
database: "true"
mongodb01: "true"
</code></pre>
| <p>You need to set up multiple deployments and services. Take a look at this zookeeper example - <a href="https://gist.github.com/bprashanth/8160d0cf1469b4b125af95f697433934" rel="nofollow">https://gist.github.com/bprashanth/8160d0cf1469b4b125af95f697433934</a></p>
<p>You do not rely on node/machine IPs. Instead you rely on stable DNS names of multiple services.</p>
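<p>Following that pattern with the service names from the question, the replica set would be initiated with the stable service DNS names instead of pod IPs. A sketch, assuming the default namespace, the default cluster domain <code>cluster.local</code>, and that the Service port matches the port mongod listens on:</p>
<pre><code>var config = {
    "_id" : "app_replica",
    "members" : [
        { "_id" : 0, "host" : "mongodb-node-01.default.svc.cluster.local:27017" },
        { "_id" : 1, "host" : "mongodb-node-02.default.svc.cluster.local:27017" },
        { "_id" : 2, "host" : "mongodb-node-03.default.svc.cluster.local:27017", "arbiterOnly" : true }
    ]
}
rs.initiate(config)
</code></pre>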
|
<p>(I have looked at some other threads, but apparently the privilege mode is now supported in the latest code, so am wondering if I have hit a bug.)</p>
<p>I have two physical servers: both running Linux (ubuntu), with the latest kubernetes code from github yesterday. </p>
<p>I am running <code>docs/getting-started-guides/docker-multinode/master.sh</code> (& <code>worker.sh</code>).</p>
<p>On Master node:</p>
<pre><code>$ kubectl create -f examples/nfs/nfs-server-rc.yaml
The ReplicationController "nfs-server" is invalid.
spec.template.spec.containers[0].securityContext.privileged: forbidden '<*>(0xc208389770)true'
</code></pre>
<p>Question: Is this supported? Or am I doing something wrong? Or is this a bug, please?</p>
<p><code>master.sh</code> code already has the option <code>--allow-privileged=true</code> provided.</p>
<p>These following options were set, but not with a great conviction, and just because I saw some discussion elsewhere setting them.</p>
<pre><code>/etc/default/kubelet:
`KUBELET_OPTS="--allow_privileged=true"`
/etc/default/kube-apiserver:
`KUBE_APISERVER_OPTS="--allow_privileged=true"`
</code></pre>
<p>Master configuration:</p>
<pre><code>$ kubectl version
Client Version: version.Info{Major:"1", Minor:"2+", GitVersion:"v1.2.0-alpha.5.833+2e5da8b881e2f5", GitCommit:"2e5da8b881e2f5b6dfb66653acf4aaa1ca1f398e", GitTreeState:"clean"}
Server Version: version.Info{Major:"1", Minor:"1", GitVersion:"v1.1.3", GitCommit:"6a81b50c7e97bbe0ade075de55ab4fa34f049dc2", GitTreeState:"clean"}
$ docker version
Client:
Version: 1.9.1
API version: 1.21
Go version: go1.4.2
Git commit: a34a1d5
Built: Fri Nov 20 13:12:04 UTC 2015
OS/Arch: linux/amd64
Server:
Version: 1.9.1
API version: 1.21
Go version: go1.4.2
Git commit: a34a1d5
Built: Fri Nov 20 13:12:04 UTC 2015
OS/Arch: linux/amd64
</code></pre>
| <p>From kubernetes v1.1, any container in a pod can enable privileged mode, using the privileged flag on the SecurityContext of the container spec. </p>
<p>To enable privileged mode, nest <code>privileged: true</code> inside the <code>securityContext</code> declaration of the container spec:</p>
<pre><code>"securityContext": {
      "privileged": true
}
</code></pre>
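<p>Put together, a minimal pod manifest with the flag set might look like this (sketch only; the image is a placeholder):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: privileged-pod
spec:
  containers:
  - name: main
    image: busybox        # placeholder image
    command: ["sleep", "3600"]
    securityContext:
      privileged: true
</code></pre>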
<p>And as Janet said set <code>--allow-privileged=true</code> for both kubelet and kube-apiserver and restart them:</p>
<pre><code>sudo /etc/init.d/kubelet restart
sudo /etc/init.d/kube-apiserver restart
</code></pre>
<p>and validate that the flags are changed by using <code>ps -ef | grep kube</code> command.</p>
|
<p>I am evaluating Kubernetes as a platform for our new application. For now, it looks all very exciting! However, Iβm running into a problem: Iβm hosting my cluster on GCE and I need some mechanism to share storage between two pods - the continous integration server and my application server. Whatβs the best way for doing this with kubernetes? None of the volume types seems to fit my needs, since GCE disks canβt be shared if one pod needs to write to the disk. NFS would be perfect, but seems to require special build options for the kubernetes cluster?</p>
<p>EDIT: Sharing storage seems to be a problem that I have encountered multiple times now using Kubernetes. There are multiple use cases where I'd just like to have one volume and hook it up to multiple pods (with write access). I can only assume that this would be a common use case, no?</p>
<p>EDIT2: For example, <a href="http://kubernetes.io/v1.0/examples/elasticsearch/README.html">this page</a> describes how to set up an Elasticsearch cluster, but wiring it up with persistent storage is impossible (<a href="https://github.com/pires/kubernetes-elasticsearch-cluster/issues/2">as described here</a>), which kind of renders it pointless :(</p>
| <h2>Firstly, do you <em>really</em> need multiple readers / writers?</h2>
<p>From my experience of Kubernetes / micro-service architecture (MSA), the issue is often more related to your design pattern. One of the fundamental design patterns with MSA is the proper encapsulation of services, and this includes the data owned by each service.</p>
<p>In much the same way as OOP, your service should look after the data that is related to its area of concern and should allow access to this data to other services via an interface. This interface could be an API, messages handled directly or via a brokage service, or using protocol buffers and gRPC. Generally, multi-service access to data is an anti-pattern akin to global variables in OOP and most programming languages.</p>
<p>As an example, if you were looking to write logs, you should have a log service which each service can call with the relevant data it needs to log. Writing directly to a shared disk means that you'd need to update every container if you change your log directory structure, or decide to add extra functionality like sending emails on certain types of errors.</p>
<p>In the majority of cases, you should be using some form of minimal interface before resorting to using a file system, avoiding the unintended side-effects of <a href="https://www.hyrumslaw.com/" rel="noreferrer">Hyrum's law</a> that you are exposed to when using a file system. Without proper interfaces / contracts between your services, you heavily reduce your ability to build maintainable and resilient services.</p>
<h2>Ok, your situation is best solved using a file system. There are a number of options...</h2>
<p>There are obviously times when a file system that can handle multiple concurrent writers provides a superior solution over a more 'traditional' MSA forms of communication. Kubernetes supports a large number of volume types which can be found <a href="https://kubernetes.io/docs/concepts/storage/volumes" rel="noreferrer">here</a>. While this list is quite long, many of these volume types don't support multiple writers (also known as <code>ReadWriteMany</code> in Kubernetes).</p>
<p>Those volume types that do support <code>ReadWriteMany</code> can be found in <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes" rel="noreferrer">this table</a> and at the time of writing this is AzureFile, CephFS, Glusterfs, Quobyte, NFS and PortworxVolume.</p>
<p>There are also operators such as the popular <a href="https://rook.io/" rel="noreferrer">rook.io</a> which are powerful and provide some great features, but the learning curve for such systems can be a difficult climb when you just want a simple solution and keep moving forward.</p>
<h2>The simplest approach.</h2>
<p>In my experience, the best initial option is NFS. This is a great way to learn the basic ideas around <code>ReadWriteMany</code> Kubernetes storage, will serve most use cases and is the easiest to implement. After you've built a working knowledge of multi-service persistence, you can then make more informed decisions to use more feature rich offerings which will often require more work to implement.</p>
<p>The specifics for setting up NFS differ based on how and where your cluster is running and the specifics of your NFS service and I've previously written two articles on how to set up NFS for <a href="https://ianbelcher.me/tech-blog/adding-persistance-to-on-premises-k8s-cluster" rel="noreferrer">on-prem clusters</a> and using AWS NFS equivalent <a href="https://ianbelcher.me/tech-blog/setting-up-efs-on-eks" rel="noreferrer">EFS on EKS clusters</a>. These two articles give a good contrast for just how different implementations can be given your particular situation.</p>
<p>For a bare minimum example, you will firstly need an NFS service. If you're looking to do a quick test or you have low SLO requirements, following <a href="https://www.digitalocean.com/community/tutorials/how-to-set-up-an-nfs-mount-on-ubuntu-20-04" rel="noreferrer">this DO article</a> is a great quick primer for setting up NFS on Ubuntu. If you have an existing NAS which provides NFS and is accessible from your cluster, this will also work as well.</p>
<p>Once you have an NFS service, you can create a persistent volume similar to the following:</p>
<pre><code>---
apiVersion: v1
kind: PersistentVolume
metadata:
name: pv-name
spec:
capacity:
storage: 1Gi
volumeMode: Filesystem
accessModes:
- ReadWriteMany
nfs:
server: 255.0.255.0 # IP address of your NFS service
path: "/desired/path/in/nfs"
</code></pre>
<p>A caveat here is that your nodes will need binaries installed to use NFS, and I've discussed this more in my <a href="https://ianbelcher.me/tech-blog/adding-persistance-to-on-premises-k8s-cluster" rel="noreferrer">on-prem cluster</a> article. This is also the reason you <em>need</em> to use EFS when running on EKS as your nodes don't have the ability to connect to NFS.</p>
<p>Once you have the persistent volume set up, it is a simple case of using it like you would any other volume.</p>
<pre><code>---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: pvc-name
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: d-name
spec:
  selector:
    matchLabels:
      app: d-name
  template:
    metadata:
      labels:
        app: d-name
    spec:
      containers:
        - name: p-name
          image: nginx   # placeholder; use whichever image needs the shared volume
          volumeMounts:
            - mountPath: /data
              name: v-name
      volumes:
        - name: v-name
          persistentVolumeClaim:
            claimName: pvc-name
</code></pre>
|
<p>I currently have a cluster running on GCloud which I created with 3 nodes.
This is what I get when I run <code>kubectl describe nodes</code></p>
<pre><code>Name: node1
Capacity:
cpu: 1
memory: 3800808Ki
pods: 40
Non-terminated Pods: (3 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits
  ───────── ──── ──────────── ────────── ─────────────── ─────────────
default my-pod1 100m (10%) 0 (0%) 0 (0%) 0 (0%)
default my-pod2 100m (10%) 0 (0%) 0 (0%) 0 (0%)
kube-system fluentd-cloud-logging-gke-little-people-e39a45a8-node-75fn 100m (10%) 100m (10%) 200Mi (5%) 200Mi (5%)
Allocated resources:
(Total limits may be over 100%, i.e., overcommitted. More info: http://releases.k8s.io/HEAD/docs/user-guide/compute-resources.md)
CPU Requests CPU Limits Memory Requests Memory Limits
  ──────────── ────────── ─────────────── ─────────────
300m (30%) 100m (10%) 200Mi (5%) 200Mi (5%)
Name: node2
Capacity:
cpu: 1
memory: 3800808Ki
pods: 40
Non-terminated Pods: (4 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits
  ───────── ──── ──────────── ────────── ─────────────── ─────────────
default my-pod3 100m (10%) 0 (0%) 0 (0%) 0 (0%)
kube-system fluentd-cloud-logging-gke-little-people-e39a45a8-node-wcle 100m (10%) 100m (10%) 200Mi (5%) 200Mi (5%)
kube-system heapster-v11-yi2nw 100m (10%) 100m (10%) 236Mi (6%) 236Mi (6%)
kube-system kube-ui-v4-5nh36 100m (10%) 100m (10%) 50Mi (1%) 50Mi (1%)
Allocated resources:
(Total limits may be over 100%, i.e., overcommitted. More info: http://releases.k8s.io/HEAD/docs/user-guide/compute-resources.md)
CPU Requests CPU Limits Memory Requests Memory Limits
  ──────────── ────────── ─────────────── ─────────────
400m (40%) 300m (30%) 486Mi (13%) 486Mi (13%)
Name: node3
Capacity:
cpu: 1
memory: 3800808Ki
pods: 40
Non-terminated Pods: (3 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits
  ───────── ──── ──────────── ────────── ─────────────── ─────────────
kube-system fluentd-cloud-logging-gke-little-people-e39a45a8-node-xhdy 100m (10%) 100m (10%) 200Mi (5%) 200Mi (5%)
kube-system kube-dns-v9-bo86j 310m (31%) 310m (31%) 170Mi (4%) 170Mi (4%)
kube-system l7-lb-controller-v0.5.2-ae0t2 110m (11%) 110m (11%) 70Mi (1%) 120Mi (3%)
Allocated resources:
(Total limits may be over 100%, i.e., overcommitted. More info: http://releases.k8s.io/HEAD/docs/user-guide/compute-resources.md)
CPU Requests CPU Limits Memory Requests Memory Limits
  ──────────── ────────── ─────────────── ─────────────
520m (52%) 520m (52%) 440Mi (11%) 490Mi (13%)
</code></pre>
<p>Now, as you can see, I have 3 pods, 2 on node1 and 1 on node2. What I would like to do is to move all pods on node1 and delete the two other nodes. However, there seem to be pods belonging to the <code>kube-system</code> namespace and I don't know what effect deleting them might have.</p>
<p>I can tell that the pods named <code>fluentd-cloud-logging...</code> or <code>heapster..</code> are used for logging and compute resource usage, but I don't really know if I can move the pods <code>kube-dns-v9-bo86j</code> and <code>l7-lb-controller-v0.5.2-ae0t2</code> to another node without repercussions. </p>
<p>Can anyone help with some insight as to how should I proceed?</p>
<p>Thank you very much.</p>
| <p>Killing them so that they'll be rescheduled on another node is perfectly fine. They can all be rescheduled other than the fluentd pods, which are bound one to each node. </p>
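<p>For example, to kill a couple of the kube-system pods so the scheduler places them again (the pod names are taken from your output and will change after rescheduling):</p>
<pre><code>kubectl delete pod heapster-v11-yi2nw kube-ui-v4-5nh36 --namespace=kube-system
</code></pre>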
|
<p>We have a requirement that no requests receive 404's when doing a rolling deploy. Currently we achieve this by deploying the new assets container to all servers before continuing with a rolling deploy. With nginx's "try_files" this ensures that as the new code is being rolled out we can serve both the old and new versions of assets. Does Kubernetes have any features to support this type of workflow?</p>
| <p>You can either use <a href="http://kubernetes.io/docs/user-guide/deployments/" rel="nofollow"><code>Deployment</code> API</a> (for Kubernetes >= v1.2) or <a href="http://kubernetes.io/docs/user-guide/rolling-updates/" rel="nofollow"><code>kubectl rolling-update</code></a> (for < v1.2) to manage the rolling deploy of your Kubernetes <code>Pod</code>s (each is a co-located group of containers and volumes). You'll also need to create <code>Service</code>s for accessing those <code>Pod</code>s (<code>Service</code>s redirect traffic to <code>Pod</code>s). During the rolling deploy, a user will be redirected to either the <code>Pod</code> with old or new versions of assets container. </p>
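<p>With the <code>Deployment</code> API the rolling behaviour is configurable, so you can, for instance, forbid taking old pods down before replacement pods are up. A minimal sketch (names and image are placeholders):</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: assets
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0
      maxSurge: 1
  template:
    metadata:
      labels:
        app: assets
    spec:
      containers:
      - name: assets
        image: my-assets:v2   # placeholder image/tag
        ports:
        - containerPort: 80
</code></pre>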
|
<p>I am trying to add swap space on a kubernetes node to prevent it from running into out-of-memory issues. Is it possible to add swap space on a node (previously known as a minion)? If possible, what procedure should I follow, and how does it affect the pod acceptance test? </p>
| <p>Kubernetes doesn't support container memory swap. Even if you add swap space, kubelet will create the container with --memory-swappiness=0 (when using Docker). There have been discussions about adding support, but the proposal was not approved. <a href="https://github.com/kubernetes/kubernetes/issues/7294" rel="noreferrer">https://github.com/kubernetes/kubernetes/issues/7294</a></p>
|
<p>The closest tutorial I can find in getting an SSL terminating Ingress and an nginx based controller running on bare metal (Digital Ocean, for example) is this:</p>
<p><a href="https://github.com/kubernetes/contrib/tree/master/ingress/controllers/nginx" rel="nofollow">https://github.com/kubernetes/contrib/tree/master/ingress/controllers/nginx</a></p>
<p>but it leaves so many assumptions unexplained.</p>
<h1>My ingress requirements are simply:</h1>
<ul>
<li>default backend at port 80 for all hosts that:
<ul>
<li>file access to <code>location ^~ /.well-known/acme-challenge/</code> which allows my LetsEncrypt cert renewals to work</li>
<li>404 on <code>location /.well-known/acme-challenge/</code></li>
<li>301 on <code>location /</code></li>
</ul></li>
<li>subdomain based routing to different backend services on port 443</li>
<li>each subdomain points to a different SSL key/cert (generated by my LetsEncrypt, and stored in K8S as a secret I suppose??)</li>
</ul>
<h1>What I think I need is this:</h1>
<ul>
<li>full documentation on writing Ingress rules
<ul>
<li>can I configure SSL certs (on port 443) for each backend individually?</li>
<li>is / the "path" that's a catchall for a host?</li>
</ul></li>
<li>updating Ingress rules in place</li>
<li>what nginx controller do I use? nginx? nginx-alpha? nginx-ingress docker container -- and where is the documentation for each of these controllers?
<ul>
<li>is there a base controller image that I can override the nginx.conf template that gets populated by Ingress changes from the API server?</li>
</ul></li>
<li>how do you store SSL keys and certs as secrets?</li>
</ul>
| <p>boo, my answers apply to <a href="https://github.com/kubernetes/contrib/tree/master/ingress/controllers/nginx">https://github.com/kubernetes/contrib/tree/master/ingress/controllers/nginx</a></p>
<blockquote>
<ul>
<li>default backend at port 80 for all hosts that:
<ul>
<li>404 on <code>location /.well-known/acme-challenge/</code></li>
</ul></li>
</ul>
</blockquote>
<p>this is not possible using Ingress rules</p>
<blockquote>
<ul>
<li>301 on <code>location /</code></li>
</ul>
</blockquote>
<p>This is already supported. If the server contains an SSL certificate it will redirect to <code>https</code> automatically</p>
<blockquote>
<ul>
<li>subdomain based routing to different backend services on port 443</li>
<li>each subdomain points to a different SSL key/cert (generated by my LetsEncrypt, and stored in K8S as a secret I suppose??)</li>
</ul>
</blockquote>
<p>You need to create multiple Ingress rules, one per subdomain. Each rule can use a different secret name (this will create multiple servers, one per subdomain)</p>
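<p>As an illustration, two Ingress resources for two subdomains, each referencing its own TLS secret (hostnames, service names and secret names below are placeholders; each secret is expected to hold the <code>tls.crt</code>/<code>tls.key</code> pair for its host):</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: app-a
spec:
  tls:
  - secretName: app-a-tls
  rules:
  - host: a.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: app-a
          servicePort: 80
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: app-b
spec:
  tls:
  - secretName: app-b-tls
  rules:
  - host: b.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: app-b
          servicePort: 80
</code></pre>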
<h1>What I think I need is this:</h1>
<blockquote>
<ul>
<li>full documentation on writing Ingress rules</li>
</ul>
</blockquote>
<p><a href="http://kubernetes.io/docs/user-guide/ingress/">http://kubernetes.io/docs/user-guide/ingress/</a> </p>
<p>(I don't know id there's additional information besides the go code)</p>
<blockquote>
<ul>
<li>can I configure SSL certs (on port 443) for each backend individually?</li>
<li>is / the "path" that's a catchall for a host?</li>
</ul>
</blockquote>
<p>yes</p>
<blockquote>
<ul>
<li>updating Ingress rules in place</li>
<li>what nginx controller do I use? nginx? nginx-alpha? nginx-ingress docker container -- and where is the documentation for each of these controllers?</li>
</ul>
</blockquote>
<p>This depends on what you need. If you want to build your own custom Ingress controller you can use <code>nginx-alpha</code> as a reference. If <code>nginx-ingress</code> is not clear in the examples, please open an issue and mention what could be improved in the examples or what is missing</p>
<blockquote>
<ul>
<li>is there a base controller image that I can override the nginx.conf template that gets populated by Ingress changes from the API server?</li>
</ul>
</blockquote>
<p>No. The reason for this is that the template is tied to the go code that populates the template. That said, you can build a custom image changing the template, but this requires you to deploy the image to test the changes </p>
<blockquote>
<ul>
<li>how do you store SSL keys and certs as secrets?</li>
</ul>
</blockquote>
<p>yes, as secrets like this <a href="http://kubernetes.io/docs/user-guide/ingress/#tls">http://kubernetes.io/docs/user-guide/ingress/#tls</a></p>
<p>For the <code>letsencrypt</code> support please check this comment <a href="https://github.com/kubernetes/kubernetes/issues/19899#issuecomment-184059009">https://github.com/kubernetes/kubernetes/issues/19899#issuecomment-184059009</a></p>
<p>Here is a complete example <a href="https://gist.github.com/aledbf/d88c7f7d0b8d4d032035b14ab0965e26">https://gist.github.com/aledbf/d88c7f7d0b8d4d032035b14ab0965e26</a> <a href="https://github.com/kubernetes/contrib/pull/766">added to examples in #766</a></p>
|
<p>We are planning to build a small docker cluster for our application services. We considered using 2 master VMs for HA, 1 Consul (if we choose Swarm) and 5-10 hosts for containers. We have not yet decided what to use - Docker Swarm or Kubernetes.</p>
<p>So the question is what "hardware" requirements (CPU cores, RAM) the managers, both Swarm and Kubernetes, need to orchestrate this small cluster.</p>
| <p>Just to clarify a bit on what Robert wrote about Kubernetes.
If you want to have up to 5 machines for running your applications, even a 1-core virtual machine (n1-standard-1 on GCE) should be enough.
You can handle a 10-node cluster with a 2-core virtual machine, as Robert said. For official recommendations please take a look at:
<a href="https://kubernetes.io/docs/setup/best-practices/cluster-large/" rel="nofollow noreferrer">https://kubernetes.io/docs/setup/best-practices/cluster-large/</a></p>
<p>However, note that resource usage of our master components is more related to the number of pods (containers) you want to run on your cluster. If you want to have, say, a single-digit number of them, even n1-standard-1 on GCE should be enough for a 10-node cluster. But it's definitely safer to use n1-standard-2 in the case of <=10 node clusters.</p>
<p>As for HA, I agree with Robert that having 3 master VMs is better than 2. Etcd (which is our backing storage) requires more than half of all registered replicas to be up to work correctly, so in the case of 2 instances, all of them need to be up (which is generally not your goal). If you have 3 instances, one of them can be down.</p>
<p>Let me know if you have more questions about Kubernetes.</p>
|
<p>EDITED:
I've an OpenShift cluster with one master and two nodes. I've installed NFS on the master and NFS client on the nodes.
I've followed the wordpress example with NFS: <a href="https://github.com/openshift/origin/tree/master/examples/wordpress" rel="nofollow">https://github.com/openshift/origin/tree/master/examples/wordpress</a></p>
<p>I did the following on my master as: oc login -u system:admin:</p>
<pre><code>mkdir /home/data/pv0001
mkdir /home/data/pv0002
chown -R nfsnobody:nfsnobody /home/data
chmod -R 777 /home/data/
# Add to /etc/exports
/home/data/pv0001 *(rw,sync,no_root_squash)
/home/data/pv0002 *(rw,sync,no_root_squash)
# Enable the new exports without bouncing the NFS service
exportfs -a
</code></pre>
<p>So exportfs shows: </p>
<pre><code>/home/data/pv0001
<world>
/home/data/pv0002
<world>
$ setsebool -P virt_use_nfs 1
# Create the persistent volumes for NFS.
# I did not change anything in the yaml-files
$ oc create -f examples/wordpress/nfs/pv-1.yaml
$ oc create -f examples/wordpress/nfs/pv-2.yaml
$ oc get pv
NAME LABELS CAPACITY ACCESSMODES STATUS CLAIM REASON
pv0001 <none> 1073741824 RWO,RWX Available
pv0002 <none> 5368709120 RWO Available
</code></pre>
<p>This is also what I get.
Then I'm going to my node:</p>
<pre><code>oc login
test-admin
</code></pre>
<p>And I create a wordpress project:</p>
<pre><code>oc new-project wordpress
# Create claims for storage in my project (same namespace).
# The claims in this example carefully match the volumes created above.
$ oc create -f examples/wordpress/pvc-wp.yaml
$ oc create -f examples/wordpress/pvc-mysql.yaml
$ oc get pvc
NAME LABELS STATUS VOLUME
claim-mysql map[] Bound pv0002
claim-wp map[] Bound pv0001
</code></pre>
<p>This looks exactly the same for me.</p>
<p>Launch the MySQL pod.</p>
<pre><code>oc create -f examples/wordpress/pod-mysql.yaml
oc create -f examples/wordpress/service-mysql.yaml
oc create -f examples/wordpress/pod-wordpress.yaml
oc create -f examples/wordpress/service-wp.yaml
oc get svc
NAME LABELS SELECTOR IP(S) PORT(S)
mysql name=mysql name=mysql 172.30.115.137 3306/TCP
wpfrontend name=wpfrontend name=wordpress 172.30.170.55 5055/TCP
</code></pre>
<p>So actually everything seemed to work! But when I ask for my pod status I get the following:</p>
<pre><code>[root@ip-10-0-0-104 pv0002]# oc get pod
NAME READY STATUS RESTARTS AGE
mysql 0/1 Image: openshift/mysql-55-centos7 is ready, container is creating 0 6h
wordpress 0/1 Image: wordpress is not ready on the node 0 6h
</code></pre>
<p>The pods are in pending state and in the webconsole they're giving the following error:</p>
<pre><code>12:12:51 PM mysql Pod failedMount Unable to mount volumes for pod "mysql_wordpress": exit status 32 (607 times in the last hour, 41 minutes)
12:12:51 PM mysql Pod failedSync Error syncing pod, skipping: exit status 32 (607 times in the last hour, 41 minutes)
12:12:48 PM wordpress Pod failedMount Unable to mount volumes for pod "wordpress_wordpress": exit status 32 (604 times in the last hour, 40 minutes)
12:12:48 PM wordpress Pod failedSync Error syncing pod, skipping: exit status 32 (604 times in the last hour, 40 minutes)
</code></pre>
<p>Unable to mount +timeout. But when I'm going to my node and I'm doing the following (test is a created directory on my node):</p>
<pre><code>mount -t nfs -v masterhostname:/home/data/pv0002 /test
</code></pre>
<p>And when I place some file in my /test on my node, then it appears in my /home/data/pv0002 on my master, so that seems to work.
What's the reason that it's unable to mount in OpenShift?
I've been stuck on this for a while.</p>
<p>LOGS:</p>
<pre><code>Oct 21 10:44:52 ip-10-0-0-129 docker: time="2015-10-21T10:44:52.795267904Z" level=info msg="GET /containers/json"
Oct 21 10:44:52 ip-10-0-0-129 origin-node: E1021 10:44:52.832179 1148 mount_linux.go:103] Mount failed: exit status 32
Oct 21 10:44:52 ip-10-0-0-129 origin-node: Mounting arguments: localhost:/home/data/pv0002 /var/lib/origin/openshift.local.volumes/pods/2bf19fe9-77ce-11e5-9122-02463424c049/volumes/kubernetes.io~nfs/pv0002 nfs []
Oct 21 10:44:52 ip-10-0-0-129 origin-node: Output: mount.nfs: access denied by server while mounting localhost:/home/data/pv0002
Oct 21 10:44:52 ip-10-0-0-129 origin-node: E1021 10:44:52.832279 1148 kubelet.go:1206] Unable to mount volumes for pod "mysql_wordpress": exit status 32; skipping pod
Oct 21 10:44:52 ip-10-0-0-129 docker: time="2015-10-21T10:44:52.832794476Z" level=info msg="GET /containers/json?all=1"
Oct 21 10:44:52 ip-10-0-0-129 docker: time="2015-10-21T10:44:52.835916304Z" level=info msg="GET /images/openshift/mysql-55-centos7/json"
Oct 21 10:44:52 ip-10-0-0-129 origin-node: E1021 10:44:52.837085 1148 pod_workers.go:111] Error syncing pod 2bf19fe9-77ce-11e5-9122-02463424c049, skipping: exit status 32
</code></pre>
| <p>Logs showed <code>Oct 21 10:44:52 ip-10-0-0-129 origin-node: Output: mount.nfs: access denied by server while mounting localhost:/home/data/pv0002</code></p>
<p>So it failed mounting on localhost.
To create my persistent volume I executed this yaml:</p>
<pre><code>{
"apiVersion": "v1",
"kind": "PersistentVolume",
"metadata": {
"name": "registry-volume"
},
"spec": {
"capacity": {
"storage": "20Gi"
},
"accessModes": [ "ReadWriteMany" ],
"nfs": {
"path": "/home/data/pv0002",
"server": "localhost"
}
}
}
</code></pre>
<p>So I was mounting to <code>/home/data/pv0002</code>, but this path was not on the localhost but on my master server (which is <code>ose3-master.example.com</code>). So I created my PV in the wrong way.</p>
<pre><code>{
"apiVersion": "v1",
"kind": "PersistentVolume",
"metadata": {
"name": "registry-volume"
},
"spec": {
"capacity": {
"storage": "20Gi"
},
"accessModes": [ "ReadWriteMany" ],
"nfs": {
"path": "/home/data/pv0002",
"server": "ose3-master.example.com"
}
}
}
</code></pre>
<p>This was also in a training environment. It's recommended to have an NFS server outside of your cluster to mount to.</p>
|
<p>Tried installing kubernetes v1.2.0 on azure environment but after installation cannot access kube apis at port 8080.</p>
<p>Following services are running :</p>
<pre><code>root 1473 0.2 0.5 536192 42812 ? Ssl 09:22 0:00 /home/weave/weaver --port 6783 --name 22:95:7a:6e:30:ed --nickname kube-00 --datapath datapath --ipalloc-range 10.32.0.0/12 --dns-effective-listen-address 172.17.42.1 --dns-listen-address 172.17.42.1:53 --http-addr 127.0.0.1:6784
root 1904 0.1 0.2 30320 20112 ? Ssl 09:22 0:00 /opt/kubernetes/server/bin/kube-proxy --master=http://kube-00:8080 --logtostderr=true
root 1907 0.0 0.0 14016 2968 ? Ss 09:22 0:00 /bin/bash -c until /opt/kubernetes/server/bin/kubectl create -f /etc/kubernetes/addons/; do sleep 2; done
root 1914 0.2 0.3 35888 22212 ? Ssl 09:22 0:00 /opt/kubernetes/server/bin/kube-scheduler --logtostderr=true --master=127.0.0.1:8080
root 3129 2.2 0.3 42488 25192 ? Ssl 09:27 0:00 /opt/kubernetes/server/bin/kube-controller-manager --master=127.0.0.1:8080 --logtostderr=true
</code></pre>
<p><code>curl -v http://localhost:8080</code> returns error</p>
<blockquote>
<ul>
<li>Rebuilt URL to: <a href="http://localhost:8080/" rel="nofollow">http://localhost:8080/</a></li>
<li>Trying 127.0.0.1...</li>
<li>connect to 127.0.0.1 port 8080 failed: Connection refused</li>
<li>Failed to connect to localhost port 8080: Connection refused</li>
<li>Closing connection 0 curl: (7) Failed to connect to localhost port 8080: Connection refused</li>
</ul>
</blockquote>
<p>Same works fine with <code>v1.1.2</code>.</p>
<p>I'm using following guidelines <a href="https://github.com/kubernetes/kubernetes/tree/master/docs/getting-started-guides/coreos/azure" rel="nofollow">https://github.com/kubernetes/kubernetes/tree/master/docs/getting-started-guides/coreos/azure</a> and updated line <a href="https://github.com/kubernetes/kubernetes/blob/master/docs/getting-started-guides/coreos/azure/cloud_config_templates/kubernetes-cluster-main-nodes-template.yml#L187" rel="nofollow">https://github.com/kubernetes/kubernetes/blob/master/docs/getting-started-guides/coreos/azure/cloud_config_templates/kubernetes-cluster-main-nodes-template.yml#L187</a> to user version <code>v1.2.0</code>.</p>
| <p>The services you show running do not include the apiserver. As a quick breakdown, here is what each of the services you show running does.</p>
<ul>
<li><a href="https://www.weave.works/" rel="nofollow">Weave</a>: This is a software overlay network and assigns IP addresses to your pods.</li>
<li><a href="http://kubernetes.io/docs/admin/kube-proxy/" rel="nofollow">kube-proxy</a>: This runs on your worker nodes allow pods to run and route traffic between exposed services.</li>
<li><a href="http://kubernetes.io/docs/user-guide/kubectl-overview/" rel="nofollow">kubectl create</a>: Kubectl is actually the management cli tool but in this case using <code>-f /etc/kubernetes/addons/; sleep 2</code> is watching the /etc/kubernetes/addons/ folder and automatically creating any objects (pods, replication controllers, services, etc.) that are put in that folder.</li>
<li><a href="http://kubernetes.io/docs/admin/kube-scheduler/" rel="nofollow">kube-scheduler</a>: Responsible for scheduling pods onto nodes. Uses policies and rules.
<a href="http://kubernetes.io/docs/admin/kube-controller-manager/" rel="nofollow">kube-controller-manager</a>: Manages the state of the cluster by always making sure the current state and desired state are the same. This includes starting/stopping pods and creating objects (services, replication-controllers, etc) that do not yet exist or killing them if they shouldn't exist.</li>
</ul>
<hr>
<p>All of these services interact with the <a href="http://kubernetes.io/docs/admin/kube-apiserver/" rel="nofollow">kube-apiserver</a> which should be a separate service that coordinates all of the information these other services use. You'll need the apiserver running in order for all of the other components to do their jobs.</p>
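<p>To confirm that is what is missing, you could check for the process and hit the health endpoint (assuming the default insecure port 8080):</p>
<pre><code>ps aux | grep kube-apiserver
curl http://127.0.0.1:8080/healthz
</code></pre>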
<p>I won't go into the details of getting it running in your environment, but it looks like from the comments on your original thread that you found some missing documentation to get it running.</p>
|
<p>Is it possible to set the working directory when launching a container with Kubernetes ?</p>
| <p>Yes, through the <code>workingDir</code> field of the <a href="https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#Container" rel="noreferrer">container spec</a>. Here's an example replication controller with an nginx container that has <code>workingDir</code> set to <code>/workdir</code>:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: ReplicationController
metadata:
name: nginx
spec:
replicas: 1
template:
metadata:
labels:
name: nginx
spec:
containers:
- name: nginx
image: mynginximage
workingDir: /workdir
</code></pre>
|
<p>k8s 1.2 deployed <a href="http://kubernetes.io/docs/getting-started-guides/docker/" rel="noreferrer">locally, single-node docker</a></p>
<p>Am I doing something wrong? Is this working for everyone else or is something broken in my k8s deployment?</p>
<p>Following the example in the ConfigMaps guide, /etc/config/special.how should be created below but is not:</p>
<pre><code>[root@totoro brs-kubernetes]# kubectl create -f example.yaml
configmap "special-config" created
pod "dapi-test-pod" created
[root@totoro brs-kubernetes]# kubectl exec -it dapi-test-pod -- sh
/ # cd /etc/config/
/etc/config # ls
/etc/config # ls -alh
total 4
drwxrwxrwt 2 root root 40 Mar 23 18:47 .
drwxr-xr-x 7 root root 4.0K Mar 23 18:47 ..
/etc/config #
</code></pre>
<p>example.yaml</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: special-config
namespace: default
data:
special.how: very
special.type: charm
---
apiVersion: v1
kind: Pod
metadata:
name: dapi-test-pod
spec:
containers:
- name: test-container
image: gcr.io/google_containers/busybox
command: ["sleep", "100"]
volumeMounts:
- name: config-volume
mountPath: /etc/config
volumes:
- name: config-volume
configMap:
name: special-config
items:
- key: special.how
path: how.file
restartPolicy: Never
</code></pre>
<p>Summary of conformance test failures follows (asked to run by jayunit100). Full run in this <a href="https://gist.github.com/reflection/746dc9290eccd327d0bb" rel="noreferrer">gist</a>.</p>
<pre><code>Summarizing 7 Failures:
[Fail] ConfigMap [It] updates should be reflected in volume [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/configmap.go:262
[Fail] Downward API volume [It] should provide podname only [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/util.go:1637
[Fail] Downward API volume [It] should update labels on modification [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/downwardapi_volume.go:82
[Fail] ConfigMap [It] should be consumable from pods in volume with mappings [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/util.go:1637
[Fail] Networking [It] should function for intra-pod communication [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/networking.go:121
[Fail] Downward API volume [It] should update annotations on modification [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/downwardapi_volume.go:119
[Fail] ConfigMap [It] should be consumable from pods in volume [Conformance]
/home/schou/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/util.go:1637
Ran 93 of 265 Specs in 2875.468 seconds
FAIL! -- 86 Passed | 7 Failed | 0 Pending | 172 Skipped --- FAIL: TestE2E (2875.48s)
FAIL
</code></pre>
<p>Output of findmnt:</p>
<pre><code>[schou@totoro single-node]$ findmnt
TARGET SOURCE FSTYPE OPTIONS
/ /dev/mapper/fedora-root
β ext4 rw,relatime,data=ordere
ββ/sys sysfs sysfs rw,nosuid,nodev,noexec,
β ββ/sys/kernel/security securityfs securit rw,nosuid,nodev,noexec,
β ββ/sys/fs/cgroup tmpfs tmpfs ro,nosuid,nodev,noexec,
β β ββ/sys/fs/cgroup/systemd cgroup cgroup rw,nosuid,nodev,noexec,
β β ββ/sys/fs/cgroup/cpuset cgroup cgroup rw,nosuid,nodev,noexec,
β β ββ/sys/fs/cgroup/net_cls,net_prio cgroup cgroup rw,nosuid,nodev,noexec,
β β ββ/sys/fs/cgroup/memory cgroup cgroup rw,nosuid,nodev,noexec,
β β ββ/sys/fs/cgroup/hugetlb cgroup cgroup rw,nosuid,nodev,noexec,
β β ββ/sys/fs/cgroup/cpu,cpuacct cgroup cgroup rw,nosuid,nodev,noexec,
β β ββ/sys/fs/cgroup/perf_event cgroup cgroup rw,nosuid,nodev,noexec,
β β ββ/sys/fs/cgroup/pids cgroup cgroup rw,nosuid,nodev,noexec,
β β ββ/sys/fs/cgroup/blkio cgroup cgroup rw,nosuid,nodev,noexec,
β β ββ/sys/fs/cgroup/freezer cgroup cgroup rw,nosuid,nodev,noexec,
β β ββ/sys/fs/cgroup/devices cgroup cgroup rw,nosuid,nodev,noexec,
β ββ/sys/fs/pstore pstore pstore rw,nosuid,nodev,noexec,
β ββ/sys/firmware/efi/efivars efivarfs efivarf rw,nosuid,nodev,noexec,
β ββ/sys/kernel/debug debugfs debugfs rw,relatime
β ββ/sys/kernel/config configfs configf rw,relatime
β ββ/sys/fs/fuse/connections fusectl fusectl rw,relatime
ββ/proc proc proc rw,nosuid,nodev,noexec,
β ββ/proc/sys/fs/binfmt_misc systemd-1 autofs rw,relatime,fd=32,pgrp=
β ββ/proc/fs/nfsd nfsd nfsd rw,relatime
ββ/dev devtmpfs devtmpf rw,nosuid,size=8175208k
β ββ/dev/shm tmpfs tmpfs rw,nosuid,nodev
β ββ/dev/pts devpts devpts rw,nosuid,noexec,relati
β ββ/dev/mqueue mqueue mqueue rw,relatime
β ββ/dev/hugepages hugetlbfs hugetlb rw,relatime
ββ/run tmpfs tmpfs rw,nosuid,nodev,mode=75
β ββ/run/user/42 tmpfs tmpfs rw,nosuid,nodev,relatim
β β ββ/run/user/42/gvfs gvfsd-fuse fuse.gv rw,nosuid,nodev,relatim
β ββ/run/user/1000 tmpfs tmpfs rw,nosuid,nodev,relatim
β ββ/run/user/1000/gvfs gvfsd-fuse fuse.gv rw,nosuid,nodev,relatim
ββ/tmp tmpfs tmpfs rw
ββ/boot /dev/sda2 ext4 rw,relatime,data=ordere
β ββ/boot/efi /dev/sda1 vfat rw,relatime,fmask=0077,
ββ/var/lib/nfs/rpc_pipefs sunrpc rpc_pip rw,relatime
ββ/var/lib/kubelet/pods/fd20f710-fb82-11e5-ab9f-0862662cf845/volumes/kubernetes.io~secret/default-token-qggyv
β tmpfs tmpfs rw,relatime
ββ/var/lib/kubelet/pods/2f652e15-fb83-11e5-ab9f-0862662cf845/volumes/kubernetes.io~configmap/config-volume
β tmpfs tmpfs rw,relatime
ββ/var/lib/kubelet/pods/2f652e15-fb83-11e5-ab9f-0862662cf845/volumes/kubernetes.io~secret/default-token-6bzfe
tmpfs tmpfs rw,relatime
[schou@totoro single-node]$
</code></pre>
| <p>Thanks to @Paul Morie for helping me diagnose and fix this (from github <a href="https://github.com/kubernetes/kubernetes/issues/23392">issue</a>):</p>
<blockquote>
<p>bingo, the mount propagation mode of /var/lib/kubelet is private. try changing the mount flag for the kubelet dir to <code>-v /var/lib/kubelet:/var/lib/kubelet:rw,shared</code></p>
</blockquote>
<p>I also had to change <code>MountFlags=slave</code> to <code>MountFlags=shared</code> in my docker systemd file.</p>
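<p>For reference, the change amounted to something like this (the unit file location varies by distro, so treat the path as an assumption):</p>
<pre><code># in /usr/lib/systemd/system/docker.service (or a drop-in override), under [Service]:
MountFlags=shared

# then reload systemd and restart docker
sudo systemctl daemon-reload
sudo systemctl restart docker
</code></pre>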
|
<p>Is there something like:
kubectl get pods --project=PROJECT_ID</p>
<p>I would like not to modify my default gcloud configuration to switch between my staging and production environment. </p>
| <p><code>kubectl</code> saves clusters/contexts in its configuration. If you use the default scripts to bring up the cluster, these entries should've been set for your cluster.</p>
<p>A brief overview of <code>kubectl config</code>:</p>
<ul>
<li><a href="http://kubernetes.io/docs/user-guide/kubectl/kubectl_config_view/" rel="noreferrer"><code>kubectl config view</code></a> lets you view the clusters/contexts in
your configuration.</li>
<li><a href="http://kubernetes.io/docs/user-guide/kubectl/kubectl_config_set-cluster/" rel="noreferrer"><code>kubectl config set-cluster</code></a> and <a href="http://kubernetes.io/docs/user-guide/kubectl/kubectl_config_set-context/" rel="noreferrer"><code>kubectl config set-context</code></a> modifies/adds new entries.</li>
</ul>
<p>You can use <a href="http://kubernetes.io/docs/user-guide/kubectl/kubectl_config_use-context/" rel="noreferrer"><code>kubectl config use-context</code></a> to change the default context, and <code>kubectl --context=CONTEXT get pods</code> to switch to a different context for the current command. </p>
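<p>For example, assuming you have contexts named <code>staging</code> and <code>production</code> (the names here are placeholders for whatever is in your config):</p>
<pre><code>kubectl config view                    # see which clusters/contexts exist
kubectl --context=staging get pods     # one-off command against staging
kubectl --context=production get pods  # one-off command against production
kubectl config use-context staging     # change the default context
</code></pre>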
|
<p>In coreos we can define a service as </p>
<pre><code>[X-Fleet]
Global=true
</code></pre>
<p>This makes sure that this particular service runs on all the nodes.</p>
<p>How do I achieve the same thing for a pod in Kubernetes?</p>
| <p>Probably you want to use <a href="https://github.com/kubernetes/kubernetes/blob/release-1.2/docs/design/daemon.md" rel="nofollow">Daemonset</a> - a way to run a daemon on every node in a Kubernetes cluster.</p>
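<p>A minimal manifest could look something like this (API group/version as of the 1.2 release; the name and image below are placeholders):</p>
<pre><code>apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: my-daemon
spec:
  template:
    metadata:
      labels:
        app: my-daemon
    spec:
      containers:
      - name: my-daemon
        image: my-daemon-image:latest   # placeholder
</code></pre>
<p>Every node that matches the (optional) node selector gets exactly one copy of the pod, which is the closest equivalent to fleet's <code>Global=true</code>.</p>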
|
<p>I followed <a href="http://kubernetes.io/docs/getting-started-guides/aws/" rel="nofollow">the tutorial</a> on how to create a k8s cluster on AWS using Ubuntu. This works great, so I have one master and three nodes (minions). However, I haven't found any working recipe for adding a new node to the cluster.</p>
<ol>
<li>First of all, I have a new autoscaling group in AWS; it was created along with the cluster. I tried to increase the number of nodes in the scaling group. The scaling procedure did add an instance, but the node <em>is not part of the k8s cluster</em>. There is no kubelet, no kube-proxy, nothing of the sort on the instance, so it's understandable that it did not join the k8s cluster.</li>
</ol>
<p>As I have not found any tutorial, maybe I need to do some manual procedure, but that would make for strange autoscaling. What am I supposed to do? The kube scripts install the cluster and add the scaling group, yet scaling it does not work?</p>
<ol start="2">
<li>I do not insist on automatic scaling; I just need to add a new node in case of a failure or when we add new pods/RCs. I found a reference in <a href="http://blog.kubernetes.io/2016/03/building-highly-available-applications-using-Kubernetes-new-multi-zone-clusters-a.k.a-Ubernetes-Lite.html" rel="nofollow">this article</a>, in the section called <code>Add more nodes in a second zone</code>; note that this is about a multizone deployment.</li>
</ol>
<p>So I tried things like:</p>
<p><code>KUBE_USE_EXISTING_MASTER=true KUBERNETES_PROVIDER=aws KUBE_AWS_ZONE=eu-central-1b NUM_NODES=1 KUBE_SUBNET_CIDR=172.20.1.0/24 MASTER_INTERNAL_IP=172.20.0.9 kubernetes/cluster/kube-up.sh</code> </p>
<p>but it always failed with:</p>
<p><code>A client error (AlreadyExists) occurred when calling the CreateLaunchConfiguration operation: Launch Configuration by this name already exists - A launch configuration already exists with the name kubernetes-minion-group-eu-central-1b</code></p>
<p>There must be definitely some way how to do that, can you please someone help me? Thank you!</p>
| <p>After some attempts it seemed that the auto-scaling group worked when using Debian images. Note that it takes approx. 2 minutes after the instance is ready for everything needed to be installed, and then you can see the instance using <code>get nodes</code>.</p>
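<p>To check when the new instance has actually registered, something like this is enough (the node only shows up once the kubelet on it has started):</p>
<pre><code>kubectl get nodes
# or keep watching until it appears
kubectl get nodes --watch
</code></pre>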
|
<p>I am looking to deploy services to a Kubernetes cluster running over multiple zones and would like to be able to inject the region/zone labels into my pods using environment variables.</p>
<p>I have looked into the downward API however this only seems to allow you to inject labels/metadata from the pod/service and not from the node you are running the pod on.</p>
<p>If there is no way to inject the node labels, another solution I thought about was having the container query the Kubernetes/AWS API to fetch this information; however, that would mean adding quite a lot of complexity to my containers.</p>
| <blockquote>
<p>I thought about was having the container query the kubernetes/AWS API to fetch this information however that would mean adding quite a lot of complexity to my containers.</p>
</blockquote>
<p>This is currently the recommended approach for getting information not available in the downward API. To avoid the additional complexity in your containers, you could use a "sidecar" with a variation on <a href="https://stackoverflow.com/questions/36690446/inject-node-labels-into-kubernetes-pod/36699927#36699927">Tobias's solution</a>. The sidecar would be an additional container in the pod, which <a href="https://kubernetes.io/docs/user-guide/accessing-the-cluster/#accessing-the-api-from-a-pod" rel="noreferrer">connects to the kubernetes API</a>, queries the information you're looking for (node labels), and writes the output to a shared volume. This could be implemented as an <a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/" rel="noreferrer">init container</a>, or a sidecar that continuously syncs with the API.</p>
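<p>A rough sketch of the sidecar variant is below. The image name, the crude JSON parsing, and the one-shot fetch are all assumptions/simplifications — a real sidecar would likely loop, or use a proper client library — but it shows the shape of the approach: look up the pod's own node via the API, fetch that node object (which includes its labels), and drop it on a shared volume for the main container to read.</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: node-label-example
spec:
  volumes:
  - name: node-info
    emptyDir: {}
  containers:
  - name: app
    image: my-app:latest                 # placeholder
    volumeMounts:
    - name: node-info
      mountPath: /etc/node-info
  - name: node-label-sidecar
    image: tutum/curl                    # placeholder; anything with sh and curl
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
    command:
    - sh
    - -c
    - >
      CA=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt;
      TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token);
      API=https://kubernetes.default.svc;
      NODE=$(curl -s --cacert $CA -H "Authorization: Bearer $TOKEN"
      $API/api/v1/namespaces/$POD_NAMESPACE/pods/$POD_NAME
      | grep -o '"nodeName": *"[^"]*"' | cut -d'"' -f4);
      curl -s --cacert $CA -H "Authorization: Bearer $TOKEN"
      $API/api/v1/nodes/$NODE > /etc/node-info/node.json;
      sleep 86400
    volumeMounts:
    - name: node-info
      mountPath: /etc/node-info
</code></pre>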
|
<p>I configured k8s (Kubernetes) with 1 master and 2 slaves. I was able to access the web UI provided by k8s, but after rebooting the machines I am no longer able to access the UI at the same URL. Maybe I am missing some environment variables or something else; I wasn't able to figure it out. Does anyone know what I am missing?<br>
<code>docker ps</code> shows that I am running the desired containers. The images are<br>
<code>gcr.io/google_containers/kubernetes-dashboard-amd64:v1.0.1</code> and <code>gcr.io/google_containers/pause:2.0</code><br>
I followed this <a href="http://kubernetes.io/docs/getting-started-guides/locally/#running-a-container" rel="nofollow">link</a>. </p>
| <p>Navigating to <code>https://<master-ip>:8080/ui</code> will redirect you to the <code>kubernetes-dashboard</code> service in the <code>kube-system</code> namespace. When you first set up your cluster this service was working properly but after rebooting the set of endpoints became stale. You can diagnose this issue by running <code>kubectl describe endpoints --namespace kube-system kubernetes-dashboard</code> to see the current set of endpoints, and if they are incorrect (or missing), restarting the dashboard pod and/or dns pod will resolve the issue. </p>
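<p>For example (the pod name suffix below is a placeholder; use whatever <code>get pods</code> shows):</p>
<pre><code>kubectl describe endpoints --namespace=kube-system kubernetes-dashboard
kubectl get pods --namespace=kube-system
# deleting the stale pod lets its replication controller recreate it
kubectl delete pod --namespace=kube-system kubernetes-dashboard-v1.0.1-xxxxx
</code></pre>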
|
<p>We are interested in running certain commands as pods and services, as they start or stop. Using the life-cycle hooks in the yml files does not work for us, since these commands are not optional. We have considered running a watcher pod that uses the watch api to run these commands. But we can't figure out how to use the watch api so that it does not keep sending the same events again and again. Is there a way to tell the watch api to only send new events since connection was opened? If expecting a stateful watch api is unreasonable, will it be possible to pass it a timestamp or a monotonically increasing id to avoid getting already seen events?</p>
<p>Basically, what we are doing now is running a pod with a daemon process that communicates with the API, and we can see the events as a stream. But we want to run some task when a pod is created or deleted.</p>
| <p>I have found the answer. In case anyone else is watching.</p>
<p>There is a much better way to watch resources and handle events with custom tasks: the <code>pkg/controller/framework</code> <a href="https://github.com/kubernetes/kubernetes/blob/master/pkg/controller/framework/controller.go" rel="noreferrer">package</a>.</p>
<p>The steps look like this:</p>
<pre><code>1. Instantiate a framework.NewInformer
2. Run the controller
3. NewInformer is loaded with your custom event handlers, which are called when the events occur.
</code></pre>
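<p>A rough sketch of those steps in Go — the import paths and signatures below are from the 1.2-era tree, so treat this as an outline rather than copy-paste code:</p>
<pre><code>// Watch pods from inside the cluster and run custom tasks on add/delete.
package main

import (
	"fmt"
	"time"

	"k8s.io/kubernetes/pkg/api"
	"k8s.io/kubernetes/pkg/client/cache"
	client "k8s.io/kubernetes/pkg/client/unversioned"
	"k8s.io/kubernetes/pkg/controller/framework"
	"k8s.io/kubernetes/pkg/fields"
)

func main() {
	// In-cluster client: uses the pod's service account credentials.
	c, err := client.NewInCluster()
	if err != nil {
		panic(err)
	}

	// List+watch all pods; the informer handles listing, watching and
	// resyncing, so you only see a coherent stream of callbacks instead of
	// repeatedly replayed raw watch events.
	podListWatch := cache.NewListWatchFromClient(c, "pods", api.NamespaceAll, fields.Everything())

	_, controller := framework.NewInformer(
		podListWatch,
		&api.Pod{},
		30*time.Second, // resync period
		framework.ResourceEventHandlerFuncs{
			AddFunc: func(obj interface{}) {
				// run your custom "pod created" task here
				fmt.Printf("pod added: %s\n", obj.(*api.Pod).Name)
			},
			DeleteFunc: func(obj interface{}) {
				// run your custom "pod deleted" task here
				fmt.Printf("pod deleted: %v\n", obj)
			},
		},
	)

	stop := make(chan struct{})
	controller.Run(stop) // blocks until stop is closed
}
</code></pre>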
|