#StackBounty: #nginx #proxy #kubernetes #socat #helm What is needed to use kubernetes port forwarding through a proxy?

Bounty: 200

I have a kubernetes cluster that I can reach through an nginx proxy.

I can do kubectl get deployments -n kube-system without issue.

However, I’m trying to use helm. Helm is throwing an error:

Error: forwarding ports: error upgrading connection: unable to upgrade connection: query parameter "port" is required

From researching this, it looks like an issue with Kubernetes port forwarding itself. For Helm to work, Kubernetes port forwarding must work first, i.e.:

https://stackoverflow.com/questions/56864580/error-forwarding-ports-upgrade-request-required-error-in-helm-of-a-kubernete

Indeed, the following does not work through the proxy:

kubectl -n kube-system port-forward <tiller-deploy-Pod> <some_port>:44134

So…what exactly is needed to get kubernetes port forwarding working through a proxy?

Do I need to set up a TCP proxy such as socat on the proxy server for port 44134? If so, do I just proxy traffic to port 44134 on the Kubernetes master?
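
For context, kubectl port-forward works by upgrading the HTTP connection to the API server (a SPDY/WebSocket-style upgrade), so whatever fronts the API server has to pass that upgrade through untouched. A minimal sketch of the nginx directives usually involved, assuming the proxy is plain nginx and a snippet can be included inside the location that proxies to the API server (the include path here is made up):

# Sketch only: directives to include in the existing API-server location block.
sudo tee /etc/nginx/k8s-upgrade.conf >/dev/null <<'EOF'
proxy_http_version 1.1;                   # connection upgrades need HTTP/1.1
proxy_set_header   Upgrade $http_upgrade; # forward the client's Upgrade header
proxy_set_header   Connection "Upgrade";
proxy_buffering    off;                   # stream the tunnelled bytes
proxy_read_timeout 1h;                    # keep the long-lived stream open
EOF
sudo nginx -t && sudo nginx -s reload

socat in plain TCP mode would sidestep the question entirely, since it forwards bytes without parsing HTTP at all.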


Get this bounty!!!

#StackBounty: #kubernetes #openshift #openshift-origin #okd openshift 3.11 install fails – Unable to update cni config: No networks fou…

Bounty: 50

I’m trying to install Openshift 3.11 on a one master, one worker node setup.

The installation fails, and I can see in journalctl -r:

2730 kubelet.go:2101] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
2730 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d

Things I’ve tried:

  1. Rebooted the master node.
  2. Disabled IP forwarding on the master node, as described on https://github.com/openshift/openshift-ansible/issues/7967#issuecomment-405196238 and https://linuxconfig.org/how-to-turn-on-off-ip-forwarding-in-linux
  3. Applied kube-flannel on the master node, as described on https://stackoverflow.com/a/54779881/265119
  4. Unset http_proxy and https_proxy on the master node, as described on https://github.com/kubernetes/kubernetes/issues/54918#issuecomment-385162637
  5. Modified /etc/resolv.conf to use nameserver 8.8.8.8, as described on https://github.com/kubernetes/kubernetes/issues/48798#issuecomment-452172710
  6. Created the file /etc/cni/net.d/80-openshift-network.conf with content { "cniVersion": "0.2.0", "name": "openshift-sdn", "type": "openshift-sdn" }, as described on https://stackoverflow.com/a/55743756/265119 (recreated as a shell snippet below)
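
For reference, step 6 above boils down to something like this on the master node (the file content is exactly the JSON from the item; the node service name is my assumption for OKD 3.11):

sudo mkdir -p /etc/cni/net.d
sudo tee /etc/cni/net.d/80-openshift-network.conf >/dev/null <<'EOF'
{ "cniVersion": "0.2.0", "name": "openshift-sdn", "type": "openshift-sdn" }
EOF
sudo systemctl restart origin-node   # verify the unit name; it differs between OKD and OCP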

The last step does appear to have allowed the master node to become Ready; however, the openshift-ansible installer still fails with "Control plane pods didn't come up".


Get this bounty!!!

#StackBounty: #google-cloud-platform #kubernetes #google-kubernetes-engine #prometheus Correcting clock skew in a GKE cluster

Bounty: 50

I have the following alert configured in prometheus:

alert: ClockSkewDetected
expr: abs(node_timex_offset_seconds{job="node-exporter"})
  > 0.03
for: 2m
labels:
  severity: warning
annotations:
  message: Clock skew detected on node-exporter {{ $labels.namespace }}/{{ $labels.pod }}. Ensure NTP is configured correctly on this host.

This alert is part of the default kube-prometheus stack which I am using.

I find this alert fires for around 10 mins every day or two.

I’d like to know how to deal with this problem (the alert firing!). It’s suggested in this answer that I shouldn’t need to run NTP (via a daemonset I guess) myself on GKE.
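
Before touching the threshold, it seems worth confirming what the node's own time daemon reports while the alert is firing; a rough sketch, assuming timedatectl is present on the node image and NODE/ZONE stand in for an affected instance:

gcloud compute ssh NODE --zone ZONE --command '
  timedatectl                                              # check the "synchronized" status line
  systemctl status systemd-timesyncd chronyd ntpd 2>/dev/null | grep -E "Active|\.service"
'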

I’m also keen to use the kube-prometheus defaults where possible – so I’m unsure about increasing the 0.03 value.


Get this bounty!!!

#StackBounty: #linux #docker #capabilities #kubernetes What are the security implications of capabilities in Kubernetes pods?

Bounty: 50

We have a Kubernetes deployment with an application that needs to be on a VPN. We implement this requirement by running openvpn-client in a sidecar container within the pod with elevated capabilities:

securityContext:
  capabilities:
    add:
      - NET_ADMIN

We’d like to better understand the impact of this, and how exposed we’d be if this container were compromised. We want to be confident that code execution in this container couldn’t view or modify packets or network configuration in other pods, or on the host node.

My current hypothesis is that since each pod has an isolated network namespace, giving CAP_NET_ADMIN to a container in the pod just provides the capability within that namespace.
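
One way to probe that hypothesis empirically (a sketch; the pod and container names are made up, and it assumes the sidecar image ships iproute2):

kubectl exec -it my-app-pod -c openvpn-sidecar -- ip addr                 # expect only lo, eth0 and the tun device
kubectl exec -it my-app-pod -c openvpn-sidecar -- ip route                # the pod's routes, not the node's
kubectl exec -it my-app-pod -c openvpn-sidecar -- grep Cap /proc/1/status # effective capability set of PID 1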

However, I haven’t been able to find any documentation that definitively discusses the impact of using securityContext to assign capabilities to containers. There are a few pieces of documentation – outlined below – that strongly imply that Kubernetes / Docker will provide sufficient isolation here, but I’m not 100% certain.


The pods documentation on resource sharing [1] gives a hint here:

The applications in a Pod all use the same network namespace (same IP and port space), and can thus “find” each other and communicate using localhost. Because of this, applications in a Pod must coordinate their usage of ports. Each Pod has an IP address in a flat shared networking space that has full communication with other physical computers and Pods across the network.

The networking documentation on the network model [2] has this to say:

Kubernetes IP addresses exist at the Pod scope – containers within a Pod share their network namespaces – including their IP address. This means that containers within a Pod can all reach each other’s ports on localhost. This also means that containers within a Pod must coordinate port usage, but this is no different from processes in a VM. This is called the “IP-per-pod” model.

Finally, I note that pod.spec.hostNetwork is configurable, and defaults to false:

$ kubectl explain pod.spec.hostNetwork
KIND:     Pod
VERSION:  v1

FIELD:    hostNetwork <boolean>

DESCRIPTION:
     Host networking requested for this pod. Use the host's network namespace.
     If this option is set, the ports that will be used must be specified.
     Default to false.

[1] https://kubernetes.io/docs/concepts/workloads/pods/pod/#resource-sharing-and-communication

[2] https://kubernetes.io/docs/concepts/cluster-administration/networking/#the-kubernetes-network-model


Get this bounty!!!

#StackBounty: #google-cloud-platform #kubernetes #google-kubernetes-engine Google Cloud Kubernetes run-away systemd 100% CPU usage

Bounty: 50

Last week, after upgrading our GKE cluster to Kubernetes 1.13.6-gke.13, some of our nodes started to fail due to high CPU usage. Pods on these nodes would be CPU starved, work poorly and get killed due to failing liveness checks.

This is what top shows when we SSH into a problem node:

top - 10:11:27 up 5 days, 22 min,  1 user,  load average: 23.71, 21.90, 20.32
Tasks: 858 total,   3 running, 854 sleeping,   0 stopped,   1 zombie
%Cpu(s): 33.5 us, 30.9 sy,  0.0 ni, 35.5 id,  0.0 wa,  0.0 hi,  0.1 si,  0.0 st
MiB Mem :  30157.0 total,  14125.6 free,   4771.2 used,  11260.1 buff/cache
MiB Swap:      0.0 total,      0.0 free,      0.0 used.  24762.7 avail Mem

    PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND
      1 root      20   0  114636  22564   4928 R  65.2   0.1   5254:53 systemd
    356 message+  20   0  110104  98784   2916 S  33.8   0.3   2705:49 dbus-daemon
   1236 root      20   0 3174876 147328  57464 S  22.0   0.5   1541:29 kubelet
    366 root      20   0   12020   3864   3396 S  21.6   0.0   1705:21 systemd-logind
   1161 root      20   0 5371428 586832  26756 S  18.7   1.9   1848:18 dockerd
    432 root      20   0 5585144  49040  13420 S  11.1   0.2 848:54.06 containerd
  23797 root      20   0  994620   8784   6088 S   3.0   0.0  96:58.79 containerd-shim
  45359 root      20   0  994620   8968   5600 S   3.0   0.0 100:28.46 containerd-shim
  35913 root      20   0 1068352   8192   5600 S   2.3   0.0 104:54.12 containerd-shim
  10806 root      20   0  994620   8908   5596 S   2.0   0.0 102:57.45 containerd-shim
  15378 root      20   0  994620   9084   5600 S   2.0   0.0 102:24.08 containerd-shim
  33141 root      20   0  994620   8856   5848 S   2.0   0.0  95:26.89 containerd-shim
  34299 root      20   0  994620   8824   5848 S   2.0   0.0  90:55.28 containerd-shim
  48411 root      20   0  994620   9488   6216 S   2.0   0.0  95:38.56 containerd-shim
1824641 root      20   0 1068352   6836   5716 S   2.0   0.0  65:45.81 containerd-shim
  10257 root      20   0  994620   9404   5596 S   1.6   0.0 101:10.45 containerd-shim
  15400 root      20   0 1068352   8916   6216 S   1.6   0.0  97:47.99 containerd-shim
  22895 root      20   0 1068352   8408   5472 S   1.6   0.0 102:55.97 containerd-shim
  29969 root      20   0  994620   9124   5600 S   1.6   0.0  98:32.32 containerd-shim
  34720 root      20   0  994620   8444   5472 S   1.6   0.0  97:23.98 containerd-shim
  10073 root      20   0 1068352   9796   6152 S   1.3   0.0 100:54.30 containerd-shim

To attempt to resolve the issue we recreated all the nodes. We created a new pool with equivalent resources and migrated all pods over by scaling down the old pool to 0 nodes. (This was difficult because at least 2 of our previous nodes failed to shut down, even after a long time. In the end we had to shut down the underlying VMs to kill those nodes.) At first this seemed to help, node CPU usage and load averages were low, but then the problem returned.

Next we created yet another pool, this time with twice as much CPU. It didn’t help. Some nodes still had extremely high CPU usage for systemd, dbus-daemon, kubelet etc.

We kept that setup though. Now we are running with tons of extra CPU per node and, although expensive, that masks the problem (there’s enough CPU to also run actual pods in addition to the problematic system services).

How do we find out what’s actually wrong here?
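
The next profiling step we're considering (a sketch; whether perf can be used inside the COS toolbox is an assumption on our part):

# Enter the Container-Optimized OS debug shell on an affected node first:
#   toolbox
# then profile PID 1 directly to see which code paths are hot:
perf top -p 1
# Without extra tooling, the per-thread view already hints at what systemd is busy with:
top -H -p 1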


Snippet from journalctl -u kubelet:

Jul 04 05:49:34 gke-cluster0-pool-1-9c29a343-vqtq kubelet[1236]: I0704 05:49:34.849808    1236 container_manager_linux.go:434] [ContainerManager]: Discovered runtime cgroups name: /system.slice/docker.service
Jul 04 05:54:34 gke-cluster0-pool-1-9c29a343-vqtq kubelet[1236]: I0704 05:54:34.850598    1236 container_manager_linux.go:434] [ContainerManager]: Discovered runtime cgroups name: /system.slice/docker.service
Jul 04 05:59:34 gke-cluster0-pool-1-9c29a343-vqtq kubelet[1236]: I0704 05:59:34.851797    1236 container_manager_linux.go:434] [ContainerManager]: Discovered runtime cgroups name: /system.slice/docker.service
Jul 04 06:04:34 gke-cluster0-pool-1-9c29a343-vqtq kubelet[1236]: I0704 06:04:34.858344    1236 container_manager_linux.go:434] [ContainerManager]: Discovered runtime cgroups name: /system.slice/docker.service
Jul 04 06:09:34 gke-cluster0-pool-1-9c29a343-vqtq kubelet[1236]: I0704 06:09:34.859626    1236 container_manager_linux.go:434] [ContainerManager]: Discovered runtime cgroups name: /system.slice/docker.service
Jul 04 06:14:34 gke-cluster0-pool-1-9c29a343-vqtq kubelet[1236]: I0704 06:14:34.861142    1236 container_manager_linux.go:434] [ContainerManager]: Discovered runtime cgroups name: /system.slice/docker.service
Jul 04 06:19:34 gke-cluster0-pool-1-9c29a343-vqtq kubelet[1236]: I0704 06:19:34.862185    1236 container_manager_linux.go:434] [ContainerManager]: Discovered runtime cgroups name: /system.slice/docker.service
Jul 04 06:24:34 gke-cluster0-pool-1-9c29a343-vqtq kubelet[1236]: I0704 06:24:34.863160    1236 container_manager_linux.go:434] [ContainerManager]: Discovered runtime cgroups name: /system.slice/docker.service
Jul 04 06:29:34 gke-cluster0-pool-1-9c29a343-vqtq kubelet[1236]: I0704 06:29:34.864156    1236 container_manager_linux.go:434] [ContainerManager]: Discovered runtime cgroups name: /system.slice/docker.service
Jul 04 06:34:34 gke-cluster0-pool-1-9c29a343-vqtq kubelet[1236]: I0704 06:34:34.865041    1236 container_manager_linux.go:434] [ContainerManager]: Discovered runtime cgroups name: /system.slice/docker.service
Jul 04 06:39:34 gke-cluster0-pool-1-9c29a343-vqtq kubelet[1236]: I0704 06:39:34.866044    1236 container_manager_linux.go:434] [ContainerManager]: Discovered runtime cgroups name: /system.slice/docker.service
Jul 04 06:44:34 gke-cluster0-pool-1-9c29a343-vqtq kubelet[1236]: I0704 06:44:34.866969    1236 container_manager_linux.go:434] [ContainerManager]: Discovered runtime cgroups name: /system.slice/docker.service
Jul 04 06:49:34 gke-cluster0-pool-1-9c29a343-vqtq kubelet[1236]: I0704 06:49:34.867979    1236 container_manager_linux.go:434] [ContainerManager]: Discovered runtime cgroups name: /system.slice/docker.service
Jul 04 06:54:34 gke-cluster0-pool-1-9c29a343-vqtq kubelet[1236]: I0704 06:54:34.869429    1236 container_manager_linux.go:434] [ContainerManager]: Discovered runtime cgroups name: /system.slice/docker.service
Jul 04 06:59:34 gke-cluster0-pool-1-9c29a343-vqtq kubelet[1236]: I0704 06:59:34.870359    1236 container_manager_linux.go:434] [ContainerManager]: Discovered runtime cgroups name: /system.slice/docker.service
Jul 04 07:04:34 gke-cluster0-pool-1-9c29a343-vqtq kubelet[1236]: I0704 07:04:34.871190    1236 container_manager_linux.go:434] [ContainerManager]: Discovered runtime cgroups name: /system.slice/docker.service
Jul 04 07:09:34 gke-cluster0-pool-1-9c29a343-vqtq kubelet[1236]: I0704 07:09:34.872063    1236 container_manager_linux.go:434] [ContainerManager]: Discovered runtime cgroups name: /system.slice/docker.service
Jul 04 07:14:34 gke-cluster0-pool-1-9c29a343-vqtq kubelet[1236]: I0704 07:14:34.873240    1236 container_manager_linux.go:434] [ContainerManager]: Discovered runtime cgroups name: /system.slice/docker.service
Jul 04 07:19:34 gke-cluster0-pool-1-9c29a343-vqtq kubelet[1236]: I0704 07:19:34.874123    1236 container_manager_linux.go:434] [ContainerManager]: Discovered runtime cgroups name: /system.slice/docker.service
Jul 04 07:24:34 gke-cluster0-pool-1-9c29a343-vqtq kubelet[1236]: I0704 07:24:34.875010    1236 container_manager_linux.go:434] [ContainerManager]: Discovered runtime cgroups name: /system.slice/docker.service
Jul 04 07:29:34 gke-cluster0-pool-1-9c29a343-vqtq kubelet[1236]: I0704 07:29:34.876612    1236 container_manager_linux.go:434] [ContainerManager]: Discovered runtime cgroups name: /system.slice/docker.service
Jul 04 07:34:34 gke-cluster0-pool-1-9c29a343-vqtq kubelet[1236]: I0704 07:34:34.877420    1236 container_manager_linux.go:434] [ContainerManager]: Discovered runtime cgroups name: /system.slice/docker.service
Jul 04 07:39:34 gke-cluster0-pool-1-9c29a343-vqtq kubelet[1236]: I0704 07:39:34.878368    1236 container_manager_linux.go:434] [ContainerManager]: Discovered runtime cgroups name: /system.slice/docker.service
Jul 04 07:44:34 gke-cluster0-pool-1-9c29a343-vqtq kubelet[1236]: I0704 07:44:34.879221    1236 container_manager_linux.go:434] [ContainerManager]: Discovered runtime cgroups name: /system.slice/docker.service
Jul 04 07:49:34 gke-cluster0-pool-1-9c29a343-vqtq kubelet[1236]: I0704 07:49:34.880239    1236 container_manager_linux.go:434] [ContainerManager]: Discovered runtime cgroups name: /system.slice/docker.service
Jul 04 07:54:34 gke-cluster0-pool-1-9c29a343-vqtq kubelet[1236]: I0704 07:54:34.881172    1236 container_manager_linux.go:434] [ContainerManager]: Discovered runtime cgroups name: /system.slice/docker.service
Jul 04 07:59:34 gke-cluster0-pool-1-9c29a343-vqtq kubelet[1236]: I0704 07:59:34.881868    1236 container_manager_linux.go:434] [ContainerManager]: Discovered runtime cgroups name: /system.slice/docker.service
Jul 04 08:04:34 gke-cluster0-pool-1-9c29a343-vqtq kubelet[1236]: I0704 08:04:34.882653    1236 container_manager_linux.go:434] [ContainerManager]: Discovered runtime cgroups name: /system.slice/docker.service
Jul 04 08:09:34 gke-cluster0-pool-1-9c29a343-vqtq kubelet[1236]: I0704 08:09:34.883432    1236 container_manager_linux.go:434] [ContainerManager]: Discovered runtime cgroups name: /system.slice/docker.service
Jul 04 08:14:34 gke-cluster0-pool-1-9c29a343-vqtq kubelet[1236]: I0704 08:14:34.884175    1236 container_manager_linux.go:434] [ContainerManager]: Discovered runtime cgroups name: /system.slice/docker.service
Jul 04 08:19:34 gke-cluster0-pool-1-9c29a343-vqtq kubelet[1236]: I0704 08:19:34.885043    1236 container_manager_linux.go:434] [ContainerManager]: Discovered runtime cgroups name: /system.slice/docker.service
Jul 04 08:24:34 gke-cluster0-pool-1-9c29a343-vqtq kubelet[1236]: I0704 08:24:34.885845    1236 container_manager_linux.go:434] [ContainerManager]: Discovered runtime cgroups name: /system.slice/docker.service
Jul 04 08:29:34 gke-cluster0-pool-1-9c29a343-vqtq kubelet[1236]: I0704 08:29:34.886690    1236 container_manager_linux.go:434] [ContainerManager]: Discovered runtime cgroups name: /system.slice/docker.service

Output from strace -c -p `pidof systemd`:

strace: Process 1 attached
strace: Process 1 detached
% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 24.20    0.109101           5     22827           lstat
 17.45    0.078654           9      8610           sendmsg
 15.51    0.069914           7     10549           read
 10.96    0.049406           3     17310           getpid
  6.45    0.029075           8      3806           openat
  6.08    0.027385           7      3783           readlinkat
  5.76    0.025945           3      7579           fstat
  4.47    0.020167           4      5700           getrandom
  3.62    0.016301           4      3806           close
  3.14    0.014133           7      1892           access
  2.28    0.010278           5      1924        11 stat
  0.03    0.000145           4        33           epoll_wait
  0.02    0.000089           4        22           readlink
  0.01    0.000029           3        11           prctl
  0.01    0.000029           3        11           clock_gettime
  0.01    0.000027           2        11           getgid
  0.01    0.000026           2        11           geteuid
  0.00    0.000020           2        11           getuid
  0.00    0.000020           2        11           getegid
------ ----------- ----------- --------- --------- ----------------
100.00    0.450744                 87907        11 total

Since dbus is very active, I took a look at that too: dbus-monitor --system --profile | head -n 20

dbus-monitor: unable to enable new-style monitoring: org.freedesktop.DBus.Error.AccessDenied: "Rejected send message, 1 matched rules; type="method_call", sender=":1.165" (uid=5004 pid=769854 comm="dbus-monitor --system --profile ") interface="org.freedesktop.DBus.Monitoring" member="BecomeMonitor" error name="(unset)" requested_reply="0" destination="org.freedesktop.DBus" (bus)". Falling back to eavesdropping.
#type    timestamp    serial    sender    destination    path    interface    member
#                    in_reply_to
sig    1562263380.765023    2    org.freedesktop.DBus    :1.165    /org/freedesktop/DBus    org.freedesktop.DBus    NameAcquired
sig    1562263380.953812    132870362    :1.157    <none>    /org/freedesktop/systemd1/unit/var_2dlib_2ddocker_2doverlay2_2d5d1be6e08bfe7552d6f9ee50a943eca88dd0dd749ec248594aa0be91879a2cdb_2dmerged_2emount    org.freedesktop.DBus.Properties    PropertiesChanged
sig    1562263380.957890    132870363    :1.157    <none>    /org/freedesktop/systemd1/unit/var_2dlib_2ddocker_2doverlay2_2d5d1be6e08bfe7552d6f9ee50a943eca88dd0dd749ec248594aa0be91879a2cdb_2dmerged_2emount    org.freedesktop.DBus.Properties    PropertiesChanged
sig    1562263380.957912    132870364    :1.157    <none>    /org/freedesktop/systemd1/unit/var_2dlib_2ddocker_2doverlay2_2d539cd8b8367913182e438aa1b3b05714c8f3f131e20bcadabdeb850c375a8125_2dmerged_2emount    org.freedesktop.DBus.Properties    PropertiesChanged
sig    1562263380.957918    132870365    :1.157    <none>    /org/freedesktop/systemd1/unit/var_2dlib_2ddocker_2doverlay2_2d539cd8b8367913182e438aa1b3b05714c8f3f131e20bcadabdeb850c375a8125_2dmerged_2emount    org.freedesktop.DBus.Properties    PropertiesChanged
sig    1562263380.957923    132870366    :1.157    <none>    /org/freedesktop/systemd1/unit/var_2dlib_2ddocker_2dcontainers_2d4c04ee8d1bf693ff2c9300b198b2b47bbf2c240265af5b9edc1f07b6cbd0c1ce_2dmounts_2dshm_2emount    org.freedesktop.DBus.Properties    PropertiesChanged
sig    1562263380.958014    132870367    :1.157    <none>    /org/freedesktop/systemd1/unit/var_2dlib_2ddocker_2dcontainers_2d4c04ee8d1bf693ff2c9300b198b2b47bbf2c240265af5b9edc1f07b6cbd0c1ce_2dmounts_2dshm_2emount    org.freedesktop.DBus.Properties    PropertiesChanged
sig    1562263380.958020    132870368    :1.157    <none>    /org/freedesktop/systemd1/unit/var_2dlib_2ddocker_2doverlay2_2d872dd2c0a63a9f3b5c9c5e4972e06fcf89d28b4c7f59aea7ea4d38f5a6bf0db6_2dmerged_2emount    org.freedesktop.DBus.Properties    PropertiesChanged
sig    1562263380.958025    132870369    :1.157    <none>    /org/freedesktop/systemd1/unit/var_2dlib_2ddocker_2doverlay2_2d872dd2c0a63a9f3b5c9c5e4972e06fcf89d28b4c7f59aea7ea4d38f5a6bf0db6_2dmerged_2emount    org.freedesktop.DBus.Properties    PropertiesChanged
sig    1562263380.958029    132870370    :1.157    <none>    /org/freedesktop/systemd1/unit/home_2dkubernetes_2dcontainerized_5fmounter_2drootfs_2dvar_2dlib_2dkubelet_2dpods_2d2f4e6eae_5cx2d9e51_5cx2d11e9_5cx2db4ee_5cx2d42010a80000f_2dvolumes_2dkubernetes_2eio_5cx7esecret_2dfluentd_5cx2dgcp_5cx2dtoken_5cx2dzfrkb_2emount    org.freedesktop.DBus.Properties    PropertiesChanged
sig    1562263380.958111    132870371    :1.157    <none>    /org/freedesktop/systemd1/unit/home_2dkubernetes_2dcontainerized_5fmounter_2drootfs_2dvar_2dlib_2dkubelet_2dpods_2d2f4e6eae_5cx2d9e51_5cx2d11e9_5cx2db4ee_5cx2d42010a80000f_2dvolumes_2dkubernetes_2eio_5cx7esecret_2dfluentd_5cx2dgcp_5cx2dtoken_5cx2dzfrkb_2emount    org.freedesktop.DBus.Properties    PropertiesChanged
sig    1562263380.958117    132870372    :1.157    <none>    /org/freedesktop/systemd1/unit/var_2dlib_2dkubelet_2dpods_2d2f4e6eae_5cx2d9e51_5cx2d11e9_5cx2db4ee_5cx2d42010a80000f_2dvolumes_2dkubernetes_2eio_5cx7esecret_2dfluentd_5cx2dgcp_5cx2dtoken_5cx2dzfrkb_2emount    org.freedesktop.DBus.Properties    PropertiesChanged
sig    1562263380.958121    132870373    :1.157    <none>    /org/freedesktop/systemd1/unit/var_2dlib_2dkubelet_2dpods_2d2f4e6eae_5cx2d9e51_5cx2d11e9_5cx2db4ee_5cx2d42010a80000f_2dvolumes_2dkubernetes_2eio_5cx7esecret_2dfluentd_5cx2dgcp_5cx2dtoken_5cx2dzfrkb_2emount    org.freedesktop.DBus.Properties    PropertiesChanged
sig    1562263380.958126    132870374    :1.157    <none>    /org/freedesktop/systemd1/unit/var_2dlib_2ddocker_2doverlay2_2d667d713fe45ea0af609c85fbfd3aafbca9494574fe10509bda8cd3d13d1e6b66_2dmerged_2emount    org.freedesktop.DBus.Properties    PropertiesChanged
sig    1562263380.958223    132870375    :1.157    <none>    /org/freedesktop/systemd1/unit/var_2dlib_2ddocker_2doverlay2_2d667d713fe45ea0af609c85fbfd3aafbca9494574fe10509bda8cd3d13d1e6b66_2dmerged_2emount    org.freedesktop.DBus.Properties    PropertiesChanged
sig    1562263380.958229    132870376    :1.157    <none>    /org/freedesktop/systemd1/unit/var_2dlib_2ddocker_2doverlay2_2d0a9f899f23c0965b43cae81dd04f46a78e4b1b063fa5679323146b06932474a9_2dmerged_2emount    org.freedesktop.DBus.Properties    PropertiesChanged
sig    1562263380.958233    132870377    :1.157    <none>    /org/freedesktop/systemd1/unit/var_2dlib_2ddocker_2doverlay2_2d0a9f899f23c0965b43cae81dd04f46a78e4b1b063fa5679323146b06932474a9_2dmerged_2emount    org.freedesktop.DBus.Properties    PropertiesChanged
sig    1562263380.958238    132870378    :1.157    <none>    /org/freedesktop/systemd1/unit/var_2dlib_2ddocker_2doverlay2_2d875734c124694cd54a7c26516f740cedcb853a18ff5805a89669747c1188b65d_2dmerged_2emount    org.freedesktop.DBus.Properties    PropertiesChanged
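
Given how dominated that output is by mount-unit PropertiesChanged signals, a rough way to quantify which senders and object paths are busiest (same eavesdropping fallback as above; sample for 30 seconds):

timeout 30 dbus-monitor --system 2>/dev/null \
  | grep -oE 'sender=[^ ;]+|path=[^ ;]+' | sort | uniq -c | sort -rn | head -n 15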


Get this bounty!!!

#StackBounty: #kubernetes #istio Using Gateway + VirtualService + http01 + SDS

Bounty: 100

In the Istio documentation there is an example, Securing Kubernetes Ingress with Cert-Manager, which does not use Gateway + VirtualService.

I have tried to make it work with ACME http-01, but the certificate cannot be issued: in the challenge logs I see a 404 error. It seems the challenge request cannot reach the domain to be verified. Is there any best practice for the setup I mentioned?

[Update 1]

I want to use an Istio Gateway with the SDS option for TLS and secure it using cert-manager with http-01.

In the documentation I found examples like Securing Kubernetes Ingress with Cert-Manager and Deploy a Custom Ingress Gateway Using Cert-Manager. However, these examples either use the Kubernetes Ingress resource itself (not an Istio Gateway) or, like the second example, use dns-01.

I need instructions covering an Istio Gateway with the SDS option for TLS, secured by cert-manager with http-01. The Istio Gateway gives me the ability to use a VirtualService.
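
To make the target setup concrete, this is roughly the Gateway I have in mind (a sketch only; the hostname, gateway name and Secret name are placeholders, and the certificate Secret is assumed to be created by cert-manager in istio-system):

kubectl apply -f - <<'EOF'
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: my-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway
  servers:
  - port: { number: 80, name: http, protocol: HTTP }      # keep plain HTTP open for the http-01 challenge
    hosts: [ "app.example.com" ]
  - port: { number: 443, name: https, protocol: HTTPS }
    hosts: [ "app.example.com" ]
    tls:
      mode: SIMPLE
      credentialName: app-example-com-tls                  # SDS serves the cert from this Secret
EOF

The part I'm missing is how requests to /.well-known/acme-challenge/... on port 80 are supposed to reach cert-manager's http-01 solver through this Gateway (presumably a VirtualService), which is where my 404 on the challenge seems to come from.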

Thanks!


Get this bounty!!!

#StackBounty: #kubernetes #kubernetes-helm #istio #telemetry ISTIO: telemetry traffic not showing correctly in grafana & kiali

Bounty: 50

I am new to Kubernetes & Istio. While trying to apply the Bookinfo tutorial to my personal project, I don't get the same results when monitoring traffic through the Kiali UI or the Grafana UI.

I believe I didn't change much from the Bookinfo project; here is the config I used (templated with Helm):

##################################################################################################
# Webapp services
##################################################################################################
apiVersion: v1
kind: Service
metadata:
  name: "{{ .Values.service.name }}-svc"
  namespace: "{{ .Values.service.namespace }}"
  labels:
    app: "{{ .Values.service.name }}"
    service: "{{ .Values.service.name }}-svc"
spec:
  ports:
  - port: {{ .Values.service.port }}
    name: "{{ .Values.service.name }}-http"
  selector:
    app: "{{ .Values.service.name }}"
---
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: "{{ .Values.service.name }}-{{ .Values.service.version }}"
  namespace: "{{ .Values.service.namespace }}"
  labels:
    app: "{{ .Values.service.name }}"
    version: "{{ .Values.service.version }}"
spec:
  replicas: {{ .Values.productionDeployment.replicaCount }}
  selector:
    matchLabels:
      app: "{{ .Values.service.name }}"
      version: "{{ .Values.service.version }}"
  template:
    metadata:
      labels:
        app: "{{ .Values.service.name }}"
        version: "{{ .Values.service.version }}"
    spec:
      containers:
      - name: "{{ .Values.service.name }}"
        image: "{{ .Values.productionDeployment.image.repository }}:{{ .Values.productionDeployment.image.tag }}"
        imagePullPolicy: {{ .Values.productionDeployment.image.pullPolicy }}
        ports:
        - containerPort: {{ .Values.service.port }}
---

And here's the Istio config I used:

##################################################################################################
# Webapp gateway & virtual service
##################################################################################################
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: "{{ .Values.service.name }}-gateway"
  namespace: "{{ .Values.service.namespace }}"
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: "{{ .Values.service.name }}"
  namespace: "{{ .Values.service.namespace }}"
spec:
  hosts:
  - "*"
  gateways:
  - "{{ .Values.service.name }}-gateway"
  http:
  - match:
    - uri:
        exact: /
    route:
    - destination:
        host: "{{ .Values.service.name }}-svc"
        port:
          number: {{ .Values.service.port }}
---

This is what I see in Kiali:

[Kiali screenshot omitted]

And in Grafana (notice there is no service request volume):

[Grafana screenshot omitted]

However, in Prometheus I do see traces:

[Prometheus screenshot omitted]
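
Two basic checks I've seen suggested for missing Istio telemetry, written out as a sketch (the namespace is a placeholder):

# 1) Every workload pod should show two containers (app + istio-proxy) if sidecar injection worked:
kubectl get pods -n my-namespace -o custom-columns=NAME:.metadata.name,CONTAINERS:.spec.containers[*].name
# 2) Service port names should follow Istio's <protocol>[-suffix] convention (e.g. "http" or "http-web");
#    other names are treated as plain TCP, which would hide HTTP request metrics:
kubectl get svc -n my-namespace -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.spec.ports[*].name}{"\n"}{end}'

If the convention matters here, the port name "{{ .Values.service.name }}-http" above puts the protocol last rather than first, so it may be getting classified as TCP.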


Get this bounty!!!

#StackBounty: #kubernetes Can't get the usage of PVC through kubelet metrics

Bounty: 100

I have a Kubernetes 1.9.11 cluster on bare-metal machines running CoreOS 1576.5.0.

Recently I deployed a GlusterFS 4.1.7 cluster, managed by Heketi 8, and created a lot of PVCs to be used by some StatefulSet applications. The problem is, I can't get metrics about these PVCs through the kubelet's port 10250:

curl -k https://aa05:10250/metrics 2>/dev/null | grep kubelet_volume_stats | wc -l
0

So, how can I get these metrics?
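
For reference, the same kubelet port also serves the stats summary API that the volume metrics are derived from; dumping it shows whether PVC stats are being collected at all (same node and implicit auth as the curl above):

curl -sk https://aa05:10250/stats/summary -o summary.json
grep -c pvcRef summary.json    # 0 would mean no per-PVC stats are being collected, not just not exported

If pvcRef never appears there, the kubelet isn't gathering usage for these volumes in the first place, so no kubelet_volume_stats_* series can show up on /metrics.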

Any hints will be appreciated.


Get this bounty!!!

#StackBounty: #networking #nginx #tcp #kubernetes #minikube Configuring TCP services with nginx ingress on minikube/k8s

Bounty: 50

I’m new to k8s/minikube (and to some extent, unix networking in general) so if I ask something that seems to make no sense, I’ll be happy to clarify!

Goal

I want to configure a port-based TCP ingress, as described briefly in the nginx-ingress docs. In particular, I want to use the webpack-dev-server from inside minikube.

Error

When it’s set up according to my best understanding, I still get Failed to load resource: net::ERR_CONNECTION_REFUSED when requesting local.web:3001/client.js. That is, navigating in my browser to ‘local.web/’ brings up the page, but without the bundle that webpack is meant to be producing. The request for that fails.

Configuration

Moving from the host machine down to the minikube pod, here is what I have:

/etc/hosts:

On my dev machine, I set local.web to the minikube IP

$ echo "$(minikube ip) local.web" | sudo tee -a /etc/hosts

Ingress:

{
  "kind": "Ingress",
  "apiVersion": "extensions/v1beta1",
  "metadata": {
    "name": "dev-web-ingress",
    "namespace": "dev",
    "selfLink": "/apis/extensions/v1beta1/namespaces/dev/ingresses/dev-web-ingress",
    "uid": "64ebfc93-612e-11e9-8df7-0800270e7244",
    "resourceVersion": "280750",
    "generation": 3,
    "creationTimestamp": "2019-04-17T16:32:30Z",
    "labels": {
      "platform": "advocate",
      "tier": "frontend"
    },
    "annotations": {
      "kubectl.kubernetes.io/last-applied-configuration": "{"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{"kubernetes.io/ingress.class":"nginx"},"labels":{"platform":"advocate","tier":"frontend"},"name":"dev-web-ingress","namespace":"dev"},"spec":{"rules":[{"host":"local.web","http":{"paths":[{"backend":{"serviceName":"dev-adv-web-service","servicePort":"http"},"path":"/"}]}}]}}n",
      "kubernetes.io/ingress.class": "nginx"
    }
  },
  "spec": {
    "rules": [
      {
        "host": "local.web",
        "http": {
          "paths": [
            {
              "path": "/",
              "backend": {
                "serviceName": "dev-adv-web-service",
                "servicePort": "http"
              }
            }
          ]
        }
      }
    ]
  },
  "status": {
    "loadBalancer": {
      "ingress": [
        {
          "ip": "10.0.2.15"
        }
      ]
    }
  }
}

TCP Services

{
  "kind": "ConfigMap",
  "apiVersion": "v1",
  "metadata": {
    "name": "tcp-services",
    "namespace": "dev",
    "selfLink": "/api/v1/namespaces/dev/configmaps/tcp-services",
    "uid": "5e456f3e-622e-11e9-bcf8-0800270e7244",
    "resourceVersion": "295220",
    "creationTimestamp": "2019-04-18T23:04:50Z",
    "annotations": {
      "kubectl.kubernetes.io/last-applied-configuration": "{"apiVersion":"v1","data":{"3001":"dev/dev-adv-web-service:3001","9290":"dev/dev-echoserver:8080"},"kind":"ConfigMap","metadata":{"annotations":{},"name":"tcp-services","namespace":"dev"}}n"
    }
  },
  "data": {
    "3001": "dev/dev-adv-web-service:3001",
  }
}

Service:

{
  "kind": "Service",
  "apiVersion": "v1",
  "metadata": {
    "name": "dev-adv-web-service",
    "namespace": "dev",
    "selfLink": "/api/v1/namespaces/dev/services/dev-adv-web-service",
    "uid": "64e3c65d-612e-11e9-8df7-0800270e7244",
    "resourceVersion": "280675",
    "creationTimestamp": "2019-04-17T16:32:30Z",
    "labels": {
      "app": "adv-web",
      "tier": "frontend"
    },
    "annotations": [... edited for clarity]
  },
  "spec": {
    "ports": [
      {
        "name": "http",
        "protocol": "TCP",
        "port": 80,
        "targetPort": 3000,
        "nodePort": 31246
      },
      {
        "name": "http2",
        "protocol": "TCP",
        "port": 3001,
        "targetPort": 3001,
        "nodePort": 31392
      }
    ],
    "selector": {
      "app": "frontend-container",
      "tier": "frontend"
    },
    "clusterIP": "10.108.24.80",
    "type": "LoadBalancer",
    "sessionAffinity": "None",
    "externalTrafficPolicy": "Cluster"
  },
  "status": {
    "loadBalancer": {}
  }
}

Pod

{
  "kind": "Pod",
  "apiVersion": "v1",
  "metadata": {
    "name": "dev-adv-web-768767454f-wxvnh",
    "generateName": "dev-adv-web-768767454f-",
    "namespace": "dev",
    "selfLink": "/api/v1/namespaces/dev/pods/dev-adv-web-768767454f-wxvnh",
    "uid": "65de844e-622c-11e9-bcf8-0800270e7244",
    "resourceVersion": "294073",
    "creationTimestamp": "2019-04-18T22:50:43Z",
    "labels": {
      "app": "frontend-container",
      "pod-template-hash": "768767454f",
      "tier": "frontend"
    },
    "ownerReferences": [
      {
        "apiVersion": "apps/v1",
        "kind": "ReplicaSet",
        "name": "dev-adv-web-768767454f",
        "uid": "4babd3e7-613d-11e9-8df7-0800270e7244",
        "controller": true,
        "blockOwnerDeletion": true
      }
    ]
  },
  "spec": {
    "volumes": [
      {
        "name": "frontend-repo",
        "hostPath": {
          "path": "/Users/me/Projects/code/frontend",
          "type": ""
        }
      },
      {
        "name": "default-token-7rfht",
        "secret": {
          "secretName": "default-token-7rfht",
          "defaultMode": 420
        }
      }
    ],
    "containers": [
      {
        "name": "adv-web-container",
        "image": "localhost:5000/react:dev",
        "command": [
          "npm",
          "run",
          "dev"
        ],
        "ports": [
          {
            "name": "http",
            "containerPort": 3000,
            "protocol": "TCP"
          },
          {
            "name": "http2",
            "containerPort": 3001,
            "protocol": "TCP"
          }
        ],
        "env": [
          {
            "name": "HOSTNAME_PUBLISHED",
            "valueFrom": {
              "configMapKeyRef": {
                "name": "dev-frontend-configmap",
                "key": "HOSTNAME_PUBLISHED"
              }
            }
          },
          {
            "name": "LOCAL_DOMAIN",
            "valueFrom": {
              "configMapKeyRef": {
                "name": "dev-frontend-configmap",
                "key": "LOCAL_DOMAIN"
              }
            }
          },
          {
            "name": "HOST",
            "valueFrom": {
              "configMapKeyRef": {
                "name": "dev-frontend-configmap",
                "key": "HOST"
              }
            }
          },
          {
            "name": "WEBPACK_PUBLISHED_PORT",
            "valueFrom": {
              "configMapKeyRef": {
                "name": "dev-frontend-configmap",
                "key": "WEBPACK_PUBLISHED_PORT"
              }
            }
          },
          {
            "name": "WEBPACK_LISTEN_PORT",
            "valueFrom": {
              "configMapKeyRef": {
                "name": "dev-frontend-configmap",
                "key": "WEBPACK_LISTEN_PORT"
              }
            }
          },
          {
            "name": "API_URL",
            "valueFrom": {
              "configMapKeyRef": {
                "name": "dev-frontend-configmap",
                "key": "API_URL"
              }
            }
          },
          {
            "name": "LOGIN_CALLBACK_URL",
            "valueFrom": {
              "configMapKeyRef": {
                "name": "dev-frontend-configmap",
                "key": "LOGIN_CALLBACK_URL"
              }
            }
          },
          {
            "name": "NPM_CONFIG_LOGLEVEL",
            "valueFrom": {
              "configMapKeyRef": {
                "name": "dev-frontend-configmap",
                "key": "NPM_CONFIG_LOGLEVEL"
              }
            }
          }
        ],
        "resources": {},
        "volumeMounts": [
          {
            "name": "frontend-repo",
            "mountPath": "/code"
          },
          {
            "name": "default-token-7rfht",
            "readOnly": true,
            "mountPath": "/var/run/secrets/kubernetes.io/serviceaccount"
          }
        ],
        "terminationMessagePath": "/dev/termination-log",
        "terminationMessagePolicy": "File",
        "imagePullPolicy": "Always"
      }
    ],
    "restartPolicy": "Always",
    "terminationGracePeriodSeconds": 30,
    "dnsPolicy": "ClusterFirst",
    "serviceAccountName": "default",
    "serviceAccount": "default",
    "nodeName": "minikube",
    "securityContext": {},
    "schedulerName": "default-scheduler",
    "tolerations": [
      {
        "key": "node.kubernetes.io/not-ready",
        "operator": "Exists",
        "effect": "NoExecute",
        "tolerationSeconds": 300
      },
      {
        "key": "node.kubernetes.io/unreachable",
        "operator": "Exists",
        "effect": "NoExecute",
        "tolerationSeconds": 300
      }
    ],
    "priority": 0,
    "enableServiceLinks": true
  },
  "status": {
    "phase": "Running",
    "conditions": [
      {
        "type": "Initialized",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-04-18T22:50:43Z"
      },
      {
        "type": "Ready",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-04-18T22:50:45Z"
      },
      {
        "type": "ContainersReady",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-04-18T22:50:45Z"
      },
      {
        "type": "PodScheduled",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2019-04-18T22:50:43Z"
      }
    ],
    "hostIP": "10.0.2.15",
    "podIP": "172.17.0.13",
    "startTime": "2019-04-18T22:50:43Z",
    "containerStatuses": [
      {
        "name": "adv-web-container",
        "state": {
          "running": {
            "startedAt": "2019-04-18T22:50:44Z"
          }
        },
        "lastState": {},
        "ready": true,
        "restartCount": 0,
        "image": "localhost:5000/react:dev",
        "imageID": "docker-pullable://localhost:5000/react@sha256:2bfe61ed134044bff4b23f5c057af2f9c480c3c1a1927a485f09f3410528903d",
        "containerID": "docker://57b9b6dafaf2aba8a21d5dd7db3543f4742c00331b49b48dc1561e3b5bd05315"
      }
    ],
    "qosClass": "BestEffort"
  }
}

Hypotheses

One thought was that the namespace on the TCP services ConfigMap was wrong. It’s not clear to me from the docs where that’s supposed to live. I have tried it in the namespace dev, where the ingress, service, and deployment/pods live. I also tried adding the data entry as above to the tcp-services ConfigMap in kube-system.
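
To make that concrete, these are the checks that seem relevant (a sketch; the controller resource names are placeholders for whatever the minikube ingress addon created):

# 1) Which ConfigMap is the controller actually told to watch for TCP services?
kubectl -n kube-system get deploy nginx-ingress-controller -o yaml | grep -- --tcp-services-configmap
# 2) Is port 3001 exposed on the controller itself (its Service or hostPorts)?
kubectl -n kube-system get svc,deploy -o wide | grep -i ingress
# 3) Bypass the ingress controller entirely: does the service's own NodePort answer?
curl -v "http://$(minikube ip):31392/client.js"    # 31392 = nodePort of the "http2" port above

Even if the ConfigMap is in the right namespace, the controller still has to expose port 3001 itself before traffic to $(minikube ip):3001 can reach the tcp-services mapping.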

The logs for the webpack pod show no errors, so I don’t believe the problem is at the application level.

Since the GET local.web/ is returning data from the pod, I am convinced the service is at least partially correct.

I’m willing to perform any debugging you can suggest, and I have no illusions that I know what’s going on, so I’ll be grateful for any help offered.


Get this bounty!!!