#StackBounty: #rhel #kubernetes Kubernetes Failed to list *v1.ConfigMap: Get

Bounty: 100

I have a K8s cluster with 2 servers (1 master, 1 worker), and everything works fine.

Cluster deploy:

kubeadm init --pod-network-cidr "10.11.0.0/16" --upload-certs 

cluster:

ardc01k8s-master01.nps.local   Ready    master   41m   v1.19.2   10.10.80.1    <none>        Red Hat Enterprise Linux 8.2 (Ootpa)   4.18.0-193.19.1.el8_2.x86_64   docker://19.3.13
ardc01k8s-wrk01.nps.local      Ready    <none>   34m   v1.19.2   10.10.80.11   <none>        Red Hat Enterprise Linux 8.2 (Ootpa)   4.18.0-193.19.1.el8_2.x86_64   docker://19.3.13

When I deploy the cluster with HA (a control-plane endpoint for multiple masters), MetalLB fails: it can't read its ConfigMap to see the IPs it can assign.

kubeadm init --control-plane-endpoint "10.10.80.10:6443" --pod-network-cidr "10.11.0.0/16" --upload-certs 

metallb controller error:

E1009 19:34:56.370850       1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list *v1.Service: Get https://10.12.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.12.0.1:443: i/o timeout
I1009 19:34:56.371672       1 trace.go:81] Trace[1783558010]: "Reflector pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98 ListAndWatch" (started: 2020-10-09 19:34:26.371367929 +0000 UTC m=+1612.080726327) (total time: 30.000286335s):
Trace[1783558010]: [30.000286335s] [30.000286335s] END

MetalLB can't read the ConfigMap, and the services are stuck in <pending> state.

All servers are fresh RHEL installs and I made snapshots so I can roll back.
Any ideas?
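
For reference, a minimal way to check whether pods can reach the API server at all (10.12.0.1 is the kubernetes Service ClusterIP taken from the error above; curlimages/curl is just an arbitrary image that ships curl):

# the ClusterIP of the kubernetes Service should match the address in the error
kubectl get svc kubernetes -n default

# kube-proxy and the CNI pods should be running on every node
kubectl get pods -n kube-system -o wide

# try to reach the API Service from inside a throwaway pod;
# even a 403 proves the network path works, a timeout means it does not
kubectl run -it --rm api-check --image=curlimages/curl --restart=Never -- \
  curl -k -m 5 https://10.12.0.1:443/version

If that curl also times out, pod-to-ClusterIP traffic in general is broken, not just MetalLB.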


Get this bounty!!!

#StackBounty: #kubernetes #cert-manager Kubernetes Cert-Manager installation (No helm)

Bounty: 50

I followed this document https://www.scaleway.com/en/docs/how-to-setup-traefikv2-and-cert-manager-on-kapsule/ (no Helm) to set up cert-manager with Let's Encrypt on Kubernetes, but it didn't generate a certificate. I was expecting to see "Certificate issued successfully".

This is the ClusterIssuer manifest:

apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
  name: letsencrypt
spec:
  acme:
    email: admin@example.com
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: letsencrypt-test-key
    solvers:
    - dns01:
        route53:
          region: us-west-2
          hostedZoneID: ######
          accessKeyID: #######
          secretAccessKeySecretRef:
            name: aws-secret
            key: secret_key
      selector:
          dnsZones:
            - "example.com"

The Certificate manifest:

apiVersion: cert-manager.io/v1alpha2
kind: Certificate
metadata:
  name: test-cert
  namespace: cert-manager
spec:
  commonName: '*.test.example.com'
  secretName: test-cert
  dnsNames:
    - '*.test.example.com'
  issuerRef:
    name: letsencrypt
    kind: ClusterIssuer

The following is the status of the certificate test-cert:

% kubectl -n cert-manager describe certificate test-cert
Name:         test-cert
Namespace:    cert-manager
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"cert-manager.io/v1alpha2","kind":"Certificate","metadata":{"annotations":{},"name":"test-cert","namespace":"cert-manager"},...
API Version:  cert-manager.io/v1
Kind:         Certificate
Metadata:
  Creation Timestamp:  2020-09-30T18:20:14Z
  Generation:          1
  Resource Version:    114851206
  Self Link:           /apis/cert-manager.io/v1/namespaces/cert-manager/certificates/test-cert
  UID:                 c552c42a-6202-40f8-8e9d-f47387f3cf1c
Spec:
  Common Name:  *.test.example.com
  Dns Names:
    *.test.example.com
  Issuer Ref:
    Kind:       ClusterIssuer
    Name:       letsencrypt
  Secret Name:  test-cert
Status:
  Conditions:
    Last Transition Time:        2020-09-30T18:20:14Z
    Message:                     Issuing certificate as Secret does not exist
    Reason:                      DoesNotExist
    Status:                      True
    Type:                        Issuing
    Last Transition Time:        2020-09-30T18:20:14Z
    Message:                     Issuing certificate as Secret does not exist
    Reason:                      DoesNotExist
    Status:                      False
    Type:                        Ready
  Next Private Key Secret Name:  test-cert-j2bdf
Events:                          <none>

IngressRoute for the test app:

apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: app-external
  namespace: cert-manager
spec:
  entryPoints:
    - web
    - websecure
  routes:
    - match: Host(`app.test.example.com`)
      kind: Rule
      services:
        - name: nginx
          port: 80
      middlewares:
        - name: https-redirect
          namespace: cert-manager
  tls:
    secretName: test-cert

CertificateRequest

$ kubectl -n cert-manager describe CertificateRequest test-cert-2glmx
Name:         test-cert-2glmx
Namespace:    cert-manager
Labels:       <none>
Annotations:  cert-manager.io/certificate-name: test-cert
              cert-manager.io/certificate-revision: 1
              cert-manager.io/private-key-secret-name: test-cert-j2bdf
              kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"cert-manager.io/v1alpha2","kind":"Certificate","metadata":{"annotations":{},"name":"test-cert","namespace":"cert-manager"},...
API Version:  cert-manager.io/v1
Kind:         CertificateRequest
Metadata:
  Creation Timestamp:  2020-09-30T18:20:14Z
  Generate Name:       test-cert-
  Generation:          1
  Owner References:
    API Version:           cert-manager.io/v1
    Block Owner Deletion:  true
    Controller:            true
    Kind:                  Certificate
    Name:                  test-cert
    UID:                   c552c42a-6202-40f8-8e9d-f47387f3cf1c
  Resource Version:        114851218
  Self Link:               /apis/cert-manager.io/v1/namespaces/cert-manager/certificaterequests/test-cert-2glmx
  UID:                     d275cb9f-a1d0-417c-a0de-6a1a76193c31
Spec:
  Issuer Ref:
    Kind:   ClusterIssuer
    Name:   letsencrypt
  Request:  LS0t...002b1JBCkZGREU3Mk.....nOFlqQW9......FvWkZhb....NjlzM3RtZm....gvW...0tLUV....0tLS0K
Status:
  Conditions:
    Last Transition Time:  2020-09-30T18:20:14Z
    Message:               Waiting on certificate issuance from order cert-manager/test-cert-2glmx-2027085711: "pending"
    Reason:                Pending
    Status:                False
    Type:                  Ready
Events:
  Type    Reason        Age    From          Message
  ----    ------        ----   ----          -------
  Normal  OrderCreated  6m42s  cert-manager  Created Order resource cert-manager/test-cert-2glmx-2027085711

Why is the certificate order stuck in the pending state?

What's wrong with my setup?
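
For completeness, the drill-down I can do from here (the Order name comes from the event above; cert-manager is the default controller deployment name, and <challenge-name> is whatever the get command returns):

# follow the chain Certificate -> CertificateRequest -> Order -> Challenge
kubectl -n cert-manager describe order test-cert-2glmx-2027085711
kubectl -n cert-manager get challenges
kubectl -n cert-manager describe challenge <challenge-name>

# the controller logs usually say why a DNS-01 challenge is stuck (e.g. Route53 credentials or hosted zone)
kubectl -n cert-manager logs deploy/cert-manager | grep -i test-cert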


Get this bounty!!!

#StackBounty: #nginx #kubernetes #docker-registry #http-status-code-413 #kube-registry docker push causes http error 413: client intend…

Bounty: 50

I have a minikube K8s cluster running on a Hyper-V VM.
The cluster has an nginx ingress controller to expose services outside of the cluster.
Outside of the cluster, requests are exposed to the internet through an nginx that runs on the host machine.

Inside the cluster I also have a kube-registry with kube-registry-proxy running to store my docker images.

However, there is an issue when I try to push larger images (about 32 MB). In this case I get a 413, with the following error in my host nginx log file (redacted for privacy reasons):

<redactedIP> - - [<redactedLocalDateTime>] "PATCH /v2/<redactedImageName>/blobs/uploads/<redactedUuid>?_state=<redactedState>%3D HTTP/1.1" 413 183 "-" "docker/19.03.12 go/go1.13.10 git-commit/48a66213fe kernel/4.19.114 os/linux arch/amd64 UpstreamClient(Docker-Client/19.03.12 x5C(windowsx5C))"

When looking in my ingress-nginx-controller logs in my k8s cluster, I see the following message:

[error] 371#371: *1004283 client intended to send too large body: 9930410 bytes, client: 127.0.0.1, server: , request: "PATCH /v2/<redactedImageName>/blobs/uploads/<redactedUuid> HTTP/1.1", host: ".<redactedHost>"

And looking at my kube-registry logs, I don’t see the PATCH request at all.
This suggests to me that the issue lies with the ingress-nginx-controller in my cluster.

I have looked into the issue and found some clues regarding setting client_max_body_size and client_body_buffer_size, either via the nginx ConfigMap:

apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    app.kubernetes.io/name: ingress-nginx
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  body-size: "1024m"
  proxy-body-size: "1024m"
  client-max-body-size: "1024m"
  client-body-buffer-size: "1024m"

This is set in the deployment of the ingress-nginx-controller using the --configmap argument.
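
To confirm the controller actually picks up that ConfigMap, its logs can be checked (assuming the deployment is named ingress-nginx-controller like the ConfigMap above; the exact log wording differs between controller versions):

# the controller logs a reload, and warnings about unknown keys, when the ConfigMap changes
kubectl -n ingress-nginx logs deploy/ingress-nginx-controller | grep -iE 'configmap|reload'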

I also tried to set it using annotations on the kube-registry ingress:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  labels:
    app: kube-registry-ingress
  name: kube-registry-ingress
  namespace: kube-system
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/proxy-body-size: "1024m"
    nginx.org/client-max-body-size: "1024m"
    ...

I also made sure that client_max_body_size and client_body_buffer_size were set in the host nginx config.

I verified that the nginx config from the ingress-nginx-controller was set:

http {                       
        client_body_buffer_size         1024m;           
   
        ...

        ## start server <redactedDomain>
        server {                                                                                                                                                                                                                                                                                                                     
                server_name <redactedDomain> ;                                                                                           
                                                                                                                                                                                                                                                                                                                                                          
                listen 80  ;                                                                                                                                                                                                                                                                                                         
                listen 443  ssl http2 ;                                                                                                        
                                                                                                                                                                                                                                                                                                                                                          
                set $proxy_upstream_name "-";                                                                                                                                                                                                                                                                                        
                                                                                                                                               
                ssl_certificate_by_lua_block {                                                                                                                                                                                                                                                                                                            
                        certificate.call()                                                                                                                                                                                                                                                                                           
                }                                                                                                                              
                                                                                                                                                                                                                                                                                                                                                          
                location / {                                                                                                                                                                                                                                                                                                         
                                                                                                                                               
                        set $namespace      "kube-system";                                                                                                                                                                                                                                                                                                
                        set $ingress_name   "kube-registry-ingress";                                                                                                                                                                                                                                                                 
                        set $service_name   "kube-registry-proxy";                                                                             
                        set $service_port   "80";                                                                                                                                                                                                                                                                                                         
                        set $location_path  "/";                                                                                                                                                                                                                                                                                     
                                                                                                                                               
                        rewrite_by_lua_block {                                                                                                                                                                                                                                                                                                            
                                lua_ingress.rewrite({                                                                                                                                                                                                                                                                                
                                        force_ssl_redirect = false,                                                                            
                                        ssl_redirect = true,                                                                                                                                                                                                                                                                                              
                                        force_no_ssl_redirect = false,                                                                                                                                                                                                                                                               
                                        use_port_in_redirects = false,                                                                         
                                })                                                                                                                                                                                                                                                                                                                        
                                balancer.rewrite()                                                                                                                                                                                                                                                                                   
                                plugins.run()                                                                                                  
                        }                                                                                                                                                                                                                                                                                                                                 
                                                                                                                                                                                                                                                                                                                                     
                        # be careful with `access_by_lua_block` and `satisfy any` directives as satisfy any                                    
                        # will always succeed when there's `access_by_lua_block` that does not have any lua code doing `ngx.exit(ngx.DECLINED)`                                                                                                                                                                                                           
                        # other authentication method such as basic auth or external auth useless - all requests will be allowed.                                                                                                                                                                                                    
                        #access_by_lua_block {                                                                                                 
                        #}                                                                                                                                                                                                                                                                                                                                
                                                                                                                                                                                                                                                                                                                                     
                        header_filter_by_lua_block {                                                                                           
                                lua_ingress.header()                                                                                                                                                                                                                                                                                                      
                                plugins.run()                                                                                                                                                                                                                                                                                        
                        }                                                                                                                      
                                                                                                                                                                                                                                                                                                                                                          
                        body_filter_by_lua_block {                                                                                                                                                                                                                                                                                   
                        }                                                                                                                      
                                                                                                                                                                                                                                                                                                                                                          
                        log_by_lua_block {                                                                                                                                                                                                                                                                                           
                                balancer.log()                                                                                                 
                                                                                                                                                                                                                                                                                                                                                          
                                monitor.call()                                                                                                                                                                                                                                                                                       
                                                                                                                                               
                                plugins.run()                                                                                                                                                                                                                                                                                                             
                        }                                                                                                                                                                                                                                                                                                            
                                                                                                                                               
                        port_in_redirect off;                                                                                                                                                                                                                                                                                                             
                                                                                                                                                                                                                                                                                                                                     
                        set $balancer_ewma_score -1;                                                                                           
                        set $proxy_upstream_name "kube-system-kube-registry-proxy-80";                                                                                                                                                                                                                                                                    
                        set $proxy_host          $proxy_upstream_name;                                                                                                                                                                                                                                                               
                        set $pass_access_scheme  $scheme;                                                                                      
                                                                                                                                                                                                                                                                                                                                                          
                        set $pass_server_port    $server_port;                                                                                                                                                                                                                                                                       
                                                                                                                                               
                        set $best_http_host      $http_host;                                                                                                                                                                                                                                                                                              
                        set $pass_port           $pass_server_port;                                                                                                                                                                                                                                                                  
                                                                                                                                               
                        set $proxy_alternative_upstream_name "";                                                                                                                                                                                                                                                                                          
                                                                                                                                                                                                                                                                                                                                     
                        client_max_body_size                    1024m;                                                                         
                                                                                                                                                                                                                                                                                                                                                          
                        ...                                                                                                                                                                                                                                                                          
                        # Custom headers to proxied server   
                  
                        proxy_connect_timeout                   5s;                                                                                                                                                                                                                                                                  
                        proxy_send_timeout                      60s;                                                                           
                        proxy_read_timeout                      60s;                                                                                                                                                                                                                                                                                      
                                                                                                                                                                                                                                                                                                                                     
                        proxy_buffering                         off;                                                                           
                        proxy_buffer_size                       8k;                                                                                                                                                                                                                                                                                       
                        proxy_buffers                           4 8k;                                                                                                                                                                                                                                                                
                                                                                                                                               
                        proxy_max_temp_file_size                1024m;                                                                                                                                                                                                                                                                                    
                                                                                                                                                                                                                                                                                                                                     
                        proxy_request_buffering                 on;                                                                            
                        ...                                                                                                                                                                                                                                                                                   
                                                                                                                                                                                                                                                                                                                                     
                }                                                                                                                              
                                                                                                                                                                                                                                                                                                                                                          
        }                                                                                                                                                                                                                                                                                                                            
        ## end server <redactedDomain>      

However, this didn’t seem to help.

My question is: how can I prevent a 413 HTTP status code that most likely comes from the ingress-nginx-controller?
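
To narrow down which layer returns the 413, one test is to send an oversized body through each hop separately (big-blob.bin, NODE_IP and NODEPORT are placeholders for my setup; the Host header is the redacted registry host):

# dummy payload larger than any default body-size limit
dd if=/dev/zero of=big-blob.bin bs=1M count=32

# 1) through the host nginx, i.e. the public route docker push takes
curl -k -s -o /dev/null -w '%{http_code}\n' -X POST \
  --data-binary @big-blob.bin https://<redactedHost>/v2/

# 2) directly against the ingress-nginx controller, bypassing the host nginx
curl -k -s -o /dev/null -w '%{http_code}\n' -X POST \
  -H 'Host: <redactedHost>' --data-binary @big-blob.bin https://NODE_IP:NODEPORT/v2/

A 413 from (2) points at the controller; a 413 only from (1) points at the host nginx.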


Get this bounty!!!

#StackBounty: #ubuntu #networking #linux-networking #kubernetes managing multiple LAN on bare metals in kubernetes

Bounty: 50

Here is my setup:

  • I have multiple LANs on bare metal.
  • each LAN has a hardware router with a static IP
  • each LAN has the address range 192.168.1.*
  • external traffic usually comes in to the hardware router and then needs to be served by one of the services within the LAN

Need:

  • keep the LANs isolated in some use cases, e.g. deploying an application to a single LAN only

Options / questions:

  • Should I have one Kubernetes cluster overall, or one cluster per LAN? One cluster per LAN would be a nightmare for me to manage (so many clusters); I think one cluster overall is good.
  • for LAN-specific deployments, should I create one namespace per LAN, are Kubernetes labels better, or is there another option? (see the sketch after this list)
  • external traffic usually comes in via the static IP on the hardware router; from there I need to route it within the LAN (thus ingress only within that LAN). How would ingress within the LAN work?
  • I also want monitoring, alerting, and health checks for the entire cluster, with namespace-specific or label-based filtering.
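
As a sketch of the namespace-per-LAN idea (the names lan-a/lan-b and the lan label are hypothetical, just to illustrate the question):

# one namespace per LAN, labelled so that monitoring and deployments can filter on it
kubectl create namespace lan-a
kubectl label namespace lan-a lan=lan-a
kubectl create namespace lan-b
kubectl label namespace lan-b lan=lan-b

# deploy an application only into one LAN's namespace
kubectl -n lan-a apply -f my-app.yaml

# list the namespaces that belong to a given LAN
kubectl get namespaces -l lan=lan-a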


Get this bounty!!!

#StackBounty: #linux #kubernetes #patch-management Periodically system security patches on K8s clusters?

Bounty: 50

We're trying to figure out a way to periodically apply system security patches to our K8s clusters, to keep our systems safe and meet the security requirements.

Our K8s clusters are running in different environments: AWS, Azure, bare metal, etc.

For the clouds, we can update our machine image (AMI or equivalent) to the latest, replace the old image, launch new nodes, and drain the old nodes. For the bare-metal one, we need to drain the old nodes, patch them, and add them back.
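
For the bare-metal nodes the per-node dance looks roughly like this (node-01 is a placeholder; --delete-local-data is called --delete-emptydir-data in newer kubectl versions):

# take the node out of scheduling and evict its pods
kubectl cordon node-01
kubectl drain node-01 --ignore-daemonsets --delete-local-data

# patch and reboot the node itself (RHEL/CentOS shown; apt on Ubuntu)
ssh node-01 'sudo yum update -y && sudo reboot'

# put it back into rotation once it is up again
kubectl uncordon node-01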

I'm not sure if there is another way to do this automatically. We don't want to do this work each month in each cloud; maybe there is a better solution?


Get this bounty!!!

#StackBounty: #nginx #kubernetes #https #kubernetes-ingress #nginx-ingress How debug Kubernetes Ingress Nginx redirection from HTTP to …

Bounty: 400

Quick question: how can I debug the Ingress and Nginx to find out where exactly the HTTP->HTTPS redirection happens?

More details:

What we have: a WAR file + Tomcat, built with Docker and run with Kubernetes in AWS.

What we need: the application should be accessible over both HTTP and HTTPS. HTTP should not redirect to HTTPS.

Problem: HTTP always redirects to HTTPS.

What we tried: we have this Ingress

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ${some name}
  namespace: ${some namespace}
  labels:
    app: ${some app}
    env: ${some env}
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "false" # we added this to turn off HTTPS redirection
    nginx.ingress.kubernetes.io/force-ssl-redirect: "false" # we added this to turn off HTTPS redirection
    nginx.ingress.kubernetes.io/affinity: "cookie" # We use it for sticky sessions
    nginx.ingress.kubernetes.io/affinity-mode: "persistent"
    nginx.ingress.kubernetes.io/session-cookie-name: "some cookie name"
    nginx.ingress.kubernetes.io/session-cookie-expires: "172800"
    nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/whitelist-source-range: ${whitelist of ip adresses}

spec:
  tls:
    - hosts:
        - ${some host}
        - ${some another host}
      secretName: my-ingress-ssl
  rules:
    - host: ${some host}
      http:
        paths:
          - path: /
            backend:
              serviceName: ${some another service name}
              servicePort: 8080
    - host: ${some another host}
      http:
        paths:
          - path: /
            backend:
              serviceName: ${some another service name}
              servicePort: 8080

And this ConfigMap:

kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    app: ${some app}
    env: ${some env}
  namespace: ${some namespace}
  name: nginx-config
data:
  hsts: "false" #we added this for turn off https redirection
  hsts-max-age: "0" #we added this for turn off https redirection
  ssl-redirect: "false" #we added this for turn off https redirection
  hsts-include-subdomains: "false" #we added this for turn off https redirection
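
One thing worth verifying is which ConfigMap the controller actually watches, since it is passed in via the --configmap flag (the namespace and deployment names below are placeholders for our install):

# find the controller and see what its --configmap argument points at
kubectl get pods -A | grep -i ingress
kubectl -n <ingress-namespace> get deploy <controller-deployment> -o yaml | grep -- --configmap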

In Tomcat server.xml we have:

<Connector port="8080" protocol="HTTP/1.1"
                   connectionTimeout="20000"
                   redirectPort="8443" />

<!-- Define an AJP 1.3 Connector on port 8009 -->
        <Connector port="8009" protocol="AJP/1.3" redirectPort="8443" />

and we commented out this connector (it shouldn't be in effect now):

<!--
<Connector port="8443" protocol="org.apache.coyote.http11.Http11AprProtocol"
           maxThreads="150" SSLEnabled="true" >
    <UpgradeProtocol className="org.apache.coyote.http2.Http2Protocol" />
    <SSLHostConfig>
        <Certificate certificateKeyFile="conf/key.pem"
                     certificateFile="conf/cert.pem"
                     certificateChainFile="conf/chain.pem"
                     type="RSA" />
    </SSLHostConfig>
</Connector>
-->

I tried all possible variants of the ingress annotations, but without success.

What I want to know: how can I debug the Ingress with Nginx to find out where exactly the HTTP->HTTPS redirection happens?
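
A way to narrow it down (the Server and Location headers usually reveal which hop answers; the ingress namespace and controller deployment names are placeholders for our install):

# 1) look at the redirect response itself: which hop sends it, and to where?
curl -sI http://<some host>/ | grep -iE 'HTTP/|location|server'

# 2) check what ingress-nginx actually rendered for that host
kubectl -n <ingress-namespace> exec deploy/<controller-deployment> -- \
  cat /etc/nginx/nginx.conf | grep -nE 'force_ssl_redirect|ssl_redirect|308'

# 3) bypass ingress and hit Tomcat's Service directly to see if Tomcat redirects on its own
#    (run the port-forward in a second terminal)
kubectl -n <some namespace> port-forward svc/<some another service name> 8080:8080
curl -sI http://localhost:8080/ | grep -iE 'HTTP/|location'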


Get this bounty!!!

#StackBounty: #kubernetes #local-area-network #k3s Unable to reach service deployed using k3s

Bounty: 50

I just deployed a k3s server on my Raspberry Pi 4 Model B using the command

curl -sfL https://get.k3s.io | sh -s - server --disable metrics-server --docker

In order to test it, I did kubectl apply -f hello.yml, where hello.yml is the following:

#####################
#      Ingress      #
#####################
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: hello-http
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    traefik.ingress.kubernetes.io/rewrite-target: /$2
  labels:
    hello: "http"
    app: "hello-http"
    img: "nginx"
spec:
  rules:
  - host: "example.com"
    http:
      paths:
      - path: /()(.*)
        backend:
          serviceName: hello-http
          servicePort: 80

---
#####################
#      Services     #
#####################
apiVersion: v1
kind: Service
metadata:
  name: hello-http
  labels:
    hello: "http"
    app: "hello-http"
    img: "nginx"
spec:
  selector:
      hello: "http"
      app: "hello-http"
      img: "nginx"
  type: ClusterIP
  ports:
    - protocol: TCP
      targetPort: 80
      port: 80
---
apiVersion: v1
kind: Service
metadata:
  name: hello-http2
  labels:
    hello: "http"
    app: "hello-http"
    img: "nginx"
spec:
  selector:
      hello: "http"
      app: "hello-http"
      img: "nginx"
  type: NodePort
  ports:
    - protocol: TCP
      targetPort: 80
      port: 80
      nodePort: 30102

---
#####################
#     Deployments   #
#####################
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-http
  labels:
    hello: "http"
    app: "hello-http"
    img: "nginx"
spec:
  replicas: 2
  selector:
    matchLabels:
      hello: "http"
      app: "hello-http"
      img: "nginx"
  template:
    metadata:
      labels:
        hello: "http"
        app: "hello-http"
        img: "nginx"
    spec:
      containers:
        - name: hello-http
          image: nginx
          ports:
            - containerPort: 80

After doing so, I was unable to reach the service using the Raspberry Pi's IP address.
Here is what I tried:

  • curl http://localhost:30102 on the Raspberry Pi -> OK
  • curl http://192.168.1.39:30102 on the Raspberry Pi -> OK
  • curl http://192.168.1.39:30102 on my computer on the same network -> TIMEOUT
  • curl http://192.168.1.39:8787/admin on my computer on the same network -> OK (pi-hole admin)
  • curl http://localhost:80 on my computer after doing kubectl port-forward svc/hello-http 80:80 (with the service of type ClusterIP) -> OK

I'm pretty sure there is a configuration error somewhere preventing k3s from listening on some IP addresses.
Using netstat shows the port is properly bound (but it doesn't show traefik's ports, which should be bound to 80 and 443; not sure that's relevant, though):

pi:~ > sudo netstat -lnp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 127.0.0.1:6444          0.0.0.0:*               LISTEN      19412/k3s
tcp        0      0 127.0.0.1:10256         0.0.0.0:*               LISTEN      19412/k3s
tcp        0      0 0.0.0.0:8787            0.0.0.0:*               LISTEN      615/lighttpd
tcp        0      0 0.0.0.0:32629           0.0.0.0:*               LISTEN      19412/k3s
tcp        0      0 0.0.0.0:53              0.0.0.0:*               LISTEN      607/pihole-FTL
tcp        0      0 0.0.0.0:30102           0.0.0.0:*               LISTEN      19412/k3s
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      593/sshd
tcp        0      0 127.0.0.1:40701         0.0.0.0:*               LISTEN      19412/k3s
tcp        0      0 0.0.0.0:31361           0.0.0.0:*               LISTEN      19412/k3s
tcp        0      0 127.0.0.1:4711          0.0.0.0:*               LISTEN      607/pihole-FTL
tcp        0      0 127.0.0.1:10248         0.0.0.0:*               LISTEN      19412/k3s
tcp        0      0 127.0.0.1:10249         0.0.0.0:*               LISTEN      19412/k3s
tcp6       0      0 :::10250                :::*                    LISTEN      19412/k3s
tcp6       0      0 :::10251                :::*                    LISTEN      19412/k3s
tcp6       0      0 :::6443                 :::*                    LISTEN      19412/k3s
tcp6       0      0 :::10252                :::*                    LISTEN      19412/k3s
tcp6       0      0 :::8787                 :::*                    LISTEN      615/lighttpd
tcp6       0      0 :::53                   :::*                    LISTEN      607/pihole-FTL
tcp6       0      0 :::22                   :::*                    LISTEN      593/sshd
tcp6       0      0 ::1:4711                :::*                    LISTEN      607/pihole-FTL
udp        0      0 0.0.0.0:53              0.0.0.0:*                           607/pihole-FTL
udp        0      0 0.0.0.0:67              0.0.0.0:*                           607/pihole-FTL
udp        0      0 0.0.0.0:68              0.0.0.0:*                           567/dhcpcd
udp        0      0 0.0.0.0:48253           0.0.0.0:*                           343/avahi-daemon: r
udp        0      0 0.0.0.0:5353            0.0.0.0:*                           343/avahi-daemon: r
udp        0      0 0.0.0.0:8472            0.0.0.0:*                           -
udp6       0      0 :::60785                :::*                                343/avahi-daemon: r
udp6       0      0 :::547                  :::*                                607/pihole-FTL
udp6       0      0 :::53                   :::*                                607/pihole-FTL
udp6       0      0 :::5353                 :::*                                343/avahi-daemon: r
raw6       0      0 :::58                   :::*                    7           607/pihole-FTL
raw6       0      0 :::58                   :::*                    7           567/dhcpcd
Active UNIX domain sockets (only servers)
Proto RefCnt Flags       Type       State         I-Node   PID/Program name     Path
unix  2      [ ACC ]     STREAM     LISTENING     1597133  22134/containerd-sh  @/containerd-shim/moby/ce2a9493a6f65296eb320bbd7c4dbf181da1482fa51f0f17e391803f569f4a94/shim.sock@
unix  2      [ ACC ]     STREAM     LISTENING     1593577  20426/containerd-sh  @/containerd-shim/moby/aee6252fead51d5aef63d79c78a1ca8e62df6c9b1544e4c4e42098b50a7f74de/shim.sock@
unix  2      [ ACC ]     STREAM     LISTENING     18461    575/dockerd          /var/run/docker/libnetwork/2a6d0f12c790.sock
unix  2      [ ACC ]     STREAM     LISTENING     1591698  19941/containerd-sh  @/containerd-shim/moby/57bde10e2890cb598016762c0844102f10180eeb50fb6a7ec8ef353dcfdc167c/shim.sock@
unix  2      [ ACC ]     STREAM     LISTENING     1590326  19412/k3s            /var/lib/kubelet/pod-resources/254437567
unix  2      [ ACC ]     STREAM     LISTENING     1590377  19412/k3s            /var/lib/kubelet/device-plugins/kubelet.sock
unix  2      [ ACC ]     STREAM     LISTENING     1600579  22514/containerd-sh  @/containerd-shim/moby/a6926060aef148fd6fa518485eeb96b0eefaf42c3f09db8f3236af80aa81653a/shim.sock@
unix  2      [ ACC ]     STREAM     LISTENING     13112    573/containerd       /run/containerd/containerd.sock
unix  2      [ ACC ]     STREAM     LISTENING     14136    629/php-cgi          /var/run/lighttpd/php.socket-0
unix  2      [ ACC ]     STREAM     LISTENING     1593237  21234/containerd-sh  @/containerd-shim/moby/1116f08dad3e02158a50340c657d80c4d5a608c4d940fc9b57a68e2f35ed827d/shim.sock@
unix  2      [ ACC ]     STREAM     LISTENING     14145    575/dockerd          /var/run/docker/metrics.sock
unix  2      [ ACC ]     STREAM     LISTENING     1595568  20830/containerd-sh  @/containerd-shim/moby/fa4bd35bc011d1183ed39488fcaa38bae4aed312805f4dcde1191cdae1d74131/shim.sock@
unix  2      [ ACC ]     STREAM     LISTENING     1596139  21368/containerd-sh  @/containerd-shim/moby/c862b5b8017587825c3897f7814802c7e6d9d77418e95c3565a5f2dce502e9a2/shim.sock@
unix  2      [ ACC ]     STREAM     LISTENING     1596437  21404/containerd-sh  @/containerd-shim/moby/5625775a5f2897cd1c4bdd5e8c4c2dffc7d17287c2557d1fc4857bae34bd6cd8/shim.sock@
unix  2      [ ACC ]     STREAM     LISTENING     1599230  22505/containerd-sh  @/containerd-shim/moby/997b9f82175fec5735c9ea3858237945cf988890d1c7a7f9235b7d853180c719/shim.sock@
unix  2      [ ACC ]     STREAM     LISTENING     1598009  22154/containerd-sh  @/containerd-shim/moby/8a7bda7a684d769c613f8ba2bbc3192bac2b940135a1287559cafb4fdaea1ea9/shim.sock@
unix  2      [ ACC ]     STREAM     LISTENING     1410     1/init               /run/systemd/private
unix  2      [ ACC ]     STREAM     LISTENING     1595698  20961/containerd-sh  @/containerd-shim/moby/33aef9dbe42ba948388845f1fd4f6d94732a7edfc574f3ae92a071258db98ea9/shim.sock@
unix  2      [ ACC ]     STREAM     LISTENING     1431     1/init               /run/systemd/fsck.progress
unix  2      [ ACC ]     SEQPACKET  LISTENING     1434     1/init               /run/udev/control
unix  2      [ ACC ]     STREAM     LISTENING     1589035  19939/containerd-sh  @/containerd-shim/moby/f8c579409c1fe6ab74d25632b548bb5ad40a070e9e50869b6277243c9028dc3c/shim.sock@
unix  2      [ ACC ]     STREAM     LISTENING     1438     1/init               /run/systemd/journal/stdout
unix  2      [ ACC ]     STREAM     LISTENING     16543    579/systemd          /run/user/999/systemd/private
unix  2      [ ACC ]     STREAM     LISTENING     16549    579/systemd          /run/user/999/gnupg/S.gpg-agent.ssh
unix  2      [ ACC ]     STREAM     LISTENING     16552    579/systemd          /run/user/999/gnupg/S.dirmngr
unix  2      [ ACC ]     STREAM     LISTENING     16554    579/systemd          /run/user/999/gnupg/S.gpg-agent.browser
unix  2      [ ACC ]     STREAM     LISTENING     16556    579/systemd          /run/user/999/gnupg/S.gpg-agent.extra
unix  2      [ ACC ]     STREAM     LISTENING     16558    579/systemd          /run/user/999/gnupg/S.gpg-agent
unix  2      [ ACC ]     STREAM     LISTENING     11437    1/init               /var/run/dbus/system_bus_socket
unix  2      [ ACC ]     STREAM     LISTENING     11442    1/init               /var/run/docker.sock
unix  2      [ ACC ]     STREAM     LISTENING     11445    1/init               /run/avahi-daemon/socket
unix  2      [ ACC ]     STREAM     LISTENING     11449    1/init               /run/thd.socket
unix  2      [ ACC ]     STREAM     LISTENING     11967    567/dhcpcd           /var/run/dhcpcd.sock
unix  2      [ ACC ]     STREAM     LISTENING     11969    567/dhcpcd           /var/run/dhcpcd.unpriv.sock
unix  2      [ ACC ]     STREAM     LISTENING     16580    607/pihole-FTL       /var/run/pihole/FTL.sock
unix  2      [ ACC ]     STREAM     LISTENING     1528780  8034/systemd         /run/user/1000/systemd/private
unix  2      [ ACC ]     STREAM     LISTENING     1588686  19412/k3s            /var/run/704625888
unix  2      [ ACC ]     STREAM     LISTENING     1587091  19412/k3s            kine.sock
unix  2      [ ACC ]     STREAM     LISTENING     1528786  8034/systemd         /run/user/1000/gnupg/S.gpg-agent.extra
unix  2      [ ACC ]     STREAM     LISTENING     1528789  8034/systemd         /run/user/1000/gnupg/S.gpg-agent.browser
unix  2      [ ACC ]     STREAM     LISTENING     1528791  8034/systemd         /run/user/1000/gnupg/S.gpg-agent
unix  2      [ ACC ]     STREAM     LISTENING     1528793  8034/systemd         /run/user/1000/gnupg/S.dirmngr
unix  2      [ ACC ]     STREAM     LISTENING     1528795  8034/systemd         /run/user/1000/gnupg/S.gpg-agent.ssh
unix  2      [ ACC ]     STREAM     LISTENING     1592393  20291/containerd-sh  @/containerd-shim/moby/0140047bf0e7f84e9b579466ab807bd73dd37a774d665562a705d9235b6dd7b7/shim.sock@

Is there a mistake somewhere? Could there be interference of some kind from pi-hole, which runs as a service on the Raspberry Pi?
My pi-hole is running as the DHCP server for my home network. Could that be an issue? And why do ports that are not managed by k3s work fine from other machines?
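
One check I can run on the Pi itself is whether kube-proxy's NAT rules for the NodePort exist and whether forwarded traffic is allowed (30102 is the NodePort from the manifest; the KUBE-* chains are the standard kube-proxy ones):

# kube-proxy should have a DNAT rule for the NodePort in the nat table
sudo iptables -t nat -L KUBE-NODEPORTS -n -v | grep 30102

# and forwarded traffic towards the pod network should be accepted
sudo iptables -L FORWARD -n -v | grep -iE 'kube|10.42'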

Edit:

Output of ip a s:

pi:~ > ip a s
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether dc:a6:32:75:4f:12 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.39/24 brd 192.168.1.255 scope global noprefixroute eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::1894:dc3:322:f523/64 scope link
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:2a:9a:f5:d3 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
26: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default
    link/ether 12:ba:5d:7c:ae:1a brd ff:ff:ff:ff:ff:ff
    inet 10.42.0.0/32 brd 10.42.0.0 scope global flannel.1
       valid_lft forever preferred_lft forever
    inet 169.254.196.183/16 brd 169.254.255.255 scope global noprefixroute flannel.1
       valid_lft forever preferred_lft forever
    inet6 fe80::bafd:33fc:4ef5:1ac5/64 scope link
       valid_lft forever preferred_lft forever
27: cni0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default qlen 1000
    link/ether da:85:b6:aa:7b:81 brd ff:ff:ff:ff:ff:ff
    inet 10.42.0.1/24 brd 10.42.0.255 scope global cni0
       valid_lft forever preferred_lft forever
    inet6 fe80::d885:b6ff:feaa:7b81/64 scope link
       valid_lft forever preferred_lft forever
28: veth812ed1d8@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP group default
Error: Peer netns reference is invalid.
Error: Peer netns reference is invalid.
Error: Peer netns reference is invalid.
Error: Peer netns reference is invalid.
Error: Peer netns reference is invalid.
Error: Peer netns reference is invalid.
Error: Peer netns reference is invalid.
Error: Peer netns reference is invalid.
Error: Peer netns reference is invalid.
    link/ether 72:51:dc:2b:ed:96 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 169.254.233.179/16 brd 169.254.255.255 scope global noprefixroute veth812ed1d8
       valid_lft forever preferred_lft forever
    inet6 fe80::6e47:81d6:1747:7495/64 scope link
       valid_lft forever preferred_lft forever
    inet6 fe80::7051:dcff:fe2b:ed96/64 scope link
       valid_lft forever preferred_lft forever
29: veth3ad835ec@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP group default
    link/ether fe:76:be:78:c7:e9 brd ff:ff:ff:ff:ff:ff link-netnsid 1
    inet 169.254.208.37/16 brd 169.254.255.255 scope global noprefixroute veth3ad835ec
       valid_lft forever preferred_lft forever
    inet6 fe80::a69c:717d:1974:6eda/64 scope link
       valid_lft forever preferred_lft forever
    inet6 fe80::fc76:beff:fe78:c7e9/64 scope link
       valid_lft forever preferred_lft forever
31: veth30d41999@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP group default
    link/ether ea:86:7a:b9:14:0c brd ff:ff:ff:ff:ff:ff link-netnsid 2
    inet 169.254.145.140/16 brd 169.254.255.255 scope global noprefixroute veth30d41999
       valid_lft forever preferred_lft forever
    inet6 fe80::686f:1b4f:71d7:d04d/64 scope link
       valid_lft forever preferred_lft forever
    inet6 fe80::e886:7aff:feb9:140c/64 scope link
       valid_lft forever preferred_lft forever
32: veth3b141724@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP group default
    link/ether c2:b3:34:34:7a:11 brd ff:ff:ff:ff:ff:ff link-netnsid 3
    inet 169.254.214.201/16 brd 169.254.255.255 scope global noprefixroute veth3b141724
       valid_lft forever preferred_lft forever
    inet6 fe80::e595:82ae:f031:c6e0/64 scope link
       valid_lft forever preferred_lft forever
    inet6 fe80::c0b3:34ff:fe34:7a11/64 scope link
       valid_lft forever preferred_lft forever
33: veth089ef283@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP group default
    link/ether da:01:a3:05:01:b9 brd ff:ff:ff:ff:ff:ff link-netnsid 4
    inet 169.254.164.221/16 brd 169.254.255.255 scope global noprefixroute veth089ef283
       valid_lft forever preferred_lft forever
    inet6 fe80::320a:fd1c:86cb:8bae/64 scope link
       valid_lft forever preferred_lft forever
    inet6 fe80::d801:a3ff:fe05:1b9/64 scope link
       valid_lft forever preferred_lft forever
34: vethe3710e9b@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP group default
    link/ether fe:b0:40:82:c4:c1 brd ff:ff:ff:ff:ff:ff link-netnsid 5
    inet 169.254.87.10/16 brd 169.254.255.255 scope global noprefixroute vethe3710e9b
       valid_lft forever preferred_lft forever
    inet6 fe80::c352:9ec0:71f7:e909/64 scope link
       valid_lft forever preferred_lft forever
    inet6 fe80::fcb0:40ff:fe82:c4c1/64 scope link
       valid_lft forever preferred_lft forever
35: veth5618f8bf@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP group default
    link/ether 8a:71:a4:3b:5d:73 brd ff:ff:ff:ff:ff:ff link-netnsid 6
    inet 169.254.10.0/16 brd 169.254.255.255 scope global noprefixroute veth5618f8bf
       valid_lft forever preferred_lft forever
    inet6 fe80::2a5d:db76:859:9c88/64 scope link
       valid_lft forever preferred_lft forever
    inet6 fe80::8871:a4ff:fe3b:5d73/64 scope link
       valid_lft forever preferred_lft forever
36: veth58b1d8e8@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP group default
    link/ether 62:91:ac:f7:68:12 brd ff:ff:ff:ff:ff:ff link-netnsid 7
    inet 169.254.75.22/16 brd 169.254.255.255 scope global noprefixroute veth58b1d8e8
       valid_lft forever preferred_lft forever
    inet6 fe80::856c:a64d:b2aa:cfc7/64 scope link
       valid_lft forever preferred_lft forever
    inet6 fe80::6091:acff:fef7:6812/64 scope link
       valid_lft forever preferred_lft forever

Output of ip r s:

pi:~ > ip r s
default via 192.168.1.1 dev eth0 src 192.168.1.39 metric 202
10.42.0.0/24 dev cni0 proto kernel scope link src 10.42.0.1
169.254.0.0/16 dev flannel.1 scope link src 169.254.196.183 metric 226
169.254.0.0/16 dev veth812ed1d8 scope link src 169.254.233.179 metric 228
169.254.0.0/16 dev veth3ad835ec scope link src 169.254.208.37 metric 229
169.254.0.0/16 dev veth30d41999 scope link src 169.254.145.140 metric 231
169.254.0.0/16 dev veth3b141724 scope link src 169.254.214.201 metric 232
169.254.0.0/16 dev veth089ef283 scope link src 169.254.164.221 metric 233
169.254.0.0/16 dev vethe3710e9b scope link src 169.254.87.10 metric 234
169.254.0.0/16 dev veth5618f8bf scope link src 169.254.10.0 metric 235
169.254.0.0/16 dev veth58b1d8e8 scope link src 169.254.75.22 metric 236
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown
192.168.1.0/24 dev eth0 proto dhcp scope link src 192.168.1.39 metric 202

Output of sudo iptables -L -n -v:

pi:~ > sudo iptables -L -n -v
Chain INPUT (policy ACCEPT 1179K packets, 605M bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain FORWARD (policy DROP 51944 packets, 2339K bytes)
 pkts bytes target     prot opt in     out     source               destination
51944 2339K DOCKER-USER  all  --  *      *       0.0.0.0/0            0.0.0.0/0
51944 2339K DOCKER-ISOLATION-STAGE-1  all  --  *      *       0.0.0.0/0            0.0.0.0/0
    0     0 ACCEPT     all  --  *      docker0  0.0.0.0/0            0.0.0.0/0            ctstate RELATED,ESTABLISHED
    0     0 DOCKER     all  --  *      docker0  0.0.0.0/0            0.0.0.0/0
    0     0 ACCEPT     all  --  docker0 !docker0  0.0.0.0/0            0.0.0.0/0
    0     0 ACCEPT     all  --  docker0 docker0  0.0.0.0/0            0.0.0.0/0

Chain OUTPUT (policy ACCEPT 1053K packets, 326M bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain DOCKER (1 references)
 pkts bytes target     prot opt in     out     source               destination

Chain DOCKER-ISOLATION-STAGE-1 (1 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 DOCKER-ISOLATION-STAGE-2  all  --  docker0 !docker0  0.0.0.0/0            0.0.0.0/0
51944 2339K RETURN     all  --  *      *       0.0.0.0/0            0.0.0.0/0

Chain DOCKER-ISOLATION-STAGE-2 (1 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 DROP       all  --  *      docker0  0.0.0.0/0            0.0.0.0/0
    0     0 RETURN     all  --  *      *       0.0.0.0/0            0.0.0.0/0

Chain DOCKER-USER (1 references)
 pkts bytes target     prot opt in     out     source               destination
51944 2339K RETURN     all  --  *      *       0.0.0.0/0            0.0.0.0/0
# Warning: iptables-legacy tables present, use iptables-legacy to see them

Output of sudo iptables-legacy -L -n -v:

pi:~ > sudo iptables-legacy -L -n -v
Chain INPUT (policy ACCEPT 36942 packets, 11M bytes)
 pkts bytes target     prot opt in     out     source               destination
98517   22M KUBE-SERVICES  all  --  *      *       0.0.0.0/0            0.0.0.0/0            ctstate NEW /* kubernetes service portals */
3909K 1456M KUBE-FIREWALL  all  --  *      *       0.0.0.0/0            0.0.0.0/0
98517   22M KUBE-EXTERNAL-SERVICES  all  --  *      *       0.0.0.0/0            0.0.0.0/0            ctstate NEW /* kubernetes externally-visible service portals */

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination
 133K 6484K KUBE-FORWARD  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kubernetes forwarding rules */
 133K 6471K KUBE-SERVICES  all  --  *      *       0.0.0.0/0            0.0.0.0/0            ctstate NEW /* kubernetes service portals */
 133K 6471K ACCEPT     all  --  *      *       10.42.0.0/16         0.0.0.0/0
    0     0 ACCEPT     all  --  *      *       0.0.0.0/0            10.42.0.0/16

Chain OUTPUT (policy ACCEPT 35956 packets, 11M bytes)
 pkts bytes target     prot opt in     out     source               destination
91998 5537K KUBE-SERVICES  all  --  *      *       0.0.0.0/0            0.0.0.0/0            ctstate NEW /* kubernetes service portals */
3677K 1117M KUBE-FIREWALL  all  --  *      *       0.0.0.0/0            0.0.0.0/0

Chain KUBE-EXTERNAL-SERVICES (1 references)
 pkts bytes target     prot opt in     out     source               destination

Chain KUBE-FIREWALL (2 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 DROP       all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kubernetes firewall for dropping marked packets */ mark match 0x8000/0x8000
    0     0 DROP       all  --  *      *      !127.0.0.0/8          127.0.0.0/8          /* block incoming localnet connections */ ! ctstate RELATED,ESTABLISHED,DNAT

Chain KUBE-FORWARD (1 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 DROP       all  --  *      *       0.0.0.0/0            0.0.0.0/0            ctstate INVALID
    0     0 ACCEPT     all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kubernetes forwarding rules */ mark match 0x4000/0x4000
    0     0 ACCEPT     all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kubernetes forwarding conntrack pod source rule */ ctstate RELATED,ESTABLISHED
    0     0 ACCEPT     all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kubernetes forwarding conntrack pod destination rule */ ctstate RELATED,ESTABLISHED

Chain KUBE-KUBELET-CANARY (0 references)
 pkts bytes target     prot opt in     out     source               destination

Chain KUBE-PROXY-CANARY (0 references)
 pkts bytes target     prot opt in     out     source               destination

Chain KUBE-SERVICES (3 references)
 pkts bytes target     prot opt in     out     source               destination

By the way, why the -1s? Is this a bad question? It would be nice to have some feedback instead.


Get this bounty!!!

#StackBounty: #docker #deployment #kubernetes #docker-swarm Deploy with docker swarm on homogeneous devices without cluster

Bounty: 50

Is it possible to use Docker Swarm for deployment of Docker containers on homogeneous devices? The answer should be yes for clusters, right? But the IoT devices are not internally connected and have no relation to each other besides the images.

If there is a way to do it with Docker Swarm (or Kubernetes), what would the workflow be? If there are other technologies out there, I would be happy to hear suggestions.
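
For context, a minimal (hypothetical) stack file that Swarm would deploy could look like the sketch below; the catch is that docker stack deploy only schedules onto nodes that have already joined the same swarm over the network, which is exactly what independent, unconnected IoT devices cannot do.

version: "3.8"
services:
  sensor-app:                                        # hypothetical service name
    image: registry.example.com/sensor-app:latest    # hypothetical image reference
    deploy:
      mode: global                                   # one task per node that has joined the swarm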


Get this bounty!!!

#StackBounty: #jenkins #kubernetes #jenkins-pipeline #jenkins-declarative-pipeline How to define workspace volume for jenkins pipeline …

Bounty: 50

I am trying to set up a declarative pipeline where I would like to persist the workspace as a volume claim so that a large git checkout can be faster. Based on the docs there are the options workspaceVolume and persistentVolumeClaimWorkspaceVolume, but I am not able to make it work – Jenkins always generates the following:

volumeMounts:
  - mountPath: "/home/jenkins/agent"
    name: "workspace-volume"
    readOnly: false
volumes:
  - emptyDir: {}
    name: "workspace-volume"


Get this bounty!!!

#StackBounty: #kubernetes #amazon-eks EKS cluster nodes go from Ready to NotReady after approximately 30 minutes with authorization fai…

Bounty: 50

I am using eksctl to set up a cluster on EKS/AWS.

Following the guide in the EKS documentation, I use default values for pretty much everything.

The cluster is created successfully, I update the Kubernetes configuration from the cluster, and I can run the various kubectl commands successfully – e.g. “kubectl get nodes” shows me the nodes are in the “Ready” state.

I do not touch anything else; I have a clean out-of-the-box cluster with no other changes made, and so far everything appears to be working as expected. I don’t deploy any applications to it; I just leave it alone.
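
For context, the out-of-the-box setup described above corresponds roughly to an eksctl ClusterConfig like the following sketch (the cluster name, region, instance type and node count are placeholders, not values from this cluster):

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: test-cluster          # placeholder cluster name
  region: eu-west-1           # placeholder region
nodeGroups:
  - name: ng-1
    instanceType: m5.large    # placeholder instance type
    desiredCapacity: 2        # two worker nodes, as in the question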

The problem is that after a relatively short period of time (roughly 30 minutes after the cluster is created), the nodes change from "Ready" to "NotReady" and never recover.

The event log shows this (I redacted the IPs):

LAST SEEN   TYPE     REASON                    OBJECT        MESSAGE
22m         Normal   Starting                  node/ip-[x]   Starting kubelet.
22m         Normal   NodeHasSufficientMemory   node/ip-[x]   Node ip-[x] status is now: NodeHasSufficientMemory
22m         Normal   NodeHasNoDiskPressure     node/ip-[x]   Node ip-[x] status is now: NodeHasNoDiskPressure
22m         Normal   NodeHasSufficientPID      node/ip-[x]   Node ip-[x] status is now: NodeHasSufficientPID
22m         Normal   NodeAllocatableEnforced   node/ip-[x]   Updated Node Allocatable limit across pods
22m         Normal   RegisteredNode            node/ip-[x]   Node ip-[x] event: Registered Node ip-[x] in Controller
22m         Normal   Starting                  node/ip-[x]   Starting kube-proxy.
21m         Normal   NodeReady                 node/ip-[x]   Node ip-[x] status is now: NodeReady
7m34s       Normal   NodeNotReady              node/ip-[x]   Node ip-[x] status is now: NodeNotReady

Same events for the other node in the cluster.

Connecting to the instance and inspecting /var/log/messages shows this at the same time the node goes to NotReady:

Mar  7 10:40:37 ip-[X] kubelet: E0307 10:40:37.259207    3896 kubelet_node_status.go:385] Error updating node status, will retry: error getting node "ip-[x]": Unauthorized
Mar  7 10:40:37 ip-[X] kubelet: E0307 10:40:37.385044    3896 kubelet_node_status.go:385] Error updating node status, will retry: error getting node "ip-[x]": Unauthorized
Mar  7 10:40:37 ip-[X] kubelet: E0307 10:40:37.621271    3896 reflector.go:270] object-"kube-system"/"aws-node-token-bdxwv": Failed to watch *v1.Secret: the server has asked for the client to provide credentials (get secrets)
Mar  7 10:40:37 ip-[X] kubelet: E0307 10:40:37.621320    3896 reflector.go:270] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: the server has asked for the client to provide credentials (get configmaps)
Mar  7 10:40:37 ip-[X] kubelet: E0307 10:40:37.638850    3896 reflector.go:270] k8s.io/client-go/informers/factory.go:133: Failed to watch *v1beta1.RuntimeClass: the server has asked for the client to provide credentials (get runtimeclasses.node.k8s.io)
Mar  7 10:40:37 ip-[X] kubelet: E0307 10:40:37.707074    3896 reflector.go:270] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to watch *v1.Pod: the server has asked for the client to provide credentials (get pods)
Mar  7 10:40:37 ip-[X] kubelet: E0307 10:40:37.711386    3896 reflector.go:270] object-"kube-system"/"coredns-token-67fzd": Failed to watch *v1.Secret: the server has asked for the client to provide credentials (get secrets)
Mar  7 10:40:37 ip-[X] kubelet: E0307 10:40:37.714899    3896 reflector.go:270] object-"kube-system"/"kube-proxy-config": Failed to watch *v1.ConfigMap: the server has asked for the client to provide credentials (get configmaps)
Mar  7 10:40:37 ip-[X] kubelet: E0307 10:40:37.720884    3896 kubelet_node_status.go:385] Error updating node status, will retry: error getting node "ip-[x]": Unauthorized
Mar  7 10:40:37 ip-[X] kubelet: E0307 10:40:37.868003    3896 kubelet_node_status.go:385] Error updating node status, will retry: error getting node "ip-[x]": Unauthorized
Mar  7 10:40:37 ip-[X] kubelet: E0307 10:40:37.868067    3896 controller.go:125] failed to ensure node lease exists, will retry in 200ms, error: Get https://[X]/apis/coordination.k8s.io/v1beta1/namespaces/kube-node-lease/leases/ip-[x]?timeout=10s: write tcp 192.168.91.167:50866->34.249.27.158:443: use of closed network connection
Mar  7 10:40:38 ip-[X] kubelet: E0307 10:40:38.017157    3896 kubelet_node_status.go:385] Error updating node status, will retry: error getting node "ip-[x]": Unauthorized
Mar  7 10:40:38 ip-[X] kubelet: E0307 10:40:38.017182    3896 kubelet_node_status.go:372] Unable to update node status: update node status exceeds retry count
Mar  7 10:40:38 ip-[X] kubelet: E0307 10:40:38.200053    3896 controller.go:125] failed to ensure node lease exists, will retry in 400ms, error: Unauthorized
Mar  7 10:40:38 ip-[X] kubelet: E0307 10:40:38.517193    3896 reflector.go:270] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: the server has asked for the client to provide credentials (get configmaps)
Mar  7 10:40:38 ip-[X] kubelet: E0307 10:40:38.729756    3896 controller.go:125] failed to ensure node lease exists, will retry in 800ms, error: Unauthorized
Mar  7 10:40:38 ip-[X] kubelet: E0307 10:40:38.752267    3896 reflector.go:126] object-"kube-system"/"aws-node-token-bdxwv": Failed to list *v1.Secret: Unauthorized
Mar  7 10:40:38 ip-[X] kubelet: E0307 10:40:38.824988    3896 reflector.go:126] object-"kube-system"/"coredns": Failed to list *v1.ConfigMap: Unauthorized
Mar  7 10:40:38 ip-[X] kubelet: E0307 10:40:38.899566    3896 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.RuntimeClass: Unauthorized
Mar  7 10:40:38 ip-[X] kubelet: E0307 10:40:38.963756    3896 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.CSIDriver: Unauthorized
Mar  7 10:40:38 ip-[X] kubelet: E0307 10:40:38.963822    3896 reflector.go:126] object-"kube-system"/"kube-proxy-config": Failed to list *v1.ConfigMap: Unauthorized

CloudWatch logs for the authenticator component show many of these messages:

time="2020-03-07T10:40:37Z" level=warning msg="access denied" arn="arn:aws:iam::[ACCOUNT_ID]]:role/AmazonSSMRoleForInstancesQuickSetup" client="127.0.0.1:50132" error="ARN is not mapped: arn:aws:iam::[ACCOUNT_ID]:role/amazonssmroleforinstancesquicksetup" method=POST path=/authenticate

I confirmed that the role does exist via the IAM console.

Clearly this node is reporting NotReady because of these authentication failures.
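
For reference, aws-iam-authenticator only accepts IAM principals that are mapped in the aws-auth ConfigMap in the kube-system namespace; the "ARN is not mapped" message above indicates it is receiving requests signed with the SSM quick-setup role, which has no such entry. A minimal worker-node mapping looks roughly like this (the ARN and role name are placeholders):

apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::[ACCOUNT_ID]:role/<node-instance-role>   # placeholder node role ARN
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes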

Is this some authentication token that timed out after approximately 30 minutes, and if so shouldn’t a new token automatically be requested? Or am I supposed to set something else up?

I was surprised that a fresh cluster created by eksctl would show this problem.

What did I miss?


Get this bounty!!!