#StackBounty: #multithreading #kubernetes #spring-batch #azure-aks Azure Kubernetes CPU multithreading

Bounty: 50

I wish to run a Spring Batch application in Azure Kubernetes Service (AKS).

At present, my on-premises VM has the following configuration:

  • CPU Speed: 2,593
  • CPU Cores: 4

My application uses multithreading (~15 threads).

How should I define the CPU requests and limits in AKS?

resources:
  limits:
    cpu: "4"
  requests:
    cpu: "0.5"
args:
- -cpus
- "4"

Reference: Kubernetes CPU multithreading

AKS Node Pool: (screenshot of the node pool configuration omitted)
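
For reference, a fuller container spec along the lines I am considering (a sketch only; the container and image names are hypothetical, and it assumes each AKS node has at least 4 vCPUs):

  containers:
    - name: spring-batch-app                              # hypothetical name
      image: myregistry.azurecr.io/spring-batch:latest    # hypothetical image
      resources:
        requests:
          cpu: "0.5"    # scheduling guarantee, as in the snippet above
        limits:
          cpu: "4"      # cap at 4 cores to mirror the on-premises VM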


Get this bounty!!!

#StackBounty: #c# #entity-framework #asp.net-core #kubernetes #entity-framework-core Entity Framework Core leaving many connections in …

Bounty: 100

I have a .NET Core API using Entity Framework Core. The DbContext is registered in Startup.cs like this:

  services.AddDbContext<AppDBContext>(options =>
         options.UseSqlServer(connectionString,
         providerOptions => providerOptions.CommandTimeout(60))); 

In the connection string I set:

  Pooling=true;Max Pool Size=100;Connection Timeout=300

The controller calls methods in a service, which in turn calls async methods in a repository for data retrieval and processing.

Everything worked well while the number of concurrent users stayed under 500 during load testing. Beyond that, however, I started to see a lot of "timeout expired" errors. When I checked the database, there was no deadlock, but I could see well over 100 connections in sleeping mode (the API is hosted on two Kubernetes pods). I monitored these connections during the testing and it appeared that, instead of the existing sleeping connections being reused, new ones were added to the pool. My understanding is that Entity Framework Core manages opening and closing connections, but that didn't seem to be the case here. Or am I missing something?

The error looks like this:

"StatusCode": 500, "Message": "Error: Timeout expired. The timeout period elapsed prior to obtaining a connection from the pool. This may have occurred because all pooled connections were in use and max pool size was reached."
Stack Trace:
   at Microsoft.Data.ProviderBase.DbConnectionFactory.TryGetConnection(DbConnection owningConnection, TaskCompletionSource`1 retry, DbConnectionOptions userOptions, DbConnectionInternal oldConnection, DbConnectionInternal& connection)
   at Microsoft.Data.ProviderBase.DbConnectionInternal.TryOpenConnectionInternal(DbConnection outerConnection, DbConnectionFactory connectionFactory, TaskCompletionSource`1 retry, DbConnectionOptions userOptions)
   at Microsoft.Data.SqlClient.SqlConnection.TryOpen(TaskCompletionSource`1 retry, SqlConnectionOverrides overrides)
   at Microsoft.Data.SqlClient.SqlConnection.Open(SqlConnectionOverrides overrides)
   at Microsoft.Data.SqlClient.SqlConnection.Open()
   at Microsoft.EntityFrameworkCore.Storage.RelationalConnection.OpenInternal(Boolean errorsExpected)
   at Microsoft.EntityFrameworkCore.Storage.RelationalConnection.Open(Boolean errorsExpected)
   at Microsoft.EntityFrameworkCore.Storage.RelationalConnection.BeginTransaction(IsolationLevel isolationLevel)
   …

An example of how the DbContext is used:

The controller calls a method in a service class:

  var result = await _myservice.SaveUserStatusAsync(userId, status);

then in ‘myservice’:

  var user = await _userRepo.GetUserAsync(userId);

  // ... set the user status to the new value, and then

  return await _userRepo.UpdateUserAsync(user);

then in ‘userrepo’:

  _context.user.Update(user);
   var updated = await _context.SaveChangesAsync();
   return updated > 0;
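
One thing I am considering trying is switching the registration to DbContext pooling, roughly like this (an untested sketch; AppDBContext and connectionString are the same as in Startup.cs above):

  // Untested: pool and reuse DbContext instances instead of creating a new one per request
  services.AddDbContextPool<AppDBContext>(options =>
      options.UseSqlServer(connectionString,
          providerOptions => providerOptions.CommandTimeout(60)));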


Get this bounty!!!

#StackBounty: #kubernetes #containerd containerd 1.4.9 Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService

Bounty: 50

I have installed containerd 1.4.9 on a CentOS Stream 8 server.

Based on this document, https://containerd.io/docs/getting-started/, I created the default config file with containerd config default > /etc/containerd/config.toml.

After restarting containerd, when I run crictl ps it throws the error below:

FATA[0000] listing containers failed: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService

How do I fix this error? After fixing it, I want to join this node to a Kubernetes 1.21.3 cluster using the systemd cgroup driver (see the config sketch below).
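
For reference, the cgroup-related part of /etc/containerd/config.toml that I plan to use for the systemd cgroup driver (taken from the Kubernetes container-runtimes documentation; not applied yet) is:

  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
    runtime_type = "io.containerd.runc.v2"
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      SystemdCgroup = true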

Thanks
SR


Get this bounty!!!

#StackBounty: #docker #kubernetes #visual-studio-code VS Code attach to container in k8s fails with mkdir: cannot create directory '…

Bounty: 100

Based on this guide:

https://code.visualstudio.com/docs/remote/attach-container#_attach-to-a-container-in-a-kubernetes-cluster

I am trying to attach to a running container in a k8s cluster and edit a file in that container using VS Code (screenshot omitted).

As a result, another VS Code instance opens but prints this error:

[2260 ms] 
[2261 ms] mkdir: cannot create directory '/var/www': Permission denied
[2261 ms] Exit code 1
[2274 ms] Command in container failed: mkdir -p /var/www/.vscode-server/bin/054a9295330880ed74ceaedda236253b4f39a335_1621844063707

It looks like it's trying to create that directory in the remote container, but is that necessary just to connect and open/edit a file in the remote container with VS Code?

Or is it somehow possible to specify a path with adequate permissions for where to store this content inside the container?
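
One idea I had (untested; the volume and container names below are hypothetical) is to mount a writable emptyDir over the directory the server tries to create:

  # Untested sketch: give the container user a writable location for the
  # .vscode-server payload by overlaying an emptyDir on it.
  containers:
    - name: app                      # hypothetical container name
      volumeMounts:
        - name: vscode-server
          mountPath: /var/www/.vscode-server
  volumes:
    - name: vscode-server
      emptyDir: {}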


Get this bounty!!!

#StackBounty: #kubernetes #kubectl How could I find the Kubernetes POD restart reasons

Bounty: 50

I have scaled my pods to 20 in my cluster, and when I checked the next day, a few of the scaled pods had been recreated.

By "recreated" I mean the pod is deleted and created freshly, and the timestamps of the recreated pods differ from those of the originally scaled pods.

I was unable to find the reason for the recreation of the pods.

I could not even tell which pod was recreated, as the old pod is deleted and gone. There are no logs in journalctl indicating which pod was recreated. Is there any way I can debug further to find the reason for the recreation, or what might cause the pods to be deleted? (The checks I plan to run next time are listed after the note below.)

Note: I have readiness and liveness probes defined, but my understanding is that these probes act on the container and would not lead to the pod being recreated.
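
For reference, these are the checks I plan to run the next time this happens (assuming the relevant events have not expired yet; names in angle brackets are placeholders):

  # look for delete/kill/evict events across the cluster, newest last
  kubectl get events --all-namespaces --sort-by=.metadata.creationTimestamp

  # inspect the owning workload for scaling or rollout activity
  kubectl describe deployment <deployment-name>

  # check whether any node reported pressure or was restarted
  kubectl describe nodes | grep -A 5 Conditions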


Get this bounty!!!

#StackBounty: #security #docker #kubernetes #stunnel #sysadmin Stunnel in non-root mode

Bounty: 50

I have a setup on k8s that uses stunnel to secure all requests sent to and received from Google LDAPS. In this setup stunnel runs as root and, by default, accesses root-owned directories.

My aim is to reconfigure stunnel to run as non-root with read-only access to root-owned directories.
If you are wondering why I am doing this, it's because I'm applying a PodSecurityPolicy (PSP) that prohibits root capabilities.

What I did: I used the "run as" (securityContext) feature of k8s and was able to force stunnel to run as non-root; however, this does not fix the fact that stunnel accesses root-owned directories in read/write mode.

The current blocking issue is that stunnel attempts to write to "/usr/share/ca-certificates", which crashes the whole container because that is a prohibited operation.

What I am thinking of:

  1. If I mounted an emptyDir volume at the path /usr/share/ca-certificates, it would certainly solve the issue, but is this a correct approach? Are we still secure? (Note 1: there might be more root-owned paths that cannot be fixed this way. Note 2: the cluster itself is protected.) A sketch of what I mean is shown after this list.
  2. What if I wanted to completely dockerize stunnel as non-root, is that possible? (Note: AFAIK, there are no official Docker images for stunnel.)
  3. Is there an alternative to stunnel for securing connections to Google LDAPS that would be more flexible? (It's unlikely we'd go this way, but it is still worth considering.)
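
The sketch mentioned in point 1 (untested; the container and volume names are hypothetical):

  containers:
    - name: stunnel
      securityContext:
        runAsNonRoot: true
        runAsUser: 1000                # hypothetical non-root UID
        readOnlyRootFilesystem: true
      volumeMounts:
        - name: ca-certs
          mountPath: /usr/share/ca-certificates   # writable overlay for stunnel
  volumes:
    - name: ca-certs
      emptyDir: {}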

I hope you could assist with this.


Get this bounty!!!

#StackBounty: #virtual-machine #windows-subsystem-for-linux #kubernetes Restarting a WSL2 VM with minikube

Bounty: 50

I'm running an Ubuntu VM on WSL2 on my Windows 10 machine. I have a Kubernetes cluster deployed in it that I created with minikube using this tutorial, but sometimes I need to restart my computer, and whenever I do, something goes wrong with the cluster. I've tried powering off the VM before the restart, and even running minikube stop before powering off, but I still end up having to delete everything and redeploy when the computer starts. How can I safely power off my computer and still have my cluster when I turn it back on?
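
For reference, the shutdown sequence I have been trying is roughly the following (the wsl command is run from PowerShell on the Windows side):

  # inside the Ubuntu/WSL2 VM
  minikube stop

  # then, from PowerShell on Windows, shut the WSL2 VM down cleanly
  wsl --shutdown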


Get this bounty!!!

#StackBounty: #kubernetes #kubeadm Changing Kubernetes cluster IP address

Bounty: 50

I have a single-node Kubernetes cluster whose IP address I need to change. I see that the current IP appears in many configuration files, which is not a big problem. The bigger problem is that when I changed the address, I got an error saying the certificate is valid only for the old IP, not the new one.

How should I perform the change? I assume I can change the IPs in the config files to a DNS-resolvable hostname, so the configs will be valid for both IPs. Is it possible to regenerate the certificate for a hostname so that it will work regardless of the IP?

Edit:

I’ve found a command to regenerate the certificate:

kubeadm alpha phase certs selfsign --apiserver-advertise-address=0.0.0.0 --cert-altnames=10.161.233.80 --cert-altnames=114.215.201.87

Unfortunately, it doesn’t work:

Error: unknown flag: --apiserver-advertise-address

I’ve also tried to run these commands:

sudo mv /etc/kubernetes/pki /etc/kubernetes/pki_bak
sudo kubeadm init phase certs all

It recreated the certs with the correct IP address, but /etc/kubernetes/admin.conf still has the wrong one. Any kubectl call with --kubeconfig=/etc/kubernetes/admin.conf fails unless I add --insecure-skip-tls-verify. The same goes for kubeadm config view:

sudo kubeadm config view --v=5
I0330 04:32:11.754422   21907 config.go:296] [config] retrieving ClientSet from file
I0330 04:32:11.769423   21907 config.go:400] [config] getting the cluster configuration
Get https://10.202.91.41:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config: x509: certificate is valid for 10.96.0.1, 10.202.91.129, not 10.202.91.41
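
What I am considering trying next is regenerating the kubeconfig files as well, roughly like this (untested; <new-ip> is a placeholder for the new address):

  # back up the existing kubeconfig files first
  sudo mkdir -p /etc/kubernetes/conf_bak
  sudo mv /etc/kubernetes/*.conf /etc/kubernetes/conf_bak/

  # regenerate admin.conf, kubelet.conf, etc. against the new address
  sudo kubeadm init phase kubeconfig all --apiserver-advertise-address=<new-ip>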

My cluster configuration is quite distributed, so I'd like to avoid having to recreate the cluster from scratch, as that would be quite hard to do.

I’m currently on version 1.16.


Get this bounty!!!

#StackBounty: #kubernetes #kubernetes-helm #grafana #prometheus-operator Grafana configure dashboard access permissions

Bounty: 50

We have configured Grafana user and admin roles using grafana.ini, which works great. Now we want to grant users permission to see specific dashboards, e.g. user X can see 5 dashboards and user Y can see 8 dashboards, according to some configuration.

We were able to set this up in the Grafana UI, but if the pod fails the configuration is lost; we are using the latest Grafana Helm chart. My question is: how should we store this data properly?

https://grafana.com/docs/grafana/latest/permissions/dashboard-folder-permissions/

How can we store this data so that it survives pod restarts?

We are using Grafana via the kube-prometheus stack:

https://github.com/prometheus-operator/kube-prometheus

The kube-prometheus-stack Helm chart config:
https://github.com/prometheus-community/helm-charts/blob/main/charts/kube-prometheus-stack/values.yaml#L600
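
Would enabling persistence in the chart's Grafana values be the right direction? Something like this (an untested sketch against the values file linked above; the size is arbitrary):

  grafana:
    persistence:
      enabled: true      # back Grafana's storage with a PVC instead of ephemeral storage
      size: 10Gi         # arbitrary example size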


Get this bounty!!!