#StackBounty: #python #docker No module named 'http.client' on Docker

Bounty: 50

I am trying to retrieve my machine's external IP address with Python, and I would like to use ipgetter for that.

Running it locally, it works as expected and I get my IP.
But when running in Docker I get:

File "/utils/ipgetter.py", line 41, in <module>
    import urllib.request as urllib
  File "/usr/local/lib/python3.6/urllib/request.py", line 88, in <module>
    import http.client
ModuleNotFoundError: No module named 'http.client'

In my requirements.txt I have declared ipgetter==0.7

My Dockerfile starts from FROM python:3.6.3-alpine3.6,
and I have installed my requirements successfully.

I could implement the ipgetter functionality with different libraries, but I would prefer to overcome this issue.

How do I solve this? Am I missing another dependency?
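For what it's worth, http.client ships with every complete CPython install, so one common cause of this exact traceback is a local module or package named http shadowing the stdlib. A quick diagnostic sketch to run inside the container (purely illustrative, not a fix):

```python
# Diagnostic sketch: check whether the stdlib `http` package is the one
# actually being imported, or whether a local http.py / http/ directory
# in the application shadows it.
import http

print(http.__file__)  # should point into the interpreter's stdlib,
                      # e.g. /usr/local/lib/python3.6/http/__init__.py

import http.client    # raises ModuleNotFoundError when `http` is shadowed
print("http.client imported OK")
```

If the printed path points into the application directory instead of the stdlib, renaming the offending local module would be the next thing to try.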

Thank you

Get this bounty!!!

#StackBounty: #docker #wordpress #kubernetes Dynamically added WordPress plugins on Kubernetes

Bounty: 50

If I’m running WordPress in a Kubernetes environment where the code is part of a Docker image, and someone tries to add a plugin through the WordPress admin, I don’t expect that to work very well: the plugin will only be installed on the container that’s hit when they add it, right?

Is my approach of building the code into an image a misstep? Another approach I’d considered was a volume that holds the code, which would handle this use case well. Is there a discussion of such things I could read somewhere?
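The shared-volume approach mentioned above could be sketched roughly like this (all names are hypothetical; it assumes a storage class that supports ReadWriteMany, so every replica sees the same plugin files):

```yaml
# Hypothetical PersistentVolumeClaim shared by all WordPress replicas
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: wp-content
spec:
  accessModes: ["ReadWriteMany"]   # required so every pod sees the same plugins
  resources:
    requests:
      storage: 5Gi
---
# Sketch of the relevant parts of the Deployment's pod spec:
#   volumes:
#     - name: wp-content
#       persistentVolumeClaim:
#         claimName: wp-content
#   containers[].volumeMounts:
#     - name: wp-content
#       mountPath: /var/www/html/wp-content
```

With this layout the image still carries WordPress core, while admin-installed plugins land on the shared volume instead of a single container's writable layer.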

Get this bounty!!!

#StackBounty: #docker #docker-compose #laravel-5.3 #docker-swarm #docker-stack Docker service not running when not running in manager n…

Bounty: 50

This is my docker-compose file, used to deploy the service across multiple instances with docker stack. As you can see, the app service (Laravel) runs on 2 nodes and the database (MySQL) on one node.

version: '3.4'

networks:
  smstake:
    ipam:
      config:
        - subnet:

volumes:
  mysql_data:

services:
  db:
    image: mysql:5.7
    networks:
      - smstake
    expose:
      - "3306"
    env_file:
      - configuration.env
    environment:
      MYSQL_USER: ${DB_USER}
    volumes:
      - mysql_data:/var/lib/mysql
    deploy:
      mode: replicated
      replicas: 1

  app:
    image: SMSTAKE_VERSION
    ports:
      - 8000:80
    networks:
      - smstake
    depends_on:
      - db
    deploy:
      mode: replicated
      replicas: 2

The problems I am facing:

  1. Though the services are in a running state, when I check the service logs I can see that the migrations succeed on only one node and not on the other. See the logs below.

  2. When I constrain the app service to run only on the manager node, the application works great: I can log in to the page and do everything. But when I let the app service run on any node, using just replicas, the login page shows up, yet trying to log in redirects to a NOT FOUND page.
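The manager-only variant mentioned in point 2 is typically done with a placement constraint in the service's deploy section; a sketch (assuming the app service layout above):

```yaml
deploy:
  mode: replicated
  replicas: 2
  placement:
    constraints:
      - node.role == manager   # pin all replicas to manager nodes
```

Removing the placement block is what lets the scheduler spread replicas across worker nodes again.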

Here are the full logs when trying to run on 3 nodes. Below is a sample when running on 2 nodes; you can see the migration issues in detail.

Service logs checked using docker service logs <smstake_app>

| Cache cleared successfully.
    | Configuration cache cleared!
    | Dropped all tables successfully.
    | Migration table created successfully.
    | In Connection.php line 664:
    |   SQLSTATE[42S02]: Base table or view not found: 1146 Table 'smstake.migratio  
    |   ns' doesn't exist (SQL: insert into `migrations` (`migration`, `batch`) val  
    |   ues (2014_10_12_100000_create_password_resets_table, 1))                     
    | In Connection.php line 452:
    |   SQLSTATE[42S02]: Base table or view not found: 1146 Table 'smstake.migratio  
    |   ns' doesn't exist                                                            
    | Laravel development server started: <>
    | PHP 7.1.16 Development Server started at Thu Apr  5 07:02:22 2018
    | [Thu Apr  5 07:03:56 2018] [200]: /js/app.js

    | Cache cleared successfully.
    | Configuration cache cleared!
    | Dropped all tables successfully.
    | Migration table created successfully.
    | Migrating: 2014_10_12_000000_create_users_table
    | Migrated:  2014_10_12_000000_create_users_table
    | Migrating: 2014_10_12_100000_create_password_resets_table
    | Migrated:  2014_10_12_100000_create_password_resets_table
    | Migrating: 2018_01_11_235754_create_groups_table
    | Migrated:  2018_01_11_235754_create_groups_table
    | Migrating: 2018_01_12_085401_create_contacts_table
    | Migrated:  2018_01_12_085401_create_contacts_table
    | Migrating: 2018_01_12_140105_create_sender_ids_table
    | Migrated:  2018_01_12_140105_create_sender_ids_table
    | Migrating: 2018_02_06_152623_create_drafts_table
    | Migrated:  2018_02_06_152623_create_drafts_table
    | Migrating: 2018_02_21_141346_create_sms_table
    | Migrated:  2018_02_21_141346_create_sms_table
    | Seeding: UserTableSeeder
    | Laravel development server started: <>
    | PHP 7.1.16 Development Server started at Thu Apr  5 07:03:23 2018
    | [Thu Apr  5 07:03:56 2018] [200]: /css/app.css

I don’t know if it’s due to a migration problem or what. Sometimes I can
log in, and after a while I get redirected to the Not Found page again
when clicking a link inside the dashboard.

Get this bounty!!!

#StackBounty: #linux #networking #amazon-web-services #docker Docker container not accessible after X minutes in AWS

Bounty: 100

I have a Docker container (from the sonarqube image) running on AWS, and it was not remotely accessible. I was able to access it only through SSH.

To fix my problem, I need to run this command:

$ sysctl net.ipv4.ip_forward=1

The problem is that after some minutes (or after some event) this flag is reverted back to net.ipv4.ip_forward=0. Something is automatically adding a row to this file:

#-> grep net.ipv4.ip_forward /etc/sysctl.conf
net.ipv4.ip_forward = 1
net.ipv4.ip_forward = 0

Does somebody know what the cause could be? Maybe it is some configuration on AWS?
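As a stopgap while hunting the culprit, the setting can be made persistent, independently of whatever rewrites /etc/sysctl.conf, with a drop-in file (the filename is arbitrary; files under /etc/sysctl.d/ are applied at boot and can be re-applied with sysctl --system):

```
# /etc/sysctl.d/99-ipforward.conf
net.ipv4.ip_forward = 1
```

This only masks the symptom: whatever process appends to /etc/sysctl.conf would still need to be found.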

Get this bounty!!!

#StackBounty: #docker #docker-registry Docker Private Registry – Deleted all images, but still showing in catalog

Bounty: 100

Following the official documentation (https://docs.docker.com/registry/spec/api/#deleting-an-image) I have been able to successfully delete an image. As expected, after deleting, the image can no longer be pulled nor its manifest called via API.

I feel like I’ve got the hard part done; however, the problem is that the repo is still listed under /v2/_catalog after the deletion is finished. I’m trying to fully purge the registry.

Here is my registry compose file:

registry:
  image: registry:2.5.2
  container_name: registry-test
  ports:
    - 5007:5000
  environment:
    REGISTRY_HTTP_TLS_KEY: /etc/cert.key
  volumes:
    - /dockerdata/volumes/registry-test/etc/cert.crt:/etc/cert.crt
    - /dockerdata/volumes/registry-test/etc/cert.key:/etc/cert.key
  restart: unless-stopped

Here is, at a high level, what I did to delete the image:

  1. Gather image digest:
    HEAD https://myprivateregistry:5001/v2/myimage/manifests/mytag with "Accept: application/vnd.docker.distribution.manifest.v2+json" added to the header on the call

  2. The call returns header key Docker-Content-Digest with a value such as sha256:b57z31xyz0f616e65f106b424f4ef29185fbd80833255d79dabc73b8eb873bd

  3. Using that value from step 2, run the delete call: DELETE https://myprivateregistry:5001/v2/myimage/manifests/sha256:b57z31xyz0f616e65f106b424f4ef29185fbd80833255d79dabc73b8eb873bd

  4. Registry API returns 202 Accepted

  5. Run garbage collection manually: registry garbage-collect /etc/docker/registry/config.yml

  6. Garbage collector deletes the associated blobs from disk (log omitted here, but it successfully deletes the blobs)
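The steps above can be sketched in Python with just the standard library (the host, image name, and tag are placeholders; this assumes the registry was started with deletes enabled):

```python
import urllib.request

REGISTRY = "https://myprivateregistry:5001"  # placeholder host
MANIFEST_V2 = "application/vnd.docker.distribution.manifest.v2+json"

def manifest_url(image, reference):
    """Build the /v2/<name>/manifests/<reference> endpoint URL."""
    return f"{REGISTRY}/v2/{image}/manifests/{reference}"

def get_digest(image, tag):
    """Steps 1-2: HEAD the tag with the v2 Accept header and read
    the Docker-Content-Digest response header."""
    req = urllib.request.Request(manifest_url(image, tag), method="HEAD")
    req.add_header("Accept", MANIFEST_V2)
    with urllib.request.urlopen(req) as resp:
        return resp.headers["Docker-Content-Digest"]

def delete_manifest(image, digest):
    """Steps 3-4: DELETE by digest; a 202 Accepted means it worked."""
    req = urllib.request.Request(manifest_url(image, digest), method="DELETE")
    with urllib.request.urlopen(req) as resp:
        return resp.status
```

Step 5 (registry garbage-collect) still has to run on the registry host itself; the API alone never frees the blobs.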

At this point I can confirm the blobs are completely deleted from disk, and I can no longer call image details (as in step 1 above), so I thought I was done.

However, when calling /v2/_catalog, my repo is still listed (even though there are no images in it)! Obviously it cannot be pulled or used, but how can I fully remove that repo from the list now that it has no images associated with it?

I don’t see anywhere on the API documentation page how to properly remove this.
Perhaps I’m missing it somewhere?


I wanted to add some more info on how the registry looks before and after the above deletion takes place.

Before the Delete Operation Above:

docker/registry/v2/repositories/myimage/_layers/sha256/... (5 layers listed)

After the Delete Operation Above:

docker/registry/v2/repositories/myimage/_layers/sha256/... (5 layers listed)

So the only thing left over is the _layers directory with the same 5 layers listed. This seems to be the reason why the repo is still listed in _catalog.

When I delete the myimage folder (docker/registry/v2/repositories/myimage), the repository is no longer shown in the _catalog.
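That manual cleanup could be scripted like this (paths are illustrative; as noted, this clobbers the entire repository directory, so it is only safe once every tag in it has been deleted):

```python
import shutil
from pathlib import Path

def purge_repo(repo_root, repo):
    """Remove a repository's directory from the registry filesystem so it
    stops appearing in /v2/_catalog. Only safe when no tags remain."""
    target = Path(repo_root) / repo
    shutil.rmtree(target, ignore_errors=True)

# Example (root path illustrative):
# purge_repo("/var/lib/registry/docker/registry/v2/repositories", "myimage")
```

Running registry garbage-collect afterwards, as in step 5, would still be needed to reclaim any remaining blobs.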

This seems to be a method to purge it from the _catalog listing. However, what if an image has two tags and only one is deleted? Is there a reason to delete anything from _layers in that case? How would that be handled with multiple versions of an image? Obviously I can’t just clobber the _layers directory as the final step since, in the real world, there will be many tagged versions of an image. So this needs to be done intelligently.

I am simply finding it hard to find any documentation on the maintenance and upkeep of the Docker registry, or on the schema of the _layers subdirectory and why the garbage collector doesn’t clean up that directory the way it does manifests and blobs.

Get this bounty!!!

#StackBounty: #docker #rust #dockerfile Unable to run the docker image with rust executable

Bounty: 50

I am trying to create an image with my binary (written in Rust), but I get different errors. This is my Dockerfile:

FROM scratch
COPY binary /
COPY .env /
COPY cert.pem /etc/ssl/
CMD /binary

The build finishes fine, but when I try to run it I get this:

$ docker run binary
docker: Error response from daemon: OCI runtime create failed: container_linux.go:348: starting container process caused "exec: "/bin/sh": stat /bin/sh: no such file or directory": unknown.
ERRO[0000] error waiting for container: context canceled 

And this:

$ docker run binary /binary
standard_init_linux.go:195: exec user process caused "no such file or directory"

I have no idea what to do. The error message looks very odd to me. According to the official Docker documentation it should work this way.

System info: latest Arch Linux; Docker:

Docker version 18.02.0-ce, build fc4de447b5

P.S. I tested the same with C++ and it works fine, with both clang and gcc.

P.P.S. It does not work with scratch, alpine, busybox, or bash-based images. But it works with postgresql, ubuntu, and debian images. So the problem is in something related to Rust and lightweight Docker images; everything works okay otherwise.
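The pattern in the P.P.S. (fails on scratch/alpine/busybox, works on glibc-based images) is exactly what a dynamically linked binary produces, since scratch and musl-based images lack glibc. A hedged sketch of a fully static build under that assumption (the musl target name is standard; the stage layout and paths are illustrative):

```dockerfile
# Build stage: link against musl so the binary carries no glibc dependency
FROM rust:latest AS build
RUN rustup target add x86_64-unknown-linux-musl
WORKDIR /src
COPY . .
RUN cargo build --release --target x86_64-unknown-linux-musl

# Final stage: scratch now works, nothing dynamic is required at runtime
FROM scratch
COPY --from=build /src/target/x86_64-unknown-linux-musl/release/binary /binary
COPY .env /
COPY cert.pem /etc/ssl/
CMD ["/binary"]
```

Note the exec-form CMD ["/binary"]: the shell form CMD /binary is run through /bin/sh, which does not exist in scratch, and that is what produces the first "/bin/sh: no such file or directory" error above.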

Get this bounty!!!

#StackBounty: #docker #docker-volume #windows-container Windows Container with Sidecar for data

Bounty: 100

I am trying to setup a windows nanoserver container as a sidecar container holding the certs that I use for SSL. Because the SSL cert that I need changes in each environment, I need to be able to change the sidecar container (i.e. dev-cert container, prod-cert container, etc) at startup time. I have worked out the configuration problems, but am having trouble using the same pattern that I use for Linux containers.

On linux containers, I simply copy my files into a container and use the VOLUMES step to export my volume. Then, on my main application container, I can use volumes_from to import the volume from the sidecar.
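For reference, the Linux pattern described above looks roughly like this in a compose file (service and image names are hypothetical; volumes_from is compose v2 syntax, as it was removed in v3):

```yaml
version: '2'
services:
  certs:                      # sidecar whose only job is to carry the certs
    image: myorg/dev-certs    # swapped per environment (dev, prod, ...)
    volumes:
      - /certs                # the VOLUME exported by the sidecar image
  app:
    image: myorg/app
    volumes_from:
      - certs                 # the app sees the sidecar's /certs
```

The question is how to reproduce this arrangement with Windows-based images.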

I have tried to follow that same pattern with nanoserver and cannot get it working. Here is my Dockerfile:

# Building stage
FROM microsoft/nanoserver

RUN mkdir c:\certs
COPY . .

VOLUME c:/certs

The container builds just fine, but I get an error when I try to run it. The Dockerfile documentation says the following:

Volumes on Windows-based containers: When using Windows-based
containers, the destination of a volume inside the container must be
one of:

a non-existing or empty directory
a drive other than C:

so I thought: easy, I will just switch to the D: drive (because I don’t want to export an empty directory, as #1 requires). I made the following changes:

# Building stage
FROM microsoft/windowsservercore as build
VOLUME ["d:"]

WORKDIR c:/certs
COPY . .

RUN copy c:\certs d:

and this container actually started properly. However, I missed where the docs say:

Changing the volume from within the Dockerfile: If any build steps
change the data within the volume after it has been declared, those
changes will be discarded.

so, when I checked, I didn’t have any files in the d:\certs directory.

So how can you mount a drive for external use in a Windows container if (1) the directory must be empty to create a VOLUME on the C: drive, and (2) you must use VOLUME to create a D: drive, which is pointless because anything put there will not be in the final container?

Get this bounty!!!

#StackBounty: #docker #cloudera Failed to find `/usr/bin/docker-quickstart` when running CDH5 Docker image

Bounty: 50

I’ve downloaded the CDH 5.12 Quickstart Docker image from Cloudera but it fails to run.

$ docker import cloudera-quickstart-vm-5.12.0-0-beta-docker.tar.gz
$ docker run --hostname=quickstart.cloudera --privileged=true -t -i 8fe04d8a5547 /usr/bin/docker-quickstart
docker: Error response from daemon: oci runtime error: container_linux.go:265: starting container process caused "exec: "/usr/bin/docker-quickstart": stat /usr/bin/docker-quickstart: no such file or directory".

On the other hand, doing docker pull cloudera/quickstart:latest gives me an image for which the above works – it’s just an older one (5.07 I believe).

This blog post suggests that something changed around CDH 5.10. How then am I supposed to run newer images?

Get this bounty!!!

#StackBounty: #monitoring #virtual-machines #docker #graphite VM CPU usage at 100%

Bounty: 100

CPU usage on our metrics box intermittently hits 100%, causing an
‘Internal server error’ when rendering Grafana dashboards.

The only application running on the machine is Docker, with 3 containers:

  • cadvisor
  • graphite
  • grafana

Machine spec:
  OS Version: Ubuntu 16.04 LTS
  Release: 16.04 (xenial)
  Kernel Version: 4.4.0-103-generic
  Docker Version: 17.09.0-ce
  CPU: 4 cores
  Memory: 4096 MB
  Memory reservation: unlimited
  Network adapter: mgnt

Storage:
  Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Native Overlay Diff: true

Memory swap limit: 2.00GB

Here is a snippet from cAdvisor:

[image: cAdvisor process snapshot]

The kworker and ksoftirqd processes constantly change status from ‘D’ to ‘R’ to ‘S’.

Are the machine specs adequate for this setup?
How can I get the CPU usage back to ‘normal’ levels?
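One mitigation worth trying is capping the monitoring containers themselves, so a busy collector cannot saturate all four cores. A hedged compose v2.2 sketch (the limits are illustrative starting points to tune, not recommendations):

```yaml
version: '2.2'
services:
  cadvisor:
    image: google/cadvisor
    cpus: 1.0        # at most one core
    mem_limit: 512m  # cap memory as well
```

The same cpus / mem_limit keys can be applied to the graphite and grafana services.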


After increasing memory from 4GB to 8GB the CPU usage gradually increased:
[image: CPU usage graph after the memory increase]

Get this bounty!!!

#StackBounty: #docker How do you run a linux docker container on windows server 2016?

Bounty: 100

I get:

PS C:\tmp> docker pull ubuntu
Using default tag: latest
latest: Pulling from library/ubuntu
no matching manifest for windows/amd64 in the manifest list entries

Now, before you say ‘Duplicate!’ or ‘make sure it’s in experimental mode’ like all the other answers to this question out there: I have.

I have followed the instructions on https://github.com/linuxkit/lcow, and even read and followed the steps to manually create a Hyper-V image from https://tutorials.ubuntu.com/tutorial/tutorial-windows-ubuntu-hyperv-containers

I have downloaded the nightly build of docker.

I am running in experimental mode:

PS C:\tmp> docker version
 Version:       master-dockerproject-2018-02-01
 API version:   1.36
 Go version:    go1.9.3
 Git commit:    26a2a459
 Built: Thu Feb  1 23:50:28 2018
 OS/Arch:       windows/amd64
 Experimental:  false
 Orchestrator:  swarm

  Version:      master-dockerproject-2018-02-01
  API version:  1.36 (minimum version 1.24)
  Go version:   go1.9.3
  Git commit:   53a58da
  Built:        Thu Feb  1 23:57:33 2018
  OS/Arch:      windows/amd64
  Experimental: true
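Note that the first block of that output (the client) reports Experimental: false, while only the second (the daemon) reports true. On Windows Server the daemon-side flag is normally set in C:\ProgramData\docker\config\daemon.json; a minimal sketch (the path is the documented default):

```json
{
  "experimental": true
}
```

The Docker service then has to be restarted (e.g. Restart-Service docker) for the change to take effect.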

I have tried with the --platform argument:

PS C:\tmp> docker run --platform linux ubuntu
Unable to find image 'ubuntu:latest' locally
C:\Program Files\Docker\docker.exe: Error response from daemon: invalid platform: invalid platform os "linux".
See 'C:\Program Files\Docker\docker.exe run --help'.

I seem to have some differences in docker info compared to my Windows 10 desktop machine, where everything is working:

Containers: 0
 Running: 0
 Paused: 0
 Stopped: 0
Images: 0
Server Version: master-dockerproject-2018-02-01
Storage Driver: windowsfilter
Logging Driver: json-file
 Volume: local
 Network: ics l2bridge l2tunnel nat null overlay transparent
 Log: awslogs etwlogs fluentd gelf json-file logentries splunk syslog
Swarm: inactive
Default Isolation: process

# Windows 10 value:
# Kernel Version: 4.9.60-linuxkit-aufs
Kernel Version: 10.0 14393 (14393.2007.amd64fre.rs1_release.171231-1800)

# Windows 10 values:
# Operating System: Docker for Windows
# OSType: linux
Operating System: Windows Server 2016 Standard
OSType: windows

Architecture: x86_64
CPUs: 2
Total Memory: 3.997GiB
Name: Tests
ID: ...
Docker Root Dir: C:\lcow
Debug Mode (client): false
Debug Mode (server): true
 File Descriptors: -1
 Goroutines: 16
 System Time: 2018-02-02T14:46:53.5608784+08:00
 EventsListeners: 0
Registry: https://index.docker.io/v1/
Experimental: true
Insecure Registries:
Live Restore Enabled: false

So the daemon on Windows Server is not configured for Linux containers.

How do I change that configuration to the correct one?

On Docker for Windows you can conveniently right-click the icon in the task bar and pick ‘use Linux containers’.

How can you do whatever that does on Windows Server?

Get this bounty!!!