#StackBounty: #nginx #docker #elastic-beanstalk #containers Multi-container docker on AWS – Nginx use host machine /etc/hosts resolver

Bounty: 50

I have a multi-container docker environment on Amazon Elastic Beanstalk with the following Dockerrun.aws.json file:

{ 
    "AWSEBDockerrunVersion": 2, 
    "containerDefinitions": [ 
      { 
        "name": "web", 
        "memoryReservation": 256, 
        "image": "my/nginx/repo/image",  
        "portMappings": [ 
          { 
            "hostPort": 80, 
            "containerPort": 80 
          } 
        ], 
        "links": [ 
          "api" 
        ], 
        "essential": true 
      }, 
      { 
        "name": "api", 
        "memoryReservation": 256, 
        "image": "my-api/repo", 
        "essential": true, 
        "portMappings": [ 
          { 
            "hostPort": 3000, 
            "containerPort": 80 
          } 
        ]
      } 
    ] 
  }

Ultimately, I want the Node app served by nginx to resolve requests to named addresses from linked containers: from my web image (the Node app), I'd like to make a request to http://api/some/resource and have nginx resolve that to the api container.

Now, since Docker adds a host entry for the api container because of the specified link, I want the nginx server to resolve addresses from the /etc/hosts file; however, as I found out, nginx uses its own resolver. After researching the issue a bit, I found that in non-Elastic-Beanstalk multi-container setups with user-defined networks, the resolver would be provided by Docker at 127.0.0.11. However, since it is currently not possible to define user-defined networks in the Dockerrun.aws.json, I keep looking for a different solution. The links can be resolved inside the container (pinging api does work), but nginx does its own thing there.

I have read about dnsmasq as well; however, I wanted to get this running without installing that package. Do I even have a choice here?
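
One detail that may matter here: nginx only uses its own resolver (the resolver directive) when the proxy_pass target contains variables; a static hostname in proxy_pass is resolved once at configuration load through the system resolver, which does consult /etc/hosts. Below is a minimal sketch of a server block relying on that behaviour (paths and ports are assumptions, not taken from the setup above):

# Sketch only: "api" is the hostname Docker writes into /etc/hosts for the
# linked container. A static proxy_pass target is resolved at nginx startup
# via the system resolver, so no "resolver" directive is needed.
server {
    listen 80;

    location /api/ {
        proxy_pass http://api:80/;          # containerPort of the linked "api" service
        proxy_set_header Host $host;
    }
}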


Get this bounty!!!

#StackBounty: #python #docker No module named 'http.client' on Docker

Bounty: 50

I am trying to retrieve my machine's external IP address with Python, and I would like to use ipgetter for that.

Running it locally works as expected, and I get my IP.
But when running it on Docker I get:

File "/utils/ipgetter.py", line 41, in <module>
    import urllib.request as urllib
  File "/usr/local/lib/python3.6/urllib/request.py", line 88, in <module>
    import http.client
ModuleNotFoundError: No module named 'http.client'

In my requirements.txt I have declared ipgetter==0.7

My Dockerfile starts with FROM python:3.6.3-alpine3.6,
and I have installed my requirements successfully.

I could implement the IP lookup with a different library, but I would prefer to overcome this issue.
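
One quick check worth doing inside the container (a hypothetical diagnostic, not something from the question): confirm which http module Python actually imports, since a local file or package named http on the path would shadow the standard library and produce exactly this error.

# Sketch only: print where the "http" package really comes from.
import http
import urllib.request

print(http.__file__)            # expected: /usr/local/lib/python3.6/http/__init__.py
print(urllib.request.__file__)  # expected: /usr/local/lib/python3.6/urllib/request.py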

How can I solve this? Am I missing another dependency?

Thank you


Get this bounty!!!

#StackBounty: #docker #wordpress #kubernetes Dynamically added WordPress plugins on Kubernetes

Bounty: 50

If I'm running WordPress in a Kubernetes environment, where the code is part of a Docker image, and someone tries to add a plugin through the WordPress admin, I don't expect that will work very well: the plugin will only be installed on the container that happens to handle the request, right?

Is my approach of building the code into an image a misstep? Another approach I’d considered was a volume that holds the code, which would handle this use case well. Is there a discussion of such things I could read somewhere?
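
For illustration, this is roughly what the volume approach could look like (a sketch only; the PVC name and a ReadWriteMany-capable storage backend are assumptions):

# Sketch only: mount a shared volume over wp-content so plugins installed
# through wp-admin persist and are visible to every replica.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress
spec:
  replicas: 2
  selector:
    matchLabels: { app: wordpress }
  template:
    metadata:
      labels: { app: wordpress }
    spec:
      containers:
        - name: wordpress
          image: wordpress:latest
          volumeMounts:
            - name: wp-content
              mountPath: /var/www/html/wp-content
      volumes:
        - name: wp-content
          persistentVolumeClaim:
            claimName: wp-content-pvc    # assumed PVC backed by ReadWriteMany storage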


Get this bounty!!!

#StackBounty: #docker #docker-compose #laravel-5.3 #docker-swarm #docker-stack Docker service not running when not running in manager n…

Bounty: 50

docker-compose.yml
This is my docker-compose file, used to deploy the service across multiple instances with docker stack. As you can see, the app service (the Laravel application) runs on 2 nodes, and the database (MySQL) runs on one of the nodes.

version: '3.4'
networks:
  smstake:   
    ipam:
      config:
        - subnet: 10.0.10.0/24

services:
    db:
        image: mysql:5.7
        networks:
          - smstake
        ports:
          - "3306"
        env_file:
          - configuration.env
        environment:
          MYSQL_ROOT_PASSWORD: ${DB_ROOT_PASSWORD}
          MYSQL_DATABASE: ${DB_NAME}
          MYSQL_USER: ${DB_USER}
          MYSQL_PASSWORD: ${DB_PASSWORD}
        volumes:
          - mysql_data:/var/lib/mysql
        deploy:
          mode: replicated
          replicas: 1

    app:
        image: SMSTAKE_VERSION
        ports:
          - 8000:80
        networks:
          - smstake
        depends_on:
          - db
        deploy:
          mode: replicated
          replicas: 2

The problems I am facing:
1. Though the services are in a running state, when I check the service logs I can see that the migrations succeed on only one node and do not run on the other. See the logs below.

  2. When I constrain the app service to run only on the manager node by adding placement constraints, the application works great: I can log in and do everything. But when I let the app service run on any node using just replicas, the login page shows up, yet trying to log in redirects to a NOT FOUND page.

Here are the full logs when trying to run on 3 nodes; below is a sample from running on 2 nodes, where you can see the migration issues in detail:
https://pastebin.com/wqjxSnv2

Service logs checked using docker service logs <smstake_app>

| Cache cleared successfully.
    | Configuration cache cleared!
    | Dropped all tables successfully.
    | Migration table created successfully.
    | 
    | In Connection.php line 664:
    |                                                                                
    |   SQLSTATE[42S02]: Base table or view not found: 1146 Table 'smstake.migratio  
    |   ns' doesn't exist (SQL: insert into `migrations` (`migration`, `batch`) val  
    |   ues (2014_10_12_100000_create_password_resets_table, 1))                     
    |                                                                                
    | 
    | In Connection.php line 452:
    |                                                                                
    |   SQLSTATE[42S02]: Base table or view not found: 1146 Table 'smstake.migratio  
    |   ns' doesn't exist                                                            
    |                                                                                
    | 
    | Laravel development server started: <http://0.0.0.0:80>
    | PHP 7.1.16 Development Server started at Thu Apr  5 07:02:22 2018
    | [Thu Apr  5 07:03:56 2018] 10.255.0.14:53744 [200]: /js/app.js



    | Cache cleared successfully.
    | Configuration cache cleared!
    | Dropped all tables successfully.
    | Migration table created successfully.
    | Migrating: 2014_10_12_000000_create_users_table
    | Migrated:  2014_10_12_000000_create_users_table
    | Migrating: 2014_10_12_100000_create_password_resets_table
    | Migrated:  2014_10_12_100000_create_password_resets_table
    | Migrating: 2018_01_11_235754_create_groups_table
    | Migrated:  2018_01_11_235754_create_groups_table
    | Migrating: 2018_01_12_085401_create_contacts_table
    | Migrated:  2018_01_12_085401_create_contacts_table
    | Migrating: 2018_01_12_140105_create_sender_ids_table
    | Migrated:  2018_01_12_140105_create_sender_ids_table
    | Migrating: 2018_02_06_152623_create_drafts_table
    | Migrated:  2018_02_06_152623_create_drafts_table
    | Migrating: 2018_02_21_141346_create_sms_table
    | Migrated:  2018_02_21_141346_create_sms_table
    | Seeding: UserTableSeeder
    | Laravel development server started: <http://0.0.0.0:80>
    | PHP 7.1.16 Development Server started at Thu Apr  5 07:03:23 2018
    | [Thu Apr  5 07:03:56 2018] 10.255.0.14:53742 [200]: /css/app.css

I don't know if it's due to the migration problem or something else. Sometimes I can
log in, and after a while I get redirected to the Not Found page again when
clicking a link inside the dashboard.
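
One way to avoid the two app replicas racing each other on the migrations table (which is what the first log excerpt shows) is to run the migrations from a single-replica, run-once service and let the app replicas only serve HTTP. Below is a sketch against the compose file above; the artisan commands and the service split are assumptions, not part of the original stack:

    migrate:
        image: SMSTAKE_VERSION
        networks:
          - smstake
        depends_on:
          - db
        # run the schema migration exactly once, from a single task
        command: php artisan migrate --force
        deploy:
          mode: replicated
          replicas: 1
          restart_policy:
            condition: none

    app:
        image: SMSTAKE_VERSION
        # app replicas only serve HTTP; they no longer run migrations
        command: php artisan serve --host=0.0.0.0 --port=80
        ports:
          - 8000:80
        networks:
          - smstake
        depends_on:
          - db
        deploy:
          mode: replicated
          replicas: 2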


Get this bounty!!!

#StackBounty: #linux #networking #amazon-web-services #docker Docker container not accessible after X minutes in AWS

Bounty: 100

I have a Docker container (from the sonarqube image) running on AWS, and it was not remotely accessible; I was only able to access it through SSH.

To fix my problem, I need to run this command:

$ sysctl net.ipv4.ip_forward=1

The problem is that after some minutes (or after some event) this flag is reverted back to net.ipv4.ip_forward=0. Something is automatically adding a row to this file:

#-> grep net.ipv4.ip_forward /etc/sysctl.conf
net.ipv4.ip_forward = 1
net.ipv4.ip_forward = 0

Does anybody know what the cause could be? Maybe it is some configuration on AWS?
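
A generic way to track down what keeps flipping the flag (a hypothetical diagnostic, not from the question) is to list every file that sets it and then watch what sysctl --system applies, since the values are applied in file order and the last one wins:

$ grep -rn 'ip_forward' /etc/sysctl.conf /etc/sysctl.d/ /usr/lib/sysctl.d/ /run/sysctl.d/ 2>/dev/null
$ sudo sysctl --system     # prints each file as it is applied, in order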


Get this bounty!!!

#StackBounty: #docker #docker-registry Docker Private Registry – Deleted all images, but still showing in catalog

Bounty: 100

Following the official documentation (https://docs.docker.com/registry/spec/api/#deleting-an-image) I have been able to successfully delete an image. As expected, after deleting, the image can no longer be pulled nor its manifest called via API.

I feel like I've got the hard part done; however, the problem is that the repo is still listed under /v2/_catalog after the deletion finishes. I'm trying to fully purge the registry.

Here is my registry compose file:

registry:
  image: registry:2.5.2
  container_name: registry-test
  ports:
    - 5007:5000
  environment:
    REGISTRY_STORAGE: s3
    REGISTRY_HTTP_TLS_CERTIFICATE: /etc/cert.crt
    REGISTRY_HTTP_TLS_KEY: /etc/cert.key
    REGISTRY_STORAGE_S3_ACCESSKEY: ******
    REGISTRY_STORAGE_S3_SECRETKEY: ******
    REGISTRY_STORAGE_S3_REGION: us-west-1
    REGISTRY_STORAGE_S3_BUCKET: ******
    REGISTRY_STORAGE_S3_SECURE: "true"
    REGISTRY_STORAGE_DELETE_ENABLED: "true"
  volumes:
    - /dockerdata/volumes/registry-test/etc/cert.crt:/etc/cert.crt
    - /dockerdata/volumes/registry-test/etc/cert.key:/etc/cert.key
  restart: unless-stopped

Here is, at a high level, how I deleted the image:

  1. Gather image digest:
    HEAD https://myprivateregistry:5001/v2/myimage/manifests/mytag with "Accept: application/vnd.docker.distribution.manifest.v2+json" added to the header on the call

  2. The call returns header key Docker-Content-Digest with a value such as sha256:b57z31xyz0f616e65f106b424f4ef29185fbd80833255d79dabc73b8eb873bd

  3. Using that value from step 2, run the delete call: DELETE https://myprivateregistry:5001/v2/myimage/manifests/sha256:b57z31xyz0f616e65f106b424f4ef29185fbd80833255d79dabc73b8eb873bd

  4. Registry API returns 202 Accepted

  5. Run garbage collection manually: registry garbage-collect /etc/docker/registry/config.yml

  6. Garbage collector deletes the associated blobs from disk (log omitted here, but it successfully deletes the blobs)

At this point I can confirm the blobs are completely deleted from disk, and I can no longer fetch the image details (as in step 1 above), so I thought I was done.
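
For reference, steps 1 to 4 as a shell sketch (host, image name and tag are the placeholders already used above):

# Sketch only: fetch the digest via a HEAD request, then delete the manifest.
DIGEST=$(curl -sI \
  -H "Accept: application/vnd.docker.distribution.manifest.v2+json" \
  https://myprivateregistry:5001/v2/myimage/manifests/mytag \
  | grep -i '^Docker-Content-Digest:' | awk '{print $2}' | tr -d '\r')

# expect a 202 from the registry
curl -s -o /dev/null -w '%{http_code}\n' -X DELETE \
  "https://myprivateregistry:5001/v2/myimage/manifests/${DIGEST}"

# then run garbage collection inside the registry container (step 5)
registry garbage-collect /etc/docker/registry/config.yml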

However, when calling /v2/_catalog, my repo is still listed (even though there are no images within it)! Obviously it cannot be pulled or used, but how can I fully remove that repo from the list now that it has no images associated with it?

I don't see how to properly remove this anywhere on the API documentation page. Perhaps I'm missing it somewhere?

EDIT –

I wanted to add some more info on how the registry looks before and after the above deletion takes place.

Before the Delete Operation Above:

docker/registry/v2/repositories/myimage/_manifests/revisions/...
docker/registry/v2/repositories/myimage/_manifests/tags/... 
docker/registry/v2/repositories/myimage/_layers/sha256/... (5 layers listed)
docker/registry/v2/blobs/sha256/...

After the Delete Operation Above:

docker/registry/v2/repositories/myimage/_layers/sha256/... (5 layers listed)

So the only thing left over is the _layers directory, with the same 5 layers listed. This seems to be the reason why the repo is still listed in _catalog.

When I delete the myimage folder (docker/registry/v2/repositories/myimage), the repository is no longer shown in the _catalog.

This seems to be a way to purge it from the _catalog listing. However, what if an image has two tags and only one of them is deleted: is there a reason to delete anything from _layers in that case? How would that be handled with multiple versions of an image? Obviously I can't just clobber the _layers directory as the final step since, in the real world, there will be many tagged versions of an image, so this needs to be done intelligently.

I am simply finding it hard to find any documentation on the maintenance and upkeep of the Docker registry, on the schema of the _layers subdirectory, or on why the garbage collector doesn't clean up that directory the same way it does manifests and blobs.


Get this bounty!!!

#StackBounty: #docker #rust #dockerfile Unable to run the docker image with rust executable

Bounty: 50

I am trying to create an image with my binary file (written in Rust) but I get different errors. This is my Dockerfile:

FROM scratch
COPY binary /
COPY .env /
COPY cert.pem /etc/ssl/
ENV RUST_BACKTRACE 1
CMD /binary

Building finishes fine but when I try to run it I get this:

$ docker run binary
docker: Error response from daemon: OCI runtime create failed: container_linux.go:348: starting container process caused "exec: "/bin/sh": stat /bin/sh: no such file or directory": unknown.
ERRO[0000] error waiting for container: context canceled 

And this:

$ docker run binary /binary
standard_init_linux.go:195: exec user process caused "no such file or directory"

I have no idea what to do. The error message looks very odd to me. According to the official Docker documentation, this should work.

System info: latest archlinux, docker:

Docker version 18.02.0-ce, build fc4de447b5

P.S. I tested the same approach with C++ and it works fine, with both clang and gcc.

P.P.S. It does not work with scratch, alpine, busybox, or bash-based images, but it works with postgresql, ubuntu, and debian images. So the problem is specifically something related to Rust and lightweight Docker images; everything works okay otherwise.
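
For context, a scratch image typically fails in exactly these two ways: the shell form of CMD needs /bin/sh, which scratch does not have, and a default cargo build produces a binary dynamically linked against glibc, which is also absent. Below is a sketch of a multi-stage build that sidesteps both; the builder image, paths and binary name are assumptions:

# Sketch only: build a statically linked binary with the musl target, then
# copy it into scratch and start it with the exec form of CMD (no shell).
FROM rust:latest AS build
WORKDIR /src
COPY . .
RUN apt-get update && apt-get install -y musl-tools \
 && rustup target add x86_64-unknown-linux-musl \
 && cargo build --release --target x86_64-unknown-linux-musl

FROM scratch
COPY --from=build /src/target/x86_64-unknown-linux-musl/release/binary /binary
COPY .env /
COPY cert.pem /etc/ssl/
ENV RUST_BACKTRACE 1
CMD ["/binary"]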


Get this bounty!!!

#StackBounty: #docker #docker-volume #windows-container Windows Container with Sidecar for data

Bounty: 100

I am trying to set up a Windows nanoserver container as a sidecar container holding the certs that I use for SSL. Because the SSL cert that I need changes in each environment, I need to be able to swap the sidecar container (i.e. a dev-cert container, a prod-cert container, etc.) at startup time. I have worked out the configuration problems, but am having trouble using the same pattern that I use for Linux containers.

On Linux containers, I simply copy my files into the container and use the VOLUME step to export my volume. Then, in my main application container, I can use volumes_from to import the volume from the sidecar.

I have tried to follow that same pattern with nanoserver and cannot get it working. Here is my Dockerfile:

# Building stage
FROM microsoft/nanoserver

RUN mkdir c:\certs
COPY . .

VOLUME c:/certs

The container builds just fine, but I get an error when I try to run it. The Dockerfile documentation says the following:

Volumes on Windows-based containers: When using Windows-based
containers, the destination of a volume inside the container must be
one of:

a non-existing or empty directory
a drive other than C:

So I thought: easy, I will just switch to the D: drive (because I don't want to export an empty directory, as #1 requires). I made the following changes:

# Building stage
FROM microsoft/windowsservercore as build
VOLUME ["d:"]

WORKDIR c:/certs
COPY . .

RUN copy c:\certs d:\

and this container actually started properly. However, I missed the part of the docs where it says:

Changing the volume from within the Dockerfile: If any build steps
change the data within the volume after it has been declared, those
changes will be discarded.

So, when I checked, I didn't have any files in the d:\certs directory.

So how can you mount a drive for external use in a Windows container if (#1) the directory must be empty to declare a VOLUME on the C: drive in the container, and (#2) you must use VOLUME to create a D: drive, which is pointless because anything put in there will not be in the final container?


Get this bounty!!!

#StackBounty: #docker #cloudera Failed to find `/usr/bin/docker-quickstart` when running CDH5 Docker image

Bounty: 50

I’ve downloaded the CDH 5.12 Quickstart Docker image from Cloudera but it fails to run.

$ docker import cloudera-quickstart-vm-5.12.0-0-beta-docker.tar.gz
sha256:8fe04d8a55477d648e9e28d1517a21e22584fd912d06de84a912a6e2533a256c
$ docker run --hostname=quickstart.cloudera --privileged=true -t -i 8fe04d8a5547 /usr/bin/docker-quickstart
docker: Error response from daemon: oci runtime error: container_linux.go:265: starting container process caused "exec: "/usr/bin/docker-quickstart": stat /usr/bin/docker-quickstart: no such file or directory".

On the other hand, doing docker pull cloudera/quickstart:latest gives me an image for which the above works; it's just an older one (5.07, I believe).

This blog post suggests that something changed around CDH 5.10. How then am I supposed to run newer images?
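
A couple of hypothetical checks (not from the post) that can narrow this down: docker import expects a root-filesystem tarball, so it is worth looking at what the archive actually contains and at what ended up under /usr/bin in the imported image.

$ tar -tzf cloudera-quickstart-vm-5.12.0-0-beta-docker.tar.gz | head
$ docker run --rm 8fe04d8a5547 ls /usr/bin | grep -i quickstart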


Get this bounty!!!

#StackBounty: #monitoring #virtual-machines #docker #graphite VM CPU usage at 100%

Bounty: 100

CPU usage on our metrics box is intermittently at 100%, causing an ‘Internal server error’ when rendering Grafana dashboards.

The only application running on the machine is Docker, with 3 containers:

  • cadvisor
  • graphite
  • grafana

Machine spec
OS version: Ubuntu 16.04 LTS
Release: 16.04 (xenial)
Kernel version: 4.4.0-103-generic
Docker version: 17.09.0-ce
CPU: 4 cores
Memory: 4096 MB
Memory reservation: unlimited
Network adapter: mgnt

Storage
Driver: overlay2
Backing filesystem: extfs
Supports d_type: true
Native overlay diff: true

Memory swap limit: 2.00 GB

Here is a snippet from cAdvisor:

[cAdvisor screenshot]

The kworker and ksoftirqd processes constantly change status from ‘D’ to ‘R’ to ‘S’.

Are the machine specs correct for this setup?
How can I get the CPU usage to ‘normal’ levels?
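
Before resizing anything, a generic first check (not from the post) is to see which of the three containers is actually responsible for the load:

$ docker stats --no-stream     # per-container CPU and memory usage
$ top -o %CPU                  # on the host, to see kworker/ksoftirqd vs. container processes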

EDIT

After increasing memory from 4 GB to 8 GB, the CPU usage gradually increased:
[CPU usage graph]


Get this bounty!!!