#StackBounty: #monitoring #virtual-machines #docker #graphite VM CPU usage at 100%

Bounty: 100

CPU usage on our metrics box intermittently hits 100%, causing an
‘Internal server error’ when rendering Grafana dashboards.

The only application running on the machine is Docker, with three containers:

  • cadvisor
  • graphite
  • grafana

Machine spec

  • OS Version: Ubuntu 16.04 LTS
  • Release: 16.04 (xenial)
  • Kernel Version: 4.4.0-103-generic
  • Docker Version: 17.09.0-ce
  • CPU: 4 cores
  • Memory: 4096 MB (memory reservation unlimited)
  • Network adapter: mgnt

Storage

  • Driver: overlay2
  • Backing Filesystem: extfs
  • Supports d_type: true
  • Native Overlay Diff: true

Memory swap limit: 2.00 GB

Here is a snippet from cAdvisor:

[screenshot: cAdvisor CPU usage and process list]

The kworker and ksoftirqd processes constantly change status from ‘D’ to ‘R’ to ‘S’.

Are the machine specs correct for this setup?
How can I get the CPU usage to ‘normal’ levels?

EDIT

After increasing memory from 4 GB to 8 GB, the CPU usage gradually increased:
[screenshot: CPU usage graph after increasing memory]
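
To narrow down which of the three containers is responsible, a minimal sketch would be to watch per-container CPU and, if cAdvisor turns out to be the heavy one, sample less often or cap its CPU. The container name, the 10s interval and the one-core cap below are assumptions, not taken from this setup:

# Live per-container CPU/memory, to see whether cadvisor, graphite or grafana spikes
docker stats

# If cAdvisor is the culprit, a longer housekeeping interval and a CPU cap may help.
# The usual cAdvisor volume mounts and port mapping are omitted here for brevity.
docker run -d --name cadvisor --cpus="1.0" \
  google/cadvisor:latest --housekeeping_interval=10s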



#StackBounty: #docker How do you run a linux docker container on windows server 2016?

Bounty: 100

I get:

PS C:\tmp> docker pull ubuntu
Using default tag: latest
latest: Pulling from library/ubuntu
no matching manifest for windows/amd64 in the manifest list entries

Now, before you say ‘Duplicate!’ or ‘make sure it’s in experimental mode’, like all the other answers to this question out there: I have.

I have followed the instructions on https://github.com/linuxkit/lcow, and even read and followed the steps to manually create a Hyper-V image from https://tutorials.ubuntu.com/tutorial/tutorial-windows-ubuntu-hyperv-containers

I have downloaded the nightly build of docker.

I am running in experimental mode:

PS C:\tmp> docker version
Client:
 Version:       master-dockerproject-2018-02-01
 API version:   1.36
 Go version:    go1.9.3
 Git commit:    26a2a459
 Built: Thu Feb  1 23:50:28 2018
 OS/Arch:       windows/amd64
 Experimental:  false
 Orchestrator:  swarm

Server:
 Engine:
  Version:      master-dockerproject-2018-02-01
  API version:  1.36 (minimum version 1.24)
  Go version:   go1.9.3
  Git commit:   53a58da
  Built:        Thu Feb  1 23:57:33 2018
  OS/Arch:      windows/amd64
  Experimental: true

I have tried with the --platform argument:

PS C:\tmp> docker run --platform linux ubuntu
Unable to find image 'ubuntu:latest' locally
C:\Program Files\Docker\docker.exe: Error response from daemon: invalid platform: invalid platform os "linux".
See 'C:\Program Files\Docker\docker.exe run --help'.

I seem to have some differences to the docker info from my Windows 10 desktop machine, where everything is working:

Containers: 0
 Running: 0
 Paused: 0
 Stopped: 0
Images: 0
Server Version: master-dockerproject-2018-02-01
Storage Driver: windowsfilter
 Windows:
Logging Driver: json-file
Plugins:
 Volume: local
 Network: ics l2bridge l2tunnel nat null overlay transparent
 Log: awslogs etwlogs fluentd gelf json-file logentries splunk syslog
Swarm: inactive
Default Isolation: process

# Windows 10 value:
# Kernel Version: 4.9.60-linuxkit-aufs
Kernel Version: 10.0 14393 (14393.2007.amd64fre.rs1_release.171231-1800)

# Windows 10 values:
# Operating System: Docker for Windows
# OSType: linux
Operating System: Windows Server 2016 Standard
OSType: windows

Architecture: x86_64
CPUs: 2
Total Memory: 3.997GiB
Name: Tests
ID: ...
Docker Root Dir: C:\lcow
Debug Mode (client): false
Debug Mode (server): true
 File Descriptors: -1
 Goroutines: 16
 System Time: 2018-02-02T14:46:53.5608784+08:00
 EventsListeners: 0
Registry: https://index.docker.io/v1/
Labels:
Experimental: true
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false

So the daemon on Windows Server is not configured for Linux containers.

How do I change that configuration to the correct one?

On Docker for Windows you can conveniently right-click the icon in the task bar and pick ‘use linux containers’.

How can you do whatever it is that that does, on Windows Server?
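
One detail worth noting in the docker version output above is that the client reports Experimental: false while the server reports true; whether that matters for LCOW on this nightly build is unclear. For reference, the smoke test the linuxkit/lcow instructions used at the time was shaped like this (a sketch, not verified against this exact build):

# Pull and run explicitly as a Linux image; on an LCOW-enabled daemon this
# runs the container inside a small Hyper-V utility VM.
docker pull --platform linux ubuntu
docker run --platform linux --rm -it ubuntu uname -a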



#StackBounty: #ubuntu #docker How to install missing /lib/modules/$(uname -r) on my trusty docker container

Bounty: 50

Running docker on a Mac

docker pull ubuntu:14.04
docker run -i -t ubuntu:14.04 /bin/bash

Linux Standard Base

root@d112db1e835e:~# lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 14.04.5 LTS
Release:    14.04
Codename:   trusty

My goal is to retire a dedicated laptop that I used to build some good old C code and use a docker container instead.

In order to compile my code, my Makefile is looking to run

Makefile:       /usr/bin/make -C /lib/modules/$(shell uname -r)/build M=$(PWD)/linux/$* modules

unfortunately the modules directory does not exist:

mysuer@d112db1e835e:~/robot$ ls -al /lib/modules/
ls: cannot access /lib/modules/: No such file or directory

On my Linux machine, I can find the modules:

$ ls -al /lib/modules/
total 28
drwxr-xr-x  7 root root 4096 Dez 13  2016 .
drwxr-xr-x 24 root root 4096 Apr 24  2017 ..
drwxr-xr-x  5 root root 4096 Dez 13  2016 3.13.0-105-generic
drwxr-xr-x  5 root root 4096 Jun 23  2015 3.13.0-55-generic
drwxr-xr-x  5 root root 4096 Jul 10  2015 3.13.0-57-generic
drwxr-xr-x  5 root root 4096 Nov  3  2015 3.13.0-65-generic
drwxr-xr-x  5 root root 4096 Nov 24  2015 3.13.0-68-generic

but there are no modules in my Docker container.

In the container:

uname -r
4.9.60-linuxkit-aufs

hence

/usr/bin/make -C /lib/modules/4.9.60-linuxkit-aufs/build .... FAILS

/lib/modules/4.9.60-linuxkit-aufs is not there.

How do I work around that?

Trying to install headers

apt-cache search linux-headers-4
linux-headers-4.2.0-18 - Header files related to Linux kernel version 4.2.0
linux-headers-4.2.0-18-generic - Linux kernel headers for version 4.2.0 on 64 bit x86 SMP
linux-headers-4.2.0-18-lowlatency - Linux kernel headers for version 4.2.0 on 64 bit x86 SMP
linux-headers-4.2.0-19 - Header files related to Linux kernel version 4.2.0
linux-headers-4.2.0-19-generic - Linux kernel headers for version 4.2.0 on 64 bit x86 SMP
linux-headers-4.2.0-19-lowlatency - Linux kernel headers for version 4.2.0 on 64 bit x86 SMP
linux-headers-4.2.0-21 - Header files related to Linux kernel version 4.2.0
linux-headers-4.2.0-21-generic - Linux kernel headers for version 4.2.0 on 64 bit x86 SMP
linux-headers-4.2.0-21-lowlatency - Linux kernel headers for version 4.2.0 on 64 bit x86 SMP
linux-headers-4.2.0-22 - Header files related to Linux kernel version 4.2.0
...

I can’t find headers for 4.9.60:

root@d112db1e835e:~#  apt-get install linux-headers-$(uname -r)
Reading package lists... Done
Building dependency tree       
Reading state information... Done
E: Unable to locate package linux-headers-4.9.60-linuxkit-aufs
E: Couldn't find any package by regex 'linux-headers-4.9.60-linuxkit-aufs'

or

root@d112db1e835e:~# apt-cache search linux-headers-4.9
root@d112db1e835e:~# 

no candidate

root@d112db1e835e:~# apt-get install linux-headers 
Reading package lists... Done
Building dependency tree        
Reading state information... Done
Package linux-headers is a virtual package provided by:
  linux-headers-4.4.0-1010-aws 4.4.0-1010.10
  linux-headers-4.4.0-1009-aws 4.4.0-1009.9
... FILTERED ...
  linux-headers-3.13.0-100-lowlatency 3.13.0-100.147
  linux-headers-3.13.0-100-generic 3.13.0-100.147
You should explicitly select one to install.

E: Package 'linux-headers' has no installation candidate
root@d112db1e835e:~# 

and searching for the kernel source doesn’t turn up a matching version either:

root@d112db1e835e:~# apt-cache search linux-source     
linux-source - Linux kernel source with Ubuntu patches
linux-source-3.13.0 - Linux kernel source for version 3.13.0 with Ubuntu patches
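
For what it’s worth, a minimal sketch of the usual workaround on a real Linux host is to bind-mount the host’s module tree and kernel headers into the container, since containers share the host kernel. (On Docker for Mac the “host” kernel is the 4.9.60-linuxkit VM, whose headers are not in the Ubuntu archives, so this only helps once the build runs on a Linux box; the paths below assume a stock Ubuntu host with linux-headers-$(uname -r) installed.)

docker run -it \
  -v /lib/modules:/lib/modules:ro \
  -v /usr/src:/usr/src:ro \
  -v "$PWD":/work -w /work \
  ubuntu:14.04 /bin/bash

# Inside the container, uname -r now matches a directory under /lib/modules,
# so the Makefile's out-of-tree module build can find the kernel build tree:
#   make -C /lib/modules/$(uname -r)/build M=$PWD/linux modules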



#StackBounty: #asp.net #node.js #docker #docker-compose #dockerfile Docker copy from one container to another

Bounty: 50

I have this docker file:

FROM microsoft/aspnetcore:2.0-nanoserver-1709 AS base
WORKDIR /app
EXPOSE 80

FROM microsoft/aspnetcore-build:2.0-nanoserver-1709 AS build
WORKDIR /src
COPY *.sln ./
COPY MyApp.Api/MyApp.Api.csproj MyApp.Api/
RUN dotnet restore
COPY . .
WORKDIR /src/MyApp.Api
RUN dotnet build -c Release -o /app

FROM build AS publish
RUN dotnet publish -c Release -o /app

FROM base AS final
copy --from=build["C:\Program Files\nodejs", "C:\nodejs"]
RUN SETX PATH "%PATH%;C:\nodejs"
WORKDIR /app
COPY --from=publish /app .
ENTRYPOINT ["dotnet", "MyApp.Api.dll"]

And I want to copy nodejs from C:\Program Files\nodejs in the build stage to C:\nodejs in the final stage.
But when I build it I get this error:

Step 15/19 : copy --from=publish ["C:\Program Files\nodejs", "C:\nodejs"]

ERROR: Service 'myapp.api' failed to build:
failed to process "["C:\Program": unexpected end of statement while
looking for matching double-quote

How can I copy nodejs from the build image to my final image?
Thanks
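
For reference, in the JSON (exec) form of COPY on Windows, backslashes have to be escaped; a sketch of what the stage-to-stage copy might look like (not verified against these exact images — note the space before the opening bracket and the doubled backslashes):

COPY --from=build ["C:\\Program Files\\nodejs", "C:\\nodejs"]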



#StackBounty: #php #docker #php-fpm #php7 PHP slowlog empty even though PHP-FPM says it's logging

Bounty: 50

I have PHP-FPM listening on a Unix domain socket and I’ve configured the www pool (the only one present) with the following values:

slowlog = /$pool.log.slow
request_slowlog_timeout = 10s

and just for testing I’ve set max_execution_time in php.ini to 20 seconds. Then I created a test script:

<?php

while(1){
  $i++;
}

?>

Then accessed it via web browser. The script eventually times out due to max_execution_time but the log remains empty:

root@b7e4a919c988:/var/www/html# ll /www.log.slow 
-rw-rw-rw-. 1 www-data root 0 Jan  4 21:31 /www.log.slow

The PHP-FPM log, though, seems to indicate that it was expecting to log the slow run:

[04-Jan-2018 21:37:28] WARNING: [pool www] child 9382, script '/var/www/html/test.php' (request: "GET /test.php") executing too slow (13.061999 sec), logging

I’ve tried a variety of things such as using sleep(10000) or putting the while loop in a function (just in case it couldn’t build a stack trace) but nothing seems to get it to print the backtrace to the log. The existence of the logfile itself also seems to indicate FPM is expecting to write slow requests.

At this point I just don’t know what else to check.
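
One commonly reported cause when PHP-FPM runs inside a Docker container (which the tags here suggest, though that is an assumption) is that the slowlog backtrace is captured via ptrace, and Docker’s default seccomp profile blocked ptrace at the time, leaving the file empty even though FPM announces “executing too slow, logging”. A quick way to test that theory, with a placeholder image name:

# Re-run the FPM container with ptrace allowed and hit the slow script again
docker run --cap-add SYS_PTRACE my-php-fpm-image

# or, more bluntly, disable seccomp filtering just for the test
docker run --security-opt seccomp=unconfined my-php-fpm-image

If the backtraces then appear in /www.log.slow, the seccomp profile was the missing piece.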



#StackBounty: #docker #pip Repeated installation of a package inside docker image

Bounty: 50

I built a Python package called my-package. I have no intention of making it public, so installation is mostly through our internal servers. Recently a senior developer built an architecture using Docker where the application is hosted and my-package is a dependency.

The problem is that in order to test the package, I repeatedly need to COPY my code into the Docker image, then uninstall the old version of the package and re-install it from the local code.

  1. Rebuilding the entire image again takes half an hour. – Not an option.
  2. Create another Dockerfile FROM the existing image and run only the specific commands to COPY and pip-install the package. – My current solution, yet not very efficient.

I am pretty sure other Docker users have come across this issue, so I need an expert opinion on the most efficient way to handle it.
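
A common pattern that avoids rebuilding anything at all is to run the already-built image and bind-mount the working copy of my-package into it in editable mode; the image name and paths below are placeholders, not taken from the real setup:

docker run --rm -it \
  -v "$PWD":/src/my-package \
  existing-app-image:latest \
  sh -c "pip install -e /src/my-package && python -m pytest"

If a rebuild is unavoidable, ordering the Dockerfile so that dependencies are installed before the application code is copied keeps those layers cached, so only the final COPY and pip install of my-package re-run.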



#StackBounty: #docker #docker-compose #ubuntu-14.04 #webrtc #dockerfile How share /dev/videoX devices between Chromium on host and Chro…

Bounty: 150

Environment

  • Host running Ubuntu 14.04.5 LTS
  • Docker version 17.09.0-ce, build afdb6d4
  • Chromium 62.0.3202.89
  • 2 webcams: /dev/video0, /dev/video1

Cameras

# v4l2-ctl --list-devices
Venus USB2.0 Camera (usb-0000:00:1a.0-1.2.2):
    /dev/video1

USB 2.0 Camera (usb-0000:00:1a.0-1.3):
    /dev/video0

I need to share the webcams on the Ubuntu 14.04 host with the Ubuntu 16.04 Docker container, and be able to get the video streams (WebRTC getUserMedia) from each camera in each Chromium instance, running on the host and in the container respectively.

To test the getUserMedia, I am browsing to https://www.onlinemictest.com/webcam-test/

How to reproduce

Dockerfile

FROM ubuntu:16.04

# Install chromium
RUN apt-get update \
    && apt-get install sudo chromium-browser alsa-base -y \
    && rm -rf /var/lib/apt/lists/*

# Create a normal user to run chromium as
RUN useradd --create-home browser \
    && adduser browser video \
    && adduser browser audio \
    && usermod -aG sudo browser
USER browser
WORKDIR /home/browser

CMD ["/usr/bin/chromium-browser"]

docker-compose up

$ more docker-compose.yml 
version: '3'
services:
  chromium:
    build:
      context: .
      dockerfile: Dockerfile
    image: ubuntu-cr:0.1

    privileged: true

    environment:
        DISPLAY: $DISPLAY
        XAUTHORITY: /.Xauthority

    volumes:
        - /tmp/.X11-unix:/tmp/.X11-unix
        - ~/.Xauthority:/.Xauthority:ro

1. Start Chromium in docker container

export DISPLAY=:0.0 
docker-compose up

images

docker images
REPOSITORY      TAG            IMAGE ID            CREATED             SIZE
ubuntu-cr       0.1            a61f5506b1f9        9 minutes ago       764MB
ubuntu          16.04          747cb2d60bbe        2 months ago        122MB
hello-world     latest         05a3bd381fc2        3 months ago        1.84kB

2. When Chromium opens in the docker container, browse to https://www.onlinemictest.com/webcam-test/

Great! I can see the video stream from my camera!

3. Open a Chromium browser to the same URL on the host

🙁 I get the ERROR message

Camera not authorized. Please check your media permissions settings

I get the same error if I start Chromium on the host first and browse to the camera test page to get the video stream (getUserMedia), and then run Chromium in the container: the same ERROR message, which corresponds to NavigatorUserMediaError > TrackStartError.

I tried from the Chromium console

navigator.mediaDevices.getUserMedia({audio: true, video: true})

and it gave me a TrackStartError when the cam test was already running in the other Chromium instance.

Any pointers on how to configure my docker container to allow one cam to be assigned to the host while the other is dedicated to the docker container?
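
One way to express “one camera for the container, the other for the host” is to pass only a single video device into the container instead of running it privileged; a sketch using the image name from the compose file above (X11 bits abbreviated, and whether Chromium then stops conflicting is exactly what needs testing):

# Give the container /dev/video1 only; /dev/video0 stays host-only
docker run --rm -it \
  --device /dev/video1:/dev/video1 \
  -e DISPLAY=$DISPLAY \
  -v /tmp/.X11-unix:/tmp/.X11-unix \
  ubuntu-cr:0.1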

Interesting threads

  • NotReadableError: Failed to allocate videosource points out that this may be happening because the camera is used by another application.

  • Interestingly, when I open 2 Chromium instances on the host (no container this time) pointing to the same camera test page (getUserMedia), both instances do manage to get the same video stream. The conflict only appears when I try to access the camera from a container: it can play in one or the other, but not in both at the same time. So it could be something to configure on the docker container. I am still trying to understand why this is happening.



#StackBounty: #ruby-on-rails #ruby #docker #redis #circleci getting sh: 12: redis-server: not found in CircleCI 2.0 using docker

Bounty: 50

I am having issues with Redis in CircleCI. Here is the entire circle.yml

version: 2
jobs:
  build:
    working_directory: ~/DIR_NAME
    docker:
      - image: circleci/ruby:2.4.1-node
        environment:
          RAILS_ENV: continous_integration
          PGHOST: 127.0.0.1
          PGUSER: rails_test_user

      - image: circleci/postgres:9.6.3-alpine
        environment:
          POSTGRES_USER: rails_test_user
          POSTGRES_PASSWORD: ""
          POSTGRES_DB: continous_integration

      - image: redis:3.2.6
        environment:
          POSTGRES_USER: root

    steps:
      - checkout

      - restore_cache:
          keys:
            - DIR_NAME-{{ checksum "Gemfile.lock" }}
            - DIR_NAME-

      - save_cache:
          key: rails-demo-{{ checksum "Gemfile.lock" }}
          paths:
            - vendor/bundle

      - run:
          name: Setup Bundler and Gems
          command: |
            gem install bundler
            gem update bundler
            gem install brakeman
            gem install rubocop
            gem install rubocop-rspec
            gem install scss_lint
            gem install eslint-rails
            gem install execjs
            bundle config without development:test
            bundle check --path=vendor/bundle || bundle install --without development test --path=vendor/bundle --jobs 4 --retry 3

      - run:
          name: Setup Utilities
          command: |
            sudo curl --output /tmp/phantomjs https://s3.amazonaws.com/circle-downloads/phantomjs-2.1.1
            sudo chmod ugo+x /tmp/phantomjs
            sudo ln -sf /tmp/phantomjs /usr/local/bin/phantomjs

      - run:
          name: Setup Postgres
          command: |
            sudo apt-get update
            sudo apt-get install postgresql-client

      - run:
          name: Build Rails Database Yaml
          command: |
            cp config/database_example.yml config/database.yml

      - run:
          name: Setup Rails Database
          command: |
            RAILS_ENV=continous_integration bundle exec rake db:drop
            RAILS_ENV=continous_integration bundle exec rake db:setup

      - run:
          name: Run Brakeman
          command: |
            RAILS_ENV=continous_integration brakeman -z

      - run:
          name: Run Rubocop
          command: |
            RAILS_ENV=continous_integration bundle exec rubocop --format fuubar --require rubocop-rspec --config .rubocop.yml

      - run:
          name: Run the SCSS Linter
          command: |
            RAILS_ENV=continous_integration bundle exec scss-lint --config=config/scsslint.yml

      - run:
          name: Run the Eslint Linter for JS
          command: |
            RAILS_ENV=continous_integration bundle exec rake eslint:run_all

      - run:
          name: Run Rspec
          command: |
            RAILS_ENV=continous_integration bundle exec rspec --format RspecJunitFormatter -o /tmp/test-results/rspec.xml

      - store_test_results:
          path: /tmp/test-results
Postgres connects to redis fine, but when it gets to the test suite it hangs with sh: 12: redis-server: not found for varying amounts of time before exiting, or very intermittently actually succeeding but with this output:

cat: /home/circleci/DIR_NAME/tmp/pids/redis-test.pid: No such file or directory
kill: invalid argument Q

The above output still points to a problem, because something is not resolving correctly. Any insight would be greatly appreciated!
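
The error reads as if something in the test suite shells out to a local redis-server binary, which does not exist in the circleci/ruby image; the Redis that is available is the sidecar started from redis:3.2.6, reachable on 127.0.0.1:6379. Two hedged directions (step placement and the variable name are assumptions about this project):

# Option 1: install the binary in the primary container so the suite can spawn it
sudo apt-get update && sudo apt-get install -y redis-server

# Option 2: stop spawning redis in tests and point them at the sidecar instead
export REDIS_URL=redis://127.0.0.1:6379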



#StackBounty: #docker #deployment #ansible Using ansible to deploy dockerized testing environment and plain ubuntu for production

Bounty: 50

I need some help with deploying a system I have been working on for a year and a half now. So that you can understand my concern, I will explain a little about our infrastructure.

We have a server (let’s call it TESTING_SERVER) where we have different testing environments for our system. Each of these environments runs entirely on Docker. Each instance of a testing environment consists of:
1. Docker container with nginx acting as a proxy
2. Docker container with the Django web app
3. Docker container with mysql

Every time we need to build a new environment for testing purposes (i.e. QA wants to test a new feature), we use an ansible playbook which runs these tasks on TESTING_SERVER:

  1. Create a docker network
  2. Create database container
  3. Clone or update django git repo somewhere in TESTING_SERVER
  4. Create django container
  5. Run django collectstatic command inside django container
  6. Run django migrate command inside django container
  7. Create nginx container

In our production environment we have a plain Ubuntu server (PRODUCTION_SERVER) running mysql, django and nginx. Every time we have to deploy to production, we run an ansible playbook that (almost) repeats the steps listed above:

  1. check mysql connection (db is in another server)
  2. Clone or update django git repo somewhere in PRODUCTION_SERVER
  3. Check and restart gunicorn (the equivalent of creating the django container)
  4. run django collectstatic
  5. run django migrate
  6. check nginx configuration

These two playbooks are different, although they have a lot in common. I was thinking of converting each step into an ansible task and using a conditional to decide which variant (dockerized or direct) should run. But I would still have different tasks for each step (a single playbook, but it seems a little tricky).

My question is: is there a way to “merge” these playbooks to have just one without repeating ourselves?
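
For what it’s worth, one common way to get down to a single playbook is to keep one task list and gate each variant on a boolean; the task names, variables and paths below are hypothetical, not taken from the real playbooks:

# group_vars: dockerized: true for TESTING_SERVER, false for PRODUCTION_SERVER
- name: Run django migrate (docker)
  command: docker exec {{ django_container }} python manage.py migrate
  when: dockerized

- name: Run django migrate (plain host)
  command: "{{ project_dir }}/venv/bin/python manage.py migrate"
  when: not dockerized

Steps that are genuinely identical (cloning/updating the git repo, collectstatic) then need only one task each, and only the docker-vs-gunicorn/nginx pieces stay duplicated behind the conditional.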



#StackBounty: #docker #docker-compose Docker compose port forwarding not working properly

Bounty: 50

When I use docker with the very simple command:

docker run -p 80:80 nginx

Port forwarding works properly and I get the nginx ‘welcome page’ when I go to localhost:80 using a browser/curl.

At the same time, when I use a very similar but docker-compose-specific config:

version: '3'
services:
  nginx:
    image: nginx
    ports:
     - "80:80"

And when I do docker-compose up and go to the browser, I see infinite loading, so it looks like port forwarding is not configured properly, but I can’t understand what is wrong in the config.
I tried different browsers and curl and get the same result – infinite loading.

Nginx here is just an example because of its simplicity; in fact I have the same issue with redis/mysql/java images, so the issue isn’t related to nginx.

I’ve also tried the following ways to start the container via docker-compose:

docker-compose run -p 80:80 nginx

docker-compose run --service-ports nginx

but no luck, I’ve got the same result.
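
For what it’s worth, a quick way to tell whether the published mapping or the container itself is at fault (a generic debugging sketch, nothing specific to this setup):

# What host port docker actually published for the compose service
docker port "$(docker-compose ps -q nginx)" 80

# The container's port bindings as the engine sees them
docker inspect --format '{{json .NetworkSettings.Ports}}' "$(docker-compose ps -q nginx)"

# And the request from the host side
curl -v http://127.0.0.1:80/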

In both cases (docker run and docker-compose up) I have the same network driver type – bridge.

I’ve compared results of docker inspect <container id> for both cases: http://i.prntscr.com/obvxi0yESEa92znLDEu_PA.png

And results of docker inspect <network id>:
http://i.prntscr.com/yyTpetvJSXa-dz4o9Pcl3w.png

ifconfig docker0 results:

docker0   Link encap:Ethernet  HWaddr 02:42:f1:9a:b6:72  
          inet addr:172.17.0.1  Bcast:0.0.0.0  Mask:255.255.0.0
          inet6 addr: fe80::42:f1ff:fe9a:b672/64 Scope:Link
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:174 errors:0 dropped:0 overruns:0 frame:0
          TX packets:837 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:47434 (47.4 KB)  TX bytes:107712 (107.7 KB)

brctl show results:

bridge name     bridge id               STP enabled     interfaces
br-f7adc3956101         8000.02427f870e7f       no
docker0         8000.0242f19ab672       no

ifconfig on the host machine results: https://pastebin.com/6ufWeYTE

route on the host machine results:

Destination     Gateway         Genmask         Flags   Metric  Ref     Use  Iface
default         gateway         0.0.0.0         UG      600     0       0    wlp4s0
link-local      0.0.0.0         255.255.0.0     U       1000    0       0    docker0
172.17.0.0      0.0.0.0         255.255.0.0     U       0       0       0    docker0
192.168.0.0     0.0.0.0         255.255.255.0   U       600     0       0    wlp4s0

Both docker and docker-compose were installed using the official sites’ instructions for Linux.

Host OS: Ubuntu 17.04

UPDATE:
I’ve tried setting the ‘attachable’ network property in the compose config and the issue was fixed, though it’s still unclear why that happens.

networks:
  default:
    attachable: true

