#StackBounty: #docker #ssh Accessing Files on a Windows Docker Container Easily

Bounty: 50


So I’m trying to figure out a way to use Docker to spin up testing environments for customers fairly easily. Basically, I’ve got a customized piece of software that I want to install to a Windows Docker container (microsoft/windowsservercore), and I need to be able to access the program folder for that software (C:\Program Files\SOFTWARE_NAME), as it contains logs, imports/exports, and other miscellaneous configuration files. The installation part was easy, and I figured that out after a few hours of messing around with Docker and learning how it works, but transferring files in a simple manner is proving far more difficult than I would expect. I’m well aware of the docker cp command, but I’d like something that allows the files to be viewed in a file browser so testers can quickly and easily view log/configuration files from the container.

Background (what I’ve tried):

I’ve spent 20+ hours monkeying around with running an SSH server on the Docker container so I could just SSH in and move files back and forth, but I’ve had no luck. I’ve spent most of my time trying to configure OpenSSH, and I can get it installed, but there appears to be something wrong with the default configuration file provided with my installation, as I can’t get it up and running unless I start it manually from the command line by running sshd -d. Strangely, this runs just fine, but it isn’t really a viable solution, as it is running in debug mode and shuts down as soon as the connection is closed. I can provide more detail on what I’ve tested with this, but it seems like it might be a dead end (even though I feel like this should be extremely simple). I’ve followed every guide I can find (though half are specific to Linux containers) and haven’t gotten any of them to work, and half the posts I’ve found just say “why would you want to use SSH when you can just use the built-in Docker commands?” I want to use SSH because it’s simpler from an end user’s perspective, and I’d rather tell a tester to SSH to a particular IP than make them interact with Docker via the command line.

EDIT: Using OpenSSH

Starting the server with net start sshd reports success; however, the service stops immediately unless I have generated at least an RSA or DSA host key using:

ssh-keygen.exe -f "C:\Program Files\OpenSSH-Win64\ssh_host_rsa_key" -t rsa

And modifying the permissions using:

icacls "C:\Program Files\OpenSSH-Win64" /grant sshd:(OI)(CI)F /T


icacls "C:\Program Files\OpenSSH-Win64" /grant ContainerAdministrator:(OI)(CI)F /T

Again, I’m using the default supplied sshd_config file, but I’ve tried just about every adjustment of those settings I can find, and none of them help.
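For reference, here is roughly the minimal sshd_config I would expect to work in this setup; the host-key path mirrors where I generated the key above, but the exact file layout of OpenSSH-Win64 is an assumption on my part:

```
# Minimal sketch of an sshd_config for OpenSSH-Win64 in the container.
# The HostKey path is an assumption based on the install location used above.
Port 22
HostKey "C:\Program Files\OpenSSH-Win64\ssh_host_rsa_key"
PasswordAuthentication yes
Subsystem sftp sftp-server.exe
```

With the SFTP subsystem enabled, testers could browse C:\Program Files\SOFTWARE_NAME with a graphical client such as WinSCP instead of the command line, which is exactly the workflow I’m after. Running sshd -d first is still the quickest way to see which directive the parser rejects, since debug mode prints the offending line.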

I also attempted to set up volumes to do this, but because the installation of our software is done at image build time in Docker, the folder that I want to map as a volume is already populated with files, which seems to make Docker fail when I try to start the container with the volume attached. This section of documentation seems to say this should be possible, but I can’t get it to work. I keep getting errors when I try to start the container saying “the directory is not empty”.

EDIT: Command used:

docker run -it -d -p 9999:9092 --mount source=my_volume,destination=C:/temp my_container
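One workaround I’m experimenting with (not verified end to end yet) is mounting the volume at an empty path such as C:\share and copying the populated program folder into it when the container starts, since the “directory is not empty” error only applies to the mount target. A rough Dockerfile sketch, where SOFTWARE_NAME, C:\share, and start.ps1 are placeholders for the real paths and startup script:

```dockerfile
FROM microsoft/windowsservercore
# ... existing installation steps for SOFTWARE_NAME go here ...

# At startup, copy the already-installed program folder into the (empty)
# mounted volume, then launch the software. Paths/scripts are placeholders.
CMD ["powershell", "-Command", "Copy-Item -Recurse 'C:\\Program Files\\SOFTWARE_NAME\\*' 'C:\\share\\' -Force; & 'C:\\start.ps1'"]
```

The container would then be started with --mount source=my_volume,destination=C:\share, and the files should be browsable from the host at the volume’s mount point reported by docker volume inspect my_volume.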

Running this on a Proxmox VM.

At this point, I’m running out of ideas, and something that I feel should be incredibly simple is taking me far too many hours to figure out. It particularly frustrates me that I see so many blog posts saying “Just use the built-in docker cp command!” when that is honestly a pretty bad solution when you’re going to be browsing lots of files and viewing/editing them. I really need a method that allows the files to be viewed in a file browser/Notepad++.

Is there something obvious here that I’m missing? How is this so difficult? Any help is appreciated.

Get this bounty!!!

#StackBounty: #docker #docker-machine How to run Docker commands on remote Windows engine

Bounty: 200

I’m working on integrating Docker into our TeamCity build process so that I can create a task that runs a “docker build” to create an image from our code. Right now, all our build agents run on either Windows Server 2008 or Windows Server 2012, neither of which can run Docker. There’s a chance we can get a license for one Windows Server 2016 build machine, but I’m wondering if there’s a way to run Docker Engine on that machine while issuing docker commands from other build agents.

Here’s what I’ve considered so far:

  • Docker Toolbox: This is a way to run Docker on legacy systems, but it spins up a local VirtualBox VM running Linux, so it can only run Linux containers. I need to be able to build and run Windows containers.
  • Docker Machine: This is a way to talk to a remote Docker engine. However, according to this open bug, it appears Docker Machine is only capable of talking to remote engines on Linux hosts due to security implementations; it’s an old issue, but I can’t find any indication this limitation has been removed.
  • Docker itself uses a client/server architecture, but I couldn’t find any documentation on how to talk to a remote engine without using something like Docker Machine.
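For completeness, the engine itself can be told to listen on TCP in addition to the default named pipe, which would let the other build agents talk to the Windows Server 2016 machine directly. A sketch of daemon.json (on Windows, typically C:\ProgramData\docker\config\daemon.json); note that plain port 2375 is unauthenticated, so this is only reasonable on a trusted build network, and TLS on 2376 would be the safer variant:

```json
{
  "hosts": ["npipe://", "tcp://0.0.0.0:2375"]
}
```

The agents could then point at it with `docker -H tcp://build16:2375 build .` or by setting the DOCKER_HOST environment variable (build16 being a placeholder hostname for the 2016 machine).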

Anything else I’m missing, or am I just pretty much out of luck unless we upgrade all our build agents to Windows 10 or Windows Server 2016?


#StackBounty: #backup #docker #postgresql pg_dumpall hangs occasionally

Bounty: 100

I have a bash shell script that dumps all my postgres databases from docker:

function dump_postgres {
    mkdir -p "${BACKUP_DIR}/postgres/"
    docker ps -a --format '{{.Names}}\t{{.Ports}}' | grep 5432/tcp | awk '{ print $1 }' | while read -r line; do
        echo "extracting database from container '${line}'"
        docker exec -t "${line}" pg_dumpall -v --lock-wait-timeout=600 -c -U postgres > "${BACKUP_DIR}/postgres/${line}.sql"
    done
}

dump_postgres >> "${LOG}" 2>> "${ERROR}"

The script figures out which Docker containers are listening on the default Postgres port and dumps their databases in SQL format.
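The container-selection part can be sketched in isolation, with the docker ps output stubbed out, so the grep/awk step is checkable without a running daemon (the sample container names are made up):

```shell
# Stub of `docker ps -a --format '{{.Names}}\t{{.Ports}}'` output, so the
# filtering pipeline can be exercised without a Docker daemon.
sample_ps() {
    printf 'web_db\t0.0.0.0:5432->5432/tcp\n'
    printf 'app\t0.0.0.0:8080->80/tcp\n'
    printf 'other_db\t5432/tcp\n'
}

# Same pipeline as the backup script: keep rows exposing 5432/tcp and
# print the container-name column.
names=$(sample_ps | grep 5432/tcp | awk '{ print $1 }')
echo "$names"
```

This prints web_db and other_db, one per line, while app (which only exposes port 80) is filtered out.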

My problem is that this command suddenly stops every other day when started by cron. It just stops, and the container performing the dump does not exit. There is also no output on stderr.
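One thing I’m considering (an assumption on my part, not something I’ve verified against pg_dumpall yet) is wrapping each dump in coreutils timeout, so a hang becomes a visible non-zero exit instead of a silent stall. Its behavior is easy to demonstrate:

```shell
# `timeout` sends SIGTERM when the limit expires and exits with status
# 124, turning a silent hang into a visible error.
status=0
timeout 1 sleep 5 || status=$?
echo "status=$status"   # prints: status=124
```

Applied to the script, that would look like `timeout 3600 docker exec "${line}" pg_dumpall ...`, so a stuck dump at least lands an error in ${ERROR}. It may also be worth dropping -t, since a pseudo-TTY serves no purpose under cron.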

Do you have any idea how to solve this?


#StackBounty: #docker Starting specific task containers from inside a coordinator container

Bounty: 100

I have containers that perform specific tasks, for example running an R project command or creating the waveform of an audio file by running “docker (run|exec) run.sh”. I am looking for ways to run those from inside other containers without having to do extra work for each new task.

The current way I am thinking of solving this is to give access to the docker daemon by binding the socket inside the container. My host runs a docker container which runs an application as a user, app.

The host docker socket is mounted inside the docker container and a script is created by root, /usr/local/run_other_docker.sh.

Now, user app does not have access rights on the mounted docker socket, but is allowed to run /usr/local/run_other_docker.sh after being given passwordless access as a sudoer.

How dangerous is this?

Is there a standard/safe way of starting other task containers from inside a container without binding to the host docker socket?

The only other solution I have come across involves creating a microservice that runs in the second container for the first one to call. This is undesirable because it adds more things that need maintenance for each such use case.
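One middle ground I’ve seen suggested (not something the Docker docs prescribe, and the service names below are made up) is to keep the socket out of the coordinator container entirely and front it with a filtering proxy such as tecnativa/docker-socket-proxy, so the coordinator can only reach the API endpoints it needs:

```yaml
version: "3"
services:
  docker-proxy:
    image: tecnativa/docker-socket-proxy
    environment:
      CONTAINERS: 1           # expose only the /containers API endpoints
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
  coordinator:
    image: my-coordinator     # placeholder for the app image
    environment:
      DOCKER_HOST: tcp://docker-proxy:2375
```

This narrows the blast radius compared to sudo plus a root-owned script, but anything that can start containers can still escalate to the host, so it mitigates rather than removes the risk.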


#StackBounty: #docker Docker execution in docker container: is there a safe way to do it?

Bounty: 100

I do understand that giving access to the Docker daemon by binding the socket inside the container is a risk to begin with, but I have a scenario on which I would appreciate some insight from someone knowledgeable about how safe it is…

There is a host running a docker container which runs an application as a user, app.
The host docker socket is mounted inside the docker container and a script is created by root, /usr/local/run_other_docker.sh.

Now, user app does not have access rights on the mounted docker socket, but is allowed to run /usr/local/run_other_docker.sh after being given passwordless access as a sudoer.

How dangerous is this?
Is there a standard way of doing such a thing?

I’ve been searching, and the only solution that does not involve tricks seems to be creating a microservice that runs in the second container for the first one to call, which can be a pain as it adds more things that need maintenance for each such use case…


#StackBounty: #docker #hadoop #yarn Unable to increase Max Application Master Resources

Bounty: 50

I am using the uhopper/hadoop Docker image to create a YARN cluster. I have 3 nodes with 250 GB of RAM per node. I have added this configuration:

        - name: YARN_CONF_yarn_scheduler_minimum___allocation___mb
          value: "2048"
        - name: YARN_CONF_yarn_scheduler_maximum___allocation___mb
          value: "16384"
        - name:  MAPRED_CONF_mapreduce_framework_name
          value: "yarn" 
        - name: MAPRED_CONF_mapreduce_map_memory_mb
          value: "8192"
        - name: MAPRED_CONF_mapreduce_reduce_memory_mb
          value: "8192"
        - name: MAPRED_CONF_mapreduce_map_java_opts
          value: "-Xmx8192m"
        - name: MAPRED_CONF_mapreduce_reduce_java_opts
          value: "-Xmx8192m"

Max Application Master Resources is 10240 MB. I ran 5 Spark jobs with 3 GB of driver memory each, and 2 of the jobs never reached the RUNNING state because of the 10240 MB limit, so I am unable to fully utilize my hardware.

How can I increase the Max Application Master Resources memory?
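Assuming the image forwards these variables the same way as the ones above (which is an assumption on my part, since the underlying property belongs to the capacity scheduler), the relevant knob is yarn.scheduler.capacity.maximum-am-resource-percent, whose default of 0.1 caps the share of the queue’s capacity that Application Masters may use. Following the post’s own naming convention, that would look like:

```yaml
        - name: YARN_CONF_yarn_scheduler_capacity_maximum___am___resource___percent
          value: "0.5"
```

Raising it to 0.5 would let AMs use up to half the queue’s capacity, which should lift the 10240 MB ceiling.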


#StackBounty: #ubuntu #docker docker-compose run --rm slow startup

Bounty: 50

I get that Docker has some overhead, and I wouldn’t expect it to be as fast as a local binary, but 2 seconds of overhead? That seems like too much… Once the container is running, the execution itself seems just as fast.

$ time docker-compose run --rm php-cli php -i > /dev/null
docker-compose run --rm php-cli php -i > /dev/null  0,43s user 0,07s system 23% cpu 2,107 total

$ time php -i > /dev/null
php -i > /dev/null  0,04s user 0,01s system 98% cpu 0,050 total

Even the simple docker hello-world takes more time than I would think is appropriate.

time docker run --rm hello-world > /dev/null
docker run --rm hello-world > /dev/null  0,07s user 0,02s system 9% cpu 0,869 total

I tried stracing the command, and it hangs on wait4 most of the time (which I guess is waiting for the Docker daemon’s response? I’m not a pro, so please correct me). Here is partial output if that helps: https://pastebin.com/pdA63zBi.

Is this expected behavior or is something wrong with my setup?


#StackBounty: #systemd #dns #docker #systemd-resolved How to allow systemd-resolved to listen to an interface other than loopback?

Bounty: 100

systemd-resolved is a daemon that, among other things, acts as a DNS server by listening on an IP address on the local loopback interface.

I would like to let the daemon listen to another interface. My use-case is to expose it to docker containers, so that docker containers share the DNS caching provided by systemd-resolved. I know how to configure the host as a DNS server for docker containers, but at least by default, systemd-resolved rejects these DNS queries because they are not coming from the loopback interface, but from the docker bridge interface.

With dnsmasq (a tool similar to systemd-resolved), I did this by adding listen-address= to the configuration file. Unfortunately, I couldn’t find a systemd-resolved equivalent.

Is there a way to configure which interface systemd-resolved listens on?
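For what it’s worth, newer systemd releases (247 and later) added exactly this knob: DNSStubListenerExtra= in resolved.conf; on older versions I haven’t found an equivalent. The address below is the usual docker0 bridge default, so treat it as an assumption about your setup:

```
# /etc/systemd/resolved.conf
[Resolve]
DNSStubListenerExtra=172.17.0.1
```

After a `systemctl restart systemd-resolved`, the stub resolver should also answer queries arriving on the bridge address.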


#StackBounty: #virtualbox #symbolic-link #docker Docker Symlink from shared folder in virtualbox

Bounty: 50

I’m working on a test server on my computer, where I have installed Ubuntu Server as a VM in VirtualBox.

  • VirtualBox 5.2.8 r121009
  • Ubuntu 17.10, kernel 4.13.0-21-generic
  • Docker version 18.04.0-ce, build 3d479c0

I have created two Shared folders.

root@docker:/var/lib/docker# ls /media/ -l
totalt 4
drwxr-xr-x 2 root root   4096 mai    8 23:15 cdrom
drwxrwx--- 1 root vboxsf    0 mai    8 23:46 sf_docker-compose
drwxrwx--- 1 root vboxsf    0 mai    9 00:17 sf_docker-volumes 

When I do

service docker stop && 
rm -fr /var/lib/docker/volumes && 
ln -s /media/sf_docker-volumes /var/lib/docker/volumes && 
service docker restart && 
docker ps

I get the following error

Cannot connect to the Docker daemon at unix:///var/run/docker.sock. 
Is the docker daemon running?

So I revert back to the defaults:

service docker stop && 
rm -fr /var/lib/docker/volumes && 
service docker restart && 
docker ps

And everything is working again.

So my question is: how can I fix the permission issues I get from using a symlink for the /var/lib/docker/volumes/ folder?
I’m sure the issue is that the group owner of the symlink is vboxsf and not root, but I can’t seem to manage to change that.


#StackBounty: #ubuntu #nginx #docker Update fastcgi_pass in nginx conf with docker container IP on startup

Bounty: 50

We have the following setup:

We host multiple websites on an Ubuntu server, most of them running PHP 5.6. One of them runs inside a Docker container with PHP 7.1.

The nginx conf for this website has the following line:


which points to the IP of the docker container, which we get from

docker inspect <container>|grep IP

The problem is that whenever the system restarts, the container gets a new IP assigned, and we have to copy it into the nginx conf again and restart nginx. How could we do this automatically?
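One stopgap is a boot script that re-inspects the container and rewrites the conf. In the sketch below the docker inspect call is stubbed with a hard-coded IP so the substitution itself can be sanity-checked without a daemon; the container name, conf path, and port 9000 are placeholders:

```shell
# Normally: IP=$(docker inspect -f '{{.NetworkSettings.IPAddress}}' <container>)
IP="172.17.0.5"

# Rewrite the fastcgi_pass line in a sample conf snippet.
conf='fastcgi_pass 172.17.0.2:9000;'
new_conf=$(printf '%s\n' "$conf" | sed "s/fastcgi_pass .*/fastcgi_pass ${IP}:9000;/")
echo "$new_conf"   # prints: fastcgi_pass 172.17.0.5:9000;
```

In a real script, the sed would run in-place against the site’s conf file, followed by `nginx -s reload`. The cleaner long-term fix is probably a user-defined Docker network, where the container can be given a fixed --ip (or addressed by a stable DNS name from other containers), so the conf never needs to change.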

Thank you!

