#StackBounty: #windows #networking #vpn #windows-server-2016 #rras No services reachable on RRAS server after successful VPN dialin

Bounty: 100

We have the following configuration in place:

LAN SIDE:

Local Clients #N

Local Server A (DC + RRAS)

Local Server B (DC + RRAS)

Local Servers C…Z

ROUTER/FIREWALL

PUBLIC SIDE:

Foreign Clients 1, 2, …, #N


Servers A & B are both up-to-date (Aug ’17) Windows Server 2016 Standard.

We have this issue on both servers, so we’ll take Server A as the example for simplicity.

**RRAS configuration:**

Server A has just one NIC, and it’s configured as follows:

1 physical NIC

Ethernet IF: 192.168.12.41/255.255.255.0

Alias IF: 192.168.12.38/255.255.255.255 (this has been created by the RRAS server)

The RRAS server’s ports (both PPTP and SSTP) are forwarded from their public IP to the Ethernet IF’s IP by the router.
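
(Not from the original post: a minimal sketch of how this layout can be confirmed on Server A itself from an elevated command prompt; the filtered ports below are just examples.)

rem Sketch: confirm the addresses and routes RRAS has added on Server A.
ipconfig /all
rem   -> should list both 192.168.12.41 and the RRAS-created 192.168.12.38
route print -4
rem   -> look for the /32 host routes RRAS adds for dial-in clients (e.g. 192.168.12.143)
netstat -ano -p tcp | findstr ":3389 :445 :53"
rem   -> check whether the services listen on 0.0.0.0 or are bound to a single address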

Foreign Clients’ configuration:

Local and foreign clients are all Windows 7 to Windows 10 Professional; there is no further relevant information beyond the ipconfig/nmap output provided below.

**Issue representation**

Foreign Clients can successfully connect to the LAN through Server A’s RRAS.
But when they do, at the networking level, they can connect to anything BUT Server A.
The problem was first noticed when a client connected to Server A’s RRAS wasn’t able to RDP to Server A. Then we found that no services are reachable on Server A, while you can connect to any client and to Servers B to Z.

If the connection is made the other way around (Foreign Client connects to Server B’s RRAS) it will connect to anything in the LAN BUT Server B.

Neither UDP nor TCP ports are reachable through the RRAS server you’re connected to:

Here’s an nslookup from a Foreign Client to Servers A and B while connected to Server A:

C:\Users\user>nslookup site.customer.com 192.168.12.41
DNS request timed out.
    timeout was 2 seconds.
Server:  UnKnown
Address:  192.168.12.41

DNS request timed out.
    timeout was 2 seconds.
DNS request timed out.
    timeout was 2 seconds.
DNS request timed out.
    timeout was 2 seconds.
DNS request timed out.
    timeout was 2 seconds.
*** Request to UnKnown timed-out

C:\Users\user>nslookup site.customer.com 192.168.12.42
Server:  moon.site.customer.com
Address:  192.168.12.42

Name:    site.customer.com
Addresses:  192.168.12.41
          192.168.12.42

Here’s an ipconfig /all of the Foreign Client after a successful connection to Server A:

C:\Users\user>ipconfig /all
...
PPP adapter CUSTOMER:

   Connection-specific DNS Suffix  . : site.customer.com
   Description . . . . . . . . . . . : CUSTOMER
   Physical Address. . . . . . . . . :
   DHCP Enabled. . . . . . . . . . . : No
   Autoconfiguration Enabled . . . . : Yes
   IPv4 Address. . . . . . . . . . . : 192.168.12.143(Preferred)
   Subnet Mask . . . . . . . . . . . : 255.255.255.255
   Default Gateway . . . . . . . . . :
   DNS Servers . . . . . . . . . . . : 192.168.12.42
                                       192.168.12.41
   NetBIOS over Tcpip. . . . . . . . : Disabled
...

Here’s a tracert to Server A:

C:\Users\user>tracert -w 100 sun.site.customer.com

Tracing route to sun.site.customer.com [192.168.12.41]
over a maximum of 30 hops:

  1    16 ms     *       14 ms  192.168.12.38
  2    15 ms    15 ms    14 ms  192.168.12.41

Trace complete.

Here’s a tracert to Server B:

C:\Users\user>tracert -w 100 moon.site.customer.com

Tracing route to moon.site.customer.com [192.168.12.42]
over a maximum of 30 hops:

  1     *       29 ms    26 ms  192.168.12.38
  2    23 ms    29 ms    31 ms  192.168.12.42

Trace complete.

Nmap Server B:

C:\Users\user>nmap --unprivileged -P0 -F 192.168.12.42
Starting Nmap 7.60 ( https://nmap.org ) at 2017-08-29 17:54 W. Europe Daylight Time
Nmap scan report for moon.site.customer.com (192.168.12.42)
Host is up (1.1s latency).
Not shown: 89 closed ports
PORT     STATE SERVICE
53/tcp   open  domain
80/tcp   open  http
88/tcp   open  kerberos-sec
135/tcp  open  msrpc
139/tcp  open  netbios-ssn
389/tcp  open  ldap
443/tcp  open  https
445/tcp  open  microsoft-ds
1433/tcp open  ms-sql-s
1723/tcp open  pptp
3389/tcp open  ms-wbt-server

Nmap done: 1 IP address (1 host up) scanned in 32.27 seconds

Nmap Server A:

C:\Users\user>nmap --unprivileged -e ppp0 -P0 -F 192.168.12.41
Starting Nmap 7.60 ( https://nmap.org ) at 2017-08-29 17:56 W. Europe Daylight Time
Nmap scan report for sun.site.customer.com (192.168.12.41)
Host is up.
All 100 scanned ports on sun.site.customer.com (192.168.12.41) are filtered

Nmap done: 1 IP address (1 host up) scanned in 46.63 seconds

As you can see, name resolution works fine, but TCP/UDP connections to Server A fail when connected to Server A’s RRAS, and likewise connections to Server B fail when connected to Server B’s RRAS.

Neither RRAS nor any other service on the servers or foreign clients logs anything relevant, and I think that’s normal since the problem is at the networking level.

We actually found that, for Foreign Clients, the second auto-created internal IP of the RRAS server does have Server A’s services published and reachable:

C:\Users\user>nmap --unprivileged -P0 -F 192.168.12.38
Starting Nmap 7.60 ( https://nmap.org ) at 2017-08-29 18:26 W. Europe Daylight Time
Nmap scan report for 192.168.12.38
Host is up (0.19s latency).
Not shown: 90 filtered ports
PORT     STATE SERVICE
21/tcp   open  ftp
80/tcp   open  http
88/tcp   open  kerberos-sec
135/tcp  open  msrpc
139/tcp  open  netbios-ssn
389/tcp  open  ldap
443/tcp  open  https
445/tcp  open  microsoft-ds
1723/tcp open  pptp
3389/tcp open  ms-wbt-server

Nmap done: 1 IP address (1 host up) scanned in 38.33 seconds

Question: what can be done to diagnose and prevent this symptom?
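
(Not part of the original question, but a hedged starting point for the diagnosis, run on Server A while a foreign client is connected; the trace file path is just a placeholder.)

rem Sketch: rule the host firewall in or out, then capture what actually arrives.
netsh advfirewall monitor show firewall
rem   -> note which profile is active; dial-in traffic may be evaluated against an unexpected profile
netsh advfirewall set allprofiles state off
rem   -> only briefly, to test whether the foreign client can suddenly reach Server A; re-enable with "state on"
netsh trace start capture=yes tracefile=C:\temp\rras.etl
rem   ... reproduce the failing RDP/DNS attempt from the foreign client ...
netsh trace stop
rem   -> inspect the .etl (e.g. with Message Analyzer) to see whether the SYNs reach the dial-in
rem      interface and which interface/address the replies leave from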


Get this bounty!!!

#StackBounty: #networking #windows-10 #vpn #openvpn #windows-server-2012-r2 Windows using VPN even when using IP address to access share

Bounty: 100

I’m connecting from my Windows 10 machine to a Windows Server 2012 R2 machine in the same subnet. I also have an OpenVPN connection that goes to the office and the server also has its own connection.

When I’m accessing files on the server, sometimes the connection goes through the VPN without me noticing. Of course this is not what I want, since I have a gigabit connection to it on the local network and a much slower one through the VPN.

The strange thing about this is that I have set the server name in the hosts file to point to the local IP. And even stranger: even if I type \\192.168.23.45\share into the Explorer address bar, the connection will actually go through the VPN!

The only way I can get it to work properly is to disable the VPN, access files and then maybe enable VPN.

Is there some way to tell Windows that it should never attempt to use the VPN address for that server and always use the local network address?

The metrics for both routes are 276; this might explain why it doesn’t favor the local route, but it doesn’t explain why it won’t use the IP address I tell it to use. I have also tried setting the metric in the OpenVPN configuration lower or higher, but this doesn’t change anything.

Local network is 192.168.23.0/24 and VPN network is 10.12.34.0/24 so they are completely separate. No IPv6 on the VPN, local network has the local IPv6 addresses.
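
(A sketch, not from the post: one way to see which interface and source address Windows actually selects for that destination; 192.168.23.45 is the example server address from above.)

# From an elevated PowerShell prompt on the Windows 10 client:
Find-NetRoute -RemoteIPAddress 192.168.23.45               # shows the chosen interface, source IP and route
Get-NetRoute -DestinationPrefix "192.168.23.0/24" | Sort-Object RouteMetric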

I can also stop OpenVPN while transferring files or doing whatever with the file shares. The Windows 10 machine will just wait a moment, then switch to the non-VPN connection and continue. And if I restart OpenVPN, the transfers will switch back to the VPN connection.

The server has also now been upgraded to 2016 but that hasn’t changed anything. The problem is somehow in the Windows 10 machine. This also doesn’t happen at all from another Windows 10 machine in the same subnet, same domain with the same OpenVPN configuration.
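
(Not something the post mentions, so treat this as a guess rather than a diagnosis: SMB Multichannel can keep or add connections over a second interface even when the share is addressed by its LAN IP, which would match the "switches back to the VPN" behaviour. A sketch from an elevated PowerShell prompt on the Windows 10 client; the server name and interface index are placeholders.)

# See which interfaces SMB is really using, then optionally pin it to the LAN NIC.
Get-SmbConnection                  # active SMB sessions and the server names in use
Get-SmbMultichannelConnection      # client/server interface pairs per connection
Get-NetAdapter                     # note the ifIndex of the local LAN adapter

# Hypothetical constraint: only use ifIndex 12 (the LAN NIC) when talking to this server.
New-SmbMultichannelConstraint -ServerName "fileserver" -InterfaceIndex 12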


Get this bounty!!!

#StackBounty: #networking #amazon-ec2 #amazon-web-services #virtualization #amazon-vpc Are multiple ENIs ever required for AWS EC2 inst…

Bounty: 50

AWS allows you to attach multiple elastic network interfaces (ENIs) to an EC2 instance. Other than “making it look like an on-prem server”, are there any cases where multiple ENIs are actually required?

I’ve considered the reasons one would do this in an on-prem environment, but none of these seem to apply to AWS:

  • Link aggregation
  • Link redundancy
  • Separate management interfaces
  • In-line IDS/IPS
  • In-line firewall

The AWS implied router always “sits” between each ENI and everything else, so it isn’t possible to place another instance (running, say, a sniffer) in-line.

Amazon’s own documentation isn’t even clear on why you’d want multiple ENIs on an instance. It just says multiple interfaces are “useful when you want to:”

  • Create a management network.
  • Use network and security appliances in your VPC.
  • Create dual-homed instances with workloads/roles on distinct subnets.
  • Create a low-budget, high-availability solution.

But it doesn’t explain why ENIs are required or even desirable for those use-cases. (It’s obvious multiple ENIs would be required for dual-homed instances on different subnets, but it doesn’t explain why you’d ever want a dual-homed instance in the first place).

The only use-case I can come up with is an instance running containers (e.g. Docker) where you want to map individual containers to host IP addresses in different subnets.
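
(For concreteness only, a sketch of what attaching a second interface for that use-case looks like with the AWS CLI; every ID below is a placeholder and none of this comes from the question.)

# Create an ENI in a second subnet and attach it to the instance as eth1 (device-index 1).
aws ec2 create-network-interface \
    --subnet-id subnet-0123456789abcdef0 \
    --groups sg-0123456789abcdef0 \
    --description "secondary interface for per-container host IPs"

aws ec2 attach-network-interface \
    --network-interface-id eni-0123456789abcdef0 \
    --instance-id i-0123456789abcdef0 \
    --device-index 1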

What are the use-cases for multiple ENIs, if any?


Get this bounty!!!

#StackBounty: #networking #nginx #docker #docker-compose #docker-swarm Docker Swarm Mode network and load balancing doesn't work fo…

Bounty: 50

My setup

Two nodes (2GB RAM, 2 vCPU) running docker engine (v17.06.1-ce): one swarm manager and one worker. Internal network bandwidth: 10Gbps. All files and databases are located outside this docker cluster (AWS S3 and separate instances for the databases).

What I am trying to achieve?

I am trying to create a docker-based “platform” where I push stateless services and docker handles load balancing, updates, etc. Besides this, I am also trying to set up a reverse proxy and allow specific services to have access to this proxy.

What I have done so far?

Firstly, I created an overlay network and called it “public” (10.0.9.0/24). Then I created an nginx service in “global” mode. The service itself is attached to the “public” network. I checked both my worker and swarm nodes and the service runs on both of them without a problem.

Secondly, I created a docker compose file for fast deployment of multiple services. For the sake of my testing, I kept one service per compose file:

version: '3.3'
services:
  web:
    image: app1_image:latest
    networks:
      - public
networks:
  public:
    external:
      name: public

For the second service, I just changed the image name and kept everything else the same. Ran both “stacks”:

docker stack deploy --with-registry-auth --compose-file compose1.yml app1
docker stack deploy --with-registry-auth --compose-file compose2.yml app2

After inspecting both services, I see that both are on the “public” overlay network with IPs such as 10.0.9.5 (app1_web) and 10.0.9.6 (app2_web). app1_web is created on the swarm (manager) node and app2_web on the worker node.

So I created two nginx config files, one per service, in the following way:

server {
    listen 80;
    server_name app1.example.com;
    location / {
        proxy_pass http://app1_web; # This line is important
        # Other proxy parameters
    }
}

As you can see, I am using the service name in the nginx configuration. For easier config management I use docker configs:

docker config create nginx_app1.conf app1.conf
docker config create nginx_app2.conf app2.conf

docker service update --config-add source=nginx_app1.conf,target=/etc/nginx/conf.d/app1.conf nginx_proxy
docker service update --config-add source=nginx_app2.conf,target=/etc/nginx/conf.d/app2.conf nginx_proxy

Adding these configs automatically restarts nginx services and runs them. This is all. I wanted to give you a gist of my process before moving forward.
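
(A sketch, not from the original post, of how this wiring can be double-checked before moving on; the container ID is a placeholder, and nslookup is assumed to be available inside the proxy image, e.g. via busybox in an alpine-based nginx.)

docker network inspect public            # locally attached tasks plus Peers; -v (if supported) also lists tasks from other nodes
docker service ps app1_web app2_web      # confirm which node each task actually runs on

# From inside the local nginx task (container ID is a placeholder):
docker exec <nginx-proxy-container-id> nslookup app2_web        # should return the service VIP (10.0.9.x)
docker exec <nginx-proxy-container-id> nslookup tasks.app2_web  # should return the individual task IPs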

The problem

app1_web is created on the swarm node; so when I go to app1.example.com, nginx proxies my request to the service and I get a proper output. This is what I expected and I am happy with the outcome.

However, because app2_web is created on the worker node, nginx gives me an error that app2_web does not exist. So I started troubleshooting.

From the swarm node, I found the container ID of the nginx proxy and tried to run a command from it:

docker exec nginx-proxy-id ping app2_web

This gave me a “Bad address” error. So I went into compose2.yml and added ports:

ports:
  - 5380:80

When I went to swarm.example.com:5380, it basically gave me 404. However, opening the same port from worker.example.com:5380 opened app2.

I tested the same for app1. I replicated app1 using docker service scale app1=2 and the second task got created on the worker node. Then I paused the task on the swarm node using docker pause app1-id. When I went to app1.example.com, it would work half the time. I still found that weird, because I expected Docker to know the task is paused and to proxy only to the worker node, but whatever; at least it was working. Replicating app2 did not help though: I still kept getting the error that the host name does not exist. After this, I went further and told the worker node to leave the swarm with docker swarm leave, and coincidentally everything worked normally…

After spending at least 10 hours on this, I am lost as to what I am doing wrong here. For some reason, when a service is created on the worker node first, Docker doesn’t like it.
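
(Added note, not something the question reports having checked: an overlay network needs specific node-to-node ports open, and a blocked port produces exactly this kind of one-node-works behaviour. A sketch with placeholder addresses.)

# Swarm/overlay prerequisites between the two nodes:
#   2377/tcp (cluster management), 7946/tcp+udp (gossip), 4789/udp (VXLAN data plane)
nc -zv <other-node-ip> 2377
nc -zv <other-node-ip> 7946
nc -zvu <other-node-ip> 7946   # UDP checks with nc are unreliable; a capture on 4789/udp is more conclusive
# If 4789/udp is blocked, cross-node container traffic silently fails even when service DNS resolves.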

Sorry for such a long wall of text. I wanted to share all the steps I took. I would really appreciate your help.


Get this bounty!!!

#StackBounty: #networking #docker #coreos #docker-swarm Communication to containers not always possible

Bounty: 50

I have a couple of services running in a docker swarm on a single docker host. All services run in the same overlay network. These services all expose a different port on which a web server is available. The docker-host runs CoreOS (1520.0.0 Alpha channel).

Sometimes I end up in a situation in which requests made to http://docker-host.local:<port> time out. When I log in on the docker-host and make a request to localhost:<port> it also times out. However, from a shell in a different container a request to the service succeeds without issues.

docker service ls shows the correct port mappings.

The service that is not reachable is seemingly random: sometimes all are functioning correctly, sometimes one is not reachable, and sometimes it resolves itself after some time.

I have inspected the docker networks; they do not conflict with the host network.

I can reproduce this by creating a stack of nginx services, hosting the default webpage.
file: docker-compose-test.yml

version: '3.1'
services:
  nginx1:
    image: nginx:1.11.8-alpine
    networks:
      - test
    ports:
      - "10081:80"
    deploy:
      replicas: 1
      restart_policy:
        condition: on-failure

  nginx2:
    image: nginx:1.11.8-alpine
    networks:
      - test
    ports:
      - "10082:80"
    deploy:
      replicas: 1
      restart_policy:
        condition: on-failure

  nginx3:
    image: nginx:1.11.8-alpine
    networks:
      - test
    ports:
      - "10083:80"
    deploy:
      replicas: 1
      restart_policy:
        condition: on-failure

  nginx4:
    image: nginx:1.11.8-alpine
    networks:
      - test
    ports:
      - "10084:80"
    deploy:
      replicas: 1
      restart_policy:
        condition: on-failure

  nginx5:
    image: nginx:1.11.8-alpine
    networks:
      - test
    ports:
      - "10085:80"
    deploy:
      replicas: 1
      restart_policy:
        condition: on-failure

  nginx6:
    image: nginx:1.11.8-alpine
    networks:
      - test
    ports:
      - "10086:80"
    deploy:
      replicas: 1
      restart_policy:
        condition: on-failure

  nginx7:
    image: nginx:1.11.8-alpine
    networks:
      - test
    ports:
      - "10087:80"
    deploy:
      replicas: 1
      restart_policy:
        condition: on-failure

  nginx8:
    image: nginx:1.11.8-alpine
    networks:
      - test
    ports:
      - "10088:80"
    deploy:
      replicas: 1
      restart_policy:
        condition: on-failure

  nginx9:
    image: nginx:1.11.8-alpine
    networks:
      - test
    ports:
      - "10089:80"
    deploy:
      replicas: 1
      restart_policy:
        condition: on-failure
networks:
  test:

This script will deploy the stack, test availability and take down the stack until the error situation is reached.
file: test-docker-swarm.sh

#!/bin/bash

DOCKER_HOST=$1
fail=0

while [[ ${fail} -eq 0 ]] ; do
  docker -H ${DOCKER_HOST} stack deploy -c docker-compose-test.yml test
  sleep 15

  for i in $(seq 1 9) ; do
    request="http://${DOCKER_HOST}:1008${i}"
    echo "making request: ${request}"
    curl -s -o /dev/null --max-time 2 ${request}
    if [[ $? -ne 0 ]] ; then
        echo request failed: ${request}
        fail=1
    fi
  done

  if [[ ${fail} -eq 0 ]] ; then
      docker -H ${DOCKER_HOST} stack down test

    while [[ $(docker -H ${DOCKER_HOST} network ls --filter 'name=^test_' | wc -l) -ne 1 ]]; do
      echo "waiting for stack to go down"
      sleep 2
    done
  fi
done

Execute by running: `./test-docker-swarm.sh <docker-host>`

I have no clue what steps I can take to debug and resolve this. Any pointers are appreciated.
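
(Not from the original question: a hedged sketch of where to look next; ipvsadm is not part of Container Linux, so the last step assumes it is available, e.g. inside a toolbox container, and the service names follow the compose file above.)

sudo ss -lnt | grep 1008                        # dockerd should be listening on every published port (10081-10089)
docker service ps test_nginx1                   # is the task behind a failing port actually running?
journalctl -u docker --since "10 minutes ago"   # overlay / ingress errors around the failure

# The routing mesh uses IPVS inside the hidden ingress namespace; inspecting it:
sudo nsenter --net=/var/run/docker/netns/ingress_sbox ipvsadm -ln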

docker version

Client:
 Version:      17.06.1-ce
 API version:  1.30
 Go version:   go1.8.2
 Git commit:   874a737
 Built:        Tue Aug 29 23:50:27 2017
 OS/Arch:      linux/amd64

Server:
 Version:      17.06.1-ce
 API version:  1.30 (minimum version 1.12)
 Go version:   go1.8.2
 Git commit:   874a737
 Built:        Tue Aug 29 23:50:09 2017
 OS/Arch:      linux/amd64
 Experimental: false

docker info

Containers: 9
 Running: 9
 Paused: 0
 Stopped: 0
Images: 1
Server Version: 17.06.1-ce
Storage Driver: overlay2
 Backing Filesystem: extfs
 Supports d_type: true
 Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
 Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: active
 NodeID: x06mlhlwqyo3dg4lmigy18z1q
 Is Manager: true
 ClusterID: qy022nd3bjn1157sxcc6qzr9n
 Managers: 1
 Nodes: 1
 Orchestration:
  Task History Retention Limit: 5
 Raft:
  Snapshot Interval: 10000
  Number of Old Snapshots to Retain: 0
  Heartbeat Tick: 1
  Election Tick: 3
 Dispatcher:
  Heartbeat Period: 5 seconds
 CA Configuration:
  Expiry Duration: 3 months
  Force Rotate: 0
 Root Rotation In Progress: false
 Node Address: 10.255.11.40
 Manager Addresses:
  10.255.11.40:2377
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 6e23458c129b551d5c9871e5174f6b1b7f6d1170
runc version: 810190ceaa507aa2727d7ae6f4790c76ec150bd2
init version: v0.13.2 (expected: 949e6facb77383876aeff8a6944dde66b3089574)
Security Options:
 seccomp
  Profile: default
 selinux
Kernel Version: 4.13.0-rc7-coreos
Operating System: Container Linux by CoreOS 1520.0.0 (Ladybug)
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 5.776GiB
Name: fqfs-development
ID: RCNI:3ZUR:LTDA:ABIB:EYEW:HCIY:H2RC:XDNT:LC77:BMQH:FKXI:T6YZ
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Experimental: false
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false


Get this bounty!!!

#StackBounty: #networking #vpn #routing #openvpn OpenVPN: Allow server to reach client without redirecting all client traffic over VPN?

Bounty: 200

So I’ve set up a reasonably basic tun OpenVPN server, and am having trouble getting it so that the server can communicate with all of the connected clients.

I currently have two sets of clients, some that don’t use the VPN to connect to the internet (just to talk to the other clients), and some that use redirect-gateway to send all of their traffic over the VPN.

With how I have it set up, all of the connected clients can communicate with the server, and with the other clients. However, from the server, I can only reach (e.g. ping) the clients that are using redirect-gateway to send all of their traffic through the VPN. The clients not using that config can ping the server, but the server cannot ping back (they don’t respond to it and it times out).

How can I set up the routing so that the server can still communicate with clients even if they don’t use the VPN as their default gateway?

Here’s the relevant server config:

port 1194
proto udp
dev tun
topology subnet
push "topology subnet"
server 10.7.0.0 255.255.255.0
ifconfig-pool-persist /etc/openvpn/ipp.txt
client-config-dir /etc/openvpn/ccd
client-to-client
keepalive 10 120
cipher AES-256-CBC
comp-lzo
user nobody
group nobody
persist-key
persist-tun
explicit-exit-notify 1

In the client config directory on the server, each client has a file like this (just to give each a static IP):

ifconfig-push 10.7.0.10 255.255.255.0

The relevant bits of the local client config:

client
dev tun
proto udp
remote {server's public ip} 1194
float
keepalive 15 60
ns-cert-type server
key-direction 1
tun-mtu 1500
cipher AES-256-CBC
keysize 256
comp-lzo yes
nobind

The clients that are using the VPN for internet access add redirect-gateway def1 bypass-dhcp to their config.

I’m using ufw for my server’s firewall – here’s the relevant config (in /etc/ufw/before.rules):

*nat
:POSTROUTING ACCEPT [0:0]
-A POSTROUTING -s 10.7.0.0/8 -j SNAT --to-source {server's public ip}

As this is running on an OpenVZ VPS, I cannot use MASQUERADE, but the above seems to work just as well.

Any ideas on how to set this up properly? Thanks in advance. If it matters, the server is running CentOS.
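
(A sketch of how the failing path could be narrowed down, not a fix; 10.7.0.10 is the example client address from the ccd file above, and the client-side commands assume a Linux client, otherwise route print and a packet capture are the equivalents.)

# On the server, in one terminal: watch the tunnel.
sudo tcpdump -ni tun0 icmp
# In a second terminal: ping a client that does NOT use redirect-gateway.
ping -c 3 10.7.0.10

# On that client: confirm it has a route for the whole VPN subnet and see whether the request arrives.
ip route show | grep 10.7.0
sudo tcpdump -ni tun0 icmp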


Get this bounty!!!