#StackBounty: #nginx #proxy #reverse-proxy #attacks Nginx Redirection Giving unexpected response

Bounty: 50

I’m using Google Compute Engine as a proxy server running nginx. I make several POST requests to it, and it redirects them to a third-party server.

The issue is that, as of today, I started getting an unexpected response to all of my POST requests through the proxy server – “hello Guest, How Can I help You?”
However, making a request directly to the third-party server gives the proper response, and restarting the nginx server fixed the issue.

So, is my server compromised, or is this message generated by nginx?
And if it is compromised, how can I avoid this in the future?
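Whatever the root cause turns out to be, an nginx proxy reachable from the whole internet will be found and abused quickly, so locking the proxy down to known clients is cheap insurance. A minimal sketch, with a placeholder client range and a placeholder upstream:

```nginx
# Sketch: restrict who may relay traffic through this proxy.
# 203.0.113.0/24 and thirdparty.example.com are placeholders.
server {
    listen 80;
    location / {
        allow 203.0.113.0/24;   # your own clients only
        deny  all;              # everyone else gets 403
        proxy_pass https://thirdparty.example.com;
    }
}
```

With a rule like this in place, strangers probing the proxy receive 403 instead of having their requests relayed.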

Get this bounty!!!

#StackBounty: #proxy #reverse-proxy #configuration #haproxy HAProxy reports that servers-http and servers-https is down. Does not start

Bounty: 100

I’m trying to setup haproxy for the first time, and it has been giving me a lot of trouble. Right now, when I call the haproxy file in the /etc/init.d folder to start it up, I get the following:

$ ./haproxy start
  Starting haproxy:           [FAILED]

I’ve confirmed that chef installed haproxy:

$ haproxy -v
  HA-Proxy version 1.5.18 2016/05/10
  Copyright 2000-2016 Willy Tarreau <willy@haproxy.org>

To investigate further, I used the following commands:

$ haproxy -c -f /etc/haproxy/haproxy.cfg
  [WARNING] 023/190620 (24869) : parsing [/etc/haproxy/haproxy.cfg:19] : 'option httplog' not usable with frontend 'https' (needs 'mode http'). Falling back to 'option tcplog'. 
  Configuration file is valid

$ haproxy -db -f /etc/haproxy/haproxy.cfg
  [WARNING] 023/190810 (25554) : parsing [/etc/haproxy/haproxy.cfg:19] : 'option httplog' not usable with frontend 'https' (needs 'mode http'). Falling back to 'option tcplog'.
  [WARNING] 023/190810 (25554) : Server servers-http/test001.company.org is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
  [ALERT] 023/190810 (25554) : backend 'servers-http' has no server available!
  [WARNING] 023/190811 (25554) : Server servers-https/test001.company.org is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
  [ALERT] 023/190811 (25554) : backend 'servers-https' has no server available!

I’m not sure how the server can be unavailable, as that server is the same server HAProxy is deployed to; it’s localhost, I just have the actual server name in the config file. That file is as follows:

global
  log   local0
  log   local1 notice
  #log loghost    local0 info
  maxconn 4096
  user root
  group root

defaults
  log     global
  mode    http
  retries 3
  timeout client 50s
  timeout connect 5s
  timeout server 50s
  option dontlognull
  option httplog
  option redispatch
  balance  roundrobin

# Set up application listeners here.

listen admin
  mode http
  stats uri /

frontend http
  maxconn 2000
  default_backend servers-http

frontend https
  mode tcp
  maxconn 2000
  default_backend servers-https

backend servers-http
  server test001.company.com <IP address here>:4002 weight 1 maxconn 100 check

backend servers-https
  mode tcp
  server test001.company.com <IP address here>:4003 weight 1 maxconn 100 check
  option ssl-hello-chk

I’ve also used netstat -nlp to make sure each port does not have anything running on it. I’m not sure what else I can check.


I opened another terminal just to check and confirmed that HAProxy is starting up and running on ports 4000 and 4001. However, the backend ports cannot be used. I’ve also confirmed that nothing is using these ports, using netstat -nlp | grep 4002 and netstat -nlp | grep 4003. I’ve also tried using as the IP address instead of the actual IP address, but continue to get the same error.
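“Connection refused” at layer 4 means nothing was accepting connections on ports 4002/4003 at check time, which matches the empty netstat output: the health checks fail because the backend application itself isn’t listening, not because HAProxy is misconfigured. One way to confirm this, sketched with a throwaway listener (python3 assumed to be available):

```shell
# Sketch: stand up a throwaway listener on a backend port so HAProxy's
# layer-4 check has something to connect to while debugging.
python3 -m http.server 4002 --bind 127.0.0.1 >/dev/null 2>&1 &
LISTENER_PID=$!
sleep 1
# Probe the port the same way a health check would:
code=$(python3 -c "import urllib.request; print(urllib.request.urlopen('http://127.0.0.1:4002/').getcode())")
echo "$code"
kill "$LISTENER_PID"
```

With listeners on both ports, re-running `haproxy -db -f /etc/haproxy/haproxy.cfg` should no longer report the backends as DOWN.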

Get this bounty!!!

#StackBounty: #nginx #docker #reverse-proxy #gitlab Gitlab showing 404 while running behind nginx reverse proxy, all within a docker ne…

Bounty: 100

As the title says, I’m trying to serve Gitlab through an nginx reverse proxy, with both programs being run in separate docker containers connected through a docker network. A picture as an example:

Linux Host
|                            |
| Docker                     |
|  __________________________|
| |                          |
| | Docker network (test-net)|
| |  ________________________|
| | |                        |
| | | nginx        gitlab    | Only nginx has a port bound to the host (443).
| | | |   |        |   |     | TLS is terminated at nginx as well.
| | | |   |   -->  |   |     | in my test, I have nginx running as localhost.
| | | |___|        |___|     | To access gitlab, hit https://localhost/git/
| | |________________________|
| |__________________________|

nginx runs with this docker command:

docker run -dit --network=test-net --name=nginx -p 443:443 -v "$PWD/conf":/etc/nginx:ro nginx:alpine && docker logs -f nginx


<Removed unnecessary config from here, very basic setup>
http {
    keepalive_timeout 65;
    server {
        listen 443 ssl;
        server_name localhost;
        ssl_certificate localhost.crt;
        ssl_certificate_key localhost.key;
        ssl_protocols   TLSv1.2;
        ssl_ciphers HIGH:!aNULL:!MD5;
        location /git/ {
            proxy_pass http://test/;
        }
    }
}


<only relevant parts added here>
external_url 'https://localhost'
nginx['listen_port'] = 80
nginx['listen_https'] = false
nginx['proxy_set_headers'] = {
  "Host" => "$http_host_with_default",
  "X-Real-IP" => "$remote_addr",
  "X-Forwarded-For" => "$proxy_add_x_forwarded_for",
  "X-Forwarded-Proto" => "http",
  "Upgrade" => "$http_upgrade",
  "X-Forwarded-Ssl" => "on",
  "Connection" => "$connection_upgrade"
}
nginx['custom_error_pages'] = {
  '404' => {
    'title' => '404',
    'header' => "You've been hit by !! You've been struck by ! A false URL.",
    'message' => 'Double check that URL! Is it correct?'
  }
}

docker-compose.yml for gitlab:

version: '3.7'
services:
  gitlab:
    image: 'internal-docker-repo:1234/gitlab/gitlab-ce:11.8.3-ce.0'
    restart: always
    hostname: 'test'
    container_name: test
    volumes:
      - './config:/etc/gitlab:rw'
    networks:
      - net
networks:
  net:
    external: true
    name: test-net

Internally (to docker networks) nginx is known as nginx and gitlab is known as test. I have confirmed I can ping each container from inside the other, using their container names.

As it is now, it almost works. When I go to https://localhost/git/ on my linux host I get a 404 error page from gitlab, but no login screen.

(Screenshot: the 404 screen. It’s custom, so I know GitLab is running and picked up the configuration.)

I’m obviously missing something but I’m not sure what it is. It’s hard for me to tell if it’s an NGinx configuration issue or a Gitlab configuration issue.
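The GitLab-side logs further down show the request arriving as GET / (the trailing slash in proxy_pass http://test/ strips the /git/ prefix), while GitLab itself has no idea it is meant to live under /git/. Omnibus GitLab derives its relative URL root from the path component of external_url, so one common arrangement is the following sketch (a direction to try, not a confirmed fix):

```ruby
# /etc/gitlab/gitlab.rb -- sketch: a path in external_url makes Omnibus
# GitLab serve itself under /git. Keep the existing proxy header settings.
external_url 'https://localhost/git'
nginx['listen_port'] = 80
nginx['listen_https'] = false
```

With external_url carrying the /git path, the nginx location should then forward the prefix unchanged, e.g. `proxy_pass http://test;` (no trailing URI), so GitLab receives /git/… as-is.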

Log output when I hit https://localhost/git/

nginx log output: - - [07/Jan/2020:21:28:35 +0000] "GET /git/ HTTP/1.1" 404 2289 "-" "Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Firefox/68.0"      

gitlab log output:

test      | ==> /var/log/gitlab/nginx/gitlab_access.log <==
test      | - - [07/Jan/2020:21:28:35 +0000] "GET / HTTP/1.0" 404 2289 "" "Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Firefox/68.0"
test      | 
test      | ==> /var/log/gitlab/gitlab-workhorse/current <==
test      | 2020-01-07_21:28:35.10649 test - - [2020/01/07:21:28:35 +0000] "GET / HTTP/1.1" 404 3108 "" "Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Firefox/68.0" 0.001
test      | 

Get this bounty!!!

#StackBounty: #apache-2.4 #load-balancing #reverse-proxy Is it possible to use the load balancer only at beginning of the request?

Bounty: 50

Let’s say the client requests a page www.example.com/index.html.
The DNS translates this into

Then this server, working as a load balancer (let’s say Apache with mod_proxy_balancer), redirects the request to another IP (not in the same local network)

This is the idea:

client ==> example.com ==> DNS ==> (Load Balancer) => (destination server)

Once the link has been established between client and destination,
how can I avoid having further communication between client and destination go through the middle server?

To be more precise: is it possible that, once the load balancer server has allowed the client and the destination server to “meet”/“know each other”, then:

  • further communication between them (upload/download, potentially megabytes or gigabytes!) no longer transits via the load balancer (in order to save its bandwidth)
  • all of this with example.com still being displayed in the browser

How to configure this in mod_proxy_balancer?
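mod_proxy_balancer always relays the traffic; what the question describes is redirect-based distribution, which mod_rewrite can do instead. The trade-off is exactly the one feared: after the redirect the client talks to the backend directly, so the backend’s hostname (not example.com) appears in the browser, meaning the second requirement cannot be met with a plain redirect. A sketch, with a placeholder backend:

```apache
# Sketch: distribute by redirect rather than by proxying.
# backend1.example.com is a placeholder; a RewriteMap could pick
# among several backends instead of hardcoding one.
RewriteEngine On
RewriteRule ^/(.*)$ http://backend1.example.com/$1 [R=302,L]
```

After the 302, no further bytes for that request flow through the balancer, but the address bar shows backend1.example.com.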

Get this bounty!!!

#StackBounty: #linux #iptables #nginx #reverse-proxy Nginx dropping the first client as soon as the second connects

Bounty: 50

I’m trying to configure Nginx to reverse proxy port 445, but every time client A is connected to the share through Nginx and a client B connects, client A’s connection is dropped by Nginx, even if it was actively using the share (downloading a big file, for example). It’s like Nginx is reusing the connection for client B before client A finishes with it.

user  nginx;
worker_processes  auto;

error_log  /var/log/nginx/error.log debug;
pid        /var/run/nginx.pid;

events {
    worker_connections  1024;
}

stream {

    server {
         listen 445;
         proxy_pass storage:445;
    }
}

What’s missing in the config file above to allow both clients A and B to use the share simultaneously, without dropping one connection to establish the other?

Some extra context:
Nginx v. 1.17.1 running on an Ubuntu 18.04.2 LTS virtual machine with 4 vCPUs and 4 GB of memory;

I have already tried making this control using iptables instead of Nginx to forward the connections on port 445 to the share server and the result was similar: client A has its connection dropped when B connects;

The share works fine if clients A and B connect directly to the storage share, without Nginx between them;

I have tried quite a lot of recommended configurations from the Nginx documentation (limit_conn, so_keepalive, reuseport…), but I may have misused them;

From Wireshark I see Nginx sends a [FIN, ACK] packet to client A when client B connects;

Log of Nginx when client A has its connection affected: *[error] 32110#32110: 7 recv() failed (104: Connection reset by peer) while proxying and reading from upstream… but I notice this log is related to a [RST, ACK] packet client A sends to Nginx even after the [FIN, ACK] packet it received.

Tried with the newer version 1.17.3 and no success.
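One way to narrow this down is per-connection logging in the stream block (log_format and access_log are available in the stream context since nginx 1.11.4), which shows exactly when nginx opens and closes each client/upstream pair. A sketch based on the config above:

```nginx
# Sketch: log each stream session to see whether nginx really closes
# client A's session when client B connects, and with what status.
stream {
    log_format basic '$remote_addr [$time_local] status=$status '
                     'upstream=$upstream_addr sent=$bytes_sent';
    access_log /var/log/nginx/stream.log basic;

    server {
        listen 445;
        proxy_pass storage:445;
    }
}
```

Comparing these per-session lines with the Wireshark capture should show whether the [FIN, ACK] originates from nginx itself or is relayed from the storage upstream.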

Get this bounty!!!

#StackBounty: #redirect #reverse-proxy #api-gateway Is redirection a valid strategy for an API Gateway?

Bounty: 50

I read this article about the API Gateway pattern. I realize that API Gateways typically serve as reverse proxies, but this forces a bottleneck situation. If all requests to an application’s public services go through a single gateway, or even a single load balancer across multiple replicas of a gateway (perhaps a hardware load balancer which can handle large amounts of bandwidth more easily than an API gateway), then that single access point is the bottleneck.

I also understand that it is a wide bottleneck, since the gateway only has to relay messages; the gateways and load balancers themselves are not responsible for any processing or querying. However, for a very large application with many users, one would need extremely powerful hardware for the massive bandwidth traveling over the gateway or load balancer not to be noticeable, given that every request to every microservice exposed by the gateway travels through that single access point.

If the API gateway instead simply redirected the client to publicly exposed microservices (sort of like a custom DNS lookup), the hardware requirements would be much lower. This is because the messages traveling to and from the API Gateway would be very small, the requests consisting only of a microservice name, and the responses consisting only of the associated public IP address.

I recognize that this pattern would involve greater latency due to increased external requests. It would also be more difficult to secure, as every microservice is publicly exposed, rather than providing authentication at a single entrypoint. However, it would allow for bandwidth to be distributed much more evenly, and provide a much wider bottleneck, thus making the application much more scalable. Is this a valid strategy?
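For illustration, here is a minimal sketch (in Python, with made-up service names and addresses) of the redirect-based gateway described above: the gateway answers with a 307 redirect pointing at the service’s own public address instead of proxying the traffic itself.

```python
# Hypothetical sketch of a redirect-based API gateway; service names and
# addresses are made up for illustration.
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

SERVICE_DIRECTORY = {  # service name -> publicly reachable base URL
    "users": "http://users.example.com:8081",
    "orders": "http://orders.example.com:8082",
}

class RedirectingGateway(BaseHTTPRequestHandler):
    def do_GET(self):
        # Expect paths like /users/profile; redirect to the service's host.
        service, _, tail = self.path.lstrip("/").partition("/")
        target = SERVICE_DIRECTORY.get(service)
        if target is None:
            self.send_error(404, "unknown service")
            return
        # 307 preserves the HTTP method on the follow-up request.
        self.send_response(307)
        self.send_header("Location", f"{target}/{tail}")
        self.end_headers()

    def log_message(self, fmt, *args):  # silence request logging in the demo
        pass

# Serve on an ephemeral port and issue one request without following redirects.
server = HTTPServer(("127.0.0.1", 0), RedirectingGateway)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_address[1])
conn.request("GET", "/users/profile")
resp = conn.getresponse()
status, location = resp.status, resp.getheader("Location")
print(status, location)  # 307 http://users.example.com:8081/profile
server.shutdown()
```

A 307 (rather than 302) matters here because it preserves the request method and body for non-GET API calls; note the sketch deliberately ignores the authentication concern raised above.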

Get this bounty!!!

#StackBounty: #reverse-proxy #apache2 Apache Webserver block frontend connection on reverse proxy error

Bounty: 450

We have a problem with a relatively dumb load balancer that sits in front of an Apache web server. The Apache web server is in turn a reverse proxy for an application server.

The problem starts when the application goes down, which makes Apache return an error code.

The problem is that the load balancer only considers a hard TCP error as a reason for removing a server from the pool. This means that error pages go through the load balancer to the user instead of the server simply being removed from the pool.

Is it possible to configure Apache to reject a TCP request on a backend error?

Get this bounty!!!

#StackBounty: #nginx #reverse-proxy #java #cookies Intercept location response to extract a cookie

Bounty: 50

I’m working on a project that uses nginx as a reverse proxy, and I’m trying to intercept the response from a specific location in order to extract a cookie and save it in a database.

I’m able to listen on a location and verify the validity of a token added as a header; I’m using Java to handle this. So what I’m trying to do now is intercept the response and extract a specific cookie, or alternatively extract only the cookie without intercepting the whole response.
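If capturing the cookie (rather than rewriting the response) is enough, nginx already exposes upstream response headers as variables, so the Set-Cookie value can be written to a log that an external Java process tails. A sketch with placeholder names; note that $upstream_http_set_cookie may only carry the first Set-Cookie header if the upstream sends several:

```nginx
# Sketch: log the upstream's Set-Cookie header for one location.
# /app/ and "backend" are placeholders for the real location and upstream.
log_format cookie_log '$remote_addr [$time_local] "$request" '
                      'set_cookie="$upstream_http_set_cookie"';

server {
    listen 80;
    location /app/ {
        proxy_pass http://backend/;
        access_log /var/log/nginx/cookies.log cookie_log;
    }
}
```

The Java side then only has to tail /var/log/nginx/cookies.log and insert the values into the database, without sitting in the request path at all.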

Thanks for your help.

Get this bounty!!!

#StackBounty: #linux #proxy #vmware-workstation #reverse-proxy VMWare Workstation – Win host, Linux client, corporate proxy

Bounty: 50

Here is my setup:

  • Windows 7 desktop
  • VMWare Workstation 14
  • Linux clients (Ubuntu 18.04, Mint 19, CentOS 7)
  • Use NAT networking for client VMs
  • Corporate proxy which filters all traffic going to the web

What I was able to setup for the linux clients

  • For apt and yum, I was able to configure them to go through the proxy.
  • e.g. for apt, in /etc/apt/apt.conf:
    Acquire::http::Proxy "http://DOMAINUSER:PASSWORD@PROXY.FQDN.COM:8080";
  • That works: I can update, install, …
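The apt line above has a direct shell equivalent: many (though not all) Linux network tools honor the http_proxy/https_proxy environment variables. A sketch with the same placeholders as in the apt line; note this does not solve NTLM authentication, which a corporate proxy like this one may still require:

```shell
# Sketch: the same proxy as the apt config, as environment variables that
# tools like curl, wget, git and pip honor. Placeholders as above.
export http_proxy="http://DOMAINUSER:PASSWORD@PROXY.FQDN.COM:8080"
export https_proxy="$http_proxy"
export no_proxy="localhost,127.0.0.1"
echo "$https_proxy"
```

Putting these in ~/.bashrc (or /etc/environment) makes them apply to every shell session, which is often the missing piece when apt works but other tools do not.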

What I was not able to setup

Any other network software:

  • e.g. the browser: I configured my Linux browser to use the same proxy; no luck.
  • I downloaded the wpad.dat for my Windows host and extracted the proxy name (and therefore its IP). Tried that in the browser proxy settings; no luck.
  • Same thing at the system (network proxy) level.

From what I have read up to now:

  • My browsers on Windows use the wpad.dat to figure out what proxy address to use. Then NTLM authentication. I confirmed that with Fiddler on Windows, I see NTLM authentication headers.
  • I do not understand how APT does not use NTLM authentication and still works ok.

What I have tried:

  • cntlm: I set up cntlm on my Linux client and that did not work. cntlm was never able to connect to the proxy. I see a connection at the network level, but it always refuses my user/password. I wonder if the proxy somehow verifies that the client is in the Windows domain before accepting connections.

Other thing I tried:

  • I had the same setup on VirtualBox. Same thing: APT was OK, everything else through the proxy was not. So it does not look like a VMware thing, more a Linux configuration thing.

Any other ideas?
Methods I could try to collect more information from the proxy?
Do you know how to convert the APT configuration into a browser compatible configuration?
Is VM Workstation ok for this?

Thanks for any help!

Get this bounty!!!

#StackBounty: #apache #soap #reverse-proxy Apache Reverse Proxy – ProxyPass based on HTTP body

Bounty: 50

I have systems issuing SOAP requests to my Apache Reverse Proxy, and I have two distinct applications which should handle these requests. The differentiation is, however, done on the requests’ SOAP bodies:

  • Whenever the incoming request contains a specific XML Tag in the body, send it to server_a:port/endpoint
  • Whenever the incoming request does not contain that specific XML Tag in the body, send it to server_b:port/endpoint

Note that the incoming request URLs are the same. The only difference is the actual SOAP body of the incoming requests.

I’ve already found ways to do that by checking the incoming request URL POST parameters (Conditional ProxyPass Apache), but couldn’t find exactly what I need.

How would you go about doing that? Is Apache actually able to do that?
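As far as I know, mod_rewrite and ProxyPass make their routing decision before the request body is available, so body-based routing usually needs something that reads the body first (mod_security, a Lua/embedded handler, or a small router in front of Apache). As a language-neutral illustration of the decision itself (not an Apache feature), here is a small Python sketch; the discriminating tag and the backend URLs are placeholders:

```python
# Hypothetical sketch: making the SOAP-body routing decision in code.
# SPECIAL_TAG and the backend URLs are placeholders for illustration.
import xml.etree.ElementTree as ET

SPECIAL_TAG = "SpecialRequest"
TARGET_A = "http://server_a:8080/endpoint"  # requests containing the tag
TARGET_B = "http://server_b:8080/endpoint"  # everything else

def choose_backend(soap_body: str) -> str:
    """Pick an upstream based on whether SPECIAL_TAG appears anywhere
    in the body, ignoring XML namespaces on the tag name."""
    root = ET.fromstring(soap_body)
    found = any(el.tag.split("}")[-1] == SPECIAL_TAG for el in root.iter())
    return TARGET_A if found else TARGET_B

print(choose_backend("<Envelope><Body><SpecialRequest/></Body></Envelope>"))
print(choose_backend("<Envelope><Body><Other/></Body></Envelope>"))
```

A router like this (or the equivalent logic in a mod_lua input filter) would sit where Apache’s ProxyPass cannot look: after the body has been read but before the upstream is chosen.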

Get this bounty!!!