#StackBounty: #nginx #cache #cluster #replication #copy Copy nginx cache between two servers

Bounty: 50

I have an nginx caching server with at least 2 TB of cache files at any moment. I need to migrate this server to another hosting provider, but none of my clients can handle that traffic without a cache.

On my first try I just copied the cache objects to another instance with the same configuration and permissions, but every request failed with a MISS.
I changed the permissions and groups, yet it still misses every time and creates a new cache file.

Is it possible to copy my cache files and get a HIT on the first access?
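For context on why a byte-for-byte copy can still MISS: nginx derives each cache file's directory and name from the MD5 of the proxy_cache_key, so the key format and the levels= layout must be identical on both servers for copied entries to be found. A sketch of how the on-disk path is derived, assuming the default levels=1:2 layout and a made-up key:

```shell
# Hypothetical key: by default nginx uses "$scheme$proxy_host$request_uri".
key='httpsexample.com/images/logo.png'
hash=$(printf '%s' "$key" | md5sum | awk '{print $1}')
last=$(echo "$hash" | cut -c32)      # levels=1 -> last hex character
mid=$(echo "$hash" | cut -c30-31)    # levels=2 -> the two characters before it
echo "/var/lib/nginx/cache/$last/$mid/$hash"
```

If both servers agree on proxy_cache_key, levels, and the cache directory, the copied files land where nginx expects to find them.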


Get this bounty!!!

#StackBounty: #nginx #ssl #streaming #rtmp nginx cpu consumption for ssl termination with rtmp stream

Bounty: 50

Here is my nginx config file:

user  nginx;
worker_processes  auto;


error_log  /var/log/nginx/error.log error;
pid        /var/run/nginx.pid;

events {
    worker_connections  1024;
}


stream {
    server {
        listen *:443 ssl;

        ssl_certificate             /etc/nginx/ssl/server.crt;
        ssl_certificate_key         /etc/nginx/ssl/server.key;

        ssl_session_cache shared:SSL:20m;
        ssl_session_timeout 60m;

        proxy_pass 127.0.0.1:1935;
        ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        ssl_prefer_server_ciphers on;
        ssl_ciphers ECDH+AESGCM:ECDH+AES256:ECDH+AES128:DHE+AES128:!ADH:!AECDH:!MD5;
    }
}

I am not happy with it because nginx consumes too much CPU.
How do I tune it properly?
I have an RTMP listener at 127.0.0.1:1935, so nginx is only terminating SSL and passing the RTMP traffic through.

This is how CPU consumption looks on a 1-CPU server: about 50% is eaten by nginx and the rest by the RTMP server. I would like to make nginx consume less CPU.

[screenshot: CPU consumption]

Nginx version

nginx -V
nginx version: nginx/1.15.2
built by gcc 6.3.0 20170516 (Debian 6.3.0-18+deb9u1) 
built with OpenSSL 1.1.0f  25 May 2017
TLS SNI support enabled
configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --modules-path=/usr/lib/nginx/modules --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx --group=nginx --with-compat --with-file-aio --with-threads --with-http_addition_module --with-http_auth_request_module --with-http_dav_module --with-http_flv_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_mp4_module --with-http_random_index_module --with-http_realip_module --with-http_secure_link_module --with-http_slice_module --with-http_ssl_module --with-http_stub_status_module --with-http_sub_module --with-http_v2_module --with-mail --with-mail_ssl_module --with-stream --with-stream_realip_module --with-stream_ssl_module --with-stream_ssl_preread_module --with-cc-opt='-g -O2 -fdebug-prefix-map=/data/builder/debuild/nginx-1.15.2/debian/debuild-base/nginx-1.15.2=. -specs=/usr/share/dpkg/no-pie-compile.specs -fstack-protector-strong -Wformat -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -fPIC' --with-ld-opt='-specs=/usr/share/dpkg/no-pie-link.specs -Wl,-z,relro -Wl,-z,now -Wl,--as-needed -pie'

The load you see is generated by 500 simultaneous stream uploads from another server, started with the command:

ffmpeg -re -stream_loop -1 -i /dolbycanyon.mp4 -acodec copy -vcodec copy -f flv rtmps://rtmp:443/live/live139

I generated the certificate files /etc/nginx/ssl/server.crt and /etc/nginx/ssl/server.key with the command:

openssl req -config ./openssl.conf -x509 -nodes -days 365 -newkey rsa:2048 -keyout selfsigned.key -out selfsigned.crt

where openssl.conf contains:

[req]
prompt = no
distinguished_name = req_distinguished_name
req_extensions = v3_req

[req_distinguished_name]
C = US
ST = California
L = Los Angeles
O = Our Company Llc
#OU = Org Unit Name
CN = Our Company Llc
#emailAddress = info@example.com

[v3_req]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names

[alt_names]
DNS.1 = example.com
DNS.2 = www.example.com

Update 1

So with nginx SSL termination, the server can sustain a maximum of 512 streams per CPU. When I stream directly to the RTMP port 1935 (without nginx), it handles 970 simultaneous streams. I would like to optimize nginx so that the number gets closer to 970 streams than to 512.
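Not an authoritative fix, but since the CPU at this scale is dominated by TLS record encryption and handshakes, two knobs are often worth benchmarking (both are assumptions to test, not guaranteed wins): prefer AES-128-GCM, which is cheaper than AES-256 and accelerated by AES-NI, and keep session resumption enabled so reconnecting streamers skip the full handshake:

```nginx
# Sketch, to be benchmarked: cheaper symmetric cipher + session reuse.
# Assumes the ffmpeg/OpenSSL clients support ECDHE with AES128-GCM.
ssl_protocols TLSv1.2;
ssl_ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256;
ssl_prefer_server_ciphers on;
ssl_session_cache shared:SSL:20m;
ssl_session_timeout 60m;
```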


Get this bounty!!!

#StackBounty: #linux #nginx #apache-2.4 #resource-management Correctly implement and test resource limitations for Apache/NGINX

Bounty: 50

I have been assigned the task of limiting the resources (maximum memory usage and CPU weight) for Apache2 and nginx according to usage statistics, and of testing that limitation.

For the resource limitations I've chosen the easy way: implementing them via systemd's resource control by running apache2 and nginx in a custom limited slice:

limiter.slice:

[Unit]
Description=Resource limiting test
Before=slices.target

[Slice]
CPUWeight=50
MemoryMax=800M

[Install]
WantedBy=multi-user.target

And of course I added the Slice=limiter.slice option to the apache2 and nginx service files.

Now, nginx and apache2 correctly start in the limited slice (checked with systemctl status limiter.slice):

● limiter.slice - Resource limiting test
   Loaded: loaded (/etc/systemd/system/limiter.slice; enabled; vendor preset: enabled)
   Active: active since Wed 2018-07-18 16:07:59 EEST; 3s ago
    Tasks: 58
   Memory: 8.1M (max: 800.0M)
      CPU: 82ms
   CGroup: /limiter.slice
           ├─apache2.service
           │ ├─1823 /usr/sbin/apache2 -k start
           │ ├─1841 /usr/sbin/apache2 -k start
           │ └─1843 /usr/sbin/apache2 -k start
           └─nginx.service
             ├─1805 nginx: master process /usr/sbin/nginx -g daemon on; master_process on;
             ├─1808 nginx: worker process
             └─1811 nginx: worker process

I believe that the resource limitation should work. However, I do not have access to any heavily used Apache/nginx servers, so I cannot get anywhere close to making apache2/nginx use that much RAM. My task is to show what happens when Apache2/nginx start using that much memory, and it has to be realistic: the web servers really have to run at nearly that much RAM usage. You can ignore the CPU weight, as it is not my main focus.

So, in other words, I need to somehow increase/inflate the RAM usage of those web servers.

  1. Is the way I'm limiting their resource usage (via systemd slice resource control) good practice?
  2. How can I increase the Apache and nginx web servers' memory usage (to about 800M of RAM, ideally even higher)?

My machine has 6G of RAM; I'm okay with using 2-3G of it for test cases.
Running Debian 9.4.
Apache2 version:
Server version: Apache/2.4.25 (Debian)
Nginx version:
nginx version: nginx/1.10.3

Note: the Apache2 web server is the main focus; doing the same for nginx is not necessary, it's just a bonus for research.

I've tried to find ways to benchmark RAM usage, but failed. Keep in mind that I'm doing this on a normal workstation, so there's no easy way to deploy apps on the web servers and generate enough traffic naturally to cause such an increase in memory usage.

Update:
I managed to get apache + nginx to consume up to 400-500 MB of RAM by running heavy ab benchmarks continuously. However, I still can't test whether they can even reach the specified 800 MB RAM limit. Any suggestions would be appreciated.
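One way to validate the cap itself, independent of generating realistic web traffic, is to run a deliberate memory hog inside the same slice. This is a sketch that assumes stress-ng is installed and requires root; the unit name memhog-test is made up:

```shell
# With MemoryMax=800M on the slice, the cgroup should OOM-kill this
# process once its allocation approaches the limit.
systemd-run --slice=limiter.slice --unit=memhog-test \
    stress-ng --vm 1 --vm-bytes 1G --vm-keep --timeout 60s
systemctl status memhog-test    # inspect the result / oom-kill state
```

This only proves the slice enforces the limit; pushing the web servers themselves to 800 MB still requires traffic or a very high worker count.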


Get this bounty!!!

#StackBounty: #wordpress #nginx WordPress and Nginx redirection

Bounty: 100

I have a WordPress site served by nginx, and I have a specific requirement.

WordPress used to be the main website, with a blog permalink that opened a page listing all the blog posts. But now our main website is written in Ruby, and we set up WordPress on the same server as the Ruby app, behind nginx. To access the blog you have to go to abc.com/blog, and it works fine this way.

But now we want the blog permalink page to open when we enter abc.com/blog.

By this I mean that hitting abc.com/blog should redirect to abc.com/blog/blog.

Below is my nginx config, which works perfectly in other situations.

location /blog {
    index index.html index.htm index.php;
    try_files $uri $uri/ /blog/index.php?$args;
}

Now I am adding a rule for the permalink redirection, like below, but it is not working:

location = /blog/ {
    try_files $uri $uri/ /blog/blog/;
}
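One possible approach, as an untested sketch: try_files performs an internal rewrite, not a client-visible redirect, so for an actual redirect from abc.com/blog to abc.com/blog/blog an explicit return may be closer to the goal:

```nginx
# Hypothetical sketch: exact-match the bare /blog URL (with and without
# the trailing slash) and redirect it, leaving the existing /blog prefix
# location to serve everything underneath.
location = /blog  { return 302 /blog/blog/; }
location = /blog/ { return 302 /blog/blog/; }
```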

Get this bounty!!!

#StackBounty: #nginx nginx location new location redirection cycle

Bounty: 50

I moved a bunch of images on my site from /images to /images/categories. I want the old URLs to keep serving those assets (now in the new location) without duplicating them, because I have third parties that reference the old URLs. I attempted this with a location/try_files block:

location /images {
    try_files /images/categories/$uri $uri;
}

The problem is that then I get:

rewrite or internal redirection cycle while internally redirecting to "/images/categories/myimage.png", client: 172.27.0.1, server: app, request: "GET /images/categories/myimage.png HTTP/1.1", host: "localhost", referrer:

How can I prevent the infinite loop here?
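A guess at the cause, with a sketch: inside location /images, $uri already contains the /images prefix, so try_files /images/categories/$uri looks for /images/categories/images/..., and the final $uri argument triggers an internal redirect back to the same URI, which loops. Capturing the part after the prefix and ending with an explicit status avoids both problems (untested sketch; the capture name is arbitrary):

```nginx
# Hypothetical sketch: $name holds the path after /images/, so the old URL
# /images/foo.png first tries the new file /images/categories/foo.png,
# then the old one, then fails cleanly instead of redirecting.
location ~ ^/images/(?<name>.+)$ {
    try_files /images/categories/$name /images/$name =404;
}
```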


Get this bounty!!!

#StackBounty: #nginx How to forward URL query in nginx dynamically and strip extra params

Bounty: 50

I have a couple thousand URLs that look like the following:

https://www.example.com/supplier-shop/u-23452345/s-p/
https://www.example.com/supplier-shop/u-1714128138
https://www.example.com/supplier-shop/u-436877957/s-p
https://www.example.com/supplier-shop/u-32452345
https://www.example.com/supplier-shop/u-2345245664
https://www.example.com/supplier-shop/u-23452345/

This is from my legacy website but our new URL structure looks like the following:

www.example.com/seller/xxxxxxxx

I know how to rewrite individual URLs, but how can I write a catch-all rule for all of my IDs?
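A catch-all sketch, untested and assuming the digits after u- are the same IDs the new /seller/ URLs use:

```nginx
# Hypothetical sketch: capture the numeric id and ignore anything that
# follows it (trailing slash, /s-p, /s-p/, ...).
rewrite ^/supplier-shop/u-(\d+) /seller/$1 permanent;
```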


Get this bounty!!!

#StackBounty: #nginx #percona #pmm Nginx: location with proxy_pass, including uri (Percona PMM)

Bounty: 100

I'm trying to set up a proxy with Nginx for Percona Monitoring and Management (PMM). I'm using their public demo site for testing purposes.

The goal is to expose PMM interface via URL like https://localhost.local/pmm.

server {
    listen 443 default_server ssl http2;
    server_name localhost;

    ssl_certificate /etc/pki/tls/certs/localhost.crt;
    ssl_certificate_key /etc/pki/tls/private/localhost.key;

    location ^~ /pmm/ {
        proxy_pass https://pmmdemo.percona.com/;
        rewrite ^/pmm/(.*) /$1 break;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Authorization "";
    }

}

The backend software serves a number of different URLs.

This is currently NOT working properly: I can see 404 requests in the browser console for URLs like https://localhost/graph/public/build/grafana.dark.css?v5.0.4

I tried adding the rewrite rule rewrite ^/pmm/(.*) /$1 break; (shown above), but this still didn't help.
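One likely culprit, stated as an assumption: the backend emits absolute paths such as /graph/..., which fall outside the /pmm/ prefix, so those requests never reach the proxy location. A common workaround for such apps is to proxy the absolute prefixes too (untested against the PMM demo):

```nginx
# Hypothetical sketch: also forward the absolute asset paths the app
# generates, bypassing the /pmm/ prefix entirely.
location /graph/ {
    proxy_pass https://pmmdemo.percona.com/graph/;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
```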


Get this bounty!!!

#StackBounty: #nginx #proxy #tomcat #rewrite Nginx rewrite .jsp extension and proxy to tomcat

Bounty: 100

Outgoing

How can I create an Nginx rewrite rule, in the appropriate server block, that takes any URL ending in .jsp and removes the .jsp extension after retrieving the correct .jsp page from the Tomcat server, but before sending the response to the client?

Incoming

How can I create an Nginx rewrite rule, in the appropriate server block, that takes any URL that does not end in .do and adds a .jsp extension after receiving the HTTP request but before fetching the .jsp file from the Tomcat server, and then follows the outgoing rewrite rule to remove the extension again before sending the response?

Test

I tried playing around with the following:

server {
        listen 443 ssl;
        server_name www.test.local test.local;

        location / {
                if ($request_uri ~ ^/(.*).jsp$) {
                        return 302 /$1;
                }
                try_files $uri.jsp @proxy;
        }

        location @proxy {
                proxy_pass http://websites/;
                include proxy_params;
        }
}

Nginx removes the .jsp extension, but it also sends the request to Tomcat without the .jsp extension, so Tomcat does not know what to look for and returns a 404.

As far as I can tell, Nginx is not asking Tomcat whether it has a $uri.jsp page, but instead whether it has a $uri page (without the .jsp extension).

As far as I can read and understand, the try_files syntax is

try_files [Location[file, folder]] [fallback[file, folder, HTTP code]]

But the official documentation does not say (as far as I can find) how to instruct Nginx to ask the proxy for the different files and folders to try; instead, Nginx queries its own local root for $uri.jsp and then uses @proxy as a fallback.
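For what it's worth, try_files only ever checks nginx's local filesystem (relative to root); it cannot probe the upstream. A sketch of doing the mapping with rewrite instead, under the assumption that Tomcat should always receive a .jsp path except for .do URLs (untested):

```nginx
location ~ \.do$ {
    # .do requests pass through unchanged (assumption from the question).
    proxy_pass http://websites;
    include proxy_params;
}

location / {
    # Outgoing: a client asking for /page.jsp gets redirected to clean /page.
    if ($request_uri ~ ^/(.*)\.jsp$) {
        return 302 /$1;
    }
    # Incoming: append .jsp before proxying; "break" stops rewrite
    # processing but keeps the rewritten URI for proxy_pass.
    rewrite ^/(.*)$ /$1.jsp break;
    proxy_pass http://websites;
    include proxy_params;
}
```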


Get this bounty!!!

#StackBounty: #nginx #reverse-proxy #cache Debugging Nginx Cache Misses: Hitting a high number of MISSes despite a long proxy_cache_valid

Bounty: 100

My proxy_cache_path is set to a very generous size:

proxy_cache_path  /var/lib/nginx/cache  levels=1:2   keys_zone=staticfilecache:180m  max_size=700m;

and the space actually used is only:

sudo du -sh *
14M cache
4.0K    proxy

proxy_cache_valid is set to:

proxy_cache_valid 200 120d;

I track HIT and MISS via

add_header X-Cache-Status $upstream_cache_status;

Despite these settings I am seeing a lot of MISSes, even for pages I intentionally ran a cache warmer over an hour ago.

How do I debug why these MISSes are happening? How do I find out whether a miss was due to eviction, expiration, a rogue header, etc.? Does Nginx provide tooling for this?
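A hunch worth checking, stated as an assumption about this setup: proxy_cache_path has an inactive parameter that defaults to 10 minutes, and entries not requested within that window are deleted regardless of proxy_cache_valid, which would explain misses on pages warmed an hour earlier:

```nginx
# Sketch: keep unrequested entries around much longer (the value here is
# illustrative, matched to the 120d proxy_cache_valid in the question).
proxy_cache_path /var/lib/nginx/cache levels=1:2
                 keys_zone=staticfilecache:180m max_size=700m
                 inactive=120d;
```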


Get this bounty!!!