#StackBounty: #centos #dns #hosts Device or resource busy – getaddrinfo

Bounty: 50

I’m on a CentOS 7 VM running PostgreSQL, MariaDB, Sidekiq, and Apache httpd. My logs are sometimes spammed with errors such as:

unable to resolve address: System error

WARN: Mysql2::Error::ConnectionError: Unknown MySQL server host 'mariadb' (16)

WARN: PG::ConnectionBad: could not translate host name "postgres" to address: System error

WARN -- : Unable to record event with remote Sentry server (Errno::EBUSY - Failed to open TCP connection to o383708.ingest.sentry.io:443 (Device or resource busy - getaddrinfo)):

All these hosts (except the sentry service) are set to 127.0.0.1 in my /etc/hosts file.

Pinging the host names works from the console; these errors pop up in various application logs at runtime.

lsof | wc -l => 700k (max 1.6M)

The VM is under no significant load (10% load average). No attacks or rootkits or anything like that.

My hosts file:

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

127.0.0.1 mariadb
127.0.0.1 postgres
127.0.0.1 mongodb
127.0.0.1 redis
127.0.0.1 memcached
127.0.0.1 socketcluster

Does anyone know what’s going on? Why can’t getaddrinfo open the hosts file?

Adding a bounty to this question. Please no freeloading.
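One avenue worth checking (an assumption, not a confirmed diagnosis): these "System error"/EBUSY failures from getaddrinfo typically mean the resolver could not open /etc/hosts or a socket at all, and with ~700k open files system-wide a single process may be sitting at its own descriptor limit. A minimal sketch for inspecting a suspect process; the current shell ($$) stands in here, so substitute the PID of Sidekiq or httpd:

```shell
# Compare a process's open descriptors against its soft limit.
pid=$$   # stand-in; use e.g. "pgrep -of sidekiq" for a real suspect
soft_limit=$(awk '/Max open files/ {print $4}' "/proc/$pid/limits")
open_fds=$(ls "/proc/$pid/fd" | wc -l)
echo "pid=$pid open_fds=$open_fds soft_limit=$soft_limit"
```

If open_fds sits at or near soft_limit, raising LimitNOFILE in the service's systemd unit (or finding the descriptor leak) would be the next step.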


Get this bounty!!!

#StackBounty: #centos #vnc Unable to connect to CentOS 7/8 Screen Sharing (VNC) over work VPN

Bounty: 50

As with most people, we’re WFH. It’s getting difficult to troubleshoot issues via chat/phone, so we’d like to use the CentOS "Screen Sharing" feature, which comes bundled with the OS, to allow support personnel to connect to users’ machines and assist with their issues. The support personnel will use the built-in client Vinagre with VNC selected as the connection type.

All users are connected to a VPN that is configured to allow internal hosts to communicate with one another. We can ping other hosts, view services they’re running, etc. Here is my machine’s Screen Share settings.

Screen Sharing settings window

When I look at netstat, I see port 5900 is listening on tcp6. I considered that the underlying VNC server listening on IPv6 might be the issue, since every host’s default connection uses the IPv4 interface, but I recall reading that tcp6 listeners also accept IPv4 connections via v4-mapped addresses, so it’s supposedly a non-issue. Here’s my netstat output:

$ sudo netstat -nlt 
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State      
tcp        0      0 0.0.0.0:5355            0.0.0.0:*               LISTEN     
tcp        0      0 0.0.0.0:111             0.0.0.0:*               LISTEN     
tcp        0      0 192.168.122.1:53        0.0.0.0:*               LISTEN     
tcp        0      0 127.0.0.1:10391         0.0.0.0:*               LISTEN     
tcp        0      0 127.0.0.1:631           0.0.0.0:*               LISTEN     
tcp        0      0 127.0.0.1:29754         0.0.0.0:*               LISTEN     
tcp6       0      0 :::5355                 :::*                    LISTEN     
tcp6       0      0 :::5900                 :::*                    LISTEN     
tcp6       0      0 :::111                  :::*                    LISTEN     
tcp6       0      0 ::1:631                 :::*                    LISTEN     
tcp6       0      0 :::9090                 :::*                    LISTEN    

I considered it was the firewall, but both my machine and the client machine have firewalld disabled.

Here is the output of nmcli to show the settings of Wired Connection 1

enp57s0u1: connected to Wired connection 1
    "Realtek RTL8153"
    ethernet (r8152), 8C:04:BA:67:52:16, hw, mtu 1500
    ip4 default
    inet4 192.168.1.123/24
    route4 0.0.0.0/0
    route4 192.168.1.0/24
    inet6 fe80::xxxx:xxxx:xxxx:xxxx/64
    route6 fe80::/64
    route6 ff00::/8

Finally, I tested enabling SSH and it worked. Users were able to connect and were prompted to authenticate (we didn’t have them actually log in). This leads me to believe the issue is with the underlying VNC server itself, perhaps related to it listening on tcp6 as shown above.

Any idea what the issue may be?
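Two generic sanity checks worth running before digging into the VNC server itself (neither is a confirmed diagnosis):

```shell
# 0 here means tcp6 listeners also accept IPv4 (v4-mapped) peers,
# so the ":::5900" netstat line is not, by itself, the problem:
cat /proc/sys/net/ipv6/bindv6only
# Raw reachability check, run from a support machine on the VPN
# (192.168.1.123 is the address from the nmcli output above):
nc -zv -w 3 192.168.1.123 5900 || echo "port unreachable"
```

If the port connects but the session still fails, a frequent suspect with GNOME Screen Sharing is Vino's require-encryption setting, which some viewers cannot negotiate; `gsettings get org.gnome.Vino require-encryption`, run as the logged-in desktop user, shows its current value.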


Get this bounty!!!

#StackBounty: #centos #networking #docker #bridge #plesk Connecting to docker bridged container on CentOS 7 gives: Connection reset by …

Bounty: 50

As the title says, I have a Docker image that runs fine on my development machine using the bridge network with a published port (-p 8080:5000). But as soon as I deploy it on my server, connections fail.

-> Ruling out the application not listening on 0.0.0.0:5000.

When using --network host, the containers run fine.

I would like to use the Plesk docker extension. The only behaviour it supports is the Docker bridge network.

This server is running

  • Plesk => docker extension
  • CentOS 7

I did have some issues with firewall settings (there are two active firewalls: firewalld and the Plesk Firewall), but the issue persists with both disabled.

-> Ruling out firewall zone trust issues.

The last answer that came up while searching suggested a collision between network interface IP ranges, but I have a simple setup with a single interface: eth0 (public IP), the local loopback, and docker0.

What could be wrong here? Any ideas on how to troubleshoot this further? I’m out of ideas after many hours spent researching this issue.

Thanks!

> docker ps

CONTAINER ID        IMAGE                        COMMAND                  CREATED             STATUS              PORTS                    NAMES
8c3d3f32b8ef        savahdevelopment/savah_api   "dotnet savah_api.dll"   11 minutes ago      Up 11 minutes       0.0.0.0:8888->5000/tcp   sharp_lamarr

b355e8fef0ec        savahdevelopment/savah_api   "dotnet savah_api.dll"   7 hours ago         Up 14 minutes                                savah_api_prod

e38e1b01b039        savahdevelopment/savah_api   "dotnet savah_api.dll"   7 hours ago         Up 14 minutes                                savah_api_dev

> curl http://something.hostbeter.nl:5000/admin/test
YUUUUUUUUUUUUUUP!!

> curl http://something.hostbeter.nl:5100/admin/test
YUUUUUUUUUUUUUUP!!

> curl http://something.hostbeter.nl:8888/admin/test
curl: (56) Recv failure: Connection reset by peer

Some extra info:

netstat -tulp tells me the working containers bound to IPv6 addresses only, yet connecting externally over IPv4 works fine. So it seems to be something network related?
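On the tcp6-only observation: a Linux tcp6 listener with IPV6_V6ONLY off serves IPv4 clients through v4-mapped addresses, which is why external IPv4 connections to those containers still work. A small self-contained demonstration (port 15900 is an arbitrary choice):

```shell
# Start a tcp6 listener (netstat would show it as ":::15900") ...
python3 - <<'EOF' &
import socket
s = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
s.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 0)  # Linux default
s.bind(("::", 15900))
s.listen(1)
conn, _ = s.accept()
conn.sendall(b"hello\n")
conn.close()
EOF
sleep 1
# ... then talk to it over plain IPv4:
resp=$(python3 -c 'import socket; print(socket.create_connection(("127.0.0.1", 15900)).makefile().readline().strip())')
echo "$resp"   # prints "hello"
```

Given that, the reset on the published port is more likely in the NAT path or in what the app binds to inside the container; `docker exec sharp_lamarr ss -tln` (assuming ss exists in the image) and `iptables -t nat -L DOCKER -n` on the host are worth inspecting next.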


Get this bounty!!!

#StackBounty: #nginx #magento2.3.4 #multi-website #localhost #centos Magento 2.3 – How to configure Nginx for Multi-Website store on Lo…

Bounty: 50

Steps:

  1. Main website Path : usr/share/nginx/html/gomart
    • Contains all magento files
    • Configuration in nginx.conf.sample for Multi-Website
    • nginx.conf.sample -> https://justpaste.it/97yki (code)
  2. Multi-Website Path : usr/share/nginx/html/gomart/grocery
    • Created a subfolder inside the root folder, with symbolic links to app, lib, pub and var, and copied index.php & .htaccess from the root folder to the subfolder
  3. Symbolic links inside the subfolder:
  4. Nginx setup (not sure it's right)

Url : http://192.168.1.64:8087/grocery returns a 404 error.
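For comparison, an approach that avoids the symlinked subfolder entirely: Magento's bundled nginx.conf.sample supports selecting the website per request via $MAGE_RUN_CODE and $MAGE_RUN_TYPE (the relevant fastcgi_param lines ship commented out and need uncommenting). A sketch only; the hostnames and the "base"/"grocery" codes are placeholders that must match the website codes defined in the Magento admin:

```nginx
map $http_host $MAGE_RUN_CODE {
    default        base;      # placeholder website code
    grocery.local  grocery;   # placeholder hostname and code
}

server {
    listen 8087;
    server_name gomart.local grocery.local;
    set $MAGE_ROOT /usr/share/nginx/html/gomart;
    set $MAGE_RUN_TYPE website;
    include /usr/share/nginx/html/gomart/nginx.conf.sample;
}
```

With this, a single document root serves every website, so no /grocery subfolder (and no 404 on it) is involved.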


Get this bounty!!!

#StackBounty: #centos #yum exclude i686 packages in yum.conf

Bounty: 100

I’m trying to prevent *.i686 packages from being installed when I install the x86_64 version of libcrypto.so.10.

If I put any of the following (one at a time) into my /etc/yum.conf under [main]:

multilib_policy=best
exactarch=1
exclude=*.i386 *.i686
exclude=*.i?86

And when I try to install the package, it says it isn’t there:

sudo yum install libcrypto.so.10
Loaded plugins: fastestmirror, rhnplugin, tsflags, versionlock
This system is receiving updates from RHN Classic or Red Hat Satellite.
Loading mirror speeds from cached hostfile
No package libcrypto.so.10 available.
Error: Nothing to do

However, if I remove those settings, it tries to install both the i686 and the x86_64 versions of libcrypto.so.10. I am using CentOS Linux release 7.7.1908 (Core).

How can I exclude *.i686 packages in the /etc/yum.conf file?
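For what it's worth, the exclude is probably working exactly as configured: the bare name libcrypto.so.10 is the 32-bit provide (the 64-bit one is spelled libcrypto.so.10()(64bit)), so once *.i?86 is excluded there is nothing left matching the request. The glob itself can be sanity-checked in plain sh:

```shell
# Why "No package libcrypto.so.10 available": the bare provide name
# resolves to the i686 soname, and the exclude glob removes every
# .i386/.i686 candidate. The same glob can be checked directly:
match() { case "$1" in (*.i?86) echo excluded;; (*) echo kept;; esac; }
match openssl-libs.i686     # -> excluded
match openssl-libs.x86_64   # -> kept
```

With the exclude kept in place, the 64-bit library can still be requested explicitly, e.g. `sudo yum install 'libcrypto.so.10()(64bit)'` or, by package name, `sudo yum install openssl-libs.x86_64`.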


Get this bounty!!!

#StackBounty: #nginx #magento2.3.4 #multi-website #localhost #centos Magento 2.3 – Separate folders for each website store – How to con…

Bounty: 50

Steps:

  1. Main website Path : usr/share/nginx/html/gomart
    • Contains all magento files
    • Configuration in nginx.conf.sample for Multi-Website
    • nginx.conf.sample -> https://justpaste.it/97yki (code)
  2. Multi-Website Path : usr/share/nginx/html/gomart/grocery
    • Created a subfolder inside the root folder, with symbolic links to app, lib, pub and var, and copied index.php & .htaccess from the root folder to the subfolder
  3. Symbolic links inside the subfolder (usr/share/nginx/html/gomart/grocery):
  4. Nginx setup (not sure it's right)

Url : http://192.168.1.64:8087/grocery returns a 404 error.


Get this bounty!!!

#StackBounty: #nginx #magento2.3.4 #multi-website #localhost #centos Magento 2.3 – How to setup Nginx for Multi-Website store on Localh…

Bounty: 50

Installed magento 2.3.4 on centos 7 – nginx localhost,

  • Root folder – usr/share/nginx/html/gomart
  • Created a subfolder inside the root folder, with symbolic links to app, lib, pub and var, and copied index.php & .htaccess [modified for Multi-Website] from the root folder to the subfolder – Folder Path – usr/share/nginx/html/gomart/grocery

Second Website (Multi Website) configure by this tutorial

How to setup etc/site-available & etc/nginx/conf.d/mgento.conf for multi-website store?

Current store etc/site-available & etc/nginx/conf.d/mgento.conf configuration.

How do I set up multi-website for Nginx in site-available & mgento.conf?


Get this bounty!!!

#StackBounty: #nginx #magento2.3.4 #multi-website #localhost #centos Magento 2.3 – Setup Multi Website on Centos 7 – Nginx in Localhost

Bounty: 50

Installed Magento 2.3.4 on CentOS 7 – nginx localhost:

  • Root folder – usr/share/nginx/html/gomart
  • Created a subfolder inside the root folder, with symbolic links to app, lib, pub and var, and copied index.php & .htaccess [modified for Multi-Website] from the root folder to the subfolder – Folder Path – usr/share/nginx/html/gomart/grocery

Second Website (Multi Website) configure by this tutorial

How to setup etc/site-available & etc/nginx/conf.d/mgento.conf for multi-website store?

Current store etc/site-available & etc/nginx/conf.d/mgento.conf configuration.

How do I set up multi-website in site-available & mgento.conf?


Get this bounty!!!

#StackBounty: #centos #vsftpd #centos8 #uid Inverse Name Search by UID (CentOS 8) – Retrieves last created with same UID

Bounty: 50

I am working with CentOS 8 and I have a problem with UIDs and User Names. I have installed VestaCP to manage my websites. The user by the name of “user123” and UID 1007 is the owner of all the websites (user in VestaCP). Then I have created individual FTP users for each website. Each FTP user has the following name format: “user123_random”, where random is a random text. Each FTP user has a different name, but they all share the same UID (1007) (this is the default behavior when creating new FTP users).

Now the problem happens when I check the ownership (user) of each website, or of files inside a website. Technically, the owner is UID 1007. The problem is that CentOS 8, for some reason, shows “user123_random” as the owner of the websites instead of “user123”.

The curious thing is that when I run “id -nu 1007”, it returns the name of the last FTP user created with the “user123_” prefix. So I assume this is what CentOS 8 does internally: it shows the last username (with the same UID 1007) as the owner of a file/directory. This is not how CentOS 7 worked; CentOS 7 would show “user123” as the owner of the files, irrespective of new FTP users added with the same UID.

The question is: is there a way to change this behavior in CentOS 8 so that it behaves like CentOS 7, i.e. so that the inverse name search by UID returns the first created user with that UID?
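A hedged way to see where the answer comes from (this is standard NSS mechanics; nothing CentOS-8-specific is being asserted): name-by-UID lookups return the first answer the NSS stack produces, so both the order of sources in nsswitch.conf and each source's internal ordering matter:

```shell
uid=1007   # the shared UID from the question; substitute as needed
# First match in file order, the way the plain "files" backend resolves it:
awk -F: -v u="$uid" '$3 == u {print $1; exit}' /etc/passwd
# What the full NSS stack answers (on CentOS 8, sssd's implicit files
# domain can be the responder, and its ordering may differ):
getent passwd "$uid" || echo "no entry for UID $uid on this machine"
# Which sources are consulted, and in what order:
grep '^passwd' /etc/nsswitch.conf
```

If sssd turns out to be the source returning the last-created name, two things worth testing (suggestions, not documented guarantees) are keeping the canonical "user123" entry ahead of the FTP users in /etc/passwd, and disabling sssd's implicit files domain so lookups fall back to plain file order.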


Get this bounty!!!

#StackBounty: #centos #apache-httpd #ssl #apache-virtualhost #pacemaker Apache fails to load when SSL activated through Pacemaker

Bounty: 50

I have set up a Pacemaker cluster holding: Apache, MariaDB, 2x GFS2, and a VIP.

Everything was working fine over http, but as soon as I added the (self-signed) SSL certificate and the virtual host to the httpd/conf.d/ssl.conf file, the cluster won’t start the web server anymore.

I have searched for results on /server-status and SSL/https, but I can’t find anything on how to configure it.

When I run:

[root@node01 ~]# pcs resource debug-start mb-web
Operation start for mb-web (ocf:heartbeat:apache) returned: 'unknown error' (1)
> stderr: May 18 12:38:43 INFO: apache not running
> stderr: May 18 12:38:43 INFO: waiting for apache /etc/httpd/conf/httpd.conf to come up
> stderr: ocf-exit-reason:Failed to access httpd status page.
> stderr: May 18 12:38:44 INFO: Attempting graceful stop of apache PID 31950
> stderr: May 18 12:38:46 INFO: apache stopped.

I also get it in the failed messages:

Failed Resource Actions:
* mb-web_start_0 on node01 'unknown error' (1): call=128, status=complete, exitreason='Failed to access httpd status page.',
last-rc-change='Mon May 18 12:32:05 2020', queued=0ms, exec=3402ms
* mb-web_start_0 on node02 'unknown error' (1): call=130, status=complete, exitreason='Failed to access httpd status page.',
last-rc-change='Mon May 18 12:31:35 2020', queued=0ms, exec=3425ms

I have tried updating the resource via:

pcs resource update mb-web statusurl="https://localhost/server-status"
or
pcs resource update mb-web statusurl="https://127.0.0.1/server-status"
or
pcs resource update mb-web statusurl="https://vip.fqdn.ltd/server-status"

I followed the setup from: ClusterLabs.org

Within my /etc/httpd/conf.d/status.conf file I have:

<Location /server-status>
    SetHandler server-status
     Require local
</Location>

There are no redirects from http to https; I could access both 80 and 443 on the normal domain when the server was running (before I last restarted it).

I can’t even wget to see what’s happening because the service won’t start through the cluster, but if I run systemctl start httpd everything runs, and wget http://localhost/server-status returns:

[root@node01 ~]# wget http://localhost/server-status
--2020-05-18 12:58:53--  http://localhost/server-status
Resolving localhost (localhost)... 127.0.0.1
Connecting to localhost (localhost)|127.0.0.1|:80... failed: Connection refused.

while wget https://localhost/server-status returns:

[root@node01 ~]# wget https://localhost/server-status
--2020-05-18 12:58:45--  https://localhost/server-status
Resolving localhost (localhost)... 127.0.0.1
Connecting to localhost (localhost)|127.0.0.1|:443... connected.
OpenSSL: error:140770FC:SSL routines:SSL23_GET_SERVER_HELLO:unknown protocol
Unable to establish SSL connection.

Are there any resources I’m missing or places I’m not looking, or is there something I’ve forgotten to activate?
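Two symptoms in the wget output narrow this down. Port 80 refusing connections while httpd is running suggests the Listen 80 directive was dropped when SSL was configured, and "unknown protocol" on 443 means whatever answers there is not speaking TLS (e.g. SSLEngine on missing, or mod_ssl not loaded in the configuration the agent starts). Since the ocf:heartbeat:apache agent's status check defaults to plain http://localhost/server-status, the least invasive fix to try (a suggestion, not a verified configuration) is restoring a local plain-HTTP listener for the health check:

```apache
# Sketch: keep a local port-80 listener purely for the cluster's
# status probe, alongside the SSL vhost in conf.d/ssl.conf.
Listen 80
<Location /server-status>
    SetHandler server-status
    Require local
</Location>
```

If the status URL must stay on https, the agent's statusurl/client parameters would also need a client invocation that tolerates the self-signed certificate; running `openssl s_client -connect localhost:443` first will show whether TLS is being spoken there at all.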


Get this bounty!!!