#StackBounty: #16.04 #ssh #connection SSH resets to default port on reboot

Bounty: 100

I changed the default SSH port on my home server to 54747 (in the /etc/ssh/sshd_config file), then restarted the ssh and sshd services (I'm never sure which one, so I did both to be safe). To test the configuration, I logged out and then back in without any problem.
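
For reference, a minimal sketch of that change, assuming the stock Ubuntu 16.04 layout (only the Port directive is touched):

# /etc/ssh/sshd_config
Port 54747

# apply it; on Ubuntu, "ssh" is the real unit and "sshd" is typically just an alias
sudo systemctl restart ssh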

A couple of days later, I installed apt updates and rebooted my server. When I tried to SSH back in (on port 54747), I got a connection refused error.

For some reason, I tried to SSH on the default port, and it worked! I went back to check sshd_config, but it still had the custom port. So I restarted the ssh and sshd services, and it went back to the expected behaviour (SSH on port 54747). I tried rebooting again, and got connection refused again…

Does anyone know what I did wrong?

Extra details:

  • Ubuntu 16.04.2 LTS
  • Server is also used as an HTPC, with an open session (same user as SSH) on my TV
  • I SSH using my laptop’s RSA key, and have disabled password auth
  • I used to reboot with sudo reboot -h now, but after searching I found it was discouraged by some people, so I tried plain sudo reboot; no difference

EDIT
Sequence of events:

  1. Change SSH port from 22 to 54747 in /etc/ssh/sshd_config
  2. Restart ssh and sshd services
  3. End current SSH session
  4. SSH back in successfully on port 54747
  5. Reboot
  6. SSH connection error on port 54747, but successful on port 22
  7. Restart ssh and sshd services
  8. SSH back in successfully on port 54747, connection error on port 22
  9. Reboot and go back to 6

EDIT 1: netstat output

rgo@ATLAS:~$ sudo netstat -lntp | grep :54747
rgo@ATLAS:~$ sudo netstat -lntp | grep :22
tcp6       0      0 :::22                   :::*                    LISTEN      1/init  

EDIT 2: service sshd status

● ssh.service - OpenBSD Secure Shell server
   Loaded: loaded (/lib/systemd/system/ssh.service; enabled; vendor preset: enabled)
   Active: inactive (dead)
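
That inactive (dead) status, combined with the netstat line above showing PID 1 (init) listening on port 22, points at systemd socket activation: Ubuntu 16.04's openssh-server ships an ssh.socket unit, and when that unit is enabled, systemd itself listens on the port hard-coded in the socket file (22 by default), so the Port setting in sshd_config only takes effect once the service is restarted by hand. A hedged sketch of how to check for this and, if confirmed, work around it:

systemctl status ssh.socket   # "active (listening)" here would confirm the theory
# either switch back to the classic always-running service...
sudo systemctl disable ssh.socket
sudo systemctl enable ssh.service
# ...or keep socket activation and move its port with a drop-in
# (sudo systemctl edit ssh.socket):
#   [Socket]
#   ListenStream=
#   ListenStream=54747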

EDIT 3: lsof -i | grep ssh

systemd      1     root   46u  IPv6  42724      0t0  TCP ATLAS:ssh->192.168.1.27:49837 (ESTABLISHED)
systemd      1     root   49u  IPv6  14641      0t0  TCP *:ssh (LISTEN)
sshd      4088     root    3u  IPv6  42724      0t0  TCP ATLAS:ssh->192.168.1.27:49837 (ESTABLISHED)
sshd      4088     root    4u  IPv6  42724      0t0  TCP ATLAS:ssh->192.168.1.27:49837 (ESTABLISHED)
sshd      4202      rgo    3u  IPv6  42724      0t0  TCP ATLAS:ssh->192.168.1.27:49837 (ESTABLISHED)
sshd      4202      rgo    4u  IPv6  42724      0t0  TCP ATLAS:ssh->192.168.1.27:49837 (ESTABLISHED)

For reference, ATLAS is the remote server's hostname, 192.168.1.27 is my laptop's LAN IP, and the command was executed between steps 6 and 7.

ufw status

Status: inactive

EDIT 4: ps -ef | grep sshd

root      4088     1  0 22:40 ?        00:00:00 sshd: rgo [priv]
rgo       4202  4088  0 22:40 ?        00:00:00 sshd: rgo@pts/1 sshd



#StackBounty: #networking #virtualbox #ssh #connection #vagrant connect to host localhost port 22: Connection refused

Bounty: 50

I created an Ubuntu Vagrant box and SSH'd into it.

Then I created an SSH key and tried to SSH to localhost, but I get the following error.

vagrant@vagrant-ubuntu-trusty-64:~$ ssh -vvvv localhost 
OpenSSH_6.6.1, OpenSSL 1.0.1f 6 Jan 2014
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 19: Applying options for *
debug2: ssh_connect: needpriv 0
debug1: Connecting to localhost [127.0.0.1] port 22.
debug1: connect to address 127.0.0.1 port 22: Connection refused
ssh: connect to host localhost port 22: Connection refused

When I check the list of open ports:

vagrant@vagrant-ubuntu-trusty-64:~$ sudo netstat -tulpn|grep 22
tcp        0      0 10.0.2.15:22            0.0.0.0:*               LISTEN      11598/sshd

I see the listening address is 10.0.2.15:22. Also, my sshd service is running as well.

vagrant@vagrant-ubuntu-trusty-64:~$ ps aux|grep ssh
root     11430  0.0  0.7  68084  3688 ?        Ss   17:41   0:00 sshd: vagrant [priv]
vagrant  11432  0.0  0.3  68216  1908 ?        S    17:41   0:00 sshd: vagrant@pts/0 
root     11598  0.0  0.6  61388  3060 ?        Ss   17:44   0:00 /usr/sbin/sshd -D
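
Worth noting: in the netstat output above, sshd is bound only to 10.0.2.15, not to 0.0.0.0 or 127.0.0.1, which by itself would explain the refusal on localhost. A hedged check, assuming the stock config path:

grep -i listenaddress /etc/ssh/sshd_config
# a line such as "ListenAddress 10.0.2.15" would explain the refusal on 127.0.0.1;
# commenting it out (sshd then listens on all addresses) and restarting applies it:
sudo service ssh restart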



#StackBounty: #windows #tcp #connection #stunnel Unable to create seemingly simple stunnel configuration

Bounty: 100

I have a computer at work that is behind a firewall, with an internal IP address of 192.168.12.13. The firewall maps ports 40000–40019 to the matching ports on this local machine (e.g. 40000 → 40000, 40001 → 40001, etc.). And let's define the external IP as 12.34.56.78.

I want to set up my home computer to connect to this work computer.

Work computer stunnel.config:

[brianserver]
client = no
accept = 127.0.0.1:40020
connect = 192.168.12.13:40000
PSKsecrets = psk1.txt

Home computer stunnel.config:

[brianclient]
client = yes
accept = 127.0.0.1:40020
connect = 12.34.56.78:40000
PSKsecrets = psk1.txt

I am using a product called “Hercules SETUP utility” to listen on the work machine:

[screenshot: Hercules SETUP utility listening on the work machine]

And, I am using “Hercules SETUP utility” to initiate a connection from the home machine:

[screenshot: Hercules SETUP utility connecting from the home machine]

As you can see, I am getting a connection refused message.

Home computer stunnel.log (these messages occurred during the connection attempt):

2019.04.10 23:36:09 LOG7[main]: Found 1 ready file descriptor(s)
2019.04.10 23:36:09 LOG7[main]: FD=616 ifds=r-x ofds = ---
2019.04.10 23:36:09 LOG7[main]: FD=624 ifds=r-x ofds = ---
2019.04.10 23:36:09 LOG7[main]: Service[brianclient] accepted(FD=768) from 127.0.0.1:56795
2019.04.10 23:36:09 LOG7[main]: Creating a new thread
2019.04.10 23:36:09 LOG7[main]: New thread created
2019.04.10 23:36:09 LOG7[2]: Service[brianclient] started
2019.04.10 23:36:09 LOG7[2]: Setting local socket options(FD=768)
2019.04.10 23:36:09 LOG7[2]: Option TCP_NODELAY set on local socket
2019.04.10 23:36:09 LOG5[2]: Service[brianclient] accepted connection from 127.0.0.1:56795
2019.04.10 23:36:09 LOG6[2]: s_connect: connecting 12.34.56.78:40000
2019.04.10 23:36:09 LOG7[2]: s_connect: s_poll_wait 12.34.56.78:40000: waiting 10 seconds
2019.04.10 23:36:10 LOG3[2]: s_connect: connect 12.34.56.78:40000: Connection refused(WSAECONNREFUSED) (10061)
2019.04.10 23:36:10 LOG3[2]: No more addresses to connect
2019.04.10 23:36:10 LOG5[2]: Connection reset: 0 byte(s) sent to TLS, 0 byte(s) sent to socket
2019.04.10 23:36:10 LOG7[2]: Local descriptor(FD=768) closed
2019.04.10 23:36:10 LOG7[2]: Service[brianclient] finished(0 left)

Work computer stunnel.log (from startup; no messages appeared on the connection attempt):

2019.04.10 21:24:55 LOG7[main]: Running on Windows 6.2
2019.04.10 21:24:55 LOG7[main]: No limit detected for the number of clients
2019.04.10 21:24:55 LOG5[main]: stunnel 5.51 on x64-pc-mingw32-gnu platform
2019.04.10 21:24:55 LOG5[main]: Compiled/running with OpenSSL 1.1.1b  26 Feb 2019
2019.04.10 21:24:55 LOG5[main]: Threading:WIN32 Sockets:SELECT,IPv6 TLS:ENGINE,OCSP,PSK,SNI
2019.04.10 21:24:55 LOG7[main]: errno: (* _errno())
2019.04.10 21:24:55 LOG7[service]: GUI message loop initialized
2019.04.10 21:24:55 LOG7[main]: Running on Windows 6.2
2019.04.10 21:24:55 LOG5[main]: Reading configuration from file stunnel.conf
2019.04.10 21:24:55 LOG5[main]: UTF-8 byte order mark detected
2019.04.10 21:24:55 LOG7[main]: Compression disabled
2019.04.10 21:24:55 LOG7[main]: No PRNG seeding was required
2019.04.10 21:24:55 LOG6[main]: Initializing service[brianserver]
2019.04.10 21:24:55 LOG6[main]: PSK identities: 1 retrieved
2019.04.10 21:24:55 LOG7[main]: Ciphers: HIGH:!aNULL:!SSLv2:!DH:!kDHEPSK
2019.04.10 21:24:55 LOG7[main]: TLSv1.3 ciphersuites: TLS_CHACHA20_POLY1305_SHA256:TLS_AES_256_GCM_SHA384:TLS_AES_128_GCM_SHA256
2019.04.10 21:24:55 LOG7[main]: TLS options: 0x02100004 (+0x00000000, -0x00000000)
2019.04.10 21:24:55 LOG7[main]: No certificate or private key specified
2019.04.10 21:24:55 LOG6[main]: DH initialization not needed
2019.04.10 21:24:55 LOG7[main]: ECDH initialization
2019.04.10 21:24:55 LOG7[main]: ECDH initialized with curves X25519:P-256:X448:P-521:P-384
2019.04.10 21:24:55 LOG5[main]: Configuration successful
2019.04.10 21:24:55 LOG7[main]: Binding service[brianserver]
2019.04.10 21:24:55 LOG7[main]: Listening file descriptor created(FD=716)
2019.04.10 21:24:55 LOG7[main]: Setting accept socket options(FD=716)
2019.04.10 21:24:55 LOG7[main]: Option SO_EXCLUSIVEADDRUSE set on accept socket
2019.04.10 21:24:55 LOG6[main]: Service[brianserver] (FD=716) bound to 127.0.0.1:40020
2019.04.10 21:24:55 LOG7[cron]: Cron thread initialized
2019.04.10 21:25:55 LOG6[cron]: Executing cron jobs
2019.04.10 21:25:55 LOG6[cron]: Cron jobs completed in 0 seconds
2019.04.10 21:25:55 LOG7[cron]: Waiting 86400 seconds

Also, psk1.txt has matching content:

brianskey:a3...6r

Also, on work computer:

C:\Program Files (x86)\stunnel\bin>netstat -ano|findstr 40020
   TCP    0.0.0.0:40020          0.0.0.0:0              LISTENING       71888
   TCP    127.0.0.1:40020        0.0.0.0:0              LISTENING       34728

Note: the line with “0.0.0.0:40020” shows up after I start the Hercules listener.
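
One detail the logs and this netstat output make visible: the home client dials 12.34.56.78:40000, which the firewall forwards to port 40000 on the work machine, yet nothing on the work machine listens on 40000 (the work stunnel accepts only on 127.0.0.1:40020, and Hercules listens on 0.0.0.0:40020), which would match the WSAECONNREFUSED. A hedged sketch of a work-side configuration consistent with the port mapping described, assuming Hercules keeps listening on 40020 (the home-side config can stay as it is):

[brianserver]
client = no
; accept TLS on the port the firewall actually forwards, on all interfaces
accept = 0.0.0.0:40000
; hand the decrypted traffic to the local Hercules listener
connect = 127.0.0.1:40020
PSKsecrets = psk1.txt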



Best practices in JDBC Connection

JDBC Connection Scope

How should your application manage the life cycle of JDBC connections? Asked another way: what is the scope of the JDBC connection object within your application? Let's consider a servlet that performs JDBC access. One possibility is to define the connection with servlet scope, as follows.

import java.sql.*;
import javax.servlet.*;
import javax.servlet.http.*;

public class JDBCServlet extends HttpServlet {

    private Connection connection;

    public void init(ServletConfig c) throws ServletException {
      // Open the connection here; it lives for the whole life of the servlet
    }

    public void destroy() {
      // Close the connection here
    }

    public void doGet(HttpServletRequest req, HttpServletResponse res) throws ServletException {
      // Use the connection here
      try {
        Statement stmt = connection.createStatement();
        // do JDBC work
      }
      catch (SQLException e) {
        throw new ServletException(e);
      }
    }
}

Using this approach, the servlet creates a JDBC connection when it is loaded and destroys it when it is unloaded. The doGet() method has immediate access to the connection since it has servlet scope. However, the database connection is kept open for the entire lifetime of the servlet, and the database has to retain an open connection for every user connected to your application. If your application supports a large number of concurrent users, its scalability will be severely limited!

Method Scope Connections


To avoid the long lifetime of the JDBC connection in the example above, we can give the connection method scope, as follows.

import java.sql.*;
import javax.servlet.*;
import javax.servlet.http.*;

public class JDBCServlet extends HttpServlet {

  private Connection getConnection() throws SQLException {
    // create a JDBC connection (URL and credentials are placeholders)
    return DriverManager.getConnection("jdbc:mydriver://dbhost/mydb");
  }

  public void doGet(HttpServletRequest req, HttpServletResponse res) throws ServletException {
    try {
      Connection connection = getConnection();
      // do JDBC work
      connection.close();
    }
    catch (SQLException sqlException) {
      sqlException.printStackTrace();
    }
  }
}


This approach is a significant improvement over our first example because the connection's lifetime is now reduced to the time it takes to execute doGet(), and the number of connections to the back-end database at any instant is reduced to the number of users concurrently executing doGet(). However, this version creates and destroys many more connections than the first example, and that could easily become a performance problem.

To retain the advantages of a method-scoped connection while avoiding the cost of creating and destroying a large number of connections, we now use connection pooling to arrive at our finished example, which illustrates the best practices of connection pool usage.

import java.sql.*;
import javax.sql.*;
import javax.naming.*;
import javax.servlet.*;
import javax.servlet.http.*;

public class JDBCServlet extends HttpServlet {

  private DataSource datasource;

  public void init(ServletConfig config) throws ServletException {
    try {
      // Look up the JNDI data source only once, at init time
      Context envCtx = (Context) new InitialContext().lookup("java:comp/env");
      datasource = (DataSource) envCtx.lookup("jdbc/MyDataSource");
    }
    catch (NamingException e) {
      e.printStackTrace();
    }
  }

  private Connection getConnection() throws SQLException {
    // borrow a pooled connection; close() returns it to the pool
    return datasource.getConnection();
  }

  public void doGet(HttpServletRequest req, HttpServletResponse res) throws ServletException {
    Connection connection = null;
    try {
      connection = getConnection();
      // do JDBC work
    }
    catch (SQLException sqlException) {
      sqlException.printStackTrace();
    }
    finally {
      // always return the connection to the pool, even on error
      if (connection != null) {
        try { connection.close(); } catch (SQLException e) {}
      }
    }
  }
}


This approach uses the connection only for the minimum time the servlet requires it and also avoids creating and destroying a large number of physical database connections. The connection best practices that we have used are:

A JNDI datasource is used as a factory for connections. The datasource is looked up only once, in init(), since JNDI lookups can be slow. JNDI should be configured so that the bound datasource implements connection pooling; connections issued from a pooling datasource are returned to the pool when closed.
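
How the pooled datasource gets bound under jdbc/MyDataSource is container-specific and outside the article's code; as one illustration (names and values here are assumptions, not part of the original), a Tomcat context.xml resource entry might look like:

<!-- conf/context.xml (or the webapp's META-INF/context.xml); all values illustrative -->
<Resource name="jdbc/MyDataSource"
          auth="Container"
          type="javax.sql.DataSource"
          driverClassName="org.postgresql.Driver"
          url="jdbc:postgresql://dbhost:5432/mydb"
          username="dbuser"
          password="dbpass"/>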

We have moved connection.close() into a finally block to ensure that the connection is closed even if an exception occurs during the doGet() JDBC processing. This practice is essential when using a connection pool: a connection that is not closed is never returned to the pool and never becomes available for reuse. A finally block can likewise guarantee the closure of resources attached to JDBC statements and result sets when unexpected exceptions occur; just call close() on those objects as well.
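
For example, a minimal sketch of that pattern applied to a statement and result set (pre-Java 7 style, matching the article's era; the query string is illustrative only):

Statement stmt = null;
ResultSet rs = null;
try {
  stmt = connection.createStatement();
  rs = stmt.executeQuery("SELECT 1");
  // process the results here
}
finally {
  // close in reverse order of creation; ignore secondary failures
  if (rs != null)   try { rs.close(); }   catch (SQLException ignored) {}
  if (stmt != null) try { stmt.close(); } catch (SQLException ignored) {}
}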

For more details:
http://www.javaranch.com/journal/200601/JDBCConnectionPooling.html