#StackBounty: #ssh #freebsd #jail Pubkey SSH fails with "we did not send a packet, disable method" in freebsd jail

Bounty: 50

I have a FreeBSD VPS with 2 jails, each set up with ezjail (I now know ezjail is largely deprecated, but didn't at the time).

$ jls
   JID  IP Address      Hostname                      Path
     1  172.16.1.1      wwwserver                     /usr/jails/wwwserver
     2  172.16.1.2      wwwgit                        /usr/jails/wwwgit

The host and the jails are all running 12.2-RELEASE-p2.

I have key-based ssh login enabled in each jail, as well as the host. This works fine for the host and wwwserver, but not wwwgit. For that jail, I get this log:

debug1: Reading configuration data /Users/chris/.ssh/config
debug1: /Users/chris/.ssh/config line 3: Applying options for *
debug1: /Users/chris/.ssh/config line 22: Applying options for waitstaff_git
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 47: Applying options for *
debug2: resolve_canonicalize: hostname {censored-ip-address} is address
debug2: ssh_connect_direct
debug1: Connecting to {censored-ip-address} [{censored-ip-address}] port 22.
debug1: Connection established.
debug1: identity file /Users/chris/.ssh/id_ed25519_chrisdeluca_git type 3
debug1: identity file /Users/chris/.ssh/id_ed25519_chrisdeluca_git-cert type -1
debug1: Local version string SSH-2.0-OpenSSH_8.1
debug1: Remote protocol version 2.0, remote software version OpenSSH_7.9 FreeBSD-20200214
debug1: match: OpenSSH_7.9 FreeBSD-20200214 pat OpenSSH* compat 0x04000000
debug2: fd 3 setting O_NONBLOCK
debug1: Authenticating to {censored-ip-address}:22 as 'git'
debug3: hostkeys_foreach: reading file "/Users/chris/.ssh/known_hosts"
debug3: record_hostkey: found key type ECDSA in file /Users/chris/.ssh/known_hosts:7
debug3: load_hostkeys: loaded 1 keys from {censored-ip-address}
debug3: order_hostkeyalgs: prefer hostkeyalgs: ecdsa-sha2-nistp256-cert-v01@openssh.com,ecdsa-sha2-nistp384-cert-v01@openssh.com,ecdsa-sha2-nistp521-cert-v01@openssh.com,ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521
debug3: send packet: type 20
debug1: SSH2_MSG_KEXINIT sent
debug3: receive packet: type 20
debug1: SSH2_MSG_KEXINIT received
debug2: local client KEXINIT proposal
debug2: KEX algorithms: curve25519-sha256,curve25519-sha256@libssh.org,ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group16-sha512,diffie-hellman-group18-sha512,diffie-hellman-group14-sha256,diffie-hellman-group14-sha1,ext-info-c
debug2: host key algorithms: ecdsa-sha2-nistp256-cert-v01@openssh.com,ecdsa-sha2-nistp384-cert-v01@openssh.com,ecdsa-sha2-nistp521-cert-v01@openssh.com,ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521,ssh-ed25519-cert-v01@openssh.com,rsa-sha2-512-cert-v01@openssh.com,rsa-sha2-256-cert-v01@openssh.com,ssh-rsa-cert-v01@openssh.com,ssh-ed25519,rsa-sha2-512,rsa-sha2-256,ssh-rsa
debug2: ciphers ctos: chacha20-poly1305@openssh.com,aes128-ctr,aes192-ctr,aes256-ctr,aes128-gcm@openssh.com,aes256-gcm@openssh.com
debug2: ciphers stoc: chacha20-poly1305@openssh.com,aes128-ctr,aes192-ctr,aes256-ctr,aes128-gcm@openssh.com,aes256-gcm@openssh.com
debug2: MACs ctos: umac-64-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-sha2-512-etm@openssh.com,hmac-sha1-etm@openssh.com,umac-64@openssh.com,umac-128@openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-sha1
debug2: MACs stoc: umac-64-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-sha2-512-etm@openssh.com,hmac-sha1-etm@openssh.com,umac-64@openssh.com,umac-128@openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-sha1
debug2: compression ctos: none,zlib@openssh.com,zlib
debug2: compression stoc: none,zlib@openssh.com,zlib
debug2: languages ctos:
debug2: languages stoc:
debug2: first_kex_follows 0
debug2: reserved 0
debug2: peer server KEXINIT proposal
debug2: KEX algorithms: curve25519-sha256,curve25519-sha256@libssh.org,ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group16-sha512,diffie-hellman-group18-sha512,diffie-hellman-group14-sha256,diffie-hellman-group14-sha1
debug2: host key algorithms: rsa-sha2-512,rsa-sha2-256,ssh-rsa,ecdsa-sha2-nistp256,ssh-ed25519
debug2: ciphers ctos: chacha20-poly1305@openssh.com,aes128-ctr,aes192-ctr,aes256-ctr,aes128-gcm@openssh.com,aes256-gcm@openssh.com,aes128-cbc,aes192-cbc,aes256-cbc
debug2: ciphers stoc: chacha20-poly1305@openssh.com,aes128-ctr,aes192-ctr,aes256-ctr,aes128-gcm@openssh.com,aes256-gcm@openssh.com,aes128-cbc,aes192-cbc,aes256-cbc
debug2: MACs ctos: umac-64-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-sha2-512-etm@openssh.com,hmac-sha1-etm@openssh.com,umac-64@openssh.com,umac-128@openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-sha1
debug2: MACs stoc: umac-64-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-sha2-512-etm@openssh.com,hmac-sha1-etm@openssh.com,umac-64@openssh.com,umac-128@openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-sha1
debug2: compression ctos: none,zlib@openssh.com
debug2: compression stoc: none,zlib@openssh.com
debug2: languages ctos:
debug2: languages stoc:
debug2: first_kex_follows 0
debug2: reserved 0
debug1: kex: algorithm: curve25519-sha256
debug1: kex: host key algorithm: ecdsa-sha2-nistp256
debug1: kex: server->client cipher: chacha20-poly1305@openssh.com MAC: <implicit> compression: none
debug1: kex: client->server cipher: chacha20-poly1305@openssh.com MAC: <implicit> compression: none
debug3: send packet: type 30
debug1: expecting SSH2_MSG_KEX_ECDH_REPLY
debug3: receive packet: type 31
debug1: Server host key: ecdsa-sha2-nistp256 SHA256:nhwOgcMl+Z+47Qu1VHAnjGnSbIdnjqMV60XQ9ilsCrI
debug3: hostkeys_foreach: reading file "/Users/chris/.ssh/known_hosts"
debug3: record_hostkey: found key type ECDSA in file /Users/chris/.ssh/known_hosts:7
debug3: load_hostkeys: loaded 1 keys from {censored-ip-address}
debug1: Host '{censored-ip-address}' is known and matches the ECDSA host key.
debug1: Found key in /Users/chris/.ssh/known_hosts:7
debug3: send packet: type 21
debug2: set_newkeys: mode 1
debug1: rekey out after 134217728 blocks
debug1: SSH2_MSG_NEWKEYS sent
debug1: expecting SSH2_MSG_NEWKEYS
debug3: receive packet: type 21
debug1: SSH2_MSG_NEWKEYS received
debug2: set_newkeys: mode 0
debug1: rekey in after 134217728 blocks
debug1: Will attempt key: /Users/chris/.ssh/id_ed25519_chrisdeluca_git ED25519 SHA256:xUYB2rlHSwtkA515PXWHC3dN8XQkcG2dbXJg1SPikxM explicit agent
debug2: pubkey_prepare: done
debug3: send packet: type 5
debug3: receive packet: type 7
debug1: SSH2_MSG_EXT_INFO received
debug1: kex_input_ext_info: server-sig-algs=<ssh-ed25519,ssh-rsa,rsa-sha2-256,rsa-sha2-512,ssh-dss,ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521>
debug3: receive packet: type 6
debug2: service_accept: ssh-userauth
debug1: SSH2_MSG_SERVICE_ACCEPT received
debug3: send packet: type 50
debug3: receive packet: type 51
debug1: Authentications that can continue: publickey,keyboard-interactive
debug3: start over, passed a different list publickey,keyboard-interactive
debug3: preferred publickey,keyboard-interactive,password
debug3: authmethod_lookup publickey
debug3: remaining preferred: keyboard-interactive,password
debug3: authmethod_is_enabled publickey
debug1: Next authentication method: publickey
debug1: Offering public key: /Users/chris/.ssh/id_ed25519_chrisdeluca_git ED25519 SHA256:xUYB2rlHSwtkA515PXWHC3dN8XQkcG2dbXJg1SPikxM explicit agent
debug3: send packet: type 50
debug2: we sent a publickey packet, wait for reply
debug3: receive packet: type 51
debug1: Authentications that can continue: publickey,keyboard-interactive
debug2: we did not send a packet, disable method
debug3: authmethod_lookup keyboard-interactive
debug3: remaining preferred: password
debug3: authmethod_is_enabled keyboard-interactive
debug1: Next authentication method: keyboard-interactive
debug2: userauth_kbdint
debug3: send packet: type 50
debug2: we sent a keyboard-interactive packet, wait for reply
debug3: receive packet: type 60
debug2: input_userauth_info_req
debug2: input_userauth_info_req: num_prompts 1
Password for git@waitstaff:

At first I thought maybe my permissions were off, but I can confirm the public key is in the git user's .ssh/authorized_keys file, and the permissions are correct:

drwx------  2 git  git  512 Dec 29 22:07 .ssh
-rw-------  1 git  git  109 Dec 29 22:13 authorized_keys

The SSH config itself is nearly identical across the host and jails.

Host

$ grep -E -v '^$|^#' /etc/ssh/sshd_config
Subsystem   sftp    /usr/libexec/sftp-server
PermitRootLogin without-password

wwwserver

$ sudo jexec wwwserver grep -E -v '^$|^#' /etc/ssh/sshd_config
Port 2222
AuthorizedKeysFile  .ssh/authorized_keys
ChallengeResponseAuthentication no

wwwgit

$ sudo jexec wwwgit grep -E -v '^$|^#' /etc/ssh/sshd_config
AuthorizedKeysFile  .ssh/authorized_keys
Subsystem   sftp    /usr/libexec/sftp-server

I also have a local ssh config file, which might be helpful. Here are the relevant contents.

IdentitiesOnly yes

Host *
  AddKeysToAgent yes
  UseKeychain yes

...

# Freebsd host
Host waitstaff
  Hostname {censored-ip-address}
  Port 22
  IdentityFile ~/.ssh/id_ed25519_waitstaff
  User freebsd

# wwwserver jail
Host waitstaff_deploy
  Hostname {censored-ip-address}
  Port 2222
  IdentityFile ~/.ssh/id_ed25519_waitstaff_deploy
  User chris

# wwwgit jail
Host waitstaff_git
  Hostname {censored-ip-address}
  IdentityFile ~/.ssh/id_ed25519_chrisdeluca_git
  User git

I’m at a loss about what’s wrong. Any help figuring this out would be greatly appreciated. Thanks in advance!

Edit: In case it's pertinent, I changed the home directory for the git user (the user I'm trying to log in as) to /git.
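One standard thing to rule out here: sshd's StrictModes setting (on by default) silently rejects an authorized_keys file if the user's home directory, ~/.ssh, or the file itself is group- or world-writable, and the client then falls through to keyboard-interactive exactly as in the log above. A minimal Python sketch of that writability check (it omits sshd's ownership checks; /git is this question's home directory, adjust as needed):

```python
import os
import stat

def unsafe_paths(home):
    """Return the paths sshd's StrictModes check would object to:
    anything in home -> home/.ssh -> home/.ssh/authorized_keys
    that is group- or world-writable."""
    candidates = [
        home,
        os.path.join(home, ".ssh"),
        os.path.join(home, ".ssh", "authorized_keys"),
    ]
    bad = []
    for path in candidates:
        if not os.path.exists(path):
            continue
        mode = os.stat(path).st_mode
        if mode & (stat.S_IWGRP | stat.S_IWOTH):
            bad.append(path)
    return bad
```

Running this inside the wwwgit jail against /git should return an empty list; any path it reports is one StrictModes would refuse. (Raising `LogLevel` in the jail's sshd_config, or running a one-off `sshd -d` on a spare port, would confirm the server-side reason either way.)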


Get this bounty!!!

#StackBounty: #ssh #xforwarding speed differences in x forwarding

Bounty: 50

We’re dealing with three Linux systems:

  • a "server" (actually a workstation that is also used as a desktop PC) running Ubuntu 18.04

  • an old client (laptop with a Sandy Bridge CPU, no dedicated GPU) running Arch Linux

  • a new client (laptop with a Comet Lake CPU, a dedicated Nvidia GPU, hybrid setting in the BIOS, no Bumblebee installation) running Debian testing with the proprietary Nvidia drivers from the Debian repositories

The two clients are in the same network (adjacent ports on the switch, same IP range, both with gigabit Ethernet adapters, and speedtest.net reports the same connection speed for both).

I can ssh onto the server from both clients, and X forwarding generally works, except that it is much slower on the new client. (As a test cycle I start the Intel VTune GUI and close it once the main menu is completely drawn; that takes ~26 s on the new client and ~5 s on the old client.)

With ssh -vvv I don’t see any difference between the two setups (both negotiate aes128-gcm@openssh.com as the cipher, and neither uses connection compression). In fact, the $HOME/.ssh/config and /etc/ssh/ssh_config files are the same (the latter shipped slightly differently, but I copied the file from the old client to the new one to rule that out as the source of the difference).

Obviously we would like X forwarding on the new client to be as fast as it is on the old client. Any suggestions where to look for differences? Can the X configurations / graphics drivers on the clients cause the slowdown? (We don’t see general graphics issues on any of the three systems.)

Update

Both systems run the same window manager (i3). Enabling/disabling compression doesn’t change the behaviour.

A bit by accident (though triggered by the suggestion of @symcbean), I noticed that the behaviour changes dramatically when switching the network device on the new client: connecting through the USB-C-to-Ethernet adapter that came with it, I observe the slow behaviour; connecting through wifi I get down to ~9 s for the start-and-quit cycle mentioned above. (The old client uses its built-in Ethernet adapter.)

Digging further: while speedtest.net says both clients get 800 Mbit/s (Ethernet on both clients), an scp from the server gets 90 MB/s to the old client, but only 5 MB/s to the new client over Ethernet (50 MB/s over wifi).

Update 2

In ip addr I see that the qdisc setting differs between the clients:

  • new client ethernet <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
  • old client ethernet <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
  • new client wifi <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
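For comparing more machines, the qdisc can be pulled out of captured `ip addr` output programmatically; a small Python sketch (it parses saved text like the lines above, it does not query the kernel):

```python
import re

def qdisc_by_interface(ip_addr_output):
    """Map interface name -> qdisc from the text output of `ip addr`."""
    pattern = re.compile(r'^\d+:\s+(\S+?):\s+<[^>]*>.*?\bqdisc\s+(\S+)', re.M)
    return {name: qdisc for name, qdisc in pattern.findall(ip_addr_output)}
```

The pfifo_fast-vs-fq_codel difference is at least consistent with the throughput gap; one hedged experiment would be switching the new client's wired interface to fq_codel (e.g. `tc qdisc replace dev <iface> root fq_codel`, or setting the `net.core.default_qdisc` sysctl) and rerunning the scp test, though the qdisc may be a symptom of the driver rather than the cause.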



#StackBounty: #ssh #x11-forwarding #pycharm ssh -X: Remote application should open hyperlinks in the local browser

Bounty: 100

I use ssh -X to execute a GUI on a remote machine.

This works great, except for one thing:

If I click on a hyperlink in the application, then a browser gets launched on the remote machine.

In my case the application is PyCharm.

How can I open the hyperlink in my local browser?

[Screenshot: open hyperlink in local browser]

There is a port forwarding from port 8000 on the local machine to port 8000 on the remote server.
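One generic workaround (a sketch, not PyCharm-specific; the helper names and port 8001 are assumptions): run a tiny listener on the local machine that opens whatever URL it receives, reverse-forward a port to it with `ssh -R 8001:localhost:8001 …`, and point the remote's `$BROWSER` at a one-line script that writes the URL to that port (e.g. with `nc`):

```python
import socket
import webbrowser

def recv_url(port, host="127.0.0.1"):
    """Accept one connection on host:port and return the URL line it sends."""
    with socket.create_server((host, port)) as srv:
        conn, _ = srv.accept()
        with conn:
            return conn.recv(4096).decode("utf-8", "replace").strip()

def serve_forever(port=8001):
    # Each URL arriving through the SSH reverse tunnel opens in the local browser.
    while True:
        url = recv_url(port)
        if url.startswith(("http://", "https://")):
            webbrowser.open(url)
```

With this running locally, a remote `$BROWSER` script as simple as `echo "$1" | nc localhost 8001` should make PyCharm's hyperlinks appear in the local browser, assuming PyCharm honours `$BROWSER` on that platform.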



#StackBounty: #ssh #pygame #framebuffer #computer-vision #xrdp Python pygame program won't run through SSH / Remote Desktop

Bounty: 50

I’ve been working on my own object recognition program based on the rpi-vision test program pitft_labeled_output.py (from this webpage). It’s basically a custom neural network model plus slightly modified code to make the new model work. However, I’m having problems running my program through Remote Desktop. This is how I run my program (using the venv from the Graphic Labeling Demo):

pi@raspberrypi:~ $ sudo bash
root@raspberrypi:/home/pi# cd signcap && . ../rpi-vision/.venv/bin/activate
(.venv) root@raspberrypi:/home/pi/signcap# python3 src/dar_ts.py
pygame 1.9.6
Hello from the pygame community. https://www.pygame.org/contribute.html
No protocol specified
No protocol specified
No protocol specified
xcb_connection_has_error() returned true
No protocol specified
xcb_connection_has_error() returned true
INFO: Created TensorFlow Lite XNNPACK delegate for CPU.
No protocol specified
Unable to init server: Could not connect: Connection refused

(Mask:1359): Gtk-WARNING **: 20:00:31.012: cannot open display: :10.0

As you can see, I’m getting several "No protocol specified" errors and the display error. When I run echo $DISPLAY in a console when I’m connected through Remote Desktop, it prints :10.0.

When I run the pitft_labeled_output.py through Remote Desktop like:

sudo bash
root@raspberrypi:/home/pi# cd rpi-vision && . .venv/bin/activate
(.venv) root@raspberrypi:/home/pi/rpi-vision# python3 tests/pitft_labeled_output.py

The display is successfully opened and everything works as it should.

However, my program works fine locally or through VNC. When I’m connected through VNC and run echo $DISPLAY, I get :1.0.
What could be the issue that it doesn’t work through Remote Desktop, but pitft_labeled_output.py does?

Here is the code of my program so you can compare it with pitft_labeled_output.py from rpi-vision:

import time
import logging
import argparse
import pygame
import os
import sys
import numpy as np
import subprocess
import signal

# Environment variables for Braincraft HAT.
os.environ['SDL_FBDEV'] = "/dev/fb1"
os.environ['SDL_VIDEODRIVER'] = "fbcon"

def dont_quit(signal, frame):
    print('Caught signal: {}'.format(signal))
signal.signal(signal.SIGHUP, dont_quit)

from capture import PiCameraStream
from tsdar import TrafficSignDetectorAndRecognizer

# Initialize the logger.
logging.basicConfig()
logging.getLogger().setLevel(logging.INFO)

# Initialize the display.
pygame.init()
# Create a Surface object which is shown on the display.
# If size is set to (0,0), the created Surface will have the same size as the
# current screen resolution (240x240 for Braincraft HAT).
screen = pygame.display.set_mode((0,0), pygame.FULLSCREEN)
# Declare the capture manager for Pi Camera.
capture_manager = None

# Function for parsing program arguments.
def parse_args():
    parser = argparse.ArgumentParser()
    parser.add_argument('--rotation', type=int, choices=[0, 90, 180, 270],
                        dest='rotation', action='store', default=0,
                        help='Rotate everything on the display by this angle')
    args = parser.parse_args()
    return args

last_seen = []
already_seen = []

def main(args):
    global capture_manager, last_seen, already_seen

    # Initialize the capture manager to get stream from Pi Camera.
    if screen.get_width() == screen.get_height() or args.rotation in (0, 180):
        capture_manager = PiCameraStream(resolution=(max(320, screen.get_width()), max(240, screen.get_height())), rotation=180, preview=False, format='rgb')
    else:
        capture_manager = PiCameraStream(resolution=(max(240, screen.get_height()), max(320, screen.get_width())), rotation=180, preview=False, format='rgb')

    # Initialize the buffer size to screen size.
    if args.rotation in (0, 180):
        buffer = pygame.Surface((screen.get_width(), screen.get_height()))
    else:
        buffer = pygame.Surface((screen.get_height(), screen.get_width()))

    # Hide the mouse from the screen.
    pygame.mouse.set_visible(False)
    # Initialize the screen to black.
    screen.fill((0,0,0))
    # Try to show the splash image on the screen (if the image exists), otherwise, leave screen black.
    try:
        splash = pygame.image.load(os.path.dirname(sys.argv[0])+'/bchatsplash.bmp')
        splash = pygame.transform.rotate(splash, args.rotation)
        screen.blit(splash, ((screen.get_width() / 2) - (splash.get_width() / 2),
                    (screen.get_height() / 2) - (splash.get_height() / 2)))
    except pygame.error:
        pass
    pygame.display.update()

    # Use the default font.
    smallfont = pygame.font.Font(None, 24)
    medfont = pygame.font.Font(None, 36)

    # Initialize the traffic sign detector and recognizer object with the path
    # to the TensorFlow Lite (tflite) neural network model.
    tsdar0 = TrafficSignDetectorAndRecognizer(os.path.dirname(sys.argv[0])+'/models/uw_tsdar_model_no_aug_w_opts.tflite')

    # Start getting capture from Pi Camera.
    capture_manager.start()

    while not capture_manager.stopped:
        # If the frame wasn't captured successfully, go to the next while iteration
        if capture_manager.frame is None:
            continue

        # Fill the buffer with black color
        buffer.fill((0,0,0))

        # Update the frame.
        rgb_frame = capture_manager.frame

        # Make predictions. If traffic signs were detected, a bounding rectangle
        # will be drawn around them.
        timestamp = time.monotonic()
        predictions, out_frame = tsdar0.predict(rgb_frame)
        delta = time.monotonic() - timestamp
        logging.info(predictions)
        logging.info("TFLite inference took %d ms, %0.1f FPS" % (delta * 1000, 1 / delta))

        # Make an image from a frame.
        previewframe = np.ascontiguousarray(out_frame)
        img = pygame.image.frombuffer(previewframe, capture_manager.camera.resolution, 'RGB')

        # Put the image into buffer.
        buffer.blit(img, (0, 0))

        # Add FPS and temperature on the top corner of the buffer.
        fpstext = "%0.1f FPS" % (1/delta,)
        fpstext_surface = smallfont.render(fpstext, True, (255, 0, 0))
        fpstext_position = (buffer.get_width()-10, 10) # Near the top right corner
        buffer.blit(fpstext_surface, fpstext_surface.get_rect(topright=fpstext_position))
        try:
            temp = int(open("/sys/class/thermal/thermal_zone0/temp").read()) / 1000
            temptext = "%d\N{DEGREE SIGN}C" % temp
            temptext_surface = smallfont.render(temptext, True, (255, 0, 0))
            temptext_position = (buffer.get_width()-10, 30) # near the top right corner
            buffer.blit(temptext_surface, temptext_surface.get_rect(topright=temptext_position))
        except OSError:
            pass

        # Reset the detecttext vertical position.
        dtvp = 0

        # For each traffic sign that is recognized in the current frame (up to 3 signs),
        # its name will be printed on the screen and it will be announced if it already wasn't.
        for i in range(len(predictions)):
            p = predictions[i]
            name = tsdar0.CLASS_NAMES[p]
            print("Detected", name)

            last_seen.append(name)

            # Render sign name on the bottom of the buffer (if multiple signs detected,
            # current sign name is written above the previous sign name).
            detecttext = name
            detecttext_font = medfont
            detecttext_color = (255, 0, 0)
            detecttext_surface = detecttext_font.render(detecttext, True, detecttext_color)
            dtvp = buffer.get_height() - (i+1)*(detecttext_font.size(detecttext)[1]) - i*detecttext_font.size(detecttext)[1]//2
            detecttext_position = (buffer.get_width()//2, dtvp)
            buffer.blit(detecttext_surface, detecttext_surface.get_rect(center=detecttext_position))

            # Make an announcement for the traffic sign if it's new (not detected in previous consecutive frames).
            if detecttext not in already_seen:
                os.system('echo %s | festival --tts & ' % detecttext)

        # If new traffic signs were detected in the current frame, add them to already_seen list
        for ts in last_seen:
            if ts not in already_seen:
                already_seen.append(ts)

        # If the traffic sign disappeared from the frame (a car passed it), remove it from already_seen
        diff = list(set(already_seen)-set(last_seen))
        already_seen = [ts for ts in already_seen if ts not in diff]

        # Reset last_seen.
        last_seen = []

        # Show the buffer image on the screen.
        screen.blit(pygame.transform.rotate(buffer, args.rotation), (0,0))
        pygame.display.update()

# Run the program until it's interrupted by key press.
if __name__ == "__main__":
    args = parse_args()
    try:
        main(args)
    except KeyboardInterrupt:
        capture_manager.stop()

Edit:
To clarify a bit more, I first followed this tutorial to install my Braincraft HAT and then followed this one to try out the object recognition test example (pitft_labeled_output.py) from rpi-vision. Everything worked great through SSH: I saw logging info in the SSH console, and the camera feed and recognized objects on the Braincraft HAT display. Then I decided to try it out from Windows Remote Desktop (after installing xrdp on the RPi) and it worked great; I saw logging info in the terminal and the camera feed on the Braincraft display. But when I wanted to run my program instead of pitft_labeled_output.py, I received the errors mentioned above. I even went further and replaced the pitft_labeled_output.py code with my code (dar_ts.py) and ran it as if I were running pitft_labeled_output.py (I thought there might be some dependencies inside the rpi-vision folder), but it didn’t work; I received the same error. What could be the issue with my code?
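One thing that stands out when comparing with pitft_labeled_output.py is that the program above unconditionally forces the framebuffer via SDL_FBDEV/SDL_VIDEODRIVER, while under xrdp an X display (:10.0) exists and other libraries (GTK, the camera preview) still try to talk to X. A hedged sketch of one hypothesis, not a confirmed fix: only force the framebuffer when no X display is present (the helper name is made up; the values are the ones from the program above):

```python
import os

def sdl_env(environ):
    """Return SDL overrides: force the Braincraft framebuffer only
    when no X display is available (e.g. plain SSH, not xrdp/VNC)."""
    if environ.get("DISPLAY"):
        return {}  # an X display exists; let SDL use it
    return {"SDL_FBDEV": "/dev/fb1", "SDL_VIDEODRIVER": "fbcon"}

# In the program, the two os.environ[...] assignments near the top
# could become: os.environ.update(sdl_env(os.environ))
```

Separately, "No protocol specified" is the classic symptom of an X client lacking authorization for the display; running under `sudo bash`, root does not inherit the xrdp user's X cookie, so preserving `DISPLAY` and `XAUTHORITY` across sudo is also worth checking.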

P.S. What also confused me further is that pitft_labeled_output.py has a typo in line 56 and runs fine anyway, but when I ran my code for the first time, it asked me to correct the error.



#StackBounty: #ssh #pygame #framebuffer #computer-vision #xrdp Python pygame program won't run through SSH / Remote Desktop

Bounty: 50

I’ve been working on my own object recognition program based on the rpi-vision test program pitft_labeled_output.py (from this webpage). It’s basically a custom neural network model and a bit modified code for the new model to work. However, I’m having problems running my program through Remote Desktop. This is how I run my program (using venv from Graphic Labeling Demo):

pi@raspberrypi:~ $ sudo bash
root@raspberrypi:/home/pi# cd signcap && . ../rpi-vision/.venv/bin/activate
(.venv) root@raspberrypi:/home/pi/signcap# python3 src/dar_ts.py
pygame 1.9.6
Hello from the pygame community. https://www.pygame.org/contribute.html
No protocol specified
No protocol specified
No protocol specified
xcb_connection_has_error() returned true
No protocol specified
xcb_connection_has_error() returned true
INFO: Created TensorFlow Lite XNNPACK delegate for CPU.
No protocol specified
Unable to init server: Could not connect: Connection refused

(Mask:1359): Gtk-WARNING **: 20:00:31.012: cannot open display: :10.0

As you can see, I’m getting several "No protocol specified" errors and the display error. When I run echo $DISPLAY in a console when I’m connected through Remote Desktop, it prints :10.0.

When I run the pitft_labeled_output.py through Remote Desktop like:

sudo bash
root@raspberrypi:/home/pi# cd rpi-vision && . .venv/bin/activate
(.venv) root@raspberrypi:/home/pi/rpi-vision# python3 tests/pitft_labeled_output.py

The display is successfully opened and everything works as it should.

However, my program works fine locally or through VNC. When I’m connected through VNC and run echo $DISPLAY, I get :1.0.
What could be the issue that it doesn’t work through Remote Desktop, but pitft_labeled_output.py does?

Here is the code of my program so you can compare it with pitft_labeled_output.py from rpi-vision:

import time
import logging
import argparse
import pygame
import os
import sys
import numpy as np
import subprocess
import signal

# Environment variables for Braincraft HAT.
os.environ['SDL_FBDEV'] = "/dev/fb1"
os.environ['SDL_VIDEODRIVER'] = "fbcon"

def dont_quit(signal, frame):
   print('Caught signal: {}'.format(signal))
signal.signal(signal.SIGHUP, dont_quit)

from capture import PiCameraStream
from tsdar import TrafficSignDetectorAndRecognizer

# Initialize the logger.
logging.basicConfig()
logging.getLogger().setLevel(logging.INFO)

# Initialize the display.
pygame.init()
# Create a Surface object which is shown on the display.
# If size is set to (0,0), the created Surface will have the same size as the
# current screen resolution (240x240 for Braincraft HAT).
screen = pygame.display.set_mode((0,0), pygame.FULLSCREEN)
# Declare the capture manager for Pi Camera.
capture_manager = None

# Function for parsing program arguments.
def parse_args():
    parser = argparse.ArgumentParser()
    parser.add_argument('--rotation', type=int, choices=[0, 90, 180, 270],
                        dest='rotation', action='store', default=0,
                        help='Rotate everything on the display by this angle')
    args = parser.parse_args()
    return args

last_seen = []
already_seen = []

def main(args):
    global capture_manager, last_seen, already_seen

    # Initialize the capture manager to get stream from Pi Camera.
    if screen.get_width() == screen.get_height() or args.roation in (0, 180):
        capture_manager = PiCameraStream(resolution=(max(320, screen.get_width()), max(240, screen.get_height())), rotation=180, preview=False, format='rgb')
    else:
        capture_manager = PiCameraStream(resolution=(max(240, screen.get_height()), max(320, screen.get_width())), rotation=180, preview=False, format='rgb')

    # Initialize the buffer size to screen size.
    if args.rotation in (0, 180):
        buffer = pygame.Surface((screen.get_width(), screen.get_height()))
    else:
        buffer = pygame.Surface((screen.get_height(), screen.get_width()))

    # Hide the mouse from the screen.
    pygame.mouse.set_visible(False)
    # Initialize the screen to black.
    screen.fill((0,0,0))
    # Try to show the splash image on the screen (if the image exists), otherwise, leave screen black.
    try:
        splash = pygame.image.load(os.path.dirname(sys.argv[0])+'/bchatsplash.bmp')
        splash = pygame.transform.rotate(splash, args.rotation)
        screen.blit(splash, ((screen.get_width() / 2) - (splash.get_width() / 2),
                    (screen.get_height() / 2) - (splash.get_height() / 2)))
    except pygame.error:
        pass
    pygame.display.update()

    # Use the default font.
    smallfont = pygame.font.Font(None, 24)
    medfont = pygame.font.Font(None, 36)

    # Initialize the traffic sign detector and recognizer object with the path
    # to the TensorFlow Lite (tflite) neural network model.
    tsdar0 = TrafficSignDetectorAndRecognizer(os.path.dirname(sys.argv[0])+'/models/uw_tsdar_model_no_aug_w_opts.tflite')

    # Start getting capture from Pi Camera.
    capture_manager.start()

    while not capture_manager.stopped:
        # If the frame wasn't captured successfully, go to the next while iteration
        if capture_manager.frame is None:
            continue

        # Fill the buffer with black color
        buffer.fill((0,0,0))

        # Update the frame.
        rgb_frame = capture_manager.frame

        # Make predictions. If traffic signs were detected, a bounding rectangle
        # will be drawn around them.
        timestamp = time.monotonic()
        predictions, out_frame = tsdar0.predict(rgb_frame)
        delta = time.monotonic() - timestamp
        logging.info(predictions)
        logging.info("TFLite inference took %d ms, %0.1f FPS" % (delta * 1000, 1 / delta))

        # Make an image from a frame.
        previewframe = np.ascontiguousarray(out_frame)
        img = pygame.image.frombuffer(previewframe, capture_manager.camera.resolution, 'RGB')

        # Put the image into buffer.
        buffer.blit(img, (0, 0))

        # Add FPS and temperature on the top corner of the buffer.
        fpstext = "%0.1f FPS" % (1/delta,)
        fpstext_surface = smallfont.render(fpstext, True, (255, 0, 0))
        fpstext_position = (buffer.get_width()-10, 10) # Near the top right corner
        buffer.blit(fpstext_surface, fpstext_surface.get_rect(topright=fpstext_position))
        try:
            temp = int(open("/sys/class/thermal/thermal_zone0/temp").read()) / 1000
            temptext = "%dN{DEGREE SIGN}C" % temp
            temptext_surface = smallfont.render(temptext, True, (255, 0, 0))
            temptext_position = (buffer.get_width()-10, 30) # near the top right corner
            buffer.blit(temptext_surface, temptext_surface.get_rect(topright=temptext_position))
        except OSError:
            pass

        # Reset the detecttext vertical position.
        dtvp = 0

        # For each traffic sign that is recognized in the current frame (up to 3 signs),
        # its name will be printed on the screen and it will be announced if it already wasn't.
        for i in range(len(predictions)):
            p = predictions[i];
            name = tsdar0.CLASS_NAMES[p]
            print("Detected", name)

            last_seen.append(name)

            # Render sign name on the bottom of the buffer (if multiple signs detected,
            # current sign name is written above the previous sign name).           .
            detecttext = name
            detecttext_font = medfont
            detecttext_color = (255, 0, 0)
            detecttext_surface = detecttext_font.render(detecttext, True, detecttext_color)
            dtvp = buffer.get_height() - (i+1)*(detecttext_font.size(detecttext)[1]) - i*detecttext_font.size(detecttext)[1]//2
            detecttext_position = (buffer.get_width()//2, dtvp)
            buffer.blit(detecttext_surface, detecttext_surface.get_rect(center=detecttext_position))

            # Make an announcement for the traffic sign if it's new (not detected in previous consecutive frames).
            if detecttext not in already_seen:
                os.system('echo %s | festival --tts & ' % detecttext)

        # If new traffic signs were detected in the current frame, add them to already_seen list
        for ts in last_seen:
            if ts not in already_seen:
                already_seen.append(ts)

        # If the traffic sign disappeared from the frame (a car passed it), remove it from already_seen
        diff = list(set(already_seen)-set(last_seen))
        already_seen = [ts for ts in already_seen if ts not in diff]

        # Reset last_seen.
        last_seen = []

        # Show the buffer image on the screen.
        screen.blit(pygame.transform.rotate(buffer, args.rotation), (0,0))
        pygame.display.update()

# Run the program until it is interrupted from the keyboard (Ctrl+C).
if __name__ == "__main__":
    args = parse_args()
    try:
        main(args)
    except KeyboardInterrupt:
        capture_manager.stop()

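The announcement bookkeeping in the loop above (`last_seen` / `already_seen`) is easy to lose track of inline, so here is the same logic isolated as a standalone sketch. The helper name `update_seen` is mine, not something defined in the program:

```python
def update_seen(already_seen, last_seen):
    """Replicates the per-frame bookkeeping from the main loop above:
    newly detected signs are added to already_seen (so each sign is only
    announced once), and signs that left the frame are dropped (so they
    can be announced again the next time they appear)."""
    # Add signs detected this frame that we haven't tracked yet.
    for ts in last_seen:
        if ts not in already_seen:
            already_seen.append(ts)
    # Drop signs that were being tracked but are no longer in the frame.
    gone = set(already_seen) - set(last_seen)
    return [ts for ts in already_seen if ts not in gone]

# A sign is only "new" on the first frame it appears in:
assert update_seen([], ["stop"]) == ["stop"]
assert update_seen(["stop"], ["stop", "yield"]) == ["stop", "yield"]
# Once a sign leaves the frame it is forgotten and can be re-announced:
assert update_seen(["stop", "yield"], ["yield"]) == ["yield"]
```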
Edit:
To clarify a bit more: I first followed this tutorial to install my Braincraft HAT, and then followed this one to try out the object recognition test example (pitft_labeled_output.py) from rpi-vision. Everything worked fine over SSH: I saw the logging info in the SSH console, and the camera feed and recognized objects on the Braincraft HAT display. I then tried it from Windows Remote Desktop (after installing xrdp on the RPi) and it also worked: logging info in the terminal, camera feed on the Braincraft display. But when I ran my program instead of pitft_labeled_output.py, I got the errors mentioned above. I even went further and replaced the pitft_labeled_output.py code with my code (dar_ts.py) and ran it as if I were running pitft_labeled_output.py (I thought there might be some dependencies inside the rpi-vision folder), but that didn't work either; I got the same error. What could be the issue with my code?
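Since the program targets the framebuffer (`SDL_VIDEODRIVER=fbcon`) rather than X, the "No protocol specified" / "cannot open display" messages suggest something in the process is still talking to the X server, whose authority setup differs between the xrdp session (`:10.0`) and VNC (`:1.0`). A small diagnostic sketch for dumping the relevant environment before `pygame.init()`; the interpretation in the comments is my assumption, not a confirmed diagnosis:

```python
import os

def describe_display_env(environ=os.environ):
    """Collect the variables SDL and X11 clients consult when opening a
    display. The variable names are standard; what they imply about the
    xrdp failure is an assumption."""
    summary = {
        "DISPLAY": environ.get("DISPLAY"),        # ":10.0" under xrdp, ":1.0" under VNC
        "XAUTHORITY": environ.get("XAUTHORITY"),  # often unset for root under `sudo bash`
        "SDL_VIDEODRIVER": environ.get("SDL_VIDEODRIVER"),
        "SDL_FBDEV": environ.get("SDL_FBDEV"),
    }
    # With SDL_VIDEODRIVER=fbcon, the X variables should not matter to SDL
    # itself, so an X error points at another library (e.g. GTK) in the process.
    uses_framebuffer = summary["SDL_VIDEODRIVER"] == "fbcon"
    return summary, uses_framebuffer
```

Calling `describe_display_env()` just before `pygame.init()` in both the VNC and xrdp sessions would show whether the environment the program actually sees differs between the two.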

P.S. What confused me further is that pitft_labeled_output.py has a typo in line 56 yet runs fine anyway, but when I ran my code for the first time, Python asked me to correct that error.


Get this bounty!!!

#StackBounty: #ssh #pygame #framebuffer #computer-vision #xrdp Python pygame program won't run through SSH / Remote Desktop

Bounty: 50

I’ve been working on my own object recognition program based on the rpi-vision test program pitft_labeled_output.py (from this webpage). It’s basically a custom neural network model and a bit modified code for the new model to work. However, I’m having problems running my program through Remote Desktop. This is how I run my program (using venv from Graphic Labeling Demo):

pi@raspberrypi:~ $ sudo bash
root@raspberrypi:/home/pi# cd signcap && . ../rpi-vision/.venv/bin/activate
(.venv) root@raspberrypi:/home/pi/signcap# python3 src/dar_ts.py
pygame 1.9.6
Hello from the pygame community. https://www.pygame.org/contribute.html
No protocol specified
No protocol specified
No protocol specified
xcb_connection_has_error() returned true
No protocol specified
xcb_connection_has_error() returned true
INFO: Created TensorFlow Lite XNNPACK delegate for CPU.
No protocol specified
Unable to init server: Could not connect: Connection refused

(Mask:1359): Gtk-WARNING **: 20:00:31.012: cannot open display: :10.0

As you can see, I’m getting several "No protocol specified" errors and the display error. When I run echo $DISPLAY in a console when I’m connected through Remote Desktop, it prints :10.0.

When I run the pitft_labeled_output.py through Remote Desktop like:

sudo bash
root@raspberrypi:/home/pi# cd rpi-vision && . .venv/bin/activate
(.venv) root@raspberrypi:/home/pi/rpi-vision# python3 tests/pitft_labeled_output.py

The display is successfully opened and everything works as it should.

However, my program works fine locally or through VNC. When I’m connected through VNC and run echo $DISPLAY, I get :1.0.
What could be the issue that it doesn’t work through Remote Desktop, but pitft_labeled_output.py does?

Here is the code of my program so you can compare it with pitft_labeled_output.py from rpi-vision:

import time
import logging
import argparse
import pygame
import os
import sys
import numpy as np
import subprocess
import signal

# Environment variables for Braincraft HAT.
os.environ['SDL_FBDEV'] = "/dev/fb1"
os.environ['SDL_VIDEODRIVER'] = "fbcon"

def dont_quit(signal, frame):
   print('Caught signal: {}'.format(signal))
signal.signal(signal.SIGHUP, dont_quit)

from capture import PiCameraStream
from tsdar import TrafficSignDetectorAndRecognizer

# Initialize the logger.
logging.basicConfig()
logging.getLogger().setLevel(logging.INFO)

# Initialize the display.
pygame.init()
# Create a Surface object which is shown on the display.
# If size is set to (0,0), the created Surface will have the same size as the
# current screen resolution (240x240 for Braincraft HAT).
screen = pygame.display.set_mode((0,0), pygame.FULLSCREEN)
# Declare the capture manager for Pi Camera.
capture_manager = None

# Function for parsing program arguments.
def parse_args():
    parser = argparse.ArgumentParser()
    parser.add_argument('--rotation', type=int, choices=[0, 90, 180, 270],
                        dest='rotation', action='store', default=0,
                        help='Rotate everything on the display by this angle')
    args = parser.parse_args()
    return args

last_seen = []
already_seen = []

def main(args):
    global capture_manager, last_seen, already_seen

    # Initialize the capture manager to get stream from Pi Camera.
    if screen.get_width() == screen.get_height() or args.roation in (0, 180):
        capture_manager = PiCameraStream(resolution=(max(320, screen.get_width()), max(240, screen.get_height())), rotation=180, preview=False, format='rgb')
    else:
        capture_manager = PiCameraStream(resolution=(max(240, screen.get_height()), max(320, screen.get_width())), rotation=180, preview=False, format='rgb')

    # Initialize the buffer size to screen size.
    if args.rotation in (0, 180):
        buffer = pygame.Surface((screen.get_width(), screen.get_height()))
    else:
        buffer = pygame.Surface((screen.get_height(), screen.get_width()))

    # Hide the mouse from the screen.
    pygame.mouse.set_visible(False)
    # Initialize the screen to black.
    screen.fill((0,0,0))
    # Try to show the splash image on the screen (if the image exists), otherwise, leave screen black.
    try:
        splash = pygame.image.load(os.path.dirname(sys.argv[0])+'/bchatsplash.bmp')
        splash = pygame.transform.rotate(splash, args.rotation)
        screen.blit(splash, ((screen.get_width() / 2) - (splash.get_width() / 2),
                    (screen.get_height() / 2) - (splash.get_height() / 2)))
    except pygame.error:
        pass
    pygame.display.update()

    # Use the default font.
    smallfont = pygame.font.Font(None, 24)
    medfont = pygame.font.Font(None, 36)

    # Initialize the traffic sign detector and recognizer object with the path
    # to the TensorFlow Lite (tflite) neural network model.
    tsdar0 = TrafficSignDetectorAndRecognizer(os.path.dirname(sys.argv[0])+'/models/uw_tsdar_model_no_aug_w_opts.tflite')

    # Start getting capture from Pi Camera.
    capture_manager.start()

    while not capture_manager.stopped:
        # If the frame wasn't captured successfully, go to the next while iteration
        if capture_manager.frame is None:
            continue

        # Fill the buffer with black color
        buffer.fill((0,0,0))

        # Update the frame.
        rgb_frame = capture_manager.frame

        # Make predictions. If traffic signs were detected, a bounding rectangle
        # will be drawn around them.
        timestamp = time.monotonic()
        predictions, out_frame = tsdar0.predict(rgb_frame)
        delta = time.monotonic() - timestamp
        logging.info(predictions)
        logging.info("TFLite inference took %d ms, %0.1f FPS" % (delta * 1000, 1 / delta))

        # Make an image from a frame.
        previewframe = np.ascontiguousarray(out_frame)
        img = pygame.image.frombuffer(previewframe, capture_manager.camera.resolution, 'RGB')

        # Put the image into buffer.
        buffer.blit(img, (0, 0))

        # Add FPS and temperature on the top corner of the buffer.
        fpstext = "%0.1f FPS" % (1/delta,)
        fpstext_surface = smallfont.render(fpstext, True, (255, 0, 0))
        fpstext_position = (buffer.get_width()-10, 10) # Near the top right corner
        buffer.blit(fpstext_surface, fpstext_surface.get_rect(topright=fpstext_position))
        try:
            temp = int(open("/sys/class/thermal/thermal_zone0/temp").read()) / 1000
            temptext = "%dN{DEGREE SIGN}C" % temp
            temptext_surface = smallfont.render(temptext, True, (255, 0, 0))
            temptext_position = (buffer.get_width()-10, 30) # near the top right corner
            buffer.blit(temptext_surface, temptext_surface.get_rect(topright=temptext_position))
        except OSError:
            pass

        # Reset the detecttext vertical position.
        dtvp = 0

        # For each traffic sign that is recognized in the current frame (up to 3 signs),
        # its name will be printed on the screen and it will be announced if it already wasn't.
        for i in range(len(predictions)):
            p = predictions[i];
            name = tsdar0.CLASS_NAMES[p]
            print("Detected", name)

            last_seen.append(name)

            # Render sign name on the bottom of the buffer (if multiple signs detected,
            # current sign name is written above the previous sign name).           .
            detecttext = name
            detecttext_font = medfont
            detecttext_color = (255, 0, 0)
            detecttext_surface = detecttext_font.render(detecttext, True, detecttext_color)
            dtvp = buffer.get_height() - (i+1)*(detecttext_font.size(detecttext)[1]) - i*detecttext_font.size(detecttext)[1]//2
            detecttext_position = (buffer.get_width()//2, dtvp)
            buffer.blit(detecttext_surface, detecttext_surface.get_rect(center=detecttext_position))

            # Make an announcement for the traffic sign if it's new (not detected in previous consecutive frames).
            if detecttext not in already_seen:
                os.system('echo %s | festival --tts & ' % detecttext)

        # If new traffic signs were detected in the current frame, add them to already_seen list
        for ts in last_seen:
            if ts not in already_seen:
                already_seen.append(ts)

        # If the traffic sign disappeared from the frame (a car passed it), remove it from already_seen
        diff = list(set(already_seen)-set(last_seen))
        already_seen = [ts for ts in already_seen if ts not in diff]

        # Reset last_seen.
        last_seen = []

        # Show the buffer image on the screen.
        screen.blit(pygame.transform.rotate(buffer, args.rotation), (0,0))
        pygame.display.update()

# Run the program until it's interrupted by key press.
if __name__ == "__main__":
    args = parse_args()
    try:
        main(args)
    except KeyboardInterrupt:
        capture_manager.stop()

Edit:
To clarify a bit more, I first followed this tutorial to install my Braincraft HAT and then followed this one to try out the object recognition test example (pitft_labeled_output.py) from rpi-vision. Everything worked great through SSH. I saw logging info in the SSH console and the camera feed and recognized objects on the Braincraftt HAT display. Then I decided to try it out from Windows Remote Desktop (after installing xrdp on RPi) and it worked great. I saw logging info in terminal and camera feed on Braincraft display. But, when I wanted to run my program instead of pitft_labeled_output.py, I received the errors mentioned above. I even went further and replaced pitft_labeled_output.py code with my code(dar_ts.py) and ran my code as if I was running pitft_labeled_output.py (thought that there might be some dependencies inside rpi-vision folder), but it didn’t work, received the same error. What could be the issue with my code?

P.S. What also confused me further is that pitft_labeled_output.py has a typo in line 56 and runs fine anyway, but when I ran my code for the first time, it asked me to correct the error.
enter image description here


Get this bounty!!!

#StackBounty: #ssh #pygame #framebuffer #computer-vision #xrdp Python pygame program won't run through SSH / Remote Desktop

Bounty: 50

I’ve been working on my own object recognition program based on the rpi-vision test program pitft_labeled_output.py (from this webpage). It’s basically a custom neural network model and a bit modified code for the new model to work. However, I’m having problems running my program through Remote Desktop. This is how I run my program (using venv from Graphic Labeling Demo):

pi@raspberrypi:~ $ sudo bash
root@raspberrypi:/home/pi# cd signcap && . ../rpi-vision/.venv/bin/activate
(.venv) root@raspberrypi:/home/pi/signcap# python3 src/dar_ts.py
pygame 1.9.6
Hello from the pygame community. https://www.pygame.org/contribute.html
No protocol specified
No protocol specified
No protocol specified
xcb_connection_has_error() returned true
No protocol specified
xcb_connection_has_error() returned true
INFO: Created TensorFlow Lite XNNPACK delegate for CPU.
No protocol specified
Unable to init server: Could not connect: Connection refused

(Mask:1359): Gtk-WARNING **: 20:00:31.012: cannot open display: :10.0

As you can see, I’m getting several "No protocol specified" errors and the display error. When I run echo $DISPLAY in a console when I’m connected through Remote Desktop, it prints :10.0.

When I run the pitft_labeled_output.py through Remote Desktop like:

sudo bash
root@raspberrypi:/home/pi# cd rpi-vision && . .venv/bin/activate
(.venv) root@raspberrypi:/home/pi/rpi-vision# python3 tests/pitft_labeled_output.py

The display is successfully opened and everything works as it should.

However, my program works fine locally or through VNC. When I’m connected through VNC and run echo $DISPLAY, I get :1.0.
What could be the issue that it doesn’t work through Remote Desktop, but pitft_labeled_output.py does?

Here is the code of my program so you can compare it with pitft_labeled_output.py from rpi-vision:

import time
import logging
import argparse
import pygame
import os
import sys
import numpy as np
import subprocess
import signal

# Environment variables for Braincraft HAT.
os.environ['SDL_FBDEV'] = "/dev/fb1"
os.environ['SDL_VIDEODRIVER'] = "fbcon"

def dont_quit(signal, frame):
   print('Caught signal: {}'.format(signal))
signal.signal(signal.SIGHUP, dont_quit)

from capture import PiCameraStream
from tsdar import TrafficSignDetectorAndRecognizer

# Initialize the logger.
logging.basicConfig()
logging.getLogger().setLevel(logging.INFO)

# Initialize the display.
pygame.init()
# Create a Surface object which is shown on the display.
# If size is set to (0,0), the created Surface will have the same size as the
# current screen resolution (240x240 for Braincraft HAT).
screen = pygame.display.set_mode((0,0), pygame.FULLSCREEN)
# Declare the capture manager for Pi Camera.
capture_manager = None

# Function for parsing program arguments.
def parse_args():
    parser = argparse.ArgumentParser()
    parser.add_argument('--rotation', type=int, choices=[0, 90, 180, 270],
                        dest='rotation', action='store', default=0,
                        help='Rotate everything on the display by this angle')
    args = parser.parse_args()
    return args

last_seen = []
already_seen = []

def main(args):
    global capture_manager, last_seen, already_seen

    # Initialize the capture manager to get stream from Pi Camera.
    if screen.get_width() == screen.get_height() or args.roation in (0, 180):
        capture_manager = PiCameraStream(resolution=(max(320, screen.get_width()), max(240, screen.get_height())), rotation=180, preview=False, format='rgb')
    else:
        capture_manager = PiCameraStream(resolution=(max(240, screen.get_height()), max(320, screen.get_width())), rotation=180, preview=False, format='rgb')

    # Initialize the buffer size to screen size.
    if args.rotation in (0, 180):
        buffer = pygame.Surface((screen.get_width(), screen.get_height()))
    else:
        buffer = pygame.Surface((screen.get_height(), screen.get_width()))

    # Hide the mouse from the screen.
    pygame.mouse.set_visible(False)
    # Initialize the screen to black.
    screen.fill((0,0,0))
    # Try to show the splash image on the screen (if the image exists), otherwise, leave screen black.
    try:
        splash = pygame.image.load(os.path.dirname(sys.argv[0])+'/bchatsplash.bmp')
        splash = pygame.transform.rotate(splash, args.rotation)
        screen.blit(splash, ((screen.get_width() / 2) - (splash.get_width() / 2),
                    (screen.get_height() / 2) - (splash.get_height() / 2)))
    except pygame.error:
        pass
    pygame.display.update()

    # Use the default font.
    smallfont = pygame.font.Font(None, 24)
    medfont = pygame.font.Font(None, 36)

    # Initialize the traffic sign detector and recognizer object with the path
    # to the TensorFlow Lite (tflite) neural network model.
    tsdar0 = TrafficSignDetectorAndRecognizer(os.path.dirname(sys.argv[0])+'/models/uw_tsdar_model_no_aug_w_opts.tflite')

    # Start getting capture from Pi Camera.
    capture_manager.start()

    while not capture_manager.stopped:
        # If the frame wasn't captured successfully, go to the next while iteration
        if capture_manager.frame is None:
            continue

        # Fill the buffer with black color
        buffer.fill((0,0,0))

        # Update the frame.
        rgb_frame = capture_manager.frame

        # Make predictions. If traffic signs were detected, a bounding rectangle
        # will be drawn around them.
        timestamp = time.monotonic()
        predictions, out_frame = tsdar0.predict(rgb_frame)
        delta = time.monotonic() - timestamp
        logging.info(predictions)
        logging.info("TFLite inference took %d ms, %0.1f FPS" % (delta * 1000, 1 / delta))

        # Make an image from a frame.
        previewframe = np.ascontiguousarray(out_frame)
        img = pygame.image.frombuffer(previewframe, capture_manager.camera.resolution, 'RGB')

        # Put the image into buffer.
        buffer.blit(img, (0, 0))

        # Add FPS and temperature on the top corner of the buffer.
        fpstext = "%0.1f FPS" % (1/delta,)
        fpstext_surface = smallfont.render(fpstext, True, (255, 0, 0))
        fpstext_position = (buffer.get_width()-10, 10) # Near the top right corner
        buffer.blit(fpstext_surface, fpstext_surface.get_rect(topright=fpstext_position))
        try:
            temp = int(open("/sys/class/thermal/thermal_zone0/temp").read()) / 1000
            temptext = "%dN{DEGREE SIGN}C" % temp
            temptext_surface = smallfont.render(temptext, True, (255, 0, 0))
            temptext_position = (buffer.get_width()-10, 30) # near the top right corner
            buffer.blit(temptext_surface, temptext_surface.get_rect(topright=temptext_position))
        except OSError:
            pass

        # Reset the detecttext vertical position.
        dtvp = 0

        # For each traffic sign that is recognized in the current frame (up to 3 signs),
        # its name will be printed on the screen and it will be announced if it already wasn't.
        for i in range(len(predictions)):
            p = predictions[i];
            name = tsdar0.CLASS_NAMES[p]
            print("Detected", name)

            last_seen.append(name)

            # Render sign name on the bottom of the buffer (if multiple signs detected,
            # current sign name is written above the previous sign name).           .
            detecttext = name
            detecttext_font = medfont
            detecttext_color = (255, 0, 0)
            detecttext_surface = detecttext_font.render(detecttext, True, detecttext_color)
            dtvp = buffer.get_height() - (i+1)*(detecttext_font.size(detecttext)[1]) - i*detecttext_font.size(detecttext)[1]//2
            detecttext_position = (buffer.get_width()//2, dtvp)
            buffer.blit(detecttext_surface, detecttext_surface.get_rect(center=detecttext_position))

            # Make an announcement for the traffic sign if it's new (not detected in previous consecutive frames).
            if detecttext not in already_seen:
                os.system('echo %s | festival --tts & ' % detecttext)

        # If new traffic signs were detected in the current frame, add them to already_seen list
        for ts in last_seen:
            if ts not in already_seen:
                already_seen.append(ts)

        # If the traffic sign disappeared from the frame (a car passed it), remove it from already_seen
        diff = list(set(already_seen)-set(last_seen))
        already_seen = [ts for ts in already_seen if ts not in diff]

        # Reset last_seen.
        last_seen = []

        # Show the buffer image on the screen.
        screen.blit(pygame.transform.rotate(buffer, args.rotation), (0,0))
        pygame.display.update()

# Run the program until it's interrupted by key press.
if __name__ == "__main__":
    args = parse_args()
    try:
        main(args)
    except KeyboardInterrupt:
        capture_manager.stop()

Edit:
To clarify a bit more, I first followed this tutorial to install my Braincraft HAT and then followed this one to try out the object recognition test example (pitft_labeled_output.py) from rpi-vision. Everything worked great through SSH. I saw logging info in the SSH console and the camera feed and recognized objects on the Braincraftt HAT display. Then I decided to try it out from Windows Remote Desktop (after installing xrdp on RPi) and it worked great. I saw logging info in terminal and camera feed on Braincraft display. But, when I wanted to run my program instead of pitft_labeled_output.py, I received the errors mentioned above. I even went further and replaced pitft_labeled_output.py code with my code(dar_ts.py) and ran my code as if I was running pitft_labeled_output.py (thought that there might be some dependencies inside rpi-vision folder), but it didn’t work, received the same error. What could be the issue with my code?

P.S. What also confused me further is that pitft_labeled_output.py has a typo in line 56 and runs fine anyway, but when I ran my code for the first time, it asked me to correct the error.
enter image description here


Get this bounty!!!

#StackBounty: #ssh #pygame #framebuffer #computer-vision #xrdp Python pygame program won't run through SSH / Remote Desktop

Bounty: 50

I’ve been working on my own object recognition program based on the rpi-vision test program pitft_labeled_output.py (from this webpage). It’s basically a custom neural network model and a bit modified code for the new model to work. However, I’m having problems running my program through Remote Desktop. This is how I run my program (using venv from Graphic Labeling Demo):

pi@raspberrypi:~ $ sudo bash
root@raspberrypi:/home/pi# cd signcap && . ../rpi-vision/.venv/bin/activate
(.venv) root@raspberrypi:/home/pi/signcap# python3 src/dar_ts.py
pygame 1.9.6
Hello from the pygame community. https://www.pygame.org/contribute.html
No protocol specified
No protocol specified
No protocol specified
xcb_connection_has_error() returned true
No protocol specified
xcb_connection_has_error() returned true
INFO: Created TensorFlow Lite XNNPACK delegate for CPU.
No protocol specified
Unable to init server: Could not connect: Connection refused

(Mask:1359): Gtk-WARNING **: 20:00:31.012: cannot open display: :10.0

As you can see, I’m getting several "No protocol specified" errors and the display error. When I run echo $DISPLAY in a console when I’m connected through Remote Desktop, it prints :10.0.

When I run the pitft_labeled_output.py through Remote Desktop like:

sudo bash
root@raspberrypi:/home/pi# cd rpi-vision && . .venv/bin/activate
(.venv) root@raspberrypi:/home/pi/rpi-vision# python3 tests/pitft_labeled_output.py

The display is successfully opened and everything works as it should.

However, my program works fine locally or through VNC. When I’m connected through VNC and run echo $DISPLAY, I get :1.0.
What could be the issue that it doesn’t work through Remote Desktop, but pitft_labeled_output.py does?

Here is the code of my program so you can compare it with pitft_labeled_output.py from rpi-vision:

import time
import logging
import argparse
import pygame
import os
import sys
import numpy as np
import subprocess
import signal

# Environment variables for Braincraft HAT.
os.environ['SDL_FBDEV'] = "/dev/fb1"
os.environ['SDL_VIDEODRIVER'] = "fbcon"

def dont_quit(signal, frame):
   print('Caught signal: {}'.format(signal))
signal.signal(signal.SIGHUP, dont_quit)

from capture import PiCameraStream
from tsdar import TrafficSignDetectorAndRecognizer

# Initialize the logger.
logging.basicConfig()
logging.getLogger().setLevel(logging.INFO)

# Initialize the display.
pygame.init()
# Create a Surface object which is shown on the display.
# If size is set to (0,0), the created Surface will have the same size as the
# current screen resolution (240x240 for Braincraft HAT).
screen = pygame.display.set_mode((0,0), pygame.FULLSCREEN)
# Declare the capture manager for Pi Camera.
capture_manager = None

# Function for parsing program arguments.
def parse_args():
    parser = argparse.ArgumentParser()
    parser.add_argument('--rotation', type=int, choices=[0, 90, 180, 270],
                        dest='rotation', action='store', default=0,
                        help='Rotate everything on the display by this angle')
    args = parser.parse_args()
    return args

last_seen = []
already_seen = []

def main(args):
    global capture_manager, last_seen, already_seen

    # Initialize the capture manager to get stream from Pi Camera.
    if screen.get_width() == screen.get_height() or args.roation in (0, 180):
        capture_manager = PiCameraStream(resolution=(max(320, screen.get_width()), max(240, screen.get_height())), rotation=180, preview=False, format='rgb')
    else:
        capture_manager = PiCameraStream(resolution=(max(240, screen.get_height()), max(320, screen.get_width())), rotation=180, preview=False, format='rgb')

    # Initialize the buffer size to screen size.
    if args.rotation in (0, 180):
        buffer = pygame.Surface((screen.get_width(), screen.get_height()))
    else:
        buffer = pygame.Surface((screen.get_height(), screen.get_width()))

    # Hide the mouse from the screen.
    pygame.mouse.set_visible(False)
    # Initialize the screen to black.
    screen.fill((0,0,0))
    # Try to show the splash image on the screen (if the image exists), otherwise, leave screen black.
    try:
        splash = pygame.image.load(os.path.dirname(sys.argv[0])+'/bchatsplash.bmp')
        splash = pygame.transform.rotate(splash, args.rotation)
        screen.blit(splash, ((screen.get_width() / 2) - (splash.get_width() / 2),
                    (screen.get_height() / 2) - (splash.get_height() / 2)))
    except pygame.error:
        pass
    pygame.display.update()

    # Use the default font.
    smallfont = pygame.font.Font(None, 24)
    medfont = pygame.font.Font(None, 36)

    # Initialize the traffic sign detector and recognizer object with the path
    # to the TensorFlow Lite (tflite) neural network model.
    tsdar0 = TrafficSignDetectorAndRecognizer(os.path.dirname(sys.argv[0])+'/models/uw_tsdar_model_no_aug_w_opts.tflite')

    # Start getting capture from Pi Camera.
    capture_manager.start()

    while not capture_manager.stopped:
        # If the frame wasn't captured successfully, go to the next while iteration
        if capture_manager.frame is None:
            continue

        # Fill the buffer with black color
        buffer.fill((0,0,0))

        # Update the frame.
        rgb_frame = capture_manager.frame

        # Make predictions. If traffic signs were detected, a bounding rectangle
        # will be drawn around them.
        timestamp = time.monotonic()
        predictions, out_frame = tsdar0.predict(rgb_frame)
        delta = time.monotonic() - timestamp
        logging.info(predictions)
        logging.info("TFLite inference took %d ms, %0.1f FPS" % (delta * 1000, 1 / delta))

        # Make an image from a frame.
        previewframe = np.ascontiguousarray(out_frame)
        img = pygame.image.frombuffer(previewframe, capture_manager.camera.resolution, 'RGB')

        # Put the image into buffer.
        buffer.blit(img, (0, 0))

        # Add FPS and temperature on the top corner of the buffer.
        fpstext = "%0.1f FPS" % (1/delta,)
        fpstext_surface = smallfont.render(fpstext, True, (255, 0, 0))
        fpstext_position = (buffer.get_width()-10, 10) # Near the top right corner
        buffer.blit(fpstext_surface, fpstext_surface.get_rect(topright=fpstext_position))
        try:
            temp = int(open("/sys/class/thermal/thermal_zone0/temp").read()) / 1000
            temptext = "%dN{DEGREE SIGN}C" % temp
            temptext_surface = smallfont.render(temptext, True, (255, 0, 0))
            temptext_position = (buffer.get_width()-10, 30) # near the top right corner
            buffer.blit(temptext_surface, temptext_surface.get_rect(topright=temptext_position))
        except OSError:
            pass

        # Reset the detecttext vertical position.
        dtvp = 0

        # For each traffic sign that is recognized in the current frame (up to 3 signs),
        # its name will be printed on the screen and it will be announced if it already wasn't.
        for i in range(len(predictions)):
            p = predictions[i];
            name = tsdar0.CLASS_NAMES[p]
            print("Detected", name)

            last_seen.append(name)

            # Render sign name on the bottom of the buffer (if multiple signs detected,
            # current sign name is written above the previous sign name).           .
            detecttext = name
            detecttext_font = medfont
            detecttext_color = (255, 0, 0)
            detecttext_surface = detecttext_font.render(detecttext, True, detecttext_color)
            dtvp = buffer.get_height() - (i+1)*(detecttext_font.size(detecttext)[1]) - i*detecttext_font.size(detecttext)[1]//2
            detecttext_position = (buffer.get_width()//2, dtvp)
            buffer.blit(detecttext_surface, detecttext_surface.get_rect(center=detecttext_position))

#StackBounty: #ssh #pygame #framebuffer #computer-vision #xrdp Python pygame program won't run through SSH / Remote Desktop

Bounty: 50

I’ve been working on my own object recognition program based on the rpi-vision test program pitft_labeled_output.py (from this webpage). It’s basically a custom neural network model plus slightly modified code to make the new model work. However, I’m having problems running my program through Remote Desktop. This is how I run it (using the venv from the Graphic Labeling Demo):

pi@raspberrypi:~ $ sudo bash
root@raspberrypi:/home/pi# cd signcap && . ../rpi-vision/.venv/bin/activate
(.venv) root@raspberrypi:/home/pi/signcap# python3 src/dar_ts.py
pygame 1.9.6
Hello from the pygame community. https://www.pygame.org/contribute.html
No protocol specified
No protocol specified
No protocol specified
xcb_connection_has_error() returned true
No protocol specified
xcb_connection_has_error() returned true
INFO: Created TensorFlow Lite XNNPACK delegate for CPU.
No protocol specified
Unable to init server: Could not connect: Connection refused

(Mask:1359): Gtk-WARNING **: 20:00:31.012: cannot open display: :10.0

As you can see, I’m getting several "No protocol specified" errors and a display error. When I run echo $DISPLAY in a console while connected through Remote Desktop, it prints :10.0.
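For context on those errors: "No protocol specified" is what an X server prints when it refuses a client that presents no valid authorization cookie, which commonly happens when a program started under sudo bash inherits the user's DISPLAY but not an XAUTHORITY pointing at the user's cookie file. A minimal sketch for checking both from Python (the x_client_env helper is mine, just for illustration):

```python
import os

def x_client_env(environ=None):
    """Gather the environment an X client relies on. Under 'sudo bash' the
    root shell often keeps the user's DISPLAY but loses XAUTHORITY, and the
    X server then rejects the connection with 'No protocol specified'."""
    environ = os.environ if environ is None else environ
    return {
        "DISPLAY": environ.get("DISPLAY"),        # e.g. ":10.0" under xrdp
        "XAUTHORITY": environ.get("XAUTHORITY"),  # None means no cookie file
    }

if __name__ == "__main__":
    print(x_client_env())
```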

When I run the pitft_labeled_output.py through Remote Desktop like:

sudo bash
root@raspberrypi:/home/pi# cd rpi-vision && . .venv/bin/activate
(.venv) root@raspberrypi:/home/pi/rpi-vision# python3 tests/pitft_labeled_output.py

The display is successfully opened and everything works as it should.

However, my program works fine locally or through VNC. When I’m connected through VNC and run echo $DISPLAY, I get :1.0.
What could be the reason that my program doesn’t work through Remote Desktop while pitft_labeled_output.py does?
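One difference worth noting between the two environments: the script below pins SDL to the framebuffer (SDL_FBDEV / SDL_VIDEODRIVER) before pygame is initialized, while a Remote Desktop session provides an X display instead. A hedged sketch of choosing the target at startup, assuming the Braincraft HAT framebuffer is /dev/fb1 (the select_sdl_target helper is hypothetical, not part of rpi-vision):

```python
import os

def select_sdl_target(environ=os.environ):
    """Prefer the X display of the current session (VNC, xrdp, local desktop);
    fall back to the raw framebuffer only when no DISPLAY is set."""
    if environ.get("DISPLAY"):
        # An X session exists: let SDL use its default X11 driver.
        environ.pop("SDL_VIDEODRIVER", None)
        environ.pop("SDL_FBDEV", None)
        return "x11"
    # Headless boot: draw straight to the Braincraft HAT framebuffer.
    environ["SDL_FBDEV"] = "/dev/fb1"
    environ["SDL_VIDEODRIVER"] = "fbcon"
    return "fbcon"
```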

Here is the code of my program so you can compare it with pitft_labeled_output.py from rpi-vision:

import time
import logging
import argparse
import pygame
import os
import sys
import numpy as np
import subprocess
import signal

# Environment variables for Braincraft HAT.
os.environ['SDL_FBDEV'] = "/dev/fb1"
os.environ['SDL_VIDEODRIVER'] = "fbcon"

def dont_quit(signal, frame):
    print('Caught signal: {}'.format(signal))
signal.signal(signal.SIGHUP, dont_quit)

from capture import PiCameraStream
from tsdar import TrafficSignDetectorAndRecognizer

# Initialize the logger.
logging.basicConfig()
logging.getLogger().setLevel(logging.INFO)

# Initialize the display.
pygame.init()
# Create a Surface object which is shown on the display.
# If size is set to (0,0), the created Surface will have the same size as the
# current screen resolution (240x240 for Braincraft HAT).
screen = pygame.display.set_mode((0,0), pygame.FULLSCREEN)
# Declare the capture manager for Pi Camera.
capture_manager = None

# Function for parsing program arguments.
def parse_args():
    parser = argparse.ArgumentParser()
    parser.add_argument('--rotation', type=int, choices=[0, 90, 180, 270],
                        dest='rotation', action='store', default=0,
                        help='Rotate everything on the display by this angle')
    args = parser.parse_args()
    return args

last_seen = []
already_seen = []

def main(args):
    global capture_manager, last_seen, already_seen

    # Initialize the capture manager to get stream from Pi Camera.
    if screen.get_width() == screen.get_height() or args.rotation in (0, 180):
        capture_manager = PiCameraStream(resolution=(max(320, screen.get_width()), max(240, screen.get_height())), rotation=180, preview=False, format='rgb')
    else:
        capture_manager = PiCameraStream(resolution=(max(240, screen.get_height()), max(320, screen.get_width())), rotation=180, preview=False, format='rgb')

    # Initialize the buffer size to screen size.
    if args.rotation in (0, 180):
        buffer = pygame.Surface((screen.get_width(), screen.get_height()))
    else:
        buffer = pygame.Surface((screen.get_height(), screen.get_width()))

    # Hide the mouse from the screen.
    pygame.mouse.set_visible(False)
    # Initialize the screen to black.
    screen.fill((0,0,0))
    # Try to show the splash image on the screen (if the image exists), otherwise, leave screen black.
    try:
        splash = pygame.image.load(os.path.dirname(sys.argv[0])+'/bchatsplash.bmp')
        splash = pygame.transform.rotate(splash, args.rotation)
        screen.blit(splash, ((screen.get_width() / 2) - (splash.get_width() / 2),
                    (screen.get_height() / 2) - (splash.get_height() / 2)))
    except pygame.error:
        pass
    pygame.display.update()

    # Use the default font.
    smallfont = pygame.font.Font(None, 24)
    medfont = pygame.font.Font(None, 36)

    # Initialize the traffic sign detector and recognizer object with the path
    # to the TensorFlow Lite (tflite) neural network model.
    tsdar0 = TrafficSignDetectorAndRecognizer(os.path.dirname(sys.argv[0])+'/models/uw_tsdar_model_no_aug_w_opts.tflite')

    # Start getting capture from Pi Camera.
    capture_manager.start()

    while not capture_manager.stopped:
        # If the frame wasn't captured successfully, go to the next while iteration
        if capture_manager.frame is None:
            continue

        # Fill the buffer with black color
        buffer.fill((0,0,0))

        # Update the frame.
        rgb_frame = capture_manager.frame

        # Make predictions. If traffic signs were detected, a bounding rectangle
        # will be drawn around them.
        timestamp = time.monotonic()
        predictions, out_frame = tsdar0.predict(rgb_frame)
        delta = time.monotonic() - timestamp
        logging.info(predictions)
        logging.info("TFLite inference took %d ms, %0.1f FPS" % (delta * 1000, 1 / delta))

        # Make an image from a frame.
        previewframe = np.ascontiguousarray(out_frame)
        img = pygame.image.frombuffer(previewframe, capture_manager.camera.resolution, 'RGB')

        # Put the image into buffer.
        buffer.blit(img, (0, 0))

        # Add FPS and temperature on the top corner of the buffer.
        fpstext = "%0.1f FPS" % (1/delta,)
        fpstext_surface = smallfont.render(fpstext, True, (255, 0, 0))
        fpstext_position = (buffer.get_width()-10, 10) # Near the top right corner
        buffer.blit(fpstext_surface, fpstext_surface.get_rect(topright=fpstext_position))
        try:
            temp = int(open("/sys/class/thermal/thermal_zone0/temp").read()) / 1000
            temptext = "%d\N{DEGREE SIGN}C" % temp
            temptext_surface = smallfont.render(temptext, True, (255, 0, 0))
            temptext_position = (buffer.get_width()-10, 30) # near the top right corner
            buffer.blit(temptext_surface, temptext_surface.get_rect(topright=temptext_position))
        except OSError:
            pass

        # Reset the detecttext vertical position.
        dtvp = 0

        # For each traffic sign that is recognized in the current frame (up to 3 signs),
        # its name will be printed on the screen and it will be announced if it already wasn't.
        for i in range(len(predictions)):
            p = predictions[i]
            name = tsdar0.CLASS_NAMES[p]
            print("Detected", name)

            last_seen.append(name)

            # Render the sign name at the bottom of the buffer (if multiple signs
            # are detected, the current name is written above the previous one).
            detecttext = name
            detecttext_font = medfont
            detecttext_color = (255, 0, 0)
            detecttext_surface = detecttext_font.render(detecttext, True, detecttext_color)
            dtvp = buffer.get_height() - (i+1)*(detecttext_font.size(detecttext)[1]) - i*detecttext_font.size(detecttext)[1]//2
            detecttext_position = (buffer.get_width()//2, dtvp)
            buffer.blit(detecttext_surface, detecttext_surface.get_rect(center=detecttext_position))

            # Make an announcement for the traffic sign if it's new (not detected in previous consecutive frames).
            if detecttext not in already_seen:
                os.system('echo %s | festival --tts & ' % detecttext)

        # If new traffic signs were detected in the current frame, add them to already_seen list
        for ts in last_seen:
            if ts not in already_seen:
                already_seen.append(ts)

        # If the traffic sign disappeared from the frame (a car passed it), remove it from already_seen
        diff = list(set(already_seen)-set(last_seen))
        already_seen = [ts for ts in already_seen if ts not in diff]

        # Reset last_seen.
        last_seen = []

        # Show the buffer image on the screen.
        screen.blit(pygame.transform.rotate(buffer, args.rotation), (0,0))
        pygame.display.update()

# Run the program until it's interrupted by Ctrl+C (KeyboardInterrupt).
if __name__ == "__main__":
    args = parse_args()
    try:
        main(args)
    except KeyboardInterrupt:
        if capture_manager is not None:
            capture_manager.stop()
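The announce-once bookkeeping near the end of the loop (last_seen / already_seen) can be sanity-checked in isolation; this is the same set logic from the loop, rewritten as a standalone function (the update_seen name is mine):

```python
def update_seen(already_seen, last_seen):
    """Return (new_signs, still_visible): the signs to announce this frame,
    and already_seen pruned to signs that are still in view."""
    # Signs detected this frame that weren't announced yet.
    new_signs = [ts for ts in last_seen if ts not in already_seen]
    # Signs that disappeared from the frame (e.g. the car passed them).
    gone = set(already_seen) - set(last_seen)
    still_visible = [ts for ts in already_seen if ts not in gone] + new_signs
    return new_signs, still_visible
```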

Edit:
To clarify a bit more: I first followed this tutorial to install my Braincraft HAT and then followed this one to try out the object recognition test example (pitft_labeled_output.py) from rpi-vision. Everything worked great through SSH: I saw the logging info in the SSH console, and the camera feed and recognized objects on the Braincraft HAT display. Then I decided to try it from Windows Remote Desktop (after installing xrdp on the RPi), and that worked great too: logging info in the terminal, camera feed on the Braincraft display. But when I wanted to run my program instead of pitft_labeled_output.py, I received the errors mentioned above. I even went further and replaced the pitft_labeled_output.py code with my code (dar_ts.py) and ran it as if I were running pitft_labeled_output.py (I thought there might be some dependencies inside the rpi-vision folder), but it didn’t work; I received the same error. What could be the issue with my code?

P.S. What confused me further is that pitft_labeled_output.py has a typo in line 56 and runs fine anyway, but when I ran my code for the first time, it made me correct that same error.


Get this bounty!!!