#StackBounty: #javascript #django #python-3.x #opencv #pythonanywhere How to access webcam in OpenCV on PythonAnywhere through Javascri…

Bounty: 50

I have developed a web application in Django with a view containing OpenCV code that, when triggered, opens the user's webcam to detect their face. This app works fine on my local server, but when I hosted it on PythonAnywhere it says the camera was not found, since the PythonAnywhere host machine doesn't have a camera.
Someone suggested that I open the webcam through JavaScript, since that runs on the client machine, and then pass its feed to the server machine, i.e. my hosting.
But as I am a rookie in Python, I am not able to figure out how to perform this task.
I found this piece of JS code, but I don't know how and where to add it to my Django app.

Code for getting the feed with JavaScript:

var video = document.querySelector("#videoElement");

if (navigator.mediaDevices.getUserMedia) {
    navigator.mediaDevices.getUserMedia({ video: true })
        .then(function (stream) {
            video.srcObject = stream;
        })
        .catch(function (err) {
            console.log("Something went wrong!", err);
        });
}

My Python code for opening the camera and detecting faces is as follows (it works on my local server):

import cv2

cascade = cv2.CascadeClassifier('./haarcascade_frontalface_default.xml')

cam = cv2.VideoCapture(0)

while True:
    ret, frame = cam.read()
    if not ret:
        break

    frame = cv2.flip(frame, 1)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    faces = cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=3)

    for (x, y, w, h) in faces:
        cropped = cv2.resize(frame[y:y+h, x:x+w], (198, 198))
        cv2.rectangle(frame, (x, y), (x+w, y+h), (255, 0, 0), 2)

    cv2.imshow('Stream', frame)

    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cam.release()
cv2.destroyAllWindows()

Any help is appreciated. Thank you in advance.


Get this bounty!!!

#StackBounty: #opencv #object-detection #emgucv #yolo How to reduce number of classes in YOLOV3 files?

Bounty: 50

I am using YOLOv3 to detect cars in videos. I downloaded the three files used in my code (coco.names, yolov3.cfg and yolov3.weights), which are trained to detect 80 different classes of objects. The code works, but very slowly: it takes more than 5 seconds per frame. I believe that if I reduced the number of classes, it would run much faster. I can delete the unnecessary classes from coco.names, but unfortunately I don’t understand all the contents of yolov3.cfg, and I can’t even read yolov3.weights.
I was thinking about training my own model, but I faced a lot of problems, so I gave up on the idea.
Can anyone help me modify these files?
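
Editing the .cfg and .weights to drop classes effectively amounts to retraining. A lighter alternative (my suggestion, not from the original files) is to keep the 80-class model and simply discard every detection that is not a car after the forward pass; the network still runs fully, so the speed gain is small, but the output is simplified:

```python
import numpy as np

CAR_CLASS_ID = 2  # index of "car" in the standard coco.names ordering

def keep_cars(detections, conf_threshold=0.5):
    # Each raw YOLO output row is [cx, cy, w, h, objectness, 80 class scores]
    kept = []
    for row in detections:
        scores = row[5:]
        class_id = int(np.argmax(scores))
        if class_id == CAR_CLASS_ID and scores[class_id] > conf_threshold:
            kept.append(row)
    return kept
```

This filter would be applied to each output row of `net.forward()` before drawing boxes.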



#StackBounty: #opencv #mingw #gstreamer #gstreamer-1.0 can't build OpenCV + GStreamer correctly (MinGW, Windows)

Bounty: 50

I’m trying to run a test pipeline:

 cv::VideoCapture cap = cv::VideoCapture(" autovideosrc ! videoconvert ! appsink0 ", cv::CAP_GSTREAMER);

But it doesn’t start, and it returns the debug info below. I compiled OpenCV using MinGW32 7.3.0; my GStreamer build is also 32-bit. What could be wrong?
Windows 7, MinGW 7.3.0, OpenCV 4.1.0, GStreamer 1.16.0

    0:00:00.040498363  6904   1F44A7C0 WARN      GST_PLUGIN_LOADING gstplugin.c:793:_priv_gst_plugin_load_file_for_registry: module_open failed: 'E:\gstreamer\1.0\x86\lib\gstreamer-1.0\libgstdecklink.dll': The specified procedure was not found.

        (untitled2.exe:6904): GStreamer-WARNING **: 02:29:33.412: Failed to load plugin 'E:\gstreamer\1.0\x86\lib\gstreamer-1.0\libgstdecklink.dll': 'E:\gstreamer\1.0\x86\lib\gstreamer-1.0\libgstdecklink.dll': The specified procedure was not found.
        0:00:00.061620856  6904   1F44A7C0 WARN      GST_PLUGIN_LOADING gstplugin.c:793:_priv_gst_plugin_load_file_for_registry: module_open failed: 'E:\gstreamer\1.0\x86\lib\gstreamer-1.0\libgstopenh264.dll': The specified procedure was not found.

        (untitled2.exe:6904): GStreamer-WARNING **: 02:29:33.432: Failed to load plugin 'E:\gstreamer\1.0\x86\lib\gstreamer-1.0\libgstopenh264.dll': 'E:\gstreamer\1.0\x86\lib\gstreamer-1.0\libgstopenh264.dll': The specified procedure was not found.
        0:00:00.072668621  6904   1F44A7C0 WARN      GST_PLUGIN_LOADING gstplugin.c:793:_priv_gst_plugin_load_file_for_registry: module_open failed: 'E:\gstreamer\1.0\x86\lib\gstreamer-1.0\libgstsoundtouch.dll': The specified procedure was not found.

        (untitled2.exe:6904): GStreamer-WARNING **: 02:29:33.442: Failed to load plugin 'E:\gstreamer\1.0\x86\lib\gstreamer-1.0\libgstsoundtouch.dll': 'E:\gstreamer\1.0\x86\lib\gstreamer-1.0\libgstsoundtouch.dll': The specified procedure was not found.
        0:00:00.088487674  6904   1F44A7C0 WARN      GST_PLUGIN_LOADING gstplugin.c:793:_priv_gst_plugin_load_file_for_registry: module_open failed: 'E:\gstreamer\1.0\x86\lib\gstreamer-1.0\libgstsrt.dll': The specified procedure was not found.

        (untitled2.exe:6904): GStreamer-WARNING **: 02:29:33.465: Failed to load plugin 'E:\gstreamer\1.0\x86\lib\gstreamer-1.0\libgstsrt.dll': 'E:\gstreamer\1.0\x86\lib\gstreamer-1.0\libgstsrt.dll': The specified procedure was not found.
        0:00:00.089972159  6904   1F44A7C0 WARN      GST_PLUGIN_LOADING gstplugin.c:793:_priv_gst_plugin_load_file_for_registry: module_open failed: 'E:\gstreamer\1.0\x86\lib\gstreamer-1.0\libgsttaglib.dll': The specified procedure was not found.

        (untitled2.exe:6904): GStreamer-WARNING **: 02:29:33.465: Failed to load plugin 'E:\gstreamer\1.0\x86\lib\gstreamer-1.0\libgsttaglib.dll': 'E:\gstreamer\1.0\x86\lib\gstreamer-1.0\libgsttaglib.dll': The specified procedure was not found.
        0:00:00.097988553  6904   1F44A7C0 WARN      GST_PLUGIN_LOADING gstplugin.c:793:_priv_gst_plugin_load_file_for_registry: module_open failed: 'E:\gstreamer\1.0\x86\lib\gstreamer-1.0\libgstwebrtcdsp.dll': The specified procedure was not found.

        (untitled2.exe:6904): GStreamer-WARNING **: 02:29:33.475: Failed to load plugin 'E:\gstreamer\1.0\x86\lib\gstreamer-1.0\libgstwebrtcdsp.dll': 'E:\gstreamer\1.0\x86\lib\gstreamer-1.0\libgstwebrtcdsp.dll': The specified procedure was not found.
        0:00:00.107822720  6904   1F44A7C0 WARN                 filesrc gstfilesrc.c:533:gst_file_src_start:<source> error: No such file "C:\Users\Shmeisser\Documents\build-untitled2-Desktop_Qt_5_9_4_MinGW_32bit-Debug\ autovideosrc ! videoconvert ! appsink0"
        0:00:00.107911655  6904   1F44A7C0 WARN                 basesrc gstbasesrc.c:3469:gst_base_src_start:<source> error: Failed to start
        0:00:00.108341459  6904   1F44A7C0 WARN                 filesrc gstfilesrc.c:533:gst_file_src_start:<source> error: No such file "C:\Users\Shmeisser\Documents\build-untitled2-Desktop_Qt_5_9_4_MinGW_32bit-Debug\ autovideosrc ! videoconvert ! appsink0"
        0:00:00.108391029  6904   1F44A7C0 WARN                 basesrc gstbasesrc.c:3469:gst_base_src_start:<source> error: Failed to start
        0:00:00.108489004  6904   1F44A7C0 WARN                 filesrc gstfilesrc.c:533:gst_file_src_start:<source> error: No such file "C:\Users\Shmeisser\Documents\build-untitled2-Desktop_Qt_5_9_4_MinGW_32bit-Debug\ autovideosrc ! videoconvert ! appsink0"
        0:00:00.108535367  6904   1F44A7C0 WARN                 basesrc gstbasesrc.c:3469:gst_base_src_start:<source> error: Failed to start
        0:00:00.108575898  6904   1F44A7C0 WARN                 basesrc gstbasesrc.c:3824:gst_base_src_activate_push:<source> Failed to start in push mode
        0:00:00.108603015  6904   1F44A7C0 WARN                GST_PADS gstpad.c:1142:gst_pad_set_active:<source:src> Failed to activate pad



#StackBounty: #python #algorithm #opencv #image-processing #computer-vision Image Processing: Algorithm Improvement for Real-Time FedEx…

Bounty: 50

I’ve been working on a project involving image processing for logo detection. Specifically, the goal is to develop an automated system for a real-time FedEx truck/logo detector that reads frames from an IP camera stream and sends a notification on detection. Here’s a sample of the system in action, with the recognized logo surrounded by a green rectangle.

original frame

Fedex logo

Some constraints on the project:

  • Uses raw OpenCV (no deep learning, AI, or trained neural networks)
  • Image background can be noisy
  • The brightness of the image can vary greatly (morning, afternoon, night)
  • The FedEx truck/logo can have any scale, rotation, or orientation since it could be parked anywhere on the sidewalk
  • The logo could potentially be fuzzy or blurry with different shades depending on the time of day
  • There may be many other vehicles with similar sizes or colors in the same frame
  • Real-time detection (~25 FPS from IP camera)
  • The IP camera is in a fixed position and the FedEx truck will always be in the same orientation (never backwards or upside down)
  • The FedEx truck will always be the “red” variation instead of the “green” variation

Current Implementation/Algorithm

I have two threads:

  • Thread #1 – Captures frames from the IP camera using cv2.VideoCapture() and resizes frame for further processing. Decided to handle grabbing frames in a separate thread to improve FPS by reducing I/O latency since cv2.VideoCapture() is blocking. By dedicating an independent thread just for capturing frames, this would allow the main processing thread to always have a frame available to perform detection on.
  • Thread #2 – Main processing/detection thread to detect FedEx logo using color thresholding and contour detection.

Overall Pseudo-algorithm

For each frame:
    Find bounding box for purple color of logo
    Find bounding box for red/orange color of logo
    If both bounding boxes are valid/adjacent and contours pass checks:
        Combine bounding boxes
        Draw combined bounding boxes on original frame
        Play sound notification for detected logo

Color thresholding for logo detection

For color thresholding, I have defined HSV (low, high) thresholds for purple and red to detect the logo.

colors = {
    'purple': ([120,45,45], [150,255,255]),
    'red': ([0,130,0], [15,255,255]) 
}

To find the bounding box coordinates for each color, I follow this algorithm:

  • Blur the frame
  • Erode and dilate the frame with a kernel to remove background noise
  • Convert frame from BGR to HSV color format
  • Perform a mask on the frame using the lower and upper HSV color bounds with set color thresholds
  • Find largest contour in the mask and obtain bounding coordinates

After performing a mask, I obtain these isolated purple (left) and red (right) sections of the logo.


False positive checks

Now that I have the two masks, I perform checks to ensure that the found bounding boxes actually form a logo. To do this, I use cv2.matchShapes() which compares the two contours and returns a metric showing the similarity. The lower the result, the higher the match. In addition, I use cv2.pointPolygonTest() which finds the shortest distance between a point in the image and a contour for additional verification. My false positive process involves:

  • Checking if the bounding boxes are valid
  • Ensuring the two bounding boxes are adjacent based on their relative proximity

If the bounding boxes pass the adjacency and similarity metric test, the bounding boxes are combined and a FedEx notification is triggered.
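
The adjacency part of that check can be as simple as comparing the horizontal gap and vertical overlap of the two boxes; a hypothetical sketch (the threshold value is illustrative):

```python
def boxes_adjacent(b1, b2, max_gap=20):
    # b1 and b2 are (x, y, w, h) tuples; "adjacent" here means a small
    # horizontal gap and some vertical overlap between the two boxes
    x1, y1, w1, h1 = b1
    x2, y2, w2, h2 = b2
    horizontal_gap = max(x1, x2) - min(x1 + w1, x2 + w2)
    vertical_overlap = min(y1 + h1, y2 + h2) - max(y1, y2)
    return horizontal_gap <= max_gap and vertical_overlap > 0
```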

Results


This check algorithm is not really robust as there are many false positives and failed detections. For instance, these false positives were triggered.


While this color thresholding and contour detection approach worked in basic cases where the logo was clear, it was severely lacking in some areas:

  • There are latency problems from having to compute bounding boxes on each frame
  • It occasionally detects falsely when the logo is not present
  • Brightness and time of day have a great impact on detection accuracy
  • When the logo is at a skewed angle, color thresholding still finds the two regions, but the check algorithm fails to accept them as a logo

Would anyone be able to help me improve my algorithm or suggest alternative detection strategies? Is there any other way to perform this detection since color thresholding is highly dependent on exact calibration? If possible, I would like to move away from color thresholding and the multiple layers of filters since it’s not very robust. Any insight or advice is greatly appreciated!



#StackBounty: #android #opencv #image-processing #opencv4android #opencv-contour How to apply FloodFill algorithm(Image Processing) in …

Bounty: 50

I have achieved an implementation of the FloodFill algorithm on an ImageView, but I am not able to achieve it on the camera (Android SurfaceView). I’m using the OpenCV library. I have tried the following code using the contour concept, but I am not getting the exact result.

public Mat onCameraFrame(CvCameraViewFrame inputFrame) {
        mRgba = inputFrame.rgba();

        if (mIsColorSelected) {
            mDetector.process(mRgba);
            List<MatOfPoint> contours = mDetector.getContours();
            Log.e(TAG, "Contours count: " + contours.size());
            for (int contourIdx = 0; contourIdx < contours.size(); contourIdx++) {
                Imgproc.drawContours(mRgba, contours, contourIdx, new Scalar(0,
                        0, 100, 10), -1);

            }

            Mat colorLabel = mRgba.submat(4, 68, 4, 68);
            colorLabel.setTo(mBlobColorRgba);

            Mat spectrumLabel = mRgba.submat(4, 4 + mSpectrum.rows(), 70,
                    70 + mSpectrum.cols());
            mSpectrum.copyTo(spectrumLabel);


        }

        return mRgba;
    }

Does anybody have an idea of how to do it? Thanks in advance.



#StackBounty: #node.js #opencv #mat #opencv4nodejs Why OpenCV Mat creates memory leaks?

Bounty: 50

Not sure if this is relevant, but I’m using opencv4nodejs for my project, and I ran into a situation where, if I don’t call .release() on each Mat object, memory consumption goes up at ~10 MB/s.

This simple example code will create the issue:

function loop(camera, display)
{
    let mat = camera.read();

    let grey_mat = mat.bgrToGray();

    loop(camera, display);
}

Whereas this one fixes the problem:

function loop(camera, display)
{
    let mat = camera.read();

    let grey_mat = mat.bgrToGray();

    grey_mat.release();

    mat.release();

    loop(camera, display);
}

If I search for why the OpenCV Mat object causes leaks, I get answers where people say that Mat is capable of taking care of memory usage on its own.

If the last statement is true, what am I doing wrong? And if I’m not doing anything wrong, why do I have to explicitly tell a Mat object to release its memory? Or is there a potential issue with the npm module opencv4nodejs itself?



#StackBounty: #python #object-oriented #multithreading #design-patterns #opencv Image capture client – multi-threading + sharing data b…

Bounty: 100

I’m working on a small side project at the moment – like a homemade CCTV system.

This part is my Python capture client: it uses OpenCV to capture frames from a connected webcam and sends the frames to a connected server via a socket.

The main thing I was going for was a small application with two services which operate independently once started: one for capturing frames from the camera, and another for sending and receiving network messages. If either of these fails, the other should still work with no issues.

I have more or less achieved this, but I’m not certain that I took the best approach; I’m not normally a Python developer, so I sort of winged it with this application.

Things I felt especially unsure about were the queues; from my searching, they seemed to be the best way to share data between threads.
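
For comparison, the start/stop queues used below are effectively acting as shared boolean flags; `multiprocessing.Event` models that directly and is also safe to share with child processes. A minimal sketch (a hypothetical simplification, not code from the app):

```python
import multiprocessing

def capture_loop(stop_event, max_frames=3):
    # Stand-in for a capture loop: spin until the event is set
    frames = 0
    while not stop_event.is_set():
        frames += 1           # stand-in for cap.read()
        if frames >= max_frames:
            stop_event.set()  # in the real app, stop_capture() would set this
    return frames
```

Any process holding a reference can call `stop_event.set()` to end the loop, replacing the stop/start queue bookkeeping.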

The application can be found here – any advice or comments would be appreciated!

This is the main entry point into the application:

main.py

from orchestrator import Orchestrator
from connection_service import ConnectionService
from capture_service import CaptureService

HOST = "127.0.0.1"
PORT = 11000

def main():
    capture_service = CaptureService()
    connection_service = ConnectionService(HOST, PORT)
    orchestrator = Orchestrator(capture_service, connection_service)

    orchestrator.start()

if __name__ == '__main__':
    main()

This is my orchestration service; it coordinates the main loop of retrieving frames and sending them to the server:

orchestrator.py

from connection_service import ConnectionService
from capture_service import CaptureService
from not_connected_exception import NotConnectedException

import multiprocessing
import cv2
import time

class Orchestrator():

    def __init__(self, capture_service, connection_service):
        self.manager = multiprocessing.Manager()

        self.connection_service = connection_service
        self.capture_service = capture_service

        self.SEND_FOOTAGE = True   
        self.DETECT_MOTION = False

        self.RUN = True

    # End services
    def finish(self):
        self.RUN = False
        self.connection_service.disconnect()
        self.capture_service.stop_capture()

    # Start services, connect to server / start capturing from camera
    # Grab frames from capture service and display
    # Retrieve any messages from connection service
    # Deal with message e.g stop / start sending frames
    # If send footage is true, encode frame as string and send
    def start(self):
        print ("Starting Orchestration...")

        self.connection_service.connect()
        self.capture_service.start_capture()
        while self.RUN:
            message = None

            #Get camera frames
            frame = self.capture_service.get_current_frame()

            self.display_frame(frame)

            message = self.connection_service.get_message()

            self.handle_message(message)

            #Send footage if requested
            if self.SEND_FOOTAGE and frame is not None: #or (self.DETECT_MOTION and motion_detected):
                try:
                    frame_data = cv2.imencode('.jpg', frame)[1].tostring()
                    self.connection_service.send_message(frame_data)

                except NotConnectedException as e:
                    self.connection_service.connect()

    def handle_message(self, message):
        if message == "SEND_FOOTAGE":
            self.SEND_FOOTAGE = True

        elif message == "STOP_SEND_FOOTAGE":
            self.SEND_FOOTAGE = False

        elif message == "DETECT_MOTION":
            self.DETECT_MOTION = True

        elif message == "STOP_DETECT_MOTION":
            self.DETECT_MOTION = False

    def display_frame(self, frame):
        if frame is not None:
            # Display the resulting frame
            cv2.imshow('orchestrator', frame)
            if cv2.waitKey(1) & 0xFF == ord('q'):
                cv2.destroyAllWindows()
                raise SystemExit("Exiting...")

This is my capture service; its job is to capture frames from the camera and put them onto a queue:

capture_service.py

import cv2
import multiprocessing

class CaptureService():

    FRAME_QUEUE_SIZE_LIMIT = 10
    STOP_QUEUE_SIZE_LIMIT = 1
    START_QUEUE_SIZE_LIMIT = 1

    def __init__(self):
        self.frame = None
        manager = multiprocessing.Manager()

        # The queue to add frames to
        self.frame_queue = manager.Queue(self.FRAME_QUEUE_SIZE_LIMIT)

        # A queue to indicate capturing should be stopped
        self.stop_queue = manager.Queue(self.STOP_QUEUE_SIZE_LIMIT)

        # A queue to indicate that capturing should be started
        self.start_queue = manager.Queue(self.START_QUEUE_SIZE_LIMIT)

    # Start Capture
    # Empty the stop queue. If the start queue is empty - start a new capture thread
    # If start queue is not empty, service has already been started
    def start_capture(self):
        print ("Starting capture...")
        while not self.stop_queue.empty():
            self.stop_queue.get()

        if self.start_queue.empty():
            self.capture_thread = multiprocessing.Process(target=self.capture_frames)
            self.capture_thread.start()
            self.start_queue.put("")
            print ("Capturing started...")
        else:
            print ("Capture already started...")

    # Is Capturing
    # Return true if start queue has a value
    def is_capturing(self):
        return not self.start_queue.empty()

    # Get Current Frame
    # Return the current frame from the frame queue
    def get_current_frame(self):
        if not self.frame_queue.empty():
            return self.frame_queue.get()

        return None

    # Stop Capture
    # Add a message to the stop queue
    # Empty the start queue
    def stop_capture(self):
        if self.stop_queue.empty():
            self.stop_queue.put("")

        while not self.start_queue.empty():
            self.start_queue.get()

    # Capture Frames
    # Captures frames from the device at 0
    # Only add frames to queue if there's space
    def capture_frames(self):
        cap = None
        try:
            cap = cv2.VideoCapture(0)
            while True:
                #Empty Start / Stop queue signals
                if not self.stop_queue.empty():
                    while not self.stop_queue.empty():
                        self.stop_queue.get()
                    while not self.start_queue.empty():
                        self.start_queue.get()
                    break

                ret, frame = cap.read()

                if self.frame_queue.qsize() > self.FRAME_QUEUE_SIZE_LIMIT or self.frame_queue.full():
                    continue

                self.frame_queue.put(frame)

            # When everything done, release the capture
            cap.release()
            cv2.destroyAllWindows()

        except Exception as e:
            print ("Exception capturing images, stopping...")
            self.stop_capture()
            cv2.destroyAllWindows()
            if cap is not None:
                cap.release()

This is my connection service; it takes care of all network-related comms.

connection_service.py

from send_message_exception import SendMessageException
from not_connected_exception import NotConnectedException

import socket
import time
import multiprocessing
import struct

class ConnectionService():

    MAX_QUEUE_SIZE = 1

    def __init__(self, host, port):
        self.host = host
        self.port = port
        self.socket = None

        manager = multiprocessing.Manager()

        # The queue to put messages to send on
        self.send_message_queue = manager.Queue(self.MAX_QUEUE_SIZE)

        # The queue received messages go onto
        self.receive_message_queue = manager.Queue(self.MAX_QUEUE_SIZE)

        # A queue which indicates if the service is connected or not
        self.is_connected_queue = manager.Queue(self.MAX_QUEUE_SIZE)

        # A queue which indicates if the service is trying to connect
        self.pending_connection_queue = manager.Queue(self.MAX_QUEUE_SIZE)

        # A queue to stop sending activity
        self.stop_send_queue = manager.Queue(self.MAX_QUEUE_SIZE)

        # A queue to stop receiving activity
        self.stop_receive_queue = manager.Queue(self.MAX_QUEUE_SIZE)

    # Connect to the server
    # 1) If already connected - return
    # 2) If pending connection - return
    # 3) Start the network thread - don't return until the connection status is pending            
    def connect(self):
        if self.is_connected():
            return
        elif not self.pending_connection_queue.empty():
            return
        else:
            self.network_thread = multiprocessing.Process(target=self.start_network_comms)
            self.network_thread.start()

            #Give thread time to sort out queue
            while self.pending_connection_queue.empty():
                continue

    # Start network communications
    # Mark connection status as pending via queue. Clear stop queues.
    # Get socket for connection, mark as connected via queue.
    # Start Send + Receive message queues with socket as argument
    def start_network_comms(self):
            self.pending_connection_queue.put("CONNECTING")
            self.clear_queue(self.stop_send_queue)
            self.clear_queue(self.stop_receive_queue)

            self.socket = self.connect_to_server(self.host, self.port)

            self.is_connected_queue.put("CONNECTED")
            self.pending_connection_queue.get()

            print ("Connected to server...")

            receive_message_thread = multiprocessing.Process(target=self.receive_message, args=(self.socket,))
            receive_message_thread.start()

            send_message_thread = multiprocessing.Process(target=self.send_message_to_server, args=(self.socket,))
            send_message_thread.start()

    # Return true if connected queue has a value
    def is_connected(self):
        return not self.is_connected_queue.empty()

    # Put message on stop queues to end send / receive threads
    # Clear connected state queues
    def disconnect(self):
        print ("Disconnecting...")

        self.stop_receive_queue.put("")
        self.stop_send_queue.put("")

        self.clear_queue(self.pending_connection_queue)
        self.clear_queue(self.is_connected_queue)

        print ("Connection closed")

    # Send a message
    # If connected and send queue isn't full - add message to send queue
    # Raise exception if not connected
    def send_message(self, message):
        if self.is_connected():
            if self.send_message_queue.full():
                print ("Send message queue full...")
                return
            self.send_message_queue.put(message)
        else:
            raise NotConnectedException("Not connected to server...")

    # Send message to server
    # If send queue isn't empty, send the message length + message (expects binary data) to server
    # If exception while sending and the stop queue isn't empty - disconnect
    def send_message_to_server(self, socket):
        while self.stop_send_queue.empty():
            while not self.send_message_queue.empty():
                print ("Message found on queue...")
                try:
                    message = self.send_message_queue.get()
                    message_size = len(message)
                    print (f"Message: {message_size}")
                    socket.sendall(struct.pack(">L", message_size) + message)
                except Exception as e:
                    if not self.stop_send_queue.empty():
                        return
                    print (f"\nException sending message:\n\n{e}")
                    self.disconnect()

    # Get a message
    # If the receive queue isn't empty - return a message
    def get_message(self):
        if not self.receive_message_queue.empty():
            return self.receive_message_queue.get()

        return None

    # Receive messages from socket
    # Read data from socket according to the pre-pended message length
    def receive_message(self, socket):
        data = b""
        payload_size = struct.calcsize(">L")

        print ("Listening for messages...")
        while self.stop_receive_queue.empty():
            #Get message size
            try:
                while len(data) < payload_size:
                    data += socket.recv(4096)

                packed_msg_size = data[:payload_size]
                data = data[payload_size:]
                msg_size = struct.unpack(">L", packed_msg_size)[0]

                print ("Received message size:")
                print (msg_size)

                #Get message
                while len(data) < msg_size:
                    data += socket.recv(4096) 

                message = data[:msg_size]       
                data = data[msg_size:]   

                print (message)

                if self.receive_message_queue.qsize() >= self.MAX_QUEUE_SIZE or self.receive_message_queue.full():
                    continue

                self.receive_message_queue.put(message)

            except Exception as e:
                print (f"\nException while receiving messages: {e}\n\n")
                break

        print ("\nDisconnecting...\n\n")
        self.disconnect()

    # Connect to the server
    def connect_to_server(self, host, port, wait_time=1):
        try:
            client_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            client_socket.connect((host, port))
            return client_socket
        except Exception:
            print (f"Couldn't connect to remote address, waiting {wait_time} seconds to retry")
            time.sleep(wait_time)
            return self.connect_to_server(host, port, wait_time * 1)

    # Clear messages from the supplied queue (should live somewhere else)
    def clear_queue(self, queue):
        while not queue.empty():
            queue.get()

not_connected_exception.py

class NotConnectedException(Exception):
    def __init__(self, message):
        super().__init__(message)

And a small test server, just to test receiving messages:

test_server.py

import socket
import sys
import struct

HOST = "127.0.0.1"
PORT = 11000

def main():
    s=socket.socket(socket.AF_INET,socket.SOCK_STREAM)
    print('Socket created')
    s.bind((HOST,PORT))
    print('Socket bind complete')

    while True:
        s.listen(10)
        try:
            print('Socket now listening')

            conn,addr=s.accept()

            data = b""
            payload_size = struct.calcsize(">L")
            print("payload_size: {}".format(payload_size))
            while True:
                while len(data) < payload_size:
                    data += conn.recv(4096)

                packed_msg_size = data[:payload_size]
                data = data[payload_size:]
                msg_size = struct.unpack(">L", packed_msg_size)[0]
                print("msg_size: {}".format(msg_size))
                while len(data) < msg_size:
                    data += conn.recv(4096)
                frame_data = data[:msg_size]
                data = data[msg_size:]

        except Exception as e:
            print("Whoops...")
            print (e)

if __name__ == '__main__':
    main()

