#StackBounty: #python #docker No module named 'http.client' on Docker

Bounty: 50

I am trying to retrieve my machine’s external IP address with Python, and I would like to use ipgetter for that.

Running it locally works as expected, and I get my IP.
But when running in Docker I get:

File "/utils/ipgetter.py", line 41, in <module>
    import urllib.request as urllib
  File "/usr/local/lib/python3.6/urllib/request.py", line 88, in <module>
    import http.client
ModuleNotFoundError: No module named 'http.client'

In my requirements.txt I have declared ipgetter==0.7

My Dockerfile starts from FROM python:3.6.3-alpine3.6,
and I have installed my requirements successfully.

I could implement the IP lookup with different libraries, but I would prefer to overcome this issue.

How can I solve this? Am I missing another dependency?
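Since http.client ships with CPython’s standard library, one way to narrow this down is to run a small diagnostic inside the container (a sketch; nothing here is specific to ipgetter) to see which interpreter runs and whether the module is on its path:

```python
# Diagnostic sketch: confirm which Python runs inside the container
# and whether the stdlib http.client module is importable at all.
import importlib.util
import sys

print(sys.executable, sys.version)
spec = importlib.util.find_spec("http.client")
print("http.client found at:", spec.origin if spec else "NOT FOUND")
```

If find_spec returns None inside the image but not locally, something in the image build (for example a stripped-down or shadowed standard library) is the culprit rather than ipgetter itself.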

Thank you


Get this bounty!!!

#StackBounty: #python #python-3.x #ctypes #msdn #joystick Reading joystick capability

Bounty: 50

I’m trying to read my joystick’s capabilities using the winmm.dll library.
Here is how I’m doing it:

from ctypes import windll, Structure, pointer, sizeof, c_uint, c_ushort, c_char

WORD = c_ushort
UINT = c_uint
TCHAR = c_char
JOYERR_NOERROR = 0

winmm = windll.LoadLibrary('winmm.dll')

class JOYCAPS(Structure):
    _fields_ = [
        ('wMid', WORD),
        ('wPid', WORD),
        ('szPname', TCHAR),  # originally szPname[MAXPNAMELEN]
        ('wXmin', UINT),
        ('wXmax', UINT),
        ('wYmin', UINT),
        ('wYmax', UINT),
        ('wZmin', UINT),
        ('wZmax', UINT),
        ('wNumButtons', UINT),
        ('wPeriodMin', UINT),
        ('wPeriodMax', UINT),
        ('wRmin', UINT),
        ('wRmax', UINT),
        ('wUmin', UINT),
        ('wUmax', UINT),
        ('wVmin', UINT),
        ('wVmax', UINT),
        ('wCaps', UINT),
        ('wMaxAxes', UINT),
        ('wNumAxes', UINT),
        ('wMaxButtons', UINT),
        ('szRegKey', TCHAR),  # originally szRegKey[MAXPNAMELEN]
        ('szOEMVxD', TCHAR)  # originally szOEMVxD[MAX_JOYSTICKOEMVXDNAME]
    ]

joyinf = JOYCAPS()
err = winmm.joyGetDevCaps(0, pointer(joyinf), sizeof(joyinf))
if err == JOYERR_NOERROR:
    for field in [s for s in dir(joyinf) if "_" not in s]:
        print(field, getattr(joyinf, field))

When I try to do so I get the error “function ‘joyGetDevCaps’ not found”.
When listing the DLL I can see that there are two entries for joyGetDevCaps:

  • joyGetDevCapsA
  • joyGetDevCapsW

But when trying to call them I get error number 165, which is not listed on the original function’s MSDN page.

I’m using Windows 10 and Python 3.6.

The code works when using joyGetPos and joyGetPosEx.
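For what it’s worth, error 165 is JOYERR_PARMS (JOYERR_BASE 160 + 5), which joyGetDevCaps returns when the parameters, including the structure size, don’t match. A hedged sketch of a likely fix, assuming the mmsystem.h values MAXPNAMELEN = 32 and MAX_JOYSTICKOEMVXDNAME = 260: declare the string fields as fixed-size arrays so sizeof() comes out right, and call the ANSI entry point joyGetDevCapsA, which matches c_char fields (joyGetDevCapsW would instead need c_wchar arrays).

```python
# Sketch: JOYCAPSA with proper fixed-size char arrays, so that
# sizeof() matches what winmm expects (avoiding JOYERR_PARMS, 165).
import sys
from ctypes import Structure, c_uint, c_ushort, c_char, pointer, sizeof

MAXPNAMELEN = 32            # from mmsystem.h
MAX_JOYSTICKOEMVXDNAME = 260  # from mmsystem.h
JOYERR_NOERROR = 0

class JOYCAPSA(Structure):
    _fields_ = (
        [('wMid', c_ushort), ('wPid', c_ushort),
         ('szPname', c_char * MAXPNAMELEN)]
        + [(name, c_uint) for name in (
            'wXmin', 'wXmax', 'wYmin', 'wYmax', 'wZmin', 'wZmax',
            'wNumButtons', 'wPeriodMin', 'wPeriodMax',
            'wRmin', 'wRmax', 'wUmin', 'wUmax', 'wVmin', 'wVmax',
            'wCaps', 'wMaxAxes', 'wNumAxes', 'wMaxButtons')]
        + [('szRegKey', c_char * MAXPNAMELEN),
           ('szOEMVxD', c_char * MAX_JOYSTICKOEMVXDNAME)]
    )

if sys.platform == 'win32':
    from ctypes import windll
    winmm = windll.LoadLibrary('winmm.dll')
    caps = JOYCAPSA()
    err = winmm.joyGetDevCapsA(0, pointer(caps), sizeof(caps))
    if err == JOYERR_NOERROR:
        print(caps.szPname, caps.wNumButtons)
```

On Windows, err should then be JOYERR_NOERROR (0) when a joystick is attached; on other platforms the block only defines the structure.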

Thank you



#StackBounty: #python #pandas #numpy #tensorflow Tensorflow TypeError: Can't convert 'numpy.int64' object to str implicitly

Bounty: 150

Here is my Jupyter notebook:

import pandas as pd
from pprint import pprint
import pickle 
import numpy as np

with open('preDF.p', 'rb') as f:
    preDF = pickle.load(f)
#pprint(preDF)
df = pd.DataFrame(data=preDF)
#df.rename(columns={166: '166'}, inplace=True)
df.head()
0 1   2   3   4   5   6   7   8   9   ... 157 158 159 160 161 162 163 164 165 166
0 3   8   1   13  15  13  9   12  12  1   ... 0   0   0   0   0   0   0   0   0   1
1 3   1   13  15  13  9   12  12  1   27  ... 0   0   0   0   0   0   0   0   0   1
2 3   8   1   13  15  13  9   12  12  1   ... 0   0   0   0   0   0   0   0   0   1
3 13  5   20  18  9   3   1   18  9   1   ... 0   0   0   0   0   0   0   0   0   1
4 3   8   12  15  18  8   5   24  9   4   ... 0   0   0   0   0   0   0   0   0   2
5 rows × 167 columns
import numpy as np 
#msk = np.random.rand(len(df)) < 0.8
#train = df[msk]
#test = df[~msk]

from sklearn.model_selection import KFold
kf = KFold(n_splits=2)
train_index, test_index = next(kf.split(df))  # use the first fold
train = df.iloc[train_index]
test = df.iloc[test_index]
train.columns = train.columns.astype(np.int32)
test.columns = test.columns.astype(np.int32)


import tensorflow as tf

def train_input_fn(features, labels, batch_size):
    """An input function for training"""
    # Convert the inputs to a Dataset.
    dataset = tf.data.Dataset.from_tensor_slices((dict(features.astype(np.int32)), labels.astype(np.int32)))

    # Shuffle, repeat, and batch the examples.
    dataset = dataset.shuffle(1000).repeat().batch(batch_size)

    # Return the dataset.
    return dataset


def eval_input_fn(features, labels, batch_size):
    """An input function for evaluation or prediction"""
    features=dict(features.astype(np.int32))
    if labels is None:
        # No labels, use only features.
        inputs = features
    else:
        inputs = (features, labels)

    # Convert the inputs to a Dataset.
    dataset = tf.data.Dataset.from_tensor_slices(inputs)

    # Batch the examples
    assert batch_size is not None, "batch_size must not be None"
    dataset = dataset.batch(batch_size)

    # Return the dataset.
    return dataset

def load_data(train,test,y_name=166):

    train_x, train_y = train, train.pop(y_name)

    test_x, test_y = test, test.pop(y_name)

    return (train_x, train_y), (test_x, test_y)

def main(train,test):
    batch_size = np.int32(100)
    train_steps = np.int32(1000)
    # Fetch the data

    SPECIES = ['neg', 'stable', 'pos']
    (train_x, train_y), (test_x, test_y) = load_data(train,test)

    # Feature columns describe how to use the input.
    my_feature_columns = []
    for key in train_x.keys():
        my_feature_columns.append(tf.feature_column.numeric_column(key=key))

    # Build 2 hidden layer DNN with 10, 10 units respectively.
    classifier = tf.estimator.DNNClassifier(
        feature_columns=my_feature_columns,
        # Two hidden layers of 10 nodes each.
        hidden_units=[30, 10,30],
        # The model must choose between 3 classes.
        n_classes=3)

    classifier.train(
        input_fn=lambda:train_input_fn(train_x, train_y,
                                                 batch_size),
        steps=train_steps)
    # Evaluate the model.
    eval_result = classifier.evaluate(
        input_fn=lambda:eval_input_fn(test_x, test_y,
                                                batch_size))

    print('\nTest set accuracy: {accuracy:0.3f}\n'.format(**eval_result))

    # Generate predictions from the model
    expected = ['exp neg', 'exp stable', 'exp pos']
    predict_x = {
        'open': [5.1, 5.9, 6.9],
        'high': [3.3, 3.0, 3.1],
        'low':   [1.7, 4.2, 5.4],
        'close': [0.5, 1.5, 2.1],
    }

    predictions = classifier.predict(
        input_fn=lambda:eval_input_fn(predict_x,
                                                labels=None,
                                                batch_size=batch_size))

    template = ('\nPrediction is "{}" ({:.1f}%), expected "{}"')

    for pred_dict, expec in zip(predictions, expected):
        class_id = pred_dict['class_ids'][0]
        probability = pred_dict['probabilities'][class_id]

        print(template.format(SPECIES[class_id],
                              100 * probability, expec))


if __name__ == '__main__':
    #tf.logging.set_verbosity(tf.logging.INFO)
    tf.app.run(main(train,test))

So I get this error :

INFO:tensorflow:Using default config.
WARNING:tensorflow:Using temporary folder as model directory: /tmp/tmpz7rw1puj
INFO:tensorflow:Using config: {'_task_type': 'worker', '_cluster_spec': <tensorflow.python.training.server_lib.ClusterSpec object at 0x7f478ba9bdd8>, '_tf_random_seed': None, '_keep_checkpoint_max': 5, '_is_chief': True, '_master': '', '_session_config': None, '_log_step_count_steps': 100, '_global_id_in_cluster': 0, '_evaluation_master': '', '_service': None, '_save_summary_steps': 100, '_save_checkpoints_secs': 600, '_num_ps_replicas': 0, '_task_id': 0, '_num_worker_replicas': 1, '_model_dir': '/tmp/tmpz7rw1puj', '_save_checkpoints_steps': None, '_keep_checkpoint_every_n_hours': 10000}
INFO:tensorflow:Calling model_fn.
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-141-fcd417d2c3ff> in <module>()
     98 if __name__ == '__main__':
     99     #tf.logging.set_verbosity(tf.logging.INFO)
--> 100     tf.app.run(main(train,test))

<ipython-input-141-fcd417d2c3ff> in main(train, test)
     64         input_fn=lambda:train_input_fn(train_x, train_y,
     65                                                  batch_size),
---> 66         steps=train_steps)
     67     # Evaluate the model.
     68     eval_result = classifier.evaluate(

/usr/local/lib/python3.5/dist-packages/tensorflow/python/estimator/estimator.py in train(self, input_fn, hooks, steps, max_steps, saving_listeners)
    350 
    351     saving_listeners = _check_listeners_type(saving_listeners)
--> 352     loss = self._train_model(input_fn, hooks, saving_listeners)
    353     logging.info('Loss for final step: %s.', loss)
    354     return self

/usr/local/lib/python3.5/dist-packages/tensorflow/python/estimator/estimator.py in _train_model(self, input_fn, hooks, saving_listeners)
    810       worker_hooks.extend(input_hooks)
    811       estimator_spec = self._call_model_fn(
--> 812           features, labels, model_fn_lib.ModeKeys.TRAIN, self.config)
    813 
    814       if self._warm_start_settings:

/usr/local/lib/python3.5/dist-packages/tensorflow/python/estimator/estimator.py in _call_model_fn(self, features, labels, mode, config)
    791 
    792     logging.info('Calling model_fn.')
--> 793     model_fn_results = self._model_fn(features=features, **kwargs)
    794     logging.info('Done calling model_fn.')
    795 

/usr/local/lib/python3.5/dist-packages/tensorflow/python/estimator/canned/dnn.py in _model_fn(features, labels, mode, config)
    352           dropout=dropout,
    353           input_layer_partitioner=input_layer_partitioner,
--> 354           config=config)
    355 
    356     super(DNNClassifier, self).__init__(

/usr/local/lib/python3.5/dist-packages/tensorflow/python/estimator/canned/dnn.py in _dnn_model_fn(features, labels, mode, head, hidden_units, feature_columns, optimizer, activation_fn, dropout, input_layer_partitioner, config)
    183         dropout=dropout,
    184         input_layer_partitioner=input_layer_partitioner)
--> 185     logits = logit_fn(features=features, mode=mode)
    186 
    187     def _train_op_fn(loss):

/usr/local/lib/python3.5/dist-packages/tensorflow/python/estimator/canned/dnn.py in dnn_logit_fn(features, mode)
     89         partitioner=input_layer_partitioner):
     90       net = feature_column_lib.input_layer(
---> 91           features=features, feature_columns=feature_columns)
     92     for layer_id, num_hidden_units in enumerate(hidden_units):
     93       with variable_scope.variable_scope(

/usr/local/lib/python3.5/dist-packages/tensorflow/python/feature_column/feature_column.py in input_layer(features, feature_columns, weight_collections, trainable, cols_to_vars)
    271   """
    272   return _internal_input_layer(features, feature_columns, weight_collections,
--> 273                                trainable, cols_to_vars)
    274 
    275 

/usr/local/lib/python3.5/dist-packages/tensorflow/python/feature_column/feature_column.py in _internal_input_layer(features, feature_columns, weight_collections, trainable, cols_to_vars, scope)
    192       ordered_columns.append(column)
    193       with variable_scope.variable_scope(
--> 194           None, default_name=column._var_scope_name):  # pylint: disable=protected-access
    195         tensor = column._get_dense_tensor(  # pylint: disable=protected-access
    196             builder,

/usr/local/lib/python3.5/dist-packages/tensorflow/python/ops/variable_scope.py in __enter__(self)
   1901 
   1902     try:
-> 1903       return self._enter_scope_uncached()
   1904     except:
   1905       if self._graph_context_manager is not None:

/usr/local/lib/python3.5/dist-packages/tensorflow/python/ops/variable_scope.py in _enter_scope_uncached(self)
   2006         raise
   2007       self._current_name_scope = current_name_scope
-> 2008       unique_default_name = _get_unique_variable_scope(self._default_name)
   2009       pure_variable_scope = _pure_variable_scope(
   2010           unique_default_name,

/usr/local/lib/python3.5/dist-packages/tensorflow/python/ops/variable_scope.py in _get_unique_variable_scope(prefix)
   1690   var_store = _get_default_variable_store()
   1691   current_scope = get_variable_scope()
-> 1692   name = current_scope.name + "/" + prefix if current_scope.name else prefix
   1693   if var_store.variable_scope_count(name) == 0:
   1694     return prefix

TypeError: Can't convert 'numpy.int64' object to str implicitly

My guess is that this worked in the original, numpy-free example: now that I’ve introduced numpy, every int is an int64, and somewhere TensorFlow tries a naive int-to-string conversion.

Since an int64 cannot be converted to a string implicitly, it fails.

But I’m having trouble finding which int is the problematic one here.
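The traceback ends in `current_scope.name + "/" + prefix`, where the prefix is a feature-column name, and `tf.feature_column.numeric_column` keys come from the DataFrame’s integer column labels. A minimal sketch of one likely fix (TensorFlow not needed to see the effect): cast the column labels to plain strings before building the feature columns.

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(np.arange(12).reshape(3, 4))
print(df.columns.dtype)      # int64: these become numpy.int64 keys

# Cast the labels to strings before they are used as
# feature-column keys / variable-scope names:
df.columns = df.columns.astype(str)
print(list(df.columns))      # ['0', '1', '2', '3']
```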

The notebook is here :
https://www.dropbox.com/s/rx8v5aap3zhoshm/NewML.html?dl=1
and the pickle predf is here :
https://www.dropbox.com/s/wd831906jq3o1jl/preDF.p?dl=1



#StackBounty: #python #python-3.x #image Automatic image cropping in Python 3

Bounty: 50

I made a script to automate the cropping of spectrogram images I generated using Matlab.

I have 4 different types of images (Fixed height, but varying width) and they are cropped differently according to their type.

Here’s an example of the input image (Type 2) (2462×256)
enter image description here

Here’s an example of the input image (Type 3) (34482×256)
enter image description here

Image types 1 and 2 are simply cropped from the right edge to the desired dimensions (in this case 1600px wide), since the interesting part of the signal is on the right. I’m essentially removing the left side of the image until a 1600px-wide image remains.

Image types 3 and 4 are originally very long images, so I can crop multiple images out of each one, overlapping each by a fixed amount. (In this case, I’ll crop a 1600px wide image starting at (0,0), save it, crop another 1600px wide image at (400,0) then at (800,0) and so on.)
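The sliding-window arithmetic described above can be sketched as follows (the 4000px source width is just a hypothetical example):

```python
# Start offsets for 1600px-wide crops taken every 400px from a
# hypothetical 4000px-wide source image.
image_width = 4000
crop_width = 1600
overlap_step = 400

starts = list(range(0, image_width - crop_width + 1, overlap_step))
print(starts)  # [0, 400, 800, 1200, 1600, 2000, 2400]
```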

Here’s the first example after cropping. (1600×256)
enter image description here

Here are the first two crops from the second example; you can see the overlap on the right. (1600×256)
enter image description here
enter image description here


As a beginner, I mostly want to know if I’m doing something wrong, or anything that could be optimized or simply done better.

#Packages
import cv2
import os
from imageio import imwrite, imread

#Defined parameters
#Input and output paths
path_directory_input = '/home/.../spectrograms/uncropped'
path_directory_output = '/home/.../spectrograms/cropped'
#Cropping parameters
image_height_final = 256
image_width_final = 1600
image_overlap = 400
crop_nb_maximum = 11

#Class example counters
class1,class2,class3,class4 = 0,0,0,0
class1_out,class2_out,class3_out,class4_out = 0,0,0,0
# Object slipping = 1
# Object slipping on surface = 2
# Robot movement = 3
# Robot movement with object = 4

#Iterate over all samples in the input directory
for path_image in os.listdir(path_directory_input):

    #Defines the current image path, output path and reads the image
    path_image_input = os.path.join(path_directory_input, path_image)
    path_image_output = os.path.join(path_directory_output, path_image)
    image_current = imread(path_image_input)

    #Parse the filename and determine the current class (determined by the 15th character)
    class_current = int(path_image[15])

    #Counts the number of input examples being treated
    if class_current == 1:
        class1 += 1
    if class_current == 2:
        class2 += 1
    if class_current == 3:
        class3 += 1
    if class_current == 4:
        class4 += 1

    #Get image dimensions
    image_height_current, image_width_current = image_current.shape[:2]

    #Changes the procedure depending on the current class
    if (class_current == 1) or (class_current == 2):
        print('Processing class: ', class_current)

        #Crops the image to target size (Format is Y1:Y2,X1:X2)
        image_current_cropped = image_current[0:image_height_final,
                                (image_width_current-image_width_final):image_width_current]
        #Saves the new image in the output file
        imwrite(path_image_output,image_current_cropped)

    elif (class_current == 3) or (class_current == 4):
        print('Processing class: ', class_current)

        #Count how many crops can fit in the original
        crop_nb = int((image_width_current - image_width_final)/image_overlap)
        #Limit the crop number to arrive at equal class examples
        if crop_nb > crop_nb_maximum:
            if class_current == 3:
                crop_nb = crop_nb_maximum
            else:
                crop_nb = crop_nb_maximum * 2

        #Loop over that number
        for crop_current in range(0,crop_nb):
            #Counts the number of output examples
            if class_current == 3:
                class3_out += 1
            if class_current == 4:
                class4_out += 1

            #Crop the image multiple times with some overlap
            image_current_cropped = image_current[0:image_height_final,
                                    (crop_current * image_overlap):((crop_current * image_overlap) + image_width_final)]
            #Save the crop with a number appended
            path_image_output_new = path_image_output[:-4] #Removes the .png
            path_image_output_new = str.join('_',(path_image_output_new,str(crop_current))) #Appends the current crop number
            path_image_output_new = path_image_output_new + '.png' #Appends the .png at the end
            imwrite(path_image_output_new,image_current_cropped)

    else:
        #If the current class is not a valid selection (1-4)
        print('Something went wrong with the class selection: ',class_current)


#Prints the number of examples
print('Cropping is done. Here are the input example numbers:')
print('class1',class1)
print('class2',class2)
print('class3',class3)
print('class4',class4)
print('Here are the output example numbers')
print('class1',class1)
print('class2',class2)
print('class3',class3_out)
print('class4',class4_out)



#StackBounty: #python #algorithm #dynamic-programming #pathfinding Minimum cost path of matrix using Python

Bounty: 50

I was reading this article about finding the minimum-cost path from (0,0) to any (m,n) point in a matrix. The author provides two solutions in Python.

The first solves it with a backward-induction technique through recursion, while the second uses an auxiliary table (tc). I was wondering if I could solve it using forward induction, without the need for an additional table. Here’s my solution:

def test(target_matrix, cost, i, j, m, n):
    if (i == m and j == n):
        return cost
    if i+1 > m:
        cost += target_matrix[i][j+1] 
        return test(target_matrix, cost, i, j+1, m, n)
    if j+1 > n:
        cost += target_matrix[i+1][j] 
        return test(target_matrix, cost, i+1, j, m, n)
    if (i+1 <= m and j+1 <= n):
        ret_cost, i, j = min(target_matrix[i+1][j], target_matrix[i][j+1], target_matrix[i+1][j+1], i, j)
        cost +=ret_cost
        return test(target_matrix, cost, i, j, m, n)


def min(x, y, z, i, j):
    if (x < y):
        if (x < z):
            return x, i+1, j
        else:
            return z, i+1, j+1
    else:
        if (y < z):
            return y, i, j+1
        else:
            return z, i+1, j+1


if __name__ == '__main__':
    input = [
            [11,9, 3],
            [3, 1, 0],
            [1, 3, 2]
            ]
    res = test(input, input[0][0], 0, 0, 2, 2)
    print(res)

What are your comments on this? What do you think are its drawbacks compared to the two solutions provided in the article? I’m particularly interested in comments regarding the time and space complexity of my algorithm.
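For comparison, the auxiliary-table (tc) approach from the article can be sketched roughly like this. It visits every cell once, so it is O(m·n) in time and space, whereas the greedy walk above is O(m+n) time but is not guaranteed to find the true minimum, since the locally cheapest neighbour can lead into an expensive region.

```python
# Bottom-up tabulation sketch: tc[i][j] holds the minimum cost of
# reaching cell (i, j) from (0, 0) moving right, down, or diagonally.
def min_cost_path(grid, m, n):
    tc = [[0] * (n + 1) for _ in range(m + 1)]
    tc[0][0] = grid[0][0]
    for j in range(1, n + 1):          # first row: only right moves
        tc[0][j] = tc[0][j - 1] + grid[0][j]
    for i in range(1, m + 1):          # first column: only down moves
        tc[i][0] = tc[i - 1][0] + grid[i][0]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            tc[i][j] = grid[i][j] + min(tc[i - 1][j - 1],
                                        tc[i - 1][j], tc[i][j - 1])
    return tc[m][n]

grid = [[11, 9, 3],
        [3, 1, 0],
        [1, 3, 2]]
print(min_cost_path(grid, 2, 2))  # 14
```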



#StackBounty: #python Scapy spoofing UDP packet error

Bounty: 50

AttributeError: 'bytearray' object has no attribute '__rdiv__'

I get this for the following code:

from scapy.all import IP, TCP, send

b = bytearray([0xff, 0xff])

def spoof(src_ip, src_port, dest_ip, dest_port):
    global b
    spoofed_packet = IP(src=src_ip, dst=dest_ip) / TCP(sport=src_port, dport=dest_port) / b
    send(spoofed_packet)

I found the example for spoofing the packet on Stack Overflow, but it didn’t use a bytearray. I assume I need to convert the bytearray to a string?
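On Python 3 the conversion would be to bytes rather than str; a quick sketch, independent of Scapy, of what gets attached as the raw payload:

```python
# bytes() preserves the exact byte values of the bytearray, which is
# what a raw packet payload needs (a str would require an encoding).
b = bytearray([0xff, 0xff])
payload = bytes(b)
print(payload)  # b'\xff\xff'
```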

Also, my Scapy keeps opening PowerShell; is there any way around that?

I fixed that error by turning the bytearray into a string; now I get the following error:

    os.write(1,b".")
    OSError: [Errno 9] Bad file descriptor



#StackBounty: #python #python-3.x #web-scraping #scrapy Parsing different categories using scrapy from a webpage

Bounty: 50

I’ve written a script in Python’s scrapy to parse the “model”, “country” and “year” of various bikes from a webpage. There are several subcategories to follow before reaching the target page with the required info. The scraper starts from the main page and tracks each link within class art-indexhmenu; going one layer deep, it tracks the links within class niveau2, then follows the links within class niveau3, and finally tracks the links within class art-indexbutton-wrapper to reach the target page, where it scrapes the “model”, “country” and “year” of each product. My scraper does its job without errors. However, although it works nicely, the way I’ve created it looks very repetitive. As there is always room for improvement, I suppose there should be a way to make it more robust by getting rid of the banality. Thanks in advance.

This is the spider (website included):

import scrapy
from scrapy.spiders import CrawlSpider
from scrapy.http.request import Request
from scrapy.crawler import CrawlerProcess

class BikePartsSpider(scrapy.Spider):
    name = 'honda'

    def start_requests(self):
        yield Request(url = "https://www.bike-parts-honda.com/", callback = self.parse_links)

    def parse_links(self, response):
        for link in response.css('.art-indexhmenu a::attr(href)').extract():
            yield response.follow(link, callback = self.parse_inner_links) #going to one layer deep from landing page

    def parse_inner_links(self, response):
        for link in response.css('.niveau2 .art-indexbutton::attr(href)').extract():
            yield response.follow(link, callback = self.parse_cat_links) # digging deep to go another layer

    def parse_cat_links(self, response):
        for link in response.css('.niveau3 .art-indexbutton::attr(href)').extract():
            yield response.follow(link, callback = self.parse_target_links) ## go inside another layer

    def parse_target_links(self, response):
        for link in response.css('.art-indexbutton-wrapper .art-indexbutton::attr(href)').extract():
            yield response.follow(link, callback = self.parse_docs) # tracking links leading to the target page

    def parse_docs(self, response):
        items = [item for item in response.css('.titre_12_red::text').extract()]
        yield {"categories":items} #this is where the scraper parses the info

c = CrawlerProcess({               #using CrawlerProcess() method to be able to run from the IDE
    'USER_AGENT': 'Mozilla/5.0',   
})
c.crawl(BikePartsSpider)
c.start()



#StackBounty: #python #tensorflow #tensorboard #tensorflow-serving #tensorflow-datasets Adding Tensorboard summaries from graph ops gen…

Bounty: 50

I’ve found the Dataset.map() functionality pretty nice for setting up pipelines to preprocess image/audio data before feeding it into the network for training, but one issue I have is accessing the raw data before the preprocessing, in order to send it to TensorBoard as a summary.

For example, say I have a function that loads audio data, does some framing, makes a spectrogram, and returns this.

import tensorflow as tf 

def load_audio_examples(label, path):
    # loads audio, converts to spectrogram
    pcm = ...  # this is what I'd like to put into tf.summmary.audio() !
    # creates one-hot encoded labels, etc
    return labels, examples

# create dataset
training = tf.data.Dataset.from_tensor_slices((
    tf.constant(labels), 
    tf.constant(paths)
))

training = training.map(load_audio_examples, num_parallel_calls=4)

# create ops for training
train_step = # ...
accuracy = # ...

# create iterator
iterator = training.repeat().make_one_shot_iterator()
next_element = iterator.get_next()

# ready session
sess = tf.InteractiveSession()
tf.global_variables_initializer().run()
train_writer = # ...

# iterator
test_iterator = testing.make_one_shot_iterator()
test_next_element = test_iterator.get_next()

# train loop
for i in range(100):
    batch_ys, batch_xs, path = sess.run(next_element)
    summary, train_acc, _ = sess.run([summaries, accuracy, train_step], 
        feed_dict={x: batch_xs, y: batch_ys})
    train_writer.add_summary(summary, i) 

It appears as though this does not become part of the graph that is plotted in the “Graph” tab of tensorboard (see screenshot below).

enter image description here

As you can see, it’s just X (the output of the preprocessing map() function).

  1. How would I better structure this to get the raw audio into a tf.summary.audio()? Right now the things inside map() aren’t accessible as Tensors inside my training loop.
  2. Also, why isn’t my graph showing up on TensorBoard? It worries me that I won’t be able to export my model or use TensorFlow Serving to put it into production because I’m using the new Dataset API; maybe I should go back to doing things manually (with queues, etc.).
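One hedged idea for question 1, sketched against the snippet above (function bodies elided as in the original; the 16000 sample rate is an assumed placeholder): have the map() function pass the raw pcm through as an extra element, so it surfaces as a graph tensor that a summary op can consume.

```python
def load_audio_examples(label, path):
    pcm = ...                      # decoded waveform, as before
    # ... framing / spectrogram ...
    return labels, examples, pcm   # pass the raw audio through

training = training.map(load_audio_examples, num_parallel_calls=4)
iterator = training.repeat().make_one_shot_iterator()
labels, examples, pcm = iterator.get_next()

# pcm is now part of the graph, so it can feed a summary op:
audio_summary = tf.summary.audio('raw_audio', pcm, sample_rate=16000)
```

tf.summary.audio expects a float tensor shaped [batch, frames] or [batch, frames, channels] with values in [-1, 1], so the pass-through tensor may need batching or reshaping first.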



#StackBounty: #python #celery celery shutdown worker after particular task

Bounty: 100

I’m using celery (solo pool with concurrency=1) and I want to be able to shut down the worker after a particular task has run. A caveat is that I want to avoid any possibility of the worker picking up any further tasks after that one.

Here’s my attempt, in outline:

from __future__ import absolute_import, unicode_literals
from celery import Celery
from celery.exceptions import WorkerShutdown
from celery.signals import task_postrun

app = Celery()
app.config_from_object('celeryconfig')

@app.task
def add(x, y):
    return x + y

@task_postrun.connect(sender=add)
def shutdown(*args, **kwargs):
    raise WorkerShutdown()

However, when I run the worker

celery -A celeryapp  worker --concurrency=1 --pool=solo

and run the task

add.delay(1,4)

I get the following:

 -------------- celery@sam-APOLLO-2000 v4.0.2 (latentcall)
---- **** ----- 
--- * ***  * -- Linux-4.4.0-116-generic-x86_64-with-Ubuntu-16.04-xenial 2018-03-18 14:08:37
-- * - **** --- 
- ** ---------- [config]
- ** ---------- .> app:         __main__:0x7f596896ce90
- ** ---------- .> transport:   redis://localhost:6379/0
- ** ---------- .> results:     redis://localhost/
- *** --- * --- .> concurrency: 4 (solo)
-- ******* ---- .> task events: OFF (enable -E to monitor tasks in this worker)
--- ***** ----- 
 -------------- [queues]
                .> celery           exchange=celery(direct) key=celery


[2018-03-18 14:08:39,892: WARNING/MainProcess] Restoring 1 unacknowledged message(s)

The task is re-queued and will be run again on another worker, leading to a loop.

This also happens when I move the WorkerShutdown exception into the task itself:

@app.task
def add(x, y):
    print(x + y)
    raise WorkerShutdown()

Is there a way I can shut down the worker after a particular task, while avoiding this unfortunate side-effect?

