#StackBounty: #python #django #amazon-web-services #deployment #elastic-beanstalk AWS Elastic Beanstalk health check issue

Bounty: 50

First I apologize for not being familiar with English.

My web application is Django, the web server is Nginx, and I deploy with a Docker image on Elastic Beanstalk.

Normally there is no problem, but when the load balancer scales out EC2 instances, my web server starts returning 502 Bad Gateway.

I checked the Elastic Beanstalk application logs: about 16% of the requests returned 5xx errors, at which point the load balancer scaled out EC2, causing the web server to enter the 502 Bad Gateway state and the Elastic Beanstalk application to go Degraded.

Is this a common problem when the load balancer performs health checks? If not, could you tell me how to turn off the health check?

I will attach a capture image for your help.

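Rather than turning the health check off, a common mitigation is to point it at a cheap endpoint that always returns 200, so new instances pass the check as soon as Django is up. A minimal sketch (hypothetical URL; path() assumes Django 2.0+, older versions would use django.conf.urls.url):

# urls.py
from django.http import HttpResponse
from django.urls import path

def health(request):
    return HttpResponse("ok")

urlpatterns = [
    path("health/", health),
]

The environment's health check URL would then be set to /health/ in the load balancer configuration.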


Get this bounty!!!

#StackBounty: #python #django #django-models #django-1.8 #django-1.9 Using a model's sub-classes as choice options for that model r…

Bounty: 100

We are trying to upgrade a legacy codebase from Django 1.8 to 1.9. We have one model that is defined like this:

def _get_descendant_question_classes():
    stack = [Question]

    while stack:
        cls = stack.pop()
        stack.extend(cls.__subclasses__())
        yield cls

def _get_question_choices():
    question_classes = _get_descendant_question_classes()

    for cls in question_classes:
        yield (cls.slug, cls._meta.verbose_name)

class Question(models.Model):
    type = models.CharField(max_length=10, choices=_get_question_choices(), default=slug)

class TextQuestion(Question): pass
class SelectQuestion(Question): pass
...

Basically, the model wants to use its own subclasses as choice options for one of its fields. It does this by traversing the class hierarchy in a DFS manner and yielding all the subclasses.

This code works in Django 1.8, but in Django 1.9 it gives this error:

Traceback (most recent call last):
  File "./manage.py", line 16, in <module>
    execute_from_command_line(sys.argv)
  File "/usr/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 350, in execute_from_command_line
    utility.execute()
  File "/usr/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 324, in execute
    django.setup()
  File "/usr/local/lib/python2.7/site-packages/django/__init__.py", line 18, in setup
    apps.populate(settings.INSTALLED_APPS)
  File "/usr/local/lib/python2.7/site-packages/django/apps/registry.py", line 108, in populate
    app_config.import_models(all_models)
  File "/usr/local/lib/python2.7/site-packages/django/apps/config.py", line 202, in import_models
    self.models_module = import_module(models_module_name)
  File "/usr/local/lib/python2.7/importlib/__init__.py", line 37, in import_module
    __import__(name)
  File "/home/saeed/saeed/survey/models.py", line 85, in <module>
    class Question(models.Model):
  File "/home/saeed/saeed/survey/models.py", line 99, in Question
    type = models.CharField(max_length=10, choices=_get_question_choices(), default=slug)
  File "/usr/local/lib/python2.7/site-packages/django/db/models/fields/__init__.py", line 1072, in __init__
    super(CharField, self).__init__(*args, **kwargs)
  File "/usr/local/lib/python2.7/site-packages/django/db/models/fields/__init__.py", line 161, in __init__
    choices = list(choices)
  File "/home/saeed/saeed/survey/models.py", line 65, in _get_question_choices
    for cls in question_classes:
  File "/home/saeed/saeed/survey/models.py", line 54, in _get_descendant_question_classes
    stack = [Question]
NameError: global name 'Question' is not defined

I understand the problem; what I don't understand is how this works in Django 1.8. What has changed in Django 1.9 that causes this? And what is the best way to fix it?
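The traceback itself points at the change: in Django 1.9, Field.__init__ normalizes choices with list(choices), so the generator is consumed while the Question class body is still executing, before the name Question is bound. In 1.8 the iterable was merely stored and only flattened on first access, by which time the class existed. One possible workaround (a sketch; it assumes all subclasses are defined in the same module) is to declare the field without choices and patch them in once the hierarchy is complete:

class Question(models.Model):
    type = models.CharField(max_length=10)

class TextQuestion(Question): pass
class SelectQuestion(Question): pass

# now that every subclass exists, the generator can run safely
Question._meta.get_field('type').choices = list(_get_question_choices())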


Get this bounty!!!

#StackBounty: #python #python-3.x #functional-programming #meta-programming Decorate instance method with (arbitrary) meta data in python

Bounty: 50

Requirements

What I need: attach meta information to methods, such that

  1. it is ‘easy’ to retrieve all such methods for a given class instance
  2. methods can still be called ‘in a normal way’, e.g. obj.method()
  3. meta-data is accessible from the decorated method, e.g. obj.method.data
  4. IDEs (PyCharm in particular) do not produce any warnings or errors (and, if possible, IDE support, e.g. auto-completion, should be able to handle the annotation)

Additionally, I would like the code to be readable/intuitive (not necessarily the superclasses, though) and generally robust and ‘bug-free’. I accept the limitation that my decorators need to be the outermost decorator for the automatic collection to take place.

From my point of view, the hardest challenge is handling the function-to-method transformation while still exposing an arbitrary object type, not a function type (though, thinking about it, maybe subclassing FunctionType might be another idea?).

What do you think of the following three solutions? Is there something I did miss?

Code

class MethodDecoratorWithIfInCall(object):
    def __init__(self):
        self._func = None

    def __call__(self, *args, **kwargs):
        if self._func is None:
            assert len(args) == 1 and len(kwargs) == 0
            self._func = args[0]
            return self
        else:
            return self._func(*args, **kwargs)

    def __get__(self, *args, **kwargs):
        # update func reference to method object
        self._func = self._func.__get__(*args, **kwargs)
        return self


class MacroWithIfInCall(MethodDecoratorWithIfInCall):
    def __init__(self, name):
        super(MacroWithIfInCall, self).__init__()
        self.name = name


class MethodDecoratorWithExplicitDecorate(object):
    def __init__(self, *args, **kwargs):
        # wildcard parameters to satisfy PyCharm
        self._func = None

    def __call__(self, *args, **kwargs):
        return self._func(*args, **kwargs)

    def __get__(self, *args, **kwargs):
        # update func reference to method object
        self._func = self._func.__get__(*args, **kwargs)
        return self

    def _decorate(self):
        def _set_func(func):
            self._func = func
            return self
        return _set_func

    @classmethod
    def decorate(cls, *args, **kwargs):
        obj = cls(*args, **kwargs)
        return obj._decorate()


class MacroWithExplicitDecorate(MethodDecoratorWithExplicitDecorate):
    def __init__(self, name):
        super(MacroWithExplicitDecorate, self).__init__()
        self.name = name


class MacroWithoutSuperclass(object):
    def __init__(self, func, name):
        self.func = func
        self.name = name

    def __get__(self, *args, **kwargs):
        # update func reference to method object
        self.func = self.func.__get__(*args, **kwargs)
        return self

    def __call__(self, *args, **kwargs):
        return self.func(*args, **kwargs)

    @staticmethod
    def decorate(name):
        return lambda func: MacroWithoutSuperclass(func, name)


class Shell:
    def __init__(self):
        macros = [macro for macro in map(self.__getattribute__, dir(self))
                  if isinstance(macro, (MacroWithIfInCall, MacroWithExplicitDecorate, MacroWithoutSuperclass))]

        for macro in macros:
            print(macro.name, macro())

    @MacroWithIfInCall(name="macro-with-if-in-call")
    def macro_add_1(self):
        return "called"

    @MacroWithExplicitDecorate.decorate(name="macro-with-explicit-decorate")
    def macro_add_2(self):
        return "called"

    @MacroWithoutSuperclass.decorate(name="macro-without-superclass")
    def macro_add_3(self):
        return "called"


if __name__ == '__main__':
    shell = Shell()

Output

macro-with-if-in-call called
macro-with-explicit-decorate called
macro-without-superclass called
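For comparison, a fourth variant that avoids wrapping altogether (a sketch, not one of the three solutions above) is to attach the metadata directly to the function object; the method stays a plain function, so binding and IDE support are untouched, and collection filters on the attribute instead of isinstance:

def macro(name):
    def mark(func):
        func.macro_name = name  # plain attribute on the function object
        return func
    return mark

class Shell2:
    @macro(name="macro-as-attribute")
    def macro_add_4(self):
        return "called"

# bound methods forward attribute access to the underlying function, so:
# [m for m in map(self.__getattribute__, dir(self)) if hasattr(m, 'macro_name')]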

Disclaimer

This code is/will be part of a public, GPL-3.0 repository.


Get this bounty!!!

#StackBounty: #python #django #django-i18n "django-admin.py makemessages -l en" adds Plural-Forms to the output file

Bounty: 50

Every time I run django-admin.py makemessages -l en, it adds the following line to the djangojs.po file:

"Plural-Forms: nplurals=INTEGER; plural=EXPRESSION;n"

After that, running python manage.py runserver breaks with this error:

ValueError: plural forms expression could be dangerous

Of course, removing that line fixes the error and makes it go away.
How can I prevent this line from being added?
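If the line cannot be suppressed, one workaround (fixing the error rather than preventing the line) is to replace the placeholder with a real expression; the standard header value for English is:

"Plural-Forms: nplurals=2; plural=(n != 1);\n"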


Get this bounty!!!

#StackBounty: #python #tensorflow Tensorflow: How to retrieve information from the prediction Tensor?

Bounty: 50

I have found a neural network for semantic segmentation purposes. The network works just fine: I feed it my training, validation and test data and I get the output (segmented parts in different colors). Up to this point, all is OK. I am using Keras with Tensorflow 1.7.0, GPU enabled, and Python 3.5.

What I want to achieve though is to get access to the pixel groups (segments) so that I can get their boundaries’ image coordinates, i.e. an array of points which forms the boundary of the segment X shown in green in the prediction image.

How to do that? Obviously I cannot put the entire code here, but here is a snippet that I would need to modify to achieve this:

I have the following in my evaluate function:

def evaluate(model_file):
    net = load_model(model_file, custom_objects={'iou_metric': create_iou_metric(1 + len(PART_NAMES)),
                                                 'acc_metric': create_accuracy_metric(1 + len(PART_NAMES), output_mode='pixelwise_mean')})

    img_size = net.input_shape[1]
    image_filename = lambda fp: fp + '.jpg'
    d_test_x = TensorResize((img_size, img_size))(ImageSource(TEST_DATA, image_filename=image_filename))
    d_test_x = PixelwiseSubstract([103.93, 116.78, 123.68], use_lane_names=['X'])(d_test_x)
    d_test_pred = Predict(net)(d_test_x)
    d_test_pred.metadata['properties'] = ['background'] + PART_NAMES

    d_x, d_y = process_data(VALIDATION_DATA, img_size)
    d_x = PixelwiseSubstract([103.93, 116.78, 123.68], use_lane_names=['X'])(d_x)
    d_y = AddBackgroundMap(use_lane_names=['Y'])(d_y)

    d_train = Join()([d_x, d_y])
    print('losses:', net.evaluate_generator(d_train.batch_array_tuple_generator(batch_size=3), 3))

    # the tensor which needs to be modified
    pred_y = Predict(net)(d_x)
    Visualize(('slices', 'labels'))(Join()([d_test_x, d_test_pred]))
    Visualize(('slices', 'labels', 'labels'))(Join()([d_x, pred_y, d_y]))

As for the Predict function, here is the snippet:

Alternatively, I’ve found that by using the following, one can get access to the tensor:

#    for sample_img, in d_x.batch_array_tuple_generator(batch_size=3, n_samples=5):
#        aa = net.predict(sample_img)
#        indexes = np.argmax(aa,axis=3)
#        print(indexes)
#        import pdb
#        pdb.set_trace()

But I have no idea how this works; I've never used pdb before.
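For what it's worth, that commented snippet runs the network on a batch and takes the argmax over the channel axis, turning the (N, H, W, classes) softmax output into a per-pixel class map; pdb.set_trace() just drops into the interactive debugger so the result can be inspected. Building on that, one way to get the boundary coordinates of a segment (a sketch, assuming OpenCV is available; class_map is a single H x W argmax map):

import numpy as np
import cv2

def segment_boundaries(class_map, class_id):
    # binary mask of the pixels predicted as class_id
    mask = (class_map == class_id).astype(np.uint8)
    res = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    contours = res[-2]  # OpenCV 3 returns 3 values, OpenCV 4 returns 2
    # each contour is an array of (x, y) image coordinates along the boundary
    return [c.reshape(-1, 2) for c in contours]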

In case if anyone wants to also see the training function, here it is:

def train(model_name='refine_res', k=3, recompute=False, img_size=224,
        epochs=10, train_decoder_only=False, augmentation_boost=2, learning_rate=0.001,
        opt='rmsprop'):

    print("Traning on: " + str(PART_NAMES))
    print("In Total: " + str(1 + len(PART_NAMES)) + " parts.")

    metrics = [create_iou_metric(1 + len(PART_NAMES)),
               create_accuracy_metric(1 + len(PART_NAMES), output_mode='pixelwise_mean')]

    if model_name == 'dummy':
        net = build_dummy((224, 224, 3), 1 + len(PART_NAMES))  # 1+ because background class
    elif model_name == 'refine_res':
        net = build_resnet50_upconv_refine((img_size, img_size, 3), 1 + len(PART_NAMES), k=k, optimizer=opt, learning_rate=learning_rate, softmax_top=True,
                                           objective_function=categorical_crossentropy,
                                           metrics=metrics, train_full=not train_decoder_only)
    elif model_name == 'vgg_upconv':
        net = build_vgg_upconv((img_size, img_size, 3), 1 + len(PART_NAMES), k=k, optimizer=opt, learning_rate=learning_rate, softmax_top=True,
                               objective_function=categorical_crossentropy,metrics=metrics, train_full=not train_decoder_only)
    else:
        net = load_model(model_name)

    d_x, d_y = process_data(TRAINING_DATA, img_size, recompute=recompute, ignore_cache=False)
    d = Join()([d_x, d_y])

    # create more samples by rotating top view images and translating
    images_to_be_rotated = {}
    factor = 5
    for root, dirs, files in os.walk(TRAINING_DATA, topdown=False):
        for name in dirs:
            format = str(name + '/' + name)  # construct the format of foldername/foldername
            images_to_be_rotated.update({format: factor})

    d_aug = ImageAugmentation(factor_per_filepath_prefix=images_to_be_rotated, rotation_variance=90, recalc_base_seed=True)(d)
    d_aug = ImageAugmentation(factor=3 * augmentation_boost, color_interval=0.03, shift_interval=0.1, contrast=0.4,  recalc_base_seed=True, use_lane_names=['X'])(d_aug)
    d_aug = ImageAugmentation(factor=2, rotation_variance=20, recalc_base_seed=True)(d_aug)
    d_aug = ImageAugmentation(factor=7 * augmentation_boost, rotation_variance=10, translation=35, mirror=True, recalc_base_seed=True)(d_aug)

    # apply augmentation on the images of the training dataset only

    d_aug = AddBackgroundMap(use_lane_names=['Y'])(d_aug)
    d_aug.metadata['properties'] = ['background'] + PART_NAMES

    # subtract mean and shuffle
    d_aug = Shuffle()(d_aug)
    d_aug, d_val = RandomSplit(0.8)(d_aug)
    d_aug = PixelwiseSubstract([103.93, 116.78, 123.68], use_lane_names=['X'])(d_aug)
    d_val = PixelwiseSubstract([103.93, 116.78, 123.68], use_lane_names=['X'])(d_val)

    # Visualize()(d_aug)

    d_aug.configure()
    d_val.configure()
    print('training size:', d_aug.size())
    batch_size = 4

    callbacks = []
    #callbacks += [EarlyStopping(patience=10)]
    callbacks += [ModelCheckpoint(filepath="trained_models/"+model_name + '.hdf5', monitor='val_iou_metric', mode='max',
                                  verbose=1, save_best_only=True)]
    callbacks += [CSVLogger('logs/'+model_name + '.csv')]
    history = History()
    callbacks += [history]

    # sess = K.get_session()
    # sess.run(tf.initialize_local_variables())

    net.fit_generator(d_aug.batch_array_tuple_generator(batch_size=batch_size, shuffle_samples=True), steps_per_epoch=d_aug.size() // batch_size,
                      validation_data=d_val.batch_array_tuple_generator(batch_size=batch_size), validation_steps=d_val.size() // batch_size,
                      callbacks=callbacks, epochs=epochs)

    return {k: (max(history.history[k]), min(history.history[k])) for k in history.history.keys()}


Get this bounty!!!

#StackBounty: #python #django #python-3.x #django-models Django 2.0 Access Models (CREATE/REMOVE/FILTER) Standalone [without manage.py …

Bounty: 50

I have a Django project and I want to generate some objects (from the models).

What I'm trying to get at: a standalone Python script that creates a bunch of objects and/or filters and deletes them.

After importing the model with from apps.base.models import MyModel
and setting up the configuration as previous StackOverflow questions suggested, I was still not able to run the script.

import os
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myProject.settings")
import django

django.setup()
from apps.base.models import MyModel

Please note that this is on Django version 2.0.6 [Django 2.0+].

Correct settings have been used, (i.e. myProject.settings)

  • After properly configuring everything else, I get the following error:
    RuntimeError: Model class apps.base.models.MyModel doesn't declare an explicit app_label and isn't in an application in INSTALLED_APPS.
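One thing worth checking (a sketch, assuming the app is registered under the label base): this error typically means the model's module is imported outside the app registry's view of it, and declaring the label explicitly on the model can resolve it:

from django.db import models

class MyModel(models.Model):
    class Meta:
        app_label = 'base'  # hypothetical label; must match the app's entry in INSTALLED_APPS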


Get this bounty!!!

#StackBounty: #python #camera Take stills with Picamera() with auto exposure without enabling camera.start_preview()

Bounty: 50

Following this tutorial at https://projects.raspberrypi.org/en/projects/getting-started-with-picamera/6, I am told that I need to use camera.start_preview() and sleep(5) in order to turn on auto exposure for the Raspberry Pi camera. However, doing so also opens a camera preview on my monitor, which covers up my GUI. Is there a way to turn on auto exposure without having to enable camera.start_preview()?
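One possible workaround (a sketch; picamera's start_preview() accepts renderer options, and alpha=0 makes the preview fully transparent, so the auto-exposure warm-up happens without anything visible covering the GUI):

from time import sleep
from picamera import PiCamera

camera = PiCamera()
camera.start_preview(alpha=0)  # preview runs, but is invisible
sleep(5)                       # let AGC/AWB settle
camera.capture('still.jpg')
camera.stop_preview()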


Get this bounty!!!

#StackBounty: #python #mysql #pyqt #pyqt4 Error: "Driver not loaded" in PyQt

Bounty: 50

I'm trying to connect PyQt5 to MySQL. Unfortunately, I get the error 'Driver not loaded'. Python says the driver is there:

from PyQt5.QtSql import QSqlDatabase
print(list(map(str, QSqlDatabase.drivers())))

Output: [‘QSQLITE’, ‘QMYSQL’, ‘QMYSQL3’, ‘QODBC’, ‘QODBC3’, ‘QPSQL’, ‘QPSQL7’]

I use Windows 7, and Qt Designer is installed.

here’s my code:

from PyQt5.QtSql import QSqlDatabase, QSqlQuery, QSqlTableModel
from PyQt5.QtWidgets import QTableView, QApplication
import sys
app = QApplication(sys.argv)
db = QSqlDatabase.addDatabase('QMYSQL')
db.setHostName('****')
db.setDatabaseName('****')
db.setUserName('****')
db.setPassword('****')
ok = db.open()
if not ok: print(db.lastError().text())
else: print("connected")
query = QSqlQuery(db)
query.exec_('SELECT * FROM tbl_Customers')

Does anyone have experience with this? Thank you very much.
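One detail that often matters here: QSqlDatabase.drivers() only lists the plugin files Qt found on disk; the QMYSQL plugin can still fail to load its own dependencies (on Windows, typically the MySQL client DLL, libmysql.dll) when the connection is actually created. A small diagnostic sketch:

from PyQt5.QtSql import QSqlDatabase

print(QSqlDatabase.isDriverAvailable('QMYSQL'))  # plugin file present?
db = QSqlDatabase.addDatabase('QMYSQL')
print(db.isValid())              # False if the plugin failed to load
print(db.lastError().text())     # typically repeats "Driver not loaded"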


Get this bounty!!!

#StackBounty: #python #python-2.7 #networking Download several files from a local server to a client

Bounty: 50

The following code lets me download three files, tmp.bsp, tmp.seq and tmp.dms, from the server to the client. However, only the first file, tmp.dms, is downloaded completely. tmp.seq ends up filled with the contents of tmp.bsp, and tmp.bsp stays at 0 KB.

client:

import socket 

skClient = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
skClient.connect(("127.0.0.1",2525))

sData = "Temp"
sData2 = "Temp"
sData3 = "Temp"

while True:
    sData = skClient.recv(1024)
    fDownloadFile = open("tmp.dms","wb")

    sData2 = skClient.recv(1024)
    fDownloadFile2 = open("tmp.seq","wb")

    sData3 = skClient.recv(1024)
    fDownloadFile3 = open("tmp.bsp","wb")

    while sData:

        fDownloadFile.write(sData)
        sData = skClient.recv(1024)
        fDownloadFile.close()

        fDownloadFile2.write(sData2)
        sData2 = skClient.recv(1024)
        fDownloadFile2.close()

        fDownloadFile3.write(sData3)
        sData3 = skClient.recv(1024)
        fDownloadFile3.close()

    print "Download over"
    break

skClient.close()

n is a counter and the prints are for debugging.
sFileName was used to download a single file; that used to work, but since I want three files, I commented it out.

server:

import socket

host = ''
skServer = socket.socket(socket.AF_INET,socket.SOCK_STREAM)
skServer.bind((host,2525))
skServer.listen(10)
print "Server currently active"

while True:
    Content,Address = skServer.accept()
    print Address
    files = "C:UsersName_userDesktopNetworkingSend_Receive/"
    fUploadFile = open(files+str('tmp.dms'),"rb")
    sRead = fUploadFile.read(1024)

    fUploadFile2 = open(files+str('tmp.seq'),"rb")
    sRead2 = fUploadFile2.read(1024)

    fUploadFile3 = open(files+str('tmp.bsp'),"rb")
    sRead3 = fUploadFile3.read(1024)

    while sRead:
        Content.send(sRead)
        sRead = fUploadFile.read(1024)

        Content.send(sRead2)
        sRead2 = fUploadFile2.read(1024)

#       Content.send(sRead3)   
#       sRead3 = fUploadFile3.read(1024)

    Content.close()
    print "Sending is over"
    break


skServer.close()

Files I'm using: server2.py is my server.
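For reference, the underlying issue is that TCP delivers one continuous byte stream, so three files sent back-to-back need some framing for the receiver to know where each one ends. A sketch of one common approach (a length prefix per file; all names here are mine):

import struct

def recv_exact(conn, n):
    buf = b''
    while len(buf) < n:
        chunk = conn.recv(n - len(buf))
        if not chunk:
            raise EOFError('connection closed early')
        buf += chunk
    return buf

def send_file(conn, path):
    with open(path, 'rb') as f:
        data = f.read()
    conn.sendall(struct.pack('!Q', len(data)))  # 8-byte big-endian length
    conn.sendall(data)

def recv_file(conn, path):
    size = struct.unpack('!Q', recv_exact(conn, 8))[0]
    with open(path, 'wb') as f:
        f.write(recv_exact(conn, size))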


Get this bounty!!!

#StackBounty: #python #python-3.x #setuptools #directory-structure Elegant way to refer to files in data science project

Bounty: 50

I have spent the last few days learning how to structure a data science project to keep it simple, reusable and pythonic. Sticking to this guideline, I have created my_project. You can see its structure below.

├── README.md          
├── data
│   ├── processed          <-- data files
│   └── raw                            
├── notebooks  
|   └── notebook_1                             
├── setup.py              
|
├── settings.py            <-- settings file   
└── src                
    ├── __init__.py    
    │
    └── data           
        └── get_data.py    <-- script  

I defined a function that loads data from ./data/processed. I want to use this function in other scripts and also in Jupyter notebooks located in ./notebooks.

import random

import pandas as pd

def data_sample(code=None):
    df = pd.read_parquet('../../data/processed/my_data')
    if not code:
        code = random.choice(df.code.unique())
    df = df[df.code == code].sort_values('Date')
    return df

Obviously this function won’t work anywhere unless I run it directly in the script where it is defined.
My idea was to create settings.py where I’d declare:

from os.path import join, dirname

DATA_DIR = join(dirname(__file__), 'data', 'processed')

So now I can write:

import os
import random

import pandas as pd

from my_project import settings

def data_sample(code=None):
    file_path = os.path.join(settings.DATA_DIR, 'my_data')
    df = pd.read_parquet(file_path)
    if not code:
        code = random.choice(df.code.unique())
    df = df[df.code == code].sort_values('Date')
    return df
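A common alternative to os.path.join here (a sketch) is pathlib, which expresses the same idea a little more cleanly:

# settings.py
from pathlib import Path

DATA_DIR = Path(__file__).resolve().parent / 'data' / 'processed'

# usage elsewhere:
# df = pd.read_parquet(DATA_DIR / 'my_data')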

Questions:

  1. Is it common practice to refer to files this way? settings.DATA_DIR looks kinda ugly.

  2. Is this at all how settings.py should be used? And should it be placed in this directory? I have seen it in a different spot in this repo, under ./samr/settings.py.

I understand that there might not be ‘one right answer’; I'm just trying to find a logical, elegant way of handling these things.


Get this bounty!!!