#StackBounty: #python #matplotlib #memory-leaks Matplotlib – Fail to allocate bitmap

Bounty: 100

Basically I'm just running a for-loop that plots and saves a bunch of figures as PNGs. Once I'm up to about 25 figures saved in total, I get this "Fail to allocate bitmap" error, even though I make sure to clear the axis, figure and figure window between each one. So what gives?

Here’s my code just in case:

def update_trade_graphs(instruments):
    print('chokepoint 13')
    for i in range(0, len(instruments)):
        if instruments[i].linked_trade is False:
            continue

        # Update variables
        bid = instruments[i].orderbook['bids'][0][0] if len(instruments[i].orderbook['bids']) > 0 else None
        ask = instruments[i].orderbook['asks'][0][0] if len(instruments[i].orderbook['asks']) > 0 else None

        trades[instruments[i].linked_trade].time_series.append(date2num(current_time_mark))
        trades[instruments[i].linked_trade].bid_prices.append(bid)
        trades[instruments[i].linked_trade].ask_prices.append(ask)

        for timespan in timespans_all:
            if timespan not in trades[instruments[i].linked_trade].buy_targets:
                trades[instruments[i].linked_trade].buy_targets[timespan] = []

            trades[instruments[i].linked_trade].buy_targets[timespan].append(instruments[i].minmax[timespan][0])

            if timespan not in trades[instruments[i].linked_trade].sell_targets:
                trades[instruments[i].linked_trade].sell_targets[timespan] = []

            trades[instruments[i].linked_trade].sell_targets[timespan].append(instruments[i].minmax[timespan][1])

        # Plot graph
        fig = plt.figure()
        ax1 = plt.subplot2grid((1, 1), (0, 0))

        for timespan in timespans_all:
            ax1.plot_date(trades[instruments[i].linked_trade].time_series,
                          trades[instruments[i].linked_trade].buy_targets[timespan], '-', label='Buy Target', color='c')
            ax1.plot_date(trades[instruments[i].linked_trade].time_series,
                          trades[instruments[i].linked_trade].sell_targets[timespan], '-', label='Sell Target',
                          color='m')

        ax1.plot_date(trades[instruments[i].linked_trade].time_series, trades[instruments[i].linked_trade].bid_prices,
                      '-', label='Bid', color='r')
        ax1.plot_date(trades[instruments[i].linked_trade].time_series, trades[instruments[i].linked_trade].ask_prices,
                      '-', label='Ask', color='b')

        ax1.axhline(trades[instruments[i].linked_trade].entry_price, linestyle=':', color='c')
        if trades[instruments[i].linked_trade].exit_price > 0:
            ax1.axhline(trades[instruments[i].linked_trade].exit_price, linestyle=':', color='m')

        for label in ax1.xaxis.get_ticklabels():
            label.set_rotation(90)
        ax1.grid(True)

        plt.xlabel('Date')
        plt.ylabel('Price')
        plt.title(trades[instruments[i].linked_trade].symbol)

        plt.subplots_adjust(left=0.09, bottom=0.23, right=0.94, top=0.95, wspace=0.2, hspace=0)

        plt.savefig('trade_{0}.png'.format(instruments[i].linked_trade))
        plt.close(fig)
        plt.clf()
        plt.cla()

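Not an answer from the original post, just a sketch of a common mitigation for this symptom, assuming the figures are only ever saved and never shown interactively: select the non-interactive Agg backend before importing pyplot (so no GUI bitmaps are allocated at all) and reuse a single figure instead of creating a new one per trade. The save_trade_graph helper name below is hypothetical.

import matplotlib
matplotlib.use('Agg')        # must be set before pyplot is imported
import matplotlib.pyplot as plt

fig, ax1 = plt.subplots()    # create one figure up front and keep reusing it

def save_trade_graph(filename):
    # plot onto ax1 here, exactly as in update_trade_graphs() above
    ax1.grid(True)
    fig.savefig(filename)
    ax1.clear()              # wipe the axes for the next trade instead of allocating a brand-new figure each time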

Get this bounty!!!

#StackBounty: #python #environment-variables #anaconda Register customizable environment variables with anaconda

Bounty: 50

By running
conda env export > environment.yml
I can make it easy for people to clone and replicate my environment.

But I also need them to set some environment variables. When using PHP (Laravel), I had a .env file (ignored by git) where the user could put account details, passwords, tokens etc. A file .env.example was provided allowing the user to see the required values. So I implemented that with a python class but it was frowned upon in r/learnpython (“…to give your user rope to hang themselves with”).

After further reading, I created a file named activate in my project root:

export GITHUB_ACCESS_TOKEN="your value goes here"
export BENNO="test"

So the user now just runs source activate to register the variables. But I see several problems:

  • activate is committed; how do I protect the user from accidentally publishing their secrets?
  • After exiting my conda environment, the variable GITHUB_ACCESS_TOKEN was still set. I expected the conda environment to keep a separate set of environment variables.
  • The user has to run the activate script every time they relaunch the terminal.
  • The activation script does not work on Windows.
  • The principle is still the same as the .env.example in PHP, which is supposedly bad?

To summarize, I would like a clean, simple way to store both the dependencies AND customizable environment vars, allowing a simple installation for conda users and, if possible, for a wider set of Python users too. What are some good practices here? Can I somehow list the vars in environment.yml?
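
Not from the original question, but conda's documentation describes per-environment activation scripts for exactly this case; a minimal sketch, assuming a Unix shell, is to keep the secrets in the environment's own directory rather than in the repository:

# $CONDA_PREFIX/etc/conda/activate.d/env_vars.sh  (created by each user, never committed)
export GITHUB_ACCESS_TOKEN="your value goes here"
export BENNO="test"

# $CONDA_PREFIX/etc/conda/deactivate.d/env_vars.sh
unset GITHUB_ACCESS_TOKEN
unset BENNO

Because these scripts run on conda activate and conda deactivate, the variables disappear when the environment is deactivated, and nothing sensitive lives in the project repo.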


Get this bounty!!!

#StackBounty: #python #python-3.x #web-scraping #proxy Unable to rectify the logic within my script to make it stop when it's done

Bounty: 50

I've written a script in Python that uses proxies to scrape the links of different posts while traversing different pages of a website. I've tried to make use of proxies from a list. The script is supposed to take random proxies from the list, send requests to that website and finally parse the items. However, if any proxy is not working, it should be kicked out of the list.

I suppose the way I've paired the proxies and the list of URLs in ThreadPool(10).starmap(make_requests, zip(proxyVault, lead_url)) is accurate.

What I'm trying to do is change my script so that it breaks out as soon as the links are parsed, no matter whether there are still proxies in the list; otherwise the script keeps scraping the same items repeatedly.

import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin
from multiprocessing.pool import ThreadPool
from itertools import cycle

base_url = 'https://stackoverflow.com/questions/tagged/web-scraping'
lead_url = ["https://stackoverflow.com/questions/tagged/web-scraping?sort=newest&page={}&pagesize=15".format(page) for page in range(1,6)]

proxyVault = ['104.248.159.145:8888', '113.53.83.252:54356', '206.189.236.200:80', '218.48.229.173:808', '119.15.90.38:60622', '186.250.176.156:42575']

def make_requests(proxyVault,lead_url):
    while True:   
        pitem = cycle(proxyVault)
        proxy = {'https':'http://{}'.format(next(pitem))}
        try:
            res = requests.get(lead_url,proxies=proxy)
            soup = BeautifulSoup(res.text,"lxml")
            [get_title(proxy,urljoin(base_url,item.get("href"))) for item in soup.select(".summary .question-hyperlink")]
        except Exception:
            try: 
                proxyVault.pop(0)
                make_requests(proxyVault,lead_url)
            except Exception:pass

def get_title(proxy,itemlink):
    res = requests.get(itemlink,proxies=proxy)
    soup = BeautifulSoup(res.text,"lxml")
    print(soup.select_one("h1[itemprop='name'] a").text)

if __name__ == '__main__':
    ThreadPool(10).starmap(make_requests, zip(proxyVault,lead_url))

Btw, the proxies used above are just placeholders.
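
As an aside, here is a hedged sketch (not the poster's code) of one way to make the worker stop: try proxies until one request for the given URL succeeds, drop proxies that fail, and then return instead of looping forever. The timeout value is an arbitrary assumption.

def make_requests_once(proxy_vault, url):
    while proxy_vault:                          # give up only when no proxies are left
        proxy_addr = proxy_vault[0]
        proxy = {'https': 'http://{}'.format(proxy_addr)}
        try:
            res = requests.get(url, proxies=proxy, timeout=10)
            res.raise_for_status()
        except Exception:
            if proxy_addr in proxy_vault:
                proxy_vault.remove(proxy_addr)  # kick the dead proxy out of the list
            continue
        soup = BeautifulSoup(res.text, "lxml")
        for item in soup.select(".summary .question-hyperlink"):
            get_title(proxy, urljoin(base_url, item.get("href")))
        return                                  # this URL is done, so stop here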


Get this bounty!!!

#StackBounty: #python #window #screen #screenshot Getting screenshot via printwindow not redrawing if laptop screen off

Bounty: 50

My goal is to take screenshots of an application while the laptop screen is off, but the screenshot is always the same as the one from just before the screen turned off. The window does not redraw itself once the screen is off and remains frozen.

I'm obtaining a screenshot using PrintWindow from Python (using the method described here: Python Screenshot of inactive window PrintWindow + win32gui).

This method works nicely as long as I have my laptop screen on, but if I turn it off, it simply returns the last image before the screen turned off. I’ve tried using win32gui.RedrawWindow, hoping that this would force a redraw, but I haven’t gotten it to work, even trying all the different flags. I’ve also tried getting screenshots via pyautogui, but this also has the same problem. Is there any way to redraw the application while the laptop screen is off?
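
For reference, a condensed sketch of the linked PrintWindow recipe (the window title 'Calculator' is a placeholder, and pywin32 plus Pillow are assumed to be installed); the question is why this stops producing fresh frames once the laptop display is off.

from ctypes import windll
import win32gui
import win32ui
from PIL import Image

hwnd = win32gui.FindWindow(None, 'Calculator')           # placeholder window title
left, top, right, bottom = win32gui.GetWindowRect(hwnd)
width, height = right - left, bottom - top

hwnd_dc = win32gui.GetWindowDC(hwnd)
mfc_dc = win32ui.CreateDCFromHandle(hwnd_dc)
save_dc = mfc_dc.CreateCompatibleDC()
bitmap = win32ui.CreateBitmap()
bitmap.CreateCompatibleBitmap(mfc_dc, width, height)
save_dc.SelectObject(bitmap)

# ask the window to paint itself into the off-screen bitmap, even if it is not visible
windll.user32.PrintWindow(hwnd, save_dc.GetSafeHdc(), 0)

info = bitmap.GetInfo()
pixels = bitmap.GetBitmapBits(True)
img = Image.frombuffer('RGB', (info['bmWidth'], info['bmHeight']), pixels, 'raw', 'BGRX', 0, 1)
img.save('capture.png')

# release GDI resources
win32gui.DeleteObject(bitmap.GetHandle())
save_dc.DeleteDC()
mfc_dc.DeleteDC()
win32gui.ReleaseDC(hwnd, hwnd_dc)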


Get this bounty!!!

#StackBounty: #javascript #python #jupyter-notebook #clipboard Copy to clipboard in jupyter notebook

Bounty: 50

I'd like to implement a copy-to-clipboard feature in a Jupyter notebook.

The Jupyter notebook is running remotely, so I cannot use pandas.to_clipboard or pyperclip; I have to use JavaScript.

This is what I came up with:

def js_code_copy(content):
    return """
var body = document.getElementsByTagName('body')[0];
var tmp_textbox = document.createElement('input');
body.appendChild(tmp_textbox);
tmp_textbox.setAttribute('value', '{content}');
tmp_textbox.select();
document.execCommand('copy');
body.removeChild(tmp_textbox);
""".format(content=content.replace("'", '\'+"'"))

Note that the code does what it’s supposed to if I run it in my browser’s console.

However, if I run it in jupyter with:

from IPython.display import display, Javascript
content = "boom"
display(Javascript(js_code_copy("Copy me to clipboard")))

Nothing works.

Any ideas?
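
One escaping-related sketch, not from the original post: building the JavaScript string literal with json.dumps sidesteps manual quote handling, in case the single-quote replacement above is part of the problem.

import json

def js_code_copy(content):
    # json.dumps emits a valid JavaScript string literal, escaping quotes and newlines
    return """
var body = document.getElementsByTagName('body')[0];
var tmp_textbox = document.createElement('input');
body.appendChild(tmp_textbox);
tmp_textbox.setAttribute('value', {content});
tmp_textbox.select();
document.execCommand('copy');
body.removeChild(tmp_textbox);
""".format(content=json.dumps(content))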


Get this bounty!!!

#StackBounty: #python #machine-learning #hyperparameters #hyperopt qloguniform search space setting issue in Hyperopt

Bounty: 50

I am working on using hyperopt to tune my ML model but am having trouble using qloguniform as the search space. I am using the example from the official wiki and have changed only the search space.

import pickle
import time
#utf8
import pandas as pd
import numpy as np
from hyperopt import fmin, tpe, hp, STATUS_OK, Trials

def objective(x):
    return {
        'loss': x ** 2,
        'status': STATUS_OK,
        # -- store other results like this
        'eval_time': time.time(),
        'other_stuff': {'type': None, 'value': [0, 1, 2]},
        # -- attachments are handled differently
        'attachments':
            {'time_module': pickle.dumps(time.time)}
        }
trials = Trials()
best = fmin(objective,
    space=hp.qloguniform('x', np.log(0.001), np.log(0.1), np.log(0.001)),
    algo=tpe.suggest,
    max_evals=100,
    trials=trials)
pd.DataFrame(trials.trials)

But getting the following error.

ValueError: ('negative arg to lognormal_cdf', array([-3.45387764,
-3.45387764, -3.45387764, -3.45387764, -3.45387764,
-3.45387764, -3.45387764, -3.45387764, -3.45387764, -3.45387764,
-3.45387764, -3.45387764, -3.45387764, -3.45387764, -3.45387764,
-3.45387764, -3.45387764, -3.45387764, -3.45387764, -3.45387764,
-3.45387764, -3.45387764, -3.45387764, -3.45387764]))

I have tried it without the log transform, as below, but then the output values turn out to be exponentiated (e.g. 1.017, 1.0008, 1.02456), which is wrong, even though it is consistent with the documentation.

hp.qloguniform('x', 0.001,0.1, 0.001)
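
For comparison, a sketch based on my reading of the documented formula (an assumption, not part of the original post): hp.qloguniform draws round(exp(uniform(low, high)) / q) * q, so low and high are given on the log scale while q stays on the original scale, and passing np.log(0.001) as q makes it negative.

space = hp.qloguniform('x', np.log(0.001), np.log(0.1), 0.001)  # q left untransformed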

Thanks


Get this bounty!!!

#StackBounty: #python #json #python-3.x Adding nodes to json in python

Bounty: 50

I am trying to generate custom JSON in python using the following code

root={}
Levels=[['L1','L1','L2'],
        ['L1','L1','L3'],
        ['L1','L2'],
        ['L2','L2','L3'],
        ['L2','L2','L1'],
        ['L3','L2'],
        ['L4','L2','L1'],
        ['L4','L2','L4']]

def append_path(root, paths):
    if paths:
        child = root.setdefault(paths[0], {})
        append_path(child, paths[1:])

for p in Levels:
    append_path(root, p)

def convert(d):
    templist=[]
    noofchildren=0
    if(len(d.items())==0):
        return ([{}],1)
    for k,v in d.items():
        temp,children=convert(v)
        noofchildren+=children
        if(temp):
            templist.append({"name":k+"("+str(children)+")",'children':temp})
        else:
            templist.append({'name': k+"("+str(children)+")", 'children':[{}]})

    return (templist,noofchildren)    

# Print results
import json
print(json.dumps(convert(root)[0],  indent=2))

and the OUTPUT is

[
  {
    "name": "L1(3)",
    "children": [
      {
        "name": "L1(2)",
        "children": [
          {
            "name": "L2(1)",
            "children": [
              {}
            ]
          },
          {
            "name": "L3(1)",
            "children": [
              {}
            ]
          }
        ]
      },
      {
        "name": "L2(1)",
        "children": [
          {}
        ]
      }
    ]
  },
  {
    "name": "L2(2)",
    "children": [
      {
        "name": "L2(2)",
        "children": [
          {
            "name": "L3(1)",
            "children": [
              {}
            ]
          },
          {
            "name": "L1(1)",
            "children": [
              {}
            ]
          }
        ]
      }
    ]
  },
  {
    "name": "L3(1)",
    "children": [
      {
        "name": "L2(1)",
        "children": [
          {}
        ]
      }
    ]
  },
  {
    "name": "L4(2)",
    "children": [
      {
        "name": "L2(2)",
        "children": [
          {
            "name": "L1(1)",
            "children": [
              {}
            ]
          },
          {
            "name": "L4(1)",
            "children": [
              {}
            ]
          }
        ]
      }
    ]
  }
]

My dataset has changed a little bit

Levels=[[['L1','L1','L2'],[10,20,30]],
        [['L1','L1','L3'],[10,15,20]],
        [['L1','L2'],[20,10]],
        [['L2','L2','L3'],[20,20,30]],
        [['L2','L2','L1'],[10,20,30]],
        [['L3','L2'],[10,20]],
        [['L4','L2','L1'],[10,20,10]],
        [['L4','L2','L4'],[20,40,50]]]

and the output that I want includes the average of the values at each level along with the count:

[
  {
    "name": "L1(3)#(13)", // taking avg of 10,10,20
    "children": [
      {
        "name": "L1(2)#(17)", // taking avg of 20,15
        "children": [
          {
            "name": "L2(1)#(30)",
            "children": [
              {}
            ]
          },
          {
            "name": "L3(1)#(20)",
            "children": [
              {}
            ]
          }
        ]
      },
      {
        "name": "L2(1)#10",
        "children": [
          {}
        ]
      }
    ]
  },
  {
    "name": "L2(2)#(15)", // avg of 20,10
    "children": [
      {
        "name": "L2(2)#(20)", // avg of 20,20
        "children": [
          {
            "name": "L3(1)#(30)",
            "children": [
              {}
            ]
          },
          {
            "name": "L1(1)#(30)",
            "children": [
              {}
            ]
          }
        ]
      }
    ]
  },
  {
    "name": "L3(1)#(10)",
    "children": [
      {
        "name": "L2(1)#(10)",
        "children": [
          {}
        ]
      }
    ]
  },
  {
    "name": "L4(2)#(15)",// avg of 10,20
    "children": [
      {
        "name": "L2(2)#(30)", // avg of 20,40
        "children": [
          {
            "name": "L1(1)# (10)",
            "children": [
              {}
            ]
          },
          {
            "name": "L4(1)#(50)",
            "children": [
              {}
            ]
          }
        ]
      }
    ]
  }
]

How can I change my code to add this information?
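
Here is a hedged sketch of one possible rework (my assumption about how the new Levels pairs could be threaded through, not the poster's solution; the rounding and number formatting in the name string are illustrative): store the numeric values on each node while building the tree, then average them in convert.

def append_path(root, path, values):
    if path:
        child = root.setdefault(path[0], {'_values': [], '_children': {}})
        child['_values'].append(values[0])
        append_path(child['_children'], path[1:], values[1:])

def convert(d):
    if not d:
        return [{}], 1
    templist = []
    noofchildren = 0
    for k, node in d.items():
        children, count = convert(node['_children'])
        noofchildren += count
        avg = sum(node['_values']) / float(len(node['_values']))
        templist.append({'name': '{0}({1})#({2:g})'.format(k, count, avg),
                         'children': children})
    return templist, noofchildren

root = {}
for path, values in Levels:
    append_path(root, path, values)

import json
print(json.dumps(convert(root)[0], indent=2))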


Get this bounty!!!

#StackBounty: #python #build #egg Can't locate a python script from error message

Bounty: 50

I am trying to trace the origin of a python error message that I am getting when I try to run my code test.py.

The module (which is called by test.py) that I am trying to trace from the error output is apparently:

build/bdist.linux-x86_64/egg/george/gp.py

The error message snippet:

File "build/bdist.linux-x86_64/egg/george/gp.py", line 498, in
    predict
  1. I can find build/bdist.linux-x86_64/ but it is empty. Maybe it’s not the ‘right one’.
  2. I have also found a different version of gp.py, but when I make changes to that, nothing happens, so test.py is not calling that version.

All I want to do is find the code in which the error is occurring so that I can add some more outputs to it to figure out what is going wrong.


Here is the error message:

Traceback (most recent call last):
  File "test.py", line 213, in <module>
    mumc, dummy = gp1.predict(residuals, dates, kernel = kernelprime )
  File "build/bdist.linux-x86_64/egg/george/gp.py", line 511, in predict
  File "build/bdist.linux-x86_64/egg/george/solvers/basic.py", line 87, in apply_inverse
  File "/home/me/.local/lib/python2.7/site-packages/scipy/linalg/decomp_cholesky.py", line 174, in cho_solve
    b1 = asarray_chkfinite(b)
  File "/home/me/.local/lib/python2.7/site-packages/numpy/lib/function_base.py", line 1219, in asarray_chkfinite
"array must not contain infs or NaNs")
ValueError: array must not contain infs or NaNs

So obviously, at some point down the line, I am feeding an array that contains infs or NaNs into some scipy or numpy code that it doesn't like. But to see why the values are infs or NaNs in the first place, it seems like whatever is going wrong is happening in the predict method.

(gp1 is a class which is also defined in the gp.py code!)
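
A small sketch, not from the original post, that usually settles which file is actually being imported: ask the module itself instead of trusting the build/bdist path baked into the egg's traceback.

import george
import george.gp

print(george.__file__)     # where the imported package lives on disk
print(george.gp.__file__)  # the gp module that test.py is really using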


Get this bounty!!!

#StackBounty: #python #python-3.x #function #if-statement Function return with different conditional statement

Bounty: 50

I have a simple function that I'd like to call and return some values from. Inside the function there is an if/elif/else statement: when the if or elif condition is met, return some values; when neither is fulfilled, run and display whatever is under the else branch. I use a widget alert to flag and state the problem.

The problem is:

1. When the function is called, it returns only whatever is under the else branch, despite the if condition being fulfilled.

2. If I remove all the code lines under else and just run the if and elif, it returns some values when the conditions are met, but otherwise it raises TypeError: 'NoneType' object is not iterable.

The code:

from PyQt5 import QtCore, QtWidgets, QtGui

def fun( x, y, z):
    X = x
    Y = y
    Z = z

    for i in range(0,Z):
        R = i * X/Y

        if R == 10:
            return R, i
        elif 10 < R <= 45:
            return R, i
        else:
            print('Error') 
            app = QtWidgets.QApplication([])
            error_dialog = QtWidgets.QErrorMessage()
            error_dialog.showMessage('Error!! change values')
            app.exec_() 
            return R, i

These are the values I use to fulfill the conditions:

result, prod = fun(10, 60, 100)
result, prod = fun(105, 60, 100)
result, prod = fun(10, 600, 100)

Input with else statement:

result, prod = fun(10, 60, 100)
print( result, prod)

Output with else statement:

Error window shows up

Error
0.0 0

Input without else statement:

result, prod = fun(10, 60, 100)
print( result, prod)

Output without else statement:

10.0 60

I want to keep the statements and return the values as desired. Thanks for your help.
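
For what it is worth, a hedged sketch of one common restructuring (my assumption about the intended behaviour, not the poster's requirement): keep looping until a matching R is found and only show the alert after the whole loop has failed, since returning from the else branch fires on the very first iteration (i = 0 gives R = 0).

def fun(x, y, z):
    for i in range(z):
        r = i * x / y
        if r == 10 or 10 < r <= 45:
            return r, i
    # no iteration satisfied the conditions: show the alert once, then signal failure
    app = QtWidgets.QApplication([])
    error_dialog = QtWidgets.QErrorMessage()
    error_dialog.showMessage('Error!! change values')
    app.exec_()
    return None, None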


Get this bounty!!!

#StackBounty: #python #python3 #anaconda #ide Can't change python interpreter in Visual Studio Code on Mac

Bounty: 50

In my console (iterm2)
which python gives /Users/anders/anaconda3/bin/python

In Visual Studio Code's built-in terminal
which python gives /usr/bin/python

Since I want to use Anaconda's Python installation, I use CMD+SHIFT+P (Python: Select Interpreter), and there I see ~/anaconda3/bin/python, so I select that one.

However, this does not take effect in my terminal. I have tried the following to make it show up when running which python:

  • Open a new terminal tab
  • Restart program

And if I go back to confirm the active interpreter, it does say Anaconda. But the terminal still uses the one from /usr/bin/python. What's going on here?
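
One quick check, not from the original post: printing sys.executable from inside a script run via VS Code shows which interpreter the selection actually controls, which need not match what which python resolves to in the integrated terminal's PATH.

import sys
print(sys.executable)  # the interpreter VS Code launched, regardless of the shell's PATH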


Get this bounty!!!