#StackBounty: #libreoffice #performance #opencl LibreOffice Draw is very slow with big projects

Bounty: 50

I am making free educational books in Draw. I use it because it provides exactly the design tools I need. However, for large projects, say over 100 slides, the program gets very slow and laggy.

My setup:

  • Intel® Core™ i7-7700HQ CPU @ 2.80GHz × 8
  • GeForce GTX 1050/PCIe/SSE2
  • Samsung SSD
  • 31.3 GiB of memory

Draw Version:

I have Ubuntu 18.04 and the Nvidia 390 driver.

I would expect this setup to handle big projects easily, yet it struggles with Draw. Maybe I am missing something. For example, do I have to use OpenGL and/or OpenCL with Draw? If I activate either of them I do not see any major improvement, and with OpenGL activated the Draw UI gets very buggy.

Do I have to install or check anything within my system?


Get this bounty!!!

#StackBounty: #python #performance #programming-challenge #recursion #matrix Codewars: N-dimensional Von Neumann Neighborhood in a matrix

Bounty: 50


For a challenge I am creating on Codewars I need a very performant function that calculates the Von Neumann neighbourhood in an N-dimensional array. This function will be called about 2000 times.

The basic recursive approach:

  • calculate the index span influenced by the distance
  • if the index is in the range of the matrix, go one step deeper into the next dimension
  • if the maximum dimension is reached, append the value to the global neigh list

isCenter is just a flag that prevents the cell itself from being included in the neighbourhood. There is also remaining_distance, which reduces the span.

You probably do not need to understand the underlying math in depth, but maybe someone experienced with Python can point me to some basic performance-upgrade potential in the code.


  • What I am curious about: is .append inefficient? I have heard that list comprehensions are better than append.
  • Would changing not (0 <= dimensions_coordinate < len(arr)) to len(arr) <= dimensions_coordinate or dimensions_coordinate < 0 speed up the code?
  • Are there performance differences between == and is?
  • Is dimensions = len(coordinates) ... if curr_dim == dimensions: ... slower than if curr_dim == len(coordinates)?
  • If you understood the math, do you see a way to do it iteratively? I have heard that recursion is slow in Python, and theoretical computer science says "everything recursive can be made iterative".

I would be very thankful if you could answer at least some of these questions or point out shortcomings that I don't see.
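On the first bullet, a quick micro-benchmark is the most direct way to check; this is my own sketch, not from the post:

```python
import timeit

# Micro-benchmark: building a list with repeated .append
# vs. a list comprehension (illustrative numbers only).
append_time = timeit.timeit(
    'r = []\nfor i in range(1000):\n    r.append(i)', number=1000)
comp_time = timeit.timeit('r = [i for i in range(1000)]', number=1000)
print(f'append: {append_time:.3f}s  comprehension: {comp_time:.3f}s')

# `is` vs `==` is a correctness question rather than a speed question:
# `is` compares identity, `==` compares values, so they are not
# interchangeable for comparing coordinates.
```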

The whole code:

  • matrix is an N-dimensional matrix.
  • coordinates of the cell is an N-length tuple.
  • distance is the reach of the neighbourhood.

def get_neighbourhood(matrix, coordinates, distance=1):
    dimensions = len(coordinates)
    neigh = []
    app = neigh.append

    def recc_von_neumann(arr, curr_dim=0, remaining_distance=distance, isCenter=True):
        # the breaking statement of the recursion
        if curr_dim == dimensions:
            if not isCenter:
                app(arr)
            return

        dimensions_coordinate = coordinates[curr_dim]
        if not (0 <= dimensions_coordinate < len(arr)):
            return

        dimension_span = range(dimensions_coordinate - remaining_distance,
                               dimensions_coordinate + remaining_distance + 1)
        for c in dimension_span:
            if 0 <= c < len(arr):
                recc_von_neumann(arr[c],
                                 curr_dim + 1,
                                 remaining_distance - abs(dimensions_coordinate - c),
                                 isCenter and dimensions_coordinate == c)

    recc_von_neumann(matrix)
    return neigh
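On the last question: the same neighbourhood can be computed iteratively by enumerating Manhattan-ball offsets directly instead of recursing per dimension. This is a sketch assuming a rectangular, non-empty matrix; the function name and helpers are mine, not from the post:

```python
from itertools import product

def get_neighbourhood_iterative(matrix, coordinates, distance=1):
    dims = len(coordinates)

    # Derive the matrix shape by walking down the nesting once
    shape = []
    sub = matrix
    for _ in range(dims):
        shape.append(len(sub))
        sub = sub[0]

    neigh = []
    for offset in product(range(-distance, distance + 1), repeat=dims):
        # Von Neumann neighbourhood: Manhattan distance <= distance,
        # excluding the centre cell itself
        if sum(abs(o) for o in offset) > distance or not any(offset):
            continue
        cell = [c + o for c, o in zip(coordinates, offset)]
        if all(0 <= ci < si for ci, si in zip(cell, shape)):
            value = matrix
            for ci in cell:
                value = value[ci]
            neigh.append(value)
    return neigh
```

The visit order differs from the recursive version, but the set of collected values is the same.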


#StackBounty: #boot #virtualbox #xubuntu #performance #user-space Xubuntu User Space boot takes very long

Bounty: 50

I found similar topics about slow kernel boots, but my problem is that my system takes 3 minutes to load the user space.

systemd-analyze gives the following output:

Startup finished in 4.247s (kernel) + 3min 743ms (userspace) = 3min 4.991s
graphical.target reached after 1min 35.812s in userspace

Is there a way to identify what exactly is taking so much time?
I'm running Xubuntu 18.04 in VirtualBox, and I think the problem started after I enlarged my partition (including recreating the swap partition).

Output of systemd-analyze critical-chain:

graphical.target @1min 35.812s
└─multi-user.target @1min 35.812s
  └─docker.service @1min 32.815s +2.996s
    └─network-online.target @1min 32.814s
      └─NetworkManager-wait-online.service @1min 31.863s +951ms
        └─NetworkManager.service @1min 31.075s +784ms
          └─dbus.service @1min 30.640s
            └─basic.target @1min 30.529s
              └─sockets.target @1min 30.529s
                └─docker.socket @1min 30.498s +30ms
                  └─sysinit.target @1min 30.493s
                    └─apparmor.service @979ms +695ms
                      └─local-fs.target @947ms
                        └─media-aj-VBox_GAs_5.2.81.mount @1min 39.649s
                          └─clean-mount-point@media-aj-VBox_GAs_5.2.81.service @
                            └─system-cleanx2dmountx2dpoint.slice @1min 39.668s
                              └─system.slice @272ms
                                └─-.slice @266ms
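To narrow down where the time goes, systemd's own tooling is the usual first step. These are my suggested diagnostics, not from the post; the unit name in the last command is taken from the critical chain above, and `boot.svg` is just a filename I chose:

```shell
#!/bin/sh
# Guarded so the commands are simply skipped on systems without systemd.
if command -v systemd-analyze >/dev/null 2>&1; then
    # Per-unit startup cost, slowest first
    systemd-analyze blame | head -n 20
    # Full boot timeline as an SVG, useful for spotting serialized waits
    systemd-analyze plot > boot.svg
fi
if command -v journalctl >/dev/null 2>&1; then
    # Logs of a suspect unit from the critical chain above
    journalctl -b -u NetworkManager-wait-online.service --no-pager | tail -n 20
fi
```

The `media-aj-VBox_GAs_5.2.81.mount` entry at 1min 39s in the chain above would be a natural first unit to inspect this way.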


#StackBounty: #c# #performance #json #json.net Looping JSON in WebAPI Controller and Add new Property

Bounty: 50

I have a JSON array that is being passed into a function. Whenever the function comes across a field (call it Field1) in a record whose value starts with "@!!!@", it compiles those values into a list to fire off to another server. The code in the initial function looks like this:

public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, TraceWriter log)
{
    dynamic request = await req.Content.ReadAsAsync<dynamic>();

    JObject data = (JObject)request;
    var directive = data["Directive"];

    var json = data.Last.First;

    string url = null;

    if (directive.ToString() == "D")
        url = "registry/get";
    else if (directive.ToString() == "S")
        url = "registry/sanitize";

    JArray payloadArray = (JArray)json;

    string newToken = null;

    List<JObject> objList = new List<JObject>();

    for (int i = 0; i <= payloadArray.Count() - 1; i++)
    {
        string newJson = null;
        foreach (var prop in payloadArray[i])
        {
            newJson = newJson + prop.ToString() + ",";
            if (prop.ToString().Contains("@!!!@"))
            {
                JObject newProp = new JObject();
                newJson = newJson + "\"" + newProp.Properties().First().Name + "\":\"sanitize|" + prop.First + "\"," + "\"@" + newProp.Properties().First().Name.ToString() + "\":\"" + prop.First + "\",";
                // newJson = newJson + "\"@" + newProp.Properties().First().Name.ToString() + "\":\"sanitize|" + prop.First + "\",";
                newToken = newToken + "{\"Token\":\"" + prop.First + "\",\"ProcessId\":\"" + prop.First.ToString().Replace("@!!!@", "") + "\"},";
            }
        }
        objList.Add(JObject.Parse("{" + newJson + "}"));
    }

    string outGoingPayload = "{\"Registry\":[" + newToken + "]}";

    var content = new StringContent(outGoingPayload.ToString(), Encoding.UTF8, "application/json");

    HttpResponseMessage response = MakeRequest(outGoingPayload.ToString());

    var responseBodyAsText = response.Content.ReadAsStringAsync();
    JObject responseJson = JObject.Parse(responseBodyAsText.Result);
    int counter = 0;
    foreach (JObject item in objList)
    {
        foreach (var itm in item)
        {
            if (itm.Value.ToString().Contains("sanitize|@!!!@"))
            {
                try
                {
                    foreach (var resItem in responseJson)
                    {
                        if (resItem.Value[counter]["ProcessId"].ToString() == itm.Value.ToString().Replace("sanitize|@!!!@", ""))
                        {
                            // (replacement statement elided in the original post)
                        }
                    }
                }
                catch (Exception ex)
                {
                    itm.Value.Replace("Token not found");
                }
            }
        }
    }

    string jsonStr = null;
    foreach (var val in objList)
        jsonStr = jsonStr + val + ",";

    jsonStr = "[" + jsonStr.TrimEnd(',') + "]";
    var returnArray = JsonConvert.DeserializeObject(jsonStr);

    return req.CreateResponse(HttpStatusCode.OK, "returnArray");
}
For a JSON payload of 1000 records this takes 5000 ms to run. What can I do to improve performance here? The setup is: the payload passed into the function contains all records, and I must build up a new payload to pass to the remote service to get the corresponding values, which I do as a single HttpClient request. So I loop over the initial payload, build up the new payload, call the remote service, get back all matches, then loop over the initial payload again and add the extra field where appropriate. I am trying to get this function to return a bit faster. I have tried using LINQ to JSON, and I have tried treating the JSON as a string; the code I have posted seems to be the fastest. I can provide more information if needed.

Sample payload to send to this function:


Sample of what is sent to the intermediate function (I cannot touch that one; it's already optimized). My function should build this from the above payload:

{ "Wrapper":[

Return from intermediate service:

{ "Wrapper":[
{"Token":"@!!!@17e9ad37968e25893e96855ba3d633e250a401a6584b2bc9c7288f9fc458a9b6", "Value":"test"},
{"Token":"@!!!@008d613d1ca60885468bf274daa693cc778430fc8a539bdf2e7dc2dec88cd922", "Value":"test2"}

Back in my function both the original payload and the return payload from the intermediate function should be merged and returned like this:

 "Field3":"@!!!@17e9ad37968e25893e96855ba3d633e250a401a6584b2bc9c7288f9fc458a9b6 ",



#StackBounty: #magento2 #magento-2.1 #catalogsearch #performance #indexer Improve performance – Catalog Search Fulltext

Bounty: 50

Do you have any suggestions on how I can improve the performance of the catalogsearch_fulltext indexer for Magento 2.1.x?

I have already set this indexer to 'Update by Schedule', so only updated products are refreshed, thanks to the Mview pattern.

I use Elasticsearch as the search engine, but the problem is not really related to it.

I have around

  • 20k products
  • 7k products visible in search by store
  • 31 stores
  • less than 10 new custom searchable attributes

For a full re-index, Magento takes around 10 minutes per store, so around 5 hours for all stores.

I plan to re-index each store in parallel and to increase the server's CPU and memory, because each catalog search index seems to be independent per store.

I import products with a differential approach, but sometimes my customer needs a full product import.

Edit: I did a POC with https://github.com/amphp/parallel, reducing the time from 5 hours to 14 minutes using a c5.9xlarge EC2 instance with 36 vCPUs (more than $1000 per month). But I want to know if there is an alternative solution.


#StackBounty: #performance #iis-7.5 #.net IIS: Slow time-taken does not match with application api response time

Bounty: 50

We have an API microservices architecture.

There is an API endpoint ("initialise") that is called by the native app. This "initialise" endpoint in turn calls four other endpoints.

Since the client has reported slowness with "initialise", we decided to check the endpoint with a load test.

After a bit of checking with the IIS log's time-taken field, we identified that the response time of one of the API endpoints called by "initialise", "customer", is slow, varying between 4 and 5 seconds. From a code perspective, the "customer" endpoint takes 300 milliseconds, which we verified through application-level logs.

If we are to believe both the application-level logs and the IIS time-taken field, where are the remaining 3.5+ seconds spent?

After reading through IIS: How to tell if a slow time-taken is due to a slow network connection, which advises checking with Wireshark, I have not yet gone down the path of asking the networking team for a packet analysis, but do you see anything else I could check?

The same URL also suggests enabling HTTP_SEND_RESPONSE_FLAG_BUFFER_DATA (read: https://blogs.msdn.microsoft.com/wndp/2006/08/15/buffering-in-http-sys/), but I don't know how to enable it on Windows 2008 R2 servers.

Please note that we are using .NET 4.x with a 32-bit app pool.

Does somebody know the best way to analyse a memory dump (which tool should I use?) and, most importantly, what to look for in it?

Also, are there any performance counters that could shed some light?



#StackBounty: #javascript #performance #node.js #interview-questions #web-services Find currency exchange rates

Bounty: 50


Design a service to fetch exchange rates from a remote resource and
then calculate the exchange rate for each currency pair.

The remote resource contains the exchange rates of each currency in

This is an interview assignment and I came up with an easy solution.


'use strict';

const joi = require('joi');

const api = require('./api');
const Exchange = require('./exchange');
const xmlParser = require('./parse-xml');

const schema = joi.object().keys({
  source: joi.string().required().min(3).max(3).example('EUR'),
  target: joi.string().required().min(3).max(3).example('GBP')
});

const defaults = {
  timeout: 1000 // 1 sec
};

const exchange = async (pair, options = {}) => {
  options = Object.assign({}, defaults, options);
  const {source, target} = joi.attempt(pair, schema);

  const {requestApi = api, parser = xmlParser} = options;

  const exchange = new Exchange(requestApi, parser, options);
  const rate = await exchange.convert({source, target});
  return {source, target, rate};
};

module.exports = exchange;


'use strict';

const URL = 'https://www.ecb.europa.eu/stats/eurofxref/eurofxref-daily.xml';

class Exchange {
  constructor(api, parser, options = {}) {
    this.api = api;
    this.options = options;
    this.parser = parser;
  }

  async convert({source, target}) {
    if (!this.xml) {
      await this.fetch();
      this.euroToAll = this.parser(this.xml);
    }
    const euroToSource = this.euroToAll[source];
    const euroToTarget = this.euroToAll[target];
    return exchange(euroToSource, euroToTarget);
  }

  async fetch() {
    const response = await this.api.fetch(URL, this.options);
    this.xml = response.body || '';
  }
}

function exchange(from, to) {
  return round(parseFloat(to) / parseFloat(from));
}

function round(result, digits = 4) {
  return Math.round(result * (10 ** digits)) / (10 ** digits);
}

module.exports = Exchange;


'use strict';

const xmldoc = require('xmldoc');
const debug = require('debug')('exchange-rate:parse');

const currencies = require('./currencies');

const parse = xml => {
  const doc = new xmldoc.XmlDocument(xml);
  const cube = doc.childNamed('Cube').childNamed('Cube');

  const rates = currencies.reduce(
    (accumulator, currency) => {
      const exchange = cube.childWithAttribute('currency', currency);
      if (exchange) {
        const {rate} = exchange.attr;
        accumulator[currency] = rate;
      } else {
        debug(`Node not found for currency: ${currency}`);
      }
      return accumulator;
    },
    {}
  );

  // Add EUR rate to make it consistent
  rates.EUR = '1.0';
  return rates;
};

module.exports = parse;


'use strict';

const got = require('got');

module.exports = {
  async fetch(url, options = {}) {
    return got(url, options);
  }
};


  1. What if in the future we need to add different providers with different representations? How can I make it more flexible and keep the core logic decoupled?
  2. I am also curious whether the design of the API from the client's perspective is good, or whether it can be improved.
  3. In Node.js we can define dependencies via require, but I found them difficult to mock for testing, so in a couple of places I have tried to pass dependencies via arguments. Is this fine?
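On question 3: passing dependencies in as constructor arguments, as the code above already does, is a common and reasonable pattern in Node.js. A minimal self-contained sketch of how it pays off in tests; the `Converter` class, stub names, and rate values here are made up for illustration, not part of the original code:

```javascript
'use strict';

// Constructor injection: the converter receives its fetcher and parser
// as arguments, so a test can pass plain stub functions instead of the
// real got-based api module and XML parser.
class Converter {
  constructor(fetcher, parser) {
    this.fetcher = fetcher;
    this.parser = parser;
  }

  async convert(source, target) {
    const xml = await this.fetcher();   // e.g. the ECB XML document
    const rates = this.parser(xml);     // e.g. { EUR: '1.0', GBP: '0.9', ... }
    return parseFloat(rates[target]) / parseFloat(rates[source]);
  }
}

// In a unit test, both dependencies become trivial stubs:
const stubFetcher = async () => '<not-a-real-document/>';
const stubParser = () => ({EUR: '1.0', GBP: '0.9', USD: '1.1'});

new Converter(stubFetcher, stubParser)
  .convert('USD', 'GBP')
  .then(rate => console.log(rate)); // 0.9 / 1.1
```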


#StackBounty: #sql-server #performance #clustered-index When Azure recommends an index (part of natural key) with all other columns inc…

Bounty: 50

The table does have an identity key (the current clustered index), but it is barely used in queries. Because the natural key is not ever-increasing, I am afraid of insert performance, fragmentation, or other problems that I don't foresee now.

The table is not wide, with just a few columns. It has about 8 million rows and is bringing our site to a halt during peak times (1000+ concurrent users). The data is not easily cacheable, because it is quite volatile and it is essential that it stays up to date.

There are a lot of reads on one column of the natural key, but also quite active inserting and updating; say 8 reads vs. 1 update vs. 1 insert.

Id (PK)         int
UserId*         int
Key1*           varchar(25)
Key2*           varchar(25)
Key3*           int
LastChanged     datetime2(7)
Value           varchar(25)
Invalid         bit

* this combination is the natural primary key

Most of the time I need to query:

  • All rows for one UserId (most queried)
  • All rows for a list of UserIds (a lot of rows)
  • All rows for a list of UserIds with Key1 = X
  • All rows for a list of UserIds with Key2 = X
  • All rows for a list of UserIds with Key1 = X and Key2 = X
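For the query patterns above, one option worth profiling is to keep the identity column as the clustered primary key and add a covering nonclustered index leading on UserId. This is only a sketch: the table name is a placeholder, and the column list assumes the schema shown above:

```sql
-- Sketch only: "dbo.UserValues" is a hypothetical name.
-- Leading on UserId serves all five query shapes; Key1 and Key2 help the
-- filtered variants, and INCLUDE makes the index covering so the reads
-- avoid key lookups into the clustered index.
CREATE NONCLUSTERED INDEX IX_UserValues_UserId_Key1_Key2
ON dbo.UserValues (UserId, Key1, Key2)
INCLUDE (Key3, LastChanged, Value, Invalid);
```

This keeps inserts appending to the ever-increasing clustered key while paying the write cost only in the narrower nonclustered index.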

I know the final answer is always "profile it", but we are under quite a time constraint here, so any guidance or experienced opinions in advance would really be appreciated.

Thanks in advance,


#StackBounty: #model-selection #r-squared #performance #rms Why does the rank order of models differ for R squared and RMSE?

Bounty: 50

I am comparing the $R^2$ and RMSE of different models. Interestingly, the rank ordering of the models with respect to $-R^2$ and RMSE differs, and I do not understand why.

Here is an example in R:





Thus, the order is different for $-R^2$ and $RMSE$.

The question is: why?

Let $SS_{res}$ be the sum of squared residuals $\sum_i (y_i-f_i)^2$.

$RMSE$ is defined as $\sqrt{SS_{res}/n}$.

$R^2$ is defined as $1-SS_{res}/SS_{tot}$, where $SS_{tot}$ is $\sum_i (y_i-\overline{y})^2$.

Since $SS_{res}=n \cdot RMSE^2$, we can write $R^2$ as $1-n \cdot RMSE^2/SS_{tot}$.
Since $n$ and $SS_{tot}$ are constant and the same for all models, $-R^2$ and $RMSE$ should be strictly monotonically related. However, they apparently are not, since in practice the rank order is not identical (see the example code).

What is wrong with my argument?
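A quick numerical check of the identity above (my own sketch, not the R example from the post) confirms that, with $R^2$ computed as $1-SS_{res}/SS_{tot}$ on the same data, the rankings must coincide:

```python
import math
import random

random.seed(0)
y = [random.gauss(0, 1) for _ in range(100)]

# Three hypothetical models: predictions with increasing noise
preds = [[yi + random.gauss(0, s) for yi in y] for s in (0.5, 1.0, 1.5)]

mean_y = sum(y) / len(y)
ss_tot = sum((yi - mean_y) ** 2 for yi in y)

def ss_res(f):
    return sum((yi - fi) ** 2 for yi, fi in zip(y, f))

rmse = [math.sqrt(ss_res(f) / len(y)) for f in preds]
r2 = [1 - ss_res(f) / ss_tot for f in preds]

# Sorting by ascending RMSE and by descending R^2 gives the same order
assert sorted(range(3), key=lambda i: rmse[i]) == \
       sorted(range(3), key=lambda i: -r2[i])
```

So if a package reports a different ranking, it is presumably computing $R^2$ some other way, for example as the squared correlation between predictions and observations.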
