#StackBounty: #javascript #performance #comparative-review #combinatorics n choose k combination in JavaScript

Bounty: 50

I was asked to solve a problem for $\binom{n}{k}$. I did the following implementation and am wondering if there’s any feedback.


n = Integer

k = Integer


result = Array of arrays of integers


Time: $O(\binom{n}{k})$

Space: $O(\binom{n}{k})$

The order of the output array DOES NOT MATTER.

n and k are both positive.


N Choose K


function combinations(n, k) {
  let result = [];

  function recurse(start, combos) {
    if (combos.length === k) {
      result.push(combos.slice());
      return;
    }
    // Check if you can actually create a combo of valid length
    // given the current start number.
    // For example: 5 choose 4 can't begin with [3] since it would never have 4 numbers.
    if (combos.length + (n - start + 1) < k) {
      return;
    }
    combos.push(start);
    recurse(start + 1, combos);
    combos.pop();
    recurse(start + 1, combos);
  }

  recurse(1, []);
  return result;
}

I was also wondering why the backtracking solution runs faster.

// function combinations(n, k) {
//   let result = []
//   function combine(combo, currentNumber){
//     if(combo.length === k) {
//       result.push(combo);
//       return;
//     }
//     if(currentNumber > n) {
//       return;
//     }

//     let newCombo1 = combo.slice();
//     let newCombo2 = combo.slice();
//     newCombo2.push(currentNumber);

//     combine(newCombo2, currentNumber + 1);  
//     combine(newCombo1, currentNumber + 1);

//   }
//   combine([], 1);
//   return result;
// }

// speed: 1120.756 ms

// backtracking method
function combinations(n, k) {
  let result = [];
  let stack = [];
  function combine(currentNumber) {
    if (stack.length === k) {
      let newCombo = stack.slice();
      result.push(newCombo);
      return;
    }
    if (currentNumber > n) {
      return;
    }
    stack.push(currentNumber);
    combine(currentNumber + 1);
    stack.pop();
    combine(currentNumber + 1);
  }
  combine(1);
  return result;
}

console.time("Naive Combinations")
console.log(combinations(20, 10))
console.timeEnd("Naive Combinations")

// speed: 320.756 ms
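The backtracking version is faster mainly because it mutates one shared stack and only copies an array when a complete combination is emitted, while the naive version clones the partial combo at every node. A quick sanity check for either implementation (my own sketch, not part of the original post) is to compare the output length against the binomial coefficient:

```javascript
// Computes C(n, k) iteratively, avoiding large factorials.
// The running product is always an integer value, but floating-point
// division can introduce tiny errors, hence the final rounding.
function binomial(n, k) {
  let result = 1;
  for (let i = 1; i <= k; i++) {
    result = result * (n - k + i) / i;
  }
  return Math.round(result);
}
```

For example, `combinations(20, 10).length` should equal `binomial(20, 10)`, i.e. 184756.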

Get this bounty!!!

#StackBounty: #performance #collision #lua Collision detection for moving 2D objects

Bounty: 50

After implementing SAT (the separating axis theorem) and not being happy that it only works on stationary shapes, which in theory allows fast objects to glitch through others, I came up with this approach to detect collisions between moving objects.

Algorithm explained

The idea is quite simple:

I figured that, if two shapes A and B aren’t intersecting in their starting positions and A moves relative to B, the two shapes collide if and only if
a) any vertex of A crosses a segment of B or
b) any segment of A touches (“sweeps over”) a vertex of B.

A vertex of A collides with a segment of B when the segment from A to A + V (V being the velocity of A, i.e. its movement) intersects it. This is implemented in the line intersection method of the line class (see below).

Lastly, if I loop through all vertices of A and collide them with all segments of B, then repeat the same with B against A using the movement vector turned 180 degrees, the shortest distance any vertex can travel before a collision is the shortest distance A can travel before it collides with B.

To figure out if two segments intersect, I first transform them both so that the first segment goes from (0, 0) to (1, 0). Then, the two segments intersect if and only if the second segment cuts the X axis between 0 and 1, which is trivial to implement.
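The transform described above can be sketched as follows (written in JavaScript rather than Lua, with assumed `{x, y}` point objects, so this is an illustration of the idea rather than the post's actual API):

```javascript
// Returns the parameter t in [0, 1] at which segment q1->q2 crosses
// segment p1->p2, or null if the segments do not intersect.
function segmentIntersection(p1, p2, q1, q2) {
  const d = { x: p2.x - p1.x, y: p2.y - p1.y };
  const len2 = d.x * d.x + d.y * d.y;
  // Express a point in the frame where p1->p2 maps to (0,0)->(1,0):
  // project onto d (x) and onto d rotated by 90 degrees (y), scaled by |d|^2.
  const toFrame = (p) => ({
    x: ((p.x - p1.x) * d.x + (p.y - p1.y) * d.y) / len2,
    y: ((p.y - p1.y) * d.x - (p.x - p1.x) * d.y) / len2,
  });
  const a = toFrame(q1);
  const b = toFrame(q2);
  // Both endpoints strictly above or strictly below the X axis: no crossing.
  if ((a.y < 0 && b.y < 0) || (a.y > 0 && b.y > 0)) return null;
  if (a.y === b.y) return null; // parallel to the X axis
  // X coordinate where the transformed segment crosses the X axis.
  const x0 = a.x + (b.x - a.x) * a.y / (a.y - b.y);
  return (x0 >= 0 && x0 <= 1) ? x0 : null;
}
```

The returned parameter doubles as the fraction of the movement vector travelled before impact, which is exactly what the `nil_min` reduction below needs.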


Collision detection itself

local function single(a, b, v)
    local vertices = {a:vertices()}
    local segments = {b:segments()}

    local min_intersection
    for idx,vertex in ipairs(vertices) do
        local v_end = vertex + v
        local projection = line(vertex.x, vertex.y, v_end.x, v_end.y)
        for idx,segment in ipairs(segments) do
            min_intersection = nil_min(min_intersection, projection * segment)
        end
    end

    if min_intersection == 1 then return nil end
    return min_intersection
end

function module.movement(a, b, v)
    return nil_min(single(a, b, v), single(b, a, -v))
end

Line intersection

intersection = function(self, other)
    if not is_line(other) then error("Invalid argument; expected line, got "..type(other), 2) end

    local origin, base = self:vectors()

    local a = vector_2d(other.x_start, other.y_start)
    local b = vector_2d(other.x_end,   other.y_end  )

    a = (a - origin):linear_combination(base)
    b = (b - origin):linear_combination(base)

    -- Both points are above or below X axis
    if a.y < 0 and b.y < 0 or a.y > 0 and b.y > 0 then
        return nil
    end

    -- A always has the smallest X value
    if a.x > b.x then
        a, b = b, a
    end

    local x0 = a.x + (b.x-a.x) * a.y / (a.y-b.y)

    if x0>=0 and x0<=1 then
        return x0
    else
        return nil, x0
    end
end

Cheaty linear combination

As you can see, I only need the linear combination of a given vector and that same vector rotated by 90 degrees, making it quite trivial. Implementation with two vectors is irrelevant to this situation and may get implemented in the future should I need it.

linear_combination = function(self, basis_x, basis_y)
    if basis_y then
        error("Not Implemented!", 2)
    else -- Assumes basis_y is basis_x + 90 degrees
        local angle = self:angle() - basis_x:angle()
        local f_len = self:length() / basis_x:length()
        return vector_2d(
            round(f_len * cos(angle)),
            round(f_len * sin(angle))
        )
    end
end

Okay, that’s pretty much it. I have done some testing using busted and it seems to work, but I am not sure if I may have overlooked some stupid mistake that might lead to complications later on. I am also unsure if that algorithm will be fast enough. Considering 3D games do complex collision detection these days, I would assume even a slightly slower algorithm wouldn’t impact a 2D game on a modern gaming PC, but since this is löve, would this run on a mid-tier android phone or tablet at an acceptable framerate?


  • Any game this might be used in will not have an unhealthily high number of collisions
  • It is purely meant for 2D, no intention to try it in 3D
  • Most shapes will be rectangles; on average they will have at most 10 or so vertices
  • Vectors and segments are implemented using LuaJIT FFI structs, not Lua Tables

As a small extra: the angle at which the first vertex collides with a segment of the other shape can easily be used to obtain the angle at which to apply a force to both shapes at the point of collision. This can mean anything from just bouncing the entire object without considering center of mass, to more advanced physical calculations that apply an actual force to the object. While this is interesting and a nice feature of the algorithm, it is trivial to implement and thus out of scope for the actual question.


#StackBounty: #libreoffice #performance #opencl LibreOffice Draw is very slow with big projects

Bounty: 50

So I am making free educational books in Draw. I need it because it is amazing at providing the tools I need for design. However, for large projects, say over 100 slides, the program gets super slow and laggy.

My setup:
Intel® Core™ i7-7700HQ CPU @ 2.80GHz × 8
GeForce GTX 1050/PCIe/SSE2
Samsung SSD
Memory 31.3 GiB

Draw Version:

I have Ubuntu 18.04 and Nvidia 390.

I would expect this setup to handle big projects easily, yet it struggles with Draw. Maybe I am missing something. For example, do I have to use OpenGL and/or OpenCL with Draw? If I activate either of them I do not see any major improvement, and with OpenGL activated the Draw UI gets very buggy.

Do I have to install or check anything within my system?



#StackBounty: #python #performance #programming-challenge #recursion #matrix Codewars: N-dimensional Von Neumann Neighborhood in a matrix

Bounty: 50


For creating this challenge on Codewars I need a very performant function that calculates the Von Neumann neighbourhood in an N-dimensional array. This function will be called about 2000 times.

The basic recursive approach:

  • calculate the index span influenced by the distance
  • if the index is in the range of the matrix go one step deeper into the next dimension
  • if the max dimension is reached, append the value to the global neigh list. isCenter is just a token that helps to NOT INCLUDE the cell itself in the neighbourhood. There is also remaining_distance, which reduces the span.

You probably do not need to understand the underlying math in depth. But maybe someone experienced with Python can point me to some basic performance-upgrade potential in the code.


  • What I am curious about: is .append inefficient? I heard list comprehensions are better than append.
  • Would changing not (0 <= dimensions_coordinate < len(arr)) to len(arr) <= dimensions_coordinate or dimensions_coordinate < 0 speed up the code?
  • Are there performance differences between == and is?
  • Is dimensions = len(coordinates)... if curr_dim == dimensions:... slower than if curr_dim == len(coordinates)?
  • If you understood the math, do you see a way to do it iteratively? I heard recursion is slower in Python, and theoretical computer science says "everything recursive can be made iterative".

I would be very thankful if you can answer at least some of these questions or point out shortcomings that I don’t see.

The whole code:

  • matrix is a N-dimensional matrix.
  • coordinates of the cell is a N-length tuple.
  • distance is the reach of the neighbourhood

def get_neighbourhood(matrix, coordinates, distance=1):
    dimensions = len(coordinates)
    neigh = []
    app = neigh.append

    def recc_von_neumann(arr, curr_dim=0, remaining_distance=distance, isCenter=True):
        # the base case of the recursion
        if curr_dim == dimensions:
            if not isCenter:
                app(arr)
            return

        dimensions_coordinate = coordinates[curr_dim]
        if not (0 <= dimensions_coordinate < len(arr)):
            return

        dimension_span = range(dimensions_coordinate - remaining_distance,
                               dimensions_coordinate + remaining_distance + 1)
        for c in dimension_span:
            if 0 <= c < len(arr):
                recc_von_neumann(arr[c],
                                 curr_dim + 1,
                                 remaining_distance - abs(dimensions_coordinate - c),
                                 isCenter and dimensions_coordinate == c)

    recc_von_neumann(matrix)
    return neigh
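On the last question: the Von Neumann neighbourhood of radius d is exactly the set of offset vectors with Manhattan norm at most d, and those offsets can be enumerated iteratively, one dimension at a time. A sketch of the idea (in JavaScript rather than Python, and not the author's code):

```javascript
// Builds all offset vectors of the given dimensionality whose Manhattan
// (L1) norm is <= distance, excluding the all-zero center offset.
function vonNeumannOffsets(dims, distance) {
  let offsets = [[]]; // partial offsets, extended dimension by dimension
  for (let d = 0; d < dims; d++) {
    const next = [];
    for (const partial of offsets) {
      // L1 budget already spent by earlier dimensions.
      const used = partial.reduce((s, v) => s + Math.abs(v), 0);
      for (let o = -(distance - used); o <= distance - used; o++) {
        next.push(partial.concat(o));
      }
    }
    offsets = next;
  }
  // Drop the center cell itself (mirrors the isCenter token above).
  return offsets.filter(off => off.some(v => v !== 0));
}
```

Indexing the matrix at `coordinates + offset` (skipping out-of-range cells) then yields the neighbourhood without recursion.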


#StackBounty: #boot #virtualbox #xubuntu #performance #user-space Xubuntu User Space boot takes very long

Bounty: 50

I found similar topics regarding slow kernel boot, but my problem is that my system takes 3 minutes to load the user space.

systemd-analyze gives the following output

Startup finished in 4.247s (kernel) + 3min 743ms (userspace) = 3min 4.991s
graphical.target reached after 1min 35.812s in userspace

Is there a way to identify what exactly is taking so much time?
I’m running Xubuntu 18.04 in a VirtualBox and I think the problem started after I enlarged my partition (including a recreation of the swap partition).

Output of the systemd-analyze critical-chain

graphical.target @1min 35.812s
└─multi-user.target @1min 35.812s
  └─docker.service @1min 32.815s +2.996s
    └─network-online.target @1min 32.814s
      └─NetworkManager-wait-online.service @1min 31.863s +951ms
        └─NetworkManager.service @1min 31.075s +784ms
          └─dbus.service @1min 30.640s
            └─basic.target @1min 30.529s
              └─sockets.target @1min 30.529s
                └─docker.socket @1min 30.498s +30ms
                  └─sysinit.target @1min 30.493s
                    └─apparmor.service @979ms +695ms
                      └─local-fs.target @947ms
                        └─media-aj-VBox_GAs_5.2.81.mount @1min 39.649s
                          └─clean-mount-point@media-aj-VBox_GAs_5.2.81.service @
                            └─system-cleanx2dmountx2dpoint.slice @1min 39.668s
                              └─system.slice @272ms
                                └─-.slice @266ms


#StackBounty: #c# #performance #json #json.net Looping JSON in WebAPI Controller and Add new Property

Bounty: 50

I have a JSON array that is being passed into a function. Whenever the function comes across a field (call it Field1) in a record with a value that starts with “@!!!@”, it compiles those values into a list to fire off to another server. I have code that looks like this in the initial function:

public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, TraceWriter log)
{
    dynamic request = await req.Content.ReadAsAsync<dynamic>();

    JObject data = (JObject)request;
    var directive = data["Directive"];

    var json = data.Last.First;

    string url = null;

    if (directive.ToString() == "D")
        url = "registry/get";
    else if (directive.ToString() == "S")
        url = "registry/sanitize";

    JArray payloadArray = (JArray)json;

    string newToken = null;

    List<JObject> objList = new List<JObject>();

    for (int i = 0; i <= payloadArray.Count() - 1; i++)
    {
        string newJson = null;
        foreach (var prop in payloadArray[i])
        {
            newJson = newJson + prop.ToString() + ",";
            if (prop.ToString().Contains("@!!!@"))
            {
                // Wrap the property in braces so its name can be read back out
                JObject newProp = JObject.Parse("{" + prop.ToString() + "}");
                newJson = newJson + "\"" + newProp.Properties().First().Name + "\":\"sanitize|" + prop.First + "\"," + "\"@" + newProp.Properties().First().Name.ToString() + "\":\"" + prop.First + "\",";
                // newJson = newJson + "\"@" + newProp.Properties().First().Name.ToString() + "\":\"sanitize|" + prop.First + "\",";
                newToken = newToken + "{\"Token\":\"" + prop.First + "\",\"ProcessId\":\"" + prop.First.ToString().Replace("@!!!@", "") + "\"},";
            }
        }
        objList.Add(JObject.Parse("{" + newJson + "}"));
    }

    string outGoingPayload = "{\"Registry\":[" + newToken + "]}";

    var content = new StringContent(outGoingPayload.ToString(), Encoding.UTF8, "application/json");

    HttpResponseMessage response = MakeRequest(outGoingPayload.ToString());

    var responseBodyAsText = response.Content.ReadAsStringAsync();
    JObject responseJson = JObject.Parse(responseBodyAsText.Result);
    int counter = 0;
    foreach (JObject item in objList)
    {
        foreach (var itm in item)
        {
            if (itm.Value.ToString().Contains("sanitize|@!!!@"))
            {
                foreach (var resItem in responseJson)
                {
                    if (resItem.Value[counter]["ProcessId"].ToString() == itm.Value.ToString().Replace("sanitize|@!!!@", ""))
                    {
                        try
                        {
                            // Swap the sanitize placeholder for the token returned by the service
                            itm.Value.Replace(resItem.Value[counter]["Token"]);
                        }
                        catch (Exception ex)
                        {
                            itm.Value.Replace("Token not found");
                        }
                    }
                }
            }
        }
    }

    string jsonStr = null;
    foreach (var val in objList)
        jsonStr = jsonStr + val + ",";

    jsonStr = "[" + jsonStr.TrimEnd(',') + "]";
    var returnArray = JsonConvert.DeserializeObject(jsonStr);

    return req.CreateResponse(HttpStatusCode.OK, returnArray);
}

For a JSON payload of 1000 records this takes 5000 ms to run. What can I do to improve performance here? The setup is: the payload passed into the function contains all records; I must build up a new payload to pass to the remote service to get corresponding values, which I do as one HttpClient request. So I loop over the initial payload, build up the new payload, call the remote service, return all matches, then loop over the initial payload again and add the extra field where appropriate. I am trying to get this function to return a bit faster. I have tried using LINQ to JSON and I have tried treating the JSON as a string; the code I have posted seems to be the fastest. I can provide more information if needed.
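The dominant cost is likely the nested loop that rescans the whole response for every record, which is O(n·m); building a lookup table keyed on ProcessId once makes each lookup O(1). The idea, sketched here in JavaScript with hypothetical record shapes (in C# a Dictionary<string, string> built once before the loop plays the same role):

```javascript
// Merges returned tokens into the original records in a single pass.
// Assumed shapes: records are flat objects; registry entries look like
// { ProcessId: '...', Token: '...' } (names taken from the post's payloads).
function mergeTokens(records, registry) {
  const PREFIX = '@!!!@';
  // One-time index: ProcessId -> Token.
  const tokenByProcessId = new Map(
    registry.map(entry => [entry.ProcessId, entry.Token])
  );
  return records.map(record => {
    const merged = { ...record };
    for (const [field, value] of Object.entries(record)) {
      if (typeof value === 'string' && value.startsWith(PREFIX)) {
        // Constant-time lookup instead of rescanning the whole response.
        merged[field] =
          tokenByProcessId.get(value.slice(PREFIX.length)) || 'Token not found';
      }
    }
    return merged;
  });
}
```

This turns the merge into O(n + m) and also avoids re-parsing JSON strings inside the loop.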

Sample payload to send to this function:


Sample to send to the intermediate function (I cannot touch this one, but it’s already optimized). My function should build this from the above payload:

{ "Wrapper":[

Return from intermediate service:

{ "Wrapper":[
{"Token":"@!!!@17e9ad37968e25893e96855ba3d633e250a401a6584b2bc9c7288f9fc458a9b6", "Value":"test"},
{"Token":"@!!!@008d613d1ca60885468bf274daa693cc778430fc8a539bdf2e7dc2dec88cd922", "Value":"test2"}

Back in my function both the original payload and the return payload from the intermediate function should be merged and returned like this:

 "Field3":"@!!!@17e9ad37968e25893e96855ba3d633e250a401a6584b2bc9c7288f9fc458a9b6 ",



#StackBounty: #magento2 #magento-2.1 #catalogsearch #performance #indexer Improve performance – Catalog Search Fulltext

Bounty: 50

Do you have any suggestions on how I can improve the performance of the catalogsearch_fulltext indexer for Magento 2.1.x?

I have already set this indexer to ‘Update by Schedule’, so only the updated products are refreshed, thanks to the Mview pattern.

I have ElasticSearch as the search engine, but the issue is not really related to it.

I have around

  • 20k products
  • 7k products visible in search by store
  • 31 stores
  • less than 10 new custom searchable attributes

For a full re-index, Magento takes around 10 minutes per store, so around 5 hours for all stores.

I plan to re-index each store in parallel and increase the server CPU and memory, because each catalog search index seems to be independent per store.

I try to import products with a differential approach, but sometimes my customer needs a full product import.

Edit: I did a POC with https://github.com/amphp/parallel, reducing the time from 5 hours to 14 minutes on a c5.9xlarge EC2 instance with 36 vCPUs (>$1000 per month). But I want to know if there is an alternative solution.


#StackBounty: #performance #iis-7.5 #.net IIS: Slow time-taken does not match with application api response time

Bounty: 50

We have an API microservices architecture.

There is an API endpoint (initialise) which is called by the native app. This “initialise” endpoint in turn calls four other endpoints.

Since the client has reported an issue of slowness with “initialise”, we decided to check the endpoint with a load test.

After a bit of checking with the IIS log “time-taken” field, we identified that the response time of one of the API endpoints called by initialise, “customer”, is slow, varying between 4 and 5 seconds. From a code perspective, the “customer” endpoint takes 300 milliseconds, as verified through application-level logs.

If we are to believe both the application-level logs and the IIS time-taken field, where are the remaining ~3.5 seconds being spent?

After reading through IIS: How to tell if a slow time-taken is due to a slow network connection, the advice is to check with the Wireshark tool. I have not yet gone down the path of asking the networking team for packet analysis, but do you see anything else I can check?

The same URL also suggests enabling HTTP_SEND_RESPONSE_FLAG_BUFFER_DATA (read: https://blogs.msdn.microsoft.com/wndp/2006/08/15/buffering-in-http-sys/), but I don’t know how to enable it on Windows 2008 R2 servers.

Please note that we are using .NET 4.x with a 32-bit app pool.

Does somebody know the best way to analyse a memory dump (which tool should I use to read it?) and, most importantly, what to look for there?

Also, are there any performance counters that could shed some light?



#StackBounty: #javascript #performance #node.js #interview-questions #web-services Find currency exchange rates

Bounty: 50


Design a service to fetch exchange rate from a remote resource and
then calculate the exchange rate for each currency pair.

The remote resource contains the exchange rates of each currency in XML format.

This is an interview assignment and I came up with an easy solution.


'use strict';

const joi = require('joi');

const api = require('./api');
const Exchange = require('./exchange');
const xmlParser = require('./parse-xml');

const schema = joi.object({
  source: joi.string().required().min(3).max(3).example('EUR'),
  target: joi.string().required().min(3).max(3).example('GBP')
});

const defaults = {
  timeout: 1000 // 1 sec
};

const exchange = async (pair, options = {}) => {
  options = Object.assign({}, defaults, options);
  const {source, target} = joi.attempt(pair, schema);

  const {requestApi = api, parser = xmlParser} = options;

  const exchange = new Exchange(requestApi, parser, options);
  const rate = await exchange.convert({source, target});
  return {source, target, rate};
};

module.exports = exchange;


'use strict';

const URL = 'https://www.ecb.europa.eu/stats/eurofxref/eurofxref-daily.xml';

class Exchange {
  constructor(api, parser, options = {}) {
    this.api = api;
    this.options = options;
    this.parser = parser;
  }

  async convert({source, target}) {
    if (!this.xml) {
      await this.fetch();
      this.euroToAll = this.parser(this.xml);
    }
    const euroToSource = this.euroToAll[source];
    const euroToTarget = this.euroToAll[target];
    return exchange(euroToSource, euroToTarget);
  }

  async fetch() {
    const response = await this.api.fetch(URL, this.options);
    this.xml = response.body || '';
  }
}

function exchange(from, to) {
  return round(parseFloat(to) / parseFloat(from));
}

function round(result, digits = 4) {
  return Math.round(result * (10 ** digits)) / (10 ** digits);
}

module.exports = Exchange;


'use strict';

const xmldoc = require('xmldoc');
const debug = require('debug')('exchange-rate:parse');

const currencies = require('./currencies');

const parse = xml => {
  const doc = new xmldoc.XmlDocument(xml);
  const cube = doc.childNamed('Cube').childNamed('Cube');

  const rates = currencies.reduce(
    (accumulator, currency) => {
      const exchange = cube.childWithAttribute('currency', currency);
      if (exchange) {
        const {rate} = exchange.attr;
        accumulator[currency] = rate;
      } else {
        debug(`Node not found for currency: ${currency}`);
      }
      return accumulator;
    },
    {}
  );

  // Add EUR rate to make it consistent
  rates.EUR = '1.0';
  return rates;
};

module.exports = parse;


'use strict';

const got = require('got');

module.exports = {
  async fetch(url, options = {}) {
    return got(url, options);
  }
};

  1. What if in future we need to add different providers with different representation? How can I make it more flexible and keep the core logic decoupled?
  2. I am also curious to know if the design of the api from the client perspective is good or it can be improved.
  3. In Node.js we can define dependencies via require, but I found them difficult to mock for testing, so in a couple of places I have passed dependencies via arguments instead; is this fine?
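On question 3: passing dependencies as arguments works well precisely because tests can then hand in stubs without a mocking library. A minimal sketch (the stand-in Exchange class below mirrors the shape of the one above, but is my own reduction, not the post's code):

```javascript
// Stand-in for the post's Exchange class: same constructor shape, reduced body.
class Exchange {
  constructor(api, parser, options = {}) {
    this.api = api;
    this.parser = parser;
    this.options = options;
  }

  async convert({ source, target }) {
    if (!this.xml) {
      // The URL is a placeholder; the stub never actually hits the network.
      const response = await this.api.fetch('https://example.invalid/rates.xml', this.options);
      this.xml = response.body || '';
      this.euroToAll = this.parser(this.xml);
    }
    return parseFloat(this.euroToAll[target]) / parseFloat(this.euroToAll[source]);
  }
}

// Stubs replace the real HTTP client and XML parser in tests.
const fakeApi = { fetch: async () => ({ body: '<dummy-xml/>' }) };
const fakeParser = () => ({ EUR: '1.0', USD: '1.2' });

const ex = new Exchange(fakeApi, fakeParser);
```

A test can now call `ex.convert({source: 'EUR', target: 'USD'})` and assert on the result with no network access, which suggests the constructor-injection design in the post is sound.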
