#StackBounty: #javascript #c# #deobfuscation #jint Jint+JSfuck – 'Index was outside the bounds of the array'

Bounty: 50

I’m trying to run the following code in jint:

Jint.Engine engine = new Jint.Engine();
var result = engine.SetValue("data", data).Execute("(/\n(.+)/.exec(eval(data.replace(/\s+/, "").slice(0, -2)))[1]);").GetCompletionValue();

Which, when unescaped, is executing the following javascript:

(/\n(.+)/.exec(eval(data.replace(/\s+/, "").slice(0, -2)))[1]);

the data variable corresponds to a JSfuck string, similar to this: https://pastebin.com/vmGAebW5

The problem is that I always get a ‘Index was outside the bounds of the array’ exception, even though the javascript works fine when run in a browser. Any ideas as to what is causing the issue?
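A possible failure mode worth checking (my assumption, not confirmed by the post): if the backslashes are lost somewhere while building the script string, `/\s+/` becomes `/s+/` and `/\n(.+)/` becomes `/n(.+)/`. A regex that no longer matches makes `exec()` return `null`, and indexing the result with `[1]` then throws, which is consistent with an out-of-bounds style error surfacing from the host:

```javascript
// Sketch: the escaped regex finds the capture group; the de-escaped one
// returns null, and null[1] would throw.
const evaluated = "first line\nsecond line";

console.log(/\n(.+)/.exec(evaluated)[1]); // "second line"
console.log(/n(.+)/.exec("ABC"));         // null — indexing [1] here would throw
```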

Get this bounty!!!

#StackBounty: #javascript #reactjs #webpack #mocha #sinon How Do I Stub webpack's require.ensure?

Bounty: 100

I use webpack's code splitting feature (require.ensure) to reduce the initial bundle size of my React application by loading components that are not visible on page load from a separate bundle that is loaded asynchronously.

This works perfectly, but I have trouble writing a unit test for it.

My test setup is based on Mocha, Chai and Sinon.

Here is the relevant excerpt from the code I have tried so far:

describe('When I render the component', () => {
    let component, mySandbox;

    beforeEach(() => {
        mySandbox = sandbox.create();
        mySandbox.stub(require, 'ensure');
        component = mount(<PageHeader />);
    });

    describe('the rendered component', () => {
        it('contains the SideNav component', () => {
            // assertion truncated in the original excerpt
        });
    });

    afterEach(() => mySandbox.restore());
});

When running the test, I get this error message:

“before each” hook for “contains the SideNav component”: Cannot stub non-existent own property ensure

This happens because require.ensure is a method that only exists in a webpack bundle, but I’m not bundling my tests with webpack, nor do I want to, because it would create more overhead and presumably longer test execution times.

So my question is:

Is there a way to stub webpack’s require.ensure with Sinon without running the tests through webpack?

Get this bounty!!!

#StackBounty: #javascript #angularjs #twitter Twitter REST API: Bad Authentication data

Bounty: 50

I’m trying to post a tweet, but for some reason it doesn’t work as expected.

I suspect that the issue is related to the signature string, but following what Twitter’s documentation says about signing requests, it looks OK to me.

Here is my code:

function postTweet(user_id, AccessToken, AccessTokenSecret) {
  var base_url = 'https://api.twitter.com/1.1/statuses/update.json',
    oauth_nonce = randomString(),
    oauth_timestamp = Math.round((new Date()).getTime() / 1000.0);

  reqArray = [
    "oauth_consumer_key=" + CONFIG.TWITTER_CONSUMER_KEY,
    "oauth_nonce=" + oauth_nonce,
    "oauth_timestamp=" + oauth_timestamp,
    "oauth_token=" + AccessToken,
    "status=" + encodeURI("hello world")
  ];

  req = reqArray.sort().join('&');

  signature_base_string = "POST&" + encodeURI(base_url) + "&" + encodeURIComponent(req);

  signing_key = CONFIG.TWITTER_CONSUMER_KEY_SECRET + '&' + AccessTokenSecret;

  oauth_signature = CryptoJS.HmacSHA256(signature_base_string, signing_key).toString(CryptoJS.enc.Base64);

  $http.defaults.headers.common.Authorization = 'OAuth oauth_consumer_key="' + CONFIG.TWITTER_CONSUMER_KEY + '", oauth_nonce="' + oauth_nonce + '", oauth_signature="' + oauth_signature + '",oauth_signature_method="HMAC-SHA1",oauth_timestamp="' + oauth_timestamp + '", oauth_version="1.0"';

  return $http.post('https://api.twitter.com/1.1/statuses/update.json', {
    status: 'hello world'
  }).then(function (response) {
    return response;
  }).catch(function (error) {
    // error handling truncated in the original
  });
}


As a response, I get Twitter’s “Bad Authentication data” error (shown as a screenshot of the JSON error response in the original post).

Get this bounty!!!

#StackBounty: #open-source #javascript #markdown #reference-management #copy-paste Copy Paste Cite to Markdown Tool

Bounty: 50


Does a piece of software exist that:

  1. When I copy and paste information from a website, automatically formats the copied text and its citation into markdown
  2. Groups links sharing a common base URL together, so that the entire page is not blue (this is not necessary, but would be great)

Desired Output

Let’s say I copy information from the Software Recommendations tour page:

Software Recommendations Stack Exchange is a question and answer site for people seeking specific software recommendations

In order to do that, I had to hit Ctrl-L and then copy and paste the words “Software Recommendations Stack Exchange is a question and answer site for people seeking specific software recommendations”, whereas in the bookmarking application Pinboard, if I select/highlight the text, it automatically brings everything highlighted into the bookmark.

Examples of Pinboard

1 No Text Highlighted Pinboard



Seen in the wild

I know this happens when you try to copy a motivational quote from the website Brainy Quote.

Brainy Quote Output after Paste

Only I can change my life. No one can do it for me. Read more at:

After seeing this, I searched stackexchange and found this
How to add extra info to copied web text

  • this proved it’s possible.

I also found this, but it did not solve the person’s question.
Markdown editor to preserve URL links in text copied from browser

He asked this “Markdown editor to preserve URL links in text copied from browser”

Markdown editor to preserve URL links in text copied from browser question

It is hilarious that, in order to cite him properly, I had to take a screenshot, because I could not copy and paste his question without a lot of pain; I also wanted to save you the trouble of clicking through the link.

Get this bounty!!!

#StackBounty: #javascript #python #selenium Python: Unable to download with selenium in webpage

Bounty: 50

My purpose is to download a zip file from https://www.shareinvestor.com/prices/price_download_zip_file.zip?type=history_all&market=bursa

It is a link on this webpage https://www.shareinvestor.com/prices/price_download.html#/?type=price_download_all_stocks_bursa. I then want to save it into the directory "/home/vinvin/shKLSE/" (I am using PythonAnywhere), unzip it, and extract the CSV file into that directory.

The code runs to the end with no error, but nothing is downloaded.
The zip file downloads automatically when I click https://www.shareinvestor.com/prices/price_download_zip_file.zip?type=history_all&market=bursa manually.

My code uses a working username and password; the real credentials are included so that it is easier to understand the problem.

    print "hello from python 2"

    import urllib2
    from selenium import webdriver
    from selenium.webdriver.common.keys import Keys
    import time
    from pyvirtualdisplay import Display
    import requests, zipfile, os    

    display = Display(visible=0, size=(800, 600))

    profile = webdriver.FirefoxProfile()
    profile.set_preference('browser.download.folderList', 2)
    profile.set_preference('browser.download.manager.showWhenStarting', False)
    profile.set_preference('browser.download.dir', "/home/vinvin/shKLSE/")
    profile.set_preference('browser.helperApps.neverAsk.saveToDisk', '/zip')

    for retry in range(5):
            browser = webdriver.Firefox(profile)
            print "firefox"

    login_main = browser.find_element_by_xpath("//*[@href='/user/login.html']").click()
    print browser.current_url
    username = browser.find_element_by_id("sic_login_header_username")
    password = browser.find_element_by_id("sic_login_header_password")
    print "find id done"
    print "log in done"
    login_attempt = browser.find_element_by_xpath("//*[@type='submit']")
    print browser.current_url
    dl = browser.find_element_by_xpath("//*[@href='/prices/price_download_zip_file.zip?type=history_all&market=bursa']").click()


    zip_ref = zipfile.ZipFile('/home/vinvin/sh/KLSE', 'r')

HTML snippet:

<li><a href="/prices/price_download_zip_file.zip?type=history_all&amp;market=bursa">All Historical Data</a> <span>About 220 MB</span></li>

Note that `&amp;` is shown when I copy the snippet. It was hidden in the view source, so I guess it is written by JavaScript.

Observation I found

  1. The directory /home/vinvin/shKLSE/ is not created, even though the code runs with no error

  2. I tried downloading a much smaller zip file, which should complete in a second, but it still does not download after a 30-second wait: dl = browser.find_element_by_xpath("//*[@href='/prices/price_download_zip_file.zip?type=history_daily&date=20170519&market=bursa']").click()


Get this bounty!!!

#StackBounty: #javascript #php #file-upload #dropzone.js Cannot send more than 4 files with dropzone

Bounty: 100

This is weird…

I can send 0, 1, 2, 3, or 4 files with dropzone, but cannot send 5 or more.

I suppose I am correctly defining the options here:

Dropzone.options.myDropzone = {
    url: "action.php",
    autoProcessQueue: false,
    uploadMultiple: true,
    parallelUploads: 6,
    maxFilesize: 5,
    maxFiles: 6,
    addRemoveLinks: true,
    paramName: 'userfile',
    acceptedFiles: 'image/*',
    dictMaxFilesExceeded: 'Too many files! Maximum is {{maxFiles}}',

    // The setting up of the dropzone
    init: function() {
        dzClosure = this; // Makes sure that 'this' is understood inside the functions below.

        // for Dropzone to process the queue (instead of default form behavior):
        document.getElementById("submit-form").addEventListener("click", function(e) {
            // Make sure that the form isn't actually being sent.
            e.preventDefault();
            e.stopPropagation();
            // If the user has selected at least one file, AJAX them over.
            if (dzClosure.files.length !== 0) {
                dzClosure.processQueue();
                // dzClosure.options.autoProcessQueue = true;
            // Else just submit the form and move on.
            } else {
                // (truncated in the original)
            }
        });

        // send all the form data along with the files:
        dzClosure.on("sendingmultiple", function(data, xhr, formData) {
            formData.append("name", $("#name").val());
            formData.append("email", $("#email").val());
        });

        dzClosure.on("successmultiple", function(files, response) {
            // dzClosure.options.autoProcessQueue = false;
            $(location).attr('href', 'message_sent.html');
        });
    }
};
I say correctly because when I drag more than 6 files I see the error message: Too many files! Maximum is 6

Does this boundary make any sense in the source code?

Some more details:
a simplified version of my form is as follows

<form id="foorm" method="post" action="action.php" enctype="multipart/form-data">
    <input type="text" id="name" name="name" required>
    <input type="email" id="email" name="email" required>
    <div class="dropzone" id="myDropzone"></div>
    <button type="submit" name="submit-form" id="submit-form">Send!</button>
</form>

and my action.php starts by displaying json_encode($_FILES); and json_encode($_POST);

They are as expected when sending 4 or fewer files, and are [] when sending 5 or more.


It seems that I can upload 5 or more if they are smaller in size! Can this be anything other than a bug in dropzone? (honest question)
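A speculative observation (mine, not from the original post): empty `$_FILES` and `$_POST` above a certain total request size is also PHP's documented behavior when the POST body exceeds `post_max_size`, which would explain why smaller files still go through. A client-side check on the queued total is easy to sketch:

```javascript
// Hypothetical guard: sum the queued files' sizes before processing the
// queue, so uploads that would exceed the server's limit can be caught
// client-side. The 8 MB figure is PHP's default post_max_size, used here
// only as an illustrative assumption.
var POST_MAX_BYTES = 8 * 1024 * 1024;

function totalUploadBytes(files) {
  return files.reduce(function (sum, f) { return sum + f.size; }, 0);
}

function fitsInPost(files) {
  return totalUploadBytes(files) <= POST_MAX_BYTES;
}
```

In a Dropzone handler, `files` would be `dzClosure.files`; the objects only need a `size` property.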

Get this bounty!!!

#StackBounty: #javascript #node.js #mongodb Pass large array to node child process

Bounty: 200

I have complex CPU intensive work I want to do on a large array. Ideally, I’d like to pass this to the child process.

    var spawn = require('child_process').spawn;

    // dataAsNumbers is a large 2D array
    var child = spawn(process.execPath, ['/child_process_scripts/getStatistics', dataAsNumbers]);

    child.stdout.on('data', function(data){
        console.log('from child: ', data.toString());
    });

But when I do, node gives the error:

spawn E2BIG

I came across this http://blog.trevnorris.com/2013/07/child-process-multiple-file-descriptors.html

So piping the data to the child process seems to be the way to go. My code is now:

var spawn = require('child_process').spawn;

console.log('creating child........................');

var options = { stdio: [null, null, null, 'pipe'] };
var args = [ '/getStatistics' ];
var child = spawn(process.execPath, args, options);

var pipe = child.stdio[3];


child.stdout.on('data', function(data){
    console.log('from child: ', data.toString());
});

And then in getStatistics.js:

console.log('im inside child');

process.stdin.on('data', function(data) {
    console.log('data is ', data);
});

However the callback in process.stdin.on isn’t reached. How can I receive a stream in my child script?


I had to abandon the buffer approach. Now I’m sending the array as a message:

var cp = require('child_process');
var child = cp.fork('/getStatistics.js');

child.send({
    dataAsNumbers: dataAsNumbers
});

But this only works when the length of dataAsNumbers is below about 20,000; otherwise it times out…

Get this bounty!!!

#StackBounty: #javascript #internet-explorer #websocket #msgpack Handle invalid msgpack message sent from IE10

Bounty: 50

We have an application that communicates with the backend via WebSockets. We encode all messages with msgpack-lite. The library’s documentation says it supports IE10.

In all modern browsers like Chrome, Firefox, Safari and Edge everything works well. But in IE10 we ran into a strange situation:

msgpack-lite encodes the message to the same binary as in other browsers, BUT after the encoded message is sent to the backend, the binary message has changed.


Our message that we want encode and send to the backend:

{
  "method": "subscribe",
  "data": {
    "sports": [85]
  }
}

Encoded message (backend also handle the same data sent from all browsers except IE10):

[130 166 109 101 116 104 111 100 169 115 117 98 115 99 114 105 98 101 164 100 97 116 97 129 166 115 112 111 114 116 115 145 85]

The message received from IE10 on the backend:

[239 191 189 239 191 189 109 101 116 104 111 100 239 191 189 115 117 98 115 99 114 105 98 101 239 191 189 100 97 116 97 239 191 189 239 191 189 115 112 111 114 116 115 239 191 189 85]

So our question is: how can binary data change while being sent via WebSockets in IE10?
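An observation that may help (my analysis, not part of the original post): the corrupted bytes `239 191 189` are the UTF-8 encoding of U+FFFD, the Unicode replacement character, and they appear exactly where the original payload has bytes ≥ 0x80. That pattern is what you get when a binary buffer is sent as a text frame and run through UTF-8 sanitization, which suggests handing the IE10 WebSocket a true binary payload (an `ArrayBuffer`/typed array rather than a string). The corruption can be reproduced in Node:

```javascript
// First bytes of the encoded msgpack message from the question.
const original = Buffer.from([130, 166, 109, 101, 116, 104, 111, 100]);

// Round-tripping through a UTF-8 string replaces every byte that is not
// valid UTF-8 (130 and 166 here) with U+FFFD, which encodes as 239 191 189;
// the ASCII bytes for "method" survive unchanged.
const corrupted = Buffer.from(original.toString('utf8'), 'utf8');

console.log(Array.from(corrupted));
// [239, 191, 189, 239, 191, 189, 109, 101, 116, 104, 111, 100]
```

This matches the received message byte-for-byte: two replacement sequences, then the intact ASCII of "method".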

Get this bounty!!!

#StackBounty: #javascript #node.js #rabbitmq #amqp #worker When using RabbitMQ / AMQP and NodeJS with separate web and worker processes…

Bounty: 50

I am currently attempting to build a NodeJS webapp that has a web process and a worker process, using AMQP to communicate between them. With my current setup, starting the application involves launching a script for the web process (server.js) and another script for the worker process (worker.js). Each of them includes a third file, amqp.js, whose start function creates a connection, then creates a channel, then asserts queues.

However, in attempting to debug another issue, I came across this article which appears to show a different structure: A connection is created first, and then the two processes are launched, each creating a channel to that connection and asserting two queues.

Should I be creating a new connection for every worker, and is it possible for me to implement this in an environment where the web and the worker are separate and cannot otherwise communicate?

Get this bounty!!!

#StackBounty: #javascript #node.js #memory-leaks Memory leak with setInterval running in a Node.js Process

Bounty: 50

I’m having an issue with setInterval() causing a memory leak in my Node.js application. The app is simple: it wakes up every half hour, looks in a MongoDB table to see if there’s any work to do (most times it does not), and then sends an email to the records found that meet the criterion. Over time (a few days), the memory goes from 100MB to over 1GB.

I tried moving the variables outside of the setInterval callback so they could be GC’d, but no luck. Am I missing something?

I’m using New Relic to monitor the transaction, but this issue persisted prior to me adding this instrumentation.

const transactionName = 'email-scheduler';
let invokeTransaction = newrelic.createBackgroundTransaction(transactionName,
    function () {
      sendEmail(function (error) {
        log.info("Job completed; ending transaction.");
        newrelic.endTransaction();
      });
    }); //must be outside of setInterval to be GC'd

setInterval(invokeTransaction, JOB_INTERVAL_MINUTES * 1000 * 60);

function sendEmail(callback) {
  log.info('Scheduler woke up to send emails (set to send every ' + JOB_INTERVAL_MINUTES + ' minutes)');
  mongo.findUsersSince(180, function (err, result) {
    if (err) {
      log.error("Welcome emails could not be sent: " + err);
    } else if (result && result instanceof Array) {
      // ...send the emails (body truncated in the original)...
    } else {
      // no matching users
    }
  });
}

Here’s the alternative version when I’m using a package like Cron instead of setInterval(). Suffers from the same issue:

function sendEmail(callback) {
  log.info('Scheduler woke up to send emails (set to send every ' + JOB_INTERVAL_MINUTES + ' minutes)');

  try {
    new CronJob('0 */' + JOB_INTERVAL_MINUTES + ' * * * *', function () {
      log.info('Scheduler woke up to send emails (set to send every ' + JOB_INTERVAL_MINUTES + ' minutes)');
      mongo.findUsersSince(OKTA_WAIT_MINUTES, function (err, result) {
        if (err) {
          log.error("Welcome emails could not be sent: " + err);
        } else if (result && result instanceof Array) {
          // ...send the emails (body truncated in the original)...
        } else {
          // no matching users
        }
      });
    }, function () {
      log.info('Scheduler completed job.');
    }, RUN_SCHEDULER, "America/Los_Angeles");
  } catch (ex) {
    log.error("cron job pattern not valid");
  }
}

Get this bounty!!!