#StackBounty: #performance #scheme #racket #benchmarking #chez-scheme (Chez) Scheme benchmarks?

Bounty: 50

Now that Chez Scheme is open-source, I wonder how it compares to Racket and other Schemes or languages in terms of performance, so that I could make informed choices about using them in my projects.

Unfortunately, I couldn’t find any relevant benchmarks.

I found the following:


https://ecraven.github.io/r7rs-benchmarks/benchmark.html

Problem: no Racket, or other languages


http://www.larcenists.org/benchmarksGenuineR6Linux.html

Problem: no Chez Scheme, or other languages


http://benchmarksgame.alioth.debian.org/

Problem: only Racket, questionable comparisons (for example, Python is not allowed to use NumPy where it would clearly help, while Racket is making FFI calls to GMP)


So, none of the benchmarks I found allow you to compare Racket to Chez, for example, or Chez to SBCL, or Java. Are there Chez benchmarks that give you a sense of how fast it is?

Chez Scheme is often said to be the fastest Scheme/Lisp around. We should know if it’s faster than, say, Java for your typical business logic application.
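
In the absence of a published cross-implementation table, one rough way to get numbers is to run the same self-contained Scheme program under each runtime and time it. A minimal sketch, assuming a fib.scm test file you supply and that the Chez binary is `scheme` and Racket's is `racket` (adjust names and flags for your installation):

import subprocess
import time

# Hypothetical test file and binary names; adjust for your setup.
runtimes = {
    "chez":   ["scheme", "--script", "fib.scm"],
    "racket": ["racket", "fib.scm"],
}

for name, cmd in runtimes.items():
    start = time.perf_counter()
    subprocess.run(cmd, check=True, capture_output=True)
    elapsed = time.perf_counter() - start
    print(f"{name}: {elapsed:.3f} s (wall clock, includes startup)")

Wall-clock timing like this includes interpreter startup and is no substitute for a proper benchmark suite, but it is enough to sanity-check claims for the workloads you actually care about.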


Get this bounty!!!

#StackBounty: #windows #performance #x86 Can I read the CPU performance counters from a user-mode program in Windows?

Bounty: 200

I would like to program and read the hardware performance counters offered on all recent x86 hardware.

On Linux there are the various perf_events systems to do this (and the perf utility to do it from outside an unmodified program).

Is there any such built-in facility in Windows?
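
For what it's worth, Windows does expose one narrow piece of this to unprivileged user-mode code: per-thread cycle counts via QueryThreadCycleTime in kernel32. This gives only cycles, not programmable PMU events such as cache misses, so treat the sketch below as a partial answer rather than an equivalent of perf_events:

import ctypes

kernel32 = ctypes.windll.kernel32

def thread_cycles():
    """CPU cycles charged to the current thread so far (no admin rights needed)."""
    cycles = ctypes.c_ulonglong(0)
    handle = kernel32.GetCurrentThread()  # pseudo-handle, no CloseHandle needed
    if not kernel32.QueryThreadCycleTime(handle, ctypes.byref(cycles)):
        raise ctypes.WinError()
    return cycles.value

before = thread_cycles()
sum(range(1_000_000))  # the work being measured
print("cycles:", thread_cycles() - before)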


Get this bounty!!!

#StackBounty: #performance #date #elasticsearch #filter Do additional date range filters increase the performance?

Bounty: 50

To an existing, large Elasticsearch 5 index, I want to add a date field containing the date of indexation of each document. Afterwards, I want to query this index to return all documents created in the last minute.

In the Elasticsearch Definitive Guide for version 1, it is mentioned that adding additional filters for day, month and/or year can drastically improve performance. Newer versions of the guide no longer say so.

Can I gain performance in Elasticsearch 5 by adding additional date filters?
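
For reference, the "created in the last minute" part only needs a single range filter with date math; whether extra day/month/year filters still help on top of it is exactly what is in question. A minimal sketch with the Python client, where indexed_at and my_index are made-up names for the indexation-date field and the index:

from elasticsearch import Elasticsearch

es = Elasticsearch(["http://localhost:9200"])

# All documents whose indexation date falls within the last minute.
query = {
    "query": {
        "bool": {
            "filter": [
                {"range": {"indexed_at": {"gte": "now-1m", "lt": "now"}}}
            ]
        }
    }
}

result = es.search(index="my_index", body=query)
print(result["hits"]["total"])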


Get this bounty!!!

#StackBounty: #performance #numerical-methods #symbolic-math #numerical-computing Symbolic vs Numeric Math – Performance

Bounty: 50

Do symbolic math calculations for solving nonlinear polynomial systems have a huge performance (calculation speed) disadvantage compared to numeric calculations? Are there any benchmarks/data about this?

Found a related question: https://scicomp.stackexchange.com/questions/21754/symbolic-computation-vs-numerical-computation
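
One way to get a data point of your own is to time both approaches on the same small system. A rough sketch using SymPy for the symbolic solve and SciPy for the numeric one; the two-equation system below is made up for illustration, and the gap grows quickly with system size:

import time
import sympy as sp
from scipy.optimize import fsolve

x, y = sp.symbols("x y")
eqs = [x**2 + y**2 - 4, x**3 - y - 1]   # small nonlinear polynomial system

t0 = time.perf_counter()
sym_roots = sp.solve(eqs, [x, y])        # exact: finds every solution
t_sym = time.perf_counter() - t0

t0 = time.perf_counter()
num_root = fsolve(lambda v: [v[0]**2 + v[1]**2 - 4,
                             v[0]**3 - v[1] - 1], [1.0, 1.0])
t_num = time.perf_counter() - t0

print(f"symbolic: {t_sym:.4f} s for {len(sym_roots)} roots")
print(f"numeric:  {t_num:.4f} s for one root near the starting guess: {num_root}")

The trade-off is not only speed: the symbolic solver returns all roots exactly, while the numeric solver returns one root per starting point and can fail to converge.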


Get this bounty!!!

#StackBounty: #amazon-ec2 #performance #mongodb MongoDB stalling at 100 writes/s but benchmark says 54000 writes/s

Bounty: 50

I am running MongoDB (v3.4 Community Edition) on an AWS EC2 m4.large instance. I installed it according to this MongoDB tutorial. I haven’t modified any MongoDB config. I haven’t configured any replica set or shard. I have a Jersey API which interacts with MongoDB using org.mongodb.morphia (v1.3.2) Java Driver.

I have created a load test using SoapUI where I make 100 calls to the API, which in turn create 100 write operations (one document is created in one collection per write operation) for MongoDB. I run the test for 24 hours. At the end, I see my webserver trying to communicate with the Mongo server, but the connection is timing out and the host CPU is running at 100%. I conclude that the MongoDB server is choking.

I then tried this benchmarking exercise, still with a database hosted on an m4.large. 50,000,000 records were created in 923.133 seconds i.e. 54,113 insertions per second. That is over 500 times faster!

If MongoDB is performing so well, then why is it choking at 100 insertions per second when going through the Java driver? Is the Java driver slow? Is my use of the Java driver wrong? Is my EC2 instance size too low? Would it help to add replication (RAID and replica sets)?

I am new to MongoDB hosting and would really appreciate your help.
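
One way to narrow this down is to measure the raw insert rate from the application host with a bare driver loop, independent of Jersey/SoapUI. The sketch below uses PyMongo rather than your Morphia setup, purely as a baseline, and the URI, database, collection and field names are placeholders:

import time
from pymongo import MongoClient

client = MongoClient("mongodb://your-mongo-host:27017")  # placeholder URI
coll = client["loadtest"]["docs"]

n = 10_000
start = time.perf_counter()
for i in range(n):
    # One document per write, acknowledged write concern (the default).
    coll.insert_one({"seq": i, "payload": "x" * 256})
elapsed = time.perf_counter() - start
print(f"{n / elapsed:.0f} inserts/s")

If this runs at thousands of inserts per second, the bottleneck is more likely in the API layer (for example, creating a new client per request instead of reusing one) than in MongoDB itself; note also that bulk or unacknowledged writes, as used by many benchmarks, will report far higher numbers than single acknowledged inserts.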


Get this bounty!!!

#StackBounty: #amazon-ec2 #performance #mongodb MongoDB Community edition choking at 100 write operations per second

Bounty: 50

I have installed MongoDB (v3.4) on an AWS EC2 instance (m4.large) using the following link. I haven't modified any MongoDB config. I haven't configured any replica set or shard. I have a Jersey API which interacts with MongoDB using the Java driver. I created a load test using SoapUI where I made 100 calls to the API, which in turn created 100 write operations (one document is created in one collection per write operation) for MongoDB. I ran the test for 24 hours. At the end, my webserver was still trying to communicate with the Mongo server, but the connection was timing out and the host CPU was running at 100%. That is why I concluded the MongoDB server is choking.

Is my EC2 instance size too low?
Do I have to set up replication (RAID and replica sets)?

I am going to try the benchmarking exercise mentioned here. I will update the post afterwards. I am new to MongoDB hosting and would really appreciate your help.

Update 1: I did the MongoDB benchmarking test above. The database hosted on an m4.large instance gave the following data: 50,000,000 records were created in 923.133 seconds, i.e. 54,113 insertions per second. If MongoDB is performing so well, then why is it choking at 100 insertions per second when done through the Java driver? Is the Java driver slow? Is my implementation with the Java driver wrong? I am using the org.mongodb.morphia (v1.3.2) Java driver.


Get this bounty!!!

#StackBounty: #mysql #performance #performance-tuning #explain A MySQL EXPLAIN number of rows discrepancy

Bounty: 50

MySQL 5.5.49-log

More questions on the query in Why does it use temporary? (MySQL) (the query is the same but the question is different):

I have the following table (filled with many rows):

CREATE TABLE `SectorGraphs2` (
  `Kind` tinyint(3) UNSIGNED NOT NULL COMMENT '1 - producer, 2 - genre, 3 - region',
  `Criterion` tinyint(3) UNSIGNED NOT NULL,
  `Period` tinyint(3) UNSIGNED NOT NULL,
  `PeriodStart` date NOT NULL,
  `SectorID` int(10) UNSIGNED NOT NULL,
  `Value` float NOT NULL
) ENGINE=MyISAM DEFAULT CHARSET=utf8 COLLATE=utf8_bin;

ALTER TABLE `SectorGraphs2`
  ADD UNIQUE KEY `Producer2` (`Kind`,`Criterion`,`Period`,`PeriodStart`,`SectorID`) USING BTREE,
  ADD KEY `SectorID` (`SectorID`);

then I run:

EXPLAIN 
    SELECT SectorID, SUM(Value)
    FROM SectorGraphs2
    WHERE Kind = 1 AND Criterion = 7
      AND Period = 1
      AND PeriodStart >= ? AND PeriodStart < ? + INTERVAL 1 WEEK
    GROUP BY SectorID

and it produces:

           id: 1
  select_type: SIMPLE
        table: SectorGraphs2
         type: range
possible_keys: Producer2,SectorID
          key: Producer2
      key_len: 6
          ref: NULL
         rows: 1
        Extra: Using index condition; Using temporary; Using filesort

See nicely formatted explain here.

My question: why does it use a temporary table and filesort yet report only 1 row examined? It seems that, because a temporary table is used, more than one row should be processed. How can I determine the real number of rows processed? How do I resolve this discrepancy in the number of processed rows?

Note that the task I have now been assigned is to eliminate heavy queries (those involving too many rows), and I do not yet know how to do this.
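
To see how many rows the server actually touches (as opposed to the optimizer's estimate shown by EXPLAIN), you can compare the session Handler_* counters before and after running the query. A sketch using MySQL Connector/Python; the connection details and date parameters are placeholders, and FLUSH STATUS requires the RELOAD privilege:

import mysql.connector

conn = mysql.connector.connect(host="localhost", user="user",
                               password="secret", database="mydb")
cur = conn.cursor()

cur.execute("FLUSH STATUS")  # zero the session counters
cur.execute("""
    SELECT SectorID, SUM(Value)
    FROM SectorGraphs2
    WHERE Kind = 1 AND Criterion = 7 AND Period = 1
      AND PeriodStart >= %s AND PeriodStart < %s + INTERVAL 1 WEEK
    GROUP BY SectorID
""", ("2017-01-01", "2017-01-01"))
cur.fetchall()

cur.execute("SHOW SESSION STATUS LIKE 'Handler%'")
for name, value in cur.fetchall():
    if int(value):
        # Handler_read_* show rows actually examined; Handler_write counts
        # rows written to the temporary table used for GROUP BY.
        print(name, value)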


Get this bounty!!!

#StackBounty: #machine-learning #cross-validation #performance LFW face pair-matching performance evaluation, why retrain model on view2?

Bounty: 50

I am trying to understand how performance evaluation works in LFW(Labeled Faces in the Wild) dataset http://vis-www.cs.umass.edu/lfw/.

I am interested in the pair-matching task. However, as I dug deeper, I found myself confused.

Here is a brief summary on evaluating pair-matching performance in LFW dataset:

  1. LFW dataset is divided into View1 and View2. View1 is for the development of algorithms: you can use it to select a model, tune parameters and choose features. View2 is for reporting the accuracy of the model produced with View1.

  2. View1 description:

    For development purposes, we recommend using the below training/testing split, which was generated randomly and independently of the splits for 10-fold cross validation, to avoid unfairly overfitting to the sets above during development. For instance, these sets may be viewed as a model selection set and a validation set. See the tech report below for more details.

    pairsDevTrain.txt, pairsDevTest.txt

  3. View2 description:

    As a benchmark for comparison, we suggest reporting performance as 10-fold cross validation using splits we have randomly generated.

I also found an example of carrying out the experiment with PCA for face pair-matching in the LFW 2008 paper.

Eigenfaces for pair matching. We computed eigenvectors from the training set of View 1 and determined the threshold value for classifying pairs as matched or mismatched that gave the best performance on the test set of View 1. For each run of View 2, the training set was used to compute the eigenvectors, and pairs were classified using the threshold on Euclidian distance from View 1.

State of the art pair matching. To determine the current best performance on pair matching, we ran an implementation of the current state of the art recognition system of Nowak and Jurie [14].11 The Nowak algorithm gives a similarity score to each pair, and View 1 was used to determine the threshold value for classifying pairs as matched or mismatched. For each of the 10 folds of View 2 of the database, we trained on 9 of the sets and computed similarity measures for the held out test set, and classified pairs using the threshold
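
In other words, the View2 protocol is: for each of the 10 official folds, fit whatever needs fitting (in the Nowak example, just a decision threshold on the similarity score) using the other 9 folds, classify the held-out fold, and report the mean accuracy over the 10 runs. A schematic sketch of that loop, with random placeholder data standing in for the real per-fold similarity scores and match labels:

import numpy as np

def choose_threshold(scores, labels):
    """Pick the threshold that maximizes accuracy on the training pairs."""
    best_t, best_acc = None, -1.0
    for t in np.unique(scores):
        acc = np.mean((scores >= t) == labels)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

rng = np.random.default_rng(0)
# Placeholder: each fold is (similarity scores, 0/1 match labels) for its pairs.
folds = [(rng.normal(size=600), rng.integers(0, 2, size=600)) for _ in range(10)]

accuracies = []
for i in range(10):
    train_scores = np.concatenate([folds[j][0] for j in range(10) if j != i])
    train_labels = np.concatenate([folds[j][1] for j in range(10) if j != i])
    t = choose_threshold(train_scores, train_labels)  # "training" on 9 folds
    test_scores, test_labels = folds[i]               # evaluate on the held-out fold
    accuracies.append(np.mean((test_scores >= t) == test_labels))

print(f"mean accuracy: {np.mean(accuracies):.3f} +/- {np.std(accuracies):.3f}")

With a model like eigenfaces, the per-fold "training" also includes recomputing the eigenvectors on the 9 training folds, which is why View2 involves retraining rather than reusing the View1 model.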

My questions are:

  1. How to do training with View1 data using 10-fold cross validation?

    The data is already split into pairsDevTrain.txt and pairsDevTest.txt. Does it mean that I need to merge these two files and then do a standard 10-fold cross-validation to train my model?

  2. Why is 10-fold cross validation required in View2?

    Since the model and parameters are all determined using data in View1, why not just use all of the View2 data to report performance?

  3. Since 10-fold cross validation is required in View2, there must be a training process. Why retrain another model?

    It is worth mentioning that, in both View1 and View2, the train and test data do not share identities, i.e. a person who appears in train will not appear in test.

  4. 10-fold cross validation is recommended for both View1 and View2. 10-fold splits are given for View2 but not View1. Is there a reason why?

Thank you beforehand for helping me understand the performance evaluation for LFW.


Get this bounty!!!

#StackBounty: #windows #ssd #performance #benchmarking #nvme SSD with write cache buffer flushing turned on is way slower in AS SSD but…

Bounty: 50

When running a benchmark using the program AS SSD on an NVMe drive, if the second checkbox seen in the image below is unchecked, the drive gets terrible write performance in AS SSD (see the first benchmark screenshot below) but not in CrystalDiskMark (see the last screenshot); however, if I check that box, then AS SSD performs well. Does anyone know what's going on here? My concern is that, according to the checkbox description, I should NOT have it checked since my drive doesn't have its own power supply, but AS SSD is so slow that I'm concerned other programs may be affected.

In case it’s helpful, I learned about that checkbox in the first place from reading https://www.reddit.com/r/Dell/comments/628odr/toshiba_nvme_slow_write_speed_fix/, but this doesn’t explain to me why CDM would still be fast. See also Slow SSD performance (Toshiba 1GB NVMe). I updated my Intel RST drivers and that didn’t help.

I understand there could be a roundabout driver solution here, but I want to stay within warranty and try to find out what’s going on so I can get Dell to support this and issue an official solution.

[Screenshots: the drive's write-cache policy checkboxes, the AS SSD benchmark results, and the CrystalDiskMark benchmark results]


Get this bounty!!!

#StackBounty: #menus #navigation #performance #cache Nav and logo loading each time causing menu to move JointsWP – Foundation 6

Bounty: 50

Hi, I was wondering if anyone could help.

I'm creating a site using the JointsWP Foundation 6 theme and have created a new fixed side menu which includes the logo and social links. My problem is that every time a user clicks on the menu, it reloads, causing a shift. Is there a way of stopping this? Is it a page-load issue, or have I gone about it the wrong way? I tried adding a caching plugin, but it hasn't seemed to help. Any suggestions appreciated.

Here is examples of my code:

<body <?php body_class(); ?>>

    

and the page.php

<?php get_header(); ?>
    
<div id="content">

    <div id="inner-content" class="row">

        <main id="main" class="large-9 medium-9 columns contentSection" role="main">

            <?php if (have_posts()) : while (have_posts()) : the_post(); ?>

                <?php get_template_part( 'parts/loop', 'page' ); ?>

            <?php endwhile; endif; ?>

        </main> <!-- end #main -->

    </div> <!-- end #inner-content -->

</div> <!-- end #content -->

Edit:

I have added two test pages so that you can see the issue: biggreenspace.com/test-page-1, from which you can navigate to test page 2 (the other menu will take you to the maintenance screen). This primarily happens in Chrome and Firefox, not in IE/Edge.


Get this bounty!!!