## #StackBounty: #security #cryptography #cryptonote #cryptonight #algorithm Is CryptoNight the hash function which plays the role of the …

### Bounty: 50

Is CryptoNight the hash function which plays the role of the “random oracle” in the random oracle model under which certain properties of CryptoNote are mathematically proven in the original paper? Or am I confusing concepts?

If I’m confusing concepts, what is the “random oracle” hash function in CryptoNote?

And how much do we know about the likelihood that the algorithm is good enough to reasonably approximate the behavior of a random oracle? How does that impact the validity of the proofs in the paper? Do we have reasons to worry that it might have some bad behavior which would invalidate the Linkability, Exculpability, Unforgeability or Anonymity properties?
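For background on the terminology: in the random oracle model, the security proofs replace the concrete hash with an idealized, lazily sampled random function, and soundness in practice then depends on how well the real hash approximates that object. A minimal sketch of the idealization (the 32-byte output size is an arbitrary illustrative choice):

```python
import os

class RandomOracle:
    """Idealized hash: a fresh uniformly random output for each new query,
    with repeated queries answered consistently (lazy sampling)."""
    def __init__(self, out_bytes=32):
        self.table = {}          # query -> cached random output
        self.out_bytes = out_bytes

    def query(self, message: bytes) -> bytes:
        if message not in self.table:
            self.table[message] = os.urandom(self.out_bytes)
        return self.table[message]

oracle = RandomOracle()
assert oracle.query(b"hello") == oracle.query(b"hello")  # consistent
assert len(oracle.query(b"hello")) == 32
```

No real hash function is actually a random oracle; the proofs carry over only heuristically, which is exactly what the question is probing.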

Get this bounty!!!

## #StackBounty: #regex #algorithm regular expression algorithm beyond leetcode

### Bounty: 50

The problem is the leetcode question,
https://leetcode.com/problems/regular-expression-matching/#/description,
which uses the pattern (characters with . and/or *) to match the string (characters without * and .). What I want to ask is: is it possible for a pattern to match another pattern in the scenario of this question?

p.s.

‘.’ matches any single character.

‘*’ matches zero or more of the preceding element.

For example, try to use bdbaa.* to match bdb.*daa

I know the solution to the original problem can be dynamic programming or backtracking, but I don’t know how I can solve the pattern-matching-pattern version of the problem.
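For reference, the dynamic-programming solution to the original string-vs-pattern problem can be sketched as below; the pattern-vs-pattern variant asked about here would need a different state space (e.g. pairs of pattern positions), which this sketch does not attempt:

```python
def is_match(s: str, p: str) -> bool:
    # dp[i][j]: does s[:i] match p[:j]?
    m, n = len(s), len(p)
    dp = [[False] * (n + 1) for _ in range(m + 1)]
    dp[0][0] = True
    for j in range(1, n + 1):
        if p[j - 1] == '*':        # 'x*' can match the empty string
            dp[0][j] = dp[0][j - 2]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if p[j - 1] == '*':
                # zero occurrences, or one more occurrence of p[j-2]
                dp[i][j] = dp[i][j - 2] or \
                    (dp[i - 1][j] and p[j - 2] in ('.', s[i - 1]))
            else:
                dp[i][j] = dp[i - 1][j - 1] and p[j - 1] in ('.', s[i - 1])
    return dp[m][n]

assert is_match("aab", "c*a*b")
assert not is_match("mississippi", "mis*is*p*.")
```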

Get this bounty!!!

## #StackBounty: #javascript #node.js #algorithm Punch multiple strings into a single (shortest possible) string that includes all the cha…

### Bounty: 50

My purpose is to punch multiple strings into a single (shortest) string that will contain all the characters of each string in a forward direction. The question is not specific to any language, but is more about the algorithm part (I will probably implement it in a Node server, hence the nodejs/javascript tags).

So, to explain the problem:

Let's consider I have a few strings:

["jack", "apple", "maven", "hold", "solid", "mark", "moon", "poor", "spark", "live"]

The resultant string should be something like:

"sjmachppoalidveonrk"

jack: sjmachppoalidveonrk

apple: sjmachppoalidveonrk

solid: sjmachppoalidveonrk

====================================>>>> all in the forward direction

These are all manual evaluations, and the output may not be 100% perfect in the example.

So, the point is that all the letters of each string have to exist in the output in the
FORWARD DIRECTION (this is where the actual problem lies). Eventually the server will send the final string, and numbers like 27594 will be generated and passed along to extract the token at the receiving end. If I just had to punch them into a minimal possible string, it would have been much easier (in that case only unique chars are enough). But in this case there are some points:

1. Letters can be present multiple times, though I have to reuse any
letter if possible. E.g. for solid and hold, o > l > d can be
reused in the forward direction; but for apple (a > p) and spark
(p > a) we have to repeat a, as it appears before p
in apple and after p in spark, so we either need to repeat
a or p. We cannot do p > a > p to cover both cases,
because apple needs two ps after a.

2. We have no option to place a single p and use the same
index twice at extraction time; we need multiple ps, with no
option left, since the input token contains them.

3. I am (not) sure that there are multiple possible outputs for a set of
strings, but the concern is that the result should be minimal in length;
the combination doesn't matter as long as it covers all the tokens
in a forward direction. All (or one) of the outputs of minimal possible
length need to be traced.
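Whatever construction is used, the forward-direction requirement above is exactly the requirement that each input word be a subsequence of the output, which is cheap to verify; checking the example output from the question:

```python
def is_subsequence(word: str, result: str) -> bool:
    # Consume result left to right, matching word's letters in order.
    it = iter(result)
    return all(ch in it for ch in word)

words = ["jack", "apple", "maven", "hold", "solid",
         "mark", "moon", "poor", "spark", "live"]
candidate = "sjmachppoalidveonrk"

# Every word from the example is embedded in forward direction.
assert all(is_subsequence(w, candidate) for w in words)
```

So the example output, despite the disclaimer, does cover all ten words.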

I have tried starting with an arbitrary string, then analyzing the next string, splitting out its letters and placing them accordingly. But after some time it appears that the current string's letters could have been placed better, if the last (or a previous) string's letters had been placed with the current string in mind. Yet that earlier string was analyzed and placed based on the (multiple) strings already processed, and placing letters in favor of something not yet processed seems difficult, because to do that we would need to process it first. Maybe maintaining a tree of all processed/unprocessed strings would help in building the final string. Is there any better way? It seems like brute force otherwise!
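For what it's worth, this is the shortest common supersequence (SCS) problem, which is NP-hard for an arbitrary number of strings, so an exact minimal answer likely does require search. A common heuristic baseline (my assumption, not part of the question) folds the words together pairwise using the exact two-string DP:

```python
def scs_pair(a: str, b: str) -> str:
    # Exact SCS of two strings: LCS DP, then merge along the LCS.
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            dp[i + 1][j + 1] = (dp[i][j] + 1 if a[i] == b[j]
                                else max(dp[i][j + 1], dp[i + 1][j]))
    out, i, j = [], m, n
    while i and j:
        if a[i - 1] == b[j - 1]:
            out.append(a[i - 1]); i -= 1; j -= 1
        elif dp[i - 1][j] >= dp[i][j - 1]:
            out.append(a[i - 1]); i -= 1
        else:
            out.append(b[j - 1]); j -= 1
    out.extend(reversed(a[:i]))
    out.extend(reversed(b[:j]))
    return ''.join(reversed(out))

merged = scs_pair("apple", "spark")
assert len(merged) == 9  # 5 + 5 - LCS("apple", "spark") = 9
```

Folding a whole list through scs_pair gives a valid forward-direction result but not necessarily the global minimum, which matches the difficulty described above.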

Note: I know there are a lot of other transformations possible, but please try not to suggest anything else; we are doing a bit of research on this, and anything in the context of the question will be highly appreciated. If anything is not clear or needs further explanation, feel free to comment. Thanks a ton for reading the entire problem with patience.

Get this bounty!!!

## #StackBounty: #java #algorithm #apache-spark #spark-streaming Build path for different events and assign globalID

### Bounty: 50

We are working with Spark 1.6 and we are trying to keep a global identity for similar events. There can be a few “groups” of events with identical IDs (in the example the IDs are the numbers; letters are added just for uniqueness). And we know that some of these events are similar, so we are able to connect them. We want to keep something like:

Z -> 1, 2, 3
X -> 4

so in the future, if some events with id 4 come in, we can assign X as their global identity.

Please check example for better illustration:

Let’s say we have some streaming data coming into spark job.

1a
1b
2c
2d
2e
3f
3g
3h
4i

Since event 1 is our first appearance, we want to assign Z to it.
Next, we learn that 1b and 2c are similar, so we want to keep a 2->1 mapping somewhere. The same goes for 2e and 3f, so we need a 3->2 mapping. So for now we have 3 pairs: 1->Z, 2->1, 3->2.

And we want to create “historical” path: Z <- 1 <- 2 <- 3
At the end we will have all events with ID = Z.

1a -> Z
1b -> Z
2c -> Z
2d -> Z
2e -> Z
3f -> Z
3g -> Z
3h -> Z
4i -> X

We tried to use mapWithState, but the only thing we were able to achieve was the 2->1 and 3->2 mappings. With mapWithState we were not able to reach the state of the “parent” from the state of the current event, e.g. for current event 3 with parent 2, we could not get to 2 -> 1, and so not to 1 -> Z either.

Is it possible to have some global mapping for this? We already tried accumulators and broadcast variables, but they do not look very suitable. And we were not able to replace events 1 (for the first mapping) and events 2 (for the second mapping) with Z.

If a new event 5 comes in and it is similar to 3h, for example, we need to assign the mapping 5->Z again.
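Spark specifics aside, the bookkeeping described here is a parent-pointer (union-find style) structure: resolving an id follows the chain 3 -> 2 -> 1 -> Z, and path compression makes later lookups direct. A minimal driver-side sketch in plain Python (standing in for whatever shared state store is used):

```python
class GlobalIds:
    def __init__(self):
        self.parent = {}   # local id -> parent id, or final label at the root

    def new_root(self, local_id, label):
        self.parent[local_id] = label

    def link(self, child_id, parent_id):
        # "child event is similar to parent": child resolves through parent.
        self.parent[child_id] = parent_id

    def resolve(self, local_id):
        label = local_id
        while label in self.parent:          # walk up the chain to the label
            label = self.parent[label]
        # path compression: point the whole chain directly at the label
        node = local_id
        while node in self.parent and self.parent[node] != label:
            self.parent[node], node = label, self.parent[node]
        return label

ids = GlobalIds()
ids.new_root(1, "Z")   # first appearance of 1 -> Z
ids.link(2, 1)         # 1b ~ 2c
ids.link(3, 2)         # 2e ~ 3f
ids.new_root(4, "X")
assert [ids.resolve(i) for i in [1, 2, 3, 4]] == ["Z", "Z", "Z", "X"]
```

In a streaming job this table would still have to live somewhere globally visible (an external store, or a reduction over the chains), which is the hard part the question is about.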

Get this bounty!!!

## #StackBounty: #algorithm #vector #3d #projection #triangulation Project and triangulate a vector. From 3D to 2D and viceversa

### Bounty: 50

I'm having some issues when projecting a 3D vector into an image with known pose, and when triangulating two 2D lines from two images with known poses.

Project 3D vector

Let's suppose we have a vector v_world = (x_v, y_v, z_v) in world coordinates. I want to project it into an image taken by a camera with known intrinsic and extrinsic parameters. The pose of this camera is a 4×4 homogeneous matrix Tcw. For a vector projection, I only need the 3×3 rotation submatrix Rcw. Let's get the vector v in camera coordinates.

v_camera = Rcw * v_world

This would be the same as.

(v_camera, 0) = Tcw * (v_world, 0)

To get the vector in 2D, I just have to ignore the depth dimension and perform this operation.

v_camera_2D = atan2(v_camera[1], v_camera[0])
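As a quick sanity check of these two steps (rotate into the camera frame, drop depth, take the angle), here is a small sketch with plain lists; the 90° rotation about z is chosen purely for illustration:

```python
from math import atan2, pi, isclose

def mat_vec(R, v):
    # 3x3 matrix times 3-vector.
    return [sum(R[i][k] * v[k] for k in range(3)) for i in range(3)]

def project_direction(Rcw, v_world):
    v_camera = mat_vec(Rcw, v_world)        # v_camera = Rcw * v_world
    return atan2(v_camera[1], v_camera[0])  # drop depth, keep the 2D angle

Rcw = [[0, -1, 0],   # 90 degree rotation about the z axis (illustrative pose)
       [1,  0, 0],
       [0,  0, 1]]
assert isclose(project_direction(Rcw, [1, 0, 0]), pi / 2)
```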

Triangulate two 2D lines

Let's suppose I want to triangulate two lines. The first one passes through the pixel (x_1, y_1) in the first image, with known optical center (cx_1, cy_1), focal length (fx_1, fy_1) and pose Tcw_1. The second line passes through the pixel (x_2, y_2) in the second image, with known optical center (cx_2, cy_2), focal length (fx_2, fy_2) and pose Tcw_2. Both lines' orientations in their images, v_1 and v_2, are also known.

I want to triangulate these lines to get the world orientation of the 3D line.

My first step is to construct the plane that contains the 2D line and the optical center of the camera. The 3D line will be the intersection of the two planes constructed in the two images; that is, the cross product of the planes' normal vectors.

I then just need the normal vector of each desired plane. For this, I calculate the vector from the optical center to the line point.

v_plane1 = ( (x_1 - cx_1) / fx_1, (y_1 - cy_1) / fy_1, 1)

And convert it to world coordinates.

v_plane1_world = (Rcw)^T * v_plane1

I also convert the 2D line orientation to 3D world coordinates.

v_line1 = ( cos(v_1), sin(v_1), 0 )
v_line1_world = (Rcw)^T * v_line1

The normal vector of the desired plane is the cross product of its two containing vectors.

v_normal_plane1_world = v_plane1_world x v_line1_world

And the orientation of the 3D line will be.

v_world = v_normal_plane1_world x v_normal_plane2_world
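The two cross products above can be sketched directly; the plane-spanning vectors are assumed to be already expressed in world coordinates as described:

```python
def cross(u, v):
    # Standard 3D cross product.
    return [u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0]]

def line_direction(v_plane1_world, v_line1_world, v_plane2_world, v_line2_world):
    n1 = cross(v_plane1_world, v_line1_world)  # normal of back-projected plane 1
    n2 = cross(v_plane2_world, v_line2_world)  # normal of back-projected plane 2
    return cross(n1, n2)                       # direction of the 3D line

# Toy check: a line along x, seen in the z=0 plane and the y=0 plane.
d = line_direction([0, 0, 1], [1, 0, 0],   # plane 1 spanned by z and x
                   [0, 1, 0], [1, 0, 0])   # plane 2 spanned by y and x
assert d[0] != 0 and d[1] == 0 and d[2] == 0  # direction is along ±x
```

Note the sign of the result is not determined (cross products flip with normal orientation), which is consistent with only caring about orientation up to sign.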

I have implemented this and it doesn't work. I create an initial 3D line, project it into two images, and triangulate both projections again. The triangulated line and the initial 3D line should be the same (within a float precision threshold). Both lines are the same when I create a basic 3D line (a horizontal line, for example) and project it into two basic images (without any rotation). But the lines don't match when doing the same with rotated and translated images.

Is there any step I am doing wrong? I don't pay attention to the vectors' magnitudes, because I only want the orientation. I can upload my code to a repository if needed.

Get this bounty!!!

## #StackBounty: #python #performance #algorithm #strings #search Naive implementation of KMP algorithm

### Bounty: 50

After reading this answer to the question “High execution time to count overlapping substrings”, I decided to implement the suggested Knuth-Morris-Pratt (KMP) algorithm. I used the pseudo-code listed on Wikipedia for the functions kmp_table and kmp_search.

However, when running it on some corner-cases, I have observed that it is a lot slower than the standard str.find, which apparently uses a modified Boyer-Moore-Horspool algorithm and should thus have worse worst-case performance.

The specific case I looked at is:

```
$ ipython -i kmp.py

In [1]: text = "A"*1000000 + "B"

In [2]: word = "A"*100 + "B"

In [3]: %timeit kmp_search(text, word)
1 loop, best of 3: 410 ms per loop

In [4]: %timeit text.find(word)
1000 loops, best of 3: 703 µs per loop
```

So the difference is about a factor of 1000 for this input. This is probably due to the fact that the native one is written in C while this is written in Python, but I still wanted to see if I did anything stupid here or missed any obvious optimization.

```python
def kmp_table(word):
    table = [0] * len(word)
    position, candidate = 2, 0
    table[0] = -1

    while position < len(word):
        if word[position - 1] == word[candidate]:
            table[position] = candidate + 1
            candidate += 1
            position += 1
        elif candidate > 0:
            candidate = table[candidate]
        else:
            table[position] = 0
            position += 1

    return table


def kmp_search(text, word):
    m, i = 0, 0
    table = kmp_table(word)

    while m + i < len(text):
        if word[i] == text[m + i]:
            if i == len(word) - 1:
                return m
            i += 1
        else:
            if table[i] > -1:
                m += i - table[i]
                i = table[i]
            else:
                m += 1
                i = 0

    return len(text)
```

Get this bounty!!!

## #StackBounty: #algorithm #apache-spark #mapreduce #graph-theory #disjoint-sets Disjoint sets on apache spark

### Bounty: 50

I am trying to find an algorithm for computing disjoint sets (connected components / union-find) on a large amount of data with Apache Spark. The problem is the amount of data: even the raw representation of the graph vertices doesn't fit in RAM on a single machine, and neither do the edges.

The source data is a text file of graph edges on HDFS: "id1 \t id2". Each id is present as a string value, not an int.

The naive solution that I found is:

1. take the rdd of edges -> [id1:id2] [id3:id4] [id1:id3]
2. group the edges by key -> [id1:[id2;id3]] [id3:[id4]]
3. for each record, set the minimum id of each group -> (flatMap) [id1:id1] [id2:id1] [id3:id1] [id3:id3] [id4:id3]
4. reverse the rdd from stage 3: [id2:id1] -> [id1:id2]
5. leftOuterJoin the rdds from stages 3 and 4
6. repeat from stage 2 until the size of the rdd at step 3 stops changing

But this results in the transfer of large amounts of data between nodes (shuffling). Any advice?

Get this bounty!!!

## #HackerEarth: #BattleOfBots 9: Taunt

Problem

Taunt is a two-player board game which is played on a 10X4 grid of cells, with the players starting on opposite sides of the game-board. Each player has an allocated color, Orange (First Player) or Green (Second Player) being conventional. Each player has nine pieces in total. The players move their pieces towards their opponent's area by moving them strategically. Each piece has a different moving feature and is one of 3 types of pieces.

Piece 1: It can move to a horizontally or vertically adjacent cell, if that cell doesn't contain a piece of the same color.

Piece 2: It can move to a horizontally adjacent cell, or two steps forward, if that cell doesn't contain a piece of the same color (except the piece itself). This type of piece can move to its own position if it is in the second last row of the grid and going downward, or if it is in the second row of the grid and going upward.

Piece 3: It can move two steps diagonally in the forward direction, if that cell doesn't contain a piece of the same color (except the piece itself). This type of piece can move to its own position if it is in the second last row of the grid and going downward, or if it is in the second row of the grid and going upward.

Players take turns moving their pieces as described above, and can capture an opponent's piece by jumping on or over the opponent's pieces.

Note: The forward direction for the first player is downward, and for the second player it is upward. If a piece (except piece 1) is moving downward and touches the last row, its direction changes, i.e. now it will move upward. Similarly, once a piece (except piece 1) is moving upward and touches the first row, its direction changes, i.e. now it will move downward.
Rules:

- A player can only move according to the moves mentioned above.
- A player may not move an opponent's piece.
- A player can capture an opponent's piece by jumping on or over the opponent's pieces.

The game will end after 100 moves (50 moves for each player) or when one of the players doesn't have any move left. At the end of the game, the player with the majority of pieces will win. We will play it on a 10X4 grid. The top left of the grid is [0,0] and the bottom right is [9,3].

Input:

The input will be a 10X4 matrix consisting only of 0, 1 or 2. The next line will contain an integer denoting the total number of moves till the current state of the board. The next line contains an integer, 1 or 2, which is your player id.

In the given matrix, top-left is [0,0] and bottom-right is [9,3]. The y-coordinate increases from left to right, and the x-coordinate increases from top to bottom.

A cell is represented by 3 integers. The first integer denotes the player id (1 or 2). The second integer denotes the type of piece (1, 2 or 3). The third integer denotes the direction of the piece (0 (upward) or 1 (downward)). When the piece is of the first type, direction doesn't matter, as the piece is free to move to any horizontally or vertically adjacent cell that doesn't contain a piece of the same color. An empty cell is represented by 000.

Output:

In the first line print the coordinates, separated by a space, of the cell containing the piece you want to move. In the second line print the coordinates of the cell to which the piece will jump. You must take care that you don't print invalid coordinates. For example, [1,1] might be a valid coordinate in the game play if the piece is able to jump to [1,1], but [9,10] will never be. Also, if you play an invalid move or your code exceeds the time/memory limit while determining the move, you lose the game.

Starting state

The starting state of the game is the state of the board before the game starts.
```
131 131 131 121
121 121 111 111
111 000 000 000
000 000 000 000
000 000 000 000
000 000 000 000
000 000 000 000
000 000 000 210
210 210 220 220
220 230 230 230
```

First Input

This is the input given to the first player at the start of the game.

```
131 131 131 121
121 121 111 111
111 000 000 000
000 000 000 000
000 000 000 000
000 000 000 000
000 000 000 000
000 000 000 210
210 210 220 220
220 230 230 230
0
1
```

SAMPLE INPUT

```
000 000 000 000
000 000 000 111
000 000 111 130
000 000 000 000
000 000 000 000
000 220 000 000
131 000 000 000
121 000 210 000
000 210 131 000
000 210 000 000
58
1
```

SAMPLE OUTPUT

```
8 2
8 0
```

Explanation

This is player 1's turn, and the player will move the piece at [8,2], which will take two steps diagonally in the downward direction and end up at [8,0]. After his/her move, the state of the game becomes:

```
000 000 000 000
000 000 000 111
000 000 111 130
000 000 000 000
000 000 000 000
000 220 000 000
131 000 000 000
121 000 210 000
130 210 000 000
000 000 000 000
59
2
```

Note: The direction of the piece is also changed from 1 to 0, as the piece was moving downward and touched the last row. This state will be fed as input to the program of player 2.

Here is the code of the default bot.

Time Limit: 1.0 sec(s) for each input file.
Memory Limit: 256 MB
Source Limit: 1024 KB

Sample Game

## #StackBounty: #c++ #algorithm #object-oriented #c++11 #interview-questions Solve a set of "restricted" linear equations effic…

### Bounty: 100

I was recently asked to solve the following challenge (in C++) as part of the interview process. However, I haven't heard from them at all afterwards, and based on past experiences of unsuccessful applicants that I've read online, my submission didn't meet their standards. Since I did solve the challenge to the best of my abilities, I'm at a loss to understand in what ways I could have made a better solution. I'm posting the problem statement (in my own words) and my solution here.
Please critique it as you would for a potential applicant to your team (as a means for gauging whether it's worthwhile to have a subsequent phone-screen with such an applicant).

## Input Details

The utility would take as input an input file containing a list of equations, one per line. Each equation has the following format: `<LHS> = <RHS>`, where LHS is the left-hand side of the equation and is always a variable name. RHS is the right hand side of the equation and can be composed of the following only:

- Variables
- Unsigned integers
- The + operator

### Assumptions

Input is well-formed i.e.

- Number of variables = Number of equations, with each variable occurring on the LHS of exactly one equation.
- The system of equations has an unique solution, and does not have circular dependencies.
- There are one or more white spaces between each token (numbers, + operator, variables).
- A variable name can only be composed of letters from the alphabet (e.g. for which isalpha(c) is true).
- All integers will fit in a C unsigned long.

## Output Format

The utility would print the value of each variable after evaluating the set of equations, in the format `<variable name> = <unsigned integer value>`. The variables would be sorted in ascending (lexicographic) order.

### Sample Input Output

Input file:

```
off = 4 + r + 1
l = 1 + or + off
or = 3 + 5
r = 2
```

Expected output for the above input:

```
l = 16
off = 7
or = 8
r = 2
```

## Implementation Notes

Due to the simplified nature of the input equations, a full-blown linear equation solver is not required in my opinion (as such a solver would have at least quadratic complexity). A much simplified (and asymptotically faster) solution can be arrived at by modeling the set of input equations as a Directed Acyclic Graph (DAG), by observing the dependencies of the variables from the input equations.

Once we can model the system as a DAG, the steps to derive the variable values are as follows:

- Construct the dependency DAG, where each node in the graph corresponds to a variable, and $(a, b)$ is a directed edge from $a$ to $b$ if and only if the variable $a$ needs to be fully evaluated before evaluating $b$.
- Order the vertices in the DAG thus constructed using topological sort.
- For each vertex in the sorted order, evaluate its corresponding variable fully before moving on to the next vertex.

The algorithm above has a linear complexity, which is the best we could achieve under the current assumptions. I've encapsulated the algorithm in the following class (I've used Google's C++ Style Guide in my code – not sure it's the best choice, but I preferred to follow a style guide that's at least recognized by and arrived at by a non-trivial number of engineers.)

Class header file:

```cpp
//
// Class that encapsulates a (constrained) linear equation solver. See README.md
// for assumptions on input restrictions.
//

#include <unordered_map>
#include <vector>
#include <list>

#ifndef _EVALUATOR
#define _EVALUATOR

class Evaluator {
 private:
  // Stores the values of each variable throughout algorithm
  std::vector<UL> variable_values_;

  // Hash tables that store the correspondence between variable name and index
  std::unordered_map<std::string, UL> variable_index_map_;
  std::unordered_map<UL, std::string> index_variable_map_;

  // Adjacency list for DAG that stores the dependency information amongst
  // variables. If A[i, j] is an edge, it implies variable 'i' appears on the
  // RHS of definition of variable 'j'.
  std::vector<std::list<UL> > dependency_adj_list_;

  // List of equations stored as indices. If the list corresponding to eq[i]
  // contains 'j', then variable 'j' appears on the RHS of variable 'i'.
  std::vector<std::list<UL> > equation_list_;

  // For efficiency, this list stores the number of dependencies for each
  // variable, which is useful while executing a topological sort.
  std::vector<UL> num_dependencies_;

  // Resets all internal data structures
  void Clear();

  // Prints values of internal data structures to aid in debugging
  void PrintState();

  // Adds an entry corresponding to each new variable detected while parsing input
  UL AddNewVar(std::string& );

  // Parse the input equations from filename given as argument, and build the
  // internal data structures coressponsing to the input.
  bool ParseEquationsFromFile(const std::string&);

  // If DAG in dependency_adj_list_ has a valid topological order, returns
  // true along with the ordered vertices in the input vector
  bool GetTopologicalVarOrder(std::vector<UL>&);

 public:
  Evaluator() {};

  /**
   * @brief Evaluate the set of constrained linear equations and returns the
   * values of the variables as a list.
   *
   * @param[in] string: Filename containing list of constrained linear equations.
   * @param[in] vector<string>: If solution exists, returns the values of
   * variables in lexicographic order (ascending).
   *
   * @return True if solution exists (always exists for valid input), false if
   * input is not well-formed (See README.md for more details about input
   * format).
   */
  bool SolveEquationSet(const std::string&, std::vector<std::string>& );
};
#endif
```

The main class file:

```cpp
#include "evaluator.h"

#include <sstream>
#include <unordered_set>
#include <set>
#include <queue>
#include <algorithm>
#include <cassert>

#ifdef _EVALUATOR
// Used for early returns if the expression is false
#define TRUE_OR_RETURN(EXPR, MSG) do { \
    bool status = (EXPR); \
    if (status != true) { \
      cerr << __FUNCTION__ << ": " << MSG << endl; \
      return false; \
    } \
  } while(0)
#endif

using namespace std;

//**** Helper functions local to the file ****

// Returns true if each character in the non-empty string is a digit
bool IsNumber(string s) {
  return !s.empty() && std::all_of(s.begin(), s.end(), ::isdigit);
}

// Given a string, returns a vector of tokens separated by whitespace
vector<string> ParseTokensFromString(const string& s) {
  istringstream iss(s);
  vector<string> token_list;
  string token;
  while (iss >> token)
    token_list.push_back(token);
  return token_list;
}

// Returns true if the string can be a valid variable name (i.e has
// only alphabetical characters in it).
bool IsValidVar(string& v) {
  for (auto& c: v)
    TRUE_OR_RETURN(isalpha(c), "Non-alphabetical char in variable: " + v);
  return true;
}

//********************************************

void Evaluator::Clear() {
  variable_values_.clear();
  variable_index_map_.clear();
  index_variable_map_.clear();
  dependency_adj_list_.clear();
  equation_list_.clear();
  num_dependencies_.clear();
}

void Evaluator::PrintState() {
  for (auto i = 0U; i < dependency_adj_list_.size(); ++i)
    cout << index_variable_map_[i] << "(" << i << ") =>"
         << "Value(" << variable_values_[i] << "), Deps("
         << num_dependencies_[i] << ")" << endl;
}

// Ensures all data structures correctly set aside an entry for the new variable
UL Evaluator::AddNewVar(string& v) {
  if (variable_index_map_.count(v) == 0) {
    dependency_adj_list_.push_back(list<UL>());
    equation_list_.push_back(list<UL>());
    variable_values_.push_back(0);
    num_dependencies_.push_back(0);
    variable_index_map_.insert(make_pair(v, dependency_adj_list_.size() - 1));
    index_variable_map_.insert(make_pair(dependency_adj_list_.size() - 1, v));
    assert(num_dependencies_.size() == variable_values_.size() &&
           variable_index_map_.size() == variable_values_.size() &&
           variable_values_.size() == dependency_adj_list_.size());
  }
  return variable_index_map_[v];
}

// Parses equation from given input file line-by-line, checking
// for validity of input at each step and returning true only if
// all equations were successfully parsed.
bool Evaluator::ParseEquationsFromFile(const string& sEqnFile) {
  string line;
  ifstream infile(sEqnFile);

  // This LUT serves as a sanity check for duplicate definitions of vars
  // As per spec, only ONE definition (appearance as LHS) per variable is handled
  unordered_set<string> defined_vars;

  while (getline(infile, line)) {
    vector<string> tokens = ParseTokensFromString(line);
    string lhs = tokens[0];

    // Check if equation is adhering to spec
    TRUE_OR_RETURN(tokens.size() >= 3 && IsValidVar(lhs) && tokens[1] == "=",
                   "Invalid equation: " + line);

    // Check if variable on LHS was previously defined - this would make the
    // current approach untenable, and require general equation solver.
    TRUE_OR_RETURN(defined_vars.count(lhs) == 0, "Multiple defn for: " + lhs);
    defined_vars.insert(lhs);
    const UL lhs_idx = AddNewVar(lhs);

    // The operands appear in alternate positions in RHS, tracked by isOp
    for (size_t i = 2, isOp = 0; i < tokens.size(); ++i, isOp ^= 1) {
      string token = tokens[i];
      if (isOp)
        TRUE_OR_RETURN(token == "+", "Unsupported operator: " + token);
      else {
        if (IsNumber(token))
          variable_values_[lhs_idx] += stol(token);
        else {
          TRUE_OR_RETURN(IsValidVar(token), "Invalid variable name: " + token);
          // Token variable must be evaluated before LHS.
          // Hence adding token => LHS edge, and adding token to RHS of
          // equation_list_[lhs]
          auto token_idx = AddNewVar(token);
          dependency_adj_list_[token_idx].push_back(lhs_idx);
          assert(lhs_idx < equation_list_.size());
          equation_list_[lhs_idx].push_back(token_idx);
          num_dependencies_[lhs_idx]++;
        }
      }
    }
  }
  return (variable_index_map_.size() == dependency_adj_list_.size() &&
          dependency_adj_list_.size() == variable_values_.size());
}

// Execute the BFS version of topological sorting, using queue
bool Evaluator::GetTopologicalVarOrder(vector<UL>& ordered_vertices) {
  ordered_vertices.clear();
  queue<UL> q;
  for (auto i = 0U; i < dependency_adj_list_.size(); ++i)
    if (num_dependencies_[i] == 0)
      q.push(i);

  while (!q.empty()) {
    UL var_idx = q.front();
    ordered_vertices.push_back(var_idx);
    q.pop();
    for (auto& nbr: dependency_adj_list_[var_idx]) {
      assert(num_dependencies_[nbr] >= 0);
      num_dependencies_[nbr]--;
      if (num_dependencies_[nbr] == 0)
        q.push(nbr);
    }
  }
  return (ordered_vertices.size() == dependency_adj_list_.size());
}

// Solves the constrained set of linear equations in 3 phases:
// 1) Parsing equations and construction of the dependency DAG
// 2) Topological sort on the dependency DAG to get the order of vertices
// 3) Substituting the values of variables according to the sorted order,
//    to get the final values for each variable.
bool Evaluator::SolveEquationSet(const string& eqn_file,
                                 vector<string>& solution_list) {
  Clear();
  vector<UL> order;
  TRUE_OR_RETURN(ParseEquationsFromFile(eqn_file), "Parsing Equations Failed");
  TRUE_OR_RETURN(GetTopologicalVarOrder(order), "Topological Order Not Found");

  // Populate variable values in topological order
  for (auto& idx: order)
    for (auto& nbr: equation_list_[idx])
      variable_values_[idx] += variable_values_[nbr];

  // Get keys from the LUT sorted in ascending order
  set<pair<string, UL> > sorted_var_idx;
  for (auto& vi_pair: variable_index_map_)
    sorted_var_idx.insert(vi_pair);
  for (auto& vi_pair: sorted_var_idx)
    solution_list.push_back(vi_pair.first + " = " +
                            to_string(variable_values_[vi_pair.second]));
  return true;
}
#endif
```

The usage of the class is as follows:

```cpp
string eqn_file, log_file;
Evaluator evaluate;
vector<string> solution_list;
// Logic to get input filename from user - skipping it here
bool bStatus = evaluate.SolveEquationSet(eqn_file, solution_list);
for (auto& s: solution_list)
  cout << s << endl;
```

Get this bounty!!!

## #StackBounty: #c++ #algorithm #boost #numerical-methods Double exponential quadrature

### Bounty: 50

I'm trying to lighten the code review load for the maintainers of boost.math, and I was hoping you guys could help me out. I have a pull request which implements tanh-sinh quadrature, which is provably optimal for integrands in Hardy spaces. Here's my code, which is also reproduced below. I have a few design worries.

1. It is a class and not a function. This is a bit confusing; I worry that people will not recognize that the constructor is doing some one-time calculations to make integrations faster.
2. It takes a long time to compile. I generated the abscissas and weights to 100 digits, and then they must be lexically cast to their correct type. I could keep fewer levels of abscissas and weights in the .hpp, but then the runtime would be longer for complex integrands.
3.
For integrands in Hardy spaces, the number of correct digits always doubles on each refinement. However, we want to do just the right amount of work to deliver the requested accuracy, which is almost always overshot.

Interface:

```cpp
#ifndef BOOST_MATH_QUADRATURE_TANH_SINH_HPP
#define BOOST_MATH_QUADRATURE_TANH_SINH_HPP

#include <cmath>
#include <limits>
#include <memory>
#include <boost/math/quadrature/detail/tanh_sinh_detail.hpp>

namespace boost{ namespace math{

template<class Real>
class tanh_sinh
{
public:
    tanh_sinh(Real tol = sqrt(std::numeric_limits<Real>::epsilon()), size_t max_refinements = 20);

    template<class F>
    Real integrate(F f, Real a, Real b, Real* error = nullptr);

private:
    std::shared_ptr<detail::tanh_sinh_detail<Real>> m_imp;
};

template<class Real>
tanh_sinh<Real>::tanh_sinh(Real tol, size_t max_refinements)
    : m_imp(std::make_shared<detail::tanh_sinh_detail<Real>>(tol, max_refinements))
{
    return;
}

template<class Real>
template<class F>
Real tanh_sinh<Real>::integrate(F f, Real a, Real b, Real* error)
{
    using std::isfinite;
    using boost::math::constants::half;
    using boost::math::detail::tanh_sinh_detail;

    if (isfinite(a) && isfinite(b))
    {
        if (b <= a)
        {
            throw std::domain_error("Arguments to integrate are in wrong order; integration over [a,b] must have b > a.\n");
        }
        Real avg = (a+b)*half<Real>();
        Real diff = (b-a)*half<Real>();
        auto u = [=](Real z) { return f(avg + diff*z); };
        return diff*m_imp->integrate(u, error);
    }

    // Infinite limits:
    if (a <= std::numeric_limits<Real>::lowest() && b >= std::numeric_limits<Real>::max())
    {
        auto u = [=](Real t) { auto t_sq = t*t; auto inv = 1/(1 - t_sq); return f(t*inv)*(1+t_sq)*inv*inv; };
        return m_imp->integrate(u, error);
    }

    // Right limit is infinite:
    if (isfinite(a) && b >= std::numeric_limits<Real>::max())
    {
        auto u = [=](Real t) { auto z = 1/(t+1); auto arg = 2*z + a - 1; return f(arg)*z*z; };
        return 2*m_imp->integrate(u, error);
    }

    // Left limit is infinite:
    if (isfinite(b) && a <= std::numeric_limits<Real>::lowest())
    {
        auto u = [=](Real t) { return f(b-t); };
        auto v = [=](Real t) { auto z = 1/(t+1); auto arg = 2*z - 1; return u(arg)*z*z; };
        return 2*m_imp->integrate(v, error);
    }

    throw std::logic_error("The domain of integration is not sensible; please check the bounds.\n");
}

}}
#endif
```

Implementation (with some layers of abscissas and weights removed, for brevity):

```cpp
#ifndef BOOST_MATH_QUADRATURE_DETAIL_TANH_SINH_DETAIL_HPP
#define BOOST_MATH_QUADRATURE_DETAIL_TANH_SINH_DETAIL_HPP

#include <cmath>
#include <vector>
#include <typeinfo>
#include <boost/math/constants/constants.hpp>
#include <boost/math/special_functions/next.hpp>

namespace boost{ namespace math{ namespace detail{

// Returns the tanh-sinh quadrature of a function f over the open interval (-1, 1)
template<class Real>
class tanh_sinh_detail
{
public:
    tanh_sinh_detail(Real tol = sqrt(std::numeric_limits<Real>::epsilon()), size_t max_refinements = 15);

    template<class F>
    Real integrate(F f, Real* error = nullptr);

private:
    Real m_tol;
    Real m_t_max;
    size_t m_max_refinements;

    const std::vector<std::vector<Real>> m_abscissas{
        {
            lexical_cast<Real>("0.000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000"), // g(0)
            lexical_cast<Real>("0.951367964072746945727055362904639667492765811307212865380079106867050650113429723656597692697291568999499"), // g(1)
            lexical_cast<Real>("0.999977477192461592864899636308688849285982470957489530009950811164291603926066803141755297920571692976244"), // g(2)
            lexical_cast<Real>("0.999999999999957058389441217592223051901253805502353310119906858210171731776098247943305316472169355401371"), // g(3)
            lexical_cast<Real>("0.999999999999999999999999999999999999883235110249013906725663510362671044720752808415603434458608954302982"), // g(4)
            lexical_cast<Real>("0.999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999988520"), // g(5)
        },
        {
            lexical_cast<Real>("0.674271492248435826080420090632052144267244477154740136576044169121421222924677035505101849981126759183585"), // g(0.5)
            lexical_cast<Real>("0.997514856457224386832717419238820368149231215381809295391585321457677448277585202664213469426402672227688"), // g(1.5)
            lexical_cast<Real>("0.999999988875664881984668015033322737014902900245095922058323073481945768209599289672119775131473502267885"), // g(2.5)
            lexical_cast<Real>("0.999999999999999999999946215084086063112254432391666747077319911504923816981286361121293026490456057993796"), // g(3.5)
            lexical_cast<Real>("0.999999999999999999999999999999999999999999999999999999999999920569786807778838966034206923747918174840316"), // g(4.5)
        },
    };

    const std::vector<std::vector<Real>> m_weights{
        {
            lexical_cast<Real>("1.570796326794896619231321691639751442098584699687552910487472296153908203143104499314017412671058533991074"), // g'(0)
            lexical_cast<Real>("0.230022394514788685000412470422321665303851255802659059205889049267344079034811721955914622224110769925389"), // g'(1)
            lexical_cast<Real>("0.000266200513752716908657010159372233158103339260303474890448151422406465941700219597184051859505048039379"), // g'(2)
            lexical_cast<Real>("0.000000000001358178427453909083422196787474500211182703205221379182701148467473091863958082265061102415190"), // g'(3)
            lexical_cast<Real>("0.000000000000000000000000000000000010017416784066252963809895613167040078319571113599666377944404151233916"), // g'(4)
            lexical_cast<Real>("0.000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000002676308"), // g'(5)
        },
        {
            lexical_cast<Real>("0.965976579412301148012086924538029475282925173953839319280640651228612016942374156051084481340637789686773"), // g'(0.5)
            lexical_cast<Real>("0.018343166989927842087331266912053799182780844891859123704139702537001454134135727608312038892655885289502"), // g'(1.5)
            lexical_cast<Real>("0.000000214312045569430393576972333072321177878392994404158296872167216420137367626015660887389125958297359"), // g'(2.5)
            lexical_cast<Real>("0.000000000000000000002800315101977588958258001699217015336310581249269449114860803391121477177123095973212"), // g'(3.5)
            lexical_cast<Real>("0.000000000000000000000000000000000000000000000000000000000011232705345486918789827474356787339538750684404"), // g'(4.5)
        },
    };
};

template<class Real>
tanh_sinh_detail<Real>::tanh_sinh_detail(Real tol, size_t max_refinements)
{
    m_tol = tol;
    m_max_refinements = max_refinements;
    /*
     * Our goal is to calculate t_max such that tanh(pi/2 sinh(t_max)) < 1 in the requested precision.
     * What follows is a good estimate for t_max, but in fact we can get closer by starting with this estimate
     * and then calculating tanh(pi/2 sinh(t_max + eps)) until it = 1 (the second to last is t_max).
     * However, this is computationally expensive, so we can't do it.
     * An alternative is to cache the value of t_max for various types (float, double, long double, float128, cpp_bin_float_50, cpp_bin_float_100)
     * and then simply use them, but unfortunately the values will be platform dependent.
     * As such we are then susceptible to the catastrophe where we evaluate the function at x = 1, when we have promised we wouldn't do that.
     * Other authors solve this problem by computing the abscissas in double the requested precision, and then returning the result at the requested precision.
     * This seems to be overkill to me, but presumably it's an option if we find integrals on which this method struggles.
     */
    using std::tanh;
    using std::sinh;
    using std::asinh;
    using std::atanh;
    using boost::math::constants::half_pi;
    using boost::math::constants::two_div_pi;

    auto g = [](Real t) { return tanh(half_pi<Real>()*sinh(t)); };
    auto g_inv = [](Real x) { return asinh(two_div_pi<Real>()*atanh(x)); };

    Real x = float_prior((Real) 1);
    m_t_max = g_inv(x);
    while (!isnormal(m_t_max))
    {
        // Although you might suspect that we could be in this loop essentially for ever, in point of fact it is only called once
        // even for 100 digit accuracy, and isn't called at all up to float128.
        x = float_prior(x);
        m_t_max = g_inv(x);
    }

    // This occurs once on 100 digit arithmetic:
    while (!(g(m_t_max) < (Real) 1))
    {
        x = float_prior(x);
        m_t_max = g_inv(x);
    }
}

template<class Real>
template<class F>
Real tanh_sinh_detail<Real>::integrate(F f, Real* error)
{
    using std::abs;
    using std::floor;
    using std::tanh;
    using std::sinh;
    using std::sqrt;
    using boost::math::constants::half;
    using boost::math::constants::half_pi;

    Real h = 1;
    Real I0 = half_pi<Real>()*f(0);
    Real IL0 = abs(I0);
    for (size_t i = 1; i <= m_t_max; ++i)
    {
        Real x = m_abscissas[0][i];
        Real w = m_weights[0][i];
        Real yp = f(x);
        Real ym = f(-x);
        I0 += (yp + ym)*w;
        IL0 += (abs(yp) + abs(ym))*w;
    }

    Real I1 = half<Real>()*I0;
    Real IL1 = abs(I1);
    h /= 2;
    Real sum = 0;
    Real absum = 0;
    for (size_t i = 0; i < m_weights[1].size() && h + i <= m_t_max; ++i)
    {
        Real x = m_abscissas[1][i];
        Real w = m_weights[1][i];
        Real yp = f(x);
        Real ym = f(-x);
        sum += (yp + ym)*w;
        absum += (abs(yp) + abs(ym))*w;
    }
    I1 += sum*h;
    IL1 += sum*h;

    size_t k = 2;
    Real err = abs(I0 - I1);
    while (k < 4 || (k < m_weights.size() && k < m_max_refinements))
    {
        I0 = I1;
        IL0 = IL1;
        I1 = half<Real>()*I0;
        IL1 = half<Real>()*IL0;
        h = (Real) 1 / (Real) (1 << k);
        Real sum = 0;
        Real absum = 0;
        auto const& abscissa_row = m_abscissas[k];
        auto const& weight_row = m_weights[k];
        // We have no guarantee that round-off error won't cause the function to be evaluated strictly at the endpoints.
        // However, making sure x < 1 - eps is a reasonable compromise between accuracy and usability.
        for (size_t j = 0; j < weight_row.size() && abscissa_row[j] < (Real) 1 - std::numeric_limits<Real>::epsilon(); ++j)
        {
            Real x = abscissa_row[j];
            Real w = weight_row[j];
            Real yp = f(x);
            Real ym = f(-x);
            Real term = (yp + ym)*w;
            sum += term;
            // A question arises as to how accurately we actually need to estimate the L1 integral.
            // For simple integrands, computing the L1 norm makes the integration 20% slower,
            // but for more complicated integrands, this calculation is not noticeable.
            Real abterm = (abs(yp) + abs(ym))*w;
            absum += abterm;
        }
        I1 += sum*h;
        IL1 += absum*h;
        ++k;
        err = abs(I0 - I1);
        if (err <= m_tol*IL1)
        {
            if (error)
            {
                *error = err;
            }
            return I1;
        }
    }

    if (!isnormal(I1))
    {
        throw std::domain_error("The tanh_sinh integrate evaluated your function at a singular point. Please narrow the bounds of integration or check your function for singularities.\n");
    }

    while (k < m_max_refinements && err > m_tol*IL1)
    {
        I0 = I1;
        IL0 = IL1;
        I1 = half<Real>()*I0;
        IL1 = half<Real>()*IL0;
        h *= half<Real>();
        Real sum = 0;
        Real absum = 0;
        for (Real t = h; t < m_t_max - std::numeric_limits<Real>::epsilon(); t += 2*h)
        {
            Real s = sinh(t);
            Real c = sqrt(1+s*s);
            Real x = tanh(half_pi<Real>()*s);
            Real w = half_pi<Real>()*c*(1-x*x);
            Real yp = f(x);
            Real ym = f(-x);
            Real term = (yp + ym)*w;
            sum += term;
            Real abterm = (abs(yp) + abs(ym))*w;
            absum += abterm;
            // There are claims that this test improves performance,
            // however my benchmarks show that it's slower!
            // However, I leave this comment here because it totally stands to reason that this *should* help:
            //if (abterm < std::numeric_limits<Real>::epsilon()) { break; }
        }
        I1 += sum*h;
        IL1 += absum*h;
        ++k;
        err = abs(I0 - I1);
        if (!isnormal(I1))
        {
            throw std::domain_error("The tanh_sinh integrate evaluated your function at a singular point. Please narrow the bounds of integration or check your function for singularities.\n");
        }
    }

    if (error)
    {
        *error = err;
    }
    return I1;
}

}}}
#endif
```

For those of you interested in performance, I have used google benchmark to measure the runtime; the benchmark code is reproduced below:

```cpp
#include <cmath>
#include <iostream>
#include <random>
#include <boost/math/quadrature/tanh_sinh.hpp>
#include <boost/math/constants/constants.hpp>
#include <boost/multiprecision/float128.hpp>
#include <boost/multiprecision/cpp_bin_float.hpp>
#include <boost/multiprecision/cpp_dec_float.hpp>
#include <benchmark/benchmark.h>

using boost::math::tanh_sinh;
using boost::math::constants::half_pi;
using boost::multiprecision::float128;
using boost::multiprecision::cpp_bin_float_50;
using boost::multiprecision::cpp_bin_float_100;
using boost::multiprecision::cpp_dec_float_50;
using boost::multiprecision::cpp_dec_float_100;

template<class Real>
static void BM_tanh_sinh_sqrt_log(benchmark::State& state)
{
    using std::sqrt;
    using std::log;
    auto f = [](Real t) { return sqrt(t)*log(t); };
    Real Q;
    Real tol = 100*std::numeric_limits<Real>::epsilon();
    tanh_sinh<Real> integrator(tol, 20);
    while (state.KeepRunning())
    {
        benchmark::DoNotOptimize(Q = integrator.integrate(f, (Real) 0, (Real) 1));
    }
}

template<class Real>
static void BM_tanh_sinh_linear(benchmark::State& state)
{
    auto f = [](Real t) { return 5*t + 7; };
    Real Q;
    Real tol = 100*std::numeric_limits<Real>::epsilon();
    tanh_sinh<Real> integrator(tol, 20);
    while (state.KeepRunning())
    {
        benchmark::DoNotOptimize(Q = integrator.integrate(f, (Real) 0, (Real) 1));
    }
}

template<class Real>
static void BM_tanh_sinh_quadratic(benchmark::State& state)
{
    auto f = [](Real t) { return 5*t*t + 3*t + 7; };
    Real Q;
    Real tol = 100*std::numeric_limits<Real>::epsilon();
    tanh_sinh<Real> integrator(tol, 20);
    while (state.KeepRunning())
    {
        benchmark::DoNotOptimize(Q = integrator.integrate(f, (Real) -1, (Real) 1));
    }
}

template<class Real>
static void BM_tanh_sinh_runge(benchmark::State& state)
{
    auto f = [](Real t) { return 1/(1+25*t*t); };
    Real Q;
    Real tol = 100*std::numeric_limits<Real>::epsilon();
    tanh_sinh<Real> integrator(tol, 20);
    while (state.KeepRunning())
    {
        benchmark::DoNotOptimize(Q = integrator.integrate(f, (Real) -1, (Real) 1));
    }
}

template<class Real>
static void BM_tanh_sinh_runge_move_pole(benchmark::State& state)
{
    auto f = [](Real t) { Real tmp = t/5; return 1/(1+tmp*tmp); };
    Real Q;
    Real tol = 100*std::numeric_limits<Real>::epsilon();
    tanh_sinh<Real> integrator(tol, 20);
    while (state.KeepRunning())
    {
        benchmark::DoNotOptimize(Q = integrator.integrate(f, (Real) -1, (Real) 1));
    }
}

template<class Real>
static void BM_tanh_sinh_exp_cos(benchmark::State& state)
{
    auto f = [](Real t) { return exp(t)*cos(t); };
    Real Q;
    Real tol = 100*std::numeric_limits<Real>::epsilon();
    tanh_sinh<Real> integrator(tol, 20);
    while (state.KeepRunning())
    {
        benchmark::DoNotOptimize(Q = integrator.integrate(f, (Real) 0, half_pi<Real>()));
    }
}

template<class Real>
static void BM_tanh_sinh_horrible(benchmark::State& state)
{
    auto f = [](Real x) { return x*sin(2*exp(2*sin(2*exp(2*x) ) ) ); };
    Real Q;
    Real tol = 100*std::numeric_limits<Real>::epsilon();
    tanh_sinh<Real> integrator(tol, 20);
    while (state.KeepRunning())
    {
        benchmark::DoNotOptimize(Q = integrator.integrate(f, (Real) -1, (Real) 1));
    }
}

BENCHMARK_TEMPLATE(BM_tanh_sinh_sqrt_log, float);
BENCHMARK_TEMPLATE(BM_tanh_sinh_sqrt_log, double);
BENCHMARK_TEMPLATE(BM_tanh_sinh_sqrt_log, long double);
BENCHMARK_TEMPLATE(BM_tanh_sinh_sqrt_log, float128);
BENCHMARK_TEMPLATE(BM_tanh_sinh_sqrt_log, cpp_bin_float_50);
BENCHMARK_TEMPLATE(BM_tanh_sinh_sqrt_log, cpp_bin_float_100);
BENCHMARK_TEMPLATE(BM_tanh_sinh_linear, float);
BENCHMARK_TEMPLATE(BM_tanh_sinh_linear, double);
BENCHMARK_TEMPLATE(BM_tanh_sinh_linear, long double);
BENCHMARK_TEMPLATE(BM_tanh_sinh_linear, float128);
BENCHMARK_TEMPLATE(BM_tanh_sinh_linear, cpp_bin_float_50);
BENCHMARK_TEMPLATE(BM_tanh_sinh_linear, cpp_bin_float_100);
BENCHMARK_TEMPLATE(BM_tanh_sinh_quadratic, float);
BENCHMARK_TEMPLATE(BM_tanh_sinh_quadratic, double);
BENCHMARK_TEMPLATE(BM_tanh_sinh_quadratic, long double);
BENCHMARK_TEMPLATE(BM_tanh_sinh_quadratic, float128);
BENCHMARK_TEMPLATE(BM_tanh_sinh_runge, float);
BENCHMARK_TEMPLATE(BM_tanh_sinh_runge, double);
BENCHMARK_TEMPLATE(BM_tanh_sinh_runge, long double);
BENCHMARK_TEMPLATE(BM_tanh_sinh_runge, float128);
BENCHMARK_TEMPLATE(BM_tanh_sinh_runge_move_pole, float);
BENCHMARK_TEMPLATE(BM_tanh_sinh_runge_move_pole, double);
BENCHMARK_TEMPLATE(BM_tanh_sinh_runge_move_pole, long double);
BENCHMARK_TEMPLATE(BM_tanh_sinh_runge_move_pole, float128);
BENCHMARK_TEMPLATE(BM_tanh_sinh_exp_cos, float);
BENCHMARK_TEMPLATE(BM_tanh_sinh_exp_cos, double);
BENCHMARK_TEMPLATE(BM_tanh_sinh_exp_cos, long double);
BENCHMARK_TEMPLATE(BM_tanh_sinh_exp_cos, float128);
BENCHMARK_TEMPLATE(BM_tanh_sinh_exp_cos, cpp_bin_float_50);
BENCHMARK_TEMPLATE(BM_tanh_sinh_exp_cos, cpp_bin_float_100);
BENCHMARK_TEMPLATE(BM_tanh_sinh_horrible, float);
BENCHMARK_TEMPLATE(BM_tanh_sinh_horrible, double);
BENCHMARK_TEMPLATE(BM_tanh_sinh_horrible, long double);
BENCHMARK_TEMPLATE(BM_tanh_sinh_horrible, float128);
BENCHMARK_TEMPLATE(BM_tanh_sinh_horrible, cpp_bin_float_50);
BENCHMARK_TEMPLATE(BM_tanh_sinh_horrible, cpp_bin_float_100);

BENCHMARK_MAIN();

/*
---------------------------------------------------------------------------------
Benchmark                                        Time           CPU   Iterations
---------------------------------------------------------------------------------
BM_tanh_sinh_sqrt_log<float>                    550 ns        550 ns      1214044
BM_tanh_sinh_sqrt_log<double>                  9004 ns       9003 ns        77327
BM_tanh_sinh_sqrt_log<long double>             3635 ns       3635 ns       192432
BM_tanh_sinh_sqrt_log<float128>              342661 ns     342653 ns         2043
BM_tanh_sinh_sqrt_log<cpp_bin_float_50>     5940813 ns    5940664 ns          117
BM_tanh_sinh_sqrt_log<cpp_bin_float_100>   41784341 ns   41783310 ns           17
BM_tanh_sinh_linear<float>                       33 ns         33 ns     20972925
BM_tanh_sinh_linear<double>                     150 ns        150 ns      4655756
BM_tanh_sinh_linear<long double>                347 ns        347 ns      2019473
BM_tanh_sinh_linear<float128>                 41586 ns      41585 ns        16807
BM_tanh_sinh_linear<cpp_bin_float_50>        147107 ns     147104 ns         4753
BM_tanh_sinh_linear<cpp_bin_float_100>       482590 ns     482581 ns         1452
BM_tanh_sinh_quadratic<float>                    79 ns         79 ns      8846856
BM_tanh_sinh_quadratic<double>                  183 ns        183 ns      3828752
BM_tanh_sinh_quadratic<long double>             424 ns        424 ns      1651417
BM_tanh_sinh_quadratic<float128>              58832 ns      58831 ns        11778
BM_tanh_sinh_runge<float>                       340 ns        340 ns      2061682
BM_tanh_sinh_runge<double>                    17312 ns      17311 ns        40403
BM_tanh_sinh_runge<long double>               36697 ns      36696 ns        19071
BM_tanh_sinh_runge<float128>                2984174 ns    2984116 ns          236
BM_tanh_sinh_runge_move_pole<float>             158 ns        158 ns      4431412
BM_tanh_sinh_runge_move_pole<double>            777 ns        777 ns       896286
BM_tanh_sinh_runge_move_pole<long double>      1095 ns       1095 ns       636425
BM_tanh_sinh_runge_move_pole<float128>        80297 ns      80295 ns         8678
BM_tanh_sinh_exp_cos<float>                    5685 ns       5685 ns       121337
BM_tanh_sinh_exp_cos<double>                  55022 ns      55021 ns        12558
BM_tanh_sinh_exp_cos<long double>             71875 ns      71874 ns         9663
BM_tanh_sinh_exp_cos<float128>               379522 ns     379514 ns         1848
BM_tanh_sinh_exp_cos<cpp_bin_float_50>      4538073 ns    4537984 ns          156
BM_tanh_sinh_exp_cos<cpp_bin_float_100>    33965946 ns   33965260 ns           21
BM_tanh_sinh_horrible<float>                 427490 ns     427478 ns         1633
BM_tanh_sinh_horrible<double>                572976 ns     572966 ns         1214
BM_tanh_sinh_horrible<long double>          1346058 ns    1346033 ns          516
BM_tanh_sinh_horrible<float128>            22030156 ns   22029403 ns           32
BM_tanh_sinh_horrible<cpp_bin_float_50>   635390418 ns  635373654 ns            1
BM_tanh_sinh_horrible<cpp_bin_float_100> 4867960245 ns 4867844329 ns            1
*/
```

These numbers were created using an Intel Xeon E3-1230, and the benchmarks can be compiled with the following Makefile:

```make
CXX = g++
INC = -I/path/to/boost

all: perf.x

perf.x: perf.o
	$(CXX) -o $@ $< -lbenchmark -pthread -lquadmath

perf.o: perf.cpp
	$(CXX) --std=c++14 -fext-numeric-literals -Wfatal-errors -g -O3 $(INC) -c $< -o $@

clean:
	rm -f *.x *.o
```

Get this bounty!!!