#StackBounty: #algorithms #permutations #algorithm-design Counting permutations whose elements are not exactly their index ± M

Bounty: 50

I was recently asked this problem in an algorithmic interview and failed to solve it.

Given two values N and M, you have to count the number of permutations of length N (using numbers from 1 to N) such that the absolute difference between any number in the permutation and its position in the permutation is not equal to M.

Example: If N=3 and M=1, then 1 2 3 and 3 2 1 are valid permutations, but 1 3 2 is invalid, as the number 3 is at position 2 and their difference equals M.

I tried NxM dynamic programming but failed to form a recurrence that doesn't count repetitions.
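For reference, here is a minimal brute-force check in Python (my own sketch, not an efficient solution) that enumerates all permutations and can be used to validate a candidate recurrence on small N; it reproduces the answer 2 for the N=3, M=1 example above:

    from itertools import permutations

    def count_valid(n, m):
        """Count permutations p of 1..n with |p[i] - i| != m for every
        1-indexed position i. Exponential time; only for small n."""
        count = 0
        for perm in permutations(range(1, n + 1)):
            if all(abs(v - i) != m for i, v in enumerate(perm, start=1)):
                count += 1
        return count

    print(count_valid(3, 1))  # prints 2: the permutations 1 2 3 and 3 2 1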


Get this bounty!!!

#HackerEarth: #BattleOfBots 9: Taunt

Problem

Taunt is a two-player board game played on a 10x4 grid of cells, with the players starting on opposite sides of the game-board. Each player has an allocated color, Orange (first player) or Green (second player) being conventional. Each player has nine pieces in total and moves them strategically towards the opponent's area. Each piece has its own movement rule and is one of 3 types of pieces.

Piece 1: It can move to a horizontally or vertically adjacent cell, if that cell doesn't contain a piece of the same color.

Piece 2: It can move to a horizontally adjacent cell or two steps forward, if the destination cell doesn't contain a piece of the same color (except the piece itself).

This type of piece can move to its own position if it's in the second-last row of the grid and moving downward, or if it's in the second row of the grid and moving upward.

Piece 3: It can move two steps diagonally in the forward direction, if the destination cell doesn't contain a piece of the same color (except the piece itself).

This type of piece can move to its own position if it's in the second-last row of the grid and moving downward, or if it's in the second row of the grid and moving upward.

Players take turns making the moves described above and can capture an opponent's piece by jumping on or over it.

Note: Forward direction for first player is downward and for second player is upward.

If a piece (other than piece 1) is moving downward and touches the last row, its direction changes, i.e. it will now move upward. Similarly, if a piece (other than piece 1) is moving upward and touches the first row, its direction changes, i.e. it will now move downward.
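A minimal sketch of this bounce rule in Python (assuming rows are numbered 0 to 9 and the direction encoding 0 = upward, 1 = downward described in the Input section; the function name is mine):

    def next_direction(row, direction, piece_type):
        """Pieces other than type 1 reverse direction on touching the
        first (row 0) or last (row 9) row; type 1 keeps its direction."""
        if piece_type == 1:
            return direction
        if direction == 1 and row == 9:   # moving down, touched the last row
            return 0
        if direction == 0 and row == 0:   # moving up, touched the first row
            return 1
        return direction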

Rules:

  • A player can only make the moves described above.
  • A player may not move an opponent's piece.
  • A player can capture an opponent's piece by jumping on or over it.

The game will end after 100 moves (50 moves for each player) or when either player has no moves left. At the end of the game, the player with the majority of pieces wins.

We will play it on a 10x4 grid. The top left of the grid is [0,0] and the bottom right is [9,3].

Input:
The input will be a 10x4 matrix whose entries are made up only of the digits 0, 1 and 2. The next line will contain an integer denoting the total number of moves made up to the current state of the board. The next line contains an integer, 1 or 2, which is your player id.

In the given matrix, top-left is [0,0] and bottom-right is [9,3]. The y-coordinate increases from left to right, and the x-coordinate increases from top to bottom.

A cell is represented by 3 integers.

The first integer denotes the player id (1 or 2).
The second integer denotes the type of piece (1, 2 or 3).
The third integer denotes the direction of the piece: 0 (upward) or 1 (downward). When the piece is of the first type, the direction doesn't matter, as the piece is free to move to any horizontally or vertically adjacent cell that doesn't contain a piece of the same color.

An empty cell is represented by 000.
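Purely as an illustration of this encoding (this is not the default bot mentioned below; the function name is mine), here is a Python sketch that decodes such a board into (player, type, direction) tuples:

    def parse_board(lines):
        """Parse 10 rows of 4 space-separated 3-digit cells.
        '000' becomes None; any other cell becomes (player, type, direction)."""
        board = []
        for line in lines:
            row = []
            for cell in line.split():
                if cell == "000":
                    row.append(None)
                else:
                    row.append((int(cell[0]), int(cell[1]), int(cell[2])))
            board.append(row)
        return board

    # Typical usage with the full input described above:
    # import sys
    # data = sys.stdin.read().splitlines()
    # board = parse_board(data[:10])
    # moves_so_far, my_id = int(data[10]), int(data[11])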

Output:
In the first line, print the coordinates, separated by a space, of the cell containing the piece you want to move.
In the second line, print the coordinates of the cell to which the piece will jump.
You must take care not to print invalid coordinates. For example, [1,1] might be a valid coordinate during game play if the piece is able to jump to [1,1], but [9,10] never will be. Also, if you play an invalid move or your code exceeds the time/memory limit while determining the move, you lose the game.

Starting state
The starting state of the game is the state of the board before the game starts.

131 131 131 121
121 121 111 111
111 000 000 000
000 000 000 000
000 000 000 000
000 000 000 000
000 000 000 000
000 000 000 210
210 210 220 220
220 230 230 230

First Input
This is the input given to the first player at the start of the game.

131 131 131 121
121 121 111 111
111 000 000 000
000 000 000 000
000 000 000 000
000 000 000 000
000 000 000 000
000 000 000 210
210 210 220 220
220 230 230 230
0
1

 

SAMPLE INPUT
000 000 000 000
000 000 000 111
000 000 111 130
000 000 000 000
000 000 000 000
000 220 000 000
131 000 000 000
121 000 210 000
000 210 131 000
000 210 000 000
58
1
SAMPLE OUTPUT
8 2
8 0

Explanation

It is player 1's turn; the player moves the piece at [8,2] two steps diagonally in the downward direction, ending at [8,0].
After this move, the state of the game becomes:

000 000 000 000
000 000 000 111
000 000 111 130
000 000 000 000
000 000 000 000
000 220 000 000
131 000 000 000
121 000 210 000
130 210 000 000
000 000 000 000
59
2

Note: The direction of the piece has also changed from 1 to 0, as the piece was moving downward and touched the last row. This state will be fed as input to player 2's program.

Here is the code of the default bot.

Time Limit: 1.0 sec(s) for each input file.
Memory Limit: 256 MB
Source Limit: 1024 KB

Sample Game

#StackBounty: #algorithms #graphs #graph-theory #weighted-graphs #union-find Disjoint-set on probabilistic graphs

Bounty: 50

We have an undirected weighted graph G where each edge has a score in (0, 1]. For each pair of vertices u and v, we want to emit a score in [0, 1] expressing how likely they are to be in the same disjoint set.

If the only possible weights were the discrete values 0 and 1, then this problem would reduce to a simple union-find-like algorithm. But I am not sure what literature exists for probabilistic graphs.
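One natural baseline, under the extra assumption that each edge's score is an independent probability that the edge is present, is a Monte Carlo estimate: sample many graphs, run plain union-find on each sample, and report the fraction of samples in which u and v end up in the same set. A rough Python sketch (the independence assumption and the function name are mine):

    import random

    def connection_probability(n, edges, u, v, trials=1000):
        """Estimate how often u and v land in the same component when each
        edge (a, b, p) is independently present with probability p."""
        hits = 0
        for _ in range(trials):
            parent = list(range(n))

            def find(x):
                while parent[x] != x:
                    parent[x] = parent[parent[x]]  # path halving
                    x = parent[x]
                return x

            for a, b, p in edges:
                if random.random() < p:
                    parent[find(a)] = find(b)
            if find(u) == find(v):
                hits += 1
        return hits / trials

    # e.g. 3 vertices, edge 0-1 with score 0.9 and edge 1-2 with score 0.5:
    # connection_probability(3, [(0, 1, 0.9), (1, 2, 0.5)], 0, 2) is roughly 0.45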


Get this bounty!!!

#StackBounty: #algorithms #complexity-theory Class of Algorithms where finding output of algorithm can be aided by output of same algor…

Bounty: 100

Sorry for the confusing title, but I'm trying to find out if a certain algorithm or class of algorithms fits this description. Given an algorithm A, computing A(x) can be computationally easier if A(g) is known, where g is close to x, and it gets easier the closer g is to x. In addition, the amount of computation A has to do grows with the size of the input.

You would also have to be able to calculate the output of A(x) given A(g), x-g, and A(x-g). Does such an A exist for anything other than trivial arithmetic, and if so, is there an entire class of algorithms to which A belongs?
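To pin down the property, here is a toy Python illustration of exactly the "trivial arithmetic" case the question wants to go beyond, with A(x) = b**x (the helper names are mine): knowing A(g) makes computing A(x) cheap when g is close to x, and A(x) is recoverable from A(g), x-g and A(x-g):

    def slow_pow(b, x):
        """A(x) = b**x by repeated multiplication: cost grows linearly with x."""
        result = 1
        for _ in range(x):
            result *= b
        return result

    def pow_from_nearby(a_of_g, b, x, g):
        """Given A(g) = b**g with g <= x, only x - g extra multiplications are
        needed, since b**x = b**g * b**(x - g)."""
        return a_of_g * slow_pow(b, x - g)

    a_of_g = slow_pow(3, 1000)                          # the expensive part, done once
    assert pow_from_nearby(a_of_g, 3, 1003, 1000) == slow_pow(3, 1003)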


Get this bounty!!!

#StackBounty: #bayesian #algorithms #prior How to extend Solomonoff's universal prior to stochastic models?

Bounty: 150

Solomonoff’s universal prior for models is based on the algorithmic complexity of a computer program $p$ which executes that model. Where $l$ is the length of the computer program, the prior is proportional to $2^{-l}$.

This works fine for deterministic models. We have an observation and we want to understand what model best explains the observation. If the program $p$ returns an output equal to the observation then this program is valid, otherwise it is invalid. We can apply Bayes rule to get a posterior probability by considering all programs which returned the desired output.

I’m confused about how this is extended to the case where there is randomness in the observations we collect.

Take a simple example: Suppose we are considering a linear regression model $y=ax+b$. The computer program which executes this model has a length $l$ which is a function of $a$ and $b$ (bigger parameters require more bits to encode). The prior is $\pi(a,b)$.

But this program doesn't include a random element. The regression model is $y=ax+b+\epsilon$, where $\epsilon$ is random noise with distribution $N(0,\sigma^2)$.

How should we consider a prior for the noise? A computer program can't produce random noise, so this can't be part of the program length. Should the prior for $\sigma^2$ be considered separately from the prior for the model? Or should the number of bits required to encode the numeric value $\sigma^2$ be included in the length of the program?

In addition to the variance of the noise, the underlying normal distribution would seem to have some degree of complexity associated with it. Should the bits required to describe a normal distribution be included in the model prior?
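For context, one standard extension in the literature (the universal mixture over semimeasures, in the spirit of Solomonoff's own formulation and of Li and Vitányi's treatment) lets each program $q$ define a probability distribution $\mu_q$ over observations rather than a single output, and weights it by its length: $\xi(x) = \sum_q 2^{-l(q)} \mu_q(x)$. Under that reading of the regression example, a program would encode $a$, $b$ and $\sigma$ together (so the bits for $\sigma^2$ do count towards $l(q)$), and its contribution to the posterior is proportional to $2^{-l(q)} \prod_i N(y_i \mid a x_i + b, \sigma^2)$. The symbols $\xi$ and $\mu_q$ here are my notation, not part of the question.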


Get this bounty!!!
