## #StackBounty: #c# #mathematics #projectile-physics #trajectory #aiming Projectile Aim Prediction with Acceleration

### Bounty: 50

I’m trying to solve the classic shoot-a-moving-target problem, but with acceleration added, which turns the quadratic formula into a quartic. Sadly, my math skills are not that good, as I prefer to speak in code rather than formulas.

I found this: https://wiki.beyondunreal.com/Legacy:Projectile_Aiming. I ported it, and it’s almost what I’m looking for, but it lacks control over the acceleration, as it’s only made for gravity or no gravity. Even that I got working only with tricks, and modifying it without knowing what I’m doing only gets me so far.

I made myself a prototype interface for the minimum of what I’m trying to get out of it:

```csharp
public class PredictionResult {
    public bool IsInRange;               // can we even hit the target?
    public Vector3[] ShotVelocity;       // should be up to 4 possible values
    public float[] ShotImpactTime;       // how long each shot takes to arrive at the predicted target
    public Vector3[] ShotImpactLocation; // somewhat redundant, yet still useful if available
}

// Only the delta between start and target really matters, but that is up to the function.
// The same goes for startVel and targetVel, where startVel is the shooter's speed that gets added to the bullet.
// The bullet and target can have different accelerations
// (e.g. the target stands on the ground => no gravity, or the bullet is not affected by gravity).
public PredictionResult ShootAtTarget(Vector3 start, Vector3 target, Vector3 startVel, Vector3 targetVel,
                                      Vector3 bulletAccel, Vector3 targetAccel, float bulletSpeed);
```
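For reference, the math behind this interface reduces to a single quartic: requiring that a bullet fired at speed s in some direction meets the target gives |Δp + Δv·t + ½Δa·t²| = s·t, where Δp, Δv, Δa are the position, velocity, and acceleration deltas between target and shooter/bullet. Squaring both sides yields a quartic in the impact time t, which is why up to 4 solutions exist. A sketch in Python (NumPy's general root finder standing in for a hand-rolled quartic solver; names are mine, not from the wiki code):

```python
import numpy as np

def shoot_at_target(start, target, start_vel, target_vel,
                    bullet_accel, target_accel, bullet_speed):
    """Return (impact_times, shot_velocities) for every valid solution."""
    dp = np.asarray(target, float) - np.asarray(start, float)
    dv = np.asarray(target_vel, float) - np.asarray(start_vel, float)
    da = np.asarray(target_accel, float) - np.asarray(bullet_accel, float)
    # |dp + dv*t + 0.5*da*t^2|^2 = (s*t)^2 expands to this quartic in t:
    coeffs = [0.25 * da.dot(da),
              dv.dot(da),
              dv.dot(dv) + dp.dot(da) - bullet_speed ** 2,
              2.0 * dp.dot(dv),
              dp.dot(dp)]
    roots = np.roots(coeffs)  # also handles degenerate lower-order cases
    times = sorted(r.real for r in roots
                   if abs(r.imag) < 1e-9 and r.real > 1e-9)
    velocities = [np.asarray(start_vel, float)
                  + (dp + dv * t + 0.5 * da * t * t) / t  # s * aim direction
                  for t in times]
    return times, velocities
```

`IsInRange` is then simply whether any positive real root exists, and the impact location follows by plugging each time into the target's motion equation.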


Any help solving this would be great.

Get this bounty!!!

# Introduction

I’m creating a game where the player can obtain 1 to 3 stars for each level based on the score they get (which depends on the completion time).
The levels are grouped into “worlds”, each of which is unlocked when the user obtains a given number of stars in the previous levels.

For example, world 1 has 5 levels, and to unlock world 2 the user needs to gain at least 5 stars (thus at least one star per level on average).

Here is a basic idea of the worlds -> n° of levels in that world -> stars to unlock:

1 -> 5 -> 0 (of course)
2 -> 5 -> 5 / 15 (33%)
3 -> 5 -> 10 / 30 (33%)
4 -> 7 -> 25 / 45 (55%)
5 -> 7 -> 30 / 66 (45%)
...


To determine the score a player should reach at each level to get 0 / 1 / 2 / 3 stars, I recorded play stats from a bunch of beta-testers, obtaining a normal distribution of play times for each level.
Given each distribution, I should be able to answer this question for each level:

at what score should I reward n stars in order for x% of the
players to get n stars overall?
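For what it’s worth, once a level’s time distribution is fitted as a normal, that question is just an inverse-CDF (quantile) lookup. A minimal sketch in Python (function name and parameters are mine):

```python
from statistics import NormalDist

def star_threshold(mean_time, stdev_time, fraction):
    """Completion-time cutoff such that `fraction` of players earn the star,
    assuming normally distributed times where lower is better."""
    return NormalDist(mu=mean_time, sigma=stdev_time).inv_cdf(fraction)
```

For example, `star_threshold(100, 10, 0.5)` returns the median time, 100; asking for 80% of players to qualify pushes the cutoff above the mean.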

# Tuning

So now I can change the thresholds for each star at each level (or group of levels) in order to filter the percentage of players that will obtain a given number of stars at some point.
This way I can set the game difficulty as the difficulty to unlock a given world which is the percentage of people who are good enough to gain enough stars to unlock that world.

For example, for the early-world progress difficulty, I chose 1.22 as the exponential base of the percentage reduction (difficulty growth), like this:

world -> difficulty coeff. -> perc. players

1 -> 0 -> 100%
2 -> 1.22 -> 98.78%
3 -> 1.4884 -> 98.5116%
4 -> 1.815848 -> 98.184152%
5 -> 2.21533456 -> 97.78466544%
...


This way, for example, the last world should be reached by about 47% of the players.
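This progression can be generated directly; a quick sketch (the function name is mine) that reproduces the table above:

```python
def percent_reaching(world, base=1.22):
    """Expected percentage of players reaching `world`: 100 minus a
    difficulty coefficient that grows geometrically with `base`."""
    return 100.0 if world == 1 else 100.0 - base ** (world - 1)
```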

Now I want to know how to tune percentages of people gaining 1 / 2 / 3 stars in order to stick to this given percentage progression.
In order to do this, I found the minimal configurations of possible stars obtained in each world level in order to unlock the next one, for example:

world -> n° 1 stars -> n° 2 stars -> n° 3 stars -> stars to unlock next world
1 -> 5 -> 0 -> 0 -> 5
2 -> 10 -> 0 -> 0 -> 10
3 -> 5 -> 10 -> 0 -> 25
4 -> 12 -> 10 -> 0 -> 30
5 -> 18 -> 11 -> 0 -> 40
...


Note that if the user got 10 x 2 stars in the previous world, they still have those 10 x 2 stars in later worlds, unless they top them with 3 stars.

# My Calculations

Now, for example, if I want 98.78% of the players to be able to unlock the second world, given they have to obtain a minimum of 1 star at each previous level, then p^5 = 0.9878 and p = rad(5, 0.9878) ≈ 0.9975, so 99.75% of the players should be able to get at least 1 star in each level of the first world.

For the third world, things get a little harder, as the 1-star probabilities for the first 5 levels are now locked.
Players must be able to obtain at least 1 star in each of the 10 levels of the first two worlds with an overall probability of 98.5116%, but the per-level probability for the first 5 levels is locked at 99.75%, giving an overall probability of 98.78% of obtaining 1 star in each of the first 5 levels.
So I had to solve the equation p * 0.9878 = 0.985116, so p = 0.985116 / 0.9878 ≈ 0.9973, which is the probability p to get at least 1 star in all 5 levels of the second world.
So the probability to obtain 1 star in each single level of the second world is p = rad(5, 0.9973) ≈ 0.9995, which is slightly higher than in the previous world.
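These two steps are easy to sanity-check numerically, reading rad(n, x) as the n-th root, i.e. `x ** (1 / n)`:

```python
# World 1: per-level probability from the overall 98.78% target.
p1_level = 0.9878 ** (1 / 5)        # ≈ 0.9975

# World 2: overall probability left after the locked world-1 levels,
# then the per-level probability across its 5 levels.
p2_overall = 0.985116 / 0.9878      # ≈ 0.9973
p2_level = p2_overall ** (1 / 5)    # ≈ 0.9995
```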

The fourth world gets even weirder, as the restriction shifts to the 10 x 2 stars that players need to obtain in the 15 previous levels. Here I used the binomial distribution to find the probability of at least 10 successes over 15 attempts, which gave a probability of 58.79% to obtain 2 stars in each of the 15 levels of the first 3 worlds in order to have a 98.184% probability of finishing the third world with at least 10 x 2 stars.
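That binomial step can be written as a tail probability plus a bisection to invert it for the per-trial probability; a sketch (helper names are mine, and the concrete percentages above are from the question, not re-derived here):

```python
from math import comb

def binom_tail(n, k, p):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def solve_trial_p(n, k, target):
    """Per-trial probability p such that P(X >= k) == target, found by
    bisection (the tail is monotonically increasing in p)."""
    lo, hi = 0.0, 1.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if binom_tail(n, k, mid) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

For instance, `solve_trial_p(15, 10, 0.98184)` gives the per-level 2-star probability needed for the 98.184% target.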

The fifth world differs from the fourth only by 7 x 1 star, so I just calculated p * 0.98184 = 0.97784, where p ≈ 0.9959 is the probability to obtain 1 star in each of the 7 levels of the fifth world; per single level that gives rad(7, 0.9959) ≈ 0.9994.

# Problems

Now I’m stuck at world 6, where the user is required to obtain at least 11 x 2 stars. How do I calculate this probability? I can use the binomial distribution, but the success probabilities are not the same everywhere, as the probability to obtain 2 stars in the first 5 worlds is locked.
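With unequal per-level probabilities, the number of 2-star levels follows a Poisson binomial distribution rather than a plain binomial, and its tail can be computed exactly with a small dynamic program (a sketch; names are mine):

```python
def at_least_k(probs, k):
    """P(at least k successes) over independent trials with differing
    success probabilities (Poisson binomial), via dynamic programming."""
    dist = [1.0]                       # dist[j] = P(j successes so far)
    for p in probs:
        nxt = [0.0] * (len(dist) + 1)
        for j, q in enumerate(dist):
            nxt[j] += q * (1 - p)      # this trial fails
            nxt[j + 1] += q * p        # this trial succeeds
        dist = nxt
    return sum(dist[k:])
```

Feed it the 2-star probability of every level played so far, locked worlds included, with k = 11.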

Is there any formula to help me with this?
Is there any simpler / more direct approach I can follow?
Does any of this make any sense at all?


## #StackBounty: #books #exercises #mathematics Math book: how to write Exercise and Answers

### Bounty: 50

EDIT

Within `\documentclass[12pt]{book}` I want to create chapter-wise exercises and put all the solutions (with or without hints) at the end of the book. Each answer should include the page number of its exercise, as shown in the attached jpg file.

I want to do this in a simple, non-tedious way: for input, I just want to wrap each question in a `question` environment, and similarly for each answer, but all the answers should come out at the end of the book.

Exercise style:

Solution style:
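Since the screenshots didn’t survive, here is one common approach as a sketch: the `answers` package streams each solution to an auxiliary file that is read back at the end of the book. Environment names and formatting below are my choices, and wiring the exercise’s page number into the answer (as the jpg shows) would additionally need a `\label`/`\pageref` pair per exercise:

```latex
\documentclass[12pt]{book}
\usepackage{answers}

% Each `solution` body is written out to ans.tex,
% wrapped there in a `Solution` environment.
\Newassociation{solution}{Solution}{ans}

\newcounter{question}[chapter]
\newenvironment{question}
  {\refstepcounter{question}\par\medskip
   \noindent\textbf{Question \thechapter.\thequestion.}}
  {\par}

% How each solution is typeset when read back in; #1 is the label
% recorded when the solution was written.
\newenvironment{Solution}[1]
  {\par\medskip\noindent\textbf{Solution #1.}}
  {\par}
\renewcommand{\Currentlabel}{\thechapter.\thequestion}

\begin{document}
\Opensolutionfile{ans}

\chapter{First Chapter}
\begin{question}
  Show that $\sqrt{2}$ is irrational.
\end{question}
\begin{solution}
  Assume $\sqrt{2} = p/q$ in lowest terms; then \dots
\end{solution}

\Closesolutionfile{ans}
\chapter*{Solutions}
\input{ans}
\end{document}
```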


## #HackerRank: Computing the Correlation

### Problem

You are given the scores of N students in three different subjects – Mathematics, Physics and Chemistry – all of which have been graded on a scale of 0 to 100. Your task is to compute the Pearson product-moment correlation coefficient between the scores of different pairs of subjects (Mathematics and Physics, Physics and Chemistry, Mathematics and Chemistry) based on this data. The data is based on the records of the CBSE K-12 Examination – a national school-leaving examination in India – for the year 2013.

Pearson product-moment correlation coefficient

This is a measure of linear correlation, described well on this Wikipedia page. The formula, in brief, is given by:

r = Σᵢ (xᵢ − x̄)(yᵢ − ȳ) / ( √(Σᵢ (xᵢ − x̄)²) · √(Σᵢ (yᵢ − ȳ)²) )

where x and y denote the two vectors between which the correlation is to be measured, and x̄, ȳ their means.

Input Format

The first row contains an integer N.
This is followed by N rows containing three tab-space (‘\t’) separated integers, M P C corresponding to a candidate’s scores in Mathematics, Physics and Chemistry respectively.
Each row corresponds to the scores attained by a unique candidate in these three subjects.

Input Constraints

1 <= N <= 5 x 10^5
0 <= M, P, C <= 100

Output Format

The output should contain three lines, with correlation coefficients computed
and rounded off correct to exactly 2 decimal places.
The first line should contain the correlation coefficient between Mathematics and Physics scores.
The second line should contain the correlation coefficient between Physics and Chemistry scores.
The third line should contain the correlation coefficient between Chemistry and Mathematics scores.

So, your output should look like this (these values are only for explanatory purposes):

0.12
0.13
0.95


Test Cases

There is one sample test case with scores obtained in Mathematics, Physics and Chemistry by 20 students. The hidden test case contains the scores obtained by all the candidates who appeared for the examination and took all three tests (Mathematics, Physics and Chemistry).
Think: How can you efficiently compute the correlation coefficients within the given time constraints, while handling the scores of nearly 400k students?
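One answer to that hint: a single pass accumulating five running sums per subject pair is enough, even at 5 x 10^5 rows. A sketch in Python:

```python
import math

def pearson(xs, ys):
    """Pearson correlation from running sums: one pass over the data,
    O(1) memory beyond the five accumulators."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    syy = sum(y * y for y in ys)
    sxy = sum(x * y for x, y in zip(xs, ys))
    num = n * sxy - sx * sy
    den = math.sqrt(n * sxx - sx * sx) * math.sqrt(n * syy - sy * sy)
    return num / den
```

For the actual submission you would keep accumulators for all three pairs while streaming stdin, then print each coefficient formatted to exactly 2 decimal places.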

Sample Input

20
73  72  76
48  67  76
95  92  95
95  95  96
33  59  79
47  58  74
98  95  97
91  94  97
95  84  90
93  83  90
70  70  78
85  79  91
33  67  76
47  73  90
95  87  95
84  86  95
43  63  75
95  92  100
54  80  87
72  76  90


Sample Output

0.89
0.92
0.81


There is no special library support available for this challenge.

## What is the difference between linear regression on y with x and x with y?

The Pearson correlation coefficient of x and y is the same whether you compute pearson(x, y) or pearson(y, x). This suggests that doing a linear regression of y given x or of x given y should be the same, but that’s not the case.

The best way to think about this is to imagine a scatter plot of points with y on the vertical axis and x on the horizontal axis. Given this framework, you see a cloud of points, which may be vaguely circular, or may be elongated into an ellipse. What you are trying to do in regression is find what might be called the ‘line of best fit’. However, while this seems straightforward, we need to figure out what we mean by ‘best’, and that means we must define what it would be for a line to be good, or for one line to be better than another, etc. Specifically, we must stipulate a loss function. A loss function gives us a way to say how ‘bad’ something is, and thus, when we minimize that, we make our line as ‘good’ as possible, or find the ‘best’ line.

Traditionally, when we conduct a regression analysis, we find estimates of the slope and intercept so as to minimize the sum of squared errors, SSE = Σᵢ (yᵢ − β̂₀ − β̂₁ xᵢ)². The minimizing estimates are defined as follows:

β̂₁ = Σᵢ (xᵢ − x̄)(yᵢ − ȳ) / Σᵢ (xᵢ − x̄)²,  β̂₀ = ȳ − β̂₁ x̄

In terms of our scatter plot, this means we are minimizing the sum of the squared vertical distances between the observed data points and the line.

On the other hand, it is perfectly reasonable to regress x onto y, but in that case, we would put x on the vertical axis, and so on. If we kept our plot as is (with x on the horizontal axis), regressing x onto y (again, using a slightly adapted version of the above equation with x and y switched) means that we would be minimizing the sum of the horizontal distances between the observed data points and the line. This sounds very similar, but is not quite the same thing. (The way to recognize this is to do it both ways, and then algebraically convert one set of parameter estimates into the terms of the other. Comparing the first model with the rearranged version of the second model, it becomes easy to see that they are not the same.)

Note that neither way would produce the same line we would intuitively draw if someone handed us a piece of graph paper with points plotted on it. In that case, we would draw a line straight through the center, but minimizing the vertical distance yields a line that is slightly flatter (i.e., with a shallower slope), whereas minimizing the horizontal distance yields a line that is slightly steeper.

A correlation is symmetrical: x is as correlated with y as y is with x. The Pearson product-moment correlation can, however, be understood within a regression context. The correlation coefficient, r, is the slope of the regression line when both variables have been standardized first; that is, you first subtract the mean from each observation and then divide the differences by the standard deviation. The cloud of data points will now be centered on the origin, and the slope would be the same whether you regressed y onto x, or x onto y.
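These slope relationships are easy to verify numerically: with sample standard deviations s_x and s_y, the slope of y on x is r·s_y/s_x, the slope of x on y is r·s_x/s_y, so their product is r², and the standardized slope is r in either direction. A sketch with simulated data:

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.normal(size=1000)
y = 0.5 * x + rng.normal(scale=0.8, size=1000)

r = np.corrcoef(x, y)[0, 1]
cov = np.cov(x, y, ddof=1)[0, 1]
b_yx = cov / np.var(x, ddof=1)  # slope regressing y on x
b_xy = cov / np.var(y, ddof=1)  # slope regressing x on y

# Standardize both variables: the regression slope collapses to r,
# whichever variable is on the left-hand side.
zx = (x - x.mean()) / x.std(ddof=1)
zy = (y - y.mean()) / y.std(ddof=1)
b_std = np.cov(zx, zy, ddof=1)[0, 1] / np.var(zx, ddof=1)
```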

Now, why does this matter? Using our traditional loss function, we are saying that all of the error is in only one of the variables (viz., y). That is, we are saying that x is measured without error and constitutes the set of values we care about, but that y has sampling error. This is very different from saying the converse. This was important in an interesting historical episode: In the late 70’s and early 80’s in the US, the case was made that there was discrimination against women in the workplace, and this was backed up with regression analyses showing that women with equal backgrounds (e.g., qualifications, experience, etc.) were paid, on average, less than men. Critics (or just people who were extra thorough) reasoned that if this was true, women who were paid equally with men would have to be more highly qualified, but when this was checked, it was found that although the results were ‘significant’ when assessed the one way, they were not ‘significant’ when checked the other way, which threw everyone involved into a tizzy. See here for a famous paper that tried to clear the issue up.