#StackBounty: #stochastic-processes #central-limit-theorem #integral #numerical-integration Plain English explanation of Ito's inte…

Bounty: 50

I’m looking for a plain English explanation of Ito’s integral. I don’t need an exhaustive proof, derivation, etc. Just a simple sense of what it effectively does and why it’s better suited than a Riemann sum (or some other numerical approximation).

With Riemann sums, the interval is divided into an arbitrarily large number of rectangles, their areas are summed, and you get a pretty good approximation of the area under the curve. I’ve read that this approach works best with "monotonic functions" and isn’t ideal when the function jumps randomly; hence it performs poorly when approximating the area under the curve of Brownian motion.
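To make the Riemann picture concrete, here is a minimal numerical sketch (NumPy, with sin on [0, pi] as an arbitrary smooth example; the function choice is mine, not from the question):

```python
import numpy as np

def riemann_sum(f, a, b, n):
    # Left-endpoint Riemann sum: n rectangles of equal width (b - a) / n,
    # each with height f evaluated at the interval's left edge.
    x = np.linspace(a, b, n, endpoint=False)
    dx = (b - a) / n
    return np.sum(f(x)) * dx

# For a smooth function the sum converges to the true area as n grows.
approx = riemann_sum(np.sin, 0.0, np.pi, 100_000)
# The exact area under sin on [0, pi] is 2.
```

For a deterministic, reasonably smooth integrand this converges as the rectangles shrink; the difficulty with Brownian motion is of a different kind, since the path is nowhere differentiable and has unbounded variation.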

From various resources, I’ve pieced together that Ito’s integral still uses an arbitrarily large number of small rectangles to approximate area. However, they are (A) of random width and (B) sometimes overlapping with one another. Due to B, the area cannot be approximated as the simple sum of the rectangles. However, it can be approximated in a probabilistic sense: the area can be seen as a random variable, and (perhaps via the central limit theorem) conceived as the expectation of several random variables.

So essentially these random rectangles are averaged, and we get a mean and standard deviation around our estimate of the function’s area.
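Here’s the rough numerical experiment I tried while piecing this together. As far as I can tell, in the standard construction the time intervals are fixed and it’s the heights (the Brownian increments) that are random; the integrand is evaluated at the left endpoint of each interval. The textbook example of integrating B against its own increments on [0, 1] (my choice of example, not definitive):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
dt = 1.0 / n

# Simulate one Brownian path on [0, 1] from independent Gaussian increments.
dB = rng.normal(0.0, np.sqrt(dt), n)
B = np.concatenate(([0.0], np.cumsum(dB)))

# Ito sum: integrand evaluated at the LEFT endpoint of each time interval,
# multiplied by the (random) Brownian increment over that interval.
ito = np.sum(B[:-1] * dB)

# Ito calculus gives the closed form (B(1)^2 - 1) / 2 for this integral;
# the -1 is the famous correction term that ordinary calculus lacks.
exact = (B[-1] ** 2 - 1.0) / 2.0
```

Evaluating at the left endpoint (rather than the midpoint or right endpoint) is what defines the Ito convention; other evaluation points give genuinely different limits, which never happens for ordinary Riemann integrals of smooth functions.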

Good chance that I’m confused. Any chance someone could clarify this? Please don’t rely heavily on LaTeX; again, I’m interested in a plain English summary.

Edit: If I am on the right track, then this method would work best with "stationary" data, where the jumps generally cancel each other out and bounce around some mean value. However, if there is a general trend over time, might performance be negatively impacted?
