#StackBounty: #stochastic-processes #markov-process #stochastic-calculus Let $X_t$ be a solution of an SDE. Does the set $\{X_t \in \{p…

Bounty: 100

This question was previously posted on https://math.stackexchange.com/questions/3981156/let-x-t-be-a-solution-of-a-sde-does-the-set-x-t-in-p-has-null-meas.

I think this question is easy. However, I have not been able to solve it.

Let $a,\sigma:\mathbb{R}\times \mathbb{R}\to\mathbb{R}$ be smooth functions such that $\sigma>0$. Consider the one-dimensional SDE
$$dX_t = a(X_t,t)\,dt + \sigma(X_t,t)\,dW_t,$$
$$X_0 = x_0\in\mathbb{R},$$
where $W_t$ is a standard Brownian motion.

Fixing $y\in\mathbb{R}$ and $t>0$, I was interested in showing that
$$\mathbb{P}\left(\{\omega \in \Omega : X_t = y\}\right)=0,$$
where $(\Omega,\mathcal{F}, \mathbb{P})$ is the probability space being considered.

Does anyone know whether the above equality holds? A reference would be enough for me.
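A simulation cannot prove a measure-zero statement, but it can make it plausible. Below is a minimal sketch, assuming the concrete coefficients $a(x,t)=-x$ and $\sigma(x,t)=1$ (an Ornstein–Uhlenbeck process, chosen purely for illustration because its time-$t$ law is known to be Gaussian, hence atomless): an Euler–Maruyama simulation whose empirical moments match that Gaussian law.

```python
# Euler-Maruyama sketch for dX = -X dt + dW, X_0 = 1 (an assumed example).
# The exact law of X_t is N(x0 * exp(-t), (1 - exp(-2t)) / 2), which has a
# density, so P(X_t = y) = 0 for every fixed y in this concrete case.
import numpy as np

rng = np.random.default_rng(4)
x0, t_end, n_steps, n_paths = 1.0, 1.0, 1_000, 100_000
dt = t_end / n_steps

x = np.full(n_paths, x0)
for _ in range(n_steps):
    x += -x * dt + np.sqrt(dt) * rng.standard_normal(n_paths)

mean, var = x0 * np.exp(-t_end), (1 - np.exp(-2 * t_end)) / 2
print(x.mean(), x.var())   # empirical moments of the simulated X_t
print(mean, var)           # exact Gaussian moments, for comparison
```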


Get this bounty!!!

#StackBounty: #time-series #probability #stochastic-processes How to get this analytical result for probability of wait times

Bounty: 50

I'm working with a continuous-time stochastic process, where a particular event may happen at some time $t$ with an unknown underlying distribution.

One "run" of a simulation of this process will result in a series of event times for each time the event happened within the run. So the output is just $[t_1, t_2, … t_n]$.

From this output I'm trying to calculate a metric I'll call $u$, defined as "the probability that, if you choose a random time $t$ within a run and look at the time range $[t, t+L]$ (for a pre-specified $L$), at least one event occurred in that range".

I've found some documentation (from an employee long gone from the company) that gives an analytical form for $u$, and I've verified that this form aligns very well with experimental data, but I haven't been able to recreate the derivation that leads to it.

The analytical form makes use of a probability density function of wait times, $f(t)$, where a wait time is simply the time between consecutive events. So the experimental wait times are simply $[t_1, t_2-t_1, t_3-t_2, \ldots, t_n - t_{n-1}]$.

The form I'm given is $u = 1 - \frac{\int_{L}^{\infty} (t-L)f(t)\,dt}{\int_{0}^{\infty} tf(t)\,dt}$, where $t$ is the wait time.

It's clear that $\frac{\int_{L}^{\infty} (t-L)f(t)\,dt}{\int_{0}^{\infty} tf(t)\,dt}$ is the complementary probability that no event occurs in this random time range of length $L$, but I'm still not clear on how the exact terms are arrived at.

In my attempt to make sense of it I've reconstructed it into $u = 1 - \frac{E(t-L \mid t > L)\,P(t > L)}{E(t)}$,

which makes some intuitive sense to me, but I still can't find a way to start from the original problem and arrive at any of these forms of the analytical solution.

Any guidance on this would be greatly appreciated.
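For what it's worth, here is a hedged sketch of where such a form can come from, assuming the events form a renewal process with i.i.d. wait times $W$ having density $f$: a uniformly chosen time $t$ lands inside some wait interval, and no event occurs in $[t, t+L]$ exactly when the residual time to the next event exceeds $L$. Inside a wait interval of length $w$, the set of starting points with that property has length $(w-L)^+$, so a renewal-reward (length-biasing) argument gives the long-run fraction
$$P(\text{no event in } [t, t+L]) = \frac{E[(W-L)^+]}{E[W]} = \frac{\int_{L}^{\infty}(w-L)f(w)\,dw}{\int_{0}^{\infty} w f(w)\,dw},$$
which is exactly the quotient in the documented form; conditioning on $\{W > L\}$ recovers the $E(W-L \mid W > L)\,P(W > L)$ version.

The sketch below checks this numerically. The exponential wait-time distribution is an assumption, chosen only because the exact answer $u = 1 - e^{-\lambda L}$ is then available for comparison; the real process's distribution is unknown.

```python
# Monte Carlo check of u = 1 - E[(W-L)^+] / E[W] under assumed
# exponential(rate) wait times, where the exact value is 1 - exp(-rate * L).
import numpy as np

rng = np.random.default_rng(0)
L, rate, n_events = 2.0, 0.5, 200_000

waits = rng.exponential(1 / rate, n_events)    # i.i.d. wait times W
events = np.cumsum(waits)                      # event times t_1, ..., t_n
horizon = events[-1]

# Fraction of uniformly drawn times t whose window [t, t+L] contains an event
t = rng.uniform(0, horizon - L, 100_000)
idx = np.searchsorted(events, t)               # first event at or after t
u_mc = np.mean(events[idx] <= t + L)

# The documented analytical form, with the integrals replaced by sample means
u_formula = 1 - np.mean(np.maximum(waits - L, 0)) / np.mean(waits)

print(u_mc, u_formula)   # both should be close to 1 - exp(-1) ~ 0.632
```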


Get this bounty!!!

#StackBounty: #time-series #stochastic-processes #stationarity #asymptotics #moving-average Is the MA($\infty$) process with i.i.d. noi…

Bounty: 100

I have an MA($\infty$) process defined by
$$
X_t = \sum_{k=0}^\infty \alpha_{k} \epsilon_{t-k}, \qquad t\in\mathbb{Z},
$$

where the sums converge a.s. and the $\epsilon_t$ are i.i.d. centered noise with $\operatorname{Var}(\epsilon_t) = \sigma^2 < \infty$.

There are plenty of proofs in the literature that this process is weakly stationary.

Is this process strictly stationary?
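The usual route to an affirmative answer is to note that $X_t = g\big((\epsilon_{t-k})_{k\ge 0}\big)$ for the same measurable map $g$ at every $t$, while the i.i.d. assumption makes the law of the noise sequence shift-invariant. The simulation below is only a sanity check of that intuition, with $\alpha_k = 0.5^k$ as an assumed summable choice of coefficients and a finite truncation standing in for the infinite sum.

```python
# Truncated MA(infinity) with alpha_k = 0.5**k (an assumed example): compare
# the empirical distribution of the pair (X_t, X_{t+1}) at two distant times.
import numpy as np

rng = np.random.default_rng(1)
K, T, n_paths = 50, 200, 20_000            # truncation, length, sample size
alpha = 0.5 ** np.arange(K)

eps = rng.standard_normal((n_paths, T + K))
# X[:, t] = sum_{k < K} alpha_k * eps_{t-k}, built column by column
X = np.stack([eps[:, t:t + K] @ alpha[::-1] for t in range(T)], axis=1)

for t in (10, 150):                        # two widely separated times
    pair = X[:, [t, t + 1]]
    print(t, pair.mean(axis=0).round(3), np.cov(pair.T).round(3))
# Matching moments across t are consistent with strict stationarity,
# though a simulation can only ever suggest it, not prove it.
```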


Get this bounty!!!

#StackBounty: #distributions #stochastic-processes #nonlinear-regression What distribution may electric vehicle battery capacity data f…

Bounty: 50

I'm trying to find out the shape of the curve that reflects electric vehicle battery degradation data (as a function of cumulative travelling distance). The red line on the plot doesn't seem to be a perfect fit. Is it a stretched exponential of some sort?

[Plot: remaining battery capacity vs. cumulative travelling distance, with a fitted red line]
Source: link

So, as I do not fully understand the nature of the process, I cannot figure out what would be appropriate distributions for such a continuous variable as remaining capacity.
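In case it helps to have something concrete to try, below is a minimal curve-fitting sketch assuming a stretched-exponential model $c(d) = c_0 \exp(-(d/\lambda)^\beta)$. The data points are purely hypothetical stand-ins; the real numbers would come from the plot's source.

```python
# Fit a stretched exponential c(d) = c0 * exp(-(d / scale)**beta) to
# hypothetical capacity-vs-distance data (illustrative values only).
import numpy as np
from scipy.optimize import curve_fit

def stretched_exp(d, c0, scale, beta):
    return c0 * np.exp(-(d / scale) ** beta)

d = np.array([5_000, 20_000, 40_000, 60_000, 80_000, 100_000], dtype=float)  # km
cap = np.array([99.0, 96.0, 93.5, 91.8, 90.4, 89.2])                          # %

params, _ = curve_fit(stretched_exp, d, cap, p0=(100.0, 200_000.0, 0.5))
print(dict(zip(("c0", "scale", "beta"), params.round(3))))
# A fitted beta < 1 would point toward "stretched" (decelerating) decay.
```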

Any tips will be much appreciated.


Get this bounty!!!

#StackBounty: #time-series #autocorrelation #covariance #stochastic-processes #brownian Time-series Auto-Covariance vs. Stochastic Proc…

Bounty: 50

My background is more on the stochastic processes side, and I am new to time series analysis. I would like to ask about estimating a time-series auto-covariance:

$$ \hat\lambda(u):=\frac{1}{T}\sum_{t=1}^{T-u}(Y_{t+u}-\bar{Y})(Y_{t}-\bar{Y}), $$

where $T$ is the length of the observed series.

When I think of the covariance of standard Brownian motion $W(t)$ with itself, i.e. $\operatorname{Cov}(W_s,W_t)=\min(s,t)$, the way I interpret the covariance is as follows: since $\mathbb{E}[W_s|W_0]=\mathbb{E}[W_t|W_0]=0$, the covariance is a measure of how "often" one would "expect" a specific Brownian motion path at time $s$ to be on the same side of the x-axis as the same Brownian motion path at time $t$.

It's perhaps easier to think of correlation rather than covariance, since $\operatorname{Corr}(W_s,W_t)=\frac{\min(s,t)}{\sqrt{s}\,\sqrt{t}}$: with the correlation, one can see that the closer $s$ and $t$ are together, the closer the correlation should get to $1$, as indeed one would expect intuitively.

The main point here is that at each pair of times $s$ and $t$, the Brownian motion has a distribution over paths: so if I were to "estimate" the covariance from sampling, I'd want to simulate many paths (or observe many paths), then fix $t$ and $s=t-h$ ($h$ can be negative), and compute:

$$ \hat\lambda(s,t):=\frac{1}{N}\sum_{i=1}^{N}(W_{i,t}-\bar{W}_{t})(W_{i,t-h}-\bar{W}_{t-h}), $$

where $i$ indexes the $N$ simulated Brownian paths and $\bar{W}_t$ is the sample mean across paths at time $t$.

With the time-series approach, it seems that we "generate" just one path (or observe just one path) and then estimate the auto-covariance from just that one path by shifting through time.

Hopefully I am making my point clear: my question is on the intuitive interpretation of the estimation methods.
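To make the contrast concrete, here is a minimal sketch of both estimators for Brownian motion; the particular times, lag, and step sizes are arbitrary choices. The ensemble average recovers $\min(s,t)$, while the single-path time average does not, because time-averaging at a fixed lag is justified by stationarity and ergodicity, which Brownian motion lacks.

```python
# Ensemble-average vs. single-path time-average covariance estimation
# for simulated Brownian motion (all parameter values are arbitrary).
import numpy as np

rng = np.random.default_rng(2)
n_paths, n_steps, dt = 10_000, 500, 0.002
W = np.cumsum(rng.standard_normal((n_paths, n_steps)) * np.sqrt(dt), axis=1)

s_idx, t_idx = 199, 399                      # times s = 0.4, t = 0.8
# Ensemble estimate over paths at fixed times (E[W_s] = E[W_t] = 0)
print(np.mean(W[:, s_idx] * W[:, t_idx]))    # ~ min(0.4, 0.8) = 0.4

# Single-path time average at lag u: the time-series-style estimator.
# It does not estimate min(s, t); it needs a stationary, ergodic series.
Y, u = W[0], 200
print(np.mean((Y[u:] - Y.mean()) * (Y[:-u] - Y.mean())))
```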


Get this bounty!!!

#StackBounty: #stochastic-processes #markov-process Markov chain with stopping times

Bounty: 50

I have a Markov chain with transition matrix $P$, whose transition probabilities are:

$$p_{i,j}=
\begin{cases}
1-d, & \text{if } i=j > 0 \\[2ex]
d, & \text{if } j=i-1 \geq 0 \\[2ex]
(1-\gamma)^{j-1}\gamma, & \text{for } 0=i < j
\end{cases}$$

Let $T_{i,j}$ be the first hitting time of state $j$ after the first hitting time of state $i$ (i.e. $T_{i,j}= \inf\{n \geq 1 : X_n=j,\ n > H^i\}$, where $H^i$ denotes the first hitting time of state $i$).

$(\mathbf 1)$ I would like to explain why, with $S_{i,j}=T_{i,j}-H^i$ for $i > 0$, we can write $$S_{i,0}=\sum^i_{k=1}S_{k,k-1}.$$

$(\mathbf 2)$ Also, I would like to determine the distribution of $S_{k,k-1}$ and confirm it using the fact that $\{H^k < \infty\}$ for all $k$. Then $(\mathbf 3)$ I would like to justify why the sum is made from a sequence of i.i.d. random variables, i.e. why the $S_{k,k-1}$ terms are independent of each other.

So, for my first task, here's how far I got with it:

By my logic, $T_{i,j}$ is just another way of writing the hitting time of $j$. So, for example, let $X_0=4$ (the state we start from is $4$). Say the hitting time of state $3$ is $2$, and the hitting time of state $2$ is $4$. Then $T_{4,2}=4$ is just a fancier way of writing $H^2=4$. But I also observe that $T_{3,2}=4$; I mean, by definition, $n$ should be greater than $H^3=2$.

OK, so by my logic, $S_{4,0}$ is just the number of steps we make from $4$ until we hit $0$ (i.e. $S_{i,j}=H^j-H^i$). So, to understand why we can write $S_{i,0}$ in such a form, I made a small example. Let $X_0=4$, $H^3=2$, $H^2=4$, $H^1=6$, $H^0=8$.

So, $S_{3,0}=S_{1,0}+S_{2,1}+S_{3,2}=(H^0-H^1)+(H^1-H^2)+(H^2-H^3)=(8-6)+(6-4)+(4-2)=6$.

And indeed, the number of steps I make from $3$ until I hit $0$ is $6$.

So, I'm not quite sure how to prove this result mathematically, but an intuitive explanation is as follows:

“Since the sum $\sum^i_{k=1}S_{k,k-1}$ represents the sum of all “steps” taken between consecutive states, it equals the total number of “steps” taken from the first state of the sum to the end state of the sum.”

Is there any better way to formulate this? Am I correct in my thinking?

Task $(\mathbf 2)$: I observe that $S_{k,k-1}\sim \operatorname{Geom}(d)$. We can confirm that it is indeed geometrically distributed using the facts that $\{H^k < \infty\}$ for all $k$ and that the geometric distribution is almost surely finite, i.e. $P(S_{k,k-1} < \infty)=1$.

Task $(\mathbf 3)$: I'm actually not quite sure how to justify why the sum is made of i.i.d. random variables. Any suggestions?
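A simulation can at least make tasks $(\mathbf 2)$ and $(\mathbf 3)$ plausible. The sketch below assumes the reading of the transition matrix above (stay at $i>0$ with probability $1-d$, step down with probability $d$) and checks empirically that each descent time $S_{k,k-1}$ has mean $\approx 1/d$, as a $\operatorname{Geom}(d)$ variable should, and that descents from different levels look uncorrelated, consistent with the strong-Markov independence one would want to prove.

```python
# Simulate the downward phase of the chain: from each level k > 0 the chain
# stays with probability 1-d and steps down with probability d (as assumed
# above). steps[r, k-1] records S_{k,k-1} in run r.
import numpy as np

rng = np.random.default_rng(3)
d, start, n_runs = 0.3, 4, 20_000

steps = np.zeros((n_runs, start), dtype=int)
for r in range(n_runs):
    for k in range(start, 0, -1):
        s = 1
        while rng.random() >= d:       # stay at level k with prob 1-d
            s += 1
        steps[r, k - 1] = s

print(steps.mean(axis=0))              # each ~ 1/d ~ 3.33, matching Geom(d)
print(np.corrcoef(steps.T).round(3))   # off-diagonal ~ 0 suggests independence
```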

Any help is appreciated.


Get this bounty!!!