*Bounty: 50*

I’m working with a continuous-time stochastic process in which a particular event may occur at some time $t$, with an unknown underlying distribution.

One "run" of a simulation of this process will result in a series of event times for each time the event happened within the run. So the output is just $[t_1, t_2, … t_n]$.

From this output I’m trying to calculate a metric I’ll call $u$, defined as "the probability that, if you choose a random time $t$ within a run and look at the range $[t, t+L]$ (for a pre-specified $L$), at least one event occurred in that range".

I’ve found some documentation (from an employee long gone from the company) that gives an analytical form for $u$, and I’ve verified that this form aligns very well with experimental data, but I haven’t been able to reproduce the derivation that leads to it.
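For context, here is a minimal sketch of the kind of check I mean. It is not my actual process; as a stand-in it uses exponential wait times (i.e. a Poisson process with rate `lam`), for which the integral form below reduces to the closed form $1 - e^{-\lambda L}$. All names (`lam`, `L`, `T`) are mine, chosen for the example:

```python
import bisect
import math
import random

random.seed(0)
lam, L, T = 1.0, 0.5, 10_000.0  # rate, window length, run duration

# One long run: event times generated from exponential wait times.
events, t = [], 0.0
while True:
    t += random.expovariate(lam)
    if t > T:
        break
    events.append(t)

# Monte Carlo estimate of u: pick a random start time s in [0, T - L]
# and check whether any event falls in [s, s + L].
n_trials = 100_000
hits = 0
for _ in range(n_trials):
    s = random.uniform(0.0, T - L)
    i = bisect.bisect_right(events, s)  # index of first event after s
    hits += i < len(events) and events[i] <= s + L
u_mc = hits / n_trials

# Closed form of the documented formula for exponential wait times.
u_formula = 1.0 - math.exp(-lam * L)
print(u_mc, u_formula)  # both should be close to 1 - e^{-0.5} ≈ 0.393
```

The two numbers agree closely, which is what convinced me the documented form is right even though I can't re-derive it.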

The analytical form makes use of the probability density function $f(t)$ of wait times, where a wait time is simply the time between consecutive events. So the experimental wait times are simply $[t_1, t_2 - t_1, t_3 - t_2, \dots, t_n - t_{n-1}]$.

The form I’m given is: $u = 1 - \dfrac{\int_L^{\infty} (t-L)\,f(t)\,dt}{\int_0^{\infty} t\,f(t)\,dt}$, where $t$ is the wait time.

It’s clear that $\dfrac{\int_L^{\infty} (t-L)\,f(t)\,dt}{\int_0^{\infty} t\,f(t)\,dt}$ is the complementary probability that no events occur in this random time range of length $L$, but I’m still not clear on how the exact terms are arrived at.

In my attempt to make sense of it I’ve reconstructed it into $u = 1 - \dfrac{E(t-L \mid t > L)\,P(t > L)}{E(t)}$,

which makes some intuitive sense to me, but I still can’t find a way to start from the original problem and arrive at any of these forms of the analytical solution.
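One thing I did convince myself of numerically is that the reconstructed conditional form and the integral form are literally the same quantity, since $E[(t-L)\,\mathbf{1}\{t>L\}] = E(t-L \mid t>L)\,P(t>L)$: replacing the integrals with empirical means over sampled wait times, the two forms agree exactly, not just approximately. This sketch again uses exponential wait times purely as a stand-in, and all names are my own:

```python
import math
import random

random.seed(1)
lam, L, N = 1.0, 0.5, 200_000
waits = [random.expovariate(lam) for _ in range(N)]

# Integral form: empirical means stand in for the integrals,
# since int (t-L) f(t) dt over [L, inf) is just E[max(t - L, 0)].
num = sum(max(w - L, 0.0) for w in waits) / N
den = sum(waits) / N
u_int = 1.0 - num / den

# Conditional form: E(t-L | t>L) * P(t>L) / E(t).
tail = [w for w in waits if w > L]
u_cond = 1.0 - (sum(w - L for w in tail) / len(tail)) * (len(tail) / N) / den

print(u_int, u_cond)  # identical up to floating point; both ≈ 1 - e^{-lam*L}
```

So the two forms are interchangeable; my question is really about how either one follows from the original "random window of length $L$" problem.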

Any guidance on this would be greatly appreciated.