*Bounty: 100*

Consider *tests* of randomness of bit sequences of fixed size $n$ bits, as in cryptography (e.g. NIST Special Publication 800-22, pages 1-4). Define such a test as any deterministic function $T$ that accepts a vector $V$ of $n$ bits and outputs a *P-value* $P$ in $]0\dots1]$, obeying the defining property

$$\forall\alpha\in[0\dots1],\;\;\Pr\big(T(V)\le\alpha\big)\,\le\,\alpha$$

where the probability is computed with $V$ a vector of random independent unbiased bits (or equivalently, is computed as the proportion of $V$ such that $T(V)\le\alpha$ among the $2^n$ vectors $V$).
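For small $n$, the defining property can be checked by brute force over all $2^n$ inputs; a minimal sketch (the function and test names here are mine, not from the reference):

```python
from itertools import product

def satisfies_defining_property(test, n):
    # Collect T(V) over all 2^n inputs and sort ascending.
    ps = sorted(test(bits) for bits in product((0, 1), repeat=n))
    total = 2 ** n
    # Pr(T(V) <= p) equals (last index of p + 1)/total in the sorted list,
    # so it suffices to check each distinct value at its last occurrence.
    return all(ps[i] >= (i + 1) / total
               for i in range(total)
               if i == total - 1 or ps[i] < ps[i + 1])

def non_zero(bits):
    # The Non-zero test: 1/2^n when all bits are zero, else 1.
    return 1 / 2 ** len(bits) if not any(bits) else 1

print(satisfies_defining_property(non_zero, 3))           # True
print(satisfies_defining_property(lambda bits: 0.5, 3))   # False: a constant 1/2 is not a valid P-value
```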

Example tests matching this definition are

- *True*, which always outputs $P=1$.
- *Non-zero*, which outputs $1/2^n$ if all bits in $V$ are zero, and outputs $1$ otherwise.
- *Non-stuck*, which outputs $1/2^{n-1}$ if all bits in $V$ are identical, and outputs $1$ otherwise.
- *Balanced*, which computes the number $s$ of bits set in $V$, and outputs the odds that for random $V$, $|2s-n|$ is at least as observed.
  - For $n\le3$, Balanced is the same as Non-stuck.
  - For $n=4$, $P=\begin{cases}1/8&\text{ if }s\in\{0,4\}\\5/8&\text{ if }s\in\{1,3\}\\1&\text{ otherwise}\end{cases}$
  - For $n=5$, $P=\begin{cases}1/16&\text{ if }s\in\{0,5\}\\3/8&\text{ if }s\in\{1,4\}\\1&\text{ otherwise}\end{cases}$
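As a sanity check, Balanced's P-value can be computed exactly for small $n$ from its definition above (a sketch; the function name is mine):

```python
from math import comb

def balanced_p(n, s):
    # Exact Balanced P-value: the probability that, for a uniform random
    # n-bit vector with t bits set, |2t - n| is at least the observed |2s - n|.
    observed = abs(2 * s - n)
    favorable = sum(comb(n, t) for t in range(n + 1)
                    if abs(2 * t - n) >= observed)
    return favorable / 2 ** n

# Reproduces the tables above:
print(balanced_p(4, 0))  # 0.125  = 1/8
print(balanced_p(4, 1))  # 0.625  = 5/8
print(balanced_p(5, 1))  # 0.375  = 3/8
print(balanced_p(3, 0))  # 0.25   = 1/2^{n-1}, matching Non-stuck for n<=3
```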

There's a natural partial order relation among tests: $T$ implies $T'$ when $\forall V,\ T(V)\le T'(V)$. Any test implies True. Balanced implies Non-stuck, but does not imply Non-zero. Some tests, including Balanced and Non-zero, are optimal in the sense that no other test implies them.

Section 2 of the above reference describes 15 tests for large $n$ (thousands of bits), intended to catch some defects relevant to actual random number generators, and to be near-optimal (in the above sense). For example, section 2.1, designated *The Frequency (Monobit) Test*, is an approximation of Balanced for large $n$ using the complementary error function.
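That approximation replaces the exact binomial tail of Balanced with a normal tail, per section 2.1 of SP 800-22; a sketch:

```python
from math import erfc, sqrt

def monobit_p(bits):
    # Frequency (Monobit) Test, NIST SP 800-22 section 2.1:
    # large-n normal approximation of the Balanced test's binomial tail.
    n = len(bits)
    s_n = sum(1 if b else -1 for b in bits)  # S_n = 2s - n
    return erfc(abs(s_n) / sqrt(2 * n))

# 60 ones among n=100 bits: |2s-n| = 20, P = erfc(20/sqrt(200)) ~ 0.0455
print(monobit_p([1] * 60 + [0] * 40))
```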

**Q1**: Assume that all bits tested are random independent bits having the same odds $q=\frac12+\epsilon$ to be set, with $\epsilon$ unknown (besides being smallish), corresponding to Shannon entropy per bit $$H=-q\log_2(q)-(1-q)\log_2(1-q)=1-\frac2{\log2}\epsilon^2+\mathcal O(\epsilon^4)$$
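The exact entropy and its quadratic approximation can be compared numerically (a sketch; here $\log2$ is the natural logarithm, consistent with the expansion above):

```python
from math import log, log2

def entropy_per_bit(q):
    # Shannon entropy of a single bit set with probability q.
    return -q * log2(q) - (1 - q) * log2(1 - q)

eps = 0.01
exact = entropy_per_bit(0.5 + eps)
approx = 1 - (2 / log(2)) * eps ** 2   # 1 - (2/ln 2) eps^2
print(exact, approx)  # agree to O(eps^4), a few 1e-8 here
```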

The Balanced test for some (large) number $n$ of such bits is applied once, and outputs a small *P-value* (say $Ple0.001$). That allows us to reject the null hypothesis $H=1$ with high confidence (corresponding to the *P-value* $P$).

What is a tight function $H(P,n)$ such that we can reject $H\ge H(P,n)$ with some good confidence (corresponding to some known *P-value* higher than $P$, perhaps $2P$ or something in that vein)? By "tight function" I mean: the lower the $H(P,n)$ we manage to prove for some confidence, the better.

**Q2**: Things are as in Q1, except that the test is unspecified beyond the defining property of *P-values*. Can we reject the hypothesis $H\ge H(P,n)$ with good confidence, for whatever $H(P,n)$ and confidence level was established in Q1? If that conjecture is false, what's a counterexample, and/or is that repairable?

**Q3**: Things are as in Q2 (or in Q1, if the property sought in Q2 does not hold), except that the bits in the input $V$ might be dependent, but still with Shannon entropy per bit $H$; that is, the distribution of the inputs $V$ is such that $$nH\;=\;-\sum_{V\text{ with }\Pr(V)\ne0}\Pr(V)\log_2(\Pr(V))$$

Can we reject the hypothesis $H\ge H(P,n)$ with good confidence, for whatever $H(P,n)$ and confidence level was established in Q1? If that conjecture is false, what's a counterexample, and/or is that repairable?