Since 2015, Bell-type experiments designed to test local realism have taken the format of a so-called “loophole-free Bell test”. There is a fixed sequence of *N* time-slots, or more precisely, paired time-slots: time-slots in two distant labs run by two scientists, Alice and Bob. The time-slots are paired in such a way that a signal sent at the *start* of one of Alice’s time-slots from Alice’s lab to Bob’s, travelling at the speed of light, would only reach Bob’s lab *after* the *end* of Bob’s corresponding time-slot, and vice versa. Just after the start of each time-slot, each scientist inserts a binary setting into an experimental device. Something goes on inside that apparatus, and before the time-slot is over, a binary outcome is produced. Each instance with two inputs and two outputs is called a trial.

Actually, many experiments require a slightly more elaborate protocol involving a third lab, which you *may* think of as a source of “pairs of particles”. Charlie’s lab is located somewhere between Alice’s and Bob’s. Charlie’s device outputs the message “ready” or “not ready” before the *end* of his time-slot (its length is irrelevant). The message, however, could only arrive at Alice’s and Bob’s labs after they have already entered their settings, so it could not directly influence their choices. Outcomes get delivered anyway. After the experiment, one looks only at the inputs and outputs of each trial in which Charlie saw the output “ready”. The experiment continues long enough that there are *N* trials labelled by Charlie’s apparatus as “ready”. From now on, I will forget about this “post-selection” of *N* trials: the first *N* which got off to a good start. (The word “post-selection” is a misnomer: the selection is performed after the whole experiment is complete, but it is determined in advance of the introduction of the settings.)
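The selection of “ready” trials is simple to express in code. Here is a minimal sketch in Python (the article’s own snippets use R; the record format and the function name are mine, purely for illustration): keep only trials flagged “ready” by Charlie’s device, stopping after the first *N*.

```python
# Each trial record carries Charlie's ready flag plus settings (a, b)
# and outcomes (x, y). The flag is fixed before the settings could have
# reached Charlie, so this is selection decided in advance, not a true
# "post-selection" on the results.

def select_ready_trials(trials, N):
    """Return the first N trials that Charlie's device flagged as 'ready'."""
    selected = []
    for trial in trials:
        if trial["ready"]:
            selected.append(trial)
        if len(selected) == N:
            break
    return selected

# Toy run: alternate ready / not-ready trials.
all_trials = [{"ready": i % 2 == 0, "a": 0, "b": 1, "x": 0, "y": 0}
              for i in range(10)]
print(len(select_ready_trials(all_trials, 3)))  # -> 3
```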

The settings are typically chosen to resemble sequences of outcomes of independent fair coin tosses. Sometimes they are generated by physical random number generators using physical noise sources, sometimes they are created using pseudo-random number generators (RNGs). Sometimes they are generated on the fly, sometimes created in advance. The idea is that the *settings* are *inputs* which come *from* the outside world, outside the experimental devices, and the *outcomes* are *outputs* delivered by the devices *to* the outside world.
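For concreteness, here is one way such setting sequences could be produced with a seeded pseudo-RNG, so the same sequence can be regenerated on demand. (A sketch in Python rather than the article’s R; the function name and seed value are mine.)

```python
import random

def generate_settings(N, seed):
    """Generate N pairs of binary settings (a, b), resembling N pairs of
    independent fair coin tosses. Seeding makes the sequence exactly
    reproducible, as with settings created in advance."""
    rng = random.Random(seed)  # local RNG: does not disturb global state
    return [(rng.randint(0, 1), rng.randint(0, 1)) for _ in range(N)]

settings = generate_settings(1000, seed=42)
print(len(settings))  # -> 1000
```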

Below is a graphical model specified in the language of the present-day theory of causality based on directed acyclic graphs (DAGs), describing the dependence structure of what is observed in terms of “hidden variables”. There is no assumption that the hidden parts of the structure are classical, nor that they are located in classical space-time. The node “psi” stands for the state of all experimental apparatus in the three labs including transmission lines between them before one trial of the experiment starts, *as far as is directly relevant* in the causal process leading from experimental inputs to experimental outputs. The node “phi” consists of the state of external devices which generate the settings. The graphical model says that *as far as the settings and the outputs are concerned*, “phi” and “psi” can be taken to be independent. It says that Bob’s setting is not in the causal pathway to Alice’s outcome.
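The graph just described can be written out explicitly and its causal claims checked mechanically. Below is a small sketch; the edge list is my own encoding, inferred from the description (“phi” feeds the settings, “psi” feeds the outcomes, and each setting enters only its own lab’s outcome).

```python
# Edges of the DAG, as parent -> children:
# phi (external devices) generates the settings a and b;
# psi (state of the apparatus) feeds both outcomes x and y;
# each setting influences only the outcome in its own lab.
dag = {
    "phi": ["a", "b"],
    "psi": ["x", "y"],
    "a": ["x"],
    "b": ["y"],
    "x": [],
    "y": [],
}

def has_directed_path(dag, src, dst):
    """Depth-first search for a directed path from src to dst."""
    stack, seen = [src], set()
    while stack:
        node = stack.pop()
        if node == dst:
            return True
        if node not in seen:
            seen.add(node)
            stack.extend(dag[node])
    return False

# Bob's setting is not in the causal pathway to Alice's outcome:
print(has_directed_path(dag, "b", "x"))  # -> False
print(has_directed_path(dag, "a", "x"))  # -> True
```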

At the end of the experiment, we have *N* quadruples of binary bits (*a*, *b*, *x*, *y*). Here, *a* and *b* are the settings and *x* and *y* are the outcomes in one of the *N* “trials”. We can now count the number *z* of trials in which *x* = *y* and *not both* *a* and *b* = 1, together with trials in which *x* ≠ *y* and *both* *a* and *b* = 1. Those two kinds of trials are both considered trials having the result “success”. The remaining trials have the result “fail”.
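Counting successes is a one-line test per quadruple: this is the standard CHSH-game scoring, which is what makes the 75% and 85% rates below apply. A sketch in Python (the function names are mine):

```python
def is_success(a, b, x, y):
    """CHSH-style scoring: success iff the outcomes are equal, except when
    both settings are 1, in which case success means unequal outcomes.
    Equivalently: (x XOR y) == (a AND b)."""
    return (x ^ y) == (a & b)

def count_successes(quadruples):
    """Count z, the number of successes among N quadruples (a, b, x, y)."""
    return sum(is_success(a, b, x, y) for a, b, x, y in quadruples)

# A few spot checks:
print(is_success(0, 0, 1, 1))  # equal outcomes, not both settings 1 -> True
print(is_success(1, 1, 0, 1))  # unequal outcomes, both settings 1  -> True
print(is_success(1, 0, 0, 1))  # unequal outcomes, not both 1       -> False
```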

Now, let *B*(*p*) denote a random variable distributed according to the binomial distribution with parameters *N* and *p*. Think of the number of successes *z* as the outcome of a random variable *Z*. According to local realism, and taking *p* = 0.75, it can be proved that for all *z* > *Np*, Prob(*Z* ≥ *z*) ≤ Prob(*B*(*p*) ≥ *z*). According to quantum mechanics, and with *q* = 0.85, it appears possible to arrange that for all *z*, Prob(*Z* ≤ *z*) = Prob(*B*(*q*) ≤ *z*). Let’s see what those binomial tail probabilities are with *z* = 0.80 *N*, using the statistical programming language “R”.

```r
N <- 1000
p <- 0.75   # maximal success rate under local realism
z <- 0.8 * N
q <- 0.85   # success rate attainable under quantum mechanics
pbinom(z, N, p, lower.tail = FALSE)   # P( B(p) > z )
## [1] 8.029329e-05
pbinom(z, N, q, lower.tail = TRUE)    # P( B(q) <= z )
## [1] 1.22203e-05
```

We see that an experiment with *N* = 1000 time-slots should be plenty to decide whether the experimental results are governed by local realism with a success rate of at most 75%, or by quantum mechanics with a success rate of 85% (close to the theoretical maximum under quantum mechanics). The winning theory is decided by checking whether the observed success rate falls above or below 80%.

**Challenge**: *Show by a computer simulation that my claims are wrong; i.e., simulate a “loophole-free” Bell experiment with a success rate reliably exceeding 80% when the number of trials is 1000 or more. Rules of the game: you must allow me to supply the “fair coin tosses”. Your computer simulation may use an RNG (called a fixed number of times per trial) to create its own randomness, but it must have “set seed” and “restore seed” facilities in order to make each run exactly reproducible if required. For each n, Alice’s nth output x may depend only on Alice’s nth input a, together with (if desired) all the preceding inputs and outputs. Similarly, Bob’s nth output y may depend only on Bob’s nth input b, together with (if desired) all the preceding inputs and outputs.*
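To see what the challenge is up against, here is a simulation that plays by the rules but only reaches the local-realist bound. It is a sketch in Python (the article’s snippets use R); the strategy chosen — both parties always output 0 — is just one deterministic local strategy that attains the maximal local success probability of 3/4.

```python
import random

def simulate(N, seed):
    """Simulate N trials obeying the locality rules of the challenge:
    Alice's output x is a function of her own input a only, and Bob's
    output y of his own input b only. With the strategy x = y = 0,
    a trial succeeds whenever not both settings are 1, so the expected
    success rate is exactly 3/4 -- the local-realist maximum."""
    rng = random.Random(seed)  # "set seed": each run exactly reproducible
    successes = 0
    for _ in range(N):
        a, b = rng.randint(0, 1), rng.randint(0, 1)  # the fair coin tosses
        x = 0  # Alice: may use only a and past data
        y = 0  # Bob: may use only b and past data
        if (x ^ y) == (a & b):  # CHSH success criterion
            successes += 1
    return successes / N

rate = simulate(10_000, seed=1)
print(round(rate, 2))  # hovers around 0.75, well below the 0.80 threshold
```

Any local strategy, however cleverly it uses the past inputs and outputs, faces the same 75% ceiling on its expected success rate; that is exactly the content of the binomial bound above.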