Since 2015, Bell-type experiments designed to test local realism have followed the format of a so-called “loophole-free Bell test”. There is a fixed sequence of N time-slots, or more precisely, of paired time-slots: time-slots in two distant labs run by two scientists, Alice and Bob. The time-slots are paired such that a signal sent from Alice’s lab at the start of one of her time-slots, travelling at the speed of light, would only reach Bob’s lab after the end of Bob’s corresponding time-slot; and vice versa. Just after the start of each time-slot, each scientist inserts a binary setting into an experimental device. Something goes on inside that apparatus, and before the time-slot is over, a binary outcome is produced. Each such instance, with two inputs and two outputs, is called a trial.
Actually, many experiments require a slightly more elaborate protocol involving a third lab, which you may think of as a source of “pairs of particles”. Charlie’s lab is located somewhere between Alice’s and Bob’s. Charlie’s device outputs the message “ready” or “not ready” before the end of his time-slot (its length is irrelevant). The message, however, can only arrive at Alice’s and Bob’s labs after they have already entered their settings, so it cannot directly influence their choices. Outcomes get delivered anyway. After the experiment, one looks only at the inputs and outputs of the trials in which Charlie saw the output “ready”. The experiment continues long enough that there are N trials labelled by Charlie’s apparatus as “ready”. From now on, I will forget about this “post-selection” of N trials: the first N which got off to a good start. (The word “post-selection” is a misnomer: the selection is performed after the whole experiment is complete, but it is determined in advance of the introduction of the settings.)
The settings are typically chosen to resemble sequences of outcomes of independent fair coin tosses. Sometimes they are generated by physical random number generators using physical noise sources, sometimes they are created using pseudo-random number generators (RNGs). Sometimes they are generated on the fly, sometimes created in advance. The idea is that the settings are inputs which come from the outside world, outside the experimental devices, and the outcomes are outputs delivered by the devices to the outside world.
Below is a graphical model specified in the language of the present-day theory of causality based on directed acyclic graphs (DAGs), describing the dependence structure of what is observed in terms of “hidden variables”. There is no assumption that the hidden parts of the structure are classical, nor that they are located in classical space-time. The node “psi” stands for the state of all experimental apparatus in the three labs including transmission lines between them before one trial of the experiment starts, as far as is directly relevant in the causal process leading from experimental inputs to experimental outputs. The node “phi” consists of the state of external devices which generate the settings. The graphical model says that as far as the settings and the outputs are concerned, “phi” and “psi” can be taken to be independent. It says that Bob’s setting is not in the causal pathway to Alice’s outcome.
At the end of the experiment, we have N quadruples of binary bits (a, b, x, y), where a and b are the settings and x and y are the outcomes of one of the N trials. We now count the number z of trials in which x = y and the settings a and b are not both equal to 1, together with the trials in which x ≠ y and a = b = 1. (Equivalently: a trial counts when x XOR y equals a AND b.) Both kinds of trial are scored as a “success”; all remaining trials are scored as a “fail”.
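For concreteness, the scoring rule just described can be sketched in a few lines of Python (the function name `chsh_success` and the toy data are mine, purely for illustration):

```python
def chsh_success(a, b, x, y):
    """Score one trial: a, b are the binary settings, x, y the binary
    outcomes.  Success means x = y with the settings not both 1, or
    x != y with both settings 1; equivalently, (x XOR y) == (a AND b)."""
    return (x ^ y) == (a & b)

# Three toy quadruples (a, b, x, y); z counts the successes among them.
trials = [(0, 0, 1, 1), (1, 1, 0, 1), (0, 1, 0, 1)]
z = sum(chsh_success(a, b, x, y) for a, b, x, y in trials)  # z == 2
```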
Now, let B(p) denote a random variable with the binomial distribution with parameters N and p, and think of the number of successes z as the outcome of a random variable Z. Under local realism, taking p = 0.75, it can be proved that for all z > Np, Prob( Z ≥ z ) ≤ Prob( B(p) ≥ z ). Under quantum mechanics, with q = 0.85, it appears possible to arrange that for all z, Prob( Z ≤ z ) = Prob( B(q) ≤ z ). Let’s see what those binomial tail probabilities are at z = 0.80 N, using the statistical programming language “R”.
N <- 1000     # number of paired time-slots (trials)
p <- 0.75     # maximal success probability under local realism
z <- 0.8 * N  # threshold: 800 successes
q <- 0.85     # success probability attainable under quantum mechanics
pbinom(z, N, p, lower.tail = FALSE)  # P(B(p) > z): very small
pbinom(z, N, q, lower.tail = TRUE)   # P(B(q) <= z): also very small
We see that an experiment with N = 1000 time-slots should be plenty to decide whether the experimental results are generated by local realism, with a success rate of at most 75%, or by quantum mechanics, with a success rate of 85% (close to the theoretical maximum under quantum mechanics). The winning theory is decided by seeing whether the observed success rate lies above or below 80%.
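The same tail probabilities can be double-checked in any language. Here is a stdlib-only Python sketch (the helper `binom_sf` is my own naming, not a standard library function); it sums the binomial upper tail directly:

```python
import math

def binom_sf(z, n, p):
    """Upper tail P(B(n, p) > z), summed term by term using
    log-factorials (math.lgamma) to avoid overflow at n = 1000."""
    total = 0.0
    for k in range(z + 1, n + 1):
        log_pmf = (math.lgamma(n + 1) - math.lgamma(k + 1)
                   - math.lgamma(n - k + 1)
                   + k * math.log(p) + (n - k) * math.log(1 - p))
        total += math.exp(log_pmf)
    return total

N = 1000
z = 800
upper = binom_sf(z, N, 0.75)        # local realism: P(Z > 0.8 N)
lower = 1.0 - binom_sf(z, N, 0.85)  # quantum: P(Z <= 0.8 N)
```

Both tails come out tiny (well below one in a thousand), which is why the 80% threshold cleanly separates the two theories at N = 1000.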
Challenge: show by a computer simulation that my claims are wrong. I.e., simulate a “loophole-free” Bell experiment with a success rate reliably exceeding 80% when the number of trials is 1000 or more. Rules of the game: you must allow me to supply the “fair coin tosses”. Your computer simulation may use an RNG (called a fixed number of times per trial) to create its own randomness, but it must have “set seed” and “restore seed” facilities in order to make each run exactly reproducible if required. For each n, Alice’s nth output x may depend only on Alice’s nth input a, together with (if desired) all the preceding inputs and outputs. Similarly, Bob’s nth output y may depend only on Bob’s nth input b, together with (if desired) all the preceding inputs and outputs.
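To show what a rule-compliant simulation looks like, here is a minimal Python sketch (all names are mine) of one local hidden-variable strategy: a shared random bit, drawn afresh each trial from a seeded RNG (which the rules permit), is simply copied by both sides. Its success rate settles near the 75% local-realist ceiling, which is why the challenge demands reliably more than 80%:

```python
import random

def run_bell_game(settings_a, settings_b, seed=0):
    """Simulate the challenge: each side's output may depend only on
    its own current input plus the history of earlier inputs and
    outcomes.  Here both sides copy a shared hidden bit lam, drawn
    per trial from a seeded RNG, so every run is exactly reproducible."""
    rng = random.Random(seed)
    successes = 0
    for a, b in zip(settings_a, settings_b):
        lam = rng.randint(0, 1)  # hidden variable carried to both wings
        x = lam                  # Alice's output (ignores a: allowed)
        y = lam                  # Bob's output (ignores b: allowed)
        successes += (x ^ y) == (a & b)  # CHSH-game scoring rule
    return successes / len(settings_a)

# Fair-coin settings, as the referee of the challenge would supply them:
coin = random.Random(42)
N = 1000
a_list = [coin.randint(0, 1) for _ in range(N)]
b_list = [coin.randint(0, 1) for _ in range(N)]
rate = run_bell_game(a_list, b_list)  # close to 0.75, well below 0.80
```

Since x always equals y, this strategy wins exactly when the settings are not both 1, i.e. on about three quarters of the trials; no strategy obeying the rules is known to do reliably better than 75% on average.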
Here is a different version of the challenge using the classical Bell-CHSH inequality instead of the more modern martingale inequality. Another version could be specified using the original Bell inequality, for which one would also demand that at equal settings, outcomes are always equal and opposite. After all, the original Bell inequality also assumes perfect anti-correlation, so one must check that that assumption holds.
The whole point of a computer simulation is that an independent judge is unnecessary: your code is written in a widely and freely available language suitable for scientific computing, and anyone with basic computing skills can check that the programming team is not cheating (whether deliberately or inadvertently). The independent judge is the entire scientific community. If you are successful, the simulation will actually be an example of a classical physical system producing what has been thought to be a unique signature of quantum entanglement. You, the lead scientist, will get the Nobel Prize because you and your team (I imagine that you are a theoretician who might need the assistance of a programmer) will have disproved quantum theory by a reproducible and rigorous experiment. No establishment conspiracy will be able to suppress the incredible and earth-shaking news.
Here are my stipulations on the program. I am assuming that it uses a built-in pseudo-random number generator. I assume that it includes “set.seed” and “save.seed” facilities. Otherwise, it is not useful for scientific work and not eligible for my challenge.
From now on, the phrases “photon pair”, “time slot”, and “trial” are taken to be interchangeable. After all, we are talking about a computer simulation, so the actual evocative natural language words which we use as names for variables and functions are irrelevant.
The program must accept as input a number of trials N, a seed for the RNG, and two lists of setting labels “1” and “2” of length N. It must generate as output two lists of outcomes ±1, also of length N. For each n, Alice’s nth output depends only on Alice’s nth input, as well (if you like) as on the inputs and outputs on both sides in earlier trials; and similarly for Bob. I will check this constraint by doing many random spot checks. This is where the rule concerning the RNG comes in.
Let’s take N = 10,000. You will win if the CHSH quantity S exceeds 2.4 in a few repeats with different RNG seeds and varying the lists of inputs. In other words, the violation of the Bell-CHSH inequality is reproducible, and reproducible by independent verifiers. I will supply the lists of inputs after you have published your code. The inputs will be the result of a simulation of independent fair coin tosses using standard scientific computing tools. If you don’t trust me, we can ask a trusted third party to make them for us.
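For reference, the CHSH quantity S can be estimated from the four recorded lists as follows (a Python sketch under my own naming; the trivial local model shown reaches S = 2, the local-realist bound, well short of the 2.4 target):

```python
import random

def chsh_S(a_list, b_list, x_list, y_list):
    """Estimate S = E11 + E12 + E21 - E22, where Eij is the average
    of x*y over the trials with setting pair (i, j)."""
    def corr(i, j):
        prods = [x * y
                 for a, b, x, y in zip(a_list, b_list, x_list, y_list)
                 if a == i and b == j]
        return sum(prods) / len(prods)
    return corr(1, 1) + corr(1, 2) + corr(2, 1) - corr(2, 2)

# Fair-coin settings labelled 1 and 2, and a trivial local model in
# which both sides always output +1: every correlation is then 1,
# so S = 1 + 1 + 1 - 1 = 2, the local-realist bound.
rng = random.Random(7)
N = 10_000
a_list = [rng.choice([1, 2]) for _ in range(N)]
b_list = [rng.choice([1, 2]) for _ in range(N)]
x_list = [1] * N
y_list = [1] * N
S = chsh_S(a_list, b_list, x_list, y_list)  # S == 2.0
```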
32 thoughts on “The Bell game challenge”
Thank you Richard for posting this again. I have been waiting for my new blog to be up in order to take you up on your 5,000 euro bet. My guy is late with it. So I reply here for now to say that I am serious about betting you that a LHV description can violate Bell’s Inequalities. Your bet insists that one must use a locally real model to simulate the violation to win, but I have some issues with that which we can discuss.
How about a nice, simple, intuitive, and quantitative treatment that disproves Bell’s Theorem and exonerates EPR?
Bryan, I wrote: “Alice’s nth output x may depend only on Alice’s nth input a, together with (if desired) all [ie, both Alice’s and Bob’s] preceding inputs and outputs. Similarly, Bob’s nth output y may depend only on Bob’s input b, together with (if desired) all preceding inputs and outputs”.
The inputs are binary. I will supply them to you. Your outputs are binary. I require that you supply “set.seed” facilities so that each simulation run is exactly reproducible. That will enable me to check compliance with my rules by actually doing the EPR thought experiment.
I don’t see anything you could possibly mis-interpret or object to here. But feel free to ask questions. If you really can disprove Bell’s theorem and exonerate EPR by a simple, intuitive, and quantitative treatment, then you just have to get your programmer to implement your model in a simulation compliant with my rules.
Richard, I suggest that your simulation challenge can never be satisfied because it is ill conceived in the same way that Bell got sidetracked thinking of washing Bertlmann’s socks. How can you simulate incompatible elements of physical reality on a single classical particle, when QM cannot even do it?
I didn’t ask you to simulate incompatible elements of physical reality. I challenge you to disprove Bell’s theorem by faithfully simulating a loophole-free Bell experiment. How I will check that your simulation program satisfies the experimental constraints is my own business. Alice’s input mustn’t influence Bob’s output and vice versa. I want to be able to check that without studying your programmer’s computer code. Hence I require exact reproducibility of any simulation run.
Bell wrote down an experimental protocol in the “socks” paper. Experimenters implemented it. Four experiments, in 2015.
I basically agree with you when you say no-one can meet the challenge. You stated:
” I challenge you to disprove Bell’s theorem by faithfully simulating a loophole-free Bell experiment.”
and I reply
“I will bet you 5,000 euros that I can disprove Bell’s theorem to the satisfaction of the majority.” However, to be clear, Bell’s Theorem is not my main interest. I would rather expand the bet to include explaining “Quantum Weirdness”.
So how about the following bet:
“Chemist bets mathematician 5,000 euros that physics is wrong: in particular, the chemist can explain quantum weirdness without spookiness to the satisfaction of the majority.”
I like that, it covers everything.
I don’t think physics is that wrong, just a bit incomplete.
PS I would like to announce to friends that we are negotiating the terms of a bet, and get some practical suggestions concerning precise terms and practical details. Other people can join in the discussion here on my blog, anyway. I get to check each post before making it public.
Bryan, the winner of that bet can only be decided with precise definitions of the terms. The majority of whom? When? What happens when the first of us passes away?
Good Richard, thanks. I am happy to have you or others define the terms. And when defined, we will set a date.
True, time marches on. So I suggest that if we bet, we have the money held in trust, and if one of us kicks the bucket, the bet is off and the money is returned. Fine with me.
The only condition I will suggest right now is NO WOKISM
You’d better tell me: what is “wokism”?
Money in trust: I don’t have 5000 Euro to put aside at this moment. I would have to convert some hard assets to cash if I had to pay up today.
But I’m happy with just saying the bet is cancelled on the demise of either of us.
Trust is the most important value; if I lose, I will pay you.
I agree. Were I to lose, I would pay you. Now the wording of the bet needs some wider discussion, and then a final version must be ratified by us.
Good, we are moving ahead. You asked about Wokism: for me, it is cultural censorship. We agree on “freedom of speech”, which is all I meant. https://www.reddit.com/r/canada/comments/pz85qn/too_woke_canadian_academic_star_leaves_top/
Richard, you suggested other people give ideas for the terms of the bet. Good, happy to hear. My only condition is that the terms fall under the heading of “Resolving the EPR paradox to the satisfaction of the majority.” By paradox, I refer to the violation of Bell’s Inequalities by a quantum system. I will explain “Quantum Weirdness” in the process and everyone will be happy, unless of course you happen to be in quantum info theory.
So what are the things that should be included in the bet??
What has to be decided is how the winner of the bet is determined. How do we decide “whether or not the majority is satisfied”? By a vote? Who is eligible to vote? Or do we say that you have won if and when I declare that I think you have explained quantum weirdness?
Proof could be the publication in a prestigious journal, with peer review. If it makes it that far, scrutiny will follow. In a paper which claims to knock the socks off Bertlmann, errors, lack of clarity, and ambiguities will not go unnoticed. So this defines a way to decide the winner.
On the other hand, I think publication will lead to immediate and likely universal response, one way or another.
Keeping the bet simple and not technical, e.g.
“For a wager of 5,000 euros, the Chemist bets the Mathematician that he can resolve the EPR paradox by explaining “Quantum Weirdness”. In particular, the violation of Bell’s Inequalities will be resolved without non-locality. Proof of victory will be publication in a peer-reviewed journal, and survival of serious scrutiny. The winner should be known within six months. However, short extensions will be permitted, but only due to publication delays by the publisher, and only after acceptance of the paper.”
I’d go for something like that. I am trying to say that any delays after six months are not my fault.
Publication in a prestigious (hence, in particular, peer-reviewed) journal is a necessary condition, but not sufficient. Many professed resolutions of Bell’s theorem make it that far – one or two per year. Sometimes they even create brief media buzz. Usually they are forgotten in a year. It’s therefore indeed important that the paper attracts a lot of attention and survives that attention. Six months could be enough, though sometimes it takes a bit longer; cf. the case of Hess and Philipp (2001, PNAS; 2002, Europhysics Journal). That madness was only terminated by publications by yours truly et al. in 2003. It was clear to experts that there must be an error, but it was hard to pinpoint in the huge technical appendices of the papers. They failed peer review but were published anyway (Hess, as a member of the US National Academy, had the right to have a peer-review rejection overruled). Another example is Tim Palmer, published in the Proceedings of the Royal Society last year. I suggest that on (pre)publication we organise a small conference and use the email lists of several serious quantum discussion and seminar clubs to recruit participants.
To begin with, we announce the bet on these groups. Agreed? Are you a member of both?
That sounds like a plan. However, there will be little debate because my philosophy is KISS and I’m guided by Occam.
What is KISS? I only know the heavy metal rock group.
KISS: Keep It Simple, Stupid.
It’s not totally clear to me. Would a generally accepted non-spooky explanation win the bet if it still had an element of non-locality, assuming the origin of the non-locality were explained?
If a local explanation is being bet on, then can’t we bypass most of the process and just show the fallacy of the rather simple Bell inequality derivation?
Also, I’m slightly puzzled by the emphasis on the EPR experiments. The predictions of quantum theory violate the inequalities. So is the bet implicitly a bet on quantum theory being correct?
Mark, do you really believe that existing experimental results in Bell-type experiments are in contradiction with the predictions of quantum theory? BTW, both Bryan Sanctuary and I think that quantum theory is correct.
If you think that Bell’s reasoning is incorrect then maybe you are prepared to prove that by writing the program of a computer simulation which wins my Bell game challenge, https://gill1109.com/2021/12/22/the-bell-game-challenge/
Why not use the CHSH inequality, which anybody acquainted with the Bell inequality immediately understands (or misunderstands)?
You said “Richard, I suggest that your simulation suggestion can never be satisfied because the challenge is ill conceived in the same way that Bell got sidetracked thinking of washing Bertlmann’s socks. How can you simulate incompatible elements of physical reality on a single classical particle, when QM cannot even do it?”
Although Richard probably disagrees, I completely endorse your objection. However, the Bell inequality does not require incompatible “elements of physical reality”.
I suppose what you mean is that Richard can win his bet about his statistical challenge, but his virtual experiment has nothing to do with the Bell inequality when it is derived through incompatible experiments.
However, there is a very simple and down-to-earth meaning for the Bell theorem: No hidden variable theory satisfying statistical independence can violate the Bell inequality.
Notice that put in that way, there is no need for extra physical or metaphysical hypotheses such as locality, elements of reality, etc.
That puts you at a disadvantage because Richard’s virtual experiment does represent faithfully what Bell derived. Bell never mentioned “elements of physical reality” or “incompatible experiments”. He just used determinism and it does not matter whether he derived it or assumed it. The metaphysical aggregates are not Bell’s.
It may help you to understand why your bet is doomed by reading section 4 of https://link.springer.com/article/10.1007/s10701-021-00488-z
Justo, it’s about time everyone understands the Bell game, since it is much better in many respects than the CHSH inequality. It gives protection against trends and jumps and statistical dependence in the physics of source and detector. It is easy to understand. You can teach it to school children. It corresponds to a *strengthening* of Bell’s theorem since it corresponds to an inequality for the probability of a deviation of some size from the CHSH bound. Time you learnt it.
Thank you Richard for your response. However, I am below the mental capacity of school children. Is there any paper explaining in detail the derivation and hypotheses of your game, and how and why QM beats it but local realism does not?
It seems that Bryan thinks that your game is flawed. Or is he below school children’s mental capacity too?
I read your preprint “Comment on ‘Exclusion of time in the theorem of Bell’ by K. Hess and W. Philipp”. Your Bell inequality there seems to be different from your “Bell game”.
Yes, there are numerous papers explaining the derivation, both by me and others.
PS Yes, I think that Bryan has insufficient mental capacity *of the necessary kind* to understand why my game is not flawed. Yes, the Bell inequality of the Bell game is different from the usual one. Different assumptions and a different conclusion. I think that my Bell game inequality is better. That’s a matter of taste.
Yes, your challenge is flawed once again. You wrote: “Alice’s nth output x may depend only on Alice’s nth input a, together with (if desired) all [ie, both Alice’s and Bob’s] preceding inputs and outputs. Similarly, Bob’s nth output y may depend only on Bob’s input b, together with (if desired) all preceding inputs and outputs.” The outputs also depend on whatever the hidden variable is, and the hidden variable could override the action of input a or b. IOW, another RIGGED challenge.
Fred, my challenge is rigged, if you want to call it that way, in *your* favour. You “build” the detectors and measurement devices. You can use all that information if you like, or none of it.
Maybe I am missing something, but it seems trivial, even guaranteed, that in real life you will observe excess correlation over the iid case.
Eg, we can drop the assumption of identically distributed events. Perhaps there is a daily cycle that influences the equipment somehow, or it has periods of “warming up”. The iid case is actually the one with maximum variance.
We can instead, or also, drop the assumption of independent events. Eg, via the so-called memory loophole (the equipment at both sites is influenced by their correlated histories), or both sites experiencing a common input (perhaps a large storm).
Is this type of thing disallowed by your bet?
The rules of my bet don’t forbid excess correlations over the iid case. Except in the setting choices. The settings must be completely independent. It must be allowed that the settings come from “outside the experiment”. Outside the control or influence of the experimenters.
I see. But those are exactly the types of correlation that have been found in the data:
“As expressed in Eq. (14), the two marginal probabilities associated with different measurement settings for Alice should be equal. They nevertheless differ noticeably when the offset is negative, that is, when the event-ready sample is larger.
It should be noted that, unlike the violation of local realism, this possible violation of the no-signaling principle is apparently stronger when the sample size is increased, that is, when more coincidences from the unwanted reflections of the laser excitation pulses are included. It would seem to indicate that, if this effect is real, it is not a feature of entanglement between the NV centres, but rather of the excitation pulses.”
The experiments of 2015 had various defects. In particular, the quality of their random setting generation has been criticised. I am reliably informed that the next generation of Bell-type experiments is in development and will not suffer from the well known and well understood defects of the first generation. Of course, in the meantime one should keep an open mind. In science nothing is ever definitive. But at any moment, there might be some working hypotheses which seem reliable to many people, but hopefully never to all! Those who see loopholes must keep on putting their fingers on sore points.
They had better have good arguments and good data.
You refer to https://arxiv.org/abs/1606.00784, a paper from 2016 by Adenier and Khrennikov. A lot of water passed under the bridge since then.