Bell’s theorem as a no-go result in classical distributed Monte-Carlo simulation

Abstract and slides of a talk to be given at the IMS conference in London, 27–30 June 2022, https://www.imsannualmeeting-london2022.com/

It has long been realized that the mathematical core of Bell’s theorem is essentially a classical probabilistic proof that a certain distributed computing task is impossible: namely, the Monte Carlo simulation of certain iconic quantum correlations. I will present a new and simple proof of the theorem using Fourier methods (time series analysis) which should appeal to probabilists and statisticians. I call it Gull’s theorem since it was sketched in a conference talk many years ago by astrophysicist Steve Gull, but never published. Indeed, there was a gap in the proof.

The connection with the topic of this session [IS18 – Quantum Computing and Statistics – organiser Yazhen Wang, University of Wisconsin-Madison] is the following: though a useful quantum computer is perhaps still a dream, many believe that a useful quantum internet is very close indeed. The first application will be: creating shared secret random cryptographic keys which, due to the laws of physics, cannot possibly be known to any other agent. So-called loophole-free Bell experiments have already been used for this purpose. 

Like other proofs of Bell’s theorem, the proof concerns a thought experiment, which could also in principle be carried out in the lab. This connects to the concept of functional Bell inequalities, whose application in the quantum research lab has not yet been explored. That, again, is a task for classical statisticians.
R.D. Gill (2022) Gull’s theorem revisited, Entropy 2022, 24(5), 679 (11pp.)
https://www.mdpi.com/1099-4300/24/5/679
https://arxiv.org/abs/2012.00719

Sanctuary’s Spin

I have recently been involved in acrimonious discussions in a Google group, https://groups.google.com/g/bell_quantum_foundations, devoted to Bell’s theorem and the interpretation of quantum mechanics. One of the group members, Bryan Sanctuary, insists that two particles leaving a source cannot remain entangled. He claims, in preprints and in a sequence of recent blog posts, that the EPR-B correlations cannot remain valid once the particles are separated. Here is a link to one of the posts: http://blog.mchmultimedia.com/2022/05/09/hyper-helicity-and-the-foundations-of-qm/. At the same time, he claims that the EPR-B correlations do have a local and realistic interpretation once one introduces a hidden quantum property which he calls hyper-helicity. He believes that in this way all notions of non-local collapse and weirdness can be banished from quantum mechanics. Another participant, Alexey Nikulov, also thinks that conventional quantum mechanics is wrong, and disbelieves in non-locality and collapse.

A small problem consists in figuring out what Bell’s theorem actually is. A long essay by Sheldon Goldstein (2011) on “Scholarpedia”, http://www.scholarpedia.org/article/Bell%27s_theorem, makes it clear that Bell twice quite radically changed his thinking, so one could in fact say that there are three Bell theorems. The essential mathematics changed between the first and the second, and the physical motivation behind his characterization of a “local hidden variables model” evolved with both changes. The conclusion is always the same, that a local hidden variables model cannot reproduce certain quantum mechanical predictions, but the assumptions and their interpretation matured.

Spacetime diagram for EPR–Bell type experiment (Goldstein & Tausk, Scholarpedia)

I have started writing out some notes (maybe they will become a whole paper) to make it clear that “collapse of the wave-function” is an optional interpretational extra, not needed in order to apply quantum mechanics in practice. No interpretation of what is going on behind the scenes is necessary if one’s purpose is to describe nature. An interpretation might be useful if it helps one to understand nature. One must accept the possibility that understanding nature is something which will forever exceed our poor faculties. That is not to say that the drive to gain understanding isn’t the true drive behind doing science. Like democracy and justice, understanding is an ideal which we have to continually renew and re-affirm.

That does mean that in introductory expositions, collapse should be clearly labelled as a lie for children.

The usual colourful language involving the non-local collapse of the wave function can be thought of as just describing a useful computational tool, not physical changes to something existing in physical reality. The only things assumed to exist are measurement outcomes, and the theory allows us to compute their probability distributions, also in complex, composite, sequential experimental set-ups. One can compute what one needs to know by pretending that the wave function collapse suggested by the von Neumann–Lüders extension of the Born law is somehow real. One gets the right answer, as directly as possible. There is, however, no need to think of wave function collapse as something physical (and necessarily non-local). Such thinking is an optional extra. Some people find it distasteful. Tastes differ. I think it can usefully be thought of as one of those lies for children which need to be seen in a different light as one gains maturity and knowledge.

We are all children! Learning never ends!

The second purpose is to write out the mathematical content of Sanctuary’s claims concerning hyper-helicity. Sanctuary feels that his interpretation of his mathematical formalism will revolutionize our understanding of quantum entanglement. Clearly, he has a long way to go, but I hope my own struggle to understand what he is doing will be helpful to others, if not to him.

The Big Bell Bet

Poet and McGill University emeritus professor of chemistry Bryan Sanctuary (Google scholar: https://scholar.google.com/citations?user=iqR_MusAAAAJ&hl=en&oi=ao; personal blog: https://mchmultimedia.com/sanctuaryblog/) is betting me 5000 Euro that he can resolve the EPR-Bell paradox to the satisfaction of the majority of our peers. Moreover, he will do it by a publication (or at least, a pre-publication) within the year. That’s this calendar year, 2022. Naturally, he expects public (scientific) acclaim to follow “in no time”. I don’t expect that. We will settle the bet by consultation with our peers, and this consultation will be concluded by the end of the following year. So that’s by the end of the succeeding calendar year, 2023.

John S. Bell inspects the Christmas present which his friends the Bertlmanns have just given him

I, therefore, expect his gracious admission of defeat and a nice check for 5000 Euro, two years from now.

He expects the opposite. (Poor Bryan! It’s like taking candy from a baby…)

(He presumably thinks the same.)

The small print

Small print item 1: Who are our peers? Like a jury, they will be determined by having our mutual approval. To begin with, we will invite the members of a couple of Google groups/internet seminars in which one or both of us already participate. Here are links to two of them: Jarek Duda’s (Krakow) “QM foundations & nature of time seminar”: https://groups.google.com/g/nature-of-time/about and http://th.if.uj.edu.pl/~dudaj/QMFNoT; and Alexandre de Castro’s Google group “Bell inequalities and quantum foundations”: https://groups.google.com/g/Bell_quantum_foundations.

Small print item 2: What does Bryan think he’s going to achieve? Restoration of locality and realism, and banning of weirdness and spookiness from quantum mechanics.

Small print item 3: What do I think about his putative theory? Personally (though it is not up to me to decide), I would accept that he has won if his theory (which he has not yet revealed to the world) would allow me to win my Bell game challenge https://gill1109.com/2021/12/22/the-bell-game-challenge/ “against myself”: i.e., it would allow me to write computer programs to simulate a successful loophole-free Bell experiment, thus satisfying the usual spatiotemporal constraints on inputs and outputs while preventing conspiracy, and reliably violating a suitable Bell inequality by an amount that is both statistically and physically significant. This means that, in my opinion, he should only win if he can convince the majority of our peers that those constraints are somehow unphysical. I would point out that if experimenters voluntarily impose those constraints (to the best of their ability) in real experiments, then there cannot be a metaphysical reason to forbid them. However, the bet will be settled by a democratic vote of our peers! Clearly, this does constitute a loophole for me: a majority of our peers might still fall for superdeterminism or some other craziness.

I suspect that Bryan believes he can now resurrect his previous attempt, https://arxiv.org/abs/0908.3219. I think it is very brave of him, but doomed to failure, because I don’t think he will come up with a theory that will catch on. (I believe that such a theory is not even possible, but that’s my personal belief.)

To reiterate: our peers will determine who has won our bet. Bryan is betting that a year from now he will have revolutionised quantum mechanics, restoring locality and realism, and that the paper which then appears will rapidly force Zeilinger, Gisin, me, and a host of others to retract our papers on quantum teleportation, quantum non-locality, and all that. I am betting that the world will not be impressed. Our peers will vote on whether or not they believe that Bryan has achieved his goal.

The Bell game challenge

Since 2015, Bell-type experiments designed to test local realism have a standard format: that of a so-called “loophole-free Bell test”. There is a fixed sequence of N time-slots, or more precisely, of paired time-slots: time-slots in two distant labs run by two scientists, Alice and Bob. The time-slots are paired in such a way that a signal sent at the start of one of Alice’s time-slots from Alice’s lab to Bob’s, travelling at the speed of light, would only reach Bob’s lab after the end of Bob’s corresponding time-slot; and vice versa. Just after the start of each time-slot, each scientist inserts a binary setting into an experimental device. Something goes on inside that apparatus, and before the time-slot is over, a binary outcome is produced. Each instance with two inputs and two outputs is called a trial.

From Bell’s “Bertlmann’s socks” paper. Inputs are shown below and outputs above the long horizontal box which encloses Alice and Bob’s devices and what is in between

Actually, many experiments require a slightly more elaborate protocol involving a third lab, Charlie’s, which you may think of as the source of “pairs of particles”. Charlie’s lab is located somewhere between Alice’s and Bob’s. Charlie’s device outputs the message “ready” or “not ready” before the end of his time-slot (its length is irrelevant). The message, however, could only arrive at Alice’s and Bob’s labs after they have already inserted their settings, so it could not directly influence their choices. Outcomes get delivered anyway. After the experiment, one looks only at the inputs and outputs of those trials in which Charlie saw the output “ready”. The experiment continues long enough that there are N trials labelled “ready” by Charlie’s apparatus. From now on, I will forget about this “post-selection” of N trials: the first N which went off to a good start. (The word “post-selection” is a misnomer: the selection is performed after the whole experiment is complete, but it is determined in advance of the introduction of the settings.)

Space-time disposition of the time-slots of one trial. The sloping arrows are the boundaries of future light-cones with vertices at the start of Alice, Bob, and Charlie’s time-slots.

The settings are typically chosen to resemble sequences of outcomes of independent fair coin tosses. Sometimes they are generated by physical random number generators using physical noise sources, sometimes they are created using pseudo random number generators (RNGs). Sometimes they are generated on the fly, sometimes created in advance. The idea is that the settings are inputs which come from the outside world, outside the experimental devices, and the outcomes are outputs delivered by the devices to the outside world.

Below is a graphical model specified in the language of the present-day theory of causality based on directed acyclic graphs (DAGs), describing the dependence structure of what is observed in terms of “hidden variables”. There is no assumption that the hidden parts of the structure are classical, nor that they are located in classical space-time. The node “psi” stands for the state of all experimental apparatus in the three labs including transmission lines between them before one trial of the experiment starts, as far as is directly relevant in the causal process leading from experimental inputs to experimental outputs. The node “phi” consists of the state of external devices which generate the settings. The graphical model says that as far as the settings and the outputs are concerned, “phi” and “psi” can be taken to be independent. It says that Bob’s setting is not in the causal pathway to Alice’s outcome.
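To make this concrete, here is a minimal R sketch of one trial of a data-generating process compatible with the graphical model; the specific outcome rule is an arbitrary illustration of my own, not a model of any real experiment. “phi” produces the settings, “psi” produces a shared hidden variable lambda, and each outcome depends only on the local setting and lambda.

a <- rbinom(1, 1, 0.5)   # Alice's setting, generated by "phi" (a fair coin)
b <- rbinom(1, 1, 0.5)   # Bob's setting, independent of everything else
lambda <- runif(1)       # "psi": hidden state of source and apparatus,
                         # generated independently of the settings (no-conspiracy)
x <- ifelse(lambda < 0.5 + 0.25 * (2 * a - 1), 1, 0)   # depends only on (a, lambda)
y <- ifelse(lambda < 0.5 + 0.25 * (2 * b - 1), 1, 0)   # depends only on (b, lambda)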

At the end of the experiment, we have N quadruples of binary bits (a, b, x, y). Here, a and b are the settings and x and y are the outcomes in one of the N “trials”. We now count the number z of trials in which x = y and not both a and b = 1, together with the trials in which x ≠ y and both a and b = 1. Those two kinds of trials are both considered trials having the result “success”. The remaining trials have the result “fail”.
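With the quadruples stored as 0/1 vectors a, b, x, y of length N (my coding assumption), the count z can be computed in R in one line:

# success: equal outcomes, unless both settings are 1, in which case unequal outcomes
success <- ifelse(a == 1 & b == 1, x != y, x == y)
z <- sum(success)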

Now, let B(p) denote a random variable distributed according to the binomial distribution with parameters N and p. Think of the number of successes z as the outcome of a random variable Z. According to local realism, and taking p = 0.75, it can be proved that for all z > Np, Prob( Z ≥ z ) ≤ Prob( B(p) ≥ z ). According to quantum mechanics, and with q = 0.85, it appears possible to arrange that for all z, Prob( Z ≤ z ) = Prob( B(q) ≤ z ). Let’s see what those binomial tail probabilities are with z = 0.80 N, using the statistical programming language “R”.

N <- 1000
p <- 0.75  # maximal success probability per trial under local realism
z <- 0.8 * N
q <- 0.85  # success probability per trial attainable under quantum mechanics
pbinom(z, N, p, lower.tail = FALSE)  # P(B(p) > z): local realism reaching 80%
[1] 8.029329e-05
pbinom(z, N, q, lower.tail = TRUE)   # P(B(q) <= z): quantum mechanics falling below 80%
[1] 1.22203e-05

We see that an experiment with N = 1000 time-slots should be plenty to decide whether the experimental results are the product of local realism, with a success rate of at most 75%, or of quantum mechanics, with a success rate of 85% (close to the theoretical maximum under quantum mechanics). The winning theory is decided by seeing whether the observed success rate is above or below 80%.

Challenge: show by a computer simulation that my claims are wrong. I.e., simulate a “loophole-free” Bell experiment with a success rate reliably exceeding 80% when the number of trials is 1000 or more. Rules of the game: you must allow me to supply the “fair coin tosses”. Your computer simulation may use an RNG (called a fixed number of times per trial) to create its own randomness, but it must have “set seed” and “restore seed” facilities in order to make each run exactly reproducible if required. For each n, Alice’s nth output x may depend only on Alice’s nth input a, together with (if desired) all the preceding inputs and outputs. Similarly, Bob’s nth output y may depend only on Bob’s nth input b, together with (if desired) all the preceding inputs and outputs.
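To illustrate the rules, here is a minimal R sketch of the protocol; the functions alice_output and bob_output are hypothetical placeholders for the challenger’s code, and hist_n is my own way of packaging the constraint that only the preceding inputs and outputs may be used.

# All inputs and outputs of trials 1, ..., n-1; nothing else may be used at trial n
hist_n <- function(n) list(a = a[seq_len(n - 1)], b = b[seq_len(n - 1)],
                           x = x[seq_len(n - 1)], y = y[seq_len(n - 1)])
set.seed(42)                   # fixed seed: the whole run is exactly reproducible
seeds <- vector("list", N)
x <- y <- numeric(N)
for (n in 1:N) {
  seeds[[n]] <- .Random.seed   # save the RNG state at the start of trial n
  x[n] <- alice_output(a[n], hist_n(n))
  y[n] <- bob_output(b[n], hist_n(n))
}
# Spot check of a random trial k: restore the saved RNG state and re-run it;
# Alice's output must reappear unchanged, since it may not depend on Bob's input
k <- sample(N, 1)
.Random.seed <- seeds[[k]]
stopifnot(alice_output(a[k], hist_n(k)) == x[k])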

Here is a different version of the challenge using the classical Bell-CHSH inequality instead of the more modern martingale inequality. Another version could be specified using the original Bell inequality, for which one would also demand that at equal settings, outcomes are always equal and opposite. After all, the original Bell inequality also assumes perfect anti-correlation, so one must check that that assumption holds.

The whole point of a computer simulation is that an independent judge is unnecessary: your code is written in a widely and freely available language suitable for scientific computing, and anyone with basic computing skills can check that the programming team is not cheating (whether deliberately or inadvertently). The independent judge is the entire scientific community. If you are successful, the simulation will actually be an example of a classical physical system producing what has been thought to be a unique signature of quantum entanglement. You, the lead scientist, will get the Nobel Prize because you and your team (I imagine that you are a theoretician who might need the assistance of a programmer) will have disproved quantum theory by a reproducible and rigorous experiment. No establishment conspiracy will be able to suppress the incredible and earth-shaking news.

Here are my stipulations on the program. I am assuming that it uses a built-in pseudo-random number generator. I assume that it includes “set.seed” and “save.seed” facilities. Otherwise, it is not useful for scientific work and not eligible for my challenge. 

From now on, the phrases “photon pair”, “time slot”, and “trial” are taken to be interchangeable. After all, we are talking about a computer simulation, so the actual evocative natural language words which we use as names for variables and functions are irrelevant.

The program must accept as input a number of trials N, a seed setting the RNG, and two lists of setting labels “1” and “2” of length N. It must generate as output two lists of outcomes ±1, also of length N. For all n, Alice’s nth output depends only on Alice’s nth input, as well as (if you like) on the inputs and outputs on both sides in earlier trials. And similarly for Bob. I will check this constraint by doing many random spot checks. This is where the rule concerning the RNG comes in.

Let’s take N = 10,000. You will win if the CHSH quantity S exceeds 2.4 in a few repeats with different RNG seeds and varying the lists of inputs. In other words, the violation of the Bell-CHSH inequality is reproducible, and reproducible by independent verifiers. I will supply the lists of inputs after you have published your code. The inputs will be the result of a simulation of independent fair coin tosses using standard scientific computing tools. If you don’t trust me, we can ask a trusted third party to make them for us.
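For definiteness, here is how the CHSH quantity S could be computed in R from the four lists, with settings coded 1 and 2 and outcomes coded ±1 (the variable names are mine):

# Empirical correlation between the outcomes, over trials with settings (aa, bb)
E <- function(aa, bb) mean(x[a == aa & b == bb] * y[a == aa & b == bb])
S <- E(1, 1) + E(1, 2) + E(2, 1) - E(2, 2)
S   # local realism: |S| <= 2 up to statistical error; the challenge asks for S > 2.4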

Steve Gull’s challenge: An impossible Monte Carlo simulation project in distributed computing

At the 8th MaxEnt conference in 1988, held in Cambridge, UK, Ed Jaynes was the star of the show. His opening lecture had the following abstract: “We show how the character of a scientific theory depends on one’s attitude toward probability. Many circumstances seem mysterious or paradoxical to one who thinks that probabilities are real physical properties existing in Nature. But when we adopt the “Bayesian Inference” viewpoint of Harold Jeffreys, paradoxes often become simple platitudes and we have a more powerful tool for useful calculations. This is illustrated by three examples from widely different fields: diffusion in kinetic theory, the Einstein–Podolsky–Rosen (EPR) paradox in quantum theory [he refers here to Bell’s theorem and Bell’s inequalities], and the second law of thermodynamics in biology.”

Unfortunately, Jaynes was completely wrong in believing that John Bell had merely muddled up his conditional probabilities in proving the famous Bell inequalities and deriving the famous Bell theorem. At the conference, astrophysicist Steve Gull presented a three-line proof of Bell’s theorem using some well-known facts from Fourier analysis. The proof sketch can be found in a scan of four smudged overhead sheets on Gull’s personal webpages at Cambridge University.

Together with Dilara Karakozak, I believe I have managed to decode Gull’s proof, https://arxiv.org/abs/2012.00719, though this did require quite some inventiveness. I have given a talk presenting our solution and pointing out further open problems. I have the feeling that progress could be made on interesting generalisations using newer probability inequalities for functions of Rademacher variables.

Here are slides of the talk: https://www.math.leidenuniv.nl/~gill/gull-talk.pdf

Not being satisfied, I made a new version of the talk using different tools: notes written with an Apple Pencil on the iPad, which I then discuss while recording my voice and the screen (so: either composing the notes live, or editing them live): https://www.youtube.com/watch?v=W6uuaM46RwU&list=PL2R0B8TVR1dIy0CnW6X-Nw89RGdejBwAY

Time, Reality and Bell’s Theorem

Featured image: John Bell with a Schneekugel (snow globe) made by Renate Bertlmann; in the Bells’ flat in Geneva, 1989. © Renate Bertlmann.

Lorentz Center workshop proposal, Leiden, 6–10 September 2021

As quantum computing and quantum information technology move from a wild dream into engineering, and possibly even mass production and consumer products, the foundational aspects of quantum mechanics are more and more hotly discussed. Whether or not various quantum technologies can fulfil their theoretical promise depends on the fact that quantum mechanical phenomena cannot be merely emergent, arising from a more fundamental physical framework of a more classical nature. At least, that is what Bell’s theorem is usually understood to say: any underlying mathematical-physical framework which is able, to a reasonable approximation, to reproduce the statistical predictions made by quantum mechanics cannot be local and realist. These words nowadays have precise mathematical meanings, but they stand for the general world view of physicists like Einstein, and in fact for the general world view of the educated public. Quantum physics is understood to be weird, and perhaps even beyond understanding. “Shut up and calculate”, say many physicists.

Since the 2015 “loophole-free” Bell experiments in Delft, Munich, Vienna and at NIST, one can say even more: laboratory reality cannot be explained by a classical-like underlying theory. Those experiments were essentially watertight, at least as far as experimentally enforceable conditions are concerned. (Of course, there is heated discussion and criticism here, too.)

Since then, however, it seems that more energy than ever is being put into serious mathematical physics which somehow gets around Bell’s theorem. A more careful formulation of the theorem is that the statistical predictions of quantum mechanics cannot be reproduced by a theory having three key properties: locality, realism, and no-conspiracy. What is meant by no-conspiracy? It means that experimenters are free to choose settings of their experimental devices, independently of the underlying properties of the physical systems which they are investigating. In the case of a Bell-type experiment, a laser is aimed at a crystal which emits a pair of photons; the photons arrive at two distant polarising photodetectors, i.e., detectors which can measure the polarisation of a photon in directions chosen freely by the experimenter. If the universe actually evolves in a completely deterministic manner, then everything that goes on in those labs (housing the source and the detectors and all the cables or whatever in between) was already determined at the time of the big bang, and the photons can in principle “know in advance” how they are going to be measured.

At the present time, highly respectable physicists are working on building a classical-like model for these experiments using superdeterminism. Gerard ’t Hooft used to be a lonely voice arguing for such models, but he is no longer quite so alone (cf. Tim Palmer, Oxford, UK). Other physicists invoke retro-causality: the future influences the past. This leads to “interpretations of quantum mechanics” in which the probabilistic predictions of quantum mechanics, which seem to have a built-in arrow of time, do follow from a time-symmetric physics (cf. Jarek Duda, Kraków, Poland).

Yet other physicists dismiss “realism” altogether: the wave function is the reality, and the branching into many possible outcomes when quantum systems interact with macroscopic systems is an illusion. The Many Worlds Interpretation is still very much alive. Then there is QBism, where the “B” probably was meant to stand for Bayesian (subjectivist) probability. Here one takes an almost solipsistic view of physics: the only task of physics is to tell an agent the probabilities of what the agent is going to experience in the future; the agent is rational and uses the laws of quantum mechanics and standard Bayesian probability (the only rational way to express uncertainty or degrees of belief, according to this school) to update those probabilities as new information is obtained. So there only is information. Information about what? This never needs to be decided.

On the right, interference patterns of waves of future quantum possibilities. On the left, the frozen actually materialised past. At the boundary, the waves break, and briefly shining fluorescent dots of light on the beach represent the consciousness of sentient beings. Take your seat and enjoy. Artist: A.C. Gill

Yet another serious escape route from Bell is to suppose that mathematics is wrong. This route is not taken seriously by many, though at the moment, Nicolas Gisin (Geneva), an outstanding experimentalist and theoretician, is exploring the possibility that an intuitionistic approach to the real numbers could actually be the right way to set up the physics of time. Klaas Landsman (Nijmegen) seems to be following a similar hunch.

Finally, many physicists do take “non-locality” as the serious way to go, and explore, with fascinating new experiments (a few years ago in China, Anton Zeilinger and Jian-Wei Pan; this year, Donadi et al.), the hypothesis that gravity itself leads to non-linearity in the basic equations of quantum mechanics, leading to the “collapse of the wave function” by a definitely non-local process.

At the same time, public interest in quantum mechanics is greater than ever, and non-academic physicists are doing original and interesting work “outside of the mainstream”. Independent researchers can and do challenge orthodoxy, and it is good that someone is doing that. There is a feeling that the mainstream has reached an impasse. In our opinion, the outreach from academia to the public has also to some extent failed. Again and again, science supplements publish articles about amazing new experiments, showing ever more weird aspects of quantum mechanics, but it is often clear that the university publicity department and the science journalists involved did not understand a thing, and the newspaper articles are extraordinarily misleading, if not palpable nonsense.

In the Netherlands there has long been a strong interest in foundational aspects of quantum mechanics and also, of course, in the most daring experimental aspects. The Delft experiment of 2015 was already mentioned. At CWI, Amsterdam, there is an outstanding group in quantum computation led by Harry Buhrman; Delft has a large group of outstanding experimentalists and theoreticians, and in many other universities there are small groups as well as outstanding individuals. In particular one must mention Klaas Landsman and Hans Maassen in Nijmegen, and the groups working in the foundations of physics in Utrecht and in Rotterdam (Fred Muller). Earlier we had, of course, Gerard ’t Hooft, Dennis Dieks and Jos Uffink in Utrecht; some of them are retired but still active, others moved abroad. A new generation is picking up the baton.

The workshop will therefore bring together a heterogeneous group of scientists, many of whom disagree fundamentally on basic issues in physics. Is it an illusion to think that we can ever understand physical reality, so that all we can do is come up with sophisticated mathematics which amazingly gives the right answer? Yet there are conferences and Internet seminars where these disagreements are fought out, amicably, again and again. It seems that some of the disagreements stem from different subcultures in physics, with very different uses of the same words. It is certainly clear that many of those working on how to get around Bell’s theorem actually have a picture of that theorem belonging to its early days. Our understanding has developed enormously over the decades, and the latest experimentalists perhaps have a different theorem in mind from the general picture held by theoretical physicists who come from relativity theory. Indubitably, the reverse is also true. We are certain that the meeting we want to organise will enable people from diverse backgrounds to understand one another more deeply and possibly “agree to differ” if the difference is a matter of taste; if, however, the difference has observable physical consequences, then we must figure out how to observe them.

The other aim of the workshop is to find better ways to communicate quantum mysteries to the public. A physical theory which basically overthrows our prior conceptions of time, space and reality must impact culture, art, literature; it must become part of present-day life, just as earlier scientific revolutions did. Copernicus, Galileo, Descartes, Newton taught us that the universe evolves in a deterministic (even if chaotic) way. Schrödinger, Bohr and all the rest told us this was not the case. The quantum nature of the universe certainly did impact popular culture, but somehow it did not really impact the way most physicists and engineers think about the world.

Illustration from Wikipedia, article on Bell’s Theorem. The best possible local realist imitation (red) for the quantum correlation of two spins in the singlet state (blue), insisting on perfect anti-correlation at 0°, perfect correlation at 180°. Many other possibilities exist for the classical correlation subject to these side conditions, but all are characterized by sharp peaks (and valleys) at 0°, 180°, and 360°, and none has more extreme values (±0.5) at 45°, 135°, 225°, and 315°. These values are marked by stars in the graph, and are the values measured in a standard Bell-CHSH type experiment: QM allows ±1/√2 = ±0.7071…, local realism predicts ±0.5 or less.
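The numbers in that caption are easily checked numerically. A small R verification (the sawtooth formula is my rendering of the piecewise-linear local realist curve described in the caption):

theta <- c(45, 135, 225, 315) * pi / 180    # the four angles marked by stars
-cos(theta)                                 # quantum singlet correlation: -/+ 0.7071...
1 - 2 * abs((theta / pi) %% 2 - 1)          # best local realist sawtooth: -/+ 0.5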

Warsaw

WT*?

Don’t be so impatient. All will be explained, in due time. In fact, time, and associations in space and time, is what this posting is all about.

Firstly, the *image* is the album art of the eponymous Joy Division album [I so love using the word “eponymous”!]. If you really do want to listen to it, here’s a YouTube link: https://www.youtube.com/watch?v=3UYnyiL8-VI

I warn you, it’s not everyone’s cup of tea.

Secondly, I have to tell you that last Monday I gave a Zoom talk at the department of physics of the Jagiellonian University, Kraków, in a seminar series hosted by my friend Jarek Duda. The announcement said that the talk (on quantum foundations, and in particular on the issues of time in Bell’s theorem) would start at 17:00 hours Warsaw time, and for some days I was under the misapprehension that I would give (and later, had given) a virtual talk in Warsaw. Kraków, Warsaw, … I have wonderful memories of a number of fascinating Polish cities.

While preparing my slides I belatedly learnt that, two or three months previously, Boris Tsirelson (Tel Aviv), one of my greatest scientific heroes, had passed away in Basel, aged 70. One year older than me. (His family originally came from Bessarabia, nowadays more or less Moldova. More Holocaust connections here.) Boris’ whole approach to Bell’s theorem, and not just his famous inequality (the “Tsirelson bound”), had always deeply resonated with me. I felt devastated, but also inspired.

Actually, when I was asked if I would like to make a contribution to the JU Kraków seminar, the provisional title of my talk, and its initial “abstract”, referred to “Bell-denialists”. Of course I was thinking of one of my current Bell-denialist friends (recently referred to as my “nemesis” by another one of my friends, though I think of him more as an inspiring sparring partner), Joy Christian. So there is the word “Joy” again. Those who are not fans of English post-punk of the late 70s and early 80s might like to consult Wikipedia to find out what historical organisation was alluded to in the name of the band: https://en.wikipedia.org/wiki/Joy_Division. The lead singer, Ian Curtis, famously committed suicide at the very young age (for suicidal rock stars) of 23. He certainly was a “troubled young man” … . See the very beautiful movie “Control”, directed by the Dutch photographer Anton Corbijn: https://en.wikipedia.org/wiki/Control_(2007_film).

Coincidentally, today I saw the announcement of a new paper by my quantum friend Sascha Vongehr, “Many Worlds/minds Ethics and Argument Against Suicide: for Emergencies and Evaluation in Long Term Suicide Prevention and Mental Health Outcome”, on viXra, https://vixra.org/abs/2004.0158. There are actually some very fine papers on viXra!

But I digress, as is my wont. Here are the slides of my Kraków talk, and of a sequel (next Monday, 17:00 hours, Warsaw time!) https://www.math.leidenuniv.nl/~gill/Warsaw.pdf, https://www.math.leidenuniv.nl/~gill/Warsaw2.pdf [Moved to Tuesday in connection with Easter].

Perhaps, on another day, and maybe in another posting, I will explain what my talks finally decided to be about.

In the meantime, thinking of requiems and Warsaw made me think of a piece by one of my favourite composers, Alfred Schnittke, dedicated to the memory of the victims of the bombing of Belgrade by the Nazis. I will add a link to a suitable YouTube performance, if I can find it. If this piece of music indeed exists anywhere apart from in my mind. Google search is not giving me any help. I have to locate my CD collection…

Ah, it was “Ritual”. https://www.youtube.com/watch?v=rdnmWXkfR3E