I have recently been involved in acrimonious discussions in a Google group, https://groups.google.com/g/bell_quantum_foundations, devoted to Bell’s theorem and the interpretation of quantum mechanics. One of the group members, Bryan Sanctuary, insists that two particles leaving a source cannot remain entangled. He claims, in preprints and in a sequence of recent blog posts, that the EPR-B correlations cannot remain valid once the particles are separated. Here is a link to one of the posts: http://blog.mchmultimedia.com/2022/05/09/hyper-helicity-and-the-foundations-of-qm/. At the same time, he claims that the EPR-B correlations do have a local and realistic interpretation once one introduces a hidden quantum property which he calls hyper-helicity. He believes that in this way all notions of non-local collapse and weirdness can be banished from quantum mechanics. Another participant, Alexey Nikulov, also thinks that conventional quantum mechanics is wrong, and disbelieves in non-locality and collapse.

A preliminary problem is figuring out what Bell’s theorem actually is. A long essay by Sheldon Goldstein (2011) on “Scholarpedia”, http://www.scholarpedia.org/article/Bell%27s_theorem, makes it clear that Bell twice quite radically changed his thinking, so one could in fact say that there are three Bell theorems. The essential mathematics changed from the first to the second, and the physical motivation behind his characterization of a “local hidden variables model” evolved at both changes. The theorem itself is always the same, that a local hidden variables model cannot reproduce certain quantum-mechanical predictions, but its assumptions and interpretation matured.

I have started writing out some notes (maybe they will become a whole paper) to make it clear that “collapse of the wave-function” is an interpretational optional extra, not needed in order to apply quantum mechanics in practice. No interpretation of what is going on behind the scenes is necessary if one’s purpose is to *describe* nature. An interpretation might be useful if it helps one to *understand* nature. One must realize that it is possible that understanding nature is something which will forever go beyond our poor faculties. That is not to say that the drive to gain understanding isn’t the true drive behind doing science. Like democracy and justice, understanding is an ideal which we have to continually renew and re-affirm.

That does mean that in introductory expositions, collapse should be clearly labelled as a *lie for children*.

The usual colourful language involving the non-local collapse of the wave function can be thought of as just a description of a useful computational tool, not a description of physical changes to something existing in physical reality. The only things assumed to exist are measurement outcomes, and the theory allows us to compute their probability distributions, also in complex, composite, sequential experimental set-ups. One can compute what one needs to know by pretending that the wave-function collapse suggested by the von Neumann-Lüders extension of the Born law is somehow real. One gets the right answer, as directly as possible. There is, however, no need to think of wave-function collapse as being something physical (and necessarily non-local). Such thinking is an optional extra. Some people find it distasteful. Tastes differ. I think it can usefully be thought of as one of those *lies for children* which need to be seen in a different light as one gains maturity and knowledge.
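To make this concrete, here is a minimal numerical sketch (my own illustration, not taken from any of the papers under discussion) of using the von Neumann-Lüders rule purely as a computational device: compute Alice’s outcome probability on a singlet pair, “collapse” the state, then compute Bob’s conditional probability. The sequential collapse calculation reproduces the EPR-B correlation E(a,b) = −cos(a−b), with no commitment at all to the collapse being a physical process.

```python
import numpy as np

def proj(theta):
    # Projector onto spin-up along a direction at angle theta in the x-z plane
    v = np.array([np.cos(theta / 2), np.sin(theta / 2)])
    return np.outer(v, v)

I2 = np.eye(2)
# Singlet state (|01> - |10>)/sqrt(2) in the basis |00>, |01>, |10>, |11>
psi = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)

def joint_prob(a, oa, b, ob):
    # Born rule for Alice, then Lüders collapse, then Born rule for Bob
    Pa = proj(a) if oa == +1 else I2 - proj(a)
    Pb = proj(b) if ob == +1 else I2 - proj(b)
    phi = np.kron(Pa, I2) @ psi        # un-normalised post-measurement state
    p_a = phi @ phi                    # probability of Alice's outcome oa
    if p_a == 0:
        return 0.0
    phi = phi / np.sqrt(p_a)           # "collapse": renormalise
    p_b_given_a = phi @ np.kron(I2, Pb) @ phi
    return p_a * p_b_given_a

def correlation(a, b):
    # E(a,b) = sum over outcomes of oa*ob*P(oa,ob); QM predicts -cos(a-b)
    return sum(oa * ob * joint_prob(a, oa, b, ob)
               for oa in (+1, -1) for ob in (+1, -1))
```

For example, `correlation(0.0, np.pi/3)` gives −0.5, i.e. −cos(π/3), exactly the EPR-B prediction; the collapse step was only bookkeeping.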

We are all children! Learning never ends!

The second purpose is to write out the mathematical content of Sanctuary’s claims concerning hyper-helicity. Sanctuary feels that his interpretation of his mathematical formalism will revolutionize our understanding of quantum entanglement. Clearly, he has a long way to go, but I hope my own struggle to understand what he is doing will be helpful to others, if not to him.

I agree, people confuse the formalism with reality itself. Collapse and superposition are part of the formalism used to predict experimental results. Of course, we must at least agree that experimental results are real.

By the way, the claim that entanglement does not exist is also made by Andre X Vatarescu.

In Andre’s case, he claims that entangled photons do not exist; see for instance his comment on my preprint on ResearchGate, where he cites papers he claims are intentionally ignored by the scientific community.

https://www.researchgate.net/publication/360422254_A_Critical_Analysis_of_the_Quantum_Nonlocality_Problem?_sg%5B1%5D=MSQB7jcih569x8dcgGNn7daXoHVpDaNWgmCXp3TyAXSqogrhH-avjVNADIsKZ_gBXSRqP1rPrA

Thanks for the references. Well, Mr Vatarescu isn’t the only one who says things like that. He won’t be the last one, either. I don’t think he understands Bell’s reasoning.

I am glad Richard is undertaking to try to understand my work. So far he has a long way to go. He has no inkling of QFT, is under the misconception that my work is about Bell, and seems confused about what an element of reality is.

I will be happy to guide him, but he will have to ask well-formulated and clear questions about my formulation.

Good luck.

Bryan, you are the one who keeps saying you have disproved Bell’s theorem. Do you retract that claim now?

PS I told you at the outset that I know no QFT. I asked for help explaining your notation but you gave none. You complained your paper was being rejected by journal after journal and I gave you some kind advice but you called me a liar. Really. Grow up, man.

I repeat my request: please read the draft paper so far, please let me know if you have problems with the mathematics.

Hi Bryan, please let me give a little advice. Although I do not know your paper, I am sure that if you pay the APC you will get it published in almost any serious journal.

He will easily get his work published if he simply deletes his claim to have a counterexample to Bell’s theorem. His work has nothing to do with Bell’s theorem.

Probably that is right. Perhaps cleverly disguising his main point may help. However, in 2021 E. Muchowski published a blatant refutation of Bell’s theorem https://doi.org/10.1209/0295-5075/134/10004

What do you mean by “blatant”? Muchowski’s counterexample is blatantly wrong. It’s easy to see. The article is a waste of time and will have zero impact, except that it will be added to lists of papers disagreeing with Bell, by other authors who also disagree with Bell. Unfortunately they all disagree with one another, as Karl Hess once sadly remarked.

Indeed, it is fairly easy to get garbage published these days. The journals just want to rake in the publishing fees, so the more they publish the happier they are. And papers which are wrong also get cited by others who write articles explaining why they are wrong. The journals can pretend they are facilitating important academic debate.

Yes, the problem is my English. I should have used “blatantly wrong” instead of just “blatant”.

Incidentally, I just started an attempt to add to Wikipedia an article on criticism of Bell’s theorem. See the talk page belonging to the page on Bell’s Theorem.

If you provide the link it will be easier to find.

https://en.wikipedia.org/wiki/Talk:Bell%27s_theorem

Hi Richard

I dipped my toe in the water on the other Bell site but, after that, merely watched bemusedly at the angry/animated philosophy comments about ‘reality’ etc. As an aside, QFT is used to annihilate a state and then re-form a state by creation, using annihilation and creation operators. ‘Locality’ surely disappears between annihilation and creation? A little like following a path on the real number line and choosing only to stand on rational numbers: how do you get from one rational to the next without standing on an irrational? It is not far to jump, but is it genuinely local merely because the discontinuity is small (infinitesimally small)?

Bryan is wrong, of course, about his model defeating Bell’s theorem, and maybe he is realising that and hence now saying that his paper has nothing to do with Bell. But I do like Bryan’s idea in principle that he is trying to model a particle’s spin more clearly (more clearly to himself). I also did not dislike some parts of Alexey’s later comments.

I also have problems accepting that entangled particles stay entangled over time (a problem which, like most others, is irrelevant in a retrocausal solution). Joy always insists that measurements at A and B are made ‘at the same time’, which sort of hints that if measurements were made at different times the particle HVs would be out of sync. If one ‘walks’ along the screen in a two-slit experiment then one can go from peak to trough to peak etc. in a very short distance. IMO this represents particles in phase, out of phase, in phase etc. By analogy this represents particles with HVs being entangled, not entangled, entangled etc., where short distance differences can destroy entanglement. [None of this matters in a retrocausal solution.] Also, if small distances are so important, then how does this convert into sizes of time windows? Are experimental time windows short enough to work effectively? And if short distances are so important, what is the size of an S-G detector? Presumably not smaller than the size of the interference fringes. [None of this matters in a retrocausal solution.] I guess this is also what Bryan meant when very recently asking whether the path distances had been measured in one of the Bell experiments.

I am working on a second draft of my new paper on time and retrocausality. I have finished the Bell section. My previous paper was complicated, as I was working out the spin structure of particles in the same paper, but in the new paper the Bell solution is much simpler as it is done purely using the 200-year-old Malus’s Law. The second section shows that antiparticles can travel backwards in time, despite no one else believing that. My analysis is not particle-at-a-time but beam-at-a-time, where there are eight beams and eight applications of Malus’s Law.

I see Sabine has a new article today, on her blog, about superdeterminism and retrocausality, and she will soon be giving a link to published papers on her recent conference on this issue.

Great comments, Ben! Yes, I saw about Sabine’s workshop. https://backreaction.blogspot.com/2022/05/the-superdetermined-workshop-finally.html The comments about measuring the path distances show that the persons making them don’t study the experimental papers. The path distances are irrelevant, and certainly different. The experimenters use a first phase of measurement (observing actual correlations) and then tune the angles of their detectors so that, according to QM, the “x, y, z” directions at Alice’s place correspond nicely to “x, y, z” at Bob’s place. These are not absolute directions, which anyway hardly exist as the earth turns and revolves around the sun and the solar system hurtles through space.

Ben said, “Joy always insists that measurements at A and B are made ‘at the same time’ which sort of hints that if measurements were made at different times the particle HVs would be out of sync.”

There is no hidden variable anymore. Bell’s HV program falls apart.

The important thing in the best Bell-type experiments is that both measurements are complete before a signal travelling at the speed of light could carry the setting from either one side to the place where the measurement outcome is delivered on the other side.

Hi Richard

To reinforce my advice to Sanctuary of May 19, 2022 at 11:24 pm and Richard’s observation of May 20, 2022 at 5:30 am, here is a paper published in Foundations of Physics: http://dx.doi.org/10.13140/RG.2.2.13546.85446.

The authors cleverly avoided directly saying in the title that they have a counterexample to the Bell theorem or anything similar.

I studied this work a year ago. I’m afraid to say that I think it is worthless. Moreover, I found it impossible to hold a rational discussion with the lead author. I am not planning to put more time into it.

I do not intend to waste time on it either. I left a comment on RG saying that there are two options: either they have a counterexample to the Bell theorem, or their example violates statistical independence. If they claim the first option, then a very relevant follow-up paper should explain where the Bell theorem goes wrong.

The abstract itself needed an abstract. At least for me, as I did not get to the end of it. I am trying to complete a paper on ‘time’ (and especially the possibility of reverse time flows). A diagram of the plan of the paper is shown here: https://ben6993.wordpress.com/

BBC TV in England a few days ago showed a rerun of a Horizon programme on Einstein and entanglement. I was able to watch it through this time without gnashing my teeth too much at the presentation and contents. I am in a halfway position: Einstein was of course completely correct, and QM interpretations of Bell are no more than half correct. An inequality that favours Einstein.

The use of reversed-time-direction antiparticles does allow ‘apparently’ instantaneous action at a distance, whereas the antiparticles never travel at greater than speed c. My 2022 paper showed that Malus’s Law plus retrocausality side-steps Bell’s inequalities. So it should be possible to use this effect to send messages instantaneously. I suspect some experimenters misunderstand this point: pruning the data until correlations are very high does not guarantee entanglement. Entanglement implies very high correlations, but not vice versa. I will return to investigating modern instantaneous communications later, but first I need to complete my paper on ‘time’. IMO Einstein and his EPR colleagues will eventually get the credit for inspiring a new perspective on ‘time’, unlike the ‘Einstein was wrong’ attitude in the Horizon programme. It is too early to say Einstein was wrong.

Regards

Austin Fearnley

Hi Austin,

I asked one of the authors and he said they circumvent Bell’s theorem through what is usually called superdeterminism. Your approach is retrocausality. Both are at least logically correct, but I am agnostic about them.

Cheers,

Justo

Hi Justo

Eh! Superdeterminism in their (Jackson et al.) paper? I looked again at that abstract and took a quick look at the paper. I see no superdeterminism?? What I do glimpse is an apparent similarity with Christian’s S3 model. Also, curl is somewhat similar to Sanctuary’s idea of having some kind of extra helicity. I sympathise with that attempt. They say that Malus’s Law was unphysical, but they claim to have made it physical because of their model. They quote results using computer code, though it just seems to be theoretical calculations on a model rather than a simulation of a Bell experiment.

So I think they have got nowhere new.

I have a particle model with gyroscopic precession. Years ago I followed Susskind’s mathematical (QM) derivation of the precession, so my precession model is not new physics. But it does result in Malus’s Law. And I have a physical (non-QM) model of this precession which gives Malus’s Law without using QM, though using hidden variables of precession (equivalent to Feynman’s phase). So there is nothing spooky or non-physical about Malus, in my opinion. Sanctuary’s helicity, in my opinion, is also probably not adding anything new, as it too seems to be related to phase.

The problem for me at that time was that Malus’s Law does not break Bell’s inequalities. Well, it did not in my computer simulation of particle-at-a-time data in a Bell experiment. And I do not see how Jackson’s or Sanctuary’s models could beat the Bell inequalities. They should try a computer simulation to realise for themselves that it will not work.
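A minimal sketch of the kind of simulation described here (my own illustration, not Austin’s actual code; the assumptions are a shared hidden polarisation angle λ per pair and probabilistic Malus-law detector responses): each side answers ±1 with probability cos² of the angle between its setting and λ. The resulting CHSH value comes out around 1.41, inside the local bound of 2 and nowhere near the quantum value 2√2 ≈ 2.83, illustrating why a Malus-law local model cannot break the Bell inequalities.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200_000  # pairs per correlation estimate

def outcome(setting, lam):
    # Malus-law response: +1 with probability cos^2(setting - lam), else -1
    return np.where(rng.random(lam.shape) < np.cos(setting - lam) ** 2, 1, -1)

def E(a, b):
    # Shared hidden variable: a polarisation angle, uniform on [0, pi)
    lam = rng.uniform(0.0, np.pi, N)
    return np.mean(outcome(a, lam) * outcome(b, lam))

# Settings that maximise the quantum CHSH value for polarisation correlations
a, a2, b, b2 = 0.0, np.pi / 4, np.pi / 8, 3 * np.pi / 8
S = abs(E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2))
```

Analytically this model gives E(a,b) = (1/2)cos 2(a−b), so S converges to √2; the simulation just makes that failure vivid, which is the point being made above.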

What did work for me in a simulation was the inclusion of backwards-in-time motion of antiparticles so that retrocausality plus gyroscopic phase bypasses the Bell Inequalities.

In my current draft paper on ‘time’ I have had a new insight into the nature of time. It revolves around string theory, or, put more simply, around special relativity applied to extra dimensions moving near speed c. In a previous paper of mine I adapted an idea from the Australian, Chappell. His idea was that time seemed to be a bivector quantity. I thought that a trivector quantity was a better fit than a bivector. But that was a sort of abstract/mathematical idea without a physical basis for my private understanding. A trivector can be +1 or -1. The physics would be the same for +1 as for -1. And the sign simply tells you the direction of the arrow of time in the volume of space. But I have now made sense of this in how I view time in a common-sense way.

High-energy string-theory speeds lead to compactification. This equates to quantised measurements of +1 or -1, and is the basis for my preon model’s quantised values of spin, electric charge, etc. Here the spin ‘dimension’ is travelling at speed c through our space, and so spin measurements are compactified in our measurements, localised at the places in our space where we measure them as +1 or -1. For time, our entire 3D space is travelling through extra spatial dimensions at near speed c, and our measurement of these dimensions is compactified/quantised to either a single +1 or -1 located everywhere in our space, like a trivector value of +1 or -1. So a spin +1 is only found in one place in our space, whereas a time value of +1 is found everywhere in our space, as our whole 3D space is embedded in the ‘time’ dimension.

This does not yet show that retrocausality is possible, but I need to understand time better first. This is how I approached the Bell issue: I first needed to understand Malus’s Law in a common-sense calculation, then I applied Malus to Bell. I am still exploring my new model for time. For example, does a block model work in my time model? Is the past immutable (it cannot be for reverse-time antiparticles)? Is ‘now’ (the +1 trivector everywhere) the only aspect of time that exists for us? The time dimension for us is really a block of spatial dimensions in its own frame which we are speeding through. Are these extra N dimensions our block universe? Or is the past lost even in that dimension? Many more questions; one reason why this paper is still in the making.

Regards

Austin