Condemned by statisticians?

A Bayesian analysis of the case of Lucia de B.

de Vos, A. F. (2004).

Door statistici veroordeeld? Nederlands Juristenblad, 13, 686-688.


Here is the result of Google Translate, made by RD Gill, with some “hindsight comments” by him added in square brackets and marked “RDG”.


Would having posterior thoughts
Not be offending the gods?
Only the dinosaur
Had them before
Recall its fate! Revise your odds!
(made for a limerick competition at a Bayesian congress).

The following article was the basis for two full-page articles on Saturday, March 13, 2004, in the science supplement of the NRC (with, unfortunately, disturbing typos in the final calculation) and in “the Forum” of Trouw (with the predictable front-page announcement that I claimed the chance that Lucia de B. was wrongly convicted was 80%, which is not the case).

Condemned by statisticians?
Aart F. de Vos

Lucia de Berk [Aart calls her “Lucy” in his article. That’s a bit condescending – RDG] has been sentenced to life imprisonment. Statistical arguments played a role in that, although their influence was overestimated in the media. Many people died while she was on duty. Pure chance? The consulted statistician, Henk Elffers, repeated during the current appeal his earlier statement that the probability was 1 in 342 million. I quote from the article “Statisticians do not believe in coincidence” in the Haagsche Courant of January 30th: “The probability that nine fatal incidents took place in the JKZ during the shifts of the accused by pure chance is nil. (…) It wasn’t chance. I don’t know what it was. As a statistician, I can’t say anything about it. Deciding the cause is up to you”. The rest of the article showed that the judge had great difficulty with this answer, and did not manage to resolve those difficulties.

Many witnesses were then heard who talked about circumstances, plausibility, oddities, improbabilities and undeniably strong associations. The court has to combine all of this and arrive at a wise final judgment. A heavy task, certainly given the legal conceptual system that includes very many elements that have to do with probabilities but has to make do without quantification and without probability theory when combining them.

The crucial question is of course: how likely is it that Lucia de Berk committed murders? Most laypeople will think that Elffers answered that question and that it is practically certain.

This is a misunderstanding. Elffers did not answer that question. Elffers is a classical statistician, and classical statisticians do not make statements about what is actually going on, but only about how unlikely things are if nothing special is going on at all. However, there is another branch of statistics: the Bayesian. I belong to that other camp. And I’ve also been doing calculations. With the following bewildering result:

If the information that Elffers used to reach his 1 in 342 million were the only information on which Lucia de Berk was convicted, I think that, based on a fairly superficial analysis, there would be about an 80% chance of the conviction being wrong.

This article is about this great contrast. It is not an indictment of Elffers, who was extremely modest in court when interpreting his result, nor a plea to acquit Lucia de Berk, because the court relies mainly on other arguments, albeit without unequivocal statements of probability, and nothing here is absolutely certain. It is a plea for the serious study of Bayesian statistics in the Netherlands, and this applies to both mathematicians and lawyers. [As we later discovered, many medical experts’ conclusions that certain deaths were unnatural were driven by their knowledge that Lucia had been present at an impossibly huge number of deaths – RDG]

There is some similarity with the case of Sally Clark, who was sentenced to life imprisonment in England in 1999 because two of her sons had died shortly after birth. A wonderful analysis can be found in the September 2002 issue of “living mathematics”, an internet magazine (http://plus.maths.org/issue21/features/clark/index.html)

An expert (not a statistician, but a doctor) explained that the chance that such a thing happened “just by chance” in the given circumstances was 1 in 73 million. I quote: “probably the most infamous statistical statement ever made in a British courtroom (…) wrong, irrelevant, biased and totally misleading.” The expert’s statement is completely torn to shreds in said article, which includes a Bayesian analysis, and a calculation that the probability that she was wrongly convicted was greater than 2/3. In the case of Sally Clark, the expert’s statement was completely wrong on all counts, half the nation came down on him, and Sally Clark, though only after four years, was released. The case of Lucia de Berk, however, is infinitely more complicated. Elffers’ statement is, I will argue, not wrong, but it is misleading; and the Netherlands has no jury verdicts but judges’ judgments, and even though these are not directly based on extensive knowledge of probability theory, they are much more fully reasoned. That does not alter the fact that there is a common element in the cases of Lucia de Berk and Sally Clark. [Actually, Elffers’ statement was wrong in its own terms. Had he used the standard and correct way to combine p-values from three separate samples, he would have ended up with a p-value of about 1/1000. Had he verified the data given to him by the hospital, it would have been larger still. Had he taken account of heterogeneity between nurses and of uncertainty in various estimates, both of which classical statisticians also know how to do, larger still – RDG]

Bayesian statistics

My calculations are therefore based on alternative statistics, the Bayesian, named after Thomas Bayes, the first to write about “inverse probabilities”. That was in 1763. His discovery did not become really important [in statistics] until after 1960, mainly through the work of Leonard Savage, who proved that when you make decisions under uncertainty you cannot ignore the question of what chances the possible states of truth have (in our case the states “guilty” and “not guilty”). Thomas Bayes taught us how you can learn about that kind of probability from data. Scholars agree on the form of those calculations, which is pure probability theory. However, there is one problem: you have to think about what probabilities you would have given to the possible states before you had seen your data (the prior). And often these are subjective probabilities. And if you have little data, the impact of those subjective probabilities on your final judgment is large. A reason for many classical statisticians to oppose this approach. Certainly in the Netherlands, where statistics is mainly practised by mathematicians, people who are trained to solve problems without wondering what they have to do with reality. After a fanatical struggle over the foundations of statistics for decades (see my piece “the religious war of statisticians” at http://staff.feweb.vu.nl/avos/default.htm) the parties have come closer together. With one exception: the classical hypothesis test (or significance test). Bayesians have fundamental objections to classical hypothesis tests. And Elffers’ statement takes the form of a classical hypothesis test. This is where the foundations debate focuses.

The Lucy Clog case

Following Elffers, who explained his method of calculation in the Nederlands Juristenblad on the basis of a fictional case “Klompsma”, which I have also worked through (arriving at totally different conclusions), I want to talk about the fictional case of Lucy Clog [“Klomp” is the Dutch word for “clog”; the suffix “-sma” indicates a person from the province of Groningen; this is all rather insulting – RDG]. Lucy Clog is a nurse who has experienced 11 deaths in a period in which on average only one would occur, but against whom no further concrete evidence can be found. In this case too, Elffers would report an extremely small chance of coincidence in court, about 1 in 100 million [I think that de Vos is thinking of the Poisson(1) chance of at least 11 events. If so, it is actually a factor 10 smaller. Perhaps he should change “11 deaths” into “10 deaths” – RDG]. This is the case where I claim that a guilty verdict, given the information so far together with my assessment of the context, has a chance of about 80% of being wrong.

This requires some calculations. Some of them are complicated, but the most important aspect is not too difficult, although it appears that many people struggle with it. A simple example may make this key point clear.

You are at a party and a stranger starts telling you a whole story about the chance that Lucia de Berk is guilty, and embarks joyfully on complex arithmetical calculations. What do you think: is this a lawyer or a mathematician? If you say a mathematician because lawyers are usually unable to do mathematics, then you fall into a classical trap. You think: a mathematician is good at calculations, while the chance that a lawyer is good at calculations is 10%, so it must be a mathematician. What you forget is that there are 100 times more lawyers than mathematicians. Even if only 10% of lawyers could do this calculating stuff, there would still be 10 times as many lawyers as mathematicians who could do it. So, under these assumptions, the probability is 10/11 that it is a lawyer. To which I must add that (I think) 75% of mathematicians are male but only 40% of lawyers are male, and I did not take this into account. If the word “she” had been in the problem formulation, that would have made a difference.

The same mistake, forgetting the context (more lawyers than mathematicians), can be made in the case of Lucia de Berk. The chance that you are dealing with a murderous nurse is a priori (before you know what is going on) very much smaller than that you are dealing with an innocent nurse. You have to weigh that against the fact that the chance of 11 deaths is many times greater in the case of “murderous” than in the case of “innocent”.

The Bayesian way of performing the calculations in such cases also appears to be intuitively not easy to understand. But if we look back on the example of the party, maybe it is not so difficult at all.

The Bayesian calculation is best done not in terms of chances, but in terms of “odds”, an untranslatable word with no Dutch equivalent. Odds of 3 to 7 mean a chance of 3/10 that it is true and 7/10 that it is not. Englishmen understand perfectly well what this means, thanks to horse racing: odds of 3 to 7 means you win 7 if you are right and lose 3 if you are wrong. Chances and odds are two ways to describe the same thing. Another example: odds of 2 to 10 correspond to probabilities of 2/12 and 10/12.

You need two elements for a simple Bayesian calculation. The prior odds and the likelihood ratio. In the example, the prior odds are mathematician vs. lawyer 1 to 100. The likelihood ratio is the probability that a mathematician does calculations (100%) divided by the probability that a lawyer does (10%). So 10 to 1. Bayes’ theorem now says that you must multiply the prior odds (1 : 100) and the likelihood ratio (10 : 1) to get the posterior odds, so they are (1 x 10 : 100 x 1) = (10 : 100) = (1 : 10), corresponding to a probability of 1 / 11 that it is a mathematician and 10/11 that it is a lawyer. Precisely what we found before. The posterior odds are what you can say after the data are known, the prior odds are what you could say before. And the likelihood ratio is the way you learn from data.
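As a minimal sketch of that arithmetic in R (the numbers are those of the party example above; the helper function and the names are mine, not de Vos’s):

# Bayes' rule in odds form: posterior odds = prior odds x likelihood ratio.
# Odds are represented as c(mathematician, lawyer).
update_odds <- function(prior_odds, likelihood_ratio) prior_odds * likelihood_ratio

prior     <- c(mathematician = 1, lawyer = 100)    # 100 times more lawyers than mathematicians
lik_ratio <- c(mathematician = 1, lawyer = 0.10)   # all mathematicians calculate, 10% of lawyers do
posterior <- update_odds(prior, lik_ratio)         # 1 : 10
posterior / sum(posterior)                         # probabilities: 1/11 mathematician, 10/11 lawyer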

Back to the Lucy Clog case. If the chance of 11 deaths is 1 in 100 million when Lucy Clog is innocent, and 1/2 when she is guilty – more about that “1/2” much later – then the likelihood ratio for innocent against guilty is 1 : 50 million. But to calculate the posterior probability of being guilty, you need the prior odds. They follow from the chance that a random nurse will commit murders. I estimate that at 1 to 400,000. There are forty thousand nurses in hospitals in the Netherlands, so that would mean nursing killings once every 10 years. I hope that is an overestimate.

Bayes’ theorem now says that the posterior odds of “innocent” in the event of 11 deaths would be 400,000 to 50 million. That’s 8 : 1000, so a small chance of 8/1008, maybe enough to convict someone. Yet large enough to want to know more. And there is much more worth knowing.

For instance, it is remarkable that nobody saw Lucy doing anything wrong. It is even stranger when further investigation yields no evidence of murder. If you think that there would still be an 80% chance of finding clues in the event of many murders, against of course 0% if it is a coincidence, then the likelihood ratio of the fact “no evidence was found” is 100 : 20 in favour of innocence. Application of the rule shows that we now have odds of 40 : 1000, so a small 4% chance of innocence. Conviction now becomes really questionable. And if the suspect continues to deny, which is more plausible when she is innocent than when she is guilty, say twice as plausible, the odds turn into 80 : 1000, almost 8% chance of innocence.

As an explanation, a way of looking at this that requires less calculation (but says exactly the same thing) is the following. It follows from the assumptions that in 20,000 years there would be 1008 nurses who experience 11 deaths during their shifts: 1,000 of them are guilty and 8 are innocent. Evidence of murder is found against 800 of the guilty nurses; moreover, 100 of the remaining 200 confess. That leaves 100 guilty and 8 innocent among the nurses who did not confess and against whom no evidence of murder was found.
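The same chain of updates can be written out in a few lines of R (a sketch; the odds are written as innocent : guilty, and all the numbers are de Vos’s assumptions as given above):

# Each piece of evidence multiplies the odds c(innocent, guilty) by its likelihood ratio.
per_1000_guilty <- function(o) round(o / o["guilty"] * 1000, 1)

odds <- c(innocent = 400000, guilty = 1)  # prior: one murderous nurse in 400,000

odds <- odds * c(1e-8, 0.5)   # 11 deaths: 1 in 100 million if innocent, 1/2 if guilty
per_1000_guilty(odds)         # 8 : 1000, so P(innocent) = 8/1008

odds <- odds * c(1.0, 0.2)    # no evidence of murder found: 100% vs 20%
per_1000_guilty(odds)         # 40 : 1000, so roughly 4%

odds <- odds * c(1.0, 0.5)    # she keeps denying: twice as plausible if innocent
per_1000_guilty(odds)         # 80 : 1000, so almost 8%

unname(odds["innocent"] / sum(odds))   # posterior probability of innocence, about 0.074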

So Lucy Clog must be acquitted. And all this while, I have not even talked about doubts concerning the exact probability of 1 in 100 million that “by chance” 11 people die during one nurse’s shifts when on average only 1 would. This probability would be many times higher in any Bayesian analysis; I estimate, based on experience, that about 1 in 2 million would come out. A Bayesian analysis can incorporate uncertainties: uncertainties about the similarity of circumstances and the qualities of nurses, for example. And uncertainties increase the chance of extreme events enormously; the literature contains many interesting examples. As I said, I think that if I had access to the data that Elffers used, I would not get a chance of 1 in 100 million, but a chance of 1 in 2 million. At least I assume that for the time being; it would not surprise me if it were much higher still!

Preliminary calculations show that it might even be as high as 1 in 100,000. But 1 in 2 million already gains a factor of 50 compared with 1 in 100 million, and my odds would then not be 80 to 1000 but 4000 to 1000, that is, 4 to 1: an 80% chance that a conviction would be wrong. This is the 80% chance of innocence that I mentioned at the beginning. Unfortunately, it is not possible to explain the factor 50 (or the factor 1000, if the 1 in 100,000 turns out to be correct) of this last step within the framework of this article without resorting to mathematics. [Aart de Vos is probably thinking of Poisson distributions, but adding a hyperprior over the Poisson mean of 1, in order to take account of uncertainty in the true rate of deaths, as well as heterogeneity between nurses, causing some to have shifts with higher death rates than others – RDG]
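RDG’s bracketed remark points to one standard mechanism behind such a factor: if the expected number of deaths in the shifts is not known to be exactly 1 but is given a (gamma) hyperprior with mean 1, the marginal distribution of the count becomes negative binomial, whose tail is far heavier than the Poisson tail. A small R sketch of this effect (the shape parameters are my own illustrative choices, not de Vos’s calculation):

# Poisson(1) tail probability of 11 or more deaths
ppois(10, lambda = 1, lower.tail = FALSE)            # about 1e-8, "1 in 100 million"

# gamma(shape, rate = shape) hyperprior keeps the mean at 1; smaller shape = more uncertainty.
# A gamma mixture of Poissons is negative binomial, so the tail can be computed directly:
sapply(c(shape10 = 10, shape5 = 5, shape2 = 2),
       function(shape) pnbinom(10, size = shape, mu = 1, lower.tail = FALSE))
# roughly 1 in a few million down to 1 in a few tens of thousands, depending on the spread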

What I hope has become clear is that you can always add information. “Not being able to find concrete evidence of murder” and “has not confessed” are new pieces of evidence that change the odds. And perhaps there are countless facts to add. In the case of Lucia de Berk, those kinds of facts are there. In the hypothetical case of Lucy Clog, not.

The fact that you can always add information in a Bayesian analysis is its most beautiful aspect. From prior odds you come, through data (11 deaths), to posterior odds, and these are in turn the prior odds for the next steps: no concrete evidence of murder, and no confession by our suspect. Virtually all further facts that emerge in a court case can be dealt with in this way in the analysis. Any fact that has a different probability under the hypothesis of guilt than under the hypothesis of innocence contributes. Perhaps the reader has noticed that we only talked about the probabilities of what actually happened under the various hypotheses, never about what could have happened but didn’t. A classical statistical test always talks about the probability of 11 or more deaths. That “or more” is irrelevant and misleading, according to Bayesians. Incidentally, it is not necessarily easier to talk only about what actually happened. What is the probability of exactly 11 deaths if Lucy Clog is guilty? The number of murders, something with a lot of uncertainty about it, determines how many deaths there are; but even though you are fired after 11 deaths, the classical statistician talks about the chance that you would have committed even more had you been kept on. And that last point matters for the odds. That is why I put in a probability of 50%, not 100%, for a murderous nurse killing exactly 11 patients. But that only makes a factor 2 difference.
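For what it is worth, the two quantities in that contrast are easy to compute under the innocent-nurse Poisson(1) model used above (a sketch; in this particular example the “or more” happens to add little):

dpois(11, lambda = 1)                       # probability of exactly 11 deaths
ppois(10, lambda = 1, lower.tail = FALSE)   # probability of 11 or more (the classical test's tail)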

It should be clear that it is not easy to come to firm statements if there is no convincing evidence. The most famous example, for which many Bayesians have performed calculations, is a murder in California in 1956, committed by a black man with a white woman in a yellow Cadillac. A couple who met this description was taken to court, and many statistical analyses followed. I have done a lot of calculations on this example myself, and have experienced how difficult, but also surprising and satisfying, it is to constantly add new elements.

A whole book is devoted to a similar famous case: “A Probabilistic Analysis of the Sacco and Vanzetti Evidence”, published in 1996 by Jay Kadane, professor at Carnegie Mellon and one of the most prominent Bayesians. If you want to know more, just consult his c.v. on his website http://lib.stat.cmu.edu/~kadane. In the field of “Statistics and the Law” alone, he has more than thirty publications to his name, along with hundreds of other articles. This is now a well-developed field in America.

Conclusion?

I have thought for a long time about what the conclusion of this story should be, and I have had to revise my opinion several times. The perhaps surprising conclusion is: the actions of all parties are not that bad; only their rationalisations are, to put it mildly, a bit strange. Elffers makes strange calculations, but formulates his conclusions in court in such a way that it becomes intuitively clear that he is not giving the answer that the court is looking for. The judge makes judgments that sound as though they are in terms of probabilities, but I cannot figure out what the judge’s probabilities are. Yet when I see what is going on, I do get the feeling that it is much closer to optimal than I would have thought possible, given the absurd rationalisations. The explanation is simple: judges’ actions are based on a process learnt by evolution; judges’ justifications are stuck on afterwards, and learnt through training. In my opinion, the Bayesian method is the only way to bring decisions under uncertainty and their rationalisation into line with each other. And that can be very fruitful. But the profit is initially much smaller than people think. What the court does in the case of Lucia de B. is surprisingly rational. The 11 deaths are not convincing in themselves, but enough to change the prior odds from 1 in 40,000 to odds of 16 to 5 – in short, an order of magnitude at which it is necessary to gather additional information before judging. Exactly what the court does. [de Vos has an optimistic view. He does not realise that the court is being fed false facts by the hospital managers – they tell the truth but not the whole truth; he does not realise that Elffers’ calculation was wrong because de Vos, as a Bayesian, doesn’t know what good classical statisticians do; neither he nor Elffers checks the data and finds out how exactly it was collected; he does not know that the medical experts’ diagnoses are influenced by Elffers’ statistics. Unfortunately, the defence hired a pure probabilist, and a kind of philosopher of probability, neither of whom knew anything about any kind of statistics, whether classical or Bayesian – RDG]

When I made my calculations, I thought at times: I must go to court. In the end I sent in the article, but heard nothing more about it. It turned out that the defence had called a witness who seriously criticised Elffers’ calculations, though without presenting a solution. [The judge found the defence witness’s criticism incomprehensible, and useless to boot. It contained no constructive elements. But without doing statistics, anybody could see that the coincidence couldn’t be pure chance. It wasn’t: one could say that the data was faked. On the other hand, the judge did understand Elffers perfectly well – RDG].


Maybe I will once again have the opportunity to calculate probabilities fully in the Lucia de Berk case. That could provide new insights. But it is quite a job. In this case, there is much more information than is used here, such as traces of poison in patients. Here too, it is likely that a Bayesian analysis that takes all the uncertainties into account would show that statements by experts who say something like “it is impossible that there is another explanation than the administration of poison by Lucia de Berk” should be taken with a grain of salt. Experts are usually people who overestimate their certainty. On the other hand, incriminating information can also build up. Ten independent facts that are each twice as likely under the hypothesis of guilt change the odds by a factor of about 1000. And if it turns out that the toxic traces found in the bodies of five deceased patients are each nine times more likely if Lucia is a murderer than if she is not, that contributes a factor of nine to the fifth power, just under 60,000. Etc., etc.
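The arithmetic behind those two factors (a trivial check):

2^10   # ten facts, each twice as likely under guilt: 1024, "a factor of about 1000"
9^5    # five findings, each nine times more likely under guilt: 59049, "just under 60,000"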

But I think the court works more or less like that. It uses a language which is incomprehensible – incomprehensible, that is, to probabilists – but a language sanctioned by evolution. We have few cases in the Netherlands of convictions that were later found to be wrong. [Well! That was a Dutch layperson, writing in 2004. According to Ton Derksen, in the Netherlands about 10% of very long-term prisoners (very serious cases) are innocent. It is probably something similar in other jurisdictions – RDG].

If you did the entire process in terms of probability calculations, the resulting debates between prosecutors and defence lawyers would become endless. And given their poor knowledge of probability, it is for the time being also undesirable. They have their own secret language, which usually leads to reasonable conclusions. Even the chance that Lucia de Berk is guilty cannot be expressed in their language. There is also no law in the Netherlands that defines “legal and convincing evidence” in terms of the chance that a decision is correct. Is that 95%? Or 99%? Judges will maintain that it is 99.99%. But judges are experts.

So I don’t think it’s wise to try to cast the process in terms of probability right now. But perhaps this discussion will produce something in the longer term. Judges who are well informed about the statistical significance of the starting situation and then write down a number for each piece of evidence of prosecutor and defender. The likelihood ratio of each fact must be motivated. At the end, multiply all these numbers together, and have the calculations checked again by a Bayesian statistician. However, I consider this a long-term perspective. I fear (I am not really young anymore) it won’t come in my lifetime.

The magic of the d’Alembert

Simulations of the d’Alembert on a fair roulette wheel with 36 paying outcomes and one “0”. Even-money bets (e.g., red versus black). Each line is one game. Each picture is 200 games. Parameters: initial capital of 25 units, maximum number of rounds 21, emergency stop if the capital falls below 15.

Sources:


Harry Crane and Glenn Shafer (2020), Risk is random: The magic of the d’Alembert. https://researchers.one/articles/20.08.00007

Stewart N. Ethier (2010), The Doctrine of Chances: Probabilistic Aspects of Gambling. Springer-Verlag: Berlin, Heidelberg.

set.seed(12345)

# Parameters, as described above
startKapitaal    <- 25   # initial capital
eersteInzet      <- 1    # initial stake
noodstopKapitaal <- 15   # emergency stop: stop betting once the capital drops below this
aantalBeurten    <- 21   # maximum number of rounds per game
K <- 100                 # number of sets
J <- 200                 # number of games per set
winsten <- rep(0, K)     # net gain per set

for (k in 1:K) {

  # one picture per set: 200 capital trajectories
  plot(x = -2, y = -1, ylim = c(-5, 45), xlim = c(0, 22), xlab = "Beurt", ylab = "Kapitaal")
  abline(h = 25)
  abline(h = 0, col = "red")

  aantalKeerWinst <- 0   # number of games in this set ending above the initial capital
  totaleWinst     <- 0   # net gain of this set

  for (j in 1:J) {

    huidigeKapitaal <- startKapitaal
    huidigeInzet    <- eersteInzet
    # even-money bets on European roulette: 18 ways to win, 19 ways to lose (including the "0")
    resultaten <- sample(x = c(-1, +1), prob = c(19, 18), size = aantalBeurten, replace = TRUE)
    verloop <- rep(0, aantalBeurten)   # capital after each round
    stappen <- rep(0, aantalBeurten)   # gain or loss in each round

    for (i in 1:aantalBeurten) {
      huidigeResultaat <- resultaten[i]
      if (huidigeInzet > 0) {
        stap <- huidigeResultaat * huidigeInzet
        stappen[i] <- stap
        huidigeKapitaal <- huidigeKapitaal + stap
        # d'Alembert rule: stake goes down by 1 after a win, up by 1 after a loss, never below 1
        huidigeInzet <- max(1, huidigeInzet - huidigeResultaat)
        # emergency stop: no further bets once the capital has dropped below the threshold
        if (huidigeKapitaal < noodstopKapitaal) huidigeInzet <- 0
        verloop[i] <- huidigeKapitaal
      } else {
        stappen[i] <- 0
        verloop[i] <- huidigeKapitaal
      }
    }

    aantalKeerWinst <- aantalKeerWinst + (verloop[aantalBeurten] > startKapitaal)
    totaleWinst <- totaleWinst + (huidigeKapitaal - startKapitaal)
    # small vertical jitter so that overlapping trajectories remain visible
    lines(0:aantalBeurten, c(startKapitaal, verloop) + runif(1, -0.15, +0.15))
  }

  print(c(k, aantalKeerWinst, totaleWinst))
  winsten[k] <- totaleWinst
}

The program repeatedly runs and plots 200 games, each of at most 21 rounds. Below, for each of 100 sets of 200 games, are the set number (1 to 100), the number of games in that set in which the player ended with a profit, and the player’s total net gain over the set.

[1]    1  100 -483
[1]    2  108 -336
[1]    3  103 -517
[1]    4  110 -275
[1]   5 123 -40
[1]   6 125 148
[1]    7  115 -209
[1]    8  104 -427
[1]    9  108 -356
[1]   10  110 -225
[1]   11  101 -440
[1]  12 120  80
[1]   13  108 -334
[1]   14  110 -279
[1]   15   99 -538
[1]   16  114 -101
[1]  17 113 -92
[1]  18 117 -87
[1]   19  104 -363
[1]   20  103 -320
[1]  21 114 -52
[1]   22  107 -422
[1]   23  108 -226
[1]   24  115 -173
[1]   25  110 -209
[1]   26  109 -261
[1]   27  114 -186
[1]  28 120 -62
[1]  29 123  35
[1]   30  101 -442
[1]   31  111 -215
[1]   32  104 -378
[1]  33 120  49
[1]  34 117 -49
[1]   35  119 -102
[1]   36  104 -488
[1]   37  107 -402
[1]  38 122  38
[1]   39  100 -549
[1]  40 116 -31
[1]  41 127 220
[1]   42  105 -427
[1]   43  114 -153
[1]   44  109 -256
[1]   45  119 -166
[1]  46 121  47
[1]   47  105 -417
[1]   48  113 -134
[1]  49 121 111
[1]   50  112 -307
[1]  51 114 -92
[1]  52 123 123
[1]  53 118  24
[1]   54  113 -188
[1]  55 124 127
[1]   56  110 -229
[1]   57  113 -255
[1]   58  101 -554
[1]   59  114 -345
[1]  60 124 236
[1]   61   97 -599
[1]   62  115 -220
[1]  63 120  55
[1]   64  102 -512
[1]  65 121 109
[1]   66  112 -219
[1]   67  112 -181
[1]  68 115 -45
[1]   69  107 -474
[1]   70  109 -272
[1]   71  116 -134
[1]   72  107 -440
[1]   73  108 -470
[1]  74 119 -85
[1]  75 115   1
[1]  76 115 -88
[1]   77  113 -219
[1]  78 118 -55
[1]   79  115 -150
[1]  80 124  70
[1]   81  115 -203
[1]   82  115 -153
[1]   83  109 -219
[1]   84   97 -675
[1]   85  108 -396
[1]   86  112 -220
[1]   87  115 -187
[1]   88  108 -290
[1]   89  114 -182
[1]   90  105 -439
[1]   91  113 -183
[1]   92  115 -216
[1]  93 124 110
[1]   94  115 -173
[1]  95 125 177
[1]   96  110 -203
[1]  97 128 160
[1]  98 114 -83
[1]  99 118 -90
[1] 100 123 106

Steve Gull’s challenge: An impossible Monte Carlo simulation project in distributed computing

At the 8th MaxEnt conference, held in Cambridge UK in 1988, Ed Jaynes was the star of the show. His opening lecture had the following abstract: “We show how the character of a scientific theory depends on one’s attitude toward probability. Many circumstances seem mysterious or paradoxical to one who thinks that probabilities are real physical properties existing in Nature. But when we adopt the “Bayesian Inference” viewpoint of Harold Jeffreys, paradoxes often become simple platitudes and we have a more powerful tool for useful calculations. This is illustrated by three examples from widely different fields: diffusion in kinetic theory, the Einstein–Podolsky–Rosen (EPR) paradox in quantum theory [he refers here to Bell’s theorem and Bell’s inequalities], and the second law of thermodynamics in biology.”

Unfortunately Jaynes was completely wrong in believing that John Bell had merely muddled up his conditional probabilities in proving the famous Bell inequalities and deriving the famous Bell theorem. At the conference, astrophysicist Steve Gull presented a three line proof of Bell’s theorem using some well known facts from Fourier analysis. The proof sketch can be found in a scan of four smudged overhead sheets on Gull’s personal webpages at Cambridge University.

Together with Dilara Karakozak, I believe I have managed to decode Gull’s proof (https://arxiv.org/abs/2012.00719), though this did require quite some inventiveness. I have given a talk presenting our solution and pointing out further open problems. I have the feeling that progress could be made on interesting generalisations using newer probability inequalities for functions of Rademacher variables.

Here are slides of the talk: https://www.math.leidenuniv.nl/~gill/gull-talk.pdf

Not being satisfied, I made a new version of the talk, using different tools: notes written with an Apple Pencil on the iPad, which I then discuss while recording my voice and the screen (so: either composing the notes live, or editing them live): https://www.youtube.com/watch?v=W6uuaM46RwU&list=PL2R0B8TVR1dIy0CnW6X-Nw89RGdejBwAY

A fungal year

I want to document the more than 20 species of wild mushrooms which I have collected and enjoyed eating this year. I will go through my collection of photographs in reverse chronological order. But first, the featured image above, taken back in September: Neoboletus luridiformis, the scarletina bolete; in Dutch, heksenboleet (witch’s bolete). Don’t worry: the one to avoid is the devil’s bolete.

I get my mushroom knowledge from quite a few books and from many websites. In this blog I will just give the English and Dutch wikipedia pages for each species. I highly recommend Google searching the Latin name (though notice – scientific names do change, as science gives us new knowledge), and if your French, German or other favourite language also has a wikipedia page, nature lovers’ web pages, foragers’ webpages, or whatever, check them out, because ideas about edibility, and about how to cook mushrooms which are considered edible, vary all over the world. If at some time there was a famine, and the only country people who could survive were those who went out into the forest and found something they could eat, then those of their fellows who had allergic reactions to those same mushrooms did not survive; in this way different human populations are adapted to different fungi populations. It is also very important to consult local knowledge (in the form of local handbooks, local websites), since the dangerous poisonous look-alikes which you must avoid vary in different parts of the world.

Do not eat wild mushrooms raw. You don’t know what is still crawling about on them, and you don’t know what has pooped or pissed on them or munched at them recently. Twenty minutes of gentle cooking should destroy anything nasty, and moreover it breaks down substances which are hard for humans to digest. The rigid structure of mushrooms is made of chitin (which insects use for their exoskeleton), and we cannot digest it raw. Some people have allergic reactions to raw chitin.

Contents

Paralepista flaccida

Russula cyanoxantha

Armillaria mellea

Coprinus comatus

Suillus luteus

Amanita muscaria

Sparassis crispa

[To be continued]

Appendix: some mushrooms and fungi to be wondered at, but not eaten

1. Paralepista flaccida

Tawny funnel, Roodbruine schijnridderzwam. Grows in my back garden in an unobtrusive spot, fruiting every year in December to January. Yellow-pinkish spore print, lovely smell, nice taste – also after frying! The combination of aroma, taste and spore print just does not fit any of the descriptions I can find of this mushroom or of those easy to confuse with it. There is a poisonous lookalike, which however is not supposed to taste good; that is why I dared to eat this one. It grows close to a Lawson cypress, but there may be other old wood remains underground in the same spot.

English wikipedia: https://en.wikipedia.org/wiki/Paralepista_flaccida

Netherlands wikipedia: https://nl.wikipedia.org/wiki/Roodbruine_schijnridderzwam

2. Russula cyanoxantha

Charcoal burner, Regenboogrussula (rainbow russula). Very common in the forests behind “Palace het Loo”. A really delicious russula species, easy to identify.

English wikipedia: https://en.wikipedia.org/wiki/Russula_cyanoxantha

Netherlands wikipedia: https://nl.wikipedia.org/wiki/Regenboogrussula

3. Armillaria mellea

Honey fungus, echte honingzwam. These fellows are growing out of the base of majestic beech trees at Palace het Loo. The trees are all being cut down now; excuse: “they’re sick”; true reason: high quality beech wood is very valuable. The trees are hosts to numerous fungi, animals, birds. The managers of the park have been doing their best to kill them off for several decades by blowing their fallen leaves away and driving heavy machinery around. Looks like their evil designs are bearing fruit now.

English wikipedia: https://en.wikipedia.org/wiki/Armillaria_mellea

Netherlands wikipedia: https://nl.wikipedia.org/wiki/Echte_honingzwam

4. Coprinus comatus

Shaggy ink cap, Geschubde inktzwam. One of the last ones of the season, very fresh, from a field at the entrance to the Palace park. These guys are so delicious, fried in butter with perhaps lemon juice, and a little salt and pepper, they have a gentle mushroom flavour, they somehow remind me of oysters. And of Autumns in Aarhus, picking them often from the lawns of the university campus.

English wikipedia: https://en.wikipedia.org/wiki/Coprinus_comatus

Dutch wikipedia: https://nl.wikipedia.org/wiki/Geschubde_inktzwam

5. Suillus luteus


Slippery jack, bruine ringboleet. This one looks rather slimy, and it is said that it needs to be cooked well, as it disagrees with some people. It didn’t disagree with me at all, but I must say it did not have much flavour, and it does feel a bit slippery in your mouth.

English wikipedia: https://en.wikipedia.org/wiki/Suillus_luteus

Dutch wikipedia: https://nl.wikipedia.org/wiki/Bruine_ringboleet

6. Amanita muscaria

Fly agaric, vliegenzwam. This mushroom contains both poisons and psychoactive substances. However, both are water soluble. One therefore boils these mushrooms lightly for 20 minutes in plenty of lightly salted water with a dash of vinegar, then drains and discards the fluid; they can then be fried in butter and brought up to taste with salt and pepper. They are then actually very tasty, in my opinion.

Another use for them is to soak them in a bowl of water and leave it in your kitchen. Flies will come and investigate, taste some, get high (literally and figuratively), and drop dead. The smell is pretty disgusting at this stage.

I understand you can dry them, grind them to powder, and make tea. This allegedly destroys the poisons but leaves enough of the psychoactive substances to have interesting effects. I haven’t tried it, since one of the effects is to set your heart racing, and since I have a dangerously irregular heart rhythm already, I should not experiment with this.

Some people munch a small piece raw, from time to time, while walking in the forests. I have tried that – a teaspoon-sized piece, even a dessertspoon-sized piece – without noticing anything, except that perhaps for a moment everything sparkled more beautifully than usual. Probably that was the placebo effect.

Amanita muscaria is not terribly poisonous. If you cook and eat three or four, you will probably throw up after an hour or two and also experience rather unpleasant hallucinations, to be rounded off with diarrhoea and generally feeling unwell. You might find yourself getting very large or very small; it depends, of course, on whether you nibble from the right-hand edge of the mushroom or the left-hand edge. You might believe you can fly, so it can be dangerous to be in high places on your own. The poisons may damage your liver, but being water soluble they are quite efficiently and rapidly excreted from the body, which is a good thing, so eating them just once probably won’t kill you and probably won’t give you permanent damage. Several other Amanita species are deadly poisonous, with poisons which do not dissolve in water and do not leave your body after you have eaten them, but instead destroy your liver in a few days. One must learn to recognise those mushrooms very well. In my part of the world they are: Amanita phalloides – the death cap (groene knolamaniet); and Amanita pantherina – the panther cap (panteramaniet). I have seen these two even in the parks and roadside verges of my town, as well as in the forests outside. More rare is Amanita virosa – the destroying angel (kleverige knolamaniet). But I believe I have seen it close to home, too. It is a white mushroom with white gills, and consequently many people believe you must never touch a white mushroom with white gills. Consequently, writers of mushroom books generally have the idea that the edible white mushrooms with white gills, which do exist, do not taste particularly good either, and so one should not bother with them. Hence they do not explain well how you can tell the difference. We will later (i.e., earlier this year) meet the counterexample to that myth.

Because of the psychoactive effects of Amanita muscaria it is actually presently illegal, in the Netherlands, to be found in possession of more than a very small amount.

7. Sparassis crispa

The cauliflower mushroom, grote sponszwam. One of my favourites. It does have the tendency to envelop leaves and insects in its folds. Before cooking it has a wonderful, almost perfumed aroma, but on frying it seems to lose a lot of flavour.

English wikipedia: https://en.wikipedia.org/wiki/Sparassis_crispa

Dutch wikipedia: https://nl.wikipedia.org/wiki/Grote_sponszwam

Time, Reality and Bell’s Theorem

Featured image: John Bell with a Schneekugel (snow globe) made by Renate Bertlmann; in the Bells’ flat in Geneva, 1989. © Renate Bertlmann.

Lorentz Center workshop proposal, Leiden, 6–10 September 2021

As quantum computing and quantum information technology move from a wild dream into engineering, and possibly even mass production and consumer products, the foundational aspects of quantum mechanics are more and more hotly discussed. Whether or not various quantum technologies can fulfil their theoretical promise depends on the fact that quantum mechanical phenomena cannot be merely emergent phenomena, emerging from a more fundamental physical framework of a more classical nature. At least, that is what Bell’s theorem is usually understood to say: any underlying mathematical physical framework which is able, to a reasonable approximation, to reproduce the statistical predictions made by quantum mechanics, cannot be both local and realist. These words nowadays have precise mathematical meanings, but they stand for the general world view of physicists like Einstein, and in fact for the general world view of the educated public. Quantum physics is understood to be weird, and perhaps even beyond understanding. “Shut up and calculate”, say many physicists.

Since the 2015 “loophole-free” Bell experiments in Delft, Munich, Vienna and at NIST, one can say even more: laboratory reality cannot be explained by a classical-like underlying theory. Those experiments were essentially watertight, at least as far as experimentally enforceable conditions are concerned. (Of course, there is heated discussion and criticism here, too.)

Since then, however, it seems that more energy than ever before is being put into serious mathematical physics which somehow gets around Bell’s theorem. A more careful formulation of the theorem is that the statistical predictions of quantum mechanics cannot be reproduced by a theory having three key properties: locality, realism, and no-conspiracy. What is meant by no-conspiracy? It means that experimenters are free to choose the settings of their experimental devices, independently of the underlying properties of the physical systems which they are investigating. In a Bell-type experiment, a laser is aimed at a crystal, which emits pairs of photons; these arrive at two distant polarising photodetectors, i.e. detectors which can measure the polarisation of a photon in directions chosen freely by the experimenter. If the universe actually evolves in a completely deterministic manner, then everything that goes on in those labs (housing the source and the detectors and all the cables or whatever in between) was already determined at the time of the big bang, and the photons can in principle “know in advance” how they are going to be measured.

At the present time, highly respectable physicists are working on building a classical-like model for these experiments using superdeterminism. Gerard ’t Hooft used to be a lonely voice arguing for such models but he is no longer quite so alone (cf. Tim Palmer, Oxford, UK). Other physicists are using a concept called retro-causality: the future influences the past. This leads to “interpretations of quantum mechanics” in which the probabilistic predictions of quantum mechanics, which seem to have a built in arrow of time, do follow from a time symmetric physics (cf. Jaroslav Duda, Krakow, Poland).

Yet other physicists dismiss “realism” altogether. The wave function is the reality, the branching of many possible outcomes when quantum systems interact with macroscopic systems is an illusion. The Many Worlds Interpretation is still very alive. Then there is QBism, where the “B” probably was meant to stand for Bayesian (subjectivist) probability, in which one goes to an almost solipsistic view of physics; the only task of physics is to tell an agent what are the probabilities of what the agent is going to experience in the future; the agent is rational and uses the laws of quantum mechanics and standard Bayesian probability (the only rational way to express uncertainty or degrees of belief, according to this school) to update probabilities as new information is obtained. So there only is information. Information about what? This never needs to be decided.

On the right, interference patterns of waves of future quantum possibilities. On the left, the frozen actually materialised past. At the boundary, the waves break, and briefly shining fluorescent dots of light on the beach represent the consciousness of sentient beings. Take your seat and enjoy. Artist: A.C. Gill

Yet another serious escape route from Bell is to suppose that mathematics is wrong. This route is not taken seriously by many, though at the moment, Nicolas Gisin (Geneva), an outstanding experimentalist and theoretician, is exploring the possibility that an intuitionistic approach to the real numbers could actually be the right way to set up the physics of time. Klaas Landsman (Nijmegen) seems to be following a similar hunch.

Finally, many physicists do take “non-locality” as the serious way to go, and explore, with fascinating new experiments (a few years ago in China, Anton Zeilinger and Jian-Wei Pan; this year, Donadi et al.), hypotheses concerning the idea that gravity itself leads to non-linearity in the basic equations of quantum mechanics, leading to the “collapse of the wave function” by a definitely non-local process.

At the same time, public interest in quantum mechanics is bigger than ever, and non-academic physicists are doing original and interesting work, “outside of the mainstream”. Independent researchers can and do challenge orthodoxy, and it is good that someone is doing that. There is a feeling that the mainstream has reached an impasse. In our opinion, the outreach from academia to the public has also to some extent failed. Again and again, science supplements publish articles about amazing new experiments, showing ever more weird aspects of quantum mechanics, but it is often clear that the university publicity department and the science journalists involved did not understand a thing, and the newspaper articles are extraordinarily misleading if not palpably nonsense.

In the Netherlands there has long been a powerful interest in foundational aspects of quantum mechanics and also, of course, in the most daring experimental aspects. The Delft experiment of 2015 was already mentioned. At CWI, Amsterdam, there is an outstanding group in quantum computation led by Harry Buhrman; Delft has a large group of outstanding experimentalists and theoreticians; and in many other universities there are small groups and outstanding individuals. In particular one must mention Klaas Landsman and Hans Maassen in Nijmegen, and the groups working on the foundations of physics in Utrecht and in Rotterdam (Fred Muller). Earlier we had, of course, Gerard ’t Hooft, Dennis Dieks and Jos Uffink in Utrecht; some of them are retired but still active, others moved abroad. A new generation is picking up the baton.

The workshop will therefore bring together a heterogeneous group of scientists, many of whom disagree fundamentally on basic issues in physics. Is it an illusion to think that we can ever understand physical reality, so that all we can do is come up with sophisticated mathematics which amazingly gives the right answer? Yet there are conferences and Internet seminars where these disagreements are fought out, amicably, again and again. It seems that some of the disagreements come from different subcultures in physics, with very different uses of the same words. It is certainly clear that many of those working on how to get around Bell’s theorem actually have a picture of that theorem belonging to its early days. Our understanding has developed enormously over the decades, and the latest experimentalists perhaps have a different theorem in mind from the general picture held by theoretical physicists who come from relativity theory. Indubitably, the reverse is also true. We are certain that the meeting we want to organise will enable people from diverse backgrounds to understand one another more deeply and possibly “agree to differ” if the difference is a matter of taste; if, however, the difference has observable physical consequences, then we must be able to figure out how to observe them.

The other aim of the workshop is to find better ways to communicate quantum mysteries to the public. A physical theory which basically overthrows our prior conceptions of time, space and reality, must impact culture, art, literature; it must become part of present day life; just as earlier scientific revolutions did. Copernicus, Galileo, Descartes, Newton taught us that the universe evolves in a deterministic (even if chaotic) way. Schrödinger, Bohr and all the rest told us this was not the case. The quantum nature of the universe certainly did impact popular culture but somehow it did not really impact the way that most physicists and engineers think about the world.

Illustration from Wikipedia, article on Bell’s Theorem. The best possible local realist imitation (red) for the quantum correlation of two spins in the singlet state (blue), insisting on perfect anti-correlation at 0°, perfect correlation at 180°. Many other possibilities exist for the classical correlation subject to these side conditions, but all are characterized by sharp peaks (and valleys) at 0°, 180°, and 360°, and none has more extreme values (±0.5) at 45°, 135°, 225°, and 315°. These values are marked by stars in the graph, and are the values measured in a standard Bell-CHSH type experiment: QM allows ±1/√2 = ±0.7071…, local realism predicts ±0.5 or less.
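For readers who want to see the two curves side by side, here is a short R sketch reconstructed from that caption (the blue quantum correlation is minus the cosine of the angle between the settings; the red “best local realist imitation” is the triangle wave pinned to perfect anti-correlation at 0° and perfect correlation at 180°):

theta <- seq(0, 360, by = 1)                                    # angle between the two settings, in degrees
qm    <- -cos(theta * pi / 180)                                 # quantum correlation for the singlet state
lr    <- ifelse(theta <= 180, -1 + theta / 90, 3 - theta / 90)  # best local realist imitation (triangle wave)

plot(theta, qm, type = "l", col = "blue", xlab = "Angle (degrees)", ylab = "Correlation")
lines(theta, lr, col = "red")
abline(v = c(45, 135, 225, 315), lty = 3)   # CHSH settings: quantum +/-0.71 versus local realism +/-0.5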

BOLC (Bureau Verloren Zaken – Bureau of Lost Causes) “reloaded”

The BOLC is back.

Ten years ago (in 2010) the Dutch nurse Lucia de Berk was acquitted, at a retrial, of a charge of 7 murders and 3 attempted murders at hospitals in The Hague, over a number of years leading up to just a few days before the memorable date of “9-11”. The last murder was supposed to have been committed in the night of 4 September 2001. The next afternoon, the hospital authorities reported a series of unexplained deaths to the health inspectorate and to the police. They also put Lucia de B., as she became known in the Dutch media, on “non-active”. The media reported that about 30 suspicious deaths and resuscitations were being investigated. The hospital authorities not only reported what they believed to be terrible crimes; they also believed that they knew who the perpetrator was.

The wheels of justice turn slowly, so there was a trial and a conviction; an appeal, a retrial and a conviction; finally an appeal to the supreme court. It took until 2006 for the conviction (a life sentence, which in the Netherlands only ends when the convict leaves prison in a coffin) to become irrevocable. Only new evidence could overturn it. New scientific interpretations of old evidence are not considered new evidence. There was no new evidence.

But already in 2003-2004, some people with an inside connection to the Juliana Children’s Hospital were getting worried about the case. Having spoken about their concerns, in confidence, with the highest authorities, and having been told that nothing could be done, they began to approach journalists. Slowly but surely the media became interested in the case again – the story was no longer that of a terrible witch who had murdered babies and old people for no apparent reason except the pleasure of killing, but that of an innocent person who had been mangled by bad luck, incompetent statistics, and a monstrous bureaucratic system which, once in motion, could not be stopped.

Among the supporters of Metta de Noo and Ton Derksen were some professional statisticians, because Lucia’s initial conviction had been based on a flawed statistical analysis of incorrect data supplied by the hospital, analysed by amateurs and misunderstood by lawyers. Others were computer scientists; some were high-level civil servants of various government bodies who were appalled at what they saw happening; there were independent scientists, a few medical specialists, a few people with a personal connection to Lucia (but no direct family); and friends of such people. Some of us worked together quite intensively, in particular on the internet site for Lucia, building an English-language version of it and bringing it to the attention of scientists all over the world. When newspapers such as the New York Times and The Guardian began writing about an alleged miscarriage of justice with wrongly interpreted statistics, supported by comments from top British statisticians, the Dutch journalists had news for the Dutch newspapers, and that kind of news certainly got noticed in the corridors of power in The Hague.

Fast forward to 2010, when the judges not only declared Lucia innocent, but stated out loud, in the courtroom, that Lucia, together with her fellow nurses, had fought with the utmost professionalism to save the lives of babies who had been needlessly endangered by medical errors on the part of the medical specialists charged with their care. They also noted that just because the time of death of a terminally ill person cannot be predicted in advance, this does not mean that it is necessarily unexplainable, and therefore suspicious.

A few of us, elated by our victory, decided to join forces and form a kind of collective that would look into other “lost causes”: possible miscarriages of justice in which science had been abused. I had already redirected my own research activities towards the fast-growing field of forensic statistics, and I was already deeply involved in the case of Kevin Sweeney and the case of José Booij. Soon we had a website and were hard at work, but shortly afterwards a succession of mishaps occurred. First, Lucia’s hospital paid an expensive lawyer to put pressure on me on behalf of the chief paediatrician of the Juliana Children’s Hospital. I had written some personal information about this person (who happened to be the sister-in-law of Metta de Noo and Ton Derksen) on my homepage at the University of Leiden. I felt it was crucial to understand how the case against Lucia had started, and this certainly had a great deal to do with the personalities of some key figures in that hospital. I also wrote to the hospital asking for more data about the deaths and other incidents on the wards where Lucia had worked, in order to complete the professional, independent statistical investigation that should have taken place when the case began. I was threatened and intimidated. I found some protection from my own university, which paid expensive lawyers’ fees on my behalf. However, my lawyer soon advised me to give in by removing the offending material from the internet, because if the matter went to court the hospital would probably win: I would be damaging the reputation of wealthy people and of a powerful organisation, and I would have to pay for the damage I had caused. I was to promise never to say these things again, and I would be fined if they were ever repeated by others. I never gave in to these demands. Later I did publish some of it and sent it to the hospital. They remained silent. It was an interesting game of bluff poker.

Second, on ordinary internet forums I wrote a few sentences defending José Booij, but also blaming the person who had reported her to the child protection agency. That was not a rich person, but certainly a clever one, and they reported me to the police. I became a suspect in a case of alleged defamation. I was interviewed by a friendly local police officer. And a few months later I received a letter from the local criminal court saying that if I paid 200 euros in administrative costs, the case would be closed administratively. I did not have to admit guilt, but neither could I have it put on record that I considered myself innocent.

This led the Bureau of Lost Causes to halt its activities for a while. But now it is time for a come-back, a “re-boot”. In the meantime I was not idle: I became involved in half a dozen other cases, and learned more and more about law, about forensic statistics, about scientific integrity, about organisations, psychology and social media. The BOLC is back.

ORGANISATION and PLANS

The BOLC has been inactive for a few years, but now that its founder has reached the official retirement age, he is “restarting” the organisation. Richard Gill set up the BOLC on the eve of the acquittal of the nurse Lucia de Berk in 2010. A group of friends who had been closely involved in the movement to get Lucia a fair trial decided that they enjoyed each other’s company so much, and had learned so much from the experience of the preceding years, that they wanted to try out their skills on some new cases. We quickly ran into a number of serious problems and temporarily took down our website, although activities in several cases continued, more experience was gained, and much was learned.

We think it is time to try again, having learned some useful lessons from our failures of the past years. Here is a rough outline of our plans.

  1. Set up a robust formal structure with a board (chair, secretary, treasurer) and an advisory council. Rather than calling it a scientific advisory board, as is usual in academic organisations, it should be a moral and/or wisdom advisory council: people who are kept informed of our activities and who let us know if they think we are going off the rails.
  2. Possibly apply to become a Stichting (foundation). This means we would also be something like an association or a club, with an annual general meeting. We would have members, who might also want to make donations, since running a website and occasionally getting into trouble costs money.
  3. Write about the cases we have been involved in over the past years, in particular: the alleged serial killers Ben Geen (UK) and Daniela Poggiali (Italy); the accusations of scientific misconduct concerning the PhD thesis of a student of Peter Nijkamp; the case of the AD herring test and the quality of Dutch New Herring; and the case of Kevin Sweeney.

Re-launch of the Bureau of Lost Causes

The BOLC is back. Ten years ago (in 2010) the Dutch nurse Lucia de Berk was acquitted, at a retrial, of charges of 7 murders and 3 attempted murders at hospitals in The Hague, allegedly committed over a number of years leading up to just a few days before the memorable date of "9-11". The last murder was supposed to have been committed in the night of September 4, 2001. The next afternoon, the hospital authorities reported a series of unexplained deaths to the health inspectorate and to the police. They also put Lucia de B., as she became known in the Dutch media, on "non-active" status. The media reported that about 30 suspicious deaths and resuscitations were being investigated. The hospital authorities not only reported what they believed to be terrible crimes; they also believed that they knew who the perpetrator was.

The wheels of justice turn slowly, so there was a trial and a conviction; an appeal and a retrial and a conviction; finally, an appeal to the supreme court. It took till 2006 for the conviction (a life sentence, which in the Netherlands only ends when the convict leaves prison in a coffin) to become irrevocable. Only new evidence could overturn it, and new scientific interpretations of old evidence are not considered new evidence. There was no new evidence.

Yet already in 2003–2004, some people with an inside connection to the Juliana Children's Hospital were getting very concerned about the case. Having spoken of their concerns, in confidence, with the highest authorities, but being informed that nothing could be done, they started to approach journalists. Slowly but surely the media got interested in the case again – the story was no longer that of a terrible witch who had murdered babies and old people for no apparent reason except the pleasure of killing, but that of an innocent person mangled by bad luck, incompetent statistics, and a monstrous bureaucratic system which, once in motion, could not be stopped.

Among the supporters of Metta de Noo and Ton Derksen were a few professional statisticians, because Lucia's initial conviction had been based on a faulty statistical analysis of faulty data supplied by the hospital, analysed by amateurs and misunderstood by lawyers. Others were computer scientists; some were civil servants at high levels of several government bodies, appalled at what they saw going on; there were independent scientists, a few medical specialists, a few persons with some personal connection to Lucia, and friends of such people. Some of us worked quite intensively together, in particular on the internet site for Lucia, building an English-language version of it and bringing it to the attention of scientists world-wide. When newspapers like the New York Times and The Guardian started writing about an alleged miscarriage of justice in the Netherlands involving wrongly interpreted statistics, supported by comments from top UK statisticians, the Dutch journalists had news for the Dutch newspapers, and that kind of news certainly got noticed in the corridors of power in The Hague.

Fast forward to 2010, when the judges not only pronounced Lucia innocent, but actually stated in court that Lucia, together with her colleague nurses, had fought with the utmost professionalism to save the lives of babies who were unnecessarily endangered by medical errors of the specialists entrusted with their care. They also noted that just because the time of death of a terminally ill person could not be predicted in advance, it did not mean that the death was necessarily unexplainable and hence suspicious.

A few of us, exhilarated by our victory, decided to band together and form some sort of collective which would look at other "lost causes" involving possible miscarriages of justice where science had been misused. Already, I had turned my own research activities to the burgeoning field of forensic statistics, and I was already deeply involved in the Kevin Sweeney case and the case of José Booij. Soon we had a web-site and were hard at work, but soon after this a succession of mishaps occurred.

Firstly, Lucia's hospital paid for an expensive lawyer to put pressure on me on behalf of the chief paediatrician of the Juliana Children's Hospital. I had written some information of a personal nature about this person (who happened to be the sister-in-law of Metta de Noo and Ton Derksen) on my home page at the University of Leiden. I felt it was crucially in the public interest to understand how the case against Lucia had started, and this certainly had a lot to do with the personalities of a few key persons at that hospital. I also wrote to the hospital asking for further data on the deaths and other incidents on the wards where Lucia had worked, in order to complete the professional, independent statistical investigation which should have taken place when the case started. I was threatened and intimidated. I found some protection from my own university, which actually paid expensive lawyers' fees on my behalf. However, my lawyer soon advised me to give way by removing the offending material from the internet, since if this went to court, the hospital would most likely win. I would be harming the reputation of rich persons and of a powerful organisation, and I would have to pay for the harm I had done.

Secondly, on some ordinary internet fora I wrote some sentences defending José Booij, but which pointed a finger of blame at the person who had reported her to the police. That was not a rich person, but certainly a clever person, and they reported me to the police. I became a suspect in a case of alleged slander. I was interviewed by a nice local policeman. And a few months later I got a letter from the local criminal courts saying that if I paid 200 euros in administrative fees, the case would be administratively closed.

This led to the Bureau of Lost Causes shutting down its activities for a while. But it is now time for a come-back, a “re-boot”. In the meantime I did not do nothing, but got involved in half a dozen further cases, learning more and more about law, about forensic statistics, about scientific integrity, about organisations, psychology and social media. The BOLC is back.

ORGANISATION and PLANS

The BOLC has been dormant for a few years, but now that the founder has reached official retirement age, he is "rebooting" the organisation. Richard Gill founded the BOLC on the eve of nurse Lucia de Berk's acquittal in 2010. A group of friends who had been closely associated with the movement to get Lucia a fair retrial decided that they so enjoyed one another's company, and had learnt so much from the experience of the past few years, that they wanted to try out their skills on some new cases. We rapidly ran into some serious problems and temporarily closed down our website, though activities continued on several cases, more experience was gained, and a lot was learnt.

We feel it is time to try again, having learnt some useful lessons from our failures of the last few years. Here is a rough outline of our plans.

1. Set up a robust formal structure with an executive board (chairman, secretary, treasurer) and an advisory board. Rather than calling it the scientific advisory board as is common in academic organisations, it should be a moral and/or wisdom advisory board, to be kept informed of our activities and to let us know if they think we are going off the rails. 

2. Possibly, make an application to become a foundation (“Stichting”). This means we will also be something like a society or a club, with an annual general meeting. We would have members, who might also like to make donations, since running a web site and occasionally getting into legal trouble costs money.

3. Write about the cases we have been involved in during recent years, in particular: alleged serial killer nurses Ben Geen (UK), Daniela Poggiali (Italy); allegations of scientific misconduct in the case of the PhD thesis of a student of Peter Nijkamp; the case of the AD Herring test and the quality of Dutch New Herring; the case of Kevin Sweeney.

RIP Bill van Zwet

The photograph above was taken by me at my summerhouse (i.e., an allotment garden with a comfortable large shed) in Leiden, exactly 9 years ago, with Jerry Friedman, Jacqueline Meulman, and Willem. It is early evening and we are enjoying a choice single malt and some tasty snacks. Jacqueline is wearing a t-shirt with the logo designed by me of our new master programme “Statistical Science for the Life and Behavioural Sciences”.

Below I am, for the time being, just posting a large collection of photographs sent in by a number of Bill’s old friends. I will also later add some of the comments they made in their emails. I will perhaps also add some personal remarks in the near future.

Before the many photo albums contributed by Bill’s friends, here is a link to a Zoom commemoration hosted by myself, which started one hour after the start of Willem’s funeral. Participants: Maryse Loranger, Richard Gill, Estate Khmaladze; Friedrich Götze and his wife, Marie-Colette van Lieshout, Nick Fisher; Peter Grunwald, and Ildar Ibragimov (who later managed to switch on his webcam); later arrivals were Ronald Cramer and Steffen Lauritzen:

https://www.math.leidenuniv.nl/~gill/zoom_0.mp4

The file is a 207 MB mp4; duration 35 minutes; there is 5 minutes' silence at ca. 25–30 min., when Zoom briefly failed us. You can also watch it on YouTube:

https://www.youtube.com/watch?v=s_yo1Uwzw9c.

And now to the photographs. First of all, three sets of pictures taken by Chris Klaassen: Willem's 75th birthday celebrations; his 80th birthday celebration in Leiden; and his 80th birthday event in Utrecht. Further albums were contributed by David Mason, Friedrich Götze, Jacqueline Meulman and Maarten Kampert, Marie Huskova, Marta Fiocco, Nick Fisher, Niels Keiding (© Niels Keiding), Peter Bickel, Richard Gill, Rudi Beran, Sara van de Geer, Stephen Stigler, and Vera Wellner, together with a miscellaneous set.

Warsaw

WT*?

Don’t be so impatient. All will be explained, in due time. In fact, time, and associations in space and time, is what this posting is all about.

Firstly, the *image* is the album art of the eponymous Joy Division album [I so love using the word “eponymous”!]. If you really do want to listen to it, here’s a YouTube link: https://www.youtube.com/watch?v=3UYnyiL8-VI

I warn you, it’s not everyone’s cup of tea.

Secondly, I have to tell you that last Monday I gave a Zoom talk at the department of physics of the Jagiellonian University, Kraków, in a seminar series hosted by my friend Jarek Duda. The announcement said that the talk (on quantum foundations, and in particular on the issues of time in Bell's theorem) would start at 17:00 hours Warsaw time, and for some days I was under the misapprehension that I would give (and later, had given) a virtual talk in Warsaw. Kraków, Warsaw, … I have wonderful memories of a number of fascinating Polish cities.

While preparing my slides I belatedly learnt that two or three months previously Boris Tsirelson (Tel Aviv), one of my greatest scientific heroes, had passed away in Basel, aged 70. One year older than me. (His family originally came from Bessarabia – nowadays more or less Moldova. More Holocaust connections here.) Boris' whole approach to Bell's theorem, and not just his famous inequality (the "Tsirelson bound"), had always deeply resonated with me. I felt devastated, but also inspired.

Actually, when I was asked if I would like to make a contribution to the JU Kraków seminar, the provisional title of my talk, and its initial "abstract", referred to "Bell-denialists". Of course I was thinking of one of my current Bell-denialist friends (recently referred to as my "nemesis" by another one of my friends, though I think of him more as an inspiring sparring partner), Joy Christian. So there is the word "Joy" again. Those who are not fans of English post-punk of the late 70s and early 80s might like to consult Wikipedia to find out what historical organisation was alluded to in the name of the band: https://en.wikipedia.org/wiki/Joy_Division. The lead singer, Ian Curtis, famously committed suicide at the very young age (for suicidal rock stars) of 23. He certainly was a "troubled young man"… See the very beautiful movie "Control", directed by the Dutch photographer Anton Corbijn: https://en.wikipedia.org/wiki/Control_(2007_film).

Coincidentally, today I saw the announcement of a new paper by my quantum friend Sascha Vongehr, “Many Worlds/minds Ethics and Argument Against Suicide: for Emergencies and Evaluation in Long Term Suicide Prevention and Mental Health Outcome”, on viXra, https://vixra.org/abs/2004.0158. There are actually some very fine papers on viXra!

But I digress, as is my wont. Here are the slides of my Kraków talk, and of a sequel (next Monday, 17:00 hours, Warsaw time!) https://www.math.leidenuniv.nl/~gill/Warsaw.pdf, https://www.math.leidenuniv.nl/~gill/Warsaw2.pdf [Moved to Tuesday in connection with Easter].

Perhaps on another day, and maybe in another posting, I will explain what my talks finally turned out to be about.

In the meantime, thinking of requiems and Warsaw made me think of a piece by one of my favourite composers, Alfred Schnittke, dedicated to the memory of the victims of the bombing of Belgrade by the Nazis. I will add a link to a suitable YouTube performance, if I can find it, and if this piece of music indeed exists anywhere apart from in my mind. Google search is not giving me any help. I have to locate my CD collection…

Ah, it was “Ritual”. https://www.youtube.com/watch?v=rdnmWXkfR3E

The Beginning of the End, or the End of the Beginning?

Fhloston Paradise interior film frame

We see the hotel lobby of the Fhloston Paradise, the enormous space cruise-ship from Luc Besson's movie "The Fifth Element". It occurs to me that our global village, the Earth, has itself become a huge space cruise-ship, including the below-decks squalor of the quarters of the millions of people working away to provide luxury for the passengers on the upper decks.

Now turn to some other pictures. Covid-19 bar-charts.


From top to bottom: new confirmed infections, new hospital admissions, and deaths, per day, in the Netherlands. Source: Arnout Jaspers. It looked to Arnout as though we were already past the peak of the epidemic. His source: RIVM, https://www.rivm.nl/documenten/epidemiologische-situatie-covid-19-in-nederland-2-april-2020

The curves look to me like shifted and shrunk versions of one another. About a third of those who are reported infected (mostly because they actually reported themselves sick) become so ill that they go to hospital just under a week later, and a quarter of those die there just a few days after that.
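To make that "shifted and shrunk" idea concrete, here is a minimal sketch in Python, using simulated counts rather than the RIVM data, of how one could estimate the shift (a lag in days) and the shrink factor by sliding one curve over the other and maximizing the correlation. The lag of six days and fraction of one third are purely illustrative assumptions taken from the rough figures in the text.

```python
import numpy as np

# Simulated daily counts (NOT the RIVM data): a smooth "epidemic" bump for
# reported infections, and hospital admissions built as a delayed, scaled-down,
# slightly noisy copy of it.
rng = np.random.default_rng(0)
days = np.arange(60)
infections = 1000 * np.exp(-0.5 * ((days - 30) / 8) ** 2)
true_lag, true_fraction = 6, 1 / 3          # "just under a week", "about a third"
admissions = true_fraction * np.roll(infections, true_lag)
admissions[:true_lag] = 0.0
admissions += rng.normal(0, 5, size=days.size)

# Estimate the lag by sliding the admissions curve back over the infections
# curve and picking the shift with the highest correlation.
def best_lag(x, y, max_lag=14):
    corrs = [np.corrcoef(x[:-k], y[k:])[0, 1] for k in range(1, max_lag)]
    return 1 + int(np.argmax(corrs))

lag = best_lag(infections, admissions)
fraction = admissions[lag:].sum() / infections[:-lag].sum()
print(f"estimated lag = {lag} days, estimated fraction = {fraction:.2f}")
```

With real data one would of course have to worry about reporting delays and weekend effects, but the principle is the same.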


People who are infected (and infectious) but don’t realise it are not in these pictures. There have been an awful lot of them, it seems. Self-isolation is reducing that number.
As Arnout figured out for himself by drawing graphs like this, and as David Spiegelhalter reported earlier in the UK, this pandemic is in some sense (at present) not really such a big deal. Essentially, it is doubling everyone's annual risk of death this year, and hopefully this year only. This means that 2% of all of us will die this year instead of the usual 1%. It looks as though the factor (two) is much the same for different age groups and different prior health statuses. The reason this has such a major effect on society is that "just-in-time" economics means our health care system is pretty efficient when the rate is 1% but more or less breaks down when it is 2%.
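Taken literally, the rough numbers above (a baseline annual death risk of about 1%, doubled for one year) lead to the following back-of-the-envelope sketch. The population figure is approximate and the whole calculation only spells out the article's own crude assumptions; it is not a forecast.

```python
# Back-of-the-envelope arithmetic using the text's own rough assumptions:
# a baseline annual risk of death of ~1%, doubled for this one year.
population = 17_400_000      # approximate population of the Netherlands in 2020
baseline_risk = 0.01         # "the usual 1%"
factor = 2.0                 # the text's rough doubling of the annual risk

usual_deaths = population * baseline_risk
doubled_deaths = usual_deaths * factor
print(f"usual year:   ~{usual_deaths:,.0f} deaths")
print(f"doubled risk: ~{doubled_deaths:,.0f} deaths")
print(f"difference:   ~{doubled_deaths - usual_deaths:,.0f}")
```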


What is alarming are reports that younger people are now starting to get sicker and die faster than was originally the case. Human-kind is one huge petri-dish in which these micro-machines ["The genome size of coronaviruses ranges from approximately 27 to 34 kilobases, the largest among known RNA viruses". The "base" units on the molecule are about a nanometre in size] have found a lovely place to self-replicate, and with each replication there is a chance of "errors", so the virus can rapidly find new ways to reproduce even more times.


The problem is, therefore, “the global village”. Mass consumerism. Mass tourism. Basically, the Earth is one cruise-ship. One busy shopping mall.


I would like to see the graphs on a square-root scale, or even a log scale. You would be better able to see the shapes, and you would see more easily that the places where the numbers are small are actually the noisiest, in a relative sense.
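For what it is worth, here is a minimal matplotlib sketch, again with simulated Poisson counts rather than the real RIVM figures, showing one series on a linear, a square-root and a log scale. On the transformed scales the shape is easier to compare across panels, and the greater relative noisiness of the small counts stands out.

```python
import numpy as np
import matplotlib.pyplot as plt

# Simulated daily counts (not the RIVM figures): a smooth epidemic curve with
# Poisson noise, so small counts are relatively much noisier than large ones.
rng = np.random.default_rng(1)
days = np.arange(60)
expected = 1200 * np.exp(-0.5 * ((days - 30) / 8) ** 2) + 2
counts = rng.poisson(expected)

fig, axes = plt.subplots(1, 3, figsize=(12, 3), sharex=True)
for ax, scale in zip(axes, ["linear", "square root", "log"]):
    ax.plot(days, counts, ".-")
    if scale == "square root":
        # Custom forward/inverse transforms give a square-root y-axis.
        ax.set_yscale("function",
                      functions=(lambda x: np.sqrt(np.clip(x, 0, None)), np.square))
        ax.set_ylim(bottom=0)
    elif scale == "log":
        ax.set_yscale("log")
    ax.set_title(f"{scale} scale")
    ax.set_xlabel("day")
plt.tight_layout()
plt.show()
```

The square-root scale is a natural choice for count data, since Poisson noise has roughly constant spread after a square-root transform.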