Author Archives: Richard Gill

Was the AD Herring Test about more than the herring?

“Is the AD Herring Test about more than the herring?” – opinion of prof.dr. R.D. Gill

I was asked for my opinion as a statistician and scientist in a case between the AD and Dr. Ben Vollaard (economist, Tilburg University). My opinion was requested by Mr. O.G. Trojan (Bird & Bird), who represents the AD in this case. At issue are two articles by Dr. Vollaard (July and November 2017) containing a statistical analysis of data on the AD herring test. The articles have not been published in scientific journals (and therefore have not undergone peer review), but they have been made available on the internet and publicized through press releases from Tilburg University, which has led to further attention in the media.

Dr. Vollaard’s work focuses on two suspicions regarding the AD herring test: first, that it favours fishmongers in the Rotterdam area; and second, that it favours fishmongers that source their herring from a particular wholesaler, Atlantic. The latter suspicion is related to the fact that a member of the AD herring test panel also has a business relationship with Atlantic: he gives courses there on herring cutting and on other aspects of the preparation (and storage) of herring. These suspicions have surfaced in the media before. One may indeed notice that fish shops from the Rotterdam area, and fish shops that are customers of Atlantic, often appear in the “top ten” of different years of the herring test. But that may be entirely justified, because of the quality of the herring they serve. It cannot be concluded from this that the panel is biased.

The questions I would like to answer here are the following: does Vollaard’s research provide any scientific support for the complaints about the herring test? Is Vollaard’s own summary of his findings justified?

Vollaard’s first investigation

Vollaard works by estimating and interpreting a regression model. He tries to predict the test score from measured characteristics of the herring and from partial judgments of the panel. His summary of the results is: the panel prefers “herring of 80 grams with a temperature below 7 degrees Celsius, a fat percentage above 14 percent, a price of around € 2.50, fresh from the knife, in good microbiological condition, lightly matured, very well cleaned”.

Note, “taste” is not on the list of measured characteristics. And by the way, as far as temperature is concerned, 7 degrees is the legal maximum temperature for the sale of herring.

However, these factors cannot explain the difference between the Rotterdam area and the rest of the country. Vollaard concludes that “sales outlets for herring in Rotterdam and surroundings receive a higher score in the AD herring test than can be explained by the quality of the herring served”. Is that a correct conclusion?

In my opinion, Vollaard’s conclusion is unjustified. There are four reasons why the conclusion is incorrect.

First, the AD herring test is primarily a taste test, and the taste of a herring, as judged by the panel of three permanent members, is undoubtedly not fully predictable from the characteristics that have been measured. The model does not predict the final grade exactly either. Apparently there is some correlation between factors such as price and weight and taste, or more generally quality. A reasonably good prediction can be made from the criteria used by Vollaard taken together, but a “residual term” remains, which stands for differences in taste between herring from fishmongers that are otherwise identical as regards the measured characteristics. Vollaard does not tell us how large that residual term is, and says little about it.

Second, the way in which the characteristics are assumed to be related to the taste (linearly and additively) need not be valid at all. I am referring to the specific mathematical form of the prediction formula: final mark = a × weight + b × temperature + … + residual term. Vollaard has assumed the simplest possible relationship, with as few unknown parameters (a, b, …) as possible. Here he follows tradition and opts for simplicity and convenience. His entire analysis is only valid under the proviso that this model specification is correct. I find no substantiation for this assumption in his articles.
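To make the model specification concrete, here is a minimal sketch in R of this kind of linear additive regression. The data frame herring and all variable names are hypothetical stand-ins; this is not Vollaard’s actual data or code.

fit <- lm(score ~ weight + temperature + fat + price + fresh_from_knife +
            microbiology + maturation + cleaning + region,
          data = herring)
summary(fit)        # estimated coefficients a, b, ... and their significance
sd(residuals(fit))  # size of the residual term, which the articles do not report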

Third, regional differences in the quality and taste of herring are quite possible, but such differences need not be captured by differences in the measured characteristics of the herring. There can be large regional differences in consumer taste. The taste of the permanent panel members (two herring masters and a journalist) need not be everyone’s taste. Proximity to important supply ports could also promote quality.

Fourth, the fish shops studied are not a random sample. A fishmonger that is highly rated in one year is extra motivated to participate again in subsequent years, and vice versa. Over the years, the composition of the set of participants has evolved in a way that may depend on the region: the participants from Rotterdam and surroundings may have pre-selected themselves more strongly on quality. They are also more familiar with the panel’s preferences.

Vollaard’s conclusion is therefore untenable. The correct conclusion is that the taste experience of the panel cannot be fully explained (in the way that Vollaard assumes) from the available list of measured quality characteristics. Moreover, the participating fishmongers from the Rotterdam region are perhaps a more select group (preselected for quality) than the other participants.

So it may well be that the herring outlets in Rotterdam and surroundings that participate in the AD herring test get a higher score in the AD herring test than the participating outlets from outside that region, because their herring tastes better (and in general, is of better quality).

Vollaard’s second investigation

The second article goes a lot further. Vollaard compares fish shops that buy their herring from the wholesaler Atlantic with the other fish shops. He finds that the Atlantic customers score higher on average than the others. This difference is also predicted by the model, so Vollaard can try, within the model, to attribute the difference to the measured characteristics (for the “region” question, the difference could not be explained by the model). It turns out that maturation and cleaning account for half of the difference; the rest of the difference is neatly explained by the other variables.

However, according to the AD, Vollaard made mistakes in classifying fishmongers as Atlantic customers. An Atlantic customer whose test score was 0.5 was wrongly left out. The difference in mean score is then 2.4 instead of 3.6. The second article therefore needs to be completely revised: all the numbers in Table 1 are wrong. It is impossible to say whether the corrected analysis will lead to the same conclusions.

Still, I will discuss Vollaard’s further analysis to show that unscientific reasoning is also used here. We had come to the point where Vollaard observes that the difference between Atlantic customers and the others is, according to his model, mainly due to the fact that they score better on the measured characteristics “maturation” and “cleaning”. Suddenly these characteristics are called “subjective”, and Vollaard’s explanation of the difference is conscious or unconscious panel bias.

Apparently, the supposed subjectivity of these characteristics is, for Vollaard, evidence that the panel is biased. He uses the subjective nature of the factors in question to make his explanation of the correlations found, namely panel bias, plausible. Put differently: according to Vollaard there was an opportunity to cheat, and so there must have been cheating.

This is pure speculation. Vollaard tries to substantiate his speculation by looking at the distribution over the classes of “maturation” and “cleaning”. For maturation, the split between “average” / “strong” / “spoiled” is 100/0/0 percent for Atlantic customers and 60/35/5 for non-Atlantic; for cleaning, the split between “good” / “very good” is 0/100 for Atlantic and 50/50 for non-Atlantic. These differences are so large, according to Vollaard, that there must have been cheating. (Note that Atlantic has only 15 fish shops, and there are nine times as many non-Atlantic shops.)

Vollaard seems to think that “maturation” is so subjective that the panel can shift freely between the classes “average”, “strong” and “spoiled” in order to favour Atlantic fish traders. However, it is not obvious that the classifications “maturation” and “cleaning” are as subjective as Vollaard wants to make them appear. In any case, this is a serious accusation. Vollaard allows himself the proposition that the panel members have misused the subjective factors (consciously or unconsciously) to benefit Atlantic customers: they would have consistently awarded Atlantic customers higher ratings than can be justified.

But if the Atlantic customers are rightly evaluated as very high quality on the basis of fat content, weight, microbiology and fresh-from-the-knife, the objective factors which, according to Vollaard, are responsible for the other half of the difference in the average grade, why should they not also rightly score high on maturation and cleaning?

Vollaard notes that the ratings for “maturation” and “microbiological status” are inconsistent, while, again according to him, the first is a subjective judgment of the panel and the second an objective laboratory measurement. The AD noted that maturation is related to oil and fat becoming rancid, a process accelerated by oxygen and heat, while the presence of certain harmful microorganisms is caused by poor hygiene. One would therefore not expect these two kinds of spoilage to go together.

Vollaard’s arguments seem to be ad hoc arguments intended to confirm a previously taken position; statistical or economic science plays no role here. In any case, the second article must be thoroughly revised in connection with the misclassification of Atlantic customers. The resulting revision of Table 1 could shed a completely different light on the difference between Atlantic customers and the others.

My conclusion is that the scientific content of the two articles is low, and that the second article is seriously contaminated by the use of incorrect data. The second article concludes with the words: “These points are not direct evidence of favouring fish traders with the concerned supplier in the AD herring test, but the test team has all appearances against it based on this study.” This conclusion is based on incorrect data, on a possibly wrong model, and on speculation about topics outside statistics or economics. The author has himself created the appearance of bias and then tried to substantiate it, but his reasoning is weak or even erroneous: there is no substantiation, only the appearance remains.

At the beginning I asked the following two questions: does Vollaard’s research give any scientific support to the complaints about the herring test? Is Vollaard’s own summary of his findings justified? My conclusions are that the research conducted does not contribute much to the discussions surrounding the herring test and that the conclusions drawn are erroneous and misleading.

Appendix

Detail points

Vollaard uses *, **, *** for significance at the 10%, 5% and 1% levels. This is a devaluation of the traditional 5%, 1% and 0.1%. Accepting such a high risk of false positives gives an exaggerated picture of the reliability of the results.

I find it very inappropriate to include “in the top 10” as an explanatory variable in the second article. A high score is thus used to explain a high score. I suspect that the second visit to top-10 shops only leads to a minor adjustment of the test score (e.g. 0.1 point to break a tie), so there is no need for this variable in the prediction model.

Why is “price” omitted as an explanatory variable in the second article? In the first article, “price” had a significant effect. (I suspect that including “top ten” is responsible for the loss of significance of some variables, such as “region” and possibly “price”.)

I have the impression that some numbers in the column “Difference between outlets with and without Atlantic as supplier” of Table 1 in the second article are incorrect. Is it “Atlantic customers minus non-Atlantic customers” or, conversely, “non-Atlantic customers minus Atlantic customers”?

In a regression analysis it is usual to check the model assumptions extensively by means of residual analysis (“regression diagnostics”). There is no trace of this in the articles.

The regression analysis uses data from a cross-section of companies over two years, so many fish shops occur twice. Is the correlation between the residual terms of the two years accounted for?

What is the standard deviation of the residual term? This is a much more informative measure of the model’s explanatory/predictive value than the R-squared.

Richard Gill

April 5, 2018

Condemned by statisticians?

A Bayesian analysis of the case of Lucia de B.

de Vos, A. F. (2004). Door statistici veroordeeld? [Condemned by statisticians?] Nederlands Juristenblad, 13, 686–688.


Below, the result of Google Translate, by R.D. Gill


Would having posterior thoughts
Not be offending the gods?
Only the dinosaur
Had them before
Recall its fate! Revise your odds!
(made for a limerick competition at a Bayesian congress).

The following article formed the basis for two full-page articles on Saturday, March 13 in the science supplement of the NRC (with, unfortunately, disturbing typos in the final calculation) and in “the Forum” of Trouw (with the expected announcement on the front page that I would claim that the chance that Lucy de B. was wrongly convicted is 80%, which is not what I claim).

Condemned by statisticians?
Aart F. de Vos

Lucy de B. has been sentenced to life imprisonment. Statistical arguments played a role in this, although their influence was overestimated in the media. Many people died around the times that she was on duty. By chance? The statistician consulted, Henk Elffers, repeated during the current appeal his earlier statement that the probability that this was chance is 1 in 342 million. I quote from the article “statisticians do not believe in coincidence” in the Hague newspaper of January 30: “The chance that nine fatal incidents took place in the JKZ during the shifts of the accused purely by chance is negligible (…) It was not a coincidence. I don’t know what it was. As a statistician I can’t say anything about it. Weighing the evidence is up to you.” The further report showed that the judge had great difficulty with this answer, but the difficulty was not really resolved.

Many witnesses were then heard who spoke about circumstances, plausibility, oddities, improbabilities and undeniably strong connections. The court must combine all of this and arrive at a wise final judgment. A heavy task, certainly given a legal conceptual system that contains very many elements that have to do with probabilities, but requires neither their quantification nor probability calculus for combining them.

The crucial question is of course: how likely is it that Lucy de B. committed murders? Most laypeople will think that Elffers answered that question, and that her guilt is therefore practically certain.

This is a misunderstanding. Elffers did not answer that question. Elffers is a classical statistician, and classical statisticians do not make statements about what is going on, but only about how unlikely things are if nothing is going on. However, there is another branch of statistics: the Bayesian. I belong to that other camp. And I have also been doing some calculations. With the following bewildering result:

If the information that Elffers used to reach his 1 in 342 million were the only information on which Lucy de B. was convicted, then, based on a fairly superficial analysis, I think there would be about an 80% chance that the conviction is wrong.

This article is about that great contrast. It is not an indictment of Elffers, who was extremely modest in court when interpreting his result, nor a plea to acquit Lucy de B., because the court relies mainly on other arguments, albeit without explicit probability statements, and nothing here approaches absolute certainty. It is a plea to take Bayesian statistics seriously in the Netherlands, and this applies to both mathematicians and lawyers.

There is some similarity to the case of Sally Clark, who was sentenced to life imprisonment in England in 1999 because two of her sons died shortly after birth. A wonderful analysis can be found in the September 2002 issue of the internet magazine on “living mathematics” (http://plus.maths.org/issue21/features/clark/index.html).

An expert (not a statistician, but a doctor) declared that the chance that such a thing happened “accidentally” in the given circumstances was 1 in 73 million. I quote: “probably the most infamous statistical statement ever made in a British courtroom (..) wrong, irrelevant, biased and totally misleading.” This statement is demolished in the article cited, which includes a reference to a Bayesian analysis, and a calculation that the probability she was wrongly convicted is greater than 2/3. In that case the expert’s statement was completely wrong on all counts, so that half the nation fell over him, and Sally Clark was eventually released, though only after four years. The case of Lucy de B. is, however, infinitely more complicated. Elffers’ statement is, I will argue, not wrong, but it is misleading. Moreover, the Netherlands has no trial by jury but professional judges, and even though their judgments are not directly based on extensive knowledge of probability theory, they are much more considered. That does not alter the fact that there is a common element in the cases of Lucy de B. and Sally Clark.

Bayesian statistics

My calculations are based on the other school of statistics, the Bayesian, named after Thomas Bayes, the first to write about “inverse probabilities”. That was in 1763. His discovery only became really important after 1960, mainly through the work of Leonard Savage, who proved that when you make decisions under uncertainty you cannot ignore the question of what probabilities the possible states of the truth have (in our case the states “guilty” and “not guilty”). Bayes showed how you can learn about such probabilities from data. Scholars agree on the form of those calculations, which is pure probability theory. However, there is one problem: you have to think about what probabilities you would have given to the states before you saw your data (the prior). And often these are subjective probabilities. And if you have little data, the impact of those subjective probabilities on your final judgment is large. That is a reason for many classical statisticians to oppose this approach, certainly in the Netherlands, where statistics is mainly practised by mathematicians, people who are trained to solve problems without wondering what they have to do with reality. After a fanatical struggle over the foundations lasting decades (see my piece “the religious war of statisticians” at http://staff.feweb.vu.nl/avos/default.htm) the parties have come closer together. With one exception: the classical test. Bayesians have fundamental objections to classical tests. And Elffers’ statement takes the form of a classical test. This is where the foundational debate is focused.

The Lucy Clog case

Following Elffers, who explained his method of calculation in the Nederlands Juristenblad on the basis of a fictional case “Klompsma” [“klomp” is the Dutch word for “clog”. The suffix “-sma” indicates a person from the province of Groningen – RDG. This is all rather insulting] (which I also recalculated, arriving at totally different conclusions), I want to talk about the fictional case of Lucy Clog. Lucy Clog is a nurse who has experienced 11 deaths in a period in which on average only one occurs, but where no further evidence against her can be found. In this case too, Elffers would report an extremely small probability of chance in court, about 1 in 100 million. This is the case for which I claim that a conviction, given my information and my estimates of the context, would have a chance of about 80% of being wrong.

This requires some calculations. Some of them are complicated, but the most important point is not too difficult, although many people appear to struggle with it. A simple example may make this key point clear.

You are at a party and a stranger starts telling you a whole story about the chance that Lucy de B. is guilty, cheerfully calculating away. What do you think: is this a lawyer or a mathematician? If you say “a mathematician”, because lawyers are usually not so good at calculating, you fall into a classic trap. You think: a mathematician can calculate well, and the chance that a lawyer can calculate well is 10%, so it must be a mathematician. What you forget is that there are 100 times more lawyers than mathematicians. So even if only 10% of lawyers could tell such a story, there are still 10 times as many of them. So, under these assumptions, the probability is 10/11 that it is a lawyer. To which I must add that (I think) 75% of mathematicians are male as against 40% of lawyers, which I did not take into account. If the story had said “she” instead of “he”, that would have made a difference.

The same mistake, forgetting the context (more lawyers than mathematicians), can be made in the case of Lucy de B. The chance that you are dealing with a murderous nurse is a priori (before you know what has happened) very much smaller than the chance that you are dealing with an innocent nurse. You have to weigh that against the fact that the chance of 11 deaths is many times greater for a murderous nurse than for an innocent one.

The Bayesian way of performing the calculations in such cases also appears not to be intuitively easy to grasp. With the party example in mind, however, it might not be so bad.

The calculation is done not in terms of probabilities but in terms of “odds”, a word that has no good Dutch translation and is not in common use in the Netherlands. Odds of 3 to 7 mean a chance of 3/10 that something is true and 7/10 that it is not. The English understand this better thanks to horse racing: you win 7 if you are right and lose 3 if you are wrong. Probabilities and odds are two ways of describing the same thing. Another example: odds of 2 to 10 correspond to a probability of 2/12.

You need two elements for a simple Bayesian calculation: the prior odds and the likelihood ratio. In the example, the prior odds of mathematician to lawyer are 1 to 100. The likelihood ratio compares the chance that a mathematician launches into such a calculation (100%) with the chance that a lawyer does so (10%): 10 to 1. Bayes’ theorem now says that you must multiply the prior odds (1:100) by the likelihood ratio (10:1) to get the posterior odds (1:10), corresponding to a probability of 1/11 that it is a mathematician and 10/11 that it is a lawyer, precisely as calculated before. The posterior odds are what you can say after the data are known, the prior odds what you could say before. And the likelihood ratio is how you learn from data.
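In R, the odds form of Bayes’ rule for the lawyer/mathematician example looks like this (a small sketch using the numbers assumed in the text):

prior_odds <- 1 / 100          # mathematicians : lawyers = 1 : 100
likelihood_ratio <- 1.0 / 0.1  # all mathematicians would tell such a story, 10% of lawyers
posterior_odds <- prior_odds * likelihood_ratio   # 1/10, i.e. odds of 1 : 10
posterior_odds / (1 + posterior_odds)             # probability 1/11 that it is a mathematician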

Back to the Lucy Clog case. If the chance of 11 deaths is 1 in 100 million when Lucy Clog is innocent, and 1/2 when she is guilty (more about that later), then the likelihood ratio for innocent against guilty is 1 to 50 million. But to calculate the probability of guilt, you also need the prior odds. They follow from the chance that a random nurse commits murders. I estimate that at 1 in 400,000. There are forty thousand nurses working in hospitals in the Netherlands, so that would mean roughly one murdering nurse every 10 years. I hope that is an overestimate.
Bayes’ theorem now says that the posterior odds of “innocent” against “guilty”, given the 11 deaths, are 400,000 to 50 million. That is 8 to 1000: a small chance, maybe small enough to convict someone. Yet it is large enough to make you want to know more. And there is much more worth knowing.

It is strange that nobody noticed anything. It is even stranger if further investigation yields no evidence of murder. If you think that there would be an 80% chance of finding clues if there really had been many murders, against of course 0% if it was all coincidence, then the likelihood ratio of the fact “nothing has been found” is 100 to 20 in favour of innocence. Applying the rule again gives odds of 40 to 1000, so a small 4% chance of innocence. A conviction now becomes really questionable. And if the suspect continues to deny, which is more plausible when she is innocent than when she is guilty, say twice as plausible, the odds become 80 to 1000, almost 8%.
As an illustration, a picture that requires less calculation (but says the same thing): it follows from the assumptions that in 20,000 years there would be 1008 occasions on which 11 deaths occur: 1000 with a guilty nurse and 8 with an innocent one. Clues are found for 800 of the guilty; of the remaining 200, 100 confess. That leaves 100 guilty and 8 innocent.
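The whole chain of updates for the fictional Lucy Clog case can be reproduced in a few lines of R, using the probabilities assumed above:

odds <- 400000 / 50e6    # prior 400,000 : 1 innocent, likelihood ratio 1 : 50 million -> 8 : 1000
odds <- odds * 100 / 20  # "no clues found": certain if innocent, 20% chance if guilty -> 40 : 1000
odds <- odds * 2         # continued denial, assumed twice as plausible if innocent    -> 80 : 1000
odds / (1 + odds)        # about 0.074, the "almost 8%" chance of innocence

# The same story as frequencies: of 1008 cases of 11 deaths (1000 guilty, 8 innocent),
# clues are found for 800 of the guilty and 100 of the remaining 200 confess,
# leaving 100 guilty and 8 innocent: 8 / 108 is again about 7.4%.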

So Lucy Clog must be acquitted. And then I have not even talked about doubts concerning the chance of 1 in 100 million that 11 people die “by chance”. This chance would come out many times higher in any Bayesian analysis. I estimate, based on experience, that it would come out at about 1 in 2 million. A Bayesian analysis can incorporate uncertainties: uncertainties about the similarity of circumstances and about the qualities of nurses, for example. And uncertainties increase the probability of extreme events enormously; the literature contains many interesting examples. As I said, I think that if I had access to the data that Elffers used, I would not get a chance of 1 in 100 million but of 1 in 2 million. At least, I assume that for the time being; it would not surprise me if it were much higher. Preliminary calculations show that it could even be 1 in 100,000. But 1 in 2 million already differs by a factor of 50 from 1 in 100 million, and my odds would then not be 80 to 1000 but 4000 to 1000, so 4 to 1: an 80% chance of convicting her wrongly. This is the 80% chance of innocence that I mentioned at the beginning. Unfortunately it is not possible, within the scope of this article and without falling into mathematics, to explain the factor 50 (or a factor 1000, if the 1 in 100,000 turns out to be correct) in this last step.
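The sensitivity to the “1 in 100 million” figure can be checked in the same way (a sketch; the factor 50 is simply the ratio of the two assumed chances):

odds <- 0.08 * 50        # 80 : 1000 becomes 4000 : 1000, i.e. 4 to 1
odds / (1 + odds)        # 0.8: the 80% chance that a conviction would be wrong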

What I hope has become clear is that you can always add information. “No clues could be found” and “she has not confessed” are new facts that change the odds. And there may be countless more facts to add. In the case of Lucy de B., such facts exist. In the hypothetical case of Lucy Clog, they do not.

The fact that you can always add information is the most beautiful aspect of a Bayesian analysis. From prior odds you arrive, via data (11 deaths), at posterior odds, and these are in turn the prior odds for the next steps: no clues found and no confession. Virtually every further fact that emerges in a court case can be incorporated into the analysis in this way. Any fact that has a different plausibility under the guilt hypothesis than under the innocence hypothesis contributes. Perhaps you noticed that the calculations only involved probabilities of what actually happened, never of what could have happened but did not. A classical test always talks about the probability of 11 or more deaths. That “or more” is irrelevant and misleading, according to Bayesians. Incidentally, it is not necessarily easier to talk only about what actually happened. What is the probability of exactly 11 deaths if Lucy Clog is guilty? The intensity of the murderousness, something with a lot of uncertainty about it, determines how many deaths there are; and if you are suspended after 11 deaths, you are deprived of the opportunity to commit even more. That last fact matters for the odds. I simply put it at 50%; that is at most a factor of 2 off.

It may be clear that it is not really easy to come to statements when there is no convincing evidence. The most famous example on which many Bayesians have done calculations is a murder in California in 1956, committed by a black man with a white woman in a yellow Cadillac. A couple who met this description was brought to court, and many statistical analyses followed. I have done a lot of calculation on this example myself, and have experienced how difficult, but also how surprising and satisfying, it is to keep adding new elements.

A whole book has even been devoted to another famous case: “A Probabilistic Analysis of the Sacco and Vanzetti Evidence”, published in 1996 by Jay Kadane, professor at Carnegie Mellon and one of the most prominent Bayesians. Anyone who wants to know more can consult his resume on his website http://lib.stat.cmu.edu/~kadane. In the field of “Statistics and the Law” alone he has more than thirty publications to his name, alongside hundreds of other articles. This is now a well-developed field in America.

Conclusion?

I have thought for a long time about what the conclusion of this story is, and I have had to revise my opinion several times. The perhaps surprising conclusion is: the actions of all parties are not that bad; only their rationalisations are, to put it mildly, a bit strange. Elffers makes strange calculations but formulates his conclusions in court in such a way that it becomes intuitively clear that he is not giving the answer the court is looking for. The judge makes judgments that sound as if they are in terms of probabilities but which I can make little sense of. Yet when I see what actually happens, I get the feeling that it is much closer to what is optimal than I would have thought possible, given the absurd rationalisations. The explanation is simple: actions are based on a process shaped by evolution; justifications are stuck on afterwards and based on education. In my opinion, the Bayesian method is the only way to bring decisions under uncertainty and their rationalisation into balance. And that can be very fruitful. But the gain is initially much smaller than people think. What the court does in the Lucy de B. case is surprisingly rational. The 11 deaths are not convincing in themselves, but they are enough to change the prior odds of 1 to 40,000 into odds of 16 to 5; in short, an order of magnitude at which it is necessary to gather additional information before judging. Exactly what the court does.

When I was making my calculations I sometimes thought: I should go to the court. In the end I sent the article, but heard nothing more about it. It turned out that the defence had called a witness who seriously criticised Elffers’ calculations, though without presenting a solution.
Maybe I will one day have the opportunity to calculate the Lucy de B. case in full. That could provide new insights. But it is quite a job. In that case there is much more information than is used here, such as traces of poison in patients. Here too, a Bayesian analysis that takes all the uncertainties into account would probably show that statements by experts who say something like “it is impossible that there is another explanation than the administration of poison by Lucy de B.” should be taken with a grain of salt. Experts are usually people who overestimate their certainty. On the other hand, incriminating information can also accumulate. Ten independent facts that are each twice as likely under the guilt hypothesis change the odds by a factor of about 1000. And if it turns out that the traces of poison in five deceased patients are each nine times as likely under Lucy de B.’s “murderousness” as under other explanations, that contributes a factor of nine to the fifth power, almost 60,000. Et cetera, et cetera.
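A quick check of the two multiplicative factors just mentioned (the factor 2 per fact and the factor 9 per patient are the assumptions made in the text):

2^10  # ten facts, each twice as likely under guilt: the odds change by a factor 1024, "about 1000"
9^5   # five poison traces, each nine times as likely: a factor 59049, "almost 60,000"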

But I think the court reasons more or less like this, in an incomprehensible language, not meant for probability calculators, but sanctioned by evolution. We have few cases in the Netherlands of convictions that later turned out to be wrong. [Well! That was a Dutch layperson, writing in 2004. According to Ton Derksen, about 10% of very long term prisoners (very serious cases) in the Netherlands are innocent. It is probably something similar in other jurisdictions. RDG].

If you conducted the entire process in terms of probability calculus, the resulting debates between prosecutors and defence lawyers would be endless. And given their poor knowledge of probability, that is for the time being also undesirable. They have their own secret language, which usually leads to reasonable conclusions. Even the probability that Lucy de B. is guilty does not really fit into that language. There is also no law in the Netherlands that defines “legal and convincing evidence” in terms of the probability of a justified decision. Is that 95%? Or 99%? Judges will maintain that it is 99.99%. But judges are experts.

So I do not think it is wise to try to cast the process in terms of probability right now. But perhaps this discussion will produce something in the longer term: judges who are well informed about the statistical prior odds of the starting situation, who then write down a number for each piece of evidence from prosecutor and defence (the likelihood ratio of the fact in question), multiply all these numbers at the end, and have their calculations checked by a Bayesian statistician. However, I consider this a long-term perspective. I fear (I am not really young anymore) it will not happen in my lifetime.

The magic of the d’Alembert

Simulations of the d’Alembert betting system on a fair roulette wheel with 36 paying outcomes and one “0”. Bets are at even odds (e.g., red versus black). Each line is one game; each picture is 200 games. Parameters: initial capital of 25 units, a maximum of 21 rounds, and an emergency stop if the capital falls below 15 units.

Source:


Harry Crane and Glenn Shafer (2020), Risk is random: The magic of the d’Alembert. https://researchers.one/articles/20.08.00007

Stewart N. Ethier (2010), The Doctrine of Chances: Probabilistic Aspects of Gambling. Springer-Verlag: Berlin, Heidelberg.

set.seed(12345)
startKapitaal <- 25       # initial capital
eersteInzet <- 1          # initial stake
noodstopKapitaal <- 15    # emergency stop: stop betting if capital falls below this
aantalBeurten <- 21       # maximum number of rounds per game
K <- 100                  # number of sets (one plot per set)
J <- 200                  # number of games per set
winsten <- rep(0, K)      # total net gain per set

for (k in (1:K)){

	plot(x = -2, y = -1, ylim = c(-5, 45), xlim = c(0, 22), xlab = "Beurt", ylab = "Kapitaal")
	abline(h = 25)
	abline(h = 0, col = "red")

	aantalKeerWinst <- 0
	totaleWinst <- 0

	for (j in (1:J)) {

		huidigeKapitaal <- startKapitaal
		huidigeInzet <- eersteInzet
		resultaten <- sample(x = c(-1, +1), prob = c(19, 18), size = aantalBeurten, replace = TRUE)  # 19 of the 37 outcomes lose (including the "0"), 18 win
		verloop <- rep(0, aantalBeurten)
		stappen <- rep(0, aantalBeurten)
		for(i in 1:aantalBeurten) {
			 huidigeResultaat <- resultaten[i]
			 if(huidigeInzet > 0){
				  stap <- huidigeResultaat * huidigeInzet      # amount won or lost this round
				  stappen[i] <- stap
				  huidigeKapitaal <- huidigeKapitaal + stap
				  huidigeInzet <- max(1, huidigeInzet - stap)  # adjust the stake after the result, never below 1 unit
				  if(huidigeKapitaal < noodstopKapitaal) {huidigeInzet <- 0}  # emergency stop: stop betting
				  verloop[i] <- huidigeKapitaal
			 } else {
				  stappen[i] <- 0                              # no longer betting after the emergency stop
				  verloop[i] <- huidigeKapitaal
			 }
		} 
	aantalKeerWinst <- aantalKeerWinst + (verloop[aantalBeurten] > startKapitaal)  # did this game end in profit?
	totaleWinst <- totaleWinst + (huidigeKapitaal - startKapitaal)
	lines(0:aantalBeurten, c(startKapitaal, verloop) + runif(1, -0.15, +0.15))     # small vertical jitter keeps overlapping paths visible
	}
print(c(k, aantalKeerWinst, totaleWinst))  # set number, number of winning games, net gain over the set
winsten[k] <- totaleWinst
}

The program repeatedly runs and plots 200 games of at most 21 rounds each. Below, for each of the 100 sets of 200 games, are printed: the set number (1 to 100), the number of games out of 200 in which the player ended with a profit, and the player’s total net gain over the set.

[1]    1  100 -483
[1]    2  108 -336
[1]    3  103 -517
[1]    4  110 -275
[1]   5 123 -40
[1]   6 125 148
[1]    7  115 -209
[1]    8  104 -427
[1]    9  108 -356
[1]   10  110 -225
[1]   11  101 -440
[1]  12 120  80
[1]   13  108 -334
[1]   14  110 -279
[1]   15   99 -538
[1]   16  114 -101
[1]  17 113 -92
[1]  18 117 -87
[1]   19  104 -363
[1]   20  103 -320
[1]  21 114 -52
[1]   22  107 -422
[1]   23  108 -226
[1]   24  115 -173
[1]   25  110 -209
[1]   26  109 -261
[1]   27  114 -186
[1]  28 120 -62
[1]  29 123  35
[1]   30  101 -442
[1]   31  111 -215
[1]   32  104 -378
[1]  33 120  49
[1]  34 117 -49
[1]   35  119 -102
[1]   36  104 -488
[1]   37  107 -402
[1]  38 122  38
[1]   39  100 -549
[1]  40 116 -31
[1]  41 127 220
[1]   42  105 -427
[1]   43  114 -153
[1]   44  109 -256
[1]   45  119 -166
[1]  46 121  47
[1]   47  105 -417
[1]   48  113 -134
[1]  49 121 111
[1]   50  112 -307
[1]  51 114 -92
[1]  52 123 123
[1]  53 118  24
[1]   54  113 -188
[1]  55 124 127
[1]   56  110 -229
[1]   57  113 -255
[1]   58  101 -554
[1]   59  114 -345
[1]  60 124 236
[1]   61   97 -599
[1]   62  115 -220
[1]  63 120  55
[1]   64  102 -512
[1]  65 121 109
[1]   66  112 -219
[1]   67  112 -181
[1]  68 115 -45
[1]   69  107 -474
[1]   70  109 -272
[1]   71  116 -134
[1]   72  107 -440
[1]   73  108 -470
[1]  74 119 -85
[1]  75 115   1
[1]  76 115 -88
[1]   77  113 -219
[1]  78 118 -55
[1]   79  115 -150
[1]  80 124  70
[1]   81  115 -203
[1]   82  115 -153
[1]   83  109 -219
[1]   84   97 -675
[1]   85  108 -396
[1]   86  112 -220
[1]   87  115 -187
[1]   88  108 -290
[1]   89  114 -182
[1]   90  105 -439
[1]   91  113 -183
[1]   92  115 -216
[1]  93 124 110
[1]   94  115 -173
[1]  95 125 177
[1]   96  110 -203
[1]  97 128 160
[1]  98 114 -83
[1]  99 118 -90
[1] 100 123 106

Steve Gull’s challenge: An impossible Monte Carlo simulation project in distributed computing

At the 8th MaxEnt conference, held in Cambridge UK in 1988, Ed Jaynes was the star of the show. His opening lecture has the following abstract: “We show how the character of a scientific theory depends on one’s attitude toward probability. Many circumstances seem mysterious or paradoxical to one who thinks that probabilities are real physical properties existing in Nature. But when we adopt the “Bayesian Inference” viewpoint of Harold Jeffreys, paradoxes often become simple platitudes and we have a more powerful tool for useful calculations. This is illustrated by three examples from widely different fields: diffusion in kinetic theory, the Einstein–Podolsky–Rosen (EPR) paradox in quantum theory [he refers here to Bell’s theorem and Bell’s inequalities], and the second law of thermodynamics in biology.”

Unfortunately, Jaynes was completely wrong in believing that John Bell had merely muddled up his conditional probabilities in proving the famous Bell inequalities and deriving the famous Bell theorem. At the conference, the astrophysicist Steve Gull presented a three-line proof of Bell’s theorem using some well-known facts from Fourier analysis. The proof sketch can be found in a scan of four smudged overhead sheets on Gull’s personal webpages at Cambridge University.

Together with Dilara Karakozak I believe I have managed to decode Gull’s proof (https://arxiv.org/abs/2012.00719), though this did require quite some inventiveness. I have given a talk presenting our solution and pointing out further open problems. I have the feeling that progress could be made on interesting generalisations using newer probability inequalities for functions of Rademacher variables.

Here are slides of the talk: https://www.math.leidenuniv.nl/~gill/gull-talk.pdf

Not being satisfied, I made a new version of the talk using different tools: notes written with an Apple Pencil on the iPad, which I then discuss while recording my voice and the screen (so: either composing the notes live, or editing them live). https://www.youtube.com/watch?v=W6uuaM46RwU&list=PL2R0B8TVR1dIy0CnW6X-Nw89RGdejBwAY

R stuff

A little plot for one of my friends, code:

a <- 1:360                      # orientation of the first disk, in degrees
b <- 1:360                      # orientation of the second disk, in degrees
# AB[i, j] is the product of the two +/-1 outcomes A(alpha = i) * B(beta = j)
AB <- outer(-sign(cos(pi * a / 180)), sign(cos(pi * b / 180)), "*")
d <- outer(a, b, "-")           # difference of orientations, delta = alpha - beta
ABvec <- as.vector(AB)
dvec <- as.vector(d)
out <- aggregate(x = ABvec, by = list(dvec), FUN = mean)   # average of A*B for each value of delta
str(out)
dvals <- out[ , 1]
corrs <- out[ , 2]
plot(dvals, corrs, type = "l")
lines(dvals, -cos(pi * dvals / 180), col = "magenta")      # negative cosine, for comparison

Output:

'data.frame': 719 obs. of 2 variables:
$ Group.1: int -359 -358 -357 -356 -355 -354 -353 -352 -351 -350 …
$ x : num -1 -1 -0.999 -0.998 -0.997 …

Plot:

You can see the context of this R programming exercise, here:

Take two large wooden disks and colour half of each black, half white. Fix them to a wall. On each disk, near the edge, in the middle of the black half of the circumference, a pointer is painted. Painted on the wall, just above the top of each disk, is another pointer. Around each disk, painted on the wall and equally spaced, are the numbers 1 to 360, with 360 at the top. Spin each disk and wait till it stops. The pointer painted on each disk points to a number, alpha or beta, between 1 and 360, painted on the wall. The pointer at the top of each disk, painted on the wall, points either to black or to white on the disk. That defines A = +/-1 and B = +/-1. You repeat this fairground game many, many times, and average A times B for each value of delta = alpha – beta.

Image: two fairground “wheels of fortune” which have come to rest at A = +1 (black on top), B = -1 (white on top), alpha = 40 (approx), beta = 170 (approx)

The graph drawn above is, I believe, piecewise quadratic.

A fungal year

I want to document the more than 20 species of wild mushrooms which I have collected and enjoyed eating this year. I will go through my collection of photographs in reverse chronological order. But first, the featured image above, taken back in September: Neoboletus luridiformis, the scarletina bolete; in Dutch, heksenboleet (witch’s bolete). Don’t worry: the one to avoid is the devil’s bolete.

I get my mushroom knowledge from quite a few books and from many websites. In this blog I will just give the English and Dutch Wikipedia pages for each species. I highly recommend Googling the Latin name (though note that scientific names do change, as science gives us new knowledge), and if your French, German or other favourite language also has a Wikipedia page, nature lovers’ web pages or foragers’ webpages, check them out, because ideas about which mushrooms are edible, and about how to cook them, vary all over the world. If at some time there was a famine, and the only country people who survived were those who went out into the forest and found something they could eat, then their fellows who had allergic reactions to those same mushrooms did not survive; in this way different human populations have adapted to different fungi populations. It is also very important to consult local knowledge (in the form of local handbooks and local websites), since the dangerous poisonous look-alikes which you must avoid vary in different parts of the world.

Do not eat wild mushrooms raw. You don’t know what is still crawling about in them, and you don’t know what has pooped or pissed on them or munched at them recently. Twenty minutes of gentle cooking should destroy anything nasty; moreover, it breaks down substances which are hard for humans to digest. The rigid structure of mushrooms is made of chitin (which insects use for their external body) and we cannot digest it raw. Some people have allergic reactions to raw chitin.

Contents

Paralepista flaccida

Russula cyanoxantha

Armillaria mellea

Coprinus comatus

Suillus luteus

Amanita muscaria

Sparassis crispa

[To be continued]

Appendix: some mushrooms and fungi to be wondered at, but not eaten

1. Paralepista flaccida

Tawny funnel, roodbruine schijnridderzwam. Grows in my back garden in an unobtrusive spot, fruiting every year in December and January. Yellow-pinkish spore print, lovely smell, nice taste, also after frying! The combination of aroma, taste and spore print just does not fit any of the descriptions I can find of this mushroom or of those easy to confuse with it. There is a poisonous lookalike which, however, is not supposed to taste good; that is why I dared to eat this one. It grows close to a Lawson cypress, but there may be other old wood remains underground in the same spot.

English wikipedia: https://en.wikipedia.org/wiki/Paralepista_flaccida

Netherlands wikipedia: https://nl.wikipedia.org/wiki/Roodbruine_schijnridderzwam

2. Russula cyanoxantha

Charcoal burner, Regenboogrussula (rainbow russula). Very common in the forests behind “Palace het Loo”. A really delicious russula species, easy to identify.

English wikipedia: https://en.wikipedia.org/wiki/Russula_cyanoxantha

Netherlands wikipedia: https://nl.wikipedia.org/wiki/Regenboogrussula

3. Armillaria mellea

Honey fungus, echte honingzwam. These fellows are growing out of the base of majestic beech trees at Palace het Loo. The trees are all being cut down now; the excuse: “they’re sick”; the true reason: high-quality beech wood is very valuable. The trees are hosts to numerous fungi, animals and birds. The managers of the park have been doing their best to kill them off for several decades by blowing away their fallen leaves and driving heavy machinery around. It looks like their evil designs are bearing fruit now.

English wikipedia: https://en.wikipedia.org/wiki/Armillaria_mellea

Netherlands wikipedia: https://nl.wikipedia.org/wiki/Echte_honingzwam

4. Coprinus comatus

Shaggy ink cap, geschubde inktzwam. One of the last ones of the season, very fresh, from a field at the entrance to the palace park. These are so delicious fried in butter with perhaps some lemon juice and a little salt and pepper; they have a gentle mushroom flavour and somehow remind me of oysters. And of autumns in Aarhus, picking them often from the lawns of the university campus.

English wikipedia: https://en.wikipedia.org/wiki/Coprinus_comatus

Dutch wikipedia: https://nl.wikipedia.org/wiki/Geschubde_inktzwam

5. Suillus luteus


Slippery jack, bruine ringboleet. This one looks rather slimy, and it is said that it needs to be cooked well and that it disagrees with some people. It didn’t disagree with me at all, but I must say it did not have much flavour, and it does feel a bit slippery in your mouth.

English wikipedia: https://en.wikipedia.org/wiki/Suillus_luteus

Dutch wikipedia: https://nl.wikipedia.org/wiki/Bruine_ringboleet

6. Amanita muscaria

Fly agaric, vliegenzwam. This mushroom contains both poisons and psychoactive substances. However, both are water-soluble. One therefore boils these mushrooms lightly for 20 minutes in plenty of lightly salted water with a dash of vinegar, then drains them and discards the fluid; after that they can be fried in butter and seasoned to taste with salt and pepper. Prepared this way they are actually very tasty, in my opinion.

Another use for them is to soak them in a bowl of water and leave it in your kitchen. Flies will come to investigate, taste some, get high (literally and figuratively) and drop dead. The smell is pretty disgusting at this stage.

I understand you can dry them, grind them to powder, and make tea. This allegedly destroys the poisons but leaves enough of the psychoactive substances to have interesting effects. I haven’t tried it: one of the effects is to set your heart racing, and since I have a dangerously irregular heart rhythm already, I should not experiment with this.

Some people munch a small piece raw, from time to time, while walking in the forests. I have tried that (a teaspoon-sized piece, even a dessertspoon-sized one) without noticing anything, except that perhaps for a moment everything sparkled more beautifully than usual. Probably that was the placebo effect.

Amanita muscaria is not terribly poisonous. If you cook and eat three or four, you will probably throw up after an hour or two and also experience rather unpleasant hallucinations, to be rounded off with diarrhea and a general feeling of being unwell. You might find yourself getting very large or very small; it depends, of course, on whether you nibble from the right-hand edge of the mushroom or the left-hand edge. You might believe you can fly, so it can be dangerous to be in high places on your own. The poisons may damage your liver, but being water-soluble they are quite efficiently and rapidly excreted from the body, which is a good thing, so eating them just once probably won’t kill you and probably won’t cause permanent damage. Several other Amanita species are deadly poisonous, with poisons which do not dissolve in water and do not leave your body after you have eaten them, but instead destroy your liver in a few days. One must learn to recognise those mushrooms very well. In my part of the world: Amanita phalloides, the death cap (groene knolamaniet), and Amanita pantherina, the panther cap (panteramaniet). I have seen these two even in the parks and roadside verges of my town, as well as in the forests outside. More rare is Amanita virosa, the destroying angel (kleverige knolamaniet), but I believe I have seen it close to home too. It is a white mushroom with white gills, and consequently many people believe you must never touch a white mushroom with white gills. Consequently, the writers of mushroom books themselves generally have the idea that the edible white mushrooms with white gills, which do exist, do not taste particularly good either, and so one should not bother with them. Hence they do not explain well how you can tell the difference. We will later (i.e., earlier this year) meet the counterexample to that myth.

Because of the psychoactive effects of Amanita muscaria it is actually presently illegal, in the Netherlands, to be found in possession of more than a very small amount.

7. Sparassis crispa

The cauliflower mushroom, grote sponszwam. One of my favourites. It does have the tendency to envelop leaves and insects in its folds. Before cooking it has a wonderful, almost aromatic smell, but on frying it seems to lose a lot of flavour.

English wikipedia: https://en.wikipedia.org/wiki/Sparassis_crispa

Dutch wikipedia: https://nl.wikipedia.org/wiki/Grote_sponszwam

Time, Reality and Bell’s Theorem

Featured image: John Bell with a Schneekugel (snowing ball) made by Renate Bertlmann; in the Bells’ flat in Geneva, 1989. © Renate Bertlmann.

Lorentz Center workshop proposal, Leiden, 6–10 September 2021

As quantum computing and quantum information technology move from a wild dream into engineering, and possibly even mass production and consumer products, the foundational aspects of quantum mechanics are more and more hotly discussed. Whether or not various quantum technologies can fulfil their theoretical promise depends on the fact that quantum mechanical phenomena cannot be merely emergent phenomena, emerging from a more fundamental physical framework of a more classical nature. At least, that is what Bell’s theorem is usually understood to say: any underlying mathematical-physical framework which is able, to a reasonable approximation, to reproduce the statistical predictions made by quantum mechanics cannot be both local and realist. These words nowadays have precise mathematical meanings, but they stand for the general world view of physicists like Einstein, and in fact for the general world view of the educated public. Quantum physics is understood to be weird, and perhaps even beyond understanding. “Shut up and calculate”, say many physicists.

Since the 2015 “loophole-free” Bell experiments in Delft, Munich, Vienna and at NIST, one can say even more: laboratory reality cannot be explained by a classical-like underlying theory. Those experiments were essentially watertight, at least as far as experimentally enforceable conditions are concerned. (Of course, there is heated discussion and criticism here, too.)

Since then, however, it seems that more energy than ever is being put into serious mathematical physics which somehow gets around Bell’s theorem. A more careful formulation of the theorem is that the statistical predictions of quantum mechanics cannot be reproduced by a theory having three key properties: locality, realism, and no-conspiracy. What is meant by no-conspiracy? It means that experimenters are free to choose the settings of their experimental devices, independently of the underlying properties of the physical systems which they are investigating. In a Bell-type experiment, a laser is aimed at a crystal which emits pairs of photons; these arrive at two distant polarising photodetectors, i.e. detectors which can measure the polarisation of a photon in directions chosen freely by the experimenters. If the universe actually evolves in a completely deterministic manner, then everything that goes on in those labs (housing the source, the detectors, and all the cables or whatever in between) was already determined at the time of the big bang, and the photons can in principle “know in advance” how they are going to be measured.

At the present time, highly respectable physicists are working on building classical-like models of these experiments using superdeterminism. Gerard ’t Hooft used to be a lonely voice arguing for such models, but he is no longer quite so alone (cf. Tim Palmer, Oxford, UK). Other physicists use a concept called retro-causality: the future influences the past. This leads to “interpretations of quantum mechanics” in which the probabilistic predictions of quantum mechanics, which seem to have a built-in arrow of time, do follow from a time-symmetric physics (cf. Jaroslav Duda, Krakow, Poland).

Yet other physicists dismiss “realism” altogether. The wave function is the reality; the branching into many possible outcomes when quantum systems interact with macroscopic systems is an illusion. The Many Worlds Interpretation is still very much alive. Then there is QBism, where the “B” probably was meant to stand for Bayesian (subjectivist) probability, in which one adopts an almost solipsistic view of physics: the only task of physics is to tell an agent the probabilities of what the agent is going to experience in the future; the agent is rational and uses the laws of quantum mechanics and standard Bayesian probability (the only rational way to express uncertainty or degrees of belief, according to this school) to update probabilities as new information is obtained. So there is only information. Information about what? This never needs to be decided.

On the right, interference patterns of waves of future quantum possibilities. On the left, the frozen actually materialised past. At the boundary, the waves break, and briefly shining fluorescent dots of light on the beach represent the consciousness of sentient beings. Take your seat and enjoy. Artist: A.C. Gill

Yet another serious escape route from Bell is to suppose that the mathematics is wrong. This route is not taken seriously by many, though at the moment Nicolas Gisin (Geneva), an outstanding experimentalist and theoretician, is exploring the possibility that an intuitionistic approach to the real numbers could actually be the right way to set up the physics of time. Klaas Landsman (Nijmegen) seems to be following a similar hunch.

Finally, many physicists do take “non-locality” as the serious way to go, and explore, with fascinating new experiments (a few years ago in China, Anton Zeilinger and Jian-Wei Pan; this year, Donadi et al.), hypotheses concerning the idea that gravity itself leads to non-linearity in the basic equations of quantum mechanics, leading to the “collapse of the wave function” by a definitely non-local process.

At the same time, public interest in quantum mechanics is greater than ever, and non-academic physicists are doing original and interesting work “outside of the mainstream”. Independent researchers can and do challenge orthodoxy, and it is good that someone is doing that. There is a feeling that the mainstream has reached an impasse. In our opinion, the outreach from academia to the public has also to some extent failed. Again and again, science supplements publish articles about amazing new experiments showing ever more weird aspects of quantum mechanics, but it is often clear that the university publicity department and the science journalists involved did not understand a thing, and the newspaper articles are extraordinarily misleading, if not palpable nonsense.

In the Netherlands there has long been a strong interest in the foundational aspects of quantum mechanics and also, of course, in the most daring experimental aspects. The Delft experiment of 2015 has already been mentioned. At CWI, Amsterdam, there is an outstanding group in quantum computation led by Harry Buhrman; Delft has a large group of outstanding experimentalists and theoreticians; and in many other universities there are small groups and outstanding individuals. In particular one must mention Klaas Landsman and Hans Maassen in Nijmegen, and the groups working on the foundations of physics in Utrecht and in Rotterdam (Fred Muller). Earlier we had, of course, Gerard ’t Hooft, Dennis Dieks and Jos Uffink in Utrecht; some of them are retired but still active, others have moved abroad. A new generation is picking up the baton.

The workshop will therefore bring together a heterogeneous group of scientists, many of whom disagree fundamentally on basic issues in physics. Is it an illusion to think that we can ever understand physical reality, so that all we can do is come up with sophisticated mathematics which amazingly gives the right answers? Yet there are conferences and internet seminars where these disagreements are fought out, amicably, again and again. It seems that some of the disagreements stem from different subcultures in physics, with very different uses of the same words. It is certainly clear that many of those working on how to get around Bell’s theorem actually have a picture of that theorem belonging to its early days. Our understanding has developed enormously over the decades, and the latest experimentalists perhaps have a different theorem in mind from the general picture held by theoretical physicists who come from relativity theory. Undoubtedly, the reverse is also true. We are certain that the meeting we want to organise will enable people from diverse backgrounds to understand one another more deeply and possibly “agree to differ” if the difference is a matter of taste; if, however, the difference has observable physical consequences, then we must be able to figure out how to observe them.

The other aim of the workshop is to find better ways to communicate quantum mysteries to the public. A physical theory which basically overthrows our prior conceptions of time, space and reality must impact culture, art and literature; it must become part of present-day life, just as earlier scientific revolutions did. Copernicus, Galileo, Descartes and Newton taught us that the universe evolves in a deterministic (even if chaotic) way. Schrödinger, Bohr and all the rest told us this was not the case. The quantum nature of the universe certainly did impact popular culture, but somehow it did not really change the way most physicists and engineers think about the world.

Illustration from Wikipedia, article on Bell’s Theorem. The best possible local realist imitation (red) for the quantum correlation of two spins in the singlet state (blue), insisting on perfect anti-correlation at 0°, perfect correlation at 180°. Many other possibilities exist for the classical correlation subject to these side conditions, but all are characterized by sharp peaks (and valleys) at 0°, 180°, and 360°, and none has more extreme values (±0.5) at 45°, 135°, 225°, and 315°. These values are marked by stars in the graph, and are the values measured in a standard Bell-CHSH type experiment: QM allows ±1/√2 = ±0.7071…, local realism predicts ±0.5 or less.
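To make the numbers in this caption concrete, here is a minimal sketch (not part of the original text) that tabulates the singlet-state quantum correlation E(θ) = −cos θ against the piecewise-linear local-realist imitation described above, forced to agree with it at 0° and 180°; the function names are purely illustrative.

```python
import math

def e_quantum(theta_deg):
    # Singlet-state correlation at relative detector angle theta (degrees): -cos(theta).
    return -math.cos(math.radians(theta_deg))

def e_sawtooth(theta_deg):
    # Piecewise-linear local-realist imitation, pinned to -1 at 0 degrees and +1 at 180 degrees.
    theta = theta_deg % 360.0
    return -1.0 + theta / 90.0 if theta <= 180.0 else 3.0 - theta / 90.0

for angle in (0, 45, 90, 135, 180, 225, 270, 315):
    print(f"{angle:3d} deg   QM: {e_quantum(angle):+.4f}   local realist: {e_sawtooth(angle):+.4f}")
```

At the Bell-CHSH angles 45°, 135°, 225° and 315° the quantum values printed are ±1/√2 ≈ ±0.7071, while the sawtooth stays at ±0.5: exactly the gap marked by the stars in the figure.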

BOLC (Bureau Verloren Zaken – Bureau of Lost Causes) “reloaded”

The BOLC is back.

Ten years ago (in 2010), the Dutch nurse Lucia de Berk was acquitted at a retrial of charges of 7 murders and 3 attempted murders in hospitals in The Hague, committed over a period of years leading up to just a few days before the memorable date of “9-11”. The last murder was alleged to have been committed in the night of 4 September 2001. The following afternoon, the hospital authorities reported a series of unexplained deaths to the health inspectorate and the police. They also suspended Lucia de B., as she became known in the Dutch media, from duty. The media reported that some 30 suspicious deaths and resuscitations were being investigated. The hospital authorities did not merely report what they believed were terrible crimes; they also believed they knew who the perpetrator was.

The wheels of justice turn slowly, so there was a trial and a conviction; an appeal, a retrial and another conviction; and finally an appeal to the Supreme Court. It took until 2006 before the conviction (a life sentence, which in the Netherlands only ends when the convict leaves prison in a coffin) became irrevocable. Only new evidence could overturn it. New scientific interpretations of old evidence do not count as new evidence. There was no new evidence.

But already in 2003–2004, some people with an inside connection to the Juliana Children’s Hospital were worried about the case. After raising their concerns in confidence with the highest authorities, and being told that nothing could be done, they began to approach journalists. Slowly but surely the media became interested in the case again – the story was no longer that of the terrible witch who had murdered babies and old people for no apparent reason except the pleasure of killing, but of an innocent person mangled by bad luck, incompetent statistics and a monstrous bureaucratic system which, once set in motion, could not be stopped.

Among the supporters of Metta de Noo and Ton Derksen were several professional statisticians, because Lucia’s initial conviction was based on a flawed statistical analysis of incorrect data supplied by the hospital, analysed by amateurs and misunderstood by lawyers. Others were computer scientists; some were high-level civil servants at various government bodies who were appalled at what they saw happening; there were independent scientists, a few medical specialists, a few people with a personal connection to Lucia (but no direct family), and friends of such people. Some of us worked together quite intensively, in particular on the internet site for Lucia, building an English-language version of it and bringing it to the attention of scientists around the world. Once newspapers such as the New York Times and The Guardian began writing about an alleged miscarriage of justice involving misinterpreted statistics, supported by comments from top British statisticians, Dutch journalists had news for the Dutch newspapers, and that kind of news was certainly noticed in the corridors of power in The Hague.

Fast forward to 2010, when the judges not only declared Lucia innocent, but stated out loud in the courtroom that Lucia, together with her fellow nurses, had fought with the utmost professionalism to save the lives of babies whose lives had been put at risk unnecessarily by medical errors of the specialists charged with their care. They also noted that just because the time of death of a terminally ill person could not be predicted in advance, this did not mean that it was necessarily unexplainable and therefore suspicious.

A few of us, elated by our victory, decided to join forces and form a kind of collective that would look at other “lost causes” involving possible miscarriages of justice in which science had been abused. I had already redirected my own research towards the rapidly growing field of forensic statistics, and I was already deeply involved in the Kevin Sweeney case and the case of José Booij. We soon had a website and were hard at work, but shortly afterwards a succession of mishaps occurred. First, Lucia’s hospital paid an expensive lawyer to put pressure on me on behalf of the chief paediatrician of the Juliana Children’s Hospital. I had written some personal information about this person (who happened to be the sister-in-law of Metta de Noo and Ton Derksen) on my homepage at Leiden University. I felt it was crucial to understand how the case against Lucia had started, and this certainly had a lot to do with the personalities of some key figures in that hospital. I also wrote to the hospital asking for more data on the deaths and other incidents on the wards where Lucia had worked, in order to complete the professional, independent statistical investigation that should have taken place when the case began. I was threatened and intimidated. I found some protection from my own university, which paid expensive legal fees on my behalf. My lawyer, however, soon advised me to give in by removing the offending material from the internet, because if the matter went to court the hospital would probably win. I would be damaging the reputation of wealthy people and of a powerful organisation, and I would have to pay for the damage I had caused. I was to promise never to say these things again, and I would be fined if they were ever repeated by others. I never gave in to these demands. Later I did publish some of the material and sent it to the hospital. They remained silent. It was an interesting game of bluff poker.

Second, on ordinary internet forums I wrote a few sentences defending José Booij, which, however, also laid blame on the person who had reported her to the child protection services. That was not a rich person, but certainly a clever one, and they reported me to the police. I became a suspect in a case of alleged defamation, and was interviewed by a friendly local police officer. A few months later I received a letter from the local criminal court stating that if I paid 200 euros in administrative costs, the case would be closed administratively. I did not have to admit guilt, but neither could I have it placed on record that I considered myself innocent.

This led the Bureau Verloren Zaken to suspend its activities for a while. But now it is time for a comeback, a “reboot”. In the meantime I was not idle: I became involved in half a dozen other cases, and learned more and more about law, forensic statistics, scientific integrity, organisations, psychology and social media. The BOLC is back.

ORGANISATION and PLANS

The BOLC has been inactive for a few years, but now that its founder has reached the official retirement age, he is “restarting” the organisation. Richard Gill founded the BOLC on the eve of the acquittal of nurse Lucia de Berk in 2010. A group of friends who had been closely involved in the movement to get Lucia a fair trial decided that they had enjoyed each other’s company so much, and had learned so much from the experience of the preceding years, that they wanted to try out their skills on some new cases. We soon ran into a number of serious problems and temporarily took down our website, although work on several cases continued, more experience was gained, and much was learned.

We think it is time to try again, having learned some useful lessons from our failures of the past years. Here is a rough outline of our plans.

  1. Set up a robust formal structure with a board (chair, secretary, treasurer) and an advisory council. Rather than calling it a scientific advisory board, as is usual in academic organisations, it should be a moral and/or wisdom advisory board, kept informed of our activities and letting us know if they think we are going off the rails.
  2. Possibly apply to become a Stichting (foundation). This also means being something like an association or a club, with an annual general meeting. We would have members, who might also want to make donations, since running a website and occasionally getting into trouble costs money.
  3. Write up the cases we have been involved in over the past years, in particular: the alleged serial killers Ben Geen (UK) and Daniela Poggiali (Italy); the accusations of scientific misconduct concerning the PhD thesis of a student of Peter Nijkamp; the case of the AD Herring Test and the quality of Dutch New Herring; and the case of Kevin Sweeney.