This spreadsheet was shown on TV both yesterday (Friday August 18, the day of the verdicts) and at the start of the trial of Lucy Letby. Apparently, Cheshire Constabulary consider it absolutely damning evidence against Lucy. And indeed, many journalists seem to agree.
The 25 events are almost all of the events at which LL was present during the periods investigated. They were deemed suspicious largely because she was already under suspicion when the police started their investigations. Not surprisingly, most other nurses were present at few of these events. And of course, many nurses probably worked far fewer hours than LL; many were often on administrative duties.
The doctors on the ward are of course missing from the spreadsheet. Doctors were never investigated as suspects; from the start of the police investigation they were apparently assumed to speak gospel truth. During cross-examination at trial, some of them changed various parts of their stories. Of course, unlike Lucy, they do not lie, since they could never (under oath in court, or earlier, when interviewed as witnesses by police) be saying untruths in order to deceive.
Back to the spreadsheet. When drawing conclusions from any data it is important to know how it was gathered. It is important to know what data is missing but would be needed to draw even the most preliminary and tentative inferences.
There was an NHS investigation into the raised rates of deaths and collapses at the Countess of Chester Hospital (CoCH) in summer 2015 and summer 2016, carried out by the Royal College of Paediatrics and Child Health (RCPCH) and published in 2017. The investigation blamed the consultants for an appallingly low standard of care and a terrible situation regarding hygiene. The RCPCH investigators actually wrote that nurse Lucy Letby could not be associated with the events, but that passage was redacted from the published report for privacy reasons. We know that consultants had already presented their fears to hospital management; management was inclined not to believe them and did not act on them, but the fears certainly came to the ears of the RCPCH. One of the consultants (successful TV doctor and Facebook influencer Dr Ravi Jayaram) was on TV yesterday, proudly telling the world that he had been vindicated. On publication of the report, four consultants had had enough and went to the police with their suspicion that LL was a murderer.
Thanks to FOI requests and statistical analysis by independent scientists, we now know that the rate of events (deaths and collapses) is just as much raised when Lucy is not on the ward as it is when she is on the ward. A lot of medical information (as well as the state of the drains at CoCH) points to a seasonal virus epidemic.
The elevated rate went back to normal after the hospital was downgraded (no longer accepting high-risk patients), the drains were rebuilt, and the senior consultant retired, all of which happened soon after the police investigation started. Incidentally, the rate of still-births and miscarriages shows exactly the same pattern.
Lucy must certainly have been a witch, in order to kill babies in the womb, and even when she was far from the hospital.
Those who have studied miscarriages of justice involving serial-killer nurses will recognise this police and prosecution tactic. Is it evil, or is it just stupid? (cf. Hanlon’s razor). I think it is quite simply “learnt”. Police and prosecution learn over the years what convinces jurors, and that is why the same “mistakes” are made again and again. They work!
Professor Gill helped exonerate Lucia de B., and is now making mincemeat of the CBS report on the benefits affair
Top statistician Richard Gill takes apart the research conducted by Statistics Netherlands (CBS) into the custodial placement of children of victims of the benefits affair. ‘CBS should never have come to the conclusion that this group of parents was not hit harder than other parents.’
Carla van der Wal 26-01-23, 06:00 Last update: 08:10
Emeritus professor Richard Gill would prefer to pick edible mushrooms in the woods and spend time with his grandchildren. Nevertheless, the top statistician in the Netherlands, who previously helped exonerate the unjustly convicted Lucia de B., has now sunk his teeth into the benefits affair.
CBS should never have started the investigation into the custodial placement of children of victims of the benefits affair, says Gill. “And CBS should never have drawn the conclusion that this group of parents was not hit harder than other parents. It left many people thinking: only the tax authorities have failed, but fortunately there is nothing wrong with youth care. So all the fuss about ‘state kidnappings’ was unnecessary.”
After Statistics Netherlands calculated how many children of benefits parents were placed out of home (in the end it turned out to be 2090), it seemed as though victims of the affair lost their children more often than similar parents who were not victims. The results, which Gill now denounces, were presented on November 1 last year.
Gill is emeritus professor of mathematical statistics at Leiden University and in the past was an advisor to the methodology department of Statistics Netherlands. In the case of Lucia de B. he showed that the calculations purporting to show that De B. had more deaths during her shifts were incorrect.
There is a special reason that Gill has now sunk his teeth into the benefits affair – but more on that later. First, the CBS report. Gill states that Statistics Netherlands is not equipped for this type of research and points out that, after two research methods were dropped, only one ‘not ideal, but only option’ remained. He also thinks, among other things, that the more severely affected victims of the benefits affair should be the focus of the investigation. He emphasizes that relatively mildly affected families most likely faced much less drastic consequences. CBS itself says that it would have liked to use information about the degree of victimisation, but that none was available.
CBS also acknowledges some of the criticism. “A number of these points CBS itself noted as caveats in the report. On one point there seems to be a misunderstanding,” said a spokesperson, who added that CBS still fully supports the conclusions. CBS will soon discuss the methodology used with Gill, but in any case sees itself as the right party to carry out the study. “CBS has the task of providing insight into social issues with reliable statistical information and data, and has the necessary expertise and techniques. In this case there was a clear social need for statistical insight.”
Gill thinks otherwise and thinks it important to raise this, because injustice keeps him awake at night. That was also his reason for offering his help when questions arose about the conviction of Lucia de B., who since her acquittal can simply be called Lucia de Berk again. In 2003 she was sentenced to life imprisonment.
With the acquittal in 2010, Gill became not only a top statistician, but also a beacon of hope for people who experienced injustice. And José Booij, a mother of a child placed in care, contacted him many years ago.
Somewhere in Gill’s house in Apeldoorn there is still a box of papers from José. It contains her diaries, newspaper clippings and diplomas. She was a little different from other people. A lawyer who fell for women, fled the Randstad and settled in Drenthe. There she became pregnant and had a baby. And she had a neighbour with whom she had a disagreement. “That neighbour had made all kinds of reports about José to the local police, saying that something terrible would happen to the child.” After six weeks, José’s daughter was removed from home.
“What happened to José at the time I also call a state kidnapping, just as the custodial placements among victims of the benefits affair are now being called.” The woman kept fighting to get her child back. But gradually that fight drove her insane. She lost her job; she lost her home. She fled abroad. “Despite a court ruling that the child had to be returned to José, that did not happen. José eventually went off the rails. I now know that she has left information with several people in the Netherlands to ensure that it is available to her daughter when she is ready. But I can’t find José anymore. I heard she was seen in the south of the Netherlands after escaping from a psychiatric clinic in England.”
And meanwhile he keeps that box. And Gill thinks of José when he considers the investigation by Statistics Netherlands into custodial placements of children of victims of the benefits affair. Gill makes mincemeat of it. “The only thing CBS can say is that the results suggest that the differences between the two groups that have been compared are quite small. There should be much more caution, and yet in the summary you see confident statements in bold type, such as: ‘Being duped does not increase the likelihood of child protection measures’. I suspect that CBS was put under pressure to conduct this study, or wanted to justify its existence. Perhaps there is an urge to be of service.”
Time for justice
Now is the time to put that right, Gill thinks. Research needs to be done to find out what is really going on. “I had actually hoped that younger colleagues would have stood up by now to take up such matters.” But as long as that doesn’t happen, he’ll do it himself. Maybe it’s in his genes. It was Gill’s mother – he was born in England – who helped crack the Enigma code used by the Germans to communicate during World War II. Gill wasn’t surprised when he found out. He already suspected that his excellent mind was inherited not only from his father, but also from his mother.
Yet in the end it was his wife – love led him to settle in the Netherlands – who put him on this track. She pointed Gill to Lucia de Berk’s case and encouraged him to get to work. She may have regretted that. For example, when Gill threatened to burn his Dutch passport during a broadcast of The World Keeps on Turning Round (“De wereld draait door”) if the De Berk case was not reviewed. “She said: ‘You can’t say things like that!’”
In fact, he would now like to enjoy his retirement with her – he has been out of paid work for six years. He would spend his days in the woods looking for edible mushrooms, and spend a lot of time with his grandchildren. But now his calculations are also helping to exonerate other nurses. Last year, Daniela Poggiali was released in Italy after Gill, together with an Italian colleague, intervened in her case. And there are still cases waiting for him in England.
And then there is the benefits affair here in the Netherlands, which, as far as Gill is concerned, needs more in-depth, thorough research to find out exactly what caused the custodial placements. “That is why I ended up with Pieter Omtzigt and Princess Laurentien, who are also involved in the benefits affair.” Among people who express themselves diplomatically, he is happy to be the bad cop, the man who shakes things up, as he did when he threatened to set his passport on fire. But at the same time, he mostly hopes that a young statistician will emerge who is prepared to take over the torch.
CBS provided this site with an extensive explanation in response to Gill’s criticism. It recognizes the complexity of this type of research, but sees itself as the appropriate body to carry out that research. An appointment to speak with Gill has already been scheduled. “CBS always tries to explain as clearly and transparently as possible in its reports what has been investigated, how it was done and what the results are.”
Statistics Netherlands also points to nuances in the text of the report, for example following the bold statement ‘Being duped does not increase the chance of child protection measures’: “On an individual level there may well be a relationship between being duped and youth protection; this is stated in several places in the report.” Even if ‘on average no evidence is found for a relationship between being duped and youth protection’, as Statistics Netherlands notes.
Statistics Netherlands fully supports the research and the conclusions as stated in the report. It is pointed out, however, that there are still opportunities for follow-up research, as has also been indicated by Statistics Netherlands.
This is a first attempt (written on the morning of 20 January 2023) to summarise my claims in 500 words and in simple language. It didn’t succeed.
Does CBS have a monopoly on the truth?
Many were shaken up by cabaret performer Peter Pannekoek’s words “1115 state kidnappings”. But they may have been lulled back to sleep by the CBS report “Youth protection and the benefits affair – Quantitative research into child protection measures for children of victims of the benefits affair”. One of the main conclusions (summary, first page) reads
“Being a victim of the benefits scandal does not increase the likelihood of child protection measures”.
That’s a powerful statement. No qualification whatsoever, no “small print”. No mention that it is a statement which can only be made under a slew of assumptions. Alas, a slew of assumptions many of which are patently untrue.
My answer: Maybe not 1115, but could well have been 115.
Now, CBS excels at descriptive statistics, which is also its statutory task: to disclose and represent, neutrally, the facts that politicians, administrators and citizens need. Where CBS has less in-house expertise, because it is certainly not part of its task, is in disentangling cause and effect. This is what we nowadays call “causality”, and it is an extremely topical, important, subtle, and complex subject of scientific inquiry, one which has exploded since Judea Pearl’s 2000 book “Causality”. Can you infer causality by observing correlation or association?
Example. Lucia de B witnessed an awful lot of incidents during her shifts. Many more than one would have expected, and that led to life imprisonment for serial murder. Only later did it become clear that her presence was precisely the reason why medical investigators characterized certain events as incidents!
But can *no* association also indicate causality? Yes! Statistics can mislead, and an appealing visual representation of statistics all the more so. My eye was drawn to Figure 6.1.2 in the CBS report, in which we see three brightly colored bars representing the percentages 1%, 4% and 4%. See! The percentage of custodial placements among the victims is exactly what you would have expected if all those families had not been victimized at all!
I’d say that can’t be a coincidence. After studying the research protocol, including the many algorithms used by the team, it also becomes clear that it is no coincidence. Due to the research choices that the research team felt compelled to make, the difference in the probability of out-of-home placement between “comparable” victims and non-victims has been systematically reduced. So the difference is greater than it appears (it appears to be zero, but it is definitely not). The correct conclusion of the investigation should have been, first, that there were certainly dozens of “extra” custodial placements because of the affair, and possibly a hundred (or even a few hundred). A second conclusion should have been that this bold pilot study has proven that a completely different research design is needed to answer the originally posed question. Possibly something along the lines of the earlier rejected research proposal of Prof. Bart Tromp of the University of Groningen. Incidentally, it is never necessary to go through *all* files of the entire history of all victims. By smartly taking a random sample within a sensibly chosen sub-population, one can limit oneself to properly sorting out relatively few cases.
Good “data science” is impossible without combining great expertise from three areas at the same time: 1) algorithms and computing capabilities; 2) probability theory and inferential statistics (i.e. quantifying the uncertainty in the results found); 3) (last but not least!) subject-specific knowledge of the intended application area – in this case psychology, law, and administration.
I’m thinking of a statistical simulation to illustrate my point. Those two numbers “4%” need error bars of about +/- 1%. Tricky because I must take account of the correlation within the pairs. We can only guess how big it is. So: several simulations with different guesses.
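A minimal sketch of such a simulation, in pure Python: the group size of 4,000 matched pairs and the 4% placement rate are the round numbers from the CBS report as discussed in this piece, and the within-pair correlation values are, as stated, pure guesses. The pairs are made correlated by a simple shared-component trick: with probability rho, both members of a pair copy one Bernoulli draw.

```python
import random

def sd_of_percentage_difference(n_pairs=4000, p=0.04, rho=0.0,
                                n_sims=500, seed=1):
    """Simulated SD (in percentage points) of the difference between two
    matched groups, each with event probability p, with within-pair
    correlation rho generated via a shared-component mixture."""
    rng = random.Random(seed)
    diffs = []
    for _ in range(n_sims):
        d = 0
        for _ in range(n_pairs):
            if rng.random() < rho:       # pair shares one Bernoulli draw
                b = rng.random() < p
                x, y = b, b
            else:                        # pair members are independent
                x = rng.random() < p
                y = rng.random() < p
            d += x - y
        diffs.append(100 * d / n_pairs)
    m = sum(diffs) / n_sims
    return (sum((v - m) ** 2 for v in diffs) / (n_sims - 1)) ** 0.5

# Theory for comparison: SD = 100 * sqrt(2 p (1-p) (1-rho) / n_pairs);
# at rho = 0 this is about 0.44 percentage points, so a +/- 2 SD error
# bar on the difference of the two "4%" bars is roughly +/- 0.9%.
for rho in (0.0, 0.3, 0.6):
    print(rho, round(sd_of_percentage_difference(rho=rho), 2))
```

The stronger the within-pair correlation, the smaller the sampling error of the difference; even at rho = 0.6 the error bar remains a substantial fraction of a percentage point.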
Richard Gill is emeritus professor of mathematical statistics at Leiden University. He is a member of the KNAW and former chairman of the Netherlands Statistical Society (VVS-OR)
Mr. Pieter Omtzigt has asked me to give my expert opinion on the CBS report that examines whether the number of placements of children into care by Dutch child protection authorities increased because their families had fallen victim to the child benefits scandal in the Netherlands.
The current note is preliminary and I intend to refine it further. My purpose is to stimulate discussion among relevant professionals of the methodology used by CBS in this particular case. Feedback, please!
The report gives a clear (and short) account of creative statistical analysis of much complexity. The sophisticated nature of the analysis techniques, the urgency of the question, and the need to communicate the results to a general audience probably led to important “fine print” about the reliability of the results being omitted. The authors seem to me to be too confident in their findings.
Numerous choices had to be made by the CBS team to answer the research questions. Many preferable options were excluded due to data availability and confidentiality. Changing one of the many steps in the analysis, through changes in criteria or methodology, could lead to wildly different answers. The actual finding of two nearly equal percentages (both close to 4%) in the two groups of families is, in my opinion, “too good to be true”. It’s a fluke. Its striking character may have encouraged the authors to formulate their conclusions much more strongly than they are entitled to.
In this regard, I found it significant that the authors note that the datasets are so large that statistical uncertainty is unimportant. But this is simply not true. After constructing an artificial control group, they have two groups of size (in round numbers) 4000, with 4% of cases in each group, i.e. about 160. According to a rule-of-thumb calculation (Poisson variation), the statistical variation in each of those two numbers has a standard deviation of about the square root of 160, so about 12.5. That means that one of those counts of about 160 could easily be off by twice the standard deviation, about 25. The conclusion that the benefits scandal did not lead to more children being removed from home than would otherwise have been the case certainly cannot be drawn. Taking the statistical sampling error into account, it is quite possible that the control group (those not afflicted by the benefits scandal) would have had 50 fewer cases. In that case, the study group experienced 50 more placements than it would have done had its families not been victims of the benefits scandal.
To make the numbers easier still, suppose the light blue bar standing for 4% is 40 cases too low. 40 out of 4000 is 1 out of 100, i.e. 1%. Change the light blue bar from height 4% to height 3% and the two bars don’t look the same at all!
But this is already without taking into account possible systematic errors. The statistical techniques used are advanced and model-based. This means that they depend on the validity of many particular assumptions about the form and nature of the relationships between the variables included in the analysis (using “logistic regression”). The methodology uses these assumptions for their convenience and power (more assumptions mean stronger conclusions, but also a greater threat of “garbage in, garbage out”). Logistic regression is such a popular tool in so many applied fields because the model is so simple: the results are easy to interpret, and the calculation can usually be left to the computer without user intervention. But there’s no reason why the model should be exactly true; one can only hope that it is a useful approximation. Whether it is useful depends on the task for which it is used. The current analysis uses logistic regression for purposes for which it was not designed.
The assumptions of the standard model of logistic regression are certainly not exactly met. It is not clear whether the researchers tested for failure of the assumptions (for example, by looking for interaction effects – violation of additivity). The danger is that the failure of the assumptions can lead to systematic bias in the results, bias that affects the synthetic (“matched”) control group. The central assumption in logistic regression is the additivity of effects of various factors on the log-odds scale (“odds” means probability divided by complementary probability; log means logarithm). This could be true to a first rough approximation, but it is certainly not exactly true. “All models are wrong, but some are useful”.
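To make the log-odds terminology concrete, here is a minimal sketch (with made-up effect sizes, not values from the CBS report) of what additivity on the log-odds scale means:

```python
import math

def logit(p):
    """Log-odds: log(p / (1 - p))."""
    return math.log(p / (1 - p))

def inv_logit(x):
    """Back from log-odds to a probability."""
    return 1 / (1 + math.exp(-x))

# Additivity on the log-odds scale: each factor adds a fixed amount to
# the log-odds, regardless of the other factors.  Illustrative numbers only.
baseline = logit(0.04)          # ~4% baseline risk
effect_a = 0.5                  # hypothetical effect of factor A
effect_b = 0.8                  # hypothetical effect of factor B

p_both = inv_logit(baseline + effect_a + effect_b)
print(f"risk with both factors: {p_both:.1%}")

# On the odds scale the same assumption is multiplicative:
odds = (0.04 / 0.96) * math.exp(effect_a) * math.exp(effect_b)
assert abs(odds / (1 + odds) - p_both) < 1e-12
```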
A good practice is to build models by analyzing a first data set and then evaluating the final chosen model on an independently collected second data set. In this study, not one but numerous models were tested. The researchers seem to have chosen from countless possibilities through subjective assessment of plausibility and effectiveness. This is fine in an exploratory analysis. But the findings of such an exploration must be tested against new data (and there is no new data).
The end result was a procedure to choose “nearest neighbour matches” with respect to a number of observed characteristics of the cases examined. Errors in the logistic regression used to choose matched controls can systematically bias the control group.
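For readers unfamiliar with the matching step, here is a toy sketch of one-to-one nearest-neighbour matching on a single model-based score (the scores are invented; the CBS procedure is of course far more elaborate). The point is that if the fitted scores are systematically off, every match inherits that error, biasing the whole control group:

```python
# Hypothetical model-based scores (e.g. fitted log-odds from the
# logistic regression) for cases and candidate controls.
cases = [0.31, 0.47, 0.52]
controls = [0.10, 0.30, 0.45, 0.55, 0.90]

def nearest_neighbour_matches(cases, controls):
    """Greedy 1:1 matching without replacement on a single score."""
    available = list(controls)
    matches = []
    for score in cases:
        best = min(available, key=lambda c: abs(c - score))
        available.remove(best)               # each control used at most once
        matches.append((score, best))
    return matches

# Pairs each case with the closest remaining control.
print(nearest_neighbour_matches(cases, controls))
```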
Further big questions concern the actual selection of cases and controls at the beginning of the analysis. Not all families affected by the benefits scandal had to pay back a huge amount of subsidy. Mixing the hard-hit with the lightly hit dilutes the effect of the scandal, both in magnitude and in accuracy, the latter because smaller samples lead to relatively less accurate determination of effect size.
Another problem is that the pre-selection control population (families in general from which a child was removed) also contains victims of the benefit scandal (the study population). That brings the two groups closer together, even more so after the familywise one-on-one matching process, which of course selectively finds matches among the subpopulation most likely to be affected by the benefits scandal.
Richard Gill is emeritus professor of mathematical statistics at Leiden University. He is a member of the KNAW and a former chairman of the Nederlands Statistisch Genootschap (VVS-OR).
Mr Pieter Omtzigt asked me to give my expert opinion on the CBS report investigating whether the number of children removed from their homes by the Dutch child protection services increased because their families had become victims of the Dutch child benefits scandal. The present note is preliminary and I intend to refine it further. Comments and criticism are welcome.
There has been much concern about health issues associated with the breeding of short-muzzled pedigree dogs. The Dutch government commissioned a scientific report, Fokken met Kortsnuitige Honden (Breeding of short-muzzled dogs), van Hagen (2019), and based rather stringent legislation on it, restricting breeding primarily on the basis of a single simple measurement of brachycephaly, the CFR: cranial-facial ratio. Van Hagen’s work is a literature study and it draws heavily on statistical results obtained in three publications: Njikam (2009), Packer et al. (2015), and Liu et al. (2017). In this paper, I discuss some serious shortcomings of those three studies and in particular, show that Packer et al. have drawn unwarranted conclusions from their study. In fact, new analyses using their data lead to an entirely different conclusion.
The present work was commissioned by “Stichting Ras en Recht” (SRR; Foundation Justice for Pedigree dogs) and focuses on the statistical research results of earlier papers summarized in the literature study Fokken met Kortsnuitige Honden (Breeding of short-muzzled – brachycephalic – dogs) by dr M. van Hagen (2019). That report is the final outcome of a study commissioned by the Netherlands Ministry of Agriculture, Nature, and Food Quality. It was used by the ministry to justify legislation restricting breeding of animals with extreme brachycephaly as measured by a low CFR, cranial-facial ratio.
An important part of van Hagen’s report is based on statistical analyses in three key papers: Njikam et al. (2009), Packer et al. (2015), and Liu et al. (2017). Notice: the paper Packer et al. (2015) reports results from two separate studies, called by the authors Study 1 and Study 2. The data analysed in Packer et al. (2015) study 1 was previously collected and analysed for other purposes in an earlier paper Packer et al. (2013) which does not need to be discussed here.
In this paper, I will focus on these statistical issues. My conclusion is that the cited papers have many serious statistical shortcomings, which were not recognised by van Hagen (2019). In fact, a reanalysis of the Study 2 data investigated in Packer et al. (2015) leads to conclusions completely opposite to those drawn by Packer et al., and completely opposite to the conclusions drawn by van Hagen. I come to the conclusion that Packer et al.’s Study 2 badly needs to be replicated in a much larger study.
A very important question is just how generalisable the results of those papers are. There is no word on this issue in van Hagen (2019). I will start by discussing the paper which is most relevant to our question: Packer et al. (2015).
An important preparatory remark should be made concerning the term “BOAS”, brachycephalic obstructive airway syndrome. It is a syndrome, which means: a name for some associated characteristics. “Obstructed airways” means: difficulty in breathing. “Brachycephalic” means: having a (relatively) short muzzle. Having difficulty in breathing is a symptom sometimes caused by having obstructed airways; it is certainly the case that the medical condition is often associated with having a short muzzle. That does not mean that having a short muzzle causes the medical condition. In the past, dog breeders have selected dogs with a view to accentuating certain features, such as a short muzzle; unfortunately, in doing so they have sometimes also selected dogs with other, less favourable characteristics. The two features of dogs’ anatomies are associated, but one is not the cause of the other. “BOAS” really means: having obstructed airways and a short muzzle.
Packer et al. (2015) reports findings from two studies. The sample for the first study, “Study 1”, 700 animals, consisted of almost all dogs referred to the Royal Veterinary College Small Animal Referral Hospital (RVC-SAH) in a certain period in 2012. Exclusions were based on a small list of sensible criteria such as the dog being too sick to be moved or too aggressive to be handled. However, this is not the end of the story. In the next stage, those dogs who actually were diagnosed to have BOAS (brachycephalic obstructive airway syndrome) were singled out, together with all dogs whose owners reported respiratory difficulties, except when such difficulties could be explained by respiratory or cardiac disorders. This resulted in a small group of only 70 dogs considered by the researchers to have BOAS, and it involved dogs of 12 breeds only. Finally, all the other dogs of those breeds were added to the 70, ending up with 152 dogs of 13 (!) breeds. (The paper contains many other instances of carelessness).
To continue with the Packer et al. (2015) Study 1 reduced sample of 152 dogs, this sample is a sample of dogs with health problems so serious that they are referred to a specialist veterinary hospital. One might find a relation between BOAS and CFR (craniofacial ratio) in that special population which is not the same as the relation in general. Moreover, the overall risk of BOAS in this special population is by its construction higher than in general. Breeders of pedigree dogs generally exclude already sick dogs from their breeding programmes.
That first study was justly characterised by the authors as exploratory. They had originally used the big sample of 700 dogs for a quite different investigation, Packer et al. (2013). It is exploratory in the sense that they investigated a number of possible risk factors for BOAS besides CFR, and actually used the study to choose CFR as appearing to be the most influential risk factor, when each is taken on its own, according to a certain statistical analysis method, in which already a large number of prior assumptions had been built in. As I will repeat a few more times, the sample is too small to check those assumptions. I do not know if they also tried various simple transformations of the risk factors. Who knows, maybe the logarithm of a different variable would have done better than CFR.
In the second study (“Study 2”), they sampled anew, this time recruiting animals directly, mainly from breeders but also from general practice. A critical selection criterion was a CFR smaller than 0.5, that number being the biggest CFR of a dog with BOAS from Study 1. They especially targeted breeders of breeds with low CFR, especially those which had been poorly represented in the first study. Apparently, the Affenpinscher and Griffon Bruxellois are not often so sick that they get referred to the RVC-SAH; of the 700 dogs entering Study 1, there was, for instance, just 1 Affenpinscher and only 2 Griffon Bruxellois. Of course, these are also relatively rare breeds. Anyway, in Study 2, those numbers became 31 and 20. So: the second study population is not so badly biased towards sick animals as the first. Unfortunately, the sample is much, much smaller, and per breed, very small indeed, despite the augmentation of rarer breeds.
Now it is important to turn to technical comments concerning what perhaps seems to speak most clearly to the non-statistically schooled reader, namely, Figure 2 of Packer et al., which I reproduce here, together with the figure’s original caption.
In the abstract of their paper, they write “we show […] that BOAS risk increases sharply in a non-linear manner”. They do no such thing! They assume that the log odds of BOAS risk, that is: log(p/(1 – p)), depends exactly linearly on CFR and moreover with the same slope for all breeds. The small size of these studies forced them to make such an assumption. It is a conventional “convenience” assumption. Indeed, this is an exploratory analysis, moreover, the authors’ declared aim was to come up with a single risk factor for BOAS. They were forced to extrapolate from breeds which are represented in larger numbers to breeds of which they had seen many fewer animals. They use the whole sample to estimate just one number, namely the slope of log(p/(1 – p)) as an assumed linear function of CFR. Each small group of animals of each breed then moves that linear function up or down, which corresponds to moving the curves to the right or to the left. Those are not findings of the paper. They are conventional model assumptions imposed by the authors from the start for statistical convenience and statistical necessity and completely in tune with their motivations.
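That moving the intercept up or down corresponds to sliding the same curve sideways follows directly from the assumed model. A small numerical check (with invented intercepts and slope, not the fitted values):

```python
import math

def risk(cfr, intercept, slope):
    """Assumed model: log-odds of BOAS linear in CFR (illustrative numbers)."""
    x = intercept + slope * cfr
    return 1 / (1 + math.exp(-x))

slope = -10.0                 # hypothetical common slope across breeds
a1, a2 = 3.0, 5.0             # two hypothetical breed intercepts

# The breed with intercept a2 has the *same* curve as the breed with
# intercept a1, shifted right by (a2 - a1) / (-slope) on the CFR axis:
shift = (a2 - a1) / (-slope)  # = 0.2
for cfr in (0.1, 0.2, 0.3):
    assert abs(risk(cfr, a1, slope) - risk(cfr + shift, a2, slope)) < 1e-12

print(f"curves are identical up to a horizontal shift of {shift}")
```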
One indeed sees in the graphs that all those beautiful curves are essentially segments of the same curve, shifted horizontally. This has not been shown in the paper to be true. It was assumed by the authors of the paper to be true. Apparently, that assumption worked better for CFR than for the other possible criteria which they considered: that was demonstrated by the exploratory (the author’s own characterisation!) Study 1. When one goes from Study 1 to Study 2, the curves shift a bit: it is definitely a different population now.
There are strange features in the colour codes. Breeds which should be there are missing, and breeds which shouldn’t be there are. The authors have exchanged graphs (a) and (b)! This can be seen by comparing the minimum and maximum predicted risks from their Table 2.
Notice that these curves represent predictions for neutered dogs with breed mean neck girth, breed ideal body condition score (breed ideal body weight). I don’t know whose definition of ideal is being used here. The graphs are not graphs of probabilities for dog breeds, but model predictions for particular classes of dogs of various breeds. They depend strongly on whether or not the model assumptions are correct. The authors did not (and could not) check the model assumptions: the sample sizes are much too small.
By the way, breeders’ dogs are generally not neutered. Still, one-third of the dogs in the sample were neutered, so the “baseline” does represent a lot of animals. Notice that there is no indication whatsoever of statistical uncertainty in those graphics. The authors apparently did not find it necessary to add error bars or confidence bands to their plots. Had they done so, the pictures would have given a very, very different impression.
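To illustrate how large the omitted uncertainty is, here is a sketch computing a 95% Wilson score interval for a hypothetical breed contributing 10 dogs, 3 of them affected (the counts are invented; several breeds in these studies are this small or smaller):

```python
import math

def wilson_interval(successes, n, z=1.96):
    """95% Wilson score confidence interval for a binomial proportion."""
    p_hat = successes / n
    denom = 1 + z * z / n
    centre = p_hat + z * z / (2 * n)
    half = z * math.sqrt(p_hat * (1 - p_hat) / n + z * z / (4 * n * n))
    return (centre - half) / denom, (centre + half) / denom

# A breed contributing 10 dogs, 3 of them with BOAS (hypothetical counts):
lo, hi = wilson_interval(3, 10)
print(f"observed 30%, 95% interval roughly {lo:.0%} to {hi:.0%}")
# The interval spans tens of percentage points: error bars this wide
# would change the visual impression of the fitted curves completely.
```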
In their discussion, the authors write “Our results confirm that brachycephaly is a risk factor for BOAS and for the first time quantitatively demonstrate that more extreme brachycephalic conformations are at higher risk of BOAS than more moderate morphologies; BOAS risk increases sharply in a non-linear manner as relative muzzle length shortens”. I disagree strongly with their appraisal. The vaunted non-linearity was just a conventional, convenient (and untested) assumption of linearity on the much more sensible log-odds scale. They did not test this assumption and most importantly, they did not test whether it held for each breed considered separately. They could not do that, because both of their studies were much, much too small. Notice that they themselves write, “we found some exceptional individuals that were unaffected by BOAS despite extreme brachycephaly” and it is clear that these exceptions were found in specific breeds. But they do not tell us which.
They also tell us that other predictors are important next to CFR. Once CFR and breed have been taken into account (in the way that they take it into account!), neck girth (NG) becomes very important.
They also write, “if society wanted to eliminate BOAS from the domestic dog population entirely then based on these data a quantitative limit of CFR no less than 0.5 would need to be imposed”. They point out that it is unlikely that society would accept this, and moreover, it would destroy many breeds which do not have problems with BOAS at all! They mention, “several approaches could be used towards breeding towards more moderate, lower-risk morphologies, each of which may have strengths and weaknesses and may be differentially supported by stakeholders involved in this issue”.
This paper definitely does not support imposing a single simple criterion for all dog breeds, much as its authors might have initially hoped that CFR could supply such a criterion.
In a separate section, I will test their model assumptions, and investigate the statistical reliability of their findings.
Now I turn to the other key paper, Liu et al. (2017). In this 8-author paper, the last and senior author, Jane Ladlow, is a very well-known authority in the field. This paper is based on a study involving 604 dogs of only three breeds, and those are the three breeds which are already known to be most severely affected by BOAS: bulldogs, French bulldogs, and pugs. They use a similar statistical methodology to Packer et al., but now they allow each breed to have a different shaped dependence on CFR. Interestingly, the effects of CFR on BOAS risk for pugs, bulldogs and French bulldogs are not statistically significant. Whether or not they are the same across those three breeds becomes, from the statistical point of view, an academic question.
The statistical competence and sophistication of this group of authors can be seen at a glance to be immeasurably higher than that of the group of authors of Packer et al. They do include indications of statistical uncertainty in their graphical illustrations. They state, “in our study with large numbers of dogs of the three breeds, we obtained supportive data on NGR (neck girth ratio: neck girth/chest girth), but only a weak association of BOAS status with CFR in a single breed.” Of course, part of that could be due to the fact that, in their study, CFR did not vary much within each of those three breeds, as they themselves point out. I have not yet re-analysed their data to check this. CFR was certainly highly variable in these three breeds in both of Packer et al.’s studies, see the figures above, and again in Liu et al. as is apparent from my Figure 2 below. But Liu et al. also point out that anyway, “anatomically, the CFR measurement cannot determine the main internal BOAS lesions along the upper airway”.
Another of their concluding remarks is the rather interesting “overall, the conformational and external factors as measured here contribute less than 50% of the variance that is seen in BOAS”. In other words, BOAS is not very well predicted by these shape factors. They conclude, “breeding toward [my emphasis] extreme brachycephalic features should be strictly avoided”. I should hope that nowadays, no recognised breeders deliberately try to make known risk features even more pronounced.
Liu et al. studied only bulldogs, French bulldogs and pugs. The CFRs of these breeds do show within-breed statistical variation. The study showed that a different anatomical measure was an excellent predictor of BOAS. Liu et al. moreover explain anatomically and medically why one should not expect CFR to be relevant for the health problems of those breeds of dogs.
It is absolutely not true that almost all of the animals in that study have BOAS. The study does not investigate BOS. The study was set up in order to investigate the exploratory findings and hypotheses of Packer et al., and it rejects them, as far as the three breeds considered are concerned. Packer et al. hoped to find a simple relationship between CFR and BOAS for all brachycephalic dogs, but their two studies are both much too small to verify their assumptions. Liu et al. show that for the three breeds studied, the relationship between measurements of body structure and the ill health associated with them varies between breeds.
Contrary to the opinion of van Hagen (2019), there are no “contradictions” between the studies of Packer et al. and Liu et al. The first comes up with some guesses, based on tiny samples from each breed. The second investigates those guesses but discovers that they are wrong for the three breeds most afflicted with BOAS. Study 1 of Packer et al. is a study of sick animals, but Study 2 is a study of animals from the general population. Liu et al. is a study of animals from the general population. (To complicate matters, Njikam et al., Packer et al. and Liu et al. all use slightly different definitions or categorisations of BOAS.)
Njikam et al. (2009), like the later researchers in the field, fit logistic regression models. They exhibit various associations between illness and risk factors per breed. They do not quantify brachycephaly by CFR but by a similar measure, BRA, the ratio of width to length of the skull. CFR and BRA are approximately non-linear one-to-one functions of one another (this would be exact if skull length equalled skull width plus muzzle length, i.e., assuming a spherical cranium), so a threshold criterion in terms of one can be roughly translated into a threshold criterion in terms of the other. Their samples are again, unfortunately, very small (the title of their paper is very misleading).
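Under the spherical-cranium approximation just mentioned, the translation between the two thresholds can be written down explicitly (my own back-of-envelope sketch; the real relation between the two measures is only approximate):

```python
# Assuming skull length = skull width + muzzle length ("spherical cranium"),
# and taking cranial length ≈ skull width, we get:
#   CFR = muzzle / cranial length ≈ muzzle / width
#   BRA = width / skull length = width / (width + muzzle) = 1 / (1 + CFR)

def bra_from_cfr(cfr):
    return 1 / (1 + cfr)

def cfr_from_bra(bra):
    return 1 / bra - 1

# A CFR threshold of 0.5 corresponds roughly to a BRA threshold of 2/3:
print(f"CFR 0.5 ≈ BRA {bra_from_cfr(0.5):.3f}")
assert abs(cfr_from_bra(bra_from_cfr(0.5)) - 0.5) < 1e-12
```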
Their main interest is in genetic factors associated with BOAS apart from the genetic factors behind CFR, and indeed they find such factors! In other words, this study shows that BOAS is very complex. Its causes are multifactorial. They have no data at all on the breeds of primary interest to SRR: these breeds are not much afflicted by BOAS! It seems that van Hagen again has a reading of Njikam et al. which is not justified by that paper’s content.
Fortunately, the data sets used by the publications in PLoS ONE are available as “supplementary material” on the journal’s web pages. First of all, I would like to show a rather simple statistical graphic which shows that the relation between BOAS and CFR in Packer et al.’s Study 2 data does not look at all as the authors hypothesized. First, here are the numbers: a table of numbers of animals with and without BOAS in groups split according to CFR as a percentage, in steps of 5%. The authors recruited animals mainly from breeders, with CFR less than 50%. It seems there were none in their sample with a CFR between 45% and 50%.
Table 1: BOAS versus CFR group
This next figure is a simple “pyramid plot” of percentages with and without BOAS per CFR group. I am not taking into account the breed of these dogs, nor other possible explanatory factors. However, as we will see, the suggestion given by the plot seems to be confirmed by more sophisticated analyses. And that suggestion is: BOAS has a roughly constant incidence of about 20% among dogs with a CFR between 20% and 45%. Below that level, BOAS incidence increases more or less linearly as CFR further decreases.
Be aware that the sample sizes on which these percentages are based are very, very small.
Could it be that the pattern shown in Figure 3 is caused by other important characteristics of the dogs, in particular breed? To investigate this question, I first fitted a linear logistic regression model with only CFR, and then a smooth logistic regression model with only CFR. In the latter, the effect of CFR on BOAS is allowed to be any smooth function of CFR, not a function of a particular shape. The two fitted curves are shown in Figure 4. The solid line is the smooth fit; the dashed line is the fitted logistic curve.
This analysis confirms the impression of the pyramid plot. However, the next results which I obtained were dramatic. I added Breed and Neutered-status to the smooth model, and also investigated some of the other variables which turned up in the papers I have cited. It turned out that “Breed” is not a useful explanatory factor. CFR is hardly significant. Possibly, just one particular breed is important: the Pug. The differences between the others are negligible (once we have taken account of CFR). The variable “neutered” remains somewhat important.
Here (Table 2) is the best model which I found. As far as I can see, the Pug is a rather different animal from all the others. On the logistic scale, even taking account of CFR, Neckgirth and Neuter status, being a Pug increases the log odds ratio for BOAS by 2.5. Below a CFR of 20%, each 5% decrease in CFR increases the log odds ratio for BOAS by 1, so is associated with an increase in incidence by a factor of close to 3. In the appendix can be seen what happens when we allow each breed to have its own effect. We can no longer separate the influence of Breed from CFR and we cannot say anything about any individual breeds, except for one.
Table 2: A very simple model (GLM, logistic regression). Terms: (CFRpct – 20) * (CFRpct < 20) and Breed == “Pug”:TRUE. Significance codes: *** p < 0.001; ** p < 0.01; * p < 0.05.
The pug is in a bad way. But we knew that before. Packer Study 2 data:
Table 3: The Pug almost always has BOAS. The majority of non-Pugs don’t.
The graphs of Packer et al. in Figure 1 are a fantasy. Reanalysis of their data shows that their model assumptions are wrong. We already knew that BOAS incidence, Breed, and CFR are closely related and naturally they see that again in their data. But the actual possibly Breed-wise relation between CFR and BOAS is completely different from what their fitted model suggests. In fact, the relation between CFR and BOAS seems to be much the same for all breeds, except possibly for the Pug.
The paper Packer et al. (2015) is rightly described by its authors as exploratory. This means: it generates interesting suggestions for further research. The later paper by Liu et al. (2017) is excellent follow-up research. It follows up on the suggestions of Packer et al., but in fact it does not find confirmation of their hypotheses. On the contrary, it gives strong evidence that they were false. Unfortunately, it only studies three breeds, and those breeds are breeds where we already know action should be taken. But already on the basis of a study of just those three breeds, it comes out strongly against taking one single simple criterion, the same for all breeds, as the basis for legislation on breeding.
Further research based on a reanalysis of the data of Packer et al. (2015) shows that the main assumptions of those authors were wrong and that, had they made more reasonable assumptions, completely different conclusions would have been drawn from their study.
The conclusion to be drawn from the works I have discussed is that it is unreasonable to suppose that a single simple criterion, the same for all breeds, can be a sound basis for legislation on breeding. Packer et al. clearly hoped to find support for such a criterion but failed: Liu et al. scuppered that dream. Reanalysis of their data with more sophisticated statistical tools shows that they should already have seen that they were betting on the wrong horse.
Below a CFR of 20%, a further decrease in CFR is associated with a higher incidence of BOAS. There is not enough data on every breed to see if this relationship is the same for all breeds. For Pugs, things are much worse. For some breeds, it might not be so bad.
Study 2 of Packer et al. (2015) needs to be replicated, with much larger sample sizes.
Liu N-C, Troconis EL, Kalmar L, Price DJ, Wright HE, Adams VJ, Sargan DR, Ladlow JF (2017) Conformational risk factors of brachycephalic obstructive airway syndrome (BOAS) in pugs, French bulldogs, and bulldogs. PLoS ONE 12(8): e0181928. https://doi.org/10.1371/journal.pone.0181928
Njikam IN, Huault M, Pirson V, Detilleux J (2009) The influence of phylogenic origin on the occurrence of brachycephalic airway obstruction syndrome in a large retrospective study. International Journal of Applied Research in Veterinary Medicine 7(3): 138–143. http://www.jarvm.com/articles/Vol7Iss3/Nijkam%20138-143.pdf
Packer RMA, Hendricks A, Volk HA, Shihab NK, Burn CC (2013) How Long and Low Can You Go? Effect of Conformation on the Risk of Thoracolumbar Intervertebral Disc Extrusion in Domestic Dogs. PLoS ONE 8(7): e69650. https://doi.org/10.1371/journal.pone.0069650
Table 4: A more complex model (GAM, logistic regression)
The above model (Table 4), allowing each breed to have its own separate “fixed” effect, is not a success. That, presumably, was the motivation for making “Breed” a random rather than a fixed effect in the Packer et al. publication: treating breed effects as drawn from a normal distribution, and assuming the same effect of CFR for all breeds, disguises the multicollinearity and the lack of information in the data. The many breeds, most of them contributing only one or two animals, enabled the authors’ statistical software to compute an overall estimate of “variability between breeds”, but the result is pretty meaningless.
Further inspection shows that many breeds are represented by only 1 or 2 animals in the study. Only five breeds are present in anything like reasonable numbers. These five are the Affenpinscher, Cavalier King Charles Spaniel, Griffon Bruxellois, Japanese Chin and Pug, with 31, 11, 20, 10 and 32 animals respectively. I fitted a GLM (logistic regression) trying to explain BOAS in these 105 animals by their breed together with the variables CFR, BCR, and so on. Even then, the multicollinearity between all these variables is so strong that the best model did not include CFR at all. In fact, once BCS (Body Condition Score) was included, no other variable could be added without almost everything becoming statistically insignificant. Not surprisingly, it is good to have a good BCS. Being a Pug or a Japanese Chin is disastrous. The Cavalier King Charles Spaniel is intermediate. The Affenpinscher and Griffon Bruxellois have the least BOAS (and about the same amount, namely an incidence of 10%), even though the mean CFRs of these two breeds seem somewhat different (0.25, 0.15).
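The multicollinearity problem can be seen without fitting any model at all: within these breeds, CFR hardly varies, so breed membership already "explains" almost all the variation in CFR. A minimal sketch with simulated data (the breed mean CFRs of 0.25 and 0.15 are the two quoted above; the other three means, and the within-breed spread, are invented for illustration):

```python
import random
import statistics

random.seed(1)

# Hypothetical breed mean CFRs: 0.25 (Affenpinscher) and 0.15
# (Griffon Bruxellois) are quoted above; the other three are invented.
breed_mean_cfr = {"Affenpinscher": 0.25, "CKCS": 0.20,
                  "Griffon Bruxellois": 0.15, "Japanese Chin": 0.08,
                  "Pug": 0.05}
counts = {"Affenpinscher": 31, "CKCS": 11, "Griffon Bruxellois": 20,
          "Japanese Chin": 10, "Pug": 32}  # sample sizes quoted above

cfr_values, breeds = [], []
for b, n in counts.items():
    for _ in range(n):
        cfr_values.append(random.gauss(breed_mean_cfr[b], 0.02))
        breeds.append(b)

# Fraction of CFR variance explained by breed alone:
# between-breed sum of squares over total sum of squares.
grand_mean = statistics.mean(cfr_values)
ss_total = sum((x - grand_mean) ** 2 for x in cfr_values)
ss_between = sum(
    counts[b] * (statistics.mean([x for x, g in zip(cfr_values, breeds)
                                  if g == b]) - grand_mean) ** 2
    for b in counts)
r2 = ss_between / ss_total
print(round(r2, 2))  # close to 1: breed dummies almost reproduce CFR
```

With breed "explaining" most of the variance in CFR, a regression offered both sets of variables has essentially no information left with which to separate their effects.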
Had the authors presented p-values and error bars the paper would probably never have been published. The study should be repeated with a sample 10 times larger.
This work was partly funded by “Stichting Ras en Recht” (SRR; Foundation Justice for Pedigree dogs). The author accepted the commission by SRR to review statistical aspects of MAE van Hagen’s report “Breeding of short-muzzled dogs” under the condition that he would report his honest professional and scientific opinion on van Hagen’s literature study and its sources.
Note: the present post reproduces the text of our new preprint https://arxiv.org/abs/2104.00333, adding some juicy pictures. Further editing is planned, much reducing the length of this blog-post version of our story.
Summary: We analyse data from the final two years of a long-running and influential annual Dutch survey of the quality of Dutch New Herring served in large samples of consumer outlets. The data was compiled and analysed by Tilburg University econometrician Ben Vollaard, and his findings were publicized in national and international media. This led to the cessation of the survey amid allegations of bias due to a conflict of interest on the part of the leader of the herring tasting team. The survey organizers responded with accusations of failure of scientific integrity. Vollaard was acquitted of wrongdoing by the Dutch national research-integrity authority, whose inquiry nonetheless concluded that further research was needed. We reconstitute the data and uncover important features which throw new light on Vollaard’s findings, focussing on the issue of correlation versus causality: the sample is definitely not a random sample. Taking into account both newly discovered data features and the sampling mechanism, we conclude that there is no evidence of biased evaluation, despite the econometrician’s renewed insistence on his claim.
Keywords: Data generation mechanism, Predator-prey cycles, Feedback in sampling and measurement, Consumer surveys, Causality versus correlation, Questionable research practices, Unhealthy research stimuli.
In surveys intended to help consumers by regularly publishing comparisons of a particular product obtained from different consumer outlets (think of British “fish and chips” bought in a large sample of restaurants and pubs), data is often collected over a number of years and evaluated each year by a panel, which might consist of a few experts, but might also consist of a larger number of ordinary consumers. As time goes by, outlets learn what properties are most valued by the panel, and may modify their product accordingly. Also, consumers learn from the published rankings. Panels are renewed, and new members presumably learn from the past about how they are supposed to weight the different features of a product. Partly due to negative reviews, some outlets go out of business, while new outlets enter the market, and imitate the products of the “winners” of previous years’ surveys. Coming out as “best” boosts sales; coming out as “worst” can be the kiss of death.
For many years, a popular Dutch newspaper (Algemeen Dagblad, in the sequel AD) published two immensely influential annual surveys of two particularly popular and typically Dutch seasonal products: the Dutch New Herring (Dutch: Hollandse Nieuwe) in June, and the Dutch “oliebol” (a kind of greasy, currant-studded, deep-fried spherical doughnut) in December. This paper will study the data published on the newspaper’s website in 2016 and 2017—the last two years of the 36 years in which the AD herring test operated. This data included not only a ranking of all participating outlets and their final scores (on a scale of 0 to 10) but also numerical and qualitative evaluations of many features of the product being offered. A position in the top ten was highly coveted. Being in the bottom ten was a disaster.
For a while, rumours had been circulated (possibly by disappointed victims of low scores!) that both tests were biased. The herring test was carried out by a team of three tasters, whose leader Aad Taal was indeed consultant to a wholesale company called Atlantic (based in Scheveningen, in the same region as Rotterdam), and who offered a popular course on herring preparation. As a director at the Dutch ministry of agriculture he had earlier successfully managed to obtain European Union (EU) legal protection for the official designation “Dutch New Herring”. Products may be sold under this name anywhere in the EU only if meticulously prepared in the circumscribed traditional way, as well as satisfying strict rules of food safety. It is nowadays indeed sold in several countries adjacent to the Netherlands. We will later add some crucial further information about what actually makes a Dutch New Herring different from the traditionally prepared herring of other countries.
Enter econometrician Dr Ben Vollaard of Tilburg University. Himself partial to a tasty Dutch New Herring, he learnt in 2017 from his local fishmonger about the complaints then circulating about the AD Herring Test. The AD is based in the city of Rotterdam, close to the main home ports of the Dutch herring fleet in past centuries. Tilburg is somewhat inland. Not surprisingly, consumers in different regions of the country seem to have developed different tastes in Dutch New Herring, and a common complaint was that the AD herring testers had a Rotterdam bias.
Vollaard decided to investigate the matter scientifically. A student helped him to manually download the data published on the AD’s website on 144 participating outlets in 2016, and 148 in 2017. An undisclosed number of outlets participated in both years, and initial reports suggested it must be a large number. Later we discovered that the overlap consisted of only 23 outlets. Next, he ran a linear regression analysis, attempting to predict the published final score for each outlet in each year, using as explanatory variables the testing team’s evaluations of the herring according to various criteria such as ripeness and cleaning, together with numerical variables such as weight, price, temperature, and laboratory measurements of fat content and microbiological contamination. Most of the numerical variables were modelled by using dummy variables after discretization into a few categories. A single indicator variable for “distance from Rotterdam’’ (greater than 30 kilometres) was used to test for regional bias.
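The kind of variable coding just described can be sketched as follows. The cut-off points below 7 degrees and all the names are our own illustrations, not Vollaard's actual choices; only the 7 degree legal maximum and the 30 km distance threshold come from the text:

```python
# Sketch of discretizing a numerical variable into category dummies,
# plus the single distance indicator. Cut-offs other than 7 (the legal
# maximum) and 30 km (the distance threshold) are invented.
def temp_category(temp_celsius):
    if temp_celsius <= 4:
        return "cold"
    if temp_celsius <= 7:
        return "legal"
    return "warm"

def dummies(category, levels):
    # one dummy per level, omitting the first (reference) category
    return {f"is_{lev}": int(category == lev) for lev in levels[1:]}

row = {"temp": 9.0, "km_from_rotterdam": 45}
x = dummies(temp_category(row["temp"]), ["cold", "legal", "warm"])
x["far_from_rotterdam"] = int(row["km_from_rotterdam"] > 30)
print(x)  # {'is_legal': 0, 'is_warm': 1, 'far_from_rotterdam': 1}
```

Each discretization of this kind involves an arbitrary choice of cut-off points, which is one of the modelling decisions discussed below.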
The analysis satisfyingly showed many highly significant effects, most of which are exactly those that should have been expected. The testing team gave a high final score to fish with a high fat content, served at a low temperature, well cleaned, and a little matured (not too little, not too much). More expensive and heavier fish scored better, too. Being more than 30 km from Rotterdam had a just-significant negative effect, lowering the final score by about 0.5. Given the supreme importance of getting the highest possible score, 10, a loss of half a point could make a huge difference to a new outlet going all out for a top score and hence a position in the “top ten” of the resulting ranking. However, the fact that outlets in the sample far from Rotterdam performed a little worse on average than those close to Rotterdam can have many innocent explanations.
But Vollaard went a lot further. After comparing the actual scores to linear regression model predicted scores based on the measured characteristics of the herring, Vollaard concluded:
Everything indicates that herring sales points in Rotterdam and the surrounding area receive a higher score in the AD Herring Test than can be explained from the quality of the herring served.
That is a pretty serious allegation.
Vollaard published this analysis as a scientific paper Vollaard (2017a) on his university personal web page, and the university put out a press release. The research drew a lot of media attention. In the ensuing transition from a more or less academic study (in fact, originally just a student exercise) to a press release put out by a university publicity department, then to journalists’ newspaper articles adorned with headlines composed by desk editors, the conclusion became even more damning.
Presumably stimulated by the publicity that his work had received, Vollaard decided to go further, now following up on further criticism circulating about the AD Herring Test. He rapidly published a second analysis, Vollaard (2017b), on his university personal web page. His focus was now on the question of a conflict of interest concerning a connection between the chief herring tester and the wholesale outlet Atlantic. Presumably by contacting outlets directly, he identified 20 outlets in the sample whose herring, he believed, had been supplied by that company. Certainly, his presumed Atlantic herring outlets tended to have rather good final scores, and a few of them were regularly in the top ten.
We may surmise that Vollaard must have been disappointed and surprised to discover that his dummy variable for being supplied by Atlantic was not statistically significant when he added it to his model. His existing model (the one on the basis of which he argued that the testing team was not evaluating outlets far from Rotterdam on their own measured characteristics) predicted that Atlantic outlets should indeed, according to those characteristics, have come out exactly as well as they did! He had to come up with something different. In his second paper, he insinuated pro-Atlantic bias by comparing the amount of variance explained by what he considered to be “subjective” variables with the amount explained by the “objective” variables, and he showed that the subjective evaluations (taste and smell, visual impression) explained just as much of the variance as the objective ones (price, temperature, fat percentage). This change of tune represents a serious inconsistency in thinking: it is cherry-picking in order to support a foregone conclusion.
In itself, it does not seem unreasonable to judge a culinary delicacy by taste and smell, and not unreasonable to rely on reports of connoisseurs. However, Vollaard went much further. He hypothesized that “ripeness” and “microbiological state” were both measurements of the same variable; one subjective, the other objective. According to him, they both say how much the fish was “going off”. Since the former variable was extremely important in his model, the latter not much at all, he again accused the herring testers of bias and attributed that bias to conflict of interest. His conclusion was:
A high place in the AD Herring Test is strongly related to purchasing herring from a supplier in which the test panel has a business interest. On a scale of 0 to 10, the final mark for fishmongers with this supplier is on average 3.6 points higher than for fishmongers with another supplier.
He followed that up with the statement:
Almost half of the large difference in average final grade between outlets with and without Atlantic as supplier can be explained by a subjective assessment by the test team of how well the herring has been cleaned (very good/good/moderate/poor) and of the degree of ripening of the herring (light/medium/strong/spoiled).
The implication is that the Atlantic outlets are being given an almost 2 point advantage based on a purely subjective evaluation of ripeness.
The AD defended itself and its herring testers by pointing out that the ripeness or maturity of a Dutch new herring, evaluated by taste and smell, reflects ongoing and initially highly desirable chemical processes (protein changing to fat, fat to oil, oil becoming rancid). Degree of microbiological activity, i.e., contamination with harmful bacteria, could be correlated with that, since dangerous bacterial activity will tend to increase with time once it has started, and both processes are speeded up if the herring is not kept cold enough, but it is of a completely different nature: biological, not chemical. It is caused by carelessness in various stages of preparation of the herring, insufficient cooling, and so on. It is obviously not desirable at all. AD also pointed out that one Atlantic outlet, which in the first of the two years had actually scored very badly, must have been missed. This could be deduced from the number of those outlets and the mean score of the Atlantic-supplied outlets, both reported by Vollaard in his papers.
The newspaper AD complained first to Vollaard and then to his university. With the help of lawyers, a complaint was filed with the Tilburg University committee for scientific integrity. The committee rejected the complaint, but the newspaper took it to the national level. Their lawyers hired the second author of this paper, Richard Gill (RDG), in the hope that he would support their claims. He requested Vollaard’s data-set and also requested that the outlets in the data-set be identified, since one major methodological complaint of his was that Vollaard had not taken account of possible autocorrelation by combining samples from two subsequent years, with presumably a large overlap, but without taking any account of this. Vollaard reluctantly supplied the data but declined to identify the outlets appearing twice or even inform us how many such outlets there were. With the help of AD however, it was possible to find them, and also locate many misclassified outlets. RDG wrote an expert opinion in which he argued that the statistical analysis did not support any allegations of bias or even unreliability of the herring test.
Vollaard had repeatedly stated that he was only investigating correlations, not establishing causality, but at the same time his published statements (quoted in the media) and his spoken statements on national TV make it clear that he considered his analysis results to be damning evidence against the test. This seemed to RDG to be unprofessional, at the very least. RDG moreover identified much statistical amateurism. Vollaard analysed his data much as any econometrician might do: he had a data-set with a variable of interest and a number of explanatory variables, and he ran a linear regression, making numerous modelling choices without any motivation and without any model checking. He fitted a completely standard linear regression model to two samples of Dutch new herring outlets, without any thought to the data generating mechanism. How were outlets selected to appear in the sample?
According to the AD, there were actually 29 Atlantic outlets in Vollaard’s combined sample. Note, there is some difficulty in determining this number. A given outlet may obtain some fish from Atlantic, some from other suppliers, and may change their suppliers over the course of a year. So the origin of the fish actually tasted by the test team cannot be determined with certainty. We see in Table 1 (according to AD), that Vollaard “caught” only about two thirds of the Atlantic outlets, and misclassified several more.
Table 1: Cross-classification of outlets tested over the two years as Atlantic- or not-Atlantic-supplied, as identified by Vollaard (columns: Atlantic by Vollaard / Not Atlantic by Vollaard) and by the AD (rows: Atlantic by AD / Not Atlantic by AD).
At the national level, the LOWI (Landelijk Orgaan Wetenschappelijk Integriteit — the Dutch national organ for investigating complaints of violation of research integrity) re-affirmed the Tilburg University scientific integrity committee’s “not guilty” verdict. Vollaard was not deliberately trying to mislead. “Guilty” verdicts have an enormous impact and imply a finding, beyond a reasonable doubt, of gross research impropriety. This generally leads to termination of university employment contracts and to retraction of publications. They did agree that Vollaard’s analyses were substandard, and they recommended further research. RDG reached out to Vollaard suggesting collaboration, but he declined. After a while, Vollaard’s (still anonymized) data sets and statistical analysis scripts (written in the proprietary Stata language) were also published on his website Vollaard (2020a, 2020b). The data was actually in the form of Stata files; fortunately, it is nowadays possible to read such files in the open source and free R system. The known errors in the classification of Atlantic outlets were not corrected, despite AD’s request. The papers and the files are no longer on Vollaard’s webpages, and he still declines collaboration with us. We have made all documents and data available on our own webpages and on the GitHub page https://github.com/gaofengnan/dutch-new-herring.
RDG continued his re-analyses of the data and began the job of converting his expert opinion report (English translation: https://gill1109.com/2021/06/01/was-the-ad-herring-test-about-more-than-the-herring/) into a scientific paper. It seemed wise to go back to the original sources and this meant a difficult task of extracting data from the AD’s websites. Each year’s worth of data was moreover coded differently in the underlying HTML documents. At this point he was joined by the first author Fengnan Gao (FG) of the present paper who was able to automate the data scraping and cleaning procedures — a major task. Thus, we were able to replicate the whole data gathering and analysis process and this led to a number of surprises.
Before going into that, we will explain what is so special about Dutch New Herring, and then give a little more information about the variables measured in the AD Herring Test.
Dutch New Herring
Every nation around the North Sea has traditional ways of preparing North Atlantic herring. For centuries, herring has been a staple diet of the masses. It is typically caught when the North Atlantic herring population comes together at its spawning grounds, one of them being in the Skagerak, between Norway and Denmark. Just once a year there is an opportunity for fishers to catch enormous quantities of a particular extremely nutritious fish, at the height of their physical condition, about to engage in an orgy of procreation. The fishers have to preserve their catch during a long journey back to their home base; and if the fish is going to be consumed by poor people throughout a long year, further means of conservation are required. Dutch, Danish, Norwegian, British and German herring fleets (and more) all compete (or competed) for the same fish; but what people in those countries eat varies from country to country. Traditional local methods of bringing ordinary food to the tables of ordinary folk become cultural icons, tourist attractions, gastronomic specialities, and export products.
Traditionally, the Dutch herring fleet brought in the first of the new herring catch in mid-June. The separate barrels in the very first catch are auctioned and a huge price (given to charity) is paid for the very first barrel. Very soon, fishmongers, from big companies with a chain of stores and restaurants, to supermarket chains, to small businesses selling fish in local shops and street markets are offering Dutch New Herring to their customers. It’s a traditional delicacy, and nowadays, thanks to refrigeration, it can be sold the whole year long (the designation “new” should be removed in September). Nowadays, the fish arrives in refrigerated lorries from Denmark, no longer in Dutch fishing boats at Scheveningen harbour.
What makes a Dutch new herring any different from the herring brought to other North Sea and Baltic Sea harbours? The organs of the fish are removed when the fish are caught, and the fish are kept in lightly salted water. But two internal organs are left: a fish’s equivalents of our pancreas and kidney. The fish’s pancreas contains enzymes which slowly transform some protein into fat, and this process is responsible for a special, almost creamy taste which is much treasured by Dutch consumers, as well as those in neighbouring countries. See, e.g., the Wikipedia entry for soused herring for more details, https://en.wikipedia.org/wiki/Soused_herring. According to a story still told to Dutch schoolchildren, this process was discovered in the 14th century by a Dutch fisher named Willem Beukelszoon.
The AD Herring Test
For many years, the Rotterdam-based newspaper Algemeen Dagblad (AD) carried out an annual comparison of the quality of the product offered in a sample of consumer outlets. A small team of expert herring tasters paid surprise visits to the typical small fishmonger’s shops and market stalls where customers can order portions of fish and eat them on the premises (or even just standing in a busy food market). The team evaluated how well the fish had been prepared, preferring especially that the fish had not been cleaned in advance but were carefully and properly prepared in front of the client. They judged the taste and checked the temperature at which the fish was given to the customer: by law it may not be above 7 degrees. A sample was sent to a lab for a number of measurements: weight, fat percentage, signs of microbiological contamination. The price (per gram) was also recorded. An important, though subjective, characteristic is “ripeness”. Expert tasters distinguish Dutch new herring which has not ripened (matured) at all: green. After that comes lightly matured, well matured, too much matured, and eventually rotten.
This information was all written down and evaluated subjectively by each team member, then combined. The team averaged the scores given by its three members (a senior herring expert, a younger colleague, and a journalist) to produce a score from 0 to 10, where 10 is perfection; below 5.5 is a failing grade. However, it was not just a question of averaging. Outlets which sold fish which was definitely rotten, definitely contaminated with harmful bacteria, or definitely too warm got a zero grade. The outlets which took part were then ranked. The ten highest ranking outlets were visited again, and their scores possibly adjusted. The final ranking was published in the newspaper, and put in its entirety on internet. Coming out on top was like getting a Michelin star. The outlets at the bottom of the list might as well have closed down straight away. One sees from the histogram below, Figure 1, that in 2016 and 2017, more than 40% of the outlets got a failing grade; almost 10% were essentially disqualified, by being given a grade of zero. The distribution looks nicely smooth except for the peak at zero, which really means that their wares did not satisfy minimal legal health requirements.
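The scoring rule just described can be sketched as a small function. The function and argument names are ours, not the AD's; the 12 degree disqualification cut-off is the one mentioned later in connection with the zeros (the legal limit itself being 7 degrees):

```python
# Sketch of the scoring rule as described: average the three panel
# members' marks, but an outlet with a fatal defect (rotten fish,
# dangerous contamination, far too warm) gets an automatic zero.
# Names and the exact 12-degree cut-off are our reading of the text.
def final_score(marks, rotten=False, contaminated=False, temp_celsius=5.0):
    if rotten or contaminated or temp_celsius > 12:
        return 0.0  # disqualified: minimal legal requirements violated
    return round(sum(marks) / len(marks), 1)

print(final_score([8.0, 7.5, 8.5]))                   # 8.0
print(final_score([8.0, 7.5, 8.5], temp_celsius=13))  # 0.0
```

This two-part structure, a continuous average plus an all-or-nothing disqualification, is exactly what produces the peak at zero in the histogram, and why one regression model for all scores is questionable.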
It is important to understand how outlets were chosen to enter the test. To begin with, the testing team itself automatically revisited last year’s top ten. But further outlets could be nominated by individual newspaper readers, indeed, they could be self-nominated by persons close to the outlets themselves. We are not dealing with a random sample, but with a self-selecting sample, with automatically a high overlap from year to year.
Over the years, there had been more and more acrimonious criticism of the AD Herring Test. As one can imagine, it was mainly the owners of outlets who had bad scores who were unhappy about the test. Many of them, perhaps justly, were proud of their product and had many satisfied customers too. Various accusations were therefore flung around. The most serious one was that the testing team was biased and even had a conflict of interest. The lead taster gave courses on the preparation of Dutch New Herring and led the movement to have the “brand” registered with the EU. There is no doubting his expertise, but he had been hired (in order to give training sessions to their clients) by one particular wholesale business, owned by a successful businessman of Turkish origin, which as one might imagine led to jealousy and suspicion. Especially since a number of the retail outlets supplied by that particular company often (but certainly not always) appeared year by year in the top ten of the annual AD Herring Test. Other accusations were that the herring tasters favoured businesses in the neighbourhood of Rotterdam (home base of the AD). As herring cognoscenti know, people in various Dutch localities have slightly different tastes in Dutch New Herring. Amsterdammers have a different taste from Rotterdammers.
In the meantime, under the deluge of negative publicity, the AD announced that they would now stop their annual herring test. They did hire a law firm, which on their behalf brought an accusation of failure of scientific integrity to Tilburg University’s “Commission for Scientific Integrity”. The law firm moreover approached one of us (RDG) for expert advice. He was initially extremely hesitant to be a hired gun in an attack on a fellow academic, but as he got to understand the data, the analyses and the subject, he had to agree that the AD had some good points. At the same time, various aggrieved herring sellers were following up with their own civil action against the AD; and the wholesaler whose outlets did so well in the test also started a civil action against Tilburg University, since its own reputation was damaged by the affair.
Here is the main result of Vollaard’s first report.
No surprises here. The testing team prefers fatty and larger herring, properly cooled, mildly matured, freshly prepared in view of customers on-site, and well-cleaned too. We have a delightful amount of statistical significance. There are some curious features of Vollaard’s chosen model: some numerical variables (“temp” and “fat”) have been converted into categorical variables by presumably arbitrary choice of cut-off points, while “weight” is taken as numerical. Presumably, this is because one might expect the effect of temperature not to be monotone. Nowadays, one might attempt fitting low-degree spline curves with few knots. Some categories of categorical variables have been merged, without explanation. One should worry about interactions and about additivity. Certainly one should worry about model fit.
We add to the estimated regression model also R’s standard four diagnostic plots in Fig. 2. Dr Vollaard apparently did not carry out any model checking.
Model validation beyond Vollaard’s regression analysis
There are some serious statistical issues. There seem to be a couple of serious outliers. The error distribution seems to have a heavier than normal tail. But we also understand that some observations come in pairs — the same outlet evaluated in two subsequent years. The data set has been anonymized too much. Each outlet should at the least have been given a random code so that one can identify the pairs and take account of possible dependence from one year to the next, easy to do by simply estimating the correlation from the residuals, and then doing a generalized least squares regression with an estimated covariance matrix of the error terms.
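To illustrate the paired-years point: had each outlet been given an anonymous code, the between-year residual correlation could have been estimated directly and fed into a generalized least squares fit. A minimal sketch, with entirely made-up residuals for outlets appearing in both years:

```python
# Made-up residual pairs (same outlet, years 2016 and 2017); the real
# data would give one pair per outlet appearing in both years.
pairs = [(0.5, 0.4), (-1.0, -0.7), (0.2, 0.1), (1.1, 0.9),
         (-0.3, -0.4), (0.8, 0.5), (-0.6, -0.2), (0.0, 0.3)]
r2016 = [a for a, _ in pairs]
r2017 = [b for _, b in pairs]

# Pearson correlation, computed by hand to keep this self-contained
n = len(pairs)
mx, my = sum(r2016) / n, sum(r2017) / n
cov = sum((a - mx) * (b - my) for a, b in pairs)
var_x = sum((a - mx) ** 2 for a in r2016)
var_y = sum((b - my) ** 2 for b in r2017)
rho = cov / (var_x * var_y) ** 0.5
print(round(rho, 2))  # strongly positive: the two years are not independent
```

A substantial positive correlation of this kind would mean that pooling the two years as if they were independent observations understates the standard errors.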
Inspection of the outliers led us to realize that there is a serious issue with the observations which got a final score of zero. Those outlets were essentially disqualified on grounds of gross violation of basic hygiene laws, applied by looking at just a couple of the variables: temperature above 12 degrees (the legal limit is 7), and microbiological activity (dangerous versus low or none). The model should have been split into two parts: a linear regression model for the scores of the not-disqualified outlets; and a logistic regression model, perhaps, for predicting “disqualification” from some of the other characteristics. However, at least it is possible to analyse each of the years separately, and to remove the “disqualified” outlets. That is easy to do. Analysing just the 2017 data, the analysis results look a lot cleaner; the two bad outliers have gone, the estimated standard deviation of the errors is a lot smaller, the normal Q-Q plot looks very nice.
There is another big issue with this data and these analyses which needs to be mentioned, and if possible, addressed. How did the “sample” come to be what it is? A regression model is at best a descriptive account of the correlations in a given data set. Before we should accuse the test team of bias, we should ask how the sample is taken. It is certainly not a random sample from a well-defined population!
Some retail outlets took part in the AD Herring Test year after year. The testing team automatically included last year’s top ten. Individual readers of the newspaper could nominate their own favourite fish shop to be added to the “sample”, and this actually did happen on a big scale. Fish shops which did really badly tended to drop out of future tests and, indeed, some of them stopped doing business altogether:
The “sample” evolves in time by a feedback mechanism.
Everybody could know which qualities the AD testers appreciated, and outlets learnt from their score and written evaluation each year what they had to do better the next year, if they wanted to stay in the running and join the leaders of the pack. The notion of “how a Dutch New Herring ought to taste”, as well as of how it ought to be prepared, was year by year being imprinted by the AD test team on the membership of the sample. New sales outlets joined and competed by adapting themselves to the criteria and the tastes of the test team.
The same newspaper ran another annual ranking, of outlets selling a traditional Dutch New Year’s delicacy: a kind of doughnut (though without a hole in the middle) called the oliebol. Oliebollen are somewhat stodgy and oily, roughly spherical objects, enlivened with currants and sprinkled with icing sugar. The testing panel was able to taste these objects blind. It consisted of about twenty ordinary folk, and every year part of the panel resigned and was replaced with fresh persons. Peter Grünwald of Centrum Wiskunde & Informatica, the national research institute for mathematics and computer science in the Netherlands, developed a simulation model which showed how the panel’s taste in oliebollen would vary over the years as sales outlets tried to imitate the winners of the previous year, while the notion of what constitutes a good oliebol was not fixed. Taking the underlying quality to be one-dimensional, he demonstrated the well-known predator-prey oscillations (Angerbjorn et al., 1999). Similar lines of thinking have appeared in the study of, for example, fashion cycles: Acerbi et al. (2012) propose a mechanism by which individual actors imitate other actors’ cultural traits, and preferences for those traits, such that realistic cyclic rise-and-fall patterns (see their Figure 4) emerge in simulations. A later study, Apriasz et al. (2016), divides a society into “snobs” and “followers”: followers copy everyone else, while snobs imitate only the trend within their own group and deliberately go against the followers. Under suitable parameter regimes, clear recurring patterns (see their Figures 3 and 4) similar to the predator-prey cycle arise.
The AD was again engaged in a legal dispute with disgruntled owners of low-ranked sales outlets, which eventually led to this annual test being abandoned too. In fact, the AD forbade Grünwald to publish his results. We have made some initial simulation studies of a model with higher-dimensional latent quality characteristics, which seems to exhibit similar but more complex behaviour.
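Since neither Grünwald’s model nor our own simulations have been published, the following is only an illustrative toy sketch of the kind of predator-prey cycling referred to above. It uses the classical Lotka-Volterra equations, with “prey” x read as the share of outlets offering the currently fashionable style and “predator” y as the panel’s appetite for that style; all parameter values are arbitrary.

```python
import numpy as np

# Toy Lotka-Volterra sketch (purely illustrative; not Grünwald's model).
alpha, beta, delta, gamma = 1.0, 1.0, 1.0, 1.0   # arbitrary toy parameters
dt, steps = 0.01, 5000
x, y = 1.5, 0.5
xs = []
for _ in range(steps):
    dx = x * (alpha - beta * y)    # the fashion spreads while appetite is low
    dy = y * (delta * x - gamma)   # appetite grows while the fashion is common
    x, y = x + dt * dx, y + dt * dy
    xs.append(x)

xs = np.array(xs)
# Recurring local maxima of x indicate cyclic rise-and-fall behaviour.
n_peaks = int(np.sum((xs[1:-1] > xs[:-2]) & (xs[1:-1] > xs[2:])))
```

Over this time horizon the trajectory shows several complete rise-and-fall cycles: neither the fashion nor the panel’s taste ever settles down, which is the qualitative point of the feedback argument.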
New analyses, new insights
It turns out that the correlation between the residuals of the same outlet participating in two subsequent years is large, about 0.7. However, the number of such pairs (23) is fairly small, so this has little effect on Vollaard’s findings: taking account of it slightly increases the standard errors of the estimated coefficients. Moreover, we knew that, according to the AD, many outlets had been incorrectly classified by Vollaard, and since he did not wish to collaborate with us, we returned to the source of his data: the web pages of the AD. This enabled us to play with the various data-coding choices made by Vollaard and to try out various natural alternative model specifications. As well as this, we could use the list of outlets certified by the AD and Atlantic as having actually been supplied with the Dutch new herring tested in 2016 and 2017.
First, it is clear from the known behaviour of the test team that a score of zero means something special. There is no reason to expect a linear model to be the right model for all participating outlets. The outlets which were given a zero score were essentially disqualified on objective public health criteria, namely temperature above 12 degrees and definitely dangerous microbiological activity. We decided to re-analyse the data while leaving out all disqualified outlets.
Next, there is the issue of correlation between outlets appearing in two subsequent years. Actually, these turned out to be a much smaller proportion of the sample than expected, so correcting for the autocorrelation hardly makes a difference; on the other hand, the correction is easily made superfluous by dropping all outlets appearing for the second year in succession. This leaves two years of data, with the second year containing only “newly nominated” outlets.
Going back to the original data published by the AD, we discovered that Vollaard had made some adjustments to the published final scores. As was known, the testing team revisited the top ten scoring outlets and ranked their product again, recording (in one of the two years) scores like 9.1, 9.2, … up to 10, in order to resolve ties. In both years, scores such as 8− or 8+ were registered, meant to indicate “nearly an 8” or “a really good 8”, following traditional Dutch school and university grading. The whole-number scores “5”, “6”, “7”, “8”, “9”, “10” have familiar conventional descriptions, running from “unsatisfactory” through “satisfactory”, “good” and “very good” up to “excellent”. Linear regression analysis requires a numerical response variable, so Vollaard had to convert “9−” (almost worthy of the qualification “very good”) into a number. It seems that he rounded it to 9, but one might just as well have made it 9 − ε for some choice of ε, for instance ε = 0.01, 0.03, or 0.1.
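The conversion itself is trivial to implement; what matters is that the size of the adjustment is a modelling choice, not a fact in the data. A minimal sketch (the function name and the default adjustment are our own, purely for illustration):

```python
def grade_to_number(grade: str, eps: float = 0.1) -> float:
    """Convert a Dutch school-style grade such as '8', '8+' or '9-' to a
    number.  The value of eps is a modelling choice, not a fact about the
    data; rounding '9-' straight to 9 corresponds to eps = 0."""
    grade = grade.replace("\u2013", "-").strip()   # accept en-dash minus signs
    if grade.endswith("+"):
        return float(grade[:-1]) + eps
    if grade.endswith("-"):
        return float(grade[:-1]) - eps
    return float(grade)
```

For example, `grade_to_number("9-", eps=0.03)` quantifies “9−” as 9 − 0.03, while `eps=0` reproduces the rounding that Vollaard appears to have used.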
We compared the results obtained using various conventions for dealing with the “broken” grades, and it turned out that the choice of ε had a major impact on the statistical significance of the “just significant” or “almost significant” variables of main interest (supplier; distance from Rotterdam). Whether or not one follows standard model-selection strategies based on leaving out insignificant variables also has a major impact on their significance. The size of their effects becomes a little smaller, while standard errors remain large. Had Vollaard followed one of several common model-selection strategies, he could have found that the effect of “Atlantic” was significant at the 5% level, supporting his prior opinion! As experienced statistical practitioners such as Winship and Western (2016) have noted, when multicollinearity is present in a linear regression analysis, the estimates are highly sensitive to small perturbations in model specification. In our data set, what should be unimportant changes in which variables are included, together with unimportant changes in the quantification of the response variable, keep changing the statistical significance of the variables which interested Vollaard the most: the results which led to a media circus, societal impact, and reputational damage to several big concerns, as well as to the personal reputation of the chief herring tester Aad Taal.
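The sensitivity analysis amounts to refitting the same regression under each quantification convention and watching the test statistic move. The sketch below does this on simulated data (the sample size, the “Atlantic” effect and the proportion of broken grades are all hypothetical), reporting the t-statistic of the coefficient of interest for several values of ε.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical illustration: a borderline covariate effect whose t-statistic
# moves when the quantification of "broken" grades (eps) changes.
n = 80
atlantic = (rng.random(n) < 0.3).astype(float)    # indicator of interest
base = 8.0 + 0.15 * atlantic + rng.normal(scale=0.4, size=n)
broken = rng.random(n) < 0.25                     # outlets given a "minus" grade

def t_statistic(eps):
    # Quantify a broken grade "k-" as k - eps; leave other scores alone.
    y = np.where(broken, np.round(base) - eps, base)
    X = np.column_stack([np.ones(n), atlantic])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    s2 = resid @ resid / (n - 2)
    cov = s2 * np.linalg.inv(X.T @ X)
    return beta[1] / np.sqrt(cov[1, 1])

t_stats = {eps: t_statistic(eps) for eps in (0.0, 0.01, 0.03, 0.1)}
```

When the unperturbed t-statistic sits near a conventional critical value, even these tiny recodings can push it across the significance threshold in either direction, which is exactly the fragility described above.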
Having “cleaned” the data by removing the repeat tests, removing the outlets breaking food safety regulations, and using the AD’s own classification of suppliers, the effects of being an Atlantic-supplied outlet and of being distant from Rotterdam are smaller and hardly significant. Varying ε changes them. Leaving out a few of the variables of smallest statistical significance changes, yet again, whether the two main variables of interest are significant. The size of the effects remains about the same: Atlantic-supplied outlets score a bit higher, and outlets distant from Rotterdam score a bit lower, when taking account of all the other variables in the way chosen by the analyst.
By modelling the effects of so many variables through discretization, Vollaard created multicollinearity. The results depend on arbitrarily chosen cut-offs and other arbitrary choices: for instance, “weight” was kept numerical, but “price” was made categorical. This could have been avoided by assuming additivity and smoothness and using modern statistical methodology, but in fact the data set is simply too small for this to be meaningful. Trying to incorporate interactions between clearly important variables caused multicollinearity and failure of the standard estimation procedures. Different model selection procedures, and nonparametric approaches, end up finding quite different models, with no grounds for preferring one to another. We can come up with several excellent (and quite simple) predictors of the final score, but we cannot say anything about causality.
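Why interactions between discretized variables break down is easy to demonstrate. In the hypothetical sketch below, two strongly correlated continuous variables are each cut into six bins and then crossed; with only 100 observations, many of the 36 cells are empty, so the corresponding indicator columns are identically zero and the design matrix is rank-deficient.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical illustration: crossing two discretized, strongly correlated
# variables yields empty cells and a rank-deficient design matrix, the kind
# of multicollinearity that makes standard estimation procedures fail.
n = 100
price = rng.normal(size=n)
weight = 0.9 * price + np.sqrt(1 - 0.9 ** 2) * rng.normal(size=n)

def bin_codes(v, n_bins=6):
    # Assign each value to one of n_bins quantile-based bins (codes 0..5).
    edges = np.quantile(v, np.linspace(0, 1, n_bins + 1)[1:-1])
    return np.digitize(v, edges)

cp, cw = bin_codes(price), bin_codes(weight)

# One indicator column per (price-bin, weight-bin) combination.
inter = np.zeros((n, 36))
for i in range(n):
    inter[i, 6 * cp[i] + cw[i]] = 1.0

empty_cells = int(np.sum(inter.sum(axis=0) == 0))
rank = int(np.linalg.matrix_rank(inter))
```

Because the indicator columns have disjoint supports, the rank of the interaction block is exactly the number of occupied cells; every empty cell is a coefficient the data cannot estimate at all.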
Vollaard’s analyses confirmed what we knew in advance (the “taste” of the testers). There is no reason whatsoever to accuse them of favouritism. The advantage of outlets supplied by Atlantic is tiny or non-existent, certainly nothing like the huge amount which Vollaard carelessly insinuated. The distant outlets are typically new entrants to the AD Herring Test. Their clients like the kind of Dutch new herring which they have been used to in their region. Vollaard’s interpretation of his own results obtained from his own data set was unjustified. He said he was only investigating correlations, but he appeared on national TV talk shows to say that his analyses made him believe that the AD Herring Test was severely biased. This caused enormous upset, financial and reputational damage, and led to a lot of money being spent on lawyers.
Everyone makes mistakes and what’s done is done, but we do all have a responsibility to learn from mistakes. The national committee for investigating accusations of violation of scientific integrity (LOWI) did not find Vollaard guilty of gross misdemeanour. They did recommend further statistical analysis. Vollaard declined to participate. No problem. We think that the statistical experiences reported here can provide valuable pedagogical material.
In our opinion, the suggestion that the AD Herring Test was in any way biased cannot be investigated by simple regression models. The “sample” is self-recruiting and much too small. The sales outlets which join the sample are doing so in the hope of getting the equivalent of a Michelin star. They can easily know in advance what are the standards by which they will be evaluated. Vollaard’s purely descriptive and correlational study confirms exactly what everyone (certainly everyone “in the business”) should know. The AD Herring Test, over the years that it operated, helped to raise standards of hygiene and presentation, and encouraged sales outlets to get hold of the best quality Dutch New Herring, and to prepare and serve it optimally. As far as subjective evaluations of taste are concerned, the test was indubitably somewhat biased toward the tastes valued by consumers in the region of Rotterdam and The Hague, and at the main “herring port” Scheveningen. But the “taste” of the herring testers was well known. Their final scores fairly represent their public, written evaluations, as far as can be determined from the available data.
The quality of the statistical analysis performed by Ben Vollaard left a great deal to be desired. To put it bluntly, from the statistical point of view it was highly amateurish. Economists who self-publish statistical reports under the flag of their university on matters of great public interest should have their work peer-reviewed and should rapidly publish their data sets. His results are extremely sensitive to minor variations in model choice and specification, to minor variations in quantifications of verbal scores, and there is not enough data to investigate his assumption of additivity. Any small effects found could as well be attributed to model misspecification as to conscious or unconscious bias on the part of the herring testers. We are reminded of Hanlon’s razor “never attribute to malice that which is adequately explained by stupidity”. In our opinion, in this case, Ben Vollaard was actually a victim of the currently huge pressure on academics to generate media interest by publishing on issues of current public interest. This leads to immature work which does not get sufficient peer review before being fed to the media. The results can cause immense damage.
Statisticians in general should not be afraid to join in societal debates. The total silence concerning this affair from the Dutch statistical society, which even has an econometric chapter, was a shame. Fortunately, the society has recently set up a new section devoted to public outreach.
A huge number of statistical analyses are performed and published by amateurishly matching formal properties of a data set (types of variables, the shape of the data file) to standard statistical models, with no consideration given to model assumptions or to checks of those assumptions. Vollaard’s data set can provide a valuable teaching resource, and we have published a version with an English-language description of the variables. We have made two versions available: Vollaard’s data set as put together by his student, but now with outlets identified; and the newly constituted data set with Atlantic-supplied outlets according to the AD. Both are available in our GitHub repository https://github.com/gaofengnan/dutch-new-herring.
It would be interesting to add to the data some earlier years’ data, and investigate whether scores of repeatedly evaluated outlets tended to increase over the years. At the very least, it would be good to know which of the year 2016 outlets were repeat participants.
Just before submitting this article, we became aware of Vollaard and van Ours (2021), in which Dr Ben Vollaard makes essentially the same accusations, with essentially the same flawed arguments.
More study must be done of the feedback processes involved in consumer research panels.
Conflict of interest
The second author was paid by a well-known law firm for a statistical report on Vollaard’s analyses. His report, dated April 5, 2018, appeared in English translation earlier in this blog, https://gill1109.com/2021/06/01/was-the-ad-herring-test-about-more-than-the-herring/. He also discloses that the best Dutch New Herring he ever ate was at one of the retail outlets of Simonis in Scheveningen, which got its herring from the wholesaler Atlantic. He had this experience before any involvement in the Dutch New Herring scandals, the topic of this paper.
Alberto Acerbi, Stefano Ghirlanda, and Magnus Enquist. The logic of fashion cycles. PLoS ONE, 7(3):e32541, 2012. https://doi.org/10.1371/journal.pone.0032541
Anders Angerbjorn, Magnus Tannerfeldt, and Sam Erlinge. Predator–prey relationships: arctic foxes and lemmings. Journal of Animal Ecology, 68(1):34–49, 1999. https://www.jstor.org/stable/2647297
Rafał Apriasz, Tyll Krueger, Grzegorz Marcjasz, and Katarzyna Sznajd-Weron. The hunt opinion model—an agent based approach to recurring fashion cycles. PLoS ONE, 11(11):e0166323, 2016. https://doi.org/10.1371/journal.pone.0166323
Christopher Winship and Bruce Western. Multicollinearity and model misspecification. Sociological Science, 3:627–649, 2016.