
Repeated measurements with unintended feedback: The Dutch New Herring scandals

Fengnan Gao and Richard D. Gill; 24 July 2022

Note: the present post reproduces the text of our new preprint https://arxiv.org/abs/2104.00333, adding some juicy pictures. Further editing is planned, much reducing the length of this blog-post version of our story.

Summary: We analyse data from the final two years of a long-running and influential annual Dutch survey of the quality of Dutch New Herring served in large samples of consumer outlets. The data was compiled and analysed by Tilburg University econometrician Ben Vollaard, and his findings were publicized in national and international media. This led to the cessation of the survey amid allegations of bias due to a conflict of interest on the part of the leader of the herring tasting team. The survey organizers responded with accusations of failure of scientific integrity. Vollaard was acquitted of wrongdoing by the Dutch authority, whose inquiry nonetheless concluded that further research was needed. We reconstitute the data and uncover important features which throw new light on Vollaard’s findings, focussing on the issue of correlation versus causality: the sample is definitely not a random sample. Taking into account both newly discovered data features and the sampling mechanism, we conclude that there is no evidence of biased evaluation, despite the econometrician’s renewed insistence on his claim.

Keywords: Data generation mechanism, Predator-prey cycles, Feedback in sampling and measurement, Consumer surveys, Causality versus correlation, Questionable research practices, Unhealthy research stimuli.

https://en.wikipedia.org/wiki/Soused_herring#/media/File:Haring_04.jpg, © https://commons.wikimedia.org/wiki/User:Takeaway

Introduction

In surveys intended to help consumers by regularly publishing comparisons of a particular product obtained from different consumer outlets (think of British “fish and chips” bought in a large sample of restaurants and pubs), data is often collected over a number of years and evaluated each year by a panel, which might consist of a few experts, but might also consist of a larger number of ordinary consumers. As time goes by, outlets learn what properties are most valued by the panel, and may modify their product accordingly. Also, consumers learn from the published rankings. Panels are renewed, and new members presumably learn from the past about how they are supposed to weight the different features of a product. Partly due to negative reviews, some outlets go out of business, while new outlets enter the market, and imitate the products of the “winners” of previous years’ surveys. Coming out as “best” boosts sales; coming out as “worst” can be the kiss of death.

For many years, the popular Dutch newspaper Algemeen Dagblad (in the sequel AD) published two immensely influential annual surveys of two particularly popular and typically Dutch seasonal products: the Dutch New Herring (Dutch: Hollandse Nieuwe) in June, and the Dutch “oliebol” (a kind of greasy, currant-studded, deep-fried spherical doughnut) in December. This paper studies the data published on the newspaper’s website for 2016 and 2017—the last two of the 36 years in which the AD herring test operated. This data included not only a ranking of all participating outlets and their final scores (on a scale of 0 to 10) but also numerical and qualitative evaluations of many features of the product being offered. A position in the top ten was highly coveted. Being in the bottom ten was a disaster.

For a while, rumours had been circulating (possibly spread by disappointed victims of low scores!) that both tests were biased. The herring test was carried out by a team of three tasters, whose leader Aad Taal was indeed a consultant to a wholesale company called Atlantic (based in Scheveningen, in the same region as Rotterdam), and who offered a popular course on herring preparation. As a director at the Dutch ministry of agriculture he had earlier successfully obtained European Union (EU) legal protection for the official designation “Dutch New Herring”. Products may be sold under this name anywhere in the EU only if meticulously prepared in the circumscribed traditional way, as well as satisfying strict rules of food safety. It is nowadays indeed sold in several countries adjacent to the Netherlands. We will later add some crucial further information about what actually makes a Dutch New Herring different from the traditionally prepared herring of other countries.

Enter econometrician Dr Ben Vollaard of Tilburg University. Himself partial to a tasty Dutch New Herring, he learnt in 2017 from his local fishmonger about the complaints then circulating about the AD Herring Test. The AD is based on the city of Rotterdam, close to the main home ports of the Dutch herring fleet in past centuries. Tilburg is somewhat inland. Not surprisingly, consumers in different regions of the country seem to have developed different tastes in Dutch New Herring, and a common complaint was that the AD herring testers had a Rotterdam bias.

Vollaard decided to investigate the matter scientifically. A student helped him to manually download the data published on their website on 144 participating outlets in 2016, and 148 in 2017. An undisclosed number of outlets participated in both years, and initial reports suggested it must be a large number. Later we discovered that the overlap consisted of only 23 outlets. Next, he ran a linear regression analysis, attempting to predict the published final score for each outlet in each year, using as explanatory variables the testing team’s evaluations of the herring according to various criteria such as ripeness and cleaning, together with numerical variables such as weight, price, temperature, and laboratory measurements of fat content and microbiological contamination. Most of the numerical variables were modelled by using dummy variables after discretization into a few categories. A single indicator variable for “distance from Rotterdam’’ (greater than 30 kilometres) was used to test for regional bias.

The analysis satisfyingly showed many highly significant effects, most of which are exactly those that should have been expected. The testing team gave a high final score to fish with a high fat content and a low serving temperature, which was well cleaned and a little matured (not too little, not too much). More expensive and heavier fish scored better, too. Being more than 30 km from Rotterdam had a just significant negative effect, lowering the final score by about 0.5. Given the supreme importance of getting the highest possible score, 10, a loss of half a point could make a huge difference to a new outlet going all out for a top score and hence a position in the “top ten” of the resulting ranking. However, the fact that outlets in the sample far from Rotterdam performed a little worse on average than those close to Rotterdam can have many innocent explanations.

But Vollaard went a lot further. After comparing the actual scores to linear regression model predicted scores based on the measured characteristics of the herring, Vollaard concluded:

Everything indicates that herring sales points in Rotterdam and the surrounding area receive a higher score in the AD Herring Test than can be explained from the quality of the herring served.

That is a pretty serious allegation.

Vollaard published this analysis as a scientific paper Vollaard (2017a) on his university personal web page, and the university put out a press release. The research drew a lot of media attention. In the ensuing transition from a more or less academic study (in fact, originally just a student exercise) to a press release put out by a university publicity department, then to journalists’ newspaper articles adorned with headlines composed by desk editors, the conclusion became even more damning.

Presumably stimulated by the publicity that his work had received, Vollaard decided to go further, now following up on further criticism circulating about the AD Herring Test. He rapidly published a second analysis, Vollaard (2017b), on his university personal web page. His focus was now on the question of a conflict of interest concerning a connection between the chief herring tester and the wholesale outlet Atlantic. Presumably by contacting outlets directly, he identified 20 outlets in the sample whose herring, he believed, had been supplied by that company. Certainly, his presumed Atlantic herring outlets tended to have rather good final scores, and a few of them were regularly in the top ten.

We may surmise that Vollaard must have been disappointed and surprised to discover that his dummy variable for being supplied by Atlantic was not statistically significant when he added it to his model. His existing model (the one on the basis of which he argued that the testing team was not evaluating outlets far from Rotterdam using their own measured characteristics) predicted that Atlantic outlets should indeed, according to those characteristics, have come out exactly as well as they did! He had to come up with something different. In his second paper, he insinuated pro-Atlantic bias by comparing the amount of variance explained by what he considered to be “subjective” variables with the amount explained by the “objective” variables, and he showed that the subjective (taste and smell, visual impression) evaluations explained just as much of the variance as the objective evaluations (price, temperature, fat percentage). This change of tune represents a serious inconsistency in thinking: this is cherry-picking in order to support a foregone conclusion.

In itself, it does not seem unreasonable to judge a culinary delicacy by taste and smell, and not unreasonable to rely on reports of connoisseurs. However, Vollaard went much further. He hypothesized that “ripeness” and “microbiological state” were both measurements of the same variable; one subjective, the other objective. According to him, they both say how much the fish was “going off”. Since the former variable was extremely important in his model, the latter not much at all, he again accused the herring testers of bias and attributed that bias to conflict of interest. His conclusion was:

A high place in the AD Herring Test is strongly related to purchasing herring from a supplier in which the test panel has a business interest. On a scale of 0 to 10, the final mark for fishmongers with this supplier is on average 3.6 points higher than for fishmongers with another supplier.

He followed that up with the statement:

Almost half of the large difference in average final grade between outlets with and without Atlantic as supplier can be explained by a subjective assessment by the test team of how well the herring has been cleaned (very good/good/moderate/poor) and of the degree of ripening of the herring (light/medium/strong/spoiled).

The implication is that the Atlantic outlets are being given an almost 2-point advantage based on a purely subjective evaluation of ripeness.

More media attention followed: Vollaard appeared on current affairs programs on Dutch national TV, and his work was even reported in The Economist, https://www.economist.com/europe/2017/11/23/netherlands-fishmongers-accuse-herring-tasters-of-erring.

The AD defended itself and its herring testers by pointing out that the ripeness or maturity of a Dutch new herring, evaluated by taste and smell, reflects ongoing and initially highly desirable chemical processes (protein changing to fat, fat to oil, oil becoming rancid). Degree of microbiological activity, i.e., contamination with harmful bacteria, could be correlated with that, since dangerous bacterial activity will tend to increase with time once it has started, and both processes are speeded up if the herring is not kept cold enough; but it is of a completely different nature: biological, not chemical. It is caused by carelessness in various stages of preparation of the herring, insufficient cooling, and so on. It is obviously not desirable at all. The AD also pointed out that Vollaard must have missed at least one Atlantic outlet: one which, in the first of the two years, had actually scored very badly. This could be deduced from the number of Atlantic-supplied outlets and their mean score, both reported by Vollaard in his papers.

The newspaper AD complained first to Vollaard and then to his university. With the help of lawyers, a complaint was filed with the Tilburg University committee for scientific integrity. The committee rejected the complaint, but the newspaper took it to the national level. Their lawyers hired the second author of this paper, Richard Gill (RDG), in the hope that he would support their claims. He requested Vollaard’s data-set and also requested that the outlets in the data-set be identified, since one of his major methodological complaints was that Vollaard had combined samples from two subsequent years, with presumably a large overlap, without taking any account of the resulting dependence. Vollaard reluctantly supplied the data but declined to identify the outlets appearing twice or even to inform us how many such outlets there were. With the help of AD, however, it was possible to find them, and also to locate many misclassified outlets. RDG wrote an expert opinion in which he argued that the statistical analysis did not support any allegations of bias or even unreliability of the herring test.

Vollaard had repeatedly stated that he was only investigating correlations, not establishing causality, but at the same time his published statements (quoted in the media) and his spoken statements on national TV made it clear that he considered his analysis results to be damning evidence against the test. This seemed to RDG to be unprofessional, at the very least. RDG moreover identified much statistical amateurism. Vollaard analysed his data much as any econometrician might: he had a data-set with a variable of interest and a number of explanatory variables, and he ran a linear regression, making numerous modelling choices without any motivation and without any model checking. He fitted a completely standard linear regression model to two samples of Dutch new herring outlets, without any thought to the data generating mechanism. How were outlets selected to appear in the sample?

According to the AD, there were actually 29 Atlantic outlets in Vollaard’s combined sample. Note that there is some difficulty in determining this number. A given outlet may obtain some fish from Atlantic, some from other suppliers, and may change suppliers over the course of a year, so the origin of the fish actually tasted by the test team cannot be determined with certainty. We see in Table 1 that, according to AD, Vollaard “caught” only about two thirds of the Atlantic outlets, and misclassified several more.


                      Atlantic by Vollaard   Not Atlantic by Vollaard   Total
Atlantic by AD                  18                       11               29
Not Atlantic by AD               2                      261              263
Total                           20                      272              292
Table 1: Atlantic- and not Atlantic-supplied outlets tested over two years as identified by Vollaard and the AD respectively.
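The arithmetic behind these counts can be checked directly; a short Python sketch, with the counts hard-coded from Table 1:

```python
# Cross-classification of the 292 tested outlets, counts hard-coded
# from Table 1 (rows: AD's classification, columns: Vollaard's).
caught = 18          # Atlantic according to both AD and Vollaard
missed = 11          # Atlantic according to AD, missed by Vollaard
false_atlantic = 2   # labelled Atlantic by Vollaard, but not by AD
agreed_not = 261     # not Atlantic according to both

atlantic_per_ad = caught + missed                # 29 outlets
atlantic_per_vollaard = caught + false_atlantic  # 20 outlets
total = caught + missed + false_atlantic + agreed_not  # 292 tests

fraction_caught = caught / atlantic_per_ad
print(atlantic_per_ad, atlantic_per_vollaard, total, round(fraction_caught, 2))
# prints: 29 20 292 0.62 -- i.e. "about two thirds" of the Atlantic outlets caught
```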

At the national level, the LOWI (Landelijk Orgaan Wetenschappelijk Integriteit — the Dutch national organ for investigating complaints of violation of research integrity) re-affirmed the Tilburg University scientific integrity committee’s “not guilty” verdict. Vollaard was not deliberately trying to mislead. “Guilty” verdicts have an enormous impact and imply a finding, beyond a reasonable doubt, of gross research impropriety. This generally leads to termination of university employment contracts and to retraction of publications. They did agree that Vollaard’s analyses were substandard, and they recommended further research. RDG reached out to Vollaard suggesting collaboration, but he declined. After a while, Vollaard’s (still anonymized) data sets and statistical analysis scripts (written in the proprietary Stata language) were also published on his website Vollaard (2020a, 2020b). The data was actually in the form of Stata files; fortunately, it is nowadays possible to read such files in the open source and free R system. The known errors in the classification of Atlantic outlets were not corrected, despite AD’s request. The papers and the files are no longer on Vollaard’s webpages, and he still declines collaboration with us. We have made all documents and data available on our own webpages and on the GitHub page https://github.com/gaofengnan/dutch-new-herring.

RDG continued his re-analyses of the data and began the job of converting his expert opinion report (English translation: https://gill1109.com/2021/06/01/was-the-ad-herring-test-about-more-than-the-herring/) into a scientific paper. It seemed wise to go back to the original sources and this meant a difficult task of extracting data from the AD’s websites. Each year’s worth of data was moreover coded differently in the underlying HTML documents. At this point he was joined by the first author Fengnan Gao (FG) of the present paper who was able to automate the data scraping and cleaning procedures — a major task. Thus, we were able to replicate the whole data gathering and analysis process and this led to a number of surprises.

Before going into that, we will explain what is so special about Dutch New Herring, and then give a little more information about the variables measured in the AD Herring Test.

Dutch New Herring

https://commons.wikimedia.org/wiki/File:Haring_03.jpg, © https://commons.wikimedia.org/wiki/User:Takeaway

Every nation around the North Sea has traditional ways of preparing North Atlantic herring. For centuries, herring has been a staple diet of the masses. It is typically caught when the North Atlantic herring population comes together at its spawning grounds, one of them being in the Skagerrak, between Norway and Denmark. Just once a year there is an opportunity for fishers to catch enormous quantities of a particular extremely nutritious fish, at the height of their physical condition, about to engage in an orgy of procreation. The fishers have to preserve their catch during a long journey back to their home base; and if the fish is going to be consumed by poor people throughout a long year, further means of conservation are required. Dutch, Danish, Norwegian, British and German herring fleets (and more) all compete (or competed) for the same fish; but what people in those countries eat varies from country to country. Traditional local methods of bringing ordinary food to the tables of ordinary folk become cultural icons, tourist attractions, gastronomic specialities, and export products.

Traditionally, the Dutch herring fleet brought in the first of the new herring catch in mid-June. The separate barrels in the very first catch are auctioned and a huge price (given to charity) is paid for the very first barrel. Very soon, fishmongers, from big companies with a chain of stores and restaurants, to supermarket chains, to small businesses selling fish in local shops and street markets are offering Dutch New Herring to their customers. It’s a traditional delicacy, and nowadays, thanks to refrigeration, it can be sold the whole year long (the designation “new” should be removed in September). Nowadays, the fish arrives in refrigerated lorries from Denmark, no longer in Dutch fishing boats at Scheveningen harbour.

What makes a Dutch new herring any different from the herring brought to other North Sea and Baltic Sea harbours? The organs of the fish are removed as soon as the fish are caught, and the fish are kept in lightly salted water. But two internal organs are left: a fish’s equivalent to our pancreas and kidney. The fish’s pancreas contains enzymes which slowly transform some protein into fat, and this process is responsible for a special, almost creamy taste which is much treasured by Dutch consumers, as well as those in neighbouring countries. See, e.g., the Wikipedia entry for soused herring for more details, https://en.wikipedia.org/wiki/Soused_herring. According to a story still told to Dutch schoolchildren, this process was discovered in the 14th century by a Dutch fisher named Willem Beukelszoon.

The AD Herring Test

© Marco de Swart (AD), https://www.ad.nl/binnenland/reacties-vriendjespolitiek-corruptie-en-boevenstreken~a493aad9/

For many years, the Rotterdam-based newspaper Algemeen Dagblad (AD) carried out an annual comparison of the quality of the product offered in a sample of consumer outlets. A small team of expert herring tasters paid surprise visits to the typical small fishmonger’s shops and market stalls where customers can order portions of fish and eat them on the premises (or even just standing in a busy food market). The team evaluated how well the fish had been prepared, preferring especially that the fish had not been cleaned in advance but were carefully and properly prepared in front of the client. They judged the taste and checked the temperature at which the fish was given to the customer: by law it may not be above 7 degrees. A sample was sent to a lab for a number of measurements: weight, fat percentage, signs of microbiological contamination. The team was also interested in the price (per gram). An important, though subjective, characteristic is “ripeness”. Expert tasters distinguish Dutch new herring which has not ripened (matured) at all: green. After that comes lightly matured, well matured, too much matured, and eventually rotten.

This information was all written down and evaluated subjectively by each team member, then combined. The team averaged the scores given by its three members (a senior herring expert, a younger colleague, and a journalist) to produce a score from 0 to 10, where 10 is perfection; below 5.5 is a failing grade. However, it was not just a question of averaging. Outlets which sold fish which was definitely rotten, definitely contaminated with harmful bacteria, or definitely too warm got a zero grade. The outlets which took part were then ranked. The ten highest ranking outlets were visited again, and their scores possibly adjusted. The final ranking was published in the newspaper, and put in its entirety on the internet. Coming out on top was like getting a Michelin star. The outlets at the bottom of the list might as well have closed down straight away. One sees from the histogram below, Figure 1, that in 2016 and 2017 more than 40% of the outlets got a failing grade; almost 10% were essentially disqualified, by being given a grade of zero. The distribution looks nicely smooth except for the peak at zero, which really means that those outlets’ wares did not satisfy minimal legal health requirements.

Figure 1: Final test scores, 2016 and 2017.
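The scoring rule described above (average of the three tasters’ marks, overridden by a zero grade on any disqualifying finding) can be sketched as follows; the function and argument names are our own illustration, not the AD’s:

```python
def final_score(taster_scores, rotten=False, contaminated=False, too_warm=False):
    """Average the three 0-10 marks, but return the disqualifying zero
    grade when the fish is rotten, contaminated, or served too warm.
    (Illustrative reconstruction of the rule described in the text.)"""
    if rotten or contaminated or too_warm:
        return 0.0
    return sum(taster_scores) / len(taster_scores)

print(final_score([8.0, 7.5, 8.5]))                 # 8.0
print(final_score([8.0, 7.5, 8.5], too_warm=True))  # 0.0
```

The override, rather than the averaging, is what produces the spike at zero in Figure 1 and motivates the two-part model discussed later.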

It is important to understand how outlets were chosen to enter the test. To begin with, the testing team automatically revisited last year’s top ten. Further outlets could be nominated by individual newspaper readers; indeed, they could be self-nominated by persons close to the outlets themselves. We are not dealing with a random sample, but with a self-selecting sample, with automatically a high overlap from year to year.

Over the years, there had been more and more acrimonious criticism of the AD Herring Test. As one can imagine, it was mainly the owners of outlets with bad scores who were unhappy about the test. Many of them, perhaps justly, were proud of their product and had many satisfied customers too. Various accusations were therefore flung around. The most serious one was that the testing team was biased and even had a conflict of interest. The lead taster gave courses on the preparation of Dutch New Herring and led the movement to have the “brand” registered with the EU. There is no doubting his expertise, but he had been hired (in order to give training sessions to their clients) by one particular wholesale business, owned by a successful businessman of Turkish origin, which, as one might imagine, led to jealousy and suspicion, especially since a number of the retail outlets supplied by that particular company often (but certainly not always) appeared year after year in the top ten of the annual AD Herring Test. Other accusations were that the herring tasters favoured businesses in the neighbourhood of Rotterdam (home base of the AD). As herring cognoscenti know, people in various Dutch localities have slightly different tastes in Dutch New Herring. Amsterdammers have a different taste from Rotterdammers.

In the meantime, under the deluge of negative publicity, the AD announced that it would now stop its annual herring test. It did, however, hire a law firm, which on its behalf brought an accusation of failure of scientific integrity to Tilburg University’s “Commission for Scientific Integrity”. The law firm moreover approached one of us (RDG) for expert advice. He was initially extremely hesitant to be a hired gun in an attack on a fellow academic, but as he got to understand the data, the analyses, and the subject, he had to agree that the AD had some good points. At the same time, various aggrieved herring sellers were pursuing their own civil action against the AD; and the wholesaler whose outlets did so well in the test also started a civil action against Tilburg University, since its own reputation was damaged by the affair.

Vollaard’s analyses

Here is the main result of Vollaard’s first report.

lm(formula = finalscore ~
                    weight + temp + fat + fresh + micro +
                    ripeness + cleaning + yr2017)
 
Residuals:
     Min      1Q  Median      3Q     Max
 -4.0611 -0.5993  0.0552  0.8095  3.9866

Residual standard error: 1.282 on 274 degrees of freedom
Multiple R-squared:  0.8268, Adjusted R-squared:  0.816
F-statistic: 76.92 on 17 and 274 DF,  p-value: < 2.2e-16



                    Estimate     Std. Error   t value   Pr(>|t|)
Intercept           4.139005       0.727812     5.687   3.31e-08 ***
weight (grams)      0.039137       0.009726     4.024   7.41e-05 ***
temp
  < 7 deg           0 (baseline)
  7–10 deg         –0.685962       0.193448    –3.546   0.000460 ***
  > 10 deg         –1.793139       0.223113    –8.037   2.77e-14 ***
fat
  < 10%             0 (baseline)
  10–14%            0.172845       0.197387     0.876   0.381978
  > 14%             0.581602       0.250033     2.326   0.020743 *
fresh               1.817081       0.200335     9.070   < 2e-16 ***
micro
  very good         0 (baseline)
  adequate         –0.161412       0.315593    –0.511   0.609443
  bad              –0.618397       0.448309    –1.379   0.168897
  warning          –0.151143       0.291129    –0.519   0.604067
  reject           –2.279099       0.683553    –3.334   0.000973 ***
ripeness
  mild              0 (baseline)
  average          –0.377860       0.336139    –1.124   0.261947
  strong           –1.930692       0.386549    –4.995   1.05e-06 ***
  rotten           –4.598752       0.503490    –9.134   < 2e-16 ***
cleaning
  very good         0 (baseline)
  good             –0.983911       0.210504    –4.674   4.64e-06 ***
  poor             –1.716668       0.223459    –7.682   2.79e-13 ***
  bad              –2.761112       0.439442    –6.283   1.30e-09 ***
yr2017              0.208296       0.174740     1.192   0.234279

Regression model output

Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

No surprises here. The testing team prefers fatty and larger herring, properly cooled, mildly matured, freshly prepared in view of customers on-site, and well-cleaned too. We have a delightful amount of statistical significance. There are some curious features of Vollaard’s chosen model: some numerical variables (“temp” and “fat”) have been converted into categorical variables by presumably arbitrary choice of cut-off points, while “weight” is taken as numerical. Presumably, this is because one might expect the effect of temperature not to be monotone. Nowadays, one might attempt fitting low-degree spline curves with few knots. Some categories of categorical variables have been merged, without explanation. One should worry about interactions and about additivity. Certainly one should worry about model fit.
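Vollaard’s model was fitted in Stata; to make the dummy-variable coding concrete, here is a small Python/NumPy sketch on simulated data (not the AD data), discretizing temperature at the cut-offs 7 and 10 degrees while, purely for brevity, keeping fat percentage numerical (Vollaard discretized it too):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400

# Simulated outlets -- illustrative only, not the AD data.
temp = rng.uniform(2.0, 14.0, n)   # serving temperature (deg C)
fat = rng.uniform(6.0, 18.0, n)    # fat percentage

# Vollaard-style discretization of temperature at cut-offs 7 and 10:
# "< 7 deg" is the baseline, carried by the intercept.
d_7_10 = ((temp >= 7) & (temp < 10)).astype(float)
d_gt_10 = (temp >= 10).astype(float)

# "True" effects chosen to mimic the signs of the fitted coefficients.
score = 6.0 - 0.7 * d_7_10 - 1.8 * d_gt_10 + 0.05 * fat + rng.normal(0.0, 0.5, n)

# Ordinary least squares on the design matrix with dummy columns.
X = np.column_stack([np.ones(n), d_7_10, d_gt_10, fat])
beta, *_ = np.linalg.lstsq(X, score, rcond=None)
print(np.round(beta, 2))  # close to [6.0, -0.7, -1.8, 0.05]
```

The point of the sketch is only that the choice of cut-offs is baked into the design matrix before fitting, which is why it deserves motivation and checking.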

To the estimated regression model we add R’s standard four diagnostic plots in Fig. 2. Dr Vollaard apparently did not carry out any model checking.

Figure 2a. Model validation: panel 1, residuals versus fitted values
Figure 2b. Model validation: panel 2, normal QQ plot of standardized residuals
Figure 2c. Model validation: panel 3, square root of absolute value of standardized residuals against fitted value
Figure 2d. Model validation: panel 4, standardized residuals against leverage

Model validation beyond Vollaard’s regression analysis

There are some serious statistical issues. There seem to be a couple of serious outliers. The error distribution seems to have a heavier than normal tail. But we also understand that some observations come in pairs — the same outlet evaluated in two subsequent years. The data set has been anonymized too much. Each outlet should at the least have been given a random code so that one can identify the pairs and take account of possible dependence from one year to the next; this is easy to do by simply estimating the correlation from the residuals and then performing a generalized least squares regression with an estimated covariance matrix of the error terms.
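The repair just suggested (estimate the year-to-year correlation from the paired residuals, then fit by generalized least squares) can be sketched in Python on simulated residuals; in the real analysis the same whitening transform would be applied to the response and the design matrix:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated residual pairs for outlets tested in both years
# (true correlation set to 0.7 for illustration -- not the AD data).
n_pairs, rho = 23, 0.7
cov = np.array([[1.0, rho], [rho, 1.0]])
paired_resid = rng.multivariate_normal([0.0, 0.0], cov, n_pairs)

# Step 1: estimate the year-to-year correlation from the paired residuals.
rho_hat = float(np.corrcoef(paired_resid[:, 0], paired_resid[:, 1])[0, 1])

# Step 2: whiten the paired rows with the estimated correlation matrix;
# ordinary least squares on data transformed this way is the GLS fit.
corr_hat = np.array([[1.0, rho_hat], [rho_hat, 1.0]])
whitening = np.linalg.cholesky(np.linalg.inv(corr_hat))
whitened = paired_resid @ whitening  # rows now (nearly) uncorrelated

print(round(rho_hat, 2))
```

Unpaired outlets keep unit weight; only the paired rows need the transform, which is why so few pairs barely move the standard errors.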

Inspection of the outliers led us to realize that there is a serious issue with the observations which got a final score of zero. Those outlets were essentially disqualified on grounds of gross violation of basic hygiene laws, applied by looking at just a couple of the variables: temperature above 12 degrees (the legal limit is 7), and microbiological activity (dangerous versus low or none). The model should have been split into two parts: a linear regression model for the scores of the not-disqualified outlets; and a logistic regression model, perhaps, for predicting “disqualification” from some of the other characteristics. However, at least it is possible to analyse each of the years separately, and to remove the “disqualified” outlets. That is easy to do. Analysing just the 2017 data, the analysis results look a lot cleaner; the two bad outliers have gone, the estimated standard deviation of the errors is a lot smaller, the normal Q-Q plot looks very nice.
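Splitting the sample in this way is straightforward. A minimal sketch in Python (the field name `finalscore` matches Vollaard’s model formula; everything else is a made-up illustration):

```python
def split_two_part(outlets):
    """Split the combined sample into the disqualified outlets (final
    score zero, set by rule rather than by tasting) and the rest, to be
    modelled separately: a classification model for disqualification,
    an ordinary linear regression for the genuine scores."""
    disqualified = [o for o in outlets if o["finalscore"] == 0]
    scored = [o for o in outlets if o["finalscore"] > 0]
    return disqualified, scored

# Toy example (made-up records):
sample = [{"finalscore": 0, "temp": 13.0}, {"finalscore": 8.5, "temp": 4.0}]
bad, good = split_two_part(sample)
print(len(bad), len(good))  # 1 1
```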

The data-set, now as comma-separated values files and Excel spreadsheets, and with outlets identified, can be found on our already mentioned GitHub repository https://github.com/gaofengnan/dutch-new-herring.

The real problem

There is another big issue with this data and these analyses which needs to be mentioned, and if possible, addressed. How did the “sample” come to be what it is? A regression model is at best a descriptive account of the correlations in a given data set. Before we should accuse the test team of bias, we should ask how the sample is taken. It is certainly not a random sample from a well-defined population!

Some retail outlets took part in the AD Herring Test year after year. The testing team automatically included last year’s top ten. Individual readers of the newspaper could nominate their own favourite fish shop to be added to the “sample”, and this actually did happen on a big scale. Fish shops which did really badly tended to drop out of future tests and, indeed, some of them stopped doing business altogether:

The “sample” evolves in time by a feedback mechanism.

Everybody could know what the qualities were that the AD testers appreciated, and they learnt from their score and their evaluation each year what they had to do better next year, if they wanted to stay in the running and to join the leaders of the pack. The notion of “how a Dutch New Herring ought to taste”, as well as how it ought to be prepared, was year by year being imprinted by the AD test team on the membership of the sample. New sales outlets joined and competed by adapting themselves to the criteria and the tastes of the test team.
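A minimal simulation of this feedback loop (Python, all numbers arbitrary): each year the bottom outlets drop out and newly nominated outlets imitate last year’s winners, so the sample mean drifts upwards even though no individual judgement is biased.

```python
import random

random.seed(42)

# Latent "quality as the panel sees it" of 100 outlets in the sample.
sample = [random.gauss(6.0, 1.5) for _ in range(100)]
means = []
for year in range(10):
    means.append(sum(sample) / len(sample))
    sample.sort()
    survivors = sample[10:]               # the bottom ten give up
    target = sum(survivors[-10:]) / 10    # last year's top ten set the standard
    newcomers = [random.gauss(target, 0.5) for _ in range(10)]
    sample = survivors + newcomers        # self-selected replacements

print(round(means[0], 2), round(means[-1], 2))  # the sample mean drifts up
```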

The same newspaper ran another annual ranking, of outlets of a traditional Dutch New Year’s delicacy: a kind of doughnut (though without a hole in the middle) called the oliebol. Oliebollen are somewhat stodgy and oily, roughly spherical objects, enlivened with currants and sprinkled with icing sugar. The testing panel was able to taste them blind. It consisted of about twenty ordinary folk, and every year part of the panel resigned and was replaced with fresh persons. Peter Grünwald of Centrum Wiskunde & Informatica, the national research institute for mathematics and computer science in the Netherlands, developed a simulation model which showed how the panel’s taste in oliebollen would vary over the years, as sales outlets tried to imitate the winners of the previous year, while the notion of what constitutes a good oliebol was not fixed. Taking the underlying quality to be one-dimensional, he demonstrated the well-known predator-prey oscillations (Angerbjorn et al., 1999). Similar lines of thinking have appeared in the study of, e.g., fashion cycles; see e.g. Acerbi et al. (2012), where the authors propose a mechanism by which individual actors imitate other actors’ cultural traits and preferences for these traits, such that realistic cyclic rise-and-fall patterns (see their Figure 4) are observed in simulated settings. A later study, Apriasz et al. (2016), divides a society into “snobs” and “followers”, where followers copy everyone else while snobs imitate only the trend of their own group and go against the followers. As a result, clear recurring cyclic patterns (see their Figures 3 and 4), similar to the predator-prey cycle, arise in appropriate parameter regimes.
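Grünwald’s model itself was never published; as a stand-in, the classical Lotka-Volterra predator-prey dynamics behind such cycles can be sketched with a simple Euler discretization, reading x as the prevalence of a product style among outlets (the “prey”) and y as the panel’s critical attention to it (the “predator”). Parameters and interpretation are our own illustrative choices, not Grünwald’s model.

```python
# Euler discretization of the Lotka-Volterra equations:
#   dx/dt = x (alpha - beta * y),   dy/dt = y (delta * x - gamma)
alpha, beta, delta, gamma = 1.0, 0.5, 0.2, 0.8
x, y = 2.0, 1.0          # initial style prevalence and panel attention
dt, steps = 0.01, 5000
xs = []
for _ in range(steps):
    x, y = x + dt * x * (alpha - beta * y), y + dt * y * (delta * x - gamma)
    xs.append(x)

# x repeatedly overshoots and undershoots the equilibrium gamma/delta = 4,
# giving the rise-and-fall cycles described in the text.
crossings = sum(1 for a, b in zip(xs, xs[1:]) if (a - 4.0) * (b - 4.0) < 0)
print(crossings)
```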

The AD was again engaged in a legal dispute with disgruntled owners of low-ranked sales outlets, which eventually led to this annual test being abandoned too; in fact, the AD forbade Grünwald to publish his results. We have made some initial simulation studies of a model with higher-dimensional latent quality characteristics, which seems to exhibit similar but more complex behaviour.

New analyses, new insights

It turns out that the correlation between the residuals of the same outlet participating in two subsequent years is large, about 0.7. However, the number of such repeat outlets (23) is fairly small, so this has little effect on Vollaard’s findings: taking account of it slightly increases the standard errors of the estimated coefficients. We also knew that, according to AD, many outlets had been incorrectly classified by Vollaard, and since he did not wish to collaborate with us, we returned to the source of his data: the web pages of AD. This enabled us to experiment with the various data-coding choices made by Vollaard and to try out natural alternative model specifications. In addition, we could use the list of outlets certified by AD and Atlantic as having actually supplied the Dutch new herring tested in 2016 and 2017.
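The size of this effect can be gauged with a back-of-the-envelope design-effect calculation. Only the 23 repeat pairs and ρ ≈ 0.7 come from the data; the number of once-only observations below is a hypothetical figure for illustration.

```python
import math

def se_inflation(n_single, n_pairs, rho):
    """Rough design effect: each outlet tested in two consecutive years
    contributes a pair of residuals with correlation rho, so the pair is
    worth only 2 / (1 + rho) independent observations.  Standard errors
    inflate by roughly sqrt(n_total / n_effective)."""
    n_total = n_single + 2 * n_pairs
    n_eff = n_single + n_pairs * 2.0 / (1.0 + rho)
    return math.sqrt(n_total / n_eff)

# With, say, 200 once-only observations (hypothetical) plus the 23 repeat
# pairs at rho = 0.7, standard errors grow by only about 4%.
inflation = se_inflation(200, 23, 0.7)
```

This is consistent with the observation that correcting for the repeat outlets barely changes Vollaard’s standard errors.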

First, it is clear from the known behaviour of the test team that a score of zero means something special. There is no reason to expect a linear model to be the right model for all participating outlets. The outlets which were given a zero score were essentially disqualified on objective public health criteria, namely temperature above 12 degrees and definitely dangerous microbiological activity. We decided to re-analyse the data while leaving out all disqualified outlets.

Next, there is the issue of correlation between outlets appearing in two subsequent years. The proportion of such repeat outlets turned out to be much smaller than expected, so correcting for autocorrelation hardly makes a difference; more simply, the correction can be made superfluous altogether by dropping every outlet appearing for the second year in succession. This leaves two years of data, with the second year containing only “newly nominated” outlets.

Going back to the original data published by AD, we discovered that Vollaard had made some adjustments to the published final scores. As was known, the testing team revisited the top ten scoring outlets and ranked their product again, recording (in one of the two years) scores like 9.1, 9.2, …, up to 10, in order to resolve ties. In both years, scores such as “8–” or “8+” were registered, meant to indicate “nearly an 8” or “a really good 8”, following traditional Dutch school and university grading. The scores “5”, “6”, “7”, “8”, “9”, “10” carry familiar and conventional descriptions, from “unsatisfactory” (insufficient) and “satisfactory” (sufficient) up through “good”, “very good”, and “excellent”. Linear regression analysis requires a numerical response variable, so Vollaard had to convert “9–” (almost worthy of the qualification “very good”) into a number. It seems that he rounded it to 9, but one might just as well have made it 9 − ε for some choice of ε, for instance ε = 0.01, 0.03, or 0.1.
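The conversion itself is trivial to code; the point is that ε is a free choice of the analyst. A minimal sketch (the default value of eps is an arbitrary assumption of ours):

```python
def grade_to_number(grade, eps=0.05):
    """Convert an AD 'broken' grade such as '9-' or '8+' to a number.
    The value of eps (an arbitrary default here) is the analyst's choice,
    and the regression conclusions turn out to be sensitive to it."""
    g = grade.strip().replace('\u2013', '-')  # tolerate en-dash minus signs
    if g.endswith('-'):
        return float(g[:-1]) - eps
    if g.endswith('+'):
        return float(g[:-1]) + eps
    return float(g)

# A sensitivity analysis simply re-codes the scores and refits the model
# for each choice, e.g. for eps in (0.01, 0.03, 0.1).
```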

We compared the results obtained under various conventions for handling the “broken” grades, and it turned out that the choice of ε had a major impact on the statistical significance of the “just significant” or “almost significant” variables of main interest (supplier; distance from Rotterdam). So did the choice of model selection strategy: standard strategies based on dropping insignificant variables likewise change the significance of these same variables. The sizes of their effects become a little smaller, while standard errors remain large. Had Vollaard followed one of several common model selection strategies, he could have found the effect of “Atlantic” significant at the 5% level, supporting his prior opinion! As experienced statistical practitioners such as Winship and Western (2016) have noted, in linear regression with multicollinearity present, the estimates are highly sensitive to small perturbations in model specification. In our data-set, what should be unimportant changes to which variables are included, as well as unimportant changes in the quantification of the variable to be explained, keep changing the statistical significance of the variables that interested Vollaard the most: the very results that led to a media circus, societal impact, and reputational damage to several big concerns, as well as to the personal reputation of the chief herring tester Aad Taal.

Having “cleaned” the data by removing the repeat tests and the outlets breaking food-safety regulations, and using the AD’s classification, the effects of being an Atlantic-supplied outlet, and of being distant from Rotterdam, are smaller and hardly significant. They change as ε is varied. Leaving out a few of the least significant variables changes, once again, whether the two main variables of interest are significant. The sizes of the effects remain about the same: Atlantic-supplied outlets score a bit higher, outlets distant from Rotterdam a bit lower, when the other variables are taken into account in the way chosen by the analyst.

By modelling the effects of so many variables through discretization, Vollaard created multicollinearity. The results depend on arbitrarily chosen cut-offs and other arbitrary choices; for instance, “weight” was kept numerical, but “price” was made categorical. This could have been avoided by assuming additivity and smoothness and using modern statistical methodology, but in fact the data-set is simply too small for that to be meaningful. Trying to incorporate interactions between clearly important variables caused multicollinearity and failure of the standard estimation procedures. Different model selection procedures, and nonparametric approaches, end up finding quite different models, with no grounds for preferring one to another. We can come up with several excellent (and quite simple) predictors of the final score, but we cannot say anything about causality.
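How discretization manufactures multicollinearity is easy to demonstrate: dummy variables built from nearby cut-offs of the same underlying variable are almost perfectly correlated, so coefficients on either become unstable. A pure-Python sketch, with hypothetical prices and cut-offs:

```python
def pearson(xs, ys):
    """Plain Pearson correlation, no external libraries."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Hypothetical herring prices (euros) and two nearby, equally "reasonable"
# cut-offs for a "cheap outlet" dummy:
prices = [1.50 + 0.05 * i for i in range(60)]
cheap_a = [1.0 if p < 2.50 else 0.0 for p in prices]
cheap_b = [1.0 if p < 2.60 else 0.0 for p in prices]

# The correlation is close to 1: the two dummies carry nearly the same
# information, which is exactly the multicollinearity problem.
r = pearson(cheap_a, cheap_b)
```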

Vollaard’s analyses confirmed what we knew in advance (the “taste” of the testers). There is no reason whatsoever to accuse the testers of favouritism. The advantage of outlets supplied by Atlantic is tiny or non-existent, certainly nothing like the huge advantage which Vollaard carelessly insinuated. The distant outlets are typically new entrants to the AD Herring Test, whose clients like the kind of Dutch new herring they have been used to in their own region. Vollaard’s interpretation of his own results, obtained from his own data set, was unjustified: he said he was only investigating correlations, yet he appeared on national TV talk shows to say that his analyses made him believe the AD Herring Test was severely biased. This caused enormous upset and financial and reputational damage, and led to a great deal of money being spent on lawyers.

Everyone makes mistakes and what’s done is done, but we do all have a responsibility to learn from mistakes. The national committee for investigating accusations of violation of scientific integrity (LOWI) did not find Vollaard guilty of gross misdemeanour. They did recommend further statistical analysis. Vollaard declined to participate. No problem. We think that the statistical experiences reported here can provide valuable pedagogical material.

Conclusions

In our opinion, the suggestion that the AD Herring Test was in any way biased cannot be investigated by simple regression models. The “sample” is self-recruiting and much too small. The sales outlets which join the sample are doing so in the hope of getting the equivalent of a Michelin star. They can easily know in advance what are the standards by which they will be evaluated. Vollaard’s purely descriptive and correlational study confirms exactly what everyone (certainly everyone “in the business”) should know. The AD Herring Test, over the years that it operated, helped to raise standards of hygiene and presentation, and encouraged sales outlets to get hold of the best quality Dutch New Herring, and to prepare and serve it optimally. As far as subjective evaluations of taste are concerned, the test was indubitably somewhat biased toward the tastes valued by consumers in the region of Rotterdam and The Hague, and at the main “herring port” Scheveningen. But the “taste” of the herring testers was well known. Their final scores fairly represent their public, written evaluations, as far as can be determined from the available data.

The quality of the statistical analysis performed by Ben Vollaard left a great deal to be desired. To put it bluntly, from the statistical point of view it was highly amateurish. Economists who self-publish statistical reports under the flag of their university on matters of great public interest should have their work peer-reviewed and should rapidly publish their data sets. His results are extremely sensitive to minor variations in model choice and specification, to minor variations in quantifications of verbal scores, and there is not enough data to investigate his assumption of additivity. Any small effects found could as well be attributed to model misspecification as to conscious or unconscious bias on the part of the herring testers. We are reminded of Hanlon’s razor “never attribute to malice that which is adequately explained by stupidity”. In our opinion, in this case, Ben Vollaard was actually a victim of the currently huge pressure on academics to generate media interest by publishing on issues of current public interest. This leads to immature work which does not get sufficient peer review before being fed to the media. The results can cause immense damage.

Statisticians in general should not be afraid to join in societal debates. The total silence concerning this affair from the Dutch statistical society, which even has an econometric chapter, was a shame. Fortunately, the society has recently set up a new section devoted to public outreach.

Many statistical analyses are performed and published by amateurishly matching the formal properties of a data-set (types of variables, shape of the data file) to standard statistical models, with no consideration given to model assumptions, let alone checks of them. Vollaard’s data-set can provide a valuable teaching resource, and we have published a version with an English-language description of the variables. We have made two versions available: Vollaard’s data-set as put together by his student, but now with outlets identified, and the newly constituted data set with Atlantic-supplied outlets according to the AD; both are available in our GitHub repository https://github.com/gaofengnan/dutch-new-herring.

It would be interesting to add to the data some earlier years’ data, and investigate whether scores of repeatedly evaluated outlets tended to increase over the years. At the very least, it would be good to know which of the year 2016 outlets were repeat participants.

Just before submitting this article, we became aware of Vollaard and van Ours (2021), in which Dr Ben Vollaard makes the same accusations with essentially the same flawed arguments.

The feedback processes involved in consumer research panels deserve much more study.

https://www.villamedia.nl/artikel/in-memoriam-paul-hovius-de-man-achter-de-ad-haringtest. The man behind the herring test: journalist Paul Hovius (r), with herring taster Aad Taal (l), during the AD Herring Test in 2013. © Joost Hoving, ANP

Conflict of interest

The second author was paid by a well-known law firm for a statistical report on Vollaard’s analyses. His report, dated April 5, 2018, appeared in English translation earlier in this blog, https://gill1109.com/2021/06/01/was-the-ad-herring-test-about-more-than-the-herring/. He also discloses that the best Dutch New Herring he ever ate was at one of the Simonis retail outlets in Scheveningen, which got their herring from the wholesaler Atlantic. He had this experience before any involvement in the Dutch New Herring scandals, the topic of this paper.

References

Alberto Acerbi, Stefano Ghirlanda, and Magnus Enquist. The logic of fashion cycles. PLoS ONE, 7(3):e32541, 2012. https://doi.org/10.1371/journal.pone.0032541

Anders Angerbjorn, Magnus Tannerfeldt, and Sam Erlinge. Predator–prey relationships: arctic foxes and lemmings. Journal of Animal Ecology, 68(1):34–49, 1999. https://www.jstor.org/stable/2647297

Rafał Apriasz, Tyll Krueger, Grzegorz Marcjasz, and Katarzyna Sznajd-Weron. The hunt opinion model—an agent-based approach to recurring fashion cycles. PLoS ONE, 11(11):e0166323, 2016. https://doi.org/10.1371/journal.pone.0166323

The Economist. Netherlands fishmongers accuse herring-tasters of erring. The Economist, 2017, November 25. https://www.economist.com/europe/2017/11/23/netherlands-fishmongers-accuse-herring-tasters-of-erring.

Ben Vollaard. Gaat de AD Haringtest om meer dan de haring? 2017a. https://www.math.leidenuniv.nl/~gill/haringtest_vollaard.pdf

Ben Vollaard. Gaat de AD Haringtest om meer dan de haring? een update. 2017b. https://web.archive.org/web/20210116030352/https://www.tilburguniversity.edu/sites/default/files/download/haringtest_vollaard_def_1.pdf

Ben Vollaard. Scores Haringtest. 2020a. https://surfdrive.surf.nl/files/index.php/s/gagqjoPAbIZkLuR

Ben Vollaard. Stata Code Haringtest. 2020b. https://surfdrive.surf.nl/files/index.php/s/51kmBZDadi6qOhv

Ben Vollaard and Jan C van Ours. Bias in expert product reviews. 2021. Tinbergen Institute Discussion Paper 2021-042/V. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3847682

Christopher Winship and Bruce Western. Multicollinearity and model misspecification. Sociological Science, 3(27):627–649, 2016. https://sociologicalscience.com/articles-v3-27-627