Model Evidence – the COVID-19 case

Following the foot and mouth epizootic in Great Britain in 2001, a veterinary practitioner famously described the subsequent disease control measures as ‘carnage by computer’. An epidemiological model developed by a team at Imperial College London had, she argued, effectively sanctioned the mass slaughter of animals on farms that adjoined or shared a boundary with infected premises, even when the chances of infection were minimal. Other models were available at the time, including ones that took landscape, actual distance between premises and the different infectivity and viral shedding rates of different species into account. But the Imperial model had the edge – it was relatively simple, quick, gave clear results, was easy to re-run with new conditions and seemed to be favoured by the Chief Scientist, and so it informed the policy of contiguous culls (Law 2008). The carnage referred to what, after the event, many regarded as unnecessary animal and social suffering, brought about by the disease controls rather than the disease itself.

As Bickerstaff and Simmons (2004) pointed out, this wasn’t the first time (or indeed the last) that a model was employed to formulate and justify a disease management policy. Indeed, “Predictive epidemiological modelling has established itself in the field of public health as a tool for supporting decision-making that can forecast the likely spread of a disease as well as the expected effect of alternative management strategies. Analyses can be rapidly rerun with revised parameter settings in response to new data from an emerging epidemic” (Bickerstaff and Simmons 2004: 400).  

In the current COVID-19 crisis we are here again. The same team of modellers has been instrumental in the UK and, to an extent, US pandemic policy processes (Ferguson, Laydon et al. 2020). It would be churlish to do anything but admire the team’s ability, expertise and energy. And the work is clearly vitally important in enabling decision-makers to assess non-pharmaceutical interventions in terms of their expected effects on rates and patterns of infection and mortality. The modelling is informing the policy field on an almost daily basis, with the UK government, for example, recently moving from a policy of disease mitigation (relatively low-key attempts to flatten the epidemiological curve) to one of suppression (more stringent ‘social distancing’ efforts, adopted once alarming data from Italy had been used to re-calibrate the model). These models, the modellers accept, are never perfect, and even less so in new or emerging disease conditions. Unknowns concerning a new virus or strain and its interactions with new hosts, an immunologically naïve population, and social circumstances that mark a unique historical moment are all difficult to represent as mathematical formalisms. Yet models may be the best we can get in terms of anticipating and intervening in what is a fast-moving risk landscape. As some of the authors of the Imperial COVID-19 model have noted elsewhere in reference to H1N1 swine flu, models are crucial when “decisions have to be made before definitive information [becomes] available on the severity, transmissibility or natural history of the new virus” (Lipsitch, Riley et al. 2009: 112).

Nevertheless, it is important to insist that models are and remain partial technologies, and that there is a need for the recognition of other voices and knowledges, even in emergency situations when time is of the essence.  Here is a consultant cardiologist in London, worried about the lack of testing and the role of hospitals in producing nosocomial transmission:

“Any such model is only as good as the input data, and the data going into this one are not necessarily applicable to the UK being based on countries with very different behaviour patterns,” he said. “They are also solely intended to flatten the curve, when even a flat curve will kill thousands. These approaches would be an acceptable experiment if there were no alternatives but we have strategies from elsewhere that have been shown to work.” (Guardian 17th March 2020)  

This consultant wasn’t the only one to complain of the seemingly callous nature of the model-based policy. The socially anaesthetic phrasing of ‘underlying health issues’, euphemistically referring to the elderly, disabled, heart-diseased, diabetic and so on, who together account for a high proportion of any population, only vaguely disguised a biopolitical undertow of “allowing to die” in order to “let live”. As the UK shifted from mitigation to suppression, these ‘underling’ lives with ‘underlying illnesses’ were managed as sub-populations, with the model used to justify the shift: a projected quarter of a million extra deaths in the UK under mitigation, against some 20,000 if epidemic suppression measures were enacted.

The point here, again, is not to criticise the modellers – these were important policy shifts, given the opprobrium that was rightly aimed at the UK and US governments from their own health workers, other states and the WHO regarding their slowness to respond to the seriousness of the pandemic. The Imperial team’s revised parameter settings were seemingly instrumental in moving both countries from denialism into more stringent measures. The point is rather to make sure that the model and the modellers don’t become the only truth that matters. Indeed, one might argue that the sluggishness of both the UK and US in increasing testing capacity, despite having tests available as early as January (Street and Kelly 2020), was in part borne of the will to listen to epidemiologists and modellers keen to talk ‘herd immunity’, and to sanction spread as a means to end the epidemic (while saving the economy), rather than to institute a more rigorous and less easily modelled approach to disease control involving testing and isolation. The population (and biopolitical) view of the epidemiological modeller is of course different in this respect from that of the clinician or vulnerable person – the latter trained in, and living with, the imperative of care for each and every-body.

So what other knowledges might be valuable and how might they be used? In order to answer this question I need to back up and characterise the model a little more (this account partly relies on Law 2008). The model works by defining the population of potentially affected cases (at the start of the run, this is the entire naïve population). It then makes assumptions about the networks of infective relations. These are estimates of contagion in specific settings, including schools, homes and workplaces. These assumptions and the estimated distribution of a population across these infective settings are then used to predict the speed of the epidemic and how it will develop. Key is the reproduction number (R0), the estimated number of new cases produced by any one infection. To bring an epidemic under control, the reproduction number needs to fall below one. The model is run using different starting assumptions regarding schools and colleges being open or closed, workplaces staffed, dates of intervention and so on, and the effects of closures or other changes, as well as assumed growth in immunity, on the reproduction of the virus are predicted over time. R0 can of course be reduced temporarily in a model and then increase again as control measures, or social distancing, are relaxed. So a series of runs, with variable triggers for switching controls on and off, are also included.
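The mechanics just described – a run over a naïve population, an R0 above one, and triggers that switch controls on and off – can be illustrated with a deliberately simplified sketch. This is not the Imperial model; it is a toy discrete-time SIR model, and every parameter value here is an illustrative assumption:

```python
# A toy, discrete-time SIR model: NOT the Imperial model, just an
# illustration of the mechanics described in the text. All parameter
# values are illustrative assumptions.

def run_sir(population=66_000_000, r0=2.4, infectious_days=7,
            distancing_factor=0.6, trigger_on=10_000, trigger_off=2_000,
            days=365):
    """Deterministic SIR with a crude on/off social-distancing trigger."""
    gamma = 1 / infectious_days            # daily recovery rate
    beta = r0 * gamma                      # daily transmission rate at R0
    s, i, r = population - 1.0, 1.0, 0.0   # susceptible, infected, recovered
    distancing = False
    history = []
    for _ in range(days):
        # Controls switch on when infections pass a threshold and off when
        # they fall back, so the effective R0 drops and later recovers.
        if i > trigger_on:
            distancing = True
        elif i < trigger_off:
            distancing = False
        b = beta * (distancing_factor if distancing else 1.0)
        new_infections = b * s * i / population
        new_recoveries = gamma * i
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        history.append(i)
    return history

curve = run_sir()
print(f"peak infections: {max(curve):,.0f} on day {curve.index(max(curve))}")
```

Re-running with different trigger thresholds or distancing factors reproduces, in miniature, the kind of scenario comparison described above: identical inputs always give identical outputs.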

These models are deterministic – that is, they generate a single output from known starting conditions. They are not stochastic: they do not allow for chance events, random shifts or non-programmed changes. As a result, they produce clear results – the number of people infected at time x – which can then be compared to health care and surge capacity in order to judge preparedness. Other models can be probabilistic and deliver ranges of values, but they tend to be less tractable or fungible for central government policy makers, and also take longer to run and re-calibrate as assumptions change. As ever, there is a trade-off between complexity and computational efficiency.
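For contrast, the same toy dynamics can be run stochastically, with chance draws deciding each day’s infections and recoveries. Repeated runs then yield a range of outcomes, including early extinction of the outbreak, rather than a single clear number – exactly the property that makes such models harder to use in fast policy cycles. Again, a hypothetical sketch with illustrative parameters:

```python
# A stochastic counterpart to the deterministic sketch: chance events
# mean repeated runs give a range of outcomes, not one number. All
# parameters are illustrative assumptions (small population for speed).
import random

def run_stochastic_sir(population=2_000, r0=2.4, infectious_days=7,
                       days=150, seed=None):
    """Return the peak number infected in one chance-driven run."""
    rng = random.Random(seed)
    gamma = 1 / infectious_days
    beta = r0 * gamma
    s, i, r = population - 1, 1, 0
    peak = i
    for _ in range(days):
        p_inf = 1 - (1 - beta / population) ** i  # per-susceptible daily risk
        new_inf = sum(rng.random() < p_inf for _ in range(s))
        new_rec = sum(rng.random() < gamma for _ in range(i))
        s -= new_inf
        i += new_inf - new_rec
        r += new_rec
        peak = max(peak, i)
    return peak

peaks = [run_stochastic_sir(seed=k) for k in range(10)]
print(f"peak infections across 10 runs: min={min(peaks)}, max={max(peaks)}")
```

Where the deterministic model prints the same peak every time, here the spread between minimum and maximum is the range a policy maker would have to interpret.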

The models are also ‘non-spatial’ – or better, the in-built spatiality is one familiar to statistical physics and operates through networked relations in abstract space. This is why we have the somewhat strange language of social distancing – the models are networks of social relations, not people in physical spaces as such. Certainly this was problematic in the livestock models for FMD, where an isotropic spatial surface arguably misrepresented the vagaries of landscape, but it might be a reasonable simplification in a world of social hyper-connectivity where rapid transit and an intensity of mass contacts make distance less important. Nevertheless, the opportunities for spatial relations to alter radically, for displacements to occur, are not countenanced in the models.

Finally, the models tend to bracket a whole series of unknowns or make assumptions about ‘heterogeneities’ (Law 2008). These include differentials in infectivity and transmissibility – for example, there are uncertainties around incubation periods, asymptomatic transmission, viral dynamics and transmission processes (the SARS-CoV-2 coronavirus is spread from the upper respiratory tract, fomites and possibly other bodily excretions) and the relative efficiency of different carriers (children tend to be less symptomatic, and little is still known about transmissibility from different age groups). None of this is news to the modellers, and they report their concerns and uncertainties dutifully. But these uncertainties are testimony to the need for other kinds of knowledge to be used alongside the models.

Models are a reality-based social heuristic – they can pose important questions for policy makers and public bodies to consider (Wynne 2010). But it is important that there is room for more than one form of knowledge in these critical considerations. In the policy field, social science tends to be reduced to behaviour, and behaviour reduced to deterministic patterns of stimulus and response. In the UK, the behavioural change or ‘nudge’ unit, for example, seemed to be instrumental in the early stages of the UK epidemic in corroborating the modellers’ or politicians’ initial sense that there should be a ‘go slow’ on social distancing and other restrictions. For the modellers (as voiced by the Government’s Chief Medical Officer and Chief Scientist) this would allow herd immunity to develop (the biopolitics of carnage by computer), with psychologists corroborating a wait-and-see approach by suggesting that imposing restrictions too early would lead to combat fatigue and failure to comply as and when greater social distancing was needed. Social science in this sense (and I speak as someone partially involved in the process of offering other kinds of expertise and advice) is formatted in a particular manner. Rather like the models, deterministic thinking and scientism are welcomed in committees. Yet, beyond psycho-social accounts of individual agents and their tractable (mis)behaviours, social science is more generally concerned with social collectives and difference. And difference matters in terms of spatial variation as well as the tendency for societies to change, often in indeterminate ways.

Some brief examples. Remember the complaint by the consultant cited earlier that the models were based on data from other countries. One aspect of this is the important understanding that demographers can bring to the question of why Italy has by now become the nation state with the highest fatality rate. Italy has the world’s second oldest population and a culture of multi-generational co-habiting (Beam Dowd, Rotondi et al. 2020). There is also a spatial component in terms of the inter-city and rural-urban commuting patterns of younger working-age people (most of whom have been priced out of accommodation in the cities). The consequence is a spatial and demographic structure that may be different to many other countries. Meanwhile, other matters are often dynamic and subject to changes that seem stochastic but may well be anticipatable. Change is of course the principle behind interventions to increase social distancing. And the modellers build in estimates of compliance as well as the consequences of any measures (so social distancing of the over-70s, for example, has an in-built assumption of 75% compliance, while closure of all schools is assumed to produce a 50% increase in household transmissions). However, other changes and perturbations are not included in the model. An example would be the social or affective contagion of panic buying and hoarding. The consequences for public health in terms of the transmissibility of viruses in new sites of intense mixing and crowding (beleaguered supermarkets) and the effects on vulnerable people with limited access to essential provisions require other kinds of social science insight. What spreads in this sense is not only an RNA virus, but also patterns of social practice that can alter social dynamics and physical spatialities. A broader range of social sciences is required to inform policy on how emergent social consciousness can produce epidemiological as well as wide-ranging public health concerns.
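Compliance assumptions of the kind just mentioned typically enter such models as simple scalings of contact rates. The sketch below is hypothetical: the setting categories, baseline contact rates and the way effects combine are my assumptions, and only the 75% compliance and 50% household-transmission figures echo the text:

```python
# Hypothetical sketch of how compliance assumptions scale contact rates.
# The 75% and 50% figures come from the text; the setting categories,
# baseline rates and combination rule are illustrative assumptions.

baseline_contacts = {"household": 4.0, "school": 6.0, "work": 5.0, "other": 5.0}

def effective_contacts(over70_distancing=False, schools_closed=False,
                       compliance=0.75, household_bump=0.5):
    """Total daily contacts after applying intervention assumptions."""
    rates = dict(baseline_contacts)
    if schools_closed:
        rates["school"] = 0.0
        # Children at home: assumed 50% rise in household transmission.
        rates["household"] *= 1 + household_bump
    if over70_distancing:
        # Only the complying 75% cut their non-household contacts.
        for setting in ("work", "other"):
            rates[setting] *= 1 - compliance
    return sum(rates.values())

print(effective_contacts())                     # → 20.0 (baseline total)
print(effective_contacts(schools_closed=True))  # → 16.0 (schools shut, homes busier)
```

The point is how coarse these levers are: a single compliance number stands in for the whole variegated social response the surrounding text describes.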
Finally, the urban dynamics of epidemics are key to formulating policy (Connolly, Keil et al. 2020). Not only are urban settings and edges sites for intense contagion, they are also places of cultural diversity, spatial difference, and potential solidarity and assistance (Keil, Connolly et al. 2020). These are matters that may be more difficult to fit to deterministic models, but they are vital to understand in generating workable emergency and on-the-ground policy.

The point at this juncture is not to disparage models – though I do sense that the ‘herd immunity’ moment in the UK case will turn out to be an object lesson in coming to terms with the in-built biopolitical assumptions of epidemiological modelling and centrally based politicians. The shift from mitigation to suppression marked a volte-face in policy circles, and time will tell whether it was the modellers or the politicians who had counselled a go-slow approach when all the other evidence suggested a more stringent process of testing and suppression. In any event, the point for now is to underline this partiality – to recognise that, as Leach and Scoones (2013: 15) have noted, “In situations of emergency, the political imperative for governments or agencies to ‘do something’ and advance high profile claims and actions, may become paramount, perhaps overriding longer-standing political and bureaucratic commitments such as to routine public health”, and deterministic models tend to fit the bill. For these authors, reviewing the use of models for both influenza and Ebola, the answer is not necessarily to integrate more social science within the models, but to recognise the importance of the heterogeneity of knowledge, its various forms and formats, and to practise the arts of knowledge triangulation.

Recognising the voices from the ground – the clinicians warning of circulating infections in hospitals, the complexities and spatialities of local and changing conditions and practices – is important. To return to where I started: after foot and mouth, a Royal Society report noted the mismatch between the mathematical model and the ‘view from the ground’. It concluded that the modellers had the more objective and less emotive view and could therefore be better trusted to get it right. Those on the front line could be “mistrustful of complex and seemingly abstract mathematical models as guides to effective action on the ground, especially when this seems to contradict field experience… epidemics caused by the agents considered here are rare. It thus becomes clear that experience and intuition alone are unlikely to be adequate guides to picking the best control strategies” (The Royal Society 2002: 57-8; Bickerstaff and Simmons 2004). My sense is that we, and social science, can do better than that.

Works Cited

Beam Dowd, J., et al. (2020). Demographic science aids in understanding the spread and fatality rates of COVID-19. Online Science Foundation.

Bickerstaff, K. and P. Simmons (2004). “The right tool for the job? Modeling, spatial relationships, and styles of scientific practice in the UK foot and mouth crisis.” Environment and Planning D: Society and Space 22(3): 393-412.  

Connolly, C., et al. (2020). The Urbanization of COVID-19.  

Ferguson, N. M., et al. (2020). Impact of non-pharmaceutical interventions (NPIs) to reduce COVID-19 mortality and healthcare demand.  

Keil, R., et al. (2020). Outbreaks like coronavirus start in and spread from the edges of cities. The Conversation.

Law, J. (2008). “Culling, Catastrophe and Collectivity.” Distinktion: Journal of Social Theory 9(1): 61-76.  

Leach, M. and I. Scoones (2013). “The social and political lives of zoonotic disease models: Narratives, science and policy.” Social Science and Medicine 88: 10-17.  

Lipsitch, M., et al. (2009). “Managing and Reducing Uncertainty in an Emerging Influenza Pandemic.” New England Journal of Medicine 361(2): 112-115.  

Street, A. and A. Kelly (2020). Counting coronavirus: delivering diagnostic certainty in a global emergency. Somatosphere.

The Royal Society (2002). Infectious diseases in livestock. Policy document 19/02. London: The Royal Society.

Wynne, B. (2010). “Strange Weather, Again, Climate Science as Political Art.” Theory, Culture and Society 27(2-3): 289-305.

Steve Hinchliffe is Professor of Human Geography at the University of Exeter, UK where he teaches a course entitled The Geography of Monsters. His books include Pathological Lives (2016, Wiley Blackwell) and Humans, animals and biopolitics: The more than human condition (2016, Routledge). He currently works on a number of interdisciplinary projects on disease, biosecurity and drug resistant infections, focusing on Europe and Asia.  He is a member of the Wellcome Centre for Cultures and Environments of Health at Exeter, and currently sits on the UK Government’s Department of Environment, Food and Rural Affairs (DEFRA) Scientific Advisory Committee on Exotic Diseases and on Defra’s Science Advisory Group’s Social Science Expert Group.  
