London, 2019: Workshop on the future of evidence-based policy
What is the future of evidence-based policy? Is there a new generation of evidence-based policy initiatives and, if so, what should we call it: evidence-based policy 2.0? How does it differ from 1.0? We addressed these questions at an international workshop (8 April 2019) hosted by the Global Governance Institute at University College London, School of Public Policy, funded by Science for Democracy – Associazione Luca Coscioni – with a contribution from the project Procedural Tools for Effective Governance, Protego (a European Research Council Advanced Grant).
Report on the workshop, by Marco Perduca and Claudio M. Radaelli
The workshop was organised around a set of round tables, each with its distinctive set of questions. The participants conveyed nuanced ideas and reported on a range of empirical findings that, at least on some issues, do not point to a single conclusion. Aware of the diversity of opinions and approaches, we, the authors of this report, nevertheless want to draw what seem to us the most important lessons from the workshop.
To kick off, consider the following, admittedly blunt proposition: the old generation of evidence-based policy initiatives (typified in the UK by the Blair government's enthusiasm for the concept) was about using science and evidence to fill the information deficit of decision-makers. Key to evidence-based policy 1.0 was the notion that evidence (from the natural sciences, risk assessment, economics, randomized controlled trials, and so on) would reduce uncertainty in policy choice. Although bounded rationality had been known since the 1950s, the first wave of evidence-based policy failed to take into consideration the way we think. Hence the biases of decision-makers were not part of the equation.
The causal arrow, then, was supposed to work more or less like this:
 EVIDENCE -> REDUCTION OF UNCERTAINTY -> IMPROVED DECISIONS
Several empirical studies have documented the limitations (if not the failure) of the model portrayed above. The problem is that the policy process features ambiguity in addition to uncertainty. Ambiguity is defined as changing definitions of the policy problem, variation over time in the venues where the search for alternatives is carried out, and actors that come and go in the different venues; hence ambiguity implies instability of the network of actors. Following Paul Cairney and others, ambiguity cannot be eliminated: it is a key characteristic of policy processes in democratic systems. Nor are policymakers usually looking for a simple yes/no question to which a piece of evidence can readily provide an answer.
To be credible, the agenda for evidence-based policy 2.0 – we submit – should put forward propositions that apply to a world where both uncertainty and ambiguity are present. Empirically, a sensible agenda for evidence-based policy 2.0 should take into consideration the differences in preferences between politicians and bureaucrats – ‘decisions’ do not come out of a black box, but are the product of the nexus connecting public managers and their political masters. A fundamental lesson drawn from the behavioural sciences is that politicians and bureaucrats, like all humans, have a brain that operates in different modes, is influenced by well-known biases, and is constrained by bounded rationality. The same sciences that have shown the range of biases and heuristics also point to possible ways to de-bias decision-making processes. In short, we are more aware of what happens in a world of bounded rationality, and have learned about de-biasing. Evidence-based policy 2.0 also models the incentives and preferences of scientists and decision-makers, meaning that both scientists and decision-makers are endogenous to the explanation.
Conceptually, this agenda should be sensitive to the importance of mechanisms operating in specific political and administrative contexts. The mechanisms are the WHY of the explanation: they tell us why certain things happen or do not happen in evidence-based policy processes. These mechanisms are not the same everywhere, every time. Indeed, they operate in specific contexts where governance is modelled around relations between elected politicians and public organizations (such as government departments and regulatory agencies). Further, today the problem is less one of information deficit and more one of information surplus: how to direct attention in a world where information is cheap and readily available, although its quality may differ greatly.
To wrap up, the three important points for the evidence-based policy 2.0 agenda are: (1) there is ambiguity as well as uncertainty in public policy processes; (2) these processes feature various types of linkages between evidence and decisions, in different settings, with a realistic model of how the brain of decision-makers works and its biases; and (3) there is a high ratio of noise to signal, or a surplus of information.
The arrows of evidence-based policy 2.0 are represented below:
 SCIENCE AND EVIDENCE -> DECISION-MAKING PROCESS = Function of (UNCERTAINTY + AMBIGUITY) -> MECHANISMS IN CONTEXT -> REAL-WORLD POLITICIANS AND BUREAUCRATS MAKE DECISIONS
One way to present the findings of our workshop is that collectively, as a group, we tried to put flesh on the bare bones of this causal relationship. Another exercise is to take a critical look at the bones themselves, and then re-think the flesh. In fact, the correct causal bones may not be the ones portrayed above. It is fair to argue that public policy and social norms shape the kind of science and evidence that is or is not allowed to feed into the decision-making process. Further, we know that systems like 'science', 'society' and 'law-making' follow their internal logic, whilst the arrows make us think of smooth or at least logical sequences. Following Boswell and Smith, we can think of four models of research-and-policy interactions: (a) research, science and evidence are used to make public decisions; (b) political power and social norms shape knowledge; (c) socially-relevant knowledge is co-produced in the spheres of research and governance; and (d) research and policy are definitively autonomous worlds. All approaches deserve attention, particularly at a moment when governments design policies and funding mechanisms for universities based on the 'impact of research'. These policies should not presuppose simplistic understandings of concepts like 'impact' and 'utilization of knowledge by policy-makers' – an example being the Research Excellence Framework (REF) in the UK. The risks are to misallocate funding and to give the wrong incentives to researchers.
Consider the arrows above. We see different actors, namely scientists, politicians, and bureaucrats. At a minimum we should model these actors. What do they want? Decades of research in public management and political science have informed us about the different preferences of politicians and bureaucrats. They want different things: consensus and votes for politicians; task expansion, reputation and standard operating procedures for public managers. But it is not just a question of preferences. There are also social norms and emotions. Whether we look at how organizations learn, at the logic of negotiating truth in science and public policy, or at field experiments, the message is that emotions carry explanatory leverage when it comes to the delivery of evidence-based policy. Thus evidence-based policy 2.0 should accommodate both the logic of incentives and the logic of emotions – at a higher conceptual level, choice and appropriateness, in a context of bounded rationality, heuristics and biases. Finally, no matter what the logic of interests and emotions tells us, there is the hard ceiling (for evidence to make an impact on policy) of organizational capacity.
Of Science and Scientists
And yet, we have not said a word about the other actor, the scientist. Here science and technology studies provide their lessons. Although we assume that evidence-based policy 1.0 is typical of naïve policy-makers, the same naïve belief may exist in the minds of scientists when they discount the complexity (as well as the values) of public decision-making. If we say that all scientists have to do is speak truth to power, we cover only a fraction of the evidence-based policy 2.0 picture. As research on policy learning has demonstrated, the speaking-truth-to-power attitude of the scientist brings failure under certain characteristics of the policy process. It can work when the policy process approaches the conditions of epistemic learning, but it delivers much less as soon as we enter bargaining, authority, or a level playing field between lay and professional knowledge.
More fundamentally, speaking the truth to power does not tell us anything about the preferences of scientists. They care about truth and science, of course. But they also care about their reputation and funds for their institutes and projects. This is not necessarily a bad thing, of course. Actually, in some circumstances being dependent on funding from policy makers can have a good effect. One can argue that researchers who need to compete for funding from policy makers and bureaucracies have a better understanding of the policy process and the needs of their clients – they have to, in order to get funding.
Some scientists pursue their preferences by talking up science. Some of us pointed to cases where scientists oversell. They do so because they want more prestige and want to pierce the veil of communication with public opinion and decision-makers. The phenomenon may not entail anything wrong: a climate scientist with information about seasonal forecasts sees the importance of this information and is puzzled why it is not used to a larger extent. A policy maker may not quite understand how to use this information. Thus, the scientist keeps pushing with the evidence on the table. Is this really overselling?
There is also an issue about communication. Communicating the bounds of knowledge in the language of probability is correct. It mitigates the tendency towards overselling. This is the territory of probability, sensitivity analysis and the critique of 'incredible certitude'. Scientists should adopt the language of humble science, prudence, and openness to conjectures and refutations. And yet, other participants asked: how exactly will being humble and speaking the language of probability contribute towards success in conveying the climate change challenge that we face? How can this approach meet the logic of communication in a world of fast, succinct social media?
We settled on the following proposition: science can help policymakers make sense of their own ambiguity, but they have to accept their own uncertainty.
Further, where does communication take place? There are venues other than social media, such as deliberative and participatory settings. Although there is a lot of talk about the loss of trust in experts, deliberative and participatory policy experiments suggest that ordinary citizens may benefit from dialogue with scientists, given the correct scope conditions. The conditions for public engagement as a means to increase or restore public trust in science and experts are: to avoid self-selection (that is, only the already knowledgeable and educated citizens participating), to calibrate engagement so that citizens can effectively develop their knowledge during citizen-expert panels, and to avoid domination. Crucial is the coupling between deliberative and institutional fora. Engagement deteriorates in quality and participation over time, unless the results of the engagement feed into the decision-making process. Co-production of research with stakeholders is a collaborative model often presented as a template. But some argue that co-production has many hidden costs, which are unequally borne by participants.
Finally, we often think of science as something public, done in universities and public institutions – publicly funded labs, for example. But today a lot of science is commercial, carried out in private settings by company labs. In a post-industrial economy, private funding of research and development is inevitable and not problematic in itself. What is problematic is accountability, for example the failure of pharma companies to report negative findings. Of course, failure to publish negative results is not unique to the private sector, but here it is a problem given the financial implications for coverage. Other participants observed that when it comes to trust and accountability, the issue is not necessarily related to whether, for example, a research institute is privately owned or not, citing examples from Scandinavia.
Uncertainty and Usages
The effects of uncertainty on science and public decisions are asymmetrical. Uncertainty is precious in science: it is the trigger of scientific enquiry, always present in processes of scientific discovery. In a sense, for a scientist more uncertainty in a given domain is a good thing; it means that there is a lot of promising research to be done. For policy-makers, instead, uncertainty is, so to speak, 'bad'. Policy-makers do not want to follow arguments cast in the logic of uncertainty. When this asymmetry is coupled with ambiguity, the scene is set for multiple usages of science in public decisions. Science can be used INSTRUMENTALLY to improve policies, or POLITICALLY to improve popularity, elections, visibility, campaigns, and so on. Governments adopt reforms that have higher expected political payoffs rather than those with higher instrumental value. However, if one wants to reform and use science instrumentally, one has to be aware of the political feasibility of the reform. Consequently, instrumental and political considerations do not always clash; they can also be complementary.
Science can also be deployed SYMBOLICALLY to put a veneer of 'scientific' justification on decisions. This is a kind of back-of-the-envelope, justificatory science. For this reason, the evaluation of evidence used in public decision-making processes should be as pluralistic as possible. A sort of society-wide review of the scientific basis of public decisions (coming from different institutes and think tanks) and citizens mobilized to defend and extend their right to science are both important. On the first point (that is, wide societal and pluralistic review), regulators and governments should fund institutes and think tanks to carry out their own autonomous review of the evidence used by regulatory agencies and lawmakers, at least in cases of major controversial regulations. This idea was originally discussed in the USA by Resources for the Future, but it could be applied to the European Union. On the second point, the examples of Sense about Science and Science for Democracy show how advocacy for the right to science may work in Europe and at the level of the United Nations.
Whether we call it evidence-based or evidence-inspired policy, we must be clear on the goal we have in mind. There are four fundamental dimensions of success:
(a) Success in INFLUENCING policy makers
(b) Success on the SUBSTANCE of policy. Policy-makers may 'successfully learn' the wrong lesson by considering the weaker scientific argument because it is close to their ideology, and fail to learn the correct lesson. Clearly, this is not successful evidence-based policy in terms of substance, although the decision-makers, in this case, have definitely been 'influenced' by science.
(c) Success in preventing wrong choices, and more generally success in REACTIVE mode
(d) Success in PROACTIVE mode, in leading towards the right choice
Although there is no hard evidence, the literature seems to point more frequently to success in reactive mode – that is, cumulative evidence assists when the failure of existing decisions or non-decisions is widespread. The challenge is to generate success in proactive mode and in science-based issues.
Finally, there is the problem of documenting success. Arguably, there is a publication bias towards documenting more failure than success. Of course, studying the inefficiencies and limitations of the use of science in public decisions is instructive. Scientists embrace critical and sceptical thinking about what the government does. For public managers, the incentive to document success is instead visible: they need to collate and show success to be promoted in their careers, to show how they spend their budget, and to report on how well their country is doing within international organisations. The two worlds operate with different biases, and we cannot simply average out the two biases of social scientists and policy makers. For sure, social scientists should correct their bias – possibly encouraged by the choices made by the editorial committees of the main outlets for policy research, such as policy research journals.
Supply and Demand
We often focus on the supply of evidence and how it should be considered by decision makers as well as the public. But what about the demand side? In terms of design, it is useful to think of ways in which advocacy organizations such as Science for Democracy can put pressure on politicians and regulators, make it costly for them to ignore evidence, and make them more likely to demand science. Procedural regulatory instruments make public administration accountable to science (broadly conceived) by design. Examples are the obligation to consult experts, to carry out and publish risk assessments, to provide estimates and sensitivity analyses of the environmental impact of legislative and regulatory policy proposals, to use or not use a given discount rate and value-of-life estimates in policy formulation, and to rely on objective counterfactual analysis in the evaluation of policy programs. These instruments for 'accountability by design' are examined in the Protego project for the EU-28 member states and the EU level. Further, deliberative exercises that increase public awareness of and interest in science would not be easily ignored by politicians. Transparency reviews put pressure on decision-making. Official statistics should be framed and addressed as public goods, and protected as such.
Understanding of Science, Understanding of Policy Processes
Considerable efforts have been made to increase the public understanding of science. One important goal in these efforts is to raise awareness of science among politicians and bureaucrats. However, these actors do not necessarily have truth and knowledge as their priority. For this reason, a new generation of efforts should be directed at raising scientists' awareness of the fundamental variables at play in the policy process and of the modes of learning in public policy. In short, after having tried to explain science to politicians and regulators, social scientists should also empower natural scientists by explaining to them how policy processes vary depending on key variables. This can be done by condensing our knowledge of policy processes into formats and presentations (someone said 'tablets') with high potential for dissemination. It also requires a new commitment from social scientists to judge the quality of their research in terms of how many audiences it can reach, beyond the community of other social scientists. This vision has been called translational social science, but it has many roots, such as evidence use, research uptake, knowledge mobilisation and meta-science. Whatever our backgrounds, scientists need to be cautious about how, when and whether to engage, and to ensure they are using evidence-informed techniques to do so.
Here you can find the list of Participants and themes of the Round tables.