By Zara Salman
Contemporary models of governance assume that when research evidence is presented to policymakers, it feeds directly into their decision-making and ultimately leads to evidence-based policy. But where is the proof for such a linear process of evidence uptake? In reality, policymakers are constrained by a range of factors, and the assumption that they update their preferences and decisions in light of new evidence does not always hold. More often than not, their constraints and biases affect both how evidence is interpreted and how it eventually informs policy.
Recently, the Consortium for Development Policy Research convened a panel of two researchers and one policymaker to discuss how and when policymakers take up evidence presented to them.
Do policymakers use evidence?
Taking up this broader question, Asad Liaqat, PhD candidate at Harvard, presented the findings of a recent study of civil servants in India and Pakistan which confirms that policymakers do not always use or understand the evidence provided to them. The underlying cause in this instance was a lack of understanding of the data itself. For example, civil servants in both countries were asked to identify the district with the highest absolute number of unemployed people, but the data provided gave only the unemployment rate (a percentage) for each district. Worryingly, only 13.5 percent of Pakistan’s middle cadre civil servants gave the correct response (i.e. that the data provided was insufficient to answer the question), compared to a much higher 72.2 percent of a comparable Indian cadre.
Asad further noted that researchers assume policymakers are universally more likely to take up a proposed reform if it is backed by hard evidence (i.e. based on robust statistical methods) rather than anecdotal or soft evidence. However, he showed that, in some sectors, anecdotal evidence was more convincing to policymakers. In one example, when school monitoring was proposed to reduce teacher absenteeism, policymakers were more likely to base their decision on the views of parents, presented in narrative form, than on the results of a scientific impact evaluation.
Moreover, even when data is accurately interpreted, it is not always put to use, particularly when quick decisions have to be made and there is limited allowance for detailed analysis. This is acutely felt in instances where evidence supports a risky reform but reward structures for policymakers promote risk averseness.
In an ideal world, politicians and bureaucrats would have no pre-determined worldview that might lead to incorrect conclusions, and would make impartial decisions based solely on evidence. However, behavioral science shows that decision-makers are influenced by inherent biases in their thinking, which affect how they interpret evidence. While there are many types of bias, Sheheryar Banuri, Lecturer at the University of East Anglia, drew attention to the most common one – confirmation bias.
Confirmation bias is “the tendency to seek out information that confirms one’s prior beliefs” (Nickerson 1998). Banuri shared the results of an experiment in which officials from the World Bank and the Department for International Development (DFID) were asked to objectively interpret data provided to them. When the data was presented in a neutral context, 65 percent arrived at the correct answer. However, when the same evidence was presented in relation to strong pre-existing notions, only 45 percent answered correctly. Errors in judgment were thus amplified when respondents held strong preconceived views on an issue. For example, those who stated strong preferences against income inequality were more likely to make errors when presented with findings that went against their beliefs.
Can such biases be mitigated?
It would be unnatural to assume that policymakers are not influenced by the narrative around them. Therefore, it is worth investing in solutions that mitigate the effect of such biases and aid in impartiality.
One remedial measure presented by Sheheryar was deliberation. In the experiment, when policymakers tackled a problem in pairs, they were more likely to arrive at the correct answer than when they worked individually. Hence, creating avenues for more open discussion of available evidence among bureaucrats and politicians may improve the chances of designing and implementing unbiased reforms.
Moreover, independent peer reviews are another tool now increasingly being used at organizations such as the World Bank and DFID, to support the policymaking process.
Salman Siddique, former federal secretary, responded to the research findings from his own experience of the bureaucracy and the government’s decision-making processes. In the context of Pakistan, where the political hierarchy places final authority with the most senior bureaucrat or politician, peer reviews challenge existing power dynamics. A policymaker’s individual political interests and self-interest are most likely to determine the decisions he or she makes.
It is not always the case that policymakers are unwilling to use evidence; they are often unable to do so due to institutional constraints. The existing institutional setup can hinder the implementation of policies that go against the current direction of thinking, even when the evidence points elsewhere. And while researchers are keen to provide more data and evidence, they are less inclined to address policymakers’ willingness to use it.
One solution would be for academics to engage more actively with policymakers in the design of their interventions in order to build ownership – a practice that the International Growth Centre actively pursues. Researchers may also need to package evidence in ways that have more impact, such as combining soft and hard evidence. They could also target one anchor within government who can champion reform.
While the first step would be for policymakers to recognize their own internal biases, academics must acknowledge the same when presenting evidence or conducting research. Only then can measures such as enhanced deliberation, and other solutions, have meaningful impact.
Zara Salman is a Senior Research Associate at the Consortium for Development Policy Research
Nickerson, R S (1998), “Confirmation bias: A ubiquitous phenomenon in many guises”, Review of General Psychology 2(2): 175–220.