What Do We Know About When Data Does/Doesn’t Influence Policy?
Josh Powell, Chief Strategy Officer at Development Gateway, weighs in on the data and development debate.
While development actors are now creating more data than ever, examples of impactful use are anecdotal and scant. Put bluntly, despite this supply-side push for more data, we are far from realizing an evidence-based utopia filled with data-driven decisions.
One of the key shortcomings of our work on development data has been our failure to develop realistic models for how data can fit into existing institutional policy/program processes. The political economy – institutional structures, individual (dis)incentives, policy constraints – of data use in government and development agencies remains largely unknown to “data people” like me, who work on creating tools and methods for using development data.
We’ve documented several preconditions for getting data used, which can be thought of as a cycle.
While broadly helpful, I think we also need more specific theories of change (ToCs) to guide data initiatives in different institutional contexts. Borrowing from a host of theories on systems thinking and adaptive learning, I gave this a try with a simple 2×2 model. The x-axis can be thought of as the level of institutional buy-in, while the y-axis reflects whether available data suggest a (reasonably) “clear” policy approach. Different data strategies are likely to be effective in each of these four quadrants.
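To make the mapping concrete, here is a minimal sketch of the 2×2 as a decision rule. The quadrant names come from the examples below; the function, the 0-to-1 axis scales, and the 0.5 thresholds are illustrative assumptions, not part of the model itself.

```python
# Illustrative only: quadrant names are from the post; the function,
# scales, and thresholds are hypothetical.

def suggest_strategy(institutional_buy_in: float, data_clarity: float) -> str:
    """Map the two axes of the 2x2 model to a data strategy.

    institutional_buy_in: x-axis, 0 (low) to 1 (high)
    data_clarity: y-axis, 0 (no clear policy implied) to 1 (clear)
    """
    high_buy_in = institutional_buy_in >= 0.5
    clear_policy = data_clarity >= 0.5

    if high_buy_in and clear_policy:
        return "Command and Control"   # top-right
    if clear_policy:
        return "Name and Shame"        # top-left
    if high_buy_in:
        return "Analyze and Define"    # bottom-right
    return "Positive Deviance"         # bottom-left

print(suggest_strategy(0.8, 0.9))  # -> Command and Control
```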
So what does this look like in the real world? Let’s tackle these with some examples we’ve come across:
Command and Control (top-right): In Tanzania and elsewhere, there is a growing trend of performance-based programs (results-based financing or payment by results) that use standard indicators to drive budget allocations. In Tanzania, this includes the health sector basket fund and a large World Bank results-based financing program. This process of “indicator selection => indicator performance => budget allocation” provides a clear relationship between data and policy outcome, lending itself to a top-down approach (well captured by a Tanzanian government official, who told me that “people do what you inspect, not what you expect”). Where fixed policies and actionable data are present, this command and control approach makes sense – but be careful before trying to move it to another quadrant.
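As a stylized sketch of that chain, a payment-by-results rule might split a budget in proportion to indicator scores. Everything here is hypothetical – the districts, scores, and proportional formula are illustrative, not the actual rules of the Tanzanian basket fund or the World Bank program.

```python
# Hypothetical payment-by-results rule: indicator performance drives
# budget shares. Names, scores, and formula are invented for illustration.

def allocate_budget(total_budget: float, performance: dict) -> dict:
    """Split a budget across districts in proportion to indicator scores."""
    total_score = sum(performance.values())
    return {
        district: total_budget * score / total_score
        for district, score in performance.items()
    }

scores = {"District A": 0.9, "District B": 0.6, "District C": 0.3}
for district, amount in allocate_budget(1_000_000, scores).items():
    print(f"{district}: ${amount:,.0f}")
```

The appeal, and the risk, of this quadrant is exactly this mechanical link: allocations follow the indicators, so everything hinges on choosing the right indicators.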
Positive Deviance (bottom-left): Here the relationship between data and policy is much less clear, and neither the data nor institutional buy-in lends itself to action. So why not drill down and find what is already emerging and working on the ground (positive deviance)? This means (i) identifying high-performing districts, (ii) studying the factors (both internal and external) that differ from the norm, (iii) developing specific theories of change, and (iv) using peer learning or other dissemination methods to test and learn from these “outliers”.
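A hedged sketch of step (i): one simple way to flag candidate positive deviants is to look for districts whose outcome indicator sits well above the norm. The data, the indicator, and the 1.5-sigma threshold below are all invented for illustration.

```python
# Sketch of step (i): flag districts whose outcome sits well above the mean.
# The indicator values and threshold are made up for illustration.

from statistics import mean, stdev

outcomes = {  # e.g. share of supervised births per district (illustrative)
    "District A": 0.52, "District B": 0.48, "District C": 0.91,
    "District D": 0.55, "District E": 0.47, "District F": 0.50,
}

mu = mean(outcomes.values())
sigma = stdev(outcomes.values())

# Positive deviants: more than 1.5 standard deviations above the mean.
deviants = [d for d, v in outcomes.items() if (v - mu) / sigma > 1.5]
print(deviants)  # -> ['District C']
```

Flagging an outlier is only the start; steps (ii)–(iv) are where you learn whether the deviance is real, replicable, or just noise.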
Name and Shame (top-left): Where evidence-based policy options exist, but actors are either unaware or unwilling to adapt, good old-fashioned naming and shaming can work wonders. We saw this in Ghana: a District Health Director presented local (high) maternal mortality rates to the District Assembly, which rapidly led to an increase in health worker coverage and engagement with community education groups. When we spoke to the director recently, she reported that district maternal mortality rates had been cut in half over a two-year period (of course, many factors may have contributed to this). Naming and shaming can be a powerful motivator when data suggest clear policy changes, but it can otherwise be difficult to replicate.
Analyze and Define (bottom-right): Here, relevant decision-makers are keen to solve a problem, but data may not provide a clear-cut solution. This is where the “elbow grease” approach of exploratory analysis and comparison of inputs (e.g. aid allocation) and outcomes (e.g. poverty) comes in. As an example, Nepal’s Ministry of Finance struggled with planning its investments and use of external resources, and was frustrated by the transaction costs of working with 30+ development partners. Using data from its Aid Management Platform, the government did its own analysis to create a development cooperation policy, outlining the “rules of the game” for development partners in Nepal. Exploratory data analysis for policy should be accompanied by an adaptive policy mindset: perceived relationships within the data may turn out to be blind alleys, requiring the flexibility to test and change course when theories are disproven.
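A minimal sketch of this “elbow grease” step: line up an input series against an outcome series and compute a simple correlation as a starting point for hypotheses. The figures below are invented; a real analysis would draw on exports from a system like the Aid Management Platform.

```python
# Illustrative exploratory step: how does an input (aid per capita) track
# an outcome (poverty rate) across regions? All figures are invented.

aid_per_capita = [12.0, 30.5, 8.2, 22.1, 15.7]  # USD, by region
poverty_rate = [0.41, 0.28, 0.47, 0.33, 0.38]   # share below poverty line

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

print(f"correlation: {pearson_r(aid_per_capita, poverty_rate):+.2f}")
```

Even a strong correlation here is a prompt for further digging, not a conclusion – exactly the “blind alleys” caution above.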
So what? Putting some ToCs to the test
At Development Gateway, our Results Data Initiative is working with country governments and development agencies, using a PDIA approach. At country level, we’re convening quarterly inter-ministerial steering committees (authorizers/problem holders) to identify problems or decisions for which they want to use data, and technical committees (testers/problem solvers) to identify ways to use data to get at these issues. We plan to work only with the government’s existing data sources – to learn what they can (and cannot) do with what they’ve got. Throughout this program, we’ll be applying these ToCs, and will report back on what we learn. I know we’re not the only ones thinking about this, and would love to hear what others have done to use data to influence development programming.
This post first appeared on Duncan Green’s From Poverty to Power blog.
Photo credit: NATS Press Office via Foter.com / CC BY-NC-ND