Government's Data-Driven Frenemies
This blog was originally published by Governing.
Historically, there's been an unfortunate and unproductive divide between two groups that share the same goal: getting government to make more informed, data-driven decisions. On one side are those tasked with measuring performance. On the other are program evaluators.
“You could look at the history of program evaluation and performance measurement as a cautionary tale of two children who were brought up in the same house but were raised by different tribes and aren’t so friendly with one another,” says Don Moynihan, a professor at the La Follette School of Public Affairs at the University of Wisconsin-Madison. “The [split has become] institutionalized in government.”
Performance measurement adherents hold a variety of positions in state and local government. Many have some kind of training in measurement, but they don't tend to see their jobs as part of a singular profession that must adhere to a set of standards.
Then there's the more academic group: program evaluators. That title has a clear, distinct meaning. These men and women are deeply focused on standards and guidelines, pursue continuing professional development and have expertise in the fields whose programs or agencies they evaluate.
“Many evaluators have disdain for performance measurement types because they don’t think performance measurement is sufficiently rigorous,” says Phil Joyce, a professor at the University of Maryland's School of Public Policy.
What’s the difference in the products the two groups produce?
“Performance measurement is a great tool for monitoring purposes, but it doesn’t tell people whether the things you are measuring are the right things to measure, and it doesn’t tell why something is happening,” says Rakesh Mohan, director of the Office of Performance Evaluations in Idaho.
As Joyce explains further, “A performance measure could tell you that childhood obesity has declined by 5 percent from last year to this year, but it doesn’t tell you that the reason is a particular government program or a change in the economy or a private-sector initiative.”
Program evaluations, as a result, tend to be lengthy documents that combine data (often drawn from performance measurement), interviews and analysis to provide a full picture of the changes needed to help an agency or program function most effectively.
Each approach, however, has its advantages and disadvantages.
For one, the fact that program evaluators follow carefully prescribed standards makes it more likely that one evaluation can easily be compared to another. Unlike performance measurement experts, program evaluators are also often independent of the program or agency they're evaluating.
But program evaluations can be expensive and often take at least two years to complete. Some people, including legislators, who would be eager to read an evaluation to inform their decision-making can't wait that long, because by the time the evaluation is released, the ground beneath the issue has often shifted. What's more, evaluations take a snapshot of an agency or program at a single point in time; they're not useful for seeing what changes have taken place over time.
From our perspective as regular users of both evaluations and measurements, any rancor between the two groups defies common sense. In years past, the American Society for Public Administration has reportedly tried to bring the two groups together, but with little impact. It seems to us that if a state or city doesn't have the capacity or time to do an evaluation, performance measurements can still help it identify efforts that deserve deeper attention. If both sides of the rivalry can agree on nothing else, they certainly agree on this: The more information governments have, the better.