Community Contribution

A New Study Highlights 10 Years of Developmental Evaluation at USAID

Aug 05, 2021
Chris Thompson

The spread of new monitoring, evaluation, and learning (MEL) approaches reflects the uptake process for research findings in general: it takes time, intention, and effort for knowledge to be disseminated, adopted, and used.

A new study by USAID/Indonesia’s Developmental Evaluation (DE) for USAID Jalin highlights the Agency’s own decade-long process of applying DE, a relatively new approach to utilization-focused evaluation.

This study found that, between Dr. Michael Quinn Patton's publication of his landmark book Developmental Evaluation in 2010[1] and USAID's inclusion of DE as a type of performance evaluation in its Operational Policy for the Program Cycle (ADS 201) in 2020, the Agency conducted 14 DEs at a cumulative cost of approximately $8-10 million.[2]

While the study elaborates on these DEs' duration, budget, structure, and general trends, four overarching themes stand out:

1. USAID is using DE to operationalize collaborating, learning, and adapting (CLA). Because DE strengthens programs by enhancing adaptation, in part by embedding evaluators within projects, it can serve as a practical approach to implementing CLA. This is especially true for improving internal and external collaboration, expanding a technical evidence base, and informing decisions about programmatic adaptations.

This study suggests that both Missions and Washington-based Operating Units are turning to DE to operationalize these CLA concepts in a variety of contexts. In fact, the 14 USAID DEs conducted since 2010 supported eight sectors; seven were country-specific, four were Washington-based, and three were global.

2. USAID's use of DE has increased over time. While USAID conducted only two DEs between 2011 and 2015, it conducted 12 between 2016 and 2020. The study found that this increase coincides with rising interest in this new MEL approach within the Agency, as evidenced by the launch of DEPA-MERL in 2015 and other promotional efforts. It also reflects the uptake process for new ideas and practices in general as they diffuse and gain buy-in over time.

3. DE design options exist to overcome common challenges. The study found that two common barriers to conducting DEs at USAID were their perceived cost and their overlap with other activities' MEL. Yet this need not be the case. The Innovation for Change DE was one of the longest yet also one of the least expensive because it used a part-time embedded evaluator. Furthermore, during the DE for USAID Jalin, USAID, the Jalin project, and the DE divvied up MEL responsibilities to avoid duplicating efforts. These examples suggest that DE use at USAID can grow if DEs are designed to fit their objectives, contexts, stakeholders, and available resources.

4. The evidence base on DE at USAID is growing. This study gathered data by surveying and interviewing 75 people at USAID and its implementing partners, circulating the survey through three listservs (two at USAID and Dr. Quinn Patton's Blue Marble Network) that together reach 1,920 people. It also reviewed resources at the USAID Development Experience Clearinghouse and the American Evaluation Association. Taken together, these knowledgeable practitioners and accessible resources constitute a robust body of evidence from which to continue growing DE use at USAID.

Because DE is a utilization-focused approach to evaluation, its ongoing uptake is an important means for the Agency to meet its performance goals (PGs), such as PG 4.1.1: Increase the Use of Evidence to Inform Decisions.

And, though this study found positive trends in USAID's use of DE, room for growth exists. USAID commissions an average of 200 evaluations per year, totaling more than 1,100 evaluations since 2011. Only a tiny portion of these have been DEs, which underscores the study's greatest value: raising awareness of USAID's own journey of adopting and applying a new MEL option that operationalizes CLA. After all, to get to where you want to go, you need to know where you are and where you came from.

 

------

 

About the Author: Chris Thompson is Chief of Party of Social Impact’s DE for USAID Jalin. His current work focuses on informing decision-making processes to accelerate maternal and neonatal mortality reduction in Indonesia.

Acknowledgments: This study would not have been possible without the support of USAID/Indonesia; the hard work of USAID DEPA-MERL, especially Danielle de Garcia, Sierra Frischknecht, Dominick Margiotta, and Felipe Rangel; the involvement of Dr. Michael Quinn Patton, Charmagne Campbell-Patton, and the Blue Marble Evaluation Group; and the information shared by all those who participated.


[1] While the publication of Developmental Evaluation in 2010 marked DE's public debut, evaluators had been discussing and implementing DE at least as early as Dr. Quinn Patton's 1994 article "Developmental Evaluation" in the journal Evaluation Practice. Furthermore, it is very possible that evaluators at USAID worked developmentally before 2010, and even before 1994, even if they did not call their evaluations "DE."

[2] This assessment identified an evaluation as a DE if: 1) the evaluator was embedded in or closely connected with a program team; 2) the scope supported adaptive management and addressed complexity; and 3) the scope was not tied to summative evaluation questions. However, DE also goes by names such as adaptive evaluation, real-time evaluation, or emergent evaluation, so this assessment may have omitted some DEs given its limited time and resources. We welcome you to share your DE or similar initiative by contacting [email protected].