Community Contribution

Three Ways Experts Measure Adaptive Management

Aug 21, 2018
Meghan Jutras

At this year’s Moving the Needle event, a thought-provoking panel session on “Measuring the Hard to Measure” tackled a tough development question: how can we measure the contribution of collaborating, learning, and adapting (CLA) to better organizational and development outcomes? Panelists shared their diverse approaches for measuring the (nearly) unmeasurable to close evidence gaps.

Kicking off the discussion, Stacey Young (Senior Learning Advisor and CLA Team Lead, USAID Bureau for Policy, Planning and Learning) provided some context: USAID is investing in building the evidence base for CLA because it believes that systematic, intentional, and resourced CLA in its programs will yield increased organizational effectiveness and improved development results. However, the contribution that CLA makes at either level is neither clear nor easily measured, and testing this theory of change is particularly thorny. Measuring indirect benefits is difficult at best, all the more so when prevailing approaches favor quantified evidence that is narrowly described and easily attributed. USAID's effort to build the Evidence Base for CLA (see here for a new dashboard on EB4CLA) involves collecting and synthesizing the existing evidence on whether, how, and under what conditions CLA contributes to better outcomes, and also identifying effective approaches for measuring this contribution.

Move beyond linear measurement. Kerry Bruce (Executive Vice President, Social Impact) discussed how Global Learning for Adaptive Management (GLAM), an activity jointly funded by DFID and USAID, is working toward adaptive rigor in a complex world. Kerry posited that the development community faces many challenges that traditional, linear measurement cannot address. The Ebola crisis is a defining example of the need for a dynamic, iterative approach that integrates research, measurement, and programmatic and policy action. GLAM's emerging principles of adaptive rigor are supported by documentation of best practices in monitoring, evaluation, and learning for adaptive management. These include: holding multidisciplinary, cross-stakeholder sessions for problem analysis and theory definition; triangulating multiple data sources and perspectives; conducting regular, strategic stress testing; and developing an organizational culture that encourages and rewards being open, inquisitive, and responsive.

Get stakeholder buy-in. Shannon Griswold (Senior Scaling Advisor, USAID Global Development Lab) and Rebecca Herrington (Developmental Evaluator, USAID Global Development Lab / Social Impact) shared how they have leveraged developmental evaluation (DE) at USAID's Global Development Lab ("the Lab"). A DE commissioned by the Lab explores which programmatic approaches work effectively toward sustained uptake and which do not, enabling teams to learn from one another and establish feedback loops for active adaptation. Employing process tracing, positive deviance, and outcome harvesting, the DE offers a learning- and action-oriented approach to understanding complexity. Rebecca explained that buy-in from stakeholders can make or break an evaluation, so documenting pivots (along with the reasons behind them) and paying continuous attention to stakeholder engagement are crucial.

Identify adaptive factors. Alison Hemberger (Senior Advisor, Markets and Learning, Mercy Corps) spoke about Mercy Corps' AdaptScan process for assessing, learning from, and improving its adaptive practices. This approach identifies how factors that enable collaborating, learning, and adapting contribute to the adaptive actions program teams take, and the resulting difference in development outcomes. Through this process, Mercy Corps identified 15 adaptive factors across five themes, ranging from strategy to processes to partnerships to learning. Key findings, such as the importance of moving beyond traditional partnerships and of planning for adaptation in budgets and reporting, were incorporated into an adaptive management plan. Mercy Corps saw the impact of these efforts in team behavior, as well as in the focus, reach, and sustainability of its work.

Audience members contributed valuable reflections, discussing their key takeaways and asking questions about the presenters' approaches. Participants suggested that while everyone wants to see how adaptive management can fit into their work, it is much more difficult to operationalize. In the examples shared, they saw a common theme: focusing on and documenting small, incremental changes helps ensure that data are actually used. They also considered when relationships, rather than processes, may be the more important target for change. Finally, the group noted that having champions who buy into non-traditional approaches to measurement and evaluation, particularly for things that are hard to measure, is essential.