Community Contribution

Learning Agenda Experiment

Oct 05, 2022
Thomas Johnson

Development hypotheses - the "if-then" statements contained in a logic model or framework - occur not just at the higher outcome level of the framework but throughout it. I refer to these as "design variables" since their treatment is integral to how a project or activity is formulated and implemented. Hypotheses at any level are infrequently validated through testing.

An important if underutilized means of doing this is through experimentation. The basic idea is to test the hypothesis (design variable) directly, the approach at the heart of the scientific method, which constructs hypotheses and then validates them over time. Validated hypotheses in turn form the basis of a theory; without them there can be no sound "theory of change."

Over the past two years Financial Services Volunteer Corps (FSVC) has conducted an experiment it developed in conjunction with a democracy and governance activity it has implemented for USAID/Niger. The methodology was an adaptation of "structured experiential learning (MeE)" developed by Lant Pritchett and colleagues, introduced in an April 2013 working paper (no. 322) issued by the Center for Global Development. The stated purpose of MeE was to provide implementing agencies with a means to "rigorously search across alternative project designs to provide feedback into decision loops." MeE's developers argue that within-project variations in design can serve as their own counterfactual while supporting innovation and providing an evidence base for funding agencies like USAID.

With the support of USAID/Niger, the experiment was conducted in conjunction with the “SHIGA” democratic governance program FSVC is responsible for implementing. The purpose of the experiment was to validate the design variable regarding the effectiveness of coalition-based collective action involving advocacy work of civil society organizations (CSOs). Such efforts are the first of SHIGA's three primary objectives.

SHIGA emphasizes the importance to governance of the national budgetary process in terms of citizen input and oversight. FSVC’s approach to capacity building of CSOs involved with such efforts is based on a year-long “incubation process” which FSVC (and others) had been using successfully to build the capacity of small and medium enterprise start-ups. Use of incubation with CSOs to engage in advocacy was an innovation.

Of the overall incubation cycle #1 cohort of CSOs, eight (8) in four groups were awarded sub-grants in June 2021. A key development hypothesis underlying Objective 1 was that advocacy/oversight efforts would be more effective if undertaken by CSOs working together in consortia, rather than independently, thus employing collective action – a major theme of USAID's regional RISE II program to which SHIGA contributes. A well-considered CSO consortium would also have the added benefit of providing intra-partner capacity building, with skills transferred from more established CSOs to less-experienced partners.

Based on this development hypothesis the CSOs were strongly encouraged if not required to develop and submit small grant program (SGP) proposals which involved consortia. Many of these CSOs had little or no experience working with partners and for this reason SHIGA assisted the process by suggesting possible consortia partnerships. However, a few CSOs felt strongly about the advantages, in their view, of working independently.

The Experiment
This perspective prompted FSVC to question the validity of the hypothesis and led to the decision to test it using the Structured Experiential Learning (“MeE”) methodology, which is essentially a process to conduct an experiment involving what can be termed “design variables.” Testing of these is what MeE's developers refer to as "crawling the design space" by trying out design alternatives and then adapting the project based on the results. Use of MeE, as it related to learning and adaptive management, had been noted in FSVC’s original proposal for SHIGA, and this desired validation of a key development hypothesis provided a good case for its use. It is notable that USAID/Niger's solicitation for what became SHIGA encouraged CLA and by extension use of approaches such as MeE.

To test the collective action design variable, FSVC designed an experiment covering the six-month-long SGP. The intent was to be able to compare the programmatic outcomes of consortia and at least one individual awardee. In considering proposals, the SHIGA team purposefully selected for small grant awards two consortia of two members each, one consortium of three members, and a single CSO that desired to work alone. While the four awardee proposals had different foci, they were similar enough in terms of the incubation cycle theme of national budget input and oversight to allow a comparison of results.

Experiment Results
The experiment results were somewhat mixed (as is frequently the case in science) but largely validated the tested development hypothesis. Activity implementation was adapted as a consequence, with notable documented results.

The experiment results relied on metrics associated with both improved CSO organizational capacity and the results of the advocacy work. The results generally supported the hypothesis that collective action was more effective than individual effort, in terms of both capacity building and advocacy outcomes. All of the participating CSOs improved both types of performance as a result of SHIGA support, some very significantly. The average improvement in the Likert-based "advocacy index" was 68.5 percent; the comparable measure of broader organizational improvement was at a similar level, with one CSO improving an impressive 110 percent. In terms of advocacy outcomes, one consortium prompted a desired action by the national government's Prime Minister.
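The percent-improvement figures above can be illustrated with a short sketch. The CSO names and index scores below are hypothetical placeholders, not SHIGA's actual data; the calculation simply shows how improvement relative to a baseline index score might be computed and averaged.

```python
# Hypothetical sketch of percent improvement on a Likert-based index.
# Baseline and endline scores per CSO are illustrative values only.
baseline = {"cso_a": 3.1, "cso_b": 2.4, "cso_c": 4.0, "cso_d": 2.0}
endline = {"cso_a": 5.2, "cso_b": 4.3, "cso_c": 4.9, "cso_d": 4.2}

def pct_improvement(before: float, after: float) -> float:
    """Percent change from a baseline score to an endline score."""
    return (after - before) / before * 100

# Per-CSO improvement and the cohort average.
improvements = {cso: pct_improvement(baseline[cso], endline[cso])
                for cso in baseline}
average = sum(improvements.values()) / len(improvements)

print({cso: round(v, 1) for cso, v in improvements.items()})
print(round(average, 1))
```

Note that averaging percent changes weights each CSO equally regardless of its starting score, which is one reason (as discussed below) that CSOs with low baselines can dominate the average.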

In addition to use of these two semi-quantitative indices, SHIGA used a qualitative process to assess each awardee's programmatic results. This assessment was conducted by the SHIGA Objective 1 team responsible for the CSO incubation process, a group very familiar with the entire cycle, including the SGP portion, which it oversaw. Using a 1-10 Likert scale, each awardee received a score based on how actual results compared with those originally intended. All awardees scored in "positive" territory: two consortia awardees scored "8," the individual awardee scored "7" and the final consortium scored a "6."

One of the consortia scoring highest also excelled in capacity building and thus can be considered the overall most successful awardee. The low-scoring consortium was hampered by its members' differing organizational cultures and priorities. The individual awardee showed little improvement in its Advocacy Index score but started from a high baseline. Across all eight CSOs, those starting with lower initial scores showed the most improvement – evidence of the desired development outcome.

Summary of Key Takeaways
The experiment using the MeE methodology was a first within USAID. Experiments such as this provide a useful means of operationalizing portions of USAID's Collaborating, Learning and Adapting (CLA) approach – namely learning and adapting. The development hypothesis/design variable tested, seen from another but related perspective, becomes a learning agenda question (LAQ); i.e., "Is multi-party collective action CSO advocacy more effective than a single-party effort in public policy advocacy?" The experimental methodology proved to be a sound means of testing a hypothesis and answering the related LAQ.

Results were used by FSVC to adapt the approach to subsequent programming in other CSO incubation rounds with some impressive results. For example, in the 2nd incubation round conducted in the city of Zinder, two of the participating CSO consortia were, based on the SHIGA experience, able to obtain grants from non-USAID sources. This “proof of concept” test suggests such experiments should be used more widely within USAID to support learning, adaptive management and ultimately improved performance.

FSVC was recently awarded additional funding for new programming over 18-24 months. The final design will be developed through a co-creation process involving the Mission, FSVC and Nigerien stakeholders. Ideally, another opportunity to utilize MeE will be identified. The final report of SHIGA’s MeE experiment is available in the Learning Lab community resource material or from FSVC.

About the authors
Thomas Johnson

Thomas Johnson served for 25 years (1987-2012) with USAID in a number of backstops including Project Development. Since retirement in 2012, Thomas has worked for USAID implementing partners with a focus on monitoring, evaluation, research and learning (MERL).