
Ways to Integrate a CLA Approach in the Performance Management of a Project/Activity

May 18, 2018
Motasim Billah

Motasim Billah is a Monitoring and Evaluation Specialist at USAID/Bangladesh.

Since the revision of ADS 201 that required us to include Collaborating, Learning and Adapting (CLA) in our work, the M&E team in Bangladesh has been receiving growing demands from AOR/CORs and Implementing Partners for a practical guide to integrating CLA approaches into the performance management, or monitoring and evaluation, of projects/activities. As the operationalization of CLA has been evolving for the Agency itself, there have been more opportunities for M&E practitioners like us to share reflections from our field experiences, which could ultimately contribute toward developing a comprehensive guide in this area. My desire to engage more deeply in this conversation became a reality when I secured the Program Cycle Fellowship with the Bureau for Policy, Planning and Learning (PPL)! During my Fellowship, I was based in the Office of Learning, Evaluation and Research (LER), where I focused intensively on CLA.

The Fellowship provided me opportunities to gain cutting-edge knowledge on CLA through my involvement in different CLA-related processes, such as the Program Cycle Learning Agenda, the CLA Community of Practice, the adaptive management workstream, and CLA Toolkit development. It also gave me access to a wide range of resources, including different Missions' experiences with CLA and experts' opinions on integrating CLA into monitoring and evaluation. My time with PPL helped me organize my thoughts on CLA and write these reflections. The write-up is divided into three sections that shed light on ways to integrate CLA into performance management.

The first section spells out CLA in logic model development and indicator framing. The second shows how to integrate CLA into the MEL plan, data quality assessment, and evaluation. The final section demonstrates using CLA in tracking critical assumptions/game changers, establishing feedback loops, and Mission-wide practices for instituting CLA in the performance management of projects/activities.

Integrating CLA When Developing Logic Models and Indicators

Robust Logic Model

The development of a robust logic model is critical to enable the performance management system of an intervention (that is, a project or an activity) to function and to capture performance in a complex environment. Constructing such a model requires analyzing a development problem from different perspectives, identifying the root causes of the problem and its linkages with other contingent problems, and tailoring solutions to a particular context. It also requires designers to invest a significant amount of time and to ensure active participation of stakeholders during the construction phase. In this respect, adopting a CLA approach is useful throughout the process of developing a logic model.

At the outset, a project or activity should identify stakeholders who can provide substantial insights into crafting the logic model. Once stakeholders are identified, experts in CLA or design could facilitate a logic model workshop to surface the best knowledge, expertise, and experience of stakeholders. A robust logic model may map multidirectional causal pathways toward solving a particular problem, developed through a two-stage process:

  • First, it identifies the core results that the intervention will directly address, based on its resources and manageable interest.
  • Second, it uncovers other results that are also critical for achieving and sustaining those core results, and that the project or activity will pursue by leveraging the interventions of other development agencies, NGOs, the private sector, and governments.

USAID/FFP has been using robust theories of change and logic models in its programs in different countries, which can serve as a useful guide for other USAID programs.

Designers can also use the collaboration mapping tool, developed by USAID/Rwanda and refined by PPL/LER, to unearth additional actors operating in the targeted geographic areas. The tool can then be used to rank those agencies and their respective interventions in terms of the benefits they offer and their effectiveness in helping achieve and sustain our results. For example, in Bangladesh a USAID environment activity partnered with the Government, which allowed the activity to set up its district/sub-district level offices within the premises of the Government Fisheries Agency. This substantially helped the activity reduce logistical costs and strengthened its partnership with the Government. Other examples include joint project development, such as when USAID and DFID collaborated on a major NGO health service delivery project in Bangladesh. A designer can also conduct a beneficiary mapping exercise to reduce overlaps with other interventions in the same geographic region and thus maximize development gains for the target population. To document plans and efforts to develop partnerships, designers could include any collaboration map and stakeholder engagement strategy as an annex to the Project or Activity MEL plan.

Collaboration on Indicators 

The logic model workshop can also be used to extract a set of illustrative indicators to measure the result statements in the logic model. These illustrative indicators will subsequently guide the development of intervention-specific indicators to be documented in Project/Activity MEL plans. Once an activity starts rolling out, the Agreement/Contracting Officer's Representative (AOR/COR) could periodically (e.g., quarterly) hold indicator review meetings with Implementing Partners and other relevant stakeholders to assess how effectively the indicators capture performance and other factors influencing the activity. In this regard, data quality assessments conducted by both USAID and Implementing Partners can be good occasions to review indicators. At the Project level, Project Managers could organize similar indicator review meetings with AOR/CORs to learn about the status of indicators and their effectiveness. The Program Office's participation in project-level indicator review meetings is critical, as it would later help align strategy-level indicators with projects as needed. If a project or activity needs to revise its indicators, the revisions should be adequately reflected in the MEL plan.

A CLA Approach in MEL Plans, DQAs, and Evaluations

Including a learning agenda in MEL plans

A project/activity MEL plan should devote a section to learning that would essentially include a learning agenda at the project/activity level. A learning agenda generally entails a set of prioritized questions addressing critical knowledge gaps. In terms of scope, the questions can address short-, medium-, and long-term issues that are critical for the achievement of an intervention's results. In this respect, a project-level learning agenda can guide activity-level learning questions, and a Mission-wide learning agenda can guide project-level learning questions. For example, the Senegal Mission recently developed a learning plan as part of its Performance Management Plan (PMP) that can help projects and activities articulate learning questions in their respective contexts. The learning section should include the learning activities that would be employed to answer each learning question. It should also include target audiences; learning products (dissemination tools) that will be used to share learning outcomes; roles and responsibilities of different actors; timeline; resources; and next steps.

Data Quality Assessment

The periodic data quality assessment (DQA) is an important reflection tool for USAID and implementing partners to learn about data quality, gaps in existing data collection processes, data storage, and overall data management. A CLA approach can be very effective in conducting DQAs involving USAID, Implementing Partners (IPs), and the local NGOs that are often partners of IPs. Based on DQA findings, periodic (quarterly or biannual) reflection sessions could be organized at the activity level involving all sub-partners of IPs, providing opportunities to take course-correction measures while identifying data strengths and areas for improvement. At the project level, a pause-and-reflect session on 'learning from DQAs' could be organized at regular Implementing Partners' meetings. The session would help both USAID and IPs learn from each other's experiences in managing data in order to strengthen the Mission-level performance information management system. In this regard, it would often be useful for the DQA section of the MEL plan to clearly describe the collaborative practices/activities that would be undertaken to conduct DQAs and share those practices.

Evaluation

Evaluation is an effective tool for capturing systemic learning from the grassroots level. A collaborative approach involving the Program Office, Technical Offices, and relevant stakeholders in developing evaluation scopes of work can be instrumental in uncovering the most pressing issues related to implementation and management. In this regard, Project Managers and AOR/CORs should take the lead in consulting with beneficiaries, implementing partners, and relevant stakeholders to frame good evaluation questions. While framing evaluation questions, it is helpful to explain how they relate to, or contribute to answering, at least one learning question on broader issues: for example, questions that test the development hypothesis or critical assumptions, or that inquire about external factors such as local contexts or national/local policies that might influence interventions. The Bangladesh Mission has recently started the practice of including learning questions in its evaluation scopes of work. The evaluation section in MEL plans could explicitly describe how evaluations to be conducted over the life of a project or activity contribute to answering learning questions.

The dissemination of evaluation findings should extend beyond the evaluation team's out-briefs and uploading the report to the Development Experience Clearinghouse (DEC). In this regard, innovative approaches can be used to share the learning with pertinent stakeholders. At the Mission level, project-wide evaluation dissemination sessions can be organized to share learning with senior management and technical teams. The Program Office can facilitate these sessions in consultation with Project Managers or AOR/CORs and Technical Offices. This type of session would be another platform for project/activity-level decision making, as important insights might emerge from the discussions that could be useful for existing and new projects/activities.

Evaluation recommendation tracker: A collaborative approach should be in place between the Program Office and Technical Offices to ensure that the recommendation tracker functions in an effective and timely manner. The Program Office can nominate a staff member as a Point of Contact (POC) for a particular evaluation's recommendation tracker to work closely with the AOR/CORs or Technical Offices to follow up on the actions suggested in the tracker and agreed to by Technical Offices.

CLA Approach in Critical Assumptions, Feedback Loops, and Institutional Processes

Tracking Critical Assumptions/Risks/Game Changers

Many Project/Activity MEL plans could benefit from including a set of context indicators or complexity-aware monitoring tools to ensure that the overall contextual landscape of the Project/Activity is monitored. This would help us track our critical assumptions and risks periodically, as well as capture any game changers that could have unintended consequences for outcomes. In this respect, Project Managers, AOR/CORs, and Implementing Partners can employ different tools, such as regular field visits, focus group discussions, before- or after-action reviews, and other pause-and-reflect methodologies, to collect qualitative stories. Project Managers, AOR/CORs, and Implementing Partners could organize grassroots-level stakeholder meetings with beneficiaries, teachers, local leaders, journalists, etc. (as relevant to the sector) at least quarterly to understand changes in context. In 2016, CARE presented a participatory performance tracker at a conference organized by USAID that can guide the development of context-specific community tools to gather contextual knowledge. The outcomes of these meetings and the qualitative stories from context monitoring can be reflected in quarterly and annual reports.

Moreover, at the activity level, AOR/CORs can also hold regular learning and sharing meetings with other donors with which the project or activity is collaborating. These learning meetings can shed light on the status of ongoing collaboration, including challenges faced as well as opportunities to expand the existing collaboration. At the project level, the Mission can hold quarterly project learning meetings where Project Managers and AOR/CORs discuss issues related to performance, including theories of change, critical assumptions, and overall implementation and management.

Establishing Feedback Loops: A Tool for Learning and Adaptive Management

Establishing strong feedback loops is important for capturing systemic learning. It is helpful for Project and Activity MEL plans to explain how the feedback loops will be connected to overall performance management. In this regard, feedback loops can be highlighted in any diagram of MEL activities and in data collection flow charts to demonstrate how they continuously provide information that contributes to performance and data management. MEL experts can set up digital feedback loops, such as online platforms, or manual feedback loops, such as feedback collected after a training or intervention. Feedback can also be anonymous, for example by setting up 'feedback boxes' at different hotspots in the field so that stakeholders can provide feedback freely. It is important to put mechanisms in place to ensure that relevant feedback flows to decision makers at the Implementing Partners, AOR/CORs, and Project Managers. In this connection, USAID Missions can learn from USAID/Uganda's feedback loop for real-time adaptation.

CLA Practices in Monitoring, Evaluation and Learning Mission Orders and the MEL Working Group

Institutionalizing CLA practices in performance management requires reflecting them adequately in any Monitoring, Evaluation and Learning Mission Orders, along with Project- and Activity-level MEL plans. The Mission Order would include overarching common principles on CLA practices that would in turn guide Project Managers and AOR/CORs in integrating CLA approaches into their Projects and Activities and the respective MEL plans. In this respect, a Mission-wide working group on MEL can be formed to help implement the Mission Order and sustain good CLA practices in monitoring and evaluation. Currently, the Bangladesh Mission has a functional MEL working group composed of M&E professionals from Technical Offices and the Program Office. The working group provides a platform for discussing M&E-related issues. It plans to incorporate a strong 'L' component into its work by revising the existing M&E Mission Order and identifying CLA Champions in the Mission.

Conclusions

Incorporating CLA practices into performance management is an evolving process. Many of the recommendations provided in my three blog pieces might not work in all contexts, each of which might have realities requiring a different set of practices. I hope these blog posts will stimulate further discussion on CLA in performance management that will enable us to learn from each other's experiences and apply them in our respective contexts.