Results-based monitoring without indicators? Really? You are probably thinking now: “No way! As M&E consultants we spend so much time and capacity-building effort teaching people about the importance of well-defined indicators, and now you're telling us that indicators are not important at all?” Don't worry, I'm not. Indicators are great. Here is what I mean:
In results-based monitoring we ideally use indicators to measure progress. Provided that they meet the SMART criteria, indicators help us to make the intangible measurable and to systematically and objectively track progress. This is what the textbooks say. In the real world, however, sometimes this happens:
- Indicators are formulated at the project appraisal stage by logframe experts who, in the end, take no part in project implementation. Those who are actually tasked with monitoring have had no say in them and, in the worst case, understand the indicators poorly or find them technical, overly academic, and impractical for the actual project.
- Indicator data turns out to be difficult to collect.
- Indicator data is too costly to collect at the scale foreseen by the logframe.
- Indicator data is collected too late to allow for timely decision-making and corrective action.
So what to do? Let's recall that results-based monitoring is about getting relevant insights into project progress. If, for whatever reason, your indicators fail to be practical in this endeavor, you may turn to what I refer to as “soft monitoring”: getting insights in an informal way while still systematically following every step of your intervention logic / results model / theory of change / logframe. Let me give two examples:
- I remember from a workshop a coastal management program that wanted to count the number of nesting seabirds as an indicator of ecosystem quality. Unfortunately, this meant in practice counting hundreds of thousands of birds, with no secondary data available. Simply impossible! The “soft monitoring” alternative is to at least call fishers or farmers in the area and ask them whether they have observed a change in the bird population. Assuming that their livelihood depends on the fish and bird populations, they probably have good insights. Of course, don't ask just one person but several, in different locations.
- Effects of capacity building: Many indicators measure how many training participants have applied the knowledge acquired in a training in their daily life. This often implies tracer studies with extensive surveys a few months after the training. If there is no budget for such a survey, or if it can only be conducted after four rounds of training have already been completed, the “soft monitoring” alternative is to do spot checks with open questions. Call a random sample of participants. Ask them (not on a scale of 0 to 10, but as an open question) whether they could use the knowledge from their training. Why or why not? Ask the trainers whether participants commented during the training on the usefulness of the content, and whether they had plans to use the knowledge. Because if the answer is no, we don't even need to call them months later; we need to adjust the training content right now.
As you see, “soft monitoring” will provide you with no more than anecdotal evidence, which is obviously not as meaningful as methodologically sound indicator information. But it is certainly better than gut feeling, or waiting for indicator data until the project is almost over.
Soft monitoring really comes down to this: regularly reach out to those executing and benefiting from the program. Talk, discuss, and get their feedback. When documented and systematically linked to your results, this feedback will confirm or challenge what you have been expecting to see.
In our monitoring software WebMo, we always encourage people to take notes on project progress beyond indicators and share them with the team. These simple acts of actively observing and discussing the success of activities are monitoring too!
Don't get me wrong: if you have high-quality indicators and the means to measure them, by all means, do! But if not, that is no excuse to skip results monitoring altogether! Do what you can. Because monitoring is not about updating the logframe. It is about continuously improving the project and ultimately achieving better impact.
I look forward to hearing your thoughts and comments!