When carrying out M&E consultancy or building monitoring systems with our software WebMo for a client, I often get the request to make a direct link between activities and indicators. In these situations I explain that, from a methodological perspective, this is not proper practice: the center of results-based monitoring is (how else could it be?) results. Activities are carried out to produce results, which are then measured and validated by indicators.
But in practice, I sometimes see that in results-based monitoring systems, results are simply skipped. The focus on activities and indicators is so strong that results become nothing more than the brackets around a group of indicators or sub-headings in the operational planning matrix.
Based on my experience, I see two main reasons for this neglect of results:
- Project staff sometimes lack the methodological knowledge. Although they “do” results monitoring and are aware of its importance, the distinction between targets, results and indicators is blurred.
- The second reason is more profound and can be traced back to the planning stage, when indicators are formulated: sometimes, indicators read like “12 workshops are executed”. No wonder staff see a direct link between activities and indicators, because this indicator is indeed about activities, not results. This is poor indicator design that completely ignores the concept of results: what are the workshops supposed to achieve? What will people learn in these workshops, and how will this change their behavior?
So, as an M&E consultant, what can I do? The problem is that indicators are often already cast in stone with the donor, so re-formulating them is not really an option. And program staff have enough to do without feeding two sets of indicators: one for the donors and one for the real results monitoring. How do you handle these situations? How do you create awareness for results when, at the same time, the spotlight is on indicators?