It’s a daunting task to begin measuring the effectiveness of your work, both because it is hard to know what to measure and because of what you might discover. The key to success is the involvement of funders and service users, and exploring the factors that lead to poor performance in current monitoring and evaluation practice shows why.
In recent years some funders (they will remain unnamed) have designed monitoring and evaluation systems in isolation from the projects and services they fund, then handed them to service deliverers to complete. As a result, these systems feel more like an imposition than a vital check on how our projects and services are performing. At best we feel demotivated and treat the reporting as a chore; at worst we feel frustrated at being asked to measure the wrong things.
Far from entering into a discussion to create a system that works for funders, service deliverers and service users alike, we tend to jump through hoops, half-heartedly completing the monitoring and evaluation reports we inherit. Of course, this keeps the funders happy, which in turn keeps funding rolling in. But take a longer-term view of the situation. It gives funders the false impression that they are collecting the right information, and it hampers our ability to conduct effective monitoring and evaluation. We fail to learn and apply lessons because we lack both the relevant data and the motivation to do so. Ultimately, it is our service users who suffer.
However, we are often guilty of the same error in our own relationships with stakeholders. Even when we develop indicators and outcomes in consultation with our funders, we can be a little too convinced of our own wisdom. There’s a temptation to second-guess what is important to our service users rather than simply asking them. We’re experts, right? This approach can lead to vital information being overlooked, and it can have a knock-on effect on the services we provide. WRVS’s project to capture what is important to its service users illustrates this point perfectly.
Until two years ago, WRVS measured the effectiveness of its work largely by counting outputs, for example how many meals on wheels were delivered against target. The senior management team took the bold decision to ask ‘so what?’ They wanted to know what difference their services actually made to service users’ lives. After commissioning researchers to ask service users what really matters to them, WRVS found that some participants in the meals on wheels programme did not eat the meals they received, yet they continued to participate because the human contact left them feeling less isolated. WRVS still measures the number of meals delivered, but its monitoring and evaluation is now geared towards softer outcomes, such as reduced isolation and increased confidence.
There are, of course, numerous pressures on both funders and service deliverers, which make for an imperfect relationship when it comes to monitoring and evaluation. However, both parties ultimately strive to achieve the same goal: a better standard of living for the people they support. Rather than entrenching old notions of what counts as success, we need to recognise that effective monitoring and evaluation is dependent on the involvement of both funders and service users.
New Philanthropy Capital will release a new publication this month exploring what funders can do to help charities conduct monitoring and evaluation and demonstrate impact more effectively. Helping grantees focus on impact will be available to download for free from New Philanthropy Capital’s website from 16th March.