Wednesday 5 May 2010

Beware Comparisons

In April I spent a couple of weeks in the US attending various conferences and meeting with ACEVO’s partners. The topic of charity evaluation was mentioned in pretty much every single meeting.

At the GEO conference in Pittsburgh, foundations were fighting it out over the extent to which they should evaluate the performance of their grantees and what methods they should use. (Although interestingly they were rather less vocal about having their own performance evaluated.) While I was there I learned of two new donor advisory websites to add to the plethora on both sides of the Atlantic. (The new additions to the list are Philanthropedia and Givewell.)

There is a lucrative industry developing to service the sector’s evaluation, impact assessment, and performance management needs. This is of course welcome. There is no question that we need to be better at generating the evidence of the difference we make, and knowing as much as we can about what works and why. However, while much of this work is vital, some is harmful, and we need to be clear about what we are evaluating, why, and who is doing it.

Firstly we need to ask why a charity wants to evaluate its own work. There are two main reasons. One is to get better at doing what you do, learning which programmes work and which don’t, and making sure your money goes where it adds the most value. The second is to convince others outside of the organisation (eg donors, funders, partners) of the good that you do.

These two motivations ought to be able to complement each other as long as the evaluation is sufficiently focused on assessing how well the organisation has achieved its mission. The right performance management framework should produce data which you can share with the outside world to show the impact you have had. However, it is unwise to focus only on gathering data to share with the outside world as you may end up with two systems, and if your performance management system is not concentrating on delivering your mission then it is not fit for purpose.

Secondly there is the question of who is doing the evaluating. A good performance management and outcome measurement framework developed for a particular organisation will tell you something about how well that organisation achieves its mission. (A great example of this is the work of St Giles Trust with Pro Bono Economics.)

Alternatively, evaluations may be conducted by or through a third party, often one seeking to act as a broker or a guide to help donors or funders identify the most impactful organisations.

In my view there is a huge amount of value in work like that undertaken by St Giles. The evaluation they were able to develop demonstrated the outcomes they knew were important for the achievement of their mission. There is a danger, though, in evaluation frameworks being imposed on organisations where they may not fit, and particularly in their being used to compare the relative impact of different organisations.

We should focus our attention on improving performance management in organisations, which would allow each organisation to communicate more effectively the difference it makes. We should be wary of efforts to compare organisations, for several key reasons.

Firstly, what choices are we actually talking about? If you are worried about the homeless in your home town and there is only one organisation supporting them then who are you going to compare them with? The only real choice is between this organisation and a potential organisation which doesn’t exist in that area. It would be wrong to automatically assume that for each need there is a choice of charities working to meet it.

Even if there is a choice of organisations, for example supporting excluded young people in your area, the approaches and philosophies of those organisations, driven by their missions, may be very different. One may see sport as the route to employment, one may see faith as a way of staying out of prison. Both Kids Company and Eastside Young Leaders Academy, for example, specialise in building the self confidence of troubled children and teenagers. However, their philosophies couldn’t be more different. Both produce the same results (well-adjusted young people who can rise to their potential), yet if you swapped the performance management frameworks around, both would fail by each other’s standards. A potential donor would be deceived into thinking that there is a straight choice between these two organisations.

Secondly we have to dig a little deeper into what actually motivates people to give. I am not a professional fundraiser, but very few people who give to charity do so because they have read evaluations and come to a dispassionate, objective judgment. People give because they have an emotional relationship with a cause and, in many cases, also with an organisation.

Paul Carttar, who heads the Social Innovation Fund at the White House, talked at the GEO Conference about the dilemma of wanting, on the one hand, to focus money on the innovations which work, while on the other accepting the reality that people value organisations in other ways. That does not mean that we shouldn’t prove to donors that we use their money wisely (and indeed keep that conversation live and fresh), but it may mean that we have to think more about how we use qualitative data as well as quantitative; in other words, get better at telling our stories. This is much more sophisticated than a traffic light or a percentage score on a comparison website.

Even for more formal processes of choosing between organisations, such as a public procurement process, I think we have to be realistic that hard evaluative data is still only part of the picture and subjective personal judgements come into play. Chris Stone, Head of the Hauser Institute at Harvard, argued when we met that evaluation reports are unlikely to be what really secures the deal. Public officials are going to be thinking things like ‘do I like and trust the CEO?’ or ‘has the organisation delivered before?’

Thirdly, there is the danger of thinking that one methodology for evaluation or comparison is the holy grail. Tools like SROI are very useful, but they have their limits (as shown by NPC), and it is important that we understand those limits so we use them effectively. We have all railed against inappropriate methods being used to judge charities (such as overhead costs), but we are in danger of inventing new ones if we don’t use the right tools in the right context.

The truth is that comparing charities is really, really difficult. But even if it were possible, it may still not be desirable. Our efforts are much better spent on getting better at demonstrating the difference that organisations make in working towards their missions. The Impact Coalition will shortly be launching its transparency manifesto, and member organisations will commit to greater accountability and transparency in their work. This is the best way to start achieving that goal.
