Is Finding a Comparison a Sign of Achievement?
In last quarter’s Points of View, we explored the problems inherent in sharing template documents for volunteer programs. We concluded that article by saying:
In writing this Points of View, we realized that the search for templates is connected to other, quite important issues. For example, when volunteer resources managers propose a new type of volunteer role, why do senior managers so often ask “who else has done this before?” The question implies the wish to do what others do, not to be innovative. How do we get our executives to empower volunteer services to test the boundaries, not always stay in them?
Then there is the current desire to benchmark accomplishments, finding ways to compare and contrast the work of one organization to others. Does this stimulate experimentation or maintain the status quo?
So that’s where we start this issue’s Points of View, exploring the questions we didn't get to last time.
The Limits of Benchmarking
What is the most common question that bosses ask volunteer managers, and that volunteer managers even ask one another? It’s this: “How many volunteers do you have?” Think about the discussions that periodically take place in online discussion groups to try to compare turnover/retention rates. We seem to be obsessed with comparing our volunteer programs in numerical ways.
There have been only a few attempts at formal, large-scale benchmarking exercises. In the UK, Agenda Consulting (an HR consultancy) has done more than anyone else to compare volunteer programs. Working with many of the largest charities, Agenda Consulting seek data on 28 different measures for what they call their ‘Volunteers Count’ benchmarking, which is based on their existing People Count benchmarking exercise for paid staff.* The online information about Volunteers Count is impressive, but it is a service that organizations pay for and so is not accessible to all. Therefore, most organizations seek to benchmark themselves against others more informally.
Whether using a formal benchmarking program or seeking a more informal comparison, what do we hope to achieve by comparing our data to others? Can we actually learn anything meaningful? Given the diversity of settings in which volunteers serve, is comparison even possible?
We certainly have our reservations.
Take a simple comparison, like the total number of volunteers. A high head count may look good on a report but reveals little of any value. The success of a volunteer program is not determined by how many people it involves but by what those people achieve. In fact, if we can achieve just as much impact with fewer volunteers (and therefore with less cost), aren’t we being more effective? Tony Goodrow makes a compelling case for this in his 2010 e-Volunteerism article, Calculating the ROI of Your Volunteer Program – It’s Time to Turn Things Upside Down, where he explains his Relative Impact ROI model and the wrong questions we ask.
Many businesses and charities evaluate their success at reaching objectives by using ‘Key Performance Indicators’ (KPIs). However, sometimes KPIs are not chosen effectively and can prove dangerous to true progress towards goals. Rob once heard a volunteer manager enthusiastically proclaim victory at finally getting a KPI related to volunteers onto the senior team’s scorecard (volunteers had previously been absent from any goal setting). What was the KPI assigned? Growth in the number of volunteers in the organization. In other words, senior management would measure performance success solely on whether more volunteers were on the books.
The debate Rob had with this person was whether any KPI, no matter how useless, was better than none. Was it better to have a KPI that said nothing of any value about the volunteer program and that, in fact, could damage the standing of the volunteer manager if the number of volunteers did not increase? Or was it better to have no KPI on the senior team scorecard until such time as a ‘proper’ measure could be determined?
Retention as a Red Herring
Another specious indicator that many seem keen to benchmark is a comparison of retention or turnover rates. The thinking is that by comparing how many volunteers start and leave our agency, and over what periods of time, with the equivalent data from other agencies, we will know if we are doing better or worse than they are.
Whilst this makes sense on one level, we feel it, too, is fundamentally flawed.
First of all—as Rob highlighted a while back on his blog in ‘It's time to ditch the word retention’—it is commonly assumed that successful volunteer retention means keeping people for as long as possible, and that most turnover is a bad thing. In the 21st century world of shorter-term, flexible volunteering, this thinking is completely wrong-headed.
Second, unless the organizations are doing similar work with similar profiles of volunteers, then comparing retention rates between different organizations is like comparing an elephant to an acorn.
If your volunteers are mainly young people seeking experience to gain employment, then high turnover is great if it means your volunteers are going into paid work. If your volunteers are seniors giving committed service to an inter-generational mentoring program, turnover may be much lower. Neither program can learn much from the other’s retention rates that would affect its own, because the fundamental nature of who the volunteers are, what they do, and how they operate is so different.
So is benchmarking volunteer programs a total waste of time then? No, benchmarking can be a helpful tool when done thoughtfully. Going back to the list of Agenda Consulting’s benchmarking factors, there are some measures that volunteer managers have been comparing for years, such as: Do you have a volunteer expenses policy? Do you have a formal induction for volunteers? Do volunteers get an annual review/appraisal? While focused only on the practical activities of volunteer management, such indicators at least speak to a well-run infrastructure supporting volunteers.
The problems come when the benchmarking is done for the wrong reasons (e.g., because the data is easy to capture, such as total number of volunteers) or where the wrong questions are asked. What do we really want to know? What meaningful measures of success and impact should we be monitoring and comparing? Only when we know the answers to those questions can we really justify the effort of comparing what we do with others.
“Who Else Has Done this Before?”
We don’t know the source, but we agree with the observation that no executive wants to be the first to do something – or the last. Pioneering a new idea sounds risky, while not keeping up with trends can bring obsolescence.
If some in the UK are currently enamored of KPIs, Americans love “innovation” (or at least the idea of it). So there is now a Social Innovation Fund, and social entrepreneurs are celebrated. This trend is also prevalent in the UK, in part because funders always want to give money to new things rather than fund what works. Susan explored this last year in a Hot Topic called “Volunteers and the Quest for Innovation,” pointing out that old is not necessarily outdated and new is not necessarily better.
The problem relevant to this article is how often senior managers react to a new service proposal presented by the volunteer manager with, “Has anyone else done this before?” or “Are there any models out there?”
Such questions are reasonable, but not as a first reaction. The priority concern should be: “Does this idea have potential to further our mission and serve our clients?” Whether or not it has been done before seems a singularly uninspired way to choose an activity.
No one should be reinventing the wheel. Once it has been decided to go in a certain direction, any competent manager will do research to learn more about available resources, experts, success stories, and failure warnings. The way the project is structured and monitored should indeed build on the best practices others have evolved through prior experience.
More importantly, no one should reinvent the square wheel! Just because there are models to be found does not guarantee they offer the approach that will work well in a different organization. Why were choices made to go in one direction versus another? What external factors influenced the project that are not readily visible to the observer? What would the organizers do differently if they had the chance to start all over now?
Finally, are you looking at what others have done and consider successful, or simply at something they have settled for? Believe us, the Electoral College in the United States is one model of electing a political leader, but it’s not necessarily a good model.
Being the First
Just as poorly framed KPIs can wrongly judge volunteer impact, the absence of a model for doing something new should never be grounds for refusing to be the first to do it. Do not conclude that a lack of models means the idea won’t work or will carry huge risk, nor assume that you could not have dreamed up something truly innovative.
There are several remarkable things about working with volunteers that Susan often identifies. The first is that there are no rules for what volunteers or those who lead them must do. Sure, there are some legal restrictions that apply to everyone, but basically, if people are willing to give time and energy to do something useful to further a cause that matters to them, who says they can’t or shouldn’t? Always remember that if your organization says no to something volunteers truly want to do, they can always leave and start their own organization. Lots of charities began in just that way.
Another unique aspect of volunteering is that we can ask volunteers to experiment with a project specifically as a way to test whether a service idea is feasible – and before anyone requests money to fund it. So a proposal from the volunteer office does not necessarily commit the organization to anything long term, unless its success makes it desirable to expand the effort.
So it’s fine to consider safety, needed qualifications, methods of monitoring and reporting, and other proper management actions when trying out a new service with volunteers for the first time. But none of these should stop you from pushing the envelope, thinking outside the box, or any other metaphor for exploring new worlds!
Researching what others have done is useful in implementing a new idea. But demanding existing models before granting permission to try something brand new simply shuts out discovery.
Why, then, do senior colleagues do this? Here are some thoughts:
They see volunteers as incapable of doing anything truly innovative.
This is nothing new. Many managers, and indeed too many paid staff altogether, still view volunteers as the nice-to-haves, the little old ladies who make the tea and stuff the envelopes, causing no trouble and not really making much of a difference. With that mindset, they clearly miss the huge potential of what Tom McKee calls the ‘New Breed’ of volunteer: skilled people, keen to work in teams on projects that make a real difference to a cause, not simply a contribution.
They see volunteers as inherently risky.
Linked to the point above, if volunteers are seen as largely incapable of doing any truly meaningful work, then engaging them in such endeavours must carry huge risks. Senior managers with this mindset forget that the agency that employs them probably started through volunteers taking a risk and doing something new. They literally owe their jobs to risk-taking volunteers!
They have a low opinion of the professionalism of those who lead and manage volunteers and volunteer programs.
As we pointed out earlier, competent managers will do research to learn more about available resources, experts, success stories, and failure warnings. If you have brought an idea to the table and it’s been shot down because of a lack of evidence that it works elsewhere, maybe this is telling you how the manager concerned sees your role and our field: if the volunteers are incompetent, well-meaning amateurs, then so must be the people who manage them. Really?
What are your thoughts on benchmarking, KPIs, and existing models to measure or guide volunteer involvement? Please share in the Comment Box below.
*As an aside, we do note that Agenda Consulting's paid staff survey is called ‘People Count’ whilst the volunteer benchmarking is called ‘Volunteers Count’. Are volunteers therefore not people? Why is the paid staff benchmarking not called ‘Employees Count’?