A version of this post originally appeared on Inside Higher Ed's Call to Action blog.
It's a simple question that every industry struggles to answer: Is what we're doing working?
As the public increasingly asks that question of institutions of higher education, those schools are turning to their offices and asking it in turn. We've all felt pressured to answer it, expected to, as if defining and measuring intangible ROI were easy.
But it isn't easy. Many offices aren't measuring their performance at all. Many of those that do measure are measuring the wrong things. And that makes it impossible to answer the question.
We know we're doing something. Like this chopstick fan designed to cool ramen:
But is that something providing value or performing a valueless function? The chopstick fan cools down noodles, sure, but much less conveniently than just blowing on them—or waiting for them to cool. It has a function, but it is value neutral or, arguably, value negative because it just gets in the way.
Too many of our own programs are the same way, and we don't even know it.
Measuring nothing
We've all experienced the terror of flying blind, operating without metrics of any kind. It's anxiety-inducing. It can make us feel inferior or even useless.
We ask ourselves, "What if the data make me look bad?" One thing is for sure: Ignoring the data will make you look worse.
And starting to measure one's own success is itself a useful task, a sign of someone doing their job well. From there, we can improve.
Measuring the wrong things
In 2010, Pepsi launched the Pepsi Refresh Project, an initiative where people could submit and vote for their favorite nonprofit projects to receive grants from Pepsi.
The project generated 3.5 million Facebook likes, 60,000 Twitter followers, and over 80 million votes for nonprofits.
But it didn't sell more Pepsi.
Pepsi cancelled the project in 2012 after falling from second to third place in national soda market share.
We can learn from their mistakes by not letting ourselves get caught up in the appearance of success, and instead constantly questioning whether our definition of success is accurate.
One common example of measuring the wrong thing is misuse of the Net Promoter Score. In the NPS system, people are asked a single question: "On a scale of 0 to 10, how likely would you be to recommend X to a friend?" Sometimes there are a few follow-up questions, but this question is always NPS's core.
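For concreteness, here's how the score itself is calculated: respondents who answer 9 or 10 count as promoters, 0 through 6 as detractors, and the score is the percentage of promoters minus the percentage of detractors. A minimal sketch in Python (the survey responses are made up):

```python
def nps(ratings):
    """Compute a Net Promoter Score from a list of 0-10 ratings.

    Promoters answer 9 or 10; detractors answer 0 through 6.
    The score is the percentage of promoters minus the percentage
    of detractors, so it ranges from -100 to +100.
    """
    if not ratings:
        raise ValueError("need at least one rating")
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return round(100 * (promoters - detractors) / len(ratings))

# Hypothetical responses from an event survey:
print(nps([10, 9, 9, 8, 7, 10, 6, 9, 10, 3]))  # -> 40
```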
NPS can be useful for measuring the quality of an event or program. For example, we might ask attendees of an alumni networking event to fill out an NPS survey. That would tell us whether they liked the event or not. But it wouldn't tell us...
- How many connections alumni made during the event.
- How many alumni hired or were hired as a result of networking.
- How much more likely attendees are to give as a result of attending.
And so on. So we can demonstrate to our colleagues that 90% of attendees rated the event a 9 or 10 on their Net Promoter Score survey, but we still can't demonstrate the event's ROI.
Of course it's important to create programs that our constituents would recommend to others. But it's equally if not more important to create programs that deliver value, that generate a return on investment for our constituents, our community, and our institution as a whole.
For more on this subject, see our post, Inputs vs. Outcomes: Are You Using the Right Data to Measure ROI? See also this Stanford Social Innovation Review article, Are You Delivering Services or Providing Value?
Measuring too many things
There are hundreds of data points we can track these days, and digital tools are adding more every year. Not just participation, reach, and dollars given, but the tangled network of variables that reveal how they interact.
In data-rich environments, it's easy to lose sight of the metrics that really matter. Without a focus on the right metrics, we lose track of our goals and sometimes inadvertently paper over our weaknesses. That's why many organizations in the private sector pick One Metric That Matters (OMTM) to clarify and direct their work.
Obviously, one variable won't be enough to measure the success of an entire office. But choosing your OMTM is helpful when planning and reflecting on individual projects. It's what enables you to take what you've built and make it better—to iterate. It's also what unites your team's different priorities.
An example:
An advancement office running a senior class giving campaign might track any number of variables to measure the campaign's success:
- % of seniors who give
- Average gift amount
- Total gift amount
- Year-over-year growth in participation rate
- Retention rate into seniors' first year as alumni
- Ongoing giving rate of participants vs. non-participants
- NPS of seniors who run the volunteer campaign committee
- # of entrants into annual senior giving campaign T-shirt design contest
- # of attendees at senior giving campaign events
- # of likes, comments, and retweets on posts about senior giving campaign
- etc.
Someone who isn't familiar with the ins and outs of fundraising in higher ed might choose total gift amount as the OMTM. Anyone who has been involved with a senior giving campaign before has probably focused on the percentage of the senior class who give. I would recommend focusing on the retention rate as seniors transition into their first year as alumni and the ongoing giving rates of participants vs. non-participants, since the principal aim of most senior giving campaigns is to generate awareness among seniors about the importance of giving back to their alma mater.
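To make that concrete, here's a minimal sketch of how those two rates might be computed. The records and field names below are hypothetical, not a prescription for any particular database or CRM:

```python
# Hypothetical records: one row per graduating senior, flagging whether
# they gave through the senior campaign and whether they gave again in
# their first year as alumni. Field names are made up for illustration.
seniors = [
    {"senior_campaign": True,  "gave_year_1": True},
    {"senior_campaign": True,  "gave_year_1": False},
    {"senior_campaign": True,  "gave_year_1": True},
    {"senior_campaign": False, "gave_year_1": False},
    {"senior_campaign": False, "gave_year_1": True},
    {"senior_campaign": False, "gave_year_1": False},
]

def first_year_giving_rate(group):
    """Share of a group that gave in their first year as alumni."""
    return sum(d["gave_year_1"] for d in group) / len(group)

participants = [d for d in seniors if d["senior_campaign"]]
non_participants = [d for d in seniors if not d["senior_campaign"]]

# Retention: campaign participants who gave again as first-year alumni.
print(f"Participant retention rate: {first_year_giving_rate(participants):.0%}")
# Comparison: do participants go on to give at a higher rate?
print(f"Non-participant giving rate: {first_year_giving_rate(non_participants):.0%}")
```

If participants keep giving at a meaningfully higher rate than non-participants, that's evidence the campaign is doing its real job: building the giving habit.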
That isn't to say other metrics should be ignored, just that they shouldn't be treated as substitutes for the ones that really matter, or as a means of softening the blow of an underperforming campaign.
For more on this subject, read this SSIR article, Selecting the Right Growth Metrics: Fewer but Better and the Knight Foundation blog post, Measuring What Matters.
In conclusion
Fortunately for us, it's usually easy to intuit what we have to measure in order to answer the question, "Is what we're doing working?" But sometimes we make excuses for not measuring ROI properly—"It would be too hard," "It would take too much time," "It can't be measured," etc.
I hope these rules of thumb will help you push past those excuses and make the case for choosing the right metrics. We can't improve what we don't measure.