Higher ed professionals face increasing pressure to collect data on the performance of their work, and to use that data to calculate return on investment. Our directive these days seems to be "All data are good data."
That may be true. But not all data are the right data.
We discussed the topic with some of our customer schools the other week at a roundtable hosted by Andy Shaindlin, founder of Alumni Futures, Vice President at GG+A, and advisor to Switchboard. Andy raised the issue of right and wrong data as a cautionary note about our growing obsession with performance metrics.
When we use the wrong data as evidence of our success, we risk not only unintentionally inflating or fabricating positive outcomes but also accidentally masking inefficiencies and failures.
Take the following example. College A's career services office decides to invest 5 hours a week promoting and maintaining its career networking LinkedIn group for students and alumni. Within two years, over 5,000 people join. The office counts this as a huge success.
But is it? On closer inspection, we find that the group's 5,000 members have only held a couple dozen conversations in those two years. And only one of those conversations had a tangible positive result—one person found a new job.
On one hand, we have 5,000 group members. That's good! On the other, we have a success rate of two hundredths of a percent (0.02%). That's not so good, especially for an investment of hundreds of hours of work.
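That rate is just the outcome count divided by the input count. A quick sketch of the arithmetic, using the hypothetical figures from the example:

```python
# College A's hypothetical figures from the example above.
members = 5000   # input: people who joined the LinkedIn group
jobs_found = 1   # outcome: tangible positive results (one new job)

success_rate = jobs_found / members
print(f"Success rate: {success_rate:.2%}")  # Success rate: 0.02%
```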
Inputs vs. Outcomes
The difference between those two numbers is, in Andy Shaindlin's words, the difference between inputs and outcomes.
An input happens at the beginning of your "conversion funnel." The conversion funnel is a marketing metaphor analogous to the "giving pipeline"—it begins with awareness and ends with a positive result (a gift, say, or landing a new job). As your constituents travel through the funnel, they go from being unengaged and needing help to being engaged and offering help.
"5,000 people joined our LinkedIn group": That's an input; it's at the beginning of the funnel. From there you need to engage those 5,000 people, get them to participate, and get them the help they need. "One person found a job": That's an outcome; it comes at the end of the funnel. It's the positive result you want, the positive result that demonstrates that your work is successful, and the positive result that motivates your constituents to give back in return.
You can see how using inputs as evidence of success instead of outcomes is jumping the gun. The input is just the beginning. 5,000 people joined our group—now what? With inputs, we have what we need to move toward where we want to be, but we aren't there yet. Measuring inputs and stopping there is like packing for a trip, filling our tank with gas, and then announcing that we've arrived. We still need to go the distance to get the outcomes we want.
A second example. University B's career services and alumni relations offices decide to collaborate on a student/alumni networking event. They count about 200 attendees, split evenly between students and alumni.
That's far fewer people than College A's 5,000-member LinkedIn group. But University B's outcome metrics reveal that the event was much more successful: Of the 200 attendees, 19 reported that they made connections that led to jobs or internships, and the giving rate among alumni attendees increased by 10% the following fiscal year. As always, a number of factors likely contributed to that increased giving rate, but if University B can isolate the rise in giving among event attendees from increases across its constituency as a whole, it can demonstrate a strong correlation between the input (event attendance) and the outcome (increased giving). How's that for ROI?
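Put side by side, the two examples make the input/outcome distinction concrete. A minimal sketch comparing the hypothetical figures from both examples:

```python
# Hypothetical figures from the two examples above.
programs = {
    "College A (LinkedIn group)": {"inputs": 5000, "outcomes": 1},
    "University B (networking event)": {"inputs": 200, "outcomes": 19},
}

for name, counts in programs.items():
    rate = counts["outcomes"] / counts["inputs"]
    print(f"{name}: {rate:.2%} of participants reached a positive outcome")
# College A: 0.02%; University B: 9.50%
```

The smaller program wins decisively once we divide outcomes by inputs rather than counting inputs alone.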
Positive outcomes aren't always so clear-cut or easy to measure—especially when proving a causal link between input and outcome is difficult—but that doesn't mean we should be satisfied with using inputs as a metric for success instead. Positive outcomes are what we work for, and we should demand them for our constituents and our institutions. To ensure that we're delivering those outcomes, we need to track them, no matter how intimidating that process—or the data it generates—may be.
At our roundtable, Andy Shaindlin, paraphrasing Douglas Hubbard's How to Measure Anything, offered some reassuring advice to those unsure how to start: “Measurement is any reduction of uncertainty. You don’t need to know everything. You just need to know more.”*
* Andy notes that Chapter 4 of How to Measure Anything is especially useful for Advancement professionals.