These are basic questions about training evaluation, but in much of the corporate world they open a Pandora's box of frustrations. Training managers often feel pressured to "prove" that courses deliver a particular return on investment: It cost X dollars to design and deliver the program, and the result was worth Y dollars. But the impact of formal learning doesn't show up in a vacuum, and to isolate the contribution of a training course to, say, a jump in sales or profit margins requires control groups and other experimental mechanisms that executives probably don't want to pay for.
In the most Kafkaesque situations, the course whose ROI is supposed to be proven might have been demanded by an executive despite the training manager's objections that the problem it's intended to solve is not a training problem.
You might expect the cries for documented ROI to be loudest in companies most directly concerned with numbers and finance—firms such as tax and auditing giants Ernst & Young and Deloitte & Touche, or consulting powerhouse Booz Allen Hamilton. You would be wrong. "I can't remember the last time I was asked to demonstrate ROI on a course," says Mike Hamilton, Americas chief learning and development officer for Ernst & Young.
This is not to say Hamilton and his counterparts at Deloitte and Booz Allen don't sometimes calculate the business impact of a course. But training leaders at these firms seem to suffer none of the paranoia or frustration usually associated with the topic of evaluation. This isn't because their bosses don't care if the training they provide is any good. Rather, it's because the bosses care deeply and intimately. Many factors are involved, but the fundamental reason, expressed by training directors at all three firms, is that the only real "product" their companies sell is expertise. When their consultants go into client firms, they must be skilled to a high degree. If they aren't, who needs them?
As Joseph Gibbons, national director of U.S. education and development for Deloitte, puts it: "We're in the knowledge business. That's all we produce at Deloitte.... Our whole business is intimately involved with learning. So the process is collaborative; it's not the training people on the left and the business people on the right."
Evaluating courses in terms meaningful to the executive suite becomes far less problematic when training is integrated in that manner—when, as the cliché has it, training is "linked to driving business goals." These three companies provide a glimpse of what integration looks like at a high level.
When top leaders at Booz Allen think about what constitutes "good" training, or "training we're happy to pay for," what matters most to them? "It's rarely an issue of ROI," says Aimee George-Leary, director of learning and development for the McLean, VA-based firm. "It's more about return on expectations." Expectations tend to center upon "skills acquired, behaviors changed, and the ability to work well in a particular space," she says. "What [trainees] say when they return to work about the help they got from a course" also carries a lot of weight with the firm's higher-ups.
And how does George-Leary herself regard "good" training? "When we think of training we'll fund, we look at [annual] strategic business objectives, what we're hearing from our market and planning teams, what major campaigns might span the next couple of years.... What challenges will people face? What skills will they need?"
On the infrequent occasions when she tries to calculate the Level 4 impact (business results) of a particular course, George-Leary does so on her own initiative, usually because she suspects the course is flawed. If she hears about "a need we may not be hitting the mark on" with a course in, say, leadership or coaching, "we might look at some metrics around attrition or at the [participants'] annual assessment results."
Note her assumption that a worthwhile course, even on a "soft" topic such as coaching, ought to produce results that show up on assessments or in attrition rates. That kind of thinking is reflected in Booz Allen's process for deciding whether a course should be offered in the first place. "Any time we consider a learning project," George-Leary says, the training unit demands to know the business objectives, the expected impact, the intended audience, the course's expected shelf life, and more. "And where is the [alleged] need coming from?" she asks. "Do we have customer feedback or survey data that says this is a problem we need to address?"
It may be that ingrained Level 4 thinking in a training department reduces outside demands for Level 4 proof.
Trainers who experience constant pressure to "prove their value" usually work in companies where the training department tries to "sell" courses to management, observes Maria Manocchio, a vice president with ACS Global Learning, an Ernst & Young spin-off company that handles much of the firm's training. "Here," she says, "business leaders come to us with problems."
Hamilton agrees that this is a great advantage. But more to the point, he says, the nature of those problems, and the firm's way of addressing them, tends to draw the focus away from evaluating "courses," per se.
Typically, he says, partners (i.e., Ernst's owners) or business unit leaders "come to us with a need to change behavior or roll out a new methodology. Generally there is a clear set of objectives: 'Here are the results we want; here's how we think learning can contribute to them.'" But those business leaders understand that a "formal learning event is only one link in a chain that includes ongoing development, monitoring, reinforcement, and maybe mentoring on the job."
In the end, Hamilton says, the important "evaluation" question is, did the business issue get resolved? "But the formal learning piece might be only 30 percent of that." Recognizing this, Ernst's partners actually put great stock in what often is derided as mere "Level 1" feedback—participant reactions—when they judge the quality of a course.
Whether the event in question is a week-long leadership program for the partners themselves or a course for lower-level people in the organization, there are three key questions that resonate most strongly with Ernst's leaders, Hamilton says. All three are directed to participants: Was this a good use of your time? Was it a good use of your money? And would you recommend it to your peers?
On its own initiative, Ernst's training function incorporates Level 2 (knowledge testing) measures into almost all e-learning courses, which account for about half of all training. As for major new course initiatives, Level 3 (behavior transfer) and Level 4 (business results) tracking often happens more or less automatically.
How is that possible? Well, managers send people to the courses precisely because they want to see observable behavior changes that will produce observable business results—such as greater client satisfaction or lower staff turnover, both of which are monitored routinely. When you know what targets you're shooting at, keeping score becomes much easier.
In explaining what Deloitte's leaders value about training, Gibbons says it is hard to overstate the fact that Deloitte, like Ernst and Booz Allen, is in the business of selling expertise, pure and simple.
Deloitte makes its money in the form of billable hours from consultants who serve its clients. So the overriding goal of all training, Gibbons says, is to "make sure our practitioners are at an optimum state of readiness to serve clients. That translates directly into dollars because our business is all about billable hours." If the firm introduces a new consulting methodology, or if its practitioners need to understand a new tax regulation, or if new hires are brought on board, no trainer needs to "sell" top management on the idea that the business will benefit if people achieve a state of "ready chargeability" as quickly and effectively as possible.
If that is the goal, what is management's main indicator that it's being met? "Client satisfaction," Gibbons answers. "Clients who are happy to pay the bills for our services."
Gibbons looks for direct ROI evidence mainly in cases where a course is new and unusually expensive. For instance, a recently developed program for new hires in the consulting division immerses them in a simulated consulting situation, filled with ambiguities and challenges. Because the simulation is "pricey and complex to stage," Gibbons paid special attention to its results. He found that "the return is a 30 to 40 percent faster assimilation rate on client assignments." Translated into billable hours, and subtracting the cost of the program, that represented a net return of more than $66 million in the first year.
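Gibbons doesn't disclose the inputs behind that figure, but the method he describes is simple arithmetic: value the extra billable hours gained from faster assimilation, then subtract what the program cost. A minimal sketch of that calculation follows, with entirely hypothetical numbers for head count, billing rate, hours gained, and program cost; only the general method comes from the article, not these specific values.

```python
def first_year_return(new_hires, billable_rate, extra_hours_per_hire, program_cost):
    """Net first-year return: added billable revenue minus the cost of the program."""
    extra_revenue = new_hires * billable_rate * extra_hours_per_hire
    return extra_revenue - program_cost

# Hypothetical inputs: 1,500 new consultants billing $250/hour, each gaining
# roughly 180 billable hours from faster assimilation, against a $1 million
# program cost. None of these figures appear in the article.
net = first_year_return(new_hires=1500, billable_rate=250,
                        extra_hours_per_hire=180, program_cost=1_000_000)
print(f"${net:,.0f}")  # → $66,500,000
```

The point is not the particular numbers but that every input is something a firm billing by the hour already tracks, which is why Gibbons could produce a dollar figure at all.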
But Gibbons says he feels little pressure from the top to demonstrate such results. "Our organization is numbers-based, but top management is more concerned with the pulse of the population than with numerical data. They want to hear things like, 'My practice leader is taking time to hold career discussions with me,' or, 'I was going to leave Deloitte, but my mentor helped me navigate the cultural landscape.'"
In the end, he says, "anecdotal evidence carries a lot of weight."