Ask most training professionals about the success of a recent training initiative and you'll probably get an answer expressed in "levels." Since its advent in 1959, the four-level training evaluation model created by University of Wisconsin management professor Donald Kirkpatrick has become an industry standard for training measurement (see sidebar, page 86).
But the first three of those levels—the largely "soft" measurements most often cited by training professionals as proof of success—have begun to lose clout as justification for training. Many corporate training professionals, along with the decision-makers who approve their programs, would prefer to get that proof in the form of dollars and cents.
And it's not just the results of training—Kirkpatrick's Level IV—that corporate executives expect to measure. In addition to improving the organization's bottom line, training is now often expected to improve its own bottom line, through cost savings over previously employed training methods.
"Training's mission must be unashamedly economic," say authors David van Adelsberg and Edward Trolley in Running Training Like a Business (The Forum Corporation, 1999). "Education is still what training does. But business education is a means to business results, not an end in itself." Indeed, if training is to be accepted as a legitimate business unit, most training professionals accept that they'll have to accomplish more while spending less.
But as e-learning enthusiasts are painfully discovering, the one-time savings in labor and travel isn't worth much if the online learning initiative doesn't lead to ongoing, measurable business benefits. In other words, there has to be a point to the training in the first place.
At the same time, measurement experts have begun to question the wisdom of simply choosing one metric—increased profits, for instance—to justify the expense of training. Mark Graham Brown, a performance-measurement consultant in Los Angeles, calls this the "chicken efficiency" test. A certain fast-food chain with which he worked claimed to measure a lot of things, but what managers tracked most carefully was how much cooked chicken was thrown out. The result? According to restaurant managers, it was easy to meet "chicken efficiency" quotas. They simply didn't cook any chicken until it was ordered. Of course, customers expecting "fast" food had to wait 20 minutes for their meals, and they'd probably never return. So managers made their numbers, but at the expense of comparable store sales.
But even if the right "hard" numbers are measured, companies may still be missing a big piece of the ROI puzzle, according to Harvard Business School accounting professor Robert Kaplan and Renaissance Solutions President David Norton. Measures of wasted product or gross sales are lagging indicators, they say, which will only tell you where you've been. To predict financial performance, which Kaplan and Norton suggest is the only true purpose of training, you must track operational measures—such as employee satisfaction and turnover—as well as financial ones.
So what's the answer? Should hard data or soft data be tracked? Should information be collected via evaluations, surveys and interviews, or from profit-and-loss statements? Is it more important to create learner satisfaction or to meet manager expectations? There's no consensus yet, but training professionals are making an earnest effort to quantify training's outcome—whatever it may be.
Measuring Dollars and Cents
Believe the hype surrounding e-learning and you might think it's the solution to world hunger. But one inarguable benefit remains untouted: measurement.
"Historically, the need for training—particularly instructor-led training—was assumed," says Howard Simon, director of program design and evaluation for Bethpage, N.Y.-based Cablevision's newly launched corporate university. "This industry moves so quickly that developing a really researched design just isn't possible. The priority is getting the products to market." But the expense of e-learning, along with skepticism about the reception it will receive, has led training professionals like Simon to more thoroughly investigate training's impact on the bottom line. "Culturally, this is a new concept for our employees. We want to be sure that people will be accepting and willing to put in the appropriate amount of time before we go farther."
Future online programs will be developed internally, says Simon, but Cablevision partnered with Click2Learn.com for the initial e-learning "experiment"—leaving Simon free to focus on measurement. The training was designed around a specific goal: to create increased revenue through subscriber growth. And to evaluate the results, Simon has taken a scientific approach.
"We want to know, specifically, if the training will help employees to cross-sell and up-sell more, and if that will lead to subscriber growth," says Simon. To that end, Cablevision will track categories such as new services sales growth per employee, and the company will also track a control group of about 100 customer service representatives who will not receive training. The results are not yet confirmed, but Simon is cautiously optimistic. "If we were using computers to train their kids—say between the ages of five and 12—I would be tremendously confident. But our demographics will make the process more challenging."
In the future, Simon hopes to find a measurement approach that further isolates the results of training from other environmental factors that might affect ROI—and to expand those results to include productivity improvements and customer and employee satisfaction. For now, however, he is happy to prove e-learning's viability as a solution for his employees. "Down deep inside, I'm not sure we have any alternative," Simon says. "Our rapid growth demands that we try this, and I think we're ready."
Adhering to Tradition
While some training professionals have abandoned Kirkpatrick's model—or at least the first two levels—in favor of more "pure" ROI determinants, others have continued to rely on it, for both the development of training and the measurement of its outcomes. Dayton, Ohio-based NCR Corp.'s learning program manager Marj Lawson, for example, helped build a recent deployment project management program around a classic Level II question: Will the training develop the knowledge and skills of its participants? The answer, as it turned out, was "yes."
"We were introducing a new service offering," says Lawson. "We had to assume that if project managers were up to speed on the process, they would do a better job providing the service." This, in turn, would help accomplish ncr's overall business goal for the service offering, which was to increase revenue by ensuring that the service met customer expectations for time, cost and technical results. Of course, making the connection between increased knowledge and changed behavior—and then to an improved bottom line—required a leap of faith on the part of ncr management, which is why Lawson also attributes the success of the program to "tremendous executive support."
Once the assumptions were accepted, an instructional design team, led by Tim Underwood, began to construct a completely learner-centric program. The finished product included synchronous, Web-based classes; paper- and CD-ROM-based performance support tools; interactive, online support; and finally, a traditional instructor-led workshop. "We were fortunate to be able to design a program specifically for our audience," says Lawson. "So every step was built around the needs and learning styles of our clients." The program even included an optional extra half-day workshop, during which participants could begin using the new deployment tools in a safe, supervised environment. Nearly 33 percent of the participants chose to take advantage of this option.
In an era of skyrocketing dropout rates for online courses, the NCR team was thrilled with the 90 percent completion rate enjoyed by the deployment project management program. And 96 percent of those who completed the program enjoyed the experience (Kirkpatrick's Level I). This was determined not just through smile sheets, but through documentation of an overwhelming number of positive comments from trainees. "They liked the fact that they could learn on their own time," says Lawson, "and that they could come to the class knowing that they were going to focus entirely on hands-on skills training. All of the preliminaries—the page-turning and the PowerPoint-flipping—were already done before they got to class."
The participants' enjoyment of the program paid off: everyone who completed the course scored 80 percent or better on the post-training skill assessment, meeting the requirements of Kirkpatrick's Level II. And although the initial plan didn't call for any further evaluation, Lawson also discovered Level III success, according to post-training monthly status reports. And since project managers are rewarded based upon the volume and quality of their work, Lawson believes that they wouldn't use the tools if they didn't have a positive business impact. The NCR team will soon begin to conduct Level IV evaluations, for which they will attempt to demonstrate any bottom-line improvements resulting from the training.
On the "running training like a business" front, ncr's deployment project management program proved as cost-effective as it was popular. "In the past," says Lawson, "the roll-out of a new service offering would be done through a five-day, instructor-led workshop. The cost of the instructor's time—and of keeping the trainees away from our customers—was enormous." With the blended learning model, employees were trained in one and a half days of classroom time, plus one day of self-study. The savings, which totaled nearly $310,000, more than made up for the cost of creating the program. So the program was a success for the business unit, as well as for ncr clients.
The Power of Expectations
Anyone who's ever tried to quantify training's return on investment knows how difficult it can be to move from "reasonable evidence" to proof—and the measurement process is often as expensive as the training. Toni Hodges, manager of measurements and evaluation for Verizon's Workforce Development Division, Silver Spring, Md., usually spends about $5,000 for an ROI impact study. With the number of programs required to train Verizon's 240,000 employees, those costs would quickly become prohibitive if Hodges were to complete such a study for every initiative.
The solution: "roe." Rather than measuring training's roi, Hodges, who was recently named roi Network's "Best Practitioner of the Year," measures training's return on expectations. "Gathering roi data, in addition to being expensive, can be frustrating," Hodges says. "Most of our business units don't track the data we need at the individual level, so it's hard to isolate the specific effects of a training program."
Expectations, on the other hand, are fairly easy to evaluate. Prior to the beginning of training, a member of Hodges' team conducts a 15- to 20-minute interview with a key executive in the learning effort—usually a company vice president who is financially accountable for the project. Based on that person's expectations—that employees "waste less time and have fewer meetings," for instance—specific learning objectives are established. Perhaps employees will be taught time management skills or given techniques for having more effective meetings. Once the training is complete, the executive is interviewed again.
And Hodges doesn't simply ask for a "yes" or "no" answer regarding whether or not training met expectations. Rather, executives are asked to quantify the results of training. "If the vice president says that his employees are wasting less time, I'll ask him to attach a monetary value to that time." She'll then use that data as "reasonable evidence" in an ROI calculation.
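The conversion Hodges describes can be sketched in a few lines: take the executive's estimate of time saved, monetize it at a loaded labor rate, and feed the result into the same ROI formula. Every input below is hypothetical; the article gives no actual Verizon figures:

```python
# Sketch of turning an executive's ROE estimate into "reasonable
# evidence" for an ROI figure. All inputs are invented.

def time_savings_value(hours_saved_per_week, loaded_rate, employees, weeks):
    """Monetary value of 'wasting less time,' per the executive's estimate."""
    return hours_saved_per_week * loaded_rate * employees * weeks

value = time_savings_value(
    hours_saved_per_week=2,   # executive's per-employee estimate
    loaded_rate=45,           # hypothetical fully loaded hourly rate, $
    employees=200,            # employees covered by the program
    weeks=50,
)
training_cost = 250_000       # hypothetical program cost
roi = (value - training_cost) / training_cost * 100
print(f"Annual value: ${value:,.0f} -> ROI: {roi:.0f}%")
```

The weak link, as Hodges implies, is the executive's estimate itself, which is why the pre- and post-training interviews matter so much.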
Hodges, who evaluated training at Bell Atlantic (before it merged with GTE to create Verizon), cautions against conducting these interviews haphazardly. "Not everyone can do this kind of interview," she says. "It takes a skilled facilitator to both extract and quantify assessment information."
So how accurate is ROE? When Hodges has been able to conduct corresponding ROI impact studies, the results have supported ROE findings—every time. "If I were to conduct an ROI study on the ROE process, the results would be tremendous," says Hodges. "The interviews are cheap, and the results I've gotten in every case are very, very valuable."
Despite the value of an ROE evaluation, Hodges will not give up conducting true ROI studies. For some training initiatives, such as those designed to drive sales in a particular area, she still believes that hard numbers provide the best measurement of success. But for training professionals looking to make educated decisions about their more subjective learning programs, the evaluation of expectations just might be a worthwhile investment.
Regardless of what is measured, however, training professionals agree that training for training's sake is a thing of the past. But which of these metrics will ultimately represent training's true value? Perhaps, some training professionals suggest, it doesn't really matter at all. What's important, they say, is that a business value has finally been attached to the corporate learning experience. By attempting to measure that value—by any means—we can't help but promote its existence.