In the New Economy, business metrics may surpass traditional Levels-based metrics in determining training's return on investment.
Ask most training professionals about the success of a recent training initiative and you'll probably get an answer expressed in "levels." Since its advent in 1959, the four-level training evaluation model created by University of Wisconsin management professor Donald Kirkpatrick has become an industry standard for training measurement.
But the first three of those levels (the largely "soft" measurements most often cited by training professionals as proof of success) have begun to lose clout as justification for training. Many corporate training professionals, along with the decision-makers who approve their programs, would prefer to get that proof in the form of dollars and cents.
And it's not just the results of training (Kirkpatrick's Level IV) that corporate executives expect to measure. In addition to improving the organization's bottom line, training is now often expected to improve its own bottom line, through cost savings over previously employed training methods.
"Training's mission must be unashamedly economic," say authors David van Adelsberg and Edward Trolley in "Running Training Like a Business" (The Forum Corporation, 1999). "Education is still what training does. But business education is a means to business results, not an end in itself." Indeed, if training is to be accepted as a legitimate business unit, most training professionals accept that they'll have to accomplish more while spending less.
But as e-learning enthusiasts are painfully discovering, the one-time savings in labor and travel isn't worth much if the online learning initiative doesn't lead to ongoing, measurable business benefits. In other words, there has to be a point to the training in the first place.
At the same time, measurement experts have begun to question the wisdom of simply choosing one metric (increased profits, for instance) to justify the expense of training. Mark Graham Brown, a performance-measurement consultant in Los Angeles, calls this the "chicken efficiency" test. A certain fast-food chain with which he worked claimed to measure a lot of things, but what managers tracked most carefully was how much cooked chicken was thrown out. The result? According to restaurant managers, it was easy to meet "chicken efficiency" quotas. They simply didn't cook any chicken until it was ordered. Of course, customers expecting "fast" food had to wait 20 minutes for their meals, and they'd probably never return. So managers made their numbers, but at the expense of comparable store sales.
But even if the right "hard" numbers are measured, companies may still be missing a big piece of the ROI puzzle, according to Harvard Business School accounting professor Robert Kaplan and Renaissance Solutions President David Norton. Measures of wasted product or gross sales are lagging indicators, they say, which will only tell you where you've been. To predict financial performance, which Kaplan and Norton suggest is the only true purpose of training, you must track operational measures, such as employee satisfaction and turnover, as well as financial ones.
So what's the answer? Should companies track hard data or soft? Collect information via evaluations, surveys and interviews, or from profit-and-loss statements? Is it more important to create learner satisfaction or to meet manager expectations? There's no consensus yet, but training professionals are making an earnest effort to quantify training's outcome, whatever it may be.
Measuring Dollars and Cents
Believe the hype surrounding e-learning and you might think it's the solution to world hunger. But one inarguable benefit remains untouted: measurement.
"Historically, the need for training?particularly instructor-led training?was assumed," says Howard Simon, director of program design and evaluation for Bethpage, N.Y.-based Cablevision's newly launched corporate university. "This industry moves so quickly that developing a really researched design just isn't possible. The priority is getting the products to market." But the expense of e-learning, along with skepticism about the reception it will receive, has led training professionals like Simon to more thoroughly investigate training's impact on the bottom line. "Culturally, this is a new concept for our employees. We want to be sure that people will be accepting and willing to put in the appropriate amount of time before we go farther."
Future online programs will be developed internally, says Simon, but Cablevision partnered with Click2Learn.com for the initial e-learning "experiment," leaving Simon free to focus on measurement. The training was designed around a specific goal: to create increased revenue through subscriber growth. And to evaluate the results, Simon has taken a scientific approach.
"We want to know, specifically, if the training will help employees to cross-sell and up-sell more, and if that will lead to subscriber growth," says Simon. To that end, Cablevision will track categories such as new services sales growth per employee, and the company will also track a control group of about 100 customer service representatives who will not receive training. The results are not yet confirmed, but Simon is cautiously optimistic. "If we were using computers to train their kids?say between the ages of five and 12?I would be tremendously confident. But our demographics will make the process more challenging."
In the future, Simon hopes to find a measurement approach that further isolates the results of training from other environmental factors that might affect ROI, and to expand those results to include productivity improvements and customer and employee satisfaction. For now, however, he is happy to prove e-learning's viability as a solution for his employees. "Down deep inside, I'm not sure we have any alternative," Simon says. "Our rapid growth demands that we try this, and I think we're ready."
Adhering to Tradition
While some training professionals have abandoned Kirkpatrick's model, or at least the first two levels, in favor of more "pure" ROI determinants, others have continued to rely on it, for both the development of training and the measurement of its outcomes. Dayton, Ohio-based NCR Corp.'s learning program manager Marj Lawson, for example, helped build a recent deployment project management program around a classic Level II question: Will the training develop the knowledge and skills of its participants? The answer, as it turned out, was "yes."
"We were introducing a new service offering," says Lawson. "We had to assume that if project managers were up to speed on the process, they would do a better job providing the service." This, in turn, would help accomplish NCR's overall business goal for the service offering, which was to increase revenue by ensuring that the service met customer expectations for time, cost and technical results. Of course, making the connection between increased knowledge and changed behavior?and then to an improved bottom line'required a leap of faith on the part of NCR management, which is why Lawson also attributes the success of the program to "tremendous executive support."
Once the assumptions were accepted, an instructional design team, led by Tim Underwood, began to construct a completely learner-centric program. The finished product included synchronous, Web-based classes; paper- and CD-ROM-based performance support tools; interactive, online support; and finally, a traditional instructor-led workshop. "We were fortunate to be able to design a program specifically for our audience," says Lawson. "So every step was built around the needs and learning styles of our clients." The program even included an optional extra half-day workshop, during which participants could begin using the new deployment tools in a safe, supervised environment. Nearly 33 percent of the participants chose to take advantage of this option.
In an era of skyrocketing dropout rates for online courses, the NCR team was thrilled with the 90 percent completion rate enjoyed by the deployment project management program. And 96 percent of those who completed the program enjoyed the experience (Kirkpatrick's Level I). This was determined not just through smile sheets, but through documentation of an overwhelming number of positive comments from trainees. "They liked the fact that they could learn on their own time," says Lawson, "and that they could come to the class knowing that they were going to focus entirely on hands-on skills training. All of the preliminaries, the page-turning and the PowerPoint-flipping, were already done before they got to class."
The participants' enjoyment of the program paid off: everyone who completed the course scored 80 percent or better on the post-training skill assessment, meeting the requirements of Kirkpatrick's Level II. And although the initial plan didn't call for any further evaluation, Lawson also discovered Level III success in post-training monthly status reports. And since project managers are rewarded based upon the volume and quality of their work, Lawson believes they wouldn't use the tools if the tools didn't have a positive business impact. The NCR team will soon begin conducting Level IV evaluations, in which they will attempt to demonstrate any bottom-line improvements resulting from the training.
On the "running training like a business" front, NCR's deployment project management program proved as cost-effective as it was popular. "In the past," says Lawson, "the roll-out of a new service offering would be done through a five-day, instructor-led workshop. The cost of the instructor's time?and of keeping the trainees away from our customers?was enormous." With the blended learning model, employees were trained in one and a half days of classroom time, plus one day of self-study. The savings, which totaled nearly $310,000, more than made up for the cost of creating the program. So the program was a success for the business unit, as well as for NCR clients.
The Power of Expectations
Anyone who's ever tried to quantify training's return on investment knows how difficult it can be to move from "reasonable evidence" to proof, and the measurement process is often as expensive as the training itself. Toni Hodges, manager of measurements and evaluation for Verizon's Workforce Development Division, Silver Spring, Md., usually spends about $5,000 on an ROI impact study. With the number of programs required to train Verizon's 240,000 employees, that cost would quickly become prohibitive if Hodges were to complete such a study for every initiative.
The solution: "roe." Rather than measuring training's ROI, Hodges, who was recently named ROI Network's "Best Practitioner of the Year," measures training's return on expectations. "Gathering ROI data, in addition to being expensive, can be frustrating," Hodges says. "Most of our business units don't track the data we need at the individual level, so it's hard to isolate the specific effects of a training program."
Expectations, on the other hand, are fairly easy to evaluate. Prior to the beginning of training, a member of Hodges' team conducts a 15- to 20-minute interview with a key executive in the learning effort, usually a company vice president who is financially accountable for the project. Based on that person's expectations (that employees "waste less time and have fewer meetings," for instance), specific learning objectives are established. Perhaps employees will be taught time management skills or given techniques for having more effective meetings. Once the training is complete, the executive is interviewed again.
And Hodges doesn't simply ask for a "yes" or "no" answer about whether training met expectations. Rather, executives are asked to quantify the results. "If the vice president says that his employees are wasting less time, I'll ask him to attach a monetary value to that time." She'll then use that data as "reasonable evidence" in an ROI calculation.
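Hodges doesn't show her worksheet, but the monetization step she describes is simple arithmetic: take the executive's per-employee estimate and scale it to an annual figure. A sketch of that step, with all numbers hypothetical:

    # Converting an executive's quantified expectation ("employees are
    # wasting less time") into the "reasonable evidence" fed into an ROI
    # calculation. Every figure is hypothetical.
    hours_saved_per_week = 1.0   # executive's post-training estimate, per employee
    loaded_hourly_rate = 40.0    # hypothetical fully loaded labor cost
    employees_trained = 100
    work_weeks_per_year = 48

    annual_benefit = (hours_saved_per_week * loaded_hourly_rate
                      * employees_trained * work_weeks_per_year)
    print(f"Annualized benefit estimate: ${annual_benefit:,.0f}")  # $192,000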
Hodges, who evaluated training at Bell Atlantic (before it merged with GTE to create Verizon), cautions against conducting these interviews haphazardly. "Not everyone can do this kind of interview," she says. "It takes a skilled facilitator to both extract and quantify assessment information."
So how accurate is roe? When Hodges has been able to conduct corresponding ROI impact studies, the results have supported the roe findings every time. "If I were to conduct an ROI study on the roe process, the results would be tremendous," says Hodges. "The interviews are cheap, and the results I've gotten in every case are very, very valuable."
Despite the value of an roe evaluation, Hodges will not give up conducting true ROI studies. For some training initiatives, such as those designed to drive sales in a particular area, she still believes that hard numbers provide the best measurement of success. But for training professionals looking to make educated decisions about their more subjective learning programs, the evaluation of expectations just might be a worthwhile investment.
Regardless of what is measured, however, training professionals agree that training for training's sake is a thing of the past. But which of these metrics will ultimately represent training's true value? Perhaps, some training professionals suggest, it doesn't really matter at all. What's important, they say, is that a business value has finally been attached to the corporate learning experience. By attempting to measure that value, by any means, we can't help but promote its existence.
Whose Job Is It Anyway?
Training House founder and chairman Scott Parry was one of the first in the industry to offer a money-back guarantee for his training programs. To date, the Princeton, N.J., company has not been asked for a refund. Parry's secret? He holds trainees, along with their managers, responsible for "keeping score" of training's return on investment.
Like most training providers, Parry builds each program and its measurement around the expectations of his client. He then goes one step further: "I bring in the manager of each participant so that the two of them are responsible for generating and implementing an action plan for each course." Performance improvement, Parry continues, is the responsibility of participants. "As trainers, we can provide tools to make it easier for employees, but ultimately, they're the ones who are going to measure and maintain performance."
This philosophy sows the seeds of Parry's success. "No graduate of the course and his or her manager is going to say, 'We didn't see a damn thing different,' because what they'd really be saying is, 'We didn't do a damn thing different afterwards.'"
Before the beginning of each course, Parry meets with the bosses of would-be participants to create a three-way partnership (the trainer, the trainee and the trainee's manager) responsible for effecting change. "If a company fails to meet its expected performance improvement," says Parry, "invariably it's because the managers did not follow through, and they're very quick to admit it." So in addition to creating more opportunities for success, this process also makes it possible to identify the causes of failure.
Parry's Training House isn't the only beneficiary of this system; participants, he says, are also coming out ahead. "One of the most common questions we were asked was, 'Does my boss know what you're teaching?' because their bosses weren't practicing what we were preaching." As a result, participants, no matter how well they liked the course, were not confident that they would have management support to practice what they had learned.
"Training is too important to be left exclusively to the trainers," Parry says. "Any coach who has a winning team knows that he or she has to be the one coaching that team and must be there at all the practice sessions. No sane coach would tell his line trainers, "You train them, and I'll show up for the big games." And yet that's what's been happening in corporate America." There are some things, Parry believes, that can't be delegated. Training?and measuring its results?is one of them.
Levels of Learning
In 1959, University of Wisconsin management professor Donald Kirkpatrick proposed a model for training evaluation that remains widely accepted today. An assortment of other models, including one from industry icon Stephen Covey, has emerged since, but all are at least loosely based upon those original four levels.
Despite their age, there's still something for everyone in Kirkpatrick's levels: Levels I and II, which measure trainee satisfaction and increased knowledge, respectively, focus on an individual's training experience. Level III, which measures changes in on-the-job performance, focuses on training results at the individual level. Finally, Level IV, which measures the organizational results of training, focuses on training's return on investment for the company as a whole.
The disconnect, it seems, occurs between the levels training professionals want to measure and the ones their organizations or clients want to see. Trainers are more likely to cite Levels I and II results to support their programs, not only because these levels are easily calculated with smile sheets and pre- and post-training tests, but also because these measurements often reflect the "good job" being done by training professionals. What senior management generally wants to know is the actual, bottom-line result of training, which is far more difficult to measure. And because those results are affected by factors beyond trainers' control, trainers are often reluctant to take responsibility, or blame, for the outcome.
Perhaps this paradox explains why Franklin Covey employs levels that parallel Kirkpatrick's, but focuses on teaching trainers to prove and take responsibility for the real value of training. To make this value more meaningful to both trainers and their clients, Franklin Covey has divided Kirkpatrick's Level IV into two distinct measures of training results. The first (Franklin Covey's Level IV) is organizational: Did the organization improve as a result of training? The second (Franklin Covey's Level V) is less subjective: Did the financial benefits of training outweigh its cost?
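The article doesn't spell out the Level V math, but in Phillips' published ROI methodology it is conventionally expressed two ways: as a benefit-cost ratio and as an ROI percentage. A sketch using those standard definitions, with hypothetical figures:

    # Level V in the standard Phillips formulation: monetized program
    # benefits compared against fully loaded program costs. Figures are
    # hypothetical (the benefit carries over from the earlier sketch).
    program_benefits = 192_000.0  # annualized, monetized benefits
    program_costs = 75_000.0      # design, delivery, evaluation, participant time

    bcr = program_benefits / program_costs                         # benefit-cost ratio
    roi_pct = (program_benefits - program_costs) / program_costs * 100

    print(f"BCR: {bcr:.2f}")       # 2.56: each $1 spent returns $2.56 in benefits
    print(f"ROI: {roi_pct:.0f}%")  # 156%: net return on the training investment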
"The ROI process represents one of the most effective ways in which training and development can increase its influence in the organization," says Jack Phillips, vice president of Franklin Covey's Jack Phillips Center for Research. "Trainers must measure the contribution of programs in terms that senior management appreciates."
The fact that trainers are coming around to Phillips' assessment is evidenced by the overwhelming number of participants enrolled in Covey's ROI workshop, "Measuring the Impact of Learning on Key Business Results." Fewer and fewer organizations, it seems, are willing to accept the cost of training on faith.
Still, Kirkpatrick's Level IV results (Covey's IV and V) are difficult and expensive to obtain, and there is plenty of controversy over which metrics should be given weight in the analysis. Levels I and II, on the other hand, are right at trainers' fingertips. In that respect, Kirkpatrick's model couldn't be more relevant, despite having celebrated its 41st birthday.
First Things First
As corporate training departments have transitioned from cost centers to legitimate business units, some training professionals have become more focused on defining and meeting the business objectives of their programs and less focused on meeting the individual development needs of employees.
In their push to definitively establish training's return on investment, however, some training professionals have forgotten that not all aspects of training are scientific. The trainees, after all, are human, complete with the requisite frailties and idiosyncrasies. In the end, they can only be taught if they're willing to learn.
With the help of Mentergy Chairman Steve Allen, managers of the Union Pacific Railroad were forced to confront this reality. In order to achieve company goals, measurable objectives had to be set aside, at least temporarily, in favor of a much "softer" strategy: improving employee morale.
In 1988, Union Pacific conductors' ability to deliver shipments on time and to the right location was well below average; fewer than 70 percent of orders were accurately delivered. Hoping to improve these figures, company executives decided to install computers and satellites on the engines, giving conductors greater means for communication and tracking.
Steve Allen, then president of Allen Communication (Allen merged with Gilat Communications and LearnLinc to form Mentergy last May), was contacted to provide training for the new system. "We went out to do audience analysis with these conductors, and they were a very hostile group," Allen recalls. "They hated the management and didn't understand the business aspects of the railroad at all. In that state of mind, we really couldn't help them with anything."
Allen reported this finding to the CEO of Union Pacific, explaining that he would be unable to provide computer training for conductors. "I informed him of their frustration with management and the administration and explained how it would make training impossible."
Allen's honesty paid off. At the CEO's request, Allen did a deeper analysis of Union Pacific's problems. "There was this tremendously disjointed system of communication and collaboration." The solution, Allen suggested, was to "soften hearts a little bit."
"If we could get people communicating again, and help them all to understand what was in front of them, we would have a chance," Allen says. "Then we could do whatever kind of training we needed to do."
Allen and his team set out to find something that would make "culture" training more universally appealing to Union Pacific's diverse employees. The answer? Country music. So a lyricist and songwriter were hired, along with a professional country western band. One of the singers was dressed as a conductor, and he sang about that group's particular frustrations, Allen explains. Another singer was dressed in a suit and tie and sang from the managers? point of view. Ultimately, a complete music video was produced and distributed to Union Pacific's 24,000 employees.
"It's kind of a different ROI story," Allen says. While this original training course didn't have a specific business benefit, it opened the door for other courses that did improve the company's bottom line. "Their ability to deliver products where and when they wanted improved by almost 25 percent. Accuracy is now near 90 percent."
The lesson, Allen believes, is that training programs can't always start with business goals. "Before anyone can help these groups collaborate and learn together, they have to understand each other's roles and how those roles affect the company's ability to deliver." Only then, he says, will training have a chance to make a real impact.
-Donna Goldwasser is senior editor of TRAINING.
COPYRIGHT Bill Communications Inc. 2001. All rights reserved.