Think about it. Have you ever left a training program dumbfounded as to why the facilitator actually got paid to deliver such a poor program? Have you ever taken time out of your busy schedule to attend "intensive" training only to find minimal substance? Or worse, have you ever personally developed an excellent pre-approved curriculum only to have the client ultimately say it did not meet expectations?
What's happening here? Is it possible that we as training professionals just don't get it? If so, it's not our fault.
Allow us to clarify.
In the last 30 years much has been written about instructional design. We can recite Analyze, Design, Develop, Implement, and Evaluate (or ADDIE) as a premier Instructional Design model. And, separately, our depth of Evaluation understanding grew when the Four Levels of Evaluation were introduced back in 1959. Indeed, the Four Levels introduced a new depth of knowledge on how training success may be measured.
What has not been focused on, however, is exactly how an instructional designer is meant to integrate the Four Levels of Evaluation with the ADDIE model, phase by phase, to ensure evaluations have substance. And this is where "it's not our fault" applies, to some degree.
The critical role of evaluation development throughout the ADDIE process has not received enough consideration. The "E" is last, depicting Evaluation as an afterthought. It's time for designers to have a resource, a methodology, for Evaluation development above and beyond case studies, forms, and templates. We need something that will help designers set the stage and build a case for meaningful evaluations.
To start, we submit that "E" is not a standalone phase. Rather, it is an inherent part of each ADDIE phase, serving not only to create substantial evaluations but also to validate instructional design at each stage of development.
Evaluation must be on the instructional designer's mind from inception to post-training review, and it must be considered in ways that go beyond traditional thinking. In other words, the instructional designer must think of evaluation not only as a way to measure the success of a program, but as a way to ensure that evaluation findings are meaningful (not just nice) and that the client's training expectations will be met, no question.
To do this, the instructional designer must gather "evidence" during each stage of development—evidence that will ultimately be used to present a solid "case" post-training.
Right from the start, in addition to other analyses (e.g., task analysis, learner analysis), the Instructional Designer must set the stage for meaningful evaluations by considering and/or asking the client questions such as these:
- (Reconfirm) What is the goal of the training? Is it the same for each audience that will receive the training?
- (Critical) What will the client look at to determine if the training is successful or not?
* If known, what will be used as the basis for comparison between before and after training? What will determine success?
* What are the current standards and performance measurement tools or reports?
- Is there consistency between the stated training goal(s) and how the client will determine success? This simple validation may be the "golden ticket" to ensuring client satisfaction. By validating consistency between the stated training goal and evaluation expectations, the instructional designer may curtail post-training comments like, "The training was not what I expected."
Consider this example:
What is the goal of training?
Client response: "I would like participants to learn the features and benefits of the new product." (Level 2 - Learning)
How will you determine if the training is successful?
Client response: "Participants will install the new product without error after receiving training." (Level 3 - Behavior)
- What will the client do with the evaluation findings? What is the evaluation "goal"? This helps to validate the anticipated level of evaluation. For example, instructional designers would want to know immediately if evaluation results will be used as part of a presentation to the Board to determine whether the training program influences sales revenue. (This is NOT something instructional designers would want to learn during Implementation.)
Such considerations will help the Instructional Designer get a sense of evaluation needs right out of the gate. Indeed, such investigation can change initial thoughts about the timeline and scope of an entire project. Further, from the start, the Instructional Designer must get a sense of what kind of evidence he/she must gather to set the stage for meaningful reports, including what kind of information the client will respond to (e.g., objective, subjective, both). The key is to know what is needed to prepare evaluation results that present credible findings to the client. And this must be known as early in the process as possible.
Using information obtained during Analysis, the instructional designer must then determine the evaluation approach to be developed.
For example: If measuring on-the-job behavior (Level 3) is required, pre- and post-training evaluations must be designed. Methods for evaluating at this level include observation, demonstration, and performance reports.
The instructional designer must thoroughly document and describe the proposed evaluation approach in the design document. Too often this is overlooked, and we as training professionals miss an opportunity to fully engage the client and get buy-in on the entire training program (including validation of its success). Documenting the evaluation approach leaves no question about how training success will be measured or about the client's responsibility in helping that happen.
The format for such design documentation varies—there are many options. Regardless, it must include information about the expected Evaluation timing, goal, baseline, and method. Also, it is a good idea to include client responsibilities, when applicable.
Consider this example:
On-the-job performance will be evaluated via observation and performance reports. Evaluation will take place pre- and post-training (90 days after completion of the program). Where possible, a control group (not receiving the training) will be used to compare with those who are ultimately trained. The xyz department would be ideal for the role of 'control group.'
Client responsibilities include:
* Providing a copy of existing standards and/or performance records
* Scheduling observation time with each training participant
* Obtaining approval from appropriate parties to use the xyz department as a control group.
The design document should also describe the method that will be used to evaluate the designed program (e.g., the proposed training materials). Even if this is informal, let the client know what will happen before materials are finalized and distributed.
Overall, there are several considerations and tasks that must be addressed during Design depending on the level of evaluation that is necessary—failure to do so can result in meaningless evaluation reports, or worse.
Development has received more attention than any other phase in the ADDIE model. There are examples and templates available in just about any publication or website related to Evaluation. However, there are several development-related points an instructional designer could miss if focusing only on format and neglecting substance:
- Ensure evaluation questions match the content that is addressed during the training program. Sounds like a no-brainer, but you would be surprised. Evaluation questions (particularly at Levels 2-4) should be geared toward terminal or enabling objectives. This is particularly important to watch when developing online training. Some Learning Management Systems (LMSs) that develop evaluations based on Sharable Content Objects (SCOs) miss the mark if content is not entered properly.
* Compare evaluation questions or goals to actual training content. If the evaluation question cannot be easily found in the instructor's guide, presentation, or handouts, something is wrong.
* Use a format, design, and writing style that matches the audience's reading level.
* If the evaluation requires observation, be sure to write instructions for the observers. Observation procedures must be uniform.
* Be sure to include information on the evaluation approach in the instructor's guide, as applicable.
* Be sure to include evaluation materials and the overall approach in the pilot program: test the materials, process, observer instructions, etc.
If the previous phases are done thoroughly, the implementation of evaluations should fall into place and be fairly self-explanatory. This is also when the training specialist comes into the picture and plays a significant supporting role in the evaluation approach. The training specialist may provide first-hand feedback on the program; he or she is in a position to gauge the immediate reaction and learning, as witnessed during traditional classroom training. It's wise for the instructional designer to check in with the training specialist immediately following training to learn such details and determine if/how the program may be improved.
Evaluation does not end after training is implemented—there is plenty to do. Post-training evaluations must be completed, and the instructional designer must compile results and ultimately develop a report of findings for the client.
The report and findings must be presented in a clear and candid manner, and must follow a clear outline based on the client's earlier answer to the question, "What will you look at to determine if training is successful or not?" When finalized, a meeting should be set with both the training staff and the client to review and discuss the results.
Lastly, be sure to use the evaluation findings. If training fell short of expectations, adjust the program, as needed. Be candid with the client. Explain what will be done to address poor results. And, as needed, provide a proposal that addresses how training may be re-designed to ensure training goals are met. In short, do not leave the client just standing there. If training did not reach its goal, explain your Plan B.
If, as expected, the training program met its objectives and reached the stated goal, capitalize on that success: document the achievement and ensure the client "gets it." Remember, success did not fall from the sky; the training department made it happen. And the instructional designer's overall approach to evaluation as an inherent part of the ADDIE model is the vehicle that proved it.
In closing, when we appropriately address evaluation development throughout the ADDIE process, we set the stage and gather evidence for meaningful evaluation reports—reports which address clients' real concerns and validate that training met their expectations. And when we talk with clients at a deeper level about evaluations, we take instructional design to a deeper level—we improve the client's view of the entire training department's function within an organization.
Knowing this, is it the training professional's fault if training misses the mark? You bet.
Allison A.S. Wimms is a senior training and development specialist at Johns Hopkins HealthCare LLC in Maryland. She is currently working on an Instructional Designer's Handbook specific to evaluation, which will include aids such as checklists, templates, and samples to facilitate evaluation development. Copies of examples related to this article are available upon request. Author James Kirkpatrick's book will be published in August.
For more information, e-mail email@example.com or firstname.lastname@example.org.