One of the unexpected findings from eLearning Guild Research is that only 24 percent of respondents have hard data about the effectiveness of their training methods, and only 11 percent have data showing that what they do really works. What does this latter group do differently?
First, members who have data showing strong results have important reasons for measuring those results: the organization requires measurement, learners need feedback, regulatory or certification standards must be met, and senior management needs the information. Organizations without data showing strong results rated these reasons as significantly less important.
Second, members who have data showing good results also have a better track record of establishing success criteria early on. They evaluate success by using pass-fail rates, examining test scores with standard methods, comparing pre-test and post-test scores, and comparing learners' results with one another. Members with less supporting data used these approaches far less often.
Finally, members who have data showing good results also employ more instructional designers with specific knowledge of, and advanced education in, measurement and evaluation methods, including Master's degrees in those fields.