So, you’ve taken steps to improve your learning design to increase the transfer of learning in the workplace. You’ve successfully communicated with stakeholders and agreed how accountability for embedding learning into daily work will be devolved.
Now you’re faced with another challenge: how exactly do you measure learning transfer after the training has finished?
Use learning evaluation models
If you’re still relying on the simple learner evaluation form—or ‘happy sheet’ as it’s sometimes known—you’re not alone. According to the most recent CIPD Learning and Skills at Work Survey, “measuring the impact, transfer and engagement of L&D activities can’t simply be done by an end-of-course questionnaire or post-training survey”, yet only a small minority are assessing the behaviour change of participants by measuring the transfer of learning in the workplace.
While a quick questionnaire or survey might tell you that your learners loved your course, and those enthusiastic responses might look great on paper, they do little to reveal how much learners have actually learned or applied in practice. The C-suite are not interested in how exciting your course is, but they do care about how the training they’ve invested in is affecting workplace performance and, ultimately, return on investment (ROI).
Professor Robert Brinkerhoff, the creator of the success case method (SCM) evaluation model, states that “performance results cannot be achieved by training alone, therefore, training should not be the object of the evaluation.” Instead of focusing solely on what happens during training, we need to look at the overall impact on performance once back in the workplace.
This is where learning evaluation models can help. We explore five common learning evaluation models in a previous blog post, but here’s a recap of some of the models you can choose from:
- Kirkpatrick’s Four Levels
- The Kirkpatrick-Phillips Model
- Anderson’s Value of Learning Model
- Brinkerhoff’s Success Case Method
- The Learning Transfer Evaluation Model (LTEM)
The choice of model is up to you, but make sure you keep your unique business aims and objectives in mind throughout the evaluation process.
Learning transfer and Kirkpatrick’s Four Levels
Let’s take a look at the one you’re most likely familiar with—Kirkpatrick’s Four Levels, which consists of the following:
- Level 1: Reaction - This describes the learner’s immediate reaction to the learning programme—in other words, their satisfaction with it.
- Level 2: Learning - This involves measuring the learning outcome—has the learning been successfully retained and embedded?
- Level 3: Behaviour - This involves measuring the behaviour change of the learner to ensure they are now able to apply what they’ve learned in the workplace.
- Level 4: Results - This involves measuring the impact of the learner’s behaviour on the organisation.
The Kirkpatrick-Phillips model is an updated version of Kirkpatrick’s, but with a fifth level added: ROI. While ROI is often seen as a necessity for proving the business case of L&D to leaders, we need to bear in mind that ROI tends to be applied only after the learning intervention has taken place.
Too many L&D professionals are still ‘evaluating’ at level 1, learner satisfaction. While it can provide an early warning about what’s not working, it still only looks at learning effectiveness at a superficial level and is no guarantee that learners have acquired any knowledge.
Levels 3 and 4 of the model, however, enable you to determine the behavioural change in the learner and the effect this has had on organisational performance. xAPI (aka Experience API) is a software specification used in e-learning and can be helpful as part of the evaluation process. It allows learning content and learning systems to ‘speak’ to each other in a way that records and tracks learning experiences. For L&D professionals, xAPI offers the opportunity to track more than just progress and scores. We are finally able to consider the bigger picture – from learners’ initial thoughts about the learning, through to the impact it has on their everyday working life.
You’ll find xAPI capability in learning systems such as Thinqi, which enables you to consider more widely what you want to track and how you want to track it. Previous e-learning standards (such as SCORM) largely facilitated data capture only at ‘Level 2: Learning’. Using xAPI, we are able to consider impact across all five of the Kirkpatrick-Phillips levels, capturing data on a range of behaviours—and helping you prove the efficacy of learning transfer.
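To make this concrete, here’s a minimal sketch of the kind of xAPI statement a learning system sends to a Learning Record Store (LRS): an actor (the learner), a verb (what they did) and an object (the activity), with an optional result. The LRS endpoint, credentials and activity IDs below are illustrative assumptions, not any specific system’s configuration.

```typescript
// Minimal sketch: recording a course completion as an xAPI statement.
// Endpoint, credentials and IDs are placeholders for illustration only.
const statement = {
  actor: {
    name: "Jane Doe",
    mbox: "mailto:jane.doe@example.com", // learner identified by email, per the xAPI spec
  },
  verb: {
    id: "http://adlnet.gov/expapi/verbs/completed", // standard ADL verb
    display: { "en-GB": "completed" },
  },
  object: {
    id: "https://example.com/activities/customer-success-induction", // hypothetical activity ID
    definition: {
      name: { "en-GB": "Customer Success Induction" },
      type: "http://adlnet.gov/expapi/activities/course",
    },
  },
  result: {
    success: true,
    score: { scaled: 0.92 }, // scaled score sits between -1 and 1
  },
};

// POST the statement to the LRS's 'statements' resource.
async function sendStatement(): Promise<void> {
  const response = await fetch("https://lrs.example.com/xapi/statements", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "X-Experience-API-Version": "1.0.3", // version header required by the spec
      Authorization: "Basic " + btoa("key:secret"), // placeholder credentials
    },
    body: JSON.stringify(statement),
  });
  if (!response.ok) {
    throw new Error(`LRS rejected statement: ${response.status}`);
  }
}

sendStatement().catch(console.error);
```

Because a verb can be any URI (‘mentored’, ‘observed’, ‘resolved’), statements like this can capture on-the-job behaviour as well as course scores.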
The Learning Transfer Evaluation Model (LTEM)
We can’t talk about learning transfer without mentioning Will Thalheimer’s LTEM, a newer and arguably more sophisticated model, developed in 2018 as an alternative to Kirkpatrick’s to help organisations and learning professionals determine the effectiveness of their evaluation methods. In his accompanying report, Thalheimer wrote that his model “provides more appropriate guideposts, enabling us, as learning professionals, to create virtuous cycles of continuous improvement.”
LTEM comprises eight levels – the first six focus on learning, while the top two capture where learning becomes application and integration at work.
The eight levels are as follows:
- Level 1: Attendance
- Level 2: Activity
- Level 3: Learner perceptions
- Level 4: Knowledge
- Level 5: Decision-making competence
- Level 6: Task competence
- Level 7: Learning transfer
- Level 8: Effects of the learning transfer
This eight-tier model is useful for identifying at what level you are currently evaluating your learning interventions and planning how to move beyond this for better insights at the higher levels. LTEM is by no means an easy method for evaluating your learning interventions, but its thoroughness offers a more robust approach, with better-quality data and performance in mind. Make sure you dedicate enough time to perform a thorough evaluation if you’re going to use this method—we promise it will be worth the effort.
What would successful learning transfer look like to you?
To even begin measuring your results, you need to have a clear vision of what success looks like to you and the organisation. Ask yourself some simple questions to gain clarity:
- Where are you now and where do you need to be?
- What change are you trying to achieve for the organisation?
- What observable behaviour changes will you be able to see?
- How will you measure this?
- Who needs to be involved?
- How often will you need to measure?
- What are the consequences of ‘doing’ versus ‘not doing’?
The answers to these questions will help you identify what you’re trying to prove, the data you need to collect, how to collect it and what to do with it. Perhaps you need to look at using learning technologies to help you gain deeper insights and reduce the time and effort it takes to get an accurate picture.
Using learning technologies to support learning transfer
Imagine your new hires in the customer success team have completed their induction training. A digital badge could be automatically awarded upon successful completion of the module in Thinqi. A second badge can then be awarded for real-world application—for example, successfully resolving five customer service requests using the processes taught in the module. As a line manager or peer observes the knowledge being applied on the job, they can award a badge to acknowledge competence.
Finally, a ‘parent’ badge is awarded upon successful completion of both the learning and application badges. Once this badge is earned, congratulations! The introductory customer success training is complete. Detailed reporting then allows you to see the impact of learning on real-life application.
Perhaps you see a 90% completion rate for the theory badge, but only 10% of those learners have successfully completed the application badge. That’s a strong indication that there’s an issue with learning transfer. Time to review and refine.
Later, you might see that 95% of your learners have achieved the learning badge, with 85% of those also gaining the application badge. Congratulations! Your customer success team has successfully applied the learning to their daily work and it’s showing in their performance. Your customers have never been more delighted with the service they’re receiving.
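If your learning system lets you export badge awards, a transfer rate like the ones above takes only a few lines to compute. Here’s a rough sketch; the record shape is an assumed export format for illustration, not a specific Thinqi API.

```typescript
// Rough sketch: deriving a learning-transfer rate from exported badge awards.
// The BadgeAward shape is an assumed export format, for illustration only.
interface BadgeAward {
  learnerId: string;
  badge: "theory" | "application";
}

function transferRate(awards: BadgeAward[]): number {
  // Learners who completed the theory module.
  const theory = new Set(
    awards.filter((a) => a.badge === "theory").map((a) => a.learnerId)
  );
  // Learners observed applying it on the job.
  const applied = new Set(
    awards.filter((a) => a.badge === "application").map((a) => a.learnerId)
  );
  // Proportion of theory-badge holders who went on to earn the application badge.
  const transferred = Array.from(theory).filter((id) => applied.has(id)).length;
  return theory.size === 0 ? 0 : transferred / theory.size;
}

// Example: two of three learners who finished the theory module applied it at work.
const rate = transferRate([
  { learnerId: "a", badge: "theory" },
  { learnerId: "b", badge: "theory" },
  { learnerId: "c", badge: "theory" },
  { learnerId: "a", badge: "application" },
  { learnerId: "b", badge: "application" },
]);
console.log(`Transfer rate: ${(rate * 100).toFixed(0)}%`); // Transfer rate: 67%
```

Tracked over time, this single figure makes it easy to spot whether a revised module is actually closing the gap between completion and application.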
What’s more, breaking down these activities into smaller, more achievable steps gives learners a clear pathway for development, conducive to retaining your best talent.
In summary…
Learning transfer is one challenge, but data analysis can often feel like yet another barrier to great L&D (don’t worry—we’ve got a free expert guide to help you with that). Ensuring your data successfully joins the dots between training initiatives and performance is vital if you’re going to prove that learning transfer has occurred.
It’s time to go beyond metrics such as learner satisfaction and completion scores. With the right data, you have the key to inform your decisions and inspire the right actions to drive real performance through the power of learning.