
What learning evaluation model should you really be using?

Nick Davies, Chief Commercial Officer (CCO)

In learning and development, you’re likely familiar with the pressure to back up the impact of learning programs with demonstrable data in order to win support from business leaders.

So why are so few learning professionals measuring training against business KPIs? How effectively are you currently evaluating your learning programs? One of the most common forms of training evaluation is the simple learner evaluation form – or ‘happy sheet’ – and this, perhaps, is our clearest indication of where the problem lies. Most decisions are based on cost or learner feedback, because the return on investment (ROI) of training is rarely calculated and there are no processes in place to measure its effectiveness.

To help overcome this challenge and start proving results in a clear, evidence-based way, learning evaluation models provide a structured approach to assessing the impact of your learning strategy. Which model – or, as you’ll find out later, which aspects of various learning evaluation models – you decide to use will depend entirely on your unique business aims and objectives.

Let’s explore five common models.


1. Kirkpatrick’s Four Levels

You’re probably familiar with the old Kirkpatrick model, which involves four levels of learning evaluation:

  • Level 1: Satisfaction - This describes the learner’s immediate reaction to the learning program.
  • Level 2: Learning - This involves measuring the learning outcome – has the learning been retained and become embedded?
  • Level 3: Impact - This involves measuring the behaviour change of the learner to ensure that they are now able to apply what they’ve learned in the workplace.
  • Level 4: Results - This involves measuring the impact of the learner’s behaviour on the organisation.
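To make these levels concrete, here’s a minimal sketch of how each level might map onto measurable data points. The metric names below are illustrative assumptions, not a prescribed set – swap in whatever your organisation already tracks:

```python
# Illustrative mapping of Kirkpatrick's four levels to example metrics.
# The metric names are assumptions for illustration, not a prescribed set.
KIRKPATRICK_LEVELS = {
    1: ("Satisfaction", ["post-course survey score", "completion feedback rating"]),
    2: ("Learning", ["pre/post assessment score delta", "quiz pass rate"]),
    3: ("Impact", ["manager-observed behaviour change", "on-the-job task audits"]),
    4: ("Results", ["error-rate reduction", "sales uplift", "staff retention"]),
}

def describe(level: int) -> str:
    """Summarise one evaluation level and its example metrics."""
    name, metrics = KIRKPATRICK_LEVELS[level]
    return f"Level {level} ({name}): e.g. {', '.join(metrics)}"

for level in sorted(KIRKPATRICK_LEVELS):
    print(describe(level))
```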

Kirkpatrick’s model was revolutionary when Dr Donald Kirkpatrick originally defined it in the 1950s but, over time, its limitations became apparent. For example, while it provides a logical structure and process for measuring learning, it neither establishes business aims nor takes ROI into account.


Key takeaways: While Level 1 might not be a great indicator of learning success – a learner may not enthuse about a particular learning program, but that doesn’t mean they haven’t learned anything from it – it does, however, provide an early warning about what’s not working. Perhaps the content wasn’t engaging enough, the delivery style was poor, or the materials weren’t up to scratch.

Levels 3 and 4 of Kirkpatrick’s model, on the other hand, are critical for evaluating your learning. Knowing the impact of behavioural change on both the learner and the organisation is key to measuring the results of training and learning programs. For the evaluation process to be effective, however, it needs to happen continuously, not just as a one-off event.

If you want to use some of the methods of evaluating your learning outlined in the Kirkpatrick model, it’s worth remembering that xAPI (the Experience API) can be used to collect meaningful data at all four levels.
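As a minimal sketch of what xAPI data collection might look like in practice, the snippet below posts a single statement to a Learning Record Store (LRS). The LRS URL, credentials, learner and activity IDs are placeholder assumptions; the actor/verb/object structure and the version header come from the xAPI specification:

```python
import requests  # third-party HTTP library: pip install requests

# Placeholder LRS endpoint and credentials - replace with your own.
LRS_URL = "https://lrs.example.com/xapi/statements"
AUTH = ("lrs_username", "lrs_password")

# A minimal xAPI statement: "Jane Doe completed the fire-safety course".
statement = {
    "actor": {
        "name": "Jane Doe",
        "mbox": "mailto:jane.doe@example.com",
    },
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/completed",
        "display": {"en-US": "completed"},
    },
    "object": {
        "id": "https://example.com/courses/fire-safety",
        "definition": {"name": {"en-US": "Fire Safety Essentials"}},
    },
    "result": {"score": {"scaled": 0.85}, "success": True},
}

response = requests.post(
    LRS_URL,
    json=statement,
    auth=AUTH,
    headers={"X-Experience-API-Version": "1.0.3"},
)
response.raise_for_status()  # on success the LRS returns the stored statement ID(s)
print("Statement stored:", response.json())
```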


2. The Kirkpatrick-Phillips Model

L&D departments don’t always measure ROI and instead put too much emphasis on metrics such as cost-saving, compliance or user satisfaction. What other recognised criteria for the effectiveness of L&D are there and how can we measure them?

There are several models out there, but the most popular is the Kirkpatrick-Phillips model. Kirkpatrick’s levels run from satisfaction up to results; Jack Phillips then added the ROI cherry on top. For an in-depth explanation, it’s worth taking a look at our expert guide ‘How to Measure and Maximise Return on Investment from Learning & Development’.

While ROI is often seen as a necessity for proving the business case of L&D to leaders, we need to bear in mind that ROI tends to be applied only after the learning intervention has taken place. The drawback is that if the ROI calculation shows a higher resulting cost than overall value, it is by then too late to make changes.

Another drawback of the ROI calculation is that when a low-cost learning intervention is set against the returns of a much larger project, it can create a falsely positive impression of the intervention’s contribution.
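To see how that distortion arises, it helps to look at the calculation itself. Phillips expresses ROI as net programme benefits divided by programme costs, shown as a percentage. Here’s a minimal sketch using invented figures:

```python
def phillips_roi(benefits: float, costs: float) -> float:
    """Phillips-style ROI: net programme benefits as a percentage of costs."""
    return (benefits - costs) / costs * 100

def benefit_cost_ratio(benefits: float, costs: float) -> float:
    """Benefit-cost ratio (BCR): total benefits per unit of cost."""
    return benefits / costs

# Invented figures: a £1,000 course inside a project that
# delivered £200,000 of benefit overall.
course_cost = 1_000
project_benefit = 200_000

print(f"ROI: {phillips_roi(project_benefit, course_cost):,.0f}%")        # 19,900%
print(f"BCR: {benefit_cost_ratio(project_benefit, course_cost):,.0f}:1")  # 200:1

# The eye-watering percentage says little about what the course itself
# contributed - isolating the training's share of the benefit is the hard part.
```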

Key takeaways: Phillips himself recommends that ROI should only be calculated when the learning intervention is:

  • Targeted towards a population
  • Important to the strategy
  • Expensive in terms of cost or time
  • A long-term project
  • High profile and of interest to senior management

Providing the intervention meets these criteria, you can then follow the steps below to help maximise your returns:

  • Align learning outcomes to the business challenge
  • Make sure both the learning content and the learning experience are designed effectively
  • Make sure your assessment is appropriate to the cognition level expected from your learning outcomes

These three points are explored in greater depth in the expert guide, but the key takeaways here are alignment, relevance and continuous monitoring across the levels of learning evaluation. This enables you to measure your costs and successes against business aims and stay on course.


3. Anderson’s Value of Learning Model

Anderson’s Value of Learning Model is a more recent development, published by CIPD in 2006. It is a high-level, three-stage evaluation model which aims to address the two main business challenges: the evaluation challenge and the value challenge.

The three stages are as follows:

  • Stage 1: Determine current alignment against strategic priorities - Is the training in line with the business goals?
  • Stage 2: Use a range of methods to assess and evaluate the results of training - This includes four key measures: return on expectation, return on investment, learning function measures, and benchmark and capacity measures
  • Stage 3: Establish the most relevant approaches for your organisation - This is the final, decision-making stage.

According to a report by CIPD, 91% of high-performing organisations have L&D that is fully aligned with the strategic goals of the organisation. The Value of Learning Model therefore seeks to deliver benefit at an organisational level, rather than for specific learning interventions. By using this model, you can find the right measures to suit your organisation’s needs.

The model is not foolproof, however; it only provides insight into the effectiveness of learning programs across the organisation as a whole. For single learning interventions, you will need to look beyond the Value of Learning Model.

Key takeaways: The Value of Learning Model is ideal for ensuring the learning strategy aligns with the organisation’s overall priorities, and for providing evidence that resources are being used in the most effective way. The beauty of this model is that it allows the organisation to measure the right metrics based on its strategic aims.

However, as with the other learning evaluation models mentioned previously, it needs to be supported by other models for a more thorough evaluation.


4. Brinkerhoff’s Success Case Method

Rob Brinkerhoff defines his Success Case Method (SCM) as a “low-cost, high-yield evaluation” which draws a comparison between the most successful and the least successful cases whenever a change is implemented. To discover why a particular method worked and how it could be improved, the following questions are asked:

  • “How have you used the training?”
  • “What benefits can be attributed to the training?”
  • “What problems did you encounter?”
  • “What were the negative consequences?”
  • “What criteria did you use to decide if you were using the training correctly or incorrectly?”

This should provide you with a set of qualitative data based on collected responses.

It’s important to remember that SCM is not limited to learning interventions – it also recognises that a number of other variables could account for the results (for example, new technology or a change in processes).

Key takeaways: By identifying the most and least successful examples, SCM is an effective way to pinpoint exactly what worked and what didn’t. This enables you to get specific about what needs to change and, in the case of successes, gives you something to publicise – which in turn helps boost the profile of learning and development with your leaders.

However, as these results are based on qualitative data, we recommend that you only use SCM as a one-time insight – it isn’t enough on its own to provide you with the full picture. This is why we suggest you combine SCM with other, more tangible methods of learning evaluation for ongoing analysis.


5. The Learning Transfer Evaluation Model (LTEM)

Will Thalheimer (2018) devised the Learning Transfer Evaluation Model (LTEM) as an alternative to Kirkpatrick's model to help organisations and learning professionals determine the effectiveness of their evaluation methods. In his accompanying report, Thalheimer wrote that his model “provides more appropriate guideposts, enabling us, as learning professionals, to create virtuous cycles of continuous improvement.”

LTEM comprises eight levels: the first six focus on learning itself, while the top two demonstrate where learning becomes application and integration at work.

The eight levels are as follows:

  • Level 1: Attendance - The learner takes part in and completes the learning activity. Thalheimer argues that this is inadequate, as participation does not confirm that anything has been learned. Despite this, many organisations still count only attendance in their evaluation process.
  • Level 2: Activity - The learner takes part in the learning activity, which is broken down into three sub-levels: attention, interest and participation. However, even if a learner pays attention, shows interest and actively participates, there is no clear evidence that anything has actually been learned.
  • Level 3: Learner perceptions - The learner shares their perspective on the learning experience either formally (e.g. via ‘happy sheets’, surveys or feedback forms) or informally (e.g. a chat or word of mouth). However, the fact that a learner enjoyed the learning is not valid evidence that it worked. Surveying for effectiveness may provide some useful data, but ideally we should also be measuring what has actually been remembered.
  • Level 4: Knowledge - The learner is tested on how well they can recall knowledge and facts. This might be tested immediately after the learning has taken place (recitation), or a few days later (retention). However, knowledge on its own is not sufficient proof that a learner can perform better so we need to evaluate beyond this.
  • Level 5: Decision-making competence - This level is about testing the learner’s decision-making ability. This might involve asking the learner to engage with realistic scenarios and to make decisions based on underlying knowledge. As knowledge can be forgotten, this should ideally be done three or more days after the learning activity to test long-term decision-making ability.
  • Level 6: Task competence - At this level, the learner combines decision-making with taking action. As with the previous level, measuring task competence too soon after learning is not sufficient proof that they will remain task competent long-term.
  • Level 7: Transfer - At this stage, the learner demonstrates actual competence in the workplace, either through ‘supported transfer’ (e.g. a line manager encourages the learner to apply the learning) or ‘complete transfer’ (where the learner demonstrates a new behaviour independently). The question is whether the learner can better perform a certain task in the workplace as a result of the learning activity.
  • Level 8: Effects of the transfer - This level looks at the effect of the learning transfer on the learners, their colleagues, their families, their friends, the organisation, the community, society and the environment. This is about learning as a means to an end (e.g. training a manager in leadership skills so that they perform better as a manager and their team achieves better results). At this stage, we need to be considering both the benefits and harms of a learning intervention. For example, as a benefit, can a reduction in product returns be attributed to better quality-checking processes following training? On the other hand, has it resulted in time lost and reduced productivity?

Key takeaways: The eight-tier model is useful for identifying at what level you are currently evaluating your learning interventions and how to advance beyond this for better insights.

LTEM is by no means an easy method for evaluating your learning interventions, but its thorough nature offers a more robust method with better-quality data. Make sure you dedicate enough time to perform a thorough evaluation if you’re going to use this method – it’ll be worth the effort.

In summary…

Evaluation is critical to your learning interventions and it’s all too easy to start panicking about which learning evaluation model is best. Should you be keeping it simple with Kirkpatrick? Is Brinkerhoff’s qualitative method the way forward? Or is LTEM the solution you’ve been looking for?

At Thinqi, we encourage L&D practitioners to explore a customised approach to learning evaluation and urge you to experiment with different methods to find one that resonates. By taking the approach most useful and relevant to your organisation and its strategic aims, you can present a more robust set of evidence to your business leaders – and equip yourself with an essential tool for raising the business profile of L&D.

--

Get the full story behind learner progress in your organisation with Thinqi's cutting-edge reporting tools. Contact us to arrange a free demo.

Book your Thinqi demo today

Ready to discover what Thinqi can do for you? Our friendly learning experts are ready to help you solve your L&D challenges for good.

Book a demo