Kirkpatrick’s Model was developed in the 1950s by Donald Kirkpatrick and became highly influential in training. Its combination of simplicity and usefulness lets it apply to a wide range of situations, and despite its age, it remains relevant in the digital era.
Kirkpatrick’s Model – or modifications of it – has become a standard evaluation method in the elearning industry. One can see why: the model takes into account the opinions of the users, the goals of the trainers, and the desires of the stakeholders, all in one convenient, easy-to-remember framework.
Although there’s some misuse of Kirkpatrick’s model, overall, it’s a very useful reminder of all the factors we should consider when creating a sustainable evaluation process for a particular program.
When attempting to implement the model in an online training environment, it may be difficult to know which tools one should use to extract data and evaluate program performance. Most learning management systems have a variety of built-in tools that can help you implement data collection for each Kirkpatrick level – and in some cases, even help automate the process.
The Four Kirkpatrick Levels
The Kirkpatrick Model is rooted in the use of four interconnected evaluation levels which theoretically provide a complete overview of the effects of training. Each level has a specific area it focuses on.
Let’s briefly go over them before diving into the ways you can use your LMS to simplify implementing each level of evaluation.
Level 1 – Reaction
This level focuses on learner reaction. How did they feel about the course? Were they engaged? Did they think the information was relevant?
Level 2 – Learning
This level asks the evaluator to measure how much learning and knowledge retention occurred during the course.
Level 3 – Behavior
This level takes place a pre-set period of time after course completion to examine whether the learners are implementing the behaviors and skills that the training program taught them.
Level 4 – Results
The results level seeks to understand the impact of the training on the business. It looks at softer metrics like customer satisfaction and also aims to measure financial outcomes.
Each of the levels has its own challenges, but they can be overcome with some preparation and consideration. For example, reaction is one of the easiest areas to measure, so learning and development professionals sometimes fall into the trap of focusing most of their energy on level one. But the other three levels are just as important (sometimes more so) and can be a little trickier to address.
To make sure you’re spending time, money, and energy as strategically as possible, be sure to plan your evaluation strategy as part of the early stages of course creation.
How to Implement and Automate the Kirkpatrick Levels in an LMS
Level 1 – Reaction
A learner’s emotional reaction to a course can include whether they found the training material relevant and valuable to their work lives, if they were engaged throughout the training, and if there was anything they would have changed.
There are many ways to measure this via an LMS:
- LMS metrics like “course completion rate”, “percentage of course completed”, “time to completion”, and more
- Post-training surveys sent to each individual user
- Course rating features so users can leave optional public course feedback and ratings. This is especially useful for ecommerce.
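Once these raw numbers come out of the LMS (most systems offer a CSV report export), summarizing them takes only a few lines. Here is a minimal Python sketch; the export format and column names are illustrative assumptions, not from any particular LMS:

```python
# Sketch: summarizing Level 1 (Reaction) metrics from a hypothetical
# LMS report export. Column names are illustrative, not from a real LMS.
import csv
from io import StringIO

# Example export: one row per enrollment (rating may be blank)
lms_export = """user,completed,percent_finished,minutes_spent,rating
alice,yes,100,42,5
bob,no,60,20,
carol,yes,100,55,4
"""

rows = list(csv.DictReader(StringIO(lms_export)))

# Share of enrollments marked complete
completion_rate = sum(r["completed"] == "yes" for r in rows) / len(rows)
# Average time spent across all enrollments
avg_minutes = sum(int(r["minutes_spent"]) for r in rows) / len(rows)
# Average rating, ignoring learners who left none
ratings = [int(r["rating"]) for r in rows if r["rating"]]
avg_rating = sum(ratings) / len(ratings)

print(f"Completion rate: {completion_rate:.0%}")   # 67%
print(f"Average time spent: {avg_minutes:.0f} min")
print(f"Average rating: {avg_rating:.1f}/5")
```

In practice you would point the reader at a real report download rather than an inline string, but the same summary logic applies.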
You can automate these measurements with the following methods:
- Depending on the LMS, metrics can be collected via either standard or customizable reports. These reports can be generated automatically on a regular basis to keep evaluation ongoing without lifting a finger. They can even be automatically sent to the relevant managers and/or stakeholders by email. Many clients find that a great reporting engine is a game changer and becomes one of their favorite and most-used features. It makes tracking learner reaction much simpler.
- Post-training surveys, once set up, can be automatically distributed at the end of the course to ask learners about their experience. These surveys can even be marked as mandatory for training completion status.
- Automatic emails can remind users that they can leave feedback after the end of a course.
Level 2 – Learning
Learning can be a tricky thing to measure. How do you know how much a course influenced the learner’s knowledge level? Will they retain that information over time? Are they confident in their new skills?
This section also covers the learner’s commitment to implementing their new skills. Training must persuade learners that their new skills are worth putting into practice.
LMSs have a lot of features in place to help you evaluate learning comprehension:
- Pre- and post-training assessments, which can take a variety of forms – whether it’s a multiple-choice quiz, a skills checklist, or a more complex test with different question types. They can be taken alongside the reaction survey mentioned in the previous section.
- Peer observations by supervisors, managers, or team leaders that are then uploaded into the LMS.
Automation of these features is common in the best LMSs:
- Pre- and post-training assessments can be automatically assigned to learners to compare their knowledge before and after the course. Using both, rather than a single test at the end, lets you see the true impact of the course. Best of all, they can be scheduled around live or blended training events as well as online training, so you can keep records for every kind of training in one place.
- You can use the LMS’s built-in hierarchy to automatically email evaluation materials to personnel at certain levels of seniority at regular intervals. For example, you can automatically assign all managers to complete evaluations every six months based on course or training program completions. Once it’s all set up, you can be confident the evaluations are happening without having to worry about it yourself.
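To show what the pre/post comparison looks like once the scores are exported, here is a small Python sketch. The scores, names, and the normalized-gain calculation are illustrative assumptions, not features of any specific LMS:

```python
# Sketch: measuring learning gain from pre- and post-training assessment
# scores (Level 2). All data here is invented for illustration.
pre_scores  = {"alice": 55, "bob": 40, "carol": 70}
post_scores = {"alice": 85, "bob": 75, "carol": 90}

# Raw gain per learner: post minus pre
gains = {u: post_scores[u] - pre_scores[u] for u in pre_scores}
avg_gain = sum(gains.values()) / len(gains)

# Normalized gain: share of the *possible* improvement actually achieved,
# which keeps learners with high pre-scores from looking like weak learners.
norm_gains = {
    u: (post_scores[u] - pre_scores[u]) / (100 - pre_scores[u])
    for u in pre_scores
}

print(f"Average raw gain: {avg_gain:.1f} points")
for u, g in sorted(norm_gains.items()):
    print(f"{u}: {g:.0%} of possible improvement")
```

Carol gains fewer raw points than Bob, but her normalized gain is comparable because she started closer to the ceiling, which is exactly why a pre-test matters.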
Level 3 – Behavior
Behavior involves more environmental observations out in the workplace. This seems difficult to measure in a digital LMS, but the best LMSs have found a way.
- Peer observations can also be used to evaluate behavior. You can include questions in the evaluation to make sure certain behaviors are being observed in the learner’s everyday work life.
- Job checklists are a great tool for this – the manager can easily check the boxes and record the behavior in the learner’s profile. This feature also makes it easy to record blended training initiatives as well as online training.
- As with the assessments, pre- and post-training checklists are a good option for tracking the actual difference the training makes.
All of these options can be automated via scheduled emails, ensuring the observations get recorded with as little effort on the admin’s part as possible.
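As a sketch of how recorded checklist observations might be compared once they sit in a learner’s profile, assuming illustrative checklist items and a simple observed/not-observed flag:

```python
# Sketch: comparing pre- and post-training job checklists (Level 3) to see
# which target behaviors a learner picked up. Items are invented examples.
pre_checklist = {
    "greets customer by name": False,
    "follows escalation script": False,
    "logs call outcome": True,
}
post_checklist = {
    "greets customer by name": True,
    "follows escalation script": True,
    "logs call outcome": True,
}

# Behaviors observed after training that were absent before it
new_behaviors = [
    item for item, observed in post_checklist.items()
    if observed and not pre_checklist[item]
]
# Overall share of target behaviors observed post-training
adoption_rate = sum(post_checklist.values()) / len(post_checklist)

print("Behaviors gained:", new_behaviors)
print(f"Observed behaviors after training: {adoption_rate:.0%}")
```

Note that "logs call outcome" is excluded from the gained list because it was already present pre-training: the pre-checklist is what lets you credit the training only for genuinely new behavior.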
Level 4 – Results
Everyone loves a great ROI, but this level is often considered the most difficult to measure. Linking business performance indicators to behavior is not always simple or exact.
We recommend deciding which business metrics you want to measure before you implement the training program, so that as you put it together you can organize it to correlate those metrics with training data. Spending extra time on pre-training assessments and follow-up reports is the best way to demonstrate training value.
Results refers to short-term effects, but results should be evaluated repeatedly over time to see whether the new learning affects the business in the long term.
This area of the Kirkpatrick Model can use checklists and surveys to create reports on the human side of the equation, which can then be combined with business data in the search for correlations.
Alternatives to the Kirkpatrick Model
Kirkpatrick isn’t the only evaluation model out there, though it is the most common. The Kirkpatrick Model gets critiqued for its age, and some think that there should be a more updated version. Others believe stakeholders will never care about any levels other than “results”. For whatever reason, however, few people have been able to fully address those complaints and devise a model that wildly differs from Kirkpatrick’s. Here are some of your other options:
- Phillips’ Model: This model remains very similar to Kirkpatrick’s, but it makes ROI a fifth level instead of folding it into level four. This puts more focus on the financial impact of what’s being evaluated.
- Kaufman’s Model: Like Kirkpatrick’s, this model breaks evaluation down into discrete pieces, but it additionally asks the evaluator to consider societal contributions. Its levels are: input, process, acquisition, application, organizational results, and societal consequences.
- Brinkerhoff’s Success Case Model: This model functions mostly as a supplement to Kirkpatrick’s Results level. It essentially asks you to identify business goals and expected results, survey a sample of all participants, and conduct in-depth interviews with both successful and unsuccessful participants. Then, the evaluator documents and analyzes what she finds.
Each of the Kirkpatrick Model levels has an impact on the business’s bottom line, whether it’s in worker retention and satisfaction, demonstrated profit, or a hard-to-pinpoint positive impact on company culture and morale.
Download the Four Levels of Evaluation Planner
We made an easy-to-use template based on this blog post to help you plan what you need to evaluate each Kirkpatrick Level for your course, including LMS tools, metrics to measure, and questions to ask in assessment and surveys.