
Donald Kirkpatrick developed his model in the 1950s, and it became highly influential in training. Its combination of simplicity and usefulness lets it apply to a wide range of situations, and despite its age it continues to be relevant in the digital era.
Kirkpatrick’s Model – or modifications of it – has become a standard method of evaluation in the elearning industry. One can see why – it takes into account the opinions of the users, the goals of the trainers, and the desires of the stakeholders, all in one convenient, easy-to-remember framework.
Although there’s some misuse of Kirkpatrick’s model, overall, it’s a very useful reminder of all the factors we should consider when creating a sustainable evaluation process for a particular program.
When attempting to implement the model in an online training environment, it may be difficult to know which tools one should use to extract data and evaluate program performance. Most learning management systems have a variety of built-in tools that can help you implement data collection for each Kirkpatrick level – and in some cases, even help automate the process.
The Four Kirkpatrick Levels
The Kirkpatrick Model is built on four interconnected evaluation levels which, in theory, provide a complete overview of the effects of training. Each level focuses on a specific area.
Let’s briefly go over them before diving into the ways that you can use your LMS to simplify the process of implementing each level of evaluation.
Level 1 – Reaction
This level focuses on learner reaction. How did they feel about the course? Were they engaged? Did they think the information was relevant?
Level 2 – Learning
This level asks the evaluator to measure how much learning and knowledge retention occurred during the course.
Level 3 – Behavior
This level is evaluated a pre-set period of time after course completion and examines whether the learners are applying the behaviors and skills that the training program taught them.
Level 4 – Results
The results level examines the impact of the training on the business. It looks at broader metrics like customer satisfaction and also aims to measure financial outcomes.

Each level has its own challenges, but they can be overcome with some preparedness and consideration. For example, reaction is one of the easiest areas to measure, so learning and development professionals sometimes fall into the trap of focusing most of their energy on level one. But there are three other levels that are just as important (sometimes more so) and can be a little trickier to address.
To make sure you’re spending time, money, and energy as strategically as possible, be sure to plan your evaluation strategy as part of the early stages of course creation.
How to Implement and Automate the Kirkpatrick Levels in an LMS
Level 1 – Reaction
A learner’s emotional reaction to a course covers whether they found the training material relevant and valuable to their work lives, whether they were engaged throughout the training, and whether there was anything they would have changed.
There are many ways to measure this via an LMS:
- LMS metrics like course completion rate, percentage of the course finished, time taken to finish, and more (a quick reporting sketch follows this list)
- Post-training surveys sent to each individual user
- Course rating features so users can leave optional public course feedback and ratings. This is especially useful for ecommerce.
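To make this concrete, here is a minimal sketch of how those reaction metrics could be pulled together from an LMS export. The file name and column names are placeholders, not features of any particular LMS – substitute whatever your platform’s reports or exports actually provide.

```python
# A minimal sketch, assuming your LMS can export per-learner course data as a CSV.
# The file name and columns ("learner_id", "completed", "percent_finished",
# "minutes_spent", "rating") are placeholder assumptions.
import pandas as pd

export = pd.read_csv("course_reaction_export.csv")

finished = export[export["completed"] == 1]          # assumes 1 = finished, 0 = not

reaction_summary = {
    "completion_rate": (export["completed"] == 1).mean(),
    "avg_percent_finished": export["percent_finished"].mean(),
    "avg_minutes_to_finish": finished["minutes_spent"].mean(),
    "avg_course_rating": export["rating"].mean(),    # optional public ratings
}

print(reaction_summary)
```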
You can automate these measurements with the following methods:
- Depending on the LMS, metrics can be collected via either standard or customizable reports. These reports can be generated automatically on a regular basis to keep evaluation ongoing without lifting a finger, and they can even be emailed automatically to the relevant managers and/or stakeholders. Many clients find that a great reporting engine is a game changer and becomes one of their favorite and most-used features; it makes tracking learner reaction so much simpler. (A sketch of scripting such a schedule via an API follows this list.)
- Post-training survey options, once set up, can be automatically distributed at the end of the course to ask learners about their experience. These surveys can even be marked as mandatory for training completion status.
- Automatic emails can remind users that they can leave feedback after the end of a course.
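If your LMS exposes an API, the scheduling itself can often be scripted. The sketch below is purely illustrative: the base URL, endpoint, authentication scheme, and payload fields are hypothetical stand-ins, so check your own platform’s API documentation for the real equivalents.

```python
# A hypothetical example of scheduling a weekly Level 1 report via an LMS REST API.
# Every URL, header, and payload field here is an assumption -- replace them with
# whatever your LMS's API documentation actually specifies.
import requests

API_BASE = "https://your-lms.example.com/api/v1"      # hypothetical base URL
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}    # hypothetical auth scheme

payload = {
    "report_type": "course_completion",               # hypothetical report identifier
    "course_id": 123,
    "schedule": "weekly",
    "recipients": ["training.manager@example.com"],   # who gets the emailed report
}

response = requests.post(f"{API_BASE}/reports/schedules", json=payload, headers=HEADERS)
response.raise_for_status()
print("Report schedule created:", response.json())
```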
Level 2 – Learning
Learning can be a tricky thing to measure. How do you know how much a course influenced the learner’s knowledge level? Will they retain that information over time? Are they confident in their new skills?
This level also covers the learner’s commitment to putting their new skills into practice; the training must convince learners that those skills will benefit them.
LMSs have a lot of features in place to help you evaluate learning comprehension:
- Pre- and post-training assessments, which can take a variety of forms – a multiple-choice quiz, a skills checklist, or a more complex test with different question types. They can be taken alongside the reaction survey mentioned in the previous section.
- Peer observations by supervisors, managers, or team leaders that are then uploaded into the LMS.
Automation of these features is common in the best LMSs:
- Pre- and post-training assessments can be automatically assigned to learners so you can compare their knowledge before and after the course. Using both, rather than only a test at the end, lets you see the true impact of the course (a quick score-comparison sketch follows this list). Best of all, they can be scheduled around live or blended training events as well as online training, so you can keep records for every kind of training in one place.
- You can use the LMS’s built-in hierarchy to automatically email evaluation materials to personnel at certain levels of seniority at regular intervals. For example, you can automatically assign all managers to complete evaluations every six months based on course or training program completions. Once it’s all set up, you can be confident the evaluations are happening without having to worry about it yourself.
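As a rough illustration of the pre/post comparison, the sketch below assumes the LMS can export pre- and post-assessment scores keyed by learner ID; the file and column names are placeholders.

```python
# A minimal sketch comparing pre- and post-assessment scores exported from an LMS.
# File and column names ("learner_id", "score") are placeholder assumptions.
import pandas as pd

pre = pd.read_csv("pre_assessment_scores.csv")    # columns: learner_id, score
post = pd.read_csv("post_assessment_scores.csv")  # columns: learner_id, score

scores = pre.merge(post, on="learner_id", suffixes=("_pre", "_post"))
scores["gain"] = scores["score_post"] - scores["score_pre"]

print("Average pre-training score: ", round(scores["score_pre"].mean(), 1))
print("Average post-training score:", round(scores["score_post"].mean(), 1))
print("Average learning gain:      ", round(scores["gain"].mean(), 1))
print("Share of learners improving:", round((scores["gain"] > 0).mean(), 2))
```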
Level 3 – Behavior
Behavior relies on observing learners out in the workplace. That seems difficult to measure in a digital LMS, but the best LMSs have found ways to support it:
- Peer observations can also be used to evaluate behavior. You can include questions in the evaluation to make sure certain behaviors are being observed in the learner’s everyday work life.
- Job checklists are a great tool for this – the manager can easily check the boxes and record the behavior in the learner’s profile. This feature also makes it easy to record blended training initiatives as well as online training.
- As with the assessments, pre- and post-training checklists are a good way to track the actual difference the training is making.
All of these options can be automated via scheduled emails, which help make sure the observations get recorded with as little effort on the admin’s part as possible. The sketch below shows one simple way to summarize the resulting checklist data.
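This example turns observation-checklist exports into a simple “behavior adoption rate” before and after training. The export format and column names are assumptions, not features of any specific LMS.

```python
# A minimal sketch, assuming observation checklists are exported with one row per
# learner per checklist item. Columns ("learner_id", "phase", "observed") are
# placeholders; "observed" is assumed to be 1/0 and "phase" either "pre" or "post".
import pandas as pd

checklists = pd.read_csv("behavior_checklists.csv")

# Average share of checklist behaviors observed per learner, before vs. after training.
adoption = (
    checklists
    .groupby(["phase", "learner_id"])["observed"].mean()
    .groupby(level="phase").mean()
)
print(adoption)
```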
Level 4 – Results
Everyone loves a great ROI, but this level is often considered the most difficult to measure. Linking business performance indicators to behavior is not always simple or exact.
We recommend deciding which business metrics you wish to measure before you implement the training program, so that you can organize the program in a way that lets you correlate those metrics with your training data. Spending extra time on pre-training assessments and follow-up reports is the best way to demonstrate training value.
Results often refers to short-term effects, but they should be evaluated repeatedly over time to see whether the new learning affects the business in the long term.
At this level, you can use checklists and surveys to report on the human side of the equation, then combine that data with your business data and look for correlations, as in the sketch that follows.
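Here is one simplified way that correlation hunt could look once both data sets are in hand. It assumes you can join training results and business metrics on a shared key such as an employee ID; all file and column names are placeholders, and any correlation you find is evidence of a relationship, not proof of causation.

```python
# A minimal sketch joining LMS training data with business metrics to look for
# correlations. All file and column names are placeholder assumptions.
import pandas as pd

training = pd.read_csv("training_results.csv")   # columns: employee_id, learning_gain
business = pd.read_csv("business_metrics.csv")   # columns: employee_id, csat_delta, sales_delta

merged = training.merge(business, on="employee_id")

# Correlation matrix between learning gains and business outcomes.
print(merged[["learning_gain", "csat_delta", "sales_delta"]].corr())
```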

Alternatives to the Kirkpatrick Model
Kirkpatrick isn’t the only evaluation model out there, though it is the most common. The Kirkpatrick Model is critiqued for its age, and some think it needs a more updated version. Others believe stakeholders will never care about any levels other than “results”. Whatever the reason, however, few people have been able to fully address those complaints and devise a model that differs dramatically from Kirkpatrick’s. Here are some of your other options:
- Phillips’ Model: This model remains very similar to Kirkpatrick’s, but it makes ROI a fifth level instead of including it in level four. This puts a bit more focus on the financial impact of what’s being evaluated.
- Kaufman’s Model: This model, like Kirkpatrick’s, breaks evaluation down into distinct pieces. However, it additionally asks the evaluator to look at societal contributions. The elements it examines are: input, process, acquisition, application, organizational results, and societal consequences.
- Brinkerhoff’s Success Case Model: This model functions mostly as a supplement to Kirkpatrick’s Results level. It essentially asks you to identify business goals and expected results, survey a sample of all participants, and conduct in-depth interviews with both successful and unsuccessful participants. Then, the evaluator documents and analyzes what she finds.
- NEW: Thalheimer’s Learning-Transfer Evaluation Model (LTEM): This eight-tier model aims to better align with the science of learning and includes a comprehensive set of requirements that overcome important issues in learning practice.
- NEW: Stufflebeam’s Context-Input-Process-Product (CIPP) Model: This model is popular nowadays because it can be used for many different kinds of evaluations, such as projects, personnel, products, organizations, and more. It helps identify solutions to project issues via its emphasis on “learning-by-doing.”
Conclusion
Each of the Kirkpatrick Model levels has an impact on the business’s bottom line, whether it’s in worker retention and satisfaction, demonstrated profit, or a hard-to-pinpoint positive impact on company culture and morale.
Comments
Very nice and helpful article. However, could anyone throw light on how to measure training effectiveness (L3 – Behavior) for behavioral aspects such as communication, teamwork, etc.?
Great question, Rohan!
Some great evaluation methods for this step could be an on-the-job training evaluation checklist completed by the trainee’s supervisor(s) and/or peers, and a post-training survey given to the trainee that can be compared to a pre-training survey. You could also give the trainee their own OJT checklist to see how their self-evaluation compares to the supervisor’s. Hope that helps!
In addition to observation checklists that help an observer notice aspects of the candidate’s performance, you can also have candidates complete self-assessments through reflective activities. For example, they can journal or blog about their experiences, their perceived levels of readiness to perform different roles, and their areas of competence and below-competence. You can make this entirely free-form, or you can give candidates templates and prompts to focus their reflection on specific areas of practice. The journals can be private, but candidates can then use them as part of a performance-management process to support continuous improvement and continuous learning.
Bill, that’s a great method for self-assessing readiness and giving others insight into whether a candidate has learned or improved what they need to, or might need additional training. Thanks for commenting!
Hi Rohan,
How to measure the effectiveness of training when it relates to things like communication and teamwork is a common and very important question, so I’m glad you asked.
The key is to make sure you have identified a small set of critical behaviors for the skills you are teaching. Critical behaviors need to be observable and measurable, so you know with certainty whether they are being performed. For example, if you are teaching supervisors communication skills as part of a leadership development curriculum, you need to define what “good communication” would look like for them in common scenarios. Perhaps it could be leading regular team meetings following a standardized agenda. Maybe it’s having one-on-one conversations with team members who are not producing the required results.
Not only does defining critical behaviors make evaluating Kirkpatrick Level 3 possible, it also improves your training, because you focus it on what you actually want people to do on the job.
To learn more and get access to over 100 free resources, go to https://kirkpatrickpartners.com/Resources
Great insight, Wendy! It’s so true that a general goal such as “communication skills” won’t be the same for each company and industry.
Thanks so much!