Find the spotlight for L&D measurements: Start with business goals
Previous articles in this series explored the streetlight effect through an old story about a drunkard looking for his keys under the streetlight, not where he lost them. L&D measurement often struggles with its own streetlight effects.
We measure and evaluate learning for a variety of reasons: to continuously improve a program, to prove compliance, and to measure effectiveness (including ROI). Before you start measuring, know your reasons!
So how do you escape the streetlight spell in L&D? The first step is to change where you start the search. Instead of designing the training and then asking, “Okay, how do we measure its effect?”, flip the script: start from the end. Identify the business outcomes you are trying to achieve and let them drive both training design and measurement planning.
Build your data strategy backwards
Starting with business goals and working backwards may sound obvious to some, but it represents a huge change. Surprisingly, fewer than 4% of companies say they design their learning programs around a specific, defined metric [1]. The remaining 96%? Many create programs based on perceived needs and requests, deliver the training, and think about evaluation (if at all) only afterwards. By not baking measurement into the design stage, the L&D team falls back on simple post-mortem metrics, because “there is no way to measure effort beyond just the basics” [1].
Starting with business goals means clarifying what success looks like from the organization’s perspective. For example, if your business aims to reduce safety incidents by 20%, that’s your north star. With it in focus, you can work backwards:
Who can reduce safety incidents, directly and indirectly? (Since we can’t serve everyone, we focus where we can have the most impact and choose our target audience.) What behaviors must change to reach a 20% reduction? Which employees (the audience) should adopt these behaviors? What is currently hindering them (skill gaps, knowledge, motivation, process issues)? Only then do we determine whether training is part of the solution and, if so, design interventions that target those behaviors. Importantly, identify key performance indicators (KPIs) in advance; in this case, track safety incident rates. The measurement approach involves collecting baseline safety data and comparing it after training (possibly against a control group or trend line) to see if the needle has moved. You can also plan field observations or assessments to see whether employees follow the new safety procedures (direct behavioral measurement).
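The baseline-versus-after comparison described above can be sketched in a few lines. This is only an illustration of the arithmetic: the incident rates, units, and target value below are hypothetical, not from the article.

```python
def percent_reduction(baseline: float, after: float) -> float:
    """Percent reduction in a KPI from its baseline to its post-training value."""
    return (baseline - after) / baseline * 100

# Hypothetical figures for the safety example:
baseline_rate = 5.0   # incidents per 100,000 work hours before the program
post_rate = 3.8       # same metric, measured after the program
target = 20.0         # the business goal: a 20% reduction

reduction = percent_reduction(baseline_rate, post_rate)
print(f"Reduction: {reduction:.0f}% (target {target:.0f}%)")
# prints "Reduction: 24% (target 20%)"
```

In practice the post-training rate would be compared against a control group or trend line as well, since other factors can move the KPI.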
This approach is sometimes referred to as “backward design.” It ensures that your training is not a shot in the dark. In fact, it may make it clear that training is not the right solution at all. Perhaps the underlying cause of the problem is an incentive system that rewards broken processes, a lack of proper tools, or the wrong actions. In such cases, the solution could be something other than traditional training (e.g., a process change or performance support). By starting with business goals and a thorough needs analysis, L&D can avoid wasting effort on training programs that shed light in the wrong place.
Collaborating with the business
A recent study from the Association for Talent Development (ATD) shows that only 43% of talent development professionals align their learning goals with business goals [2].
Measurement is much easier when L&D designs alongside the business. Set clear targets (changes to KPIs or behaviors) and collect data against those targets. You’re not searching aimlessly: even if it’s dark at first, you have a map pointing to the park where the key was lost.
Over time, this practice also builds credibility. Rather than hearing whether employees participated in a course or viewed resources, business leaders see L&D reporting on the outcomes they care about (e.g., sales growth, quality improvement, reduced turnover). And if training does not achieve the desired outcome, that is not a reason to hide behind vanity metrics but an opportunity to learn and adjust.
Measuring is not only about proving success, but also about learning what works and what doesn’t. If L&D focuses on what happens after a learning event to ensure the desired outcome, it moves from being a cost center under the streetlight to a strategic partner that delivers data-driven insights businesses can use to make decisions.
Frameworks and models to guide L&D measurements: Kirkpatrick, ROI, and LTEM
Luckily, L&D professionals don’t have to navigate completely in the dark. There are established models and frameworks for training evaluation that act like signposts (or a different kind of lantern) to guide your measurement efforts [3]. Three of the main ones are Kirkpatrick’s four levels, the Phillips ROI model, and the Learning Transfer Evaluation Model (LTEM). Each provides a lens on what to measure, and together they push us beyond simple metrics.
Kirkpatrick’s four levels of evaluation are the best known and well documented, so I won’t spend time on them here. The challenge I’ve seen with the model is its actual implementation in workplace learning: L&D starts with Level 1 (Reaction) evaluations and often gets trapped there. Even when teams reach Level 2 (Learning), the measurement is still about short-term recall (or worse, memorization during the course).
Jack Phillips, through the ROI Institute, added Level 5 (ROI) on top of Kirkpatrick’s model. ROI (Return on Investment) essentially asks: was the training worth it financially? The Phillips model calculates the financial benefits of training, compares them with the costs, and produces an ROI percentage or ratio [4]. For example, if your leadership development program costs $100,000 and the resulting productivity or sales improvements amount to an estimated $300,000, your ROI would be 200%. This appeals to executives because it speaks the language of finance.
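The arithmetic behind that example can be written out explicitly. This is a minimal sketch of the Phillips formula (net benefits divided by costs), using the hypothetical dollar figures from the paragraph above.

```python
def roi_percent(program_cost: float, monetary_benefits: float) -> float:
    """Phillips ROI: net program benefits divided by program costs, as a percentage."""
    net_benefits = monetary_benefits - program_cost
    return net_benefits / program_cost * 100

# Leadership program from the example: $100,000 cost, $300,000 estimated benefits
print(roi_percent(100_000, 300_000))  # → 200.0
```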
Calculating ROI for every project can be difficult and sometimes controversial. It requires assumptions to isolate the effects of training and express them in dollar terms. Phillips advocates techniques such as converting improvement metrics into money and asking participants to estimate how much of their improvement came from the training (then discounting for optimism). The most important point for me is the emphasis on outcomes, not just activity. The ROI Institute now maintains TDRP as a standard set of measurement definitions. Check it out [5]!
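The participant-estimate discounting just described can be sketched as a simple adjustment: scale a self-reported improvement by the share the participant attributes to training, then by their confidence in that estimate. The dollar figure, attribution, and confidence values below are hypothetical illustrations.

```python
def adjusted_benefit(estimated_value: float,
                     attribution_pct: float,
                     confidence_pct: float) -> float:
    """Discount a self-reported improvement estimate by the share attributed
    to training and by the participant's confidence in the estimate."""
    return estimated_value * (attribution_pct / 100) * (confidence_pct / 100)

# A participant reports $50,000 of improvement, credits 40% of it to the
# training, and is 70% confident in that estimate.
print(adjusted_benefit(50_000, 40, 70))
```

Summing these conservative, adjusted estimates across participants gives the benefit figure that feeds the ROI calculation.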
Both Kirkpatrick and Phillips highlight an important point: a training evaluation is not complete until we examine the impact on the job and the organization. Or, to put it another way: did behavior change, and did it matter to the business?
Learning Transfer Evaluation Model
Over the past five years, I have implemented a newer model, the Learning Transfer Evaluation Model (LTEM) [6]. LTEM was developed by Will Thalheimer in response to shortcomings he saw in common measurement practices. It is an eight-tier model that explicitly focuses on learning transfer: are people actually using what they learn?
The lowest tiers of LTEM (Tiers 1 and 2) cover measures such as attendance and participation. Essentially, did people show up and complete the learning activities? For example, Tier 2 measures engagement (defined as sustained focus on a task) in three components: physical (what learners do), emotional (how they feel and connect), and cognitive (whether they challenge themselves and reflect). Tier 3 is learner perception, similar to Kirkpatrick Level 1, but LTEM introduces questions built around performance-focused behavioral drivers (like motivation, opportunity, ability to apply on the job, and outcomes).
Tiers 4–6 examine what people learned in more substantive ways, from simple retention of facts to skill demonstrations in realistic scenarios (task performance). Yet these are often measured in the training context (quizzes, simulations). That matters, but it is not the real world yet. Tier 7 is where the magic happens: it measures learning transfer. Are learners performing correctly on the job [7]?
Changes in behavior do not happen by chance
LTEM Tier 7, like Kirkpatrick Level 3, addresses changes in job behavior, but it focuses on directly assessing performance in the work environment. Finally, Tier 8 looks at the impact of that improved performance on wider outcomes: essentially organizational impact, similar to Kirkpatrick Level 4 (and, beyond that, the impact on colleagues and customers).
One of the reasons I chose LTEM is its nuanced view of, and messaging about, what matters. It highlights the fact that the value of training comes from what happens after training. Combined with the backward design approach described above, the model provides practical guidance for every L&D role to make a difference. The next article explains this in more detail.
Isolating the effects of training in L&D measurement
One of the biggest barriers mentioned in the ATD survey is that L&D professionals find it too difficult to isolate the effects of training. I agree: they’re not wrong. And this is why I highly recommend designing the solution backwards, not just the measurement. Start from business goals and desired payoffs (or other effects indirectly related to key metrics), then derive the supporting performance goals, the audience that can achieve them, and the required behaviors. If there is no change in behavior, there is no effect.
Regardless of the measurement model or framework used, chaining backwards from business goals makes it much easier to attribute the impact of learning. But what about the lack of time, resources, and expertise to do this at scale? The next and final article explores how AI can help and how the L&D role can benefit.
References:
[1] Measure the impact of learning
[2] ATD Survey: Organizations struggle to measure the impact of training
[3] Models and Frameworks: Understand how each works
[4] ROI Methodology
[5] What role does TDRP play in the measurement space?
[6] Beyond Kirkpatrick: Three Approaches to Evaluating E-Learning
[7] Measuring Learning: Ask the Right Questions