Are you making a difference? Measure what’s important
You may know this version of the old story:
It’s midnight on a quiet street. A slightly tipsy, lonely figure crouches under a streetlight, scanning the sidewalk. A passerby asks what he is doing. “I’m looking for my door key,” he sighs. “Are you sure you lost the key here?” the passerby asks. The man shakes his head: “No, I lost it in the park.” Confused, the helper responds, “Then why are you searching here?” The man gestures at the pool of light cast by the lamp. “Because the lighting is much better here.”
You might laugh at the absurdity, but this classic streetlight effect (also known as the drunkard’s search) illustrates a common human bias [1].
The streetlight effect in L&D
In the world of learning and development (L&D), we often play out our own version of this story. Faced with the difficult question, “Did we really make a difference?”, many L&D professionals find themselves grasping at the brightly lit areas of data: learning management system (LMS) reports, course completion rates, and smile-sheet surveys. Not because that’s where the impact is, but simply because those metrics are easily at hand.
The real “key,” the evidence of performance impact, may lie in the darkness, scattered across job performance dashboards, sales figures, or customer satisfaction scores, but those areas are hard to illuminate. So we stay under the proverbial lights and generate reports on attendance, post-training quiz scores, and the like. It’s safe and satisfying. This is the streetlight effect in L&D measurement: we measure what’s easiest, not what’s important.
The L&D streetlight effect: measuring what’s easy, not what’s important
This habit of “searching where the light is” explains many pitfalls in L&D measurement. Consider how training success is often reported:
“500 people attended the workshop, and 95% of them said they would recommend it!” or “Our LMS shows 1,200 course completions this quarter! Our team delivered a total of 600 hours of training.”
These vanity metrics shine brightly. They are easy to collect (who doesn’t love an LMS that tracks completions automatically, or a glowing post-training survey that makes us feel appreciated?). But do they really tell us whether employees improved their skills, or whether the business benefited? Does the business interpret those 600 hours spent on training as value delivered, or as an investment yet to pay off? In many cases, the answer is no.
One study found that companies “rely heavily on basic metrics like completion rates and smile sheets” [3]. These are exactly the kinds of data sitting under the L&D streetlight. They are visible and easy to measure: automated, convenient, comfortable, just like the sparkle of that streetlight.
In a recent study from the Association for Talent Development, only 43% of talent development professionals said their learning goals were aligned with their business goals (n = 277) [4].
And if we are not even aligned, are we looking at the right issues at all?
What are we not looking at?
One of my favorite questions to ask early on, when researching business issues and opportunities, is: “So, what are we not looking at?”
Yes, asking questions and slowing down the process can be costly. But so is relying solely on convenient data points. Convenience costs money! By focusing on the easy metrics under the streetlight, organizations often miss the real story hidden in the dark alleys. As one report put it, companies assume that if learners complete the training and give it a thumbs-up, the training must have been effective. It is a “dangerous assumption” that completion equals success [3].
In reality, completion and satisfaction do not guarantee learning, behavior change, or results. Employees may give a course five stars because it was engaging, yet change nothing about how they work the next day. A team may reach 100% completion on mandatory training while the related safety incidents or sales numbers show no improvement. Under the cozy light of completion rates and survey averages, whether we actually drove change remains in the shadows.
Is it just me?
No, you are not alone. Over 25 years, I have worked with organizations large and small on hundreds of projects, and I have seen the same pattern: measurement and evaluation often get stuck at a “Level 1” survey or a knowledge check. And I’m not the only one saying this. Industry research shows that most organizations struggle to measure deeper impact. For example, 43% of companies say they don’t measure Level 4 at all [3], referring to Kirkpatrick’s Level 4 (results and business impact).
Why we stick to the light: barriers to meaningful measurement
Do you know what I found striking in all of these studies (and in my own experience)? L&D teams know that, in theory, we should be measuring what matters. They know what is important. So why don’t we measure it?
If measuring actual impact is so important, why aren’t more L&D teams doing it? It’s not because L&D professionals are lazy or don’t care. In fact, 91% of companies believe the impact of learning should be measured beyond the basics (only 9% say there is no need for higher-level evaluation) [3]. The intention is there. The problem is that a handful of deep-rooted barriers keep L&D parked in the bright zones.
“I don’t know where to start.”
Figuring out how to measure behavior change and business outcomes can feel overwhelming, and many teams lack a clear roadmap. The top reported challenge is simply knowing how and where to start with a measurement plan [2]. It is much easier to default to the familiar routine of collecting course feedback and test scores than to venture into unknown analytical territory. It’s okay to start where you are! Iteration and progress will take you further in the long run than waiting for perfect conditions to begin.
Lack of data access and integration
Reaching the “dark areas” (such as job performance metrics and business KPIs) often means pulling data from outside the L&D silo. It may require tapping into sales systems, quality assurance data, or HR performance reviews. For many L&D teams, that is easier said than done. The data lives in a variety of systems owned by other departments and may not be readily shared. Unsurprisingly, “access to the necessary data” is cited as a persistent barrier to learning measurement [2]. Data security and privacy rules can add further hurdles, since sensitive information could be misused. If you cannot get data on post-training error rates or customer satisfaction, you are forced to rely on what you can get (LMS statistics and surveys).
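To make this barrier concrete, here is a minimal sketch (in Python, using pandas) of the kind of join that connects learning records to a business outcome. Every detail is hypothetical: the file names, the columns, even the assumption that both systems can export to CSV. The point is only that the interesting question lives in data L&D usually doesn’t own.

```python
# Hypothetical sketch: connecting an LMS export to a business KPI export.
# File names and column names are invented for illustration.
import pandas as pd

# LMS export: one row per learner who completed the training
lms = pd.read_csv("lms_completions.csv")      # columns: employee_id, completed_on
# Sales system export: revenue per employee for the quarter
sales = pd.read_csv("quarterly_sales.csv")    # columns: employee_id, revenue

# Flag each salesperson as trained or not, then compare average revenue
sales["trained"] = sales["employee_id"].isin(lms["employee_id"])
print(sales.groupby("trained")["revenue"].mean())
```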
Lack of business alignment and stakeholder support
Measuring true impact often requires cooperation from across the business. Managers may need to observe and report changes in behavior, and executives may need to prioritize the measurement effort. However, it can be difficult to convince stakeholders that deep measurement is worth the effort. As long as employees check the training box, many stakeholders are satisfied. In fact, getting buy-in that measurement should be a priority is another major challenge [2]. Without leadership support, L&D may not get the time or resources to chase the meaningful metrics hidden in the dark. On that note, stop and take a step back: what more value could you bring to the table to help your stakeholders make data-driven decisions? Think of data not only as the retrospective “proof of impact” you look back on, but as practical insight the business can act on. What if X% of participants said they needed further support to apply what they learned?
Analytical skills and confidence
Let’s face it: not every L&D professional is a data analyst, and you don’t need a doctorate in statistics to be effective. However, today’s L&D teams are expected to wear multiple hats. Designing and delivering learning is one skill set; measuring its business impact is another. Many L&D departments lack strong capabilities in data analysis and experimental measurement techniques. They may not have the tools or expertise to run robust evaluations (for example, comparing training cohorts against control groups and making statistical comparisons). A lack of shared data literacy, and low confidence in the face of a large skill gap, can add to the hesitation: it feels safer to produce a basic report than to attempt a complex analysis that may be beyond the team’s comfort zone.
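To show that the statistics involved don’t have to be intimidating, here is a minimal sketch of the cohort-versus-control comparison mentioned above. The numbers are invented, and a real evaluation would need comparable groups and enough data; this only illustrates the mechanics.

```python
# Hypothetical sketch: comparing a trained cohort to a control group.
# The scores are invented; in practice they would come from a
# performance system, not from the LMS.
from scipy import stats

trained = [78, 85, 90, 72, 88, 95, 81, 79]  # post-training performance scores
control = [70, 75, 80, 68, 74, 77, 73, 71]  # comparable group, no training

# Two-sample t-test: is the gap between the group means likely chance?
t_stat, p_value = stats.ttest_ind(trained, control)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# A small p-value suggests a real difference, but it still doesn't
# prove the training caused it (see the next barrier).
```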
Complexity of behavior
Even with the right data and skills, human behavior is complex. It can be difficult to isolate the effect of a training program on real-world behavior and to measure what matters. Behavior change often unfolds over time and is influenced by many factors besides training (manager support, work environment, incentives, personal motivation, and so on). Measuring it requires observation, follow-up assessment, or connecting training to performance metrics that fluctuate for reasons beyond training. It’s not as easy as scoring a quiz. Because behavior is complex and sometimes slow to change, many organizations avoid digging into it. But if it wasn’t for behavior change, did we really make a difference?
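One technique analysts use to untangle training from everything else that changes over the same period is a difference-in-differences comparison: measure both a trained and an untrained group before and after, and credit the training only with the trained group’s extra improvement. A sketch with invented numbers:

```python
# Hypothetical sketch: a difference-in-differences estimate, using
# invented averages of some performance metric measured before and
# after the training period for both groups.
trained_before, trained_after = 60.0, 75.0   # trained group improved by 15
control_before, control_after = 62.0, 68.0   # untrained group improved by 6

# Both groups improved (new tooling, seasonality, market shifts...).
# The training effect is estimated as the trained group's *extra* gain.
effect = (trained_after - trained_before) - (control_after - control_before)
print(f"Estimated training effect: {effect:+.1f} points")  # +9.0 points
```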
These barriers explain why L&D measurement tends to hover in the light of simplicity. But there are consequences to staying there. Failing to measure meaningfully means flying blind. By not establishing outcome metrics up front, as one analysis notes, organizations end up in a constant cycle of “pushing out content and hoping for the best” [3].
Moreover, the inability to measure impact was cited by 69% of companies as the biggest challenge in achieving important learning outcomes [3].
In other words, not measuring impact is not just a measurement problem; it’s a business problem. It means L&D cannot demonstrate alignment with strategic goals, and therefore cannot prove (or improve) its value to the organization.
How do you evolve from here? Measure what’s important
In the next article in this series, we’ll explore how to move from the handy streetlight into the unknown dark, discover where the real impact lies, and measure what’s important. We’ll look at how to choose measurement and evaluation models, and what lies beyond the famous Kirkpatrick levels. Finally, we’ll examine how AI can act as a force multiplier, scaling the limited number of spotlights a team can handle into thousands and thousands.
References:
[1] Streetlight effect
[2] Measure L&D success: What are the most important reports for your organization?
[3] Measure the impact of learning
[4] The future of learning assessment and measurement of impact: Improve your skills and addressing challenges