The problem with bias: Be careful what you think!
We’ve rolled out an all-new candidate experience for hiring that automates many tedious tasks, such as scanning resumes to match skills and experience. For candidates who pass the AI filter and the initial recruitment call, the system will automatically send a survey.
How much thought do you give to your own thinking?
Here are the results: although the survey response rate was lower than expected, candidates were overwhelmingly satisfied with the experience. What’s wrong with this picture? I got BAMMed!
The BAMM label doesn’t matter
BAMM is not a formal term; in my data literacy workshops I use it as shorthand for the collective elements of biases, assumptions, myths, and misconceptions. At the end of the day, it’s not the label that matters, but the extent to which you can reduce their influence on the decision-making process.
What do BAMMs have in common?
They are invisible and often undetected, but they have a huge impact on the way you think.
Biases
These are cognitive shortcuts that distort perception and judgment. They arise unconsciously and lead us to prefer certain ideas or groups over others. For example, in L&D, confirmation bias may lead to using only metrics that support the perceived success of a training initiative.
Assumptions
These are things we take for granted without evidence. Assumptions often simplify complex scenarios, but they can introduce blind spots. For example, if you assume all employees prefer self-paced e-learning, other resources may go underutilized.
Myths
These are widely held but false beliefs. Myths persist through repeated exposure and cultural norms. The myth of “learning styles,” the belief that tailoring training to visual, auditory, or kinesthetic preferences improves learning, is a classic example in L&D.
Misconceptions
These are incorrect understandings or interpretations of concepts, often rooted in incomplete, outdated, or half-true information. A common misconception in L&D is equating high course completion rates with learning outcomes.
Hidden BAMM: How to identify and address it
Because BAMM impacts decision-making at every level, from program design to data interpretation, it is important to identify and address it systematically. Let’s go back to the original story. What BAMM is lurking there that we should be wary of?
Assumptions and confirmation bias
First, without user testing, you may be relying on assumptions about how candidates experience your new software. I can tell you from my own experience looking for a new role that the most painful and frustrating part of the process was the application stage, which involved the dreaded Applicant Tracking System (ATS) and its selection bias [1].
Survivorship bias
The system then surveyed only those who passed the ATS and human screening. This is a textbook example of survivorship bias. Wouldn’t you want to know the experiences of those who did not pass? Or, worse yet, of excellent candidates who decided not to apply at all because of the experience?
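A hypothetical back-of-the-envelope simulation (invented numbers, not the article’s data) shows how surveying only the funnel’s “survivors” inflates measured satisfaction:

```python
import random

random.seed(1)

# Hypothetical: 1,000 applicants, each with a true satisfaction
# score (roughly 0-10) for the application experience.
applicants = [random.gauss(5.0, 2.0) for _ in range(1000)]

# Assume frustrated applicants are less likely to pass (or persist
# through) the ATS + screening funnel, so only the happier tail
# ever receives the survey.
surveyed = [s for s in applicants if s >= 6.0]

avg_all = sum(applicants) / len(applicants)
avg_surveyed = sum(surveyed) / len(surveyed)

print(f"All applicants:     {avg_all:.1f}/10")
print(f"Surveyed survivors: {avg_surveyed:.1f}/10")
```

The survey average looks great, yet it says nothing about the majority who were filtered out before it was sent.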
Politeness bias (response bias)
What about the results? One form of response bias is reporting what is expected or socially acceptable instead of the truth. Think about it: these candidates want the job. Will they really tell HR how bad the experience was? And finally, there is your own confirmation bias. You just invested a lot of money and resources to implement this system, so you want to hear good things about the initiative. Confirmation bias can affect which questions you ask and how you phrase them. It also affects how readily we accept outcomes we like and reject outcomes we don’t.
Can we completely eliminate BAMM?
No. But you can reduce its influence on your decision-making simply by being aware that it exists and taking practical steps to mitigate it. Here is an example of each element of BAMM, along with strategies to mitigate it.
1. Bias: Confirmation bias
Definition
The tendency to search for, interpret, and recall information that confirms existing beliefs. Confirmation bias often leads to echo chambers (everyone believes the same thing) and bandwagoning (everyone does something because everyone else is doing it).
L&D example
Collecting only feedback that is consistent with your belief that your new training program is effective. As you set out to “prove the value” of your program, you may limit your data collection to the elements you believe support your theory.
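A tiny sketch with invented metric values makes the cherry-picking effect concrete: reporting only the favorable metrics paints a very different picture than the full set.

```python
# Hypothetical: a training program measured on five metrics
# (change scores; positive = improvement). All numbers invented.
metrics = {
    "completion_rate": +12.0,
    "learner_satisfaction": +8.0,
    "quiz_scores": +3.0,
    "on_the_job_application": -4.0,
    "manager_rated_performance": -2.0,
}

# Confirmation bias in action: report only metrics that support the story.
favorable = {name: v for name, v in metrics.items() if v > 0}

avg_reported = sum(favorable.values()) / len(favorable)
avg_actual = sum(metrics.values()) / len(metrics)

print(f"Cherry-picked average: {avg_reported:+.1f}")
print(f"Full-picture average:  {avg_actual:+.1f}")
```

The same underlying data supports either narrative, depending on which metrics you allow yourself to collect and report.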
When I run data literacy workshops, L&D teams often start by saying that data analysis is key to proving the value of L&D. By the end of the workshop, they rephrase that statement: analytics is about understanding what works and what doesn’t, and predicting what will and won’t work. It’s about learning how to make decisions based on data, not trying to “prove” value.
Mitigation
Use diverse feedback channels and actively seek out contradictory evidence to challenge assumptions. For example, at Intel we shared an AI assistant called “Holey Poke” with other L&D people. “Holey Poke” pokes holes in your ideas, arguments, and plans. I used it to challenge myself before bringing anything to others.
2. Assumption: Defining engagement and effectiveness
This one is actually two in one. First, assuming that certain metrics, such as user-interface interactions in e-learning, equate to full engagement. Second, believing that if something is engaging, it is effective learning.
L&D example
Stakeholders say the content is fairly dry and needs to be “livened up” with interactions. This approach can lead to a lot of clicking, dragging and dropping, click-to-reveal, and so on.
Engagement is more than just a physical act: it also has affective (emotional) and cognitive components. Measuring engagement means measuring all three elements. Over-indexing on the emotional domain can turn learning into pure entertainment. Finally, effectiveness at work means that employees can apply what they learn on the job to get things done, and done well. Effectiveness must be defined and measured up front.
Mitigation
Measure application and real-world outcomes, not just content engagement. Always design for, and measure, all three elements of engagement. Remember, people don’t come to work for entertainment.
3. Myth: Defining the digital native
The belief that younger generations are inherently better at technology. But, but, but… Yes, younger generations may be much faster at messaging, but try emailing with them.
L&D example
Assuming everyone knows how to create a pivot table in Excel, or that, just because they grew up with technology, they know when and how to use email or set up a meeting. Don’t assume everyone knows what to do.
Mitigation
Regardless of age (by the way, the word “generation” itself is often a myth), assess skill gaps and set clear expectations. Show what “good” looks like, and explain not only the steps to take but also the decisions behind them. Communication skills are often intertwined with technology barriers, so address the two together. Don’t teach “communication skills” or “empathy” out of context; teach how to perform tasks while applying these soft skills. Speaking of skills…
4. Misconception: Defining skills-based learning and skills-based organizations
Well, that’s the misconception: each organization seems to have its own definition. In general, a skills-based organization (SBO) is a business model that prioritizes identifying, developing, and deploying employee skills over traditional job functions and titles, and skills-based learning is what makes that possible.
L&D example
Donald Clark’s (as always) thought-provoking blog and LinkedIn posts about skills-based organizations stirred up some emotions.
Workplace learning requires us to stop distracting ourselves with abstract concepts. The term “skills-based organization” has long been an empty metaphor, because we have been seduced into thinking that abstract nouns such as leadership (spending more but getting less), culture, diversity, equality, values, inclusion, and resilience are “skills,” or some mystical miasma that promotes and produces skills. [2]
I always read Donald’s posts, and not because I always agree with everything he says; that’s really the point. If you only read things you completely agree with, you will never be challenged and never evolve.
Nick Shackleton-Jones made a very important comment on this article that relates to BAMM.
(Paraphrasing) We often implement a convenient version of what we can actually do instead of what we should be doing.
What exactly is a “convenient version”?
We pick a label like “mobile-first” and implement video because it’s the only tool we have for mobile, regardless of whether video is needed. We choose game-based learning for engagement but end up with Jeopardy because we lack the resources, time, expertise, or tools. We start gamification without deep expertise in motivation theory or behavioral science and end up with points, badges, and leaderboards. We do microlearning that is basically just short pieces of content. You get the picture.
Mitigation
Don’t spend years building a “skills library” of abstract definitions and then ask L&D to build “communication training.” Start with what needs to be done, and define what “good” means in the areas that matter. Not all skills are equally important, and skills have levels: I can dabble in programming, but I’m not looking to build the next enterprise application. Skills that go unused deteriorate. Skills should be evaluated not in the abstract, but in terms of what you intend to produce by applying them.
So the next time someone asks you to build communication or empathy training, ask what needs to get done and how. Then show how to apply good communication and empathy skills to that specific task.
Now you know about BAMM. Don’t get BAMMed!
References:
[1] ATS is terrible: here’s what you need to know
[2] Recovering productivity: Aligning work, learning, and social needs
Originally published: December 28, 2024