
How to evolve with AI in L&D
In our last article, we began exploring lessons learned from the conference on how learning professionals can prepare for the changes that artificial intelligence (AI) and automation will bring in the near future. This article continues with five calls to action for adopting AI in L&D, and also tackles a frequently asked question about large language models (LLMs): how good are LLMs at reasoning?
Key points for implementing AI in L&D
Here are some takeaways from conversations with industry leaders at the conference about this approach.
1. Deeply understand behavioral science
Research behavioral change models
Become familiar with models such as COM-B (Capability, Opportunity, Motivation, Behavior), Self-Determination Theory, and the Fogg Behavior Model to understand what drives learning motivation and engagement. The ultimate goal is to change behavior, not just retain knowledge.
Design to increase motivation
Use insights from these models to create learning experiences that increase learner motivation through autonomy, competence, and relatedness, raising the likelihood of lasting behavioral change.
Test and adapt
Continually test different strategies to motivate and engage your learners, and adapt based on what resonates most. Measure the right things! You need to go beyond the Level 1 survey and the end-of-course “knowledge check.” For example, by shifting the focus from retrospective measures (satisfaction with content) to prospective ones (drivers of behavior such as motivation, opportunity, job performance, and goal achievement), you gain more actionable insights after a learning experience, insights that you and your stakeholders can actually act on.
2. Build your network
Follow industry experts (both internal and external)
Follow industry leaders in L&D, AI, and future-of-work trends. Choose wisely: when it comes to implementing AI in L&D, opinions range from “AI will solve all problems” to “AI will destroy the world.” Don’t create an echo chamber where everyone says the same thing. Find practitioners who use AI and actually deliver projects, not just blog about it. Stay informed and inspired by regularly reading insights from experts. The field is noisy right now; let trusted voices block out the noise and filter the dust, or you will quickly feel overwhelmed.
Join the L&D community
Join communities such as LinkedIn groups, conferences, and forums. Networking with other professionals provides fresh perspectives and innovative solutions. But don’t stop at the L&D bubble, which brings us to the next point:
Beyond L&D and HR
Find champions within your company. Elsewhere in the business, AI is already being implemented where it has a direct impact on business objectives. Be proactive, and learn from those early mistakes.
3. Focus on building a “learning” ecosystem, not just programs
Think beyond the boundaries of the course
By “learning” we don’t just mean the LMS or LXP, or anything specific to training. Learning is anything that enables, accelerates, and extends an employee’s ability to do their job. Create an ecosystem that supports continuous, informal, and social learning. Try chatbots, forums, or peer coaching to foster a culture of learning in the flow of work. But also know when to get out of the way.
Use technology to integrate learning and performance systems
No one gets excited about logging into an LMS or LXP, and no one goes back to search it for instructions later. Yes, AI is now embedded in every learning technology application, but it is fragmented and mostly consists of wrappers around large language models. Instead, integrate learning systems with performance systems (where employees actually work) behind the scenes, through application programming interfaces (APIs). Employees shouldn’t need to know where assets are stored; making them accessible at the point of need is enough. Learning technology is any technology that supports learning. Build alliances.
4. Strengthen your change management skills
Learn change management frameworks
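To make the point-of-need idea concrete, here is a minimal, hypothetical sketch of surfacing learning assets where employees work. The catalog, tags, and URLs are invented for illustration; in practice the lookup would sit behind an API and be called by the chat or ticketing tool the employee is already using.

```python
# Hypothetical sketch: recommend learning assets for the task at hand,
# so employees never have to go search the LMS themselves.
# Catalog entries and tags are invented examples.

CATALOG = [
    {"title": "Running a sprint retro", "tags": {"agile", "meetings"},
     "url": "https://lms.example.com/a/101"},
    {"title": "Writing SQL joins", "tags": {"sql", "data"},
     "url": "https://lms.example.com/a/102"},
    {"title": "Giving peer feedback", "tags": {"feedback", "coaching"},
     "url": "https://lms.example.com/a/103"},
]

def recommend(task_tags, catalog=CATALOG, limit=2):
    """Return the assets whose tags overlap most with the current task."""
    scored = [(len(task_tags & a["tags"]), a) for a in catalog]
    scored = [pair for pair in scored if pair[0] > 0]  # drop non-matches
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [asset for _, asset in scored[:limit]]

# A workflow tool would call this behind the scenes and show the links inline.
for asset in recommend({"sql", "reporting"}):
    print(asset["title"])  # prints "Writing SQL joins"
```

The design choice worth noting: the employee-facing tool only sees titles and links, and never needs to know which system stores the asset.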
Familiarize yourself with frameworks such as ADKAR (Awareness, Desire, Knowledge, Ability, Reinforcement) and Kotter’s 8-Step Change Model, and with the behavioral motivations behind them.
Deal with resistance to change
Develop strategies to overcome resistance by understanding employee concerns and demonstrating the long-term value of new learning approaches. AI implementations (at least for now) rely on human execution. Everyone wants change, but no one wants to change. Start by solving a specific problem for your stakeholders and audience. Start small, pilot, and expand from there through iteration. Recruit skeptics as testers! They will happily try to break your application and point out its flaws.
5. Build a foundation of data security, data privacy, and ethics
Does your organization have a data privacy council? If not, start building one. Find out who owns data security within your organization, and work with them on data classification levels, that is, clear guidance on what types of data can be used where. Understand your vendors’ data security and data privacy policies: you may or may not own the data, and even if you retain ownership after separating from a vendor, you may need to archive it. There should be clear policies about how long data is stored, where, and how it is stored (encryption in transit and at rest). Be clear about what data you collect and what it can be used for. (For example, if you collect skills data to run a personal development program, can someone later decide to use that data for performance evaluation?)
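As one way to make classification guidance unambiguous, the rules can be expressed as a simple lookup that systems check before moving data. The levels and destination systems below are invented examples, not a real policy; your security team defines the actual matrix.

```python
# Hypothetical sketch of data-classification guidance as code:
# which classification levels may flow to which systems.
# Levels and destinations are invented for illustration.

ALLOWED = {
    "public":       {"lms", "external_ai_tool", "analytics"},
    "internal":     {"lms", "analytics"},
    "confidential": {"lms"},
    "restricted":   set(),  # e.g. performance data: never leaves its home system
}

def may_send(classification, destination):
    """True if data of this classification may flow to the destination."""
    return destination in ALLOWED.get(classification, set())

print(may_send("internal", "external_ai_tool"))  # prints False
```

Unknown classifications deliberately default to “deny everything,” which is the safer failure mode for this kind of check.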
After all, how smart is an LLM?
Finally, one of the most interesting questions I got from conference attendees was how smart today’s LLMs really are. Are they genuinely reasoning, or producing an illusion of reasoning? How reliable is that reasoning, especially when you build solutions that connect an LLM directly to your audience?
LLMs are trained on large datasets to learn patterns and use those patterns to predict what comes next. An oversimplified picture: split all the data you collect into a training set and a test set, train the model on the training set, and once you think its pattern recognition is working, evaluate it on the unseen test data. It’s much more complicated than that, but the point is that pattern recognition can be mistaken for “smartness” and reasoning.
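The split described above can be sketched in a few lines. This is a deliberately minimal illustration using only the standard library, not a real training pipeline (real ones add validation sets, cross-validation, and so on).

```python
# Minimal sketch of a train/test split: hold out a slice of the data
# the model never sees during training, then evaluate on it.
import random

def train_test_split(data, test_ratio=0.2, seed=42):
    """Shuffle the data and hold out test_ratio of it as unseen test data."""
    rng = random.Random(seed)   # fixed seed makes the split reproducible
    shuffled = data[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * (1 - test_ratio))
    return shuffled[:cut], shuffled[cut:]

examples = list(range(100))     # stand-in for a labeled dataset
train, test = train_test_split(examples)
print(len(train), len(test))    # prints "80 20"
```

The essential property is that the two sets never overlap: good performance on the held-out set is the evidence that the learned patterns generalize beyond what was memorized.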
What does that look like in practice? Say a model was trained on how to solve a type of mathematical problem. Once the model recognizes the problem, it follows a learned solution pattern. It has no opinions, beliefs, or underlying position on the answer, so if you simply tell the model it’s wrong, it will apologize and reconsider its answer. Mathematical reasoning (as of today) is not their strong suit.
A study using the GSM-Symbolic benchmark found that generating variants of the same mathematical problem by replacing certain elements (names, roles, numbers, etc.) leads to inconsistent answers across all models tested, indicating that problem solving happens through pattern recognition rather than reasoning [1].
Specifically, on the GSM-Symbolic benchmark, changing only the numbers in a question degrades the performance of every model tested.
If you add seemingly relevant but actually unrelated information to a problem, humans reason their way to ignoring it. The research suggests that LLMs attempt to incorporate the extra information even when it is unnecessary for the solution.
Adding a single clause that seems relevant to the question causes significant performance degradation (up to 65%).
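A toy version of the perturbation idea behind these findings: hold the structure of a word problem fixed and vary only the surface details. A genuine reasoner should answer every variant correctly, while a pattern matcher may stumble. The template and names below are invented for illustration, not taken from the benchmark itself.

```python
# Toy GSM-Symbolic-style perturbation: one problem template,
# many surface variants, one trivially computable ground truth.
import random

TEMPLATE = ("{name} picks {a} apples on Monday and {b} apples on Tuesday. "
            "How many apples does {name} have?")

def make_variant(seed):
    """Generate one surface variant of the problem and its ground truth."""
    rng = random.Random(seed)
    name = rng.choice(["Sophie", "Liam", "Mia", "Noah"])
    a, b = rng.randint(2, 40), rng.randint(2, 40)
    return TEMPLATE.format(name=name, a=a, b=b), a + b

for seed in range(3):
    question, answer = make_variant(seed)
    print(question, "->", answer)
```

Scoring a model across hundreds of such variants, rather than on the one canonical phrasing it may have memorized, is what separates pattern recall from reasoning.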
In short, today’s LLMs are remarkable at pattern recognition and can apply it at a speed and scale humans can’t match. They are good at role-playing as someone else so you can practice soft skills. However, there are limitations (for now) when it comes to mathematical reasoning, especially reasoning about why an answer is the answer. Newer models, such as OpenAI’s “Strawberry” model, are starting to change this [2].
References:
[1] GSM-Symbolic: Understanding the Limitations of Mathematical Reasoning in Large Language Models
[2] What’s new: On OpenAI’s “Strawberry” and its reasoning
