
What research says about AI, learning, and humans
I came to education late in my career, and it humbled me in ways I didn’t expect. The field holds skills and areas of study most people outside it never see. The more research I read, especially on AI, the more I think we’re framing this the wrong way. The prevailing AI conversation in L&D goes like this: AI handles the mundane tasks while L&D teams focus on the strategic. That sounds safe. It’s also too simple.
Research on AI-assisted learning tells a more complex and interesting story. AI does more than just handle routines. When designed properly, it can truly outperform traditional facilitated learning in terms of measurable outcomes. And if the design is wrong, it may not produce any benefits and may even have negative consequences. The gap between well-designed and poorly designed AI learning is not getting smaller, which is exactly where the role of L&D practitioners becomes important.
Human-led instruction remains the most effective
Before exploring what AI can do, it’s worth being precise about what it can’t. A meta-analysis by Roorda et al. (2017) found that the quality of the instructor-learner relationship was one of the strongest predictors of engagement and learning outcomes. The reverse is equally true: poor facilitative relationships can measurably damage results. This finding carries over to the workplace. Human facilitators and L&D professionals bring four things that AI cannot reliably replicate.
Reading the room
Detecting withdrawal, resistance, or psychological safety issues in a cohort. No model can yet reliably infer these from interaction data alone.
Contextual judgment
Knowing when a learning objective matters less than what is happening in the team or organization around it.
Values and culture
Shaping norms about how people learn together, challenge each other, and apply new skills in a particular organizational context.
Ethical authority
Making defensible decisions about assessment, performance, and development that affect people’s careers.
The constraint on human-led development has never been motivation or expertise; it is scale. Providing truly personalized feedback and practice to every learner, at the pace they need it, is not possible without AI.
What AI-assisted learning can truly achieve
In 1984, Benjamin Bloom identified what he called the “two sigma problem”: learners who received one-on-one tutoring outperformed those in group instruction by two standard deviations [1]. The question ever since has been how to achieve that at scale. Forty years later, AI is beginning to provide practical answers.
A 2025 randomized controlled trial published in Nature Scientific Reports found that a research-designed AI tutoring system outperformed active, facilitated learning on knowledge outcomes. Importantly, the benefit only emerged when the system was built to encourage critical thinking and application rather than simply providing answers on demand. Unguided AI access produced no measurable benefit. The design of the learning experience was what mattered.
A UK-based RCT (2024) testing Google’s LearnLM reached a similar conclusion. Learners tutored by the AI model transferred knowledge to new problems better than those who received only human-led instruction [2]. The human facilitators in this study focused on pacing, motivation, and social-emotional support, and the hybrid model outperformed either approach alone.
VanLehn’s foundational research on tutoring system design explains why this works. Effective AI learning systems continuously turn assessment into instruction, providing feedback at every step of a module rather than only at the end. Large language models strengthen this principle further, because they can respond to open-ended answers as well as multiple-choice selections.
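As a concrete illustration of that step-level principle, here is a minimal sketch. Everything in it is hypothetical: the exact-match check is a toy stand-in for whatever rubric or model a real tutoring system would use.

```python
# Toy sketch of step-level feedback: assess and respond after every step,
# rather than grading once at the end of the module.

def step_feedback(answer: str, expected: str) -> str:
    """Exact-match stand-in for a real rubric or model-based check."""
    return "correct" if answer.strip().lower() == expected.strip().lower() else "hint"

def run_module(steps, answers):
    """Pair each learner answer with immediate feedback on that step."""
    return [(prompt, step_feedback(answer, expected))
            for (prompt, expected), answer in zip(steps, answers)]

steps = [("What does RCT stand for?", "randomized controlled trial"),
         ("Name the failure mode where AI invents facts.", "hallucination")]
feedback = run_module(steps, ["Randomized Controlled Trial", "bias"])
# feedback marks step 1 "correct" and step 2 "hint", before the module ends
```

The point is structural, not algorithmic: feedback is attached to each step while the learner is still in it, which is exactly where an LLM-based checker would slot in.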
However, AI-assisted learning has real failure modes that L&D professionals must design for.
Hallucination
AI models can generate fluent, confident, and inaccurate content. For compliance or technical skills training, this is a significant risk that requires human oversight.
Over-reliance
Always-available AI help can short-circuit the productive struggle that drives long-term learning. Spaced retrieval and desirable difficulty are features, not bugs.
Bias
Automated grading and feedback should be audited for differences in error rates across learner groups, especially in organizations with diverse workforces.
Formative vs. summative: A practical framework for L&D
The most helpful lens for deciding where to introduce AI in a learning program is the distinction between formative and summative assessment. Formative activities (practice, reflection, knowledge checks, scenario responses) are where AI genuinely wins: learners get faster feedback, more practice opportunities, and a lower-risk environment in which to make and learn from mistakes. A 2025 systematic review in Frontiers in Education confirms these benefits across 37 studies, but also notes that they depend on L&D professionals remaining active brokers of the experience rather than passive adopters of tools [3].
For summative, high-stakes evaluation, the calculus changes. Accuracy, fairness, and defensibility matter more than efficiency. Littman et al. (2021) set out clearly when automated scoring is reliable and when human review is non-negotiable, especially for writing, professional judgment tasks, and performance management. Practically speaking: let the AI carry the formative load, and keep humans in the loop for everything that affects a learner’s trajectory within your organization.
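That routing rule can be made mechanical. Here is a minimal sketch, assuming a simple purpose/stakes tagging scheme; the labels and return values are illustrative, not from any standard.

```python
# Route each assessment activity to AI or human feedback, following the
# rule above: AI carries the formative load; humans own high-stakes calls.

def route_assessment(purpose: str, stakes: str) -> str:
    """purpose: 'formative' or 'summative'; stakes: 'low' or 'high'."""
    if purpose == "formative" and stakes == "low":
        return "ai_feedback"                  # practice, knowledge checks, scenarios
    if purpose == "formative":
        return "ai_feedback_with_human_review"
    return "human_review"                     # summative: defensibility comes first

print(route_assessment("formative", "low"))   # ai_feedback
print(route_assessment("summative", "high"))  # human_review
```

Even a trivial function like this forces the useful conversation: every activity in the program has to be tagged, and every tag has an owner.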
The L&D practitioner in AI-assisted learning: behaviors and skills
The evidence points to a clear conclusion that the role of L&D practitioners will not diminish in AI-assisted learning environments. It changes and becomes more demanding in some respects. Here are the specific behaviors and skills that differentiate L&D practitioners who use AI effectively from those who struggle with it.
1. Learning Design Literacy: Know what AI should and shouldn’t do
Recall that the 2025 Nature Scientific Reports RCT found no learning benefit from unguided AI use. The practitioners who derive value from AI tools are those who understand learning design well enough to specify what the AI should do, when it should do it, and under what constraints.
That means moving beyond content selection to designing learning architectures: sequencing AI-driven practice with human reflection, building in spaced retrieval, and specifying what the AI should not simply hand to the learner.
2. Data interpretation: Read what the AI surfaces and act on it
AI-assisted learning platforms generate learner data at a scale and granularity previously unavailable. L&D practitioners of the next decade need to be comfortable asking: what does this pattern in the data tell me about what’s not working? Where do learners stumble? Which cohorts are dropping out, and why? This is not a data science role, but it does require enough analytical fluency to move from dashboards to design decisions.
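To make the dashboards-to-design-decisions move concrete, here is a small sketch using invented event records; the data shape and the 60% completion threshold are assumptions, not a platform’s real export format.

```python
from collections import defaultdict

# Given (learner, module, completed) events, compute where learners stumble
# and shortlist modules for a design review.

def completion_rates(events):
    started, finished = defaultdict(int), defaultdict(int)
    for learner, module, completed in events:
        started[module] += 1
        finished[module] += int(completed)
    return {m: finished[m] / started[m] for m in started}

def flag_for_review(rates, threshold=0.6):
    """Low completion is a design question, not automatic learner blame."""
    return sorted(m for m, rate in rates.items() if rate < threshold)

events = [("ana", "intro", True), ("ben", "intro", True),
          ("ana", "case-study", False), ("ben", "case-study", True),
          ("cho", "case-study", False)]
print(flag_for_review(completion_rates(events)))  # ['case-study']
```

The analytical step is ten lines; the practitioner’s step is deciding what a flagged module means and what to redesign.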
3. Prompts and system design: Specify exactly how the AI behaves
Deploying an AI learning tool is not the same as configuring it well. A competent practitioner should be able to write a clear specification for the AI system: the persona, the constraints, the type of feedback the AI should give, and the escalation points where a human facilitator must step in. This is a new form of instructional design, and it is quickly becoming a core L&D skill.
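As an illustration of what such a specification might contain, here is a sketch. Every field name and line of wording is invented for this example; it is not any product’s actual configuration schema.

```python
# A behavior spec for a hypothetical AI practice coach, rendered into a
# system prompt for a chat-based model. All fields are examples.

TUTOR_SPEC = {
    "persona": "Patient coach for new people managers; asks before telling.",
    "constraints": [
        "Never give the model answer outright; guide with questions first.",
        "Stay within the provided course material; say so when unsure.",
    ],
    "feedback_style": "Specific, behavior-focused, one improvement at a time.",
    "escalation": [
        "Learner shows frustration or disengagement for two consecutive turns.",
        "Question touches compensation, legal, or HR policy.",
    ],
}

def to_system_prompt(spec: dict) -> str:
    """Render the spec as a single system prompt string."""
    lines = [f"Persona: {spec['persona']}", "Constraints:"]
    lines += [f"- {c}" for c in spec["constraints"]]
    lines.append(f"Feedback style: {spec['feedback_style']}")
    lines.append("Escalate to a human facilitator when:")
    lines += [f"- {e}" for e in spec["escalation"]]
    return "\n".join(lines)

print(to_system_prompt(TUTOR_SPEC))
```

Note that the escalation points are part of the spec itself: the handoff to a human is designed in, not bolted on.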
4. Ethical oversight: auditing bias and maintaining defensibility
As AI takes on more of the formative assessment load, L&D professionals will have a new responsibility to ensure that automated feedback is fair, accurate, and does not systematically disadvantage any particular group of learners. This requires building audit habits into the program cycle, rather than treating fairness as a one-time procurement checklist item. It also means maintaining the confidence to override AI recommendations if human judgment determines something is wrong.
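An audit habit can start small. Here is a sketch, assuming a human-reviewed sample of AI feedback labeled by learner group; the group names and the five-percentage-point gap threshold are illustrative assumptions.

```python
# Compare automated-feedback error rates across learner groups and flag
# disparities wide enough to warrant investigation or a human override.

def error_rates_by_group(records):
    """records: (group, ai_was_correct) pairs from a human-reviewed sample."""
    totals, errors = {}, {}
    for group, ai_correct in records:
        totals[group] = totals.get(group, 0) + 1
        errors[group] = errors.get(group, 0) + (0 if ai_correct else 1)
    return {g: errors[g] / totals[g] for g in totals}

def disparity(rates, max_gap=0.05):
    """Return (flagged, gap), where gap is worst-served minus best-served."""
    gap = max(rates.values()) - min(rates.values())
    return gap > max_gap, round(gap, 3)

sample = ([("site_a", True)] * 9 + [("site_a", False)]         # 10% error
          + [("site_b", True)] * 7 + [("site_b", False)] * 3)  # 30% error
print(disparity(error_rates_by_group(sample)))  # (True, 0.2)
```

Run on a fresh sample each program cycle, this turns fairness from a procurement checkbox into a recurring measurement with a named owner.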
5. Facilitation that cannot be imitated by AI
As AI absorbs more of the knowledge transfer and practice workload, the human facilitation that remains will need to be genuinely irreplaceable. That means putting more emphasis on what the research identifies as most important: psychological safety, motivational support, situational challenge, and the kind of feedback that requires knowing the person, not just the answer. The L&D practitioners who succeed will be those who see AI taking on repetitive, scalable tasks not as a threat to their professional identity, but as an opportunity to do the distinctly human parts of the job better.
Above all, this research makes clear that the quality of L&D professionals’ judgment determines whether AI-assisted learning works or fails. The role is not diminished; it is more consequential. The organizations that get this right will be those that invest in upskilling their L&D functions alongside their AI tools, because the evidence suggests that technology without practitioner competence delivers little more than no technology at all.
Over to you
Which of these skills are your L&D teams already developing, and where are the biggest gaps? We’d love to hear from practitioners working on the front lines of this issue.
References:
[1] The 2 Sigma Problem: Exploring teaching methods that are as effective as one-on-one tutoring.
[2] AI tutoring can safely and effectively support students: an exploratory RCT in UK classrooms
[3] Educators’ thoughts on AI automated feedback in higher education: A structured integrative review of possibilities, pitfalls, and ethical aspects.
Research cited:
[1] Roorda et al. (2017). Teacher-student affective relationships and student engagement and achievement: A meta-analytic update and test of the mediating role of engagement.
[2] VanLehn, K. How the tutoring system works.
[3] Littman et al. (2021). Fairness assessment of automated methods for scoring the use of textual evidence in writing.
[4] AI tutoring outperforms in-class active learning: RCT using a novel research-based design in a full-scale educational setting (Nature Scientific Reports, 2025).
[5] What research shows about generative AI in tutoring.
