
Balance AI efficiency with human-centered design
As eLearning embraces AI, one principle holds: humans come first, starting with our learners. In machine learning, this principle is known as human-in-the-loop (HITL), where people help machines make the right decisions. In instructional design, it goes beyond efficient production: designers infuse coursework with humanity to ensure a relatable, accurate, and engaging learning experience.
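To make the human-in-the-loop idea concrete, here is a minimal sketch in Python; the generate_draft function is a hypothetical stand-in for any generative model, and all names are illustrative rather than drawn from a real platform. The point is the shape of the workflow: AI proposes, and a human explicitly approves before anything reaches learners.

```python
from dataclasses import dataclass

@dataclass
class Draft:
    topic: str
    body: str
    approved: bool = False

def generate_draft(topic: str) -> Draft:
    # Hypothetical stand-in for a call to any generative model.
    return Draft(topic=topic, body=f"Auto-generated overview of {topic}.")

def human_review(draft: Draft) -> Draft:
    # The human-in-the-loop step: a designer reads the draft and
    # explicitly signs off; nothing ships to learners without it.
    print(f"--- Review: {draft.topic} ---\n{draft.body}")
    draft.approved = input("Approve for publication? [y/N] ").strip().lower() == "y"
    return draft

def publish(draft: Draft) -> None:
    # The gate that enforces HITL: unreviewed content cannot be published.
    if not draft.approved:
        raise ValueError("Refusing to publish unreviewed AI content.")
    print(f"Published: {draft.topic}")

if __name__ == "__main__":
    publish(human_review(generate_draft("workplace safety basics")))
```

The publish gate is deliberately strict: treating human sign-off as a hard requirement, rather than a convention, is what keeps the human genuinely in the loop.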
The relationship between AI efficiency and human creativity doesn’t have to be a struggle; the two can be complementary. AI can accelerate workflows and surface insights, while humans ensure learning is meaningful, ethical, and emotionally resonant. Below are common concerns designers face when working with AI, and how human-in-the-loop thinking can preserve an immersive, authentic experience for human learners.
How human factors in AI-powered learning design ease these concerns
1. Creativity
AI is faster than the humans who created it, but it is not necessarily more creative. It recombines existing patterns without true creative synthesis. It can generate variation, but it cannot create meaning, emotion, or intention. It processes; it does not imagine.
AI can accelerate production, reveal patterns, and even generate ideas humans have never seen, but it cannot discern why something matters or whom it should serve; that requires understanding learners and their needs. The interpretive layers of design (context, empathy, storytelling) remain purely human. The most effective designs treat AI as a co-creator rather than a substitute, letting machines generate possibilities while humans shape purpose and story. That human creativity gives learning authenticity, emotional resonance, and the spark of motivation that keeps learners engaged. Keeping the humans in “human-centered design” includes the designers themselves.
2. Personalization
AI systems often promise “personalized learning,” but in practice this personalization often relies on surface-level engagement metrics such as click-through rates and completion times rather than deeper evidence of cognitive understanding. The result is learners who know what to do but not how to apply it. [1] And under the influence of algorithmic “glazing,” [2] learners can receive recommendations that reinforce their existing strengths rather than address true skill gaps.
Without expert supervision, AI can misdiagnose learner needs and preferences, producing pseudo-individualization rather than true adaptation. This is not personalized learning in the instructional design sense; it is a one-size-fits-all model dressed up as customization. Skilled instructional designers counter this with adaptive frameworks, branching scenarios, and flexible RTI (response to intervention) designs that change with the learner rather than around them, as the sketch below illustrates.
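The distinction is easy to see in code. This minimal sketch contrasts the two routing approaches; the metric names and thresholds are illustrative assumptions, not taken from any particular platform.

```python
def route_by_engagement(click_rate: float, minutes_on_module: float) -> str:
    # Surface-level "personalization": fast, busy clickers are promoted
    # to harder content regardless of what they actually understood.
    return "advanced" if click_rate > 0.8 and minutes_on_module < 10 else "standard"

def route_by_mastery(score: float, missed_objectives: list[str]) -> str:
    # Evidence-based adaptation: remediate the specific objectives a
    # learner missed before advancing, instead of rewarding activity.
    if missed_objectives:
        return "remediate: " + ", ".join(missed_objectives)
    return "advanced" if score >= 0.85 else "standard"

print(route_by_engagement(0.9, 7.0))                     # "advanced" -- but on what evidence?
print(route_by_mastery(0.60, ["hazard identification"])) # targets the actual gap
```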
3. Voice
AI’s written voice, like AI-generated imagery, has telltale signs, and once you start spotting them they become glaring, suspect, and unappealingly obvious: the clichéd filler, the passive voice, the em dashes galore. Just as bad editing in a movie pulls viewers out of the story, the perception of reading AI content pulls learners out of the learning experience. That’s why we need constant reminders that AI is just a tool in the hands of experts. It’s up to humans to ensure that the learning experience they design speaks with a human voice, not a machine voice.
Learn the common pitfalls of AI-generated voice and edit accordingly. Read the copy aloud. Have it peer reviewed. Add personality: stories, anecdotes, photos of your real office. Treat your organization’s style guide as the source of truth, cut business jargon, and read every output as if you were accountable for it (because you are). AI can speed up production, but it cannot replicate human warmth and intent. Maintaining that distinction preserves trust and keeps learners immersed in the experience you’ve designed.
4. Accountability
When AI makes a mistake, which is statistically common, [3] who catches the error and who is responsible? Generative AI tools can produce plausible but inaccurate or outdated information. When models are trained on stale or biased data sources, those biases can be carried into new contexts and passed along to unsuspecting audiences, skewing the outcomes of assessments, recommendation systems, and recruitment-related training. For global or DEI-focused programs, this can produce inequitable learning pathways and content visibility that disadvantage certain learner groups. AI-enhanced platforms can also unintentionally widen accessibility gaps if training data and design choices do not represent diverse learners.
Human designers must audit for equity and ensure that learning technologies are inclusive, truthful, and welcoming; a simple starting point for such an audit is sketched below. Without rigorous instructional design oversight, training materials can harbor subtle errors, copyright issues, and pedagogical flaws. Whether caused by hallucinations, inaccuracies, or misinformation, those errors can compound into significant liability and reputational risk, and human instructional designers, learning developers, subject matter experts, quality assurance analysts, and fact checkers are what mitigate them. Accountability cannot be outsourced: responsibility for accuracy and consistency always rests with the human team.
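As one illustration, a human team might begin an equity audit by comparing outcomes across learner cohorts. This Python sketch assumes completion records can be labeled by cohort; the 10-point tolerance is an arbitrary illustrative threshold, and a flag is a prompt for human investigation, not a verdict.

```python
from collections import defaultdict

def pass_rates_by_group(records: list[tuple[str, bool]]) -> dict[str, float]:
    # records: (learner_group, passed) pairs pulled from completion data.
    totals, passes = defaultdict(int), defaultdict(int)
    for group, passed in records:
        totals[group] += 1
        passes[group] += int(passed)
    return {group: passes[group] / totals[group] for group in totals}

def flag_equity_gaps(rates: dict[str, float], tolerance: float = 0.10) -> list[str]:
    # Flag any cohort trailing the best-performing cohort by more than
    # the tolerance, so a human can investigate why.
    best = max(rates.values())
    return [group for group, rate in rates.items() if best - rate > tolerance]

rates = pass_rates_by_group([("A", True), ("A", True), ("B", True), ("B", False)])
print(rates)                    # {'A': 1.0, 'B': 0.5}
print(flag_equity_gaps(rates))  # ['B'] -- review this pathway
```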
5. Transparency
Beyond introducing errors into learning systems, AI-generated content can also expose confidential information and infringe copyright, [4] once again putting your organization at serious risk. Because AI systems are trained to create new content from existing sources, they can hew too closely to those sources, raising plagiarism and intellectual property issues.
Learners should be told when the content they are studying is AI-generated. Ethical concerns arise when AI is used without disclosure: learners who believe material was created solely by industry experts can feel deceived upon discovering it was produced by AI. Ethical use of AI in content creation requires clear transparency, rigorous human review, and organizational accountability.
AI’s role in learning will mature through continuous human feedback. Iteration, not automation, maintains quality and relevance. AI may enhance what we create, but human intent is what gives learning meaning. The goal is not to remove humans from the process but to expand their contribution through intelligent partnership. The future of learning belongs to the most thoughtful collaborations, not the fastest systems.
References:
[1] The era of deskilling
[2] The Glazing Effect: How AI Interactions Quietly Undermine Critical Thinking
[3] Largest study of its kind finds AI assistants misrepresent news content 45% of the time
[4] The dangers of using AI to create training course materials
Activica Training Solution
Activica combines solid instructional design principles, creativity, and technology to create unique and innovative training solutions that improve performance.
