
Moving from static e-learning and AI-generated content to competency-driven learning experiences
With over 25 years of experience in learning and development, Dimitris Tolis is the founder and CEO of Human Asset, where he has led the design of custom e-learning, learning academies, and AI-powered learning solutions for European institutions such as EUAA, CEPOL, and EUDA, as well as international organizations such as the Council of Europe, ESM, and UN ITU. As a senior instructional designer, certified executive coach, and AI researcher at the University of Turku in Finland, he integrates instructional design, neuroscience, and educational technology to create practice-based learning experiences that are more human-centered and adaptive. Through initiatives such as gAImify Hub, he is helping to shift the conversation from faster content creation to more meaningful learning design. Today he talks about the opportunities, risks, and future of AI in workplace learning.
A handbook for going beyond AI content generation
To find out how you can apply these ideas in practice, download Human Asset’s playbook.
Based on your experience, what are the risks of current AI use in learning and how can it hinder a meaningful L&D journey?
One of the biggest risks is that AI is being used to solve the wrong problems. It lets you create content faster, but speed alone doesn’t improve learning. Instead, it can lead to content banality at scale: more slides, quizzes, and modules, but less instructional depth, less originality, and a poorer learner experience. It can also create what I call the “little god” effect: the illusion that because content is generated instantly, meaningful learning has also been designed. Without strong instructional design, content bloat and poor quality can take hold quickly.
The second risk is cognitive offloading combined with over-reliance on AI. When learners receive immediate answers, simplified summaries, and predictable feedback, they can become less motivated to learn. As we are already seeing, critical thinking, reflection, and judgment can weaken over time.
Another serious risk is AI hallucinations. Large language models can produce output that sounds fluent, confident, and trustworthy even when it is inaccurate, misleading, or outright wrong. In learning contexts this is especially dangerous, because learners may trust an answer simply because it is well written. Combined with weak review processes, poor prompts, or a lack of instructional guardrails, AI can spread confusion rather than support understanding.
So when AI speeds up content production but flattens learning, it hampers meaningful L&D journeys.
However, my view is optimistic. These are not reasons to back away from AI; they are reasons to design better.
What are the most overlooked opportunities for AI in learning, and why should organizations move from content generation to designing meaningful learning experiences as they implement this new technology?
One of the most overlooked opportunities is that AI can help move from information provision to capacity building. Most organizations still use AI primarily to generate content faster. But the real value lies in designing learning experiences that are more adaptable, more contextual, and more hands-on.
A good example is the role of adaptive quizzes. Quizzes are often just a recall check. With AI, they can become part of the learning process itself: challenge levels adjust dynamically, weak areas are strengthened, and tailored feedback moves learners forward. This turns quiz practice into something richer and closer to actual learning.
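The adaptive loop just described — difficulty rising or falling with recent answers, and weaker topics resurfacing more often — can be sketched in a few lines of Python. This is a hypothetical illustration with invented names and parameters, not an actual product implementation:

```python
import random

class AdaptiveQuiz:
    """Minimal sketch of an adaptive quiz engine (illustrative only):
    difficulty moves up or down per topic based on answers, and weaker
    topics are sampled more often for the next question."""

    def __init__(self, topics):
        # Start every topic at medium difficulty (scale 0-2) and neutral mastery.
        self.difficulty = {t: 1 for t in topics}
        self.mastery = {t: 0.5 for t in topics}

    def record_answer(self, topic, correct):
        # Exponential moving average keeps mastery responsive to recent performance.
        self.mastery[topic] = 0.7 * self.mastery[topic] + 0.3 * (1.0 if correct else 0.0)
        if correct and self.difficulty[topic] < 2:
            self.difficulty[topic] += 1   # raise the challenge
        elif not correct and self.difficulty[topic] > 0:
            self.difficulty[topic] -= 1   # give more support

    def next_topic(self):
        # Sample topics inversely to mastery, so weak areas get strengthened.
        weights = [1.0 - self.mastery[t] + 0.1 for t in self.mastery]
        return random.choices(list(self.mastery), weights=weights, k=1)[0]

quiz = AdaptiveQuiz(["feedback", "coaching", "conflict"])
quiz.record_answer("feedback", correct=True)    # difficulty for "feedback" rises to 2
quiz.record_answer("coaching", correct=False)   # difficulty for "coaching" drops to 0
```

The point of the sketch is the design principle, not the arithmetic: each answer feeds back into both challenge level and topic selection, so the quiz is practicing the learner rather than merely scoring them.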
Another great opportunity is open-ended practice with personalized feedback. Many important workplace skills, such as interviewing, providing feedback, coaching, and handling conflict, cannot be developed through multiple-choice questions alone. Learners need to answer in their own words, make decisions, and reflect on their choices. AI can support this through AI coaching personas that provide more targeted feedback on clarity, reasoning, empathy, tone, and intent.
This is important because meaningful learning doesn’t come from making things easy. It comes from providing the right challenge along with the right support. Aristotle’s insight still applies today: learning requires effort. Real learning and development happen when learners are challenged. And Bloom’s 2 Sigma research reminds us of the value of individualized instruction. AI gives us the opportunity to combine both at scale for the first time.
Finally, AI creates significant opportunities for customization. Instead of one-size-fits-all training, you can shape learning to fit your organization, role, competency, and context. That’s why organizations need to move from content generation to designing meaningful learning experiences.
What is the importance of human-centered AI and human-involved approaches when building competency-driven learning experiences?
Hallucinations, the black-box nature of LLMs, and what I often call a “prompt-and-play” approach are exactly what increase the risks of AI in learning. Simply asking a model to generate content, feedback, or scores without a strong structure may yield output that sounds fluent and persuasive but is not necessarily accurate, relevant, or pedagogically appropriate.
That’s why human-centered AI and human involvement are so important, especially in competency-driven learning. They help move AI from improvisation to disciplined design.
With the right architecture, you can keep your AI focused through specific competency frameworks, scoring rubrics, clear instructional objectives, guardrails, moderation logic, and, of course, human review and approval. This makes a big difference. Instead of letting your AI wander, guide it toward what matters: the skills, behaviors, and standards you actually want your learners to develop.
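To make that architecture concrete, here is a minimal, purely illustrative sketch: AI-generated feedback is gated by an explicit competency rubric, and nothing is published without human approval. All names and thresholds are invented for the example:

```python
from dataclasses import dataclass

# Hypothetical competency rubric: each criterion has a pass threshold in [0, 1].
RUBRIC = {
    "clarity": 0.7,
    "accuracy": 0.8,
    "pedagogical_fit": 0.7,
}

@dataclass
class Draft:
    text: str
    scores: dict           # criterion -> score, e.g. from an automated reviewer
    approved_by: str = ""  # empty until a human expert signs off

def passes_guardrails(draft: Draft) -> bool:
    # The draft must meet every rubric threshold before a human even reviews it.
    return all(draft.scores.get(c, 0.0) >= t for c, t in RUBRIC.items())

def publish(draft: Draft) -> bool:
    # Human-in-the-loop gate: rubric pass AND explicit approval are both required.
    return passes_guardrails(draft) and bool(draft.approved_by)

draft = Draft("Sample feedback for a learner.",
              {"clarity": 0.9, "accuracy": 0.85, "pedagogical_fit": 0.8})
print(publish(draft))            # False: rubric passes, but no human approval yet
draft.approved_by = "lead_designer"
print(publish(draft))            # True
```

The design choice the sketch illustrates is that the rubric and the human sign-off are structural requirements, not optional review steps bolted on afterward.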
In practical terms, this means that while AI can support the experience by generating practice activities, feedback, and adaptations, humans remain responsible for quality, alignment, and trust. The result is a learning environment that is more authentic, more transparent, and more developmentally meaningful.
To me, this is the real value of a human-centered approach. It makes the AI not only more reliable but also more useful: you benefit from speed, responsiveness, and personalization without compromising educational integrity. That balance is essential in competency-driven learning.
Can you describe a typical use case for AI-powered learning transformation from your work?
Yes. A prime example is our work with one of Europe’s leading law enforcement academies, where we are co-designing an AI-powered trainer development program focused on helping trainers strengthen their instructional design and teaching skills.
What makes this case particularly relevant is that the course is designed around two objectives: mitigating the risks of AI, such as hallucinations, over-reliance, poor judgment, and inappropriate pedagogical use, while at the same time unlocking AI’s opportunities for more personalized, adaptive, and practice-based learning.
This transformation is not about adding AI to traditional courses. It’s about redesigning the learning experience itself. We use AI-assisted course design with structured templates, customization to the academy context and trainer role, adaptive quizzes that support practice rather than simple recall, open-ended scenarios with coaching-style feedback, and AI avatar simulations that allow trainers to rehearse realistic conversations and facilitation moments. We also use competency frameworks, rubrics, and human-involved reviews to keep the experience authentic and aligned with academy standards.
What I find most exciting is that projects like this move AI from content generation to capacity building. To me, this is a very powerful example of AI-powered learning transformation. It’s not about speeding up content; it’s about improving learning design.
Do you have any recent development projects, product launches, or other initiatives you would like to share with our readers?
Yes, we are very excited to share one of our most recent and most important initiatives at Human Asset: gAImify Hub.
gAImify Hub is an AI-powered gamified learning platform designed to help organizations create learning that is more adaptive, more practice-based, and more closely tied to real-world workplace performance. What makes this especially important to us is that it reflects a very intentional philosophy: AI shouldn’t just help you create content faster; it should help you design better learning experiences.
The platform integrates AI-assisted course design, contextual customization for organizations and roles, adaptive quizzes, open-ended scenarios with coaching-style feedback, real-time AI avatar simulation, and gamified learning journeys. So instead of relying solely on static e-learning, organizations can create experiences that make learners think, respond, practice, reflect, and improve.
A key part of this innovation is also a human-involved approach. AI supports the design and the learner experience, but learning experts remain in control of review, refinement, and approval. For us, this is essential: it makes the experience more authentic, more relevant, and more aligned with actual learning objectives.
Just as importantly, gAImify Hub is designed with a focus on ethical AI and compliance. This includes responsible use of AI, clear human oversight, and attention to data protection, trust, and governance requirements, including GDPR and broader regulatory requirements. We see this as a necessary foundation for innovation in learning, not an afterthought.
These innovations can be applied in two ways: build new adaptive learning experiences with gAImify Hub, or upgrade your existing SCORM courses with inSCORM AI.
What do you think the future holds for AI in adaptive learning academies?
I believe the future of AI in adaptive learning academies is very promising, but it depends on the choices we make now. The future of AI in education will not be determined by who creates the most content, but by who designs the most meaningful learning.
The most powerful academies will use AI to move beyond static courses and create learning ecosystems that are more adaptive, more practice-based, and more connected to real-world development. They will do more than provide information: they will help learners think, practice, reflect, receive feedback, and improve over time.
For me, one principle is essential: AI should make learning more challenging and engaging, not easier in the wrong way. It should not reduce effort or encourage passive dependence. It should help create the right kind of challenge at the right time, with the right support. This is where adaptive learning becomes truly powerful.
I also believe that academies will become smarter in how they support their learners. We will see stronger use of adaptive assessments, open-ended scenarios, simulation-based practice, and feedback loops that make development more visible and more personalized.
At the same time, the best academies will remain human-centered, combining AI with strong instructional design, ethical guardrails, and human judgment.
So I’m optimistic. I think AI gives academies a real opportunity to evolve from content libraries into living environments for growth, reflection, and performance. That’s the future I find exciting.
Summary
Special thanks to Dimitris Tolis for sharing his insights into the potential risks and opportunities of using AI to create personalized and adaptive learning experiences. If you would like to learn more about this topic, check out Human Asset’s guide, AI in Workplace Learning: From Content Generation to Meaningful Learning Design.
