
Tips for leveraging AI to prepare for the next 12 months
Over the past two years, AI has gone from an “interesting experiment” to something organizations quietly rely on. McKinsey’s 2025 State of AI survey found that roughly four in five companies now use AI in at least one business function, and more than 70% regularly use generative AI in their operations. However, fewer than one-third follow most scaling best practices, and fewer than one in five track clear KPIs for their generative AI solutions.
Something similar is happening within learning teams. A recent ATD/Clarity survey reports that 80% of instructional designers are already using AI tools, and nearly two-thirds only started using them in the past year. Training consultancy Clarity Consultants notes that AI has arrived ahead of the governance, workflows, and skills needed to use it effectively.
So over the next 12 months, I don’t think the question will be, “What’s the next big AI trend?” The better question for learning leaders is: How do you turn all this experimentation into lasting value for your learners, your business, and the people on your team?
Setting clear ambitions for AI in learning, not just “experiments”
Most organizations are experimenting with AI, but few have a clear picture of what “good” looks like. In McKinsey’s research, 88% of respondents say their organization uses AI, but only about 39% report an impact on EBIT from those efforts. Many organizations are stuck in pilot mode: a flurry of trials with no clear destination.
For learning leaders, this is the year to decide what AI is for in your function. Is your primary goal efficiency (speeding up course development, reducing subject matter expert (SME) time), reach (serving more learners with the same number of people), quality (improving personalization and practice), or a combination of these? High-performing organizations in McKinsey’s research aren’t just chasing efficiency; they clearly link AI to their growth and innovation goals.
In practice, this means setting two or three specific 12-month outcomes. Examples: “Reduce average development cycle time by 30%,” “Launch AI-assisted support for the five most critical workflows,” or “Upskill 70% of managers on responsible AI use.” Clear ambition lets teams say “yes” to the right pilots and “not now” to shiny distractions.
Redesign one workflow at a time
One of the strongest findings from McKinsey’s 2025 study is that redesigning workflows, not just dropping AI into existing ones, is the practice most strongly correlated with bottom-line impact. Yet only one in five organizations using generative AI say they have fundamentally redesigned even some of their workflows.
For learning teams, this is the sweet spot. Course development and maintenance is a structured, repeatable process, and that’s exactly where AI can help when applied intentionally. Start by mapping one high-value workflow end to end, for example, “from SME sign-off to published e-learning” or “from policy change to updated microlearning.” Identify every step where people are currently copying and pasting, reformatting, summarizing, or paraphrasing. Then select a small number of AI interventions that don’t just embellish the flow but change it. That might mean using AI to:
Convert SME interview transcripts into structured outlines.
Generate a first-draft storyboard and quiz bank.
Suggest variations for different audience segments.
Run one or two of these as real pilots with clear before-and-after metrics (cycle time, SME hours, number of revisions). The goal for the next 12 months is not to “AI everything.” It’s to prove, in one or two workflows, that AI plus redesign genuinely improves speed and quality.
Increase AI fluency across learning teams
AI is now part of the everyday toolkit of most learning professionals, but fluency varies widely. ATD research found that 80% of instructional designers are using AI, yet many cite skill gaps and uncertainty about how to use it properly. In other words, adoption has outpaced confidence. As a learning leader, treat AI fluency as a core capability rather than an optional experiment. Over the next year, focus on three tiers:
Basics for everyone
A common language about what AI can and cannot do, basic prompt patterns, and clear rules around data privacy, confidentiality, and bias.
Role-specific patterns
Designers need patterns for storyboarding, evaluation design, and adaptation. Facilitators need session designs, reflection prompts, and feedback patterns. Coordinators need communication and logistics patterns.
Quality and ethics guardrails
A simple checklist for reviewing AI output (accuracy, comprehensiveness, alignment with learning science) and clear examples of when human judgment is essential.
You don’t need a polished “AI academy” to get started. A few short in-house clinics, a shared prompt library, and side-by-side examples can move your team from hesitant experimentation to confident, critical use.
Co-create safely with the business
One of the biggest changes I’m seeing is that learning content no longer starts only within L&D. Subject matter experts, operational leaders, and even front-line managers now have access to the same AI tools as designers. External data shows the same pattern: a recent Gallup poll, reported by Business Insider, found that approximately 23% of U.S. workers now use AI at least several times a week, nearly double the rate in mid-2024. Learning leaders can channel and take advantage of this rather than fight it. Over the next 12 months, consider how to:
Provide simple, AI-enabled templates the business can use to draft scenarios, SOP walkthroughs, or quiz questions.
Define what a “good enough first draft” from the business looks like.
Keep L&D responsible for curation, consistency, and the instructional layers that AI and SMEs cannot fully handle.
This is not unsupervised “crowdsourced training.” Others will create content either way; the point is to give them a safer and more effective path for doing so. Done well, it reduces backlogs, brings expertise closer to learners, and lets your team focus on the high-stakes work that only they can design.
Evolve how you measure value beyond completions
Executives are asking increasingly tough questions about AI investments. According to McKinsey’s 2025 study, AI adoption is widespread, but few organizations see a clear impact on EBIT, and few track specific KPIs for generative AI. Learning teams are not exempt from these expectations. Over the next 12 months, focus on three measurement shifts:
From quantity to capacity
Track how AI changes your capacity: not just the number of courses shipped, but cycle times, SME hours per project, assets maintained per designer, and responsiveness to urgent requests.
From satisfaction to readiness
Keep the traditional metrics (NPS, completions) and add simple measures of time to proficiency and performance on important tasks, especially for AI-supported programs.
From AI usage to AI impact
Stop tracking vanity statistics like “number of prompts used.” Instead, tie AI engagement to tangible outcomes, such as faster updates after policy changes, faster onboarding of new employees, and fewer errors in critical processes.
The goal is not to box the team in with metrics. It’s to build a story you can share with senior leaders about how AI is changing what the learning function can deliver for the business.
Make learners AI-ready without making them AI-reliant
While we focus on our own tools and workflows, the reality for our learners is changing as well. Research in education and the workplace shows rapid adoption of AI tools by teachers, students, and employees, often without clear guidance. For example, TALIS 2024 data suggests roughly one in three teachers worldwide already use AI, primarily for tasks such as summarizing topics and creating lesson plans. In higher education, studies report that more than half of students and faculty have used tools such as ChatGPT for academic work. For learning leaders, this is both an opportunity and a responsibility. Over the next year, consider how you can:
Incorporate AI literacy into your leadership, onboarding, and professional development programs.
Model responsible AI use in your own learning experiences (e.g., show how to validate AI output rather than presenting it as truth).
Teach learners to use AI for reflection, planning, and practice, not just for shortcuts.
The goal is not to police every interaction with AI. It’s to help people build the habit of treating AI as a thoughtful co-pilot rather than a crutch, and to keep critical thinking, ethical judgment, and expertise firmly in human hands.
Making AI work for learning in the next 12 months
If last year was about discovering what AI can do in learning, the next 12 months are about deciding what we want it to do. That means setting clear goals, redesigning at least one key workflow, building AI fluency across your team, opening up content creation to the business in a controlled way, measuring what actually matters, and giving your learners the skills to work with AI themselves.
None of this requires a moonshot implementation. It requires leadership: a willingness to move beyond scattered pilots and treat AI as part of how the learning organization operates. If you can use this year to make two or three of these changes real, you will be in a much stronger position when the next wave of tools, agents, and platforms arrives, because your foundations and your people will already be ready.
LEAi by LearnExperts
Drawing on decades of experience building training programs, LearnExperts provides AI-enabled tools that let clients quickly and efficiently create learning and training content and exam questions to transfer knowledge and develop skills.
