
How I unified 16 instructional design frameworks into one
Back in January, the AI and I had somehow swapped jobs. I was doing the office work: formatting, document structuring, lots of copy-pasting. Claude had the creative career: exercising judgment, estimating the emotional impact of content, drawing insights from real-life experience.
Worse, I didn’t know how to ask the AI to check what it was producing. “Please review the whole thing through an ADDIE lens and tell me where the alignment is off.” Or should I start small, just making sure the objectives follow Bloom’s Taxonomy? Wait, don’t forget Universal Design for Learning and accessibility checks…
After a few weeks of constant confusion and frustration, I stepped back and asked myself a question.
Is there an evidence-based framework that guides instructional designers in integrating AI with instructional design best practices throughout the course development process?
Not a framework for teaching students about AI. Not a list of AI tools or prompts to try. An actual methodology that instructional designers can use when designing and building courses with AI.
I went looking for it and what I found was a gap.
What existed and what did not exist
The field of instructional design has no shortage of long-standing frameworks: ADDIE, SAM, Action Mapping, Bloom’s Taxonomy. All are easy to recognize, and for good reason: they work.
Meanwhile, AI-assisted education is developing day by day. We now have frameworks for teaching with AI in the classroom, guidelines for students to use AI tools ethically, and methods for building AI literacy.
Real progress is also being made in AI-integrated learning design: adaptations of ADDIE that add AI tools to existing phases, and content-generation frameworks such as GAIDE (Generative AI for Educational Development).
And yet a gap remained. Instructional designers lacked a systematic methodology for deciding when to use AI and when to stay human throughout the development process. We lacked a framework that would give AI a reliable, research-based foundation to support our work.
16 frameworks, 50+ principles, 1 problem
So I decided to build what I couldn’t find. I started by returning to the frameworks I so often felt stuck choosing between. We already trust and use them; the question was which ones to apply with AI, and when, to produce the best results.
I settled on 16 of them.
Four process frameworks: ADDIE, SAM, Backward Design, and Action Mapping.
Twelve learning science frameworks: Bloom’s Taxonomy, Gagne’s Nine Instructional Events, Merrill’s First Principles, Cognitive Load Theory, Mayer’s Multimedia Principles, Universal Design for Learning, ARCS Model, Constructivism, Social Learning Theory (Bandura), Experiential Learning (Kolb), Scaffolding Principles (Vygotsky), WCAG/Accessibility Standards.
Together they cover the core disciplines of instructional design methodology and learning science.
Together, these 16 frameworks contain over 50 individual principles and guidelines. No wonder we all pick and choose, using only a handful at a time: no instructional designer, and no AI tool, can meaningfully apply 50+ principles during the design process.
Here’s the part I found most interesting: many of the principles overlap. Backward Design starts with the goal, just as Action Mapping starts with performance objectives. Cognitive Load Theory’s concern with extraneous load maps directly onto Mayer’s coherence principle. The frameworks weren’t contradicting each other; they were saying the same things in different ways.
The result: 21 principles, 5 phases
Through systematic analysis, I deduplicated those 50+ principles down to 21. Each principle traces back to one or more source frameworks, but each is fully distinct from the others.
Feedback now sounds like this: “The objective says that learners will ‘evaluate’ treatment options, but the assessment is multiple-choice discrimination. What would this activity look like if evaluation were actually required? (Principle 5: Align activities to cognitive level).” When you receive that feedback, you know it is built on Bloom’s Taxonomy (aligning assessments and activities to a hierarchy of cognitive levels), Backward Design (matching assessments to goals at the appropriate cognitive demand), and Merrill’s First Principles (evaluation requires demonstration at the specified performance level). You get a practical insight and the academic credibility to back it up.
The 21 principles are organized across five workflow phases that reflect how instructional designers actually work.
Phase 1: Planning
Phase 2: Structural design
Phase 3: Experience design
Phase 4: Design formatting
Phase 5: Review
Each of the 21 principles lives in a specific phase, and some recur across multiple phases. This lets designers and AI invoke the right principles at the right time, trims extraneous cognitive load, and keeps the system flexible and adaptable.
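To make the phase-to-principle mapping concrete, here is a minimal sketch in Python. The article doesn’t enumerate all 21 principles, so the names, sources, and phase assignments below are hypothetical illustrations, not the actual framework:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Principle:
    number: int
    name: str
    sources: tuple[str, ...]  # frameworks this principle traces back to

# One real-sounding entry (taken from the article's example); the other 20
# would follow the same shape.
PRINCIPLES = {
    5: Principle(5, "Align activities to cognitive level",
                 ("Bloom's Taxonomy", "Backward Design",
                  "Merrill's First Principles")),
}

# A principle lives in a primary phase but may recur in others.
# These assignments are illustrative only.
PHASES = {
    "planning": [5],
    "structural_design": [5],
    "experience_design": [],
    "design_formatting": [],
    "review": [5],
}

def principles_for(phase: str) -> list[Principle]:
    """Return only the principles relevant to the current workflow phase."""
    return [PRINCIPLES[n] for n in PHASES[phase]]
```

Scoping lookups by phase is what keeps the working set small: a designer (or an AI prompt) in the planning phase only ever sees planning-phase principles.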
Why 21 and not 50?
You might be thinking, “If an AI can hold unlimited information, why distill 16 frameworks down to 21 principles? Why not just give it all 50+ raw principles and let it classify them?”
The reason is that this framework is not exclusive to AI. It’s for designers.
Dump 16 raw frameworks into an AI prompt, ask it to review your course module, and you’ll get feedback that references Gagné’s Event 4, Merrill’s Application Principle, and Bloom’s Level 3. You can’t tell whether those are three separate problems or the same problem stated three different ways. There is too much information and not enough structure to evaluate what the AI is saying.
This synthesis solves three problems at once. Because the AI refers to specific numbered principles, its output becomes interpretable: feedback can actually be understood and meaningfully evaluated. Redundancy is removed, so four different frameworks don’t report the same problem four times. And the designer stays in the driver’s seat: 21 is a realistic number of principles for a professional to internalize over time, which means you are always in a position to understand, and to question, the AI output you receive.
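One way to keep AI feedback anchored to numbered principles is to build the review prompt from the active phase’s principle list. This is a hypothetical sketch, not the author’s actual prompt; the principle wording is illustrative:

```python
# Illustrative rubric; the framework's real principle text is not published
# in the article.
PRINCIPLES = {
    5: "Align activities to cognitive level",
}

def build_review_prompt(phase: str, principle_numbers: list[int],
                        draft: str) -> str:
    """Assemble a review request that forces principle-numbered feedback."""
    rubric = "\n".join(f"{n}. {PRINCIPLES[n]}" for n in principle_numbers)
    return (
        f"You are reviewing a course draft in the {phase} phase.\n"
        f"Evaluate it ONLY against these principles, and cite the "
        f"principle number for every issue you raise:\n{rubric}\n\n"
        f"Draft:\n{draft}"
    )
```

Because the rubric is injected per phase, the model is steered toward citing “Principle 5” rather than name-dropping three frameworks for one issue.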
Four core philosophies
As I researched and built, four foundational ideas emerged alongside the 21 principles, each important for guiding and checking decisions. They became the four core philosophies of the framework.
Resources vs. Experience
Is this information that learners will need to refer back to later, or an experience designed to change behavior and build skills? Be clear before you build anything. The answer determines the format, the complexity, the level of interactivity, everything. Without this, you end up overbuilding references that should be simple, or underbuilding experiences that need depth.
Office work vs. creative work
Is this task mechanical, or does it require human judgment? Let AI handle compliance checks, pattern tracking, and holding the framework in working memory. Let humans exercise judgment, make the design decisions, and collaborate with the team. Without this, we end up wasting human energy on tasks AI is better at, or leaving AI to make decisions that only humans should make. Capability and suitability are not the same thing.
Learner reality test
Would a real person, in a real situation, with real constraints, find this usable and valuable? Design for a real audience, not an ideal one. Without this, you end up building courses that impress designers but frustrate learners.
Evergreen test
Will this still work if the delivery method changes, if the owner or context changes, or if someone else needs to update it? Design with longevity in mind. Without this, you end up building static learning experiences chained to one tool, one person, or one moment in time.
These philosophies are the decision-making lenses that determine how and when to apply the principles. The principles tell you what to check; the philosophies tell you how to think about what you’re building before you ever reach the principles.
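The four philosophies can be treated as a pre-build checklist: questions that must be answered before any principle is applied. A small illustrative sketch (the question wording is mine, not the framework’s):

```python
# Hypothetical phrasing of the four philosophies as yes/no-answerable
# questions to settle before building.
CHECKLIST = {
    "resource_vs_experience":
        "Is this a reference resource or a behavior-changing experience?",
    "office_vs_creative":
        "Is this task mechanical (give it to AI) or judgment-driven (human)?",
    "learner_reality":
        "Would a real learner, with real constraints, find this usable?",
    "evergreen":
        "Will this survive a change of tool, owner, or moment in time?",
}

def prebuild_review(answers: dict[str, str]) -> list[str]:
    """Return the philosophy questions still unanswered before building."""
    return [q for key, q in CHECKLIST.items() if not answers.get(key)]
```

Running the checklist first mirrors the article’s point: the lenses come before the principles.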
What this means for the field
This framework is designed to be system and tool agnostic. It works regardless of your LMS, your organizational context, your specific compliance requirements, or your chosen AI tool. These are implementation details. Principles and philosophies are universal.
I’m currently testing the framework on real projects, tracking what happens when design decisions rest on all 21 principles and all four philosophies rather than whatever subset you can hold in memory at a given moment.
Early results are promising. Issues that typically surface during QA reviews are discovered during the design phase. Feedback from AI tools is organized around a common set of principles, making it easier to interpret. Perhaps most importantly, the process feels more intentional. My course materials feel more solid, more consistent, and more impactful to my learners. I feel like my design skills are already improving by getting constant feedback with clear explanations and evidence-based reasoning.
Taking established, trusted learning science and making it systematically accessible through AI collaboration is what has been missing in the field. Rather than replacing what we already know with something new, we use the power of AI to make what we already know even more useful. There will be more to come as the framework grows, but I’d say the gap has been closed.
