
What does good learning design look like?
There is a type of e-learning module most of us have taken. It opens with an overview of the regulation, proceeds through a series of bulleted obligations, and ends with a ten-question quiz that tests your ability to remember what was on the screen. Completion gets logged, and the organization is marked compliant. This approach has always been a poor substitute for learning. Under the EU AI Act, it is also a compliance risk.
The problem isn’t effort or intent. It’s the design. Most compliance e-learning is built around conveying information rather than changing behavior. These are different problems that require different solutions, and the learning science on this has been consistent for decades.
Transfer, the ability to apply learning in new contexts, does not happen automatically after exposure to content. Research on context-dependent memory shows that recall is cued by the environment in which learning took place. People who read slides about the AI Act’s requirements are most likely to recall that information while sitting in front of slides. The place they are least likely to recall it is under pressure in a meeting, trying to decide whether to flag an AI tool to the compliance team.
Spaced practice, returning to content over time rather than covering it once, consistently beats single-session training for long-term retention. Yet most compliance programs are built as one-time events, often timed to regulatory deadlines rather than learning curves. The result is training that certifies completion rather than competence. That distinction matters for a regulation that explicitly requires staff to have adequate AI literacy.
What Article 4 actually requires from a learning design perspective
Article 4 of the EU AI Act requires providers and deployers of AI systems to take measures to ensure, to their best extent, a sufficient level of AI literacy among their staff. The regulation does not specify training hours, module formats, or assessment methods. It specifies an outcome. That is worth sitting with, because most L&D teams instinctively read regulatory wording as a constraint rather than as a brief.
The regulation asks whether employees are literate enough to engage appropriately with the AI systems they encounter in their roles. That is a question instructional design can answer. What “appropriate literacy” looks like for a procurement manager reviewing an AI-generated supplier risk score is different from what it looks like for an HR manager using an AI-assisted resume screening tool. These are different learning problems, and a single generic module cannot address both.
The instructional implication is a shift from program-level thinking to role-level thinking. Before designing a single slide, the learning design question to ask is: what decisions does this person need to make, and what do they need to understand to make them well?
This is standard task analysis applied to AI literacy. The AI Act does not require a compliance course; it requires that people be able to do something. Specifically, they need to approach AI systems with enough understanding to recognize risks, ask the right questions, and escalate when necessary. Instructional designers already know how to design for that. The regulatory framing should not obscure that this is familiar work.
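As a sketch of what role-level task analysis might produce, the structure below maps two hypothetical roles to the decisions they face and the understanding each decision requires. The roles, decisions, and knowledge items are illustrative assumptions, not a prescribed taxonomy.

```typescript
// Hypothetical sketch of a role-level task analysis for AI literacy.
// The roles, decisions, and required understanding are illustrative
// examples, not derived from the AI Act itself.

interface RoleTaskAnalysis {
  role: string;
  decisions: {
    decision: string;                 // what this person must decide
    requiredUnderstanding: string[];  // what they must grasp to decide well
  }[];
}

const taskAnalyses: RoleTaskAnalysis[] = [
  {
    role: "Procurement manager",
    decisions: [
      {
        decision: "Accept, question, or override an AI-generated supplier risk score",
        requiredUnderstanding: [
          "What inputs the scoring model uses and what it ignores",
          "How to recognize a score that warrants human review",
          "When, and to whom, to escalate a concern",
        ],
      },
    ],
  },
  {
    role: "HR manager",
    decisions: [
      {
        decision: "Follow or challenge an AI-assisted resume screening shortlist",
        requiredUnderstanding: [
          "That screening tools can encode bias from historical hiring data",
          "Why a human decision-maker must stay in the loop",
          "How to document a challenge to the tool's output",
        ],
      },
    ],
  },
];
```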
Scenario design: give the learner the decision, not the lecture
If Article 4 is an outcome specification, scenario-based design is the obvious delivery mechanism. The goal is not to teach the regulation. It is to build the judgment that lets learners act correctly in the situations they actually encounter.
Effective scenario design for AI compliance starts with realistic workplace situations: concrete moments your target learners will face, rather than abstractions about “companies using AI.” A recruiter who receives a ranked shortlist from an AI screening tool and must decide whether to follow it. A customer service team lead whose AI system flags customer interactions for review. An analyst asked to present AI-generated predictions to a board of directors without the model documentation in hand. Each of these is a decision point rather than an information point. The scenario’s job is to place the learner inside the choice, apply enough contextual pressure that the choice feels real, and reveal the outcomes of different paths.
Branching is essential here, but branching alone is not enough; many branching scenarios are just multiple routes to the same end screen. Branches should reflect the actual range of reasoning learners bring to the situation: one for learners who follow the AI’s output uncritically, one for learners who escalate appropriately, and one for learners who notice the problem but handle it badly. That last path is the most instructionally valuable, and it is the one most often omitted.
The error paths are where the learning happens. When learners choose a wrong branch, they need to experience why it was wrong: not through immediate corrective text, but through consequences that play out afterwards. Realistic follow-ups, such as complaints, audit questions, and colleagues pushing back, tied directly to the decision they made.
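To make this concrete, here is a minimal sketch of how one decision point, its three branches, and their consequence paths might be modeled. The node IDs, scenario text, and reasoning tags are hypothetical; this is one possible representation, not a prescribed format.

```typescript
// Minimal sketch of a branching-scenario graph. All scenario content,
// node IDs, and reasoning tags are hypothetical examples.

type Reasoning =
  | "uncritical-acceptance"   // followed the AI's output without question
  | "appropriate-escalation"  // raised the issue through the right channel
  | "flawed-handling";        // noticed the problem but handled it badly

interface Choice {
  label: string;        // what the learner chooses
  reasoning: Reasoning; // the line of reasoning this choice represents
  next: string;         // ID of the consequence node it leads to
}

interface ScenarioNode {
  id: string;
  situation: string;    // the decision point or consequence shown
  choices: Choice[];    // an empty array marks an end state
}

const screeningScenario: ScenarioNode[] = [
  {
    id: "shortlist",
    situation:
      "The AI screening tool returns a ranked shortlist. The top candidate " +
      "looks weaker to you than two candidates it ranked lower. What do you do?",
    choices: [
      { label: "Interview the candidates in the order listed",
        reasoning: "uncritical-acceptance", next: "complaint-lands" },
      { label: "Flag the ranking to the hiring manager and compliance team",
        reasoning: "appropriate-escalation", next: "review-opens" },
      { label: "Quietly reorder the shortlist yourself and move on",
        reasoning: "flawed-handling", next: "audit-question" },
    ],
  },
  // Consequence nodes ("complaint-lands", "review-opens", "audit-question")
  // take the same shape, each playing out the realistic fallout of the choice.
];
```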
This takes more production time than a slide-based module. It also produces meaningfully different results: learners who practice making decisions in context are more likely to make the right decision when the context recurs. That is not a design philosophy. It is what the research on transfer predicts.
For AI Act programs in particular, the most productive scenario themes center on a handful of core decision types, such as how to determine whether an AI system is being used within its intended purpose, and how to escalate a concern without having the full technical picture. These are not knowledge questions. They are judgment questions, and judgment requires practice.
Measuring what the regulation actually asks for
Completion rates are not learning outcomes; they are participation metrics. For many compliance programs, that was never a problem: the regulatory requirement was satisfied by evidence that the employee completed the module. Article 4 complicates this, because the outcome it specifies is not completion. It is capability.
Assessment design for AI Act programs should therefore test application rather than recall. “What is the definition of a high-risk AI system?” tests memory. A question that poses a situation tests judgment: “Your procurement team wants to use an AI tool to score supplier contracts. What should happen before you approve it?” The two are not equivalent, and assessments built from the first type produce no evidence of the second.
From a design standpoint, this means building assessment scenarios that are structurally similar to, but distinct from, the learning scenarios. Learners should not experience assessment as a repeat of content they have already seen. They should encounter a situation they have not specifically practiced and demonstrate that they can reason through it correctly.
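One way to keep that discipline in the build is to tag learning and assessment items with a shared competency while keeping the surface content distinct. The sketch below is a hypothetical illustration; the tag names and item text are assumptions.

```typescript
// Hypothetical sketch: a learning item and an assessment item share a
// competency tag but never share surface content, so the assessment
// tests transfer rather than recognition.

interface ScenarioItem {
  competency: string;               // e.g. "escalate-out-of-scope-ai-use"
  phase: "learning" | "assessment";
  situation: string;
  correctAction: string;
}

const practiced: ScenarioItem = {
  competency: "escalate-out-of-scope-ai-use",
  phase: "learning",
  situation:
    "A chatbot built to answer product FAQs is being used to draft refund decisions.",
  correctAction: "Flag the repurposed use to the compliance team before it continues.",
};

const assessed: ScenarioItem = {
  competency: "escalate-out-of-scope-ai-use",
  phase: "assessment",
  // Same underlying judgment, new surface: different tool, different team.
  situation:
    "A demand-forecasting model is being used to set individual sales targets.",
  correctAction: "Flag the repurposed use to the compliance team before it continues.",
};
```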
For programs that need to demonstrate compliance, performance data from scenario-based assessment is far more defensible than certificates of completion. A record of a learner correctly identifying and escalating a high-risk AI use case under assessment conditions is evidence of competence. A record showing that someone clicked through 12 slides and scored 80% on a recall quiz is evidence of attendance.
Instructional designers should make this case to their compliance and legal colleagues early. Designed correctly, the standard of evidence L&D can produce is stronger than what most organizations are currently generating.
The documentation layer that L&D keeps ignoring
AI Act compliance programs carry a design requirement most L&D teams have not yet had to face: audit trails. Regulatory compliance demands not only that training happens, but that the right training reaches the right people and that records exist to prove it. For programs built in a standard LMS, this is often treated as an automatic byproduct: the system logs completions, so documentation exists.
That is insufficient for several reasons. First, a completion log does not record what was completed, only that something was. If your program is later questioned by a regulator, an auditor, or an internal review, you will need to show that the learning content was appropriate to the learner’s role and to the AI systems they use. A generic module logged in the LMS does not demonstrate that.
Second, if your program uses branching scenarios, the most valuable documentation is the path data, not just the completions. Which decisions did the learner make? How many attempts did they need to pass the assessment? Was a remediation pathway triggered? This information is evidence of genuine engagement with the learning, and it is rarely captured by default.
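Where the platform supports it, that path data can be captured as structured experience records. The sketch below shows one plausible shape for such a record, loosely modeled on an xAPI-style statement; the verb URI, activity IDs, and extension keys are illustrative assumptions, not a prescribed schema.

```typescript
// Hypothetical decision-path record, loosely modeled on an xAPI-style
// statement. URIs, IDs, and extension keys are illustrative only.

const pathRecord = {
  actor: { name: "A. Learner", mbox: "mailto:a.learner@example.com" },
  verb: {
    id: "http://adlnet.gov/expapi/verbs/responded",
    display: { "en-US": "responded" },
  },
  object: {
    id: "https://lms.example.com/activities/ai-act/shortlist-decision",
    definition: { name: { "en-US": "AI screening shortlist decision" } },
  },
  result: {
    success: false,
    response: "uncritical-acceptance", // which branch the learner took
    extensions: {
      "https://lms.example.com/xapi/attempt-number": 2,
      "https://lms.example.com/xapi/remediation-triggered": true,
    },
  },
  timestamp: new Date().toISOString(),
};
```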
Designing that documentation is not a legal task. It is a design task: specifying up front what data the LMS or learning platform needs to capture, and making sure the program architecture actually produces it. That conversation between the instructional designer and the LMS administrator should happen before the build, not after launch.
What does “appropriate” actually mean for instructional designers?
The word “appropriate” appears 17 times in the EU AI Act. For legal teams, that ambiguity is a headache. For instructional designers, it is design space.
“Appropriate” AI literacy resists a single definition because it cannot be defined centrally. What is appropriate for a radiologist using AI diagnostic tools is not what is appropriate for a warehouse worker whose shifts are scheduled by an algorithm. The regulation forces organizations to make contextual judgments, and those judgments are fundamentally instructional design decisions: who needs to understand what, in order to act how?
Organizations that treat Article 4 as a checkbox will build the cheapest module that satisfies the narrowest reading of the requirement. Organizations that read it as a design brief will build role-differentiated programs grounded in realistic scenarios, assessed on demonstrated judgment, and documented in a way that stands up to scrutiny. The second approach takes more skill. It also produces training that actually works, and that is what matters in the long run.
Regulatory ambiguity is not a reason to wait for clearer guidance. It is a reason to apply sound instructional design and to document the rationale. The compliance case is strong when learning objectives trace to a specific role, a specific set of AI interactions, and specific performance criteria, and when assessment evidence shows that learners meet those criteria. That is exactly what instructional designers are trained to build. The AI Act has simply made it mandatory.
