
AI reveals competency issues
Most organizations say they are preparing for AI. In reality, most do a much narrower set of things: they give people access to tools, offer introductory sessions, and encourage experimentation. This generates activity, but it does not necessarily create competence, and that distinction matters. Adopting AI is more than introducing new tools into the workplace; it reveals whether the organization understands how capability is actually built, supported, and applied in real work. In many cases, it is not. That is why much of the current response feels incomplete. Leaders sense urgency. Employees are experimenting. Learning teams are under pressure to act quickly. Yet much of what gets released still rests on shaky assumptions about how performance actually improves.
Mistakes many organizations make
A common pattern emerges. New pressure appears, AI becomes a hot topic, employees need to "upskill," and a course is commissioned. Alternatively, some organizations, reacting to course fatigue, argue that learning should simply happen in the flow of work. Both answers can miss the point.
The question isn’t whether the answer is a course, a resource, a prompt library, or a workflow tool. The question is whether the organization correctly understands what problem it is trying to solve. All too often, three very different needs get blurred together.
Building competency before performance. Supporting recall during performance. Solving problems for which learning was never the right lever in the first place.
When these distinctions are not clear, organizations tend to choose solutions based on habit, convenience, or familiarity rather than on what performance actually requires.
Why "flow of work" conversations are oversimplified
Workflow support is helpful. In many cases it is essential. But it is no substitute for capability. Checklists support recall. Quick guidance reduces friction. Job aids help people run known processes more reliably. These tools are valuable when the capability already exists and the real problem is access, consistency, or memory at the moment of need. They are far less effective when the work requires people to judge, prioritize, make trade-offs, or act under pressure.
People cannot rely on just-in-time support to build capabilities they do not already have. They can only take advantage of that support if sufficient underlying competence already exists. This matters even more in AI-related work. Access to AI will not improve performance unless employees understand what good output looks like, where the risks are, what requires escalation, and when human judgment must override the tool. Without that understanding, access may simply help them make bad decisions faster.
AI literacy is not a matter of tool proficiency
Many AI literacy efforts focus too much on platforms and prompts. That’s understandable, but it’s not enough. The more important questions are practical and role-based.
Which tasks should AI support here? Which decisions still require human judgment? What information can and cannot go into the tool? What kind of output is acceptable for this function? When is review, approval, or escalation required?
Without that clarity, employees are forced to improvise. Some avoid AI because the boundaries are unclear. Others use it too casually because the guardrails are weak. Either way, the organization becomes inconsistent rather than competent. This is why AI literacy should not be treated as a general awareness topic. It needs to be defined in relation to real work, real decisions, and real performance standards.
Better questions for L&D and business leaders
Rather than asking "Should this be a course?" or "Can this be supported in the workflow?", leaders should ask: "What is the least intrusive way to achieve the level of competency the job actually requires?"
That question changes everything. In some cases, competency must be built before performance, so structured practice, simulation, coaching, or guided application may be the answer. In other cases, performance support may be the answer, because the capability already exists and only needs reinforcement or recall. And sometimes the answer is neither, because the real problem is unclear processes, poor system design, weak management, or undefined expectations.
Many organizations still struggle with this. They rush to create learning assets without first deciding what needs to be built, what can be supported, and what needs to be solved elsewhere.
What AI is really revealing
AI acts as a stress test. It exposes whether organizations can distinguish between information and judgment, between support and skill, and between activity and capability. It also surfaces old problems that existed long before AI. Many organizations do not have a content problem. They have a clarity problem. They have not clearly defined:
What great performance looks like. Which decisions matter most. What capability must exist up front. Where support alone is enough. Who is accountable.
When these questions remain vague, learning teams are often asked to solve the wrong problem. More content is created. More resources are pushed into the workflow. More awareness is delivered. Yet the underlying performance issue remains.
What this means for learning and development
This moment isn’t just about moving faster or producing more. It’s about being more accurate. For L&D, that means resisting two equal and opposite mistakes: defaulting to a course for every problem, and overcorrecting by treating workflow support as the answer to everything.
The more strategic role is to help organizations make better intervention decisions. It starts with a few practical questions.
What performance needs to improve? What capability must already exist at the moment of need? What can be supported in the moment, and what must be built in advance? Is this really a learning problem at all?
These questions are simple, but they force better choices.
Final thoughts
AI isn’t just changing the tools people use. It is raising the bar on how organizations think about capability. Access is not capability. Information is not judgment. Support is not preparation. The organizations that respond well will not be the ones that rush to create AI content or push more resources into their workflows. They will be the ones that are clearer about what competent performance requires, more disciplined about how competency is built, and more selective about when learning is the answer in the first place. That is a harder response. It is also a far more valuable one.
