
Onboarding ends. Development shouldn't.
There is a document that almost all new employees receive in their first week. The names differ, but the structure is always the same: 30 days to learn the product, 60 days to manage your first accounts, 90 days to operate independently. The 30-60-90 plan is one of the most widely used onboarding tools in professional environments, and it's not a bad framework. The problem isn't the document itself. The problem is what the organization assumes once it's complete: that the person is now fully onboarded, and development can stop.
I come from an L&D background and currently lead a Customer Success (CS) department. That combination offers an uncomfortable view of the same problem from both sides. From the L&D side, I understand why event-based models persist: they are measurable, they produce deliverables, and they give the business tangible metrics. From the CS side, I know what they cost. After day 90, team members navigate promotions, product changes, and increasingly complex accounts without a comparable support structure. Onboarding ends. Development doesn't.
The metrics don't stop at day 90
The pressures on L&D teams right now are significant. Executives want to reduce time to productivity and time to proficiency. They want new employees to contribute faster, ramp more smoothly, and stay longer. These are legitimate business demands, and they are exactly the right things to measure.
But time to proficiency is not a 90-day metric. It recurs at every turning point in a professional's career. When someone is promoted, they enter a new level with new expectations and a new proficiency gap. When a product changes significantly, the entire team faces the same gap at once. When professionals take on new types of accounts, new markets, or new leadership responsibilities, they are functionally onboarding again. The organization has a commercial interest in closing that gap every time, not just in the first quarter of employment.
The event model of onboarding is more than a learning design problem. It creates a recurring performance gap that no one officially measures, because we stop counting at day 90.
Why the right model has always been hard to deliver
The alternative is what I call persistent onboarding: the recognition that development is a continuous cycle, and that the support infrastructure built for new hires should, in principle, apply to every meaningful transition a professional makes during their tenure.
Most L&D practitioners instinctively understand this. The reason it isn't the default model is not intellectual; it is operational. Providing personalized, contextual development support to every employee, at every stage of their career, at the moment they need it, has always been a headcount problem. A manager cannot continuously coach six people at once, each at a different level and facing different challenges. So organizations design programs for the average person at the average stage, run them on a schedule, and measure completion, because completion is something that can be counted.
The results are exactly what Josh Bersin's research consistently shows: completion rates rise, but performance outcomes don't follow. The learning infrastructure is optimized for the metrics it can capture, not the outcomes the business actually cares about. I saw this from the L&D side for years. Seeing it from a CS leadership seat feels different. The gap between what the onboarding program promised and what the team actually needed wasn't a content problem or a budget problem. It was a model problem.
What AI actually changes
Artificial intelligence (AI) will not solve the L&D underinvestment problem. Anyone who says it will is selling something. What AI does is remove the human bottleneck that made persistent onboarding operationally impossible at scale.
A well-designed AI coaching system can be present in the moment a professional is preparing for a high-stakes conversation with a client or senior stakeholder. It can give a novice and an advanced practitioner different answers to the same question, because the two need fundamentally different support. It can recognize that someone is navigating a context outside their prior experience and adjust the scaffolding accordingly, without waiting for a manager to notice and intervene. And it can do all of this for an entire team simultaneously, at any hour.
This is not AI replacing human development. It is AI making the right model operationally possible for the first time.
Building a proof of concept
My team and I tested this earlier this year. During a corporate hackathon, we built an AI coaching agent called CSM 360: a persistent onboarding system designed to support Customer Success Managers from day one in the role all the way to senior leadership.
Although the framework draws on Charles Jennings' 70-20-10 model and Bersin's Capability Academy research, the most important design decision is simpler than any theoretical framework: the coach treats every significant transition as a new onboarding moment. A promotion is an onboarding moment. A major product release is an onboarding moment. A first enterprise account after years of mid-market work is an onboarding moment. The 30-60-90 structure covers the first loop of the cycle, not the end of it.
The coach differentiates by level, using an in-house CS skills matrix to tailor not just the depth of its responses but the type of support it provides. A new hire asking about an at-risk account receives scaffolding, step-by-step guidance, and reassurance that escalating is the right call. A senior CSM asking the same question is first asked to diagnose the root cause before any framework is offered. The same question gets completely different answers because the development needs are completely different.
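To make the idea concrete, here is a minimal sketch of level-differentiated coaching in Python. This is an illustration of the pattern, not the actual CSM 360 implementation; the level names, the `SUPPORT_STYLES` table, and `build_coaching_prompt` are all hypothetical stand-ins for a real skills matrix and prompt pipeline.

```python
# Hypothetical sketch: the same question routed to different coaching
# styles depending on the asker's level in a skills matrix.
# All names here are illustrative, not the CSM 360 design.

SUPPORT_STYLES = {
    "new_hire": (
        "Give step-by-step guidance, explain terminology, and "
        "reassure the user when escalation is the right call."
    ),
    "mid_level": (
        "Offer a framework, but first ask one clarifying question "
        "about the account context before recommending actions."
    ),
    "senior": (
        "Do not answer immediately. Ask the user to diagnose the "
        "root cause first, then critique their diagnosis."
    ),
}

def build_coaching_prompt(question: str, level: str) -> str:
    """Compose a coaching instruction so the same question receives
    level-appropriate support instead of a one-size-fits-all answer."""
    style = SUPPORT_STYLES.get(level, SUPPORT_STYLES["mid_level"])
    return f"Coaching style: {style}\nQuestion: {question}"

# The same question, two different coaching behaviors:
novice = build_coaching_prompt("How do I handle this at-risk account?", "new_hire")
expert = build_coaching_prompt("How do I handle this at-risk account?", "senior")
```

The point of the sketch is the lookup, not the strings: once support styles live in data rather than in a single static prompt, differentiating by career stage becomes a routing problem instead of a staffing problem.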
We built this in a hackathon with a small team. The specific agent isn't what matters. What matters is that the concept proved operationally viable, and a small team with a deadline was enough to prove it.
A challenge to L&D
For too long, the conversation about AI in L&D has focused on content generation and course automation. These are real applications, but they are optimizations of the existing model: they make the event-based approach slightly faster and slightly cheaper. The model itself doesn't change.
Persistent onboarding, supported by intelligent performance tools built into the flow of work, is a different model entirely. It is what finally aligns what L&D builds with what the business actually measures: proficiency rather than completion, tracked continuously rather than only during onboarding. The professionals I manage don't stop developing at day 90. The executives I report to don't stop caring about time to proficiency at day 90. The question L&D needs to answer is: why does the support infrastructure stop there?
If you work in L&D and own part of the onboarding experience, or if you're a leader who truly values enablement rather than optics, there is an easier place to start than building an agent from scratch. Look at the AI tools your organization already has access to. Stop using them to polish emails. Start using them to close the proficiency gaps that open every time someone on your team changes roles, earns a promotion, or faces a challenge their onboarding never prepared them for. The infrastructure for persistent onboarding is already within reach. The only thing missing is the decision to use it that way.
