AI Can Build Your Training in Minutes. Can It Prove It Worked?
Future of L&D

Fergal Connolly·March 28, 2026·6 min read

The ground is shifting

AI can write a training course in minutes. Not a rough outline. A full course. Learning objectives. Content modules. Assessments. Scenarios. Discussion prompts. It can generate an entire learning path before you've finished your first coffee.

It can personalise content at scale. It can create branching simulations. It can translate materials into forty languages overnight.

So if your value as an L&D professional rests on content creation, the ground beneath you is shifting. And it is shifting fast.

This is not a distant threat. It is happening now. The question is not whether AI will change L&D. It is whether you will be defined by what AI can already do, or by what it cannot.

What AI is already very good at

Let's be direct about the capabilities. AI can already:

- Generate training content from a brief or a set of competencies.
- Build assessment questions aligned to learning objectives.
- Create realistic role-play scenarios for practice.
- Summarise complex research into accessible language.
- Produce microlearning modules at speed and scale.
- Adapt content to different audiences, roles, and reading levels.

These are not theoretical. These are production-ready capabilities that organisations are deploying today. Content creation that once took weeks now takes hours, or less.

For decades, content creation has been a core part of the L&D identity. "We design great training." That was the value proposition. That was the differentiator.

It is no longer a differentiator.

What AI cannot do

Here is where it matters.

AI cannot diagnose whether the transfer environment supports behaviour change. It cannot tell you whether managers are ready to reinforce the new skills. It cannot identify the systemic barriers that will block application regardless of how good the content is.

AI cannot activate a manager. It cannot run a 90-day reinforcement campaign that adapts based on real learner signals. It cannot broker the human relationship between a manager and their team member that research identifies as the strongest predictor of transfer success (Grossman & Salas, 2011).

AI cannot measure whether skills became habits. It cannot produce an Actual Transfer Score that connects behaviour change to the business KPIs the C-suite tracks.

AI is exceptional at the 20% of the transfer equation that is content. It has nothing to say about the other 80%: the environment that determines whether any of it gets applied.

The real vulnerability

The real vulnerability for L&D is not that AI can create content faster and cheaper.

It is that nobody, human or machine, is proving the training changed anything.

Most L&D functions operate in a proof vacuum. You deliver training. You collect satisfaction scores. You report on completions. And you have no evidence of whether any of it changed behaviour on the job.

This was always a problem. AI makes it an urgent one.

When content was hard to create, the creation itself had value. The effort was the proof. "We built a three-day programme with bespoke case studies and expert facilitators." That sentence carried weight because it represented significant investment.

When AI can produce equivalent content in minutes, the creation no longer carries weight. The effort is gone. What remains is the question that was always underneath: did any of this change anything?

Brinkerhoff's Success Case Method was built precisely for this question. It asks not whether training was delivered or enjoyed, but whether it produced behaviour change that the organisation values (Brinkerhoff, 2003). It is a question most organisations still cannot answer.

The identity fork

This is the moment of choice.

You can be known for the content you create. AI is already very good at that.

Or you can be known for the impact you prove.

One of those futures is defensible. The other is not.

The L&D teams that thrive will not be the ones that create the best content. They will be the ones that prove the training changed something. They will be the ones who can stand in front of the C-suite and say: "73 of 100 learners are applying target behaviours at Day 60. Manager coaching engagement is at 84%. Customer satisfaction in the pilot group is up 12%."

That is a conversation AI cannot have. Because AI cannot manage the human system that produces those outcomes.

What proving impact actually looks like

Proving impact is not smile sheets. It is not asking learners whether they "feel more confident" at the end of Day 2. It is not completion rates. If you are still relying on surveys that measure satisfaction rather than transfer, the gap is wider than you think.

Kirkpatrick and Kirkpatrick's framework outlined four levels of evaluation: reaction, learning, behaviour, and results (Kirkpatrick & Kirkpatrick, 2016). Most L&D functions operate at levels one and two. Reaction and learning. How did they feel? What did they know?

The proof that matters lives at levels three and four. Behaviour and results. Are they doing it differently? Is the business metric moving?

Getting to levels three and four requires managing the transfer environment. It requires activating managers. Running reinforcement campaigns. Tracking actual behaviour signals over 90 days. Connecting those signals to the outcomes leadership cares about.

This is the work that justifies your seat at the table. Not content creation. Transfer management. Not designing training. Proving it mattered.

The future-proof skill

Every other function in the organisation has already been through this reckoning. Marketing proved its worth with attribution models. Sales proved it with pipeline metrics. Finance proved it with forecasting accuracy.

L&D has been the exception. The function that runs on conviction without evidence. The one that says "we believe this training made a difference" while every other function says "here's the proof."

AI ends that grace period. When content is commoditised, the only thing left to prove your value is the outcome. The behaviour change. The transfer.

The L&D leaders who build this capability now will define the next decade of the profession. They will be the ones the C-suite consults on capability. They will be the ones who direct investment rather than defend budgets.

And they will be the ones who can answer the question that AI never will: did it work?

Where this leaves you

You've always believed that investing in people drives performance. That training matters. That development changes organisations.

You were right.

But belief without evidence is a luxury that AI has made obsolete. The proof gap that L&D has lived with for decades is now the survival gap.

You can keep perfecting the 20%. AI will do it faster.

Or you can own the 80%. The diagnosis. The reinforcement. The manager activation. The evidence.

You can be known for the content you create.

Or for the impact you prove.


References

Brinkerhoff, R.O. (2003). The Success Case Method. San Francisco, CA: Berrett-Koehler.

Grossman, R. & Salas, E. (2011). The transfer of training: What really matters. International Journal of Training and Development, 15(2), 103-120.

Kirkpatrick, J.D. & Kirkpatrick, W.K. (2016). Kirkpatrick's Four Levels of Training Evaluation. Alexandria, VA: ATD Press.