Researchers just reviewed every learning transfer tool in the academic literature.
They found 45.
None of them solve the full problem.
That is the finding of a 2026 systematic scoping review published in the Journal of Workplace Learning. Ardondi and colleagues searched seven databases, reviewed every peer-reviewed study evaluating a tool designed to assess whether workplace training transfers to actual job performance, and reached a conclusion the field has been edging towards for years without quite saying out loud.
The tools split into two camps. Some measure transfer outcomes. Others capture the enabling and hindering factors. Not one tool integrates both. The measurement landscape is rich, fragmented, and leaves a gap at the exact point where L&D leaders need clarity most.
This is not a gap at the margins of the research. It is the gap that explains why organisations keep measuring training without being able to improve it.
The review in plain language
Ardondi et al. (2026) conducted a systematic scoping review. That means they did not identify the best tools. They mapped the entire landscape. Every major academic database. Every instrument the field has produced. Every evaluation approach that met the inclusion criteria for a peer-reviewed assessment of workplace training transfer.
Forty-five tools made the final cut, drawn from seven databases: MEDLINE, Embase, CINAHL, Scopus, Web of Science, ERIC, and PsycINFO.
Each tool was designed to answer some version of the same question: does the learning from this training event transfer to job performance? After four decades of research effort, the field had produced 45 answers. The review set out to understand what those answers collectively tell us.
The finding was clear. The tools divide along a consistent line. On one side: instruments that measure the conditions that influence transfer — the climate, the manager support, the motivation, the opportunity to apply. On the other: instruments that measure whether transfer happened — the behaviour change, the performance outcome, the skill application on the job.
Not one instrument sits across both sides.
The two camps and what is missing between them
The most widely used instrument in the field is the Learning Transfer System Inventory (LTSI), developed by Holton, Bates, and Ruona. The LTSI is a well-validated, widely deployed instrument for measuring transfer climate. It surveys learners and managers on factors known to predict whether training transfers: peer support, manager support, opportunity to apply, transfer design, personal outcomes.
The LTSI is genuinely useful. It tells you whether the conditions are in place. If the scores are poor, you know you have a problem before training has been wasted.
But the LTSI measures conditions. It does not measure outcomes. It does not tell you whether behaviour changed 30 days after the programme ended. It does not tell you whether the skills are being applied on the job. It tells you what the transfer climate looked like at a point in time.
On the outcome side, Kirkpatrick's four-level model occupies the dominant position. Level 3 is the relevant measure: behaviour on the job after training. In principle, Kirkpatrick Level 3 closes the loop. In practice, most organisations never reach it. Level 1 (reaction) and Level 2 (learning) are manageable. Level 3 requires a measurement infrastructure most L&D teams do not have: manager observation data, 360-degree feedback, performance metric tracking, timed follow-up at 30, 60, and 90 days after training.
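As an illustration of the scheduling mechanics alone, the 30/60/90-day follow-up cadence mentioned above takes only a few lines to express. The intervals come from the cadence described in this article; the function name and structure are a hypothetical sketch, not a prescribed implementation:

```python
from datetime import date, timedelta

def follow_up_dates(training_end: date, offsets=(30, 60, 90)) -> list[date]:
    """Return the timed Level 3 follow-up checkpoints for a programme end date."""
    return [training_end + timedelta(days=d) for d in offsets]

# A programme ending 1 March 2026 would be checked on
# 31 March, 30 April, and 30 May 2026.
checkpoints = follow_up_dates(date(2026, 3, 1))
```

The arithmetic is trivial; the hard part, as the article argues, is having the manager observation and performance data ready to collect on each of those dates.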
The result is a field with excellent instruments at either end of the transfer process and nothing reliably bridging them.
You can know the conditions before training. You can, with significant effort, measure the outcomes after. What the 45 tools in the Ardondi et al. review do not provide is a system that connects both, runs them in the same window, and produces a coherent picture.
For more on why the standard post-training survey fails to close this gap, see Post-Training Surveys That Actually Measure Transfer.
Why this matters in 2026
The scale of the investment makes the measurement gap consequential. Training expenditure in the United States reached $102.8 billion in 2024/25, according to Training magazine's annual Industry Report. The global corporate training market is estimated at $391 billion by Training Industry. That is a significant investment in learning events. The measurement architecture sitting beneath it is 45 fragmented instruments, each excellent within its scope, none capable of delivering the integrated picture that justifies the spend.
The field has not been idle. The 45 tools represent decades of serious research. The LTSI alone has been validated across industries, geographies, and organisational types. Kirkpatrick has been updated, extended, and applied in thousands of organisations. The work is rigorous.
But rigour at the instrument level does not solve an architecture problem. You can have excellent individual tools and still be unable to connect the dots across the transfer window. That is exactly the situation the review describes.
L&D leaders are living with the consequence. You have transfer climate data that does not connect to behaviour change data. You have pre-training diagnostics that sit separately from post-training evaluations. You have manager feedback that does not link to the learner's self-assessment. You generate more training data than ever, and you are less able to tell a coherent story with it than you should be.
For context on the broader case for a research-informed approach to L&D measurement, this piece on working smarter as an L&D professional is worth reading alongside the scoping review.
The integration gap
The Ardondi et al. review does not describe a failure. It describes an absence. Nobody has built the integration layer.
What the field has: instruments that measure conditions and climate (LTSI and its relatives), instruments that measure outcomes and behaviour change (Kirkpatrick-derived tools and others), and instruments that capture individual factors in the transfer environment in isolation.
What the field does not have: a system that diagnoses transfer readiness before training, tracks the enabling conditions during the transfer window, measures whether behaviour actually changed, and connects all three into a single picture that ties back to business performance.
That is not a data gap. Most organisations produce plenty of training data. It is an architecture gap. The data points live in separate systems, collected at different times, owned by different stakeholders, and presented in formats that never connect.
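To make the architecture point concrete, here is a minimal sketch of what the missing join looks like: climate scores from one system and behaviour observations from another, linked on a learner identifier. Every field name and value below is hypothetical; the point is only that the connection is a data-model decision, not another survey.

```python
# Hypothetical records from two unconnected systems.
climate_scores = {            # pre-training diagnostic (LTSI-style factors)
    "learner-07": {"manager_support": 4.2, "opportunity_to_apply": 2.1},
    "learner-12": {"manager_support": 3.8, "opportunity_to_apply": 4.5},
}
behaviour_checks = {          # post-training observation (Level 3-style)
    "learner-07": {"day_90_applied_on_job": False},
    "learner-12": {"day_90_applied_on_job": True},
}

def joined_transfer_view(climate: dict, behaviour: dict) -> dict:
    """Join the two datasets on learner ID into one transfer record each."""
    return {
        learner: {**climate[learner], **behaviour.get(learner, {})}
        for learner in climate
    }

view = joined_transfer_view(climate_scores, behaviour_checks)
# Each record now pairs enabling conditions with the observed outcome,
# which is the combined picture the review says no single tool produces.
```

The join itself is one dictionary comprehension; the structural problem is that, in practice, the two source datasets are collected at different times, owned by different stakeholders, and rarely share a stable identifier.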
The 90-day window after training is where transfer either consolidates or dissolves. The research has recognised this for years, and the tools reflect it. The LTSI was built to understand the conditions that govern it. Kirkpatrick Level 3 was designed to capture its outcomes. Neither spans the full window in a way that tells you what to do next.
What Transfer Intelligence means
Transfer Intelligence is not another instrument in the 45. It is a name for the architecture the review identifies as missing: an integrated system that runs diagnostics before training, manages the transfer window actively, and measures behaviour change at the end of it.
What the Ardondi et al. review confirms is that this architecture does not exist in the academic toolset. The 45 tools they found are evidence of how the field evolved: each instrument solving its piece of the problem. The LTSI for climate. Kirkpatrick for outcomes. Others for specific factors, contexts, or populations.
The gap is not a failure of any of those tools. It is a description of what the next generation of measurement needs to do: connect the before, the during, and the after into a system that produces actionable evidence.
For L&D leaders, choosing the best climate tool and the best outcome tool and running both does not close the gap. You still end up with two datasets that do not talk to each other, measured at different times, reported in different formats, and interpreted by different people. The integration has to be structural, not manual.
Three things to do with this research
Read the review. Ardondi et al. (2026) is published in the Journal of Workplace Learning and represents the most comprehensive mapping of transfer evaluation tools the field has produced. Whatever instruments your organisation currently uses, understanding where they sit in this landscape will sharpen how you interpret the data they produce and what questions they cannot answer.
Map your current measurement stack. Which instruments do you use? Which camp do they sit in: conditions and climate, or outcomes and behaviour? What is the gap between them? Name it. You cannot address a gap you have not identified.
Treat measurement as an architecture problem, not a tool problem. The temptation when measurement falls short is to add another instrument. Another survey. Another follow-up checkpoint. The Ardondi et al. review suggests this is not the constraint. The field has 45 instruments. The constraint is the system that connects them. That is the question worth asking: not "which tool?" but "how do these connect to produce a coherent picture of transfer across the full window?"
The research is clear
The Ardondi et al. review is the most thorough map of the transfer evaluation landscape the academic literature has produced. Forty-five tools. Seven databases. A definitive finding: nobody has built the integration layer.
The field has 45 ways to measure pieces of transfer. Zero ways to measure all of it.
Fragmented measurement does not solve the problem. It makes the problem more legible. That is valuable. It is not enough.
The research is clear: fragmented measurement is not working. The next step is the system that connects what the field knows into evidence that leadership respects.
References
Ardondi M, Fortunato E, Ruozi C, Di Fronzo P, Saffioti A, Ferrari I, Bassi MC, Ghirotto L, Pedroni C (2026), "Workplace training transfer: a systematic scoping review of evaluation tools for adult learning," Journal of Workplace Learning, Vol. 38 No. 1, pp. 1–21. https://www.emerald.com/jwl/article/38/1/1/1302036
Training magazine (2025), 2025 Training Industry Report. https://trainingmag.com/2025-training-industry-report/
Training Industry (2025), State of the Corporate Training Market. https://trainingindustry.com/press-release/measurement-and-analytics/training-industry-releases-2025-state-of-the-corporate-training-market-report/