What Makes AI Training Actually Stick
After running AI training across regulated finance teams and global media companies, I've found that the same factor decides whether it works. And it isn't the trainer.
I run AI training across companies that have almost nothing in common.
Heavily regulated financial services firms with strict compliance obligations. Global media companies where the rules are looser and the pace is faster. In-house marketing teams. Smaller specialist consultancies. Businesses with hundreds of staff and businesses with twenty.
The training itself is similar across all of them. The same exercises, the same kinds of workflows, the same starting questions. What is different, every time, is whether it sticks.
After enough rooms, the variable becomes obvious. It is not the tools. Not the budget. Not the trainer. Not even the seniority or quality of the team. The thing that decides whether a programme works is the management and culture support around it.
McKinsey, BCG and Gartner all reach the same conclusion: organisational and cultural readiness predicts progress with AI more reliably than technical readiness. In the AI Maturity Assessment I built for corporate training programmes, AI Culture is the dimension that scores lowest in teams stuck below the Practitioner stage, more consistently than any other.
Two things decide whether AI training works
The first is permission.
In the rooms where training works, leadership has signalled, repeatedly and credibly, that experimenting with AI is part of the job. The failure mode they're worried about is not "someone made a mistake with a tool." It is "we got slow." Tools are accessible. Mistakes get reviewed, not punished. Senior people use AI visibly in front of their teams, including badly.
In the rooms where training does not work, the official position is "we're doing AI." The lived experience is that every interesting thing requires three approvals, the tooling is gated, and senior people do not use AI themselves, or at least not visibly. The team correctly reads this and acts accordingly.
This is mostly not about which industry you're in. I have worked with regulated firms where leadership built genuine permission inside the compliance frame, and the training stuck. I have worked with unregulated firms where leadership was so cautious that the training died regardless of how creative the team was. Industry plays a role. It does not decide.
The honest distinction is between necessary controls and fear-based gatekeeping. The team can usually tell which is which.
The second is accountability.
It is harder to see than permission, and harder to fix: who owns the result when AI helped produce a piece of work?
In high-functioning teams, the answer is clear. The human shipping the work owns it. They may have used AI to draft it, but their name is on it and their judgement decides what ships.
In low-functioning teams, the answer is fuzzy. AI sits in the workflow as a kind of plausible-deniability buffer. When something goes wrong, no one is sure whose mistake it was. People sense this and react predictably. They stop using AI for anything where the output matters.
Adoption programmes that don't fix this end up with a pattern most managers recognise. People are using AI for low-stakes work where it doesn't matter. People are not using it for the work that would actually move the business.
Take the AI Maturity Assessment
15 questions across five dimensions — Awareness, Tool Adoption, Workflow Integration, Measurement and AI Culture. See your team's score, find out which stage you're at, and get three specific next moves. Free, takes 5 minutes.
Take the Assessment →

What this means in practice
I have stopped opening leadership engagements with the question "what tools do you want us to focus on?" and started opening with two questions instead.
When someone in your team uses AI to produce work, whose name is on the result?
When did the most senior person in your function last use an AI tool visibly, in front of their team, including for something that didn't quite work?
If both answers are clean, the training will do its job. The team will move. The investment will pay back.
If either answer is messy, no amount of training will save the programme. The work has to happen further up. In the leadership signal, the access policy, the accountability structure, and the honest conversation about what AI actually means for the people in the room.
What changes and what doesn't
Tools change every quarter. Training providers come and go. The management and culture conditions are slow and unglamorous, and they decide everything.
Training that works starts with the conditions around the room.
I design bespoke AI training around the culture and management conditions that decide whether it works. Available in London and internationally.