
AI skills in organisations: how do you know if learning actually pays off?

Research & Strategy | Product development

Making AI skills in organizations visible with the Kirkpatrick model

AI is reshaping our landscape at a rapid pace. A few years ago, most organizations were still experimenting with early use cases. Today, we see something different happening: AI is finding its way into everyday work. Into presentations, emails, analyses, design flows, development, and support.

At the same time, many organizations run into the same challenge. There is plenty of enthusiasm to “do something with AI,” but turning that enthusiasm into a real, shared skill across the organization proves to be much harder.

Because let’s be honest: organizing a training session is not the hardest part. The real challenge comes afterwards, when you need to be able to say: “This worked. Our people can actually do this. And we see it reflected in how we work and what we deliver.”

That is the core question for any organization that wants to take AI skills seriously:

How do you know whether AI learning actually delivers?

Not in theory, but on the work floor. In behavior. In results.

The Future of Jobs Report 2025 makes this need even more explicit. A clear majority of employers expect AI to fundamentally change their business by 2030. At the same time, many organizations are planning significant reskilling and upskilling initiatives. And that makes sense. Without the right AI skills, AI remains a promise on paper.

The message is clear: learning about AI is no longer optional.

But the next step is just as important: understanding whether that learning actually lands.

At Pàu, we approach this using the Kirkpatrick model. No exotic framework, no trendy management jargon. Just a proven compass that helps us see whether knowledge truly translates into new behavior and tangible outcomes. Especially with something like AI, where the step from “I understand it” to “I actually use it” is often larger than people expect.

AI skills in organizations: the problem is rarely motivation

Across many organizations, we see the same pattern repeat itself. There is an inspiring session. Everyone is on board. That is followed by a training session. Sometimes even another one. And then… momentum fades.

Not because people are unwilling. In most cases, there is genuine interest. It fades because AI often feels broad and vague, leaving people unsure where to start. Because day-to-day work keeps coming in, pushing experimentation to the background. Because uncertainty creeps in: “Am I allowed to use this?” “What about data?” “What if I make mistakes?”

On top of that, there is often no rhythm to retain or exchange knowledge, and successes remain largely invisible. Which creates the impression that “no one is really using it,” even when that is not entirely true.

The result is familiar: hours are invested in training, yet AI skills in organizations grow slowly. And that is frustrating. For HR, for team leads, for product owners, for management and often for the participants themselves.

That is why we prefer to look at AI learning the same way we look at digital change in consultancy: as a trajectory you build step by step. One that includes feedback, continuous adjustment and clear signals that it is actually working.

The Kirkpatrick model: more than just measurement

The Kirkpatrick model is a globally used framework for evaluating and improving learning journeys. It looks beyond the simple question “Has someone completed the training?” and follows learning across four levels.

Reaction
 How do participants experience the learning journey? Do they feel motivated, does it spark curiosity, or do they encounter barriers?

Learning
 What knowledge or skills actually stick? Can people explain them or apply them within their own context?

Behavior
 Do those insights translate into new behavior on the work floor?

Results
 What concrete results become visible within the organization and for its customers?

The strength of the model lies in its simplicity. It helps you identify where the chain breaks. Perhaps the first impression was weak. Perhaps knowledge does not stick. Perhaps everyone understands the theory, but no one applies it in practice. Or perhaps a lot is happening, but the impact is not measured, which makes it invisible.

For AI skills, this is especially relevant. AI can quickly feel intuitive once people see a demo. In reality, building AI skills requires repetition, a safe space to practice and clear agreements. Without that, it remains a collection of isolated tricks rather than a sustainable capability.

Level 1: Reaction

First impressions determine whether people move forward

AI learning rarely starts with knowledge. It starts with emotion.

With some colleagues, you immediately feel energy: “Finally, this is something I can use.” With others, there is hesitation: “Do I really need to become an AI expert now?” And sometimes there is resistance: “Another hype, it will be something else soon.”

These first reactions are not a side note. They strongly influence whether people will experiment later on, ask questions and persist. If the first session is too abstract, people disengage. If it is too technical, the same thing happens. And if it is mostly “wow” without a clear link to day-to-day work, it remains entertainment.

What works in practice is making AI concrete. By showing recognizable use cases that connect to real roles, such as marketing, design, project management or engineering. By keeping things small: one problem, one flow, one tangible outcome. And by setting clear boundaries around safety from the start. What is allowed, what is not, and where people can turn with questions.

Language matters just as much. Not everyone wants to “do prompting,” but everyone wants to save time, avoid mistakes or deliver higher quality work. Framing AI in those terms lowers the threshold to start using it.

Measuring Reaction does not need to be complex. A short survey after a session, an open question about what someone wants to try in the coming weeks, or a single question about what is still holding them back is often enough. The goal is not to collect scores, but to sense whether there is energy. And if not, to understand why.

Level 2: Learning

Insights that stick, not just during the training

At this level, the focus is on what people actually learn. Not whether a training session was enjoyable, but whether it leads to AI skills that people can still apply later on.

AI learning can be deceptive. Someone may follow everything during a session and still get stuck a week later, because the foundation is missing or because the translation to their own context was never made. In organizations, we often see people remember isolated prompts without understanding the underlying principles. Quality control receives too little attention. And it remains unclear what is allowed and what is not when it comes to data and sources.

Strong Learning starts with a shared foundation. Short, accessible baseline trainings on generative AI, large language models and responsible AI help bring everyone to the same starting point. But learning does not stop there. It only becomes truly effective when people bring in their own tasks: rewriting an email, summarizing a meeting, creating variations of copy or structuring requirements.

Clear agreements are essential in this phase. AI is an assistant, not a source of truth. By comparing strong and weaker outputs side by side, people learn to judge quality and recognize risks.

Making Learning measurable does not have to be complex either. Asking people to explain concepts in their own words, letting them complete a short exercise, or reflecting on where AI can help them and where it should not be used already provides valuable insight. By the end of this level, you want to be able to say: people share a common foundation, understand what they are doing, and know how to safeguard quality. That is the foundation of strong AI skills in organizations.

Level 3: Behavior

From knowing to doing, on real work

This is often where the biggest bottleneck sits. Many organizations manage to organize training sessions. Far fewer succeed in embedding AI into day-to-day work.

Behavior is about what happens when no one is watching. Is AI used spontaneously? Does it come up in projects? Do people share outputs and insights? Does it fit naturally into their flow?

Progress often stalls because there is little room to practice during working hours, because people are afraid to ask basic” questions, or because the framework around tools and data remains unclear. As a result, success stays individual, while teams actually need a shared way of working with AI.

At Pàu, we stimulate behavior by making AI visible and open for discussion. One example is our huddles: short, low-threshold sessions where colleagues show how they use AI in their projects. No long presentations, just honest stories. This was my problem. This is how I approached it. This worked. This did not.

We also make AI skills part of growth and development conversations through Individual Development Plans. Using an AI Skills Matrix as a baseline and conversation starter, everyone gets a personal growth path aligned with their role and talents. Not everyone needs the same skills. A designer builds different AI capabilities than a developer or a project manager, and that is exactly the point.

Tracking behavior does not have to be heavy here either. Discussing AI in project retrospectives, picking up light signals of adoption, or observing whether learnings are shared across teams already provides a clear picture. And when someone shares that an AI tool saved them half a day of work, that kind of story is often more contagious than any additional training session.

Level 4: Results

Making impact visible, otherwise it feels like nothing is changing

Results is often the level management is waiting for. “What does this deliver?” And that is a fair question. The challenge is that AI impact rarely fits into a single number.

When it comes to AI skills in organizations, impact shows up as a combination of effects. Time saved on repetitive tasks, quality improvements through better structure and consistency, faster project turnaround times, stronger knowledge sharing, and more room for higher-value work. Think of more time for strategy, creativity and meaningful human interaction.

Results also include risk management. Responsible AI is not a separate chapter; it is part of the outcome. Organizations with strong AI skills know where AI belongs, but also where it does not. They understand how to protect data and how to handle errors when they occur.

Making impact visible therefore requires nuance. Short success stories help, as does sharing results at moments that matter, such as team meetings or Connects. By making impact tangible for customers and choosing a limited set of KPIs that reflect the organization’s reality, you avoid AI efforts becoming invisible. Because without visibility, the feeling quickly arises: we are doing a lot, but I do not see it. And that is detrimental to motivation.

Why this works

The Kirkpatrick model helps you approach learning as a process you actively guide, rather than something you simply tick off. It shows how first impressions, knowledge, behavior and results are interconnected.

For AI skills in organizations, that coherence is essential. Without it, you quickly end up with isolated training sessions and individual tricks. With this model, you can build a shared foundation step by step, create a safe space to practice, and make progress visible. And when things stall, it becomes clear where to adjust, without drama.

A short reality check for organizations building AI skills

If you want to understand where your organization really stands when it comes to AI skills, there is no need to start with dashboards or maturity models right away. A few focused questions are often enough to get a clear picture.

At the Reaction level, it is worth asking whether people currently see AI as relevant to their work. Do they feel curiosity, or mainly hesitation? And which barriers come up most often in conversations with teams?

At the Learning level, the question is whether people can explain what they are doing and why. Do they understand the principles behind their use of AI, and do they know how to check and evaluate output? Or does it remain a matter of trying things out and hoping for the best?

For Behavior, you look at what happens in practice. Is AI actually being used on real tasks and real work? Does it show up in projects and collaborations? And are learnings shared between colleagues, or does knowledge remain largely individual?

At the Results level, you can ask yourself whether you can point to a few concrete examples of impact. Is there visibility on time saved, quality improvements or risk management? Or does the feeling persist that a lot is happening, while it remains difficult to articulate what it delivers?

If you find yourself answering “I don’t know” at any of these levels, that is not a failure. It is simply a signal. A signal that it may be time to start measuring, adjusting, and having the right conversations.

Our philosophy: people first, impact as the outcome

At Pàu, everything starts with people. Technology is never a goal in itself, but a means to create better solutions together and deliver real value. That applies to our work with clients, and just as much to how we approach AI internally.

The Kirkpatrick model helps us make that belief tangible. It shows how knowledge only gains meaning when it is translated into behavior, and how behavior only matters when it results in outcomes that truly make a difference.

AI can do a lot. But only people can give it direction, context and purpose. And for us, that is where the real impact of strong AI skills in organizations lies.

This article was written by:
Coach Lead
