
From fragmented information to smarter talent matching

Product design, Research & Strategy

What a hackathon taught us about flows, preparation, and feedback

On paper, talent matching often sounds simpler than it is in reality. You have a profile, an assignment, and a decision. In theory, that makes sense. In practice, many organizations experience something quite different. Information is scattered across tools, conversations, emails, and documents. Expectations are aligned verbally, adjusted on the side, or only voiced once friction has already surfaced. Feedback comes late, remains incomplete, or disappears altogether. And decisions are made under time pressure, often with incomplete context.

That is what makes talent matching complex. Not because people are bad at their jobs, but because the process itself offers little structure to lean on. Recruiters, hiring managers, and candidates navigate a system that has grown organically over time. Each part works in isolation, but together they fail to form a clear whole. The result is friction: misunderstandings, rework, delays, and sometimes missed opportunities, on both sides of the table.

During a recent huddle at Pàu, Nils Govaerts and Pauline Verbeke shared how they, together with Michiel De Wandelaer, explored a conceptual flow for smarter talent matching during an internal hackathon. It’s important to set the context right away: this was not a finished product, nor a new tool that was launched. It was a response to a concrete case and a familiar question:

How can you better prepare, more consistently assess, and more transparently follow up on the match between talent and assignment, without losing the human aspect?

The hackathon created a safe space to open up that question. Not to quickly build something, but to understand where things break down today. Which steps sit between intake and decision? Where does information get lost? And why does matching so often feel subjective, even when experienced people with good intentions are involved?

The outcome of that exercise was not a solution you can buy or roll out. It was a way of thinking. A way to look at talent matching as a coherent process rather than a series of disconnected actions. And that is exactly why this exercise is relevant for many organizations today. Not because they should copy this concept one-to-one, but because they recognize where the friction lies. And because better matching rarely starts with technology, but with structure, context, and conversation.

Why talent matching is rarely a tooling problem

When organizations struggle with talent matching, the focus often shifts quickly to tooling. New applicant tracking systems, AI matching engines, or additional automation are seen as potential solutions. That’s understandable. Tools are tangible, measurable, and promise efficiency. They create the sense that the problem can be solved by adding or replacing something.

In practice, however, this rarely addresses the core issue. Most organizations already have an extensive stack of systems in place. Profiles are stored somewhere, vacancies are shared across multiple channels, and communication happens via email, chat, or video. And yet, matching continues to require a lot of effort. Not because the tools fail, but because they operate within a process that was never explicitly designed.

The real issue is usually a lack of coherence. Information is spread across different places. Expectations live in people’s heads rather than in shared frameworks. Feedback is given verbally, but rarely captured in a structured way. As a result, every new match feels like a standalone case, disconnected from previous experiences and learnings.

For recruiters and account managers, this means constantly switching between contexts. For hiring managers, it means making decisions with incomplete information. And for candidates, it often leads to uncertainty about why something moves forward or doesn’t, and about what is expected in the next step.

What becomes visible here is not a lack of technology, but a lack of process design. Talent matching has grown organically over time rather than being intentionally built. And as long as that remains the case, no tool will fundamentally solve the problem.

This is a familiar pattern in consultancy as well. When digital initiatives stall, the root cause is rarely the technology itself. It lies in unclear flows, implicit assumptions, and missing agreements. Only when those are made explicit can technology play its role as a reinforcement rather than a patch.

That’s why the hackathon did not start with the question “which tool are we missing?”, but with a more fundamental one: “which decisions do people need to make, and what information do they need to make them well?” That perspective made it possible to look at talent matching with fresh eyes, independent of existing systems.

A hackathon as a space for thinking, not a production line

Hackathons are often associated with speed, prototypes, and technical experiments. At Pàu, we also use them in a different way: as a space for thinking. A temporary context where it is explicitly okay to open up existing processes, surface assumptions, and unpack complexity, without the pressure to immediately land on a solution.

For the topic of talent matching, this proved particularly valuable. The team of Nils Govaerts, Pauline Verbeke, and Michiel De Wandelaer did not start from technology, but from observation. How does matching actually work today? Which steps are formally defined, and where does the real work happen between the lines? Which decisions truly matter, and where does information get diluted or lost along the way?

By walking through the process together, step by step, it quickly became clear how much implicit knowledge is involved. Expectations that “everyone knows”, but that are never made explicit. Interpretations of what constitutes a good match. Decisions that make sense to those involved at the time, but are hard to explain or justify to others later on.

The hackathon setting made it possible to bring those implicit layers to the surface. Not to standardize them away or automate them out of existence, but to make them discussable. That led to sharper questions: where does the real uncertainty lie? Which pieces of information are we missing again and again? And why does matching so often feel subjective, even when experienced people are involved?

Because this was not an official project, there was room to slow down and reflect. That was exactly what this exercise required. Not more speed, but more clarity. Not more tooling, but a better understanding of the problem. Those insights formed the foundation for developing a coherent matching flow in the next step.

Talent matching as a flow rather than a series of disconnected steps

One of the key insights from the hackathon was that talent matching in many organizations consists of a series of isolated steps that follow each other chronologically, but build very little on one another in substance. There is an intake, a profile search, a conversation, and then a decision. Each step makes sense on its own, yet they are rarely designed as one coherent whole.

That lack of coherence creates friction. Information that is relevant at the start gets lost along the way. Expectations that were implicit only become visible once tension arises. Feedback is given, but not connected to earlier assumptions. The result is a process that constantly needs to be “repaired” rather than supported by design.

The hackathon team therefore made a deliberate choice to approach matching as a flow. Not as a checklist of steps to complete, but as a trajectory in which each phase prepares the next. That flow consists of five key moments: making context explicit, structuring profiles, clarifying expectations, capturing feedback, and learning for the next match.

These steps are not new in themselves. Most organizations already do this today, albeit implicitly and in a fragmented way. The difference lies in the order and the coherence. What you make explicit early on prevents interpretation later. What you structure can be compared. And what you capture as feedback can be reused.

An important principle in designing this flow was that it should not feel like extra work. On the contrary, the goal was to reduce cognitive load. Less searching for context, less alignment afterwards, and less rework. By capturing information at the right moment, matching becomes lighter rather than heavier.

The flow was also explicitly designed to be human-centred. Not every step can or should be automated. Some decisions require experience, nuance, and conversation. The flow supports those moments, but does not replace them. That distinction proved crucial to maintaining trust among everyone involved.

Where things really break down: context, expectations, and feedback

When a match turns out not to work in hindsight, it is often attributed to a lack of skills or experience. In practice, we see something different. Mismatches rarely happen because someone is “not good enough”, but because context, expectations, and feedback were never made sufficiently explicit throughout the process.

Context is often the first pain point. Roles are described in terms of skills, but without enough attention to the environment someone will operate in. What does “senior” actually mean in this context? How much autonomy is there in practice? How complex is the stakeholder landscape? When these questions are not explicitly addressed, different people interpret the same profile in different ways. Matching then becomes an exercise in interpretation.

Expectations form the second bottleneck. Many assumptions remain unspoken: about pace, responsibility, communication, or collaboration. Those assumptions usually surface only once friction arises. By then, it is often too late, and a mismatch feels inevitable, even though it could have been avoided.

Feedback is the third structural issue. It is usually given, but rarely captured properly. Conversations generate valuable insights, but they disappear into people’s heads, emails, or loose notes. As a result, every new match starts from scratch, without learning from previous decisions.

During the hackathon, the team deliberately focused on making these three elements explicit. Not as an administrative exercise, but as a fixed part of the process. Context is clarified upfront, expectations are voiced, and feedback is linked back to those expectations. This makes feedback more concrete and easier to reuse.

That has a noticeable impact on decision quality. Conversations shift from persuasion to validation. Doubts are voiced earlier. And decisions become easier to explain, both internally and towards candidates.

In consultancy, we see this pattern time and again. Projects rarely fail due to a lack of expertise, but because expectations are unclear and feedback loops are weak. Making those explicit creates room to adjust without drama.

The same applies to talent matching. Those who take context, expectations, and feedback seriously remove a large part of the complexity from the process. Not by oversimplifying it, but by structuring it more deliberately.

Comparing profiles without reducing people to scores

Once context and expectations become more explicit, a new challenge emerges: comparability. In many matching processes, this is a major stumbling block. Profiles vary widely in structure, level of detail, and language. One candidate lists every project in detail, while another summarizes years of experience in just a few lines. Skills are described differently depending on role, background, or industry. And soft skills, which are often decisive, usually remain implicit.

Many systems attempt to solve this through scoring. Criteria are weighted, profiles are ranked, and a list emerges. This can help create an overview, but the risk of losing nuance is significant. People are reduced to numbers, while context is just as critical to a good match.

The hackathon concept therefore chose a different approach. Not “who scores highest?”, but “how can we describe profiles in a consistent way so that conversations become more substantive?” The starting point is not a final score, but a shared structure.

That structure looks at core skills, relevant experience in context, learning ability, and conditions such as availability or preferred way of collaborating. By viewing profiles along the same dimensions, it becomes easier to articulate differences without oversimplifying them.

AI can play a supporting role here by analyzing profiles and translating them into that shared structure. Not to make decisions, but to reveal patterns. This makes it clearer why someone is considered a strong match and where potential attention points lie.
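To make the idea of a shared structure concrete, here is a minimal sketch in Python. This is purely illustrative and not part of the hackathon concept: the dimension names (`core_skills`, `experience_context`, and so on) are assumptions, and the point is that differences surface as discussion points rather than as a ranking.

```python
from dataclasses import dataclass


@dataclass
class ProfileView:
    """One candidate described along shared dimensions.

    All field names are hypothetical, not a prescribed schema.
    """
    core_skills: set            # e.g. {"service design", "facilitation"}
    experience_context: str     # the environment the experience was gained in
    learning_signals: list      # evidence of learning ability, kept qualitative
    availability: str
    collaboration_preference: str


def attention_points(profile: ProfileView, required_skills: set) -> set:
    """Return missing skills as talking points, not a score or ranking."""
    return required_skills - profile.core_skills


# Two profiles described along the same dimensions become comparable
# without being collapsed into a single number.
candidate = ProfileView(
    core_skills={"service design", "facilitation"},
    experience_context="scale-up, small autonomous teams",
    learning_signals=["picked up research ops on a previous assignment"],
    availability="from March",
    collaboration_preference="on-site two days a week",
)
gaps = attention_points(candidate, {"service design", "stakeholder management"})
print(gaps)  # {'stakeholder management'}
```

The deliberate design choice here is that the comparison yields a set of attention points for the conversation instead of a score, which matches the article’s emphasis on transparency and keeping human judgment central.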

The advantage of this approach is transparency. Decisions are better grounded and easier to explain, both internally and to candidates. Human judgment remains central, but it is better supported.

In consultancy, we encounter a similar tension. Models and scores are useful for making complexity manageable, as long as they do not replace the conversation. They should provide insight, not an endpoint. Making profiles comparable without reducing them creates exactly that space for better judgment.

Learning as a lever for better matching

In many organizations, talent matching is still treated as an endpoint. A decision is made, someone is selected or not, and the process is considered complete. Yet this is precisely where a lot of potential value is lost. Every match, and especially every mismatch, contains insights that are relevant for the future.

During the hackathon, matching was therefore explicitly approached as a learning process, not just a selection moment. What does a decision reveal about the skills that are currently in demand? Which expectations keep resurfacing? And where do we see structural gaps emerging between what organizations need and what profiles are actually offering today?

By capturing feedback and expectations, a richer picture emerges over time. Not only of individual profiles, but of the broader landscape. Which competencies turn out to be scarce? Which skills are highly context-dependent? And where is there potential that is currently underutilized?

This opens the door to a different view of talent. Instead of repeatedly searching externally for “the perfect profile”, space is created to look internally at growth and development. Matching then becomes not just a way to find the right person, but also a way to support people more deliberately in their next steps.

Technology can again play a supporting role here, for example by making patterns visible and offering suggestions. Not as a mandate, but as input for conversation. In that way, learning does not become a separate HR initiative, but a natural follow-up to concrete questions from practice.

In consultancy, we see this principle come back often. Project experiences generate insights that, when used well, help guide further development. By linking matching to learning, a continuum emerges instead of isolated moments. That makes organizations more resilient and talent more sustainably deployable.

The role of AI: supporting without deciding

AI played a clear role in the hackathon concept, but never as the final decision-maker. That was a deliberate choice. Talent matching remains human work. Context, nuance, and trust cannot be fully captured in models or scores.

Where AI does add value is in supporting human judgment. By bringing together information that is currently fragmented, by making patterns visible, and by adding structure to profiles, expectations, and feedback. Not to automate decisions, but to better inform conversations.

Transparency is crucial here. When AI summarizes, compares, or suggests, it must be clear what those outputs are based on. That way, people remain in control and there is no false sense of objectivity. AI does not become an arbiter, but an assistant that helps people get to the core more quickly.

In consultancy, this is where we see the greatest value. Not in replacing decisions, but in creating space. Space for better questions, sharper trade-offs, and more focus on what truly matters. In talent matching, that means spending less time searching and interpreting, and more time on dialogue and choice.

What this huddle says about how we work at Pàu

This hackathon did not result in a product, but in something more valuable: a sharper understanding of the problem. And that is characteristic of how we approach digital challenges at Pàu.

We do not start with technology, but with people and processes. We slow down to understand, so that what we build later genuinely adds value. We use hackathons as a method to test assumptions, share insights, and make complexity manageable.

For organizations struggling with talent matching, as well as with other complex processes, there is a broader lesson here. Progress does not start with automation, but with structure. Not with speeding up, but with clarifying.

Talent matching will always remain human work. But human work can be supported by better flows, sharper preparation, and deliberate choices. That is what this huddle taught us. And that is how we at Pàu continue to look at digital change.

This article was written by:
Service Designer
