K. Tarkan Batgün arrives at this interview on a special day: Spain and Turkey are facing each other in a match he knows well from his dual perspective as a global analyst and a Turkish expert in artificial intelligence applied to soccer. CEO of Comparisonator, a platform that contextualizes performance, compares players across leagues, and simulates how they would adapt to new competitive environments, he has worked in clubs, agencies, and international consulting firms. He created Bursaspor’s ‘Scouting Laboratory’, served on the board of Altınordu FK, advised companies such as Wyscout and SoccerLab, and was responsible for NIKE Türkiye’s scouting program for six years.
From this multifaceted perspective, he argues that context is the key to any data and that AI only makes sense if it helps to make better decisions. In this conversation, he explains how his technology translates soccer between leagues, detects invisible risks, and avoids million-dollar signings that could go wrong.
Question. You have worked on four continents and always insist that data without context is useless. What has been the biggest culture shock that has forced you to completely reinterpret a piece of data or a player’s profile?
Answer. Working on four continents taught me one thing very early on: the same number can mean completely different things depending on where it comes from. And the biggest culture shock, the moment that really forced me to reinterpret the data, came when I moved from the structured soccer of Australia to the emotional, chaotic, and high-intensity environment of Turkey.
Let me give you a specific example: In Australia, I analyzed a midfielder who had excellent passing accuracy: 92-93%. In that league, this usually indicates intelligence, patience, and well-trained positional play. But when I returned to Turkey and applied the same logic, I realized something shocking: a passer with 92% in the Turkish league is often not creative at all. They may simply be avoiding risks, playing backwards, or releasing the ball immediately due to pressure.
That was the moment I understood that context dictates the truth, and it prompted me to create Comparisonator as a contextual engine for sporting directors, coaches, and recruiters: to reinterpret the numbers through the prism of the league’s pace, to adjust performance to tactical style, to understand how a player performs outside his environment, to help clubs evaluate talent globally without falling into misleading statistical traps.
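The contextual reinterpretation he describes, judging a number against its own league rather than in the abstract, can be sketched roughly like this. All the figures and league baselines below are invented for illustration; this is not Comparisonator's actual model.

```python
# Hypothetical sketch: score a raw stat against its own league's
# distribution (z-score) instead of comparing raw values across leagues.
from statistics import mean, stdev

def league_adjusted(value: float, league_values: list[float]) -> float:
    """Z-score of a player's stat within their league's distribution."""
    mu, sigma = mean(league_values), stdev(league_values)
    return (value - mu) / sigma

# The same 92% pass accuracy reads very differently in two leagues
# (both baselines are made-up illustrative samples):
australia = [84, 86, 87, 88, 89, 90, 91]   # structured, possession-friendly
turkey    = [88, 90, 91, 92, 93, 94, 95]   # high pressure, many safe passes

print(league_adjusted(92, australia))  # well above the league norm
print(league_adjusted(92, turkey))     # roughly average for that league
```

The point of the sketch is only that the reference distribution, not the raw number, carries the meaning.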
“The same number can mean totally different things depending on the country it comes from.”
Q. Your career combines club, agency, consulting, and teaching. What did you learn in each of those roles that you now apply directly to the design of Comparisonator’s artificial intelligence?
A. Each stage of my career gave me a different perspective on soccer, and today, all of those perspectives are directly integrated into Comparisonator’s AI.
From the club environment, I learned that decision-makers don’t have time; they need clarity. They don’t want ‘big data’; they want to know whether this player fits our style or not. That’s why our AI behaves more like a decision-making support advisor than a statistics machine.
From the agency world, I learned that talent trajectories are as important as talent itself. From consulting, I learned that every club has a different reality. That’s why Comparisonator’s AI adapts to the user. It learns the club’s style, needs, and priorities, and tailors its recommendations accordingly.
From teaching and conferences, I learned that understanding comes from explanation, not numbers. That’s why we created CompaGPT: an AI that explains soccer data to humans as an experienced scout or coach would.
Q. At Bursaspor, you created the ‘Scouting Lab’. What part of that idea is still relevant today, and what has become completely obsolete with current AI?
A. Thanks to my mentors Christoph Daum and his assistant Rudi Verkempinck, the Bursaspor Scouting Lab was my first attempt to create a systematic, evidence-based way to evaluate players. Many parts of that idea are still relevant today, but others have been completely transformed by modern AI. Let’s say the ‘Scouting Lab’ was the seed. The methodology (structure, clarity, collaboration) is still relevant. But everything manual, repetitive, or subjective has been surpassed by AI. Today, Comparisonator is the Scouting Lab transformed into a global, dynamic, and environment-aware intelligence engine.
Q. Many clubs believe they are using data, but in reality they are only looking to confirm their preconceived opinions. How much noise do these biases generate in the modern scouting process?
A. Bias is the biggest hidden cost of modern scouting, and it generates much more noise than clubs realize. Many clubs think they are using data, but in reality they are using numbers to justify decisions they have already made emotionally. This creates three major problems: you stop discovering new players, because if data is only used to confirm an opinion, you never question your first impression and you never discover unexpected profiles; you filter out the truth, as confirmation bias causes clubs to ignore red flags; and you lose your competitive advantage, because if all clubs use data to support pre-existing beliefs, they all end up signing the same players.
“The biggest hidden cost of modern scouting is bias: it distorts, limits, and wastes talent.”
Q. When a Comparisonator report contradicts the intuition of a coach or head scout, how is that conflict usually resolved? Who is more often wrong?
A. When data and intuition disagree, the first rule is simple: don’t choose either option, investigate. A coach sees things that data cannot see: body language, personality, behavior in training… Comparisonator sees things that a coach cannot see: adaptation to the league, tactical stress, hidden risk signals…
In my experience, when conflicts arise, environmental projection (how the player will adapt to the league and the system) is usually where intuition underestimates risk. That is precisely where Comparisonator adds value: it does not replace human judgment, but protects it from blind spots.
So who is more often wrong? Usually, the party that ignores context. And in modern soccer, context is non-negotiable.
Q. Standardization across leagues is one of the biggest challenges in the industry. Which competition presents the most ‘resistance’ to the algorithm and why?
A. The league that creates the most resistance to any algorithm is the one where soccer is the least standardized, where the pace, structure, and tactical discipline vary greatly within a single game.
For us, these tend to be leagues with large differences in pitch quality, unpredictable pace of play, inconsistent defensive organization, and extreme emotional momentum. Standardization is the problem; contextual intelligence is the solution.
Q. You talk a lot about AI points, trends, consistency, and functional role. Of all these signals, which one best predicts a player’s future progression?
A. The most reliable indicator of a player’s future progression is the consistency of their performance in different environments. AI points, trends, and role metrics are important, but the real signal is this: Does the player continue to perform when the context changes? Different pace, different pressure, different tactical demands, different quality of opponent.
“Virtual Transfer has already prevented signings that would have cost clubs millions.”
Players who maintain their performance in multiple environments almost always progress. Players who collapse outside their comfort zone almost never do. That’s why Comparisonator focuses so much on performance stability, league translation, adaptability indicators, and role behavior under pressure.
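A "performance stability" signal of the kind he describes could be sketched as the spread of a player's averages across different match contexts. The grouping and ratings below are invented for illustration, not Comparisonator's metric.

```python
# Illustrative sketch: how much a player's output varies across
# contexts (top vs. bottom opposition, away games, etc.).
from statistics import mean, pstdev

def stability(ratings_by_context: dict[str, list[float]]) -> float:
    """Lower is steadier: spread of per-context averages
    relative to the overall mean (coefficient of variation)."""
    context_means = [mean(v) for v in ratings_by_context.values()]
    return pstdev(context_means) / mean(context_means)

# Made-up match ratings for two hypothetical players:
steady  = {"vs_top": [7.0, 7.2], "vs_bottom": [7.3, 7.1], "away": [7.0, 7.2]}
streaky = {"vs_top": [5.0, 5.4], "vs_bottom": [8.5, 8.9], "away": [5.2, 5.6]}

print(stability(steady))   # small spread: performs everywhere
print(stability(streaky))  # large spread: collapses outside comfort zone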
Q. Virtual Transfer allows you to simulate a player’s performance in another league. Do you have any documented cases where the model has prevented a club from making a bad signing?
A. Yes, several, but I can’t reveal the names of the clubs or players. What I can say is this: Virtual Transfer has already saved clubs millions. A recent case involved a highly sought-after striker from a fast-paced, open league. His raw numbers were spectacular: dribbles, progressive runs, expected goals… Everything suggested he was a must-have signing.
But when we ran him through Virtual Transfer and simulated his performance in one of the top five European leagues, two red flags immediately appeared: his efficiency dropped by almost 50% under increased defensive pressure, and his decision-making slowed significantly in structured tactical environments. The club halted the transfer. Two months later, he signed for another European team and struggled in precisely the areas our model had predicted.
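The logic of that simulation, projecting output under tougher conditions and flagging large drops, might be sketched as below. The multipliers and the red-flag threshold are assumptions invented for this example; the real Virtual Transfer model is not public.

```python
# Toy projection in the spirit of what's described: scale a per-90 stat
# by assumed context factors of the target league, then flag big drops.

def project(stat: float, pressure_factor: float, tempo_factor: float) -> float:
    """Project a per-90 stat into a tougher league via simple multipliers."""
    return stat * pressure_factor * tempo_factor

striker_dribbles_p90 = 6.0                 # in a fast, open league (invented)
projected = project(striker_dribbles_p90,  # into a top-five league
                    pressure_factor=0.65,  # tighter defensive pressure (assumed)
                    tempo_factor=0.80)     # slower, more structured play (assumed)

drop = 1 - projected / striker_dribbles_p90
if drop > 0.4:                             # red-flag threshold (assumed)
    print(f"Red flag: projected output falls {drop:.0%}")
```

With these made-up factors the projected output falls by roughly half, the kind of drop the interview describes as a red flag.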
This is the main goal of Virtual Transfer: not to say “no,” but to reveal the truth about how a player performs outside their comfort zone. In modern recruitment, that clarity can mean the difference between a successful signing and a very costly mistake.
Q. AI platforms and models promise to eliminate bias, but they can also generate it. What has been the biggest ‘false positive’ or system failure that has forced you to revise the model?
A. The biggest false positive we’ve had came from a player who looked exceptional because his league environment artificially inflated his strengths. He played in a competition with very low defensive pressure, wide-open spaces, chaos in transitions, and extremely high ball recovery zones.
On paper, his metrics were elite. Our initial model ranked him very highly in his position. But when he moved to a more structured league, everything fell apart. Not because he lacked talent, but because his environment had created a statistical illusion.
“AI doesn’t become dangerous by making mistakes, but by not understanding context.”
That was a turning point for us. We realized that the model needed deeper weighting. We rebuilt the engine so that environmental distortion is now one of the first things the system checks.
The lesson was simple: AI doesn’t become dangerous when it makes mistakes; AI becomes dangerous when it doesn’t understand context. That failure made Comparisonator stronger, more cautious, and much more adaptable.
Q. You’ve worked in undervalued markets and in the top leagues. What common pattern do you find in players who adapt best when they make a sudden competitive leap?
A. Across all continents, the players who adapt best after a big competitive leap share the same pattern: they learn fast; they are not merely fast players. The players who succeed are those who can recalibrate their habits almost immediately when the environment changes.
Q. More and more clubs are looking for the next Haaland before he bursts onto the scene. Is it realistic to think that AI can anticipate generational talent, or are we still looking for unicorns?
A. AI can identify extraordinary patterns early on, but it can’t manufacture a Haaland. Generational talent is not predicted, it is confirmed over time. What AI can do is recognize the signs that often appear before a big leap. A unicorn becomes a unicorn because of environment, training, personality, and mindset, not just metrics. AI finds the possibilities. Human scouting finds the destination.
Q. After 20 years in soccer and technology, what uncomfortable truth do you think the scouting industry needs to hear if it wants to take the next step?
A. That most clubs don’t have a scouting problem, they have a decision-making problem. Clubs collect tons of reports, videos, statistics, and opinions… but when the moment of truth arrives, many still make decisions based on emotions, politics, hierarchy, or panic.
The next step is not more data. It’s more discipline in how decisions are made. And that’s precisely why we created Comparisonator: not to replace scouts, but to force decisions to be clearer, fairer, and harder to manipulate.