Saturday, 4 April 2026

AI benchmarks are broken. Here’s what we need instead.

Across the organizations where this approach has emerged, the first step is shifting the unit of analysis.

For example, in one UK hospital system between 2021 and 2024, the question expanded from whether a medical AI application improves diagnostic accuracy to how the presence of AI within the hospital's multidisciplinary teams affects not only accuracy but also coordination and deliberation. The hospital specifically compared coordination and deliberation in teams that used AI with teams that did not. Multiple stakeholders, within and outside the hospital, agreed on metrics such as how AI influences collective reasoning, whether it surfaces overlooked considerations, whether it strengthens or weakens coordination, and whether it changes established risk and compliance practices.

This shift is fundamental. It matters most in high-stakes contexts, where system-level effects count for more than task-level accuracy. It also matters for the economy: it may help recalibrate inflated expectations of sweeping productivity gains, which are so far predicated largely on the promise of improving individual task performance.

Once that foundation is set, human-AI collaboration (HAIC) benchmarking can begin to take on the element of time.

Today’s benchmarks resemble school exams—one-off, standardized tests of accuracy. But real professional competence is assessed differently. Junior doctors and lawyers are evaluated continuously inside real workflows, under supervision, with feedback loops and accountability structures. Performance is judged over time and in a specific context, because competence is relational. If AI systems are meant to operate alongside professionals, their impact should be judged longitudinally, reflecting how performance unfolds over repeated interactions. 

I saw this aspect of HAIC applied in one of my humanitarian-sector case studies. Over 18 months, an AI system was evaluated within real workflows, with particular attention to how detectable its errors were—that is, how easily human teams could identify and correct them. This long-term “record of error detectability” meant the organizations involved could design and test context-specific guardrails to promote trust in the system, despite the inevitability of occasional AI mistakes.
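As an illustration only (the case study does not specify how the record was computed), a longitudinal record of error detectability might be sketched as a rate tracked over successive review windows. The `Interaction` schema and its field names here are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Interaction:
    """One logged human-AI interaction (hypothetical schema)."""
    ai_was_wrong: bool       # ground truth, established after the fact
    team_caught_error: bool  # did the human team flag the error in time?

def detectability_rate(log: list[Interaction]) -> float:
    """Share of AI errors that the human team identified and corrected."""
    errors = [i for i in log if i.ai_was_wrong]
    if not errors:
        return 1.0  # no errors occurred in this window
    return sum(i.team_caught_error for i in errors) / len(errors)

# One review window: three AI errors, two of which the team caught.
window = [
    Interaction(ai_was_wrong=True, team_caught_error=True),
    Interaction(ai_was_wrong=True, team_caught_error=False),
    Interaction(ai_was_wrong=False, team_caught_error=False),
    Interaction(ai_was_wrong=True, team_caught_error=True),
]
print(round(detectability_rate(window), 3))  # 2 of 3 errors caught -> 0.667
```

Computing this rate per window over many months, rather than once, is what turns a snapshot metric into the kind of longitudinal record that lets an organization calibrate guardrails around the errors its teams are least likely to catch.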

A longer time horizon also makes visible the system-level consequences that short-term benchmarks miss. An AI application may outperform a single doctor on a narrow diagnostic task yet fail to improve multidisciplinary decision-making. Worse, it may introduce systemic distortions: anchoring teams too early in plausible but incomplete answers, adding to people's cognitive workloads, or generating downstream inefficiencies that offset any speed gains at the point of the AI's use. These knock-on effects, often invisible to current benchmarks, are central to understanding real impact.

Admittedly, the HAIC approach promises to make benchmarking more complex, resource-intensive, and harder to standardize. But continuing to evaluate AI in sanitized conditions detached from the world of work will leave us misunderstanding what it truly can and cannot do for us. To deploy AI responsibly in real-world settings, we must measure what actually matters: not just what a model can do alone, but what it enables, or undermines, when humans and teams in the real world work with it.

Angela Aristidou is a professor at University College London and a faculty fellow at the Stanford Digital Economy Lab and the Stanford Human-Centered AI Institute. She speaks, writes, and advises about the real-life deployment of artificial-intelligence tools for public good.

