AI benchmarks are broken. Here’s what we need instead.

Across the organizations where this approach has started to take hold, the first step is shifting the unit of analysis.

For example, in one UK hospital system between 2021 and 2024, the question expanded from whether a medical AI application improves diagnostic accuracy to how the presence of AI within the hospital's multidisciplinary teams affects not only accuracy but also coordination and deliberation. The hospital assessed coordination and deliberation directly, comparing teams that worked with AI against teams that did not. Stakeholders inside and outside the hospital agreed on metrics such as how AI influences collective reasoning, whether it surfaces overlooked considerations, whether it strengthens or weakens coordination, and whether it changes established risk and compliance practices.

This shift is fundamental. It matters most in high-stakes contexts, where system-level effects count for more than task-level accuracy. It also matters for the economy: it may help recalibrate inflated expectations of sweeping productivity gains, which so far rest largely on the promise of improving individual task performance.

Once that foundation is set, HAIC benchmarking can begin to take on the element of time. 

Today’s benchmarks resemble school exams—one-off, standardized tests of accuracy. But real professional competence is assessed differently. Junior doctors and lawyers are evaluated continuously inside real workflows, under supervision, with feedback loops and accountability structures. Performance is judged over time and in a specific context, because competence is relational. If AI systems are meant to operate alongside professionals, their impact should be judged longitudinally, reflecting how performance unfolds over repeated interactions. 

I saw this aspect of HAIC applied in one of my humanitarian-sector case studies. Over 18 months, an AI system was evaluated within real workflows, with particular attention to how detectable its errors were—that is, how easily human teams could identify and correct them. This long-term “record of error detectability” meant the organizations involved could design and test context-specific guardrails to promote trust in the system, despite the inevitability of occasional AI mistakes.
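The article doesn't describe how that record was kept, but the underlying metric is easy to picture. Below is a minimal Python sketch, under stated assumptions, of one way to track error detectability over repeated evaluation windows; the `ReviewedDecision` fields, the quarterly grouping, and the function name are illustrative inventions, not details from the case study.

```python
from dataclasses import dataclass

# Hypothetical record of one AI-assisted decision later reviewed by a human team.
# The field names are illustrative; the case study does not specify a data model.
@dataclass
class ReviewedDecision:
    month: int               # months since deployment (0..17 for an 18-month study)
    ai_was_wrong: bool       # ground truth, established after the fact
    team_caught_error: bool  # did the team flag the error before it propagated?

def detectability_by_quarter(records: list[ReviewedDecision]) -> dict[int, float]:
    """Share of AI errors the human team caught, grouped by quarter.

    A rising curve suggests teams are learning where the system fails;
    a flat or falling one is a signal that guardrails are needed.
    """
    caught: dict[int, int] = {}
    total: dict[int, int] = {}
    for r in records:
        if not r.ai_was_wrong:
            continue  # detectability is defined only over actual AI errors
        quarter = r.month // 3
        total[quarter] = total.get(quarter, 0) + 1
        caught[quarter] = caught.get(quarter, 0) + int(r.team_caught_error)
    return {q: caught[q] / n for q, n in sorted(total.items())}
```

A longitudinal series like this, rather than a single pass rate, is what lets an organization decide where human review remains essential and where the system can run with lighter oversight.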

A longer time horizon also makes visible the system-level consequences that short-term benchmarks miss. An AI application may outperform a single doctor on a narrow diagnostic task yet fail to improve multidisciplinary decision-making. Worse, it may introduce systemic distortions: anchoring teams too early in plausible but incomplete answers, adding to people's cognitive workloads, or generating downstream inefficiencies that offset any speed gains at the point of use. These knock-on effects, often invisible to current benchmarks, are central to understanding real impact.

The HAIC approach admittedly promises to make benchmarking more complex, more resource-intensive, and harder to standardize. But continuing to evaluate AI in sanitized conditions, detached from the world of work, will leave us misunderstanding what it truly can and cannot do for us. To deploy AI responsibly in real-world settings, we must measure what actually matters: not just what a model can do alone, but what it enables, or undermines, when humans and teams work with it.

Angela Aristidou is a professor at University College London and a faculty fellow at the Stanford Digital Economy Lab and the Stanford Human-Centered AI Institute. She speaks, writes, and advises about the real-life deployment of artificial-intelligence tools for public good.
