AI is rewiring how the world’s best Go players think

Ten years ago AlphaGo, Google DeepMind’s AI program, stunned the world by defeating the South Korean Go player Lee Sedol. And in the years since, AI has upended the game. It’s overturned centuries-old principles about the best moves and introduced entirely new ones. Players now train to replicate AI’s moves as closely as they can rather than inventing their own, even when the machine’s thinking remains mysterious to them. Today, it is essentially impossible to compete professionally without using AI. Some say the technology has drained the game of its creativity, while others think there is still room for human invention. Meanwhile, AI is democratizing access to training, and more female players are climbing the ranks as a result. 

For Shin Jin-seo, the top-ranked Go player in the world, AI is an invaluable training partner. Every morning, he sits at his computer and opens a program called KataGo. Nicknamed “Shintelligence” for how closely his moves mimic AI’s, he traces the glowing “blue spot” that represents the program’s suggestion for the best next move, rearranging the stones on the digital grid to try to understand the machine’s thinking. “I constantly think about why AI chose a move,” he says.
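KataGo is queried programmatically through a JSON-based analysis engine, which is how training tools surface the "blue spot" suggestion described above. The sketch below builds one such query; the field names follow KataGo's analysis protocol, but the position (a single opening move) and the visit budget are illustrative assumptions, and actually running it requires a local KataGo binary and model.

```python
import json

# Sketch of a query to KataGo's JSON analysis engine (started with
# `katago analysis -config <cfg> -model <model>`, which reads one JSON
# object per line on stdin). The position here is an assumed example.
query = {
    "id": "opening-1",
    "moves": [["B", "Q16"]],  # Black opens on the 4-4 point
    "rules": "korean",
    "komi": 6.5,
    "boardXSize": 19,
    "boardYSize": 19,
    "analyzeTurns": [1],      # ask for analysis after move 1
    "maxVisits": 500,         # search effort per query
}
print(json.dumps(query))
```

The engine replies with a JSON object per analyzed turn; its `moveInfos` list ranks candidate moves by search results, and the top-ranked entry is the suggestion a player like Shin would study.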

When training for a match, Shin spends most of his waking hours poring over KataGo. “It’s almost like an ascetic practice,” he says. According to a study in 2022 by the Korean Baduk League, Shin’s moves match AI’s 37.5% of the time, well above the 28.5% average the study found among all players.

“My game has changed a lot,” says Shin, “because I have to follow the directions suggested by AI to some extent.” The Korea Baduk Association says it has reached out to Google DeepMind in the hopes of arranging a match between Shin and AlphaGo, to commemorate the 10th anniversary of its victory over Lee. A spokesperson for Google DeepMind said the company could not provide information at this time. But if a new match does happen, Shin, who has trained on more advanced AI programs, is optimistic that he’d win. “AlphaGo still had some flaws then, so I think I could beat it if I target those weaknesses,” he says.

AI rewrites the Go playbook

Go is an abstract strategy board game invented in China more than 2,500 years ago. Two players take turns placing black and white stones on a 19×19 grid, aiming to conquer territory by surrounding their opponent’s stones. It’s a game of striking mathematical complexity. The number of possible board configurations, roughly 10^170, dwarfs the number of atoms in the universe. If chess is a battle, Go is a war. You suffocate your enemy in one corner while fending off an invasion in another.
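The scale of that claim is easy to sanity-check. Counting raw stone placements gives a loose upper bound of 3^361 (each of the 361 points is empty, black, or white); the exact count of legal positions, about 2.08 × 10^170, was computed by John Tromp in 2016. Even the loose bound dwarfs the commonly cited ~10^80 atoms in the observable universe:

```python
# Each of the 19*19 = 361 points can be empty, black, or white, so the raw
# configuration count is 3**361. Not all of these are legal (stones with no
# liberties are removed), but even this loose upper bound overwhelms the
# ~10**80 atoms estimated in the observable universe.
raw_configurations = 3 ** 361
print(len(str(raw_configurations)))   # decimal digits: 173
print(raw_configurations > 10 ** 80)  # True
```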

To train AI to play Go, a vast trove of human Go moves is fed into a neural network, a computing system that mimics the web of neurons in the human brain. AlphaGo, which was later christened AlphaGo Lee after its victory over Lee Sedol, was trained on 30 million Go moves and refined by playing millions of games against itself. In 2017, its successor, AlphaGo Zero, picked up Go from scratch. Without studying any human games, it learned by playing against itself, with moves based only on the rules of the game. The blank-slate approach proved more powerful, unconstrained by the limits of human knowledge. After three days of training, it beat AlphaGo Lee 100 games to zero.
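The blank-slate self-play loop can be caricatured in a few lines. This is emphatically not AlphaGo Zero, which pairs a deep neural network with Monte Carlo tree search; it is a toy tabular learner on a hypothetical 3×3 board with a made-up winning condition. It exists only to show the loop's shape: play games against yourself using the current policy, then shift the policy toward the moves the winner made.

```python
import random

random.seed(0)  # reproducible toy run

SIZE = 3
EMPTY, BLACK, WHITE = 0, 1, 2

def legal_moves(board):
    return [i for i, v in enumerate(board) if v == EMPTY]

def play_selfplay_game(policy):
    """Both sides sample moves from the same shared policy."""
    board = [EMPTY] * (SIZE * SIZE)
    history = []
    player = BLACK
    while legal_moves(board):
        moves = legal_moves(board)
        weights = [policy.get(m, 1.0) for m in moves]
        move = random.choices(moves, weights=weights)[0]
        board[move] = player
        history.append((player, move))
        player = WHITE if player == BLACK else BLACK
    # Toy "scoring": whoever holds the centre wins. A stand-in for real
    # Go rules, which this sketch makes no attempt to implement.
    winner = board[4]
    return history, winner

def train(policy, games=200):
    for _ in range(games):
        history, winner = play_selfplay_game(policy)
        for player, move in history:
            # Reinforce the winner's moves, discourage the loser's.
            delta = 0.1 if player == winner else -0.05
            policy[move] = max(0.01, policy.get(move, 1.0) + delta)
    return policy

policy = train({})
best = max(policy, key=policy.get)
print(best)  # the centre point (index 4) ends up most preferred
```

The real system replaces the lookup table with a deep network, the random sampling with tree search guided by that network, and the crude win/lose reinforcement with gradient updates, but the self-improvement loop, games generated by the current player training the next one, is the same idea.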

