
风萧萧's personal space http://www.shuicheng.ca/bbs/?61910

Blog

Google’s AlphaGo AI beats the world’s elite Go players

519 views | 2017-9-17 19:43 | Category: AI



AlphaGo - Wikipedia

AlphaGo is a narrow AI computer program that plays the board game Go.[1] It was developed by Alphabet Inc.'s Google DeepMind in London.

In October 2015, it became the first computer Go program to beat a human professional Go player without handicaps on a full-sized 19×19 board.[2][3] In March 2016, it beat Lee Sedol in a five-game match, the first time a computer Go program had beaten a 9-dan professional without handicaps.[4] Although it lost to Lee Sedol in the fourth game, Lee resigned the final game, giving a final score of 4 games to 1 in favour of AlphaGo. In recognition of the victory, AlphaGo was awarded an honorary 9-dan by the Korea Baduk Association. It was chosen by Science as one of the Breakthrough of the Year runners-up on 22 December 2016.[5]

At the 2017 Future of Go Summit, AlphaGo beat Ke Jie, the world No. 1 ranked player at the time, in a three-game match, after which it was awarded professional 9-dan by the Chinese Weiqi Association.[6] After the match against Ke Jie, AlphaGo retired, while DeepMind continues AI research in other areas.[7]

AlphaGo uses a Monte Carlo tree search algorithm to find its moves, guided by knowledge previously "learned" by machine learning, specifically an artificial neural network (a deep learning method) trained extensively on both human and computer play.
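The combination of tree search and simulated playouts mentioned above can be illustrated with a bare-bones Monte Carlo tree search. This is an invented sketch on a toy take-1-or-2-stones game, not DeepMind's code; all names here (`Node`, `mcts`, `legal_moves`) and the game itself are assumptions made purely for illustration.

```python
import math
import random

# Toy game used only for illustration (NOT Go): players alternately take
# 1 or 2 stones from a pile; whoever takes the last stone wins.
def legal_moves(stones):
    return [m for m in (1, 2) if m <= stones]

class Node:
    def __init__(self, stones, player, parent=None, move=None):
        self.stones, self.player = stones, player  # pile size, player to move
        self.parent, self.move = parent, move
        self.children = []
        self.visits, self.wins = 0, 0.0
        self.untried = legal_moves(stones)

    def ucb1(self, c=1.4):
        # Balance exploitation (win rate) against exploration (rarely
        # visited children) when descending the tree.
        return (self.wins / self.visits
                + c * math.sqrt(math.log(self.parent.visits) / self.visits))

def mcts(stones, player, iters=2000):
    root = Node(stones, player)
    for _ in range(iters):
        node = root
        # 1. Selection: follow UCB1 through fully expanded nodes.
        while not node.untried and node.children:
            node = max(node.children, key=Node.ucb1)
        # 2. Expansion: add one previously untried child.
        if node.untried:
            m = node.untried.pop()
            child = Node(node.stones - m, 3 - node.player, parent=node, move=m)
            node.children.append(child)
            node = child
        # 3. Simulation: play random moves to the end of the game.
        s, p = node.stones, node.player
        while s > 0:
            s -= random.choice(legal_moves(s))
            p = 3 - p
        winner = 3 - p  # the player who took the last stone
        # 4. Backpropagation: a node's win count is credited from the
        #    point of view of the player who moved INTO it.
        while node is not None:
            node.visits += 1
            if winner != node.player:
                node.wins += 1
            node = node.parent
    # Recommend the most-visited move at the root.
    return max(root.children, key=lambda ch: ch.visits).move
```

AlphaGo's key difference is that it replaces these random playouts and raw win counts with evaluations from its trained neural networks, which is what made the search tractable on a 19×19 board.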



After beating the world’s elite Go players, Google’s AlphaGo AI is retiring


https://techcrunch.com/2017/05/27/googles-alphago-ai-is-retiring/

Google’s AlphaGo — the AI developed to tackle the world’s most demanding strategy game — is stepping down from competitive matches after defeating the world’s best talent. The latest to succumb is Go’s top-ranked player, Ke Jie, who lost 3-0 in a series hosted in China this week.

The AI, developed by London-based DeepMind, which was acquired by Google for around $500 million in 2014, also overcame a team of five top players during a week of matches. AlphaGo first drew headlines last year when it beat former Go world champion Lee Sedol, and the China event took things to the next level with matches against 19-year-old Jie, and doubles with and against other top Go pros.

Challengers defeated, AlphaGo has cast its last competitive stone, DeepMind CEO Demis Hassabis explained.

This week’s series of thrilling games with the world’s best players, in the country where Go originated, has been the highest possible pinnacle for AlphaGo as a competitive program. For that reason, the Future of Go Summit is our final match event with AlphaGo.

The research team behind AlphaGo will now throw their energy into the next set of grand challenges, developing advanced general algorithms that could one day help scientists as they tackle some of our most complex problems, such as finding new cures for diseases, dramatically reducing energy consumption, or inventing revolutionary new materials.

Go is revered as the planet's most demanding strategy game, and that's why it made for an ideal field in which to develop AI technology and pit machines against humans. Beyond Google, Tencent is among the other tech firms to have unleashed AIs on the game. While it whips up curiosity and attention, the game simply serves as a stepping stone for future plans, which is why DeepMind says it is moving on.

Indeed, the British company has already made a foray into more practical everyday solutions. Last year, it agreed to a data-sharing partnership with the UK's National Health Service; however, the partnership has been criticized for giving a for-profit company access to personally identifiable health data of around 1.6 million NHS patients. The original arrangement remains under investigation by the UK's data protection watchdog, the ICO.

Those snafus aren’t a reflection on the technology itself, however, and Hassabis remains bullish on the impact his firm can make.

“If AI systems prove they are able to unearth significant new knowledge and strategies in these domains too, the breakthroughs could be truly remarkable. We can’t wait to see what comes next,” he said.

While AlphaGo is bowing out at the top, it isn't done with Go altogether. DeepMind is planning to publish a final review paper on how the AI has developed since its matches with Lee Sedol last year. It is also developing a teaching tool to help newcomers learn the ropes of the highly complicated game, and to enable more experienced hands to study the new and innovative moves that AlphaGo has introduced. Top players, even Ke Jie himself, studied up on AlphaGo's moves and added some to their arsenal.


Now it's 2-0 to AlphaGo! Google's DeepMind computer takes the second victory against Lee Sedol - and if it wins the third match, it takes the $1 million prize


  • http://www.dailymail.co.uk/sciencetech/article-3485328/Now-s-2-0-AlphaGo-Google-s-DeepMind-computer-takes-second-victory-against-human-champion-Lee-Sedol.html

  • Google's DeepMind is facing off against a human world champion in Seoul
  • Its AlphaGo program recently beat a Chinese grandmaster in a tournament
  • The series of five games is being streamed live on YouTube
  • Winner of the man versus machine match up will scoop a $1 million prize

Google has confirmed its AlphaGo computer has taken another victory against human opponent and champion Lee Sedol. 

It is the second of five matches pitting DeepMind's artificial intelligence program against the South Korean expert, with the winner taking home $1 million (£706,388).

DeepMind boss Demis Hassabis tweeted that the victory was 'hard to believe' and the game had been 'mega-tense'.





AlphaGo won the first match by resignation after 186 moves. 

While there are still three games left in the Challenge Match, this marks the first time in history that a computer program has defeated a top-ranked human Go player on a full 19x19 board with no handicap twice in a row. 

Lee Sedol said at the post-game press conference, 'I would like to express my respect to Demis and his team for making such an amazing program like AlphaGo. I am surprised by this result. But I did enjoy the game and am looking forward to the next one.'

The winner of the Challenge Match must win at least three of the five games in the tournament, so today's result does not set the final outcome. 



The next game will be March 12 at 1pm (4am GMT/8pm PT/11pm ET) Korea Standard Time, followed by games on March 13, and March 15. 

Go has been described as one of the 'most complex games ever devised by man' and has trillions of possible moves, but Google recently stunned the world by announcing its AI software had beaten one of the game's grandmasters.

FIVE MATCHES OF MAN VS MACHINE

DeepMind's AlphaGo program is facing world champion Lee Sedol over five matches of the ancient Chinese board game Go.

The program recently beat a Chinese grandmaster five games to nothing. 

The series, which begins on Wednesday 9 March, is streamed live on YouTube from Seoul.

The winner in the man versus machine challenge will take home $1 million (£706,388).

The games will take place each day at 1pm Korean time (4am GMT/8pm PT/11pm ET).

Programmers, tech fans and game strategists around the world need not miss a move, as the series will be streamed live from Seoul via YouTube.

Speaking last month, Sedol - who is currently ranked second in the world behind fellow South Korean Lee Chang Ho - said he is confident of victory.

'I have heard that Google DeepMind's AI is surprisingly strong and getting stronger, but I am confident that I can win at least this time,' he said. 

However, he later said: 'Having learned today how its algorithms narrow down possible choices, I have a feeling that AlphaGo can imitate human intuition to a certain degree.

'Now I think I may not beat AlphaGo by such a large margin like 5-0. It's only right that I'm a little nervous about the match.'  

The game, which was first played in China and is far harder than chess, had been regarded as an outstanding 'grand challenge' for artificial intelligence - until now. 


DeepMind's AlphaGo program is taking on world champion Lee Sedol (pictured left), from South Korea. The AlphaGo computer recently beat reigning European Go champion Fan Hui (pictured right) five games to zero

Professional player Lee Sedol (left) is pictured linking up with boss of Google's DeepMind, Demis Hassabis (pictured right). If the computer wins, Mr Hassabis said it will donate the winnings to charity


COMPUTERS PLAYING GAMES 

The first game mastered by a computer was noughts and crosses (also known as tic-tac-toe) in 1952.

Computers then won at checkers in 1994.

In 1997 Deep Blue famously beat Garry Kasparov at chess.

IBM's Watson bested two champions at Jeopardy in 2011, and in 2014 DeepMind algorithms learned to play dozens of Atari games just from the raw pixel inputs.

But until now, Go had thwarted AI researchers.

The result of the last tournament, which the machine won 5-0, provides hope that robots could perform as well as humans in areas as complex as disease analysis, but it may worry some who fear we could be outsmarted by the machines we create.

The computer is now taking on the world's best Go player with a cool $1 million (£701,607) prize pot up for grabs.

If the computer wins, its developer Demis Hassabis, boss of Google-owned DeepMind, said it will donate the winnings to charity.

'If we win the match in March, then that's sort of the equivalent of beating Kasparov in chess,' Hassabis told reporters in a press briefing on the Nature paper last month. 

'Lee Sedol is the greatest player of the past decade. I think that would mean AlphaGo would be better than any human at playing Go. Go is the ultimate game in AI research.' 

AlphaGo beat a Chinese grandmaster at the ancient game described as 'the most complex game ever devised by man'. An illustration (pictured) shows a traditional Go board, with half showing computer-calculated moves

In the game, two players take turns to place black or white stones on a square grid, with the goal being to dominate the board by surrounding the opponent's pieces.

Once placed, the stones can't be moved unless they are surrounded and captured by the other person's pieces.

It's been estimated there are 10 to the power of 700 possible ways a Go game can be played - more than the number of atoms in the universe.

By contrast, chess - a game at which artificial intelligence (AI) can already play at grandmaster level and famously defeated world champion Garry Kasparov 20 years ago - has about 10 to the power of 60 possible scenarios.
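The gap between those two numbers can be checked with back-of-the-envelope arithmetic. The branching factors and game lengths below are commonly cited rough figures, not exact counts, so this is only an order-of-magnitude sketch.

```python
import math

# Commonly cited rough figures (assumptions, not exact counts):
chess_branching, chess_plies = 35, 80    # ~35 legal moves, ~80-move games
go_branching, go_plies = 250, 150        # ~250 legal moves, ~150-move games

# Work in log10 so the huge numbers stay representable as floats.
chess_tree = chess_plies * math.log10(chess_branching)
go_tree = go_plies * math.log10(go_branching)

print(f"chess game tree ~ 10^{chess_tree:.0f}")   # roughly 10^124
print(f"go game tree    ~ 10^{go_tree:.0f}")      # roughly 10^360
print(f"go/chess ratio  ~ 10^{go_tree - chess_tree:.0f}")
```

Under these rough assumptions, the Go game tree is more than 10^200 times larger than the chess one, which is why brute-force search was never an option.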

HISTORY OF THE GAME OF GO - AND HOW TO PLAY IT

The game of Go originated in China more than 2,500 years ago. 

Confucius wrote about the game, and it is considered one of the four essential arts required of any true Chinese scholar. 

Played by more than 40 million people worldwide, the rules of the game are simple.

Players take turns to place black or white stones on a board, trying to capture the opponent's stones or surround empty space to make points of territory. 

The game is played primarily through intuition and feel and because of its beauty, subtlety and intellectual depth, it has captured the human imagination for centuries.


But as simple as the rules are, Go is a game of profound complexity. 

There are more than 10 to the power of 170 possible board positions - that's more than the number of atoms in the universe, and more than a googol (10 to the power of 100) times larger than chess.

This complexity is what makes Go hard for computers to play and therefore an irresistible challenge to artificial intelligence researchers, who use games as a testing ground to invent smart, flexible algorithms that can tackle problems, sometimes in ways similar to humans.

Until now, the most successful computer Go programs have played at the level of human amateurs and have not been able to defeat a professional player.

But the champion program, AlphaGo, uses 'value networks' to evaluate board positions and 'policy networks' to select moves.

These deep neural networks are trained through a combination of 'supervised learning' from human expert games and 'reinforcement learning' from games it plays against itself.

AlphaGo was developed by Google's DeepMind and signifies a major step forward in one of the great challenges in the development of AI - that of game-playing. 

The computer achieved a 99.8 per cent winning rate against other Go programs and defeated the three-times European Go champion and Chinese professional Fan Hui in a tournament by a clean sweep of five games to nil. 


BRITAIN'S VIEWS ON AI 

Research from online marketing firm Rocket Fuel recently found there is broad public optimism about AI.

The research showed that nearly half of Brits (48%) believe AI is a force for good. 

Just 10% of Brits believe AI is a force for evil or mostly evil. 

Some 42% of Brits are excited by AI or think it will solve big world problems. 

A fifth (21%) see it as a threat or are scared by AI, and 45% don’t believe AI will impact their job.

Toby Manning, treasurer of the British Go Association who was the referee, said: 'The games were played under full tournament conditions and there was no disadvantage to Fan Hui in playing a machine not a man.

'Google DeepMind are to be congratulated in developing this impressive piece of software.'

This is the first time a computer program has defeated a professional player in the full-sized game of Go with no handicap.

This feat was believed to be a decade away. 

President of the British Go Association Jon Diamond said: 'Following the Chess match between Gary Kasparov and IBM's Deep Blue in 1996 the goal of some Artificial Intelligence researchers to beat the top human Go players was an outstanding challenge - perhaps the most difficult one in the realm of games.

The program took on reigning three-time European Go champion Fan Hui at Google's London office. In a closed-doors match last October, AlphaGo won by five games to zero (the end positions are shown)

'It's always been acknowledged the higher branching factor in Go compared to Chess and the higher number of moves in a game made programming Go an order of magnitude more difficult.

'On reviewing the games against Fan Hui I was very impressed by AlphaGo's strength and actually found it difficult to decide which side was the computer, when I had no prior knowledge.

'Before this match the best computer programs were not as good as the top amateur players and I was still expecting that it would be at least five to 10 years before a program would be able to beat the top human players.  

HOW ALPHAGO WORKS: THE CHALLENGES OF BEATING A HUMAN 

Traditional AI methods, which construct a search tree over all possible positions, don't have a chance when it comes to winning at Go.

So DeepMind took a different approach by building a system, AlphaGo, that combines an advanced tree search with deep neural networks.

These neural networks take a description of the Go board as an input and process it through 12 different network layers containing millions of neuron-like connections.

One neural network called the 'policy network,' selects the next move to play, while the other neural network - the 'value network' - predicts the winner of the game.
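The two-headed split described above can be sketched in miniature. The toy below uses a single random linear layer per head purely to show the shape of each network's input and output; the real AlphaGo networks were deep convolutional networks trained on expert games, so every weight, size, and name here is an invented illustration, not DeepMind's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
BOARD = 19 * 19  # 361 points on a full-sized board

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# "Policy network": board features -> a probability for each of the 361
# points. One random linear layer stands in for the real deep network.
W_policy = rng.normal(scale=0.01, size=(BOARD, BOARD))
def policy(features):
    return softmax(W_policy @ features)

# "Value network": board features -> a scalar win-probability estimate.
w_value = rng.normal(scale=0.01, size=BOARD)
def value(features):
    return 1.0 / (1.0 + np.exp(-(w_value @ features)))  # sigmoid

# A random position: +1 black stone, -1 white stone, 0 empty point.
board = rng.integers(-1, 2, size=BOARD).astype(float)
move_probs = policy(board)   # policy head: distribution over moves
win_prob = value(board)      # value head: who is winning?
print("suggested move:", int(move_probs.argmax()),
      "win estimate:", float(win_prob))
```

The important point the sketch preserves is the division of labour: the policy head narrows the search to promising moves, while the value head lets the tree search stop and score a position without playing it out to the end.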

'We trained the neural networks on 30 million moves from games played by human experts, until it could predict the human move 57 per cent of the time,' Google said.

The previous record before AlphaGo was 44 per cent.


However, Google DeepMind's goal is to beat the best human players, not just mimic them.

To do this, AlphaGo learned to discover new strategies for itself, by playing thousands of games between its neural networks and adjusting the connections using a trial-and-error process known as reinforcement learning.
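That trial-and-error self-play loop can be illustrated on a toy game: a single move-preference table plays both sides, and moves that appear in winning games are nudged up while the loser's moves are nudged down. This is a hand-rolled sketch of the reinforcement-learning idea, not AlphaGo's actual training procedure (which adjusted neural-network weights, not a lookup table).

```python
import random

random.seed(1)

# Move-preference table for a toy game: from each pile size, the weight
# of taking 1 or 2 stones (whoever takes the last stone wins).
prefs = {n: {m: 1.0 for m in (1, 2) if m <= n} for n in range(1, 11)}

def pick(stones):
    moves, weights = zip(*prefs[stones].items())
    return random.choices(moves, weights=weights)[0]

# Self-play: the table plays against itself; the eventual winner's moves
# are reinforced, the loser's dampened - trial and error, nothing more.
for _ in range(5000):
    stones, player, history = 10, 1, []
    while stones > 0:
        m = pick(stones)
        history.append((stones, player, m))
        stones, player = stones - m, 3 - player
    winner = 3 - player  # the player who took the last stone
    for state, p, m in history:
        prefs[state][m] *= 1.05 if p == winner else 0.97

# With two stones left, taking both wins immediately; self-play should
# have discovered this strongly preferred move on its own.
print("preferred move with 2 stones left:", max(prefs[2], key=prefs[2].get))
```

No position was ever labelled "good" by a human: the preferences emerge purely from which moves ended up on the winning side, which is the essence of learning from self-play.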

Of course, all of this requires a huge amount of computing power and Google used its Cloud Platform.

To put AlphaGo to the test, the firm held a tournament between AlphaGo and the strongest other Go programs, including Crazy Stone and Zen.

AlphaGo won every game against these programs.

The program then took on reigning three-time European Go champion Fan Hui at Google's London office.

In a closed-doors match last October, AlphaGo won by five games to zero.

It was the first time a computer program had ever beaten a professional Go player.


