D
Let us all raise a glass to AlphaGo and the advance of artificial intelligence. AlphaGo, DeepMind’s Go-playing AI, just defeated the best Go-playing human, Lee Sedol. But as we drink to its success, we should also begin trying to understand what it means for the future.
The number of possible moves in a game of Go is so huge that, in order to win against a player like Lee, AlphaGo was designed to adopt a human-like style of gameplay by using a relatively recent development—deep learning. Deep learning uses large data sets, “machine learning” algorithms and deep neural networks to teach the AI how to perform a particular set of tasks. Rather than programming complex Go rules and strategies into AlphaGo, DeepMind designers taught AlphaGo to play the game by feeding it data based on typical Go moves. Then, AlphaGo played against itself, tirelessly learning from its own mistakes and improving its gameplay over time. The results speak for themselves.
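To make the two-stage process described above more concrete, here is a minimal toy sketch in Python: first the system imitates recorded human games, then it keeps adjusting itself through self-play. Every name in it (ToyPolicy, play_self_game, human_games) is hypothetical and for illustration only; it is not DeepMind’s actual AlphaGo implementation, which uses deep neural networks rather than the simple score table used here.

import random

class ToyPolicy:
    """Tiny stand-in for a learned policy: prefers moves seen in winning games."""
    def __init__(self):
        self.move_scores = {}  # move -> running score

    def choose(self, legal_moves):
        # Occasionally explore at random; otherwise pick the best-scoring known move.
        if random.random() < 0.1:
            return random.choice(legal_moves)
        scored = [(self.move_scores.get(m, 0.0), m) for m in legal_moves]
        return max(scored)[1]

    def update(self, moves, won):
        # Reward moves from winning games, penalise moves from losing ones.
        for m in moves:
            self.move_scores[m] = self.move_scores.get(m, 0.0) + (1.0 if won else -1.0)

def play_self_game(policy, legal_moves):
    """Play a toy 'game': the policy picks five moves and an arbitrary rule decides the winner."""
    moves = [policy.choose(legal_moves) for _ in range(5)]
    won = sum(len(m) for m in moves) % 2 == 0  # arbitrary toy outcome
    return moves, won

legal_moves = [f"move_{i}" for i in range(20)]
policy = ToyPolicy()

# Stage 1: imitate human games (fabricated records, all treated as worth imitating).
human_games = [(random.sample(legal_moves, 5), True) for _ in range(50)]
for moves, won in human_games:
    policy.update(moves, won)

# Stage 2: self-play, learning from its own wins and losses over time.
for _ in range(1000):
    moves, won = play_self_game(policy, legal_moves)
    policy.update(moves, won)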
Deep learning represents a shift in the relationship humans have with their technological creations. It results in AI that displays surprising and unpredictable behaviour. Commenting after his first loss, Lee described being shocked by an unconventional move he claimed no human would ever have made. Demis Hassabis, one of DeepMind’s founders, echoed this comment: “We’re very pleased that AlphaGo played some quite surprising and beautiful moves.”
Unpredictability and surprises are—or can be—a good thing. They can indicate that a system is working well, perhaps better than the humans that came before it. Such is the case with AlphaGo. However, unpredictability also indicates a loss of human control. That Hassabis is surprised at his creation’s behaviour suggests a lack of control in the design. And though some loss of control might be fine in the context of a game such as Go, it raises urgent questions elsewhere.
How much and what kind of control should we give up to AI machines? How should we design appropriate human control into AI that requires us to give up some of that very control? Is there some AI that we should just not develop if it means any loss of human control? How much of a say should corporations, governments, experts or citizens have in these matters? These important questions, and many others like them, have emerged in response, but remain unanswered. They require human, not human-like, solutions.
So as we drink to the milestone in AI, let’s also drink to the understanding that the time to answer deeply human questions about deep learning and AI is now.
67. What contributes most to the unconventional move of AlphaGo in the game?
68. A potential danger of AI is ______.
69. How should we deal with the unpredictability of AI?
70. What’s the author’s attitude towards this remarkable advance in AI?
Inference question. From the account of the AI’s learning process in paragraph 2, we can infer that the AI improved itself by playing against itself repeatedly and learning from its own mistakes.
Question type: detail comprehension
Skill: relating to context
Common error: missing the key point
Difficulty: easy
Detail question. Based on paragraph 4: “However, unpredictability also indicates a loss of human control.”
Question type: detail comprehension
Skill: relating to context
Common error: missing the key point
Difficulty: easy
Inference question. The answer can be drawn from the end of paragraph 5: “They require human, not human-like, solutions.”
Question type: inference and judgment
Skill: relating to context
Common error: missing the key point
Difficulty: easy
Author’s attitude question. From the last paragraph, we can see that the author takes a cautious attitude toward this remarkable advance in AI.
Question type: author’s attitude
Skill: relating to context
Common error: missing the key point
Difficulty: moderate