AI Is Getting Brainier: Will Machines Leave Us in the Dust?
By Ian Sample
PUBLISHED: MARCH 19, 2017

The road to human-level artificial intelligence is long and wildly uncertain. Most AI programs today are one-trick ponies. They can recognize faces or the sound of your voice, translate foreign languages, trade stocks, and play chess. They may well have got the trick down pat, but one-trick ponies they remain. Google's DeepMind program, AlphaGo, can beat the best human players at Go, but it hasn't a clue how to play tiddlywinks, shove ha'penny, or tell one end of a horse from the other.

Humans, on the other hand, are not specialists. Our forte is versatility. What other animal comes close as the jack of all trades? Put humans in a situation where a problem must be solved and, if they can leave their smartphones alone for a moment, they will draw on experience to work out a solution.

The skill, already evident in preschool children, is the ultimate goal of artificial intelligence. If it can be distilled and encoded in software, then thinking machines will finally deserve the name.

DeepMind's latest AI has cleared one of the important hurdles on the way to human-level AGI -- artificial general intelligence. Most AIs can perform only one trick because to learn a second, they must forget the first. The problem, known as "catastrophic forgetting," occurs because the neural network at the heart of the AI overwrites old lessons with new ones.
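To make the failure mode concrete, here is a minimal Python sketch. It is not DeepMind's code, and the two "tasks" are toy quadratic objectives invented for illustration, but it shows how plain gradient descent on shared weights erases one task's solution while fitting the next:

```python
# A toy demonstration of catastrophic forgetting (not DeepMind's code).
# One set of shared weights is trained by plain gradient descent on two
# made-up "tasks" in sequence; the second task's updates simply pull the
# weights away from the first task's solution.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=2)            # shared weights used for both tasks

def loss_grad(w, target):
    # toy quadratic loss: each "task" just wants w near its own target
    return 2.0 * (w - target)

task_a = np.array([1.0, 0.0])     # hypothetical optimum for task A
task_b = np.array([0.0, 1.0])     # hypothetical optimum for task B

for _ in range(200):              # learn task A
    w -= 0.05 * loss_grad(w, task_a)
print("after task A:", w)         # ~[1, 0] -- good at A

for _ in range(200):              # now learn task B
    w -= 0.05 * loss_grad(w, task_b)
print("after task B:", w)         # ~[0, 1] -- the task A solution is gone
```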

DeepMind solved the problem by mirroring how the human brain works. When we learn to ride a bike, we consolidate the skill. We can go off and learn the violin, the capitals of the world and the finer rules of gaga ball, and still cycle home for tea. DeepMind's program mimics that process by making the important lessons of the past hard to overwrite in the future. Instead of forgetting old tricks, it draws on them to learn new ones.
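A minimal sketch of the idea, in the spirit of the elastic weight consolidation technique DeepMind published: after task A is learned, a quadratic penalty anchors the weights that mattered for A, so training on task B can only move the unimportant ones cheaply. The per-weight importance values below are invented for illustration; the real method estimates them from the Fisher information.

```python
# A sketch in the spirit of elastic weight consolidation. After task A
# is learned, each weight gets an importance score (the values below are
# invented for illustration). A quadratic penalty then anchors important
# weights near their task A values, so task B's gradient can cheaply
# move only the unimportant ones.
import numpy as np

w = np.array([1.0, 0.0])            # weights after learning task A
w_star = w.copy()                   # snapshot of the task A solution
importance = np.array([50.0, 0.1])  # hypothetical per-weight importance
lam = 1.0                           # penalty strength

task_b = np.array([0.0, 1.0])       # hypothetical optimum for task B

def grad(w):
    task_grad = 2.0 * (w - task_b)  # pull toward task B's optimum
    # gradient of the anchor term lam * importance * (w - w_star)**2
    anchor_grad = 2.0 * lam * importance * (w - w_star)
    return task_grad + anchor_grad

for _ in range(500):
    w -= 0.01 * grad(w)

# w[0] (important for A) stays near 1.0; w[1] (unimportant) has moved
# most of the way to task B's target.
print(w)                            # ~[0.98, 0.91]
```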

Because it retains past skills, the new AI can learn one task after another. When it was set to work on the Atari classics -- Space Invaders, Breakout, Defender and the rest -- it learned to play seven out of 10 as well as a human can. But it did not score as well as an AI devoted to each game would have done. Like us, the new AI is more the jack of all trades, the master of none.

There is no doubt that thinking machines, if they ever truly emerge, would be powerful and valuable. Researchers talk of pointing them at the world's greatest problems: poverty, inequality, climate change and disease.

They could also be a danger. Serious AI researchers, and plenty of prominent figures who know less of the art, have raised worries about the moment when computers surpass human intelligence. Looming on the horizon is the “Singularity”, a time when super-AIs improve at exponential speed, causing such technological disruption that poor, unenhanced humans are left in the dust. These superintelligent computers needn't hate us to destroy us. As the Oxford philosopher Nick Bostrom has pointed out, a superintelligence might dispose of us simply because it is too devoted to making paper clips to look out for human welfare.

In January the Future of Life Institute held a conference on “Beneficial AI” in Asilomar, California. When it came to discussing threats to humanity, researchers pondered what might be the AI equivalents of nuclear control rods, the sort that are plunged into nuclear reactors to rein in runaway reactions. At the end of the meeting, the organizers released a set of guiding principles for the safe development of AI.

While DeepMind's latest work edges scientists towards AGI, it does not bring it, or the Singularity, meaningfully closer. There is far more to the general intelligence that humans possess than the ability to learn continually. The DeepMind AI can retain the skills it learned on one game while it masters another, but it cannot generalize from one learned skill to the next. It cannot ponder a new task, reflect on its capabilities, and work out how best to apply them.

The futurist Ray Kurzweil sees the Singularity rolling in 30 years from now. But for other scientists, human-level AI is not inevitable. It is still a matter of if, not when. Emulating human intelligence is a mammoth task. What scientists need are good ideas, and no one can predict when inspiration will strike.

© 2017 Guardian Web under contract with NewsEdge/Acquire Media. All rights reserved.