Rise of the cuddly machines? Elon Musk donates $10M toward artificial-intelligence study

Elon Musk apparently doesn’t want to do all the talking about the possible perils of artificial intelligence. So he’s paid up.

The Tesla and SpaceX CEO has donated $10 million to the Future of Life Institute, which will establish a global research program “aimed at keeping AI beneficial to humanity,” FLI said in a statement announcing the funding today.

Musk confirmed the donation in a tweet this morning.

As we’ve discussed on SiliconBeat, Musk has cited “The Terminator” as an example of AI gone awry. He has also called AI “potentially more dangerous than nukes.” Earlier this week, he joined Stephen Hawking and other scientists in signing a letter written by FLI calling for research into “robust and beneficial artificial intelligence.” Musk and Hawking are both listed as scientific advisory board members of FLI.

Others are delving into the possible rise-of-the-machine implications of artificial intelligence, too. As our own Steve Johnson wrote, Stanford University in December announced a century-long effort to study the social effects of AI.


Above: Elon Musk, left, and the Terminator, played by Arnold Schwarzenegger. (Photos by Nhat Meyer/Mercury News and the Associated Press, respectively)





  • Robert Derman

    As I recall, Isaac Asimov had a lot to say on this subject. Looking for a way to incorporate his “Laws of Robotics” into computer systems would be a good starting point.

  • Alex

    Elon Musk should be applauded for being so vocal about his concerns. I totally agree with his assessment. After all, our “intelligence” (i.e., our conceptual and pattern-matching capabilities) is what separates us from mere animals. True AI will make nuclear power look like child’s play.

    We may be deceiving ourselves, though, when we believe that the trajectory of AI will be predictable — that we will be able to foresee what shape AI will take when it first enters society, or what its capabilities and dangers will be. History shows that even inventive “geniuses” have no crystal ball about what a new technological invention will truly look and feel like until it has been created. For example, Wilbur Wright stated around 1910 that the maximum speed of aeroplanes would not exceed 40 mph or so — the best projection of the person who understood more about aeroplanes than anyone else at that point in time.

    My own guess is that the first incarnations of AI will probably be rather modest software programs running on the internet (the “cloud”), not intelligent robots, and there is a huge danger that the technology will be used by a few humans to command and control other humans. As always, we — and not HAL — are our own worst enemy.