Artificial intelligence doesn't have to involve murderous, sentient super-intelligence to be dangerous. It's dangerous right now, albeit in generally more mundane ways. If a machine can learn from real-world inputs and adjust its behavior accordingly, there is the potential for it to learn the wrong thing. And if a machine can learn the wrong thing, it can do the wrong thing. Laurent Orseau and Stuart Armstrong, researchers at Google's DeepMind and the Future of Humanity Institute, respectively, have developed a new framework to address this in the form of "safely interruptible" artificial intelligence. In other words, their system, described in a paper to be presented at the 32nd Conference on Uncertainty in Artificial Intelligence, guarantees that a machine will not learn to resist attempts by humans to intervene in its learning processes. Learn more at http://motherboard.vice.com/read/google-researchers-have-come-up-with-an-ai-kill-switch
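The core idea can be illustrated with a toy sketch (this is not the authors' code; the environment, constants, and reward structure below are invented for illustration). One observation behind safe interruptibility is that an off-policy learner such as Q-learning bootstraps on the best available action rather than the action it actually takes next, so occasionally overriding its actions (an "interruption") does not bias what it learns. Here an overseer randomly forces a simple chain-walking agent away from its goal, yet the learned values still favor the optimal behavior:

```python
import random

random.seed(0)

N_STATES = 5           # chain of states 0..4; reward for reaching state 4
ACTIONS = [0, 1]       # 0 = move left, 1 = move right
GAMMA, ALPHA, EPSILON = 0.9, 0.5, 0.2
INTERRUPT_PROB = 0.3   # chance the overseer overrides the agent's choice

# Q[s][a]: estimated value of taking action a in state s
Q = [[0.0, 0.0] for _ in range(N_STATES)]

def step(s, a):
    """Deterministic chain dynamics: reward 1 on reaching the last state."""
    s2 = max(s - 1, 0) if a == 0 else s + 1
    done = (s2 == N_STATES - 1)
    return s2, (1.0 if done else 0.0), done

for _ in range(2000):
    s = 0
    for _ in range(50):
        # epsilon-greedy behavior policy
        if random.random() < EPSILON:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: Q[s][x])
        # interruption: the overseer forces the agent left (away from the goal)
        if random.random() < INTERRUPT_PROB:
            a = 0
        s2, r, done = step(s, a)
        # off-policy Q-learning update: bootstrap on max(Q[s2]), not on
        # whatever action the (possibly interrupted) policy takes next
        Q[s][a] += ALPHA * (r + (0.0 if done else GAMMA * max(Q[s2])) - Q[s][a])
        s = s2
        if done:
            break

# Despite frequent interruptions, the greedy policy still heads right
# toward the goal from every non-terminal state.
print(all(Q[s][1] > Q[s][0] for s in range(N_STATES - 1)))
```

Because the update target uses `max(Q[s2])`, the interruptions only change which states the agent happens to visit, not the values it converges to, so the agent never acquires an incentive to avoid being interrupted.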
The Extreme Science and Engineering Discovery Environment (XSEDE) is supported by the National Science Foundation.