Science-fiction movies often depict human-made machines or robots destroying their creators, invading planets, and bringing an end to humanity. A few examples from a long list include I, Robot (2004), Artificial Intelligence (2001), The Terminator (1984), and The Matrix (1999). Even the brilliant Stephen Hawking warns us that artificial intelligence "could end mankind" (1). Given the prevalence of this discussion, the question arises of whether the topic deserves more attention within the HCI community. After all, we are investigating 'human-computer interaction'. Should we not open a discussion on this topic as one deserving of research and conferences?
Professor Hawking's main fear is that a new machine might be able to redesign and reinvent itself, since it is not dependent on biological evolution. Hawking has himself long used an A.I.-assisted machine to communicate, an experience that perhaps lends weight to his warning.
Elon Musk, the product architect of Tesla, has invested $10 million to try to keep A.I. friendly, or at least under control (2). He shares this fear of A.I. When Bill Gates was asked about his own thoughts, he said that he stands "in the camp that is concerned about super intelligence. First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well." (3)
On the other side, some scientific and industrial minds consider the risk exaggerated (4). Still, given the prevalence of the discussion around artificial intelligence, do you think there should be a larger focus on the matter within the HCI community?
1. "Stephen Hawking warns artificial intelligence could end mankind," BBC, 2 December 2014.
2. "Why Elon Musk Spent $10 Million To Keep Artificial Intelligence Friendly," Forbes, 15 January 2015.
3. "Bill Gates Says You Should Worry About Artificial Intelligence," Forbes, 1 January 2015.
4. "Scientists say AI fears unfounded, could hinder tech advances," Computerworld, 29 January 2015.