Science-fiction movies often depict human-made machines or robots destroying their creators, invading planets, and bringing an end to humanity. A few examples from a long list include I, Robot (2004), Artificial Intelligence (2001), The Terminator (1984), and The Matrix (1999). Even the brilliant Stephen Hawking warns that “artificial intelligence could end mankind” (1). Given the prevalence of this discussion, the question arises of whether the topic deserves more attention within the HCI community. After all, we are investigating ‘Human-Computer Interaction’. Should we not open a discussion on this topic as one deserving of research and conferences?
Professor Hawking’s main fear is that a new machine could redesign and reinvent itself, since it is not constrained by biological evolution. Hawking has himself long used an A.I.-assisted machine to communicate, an experience that perhaps lends weight to his warning.
Elon Musk, the product architect of Tesla, shares this fear of A.I. and has invested $10 million to try to keep it friendly, or at least under control. When Bill Gates was asked for his own thoughts, he said that he stands “in the camp that is concerned about super intelligence. First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well.”