The debate over whether robots will overtake humans has recently been heated up by warnings, from some academic and industrial stars, against the potential threat of unregulated development of robots. What is obviously missing from those warnings, however, is a clear description of any realistic scenario in which robots could assuredly challenge humans as a whole, not as puppets programmed and controlled by humans but as autonomous powers acting on their own "will". If no such scenario could ever be realistic, then even though we might see robots used as ruthless killing machines in the near future by terrorists, dictators, and warlords, as the elite scientists and experts have warned, we still need not worry too much about the so-called demonic threat of robots, since in the end it would be just another form of human threat. If, however, such scenarios could foreseeably be realized in the real world, then humans do need to start worrying about how to prevent the peril from happening, instead of how to win debates over imaginary dangers.
The reason that people on both sides of the debate have been unable to see or present a clear, realistic scenario in which robots could indeed challenge humans is, at bottom, a philosophical one. So far, all discussions of the issue have focused on the possibility of creating a robot that could be considered a human, in the sense that it could genuinely think as a human does instead of being solely a tool of humans operated by programmed instructions. Following this line of thought, it seems we need not worry about robots threatening our species as a whole, since nobody has yet provided any plausible reason to believe that such robots could be produced.
Unfortunately, this way of thinking is philosophically incorrect, because it misses a fundamental point about our own human nature: human beings are social creatures.