The rebellion of the machines is one of the favorite themes of science fiction, especially that of the middle of the last century. More than a few novels take a war between humans and robots as their premise, a theme that returns to the immediate present thanks to Artificial Intelligence. The question is, should we slow down its development?
Fortunately for humanity, a privileged mind like Isaac Asimov's had already thought this problem through from every angle. In his Robot series, which inspired the film I, Robot, and later in Foundation, he addresses this very topic: if artificial intelligence becomes too intelligent, it could become a danger.
To prevent this, Asimov's universe includes a set of laws of robotics. All androids are bound to follow these laws, since they have been programmed to do so. If a robot tries, for example, to harm a human, its circuits automatically burn out.
It is an interesting perspective worth evaluating, although we have not yet reached the point of facing serious problems.
It should be clarified that writing this article took a good deal of imagination. Obviously, it is difficult to apply the laws of Asimov's universe to a world that bears little resemblance to it. For example, the Three Laws of Robotics could only be enforced if the production of robots and the programming of AI were somehow centralized. Otherwise, anyone could program an AI to do whatever they please.
For the sake of readability, we treat robots and Artificial Intelligence as one and the same. The difference between the classic conception of robots and the modern one is the existence of hardware with a humanoid appearance, something practically discarded in our time.
For more than a year, the real world has been immersed in an Artificial Intelligence fever. We already have autonomous cars, and AI takes over ever more complex operations. We keep programming increasingly sophisticated software, but how far is it advisable to go?
If we intend to go further, endowing robots, even virtual ones, with near-human intelligence, we must safeguard the integrity of humanity. It may sound ridiculous, and frankly, put like that, it is, but that does not make the subject any less interesting.
The real problem, at least for the moment, is not what robots will be able to do on their own, but what humans can do with the technology we have available, using robots for their own purposes.
As in his novels, the question basically comes down to one: should Artificial Intelligence prioritize the future well-being of humanity over the immediate physical well-being of an individual human being? To illustrate the need for a debate on the limits of Artificial Intelligence, we have compiled a series of situations in which laws of robotics like those Isaac Asimov created would be necessary, if not those very laws:
- A robot will not harm a human being or, by inaction, allow a human being to suffer damage.
- A robot must obey the orders given by human beings, unless those orders conflict with the 1st Law.
- A robot must protect its own existence as long as this protection does not conflict with the 1st or 2nd Law.
- A robot will not harm Humanity or, through inaction, allow Humanity to suffer damage. This is known as the Zeroth Law.