The smartest mind in the world objects to artificial intelligence
The smartest mind in the world may well be Stephen Hawking, the great physicist, who objects to the progress American robotics companies are making in artificial intelligence as they try to make robots more intelligent and agile in ways that resemble humans.
Stephen Hawking is no insignificant figure; he is one of the most celebrated physicists of his era. He has spoken about the problem raised by "Transcendence", the film starring Johnny Depp, which dramatizes the dangers of unchecked progress in artificial intelligence, and he wrote an article about the risks of AI development in The Independent newspaper addressing this very problem.
Hawking warns of malicious uses of technology spreading widely through Silicon Valley, and argues that the way artificial intelligence is developing could lead to the end of the human race: it could be humanity's last invention, and the greatest disaster to befall it in this era.
Stephen Hawking fears that today's systems, such as Apple's Siri and Google's voice search, may be early signs of an acceleration toward the end of ordinary humans on Earth, and of the disasters that could strike in the coming decades.
But a mind as capable as Hawking's does not overlook the advantages of progress in artificial intelligence either, especially in dealing with natural disasters, wars, disease and poverty.
Stephen Hawking says that the success of artificial intelligence would be the biggest event in human history, but it could also be the last, unless rules are established to avoid its dangers.
The world is in perpetual war
The problem Hawking fears is that development and ideas have no limits, and the biggest issue, as everyone knows, is that there are no laws regulating development in this field. Indeed, this is no longer just an idea.
The question for everyone now is: are we, as human beings, ready to cap development in this field at a certain point and be content with its advantages in order to avoid its risks, or will the technology remain open-ended until disaster strikes?
What do you think about this issue? Are you in favor of unlimited development of artificial intelligence, and what laws would you want put in place to govern this field?