Why Stephen Hawking Believed Artificial Intelligence Is Dangerous
Stephen Hawking, one of the greatest physicists of all time, issued a stark warning that artificial intelligence (AI) has the potential to destroy civilisation as we know it and could spell the end of humanity.
In an address to a technology conference in Portugal, Hawking told participants that humanity had to find a way to control intelligent machines and robots.
From the 1990s until his death in 2018, Hawking was among the world's leading theoretical physicists. He was a foremost authority on black holes, and his lifelong goal was to unify quantum mechanics with Einstein's theory of general relativity.
Achieving that would help answer one of science's most puzzling quests: how the universe began and how it will end.
Here are his ideas on artificial intelligence and why he considered it dangerous:
· Computers possess the capacity to emulate human intelligence and even surpass it.
· If we succeed in creating artificial intelligence that we can no longer control, it could spell doom for humankind, since such a system could replicate itself and perform tasks that were never part of the original plan.
· Hawking acknowledged that the primitive forms of artificial intelligence already in use have proven useful, but he feared the development of machines that could surpass human capability.
· We are still a long way from developing the algorithms that would form the basis of such advanced artificial intelligence, but he believed these algorithms would arrive sooner than expected.
· There are already fears that smart machines capable of performing jobs done by humans until now will destroy millions of jobs and leave millions of people unable to earn a living.
Costly errors caused by machines
Some analysts believe that runaway automated trading software contributed to the stock market crash of 1987.
Some major cities have also experienced power grid failures caused by computer error. Software and hardware glitches can be difficult to detect, yet they can still cause untold damage and havoc in large institutions, even in the absence of hackers or malicious intent.
So how far can we trust machines to do a better job than we can?
We are already witnessing increasing amounts of responsibility being transferred to machines. Familiar examples include GPS (Global Positioning System) navigation and handheld calculators for routine mathematical calculations.
At the other end of the spectrum are systems for guided missiles, driverless cars, or automated machinery operating in war zones.
Humans like delegating responsibility to machines for reasons ranging from cost and accuracy to speed. But there is a genuine fear of something going wrong and causing irreparable damage to the very humans attempting to harness artificial intelligence.
It may seem beneficial to let computers take over tasks in which their capabilities surpass our own. But this delegation carries real dangers.
The prospect of smart machines taking over has been exhaustively explored in popular culture and media. Consider films such as Terminator and Transformers, to name just two.
There are very active conversations going on in scientific labs, conferences, and seminars about artificial intelligence. The prospect of brilliant machines in the future is real, but the grave dangers tend to be ignored in the pursuit of profitability.
It would be worthwhile for more scientists to devote their time to convincing policymakers of the inherent risks.