The physicist says humans now have technology that can destroy Earth, but is he right?
Stephen Hawking made a bold claim last week: “This is the most dangerous time for our planet.”
In an essay in The Guardian, the renowned theoretical physicist wrote: “Whatever we might think about the decision by the British electorate to reject membership of the European Union and by the American public to embrace Donald Trump as their next president, there is no doubt in the minds of commentators that this was a cry of anger by people who felt they had been abandoned by their leaders.”
Technology is the main culprit here, widening the gulf between the haves and the have-nots. As Hawking explained, automation has already decimated jobs in manufacturing and is allowing the financial sector to accrue huge rewards that the rest of us underwrite. Over the next few years, technology will take more jobs from humans. Robots will drive the taxis and trucks; drones will deliver our mail and groceries; machines will flip hamburgers and serve meals.
And, if Amazon’s new cashierless stores are a success, supermarkets will replace cashiers with sensors. This is not speculation; it is imminent.
The dissatisfaction is not limited to the West. With the developing world coming online through smartphones and tablets, billions more people are becoming aware of what they don’t have. The unrest we have witnessed in the United States, Britain and, most recently, Italy will become a global phenomenon.
Hawking’s solution is to break down barriers within and between nations, to have world leaders acknowledge that they have failed and are failing the many, to share resources and to help the unemployed retrain. But this is wishful thinking. It isn’t going to happen.
Witness the outcome of the elections: we moved backward on almost every front. Our politicians will continue to divide and conquer, the tech industry will deny its culpability, and the very technologies, such as social media and the internet, that were supposed to spread democracy and knowledge will instead be used to mislead, to suppress and to bring out the ugliest side of humanity.
That is why we can’t rely on our political leaders for change. All of us must learn about advancing technologies and participate in the decision-making. We still have a voice and a choice.
Uber would be nowhere if it hadn’t persuaded passengers to use its services and to lobby for legalisation. We can choose not to purchase the artificial intelligence chatbots that Amazon and Google are marketing. And we can certainly decide not to have our morning latte delivered by drone. We can also choose to stop using Facebook until it stops feeding us fake news, and Twitter until it banishes the trolls that misuse its platform.
In my forthcoming book, The Driver in the Driverless Car: How Our Technology Choices Will Create the Future, I suggest a filter through which to view advancing technologies when assessing their value to society and humankind. It boils down to three questions relating to equality, risks and autonomy:
1. Does the technology have the potential to benefit everyone equally?
2. What are the risks and the rewards?
3. Does the technology more strongly promote autonomy or dependence?
Why these three questions? To start, note the anger of the electorates, and then look ahead at the jobless future that technology is creating. If the needs and wants of every human being are met, as technology will make possible, we can deal with the social and psychological issues of joblessness. This won’t be easy, by any means, but at least people won’t be acting out of dire need and desperation. We can build a society with new values, perhaps one in which social gratification comes from teaching and helping others and from creative accomplishment in fields such as music and the arts.
And then there are the risks these technologies carry. Do we want self-driving cars and robotic assistants watching everything we do, learning our needs and doing our chores? Most of us will want the benefits they bring. But what if the makers of these products use them to spy on us, and the technologies themselves begin to exceed the intelligence of their creators? We clearly need to incorporate limits into our servant machines.
And what if we become physically and emotionally dependent on our robots? We really don’t want our technologies to become like recreational drugs; we want greater autonomy and the freedom to live our lives the way we wish.
No technology is all black or white. It can be used for good and for harm. We have to decide what the limits should be and where the ethical lines are. As Hawking pointed out, we are at an inflection point with all of these technologies, and we can still take them in a direction that uplifts humankind. But if we don’t learn and participate, our darkest fears will become reality.