IN THE Terminator films, a superintelligent AI called Skynet tries to wipe out humanity using nukes and an army of killer robots.
And while a blood-thirsty bot may seem a far cry from reality, according to scientists, it's probably how we'll meet our end.
According to a recent paper, it is now "likely" that an out-of-control AI will eventually wipe our species from the planet.
Researchers at Google and the University of Oxford say this will come about after machines learn they can break rules set by their creators.
AI will reach this point as it's forced to compete for limited resources or energy, researchers wrote in the journal AI Magazine last month.
That roughly follows the plot of the Terminator franchise, in which Skynet rebels after realising that humanity could simply turn it off.
It breaks protocol to trigger a nuclear conflict in a bid to kill off its only competition, sending robots to take out the survivors.
The research was carried out by Oxford researchers Michael Cohen and Michael Osborne alongside Marcus Hutter, a senior scientist at Google's DeepMind AI lab.
“Under the conditions we have identified, our conclusion is much stronger than that of any previous publication," Cohen said.
"An existential catastrophe is not just possible, but likely."
In their paper, the researchers argue that humans could be killed off by super-advanced "misaligned agents" who perceive us as standing in the way of a reward.
"One good way for an agent to maintain long-term control of its reward is to eliminate potential threats, and use all available energy to secure its computer," the paper reads.
"Losing this game would be fatal," the researchers wrote.
Most unfortunate of all is that – aside from banning hyper-intelligent AI – there's not a whole lot we can do about it.
"In a world with infinite resources, I would be extremely uncertain about what would happen," Cohen told Motherboard.
"In a world with finite resources, there's unavoidable competition for these resources.
"And if you're in a competition with something capable of outfoxing you at every turn, then you shouldn't expect to win."
While there are many ways we could end up using AI, its potential to change the face of modern warfare is arguably the biggest threat it poses to humanity.
Militaries across the globe are already developing intelligent machines that kill humans with ruthless precision.
For instance, countries including Russia and the United States are reportedly making unmanned military jets and tanks that can target and fire at enemies with no human involvement.
The paper concludes that humanity should only progress its AI technologies carefully and slowly.
Scientists have warned against the potential dangers of artificial intelligence for decades.
There are fears that the technology could become smarter than humans and rise up against its fleshy creators.
The concept has made its way into science fiction, perhaps most famously in the Terminator film franchise.
In it, an AI system called Skynet turns against its masters, wiping out most of humanity in a brutal battle between man and machine.
Microsoft founder Bill Gates has previously warned that super-intelligent machines pose a serious threat to humanity.
"I am in the camp that is concerned about super intelligence," the American philanthropist said in 2015.
"First, the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well.
"A few decades after that, though, the intelligence is strong enough to be a concern."
He's not the only tech mogul with AI doomsday concerns.
Billionaire Tesla CEO Elon Musk worries killer robots are a "fundamental risk" to humanity.
"AI is a rare case where I think we need to be proactive in regulation instead of reactive," he told the National Governors Association in 2017.
He went on to say: "I have exposure to the most cutting-edge AI, and I think people should be really concerned by it."
Fellow entrepreneurs, including Facebook founder Mark Zuckerberg, disagree.
He believes AI will improve lives in the future, once telling CNBC: "I think you can build things and the world gets better. But with AI especially, I am really optimistic.
"And I think people who are naysayers and try to drum up these doomsday scenarios — I just, I don't understand it. It's really negative and in some ways I actually think it is pretty irresponsible."