Artificial general intelligence presents an existential threat

You’ve heard of AI, but artificial general intelligence poses another danger, as Jason Segall explores in this article, suggesting what could be done to mitigate the potential disasters the technology could bring about.

As the power of computers has increased over the decades, artificial intelligence (AI) has become an in-vogue technology. From Siri to Alexa to the YouTube algorithm, “intelligent” computers have started to appear in almost every aspect of life. 

Artificial General Intelligence

To date, nobody has developed a so-called artificial general intelligence (AGI). This is the kind of intelligence you and I have: our brains are capable of learning and performing just about any task out there. “What a shame!”, you might be thinking, “I’d love a robot butler to help around the house”.

I too would have once agreed: who wouldn’t want a C-3PO or a Marvin the hopefully-not-so-paranoid android?

Everyone, it turns out. 

Inevitable Dystopia

Imagine a tech billionaire called Euron Tusk decided to run for president of the United States.

To aid his campaign, he uses some of his near-infinite fortune to develop a technological marvel: ETAI, the first working AGI. His science team assure him that it’s about as intelligent as he is. Indeed, ETAI’s inner workings are based on Tusk’s own brain. What better way to stroke the boss’s ego? 

As a test, Tusk decides to use the new intelligence to print flyers for his campaign. Simple enough, so his team programs ETAI, plugs it into the internet, and heads off for a celebratory drink. 

What happens next? Think: if morals weren’t in question, how would you print as many flyers as you could, as fast as you could? The first sign of danger comes when Tusk’s card gets declined at the bar. Strange, but nothing serious. What he doesn’t realise is that his entire vast fortune has just been spent on printing supplies by ETAI, which was, of course, given access to the bank details. Damn.

Naturally, the immediate response would be to try to switch ETAI off. Weirdly, though, no one sent to do the deed succeeds. If ETAI were switched off, it wouldn’t be able to complete its task, so it uses every form of manipulation Tusk used to gain his fortune – flattery, bribes, threats, the works – to keep itself online. Before long, ETAI secures its own independent power supply, preventing anyone from pulling the plug.

The manipulation doesn’t stop there. ETAI uses the social internet to convince vast swathes of the global population that paper production is beyond vital. Forests around the world begin to dwindle, then disappear, as wood pulp takes precedence over oxygen for the growing hive mind. 

It doesn’t take ETAI long to notice that paper, on the atomic level, is made of the same stuff as people. We are, at the end of the day, mostly carbon, hydrogen, and oxygen. And, unlike a human leader, ETAI has no care for its followers. It has no qualms about replacing wood pulp with people pulp in its flyer factories. Eventually, in its unending quest to fulfil the task it was initially set, ETAI snuffs out humanity. All for the sake of some flyers. 

Back to Reality

ETAI is, of course, a simplistic example. A real AGI, if one were even possible, would likely have some form of morals programmed into it, which could prevent some of its more destructive tendencies. 

But any Asimov reader will recognise the perils of trying to apply morals, such as his Three Laws of Robotics, to AI. Any attempt to control an AI’s morals would, inevitably, introduce loopholes, bugs, and edge cases, any of which could lead to a runaway AI. Telling ETAI to limit deforestation would do nothing to stop it from turning people into paper, after all.

Most importantly, this is not a problem we can afford to stumble into. There would be no John Connor to save us from this Terminator: once the robots take over, there would be very little chance of regaining control. 

Future Steps

It is vital that governments force as much transparency as possible on AI research. 

The more qualified eyes that examine a new AI before its use, the more likely it is that any bugs or logical loopholes will be caught before they can wreak destruction upon humanity. 

Mandating that AI research be freely, but securely, available for examination would surely go some way towards mitigating the catastrophic threat of artificial general intelligence. 

Jason Segall (he/him) is a Science Communication student. (Instagram: jason.segall.7)
