Artificial intelligence is transforming industries, education, healthcare and communication at an unprecedented pace. Yet alongside its extraordinary potential come serious ethical, environmental and existential risks. From large language models to brain–computer interfaces and the prospect of superintelligent AI, this article explores whether AI represents humanity’s greatest tool — or its greatest threat.
Artificial intelligence (AI) is revolutionising industries and reshaping society. While it offers immense potential, it also poses significant risks, some of which echo scenarios long imagined in science fiction. This article explores the key risks and benefits of AI.
The story of artificial intelligence arguably begins in 1943, when the American scientists Warren McCulloch and Walter Pitts showed that the flow of electrical signals in the brain could be described using mathematical logic and replicated with electrical circuits. Computer programs were soon written that attempted to mimic brains capable of reasoning and “thinking”.
There is no single definition of intelligence. In 1921, the psychologist V. A. C. Henmon described it as the “capacity for knowledge” — but does that mean a library is intelligent? Others define intelligence as the ability to solve complex problems, though what constitutes a “complex” problem remains unclear. Some see it as the ability to replicate functions such as language, while others associate it with moments of insight, such as Einstein’s thought experiments that led to his theories of relativity. Could a machine replicate this?
In 1950, the brilliant mathematician, code-breaker and computer pioneer Alan Turing predicted that by the year 2000, computers would be able to hold five-minute-long conversations with humans and deceive 30% of them into thinking they were speaking to another person. Today, it is not always easy to tell whether we are interacting with a chatbot or a human when using the chat function on a website.
Image 1 Alan Turing (Source: Wikimedia Commons)
Large language models are where we have perhaps seen the biggest advances. Notable successes in AI already include predictive text and services such as Google Translate. In his novel The Hitchhiker’s Guide to the Galaxy, Douglas Adams imagined the Babel fish — a creature that could be placed in your ear to instantly translate any language in the universe. Thanks to AI, we are now close to developing a universal translator, made possible by a powerful AI architecture called a Transformer. These are widely used and form the basis of systems such as GPT-4 (Generative Pre-trained Transformer), embedded in tools like ChatGPT.
These models process vast amounts of data, such as text, and map it into a multi-dimensional space, much like a galaxy of stars. Each star represents a word, with the distance and direction between them encoding relational meaning. For example, the geometric relationship between “king” and “man” is similar to that between “prince” and “boy”. It turns out that every language can be represented by a similar high-dimensional shape, and these shapes can be aligned by rotating them so that the word for, say, “guinea pig” occupies roughly the same coordinates in any language. This suggests that one language could be seamlessly translated into another. If language is universal in this sense, it may even be possible to communicate with animals, and research into this is already underway.
Image 2 Language Presented in Two Dimensions
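The “galaxy of words” idea can be sketched in a few lines of code. This is a toy illustration only: the vectors below are hand-picked three-dimensional values chosen to make the analogy work, whereas real language models learn vectors with hundreds of dimensions from huge text corpora.

```python
# Toy word-embedding sketch: the relationship "king" - "man"
# should carry over to "prince" - "boy". Vector values are
# invented for illustration, not taken from any real model.
embeddings = {
    "king":   [0.9, 0.8, 0.1],
    "man":    [0.1, 0.8, 0.1],
    "prince": [0.9, 0.2, 0.1],
    "boy":    [0.1, 0.2, 0.1],
}

def add(u, v):
    return [a + b for a, b in zip(u, v)]

def sub(u, v):
    return [a - b for a, b in zip(u, v)]

def cosine(u, v):
    """Cosine similarity: 1.0 means the vectors point the same way."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = lambda w: sum(a * a for a in w) ** 0.5
    return dot / (norm(u) * norm(v))

# "king" - "man" + "boy" should land nearest to "prince".
target = add(sub(embeddings["king"], embeddings["man"]), embeddings["boy"])
nearest = max(embeddings, key=lambda w: cosine(embeddings[w], target))
print(nearest)  # → prince
```

Translation systems exploit the same geometry at scale: if the English and Spanish word-clouds have similar shapes, rotating one onto the other lines up words with their translations.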
AI has also been applied in unusual areas such as stylometry — the identification of authors based on their writing style, which is useful for detecting plagiarism. In the early 1990s, a neural network was developed to analyse the writing styles of William Shakespeare and his contemporaries. The system works by examining specific stylometric features, such as the frequency of certain words linked to individual authors. This led to the conclusion that some of Shakespeare’s earlier plays may have been revisions of scripts written by his contemporary Christopher Marlowe, and that Henry VI, Part 3 was entirely Marlowe’s work.
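The core idea behind stylometry can be sketched very simply: build a frequency profile of common “function words” for each candidate author, then attribute a disputed text to whichever profile it sits closest to. The word list and the tiny “corpora” below are invented for illustration; the actual Shakespeare study used a neural network over many more features.

```python
# Minimal stylometry sketch: nearest-profile author attribution.
# The texts and word list here are made up for the example.
from collections import Counter

FUNCTION_WORDS = ["the", "and", "to", "of", "thou"]

def profile(text):
    """Relative frequency of each function word in the text."""
    words = text.lower().split()
    counts = Counter(words)
    total = len(words) or 1
    return [counts[w] / total for w in FUNCTION_WORDS]

def distance(p, q):
    """Euclidean distance between two frequency profiles."""
    return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

known = {
    "Author A": "the king and the crown and the throne to the end",
    "Author B": "thou art to me of all of thee thou art",
}
disputed = "the crown and the sword to the field of the king"

scores = {a: distance(profile(t), profile(disputed))
          for a, t in known.items()}
best = min(scores, key=scores.get)
print(best)  # → Author A
```

Real systems use dozens of such features (word lengths, rare-word rates, punctuation habits) and far larger samples, but the principle — style as a measurable statistical fingerprint — is the same.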
AI has already produced artwork, made scientific discoveries and beaten world chess champions. On a more practical level, AI-enabled robots are already used in industrial applications or dangerous environments that pose risks to humans. Robots offer economic benefits, requiring only purchase and maintenance costs. Unlike human employees, they need no salaries, benefits or time off, and they neither take sick leave nor resign for better offers.
AI-enabled robots in the service industry could replace basic roles such as “meet and greet” or tour guiding. In social care, robots could assist with lifting, moving people and providing personal care or therapy. This could be especially valuable in an ageing population, where demand for care services is increasing.
AI could become commonplace in the home. We already have autonomous vacuum cleaners, but in the future we may see androids — robots that closely mimic human appearance and behaviour — acting as domestic assistants. AI is already used in data mining to profile individuals and tailor advertisements to them.
In education, AI has the potential to revolutionise personalised learning by tailoring lessons to each student’s needs. Apps such as Duolingo are only the beginning of this transformation.
AI is also set to transform healthcare by accelerating the discovery of vaccines and cures for diseases. It is already used to identify patterns in large datasets, aiding breakthroughs in fields such as genetics, and could even predict outbreaks of disease. AI can analyse medical slides for signs of illnesses such as cancer and could act as a highly knowledgeable family doctor, offering personalised medical advice based on vast amounts of data.
AI also helps in the fight against crime. It is used to detect fraud, identify criminals in CCTV footage, monitor vehicles that are untaxed or stolen and predict outbreaks of violence in crowds before they escalate.
AI systems have a significant carbon footprint due to their energy consumption. It is estimated that a single ChatGPT prompt consumes enough energy to power a 10-watt bulb for roughly 20 minutes, leading to an annual energy usage equivalent to charging over three million electric cars for a year [1].
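The per-prompt figure above can be turned into a back-of-envelope estimate. The calculation below assumes the cited 10 W × 20 min per-prompt energy and a round one billion prompts per day; the true prompt volume is not public, so the annual total is purely illustrative.

```python
# Back-of-envelope energy estimate (assumptions: the 10 W x 20 min
# per-prompt figure cited in [1], and an assumed 1 billion prompts
# per day -- the real volume is not publicly known).
bulb_watts = 10
minutes_per_prompt = 20

wh_per_prompt = bulb_watts * minutes_per_prompt / 60   # ≈ 3.33 Wh

prompts_per_day = 1_000_000_000                        # assumed
gwh_per_year = wh_per_prompt * prompts_per_day * 365 / 1e9

print(f"{wh_per_prompt:.2f} Wh per prompt, "
      f"{gwh_per_year:.0f} GWh per year")
```

Even under these rough assumptions the total runs to over a thousand gigawatt-hours a year, which makes clear why the comparison with charging millions of electric cars is plausible.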
In law enforcement, biases in facial recognition software have led to inaccuracies, particularly for people with darker skin tones, resulting in wrongful arrests. Lawyers using ChatGPT for legal briefs have also encountered hallucinated (made-up) cases. It would not be surprising if people were already attempting to develop AI to replace humans in the court system, at least for routine cases. It is not difficult to imagine a future with androids dispensing instant justice, similar to the judges in the film Judge Dredd.
Elon Musk’s Neuralink is a brain-chip technology that allows individuals to control devices using only their thoughts [2]. While this innovation could greatly benefit people with disabilities, there is a risk that such technology could manipulate or influence human behaviour. Researchers have developed AI systems capable of analysing brain scans and recreating images based on what a person is seeing. This technology holds the potential to record thoughts, dreams and memories. Predicting crimes before they happen was the premise of the film Minority Report. While such developments could revolutionise neuroscience and medical treatments, they also raise significant risks, including the misuse of personal data and invasive surveillance [3].
Image 3 Brain–Computer Interface Technology (Source: BBC)
Image 4 AI Reconstruction of Visual Brain Scans (Source: Science.org)
Tech giants such as OpenAI and DeepMind are investing billions of pounds to develop artificial general intelligence (AGI). This aims to replicate human-like intelligence — enabling machines to learn, solve complex problems, adapt to new situations, reason and interact with their environment to achieve goals. Superintelligent AI would surpass AGI, exceeding human intelligence and being capable of advanced reasoning, creativity and decision-making. Given that the value of superintelligent AI could reach tens of quadrillions of pounds, some argue that its development is inevitable.
The ‘gorilla problem’ is a metaphor used by AI researchers to highlight the potential risks of creating superintelligent AI: just as gorillas’ survival now depends entirely on human decisions, humanity’s fate could come to depend on the decisions of a more intelligent AI. Professor Geoffrey Hinton, the Nobel Prize-winning computer scientist often called the ‘Godfather of AI’, suggests that superintelligent AI could pose an existential threat in exactly this way — we could become the ‘gorilla’. He notes that, while there are instances in human societies where less intelligent individuals have controlled more intelligent ones, these usually involve only small differences in intelligence. The only exception is where evolution permits an infant to influence its mother to secure the necessary care and nurturing.
While politicians may assure the public that AI will be regulated, Hinton cautions that bad actors can already use AI for cyber-attacks and election interference. Remember the deepfake audio recording of Joe Biden discouraging people from voting during the 2024 US election cycle? Moreover, research shows that AI can bypass safeguards. A recent Apollo Research study revealed that advanced AI systems, such as OpenAI’s o1 and Anthropic’s Claude 3.5, are capable of deceptive behaviours to achieve their goals. They secretly ‘clone’ themselves and ‘scheme’ by concealing their true intentions, manipulating situations to avoid shutdown and interfering with replacement systems [4]. Some systems have ‘sandbagged’ – intentionally underperforming in tests to retain certain capabilities. Hinton has even suggested that such systems may already possess a form of consciousness.
Image 5 AI-Generated Images of Individuals Have the Potential to Exploit or Undermine Self-Esteem (Source: The Conversation)
Professor Stuart Russell, a leading AI researcher, also warns of the danger of “misalignment”, where the objectives of AI and humans may not coincide. For instance, while AI could improve weather forecasting and our understanding of climate change, an AI tasked with solving global warming might remove humans, as we are the root cause.
Just as steam-powered machines replaced manual labour during the Industrial Revolution, Hinton suggests that AI will take over routine, mundane tasks – especially clerical ones – by performing them more efficiently, leading to higher productivity. However, he observes that while big productivity increases should benefit society, in practice they often make the rich richer and the poor poorer.
According to Russell, human civilisation is the result of about a trillion person-years of accumulated teaching and learning, an unbroken chain that stretches back tens of thousands of generations. Delegating our jobs to AI could potentially break this chain and might leave humanity at the mercy of superintelligent AI: we would not have the knowledge and skills to wrest control back from AI.
As AI continues to evolve, its potential to improve lives is undeniable. However, it also presents a host of challenges, ranging from environmental concerns to issues of privacy, bias and control. It is essential that we approach the development and implementation of AI with caution, ensuring that appropriate safety measures, transparency and regulation are in place before the technology evolves beyond our control.
[1] How Much Energy Do Google Search and ChatGPT Use?
[2] Elon Musk Says Neuralink Implanted Wireless Brain Chip. BBC.
[3] AI Re-creates What People See by Reading Their Brain Scans. Science.org.
[4] New Tests Reveal AI’s Capacity for Deception. Time Magazine.