Time for an upgrade? Exploring human neural enhancement

Image credit: Aytuguluturk, Pixabay

When considering neuroprosthetics and brain-machine interfaces, cyborgs and sentient robots may come to mind – part of a not-too-distant dystopian future, perhaps. Popular culture leans heavily upon speculation and the boundless imagination of readers and writers alike, often arousing apprehension and calls to forgo innovation for fear of what we may unwittingly create. The question often arises of whether it is ‘right’ to incorporate machines so closely into our bodies and minds. However, it is not necessarily a question of ‘right’ or ‘wrong’ but rather, and possibly more importantly, of ‘why’.

Far removed from the imagined dystopia, neuroprosthetics are simply devices used to either replace or supplement inputs and outputs to the nervous system. These devices can range from surgical implants to non-invasive electrodes attached to the surface of the head. The concept of neuroprosthetics is by no means a modern invention. In fact, as early as the 19th century, scientists began to experiment with electrically stimulating the auditory system to improve hearing, predating the modern cochlear implant. Today, in addition to hearing, doctors are able to improve the sight, mobility, and, importantly, the independence of patients with sensory and motor impairments, in large part thanks to the development of this technology.

In the past decade alone, neuroprosthetics have garnered more attention, and rightly so. At the 2014 FIFA World Cup, the first kick was taken by a Brazilian man with paralysis of the lower limbs, aided by an exoskeleton controlled by electrical signals from his brain. In 2017, the UK-based company Open Bionics launched the world’s first clinical trial to provide affordable and functional 3D-printed bionic limbs to children in the UK. In 2018, researchers at MIT pioneered a system called the agonist-antagonist myoneural interface (AMI), which facilitates proprioceptive feedback (i.e. the ability to sense the body’s position in space) in limb prostheses, giving users more of a sense that an artificial limb is their own. The ‘why’ of the matter here is clear: these examples of neuroprosthetics have a medical application, vastly improving the health and quality of life of the user. It would seem unjust to suggest that we shouldn’t permit the integration of machines with our bodies in this way.

However, brain-machine interfaces often receive criticism owing to their notorious association with the development of artificial intelligence (AI). Brain-machine interfaces, which use AI technology, are often found in neuroprosthetic devices as the component that translates electrical activity from the brain into interaction with a computer. Their applications range from deep brain stimulation in Parkinson’s disease patients to the more radical Neuralink project developed by the tech mogul Elon Musk. Musk’s project aims to create a high-fidelity brain-machine interface that will allow humans to engage with wearable, hands-free, smartphone-like devices, as well as to develop an implantable chip that would increase the processing power and storage capacity of the human brain. Musk’s work is often the subject of much scrutiny. As a human invention, modern digital technology is well understood, whereas the human brain remains an enigma. The implantation of devices into the brain tends to require highly invasive surgery, which in itself carries many risks, such as scarring and inflammation. While Musk’s microelectrode technology (‘Neural Lace’) aims to minimise the risk of damage to the brain, it is still unclear how robust the microelectrode ‘threads’ will be, and therefore how long they will last before they potentially break or degrade. Likewise, there are potential unforeseen neurological consequences of manipulating a system that we know relatively little about.

Even with the best intent and planning, there is always a risk of an undesired outcome. Through the misuse or repurposing of this technology, we could expose ourselves to unforeseen dangers. For one, we must consider what it means for our privacy and data security. By integrating computer systems with the human brain, we undoubtedly provide a new avenue for the dissemination of our biological (neural) information, as implanted devices must collect data to perform their respective functions. Therefore, as with any ordinary computer system, there is the potential for manipulation by those with malicious or criminal intent, and with this we arrive at an ethical quagmire that pits the potential benefits of the technology against the threat of its misuse. We should therefore consider not only the original purpose of the technology, or what was ‘intended by design’, but where these developments might lead.

It is, however, important that we don’t forget the value that these systems already have, particularly in the area of health and disease. Ultimately, individuals are autonomous and are likely to undergo body modification whether for medical need or not, and this may come to include elective neural enhancement. However, rather than radically policing research with draconian measures, we should continue to entertain a degree of scepticism in our approach to innovation and avoid the temptation to make premature and grandiose claims. The technology may not currently be at a point to support the ambitions of those who use it, yet it is safe to say that these ideas have the potential to be realised in the future.

Of course, an important question that we must continue to ask ourselves as scientists and innovators is this: just because we can do something, should we? Fear of the unknown should not become an insurmountable obstacle to progress in the field, yet neither should it be ignored. It is likely that the integration of human and machine will become increasingly important as the years progress. But let us remain grounded in reality when contemplating the scope of this technology’s use, rather than indulging in needless conjecture. We must proceed with caution, but perhaps leave the wild speculation to fiction.

Written by Ebony Coward and edited by Ailie McWhinnie.
