Is mind and behaviour recognition going too far? A paranoid analysis

Image credit: Chris Urbanowicz via Flickr

Hey Siri, Alexa, Google, Echo, et cetera – these wake words have become a normal part of day-to-day life. You can hear them on the street or at home, spoken to a virtual assistant. Technologies that let us control our devices with nothing but a voice command are here to help us: we can ask them to find answers, schedule meetings or run our smart homes. The possibilities are endless. However, you have to say what you want out loud, and it is recorded. These technologies can also present major issues for privacy and accessibility, and what companies do with our data is a growing controversy without a simple solution.

In some situations, saying your question out loud is impossible, perhaps because of confidentiality, a loud environment or a speech impairment. For this reason, researchers at Soochow University developed silent speech recognition, a hands-free, more accessible technology for people who cannot speak aloud. It uses a highly flexible electrode patch placed along the jaw: silent movements of the lips are translated into words. This is the closest thing to mind-reading we have today. Related technologies read out brain waves directly: the electrical signals that the brain sends to move the muscles in your body can be recorded with electrodes on the head and paired with specific words and, eventually, actions. Together these represent the most advanced silent speech (subvocalisation) technology available now.
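To make the "paired with specific words" step concrete, here is a minimal, purely illustrative sketch in Python. It assumes each silently mouthed word produces a characteristic pattern of electrical activity that can be reduced to a small feature vector, and matches a new recording against stored per-word templates by nearest distance. The words, feature values and template numbers are all invented for illustration; real subvocalisation systems use far richer signal processing and machine learning.

```python
import math

# Hypothetical per-word "templates": average signal features recorded
# while a user silently mouths each word (invented numbers).
TEMPLATES = {
    "hello": (0.9, 0.1, 0.4),
    "stop":  (0.2, 0.8, 0.5),
    "yes":   (0.5, 0.5, 0.9),
}

def decode_word(signal):
    """Return the word whose template is closest (Euclidean distance) to the signal."""
    def dist(template):
        return math.sqrt(sum((s - t) ** 2 for s, t in zip(signal, template)))
    return min(TEMPLATES, key=lambda w: dist(TEMPLATES[w]))

# A noisy recording near the "stop" template decodes as "stop".
print(decode_word((0.25, 0.75, 0.45)))  # prints "stop"
```

The same nearest-template idea extends from jaw-electrode readings to brain-wave readouts: record signals while a user rehearses each word, store the averages, then classify new signals against them.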

Although all these technologies sound like a giant leap for humankind, they also have the potential to restrict freedom if they are used nefariously. We need to think about who is using them, how and why. These considerations now surround a $20bn industry. Not only our voices but also our faces can be tracked, using speech and emotion detection technologies. This has led to AI (artificial intelligence) systems that use deep machine learning to predict our emotions and, in the long run, our behaviour. These algorithms are developed by massive tech companies and by governments. Major tech companies such as Google, Amazon and Facebook are building systems to read and predict our behaviour, and although governments don't explicitly say so, it is widely accepted that their focus is "security". In a world where you generate a constant flow of data about your voice, your face and your emotions by using the "free services" these companies provide, you unknowingly become the product.

On the bright side, these technologies give us useful voice control, for example greater safety when driving: we can manage the GPS, answer the phone or play our favourite tune without taking our hands off the wheel. Asking a question by voice also gives faster access to information without the need to type, and gives a voice to people with speech difficulties. It makes many aspects of our lives easier. Smart houses that were once a dream are now a reality. The future is here.

How could this technology be used in a malevolent way? These technologies are the precursors of future behaviour recognition algorithms, which can track your expressions, read your lips and learn to recognise patterns. If governments implement these systems, they could automatically flag you as a threat: if the system has a problem with you, you can be condemned by an algorithm, an effect explored in depth in the film Minority Report. Another possibility is that keeping a perfect social profile could become necessary to access certain services; without it, you would be an outcast without basic rights. Like something out of Black Mirror, such a social profile was implemented in China last year. The system is mandatory for every citizen and pressures people to keep their score high to maintain access to their rights; failure to do so makes you a second-class citizen. The question is always the fragile balance between public security and privacy.

In order to distinguish the potential benefits from the hidden implications, we need to think about what matters to the public. The reality, according to tech companies, is that if you don't pay for the product, you are the product. Privacy is the central issue here. There have even been massive data leaks, and sales of data to third parties, even with paid services. Examples among the "free services" include Cambridge Analytica using Facebook data to manipulate the US elections, and, in the private sector, the DNA-testing service 23andMe giving GlaxoSmithKline, the sixth-biggest pharmaceutical company in the world, access to its database. Data should be private by default, but it is becoming clear that the real business here is the data.

This raises the question of what is done with our data without our consent. From a government's perspective, data can be used in the pursuit of security, but also as a tool of oppression. In business, privacy is the main issue, because data is the hard currency used to sell products. These "mind-reading" devices will keep evolving towards what today looks like science fiction. What we see in the movies will one day be real, and we need to start creating the ethics and laws to control it. It is essential to keep in mind that a more technological world is an amazing advantage, but all these technologies need to be regulated in order to be fair and safe for all.

This post was written by Adrian Garcia-Burgos and edited by Miles Martin.
