A.I. Turns Thought into Speech

Breakthrough comes amid debate about ethics of facial recognition technology.

May 09, 2019

ALEXANDRIA, Va.—A recent article in Nature, an international science journal, details how artificial intelligence is creating speech by interpreting brain signals. This is a key advancement for people who can’t speak because it provides a direct, technologically enabled path from thought to speech, reports Fortune.
But the implications go beyond creating speech. While the study’s focus was on the mechanistic components of speech, such as direct muscle movement, the technology acquired information from the early stages of thought development to construct words that were identifiable about 70% of the time.
In other words, A.I. translated the code that makes up pre-speech.
In a separate study, A.I. enabled the re-creation of vision by reading neural output. Functional magnetic resonance imaging (fMRI) data was combined with machine learning to visualize perceptual content from the brain. Image reconstruction from this brain activity was translated by A.I. to produce images on a computer screen that even a casual observer could recognize as the original visual stimuli. Researchers say these advancements create the potential for a new level of direct communication mediated not by humans but by A.I. and technology.
Efforts are being made to transition such technology from research to real life. Currently, the utility of an electronic mesh—a microscopic network of flexible circuits placed into the brain and insulated with actual nerve cells—is being tested in animals. Neuralink, Elon Musk’s company, is also working to process impulses directly to and from the human brain. It is developing a computer-brain interface using technology that allows microelectrodes to be incorporated into the structure of the brain itself.
Researchers believe these discoveries will drive a new vista for biology and technology where the sum of the parts—human and electronic—combine to transcend the limitations of the cell and the electron.
Meanwhile, facial recognition and the public’s sense of privacy are being debated. In a unanimous vote this week, a San Francisco municipal committee pushed the city closer toward instituting a complete ban on the government’s use of face recognition technology, according to Fast Company in an opinion piece written by Dr. S. A. Applin, a Silicon Valley-based anthropologist.
If passed, it would be the first municipal ban on facial recognition technology. Currently, neither federal nor local laws regulate its use, although many people are wary of surveillance systems designed to automatically identify or profile people in public.
“[T]here’s a fundamental flaw in our justification for these technologies,” Rumman Chowdhury, Accenture’s Responsible AI Lead, said on Twitter last month, referring to the San Francisco proposal. “Do we live in a sufficiently dangerous state that we need this? Is punitive surveillance how we achieve a healthy and secure society [it is not]?”
The author questions why businesses or governments want this technology, and notes that “This isn’t about society or even civilization, it’s about money and power.” However, the push to adopt facial recognition is also about cooperation, she added.
Cooperation is how humans have managed to survive for so long, yet the urge to categorize some people as “the other” has existed for as long as there have been humans. Unfortunately, misconceptions about who some of us are and how we might behave have contributed to fear and insecurity among citizens, governments and law enforcement.
Today, those fearful ideas, combined with a larger, more mobile and diverse population, have created a condition in which we know of each other but do not necessarily know or engage with each other unless necessary. Our fears become another reason to invest in more “security,” even though, if we took the time to be social, open, and cooperative in our communities, there would be less to fear and more security as we looked out for each other’s well-being, the author noted.
A.I. and facial recognition are widely used in China to observe Uighurs, the country’s 11-million-member Muslim minority group.
“The facial recognition technology, which is integrated into China’s rapidly expanding networks of surveillance cameras, looks exclusively for Uighurs based on their appearance and keeps records of their comings and goings for search and review,” the New York Times reported recently. “The practice makes China a pioneer in applying next-generation technology to watch its people, potentially ushering in a new era of automated racism.”
Efforts to ban uses of the software completely have faced resistance. Lawmakers and companies such as Microsoft have pushed for regulations, such as signage to alert people when facial recognition tools are being used in public. However, with no way to opt out of surveillance in a public or private space except to leave that area, identifying signage offers people no reasonable choice.