Researchers have successfully tested a technique that could potentially allow a computer to decode the thought process of the human mind.
A team of scientists from the University of Washington (UW) developed a method that could one day give paralyzed people a form of mind-reading communication.
Results of the recent experiments suggest that, with brain implants and specialized software, a computer can decode brain signals to determine, in real time, which image a person is seeing.
Seven patients with severe epilepsy took part in an experiment led by neuroscientist Rajesh Rao and neurosurgeon Jeff Ojemann, alongside a team of scientists from UW. The patients had electrodes temporarily implanted in their temporal lobes so that doctors could locate the focal points of their seizures.
During the experiment, the subjects were shown specific visual stimuli. They were told to look for an image of an upside-down house among a sequence of pictures flashed on computer monitors in brief 400-millisecond intervals: a random mix of human faces, houses, and blank gray screens. Meanwhile, machines recorded the electrical activity of the brain via the electrodes, which were connected to sophisticated software.
The program sampled and digitized the incoming brain signals at a rate of 1,000 times per second to determine which combination of electrode locations and signals correlated best to what the patients were seeing. It turned out that different neurons fired when people were looking at faces versus when they were looking at houses, the researchers reported.
"We got different responses from different (electrode) locations; some were sensitive to faces and some were sensitive to houses," Rao said.
Later, the patients viewed a different set of pictures. Self-learning software allowed the computer to determine, with 96 percent accuracy and at nearly the speed of perception, whether a patient was seeing a house, a face, or a gray screen.
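The article does not specify the classifier the UW team used, but the pipeline it describes (sample each electrode during a 400-millisecond stimulus window, extract features, then label the trial as face, house, or gray screen) can be sketched with synthetic data. Everything below is illustrative: the electrode count, the simulated evoked responses, and the nearest-centroid classifier are assumptions for demonstration, not the study's actual method.

```python
import numpy as np

rng = np.random.default_rng(0)
FS = 1000          # sampling rate in Hz, matching the 1,000 samples/second in the article
WINDOW = 400       # 400 ms stimulus interval -> 400 samples per electrode
N_ELECTRODES = 8   # hypothetical electrode count (not from the study)

def simulate_trial(label):
    """Simulate one trial: baseline noise plus a class-specific evoked response.

    Different electrode subsets respond to faces vs. houses, mimicking the
    finding that some locations were sensitive to faces and others to houses.
    """
    trial = rng.normal(0.0, 1.0, (N_ELECTRODES, WINDOW))
    active = {"face": [0, 1], "house": [2, 3], "gray": []}[label]
    trial[active, 100:300] += 3.0  # evoked bump ~100-300 ms after stimulus onset
    return trial

def features(trial):
    # One feature per electrode: mean amplitude over the 400 ms window.
    return trial.mean(axis=1)

labels = ["face", "house", "gray"]

# "Training" phase: learn a mean feature vector (centroid) per class.
centroids = {
    lab: np.stack([features(simulate_trial(lab)) for _ in range(40)]).mean(axis=0)
    for lab in labels
}

def classify(trial):
    # Nearest-centroid rule: pick the class whose centroid is closest.
    f = features(trial)
    return min(centroids, key=lambda lab: np.linalg.norm(f - centroids[lab]))

# Evaluate on fresh simulated trials, analogous to the second picture set.
correct = sum(classify(simulate_trial(lab)) == lab
              for lab in labels for _ in range(20))
accuracy = correct / 60
```

Because the simulated class responses are well separated, this toy decoder scores near-perfectly; real cortical recordings are far noisier, which is why the team's reported 96 percent accuracy required more sophisticated self-learning software.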
Further study, beyond the paper published on January 21 in the journal PLOS Computational Biology, is still required to see whether the system could learn a more diverse set of images and distinguish, for instance, between a human face and the face of a dog.
With the expected improvements, the technology could help bridge the gap between machines and the human mind.