Brain Computer Interface

Brain control headset for gamers
By Darren Waters
Technology editor, BBC News website, San Francisco

http://news.bbc.co.uk/2/hi/technology/7254078.stm

Gamers will soon be able to interact with the virtual world using their thoughts and emotions alone.

A neuro-headset which interprets the interaction of neurons in the brain will go on sale later this year.

"It picks up electrical activity from the brain and sends wireless signals to a computer," said Tan Le, president of US/Australian firm Emotiv.

"It allows the user to manipulate a game or virtual environment naturally and intuitively," she added.

The brain is made up of about 100 billion nerve cells, or neurons, which emit an electrical impulse when interacting. The headset uses a technology known as non-invasive electroencephalography (EEG) to read this neural activity.
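
EEG systems of this kind work on the tiny voltage oscillations that groups of neurons produce at the scalp, usually summarised as power in standard frequency bands. The sketch below is illustrative only, not Emotiv's actual processing: it runs a simple band-power extraction on a synthetic signal, with the sample rate and band limits assumed.

```python
# Illustrative only: not Emotiv's actual processing. A band-power feature
# extractor of the kind an EEG-based interface might run on each electrode,
# using a synthetic signal and an assumed 128 Hz sample rate.
import numpy as np
from scipy.signal import welch

fs = 128                      # sample rate in Hz (assumption)
t = np.arange(0, 4, 1 / fs)   # four seconds of data

# Synthetic "EEG": a 10 Hz alpha rhythm plus broadband noise, in volts.
eeg = 20e-6 * np.sin(2 * np.pi * 10 * t) + 5e-6 * np.random.randn(t.size)

# Estimate the power spectral density, then sum power inside standard bands.
freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)
df = freqs[1] - freqs[0]

def band_power(low, high):
    mask = (freqs >= low) & (freqs < high)
    return psd[mask].sum() * df

for name, (low, high) in {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}.items():
    print(f"{name} ({low}-{high} Hz): {band_power(low, high):.2e} V^2")
```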

Ms Le said: "Emotiv is a neuro-engineering company and we've created a brain computer interface that reads electrical impulses in the brain and translates them into commands that a video game can accept and control the game dynamically."

Headsets which read neural activity are not new, but Ms Le said the Epoc was the first consumer device that can be used for gaming.

"This is the first headset that doesn't require a large net of electrodes, or a technician to calibrate or operate it, and doesn't require gel on the scalp," she said. "It also doesn't cost tens of thousands of dollars."

"This area of immersion and control could prove to be the breakthrough gaming has longed for."
Darren Waters, BBC Technology editor

The use of electroencephalography in medical practice dates back almost 100 years, but it is only since the 1970s that the procedure has been used to explore brain-computer interfaces.

The Epoc technology can be used to give authentic facial expressions to avatars of gamers in virtual worlds. For example, if the player smiles, winks or grimaces, the headset can detect the expression and translate it to the avatar in game.

It can also read emotions of players and translate those to the virtual world. "The headset could be used to improve the realism of emotional responses of AI characters in games," said Ms Le.

"If you laughed or felt happy after killing a character in a game then your virtual buddy could admonish you for being callous," she explained.

The $299 headset has a gyroscope to detect movement and has wireless capabilities to communicate with a USB dongle plugged into a computer.

Emotiv said the headset can detect more than 30 different expressions, emotions and actions.


They include excitement, meditation, tension and frustration; facial expressions such as smile, laugh, wink, shock (eyebrows raised) and anger (eyebrows furrowed); and cognitive actions such as push, pull, lift, drop and rotate (on six different axes).

Gamers are able to move objects in the world just by thinking of the action.
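
The article does not describe Emotiv's software interface, so the following is a hypothetical sketch of how a game loop might consume detections like those listed above, mapping named expressions, emotions and cognitive actions to in-game commands; the class, event names and thresholds are all invented.

```python
# Hypothetical sketch only: the article does not describe Emotiv's actual API.
# It shows how a game loop might map headset detections (expressions, emotions,
# cognitive actions such as "push" or "lift") to in-game commands.
from dataclasses import dataclass

@dataclass
class Detection:
    kind: str        # "expression", "emotion", or "cognitive"
    name: str        # e.g. "smile", "frustration", "push"
    strength: float  # 0.0 to 1.0

ACTION_MAP = {
    ("cognitive", "push"):     "move_object_away",
    ("cognitive", "lift"):     "raise_object",
    ("expression", "wink"):    "avatar_wink",
    ("emotion", "excitement"): "boost_music_tempo",
}

def handle(event: Detection, threshold: float = 0.6):
    """Dispatch a detection to a game action if it is strong enough."""
    action = ACTION_MAP.get((event.kind, event.name))
    if action and event.strength >= threshold:
        print(f"game <- {action} (strength {event.strength:.2f})")

# Example event stream, as such a headset might emit several times per second.
for ev in [Detection("cognitive", "push", 0.8),
           Detection("expression", "wink", 0.9),
           Detection("emotion", "excitement", 0.4)]:
    handle(ev)
```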

Emotiv is working with IBM to develop the technology for use in "strategic enterprise business markets and virtual worlds".

Paul Ledak, vice president of IBM Digital Convergence, said brain-computer interfaces like the Epoc headset were an important component of the future 3D Internet and the future of virtual communication.



Replies

  • DARPA's mind-controlled prosthetics let amputees feel

    Michael Mayday

    http://www.itechpost.com/articles/10026/20130531/darpa-mind-control...

    Everyone's favorite secretive research organization, the Defense Advanced Research Projects Agency (DARPA), is working on a new set of prosthetic limbs which can be controlled simply by the power of thought.

    Such faux limbs could soon become commonplace thanks in part to rapid progress in the development of neuromuscular interfaces - miniature electrodes which can be surgically embedded in the remaining muscle structure of an amputated limb. Other thought-controlled limbs are usually made by making a direct connection to the brain.

    Those muscle-embedded electrodes pick up on signals sent from the brain to the muscles before translating those signals into movement. As Gizmodo notes, those brain signals are often responsible for the phantom limb effect - the sensation of having a limb after it has been amputated.
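
    As a rough illustration of that idea (and not DARPA's actual control algorithm), the sketch below rectifies and smooths a synthetic muscle signal into an activation envelope and maps it to a proportional joint velocity; the sample rate, gain and threshold are invented.

```python
# A toy sketch of the general idea (not DARPA's actual method): rectify and
# smooth an electrode signal from the residual muscle, then drive a prosthetic
# joint velocity in proportion to that activation envelope.
import numpy as np

fs = 1000                            # sample rate in Hz (assumed)
t = np.arange(0, 2, 1 / fs)

# Synthetic muscle signal: a burst of activity between 0.5 s and 1.2 s.
burst = ((t > 0.5) & (t < 1.2)).astype(float)
emg = burst * np.random.randn(t.size) * 0.5

# Envelope: full-wave rectify, then a simple moving-average low-pass filter.
window = int(0.1 * fs)
envelope = np.convolve(np.abs(emg), np.ones(window) / window, mode="same")

# Proportional control: envelope above a resting threshold opens the hand.
gain, threshold = 90.0, 0.05
joint_velocity = gain * np.clip(envelope - threshold, 0.0, None)  # deg/s
print(f"peak commanded velocity: {joint_velocity.max():.1f} deg/s")
```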

    DARPA released two videos showing off advances in prosthetic technology. The first demonstrates the progress of targeted muscle re-innervation (TMR).

    A veteran who lost his right arm in Iraq demonstrated how far TMR has come along by drinking from a cup of coffee, picking up and bouncing a tennis ball and catching a piece of cloth with DARPA's Reliable Neural-Interface Technology (RE-NET) program. Those actions, seemingly simple at first glance, are fairly complicated maneuvers, requiring simultaneous joint control of the arm and hand.

    The second video demonstrates the progress made by researchers at Case Western Reserve University under the RE-NET program. The video shows a man rummaging behind a curtain for small cubes to place on the side, requiring him to feel for the objects. That's because the university's advancement uses a flat interface nerve electrode (FINE) which gives amputees sensory feedback through the user's fake limb. That development allows amputees to feel for objects - say, coins in a pocket - without having to use their eyes to guide their limb.

    But while the advanced prosthetic limbs are neat, they're still a ways off according to DARPA officials.

    "Although the current generation of brain, or cortical, interfaces have been used to control many degrees of freedom in an advanced prosthesis, researchers are still working on improving their long-term viability and performance," Jack Judy, DARPA program manager said in a press release. "As the RE-NET program continues, we expect that the limb-control and sensory-feedback capabilities of peripheral-interface technologies will increase and that they will become even more widely available in the future."

     

  • Mind control brings new taste of life for paralysis patient (3:25) 

    Dec. 17 - A Pennsylvania woman who became a quadriplegic through a genetic disease has fed herself for the first time in nearly ten years, using a mind-controlled robotic arm. A research team at the University of Pittsburgh School of Medicine developed the device, an arm that can move in seven dimensions and perform many of the natural motions of a real arm in everyday life. Sharon Reich reports.

    http://uk.reuters.com/video/2012/12/19/factbox-21-dec-2012-end-of-t...

  • Mind Control: World's first 3D printed object created using brain waves


    A Chilean tech company has laid claim to creating the first physical object using the power of the mind.

    George Lakowsky, the Chief Technical Officer for Thinker Thing, a self-described “creative group”, successfully created the object using a brain-computer interface headset, according to technology blog neurogadget.com.

    Lakowsky was able to use the interface to form a 3-D shape using his thoughts, which were sent to a 3-D printer for fabrication of the object - a cluster of polygons that resembles the arm of a toy robot.

    “Whilst the first object George created was very simple it’s a breakthrough of epic proportions in our project,” Thinker Thing founder Bryan Salt said in an interview with the blog.

    “George was able to control an evolutionary process that grows a 3D object using the small electrical impulses detectable in the brain. This evolving model is created in a form that can be read by the latest 3D printers, which allows us to create real physical objects directed by your mind,” he added.

    The government-funded Thinker Thing is also launching a project that will bring the principles of science, art, and engineering to young children in some of the most remote regions of Chile using the new technology.

    Children from these regions will use the interface to create “fantastical creatures of the mind” which will later be put on display.


  • SCIENCE
    Brainput Project Takes a Load Off Humans' Minds

    http://www.technewsworld.com/story/Brainput-Project-Takes-a-Load-Of...


    A recent project conducted by researchers at MIT and other institutions demonstrated a method for computers to help people who are weighed down by intense multitasking. Sensors measured test subjects' brain activity as they juggled multiple tasks. When the machine sensed the person was focusing on one task to the exclusion of the other, it "took over" the neglected task.

    A group of researchers from several universities, led by MIT, has shown that robots controlled mentally by suitably equipped, multitasking humans can take over some of the workload when needed.

    The Brainput project had researchers use a technique called "functional near-infrared" (fNIR) imaging to measure the brain activity of test subjects while the subjects were keeping track of two robots and trying to prevent them from crashing into walls.

    When the robots sensed the driver was fully occupied, their behavior became more autonomous, meaning they made some of the navigational decisions themselves.

    Brainput is one of a series of experiments in using the mind to control robots or get devices to perform work that have been going on since the 2000s.

    "There's a huge amount of work being done to better blend people and machines, and this is one of those efforts," Rob Enderle, principal analyst at the Enderle Group, told TechNewsWorld. "The idea of a better human-machine interface has been floating around for some time and we're getting close."

     
    The Brainput Experiment

    The transmission and absorption of NIR light in human body tissues contain information about changes in the concentration of hemoglobin in those tissues. When a specific area of the brain is activated, the blood volume in that area increases quickly.

    The fNIR method measures the level of neuronal activity in the brain by looking at the relationship between metabolic activity and the level of oxygen in a subject's blood.

    The Brainput researchers had test subjects remotely navigate two robots to find the best location to transmit data they had collected back to the control center. The robots measured and reported the signal strength in their current location at the navigator's request in order to do this. The participants were told to avoid collisions with obstacles and walls and were advised not to leave either robot idle. This forced them to work with both robots simultaneously.

    I, Robot

    The researchers found that the robots would take on some of the workload whenever they detected that the test subjects were tending to multiple tasks. When the navigator stopped multitasking, the robots would go back to requiring instructions as to what to do next.
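
    A minimal sketch of that adaptive-autonomy behavior is shown below. It is not the Brainput code: the workload numbers, threshold and hysteresis are invented, standing in for the workload index the real system derived from fNIR measurements.

```python
# A minimal sketch of the adaptive-autonomy loop described above. The workload
# values, threshold and hysteresis are invented for illustration; the real
# Brainput system derived its workload estimate from fNIR measurements.
class Robot:
    def __init__(self, name):
        self.name = name
        self.autonomous = False

    def update(self, workload, high=0.7, low=0.5):
        # Hysteresis keeps the robot from flickering between modes.
        if not self.autonomous and workload > high:
            self.autonomous = True
        elif self.autonomous and workload < low:
            self.autonomous = False
        mode = "navigating on its own" if self.autonomous else "awaiting instructions"
        print(f"{self.name}: workload={workload:.2f} -> {mode}")

robot = Robot("rover-1")
for workload in [0.3, 0.6, 0.8, 0.75, 0.4]:   # simulated workload index
    robot.update(workload)
```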

    The results led the researchers to conclude that Brainput provides measurable benefits to users with little additional effort required from them.

    Real-world applications for this technology could include military operations where unmanned drones are used, as well as search-and-rescue operations in which robots are used to explore areas unsafe for humans.

    "We're still on the very cutting edge of this stuff," Enderle said. "In the next few years, we're going to see amazing advances."

  • Nov 13, 2011 - by JohnThomas Didymus

     

     Israelis working on mind-controlled avatars and telepresence



    Herzliya - Doron Friedman of the Advanced Virtuality Lab (AVL), Israel, is leading a team of scientists developing the next generation of human-computer interfaces. The aim is to build interfaces for virtual worlds allowing users to control avatars mentally.
    The project, called Virtual Embodiment and Robotic Re-embodiment (VERE), is funded by the European Union. The team includes AVL researchers at the Interdisciplinary Center (IDC), Herzliya, Israel: Doron Friedman, Ori Cohen and Dan Drai, who have teamed up with Rafael Malach from the Weizmann Institute of Science in Rehovot, Israel.
    The Jerusalem Post reports the team recently used a brain scanner to control a computer application interactively in real time. Friedman commented on the potential applications of the recent achievement:
    “You could control an avatar just by thinking about it and activating the correct areas in the brain."
    Friedman noted the growing interest, in recent times, in developing a new generation of brain-computer interfaces and the recent breakthroughs which make the goal more than just a dream:
    "Recently, with advances in processing power, signal analysis, and neuro-scientific understanding of the brain, there is growing interest in Brain Computer Interface, and a few success stories. Current BCI research is focusing on developing a new communication alternative for patients with severe neuromuscular disorders, such as amyotrophic lateral sclerosis, brain stem stroke, and spinal cord injury."
    The VERE project is one of a number of projects the AVL team is focusing on. According to The Jerusalem Post, the team is also interested in the closely related problem of telepresence in modern communications technology. Friedman explains the concept of telepresence:
    “Although we have phones, emails and video conferences – we still prefer a real meeting. The question is why? What is missing in mediated communication, and how can we develop technologies that will feel like a real meeting?"
    The Israeli team is convinced it already has a viable approach to the challenge of telepresence in modern communication in the BEAMING project. The project BEAMING (Being in Augmented Multi-modal Naturally-networked Gatherings) aims to develop new approaches to producing lifelike interactions using "mediated technologies such as surround video conference, virtual and augmented reality, virtual sense of touch (haptics) and spatialized audio and robotics."
    Friedman explained that the AVL team is involved in a special package in the international collaborative BEAMING project, one it has ambitiously named "Better than Being There." He extolled the virtues of the "BEAMING proxies" it is developing:
    "...with BEAMING you can be in several places at once...With BEAMING proxies, the virtual character will not only look like you, but behave in the same way that you would, limited only by today’s artificial intelligence capabilities.”
    A BEAMING proxy comes with a special difference from an ordinary computer game avatar: it looks like the person it represents and interacts in a virtual environment as that person would in a real-world environment.
    A version of the BEAMING proxy the AVL team developed was recently used in SecondLife, an online virtual world. Another online study on religion used BEAMING proxies as virtual robots. The Jerusalem Post reports:
    "The bot [virtual robot] wandered around in the virtual environment of SecondLife and collected data to evaluate social and ethical implications of the bot and the other 'real' players."
    Recent demonstrations of how BEAMING proxies work give us glimpses of how we may be living and socializing in the not-too-distant future:
    Dr. Beatrice Hasler was scheduled to give a lecture in SecondLife, but because she had to be elsewhere at the time of the lecture, her lecture was pre-recorded and her avatar programmed to answer questions and also reproduce her characteristic mannerisms and body language. In a situation in which the avatar is uncertain how to answer a question, it may call Hasler on the phone and ask for help.
    Friedman says he is already looking beyond implementation of virtual robots as personal avatars ("proxies") in virtual environments. He is already thinking of the possibility of developing proxies that can interact outside virtual spaces, that is, we could soon be having avatars that can attend real-life events.
    Friedman cautions, however, that in spite of the developments so far, the research is still in early stages. He explains that there is a significant gap between academic research and its industrial application. Friedman explains:
    “The EU body instructed us to focus on technologies that will have a great impact in 20 years. A good academic research is ahead of today’s technologies and/or focuses on things that are not commercial but are important to our future society.”



    Read more: http://www.digitaljournal.com/article/314353#ixzz1ddT8U5yu

  • http://www.gottabemobile.com/2011/11/11/gift-tech-toys-gadgets-for-...

    Mindflex Duel (8 and up)

    Mind control has never been so much fun.


    This is one of those toys that is as fascinating as it is fun to play. The concept is simple, but wrapping your brain around it is hard. In Mindflex Duel, kids control a floating ball and attempt to navigate it through a series of obstacles, all while another kid (or hapless parent) tries to do the same, pushing the ball back towards the opponent’s side of the game. Each person must move the ball using only their minds.

    Sounds science fictional, I know, but this actually works. Through the lightweight headset, the game measures brainwaves to determine levels of concentration. The more concentration you achieve, the more control you have over the ball.
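
    Mattel does not publish how Mindflex works internally, but the basic loop it describes is simple to illustrate: a higher concentration score drives the fan harder, so the ball floats higher. The mapping below is invented.

```python
# Toy illustration only: not Mattel's actual firmware. The idea is simply that
# a higher "attention" score drives the fan harder, so the ball floats higher;
# the score range and PWM values here are invented.
def fan_speed(attention, min_pwm=60, max_pwm=255):
    """Map an attention score in [0, 100] to a fan PWM duty value."""
    attention = max(0, min(100, attention))
    return int(min_pwm + (max_pwm - min_pwm) * attention / 100)

for score in [10, 50, 90]:
    print(f"attention {score:3d} -> fan PWM {fan_speed(score)}")
```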

    Kids can design many different obstacle courses ranging from simple to complex or play interactive games solo or with a friend. Once they master it they may expect to control other objects with their minds. When that happens, I’m sure Professor X will be along shortly with an invitation to his special school…

  • Hackers control Siri using only their minds and a lot of hardware




    Project Black Mirror

    Sure, Siri's useful — but wouldn't it be better if your iPhone 4S could just read your mind? That's what the people behind Project Black Mirror believe, and they've actually made it a reality. By hooking an iPhone 4S up to an elaborate Arduino / MacBook Pro control setup, these hackers have placed calls simply by activating Siri with the home button and thinking about who they want to call.

    The setup starts with a user hooked up to ECG pads, which capture analog brain waves that are fed into the board hooked up to the iPhone 4S. This board has a program that was trained to recognize Siri commands (like call, reminder, and so forth) from the ECG pads and sends them to a voice synthesizer chip. Here's the trick: Siri's not being controlled by your mind but by the voice commands created by the synthesizer. Those commands are fed to Siri via the iPhone's microphone jack, and Siri then places your call.
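
    The hackers did not publish how the board was "trained to recognize Siri commands", so the following is only one plausible, much-simplified approach: compare an incoming signal window against stored command templates using normalized correlation and trigger the best match. The template lengths, threshold and data are all invented.

```python
# Speculative sketch: the project's recognition step was not published. One
# simple approach is to compare an incoming signal window against stored
# templates with normalized correlation and trigger the best match.
import numpy as np

def normalized(x):
    x = x - x.mean()
    return x / (np.linalg.norm(x) + 1e-12)

rng = np.random.default_rng(0)
templates = {                      # waveforms captured during "training"
    "call": rng.standard_normal(200),
    "set reminder": rng.standard_normal(200),
}

def recognize(window, min_score=0.5):
    scores = {cmd: float(np.dot(normalized(window), normalized(tpl)))
              for cmd, tpl in templates.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] >= min_score else None

# Simulate an incoming window that resembles the "call" template plus noise.
incoming = templates["call"] + 0.3 * rng.standard_normal(200)
command = recognize(incoming)
if command:
    print(f"match: '{command}' -> send the synthesized phrase to the phone's mic input")
```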

    The system certainly makes for a great demo video, but it's a long way from being practical. However, Project Black Mirror's not stopping here: they want to get it polished and to the masses via a Kickstarter campaign. We'll see what these guys manage to come up with if they can get a little funding.

  • The Army's Bold Plan to Turn Soldiers Into Telepaths

    The U.S. Army wants to allow soldiers to communicate just by thinking. The new science of synthetic telepathy could soon make that happen.

    by Adam Piore; illustration by Sam Kennedy

    From the April 2011 issue; published online July 20, 2011

    http://discovermagazine.com/2011/apr/15-armys-bold-plan-turn-soldie...

    On a cold, blustery afternoon the week before Halloween, an assortment of spiritual mediums, animal communicators, and astrologists have set up tables in the concourse beneath the Empire State Plaza in Albany, New York. The cavernous hall of shops that connects the buildings in this 98-acre complex is a popular venue for autumnal events: Oktoberfest, the Maple Harvest Festival, and today’s “Mystic Fair.”

    Traffic is heavy as bureaucrats with ID badges dangling from their necks stroll by during their lunch breaks. Next to the Albany Paranormal Research Society table, a middle-aged woman is solemnly explaining the workings of an electromagnetic sensor that can, she asserts, detect the presence of ghosts. Nearby, a “clairvoyant” ushers a government worker in a suit into her canvas tent. A line has formed at the table of a popular tarot card reader.

    Amid all the bustle and transparent hustles, few of the dabblers at the Mystic Fair are aware that there is a genuine mind reader in the building, sitting in an office several floors below the concourse. This mind reader is not able to pluck a childhood memory or the name of a loved one out of your head, at least not yet. But give him time. He is applying hard science to an aspiration that was once relegated to clairvoyants, and unlike his predecessors, he can point to some hard results.

    The mind reader is Gerwin Schalk, a 39-year-old biomedical scientist and a leading expert on brain-computer interfaces at the New York State Department of Health’s Wadsworth Center at Albany Medical College. The Austrian-born Schalk, along with a handful of other researchers, is part of a $6.3 million U.S. Army project to establish the basic science required to build a thought helmet—a device that can detect and transmit the unspoken speech of soldiers, allowing them to communicate with one another silently.

    As improbable as it sounds, synthetic telepathy, as the technology is called, is getting closer to battlefield reality. Within a decade Special Forces could creep into the caves of Tora Bora to snatch Al Qaeda operatives, communicating and coordinating without hand signals or whispered words. Or a platoon of infantrymen could telepathically call in a helicopter to whisk away their wounded in the midst of a deafening firefight, where intelligible speech would be impossible above the din of explosions.

    For a look at the early stages of the technology, I pay a visit to a different sort of cave, Schalk’s bunkerlike office. Finding it is a workout. I hop in an elevator within shouting distance of the paranormal hubbub, then pass through a long, linoleum-floored hallway guarded by a pair of stern-faced sentries, and finally descend a cement stairwell to a subterranean warren of laboratories and offices.

    Schalk is sitting in front of an oversize computer screen, surrounded by empty metal bookshelves and white cinder-block walls, bare except for a single photograph of his young family and a poster of the human brain. The fluorescent lighting flickers as he hunches over a desk to click on a computer file. A volunteer from one of his recent mind-reading experiments appears in a video facing a screen of her own. She is concentrating, Schalk explains, silently thinking of one of two vowel sounds, aah or ooh.

    The volunteer is clearly no ordinary research subject. She is draped in a hospital gown and propped up in a motorized bed, her head swathed in a plasterlike mold of bandages secured under the chin. Jumbles of wires protrude from an opening at the top of her skull, snaking down to her left shoulder in stringy black tangles. Those wires are connected to 64 electrodes that a neurosurgeon has placed directly on the surface of her naked cortex after surgically removing the top of her skull. “This woman has epilepsy and probably has seizures several times a week,” Schalk says, revealing a slight Germanic accent.

    The main goal of this technique, known as electrocorticography, or ECOG, is to identify the exact area of the brain responsible for her seizures, so surgeons can attempt to remove the damaged areas without affecting healthy ones. But there is a huge added benefit: The seizure patients who volunteer for Schalk’s experiments prior to surgery have allowed him and his collaborator, neurosurgeon Eric C. Leuthardt of Washington University School of Medicine in St. Louis, to collect what they claim are among the most detailed pictures ever recorded of what happens in the brain when we imagine speaking words aloud.


    Those pictures are a central part of the project funded by the Army’s multi-university research grant and the latest twist on science’s long-held ambition to read what goes on inside the mind. Researchers have been experimenting with ways to understand and harness signals in the areas of the brain that control muscle movement since the early 2000s, and they have developed methods to detect imagined muscle movement, vocalizations, and even the speed with which a subject wants to move a limb.

    At Duke University Medical Center in North Carolina, researchers have surgically implanted electrodes in the brains of monkeys and trained them to move robotic arms at MIT, hundreds of miles away, just by thinking. At Brown University, scientists are working on a similar implant they hope will allow paralyzed human subjects to control artificial limbs. And workers at Neural Signals Inc., outside Atlanta, have been able to extract vowels from the motor cortex of a paralyzed patient who lost the ability to talk by sinking electrodes into the area of his brain that controls his vocal cords.

    But the Army’s thought-helmet project is the first large-scale effort to “really attack” the much broader challenge of synthetic telepathy, Schalk says. The Army wants practical applications for healthy people, “and we are making progress,” he adds.

    Schalk is now attempting to make silent speech a reality by using sensors and computers to explore the regions of the brain responsible for storing and processing thoughts. The goal is to build a helmet embedded with brain-scanning technologies that can target specific brain waves, translate them into words, and transmit those words wirelessly to a radio speaker or an earpiece worn by other soldiers.

    As Schalk explains his vast ambitions, I’m mesmerized by the eerie video of the bandaged patient on the computer screen. White bars cover her eyes to preserve her anonymity. She is lying stock-still, giving the impression that she might be asleep or comatose, but she is very much engaged. Schalk points with his pen at a large rectangular field on the side of the screen depicting a region of her brain abuzz with electrical activity. Hundreds of yellow and white brain waves dance across a black backdrop, each representing the oscillating electrical pulses picked up by one of the 64 electrodes attached to her cortex as clusters of brain cells fire.

    Somewhere in those squiggles lie patterns that Schalk is training his computer to recognize and decode. “To make sense of this is very difficult,” he says. “For each second there are 1,200 variables from each electrode location. It’s a lot of numbers.”

    Schalk gestures again toward the video. Above the volunteer’s head is a black bar that extends right or left depending on the computer’s ability to guess which vowel the volunteer has been instructed to imagine: right for “aah,” left for “ooh.” The volunteer imagines “ooh,” and I watch the black bar inch to the left. The volunteer thinks “aah,” and sure enough, the bar extends right, proof that the computer’s analysis of those hundreds of squiggling lines in the black rectangle is correct. In fact, the computer gets it right “close to 100 percent of the time,” Schalk says.
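
    This is not Schalk's actual pipeline, but the sketch below shows the general shape of such two-class decoding: a linear classifier trained on per-trial feature vectors (here a synthetic stand-in for 64 electrodes with several band-power features each), evaluated with cross-validation. scikit-learn and every number in it are assumptions.

```python
# Not Schalk's actual pipeline: a minimal, synthetic-data sketch of the kind of
# two-class decoding described above, using a linear classifier on per-trial
# feature vectors (here 64 electrodes x 8 band-power features each).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_trials, n_features = 200, 64 * 8

X = rng.standard_normal((n_trials, n_features))
y = rng.integers(0, 2, n_trials)            # 0 = "ooh", 1 = "aah"
X[y == 1, :20] += 1.0                       # give class 1 a detectable signature

clf = LogisticRegression(max_iter=1000)
accuracy = cross_val_score(clf, X, y, cv=5).mean()
print(f"cross-validated accuracy: {accuracy:.2f}")   # well above the 0.5 chance level
```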

    He admits that he is a long way from decoding full, complex imagined sentences with multiple words and meaning. But even extracting two simple vowels from deep within the brain is a big advance. Schalk has no doubt about where his work is leading. “This is the first step toward mind reading,” he tells me.


    The motivating force behind the thought helmet project is a retired Army colonel with a Ph.D. in the physiology of vision and advanced belts in karate, judo, aikido, and Japanese sword fighting. Elmar Schmeisser, a lanky, bespectacled scientist with a receding hairline and a neck the width of a small tree, joined the Army Research Office as a program manager in 2002. He had spent his 30-year career up to that point working in academia and at various military research facilities, exhaustively investigating eyewear to protect soldiers against laser exposure, among other technologies.

    Schmeisser had been fascinated by the concept of a thought helmet ever since he read about it in E. E. “Doc” Smith’s 1946 science fiction classic, Skylark of Space, back in the eighth grade. But it was not until 2006, while Schmeisser was attending a conference on advanced prosthetics in Irvine, California, that it really hit him: Science had finally caught up to his boyhood vision. He was listening to a young researcher expound on the virtues of extracting signals from the surface of the brain. The young researcher was Gerwin Schalk.

    Schalk’s lecture was causing a stir. Many neuroscientists had long believed that the only way to extract data from the brain specific enough to control an external device was to penetrate the cortex and sink electrodes into the gray matter, where the electrodes could record the firing of individual neurons. By claiming that he could pry information from the brain without drilling deep inside it—information that could allow a subject to move a computer cursor, play computer games, and even move a prosthetic limb—Schalk was taking on “a very strong existing dogma in the field that the only way to know about how the brain works is by recording individual neurons,” Schmeisser vividly recalls of that day.

    Many of those present dismissed Schalk’s findings as blasphemy and stood up to attack them. But for Schmeisser it was a magical moment. If he could take Schalk’s idea one step further and find a way to extract verbal thoughts from the brain without surgery, the technology could dramatically benefit not only disabled people but the healthy as well. "Everything," he says, "all of a sudden became possible."

    The next year, Schmeisser marched into a large conference room at Army Research Office headquarters in Research Triangle Park, North Carolina, to pitch a research project to investigate synthetic telepathy for soldiers. He took his place at a podium facing a large, U-shaped table fronting rows of chairs, where a committee of some 30 senior scientists and colleagues—division chiefs, directorate heads, mathematicians, particle physicists, chemists, computer scientists, and Pentagon brass in civilian dress—waited for him to begin.

    Schmeisser had 10 minutes and six PowerPoint slides to address four major questions: Where was the field at the moment? How might his idea prove important? What would the Army get out of it? And was there reason to believe that it was doable?

    The first three questions were simple. It was that last one that tripped him up. “Does this really work?” Schmeisser remembers the committee asking him. “Show us the evidence that this could really work—that you are not just hallucinating it.”

    The committee rejected Schmeisser’s proposal but authorized him to collect more data over the following year to bolster his case. For assistance he turned to Schalk, the man who had gotten him thinking about a thought helmet in the first place.

    Schalk and Leuthardt had been conducting mind-reading experiments for several years, exploring their patients’ ability to play video games, move cursors, and type by means of brain waves picked up via a scanner. The two men were eager to push their research further and expand into areas of the brain thought to be associated with language, so when Schmeisser offered them a $450,000 grant to prove the feasibility of a thought helmet, they seized the opportunity.

    Schalk and Leuthardt quickly recruited 12 epilepsy patients as volunteers for their first set of experiments. As I had seen in the video in Schalk’s office, each patient had the top of his skull removed and electrodes affixed to the surface of the cortex. The researchers then set up a computer screen and speakers in front of the patients’ beds.

    The patients were presented with 36 words that had a relatively simple consonant-vowel-consonant structure, such as bet, bat, beat, and boot. They were asked to say the words out loud and then to simply imagine saying them. Those instructions were conveyed visually (written on a computer screen) with no audio, and again vocally with no video. The electrodes provided a precise map of the resulting neural activity.

    Schalk was intrigued by the results. As one might expect, when the subjects vocalized a word, the data indicated activity in the areas of the motor cortex associated with the muscles that produce speech. The auditory cortex and an area in its vicinity long believed to be associated with speech, called Wernicke’s area, were also active.

    When the subjects imagined words, the motor cortex went silent while the auditory cortex and Wernicke’s area remained active. Although it was unclear why those areas were active, what they were doing, and what it meant, the raw results were an important start. The next step was obvious: Reach inside the brain and try to pluck out enough data to determine, at least roughly, what the subjects were thinking.

    Schmeisser presented Schalk’s data to the Army committee the following year and asked it to fund a formal project to develop a real mind-reading helmet. As he conceived it, the helmet would function as a wearable interface between mind and machine. When activated, sensors inside would scan the thousands of brain waves oscillating in a soldier’s head; a microprocessor would apply pattern recognition software to decode those waves and translate them into specific sentences or words, and a radio would transmit the message. Schmeisser also proposed adding a second capability to the helmet to detect the direction in which a soldier was focusing his attention. The function could be used to steer thoughts to a specific comrade or squad, just by looking in their direction.

    The words or sentences would reach a receiver that would then “speak” the words into a comrade’s earpiece or be played from a speaker, perhaps at a distant command post. The possibilities were easy to imagine:

    “Look out! Enemy on the right!”

    “We need a medical evacuation now!”

    “The enemy is standing on the ridge. Fire!”

    Any of those phrases could be life-saving.

    This time the committee signed off.

    Grant applications started piling up in Schmeisser’s office. To maximize the chance of success, he decided to split the Army funding between two university teams that were taking complementary approaches to the telepathy problem.

    The first team, directed by Schalk, was pursuing the more invasive ECOG approach, attaching electrodes beneath the skull. The second group, led by Mike D’Zmura, a cognitive scientist at the University of California, Irvine, planned to use electroencephalography (EEG), a noninvasive brain-scanning technique that was far better suited for an actual thought helmet. Like ECOG, EEG relies on brain signals picked up by an array of electrodes that are sensitive to the subtle voltage oscillations caused by the firing of groups of neurons. Unlike ECOG, EEG requires no surgery; the electrodes attach painlessly to the scalp.

    For Schmeisser, this practicality was critical. He ultimately wanted answers to the big neuroscience questions that would allow researchers to capture complicated thoughts and ideas, yet he also knew that demonstrating even a rudimentary thought helmet capable of discerning simple commands would be a valuable achievement. After all, soldiers often use formulaic and reduced vocabulary to communicate. Calling in a helicopter for a medical evacuation, for instance, requires only a handful of specific words.

    “We could start there,” Schmeisser says. “We could start below that.” He noted, for instance, that it does not require a terribly complicated message to call for an air strike or a missile launch: “That would be a very nice operational capability.”

    The relative ease with which EEG can be applied comes at a price, however. The exact location of neural activity is far more difficult to discern via EEG than with many other, more invasive methods because the skull, scalp, and cerebral fluid surrounding the brain scatter its electric signals before they reach the electrodes. That blurring also makes the signals harder to detect at all. The EEG data can be so messy, in fact, that some of the researchers who signed on to the project harbored private doubts about whether it could really be used to extract the signals associated with unspoken thoughts.

    In the initial months of the project, back in 2008, one of D’Zmura’s key collaborators, renowned neuroscientist David Poeppel, sat in his office on the second floor of the New York University psychology building and realized he was unsure even where to begin. With his research partner Greg Hickok, an expert on the neuroscience of language, he had developed a detailed model of audible speech systems, parts of which were widely cited in textbooks. But there was nothing in that model to suggest how to measure something imagined.

    For more than 100 years, Poeppel reflected, speech experimentation had followed a simple plan: Ask a subject to listen to a specific word or phrase, measure the subject’s response to that word (for instance, how long it takes him to repeat it aloud), and then demonstrate how that response is connected to activity in the brain. Trying to measure imagined speech was much more complicated; a random thought could throw off the whole experiment. In fact, it was still unclear where in the brain researchers should even look for the relevant signals.

    Solving this problem would call for a new experimental method, Poeppel realized. He and a postdoctoral student, Xing Tian, decided to take advantage of a powerful imaging technique called magnetoencephalography, or MEG, to do their reconnaissance work. MEG can provide roughly the same level of spatial detail as ECOG but without the need to remove part of a subject’s skull, and it is far more accurate than EEG.

    Poeppel and Tian would guide subjects into a three-ton, beige-paneled room constructed of a special alloy and copper to shield against passing electromagnetic fields. At the center of the room sat a one-ton, six-foot-tall machine resembling a huge hair dryer that contained scanners capable of recording the minute magnetic fields produced by the firing of neurons. After guiding subjects into the device, the researchers would ask them to imagine speaking words like athlete, musician, and lunch. Next they asked them to imagine hearing the words.

    When Poeppel sat down to analyze the results, he noticed something unusual. As a subject imagined hearing words, his auditory cortex lit up the screen in a characteristic pattern of reds and greens. That part was no surprise; previous studies had linked the auditory cortex to imagined sounds. However, when a subject was asked to imagine speaking a word rather than hearing it, the auditory cortex flashed an almost identical red and green pattern.

    Poeppel was initially stumped by the results. “That is really bizarre,” he recalls thinking. “Why should there be an auditory pattern when the subjects didn’t speak and no one around them spoke?” Over time he arrived at an explanation. Scientists had long been aware of an error-correction mechanism in the brain associated with motor commands. When the brain sends a command to the motor cortex to, for instance, reach out and grab a cup of water, it also creates an internal impression, known as an efference copy, of what the resulting movement will look and feel like. That way, the brain can check the muscle output against the intended action and make any necessary corrections.

    Poeppel believed he was looking at an efference copy of speech in the auditory cortex. “When you plan to speak, you activate the hearing part of your brain before you say the word,” he explains. “Your brain is predicting what it will sound like.”

    The potential significance of this finding was not lost on Poeppel. If the brain held on to a copy of what an imagined thought would sound like if vocalized, it might be possible to capture that neurological record and translate it into intelligible words. As happens so often in this field of research, though, each discovery brought with it a wave of new challenges. Building a thought helmet would require not only identifying that efference copy but also finding a way to isolate it from a mass of brain waves.

    D’Zmura and his team at UC Irvine have spent the past two years taking baby steps in that direction by teaching pattern recognition programs to search for and recognize specific phrases and words. The sheer size of a MEG machine would obviously be impractical in a military setting, so the team is testing its techniques using lightweight EEG caps that could eventually be built into a practical thought helmet.

    The caps are comfortable enough that Tom Lappas, a graduate student working with D’Zmura, often volunteers to be a research subject. During one experiment last November, Lappas sat in front of a computer wearing flip-flops, shorts, and a latex EEG cap with 128 gel-soaked electrodes attached to it. Lappas’s face was a mask of determined focus as he stared silently at a screen while military commands blared out of a nearby speaker.

    “Ready Baron go to red now,” a recorded voice intoned, then paused. “Ready Eagle go to red now…Ready Tiger go to green now...” As Lappas concentrated, a computer recorded hundreds of squiggly lines representing Lappas’s brain activity as it was picked up from the surface of his scalp. Somewhere in that mass of data, Lappas hoped, were patterns unique enough to distinguish the sentences from one another.

    With so much information, the problem would not be finding similarities but rather filtering out the similarities that were irrelevant. Something as simple as the blink of an eye creates a tremendous number of squiggles and lines that might throw off the recognition program. To make matters more challenging, Lappas decided at this early stage in the experiment to search for patterns not only in the auditory cortex but in other areas of the brain as well.

    That expanded search added to the data his computer had to crunch through. In the end, the software was able to identify the sentence a test subject was imagining speaking only about 45 percent of the time. The result was hardly up to military standards; an error rate of 55 percent would be disastrous on the battlefield.

    Schmeisser is not distressed by that high error rate. He is confident that synthetic telepathy can and will rapidly improve to the point where it will be useful in combat. “When we first started this, we didn’t know if it could be done,” he says. “That we have gotten this far is wonderful.” Poeppel agrees. “The fact that they could find anything just blows me away, frankly,” he says.

    Schmeisser notes that D’Zmura has already shown that test subjects can type in Morse code by thinking of specific vowels in dots and dashes. Although this exercise is not actual language, subjects have achieved an accuracy of close to 100 percent.
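
    The decoding half of that scheme is straightforward to illustrate. The sketch below assumes a classifier has already labelled each imagined vowel as a dot or a dash (the vowel-to-symbol mapping and pause marker are invented) and simply looks the resulting sequences up in a Morse table.

```python
# A sketch of the decoding half only, under the assumption that one imagined
# vowel is classified as a dot and the other as a dash; the classifier itself
# is out of scope here.
MORSE = {".-": "A", "-...": "B", "-.-.": "C", "-..": "D", ".": "E",
         "..-.": "F", "--.": "G", "....": "H", "..": "I", ".---": "J",
         "-.-": "K", ".-..": "L", "--": "M", "-.": "N", "---": "O",
         ".--.": "P", "--.-": "Q", ".-.": "R", "...": "S", "-": "T",
         "..-": "U", "...-": "V", ".--": "W", "-..-": "X", "-.--": "Y",
         "--..": "Z"}

def decode(classified_vowels):
    """classified_vowels: list of labels, e.g. 'a' -> dot, 'o' -> dash,
    with None marking a pause between letters."""
    symbols, word = "", ""
    for v in classified_vowels + [None]:
        if v is None:
            if symbols:
                word += MORSE.get(symbols, "?")
                symbols = ""
        else:
            symbols += "." if v == "a" else "-"
    return word

# 'a a a' = S, 'o o o' = O, 'a a a' = S
stream = ["a", "a", "a", None, "o", "o", "o", None, "a", "a", "a"]
print(decode(stream))   # -> SOS
```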

    The next steps in getting a thought helmet to work with actual language will be improving the accuracy of the pattern-recognition programs used by Schalk’s and D’Zmura’s teams and then adding, little by little, to the library of words that these programs can discern. “Whether we can get to fully free-flowing, civilian-type speech, I don’t know. It would be nice. We’re pushing the limits of what we can get, opening the vocabulary as much as we can,” Schmeisser says.

    For some concerned citizens, this research is pushing too far. Among the more paranoid set, the mere fact that the military is trying to create a thought helmet is proof of a conspiracy to subject the masses to mind control. More grounded critics consider the project ethically questionable. Since the Army’s thought helmet project became publicly known, Schmeisser has been deluged with Freedom of Information Act requests from individuals and organizations concerned about privacy issues. Those requests for documentation have required countless hours and continue to this day.

    Schalk, for his part, has resolved to keep a low profile. From his experience working with more invasive techniques, he had seen his fair share of controversy in the field, and he anticipated that this project might attract close scrutiny. “All you need to do is say, ‘The U.S. Army funds studies to implant people for mind reading,’ ” he says. “That’s all it takes, and then you’re going to have to do damage control.”

    D’Zmura and the rest of his team, perhaps to their regret, granted interviews about their preliminary research after it was announced in a UC Irvine press release. The negative reaction was immediate. Bizarre e-mail messages began appearing in D’Zmura’s in-box from individuals ranting against the government or expressing concern that the authorities were already monitoring their thoughts. One afternoon, a woman appeared outside D’Zmura’s office complaining of voices in her head and asking for assistance to remove them.

    Should synthetic telepathy make significant progress, the worried voices will surely grow louder. “Once we cross these barriers, we are doing something that has never before been done in human history, which is to get information directly from the brain,” says Emory University bioethicist Paul Root Wolpe, a leading voice in the field of neuroethics. “I don’t have a problem with sticking this helmet on the head of a pilot to allow him to send commands on a plane. The problem comes when you try to get detailed information about what someone is either thinking or saying nonverbally. That’s something else altogether. The skull should remain a realm of absolute privacy. If the right to privacy means anything, it means the right to the contents of my thoughts.”

    Schmeisser says he has been reflecting on this kind of concern “from the beginning.” He dismisses the most extreme type of worry out of hand. “The very nature of the technology and of the human brain,” he maintains, “would prevent any Big Brother type of use.” Even the most sophisticated existing speech-recognition programs can obtain only 95 percent accuracy, and that is after being calibrated and trained by a user to compensate for accent, intonation, and phrasing. Brain waves are “much harder” to get right, Schmeisser notes, because every brain is anatomically different and uniquely shaped by experience.

    
Merely calibrating a program to recognize a simple sentence from brain waves would take hours. “If your thoughts wander for just an instant, the computer is completely lost,” Schmeisser says. “So the method is completely ethical. There is no way to coerce users into training the machine if they don’t want to. Any attempt to apply coercion will result in more brain wave disorganization, from stress if nothing else, and produce even worse computer performance.” Despite the easy analogies, synthetic telepathy bears little resemblance to mystical notions of mind reading and mind control. The bottom line, Schmeisser insists, “is that I see no risks whatsoever. Only benefits.”

    
Nor does he feel any unease that his funding comes from a military agency eager to put synthetic telepathy to use on the battlefield. The way he sees it, the potential payoff is simply too great.

    “This project is attempting to make the scientific breakthrough that will have application for many things,” Schmeisser says. “If we can get at the black box we call the brain with the reduced dimensionality of speech, then we will have made a beginning to solving fundamental challenges in understanding how the brain works—and, with that, of understanding individuality.”

  • New map shows where tastes are coded in the brain

    September 6, 2011 by Editor
    http://www.kurzweilai.net/new-map-shows-where-tastes-are-coded-in-t...

    Bitter hot spot in the mouse insular cortex (credit: Xiaoke Chen et al./Science)

    Howard Hughes Medical Institute and NIH scientists have discovered that our four basic tastes — sweet, bitter, salty, and “umami,” or savory — are processed by neurons arranged discretely in a “gustotopic map” in the brain.

    In the past, researchers measured the electrical activity of small clusters of neurons to see which areas of a mouse’s brain were activated by different tastes. In those experiments, the areas responding to different tastes seemed to blend together, so scientists concluded that neurons appeared to process all tastes broadly.

    Questioning that assumption, they tried a powerful new technique called two-photon calcium imaging to determine which neurons responded when an animal is exposed to different taste qualities.

    How two-photon calcium imaging works

    When a neuron is activated, it releases a wave of calcium throughout the cell. So the level of calcium can serve as a proxy for measuring activation of neurons. The researchers injected dye into the neurons of mice that made those cells light up with fluorescence every time calcium was released. Then, they looked at the brains of the mice under high-powered microscopes that allowed them to watch hundreds of nerve cells at a time deep within the brain. When a cell was activated, the researchers saw it fluoresce. This allowed them to monitor the activity of large ensembles of cells, as opposed to previous methods, which tracked only a few cells at a time.
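
    A common way to turn such a fluorescence trace into an "activated or not" readout is the ΔF/F calculation sketched below; the trace is synthetic and the frame rate, baseline estimate and threshold are assumptions, not the paper's actual analysis.

```python
# An illustrative ΔF/F calculation of the kind used to turn a cell's
# fluorescence trace into an "activated / not activated" readout. The trace
# and threshold here are synthetic.
import numpy as np

fs = 30                                  # imaging frame rate in Hz (assumed)
t = np.arange(0, 20, 1 / fs)

baseline = 100.0
transient = 40.0 * np.exp(-(t - 8.0) / 1.5) * (t >= 8.0)   # calcium transient at t = 8 s
trace = baseline + transient + np.random.randn(t.size)

f0 = np.percentile(trace, 20)            # baseline fluorescence estimate
dff = (trace - f0) / f0                  # ΔF/F

activated = dff.max() > 0.1              # simple activation criterion
print(f"peak ΔF/F = {dff.max():.2f}, activated = {activated}")
```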

    They observed that when a mouse is given something bitter to taste, or the receptors on its tongue that sense bitter are stimulated, many neurons in one small, specific area of the brain light up. When the mouse is given something salty, an area a few millimeters away is activated. Each taste corresponded to a different hotspot in the brain. None of the areas overlapped — in fact, there was space between all of them.

    “The idea of maps in the brain is one that has been found in other senses,” says Nicholas J. P. Ryba of the National Institute of Dental and Craniofacial Research. “But in those cases the brain maps correspond to external maps.” Visual neurons are found in an arrangement that mimics the field of vision sensed by the eyes. However, taste offers no preexisting arrangement before reaching the brain; furthermore, the receptors for all tastes are found randomly throughout the tongue — thus the spatial organization of taste neurons into a topographic brain map is all the more surprising.

    The researchers next want to uncover how taste combines with other sensory inputs like olfaction and texture, and the internal state — hunger and expectation, for example — to choreograph flavor, taste memories, and taste behaviors.

    Ref.: Xiaoke Chen, et al., A Gustotopic Map of Taste Qualities in the Mammalian Brain, Science, September 2011 [DOI: 10.1126/science.1204076]

  • Remote Control, With a Wave of a Hand


    PLAYING a computer game once meant sitting on the couch and pushing buttons on a controller, but those buttons have been disappearing of late, replaced by human gestures that guide the action.

    Desney Tan of Microsoft Research is working on technology that could enable human gestures to control electronic devices.

    In one experiment, a man wore a sensor that picked up his body's electrical signal, indicating his location and his gestures. (Credit: Microsoft)

     

    Soon gestures may be controlling more than just games. Scientists at Microsoft Research and the University of Washington have come up with a new system that uses the human body as an antenna. The technology could one day be used to turn on lights, buy a ticket at a train station kiosk, or interact with a world of other computer applications. And no elaborate instruments would be required.

    “You could walk up to a ticket-purchasing machine, stand in front and make a gesture to be able to buy your ticket — or set the kind of gas you want at the gas station,” said Desney Tan, a senior researcher at Microsoft Research and one of the creators of the technology. The system, demonstrated so far only in experiments, is “a fascinating step forward,” said Joseph A. Paradiso, an associate professor of media arts and sciences at the Massachusetts Institute of Technology and co-director of the Things That Think Consortium.

    There is no reason to fear that the new technology will affect people’s health, he said; it merely exploits electromagnetic fields that are already in the air. “Suddenly someone takes advantage of it and opens up an example that is potentially useful,” he said of the new gesture technology.

    The innovation is potentially inexpensive, as it requires no handheld wireless wand, as the Nintendo Wii does, or the instrumentation of Microsoft’s Kinect, which uses infrared light and cameras to track motion.

    Instead, the technology uses something that is always with us, unless we live in the wilderness: ambient electromagnetic radiation emitted as a matter of course by the wiring in households, by the power lines above homes, and by those gas pumps at the service station.

    The human body produces a small signal as it interacts with this ambient electrical field. The new system employs algorithms to interpret and harness that interaction.

    In initial tests, the technology determined people’s locations and gestures from the way their bodies interacted with the electrical field, said one of its inventors, Shwetak N. Patel, an assistant professor at the University of Washington.
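
    The published system's algorithms are not detailed in this article, so the sketch below is purely speculative: it treats the amplitude profile of the power-line hum picked up through the body as a feature vector and classifies gestures by nearest neighbor against a few recorded examples, with all data invented.

```python
# A speculative sketch, not the Microsoft/UW implementation: treat the
# amplitude profile of the 60 Hz hum picked up through the body as a feature
# vector and classify gestures by nearest neighbor against recorded examples.
import numpy as np

rng = np.random.default_rng(2)

def hum_profile(gesture_bias):
    """Invented stand-in for the 60 Hz amplitude envelope during a gesture."""
    return gesture_bias + 0.1 * rng.standard_normal(50)

training = {
    "wave":       [hum_profile(np.linspace(0.2, 1.0, 50)) for _ in range(5)],
    "touch_wall": [hum_profile(np.full(50, 1.5)) for _ in range(5)],
}

def classify(sample):
    best_label, best_dist = None, np.inf
    for label, examples in training.items():
        for ex in examples:
            d = np.linalg.norm(sample - ex)
            if d < best_dist:
                best_label, best_dist = label, d
    return best_label

new_gesture = hum_profile(np.linspace(0.2, 1.0, 50))
print(classify(new_gesture))   # expected: "wave"
```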

    Matt Reynolds, an assistant professor of electrical and computer engineering at Duke, who collaborates with Dr. Patel, says it has long been known that people function as antennas as they move near power lines — for example, those within the walls of a home. “What’s new,” he said, “is leveraging those signals as useful data that can be the basis for an interface for a computer system.”

    Dr. Patel has had a longstanding interest in delving into signals in the home and finding new uses for them. He helped to found a company called Zensi, which created an energy monitoring device that can be plugged into any outlet in a home to figure out which appliances are drawing power. (The company was sold last year.)

     

    He is also working on a system to monitor home water use. By detecting minute changes in pressure at a spigot, it infers how much water the toilet or dishwasher is consuming.

    Practical applications of gesture technology will take time to develop, Dr. Tan of Microsoft cautioned. One of those applications may be in homes, where a wave of the hand might control lighting, security systems, air-conditioners or televisions. Of course, designers must take care that gestures don’t accidentally set off a device, he said. For example, it could be commanded to start only with an unusual gesture — perhaps drawing a circle in the air, or touching a certain number of fingers on the wall. Once users have done this, the system knows the gesture is intended as a command.

    Such home-automation devices would have to be calibrated individually for each household, and recalibrated if people moved to another home, as the homes would have different wiring. But Robert Jacob, a professor of computer science at Tufts University, said that such calibration would be a relatively minor chore for machine learning.

    “A computer can be quickly trained to do that,” he said.


