Information about Mind Reading

Mind Reading -- 60 minutes CBS News video
June 28, 2009 4:50 PM
Neuroscience has learned so much about how we think and the brain activity linked to certain thoughts that it is now possible - on a very basic scale - to read a person's mind. Lesley Stahl reports.
How Technology May Soon "Read" Your Mind
Read more: 
http://www.cbsnews.com/stories/1998/07/08/60minutes/main4694713.shtml?tag=cbsnewsSidebarArea.0

http://www.foxnews.com/story/0,2933,426485,00.html

LiveScience Topics: Mind Reading

Mind-machine interfaces can read your mind, and the science is improving. Devices scan the brain and read brain waves with electroencephalography, or EEG, then use a computer to convert thoughts into action. Some mind-reading research has recorded electrical activity generated by the firing of nerve cells in the brain by placing electrodes directly in the brain. These studies could lead to brain implants that would move a prosthetic arm or other assistive devices controlled by a brain-computer interface.

http://utdev.livescience.com/topic/mind-reading

 

16:09 03/11/2010 © Alex Steffler

Rossiiskaya Gazeta
Mind-reading devices to help screen Russian cops

It reads like science fiction, but it’ll soon be science fact. Special mind-reading devices are to be rolled out across Russia’s revamped police force.
http://en.rian.ru/papers/20101103/161202851.html

 

Homeland Security Detects Terrorist Threats by Reading Your Mind
Tuesday, September 23, 2008
By Allison Barrie
Baggage searches are SOOOOOO early-21st century. Homeland Security is now testing the next generation of security screening — a body scanner that can read your mind.

Most preventive screening looks for explosives or metals that pose a threat. But a new system called MALINTENT turns the old school approach on its head. This Orwellian-sounding machine detects the person — not the device — set to wreak havoc and terror.

MALINTENT, the brainchild of the cutting-edge Human Factors division in Homeland Security's directorate for Science and Technology, searches your body for non-verbal cues that predict whether you mean harm to your fellow passengers.

It has a series of sensors and imagers that read your body temperature, heart rate and respiration for unconscious tells invisible to the naked eye — signals terrorists and criminals may display in advance of an attack.

But this is no polygraph test. Subjects do not get hooked up or strapped down for a careful reading; those sensors do all the work without any actual physical contact. It's like an X-ray for bad intentions.

Currently, all the sensors and equipment are packaged inside a mobile screening laboratory about the size of a trailer or large truck bed, and just last week, Homeland Security put it to a field test in Maryland, scanning 144 mostly unwitting human subjects.

While I'd love to give you the full scoop on the unusual experiment, testing is ongoing and full disclosure would compromise future tests.


But what I can tell you is that the test subjects were average Joes living in the D.C. area who thought they were attending something like a technology expo; in order for the experiment to work effectively and to get the testing subjects to buy in, the cover story had to be convincing.

While the 144 test subjects thought they were merely passing through an entrance way, they actually passed through a series of sensors that screened them for bad intentions.

Homeland Security also selected a group of 23 attendees to be civilian "accomplices" in their test. They were each given a "disruptive device" to carry through the portal — and, unlike the other attendees, were conscious that they were on a mission.

In order to conduct these tests on human subjects, DHS had to meet rigorous safety standards to ensure the screening would not cause any physical or emotional harm.

So here's how it works. When the sensors identify that something is off, they transmit warning data to analysts, who decide whether to flag passengers for further questioning. The next step is micro-facial scanning, which measures minute muscle movements in the face for clues to mood and intention.

Homeland Security has developed a system to recognize, define and measure seven primary emotions and emotional cues that are reflected in contractions of facial muscles. MALINTENT identifies these emotions and relays the information back to a security screener almost in real-time.

This whole security array — the scanners and screeners who make up the mobile lab — is called "Future Attribute Screening Technology" — or FAST — because it is designed to get passengers through security in two to four minutes, and often faster.

If you're rushed or stressed, you may send out signals of anxiety, but FAST isn't fooled. It's already good enough to tell the difference between a harried traveler and a terrorist. Even if you sweat heavily by nature, FAST won't mistake you for a baddie.

"If you focus on looking at the person, you don't have to worry about detecting the device itself," said Bob Burns, MALINTENT's project leader. And while there are devices out there that look at individual cues, a comprehensive screening device like this has never before been put together.

While FAST's batting average is classified, Undersecretary for Science and Technology Adm. Jay Cohen declared the experiment a "home run."

As cold and inhuman as the electric eye may be, DHS says scanners are unbiased and nonjudgmental. "It does not predict who you are and make a judgment, it only provides an assessment in situations," said Burns. "It analyzes you against baseline stats when you walk in the door, it measures reactions and variations when you approach and go through the portal."

But the testing — and the device itself — are not without their problems. This invasive scanner, which catalogues your vital signs for non-medical reasons, seems like an uninvited doctor's exam and raises many privacy issues.

But DHS says this is not Big Brother. Once you are through the FAST portal, your scrutiny is over and records aren't kept. "Your data is dumped," said Burns. "The information is not maintained — it doesn't track who you are."

DHS is now planning an even wider array of screening technology, including an eye scanner next year and pheromone-reading technology by 2010.

The team will also be adding equipment that reads body movements, called "illustrative and emblem cues." According to Burns, this is achievable because people "move in reaction to what they are thinking, more or less based on the context of the situation."

FAST may also incorporate biological, radiological and explosive detection, but for now the primary focus is on identifying and isolating potential human threats.

And because FAST is a mobile screening laboratory, it could be set up at entrances to stadiums, malls and in airports, making it ever more difficult for terrorists to live and work among us.

Burns noted his team's goal is to "restore a sense of freedom." Once MALINTENT is rolled out in airports, it could give us a future where we can once again wander onto planes with super-sized cosmetics and all the bottles of water we can carry — and most importantly without that sense of foreboding that has haunted Americans since Sept. 11.

Allison Barrie, a security and terrorism consultant with the Commission for National Security in the 21st Century, is FOX News' security columnist.

 

Please go to LAST PAGE OF "Replies to this Discussion" to read NEWEST Information


Replies

  • http://www.dtic.mil/ndia/2008intell/kruse.pdf
    Operational Neuroscience: monitoring brainwaves from satellites, currently being researched and implemented by DARPA, headed up by Dr. Amy Kruse, on American citizens designated as lab rats. It's in their own report.

    Phase 2 Technical Challenges
    - Capture brain signals in real time during realistic imagery analysis on baseline imagery exploitation systems
    - Categorize target detection brain signals based on object / scene complexity
    - Integrate neuromorphic computational image analysis and physiological brain signals

    Phase 2 Applied Science
    Apply Phase 1 breakthrough science in operational contexts
    - Extend capture of brain signals for target detection to:
      - Multiple imagery types
      - Diverse target and scene complexity
    - Integrate brain-assisted search into standard imagery analysis software
    - Leverage/converge with automated machine vision technologies
    - Demonstrate with trained analysts with realistic tasks and environment

    I have just presented the evidence from the government's own documentation. If people don't read what is already known, that is their problem. It's a done deal!
    "The Neuro Revolution" - Zack Lynch, and an entire organization, tracks it.

    http://www.neurotechindustry.org/people.html There's an entire organization that tracks this; Lynch is its president.

    https://www.youtube.com/watch?v=NBK8SkOjMTw Play it - don't just look at the link, play it!

    "Brain activity can be monitored in real-time
    in operational environments with EEG"

    http://www.dtic.mil/ndia/2008intell/kruse.pdf Here's the report from the Gov't

    Operational Neuroscience

    Intelligence Community Forum

    Dr. Amy Kruse
    Program Manager
    DARPA/DSO
    Nov 5, 2008

    Don't tell me it can't be done when they are publishing documents online showing it has already been done!

    The plan is to turn this shit on the total population for "total control". The only way to find these things out is simply to read the documentation that is already out there!

    http://video.google.com/videoplay?docid=1263677258215075609 Interview with Bilderberg Group member

    Rockefeller Admitted Elite Goal Of Microchipped Population (14:58 - 5 years ago)
    Hollywood director and documentary film maker Aaron Russo has gone in-depth on the astounding admissions of Nick Rockefeller, who personally told him that the elite's ultimate goal was to create a microchipped population and that the war on terror was a hoax, Rockefeller having predicted an "event" that would trigger the invasions of Iraq and Afghanistan eleven months before 9/11. Rockefeller also told Russo that his family's foundation had created and bankrolled the women's liberation movement in order to destroy the family and that population reduction was a fundamental aim of the global elite.

  • Can A Satellite Read Your Thoughts? - Interim Investigation Summary

    Laid out uniformly in time, the signal looks like pulses of varying lengths throughout a defined band.

     

    Given the sudden change in direction this investigation has taken, I thought it best to summarize exactly where we are. Today we will examine what type of signal we are looking for, the equipment needed to produce the signal, the equipment needed to hunt for the signal and how to narrow down the possibilities.

    Unlike previous articles we are going to get very specific on the requirements and provide people, with the appropriate equipment, a chance to not only locate the signal, but defend against it.

    The hunt begins in earnest.

    Phased Array

    With the new understanding of plasma around the axon opening voltage-gated channels to trigger action potentials, we can now speculate on the type of hardware required to track, interface and control neural firing patterns.

    It has become obvious that we are seeking a high frequency phased-array with electronic steering. This could imply both ground-based and satellite-based systems. For those unfamiliar with phased arrays you can learn more here:

    http://www.radartutorial.eu/06.antennas/an14.en.html

    Achieving the discrimination required to trigger individual neurons means that the plasma around axons must respond mainly to particular frequencies. No doubt this relates to the plasma frequency, but I have yet to discover the exact mechanism. My gut reaction, though, is that the system exploits resonant frequencies and may even have a method of determining that frequency. My cursory examination of plasma antenna theory suggests that this information can be revealed remotely by a signal.

    For amateur hams, or even engineers, attempting to locate this signal, I would focus on channels that appear to be occupied by radars or complex data streams. Standard radio equipment will not be of much use here; the resolution is too low to discern the structure of the signal.

    Also, given that we are looking for a steerable phased array, it means that the wavefront where the most energy is concentrated may be out of phase depending on your location. This means that the signal strength will be distributed in time, so some dynamic reconstruction may be required to analyze the signal if you are not located at the focal point. There is also the possibility that more than one array is being used at the same time on a single target. This may prevent hotspots from developing.

    So, let's give you the technical specifications of a radio system that will provide the required resolution.

    High Resolution Software Defined Radio

    For those with experience of software-defined radio, you will be aware that most high-end systems can manage around 1.6 MHz of bandwidth across the range 10 kHz - 32 MHz, with around 80 MSps on a 16-bit ADC. This is a fantastic specification for a general-purpose receiver, but you would be unable to analyze the signal in any great detail.

    The reason is the way FFT works. The greater resolution you have in the time domain, the less you have in the frequency domain. We can look at some of the equations here:

    bins = sample input length / 2
    bin size = (samplerate / 2) / bins (or Sampling rate / sample input length)
    time taken = (1 / samplerate) * sample input length

    If we provide some practical examples based upon the specifications of the high-end radio given above we can see the limitations:

    binsize = 3200000 / 524288 = 6.1Hz
    time taken = (1 / 3200000) * 524288 = 163.8ms

    If a signal is modulated faster than 6.1 times per second, it would appear as a continuous line on the spectrum. Also, if two or more signals started within a 163.8ms time frame, they would appear to start at the same time. All signals that lie within 6.1Hz of each other would be seen as a single signal.

    If we attempt to increase our frequency resolution, our time resolution becomes worse:

    binsize = 3200000 / 2000000 = 1.6Hz
    time taken = (1 / 3200000) * 2000000 = 625ms

    From this you might think the trade-off depends on the sample rate. Examining a lower sample rate shows that it does not: the time taken is always the reciprocal of the bin size.

    binsize = 96000 / 524288 = 0.183Hz
    time taken = (1 / 96000) * 524288 = 5.461s
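
    These numbers are easy to check. Below is a minimal Python sketch (no radio hardware assumed) that reproduces the three worked examples and makes the reciprocal relationship explicit:

    # Minimal sketch of the FFT resolution trade-off discussed above.
    # The (samplerate, FFT length) pairs are the article's own examples.

    def fft_resolution(samplerate_hz, n_samples):
        """Return (bin size in Hz, capture time in s) for an N-point FFT."""
        bin_size = samplerate_hz / n_samples      # frequency resolution
        time_taken = n_samples / samplerate_hz    # time one FFT spans
        return bin_size, time_taken

    for fs, n in [(3_200_000, 524_288),    # 6.1 Hz bins, 163.8 ms
                  (3_200_000, 2_000_000),  # 1.6 Hz bins, 625 ms
                  (96_000, 524_288)]:      # 0.183 Hz bins, 5.461 s
        b, t = fft_resolution(fs, n)
        print(f"fs={fs:>9} Hz  N={n:>9}  bin={b:.3f} Hz  time={t:.4f} s")

    # bin_size * time_taken == 1 in every case, so sub-1Hz bins always
    # force capture times beyond one second, whatever the sample rate.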

    To see the modulation in the type of signals we have been discussing, we need to strike a balance between frequency and time resolution. This can be very difficult.

    Firstly, we need to define what exactly we are looking for. This will set the specifications of the receiver required to analyze the structure of the signal. As I mentioned above, we are looking for a steerable phased array. The signal itself will be a densely packed structure of high frequency radio waves modulated in the sub-3 kHz band. This should cover most, if not all, of the potential operating frequencies that the plasma around the axon can be driven by.

    Thus, we should see pulses with varying timings layered together in a narrow frequency band. In short, low frequency pulse width modulation (PWM) on a high frequency carrier. No doubt there are similarities to GSM and the application of Gaussian filters to initiate action potentials. For that reason I am not going to rule out a network of fake cell towers as a potential source. That said, given that their transmissions would not conform to GSM standards, or any known standard, they would stick out to a professional with the right gear.
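
    To have something concrete to test detection code against, here is an illustrative Python/NumPy sketch that synthesizes this kind of waveform: low frequency pulse width modulation on a high frequency carrier. Every parameter (sample rate, carrier, pulse rate, duty cycle) is an arbitrary placeholder, not a measured value:

    import numpy as np

    # Synthetic test waveform only: low frequency PWM on an HF carrier.
    # All parameters are placeholders, not measurements of any real signal.
    fs = 1_000_000                 # sample rate, Hz
    t = np.arange(0, 0.1, 1 / fs)  # 100 ms of samples
    carrier_hz = 100_000           # stand-in "high frequency" carrier

    pulse_rate = 500                               # pulses per second
    duty = 0.2 + 0.1 * np.sin(2 * np.pi * 5 * t)   # slowly varying pulse width
    phase = (t * pulse_rate) % 1.0
    gate = (phase < duty).astype(float)            # 1 during a pulse, 0 between

    signal = gate * np.sin(2 * np.pi * carrier_hz * t)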

    Further, I do not expect the signals to be confined to a single band of contiguous frequencies. That said, I fully expect the signal to be located in bands allocated to the US and UK military. Given that this has most certainly been figured out by Russia and China, cross-referencing the overlaps in allocation of bands to the military may reduce the search further. Initially though, I would focus on the US allocations, particularly between 35 MHz and 1 GHz.

    From these requirements we can see that we need a time resolution of around 0.3ms but we are guessing at the bin size. In this case, we would also need a frequency resolution as fine as possible, microhertz or better.

    That presents a problem. The only way to get a frequency resolution below 1Hz is for the sample input length to exceed the sample rate. We can see this in the last equation we performed. In doing so, we are guaranteed that our time resolution will be in the order of seconds, not the fraction of milliseconds that we require. Zero padding and overlaps are of no help here.

    Thus it should be clear that FFT is the wrong tool for the job and, as such, commercial SDRs and spectrograms linked to radios are pointless for signal analysis. So, if you ever see a "SIGINT" system with an FFT, have a quiet giggle.

    It should be obvious now why amateur hams have never reported such a system. They can't see it on SDR and on a standard radio it would sound like a complex digital mode transmission.

    So, how do we detect it?

    The FFT tells us which frequency components exist in a signal over a defined period. We require a transform that provides fine-grained resolution in both time and frequency. We have options such as the short-time Fourier transform (STFT), the FWT, Wigner distributions, etc.

    STFT has resolution issues, as does FWT. So, we can ignore these methods as they will not reveal the signal structure. Given the similarities with GSM, the best approach would be to modify the method used to sniff GSM packets in the air. I found a very good article on this by David A. Hall from National Instruments:

    http://www.eetimes.com/design/microwave-rf-design/4018964/Sniffing-...

    Unlike GSM I do not expect a defined carrier, at least not one that represents the axons of an entire person. It is quite possible that axons close in frequency terms will create a "carrier like" signal on a vector signal analyzer. This should, in theory, betray the location of the signals. It may be much harder to track down signals that are narrower than the resolution of the vector signal analyzer. At this point, it may be useful to examine the basics of vector signal analysis. I found another good resource from Agilent on this:

    http://cp.literature.agilent.com/litweb/pdf/5989-1121EN.pdf

    As we can see from this document, all the real work is performed in the digital domain. With that information, we now know that we can turn any decent SDR into a vector signal analyzer just by modifying the software. That said, it can be computationally quite heavy, and parallel processing expands capability. Again, there is a resolution issue with very narrowband signals, such as 100 μHz, but it is typically faster than other methods. So brief, spread-out signals could potentially hide; however, complex integration with the brain would require sustained frequencies, revealing carriers.
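
    As a rough illustration of that digital-domain work, here is a minimal Python sketch of the core step of any vector signal analyzer: mixing a capture down to complex baseband, low-pass filtering, and decimating. This is generic DSP, not tied to any particular SDR; the arguments stand in for whatever your receiver provides:

    import numpy as np
    from scipy.signal import butter, sosfiltfilt

    def downconvert(x, fs, f_center, bw, decim):
        """Shift the band at f_center to baseband, filter, and decimate."""
        n = np.arange(len(x))
        baseband = x * np.exp(-2j * np.pi * f_center * n / fs)  # complex mix
        sos = butter(6, bw / 2, btype="low", fs=fs, output="sos")
        narrow = sosfiltfilt(sos, baseband)    # keep only the band of interest
        return narrow[::decim], fs / decim     # narrowband IQ, new sample rate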

    The next stage is to pass this information to a Gabor spectrogram to reveal a breakdown of the low frequency pulse modulation. How effective this will be in terms of frequency separation is unknown, but the time resolution should be adequate to determine pulse rates, assuming the frequencies do not overlap in the image, causing longer or continuous lines. You may need to apply some bandpass filters to isolate the signal.
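
    In practice, a Gabor spectrogram is a short-time Fourier transform taken with a Gaussian window. A minimal sketch, where 'iq' stands in for complex baseband samples from the downconversion step (random placeholder data here, in place of a real capture):

    import numpy as np
    from scipy.signal import stft

    fs = 50_000.0                                  # placeholder sample rate
    rng = np.random.default_rng(0)
    iq = rng.standard_normal(100_000) + 1j * rng.standard_normal(100_000)

    # STFT with a Gaussian window is effectively a Gabor spectrogram;
    # two-sided output because the input is complex baseband.
    f, tt, Z = stft(iq, fs=fs, window=("gaussian", 48),
                    nperseg=512, noverlap=480, return_onesided=False)
    power_db = 20 * np.log10(np.abs(Z) + 1e-12)    # pulse timing shows along tt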

    If you are having difficulties with this part, or you suspect multiple layers, then try the time-domain IQ visualization as described in the National Instruments article. I don't know how revealing this will be, but it is one of the last steps towards eventual decoding of the signal. I use the term decoding, rather than demodulation, as the signal just triggers neurons and contains no embedded data. That said, it may contain elements similar to phase-shift keying, as the signal would need to track through neuron clusters to provide fine control. Thus, the signal would be a form of continuous-phase frequency-shift keying.

    As a side note, this brings up the role of Collins Radio Company (now Rockwell Collins) in early tests, although their involvement is not strictly necessary, as the patent has been available since 1958. That said, they would have held the patent until 1975, and field tests were well underway by then.

    Whilst people play with this idea and make recordings of suspect signals, I will look into better methods and appropriate software solutions that can be used.

    Signal Content

    It must be stressed at this point that what we will be receiving is neural patterns from Mr Computer; I don't expect that anyone will receive neural signals from people. These neural signals will be a mixture of conversations, verbal commands, images, sounds, ideas, conclusions, emotional stimuli and motor controls. Whilst we may not be able to decode them immediately, that will come in time as we learn the meaning of neural codes. One good indicator that you have stumbled upon the correct signals is that the patterns will be relatively similar in terms of structure and, given the nature of a phased array, will have different signal strengths and reception timings. Depending on the footprint of the wavefront, we may find these signals spread across a band of frequencies, or the same frequencies reused but within a defined spatial constraint. That is, the same frequencies are used for everyone, but the signal only develops the correct strength around the target.

    Shielding

    One good thing to come out of this is the fact that we know the signal can be blocked by a Faraday cage. At this stage, we don't know if a second signal reveals the neural patterns, or if it is direct detection of emissions. A phased array has a limited power output, and a defined signal strength must be maintained at the target to induce neurons into firing. Whilst it may be feasible to increase the power to overcome the attenuation of a single Faraday cage, if a significant proportion of those on the system did this, it would become unfeasible to maintain the power requirements. I will come back to this issue of shielding later and attempt to develop a low-cost solution that should suit most people's needs.

  • Page last updated at 22:00 GMT, Wednesday, 3 February 2010
    Vegetative state patients can respond to questions
    By Fergus Walsh      Medical correspondent, BBC News

    Scientists have been able to reach into the mind of a brain-damaged man and communicate with his thoughts.

    The research, carried out in the UK and Belgium, involved a new brain scanning method.

    Awareness was detected in three other patients previously diagnosed as being in a vegetative state.

    The study in the New England Journal of Medicine shows that scans can detect signs of awareness in patients thought to be closed off from the world.

    Patients in a vegetative state are awake, not in a coma, but have no awareness because of severe brain damage.

    Scanning technique

    The scientists used functional magnetic resonance imaging (fMRI) which shows brain activity in real time.

    They asked patients and healthy volunteers to imagine playing tennis while they were being scanned.

    In each of the volunteers this stimulated activity in the pre-motor cortex, part of the brain which deals with movement.

    This also happened in four out of 23 of the patients presumed to be in a vegetative state.

    I volunteered to test out the scanning technique.

    I gave the scientists two women's names, one of which was my mother's.

    I imagined playing tennis when they said the right name, and within a minute they had worked out her name.

    They were also able to guess correctly whether I had children.

    Questions

    This is a continuation of research published three years ago, when the team used the same technique to establish initial contact with a patient diagnosed as vegetative.

    But this time they went further.

    With one patient - a Belgian man injured in a traffic accident seven years ago - they asked a series of questions.

    He was able to communicate "yes" and "no" using just his thoughts.

    The team told him to use "motor" imagery like a tennis match to indicate "yes" and "spatial" imagery like thinking about roaming the streets for a "no".

    The patient responded accurately to five out of six autobiographical questions posed by the scientists.

    For example, he confirmed that his father's name was Alexander.

    The study involved scientists from the Medical Research Council (MRC), the Wolfson Brain Imaging Centre in Cambridge and a Belgian team at the University of Liege.

    Dr Adrian Owen from the MRC in Cambridge co-authored the report:

    "We were astonished when we saw the results of the patient's scan and that he was able to correctly answer the questions that were asked by simply changing his thoughts."

    Dr Owen says this opens the way to involving such patients in their future treatment decisions: "You could ask if patients were in pain and if so prescribe painkillers and you could go on to ask them about their emotional state."

    It does raise many ethical issues. For example, it is lawful to allow patients in a permanent vegetative state to die by withdrawing all treatment, but if a patient showed they could respond it would not be, even if they made it clear that was what they wanted.

    The Royal Hospital for Neurodisability in London is a leading assessment and treatment centre for adults with brain injuries.

    Helen Gill, a consultant in low awareness state, welcomed the new research but cautioned that it was still early days for the research: "It's very useful if you have a scan which can

  • Pentagon Preps Soldier Telepathy Push

    By Katie Drummond

    http://www.wired.com/dangerroom/2009/05/pentagon-preps-soldier-tele...


    Forget the battlefield radios, the combat PDAs or even infantry hand signals. When the soldiers of the future want to communicate, they’ll read each other’s minds.

    At least, that’s the hope of researchers at the Pentagon’s mad-science division Darpa. The agency’s budget for the next fiscal year includes $4 million to start up a program called Silent Talk. The goal is to “allow user-to-user communication on the battlefield without the use of vocalized speech through analysis of neural signals.” That’s on top of the $4 million the Army handed out last year to the University of California to investigate the potential for computer-mediated telepathy.

    Before being vocalized, speech exists as word-specific neural signals in the mind. Darpa wants to develop technology that would detect these signals of “pre-speech,” analyze them, and then transmit the statement to an intended interlocutor. Darpa plans to use EEG to read the brain waves. It’s a technique they’re also testing in a project to devise mind-reading binoculars that alert soldiers to threats faster than the conscious mind can process them.

    The project has three major goals, according to Darpa. First, try to map a person’s EEG patterns to his or her individual words. Then, see if those patterns are generalizable — if everyone has similar patterns. Last, “construct a fieldable pre-prototype that would decode the signal and transmit over a limited range.”

    The military has been funding a handful of mind-tapping technologies recently, and already has monkeys capable of telepathic limb control. Telepathy may also have advantages beyond covert battlefield chatter. Last year, the National Research Council and the Defense Intelligence Agency released a report suggesting that neuroscience might also be useful to “make the enemy obey our commands.” The first step, though, may be getting a grunt to obey his officer’s remotely transmitted thoughts.

    – Katie Drummond and Noah Shachtman


  • CareFusion Launches Wireless Diagnostic and Monitoring Neurological Device

  • My Take: Keep government out of mind-reading business


    Editor's Note: Paul Root Wolpe, Ph.D., is director of Emory University’s Center for Ethics.

    By Paul Root Wolpe, Special to CNN

    http://religion.blogs.cnn.com/2011/11/12/my-take-keep-government-ou...

    (CNN) – “My thoughts, they roam freely. Who can ever guess them?”

    So goes an old German folk song. But imagine living in a world where someone can guess your thoughts, or even know them for certain. A world where science can reach into the deep recesses of your brain and pull out information that you thought was private and inaccessible.

    Would that worry you?

    If so, then start worrying. The age of mind reading is upon us.

    Neuroscience is advancing so rapidly that, under certain conditions, scientists can use sophisticated brain imaging technology to scan your brain and determine whether you can read a particular language, what word you are thinking of, even what you are dreaming about while you are asleep.

    The research is still new, and the kinds of information scientists can find through brain imaging are still simple. But the recent pace of progress in neuroscience has been startling and new studies are being published all the time.

    In one experiment, researchers at Carnegie Mellon looked at images of people’s brains when they were thinking of some common objects – animals, body parts, tools, vegetables – and recorded which areas of their brains activated when they thought about each object.

    The scientists studied patterns of brain activity while subjects thought about 58 such objects. Then they predicted what the person’s brain would look like if researchers gave them a brand new object, like “celery.”

    The scientists’ predictions were surprisingly accurate.

    Many scholars predicted as recently as a few years ago that we would never get this far. Now we have to ask: If we can tell what words you are thinking of, is it much longer before we will be able to read complex thoughts?

    In another experiment, researchers at the Max Planck Institute of Psychiatry in Munich, Germany, sought out a group of “lucid dreamers” - people who remain aware that they are dreaming and even maintain some control over their dreams while they sleep.

    The researchers asked the subjects to clench either their right hand or left hand in their dreams, then scanned their brain while they slept. The subjects’ motor cortex, the part of the brain that controls movement, lit up in the same manner it would if a person clenched their left hand while awake – even though the actual hand of the sleeping subjects never moved.

    The images revealed that the subjects were dreaming of clenching their left fists.

    Throughout human history, the inner workings of our minds were impenetrable, known only to us and, perhaps, to God. No one could see what you were thinking, or know what you were feeling, unless you chose to reveal it to them.

    In fact, the idea of being able to decipher what is going on in that three pounds of grey mush between our ears seemed an impossible task even a couple of decades ago.

    Now, for the first time in human history, we are peering into the labyrinth of the mind and pulling out information, perhaps even information you would rather we did not know.

    Neuroscientists are actively developing technologies to create more effective lie detectors, to determine if people have been at a crime scene, or to predict who may be more likely to engage in violent crime.

    As the accuracy and reliability of these experiments continue to improve, the temptation will be strong to use these techniques in counter-terrorism, in the courtroom, perhaps even at airports.

    And if brain imaging for lie detection is shown to be reliable, intelligence agencies may want to use it to discover moles, employers may want to use it to screen employees, schools to uncover vandals or cheaters.

    But should we allow it?

    I believe not. The ability to read our thoughts threatens the last absolute bastion of privacy that we have. If my right to privacy means anything, it must mean the right to keep my innermost thoughts safe from the prying eyes of the state, the military or my employer.

    My mind must remain mine alone, and my skull an inviolable zone of privacy.

    Right now, our right to privacy – even the privacy of our bodies – ends when a judge issues a warrant. The court can order your house searched, your computer files exposed, and your diary read. It can also order you to submit to a blood test, take a drug screen, or to provide a DNA sample.

    There is no reason, right now, that it could not also order a brain scan.

    Right now, the technology is not reliable enough for the courts to order such tests. But the time is coming, and soon.

    Eventually, courts will have to decide whether it is allowable to order a defendant to get a brain scan. There is even an interesting question of whether forcing me to reveal my inner thoughts through a brain scan might violate my Fifth Amendment protection against self-incrimination.

    But not even a court order should be enough to violate your right to a private inner life. The musings of my mind and heart are the most precious and private possessions that I have, the one thing no one can take away from me.

    Let them search my house, if they must, or take some blood, if that will help solve a case. But allowing the state to probe our minds ends even the illusion of individual liberty, and gives government power that is far too easy to abuse.

    The opinions expressed in this commentary are solely those of Paul Root Wolpe.

     The Editors - CNN Belief Blog

    Filed under: Culture & Science, Ethics
  • The mind-machine interface Nissan is referring to has more to do with telepathy. However rare, natural telepaths do exist, though most today have an artificial way of sending and receiving these messages. This is the technology of the future in most, if not all, modes of transportation.
  • Mind-reading car could drive you round the bend

    Nissan collaborates with Swiss scientists to develop interface between man and machine, saying it will help road safety

    The Nissan Leaf electric car. Now the manufacturer is helping to develop a car that can interact with its driver's brain. Photograph: Simon Stuart-Miller

    One of the world's largest motor manufacturers is working with scientists based in Switzerland to design a car that can read its driver's mind and predict his or her next move.

    The collaboration, between Nissan and the École Polytechnique Fédérale de Lausanne (EPFL), is intended to balance the necessities of road safety with demands for personal transport.

    Scientists at the EPFL have already developed brain-machine interface (BMI) systems that allow wheelchair users to manoeuvre their chairs by thought transference. Their next step will be finding a way to incorporate that technology into the way motorists interact with their cars.

    If the endeavour proves successful, the vehicles of the future may be able to prepare themselves for a left or right turn – choosing the correct speed and positioning – by gauging that their drivers are thinking about making such a turn.

    However, although BMI technology is well established, the levels of human concentration needed to make it work are extremely high, so the research team is working on systems that will use statistical analysis to predict a driver's next move and to "evaluate a driver's cognitive state relevant to the driving environment".

    By measuring brain activity, monitoring patterns of eye movement and scanning the environment around the car, the team thinks the car will be able to predict what a driver is planning to do and help him or her complete the manoeuvre safely.

    Lucian Gheorghe, who joined Nissan's mobility research centre after graduating in computer science and artificial intelligence from Kobe University, Japan, said he believed the joint project could benefit both scientists and motorists.

    "Brain wave analysis has helped me understand driver burden in order to reduce driver stress," he said. "During our collaboration with EPFL, I believe we will not only be able to contribute to the scientific community but we will also find engineering solutions that will bring us close to providing easy access to personal mobility for everyone."

    Professor José del R Millán, who is leading the project, said the idea behind the research was a simple one: "to blend driver and vehicle intelligence together in such a way that eliminates conflicts between them, leading to a safer motoring environment".

  • The Army's Bold Plan to Turn Soldiers Into Telepaths

    The U.S. Army wants to allow soldiers to communicate just by thinking. The new science of synthetic telepathy could soon make that happen.

    by Adam Piore; illustration by Sam Kennedy

    From the April 2011 issue; published online July 20, 2011

    http://discovermagazine.com/2011/apr/15-armys-bold-plan-turn-soldie...

    On a cold, blustery afternoon the week before Halloween, an assortment of spiritual mediums, animal communicators, and astrologists have set up tables in the concourse beneath the Empire State Plaza in Albany, New York. The cavernous hall of shops that connects the buildings in this 98-acre complex is a popular venue for autumnal events: Oktoberfest, the Maple Harvest Festival, and today’s “Mystic Fair.”

    Traffic is heavy as bureaucrats with ID badges dangling from their necks stroll by during their lunch breaks. Next to the Albany Paranormal Research Society table, a middle-aged woman is solemnly explaining the workings of an electromagnetic sensor that can, she asserts, detect the presence of ghosts. Nearby, a “clairvoyant” ushers a government worker in a suit into her canvas tent. A line has formed at the table of a popular tarot card reader.

    Amid all the bustle and transparent hustles, few of the dabblers at the Mystic Fair are aware that there is a genuine mind reader in the building, sitting in an office several floors below the concourse. This mind reader is not able to pluck a childhood memory or the name of a loved one out of your head, at least not yet. But give him time. He is applying hard science to an aspiration that was once relegated to clairvoyants, and unlike his predecessors, he can point to some hard results.

    The mind reader is Gerwin Schalk, a 39-year-old biomedical scientist and a leading expert on brain-computer interfaces at the New York State Department of Health’s Wadsworth Center at Albany Medical College. The Austrian-born Schalk, along with a handful of other researchers, is part of a $6.3 million U.S. Army project to establish the basic science required to build a thought helmet—a device that can detect and transmit the unspoken speech of soldiers, allowing them to communicate with one another silently.

    As improbable as it sounds, synthetic telepathy, as the technology is called, is getting closer to battlefield reality. Within a decade Special Forces could creep into the caves of Tora Bora to snatch Al Qaeda operatives, communicating and coordinating without hand signals or whispered words. Or a platoon of infantrymen could telepathically call in a helicopter to whisk away their wounded in the midst of a deafening firefight, where intelligible speech would be impossible above the din of explosions.

    For a look at the early stages of the technology, I pay a visit to a different sort of cave, Schalk’s bunkerlike office. Finding it is a workout. I hop in an elevator within shouting distance of the paranormal hubbub, then pass through a long, linoleum-floored hallway guarded by a pair of stern-faced sentries, and finally descend a cement stairwell to a subterranean warren of laboratories and offices.

    Schalk is sitting in front of an oversize computer screen, surrounded by empty metal bookshelves and white cinder-block walls, bare except for a single photograph of his young family and a poster of the human brain. The fluorescent lighting flickers as he hunches over a desk to click on a computer file. A volunteer from one of his recent mind-reading experiments appears in a video facing a screen of her own. She is concentrating, Schalk explains, silently thinking of one of two vowel sounds, aah or ooh.

    The volunteer is clearly no ordinary research subject. She is draped in a hospital gown and propped up in a motorized bed, her head swathed in a plasterlike mold of bandages secured under the chin. Jumbles of wires protrude from an opening at the top of her skull, snaking down to her left shoulder in stringy black tangles. Those wires are connected to 64 electrodes that a neurosurgeon has placed directly on the surface of her naked cortex after surgically removing the top of her skull. “This woman has epilepsy and probably has seizures several times a week,” Schalk says, revealing a slight Germanic accent.

    The main goal of this technique, known as electrocorticography, or ECOG, is to identify the exact area of the brain responsible for her seizures, so surgeons can attempt to remove the damaged areas without affecting healthy ones. But there is a huge added benefit: The seizure patients who volunteer for Schalk’s experiments prior to surgery have allowed him and his collaborator, neurosurgeon Eric C. Leuthardt of Washington University School of Medicine in St. Louis, to collect what they claim are among the most detailed pictures ever recorded of what happens in the brain when we imagine speaking words aloud.


    Those pictures are a central part of the project funded by the Army’s multi-university research grant and the latest twist on science’s long-held ambition to read what goes on inside the mind. Researchers have been experimenting with ways to understand and harness signals in the areas of the brain that control muscle movement since the early 2000s, and they have developed methods to detect imagined muscle movement, vocalizations, and even the speed with which a subject wants to move a limb.

    At Duke University Medical Center in North Carolina, researchers have surgically implanted electrodes in the brains of monkeys and trained them to move robotic arms at MIT, hundreds of miles away, just by thinking. At Brown University, scientists are working on a similar implant they hope will allow paralyzed human subjects to control artificial limbs. And workers at Neural Signals Inc., outside Atlanta, have been able to extract vowels from the motor cortex of a paralyzed patient who lost the ability to talk by sinking electrodes into the area of his brain that controls his vocal cords.

    But the Army’s thought-helmet project is the first large-scale effort to “really attack” the much broader challenge of synthetic telepathy, Schalk says. The Army wants practical applications for healthy people, “and we are making progress,” he adds.

    Schalk is now attempting to make silent speech a reality by using sensors and computers to explore the regions of the brain responsible for storing and processing thoughts. The goal is to build a helmet embedded with brain-scanning technologies that can target specific brain waves, translate them into words, and transmit those words wirelessly to a radio speaker or an earpiece worn by other soldiers.

    As Schalk explains his vast ambitions, I’m mesmerized by the eerie video of the bandaged patient on the computer screen. White bars cover her eyes to preserve her anonymity. She is lying stock-still, giving the impression that she might be asleep or comatose, but she is very much engaged. Schalk points with his pen at a large rectangular field on the side of the screen depicting a region of her brain abuzz with electrical activity. Hundreds of yellow and white brain waves dance across a black backdrop, each representing the oscillating electrical pulses picked up by one of the 64 electrodes attached to her cortex as clusters of brain cells fire.

    Somewhere in those squiggles lie patterns that Schalk is training his computer to recognize and decode. “To make sense of this is very difficult,” he says. “For each second there are 1,200 variables from each electrode location. It’s a lot of numbers.”

    Schalk gestures again toward the video. Above the volunteer’s head is a black bar that extends right or left depending on the computer’s ability to guess which vowel the volunteer has been instructed to imagine: right for “aah,” left for “ooh.” The volunteer imagines “ooh,” and I watch the black bar inch to the left. The volunteer thinks “aah,” and sure enough, the bar extends right, proof that the computer’s analysis of those hundreds of squiggling lines in the black rectangle is correct. In fact, the computer gets it right “close to 100 percent of the time,” Schalk says.
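
    For readers curious what such a decoder looks like in outline, here is a purely illustrative Python sketch of a two-class brain-signal classifier. This is emphatically not Schalk’s actual pipeline; it runs on synthetic random data, on which it can only achieve chance-level accuracy:

    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score

    # Placeholder data: 200 trials, 64 electrodes, 1,200 samples per trial
    # (one second at 1,200 Hz, echoing the "1,200 variables per electrode"
    # figure above), each trial labeled imagined "aah" (0) or "ooh" (1).
    rng = np.random.default_rng(0)
    epochs = rng.standard_normal((200, 64, 1200))
    labels = rng.integers(0, 2, size=200)

    # One simple feature per electrode: log band power over the whole epoch.
    features = np.log(np.mean(epochs**2, axis=2))   # shape (200, 64)

    clf = LinearDiscriminantAnalysis()
    scores = cross_val_score(clf, features, labels, cv=5)
    print("accuracy on random data (chance ~0.5):", scores.mean())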

    He admits that he is a long way from decoding full, complex imagined sentences with multiple words and meaning. But even extracting two simple vowels from deep within the brain is a big advance. Schalk has no doubt about where his work is leading. “This is the first step toward mind reading,” he tells me.


    The motivating force behind the thought helmet project is a retired Army colonel with a Ph.D. in the physiology of vision and advanced belts in karate, judo, aikido, and Japanese sword fighting. Elmar Schmeisser, a lanky, bespectacled scientist with a receding hairline and a neck the width of a small tree, joined the Army Research Office as a program manager in 2002. He had spent his 30-year career up to that point working in academia and at various military research facilities, exhaustively investigating eyewear to protect soldiers against laser exposure, among other technologies.

    Schmeisser had been fascinated by the concept of a thought helmet ever since he read about it in E. E. “Doc” Smith’s 1946 science fiction classic, Skylark of Space, back in the eighth grade. But it was not until 2006, while Schmeisser was attending a conference on advanced prosthetics in Irvine, California, that it really hit him: Science had finally caught up to his boyhood vision. He was listening to a young researcher expound on the virtues of extracting signals from the surface of the brain. The young researcher was Gerwin Schalk.

    Schalk’s lecture was causing a stir. Many neuroscientists had long believed that the only way to extract data from the brain specific enough to control an external device was to penetrate the cortex and sink electrodes into the gray matter, where the electrodes could record the firing of individual neurons. By claiming that he could pry information from the brain without drilling deep inside it—information that could allow a subject to move a computer cursor, play computer games, and even move a prosthetic limb—Schalk was taking on “a very strong existing dogma in the field that the only way to know about how the brain works is by recording individual neurons,” Schmeisser vividly recalls of that day.

    Many of those present dismissed Schalk’s findings as blasphemy and stood up to attack them. But for Schmeisser it was a magical moment. If he could take Schalk’s idea one step further and find a way to extract verbal thoughts from the brain without surgery, the technology could dramatically benefit not only disabled people but the healthy as well. “Everything,” he says, “all of a sudden became possible.”

    The next year, Schmeisser marched into a large conference room at Army Research Office headquarters in Research Triangle Park, North Carolina, to pitch a research project to investigate synthetic telepathy for soldiers. He took his place at a podium facing a large, U-shaped table fronting rows of chairs, where a committee of some 30 senior scientists and colleagues—division chiefs, directorate heads, mathematicians, particle physicists, chemists, computer scientists, and Pentagon brass in civilian dress—waited for him to begin.

    Schmeisser had 10 minutes and six PowerPoint slides to address four major questions: Where was the field at the moment? How might his idea prove important? What would the Army get out of it? And was there reason to believe that it was doable?

    The first three questions were simple. It was that last one that tripped him up. “Does this really work?” Schmeisser remembers the committee asking him. “Show us the evidence that this could really work—that you are not just hallucinating it.”

    The committee rejected Schmeisser’s proposal but authorized him to collect more data over the following year to bolster his case. For assistance he turned to Schalk, the man who had gotten him thinking about a thought helmet in the first place.

    Schalk and Leuthardt had been conducting mind-reading experiments for several years, exploring their patients’ ability to play video games, move cursors, and type by means of brain waves picked up via a scanner. The two men were eager to push their research further and expand into areas of the brain thought to be associated with language, so when Schmeisser offered them a $450,000 grant to prove the feasibility of a thought helmet, they seized the opportunity.

    Schalk and Leuthardt quickly recruited 12 epilepsy patients as volunteers for their first set of experiments. As I had seen in the video in Schalk’s office, each patient had the top of his skull removed and electrodes affixed to the surface of the cortex. The researchers then set up a computer screen and speakers in front of the patients’ beds.

    The patients were presented with 36 words that had a relatively simple consonant-vowel-consonant structure, such as bet, bat, beat, and boot. They were asked to say the words out loud and then to simply imagine saying them. Those instructions were conveyed visually (written on a computer screen) with no audio, and again vocally with no video. The electrodes provided a precise map of the resulting neural activity.

    Schalk was intrigued by the results. As one might expect, when the subjects vocalized a word, the data indicated activity in the areas of the motor cortex associated with the muscles that produce speech. The auditory cortex and an area in its vicinity long believed to be associated with speech, called Wernicke’s area, were also active.

    When the subjects imagined words, the motor cortex went silent while the auditory cortex and Wernicke’s area remained active. Although it was unclear why those areas were active, what they were doing, and what it meant, the raw results were an important start. The next step was obvious: Reach inside the brain and try to pluck out enough data to determine, at least roughly, what the subjects were thinking.

    Schmeisser presented Schalk’s data to the Army committee the following year and asked it to fund a formal project to develop a real mind-reading helmet. As he conceived it, the helmet would function as a wearable interface between mind and machine. When activated, sensors inside would scan the thousands of brain waves oscillating in a soldier’s head; a microprocessor would apply pattern recognition software to decode those waves and translate them into specific sentences or words, and a radio would transmit the message. Schmeisser also proposed adding a second capability to the helmet to detect the direction in which a soldier was focusing his attention. The function could be used to steer thoughts to a specific comrade or squad, just by looking in their direction.

    The words or sentences would reach a receiver that would then “speak” the words into a comrade’s earpiece or be played from a speaker, perhaps at a distant command post. The possibilities were easy to imagine:

    “Look out! Enemy on the right!”

    “We need a medical evacuation now!”

    “The enemy is standing on the ridge. Fire!”

    Any of those phrases could be life-saving.

    This time the committee signed off.

    Grant applications started piling up in Schmeisser’s office. To maximize the chance of success, he decided to split the Army funding between two university teams that were taking complementary approaches to the telepathy problem.

    The first team, directed by Schalk, was pursuing the more invasive ECOG approach, attaching electrodes beneath the skull. The second group, led by Mike D’Zmura, a cognitive scientist at the University of California, Irvine, planned to use electroencephalography (EEG), a noninvasive brain-scanning technique that was far better suited for an actual thought helmet. Like ECOG, EEG relies on brain signals picked up by an array of electrodes that are sensitive to the subtle voltage oscillations caused by the firing of groups of neurons. Unlike ECOG, EEG requires no surgery; the electrodes attach painlessly to the scalp.

    For Schmeisser, this practicality was critical. He ultimately wanted answers to the big neuroscience questions that would allow researchers to capture complicated thoughts and ideas, yet he also knew that demonstrating even a rudimentary thought helmet capable of discerning simple commands would be a valuable achievement. After all, soldiers often use formulaic and reduced vocabulary to communicate. Calling in a helicopter for a medical evacuation, for instance, requires only a handful of specific words.

    “We could start there,” Schmeisser says. “We could start below that.” He noted, for instance, that it does not require a terribly complicated message to call for an air strike or a missile launch: “That would be a very nice operational capability.”

    The relative ease with which EEG can be applied comes at a price, however. The exact location of neural activity is far more difficult to discern via EEG than with many other, more invasive methods because the skull, scalp, and cerebral fluid surrounding the brain scatter its electric signals before they reach the electrodes. That blurring also makes the signals harder to detect at all. The EEG data can be so messy, in fact, that some of the researchers who signed on to the project harbored private doubts about whether it could really be used to extract the signals associated with unspoken thoughts.

    In the initial months of the project, back in 2008, one of D’Zmura’s key collaborators, renowned neuroscientist David Poeppel, sat in his office on the second floor of the New York University psychology building and realized he was unsure even where to begin. With his research partner Greg Hickok, an expert on the neuroscience of language, he had developed a detailed model of audible speech systems, parts of which were widely cited in textbooks. But there was nothing in that model to suggest how to measure something imagined.

    For more than 100 years, Poeppel reflected, speech experimentation had followed a simple plan: Ask a subject to listen to a specific word or phrase, measure the subject’s response to that word (for instance, how long it takes him to repeat it aloud), and then demonstrate how that response is connected to activity in the brain. Trying to measure imagined speech was much more complicated; a random thought could throw off the whole experiment. In fact, it was still unclear where in the brain researchers should even look for the relevant signals.

    Solving this problem would call for a new experimental method, Poeppel realized. He and a postdoctoral student, Xing Tian, decided to take advantage of a powerful imaging technique called magnetoencephalography, or MEG, to do their reconnaissance work. MEG can provide roughly the same level of spatial detail as ECOG but without the need to remove part of a subject’s skull, and it is far more accurate than EEG.

    Poeppel and Tian would guide subjects into a three-ton, beige-paneled room constructed of a special alloy and copper to shield against passing electromagnetic fields. At the center of the room sat a one-ton, six-foot-tall machine resembling a huge hair dryer that contained scanners capable of recording the minute magnetic fields produced by the firing of neurons. After guiding subjects into the device, the researchers would ask them to imagine speaking words like athlete, musician, and lunch. Next they asked them to imagine hearing the words.

    When Poeppel sat down to analyze the results, he noticed something unusual. As a subject imagined hearing words, his auditory cortex lit up the screen in a characteristic pattern of reds and greens. That part was no surprise; previous studies had linked the auditory cortex to imagined sounds. However, when a subject was asked to imagine speaking a word rather than hearing it, the auditory cortex flashed an almost identical red and green pattern.

    Poeppel was initially stumped by the results. “That is really bizarre,” he recalls thinking. “Why should there be an auditory pattern when the subjects didn’t speak and no one around them spoke?” Over time he arrived at an explanation. Scientists had long been aware of an error-correction mechanism in the brain associated with motor commands. When the brain sends a command to the motor cortex to, for instance, reach out and grab a cup of water, it also creates an internal impression, known as an efference copy, of what the resulting movement will look and feel like. That way, the brain can check the muscle output against the intended action and make any necessary corrections.

    Poeppel believed he was looking at an efference copy of speech in the auditory cortex. “When you plan to speak, you activate the hearing part of your brain before you say the word,” he explains. “Your brain is predicting what it will sound like.”
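
    The efference-copy idea can be caricatured in a few lines of code. The toy model below is an illustration for this article, not Poeppel’s model: a motor command produces both an imperfect action and an internal prediction of its outcome, and the mismatch between the two becomes a correction signal.

        # Toy forward model illustrating an efference copy (a caricature,
        # not Poeppel's actual model).
        def act(command: float) -> float:
            """The body executes the command imperfectly (10% undershoot)."""
            return command * 0.9

        def predict(command: float) -> float:
            """The efference copy: the brain's internal guess at the outcome."""
            return command

        command = 5.0
        error = predict(command) - act(command)   # mismatch drives correction
        corrected = command + error               # nudge the next attempt
        print(f"error={error:.2f}, corrected command={corrected:.2f}")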

    The potential significance of this finding was not lost on Poeppel. If the brain held on to a copy of what an imagined thought would sound like if vocalized, it might be possible to capture that neurological record and translate it into intelligible words. As happens so often in this field of research, though, each discovery brought with it a wave of new challenges. Building a thought helmet would require not only identifying that efference copy but also finding a way to isolate it from a mass of brain waves.

    D’Zmura and his team at UC Irvine have spent the past two years taking baby steps in that direction by teaching pattern-recognition programs to search for and recognize specific phrases and words. A machine the size of a MEG scanner would obviously be impractical in a military setting, so the team is testing its techniques using lightweight EEG caps that could eventually be built into a practical thought helmet.
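
    In broad strokes, “teaching a pattern-recognition program” means showing a classifier many labeled recordings of each imagined phrase and letting it learn which voltage patterns separate them. The sketch below is a generic stand-in built on scikit-learn, not the Irvine team’s actual software; the trial dimensions and labels are hypothetical, and random numbers again stand in for real EEG.

        # Generic imagined-phrase classifier; a stand-in, not the UC Irvine code.
        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.model_selection import cross_val_score

        # Hypothetical data: 120 trials of 32 channels x 256 samples, each
        # labeled with one of four imagined commands (0-3).
        epochs = np.random.randn(120, 32, 256)
        labels = np.random.randint(0, 4, size=120)

        # Flatten each trial into one feature vector; real systems use
        # smarter features, such as band power per channel.
        features = epochs.reshape(len(epochs), -1)

        scores = cross_val_score(LinearDiscriminantAnalysis(),
                                 features, labels, cv=5)
        print("mean decoding accuracy:", scores.mean())   # ~chance on noise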

    The caps are comfortable enough that Tom Lappas, a graduate student working with D’Zmura, often volunteers to be a research subject. During one experiment last November, Lappas sat in front of a computer wearing flip-flops, shorts, and a latex EEG cap with 128 gel-soaked electrodes attached to it. Lappas’s face was a mask of determined focus as he stared silently at a screen while military commands blared out of a nearby speaker.

    “Ready Baron go to red now,” a recorded voice intoned, then paused. “Ready Eagle go to red now…Ready Tiger go to green now...” As Lappas concentrated, a computer recorded hundreds of squiggly lines representing Lappas’s brain activity as it was picked up from the surface of his scalp. Somewhere in that mass of data, Lappas hoped, were patterns unique enough to distinguish the sentences from one another.

    With so much data, the problem would not be finding patterns but filtering out the ones that were irrelevant. Something as simple as the blink of an eye creates a tremendous number of squiggles and lines that might throw off the recognition program. To make matters more challenging, Lappas decided at this early stage in the experiment to search for patterns not only in the auditory cortex but in other areas of the brain as well.

    That expanded search added to the data his computer had to crunch through. In the end, the software was able to identify the sentence a test subject was imagining speaking only about 45 percent of the time. The result was hardly up to military standards; an error rate of 55 percent would be disastrous on the battlefield.
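
    Eye blinks are among the worst offenders because they produce voltage swings many times larger than the neural signals of interest. A common first line of defense, sketched below on the same kind of hypothetical data, is simply to throw away any trial whose amplitude exceeds a threshold; real systems also use subtler tools such as independent component analysis.

        # Crude blink rejection by amplitude threshold; illustrative only
        # (real thresholds are set in microvolts, e.g. around 100 uV).
        import numpy as np

        epochs = np.random.randn(120, 32, 256)     # hypothetical trials
        threshold = 5.0                            # in the data's own units

        # Keep only trials whose peak absolute amplitude stays under the
        # threshold; blinks typically blow far past it on frontal channels.
        peaks = np.abs(epochs).max(axis=(1, 2))
        clean = epochs[peaks < threshold]
        print(f"kept {len(clean)} of {len(epochs)} trials")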

    Schmeisser is not distressed by that high error rate. He is confident that synthetic telepathy can and will rapidly improve to the point where it will be useful in combat. “When we first started this, we didn’t know if it could be done,” he says. “That we have gotten this far is wonderful.” Poeppel agrees. “The fact that they could find anything just blows me away, frankly,” he says.

    Schmeisser notes that D’Zmura has already shown that test subjects can type in Morse code by imagining specific vowels as the dots and dashes. Although this exercise does not involve actual language, subjects have achieved an accuracy of close to 100 percent.
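
    The appeal of the Morse scheme is that the classifier only ever has to make a binary call. If one imagined vowel is read as a dot and another as a dash, an ordinary Morse table turns the stream of classifier outputs into text, as in the hypothetical sketch below.

        # Turning imagined-vowel classifications into text via Morse code.
        # The classifier outputs here are hard-coded; a real system would
        # supply '.' or '-' decisions and ' ' at letter boundaries.
        MORSE = {".-": "A", "-...": "B", "-.-.": "C", "...": "S",
                 "---": "O", "...-": "V"}          # abbreviated table

        def decode(symbols):
            letters = "".join(symbols).split(" ")
            return "".join(MORSE.get(code, "?") for code in letters)

        # e.g. imagined 'a' -> dot, imagined 'o' -> dash, pause -> boundary:
        print(decode(["...", " ", "---", " ", "..."]))   # prints "SOS"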

    The next steps in getting a thought helmet to work with actual language will be improving the accuracy of the pattern-recognition programs used by Schalk’s and D’Zmura’s teams and then adding, little by little, to the library of words that these programs can discern. “Whether we can get to fully free-flowing, civilian-type speech, I don’t know. It would be nice. We’re pushing the limits of what we can get, opening the vocabulary as much as we can,” Schmeisser says.

    For some concerned citizens, this research is pushing too far. Among the more paranoid set, the mere fact that the military is trying to create a thought helmet is proof of a conspiracy to subject the masses to mind control. More grounded critics consider the project ethically questionable. Since the Army’s thought helmet project became publicly known, Schmeisser has been deluged with Freedom of Information Act requests from individuals and organizations concerned about privacy issues. Those requests for documentation have consumed countless hours and continue to this day.

    Schalk, for his part, has resolved to keep a low profile. From his experience working with more invasive techniques, he had seen his fair share of controversy in the field, and he anticipated that this project might attract close scrutiny. “All you need to do is say, ‘The U.S. Army funds studies to implant people for mind reading,’ ” he says. “That’s all it takes, and then you’re going to have to do damage control.”

    D’Zmura and the rest of his team, perhaps to their regret, granted interviews about their preliminary research after it was announced in a UC Irvine press release. The negative reaction was immediate. Bizarre e-mail messages began appearing in D’Zmura’s in-box from individuals ranting against the government or expressing concern that the authorities were already monitoring their thoughts. One afternoon, a woman appeared outside D’Zmura’s office complaining of voices in her head and asking for assistance to remove them.

    Should synthetic telepathy make significant progress, the worried voices will surely grow louder. “Once we cross these barriers, we are doing something that has never before been done in human history, which is to get information directly from the brain,” says Emory University bioethicist Paul Root Wolpe, a leading voice in the field of neuroethics. “I don’t have a problem with sticking this helmet on the head of a pilot to allow him to send commands on a plane. The problem comes when you try to get detailed information about what someone is either thinking or saying nonverbally. That’s something else altogether. The skull should remain a realm of absolute privacy. If the right to privacy means anything, it means the right to the contents of my thoughts.”

    Schmeisser says he has been reflecting on this kind of concern “from the beginning.” He dismisses the most extreme type of worry out of hand. “The very nature of the technology and of the human brain,” he maintains, “would prevent any Big Brother type of use.” Even the most sophisticated existing speech-recognition programs can obtain only 95 percent accuracy, and that is after being calibrated and trained by a user to compensate for accent, intonation, and phrasing. Brain waves are “much harder” to get right, Schmeisser notes, because every brain is anatomically different and uniquely shaped by experience.

    Merely calibrating a program to recognize a simple sentence from brain waves would take hours. “If your thoughts wander for just an instant, the computer is completely lost,” Schmeisser says. “So the method is completely ethical. There is no way to coerce users into training the machine if they don’t want to. Any attempt to apply coercion will result in more brain wave disorganization, from stress if nothing else, and produce even worse computer performance.” Despite the easy analogies, synthetic telepathy bears little resemblance to mystical notions of mind reading and mind control. The bottom line, Schmeisser insists, “is that I see no risks whatsoever. Only benefits.”

    Nor does he feel any unease that his funding comes from a military agency eager to put synthetic telepathy to use on the battlefield. The way he sees it, the potential payoff is simply too great.

    “This project is attempting to make the scientific breakthrough that will have application for many things,” Schmeisser says. “If we can get at the black box we call the brain with the reduced dimensionality of speech, then we will have made a beginning to solving fundamental challenges in understanding how the brain works—and, with that, of understanding individuality.”

  • How to see neurons of the deep brain for months at a time

    January 17, 2011 by Editor
    http://www.kurzweilai.net/how-to-see-neurons-of-the-deep-brain-for-...

    A diagram of the experimental setup. (Mark Schnitzer and Nature Medicine)

    Stanford scientists have devised a new method that not only lets them peer deep inside the brain to examine its neurons, but also allows them to continue monitoring for months.

    The technique promises to improve understanding of both the normal biology and diseased states of this hidden tissue.

    Other recent advances in micro-optics had enabled scientists to take a peek at cells of the deep brain, but their observations captured only a momentary snapshot of the microscopic changes that occur over months and years with aging and illness.

    The Stanford development appeared online Jan. 16 in the journal Nature Medicine and will also appear in the February 2011 print edition.

    Scientists study many diseases of the deep brain using mouse models, mice that have been bred or genetically engineered to have diseases similar to human afflictions.

    “Researchers will now be able to study mouse models in these deep areas in a way that wasn’t available before,” said senior author Mark Schnitzer, associate professor of biology and of applied physics.

    Because light can penetrate only the outermost layers of tissue, any region of the brain deeper than about 700 microns (roughly 1/32 of an inch) cannot be reached by traditional microscopy techniques. Recent advances in micro-optics had allowed scientists to peer briefly into deeper living tissue, but it was nearly impossible to return to the same location in the brain, and the tissue of interest was very likely to become damaged or infected.

    With the new method, “Imaging is possible over a very long time without damaging the region of interest,” said Juergen Jung, operations manager of the Schnitzer lab. Tiny glass tubes, about half the width of a grain of rice, are carefully placed in the deep brain of an anaesthetized mouse. Once the tubes are in place, the brain is not exposed to the outside environment, thus preventing infection. When researchers want to examine the cells and their interactions at this site, they insert a tiny optical instrument called a microendoscope inside the glass guide tube. The guide tubes have glass windows at the ends through which scientists can examine the interior of the brain.

    “It’s a bit like looking through a porthole in a submarine,” said Schnitzer.

    The guide tubes allow researchers to return to exactly the same location of the deep brain repeatedly over weeks or months. While techniques like MRI scans could examine the deep brain, “they couldn’t look at individual cells on a microscopic scale,” said Schnitzer. Now, the delicate branches of neurons can be monitored during prolonged experiments.

    To test the technique’s usefulness for investigating brain disease, the researchers looked at a mouse model of glioma, a deadly form of brain cancer. They saw hallmarks of glioma growth in the deep brain that had previously been documented only in superficial tumors (those on or near the surface).

    The severity of glioma tumors depends on their location. “The most aggressive brain tumors arise deep and not superficially,” said Lawrence Recht, professor of neurology and neurological sciences. Why the position of glioma tumors affects their growth rate isn’t understood, but this method would be a way to explore that question, Recht said.

    In addition to continuing their studies of brain disease and the neuroscience of memory, the researchers hope to teach other researchers how to perform the technique.

    Adapted from materials provided by Stanford University

