Worldwide Campaign to stop the Abuse and Torture of Mind Control/DEWs

Mind Reading -- 60 Minutes CBS News video
June 28, 2009 4:50 PM
Neuroscience has learned so much about how we think and the brain activity linked to certain thoughts that it is now possible - on a very basic scale - to read a person's mind. Lesley Stahl reports.
How Technology May Soon "Read" Your Mind

LiveScience Topics: Mind Reading

Mind-machine interfaces can read your mind, and the science is improving. Devices scan the brain and read brain waves with electroencephalography, or EEG, then use a computer to convert thoughts into action. Some mind-reading research has recorded electrical activity generated by the firing of nerve cells in the brain by placing electrodes directly in the brain. These studies could lead to brain implants that would move a prosthetic arm or other assistive devices controlled by a brain-computer interface.
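The EEG pipeline described above (scan, read brain waves, convert to action) can be sketched in a few lines. This is a toy illustration on a synthetic signal with a made-up threshold, not a real device driver; it shows the band-power step that many EEG classifiers start from.

```python
import numpy as np

FS = 256  # sampling rate in Hz, a typical EEG value

def band_power(signal, fs, low, high):
    """Average spectral power of `signal` between `low` and `high` Hz."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    mask = (freqs >= low) & (freqs <= high)
    return spectrum[mask].mean()

def classify(signal, fs=FS, threshold=1e4):
    """Crude rule: strong alpha (8-12 Hz) power -> 'relaxed', else 'active'."""
    return "relaxed" if band_power(signal, fs, 8, 12) > threshold else "active"

# One second of synthetic recording dominated by a 10 Hz alpha rhythm.
t = np.arange(FS) / FS
alpha_wave = 50 * np.sin(2 * np.pi * 10 * t)
state = classify(alpha_wave)  # -> "relaxed"
```

A real brain-computer interface replaces the synthetic wave with amplified electrode recordings and the one-line rule with a trained classifier, but the decode-then-act loop is the same.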


16:09 03/11/2010 © Alex Steffler

Rossiiskaya Gazeta
Mind-reading devices to help screen Russian cops

It reads like science fiction, but it’ll soon be science fact. Special mind-reading devices are to be rolled out across Russia’s revamped police force.


Homeland Security Detects Terrorist Threats by Reading Your Mind
Tuesday, September 23, 2008
By Allison Barrie
Baggage searches are SOOOOOO early-21st century. Homeland Security is now testing the next generation of security screening — a body scanner that can read your mind.

Most preventive screening looks for explosives or metals that pose a threat. But a new system called MALINTENT turns the old school approach on its head. This Orwellian-sounding machine detects the person — not the device — set to wreak havoc and terror.

MALINTENT, the brainchild of the cutting-edge Human Factors division in Homeland Security's directorate for Science and Technology, searches your body for non-verbal cues that predict whether you mean harm to your fellow passengers.

It has a series of sensors and imagers that read your body temperature, heart rate and respiration for unconscious tells invisible to the naked eye — signals terrorists and criminals may display in advance of an attack.

But this is no polygraph test. Subjects do not get hooked up or strapped down for a careful reading; those sensors do all the work without any actual physical contact. It's like an X-ray for bad intentions.

Currently, all the sensors and equipment are packaged inside a mobile screening laboratory about the size of a trailer or large truck bed, and just last week, Homeland Security put it to a field test in Maryland, scanning 144 mostly unwitting human subjects.

While I'd love to give you the full scoop on the unusual experiment, testing is ongoing and full disclosure would compromise future tests.

• Click here for an exclusive look at MALINTENT in action.

But what I can tell you is that the test subjects were average Joes living in the D.C. area who thought they were attending something like a technology expo; in order for the experiment to work effectively and to get the testing subjects to buy in, the cover story had to be convincing.

While the 144 test subjects thought they were merely passing through an entrance way, they actually passed through a series of sensors that screened them for bad intentions.

Homeland Security also selected a group of 23 attendees to be civilian "accomplices" in their test. They were each given a "disruptive device" to carry through the portal — and, unlike the other attendees, were conscious that they were on a mission.

In order to conduct these tests on human subjects, DHS had to meet rigorous safety standards to ensure the screening would not cause any physical or emotional harm.

So here's how it works. When the sensors identify that something is off, they transmit warning data to analysts, who decide whether to flag passengers for further questioning. The next step is micro-facial scanning, which measures minute muscle movements in the face for clues to mood and intention.

Homeland Security has developed a system to recognize, define and measure seven primary emotions and emotional cues that are reflected in contractions of facial muscles. MALINTENT identifies these emotions and relays the information back to a security screener almost in real-time.

This whole security array — the scanners and screeners who make up the mobile lab — is called "Future Attribute Screening Technology" — or FAST — because it is designed to get passengers through security in two to four minutes, and often faster.

If you're rushed or stressed, you may send out signals of anxiety, but FAST isn't fooled. It's already good enough to tell the difference between a harried traveler and a terrorist. Even if you sweat heavily by nature, FAST won't mistake you for a baddie.

"If you focus on looking at the person, you don't have to worry about detecting the device itself," said Bob Burns, MALINTENT's project leader. And while there are devices out there that look at individual cues, a comprehensive screening device like this has never before been put together.

While FAST's batting average is classified, Undersecretary for Science and Technology Adm. Jay Cohen declared the experiment a "home run."

As cold and inhuman as the electric eye may be, DHS says scanners are unbiased and nonjudgmental. "It does not predict who you are and make a judgment, it only provides an assessment in situations," said Burns. "It analyzes you against baseline stats when you walk in the door, it measures reactions and variations when you approach and go through the portal."

But the testing — and the device itself — are not without their problems. This invasive scanner, which catalogues your vital signs for non-medical reasons, seems like an uninvited doctor's exam and raises many privacy issues.

But DHS says this is not Big Brother. Once you are through the FAST portal, your scrutiny is over and records aren't kept. "Your data is dumped," said Burns. "The information is not maintained — it doesn't track who you are."

DHS is now planning an even wider array of screening technology, including an eye scanner next year and pheromone-reading technology by 2010.

The team will also be adding equipment that reads body movements, called "illustrative and emblem cues." According to Burns, this is achievable because people "move in reaction to what they are thinking, more or less based on the context of the situation."

FAST may also incorporate biological, radiological and explosive detection, but for now the primary focus is on identifying and isolating potential human threats.

And because FAST is a mobile screening laboratory, it could be set up at entrances to stadiums, malls and in airports, making it ever more difficult for terrorists to live and work among us.

Burns noted his team's goal is to "restore a sense of freedom." Once MALINTENT is rolled out in airports, it could give us a future where we can once again wander onto planes with super-sized cosmetics and all the bottles of water we can carry — and most importantly without that sense of foreboding that has haunted Americans since Sept. 11.

Allison Barrie, a security and terrorism consultant with the Commission for National Security in the 21st Century, is FOX News' security columnist.



Replies to This Discussion

Russian Brain Scanner that Analyzes Vortex Magnetic Fields
Introduction: What is the easiest way to explain Bioresonance?

Irish mentalist Keith Barry makes major American breakthrough– VIDEOS

Dublin magician's show on Discovery shows how to read minds


Published Wednesday, June 1, 2011, 7:18 AM


Irish mentalist Keith Barry has launched his US TV career with the Discovery Channel. His program “Deception with Keith Barry” tackles mentalism and psychological illusions such as reading people, implanting thoughts, predicting behavior and hacking into the subconscious.


Not only does Barry do all of this but he also shows the audience how.


Barry has shown his audiences in Las Vegas these tricks, but they are rarely applied to real-world situations. This raises the question: could they be used in your career, your relationships or perhaps your finances? In the show, Barry chooses a variety of scenarios ranging from the extreme to the everyday, such as the car showroom. The series essentially looks at what would happen if we could read minds.


The first episode is called “Black Ops”. Barry takes a look at the Cold War when the CIA spent millions studying mind control in order to defeat Soviet spies. In a series of mind experiments he attempts mass hypnosis, programs ordinary people to be spies and even creates a sleeper agent.


Barry is mining the mysteries of the mind, and he has first-hand experience of just what the mind can do. In his on-stage show "Asylum," Barry explained that in 2007 he was in a serious car accident and both his legs were badly damaged. He was cut from the car and at the time was simply glad he had not lost his legs.


Months afterwards, he was told he would never walk properly again. Barry decided to use the power of his mind to fix his legs, focusing intensely on healing and visualising himself walking without a limp. It worked. Today he walks with no hint of an injury.


In 2008 Barry gave a lecture for TED: Ideas worth spreading. His profile describes him as “a technologist, an elite software engineer of the human brain” and it is those techniques that it seems he will display in his upcoming series.


Back in 2010 Barry won the awards of “Best Magician in Las Vegas 2009” and “Mentalist of the Year”. Currently he is preparing for his brand new stage show “8 Deadly Sins” which he will launch in Dublin this July.


His Discovery Channel show “Deception with Keith Barry” will air every Wednesday from June 1st at 10pm.

Scottish Scientists Make ‘Mind-Reading Machine’ Breakthrough


Scottish scientists have come a step closer to creating a “mind-reading machine” that can show mental images.


A team from the University of Glasgow have successfully decoded brain signals related to vision.


Six volunteers were shown images of people’s faces displaying different emotions such as happiness, fear and surprise.


In a series of trials, parts of the images were randomly covered so that, for example, only the eyes or mouth were visible. Participants were then asked to identify the emotion being displayed while electrodes attached to the scalp measured the volunteers’ brainwaves.


The scientists were able to show that brainwaves varied greatly according to which part of the face was being looked at…


“Beta” waves, with a frequency of 12 hertz, carried information about the eyes, while four hertz “theta” waves were linked to the mouth.


Information was also encoded by the phase, or timing, of the brainwave, and less so by its amplitude, or strength.
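The finding that information rides mainly on a brainwave's phase rather than its amplitude is easy to demonstrate numerically. The sketch below uses synthetic signals, not the Glasgow data: two 4 Hz waves with identical amplitude differ only in phase, and a decoder reading phase can still tell them apart.

```python
import numpy as np

FS = 128  # assumed sampling rate in Hz for this toy example

def component(signal, fs, freq):
    """Return (amplitude, phase) of the `freq`-Hz component of `signal`."""
    spectrum = np.fft.rfft(signal)
    bin_idx = int(round(freq * len(signal) / fs))
    coeff = spectrum[bin_idx]
    amplitude = 2 * np.abs(coeff) / len(signal)
    phase = np.angle(coeff)
    return amplitude, phase

t = np.arange(FS) / FS
# Same strength at 4 Hz, different timing: under phase coding these two
# "brainwaves" carry different information even though their amplitude is equal.
wave_a = np.sin(2 * np.pi * 4 * t)
wave_b = np.sin(2 * np.pi * 4 * t + np.pi / 2)  # quarter-cycle shift
amp_a, ph_a = component(wave_a, FS, 4)
amp_b, ph_b = component(wave_b, FS, 4)
```

An amplitude-only decoder sees two identical waves; a phase-aware decoder sees the quarter-cycle difference, which is the distinction the study draws.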


Professor Philippe Schyns, who led the study, said: “It’s a bit like unlocking a scrambled television channel. Before, we could detect the signal but couldn’t watch the content; now we can.


“How the brain encodes the visual information that enables us to recognise faces and scenes has long been a mystery. While we are able to detect EEG activity in certain areas of the brain when particular tasks are performed, we’ve not known what information is being carried in those brainwaves.


“What we have done is to find a way of decoding brainwaves to identify the messages within.”


The research is published in the online journal Public Library of Science Biology.


Prof Schyns said the study revealed how the brain tuned into different brainwave patterns to code different visual features.


“It is a bit like radiowaves coding different radio stations at different frequency bands,” he added. “This work has huge potential in the development of brain-computer interfaces.”

Video: Mind reading system may help us drive better

By Christie Nicholson | July 29, 2011, 11:26 AM PDT

We like to think we have control over our thoughts in real time, instantaneously. The moment we think of doing something is the very same moment the neurons in our brains start firing, creating that intention. But this isn’t how it works.

The time lag between your brain knowing what it’s about to do and you being aware of it is shockingly long, often on the order of seconds. Every intention we have, going to the fridge to get a beer for instance, formed seconds before we were conscious of thinking about it. We are constantly living in the past moments of our brain.

Beyond the existential deep thoughts that might inspire, a group of German researchers are taking advantage of this quirk of nature and applying it to safer driving. They are experimenting with a system that taps into brain signals of drivers, monitors signs of an intention to brake, and then bypasses our slower conscious thought and even slower muscles to press the pedal. Their paper is published today in the Journal of Neural Engineering.

The scientists are using electroencephalography (or EEG)—a cap of electrodes attached to the head—to monitor brain waves. They found that the system can detect an intention to brake 130 milliseconds faster than it would take for that person to push the brake pedal. For highway driving at 60 mph this time gap translates into a distance difference of 12 feet, about the length of a compact car.
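The 12-foot figure follows from simple arithmetic: the distance covered during the 130-millisecond head start at highway speed.

```python
# Unit conversion using the figures from the article; no other assumptions.
MPH_TO_FPS = 5280 / 3600          # one mph expressed in feet per second

speed_fps = 60 * MPH_TO_FPS       # 60 mph -> 88 feet per second
lead_time = 0.130                 # seconds of early warning from the EEG
distance_saved = speed_fps * lead_time
print(round(distance_saved, 1))   # -> 11.4 feet, roughly a compact car length
```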

EEG has been used to monitor fatigue and mental workload in truck drivers and soldiers in the field, but it has never been tested with emergency signaling.

Scientists tested this system with 18 participants in a simulator. With EEG caps on their heads, subjects “drove” a car much like they might in a video game. Scientists also recorded the subjects’ muscle activity with electromyography (or EMG). EMG signals are triggered by muscle contraction and can be used to detect the intention to move before a leg actually moves.

Subjects were instructed to stay within 65 feet of the car in front of them while driving 60 mph on a windy, well-trafficked road. At random moments, the car in front would slam on its brakes, setting a crisis situation in motion. The scientists logged data from both the EEG and the EMG, and recorded the time it took subjects to move from the gas pedal to the brake pedal and to decelerate the car.

The researchers determined which parts of the brain are most sensitive to initiating braking. The initial reaction comes from the occipito-temporal area, which is involved in visual perception, mostly from seeing the red brake lights of the car in front. The scientists also noted a broad reaction within the parietal and occipital areas, corresponding to what they call an “oddball scenario,” something unexpected happening; here, again, the trigger is the sudden braking of the car in front. Activity in the motor cortex is then noted, indicating a plan to move one’s leg, presumably in this case from the gas pedal to the brake.

From this experiment the scientists conclude that it is entirely possible to detect a driver’s intention to brake, and that this intention precedes any observable action. By tapping into that intention signal, an emergency braking system could be triggered that is significantly faster than our own reactions.

Check out this video of one participant in the simulator. At around 46 seconds you’ll see the spike in brain activity in response to seeing the red brake lights flash.

We have to remember, of course, that this is the lab. Real-world driving is much more dynamic and may pose serious challenges to detecting the signal. How accurately the EEG can distinguish between normal and exceptional traffic situations remains to be seen. Already there are many variables that might swamp the neural signal from a brake light: a sharp turn, or the head and body movements of the driver, can obscure the fine neural signal of intention. The electrical activity caused by something as simple as chewing gum, raising an eyebrow or winking can also wreak havoc with an EEG system’s ability to read the neural signals associated with sudden braking.

Another issue is of course the EEG caps themselves. They are uncomfortable, like bathing caps, and they contain gooey gel so the electrical signals flow better. Plus they are just plain unattractive. The scientists assure us that this is improving. Currently there are companies working on dry caps that are much smaller and more comfortable. The researchers are already planning a real-world driving study, so we’ll soon know whether our brains can drive better than we can.


A Bike that Reads Your Mind, Shifts Gears
01 Aug 2011
You won’t have to fiddle with the gears on this bike. A Toyota concept model lets riders shift their gears using only the powers of their mind.

The futuristic bike, called the PXP, is the combined work of Toyota, Saatchi LA, Parlee Cycles and Deeplocal. In addition to the mind-reading helmet—it actually detects electrical brain activity—the bike has a built-in smartphone dock and a carbon-fiber frame.

With the PXP, riders control their gears “without using a single one of their appendages,” instead commanding an electronic shifter with their own brain waves, captured by a helmet “stuffed with neurotransmitters,” the designers wrote.

The bike’s designers said they were inspired by the Toyota Prius’ innovative technology; in concepting the bike, the designers had asked themselves: “What if the Prius were a bicycle?”

Piezotronics produces electrical signals from biomechanical actions (Kurzweil): "Georgia Institute of Technology researchers have demonstrated a new type of piezoelectric resistive switching device in which the write-read access of memory cells is controlled by electromechanical modulation. Operating on flexible substrates, arrays of these devices could provide a new way to interface the mechanical actions of the biological world to conventional electronic circuitry. .... The zinc oxide nanowires are about 0.5 micron in diameter and about 50 microns long. ...."

My PC Needs ESP

Google's Instant Pages and the joys of a computer that can read your mind.



Chrome is reading my mind. When I type a search query in Google's Web browser, it offers me the most likely results right in the address bar. It doesn't just offer suggestions for keywords, as other browsers do, but actual results—when I type Virginia W, there's a link to the Wikipedia entry on Virginia Woolf right there in the drop-down list. Google Instant, the search engine's system to update Google's results as you type, is also baked into Chrome. As I type Virginia, Google results load up with each character I press. I don't even have to press Enter.

Chrome has had these features for a while, but now it's started doing something even more amazing. When I type a search query, it "preloads" the first Google result into its memory. Say I'm looking for that old Slate article in which I called Chrome "the best Web browser on the planet." I type manjoo chrome slate into Google, and the article I want appears at the top of the page. If I clicked on that article in Firefox or Safari, the Slate page would take four or five seconds, maybe more, to load up. But because Chrome knows that I'm likely to click on that first link, it's already downloaded all the content from the Slate page. As soon as I click, the page appears instantly.
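The idea behind this preloading can be sketched as predict-and-cache: fetch the page the user will most likely click before the click happens. The toy model below (simulated network, invented page store) is not Chrome's actual prerendering machinery, which renders a full hidden page rather than caching text.

```python
import time

PAGES = {"slate.com/manjoo-chrome": "article text..."}  # stand-in for the web
cache = {}

def slow_fetch(url):
    """Pretend network round trip."""
    time.sleep(0.05)
    return PAGES[url]

def prefetch(predicted_url):
    """Called when results render, before any click, to warm the cache."""
    cache[predicted_url] = slow_fetch(predicted_url)

def click(url):
    """Serve instantly from cache if the prediction was right; else fetch."""
    if url in cache:
        return cache.pop(url)
    return slow_fetch(url)

prefetch("slate.com/manjoo-chrome")      # browser guesses the top result
start = time.perf_counter()
page = click("slate.com/manjoo-chrome")  # resolves from local cache
elapsed = time.perf_counter() - start    # near zero instead of ~50 ms
```

The trade-off is the classic one for speculation: a correct guess makes the click feel instant, while a wrong guess wastes the bandwidth spent warming the cache.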

Google announced this feature, called Instant Pages, in June, and it launched it in all versions of Chrome this week. Here's a video that demonstrates how it works:

Taken together, all these mind-reading features make for an incredible experience, one in which I hop from page to page as if I'm on speed. Once you get used to Chrome's manic pace, you begin to feel lost without it. After using Instant Pages for a few weeks, I get annoyed when I switch to other browsers—all those precious seconds, wasted! I feel particularly pained when I browse the Web on my phone—preloading pages would be especially useful on these devices, which have slower data connections to begin with. Google has said that it plans to bring Instant Pages to Android. I'm hoping Apple, Microsoft, and HP copy the feature for their devices, too.

But I'm not just tickled by Instant Pages as a feature. I also like the philosophy behind it—the idea that my software is analyzing what I do and adjusting its behavior accordingly. If Chrome can read my mind—if it can offer me search results before I've finished typing my query, and load up Web pages before I even click on them—why doesn't every other app?

I spend half my waking life using computers, and most of the time I do lots and lots of repetitive tasks. I'm sure you have a routine, too. Every morning, you might load the same few applications, or open the same few documents, and arrange your windows in the same way. When you open a new spreadsheet document, you might create the same few table headings, and when you start a Word document, you might choose the same font and paragraph settings. Or look at iTunes—you may have tens of thousands of songs, but there's a good chance that there are just a few dozen you listen to again and again, and perhaps there are some that you're more likely to listen to on Fridays than on Mondays.

Perhaps you're not bothered by all these redundancies. Loading up programs and arranging windows is just part of the routine of modern work. It's also true that most software offers ways for you to customize the settings to avoid repeating yourself—you can decide which programs load up when you start your computer, for example. But often that's a lot of extra work. Why should I have to tell my computer what to do? After all, it's watching me all the time. Why can't it start anticipating my needs rather than simply waiting for my commands?

This is the kind of thing I'm imagining: Once or twice a month, I work on my finances in an Excel document. Along with the spreadsheet, I also load up the websites of my bank, my bill pay service, and, and then I arrange the windows across my two monitors in a way that lets me see most of the important stuff in one glance. The whole thing takes about a minute and a half to do. It's a bit of a pain. An intelligent operating system would notice this pattern, and it would step in to help me out. After loading up the spreadsheet, it might bring up an alert saying, "Hey, you've done this before! Click here and I'll load up everything else you need." You click, your windows come up and get arranged on the screen, and voila, you've escaped Groundhog Day.

There are several other things that my computer could do for me. My online calendar keeps track of when I've got to interview someone over the phone. This usually involves calling someone via Skype, and typing up notes in Textpad. Instead of just telling me I've got an appointment, then, the computer could load up and arrange the necessary apps on the screen. Or how about helping me out with e-mail? Every couple days, someone asks me for my postal address—why can't my machine notice these messages and offer to insert my address into my reply? And then there's travel: Last month I booked a flight online. The computer was right there watching me. It knows the date I'm flying. Shouldn't it offer to check me in to my flight?

You might worry that the computer's constant suggestions could get annoying. They could—there's a fine line between helpfulness and obsequiousness. (Remember Clippy?) But I'd expect the machine to skirt that line by learning my preferences. I'd want it to notice when I decline its suggestions, so that it doesn't keep offering to do the same thing over and over again.

Is this sort of intelligence plausible? Of all the tech giants, Google seems the most interested in using data mining to make predictions about user behavior. Beyond its search engine and browser, there are mind-reading tricks in Gmail, too. (Type in one e-mail recipient, and it offers suggestions for others whom you typically e-mail at the same....) Since Google now makes two operating systems—Android and Chrome OS—it's likely that we'll see this philosophy extend across entire devices soon. The real coup, though, would be if Microsoft began building intelligence into its operating system. There are more than a billion Windows users worldwide, and we do the same damn things every day. Steve Ballmer, we need your help.

Become a fan of Slate on Facebook. Follow us on Twitter.


Scientists use computer to 'read' human thoughts

After hooking up a computer to human brains, scientists were able to program the computer to "read" the thoughts of disabled patients, thereby enabling them to control the cursor on the screen.

  • A computer monitor showing the NeuroSys project, a language-based, brain-computer interface, is displayed during an Intel Corp. open house in New York.

By Mary Altaffer, AP


"We have been fundamentally interested in creating a brain-computer interface that could help people with severe disabilities interact with the world," said Dr. Eric C. Leuthardt, lead author of a paper describing the findings in the April 7 issue of the Journal of Neural Engineering.

People with spinal cord injury, paralysis, who have had a stroke in the part of the brain that controls speech, and even amputees who are unable to operate a keyboard or mouse may benefit from this, said Justin C. Sanchez, director of the Neuroprosthetics Research Group at the University of Miami Miller School of Medicine.

"Right now, people who have paralysis will use 'intermediate steps' to control things, like a puffer tube (that you breathe into) or a joystick," Sanchez said. "Interfacing directly with the brain opens up the possibilities. They'd be able to do more things and do things more naturally. They could think about doing something and express it immediately in a seamless way," he explained.

"People have a disconnect between the brain and the body," he added. "What we're trying to deliver back to them is an engineered connection between the brain and the body."

This is, in a sense, a "neural" or "speech prosthetic," in much the same way people have artificial arms or legs.

These researchers looked at four patients, aged 36 to 48, all with intractable epilepsy who were already undergoing placement of an electrode in their brain to determine where the seizures were originating.

The electrodes were then used to connect the brain to the computer.

Then, with the electrodes still in the brain, the researchers ran very simple speech tests using, for example, four simple word sounds. They looked at the electrical "signatures" of each word or sound as it was being spoken or merely thought about, to see whether the signatures could be distinguished from each other.

They could, and so Leuthardt and his team then programmed the signatures into the computer to correspond with certain instructions, such as moving the cursor left or right.

When the person said or thought the word or sound, they were basically able to manipulate the computer with their mind or speech with 68% to 91% accuracy within 15 minutes.
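The "signature to instruction" mapping the article describes can be sketched as a lookup table. The signatures and commands below are hypothetical stand-ins; the real system classifies raw electrode recordings before this step.

```python
# Decoded "signatures" (assumed already classified) mapped to cursor moves.
COMMANDS = {
    "oo": (-1, 0),  # thinking "oo" nudges the cursor left
    "ee": (+1, 0),  # thinking "ee" nudges it right
}

def apply_signature(cursor, signature):
    """Update an (x, y) cursor position from one decoded signature."""
    dx, dy = COMMANDS.get(signature, (0, 0))  # unknown thought: do nothing
    return (cursor[0] + dx, cursor[1] + dy)

pos = (0, 0)
for decoded in ["ee", "ee", "oo"]:  # a stream of classifier outputs
    pos = apply_signature(pos, decoded)
# pos is now (1, 0)
```

The reported 68% to 91% accuracy applies to the classification step that produces the stream; once a signature is decoded, dispatching it to an instruction is trivial.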

"We're starting to parse apart the different components of human speech and we're excited because speech is very rich. There's a lot of information we can convey with that," said Leuthardt, who is assistant professor of neurosurgery and of biomedical engineering at Washington University in St. Louis.

Similar brain-computer interfaces have been used to restore the sight of one patient and stimulate limb movement in others.

The scientists envision one day having tiny, permanent implants in people's brains.

And not just the brains of disabled people.

"It would be a way to interface with your computer in a much more rich relationship than just a keyboard and a mouse," Sanchez said.

Fretting about passwords may soon be as quaint as fretting about petticoats. Technical breakthroughs in neuroscience have allowed machines to peer so far into the mind that they may soon be able to photograph our dreams.

Part Two of The Current

End of Privacy - Michio Kaku

We started this segment with the sound of an MRI machine. Magnetic resonance imaging is now a textbook medical procedure that many Canadians undergo everyday. It can indicate to doctors if a patient has a brain tumor or suffered a stroke.

And MRI research is just getting started. It's leading the way in mind reading technology, potentially allowing others access to your deepest thoughts. It doesn't require a lot of deep thinking to wonder where a technology like that could lead. What happens when the boundaries of your own body aren't enough to stop intruders?

To discuss how science may soon change the nature of privacy, we were joined by Michio Kaku, a professor of theoretical physics at the City University of New York. His new book is called Physics of the Future: How Science will Shape Human Destiny and Our.... He was in New York City.

End of Privacy - Steve Laken

Eavesdropping on another brain, foolproof lie detection, exposing inner secrets, perhaps even changing a mind. Those things are not yet technically possible, but you'll likely be surprised how tantalizingly close they are.

To explain how close, we were joined by Dr. Steve Laken, president and founder of Cephos Corp. He was in Boston.

End of Privacy - Annabelle Belcher

There are obvious, significant ethical concerns surrounding these kinds of technologies. Annabelle Belcher is a post-doctoral researcher at the National Institute on Drug Abuse who spent a year on the MacArthur Foundation Law and Neuroscience Project. We reached Annabelle Belcher in Baltimore.

Related Links:

Last Word - The Help

We've been talking about machines that may soon get into our heads to reveal our thoughts. That, of course, has been the job of writers. A film opened this week about a young white writer trying to get into the heads of the black domestic workers of Mississippi. The Help is set in the early 1960s when black people offering opinions about white society wasn't controversial -- it was suicidal. Some reviewers say the film is just the old story of a white do-gooder. We ended the program today with some thoughts from a critic, and a defence from actress Viola Davis.

We're going to destroy humankind with this stuff. This reminds me of Jurassic Park when the dinosaurs break out and eat people, except it's technology that will get out of control, not dinosaurs, obviously! I can't imagine what life will be like in the next few generations. Scary, I bet. Look at what we live through. Someone thinks making this technology is great, and people will claim it is for medical breakthroughs, but medical breakthroughs won't matter when someone turns millions of people into brain-controlled zombies or some other unimaginable horror. Hitler will seem like a primitive joke when this stuff hits the public.

Brain-reading devices could kill off keyboard

'Mind-reading' from scans identifies certain thoughts with certain words

The QWERTY keyboard has dominated computer typing for more than 40 years, but a new breakthrough that translates human thought into digital text may spell the beginning of the end for manual word processing. A first step toward such mind-reading has come from using brain scans to identify certain thoughts with certain words.

The fMRI brain scans showed certain patterns of human brain activity sparked by thinking about physical objects, such as a horse or a house. Researchers also used the brain scans to identify brain activity shared by words related to certain topics — thinking about "eye" or "foot" showed patterns similar to those of other words related to body parts.

"The basic idea is that whatever subject is on someone's mind — not just topics or concepts, but also emotions, plans or socially oriented thoughts — is ultimately reflected in the pattern of activity across all areas of his or her brain," said Matthew Botvinick, a psychologist at Princeton University's Neuroscience Institute.

Brain-reading devices would likely first help paralyzed people such as physicist Stephen Hawking, but such applications are still years away, Botvinick cautioned. Brain-scanning technology would also have to become far more portable before ordinary people could hope to free their hands from typing.

Yet Botvinick envisioned a future where such technology could translate any mental content about not just objects, but also people, actions, abstract concepts and relationships.

One existing technology allows patients suffering from complete paralysis — known as locked-in syndrome — to use their eyes to select one letter at a time to form words. Another lab prototype allows patients to make synthesized voices by using their thoughts to create certain vowel sounds, even if they can't yet form coherent words. But truly direct thought-to-word translation remains out of reach.

That's where the current work comes into play. Botvinick had first worked with Francisco Pereira, a Princeton postdoctoral researcher, and Greg Detre, a researcher who obtained his Ph.D. from Princeton, on using brain-activity patterns to reconstruct images that volunteers viewed during a brain scan. But the research soon inspired them to try expressing certain elements in words rather than pictures.

First, they used a Princeton-developed computer program to come up with 40 possible topics based on Wikipedia articles that contained words associated with those topics. They then created a color-coded system to show the probability that certain words were related to an object a volunteer thought about while reading a Wikipedia article during a brain scan.

In one case, a more red word showed that a person was more likely to associate it with "cow." A bright blue word suggested a strong connection to "carrot," and black or gray words had no specific association.

There are still limits. The researchers can tell if participants had thoughts of vegetables, but can't distinguish between "carrot" and "celery." They hope to make their method sensitive to such details in the future.
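The topic-matching step described above — comparing a new brain-activity pattern against the activity each topic is known to evoke — can be sketched as a simple nearest-centroid classifier. Everything below is hypothetical: the toy vectors stand in for fMRI voxel patterns, and the topic names and numbers are made up for illustration, not taken from the Princeton study.

```python
import math

# Toy per-topic "centroid" activity patterns (4-dimensional stand-ins
# for the averaged fMRI patterns a topic evokes; values are invented).
topic_centroids = {
    "body parts": [0.9, 0.1, 0.2, 0.7],
    "vegetables": [0.2, 0.8, 0.6, 0.1],
    "buildings":  [0.1, 0.3, 0.9, 0.4],
}

def cosine(a, b):
    """Cosine similarity between two activity vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def decode_topic(scan):
    """Rank topics by how similar their centroid is to the scanned pattern."""
    scores = {topic: cosine(scan, centroid)
              for topic, centroid in topic_centroids.items()}
    return sorted(scores, key=scores.get, reverse=True)

# A pattern resembling the "body parts" centroid should rank that topic first.
ranking = decode_topic([0.85, 0.15, 0.25, 0.65])
```

The real work used far higher-dimensional voxel data and a Wikipedia-derived topic model rather than hand-picked centroids, but the core idea — match a measured pattern to the closest known topic pattern — is the same.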


Thursday, 01 September 2011 10:17

Scientists can print out your thoughts

Written by Nick Farrell
Should not take long

Boffins at Princeton University are getting close to working out a way to get a hard copy of your thoughts. Princeton scientists have shown that it is possible to generate text about the mental content seen in brain images.

Princeton boffins have for the first time matched images of brain activity with categories of words related to the concepts a person is thinking about. The results could lead to a better understanding of how people consider meaning and context when reading or thinking. It is still a long way from actual mind reading. Besides, most blokes are supposed to think about sex every 30 seconds, which would prevent the brain from coming up with a coherent sex sentence. Sorry, I mean sentence. Boobies.

According to Frontiers in Human Neuroscience, which we get for the spot-the-brain-cell competition, functional magnetic resonance imaging (fMRI) can be used to identify areas of the brain activated when study participants thought about physical objects such as a carrot, a horse or a house. The researchers then generated a list of topics related to those objects and used the fMRI images to determine the brain activity that words within each topic shared. For instance, thoughts about "eye" and "foot" produced neural stirrings similar to those of other words related to body parts.

Once the boffins knew the brain activity a topic sparked, they were able to use fMRI images alone to predict the subjects and words a person likely thought about during the scan.

