Mind Reading -- 60 minutes CBS News video
June 28, 2009 4:50 PM
Neuroscience has learned so much about how we think and the brain activity linked to certain thoughts that it is now possible - on a very basic scale - to read a person's mind. Lesley Stahl reports.
How Technology May Soon "Read" Your Mind
Read more:
http://www.cbsnews.com/stories/1998/07/08/60minutes/main4694713.shtml?tag=cbsnewsSidebarArea.0
http://www.foxnews.com/story/0,2933,426485,00.html
LiveScience Topics: Mind Reading
Mind-machine interfaces can read your mind, and the science is improving. Devices scan the brain and read brain waves with electroencephalography, or EEG, then use a computer to convert thoughts into action. Some mind-reading research has recorded electrical activity generated by the firing of nerve cells in the brain by placing electrodes directly in the brain. These studies could lead to brain implants that would move a prosthetic arm or other assistive devices controlled by a brain-computer interface.
http://utdev.livescience.com/topic/mind-reading
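To make the pipeline above concrete, here is a minimal Python sketch of the kind of processing an EEG-based brain-computer interface performs: filter the raw signal into a frequency band of interest, compute its power, and map the result to a command. The sampling rate, band edges, and 0.1 threshold are assumptions chosen for illustration, not parameters of any device described on this page.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 256  # assumed sampling rate (Hz)

def band_power(eeg, low, high, fs=FS):
    """Bandpass-filter one EEG channel and return its mean power."""
    b, a = butter(4, [low / (fs / 2), high / (fs / 2)], btype="band")
    return np.mean(filtfilt(b, a, eeg) ** 2)

def decode_command(eeg):
    """Toy rule: a drop in mu-band (8-12 Hz) power relative to broadband
    power is a classic cue for imagined movement in BCI research.
    The 0.1 cutoff is invented for this example."""
    ratio = band_power(eeg, 8.0, 12.0) / band_power(eeg, 1.0, 30.0)
    return "move prosthesis" if ratio < 0.1 else "rest"

# Usage with one second of simulated signal:
rng = np.random.default_rng(0)
print(decode_command(rng.standard_normal(FS)))
```

Real systems train per-user classifiers over many channels and sessions; the single-channel threshold here only illustrates the signal path from scalp voltage to machine action.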
16:09 03/11/2010 © Alex Steffler
Rossiiskaya Gazeta
Mind-reading devices to help screen Russian cops
It reads like science fiction, but it’ll soon be science fact. Special mind-reading devices are to be rolled out across Russia’s revamped police force.
http://en.rian.ru/papers/20101103/161202851.html
Homeland Security Detects Terrorist Threats by Reading Your Mind
Tuesday, September 23, 2008
By Allison Barrie
Baggage searches are SOOOOOO early-21st century. Homeland Security is now testing the next generation of security screening — a body scanner that can read your mind.
Most preventive screening looks for explosives or metals that pose a threat. But a new system called MALINTENT turns the old school approach on its head. This Orwellian-sounding machine detects the person — not the device — set to wreak havoc and terror.
MALINTENT, the brainchild of the cutting-edge Human Factors division in Homeland Security's directorate for Science and Technology, searches your body for non-verbal cues that predict whether you mean harm to your fellow passengers.
It has a series of sensors and imagers that read your body temperature, heart rate and respiration for unconscious tells invisible to the naked eye — signals terrorists and criminals may display in advance of an attack.
But this is no polygraph test. Subjects do not get hooked up or strapped down for a careful reading; those sensors do all the work without any actual physical contact. It's like an X-ray for bad intentions.
Currently, all the sensors and equipment are packaged inside a mobile screening laboratory about the size of a trailer or large truck bed, and just last week, Homeland Security put it to a field test in Maryland, scanning 144 mostly unwitting human subjects.
While I'd love to give you the full scoop on the unusual experiment, testing is ongoing and full disclosure would compromise future tests.
But what I can tell you is that the test subjects were average Joes living in the D.C. area who thought they were attending something like a technology expo; in order for the experiment to work effectively and to get the testing subjects to buy in, the cover story had to be convincing.
While the 144 test subjects thought they were merely passing through an entrance way, they actually passed through a series of sensors that screened them for bad intentions.
Homeland Security also selected a group of 23 attendees to be civilian "accomplices" in their test. They were each given a "disruptive device" to carry through the portal — and, unlike the other attendees, were conscious that they were on a mission.
In order to conduct these tests on human subjects, DHS had to meet rigorous safety standards to ensure the screening would not cause any physical or emotional harm.
So here's how it works. When the sensors identify that something is off, they transmit warning data to analysts, who decide whether to flag passengers for further questioning. The next step is micro-facial scanning: measuring minute muscle movements in the face for clues to mood and intention.
Homeland Security has developed a system to recognize, define and measure seven primary emotions and emotional cues that are reflected in contractions of facial muscles. MALINTENT identifies these emotions and relays the information back to a security screener almost in real-time.
This whole security array — the scanners and screeners who make up the mobile lab — is called "Future Attribute Screening Technology" — or FAST — because it is designed to get passengers through security in two to four minutes, and often faster.
If you're rushed or stressed, you may send out signals of anxiety, but FAST isn't fooled. It's already good enough to tell the difference between a harried traveler and a terrorist. Even if you sweat heavily by nature, FAST won't mistake you for a baddie.
"If you focus on looking at the person, you don't have to worry about detecting the device itself," said Bob Burns, MALINTENT's project leader. And while there are devices out there that look at individual cues, a comprehensive screening device like this has never before been put together.
While FAST's batting average is classified, Undersecretary for Science and Technology Adm. Jay Cohen declared the experiment a "home run."
As cold and inhuman as the electric eye may be, DHS says scanners are unbiased and nonjudgmental. "It does not predict who you are and make a judgment, it only provides an assessment in situations," said Burns. "It analyzes you against baseline stats when you walk in the door, it measures reactions and variations when you approach and go through the portal."
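Burns's walk-in-the-door description amounts to per-subject anomaly detection: record a baseline, then flag large deviations at the portal. Here is a minimal sketch of that logic; the signal names, readings, and z-score cutoff are invented for illustration and are not FAST's actual parameters, which remain classified.

```python
from statistics import mean, stdev

def flag_subject(baseline, portal, z_cutoff=2.5):
    """Compare each vital sign at the portal against the subject's own
    walk-in baseline and flag any signal that deviates sharply.
    baseline: signal name -> list of readings taken at the door
    portal:   signal name -> single reading taken at the portal"""
    flags = {}
    for signal, readings in baseline.items():
        mu, sigma = mean(readings), stdev(readings)
        z = (portal[signal] - mu) / sigma if sigma else 0.0
        if abs(z) > z_cutoff:
            flags[signal] = round(z, 2)
    return flags  # an empty dict is a "clean result"

baseline = {"heart_rate": [72, 74, 71, 73], "resp_rate": [14, 15, 14, 16]}
portal = {"heart_rate": 96, "resp_rate": 15}
print(flag_subject(baseline, portal))  # flags the elevated heart rate
```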
But the testing — and the device itself — are not without their problems. This invasive scanner, which catalogues your vital signs for non-medical reasons, seems like an uninvited doctor's exam and raises many privacy issues.
But DHS says this is not Big Brother. Once you are through the FAST portal, your scrutiny is over and records aren't kept. "Your data is dumped," said Burns. "The information is not maintained — it doesn't track who you are."
DHS is now planning an even wider array of screening technology, including an eye scanner next year and pheromone-reading technology by 2010.
The team will also be adding equipment that reads body movements, called "illustrative and emblem cues." According to Burns, this is achievable because people "move in reaction to what they are thinking, more or less based on the context of the situation."
FAST may also incorporate biological, radiological and explosive detection, but for now the primary focus is on identifying and isolating potential human threats.
And because FAST is a mobile screening laboratory, it could be set up at entrances to stadiums, malls and in airports, making it ever more difficult for terrorists to live and work among us.
Burns noted his team's goal is to "restore a sense of freedom." Once MALINTENT is rolled out in airports, it could give us a future where we can once again wander onto planes with super-sized cosmetics and all the bottles of water we can carry — and most importantly without that sense of foreboding that has haunted Americans since Sept. 11.
Allison Barrie, a security and terrorism consultant with the Commission for National Security in the 21st Century, is FOX News' security columnist.
Please go to LAST PAGE OF "Replies to this Discussion" to read NEWEST Information
Replies
Mind reading machine
Scientists introduce a machine they claim can read human intentions. CNN's Frederik Pleitgen reports. (May 18 2007)
http://edition.cnn.com/video/#/video/tech/2007/05/18/pleitgen.germa...
New non-invasive sensor can detect brainwaves remotely
24 October 2002
http://www.sussex.ac.uk/press_office/media/media260.shtml
Scientists have developed a remarkable sensor that can record brainwaves without the need for electrodes to be inserted into the brain or even for them to be placed on the scalp.
Conventional electroencephalograms (EEGs) monitor electrical activity in the brain with electrodes placed either on the scalp (involving hair removal and skin abrasion) or inserted directly into the brain with needles. Now a non-invasive form of EEG has been devised by Professor Terry Clark and his colleagues in the Centre for Physical Electronics at the University of Sussex.
Instead of measuring charge flow through an electrode (with attendant distortions, in the case of scalp electrodes) the new system measures electric fields remotely, an advance made possible by new developments in sensor technology. Professor Clark says: "It's a new age as far as sensing the electrical dynamics of the body is concerned."
The Sussex researchers believe their new sensor will instigate major advances in the collection and display of electrical information from the brain, especially in the study of drowsiness and the human-machine interface.
"The possibilities for the future are boundless," says Professor Clark. "The advantages offered by these sensors compared with the currently used contact electrodes may act to stimulate new developments in multichannel EEG monitoring and in real-time electrical imaging of the brain."
"By picking up brain signals non-invasively, we could find ourselves controlling machinery with our thoughts alone: a marriage of mind and machine."
The same group of scientists has already made remote-sensing ECG units as well, which can detect heartbeats with no connections at all.
Notes for editors
The research is published in Applied Physics Letters of 21 October: C. J. Harland, T. D. Clark and R. J. Prance, 'Remote detection of human electroencephalograms using ultrahigh input impedance electric potential sensors'. See http://ojps.aip.org/dbt/dbt.jsp?KEY=APPLAB&Volume=81&Issue=17 .
For an image of Dr Harland's brain taken using the new sensor, see http://www.aip.org/mgr/png/2002/166.htm .
For information about the Centre for Physical Electronics at the University of Sussex, see http://www.sussex.ac.uk/Units/pei .
a form of magnetic resonance imaging that scans the brain and assesses/judges based on heat data in different brain locations.
article "Whatever Happened To...Mind Control"
http://74.125.95.132/search?q=cache:FLerkf1CFtMJ:discovermagazine.c...
By Alexis Madrigal March 16, 2009 5:41 pm
http://www.wired.com/wiredscience/2009/03/noliemri/
Defense attorneys are for the first time submitting a controversial neurological lie-detection test as evidence in U.S. court.
In an upcoming juvenile-sex-abuse case in San Diego, the defense is hoping to get an fMRI scan, which shows brain activity based on oxygen levels, admitted to prove the abuse didn’t happen.
The technology is used widely in brain research, but hasn’t been fully tested as a lie-detection method. To be admitted into California court, any technique has to be generally accepted within the scientific community.
The company that did the brain scan, No Lie MRI, claims its test is over 90 percent accurate, but some scientists and lawyers are skeptical.
"The studies so far have been very interesting. I think they deserve further research. But the technology is very new, with very little research support, and no studies done in realistic situations," Hank Greely, the head of the Center for Law and the Biosciences at Stanford, wrote in an e-mail to Wired.com.
Lie detection has tantalized lawyers since before the polygraph was invented in 1921, but the accuracy of the tests has always been in question. Greely noted that American courts and scientists have "85 years of experience with the polygraph" and a wealth of papers that have tried to describe its accuracy. Yet polygraph results aren't generally admissible in court, except in New Mexico.
Other attempts to spot deception using different brain signals continue, such as the EEG-based technique developed in India, where it has been used as evidence in court. And last year, attorneys tried to use fMRI evidence for chronic pain in a workers’ compensation claim, but the case was settled out of court. The San Diego case will be the first time fMRI lie-detection evidence, if admitted, is used in a U.S. court.
According to Emily Murphy, a behavioral neuroscientist at the Stanford Center for Law and the Biosciences who first reported on the fMRI evidence, the case is a child protection hearing to determine if the minor should stay in the home of the custodial parent accused of sexual abuse.
Apparently, the accused parent hired No Lie MRI, headquartered in San Diego with a testing facility in Tarzana, California, to do a brain scan. The company’s report says fMRI tests show the defendant’s claim of innocence is not a lie.
The company declined to be interviewed for this story, but its founder and CEO, Joel Huizenga, spoke to Wired.com in September about the technology.
"This is the first time in human history that anybody has been able to tell if someone else is lying," he said.
Though the company’s scientific board includes fMRI experts such as Christos Davatzikos, a radiologist at the University of Pennsylvania, some outside scientists and bioethicists question the reliability of the tests.
"Having studied all the published papers on fMRI-based lie detection, I personally wouldn’t put any weight on it in any individual case. We just don’t know enough about its accuracy in realistic situations," Greely said.
Laboratory studies using fMRI, which measures blood-oxygen levels in the brain, have suggested that when someone lies, the brain sends more blood to the ventrolateral area of the prefrontal cortex. In a very small number of studies, researchers have identified lying in study subjects with accuracy ranging from 76 percent to over 90 percent. But some scientists and lawyers like Greely doubt that those results will prove replicable outside the lab setting, and others say it just isn’t ready yet.
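In outline, those laboratory studies are a supervised-classification problem: per-trial activation in regions such as the ventrolateral prefrontal cortex becomes a feature vector, and truth/lie labels are the target. The toy sketch below shows the shape of such a pipeline on simulated data; the effect size and the resulting accuracy are fabricated and say nothing about real fMRI performance.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n = 100  # simulated trials

labels = rng.integers(0, 2, n)               # 0 = truth, 1 = lie
# Mean BOLD activation per trial in two regions of interest; "lie"
# trials get a small invented boost in the prefrontal region.
vlpfc = rng.normal(0, 1, n) + 0.8 * labels   # ventrolateral prefrontal
control = rng.normal(0, 1, n)                # unrelated control region
X = np.column_stack([vlpfc, control])

clf = LogisticRegression()
acc = cross_val_score(clf, X, labels, cv=5).mean()
print(f"cross-validated accuracy: {acc:.0%}")  # chance is about 50%
```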
"It’s certainly something that is going to evolve and continue to get better and at some point, it will be ready for prime time. I’m just not sure it’s really there right now," said John Vanmeter, a neurologist at Georgetown’s Center for Functional and Molecular Imaging. "On the other hand, maybe it’s good that it’s going to start getting tested in the court system. It’s really been just a theoretical thing until now."
No Lie MRI licensed its technology from psychiatrist Daniel Langleben of the University of Pennsylvania. Langleben, like the company, declined to be interviewed for this article, but offered a recent editorial he co-authored in the Journal of the American Academy of Psychiatry and the Law on the "future of forensic functional brain imaging."
From the editorial, it’s clear that Langleben is a bit uneasy that his work has been commercially applied. He draws a clear distinction between "deception researchers" like himself and "the merchants of fMRI-based lie detection" and describes the "uneasy alliances between this industry and academia, brokered by university technology-commercialization departments."
Langleben has pushed for large-scale trials to determine the efficacy of fMRI-based deception-spotting. But in an interview conducted in late 2007, he doubted whether No Lie MRI and its competitor, Cephos, had the resources to conduct the type of trials he wants.
"We need to run clinical trials with 200 to 300 people, so we can say, ‘This is the accuracy of this test,’" Langleben told Wired.com. "But only two or three companies are trying to develop the technology. Do those companies have deep pockets? No. Do clinical trials cost a lot? Yes."
In September, Huizenga said the company was trying to get a grant for a study on a large group of people. "To date there really has been no study that has tried to optimize fMRI for lie detection," he said.
But even if the science behind a technology isn’t fully established, Brooklyn Law School’s Edward Cheng, who studies scientific evidence in legal proceedings, said it might still be appropriate to use it in the courtroom.
"Technology doesn’t necessarily have to be bulletproof before it can come in, in court," Cheng.
He questioned whether society’s traditional method of lie detection, inspection by human beings, is any more reliable than the new technology.
"It’s not clear whether or not a somewhat reliable but foolproof fMRI machine is any worse than having a jury look at a witness," Cheng said. "It’s always important to think about what the baseline is. If you want the status quo, fine, but in this case, the status quo might not be all that good."
But the question of whether Cheng’s fMRI can be "somewhat reliable but foolproof" remains open.
Ed Vul, an fMRI researcher at the Kanwisher Lab at MIT, said that it was simply too easy for a suspect to make fMRI data of any type unusable.
"I don’t think it can be either reliable or practical. It is very easy to corrupt fMRI data," Vul said. "The biggest difficulty is that it’s very easy to make fMRI data unusable by moving a little, holding your breath, or even thinking about a bunch of random stuff."
A trained defendant might even be able to introduce bias into the fMRI data. In comparison with traditional lie-detection methods, fMRI appears more susceptible to gaming.
"So far as I can tell, there are many more reliable ways to corrupt data from an MRI machine than a classic polygraph machine," Vul said.
Elizabeth Phelps, a neuroscientist at New York University, agreed there is little evidence that fMRI is more reliable than previous lie-detection methods.
"When you build a model based on people in the laboratory, it may or may not be that applicable to someone who has practiced their lie over and over, or someone who has been accused of something," Phelps said. "I don’t think that we have any standard of evidence that this data is going to be reliable in the way that the courts should be admitting."
All these theoretical considerations will be put to the test for the first time in a San Diego courtroom soon. Stanford’s Murphy reported that the admissibility of the evidence in this particular case could rest on which scientific experts are allowed to comment on the evidence.
"The defense plans to claim fMRI-based lie detection (or “truth verification”) is accurate and generally accepted within the relevant scientific community in part by narrowly defining the relevant community as only those who research and develop fMRI-based lie detection," she wrote.
Murphy says that the relevant scientific community should be much larger, including a broader swath of neuroscientists, statisticians, and memory experts.
If the broader scientific community is included in the fact-finding, Greely doesn’t expect the evidence to be admitted.
"In a case where the issues were fully explored with good expert witnesses on both sides, it is very hard for me to believe that a judge would admit the results of fMRI-based lie detection today," Greely said.
But that’s not to say that lie-detection won’t eventually find a place in the courts, as the science and ethics of brain scanning solidify.
Wired.com editor Betsy Mason contributed to this report.
See Also:
Woman Convicted of Child Abuse Hopes fMRI Can Prove Her Innocence
http://www.wired.com/science/discoveries/news/2007/11/fmri_lies
Don’t Even Think About Lying
http://www.wired.com/wired/archive/14.01/lying.html
Brain Scanners Know Where You’ve Been
http://www.wired.com/wiredscience/2009/03/brainspace/
This Is Your Brain on Hillary: Political Neuroscience Hits New Low
http://www.wired.com/wiredscience/2007/11/this-is-your-br/
Religion: Biological Accident, Adaptation — or Both
http://www.wired.com/wiredscience/2009/03/religionbrain/
India’s Judges Overrule Scientists on ‘Guilty Brain’ Tech
http://www.wired.com/wiredscience/2008/10/indias-judges-o/
http://www.angelfire.com/nj/jhgraf/fint.html#BB
Can anyone find a picture of him?
From The Sunday Times November 1, 2009
http://www.timesonline.co.uk/tol/news/science/living/article6898177...
Scientists have discovered how to “read” minds by scanning brain activity and reproducing images of what people are seeing — or even remembering.
Researchers have been able to convert into crude video footage the brain activity stimulated by what a person is watching or recalling.
The breakthrough raises the prospect of significant benefits, such as allowing people who are unable to move or speak to communicate via visualisation of their thoughts; recording people’s dreams; or allowing police to identify criminals by recalling the memories of a witness.
However, it could also herald a new Big Brother era, similar to that envisaged in the Hollywood film Minority Report, in which an individual’s private thoughts can be readily accessed by the authorities.
Jack Gallant and Shinji Nishimoto, two neuroscientists from the University of California, Berkeley, last year managed to correlate activity in the brain’s visual cortex with static images seen by the person. Last week they went one step further by revealing that it is possible to "decode" signals generated in the brain by moving scenes.
In an experiment which has yet to be peer reviewed, Gallant and Nishimoto, using functional magnetic resonance imaging (fMRI) technology, scanned the brains of two patients as they watched videos.
A computer programme was used to search for links between the configuration of shapes, colours and movements in the videos, and patterns of activity in the patients’ visual cortex.
It was later fed more than 200 days’ worth of YouTube internet clips and asked to predict which areas of the brain the clips would stimulate if people were watching them.
Finally, the software was used to monitor the two patients’ brains as they watched a new film and to reproduce what they were seeing based on their neural activity alone.
Remarkably, the computer programme was able to display continuous footage of the films they were watching — albeit with blurred images.
In one scene which featured the actor Steve Martin wearing a white shirt, the software recreated his rough shape and white torso but missed other details, such as his facial features.
Another scene, showing a plane flying towards the camera against a city skyline, was less successfully reproduced. The computer recreated the image of the skyline but omitted the plane altogether.
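As described, the Berkeley method has two stages: fit an "encoding model" that predicts each voxel's response from a clip's visual features, then decode new brain activity by ranking a library of candidate clips by how well their predicted responses match what was observed. The sketch below compresses that idea into a linear toy model; random vectors stand in for the real motion-energy features, so this shows the shape of the computation, not its substance.

```python
import numpy as np

rng = np.random.default_rng(1)
n_clips, n_feat, n_vox = 500, 20, 100

# Stage 1: learn a linear encoding model (voxels ~ features @ W) from
# clips whose brain responses were recorded during training.
features = rng.standard_normal((n_clips, n_feat))   # per-clip features
W_true = rng.standard_normal((n_feat, n_vox))
responses = features @ W_true + 0.5 * rng.standard_normal((n_clips, n_vox))
W_hat, *_ = np.linalg.lstsq(features, responses, rcond=None)

# Stage 2: given new brain activity, score every clip in the library by
# the correlation between its predicted response and the observed one.
target = 123  # the clip the "subject" actually watched
observed = features[target] @ W_true + 0.3 * rng.standard_normal(n_vox)
predicted = features @ W_hat
scores = [np.corrcoef(p, observed)[0, 1] for p in predicted]
print("best-matching clip:", int(np.argmax(scores)), "| true clip:", target)
```

The published work went further, averaging the top-ranked clips from a very large library to produce the blurred reconstructions the article describes.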
“Some scenes decode better than others,” said Gallant. “We can decode talking heads really well. But a camera panning quickly across a scene confuses the algorithm.
“You can use a device like this to do some pretty cool things. At the moment when you see something and want to describe it to someone you have to use words or draw it and it doesn’t work very well.
“You could use this technology to transmit the image to someone. It might be useful for artists or to allow you to recover an eyewitness’s memory of a crime.”
Such technology may not be confined to the here and now. Scientists at University College London have conducted separate tests that detect, with an accuracy of about 50%, memories recalled by patients.
The discoveries come amid a flurry of developments in the field of brain science. Researchers have also used scanning technology to measure academic ability, detect early signs of Alzheimer’s and other degenerative conditions, and even predict the decision a person is about to make before they are conscious of making it.
Such developments may have controversial ramifications. In Britain, fMRI scanning technology has been sold to multinational companies, such as Unilever and McDonald’s, enabling them to see how we subconsciously react to brands.
In America, security agencies are researching the use of brain scanners for interrogating prisoners, and Lockheed Martin, the US defence contractor, is reported to have studied the possibility of scanning brains at a distance.
This would allow an individual’s thoughts and anxieties to be examined without their knowledge in sensitive locations such as airports.
Russell Foster, a neuroscientist at Oxford University, said rapid advances in the field were throwing up ethical dilemmas.
“It’s absolutely critical for scientists to inform the public about what we are doing so they can engage in the debate about how this knowledge should be used,” he said.
“It’s the age-old problem: knowledge is power and it can be used for both good and evil.”
How America's Wars Are Systematically Destroying Our Liberties
By Prof. Alfred W. McCoy
http://www.globalresearch.ca/index.php?context=va&aid=16123
Global Research, November 16, 2009
Tom Dispatch - 2009-11-12
In his approach to National Security Agency surveillance, as well as CIA renditions, drone assassinations, and military detention, President Obama has to a surprising extent embraced the expanded executive powers championed by his conservative predecessor, George W. Bush. This bipartisan affirmation of the imperial executive could "reverberate for generations," warns Jack Balkin, a specialist on First Amendment freedoms at Yale Law School. And consider these but some of the early fruits from the hybrid seeds that the Global War on Terror has planted on American soil. Yet surprisingly few Americans seem aware of the toll that this already endless war has taken on our civil liberties.
Don't be too surprised, then, when, in the midst of some future crisis, advanced surveillance methods and other techniques developed in our recent counterinsurgency wars migrate from Baghdad, Falluja, and Kandahar to your hometown or urban neighborhood. And don't ever claim that nobody told you this could happen -- at least not if you care to read on.
Think of our counterinsurgency wars abroad as so many living laboratories for the undermining of a democratic society at home, a process historians of such American wars can tell you has been going on for a long, long time. Counterintelligence innovations like centralized data, covert penetration, and disinformation developed during the Army's first protracted pacification campaign in a foreign land -- the Philippines from 1898 to 1913 -- were repatriated to the United States during World War I, becoming the blueprint for an invasive internal security apparatus that persisted for the next half century.
Almost 90 years later, George W. Bush's Global War on Terror plunged the U.S. military into four simultaneous counterinsurgency campaigns, large and small -- in Somalia, Iraq, Afghanistan, and (once again) the Philippines -- transforming a vast swath of the planet into an ad hoc "counterterrorism" laboratory. The result? Cutting-edge high-tech security and counterterror techniques that are now slowly migrating homeward.
As the War on Terror enters its ninth year to become one of America's longest overseas conflicts, the time has come to ask an uncomfortable question: What impact have the wars in Afghanistan and Iraq -- and the atmosphere they created domestically -- had on the quality of our democracy?
Every American knows that we are supposedly fighting elsewhere to defend democracy here at home. Yet the crusade for democracy abroad, largely unsuccessful in its own right, has proven remarkably effective in building a technological template that could be just a few tweaks away from creating a domestic surveillance state -- with omnipresent cameras, deep data-mining, nano-second biometric identification, and drone aircraft patrolling "the homeland."
Even if its name is increasingly anathema in Washington, the ongoing Global War on Terror has helped bring about a massive expansion of domestic surveillance by the FBI and the National Security Agency (NSA) whose combined data-mining systems have already swept up several billion private documents from U.S. citizens into classified data banks. Abroad, after years of failing counterinsurgency efforts in the Middle East, the Pentagon began applying biometrics -- the science of identification via facial shape, fingerprints, and retinal or iris patterns -- to the pacification of Iraqi cities, as well as the use of electronic intercepts for instant intelligence and the split-second application of satellite imagery to aid an assassination campaign by drone aircraft that reaches from Africa to South Asia.
In the panicky aftermath of some future terrorist attack, Washington could quickly fuse existing foreign and domestic surveillance techniques, as well as others now being developed on distant battlefields, to create an instant digital surveillance state.
The Crucible of Counterinsurgency
For the past six years, confronting a bloody insurgency, the U.S. occupation of Iraq has served as a white-hot crucible of counterinsurgency, forging a new system of biometric surveillance and digital warfare with potentially disturbing domestic implications. This new biometric identification system first appeared in the smoking aftermath of "Operation Phantom Fury," a brutal, nine-day battle that U.S. Marines fought in late 2004 to recapture the insurgent-controlled city of Falluja. Bombing, artillery, and mortars destroyed at least half of that city's buildings and sent most of its 250,000 residents fleeing into the surrounding countryside. Marines then forced returning residents to wait endless hours under a desert sun at checkpoints for fingerprints and iris scans. Once inside the city's blast-wall maze, residents had to wear identification tags for compulsory checks to catch infiltrating insurgents.
The first hint that biometrics were helping to pacify Baghdad's far larger population of seven million came in April 2007 when the New York Times published an eerie image of American soldiers studiously photographing an Iraqi's eyeball. With only a terse caption to go by, we can still infer the technology behind this single record of a retinal scan in Baghdad: digital cameras for U.S. patrols, wireless data transfer to a mainframe computer, and a database to record as many adult Iraqi eyes as could be gathered. Indeed, eight months later, the Washington Post reported that the Pentagon had collected over a million Iraqi fingerprints and iris scans. By mid-2008, the U.S. Army had also confined Baghdad's population behind blast-wall cordons and was checking Iraqi identities by satellite link to a biometric database.
Pushing ever closer to the boundaries of what present-day technology can do, by early 2008, U.S. forces were also collecting facial images accessible by portable data labs called Joint Expeditionary Forensic Facilities, linked by satellite to a biometric database in West Virginia. "A war fighter needs to know one of three things," explained the inventor of this lab-in-a-box. "Do I let him go? Keep him? Or shoot him on the spot?"
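Iris identification of the kind these field databases perform is conventionally done by comparing binary "iris codes" with a normalized Hamming distance, following John Daugman's widely used method. The sketch below uses random bits in place of real Gabor-derived codes and omits rotation handling and masking, so it is only the skeleton of a matcher.

```python
import numpy as np

rng = np.random.default_rng(7)
CODE_BITS = 2048  # real iris codes are on this order of size

def hamming(a, b):
    """Fraction of bits that differ between two iris codes."""
    return np.count_nonzero(a != b) / a.size

enrolled = {f"subject_{i}": rng.integers(0, 2, CODE_BITS) for i in range(1000)}

# A fresh capture of subject_42's iris: the same code with ~5% bit noise.
probe = enrolled["subject_42"].copy()
flip = rng.random(CODE_BITS) < 0.05
probe[flip] ^= 1

# Identify: nearest enrolled code; Daugman's commonly cited decision
# threshold is a normalized distance of roughly 0.32.
best = min(enrolled, key=lambda name: hamming(enrolled[name], probe))
print(best, round(hamming(enrolled[best], probe), 3))
```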
A future is already imaginable in which a U.S. sniper could take a bead on the eyeball of a suspected terrorist, pause for a nanosecond to transmit the target's iris or retinal data via backpack-sized laboratory to a computer in West Virginia, and then, after instantaneous feedback, pull the trigger.
Lest such developments seem fanciful, recall that Washington Post reporter Bob Woodward claims the success of George W. Bush's 2007 troop surge in Iraq was due less to boots on the ground than to bullets in the head -- and these, in turn, were due to a top-secret fusion of electronic intercepts and satellite imagery. Starting in May 2006, American intelligence agencies launched a Special Action Program using "the most highly classified techniques and information in the U.S. government" in a successful effort "to locate, target and kill key individuals in extremist groups such as al-Qaeda, the Sunni insurgency and renegade Shia militias."
Under General Stanley McChrystal, now U.S. Afghan War commander, the Joint Special Operations Command (JSOC) deployed "every tool available simultaneously, from signals intercepts to human intelligence" for "lightning quick" strikes. One intelligence officer reportedly claimed that the program was so effective it gave him "orgasms." President Bush called it "awesome." Although refusing to divulge details, Woodward himself compared it to the Manhattan Project in World War II. This Iraq-based assassination program relied on the authority Defense Secretary Donald Rumsfeld granted JSOC in early 2004 to "kill or capture al-Qaeda terrorists" in 20 countries across the Middle East, producing dozens of lethal strikes by airborne Special Operations forces.
Another crucial technological development in Washington's secret war of assassination has been the armed drone, or unmanned aerial vehicle, whose speedy development has been another by-product of Washington's global counterterrorism laboratory. Half a world away from Iraq in the southern Philippines, the CIA and U.S. Special Operations Forces conducted an early experiment in the use of aerial surveillance for assassination. In June 2002, with a specially-equipped CIA aircraft circling overhead offering real-time video surveillance in the pitch dark of a tropical night, Philippine Marines executed a deadly high-seas ambush of Muslim terrorist Aldam Tilao (a.k.a. "Abu Sabaya").
In July 2008, the Pentagon proposed an expenditure of $1.2 billion for a fleet of 50 light aircraft loaded with advanced electronics to loiter over battlefields in Afghanistan and Iraq, bringing "full motion video and electronic eavesdropping to the troops." By late 2008, night flights over Afghanistan from the deck of the USS Theodore Roosevelt were using sensors to give American ground forces real-time images of Taliban targets -- some so focused that they could catch just a few warm bodies huddled in darkness behind a wall.
In the first months of Barack Obama's presidency, CIA Predator drone strikes have escalated in the Pakistani tribal borderlands with a macabre efficiency, using a top-secret mix of electronic intercepts, satellite transmission, and digital imaging to kill half of the Agency's 20 top-priority al-Qaeda targets in the region. Just three days before Obama visited Canada last February, Homeland Security launched its first Predator-B drones to patrol the vast, empty North Dakota-Manitoba borderlands that one U.S. senator has called America's "weakest link."
Homeland Security
While those running U.S. combat operations overseas were experimenting with intercepts, satellites, drones, and biometrics, inside Washington the plodding civil servants of internal security at the FBI and the NSA initially began expanding domestic surveillance through thoroughly conventional data sweeps, legal and extra-legal, and -- with White House help -- several abortive attempts to revive a tradition that dates back to World War I of citizens spying on suspected subversives.
"If people see anything suspicious, utility workers, you ought to report it," said President George Bush in his April 2002 call for nationwide citizen vigilance. Within weeks, his Justice Department had launched Operation TIPS (Terrorism Information and Prevention System), with plans for "millions of American truckers, letter carriers, train conductors, ship captains, utility employees and others" to aid the government by spying on their fellow Americans. Such citizen surveillancesparked strong protests, however, forcing the Justice Department to quietly bury the president's program.
Simultaneously, inside the Pentagon, Admiral John Poindexter, President Ronald Reagan's former national security advisor (swept up in the Iran-Contra scandal of that era), was developing a Total Information Awareness program which was to contain "detailed electronic dossiers" on millions of Americans. When news leaked about this secret Pentagon office with its eerie, all-seeing eye logo, Congress banned the program, and the admiral resigned in 2003. But the key data extraction technology, the Information Awareness Prototype System, migrated quietly to the NSA.
Soon enough, however, the CIA, FBI, and NSA turned to monitoring citizens electronically without the need for human tipsters, rendering the administration's grudging retreats from conventional surveillance at best an ambiguous political victory for civil liberties advocates. Sometime in 2002, President Bush gave the NSA secret, illegal orders to monitor private communications through the nation's telephone companies and its private financial transactions through SWIFT, an international bank clearinghouse.
After the New York Times exposed these wiretaps in 2005, Congress quickly capitulated, first legalizing this illegal executive program and then granting cooperating phone companies immunity from civil suits. Such intelligence excess was, however, intentional. Even after Congress widened the legal parameters for future intercepts in 2008, the NSA continued to push the boundaries of its activities, engaging in what the New York Times politely termed the systematic "overcollection" of electronic communications among American citizens. Now, for example, thanks to a top-secret NSA database called "Pinwale," analysts routinely scan countless "millions" of domestic electronic communications without much regard for whether they came from foreign or domestic sources.
Starting in 2004, the FBI launched an Investigative Data Warehouse as a "centralized repository for... counterterrorism." Within two years, it contained 659 million individual records. This digital archive of intelligence, social security files, drivers' licenses, and records of private finances could be accessed by 13,000 Bureau agents and analysts making a million queries monthly. By 2009, when digital rights advocates sued for full disclosure, the database had already grown to over a billion documents.
And did this sacrifice of civil liberties make the United States a safer place? In July 2009, after a careful review of the electronic surveillance in these years, the inspectors general of the Defense Department, the Justice Department, the CIA, the NSA, and the Office of National Intelligence issued a report sharply critical of these secret efforts. Despite George W. Bush's claims that massive electronic surveillance had "helped prevent attacks," these auditors could not find any "specific instances" of this, concluding such surveillance had "generally played a limited role in the F.B.I.'s overall counterterrorism efforts."
Amid the pressures of a generational global war, Congress proved all too ready to offer up civil liberties as a bipartisan burnt offering on the altar of national security. In April 2007, for instance, in a bid to legalize the Bush administration's warrantless wiretaps, Congressional representative Jane Harman (Dem., California) offered a particularly extreme example of this urge. She introduced the Violent Radicalization and Homegrown Terrorism Prevention Act, proposing a powerful national commission, functionally a standing "star chamber," to "combat the threat posed by homegrown terrorists based and operating within the United States." The bill passed the House by an overwhelming 404 to 6 vote before stalling, and then dying, in a Senate somewhat more mindful of civil liberties.
Only weeks after Barack Obama entered the Oval Office, Harman's life itself became a cautionary tale about expanding electronic surveillance. According to information leaked to the Congressional Quarterly, in early 2005 an NSA wiretap caught Harman offering to press the Bush Justice Department for reduced charges against two pro-Israel lobbyists accused of espionage. In exchange, an Israeli agent offered to help Harman gain the chairmanship of the House Intelligence Committee by threatening House Democratic majority leader Nancy Pelosi with the loss of a major campaign donor. As Harman put down the phone, she said, "This conversation doesn't exist."
How wrong she was. An NSA transcript of Harman's every word soon crossed the desk of CIA Director Porter Goss, prompting an FBI investigation that, in turn, was blocked by then-White House Counsel Alberto Gonzales. As it happened, the White House knew that the New York Times was about to publish its sensational revelation of the NSA's warrantless wiretaps, and felt it desperately needed Harman for damage control among her fellow Democrats. In this commingling of intrigue and irony, an influential legislator's defense of the NSA's illegal wiretapping exempted her from prosecution for a security breach discovered by an NSA wiretap.
Since the arrival of Barack Obama in the White House, the auto-pilot expansion of digital domestic surveillance has in no way been interfered with. As a result, for example, the FBI's "Terrorist Watchlist," with 400,000 names and a million entries, continues to grow at the rate of 1,600 new names daily.
In fact, the Obama administration has even announced plans for a new military cyber command staffed by 7,000 Air Force employees at Lackland Air Base in Texas. This command will be tasked with attacking enemy computers and repelling hostile cyber-attacks or counterattacks aimed at U.S. computer networks -- with scant respect for what the Pentagon calls "sovereignty in the cyberdomain." Despite the president's assurances that operations "will not -- I repeat -- will not include monitoring private sector networks or Internet traffic," the Pentagon's top cyberwarrior, General James E. Cartwright, has conceded such intrusions are inevitable.
Sending the Future Home
While U.S. combat forces prepare to draw down in Iraq (and ramp up in Afghanistan), military intelligence units are coming home to apply their combat-tempered surveillance skills to our expanding homeland security state, while preparing to counter any future domestic civil disturbances here.
Indeed, in September 2008, the Army's Northern Command announced that one of the Third Division's brigades in Iraq would be reassigned as a Consequence Management Response Force (CMRF) inside the U.S. Its new mission: planning for moments when civilian authorities may need help with "civil unrest and crowd control." According to Colonel Roger Cloutier, his unit's civil-control equipment featured "a new modular package of non-lethal capabilities" designed to subdue unruly or dangerous individuals -- including Taser guns, roadblocks, shields, batons, and beanbag bullets.
That same month, Army Chief of Staff General George Casey flew to Fort Stewart, Georgia, for the first full CMRF mission readiness exercise. There, he strode across a giant urban battle map filling a gymnasium floor like a conquering Gulliver looming over Lilliputian Americans. With 250 officers from all services participating, the military war-gamed its future coordination with the FBI, the Federal Emergency Management Agency, and local authorities in the event of a domestic terrorist attack or threat. Within weeks, the American Civil Liberties Union filed an expedited freedom of information request for details of these deployments, arguing: "[It] is imperative that the American people know the truth about this new and unprecedented intrusion of the military in domestic affairs."
At the outset of the Global War on Terror in 2001, memories of early Cold War anti-communist witch-hunts blocked Bush administration plans to create a corps of civilian tipsters and potential vigilantes. However, far more sophisticated security methods, developed for counterinsurgency warfare overseas, are now coming home to far less public resistance. They promise, sooner or later, to further jeopardize the constitutional freedoms of Americans.
In these same years, under the pressure of War on Terror rhetoric, presidential power has grown relentlessly, opening the way to unchecked electronic surveillance, the endless detention of terror suspects, and a variety of inhumane forms of interrogation. Somewhat more slowly, innovative techniques of biometric identification, aerial surveillance, and civil control are now being repatriated as well.
In a future America, enhanced retinal recognition could be married to omnipresent security cameras as a part of the increasingly routine monitoring of public space. Military surveillance equipment, tempered to a technological cutting edge in counterinsurgency wars, might also one day be married to the swelling domestic databases of the NSA and FBI, sweeping the fiber-optic cables beneath our cities for any sign of subversion. And in the skies above, loitering aircraft and cruising drones could be checking our borders and peering down on American life.
If that day comes, our cities will be Argus-eyed with countless thousands of digital cameras scanning the faces of passengers at airports, pedestrians on city streets, drivers on highways, ATM customers, mall shoppers, and visitors to any federal facility. One day, hyper-speed software will be able to match those millions upon millions of facial or retinal scans to photos of suspect subversives inside a biometric database akin to England's current National Public Order Intelligence Unit, sending anti-subversion SWAT teams scrambling for an arrest or an armed assault.
By the time the Global War on Terror is declared over in 2020, if then, our American world may be unrecognizable -- or rather recognizable only as the stuff of dystopian science fiction. What we are proving today is that, however detached from the wars being fought in their name most Americans may seem, war itself never stays far from home for long. It's already returning in the form of new security technologies that could one day make a digital surveillance state a reality, changing fundamentally the character of American democracy.
Alfred W. McCoy is the J.R.W. Smail Professor of History at the University of Wisconsin-Madison and the author of A Question of Torture, among other works. His most recent book is Policing America's Empire: The United States, the Philippines, and the Rise of the Surveillance State (University of Wisconsin Press) which explores the influence of overseas counterinsurgency operations throughout the twentieth century in spreading ever more draconian internal security measures here at home.
© Copyright Alfred W. McCoy, Tom Dispatch, 2009
The Weird Russian Mind-Control Research Behind a DHS Contract
By Sharon Weinberger 09.20.07
A dungeon-like room in the Psychotechnology Research Institute in Moscow is used for human testing. The institute claims its technology can read the subconscious mind and alter behavior.
MOSCOW -- The future of U.S. anti-terrorism technology could lie near the end of a Moscow subway line in a circular dungeon-like room with a single door and no windows. Here, at the Psychotechnology Research Institute, human subjects submit to experiments aimed at manipulating their subconscious minds.
Elena Rusalkina, the silver-haired woman who runs the institute, gestured to the center of the claustrophobic room, where what looked like a dentist's chair sits in front of a glowing computer monitor. "We've had volunteers, a lot of them," she said, the thick concrete walls muffling the noise from the college campus outside. "We worked out a program with (a psychiatric facility) to study criminals. There's no way to falsify the results. There's no subjectivism."
The Department of Homeland Security (DHS) has gone to many strange places in its search for ways to identify terrorists before they attack, but perhaps none stranger than this lab on the outskirts of Russia's capital. The institute has for years served as the center of an obscure field of human behavior study -- dubbed psychoecology -- that traces its roots back to Soviet-era mind control research.
What's gotten DHS' attention is the institute's work on a system called Semantic Stimuli Response Measurements Technology, or SSRM Tek, a software-based mind reader that supposedly tests a subject's involuntary response to subliminal messages.
SSRM Tek is presented to a subject as an innocent computer game that flashes subliminal images across the screen -- like pictures of Osama bin Laden or the World Trade Center. The "player" -- a traveler at an airport screening line, for example -- presses a button in response to the images, without consciously registering what he or she is looking at. The terrorist's response to the scrambled image involuntarily differs from the innocent person's, according to the theory.
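Stripped of its packaging, the claim as stated is at least measurable: involuntary responses to subliminally flashed "significant" images should differ statistically from responses to neutral ones. The toy comparison below simulates that test on reaction times; the 25-millisecond effect is invented, and nothing here speaks to whether the underlying theory is sound.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Simulated button-press latencies (ms) while images flash subliminally.
neutral = rng.normal(450, 40, 60)   # trials with neutral images
threat = rng.normal(475, 40, 60)    # hypothetical slowdown on trials
                                    # with threat-related images

# The screening logic as described: flag when the distributions differ.
t_stat, p = stats.ttest_ind(threat, neutral)
verdict = "extra checks" if p < 0.01 else "clean result"
print(f"{verdict} (t={t_stat:.2f}, p={p:.4f})")
```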
Gear for testing MindReader 2.0 software hangs on a wall at the Psychotechnology Research Institute in Moscow. Marketed in North America as SSRM Tek, the technology will soon be tested for airport screening by a U.S. company under contract to the Department of Homeland Security.
"If it's a clean result, the passengers are allowed through," said Rusalkina, during a reporter's visit last year. "If there's something there, that person will need to go through extra checks."
Rusalkina markets the technology as a program called MindReader 2.0. To sell MindReader to the West, she's teamed up with a Canadian firm, which is now working with a U.S. defense contractor called SRS Technologies. This May, DHS announced plans to award a sole-source contract to conduct the first U.S.-government sponsored testing of SSRM Tek.
The contract is a small victory for the Psychotechnology Research Institute and its leaders, who have struggled for years to be accepted in the West. It also illustrates how the search for counter-terrorism technology has led the U.S. government into unconventional -- and some would say unsound -- science.
All of the technology at the institute is based on the work of Rusalkina's late husband, Igor Smirnov, a controversial Russian scientist whose incredible tales of mind control attracted frequent press attention before his death several years ago.
Smirnov was a Rasputin-like character often portrayed in the media as having almost mystical powers of persuasion. Today, first-time visitors to the institute -- housed in a drab concrete building at the Peoples Friendship University of Russia -- are asked to watch a half-hour television program dedicated to Smirnov, who is called the father of "psychotronic weapons," the Russian term for mind control weapons. Bearded and confident, Smirnov in the video explains how subliminal sounds could alter a person's behavior. To the untrained ear, the demonstration sounds like squealing pigs.
Elena Rusalkina demonstrates the terrorist-screening tool. She says it works faster than a polygraph and can be used at airports.
According to Rusalkina, the Soviet military enlisted Smirnov's psychotechnology during the Soviet Union's bloody war in Afghanistan in the 1980s. "It was used for combating the Mujahideen, and also for treating post-traumatic stress syndrome" in Russian soldiers, she says.
In the United States, talk of mind control typically evokes visions of tinfoil hats. But the idea of psychotronic weapons enjoys some respectability in Russia. In the late 1990s, Vladimir Lopatin, then a member of the Duma, Russia's parliament, pushed to restrict mind control weapons, a move that was taken seriously in Russia but elicited some curious mentions in the Western press. In an interview in Moscow, Lopatin, who has since left the Duma, cited Smirnov's work as proof that such weaponry is real.
"It's financed and used not only by the medical community, but also by individual and criminal groups," Lopatin said. Terrorists might also get hold of such weapons, he added.
After the fall of the Soviet Union, Smirnov moved from military research into treating patients with mental problems and drug addiction, setting up shop at the college. Most of the lab's research is focused on what it calls "psychocorrection" -- the use of subliminal messages to bend a subject's will, and even modify a person's personality without their knowledge.
The slow migration of Smirnov's technology to the United States began in 1991, at a KGB-sponsored conference in Moscow intended to market once-secret Soviet technology to the world. Smirnov's claims of mind control piqued the interest of Chris and Janet Morris -- former science-fiction writers turned Pentagon consultants who are now widely credited as founders of the Pentagon's "non-lethal" weapons concept.
In an interview last year, Chris Morris recalled being intrigued by Smirnov -- so much so that he accompanied the researcher to his lab and allowed Smirnov to wire his head up to an electroencephalograph, or EEG. The EEG is normally used by scientists to measure brain states, but Smirnov peered into Morris's tracings and divined the secrets of his subconscious, right down to intimate details like Morris' dislike of his own first name.
The underlying premise of the technology is that terrorists would recognize a scrambled terrorist image like this one without even realizing it, and would be betrayed by their subconscious reaction to the picture.
"I said, 'gee, the guys back at home have got to see this,'" Morris recalled.
The Morrises shopped the technology around to a few military agencies, but found no one willing to put money into it. However, in 1993 Smirnov rose to brief fame in the United States when the FBI consulted with him in hope of ending the standoff in Waco with cult leader David Koresh. Smirnov proposed blasting scrambled sound -- the pig squeals again -- over loudspeakers to persuade Koresh to surrender.
But the FBI was put off by Smirnov's cavalier response to questions. When officials asked what would happen if the subliminal signals didn't work, Smirnov replied that Koresh's followers might slit each other's throats, Morris recounted. The FBI took a pass, and Smirnov returned to Moscow with his mind control technology.
"With Smirnov, the FBI was either demanding a yes or a no, and therefore our methods weren't put to use, unfortunately," Rusalkina said, taking a drag on her cigarette.
Igor Smirnov, founder of the Psychotechnology Research Institute, died of a heart attack in 2004. Smirnov is best known in the United States for consulting with the FBI during the 1993 Waco siege.
Smirnov died in November 2004, leaving the widowed Rusalkina -- his long-time collaborator -- to run the institute. Portraits of Smirnov cover Rusalkina's desk, and his former office is like a shrine, the walls lined with his once-secret patents, his awards from the Soviet government, and a calendar from the KGB's cryptographic section.
Despite Smirnov's death, Rusalkina predicts an "arms race" in psychotronic weapons. Such weapons, she asserts, are far more dangerous than nuclear weapons.
She pointed, for example, to a spate of Russian news reports about "zombies" -- innocent people whose memories had been allegedly wiped out by mind control weapons. She also claimed that Russian special forces contacted the institute during the 2002 Moscow theater siege, in which several hundred people were held hostage by Chechen militants.
"We could have stabilized the situation in the concert hall, and the terrorists would have called the whole thing off," she said. "And naturally, you could have avoided all the casualties, and you could have put the terrorists on trial. But the Alfa Group" -- the Russian equivalent of Delta Force -- "decided to go with an old method that had already been tested before."
The Russians used a narcotic gas to subdue the attackers and their captives, and many of the hostages died of asphyxiation.
These days, Rusalkina explained, the institute uses its psychotechnology to treat alcoholics and drug addicts. During the interview, several patients -- gaunt young men who appeared wasted from illness -- waited in the hallway.
But the U.S. war on terror and the millions of dollars set aside for homeland security research are offering Smirnov a chance at posthumous respectability in the West.
Smirnov's technology reappeared on the U.S. government's radar screen through Northam Psychotechnologies, a Canadian company that serves as North American distributor for the Psychotechnology Research Institute. About three years ago, Northam Psychotechnologies began seeking out U.S. partners to help it crack the DHS market. For companies claiming innovative technologies, the past few years have provided bountiful opportunities. In fiscal year 2007, DHS allocated $973 million for science and technology and recently announced Project Hostile Intent, which is designed to develop technologies to detect people with malicious intentions.
One California-based defense contractor, DownRange G2 Solutions, expressed interest in SSRM Tek, but became skeptical when Northam Psychotechnologies declined to make the software available for testing.
"That raised our suspicion right away," Scott Conn, CEO and president of DownRange, told Wired News. "We weren't prepared to put our good names on the line without due diligence." (When a reporter visited last year, Rusalkina also declined to demonstrate the software, saying it wasn't working that day.)
While Conn said the lack of testing bothered him, the relationship ended when he found out that Northam Psychotechnologies had gone to SRS Technologies, now part of ManTech International Corp.
Semyon Ioffe, the head of Northam Psychotechnologies, who identifies himself as a "brain scientist," declined a phone interview, but answered questions over e-mail. Ioffe said he signed a nondisclosure agreement with Conn, and had "a few informal discussions, after which he disappeared to a different assignment and reappeared after (the) DHS announcement."
As for the science, Ioffe said he has a Ph.D. in neurophysiology, and cited Smirnov's Russian-language publications as the basis for SSRM Tek.
However, not everyone is impressed with Smirnov's technology, including John Alexander, a well-known expert on non-lethal weapons. Alexander was familiar with Smirnov's meetings in Washington during the Waco crisis, and said in an interview last year that there were serious doubts about the technology then, as there are now.
"It was the height of the Waco problem, they were grasping at straws," he said of the FBI's fleeting interest. "From what I understand from people who were there, it didn't work very well."
Geoff Schoenbaum, a neuroscientist at the University of Maryland's School of Medicine, said that he was unaware of any scientific work specifically underpinning the technology described in SSRM Tek.
"There's no question your brain is able to perceive things below your ability to consciously express or identify," Schoenbaum said. He noted for example, studies showing that images displayed for milliseconds -- too short for people to perceive consciously -- may influence someone's mood. "That kind of thing is reasonable, and there's good experimental evidence behind it."
The problem, he said, is that there is no science he is aware of that can produce the specificity or sensitivity to pick out a terrorist, let alone influence behavior. "We're still working at the level of how rats learn that light predicts food," he explained. "That's the level of modern neuroscience."
Developments in neuroscience, he noted, are followed closely. "If we could do (what they're talking about), you would know about it," Schoenbaum said. "It wouldn't be a handful of Russian folks in a basement."
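The masked-image studies Schoenbaum refers to are standard laboratory fare. As a rough illustration of that paradigm -- not of SSRM Tek, whose workings remain undisclosed -- here is a minimal presentation sketch, assuming the open-source PsychoPy library; 'prime.png' and 'mask.png' are hypothetical image files.

# Minimal sketch of a backward-masking trial of the kind Schoenbaum
# alludes to: an image is shown for roughly one screen frame, then
# immediately covered by a mask so it cannot be consciously reported.
from psychopy import visual, core

win = visual.Window(size=(800, 600), color='grey', units='pix')
prime = visual.ImageStim(win, image='prime.png', size=(300, 300))  # hypothetical file
mask = visual.ImageStim(win, image='mask.png', size=(300, 300))    # hypothetical file

# Draw the prime, flip once: it stays on screen for ~1 frame.
prime.draw()
win.flip()

# Replace it with the mask for ~30 frames (~500 ms at 60 Hz).
for _ in range(30):
    mask.draw()
    win.flip()

core.wait(0.5)
win.close()

At a typical 60 Hz refresh rate a single frame lasts about 16.7 milliseconds, which is why such studies time stimuli in screen frames rather than in wall-clock seconds.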
In the meantime, the DHS contract is still imminent, according to those involved, although all parties declined to comment on the details, or the size of the award. Rusalkina did not respond to a recent e-mail, but in the interview last year, she confirmed the institute was marketing the technology to the United States for airport screening.
Larry Orloskie, a spokesman for DHS, declined to comment on the contract announcement. "It has not been awarded yet," he replied in an e-mail.
"It would be premature to discuss any details about the pending contract with DHS and I will be happy to do an interview once the contract is in place," Ioffe, of Northam Psychotechnologies, wrote in an e-mail. Mark Root, a spokesman for ManTech, deferred questions to DHS, noting, "They are the customer."
Read More http://www.wired.com/politics/security/news/2007/09/mind_reading?cu...
by Julia Layton
http://health.howstuffworks.com/mind-reading.htm
Many who saw the movie "Minority Report" experienced two distinct reactions: first, "Please, this is pure science fiction," and then, "There but for the grace of God..." Really, how many of us have not fantasized at least once about what we would do if we ever came upon the guy who stole our car? And maybe on a trip to Best Buy, you imagined for a second what it would be like to just pick up that 60-inch DLP set, hoist it onto your back and walk out of the store. Would you get tackled by a salesperson?
But these are just passing thoughts, even the stuff of jokes. They're not actually plans, right? The distinction between the two is part of the ethical debate surrounding a study published in the journal Current Biology in February 2007, which reports the findings of an experiment on reading people's intentions. The study, led by John-Dylan Haynes of the Max Planck Institute for Human Cognitive and Brain Sciences in Germany, shows that through brain scans and computer software designed to correlate specific brain activity with specific thoughts, researchers can read people's intentions with considerable accuracy.
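The article doesn't detail Haynes' analysis, but the general technique behind such studies -- multivariate pattern classification -- is well established. A minimal sketch in Python, assuming NumPy and scikit-learn are available and substituting synthetic data for real fMRI recordings:

# Hypothetical sketch of multivoxel pattern classification, the family of
# techniques used in intention-decoding studies. All data here are
# synthetic; real experiments use preprocessed fMRI voxel activations.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels = 200, 50

# Simulate trials for two covert intentions (e.g., "add" vs. "subtract"):
# one intention weakly shifts a subset of voxels, buried in noise.
labels = rng.integers(0, 2, n_trials)
X = rng.normal(0.0, 1.0, (n_trials, n_voxels))
X[labels == 1, :10] += 0.5  # small intention-specific signal

# Cross-validated accuracy above 50 percent means the classifier can
# "read" the intended operation from the activity pattern alone.
clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, labels, cv=5)
print(f"decoding accuracy: {scores.mean():.2f} (chance = 0.50)")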
Richard Ingham, AFP
http://dsc.discovery.com/news/2008/03/05/brain-thought.html
March 5, 2008 -- Venturing into the preserve of science fiction and stage magicians, scientists in the United States on Wednesday said they had made extraordinary progress towards reading the brain.
The researchers said they had been able to decode signals in a key part of the brain to identify images seen by a volunteer, according to their study, published by the British journal Nature.
The tool used by the University of California at Berkeley neuroscientists is functional magnetic resonance imaging (fMRI), a non-invasive scanner that detects minute flows of blood within the brain, thus highlighting which cerebral areas are triggered by light, sound and touch.
Their zone of interest was the visual cortex -- a region at the back of the brain that reconstitutes images sent by the retina.
Using two of their number as volunteers, the team built a computational model based on telltale blood-flow patterns in three key areas of the visual cortex.
The signatures were derived from 1,750 images of objects, such as horses, trees, buildings and flowers, that were flashed up in front of the subjects.
Using this model, the program then analyzed a new set of 120 pictures, predicting the fMRI pattern each one would evoke in the visual cortex.
After that, the volunteers themselves looked at the 120 new pictures while being scanned. The computer then matched the measured brain activity against the predicted brain activity, and picked an image that it believed was the closest match.
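That matching step is conceptually simple. A toy sketch of it in Python (the predicted responses and noise model below are invented for illustration, not taken from the Nature study):

# Toy illustration of identification by pattern matching: given predicted
# voxel patterns for every candidate image and one measured pattern, pick
# the candidate whose prediction correlates best with the measurement.
import numpy as np

rng = np.random.default_rng(1)
n_images, n_voxels = 120, 500

# Stand-in for the encoding model's predicted voxel responses per image.
predicted = rng.normal(size=(n_images, n_voxels))

# Simulate the measured response to one image: its prediction plus noise.
true_idx = 42
measured = predicted[true_idx] + rng.normal(0.0, 0.8, n_voxels)

def correlation(a, b):
    a = a - a.mean()
    b = b - b.mean()
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Score every candidate and pick the best match.
scores = [correlation(measured, p) for p in predicted]
guess = int(np.argmax(scores))
print(f"decoder picked image {guess}, truth was {true_idx} (chance: 1/120)")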
The computer identified the correct image 92 percent of the time for one volunteer, and 72 percent of the time for the other. By chance alone -- picking one image at random out of the 120 -- it would succeed only 0.8 percent of the time.
Lead author Jack Gallant likened the task to that of a magician who asks a member of the audience to pick a card from a pack, and then figures out which one it was.
"Imagine that we begin with a large set of photographs chosen at random," Gallant said.
"You secretly select just one of these and look at it while we measure your brain activity. Given the set of possible photographs and the measurements of your brain activity, the decoder attempts to identify which specific photograph you saw."
The ambitious experiment was then taken a stage further, expanding the set of novel images from 120 to 1,000. The first volunteer took this test, and accuracy declined only slightly, from 92 percent to 82 percent.
"Our estimates suggest that even with a set of one billion images -- roughly the number of images indexed by Google on the Internet -- the decoder would correctly identify the image about 20 percent of the time," said Gallant.
The researchers say the device cannot "read minds," the common term for unscrambling thoughts. It cannot even reconstruct an image, only identify an image that was taken from a known set, they point out.
All the same, the potential is enormous, they believe.
Doctors could use the technique to diagnose brain areas damaged by a stroke or dementia, determine the outcome of drug treatment or stem-cell therapy and fling open a door into the strange world of dreams.
And, according to one futuristic scenario, paraplegic patients, by thinking of a series of images whose fMRI patterns are recognised by computer, may one day be able to operate machines by remote control.
Even so, brain-reading is hedged with potential controversy.
Within 30 or 50 years, advances could raise fears about breach of privacy and authoritarian abuse of the kind that dog biotechnology today, the authors say.
"No-one should be subjected to any form of brain-reading process involuntarily, covertly, or without complete informed consent," they say.
What if you could suck evidence right out of a criminal's mind? That's what one groundbreaking invention called Brain Fingerprinting intends to do, revolutionizing traditional lie detectors in the process.
http://science.discovery.com/videos/popscis-future-of-brain-fingerp...