FULL NAME's Posts (12)

- Besides the possibility of a receiving implant or a particle driven to a specific part of the body, many symptoms are caused and experienced because of the different reactions of human organs to specific electromagnetic fields. A main concern was the idea and decision to use frequencies that intentionally influence specific organs.
- Common: eye, testicular, or ovarian sensation, discomfort, or pain.
1. Diathermy heating of human tissue caused by EM radiation only heats tissues that are good electrical conductors, such as blood vessels and muscle. Adipose tissue (fat) receives little heating by induction fields because an electrical current is not actually going through the tissue.
2. "Two areas of the body, the eyes and the testes, can be particularly susceptible to radiation emitted by electromagnetic fields. Heating in these areas by RF energy occurs because of the relative lack of available blood flow to dissipate the excessive heat load." Electromagnetic fields have been recognized as hazards that affect testicular function by generating reactive oxygen species and reducing the bioavailability of androgen to maturing spermatozoa. "Thus, microwave exposure adversely affects male fertility."
3. Depending on the type, strength, and length of exposure, various health effects have been noted: temporary sterility from high-frequency fields and permanent sterility from low-frequency fields, significant decreases in testosterone and melatonin, changes in sperm count and sperm motility, DNA strand breakage in testicular cells, tumors, cancer, cataracts, retinal detachment. While damage caused by short-term RF heating is often reversible, recovery may be impaired by the influence of very low magnetic fields. There are indications of the capacity and use of EM radiation from a wider spectrum to combine these influences.
4. There are medical tests available for each of the possible health issues mentioned.
- These conclusions were drawn after examining various scientific papers from research on animals and humans. To keep it simple, Wikipedia is used for short reference.
-----------------------
https://en.m.wikipedia.org/wiki/Electromagnetic_radiation_and_health

Biological hazards
The best understood biological effect of electromagnetic fields is dielectric heating. For example, touching or standing around an antenna while a high-power transmitter is in operation can cause severe burns. These are exactly the kind of burns that would be caused inside a microwave oven. This heating effect varies with the power and the frequency of the electromagnetic energy. A measure of the heating effect is the specific absorption rate, or SAR, which has units of watts per kilogram (W/kg). The IEEE and many national governments have established safety limits for exposure to various frequencies of electromagnetic energy based on SAR, mainly based on ICNIRP Guidelines, which guard against thermal damage. There are publications which support the existence of complex biological effects of weaker non-thermal electromagnetic fields (see Bioelectromagnetics), including weak ELF magnetic fields and modulated RF and microwave fields. Fundamental mechanisms of the interaction between biological material and electromagnetic fields at non-thermal levels are not fully understood. While the most acute exposures to harmful levels of electromagnetic radiation are immediately realized as burns, the health effects due to chronic or occupational exposure may not manifest for months or years.

Extremely-low-frequency RF
High-power extremely-low-frequency RF with electric field levels in the low kV/m range is known to induce perceivable currents within the human body that create an annoying tingling sensation.
These currents will typically flow to ground through a body contact surface such as the feet, or arc to ground where the body is well insulated.

Shortwave frequency RF
Shortwave diathermy heating of human tissue only heats tissues that are good electrical conductors, such as blood vessels and muscle. Adipose tissue (fat) receives little heating by induction fields because an electrical current is not actually going through the tissues.

Microwaves
Microwave exposure at low-power levels below the specific absorption rate set by government regulatory bodies is considered harmless non-ionizing radiation and has no effect on the human body. Levels above the specific absorption rate set by the US Federal Communications Commission (FCC) are those considered to be potentially harmful. ANSI standards for safe exposure levels to RF and microwave radiation set a SAR level of 4 W/kg as the threshold before hazardous thermal effects occur due to energy absorption in the body. A safety factor of ten was then incorporated to arrive at the final recommended protection guideline: a SAR exposure threshold of 0.4 W/kg for RF and microwave radiation. There is disagreement over exactly what levels of RF radiation are safe, particularly with regard to low levels of exposure. Russia and eastern European countries set SAR thresholds for microwaves and RF much lower than western countries. Two areas of the body, the eyes and the testes, can be particularly susceptible to heating by RF energy because of the relative lack of available blood flow to dissipate the excessive heat load. Laboratory experiments have shown that short-term exposure to high levels of RF radiation (100–200 mW/cm²) can cause cataracts in rabbits. Temporary sterility, caused by such effects as changes in sperm count and sperm motility, is possible after exposure of the testes to high-level RF radiation. Long-term exposure to high levels of microwaves is recognized, from experimental animal studies and epidemiological studies in humans, to cause cataracts. The mechanism is unclear but may include changes in heat-sensitive enzymes that normally protect cell proteins in the lens. Another mechanism that has been advanced is direct damage to the lens from pressure waves induced in the aqueous humor. Exposure to sufficiently high-power microwave RF is known to create effects ranging from a burning sensation on the skin and the microwave auditory effect, to extreme pain at the mid-range, to physical microwave burns and blistering of skin and internals at high power levels.
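To give a feel for the SAR numbers quoted above, the local SAR in tissue can be estimated from the standard relation SAR = σE²/ρ (tissue conductivity times the squared electric field magnitude, divided by tissue density). A minimal sketch; the tissue values below are illustrative assumptions for this example, not measured data:

```python
# Rough local SAR estimate: SAR = sigma * E^2 / rho  (W/kg)
# sigma: tissue conductivity (S/m), E: RMS electric field in tissue (V/m),
# rho: tissue density (kg/m^3). All numeric values are illustrative assumptions.

def sar(sigma_s_per_m: float, e_field_v_per_m: float, rho_kg_per_m3: float) -> float:
    """Local specific absorption rate in W/kg."""
    return sigma_s_per_m * e_field_v_per_m ** 2 / rho_kg_per_m3

# Conductive tissue (muscle) vs. poorly conducting tissue (fat), same field:
muscle = sar(sigma_s_per_m=1.0, e_field_v_per_m=20.0, rho_kg_per_m3=1050.0)
fat = sar(sigma_s_per_m=0.05, e_field_v_per_m=20.0, rho_kg_per_m3=920.0)

print(f"muscle SAR ~ {muscle:.3f} W/kg")  # conductive tissue absorbs far more
print(f"fat    SAR ~ {fat:.3f} W/kg")
print(f"within 0.4 W/kg guideline? {muscle < 0.4}")
```

This is why the article notes that blood vessels and muscle heat strongly while fat receives little heating: at the same field strength, the twenty-fold difference in conductivity assumed here translates directly into a twenty-fold difference in absorbed power per kilogram.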

Optogenetics - noninvasive brain control

http://newsoffice.mit.edu/2014/noninvasive-brain-control-0629

New light-sensitive protein enables simpler, more powerful optogenetics.

Optogenetics, a technology that allows scientists to control brain activity by shining light on neurons, relies on light-sensitive proteins that can suppress or stimulate electrical signals within cells. This technique requires a light source to be implanted in the brain, where it can reach the cells to be controlled. MIT engineers have now developed the first light-sensitive molecule that enables neurons to be silenced noninvasively, using a light source outside the skull. This makes it possible to do long-term studies without an implanted light source. The protein, known as Jaws, also allows a larger volume of tissue to be influenced at once. This noninvasive approach could pave the way to using optogenetics in human patients to treat epilepsy and other neurological disorders, the researchers say, although much more testing and development is needed. Led by Ed Boyden, an associate professor of biological engineering and brain and cognitive sciences at MIT, the researchers described the protein in the June 29 issue of Nature Neuroscience. Optogenetics, a technique developed over the past 15 years, has become a common laboratory tool for shutting off or stimulating specific types of neurons in the brain, allowing neuroscientists to learn much more about their functions. The neurons to be studied must be genetically engineered to produce light-sensitive proteins known as opsins, which are channels or pumps that influence electrical activity by controlling the flow of ions in or out of cells.
Researchers then insert a light source, such as an optical fiber, into the brain to control the selected neurons. Such implants can be difficult to insert, however, and can be incompatible with many kinds of experiments, such as studies of development, during which the brain changes size, or of neurodegenerative disorders, during which the implant can interact with brain physiology. In addition, it is difficult to perform long-term studies of chronic diseases with these implants.

Mining nature’s diversity
To find a better alternative, Boyden, graduate student Amy Chuong, and colleagues turned to the natural world. Many microbes and other organisms use opsins to detect light and react to their environment. Most of the natural opsins now used for optogenetics respond best to blue or green light. Boyden’s team had previously identified two light-sensitive chloride ion pumps that respond to red light, which can penetrate deeper into living tissue. However, these molecules, found in the bacteria Haloarcula marismortui and Haloarcula vallismortis, did not induce a strong enough photocurrent — an electric current in response to light — to be useful in controlling neuron activity. Chuong set out to improve the photocurrent by looking for relatives of these proteins and testing their electrical activity. She then engineered one of these relatives by making many different mutants. The result of this screen, Jaws, retained its red-light sensitivity but had a much stronger photocurrent — enough to shut down neural activity. “This exemplifies how the genomic diversity of the natural world can yield powerful reagents that can be of use in biology and neuroscience,” says Boyden, who is a member of MIT’s Media Lab and the McGovern Institute for Brain Research. Using this opsin, the researchers were able to shut down neuronal activity in the mouse brain with a light source outside the animal’s head.
The suppression occurred as deep as 3 millimeters in the brain, and was just as effective as that of existing silencers that rely on other colors of light delivered via conventional invasive illumination. A key advantage of this opsin is that it could enable optogenetic studies of animals with larger brains, says Garret Stuber, an assistant professor of psychiatry and cell biology and physiology at the University of North Carolina at Chapel Hill. “In animals with larger brains, people have had difficulty getting behavior effects with optogenetics, and one possible reason is that not enough of the tissue is being inhibited,” he says. “This could potentially alleviate that.”

Restoring vision
Working with researchers at the Friedrich Miescher Institute for Biomedical Research in Switzerland, the MIT team also tested Jaws’s ability to restore the light sensitivity of retinal cells called cones. In people with a disease called retinitis pigmentosa, cones slowly atrophy, eventually causing blindness. Friedrich Miescher Institute scientists Botond Roska and Volker Busskamp have previously shown that some vision can be restored in mice by engineering those cone cells to express light-sensitive proteins. In the new paper, Roska and Busskamp tested the Jaws protein in the mouse retina and found that it more closely resembled the eye’s natural opsins and offered a greater range of light sensitivity, making it potentially more useful for treating retinitis pigmentosa. This type of noninvasive approach to optogenetics could also represent a step toward developing optogenetic treatments for diseases such as epilepsy, which could be controlled by shutting off misfiring neurons that cause seizures, Boyden says. “Since these molecules come from species other than humans, many studies must be done to evaluate their safety and efficacy in the context of treatment,” he says. Boyden’s lab is working with many other research groups to further test the Jaws opsin for other applications.
The team is also seeking new light-sensitive proteins and is working on high-throughput screening approaches that could speed up the development of such proteins. The research at MIT was funded by Jerry and Marge Burnett, the Defense Advanced Research Projects Agency, the Human Frontiers Science Program, the IET A. F. Harvey Prize, the Janet and Sheldon Razin ’59 Fellowship of the MIT McGovern Institute, the New York Stem Cell Foundation-Robertson Investigator Award, the National Institutes of Health, the National Science Foundation, and the Wallace H. Coulter Foundation.
http://www.popsci.com/researchers-control-mouse-movements-remote-control-device

Researchers have a hard time understanding how tiny changes to networks of neurons in an animal's brain can affect its behavior. Now a team of researchers has figured out a way to tinker with the neural networks of mice in real time, using a wireless controller that can both shine light on the brain and deliver drugs to it. The device even lets scientists control the movements of their test mice from afar. Neural circuits are thought to be key for various neurological disorders having to do with stress, depression, addiction, and pain—researchers can study them by adding various chemicals or light to neurons to mimic the disease. But observing or manipulating neural circuits to figure out how they work has been notoriously difficult. When researchers do tests on mice, the animals have to be awake and moving around, but the devices to monitor their brain activity are usually hooked up to wires, so they can’t move freely. When they’re implanted, the devices often displace a substantial amount of the mouse’s brain tissue, which could change the outcome of the experiments. In this recent study, published last week in Cell, researchers tested a device made of soft materials and just one-tenth the width of a human hair—much less invasive than the devices typically used in drug injection experiments. The devices were also hooked up to tiny batteries so that they didn’t need to be attached to wires, and contained reservoirs of the drugs or viruses to be tested so that they could be injected wirelessly. To test the devices, researchers implanted them in the brains of several mice, which were put in a cage about three feet from the remote control. The device allowed them to precisely map the mice’s neural circuits by injecting viruses that label cells with genetic dyes.
By injecting a morphine-like drug into the ventral tegmental area (VTA) of the brain, which controls motivation and addiction, the researchers made the mice walk in circles. Viruses injected into the mice’s brains using the device made certain neurons in the VTA very sensitive to light; when the researchers shined light on those neurons using the device, the mice stayed only on one side of their cage. In the study, the researchers provide explicit instructions for how to manufacture the device, which contains four LEDs and has the capacity to inject four different chemicals or drugs into the mice’s brains. The researchers hope that other scientists will use their plans to discover more about how neural networks work to create or treat diseases.
http://www.huffingtonpost.com/stuart-hameroff/is-your-brain-really-a-co_b_7756700.html

How does the brain - a lump of 'pinkish gray meat' - produce the richness of conscious experience, or any subjective experience at all? Scientists and philosophers have historically likened the brain to contemporary information technology, from the ancient Greeks comparing memory to a 'seal ring in wax,' to the 19th century brain as a 'telegraph switching circuit,' to Freud's sub-conscious desires 'boiling over like a steam engine,' to a hologram, and finally, the computer. Because brain neurons and synapses appear to act like switches and 'bits' in computers, and because brain disorders like depression, Alzheimer's disease and traumatic brain injury ravage humanity with limited effective therapies, scientists, governments and funding agencies have bet big on the brain-as-computer analogy. For example, billions of dollars and euros are being poured into 'brain mapping,' the notion that identifying, and then simulating, brain neurons and their synaptic connections can reproduce brain function, e.g. the 'Human Brain Project' in Europe, and the Allen Institute's efforts in Seattle to map the mouse cortex. But the bet, so far at least, isn't paying off. For example, beginning more modestly, a world-wide consortium has simulated the already-known 302-neuron 'brain' of a simple round worm called C. elegans. The biological worm is fairly active, swimming nimbly and purposefully, but the simulated C. elegans just lies there, with no functional behavior. Something is missing. Funding agencies are getting nervous. Bring in the 'P.R. guys.' In a New York Times piece, 'Face It, Your Brain is a Computer' (June 27, 2015), NYU psychologist/neuroscientist Gary Marcus desperately beats the dead horse.
Following a series of failures by computers to simulate basic brain functions (much less approach 'the C-word', consciousness), Marcus is left to ask, in essence: if the brain isn't a computer, what else could it possibly be? Actually, the brain is looking more like an orchestra, a multi-scalar vibrational resonance system, than a computer. Brain information patterns repeat over spatiotemporal scales in fractal-like, nested hierarchies of neuronal networks, with resonances and interference beats. One example of a multi-scalar spatial mapping is the 2014 Nobel Prize-winning work (O'Keefe, Moser and Moser) on 'grid cells', hexagonal representations of spatial location arrayed in layers of entorhinal cortex, each layer encoding a different spatial scale. Moving from layer to layer in entorhinal cortex is precisely like zooming in and out in a Google map. Indeed, neuroscientist Karl Pribram's assessment of the brain as a 'holographic storage device' (which Marcus dismisses) now seems on-target. Holograms encode distributed information as multi-scalar interference of coherent vibrations, e.g. from lasers. Pribram lacked a proper coherent source, a laser in the brain, but evidence now points to structures inside brain neurons called microtubules as sources of laser-like coherence for the brain's vibrational hierarchy. Microtubules are cylindrical lattice polymers of the protein 'tubulin', major components of the structural cytoskeleton inside cells, and the brain's most prevalent protein. Their lattice structure and self-organization have suggested microtubule information processing, an idea stemming from Charles Sherrington (1951) calling them 'the cell's nervous system'.
In the 1980s, my colleagues and I proposed that microtubules acted like computers, specifically as Boolean switching matrices, or molecular automata, processing information, encoding memory, oscillating coherently and regulating neuronal functions from within. For the past 20 years I've teamed with British physicist Sir Roger Penrose on a quantum theory of consciousness ('orchestrated objective reduction', 'Orch OR') linking microtubule quantum processes to fluctuations in the structure of the universe. Our idea was criticized harshly, as the brain seemed too 'warm, wet and noisy' for apparently delicate quantum coherence. But evidence now clearly shows that (1) plant photosynthesis routinely uses quantum coherence in warm sunlight (if a potato can do it....?), and (2) microtubules have quantum resonances in gigahertz, megahertz and kilohertz frequency ranges (the work of Anirban Bandyopadhyay and colleagues at the National Institute for Materials Science in Tsukuba, Japan). These coherent 'fractal frequencies' in microtubules apparently couple to even faster, smaller-scale terahertz vibrations among intra-tubulin 'pi electron resonance clouds', and to slower ones, e.g. by interference 'beats' giving rise to larger-scale EEG. My colleagues and I (Craddock et al., 2015) have identified a 'quantum underground' inside microtubules where anesthetic gases bind to selectively erase consciousness, dampening and dispersing terahertz dipole vibrations. A multi-scalar vibrational hierarchy could play key roles in neuronal and brain functions, driven at the 'bottom', inside neurons, by microtubule quantum resonators. The most likely sites for consciousness are microtubule networks in the dendrites and soma of cortical layer 5 giant pyramidal neurons, whose apical dendrites give rise to EEG.
Dendritic-somatic microtubules are unique, being interrupted and arrayed in mixed-polarity networks, unsuited for structural support but optimal for information processing, resonance and interference. Finally, Marcus raises two points which should be addressed. Citing conventional wisdom, he concedes that 'the brain's nerve cells are too slow and variable to be a good match for the transistors and logic gates that we use in modern computers.' True! But microtubules inside those nerve cells act at terahertz through gigahertz, megahertz and kilohertz frequencies, and are a very good match - in fact far, far better. Finally, Marcus says 'We need to find a common underlying circuit...logic block that can be configured, and reconfigured...to do a wide range of tasks...highly orchestrated sets of fundamental building blocks...field-programmable gate arrays...computational primitives...the Rosetta stone that unlocks the brain.' I agree completely. And microtubules inside neurons provide exactly, precisely, 100 percent what Marcus is looking for. Viewing neurons as computational primitives is an insult to neurons. Brain mappers should look deeper, smaller, faster, inside neurons. Cytoskeletal circuits of mixed-polarity microtubules ('quantum resonators') are key instruments of the quantum orchestra.

Stuart Hameroff MD is Professor of Anesthesiology and Psychology and Director of the Center for Consciousness Studies, Banner-University Medical Center, The University of Arizona, Tucson, Arizona.

SCI-HO

http://www.newscientist.com/topic/brain

Various scientific things about the brain, for whoever has the time or wish to read. It's actually all out there, and they are doing it at the universities as we speak. Pretty much everything I have heard of from the "TI", I have been able to find scientific support for in current experimenting and development worldwide.

Linked monkeys vs linked humans :)

http://www.newscientist.com/article/dn27869-animal-brains-connected-up-to-make-mindmelded-computer.html?full=true

Animal brains connected up to make mind-melded computer.

Two heads are better than one, and three monkey brains can control an avatar better than any single monkey. For the first time, a team has networked the brains of multiple animals to form a living computer that can perform tasks and solve problems. If human brains could be similarly connected, it might give us superhuman problem-solving abilities, and allow us to communicate abstract thoughts and experiences. "It is really exciting," says Iyad Rahwan at the Masdar Institute in Dubai, UAE, who was not involved in the work. "It will change the way humans cooperate." The work, published today, is an advance on standard brain-machine interfaces – devices that have enabled people and animals to control machines and prosthetic limbs by thought alone. These tend to work by converting the brain's electrical activity into signals that a computer can interpret. Miguel Nicolelis at Duke University Medical Center in Durham, North Carolina, and his colleagues wanted to extend the idea by incorporating multiple brains at once. The team connected the brains of three monkeys to a computer that controlled an animated screen image representing a robotic arm, placing electrodes into brain areas involved in movement. By synchronising their thoughts, the monkeys were able to move the arm to reach a target – at which point the team rewarded them with juice.

Brainet
Then the team made things trickier: each monkey could only control the arm in one dimension, for example. But the monkeys still managed to make the arm reach the target by working together. "They synchronise their brains and they achieve the task by creating a superbrain – a structure that is the combination of three brains," says Nicolelis.
He calls the structure a "brainet". These monkeys were connected only to a computer, not one another, but in a second set of experiments, the team connected the brains of four rats to a computer and to each other. Each rat had two sets of electrodes implanted in regions of the brain involved in movement control – one to stimulate the brain and another to record its activity. The team sent electrical pulses to all four rats and rewarded them when they synchronised their brain activity. After 10 training sessions, the rats were able to do this 61 per cent of the time. This synchronous brain activity can be put to work as a computer to perform tasks like information storage and pattern recognition, says Nicolelis. "We send a message to the brains, the brains incorporate that message, and we can retrieve the message later," he says. This is the way parallel processing works in computing, says Rahwan. "In order to synchronise, the brains are responding to each other," he says. "So you end up with an input, some kind of computation, and an output – what a computer does." Dividing the computing of a task between multiple brains is similar to sharing computations between multiple processors in modern computers, he says.

Bypassing language
"This is incredible," says Andrea Stocco at the University of Washington in Seattle, who was not involved in the project. "We are sampling different neurons from different animals and putting them together to create a superorganism." Things could get even more interesting once we are able to connect human brains. This will probably only be possible when better non-invasive methods for monitoring and stimulating the brain have been developed. "Once brains are connected, applications become just a matter of what different animals can do," says Stocco.
All anyone can probably ask of a monkey is to control movement, but we can expect much more from human minds, he says. A device that allows information transfer between brains could, in theory, allow us to do away with language – which plays the role of a "cumbersome and difficult-to-manage symbolic code", Stocco says. "I could send thoughts from my brain to your brain in a way not represented by sounds or words," says Andrew Jackson at Newcastle University, UK. "You could envisage a world where if I wanted to say 'let's go to the pub', I could send that thought to your brain," he says. "Although I don't know if anyone would want that. I would rather link my brain to Wikipedia." The ability to share abstract thoughts could enable us to solve more complex problems. "Sometimes it's really hard to collaborate if you are a mathematician and you're thinking about very complex and abstract objects," says Stocco. "If you could collaboratively solve common problems [using a brainet], it would be a way to leverage the skills of different individuals for a common goal."

Collective surgery
This might be a way to perform future surgery, says Stocco. At present, when a team of surgeons is at work, only one will tend to have control of the scalpel at any moment. Imagine if each member of the team could focus on a particular aspect of the operation and coordinate their brain power to collectively control the procedure. "We are really far away from that scenario, but Nicolelis's work opens up all those possibilities for the first time, which is exciting," he says. But there is a chance that such scenarios won't improve on current performance, Stocco says. Jason Ritt of Boston University agrees. "In principle we could communicate information much faster [with a brainet] than with vision and language, but there's a really high bar," he says.
"Our ability to communicate with technology is still nowhere near our ability to communicate with speech." The ability to share our thoughts and brain power could also leave us vulnerable to new invasions of privacy, warns Rahwan. "Once you create a complex entity [like a brainet], you have to ensure that individual autonomy is protected," he says. It might be possible, for example, for one brain to manipulate others in a network. There's also a chance that private thoughts might slip through along with ones to be shared, such as your intentions after drinking with someone you invited to the pub, says Nicholas Hatsopoulos at the University of Chicago in Illinois. "It might be a little scary," he says. "There are lots of thoughts that we have that we wouldn't want to share with others." In the meantime, Nicolelis, who also develops exoskeletons that help people with spinal cord injuries regain movement, hopes to develop the technology trialled in monkeys for paraplegic people. He hopes that a more experienced user of a prosthetic limb or wheelchair, for example, might be able to collaborate with a less experienced user to directly train them to control it for themselves.

http://www.pbs.org/wgbh/nova/next/body/brain-linked-monkeys-form-super-organism-deftly-control-robotic-arm/

Nanobots ready to swim in human blood

New mini robots soon to be injected into humans

Enter the nanobots
Scientists at the Micro/Nanophysics Research Laboratory at Australia's Monash University have developed tiny nanobot micromotors a mere quarter of a millimeter across, powered by tiny piezoelectric motors, capable of swimming in the human bloodstream. They are putting the finishing touches on the motors and readying them for clinical tests on animals and, before long, humans. While the team is still devising ways to remote-control the new robots, they feel that they have a solid solution for an autonomous motor design in the form of piezoelectricity. Piezoelectricity is the ability of devices to generate electric pulses based on mechanical movement or vibrations. Piezoelectric devices include computer clocks, electric guitar pickups, electric stove lighters, and some inkjet printer heads. In the human body, the flow of blood provides abundant kinetic energy. While a nanobot is too small to carry a useful battery, it could exploit this kinetic energy to power tiny micromotors. The new micromotors will allow nanobots to reach places that previous minimally invasive surgery could not, like the human brain. The team has developed prototypes of the micromotors, which they describe in a journal article in the Journal of Micromechanics and Microengineering. The next step is to develop more efficient assembly methods, and to devise ways to control the motors more accurately. The team's work should be a natural fit, though, for other researchers' designs, which feature useful arterial nanobots but lack a system of propulsion. The new work is similar to work by American researchers at Georgia Tech who are working to create blood-powered generators for implants.

- This is from 2008. It seemed funny when people were mentioning nano robot implants, but they actually exist.
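To give a feel for the piezoelectric harvesting idea described above, the charge a piezoelectric element produces under an axial force is approximately Q = d33·F, where d33 is the material's piezoelectric charge coefficient in coulombs per newton. A minimal sketch; the coefficient and force below are illustrative assumptions (a typical ballpark for PZT ceramics), not figures from the article:

```python
# Sketch of piezoelectric charge generation: Q = d33 * F.
# d33: piezoelectric charge coefficient (C/N); the value below is an
# illustrative assumption typical of PZT ceramics, not a measured figure.
D33_PZT = 500e-12  # C/N

def charge_coulombs(force_newtons: float, d33: float = D33_PZT) -> float:
    """Charge produced by an axially loaded piezoelectric element."""
    return d33 * force_newtons

# A small pulsing force (e.g. from blood flow, assumed 0.01 N) yields a
# correspondingly tiny charge pulse per cycle:
q = charge_coulombs(0.01)
print(f"{q:.1e} C per pulse")
```

The numbers make the engineering challenge obvious: each pulse yields only picocoulombs of charge, which is why such harvesters rely on high-frequency vibration and accumulation circuitry rather than single strokes.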
by Linda Xu

When you imagine telepathy, your mind probably jumps immediately to science fiction: the Vulcans of Star Trek, Legilimency in Harry Potter, or the huge variety of superheroes and super-villains who possess powers of telekinesis or mind control. Twenty years ago, these concepts would have been mere fictional speculation, but today, in neuroscience labs around the world, new research is turning the startling possibility of brain-to-brain communication into reality. Imagine this: a man wearing a strange polka-dot cap sits in front of a computer screen, watching an animated rocket fly from a pirate ship toward a city. In the game, the only defense against the rocket is a cannon, which can be fired by pressing a key on a keyboard in an adjacent room. As the man watches the rocket make its first flight from the ship, he thinks about moving his finger, without actually moving anything at all. In the adjacent room, another capped man — his cap connected to the first man’s through wires, a computer, and the internet — sits with his hand relaxed over the keyboard, unable to see the first man and oblivious to the impending doom of his animated city. Suddenly, his brain receives a jolt of electrical stimulation, and his finger involuntarily jerks down on the key, bringing down the rocket. Together, these two men have successfully saved the city, and more importantly, they have achieved the once unthinkable task of direct brain-to-brain communication (1). If you haven’t been keeping up with the current neuro-technology scene, the above scenario may sound like nothing more than a tale of scientific fancy.
However, incredibly enough, this exact experiment was completed just a few months ago at the University of Washington and is only the latest in a long string of milestones toward "synthetic telepathy." In this article, we will touch upon each of these milestones, most notably the development of the brain-computer interface, a key stepping stone on the path to the brain-to-brain interface. After a brief discussion of the research itself, we will take a look at the ethics of the technology and what these advancements really mean for the rest of us.

From Synapses to Sensation

In order to fully appreciate brain interface technology, one must go back to the basics of neuroscience. The nervous system (consisting of the spinal cord and the brain) is made up of cells called neurons, which have the unique characteristic of being able to communicate with each other using electrical signals, through connections called neural synapses (2). Using this communication system, one neuron can send an electrical "message" to another neuron or even to an entire network of neurons, allowing for an immense number of possible firing patterns. This complexity is the primary reason why, to this day, the exact mechanisms by which neural firing patterns create phenomena like memory, consciousness, sensory experience, and motor action remain largely unknown.

Despite this obstacle, scientists have discovered ways to manipulate the brain without completely understanding the mechanisms behind its activity. Twentieth-century neurosurgeon Wilder Penfield was at the forefront of this advancement, earning the title of "greatest living Canadian" for his famous neural stimulation experiments (3). During his surgeries, Penfield probed the brains of his numbed but conscious patients and observed what effect probing a certain area of the brain had on the patient.
Remarkably, Penfield found that stimulation of specific areas of the brain correlated with very specific functions and areas of the body; for instance, probing the temporal lobes would cause the patient to undergo a vivid memory recall, while probing another area might cause the feeling of being rubbed on the stomach or pinched on the left foot (3).

This rudimentary "brain-poking" experiment was what eventually led scientists to the theory that thought and behavior could be predicted by measuring patterns of activity in the brain. In other words, Penfield's observations implied that someone could get an idea of what you were doing or thinking simply by taking relatively basic measurements of the changes in your brain activity. From this theory, the path was paved for the development of the first brain-computer interface, a device that could connect a brain to a computer that would record and interpret its activity. Three basic steps would be required to achieve this landmark development: reliable detection of the electrical activity of the brain, accurate interpretation of the meaning of this activity, and prompt transformation of this interpreted activity into useful action.

The Dawn of the Brain-Computer Interface

A solution to the first step (reliable detection of the electrical activity of the brain) was already on its way by 1929, when German neurologist Hans Berger recorded the first human electroencephalogram (EEG) (4). By placing external electrodes on the scalp of a 17-year-old surgical patient, Berger was able to measure the electrical potential across the electrodes and thus detect the changing electrical activity of the neurons in the patient's brain. To this day, the EEG remains one of the most common methods of recording activity in the brain, favored over the MRI and PET scan for its cheap and relatively simple procedure.
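To make "reliable detection of the electrical activity of the brain" concrete, here is a minimal sketch of one standard EEG processing step: estimating signal power in the alpha band (8-12 Hz) with an FFT. The synthetic sine-plus-noise signal below stands in for real electrode data, and all parameters are illustrative assumptions:

```python
import numpy as np

fs = 250                                # assumed sampling rate, Hz
t = np.arange(0, 2.0, 1.0 / fs)         # two seconds of signal
rng = np.random.default_rng(0)

# Synthetic "EEG": a 10 Hz alpha-like rhythm (~10 microvolts) plus noise.
eeg = 10e-6 * np.sin(2 * np.pi * 10 * t) + 2e-6 * rng.standard_normal(t.size)

def band_power(signal, fs, lo, hi):
    """Mean spectral power between lo and hi Hz, via a plain FFT."""
    freqs = np.fft.rfftfreq(signal.size, 1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / signal.size
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].mean()

alpha = band_power(eeg, fs, 8, 12)   # band containing the 10 Hz rhythm
beta = band_power(eeg, fs, 13, 30)   # band containing only noise here
print(f"alpha/beta power ratio: {alpha / beta:.1f}")
```

Real BCI pipelines use far more elaborate filtering and artifact rejection, but band power over scalp electrodes is essentially the quantity Berger's successors have been measuring since 1929.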
The next step, then, was to translate this recorded brain activity into meaningful information. This step was taken in the 1970s by researchers at UCLA, funded by none other than the US Defense Advanced Research Projects Agency (DARPA), the leading national organization in military defense research. In a project aptly named the Brain-Computer Interface project, researchers strove to develop the first interface between a brain and a computer that would not only detect brain activity but interpret it and use it for communicative or medical purposes (5). In a resulting publication from this project, researcher Jacques Vidal laid out the basics of BCI research, exploring the potential methods, limitations, and possibilities for the development of an EEG-based BCI, a device that he believed was still in the far future (5).

While laying the foundations of BCI research, Vidal could not have predicted the rapid string of breakthroughs that would follow in the next decades. Already by the late 1960s and 1970s, researchers like Eberhard Fetz were able to demonstrate that brain activity could actually be controlled to a significant extent, as demonstrated by the successful training of a monkey to deliberately increase the firing rate of specific neurons (6). In the 1980s, correlations between brain activity and motor response were specified in great detail, allowing scientists to pinpoint the exact neurons and firing patterns behind specific movements (7). In 1999, Miguel Nicolelis, who would go on to create the first animal brain-to-brain interface just fourteen years later, trained rats to control a robotic limb using only their brains (8).

By the 21st century, the media had caught up to the breakneck speed of BCI research, with the popularization of fMRI image reconstruction, touted by reporters to give scientists the ability to "pee[r] into another man's mind" (9).
In these studies, subjects viewed specific images (anything from a simple black plus sign to a fully colored landscape) while their concomitant brain activity was recorded by an fMRI machine. Based solely on this recorded brain activity, scientists were able to use computer algorithms to "reconstruct" the image viewed by the subject with startling accuracy (10). Dubbed "mind-reading" and touted as a possible avenue toward an accurate lie detector, fMRI image reconstruction brought the incredible possibilities of BCI research into the public eye.

The Brain-to-Brain Interface: From Reading Minds to Controlling Minds

However, where things start to get truly exciting (and truly controversial) is not in brain-computer interface research, but in the pioneering field of brain-to-brain interface (BBI) research. While a BCI connects a brain to a computer that then interprets brain activity, a BBI connects a brain to another brain, which can then receive information from the first brain or even be induced to perform specific behaviors, as in the example of the telepathic cannon game. Arguably, BBI is not significantly different from BCI; a brain can simply be seen as a more sophisticated, organic computer, and the BBI as just the next logical extension of the BCI. Nevertheless, there is an undeniable sense of intrigue that comes with the idea of connecting living brains.

It is difficult to overstate the sheer magnitude of possibilities in this new field of research, but it is also important not to stray too far from the raw experimental findings. Research in this area began most notably in the aforementioned Nicolelis Lab at Duke University, with a study on a rat brain-to-brain interface. In the 2013 study, "A Brain-to-Brain Interface for Real-Time Sharing of Sensorimotor Information," two rats were placed in separate cages and each given a choice of two levers: one that resulted in a reward of water and one that did not.
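Reconstruction studies of this kind generally work by fitting a linear mapping from recorded brain activity back to stimulus features. The following is a toy sketch of that idea only, with invented dimensions and simulated voxel data, not the actual published method:

```python
import numpy as np

# Toy "image reconstruction": simulate voxel responses as a noisy linear
# function of pixel intensities, fit a ridge-regularized decoder on training
# stimuli, then reconstruct a held-out stimulus. All sizes are invented.

rng = np.random.default_rng(1)
n_pixels, n_voxels, n_train = 16, 64, 200

W = rng.standard_normal((n_pixels, n_voxels))     # pixel -> voxel encoding
stims = rng.standard_normal((n_train, n_pixels))  # training "images"
resp = stims @ W + 0.1 * rng.standard_normal((n_train, n_voxels))

# Decoder D maps voxel responses back to pixel estimates (ridge regression).
lam = 1.0
D = np.linalg.solve(resp.T @ resp + lam * np.eye(n_voxels), resp.T @ stims)

# Reconstruct a stimulus the decoder has never seen.
test_stim = rng.standard_normal(n_pixels)
test_resp = test_stim @ W + 0.1 * rng.standard_normal(n_voxels)
recon = test_resp @ D

corr = np.corrcoef(test_stim, recon)[0, 1]
print(f"reconstruction correlation: {corr:.2f}")
```

With enough training stimuli and a stable encoding, even this crude linear decoder recovers the held-out stimulus well, which is the core intuition behind the "mind-reading" headlines.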
A rat dubbed the "encoder" was shown a flash of light above the correct lever and was trained to learn this association. The "decoder" rat, on the other hand, was given no visual cues, but its brain received stimulation from the cortical area of the "encoder" rat through the BBI. In a breakthrough finding, Nicolelis discovered that the "decoder" rat was able to choose the correct lever with over 70% accuracy, with no cues or information except the learned knowledge "sent" by the neural activity of the "encoder" rat's brain (11).

Within the year, researchers at Harvard Medical School were able to connect a human brain and a rat brain through a BBI and move the rat's tail with 94% accuracy, using only the deliberate neural activity of the human's brain (12). Four months later, the experiment described in the introduction was completed at the University of Washington, demonstrating the first successful use of a human brain-to-brain interface (1).

Hopes and Considerations for the Future

We are only in the infancy of BCI and BBI research, and as the technology continues to take leaps and bounds into the future, more and more areas of our lives will feel the impact of these advances. In particular, prosthetic limbs, prosthetic vision devices, prosthetic hearing devices, and communication devices for paralyzed patients are all currently being implemented as prototypes, such as the robotic arm created at Brown University in 2012, which allowed a woman who had been paralyzed for nearly 15 years to use a robotic arm to drink a bottle of coffee (13).
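A quick sanity check on figures like the 70% accuracy above: with two levers, chance performance is 50%, so the real question is how improbable 70% correct would be by luck alone. The trial count below is an assumption for illustration; the study's actual session lengths differ:

```python
from math import comb

def p_at_least(k, n, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p): exact tail sum, no approximation."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

n_trials = 50                       # assumed number of trials per session
p_value = p_at_least(35, n_trials)  # 35/50 = 70% correct
print(f"P(>=70% correct by chance) = {p_value:.4f}")
```

Under these assumed numbers the chance explanation is very unlikely, which is why a "mere" 70% in a two-choice task still counts as a breakthrough result.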
Outside of the medical field, research in military communication technology has continued to progress, as demonstrated in the research of Gerwin Schalk, who recently published a study on the successful identification of spoken and imagined words using EEG (14).

Although advances in games and entertainment may seem trivial compared to the more "practical" developments of medicine and technology, the impact of brain interface technology on everyday life is certainly worth pondering as well. Imagine, for instance, being able to play virtual video games in which you control your character simply by thinking an action or imagining a scenario. Companies like NeuroSky, Mindflex, and Necomimi are already putting out BCI products for the public, including a game that uses "brain force" to navigate a ball through a maze and a pair of costume cat ears that wiggle, perk up, or lie flat depending on your neural activity. As research continues, devices such as these are sure to be welcomed into the entertainment world and could even be used for educational or therapeutic purposes, for adults and children alike.

Undoubtedly, brain interface technology has both the power and the potential to do incredible good. With this in mind, it is crucial to also recognize the possibility for ethical wrongdoing. Concerns about privacy, autonomy, enhancement, and the consequent aggravation of social stratification are only a handful of the ethical issues on the horizon, and these concerns are only intensified by the fear of media exaggeration and inaccuracy. Furthermore, philosophical questions of human existence will become increasingly important as research progresses. What does it mean for brains to be "connected"? What kind of information can be taken and shared between living brains? What distinguishes a human from a machine, and what, if anything, distinguishes a brain from a computer?
These questions may have been impossible to answer in the past, but with the advancement of brain-to-brain interface technology, we may reach satisfying answers at last.

In the end, the future of a world with brain interface technology relies on the preparation and research done today. Consideration of the ethical issues to come, as well as thorough discussion of the boundaries that must be set, will help to ensure that ethical lines are not crossed in our enthusiasm to push the limits of technology. From medicine and military technology to games and recreation, brain interfacing truly has the potential to change the world. By maintaining a judicious balance between scientific progress and ethical caution, we can ensure that these changes are for the better.

Linda Xu is a sophomore in Eliot concentrating in Neurobiology.

References
1. R. P. N. Rao and A. Stocco. (2013). Direct brain-to-brain communication in humans: a pilot study. [Online]. Available: http://homes.cs.washington.edu/~rao/brain2brain/index.html. [2014, Feb. 24].
2. M. F. Bear et al., Ed., Neuroscience (Lippincott Williams & Wilkins, Philadelphia, ed. 2, 2007).
3. Wilder Penfield. PBS. [Online]. Available: http://www.pbs.org/wgbh/aso/databank/entries/bhpenf.html. [2014, Feb. 24].
4. L. Haas. Hans Berger (1873-1941), Richard Caton (1842-1926), and electroencephalography. J. Neurol. Neurosurg. Psychiatry 74, 9 (2003).
5. J. J. Vidal. Toward direct brain-computer communication. Annu. Rev. Biophys. Bioeng. 2, 157-180 (1973).
6. E. E. Fetz. Operant conditioning of cortical unit activity. Science 163, 955-958 (1969).
7. A. P. Georgopoulos, J. T. Lurito, M. Petrides, A. B. Schwartz, J. T. Massey. Mental rotation of the neuronal population vector. Science 243, 234-236 (1989).
8. J. K. Chapin, K. A. Moxon, R. S. Markowitz, M. A. L. Nicolelis. Nature Neuroscience 2, 664-670 (1999).
9. J. Wise. Thought Police: How Brain Scans Could Invade Your Private Life. Popular Mechanics. [Online]. Available: http://www.popularmechanics.com/science/health/nueroscience/4226614. [2014, Feb. 25].
10. F. Tong and M. S. Pratte. Decoding patterns of human brain activity. Annual Review of Psychology 63, 483-509 (2012).
11. M. Pais-Vieira, M. Lebedev, C. Kunicki, J. Wang, M. A. L. Nicolelis. A brain-to-brain interface for real-time sharing of sensorimotor information. Scientific Reports 3, 1-10 (2013).
12. S. Yoo, H. Kim, E. Filandrianos, S. J. Taghados, S. Park. Non-invasive brain-to-brain interface (BBI): establishing functional links between two brains. PLoS ONE 8, e60410 (2013).
13. L. R. Hochberg, D. Bacher, B. Jarosiewicz, N. Y. Masse, J. D. Simeral et al. Reach and grasp by people with tetraplegia using a neurally controlled robotic arm. Nature 485, 372-375 (2012).
14. X. Pie, D. L. Barbour, E. C. Leuthardt, G. Schalk. Decoding vowels and consonants in spoken and imagined words using electrocorticographic signals in humans. J. Neural Eng. 8, 046028 (2011).

Source: http://harvardsciencereview.com/2014/05/01/synthetic-telepathy/

Title

This weird :) message is not intended for members of this site, so don't bother reading it; it's for them, the ones doing this. I'm not sure if they got it earlier today, so I'll just post it here :)) Moderators, please do not delete or censor it.



6/26/2014 - GOOD EVENING LADIES AND GENTLEMEN, HOW ARE YOU DOING? I HAVE LOST YOUR CONTACT; IT HAS BEEN ALMOST A YEAR, AND I DIDN'T EXPECT IT WOULD GET THIS FAR. PLUS, SYNTHETIC TELEPATHY IS NON-FUNCTIONAL, SO I HAD TO WRITE TO YOU LIKE THIS. YOUR PROJECT HAS BEEN SAFE AND WELL COVERED SO FAR, BUT I THINK IT'S TIME TO SHUT IT DOWN AND LEAVE THIS HOUSEHOLD / NEIGHBOURHOOD ALONE, AND THE OTHERS CONNECTED TO IT AS WELL, QUITE A BIG NUMBER. IT CAUSES TOO MANY HEADACHES AND VERY BAD NEURAL INTERFERENCE FOR EVERYONE, AND HAS ALSO CAUSED MAJOR PARANOIA, PANIC ATTACKS AND PSYCHOSIS IN MANY PEOPLE. YOU DON'T WANT TO MAKE A BIG LIST OF PEOPLE REGISTERED AT THE DOCTOR'S OFFICE; THERE HAVE BEEN SEVERAL SO FAR, AND MANY NAMES CAN BE CONNECTED TO EACH OTHER, INCLUDING MEDICAL RECORDS. THEY HAVE FAMILIES AND YOU ARE RUINING THEIR LIVES; IT HAS GONE QUITE FAR. WHY WOULD YOU INDUCE PSYCHOSIS IN THE MEMBERS OF A FINE, WORKING CANADIAN FAMILY SO THEY SPEND MONEY ON DOCTORS AND USELESS MEDICATIONS? MAKE YOUR PROFIT ELSEWHERE. ALSO, I DON'T KNOW WHAT MAKES YOU THINK IT'S HARMLESS TO PLAY IT ON KIDS; ARE YOU THAT SICK? DEFINITELY NOT FUNNY. AS FOR ME, I'M DOING VERY FINE; LUCKILY I'M NOT VERY AFFECTED BY IT. THANK YOU FOR ALL THE INFO YOU HAVE GIVEN ME; AT THE SAME TIME I HOPE, AND I'M QUITE SURE, YOU GOT LOADED WITH PLENTY OF CRAP, AND I JUST HOPE THAT NEITHER I NOR ANYONE RELATED TO ME GETS REMOTE CANCER OR SOME KIND OF LIFE-THREATENING ACCIDENT, BESIDES YOUR ATTEMPTS AT NEURAL INFLUENCE. YOU CAN TAKE IT TO YOUR OWN HOME AND FAMILY; YOU WILL GET MUCH BETTER FEEDBACK IN REAL TIME, AND HOPEFULLY A SUCCESSFUL EXPERIMENT, SINCE THIS ONE IS NOT WORKING. YOU HAVE BEEN RUNNING THE SAME SCHEME FOR YEARS AND HAVEN'T GOTTEN ANYWHERE; ARE YOU THAT INCOMPETENT? I'M VERY AWARE OF WHO I'M DEALING WITH. YOU WILL SUCCEED IN COVERING IT UP FOR A WHILE, BUT STILL, DON'T UNDERESTIMATE THE TRACEABLE RECORDS; MOST OF THEM TIME WON'T ERASE, AND I'M NOT TALKING WITH OR ABOUT DELUSIONS. I HOPE I HAVE WASTED A LOT OF YOUR FUNDING AND TIME; I TRIED MY BEST.
PS: PUT A FINGER IN YOUR ASS TO COMPENSATE AND FEEL BETTER. :))

+ FOR THOSE WHO SPONSOR AND SUPPORT THIS TYPE OF PROJECT: IT'S NOT WHAT YOU THINK IT IS, AND IT NEVER WILL BE.
