Worldwide Campaign to stop the Abuse and Torture of Mind Control/DEWs
American Psychologist article: 1973 Voice to Skull Demonstration
Artificial microwave voice-to-skull transmission was successfully demonstrated by researcher Dr. Joseph Sharp in 1973, announced at a seminar at the University of Utah in 1974, and described in the journal American Psychologist in the March 1975 issue, in the article "Microwaves and Behavior" by Dr. Don Justesen. USE YOUR BROWSER'S ZOOM FEATURE TO MAKE READING THE SCANS EASIER. (Try the "View" menu.)
http://www.randomcollection.info/ampsychv2s.pdf
V2K (voice to skull): in 2002, the Air Force Research Laboratory patented precisely such a technology, a nonlethal weapon which includes
(1) a neuro-electromagnetic device which uses microwave transmission of sound into the skull of persons or animals by way of pulse-modulated microwave radiation; and
(2) a silent sound device which can transmit sound into the skull of persons or animals. NOTE: The sound modulation may be voice or audio subliminal messages. One application of V2K is use as an electronic scarecrow to frighten birds in the vicinity of airports. http://call.army.mil/products/thesaur/00016275.htm
http://www.fas.org/sgp/othergov/dod/vts.html
Electronics behind voice to skull
There are 2 types of voice to skull:
1. The pulsed microwave method: every time the voice wave goes from positive to negative, a microwave pulse is generated. For every pulse the brain hears a click, and all these clicks form a kind of digital audio. This goes through walls.
2. The silent sound method: a steady tone is frequency-modulated with a voice wave. The ear hears hissing, but the brain hears a voice. This is a form of analog audio. This doesn't go through walls.
The two methods can be combined: the output of method 2 is used as the input of method 1. The result goes through walls.
http://www.hearingvoices-is-voicetoskull.com/ElectronicsBehindV2K.htm
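The two modulation schemes described above can be illustrated as ordinary signal processing. Below is a purely illustrative Python sketch (the sample rate, carrier frequency, and deviation values are assumptions, not taken from any source): method 1 reduces a waveform to pulse times at its positive-to-negative zero crossings, method 2 frequency-modulates a steady tone with the voice, and the combination feeds method 2's output into method 1.

```python
import math

SAMPLE_RATE = 44_100  # Hz; an assumed audio sampling rate for this sketch

def pulse_positions(voice):
    """Method 1 sketch: emit one pulse index at every positive-to-negative
    zero crossing of the waveform (each pulse would be heard as a click)."""
    pulses = []
    for i in range(1, len(voice)):
        if voice[i - 1] > 0 >= voice[i]:
            pulses.append(i)
    return pulses

def silent_sound(voice, carrier_hz=14_000.0, deviation_hz=50.0):
    """Method 2 sketch: frequency-modulate a steady tone with the voice;
    instantaneous frequency = carrier + deviation * sample value."""
    out, phase = [], 0.0
    for sample in voice:
        phase += 2 * math.pi * (carrier_hz + deviation_hz * sample) / SAMPLE_RATE
        out.append(math.sin(phase))
    return out

# Combined method, as described above: the FM output of method 2 becomes
# the input waveform from which method 1 derives its pulse train.
n_samples = SAMPLE_RATE // 10                              # 0.1 s of audio
voice = [math.sin(2 * math.pi * 440 * n / SAMPLE_RATE)     # stand-in "voice":
         for n in range(n_samples)]                        # a plain 440 Hz tone
pulses = pulse_positions(silent_sound(voice))
```

Because the FM carrier cycles about 1,400 times in 0.1 s, the combined pipeline produces roughly one pulse per carrier cycle rather than one per voice cycle.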
MEDUSA (Mob Excess Deterrent Using Silent Audio) is a directional, non-lethal weapon designed for crowd control and exploiting the microwave auditory effect. It uses microwave pulses to generate uncomfortably high noise levels in human skulls, bypassing the ears and ear drums.
MEDUSA is developed by the Sierra Nevada Corporation.
A device - dubbed MEDUSA (Mob Excess Deterrent Using Silent Audio) - exploits the microwave audio effect, in which short microwave pulses rapidly heat tissue, causing a shockwave inside the skull that can be detected by the ears. A series of pulses can be transmitted to produce recognisable sounds. The device was aimed for military or crowd-control applications, but may have other uses.
Microwave ray gun controls crowds with noise,
03 July 2008 by David Hambling
http://www.newscientist.com/article/dn14250
NASA Develops System To Computerize Silent, "Subvocal Speech" http://www.nasa.gov/home/hqnews/2004/mar/HQ_04093_subvocal_speech.html
DARPA UNCLASSIFIED DOCUMENT - SONIC FREQUENCIES TRANSMISSION.
http://www.cftc.gov/ucm/groups/public/@lrfederalregister/documents/...
NLP is the program behind the "voices" in V2K:
"NLP is the branch of computer science focused on developing systems that allow computers to communicate with people using everyday language."
http://www.cs.utexas.edu/~mooney/cs343/slide-handouts/nlp.pdf
Summary Information
The main goal of the Phase I project was to design and build a breadboard prototype of a temporary personnel incapacitation system called MEDUSA (Mob Excess Deterrent Using Silent Audio). This non-lethal weapon is based on the well established microwave auditory effect (MAE). MAE results in a strong sound sensation in the human head when it is irradiated with specifically selected microwave pulses of low energy. Through the combination of pulse parameters and pulse power, it is possible to raise the auditory sensation to the “discomfort” level, deterring personnel from entering a protected perimeter or, if necessary, temporarily incapacitating particular individuals.
Summary of Results from the Phase I Effort
The major results of the Phase I effort were that:
- An operating frequency was chosen
- Hardware requirements were established (commercial magnetron, high-voltage pulse former)
- Hardware was designed and built
- Power measurements were taken and the required pulse parameters confirmed
- Experimental evidence of MAE was observed
Potential Applications and Benefits
Potential applications of the MEDUSA system are as a perimeter protection sensor in deterrence systems for industrial and national sites, in systems to assist communication with hearing-impaired persons, and for use by law enforcement and military personnel for crowd control and asset protection. The system will: be portable; require low power; have a controllable radius of coverage; be able to switch from crowd to individual coverage; cause a temporarily incapacitating effect; have a low probability of fatality or permanent injury; cause no damage to property; and have a low probability of affecting friendly personnel.
http://www.navysbirprogram.com/NavySearch/Summary/summary.aspx?pk=F...
Patented applications
Flanagan GP. Patent #3393279 “Nervous System Excitation Device” USPTO granted 7/16/68.
http://www.google.com/patents?vid=3393279
Puharich HK and Lawrence JL. Patent #3629521 “Hearing systems” USPTO granted 12/21/71.
Malech RG. Patent #3951134 “Apparatus and method for remotely monitoring and altering brain waves” USPTO granted 4/20/76.
Thijs VMJ. Application #WO1992NL0000216 “Hearing Aid Based on Microwaves” World Intellectual Property Organization Filed 1992-11-26, Published 1993-06-10.
4858612 – Hearing device – A condensed summary states that “This invention provides for sound perception by individuals who have impaired hearing resulting from ear damage, auditory nerve damage, and damage to the auditory cortex. This invention provides for simulation of microwave radiation which is normally produced by the auditory cortex.” Stocklin, August 22, 1989.
4877027 – Hearing system – A condensed abstract states that “Sound is induced in the head of a person by radiating the head with microwaves...” Brunkan, October 31, 1989
5159703 - A silent communication system [which] relates in general to electronic audio signal processing and, in particular, to subliminal presentation techniques. Lowery, October 27, 1992.
6587729 – Apparatus for audibly communicating speech using the radio frequency hearing effect. O'Loughlin, et al. July 1, 2003
Mardirossian A. Patent #6011991 “Communication system and method including brain wave analysis and/or use of brain activity” USPTO granted 1/4/00.
http://www.google.com/patents?vid=6011991
O'Loughlin, James P. and Loree, Diana L. Patent #6470214 "Method and device for implementing the radio frequency hearing effect" USPTO granted 22-OCT-2002.
Video: V2K documentary, approx. 15 mins
Here is a V2K documentary that makes a reasoned argument:
The book “Twelve Years in the Grave - Mind Control with Electromagnetic Spectrums, the Invisible Modern Concentration Camp”, authored by Soleilmavis Liu, presents facts and evidence about the secret abuse and torture carried out with remote voice-to-skull and electromagnetic mind control technologies.
http://www.lulu.com/spotlight/soleilmavis
http://www.itu.dk/research/delca/papers/marius/interfacing_ambient_...
Interfacing ambient intelligence
Marius Hartmann
IT University of Copenhagen
Rued Langgaards Vej 7, 2300 København S, Denmark
hartmann@itu.dk
ABSTRACT
This paper describes an interface for disembodied, location-specific conversational agents (DELCA) called ‘Ghosts’.
The design includes conversation dialogue and a novel, non-intrusive minimal dynamic visualization. The paper presents two discreet visualizations, Ghost Wake and Animated Ghost Icons (AGI), which make use of the temporal dimension to increase spatial resolution. The paper argues that design for ambient intelligence must strive for a balance between visibility and non-intrusiveness.
Author Keywords
DELCA, Ghosts, mixed reality, interface agents, voice recognition, ambient intelligence.
ACM Classification Keywords
Primary Classification: H.5.2 User Interfaces (D.2.2, H.1.2, I.3.6).
INTRODUCTION
“ghost … 3: the visible disembodied soul of a dead person….” [14].
The spread of computers is approaching Weiser’s famous vision [13] in which the computer will become invisible.
This paper proposes a discreet visualization method for a mobile computing interface, which makes use of invisible personalized agents that mainly manifest themselves through speech. A prototype of DELCA is currently being implemented that makes use of WLAN tracking, PDA and mobile devices, AIML and an ensemble of more than 30 synthetic voices, each representing a Ghost with individual specialties and character traits. The system consists of audio, mobile displays and the enhancement of physical space with small-size LED signs [11, 4].
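As a rough illustration of the ensemble idea, each Ghost can be modeled as an agent with a specialty and one of the synthetic voices, selected from a registry per task. The class and field names below are hypothetical, not the actual DELCA implementation:

```python
from dataclasses import dataclass
from typing import List, Optional

# Hypothetical sketch of the Ghost ensemble: class names, fields and
# specialty strings are illustrative assumptions, not the DELCA API.

@dataclass
class Ghost:
    name: str
    specialty: str      # e.g. "guidance", "printing", "exercise"
    voice_id: int       # index into the ensemble of synthetic voices

class GhostRegistry:
    def __init__(self) -> None:
        self._ghosts: List[Ghost] = []

    def register(self, ghost: Ghost) -> None:
        self._ghosts.append(ghost)

    def for_task(self, specialty: str) -> Optional[Ghost]:
        """Return the first registered Ghost matching the task, if any."""
        for ghost in self._ghosts:
            if ghost.specialty == specialty:
                return ghost
        return None

registry = GhostRegistry()
registry.register(Ghost("The Butler", "guidance", voice_id=1))
registry.register(Ghost("Printer Jan", "printing", voice_id=2))
butler = registry.for_task("guidance")
```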
User scenario
Mrs. Jones enters the IT University of Copenhagen a bit late for a meeting with Mr. Hansen. In the reception she is greeted by a male voice: “Hello, I am the Butler, may I offer my assistance? Please turn on your PDA.”
Mrs. Jones accepts the DELCA Ghost client on her PDA; it immediately recognizes the invitation for the meeting Mr. Hansen sent her the day before. The Butler continues: “Allow me to guide you to room 2.31 where Mr. Hansen will be joining us. Let us take the stairs to the left.”
The main difference between the DELCA approach and the traditional HCI “agent” (e.g. Smartakus, Mob-I, Rea [3, 6, 9]) is an intensive use of auditory communication supplemented with timely, minimal visual cues. The characters are humanlike by virtue of their vocal expressions. Visual cues signaling presence are kept to a minimum (hence ‘Ghosts’).
The Ghost metaphor acts as an immediate explanation for the character's lack of bodily presence in the visual domain, and hopefully makes the idea of a seemingly omnipresent Ghost service following the user around a building easier to conceive.
RELATED WORK
Interface agents which apply anthropomorphic features in order to gain a more personalized service have been suggested as an alternative to the Windows-Icons-Menus-Pointer (WIMP) interface [10]. SmartKom [9] is a uniform multimodal dialogue interface that makes use of a ‘personalized interaction agent’ called ‘Smartakus’.
Smartakus is designed as an anthropomorphic three-dimensional character whose visual appearance changes with the various screen sizes he may inhabit. For instance, only his head is visible when shown on a PDA.
The Mob-i [6] is a virtual creature visually designed as a mobile phone itself. Mob-i is capable of displaying a number of system states by facial expressions. The ‘reminder’ message, for instance, is comprised of a happy face while the ‘low battery’ indication is comprised of a tired or sad looking face. Suppose you need to get a ‘reminder’ message while the Mob-i is low on batteries: what should the face look like then? The range of expressions a system would need in order to cover all of the possible state combinations seems too vast for a cartoon-like approach. In addition, ambient intelligence systems have to relate to external contextual states as well.
Ben Shneiderman has warned against the use of anthropomorphized representations for digital assistance because he fears that they may mislead the user into believing that the agent possesses real intelligence [10]. The application of social traits to computers can, however, be achieved with less than fully articulate visual agents. Nass et al. [8] show that users are willing to interact with computers as if they were distinct selves, without any assignment of human qualities other than voice. Users were willing to treat computers as humans even though they were aware this was not the case. Nass et al. also found that the use of a human voice alone is sufficient to induce this behavioral pattern, and that different voices are treated by users as distinct agents [11].
Brennan points out that human communicative skills are highly adapted to adjust to the abilities of the interaction partner. For instance, older children adapt their syntax when speaking to younger children. This is, in part, what makes interaction with ‘dumb’ technology like the computer possible in the first place [2].
Like Brennan and Nass et al., we believe that users will be capable of distinguishing between artificial and human intelligence. Facial representations of agents on a mobile display have serious limitations because they require constant attention and do not allow the user to focus on other activities. Moreover, an anthropomorphic visual agent struggles with the constraints of small-size displays, which make it difficult to express subtle affective facial features. As an alternative, the DELCA approach uses the audio-language modality in timely combination with discreet visual signs on the mobile device and in the physical surroundings.
CHALLENGES
How should visual support for invisible agents be designed, and what is the purpose of visualizing? So far we have identified landmarks, presence, service identity and coverage range as important foundational components of the DELCA interface.
The challenge is how to convey these fundamental components by visual means to support the audio-language interface of a Ghost. The interface should have no face; it should work regardless of image resolution. Finally, it should be omnipresent, yet calm.
VISUAL DESIGN OF A GHOST
The visual design of the DELCA interface is divided into three main areas: dialogue, exteriors and announcement.
Dialogue is the primary interaction mode illustrated in the scenario and further discussed in Folmann’s research [4].
Exteriors expand the interface beyond the physical device and take advantage of the physical environment. In guidance tasks, for instance, the system may employ local speakers, monitors, idle computers or low-cost, low-resolution LED signs connected to the network (Fig. 1).
Announcements notify the user of the comings and goings of Ghosts while moving around in the surroundings.
Fig. 1. A prototype of a low-res Ghost indicated on an 8x8 LED display mounted in the building.
Exteriors
By timing the visual occurrences with user interactions, causal relations to the dynamic environment are established.
For instance, when a user asks the system to show him the direction to Mr. Hansen, wall-mounted electronic displays light up, guiding him along.
User scenario continued
Mrs. Jones starts walking, but heads in the wrong direction. "Excuse me Mrs. Jones. You are not going in the right direction", the Butler comments.
"If you need directions, press help." Mrs. Jones presses 'help' on her PDA and the animated ghost pattern appears on the display. "Follow me please, I am now on the wall display.” Mrs. Jones looks up and notices the same pattern on a wall-mounted mini display some meters ahead of her. She walks toward the animated figure. “That’s the way to go,” the Butler comments....
Announcement
We suggest two ways to show the presence of a ghost service on a small PDA-like display.
Ghost Wake
The Ghost Wake technique looks like moving an object behind a thin cloth, revealing the object solely through the displacement of the cloth. In this way the Ghost lives in an invisible world behind our own, of which we may just catch a glimpse. In our case we make small displacements of pixels. Ghost Wake visualizes dynamic relations to the environment without intrusiveness or occlusion of the primary task. The Ghost Wake employs the temporal dimension to avoid cluttering the mobile display. Apart from having the quality of not adding or removing pixels from the interface, the temporal changes are immediately recognized by the user. The Wake does not necessarily require previous knowledge or mental decoding to produce meaning, in that the link from visual stimulus to the related service lies in the timing of the moving object, producing a causal experience [7].
While the Ghost leaves a temporary imprint in the primary task interface, it is not possible to see the actual identity of it. There is no other change in either color or composition of the primary task interface (Fig.2).
Fig. 2. A Ghost announces its presence as the user is browsing a webpage on his PDA by sailing across the page creating a highly visible trail.
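The displacement-only principle behind Ghost Wake can be sketched on a toy pixel grid. The grid and the particular displacement scheme below are illustrative assumptions, not the paper's actual rendering code:

```python
# Toy sketch of the Ghost Wake principle: the passing ghost displaces
# existing pixels (here, cyclically shifting one column) so it is seen
# only through the distortion, never by adding or removing pixels.

def apply_wake(grid, x, shift=1):
    """Return a copy of `grid` with column `x` cyclically shifted down
    by `shift` pixels; the pixel content of the page is unchanged."""
    out = [row[:] for row in grid]
    height = len(grid)
    for y in range(height):
        out[(y + shift) % height][x] = grid[y][x]
    return out

# A toy 4x4 "page" of pixel values.
page = [[y * 4 + x for x in range(4)] for y in range(4)]
frame = apply_wake(page, x=2)   # the ghost passes over column 2

# Only displacement took place; no pixels were added or removed:
assert sorted(v for row in frame for v in row) == sorted(v for row in page for v in row)
```

Animating `x` across successive frames would trace the ghost's path purely through the moving distortion, matching the "cloth" metaphor above.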
Animated Ghost Icons
We use Animated Ghost Icons to deal with the spatially very limited possibilities of portable displays. They resemble the structures of Conway's cellular automaton ‘Game of Life’ [5]: the structures possess an object identity recognizable at a very limited resolution, despite their constant transformations.
…“Physical Joe” helps people with exercises during a working day.
…“Printer Jan” manages print queues etc.
…“The Butler” guides people around.
Fig. 3. Images of various low-res animations of Ghosts. The full animation cycles consist of between 4 and 20 frames at an 8x8 pixel resolution.
Fig. 4. Combination of Ghost Wake and AGI. Three Ghosts shown in the rim are currently available, one more Ghost announces itself and will be shown in the rim as well.
These non-human-like, faceless visual indications are the product of animation cycles. Animation has been shown to be important for user perception of dynamic relations [7] and for understanding of functionality [1]. The cycles are functions of Ghost identities and current capacities: ‘Physical Joe’ looks like he’s doing physical exercise, ‘Printer Jan’ acts like printing, and ‘The Butler’ turns in all directions. The visual effect can be experienced at http://www.itu.dk/people/hartmann/delca/nordichi.
The frequency of the animation cycle may itself indicate the different states of a Ghost (‘busy’ or ‘waiting’), or it may move in response to ongoing user dialogue. Hereby the user will be able to see whom he or she is talking to. The immediate responsiveness of Ghost movements to the user's commands is crucial to establishing a causal relationship [7].
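The cycle-and-rate behaviour described above can be sketched as follows; the frames, rates and state names are illustrative assumptions, not the actual DELCA animations:

```python
# Sketch of an Animated Ghost Icon: a short cycle of 8x8 frames played
# at a rate that depends on the Ghost's state ('busy' animates faster
# than 'waiting'). Frames are strings of '.'/'#' pixels, purely made up.

FRAME_A = ["........",
           "..###...",
           ".#...#..",
           ".#.#.#..",
           ".#...#..",
           "..###...",
           "........",
           "........"]
FRAME_B = [row[::-1] for row in FRAME_A]   # mirrored second frame

CYCLE = [FRAME_A, FRAME_B]
RATE = {"busy": 8.0, "waiting": 2.0}       # frames per second, assumed

def frame_at(t, state):
    """Return the cycle frame displayed at time t (seconds) for a state."""
    index = int(t * RATE[state]) % len(CYCLE)
    return CYCLE[index]
```

Because only the playback rate changes, the same tiny frame set can signal state without any extra pixels or colors, in line with the calm-visualization goal.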
DISCUSSION
The Announcement and Exterior visualizations have to compete for user attention with an unknown number of distracters within the physical space. Motion is a potent way of grabbing the user's attention [12]. It may even prove too powerful, as seen with the jumping icon in OS X or animated banner ads on web pages. However, we believe that in the dynamic environment of mobile computing, getting the user's attention justifies such potent means. The visualizations have to be detectable not only in the user's periphery but also across distances and narrow temporal slots.
CONCLUSION
The visual design for DELCA contains outlines for a new kind of calm visualization. The general idea of how to visualize personalized services without falling victim to the communicative pitfalls of facial representations seems promising. Striving for a balance between calmness and visual significance seems to be the major design goal.
Animations can be used as powerful means to attract attention in exterior surroundings even on low-res displays.
They may also be used on mobile displays without occupying too much screen real estate. When the primary interaction mode is verbal, abstract animations may be a feasible alternative to face-like figures.
Ambient systems are omnipresent. The interface of such systems should be designed accordingly.
ACKNOWLEDGMENTS
REFERENCES
1. Baecker, R., Ian Small and Mander, R., Bringing icons to life. in Conference on Human Factors in Computing Systems (CHI '91), (1991), 1-6.
2. Brennan, S., Laurel, B. and Shneiderman, B., Anthropomorphism: From Eliza to Terminator 2. in CHI '92, (1992).
3. Cassell, J., Bickmore, T., Billinghurst, M., Campbell, L., Chang, K., Vilhjalmsson, H. and Yan, H., Embodiment in Conversational Interfaces: Rea. in CHI 99,(1999).
4. Folmann, T.B. DELCA : Disembodied Location-Specific Conversational Agents. Available at http://www.itu.dk/research/delca/papers/troels/DELCA%20THESIS.pdf.
5. Gardner, M. The fantastic combinations of John Conway's new solitaire game "life". Scientific American, 223. 120-123.
6. Marcus, A. and Chen, E. Designing the PDA of the future. Interactions. 34-44.
7. Michotte, A. The perception of causality. Methuen, 1963.
8. Nass, C., Steuer, J., Tauber, E. and Reeder, H., Anthropomorphism, Agency, & Ethopoeia: Computers as Social Actors. in Conference on Human Factors in Computing Systems, (Amsterdam, The Netherlands, 1993), 111-112.
9. Reithinger, N., Streit, M., Tschernomas, V., Alexandersson, J., Becker, T., Blocher, A., Engel, R., Löckelt, M., Müller, J., Pfleger, N. and Poller, P., SmartKom - Adaptive and Flexible Multimodal Access to Multiple Applications. in ICMI'03 International Conference On Multimodal Interface, (Vancouver, British Columbia, Canada, 2003), ACM Press, 101-108.
10. Shneiderman, B. and Maes, P. Direct Manipulation vs. Interface Agents. Interactions.
11. Sørensen, M.H. Enter the World of Ghosts. New Assisting and Entertaining Virtual Agents. Available at
http://www.itu.dk/people/megel/delcaghosts.doc.
12. Ware, C., Bonner, J., Knight, W. and Cater, R. Moving icons as a human interrupt. International Journal of Human-Computer Interaction, 4 (4). 341-348.
13. Weiser, M. The Computer for the 21st Century. Scientific American, 265 (3). 94-104.
14. WordNet. Princeton University, 1997.
© 2021 Created by Soleilmavis.