Synesthesia Technology

Figure 1. Brain-Computer Interface Device
Diodato, B. (2008). Brain-computer Interface Device. [Photo] Retrieved March 28, 2013 from http://ricelander.wordpress.com/2008/11/18/brain-computer-interface-will-human-become-machine/

Synesthesia is a neurological phenomenon in which a single stimulus produces simultaneous stimulation of two different sensory pathways[1]. Today, scientists have used synesthetic concepts to develop new technologies that convert one sensory input into a different sensory output. One of the main purposes of synesthetic technology is sensory prosthesis, in which a severely damaged sense is replaced by a functional one. An example is reading Braille, a writing system of raised-dot patterns that lets the blind “read” words with their fingers instead of their eyes. There are also synesthetic technologies that enhance certain sensory functions, with applications such as the detection of toxins in chemical workplaces, gaming control, and the medical diagnosis of certain diseases[2][3]. A major research subfield is brain-computer interface (BCI) technology, which links the human brain to a computerized device. An example of such a device is the BrainPort System developed by the Wicab lab in Wisconsin, USA[4]. It uses a camera to capture images of the user’s surroundings and an Intra-Oral Device (IOD) to convert them into tongue stimulation that carries the information to the brain[4]. Another synesthetic technology is the Optoelectronic Nose, created by a lab in the chemistry department at the University of Illinois[2]. The Optoelectronic Nose transforms olfactory input into visual output using a colorimetric sensor array of chemically responsive dyes that change color with changes in their chemical environment[2].

1. Brain-Computer Interface System

1.1. Overview

A brain-computer interface (BCI) is a system of hardware and software that allows cerebral activity alone to control external devices such as computers[5]. The main goal of BCIs is to restore communication to disabled people who cannot express themselves normally because of paralysis, spinal cord injury, or other neuromuscular disorders[6]. A BCI recognizes a set of brain signal patterns in five stages: signal acquisition, preprocessing (or signal enhancement), feature extraction, classification, and control interface[7].

Figure 2. Brain-Computer Interface Process
(2006) Multi-stage procedure for on-line BCI. [Digital]
Retrieved March 28, 2013 from http://www.brain.riken.jp/bsi-news/bsinews34/no34/research1e.html

The signal acquisition stage records the brain signals and may additionally perform artifact processing and noise reduction. The preprocessing stage converts the signals into a form more suitable for further processing. The feature extraction stage pulls the target information out of the recorded signals and maps it onto a feature vector that captures the distinguishing characteristics of the observed signals. At the classification stage, these feature vectors are used to classify the signals, allowing pattern recognition so that the BCI can infer the user’s objective. At the control interface stage, the classified signals are translated into commands for the apparatus, e.g. a wheelchair[7].
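
To make the pipeline concrete, here is a minimal sketch in Python (NumPy) that wires the five stages together for one simulated EEG trial. All function names, the feature choice, and the random “trained” weights are illustrative placeholders, not part of any real BCI package.

    import numpy as np

    FS = 250  # sampling rate (Hz)

    def acquire_signal(n_channels=8, n_samples=FS):
        # Stage 1: signal acquisition (here, simulated noisy EEG in volts).
        return 1e-5 * np.random.randn(n_channels, n_samples)

    def preprocess(x):
        # Stage 2: signal enhancement (here, removing per-channel DC offset).
        return x - x.mean(axis=1, keepdims=True)

    def extract_features(x):
        # Stage 3: map the trial onto a feature vector
        # (log band power per channel, a common EEG feature).
        return np.log(np.mean(x ** 2, axis=1))

    def classify(features, weights, bias):
        # Stage 4: a linear classifier yielding a discrete class label.
        return int(features @ weights + bias > 0)

    def control_interface(label):
        # Stage 5: translate the class label into a device command.
        return {0: "wheelchair: stop", 1: "wheelchair: forward"}[label]

    trial = acquire_signal()
    feats = extract_features(preprocess(trial))
    w, b = np.random.randn(feats.size), 0.0  # stand-in for a trained model
    print(control_interface(classify(feats, w, b)))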

1.2. Mechanisms

1.2.1. Signal Acquisition

To read the user’s intentions from brain signals, BCIs record brain activity as tractable electrical signals during the acquisition stage. Brain activity is usually measured either electrophysiologically or hemodynamically.

Table 1. Examples of Neuroimaging Methods
Adapted from “Brain Computer Interfaces, a Review,” by L.F. Nicolas-Alonso and J. Gomez-Gil, 2012, Sensors, 12, pp. 1211-1279.
EEG: Electroencephalography; MEG: Magnetoencephalography; ECoG: Electrocorticography; fMRI: Functional Magnetic Resonance Imaging; NIRS: Near Infrared Spectroscopy

In electrophysiological monitoring, brain activity is measured through the changes in ionic current flow that occur as neurons exchange information via electrochemical transmitters[8]. This can be done with electrical signal acquisition methods such as electroencephalography (EEG) or magnetoencephalography (MEG)[8]. Hemodynamic monitoring exploits the hemodynamic response, in which glucose and oxygen are delivered at a higher rate to active neurons than to inactive ones[9]. The resulting change in the ratio of oxyhemoglobin to deoxyhemoglobin is measured by neuroimaging methods such as functional magnetic resonance imaging (fMRI) and near infrared spectroscopy (NIRS)[9]. Other neuroimaging techniques that can be used at this stage are listed in Table 1 with their functional properties.
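
As a worked illustration of the hemodynamic approach, the sketch below applies a simplified modified Beer-Lambert model: attenuation changes measured at two near-infrared wavelengths are inverted to recover oxy- and deoxyhemoglobin concentration changes. All numbers (extinction coefficients, path length, measured values) are placeholder magnitudes, not calibrated constants.

    import numpy as np

    # Simplified modified Beer-Lambert model: delta_OD = eps * path * delta_c.
    # Rows: wavelengths (~760 nm, ~850 nm); columns: [HbO2, Hb].
    eps = np.array([[0.30, 1.10],   # placeholder extinction coefficients
                    [0.90, 0.40]])
    path = 20.0                     # assumed effective optical path (cm)

    delta_od = np.array([0.004, 0.014])  # measured attenuation changes
    d_hbo2, d_hb = np.linalg.solve(eps * path, delta_od)
    print(d_hbo2, d_hb)  # HbO2 rises, Hb falls: an activation-like pattern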

1.2.2. Preprocessing/Signal Enhancement

Table 2. Examples of currently used control signals
Adapted from “Brain Computer Interfaces, a Review,” by L.F. Nicolas-Alonso and J. Gomez-Gil, 2012, Sensors, 12, pp. 1211-1279. 
VEP: Visual Evoked Potentials; SCP: Slow Cortical Potentials; P300: P300 Evoked Potentials

Brain signal patterns have been mapped to particular cognitive tasks, and BCI systems use these mappings to decipher the user’s goals: by deliberately modulating these brain signals, people can express their intent to the system. Such signals therefore serve as control signals in BCI units, and a variety of brain signal classes are currently being studied for their potential in this role[10]. Table 2 lists some control signals currently employed in BCI systems.
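
For instance, with a steady-state variant of the VEPs in Table 2, a stimulus flickering at a known frequency evokes EEG power at that frequency, so the attended target can be decoded from a spectral peak. The sketch below simulates this kind of detection; the frequencies, trial length, and command mapping are invented for illustration.

    import numpy as np

    fs = 250                                 # sampling rate (Hz)
    t = np.arange(0, 2.0, 1 / fs)            # two seconds of data
    flicker = {"left": 10.0, "right": 15.0}  # stimulus frequencies (Hz)

    # Simulated EEG while the user attends the 15 Hz target: a small
    # oscillation at the attended frequency buried in broadband noise.
    eeg = 0.5 * np.sin(2 * np.pi * 15.0 * t) + np.random.randn(t.size)

    # Power spectrum via FFT; the stimulus frequency carrying the most
    # power is taken as the decoded command.
    spectrum = np.abs(np.fft.rfft(eeg)) ** 2
    freqs = np.fft.rfftfreq(t.size, 1 / fs)
    power = {cmd: spectrum[np.argmin(np.abs(freqs - f))]
             for cmd, f in flicker.items()}
    print(max(power, key=power.get))         # usually "right"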

1.2.3. Feature Extraction

The BCI recognizes different brain signal patterns and assigns each to a unique class based on its traits. Brain signals are recorded at multiple channels and undergo dimension reduction techniques to remove extraneous information, which also lowers computational cost[11]. Tables 3 and 4 list some feature extraction methods and their properties; a minimal PCA sketch follows Table 4 below.

Table 3. Feature Extraction Methods - Part 1
Adapted from “Brain Computer Interfaces, a Review,” by L.F. Nicolas-Alonso and J. Gomez-Gil, 2012, Sensors, 12, pp. 1211-1279.
PCA: Principal Component Analysis; ICA: Independent Component Analysis; CSP: Common Spatial Pattern

Table 4. Feature Extraction Methods - Part 2
Adapted from “Brain Computer Interfaces, a Review,” by L.F. Nicolas-Alonso and J. Gomez-Gil, 2012, Sensors, 12, pp. 1211-1279.
AR: AutoRegressive Components; MF: Matched Filtering; WT: Wavelet Transform
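
Of the methods in Tables 3 and 4, principal component analysis (PCA) is the simplest to sketch: it projects the multi-channel recording onto the few orthogonal directions that carry the most variance and discards the rest. A minimal NumPy version, with invented channel and sample counts, follows.

    import numpy as np

    def pca_reduce(x, k):
        # Project samples x (n_samples, n_channels) onto the k principal
        # components that carry the most variance.
        x = x - x.mean(axis=0)            # center each channel
        cov = np.cov(x, rowvar=False)     # channel covariance matrix
        vals, vecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
        top = vecs[:, np.argsort(vals)[::-1][:k]]
        return x @ top                    # reduced feature matrix

    eeg = np.random.randn(1000, 32)       # 1000 samples, 32 channels
    print(pca_reduce(eeg, 4).shape)       # -> (1000, 4)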

Recordings of brain activity can be contaminated by artifacts (unwanted signals superimposed on the brain activity), which can degrade the BCI system’s performance[12]. The two major types are physiological and non-physiological (technical) artifacts. Physiological artifacts arise from the activity of other organs, such as the heart [electrocardiography (ECG) artifacts], skeletal muscles [electromyography (EMG) artifacts], and the eyes [electrooculography (EOG) artifacts][13]. To avoid such artifacts, patients may be asked to refrain from blinking or any body movement[14], or signal samples in which ocular or muscular activity is detected can be discarded by monitoring the EOG and EMG signals[15]. A more effective approach is to apply artifact removal algorithms to the EEG, such as linear filtering, linear combination, and regression, which locate the artifacts and excise them while keeping the brain activity intact instead of discarding samples[13]. Technical artifacts usually stem from power-line noise or electrode disruptions and can be handled with proper filtering or shielding[13].
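
A bare-bones version of the regression approach is sketched below: the propagation of a reference EOG channel into each EEG channel is estimated by least squares, and the scaled copy is subtracted, leaving the underlying activity largely intact. The channel counts and blink shape are invented for the demonstration.

    import numpy as np

    def remove_eog(eeg, eog):
        # Regression-based ocular artifact removal: estimate how strongly
        # the EOG reference leaks into each EEG channel, then subtract
        # that scaled copy. eeg: (channels, samples); eog: (samples,).
        eog = eog - eog.mean()
        b = eeg @ eog / (eog @ eog)   # one coefficient per channel
        return eeg - np.outer(b, eog)

    # Toy demonstration: brain activity plus a blink leaking into the EEG.
    n = 1000
    blink = np.zeros(n); blink[400:450] = 50.0       # simulated EOG blink
    brain = np.random.randn(4, n)
    contaminated = brain + np.outer([0.5, 0.3, 0.2, 0.1], blink)
    cleaned = remove_eog(contaminated, blink)
    print(np.corrcoef(contaminated[0], blink)[0, 1])  # high correlation
    print(np.corrcoef(cleaned[0], blink)[0, 1])       # near zero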

1.2.4. Classification

At this stage the BCI attempts to infer what the user wants from the feature vector that distinguishes the different types of brain activity. Two families of algorithms are currently in use: regression and classification[16]. Regression algorithms use the EEG signal features directly as independent variables when estimating the user’s objective[16]. Classification algorithms instead use the EEG signal features to define the boundaries between targets in the feature space[17]. Table 5 lists some example classification methods; a minimal LDA sketch follows it.

Table 5. Classification Method Examples
Adapted from “Brain Computer Interfaces, a Review,” by L.F. Nicolas-Alonso and J. Gomez-Gil, 2012, Sensors, 12, pp. 1211-1279.
LDA: Linear Discriminant Analysis; SVM: Support Vector Machine; k-NNC: K-Nearest Neighbor Classifier; ANN: Artificial Neural Network
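
As an example of the classifiers in Table 5, the sketch below trains linear discriminant analysis (LDA) on synthetic two-class feature vectors, standing in for, say, left- vs. right-hand motor imagery features. The data, class means, and scikit-learn dependency are assumptions of this sketch.

    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    rng = np.random.default_rng(0)

    # Synthetic feature vectors for two imagined-movement classes.
    left = rng.normal(loc=[-1.0, 0.5], scale=1.0, size=(100, 2))
    right = rng.normal(loc=[1.0, -0.5], scale=1.0, size=(100, 2))
    X = np.vstack([left, right])
    y = np.array([0] * 100 + [1] * 100)

    clf = LinearDiscriminantAnalysis().fit(X, y)  # learn the boundary
    print(clf.predict([[1.2, -0.4]]))             # -> [1] ("right hand")
    print(clf.score(X, y))                        # training accuracy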

1.3. Applications

BCIs provide new methods of control and communication that bypass peripheral muscles and nerves[18]. By restoring lost function in those with severe motor deficits, the BCI system also indirectly helps alleviate the burden on caregivers and other medical costs. BCI target users generally fall into three populations: i) Completely Locked-In State (CLIS) users who have no motor control, ii) Locked-In State (LIS) users who are partially paralyzed with residual control over movement, and iii) healthy users with good motor control. As shown in Figure 3, Kübler and Birbaumer (2008) reported a strong negative correlation between BCI performance and the degree of physical impairment: the more physically impaired the user, the worse the BCI performs[19].

Figure 3. Relationship between BCI Performance and User Capabilities
Adapted from “Brain-Computer Interfaces and communication in paralysis: extinction of goal directed thinking in completely paralyzed patients?,”
by A. Kübler and N. Birbaumer, 2008, Journal of Clinical Neurophysiology, 119, pp. 2658-2666. LIS: Locked-In State; CLIS: Completely Locked-In State

Scientists have developed communication techniques using BCI technology, such as the virtual keyboard, where the user modulates his/her EEG through mental imagery of hand and leg movements to “type” out words on the screen with a cursor[20].

Besides enabling communication, BCIs can also restore motor function, allowing a return to normal daily activities and potentially relieving psychological and social distress[21]. An example of such a BCI is functional electrical stimulation (FES), which produces artificial muscle contractions by delivering electrical currents that generate action potentials in the peripheral motor nerves of the target muscle[22].
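
To give a rough sense of what an FES controller outputs: a train of rectangular current pulses whose frequency, width, and amplitude shape the evoked contraction. The sketch below generates such a train; the parameter values are typical orders of magnitude only, not clinical settings.

    import numpy as np

    def fes_pulse_train(freq_hz=30, width_us=200, amp_ma=20,
                        duration_s=1.0, fs=100_000):
        # Sketch of an FES stimulus: a train of rectangular current
        # pulses (all parameter values are illustrative, not clinical).
        n = int(duration_s * fs)
        train = np.zeros(n)
        period = int(fs / freq_hz)            # samples between pulses
        width = int(width_us * 1e-6 * fs)     # samples per pulse
        for start in range(0, n, period):
            train[start:start + width] = amp_ma
        return train

    train = fes_pulse_train()
    print(train.sum() / train.size)  # mean current = amplitude * duty cycle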

Currently, researchers are also interested in applications of BCIs that enrich the senses rather than recover lost functions. BCIs can provide a new interaction medium in video games (which can in turn potentially enhance cognitive functions) or other interactive programs, permitting a new set of experiences and challenges. Researchers can also use BCI data to obtain uncontaminated measures of test subjects' responses during behavioural trials[23].

2. BrainPort System

2.1. Overview

Strictly speaking, the BrainPort System (BPS) should be called a computer-brain interface (CBI), since it transfers information from the computer to the brain, opposite to the usual BCI direction of information flow from brain to computer[24]. Another example of a CBI device is the cochlear implant. The BPS project was initiated by Paul Bach-y-Rita, and its development was continued by Wicab, Inc.[4]. It was originally made to help people with balance disorders such as bilateral vestibular dysfunction, but the BPS has since gained additional purposes. As a platform technology, the BPS can act as the basis for many new applications involving other systems, technologies, and devices[25]. The BPS delivers useful information to the brain through the tongue using electrotactile stimulation[4]. Its goals are to supply absent sensory signals and to reduce the probability of sensory overload[4].

2.2. The BrainPort Mechanism

Figure 4. Schematic Outline of the BrainPort System  
Adapted from “Brainport: an alternative input to the brain,” by Danilov, Y. & Tyler, M., 2005, Journal of Integrative Neuroscience, 4, 537-550.

With respect to the BrainPort balance device, the system includes an Intra-Oral Device (IOD) and a controller that contains a microprocessor with drive electronics, user controls, safety circuits, and a battery power supply[25]. The IOD contains a Micro-Electro-Mechanical Systems (MEMS) accelerometer that senses motion and head tilt. A wireless radio frequency (RF) link handles information transfer to external devices[4].
The IOD, with the proper circuitry, transforms the signals into monophasic, pulsed stimuli delivered to a round tongue display via electrodes. The tongue display is a polytetrafluoroethylene circuit board that carries the tongue electrode array[4]. The surface electrode array delivers qualitative and quantitative information to the tongue by electrical stimulation, forming an “electrotactile screen” that represents an image in real time at different degrees of complexity. The tongue thus projects a “tactile image” to the brain, which can decode this information by its spatial, temporal, intensive, and qualitative features[4]. This translation can help meet urgent needs and resolve signal detection and recognition problems. Although the mechanism is not completely understood, scientists have suggested neuroplasticity as the reason one sense can be interpreted as another in the brain. Some theories on such neuroplasticity (changes in neuronal functional representation) are discussed on the Why Synesthesia Occurs neurowiki page.
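
A toy version of this image-to-tongue conversion is sketched below: a camera frame is averaged down to one brightness value per electrode and quantized into discrete stimulation levels. The grid size and number of levels are illustrative choices, not the BrainPort's actual specifications.

    import numpy as np

    def image_to_tongue_array(image, grid=(12, 12), levels=8):
        # Downsample a grayscale camera frame to a coarse grid of
        # stimulation intensities, one value per tongue electrode.
        # Grid size and intensity levels are illustrative assumptions.
        h, w = image.shape
        gh, gw = grid
        # Average the pixels falling within each electrode's patch.
        patches = image[: h - h % gh, : w - w % gw].reshape(
            gh, h // gh, gw, w // gw).mean(axis=(1, 3))
        # Quantize brightness into discrete stimulation levels.
        out = np.round((levels - 1) * patches / patches.max())
        return out.astype(int)

    frame = np.random.rand(240, 320)    # stand-in camera frame
    print(image_to_tongue_array(frame)) # 12x12 map of pulse intensities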

Video 1. BrainPort Vision System
(2011, June 12). Electro-Medicine : Biological Physics - Cure for Blindness : BrainPort Vision Through Tongue
[Video file] Retrieved from http://www.youtube.com/watch?v=MegGOFkO-Sg

2.3. Electrotactile Stimulation in the Human Brain

Many experiments since 1970 have shown that electrotactile stimulation provides an effective sensory substitution system, and cross-modal plasticity has been hypothesized to cause this synesthesia-like effect[27][24][28][29][30]. For example, when the blind read Braille, their primary and secondary visual cortical areas are activated[31][32][33]. After BrainPort training, the previously dormant primary visual cortex was shown to be strongly active, suggesting that the training recruited the more analytical and complex components of the visual cortex for the analysis of mechanosensitive tactile information[34]. Using positron emission tomography (PET), it was demonstrated that the congenitally blind can be trained to stimulate the visual cortex with somatosensory information from the tongue[35]. Figure 5 shows how much more active the occipital cortex was in blind subjects (row A) than in controls (blindfolded individuals with normal visual acuity).

Figure 5. Comparison of change in regional cerebral blood flow (rCBF) between the blind and normally sighted.  
Adapted from “Cross-modal plasticity revealed by electrotactile stimulation of the tongue in the congenitally blind,” by Ptito, M. et al., 2005, Brain, 128, 606-614. A) blind subjects; B) sighted controls; Bottom row: z-coordinate of the slice

2.4. Applications

Figure 6. The four classes of BrainPort Applications  
Adapted from “Brainport: an alternative input to the brain,” by Danilov, Y. & Tyler, M., 2005, Journal of Integrative Neuroscience, 4, 537-550.

Using grids of different resolutions, the BPS can noninvasively transfer a variety of qualitative and quantitative data[4]. By combining low or high resolution with qualitative or quantitative information, four possible application classes have been proposed, each offering a unique platform for further development into other possible functions[4]. Notably, as the resolution of a BPS increases, its sensitivity decreases[4].

2.4.1. First Class: Qualitative Info, Low Resolution

In this class, the BrainPort detects specific environmental changes (e.g. temperature, chemical composition) and reports them using a “Yes/No” paradigm, for example to alert the operator to a toxin. Research has already extended this class into a source orientation system, which would allow identification of a target and permit responses to simple situations, like catching a ball.
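
In essence, a first-class application reduces to a threshold test driving a single all-or-none stimulation pattern; a trivial sketch (with an invented sensor unit and threshold) might look like this:

    def toxin_alert(sensor_ppm, threshold_ppm=10.0):
        # First-class output: a single yes/no stimulation decision.
        # The units and threshold value are purely illustrative.
        if sensor_ppm > threshold_ppm:
            return "ALERT: stimulate full array"
        return "safe: no stimulation"

    print(toxin_alert(3.2))   # safe
    print(toxin_alert(42.0))  # alert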

2.4.2. Second Class: Qualitative Info, High Resolution

With higher resolution, a more precise, clear image of the surroundings can be delivered to the operator, enabling locomotion, since the operator can amend the trajectory to a destination when obstacles appear. Signal feedback has also been incorporated to help with navigation using the BPS.

2.4.3. Third Class: Quantitative Info, Low Resolution

An example of a third-class BPS is the BrainPort balance device, whose MEMS accelerometer informs the brain of any deviation of the head position from a reference point, so that the brain knows when and how to readjust body posture to keep balance. This balance device has already shown positive results in treating bilateral vestibular dysfunction[36].
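
The core computation in such a device can be sketched as follows: head tilt is estimated from the accelerometer's gravity reading and mapped to a corrective cue on the tongue array. The axis conventions, tolerance, and cue mapping below are assumptions of this sketch.

    import math

    def head_tilt(ax, ay, az):
        # Estimate head tilt (degrees) from a 3-axis accelerometer's
        # gravity reading; axis conventions here are assumptions.
        pitch = math.degrees(math.atan2(ax, math.hypot(ay, az)))
        roll = math.degrees(math.atan2(ay, math.hypot(ax, az)))
        return pitch, roll

    def tilt_cue(pitch, roll, tol=2.0):
        # Map deviation from upright to a corrective stimulation cue.
        if abs(pitch) <= tol and abs(roll) <= tol:
            return "centered: no cue"
        direction = "forward" if pitch > 0 else "back"
        if abs(roll) > abs(pitch):
            direction = "right" if roll > 0 else "left"
        return "stimulate " + direction + " edge of array"

    print(tilt_cue(*head_tilt(0.18, -0.02, 0.98)))  # leaning forward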

2.4.4. Fourth Class: Quantitative Info, High Resolution

The BrainPort Vision system is a good representative of a fourth-class BPS. Signals captured by a video camera are converted by a microprocessor and then sent to the Intra-Oral Device and, subsequently, the tongue as a real-time electrotactile image. This quantitative, high-resolution processing can support visual detection and recognition tasks, like picking up a newspaper. Besides restoring eyesight, this class can also serve as an enhancement tool: the BrainPort Vision system could be adapted for night vision using infrared light, or for ultraviolet vision.

3. Related Links

Auditory-Visual Synesthesia
Brain-Computer Interface
Cochlear Implantation
Cognitive Benefits of Video Games
Lexical-Gustatory Synesthesia
Mental Imagery
Subcortical Visual Prosthetics
Visually-Induced Synesthesia
Why Synesthesia Occurs

Bibliography
1. Simner, J. Defining synaesthesia. British Journal of Psychology 103, 1-15 (2012).
2. Suslick, K. S. Synesthesia in science and technology: more than making the unseen visible. Current Opinion in Chemical Biology 16, 557-563 (2012).
3. Liao, L. D. et al. Gaming control using a wearable and wireless EEG-based brain-computer interface device with novel dry foam-based sensors. Journal of NeuroEngineering and Rehabilitation 9, 1-12 (2012).
4. Danilov, Y. & Tyler, M. Brainport: An alternative input to the brain. Journal of Integrative Neuroscience 4, 537-550 (2005).
5. Wolpaw, J.R. et al. Brain-computer interfaces for communication and control. Journal of Clinical Neurophysiology 113, 767-791 (2002).
6. Wolpaw, J.R. Brain-computer interfaces as new brain output pathways. The Journal of Physiology 579, 613-619 (2007).
7. Khalid, M.B. et al. Towards a brain computer interface using wavelet transform with averaged and time segmented adapted wavelets. In Proceedings of the 2nd International Conference on Computer, Control and Communication (IC4’09) Karachi, Sindh, Pakistan, 1-4 (2009).
8. Baillet, S., Boly, M., & Leahy, R.M. Electromagnetic brain mapping. IEEE Signal Processing Magazine 18, 14-30 (2001).
9. Laureys, S., Boly, M., & Tononi, G. Functional Neuroimaging. In The Neurology of Consciousness; Steven, L., Giulio, T., Eds.; Academic Press: New York, NY, USA, 2009; pp. 31-42.
10. Mason, S.G., & Birch, G.E. A general framework for brain-computer interface design. IEEE Transactions on Neural Systems and Rehabilitation Engineering 11, 70-85 (2003).
11. Ghafar, R. et al. Comparison of FFT and AR techniques for scalp EEG analysis. In Proceedings of the 4th Kuala Lumpur International Conference on Biomedical Engineering 2008, Kuala Lumpur, Malaysia, Jun 2008; Abu Osman, N.A., Ibrahim, F., Wan Abas, W.A.B., Abdul Rahman, H.S., Ting, H.-N., Magjarevic, R., Eds.; Springer Berlin Heidelberg: Berlin, Germany 21, 158-161 (2008).
12. Usakli, A.B. Improvement of EEG signal acquisition: an electrical aspect for state of the art of front end. Computational Intelligence and Neuroscience 2010, 630-649 (2010).
13. Fatourechi, M., Bashashati, A., Ward, R.K., & Birch, G.E. EMG and EOG artifacts in brain computer interface systems: A survey. Journal of Clinical Neurophysiology 118, 480-494 (2007).
14. Vigário, R.N. Extraction of ocular artefacts from EEG using independent component analysis. Electroencephalography and Clinical Neurophysiology 103, 395-404 (1997).
15. del R. Millán, J. et al. A local neural classifier for the recognition of EEG patterns associated to mental tasks. IEEE Transactions on Neural Networks 13, 678-686 (2002).
16. Lotte, F. et al. A review of classification algorithms for EEG-based brain-computer interfaces. Journal of Neural Engineering 4 (2007).
17. McFarland, D.J. & Wolpaw, J.R. Sensorimotor rhythm-based brain-computer interface (BCI): feature selection by regression improves performance. IEEE Transactions on Neural Systems and Rehabilitation Engineering 13, 372-379 (2005).
18. De la Rosa, R. et al. Man-machine interface system for neuromuscular training and evaluation based on EMG and MMG signals. Sensors 10, 11100-11125 (2010).
19. Kübler, A., & Birbaumer, N. Brain-computer interfaces and communication in paralysis: Extinction of goal directed thinking in completely paralyzed patients? Journal of Clinical Neurophysiology 119, 2658-2666 (2008).
20. Obermaier B., Muller, G.R., & Pfurtscheller, G. “Virtual keyboard” controlled by spontaneous EEG activity. IEEE Transactions on Neural Systems and Rehabilitation Engineering 11, 422-426 (2003).
21. Braz, G.P., Russold, M., & Davis, G.M. Functional electrical stimulation control of standing and stepping after spinal cord injury: A review of technical characteristics. Neuromodulation: Technology at the Neural Interface 12, 180-190 (2009).
22. Lauer, R.T., Peckham, P.H., & Kilgore, K.L. EEG-based control of a hand grasp neuroprosthesis. NeuroReport 10, 1767-1771 (1999).
23. Krepki, R., Blankertz, B., Curio, G., & Müller, K.-R. The Berlin Brain-Computer Interface (BBCI) – Towards a new communication channel for online control in gaming applications. Multimedia Tools and Applications 33, 73-90 (2007).
24. Bach-y-Rita, P. Visual information through the skin – A tactile vision substitution system. The American Academy of Otolaryngology 78, 729-749 (1974).
25. Tyler, M., Danilov, Y., & Bach-y-Rita, P. Closing an open-loop control system: vestibular substitution through the tongue. Journal of Integrative Neuroscience 2, 159-164 (2003).
26. Wang, J. et al. Basic experimental research on electrotactile physiology for deaf auditory substitution. Acta Acad. Med. Shandong 35, 1-5 (1997).
27. White, B.W. et al. Seeing with the skin. Perception and Psychophysics 7, 23-27 (1970).
28. Kaczmarek, K.A., et al. A tactile vision substitution system for the blind: Computer-controlled partial image sequencing. IEEE Transactions on Biomedical Engineering 32, 602-608 (1985).
29. Bach-y-Rita, P. et al. Form perception with 49-point electrotactile stimulus array on the tongue. Journal of Rehabilitation Research and Development 35, 427-430 (1998).
30. Bach-y-Rita, P., & Tyler, M.E. Tongue man-machine interface. Studies in Health Technology and Informatics 70, 17-19 (2000).
31. Sadato, N. et al. Activation of the primary visual cortex by Braille reading in blind subjects. Nature 380, 526-528 (1996).
32. Buchel, C. et al. Different activation patterns in the visual cortex of late and congenitally blind subjects. Brain 121, 409-419 (1998).
33. Burton, H. et al. Adaptive changes in early and late blind: a fMRI study of Braille reading. Journal of Neurophysiology 87, 589-607 (2002).
34. Bach-y-Rita, P. Seeing with the brain. International Journal of Human-Computer Studies 15, 287-297 (2003).
35. Ptito, M. et al. Cross-modal plasticity revealed by electrotactile stimulation of the tongue in the congenitally blind. Brain 128, 606-614 (2005).
36. Polat, S., & Uneri, A. Vestibular substitution: comparative study. The Journal of Laryngology and Otology 124, 852-858 (2010).
