Brain-Computer Interface

g.BCIsys - g.tec's Brain-Computer Interface research environment
g.tec provides complete MATLAB-based research and development systems, including all hardware and software components needed for data acquisition, real-time and off-line data analysis, data classification and neurofeedback.
A BCI system can be built with g.MOBIlab+, g.USBamp, g.HIamp or g.Nautilus. g.MOBIlab+ is a portable device with up to 8 EEG channels and wireless signal transmission. g.USBamp is available with 16-64 EEG channels and transmits the data over USB to a PC or notebook. g.HIamp acquires 64-256 channels over USB. The g.Nautilus wireless EEG is available with 8-64 channels.
With the software package High-Speed Online Processing under SIMULINK, you can read the biosignal data directly into SIMULINK. SIMULINK blocks are used to visualize and store the data. The parameter extraction and classification can be performed with standard SIMULINK blocks, the g.RTanalyze library or self-written S-functions.
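To give a feel for the kind of feature such a parameter-extraction step computes, here is a minimal MATLAB sketch of an alpha-band log band-power estimate for one EEG channel. The function name, band edges and windowing are illustrative assumptions and do not correspond to any specific g.RTanalyze block:

function bp = alpha_bandpower(x, fs)
    % x:  EEG samples of one channel (column vector)
    % fs: sampling rate in Hz
    N = length(x);
    w = 0.5 - 0.5 * cos(2 * pi * (0:N-1)' / (N-1));  % Hann window, no toolbox needed
    X = fft(x .* w);                                 % windowed spectrum
    f = (0:N-1)' * fs / N;                           % frequency axis in Hz
    inBand = f >= 8 & f <= 12;                       % alpha band
    bp = log(sum(abs(X(inBand)).^2));                % log band-power feature
end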
After the EEG data acquisition, the data can be analyzed with g.BSanalyze, the EEG and classification toolbox.
With ready-to-use BCI sample applications, you can develop state-of-the-art BCI experiments within a few hours. g.tec started developing BCI systems more than 15 years ago, so all important BCI functions are included in the package and can easily be used and modified.

Motor Rehabilitation System
One of the most common types of Brain-Computer Interface (BCI) systems relies on motor imagery (MI). The user is asked to imagine moving either the right or left hand. This produces specific patterns of brain activity in the EEG signal, which an artificial classifier can interpret to detect which hand the user imagined moving. This approach has been used for a wide variety of communication and control purposes, such as spelling, navigation through a virtual environment, or controlling a cursor, wheelchair, orthosis, or prosthesis.
In the last few years, however, a totally novel and promising application for MI-based BCIs has gained great attention. Several recent articles have shown that MI-based BCIs can induce neural plasticity and thus serve as an important tool to enhance motor rehabilitation for stroke patients. In other words, the overall goal of the BCI system is not communication, but improved stroke recovery. Furthermore, other work has shown that this rehabilitation can be even more effective when combined with immersive graphical environments that can help users interact effectively and naturally with the BCI system. Immersive BCI stroke rehabilitation is an ongoing research effort in numerous American and European research projects, many of which involve g.tec.
g.REHAbci - Motor Rehabilitation with Virtual Limbs  
Neurofeedback is critical in an MI-based BCI. Rehabilitation is most effective when users get immersive feedback that relates to the activities they imagine or perform. For example, if people imagine grasping an object with their left hand, then an image of a grasping hand can help them visualize this activity. If a stroke patient keeps trying to imagine or perform the same movement, while receiving feedback that helps to guide this movement, then the patient might regain the ability to grasp, or at least recover partial grasp function.
Recently, g.tec developed a full research package for stroke rehabilitation. The system consists of a 64-channel cap with active EEG electrodes that are connected to the g.HIamp biosignal amplifier. To train the BCI system, the user imagines left and right hand movements. Common Spatial Patterns (CSPs) are then calculated from the 64 channels, which weight each electrode according to its importance. This electrode selection is done fully automatically and includes algorithms to improve the signal-to-noise ratio. Furthermore, a linear discriminant analysis is trained to distinguish left vs. right hand movements. When this training is finished, which typically takes less than an hour, the patient can control virtual hands that are projected in a highly immersive 3D environment using g.VRsys. Smaller setups can be realized with computer screens or head-mounted devices. The g.HIsys development environment also contains a block for triggering and tuning an FES stimulator in real-time to optimize the neurorehabilitation procedure. The stimulator supports the rehabilitation of lower and upper limbs with biphasic current stimulation.
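The following MATLAB sketch shows, in compressed form, the two training steps described above: computing CSP filters from the two-class covariance matrices and training an LDA classifier on log-variance features. The variable names and the number of retained filters are illustrative assumptions; the g.tec tools add automatic electrode selection and signal-to-noise improvements on top of this basic scheme.

% CSP + LDA training sketch (illustrative, not the g.tec implementation).
% XL, XR: cell arrays of band-pass filtered [channels x samples] trials
% for left-hand and right-hand motor imagery.
nCh = size(XL{1}, 1);
CL = zeros(nCh); CR = zeros(nCh);
for k = 1:numel(XL), C = cov(XL{k}.'); CL = CL + C / trace(C); end
for k = 1:numel(XR), C = cov(XR{k}.'); CR = CR + C / trace(C); end
CL = CL / numel(XL);  CR = CR / numel(XR);
[W, D] = eig(CL, CL + CR);                 % generalized eigenvectors = CSP filters
[~, idx] = sort(diag(D), 'descend');
W = W(:, idx([1 2 end-1 end]));            % two most discriminative filters per class

feat = @(X) log(var((W.' * X).'));         % 1 x 4 log-variance feature vector
FL = zeros(numel(XL), 4); FR = zeros(numel(XR), 4);
for k = 1:numel(XL), FL(k, :) = feat(XL{k}); end
for k = 1:numel(XR), FR(k, :) = feat(XR{k}); end

mL = mean(FL).';  mR = mean(FR).';         % class means
Sw = cov(FL) + cov(FR);                    % pooled within-class scatter
wLDA = Sw \ (mR - mL);                     % LDA weight vector
bLDA = wLDA.' * (mL + mR) / 2;             % decision threshold
% Classify a new trial Xnew: right hand if feat(Xnew) * wLDA > bLDA, else left.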
As with all g.tec BCI systems, the BCI stroke rehabilitation system relies on well-known software platforms such as MATLAB Simulink, which can easily be interfaced with components from other sources. For more information, including a list of references or technical details, please contact g.tec.

Motor Rehabilitation with Robotic Devices
Exercising motor imagery (MI) is known to be an effective therapy in stroke rehabilitation, even if no feedback about the performance is given to the user. Providing additional real-time feedback can elicit Hebbian plasticity, which increases cortical plasticity and could improve functional recovery. The MI-based Brain-Computer Interface (BCI) is linked to a rehabilitation robot (Amadeo, Tyromotion GmbH, Austria) that gives motor and haptic feedback to the user. If a correct pattern of right-hand MI is detected, the robot performs a complete movement (flexion and extension) of the hand, thus giving online feedback.

BCI Award and Stroke Rehabilitation
Much of this work is summarized in a recently published roadmap, which was developed over two years by a consortium of different groups. This roadmap lists BCI for rehabilitation as one of the two emerging disruptive technologies that could dramatically change BCI research, and is available at future-bnci.org. More information about BCI systems for rehabilitation can also be found in State of the Art in BCI Research: BCI Award 2010 (InTech, 2011), which includes chapters from the Institute for Infocomm Research, A*STAR, Singapore and Keio University, Japan. Two of the 10 projects nominated for the BCI Award 2010 used BCI systems for rehabilitation purposes. BCIs for rehabilitation have also been prominent in the 2011, 2012, and 2013 BCI Awards.

Ping-Pong game
Everybody knows the famous Ping-Pong game that was played in the seventies on TV sets. In this example, two persons are connected to the BCI system and control the paddles with motor imagery. Each paddle moves up when the player imagines left hand movement and down when the player imagines right hand movement. The algorithm extracts EEG band-power features in the alpha and beta ranges from two EEG channels per person, so in total four EEG channels are analyzed and classified.
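A self-contained MATLAB sketch of one paddle update for one player is shown below. The synthetic data, band edges and the simple two-channel comparison are illustrative assumptions; the real game uses a trained classifier on the alpha and beta band-power features:

% One paddle update for one player (illustrative; synthetic EEG data).
fs = 256;
eeg = randn(fs, 2);                        % 1 s from 2 channels, e.g. C3 and C4
f = (0:fs-1)';                             % frequency axis in Hz (1 Hz bins for 1 s)
bp = zeros(1, 2);
for ch = 1:2
    P = abs(fft(eeg(:, ch))).^2;           % power spectrum
    bp(ch) = sum(P(f >= 8 & f <= 30));     % combined alpha + beta power
end
% Motor imagery desynchronizes the contralateral sensorimotor rhythm, so the
% channel with lower power indicates the imagined hand:
if bp(2) < bp(1)
    dy = +1;                               % left-hand MI (ERD over C4): paddle up
else
    dy = -1;                               % right-hand MI (ERD over C3): paddle down
end
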
High Gamma Activity
While most BCIs rely on the EEG, some newer work has drawn attention to BCIs based on ECoG. ECoG-based systems have numerous advantages over EEG systems, including (i) higher spatial resolution, (ii) a higher frequency range, (iii) fewer artifacts, and (iv) no need to prepare users for each session of BCI use, which usually requires scraping the skin and applying electrode gel. Recent research has repeatedly demonstrated that ECoG can outperform comparable EEG methods because of these advantages.
Scientific work has shown that ECoG methods can not only improve BCIs but also help us address fundamental questions in neuroscience. A few efforts have sought to map “eloquent cortex” with ECoG. That is, scientists have studied language areas of the brain while people say different words or phonemes. Results revealed far more information than EEG-based methods, and have inspired new ECoG BCIs that are impossible with EEG BCIs. Other work explored the brain activity associated with movement. This has been very well studied with the EEG, leading to the well-known dominant paradigm that real and imagined movement affects activity in the 8-12 Hz range. ECoG research showed that this is only part of the picture. Movement also affects a higher frequency band, around 70-200 Hz, which cannot be detected with scalp EEG. This higher frequency band is more focal and could lead to more precise and accurate BCIs than EEG methods could ever deliver.
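As a toy illustration of the two frequency ranges mentioned above, the following MATLAB fragment compares 8-12 Hz and 70-200 Hz power on a synthetic signal sampled at an ECoG-like rate. Only the band edges come from the text; everything else is an assumption:

fs = 1200; N = 4 * fs;                     % 4 s at 1200 Hz (ECoG-like rate)
x = randn(N, 1);                           % stand-in for one ECoG channel
P = abs(fft(x)).^2;
f = (0:N-1)' * fs / N;
muPower = sum(P(f >= 8  & f <= 12));       % classic sensorimotor band
hgPower = sum(P(f >= 70 & f <= 200));      % high gamma, invisible in scalp EEG
fprintf('mu: %.3g, high gamma: %.3g\n', muPower, hgPower);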

P300 Spelling
The P300 paradigm presents, for example, 36 letters in a 6 x 6 matrix on the computer monitor. Each letter (or row or column of letters) flashes in a random order, and the subject silently counts each flash that includes the letter that he or she wants to communicate. As soon as the corresponding letter flashes, a P300 component is produced in the brain. The algorithms analyze the EEG data and select the letter with the largest P300 component, which is then written onto the screen. Normally, between 2 and 15 flashes per letter are required for high accuracy. The number depends on many factors, including the electrodes and their scalp positions, the data processing parameters, and the amplitude of the individual subject's P300 brainwave.
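The selection logic can be sketched in a few lines of MATLAB. The epoch layout, latency window and the raw peak measure below are illustrative assumptions; real systems such as g.BSanalyze use trained classifiers instead of a simple peak:

% P300 target selection in a 6 x 6 matrix (self-contained toy example).
fs = 256;
rowcol = repmat(1:12, 1, 10);              % 10 flashes each of 6 rows + 6 columns
epochs = randn(fs, numel(rowcol));         % 1 s post-stimulus EEG per flash (synthetic)
win = round(0.25*fs):round(0.45*fs);       % typical P300 latency window (250-450 ms)
matrix = reshape('ABCDEFGHIJKLMNOPQRSTUVWXYZ123456789_', 6, 6).';
score = zeros(1, 12);
for s = 1:12
    erp = mean(epochs(:, rowcol == s), 2); % average ERP for this row/column
    score(s) = max(erp(win));              % P300 size within the latency window
end
[~, r] = max(score(1:6));                  % row with the largest P300
[~, c] = max(score(7:12));                 % column with the largest P300
selected = matrix(r, c)                    % the communicated letter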
          
P300 Smart Home Control
The BCI was connected to a Virtual Reality (VR) system. The virtual 3D representation of the smart home had different control elements (TV, music, windows, heating system, phone) and allowed the subjects to move through the apartment. Users could perform tasks like playing music, watching TV, opening doors, or moving around. Therefore, seven control masks were created: a light mask, a music mask, a phone mask, a temperature mask, a TV mask, a move mask and a "go to" mask. The control mask for the TV is shown in the accompanying figure.

P300 Second Life Control
g.tec implemented a BCI system based on the P300 brainwave. Different symbols are arranged on a computer screen and are highlighted in a random order. If the subject silently counts the flashes of one specific symbol, the P300 should be elicited, and the BCI system can recognize this P300 and therefore the symbol. To control Second Life, different masks (GUIs with icons) were created for moving around, chatting, or other tasks specialized to each user's wishes.

Hyperscanning - Connecting Minds
Many futurists believe that people in the distant future will use advanced technology to work together more directly, something like a “hive mind”. People could use technology to help them not just work together but also think together, accomplishing goals more quickly and effectively. That future may not be so distant. Recently, the P300 speller was used for a demonstration called “Hyperscanning” that represents an important step toward direct cooperation through thought alone. Today, several different groups have EEG-based P300 spellers that can identify targets reliably with about 3 flashes per letter. But, despite very extensive effort from groups around the world, faster communication has not been possible without neurosurgery, since brainwave activity from one flash is usually too noisy for accurate classification. Recently, eight people worked together to spell “Merry Christmas” through the P300 speller with only one flash per letter. They spelled all 14 characters without a single mistake. Hence, by combining the brainwave signals across eight people, the system managed to substantially improve communication speed and accuracy. This approach could be used for cooperative control for many different applications. People might work together to play games or draw paintings, or could work together for other tasks like making music, voting or otherwise making decisions, or solving problems. Someday, users might put their heads together for the most direct “meeting of the minds” ever.
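The underlying arithmetic is simple: averaging one epoch from each of eight independent recordings reduces the noise by roughly sqrt(8), while the P300 adds up coherently. A toy MATLAB demonstration with an idealized P300 template (all numbers are assumptions):

fs = 256; nSubj = 8;
t = (0:fs-1)' / fs;
p300 = exp(-(t - 0.3).^2 / (2 * 0.05^2));             % idealized P300 peaking at 300 ms
epochs = repmat(p300, 1, nSubj) + 2*randn(fs, nSubj); % one noisy epoch per subject
combined = mean(epochs, 2);                           % average across the 8 subjects
snrSingle   = max(p300) / std(epochs(:, 1) - p300);
snrCombined = max(p300) / std(combined - p300);
fprintf('SNR single: %.2f, combined: %.2f\n', snrSingle, snrCombined);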

Vibro-tactile Stimulation
P300 BCIs based on visual stimuli do not work with patients who have lost their vision. Auditory paradigms can be implemented instead, using a frequent stimulus with one frequency and an infrequent stimulus with another frequency. The user is asked to count how many times the infrequent stimulus occurs. As with the visual P300 speller, the infrequent stimuli produce a P300 response in the EEG. The same principle can be used for vibrotactile stimulation if, for example, the right hand is stimulated frequently and the left hand infrequently. The EEG will also exhibit a P300 if the user is paying attention to the infrequent stimuli. This auditory and vibrotactile setup can assess whether the patient is able to follow instructions and experimental procedures. To answer yes/no questions, it is necessary to extend the vibrotactile setup to 3 stimulators: one stimulator applies the frequent stimuli, and 2 stimulators apply the infrequent stimuli. The user can concentrate on one of the infrequent stimulators to say (in this case) yes or no. Typically, an evoked potential is calculated by averaging the responses to the frequent and infrequent stimuli separately. A statistical analysis helps to visualize statistically significant differences, which is especially important for patient data collected in field settings, which are frequently noisy.
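A minimal MATLAB sketch of this analysis follows, with a pointwise Welch t statistic computed directly so no statistics toolbox is required. The epochs are synthetic, the 0.5 offset simulates a P300 on the infrequent stimuli, and the threshold of 2 is a rough stand-in for a proper significance test:

fs = 256;
freq   = randn(fs, 80);                    % 80 frequent-stimulus epochs (synthetic)
infreq = randn(fs, 20) + 0.5;              % 20 infrequent epochs with a fake P300
erpF = mean(freq, 2);  erpI = mean(infreq, 2);         % averaged evoked potentials
se = sqrt(var(freq, 0, 2)/80 + var(infreq, 0, 2)/20);  % Welch standard error
t  = (erpI - erpF) ./ se;                  % pointwise t statistic
sig = abs(t) > 2;                          % crude significance mask (~p < 0.05)
fprintf('%d of %d samples differ significantly\n', sum(sig), fs);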
          
Avatar control
Avatar control has been developed through the research project VERE (Virtual Embodiment and Robotic Re-Embodiment). The VERE project is concerned with embodiment of people in surrogate bodies so that they have the illusion that the surrogate body is their own body, and that they can move and control it as if it were their own. There are two types of embodiment: (i) robotic embodiment and (ii) virtual embodiment. In the first type, the person is embodied in a remote physical robotic device, which they control through a BCI. For example, a patient confined to a wheelchair or bed, who is unable to physically move, may nevertheless re-enter the world actively and physically through such remote embodiment. In the second type, the person is embodied in a virtual avatar. The VERE project uses the intendiX ACTOR protocol to access the BCI output from within the eXtreme Virtual Reality (XVR) environment (VRMedia S.r.l., Pisa, Italy) and to control both the virtual and robotic avatars. The BCI is part of the intention recognition and inference component of the embodiment station. This unit combines inputs from fMRI, EEG and other physiological sensors with a knowledge base, taking body and facial movements into account, to create a control signal. This output is used to control the virtual representation of the avatar in XVR and to control the robotic avatar. The user gets feedback showing the scene and the BCI control via an HMD or a display. The BCI overlay, for example, allows users to embed the BCI stimuli and feedback within video streams recorded by the robot and within the virtual environment of the user's avatar. The user is situated inside the embodiment station, which also provides different stimuli such as visual, auditory and tactile input. The setup can also be used for invasive recordings with the electrocorticogram (ECoG). Avatar control is promising from a market perspective because it could be used in rehabilitation systems, such as for motor imagery with stroke patients.

SSVEP and Code-based VEP Robot Control
With four choices, anyone could easily move a robot forwards, backwards, to the left and to the right. Hence, in our SSVEP BCI, we have four lights. (Of course, SSVEP BCIs have been developed with more or fewer than four lights, depending mainly on how many commands are required.) All the user has to do is look at one specific flickering light (for example, the light that is assigned to the "move forward" command). Our algorithms determine which EEG frequency component(s) are higher than normal, which reveals which light the user was observing and thus which movement command the user wanted to send. This system also uses a "no-control" state: when the user does not look at any oscillating light, the robot doesn't move.
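A compact MATLAB sketch of this detection step: compare the spectral peak at each flicker frequency against a broadband reference, and fall back to the no-control state when no peak stands out. The flicker rates, window length and threshold are illustrative assumptions:

fs = 256; N = 2 * fs;                      % 2 s analysis window
eeg = randn(N, 1);                         % one occipital channel (synthetic)
targets = [10 12 15 17];                   % flicker rates of the four lights in Hz
P = abs(fft(eeg)).^2;
f = (0:N-1)' * fs / N;
ref = median(P(f >= 5 & f <= 30));         % crude broadband reference level
score = zeros(1, 4);
for k = 1:4
    [~, bin] = min(abs(f - targets(k)));   % FFT bin closest to this flicker rate
    score(k) = P(bin) / ref;               % relative peak height
end
[peak, cmd] = max(score);
if peak > 4                                % below threshold: no-control state
    fprintf('send movement command %d\n', cmd);
else
    fprintf('no control - robot stays put\n');
end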

The user was seated in front of a computer monitor and was connected with active EEG electrodes to a biosignal amplifier. The amplifier sent the EEG data to the BCI system, which allowed the subject to control a robotic device (e-puck) in real-time. The robotic device was located beside the subject on the floor, and its movement was observed with a tracking camera that recorded x- and y-positions on the tracking system computer (EthoVision, Noldus, The Netherlands). Additionally, the robot's movements were captured with a feedback camera that passed the video image to the computer monitor in front of the subject (Technical University of Munich, Germany), which showed the experimental paradigm together with the BCI controls that the subject used to control the robotic device. The code-based BCI system reached a very high on-line accuracy, which is very promising for real-time control applications where a continuous control signal is needed.
The code-based BCI principle is available in g.HIsys as the add-on toolbox g.BCI_CSP. The toolbox analyzes EEG data in real-time and provides code-flickering icons to remote screens via the SOCI module. This allows you to integrate control icons into external applications that are programmed, e.g., in Unity. All parameter estimation and classification algorithms are integrated in this toolbox so you can quickly develop your own application.
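The classification idea behind a code-based BCI can be sketched as template correlation: every target flickers with a time-shifted copy of the same pseudo-random code, and the detected target is the shift whose template best matches the EEG response. The code length, shift and noise level below are illustrative assumptions:

code = double(rand(1, 63) > 0.5);          % stand-in for a 63-bit pseudo-random code
nTargets = 4; shift = 15;                  % lag between neighbouring targets (bits)
resp = circshift(code, 2*shift) + 0.5*randn(1, 63);  % synthetic response to target 3
r = zeros(1, nTargets);
for k = 1:nTargets
    tmpl = circshift(code, (k-1)*shift);   % template for target k
    cc = corrcoef(resp, tmpl);
    r(k) = cc(1, 2);                       % correlation with this template
end
[~, target] = max(r)                       % should usually report target 3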

intendiX
The intendiX BCI system was designed to be operated by caregivers or the patient's family at home. It consists of active EEG electrodes, a biosignal amplifier and a computer running the software. This allows the user to communicate with his or her environment by processing P300 evoked potentials, CVEP (code-based visual evoked potentials) or SSVEP (steady-state visual evoked potentials) from the EEG data in real-time. The software shows an alphabet, numbers or icons on the computer screen. The characters are then highlighted in a random order while the user concentrates on the specific character he or she wants to select. By using the ACTOR protocol, the system can be configured with respect to the user's needs. Furthermore, the ACTOR protocol allows interacting with connected devices and defines the currently displayed controls inside the SOCI interface. This setup provides a powerful communication channel that allows the user to easily interact with the environment or nearby devices such as a TV, switches, central heating or computer programs.

The intendiX SOCI system (Screen Overlay Control Interface module) is especially useful for virtual reality (VR) and similar applications where merging BCI controls with the application's native interface is essential for an improved and optimal user experience. Using SOCI, the intendiX platform can be configured to remotely display its stimuli and feedback on various devices and systems. The intendiX SOCI can be embedded in host applications to directly interact with BCI controls inside the displayed scene. It generates CVEP and SSVEP stimuli and supports single-symbol, row/column and random patterns for P300 stimulation.
Through dedicated interfaces, it is possible to define and replay custom patterns such as the scanning cursors used by the g.EOGEMGcontrol application. Besides the basic highlighting and colour-inversion stimulus types, the SOCI system can use a predefined set of colour images, for example images of famous faces, as stimuli; this is used by intendiX and g.BCI_SOCI to implement the face speller.

ACTOR Protocol
For complex control tasks, the BCI Application ConTrol and Online Reconfiguration (ACTOR) protocol is provided. The ACTOR protocol uses eXtensible Markup Language (XML) formatted message strings to exchange information between the BCI and the attached system. Whenever the BCI system is started, it broadcasts a dedicated hello message to identify the available and active applications. As soon as the BCI has detected external applications, it requests the list of commands, services and actions available from each client. The BCI acknowledges the received list and reports whether it was able to process it successfully.
This allows you to easily configure your BCI system according to your applications via UDP or from definition files, either at start-up or during operation, which makes the system very flexible. The BCI system also sends standard XML commands to the external applications, e.g. for switching on the light in a smart home environment. Any external application that understands the ACTOR protocol can simply be plugged into the BCI system. The ACTOR protocol is already used in many EC research projects, including BrainAble, BackHome, VERE, and ALIAS.
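To make the message flow concrete, here is a hypothetical ACTOR-style exchange built in MATLAB. The tag and attribute names are invented for illustration and are not the actual protocol specification:

% Hypothetical ACTOR-style XML messages (invented tags, for illustration only).
hello = sprintf(['<?xml version="1.0"?>\n' ...
                 '<actor type="hello" sender="BCI"/>']);
cmd = sprintf(['<?xml version="1.0"?>\n' ...
               '<actor type="command" target="smarthome">\n' ...
               '  <action name="light" value="on"/>\n' ...
               '</actor>']);
disp(hello); disp(cmd);
% Sending over UDP could use udpport (Instrument Control Toolbox), e.g.:
%   u = udpport;
%   write(u, cmd, "char", "192.168.0.10", 5000);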
By combining the ACTOR protocol with the SOCI system, the BCI can be fully embedded within and controlled by a large variety of user applications, and configured dynamically by each of them. The ACTOR protocol is designed to empower users to communicate and interact with their environment and to control various applications, services and devices using one single BCI device.
        
Hybrid BCI
Hybrid BCIs combine different input signals to provide more flexible and effective control. g.HIsys supports (i) mouse control, (ii) EMG 1D and 2D control, (iii) EOG 1D control and (iv) eye-tracker control, as well as the standard BCI signals.
EMG and EOG are recorded via the biosignal amplifier and are analyzed with g.RTanalyze to generate the control signals, while the mouse and the eye-tracker use external devices that are interfaced with g.HIsys.
The combination of these input signals makes it possible to use a BCI system for a larger patient group and to make the system faster and more reliable.
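One simple way such signals can be combined is to use a muscle signal as a confirmation switch for the EEG classifier's decision. The following fragment is a toy illustration with synthetic data, not the g.HIsys fusion logic:

emg = 2 * randn(256, 1);                   % one EMG analysis window (synthetic)
emgRMS = sqrt(mean(emg.^2));               % RMS amplitude as a simple switch signal
bciClass = 2;                              % EEG classifier output (e.g. 1=left, 2=right)
if emgRMS > 1.5                            % illustrative confirmation threshold
    fprintf('execute command %d (confirmed by EMG)\n', bciClass);
else
    fprintf('no action - waiting for EMG confirmation\n');
end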


