Controlling a game via steady-state visually evoked potentials
Tue—Casino_1.801—Poster2—5308
Presented by: Alexander Blöck
A typical brain-computer interface based on visually evoked potentials (VEPs) presents multiple flickering stimuli to participants. VEPs are recorded via EEG, and a machine learning algorithm classifies the VEPs to predict which stimulus the participant is looking at. This can be used to automatically trigger an action, which can be beneficial for certain clinical groups. Stimuli are usually high-contrast white-black patterns, alternating at a low frequency (e.g., 15 Hz). However, looking at such stimuli for prolonged periods of time is tiring. With N=10 participants, we investigated whether gamification combined with stimuli flickering at frequencies beyond the flicker fusion threshold could improve participants' experience. We conducted a gamified experiment in which participants played the game "2048" by looking at buttons on a computer screen. Stimuli were binary coded by different phase shifts, such that 'true' reflected black-white flickering and 'false' white-black flickering. Each bit sequence was shown for 0.5 s on a monitor (refresh rate 120 Hz). Questionnaires indicated that participants were highly engaged. However, decoding VEPs using a stacked regression model only achieved an accuracy of 71.6% (SEM: 7.6%; chance level: 25%), which was substantially reduced compared to our prior (non-gamified) experiments. One possibility to alleviate this accuracy reduction would be to dynamically extend presentation times until the classifier estimates its accuracy to be sufficiently high. We discuss potential causes for this accuracy reduction as well as other pros and cons of the gamified experiment compared to classical, non-gamified experiments.
Keywords: EEG, VEP, Low-level Vision, Temporal Processing, Machine Learning, Gamification, Brain-Computer Interface
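The phase-shift binary coding described in the abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the exact frame mapping (which luminance a 'true' bit starts on, and how many bits fit into one 0.5 s sequence) is an assumption; only the 120 Hz refresh rate and 0.5 s presentation time are given in the abstract.

```python
# Sketch: encode a bit sequence as phase-shifted flicker frames
# for a 120 Hz display, with each sequence lasting 0.5 s.
REFRESH_HZ = 120                        # monitor refresh rate (from abstract)
DURATION_S = 0.5                        # presentation time per bit sequence
TOTAL_FRAMES = int(REFRESH_HZ * DURATION_S)  # 60 frames per sequence

def flicker_frames(bits, frames_per_bit):
    """Return per-frame luminances (0 = black, 1 = white).

    'True' bits start the alternation on black (black-white flicker),
    'False' bits start on white (white-black flicker), i.e. the two
    codes differ by a half-cycle phase shift. The start-luminance
    convention is an assumption for illustration.
    """
    frames = []
    for bit in bits:
        start = 0 if bit else 1         # half-cycle phase shift between codes
        for i in range(frames_per_bit):
            frames.append((start + i) % 2)
    return frames

# Example: a hypothetical 4-bit code spread over one 0.5 s sequence
# (15 frames per bit at 120 Hz).
seq = flicker_frames([True, False, True, True], TOTAL_FRAMES // 4)
```

At refresh rates this high, each bit's alternation runs at 60 Hz full cycles, beyond the flicker fusion threshold, so the stimuli appear as steady gray rather than visibly flickering, which is the basis for the improved viewing comfort investigated here.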