Gaming and Priming
In this project, we used the popular game Doom 3 to capture the player's intent from brain activity (EEG). We additionally leveraged in-game scenarios in which the player is "primed" by a stimulus that leads them to imagine the desired concept. Priming is an implicit memory effect in which exposure to one stimulus influences the response to a subsequent one.
Conceptual Imagery allows Brain-Computer Interfaces (BCIs) to detect when users imagine concepts from a semantic category, such as a cup or a chair. Aside from direct motion-control applications based on Motor Imagery, the semantics of a BCI rarely match the semantics of the task. Conceptual Imagery eliminates this mismatch and provides more natural interactions for many applications. In this project, we investigate and contribute one such application of Conceptual Imagery.
We leverage the natural semantics of Conceptual Imagery to propose a BCI that controls a video game and whose training phase is integrated into the game itself. We first propose an explicit interaction, in which the instructions for training are conveyed within the game environment. We then propose a completely seamless training phase, fully integrated into the game narrative through semantic priming, a psychological conditioning technique.
Conceptual priming, in particular, is a type of priming in which the stimulus is the image or shape of an object, reinforcing the effect of subsequent stimuli pertaining to objects related in shape or appearance.
More concretely, in our paper we integrated the mental imagery of concepts such as "weapon" or "flashlight" (concepts relevant to the particular game to which the system was applied) to trigger the corresponding in-game actions.
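As an illustration, the mapping from a decoded concept to an in-game action can be as simple as a lookup table. The sketch below is hypothetical: the class labels, the action strings, and the send_console_command callback are placeholder names, not part of the actual Doom 3 integration.

```python
# Minimal sketch (assumed names): dispatch a decoded concept class to an
# in-game action. "send_console_command" stands in for whatever channel
# the real system uses to talk to the game engine.

CONCEPT_ACTIONS = {
    "flashlight": "toggle_flashlight",
    "weapon": "give_weapon_pistol",
}

def dispatch_concept(predicted_class: str, send_console_command) -> None:
    """Trigger the in-game action associated with a decoded concept."""
    action = CONCEPT_ACTIONS.get(predicted_class)
    if action is not None:
        send_console_command(action)
```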
The player starts the level in a dark corridor, where the game engine toggles the flashlight automatically while we record the player's brain signals. Later, when the player finds themselves in a dark corridor again, our hypothesis is that they will recall the moment when they obtained the flashlight earlier in the game. We record and compare the brain signals, and if the current signal corresponds to the previously recorded "sample" of the flashlight, the game engine gives the player the flashlight. We proceed similarly with the "weapon" class: in the first encounter with a zombie, the weapon is provided to the player while we record their brain signals; in the next encounter, the player receives the weapon automatically if they think about it. The priming for the BrainAPI training is thus performed when the player first needs to use the flashlight and, later, when the player must use the gun for the first time. The players were not told how the weapon and the flashlight could be toggled, and we did not provide them with any details about how their brain data might be used.

To evaluate the experience, we used the Game Experience Questionnaire (GEQ) coupled with a reliability analysis (Cronbach's alpha). The BrainAPI integration did not change the feeling of competence of the 36 gamers who participated in the study; however, flow and immersion increased significantly with BrainAPI compared to the classic gameplay or the "explicit"-instructions gameplay.
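The two-phase protocol can be summarised in code. The sketch below is an assumption-laden illustration rather than the actual pipeline from the paper: the feature extraction (log-variance per channel) and the LDA classifier are stand-ins, and the 0.7 confidence threshold is arbitrary. The calibration phase labels EEG epochs recorded while the game engine performs the action for the player; the online phase classifies incoming epochs and triggers the action on a confident match.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def band_power_features(epochs: np.ndarray) -> np.ndarray:
    """Crude per-channel log-variance features for (epochs x channels x samples)."""
    return np.log(np.var(epochs, axis=2))

# Priming (calibration) phase: epochs recorded while the game engine
# performs the action itself, labelled with the primed concept.
def train_decoder(flashlight_epochs, weapon_epochs, rest_epochs):
    X = np.vstack([band_power_features(e) for e in
                   (flashlight_epochs, weapon_epochs, rest_epochs)])
    y = np.concatenate([
        np.full(len(flashlight_epochs), "flashlight"),
        np.full(len(weapon_epochs), "weapon"),
        np.full(len(rest_epochs), "rest"),
    ])
    clf = LinearDiscriminantAnalysis()
    clf.fit(X, y)
    return clf

# Online phase: when the player re-enters a dark corridor or meets another
# zombie, classify the current epoch and act only on a confident match.
def decode_and_act(clf, epoch, dispatch):
    proba = clf.predict_proba(band_power_features(epoch[np.newaxis]))[0]
    label = clf.classes_[np.argmax(proba)]
    if label != "rest" and proba.max() > 0.7:  # arbitrary confidence threshold
        dispatch(label)  # e.g. the dispatch_concept sketch shown earlier
```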
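For the reliability analysis, Cronbach's alpha over the items of a GEQ subscale follows the standard formula; the snippet below is a generic implementation, not code from the study.

```python
import numpy as np

def cronbach_alpha(item_scores: np.ndarray) -> float:
    """Cronbach's alpha for a (participants x items) matrix of GEQ item scores."""
    item_scores = np.asarray(item_scores, dtype=float)
    k = item_scores.shape[1]                        # number of items in the subscale
    item_vars = item_scores.var(axis=0, ddof=1)     # per-item variance
    total_var = item_scores.sum(axis=1).var(ddof=1) # variance of subscale totals
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)
```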