Surface electroencephalography (EEG) is a standard, noninvasive way to measure electrical brain activity. By analyzing these signals, it is possible to detect activation patterns that reveal the brain mechanisms involved in a particular task. Recently, advances in the artificial intelligence (AI) community have greatly improved the automatic detection of brain patterns, enabling increasingly fast, reliable, and accessible Brain-Computer Interfaces (BCIs).
Although different paradigms can be used to communicate, interest has grown in recent years in interpreting and characterizing the “inner voice”. This paradigm, called inner speech, raises the possibility of executing an order simply by thinking about it, allowing a more ‘natural’ way of communicating. Unfortunately, since this is a recently explored field, no EEG datasets are publicly available, limiting the development of new techniques and AI algorithms for inner speech recognition. In this work we construct a ten-subject dataset using the inner speech paradigm, in order to i) better understand the brain mechanisms and patterns related to the inner voice and ii) provide the scientific community with an open-access, multiclass EEG database of inner speech commands, seeking to foster the rapid development of new AI methods for robust inner speech recognition.