The project’s goal was to build a demo classifier based on EEG signals recorded by the Muse headset. We ran three stimulus experiments measured with the Muse: tones at different frequencies, sounds played to different ears, and observation of differently colored circles. In each experiment one event was frequent (appearing 80% of the time) and the other rare (20%), and we attempted to classify between the two classes. The classifiers were SVM and MDRM, evaluated by accuracy, precision, recall, and F1-score. The auditory experiments yielded poor results, but a modified version of the visual experiment, in which the positions of the circles were varied, achieved satisfying results. Finally, we tried to adapt the classifier across users and over time. Adaptation across users did not work, since the users performed the experiment differently; adaptation over time improved the results. Therefore, to create a working demo, the Muse will need to be calibrated for each user separately.
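With the 80/20 frequent/rare split described above, accuracy alone is a misleading measure: a degenerate classifier that always predicts the frequent class already scores 80% accuracy while never detecting the rare event. This is why precision, recall, and F1-score were reported alongside accuracy. The following sketch (not taken from the project code; the function name and data are illustrative) shows the effect on a toy 80/20 label set:

```python
def binary_metrics(y_true, y_pred, positive=1):
    """Compute accuracy, precision, recall, and F1 for binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p != positive)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return accuracy, precision, recall, f1

# Toy 80/20 split: 0 = frequent event, 1 = rare event.
y_true = [0] * 8 + [1] * 2
# A classifier that always predicts the frequent class:
y_pred = [0] * 10
acc, prec, rec, f1 = binary_metrics(y_true, y_pred)
# acc is 0.8 despite zero precision, recall, and F1 on the rare class.
```

In practice the project used standard implementations of these measures; the point of the sketch is only that high accuracy under this class imbalance does not imply the rare event is being detected.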