Music Information Retrieval (MIR) is used in applications that organize music databases: it characterizes a user's musical taste and retrieves and classifies music. The most common approach to this task, collaborative filtering, predicts one user's taste from data gathered about other users. The method used in this project is instead based on processing the audio signal itself. To date, no MIR system achieves fully satisfactory performance.
This project uses a deep convolutional neural network to identify a given song's genre and artist. The work compares deep learning applied directly to the audio signal with deep learning applied to common hand-crafted audio features, the Mel-Frequency Cepstral Coefficients (MFCCs).
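For context on the feature-based branch of the comparison, the sketch below shows how MFCCs are typically computed from a waveform (framing, mel filterbank, log, DCT). This is a minimal NumPy illustration of the standard pipeline, not the project's actual code; all parameter values (sample rate, FFT size, hop, filter counts) are assumptions chosen for the example.

```python
import numpy as np

def mfcc(signal, sr=22050, n_fft=2048, hop=512, n_mels=40, n_mfcc=13):
    """Minimal MFCC sketch: frame -> power spectrum -> mel filterbank -> log -> DCT."""
    # Frame the signal and apply a Hann window to each frame
    frames = [signal[s:s + n_fft] * np.hanning(n_fft)
              for s in range(0, len(signal) - n_fft + 1, hop)]
    spec = np.abs(np.fft.rfft(np.array(frames), axis=1)) ** 2  # power spectrum

    # Triangular mel filterbank (HTK-style mel scale)
    def hz_to_mel(f): return 2595.0 * np.log10(1.0 + f / 700.0)
    def mel_to_hz(m): return 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    mel_pts = mel_to_hz(np.linspace(hz_to_mel(0), hz_to_mel(sr / 2), n_mels + 2))
    bins = np.floor((n_fft + 1) * mel_pts / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(1, n_mels + 1):
        l, c, r = bins[i - 1], bins[i], bins[i + 1]
        fb[i - 1, l:c] = (np.arange(l, c) - l) / max(c - l, 1)  # rising slope
        fb[i - 1, c:r] = (r - np.arange(c, r)) / max(r - c, 1)  # falling slope

    log_mel = np.log(spec @ fb.T + 1e-10)  # log mel-band energies

    # DCT-II over the mel axis; keep the first n_mfcc cepstral coefficients
    n = np.arange(n_mels)
    dct = np.cos(np.pi / n_mels * (n[:, None] + 0.5) * n[None, :])
    return (log_mel @ dct)[:, :n_mfcc]

# Toy usage: one second of a 440 Hz sine at 22.05 kHz
sig = np.sin(2 * np.pi * 440 * np.arange(22050) / 22050)
feats = mfcc(sig)
print(feats.shape)  # (number of frames, 13 coefficients per frame)
```

In the deep-learning comparison, such per-frame coefficient matrices serve as a compact input representation, in contrast to feeding the network the raw signal directly.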
This paper also presents contemporary methods for working with large-scale data.