In this project, we present a classifier for sound signals of different dolphin calls from a pre-tagged database. The classifier is based on a deep-learning method and comprises three stages: transforming the raw signal into a spectrogram, pre-processing the spectrogram, and classifying it with a deep convolutional neural network. We show that, despite the small original database, prior knowledge of the typical features of dolphin-call spectrograms can guide the network to converge on features relevant to the problem, thus preventing over-fitting. The suggested model applies basic image-processing methods to the spectrograms in order to enhance the shapes relevant for correct classification, as well as to remove noise and artifacts added by the time-frequency transformation. To overcome the need for a very large number of samples to train a neural network, we assist the learning process by pre-training the network on a synthetic database we generated. Finally, we show that after training the network on the synthetic dataset, a final training pass of the classifier on the original dataset achieves 95% accuracy with low over-fit.
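The first two stages described above (raw signal to spectrogram, then pre-processing into a CNN-ready image) can be sketched as follows. This is an illustrative example only: the sample rate, window parameters, and the synthetic chirp standing in for a recorded call are all assumptions, not the authors' actual pipeline.

```python
import numpy as np
from scipy.signal import spectrogram

fs = 96_000  # assumed sample rate (Hz)
t = np.arange(0, 1.0, 1.0 / fs)
# Synthetic whistle-like chirp standing in for a recorded dolphin call.
signal = np.sin(2 * np.pi * (8_000 + 4_000 * t) * t)

# Stage 1: time-frequency transform (window/overlap values are illustrative).
f, times, Sxx = spectrogram(signal, fs=fs, nperseg=1024, noverlap=512)

# Stage 2 (simplified pre-processing): log-scale and normalize to [0, 1],
# a common way to turn a spectrogram into an image suitable for a CNN.
log_S = 10 * np.log10(Sxx + 1e-12)
img = (log_S - log_S.min()) / (log_S.max() - log_S.min())

print(img.shape)  # (frequency bins, time frames)
```

The normalized array `img` would then be passed (after any further cleaning, such as the noise removal the abstract mentions) to the convolutional network for classification.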