'Alarm Detection App for the Hearing Impaired' is a research project with a practical application. The goal is to develop a robust algorithm with a short decision time that enables the hearing impaired to recognize alarm sounds in real time. The project continues a previous project carried out at SIPL. Unlike some of its predecessors, which addressed this problem with classical signal-processing tools, we approached it with deep learning algorithms, an approach that has shown promising results in recent years, aiming for better and more generic performance. While various papers suggest applying deep learning to audio data, the uniqueness of this project lies in solving a binary classification problem in which the rate of falsely detecting non-alarms as alarms must be negligible. Our algorithm applies a Convolutional Neural Network (CNN) to images of Short-Time Fourier Transforms (STFTs). We achieved better performance than the previous project and various papers, about 97% accuracy, but the results still do not demonstrate that everyday use free of false alerts is feasible, and we therefore do not expect such usage yet. After the gains from refining and extending our dataset with various methods reached saturation, we plan in Project B to find a model more specific to the time-frequency characteristics of sirens and alarms.
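To illustrate the front end of such a pipeline, the sketch below converts an audio clip into a log-magnitude STFT "image" of the kind a CNN classifier could consume. It is a minimal sketch only: the sampling rate, window length, overlap, and normalization are illustrative assumptions, not the parameters actually used in the project.

```python
import numpy as np
from scipy import signal


def stft_image(audio, fs=16000, nperseg=512, noverlap=256):
    """Turn a 1-D audio clip into a normalized log-magnitude STFT
    'image' suitable as CNN input.

    Window/overlap sizes here are illustrative assumptions, not the
    project's actual configuration.
    """
    _, _, Z = signal.stft(audio, fs=fs, nperseg=nperseg, noverlap=noverlap)
    img = 20.0 * np.log10(np.abs(Z) + 1e-10)  # dB scale; epsilon avoids log(0)
    # Normalize to [0, 1] so the CNN sees a consistent input range.
    img = (img - img.min()) / (img.max() - img.min() + 1e-10)
    return img


# Example: a 1-second 1 kHz tone as a stand-in for an alarm clip.
fs = 16000
t = np.arange(fs) / fs
clip = np.sin(2 * np.pi * 1000.0 * t)
img = stft_image(clip, fs=fs)
print(img.shape)  # (frequency bins, time frames)
```

Each clip thus becomes a fixed-height 2-D array (frequency bins by time frames), which is what allows standard image-oriented CNN architectures to be applied to the audio classification task.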