In this project, we developed a fast and robust algorithm that enables the hearing impaired to recognize alarm sounds in real time. The project continues a previous project carried out in SIPL. Unlike our predecessors, who addressed this problem with classical signal-processing tools, we approached it with deep learning, an approach that has shown promising results in recent years. While various papers suggest applying deep learning to audio signals, the uniqueness of this project is solving a binary classification problem in which the rate of false detections (non-alarms classified as alarms) must be negligible. In Part A, we applied a Convolutional Neural Network (CNN) to images of Short-Time Fourier Transforms (STFTs) of the audio signal. This approach achieved higher accuracy than the previous project, but still not enough for a practical application. In Part B, we applied transfer learning to a CNN trained on Google's large AudioSet dataset and achieved substantially better results: 99.5% accuracy (compared to 70%-80% in the previous project), a 0% false-alarm rate, and a 0.5% missed-detection rate.
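To illustrate the input representation used in Part A, the sketch below computes a log-magnitude STFT "image" of an audio clip, the kind of two-dimensional array that can be fed to a CNN. This is an illustrative example, not the project's actual code: the function name `stft_image`, the window parameters, and the synthetic 1 kHz test tone are all assumptions chosen for the demonstration.

```python
import numpy as np
from scipy.signal import stft

def stft_image(signal, fs, nperseg=256, noverlap=128):
    """Compute a log-magnitude STFT 'image' suitable as CNN input.

    Returns a 2-D array of shape (frequency bins, time frames).
    """
    f, t, Z = stft(signal, fs=fs, nperseg=nperseg, noverlap=noverlap)
    # Log scale compresses the dynamic range, a common choice for
    # spectrogram inputs to image-based classifiers.
    return 20 * np.log10(np.abs(Z) + 1e-10)

# Synthetic one-second "alarm": a pure 1 kHz tone sampled at 16 kHz
# (a toy stand-in for a real recorded alarm sound).
fs = 16000
tt = np.arange(fs) / fs
alarm = np.sin(2 * np.pi * 1000.0 * tt)

img = stft_image(alarm, fs)
print(img.shape)
```

With a 256-sample window, the image has 129 frequency rows; the number of time columns depends on the clip length and the 128-sample hop. In such a pipeline, each fixed-length audio segment is converted to one such image, and the CNN learns to classify the image as alarm or non-alarm.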