Crossing a road is a dangerous activity for pedestrians, and pedestrian crossings and intersections therefore often include pedestrian-directed traffic lights. These lights may be accompanied by audio signals to aid the visually impaired. In many cases where such an audio signal is not available, a visually impaired pedestrian cannot cross the road without help. In this project, we propose a technique that may help visually impaired people by detecting pedestrian traffic lights and their state (walk/don't walk) in video taken with a mobile phone camera. The proposed technique consists of two main modules: an object detector based on a deep convolutional network, Tiny YOLO, and a decision module. We evaluate accuracy and runtime, and compare the results with those of a previous project, which combined Faster R-CNN with a KCF tracker, in order to assess the improvement in performance. The proposed technique is designed to run on a mobile phone in a client-server architecture. It proves to be both fast and accurate, with a running time of 6 ms per frame on a desktop computer with a GeForce GTX 1080 GPU and a detection accuracy of more than 99%.
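To make the two-module design concrete, the following is a minimal illustrative sketch (not the authors' implementation) of a decision module: it takes per-frame detections from the object detector, assumed here to be `(label, confidence)` pairs, and maps them to a walk / don't-walk / unknown signal. The function name `decide`, the label strings, and the confidence threshold are all hypothetical.

```python
# Hypothetical decision module: picks the state of the most confident
# pedestrian-light detection in a frame, or "unknown" if none is reliable.

WALK, DONT_WALK, UNKNOWN = "walk", "dont_walk", "unknown"

def decide(detections, threshold=0.5):
    """Return the crossing signal implied by a frame's detections.

    detections: list of (label, confidence) pairs produced by the detector.
    threshold: minimum confidence (assumed value) for a detection to count.
    """
    confident = [(label, conf) for label, conf in detections if conf >= threshold]
    if not confident:
        return UNKNOWN  # no sufficiently confident detection in this frame
    label, _ = max(confident, key=lambda d: d[1])
    return label

print(decide([("walk", 0.92), ("dont_walk", 0.31)]))  # -> walk
```

In a full system, a module like this would typically also smooth decisions over several consecutive frames before announcing a state change to the user.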