Unexpected input detection in deep neural networks

Deep neural networks (DNNs) are powerful models that achieve high performance on a wide range of tasks in machine learning, computer vision, speech and audio recognition, and natural language processing. Leading DNN architectures are known to generalize well and achieve impressive performance when tested on samples drawn from the distribution observed during training.
However, DNNs tend to behave unpredictably when they encounter input drawn from an unfamiliar distribution. In such cases, an out-of-distribution (OOD) input causes most models to mispredict and produce unreliable results. This behavior raises serious concerns about the reliability and applicability of DNNs in real-world scenarios.
In this work, we will focus on leading DNN-based algorithms trained to tackle a variety of machine learning problems. First, we will examine their behavior on out-of-distribution or otherwise unexpected input. Then, we will propose an algorithm that allows leading DNN architectures to accurately detect out-of-distribution input while maintaining their high performance on in-distribution data.
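
To make the detection task concrete, the sketch below shows the standard maximum-softmax-probability (MSP) baseline of Hendrycks & Gimpel (2017), which flags inputs on which a classifier is unusually unconfident. This is only an illustration of the problem setting, not the algorithm proposed in this work; the model is supplied by the caller, and the threshold value is an arbitrary placeholder that would normally be calibrated on held-out in-distribution data.

```python
# Minimal sketch of the maximum-softmax-probability (MSP) baseline for
# OOD detection (Hendrycks & Gimpel, 2017). Illustrative only; the model
# and threshold below are placeholders, not the method proposed here.
import torch
import torch.nn.functional as F

def msp_score(model: torch.nn.Module, x: torch.Tensor) -> torch.Tensor:
    """Return the maximum softmax probability per input in the batch.

    Lower scores suggest the input may be out-of-distribution.
    """
    model.eval()
    with torch.no_grad():
        logits = model(x)                # shape: (batch, num_classes)
        probs = F.softmax(logits, dim=-1)
    return probs.max(dim=-1).values      # shape: (batch,)

def flag_ood(model: torch.nn.Module, x: torch.Tensor,
             threshold: float = 0.5) -> torch.Tensor:
    """Flag inputs whose confidence falls below the threshold.

    The threshold is normally calibrated on held-out in-distribution
    data; 0.5 here is an arbitrary placeholder.
    """
    return msp_score(model, x) < threshold
```

A key design point of such confidence-based baselines is that they require no retraining: they post-process the outputs of an already trained classifier, which is why they serve as a natural reference point for the detection algorithm proposed in this work.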
