Man vs. computer: benchmarking machine learning algorithms for traffic sign recognition

Johannes Stallkamp, Marc Schlipsing, Jan Salmen, Christian Igel


Abstract

Traffic signs are characterized by a wide variability in their visual appearance in real-world environments. For example, changes in illumination, varying weather conditions and partial occlusions affect the perception of road signs. In practice, a large number of different sign classes must be recognized with very high accuracy. Traffic signs have been designed to be easily readable for humans, who perform very well at this task. For computer systems, however, classifying traffic signs still appears to pose a challenging pattern recognition problem. Both image processing and machine learning algorithms are continuously refined to improve on this task, yet little systematic comparison of such systems exists. What is the status quo? Do today's algorithms reach human performance? To assess the performance of state-of-the-art machine learning algorithms, we present a publicly available traffic sign dataset with more than 50,000 images of German road signs in 43 classes. The data was used in the second stage of the German Traffic Sign Recognition Benchmark held at IJCNN 2011. The results of this competition are reported and the best-performing algorithms are briefly described. Convolutional neural networks (CNNs) showed particularly high classification accuracies in the competition. We measured the performance of human subjects on the same data, and the CNNs outperformed the human test persons.
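
To make the classification setting concrete, below is a minimal sketch of a small CNN classifier for the 43-class problem described in the abstract. It is not the competition-winning architecture: the layer sizes, the 32x32 RGB input resolution, and the use of PyTorch are illustrative assumptions, chosen only to show the shape of such a model.

import torch
import torch.nn as nn

# Illustrative CNN for 43-class traffic sign classification.
# Layer sizes are assumptions, not the architecture used by the
# competition entrants; images are assumed resized to 32x32 RGB.
class TrafficSignCNN(nn.Module):
    def __init__(self, num_classes: int = 43):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1),  # 32x32 -> 32x32
            nn.ReLU(),
            nn.MaxPool2d(2),                             # -> 16x16
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                             # -> 8x8
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 8 * 8, 128),
            nn.ReLU(),
            nn.Linear(128, num_classes),                 # logits over 43 classes
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# Sanity check on a dummy batch of four 32x32 RGB crops.
model = TrafficSignCNN()
logits = model(torch.randn(4, 3, 32, 32))
print(logits.shape)  # torch.Size([4, 43])

In practice such a model would be trained with a cross-entropy loss on the GTSRB training split; the benchmark entries discussed in the paper used substantially larger and more carefully tuned networks.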
Original language: English
Journal: Neural Networks
Volume: 32
Pages (from-to): 323–332
Number of pages: 10
ISSN: 0893-6080
Publication status: Published - Aug 2012
Event: 2011 International Joint Conference on Neural Networks, San Jose, California, United States
Duration: 31 Jul 2011 – 5 Aug 2011

Conference

Conference: 2011 International Joint Conference on Neural Networks
Country/Territory: United States
City: San Jose, California
Period: 31/07/2011 – 05/08/2011
