Deep CNNs with Rotational Filters for Rotation Invariant Character Recognition

Erik Barrow, Mark Eastwood, Chrisina Jayne

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

This paper explores the use of parallel columns of convolutional layers with tied weights, where the input to each column in a layer is presented at a different rotation, to create a rotation-invariant deep convolutional neural network (CNN). The outputs of the columns are combined using a winner-takes-all pooling method to produce approximate rotation invariance, with the approximation improving as the rotation increment between parallel columns shrinks. Applying the rotation-invariant deep CNN to the rotated MNIST and CHARS74K test data showed a large improvement over a traditional deep CNN: a 52.32% increase in accuracy on the MNIST dataset and a 36.44% increase on the CHARS74K dataset. The paper also introduces a Caffe implementation of the method for use in object recognition research.
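The column scheme described in the abstract can be sketched in a few lines of NumPy. This is a hypothetical minimal illustration, not the paper's Caffe implementation: it uses a single hand-rolled filter instead of learned deep columns, and 90-degree rotation steps (which are exact on the pixel grid) instead of the finer increments the paper discusses. Each "column" applies the same (tied) filter to a differently rotated copy of the input, and a winner-takes-all max merges the columns into one feature.

```python
import numpy as np

def conv2d_valid(x, k):
    """Plain 'valid' cross-correlation; a stand-in for one conv layer."""
    h, w = k.shape
    out = np.empty((x.shape[0] - h + 1, x.shape[1] - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + h, j:j + w] * k)
    return out

def rotation_invariant_feature(image, kernel, n_rot=4):
    """One rotation-invariant feature from parallel columns.

    Every column shares (ties) the same kernel but sees the input
    rotated by a different multiple of 90 degrees; a winner-takes-all
    (max) pool then merges the column responses.
    """
    responses = []
    for k in range(n_rot):
        rotated = np.rot90(image, k)          # this column's input rotation
        fmap = conv2d_valid(rotated, kernel)  # tied convolution weights
        responses.append(fmap.max())          # global max pool per column
    return max(responses)                     # winner takes all over columns

rng = np.random.default_rng(0)
img = rng.standard_normal((8, 8))
ker = rng.standard_normal((3, 3))
f_orig = rotation_invariant_feature(img, ker)
f_rot = rotation_invariant_feature(np.rot90(img), ker)
print(np.isclose(f_orig, f_rot))  # → True: invariant to 90-degree rotations
```

Because rotating the input by 90 degrees merely permutes which column sees which rotated copy, the set of column responses is unchanged, so the max over them is exactly invariant; with finer (non-90-degree) increments the rotations are no longer exact on the grid, which is why the paper's invariance is approximate and improves as the increment shrinks.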
Original language: English
Title of host publication: 2018 International Joint Conference on Neural Networks (IJCNN) Proceedings
Publisher: IEEE
Number of pages: 8
ISBN (Electronic): 9781509060146
ISBN (Print): 9781509060153
DOIs
Publication status: E-pub ahead of print - 15 Oct 2018
Event: 2018 International Joint Conference on Neural Networks (IJCNN) - Rio de Janeiro, Brazil
Duration: 8 Jul 2018 – 13 Jul 2018

Publication series

Name: International Joint Conference on Neural Networks (IJCNN)
Publisher: IEEE
Volume: 2018
ISSN (Electronic): 2161-4407

Conference

Conference: 2018 International Joint Conference on Neural Networks (IJCNN)
Abbreviated title: IJCNN 18
Country/Territory: Brazil
City: Rio de Janeiro
Period: 8/07/18 – 13/07/18
