Training feedforward neural network for OCR [closed]

Currently I'm learning about neural networks and I'm trying to create an application that can be trained to recognize handwritten characters. For this problem I use a feed-forward neural network, and it seems to work when I train it to recognize 1, 2, or 3 different characters. But when I try to make the network learn more than 3 characters, it stagnates at an error rate of around 40-60%.

I tried multiple layers and fewer/more neurons, but I can't seem to get it right, so now I'm wondering whether a feed-forward neural network is capable of recognizing that much information.

Some statistics:

Network type: Feed-forward neural network

Input neurons: 100 (a 10 * 10 grid is used to draw the characters)

Output neurons: the number of characters to recognize

Does anyone know what the possible flaw in my architecture is? Are there too many input neurons? Is a feed-forward neural network not capable of character recognition?

Kenyon answered 13/3, 2012 at 12:48 Comment(4)
How many hidden neurons are you using? – Forever
Input and output neurons seem fine for your task, but how do you train your network? What algorithm do you use? How do you initialize the weights? – Submissive
I tried using backpropagation and a genetic algorithm. I also tried it with one hidden layer of 70 neurons, and once with 2 hidden layers (70 and 40 neurons). – Kenyon
What was the solution in the end? Which of the 5 points did make a difference? – Brigettebrigg

For handwritten character recognition you need

  1. many training examples (maybe you should create distortions of your training set)
  2. a softmax activation function in the output layer
  3. the cross-entropy error function
  4. training with stochastic gradient descent
  5. a bias in each layer
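Points 2-5 can be sketched in a few lines. This is a minimal NumPy illustration on random toy data, not the answerer's exact setup: the layer sizes, learning rate, tanh hidden layer, and full-batch updates are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    # Subtract the row max for numerical stability before exponentiating.
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(p, y):
    # Mean negative log-probability of the true class.
    return -np.log(p[np.arange(len(y)), y] + 1e-12).mean()

# Toy data: 30 "characters" drawn on a 10x10 binary grid, 3 classes.
X = rng.integers(0, 2, size=(30, 100)).astype(float)
y = rng.integers(0, 3, size=30)

# One hidden layer; note the bias in each layer (point 5).
W1 = rng.normal(0, 0.1, (100, 32)); b1 = np.zeros(32)
W2 = rng.normal(0, 0.1, (32, 3));   b2 = np.zeros(3)

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h, softmax(h @ W2 + b2)   # softmax output layer (point 2)

loss_before = cross_entropy(forward(X)[1], y)

lr = 0.5
for epoch in range(500):
    h, p = forward(X)
    # With softmax + cross-entropy (point 3), the output-layer
    # gradient is simply p - one_hot(y), averaged over the batch.
    g = p.copy()
    g[np.arange(len(y)), y] -= 1
    g /= len(y)
    dW2 = h.T @ g; db2 = g.sum(axis=0)
    dh = (g @ W2.T) * (1 - h ** 2)   # backprop through tanh
    dW1 = X.T @ dh; db1 = dh.sum(axis=0)
    # Gradient-descent step (point 4; full batch here for brevity,
    # stochastic mini-batches would shuffle and slice X instead).
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

loss_after = cross_entropy(forward(X)[1], y)
```

On this toy problem the loss drops well below its initial value, which is roughly ln(3) for a near-uniform untrained softmax over 3 classes.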

A good test problem is the handwritten digit data set MNIST. Here are papers that successfully applied neural networks to this data set:

Y. LeCun, L. Bottou, Y. Bengio and P. Haffner: Gradient-Based Learning Applied to Document Recognition, http://yann.lecun.com/exdb/publis/pdf/lecun-98.pdf

Dan Claudiu Ciresan, Ueli Meier, Luca Maria Gambardella, Juergen Schmidhuber: Deep Big Simple Neural Nets Excel on Handwritten Digit Recognition, http://arxiv.org/abs/1003.0358

I trained an MLP with 784-200-50-10 architecture and got >96% accuracy on the test set.

Myall answered 13/3, 2012 at 21:4 Comment(0)

You probably want to follow Lectures 3 and 4 at http://www.ml-class.org. Professor Ng has solved this exact problem: he classifies the 10 digits (0...9). Some of the things he did in the class that get him to 95% training accuracy are:

  • Input neurons: 400 (20x20)
  • Hidden layers: 2
  • Size of hidden layers: 25
  • Activation function: sigmoid
  • Training method: gradient descent
  • Data size: 5000
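The forward pass for that configuration can be sketched as follows, assuming NumPy and random illustrative weights (an untrained net, so the prediction is meaningless until the weights are learned by gradient descent):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(1)

# A 20x20 input image flattened to 400 values, two hidden layers of
# 25 units each, and 10 output classes (digits 0-9), as listed above.
sizes = [400, 25, 25, 10]
weights = [rng.normal(0, 0.1, (m, n)) for m, n in zip(sizes, sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

def predict(x):
    a = x
    for W, b in zip(weights, biases):
        a = sigmoid(a @ W + b)   # sigmoid activation at every layer
    return int(np.argmax(a))     # index of the most activated output

digit = predict(rng.random(400))
```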
Unread answered 14/3, 2012 at 21:12 Comment(1)
That course is now at coursera.org/learn/machine-learning and I think "9. Neural Networks: Learning" is the part you are referring to. – Settlings

Examine this example program: Handwritten Digit Recognition

The program uses the Semeion Handwritten Digit Data Set with the FANN library.

Layette answered 14/3, 2012 at 22:6 Comment(0)

I had a similar problem some time ago trying to identify handwritten digits using the MNIST dataset. My feedforward neural net was giving an accuracy of about 92% on the validation set but frequently misclassified the images I gave it.

I fixed this problem by adding a hidden layer to my net and using RMSProp. The net now gives around 97% accuracy and correctly classifies the images I give it.

Moreover, if your cost isn't decreasing, it probably means that your learning rate is too high or your net is stuck in a local minimum. In that situation, you could try decreasing your learning rate and your initial weights.
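For reference, RMSProp scales each gradient step by a running average of recent gradient magnitudes, which makes it less sensitive to the raw learning rate than plain gradient descent. A minimal NumPy sketch of the update rule on a toy one-dimensional problem (the function, parameter names, and hyperparameter values are illustrative):

```python
import numpy as np

def rmsprop_step(w, grad, cache, lr=0.01, decay=0.9, eps=1e-8):
    # Keep an exponential moving average of the squared gradient...
    cache = decay * cache + (1 - decay) * grad ** 2
    # ...and divide the step by its root, so parameters with large,
    # noisy gradients take smaller effective steps.
    w = w - lr * grad / (np.sqrt(cache) + eps)
    return w, cache

# Minimize f(w) = w^2 starting from w = 5; the gradient is 2w.
w, cache = np.array([5.0]), np.zeros(1)
for _ in range(5000):
    w, cache = rmsprop_step(w, 2 * w, cache)
```

After enough steps w settles close to the minimum at 0, oscillating within roughly a learning-rate-sized neighborhood.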

Knowall answered 5/4, 2017 at 9:49 Comment(0)
