What are some algorithms for symbol-by-symbol handwriting recognition?

I think there are some algorithms that evaluate the difference between the drawn symbol and the expected one, or something like that. Any help will be appreciated :))

Astray asked 28/11, 2011 at 17:39 Comment(5)
There are two kinds of handwriting recognition: recognition of symbols as they are drawn (online) and recognition of already-drawn symbols (offline). There are different recognition techniques for each. Which of these are you more interested in? - Recountal
One draws ONE symbol at a time; the machine recognizes it at once and clears the input for the next one. - Astray
English or non-English? It's a big difference. - Fubsy
You should study Palm's Graffiti (en.wikipedia.org/wiki/Graffiti_%28Palm_OS%29). I love it: it's easy to learn the alphabet (for the user), and also easy (for the PDA) to recognise, and the accuracy is far better than recognising free handwriting. - Ochs
A special alphabet should really not be necessary any longer: Palm devices had extremely limited resources when Graffiti was developed. On today's hardware, with modern algorithms, excellent results can be obtained from naturally written characters. Also, language should make no difference for character-by-character recognition; differences will arise from different alphabets: Latin, Arabic, Mandarin... - Lynx

You can implement a simple Neural Network to recognize handwritten digits. The simplest type to implement is a feed-forward network trained via backpropagation (it can be trained stochastically or in batch-mode). There are a few improvements that you can make to the backpropagation algorithm that will help your neural network learn faster (momentum, Silva and Almeida's algorithm, simulated annealing).
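
To make the momentum idea concrete, here is a minimal sketch of one stochastic backpropagation step with a momentum term, written in Python with numpy purely for illustration (this answer's own code is in Java). All layer sizes and hyperparameters below are placeholder values, not tuned choices:

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    rng = np.random.default_rng(0)
    n_in, n_hidden, n_out = 784, 100, 10
    W1 = rng.normal(0, 0.1, (n_hidden, n_in))   # input -> hidden weights
    W2 = rng.normal(0, 0.1, (n_out, n_hidden))  # hidden -> output weights
    vW1, vW2 = np.zeros_like(W1), np.zeros_like(W2)  # momentum buffers
    lr, mu = 0.1, 0.9  # learning rate and momentum coefficient

    def train_step(x, t):
        """One stochastic update for input vector x and one-hot target t."""
        global W1, W2, vW1, vW2
        # forward pass
        h = sigmoid(W1 @ x)
        y = sigmoid(W2 @ h)
        # backward pass: error terms for squared error with sigmoid units
        d_out = (y - t) * y * (1 - y)
        d_hid = (W2.T @ d_out) * h * (1 - h)
        # momentum: each update mixes in a fraction of the previous one,
        # which smooths the stochastic gradient and speeds up learning
        vW2 = mu * vW2 - lr * np.outer(d_out, h)
        vW1 = mu * vW1 - lr * np.outer(d_hid, x)
        W2, W1 = W2 + vW2, W1 + vW1

    # e.g. train_step(rng.random(784), np.eye(10)[3])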

As for measuring the difference between a drawn symbol and an expected image, one algorithm that I've seen used is the k-nearest-neighbor algorithm. Here is a paper that describes using k-nearest-neighbors for character recognition (edit: I had the wrong link earlier. The link I've provided requires you to pay for the paper; I'm trying to find a free version).
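
For intuition, here is a minimal k-nearest-neighbors sketch over flattened character images; train_images and train_labels are placeholder names for whatever labelled training set you use:

    import numpy as np

    def knn_classify(x, train_images, train_labels, k=5):
        """Label x by majority vote among its k closest training images."""
        # Euclidean distance from x to every stored training example
        dists = np.linalg.norm(train_images - x, axis=1)
        nearest = np.argsort(dists)[:k]
        # the most common label among the k nearest neighbours wins
        values, counts = np.unique(train_labels[nearest], return_counts=True)
        return values[np.argmax(counts)]

Note that k-NN has no training phase at all; the cost is paid at query time, since every stored example is compared against the symbol being classified.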

If you were using a neural network to recognize your characters, the steps involved would be as follows (a sketch tying the steps together appears after the list):

  1. Design your neural network with an appropriate training algorithm. I suggest starting with the simplest (stochastic backpropagation) and then improving the algorithm as desired while you train your network.
  2. Get a good sample of training data. For my neural network, which recognizes handwritten digits, I used the MNIST database.
  3. Convert the training data into an input vector for your neural network. For the MNIST data, you will need to binarize the images. I used a threshold value of 128. I started with Otsu's method, but that didn't give me the results I wanted.
  4. Create your network. Since the MNIST images come as 28x28 arrays, you have an input vector with 784 components plus 1 bias (so 785 inputs) to your neural network. I used one hidden layer, with the number of nodes set as per the guidelines outlined here (along with a bias). Your output vector will have 10 components (one for each digit).
  5. Randomly present the training data (randomly ordered digits, with a random input image for each digit) to your network and train it until it reaches a desired error level.
  6. Run test data (MNIST data comes with this as well) against your neural network to verify that it recognizes digits correctly.
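
Here is a rough end-to-end sketch of steps 3-6 (plain stochastic backpropagation this time, without momentum). The random stand-in arrays exist only so the snippet runs; substitute real MNIST images and labels. For brevity, the hidden layer's bias from step 4 is omitted:

    import numpy as np

    rng = np.random.default_rng(0)
    # stand-in data so the sketch runs; substitute real MNIST arrays here
    mnist_images = rng.integers(0, 256, (1000, 28, 28))
    mnist_labels = rng.integers(0, 10, 1000)

    def prepare(images):
        # step 3: binarize at threshold 128 and unroll to 784 components,
        # then append a constant bias input (so 785 inputs per image)
        x = (images.reshape(len(images), -1) >= 128).astype(float)
        return np.hstack([x, np.ones((len(x), 1))])

    def one_hot(labels, n=10):
        t = np.zeros((len(labels), n))
        t[np.arange(len(labels)), labels] = 1.0
        return t

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # step 4: 785 inputs, one hidden layer, 10 outputs (one per digit)
    W1 = rng.normal(0, 0.1, (100, 785))
    W2 = rng.normal(0, 0.1, (10, 100))

    X, T = prepare(mnist_images), one_hot(mnist_labels)
    for epoch in range(5):
        # step 5: present the training examples in random order
        for i in rng.permutation(len(X)):
            h = sigmoid(W1 @ X[i])
            y = sigmoid(W2 @ h)
            d_out = (y - T[i]) * y * (1 - y)
            d_hid = (W2.T @ d_out) * h * (1 - h)
            W2 -= 0.1 * np.outer(d_out, h)
            W1 -= 0.1 * np.outer(d_hid, X[i])

    # step 6: run the (held-out) test set through the trained network;
    # here the training set is reused only because the data is fake
    preds = np.argmax(sigmoid(W2 @ sigmoid(W1 @ X.T)), axis=0)
    print("accuracy:", np.mean(preds == mnist_labels))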

You can check out an example here (shameless plug) that tries to recognize handwritten digits. I trained the network using data from MNIST.

Expect to spend some time getting yourself up to speed on neural network concepts, if you decide to go this route. It took me at least 3-4 days of reading and writing code before I actually understood the concept. A good resource is heatonresearch.com. I recommend starting with trying to implement neural networks to simulate the AND, OR, and XOR boolean operations (using a threshold activation function). This should give you an idea of the basic concepts. When it actually comes down to training your network, you can try to train a neural network that recognizes the XOR boolean operator; it's a good place to start for an introduction to learning algorithms.
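
As a concrete version of that warm-up, here is one way to wire threshold units by hand for AND, OR, and XOR (the weights shown are one classic choice among many). XOR is the interesting case: it is not linearly separable, so a single threshold unit cannot compute it and a hidden layer is required:

    def threshold_unit(inputs, weights, bias):
        """Fires (returns 1) when the weighted sum plus bias exceeds 0."""
        return 1 if sum(i * w for i, w in zip(inputs, weights)) + bias > 0 else 0

    def AND(a, b): return threshold_unit((a, b), (1, 1), -1.5)
    def OR(a, b):  return threshold_unit((a, b), (1, 1), -0.5)

    def XOR(a, b):
        # two-layer wiring: a XOR b == (a OR b) AND NOT (a AND b)
        return threshold_unit((OR(a, b), AND(a, b)), (1, -1), -0.5)

    for a in (0, 1):
        for b in (0, 1):
            print(a, b, "->", AND(a, b), OR(a, b), XOR(a, b))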

When it comes to building the neural network, you can use existing frameworks like Encog, but I found it far more satisfying to build the network myself (you learn more that way, I think). If you want to look at some source, you can check out a project that I have on github (shameless plug) that has some basic classes in Java that help you build and train simple neural networks.

Good luck!

EDIT

I've found a few sources that use k-nearest-neighbors for digit and/or character recognition:

For resources on Neural Networks, I found the following links to be useful:

Islington answered 8/12, 2011 at 0:6 Comment(7)
The paper you linked uses a neural network, not K-nearest-neighbor. - Recountal
@PeterO. You're right. I got the link wrong. The paper I was referring to was by D.Y. Lee. The title is "Handwritten Digit Recognition Using K Nearest-neighbor, Radial Basis Function and Backpropagation Neural Networks". I can't seem to find the link to it, though, for some reason. I will update my answer. Thanks for pointing it out. - Islington
Ah, neural networks... reminds me of my years at uni when I wanted to work in AI... +1 for the nostalgia, those were the days... - Fontanel
@Fontanel It is a fun topic. I recently got the opportunity to create a neural network of my own for a class project. I found the experience very educational! - Islington
Oh yes, I can remember the feeling when you see your network begin to actually learn and do things better than you would have imagined :) Like you created something alive. - Fontanel
Designing a neural net seems to be the most efficient (and most difficult) way to do it. Can you give me some introductory tutorial links? These aren't easy at all. - Astray
I started with this. Unfortunately, neural networks are pretty math-heavy (especially the back-propagation algorithm), so you really can't get away from it. - Islington

Addendum

If you have not implemented machine learning algorithms before, you should really check out www.ml-class.org

It's a free class taught by Andrew Ng, Director of the Stanford Machine Learning Centre. The course is taught entirely online and focuses specifically on implementing a wide range of machine learning algorithms. It does not go too deeply into the theoretical intricacies of the algorithms, but rather teaches you how to choose, implement, and use them, and how to diagnose their performance. It is unique in that your implementation of the algorithms is checked automatically! It's great for getting started in machine learning, as you have instantaneous feedback.

The class also includes at least two exercises on recognising handwritten digits (Programming Exercise 3, with multinomial classification, and Programming Exercise 4, with feed-forward neural networks).

The class started a while ago, but it should still be possible to sign up. If not, a new run should start early next year. If you want to be able to check your implementations, you need to sign up for the "Advanced Track".

One way to implement handwriting recognition

The answer to this question depends on a number of factors, including what kind of resource constraints you have (e.g. an embedded platform) and whether you have a good library of correctly labelled symbols, i.e. different examples of handwritten letters for which you know which letter each represents.

If you have a decent-sized library, implementing a quick-and-dirty standard machine learning algorithm is probably the way to go. You can use multinomial classifiers, neural networks or support vector machines.

I believe a support vector machine would be the fastest to implement, as there are excellent libraries out there that handle the machine learning portion of the code for you, e.g. libSVM. If you are familiar with using machine learning algorithms, this should take you less than 30 minutes to implement.

The basic procedure you would probably want to implement is as follows (a sketch covering both the training and recognition phases appears at the end of this section):

Learning what symbols "look like"

  1. Binarise the images in your library.
  2. Unroll the images into vectors / 1-D arrays.
  3. Pass the "vector representation" of the images in your library, together with their labels, to libSVM so that it learns how pixel coverage relates to the symbol each image represents.
  4. The algorithm gives you back a set of model parameters which describe the recognition algorithm that was learned.

You should repeat 1-4 for each character you want to recognise to get an appropriate set of model parameters.

Note: you only have to carry out steps 1-4 once for your library (once per symbol you want to recognise). You can do this on your developer machine and include only the resulting parameters in the code you ship / distribute.

If you want to recognise a symbol:

Each set of model parameters describes an algorithm which tests whether a symbol represents one specific character or not. You "recognise" a symbol by testing all the models against the current symbol and then selecting the model that best fits it.

This testing is done by again passing the model parameters, together with the symbol to test in unrolled form, to the SVM library, which will return the goodness-of-fit for the tested model.
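
As a sketch of the whole procedure, here is one possible version in Python using scikit-learn, whose SVC class is built on libSVM (calling libSVM directly follows the same shape with a different API). The random images and labels are stand-ins for a real labelled symbol library:

    import numpy as np
    from sklearn.svm import SVC  # scikit-learn's SVC wraps libSVM

    # stand-in library: replace with real binarized symbol images + labels
    rng = np.random.default_rng(0)
    images = rng.integers(0, 2, (500, 28, 28))   # step 1: binarized images
    labels = rng.integers(0, 10, 500)            # which symbol each shows

    # step 2: unroll each image into a 1-D feature vector
    X = images.reshape(len(images), -1).astype(float)

    # steps 3-4: learn one binary model per symbol ("this symbol or not?")
    models = {}
    for symbol in np.unique(labels):
        clf = SVC(kernel="linear")
        clf.fit(X, labels == symbol)  # positive class: images of this symbol
        models[symbol] = clf          # the fitted object holds the model parameters

    def recognise(image):
        """Test every model and keep the one with the best goodness-of-fit."""
        x = image.reshape(1, -1).astype(float)
        # decision_function gives the signed distance from the separating
        # hyperplane; larger means a more confident "yes" for that symbol
        scores = {s: m.decision_function(x)[0] for s, m in models.items()}
        return max(scores, key=scores.get)

    print(recognise(images[0]), "expected:", labels[0])

In practice, scikit-learn can also handle the multi-class case directly (clf.fit(X, labels)), which hides this one-model-per-symbol bookkeeping.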

Lynx answered 7/12, 2011 at 9:49 Comment(0)

Have you checked out Detexify? I think it does pretty much what you want: http://detexify.kirelabs.org/classify.html

It is open source, so you can take a look at how it is implemented. You can get the code from here (if I recall correctly, it is in Haskell): https://github.com/kirel/detexify-hs-backend

In particular, what you are looking for should be in Sim.hs.

I hope it helps

Houston answered 9/12, 2011 at 11:53 Comment(1)
No idea, but you could ask Daniel Kirsch ([email protected]); he is very friendly. - Houston
