How does the Google "Did you mean?" Algorithm work? [closed]

I've been developing an internal website for a portfolio management tool. There is a lot of text data, company names, etc. I've been really impressed with some search engines' ability to respond very quickly to queries with "Did you mean: xxxx".

I need to be able to intelligently take a user query and respond not only with raw search results but also with a "Did you mean?" response when there is a highly likely alternative answer.

[I'm developing in ASP.NET (VB - don't hold it against me!)]

UPDATE: OK, how can I mimic this without the millions of 'unpaid users'?

  • Generate typos for each 'known' or 'correct' term and perform lookups?
  • Some other more elegant method?
Nucleo answered 20/11, 2008 at 23:34 Comment(5)
Here is the VB.NET version of the Norvig Spelling Corrector. You may find this useful if it is not too late!Castile
possible duplicate of How do you implement a "Did you mean"?Schizo
I type on a non-qwerty keyboard (Colemak) and the feature isn't half as clever. It surely learns from recorded mistake-correction pairs and is thus tuned to qwerty. Ordinary spell checkers work fine for my keyboard, as expected—string edit distance is layout-invariant.Regeneration
I’m voting to close this question because Machine learning (ML) theory questions are off-topic on Stack Overflow - gift-wrap candidate for Cross-ValidatedRosenquist
#58065520Israelitish

Here's the explanation directly from the source (almost)

Search 101!

at min 22:03

Worth watching!

Basically, according to Douglas Merrill, former CTO of Google, it works like this:

1) You type a (misspelled) word into Google

2) You don't find what you wanted (you don't click on any results)

3) You realize you misspelled the word, so you retype it in the search box

4) You find what you want (you click on the first links)

This pattern, multiplied millions of times, shows what the most common misspellings are and what the most "common" corrections are.

This way Google can offer spelling correction, almost instantaneously, in every language.

This also means that if overnight everyone started to spell night as "nigth", Google would suggest that word instead.

EDIT

@ThomasRutter: Douglas describes it as "statistical machine learning".

They know who corrected the query because they know which query comes from which user (using cookies).

If users perform a query, and only 10% of them click on a result while 90% go back and type another query (with the corrected word), and this time that 90% click on a result, then they know they have found a correction.
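
As a rough illustration of that click signal, here is a toy sketch of mining correction pairs from session logs. The log format, field names and thresholds are invented for the example; this is not Google's actual pipeline:

    from collections import Counter, defaultdict

    # Each log record: (user_id, timestamp_seconds, query, clicked_a_result)
    def mine_corrections(log, max_gap_seconds=60, min_support=100):
        """Count (misspelling, correction) pairs: a query with no click, quickly
        followed by another query from the same user that did get a click."""
        by_user = defaultdict(list)
        for user, ts, query, clicked in log:
            by_user[user].append((ts, query, clicked))

        pair_counts = Counter()
        for events in by_user.values():
            events.sort()
            for (t1, q1, c1), (t2, q2, c2) in zip(events, events[1:]):
                if not c1 and c2 and q1 != q2 and t2 - t1 <= max_gap_seconds:
                    pair_counts[(q1, q2)] += 1

        # Keep only reformulations seen often enough to trust as corrections
        return {pair: n for pair, n in pair_counts.items() if n >= min_support}

Aggregated over enough traffic, the surviving (misspelling, correction) pairs effectively form a spelling-correction table keyed by the raw query.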

They can also tell whether two queries are "related" or simply different, because they have information about all the links they show.

Furthermore, they are now including context in the spell check, so they can even suggest a different word depending on the context.

See this demo of Google Wave (at 44m 06s), which shows how context is taken into account to automatically correct the spelling.

Here is an explanation of how that natural language processing works.

And finally, here is an awesome demo of what can be done by adding automatic machine translation (at 1h 12m 47s) to the mix.

I've added minute-and-second anchors to the videos so you can skip directly to the content; if they don't work, try reloading the page or scrubbing to the mark by hand.

Osteology answered 20/11, 2008 at 23:35 Comment(7)
How does the algorithm work though? How does Google go from "We receive billions of searches with various terms, and these are those searches" to "this term must therefore be a common misspelling of this term"? They have solved this problem, but I am interested in how. How do they figure out that two searches are from the same user, and which word is a 'correction' of another, and how do they aggregate this over billions of searches?Terzetto
If everyone started misspelling "night" ... I believe they already ran into this with people searching for "Flickr."Magically
What do factorials have to do with the question?Rafaello
the problem with everyone misspelling something has already happened in a much more severe sense: Try typing 'fuscia' into Google. Google says "Did you mean fuschia?" The correct spelling, in fact, is "fuchsia," but no one can spell it correctly for some reason. The problem is even worse on Dictionary.com; if you type "fuschia" into their search, it gives you "No results for fuschia. Did you mean 'fuschia'?" (i.e., did you mean what you just typed?)Afrikah
I don't believe they only use misspelling data - there's definitely some Levenshtein distance or similar going on - search for 'Plack' (and one or more other words) and it always gets corrected to 'black', which is a very unlikely misspelling/typoAirsickness
@Airsickness no it's not unlikely if you use another keyboard mapping than qwerty (eg: Bépo).Retort
@Jakub I think they have fixed the problem since I made that comment 4+ years ago. Indeed, Google has also fixed the problem. A search for fuschia includes results for fuchsia automatically.Afrikah

I found this article some time ago: How to Write a Spelling Corrector, written by Peter Norvig (Director of Research at Google Inc.).

It's an interesting read about the "spelling correction" topic. The examples are in Python but it's clear and simple to understand, and I think that the algorithm can be easily translated to other languages.

A short description of the algorithm follows. It consists of two steps: preparation and word checking.

Step 1: Preparation - setting up the word database

It is best if you can use actual search words and their occurrence counts. If you don't have that, a large body of text can be used instead. Count the occurrence (popularity) of each word.

Step 2: Word checking - finding words that are similar to the one checked

Similar means that the edit distance is low (typically 0-1 or 0-2). The edit distance is the minimum number of inserts/deletes/changes/swaps needed to transform one word to another.

Choose the most popular word from the previous step and suggest it as a correction (if other than the word itself).
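
For reference, here is a condensed sketch of those two steps, adapted from the approach in Norvig's article (the corpus file name big.txt is just a placeholder):

    import re
    from collections import Counter

    # Step 1: preparation - count how often each word occurs in a large body of text
    WORDS = Counter(re.findall(r"[a-z]+", open("big.txt").read().lower()))

    def edits1(word):
        """All strings one insert/delete/replace/transpose away from word."""
        letters = "abcdefghijklmnopqrstuvwxyz"
        splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
        deletes = [a + b[1:] for a, b in splits if b]
        transposes = [a + b[1] + b[0] + b[2:] for a, b in splits if len(b) > 1]
        replaces = [a + c + b[1:] for a, b in splits if b for c in letters]
        inserts = [a + c + b for a, b in splits for c in letters]
        return set(deletes + transposes + replaces + inserts)

    def known(words):
        return {w for w in words if w in WORDS}

    # Step 2: word checking - prefer known words at edit distance 0, then 1, then 2,
    # and among those pick the most popular one
    def correction(word):
        candidates = (known([word]) or known(edits1(word))
                      or known(e2 for e1 in edits1(word) for e2 in edits1(e1))
                      or [word])
        return max(candidates, key=lambda w: WORDS[w])

With a typical English corpus, correction("speling") returns "spelling".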

Misapprehend answered 20/11, 2008 at 23:41 Comment(2)
@Davide: """the examples are in python but it's clear and simple to understand""": I don't understand your use of "but" ... I'd say given Python + Norvig's writing style, "clear and simple to understand" is the expected outcome.Oxheart
The "but" was there because Harry said in his question that he is a VB.NET developer, so I assumed he wasn't confident with python language.Misapprehend

For the theory behind "did you mean" algorithms, you can refer to Chapter 3 of Introduction to Information Retrieval. It is available online for free. Section 3.3 (page 52) answers your question exactly. And to specifically answer your update, you only need a dictionary of words and nothing else (in particular, no millions of users).

Inflexion answered 21/11, 2008 at 0:55 Comment(0)

Hmm... I thought that google used their vast corpus of data (the internet) to do some serious NLP (Natural Language Processing).

For example, they have so much data from the entire internet that they can count the number of times a three-word sequence occurs (known as a trigram). So if they see a sentence like: "pink frugr concert", they could see it has few hits, then find the most likely "pink * concert" in their corpus.
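
A toy version of that trigram idea, assuming you already have trigram counts harvested from a corpus (the counts below are invented for the example):

    from collections import Counter

    # Invented trigram counts standing in for statistics over a huge corpus
    trigram_counts = Counter({
        ("pink", "floyd", "concert"): 50000,
        ("pink", "panther", "concert"): 1200,
        ("pink", "frugr", "concert"): 2,
    })

    def best_middle_word(left, right):
        """Most frequent trigram of the form 'left * right'; returns its middle word."""
        candidates = {t: n for t, n in trigram_counts.items()
                      if t[0] == left and t[2] == right}
        if not candidates:
            return None
        return max(candidates, key=candidates.get)[1]

    print(best_middle_word("pink", "concert"))  # -> 'floyd'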

They apparently just do a variation of what Davide Gualano was saying, though, so definitely read that link. Google does of course use all web-pages it knows as a corpus, so that makes its algorithm particularly effective.

Funest answered 20/11, 2008 at 23:45 Comment(0)

My guess is that they use a combination of a Levenshtein distance algorithm and the masses of data they collect regarding the searches that are run. They could pull a set of searches that have the shortest Levenshtein distance from the entered search string, then pick the one with the most results.
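
For completeness, here is a standard dynamic-programming implementation of the Levenshtein distance that this answer (and several others) relies on:

    def levenshtein(a, b):
        """Minimum number of single-character inserts, deletes and substitutions
        needed to turn string a into string b."""
        prev = list(range(len(b) + 1))          # distances for the empty prefix of a
        for i, ca in enumerate(a, start=1):
            curr = [i]
            for j, cb in enumerate(b, start=1):
                cost = 0 if ca == cb else 1
                curr.append(min(prev[j] + 1,          # delete ca
                                curr[j - 1] + 1,      # insert cb
                                prev[j - 1] + cost))  # substitute ca -> cb
            prev = curr
        return prev[-1]

    print(levenshtein("plack", "black"))  # -> 1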

Cotto answered 20/11, 2008 at 23:57 Comment(1)
Let's say that you have a total of billions of web pages' worth of words stored. There is no easy way to index Levenshtein distance for fast retrieval of near matches without calculating the Levenshtein distance some billions of times for every word queried. Levenshtein distance is therefore not of much use in this situation, at least not in the first stage, where Google needs to narrow down from billions of existing words to just those words which are likely to be misspellings of the current word. It can definitely apply Levenshtein as a later step once it has already fetched likely matches.Terzetto

Normally a production spelling corrector utilizes several methodologies to provide a spelling suggestion. Some are:

  • Decide on a way to determine whether spelling correction is required. Signals may include insufficient results, results which are not specific or accurate enough (according to some measure), etc. Then:

  • Use a large body of text or a dictionary where all, or most, words are known to be correctly spelled. These are easily found online, in places such as LingPipe. To determine the best suggestion, look for the word which is the closest match based on several measures. The most intuitive one is shared characters, but research and experimentation have shown that two- or three-character sequence matches (bigrams and trigrams) work better. To further improve results, assign a higher score to a match at the beginning or end of the word. For performance reasons, index all these words as trigrams or bigrams, so that when you perform a lookup you convert the query to n-grams and look it up via a hashtable or trie (see the sketch after this list).

  • Use heuristics related to potential keyboard mistakes based on character location, so that "hwllo" is corrected to "hello" because 'w' is close to 'e'.

  • Use a phonetic key (Soundex, Metaphone) to index the words and look up possible corrections. In practice this normally returns worse results than using n-gram indexing, as described above.

  • In each case you must select the best correction from a list. This may be done with a distance metric such as Levenshtein, the keyboard metric, etc.

  • For a multi-word phrase, only one word may be misspelled, in which case you can use the remaining words as context in determining a best match.
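
For the n-gram indexing bullet above, a minimal sketch of indexing dictionary words by character trigrams and looking candidates up through a hashtable (the padding and threshold are just one possible choice):

    from collections import defaultdict

    def char_trigrams(word):
        padded = "$" + word + "$"   # padding gives word boundaries their own trigrams
        return {padded[i:i + 3] for i in range(len(padded) - 2)}

    def build_index(dictionary):
        index = defaultdict(set)
        for word in dictionary:
            for gram in char_trigrams(word):
                index[gram].add(word)
        return index

    def candidates(query, index, min_shared=2):
        """Dictionary words sharing at least min_shared trigrams with the query,
        ranked by how many trigrams they share."""
        hits = defaultdict(int)
        for gram in char_trigrams(query):
            for word in index[gram]:
                hits[word] += 1
        return sorted((w for w, n in hits.items() if n >= min_shared),
                      key=lambda w: -hits[w])

    index = build_index(["hello", "help", "hollow", "yellow"])
    print(candidates("hwllo", index))  # -> ['hello']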

Crisis answered 16/4, 2009 at 18:7 Comment(0)

Use Levenshtein distance, then create a Metric Tree (or Slim tree) to index words. Then run a 1-Nearest Neighbour query, and you've got the result.
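
As one concrete possibility (the answer leaves the exact structure open), here is a minimal sketch of a BK-tree, a common metric-tree variant keyed by edit distance; it answers near-neighbour range queries without scanning the whole dictionary, and taking the closest hit gives the 1-nearest-neighbour suggestion:

    def levenshtein(a, b):
        # The same dynamic-programming edit distance shown under an earlier answer
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, start=1):
            curr = [i]
            for j, cb in enumerate(b, start=1):
                curr.append(min(prev[j] + 1, curr[j - 1] + 1,
                                prev[j - 1] + (ca != cb)))
            prev = curr
        return prev[-1]

    class BKTree:
        """Metric tree: each child edge is labelled with the edit distance
        between the child word and its parent word."""
        def __init__(self, words):
            it = iter(words)
            self.root = (next(it), {})   # node = (word, {distance: child_node})
            for word in it:
                self._add(word)

        def _add(self, word):
            node = self.root
            while True:
                parent, children = node
                d = levenshtein(word, parent)
                if d == 0:
                    return                      # word already present
                if d in children:
                    node = children[d]
                else:
                    children[d] = (word, {})
                    return

        def search(self, word, max_dist=2):
            """All indexed words within max_dist edits of word, as (distance, word)."""
            results, stack = [], [self.root]
            while stack:
                parent, children = stack.pop()
                d = levenshtein(word, parent)
                if d <= max_dist:
                    results.append((d, parent))
                # Triangle inequality: only branches with |k - d| <= max_dist can match
                for k, child in children.items():
                    if d - max_dist <= k <= d + max_dist:
                        stack.append(child)
            return sorted(results)

    tree = BKTree(["book", "books", "cake", "boo", "cape", "cart"])
    print(tree.search("bok", max_dist=1))  # -> [(1, 'boo'), (1, 'book')]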

Paintbrush answered 4/10, 2009 at 18:7 Comment(0)

Google apparently suggests queries with the best results, not those which are spelled correctly. But in this case a spell corrector would probably be more feasible. Of course you could store some value for every query, based on some metric of how good the results it returns are.

So,

  1. You need a dictionary (English or based on your data).

  2. Generate a word trellis and calculate probabilities for the transitions using your dictionary.

  3. Add a decoder to calculate the minimum error distance using your trellis. Of course you should take care of insertions and deletions when calculating distances. A nice touch is to weight substitutions by key adjacency on a QWERTY keyboard, so that hitting a key close to the intended one costs less ("cae" would turn into "car", "cay" would turn into "cat"); see the sketch after this list.

  4. Return the word which has the minimum distance.

  5. Then you could compare that to your query database and check if there are better results for other close matches.
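
For the keyboard-aware part of step 3, a small sketch of what an adjacency-weighted substitution cost could look like. The adjacency map is abbreviated and the 0.5 weight is arbitrary; this is a weighted edit distance rather than a full trellis decoder:

    # Abbreviated QWERTY adjacency map; a real one would cover the whole keyboard
    ADJACENT = {
        "a": "qwsz", "c": "xdfv", "e": "wsdr", "r": "edft",
        "t": "rfgy", "y": "tghu",
    }

    def substitution_cost(intended, typed):
        """Cheaper to confuse keys that sit next to each other on the keyboard."""
        if intended == typed:
            return 0.0
        return 0.5 if typed in ADJACENT.get(intended, "") else 1.0

    def weighted_edit_distance(typed_word, candidate):
        prev = [float(j) for j in range(len(candidate) + 1)]
        for i, ct in enumerate(typed_word, start=1):
            curr = [float(i)]
            for j, cc in enumerate(candidate, start=1):
                curr.append(min(prev[j] + 1.0,                        # deletion
                                curr[j - 1] + 1.0,                    # insertion
                                prev[j - 1] + substitution_cost(cc, ct)))
            prev = curr
        return prev[-1]

    # "cae" ends up closer to "car" than to "cat" because e and r are adjacent keys
    print(weighted_edit_distance("cae", "car"),
          weighted_edit_distance("cae", "cat"))  # -> 0.5 1.0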

Cassilda answered 21/11, 2008 at 1:17 Comment(0)

Here is the best answer I found: a spelling corrector implemented and described by Google's Director of Research, Peter Norvig.

If you want to read more about the theory behind this, you can read his book chapter.

The idea of this algorithm is based on statistical machine learning.

Hinman answered 12/3, 2014 at 6:29 Comment(0)

As a guess... it could

  1. search for words
  2. if it is not found use some algorithm to try to "guess" the word.

It could be something from AI like a Hopfield network or a back-propagation network, or something else for "identifying fingerprints", restoring broken data, or spelling corrections, as Davide already mentioned...

Clarke answered 20/11, 2008 at 23:45 Comment(0)

I saw something on this a few years back, so it may have changed since, but apparently they started by analysing their logs for the same users submitting very similar queries in a short space of time, and used machine learning based on how users had corrected themselves.

Survive answered 20/11, 2008 at 23:46 Comment(0)

Apart from the above answers, in case you want to implement something yourself quickly, here is a suggestion:

Algorithm

You can find the implementation and detailed documentation of this algorithm on GitHub; a simplified sketch also follows the list below.

  • Create a Priority Queue with a comparator.
  • Create a Ternary Search Tree and insert all English words (from Norvig's post) along with their frequencies.
  • Start traversing the TST, and for every word encountered in the TST, calculate its Levenshtein distance (LD) from input_word.
  • If LD ≤ 3, then put it in the Priority Queue.
  • At last, extract 10 words from the Priority Queue and display them.
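
A simplified sketch of those steps; for brevity it walks a plain word-frequency dictionary instead of a real Ternary Search Tree, and it is not the GitHub implementation referenced above:

    import heapq

    def levenshtein(a, b):
        # Standard dynamic-programming edit distance
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, start=1):
            curr = [i]
            for j, cb in enumerate(b, start=1):
                curr.append(min(prev[j] + 1, curr[j - 1] + 1,
                                prev[j - 1] + (ca != cb)))
            prev = curr
        return prev[-1]

    def suggest(input_word, word_frequencies, max_ld=3, top_n=10):
        """Collect words within max_ld edits of input_word, then return the top_n,
        preferring smaller edit distance and, for ties, higher frequency."""
        heap = []
        for word, freq in word_frequencies.items():   # stand-in for the TST traversal
            ld = levenshtein(input_word, word)
            if ld <= max_ld:
                heapq.heappush(heap, (ld, -freq, word))
        return [word for _, _, word in heapq.nsmallest(top_n, heap)]

    freqs = {"spelling": 120, "spewing": 30, "selling": 80, "smelling": 25}
    print(suggest("speling", freqs))  # -> ['spelling', 'spewing', 'selling', 'smelling']
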
Lone answered 8/6, 2017 at 4:53 Comment(0)

Simple. They have tons of data. They have statistics for every possible term, based on how often it is queried, and what variations of it usually yield results the users click... so, when they see you typed a frequent misspelling for a search term, they go ahead and propose the more usual answer.

Actually, if the misspelling is in effect the most frequently searched term, the algorithm will take it for the right one.

Outlandish answered 20/11, 2008 at 23:48 Comment(1)
Nobody has doubted that Google has all the necessary data to do this, but the question was asking for details on how Google has come up with an algorithm to do this, with so much data, in a reasonable amount of time. They would have gazillions of searches a day - how do they easily identify whether a search term is a 'spelling correction' of another, recent one? What factors make Google decide that one term is a misspelling of another? These are implementation details that would be of interest.Terzetto

Regarding your question of how to mimic the behavior without having tons of data - why not use the tons of data collected by Google? Download the Google search results for the misspelled word and search for "Did you mean:" in the HTML.

I guess that's called mashup nowadays :-)

Berget answered 21/11, 2008 at 0:57 Comment(2)
how long until google stops your bot from scraping? - or wouldn't google even notice these days?Nucleo
I don't think they'll notice if the reqs/sec aren't too high.Feeler

Do you mean a spell checker? If it is a spell checker rather than a whole phrase, then I've got a link about spell checking where the algorithm is developed in Python. Check this link

Meanwhile, I am also working on a project that includes searching databases using text. I guess this would solve your problem.

Suprasegmental answered 13/7, 2011 at 11:49 Comment(0)

This is an old question, and I'm surprised that nobody suggested that the OP use Apache Solr.

Apache Solr is a full-text search engine that, besides much other functionality, also provides spellchecking and query suggestions. From the documentation:

By default, the Lucene Spell checkers sort suggestions first by the score from the string distance calculation and second by the frequency (if available) of the suggestion in the index.

Mithgarthr answered 6/3, 2012 at 20:29 Comment(0)

There is a specific data structure - ternary search tree - that naturally supports partial matches and near-neighbor matches.

Centerboard answered 7/9, 2009 at 11:24 Comment(0)

The easiest way to figure it out is to Google "dynamic programming".

It's an algorithm that's been borrowed from information retrieval and is used heavily in modern-day bioinformatics to see how similar two gene sequences are.

The optimal solution uses dynamic programming and recursion.

This is a well-solved problem with lots of solutions. Just Google around until you find some open source code.

Wagshul answered 21/11, 2008 at 1:5 Comment(0)
