Scoring System Suggestion - weighted mechanism?
I'm trying to validate a series of words that are provided by users. I'm trying to come up with a scoring system that will determine the likelihood that the series of words are indeed valid words.

Assume the following input:

xxx yyy zzz

The first thing I do is check each word individually against a database of words that I have. So, let's say that xxx was in the database, so we are 100% sure it's a valid word. Then let's say that yyy doesn't exist in the database, but a possible variation of its spelling exist (say yyyy). We don't give yyy a score of 100%, but maybe something lower (let's say 90%). Then zzz just doesn't exist at all in the database. So, zzz gets a score of 0%.

So we have something like this:

xxx = 100%
yyy = 90%
zzz = 0%

Assume further that the users are going to either:

  1. Provide a list of all valid words (most likely)
  2. Provide a list of all invalid words (likely)
  3. Provide a list of a mix of valid and invalid words (not likely)

As a whole, what is a good scoring system to determine a confidence score that xxx yyy zzz is a series of valid words? I'm not looking for anything too complex, but getting the average of the scores doesn't seem right. If some words in the list of words are valid, I think it increases the likelihood that the word not found in the database is an actual word also (it's just a limitation of the database that it doesn't contain that particular word).

NOTE: The input will generally be a minimum of 2 words (and mostly 2 words), but can be 3, 4, 5 (and maybe even more in some rare cases).

Upheaval answered 26/3, 2013 at 20:31 Comment(4)
You can decide the score of a word according to its minimum Levenshtein distance from the other words in your database. Sounds quite slow though. (en.wikipedia.org/wiki/Levenshtein_distance)Surfeit
It's not a string distance algorithm I need. I need a weighted scoring system.Upheaval
To begin to answer your question, let's come up with target numbers and bounds for the most basic use cases. What would you imagine to be the confidence score for xxx yyy when one is in the DB and the other is definitely invalid? Would the confidence score be different if the words were in a different order? Other more general questions: What are real life use cases for this system? (Some sort of CAPTCHA?) Could there potentially be adversarial users of this system? Etc..Casein
If xxx yyy zzz were all in the database, I think the total confidence score should be 100%. If none of the words are in the database, then possibly 0%. In your example, if xxx yyy was the input and only one of them were in the database and the other wasn't, then MAYBE it's 50%? There is no such thing as "definitely invalid"; there is however a "definitely valid." The idea is to figure out the likelihood that the groups of words are valid. CAPTCHA would be an application of such a scoring system.Upheaval
EDIT I have added a new section looking at discriminating word groups into English and non-English groups. This is below the section on estimating whether any given word is English.


I think you intuit that the scoring system you've explained here doesn't quite do justice to this problem.

It's great to find words that are in the dictionary - those words can immediately be given 100% and passed over, but what about non-matching words? How can you determine their probability? This can be explained by a simple comparison between sentences comprising exactly the same letters:

  1. Abergrandly recieved wuzkinds
  2. Erbdnerye wcgluszaaindid vker

Neither sentence has any English words, but the first sentence looks English - it might be about someone (Abergrandly) who received (there was a spelling mistake) several items (wuzkinds). The second sentence is clearly just my infant hitting the keyboard.

So, in the example above, even though there is no English word present, the probability it's spoken by an English speaker is high. The second sentence has a 0% probability of being English.

I know a couple of heuristics to help detect the difference:

Simple frequency analysis of letters

A typical distribution of letters in English. From Wikipedia

In any language, some letters are more common than others. Simply counting the incidence of each letter and comparing it to the language's average tells us a lot.

There are several ways you could calculate a probability from it. One might be:

  1. Preparation
    1. Compute or obtain the frequencies of letters in a suitable English corpus. The NLTK is an excellent way to begin. The associated Natural Language Processing with Python book is very informative.
  2. The Test
    1. Count the number of occurrences of each letter in the phrase to test
    2. Compute the Linear regression where the co-ordinate of each letter-point is:
      1. X axis: Its predicted frequency from 1.1 above
      2. Y axis: The actual count
    3. Perform a Regression Analysis on the data
      1. English should report a positive r close to 1.0. Compute the R^2 as a probability that this is English.
      2. An r of 0 or below is either no correlation to English, or the letters have a negative correlation. Not likely English.
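The regression test above can be sketched as follows. This is a minimal, hypothetical `english_score` helper with approximate English letter frequencies hardcoded; a real implementation would derive them from an NLTK corpus as in step 1.1:

```python
from math import sqrt
from string import ascii_lowercase

# Approximate English letter frequencies (percent); the exact values
# matter little for this sketch.
ENGLISH_FREQ = {
    'e': 12.7, 't': 9.1, 'a': 8.2, 'o': 7.5, 'i': 7.0, 'n': 6.7,
    's': 6.3, 'h': 6.1, 'r': 6.0, 'd': 4.3, 'l': 4.0, 'c': 2.8,
    'u': 2.8, 'm': 2.4, 'w': 2.4, 'f': 2.2, 'g': 2.0, 'y': 2.0,
    'p': 1.9, 'b': 1.5, 'v': 1.0, 'k': 0.8, 'j': 0.15, 'x': 0.15,
    'q': 0.1, 'z': 0.07,
}

def english_score(phrase: str) -> float:
    """Pearson r between expected letter frequencies (X axis) and
    observed letter counts (Y axis); r**2 serves as a rough
    'probability of English', with non-positive r mapped to 0."""
    counts = {c: 0 for c in ascii_lowercase}
    for c in phrase.lower():
        if c in counts:
            counts[c] += 1
    xs = [ENGLISH_FREQ[c] for c in ascii_lowercase]
    ys = [counts[c] for c in ascii_lowercase]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    if sx == 0 or sy == 0:
        return 0.0
    r = cov / (sx * sy)
    return 0.0 if r <= 0 else r * r

print(english_score("Abergrandly recieved wuzkinds"))  # positive score
print(english_score("zzzzqqqqxxxx"))                   # 0.0: negative correlation
```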

Advantages:

  • Very simple to calculate

Disadvantages:

  • Will not work so well for small samples, eg "zebra, xylophone"
  • "Rrressseee" would seem a highly probable word
  • Does not discriminate between the two example sentences I gave above.

Bigram frequencies and Trigram frequencies

This is an extension of letter frequencies, but looks at the frequency of letter pairs or triplets. For example, a u follows a q with 99% frequency (why not 100%? dafuq). Again, the NLTK corpus is incredibly useful.

Digraph frequency based on a sample of 40,000 words

Above from: http://www.math.cornell.edu/~mec/2003-2004/cryptography/subs/digraphs.jpg

This approach is widely used across the industry, in everything from speech recognition to predictive text on your soft keyboard.

Trigraphs are especially useful. Consider that 'll' is a very common digraph. The string 'lllllllll' therefore consists of only common digraphs and the digraph approach makes it look like a word. Trigraphs resolve this because 'lll' never occurs.

The calculation of this probability of a word using trigraphs can't be done with a simple linear regression model (the vast majority of trigrams will not be present in the word and so the majority of points will be on the x axis). Instead you can use Markov chains (using a probability matrix of either bigrams or trigrams) to compute the probability of a word. An introduction to Markov chains is here.

First build a matrix of probabilities:

  • X axis: Every bigram ("th", "he", "in", "er", "an", etc)
  • Y axis: The letters of the alphabet.
  • The matrix members consist of the probability of the letter of the alphabet following the bigraph.

To start computing probabilities from the start of the word, the X-axis digraphs need to include space-a, space-b, up to space-z; eg the digraph space-t represents a word starting with t.

Computing the probability of the word consists of iterating over digraphs and obtaining the probability of the third letter given the digraph. For example, the word "they" is broken down into the following probabilities:

  • h following "space" t -> probability x%
  • e following th -> probability y%
  • y following he -> probability z%

Overall probability = x * y * z %
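A minimal sketch of this Markov-chain scoring follows. The `train_trigraph_model` helper is hypothetical, and for self-containment it is trained on a toy five-word list rather than an NLTK corpus; `^^` padding plays the role of the "space" contexts described above:

```python
from collections import defaultdict

def train_trigraph_model(corpus_words):
    """Count, for each two-letter context ('^' pads the word start),
    how often each next letter follows, then normalize to probabilities.
    A tiny stand-in for training on a real corpus."""
    counts = defaultdict(lambda: defaultdict(int))
    for word in corpus_words:
        padded = "^^" + word.lower()
        for i in range(len(padded) - 2):
            counts[padded[i:i + 2]][padded[i + 2]] += 1
    model = {}
    for ctx, nxt in counts.items():
        total = sum(nxt.values())
        model[ctx] = {c: n / total for c, n in nxt.items()}
    return model

def word_probability(word, model):
    """Multiply P(letter | previous two letters) along the word;
    an unseen trigraph gives probability 0."""
    padded = "^^" + word.lower()
    prob = 1.0
    for i in range(len(padded) - 2):
        prob *= model.get(padded[i:i + 2], {}).get(padded[i + 2], 0.0)
    return prob

model = train_trigraph_model(["they", "them", "then", "there", "these"])
print(word_probability("they", model))  # positive: every trigraph was seen
print(word_probability("wcgl", model))  # 0.0: no word starts with 'w' here
```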

This computation solves the issues for a simple frequency analysis by highlighting the "wcgl" as having a 0% probability.

Note that the probability of any given word will be very small and becomes smaller by between 10x and 20x per extra character. However, examining the probability of known English words of 3, 4, 5, 6, etc characters from a large corpus, you can determine a cutoff below which the word is highly unlikely. Each highly unlikely trigraph will drop the likelihood of being English by 1 to 2 orders of magnitude.

You might then normalize the probability of a word, for example, for 8-letter English words (I've made up the numbers below):

  • Probabilities from Markov chain:
    • Probability of the best English word = 10^-7 (10% * 10% * .. * 10%)
    • Cutoff (Probability of least likely English word) = 10^-14 (1% * 1% * .. * 1%)
    • Probability for test word (say "coattail") = 10^-12
  • 'Normalize' results
    • Take logs: Best = -7; Test = -12; Cutoff = -14
    • Make positive: Best = 7; Test = 2; Cutoff = 0
    • Normalize between 1.0 and 0.0: Best = 1.0; Test = 0.28; Cutoff = 0.0
    • (You can easily adjust the higher and lower bounds to, say, between 90% and 10%)
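The normalization steps above can be written as a small function; the default bounds are the made-up 8-letter-word numbers from the example:

```python
from math import log10

def normalized_score(p_test, p_best=1e-7, p_cutoff=1e-14):
    """Map a raw Markov-chain word probability onto [0, 1] by taking
    logs and rescaling between the best-word and cutoff probabilities,
    as described above. p_best and p_cutoff are illustrative."""
    if p_test <= 0:
        return 0.0
    best, test, cutoff = log10(p_best), log10(p_test), log10(p_cutoff)
    score = (test - cutoff) / (best - cutoff)
    return min(1.0, max(0.0, score))

print(normalized_score(1e-12))  # ~0.29, matching the 'coattail' example
```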

Now we've examined how to get a better probability that any given word is English, let's look at the group of words.

The group's definition is that it's a minimum of 2 words, but can be 3, 4, 5 or (in a small number of cases) more. You don't mention that there is any overriding structure or associations between the words, so I am not assuming:

  • That any group is a phrase, eg "tank commander", "red letter day"
  • That the group is a sentence or clause, eg " I am thirsty", "Mary needs an email"

However if this assumption is wrong, then the problem becomes more tractable for larger word-groups because the words will conform to English's rules of syntax - we can use, say, the NLTK to parse the clause to gain more insight.


Looking at the probability that a group of words is English

OK, in order to get a feel of the problem, let's look at different use cases. In the following:

  • I am going to ignore the cases of all words or all not-words as those cases are trivial
  • I will consider English-like words that can't be assumed to be in a dictionary, like weird surnames (eg Kardashian), unusual product names (eg stackexchange) and so on.
  • I will use simple averages of the probabilities assuming that random gibberish is 0% while English-like words are at 90%.

Two words

  1. (50%) Red ajkhsdjas
  2. (50%) Hkdfs Friday
  3. (95%) Kardashians program
  4. (95%) Using Stackexchange

From these examples, I think you would agree that 1. and 2. are likely not acceptable whereas 3. and 4. are. The simple average calculation appears a useful discriminator for two word groups.

Three words

With one suspect word:

  1. (67%) Red dawn dskfa
  2. (67%) Hskdkc communist manifesto
  3. (67%) Economic jasdfh crisis
  4. (97%) Kardashian fifteen minutes
  5. (97%) stackexchange user experience

Clearly 4. and 5. are acceptable.

But what about 1., 2. or 3.? Are there any material differences between 1., 2. or 3.? Probably not, ruling out using Bayesian statistics. But should these be classified as English or not? I think that's your call.

With two suspect words:

  1. (33%) Red ksadjak adsfhd
  2. (33%) jkdsfk dsajjds manifesto
  3. (93%) Stackexchange emails Kardashians
  4. (93%) Stackexchange Kardashian account

I would hazard that 1. and 2. are not acceptable, but 3. and 4 definitely are. (Well, except the Kardashians' having an account here - that does not bode well). Again the simple averages can be used as a simple discriminator - and you can choose if it's above or below 67%.

Four words

The number of permutations starts getting wild, so I'll give only a few examples:

  1. One suspect word:
    1. (75%) Programming jhjasd language today
    2. (93%) Hopeless Kardashian tv series
  2. Two suspect words:
    1. (50%) Programming kasdhjk jhsaer today
    2. (95%) Stackexchange implementing Kasdashian filter
  3. Three suspect words:
    1. (25%) Programming sdajf jkkdsf kuuerc
    2. (93%) Stackexchange bitifying Kardashians tweetdeck

In my mind, which word groups are meaningful clearly aligns with the simple average, with the exception of 2.1 - that's again your call.

Interestingly, the cutoff point for four-word groups might be different from three-word groups, so I'd recommend that your implementation have a different configuration setting for each group size. Having different cutoffs is a consequence of the fact that the jump from 2->3 and then 3->4 words does not mesh with the idea of smooth, continuous probabilities.

Implementing different cutoff values for these groups directly addresses your intuition: "Right now, I just have a 'gut' feeling that my xxx yyy zzz example really should be higher than 66.66%, but I'm not sure how to express it as a formula."
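A sketch of that per-group-size cutoff idea; the cutoff values below are placeholders you would tune for each group size:

```python
def accept(word_scores, cutoffs=None):
    """Simple-average discriminator with a per-group-size cutoff,
    as suggested above. The cutoff values are illustrative."""
    if cutoffs is None:
        cutoffs = {2: 0.50, 3: 0.60, 4: 0.60}
    avg = sum(word_scores) / len(word_scores)
    return avg >= cutoffs.get(len(word_scores), 0.60)

print(accept([1.0, 0.9]))       # True: average 0.95 clears the 2-word cutoff
print(accept([1.0, 0.0, 0.0]))  # False: average ~0.33 misses the 3-word cutoff
```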

Five words

You get the idea - I'm not going to enumerate any more here. However, as you get to five words, it starts to get enough structure that several new heuristics can come in:

  1. Use of Bayesian probabilities/statistics (what is the probability of the third word being a word given that the first two were?)
  2. Parsing the group using the NLTK and looking at whether it makes grammatical sense

Problem cases

English has a number of very short words, and this might cause a problem. For example:

  1. Gibberish: r xu r
  2. Is this English? I am a

You may have to write code to specifically test for 1 and 2 letter words.

TL;DR Summary

  1. Non-dictionary words can be tested for how 'English' (or French or Spanish, etc) they are using letter and trigram frequencies. Picking up English-like words and attributing them a high score is critical to distinguish English groups
  2. Up to four words, a simple average has great discriminatory power, but you probably want to set a different cutoff for 2 words, 3 words and 4 words.
  3. Five words and above you can probably start using Bayesian statistics
  4. Longer word groups if they should be sentences or sentence fragments can be tested using a natural language tool, such as NLTK.
  5. This is a heuristic process and, ultimately, there will be confounding values (such as "I am a"). Writing a perfect statistical analysis routine may therefore not be especially useful compared to a simple average if it can be confounded by a large number of exceptions.
Resultant answered 4/4, 2013 at 7:18 Comment(7)
Your solution deals with analyzing the word that does not exist in the dictionary. There is probably some merit to it and I should go over your solution in depth. One possible problem that'll need to be addressed is that I am not dealing with just English words; it can be a mix of English and non-English words. In any case, the problem I am trying to solve is how to determine the likelihood that the chain of words is valid based on the fact that some of the words are definitely valid words (without having to analyze the word that doesn't exist in the dictionary).Upheaval
@StackOverflowNewbie: Your analysis of my answer is correct - it's about how to get the best estimate for whether a word is likely English. First point to note: the frequencies (letters, bigrams and trigrams) can be trained on multiple corpuses and so language is not a problem.Resultant
@StackOverflowNewbie: You mention in your question that the number of words is short (2, 3, 4, 5) - in which case can I assume you're not analysing sentences (with implied structure) but simply datapoints. Given this lack of structure or relationship between the words, your question seems close to: "Are three points on a line or a curve?" If you have good estimates for the probability of individual words (i.e. where I was driving at in my attempt), won't the average probabilities of the words give a better (if not correct) result?Resultant
If xxx is definitely a word (meaning, it exist in the dictionary), and yyy is definitely a word, but zzz does not exist in the dictionary, I feel that the probability that xxx yyy zzz is a valid chain of words is higher than 66.66%. I'm basing this on the following assumptions: most users will provide valid chain of words, some users will provide a mixture of valid and invalid words, and a few users will provide all invalid words. Right now, I just have a "gut" feeling that my xxx yyy zzz example really should be higher than 66.66%, but I'm not sure how to express it as a formula.Upheaval
@StackOverflowNewbie: The right way to express the probability in a formula is definitely to use Bayes' Theorem, as suggested by Emilio M Bumachar. However, if there are only two or three words, even with the additional information suggested by digram or trigram, there might not be enough new information to come up with a useful improvement to the probability estimate.Partain
@StackOverflowNewbie: You have some very valid questions. I've tried to address these with a major addition to my answer (it was obvious to me, but in hindsight not an obvious corollary of what I had written). It enumerates example groups of words for 2-, 3- and 4-word groups (your main use case) and looks at whether the specific examples are good or not good.Resultant
@Simon: Thanks for the comment. I also feel this is an important insight so I've expanded on this extensively in a new section in my answer.Resultant
Perhaps you could use Bayes' formula.

You already have numerical guesses for the probability of each word to be real.

Next step is to make educated guesses about the probability of the entire list being good, bad or mixed (i.e., turn "most likely", "likely" and "not likely" into numbers.)
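As a minimal sketch of that procedure: the priors below turn "most likely / likely / not likely" into illustrative numbers, and the per-class match rates are guesses, not derived values:

```python
def posterior(priors, match_prob_given_class, matches):
    """Bayes' formula over the three list classes: the posterior is
    proportional to prior * likelihood of the observed match pattern,
    assuming words are independent given the class."""
    unnormalized = {}
    for cls, p_match in match_prob_given_class.items():
        like = 1.0
        for matched in matches:
            like *= p_match if matched else (1.0 - p_match)
        unnormalized[cls] = priors[cls] * like
    total = sum(unnormalized.values())
    return {cls: v / total for cls, v in unnormalized.items()}

# Illustrative numbers only
priors = {"all_valid": 0.70, "all_invalid": 0.25, "mixed": 0.05}
p_match = {"all_valid": 0.95, "all_invalid": 0.01, "mixed": 0.50}

# xxx matched, yyy near-matched (counted as a match), zzz did not
print(posterior(priors, p_match, [True, True, False]))
```

Even with one miss, the strong prior on all-valid lists keeps the all-valid posterior dominant, which matches the asker's intuition that the score should exceed a plain average.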

Antonio answered 3/4, 2013 at 18:20 Comment(0)
I'll give a Bayesian hierarchical model solution. It has a few parameters that must be set by hand, but it is quite robust regarding these parameters, as the simulation below shows. And it can handle not only the scoring system for the word list, but also a probable classification of the user who entered the words. The treatment may be a little technical, but in the end we'll have a routine to calculate the scores as a function of 3 numbers: the number of words in the list, the number of those with an exact match in the database, and the number of those with a partial match (as in yyyy). The routine is implemented in R, but if you have never used it, just download the interpreter, copy and paste the code into its console, and you'll see the results shown here.

BTW, English is not my first language, so bear with me... :-)

1. Model Specification:

There are 3 classes of users, named I, II, III. We assume that each word list is generated by a single user, and that the user is drawn randomly from a universe of users. We say that this universe is 70% class I, 25% class II and 5% class III. These numbers can be changed, of course. We have so far

Prob[User=I] = 70%

Prob[User=II] = 25%

Prob[User=III] = 5%

Given the user, we assume conditional independence, i.e., the user will not look to previous words to decide if he'll type in a valid or invalid word.

User I tends to give only valid words, User II only invalid words, and user III is mixed. So we set

Prob[Word=OK | User=I] = 99%

Prob[Word=OK | User=II] = 0.001%

Prob[Word=OK | User=III] = 50%

The probabilities of the word being invalid, given the class of the user, are complementary. Note that we give a very small, but non-zero, probability of a class-II user entering valid words, since even a monkey in front of a typewriter will eventually type a valid word.

The final step of the model specification regards the database. We assume that, for each word, the query may have 3 outcomes: a total match, a partial match (as in yyyy ) or no match. In probability terms, we assume that

Prob[match | valid] = 98% (not all valid words will be found)

Prob[partial | valid] = 0.2% (a rare event)

Prob[match | INvalid] = 0 (the database may be incomplete, but it has no invalid words)

Prob[partial | INvalid] = 0.1% (a rare event)

The probabilities of not finding the word don't have to be set, as they are complementary. That's it, our model is set.

2. Notation and Objective

We have a discrete random variable U, taking values in {1, 2, 3} and two discrete random vectors W and F, each of size n (= the number of words), where W_i is 1 if the word is valid and 2 if the word is invalid, and F_i is 1 if the word is found in the database, 2 if it's a partial match and 3 if it's not found.

Only vector F is observable, the others are latent. Using Bayes theorem and the distributions we set up in the model specification, we can calculate

(a) Prob[User=I | F],

i. e., the posterior probability of the user being in class I, given the observed matchings; and

(b) Prob[W=all valid | F],

i. e., the posterior probability that all words are valid, given the observed matchings.

Depending on your objective, you can use one or another as a scoring solution. If you are interested in distinguishing a real user from a computer program, for instance, you can use (a). If you only care about the word list being valid, you should use (b).

I'll try to briefly explain the theory in the next section, but this is the usual setup in the context of Bayesian hierarchical models. The reference is Gelman (2004), "Bayesian Data Analysis".

If you want, you can jump to section 4, with the code.

3. The Math

I'll use a slight abuse of notation, as usual in this context, writing

p(x|y) for Prob[X=x|Y=y] and p(x,y) for Prob[X=x,Y=y].

The goal (a) is to calculate p(u|f), for u=1. Using Bayes theorem:

p(u|f) = p(u,f)/p(f) = p(f|u)p(u)/p(f).

p(u) is given. p(f|u) is obtained from:

p(f|u) = \prod_{i=1}^{n} \sum_{w_i=1}^{2} (p(f_i|w_i)p(w_i|u))

p(f|u) = \prod_{i=1}^{n} p(f_i|u)

= p(f_i=1|u)^(m) p(f_i=2|u)^(p) p(f_i=3|u)^(n-m-p)

where m = number of matchings and p = number of partial matchings.

p(f) is calculated as:

\sum_{u=1}^{3} p(f|u)p(u)

All these can be calculated directly.

Goal (b) is given by

p(w|f) = p(f|w)*p(w)/p(f)

where

p(f|w) = \prod_{i=1}^{n} p(f_i|w_i)

and p(f_i|w_i) is given in the model specification.

p(f) was calculated above, so we need only

p(w) = \sum_{u=1}^{3} p(w|u)p(u)

where

p(w|u) = \prod_{i=1}^{n} p(w_i|u)

So everything is set for implementation.

4. The Code

The code is written as an R script, the constants are set at the beginning, in accordance with what was discussed above, and the output is given by the functions

(a) p.u_f(u, n, m, p)

and

(b) p.wOK_f(n, m, p)

that calculate the probabilities for options (a) and (b), given inputs:

u = desired user class (set to u=1)
n = number of words
m = number of matchings
p = number of partial matchings

The code itself:

### Constants:

# User:

# Prob[U=1], Prob[U=2], Prob[U=3]

Prob_user = c(0.70, 0.25, 0.05)

# Words:

# Prob[Wi=OK|U=1,2,3]

Prob_OK = c(0.99, 0.001, 0.5)

Prob_NotOK = 1 - Prob_OK

# Database:

# Prob[Fi=match|Wi=OK], Prob[Fi=match|Wi=NotOK]:

Prob_match = c(0.98, 0)

# Prob[Fi=partial|Wi=OK], Prob[Fi=partial|Wi=NotOK]:

Prob_partial = c(0.002, 0.001)

# Prob[Fi=NOmatch|Wi=OK], Prob[Fi=NOmatch|Wi=NotOK]:

Prob_NOmatch = 1 - Prob_match - Prob_partial


###### First Goal: Probability of being a user type I, given the numbers of matchings (m) and partial matchings (p).


# Prob[Fi=fi|U=u]
#
p.fi_u <- function(fi, u)
{
    unname(rbind(Prob_match, Prob_partial, Prob_NOmatch) %*% rbind(Prob_OK, Prob_NotOK))[fi,u]
}

# Prob[F=f|U=u]
#
p.f_u <- function(n, m, p, u)
{
    exp( log(p.fi_u(1, u))*m + log(p.fi_u(2, u))*p + log(p.fi_u(3, u))*(n-m-p) )
}

# Prob[F=f]
#
p.f <- function(n, m, p)
{
    p.f_u(n, m, p, 1)*Prob_user[1] + p.f_u(n, m, p, 2)*Prob_user[2] + p.f_u(n, m, p, 3)*Prob_user[3]
}

# Prob[U=u|F=f]
#
p.u_f <- function(u, n, m, p)
{
    p.f_u(n, m, p, u) * Prob_user[u] / p.f(n, m, p)
}

# Probability user type I for n=1,...,5:

for(n in 1:5) for(m in 0:n) for(p in 0:(n-m))
{
    cat("n =", n, "| m =", m, "| p =", p, "| Prob type I =", p.u_f(1, n, m, p), "\n")
}

##################################################################################################

# Second Goal: Probability all words OK given matchings/partial matchings.

p.f_wOK <- function(n, m, p)
{
    exp( log(Prob_match[1])*m + log(Prob_partial[1])*p + log(Prob_NOmatch[1])*(n-m-p) )
}

p.wOK <- function(n)
{
    sum(exp( log(Prob_OK)*n + log(Prob_user) ))
}

p.wOK_f <- function(n, m, p)
{
    p.f_wOK(n, m, p)*p.wOK(n)/p.f(n, m, p)
}

# Probability all words ok for n=1,...,5:

for(n in 1:5) for(m in 0:n) for(p in 0:(n-m))
{
    cat("n =", n, "| m =", m, "| p =", p, "| Prob all OK =", p.wOK_f(n, m, p), "\n")
}

5. Results

These are the results for n=1,...,5, and all possibilities for m and p. For instance, if you have 3 words, one match, one partial match, and one not found, you can be 66.5% sure it's a class-I user. In the same situation, you can attribute a score of 42.8% that all words are valid.

Note that option (a) does not give 100% score to the case of all matches, but option (b) does. This is expected, since we assumed that the database has no invalid words, hence if they are all found, then they are all valid. OTOH, there is a small chance that a user in class II or III can enter all valid words, but this chance decreases rapidly as n increases.

(a)

n = 1 | m = 0 | p = 0 | Prob type I = 0.06612505 
n = 1 | m = 0 | p = 1 | Prob type I = 0.8107086 
n = 1 | m = 1 | p = 0 | Prob type I = 0.9648451 
n = 2 | m = 0 | p = 0 | Prob type I = 0.002062543 
n = 2 | m = 0 | p = 1 | Prob type I = 0.1186027 
n = 2 | m = 0 | p = 2 | Prob type I = 0.884213 
n = 2 | m = 1 | p = 0 | Prob type I = 0.597882 
n = 2 | m = 1 | p = 1 | Prob type I = 0.9733557 
n = 2 | m = 2 | p = 0 | Prob type I = 0.982106 
n = 3 | m = 0 | p = 0 | Prob type I = 5.901733e-05 
n = 3 | m = 0 | p = 1 | Prob type I = 0.003994149 
n = 3 | m = 0 | p = 2 | Prob type I = 0.200601 
n = 3 | m = 0 | p = 3 | Prob type I = 0.9293284 
n = 3 | m = 1 | p = 0 | Prob type I = 0.07393334 
n = 3 | m = 1 | p = 1 | Prob type I = 0.665019 
n = 3 | m = 1 | p = 2 | Prob type I = 0.9798274 
n = 3 | m = 2 | p = 0 | Prob type I = 0.7500993 
n = 3 | m = 2 | p = 1 | Prob type I = 0.9864524 
n = 3 | m = 3 | p = 0 | Prob type I = 0.990882 
n = 4 | m = 0 | p = 0 | Prob type I = 1.66568e-06 
n = 4 | m = 0 | p = 1 | Prob type I = 0.0001158324 
n = 4 | m = 0 | p = 2 | Prob type I = 0.007636577 
n = 4 | m = 0 | p = 3 | Prob type I = 0.3134207 
n = 4 | m = 0 | p = 4 | Prob type I = 0.9560934 
n = 4 | m = 1 | p = 0 | Prob type I = 0.004198015 
n = 4 | m = 1 | p = 1 | Prob type I = 0.09685249 
n = 4 | m = 1 | p = 2 | Prob type I = 0.7256616 
n = 4 | m = 1 | p = 3 | Prob type I = 0.9847408 
n = 4 | m = 2 | p = 0 | Prob type I = 0.1410053 
n = 4 | m = 2 | p = 1 | Prob type I = 0.7992839 
n = 4 | m = 2 | p = 2 | Prob type I = 0.9897541 
n = 4 | m = 3 | p = 0 | Prob type I = 0.855978 
n = 4 | m = 3 | p = 1 | Prob type I = 0.9931117 
n = 4 | m = 4 | p = 0 | Prob type I = 0.9953741 
n = 5 | m = 0 | p = 0 | Prob type I = 4.671933e-08 
n = 5 | m = 0 | p = 1 | Prob type I = 3.289577e-06 
n = 5 | m = 0 | p = 2 | Prob type I = 0.0002259559 
n = 5 | m = 0 | p = 3 | Prob type I = 0.01433312 
n = 5 | m = 0 | p = 4 | Prob type I = 0.4459982 
n = 5 | m = 0 | p = 5 | Prob type I = 0.9719289 
n = 5 | m = 1 | p = 0 | Prob type I = 0.0002158996 
n = 5 | m = 1 | p = 1 | Prob type I = 0.005694145 
n = 5 | m = 1 | p = 2 | Prob type I = 0.1254661 
n = 5 | m = 1 | p = 3 | Prob type I = 0.7787294 
n = 5 | m = 1 | p = 4 | Prob type I = 0.988466 
n = 5 | m = 2 | p = 0 | Prob type I = 0.00889696 
n = 5 | m = 2 | p = 1 | Prob type I = 0.1788336 
n = 5 | m = 2 | p = 2 | Prob type I = 0.8408416 
n = 5 | m = 2 | p = 3 | Prob type I = 0.9922575 
n = 5 | m = 3 | p = 0 | Prob type I = 0.2453087 
n = 5 | m = 3 | p = 1 | Prob type I = 0.8874493 
n = 5 | m = 3 | p = 2 | Prob type I = 0.994799 
n = 5 | m = 4 | p = 0 | Prob type I = 0.9216786 
n = 5 | m = 4 | p = 1 | Prob type I = 0.9965092 
n = 5 | m = 5 | p = 0 | Prob type I = 0.9976583 

(b)

n = 1 | m = 0 | p = 0 | Prob all OK = 0.04391523 
n = 1 | m = 0 | p = 1 | Prob all OK = 0.836025 
n = 1 | m = 1 | p = 0 | Prob all OK = 1 
n = 2 | m = 0 | p = 0 | Prob all OK = 0.0008622994 
n = 2 | m = 0 | p = 1 | Prob all OK = 0.07699368 
n = 2 | m = 0 | p = 2 | Prob all OK = 0.8912977 
n = 2 | m = 1 | p = 0 | Prob all OK = 0.3900892 
n = 2 | m = 1 | p = 1 | Prob all OK = 0.9861099 
n = 2 | m = 2 | p = 0 | Prob all OK = 1 
n = 3 | m = 0 | p = 0 | Prob all OK = 1.567032e-05 
n = 3 | m = 0 | p = 1 | Prob all OK = 0.001646751 
n = 3 | m = 0 | p = 2 | Prob all OK = 0.1284228 
n = 3 | m = 0 | p = 3 | Prob all OK = 0.923812 
n = 3 | m = 1 | p = 0 | Prob all OK = 0.03063598 
n = 3 | m = 1 | p = 1 | Prob all OK = 0.4278888 
n = 3 | m = 1 | p = 2 | Prob all OK = 0.9789305 
n = 3 | m = 2 | p = 0 | Prob all OK = 0.485069 
n = 3 | m = 2 | p = 1 | Prob all OK = 0.990527 
n = 3 | m = 3 | p = 0 | Prob all OK = 1 
n = 4 | m = 0 | p = 0 | Prob all OK = 2.821188e-07 
n = 4 | m = 0 | p = 1 | Prob all OK = 3.046322e-05 
n = 4 | m = 0 | p = 2 | Prob all OK = 0.003118531 
n = 4 | m = 0 | p = 3 | Prob all OK = 0.1987396 
n = 4 | m = 0 | p = 4 | Prob all OK = 0.9413746 
n = 4 | m = 1 | p = 0 | Prob all OK = 0.001109629 
n = 4 | m = 1 | p = 1 | Prob all OK = 0.03975118 
n = 4 | m = 1 | p = 2 | Prob all OK = 0.4624648 
n = 4 | m = 1 | p = 3 | Prob all OK = 0.9744778 
n = 4 | m = 2 | p = 0 | Prob all OK = 0.05816511 
n = 4 | m = 2 | p = 1 | Prob all OK = 0.5119571 
n = 4 | m = 2 | p = 2 | Prob all OK = 0.9843855 
n = 4 | m = 3 | p = 0 | Prob all OK = 0.5510398 
n = 4 | m = 3 | p = 1 | Prob all OK = 0.9927134 
n = 4 | m = 4 | p = 0 | Prob all OK = 1 
n = 5 | m = 0 | p = 0 | Prob all OK = 5.05881e-09 
n = 5 | m = 0 | p = 1 | Prob all OK = 5.530918e-07 
n = 5 | m = 0 | p = 2 | Prob all OK = 5.899106e-05 
n = 5 | m = 0 | p = 3 | Prob all OK = 0.005810434 
n = 5 | m = 0 | p = 4 | Prob all OK = 0.2807414 
n = 5 | m = 0 | p = 5 | Prob all OK = 0.9499773 
n = 5 | m = 1 | p = 0 | Prob all OK = 3.648353e-05 
n = 5 | m = 1 | p = 1 | Prob all OK = 0.001494098 
n = 5 | m = 1 | p = 2 | Prob all OK = 0.051119 
n = 5 | m = 1 | p = 3 | Prob all OK = 0.4926606 
n = 5 | m = 1 | p = 4 | Prob all OK = 0.9710204 
n = 5 | m = 2 | p = 0 | Prob all OK = 0.002346281 
n = 5 | m = 2 | p = 1 | Prob all OK = 0.07323064 
n = 5 | m = 2 | p = 2 | Prob all OK = 0.5346423 
n = 5 | m = 2 | p = 3 | Prob all OK = 0.9796679 
n = 5 | m = 3 | p = 0 | Prob all OK = 0.1009589 
n = 5 | m = 3 | p = 1 | Prob all OK = 0.5671273 
n = 5 | m = 3 | p = 2 | Prob all OK = 0.9871377 
n = 5 | m = 4 | p = 0 | Prob all OK = 0.5919764 
n = 5 | m = 4 | p = 1 | Prob all OK = 0.9938288 
n = 5 | m = 5 | p = 0 | Prob all OK = 1 
Dumas answered 5/4, 2013 at 5:50 Comment(4)
So, if we can classify each word as match/partial/no-match, table (b) tabulates the probability that the group was entered by someone intending to enter correctly. We can choose a cutoff and use this data (adjusting probabilities as required) to build a table which says, for example, for 3 words: OK if 3 partials; OR 1 match and 2 partials; OR 2 matches and 1 partial. This is extremely close to the table I presented (up to n=4), but I see merit for n>=5. However this does not take into account how we decide what is a partial, or whether some partials are more word-like than others.Resultant
@AndrewAlcock, "the probability that the group was entered by someone intending to enter correctly" is in table (a). With a probability score, he can either choose a cut-off or do some calculation using the score as a weight. To decide whether a word is a partial match, he can use a Levenshtein distance, and say it's partial if the distance is no longer than a certain threshold, say, 3 or 4 edits. Also, it's straightforward to extend the model for several levels of partial matchings, for instance, F_i = 1 if full match (distance=0), F_i=2 (1 edit away), F_i=3 (2 or 3 edits)...Dumas
@AndrewAlcock. Our answers are not contradictory. Your study to say whether a word looks like English or not is very interesting and sophisticated. It could be used to provide an even better "F" than a simple Levenshtein distance in an extension of the model I propose.Dumas
Understood. I was actually saying that our different approaches yield the same results up to about n=4, and yours adds rigour when n>4. The handling of multiple partial matches is, I think, an important addition to your model.Resultant
If "average" is no solution because the database lacks words, I'd say: extend the database :)

Another idea could be to 'weight' the results to get an adjusted average, as an example:

100% = 1.00x weight
90%  = 0.95x weight
80%  = 0.90x weight
...
0%   = 0.50x weight

so for your example you would:

(100*1 + 90*0.95 + 0*0.5) / (100*1 + 100*0.95 + 100*0.5) = 0.75714285714
 => 75.7%
regular average would be 63.3%
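That weighting scheme can be written as a tiny function, assuming (as in the table above) that a word's weight slides linearly from 1.0 at a 100% score down to 0.5 at 0%:

```python
def weighted_score(scores):
    """Adjusted average from the table above: each word's score is
    weighted by 0.5 + score/200, so confident words count more."""
    weight = lambda s: 0.5 + s / 200.0
    num = sum(s * weight(s) for s in scores)
    den = sum(100 * weight(s) for s in scores)
    return num / den

print(weighted_score([100, 90, 0]))  # ~0.757, vs a plain average of ~0.633
```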
Whitleather answered 26/3, 2013 at 21:53 Comment(1)
Extending the DB is definitely the preferred approach. However, since the DB will never really be complete (it's just the nature of the data), a scoring system needs to be developed. Your proposed system doesn't feel right. You've assigned weights to individual scores. I think the weight of a particular score should be based on the scores of the other words.Upheaval
Since the order of words is not important in your description, the independent variable is the fraction of valid words. If the fraction is a perfect 1, i.e. all words are found to be perfect matches with the DB, then you are perfectly sure to have the all-valid outcome. If it's zero, i.e. all words are perfect misses in the DB, then you are perfectly sure to have the all-invalid outcome. If you have .5, then this must be the unlikely mixed-up outcome because neither of the other two is possible.

You say the mixed outcome is unlikely while the two extremes are more so. You are after the likelihood of the all-valid outcome.

Let the fraction of valid words (sum of "surenesses" of matches / # of words) be f and hence the desired likelihood of the all-valid outcome be L(f). By the discussion so far, we know L(1)=1 and L(f)=0 for 0<=f<=1/2 .

To honor your information that the mixed outcome is less likely than the all-valid (and the all-invalid) outcome, the shape of L must rise monotonically and quickly from 1/2 toward 1 and reach 1 at f=1.

Since this is heuristic, we might pick any reasonable function with this character. If we're clever it will have a parameter to control the steepness of the step and perhaps another for its location. This lets us tweak what "less likely" means for the middle case.

One such function is this for 1/2 <= f <= 1:

L(f) = 5 + f * (-24 + (36 - 16 * f) * f) + (-4 + f * (16 + f * (-20 + 8 * f))) * s

and zero for 0 <= f < 1/2. Although it's hairy-looking, it's the simplest polynomial that intersects (1/2,0) and (1,1) with slope 0 at f=1 and slope s at f=0.

You can set 0 <= s <= 3 to change the step shape. Here is a shot with s=3, which is probably what you want: Screen shot of function plot

If you set s > 3, it shoots above 1 before settling down, not what we want.
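For reference, the polynomial can be dropped into a small function (a sketch; the endpoint values follow directly from the formula):

```python
def L(f, s=3.0):
    """The step function above: zero below f = 0.5, rising to L(1) = 1.
    The parameter s controls the steepness of the step."""
    if f < 0.5:
        return 0.0
    return (5 + f * (-24 + (36 - 16 * f) * f)
            + (-4 + f * (16 + f * (-20 + 8 * f))) * s)

print(L(0.5))  # 0.0
print(L(1.0))  # 1.0
print(L(0.9))  # a warped score between the two
```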

Of course there are infinitely many other possibilities. If this one doesn't work, comment and we'll look for another.

Selfemployed answered 30/3, 2013 at 5:19 Comment(3)
OK, this is a bit in depth and I need to study it carefully. Can you show some examples of this formula in action?Upheaval
It's just a way of warping the average of the individual word scores upward in the range .5 to 1. This implements your statement that the all-valid case is most likely.Selfemployed
I agree with @Gene: if, ultimately, all you need is to define a cutoff, this simply changes the value of the cutoff.Resultant
Averaging is, of course, rubbish. If the individual word probabilities were accurate, the probability that all words are correct is simply the product, not the average. If you have an estimate for the uncertainties in your individual probabilities, you could work out their product marginalized over all the individual probabilities.
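For instance, a trivial sketch of the product rule, using the question's xxx/yyy/zzz scores:

```python
from functools import reduce
from operator import mul

def chain_probability(word_probs):
    """If the per-word probabilities are independent and accurate, the
    probability that every word is valid is their product."""
    return reduce(mul, word_probs, 1.0)

print(chain_probability([1.0, 0.9, 0.0]))  # 0.0: one certain miss sinks the chain
print(chain_probability([1.0, 0.9, 0.8]))  # ~0.72
```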

Faefaeces answered 4/4, 2013 at 21:45 Comment(0)
