How to apply machine learning to fuzzy matching
Let's say that I have an MDM system (Master Data Management), whose primary application is to detect and prevent duplication of records.

Every time a sales rep enters a new customer into the system, my MDM platform checks it against existing records: it computes the Levenshtein, Jaccard, or XYZ distance between pairs of words, phrases, or attributes, applies weights and coefficients, and outputs a similarity score, and so on.

Your typical fuzzy matching scenario.
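
Something like this minimal sketch (the attribute names, the weights, and the use of difflib's ratio as a stand-in for Levenshtein/Jaccard are all made up for illustration):

    from difflib import SequenceMatcher

    # Hypothetical attribute weights; in a real MDM deployment these are
    # tuned per dataset, often by hand.
    WEIGHTS = {"name": 0.5, "address": 0.3, "city": 0.2}

    def similarity(a: str, b: str) -> float:
        """Normalized string similarity in [0, 1] (stand-in for Levenshtein/Jaccard)."""
        return SequenceMatcher(None, a.lower(), b.lower()).ratio()

    def match_score(new_record: dict, existing: dict) -> float:
        """Weighted sum of per-attribute similarities."""
        return sum(w * similarity(new_record[f], existing[f]) for f, w in WEIGHTS.items())

    candidate = {"name": "Jon Smith", "address": "12 Main St", "city": "Springfield"}
    on_file = {"name": "John Smith", "address": "12 Main Street", "city": "Springfield"}
    print(match_score(candidate, on_file))  # high score -> flag as possible duplicate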

I would like to know whether it makes sense at all to apply machine learning techniques to optimize the matching output, i.e. to find duplicates with maximum accuracy. And if so, where exactly would it make the most sense?

  • Optimizing the weights of the attributes? (a sketch of what I mean follows this list)
  • Increasing the algorithm's confidence by predicting the outcome of the match?
  • Learning the matching rules that I would otherwise configure into the algorithm?
  • Something else?
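
To illustrate the first two options, here is a minimal sketch of what I have in mind (the fields, the toy training pairs, and the choice of scikit-learn's logistic regression are my own assumptions): per-attribute similarities become features, labeled match/non-match pairs become training data, and the learned coefficients play the role of the hand-tuned weights.

    from difflib import SequenceMatcher
    from sklearn.linear_model import LogisticRegression

    FIELDS = ["name", "address", "city"]  # hypothetical customer attributes

    def features(a, b):
        """Per-attribute similarity scores for a pair of records."""
        return [SequenceMatcher(None, a[f].lower(), b[f].lower()).ratio() for f in FIELDS]

    # Toy labeled pairs (1 = duplicate, 0 = distinct); in practice these
    # would come from a manually reviewed sample.
    pairs = [
        ({"name": "John Smith", "address": "12 Main St", "city": "Springfield"},
         {"name": "Jon Smith", "address": "12 Main Street", "city": "Springfield"}, 1),
        ({"name": "John Smith", "address": "12 Main St", "city": "Springfield"},
         {"name": "Jane Doe", "address": "99 Oak Ave", "city": "Shelbyville"}, 0),
        ({"name": "ACME Corp", "address": "1 Industry Rd", "city": "Capital City"},
         {"name": "ACME Corporation", "address": "1 Industry Road", "city": "Capital City"}, 1),
        ({"name": "ACME Corp", "address": "1 Industry Rd", "city": "Capital City"},
         {"name": "Globex Inc", "address": "5 River Ln", "city": "Ogdenville"}, 0),
    ]

    X = [features(a, b) for a, b, _ in pairs]
    y = [label for _, _, label in pairs]

    clf = LogisticRegression().fit(X, y)
    print(dict(zip(FIELDS, clf.coef_[0])))  # learned per-attribute weights
    prob = clf.predict_proba([features(pairs[0][0], pairs[0][1])])[0, 1]
    print(prob)  # match confidence for a pair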

There's also this excellent answer on the topic, but I didn't quite get whether the author actually made use of ML or not.

Also, my understanding is that weighted fuzzy matching is already a good enough solution, probably even from a financial perspective: whenever you deploy such an MDM system you have to do some analysis and preprocessing anyway, whether that means manually encoding the matching rules or training an ML algorithm.

So I'm not sure that the addition of ML would represent a significant value proposition.

Any thoughts are appreciated.

Swelter answered 12/4, 2017 at 10:16 Comment(4)
My intuition is that the incremental gain you would achieve would not justify the effort. What would be interesting is to use natural language processing/understanding to provide additional context when searching for possible duplicates, but it would be no small project! – Brezin
If you do pursue this project, one thing to watch will be the essentially binary outcome of your task (match vs. no match), combined with a potentially unbalanced dataset (more non-matches than matches). You could end up with a machine that looks very accurate but is actually just telling you what you already know. – Brezin
@fgregg: Wondering if you could use the deduplication tag instead of the brand-new record-linkage one. Seems to be the same concept. – Laurelaureano
@NathanTuggy, it seems to me that most of the questions tagged with deduplication are about removing exact matches. The techniques that you use for that are pretty different from the probabilistic approaches associated with record linkage. – Rodgerrodgers

The main advantage of using machine learning here is the time it saves.

It is very likely that, given enough time, you could hand-tune weights and come up with matching rules that are very good for your particular dataset. A machine learning approach could have a hard time outperforming your hand-made system, customized as it is for a particular dataset.

However, it will probably take days to build a good matching system by hand. If you use an existing ML matching tool, like Dedupe, then good weights and rules can be learned in an hour (including setup time).
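
As a rough illustration, here is what that workflow looks like with the dedupe Python library (a minimal sketch assuming the dedupe 2.x API; the field definitions, the toy records, and the threshold are placeholders you would adapt to your data):

    import dedupe

    # Toy stand-in; a real run needs many more records for the active
    # learner to find informative pairs.
    records = {
        1: {"name": "John Smith", "address": "12 Main St", "city": "Springfield"},
        2: {"name": "Jon Smith", "address": "12 Main Street", "city": "Springfield"},
        3: {"name": "Jane Doe", "address": "99 Oak Ave", "city": "Shelbyville"},
    }

    # Declare which fields to compare and how.
    fields = [
        {"field": "name", "type": "String"},
        {"field": "address", "type": "String"},
        {"field": "city", "type": "String"},
    ]

    deduper = dedupe.Dedupe(fields)
    deduper.prepare_training(records)

    # Active learning: dedupe picks the most informative pairs and asks
    # you to label them duplicate / not duplicate in the console.
    dedupe.console_label(deduper)

    deduper.train()  # learns field weights and blocking rules from your labels

    # Group records into clusters of likely duplicates.
    clusters = deduper.partition(records, threshold=0.5)
    for cluster_ids, scores in clusters:
        print(cluster_ids, scores)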

So, if you have already built a matching system that performs well on your data, it may not be worth investigating ML. But if this is a new data project, then it almost certainly will be.

Rodgerrodgers answered 14/4, 2017 at 17:26 Comment(0)
