Let's say I have an MDM (Master Data Management) system whose primary purpose is to detect and prevent duplicate records.
Every time a sales rep enters a new customer, the MDM platform checks it against existing records: it computes the Levenshtein, Jaccard, or some other distance between pairs of words, phrases, or attributes, applies weights and coefficients, and outputs a similarity score.
Your typical fuzzy matching scenario.
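To make the setup concrete, here's a minimal sketch of that scoring step in Python. The attribute names, comparators, and weights are invented for illustration, not taken from any particular MDM product:

```python
# A minimal sketch of the weighted scoring step described above.
from difflib import SequenceMatcher

def jaccard(a: str, b: str) -> float:
    """Jaccard similarity over whitespace-separated tokens."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 1.0

def edit_ratio(a: str, b: str) -> float:
    """Normalized edit-distance-style similarity via difflib."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

# Hypothetical per-attribute comparators and hand-tuned weights.
COMPARATORS = {"name": edit_ratio, "address": jaccard, "phone": edit_ratio}
WEIGHTS     = {"name": 0.5,        "address": 0.3,     "phone": 0.2}

def similarity(rec_a: dict, rec_b: dict) -> float:
    """Weighted sum of per-attribute similarities, in [0, 1]."""
    return sum(
        WEIGHTS[attr] * cmp(rec_a[attr], rec_b[attr])
        for attr, cmp in COMPARATORS.items()
    )

new = {"name": "Jon Smith",  "address": "12 Main St",     "phone": "555-0101"}
old = {"name": "John Smith", "address": "12 Main Street", "phone": "555-0101"}
print(similarity(new, old))  # high score (~0.82) -> flag as a likely duplicate
```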
I would like to know whether it makes sense at all to apply machine learning techniques to optimize the matching output, i.e., to find duplicates with maximum accuracy, and where exactly it would add the most value:
- optimizing the weights of the attributes? (see the sketch after this list)
- increasing the algorithm's confidence by predicting the outcome of a match?
- learning the matching rules that I would otherwise configure into the algorithm by hand?
- something else?
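For what the first two options could look like, here's a hedged sketch using scikit-learn: per-attribute similarities become features, pairs labeled by data stewards become training data, and a logistic regression learns the attribute weights and emits a match probability. All feature values and labels below are made up purely for illustration:

```python
# Sketch: learn attribute weights (option 1) and predict match
# outcomes with a confidence score (option 2) from labeled pairs.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [name_sim, address_sim, phone_sim] for one candidate pair.
X = np.array([
    [0.95, 0.90, 1.00],   # near-identical pair
    [0.90, 0.40, 1.00],   # same person, moved address
    [0.30, 0.20, 0.10],   # clearly different customers
    [0.85, 0.10, 0.00],   # similar name, everything else differs
])
y = np.array([1, 1, 0, 0])  # steward verdicts: 1 = duplicate, 0 = not

clf = LogisticRegression().fit(X, y)

# The learned coefficients play the role of hand-tuned attribute weights...
print(dict(zip(["name", "address", "phone"], clf.coef_[0])))

# ...and predict_proba gives a match confidence for a new candidate pair.
pair = np.array([[0.92, 0.75, 1.00]])
print(clf.predict_proba(pair)[0, 1])  # probability the pair is a duplicate
```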
There's also this excellent answer on the topic, but I couldn't quite tell whether the author actually made use of ML or not.
Also, my understanding is that weighted fuzzy matching is already a good-enough solution, probably even from a financial perspective: whenever you deploy such an MDM system you have to do some analysis and preprocessing anyway, whether that means manually encoding the matching rules or training an ML algorithm.
So I'm not sure that the addition of ML would represent a significant value proposition.
Any thoughts are appreciated.