I found some helpful information here:
https://github.com/twitter/typeahead.js/blob/master/doc/migration/0.10.0.md#tokenization-methods-must-be-provided
The most common tokenization methods split a given string on whitespace or non-word characters. Bloodhound provides implementations for those methods out of the box:
// returns ['one', 'two', 'twenty-five']
Bloodhound.tokenizers.whitespace(' one two twenty-five');
// returns ['one', 'two', 'twenty', 'five']
Bloodhound.tokenizers.nonword(' one two twenty-five');
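If you're curious what those built-in tokenizers boil down to, they roughly amount to splitting on a regex and discarding empty strings. A rough sketch that reproduces the documented behavior above (an approximation, not the library's actual source):

```javascript
// Approximate re-implementations of the built-in tokenizers.
// These just mimic the documented behavior; Bloodhound's own code may differ.
function whitespaceTokenizer(str) {
  // split on runs of whitespace, dropping leading/trailing empty strings
  return String(str).split(/\s+/).filter(Boolean);
}

function nonwordTokenizer(str) {
  // split on runs of non-word characters (anything outside [A-Za-z0-9_])
  return String(str).split(/\W+/).filter(Boolean);
}

console.log(whitespaceTokenizer(' one two twenty-five'));
// ['one', 'two', 'twenty-five']
console.log(nonwordTokenizer(' one two twenty-five'));
// ['one', 'two', 'twenty', 'five']
```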
For query tokenization, you'll probably want to use one of the above methods. For datum tokenization, you may want to do something a bit more advanced.
For datums, sometimes you want tokens to be derived from more than one property. For example, if you were building a search engine for GitHub repositories, it'd probably be wise to have tokens derived from the repo's name, owner, and primary language:
var repos = [
  { name: 'example', owner: 'John Doe', language: 'JavaScript' },
  { name: 'another example', owner: 'Joe Doe', language: 'Scala' }
];

function customTokenizer(datum) {
  var nameTokens = Bloodhound.tokenizers.whitespace(datum.name);
  var ownerTokens = Bloodhound.tokenizers.whitespace(datum.owner);
  var languageTokens = Bloodhound.tokenizers.whitespace(datum.language);

  return nameTokens.concat(ownerTokens).concat(languageTokens);
}
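To see what the custom tokenizer produces, and how it would plug into Bloodhound, here's a minimal self-contained sketch (the whitespace tokenizer is stubbed locally so this runs standalone; the constructor call at the end is shown in a comment and uses the real Bloodhound options):

```javascript
// Stand-in for Bloodhound.tokenizers.whitespace so this snippet runs on its own
var whitespace = function (str) {
  return String(str).split(/\s+/).filter(Boolean);
};

var repos = [
  { name: 'example', owner: 'John Doe', language: 'JavaScript' },
  { name: 'another example', owner: 'Joe Doe', language: 'Scala' }
];

function customTokenizer(datum) {
  var nameTokens = whitespace(datum.name);
  var ownerTokens = whitespace(datum.owner);
  var languageTokens = whitespace(datum.language);

  return nameTokens.concat(ownerTokens).concat(languageTokens);
}

console.log(customTokenizer(repos[0]));
// ['example', 'John', 'Doe', 'JavaScript']

// In a real app, you'd then pass it as the datumTokenizer:
// var engine = new Bloodhound({
//   datumTokenizer: customTokenizer,
//   queryTokenizer: Bloodhound.tokenizers.whitespace,
//   local: repos
// });
```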
There may also be the scenario where you want datum tokenization to be performed on the backend. The best way to do that is to add a property to your datums containing the precomputed tokens, and then provide a tokenizer that simply returns them:
var sports = [
  { value: 'football', tokens: ['football', 'pigskin'] },
  { value: 'basketball', tokens: ['basketball', 'bball'] }
];
function customTokenizer(datum) { return datum.tokens; }
There are plenty of other ways you could go about tokenizing datums; it really just depends on what you're trying to accomplish.
It seems unfortunate that this information wasn't easier to find from the main documentation.
A query such as "Dog cat" gets split by the queryTokenizer into ["Dog", "cat"]. Then, when results arrive, the datumTokenizer splits those as well. So, if you have a result with a song name of "Dogs and cats rock out", that'll get split into an array as well. Finally, Bloodhound compares the two arrays, and if the entirety of the query array is in the datum array, it considers it a match. I'm about 80% sure on this. – Splanchnic
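If that comment is right, the matching rule can be sketched roughly like this. Note this is my own reading of the comment, not Bloodhound's actual source: to make the "Dog"/"Dogs" example work, each query token is treated as a case-insensitive prefix of some datum token rather than requiring an exact token match.

```javascript
// Sketch of the matching rule described in the comment above: a datum
// matches when every query token is a case-insensitive prefix of at
// least one datum token. This is an approximation, not Bloodhound's code.
function matches(queryTokens, datumTokens) {
  return queryTokens.every(function (q) {
    return datumTokens.some(function (d) {
      return d.toLowerCase().indexOf(q.toLowerCase()) === 0;
    });
  });
}

var queryTokens = ['Dog', 'cat'];
var datumTokens = ['Dogs', 'and', 'cats', 'rock', 'out'];

console.log(matches(queryTokens, datumTokens));   // true
console.log(matches(['Dog', 'fish'], datumTokens)); // false
```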