I've recently started studying OpenEars speech recognition and it's great! But I also need to support speech recognition and dictation in other languages such as Russian, French and German. I've found that various acoustic and language models are available here.
But I can't quite work out whether those models are all I need in order to integrate support for an extra language into the application.
My question is: what steps should I take to successfully integrate, for example, Russian into OpenEars?
As far as I understand, all the acoustic and language model files for English in the OpenEars demo are located in the hub4wsj_sc_8k folder. Files with the same names can be found in the VoxForge language archives, so I simply replaced them in the demo. One thing is different: the demo's English model also contains a sendump file, about 2 MB in size, which is not present in the VoxForge archives. There are two other files used in the OpenEars demo:
- OpenEars1.languagemodel
- OpenEars1.dic
I replaced these with:
- msu_ru_nsh.lm.dmp
- msu_ru_nsh.dic
since .dmp seems to be the equivalent of .languagemodel. But now the application crashes without any error message.
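In case it helps with diagnosing the crash, here is a minimal sanity check I can run to confirm the swapped-in files actually ship in the app bundle (a sketch in Swift using standard Foundation APIs; the equivalent NSBundle calls work from Objective-C). The file and folder names are the ones from my project, and the list of acoustic model files is my assumption based on the usual pocketsphinx model layout, so treat both as placeholders:

```swift
import Foundation

// Verify that the replaced model files made it into the app bundle.
// A missing file (e.g. the absent sendump) could explain a crash
// that produces no error output.
let bundle = Bundle.main
let modelFiles = [
    "msu_ru_nsh.lm.dmp",   // language model (replaces OpenEars1.languagemodel)
    "msu_ru_nsh.dic",      // phonetic dictionary (replaces OpenEars1.dic)
]

for name in modelFiles where bundle.path(forResource: name, ofType: nil) == nil {
    print("Missing from bundle: \(name)")
}

// The acoustic model is a folder of several files; check each one.
// These names are the typical pocketsphinx acoustic model files --
// adjust to match what the VoxForge archive actually contains.
let acousticModelFiles = ["feat.params", "mdef", "means", "mixture_weights",
                          "noisedict", "transition_matrices", "variances"]
if let acousticPath = bundle.path(forResource: "hub4wsj_sc_8k", ofType: nil) {
    for name in acousticModelFiles {
        let filePath = (acousticPath as NSString).appendingPathComponent(name)
        if !FileManager.default.fileExists(atPath: filePath) {
            print("Missing acoustic model file: \(name)")
        }
    }
} else {
    print("Acoustic model folder hub4wsj_sc_8k not found in bundle")
}
```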
What am I doing wrong? Thank you.