(Mis)using OpenAI Whisper for text-to-text translation
I noticed that, when transcribing speech that mixes languages, the OpenAI Whisper speech-to-text library sometimes accurately recognizes the inserts in the other language and produces the expected mixed-language output, for example: "八十多个人 is the same as 八十几个人. So 多 and 几 are interchangeable and they can both mean 'several'." (Both Chinese phrases mean roughly "eighty-odd people".)

Yet the same audio input on a different pass (with the same model, or a smaller/bigger one) would intermittently produce glitches where the entire sentence is translated rather than transcribed, i.e. the whole fragment comes out in either the first or the second language that appears in the audio. With the example input above, either the entire sentence would be in English (with the Chinese bits translated to English), or the entire sentence would be in Chinese (with the English bits translated to Chinese). Important: in both cases no input language was specified and no task type was passed (which implies the default --task transcribe).

The docs for Whisper mention translation to English as the only available target language (via --task translate in the command-line version); there is no mention of translating to other target languages. Yet the behavior described above indicates that the models are capable of translating into other languages too.

The question is: is there a known way to configure the models to do just text-to-text translation? Or is the behavior just some sort of glitch that cannot be 'exploited' or configured at a lower level to use the models purely for text translation between any pair of supported languages?
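
For reference, a minimal sketch of the kind of invocation that produced the behavior above, using the Python API of the openai-whisper package (the audio file name is a placeholder):

    import whisper

    # Load one of the multilingual checkpoints; the behavior was observed
    # across model sizes.
    model = whisper.load_model("small")

    # No `language` and no `task` are passed, so language detection runs
    # and the task defaults to "transcribe".
    result = model.transcribe("mixed_english_chinese.mp3")
    print(result["text"])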

Gaines answered 3/12, 2022 at 15:12

According to a comment in Whisper's issue tracker, this might be a possible answer:

From the paper, the dataset that was used did not include any English audio paired with Polish text samples. The dataset was cleaned by using a different model to match the spoken language with the text language; if they did not match, the sample was excluded. An exception was made for a portion of the training data, matching any spoken language to English text (X->en translation).

So unfortunately there is no direct way; the model wasn't trained on it. For your use case, Whisper can translate the audio to English text, but there has to be some outside system to translate from English text to Polish text.
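
A minimal sketch of that two-stage pipeline, using the openai-whisper Python API for the X->en step and, as one assumed choice among several, facebook/m2m100_418M for the en->pl step (file and model names are placeholders):

    import whisper
    from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

    # Stage 1: Whisper's only built-in translation direction, X -> English.
    stt_model = whisper.load_model("small")
    english_text = stt_model.transcribe("polish_audio.mp3", task="translate")["text"]

    # Stage 2: English -> Polish with an outside text-to-text model
    # (facebook/m2m100_418M is an assumption; any en->pl MT model works).
    tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100_418M")
    mt_model = M2M100ForConditionalGeneration.from_pretrained("facebook/m2m100_418M")
    tokenizer.src_lang = "en"
    encoded = tokenizer(english_text, return_tensors="pt")
    generated = mt_model.generate(
        **encoded, forced_bos_token_id=tokenizer.get_lang_id("pl")
    )
    print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])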

The --language parameter is defined in the CLI help as:

--language
    language spoken in the audio, specify None 
    to perform language detection (default: None)

Yet, despite the help text above, this parameter can have potentially useful undocumented side effects.

The 'exploit'

The undocumented glitch observed is that if you set a source language, e.g. es, while the audio input contains English, then the English part of the input is translated to Spanish. Parts of the audio input that are not in English are transcribed, although depending on the language this might not always work, or it might generate garbage translations.

So the 'exploit' is that the models can be used to parse English audio and translate it into a supported language.
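
In the Python API, the same 'exploit' looks roughly like this (the audio file name is a placeholder; the task is left at its default, transcribe):

    import whisper

    model = whisper.load_model("small")

    # The audio is English, but the source language is deliberately
    # (mis-)declared as Spanish; the English speech then comes out
    # translated to Spanish instead of transcribed.
    result = model.transcribe("english_speech.mp3", language="es")
    print(result["text"])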

The behaviour above occurs in regular transcribe mode (the default, i.e. --task transcribe) and is reproducible with both the original Whisper implementation in Python and the CPU-optimized C++ port whisper.cpp, which uses the same models but apparently with different default parameters.

The quality of the non-English translation depends on the language, and seems to be generally lower than what you would get with dedicated open-source models, e.g. Helsinki-NLP/opus-mt-es-en, facebook/m2m100_1.2B, facebook/nllb-200-3.3B, fairseq models, etc.
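
For comparison, the dedicated models named above are straightforward to run; a minimal sketch with the transformers pipeline API and Helsinki-NLP/opus-mt-es-en (the sample sentence is made up):

    from transformers import pipeline

    # Helsinki-NLP/opus-mt-es-en translates Spanish -> English; pick the
    # opus-mt checkpoint matching the language pair you need.
    translator = pipeline("translation", model="Helsinki-NLP/opus-mt-es-en")
    print(translator("Hay más de ochenta personas.")[0]["translation_text"])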

Gaines answered 12/12, 2022 at 14:6. Comments (3):
Maybe we can use Facebook's M2M100 to do a post-translation, so we can have a better translation solution? – Animality
@EricHo: m2m100, fairseq, etc., as well as some models available for example in the gpt4all app, all do a better job at translation than Whisper, so you'll generally get better translation quality. – Gaines
I think the M2M100 model would be the most suitable for this case; it is free and easy to use. The GPT-4 API costs money, and fairseq is complex to use. As for Helsinki-NLP/opus-mt-es-en, I've only seen it mentioned by the author and do not know that model at all. I am also looking for solutions to transcribe Japanese adult videos and convert the subtitles to Chinese. In my case, M2M100 (which has two model variants) seems to be the right one. – Animality
