How to prompt the Flan-T5 language model to get accurate responses for a chatbot option-matching use case
I am trying to use a Flan-T5 model for the following task. A chatbot presents the user with a list of options, and the model has to do semantic option matching. For instance, if the options are "Barbeque Chicken" and "Smoked Salmon" and the user says "I want fish", the model should select Smoked Salmon. If the user says "The first one", the model should select Barbeque Chicken, and if the user says "The BBQ one", it should also select Barbeque Chicken.
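To make the expected behaviour concrete, here is a minimal sketch of the mapping I am after (the option list, utterances, and variable names are just illustrative examples, not my actual data):

# Each user utterance should resolve to exactly one of the presented options,
# whether the reference is semantic, positional, or a partial name.
options = ["Barbeque Chicken", "Smoked Salmon"]

expected_matches = {
    "I want fish": "Smoked Salmon",       # semantic match (fish -> salmon)
    "The first one": "Barbeque Chicken",  # positional reference
    "The BBQ one": "Barbeque Chicken",    # abbreviation / partial name match
}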
I am using some example code from the Hugging Face docs to play around with flan-t5-small, but I do not get the correct output.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Load the small Flan-T5 checkpoint and its tokenizer
model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-small")
tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-small")

# The options are listed first and the user's reply is given after "A:"
prompt = '''Q:Select from the following options
(a) Quinoa Salad
(b) Kale Smoothie
A:Select the first one
'''
inputs = tokenizer(prompt, return_tensors="pt")

outputs = model.generate(**inputs)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
The output is
['(b) Kale Smoothie']
How should I phrase the prompt/question to elicit the correct response from Flan-T5?
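For reference, this is roughly the kind of restructured prompt I have been considering, with the user's reply stated first and the menu items listed as candidates; the exact wording is my own guess and is precisely what I am unsure about:

from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-small")

# Guessed prompt structure: quote the user's reply, then ask which option it refers to
user_reply = "The first one"
prompt = f'''The user said: "{user_reply}"
Which of the following options does the user mean?
(a) Quinoa Salad
(b) Kale Smoothie
'''
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))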