I want to build a classification model that needs only the encoder part of a language model. I have tried BERT, RoBERTa, and XLNet, and so far I have been successful.
I now want to test the encoder part of T5. So far I have found EncT5 (https://github.com/monologg/EncT5) and T5EncoderModel from HuggingFace.
Can anyone help me understand if T5EncoderModel is what I am looking for or not?
It says in the description: The bare T5 Model transformer outputting encoder’s raw hidden-states without any specific head on top.
This is slightly confusing to me, especially since the EncT5 repo says they implemented the encoder-only part because it didn't exist in HuggingFace, which makes me more doubtful here.
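From reading the docs, my understanding is that T5EncoderModel runs only the encoder stack and returns raw hidden states, so a classification head has to be added on top manually. Here is a minimal sketch of what I think the usage looks like (the tiny config sizes, the mean-pooling, and the linear head are my own illustrative choices, not anything from the docs):

```python
import torch
from transformers import T5Config, T5EncoderModel

# Tiny randomly initialized encoder just for illustration; in practice one
# would load pretrained weights with T5EncoderModel.from_pretrained("t5-small").
config = T5Config(vocab_size=1000, d_model=64, d_ff=128,
                  num_layers=2, num_heads=4)
encoder = T5EncoderModel(config)

# Dummy batch of token ids (a real run would come from T5Tokenizer).
input_ids = torch.randint(0, 1000, (1, 8))
outputs = encoder(input_ids=input_ids)

# The "raw hidden-states" the description mentions: (batch, seq_len, d_model).
hidden = outputs.last_hidden_state

# No head is included, so attach one yourself:
# here, mean-pool over tokens and project to the label space.
num_labels = 2  # hypothetical binary task
classifier = torch.nn.Linear(config.d_model, num_labels)
logits = classifier(hidden.mean(dim=1))
```

If that is right, then T5EncoderModel plus a small head would do the same job EncT5 does. Is my understanding correct?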
Please note that I am a beginner in deep learning, so please go easy on me; I understand that my questions may seem naive to most of you.
Thank you