Is there a way to tell a trained transformer model (e.g. from Hugging Face) to cast to float?
I am attempting to run the T5 transformer on an M1 Mac using the MPS backend:

import torch
import json 
from transformers import T5Tokenizer, T5ForConditionalGeneration, T5Config
#Make sure sentencepiece is installed
device = torch.device('mps')
model = T5ForConditionalGeneration.from_pretrained('t5-3b').to("mps")
tokenizer = T5Tokenizer.from_pretrained('t5-3b')#, device = device)

preprocess_text = full_text.strip().replace("\n",".")
t5_prepared_Text = "summarize: "+preprocess_text
print ("original text preprocessed: \n", preprocess_text)
tokenized_text = tokenizer.encode(t5_prepared_Text, return_tensors="pt").to(device)

# summarize
summary_ids = model.generate(tokenized_text,
                                    num_beams=6,
                                    no_repeat_ngram_size=3,
                                    min_length=30,
                                    max_length=9000,
                                    early_stopping=True)

output = tokenizer.decode(summary_ids[0], skip_special_tokens=True)

print ("\n\nSummarized text: \n",output)

where "full_text" is a string defined earlier. This code works on the CPU, but I'd like to speed it up by using MPS (as shown in the code above). With MPS, I get the error:

TypeError: Operation 'abs_out_mps()' does not support input type 'int64' in MPS backend.
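One thing I've seen suggested (untested here, and I'm not sure it covers dtype errors like this one, as opposed to ops that are missing entirely) is PyTorch's opt-in CPU fallback for MPS-unsupported operations, enabled via an environment variable before the script starts:

```shell
# Opt-in fallback: PyTorch routes ops the MPS backend can't run to the CPU.
# Must be set in the environment before torch is imported.
export PYTORCH_ENABLE_MPS_FALLBACK=1
```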

So int64 isn't supported by the MPS backend. Is there a way I can get the model to cast to a supported type (e.g. float) when this error occurs, rather than the whole run failing?
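For reference, the behavior I'm after could be sketched as a wrapper that catches the unsupported-dtype TypeError from the MPS backend and retries on a fallback path; the helper and the usage names below are hypothetical, not part of the transformers API:

```python
def generate_with_fallback(primary, fallback):
    """Try the MPS-backed call first; if the MPS backend rejects the
    input dtype with a TypeError, retry the fallback (e.g. CPU) path."""
    try:
        return primary()
    except TypeError as err:
        if "does not support input type" in str(err):
            return fallback()
        raise

# Hypothetical usage with the code above:
# summary_ids = generate_with_fallback(
#     lambda: model.generate(tokenized_text, num_beams=6, max_length=9000),
#     lambda: model.to("cpu").generate(tokenized_text.cpu(),
#                                      num_beams=6, max_length=9000),
# )
```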

Ditchwater answered 23/7, 2022 at 18:31
