First of all, it should be noted that the sentence transformer can support a different sequence length than the underlying transformer model. You can check both values with:
# that's the sentence transformer
print(model.max_seq_length)
# that's the underlying transformer
print(model[0].auto_model.config.max_position_embeddings)
Output:
256
512
That means the position embedding layer of the underlying transformer has weights for 512 positions, but the sentence transformer only uses, and was only trained with, the first 256 of them. You should therefore be careful about increasing the value above 256: it will work from a technical perspective, but the position embedding weights beyond 256 are not properly trained and can therefore mess up your results. Please also check this StackOverflow post.
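If you nevertheless want to raise the limit, max_seq_length can simply be reassigned on the model (it is a settable property in recent sentence-transformers versions). The snippet below is only a minimal sketch, and 384 is just an example value; as explained above, anything beyond 256 relies on untrained position embeddings:
# raise the sequence length the sentence transformer uses (example value; not recommended beyond 256 for this model)
model.max_seq_length = 384
print(model.max_seq_length)  # 384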
Regarding throwing an exception, I don't think that is offered by the library, so you have to write a workaround yourself:
from sentence_transformers import SentenceTransformer

model = SentenceTransformer('all-MiniLM-L6-v2')
my_text = "this is a test " * 1000

try:
    # tokenize without attention mask / token type ids; we only need the token count
    o = model[0].tokenizer(my_text, return_attention_mask=False, return_token_type_ids=False)
    if len(o.input_ids) > model.max_seq_length:
        raise ValueError("Oh no!")
except ValueError:
    ...  # handle texts that are too long here
else:
    model.encode(my_text)
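If you need this check in several places, you could wrap it in a small helper. The function below is only a sketch mirroring the snippet above (encode_or_raise is a hypothetical name, not part of the library):
def encode_or_raise(model, text):
    # count the tokens the sentence transformer would see
    ids = model[0].tokenizer(text, return_attention_mask=False, return_token_type_ids=False).input_ids
    if len(ids) > model.max_seq_length:
        raise ValueError(f"Got {len(ids)} tokens, but the model only uses the first {model.max_seq_length}.")
    return model.encode(text)

embedding = encode_or_raise(model, "a short text")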