How to revert BERT/XLNet embeddings?

I've been experimenting with stacking language models recently and noticed something interesting: with BERT and XLNet, passing the input embeddings straight through the output embedding layer does not give back the original tokens. For example, this code snippet:

import torch
import transformers

bert = transformers.BertForMaskedLM.from_pretrained("bert-base-cased")
tok = transformers.BertTokenizer.from_pretrained("bert-base-cased")

sent = torch.tensor(tok.encode("I went to the store the other day, it was very rewarding."))
enc = bert.get_input_embeddings()(sent)   # token ids -> embedding vectors
dec = bert.get_output_embeddings()(enc)   # embedding vectors -> vocab logits

print(tok.decode(dec.softmax(-1).argmax(-1)))

Outputs this for me:

,,,,,,,,,,,,,,,,,

I would have expected the (formatted) input sequence to be returned since I was under the impression that the input and output token embeddings were tied.
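The tying itself is easy to check. This quick sanity sketch should print True twice if (as I believe is the default) the decoder reuses the embedding matrix:

# Sanity check: the decoder weight should literally be the shared input
# embedding matrix (same tensor object, not just equal values).
inp_w = bert.get_input_embeddings().weight
out_w = bert.get_output_embeddings().weight
print(inp_w is out_w)              # True if the weights are tied
print(torch.equal(inp_w, out_w))   # True if the values match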

What's interesting is that most other models do not exhibit this behavior. For example, running the same code snippet on GPT-2, ALBERT, or RoBERTa reproduces the input sequence.
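Here is the comparison I ran, in case anyone wants to reproduce it (a sketch; the checkpoint names are the standard ones on the Hugging Face hub):

# Same round trip as above, run against a few other checkpoints.
for cls, tok_cls, name in [
    (transformers.GPT2LMHeadModel, transformers.GPT2Tokenizer, "gpt2"),
    (transformers.AlbertForMaskedLM, transformers.AlbertTokenizer, "albert-base-v2"),
    (transformers.RobertaForMaskedLM, transformers.RobertaTokenizer, "roberta-base"),
]:
    model = cls.from_pretrained(name)
    tk = tok_cls.from_pretrained(name)
    ids = torch.tensor(tk.encode("I went to the store the other day, it was very rewarding."))
    logits = model.get_output_embeddings()(model.get_input_embeddings()(ids))
    print(name, "->", tk.decode(logits.softmax(-1).argmax(-1)))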

Is this a bug? Or is it expected for BERT/XLNet?

Atropos answered 2/4, 2020 at 17:23

Not sure if it's too late, but I've experimented a bit with your code, and the embeddings can indeed be inverted. :)

import torch
import transformers

bert = transformers.BertForMaskedLM.from_pretrained("bert-base-cased")
tok = transformers.BertTokenizer.from_pretrained("bert-base-cased")

sent = torch.tensor(tok.encode("I went to the store the other day, it was very rewarding."))
print("Initial sentence:", sent)
enc = bert.get_input_embeddings()(sent)
dec = bert.get_output_embeddings()(enc)

# Softmax over dim 0 (the sequence dimension) rather than the vocab
# dimension, then argmax over dim 1 (the vocab dimension).
print("Decoded sentence:", tok.decode(dec.softmax(0).argmax(1)))

With this, you get the following output:

Initial sentence: tensor([  101,   146,  1355,  1106,  1103,  2984,  1103,  1168,  1285,   117,
         1122,  1108,  1304, 10703,  1158,   119,   102])  
Decoded sentence: [CLS] I went to the store the other day, it was very rewarding. [SEP]
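My best guess at why the dimension of the softmax matters: BertForMaskedLM's output projection is a Linear layer with a per-token bias, and frequent tokens like "," get a large bias that can dominate a plain argmax over the vocabulary. That bias is constant along the sequence dimension, so a softmax over dim 0 cancels it out. If that explanation is right, subtracting the bias explicitly should also recover the sentence (a sketch, reusing the variables above):

# Hypothesis: the decoder's per-token bias skews a plain argmax over the
# vocab dimension towards frequent tokens. Remove it and argmax normally.
bias = bert.get_output_embeddings().bias   # shape: [vocab_size]
print(tok.decode((dec - bias).argmax(-1)))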
Hebraist answered 12/12, 2020 at 16:51

There is a 2023 paper on exactly this kind of embedding inversion. The authors claim that their

model can recover important personal information (full names) from a dataset of clinical notes.

The warning in the conclusion is particularly noteworthy if you work in a domain that handles sensitive data:

Our findings indicate a sort of equivalence between embeddings and raw data, in that both leak similar amounts of sensitive information. This equivalence puts a heavy burden on anonymization requirements for dense embeddings: embeddings should be treated as highly sensitive private data and protected, technically and perhaps legally, in the same way as one would protect raw text.

Naivete answered 12/3 at 8:1