Need a "bag of words" type of transformer

Problem description

I have an NLP project where a collection of words is currently encoded with w2v so it can be compared against other collections of words. I would like to try transformers, which might give better embeddings than w2v. However, because of the nature of the data, I don't need positional encoding at all (the collections of words have no order). Is there a pretrained transformer that won't do positional encoding?
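For illustration only, here is a minimal sketch of the kind of w2v setup described above: average the vectors of each collection and compare collections with cosine similarity. The gensim model path and the mean-pooling are assumptions for the sketch, not details from the question.

import numpy as np
from gensim.models import KeyedVectors

# hypothetical path to a pretrained word2vec model in binary format
kv = KeyedVectors.load_word2vec_format('w2v.bin', binary=True)

def encode(words):
    # bag-of-words: average the w2v vectors, so word order does not matter
    return np.mean([kv[w] for w in words], axis=0)

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(encode(['fast', 'red', 'car']), encode(['quick', 'crimson', 'automobile'])))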

Tags: nlp, word2vec, huggingface-transformers, transformer

Solution


You can use get_input_embeddings() to access the corresponding embedding layer. Have a look at this example for RoBERTa:

import torch
from transformers import RobertaTokenizerFast, RobertaModel

t = RobertaTokenizerFast.from_pretrained('roberta-base')
m = RobertaModel.from_pretrained('roberta-base')
e = m.get_input_embeddings()

myWordCollection = ['This', 'That', 'stackoverflow', 'huggingface']

# some of the words will consist of several tokens (i.e. several vectors)
i = t(myWordCollection, return_attention_mask=False, add_special_tokens=False)

# a dictionary mapping each word to the embedding vectors of its tokens
o = {word: e(torch.tensor(ids)) for word, ids in zip(myWordCollection, i.input_ids)}
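If you then need one vector per word (or per whole collection) for the comparison, a simple follow-up, not part of the original answer, is to mean-pool the sub-token vectors. The pooling choice and the cosine comparison below are assumptions, shown only as a sketch on top of the dictionary o built above.

import torch
import torch.nn.functional as F

# sketch (assumption, not from the original answer): mean-pool sub-token
# embeddings into one vector per word, then mean-pool a collection's word
# vectors and compare two collections with cosine similarity
def collection_vector(words, embeddings):
    # embeddings[w] has shape [num_tokens, hidden_size]; average twice
    return torch.stack([embeddings[w].mean(dim=0) for w in words]).mean(dim=0)

with torch.no_grad():
    a = collection_vector(['This', 'That'], o)
    b = collection_vector(['stackoverflow', 'huggingface'], o)
    print(F.cosine_similarity(a, b, dim=0).item())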
