Changing vocabulary entries in a tokenizer

Problem Description

I have some text that I want to run NLP on. To do this, I downloaded a pre-trained tokenizer like so:

import transformers as ts

pr_tokenizer = ts.AutoTokenizer.from_pretrained('distilbert-base-uncased', cache_dir='tmp')
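For reference, the entries I am talking about form a token-to-id mapping; a quick way to look at them (the counts and ids in the comments are what distilbert-base-uncased ships with):

vocab = pr_tokenizer.get_vocab()       # dict: token string -> integer id
print(len(vocab))                      # 30522 for distilbert-base-uncased
print(vocab['[CLS]'], vocab['[SEP]'])  # ids of two special tokens (101, 102)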

Then I create my own tokenizer on my data, like so:

from tokenizers import Tokenizer
from tokenizers.models import BPE
tokenizer = Tokenizer(BPE(unk_token="[UNK]"))

from tokenizers.trainers import BpeTrainer
trainer = BpeTrainer(special_tokens=["[UNK]", "[CLS]", "[SEP]", "[PAD]", "[MASK]"])

from tokenizers.pre_tokenizers import Whitespace
tokenizer.pre_tokenizer = Whitespace()

tokenizer.train(['transcripts.raw'], trainer)
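A quick sanity check of the freshly trained tokenizer (the tokens and ids you get depend entirely on transcripts.raw, so the sample sentence is just an illustration):

# encode a sample sentence with the newly trained tokenizer
output = tokenizer.encode("Hello, how are you?")
print(output.tokens)   # the BPE tokens for the sentence
print(output.ids)      # their ids in the new vocabulary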

Now here is the part I am confused about: I need to update the entries in the pretrained tokenizer (pr_tokenizer) whose keys are the same as in my tokenizer (tokenizer). I have tried several approaches; here is one of them:

new_vocab = pr_tokenizer.vocab   # the pretrained vocab (token -> id)
v = tokenizer.get_vocab()        # my tokenizer's vocab

# overwrite the ids of the tokens the two vocabularies share
for token in v:
    if token in new_vocab:
        new_vocab[token] = v[token]

So what do I do now? I was thinking something like:

pr_tokenizer.vocab.update(new_vocab)

or

pr_tokenizer.vocab = new_vocab

Neither works. Does anyone know a good way to do this?

Tags: python, python-3.x, nlp, huggingface-transformers, huggingface-tokenizers

Solution


To do this, you can simply download the tokenizer files from GitHub or from the HuggingFace website into the same folder as your code, and then edit the vocabulary before loading the tokenizer. (Editing pr_tokenizer.vocab in memory does not work because, with a fast tokenizer like the one AutoTokenizer returns here by default, the vocab property hands back a fresh copy on every access, so your updates are never written back.)
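If you don't want to fetch the files by hand, one way to materialize them locally is save_pretrained, which writes vocab.txt plus the tokenizer config files into the target folder:

import transformers as ts

# download the pretrained tokenizer once and write its files
# (vocab.txt, tokenizer_config.json, ...) into a local folder
ts.AutoTokenizer.from_pretrained('distilbert-base-uncased').save_pretrained('./distilbert-base-uncased')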

new_vocab = {}

# read the pretrained vocabulary: one token per line, the line number is the id
with open('./distilbert-base-uncased/vocab.txt', 'r') as f:
    for i, row in enumerate(f):
        new_vocab[row.rstrip('\n')] = i

# your vocabulary entries
v = tokenizer.get_vocab()

# replace the ids of the tokens both vocabularies share (your code)
for token in v:
    if token in new_vocab:
        new_vocab[token] = v[token]

# write the merged vocabulary back to vocab.txt so it is used on load
with open('./distilbert-base-uncased/vocab.txt', 'w') as f:
    # invert the mapping: id -> token
    rev_vocab = {idx: token for token, idx in new_vocab.items()}
    # write tokens in id order; the merge can produce colliding ids,
    # so some positions may be empty and are skipped
    for i in range(max(rev_vocab) + 1):
        if i not in rev_vocab:
            continue
        f.write(rev_vocab[i] + '\n')

# loading the new tokenizer
pr_tokenizer = ts.AutoTokenizer.from_pretrained('./distilbert-base-uncased')
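To confirm that the rewritten vocab.txt is what actually got loaded, here is a minimal check (the line index 100 is arbitrary; any line works):

with open('./distilbert-base-uncased/vocab.txt') as f:
    file_tokens = [line.rstrip('\n') for line in f]

# line i of vocab.txt should be the token with id i in the reloaded tokenizer
print(pr_tokenizer.convert_tokens_to_ids(file_tokens[100]))   # expected: 100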
