How to ignore characters when tokenizing with Keras

Problem Description

I am trying to build and train a tokenizer with Keras, and this is the code snippet where I do so:

from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
from keras.models import Sequential
from keras.layers import Dense

txt1="""What makes this problem difficult is that the sequences can vary in length,
be comprised of a very large vocabulary of input symbols and may require the model 
to learn the long term context or dependencies between symbols in the input sequence."""

# txt1 is used for fitting
tk = Tokenizer(nb_words=2000, lower=True, split=" ", char_level=False)
tk.fit_on_texts(txt1)

# convert text to sequences
t = tk.texts_to_sequences(txt1)

# padding to feed the sequences to a keras model
t = pad_sequences(t, maxlen=10)

When I test which words the Tokenizer has learned, it turns out that it has only learned characters, not words.

print(tk.word_index)

Output:

{'e': 1, 't': 2, 'n': 3, 'a': 4, 's': 5, 'o': 6, 'i': 7, 'r': 8, 'l': 9, 'h': 10, 'm': 11, 'c': 12, 'u': 13, 'b': 14, 'd': 15, 'y': 16, 'p': 17, 'f': 18, 'q': 19, 'v': 20, 'g': 21, 'w': 22, 'k': 23, 'x': 24}

Why doesn't it contain any words?

Moreover, if I print t, it clearly shows that the words were ignored and each word was tokenized character by character:

print(t)  

Output:

[[ 0  0  0 ...  0  0 22]
 [ 0  0  0 ...  0  0 10]
 [ 0  0  0 ...  0  0  4]
 ...
 [ 0  0  0 ...  0  0 12]
 [ 0  0  0 ...  0  0  1]
 [ 0  0  0 ...  0  0  0]]
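
As a quick sanity check (a diagnostic sketch, not part of the snippet above), t ends up with one padded row for every single character of txt1, spaces and punctuation included:

print(t.shape)    # (len(txt1), 10) -- one padded row per character
print(len(txt1))  # matches t.shape[0]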

Tags: python, keras, nlp, tokenize

Solution


I found the mistake. If the text is passed in as follows:

txt1=["""What makes this problem difficult is that the sequences can vary in length,
be comprised of a very large vocabulary of input symbols and may require the model 
to learn the long term context or dependencies between symbols in the input sequence."""]

that is, wrapped in square brackets so that it is a list, it works fine. Here is the new output of t:

print(t)


[[30 31 32 33 34  5  2  1  4 35]]

This means the function expects a list of texts, not just a single string.
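
For reference, here is a minimal end-to-end sketch of the corrected flow. It assumes Keras 2, where the nb_words argument used in the question has been renamed num_words; everything else mirrors the snippet from the question.

from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences

txt1 = ["""What makes this problem difficult is that the sequences can vary in length,
be comprised of a very large vocabulary of input symbols and may require the model
to learn the long term context or dependencies between symbols in the input sequence."""]

# fit_on_texts iterates over its argument, so it must receive a list (or
# other iterable) of texts; a bare string gets iterated character by
# character, which is exactly what produced the character-level index above.
tk = Tokenizer(num_words=2000, lower=True, split=" ", char_level=False)
tk.fit_on_texts(txt1)

print(tk.word_index)   # now maps words, e.g. 'the' -> 1

# texts_to_sequences likewise expects a list of texts and returns one
# sequence of word indices per text; pad_sequences then pads or truncates
# each sequence to maxlen (keeping the last maxlen entries by default).
t = tk.texts_to_sequences(txt1)
t = pad_sequences(t, maxlen=10)
print(t)               # a single row: the last 10 word indices of txt1

The same rule applies to texts_to_sequences: passing the bare string there is what made every character of txt1 come back as its own padded row in the original output.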

