Trying to understand the Keras Tokenizer's texts_to_sequences

Problem description

I am using:

from keras.preprocessing.text import Tokenizer

max_words = 10000

text = 'Decreased glucose-6-phosphate dehydrogenase activity along with oxidative stress affects visual contrast sensitivity in alcoholics.'

tokenizer = Tokenizer(num_words=max_words)
tokenizer.fit_on_texts(text)
sequences = tokenizer.texts_to_sequences(text)

print(sequences)

This gives:

[[8], [2], [7], [12], [2], [5], [1], [2], [8], [], [14], [9], [16], [7], [6], [1], [2], [], [19], [], [17], [10], [6], [1], [17], [10], [5], [3], [2], [], [8], [2], [10], [15], [8], [12], [6], [14], [2], [11], [5], [1], [2], [], [5], [7], [3], [4], [13], [4], [3], [15], [], [5], [9], [6], [11], [14], [], [20], [4], [3], [10], [], [6], [21], [4], [8], [5], [3], [4], [13], [2], [], [1], [3], [12], [2], [1], [1], [], [5], [18], [18], [2], [7], [3], [1], [], [13], [4], [1], [16], [5], [9], [], [7], [6], [11], [3], [12], [5], [1], [3], [], [1], [2], [11], [1], [4], [3], [4], [13], [4], [3], [15], [], [4], [11], [], [5], [9], [7], [6], [10], [6], [9], [4], [7], [1], []]

What does this actually mean, and why are there so many entries? I can see that, according to Keras, the text above has 16 words, as shown here:

{'oxidative', 'contrast', '6', 'affects', 'in', 'dehydrogenase', 'visual', 'stress', 'glucose', 'phosphate', 'along', 'activity', 'with', 'alcoholics', 'decreased', 'sensitivity'}

Incidentally, that is also wrong for my use case, since I want to keep glucose-6-phosphate from being split, but I think I can prevent that with:

tokenizer = Tokenizer(num_words=max_words, filters='!"#$%&()*+,./:;<=>?@[\\]^_`{|}~\t\n')

Tags: python-3.x, keras

Solution


tokenizer.fit_on_texts expects a list of texts, but you are passing it a single string; the same goes for tokenizer.texts_to_sequences(). Because a Python string is itself iterable character by character, each character gets treated as a separate text, which is why the output above contains one short (often empty) list per character. Try passing a list to both methods:

from keras.preprocessing.text import Tokenizer

max_words = 10000

text = 'Decreased glucose-6-phosphate dehydrogenase ...'

tokenizer = Tokenizer(num_words=max_words, filters='!"#$%&()*+,./:;<=>?@[\\]^_`{|}~\t\n')
tokenizer.fit_on_texts([text])
sequences = tokenizer.texts_to_sequences([text])

This gives you a list of integer sequences, one integer per word of the sentence, which is probably what you want for your use case:

sequences

[[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14]]
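As a quick sanity check, you can inspect the tokenizer's vocabulary directly. The sketch below (my own variable names; it assumes the custom filters from the question, i.e. the defaults minus '-') shows that with a list input the vocabulary is word-level and 'glucose-6-phosphate' survives as a single token, while fitting on the bare string produces a character-level vocabulary, which is where the many one-element and empty lists in the original output come from:

from keras.preprocessing.text import Tokenizer

text = 'Decreased glucose-6-phosphate dehydrogenase activity along with oxidative stress affects visual contrast sensitivity in alcoholics.'
custom_filters = '!"#$%&()*+,./:;<=>?@[\\]^_`{|}~\t\n'  # default filters without '-'

# Correct usage: fit on a list of texts -> word-level vocabulary
word_tok = Tokenizer(num_words=10000, filters=custom_filters)
word_tok.fit_on_texts([text])
print(word_tok.word_index)
# expected to contain whole words, including 'glucose-6-phosphate' as one key

# Original usage: fit on a bare string -> the Tokenizer iterates it character
# by character, so the vocabulary ends up being single characters, and
# texts_to_sequences(text) returns one (often empty) list per character
char_tok = Tokenizer(num_words=10000)
char_tok.fit_on_texts(text)
print(char_tok.word_index)
# expected to contain single characters such as 'e', 'c', 'd', ...

Note that word_index starts at 1 and assigns indices by descending word frequency; here every word occurs exactly once, so the indices follow sentence order, which is why the sequence above simply enumerates the 14 distinct words as [1, 2, ..., 14].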
