Keras Tokenizer character level not working

Problem description

I am passing a list of lists to the Keras Tokenizer with char_level=True, but the result is word-level tokenization, not character-level tokenization.

    import numpy as np
    from tensorflow.keras.preprocessing.text import Tokenizer


    # List of lists
    train_data = [['SMITH', 'JOHN', '', 'CHESTERTOWN', 'MD', '21620', '555555555', 'F'], ['CROW', 'JOE', '', 'FREDERICK', 'MD', '217011313', '9999999999', 'F']]

    t = Tokenizer(filters='!"#$%&()*+,-./:;<=>?@[\\]^_`{|}~\t\n', split=',', char_level=True, oov_token=True) 
    t.fit_on_texts(train_data)
    train_token = np.array(t.texts_to_sequences(train_data)) 

    print(train_token)
    array([[ 5,  6,  2,  7,  3,  8,  9,  4], [10, 11,  2, 12,  3, 13, 14,  4]])

Tags: keras, tokenize

Solution


This happens because your data should be strings, not lists. If you join all the words into a single string, it works as expected.
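Why lists bypass character splitting can be illustrated with a simplified sketch of the token-extraction step (my own illustration, assuming behavior along the lines of `fit_on_texts` -- this is not the actual Keras source): iterating a Python list yields its elements, i.e. whole words, while iterating a string with char_level=True yields individual characters.

```python
# Hypothetical, simplified token-extraction step, for illustration only.
def extract_tokens(text, char_level=True):
    if char_level or isinstance(text, list):
        # A list is treated as an already-split sequence of tokens;
        # a string is iterated character by character.
        return list(text)
    return text.split()

print(extract_tokens(['SMITH', 'JOHN']))  # word tokens: ['SMITH', 'JOHN']
print(extract_tokens('SMITH JOHN'))       # one token per character
```

So a list of lists reaches the fitting step already "tokenized" into words, and the char_level flag never gets a string to split.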

Just add the following to your code:

def concat_list(l):
    # Join one record's fields into a single space-separated string
    concat = ''
    for word in l:
        concat += word + ' '
    return concat

train_data = [concat_list(data) for data in train_data]

Then you get:

>>> [list([16, 9, 17, 10, 11, 2, 18, 7, 11, 19, 2, 2, 12, 11, 5, 16, 10, 5, 8, 10, 7, 20, 19, 2, 9, 13, 2, 14, 6, 23, 14, 21, 2, 4, 4, 4, 4, 4, 4, 4, 4, 4, 2, 15, 2])
     list([12, 8, 7, 20, 2, 18, 7, 5, 2, 2, 15, 8, 5, 13, 5, 8, 17, 12, 24, 2, 9, 13, 2, 14, 6, 25, 21, 6, 6, 22, 6, 22, 2, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 2, 15, 2])]
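As a side note (my own addition, not part of the original answer), the same concatenation can be written with Python's built-in `str.join`, which also avoids the trailing space:

```python
train_data = [['SMITH', 'JOHN', '', 'CHESTERTOWN', 'MD'],
              ['CROW', 'JOE', '', 'FREDERICK', 'MD']]

# Join each record's fields into one space-separated string before
# fitting the tokenizer. Empty fields produce a double space, which
# the tokenizer's whitespace handling absorbs.
train_data = [' '.join(words) for words in train_data]
print(train_data[0])  # -> 'SMITH JOHN  CHESTERTOWN MD'
```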
