Encoding a text column in a pandas DataFrame

Problem description

Where am I going wrong? I am trying to iterate over each row of the DataFrame and encode the text.

data['text'] = data.apply(lambda row: 
    codecs(row['text'], "r", 'utf-8'), axis=1)

I get this error. Why does the UTF encoding affect that part of the code? If I don't run the UTF encoding, I don't get the error:

    TypeError                                 Traceback (most recent call last)
    <ipython-input-101-0e1d5977a3b3> in <module>
    ----> 1 data['text'] = codecs(data['text'], "r", 'utf-8')
          2 
          3 data['text'] = data.apply(lambda row: 
          4     codecs(row['text'], "r", 'utf-8'), axis=1)

    TypeError: 'module' object is not callable

When I apply the solution, both lines work, but then I get this error:

    data['text_tokens'] = data.apply(lambda row: 
        nltk.word_tokenize(row['text']), axis=1)

Error:

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-138-73972d522748> in <module>
      1 data['text_tokens'] = data.apply(lambda row: 
----> 2     nltk.word_tokenize(row['text']), axis=1)

~/env/lib64/python3.6/site-packages/pandas/core/frame.py in apply(self, func, axis, broadcast, raw, reduce, result_type, args, **kwds)
   6485                          args=args,
   6486                          kwds=kwds)
-> 6487         return op.get_result()
   6488 
   6489     def applymap(self, func):

~/env/lib64/python3.6/site-packages/pandas/core/apply.py in get_result(self)
    149             return self.apply_raw()
    150 
--> 151         return self.apply_standard()
    152 
    153     def apply_empty_result(self):

~/env/lib64/python3.6/site-packages/pandas/core/apply.py in apply_standard(self)
    255 
    256         # compute the result using the series generator
--> 257         self.apply_series_generator()
    258 
    259         # wrap results

~/env/lib64/python3.6/site-packages/pandas/core/apply.py in apply_series_generator(self)
    284             try:
    285                 for i, v in enumerate(series_gen):
--> 286                     results[i] = self.f(v)
    287                     keys.append(v.name)
    288             except Exception as e:

<ipython-input-138-73972d522748> in <lambda>(row)
      1 data['text_tokens'] = data.apply(lambda row: 
----> 2     nltk.word_tokenize(row['text']), axis=1)

~/env/lib64/python3.6/site-packages/nltk/tokenize/__init__.py in word_tokenize(text, language, preserve_line)
    142     :type preserve_line: bool
    143     """
--> 144     sentences = [text] if preserve_line else sent_tokenize(text, language)
    145     return [
    146         token for sent in sentences for token in _treebank_word_tokenizer.tokenize(sent)

~/env/lib64/python3.6/site-packages/nltk/tokenize/__init__.py in sent_tokenize(text, language)
    104     """
    105     tokenizer = load('tokenizers/punkt/{0}.pickle'.format(language))
--> 106     return tokenizer.tokenize(text)
    107 
    108 

~/env/lib64/python3.6/site-packages/nltk/tokenize/punkt.py in tokenize(self, text, realign_boundaries)
   1275         Given a text, returns a list of the sentences in that text.
   1276         """
-> 1277         return list(self.sentences_from_text(text, realign_boundaries))
   1278 
   1279     def debug_decisions(self, text):

~/env/lib64/python3.6/site-packages/nltk/tokenize/punkt.py in sentences_from_text(self, text, realign_boundaries)
   1329         follows the period.
   1330         """
-> 1331         return [text[s:e] for s, e in self.span_tokenize(text, realign_boundaries)]
   1332 
   1333     def _slices_from_text(self, text):

~/env/lib64/python3.6/site-packages/nltk/tokenize/punkt.py in <listcomp>(.0)
   1329         follows the period.
   1330         """
-> 1331         return [text[s:e] for s, e in self.span_tokenize(text, realign_boundaries)]
   1332 
   1333     def _slices_from_text(self, text):

~/env/lib64/python3.6/site-packages/nltk/tokenize/punkt.py in span_tokenize(self, text, realign_boundaries)
   1319         if realign_boundaries:
   1320             slices = self._realign_boundaries(text, slices)
-> 1321         for sl in slices:
   1322             yield (sl.start, sl.stop)
   1323 

~/env/lib64/python3.6/site-packages/nltk/tokenize/punkt.py in _realign_boundaries(self, text, slices)
   1360         """
   1361         realign = 0
-> 1362         for sl1, sl2 in _pair_iter(slices):
   1363             sl1 = slice(sl1.start + realign, sl1.stop)
   1364             if not sl2:

~/env/lib64/python3.6/site-packages/nltk/tokenize/punkt.py in _pair_iter(it)
    316     it = iter(it)
    317     try:
--> 318         prev = next(it)
    319     except StopIteration:
    320         return

~/env/lib64/python3.6/site-packages/nltk/tokenize/punkt.py in _slices_from_text(self, text)
   1333     def _slices_from_text(self, text):
   1334         last_break = 0
-> 1335         for match in self._lang_vars.period_context_re().finditer(text):
   1336             context = match.group() + match.group('after_tok')
   1337             if self.text_contains_sentbreak(context):

TypeError: ('cannot use a string pattern on a bytes-like object', 'occurred at index 0')

Tags: python

Solution


Encoding

As the first error says, codecs is not callable: it is the name of the module itself.

You probably want:

data['text'] = data.apply(lambda row: 
    codecs.encode(row['text'], 'utf-8'), axis=1)
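As a side note (not part of the original answer), pandas can also do the same thing in a vectorized way through Series.str.encode, without apply. A minimal sketch with made-up sample data:

    import pandas as pd

    # made-up sample data standing in for the asker's DataFrame
    data = pd.DataFrame({'text': ['first sentence.', 'second one.']})

    # Series.str.encode calls str.encode element-wise and returns a bytes column,
    # equivalent to the apply/codecs.encode version above
    data['text'] = data['text'].str.encode('utf-8')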

Tokenization

The error raised by word_tokenize comes from applying the function to strings that were already encoded: codecs.encode turns the text into a bytes literal.
From the codecs documentation:

Most standard codecs are text encodings, which encode text to bytes, but there are also codecs provided that encode text to text, and bytes to bytes.
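To make that text-to-bytes behaviour concrete, here is a quick interpreter check (my own illustration, not part of the original answer):

    import codecs

    s = 'encode me'
    b = codecs.encode(s, 'utf-8')

    print(type(s), s)   # <class 'str'> encode me
    print(type(b), b)   # <class 'bytes'> b'encode me'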

word_tokenize does not work on bytes literals, just as the error says (last line of the traceback).
If you remove the encoding step, it will work.
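Putting it together, the working version simply tokenizes the original (already Unicode) strings. A minimal sketch with made-up data, assuming the NLTK punkt models are installed:

    import nltk
    import pandas as pd

    # nltk.download('punkt')  # needed once so word_tokenize can load its models

    data = pd.DataFrame({'text': ['First sentence here.', 'Another short one.']})

    # no codecs.encode step: tokenize the str column directly
    data['text_tokens'] = data.apply(lambda row:
        nltk.word_tokenize(row['text']), axis=1)

    print(data['text_tokens'].tolist())
    # [['First', 'sentence', 'here', '.'], ['Another', 'short', 'one', '.']]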


Regarding your concern about the u prefix: the u prefix indicates a unicode string, while the b prefix indicates a bytes literal, which is what you get when you use codecs.encode.
In Python 3 (I can see from the traceback that your version is 3.6), the default string type is already Unicode, so the u prefix is redundant and usually not shown, but the strings are unicode nonetheless.
So I am fairly sure you are safe: you can simply skip codecs.encode.
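A quick check of those prefixes in Python 3 (again my own illustration):

    # In Python 3, str literals are already Unicode, so the u prefix is redundant
    print(u'café' == 'café')             # True
    print(type('café'))                  # <class 'str'>

    # encoding produces bytes, which is what word_tokenize chokes on
    print(type('café'.encode('utf-8')))  # <class 'bytes'>
    print('café'.encode('utf-8'))        # b'caf\xc3\xa9'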

