Tokenizing tweets based on regular expressions

Problem description

I have the following example text/tweet:

RT @trader $AAPL 2012 is o´o´o´o´o´pen to ‘Talk’ about patents with GOOG definitely not the treatment #samsung got:-) heh url_that_cannot_be_posted_on_SO

I want to follow the procedure in Table 1 of Li, T., van Dalen, J., & van Rees, P. J. (2017). More than just noise? Examining the information content of stock microblogs on financial markets. Journal of Information Technology. doi:10.1057/s41265-016-0034-2, in order to clean up the tweets.

They clean the tweets in such a way that the final result is:

 {RT|123456} {USER|56789} {TICKER|AAPL} {NUMBER|2012} notooopen nottalk patent {COMPANY|GOOG} notdefinetli treatment {HASH|samsung} {EMOTICON|POS} haha {URL}

I use the following script to tokenize the tweets based on regular expressions:

#!/usr/bin/env python
# -*- coding: utf-8 -*-

import re

emoticon_string = r"""
(?:
  [<>]?
  [:;=8]                     # eyes
  [\-o\*\']?                 # optional nose
  [\)\]\(\[dDpP/\:\}\{@\|\\] # mouth      
  |
  [\)\]\(\[dDpP/\:\}\{@\|\\] # mouth
  [\-o\*\']?                 # optional nose
  [:;=8]                     # eyes
  [<>]?
)"""

regex_strings = (
# URL:
r"""http[s]?://(?:[a-zA-Z]|[0-9]|[$-_@.&+]|[!*\(\),]|(?:%[0-9a-fA-F][0-9a-fA-F]))+"""
,
# Twitter username:
r"""(?:@[\w_]+)"""
,
# Hashtags:
r"""(?:\#+[\w_]+[\w\'_\-]*[\w_]+)"""
,
# Cashtags:
r"""(?:\$+[\w_]+[\w\'_\-]*[\w_]+)"""
,
# Remaining word types:
r"""
(?:[+\-]?\d+[,/.:-]\d+[+\-]?)  # Numbers, including fractions, decimals.
|
(?:[\w_]+)                     # Words without apostrophes or dashes.
|
(?:\.(?:\s*\.){1,})            # Ellipsis dots. 
|
(?:\S)                         # Everything else that isn't whitespace.
"""
)

word_re = re.compile(r"""(%s)""" % "|".join(regex_strings), re.VERBOSE | re.I | re.UNICODE)

emoticon_re = re.compile(regex_strings[1], re.VERBOSE | re.I | re.UNICODE)

######################################################################

class Tokenizer:
   def __init__(self, preserve_case=False):
       self.preserve_case = preserve_case

   def tokenize(self, s):
       try:
           s = str(s)
       except UnicodeDecodeError:
           s = str(s).encode('string_escape')
           s = unicode(s)
       # Tokenize:
       words = word_re.findall(s)
       if not self.preserve_case:
           words = map((lambda x: x if emoticon_re.search(x) else x.lower()), words)
       return words

if __name__ == '__main__':
    tok = Tokenizer(preserve_case=False)
    test = ' RT @trader $AAPL 2012 is oooopen to ‘Talk’ about patents with GOOG definitely not the treatment #samsung got:-) heh url_that_cannot_be_posted_on_SO'
    tokenized = tok.tokenize(test)
    print("\n".join(tokenized))

This produces the following output:

rt
@trader
$aapl
2012
is
oooopen 
to
‘
talk
’
about
patents
with
goog
definitely
not
the
treatment
#samsung
got
:-)
heh
url_that_cannot_be_posted_on_SO

How can I adjust this script to get:

rt
{USER|trader}
{CASHTAG|aapl}
{NUMBER|2012}
is
oooopen 
to
‘
talk
’
about
patents
with
goog
definitely
not
the
treatment
{HASHTAG|samsung}
got
{EMOTICON|:-)}
heh
{URL|url_that_cannot_be_posted_on_SO}

Thanks in advance for helping me out!

Tags: python, regex

Solution


You do need to use named capturing groups (as mentioned by thebjorn) and use groupdict() to get the name-value pairs upon each match. It requires some post-processing though:

  • All pairs where the value is None must be discarded (see the short sketch after this list)
  • If self.preserve_case is false, the value can be turned to lower case right away
  • If the group name is WORD, ELLIPSIS or ELSE, the value is added to words as is
  • If the group name is HASHTAG, CASHTAG, USER or URL, the value is first stripped of the leading $, # or @ characters and then added to words as a {<GROUP_NAME>|<VALUE>} item
  • All other matches are added to words as {<GROUP_NAME>|<VALUE>} items.
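
A minimal sketch of the first point (a toy two-group pattern for illustration, not the tokenizer below): groupdict() always returns every named group in the pattern, so the groups that did not participate in a match come back as None and must be filtered out:

import re

# Toy pattern with two named groups; on any given match only one participates.
toy_re = re.compile(r"(?P<USER>@\w+)|(?P<WORD>\w+)")
for m in toy_re.finditer("@trader heh"):
    # Drop the name-value pairs whose value is None
    print({k: v for k, v in m.groupdict().items() if v is not None})
# {'USER': '@trader'}
# {'WORD': 'heh'}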

Note that \w matches underscores by default, so [\w_] is the same as \w. I also optimized the patterns a little.
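
A quick throwaway check of that equivalence (assuming the default re behavior, i.e. without re.ASCII):

import re

# \w already covers letters, digits and the underscore, so [\w_] adds nothing
print(re.findall(r"[\w_]+", "url_that_cannot_be_posted_on_SO"))
print(re.findall(r"\w+", "url_that_cannot_be_posted_on_SO"))
# Both print: ['url_that_cannot_be_posted_on_SO']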

Here is the fixed code snippet:

import re

emoticon_string = r"""
(?P<EMOTICON>
  [<>]?
  [:;=8]                     # eyes
  [-o*']?                    # optional nose
  [][()dDpP/:{}@|\\]         # mouth      
  |
  [][()dDpP/:}{@|\\]         # mouth
  [-o*']?                    # optional nose
  [:;=8]                     # eyes
  [<>]?
)"""

regex_strings = (
# URL:
r"""(?P<URL>https?://(?:[-a-zA-Z0-9_$@.&+!*(),]|%[0-9a-fA-F][0-9a-fA-F])+)"""
,
# Twitter username:
r"""(?P<USER>@\w+)"""
,
# Hashtags:
r"""(?P<HASHTAG>\#+\w+[\w'-]*\w+)"""
,
# Cashtags:
r"""(?P<CASHTAG>\$+\w+[\w'-]*\w+)"""
,
# Remaining word types:
r"""
(?P<NUMBER>[+-]?\d+(?:[,/.:-]\d+[+-]?)?)  # Numbers, including fractions, decimals.
|
(?P<WORD>\w+)                     # Words without apostrophes or dashes.
|
(?P<ELLIPSIS>\.(?:\s*\.)+)            # Ellipsis dots. 
|
(?P<ELSE>\S)                         # Everything else that isn't whitespace.
"""
)

word_re = re.compile(r"""({}|{})""".format(emoticon_string, "|".join(regex_strings)), re.VERBOSE | re.I | re.UNICODE)
#print(word_re.pattern)
emoticon_re = re.compile(emoticon_string, re.VERBOSE | re.I | re.UNICODE)  # not used below, kept for reference

######################################################################

class Tokenizer:
    def __init__(self, preserve_case=False):
        self.preserve_case = preserve_case

    def tokenize(self, s):
        s = str(s)  # the old Python 2 fallback (string_escape/unicode) is dead code in Python 3
        # Tokenize:
        words = []
        for x in word_re.finditer(s):
            for key, val in x.groupdict().items():
                if val:
                    if not self.preserve_case:
                        val = val.lower()
                    if key in ['WORD', 'ELLIPSIS', 'ELSE']:
                        words.append(val)
                    elif key in ['HASHTAG', 'CASHTAG', 'USER', 'URL']:  # Add more here if needed
                        words.append("{{{}|{}}}".format(key, re.sub(r'^[#@$]+', '', val)))
                    else:
                        words.append("{{{}|{}}}".format(key, val))
        return words

if __name__ == '__main__':
    tok = Tokenizer(preserve_case=False)
    test = ' RT @trader $AAPL 2012 is oooopen to ‘Talk’ about patents with GOOG definitely not the treatment #samsung got:-) heh http://some.site.here.com'
    tokenized = tok.tokenize(test)
    print("\n".join(tokenized))

For the test string above, it outputs:

rt
{USER|trader}
{CASHTAG|aapl}
{NUMBER|2012}
is
oooopen
to
‘
talk
’
about
patents
with
goog
definitely
not
the
treatment
{HASHTAG|samsung}
got
{EMOTICON|:-)}
heh
{URL|http://some.site.here.com}

See the regex demo online.
