Stanford CoreNLP TokensRegex / Error parsing a .rules file from Python

Problem description

I tried to solve this problem by following this link, but I could not get regexner from the stanfordnlp library to work.

(Note: I am using stanfordnlp library version 0.2.0, Stanford CoreNLP version 3.9.2, and Python 3.7.3.)

So I wanted to try a solution based on TokensRegex instead. As a first attempt, I used the TokensRegex rules file tokenrgxrules.rules from that solution:

ner = { type: "CLASS", value: "edu.stanford.nlp.ling.CoreAnnotations$NamedEntityTagAnnotation" }

$ORGANIZATION_TITLES = "/inc\.|corp\./"
$COMPANY_INDICATOR_WORDS = "/company|corporation/"

ENV.defaults["stage"] = 1
{ pattern: (/works/ /for/ ([{pos: NNP}]+ $ORGANIZATION_TITLES)), action: (Annotate($1, ner, "RULE_FOUND_ORG") ) }

ENV.defaults["stage"] = 2
{ pattern: (([{pos: NNP}]+) /works/ /for/ [{ner: "RULE_FOUND_ORG"}]), action: (Annotate($1, ner, "RULE_FOUND_PERS") ) }

Here is my Python code:

import stanfordnlp
from stanfordnlp.server import CoreNLPClient

# example text
print('---')
print('input text')
print('')
text = "The analysis of shotgun sequencing data from metagenomic mixtures raises complex computational challenges. Part of the difficulty stems from the read length limitation of existing deep DNA sequencing technologies, an issue compounded by the extensive level of homology across viral and bacterial species. Another complication is the divergence of the microbial DNA sequences from the publicly available references. As a consequence, the assignment of a sequencing read to a database organism is often unclear. Lastly, the number of reads originating from a disease causing pathogen can be low (Barzon et al., 2013). The pathogen contribution to the mixture depends on the biological context, the timing of sample extraction and the type of pathogen considered. Therefore, highly sensitive computational approaches are required."
# note: the assignment below overwrites the text above, so only the second sample is annotated
text = "In practice, its scope is broad and includes the analysis of a diverse set of samples such as gut microbiome (Qin et al., 2010), (Minot et al., 2011), environmental (Mizuno et al., 2013) or clinical (Willner et al., 2009), (Negredo et al., 2011), (McMullan et al., 2012) samples."
print(text)

# set up the client
print('---')
print('starting up Java Stanford CoreNLP Server...')
# I am not sure if I can add the tokensregex rules here
prop = {'regexner.mapping': 'rgxrules.txt',
        'tokensregex.rules': 'tokenrgxrules.rules',
        'annotators': 'tokenize,ssplit,pos,lemma,ner,regexner,tokensregex'}

with CoreNLPClient(properties=prop, timeout=100000, memory='16G', be_quiet=False) as client:
    # submit the request to the server
    ann = client.annotate(text)
    # get the first sentence
    sentence = ann.sentence[0]
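A note on the comment in the code above: tokensregex.rules is the correct property name for the tokensregex annotator, but CoreNLP resolves its value as a classpath entry, a plain file path, or a URL. The exception further down complains about "Users/Documents/utils/tokenrgxrules.rules", a path with no leading slash, which is neither on the classpath nor a valid absolute path. A minimal sketch of the properties using an absolute path, assuming the rules file actually lives under /Users/Documents/utils:

import os

# Hypothetical location; point this at wherever the rules file actually lives.
rules_path = os.path.abspath('/Users/Documents/utils/tokenrgxrules.rules')

prop = {'annotators': 'tokenize,ssplit,pos,lemma,ner,tokensregex',
        # an absolute path can be opened directly from the file system
        'tokensregex.rules': rules_path}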

Here is the output I got:

    Starting server with command: java -Xmx16G -cp /Users/stanford-corenlp-full-2018-10-05//* edu.stanford.nlp.pipeline.StanfordCoreNLPServer -port 9000 -timeout 100000 -threads 5 -maxCharLength 100000 -quiet False -serverProperties corenlp_server-f8a9bab3cb0b44da.props -preload tokenize,ssplit,pos,lemma,ner,tokensregex
[main] INFO CoreNLP - --- StanfordCoreNLPServer#main() called ---
[main] INFO CoreNLP - setting default constituency parser
[main] INFO CoreNLP - warning: cannot find edu/stanford/nlp/models/srparser/englishSR.ser.gz
[main] INFO CoreNLP - using: edu/stanford/nlp/models/lexparser/englishPCFG.ser.gz instead
[main] INFO CoreNLP - to use shift reduce parser download English models jar from:
[main] INFO CoreNLP - http://stanfordnlp.github.io/CoreNLP/download.html
[main] INFO CoreNLP -     Threads: 5
[main] INFO edu.stanford.nlp.pipeline.StanfordCoreNLP - Adding annotator tokenize
[main] INFO edu.stanford.nlp.pipeline.StanfordCoreNLP - Adding annotator ssplit
[main] INFO edu.stanford.nlp.pipeline.StanfordCoreNLP - Adding annotator pos
[main] INFO edu.stanford.nlp.tagger.maxent.MaxentTagger - Loading POS tagger from edu/stanford/nlp/models/pos-tagger/english-left3words/english-left3words-distsim.tagger ... done [0.6 sec].
[main] INFO edu.stanford.nlp.pipeline.StanfordCoreNLP - Adding annotator lemma
[main] INFO edu.stanford.nlp.pipeline.StanfordCoreNLP - Adding annotator ner
[main] INFO edu.stanford.nlp.ie.AbstractSequenceClassifier - Loading classifier from edu/stanford/nlp/models/ner/english.all.3class.distsim.crf.ser.gz ... done [1.8 sec].
[main] INFO edu.stanford.nlp.ie.AbstractSequenceClassifier - Loading classifier from edu/stanford/nlp/models/ner/english.muc.7class.distsim.crf.ser.gz ... done [1.1 sec].
[main] INFO edu.stanford.nlp.ie.AbstractSequenceClassifier - Loading classifier from edu/stanford/nlp/models/ner/english.conll.4class.distsim.crf.ser.gz ... done [0.6 sec].
[main] INFO edu.stanford.nlp.time.JollyDayHolidays - Initializing JollyDayHoliday for SUTime from classpath edu/stanford/nlp/models/sutime/jollyday/Holidays_sutime.xml as sutime.binder.1.
[main] INFO edu.stanford.nlp.time.TimeExpressionExtractorImpl - Using following SUTime rules: edu/stanford/nlp/models/sutime/defs.sutime.txt,edu/stanford/nlp/models/sutime/english.sutime.txt,edu/stanford/nlp/models/sutime/english.holidays.sutime.txt
[main] INFO edu.stanford.nlp.pipeline.TokensRegexNERAnnotator - ner.fine.regexner: Read 580704 unique entries out of 581863 from edu/stanford/nlp/models/kbp/english/gazetteers/regexner_caseless.tab, 0 TokensRegex patterns.
[main] INFO edu.stanford.nlp.pipeline.TokensRegexNERAnnotator - ner.fine.regexner: Read 4869 unique entries out of 4869 from edu/stanford/nlp/models/kbp/english/gazetteers/regexner_cased.tab, 0 TokensRegex patterns.
[main] INFO edu.stanford.nlp.pipeline.TokensRegexNERAnnotator - ner.fine.regexner: Read 585573 unique entries from 2 files

Everything works fine until it starts parsing the tokenrgxrules.rules file:

[main] INFO edu.stanford.nlp.pipeline.StanfordCoreNLP - Adding annotator tokensregex
[main] ERROR CoreNLP - Could not pre-load annotators in server; encountered exception:
java.lang.RuntimeException: Error parsing file: Users/Documents/utils/tokenrgxrules.rules
    at edu.stanford.nlp.ling.tokensregex.CoreMapExpressionExtractor.createExtractorFromFiles(CoreMapExpressionExtractor.java:293)
    at edu.stanford.nlp.ling.tokensregex.CoreMapExpressionExtractor.createExtractorFromFiles(CoreMapExpressionExtractor.java:275)
    at edu.stanford.nlp.pipeline.TokensRegexAnnotator.<init>(TokensRegexAnnotator.java:77)
    at edu.stanford.nlp.pipeline.AnnotatorImplementations.tokensregex(AnnotatorImplementations.java:78)
    at edu.stanford.nlp.pipeline.StanfordCoreNLP.lambda$getNamedAnnotators$6(StanfordCoreNLP.java:524)
    at edu.stanford.nlp.pipeline.StanfordCoreNLP.lambda$null$30(StanfordCoreNLP.java:602)
    at edu.stanford.nlp.util.Lazy$3.compute(Lazy.java:126)
    at edu.stanford.nlp.util.Lazy.get(Lazy.java:31)
    at edu.stanford.nlp.pipeline.AnnotatorPool.get(AnnotatorPool.java:149)
    at edu.stanford.nlp.pipeline.StanfordCoreNLP.<init>(StanfordCoreNLP.java:251)
    at edu.stanford.nlp.pipeline.StanfordCoreNLP.<init>(StanfordCoreNLP.java:192)
    at edu.stanford.nlp.pipeline.StanfordCoreNLP.<init>(StanfordCoreNLP.java:188)
    at edu.stanford.nlp.pipeline.StanfordCoreNLPServer.main(StanfordCoreNLPServer.java:1505)
Caused by: java.io.IOException: Unable to open "Users/Documents/utils/tokenrgxrules.rules" as class path, filename or URL
    at edu.stanford.nlp.io.IOUtils.getInputStreamFromURLOrClasspathOrFileSystem(IOUtils.java:480)
    at edu.stanford.nlp.io.IOUtils.readerFromString(IOUtils.java:617)
    at edu.stanford.nlp.ling.tokensregex.CoreMapExpressionExtractor.createExtractorFromFiles(CoreMapExpressionExtractor.java:287)
    ... 12 more
[main] INFO CoreNLP - Starting server...
[main] INFO CoreNLP - StanfordCoreNLPServer listening at /0:0:0:0:0:0:0:0:9000
[pool-1-thread-3] INFO CoreNLP - [/0:0:0:0:0:0:0:1:49907] API call w/annotators tokenize,ssplit,pos,lemma,ner,tokensregex
In practice, its scope is broad and includes the analysis of a diverse set of samples such as gut microbiome (Qin et al., 2010), (Minot et al., 2011), environmental (Mizuno et al., 2013) or clinical (Willner et al., 2009), (Negredo et al., 2011), (McMullan et al., 2012) samples.
[pool-1-thread-3] INFO edu.stanford.nlp.pipeline.StanfordCoreNLP - Adding annotator tokenize
[pool-1-thread-3] INFO edu.stanford.nlp.pipeline.StanfordCoreNLP - Adding annotator ssplit
[pool-1-thread-3] INFO edu.stanford.nlp.pipeline.StanfordCoreNLP - Adding annotator pos
[pool-1-thread-3] INFO edu.stanford.nlp.pipeline.StanfordCoreNLP - Adding annotator lemma
[pool-1-thread-3] INFO edu.stanford.nlp.pipeline.StanfordCoreNLP - Adding annotator ner
[pool-1-thread-3] INFO edu.stanford.nlp.pipeline.StanfordCoreNLP - Adding annotator tokensregex
java.lang.RuntimeException: Error parsing file: Users/Documents/utils/tokenrgxrules.rules
    at edu.stanford.nlp.ling.tokensregex.CoreMapExpressionExtractor.createExtractorFromFiles(CoreMapExpressionExtractor.java:293)
    at edu.stanford.nlp.ling.tokensregex.CoreMapExpressionExtractor.createExtractorFromFiles(CoreMapExpressionExtractor.java:275)
    at edu.stanford.nlp.pipeline.TokensRegexAnnotator.<init>(TokensRegexAnnotator.java:77)
    at edu.stanford.nlp.pipeline.AnnotatorImplementations.tokensregex(AnnotatorImplementations.java:78)
    at edu.stanford.nlp.pipeline.StanfordCoreNLP.lambda$getNamedAnnotators$6(StanfordCoreNLP.java:524)
    at edu.stanford.nlp.pipeline.StanfordCoreNLP.lambda$null$30(StanfordCoreNLP.java:602)
    at edu.stanford.nlp.util.Lazy$3.compute(Lazy.java:126)
    at edu.stanford.nlp.util.Lazy.get(Lazy.java:31)
    at edu.stanford.nlp.pipeline.AnnotatorPool.get(AnnotatorPool.java:149)
    at edu.stanford.nlp.pipeline.StanfordCoreNLP.<init>(StanfordCoreNLP.java:251)
    at edu.stanford.nlp.pipeline.StanfordCoreNLP.<init>(StanfordCoreNLP.java:192)
    at edu.stanford.nlp.pipeline.StanfordCoreNLP.<init>(StanfordCoreNLP.java:188)
    at edu.stanford.nlp.pipeline.StanfordCoreNLPServer.mkStanfordCoreNLP(StanfordCoreNLPServer.java:368)
    at edu.stanford.nlp.pipeline.StanfordCoreNLPServer.access$800(StanfordCoreNLPServer.java:50)
    at edu.stanford.nlp.pipeline.StanfordCoreNLPServer$CoreNLPHandler.handle(StanfordCoreNLPServer.java:855)
    at com.sun.net.httpserver.Filter$Chain.doFilter(Filter.java:79)
    at sun.net.httpserver.AuthFilter.doFilter(AuthFilter.java:83)
    at com.sun.net.httpserver.Filter$Chain.doFilter(Filter.java:82)
    at sun.net.httpserver.ServerImpl$Exchange$LinkHandler.handle(ServerImpl.java:675)
    at com.sun.net.httpserver.Filter$Chain.doFilter(Filter.java:79)
    at sun.net.httpserver.ServerImpl$Exchange.run(ServerImpl.java:647)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.IOException: Unable to open "Users/Documents/utils/tokenrgxrules.rules" as class path, filename or URL
    at edu.stanford.nlp.io.IOUtils.getInputStreamFromURLOrClasspathOrFileSystem(IOUtils.java:480)
    at edu.stanford.nlp.io.IOUtils.readerFromString(IOUtils.java:617)
    at edu.stanford.nlp.ling.tokensregex.CoreMapExpressionExtractor.createExtractorFromFiles(CoreMapExpressionExtractor.java:287)
    ... 23 more
Traceback (most recent call last):
  File "/Users/anaconda3/lib/python3.7/site-packages/stanfordnlp/server/client.py", line 330, in _request
    r.raise_for_status()
  File "/Users/anaconda3/lib/python3.7/site-packages/requests/models.py", line 940, in raise_for_status
    raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http://localhost:9000/?properties=%7B%27outputFormat%27%3A+%27serialized%27%7D

I have searched for hours for a way to fix this but could not find any solution. I also tried the basic_NER code from the official stanfordnlp page, but it gave me the same error. I code in Python, so I cannot test their Java code.
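For what it is worth, single TokensRegex patterns can also be tested from Python without a rules file, since the server exposes a tokensregex endpoint. A minimal sketch, assuming the tokensregex method is available on CoreNLPClient in stanfordnlp 0.2.0 as it is in its successor stanza, and using a made-up example sentence:

from stanfordnlp.server import CoreNLPClient

# Match one TokensRegex pattern server-side and inspect the raw matches.
with CoreNLPClient(properties={'annotators': 'tokenize,ssplit,pos'},
                   timeout=60000, memory='4G') as client:
    matches = client.tokensregex('Joe Smith works for Apple Inc.',
                                 '([{pos: NNP}]+) /works/ /for/ [{pos: NNP}]+')
    print(matches)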

Any help or links would be greatly appreciated.

Thanks in advance.

Tags: python, regex, stanford-nlp, named-entity-recognition

Solution


This issue seems to occur with stanza 1.0.0 and Stanford CoreNLP 3.9.2, and the team is working on a fix. I believe it means that something else is trying to use the same port and the server is failing silently. You should first verify that you can get the server running outside of the Python client; given the error you are seeing, I suspect the server is not even starting.
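To rule out a silent startup failure, start the server by hand and attach the Python client to it. A minimal sketch, assuming the standard server invocation from the CoreNLP documentation and that CoreNLPClient in stanfordnlp 0.2.0 accepts endpoint and start_server=False, as stanza's client does:

from stanfordnlp.server import CoreNLPClient

# First, in a shell, start the server manually from the CoreNLP directory:
#   java -mx4g -cp "*" edu.stanford.nlp.pipeline.StanfordCoreNLPServer -port 9000 -timeout 60000
# If that command fails, the problem is independent of the Python client.

# Attach to the already-running server instead of spawning a new one.
with CoreNLPClient(endpoint='http://localhost:9000', start_server=False) as client:
    ann = client.annotate('Stanford is a university in California.')
    print(ann.sentence[0].token[0].word)

If the manually started server comes up cleanly and this snippet works, the remaining failure is confined to how the rules file path is passed, not to the server itself.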

