Using content_transformer with udpipe_annotate

Problem description

So I just found out that udpipe has a great way of displaying dependencies, and I started playing around with it. The code from the site works perfectly if I run it on a csv file right after importing, without making any changes to it.

My problem starts as soon as I create a corpus and change/remove some words. I'm no expert in R, but I've googled a lot and I can't seem to figure it out.

Here is my code:

library(readr)   # read_delim
library(tm)      # Corpus, tm_map, content_transformer
library(udpipe)  # txt_freq, keywords_*, cooccurrence

txt <- read_delim(fileName, ";", escape_double = FALSE, trim_ws = TRUE)

# Create corpus
docs <- Corpus(VectorSource(txt))
docs <- tm_map(docs, tolower)
docs <- tm_map(docs, removePunctuation)
docs <- tm_map(docs, removeNumbers)
docs <- tm_map(docs, stripWhitespace)
docs <- tm_map(docs, removeWords, stopwords('nl'))
docs <- tm_map(docs, removeWords, myWords())
docs <- tm_map(docs, content_transformer(gsub), pattern = "afspraak|afspraken|afgesproken", replacement = "afspraak")
docs <- tm_map(docs, content_transformer(gsub), pattern = "communcatie|communiceren|communicatie|comminicatie|communiceer|comuniseren|comunuseren|communictatie|comminiceren|comminisarisacie|communcaite", replacement = "communicatie")
docs <- tm_map(docs, content_transformer(gsub), pattern = "contact|kontact|kontakt", replacement = "contact")

comments <- docs

library(lattice)
stats <- txt_freq(x$upos)
stats$key <- factor(stats$key, levels = rev(stats$key))
#barchart(key ~ freq, data = stats, col = "cadetblue", main = "UPOS (Universal Parts of Speech)\n frequency of occurrence", xlab = "Freq")

## NOUNS
stats <- subset(x, upos %in% c("NOUN")) 
stats <- txt_freq(stats$token)
stats$key <- factor(stats$key, levels = rev(stats$key))
barchart(key ~ freq, data = head(stats, 20), col = "cadetblue", main = "Most occurring nouns", xlab = "Freq")

## ADJECTIVES
stats <- subset(x, upos %in% c("ADJ")) 
stats <- txt_freq(stats$token)
stats$key <- factor(stats$key, levels = rev(stats$key))
barchart(key ~ freq, data = head(stats, 20), col = "cadetblue", main = "Most occurring adjectives", xlab = "Freq")

## Using RAKE
stats <- keywords_rake(x = x, term = "lemma", group = "doc_id", relevant = x$upos %in% c("NOUN", "ADJ"))
stats$key <- factor(stats$keyword, levels = rev(stats$keyword))
barchart(key ~ rake, data = head(subset(stats, freq > 3), 20), col = "cadetblue", main = "Keywords identified by RAKE", xlab = "Rake")

## Using Pointwise Mutual Information Collocations
x$word <- tolower(x$token)
stats <- keywords_collocation(x = x, term = "word", group = "doc_id")
stats$key <- factor(stats$keyword, levels = rev(stats$keyword))
barchart(key ~ pmi, data = head(subset(stats, freq > 3), 20), col = "cadetblue", main = "Keywords identified by PMI Collocation", xlab = "PMI (Pointwise Mutual Information)")

## Using a sequence of POS tags (noun phrases / verb phrases)
x$phrase_tag <- as_phrasemachine(x$upos, type = "upos")
stats <- keywords_phrases(x = x$phrase_tag, term = tolower(x$token), pattern = "(A|N)*N(P+D*(A|N)*N)*", is_regex = TRUE, detailed = FALSE)
stats <- subset(stats, ngram > 1 & freq > 3)
stats$key <- factor(stats$keyword, levels = rev(stats$keyword))
barchart(key ~ freq, data = head(stats, 20), col = "cadetblue", main = "Keywords - simple noun phrases", xlab = "Frequency")


cooc <- cooccurrence(x = subset(x, upos %in% c("NOUN", "ADJ")), 
                                         term = "lemma", 
                                         group = c("doc_id", "paragraph_id", "sentence_id"))
head(cooc)
library(igraph)
library(ggraph)
library(ggplot2)
wordnetwork <- head(cooc, 30)
wordnetwork <- graph_from_data_frame(wordnetwork)
ggraph(wordnetwork, layout = "fr") +
    geom_edge_link(aes(width = cooc, edge_alpha = cooc), edge_colour = "pink") +
    geom_node_text(aes(label = name), col = "darkgreen", size = 4) +
    theme_graph(base_family = "Arial Narrow") +
    theme(legend.position = "none") +
    labs(title = "Cooccurrences within sentence", subtitle = "Nouns & Adjectives")

It fails as soon as I turn the imported file into a corpus. Does anyone know how I can still run the tm_map functions and then run the udpipe code afterwards?

Thanks in advance!

Tags: r, tm, udpipe

Solution


There are several ways to get what you want. But since your corpus was created from a vector source, it is just one long input vector. You can easily get back to that vector, so udpipe can take over from there.

In the udpipe example documentation everything is defined as x, so I will do the same. After cleaning your corpus, just do the following:

x <- as.character(docs[1])

The [1] after docs is important, otherwise you get some unwanted extra characters. Once that is done, run the udpipe commands to turn the vector into the data.frame you need.

x <- udpipe_annotate(ud_model, x)
x <- as.data.frame(x)

Another option is to first write the corpus to disk (check ?writeCorpus for more info), then read the cleaned files back in and feed them to udpipe. That is more of a workaround, but it might give you a better workflow.
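A minimal sketch of that workaround, assuming docs is your cleaned tm corpus and ud_model is the loaded Dutch udpipe model from the question (the "cleaned" directory name is made up for the example):

```r
library(tm)
library(udpipe)

# Write each document of the cleaned corpus to its own .txt file
dir.create("cleaned", showWarnings = FALSE)
writeCorpus(docs, path = "cleaned")

# Read the cleaned files back in as a plain character vector
files <- list.files("cleaned", pattern = "\\.txt$", full.names = TRUE)
texts <- vapply(files, function(f) paste(readLines(f), collapse = "\n"),
                character(1))

# Hand the vector over to udpipe
x <- udpipe_annotate(ud_model, x = texts, doc_id = basename(files))
x <- as.data.frame(x)
```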

udpipe also handles punctuation: it goes into a special upos class called PUNCT, with an xpos description (in Dutch, if you use the Dutch model) of Punc|komma or Punc|punt. And if a noun is capitalized, its lemma is still lowercase.
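To see what that means in practice, here is a toy sketch; the data.frame below only mimics a few columns of udpipe output, it is not real model output:

```r
# Fake miniature of the data.frame that as.data.frame(udpipe_annotate(...)) returns
x <- data.frame(token = c("Afspraken", ",", "maken"),
                lemma = c("afspraak", ",", "maken"),
                upos  = c("NOUN", "PUNCT", "VERB"),
                stringsAsFactors = FALSE)

# Punctuation sits in its own upos class, so dropping it is a simple subset,
# and the capitalized noun already has a lowercase lemma
x_no_punct <- subset(x, upos != "PUNCT")
```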

In your case I would just use basic regex options to go through the data instead of using tm. The Dutch stopword list only removes some verbs like "zijn", "worden" and "kunnen", adverbs like "te", and pronouns like "ik" and "we". Since you are only looking at nouns and adjectives, you filter these out in your udpipe code anyway.
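For example, the spelling normalisation from the question can be done with plain gsub on the character vector before annotating, no corpus needed (a sketch; the toy input strings are made up, and only a few of the misspelling patterns are repeated here):

```r
# Toy input standing in for the column read from the csv
txt <- c("De afspraken over communcatie", "Kontakt opnemen")

txt <- tolower(txt)
txt <- gsub("afspraak|afspraken|afgesproken", "afspraak", txt)
txt <- gsub("communcatie|communiceren|communicatie", "communicatie", txt)
txt <- gsub("contact|kontact|kontakt", "contact", txt)
# txt is now c("de afspraak over communicatie", "contact opnemen")
```

The cleaned vector can then go straight into udpipe_annotate.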

