Counting the number of words in a dictionary file in R

Problem Description

I am reading a dictionary into R via the quanteda package. The package comes preloaded with some great dictionaries, one of which is the Moral Foundations Dictionary that I am interested in. This dictionary has several categories (harm, fairness, ingroup, etc.), which are divided into virtue and vice subcategories.

I would like to count the number of words in each subcategory of each foundation in R. How can I do this?

For a reproducible example: the Moral Foundations Dictionary (labeled data_dictionary_MFD) can be accessed by running library(quanteda.dictionaries).
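A minimal setup sketch (assuming quanteda.dictionaries installs from the kbenoit GitHub repository, which is where it lived at the time of writing):

# remotes::install_github("kbenoit/quanteda.dictionaries")  # assumed install path
library("quanteda")
library("quanteda.dictionaries")

# printing the dictionary shows its keys and the first few values per key
data_dictionary_MFD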

Thanks!

Tags: r, dictionary, word-count, quanteda

Solution


It is not entirely clear what you are looking for, but it probably comes down to terminology. quanteda dictionaries use the term "key" for the canonical category (in R, the name of a list element) and "value" for the patterns used to match words when counting occurrences of each key.
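As a toy illustration of this terminology (the keys and words below are invented, not taken from the MFD):

library("quanteda")

# two "keys" (happy, sad), each holding the "values" (patterns) matched in text
toy_dict <- dictionary(list(
  happy = c("joy*", "delight*"),  # values may be glob patterns
  sad = c("grief", "mourn*")
))
lengths(toy_dict)  # number of values under each key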

The MFD has two sets of "keys": the moral "foundations" such as care, fairness, etc., and the "valence" represented by the "vice" and "virtue" categories for each foundation. However, as documented in ?quanteda.dictionaries::data_dictionary_MFD (at least as of quanteda.dictionaries v0.22), the dictionary is flattened to a single level.

We can see this, and count the values within each dictionary "key" (here combining foundation and valence), as follows:

library("quanteda")
## Package version: 1.5.2

data(data_dictionary_MFD, package = "quanteda.dictionaries")

# number of "words" in each MFD dictionary key
lengths(data_dictionary_MFD)
##      care.virtue        care.vice  fairness.virtue    fairness.vice 
##              182              288              115              236 
##   loyalty.virtue     loyalty.vice authority.virtue   authority.vice 
##              142               49              301              130 
##  sanctity.virtue    sanctity.vice 
##              272              388
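Summing these gives the total number of values across the whole dictionary (a quick addition derived from the same counts):

# total number of values (word patterns) across all keys
sum(lengths(data_dictionary_MFD))
## [1] 2103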

# first 5 values in each dictionary key
lapply(data_dictionary_MFD, head, 5)
## $care.virtue
## [1] "alleviate"   "alleviated"  "alleviates"  "alleviating" "alleviation"
## 
## $care.vice
## [1] "abused"  "abuser"  "abusers" "abuses"  "abusing"
## 
## $fairness.virtue
## [1] "avenge"   "avenged"  "avenger"  "avengers" "avenges" 
## 
## $fairness.vice
## [1] "am partial"  "bamboozle"   "bamboozled"  "bamboozles"  "bamboozling"
## 
## $loyalty.virtue
## [1] "all for one" "allegiance"  "allegiances" "allegiant"   "allied"     
## 
## $loyalty.vice
## [1] "against us"  "apostate"    "apostates"   "backstab"    "backstabbed"
## 
## $authority.virtue
## [1] "acquiesce"   "acquiesced"  "acquiescent" "acquiesces"  "acquiescing"
## 
## $authority.vice
## [1] "anarchist"   "anarchistic" "anarchists"  "anarchy"     "apostate"   
## 
## $sanctity.virtue
## [1] "abstinance" "abstinence" "allah"      "almighty"   "angel"     
## 
## $sanctity.vice
## [1] "abhor"    "abhored"  "abhors"   "addict"   "addicted"
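If a small table is handier than a named vector, base R can assemble one from the same information (a convenience sketch, not part of the original answer):

# one row per key, with the number of values it contains
data.frame(
  key = names(data_dictionary_MFD),
  n_values = unname(lengths(data_dictionary_MFD))
)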

To apply this to count the words in a text matching each "key" (the combination of foundation and valence), we can create a dfm and then use dfm_lookup():

# number of words in a text matching the MFD dictionary
dfm(data_corpus_inaugural) %>%
  dfm_lookup(dictionary = data_dictionary_MFD) %>%
  tail()
## Document-feature matrix of: 6 documents, 10 features (10.0% sparse).
## 6 x 10 sparse Matrix of class "dfm"
##               features
## docs           care.virtue care.vice fairness.virtue fairness.vice
##   1997-Clinton           8         4               6             2
##   2001-Bush             21         8              11             1
##   2005-Bush             14        12              16             4
##   2009-Obama            18         6               8             1
##   2013-Obama            14         6              15             2
##   2017-Trump            16         7               2             4
##               features
## docs           loyalty.virtue loyalty.vice authority.virtue authority.vice
##   1997-Clinton             37            0                3              0
##   2001-Bush                36            1               18              2
##   2005-Bush                38            3               33              4
##   2009-Obama               33            1               18              2
##   2013-Obama               39            2               12              0
##   2017-Trump               44            0               20              1
##               features
## docs           sanctity.virtue sanctity.vice
##   1997-Clinton              14             8
##   2001-Bush                 21             1
##   2005-Bush                 16             0
##   2009-Obama                18             3
##   2013-Obama                14             0
##   2017-Trump                13             3
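If the counts are needed outside quanteda (plotting, modelling, etc.), convert() will turn the looked-up dfm into an ordinary data frame:

# dictionary counts as a plain data.frame, one row per document
mfd_counts <- dfm(data_corpus_inaugural) %>%
  dfm_lookup(dictionary = data_dictionary_MFD) %>%
  convert(to = "data.frame")
head(mfd_counts[, 1:4])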

However, there is a better way that exploits the nested structure of the MFD, although we first need to modify the dictionary object to make it nested. As supplied, the MFD is "flattened". We want to unflatten it, so that the foundations form the first-level keys and the valences form the second-level keys. Then, using the levels argument in tokens_lookup() and dfm_lookup(), we will be able to choose the level at which we count matches in a text.

First, recreate the dictionary to make it nested.

# remake the dictionary into nested categories of foundation and valence
data_dictionary_MFDnested <-
  dictionary(list(
    care = list(
      virtue = data_dictionary_MFD[["care.virtue"]],
      vice = data_dictionary_MFD[["care.vice"]]
    ),
    fairness = list(
      virtue = data_dictionary_MFD[["fairness.virtue"]],
      vice = data_dictionary_MFD[["fairness.vice"]]
    ),
    loyalty = list(
      virtue = data_dictionary_MFD[["loyalty.virtue"]],
      vice = data_dictionary_MFD[["loyalty.vice"]]
    ),
    authority = list(
      virtue = data_dictionary_MFD[["authority.virtue"]],
      vice = data_dictionary_MFD[["authority.vice"]]
    ),
    sanctity = list(
      virtue = data_dictionary_MFD[["sanctity.virtue"]],
      vice = data_dictionary_MFD[["sanctity.vice"]]
    )
  ))
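The same nesting can also be built programmatically from the key names rather than by hand; a sketch that assumes every key follows the foundation.valence naming pattern:

# split "care.virtue" etc. into the foundation part
parts <- strsplit(names(data_dictionary_MFD), ".", fixed = TRUE)
foundations <- unique(vapply(parts, `[`, character(1), 1))

# rebuild the two-level dictionary: foundation -> virtue/vice -> values
data_dictionary_MFDnested2 <- dictionary(
  setNames(lapply(foundations, function(f) {
    list(
      virtue = data_dictionary_MFD[[paste(f, "virtue", sep = ".")]],
      vice = data_dictionary_MFD[[paste(f, "vice", sep = ".")]]
    )
  }), foundations)
)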

Inspecting the nested dictionary, we can see its structure:

lengths(data_dictionary_MFDnested)
##      care  fairness   loyalty authority  sanctity 
##         2         2         2         2         2
lapply(data_dictionary_MFDnested, lengths)
## $care
## virtue   vice 
##    182    288 
## 
## $fairness
## virtue   vice 
##    115    236 
## 
## $loyalty
## virtue   vice 
##    142     49 
## 
## $authority
## virtue   vice 
##    301    130 
## 
## $sanctity
## virtue   vice 
##    272    388
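To answer the original question directly, the per-foundation totals can also be summed across valences (a small convenience on top of the above; totals follow from the counts shown):

# total number of values per foundation, virtue and vice combined
sapply(data_dictionary_MFDnested, function(x) sum(lengths(x)))
##      care  fairness   loyalty authority  sanctity 
##       470       351       191       431       660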

Now we can apply it to our texts:

# now apply it to texts
dfm(data_corpus_inaugural) %>%
  dfm_lookup(dictionary = data_dictionary_MFDnested, levels = 1) %>%
  tail()
## Document-feature matrix of: 6 documents, 5 features (0.0% sparse).
## 6 x 5 sparse Matrix of class "dfm"
##               features
## docs           care fairness loyalty authority sanctity
##   1997-Clinton   12        8      37         3       22
##   2001-Bush      29       12      37        20       22
##   2005-Bush      26       20      41        37       16
##   2009-Obama     24        9      34        20       21
##   2013-Obama     20       17      41        12       14
##   2017-Trump     23        6      44        21       16

dfm(data_corpus_inaugural) %>%
  dfm_lookup(dictionary = data_dictionary_MFDnested, levels = 2) %>%
  tail()
## Document-feature matrix of: 6 documents, 2 features (0.0% sparse).
## 6 x 2 sparse Matrix of class "dfm"
##               features
## docs           virtue vice
##   1997-Clinton     68   14
##   2001-Bush       107   13
##   2005-Bush       117   23
##   2009-Obama       95   13
##   2013-Obama       94   10
##   2017-Trump       95   15
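Raw counts scale with the length of each speech; if relative usage is wanted instead, the looked-up dfm can be weighted. Note that the proportions here are over dictionary matches only, since the lookup discards non-dictionary words:

# share of each valence among all dictionary matches per speech
dfm(data_corpus_inaugural) %>%
  dfm_lookup(dictionary = data_dictionary_MFDnested, levels = 2) %>%
  dfm_weight(scheme = "prop") %>%
  tail()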

Specifying both levels (or the default, levels = 1:5) matches what we got initially with the flattened dictionary:

dfm(data_corpus_inaugural) %>%
  dfm_lookup(dictionary = data_dictionary_MFDnested, levels = 1:2) %>%
  tail()
## Document-feature matrix of: 6 documents, 10 features (10.0% sparse).
## 6 x 10 sparse Matrix of class "dfm"
##               features
## docs           care.virtue care.vice fairness.virtue fairness.vice
##   1997-Clinton           8         4               6             2
##   2001-Bush             21         8              11             1
##   2005-Bush             14        12              16             4
##   2009-Obama            18         6               8             1
##   2013-Obama            14         6              15             2
##   2017-Trump            16         7               2             4
##               features
## docs           loyalty.virtue loyalty.vice authority.virtue authority.vice
##   1997-Clinton             37            0                3              0
##   2001-Bush                36            1               18              2
##   2005-Bush                38            3               33              4
##   2009-Obama               33            1               18              2
##   2013-Obama               39            2               12              0
##   2017-Trump               44            0               20              1
##               features
## docs           sanctity.virtue sanctity.vice
##   1997-Clinton              14             8
##   2001-Bush                 21             1
##   2005-Bush                 16             0
##   2009-Obama                18             3
##   2013-Obama                14             0
##   2017-Trump                13             3
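One caveat worth noting: some MFD values are multi-word phrases (e.g. "all for one", "am partial"), which a dfm of single tokens can never match. Running the lookup on tokens instead catches those phrases, and the levels argument works the same way there:

# tokens_lookup() can match multi-word values that dfm_lookup() on unigrams misses
tokens(data_corpus_inaugural) %>%
  tokens_lookup(dictionary = data_dictionary_MFDnested, levels = 1) %>%
  dfm() %>%
  tail()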
