Why is lm() not showing some of the output in R?

Problem

I would like to know why lm() reports 5 coefs not defined because of singularities and then gives NA for everything related to those 5 coefficients in the summary output.

Note that all of my predictor variables are categorical.

Is there anything wrong with my data regarding these 5 coefficients, or with my code? How might I fix this?

d <- read.csv("https://raw.githubusercontent.com/rnorouzian/m/master/v.csv", h = T) # Data

nms <- c("Age","genre","Length","cf.training","error.type","cf.scope","cf.type","cf.revision")

d[nms] <- lapply(d[nms], as.factor) # make factor

vv <- lm(dint~Age+genre+Length+cf.training+error.type+cf.scope+cf.type+cf.revision, data = d)

summary(vv) 

First 6 rows of the output:

     Coefficients: (5 not defined because of singularities)
              Estimate Std. Error t value Pr(>|t|)    
(Intercept)    0.17835    0.63573   0.281 0.779330    
Age1          -0.04576    0.86803  -0.053 0.958010    
Age2           0.46431    0.87686   0.530 0.596990    
Age99         -1.64099    1.04830  -1.565 0.118949    
genre2         1.57015    0.55699   2.819 0.005263 ** 
genre4              NA         NA      NA       NA    ## For example, here everything is NA -- and there are 4 more rows like this!

Tags: r, dataframe, regression

Solution


As others have pointed out, one problem is that you appear to have multicollinearity. Another is that there are missing values in your dataset; those missing values should probably be removed. As for the correlated variables, you should inspect your data to identify the collinearity and remove it. Deciding which variables to drop and which to keep is quite domain-specific. However, if you prefer, you can instead use regularization and fit a model while keeping all the variables. This also lets you fit a model when n (the number of samples) is smaller than p (the number of predictors).
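Before dropping anything, you can also ask the fitted model itself which coefficients are aliased. A minimal sketch using base R's alias() on the vv model fitted above (no extra packages needed):

```r
## alias() reports each NA coefficient as an exact linear
## combination of the other columns of the model matrix,
## which tells you which terms are redundant.
alias(vv)

## The rank deficiency is also visible in the model matrix:
## the difference below equals the number of aliased coefficients.
X <- model.matrix(vv)
ncol(X) - qr(X)$rank
```

This pinpoints exactly which dummy-coded levels are linear combinations of the others, which is often more direct than eyeballing a correlation heatmap.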

Below I show code demonstrating how to examine the correlation structure in your data and identify which variables are most correlated (thanks to this answer). I have also included an example of fitting such a model with L2 regularization (known as ridge regression).

d <- read.csv("https://raw.githubusercontent.com/rnorouzian/m/master/v.csv", h = T) # Data

nms <- c("Age","genre","Length","cf.training","error.type","cf.scope","cf.type","cf.revision")

d[nms] <- lapply(d[nms], as.factor) # make factor

vv <- lm(dint~Age+genre+Length+cf.training+error.type+cf.scope+cf.type+cf.revision, data = d)


df <- d
df[] <- lapply(df, as.numeric)
cor_mat <- cor(as.matrix(df), use = "complete.obs")

library("gplots")
heatmap.2(cor_mat, trace = "none")

## https://stackoverflow.com/questions/22282531/how-to-compute-correlations-between-all-columns-in-r-and-detect-highly-correlate
library("tibble")
library("dplyr")
library("tidyr")

d2 <- df %>% 
  as.matrix() %>%
  cor(use = "complete.obs") %>%
  ## Set diag (a vs a) to NA, then remove
  (function(x) {
    diag(x) <- NA
    x
  }) %>%
  as.data.frame %>%
  rownames_to_column(var = 'var1') %>%
  gather(var2, value, -var1) %>%
  filter(!is.na(value)) %>%
  ## Sort by decreasing absolute correlation
  arrange(-abs(value))

## 2 pairs of variables are almost exactly correlated!
head(d2)
#>         var1       var2     value
#> 1         id study.name 0.9999430
#> 2 study.name         id 0.9999430
#> 3   Location      timed 0.9994082
#> 4      timed   Location 0.9994082
#> 5        Age   ed.level 0.7425026
#> 6   ed.level        Age 0.7425026
## Remove some variables here, or maybe try regularized regression (see below)
library("glmnet")

## glmnet requires matrix input
X <- d[, c("Age", "genre", "Length", "cf.training", "error.type", "cf.scope", "cf.type", "cf.revision")]
X[] <- lapply(X, as.numeric)
X <- as.matrix(X)
ind_na <- apply(X, 1, function(row) any(is.na(row)))
X <- X[!ind_na, ]
y <- d[!ind_na, "dint"]
fit <- glmnet(
    x = X,
    y = y,
    ## alpha = 0 is ridge regression
    alpha = 0)

plot(fit)
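glmnet fits a whole path of penalties, so you still need to pick a single lambda. The usual way is cross-validation with cv.glmnet(). A minimal sketch, reusing the X and y built above (lambda.min and the 1-SE rule are the two conventional choices):

```r
library("glmnet")

set.seed(1)  # CV fold assignment is random
cv_fit <- cv.glmnet(x = X, y = y, alpha = 0)  # alpha = 0: ridge

plot(cv_fit)                   # CV error as a function of log(lambda)
cv_fit$lambda.min              # lambda that minimizes CV error
coef(cv_fit, s = "lambda.1se") # sparser-penalty coefficients (1-SE rule)
```

The lambda.1se choice trades a little CV accuracy for a more heavily penalized, more stable model, which is often preferable when predictors are strongly correlated.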

Created on 2019-11-08 by the reprex package (v0.3.0)
