How to use specific years for analysis? - r

I want to use a specific 3 years in my analysis, so I created a vector "score1_3y".
When I use "score1" only, it displays correctly.
When I use score1_3y, it displays nothing and shows:
Error in `check_aesthetics()`:
! Aesthetics must be either length 1 or the same as the data (54): x
Run `rlang::last_error()` to see where the error occurred.
Warning message:
`guides(<scale> = FALSE)` is deprecated. Please use `guides(<scale> = "none")` instead.
What is the problem?
Here is the code:
score1_3y <- score1[year == 2020 | year == 2021 | year == 2022]
ggplot(kaoyan, aes(score1_3y, fill = major))+
geom_density(alpha = 0.6)+
facet_wrap(~major)
str(kaoyan)
tibble [54 x 11] (S3: tbl_df/tbl/data.frame)
$ college : chr [1:54] "SUDA" "SUDA" "SUDA" "SUDA" ...
$ applicants: num [1:54] 87 87 87 87 87 87 87 87 87 87 ...
$ admission : num [1:54] 11 11 11 11 11 11 11 11 11 11 ...
$ ratio : num [1:54] 7.91 7.91 7.91 7.91 7.91 ...
$ exemption : num [1:54] 3 3 3 3 3 3 3 3 3 3 ...
$ major : Factor w/ 2 levels "情报学","档案学": 1 1 1 1 1 1 1 2 2 2 ...
$ year : Factor w/ 5 levels "2018","2019",..: 4 4 4 4 4 4 4 4 4 4 ...
$ score1 : num [1:54] 416 410 377 358 358 364 344 403 400 406 ...
$ score2 : num [1:54] 409 408 378 390 387 372 385 401 398 392 ...
$ score3 : num [1:54] 825 818 755 748 745 736 729 804 798 798 ...
$ gjx : num [1:54] 341 341 341 341 341 341 341 341 341 341 ...

The reason is that you are passing the original dataset to ggplot() while the object in aes() is a subset of its rows, which obviously causes a length difference. Instead, just filter or subset the data and use plain 'score1':
library(dplyr)
library(ggplot2)
kaoyan %>%
filter(year %in% 2020:2022) %>%
ggplot(aes(score1, fill = major)) +
geom_density(alpha = 0.6) +
facet_wrap(~major)
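The same fix can be checked in base R: the error disappears once every column is subset together. A minimal sketch with a mock data frame (the kaoyan column names are assumed from the str() output above):

```r
# Mock stand-in for kaoyan (column names assumed from the question's str() output)
kaoyan <- data.frame(
  year   = factor(rep(2018:2022, each = 4)),
  score1 = rnorm(20, mean = 380, sd = 20),
  major  = factor(rep(c("A", "B"), times = 10))
)

# Subsetting one column while passing the full data frame to ggplot() yields
# vectors of different lengths -- the aesthetic-length error from the question.
score1_3y <- kaoyan$score1[kaoyan$year %in% c("2020", "2021", "2022")]
length(score1_3y) == nrow(kaoyan)   # FALSE: the mismatch ggplot complains about

# Fix: filter the rows first, so every column shrinks together
kaoyan_3y <- subset(kaoyan, year %in% c("2020", "2021", "2022"))
# ggplot(kaoyan_3y, aes(score1, fill = major)) + geom_density(alpha = 0.6)
```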

read.csv() parses numeric columns as factors [duplicate]

This question already has answers here:
How to convert a factor to integer\numeric without loss of information?
(12 answers)
Closed 5 years ago.
I have the data frame below, where all the columns are factors that I want to use as numeric columns. I tried different approaches, but the values come out wrong when I try as.numeric(as.character(.)).
The data comes in a semicolon separated format. A subset of data to reproduce the problem is:
rawData <- "Date;Time;Global_active_power;Global_reactive_power;Voltage;Global_intensity;Sub_metering_1;Sub_metering_2;Sub_metering_3
21/12/2006;11:23:00;?;?;?;?;?;?;
21/12/2006;11:24:00;?;?;?;?;?;?;
16/12/2006;17:24:00;4.216;0.418;234.840;18.400;0.000;1.000;17.000
16/12/2006;17:25:00;5.360;0.436;233.630;23.000;0.000;1.000;16.000
16/12/2006;17:26:00;5.374;0.498;233.290;23.000;0.000;2.000;17.000
16/12/2006;17:27:00;5.388;0.502;233.740;23.000;0.000;1.000;17.000
16/12/2006;17:28:00;3.666;0.528;235.680;15.800;0.000;1.000;17.000
16/12/2006;17:29:00;3.520;0.522;235.020;15.000;0.000;2.000;17.000
16/12/2006;17:30:00;3.702;0.520;235.090;15.800;0.000;1.000;17.000
16/12/2006;17:31:00;3.700;0.520;235.220;15.800;0.000;1.000;17.000
16/12/2006;17:32:00;3.668;0.510;233.990;15.800;0.000;1.000;17.000
"
hpc <- read.csv(text=rawData,sep=";")
str(hpc)
When run against the full data file after dropping the date and time variables, the output from str() looks like:
> str(hpc)
'data.frame': 2075259 obs. of 7 variables:
$ Global_active_power : Factor w/ 4187 levels "?","0.076","0.078",..: 2082 2654 2661 2668 1807 1734 1825 1824 1808 1805 ...
$ Global_reactive_power: Factor w/ 533 levels "?","0.000","0.046",..: 189 198 229 231 244 241 240 240 235 235 ...
$ Voltage : Factor w/ 2838 levels "?","223.200",..: 992 871 837 882 1076 1010 1017 1030 907 894 ...
$ Global_intensity : Factor w/ 222 levels "?","0.200","0.400",..: 53 81 81 81 40 36 40 40 40 40 ...
$ Sub_metering_1 : Factor w/ 89 levels "?","0.000","1.000",..: 2 2 2 2 2 2 2 2 2 2 ...
$ Sub_metering_2 : Factor w/ 82 levels "?","0.000","1.000",..: 3 3 14 3 3 14 3 3 3 14 ...
$ Sub_metering_3 : num 17 16 17 17 17 17 17 17 17 16 ...
Can anyone help me get the expected output?
Expected output:
> str(hpc)
'data.frame': 2075259 obs. of 7 variables:
$ Global_active_power : num "?","0.076","0.078",..: 2082 2654 2661 2668 1807 1734 1825 1824 1808 1805 ...
$ Global_reactive_power: num "?","0.000","0.046",..: 189 198 229 231 244 241 240 240 235 235 ...
$ Voltage : num "?","223.200",..: 992 871 837 882 1076 1010 1017 1030 907 894 ...
$ Global_intensity : num "?","0.200","0.400",..: 53 81 81 81 40 36 40 40 40 40 ...
$ Sub_metering_1 : num "?","0.000","1.000",..: 2 2 2 2 2 2 2 2 2 2 ...
$ Sub_metering_2 : num "?","0.000","1.000",..: 3 3 14 3 3 14 3 3 3 14 ...
$ Sub_metering_3 : num 17 16 17 17 17 17 17 17 17 16 ...
I am not able to test on your data frame, but hopefully this will work. I notice in the output of str(hpc) that not all columns are factors. mutate_if applies a function to the columns that meet the requirement of a predicate function.
library(dplyr)
# funs() is deprecated; across() + where() is the current idiom
hpc2 <- hpc %>%
  mutate(across(where(is.factor), ~ as.numeric(as.character(.x))))
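A root-cause alternative, assuming "?" is the only missing-value marker in the file: tell read.csv about it via na.strings, and the affected columns parse as numeric from the start, with no factor conversion needed:

```r
# Small semicolon-separated sample with "?" as the missing-value marker
raw <- "Voltage;Global_intensity
?;?
234.840;18.400
233.630;23.000"

# na.strings = "?" maps the question marks to NA, so read.csv
# detects the columns as numeric instead of factor/character
hpc_num <- read.csv(text = raw, sep = ";", na.strings = "?")
str(hpc_num)   # both columns are num, with NA where "?" was
```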

leaflet, Error: cannot allocate vector of size 177.2 Mb

I have tried everything I can think of to fix this error, but I have not been able to figure it out. I am on a 32-bit machine, trying to build a choropleth. The data file is pretty basic: some municipal IDs with population figures associated with them. The shape file is taken from here: www.ontario.ca/data/municipal-boundaries
library('tmap')
library('leaflet')
library('magrittr')
library('rio')
library('plyr')
library('scales')
library('htmlwidgets')
library('tmaptools')
setwd("C:/Users/rdhasa/desktop")
datafile <- "shapefiles2/Population - 2014.csv"
Pop2014 <- rio::import(datafile)
Pop2014$Population <- as.factor(Pop2014$Population)
str(Pop2014)
'data.frame': 454 obs. of 9 variables:
$ MUNID : int 20002 18000 18013 18001 18005 18017 18009 18039 18020 18029 ...
$ YEAR : int 2015 2015 2015 2015 2015 2015 2015 2015 2015 2015 ...
$ MAH CODE : int 1106 10000 10101 10102 10401 10402 10404 10601 10602 10603 ...
$ V4 : int 1999 1800 1813 1801 1805 1817 1809 1839 1820 1829 ...
$ Municipality: chr "Toronto C" "Durham R" "Oshawa C" "Pickering C" ...
$ Tier : chr "ST" "UT" "LT" "LT" ...
$ A : int 11 11 11 11 11 11 11 11 11 11 ...
$ B : chr "a" "a" "a" "a" ...
$ Population : Factor w/ 438 levels "-","1,006","1,026",..: 160 359 117 432 86 419 97 73 179 171 ...
mnshape <- "shapefiles2/MUNICIPAL_BOUNDARY_LOWER_AND_SINGLE_TIER.shp"
mngeo2 <- read_shape(file=mnshape)
str(mngeo2@data)
'data.frame': 683 obs. of 13 variables:
$ MUNID : int 1002 1002 1002 1009 1009 1009 1016 1016 1016 1026 ...
$ MAH_CODE : int 71616 71616 71616 71618 71618 71618 71614 71614 71614 71613 ...
$ SGC_CODE : int 1005 1005 1005 1011 1011 1011 1020 1020 1020 1030 ...
$ ASSESSMENT: int 101 101 101 406 406 406 506 506 506 511 ...
$ LEGAL_NAME: Factor w/ 414 levels "CITY OF BARRIE",..: 369 369 369 370 370 370 96 96 96 334 ...
$ STATUS : Factor w/ 2 levels "LOWER TIER","SINGLE TIER": 1 1 1 1 1 1 1 1 1 1 ...
$ EXTENT : Factor w/ 3 levels "ISLANDS","LAND",..: 1 2 3 1 2 3 1 2 3 2 ...
$ MSO : Factor w/ 4 levels "CENTRAL","EASTERN",..: 2 2 2 2 2 2 2 2 2 2 ...
$ NAME_PREFI: Factor w/ 8 levels "-","CITY OF",..: 6 6 6 6 6 6 4 4 4 6 ...
$ UPPER_TIER: Factor w/ 30 levels "BRUCE","DUFFERIN",..: 27 27 27 27 27 27 27 27 27 27 ...
$ NAME : Factor w/ 413 levels "ADDINGTON HIGHLANDS",..: 339 339 339 342 342 342 337 337 337 259 ...
$ Shape_Leng: num 0.115 1.622 1.563 0.551 1.499 ...
$ Shape_Area: num 2.32e-05 6.95e-02 7.51e-03 5.63e-04 5.09e-02 ...
mnmap <- append_data(mngeo2, Pop2014, key.shp = "MUNID", key.data="MUNID")
minPct <- min(c(mnmap@data$Population))
maxPct <- max(c(mnmap@data$Population))
paletteLayers <- colorBin(palette = "RdBu", domain = c(minPct, maxPct), bins = c(0, 50000,200000 ,500000, 1000000, 2000000) , pretty=FALSE)
rm(mngeo2)
rm(Pop2014)
rm(mnshape)
rm(datafile)
rm(maxPct)
rm(minPct)
gc()
leaflet(mnmap) %>%
addProviderTiles("CartoDB.Positron") %>%
addPolygons(stroke=TRUE,
smoothFactor = 0.2,
weight = 1,
fillOpacity = .6)
Error: cannot allocate vector of size 177.2 Mb
Is there a way I can save space by simplifying the shape file? If so, how would I go about doing this efficiently?
Thanks
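On the simplification question, one common approach is to reduce the polygon vertex count before calling leaflet(). A sketch only: rmapshaper and the 5% vertex retention are assumptions (tmaptools also offers simplify_shape() for the same operation):

```r
library(rmapshaper)   # assumption: ms_simplify() from the rmapshaper package
library(leaflet)
library(magrittr)

# Drop ~95% of the vertices while preserving shared boundaries (topology);
# keep_shapes = TRUE prevents small municipalities from vanishing entirely.
mnmap_small <- ms_simplify(mnmap, keep = 0.05, keep_shapes = TRUE)

leaflet(mnmap_small) %>%
  addProviderTiles("CartoDB.Positron") %>%
  addPolygons(stroke = TRUE, smoothFactor = 0.2, weight = 1, fillOpacity = 0.6)
```

A simplified layer is also much faster to render in the browser, which matters for an interactive map of 683 polygons.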

non-meaningful operation for factor error when storing a new value in a data frame: R

I am trying to update a value in a data frame but am getting what seems to me a weird error about an operation that I don't think I am using.
Here's a summary of the data:
> str(us.cty2015@data)
'data.frame': 3108 obs. of 15 variables:
$ STATEFP : Factor w/ 52 levels "01","02","04",..: 17 25 33 46 4 14 16 24 36 42 ...
$ COUNTYFP : Factor w/ 325 levels "001","003","005",..: 112 91 67 9 43 81 7 103 72 49 ...
$ COUNTYNS : Factor w/ 3220 levels "00023901","00025441",..: 867 1253 1600 2465 38 577 690 1179 1821 2104 ...
$ AFFGEOID : Factor w/ 3220 levels "0500000US01001",..: 976 1472 1879 2813 144 657 795 1395 2098 2398 ...
$ GEOID : Factor w/ 3220 levels "01001","01003",..: 976 1472 1879 2813 144 657 795 1395 2098 2398 ...
$ NAME : Factor w/ 1910 levels "Abbeville","Acadia",..: 1558 1703 1621 688 856 1075 148 1807 1132 868 ...
$ LSAD : Factor w/ 9 levels "00","03","04",..: 5 5 5 5 5 5 5 5 5 5 ...
$ ALAND : num 1.66e+09 1.10e+09 3.60e+09 2.12e+08 1.50e+09 ...
$ AWATER : num 2.78e+06 5.24e+07 3.50e+07 2.92e+08 8.91e+06 ...
$ t_pop : num 0 0 0 0 0 0 0 0 0 0 ...
$ n_wht : num 0 0 0 0 0 0 0 0 0 0 ...
$ n_free_blk: num 0 0 0 0 0 0 0 0 0 0 ...
$ n_slv : num 0 0 0 0 0 0 0 0 0 0 ...
$ n_blk : num 0 0 0 0 0 0 0 0 0 0 ...
$ n_free : num 0 0 0 0 0 0 0 0 0 0 ...
> str(us.cty1860@data)
'data.frame': 2126 obs. of 29 variables:
$ DECADE : Factor w/ 1 level "1860": 1 1 1 1 1 1 1 1 1 1 ...
$ NHGISNAM : Factor w/ 1236 levels "Abbeville","Accomack",..: 1142 1218 1130 441 812 548 1144 56 50 887 ...
$ NHGISST : Factor w/ 41 levels "010","050","060",..: 32 13 9 36 16 36 16 30 23 39 ...
$ NHGISCTY : Factor w/ 320 levels "0000","0010",..: 142 206 251 187 85 231 131 12 6 161 ...
$ ICPSRST : Factor w/ 37 levels "1","11","12",..: 5 13 21 26 22 26 22 10 15 17 ...
$ ICPSRCTY : Factor w/ 273 levels "10","1010","1015",..: 25 93 146 72 247 122 12 10 228 45 ...
$ ICPSRNAM : Factor w/ 1200 levels "ABBEVILLE","ACCOMACK",..: 1108 1184 1097 432 791 535 1110 55 49 860 ...
$ STATENAM : Factor w/ 41 levels "Alabama","Arkansas",..: 32 13 9 36 16 36 16 30 23 39 ...
$ ICPSRSTI : int 14 31 44 49 45 49 45 24 34 40 ...
$ ICPSRCTYI : int 1210 1970 2910 1810 710 2450 1130 110 50 1450 ...
$ ICPSRFIP : num 0 0 0 0 0 0 0 0 0 0 ...
$ STATE : Factor w/ 41 levels "010","050","060",..: 32 13 9 36 16 36 16 30 23 39 ...
$ COUNTY : Factor w/ 320 levels "0000","0010",..: 142 206 251 187 85 231 131 12 6 161 ...
$ PID : num 1538 735 306 1698 335 ...
$ X_CENTROID : num 1348469 184343 1086494 -62424 585888 ...
$ Y_CENTROID : num 556680 588278 -229809 -433290 -816852 ...
$ GISJOIN : Factor w/ 2126 levels "G0100010","G0100030",..: 1585 627 319 1769 805 1788 823 1425 1079 2006 ...
$ GISJOIN2 : Factor w/ 2126 levels "0100010","0100030",..: 1585 627 319 1769 805 1788 823 1425 1079 2006 ...
$ SHAPE_AREA : num 2.35e+09 1.51e+09 8.52e+08 2.54e+09 6.26e+08 ...
$ SHAPE_LEN : num 235777 155261 166065 242608 260615 ...
$ t_pop : int 25043 653 4413 8184 174491 1995 4324 17187 4649 8392 ...
$ n_wht : int 24974 653 4295 6892 149063 1684 3001 17123 4578 2580 ...
$ n_free_blk : int 69 0 2 0 10939 2 7 64 12 409 ...
$ n_slv : int 0 0 116 1292 14484 309 1316 0 59 5403 ...
$ n_blk : int 69 0 118 1292 25423 311 1323 64 71 5812 ...
$ n_free : num 25043 653 4297 6892 160007 ...
$ frac_free : num 1 1 0.974 0.842 0.917 ...
$ frac_free_blk: num 1 NA 0.0169 0 0.4303 ...
$ frac_slv : num 0 0 0.0263 0.1579 0.083 ...
> str(overlap)
'data.frame': 15266 obs. of 7 variables:
$ cty2015 : Factor w/ 3108 levels "0","1","10","100",..: 1 1 2 2 2 2 2 1082 1082 1082 ...
$ cty1860 : Factor w/ 2126 levels "0","1","10","100",..: 1047 1012 1296 1963 2033 2058 2065 736 1413 1569 ...
$ area_inter : num 1.66e+09 2.32e+05 9.81e+04 1.07e+09 7.67e+07 ...
$ area1860 : num 1.64e+11 1.81e+11 1.54e+09 2.91e+09 2.32e+09 ...
$ frac_1860 : num 1.01e-02 1.28e-06 6.35e-05 3.67e-01 3.30e-02 ...
$ sum_frac_1860 : num 1 1 1 1 1 ...
$ scaled_frac_1860: num 1.01e-02 1.28e-06 6.35e-05 3.67e-01 3.30e-02 ...
I am trying to multiply a vector of variables vars <- c("t_pop", "n_wht", "n_free_blk", "n_slv", "n_blk", "n_free") in the us.cty1860@data data frame by the scalar overlap$scaled_frac_1860[i], then add it to the same vector of variables in the us.cty2015@data data frame, and finally overwrite the variables in us.cty2015@data.
When I make the following call, I get an error that seems to say I am performing invalid operations on factors, which is not the case (you can confirm this from the str output).
> us.cty2015@data[overlap$cty2015[1], vars] <- us.cty2015@data[overlap$cty2015[1], vars] + (overlap$scaled_frac_1860[1] * us.cty1860@data[overlap$cty1860[1], vars])
Error in Summary.factor(1L, na.rm = FALSE) :
‘max’ not meaningful for factors
In addition: Warning message:
In Ops.factor(i, 0L) : ‘>=’ not meaningful for factors
However, when I don't attempt to overwrite the old value, the operation works fine.
> us.cty2015@data[overlap$cty2015[1], vars] + (overlap$scaled_frac_1860[1] * us.cty1860@data[overlap$cty1860[1], vars])
t_pop n_wht n_free_blk n_slv n_blk n_free
0 118.3889 113.6468 0.1317233 4.610316 4.742039 113.7785
I'm sure there are better ways of accomplishing what I am trying to do, but does anyone have any idea what is going on?
Edit:
I am using the following libraries: rgdal, rgeos, and maptools.
All the data/objects come from the NHGIS shapefiles for 1860 and 2015 United States counties.
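No answer is recorded here, but the traceback points at the row index rather than at vars: overlap$cty2015 is a factor, and the assignment method `[<-.data.frame` compares the row subscript with >= and takes max() of it, neither of which is defined for factors (plain reading with `[` tolerates a factor index by falling back to its integer codes, which is why the second call succeeds). A minimal reproduction with a toy data frame:

```r
df <- data.frame(t_pop = c(10, 20, 30))

i <- factor("2", levels = c("0", "1", "2"))  # stands in for overlap$cty2015[1]

df[i, "t_pop"]            # reading works: the factor index uses its codes
# df[i, "t_pop"] <- 99    # assignment fails: 'max' not meaningful for factors

# Fix: convert the index explicitly before assigning. as.character() matches
# row names; use as.integer(as.character()) instead if the factor labels are
# themselves meant to be row numbers -- which one is right depends on intent.
df[as.character(i), "t_pop"] <- 99
df
```

Note that the two conversions can pick different rows (codes vs. labels), so it is worth checking which one reproduces the value you got from the read-only expression.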

How to deal with "rank-deficient fit may be misleading" in R?

I'm trying to predict the values of a test data set based on a train data set. It predicts values (no errors), but the predictions deviate A LOT from the original values, even predicting values around -356 although none of the original values exceeds 200 (and there are no negative values). The warning is bugging me, as I think the values deviate so much because of it.
Warning message:
In predict.lm(fit2, data_test) :
prediction from a rank-deficient fit may be misleading
Is there any way I can get rid of this warning? The code is simple:
fit2 <- lm(runs~., data=train_data)
prediction <- predict(fit2, data_test)
prediction
I searched a lot but, to be honest, I couldn't understand much about this error.
Here is str() of the test and train data sets, in case someone needs them:
> str(train_data)
'data.frame': 36 obs. of 28 variables:
$ matchid : int 57 58 55 56 53 54 51 52 45 46 ...
$ TeamName : chr "South Africa" "West Indies" "South Africa" "West Indies" ...
$ Opp_TeamName : chr "West Indies" "South Africa" "West Indies" "South Africa" ...
$ TeamRank : int 4 3 4 3 4 3 10 7 5 1 ...
$ Opp_TeamRank : int 3 4 3 4 3 4 7 10 1 5 ...
$ Team_Top10RankingBatsman : int 0 1 0 1 0 1 0 0 2 2 ...
$ Team_Top50RankingBatsman : int 4 6 4 6 4 6 3 5 4 3 ...
$ Team_Top100RankingBatsman: int 6 8 6 8 6 8 7 7 7 6 ...
$ Opp_Top10RankingBatsman : int 1 0 1 0 1 0 0 0 2 2 ...
$ Opp_Top50RankingBatsman : int 6 4 6 4 6 4 5 3 3 4 ...
$ Opp_Top100RankingBatsman : int 8 6 8 6 8 6 7 7 6 7 ...
$ InningType : chr "1st innings" "2nd innings" "1st innings" "2nd innings" ...
$ Runs_OverAll : num 361 705 348 630 347 ...
$ AVG_Overall : num 27.2 20 23.3 19.1 24 ...
$ SR_Overall : num 128 121 120 118 118 ...
$ Runs_Last10Matches : num 118.5 71 102.1 71 78.6 ...
$ AVG_Last10Matches : num 23.7 20.4 20.9 20.4 23.2 ...
$ SR_Last10Matches : num 120 106 114 106 116 ...
$ Runs_BatingFirst : num 236 459 230 394 203 ...
$ AVG_BatingFirst : num 30.6 23.2 24 21.2 27.1 ...
$ SR_BatingFirst : num 127 136 123 125 118 ...
$ Runs_BatingSecond : num 124 262 119 232 144 ...
$ AVG_BatingSecond : num 25.5 18.3 22.8 17.8 22.8 ...
$ SR_BatingSecond : num 125 118 112 117 114 ...
$ Runs_AgainstTeam2 : num 88.3 118.3 76.3 103.9 49.3 ...
$ AVG_AgainstTeam2 : num 28.2 23 24.7 22.1 16.4 ...
$ SR_AgainstTeam2 : num 139 127 131 128 111 ...
$ runs : int 165 168 231 236 195 126 143 141 191 135 ...
> str(data_test)
'data.frame': 34 obs. of 28 variables:
$ matchid : int 59 60 61 62 63 64 65 66 69 70 ...
$ TeamName : chr "India" "West Indies" "England" "New Zealand" ...
$ Opp_TeamName : chr "West Indies" "India" "New Zealand" "England" ...
$ TeamRank : int 2 3 5 1 4 8 6 2 10 1 ...
$ Opp_TeamRank : int 3 2 1 5 8 4 2 6 1 10 ...
$ Team_Top10RankingBatsman : int 1 1 2 2 0 0 1 1 0 2 ...
$ Team_Top50RankingBatsman : int 5 6 4 3 4 2 5 5 3 3 ...
$ Team_Top100RankingBatsman: int 7 8 7 6 6 5 7 7 7 6 ...
$ Opp_Top10RankingBatsman : int 1 1 2 2 0 0 1 1 2 0 ...
$ Opp_Top50RankingBatsman : int 6 5 3 4 2 4 5 5 3 3 ...
$ Opp_Top100RankingBatsman : int 8 7 6 7 5 6 7 7 6 7 ...
$ InningType : chr "1st innings" "2nd innings" "2nd innings" "1st innings" ...
$ Runs_OverAll : num 582 618 470 602 509 ...
$ AVG_Overall : num 25 21.8 20.3 20.7 19.6 ...
$ SR_Overall : num 113 120 123 120 112 ...
$ Runs_Last10Matches : num 182 107 117 167 140 ...
$ AVG_Last10Matches : num 37.1 43.8 21 24.9 27.3 ...
$ SR_Last10Matches : num 111 153 122 141 120 ...
$ Runs_BatingFirst : num 319 314 271 345 294 ...
$ AVG_BatingFirst : num 23.6 17.8 20.6 20.3 19.5 ...
$ SR_BatingFirst : num 116.9 98.5 118 124.3 115.8 ...
$ Runs_BatingSecond : num 264 282 304 256 186 ...
$ AVG_BatingSecond : num 28 23.7 31.9 21.6 16.5 ...
$ SR_BatingSecond : num 96.5 133.9 129.4 112 99.5 ...
$ Runs_AgainstTeam2 : num 98.2 95.2 106.9 75.4 88.5 ...
$ AVG_AgainstTeam2 : num 45.3 42.7 38.1 17.7 27.1 ...
$ SR_AgainstTeam2 : num 125 138 152 110 122 ...
$ runs : int 192 196 159 153 122 120 160 161 70 145 ...
In simple words, how can I get rid of this warning so that it doesn't affect my predictions? For reference, the fitted coefficients (note the NA entries):
(Intercept) matchid TeamNameBangladesh
1699.98232628 -0.06793787 59.29445330
TeamNameEngland TeamNameIndia TeamNameNew Zealand
347.33030177 -499.40074338 -179.19192936
TeamNamePakistan TeamNameSouth Africa TeamNameSri Lanka
-272.71610614 -3.54867488 -45.27920191
TeamNameWest Indies Opp_TeamNameBangladesh Opp_TeamNameEngland
-345.54349798 135.05901017 108.04227770
Opp_TeamNameIndia Opp_TeamNameNew Zealand Opp_TeamNamePakistan
-162.24418387 -60.55364436 -114.74599364
Opp_TeamNameSouth Africa Opp_TeamNameSri Lanka Opp_TeamNameWest Indies
196.90856999 150.70170068 -6.88997714
TeamRank Opp_TeamRank Team_Top10RankingBatsman
NA NA NA
Team_Top50RankingBatsman Team_Top100RankingBatsman Opp_Top10RankingBatsman
NA NA NA
Opp_Top50RankingBatsman Opp_Top100RankingBatsman InningType2nd innings
NA NA 24.24029455
Runs_OverAll AVG_Overall SR_Overall
-0.59935875 20.12721378 -13.60151334
Runs_Last10Matches AVG_Last10Matches SR_Last10Matches
-1.92526750 9.24182916 1.23914363
Runs_BatingFirst AVG_BatingFirst SR_BatingFirst
1.41001672 -9.88582744 -6.69780509
Runs_BatingSecond AVG_BatingSecond SR_BatingSecond
-0.90038727 -7.11580086 3.20915976
Runs_AgainstTeam2 AVG_AgainstTeam2 SR_AgainstTeam2
3.35936312 -5.90267210 2.36899131
You can have a look at this detailed discussion:
predict.lm() in a loop. warning: prediction from a rank-deficient fit may be misleading
In general, multicollinearity leads to a rank-deficient model matrix: some predictors are exact linear combinations of others, which is why their coefficients come back NA and out-of-sample predictions become unreliable.
You can try applying PCA to tackle the multicollinearity and then fit the regression on the components afterwards.
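The NA coefficients in the output above are the symptom: lm() cannot estimate predictors that duplicate information already in the model. A small base-R illustration (toy data, not the cricket set) of how the deficiency shows up and how alias() names the redundant column:

```r
set.seed(1)
d <- data.frame(a = rnorm(10), b = rnorm(10))
d$c <- d$a + d$b                 # exactly collinear: c carries no new information
d$y <- 2 * d$a - d$b + rnorm(10)

fit <- lm(y ~ a + b + c, data = d)
coef(fit)                         # c comes back NA -> the fit is rank-deficient
alias(fit)                        # reports c = a + b, i.e. which column to drop

# Refit without the redundant column; predict() no longer warns
fit_ok <- lm(y ~ a + b, data = d)
```

Dropping (or combining) the aliased columns before fitting addresses the cause of the warning instead of silencing it.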

"length of 'dimnames' [1] not equal to array extent" error in linear regression summary in r

I'm running a straightforward linear regression model fit on the following dataframe:
> str(model_data_rev)
'data.frame': 128857 obs. of 12 variables:
$ ENTRY_4 : num 186 218 208 235 256 447 471 191 207 250 ...
$ ENTRY_8 : num 724 769 791 777 707 237 236 726 773 773 ...
$ ENTRY_12: num 2853 2989 3174 3027 3028 ...
$ ENTRY_16: num 2858 3028 3075 2992 3419 ...
$ ENTRY_20: num 7260 7188 7587 7560 7165 ...
$ EXIT_4 : num 70 82 105 114 118 204 202 99 73 95 ...
$ EXIT_8 : num 1501 1631 1594 1576 1536 ...
$ EXIT_12 : num 3862 3923 4158 3970 3895 ...
$ EXIT_16 : num 1559 1539 1737 1681 1795 ...
$ EXIT_20 : num 2145 2310 2217 2330 2291 ...
$ DAY : Ord.factor w/ 7 levels "Sun"<"Mon"<"Tues"<..: 2 3 4 5 6 7 1 2 3 4 ...
$ MONTH : Ord.factor w/ 12 levels "Jan"<"Feb"<"Mar"<..: 3 3 3 3 3 3 3 3 3 3 ...
I split the data into training and test sets as follows, using the caret package:
split <- createDataPartition(y = model_data_rev$EXIT_20, p = 0.7, list = FALSE)
d_training = model_data_rev[split,]
d_test = model_data_rev[-split,]
I train the model using the train function in the caret package:
ctrl <- trainControl(method = 'cv', number = 5)
lmCVFit <- train(EXIT_20 ~ ., data = d_training, method = 'lm', trControl = ctrl, metric = 'Rsquared')
summary(lmCVFit)
When I run summary(lmCVFit) I get the following error:
Error in summary.lm(object$finalModel, ...) :
length of 'dimnames' [1] not equal to array extent
In addition: Warning message:
In cbind(est, se, tval, 2 * pt(abs(tval), rdf, lower.tail = FALSE)) :
number of rows of result is not a multiple of vector length (arg 1)
I thought it might be related to my initial data frame above. Specifically, I thought it could have to do with the factor variables, so I cut them out (not shown), ran everything again, and got the same error.
I also ran the regression without CV using the lm function in R and got the same error when I ran summary().
Has anyone seen this, and can anyone help? I can't find anything online that speaks to this error in the context of regression.
Thanks in advance.
EDIT
I modified the ordinal variables to standard (unordered) factors. The structure now looks like this:
> str(model_data_rev)
'data.frame': 128857 obs. of 12 variables:
$ ENTRY_4 : num 186 218 208 235 256 447 471 191 207 250 ...
$ ENTRY_8 : num 724 769 791 777 707 237 236 726 773 773 ...
$ ENTRY_12: num 2853 2989 3174 3027 3028 ...
$ ENTRY_16: num 2858 3028 3075 2992 3419 ...
$ ENTRY_20: num 7260 7188 7587 7560 7165 ...
$ EXIT_4 : num 70 82 105 114 118 204 202 99 73 95 ...
$ EXIT_8 : num 1501 1631 1594 1576 1536 ...
$ EXIT_12 : num 3862 3923 4158 3970 3895 ...
$ EXIT_16 : num 1559 1539 1737 1681 1795 ...
$ EXIT_20 : num 2145 2310 2217 2330 2291 ...
$ DAY : Factor w/ 7 levels "Friday","Monday",..: 2 6 7 5 1 3 4 2 6 7 ...
$ MONTH : Factor w/ 12 levels "April","August",..: 8 8 8 8 8 8 8 8 8 8 ...
I still get the error when running summary() after fitting the model.
It is also important to emphasize that the model fitting works without throwing an error; it is summary() that throws it.
Thanks.
