Multiple linear regression handling NA - R

I am new to the world of statistics, so simple suggestions will be appreciated.
I have a data frame in R
Ganeeshan
Year General OBC SC ST VI VacancySC VacancyGen VacancyOBC Banks Participated VacancyST VacancyHI
1 2016 52.5 52.5 41.75 31.50 37.5 1338 4500 2319 20 665 154
2 2015 76.0 76.0 50.00 47.75 36.0 1965 6146 3454 23 1050 270
3 2014 82.0 80.0 70.00 56.00 38.0 2496 8212 4482 23 1531 458
4 2013 61.0 60.0 50.00 26.00 27.0 3208 10846 5799 21 1827 458
5 2012 135.0 135.0 127.00 106.00 127.0 3409 11058 6062 21 1886 436
VacancyOC VacancyVI
1 113 102
2 358 242
3 323 321
4 208 390
5 257 345
and want to build a linear model with "General" as the dependent variable. I used the following command:
GaneeshanModel1 <- lm(General ~ ., data = Ganeeshan)
I get "NA" instead of values in the summary of the model:
Call:
lm(formula = General ~ ., data = Ganeeshan)
Residuals:
ALL 5 residuals are 0: no residual degrees of freedom!
Coefficients: (9 not defined because of singularities)
Estimate Std. Error t value Pr(>|t|)
(Intercept) 6566.6562 NA NA NA
Year -3.2497 NA NA NA
OBC 0.5175 NA NA NA
SC -0.2167 NA NA NA
ST 0.6078 NA NA NA
VI NA NA NA NA
VacancySC NA NA NA NA
VacancyGen NA NA NA NA
VacancyOBC NA NA NA NA
`Banks Participated` NA NA NA NA
VacancyST NA NA NA NA
VacancyHI NA NA NA NA
VacancyOC NA NA NA NA
VacancyVI NA NA NA NA
Why am I not getting any values here?

This can happen if you don't do the data preprocessing correctly first. It seems that your 'Banks' column contains missing values (NA); I am not sure whether this is the whole file or whether there are other non-missing values in that column. In general, before using your data, you need to replace the NA (missing) values in your columns with numerical values (usually the mean or median of the column). In R, for your 'Banks' column (assuming it has other non-missing values), you can do it like this:
dataset$Banks = ifelse(is.na(dataset$Banks),
                       ave(dataset$Banks, FUN = function(x) mean(x, na.rm = TRUE)),
                       dataset$Banks)
Otherwise, depending on your data set, if some missing values are represented by a period (or any other non-numeric placeholder), you can import your csv with
dataset = read.csv("data.csv", header = TRUE, na.strings = c(" ", ".", "NA"))
so that the 'period' and 'empty' values become NA on import, and then use the line above to replace the NAs with the mean/median/something else. (Note that na.strings must be passed by name; passing the vector positionally would hand it to the sep argument.)
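Putting it together, a minimal end-to-end sketch (the filename data.csv is an assumption; also note that with only five observations and thirteen predictors the model stays saturated, which is why lm reports "no residual degrees of freedom" and leaves coefficients NA even after imputation):
# read the csv, treating blanks and periods as missing
Ganeeshan <- read.csv("data.csv", header = TRUE,
                      na.strings = c("", " ", ".", "NA"))
# mean-impute every numeric column that still contains NAs
for (col in names(Ganeeshan)) {
  if (is.numeric(Ganeeshan[[col]])) {
    Ganeeshan[[col]][is.na(Ganeeshan[[col]])] <-
      mean(Ganeeshan[[col]], na.rm = TRUE)
  }
}
GaneeshanModel1 <- lm(General ~ ., data = Ganeeshan)
summary(GaneeshanModel1)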

Related

Linear regression on 415 files, output just filename, regression coefficient, significance

I am a beginner in R, learning the basics to analyse some biological data. I have 415 .csv files, each for a fungal species. Each file has 5 columns (YEAR, FFD, LFD, RAN, MEAN):
YEAR FFD LFD RAN MEAN
1 1950 NA NA NA NA
2 1951 NA NA NA NA
3 1952 NA NA NA NA
4 1953 NA NA NA NA
5 1954 NA NA NA NA
6 1955 NA NA NA NA
7 1956 NA NA NA NA
8 1957 NA NA NA NA
9 1958 NA NA NA NA
10 1959 140 141 1 140
11 1960 NA NA NA NA
12 1961 NA NA NA NA
13 1962 NA NA NA NA
14 1963 NA NA NA NA
15 1964 NA NA NA NA
16 1965 155 156 1 155
17 1966 NA NA NA NA
18 1967 NA NA NA NA
19 1968 152 153 1 152
20 1969 NA NA NA NA
21 1970 NA NA NA NA
22 1971 161 162 1 161
23 1972 NA NA NA NA
24 1973 143 144 1 143
25 1974 NA NA NA NA
26 1975 NA NA NA NA
27 1976 NA NA NA NA
28 1977 NA NA NA NA
29 1978 NA NA NA NA
30 1979 NA NA NA NA
31 1980 NA NA NA NA
32 1981 NA NA NA NA
33 1982 155 156 1 155
34 1983 NA NA NA NA
35 1984 NA NA NA NA
36 1985 157 158 1 157
37 1986 170 310 140 240
38 1987 173 274 101 232
39 1988 192 236 44 214
40 1989 234 320 86 277
41 1990 172 287 115 213
42 1991 148 287 139 205
43 1992 140 278 138 206
44 1993 152 273 121 216
45 1994 142 319 177 228
46 1995 261 318 57 287
47 1996 247 315 68 285
48 1997 164 270 106 230
49 1998 186 187 1 186
50 1999 235 236 1 235
51 2000 NA NA NA NA
52 2001 309 310 1 309
53 2002 203 308 105 256
54 2003 140 238 98 189
55 2004 204 313 109 267
56 2005 253 313 60 287
57 2006 247 300 53 279
58 2007 185 295 110 225
59 2008 259 260 1 259
60 2009 296 315 19 309
61 2010 230 303 73 275
62 2011 247 248 1 247
63 2012 206 207 1 206
64 2013 NA NA NA NA
65 2014 250 317 67 271
First I would like to see the regression coefficient (slope of the line) for each file, and the significance (p-value) for all of the files.
I can do it individually with:
fruit<-read.csv(file.choose(),header=TRUE)
yr<-fruit[,1]
ffd<-fruit[,2]
res<-lm(ffd~yr)
summary(res)
when I do this for the data, I get:
Call:
lm(formula = ffd ~ yr)
Residuals:
Min 1Q Median 3Q Max
-77.358 -20.858 -5.714 22.494 96.015
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -4162.0710 950.1439 -4.380 0.000119 ***
yr 2.1864 0.4765 4.588 6.55e-05 ***
---
Signif. codes:
0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 38.75 on 32 degrees of freedom
(31 observations deleted due to missingness)
Multiple R-squared: 0.3968, Adjusted R-squared: 0.378
F-statistic: 21.05 on 1 and 32 DF, p-value: 6.549e-05
The only information I need from this at the moment is the regression coefficient (2.1864) and the p-value (6.549e-05)
The perfect output would be if I could get R to cycle through the 415 files, and give an output in the form of a table with 3 columns: filename, regression coefficient, and significance. There would be 415 rows, one for each file.
I would then like to do YEAR~LFD, YEAR~RANGE, and YEAR~MEAN. I am hoping that I can easily edit the code for YEAR~FFD and run it for the other 3 regressions.
The following code will probably work.
I have tested it with your data in two files. The functions that do all the work are these ones:
regrFun <- function(DF){
  # regress the response (column 2) on YEAR (column 1), as in lm(ffd ~ yr)
  fit <- lm(DF[[2]] ~ DF[[1]])
  coef(summary(fit))[2, c(1, 4)]  # slope estimate and its p-value
}
regrList <- function(iv, L){
  res <- lapply(seq_along(L), function(i){
    dftmp <- L[[i]]
    cfs <- regrFun(dftmp[c(1, iv)])
    data.frame(file = names(L)[i], Estimate = cfs[1], p.value = cfs[2])
  })
  res <- do.call(rbind, res)
  row.names(res) <- NULL
  res
}
Now read in the data files. In the following code line, substitute a common filename part for "pattern" in the obvious place.
filenames <- list.files(pattern = "pattern")
df_list <- lapply(filenames, read.csv)
names(df_list) <- filenames
And compute the values you want.
results_list <- lapply(2:ncol(df_list[[1]]), regrList, df_list)
names(results_list) <- names(df_list[[1]][-1])
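To keep the results, you could then write each table to its own file (a small usage sketch; the output filenames are just an example):
# one csv of regression results per response variable
for (nm in names(results_list)) {
  write.csv(results_list[[nm]], paste0(nm, "_regressions.csv"), row.names = FALSE)
}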
First I simulate 5 csv files with columns that look like yours:
for(i in 1:5){
  tab = data.frame(
    YEAR = 1950:2014,
    FFD  = rpois(65, 100),
    LFD  = rnorm(65, 100, 10),
    RAN  = rnbinom(65, mu = 100, size = 1),
    MEAN = runif(65, min = 50, max = 150)
  )
  write.csv(tab, paste0("data", i, ".csv"))
}
Now we need a vector of all the files in your directory. Yours will differ, but try to build it with the pattern argument:
csvfiles = dir(pattern="data[0-9]*.csv$")
We use three libraries from the tidyverse. Assuming each csv file is not huge, the code below reads in all the files, groups them by their source and performs the regression. Note that you can call the columns of the data frame directly, without having to rename them:
library(dplyr)
library(purrr)
library(broom)
csvfiles %>%
  map_df(function(i){ df = read.csv(i); df$data = i; df }) %>%
  group_by(data) %>%
  do(tidy(lm(FFD ~ YEAR, data = .))) %>%
  filter(term != "(Intercept)")
# A tibble: 5 x 6
# Groups: data [5]
data term estimate std.error statistic p.value
<chr> <chr> <dbl> <dbl> <dbl> <dbl>
1 data1.csv YEAR -0.0228 0.0731 -0.311 0.756
2 data2.csv YEAR -0.139 0.0573 -2.42 0.0182
3 data3.csv YEAR -0.175 0.0650 -2.70 0.00901
4 data4.csv YEAR -0.0478 0.0628 -0.762 0.449
5 data5.csv YEAR 0.0204 0.0648 0.315 0.754
You can just change the formula inside lm(FFD ~ YEAR, data = .) to get the other regressions, as sketched below.
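For instance, a sketch of looping the same pipeline over all four response columns (reformulate is base R and builds the formula response ~ YEAR from a string):
responses <- c("FFD", "LFD", "RAN", "MEAN")
results <- lapply(responses, function(v){
  csvfiles %>%
    map_df(function(i){ df = read.csv(i); df$data = i; df }) %>%
    group_by(data) %>%
    do(tidy(lm(reformulate("YEAR", response = v), data = .))) %>%
    filter(term != "(Intercept)")
})
names(results) <- responses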
A data.table version, using StupidWolf's csv file layout and names, featuring the requested fields:
library(data.table)
input.dir = "/home/user/Desktop/My Folder/" # adjust to your needs
csvfiles <- list.files(path=input.dir, full.names=TRUE, pattern=".*data(.*)\\.csv") # adjust pattern
Above, I used a more specific regex pattern, but you could just use pattern = "\\.csv$" if you want to process all csv files in that folder (pattern takes a regular expression, not a glob like *.csv).
# order the files
csvfiles <- csvfiles[order(as.numeric(gsub(".*data(.*)\\.csv", "\\1", csvfiles)))]
# function to read file and return requested columns
regrFun <- function(x){
  DT <- fread(x)
  fit <- lm(FFD ~ YEAR, data = DT)
  return(as.list(c(filename = basename(x), coef(summary(fit))[2, c(1, 4)])))
}
# apply function and rename columns
DT <- rbindlist(lapply(csvfiles, regrFun))
setnames(DT, c("filename", "regression coefficient", "significance"))
DT
Result:
filename regression coefficient significance
1: data1.csv -0.113286713286712 0.0874762832713643
2: data2.csv -0.044449300699302 0.457096760642717
3: data3.csv 0.0464597902097902 0.499618510612891
4: data4.csv -0.032473776223776 0.638494798460044
5: data5.csv 0.0562062937062939 0.452955919860998
---
411: data411.csv 0.0381555944055959 0.544185411150829
412: data412.csv -0.0672202797202807 0.314346452751388
413: data413.csv 0.116564685314687 0.0694785724198052
414: data414.csv -0.0908216783216786 0.110811677724832
415: data415.csv -0.0282779720279721 0.638766712090455
You could write an R script that runs on a single file, then run it on every file via the terminal.
The script is simply a .R file with code inside.
To run it on every file, you would execute in your terminal something along the lines of (using bash):
for file in yourDataDirectory/*; do
  Rscript yourScriptFile.R "$file" >> finalOutput
done
This would run the script in yourScriptFile.R on every file in yourDataDirectory and append the output to finalOutput.
The script code itself would be very similar to the one you already wrote, but instead of file.choose() you would use the argument passed on the command line, as described here, and you would print only the information you're interested in, instead of the whole output of summary.
finalOutput could even be a csv file, if you format the script output correctly.
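A minimal sketch of such a script (column positions follow the question's code; treat it as a starting point rather than a finished implementation):
# yourScriptFile.R
args <- commandArgs(trailingOnly = TRUE)  # args[1] is the csv filename
fruit <- read.csv(args[1], header = TRUE)
res <- lm(fruit[, 2] ~ fruit[, 1])        # ffd ~ yr, as in the question
cfs <- coef(summary(res))[2, c(1, 4)]     # slope estimate and p-value
# print one comma-separated line: filename, coefficient, p-value
cat(basename(args[1]), cfs[1], cfs[2], sep = ",")
cat("\n")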

How to count rows in a logical vector

I have a data frame called source that looks something like this
185 2002-07-04 NA NA 20
186 2002-07-05 NA NA 20
187 2002-07-06 NA NA 20
188 2002-07-07 14.400 0.243 20
189 2002-07-08 NA NA 20
190 2002-07-09 NA NA 20
191 2002-07-10 NA NA 20
192 2002-07-11 NA NA 20
193 2002-07-12 NA NA 20
194 2002-07-13 4.550 0.296 20
195 2002-07-14 NA NA 20
196 2002-07-15 NA NA 20
197 2002-07-16 NA NA 20
198 2002-07-17 NA NA 20
199 2002-07-18 NA NA 20
200 2002-07-19 NA 0.237 20
and when I try
> nrow(complete.cases(source))
I only get NULL
Can someone explain why this is the case, and how I can count the rows without NA or NaN values?
Use sum instead. Though the safest option would be NROW (because it can handle both data.frames and vectors):
sum(complete.cases(source))
#[1] 2
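To see why nrow fails on a vector while NROW and sum work, a minimal illustration (v is a made-up logical vector):
v <- c(TRUE, FALSE, TRUE)
nrow(v)  # NULL -- a plain vector has no dim attribute
NROW(v)  # 3    -- treats a vector as a one-column matrix
sum(v)   # 2    -- TRUE counts as 1, FALSE as 0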
Or alternatively if you insist on using nrow
nrow(source[complete.cases(source), ])
#[1] 2
Explanation: complete.cases returns a logical vector indicating which cases (in your case rows) are complete.
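With the sample data below, that vector is TRUE only for the two complete rows:
complete.cases(source)
#  [1] FALSE FALSE FALSE  TRUE FALSE FALSE FALSE FALSE FALSE  TRUE FALSE FALSE
# [13] FALSE FALSE FALSE FALSE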
Sample data
source <- read.table(text =
"185 2002-07-04 NA NA 20
186 2002-07-05 NA NA 20
187 2002-07-06 NA NA 20
188 2002-07-07 14.400 0.243 20
189 2002-07-08 NA NA 20
190 2002-07-09 NA NA 20
191 2002-07-10 NA NA 20
192 2002-07-11 NA NA 20
193 2002-07-12 NA NA 20
194 2002-07-13 4.550 0.296 20
195 2002-07-14 NA NA 20
196 2002-07-15 NA NA 20
197 2002-07-16 NA NA 20
198 2002-07-17 NA NA 20
199 2002-07-18 NA NA 20
200 2002-07-19 NA 0.237 20")
complete.cases returns a logical vector that indicates which rows are complete. As a vector doesn't have a row attribute, you cannot use nrow here; as suggested by others, use sum instead. With sum, TRUE and FALSE are converted to 1 and 0 internally, so summing counts the TRUE values of your vector.
sum(complete.cases(source))
# [1] 2
If, however, you are more interested in the data.frame that is left after excluding all incomplete rows, you can use na.exclude. This returns a data.frame, on which you can use nrow:
nrow(na.exclude(source))
# [1] 2
na.exclude(source)
# V2 V3 V4 V5
# 188 2002-07-07 14.40 0.243 20
# 194 2002-07-13 4.55 0.296 20
You can even try:
source[rowSums(is.na(source))==0,]
# V1 V2 V3 V4 V5
# 4 188 2002-07-07 14.40 0.243 20
# 10 194 2002-07-13 4.55 0.296 20
nrow(source[rowSums(is.na(source))==0,])
#[1] 2

Calculating rates when data is in long form

A sample of my data is available here.
I am trying to calculate the growth rate (change in weight (wt) over time) for each squirrel.
When I have my data in wide format:
squirrel fieldBirthDate date1 date2 date3 date4 date5 date6 age1 age2 age3 age4 age5 age6 wt1 wt2 wt3 wt4 wt5 wt6 litterid
22922 2017-05-13 2017-05-14 2017-06-07 NA NA NA NA 1 25 NA NA NA NA 12 52.9 NA NA NA NA 7684
22976 2017-05-13 2017-05-16 2017-06-07 NA NA NA NA 3 25 NA NA NA NA 15.5 50.9 NA NA NA NA 7692
22926 2017-05-13 2017-05-16 2017-06-07 NA NA NA NA 0 25 NA NA NA NA 10.1 48 NA NA NA NA 7719
I am able to calculate growth rate with the following code:
library(dplyr)
#growth rate between weight 1 and weight 3, divided by age when weight 3 is recorded
growth <- growth %>%
  mutate(g.rate = (wt3 - wt1) / age3)
#growth rate between weight 1 and weight 2, divided by age when weight 2 is recorded
merge.growth <- merge.growth %>%
  mutate(g.rate = (wt2 - wt1) / age2)
However, when the data is in long format (a format needed for the analysis I am running afterwards):
squirrel litterid date age wt
22922 7684 2017-05-13 0 NA
22922 7684 2017-05-14 1 12
22922 7684 2017-06-07 25 52.9
22976 7692 2017-05-13 1 NA
22976 7692 2017-05-16 3 15.5
22976 7692 2017-06-07 25 50.9
22926 7719 2017-05-14 0 10.1
22926 7719 2017-06-08 25 48
I cannot use the mutate function I used above. I am hoping to create a new column that includes growth rate as follows:
squirrel litterid date age wt g.rate
22922 7684 2017-05-13 0 NA NA
22922 7684 2017-05-14 1 12 NA
22922 7684 2017-06-07 25 52.9 1.704
22976 7692 2017-05-13 1 NA NA
22976 7692 2017-05-16 3 15.5 NA
22976 7692 2017-06-07 25 50.9 1.609
22926 7719 2017-05-14 0 10.1 NA
22926 7719 2017-06-08 25 48 1.516
22758 7736 2017-05-03 0 8.8 NA
22758 7736 2017-05-28 25 43 1.368
22758 7736 2017-07-05 63 126 1.860
22758 7736 2017-07-23 81 161 1.879
22758 7736 2017-07-26 84 171 1.930
I have been calculating the growth rates (growth between each weight and the first time the squirrel was weighed) in Excel; however, I would like to do the calculations in R instead, since I have a large number of squirrels to work with. I suspect if/else loops might be the way to go here, but I am not well versed in that sort of coding. Any suggestions or ideas are welcome!
You can use group_by to calculate this for each squirrel:
group_by(df, squirrel) %>%
  mutate(g.rate = (wt - nth(wt, which.min(is.na(wt)))) /
                  (age - nth(age, which.min(is.na(wt)))))
That leaves NaNs where the age term is zero, but you can change those to NAs if you want with df$g.rate[is.nan(df$g.rate)] <- NA.
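As a self-contained check, here is the same idea on the first two squirrels from the question (df is assumed to be the long-format data frame):
library(dplyr)
df <- data.frame(
  squirrel = c(22922, 22922, 22922, 22976, 22976, 22976),
  age      = c(0, 1, 25, 1, 3, 25),
  wt       = c(NA, 12, 52.9, NA, 15.5, 50.9)
)
df <- group_by(df, squirrel) %>%
  mutate(g.rate = (wt - nth(wt, which.min(is.na(wt)))) /
                  (age - nth(age, which.min(is.na(wt)))))
# squirrel 22922 at age 25: (52.9 - 12) / (25 - 1) = 1.704, matching the expected output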
An alternative using data.table and its function shift, which takes the previous row within each group:
library(data.table)
df <- data.table(df)
# rate between consecutive weighings: change in weight over change in age
# (this differs from the dplyr answer, which measures growth from the first weighing)
df[, growth := (wt - shift(wt, 1)) / (age - shift(age, 1)), by = .(squirrel)]

Subsetting dataset when data contains no clear groups to subset [closed]

Hi Stackoverflow community,
Background:
I'm using a population modelling program to try to predict genetic outcomes of threatened species populations under a range of management scenarios. At the end of each of my population modelling scenarios I have a .csv file containing information on all the final living individuals over all 1,000 iterations of the modelled population, including every surviving individual's genotype.
What I want:
From this .csv output file I'd like to determine the frequency of the allele "6" in the columns "Allele2a" and "Allele2b" in each of the 1,000 iterations of the model contained in the file.
The Problem:
The .csv file I'm trying to determine allele 6's frequency from does not contain information that can easily be used to subset the data (from what I can see) into the separate iterations. I have no idea how to split this dataset into its respective iterations, given that the number of individuals surviving to the end of the model (and consequently the number of rows in each iteration) is not the same, and there are no clear subsettable points.
Any guidance on how to separate this data into iteration units which can be analysed, or how to determine the frequency of the allele without complex subsetting would be very greatly appreciated. If any further information is required please don't hesitate to ask.
Thanks!
EDIT: When input into R the data looks like this:
Living<-read.csv("Living Ind.csv", header=F)
colnames(Living) <- c("Iteration","ID","Pop","Sex","alive","Age","DamID","SireID","F","FInd","MtDNA","Alle1a","Alle1b","Alle2a","Alle2b")
attach(Living)
Living
Iteration ID Pop Sex alive Age DamID SireID F FInd MtDNA Alle1a Alle1b Alle2a Alle2b
1 Iteration 1 NA NA NA NA NA NA NA NA NA NA NA NA
2 NA NA NA NA NA NA NA NA NA NA NA NA NA
3 2511 2 M TRUE 19 545 1376 0.000 0.000 545 1089 2751 6 6
4 2515 2 F TRUE 18 590 1783 0.000 0.000 590 1180 3566 5 5
5 2519 2 F TRUE 18 717 1681 0.000 0.000 717 1434 3362 4 6
6 2526 2 M TRUE 17 412 1780 0.000 0.000 412 823 3559 4 6
7 2529 2 F TRUE 17 324 1473 0.000 0.000 324 647 2945 5 6
107 2676 2 F TRUE 1 2576 2526 0.000 0.000 621 3876 3559 6 4
108 NA NA NA NA NA NA NA NA NA NA NA NA NA
109 Iteration 2 NA NA NA NA NA NA NA NA NA NA NA NA
110 NA NA NA NA NA NA NA NA NA NA NA NA NA
111 2560 2 M TRUE 18 703 1799 0.000 0.000 703 1406 3598 6 6
112 2564 2 M TRUE 18 420 1778 0.000 0.000 420 840 3555 4 6
113 2578 2 F TRUE 17 347 1778 0.000 0.000 347 693 3555 3 5
114 2581 2 M TRUE 16 330 1454 0.000 0.000 330 659 2907 6 6
115 2584 2 F TRUE 16 568 1593 0.000 0.000 568 1135 3185 6 5
116 2591 2 F TRUE 13 318 1423 0.000 0.000 318 635 2846 3 6
117 2593 2 M TRUE 13 341 1454 0.000 0.000 341 682 2907 6 6
118 2610 2 M TRUE 8 2578 2582 0.000 0.000 347 693 2908 5 6
119 2612 2 M TRUE 8 2578 2582 0.000 0.000 347 3555 660 3 6
Just a total mess I'm afraid.
Here's a link to a copy of the .csv file.
https://www.dropbox.com/s/pl6ncy5i0152uv1/Living%20Ind.csv?dl=0
Thank you for providing your data. It makes this much easier. In future you should always do this with questions on SO.
The basic issue is transforming your original data into something easier to manipulate in R. Since your dataset is fairly large, I'm using the data.table package, but you could do basically the same thing using data.frames in base R.
library(data.table)
url <- "https://www.dropbox.com/s/pl6ncy5i0152uv1/Living%20Ind.csv?dl=1"
DT <- fread(url,header=FALSE, showProgress = FALSE) # import data
DT <- DT[!is.na(V2)] # remove blank lines (rows)
brks <- which(DT$V1 == "Iteration")   # identify iteration header rows
iter <- DT[brks, ]$V2                 # extract iteration numbers
DT[-brks, Iter := rep(iter, diff(c(brks, nrow(DT) + 1)) - 1)]  # assign an iteration number to each data row (by reference)
DT <- DT[-brks]                       # remove iteration header rows
DT[, V1 := NULL]                      # remove first column
setnames(DT, c("ID","Pop","Sex","alive","Age","DamID","SireID","F","FInd","MtDNA","Alle1a","Alle1b","Alle2a","Alle2b","Iteration"))
# now we can count the fraction of individuals carrying allele 6 in each iteration
DT[, list(frac = sum(Alle2a == 6 | Alle2b == 6) / .N), by = Iteration]
# Iteration frac
# 1: 1 0.7619048
# 2: 2 0.9130435
# 3: 3 0.6091954
# 4: 4 0.8620690
# 5: 5 0.8850575
# ---
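If by "frequency" you mean the proportion of allele copies rather than the proportion of carriers (an assumption about the intended definition), a small variant of the last line gives that instead:
# each individual carries two allele copies, so divide by 2 * .N
DT[, list(freq = (sum(Alle2a == 6) + sum(Alle2b == 6)) / (2 * .N)), by = Iteration]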
If you are going to be analyzing large datasets like this a lot, it would probably be worth your while to learn how to use data.table.

Globaltest Pathway analysis with a matrix

I have a matrix with SAGE count data and I want to test for GO enrichment and pathway enrichment. Therefore I want to use the globaltest package in R. My data looks like this:
data_file
KI_1 KI_2 KI_4 KI_5 KI_6 WT_1 WT_2 WT_3 WT_4 WT_6
ENSMUSG00000002012 215 141 102 127 138 162 164 114 188 123
ENSMUSG00000028182 13 5 13 12 8 10 7 13 7 14
ENSMUSG00000002017 111 72 70 170 52 87 117 77 226 122
ENSMUSG00000028184 547 312 162 226 280 501 603 407 355 268
ENSMUSG00000002015 1712 1464 825 1038 1189 1991 1950 1457 1240 883
ENSMUSG00000028180 1129 944 766 869 737 1223 1254 865 871 844
The rownames contain Ensembl gene IDs and each column represents a sample. These samples can be divided into two groups for testing pathway enrichment: the KI1 group and the WT2 group.
groups <- c("KI1","KI1","KI1","KI1","KI1","WT2","WT2","WT2","WT2","WT2")
I found the function gtKEGG to do the pathway analysis, but my question is how to use it correctly. When I run the function I don't get any error, but my output looks like this:
> gtKEGG(groups, t(data_file), annotation="org.Mm.eg.db")
holm alias p-value Statistic Expected Std.dev #Cov
00380 NA Tryptophan metabolism NA NA NA NA 0
01100 NA Metabolic pathways NA NA NA NA 0
02010 NA ABC transporters NA NA NA NA 0
04975 NA Fat digestion and absorption NA NA NA NA 0
04142 NA Lysosome NA NA NA NA 0
04012 NA ErbB signaling pathway NA NA NA NA 0
04110 NA Cell cycle NA NA NA NA 0
04360 NA Axon guidance NA NA NA NA 0
Can anyone help me with this question? Thanks! :)
I found the solution! The rownames are Ensembl gene IDs, while gtKEGG expects Entrez IDs, so the mapping has to be supplied via the probe2entrez argument:
library(globaltest)
library(org.Mm.eg.db)
# map Ensembl gene IDs (the rownames of data_file) to Entrez IDs
eg <- as.list(org.Mm.egENSEMBL2EG)
KEGG <- gtKEGG(as.factor(groups), t(data_file), probe2entrez = eg, annotation = "org.Mm.eg.db")
