Apply function to dataframe based on unique values - r

I need to apply a function to a dataframe, subsetted or grouped by unique values.
My data looks like this:
FID FIX_NO ELK_ID ALTITUDE XLOC YLOC DATE_TIME JulDate
1 NA 5296 393 2260.785 547561.3 4771900 NA 140
2 NA 5297 393 2254.992 547555.9 4771906 NA 140
3 NA 5298 393 2256.078 547563.5 4771901 NA 140
4 NA 5299 393 2247.047 547564.7 4771907 NA 140
5 NA 5300 393 2264.875 547558.3 4771903 NA 140
6 NA 5301 393 2259.496 547554.1 4771925 NA 140
...
24247 NA 4389 527 2204.047 558465.7 4775358 NA 161
24248 NA 4390 527 2279.078 558884.1 4775713 NA 161
24249 NA 4391 527 2270.590 558807.9 4775825 NA 161
24250 NA 4392 527 2265.258 558732.2 4775805 NA 161
24251 NA 4393 527 2238.375 558672.4 4775781 NA 161
24252 NA 4394 527 2250.055 558686.6 4775775 NA 161
My goal is to make a new data.frame by randomly selecting 4 rows for each JulDate within each unique ELK_ID.
If I do it by hand, for each unique ELK_ID my code is as follows:
oneelk <- subset(dataset, ELK_ID == 393)
newdata <- do.call(rbind, lapply(split(oneelk, oneelk$JulDate),
                                 function(x) x[sample(1:nrow(x), 4), ]))
There are >40 ELK_IDs, so I need to automate the process. Please help!

Here is a data.table solution.
library(data.table)
setDT(dataset)[,.SD[sample(.N,4)],by=list(ELK_ID,JulDate)]
# ELK_ID JulDate FID FIX_NO ALTITUDE XLOC YLOC DATE_TIME
# 1: 393 140 NA 5297 2254.992 547555.9 4771906 NA
# 2: 393 140 NA 5299 2247.047 547564.7 4771907 NA
# 3: 393 140 NA 5298 2256.078 547563.5 4771901 NA
# 4: 393 140 NA 5300 2264.875 547558.3 4771903 NA
# 5: 527 161 NA 4394 2250.055 558686.6 4775775 NA
# 6: 527 161 NA 4392 2265.258 558732.2 4775805 NA
# 7: 527 161 NA 4390 2279.078 558884.1 4775713 NA
# 8: 527 161 NA 4393 2238.375 558672.4 4775781 NA
NB, this will only work if there are at least 4 rows for every combination of ELK_ID and JulDate.
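If some combinations of ELK_ID and JulDate may have fewer than 4 rows, one possible workaround (a sketch of mine, not part of the original answer) is to cap the sample size at the group size:
library(data.table)
# take at most 4 rows per ELK_ID/JulDate group, keeping all rows of smaller groups
setDT(dataset)[, .SD[sample(.N, min(.N, 4))], by = .(ELK_ID, JulDate)]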

You can also create an index using tapply and then just subset (assuming your data set is called df):
indx <- unlist(tapply(seq_len(dim(df)[1L]),
                      df[, c("JulDate", "ELK_ID")],
                      function(x) sample(x, 4)))
df[indx, ]

Try splitting on both columns, e.g. split(dataset, dataset[, c("ELK_ID", "JulDate")]), and then apply your sampling step to each piece (see the sketch below).
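A minimal sketch along those lines, reusing the sampling step from the question (assuming the data.frame is called dataset):
# split on both grouping columns (drop = TRUE skips empty combinations),
# sample 4 rows from each piece, then recombine
pieces <- split(dataset, dataset[, c("ELK_ID", "JulDate")], drop = TRUE)
newdata <- do.call(rbind, lapply(pieces, function(x) x[sample(1:nrow(x), 4), ]))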

Might as well add a dplyr solution too:
library(dplyr)
newdf <- yourdata %>%
  group_by(ELK_ID, JulDate) %>%
  sample_n(4)
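In newer versions of dplyr (1.0.0 and later), sample_n() has been superseded by slice_sample(); an equivalent call would be:
library(dplyr)
newdf <- yourdata %>%
  group_by(ELK_ID, JulDate) %>%
  slice_sample(n = 4) %>%
  ungroup()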


Find rows with certain combination of values

I have a data frame that looks like this
iso_o iso_d FLOW FLOW_0
185 190 NA NA
185 190 NA NA
185 190 NA NA
185 190 1 NA
185 190 NA NA
185 190 NA 4249
185 114 1 NA
Now I want to know which rows, and how many rows, have for example "185" in iso_o and "190" in iso_d.
Can anyone point me in the right direction?
We can try subset:
> subset(df, iso_o == 185 & iso_d == 190)
iso_o iso_d FLOW FLOW_0
1 185 190 NA NA
2 185 190 NA NA
3 185 190 NA NA
4 185 190 1 NA
5 185 190 NA NA
6 185 190 NA 4249
You can find the index with the which-function:
which(data$iso_o == 185 & data$iso_d == 190)
Using brackets might make it a bit easier to read:
which( (data$iso_o == 185) & (data$iso_d == 190) )
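Since the question also asks for the number of such rows, either result can be counted directly, for example:
# number of rows with iso_o == 185 and iso_d == 190
sum(data$iso_o == 185 & data$iso_d == 190, na.rm = TRUE)
# or, using the index from which():
length(which(data$iso_o == 185 & data$iso_d == 190))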

Linear regression on 415 files, output just filename, regression coefficient, significance

I am a beginner in R, learning the basics to analyse some biological data. I have 415 .csv files, one per fungal species. Each file has 5 columns (YEAR, FFD, LFD, RAN, MEAN):
YEAR FFD LFD RAN MEAN
1 1950 NA NA NA NA
2 1951 NA NA NA NA
3 1952 NA NA NA NA
4 1953 NA NA NA NA
5 1954 NA NA NA NA
6 1955 NA NA NA NA
7 1956 NA NA NA NA
8 1957 NA NA NA NA
9 1958 NA NA NA NA
10 1959 140 141 1 140
11 1960 NA NA NA NA
12 1961 NA NA NA NA
13 1962 NA NA NA NA
14 1963 NA NA NA NA
15 1964 NA NA NA NA
16 1965 155 156 1 155
17 1966 NA NA NA NA
18 1967 NA NA NA NA
19 1968 152 153 1 152
20 1969 NA NA NA NA
21 1970 NA NA NA NA
22 1971 161 162 1 161
23 1972 NA NA NA NA
24 1973 143 144 1 143
25 1974 NA NA NA NA
26 1975 NA NA NA NA
27 1976 NA NA NA NA
28 1977 NA NA NA NA
29 1978 NA NA NA NA
30 1979 NA NA NA NA
31 1980 NA NA NA NA
32 1981 NA NA NA NA
33 1982 155 156 1 155
34 1983 NA NA NA NA
35 1984 NA NA NA NA
36 1985 157 158 1 157
37 1986 170 310 140 240
38 1987 173 274 101 232
39 1988 192 236 44 214
40 1989 234 320 86 277
41 1990 172 287 115 213
42 1991 148 287 139 205
43 1992 140 278 138 206
44 1993 152 273 121 216
45 1994 142 319 177 228
46 1995 261 318 57 287
47 1996 247 315 68 285
48 1997 164 270 106 230
49 1998 186 187 1 186
50 1999 235 236 1 235
51 2000 NA NA NA NA
52 2001 309 310 1 309
53 2002 203 308 105 256
54 2003 140 238 98 189
55 2004 204 313 109 267
56 2005 253 313 60 287
57 2006 247 300 53 279
58 2007 185 295 110 225
59 2008 259 260 1 259
60 2009 296 315 19 309
61 2010 230 303 73 275
62 2011 247 248 1 247
63 2012 206 207 1 206
64 2013 NA NA NA NA
65 2014 250 317 67 271
First I would like to see the regression coefficient (slope of the line) for each file, and the significance (p-value) for all of the files.
I can do it individually with:
fruit<-read.csv(file.choose(),header=TRUE)
yr<-fruit[,1]
ffd<-fruit[,2]
res<-lm(ffd~yr)
summary(res)
when I do this for the data, I get:
Call:
lm(formula = ffd ~ yr)
Residuals:
Min 1Q Median 3Q Max
-77.358 -20.858 -5.714 22.494 96.015
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -4162.0710 950.1439 -4.380 0.000119 ***
yr 2.1864 0.4765 4.588 6.55e-05 ***
---
Signif. codes:
0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 38.75 on 32 degrees of freedom
(31 observations deleted due to missingness)
Multiple R-squared: 0.3968, Adjusted R-squared: 0.378
F-statistic: 21.05 on 1 and 32 DF, p-value: 6.549e-05
The only information I need from this at the moment is the regression coefficient (2.1864) and the p-value (6.549e-05).
The perfect output would be if I could get R to cycle through the 415 files, and give an output in the form of a table with 3 columns: filename, regression coefficient, and significance. There would be 415 rows, one for each file.
I would then like to do the same for LFD, RAN, and MEAN against YEAR. I am hoping that I can easily edit the code for FFD ~ YEAR and run it for the other 3 regressions.
The following code will probably work.
I have tested it with your data in two files. The functions that do all the work are these ones:
regrFun <- function(DF){
  # fit <response> ~ YEAR and return the slope and its p-value
  fit <- lm(DF[[1]] ~ DF[[2]])
  coef(summary(fit))[2, c(1, 4)]   # row 2 = slope; columns 1 and 4 = Estimate, Pr(>|t|)
}
regrList <- function(iv, L){
  # iv: column number of the response variable; L: named list of data frames
  res <- lapply(seq_along(L), function(i){
    dftmp <- L[[i]]
    cfs <- regrFun(dftmp[c(iv, 1)])   # response column first, YEAR (column 1) second
    data.frame(file = names(L)[i], Estimate = cfs[1], p.value = cfs[2])
  })
  res <- do.call(rbind, res)
  row.names(res) <- NULL
  res
}
Now read in the data files. In the following code line, substitute a common filename part for "pattern" in the obvious place.
filenames <- list.files(pattern = "pattern")
df_list <- lapply(filenames, read.csv)
names(df_list) <- filenames
And compute the values you want.
results_list <- lapply(2:ncol(df_list[[1]]), regrList, df_list)
names(results_list) <- names(df_list[[1]][-1])
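results_list is then a list with one entry per response column (FFD, LFD, RAN and MEAN under the layout above), each a data frame with the columns file, Estimate and p.value:
# slope and p-value of FFD ~ YEAR for every file
results_list[["FFD"]]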
First I simulate 5 csv files with columns that look like yours:
for(i in 1:5){
  tab = data.frame(
    YEAR = 1950:2014,
    FFD  = rpois(65, 100),
    LFD  = rnorm(65, 100, 10),
    RAN  = rnbinom(65, mu = 100, size = 1),
    MEAN = runif(65, min = 50, max = 150)
  )
  write.csv(tab, paste0("data", i, ".csv"))
}
Now we need a vector of all the files in your directory. This will be different for you, but try to create it using the pattern argument:
csvfiles = dir(pattern="data[0-9]*.csv$")
We use three libraries from the tidyverse. Assuming each csv file is not huge, the code below reads in all the files, groups them by their source file and performs the regression. Note you can refer to the columns of the data frame directly, without having to rename them:
library(dplyr)
library(purrr)
library(broom)
csvfiles %>%
  map_df(function(i){ df = read.csv(i); df$data = i; df }) %>%
  group_by(data) %>%
  do(tidy(lm(FFD ~ YEAR, data = .))) %>%
  filter(term != "(Intercept)")
# A tibble: 5 x 6
# Groups: data [5]
data term estimate std.error statistic p.value
<chr> <chr> <dbl> <dbl> <dbl> <dbl>
1 data1.csv YEAR -0.0228 0.0731 -0.311 0.756
2 data2.csv YEAR -0.139 0.0573 -2.42 0.0182
3 data3.csv YEAR -0.175 0.0650 -2.70 0.00901
4 data4.csv YEAR -0.0478 0.0628 -0.762 0.449
5 data5.csv YEAR 0.0204 0.0648 0.315 0.754
You can just change the formula inside lm(FFD ~ YEAR, data = .) to get the other regressions.
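If you want all four regressions in one pass, one possible extension of this approach (a sketch under the same assumptions, using reformulate() to build each formula) is:
library(dplyr)
library(purrr)
library(broom)
# read every file once, tagging each row with its source file
all_data <- csvfiles %>% map_df(function(i){ df = read.csv(i); df$data = i; df })
responses <- c("FFD", "LFD", "RAN", "MEAN")
# fit <response> ~ YEAR per file and per response, keeping only the slope rows
all_fits <- map_df(responses, function(resp){
  all_data %>%
    group_by(data) %>%
    do(tidy(lm(reformulate("YEAR", response = resp), data = .))) %>%
    filter(term != "(Intercept)") %>%
    mutate(response = resp)
})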
data.table version, using StupidWolf's csv files layout and names, featuring the requested fields:
library(data.table)
input.dir = "/home/user/Desktop/My Folder/" # adjust to your needs
csvfiles <- list.files(path=input.dir, full.names=TRUE, pattern=".*data(.*)\\.csv") # adjust pattern
Above, I used a more specific regex pattern, but you could just use pattern="\\.csv$" if you want to process all csv files in that folder (the pattern argument is a regular expression, not a glob).
# order the files
csvfiles <- csvfiles[order(as.numeric(gsub(".*data(.*)\\.csv", "\\1", csvfiles)))]
# function to read file and return requested columns
regrFun <- function(x){
DT <- fread(x)
fit <- lm(FFD ~ YEAR, data=DT)
return(as.list(c(filename=basename(x), coef(summary(fit))[2, c(1, 4)])))
}
# apply function and rename columns
DT <- rbindlist(lapply(csvfiles, regrFun))
setnames(DT, c("filename", "regression coefficient", "significance"))
DT
Result:
filename regression coefficient significance
1: data1.csv -0.113286713286712 0.0874762832713643
2: data2.csv -0.044449300699302 0.457096760642717
3: data3.csv 0.0464597902097902 0.499618510612891
4: data4.csv -0.032473776223776 0.638494798460044
5: data5.csv 0.0562062937062939 0.452955919860998
---
411: data411.csv 0.0381555944055959 0.544185411150829
412: data412.csv -0.0672202797202807 0.314346452751388
413: data413.csv 0.116564685314687 0.0694785724198052
414: data414.csv -0.0908216783216786 0.110811677724832
415: data415.csv -0.0282779720279721 0.638766712090455
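If you want to keep this table, data.table's fwrite() can write it straight to a csv file, for example:
# write the summary table next to the input files (path is just an example)
fwrite(DT, file.path(input.dir, "regression_results.csv"))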
You could write an R script that runs on a single file, then run it on every file via the terminal.
The script is simply a .R file with code inside.
To run it on every file, you would execute on your terminal something along the lines of (using bash):
for file in $(ls yourDataDirectory); do
Rscript yourScriptFile.R $file >> finalOutput
done
This would run the script in yourScriptFile.R on every file in yourDataDirectory and append the output to finalOutput.
The script code itself would be very similar to the one you already wrote, but instead of file.choose() you would use the argument passed on the command line, as described here, and you would print only the information you're interested in instead of the whole output of summary.
finalOutput could even be a csv file, if you format the script output correctly.
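A minimal sketch of such a script (my own sketch; it assumes the columns are named YEAR and FFD as above, and yourScriptFile.R is just a placeholder name):
# yourScriptFile.R -- run as: Rscript yourScriptFile.R path/to/file.csv
args  <- commandArgs(trailingOnly = TRUE)
fruit <- read.csv(args[1], header = TRUE)
res <- lm(FFD ~ YEAR, data = fruit)
cf  <- coef(summary(res))
# print one comma-separated line: filename, regression coefficient, p-value
cat(basename(args[1]), cf["YEAR", "Estimate"], cf["YEAR", "Pr(>|t|)"], sep = ",")
cat("\n")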

Efficient way to add multiple columns to weekly data in data.table, based on other values of columns

I have data with this structure:
a <- data.table(week = 1:52, price = 101:152)
a <- a[rep(1:nrow(a), each = 12),]
a$index_in_week <- 1:12
How do I efficiently create 12 new columns that hold the prices of the next 12 weeks? For each week there are 12 rows of data, with an index column that always runs from 1 to 12. The new columns should contain the prices of the 12 weeks starting from the current one, stepping forward one week at a time. For example, for week 1 the first new column will have the prices of weeks 1 to 12, column 2 will have the values of weeks 2 to 13, and so on.
I.e., here is how one can create the first two columns:
a$price_for_week_1 <- apply(a, 1, function(y) {
return(head(a[week == (y[[1]]+y[[3]]-1), price], 1))
})
a$price_for_week_2 <- apply(a, 1, function(y) {
return(head(a[week == (y[[1]]+y[[3]]+0), price], 1))
})
Here is an example of a for loop:
for (i in 1:12) {
  inside_i <- -2+i
  a[, paste0('PRICE_WEEK_', i) := apply(a, 1, function(y) {
    return(head(a[week == (y[[1]]+y[[3]] + inside_i), price], 1))
  })]
}
The approaches I can see (e.g. a for loop or the apply family) consume too much time, and I need efficiency.
What would be the way to do this with data.table or, since all the columns are integer, maybe some funky matrix operations?
P.S. I couldn't come up with a better title, my apologies.
If I understand correctly, the OP wants to create a table for 52 weeks (rows) where the prices for the subsequent 12 weeks are printed horizontally.
For this, it is not necessary to create a data.table of 12 x 52 = 624 rows and an index_in_week helper column. docendo discimus has suggested applying the shift() function to the enlarged (624-row) data.table.
Instead, the shift() function can be applied directly to the data.table which contains weeks and prices (52 rows).
library(data.table)
a <- data.table(week = 1:52, price = 101:152)
print(a, nrows = 20L)
week price
1: 1 101
2: 2 102
3: 3 103
4: 4 104
5: 5 105
---
48: 48 148
49: 49 149
50: 50 150
51: 51 151
52: 52 152
a[, sprintf("wk%02i", 1:12) := shift(price, n = 0:11, type = "lead")]
print(a, nrows = 20L)
week price wk01 wk02 wk03 wk04 wk05 wk06 wk07 wk08 wk09 wk10 wk11 wk12
1: 1 101 101 102 103 104 105 106 107 108 109 110 111 112
2: 2 102 102 103 104 105 106 107 108 109 110 111 112 113
3: 3 103 103 104 105 106 107 108 109 110 111 112 113 114
4: 4 104 104 105 106 107 108 109 110 111 112 113 114 115
5: 5 105 105 106 107 108 109 110 111 112 113 114 115 116
---
48: 48 148 148 149 150 151 152 NA NA NA NA NA NA NA
49: 49 149 149 150 151 152 NA NA NA NA NA NA NA NA
50: 50 150 150 151 152 NA NA NA NA NA NA NA NA NA
51: 51 151 151 152 NA NA NA NA NA NA NA NA NA NA
52: 52 152 152 NA NA NA NA NA NA NA NA NA NA NA

How to count rows in a logical vector

I have a data frame called source that looks something like this
185 2002-07-04 NA NA 20
186 2002-07-05 NA NA 20
187 2002-07-06 NA NA 20
188 2002-07-07 14.400 0.243 20
189 2002-07-08 NA NA 20
190 2002-07-09 NA NA 20
191 2002-07-10 NA NA 20
192 2002-07-11 NA NA 20
193 2002-07-12 NA NA 20
194 2002-07-13 4.550 0.296 20
195 2002-07-14 NA NA 20
196 2002-07-15 NA NA 20
197 2002-07-16 NA NA 20
198 2002-07-17 NA NA 20
199 2002-07-18 NA NA 20
200 2002-07-19 NA 0.237 20
and when I try
> nrow(complete.cases(source))
I only get NULL
Can someone explain why this is the case, and how I can count how many rows there are without NA or NaN values?
Instead use sum. Though the safest option would be NROW (because it can handle both data.frames and vectors):
sum(complete.cases(source))
#[1] 2
Or alternatively if you insist on using nrow
nrow(source[complete.cases(source), ])
#[1] 2
Explanation: complete.cases returns a logical vector indicating which cases (in your case rows) are complete.
Sample data
source <- read.table(text =
"185 2002-07-04 NA NA 20
186 2002-07-05 NA NA 20
187 2002-07-06 NA NA 20
188 2002-07-07 14.400 0.243 20
189 2002-07-08 NA NA 20
190 2002-07-09 NA NA 20
191 2002-07-10 NA NA 20
192 2002-07-11 NA NA 20
193 2002-07-12 NA NA 20
194 2002-07-13 4.550 0.296 20
195 2002-07-14 NA NA 20
196 2002-07-15 NA NA 20
197 2002-07-16 NA NA 20
198 2002-07-17 NA NA 20
199 2002-07-18 NA NA 20
200 2002-07-19 NA 0.237 20")
complete.cases returns a logical vector that indicates which rows are complete. As a vector doesn't have a row attribute, you cannot use nrow here; use sum instead, as suggested by others. With sum, TRUE and FALSE are converted to 1 and 0 internally, so sum counts the TRUE values of your vector.
sum(complete.cases(source))
# [1] 2
If, however, you are more interested in the data.frame that is left after excluding all incomplete rows, you can use na.exclude. This returns a data.frame, on which you can use nrow.
nrow(na.exclude(source))
# [1] 2
na.exclude(source)
# V2 V3 V4 V5
# 188 2002-07-07 14.40 0.243 20
# 194 2002-07-13 4.55 0.296 20
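na.omit() behaves the same way for this purpose and also returns the remaining data.frame:
nrow(na.omit(source))
# [1] 2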
You can even try:
source[rowSums(is.na(source))==0,]
# V1 V2 V3 V4 V5
# 4 188 2002-07-07 14.40 0.243 20
# 10 194 2002-07-13 4.55 0.296 20
nrow(source[rowSums(is.na(source))==0,])
#[1] 2

Globaltest Pathway analysis with a matrix

I have a matrix with SAGE count data and I want to test for GO enrichment and pathway enrichment. Therefore I want to use the globaltest package in R. My data looks like this:
data_file
KI_1 KI_2 KI_4 KI_5 KI_6 WT_1 WT_2 WT_3 WT_4 WT_6
ENSMUSG00000002012 215 141 102 127 138 162 164 114 188 123
ENSMUSG00000028182 13 5 13 12 8 10 7 13 7 14
ENSMUSG00000002017 111 72 70 170 52 87 117 77 226 122
ENSMUSG00000028184 547 312 162 226 280 501 603 407 355 268
ENSMUSG00000002015 1712 1464 825 1038 1189 1991 1950 1457 1240 883
ENSMUSG00000028180 1129 944 766 869 737 1223 1254 865 871 844
The rownames contain Ensembl gene IDs and each column represents a sample. These samples can be divided into two groups for testing pathway enrichment, the KI1 group and the WT2 group:
groups <- c("KI1","KI1","KI1","KI1","KI1","WT2","WT2","WT2","WT2","WT2")
I found the function gtKEGG to do the pathway analysis, but my question is how to use it correctly. When I run the function I don't get any error, but my output looks like this:
> gtKEGG(groups, t(data_file), annotation="org.Mm.eg.db")
holm alias p-value Statistic Expected Std.dev #Cov
00380 NA Tryptophan metabolism NA NA NA NA 0
01100 NA Metabolic pathways NA NA NA NA 0
02010 NA ABC transporters NA NA NA NA 0
04975 NA Fat digestion and absorption NA NA NA NA 0
04142 NA Lysosome NA NA NA NA 0
04012 NA ErbB signaling pathway NA NA NA NA 0
04110 NA Cell cycle NA NA NA NA 0
04360 NA Axon guidance NA NA NA NA 0
Can anyone help me with this question? Thanks! :)
I found the solution!
library(globaltest)
library(org.Mm.eg.db)
# map the Ensembl gene IDs used as rownames of data_file to Entrez gene IDs
eg <- as.list(org.Mm.egENSEMBL2EG)
# gtKEGG works with Entrez IDs, so pass the mapping via probe2entrez
KEGG <- gtKEGG(as.factor(groups), t(data_file), probe2entrez = eg, annotation = "org.Mm.eg.db")
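For the GO enrichment mentioned in the question, globaltest also provides gtGO(), which should take the same kind of arguments; a sketch (not verified on this data):
library(globaltest)
library(org.Mm.eg.db)
eg <- as.list(org.Mm.egENSEMBL2EG)   # Ensembl -> Entrez mapping, as above
# GO enrichment, here restricted to the Biological Process ontology
GO <- gtGO(as.factor(groups), t(data_file), probe2entrez = eg,
           annotation = "org.Mm.eg.db", ontology = "BP")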
