Data frame "expand" procedure in R?

This is not a real statistical question, but rather a data preparation question before performing the actual statistical analysis. I have a data frame which consists of sparse data. I would like to "expand" this data to include zeroes for missing values, group by group.
Here is an example of the data (a and b are two factors defining the group, t is the sparse timestamp, and x is the value):
test <- data.frame(
  a = c(1,1,1,1,1,1,1,1,1,1,1),
  b = c(1,1,1,1,1,2,2,2,2,2,2),
  t = c(0,2,3,4,7,3,4,6,7,8,9),
  x = c(1,2,1,2,2,1,1,2,1,1,3))
Assuming I would like to expand the values between t=0 and t=9, this is the result I'm hoping for:
test.expanded <- data.frame(
  a = c(1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1),
  b = c(1,1,1,1,1,1,1,1,1,1,2,2,2,2,2,2,2,2,2,2),
  t = c(0,1,2,3,4,5,6,7,8,9,0,1,2,3,4,5,6,7,8,9),
  x = c(1,0,2,1,2,0,0,2,0,0,0,0,0,1,1,0,2,1,1,3))
Zeroes have been inserted for all missing values of t, so every group now covers the full time range. This makes the data much easier to work with afterwards.
I have a quick-and-dirty implementation which sorts the data frame and loops through each of its lines, adding missing lines one at a time. But I'm not entirely satisfied with the solution. Is there a better way to do it?
For those who are familiar with SAS, it is similar to PROC EXPAND.
Thanks!

As you noted in a comment on the other answer, doing it by group is easy with plyr, which just leaves how to "fill in" each data set. My approach is to use merge.
library("plyr")
test.expanded <- ddply(test, c("a","b"), function(DF) {
DF <- merge(data.frame(t=0:9), DF[,c("t","x")], all.x=TRUE)
DF[is.na(DF$x),"x"] <- 0
DF
})
merge with all.x=TRUE will make the missing values NA, so the second line of the function is needed to replace those NAs with 0's.
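For what it's worth, the same fill-in can be written more compactly with tidyr's complete() (a sketch, assuming a reasonably recent tidyr is available; this was not part of the original answer):
library(tidyr)
# expand each observed (a, b) pair to the full t = 0:9 grid,
# filling the missing x values with 0
test.expanded <- complete(test, nesting(a, b), t = 0:9, fill = list(x = 0))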

This is convoluted but works fine:
test <- data.frame(
  a = c(1,1,1,1,1,1,1,1,1,1,1),
  b = c(1,1,1,1,1,2,2,2,2,2,2),
  t = c(0,2,3,4,7,3,4,6,7,8,9),
  x = c(1,2,1,2,2,1,1,2,1,1,3))
my.seq <- seq(0, 9)
not.t <- !(my.seq %in% test$t)  # time points never observed anywhere
# append one new row per missing time point; the other columns become NA
test[nrow(test) + seq_along(my.seq[not.t]), "t"] <- my.seq[not.t]
test
#------------
a b t x
1 1 1 0 1
2 1 1 2 2
3 1 1 3 1
4 1 1 4 2
5 1 1 7 2
6 1 2 3 1
7 1 2 4 1
8 1 2 6 2
9 1 2 7 1
10 1 2 8 1
11 1 2 9 3
12 NA NA 1 NA
13 NA NA 5 NA
Not sure if you want it sorted by t afterwards or not. If so, easy enough to do:
https://stackoverflow.com/a/6871968/636656
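For example (a one-line sketch, not taken from the linked answer):
test <- test[order(test$a, test$b, test$t), ]  # group columns first, then t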

Related

Apply the same calculation in different data frames in R

I am trying to loop over many data frames in R, and I feel like this is a rather basic question. However, I have only found similar questions that were solved with specific functions that don't match my problem (like calculating means or medians, changing column names, ...). I hope to find a more general solution here that can be applied to any change or calculation across various data frames.
I have a lot (about 500) of data frames that look somewhat like this (very simplified):
df0100
a b c d
1 4 3 5 NA
2 2 5 4 NA
3 4 4 3 NA
...
df0130
a b c d
1 3 2 3 NA
2 4 5 3 NA
3 4 3 2 NA
...
For each of them, I want to calculate a new value (also simplified here) from the values of a and c in the first row, and insert that value in every row of column d. It works fine like this for a single data frame:
df0100$d <- (df0100[1,1] * (df0100[1,3] + 13.5)) / (3 * exp(df0100[1,3])) / 100
which leads to
df0100
a b c d
1 4 3 5 36.60858
2 2 5 4 36.60858
3 4 4 3 36.60858
....
Since I don't want to do this for every single one of the 500 data frames, I saved them in a list and tried to loop over them as follows. I thought the easiest way would be to replace the former 'df0100' with each data frame in turn, but neither version worked. Can anyone tell me what I have to change?
my_files <- list.files(pattern=".csv")
my_data <- lapply(my_files, read.csv)
Version 1:
for (n in my_data)
{
  n$d <- (n[1,1] * (n[1,3] + 13.5)) / (3 * exp(n[1,3])) / 100
}
Version 2:
my_data <- lapply(my_data, function(n){
  n$d <- (n[1,1] * (n[1,3] + 13.5)) / (3 * exp(n[1,3])) / 100
})
This is my first question here, I hope it makes sense to you.
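For reference, a likely fix (a sketch, not from the original thread): in Version 1, n is a copy that is discarded after each iteration, and in Version 2 the anonymous function returns the result of the assignment rather than the data frame, so lapply collects the wrong thing. Returning n explicitly addresses both issues:
my_data <- lapply(my_data, function(n) {
  n$d <- (n[1,1] * (n[1,3] + 13.5)) / (3 * exp(n[1,3])) / 100
  n  # return the modified data frame so lapply keeps it
})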

How to remove columns of data from a data frame using a vector with a regular expression

I am trying to remove columns from a dataframe using a vector of numbers, with those numbers being just a part of the whole column header. What I'm looking to use is something like the wildcard "*" in unix, so that I can say that I want to remove columns with labels xxxx, xxkx, etc... To illustrate what I mean, if I have the following data:
data_test_read <- read.table("batch_1_8c9.structure-edit.tsv",sep="\t", header=TRUE)
data_test_read[1:5,1:5]
samp pop X12706_10 X14223_16 X14481_7
1 BayOfIslands_s088.fq 1 4 1 3
2 BayOfIslands_s088.fq 1 4 1 3
3 BayOfIslands_s089.fq 1 4 1 3
4 BayOfIslands_s089.fq 1 4 3 3
5 BayOfIslands_s090.fq 1 4 1 3
And I want to take out, for example, columns with headers (X12706_10, X14481_7), the following works
data_subs1=subset(data_test_read, select = -c(X12706_10, X14481_7))
data_subs1[1:4,1:4]
samp pop X14223_16 X15213_19
1 BayOfIslands_s088.fq 1 1 3
2 BayOfIslands_s088.fq 1 1 3
3 BayOfIslands_s089.fq 1 1 3
4 BayOfIslands_s089.fq 1 3 3
However, what I need is to be able to identify these columns by only the numbers, so, using (12706,14481). But, if I try this, I get the following
data_subs2=subset(data_test_read, select = -c(12706,14481))
data_subs2[1:4,1:4]
samp pop X12706_10 X14223_16
1 BayOfIslands_s088.fq 1 4 1
2 BayOfIslands_s088.fq 1 4 1
3 BayOfIslands_s089.fq 1 4 1
4 BayOfIslands_s089.fq 1 4 3
This is clearly because I haven't specified anything to do with the "x", or the "_" or what is after the underscore. I've read so many answers on using regular expressions, and I just can't seem to sort it out. Any thoughts, or pointers to what I might turn to would be appreciated.
First, you can extract the numbers from the headers:
# for testing
col_names <- c("X12706_10","X14223_16","X14481_7")
# in practice, use
# col_names <- names(data_test_read)
samples <- gsub("X(\\d+)_.*","\\1",col_names)
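For the three test names above, this yields (a quick check of the pattern):
samples
# [1] "12706" "14223" "14481"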
Then find the indexes of the samples you want to drop:
samples_to_drop <- c(12706, 14481)
cols_to_drop <- match(samples_to_drop, samples)
Then you can use
data_subs2 <- subset(data_test_read, select = -cols_to_drop)
to actually get rid of those columns.
Perhaps put this all in a function to make it easier to use
sample_subset <- function(x, drop) {
  samples <- gsub("X(\\d+)_.*", "\\1", names(x))
  subset(x, select = -match(drop, samples))
}
sample_subset(data_test_read, c(12706, 14481))
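An alternative sketch that skips the extraction step and drops columns by matching names directly with a regular expression (same idea, assuming the X<number>_<number> naming scheme holds throughout):
drop <- c(12706, 14481)
pattern <- paste0("^X(", paste(drop, collapse = "|"), ")_")  # "^X(12706|14481)_"
data_subs2 <- data_test_read[, !grepl(pattern, names(data_test_read))]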

replace values in one dataset with values in another dataset R

I have a seemingly simple problem that has me stumped. I have a df, say:
x y z
0 1 2
3 5 4
1 0 5
0 5 0
and another:
x y z
1 5 6
2 4 5
4 5 7
5 8 5
I want to replace the zero values in df1 with the corresponding values in df2. E.g., cell 1 of df1 would be 1 instead of zero. I want this for all columns in the data frame. Can you help me with the code? I can't seem to figure it out. Thanks!
First, you can locate the indices of 0's using which
zero_locations <- which(df1 == 0, arr.ind=TRUE)
Then, you can use the locations to make the replacements:
df1[zero_locations] <- df2[zero_locations]
As David Arenburg pointed out in the comments, which isn't strictly necessary:
zero_locations <- df1 == 0
will work as well.
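Putting it all together with the question's data (a self-contained sketch; df1 and df2 are reconstructed from the tables above):
df1 <- data.frame(x = c(0,3,1,0), y = c(1,5,0,5), z = c(2,4,5,0))
df2 <- data.frame(x = c(1,2,4,5), y = c(5,4,5,8), z = c(6,5,7,5))
zero_locations <- which(df1 == 0, arr.ind = TRUE)  # row/column pairs of the zeroes
df1[zero_locations] <- df2[zero_locations]
df1
#   x y z
# 1 1 1 2
# 2 3 5 4
# 3 1 5 5
# 4 5 5 5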

Reformatting data in order to plot 2D continuous heatmap

I have data stored in a data.frame that I would like to plot as a continuous heat map. I have tried using the interp function from the akima package, but as the data can be very large (2 million rows), I would like to avoid this if possible, since it takes a very long time. Here is the format of my data:
l1 <- c(1,2,3)
grid1 <- expand.grid(l1, l1)
lprobdens <- c(0,2,4,2,8,10,4,8,2)
df <- cbind(grid1, lprobdens)
colnames(df) <- c("age1", "age2", "probdens")
age1 age2 probdens
1 1 0
2 1 2
3 1 4
1 2 2
2 2 8
3 2 10
1 3 4
2 3 8
3 3 2
I would like to format it as a length(unique(df$age1)) x length(unique(df$age2)) matrix. I gather that once it is formatted in this manner, I would be able to use basic functions such as image to plot a continuous 2D heat map similar to that created using the akima package. Here is how I think the transformed data should look. Please correct me if I am wrong.
1 2 3
1 0 2 4
2 2 8 8
3 4 10 2
It seems as though ldply might be the right tool, but I can't seem to sort out how it works.
I forgot to mention: the age information is always continuous and regular, such that the set of age1 values equals the set of age2 values (in the real data, age1 >= age2). I guess this means that it may be classed as continuous data as it stands and doesn't require the interp function.
OK, I think I get what you want. It's just a matter of reshaping the data with reshape2's dcast function. The value.var argument is only there to avoid the warning message that R tried to guess the value column to use; the result does not change if you omit it.
library(reshape2)
as.matrix(dcast(df, age1 ~ age2, value.var = "probdens")[-1])
1 2 3
[1,] 0 2 4
[2,] 2 8 8
[3,] 4 10 2
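From there, the heat map itself can be drawn with base graphics (a minimal sketch; the axis handling is an assumption, not part of the original answer):
library(reshape2)
m <- as.matrix(dcast(df, age1 ~ age2, value.var = "probdens")[-1])
# image() takes the grid coordinates plus the matrix of values
image(x = sort(unique(df$age1)), y = sort(unique(df$age2)), z = m,
      xlab = "age1", ylab = "age2")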

increase in one variable nested within another column in R + setting 0 as starting value

I'm trying to use the diff function to calculate the increase in a variable ("damage") in this dataset (df). I want to fill the column "damage_new" with this new variable. The values that you see now are the values I would like to have.
df = data.frame(id=c(1,1,1,2,2), trial=c(1,3,4,1,2), damage=c(1,NA,3,1,5))
df
ID TRIAL DAMAGE DAMAGE_NEW
1 1 1 0
1 3 NA NA
1 4 3 NA
2 1 1 0
2 2 5 4
If I run diff(df$damage), it will calculate the difference across the whole dataset.
Two things that I haven't managed are:
- How to nest the difference within the values of another column. Specifically, I want to calculate the damage increase (for the whole dataset), but within a single individual (ID), of which I have repeated measurements.
- I would also like the damage_new column to be the same length as the rest of the dataset (so I can attach it), and, for each individual, have the first value of damage_new set to 0, since the first measurement obviously has no reference.
To further describe the dataset: I have NAs in the "damage" column, which I suspect will lead to more NAs in the damage_new column, but I would like to keep them (and I wonder how the function deals with them?). I also don't have the same number of measurements per individual (they have different numbers of trials, with some missing in between).
thanks a lot for the always fast and efficient answers!
The dplyr package is great for this kind of thing:
library(dplyr)
df %>% group_by(id) %>% mutate(damage_new=c(0,diff(damage)))
Source: local data frame [5 x 4]
Groups: id
id trial damage damage_new
1 1 1 1 0
2 1 3 NA NA
3 1 4 3 NA
4 2 1 1 0
5 2 2 5 4
You can read more about dplyr usage in the package documentation.
Update
If you'd like to go with the base R, you could do:
df$damage_new <- ave(df$damage,df$id,FUN=function(v) c(0,diff(v)))
which will produce the same df.
The data.table package is your friend here:
> library(data.table)
> setDT(df)
> setkey(df, id, trial)
> df[,new_damage:=c(0,diff(damage)),by=id]
> df
id trial damage new_damage
1: 1 1 1 0
2: 1 3 NA NA
3: 1 4 3 NA
4: 2 1 1 0
5: 2 2 5 4
As for how diff handles NA: any subtraction involving NA gives NA:
> diff(c(1,3,4,NA,5,7))
[1] 2 1 NA NA 2
