NAs introduced by coercion. How to get around this? Thank you for your help.
water <- 785.5
volume_water <- as.numeric(as.character(c("water", water)))
volume_water
[1] NA 785.5
This is a data frame called data:
Substance v1
1 abc 12.5
2 defg 100.0
3 hijk 100.0
4 abfg 2.0
I want to achieve:
rbind(data, volume_water)
Substance v1
1 abc 12.5
2 defg 100.0
3 hijk 100.0
4 abfg 2.0
5 water 785.5
The NA arises because c("water", water) coerces everything to character, and as.numeric("water") has no numeric interpretation. I would instead create the object as a data frame, i.e.:
volume_water = data.frame(Substance="water", v1=785.5)
Then you can rbind it with data.
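A minimal sketch of the full sequence, assuming data is the four-row data frame shown above:
volume_water <- data.frame(Substance = "water", v1 = 785.5)
# rbind() matches columns by name, so Substance stays character and v1 stays numeric
data <- rbind(data, volume_water)
data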
I am trying to get the average of all 6 quizzes for each male student.
Here is part of the code that I've tried:
a<-subset(mydf,Sex=="M")
b<-a[4:9]
b
sum(b[1:6])
My logic is to get a table that contains only the male students with each of their 6 quizzes, then sum the table and divide by the number of male students. But I think there should be an easier way to do this.
Sample data:
df <- data.frame(Section=c(rep('A',9)),
Degree=c(rep('MBA',4),'MS','MBA','MBA','MS','MBA'),
Sex=c(rep('M',5),'F','M','M','F'),
Quiz1=c(0,10,2,2,8,6,6,2,3),
Quiz2=c(0,1,4,4,1,5,0,3,9),
Quiz3=c(6,5,6,6,4,2,7,9,3),
Quiz4=c(5,4,5,5,10,5,7,7,3),
Quiz5=c(7,3,6,3,10,7,6,10,5),
Quiz6=c(3,8,6,6,5,8,10,10,5))
How about this:
data.frame(df[which(df$Sex=='M'),],QuizMeans=rowMeans(df[which(df$Sex=='M'),c(4:9)]))
Note: "c(4:9)" in the code above is takes the row average for quiz columns 4-9.
So we're calculating quiz scores for each individual this way.
Output:
Section Degree Sex Quiz1 Quiz2 Quiz3 Quiz4 Quiz5 Quiz6 QuizMeans
1 A MBA M 0 0 6 5 7 3 3.500000
2 A MBA M 10 1 5 4 3 8 5.166667
3 A MBA M 2 4 6 5 6 6 4.833333
4 A MBA M 2 4 6 5 3 6 4.333333
5 A MS M 8 1 4 10 10 5 6.333333
7 A MBA M 6 0 7 7 6 10 6.000000
8 A MS M 2 3 9 7 10 10 6.833333
Then if you wanted the mean of their means (i.e. the grand mean), you could store the above as something like df and use mean() on the QuizMeans column, like this:
df <- data.frame(df[which(df$Sex=='M'),],QuizMeans=rowMeans(df[which(df$Sex=='M'),c(4:9)]))
mean(df$QuizMeans)
[1] 5.285714
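The same result can be had by storing the male subset once, which avoids repeating the which() expression; a sketch using the df defined above:
males <- df[df$Sex == 'M', ]                # keep only male students
males$QuizMeans <- rowMeans(males[, 4:9])   # per-student mean over the 6 quizzes
mean(males$QuizMeans)                       # grand mean, 5.285714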
If there are missing values in your data, you'll need to add na.rm=TRUE to either the mean() or rowMeans() function, like this:
mean(df$QuizMeans, na.rm=TRUE)
[1] 5.285714
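The rowMeans() version would look like this (rowMeans() accepts the same na.rm argument):
rowMeans(df[which(df$Sex=='M'), 4:9], na.rm = TRUE)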
You could use the following without specifying column positions:
ans <- sum(df[df$Sex=="M", grepl("Quiz",names(df))])/sum(df$Sex=="M")
# 31.71429
If you know the column positions:
ans <- sum(df[df$Sex=="M", 4:9])/sum(df$Sex=="M")
# 31.71429
Note that this is the average total score per male student (all six quizzes summed); divide by 6 to get the per-quiz grand mean of 5.285714 from the answer above.
Data
df <- data.frame(Section=c(rep('A',9)),
Degree=c(rep('MBA',4),'MS','MBA','MBA','MS','MBA'),
Sex=c(rep('M',5),'F','M','M','F'),
Quiz1=c(0,10,2,2,8,6,6,2,3),
Quiz2=c(0,1,4,4,1,5,0,3,9),
Quiz3=c(6,5,6,6,4,2,7,9,3),
Quiz4=c(5,4,5,5,10,5,7,7,3),
Quiz5=c(7,3,6,3,10,7,6,10,5),
Quiz6=c(3,8,6,6,5,8,10,10,5))
Use dplyr. Note that the sample data codes sex as "M", and the question asks for the average over all six quizzes, not just Quiz6:
library(dplyr)
mydf %>%
  filter(Sex == "M") %>%
  summarise(avg_quiz = mean(c(Quiz1, Quiz2, Quiz3, Quiz4, Quiz5, Quiz6)))
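If you instead want each male student's own six-quiz mean (as in the rowMeans answer above), a dplyr sketch, assuming dplyr 1.0+ for across():
library(dplyr)
mydf %>%
  filter(Sex == "M") %>%
  mutate(QuizMeans = rowMeans(across(Quiz1:Quiz6)))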
I am analysing some data and need help.
Basically, I have a dataset that looks like this:
date <- seq(as.Date("2017-04-01"),as.Date("2017-05-09"),length.out=40)
switch <- c(rep(1:2,each=10),rep(1:2,each=10))
O2 <- runif(40,min=21.02,max=21.06)
CO2 <- runif(40,min=0.076,max=0.080)
test.data <- data.frame(date,switch,O2,CO2)
As can be seen, there's a switch column that alternates between 1 and 2 every 10 data points. I want to write code that does the following: whenever the switch column changes its value (from 1 to 2, or 2 to 1), delete the first 5 rows of data after the switch (i.e. keep only the last 5 data points for all 4 variables), average the remaining data points for O2 and CO2, and put them in 2 new columns (avg.O2 and avg.CO2) before the next switch. Then repeat this process until the end.
It's quite easy to do manually on paper or in Excel, but my real dataset comprises thousands of data points, and I would like R to do it automatically for me. Does anyone have ideas that could help?
Please find my edits, which should work for both regular and irregular switching:
date <- seq(as.Date("2017-04-01"),as.Date("2017-05-09"),length.out=40)
switch <- c(rep(1:2,each=10),rep(1:2,each=10))
O2 <- runif(40,min=21.02,max=21.06)
CO2 <- runif(40,min=0.076,max=0.080)
test.data <- data.frame(date,switch,O2,CO2)
CleanMachineData <- function(Data, SwitchData, UnreliableRows = 5){
  # Turn the switch column into a proper grouping column: (1,2,1,2) -> (1,2,3,4).
  # rle() gives the length of each run of identical switch values, so this
  # works whether the switching is regular or irregular.
  grouplength <- rle(Data[,"switch"])$lengths
  # Repeat each group id for the length of its run to get one label per row
  Data$group <- rep(seq_along(grouplength), times = grouplength)
  # Row numbers of the first row of each run: the starting row plus every
  # row at which the switch value changes
  switchRow <- c(0, which(diff(SwitchData) != 0)) + 1
  # "ToRemove" collects the first UnreliableRows row numbers after each switch.
  # Map() returns one sequence per switch; unlist() flattens them into a vector
  ToRemove <- unlist(Map(seq, switchRow, switchRow + UnreliableRows - 1))
  # Drop the unreliable rows (negative indices remove rows) and return the rest
  Data[-ToRemove,]
}
dat <- CleanMachineData(test.data, test.data$switch, 5)
dat
date switch O2 CO2 group
6 2017-04-05 1 21.03922 0.07648886 1
7 2017-04-06 1 21.04071 0.07747368 1
8 2017-04-07 1 21.05742 0.07946615 1
9 2017-04-08 1 21.04673 0.07782362 1
10 2017-04-09 1 21.04966 0.07936446 1
16 2017-04-15 2 21.02526 0.07833825 2
17 2017-04-16 2 21.04511 0.07747774 2
18 2017-04-17 2 21.03165 0.07662803 2
19 2017-04-18 2 21.03252 0.07960098 2
20 2017-04-19 2 21.04032 0.07892145 2
26 2017-04-25 1 21.03691 0.07691438 3
27 2017-04-26 1 21.05846 0.07857017 3
28 2017-04-27 1 21.04128 0.07891908 3
29 2017-04-28 1 21.03837 0.07817021 3
30 2017-04-29 1 21.02334 0.07917546 3
36 2017-05-05 2 21.02890 0.07723042 4
37 2017-05-06 2 21.04606 0.07979641 4
38 2017-05-07 2 21.03822 0.07985775 4
39 2017-05-08 2 21.04136 0.07781525 4
40 2017-05-09 2 21.05375 0.07941123 4
aggregate(cbind(O2,CO2) ~ group, dat, mean)
group O2 CO2
1 1 21.04675 0.07812336
2 2 21.03497 0.07819329
3 3 21.03967 0.07834986
4 4 21.04166 0.07882221
# crazier, irregular switching
test.data2 <- test.data
test.data2$switch <- unlist(mapply(rep, 1:2, times = 1, each = c(10,8,10,5,3,10)))[1:20]
dat2 <- CleanMachineData(test.data2, test.data2$switch, 5)
dat2
date switch O2 CO2 group
6 2017-04-05 1 21.03922 0.07648886 1
7 2017-04-06 1 21.04071 0.07747368 1
8 2017-04-07 1 21.05742 0.07946615 1
9 2017-04-08 1 21.04673 0.07782362 1
10 2017-04-09 1 21.04966 0.07936446 1
16 2017-04-15 2 21.02526 0.07833825 2
17 2017-04-16 2 21.04511 0.07747774 2
18 2017-04-17 2 21.03165 0.07662803 2
24 2017-04-23 1 21.05658 0.07669662 3
25 2017-04-24 1 21.04452 0.07983165 3
26 2017-04-25 1 21.03691 0.07691438 3
27 2017-04-26 1 21.05846 0.07857017 3
28 2017-04-27 1 21.04128 0.07891908 3
29 2017-04-28 1 21.03837 0.07817021 3
30 2017-04-29 1 21.02334 0.07917546 3
36 2017-05-05 2 21.02890 0.07723042 4
37 2017-05-06 2 21.04606 0.07979641 4
38 2017-05-07 2 21.03822 0.07985775 4
# You can try a range of UnreliableRows values with the following
lapply(5:7, function(x) {
dat <- CleanMachineData(test.data2, test.data2$switch, x)
list(data = dat, means = aggregate(cbind(O2,CO2)~group, dat, mean))
})
Use
test.data[rep(c(FALSE, TRUE), each=5),]
to keep only the last five rows of each group of 10 rows (the logical vector is recycled along all rows).
Then you can use aggregate:
d2 <- test.data[rep(c(FALSE, TRUE), each=5),]
aggregate(cbind(O2, CO2) ~ 1, data=d2, FUN=mean)
If you want the average of every 5-row group:
aggregate(cbind(O2, CO2) ~ gl(k=5, n=nrow(d2)/5L), data=d2, FUN=mean)
Here is a generalization for an arbitrary number of rows in test.data:
stay <- rep(c(FALSE, TRUE), each=5, length.out=nrow(test.data))
d2 <- test.data[stay,]
group <- gl(k=5, n=nrow(d2)/5L+1L, length=nrow(d2))
aggregate(cbind(O2, CO2) ~ group, data=d2, FUN=mean)
Here is a variant for mixing the data with the averages:
group <- gl(k=10, n=nrow(test.data)/10L+1L, length=nrow(test.data))
L <- split(test.data, group)
mySummary <- function(x) {
if (nrow(x) <= 5) return(NULL)
x <- x[-(1:5),]
d.avg <- aggregate(cbind(O2, CO2) ~ 1, data=x, FUN=mean)
rbind(x, cbind(date=NA, switch=-1, d.avg))
}
lapply(L, mySummary) # as list of dataframes
do.call(rbind, lapply(L, mySummary)) # as one dataframe
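For comparison, the same delete-then-average logic can be written compactly with dplyr; this is a sketch, assuming the test.data frame defined above (cumsum() over the change points builds the same group ids as CleanMachineData):
library(dplyr)
test.data %>%
  # start a new group each time the switch value changes
  mutate(group = cumsum(c(1, diff(switch) != 0))) %>%
  group_by(group) %>%
  # drop the first 5 (unreliable) rows of each group
  filter(row_number() > 5) %>%
  # average what remains
  summarise(avg.O2 = mean(O2), avg.CO2 = mean(CO2))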
In this hypothetical scenario, I have performed 5 different analyses on 13 chemicals, resulting in a score assigned to each chemical within each analysis. I have created a table as follows:
---- Analysis1 Analysis2 Analysis3 Analysis4 Analysis5
Chem_1 3.524797844 4.477695034 4.524797844 4.524797844 4.096698498
Chem_2 2.827511555 3.827511555 3.248136118 3.827511555 3.234398548
Chem_3 2.682144761 3.474646298 3.017780505 3.682144761 3.236152242
Chem_4 2.134137304 2.596921333 2.95181339 2.649076603 2.472875191
Chem_5 2.367736454 3.027814219 2.743137896 3.271122346 2.796607809
Chem_6 2.293110565 2.917318708 2.724156207 3.293110565 2.530967343
Chem_7 2.475709113 3.105794018 2.708222528 3.475709113 3.088819908
Chem_8 2.013451822 2.259454085 2.683273938 2.723554966 2.400976121
Chem_9 2.345123123 3.050074893 2.682845391 3.291851228 2.700844104
Chem_10 2.327658894 2.848729452 2.580415233 3.327658894 2.881490893
Chem_11 2.411243882 2.98131398 2.554456095 3.411243882 3.109205453
Chem_12 2.340778276 2.576860244 2.549707035 3.340778276 3.236545826
Chem_13 2.394698249 2.90682524 2.542599327 3.394698249 3.12936843
I would like to create columns corresponding to each analysis which contain the rank position for each chemical. For instance, under Analysis1, Chem_1 would have value "1", Chem_2 would have value "2", Chem_3 would have value "3", Chem_7 would have value "4", Chem_11 would have value "5", and so on.
We can use dense_rank from dplyr
library(dplyr)
df %>%
mutate_each(funs(dense_rank(-.)))
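Note that mutate_each() has since been deprecated in dplyr; with current releases (1.0 or later) the equivalent uses across(), assuming as above that all columns are numeric:
library(dplyr)
df %>%
  mutate(across(everything(), ~ dense_rank(-.x)))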
In base R, we can do
df[] <- lapply(-df, rank, ties.method="min")
In data.table, we can use
library(data.table)
setDT(df)[, lapply(-.SD, frank, ties.method="dense")]
To avoid the copies from multiplying with -, as #Arun mentioned in the comments
lapply(.SD, frankv, order=-1L, ties.method="dense")
You can also do this in base R, using rank() on the negated columns. (Note that order() would give the permutation of row indices, i.e. which chemical sits in each rank position, not the ranks themselves.)
cbind("..." = df[,1], data.frame(do.call(cbind,
lapply(df[,-1], function(x) rank(-x, ties.method = "min")))))
... Analysis1 Analysis2 Analysis3 Analysis4 Analysis5
1 Chem_1 1 1 1 1 1
2 Chem_2 2 2 2 2 4
3 Chem_3 3 3 3 3 3
4 Chem_4 12 11 4 13 12
5 Chem_5 7 6 5 11 9
6 Chem_6 11 8 6 9 11
7 Chem_7 4 4 7 4 7
8 Chem_8 13 13 8 12 13
9 Chem_9 8 5 9 10 10
10 Chem_10 10 10 10 8 8
11 Chem_11 5 7 11 5 6
12 Chem_12 9 12 12 7 2
13 Chem_13 6 9 13 6 5
If I'm not mistaken, you want to have the column-wise rank of your table. Here is my solution:
m=data.matrix(df) # converts data frame to matrix, convert your data to matrix accordingly
apply(m, 2, function(c) rank(c)) # increasingly
apply(m, 2, function(c) rank(-c)) # decreasingly
However, I believe you could solve it by yourself with the help of the answers to this question
Get rank of matrix entries?
I’m in the process of cleaning some longitudinal data and I have several missing cases. I am trying to use an imputation that incorporates observations before and after the missing case. I’m wondering how I can go about addressing the issues detailed below.
I've been trying to break the problem apart into smaller, more manageable operations and objects; however, the solutions I keep coming to force me to use conditional formatting based on the rows immediately above and below a missing value, and, quite frankly, I'm at a bit of a loss as to how to do this. I would love a little guidance if you know of a good technique I can use or experiment with, or of any good search terms I can use when looking up a solution.
The details are below:
#Fake dataset creation
id <- c(1,1,1,1,1,1,1,2,2,2,2,2,2,2,3,3,3,3,3,3,3,4,4,4,4,4,4,4)
time <-c(0,1,2,3,4,5,6,0,1,2,3,4,5,6,0,1,2,3,4,5,6,0,1,2,3,4,5,6)
ss <- c(1,3,2,3,NA,0,0,2,4,0,NA,0,0,0,4,1,2,4,2,3,NA,2,1,0,NA,NA,0,0)
mydat <- data.frame(id, time, ss)
The goal here is to find a way to get the mean of the value before (3) and after (0) the NA value for ID #1 (variable ss), so that the data look like this: 1,3,2,3,1.5,0,0.
ID# 2 (variable ss) should look like this: 2,4,0,0,0,0,0
ID #3 (variable ss) should use a last observation carried forward approach, so it would need to look like this: 4,1,2,4,2,3,3
ID #4 (variable ss) has two consecutive NA values and should not be changed. It will be flagged for a different analysis later in my project. So, it should look like this: 2,1,0,NA,NA,0,0 (no change).
I use a package, smwrBase; the syntax for filling in only 1 missing value is below, but it doesn't address id.
smwrBase::fillMissing(ss, max.fill=1)
The zoo package might be more standard, same issue though.
zoo::na.approx(ss, maxgap=1)
Below is an approach that accounts for the variable id. Current interpolation approaches don't like to fill in the last value, so I added a manual if statement for that. It's a bit brute force, as there might be a tapply approach out there.
> id <- c(1,1,1,1,1,1,1,2,2,2,2,2,2,2,3,3,3,3,3,3,3,4,4,4,4,4,4,4)
> time <-c(0,1,2,3,4,5,6,0,1,2,3,4,5,6,0,1,2,3,4,5,6,0,1,2,3,4,5,6)
> ss <- c(1,3,2,3,NA,0,0,2,4,0,NA,0,0,0,4,1,2,4,2,3,NA,2,1,0,NA,NA,0,0)
> mydat <- data.frame(id, time, ss, ss2=NA_real_)
> for (i in unique(id)) {
+ # interpolate for gaps
+ mydat$ss2[mydat$id==i] <- zoo::na.approx(ss[mydat$id==i], maxgap=1, na.rm=FALSE)
+ # extension for gap as last value
+ if(is.na(mydat$ss2[mydat$id==i][length(mydat$ss2[mydat$id==i])])) {
+ mydat$ss2[mydat$id==i][length(mydat$ss2[mydat$id==i])] <-
+ mydat$ss2[mydat$id==i][length(mydat$ss2[mydat$id==i])-1]
+ }
+ }
> mydat
id time ss ss2
1 1 0 1 1.0
2 1 1 3 3.0
3 1 2 2 2.0
4 1 3 3 3.0
5 1 4 NA 1.5
6 1 5 0 0.0
7 1 6 0 0.0
8 2 0 2 2.0
9 2 1 4 4.0
10 2 2 0 0.0
11 2 3 NA 0.0
12 2 4 0 0.0
13 2 5 0 0.0
14 2 6 0 0.0
15 3 0 4 4.0
16 3 1 1 1.0
17 3 2 2 2.0
18 3 3 4 4.0
19 3 4 2 2.0
20 3 5 3 3.0
21 3 6 NA 3.0
22 4 0 2 2.0
23 4 1 1 1.0
24 4 2 0 0.0
25 4 3 NA NA
26 4 4 NA NA
27 4 5 0 0.0
28 4 6 0 0.0
The interpolated value for id=1 is 1.5 (avg of 3 and 0), for id=2 it is 0 (avg of 0 and 0), and for id=3 it is 3 (the preceding value, since there is no following value).
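For what it's worth, the per-id loop can likely be collapsed with ave(), which applies a function within each id group. A sketch under the same rules: maxgap=1 limits filling to single-NA gaps, and rule=2 (passed through to approx()) extends the last observed value over a trailing NA; note it would back-fill a leading NA the same way.
# interpolate single-NA gaps within each id; carry the last value into a trailing NA
mydat$ss2 <- ave(mydat$ss, mydat$id, FUN = function(x)
  zoo::na.approx(x, maxgap = 1, na.rm = FALSE, rule = 2))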