R - Appending data to a data.frame

I have a data set which looks like this:
job_id start_hour duration
1 14 3
2 8 2
job_id: the id of the job
start_hour: the hour at which the job starts
duration: the number of hours required for the job
I would like to turn it into a table where each line represents an hour for the job:
job_id hour
1 14
1 15
1 16
2 8
2 9
So for each job I would have as many lines as the job requires hours.
Is there an elegant way to do this in R?
Many thanks

One way to do this is with the plyr package (where d is your original data frame):
library(plyr)
ddply(d, .(job_id), function(d) {
  data.frame(job_id = d$job_id,
             hour = d$start_hour:(d$start_hour + d$duration - 1))
})

This is also possible with simple base functions. First, an input data.frame:
# sample data
dd <- data.frame(
  job_id = 1:2,
  start_hour = c(14, 8),
  duration = c(3, 2)
)
Now we use Map to walk through each row and expand it to the right size. Then we combine all the newly expanded rows into one data.frame with do.call(rbind,...)
# transformation
do.call(rbind, Map(function(id, start, dur) {
  data.frame(
    job_id = rep(id, dur),
    hour = seq(from = start, by = 1, length.out = dur))
}, dd$job_id, dd$start_hour, dd$duration))
which gives us
job_id hour
1 1 14
2 1 15
3 1 16
4 2 8
5 2 9
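If the table is large, the same expansion can also be written fully vectorised with rep() and sequence(), avoiding the per-row function calls. A minimal sketch using the dd defined above:
# vectorised alternative (sketch)
data.frame(
  job_id = rep(dd$job_id, dd$duration),
  hour = rep(dd$start_hour, dd$duration) + sequence(dd$duration) - 1
)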


R: loop through multiple sub-groups using functions

Hi I am trying to learn how to loop through multiple groups within a data frame and apply certain arithmetic operations. I do not have a programming background and am struggling to loop through the multiple conditions.
My data looks like the following:
Event = c(1,1,1,1,1,2,2,2,2,2)
Indiv1=c(4,5,6,11,45,66,8,9,32,45)
Indiv2=c(7,81,91,67,12,34,56,78,90,12)
Category=c(1,1,2,2,2,1,2,2,1,1)
Play_together=c(1,0,1,1,1,1,1,1,0,1)
Money=c(23,11,78,-9,-12,345,09,43,21,90)
z = data.frame(Event,Indiv1,Indiv2,Category,Play_together,Money)
What I would like to do is to look through each event and each category and take the average value of Money in cases where Play_together == 1. When Play_together==0, then I would like to apply Money/100.
I understand that the loop would look something like the following:
for (event in unique(z$Event)) {
  for (category in unique(z$Category)) {
    # define the avg or division function here
  }
}
However, I cannot seem to implement this using a nested loop. I saw another post (link: apply function for each subgroup) which uses the dplyr package. I was wondering if someone could help me implement this without using any packages (I know this might take longer compared to using R packages). I am trying to learn R and this is the first time I am working with nested loops.
The final output will look like this, where for event 1 the following holds:
a) For category 1:
Play_together == 1 in row 1; we take the avg of the Money value, hence final output = 23/1 = 23
Play_together == 0 in row 2; we take Money/100 = 0.11
b) For category 2:
Play_together == 1 for all observations. We take the avg Money over all three observations.
This holds similarly for Event 2. In my actual dataset I have 600 events and the number of categories ranges from 1 to 10. Some events may have only 1 category and at most 10 categories, so any function needs to be extremely flexible. The total number of observations in my dataset is around 1.5 million, so any change to the looping process that reduces the time taken is going to be extremely helpful (although at this stage my priority is the looping process itself).
Once again it would be a great help if you can show me how to use nested looping and explain the steps in brief. Much appreciated.
Will something like this do?
I know it uses dplyr, but that package is made for exactly this kind of job ;-)
Event = c(1,1,1,1,1,2,2,2,2,2)
Indiv1=c(4,5,6,11,45,66,8,9,32,45)
Indiv2=c(7,81,91,67,12,34,56,78,90,12)
Category=c(1,1,2,2,2,1,2,2,1,1)
Play_together=c(1,0,1,1,1,1,1,1,0,1)
Money=c(23,11,78,-9,-12,345,09,43,21,90)
z = data.frame(Event,Indiv1,Indiv2,Category,Play_together,Money)
library(dplyr)
df_temp <- z %>%
  group_by(Event, Category, Play_together) %>%
  summarise(money_mean = mean(Money)) %>%
  mutate(final_output = ifelse(Play_together == 0, money_mean / 100, money_mean)) %>%
  select(-money_mean)

df <- z %>%
  left_join(df_temp, by = c("Event", "Category", "Play_together")) %>%
  arrange(Event, Category)
Consider base R's by, the object-oriented wrapper to tapply, which is designed to subset a data.frame by factor(s) and, unlike split, can pass each subset into a user-defined function. Then run the conditional logic with ifelse to build the Final_Output field. Finally, stack all the subsetted data.frames into the final object.
# LIST OF DATAFRAMES
by_list <- by(z, z[c("Event", "Category")], function(sub) {
  tmp <- subset(sub, Play_together == 1)
  sub$Final_Output <- ifelse(sub$Play_together == 1, mean(tmp$Money), sub$Money/100)
  return(sub)
})
# APPEND ALL DATAFRAMES
final_df <- do.call(rbind, by_list)
row.names(final_df) <- NULL
final_df
# Event Indiv1 Indiv2 Category Play_together Money Final_Output
# 1 1 4 7 1 1 23 23.00
# 2 1 5 81 1 0 11 0.11
# 3 2 66 34 1 1 345 217.50
# 4 2 32 90 1 0 21 0.21
# 5 2 45 12 1 1 90 217.50
# 6 1 6 91 2 1 78 19.00
# 7 1 11 67 2 1 -9 19.00
# 8 1 45 12 2 1 -12 19.00
# 9 2 8 56 2 1 9 26.00
# 10 2 9 78 2 1 43 26.00
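Note that by() loops over Event within Category, so final_df comes back grouped by Category first (as the row order above shows). If you prefer the Event-then-Category arrangement used in the dplyr answer, one extra line re-sorts it:
# optional: reorder the stacked result (sketch)
final_df <- final_df[order(final_df$Event, final_df$Category), ]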

Search for value within a range of values in two separate vectors

This is my first time posting to Stack Exchange; my apologies, as I'm certain I will make a few mistakes. I am trying to assess false detections in a dataset.
I have one data frame with "true" detections
truth=
ID Start Stop SNR
1 213466 213468 10.08
2 32238 32240 10.28
3 218934 218936 12.02
4 222774 222776 11.4
5 68137 68139 10.99
And another data frame with a list of times that represent possible 'real' detections:
possible=
ID Times
1 32239.76
2 32241.14
3 68138.72
4 111233.93
5 128395.28
6 146180.31
7 188433.35
8 198714.7
I am trying to see if the values in my 'possible' data frame lie between the start and stop values. If so, I'd like to create a third column in possible called "between" and a column in the "truth" data frame called "match". For every value from possible that falls between, I'd like a 1, otherwise a 0. For all of the rows in "truth" that find a match, I'd like a 1, otherwise a 0.
Neither ID nor SNR is important. I'm not looking to match on ID; instead I want to run through the data frame entirely. The output should look something like:
ID Times Between
1 32239.76 0
2 32241.14 1
3 68138.72 0
4 111233.93 0
5 128395.28 0
6 146180.31 1
7 188433.35 0
8 198714.7 0
Alternatively, knowing if any of my 'possible' time values fall within 2 seconds of start or end times would also do the trick (also with 1/0 outputs)
(Thanks for the feedback on the original post)
Thanks in advance for your patience with me as I navigate this system.
I think this can be conceptualised as a rolling join in data.table. Take this simplified example:
truth
# id start stop
#1: 1 1 5
#2: 2 7 10
#3: 3 12 15
#4: 4 17 20
#5: 5 22 26
possible
# id times
#1: 1 3
#2: 2 11
#3: 3 13
#4: 4 28
setDT(truth)
setDT(possible)
melt(truth, measure.vars=c("start","stop"), value.name="times")[
  possible, on="times", roll=TRUE
][, .(id=i.id, truthid=id, times, status=factor(variable, labels=c("in","out")))]
# id truthid times status
#1: 1 1 3 in
#2: 2 2 11 out
#3: 3 3 13 in
#4: 4 5 28 out
The source datasets were:
truth <- read.table(text="id start stop
1 1 5
2 7 10
3 12 15
4 17 20
5 22 26", header=TRUE)
possible <- read.table(text="id times
1 3
2 11
3 13
4 28", header=TRUE)
I'll post a solution that I'm pretty sure works like you want it to in order to get you started. Maybe someone else can post a more efficient answer.
Anyway, first I needed to generate some example data - next time please provide this from your own data set in your post using the function dput(head(truth, n = 25)) and dput(head(possible, n = 25)). I used:
# generate random test data
set.seed(7)
truth <- data.frame(ID = 1:100,
                    Start = sample(5:20, size = 100, replace = TRUE),
                    Stop = sample(21:50, size = 100, replace = TRUE))
possible <- data.frame(Times = sample(1:15, size = 15, replace = FALSE))
After getting sample data to work with, the following solution provides what I believe you are asking for. This should scale directly to your own dataset as it seems to be laid out. Respond below if the comments are unclear.
# need the %between% operator
library(data.table)

# initialize vectors - 0 (false) by default
truth.match <- rep(0, times = nrow(truth))
possible.between <- rep(0, times = nrow(possible))

# iterate through the 'possible' dataframe
for (i in 1:nrow(possible)) {
  # boolean vector showing whether any of the 'truth' rows are a 'match'
  match.vec <- apply(truth[, 2:3],
                     MARGIN = 1,
                     FUN = function(x) possible$Times[i] %between% x)
  # if any are true then update the match and between vectors
  if (any(match.vec)) {
    truth.match[match.vec] <- 1
    possible.between[i] <- 1
  }
}
#i think this should be called anyMatch for clarity
truth$anyMatch <- truth.match
#similarly; betweenAny
possible$betweenAny <- possible.between
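As a side note, the same 0/1 flags can be computed without the explicit for loop. A base-R sketch, assuming the Start, Stop and Times column names used above:
# vectorised version of the same check (sketch)
possible$betweenAny <- as.integer(
  sapply(possible$Times, function(t) any(t >= truth$Start & t <= truth$Stop)))
truth$anyMatch <- as.integer(
  sapply(seq_len(nrow(truth)),
         function(i) any(possible$Times >= truth$Start[i] & possible$Times <= truth$Stop[i])))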

Alternative to for loop and indexing?

I have a large data set of 3 columns, Order, Discharge, Date (numeric). There are 20 years of daily Discharge values for each Order, which can extend beyond 100.
> head(dat)
Order Discharge date
1 0.04712 6574
2 0.05108 6574
3 0.00000 6574
4 0.00000 6574
5 3.54100 6574
6 3.61500 6574
For a given Order x, I would like to replace the Discharge value with the average of the Discharge at x+1 and x-1 for that date. I have been doing this in a crude manner with a for loop and indexing, but it takes over an hour to process. I know there has to be a better way.
x <- 4
for (i in min(dat[, 3]):max(dat[, 3]))
  dat[, 2][dat[, 3] == i & dat[, 1] == x] <-
    mean(c(dat[, 2][dat[, 3] == i & dat[, 1] == x + 1],
           dat[, 2][dat[, 3] == i & dat[, 1] == x - 1]))
Gives
> head(dat)
Order Discharge date
1 0.04712 6574
2 0.05108 6574
3 0.00000 6574
4 1.77050 6574
5 3.54100 6574
6 3.61500 6574
Where the Discharge at Order 4, for date 6574 has been replaced with 1.77050. It works, but it's ridiculously slow.
I should specify that I don't need to do this calculation on every Order, but only a select few (only 8 out of a total of 117). Based on the answer, I have the following.
dat$NewDischarge <- by(dat$Discharge, dat$date, function(x)
  colMeans(cbind(c(x[-1], NA), x,
                 c(NA, x[-length(x)])), na.rm = TRUE))
I am trying to figure out a way still to only have the values of the select Orders to be calculated and am stuck in the rut of a for loop and indexing on date and Orders.
I would go about it as follows:
1. Ensure that Order is a factor.
2. For each Order, you now have a sub-problem:
   2.1. Sort the sub-data-frame by date.
   2.2. Each Discharge mean can be produced "vectorally" as:
        colMeans(cbind(c(Discharge[-1], NA), Discharge, c(NA, Discharge[-length(Discharge)])))
3. The sub-problem can be dealt with by a simple for loop or the function by. I would prefer by (a sketch follows below).
4. Your data will have been rearranged, but you can easily reorder it.
For point 2.2, imagine it (or try it) with a simple vector and see the effect of the cbind operation. It also forces you to consider the limit situations: how are the first and last Discharge values calculated (no preceding or following dates)?
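A minimal sketch of the by idea, grouped by date so that the neighbours are the adjacent Orders on the same date (which is what the question asks for). It assumes dat is sorted by date and then Order, and at the two boundary Orders na.rm = TRUE keeps only the single available neighbour:
# two-neighbour mean within each date (sketch)
dat$NewDischarge <- unlist(by(dat, dat$date, function(sub) {
  x <- sub$Discharge
  rowMeans(cbind(c(NA, x[-length(x)]),   # Discharge at Order - 1
                 c(x[-1], NA)),          # Discharge at Order + 1
           na.rm = TRUE)
}))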
There are several ways to solve your particular dilemma, but the basic question to ask when confronted with a slow for loop is, "How do I use vectorization to replace this loop?" (Well, maybe you should ask "Should I...?" first.) In your case, you're looping across dates, but there's no need to explicitly do that, since just grabbing all of the rows where dat$Order==x will implicitly grab all the dates.
The dataset you posted only has one date, but I can generate some fake data to illustrate:
generate.data <- function(n.order, n.date){
  dat <- expand.grid(Order=seq_len(n.order), date=seq_len(n.date))
  dat$Discharge <- rlnorm(n.order * n.date)
  dat[, c("Order", "Discharge", "date")]
}
dat <- generate.data(10, 5)
head(dat)
# Order Discharge date
# 1 1 2.1925563 1
# 2 2 0.4093022 1
# 3 3 2.5525497 1
# 4 4 1.9274013 1
# 5 5 1.1941986 1
# 6 6 1.2407451 1
tail(dat)
# Order Discharge date
# 45 5 1.4344575 5
# 46 6 0.5757580 5
# 47 7 0.4986190 5
# 48 8 1.2076292 5
# 49 9 0.3724899 5
# 50 10 0.8288401 5
Here are all the rows where dat$Order==4, across all dates:
dat[dat$Order==4, ]
# Order Discharge date
# 4 4 1.9274013 1
# 14 4 3.5319072 2
# 24 4 0.2374532 3
# 34 4 0.4549798 4
# 44 4 0.7654059 5
You can just take the Discharge column, and you'll have the left-hand side of your assignment:
dat[dat$Order==4, ]$Discharge
# [1] 1.9274013 3.5319072 0.2374532 0.4549798 0.7654059
Now you just need the right side, which has two components: the x-1 discharges and the x+1 discharges. You can grab these the same way you grabbed the x discharges:
dat[dat$Order==4-1, ]$Discharge
# [1] 2.5525497 1.9143963 0.2800546 8.3627810 7.8577635
dat[dat$Order==4+1, ]$Discharge
# [1] 1.1941986 4.6076114 0.3963693 0.4190957 1.4344575
To obtain the new values, you need the parallel mean. R doesn't have a pmean function, but you can cbind these and take the rowMeans:
rowMeans(cbind(dat[dat$Order==4-1, ]$Discharge, dat[dat$Order==4+1, ]$Discharge))
# [1] 1.8733741 3.2610039 0.3382119 4.3909383 4.6461105
So, in the end you have:
dat[dat$Order==4, ]$Discharge <- rowMeans(cbind(dat[dat$Order==4-1, ]$Discharge,
                                                dat[dat$Order==4+1, ]$Discharge))
You can even use %in% to make this work across all of your x values.
Note that this assumes your data is ordered.
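A sketch of that %in% idea for a handful of Orders at once, assuming the data are sorted by date and then Order as above, that the chosen Orders are the hypothetical xs below, and that Order x - 1 and x + 1 exist for every x in xs:
xs <- c(4, 8)  # hypothetical: the few Orders whose Discharge should be replaced
dat[dat$Order %in% xs, "Discharge"] <- rowMeans(cbind(
  dat[dat$Order %in% (xs - 1), "Discharge"],   # the x - 1 neighbours
  dat[dat$Order %in% (xs + 1), "Discharge"]))  # the x + 1 neighbours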

Adding additional observation in panel data in R

I am trying to add additional years to my panel data. Just wondering if you have any ideas for a quick way of doing it. Keep in mind my real data is T=6, i=4000.
# Here is my input
library(plm)
data = data.frame(time=c(30,40,50,30,40,50,30,40,50),
                  id=c(1,1,1,2,2,2,3,3,3),
                  d=c(1,4,7,8,14,2,41,11,61))
# declare panel data individ and time
pd = pdata.frame(data, c("id","time"), drop.index=FALSE)
# this is what I want out...
data.out = data.frame(time=c(30,40,50,60,30,40,50,60,30,40,50,60),
                      id=c(1,1,1,1,2,2,2,2,3,3,3,3),
                      d=c(1,4,7,8,9,14,2,41,50,11,61,70))
# declare panel data individ and time
pd.data.out = pdata.frame(data.out, c("id","time"), drop.index=FALSE)
I am not quite sure what you are doing but this might help:
data = data.frame(time=c(30,40,50,30,40,50,30,40,50),
                  id=c(1,1,1,2,2,2,3,3,3),
                  d=c(1,4,7,8,14,2,41,11,61))
newdata = data.frame(time=c(60,60,60),
                     id=c(1,2,3),
                     d=c(9,50,70))
combodata = rbind(data,newdata)
data.out = combodata[order(combodata$id,combodata$time), ]
rownames(data.out) = NULL
to produce
> data.out
time id d
1 30 1 1
2 40 1 4
3 50 1 7
4 60 1 9
5 30 2 8
6 40 2 14
7 50 2 2
8 60 2 50
9 30 3 41
10 40 3 11
11 50 3 61
12 60 3 70
and I think this is what you want for time and id, though d is marginally different. If the rows do not need to be ordered then the last three lines of the code can be condensed to
data.out = rbind(data,newdata)
Got it... just create a new time and id data.frame and merge into it:
# all original time periods plus one new period (max time + 10), repeated for every id
time = rep(c(unique(as.numeric(as.character(pd$time))),
             max(as.numeric(as.character(pd$time))) + 10),
           length(unique(pd$id)))
# each id repeated once per time period
id = rep(unique(pd$id), each = length(unique(time)))
data2 = data.frame(time, id)
# merge the full grid with the existing panel; new periods get NA for d
data.out = merge(data2, pd, all.x = TRUE)
data.out = data.out[with(data.out, order(id, time)), ]
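For what it is worth, the same id-by-time grid can be built a little more compactly with expand.grid. A sketch on the plain data.frame (the new period's d values stay NA after the merge, just as above), assuming the extra period is time 60:
grid = expand.grid(time = c(unique(data$time), 60), id = unique(data$id))
data.out = merge(grid, data, all.x = TRUE)
data.out = data.out[order(data.out$id, data.out$time), ]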

Grouping R variables based on sub-groups

I have data formatted as
PERSON_A PERSON_B MEET LEAVE
which basically describes that PERSON_A met PERSON_B at time MEET and they said "bye" to each other at moment LEAVE. The time is expressed in seconds, and a small part of the data is at http://pastie.org/2825794 (simple.dat).
What I need is to count the number of meetings, grouped by day. At the moment I have code that works, but it is not pretty. I'd like help transforming it into code that reflects the grouping I'm trying to do, e.g. using ddply, since my main aim is to learn from this case. There are probably many mistakes in this code regarding good practices in R.
library(plyr)
data = read.table("simple.dat", stringsAsFactors=FALSE)
names(data)=c('PERSON_A','PERSON_B','MEET','LEAVE')
attach(data)
min_interval = min(MEET)
max_interval = max(LEAVE)
interval = max_interval - min_interval
day = 86400
number_of_days = floor(interval/day)
g = data.frame(MEETINGS=c(0:number_of_days)) # just to store the result
g[,1] = 0
start_offset = min_interval # start of the first day
for (interval in c(0:number_of_days)) {
  end_offset = start_offset + day
  meetings = (length(data[data$MEET >= start_offset & data$LEAVE <= end_offset, ]$PERSON_A) +
              length(data[data$MEET >= start_offset & data$LEAVE <= end_offset, ]$PERSON_B))
  g[interval+1, ] = meetings
  start_offset = end_offset # start next day
}
g
This code iterates over the days (intervals of 86400 seconds) and stores the number of meetings in the data frame g. The correct output (shown below) of this code, when executed on the linked dataset, gives for each line (day) the number of meetings.
MEETINGS
1 38
2 10
3 16
4 18
5 24
6 6
7 4
8 10
9 28
10 14
11 22
12 2
13 .. 44 0 # I simplified the output here
45 2
Anyway, I know that I could use ddply to get the number of meetings for each pair of nodes:
contacts <- ddply(data, .(PERSON_A, PERSON_B), summarise,
                  CONTACTS = length(c(PERSON_A, PERSON_B)) / 2)
but there is a huge hill for me between this and the result I need.
As an end note, I read How to make a great R reproducible example? and tried my best :)
Thanks,
try this:
> d2 <- transform(data, m = floor(MEET/86400) + 1, l = floor(LEAVE/86400) + 1)
> d3 <- subset(d2, m == l)
> table(d3$m) * 2
1 2 3 4 5 6 7 8 9 10 11 12 45
38 10 16 18 24 6 4 10 28 14 22 2 2
floor(x/(60*60*24)) is a quick way to convert seconds into days.
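Since the question mentions wanting to learn ddply, here is a sketch of the same count with plyr. Like the answer above, it keeps only meetings that start and end on the same day (m == l) and multiplies by 2 because each meeting involves two people:
library(plyr)
d2 <- transform(data, m = floor(MEET/86400) + 1, l = floor(LEAVE/86400) + 1)
ddply(subset(d2, m == l), .(m), summarise, MEETINGS = 2 * length(MEET))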
