Hi, I have a dataframe with multiple columns: the first 5 columns are metadata and the remaining columns (the count will always be even) are the actual columns that need to be calculated.
Formula: (col6*col9) + (col7*col10) + (col8*col11)
country<-c("US","US","US","US")
name <-c("A","B","c","d")
dob<-c(2017,2018,2018,2010)
day<-c(1,4,7,9)
hour<-c(10,11,2,4)
a <-c(1,3,4,5)
d<-c(1,9,4,0)
e<-c(8,1,0,7)
f<-c(10,2,5,6)
j<-c(1,4,2,7)
m<-c(1,5,7,1)
df=data.frame(country,name,dob,day,hour,a,d,e,f,j,m)
How do I get the final summation if I have more columns?
I have tried the code below:
df$final <-(df$a*df$f)+(df$d*df$j)+(df$e*df$m)
Here is one way to generalize the computation:
x <- ncol(df) - 5   # number of value columns (always even)
df$final <- rowSums(df[6:(5 + x/2)] * df[(ncol(df) - x/2 + 1):ncol(df)])   # first half times second half, summed by row
# country name dob day hour a d e f j m final
# 1 US A 2017 1 10 1 1 8 10 1 1 19
# 2 US B 2018 4 11 3 9 1 2 4 5 47
# 3 US c 2018 7 2 4 4 0 5 2 7 28
# 4 US d 2010 9 4 5 0 7 6 7 1 37
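The same pairing can also be spelled out with explicit index vectors, which some may find easier to read (a sketch assuming the metadata always occupies the first five columns):
meta <- 5
half <- (ncol(df) - meta) / 2            # size of each half of the value columns
left <- df[(meta + 1):(meta + half)]     # here: a, d, e
right <- df[(meta + half + 1):ncol(df)]  # here: f, j, m
df$final <- rowSums(left * right)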
I am an R noob and hope some of you can help me.
I have two data sets:
- store (containing store data, including location coordinates (x,y); the locations are integer values corresponding to grid IDs)
- grid (containing all gridIDs (x,y) as well as a population variable TOT_P for each grid point)
What I want to achieve is this:
For each store I want to loop over the grid data and sum the population of the grid IDs close to the store's grid ID.
I.e. basically a SUMIF of the grid population variable, with the condition that
grid(x) < store(x) + 1 &
grid(x) > store(x) - 1 &
grid(y) < store(y) + 1 &
grid(y) > store(y) - 1
How can I accomplish that? My own attempts used things like merge, sapply, etc., but my R inexperience keeps me from getting it right.
Thanks in advance!
Edit:
Sample data:
StoreName StoreX StoreY
Store1 3 6
Store2 5 2
TOT_P GridX GridY
8 1 1
7 2 1
3 3 1
3 4 1
22 5 1
20 6 1
9 7 1
28 1 2
8 2 2
3 3 2
12 4 2
12 5 2
15 6 2
7 7 2
3 1 3
3 2 3
3 3 3
4 4 3
13 5 3
18 6 3
3 7 3
61 1 4
25 2 4
5 3 4
20 4 4
23 5 4
72 6 4
14 7 4
178 1 5
407 2 5
26 3 5
167 4 5
58 5 5
113 6 5
73 7 5
76 1 6
3 2 6
3 3 6
3 4 6
4 5 6
13 6 6
18 7 6
3 1 7
61 2 7
25 3 7
26 4 7
167 5 7
58 6 7
113 7 7
The output I am looking for is
StoreName StoreX StoreY SUM_P
Store1 3 6 721
Store2 5 2 119
I.e. for Store1 it is the sum of TOT_P over the grid fields X=[2-4] and Y=[5-7].
One approach is to use dplyr to cross-join stores with grid points, compute the absolute distance between each store and every grid point, and then filter, group, and sum on these new columns.
#import library
library(dplyr)
#create example store table
StoreName<-paste0("Store",1:2)
StoreX<-c(3,5)
StoreY<-c(6,2)
df.store<-data.frame(StoreName,StoreX,StoreY)
#create example population data (values copied from the table in the question)
df.pop <- data.frame(
  TOT_P = c(8,7,3,3,22,20,9,
            28,8,3,12,12,15,7,
            3,3,3,4,13,18,3,
            61,25,5,20,23,72,14,
            178,407,26,167,58,113,73,
            76,3,3,3,4,13,18,
            3,61,25,26,167,58,113),
  GridX = rep(1:7, times = 7),
  GridY = rep(1:7, each = 7))
#add dummy column to each table to enable cross join
df.store$k=1
df.pop$k=1
#dplyr to join, calculate absolute distance, filter and sum
df.store %>%
inner_join(df.pop, by='k') %>%
mutate(x.diff = abs(StoreX-GridX), y.diff=abs(StoreY-GridY)) %>%
filter(x.diff<=1, y.diff<=1) %>%
group_by(StoreName) %>%
summarise(StoreX=max(StoreX), StoreY=max(StoreY), tot.pop = sum(TOT_P) )
#output:
StoreName StoreX StoreY tot.pop
<fctr> <dbl> <dbl> <int>
1 Store1 3 6 721
2 Store2 5 2 119
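If you prefer to avoid the cross join, here is a base-R sketch of the same filter-and-sum, reusing the df.store and df.pop objects built above:
df.store$SUM_P <- sapply(seq_len(nrow(df.store)), function(i) {
  # grid points within +/-1 of this store's coordinates
  near <- abs(df.pop$GridX - df.store$StoreX[i]) <= 1 &
          abs(df.pop$GridY - df.store$StoreY[i]) <= 1
  sum(df.pop$TOT_P[near])
})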
I have a dataframe p1. I would like to transpose it by column a, find the minimum of each row, and return the column name that holds the minimum value.
a=c(0,1,2,3,4,0,1,2,3,4)
b=c(10,20,30,40,50,9,8,7,6,5)
p1=data.frame(a,b)
p1
> p1
a b
1 0 10
2 1 20
3 2 30
4 3 40
5 4 50
6 0 9
7 1 8
8 2 7
9 3 6
10 4 5
The required final answer:
0 1 2 3 4 row_minimum column_index_of_minimum
10 20 30 40 50 10 0
9 8 7 6 5 5 4
I tried many things, but the key was ave(p1$a, p1$a, FUN = seq_along), which let me split b into groups based on how many times each value of a had already appeared:
myans = setNames(data.frame(do.call(rbind,
          lapply(split(p1, ave(p1$a, p1$a, FUN = seq_along)), function(x) x[, 2]))),
        nm = p1$a[ave(p1$a, p1$a, FUN = seq_along) == 1])
minimum = apply(myans, 1, min)
index = colnames(myans)[apply(myans, 1, which.min)]
myans$min = minimum
myans$index = index
myans
# 0 1 2 3 4 min index
#1 10 20 30 40 50 10 0
#2 9 8 7 6 5 5 4
Consider using a running group count followed by an aggregate and reshape:
# RUNNING GROUP COUNT
p1$grpcnt <- sapply(seq(nrow(p1)), function(i) sum(p1[1:i, c("a")]==p1$a[[i]]))
# MINIMUM OF B BY GROUP COUNT MERGING TO RETRIEVE A VALUE
aggdf <- setNames(merge(aggregate(b~grpcnt, p1, FUN=min),p1,by="b")[c("grpcnt.x","b","a")],
c("grpcnt", "row_minimum", "column_index_of_minimum"))
# RESHAPE/TRANSPOSE LONG TO WIDE
reshapedf <- setNames(reshape(p1, timevar=c("a"), idvar=c("grpcnt"), direction="wide"),
c("grpcnt", paste(unique(p1$a))))
# FINAL MERGE
finaldf <- merge(reshapedf, aggdf, by="grpcnt")[-1]
finaldf
# 0 1 2 3 4 row_minimum column_index_of_minimum
# 1 10 20 30 40 50 10 0
# 2 9 8 7 6 5 5 4
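For comparison, a shorter base-R sketch of the same running-group-count idea, assuming each value of a appears exactly once per output row and in the same order within every group:
grp <- ave(p1$a, p1$a, FUN = seq_along)    # running count within each value of a
wide <- do.call(rbind, split(p1$b, grp))   # one output row per group
colnames(wide) <- unique(p1$a)
res <- data.frame(wide, check.names = FALSE)
res$row_minimum <- apply(wide, 1, min)
res$column_index_of_minimum <- colnames(wide)[apply(wide, 1, which.min)]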
I have the following data:
a <- data.frame(ID=c("A","B","Z","H"), a=c(0,1,2,45), b=c(3,4,5,22), c=c(6,7,8,3))
> a
ID a b c
1 A 0 3 6
2 B 1 4 7
3 Z 2 5 8
4 H 45 22 3
b <- data.frame(ID=c("A","B","E","W","Z","H"), a=c(9,10,11,39,5,0), b=c(4,2,7,54,12,34), c=c(12,0,34,23,13,14))
> b
ID a b c
1: A 9 4 12
2: B 10 2 0
3: E 11 7 34
4: W 39 54 23
5: Z 5 12 13
6: H 0 34 14
I want to merge both dataframes, keeping only the rows of dataframe a, and sum up the matching columns, so that at the end I get:
> z
ID a b c
1 A 9 7 18
2 B 11 6 7
3 Z 7 17 21
4 H 45 56 17
So far I have tried the following:
merge(a,b,by="ID",all.x=T,all.y=F)
> merge(a,b,by="ID",all.x=T,all.y=F)
ID a.x b.x c.x a.y b.y c.y
1 A 0 3 6 9 4 12
2 B 1 4 7 10 2 0
3 H 45 22 3 0 34 14
4 Z 2 5 8 5 12 13
> join(a,b,type="left",by="ID")
ID a b c a b c
1 A 0 3 6 9 4 12
2 B 1 4 7 10 2 0
3 Z 2 5 8 5 12 13
4 H 45 22 3 0 34 14
I cannot manage to sum up the columns. My dataframe is pretty big, so a solution that also speeds things up would be even better.
If your data.frame is very big, then you may consider this option:
library(data.table)
## convert data.frame to data.table
setDT(a)
## convert data.frame to data.table
setDT(b)
## merge the two data.tables (avoid the name `c`, which masks base::c)
merged <- merge(a, b, by = 'ID')
## extract names of all columns except the first one i.e. ID
col_names <- colnames(a)[-1]
## query building
col_1 <- paste0(col_names,'.x')
col_2 <- paste0(col_names,'.y')
cols <- paste(col_1,col_2,sep=',')
cols_2 <- paste0(col_names," = sum(",cols,")")
cols_3 <- paste(cols_2,collapse=',')
query <- paste0("z <- merged[,.(",cols_3,"),by=ID]")
## query execution
eval(parse(text = query))
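The string building and eval(parse()) can also be avoided entirely; here is a sketch that adds the paired .x/.y columns directly, reusing the merged and col_names objects created above:
z <- merged[, .(ID)]
for (nm in col_names) {
  ## sum each .x/.y pair into a column named after the original
  z[, (nm) := merged[[paste0(nm, ".x")]] + merged[[paste0(nm, ".y")]]]
}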
This works at least for your example:
a <- data.frame(ID=c("A","B","Z","H"), a=c(0,1,2,45), b=c(3,4,5,22), c=c(6,7,8,3))
b <- data.frame(ID=c("A","B","E","W","Z","H"), a=c(9,10,11,39,5,0), b=c(4,2,7,54,12,34), c=c(12,0,34,23,13,14))
match_a <- na.omit(match(b$ID, a$ID))
match_b <- na.omit(match(a$ID, b$ID))
df <- cbind(ID = a$ID[match_a], a[match_a, -1] + b[match_b, -1])
First, get the rows of a that match in b and vice versa, so we can be sure we only keep rows that appear in both data frames (and we now know their row indices in both). Then simply use vectorized addition on those matching rows, omitting ID since a factor cannot be summed, and add ID back manually.
You cannot directly add both data frames because they are of unequal size. To make them equal you can keep only the rows of b whose IDs are present in a and then add the frames element-wise.
new <- b[b$ID %in% a$ID, ]
cbind(ID = a$ID, a[-1] + new[-1])
# ID a b c
#1 A 9 7 18
#2 B 11 6 7
#3 Z 7 17 21
#4 H 45 56 17
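Note that this relies on the matching rows of b appearing in the same order as in a, which happens to hold here. A sketch that makes the alignment explicit with match() (assuming every ID of a exists in b):
new <- b[match(a$ID, b$ID), ]   # reorder b's rows to follow a's ID order
cbind(ID = a$ID, a[-1] + new[-1])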
Given a data frame as follows:
id<-c(1,1,1,1,1,1,2,2,2,2,2,2)
t<-c(6,8,9,11,12,14,55,57,58,60,62,63)
p<-c("a","a","a","b","b","b","a","a","b","b","b","b")
df<-data.frame(id,t,p)
row id t p
1 1 6 a
2 1 8 a
3 1 9 a
4 1 11 b
5 1 12 b
6 1 14 b
7 2 55 a
8 2 57 a
9 2 58 b
10 2 60 b
11 2 62 b
12 2 63 b
I want to create a new variable 'ta' such that the value of ta is:
Zero for the row in which 'p' changes from a to b for a given ID (rows 4 and 9) (this I can do)
Within each unique id, when p is 'a', the value of ta should count down from zero by the difference in t between the row in question and the transition row. For example, for row 3 the value of ta should be 0 - (11 - 9) = -2.
Within each unique id, when p is 'b', the value of ta should count up from zero by the difference in t between the row in question and the transition row. For example, for row 5 the value of ta should be 0 + (12 - 11) = 1.
Thus, when complete, the data frame should look as follows:
row id t p ta
1 1 6 a -5
2 1 8 a -3
3 1 9 a -2
4 1 11 b 0
5 1 12 b 1
6 1 14 b 3
7 2 55 a -3
8 2 57 a -1
9 2 58 b 0
10 2 60 b 2
11 2 62 b 4
12 2 63 b 5
I've been playing around with loops, cumsum(), head(), and tail(), and can't quite make this kind of within-id, within-condition summing work. There are a number of other questions about working with values from previous or following rows, but I can't quite reshape any of those techniques to fit here. Your thoughts are greatly appreciated.
Here you go. This is a split-apply-combine strategy: break everything up by id, establish the transition point between p=='a' and p=='b', and then compute the t offsets above and below that point. It only works if your data are actually ordered the way you show them here.
do.call('rbind',
lapply(split(df, df$id), function(x) {
# save values of `0` at transition points in `p`
x <- cbind.data.frame(x, ta=ifelse(c(0,diff(as.numeric(as.factor(x$p))))==1, 0, NA))
# identify indices for those points
w <- which(x$ta==0)
# handle `ta` values for `p=='b'`
x$ta[(w+1):nrow(x)] <- x$ta[w] + (x$t[(w+1):nrow(x)] - x$t[w])
# handle `ta` values for `p=='a'`
x$ta[1:(w-1)] <- x$ta[w] - (x$t[w] - x$t[1:(w-1)])
return(x)
})
)
Result:
id t p ta
1.1 1 6 a -5
1.2 1 8 a -3
1.3 1 9 a -2
1.4 1 11 b 0
1.5 1 12 b 1
1.6 1 14 b 3
2.7 2 55 a -3
2.8 2 57 a -1
2.9 2 58 b 0
2.10 2 60 b 2
2.11 2 62 b 4
2.12 2 63 b 5
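For comparison, a vectorized sketch using ave(), under the same assumption that rows are ordered by t within each id:
df$ta <- ave(seq_len(nrow(df)), df$id, FUN = function(i) {
  w <- which(df$p[i] == "b")[1]   # position of the transition row within this id
  df$t[i] - df$t[i][w]            # signed offset from the transition time
})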
This is my first post on StackOverflow. I am relatively new to programming and am trying to work with data.table in R, given its reputation for speed.
I have a very large data.table named "Actions", with 5 columns and potentially several million rows. The column names are k1, k2, i, l1 and l2. I have another data.table named "States", holding the unique combinations of Actions' columns k1, k2 and i.
For every row in Actions, I would like to find the indices of the States rows whose first two columns match Actions' columns 4 and 5 (l1 and l2). A reproducible example follows:
S.disc <- c(2000,2000)
S.max <- c(6200,2300)
S.min <- c(700,100)
Traces.num <- 3
Class.str <- lapply(1:2,function(x) seq(S.min[x],S.max[x],S.disc[x]))
Class.inf <- seq_len(Traces.num)
Actions <- data.table(expand.grid(Class.inf, Class.str[[2]], Class.str[[1]], Class.str[[2]], Class.str[[1]])[,c(5,4,1,3,2)])
setnames(Actions,c("k1","k2","i","l1","l2"))
States <- unique(Actions[,list(k1,k2,i)])
If I were using a data.frame, it would look like the following line:
index <- apply(Actions,1,function(x) {which((States[,1]==x[4]) & (States[,2]==x[5]))})
How can I do the same with data.table efficiently?
This is relatively simple once you get the hang of keys and the special symbols which may be used in the j expression of a data.table. Try this...
# First make an ID for each row for use in the `dcast`
# because you are going to have multiple rows with the
# same key values and you need to know where they came from
Actions[ , ID := 1:.N ]
# Set the keys to join on
setkeyv( Actions , c("l1" , "l2" ) )
setkeyv( States , c("k1" , "k2" ) )
# Join States to Actions, using '.I', which
# is the row locations in States in which the
# key of Actions are found and within each
# group the row number ( 1:.N - a repeating 1,2,3)
New <- States[ J(Actions) , list( ID , Ind = .I , Row = 1:.N ) ]
# k1 k2 ID Ind Row
#1: 700 100 1 1 1
#2: 700 100 1 2 2
#3: 700 100 1 3 3
#4: 700 100 2 1 1
#5: 700 100 2 2 2
#6: 700 100 2 3 3
# reshape using 'dcast.data.table'
dcast.data.table( Row ~ ID , data = New , value.var = "Ind" )
# Row 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27...
#1: 1 1 1 1 4 4 4 7 7 7 10 10 10 13 13 13 16 16 16 1 1 1 4 4 4 7 7 7...
#2: 2 2 2 2 5 5 5 8 8 8 11 11 11 14 14 14 17 17 17 2 2 2 5 5 5 8 8 8...
#3: 3 3 3 3 6 6 6 9 9 9 12 12 12 15 15 15 18 18 18 3 3 3 6 6 6 9 9 9...
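On more recent versions of data.table, the same lookup can also be written as an on= join without setting keys (a sketch; allow.cartesian = TRUE is needed because each (l1, l2) pair matches several States rows):
idx <- States[, .(k1, k2, StateIdx = .I)][Actions,
              on = c(k1 = "l1", k2 = "l2"), allow.cartesian = TRUE]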