I have two tables.
In the first table, I have two columns. In the first column, the values run from 1 to 2 million (call them x). In the second column, I have random numbers (call them y).
In the second table, I have two columns. The first column contains the same x values, but they do not run from 1 to 2 million; instead they are in random increasing order, like 222, 249, 562, and so on. In the second column, I have random numbers (call them z).
Now I am trying to add a third column to my second table with the y values from the first table. I decided to use apply, but you can use join or merge, whichever is more efficient. Here the x value connects the y and the z.
To start with a minimal data set, you can use this code:
t1 <- cbind(1:20, sample(100:999, 20, TRUE))
t2 <- rbind(c(2, 4), c(6, 12), c(17, 18))
apply(t2, 1, function(...) )
Could you help me fill in the ... blanks?
The output should be of the form:
2 4 --
6 12 --
17 18 --
You can use merge for this:
merge(as.data.frame(t2), as.data.frame(t1), by='V1')
V1 V2.x V2.y
1 2 4 751
2 6 12 298
3 17 18 218
Does this meet your requirements?
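If you want to fill in the apply blanks from the question anyway, here is a minimal sketch using match (assuming the t1 and t2 matrices defined above; the merge above is generally more efficient):
# fill the blanks row by row: look up each x in the first column of t1
apply(t2, 1, function(r) t1[match(r[1], t1[, 1]), 2])
# the same lookup vectorised, which avoids apply entirely:
cbind(t2, y = t1[match(t2[, 1], t1[, 1]), 2])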
require(plyr)
t1 <- as.data.frame(cbind(1:20, sample(100:999, 20, TRUE)))
t2 <- as.data.frame(rbind(c(2, 4), c(6, 12), c(17, 18)))
t3 <- join(t2, t1, type = "left", by = "V1")
> t3
V1 V2 V2
1 2 4 779
2 6 12 898
3 17 18 903
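plyr has since been retired; an equivalent sketch with its successor dplyr (assuming the same t1 and t2 data.frames as above) would be:
library(dplyr)
# keep every row of t2 and add the matching value from t1
t3 <- left_join(t2, t1, by = "V1")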
I have two large dataframes (50+ columns, many of which are long character vars) and I need to identify the "link" variable that I should use to merge them together. The problem is that the names of the variables don't match up. That is, I need to identify variables in the two datasets whose values have a high correlation.
As an example:
dta1 = data.frame(A = c(1 , 2,3, 4), B = c( 23, 45, 6, 8), C = c("001", "028", "076", "039"))
dta2 = data.frame(first = c(5, 6, 7, 8), second = c( 58, 32, 33, 45), third = c("008", "028", "076", "039"))
I would like the code to tell me that columns C and third have a very high correlation (they are not complete duplicates though!).
I have tried adding the two dataframes and running a cor() function, but this doesn't work with character variables.
I also tried union_all(x, y, ...) from dplyr, but that requires the same column names.
At this point I am out of ideas.
Thanks very much.
To identify the most similar columns, try the following. It systematically compares the values from each column in dta1 with each column in dta2 and returns a matrix of match counts.
sapply(dta1, function(x) sapply(dta2, function(y) sum(x == y)))
A B C
first 0 1 0
second 0 0 0
third 0 0 3
From here we can see that third and C have the most matches. Now you can join your two data.frames. To keep all rows and columns, you will want a full_join from the dplyr package.
library(dplyr)
full_join(dta1, dta2, by = c("C" = "third"))
A B C first second
1 1 23 001 NA NA
2 2 45 028 6 32
3 3 6 076 7 33
4 4 8 039 8 45
5 NA NA 008 5 58
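Note that the element-wise comparison above assumes dta1 and dta2 have the same number of rows. For real data frames of different lengths, a hedged alternative is to count shared unique values instead:
# count how many distinct values each pair of columns has in common
sapply(dta1, function(x) sapply(dta2, function(y)
  length(intersect(unique(x), unique(y)))))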
I am trying to automate a process to complete missing values on a sequence of variables using an ifelse statement and the mutate_all function. The problem involves a dataframe with many variable names, for example, ax1, bx1, ...zx1, ax2, bx2, ...zx2, ax3, bx3, ...zx3. The following data give a small scenario:
df<-data.frame(
"id" = c(1:5),
"ax1" = c(1, "NA", 8, "NA", 17),
"bx1" = c(2, 7, "NA", 11, 12),
"ax2" = c(2, 1, 8, 15, 17),
"bx2" = c(2, 6, 4, 13, 11))
The process is to replace the missing values on the variables ending in "x1" with their corresponding values from the variables ending in "x2". That is, if ax1 is missing it is replaced by ax2, any missingness on bx1 is replaced by bx2, and so on. Since there are many more variables than in the scenario presented here, I am looking for a way to automate this process. I have tried the following code
library(dplyr)
df <- df %>%
mutate_all(vars(ends_with("x1", "x2")), function(x,y)
ifelse(is.na(x), y, x)))
but it does not work. I greatly appreciate any help on this.
The expected output is
id ax1 bx1 ax2 bx2
1 1 2 2 2
2 1 7 1 6
3 8 4 8 4
4 15 11 15 13
5 17 12 17 11
In base R, we can replace the NA values in the x1 columns with the corresponding values from the x2 columns using Map.
x1_cols <- grep('x1$', names(df))
x2_cols <- grep('x2$', names(df))
df[x1_cols] <- Map(function(x, y) {x[is.na(x)] <- y[is.na(x)];x},
df[x1_cols], df[x2_cols])
df
# id ax1 bx1 ax2 bx2
#1 1 1 2 2 2
#2 2 1 7 1 6
#3 3 8 4 8 4
#4 4 15 11 15 13
#5 5 17 12 17 11
We can use the same logic with purrr::map2:
df[x1_cols] <- purrr::map2(df[x1_cols], df[x2_cols],
~{.x[is.na(.x)] <- .y[is.na(.x)];.x})
data
I modified the data a bit to make sure the NAs are actual NA values and not the string "NA", which was turning the columns into factors.
df<-data.frame(id=c(1:5),
ax1=c(1,NA,8,NA,17),
bx1=c(2,7,NA,11,12),
ax2=c(2,1,8,15,17),
bx2=c(2,6,4,13,11))
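For a dplyr version in the spirit of the original attempt, a sketch using across() and coalesce() (this assumes dplyr >= 1.0, for across() and cur_column(), and the cleaned data above):
library(dplyr)
df <- df %>%
  mutate(across(ends_with("x1"),
                # replace NAs in each x1 column with the matching x2 column
                ~ coalesce(.x, get(sub("x1$", "x2", cur_column())))))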
When matching 2 data sets, is it possible to somehow specify the matching such that an observation from the first dataset is matched to the second dataset if at least one of the conditions is met?
Let's say I have the following 2 data.tables:
dt1<- data.table(c1=c(rep('a', 2), rep('b', 2), rep('c', 2)),
c2=c('x','y','x','y','x','z'),
c3.min = c(rep(0,3), rep(-1,3)),
c3.max = c(rep(10,3), rep(11,3)),
x= (1:6))
dt2 <- data.table(c1=c(rep('a', 3), rep('b', 3), rep('c', 4)),
c2=c(rep(c('x','y'), 5)),
c3=c(-1, 2, 0, 10, 11, -1, 3, 6, 3, 12),
y= (1:10))
I want to match dt1 to dt2 on 3 conditions, the 3rd of which is a range. If I just do a normal merge on these 3 conditions I get:
> dt2[dt1, on=.(c1,
+ c2,
+ c3 <= c3.max,
+ c3 >= c3.min), nomatch=NA ]
c1 c2 c3 y c3.1 x
1: a x 10 3 0 1
2: a y 10 2 0 2
3: b x 10 NA 0 3
4: b y 11 4 -1 4
5: b y 11 6 -1 4
6: c x 11 7 -1 5
7: c x 11 9 -1 5
8: c z 11 NA -1 6
As you can see, the observations from dt1 with x=3 and x=6 aren't matched. My main concern is to find at least one match for as many observations in dt1 as possible, even if I have to relax some conditions. So I want to know: is there any way to perform a match where dt1 matches dt2 on at least 1 out of the 3 conditions?
I could write a loop, but in reality my 2 datasets are much bigger than this (the first has 10K observations and the 2nd has 300K), and I have 4 conditions in total, so I'm looking for a more efficient way.
Thanks!
My first instinct with this type of problem would be to use the sqldf package, since we need to join using OR conditions, not AND conditions.
library(sqldf)
names(dt1) <- c("c1", "c2", "c3_min", "c3_max", "x") # need to get rid of the "."
query1 <- "select * from dt1
left join dt2
on (dt1.c1 = dt2.c1) or (dt1.c2 = dt2.c2) or (dt2.c3 between dt1.c3_min and dt1.c3_max)"
sqldf(query1)
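Since an OR join like this can return several dt2 matches per observation, a follow-up sketch to keep just one match per row of dt1 (using dt1's x column, which is unique, as the row identifier):
res <- sqldf(query1)
# keep only the first matched dt2 row for each dt1 observation
res_first <- res[!duplicated(res$x), ]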
Given two data frames s and q with five observations each:
set.seed(8)
s <- data.frame(id=sample(c('Z','X'), 5, T),
t0=sample(1:10, 5, T),
t1 = sample(11:30, 5, T))
q <- data.frame(id=sample(c('Z','X'), 5, T),
t0=sample(1:10, 5, T),
t1 = sample(11:30, 5, T))
> s
id t0 t1
1 Z 8 20
2 Z 3 12
3 X 10 19
4 X 8 21
5 Z 7 13
> q
id t0 t1
1 X 3 30
2 Z 5 12
3 Z 7 23
4 Z 3 21
5 X 7 27
The midpoint between the variables t0 and t1 for each observation is (e.g. for the s data):
s$t0+(s$t1-s$t0)/2
To find the index of the (first) observation in s whose midpoint is closest to, say, the first observation in q I can do:
i <- which.min(abs(s$t0 + (s$t1 - s$t0)/2 - (q$t0[1] + (q$t1[1] - q$t0[1])/2)))
s[i,]
gives:
id t0 t1
3 X 10 19
But I cannot figure out how to find the same index in the original data s if I also want to condition on the id variable (e.g. pseudo-code like: which.min(....) & s$id == q$id[1]; in this case the midpoint is sought among ids being 'X'). This SO question is close but not spot on.
Again: I need an index to be used in the original 5-row data set.
Set the elements of the which.min argument to infinity wherever your condition is not met:
val <- abs(s$t0 + (s$t1 - s$t0)/2 - (q$t0[1] + (q$t1[1] - q$t0[1])/2))
val[s$id != q$id[1]] <- Inf
i <- which.min(val)
By the way, you can simplify the expression in the first line to:
val <- abs((s$t0+s$t1)/2-(q$t0[1]+q$t1[1])/2)
or even
val <- abs(s$t0+s$t1-q$t0[1]-q$t1[1])/2
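The same idea generalises to every row of q; a sketch returning one s-index per q row (this assumes every id in q also occurs in s, since which.min over all-Inf values silently returns 1):
# for each row of q: index of the s row with the same id
# and the closest midpoint
idx <- sapply(seq_len(nrow(q)), function(j) {
  val <- abs(s$t0 + s$t1 - q$t0[j] - q$t1[j]) / 2
  val[s$id != q$id[j]] <- Inf
  which.min(val)
})
s[idx, ]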
I have 2 files of 3 columns and hundreds of rows. I want to compare the first two columns of the two files and list their common elements. To that list I then have to add the third column of the second file, i.e. the values from the second file corresponding to the rows whose first two columns are common to both files.
For example, consider two files of 6 rows and 3 columns
First file:
1 2 3
2 3 4
4 6 7
3 8 9
11 10 5
19 6 14
Second file:
1 4 1
2 1 4
4 6 10
3 7 2
11 10 3
19 6 5
As I said, I have to compare the first two columns and then add the third column of the second file to that list. Therefore, the output must be:
4 6 10
11 10 3
19 6 5
I have the following code; however, it shows an "object not found" error, and I am also not able to add the third column. Please help :)
Here df2 is the first file and df3 is the second file, read into data frames. The code is in R.
s1 = 1
for(i in 1:nrow(df2)){
for(j in 1:nrow(df3)){
if(df2[i,1] == df3[j,1]){
if(df2[i,2] == df3[j,2]){
common.rows1[s1,1] <- df2[i,1]
common.rows1[s1,2] <- df2[i,2]
s1 = s1 + 1
}
}
}
}
You can use the %in% operator twice to subset your second data.frame (I call it df2):
df2[df2$V1 %in% df1$V1 & df2$V2 %in% df1$V2,]
# V1 V2 V3
#3 4 6 10
#5 11 10 3
#6 19 6 5
V1 and V2 in my example are the column names of df1 and df2. Note that this checks the two columns independently, so in edge cases it can keep a row of df2 whose V1 matches one row of df1 while its V2 matches a different one; the sketch below and the answers that follow treat the pair as a unit.
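A hedged variant in the same subsetting style that matches the (V1, V2) pair as a single key (assuming the pasted values cannot collide across columns):
# paste the two key columns together so pairs must match row-wise
df2[paste(df2$V1, df2$V2) %in% paste(df1$V1, df1$V2), ]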
It seems that this is the perfect use-case for merge, e.g.
merge(d1[c('V1','V2')],d2)
results in:
V1 V2 V3
1 11 10 3
2 19 6 5
3 4 6 10
In which 'V1' and 'V2' are the column names of interest.
A data.table proposal:
library(data.table)
setDT(df1)
setDT(df2)
setkey(df1, V1, V2)
setkey(df2, V1, V2)
df2[df1[, -3, with = F], nomatch = 0]
## V1 V2 V3
## 1: 4 6 10
## 2: 11 10 3
## 3: 19 6 5
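With more recent data.table versions you can skip the keys and write the join directly with on= (a sketch over the same df1 and df2):
# non-keyed join on the first two columns only
df2[df1[, .(V1, V2)], on = .(V1, V2), nomatch = 0]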
If your two tables are d1 and d2,
d1<-data.frame(
V1 = c(1, 2, 4, 3, 11, 19),
V2 = c(2, 3, 6, 8, 10, 6),
V3 = c(3, 4, 7, 9, 5, 14)
)
d2<-data.frame(
V1 = c(1, 2, 4, 3, 11, 19),
V2 = c(4, 1, 6, 7, 10, 6),
V3 = c(1, 4, 10, 2, 3, 5)
)
then you can subset d2 (in order to keep the third column) with
d2[interaction(d2$V1, d2$V2) %in% interaction(d1$V1, d1$V2),]
The interaction() treats the first two columns as a combined key.
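For the example data this keeps rows 3, 5 and 6 of d2:
   V1 V2 V3
3   4  6 10
5  11 10  3
6  19  6  5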