R: not extracting right columns from table

I am a beginner in R. I have a table that looks like this:
> means
as er op rt
a 34.66667 3.5 87 4
b 22.66667 4.5 9 5
c 5.00000 7.5 6 9
d 6.00000 0.5 6 3
e 3.00000 8.0 7 89
and another one that looks like this:
> table
exp ctrl
1 as er
2 rt op
I want to extract the values from the columns in "means" that are indicated in column "exp" of "table", like this:
> means_exp <- means[, table$exp]
In the real situation both tables would be much bigger, so I don't want to just specify the names of the columns to extract one by one.
However, with that command I am getting this:
> means_exp
as er
a 34.66667 3.5
b 22.66667 4.5
c 5.00000 7.5
d 6.00000 0.5
e 3.00000 8.0
but I am supposed to get columns "as" and "rt", not "as" and "er"
Any idea why the wrong columns are extracted?
Thank you!
Here is the dput of the first table:
structure(c(34.6666666666667, 22.6666666666667, 5, 6, 3, 3.5,
4.5, 7.5, 0.5, 8, 87, 9, 6, 6, 7, 4, 5, 9, 3, 89), .Dim = c(5L,
4L), .Dimnames = list(c("a", "b", "c", "d", "e"), c("as", "er",
"op", "rt")))
and that of the second:
structure(list(exp = structure(1:2, .Label = c("as", "rt"), class = "factor"),
ctrl = structure(1:2, .Label = c("er", "op"), class = "factor")), .Names = c("exp",
"ctrl"), class = "data.frame", row.names = c(NA, -2L))

The reason the OP got different columns when indexing with the 'exp' column of 'table' is the class of exp: it is a factor, so converting it to character is an option.
means[,as.character(table$exp)]
Without the conversion, the factor is coerced to its integer codes:
as.integer(factor(table$exp))
#[1] 1 2
so indexing with the factor selects columns by position:
means[,factor(table$exp)]
# as er
#a 34.66667 3.5
#b 22.66667 4.5
#c 5.00000 7.5
#d 6.00000 0.5
#e 3.00000 8.0
So it selects the first two columns instead of 'as' and 'rt'.
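After the as.character() conversion the intended columns come back (output reconstructed from the dput above, so the exact spacing may differ):
means[, as.character(table$exp)]
#         as rt
#a 34.66667  4
#b 22.66667  5
#c  5.00000  9
#d  6.00000  3
#e  3.00000 89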

Related

How to insert one column of data from one table into another?

I am trying to add this table:
#        [,1] [,2]
# [1,] -0.870    8
# [2,] -0.750    7
# [3,]  2.290    2
# [4,] -0.050    5
# [5,]  0.355    4
# [6,] -0.895    9
# [7,]  3.290    1
# [8,] -0.510    6
# [9,]  0.430    3
#[10,] -3.290   10
into the respective "predAwayScore" and "predHomeScore" columns in my data frame.
I want to insert the left hand column of the first data set (-.87, -.75, etc.) into the appropriate cells. The right hand side of that data set (8,7,2,etc.) corresponds to the letter on the data frame that the value needs to be entered. (For instance, AwayTeam E = 5 = -.05)
I am unsure how to insert one column into another data frame, and how to refer to the corresponding letter guide that is attached.
I appreciate any help.
One option is to create a named vector, use it to match the 'AwayTeam' and 'HomeTeam' values to their corresponding scores, and assign those values to the 'predAwayScore' and 'predHomeScore' columns.
nm1 <- setNames(m1[,1], LETTERS[m1[,2]])
df1$predAwayScore <- nm1[df1[['AwayTeam']]]
df1$predHomeScore <- nm1[df1[['HomeTeam']]]
df1
# Week AwayTeam HomeTeam predAwayScore predHomeScore
#1 3 E A -0.050 3.29
#2 3 A F 3.290 -0.51
#3 4 H E -0.870 -0.05
#4 4 I A -0.895 3.29
#5 5 F C -0.510 0.43
#6 5 F J -0.510 -3.29
data
m1 <- structure(c(-0.87, -0.75, 2.29, -0.05, 0.3555, -0.895, 3.29,
-0.51, 0.43, -3.29, 8, 7, 2, 5, 4, 9, 1, 6, 3, 10), .Dim = c(10L,
2L))
df1 <- structure(list(Week = c(3, 3, 4, 4, 5, 5), AwayTeam = c("E",
"A", "H", "I", "F", "F"), HomeTeam = c("A", "F", "E", "A", "C",
"J")), class = "data.frame", row.names = c(NA, -6L))

Auto-generate code in R to create data frame [duplicate]

I miss a way to add data to an SO answer in a transparent manner. My experience is that the structure object from dput() at times confuses inexperienced users unnecessarily. I do, however, not have the patience to copy/paste it into a simple data frame call each time and would like to automate it. Something similar to dput(), but a simplified version.
Say that, by copy/pasting and some other steps, I have data like this,
Df <- data.frame(A = c(2, 2, 2, 6, 7, 8),
B = c("A", "G", "N", NA, "L", "L"),
C = c(1L, 3L, 5L, NA, NA, NA))
looks like this,
Df
#> A B C
#> 1 2 A 1
#> 2 2 G 3
#> 3 2 N 5
#> 4 6 <NA> NA
#> 5 7 L NA
#> 6 8 L NA
with one integer, one factor and one numeric vector,
str(Df)
#> 'data.frame': 6 obs. of 3 variables:
#> $ A: num 2 2 2 6 7 8
#> $ B: Factor w/ 4 levels "A","G","L","N": 1 2 4 NA 3 3
#> $ C: int 1 3 5 NA NA NA
Now, I would like to share this on SO, but I do not always have the original data frame it came from. More often than not I pipe() it in from SO, and the only way I know to get it out again is dput(). Like,
dput(Df)
#> structure(list(A = c(2, 2, 2, 6, 7, 8), B = structure(c(1L, 2L,
#> 4L, NA, 3L, 3L), .Label = c("A", "G", "L", "N"), class = "factor"),
#> C = c(1L, 3L, 5L, NA, NA, NA)), .Names = c("A", "B", "C"), row.names = c(NA,
#> -6L), class = "data.frame")
but, as I said at the top, these structures can look quite confusing. For that reason I am looking for a way to compress dput()'s output in some way. I imagine an output that looks something like this,
dput_small(Df)
#> data.frame(A = c(2, 2, 2, 6, 7, 8), B = c("A", "G", "N", NA, "L", "L"),
#> C = c(1L, 3L, 5L, NA, NA, NA))
Is that possible? I realize there are other classes, like lists, tbl, tbl_df, etc.
Edit: leaving the older solution at the bottom because it got a bounty and many votes, but proposing an improved answer first.
You can use the {constructive} package, currently only on GitHub but it might be on CRAN by the time you read this:
# remotes::install_github("cynkra/constructive")
Df <- data.frame(A = c(2, 2, 2, 6, 7, 8),
B = c("A", "G", "N", NA, "L", "L"),
C = c(1L, 3L, 5L, NA, NA, NA))
constructive::construct(Df)
#> data.frame(
#> A = c(2, 2, 2, 6, 7, 8),
#> B = c("A", "G", "N", NA, "L", "L"),
#> C = c(1L, 3L, 5L, NA, NA, NA)
#> )
It has custom constructors to many common classes so it should be able to reproduce most objects faithfully in a human readable way.
Old solution:
Three solutions:
a wrapper around dput (handles standard data.frames, tibbles and lists)
a read.table solution (for data.frames)
a tibble::tribble solution (for data.frames, returning a tibble)
All include n and random parameters, which allow one to dput only the head of the data or to sample it on the fly.
dput_small1(Df)
# Df <- data.frame(
# A = c(2, 2, 2, 6, 7, 8),
# B = structure(c(1L, 2L, 4L, NA, 3L, 3L), .Label = c("A", "G", "L",
# "N"), class = "factor"),
# C = c(1L, 3L, 5L, NA, NA, NA) ,
# stringsAsFactors=FALSE)
dput_small2(Df,stringsAsFactors=TRUE)
# Df <- read.table(sep="\t", text="
# A B C
# 2 A 1
# 2 G 3
# 2 N 5
# 6 NA NA
# 7 L NA
# 8 L NA", header=TRUE, stringsAsFactors=TRUE)
dput_small3(Df)
# Df <- tibble::tribble(
# ~A, ~B, ~C,
# 2, "A", 1L,
# 2, "G", 3L,
# 2, "N", 5L,
# 6, NA_character_, NA_integer_,
# 7, "L", NA_integer_,
# 8, "L", NA_integer_
# )
# Df$B <- factor(Df$B)
Wrapper around dput
This option gives an output very close to the one proposed in the question. It's quite general because it actually wraps dput, applied separately to each column.
multiline means 'keep dput's default output laid out into multiple lines'.
dput_small1 <- function(x,
                        name=as.character(substitute(x)),
                        multiline = TRUE,
                        n=if ('list' %in% class(x)) length(x) else nrow(x),
                        random=FALSE,
                        seed = 1){
  name  # force evaluation of the default name before x is altered
  # choose the constructor to print, depending on the class of x
  if('tbl_df' %in% class(x)) create_fun <- "tibble::tibble" else
    if('list' %in% class(x)) create_fun <- "list" else
      if('data.table' %in% class(x)) create_fun <- "data.table::data.table" else
        create_fun <- "data.frame"
  # keep either a random sample or the head of the data
  if(random) {
    set.seed(seed)
    if(create_fun == "list") x <- x[sample(1:length(x),n)] else
      x <- x[sample(1:nrow(x),n),]
  } else {
    x <- head(x,n)
  }
  line_sep <- if (multiline) "\n " else ""
  # dput each column/element separately and glue the pieces into one constructor call
  cat(sep='',name," <- ",create_fun,"(\n ",
      paste0(unlist(
        Map(function(item,nm) paste0(nm,if(nm=="") "" else " = ",paste(capture.output(dput(item)),collapse=line_sep)),
            x,if(is.null(names(x))) rep("",length(x)) else names(x))),
        collapse=",\n "),
      if(create_fun == "data.frame") ",\n stringsAsFactors = FALSE)" else "\n)")
}
dput_small1(list(1,2,c=3,d=4),"my_list",random=TRUE,n=3)
# my_list <- list(
# 2,
# d = 4,
# c = 3
# )
read.table solution
For data.frames, however, I find it more comfortable to have the input in an explicit/tabular format.
This can be achieved with read.table, then automatically reformatting the types of columns that read.table wouldn't get right. It is not as general as the first solution, but it will work smoothly for 95% of the cases found on SO.
dput_small2 <- function(df,
                        name=as.character(substitute(df)),
                        sep='\t',
                        header=TRUE,
                        stringsAsFactors = FALSE,
                        n= nrow(df),
                        random=FALSE,
                        seed = 1){
  name
  if(random) {
    set.seed(seed)
    df <- df[sample(1:nrow(df),n),]
  } else {
    df <- head(df,n)
  }
  cat(sep='',name,' <- read.table(sep="',sub('\t','\\\\t',sep),'", text="\n ',
      paste(colnames(df),collapse=sep))
  df <- head(df,n)
  apply(df,1,function(x) cat(sep='','\n ',paste(x,collapse=sep)))
  cat(sep='','", header=',header,', stringsAsFactors=',stringsAsFactors,')')
  sapply(names(df), function(x){
    if(is.character(df[[x]]) & suppressWarnings(identical(as.character(as.numeric(df[[x]])),df[[x]]))){ # if it's a character column containing numbers
      cat(sep='','\n',name,'$',x,' <- as.character(', name,'$',x,')')
    } else if(is.factor(df[[x]]) & !stringsAsFactors) { # if it's a factor and conversion is not automated
      cat(sep='','\n',name,'$',x,' <- factor(', name,'$',x,')')
    } else if(inherits(df[[x]], "POSIXct")){
      cat(sep='','\n',name,'$',x,' <- as.POSIXct(', name,'$',x,')')
    } else if(inherits(df[[x]], "Date")){
      cat(sep='','\n',name,'$',x,' <- as.Date(', name,'$',x,')')
    }})
  invisible(NULL)
}
Simplest case
dput_small2(iris,n=6)
will print:
iris <- read.table(sep="\t", text="
Sepal.Length Sepal.Width Petal.Length Petal.Width Species
5.1 3.5 1.4 0.2 setosa
4.9 3.0 1.4 0.2 setosa
4.7 3.2 1.3 0.2 setosa
4.6 3.1 1.5 0.2 setosa
5.0 3.6 1.4 0.2 setosa
5.4 3.9 1.7 0.4 setosa", header=TRUE, stringsAsFactors=FALSE)
which in turn when executed will return :
# Sepal.Length Sepal.Width Petal.Length Petal.Width Species
# 1 5.1 3.5 1.4 0.2 setosa
# 2 4.9 3.0 1.4 0.2 setosa
# 3 4.7 3.2 1.3 0.2 setosa
# 4 4.6 3.1 1.5 0.2 setosa
# 5 5.0 3.6 1.4 0.2 setosa
# 6 5.4 3.9 1.7 0.4 setosa
str(iris)
# 'data.frame': 6 obs. of 5 variables:
# $ Sepal.Length: num 5.1 4.9 4.7 4.6 5 5.4
# $ Sepal.Width : num 3.5 3 3.2 3.1 3.6 3.9
# $ Petal.Length: num 1.4 1.4 1.3 1.5 1.4 1.7
# $ Petal.Width : num 0.2 0.2 0.2 0.2 0.2 0.4
# $ Species : chr " setosa" " setosa" " setosa" " setosa" ...
More complex case
dummy data:
test <- data.frame(a=1:5,
b=as.character(6:10),
c=letters[1:5],
d=factor(letters[6:10]),
e=Sys.time()+(1:5),
stringsAsFactors = FALSE)
This:
dput_small2(test,'df2')
will print:
df2 <- read.table(sep="\t", text="
a b c d e
1 6 a f 2018-02-15 11:53:17
2 7 b g 2018-02-15 11:53:18
3 8 c h 2018-02-15 11:53:19
4 9 d i 2018-02-15 11:53:20
5 10 e j 2018-02-15 11:53:21", header=TRUE, stringsAsFactors=FALSE)
df2$b <- as.character(df2$b)
df2$d <- factor(df2$d)
df2$e <- as.POSIXct(df2$e)
which in turn when executed will return :
# a b c d e
# 1 1 6 a f 2018-02-15 11:53:17
# 2 2 7 b g 2018-02-15 11:53:18
# 3 3 8 c h 2018-02-15 11:53:19
# 4 4 9 d i 2018-02-15 11:53:20
# 5 5 10 e j 2018-02-15 11:53:21
str(df2)
# 'data.frame': 5 obs. of 5 variables:
# $ a: int 1 2 3 4 5
# $ b: chr "6" "7" "8" "9" ...
# $ c: chr "a" "b" "c" "d" ...
# $ d: Factor w/ 5 levels "f","g","h","i",..: 1 2 3 4 5
# $ e: POSIXct, format: "2018-02-15 11:53:17" "2018-02-15 11:53:18" "2018-02-15 11:53:19" "2018-02-15 11:53:20" ...
all.equal(df2,test)
# [1] "Component ā€œeā€: Mean absolute difference: 0.4574251" # only some rounding error
tribble solution
The read.table option is very readable but not very general. With tribble, pretty much any data type can be handled (though factors need ad hoc fixing).
This solution isn't so useful for the OP's example but is great for list columns (see example below). To make use of the output, the tibble library is required.
Just as with my first solution, it's a wrapper around dput, but instead of 'dputting' columns, I'm 'dputting' elements.
dput_small3 <- function(df,
                        name=as.character(substitute(df)),
                        n= nrow(df),
                        random=FALSE,
                        seed = 1){
  name
  if(random) {
    set.seed(seed)
    df <- df[sample(1:nrow(df),n),]
  } else {
    df <- head(df,n)
  }
  df1 <- lapply(df,function(col) if(is.factor(col)) as.character(col) else col)
  dputs <- sapply(df1,function(col){
    col_dputs <- sapply(col,function(elt) paste(capture.output(dput(elt)),collapse=""))
    max_char <- max(nchar(unlist(col_dputs)))
    sapply(col_dputs,function(elt) paste(c(rep(" ",max_char-nchar(elt)),elt),collapse=""))
  })
  lines <- paste(apply(dputs,1,paste,collapse=", "),collapse=",\n ")
  output <- paste0(name," <- tibble::tribble(\n ",
                   paste0("~",names(df),collapse=", "),
                   ",\n ",lines,"\n)")
  cat(output)
  sapply(names(df), function(x) if(is.factor(df[[x]])) cat(sep='','\n',name,'$',x,' <- factor(', name,'$',x,')'))
  invisible(NULL)
}
dput_small3(dplyr::starwars[c(1:3,11)],"sw",n=6,random=TRUE)
# sw <- tibble::tribble(
# ~name, ~height, ~mass, ~films,
# "Lando Calrissian", 177L, 79, c("Return of the Jedi", "The Empire Strikes Back"),
# "Finis Valorum", 170L, NA_real_, "The Phantom Menace",
# "Ki-Adi-Mundi", 198L, 82, c("Attack of the Clones", "The Phantom Menace", "Revenge of the Sith"),
# "Grievous", 216L, 159, "Revenge of the Sith",
# "Wedge Antilles", 170L, 77, c("Return of the Jedi", "The Empire Strikes Back", "A New Hope"),
# "Wat Tambor", 193L, 48, "Attack of the Clones"
# )
The package datapasta won't always work perfectly, as it currently doesn't support all types, but it is clean and easy to use:
# install.packages(c("datapasta"), dependencies = TRUE)
datapasta::dpasta(Df)
#> data.frame(
#> A = c(2, 2, 2, 6, 7, 8),
#> C = c(1L, 3L, 5L, NA, NA, NA),
#> B = as.factor(c("A", "G", "N", NA, "L", "L"))
#> )
We could set control to NULL to simplify:
dput(Df, control = NULL)
# list(A = c(2, 2, 2, 6, 7, 8), B = c(NA, NA, NA, NA, 7, 9), C = c(1, 3, 5, NA, NA, NA))
Then wrap it with data.frame:
data.frame(dput(Df, control = NULL))
Edit: To avoid factor columns getting converted to numbers, we could convert them to character before calling dput:
dput_small <- function(d){
  ix <- sapply(d, is.factor)
  d[ix] <- lapply(d[ix], as.character)
  dput(d, control = NULL)
}
You could simply write to a compressed connection.
gz <- gzfile("foo.gz", open="wt")
dput(Df, gz)
close(gz)
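To read it back later, dget() on the same kind of connection should work (a sketch, not part of the original answer):
Df2 <- dget(gzfile("foo.gz"))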
Generally a large dput is difficult to cope with, on SO or otherwise. Instead you can just save the structure directly to an Rda file:
save(Df, file='foo.Rda')
And read it back in:
load('foo.Rda')
See this question for a little more info and credit where credit is due: How to save a data.frame in R?
You could also look at the sink function...
If I've missed the purpose of your question, please feel free to expand on the reasons why dput is the only mechanism for you.
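A closely related base R option (not mentioned in the original answer) is saveRDS()/readRDS(), which stores a single object and lets you choose its name when loading:
saveRDS(Df, file = "foo.rds")
Df2 <- readRDS("foo.rds")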
It might be worth mentioning memCompress and memDecompress here. For in-memory objects, the former can reduce the size of large objects by compressing them as specified, and the latter reverses the compression. They're actually quite useful for package objects.
sum(nchar(dput(DF)))
# [1] 64
( mDF <- memCompress(as.character(DF)) )
# [1] 78 9c 4b d6 30 d2 51 80 20 33 1d 05 73 1d 05 0b 4d ae 64 0d 3f 47 1d 05 64 0c 14 b7 04 89 1b ea 28 18 eb 28 98 22 4b 6a 02 00 a8 ba 0c d2
length(mDF)
# [1] 46
cat(mdDF <- memDecompress(mDF, "gzip", TRUE))
# c(2, 2, 2, 6, 7, 8)
# c(NA, NA, NA, NA, 7, 9)
# c(1, 3, 5, NA, NA, NA)
nchar(mdDF)
# [1] 66
I haven't quite determined if the data frame can be reassembled easily, but I'm sure it can be.
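One hedged sketch of reassembling it, assuming the decompressed string keeps one deparsed column per line as printed above (the column names are lost, so they have to be supplied again):
cols <- strsplit(mdDF, "\n")[[1]]
vals <- lapply(cols, function(txt) eval(parse(text = txt)))
DF2 <- as.data.frame(setNames(vals, names(DF)))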
There is also the read.so package, which I really like, in particular to read SO data.
It works for tibbles as well.
#devtools::install_github("alistaire47/read.so")
Df <- data.frame(A = c(2, 2, 2, 6, 7, 8),
B = c("A", "G", "N", NA, "L", "L"),
C = c(1L, 3L, 5L, NA, NA, NA))
read.so::write.so(Df)
#> Df <- data.frame(
#> A = c(2, 2, 2, 6, 7, 8),
#> B = c("A", "G", "N", NA, "L", "L"),
#> C = c(1L, 3L, 5L, NA, NA, NA)
#> )

How to merge two dataframes with same column name but may have same data in variables in R?

I want to ask: how do I merge these two data frames?
df1:
Name Type Price
A 1 NA
B 2 2.5
C 3 2.0
df2:
Name Type Price
A 1 1.5
D 2 2.5
E 3 2.0
As you can see, both data frames have the same column names and share one value in "Name" (A), but df1 doesn't have the price whereas df2 does. I want to achieve the output below, merging the rows where the value in "Name" is the same:
Name Type Price
A 1 1.5
B 2 2.5
C 3 2.0
D 2 2.5
E 3 2.0
We could do a full_join of df1 and df2 by Name, then use coalesce on Type and Price to take the first non-NA value from each pair of columns.
library(dplyr)
full_join(df1, df2, by = 'Name') %>%
mutate(Type = coalesce(Type.x, Type.y),
Price = coalesce(Price.x, Price.y)) %>%
select(names(df1))
# Name Type Price
#1 A 1 1.5
#2 B 2 2.5
#3 C 3 2.0
#4 D 2 2.5
#5 E 3 2.0
And similarly in base R:
transform(merge(df1, df2, by = 'Name', all = TRUE),
Price = ifelse(is.na(Price.x), Price.y, Price.x),
Type = ifelse(is.na(Type.x), Type.y, Type.x))[names(df1)]
data
df1 <- structure(list(Name = structure(1:3, .Label = c("A", "B", "C"
), class = "factor"), Type = 1:3, Price = c(NA, 2.5, 2)),
class = "data.frame", row.names = c(NA, -3L))
df2 <- structure(list(Name = structure(1:3, .Label = c("A", "D", "E"
), class = "factor"), Type = 1:3, Price = c(1.5, 2.5, 2)),
class = "data.frame", row.names = c(NA, -3L))
Seems like you want to rbind the data frames together, then remove rows with NA values for Price, and order by Name.
library(data.table)
setDT(rbind(df1, df2))[!is.na(Price)][order(Name)]
# Name Type Price
# 1: A 1 1.5
# 2: B 2 2.5
# 3: C 3 2.0
# 4: D 2 2.5
# 5: E 3 2.0
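The same idea as a dplyr sketch (not from the original answers; depending on the dplyr version, the factor Name columns may be unioned or coerced to character with a warning):
library(dplyr)
bind_rows(df1, df2) %>%
  filter(!is.na(Price)) %>%
  arrange(Name)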
Here is a base R solution using merge + complete.cases
dfout <- subset(u <- merge(df1,df2,all= TRUE),complete.cases(u))
which yields
> dfout
Name Type Price
1 A 1 1.5
3 B 2 2.5
4 C 3 2.0
5 D 2 2.5
6 E 3 2.0
DATA
df1 <- structure(list(Name = structure(1:3, .Label = c("A", "B", "C"
), class = "factor"), Type = 1:3, Price = c(NA, 2.5, 2)),
class = "data.frame", row.names = c(NA, -3L))
df2 <- structure(list(Name = structure(1:3, .Label = c("A", "D", "E"
), class = "factor"), Type = 1:3, Price = c(1.5, 2.5, 2)),
class = "data.frame", row.names = c(NA, -3L))

Matching column in dataframe by nearest values in column of other dataframe

Hello, I have a question about matching two data.frames.
Consider I have two datasets:
Dataframe 1:
"A" "B"
91 1
92 3
93 11
94 4
95 10
96 6
97 7
98 8
99 9
100 2
structure(list(A = 91:100, B = c(1, 3, 11, 4, 10, 6, 7, 8, 9,
2)), .Names = c("A", "B"), row.names = c(NA, -10L), class = "data.frame")
Dataframe 2:
"C" "D"
91.12 1
92.34 3
93.65 11
94.23 4
92.14 10
96.98 6
97.22 7
98.11 8
93.15 9
100.67 2
91.45 1
96.45 3
83.78 11
84.66 4
100 10
structure(list(C = c(91.12, 92.34, 93.65, 94.23, 92.14, 96.98,
97.22, 98.11, 93.15, 100.67, 91.25, 96.45, 83.78, 84.66, 100),
D = c(1, 3, 11, 4, 10, 6, 7, 8, 9, 2, 1, 3, 11, 4, 10)), .Names = c("C",
"D"), row.names = c(NA, -15L), class = "data.frame")
Now I want to find the rounded matches between columns A and C and replace column D with the respective value from column B of Dataframe 1. Where there is no corresponding value (by rounded match between A and C), I want an NaN in the replaced column D.
result:
"C" "newD"
91.12 1
92.34 3
93.65 4
94.23 4
92.14 3
96.98 7
97.22 7
98.11 8
93.15 11
100.67 NaN
91.25 1
96.45 6
83.78 NaN
84.66 NaN
100 2
structure(list(C = c(91.12, 92.34, 93.65, 94.23, 92.14, 96.98,
97.22, 98.11, 93.15, 100.67, 91.25, 96.45, 83.78, 84.66, 100),
D = c(1, 3, 4, 4, 3, 7, 7, 8, 11, NaN, 1, 6, NaN, NaN, 2)), .Names = c("C",
"D"), row.names = c(NA, -15L), class = "data.frame")
Does anybody know how to do that, especially for large datasets?
Thanks a lot!
Making an update join with data.table:
library(data.table)
setDT(DF1); setDT(DF2)
DF2[, A := round(C)]
DF2[, D := DF1[DF2, on=.(A), x.B] ]
# alternately, chain together in one step:
DF2[, A := round(C)][, D := DF1[DF2, on=.(A), x.B] ]
This gives NA in unmatched rows. To switch those to NaN as requested: DF2[is.na(D), D := NaN].
To drop the new DF2$A column, use DF2[, A := NULL].
Does anybody know how to do that, especially for large datasets?
This modifies DF2 in place (instead of making a new table like a vanilla join as in Mike's answer), so it should be fairly efficient for large tables. It might perform better if A is stored as an integer instead of a float in both tables.
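For example, a small sketch of that suggestion (not in the original answer), building the join key as an integer on both sides before the update join:
DF1[, A := as.integer(A)]
DF2[, A := as.integer(round(C))]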
On data.table 1.9.6, use on="A", B instead of on=.(A), x.B. Thanks to Mike H for checking this.
You can create a lookup table where the values in A are used to look up the values in B.
Lookup = df1$B
names(Lookup) = df1$A
df3 = data.frame(C = df2$C, newD = Lookup[as.character(round(df2$C))])
df3$newD[is.na(df3$newD)] = NaN
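An equivalent sketch with match(), which avoids the character conversion (assumes the same df1 and df2 as above; not part of the original answer):
idx <- match(round(df2$C), df1$A)
df3 <- data.frame(C = df2$C, newD = df1$B[idx])
df3$newD[is.na(df3$newD)] <- NaN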
For these types of merges I like SQL:
library(sqldf)
res <- sqldf("SELECT l.C, r.B
FROM df2 as l
LEFT JOIN df1 as r
on round(l.C) = round(r.A)")
res
# C B
#1 91.12 1
#2 92.34 3
#3 93.65 4
#4 94.23 4
#5 92.14 3
#6 96.98 7
#7 97.22 7
#8 98.11 8
#9 93.15 11
#10 100.67 NA
#11 91.45 1
#12 96.45 6
#13 83.78 NA
#14 84.66 NA
#15 100.00 2

How to get row from R data.frame

I have a data.frame with column headers.
How can I get a specific row from the data.frame as a list (with the column headers as keys for the list)?
Specifically, my data.frame is
A B C
1 5 4.25 4.5
2 3.5 4 2.5
3 3.25 4 4
4 4.25 4.5 2.25
5 1.5 4.5 3
And I want to get a row that's the equivalent of
> c(a=5, b=4.25, c=4.5)
a b c
5.0 4.25 4.5
x[r,]
where r is the row you're interested in. Try this, for example:
#Add your data
x <- structure(list(A = c(5, 3.5, 3.25, 4.25, 1.5 ),
B = c(4.25, 4, 4, 4.5, 4.5 ),
C = c(4.5, 2.5, 4, 2.25, 3 )
),
.Names = c("A", "B", "C"),
class = "data.frame",
row.names = c(NA, -5L)
)
#The vector your result should match
y<-c(A=5, B=4.25, C=4.5)
#Test that the items in the row match the vector you wanted
x[1,]==y
This page (from this useful site) has good information on indexing like this.
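Since the question actually asks for the row in the form c(a=5, b=4.25, c=4.5), a small follow-up sketch (not in the original answer): unlist() drops the data.frame class and returns that named vector.
unlist(x[1, ])
#    A    B    C
# 5.00 4.25 4.50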
Logical indexing is very R-ish. Try:
x[ x$A ==5 & x$B==4.25 & x$C==4.5 , ]
Or:
subset( x, A ==5 & B==4.25 & C==4.5 )
Try:
> d <- data.frame(a=1:3, b=4:6, c=7:9)
> d
a b c
1 1 4 7
2 2 5 8
3 3 6 9
> d[1, ]
a b c
1 1 4 7
> d[1, ]['a']
a
1 1
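If you want the row as a list keyed by the column headers, as the question describes, as.list() on the one-row result is one option (a sketch, not from the original answer):
as.list(d[1, ])
# $a
# [1] 1
#
# $b
# [1] 4
#
# $c
# [1] 7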
If you don't know the row number but do know some of the values, you can use subset:
x <- structure(list(A = c(5, 3.5, 3.25, 4.25, 1.5 ),
B = c(4.25, 4, 4, 4.5, 4.5 ),
C = c(4.5, 2.5, 4, 2.25, 3 )
),
.Names = c("A", "B", "C"),
class = "data.frame",
row.names = c(NA, -5L)
)
subset(x, A ==5 & B==4.25 & C==4.5)
Ten years later: using the tidyverse we could achieve this simply, borrowing a leaf from Christopher Bottoms. For a better grasp, see slice().
library(tidyverse)
x <- structure(list(A = c(5, 3.5, 3.25, 4.25, 1.5 ),
B = c(4.25, 4, 4, 4.5, 4.5 ),
C = c(4.5, 2.5, 4, 2.25, 3 )
),
.Names = c("A", "B", "C"),
class = "data.frame",
row.names = c(NA, -5L)
)
x
#> A B C
#> 1 5.00 4.25 4.50
#> 2 3.50 4.00 2.50
#> 3 3.25 4.00 4.00
#> 4 4.25 4.50 2.25
#> 5 1.50 4.50 3.00
y<-c(A=5, B=4.25, C=4.5)
y
#> A B C
#> 5.00 4.25 4.50
#The slice() verb allows one to subset data row-wise.
x <- x %>% slice(1) #(n) for the nth row, or (i:n) for range i to n, (i:n()) for i to last row...
x
#> A B C
#> 1 5 4.25 4.5
#Test that the items in the row match the vector you wanted
x[1,]==y
#> A B C
#> 1 TRUE TRUE TRUE
Created on 2020-08-06 by the reprex package (v0.3.0)
