I have a data frame with two columns. The first column defines subsets of the data. I want to find all values in the second column that only appear in one subset in the first column.
For example, from:
df=data.frame(
data_subsets=rep(LETTERS[1:2],each=5),
data_values=c(1,2,3,4,5,2,3,4,6,7))
data_subsets data_values
A 1
A 2
A 3
A 4
A 5
B 2
B 3
B 4
B 6
B 7
I would want to extract the following data frame.
data_subsets data_values
A 1
A 5
B 6
B 7
I have been playing around with duplicated but I just can't seem to make it work. Any help is appreciated. There are a number of topics tackling similar problems, I hope I didn't overlook the answer in my searches!
EDIT
I modified the counting approach from @Matthew Lundberg of counting the number of elements and extracting from the data frame. For some reason his approach was not working with the data frame I had, so I came up with this, which is less elegant but gets the job done:
counts=rowSums(do.call("rbind",tapply(df$data_subsets,df$data_values,FUN=table)))
extract=names(counts)[counts==1]
df[match(extract,df$data_values),]
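For what it's worth, those per-value counts can also be computed in one step with base R's ave(), which returns a group statistic aligned with the original rows; a sketch against the example df above:

```r
df <- data.frame(
  data_subsets = rep(LETTERS[1:2], each = 5),
  data_values  = c(1, 2, 3, 4, 5, 2, 3, 4, 6, 7))

# per-row count of how often each data_values entry occurs anywhere
counts <- ave(seq_along(df$data_values), df$data_values, FUN = length)
df[counts == 1, ]  # rows 1, 5, 9, 10: the values 1, 5, 6, 7
```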
First, find the count of each element in df$data_values:
x <- sapply(df$data_values, function(x) sum(as.numeric(df$data_values == x)))
> x
[1] 1 2 2 2 1 2 2 2 1 1
Now extract the rows:
> df[x==1,]
data_subsets data_values
1 A 1
5 A 5
9 B 6
10 B 7
Note that you missed "A 5" above. There is no "B 5".
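For larger vectors, a table()-based lookup gives the same counts without re-scanning the whole column once per element; a sketch:

```r
df <- data.frame(
  data_subsets = rep(LETTERS[1:2], each = 5),
  data_values  = c(1, 2, 3, 4, 5, 2, 3, 4, 6, 7))

# tabulate each distinct value once, then look the counts up per row
tab <- table(df$data_values)
x <- as.vector(tab[as.character(df$data_values)])
df[x == 1, ]
```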
You had the right idea with duplicated. The trick is to combine fromLast = TRUE and fromLast = FALSE options to get a full list of non-duplicated rows.
!duplicated(df$data_values,fromLast = FALSE)&!duplicated(df$data_values,fromLast = TRUE)
[1] TRUE FALSE FALSE FALSE TRUE FALSE FALSE FALSE TRUE TRUE
Indexing your data.frame with this vector gives:
df[!duplicated(df$data_values,fromLast = FALSE)&!duplicated(df$data_values,fromLast = TRUE),]
data_subsets data_values
1 A 1
5 A 5
9 B 6
10 B 7
A variant of P Lapointe's answer would be
df[! df$data_values %in% unique(df)[duplicated( unique(df)$data_values ), ]$data_values,]
The unique() deals with the possibility (not in your test data) that some rows in the data may be identical and you want to keep them once if the same data_values does not appear for distinct data_sets (or distinct other columns).
You can use the 'dplyr' and 'explore' libraries to look into this problem.
library(dplyr)
library(explore)
df=data.frame(
data_subsets=rep(LETTERS[1:2],each=5),
data_values=c(1,2,3,4,5,2,3,4,6,7))
df %>% describe(data_subsets)
######## output ########
#variable = data_subsets
#type = character
#na = 0 of 10 (0%)
#unique = 2
# A = 5 (50%)
# B = 5 (50%)
Please help me first to make a clearer subject for this question. The point is that I don't know the correct R terminology for what I need here. Is "join" the correct word?
set.seed(0)
df <- data.frame(a = sample(c(T,F), 10, replace=TRUE),
b = sample(c(T,F), 10, replace=TRUE),
c = sample(c(T,F), 10, replace=TRUE),
d = sample(c(T,F), 10, replace=TRUE))
a <- addmargins(table(df$a))
b <- addmargins(table(df$b))
c <- addmargins(table(df$c))
d <- addmargins(table(df$d))
This is the data
FALSE TRUE Sum
7 3 10
FALSE TRUE Sum
4 6 10
FALSE TRUE Sum
4 6 10
FALSE TRUE Sum
5 5 10
And what I want is to make the data look like this
FALSE TRUE Sum
a 7 3 10
b 4 6 10
c 4 6 10
d 5 5 10
Sounds simple, doesn't it? I was using ddply in the past, but here I don't see how to use ddply or anything else.
Here is a simple one-liner that runs the table command and then adds the margins:
addmargins(t(sapply(df, table)))
#or this for just the row sums:
addmargins(t(sapply(df, table)), 2)
sapply to apply the table function to each column,
t to transpose the results, and
addmargins for the row/column sums.
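One caveat: sapply(df, table) only simplifies to a matrix when every column tabulates to the same length, i.e. every column contains both a FALSE and a TRUE. Forcing both levels with factor() guards against a column that happens to be all one value; a defensive sketch:

```r
set.seed(0)
df <- data.frame(a = sample(c(TRUE, FALSE), 10, replace = TRUE),
                 b = sample(c(TRUE, FALSE), 10, replace = TRUE),
                 c = sample(c(TRUE, FALSE), 10, replace = TRUE),
                 d = sample(c(TRUE, FALSE), 10, replace = TRUE))

# force both levels so every column's table has length 2
m <- t(sapply(df, function(x) table(factor(x, levels = c(FALSE, TRUE)))))
addmargins(m, 2)  # row sums only, as in the one-liner above
```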
This is just stacking rows, so you want rbind (for "binding" rows together; cbind is the equivalent for columns).
rbind(a, b, c, d)
# FALSE TRUE Sum
# a 7 3 10
# b 4 6 10
# c 4 6 10
# d 5 5 10
A join is typically done when you have some shared columns but some different columns, and you want to combine the data such that the shared columns line up, and the different corresponding different columns are kept. For example, if you had one data frame of people and addresses, and another data frame of people and orders, you would join them together to see which address goes with which order. In base R, joins are done with the merge command.
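A minimal sketch of such a join with merge(); the table and column names here are made up for illustration:

```r
people_addr   <- data.frame(person  = c("ann", "bob"),
                            address = c("1 Elm St", "2 Oak Ave"))
people_orders <- data.frame(person = c("ann", "ann", "bob"),
                            order  = c("book", "lamp", "desk"))

# join on the shared 'person' column; each order row picks up its address
merge(people_addr, people_orders, by = "person")
```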
I have a data frame of 6449x743 in which a few rows repeat twice with the same column_X and column_Y values but a higher column_Z value on the second occurrence. I want to keep only the row with the higher column_Z.
I tried following, but this doesn't get rid of duplicate values and gives me output of 6449x743 only.
output <- unique(Data[,c('column_X', 'column_Y', max('column_Z'))])
Ideally, the output should be (6449 - N)x743: the number of rows will shrink but the number of columns will stay the same, as column_X and column_Y become unique after filtering on column_Z.
If anyone has suggestions, please let me know. Thanks.
You can use not-duplicated (!duplicated) with the option fromLast = TRUE on specific columns, like this:
df <- data.frame(a=c(1,1,2,3,4),b=c(2,2,3,4,5),c=1:5)
df <- df[order(df$c),] #make sure the data is sorted.
a b c
1 1 2 1
2 1 2 2
3 2 3 3
4 3 4 4
5 4 5 5
df[!duplicated(df$a,fromLast = TRUE) & !duplicated(df$b,fromLast = TRUE),]
a b c
2 1 2 2
3 2 3 3
4 3 4 4
5 4 5 5
Try
library(dplyr)
Data %>%
group_by(column_X, column_Y) %>%
filter(column_Z == max(column_Z))
It works with the sample data
set.seed(13)
df <- tibble(a = sample(1:4, 50, replace = TRUE),
             b = sample(1:3, 50, replace = TRUE),
             x = runif(50), y = rnorm(50))
df %>% group_by(a, b) %>% filter(x == max(x))
Probably the easiest way would be to order the whole thing by column_Z and then remove the duplicates:
output <- Data[order(Data$column_Z, decreasing=TRUE),]
output <- output[!duplicated(paste(output$column_X, output$column_Y)),]
assuming I understood you correctly.
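A runnable sketch of that order-then-deduplicate idea on toy data, using the column names from the question:

```r
Data <- data.frame(column_X = c(1, 1, 2),
                   column_Y = c("a", "a", "b"),
                   column_Z = c(5, 9, 3))

# sort so the highest column_Z comes first within each X/Y pair,
# then keep only the first occurrence of each pair
output <- Data[order(Data$column_Z, decreasing = TRUE), ]
output <- output[!duplicated(paste(output$column_X, output$column_Y)), ]
output  # the (1, "a") pair keeps column_Z = 9
```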
Here's an older answer which may be trying to accomplish the same thing that you are:
How to make a unique in R by column A and keep the row with maximum value in column B
Editing with relevant code:
A solution using package data.table:
set.seed(42)
dat <- data.frame(A=c('a','a','a','b','b'),B=c(1,2,3,5,200),C=rnorm(5))
library(data.table)
dat <- as.data.table(dat)
dat[,.SD[which.max(B)],by=A]
A B C
1: a 3 0.3631284
2: b 200 0.4042683
I've got one I can't resolve.
Example dataset:
company <- c("compA","compB","compC")
compA <- c(1,2,3)
compB <- c(2,3,1)
compC <- c(3,1,2)
df <- data.frame(company,compA,compB,compC)
I want to create a new column containing the value from the column whose name appears in the "company" column of the same row. The resulting extraction would be:
df$new <- c(1,3,2)
df
The way you have it set up, there's one row and one column for every company, and the rows and columns are in the same order. If that's your real dataset, then, as others have said, diag(...) is the solution (and you should accept that answer).
If your real dataset has more than one instance of a company (e.g., more than one row per company), then this is more general:
# using your df
sapply(1:nrow(df),function(i)df[i,as.character(df$company[i])])
# [1] 1 3 2
# more complex case
set.seed(1) # for reproducible example
newdf <- data.frame(company=LETTERS[sample(1:3,10,replace=T)],
A = sample(1:3,10,replace=T),
B=sample(1:5,10,replace=T),
C=1:10)
head(newdf)
# company A B C
# 1 A 1 5 1
# 2 B 1 2 2
# 3 B 3 4 3
# 4 C 2 1 4
# 5 A 3 2 5
# 6 C 2 2 6
sapply(1:nrow(newdf),function(i)newdf[i,as.character(newdf$company[i])])
# [1] 1 2 4 4 3 6 7 2 5 3
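An alternative to the sapply loop is base R matrix indexing with (row, column) pairs, which is fully vectorized. The numeric columns are pulled into a matrix first so the character company column doesn't force a type conversion; a sketch against the original df:

```r
df <- data.frame(company = c("compA", "compB", "compC"),
                 compA = c(1, 2, 3),
                 compB = c(2, 3, 1),
                 compC = c(3, 1, 2))

# index the numeric block with one (row, column) pair per row
m <- as.matrix(df[, -1])
df$new <- m[cbind(seq_len(nrow(df)), match(as.character(df$company), colnames(m)))]
df$new  # 1 3 2
```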
EDIT: eddi's answer is probably better. It is more likely that you would have the dataframe to work with rather than the individual row vectors.
I am not sure I understand your question; it is unclear from your description. But it seems you are asking for the diagonal of the data values, since that is where the name is in the column "company" of the same line. The following will do this:
df$new <- diag(matrix(c(compA,compB,compC), nrow = 3, ncol = 3))
The diag function will return the diagonal of the matrix for you. So I first concatenated the three original vectors into one vector and then specified it to be wrapped into a matrix of three rows and three columns. Then I took the diagonal. The whole thing is then added to the dataframe.
Did that answer your question?
ETA: the point of the below, by the way, is to avoid iterating through my entire set of column vectors, just in case that was a proposed solution (i.e., just doing what is known to work, one column at a time).
There's plenty of examples of replacing values in a single vector of a data frame in R with some other value.
Replace a value in a data frame based on a conditional (if) statement in R
replace numbers in data frame column in r [duplicate]
And also how to replace all values of NA with something else:
How to replace all values in a data.frame with another ( not 0) value
What I'm looking for is analogous to the last question, but basically trying to replace one value with another. I'm having trouble generating a data frame of logical values mapped to my actual data frame for cases where multiple columns meet a criterion, or simply doing the actions from the first two questions on more than one column.
An example:
data <- data.frame(name = rep(letters[1:3], each = 3), var1 = rep(1:9), var2 = rep(3:5, each = 3))
data
name var1 var2
1 a 1 3
2 a 2 3
3 a 3 3
4 b 4 4
5 b 5 4
6 b 6 4
7 c 7 5
8 c 8 5
9 c 9 5
And say I want all of the values of 4 in var1 and var2 to be 10.
I'm sure this is elementary and I'm just not thinking through it properly. I have been trying things like:
data[data[, 2:3] == 4, ]
That doesn't work, but if I do the same with data[, 2] instead of data[, 2:3], things work fine. It seems that logical tests (like is.na()) work on multiple rows/columns, but numerical comparisons aren't playing as nicely?
Thanks for any suggestions!
You want to search through the whole data frame for any value that matches the one you're trying to replace. The same way you can run a logical test to replace all missing values with 10:
data[ is.na( data ) ] <- 10
You can also replace all 4s with 10s:
data[ data == 4 ] <- 10
At least I think that's what you're after?
And let's say you wanted to ignore the first column (since it's all letters):
# identify which columns contain the values you might want to replace
data[ , 2:3 ]
# subset it with extended bracketing..
data[ , 2:3 ][ data[ , 2:3 ] == 4 ]
# ..those were the values you're going to replace
# now overwrite 'em with tens
data[ , 2:3 ][ data[ , 2:3 ] == 4 ] <- 10
# look at the final data
data
Basically data[, 2:3] == 4 gave you an index into data[, 2:3], not into data:
R > data[, 2:3] ==4
var1 var2
[1,] FALSE FALSE
[2,] FALSE FALSE
[3,] FALSE FALSE
[4,] TRUE TRUE
[5,] FALSE TRUE
[6,] FALSE TRUE
[7,] FALSE FALSE
[8,] FALSE FALSE
[9,] FALSE FALSE
So you may try this:
R > data[,2:3][data[, 2:3] ==4]
[1] 4 4 4 4
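Another way to sidestep the index-alignment issue entirely is to work column by column with lapply() and replace(); a sketch on the example data:

```r
data <- data.frame(name = rep(letters[1:3], each = 3),
                   var1 = 1:9,
                   var2 = rep(3:5, each = 3))

# replace 4 with 10 independently in each target column
data[2:3] <- lapply(data[2:3], function(col) replace(col, col == 4, 10))
data
```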
Just to provide a different answer, I thought I would write up a vector-math approach:
You can create a transformation matrix (really a data frame here, but it will work the same) using the vectorized ifelse statement, then multiply the transformation matrix by your original data, like so:
df.Rep <- function(.data_Frame, .search_Columns, .search_Value, .sub_Value){
.data_Frame[, .search_Columns] <- ifelse(.data_Frame[, .search_Columns]==.search_Value,.sub_Value/.search_Value,1) * .data_Frame[, .search_Columns]
return(.data_Frame)
}
To replace all values 4 with 10 in the data frame 'data' in columns 2 through 3, you would use the function like so:
# Either of these will work. I'm just showing options.
df.Rep(data, 2:3, 4, 10)
df.Rep(data, c("var1","var2"), 4, 10)
# name var1 var2
# 1 a 1 3
# 2 a 2 3
# 3 a 3 3
# 4 b 10 10
# 5 b 5 10
# 6 b 6 10
# 7 c 7 5
# 8 c 8 5
# 9 c 9 5
Just for continuity
data[,2:3][ data[,2:3] == 4 ] <- 10
But it looks ugly, so doing it in two steps is better.
I'm not sure how to do this without getting an error. Here is a simplified example of my problem.
Say I have this data frame DF
a b c d
1 2 3 4
2 3 4 5
3 4 5 6
Then I have a variable
x <- min(c(1,2,3))
Now I want do do the following
y <- DF[a == x]
But when I try to refer to a variable like "x", I get an error because R is looking for a column "x" in my data frame: the "undefined columns selected" error.
How can I do what I am trying to do in R?
You may benefit from reading "An Introduction to R", especially the sections on matrices, data.frames and indexing. Your a is a column of a data.frame; your x is a scalar. The comparison you have there does not work.
Maybe you meant
R> DF$a == min(c(1,2,3))
[1] TRUE FALSE FALSE
R> DF[,"a"] == min(c(1,2,3))
[1] TRUE FALSE FALSE
R>
which tells you that the first row fits but not the other two. Wrapping this in which() gives you indices instead.
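For instance, with a toy DF matching the question, wrapping the comparison in which() returns the row indices; a quick sketch:

```r
DF <- data.frame(a = 1:3, b = 2:4, c = 3:5, d = 4:6)

x <- min(c(1, 2, 3))
which(DF$a == x)  # row indices where a equals the minimum
```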
I think this is what you're looking for:
> x <- min(DF$a)
> DF[DF$a == x,]
a b c d
1 1 2 3 4
An easier way (avoiding the 'x' variable) would be this:
> DF[which.min(DF$a),]
a b c d
1 1 2 3 4
or this:
> subset(DF, a==min(a))
a b c d
1 1 2 3 4