Removing rows with the same value in both columns of a file in R

I have a file like this.
1 3
1 2
1 10
1 5
**5 5**
6 7
8 9
4 6
1 2
**10 10**
......
The file contains thousands of rows. I wanted to know: how can I remove the rows that contain the same value in both columns in R (the row containing 5 5 and the row containing 10 10)? I know how to remove duplicate columns or duplicate rows, but how do I go about selectively removing these? Thanks. :)

I would do this with indexing, example with small data frame:
myDf <- data.frame(a=c(3,5,8,6,9,4,3), b=c(3,3,5,8,9,6,4))
myDf <- myDf[myDf$a != myDf$b, ]  # keep only the rows where the two columns differ
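To apply the same idiom directly to the file from the question, something like this should work (a sketch, assuming a whitespace-separated file with no header; the path "myfile.txt" is hypothetical):
dat <- read.table("myfile.txt", header = FALSE)  # columns are named V1 and V2
dat <- dat[dat$V1 != dat$V2, ]  # drop rows where both columns hold the same value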

I would consider writing a helper function like this:
indicator <- function(indf) {
  rowSums(vapply(indf, function(x) x == indf[, 1],
                 logical(nrow(indf)))) == ncol(indf)
}
The function compares each column in the data.frame with its first column, then checks which rowSums equal the number of columns; a row whose sum matches has the same value in every column.
The result is a logical vector that can be used to subset your data.frame.
Example:
mydf <- data.frame(a=c(3,5,8,6,9,4,3),
                   b=c(3,3,5,8,9,6,4),
                   c=c(3,4,5,6,9,7,2))
indicator(mydf)
# [1] TRUE FALSE FALSE FALSE TRUE FALSE FALSE
mydf[!indicator(mydf), ]
# a b c
# 2 5 3 4
# 3 8 5 5
# 4 6 8 6
# 6 4 6 7
# 7 3 4 2

Related

Why does dropping which(FALSE) columns delete all columns?

This answer warns of some scary behavior from which. Specifically, if you take any data frame, say df <- data.frame(x=1:5, y=2:6), and then try to subset it with something that evaluates to which(FALSE) (i.e. integer(0)), then you will delete every column in the data set. Why is this? Why would dropping all columns that correspond to integer(0) delete everything? Deleting nothing shouldn't destroy everything.
Example:
> df <- data.frame(x=1:5, y=2:6)
> df
x y
1 1 2
2 2 3
3 3 4
4 4 5
5 5 6
> df <- df[, -which(FALSE)]
> df
data frame with 0 columns and 5 rows
Consider:
identical(integer(0), -integer(0))
# [1] TRUE
So, actually you're selecting nothing, rather than deleting nothing.
If you want to delete nothing, you can instead negate an index no column can ever have, e.g. the largest possible integer:
df[, -.Machine$integer.max]
# x y
# 1 1 2
# 2 2 3
# 3 3 4
# 4 4 5
# 5 5 6
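If the index comes from which() and might be empty, a more defensive pattern is to guard the subset with a length check, so integer(0) never reaches the negative index (a minimal sketch):
cols_to_drop <- which(FALSE)  # integer(0) here, but could be any which() result
if (length(cols_to_drop) > 0) df <- df[, -cols_to_drop, drop = FALSE]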

Counting 0's, 1's, 99's and NA's for each variable in a data frame

I have a data frame with 118 variables containing 0's, 1's, 99's and NA's. For each variable I need to count how many 99's, NA's, 1's and 0's there are (99 is "does not apply", 0 is "no", 1 is "yes" and NA is "no answer"). I tried to do this with the table function, but it works on vectors; how can I do it for the whole set of variables?
Here is a small reproducible example of the data frame:
forest<-c(1,1,1,1,0,0,0,1,1,1,0,NA,0,NA,0,99,99,1,0,NA)
water<-c(1,NA,NA,NA,NA,99,99,0,0,0,1,1,1,0,0,NA,NA,99,1,0)
rain<-c(1,NA,1,0,1,99,99,0,1,0,1,0,1,0,0,NA,99,99,1,1)
fire<-c(1,0,0,0,1,99,99,NA,NA,NA,1,0,1,0,0,NA,99,99,1,1)
df<-data.frame(forest,water,rain,fire)
And I need the result for each variable written to a data frame, like this:
forest water rain fire
1 8 5 8 6
0 7 6 6 6
99 2 3 4 4
NA 3 6 2 4
Can't find a good dupe, so here's my comment as an answer:
A data frame is really a list of columns. lapply will apply a function to every item in the input (every column, in the case of a data frame) and return a list with each result:
lapply(df, table)
# $forest
#
# 0 1 99
# 7 8 2
#
# $water
#
# 0 1 99
# 6 5 3
#
# $rain
#
# 0 1 99
# 6 8 4
#
# $fire
#
# 0 1 99
# 6 6 4
sapply is like lapply, but it will attempt to simplify the result instead of always returning a list. In both cases, you can pass along additional arguments to the function being applied, like useNA = "always" to table to have NA included in the output:
sapply(df, table, useNA = "always")
# forest water rain fire
# 0 7 6 6 6
# 1 8 5 8 6
# 99 2 3 4 4
# <NA> 3 6 2 4
For lots more info, check out R Grouping functions: sapply vs. lapply vs. apply vs. tapply vs. by vs. aggregate
To compare with some other answers: apply is similar to lapply and sapply, but it is intended for use with matrices or higher-dimensional arrays. The only time you should use apply on a data.frame is when you need to apply a function to each row. For functions on data frame columns, prefer lapply or sapply. The reason is that apply will coerce the data frame to a matrix first, which can have unintended consequences if you have columns of different classes.
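To see the coercion concretely, here is a minimal sketch with a hypothetical mixed-type frame; apply() runs the function on columns of the coerced character matrix, while sapply() sees each column as-is:
mixed <- data.frame(n = 1:3, s = c("a", "b", "c"), stringsAsFactors = FALSE)
apply(mixed, 2, class)  # the numeric column is reported as character
#           n           s
# "character" "character"
sapply(mixed, class)  # each column keeps its real class
#           n           s
#   "integer" "character"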
rbind(sapply(df, table), "NA" = sapply(df, function(y) sum(is.na(y))))
forest water rain fire
0 7 6 6 6
1 8 5 8 6
99 2 3 4 4
NA 3 6 2 4
This should do it:
tables <- apply(df, 2, FUN = table)
There's probably a way to do it in one fell swoop.
apply(df, 2, table)
apply(df, 2, function(x){ sum(is.na(x)) })
As the variables are categorical, you should first turn them into factors:
df[] <- lapply(df, as.factor)  # df[] keeps the result a data.frame
And then, summary your data.frame:
sapply(df, summary)
The factor method of summary() counts the occurrences of each level (and NA's).
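With the example data above, this simplifies to a matrix in the desired layout (a sketch of the expected output, since every column shares the levels 0, 1 and 99):
#      forest water rain fire
# 0         7     6    6    6
# 1         8     5    8    6
# 99        2     3    4    4
# NA's      3     6    2    4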

Smartest way to check if an observation in data.frame(x) exists also in data.frame(y) and populate a new column according with the result

Having two dataframes:
x <- data.frame(numbers=c('1','2','3','4','5','6','7','8','9'), coincidence="NA")
and
y <- data.frame(numbers=c('1','3','10'))
How can I check if the observations in y (1, 3 and 10) also exist in x and fill the column x["coincidence"] accordingly (for example with YES|NO, TRUE|FALSE...)?
I would do the same in Excel with a formula combining IFERROR and VLOOKUP, but I don't know how to do the same with R.
Note:
I am open to changing data.frames to tables or using libraries. The data frame with the numbers to check (y) will never have more than 10-20 observations, while the other one (x) will never have more than 1K observations. Therefore, I could also iterate with an if, if it's necessary.
We can create the vector matching the desired output with a set membership test that returns TRUE and FALSE values where appropriate. The operator %in% is a binary operator that checks whether each value on the left-hand side appears in the set of values on the right:
x$coincidence <- x$numbers %in% y$numbers
# numbers coincidence
# 1 1 TRUE
# 2 2 FALSE
# 3 3 TRUE
# 4 4 FALSE
# 5 5 FALSE
# 6 6 FALSE
# 7 7 FALSE
# 8 8 FALSE
# 9 9 FALSE
Do numbers have to be factors, as you've set them up? (They're not numbers, but character.) If not, it's easy:
x <- data.frame(numbers=c('1','2','3','4','5','6','7','8','9'), coincidence="NA", stringsAsFactors=FALSE)
y <- data.frame(numbers=c('1','3','10'), stringsAsFactors=FALSE)
x$coincidence[x$numbers %in% y$numbers] <- TRUE
> x
numbers coincidence
1 1 TRUE
2 2 NA
3 3 TRUE
4 4 NA
5 5 NA
6 6 NA
7 7 NA
8 8 NA
9 9 NA
If they need to be factors, then you'll need to either set common levels or use as.character().
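If you want literal YES/NO labels rather than logicals, the same membership test feeds straight into ifelse() (a small sketch using the frames defined above):
x$coincidence <- ifelse(x$numbers %in% y$numbers, "YES", "NO")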

How to remove outliers from multiple columns of a data frame

I would like to get a data frame that contains only data that is within 2 SD for each numeric column.
I know how to do it for a single column but how can I do it for a bunch of columns at once?
Here is the toy data frame:
df <- read.table(text = "target birds wolfs Country
3 21 7 a
3 8 4 b
1 2 8 c
1 2 3 a
1 8 3 a
6 1 2 a
6 7 1 b
6 1 5 c",header = TRUE)
Here is the code line for getting only the data that is under 2 SD for a single column (birds). How can I do it for all numeric columns at once?
df[!(abs(df$birds - mean(df$birds))/sd(df$birds)) > 2,]
target birds wolfs Country
2 3 8 4 b
3 1 2 8 c
4 1 2 3 a
5 1 8 3 a
6 6 1 2 a
7 6 7 1 b
8 6 1 5 c
We can use lapply to loop over the dataset columns and subset the numeric vectors (using an if/else condition) based on their mean and sd.
lapply(df, function(x) if(is.numeric(x)) x[!(abs((x-mean(x))/sd(x))>2)] else x)
EDIT:
I was under the impression that we needed to remove the outliers for each column separately. But if we need to keep only the rows that have no outliers in any numeric column, we can loop through the columns with lapply as before; instead of returning 'x', we return the row indices of 'x' that pass, then take the intersection of the list elements with Reduce. The resulting numeric index can be used for subsetting the rows.
lst <- lapply(df, function(x) if(is.numeric(x))
         seq_along(x)[!(abs((x - mean(x))/sd(x)) > 2)] else seq_along(x))
df[Reduce(intersect, lst), ]
I'm guessing that you are trying to filter your data set by checking that all of the numeric columns are within 2 SD (?)
In that case I would suggest creating two filters: one that indicates the numeric columns, and a second that checks that all of them are within 2 SD. For the second condition, we can use the built-in scale function:
indx <- sapply(df, is.numeric)
indx2 <- rowSums(abs(scale(df[indx])) <= 2) == sum(indx)
df[indx2,]
# target birds wolfs Country
# 2 3 8 4 b
# 3 1 2 8 c
# 4 1 2 3 a
# 5 1 8 3 a
# 6 6 1 2 a
# 7 6 7 1 b
# 8 6 1 5 c
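For reference, scale() centers each column on its mean and divides by its standard deviation, so the test above is the same z-score check as the single-column version (a quick sketch on the birds column):
all.equal(as.vector(scale(df$birds)),
          (df$birds - mean(df$birds)) / sd(df$birds))
# [1] TRUE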

How to delete rows from a dataframe that contain n*NA

I have a number of large datasets with ~10 columns and ~200000 rows. Not all columns contain values for each row, although at least one column must contain a value for the row to be present. I would like to set a threshold for how many NAs are allowed in a row.
My Dataframe looks something like this:
ID q r s t u v w x y z
A 1 5 NA 3 8 9 NA 8 6 4
B 5 NA 4 6 1 9 7 4 9 3
C NA 9 4 NA 4 8 4 NA 5 NA
D 2 2 6 8 4 NA 3 7 1 32
And I would like to be able to delete the rows that contain more than 2 cells containing NA to get
ID q r s t u v w x y z
A 1 5 NA 3 8 9 NA 8 6 4
B 5 NA 4 6 1 9 7 4 9 3
D 2 2 6 8 4 NA 3 7 1 32
complete.cases removes all rows containing any NA, and I know one can delete rows that contain NA in certain columns, but is there a way to make it non-specific about which columns contain NA and instead set a limit on how many NAs a row may contain in total?
Alternatively, this dataframe is generated by merging several dataframes using
file1<-read.delim("~/file1.txt")
file2<-read.delim(file=args[1])
file1<-merge(file1,file2,by="chr.pos",all=TRUE)
Perhaps the merge function could be altered?
Thanks
Use rowSums. To remove rows from a data frame (df) that contain precisely n NA values:
df <- df[rowSums(is.na(df)) != n, ]
or to remove rows that contain n or more NA values:
df <- df[rowSums(is.na(df)) < n, ]
In both cases, of course, replace n with the number required.
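Applied to the question's data, keeping rows with at most two NAs (a sketch, assuming the table above has been read into df):
df <- df[rowSums(is.na(df)) <= 2, ]  # keeps rows A, B and D; drops C, which has 4 NAs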
If dat is the name of your data.frame the following will return what you're looking for:
keep <- rowSums(is.na(dat)) < 2
dat <- dat[keep, ]
What this is doing:
is.na(dat)
# returns a matrix of T/F
# note that when adding logicals
# T == 1, and F == 0
rowSums(.)
# quickly computes the total per row
# since your task is to identify the
# rows with a certain number of NA's
rowSums(.) < 2
# for each row, determine if the sum
# (which is the number of NAs) is less
# than 2 or not. Returns T/F accordingly
We use the output of this last statement to identify which rows to keep. Note that it is not necessary to actually store this last logical.
If d is your data frame, try this:
d <- d[rowSums(is.na(d)) < 2,]
This will return a dataset where at most two values per row are missing:
dfrm[ apply(dfrm, 1, function(r) sum(is.na(r)) <= 2 ), ]
