How to remove specific digits in dataframe columns in R?

I have two data frames, d1 and d2, containing census data from 2010. I want to merge them using a common attribute:
merge (d1, d2, by.x="GEOID", by.y= "GISJOIN")
d1 has the common id as GEOID (e.g. 310019654001), while d2 has the same id as GISJOIN (e.g. 31000109654001). I need to remove the "0" at the 3rd and the 7th position in the GISJOIN attribute. How can I do that in R?
I split the values using
splitted <- as.data.frame(t(sapply(d2$GISJOIN, function(x) substring(x, first=c(1,4,8), last=c(2,6,14)))))
splitted$v4 <- (paste(splitted$V1, splitted$V2, splitted$V3))
v4 holds character values; when I call as.numeric() on it I get:
Warning message:
NAs introduced by coercion

Too long to type as a comment. Using the only example you have provided, plus another invented one, to show you don't need the sapply():
d2 = data.frame(GISJOIN=c("31000109654001","12345678910112"))
d2$GISJOIN = as.character(d2$GISJOIN)
What you have now:
splitted <- as.data.frame(t(sapply(d2$GISJOIN, function(x) substring(x, first=c(1,4,8), last=c(2,6,14)))))
splitted$v4 <- (paste(splitted$V1, splitted$V2, splitted$V3))
                V1  V2      V3              v4
31000109654001  31 001 9654001 31 001 9654001
12345678910112  12 456 8910112 12 456 8910112
The new string still has spaces in between, hence the NAs when you convert with as.numeric(). Below I just split each value into characters and exclude positions 3 and 7:
d2$new = lapply(strsplit(d2$GISJOIN, ""), function(i) {
  paste(i[-c(3,7)], collapse = "")
})
as.numeric(d2$new)
[1] 310019654001 124568910112
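As a vectorized alternative (not from the original answer; just a sketch assuming every GISJOIN value has the same fixed 14-character layout), a single sub() call can drop the 3rd and 7th characters directly:
# "." matches the character being dropped, whatever it is;
# the first 7 characters are rewritten without positions 3 and 7
as.numeric(sub("^(.{2}).(.{3}).", "\\1\\2", d2$GISJOIN))
# [1] 310019654001 124568910112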

Related

R: How do you subset all data-frames within a list?

I have a list of data-frames called WaFramesCosts. I want to simply subset it to show specific columns so that I can then export them. I have tried:
for (i in names(WaFramesCosts)) {
  WaFramesCosts[[i]][, c("Cost_Center", "Domestic_Anytime_Min_Used", "Department",
                         "Domestic_Anytime_Min_Used")]
}
but it returns the error of
Error in `[.data.frame`(WaFramesCosts[[i]], , c("Cost_Center", "Department", :
undefined columns selected
I also tried:
for (i in seq_along(WaFramesCosts)) {
  WaFramesCosts[[i]][, -which(names(WaFramesCosts[[i]]) %in% c("Cost_Center", "Domestic_Anytime_Min_Used",
                                                               "Department", "Domestic_Anytime_Min_Used"))]
}
but I get the same error. Can anyone see what I am doing wrong?
Side Note: For reference, I used this:
for (i in seq_along(WaFramesCosts)) {
  t <- WaFramesCosts[[i]][, grepl("Domestic", names(WaFramesCosts[[i]]))]
  q <- subset(WaFramesCosts[[i]], select = c("Cost_Center", "Domestic_Anytime_Min_Used", "Department", "Domestic_Anytime_Min_Used"))
  WaFramesCosts[[i]] <- merge(q, t)
}
while attempting the same goal with a different approach, and it seemed to get closer.
Welcome back, Kootseeahknee. You are still incorrectly assuming that the last command of a for loop is implicitly returned at the end. If you want that behavior, perhaps you want lapply:
myoutput <- lapply(names(WaFramesCosts), function(i) {
  WaFramesCosts[[i]][, c("Cost_Center", "Domestic_Anytime_Min_Used", "Department", "Domestic_Anytime_Min_Used")]
})
The undefined columns selected error tells me that your assumptions about the datasets are not correct: at least one of them is missing at least one of the columns. From your previous question (How to do a complex edit of columns of all data frames in a list?), I'm inferring that you want the columns that match, not assuming that every column is present in every frame. For that, you could/should be using grep or some variant:
myoutput <- lapply(names(WaFramesCosts), function(i) {
  WaFramesCosts[[i]][, grep("(Cost_Center|Domestic_Anytime_Min_Used|Department)",
                            colnames(WaFramesCosts[[i]])), drop=FALSE]
})
This will match column names that contain any of those strings. You can be a lot more precise by anchoring the patterns so that whole-string or start/end matches occur. For instance, changing from (Cost|Dom) (anything that contains "Cost" or "Dom") to (^Cost|Dom) means anything that starts with "Cost" or contains "Dom"; similarly, (Cost|ment$) matches anything that contains "Cost" or ends with "ment" (a short demo follows the next code block). If, however, you always want exact matches and just need those that exist, then something like this will work:
myoutput <- lapply(names(WaFramesCosts), function(i) {
  WaFramesCosts[[i]][, intersect(c("Cost_Center", "Domestic_Anytime_Min_Used", "Department"),
                                 colnames(WaFramesCosts[[i]])), drop=FALSE]
})
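To illustrate the anchoring described above, a quick sketch with invented column names (not from the question):
nms <- c("Cost_Center", "Domestic_Min", "Total_Cost", "Department")
grep("(Cost|Dom)", nms, value = TRUE)   # contains "Cost" or "Dom"
# [1] "Cost_Center"  "Domestic_Min" "Total_Cost"
grep("(^Cost|Dom)", nms, value = TRUE)  # starts with "Cost" or contains "Dom"
# [1] "Cost_Center"  "Domestic_Min"
grep("(Cost|ment$)", nms, value = TRUE) # contains "Cost" or ends with "ment"
# [1] "Cost_Center" "Total_Cost"  "Department"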
Note, in the intersect() example above: notice the difference between mtcars[,2] (returns a vector) and mtcars[,2,drop=FALSE] (returns a data.frame with one column). Defensive programming: if you think it at all possible that your filtering will return a single column, make sure you do not inadvertently convert it to a vector by appending ,drop=FALSE to your bracket-subsetting.
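A minimal illustration of that difference, using the built-in mtcars data:
str(mtcars[, 2])               # comma: a plain numeric vector
# num [1:32] 6 6 4 6 8 6 8 4 4 6 ...
str(mtcars[, 2, drop = FALSE]) # comma plus drop=FALSE: still a data.frame
# 'data.frame': 32 obs. of  1 variable:
#  $ cyl: num  6 6 4 6 8 6 8 4 4 6 ...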
Based on your description, here is an example of using the dplyr library to combine a list of data frames for a given set of columns. This doesn't require all data frames to have identical columns. (Providing your data in a reproducible example would be better.)
# test data
df1 = read.table(text = "
c1 c2 c3
a 1 101
b 2 102
", header = TRUE, stringsAsFactors = FALSE)
df2 = read.table(text = "
c1 c2 c3
w 11 201
x 12 202
", header = TRUE, stringsAsFactors = FALSE)
# dfs is a list of data frames
dfs <- list(df1, df2)
# use dplyr::bind_rows
library(dplyr)
cols <- c("c1", "c3")
result <- bind_rows(dfs)[cols]
result
#   c1  c3
# 1  a 101
# 2  b 102
# 3  w 201
# 4  x 202
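To illustrate the point about non-identical columns: if one frame lacks a requested column, bind_rows() fills it with NA. A small invented variation:
# df3 deliberately has no c3 column
df3 <- data.frame(c1 = "z", c2 = 13, stringsAsFactors = FALSE)
bind_rows(list(df1, df3))[cols]
#   c1  c3
# 1  a 101
# 2  b 102
# 3  z  NA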

Parsing a string efficiently

So I've got a column in my data frame that is essentially one long character string used to encode information about several variables for each record. It might look something like this:
string<-c('001034002025003996','001934002199004888')
But much longer.
The strings are structured so that every 6 characters belong together. So you can look at the strings above like this:
001034 002025 003996
001934 002199 004888
The first three characters of each group are a code corresponding to a certain variable, and the next three are the value of that variable. So the above can be broken down into columns that look like this:
  var001 var002 var003 var004
1    034    025    996     NA
2    934    199     NA    888
I need a way to parse this string and return a data frame with the expanded columns.
I wrote a nested loop that looks like this:
for (i in 1:length(string)) {
  text <- string[i]
  for (j in seq(1, 505, 6)) {
    var <- substr(text, j, j + 2)
    var.value <- substr(text, j + 3, j + 5)
    index <- as.numeric(var)
    df[i, index] <- var.value
  }
}
where df is an empty data frame created to receive the data. This works, but is slow on larger amounts of data. Is there a better way to do this?
1) This one-liner produces a character matrix (which can easily be converted to a data.frame if need be). No packages are used.
read.dcf(textConnection(gsub("(...)(...)", "\\1: \\2\n", string)))
giving:
     001   002   003   004
[1,] "034" "025" "996" NA
[2,] "934" "199" NA    "888"
2) This alternative produces the same matrix. The read.table produces a long form data.frame and then tapply reshapes it to a wide matrix.
long <- read.table(text = gsub("(...)(...)", "\\1 \\2\n", string),
                   colClasses = "character", col.names = c("id", "var"))
tapply(long$var, list(gl(length(string), nchar(string[1])/6), long$id), c)
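For reference, the intermediate long-form data frame produced by read.table() looks like this before tapply() spreads it into the wide matrix:
long
#    id var
# 1 001 034
# 2 002 025
# 3 003 996
# 4 001 934
# 5 002 199
# 6 004 888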

Keep doubled columns which differ in only 2 letters in a data.frame

I have a data frame in R which consists of around 100 columns. Most of the columns come in pairs whose names differ only in the last 2 letters. I want to keep these paired columns and delete those columns which are not paired.
Here is an example:
234-rgz SK 234-rgz PV 556-gft SK 456-hjk SK 456-hjk PV
The Output should be:
234-rgz SK 234-rgz PV 456-hjk SK 456-hjk PV
All columns follow the same naming convention: a number from 2 to 150, then a "-", after this 4 or 5 letters, then a space, and then "SK" or "PV". I thought of using a regular expression, but that still doesn't solve the problem of how to get rid of the single columns. Thanks for your help!
You can use duplicated on the column names after removing the suffix part. The output will be a logical index which can be used to subset the original dataset.
v1 <- colnames(df1)
v2 <- sub('\\s+[^ ]+$', '', v1)
indx <- duplicated(v2)|duplicated(v2, fromLast=TRUE)
v1[indx]
#[1] "234-rgz SK" "234-rgz PV" "456-hjk SK" "456-hjk PV"
To subset the columns in the dataframe,
df1[indx]
Another option is splitting the column-name strings into substrings and using grep to match the substrings that have a frequency > 1:
tbl <- table(unlist(strsplit(v1, '\\s+.*')))
df1[grep(paste(names(tbl)[tbl>1], collapse="|"), v1)]
data
set.seed(24)
df1 <- as.data.frame(matrix(sample(0:9, 5*10, replace=TRUE), ncol=5,
         dimnames=list(NULL, c('234-rgz SK', '234-rgz PV', '556-gft SK',
                               '456-hjk SK', '456-hjk PV'))))

lapply on single column in data frame

I have a data frame which I populate from a csv file as follows (data for sample only) :
> csv_data <- read.csv('test.csv')
> csv_data
  gender country income
1      1      20  10000
2      2      20  12000
3      2      23   3000
I want to convert country to factor. However, when I do the following, it fails:
> csv_data[,2] <- lapply(csv_data[,2], factor)
Warning message:
In `[<-.data.frame`(`*tmp*`, , 2, value = list(1L, 1L, 1L)) :
provided 3 variables to replace 1 variables
However, if I convert both gender and country to factor, it succeeds :
> csv_data[,1:2] <- lapply(csv_data[,1:2], factor)
> is.factor(csv_data[,1])
[1] TRUE
> is.factor(csv_data[,2])
[1] TRUE
Is there something I am doing wrong? I want to use lapply since I want to programmatically convert the columns into factors, and it is possible that the number of columns to be converted is only 1 (it could be more as well; the number is driven by arguments to a function). Is there any way I can do it using lapply only?
When subsetting for one single column, you'll need to change it slightly.
There's a big difference between
lapply(df[,2], factor)
and
lapply(df[2], factor)
## and/or
lapply(df[, 2, drop=FALSE], factor)
Have a look at the output of each. If you remove the comma, everything should work fine. Using the comma in [,] turns a single column into a vector, and therefore each element in the vector is factored individually, whereas leaving it out keeps the column in a data.frame (a list), which is what you want to give to lapply in this situation. Alternatively, if you use drop=FALSE, you can leave the comma in and the column will remain a data.frame.
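A quick look at what each form returns, using the csv_data from the question:
str(csv_data[, 2])  # the comma gives a plain vector
# int [1:3] 20 20 23
str(csv_data[2])    # no comma keeps a one-column data.frame
# 'data.frame': 3 obs. of  1 variable:
#  $ country: int  20 20 23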
No good:
df[,2] <- lapply(df[,2], factor)
# Warning message:
# In `[<-.data.frame`(`*tmp*`, , 2, value = list(1L, 1L, 1L)) :
# provided 3 variables to replace 1 variables
Succeeds on a single column:
df[,2] <- lapply(df[,2,drop=FALSE], factor)
df[,2]
# [1] 20 20 23
# Levels: 20 23
In my opinion, the best way to subset data frame columns is without the comma. This also succeeds:
df[2] <- lapply(df[2], factor)
df[[2]]
# [1] 20 20 23
# Levels: 20 23

Remove an entire column from a data.frame in R

Does anyone know how to remove an entire column from a data.frame in R? For example if I am given this data.frame:
> head(data)
   chr       genome region
1 chr1 hg19_refGene    CDS
2 chr1 hg19_refGene   exon
3 chr1 hg19_refGene    CDS
4 chr1 hg19_refGene   exon
5 chr1 hg19_refGene    CDS
6 chr1 hg19_refGene   exon
and I want to remove the 2nd column.
You can set it to NULL.
> Data$genome <- NULL
> head(Data)
   chr region
1 chr1    CDS
2 chr1   exon
3 chr1    CDS
4 chr1   exon
5 chr1    CDS
6 chr1   exon
As pointed out in the comments, here are some other possibilities:
Data[2] <- NULL # Wojciech Sobala
Data[[2]] <- NULL # same as above
Data <- Data[,-2] # Ian Fellows
Data <- Data[-2] # same as above
You can remove multiple columns via:
Data[1:2] <- list(NULL) # Marek
Data[1:2] <- NULL # does not work!
Be careful with matrix-subsetting though, as you can end up with a vector:
Data <- Data[,-(2:3)] # vector
Data <- Data[,-(2:3),drop=FALSE] # still a data.frame
To remove one or more columns by name, when the column names are known (as opposed to being determined at run-time), I like the subset() syntax. E.g. for the data-frame
Data <- data.frame(a=1:3, d=2:4, c=3:5, b=4:6)
to remove just the a column you could do
Data <- subset( Data, select = -a )
and to remove the b and d columns you could do
Data <- subset( Data, select = -c(d, b ) )
You can remove all columns between d and b with:
Data <- subset( Data, select = -c( d : b ) )
As I said above, this syntax works only when the column names are known. It won't work when, say, the column names are determined programmatically (i.e. assigned to a variable). I'll reproduce this Warning from the ?subset documentation:
Warning:
This is a convenience function intended for use interactively.
For programming it is better to use the standard subsetting
functions like '[', and in particular the non-standard evaluation
of argument 'subset' can have unanticipated consequences.
(For completeness) If you want to remove columns by name, you can do this:
cols.dont.want <- "genome"
cols.dont.want <- c("genome", "region") # if you want to remove multiple columns
data <- data[, ! names(data) %in% cols.dont.want, drop = F]
Including drop = F ensures that the result will still be a data.frame even if only one column remains.
The posted answers are very good when working with data.frames. However, these tasks can be pretty inefficient from a memory perspective. With large data, removing a column can take an unusually long amount of time and/or fail due to out of memory errors. Package data.table helps address this problem with the := operator:
library(data.table)
> dt <- data.table(a = 1, b = 1, c = 1)
> dt[,a:=NULL]
     b c
[1,] 1 1
I should put together a bigger example to show the differences. I'll update this answer at some point with that.
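In the meantime, here is a rough sketch of the kind of comparison meant (the table size is an arbitrary assumption; run it yourself for timings):
library(data.table)
n  <- 1e7
df <- data.frame(a = runif(n), b = runif(n), c = runif(n))
dt <- as.data.table(df)
system.time(df$a <- NULL)    # base R modifies a copy of the data.frame
system.time(dt[, a := NULL]) # data.table removes the column by reference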
There are several options for removing one or more columns with dplyr::select() and some helper functions. The helper functions can be useful because some do not require naming all the specific columns to be dropped. Note that to drop columns using select() you need to use a leading - to negate the column names.
Using the dplyr::starwars sample data for some variety in column names:
library(dplyr)
starwars %>%
  select(-height) %>%                  # a specific column name
  select(-one_of('mass', 'films')) %>% # any columns named in one_of()
  select(-(name:hair_color)) %>%       # the range of columns from 'name' to 'hair_color'
  select(-contains('color')) %>%       # any column name that contains 'color'
  select(-starts_with('bi')) %>%       # any column name that starts with 'bi'
  select(-ends_with('er')) %>%         # any column name that ends with 'er'
  select(-matches('^v.+s$')) %>%       # any column name matching the regex pattern
  select_if(~!is.list(.)) %>%          # not by column name but by data type
  head(2)
# A tibble: 2 x 2
  homeworld species
  <chr>     <chr>
1 Tatooine  Human
2 Tatooine  Droid
You can also drop by column number:
starwars %>%
  select(-2, -(4:10)) # column 2 and columns 4 through 10
With this you can remove the column and store the result in another variable:
df = subset(data, select = -c(genome) )
Using dplyr, the following works:
data <- select(data, -genome)
as per the documentation found at https://www.marsja.se/how-to-remove-a-column-in-r-using-dplyr-by-name-and-index/
I just thought I'd add one in that wasn't mentioned yet. It's simple but also interesting because in all my perusing of the internet I did not see it, even though the highly related %in% appears in many places.
df <- df[ , -which(names(df) == 'removeCol')]
Also, I didn't see anyone post grep alternatives. These can be very handy for removing multiple columns that match a pattern.
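For example, one possible sketch (the pattern here is invented for illustration). Note that grepl() returns a logical vector, which is safer than -which(): with zero matches, -which() selects zero columns and silently drops everything, while !grepl() keeps everything.
# drop every column whose name matches a pattern
df <- df[ , !grepl("_id$", names(df)), drop = FALSE]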
