Add columns dynamically to dataframe in R

I am only a few days old in the R ecosystem and trying to figure out a way to add a dynamically named column for each numeric column found in the original dataframe.
I have successfully written a way to change the values in an existing column of the dataframe, but what I need is to put those calculated values into a new column rather than overwriting the existing one.
Here is what I've done so far,
myDf <- read.csv("MyData.csv", header = TRUE)
normalize <- function(x) {
  return((x - min(x, na.rm = TRUE)) / (max(x, na.rm = TRUE) - min(x, na.rm = TRUE)))
}
normalizeAllCols <- function(df) {
  df[, sapply(df, is.numeric)] <- lapply(df[, sapply(df, is.numeric)], normalize)
  df
}
normalizedDf <- normalizeAllCols(myDf)
I came up with the above snippet (with a lot of help from the internet) to apply the normalize function to all numeric columns in the given data frame. What I want to know is how to put those calculated values into new columns in the data frame (in the given snippet, I'd like to put each normalized value in a new column named "norm" + colname).

You can find the column names which are numeric and use paste0 to create the new columns.
normalizeAllCols <- function(df) {
  cols <- names(df)[sapply(df, is.numeric)]
  df[paste0('norm_', cols)] <- lapply(df[cols], normalize)
  df
}
normalizedDf <- normalizeAllCols(myDf)
In dplyr you can use across to apply a function to only numeric columns directly.
library(dplyr)
normalizeAllCols <- function(df) {
  df %>%
    mutate(across(where(is.numeric), list(norm = normalize)))
}

Related

How can I select certain columns in a dataframe based on their number of valid values (except NA) in R?

I'm using R, and I have a dataframe with multiple columns. I want to run code that automatically checks the number of valid values (i.e. not NA) in each column. It should then select the columns in which at least 50% of the rows are filled with valid values, and save them in a new dataframe.
Can anybody help me do this? Thank you very much.
Is there any way the code can be applied to an uncertain number of columns?
Using the purrr package, you can compute the percentage of missing values in each column like this:
pct_missing <- purrr::map_dbl(df,~mean(is.na(.x)))
After that, you can select those columns that have less than 50% missing values by their names.
selected_column <- colnames(df)[pct_missing < 0.5]
To create a new dataset, you may use:
library(dplyr)
df_new <- df %>% select(one_of(selected_column))
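If you are on dplyr >= 1.0.0, a compact alternative (a sketch, not the only way) is to do the filtering inside select() with where(), keeping the columns whose share of missing values is below 50%:
library(dplyr)
# keep columns with less than 50% missing values
df_new <- df %>% select(where(~ mean(is.na(.x)) < 0.5))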
You can also write a base R function to automatically retrieve the columns matching the criteria:
Function:
ColSel <- function(df) {
  vals <- apply(df, 2, function(fo) mean(is.na(fo))) < .5
  return(df[, vals])
}
Some toy data
## example
df1 <- data.frame(
  a = c(runif(19), NA),
  b = c(rep(NA, 11), runif(9)),
  d = rep(NA, 20),
  e = runif(20)
)
Test
df2 <- ColSel(df1)
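With this toy data, column b is 55% NA and column d is entirely NA, so both should be dropped; only a and e remain:
names(df2)
## [1] "a" "e"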

Why does adding column data to a data frame not work inside a function in R

I am curious why the following code doesn't work for adding column data to a data frame.
a <- c(1:3)
b <- c(4:6)
df <- data.frame(a, b)  # create an example data frame
add <- function(df, vector) {
  df[[3]] <- vector
}  # create a function to add column data to a data frame
d <- c(7:9)  # a new vector to be added to the data frame
add(df, d)   # execute the function
If you run the code in R, the new vector is not added to the data frame, and there is no error either.
R passes parameters to functions by value, not by reference. That means inside the function you work on a copy of the data.frame df, and when the function returns, the modified data.frame "dies" while the original data.frame outside the function remains unchanged.
This is why #RichScriven proposed to store the return value of your function in the data.frame df again.
Credits go to #RichScriven please...
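A minimal sketch of that pattern: return the modified copy from the function and assign the result back to df.
add <- function(df, vector) {
  df[[3]] <- vector
  df                 # return the modified copy
}
df <- add(df, d)     # reassign the return value; df now has the third column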
PS: You should use cbind ("column bind") to extend your data.frame independently of how many columns already exist and ensure unique column names:
add <- function(df, vector) {
  res <- cbind(df, vector)
  names(res) <- make.names(names(res), unique = TRUE)
  res  # return value
}
PS2: You could use a data.table instead of a data.frame which is passed by reference (not by value).
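A small data.table sketch of that idea (the column name newcol is just an illustration): := adds the column by reference, so the change is visible outside the function without reassigning.
library(data.table)
dt <- as.data.table(df)
addByRef <- function(dt, vector) {
  dt[, newcol := vector]   # modifies dt in place, by reference
  invisible(dt)
}
addByRef(dt, d)
dt                         # now contains the newcol column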

Use apply to add multiple columns (more than 100) of random numbers or other function in R

I would like to build a function that adds many columns of random variables, or of some other function, to a dataframe. Here I am trying to append them to map data.
library(plyr)
add <- function(name, df) {
  new.df <- mutate(df, name = runif(length(df[, 1])))
  new.df
}
The function works to add a column of data...
add("e", iris)
iris2<- add("f", iris)
The apply does not work...
I am trying to add 26 columns from the list of letters so that df$a, df$b, df$c are all random vectors.
new <- lapply(letters, add, df = tx)
What is the most efficient way to add columns from a list of column names?
I would like to later loop through all of the column names in another function.
It's not very clear to me what you want to achieve. This adds multiple columns of random numbers to a data.frame:
cbind(iris,
matrix(runif(nrow(iris)*5), ncol=5))
I don't see a reason to use an *apply function.
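If the goal really is 26 columns named a to z (as in the question), one way, sketched here with the built-in letters vector, is to name the matrix columns before binding:
# 26 columns of uniform random numbers, named a..z
new_cols <- matrix(runif(nrow(iris) * 26), ncol = 26,
                   dimnames = list(NULL, letters))
iris2 <- cbind(iris, new_cols)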

sum different columns in a data.frame

I have a very big data.frame and want to sum the values in every column.
So I used the following code:
sum(production[,4],na.rm=TRUE)
or
sum(production$X1961,na.rm=TRUE)
The problem is that the data.frame is very big, and I only want to sum 40 specific columns (with different names) of my data.frame. I don't want to list every single column. Is there a smarter solution?
At the end I also want to store the sum of every column in a new data.frame.
Thanks in advance!
Try this:
colSums(df[sapply(df, is.numeric)], na.rm = TRUE)
where sapply(df, is.numeric) is used to detect all the columns that are numeric.
If you just want to sum a few columns, then do:
colSums(df[c("X1961", "X1962", "X1999")], na.rm = TRUE)
res <- unlist(lapply(production, function(x) if(is.numeric(x)) sum(x, na.rm=T)))
will return the sum of each numeric column.
You could create a new data frame based on the result with
data.frame(t(res))
If you don't want to include every single column, you somehow have to indicate which ones to include (or alternatively, which to exclude):
colsInclude <- c("X1961", "X1962", "X1963") # by name
# or #
colsInclude <- paste0("X", 1961:2003) # by name
# or #
colsInclude <- c(10:19, 23, 55, 147) # by column number
To put those columns in a new data frame, simply use [ ] as you've done:
newDF <- oldDF[, colsInclude]
To sum up each column, simply use colSums
sums <- colSums(newDF, na.rm=T)
# or #
sums <- colSums(oldDF[, colsInclude], na.rm=T)
Note that sums will be a vector, not a data frame.
You can make it into a data frame using as.data.frame
sums <- as.data.frame(sums)
# or, to include the data frame from which it came #
sums <- rbind(newDF, "totals"=sums)
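A small worked example of the whole flow, with made-up column names:
oldDF <- data.frame(X1961 = c(1, 2, NA), X1962 = c(4, 5, 6), country = c("a", "b", "c"))
colsInclude <- c("X1961", "X1962")
sums <- colSums(oldDF[, colsInclude], na.rm = TRUE)
sums
## X1961 X1962
##     3    15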

R: Apply function on specific columns preserving the rest of the dataframe

I'd like to learn how to apply functions to specific columns of my dataframe without "excluding" the other columns from my df. For example, I'd like to multiply some specific columns by 1000 and leave the other ones as they are.
Using the sapply function, for example like this:
a<-as.data.frame(sapply(table.xy[,1], function(x){x*1000}))
I get new dataframes with the first column multiplied by 1000 but without the other columns that I didn't use in the operation. So my attempt was to do it like this:
a<-as.data.frame(sapply(table.xy, function(x) if (colnames=="columnA") {x/1000} else {x}))
but this one didn't work.
My workaround was to give both dataframes another column with IDs and later on merge the old dataframe with the newly created one to get a complete result. But I think there must be a better solution. Isn't there?
If you only want to do a computation on one or a few columns, you can use transform or simply index it manually:
# With transform:
df <- data.frame(A = 1:10, B = 1:10)
df <- transform(df, A = A * 1000)
# Manually:
df <- data.frame(A = 1:10, B = 1:10)
df$A <- df$A * 1000
The following code will apply the desired function to only the columns you specify.
I'll create a simple data frame as a reproducible example.
(df <- data.frame(x = 1, y = 1:10, z=11:20))
(df <- cbind(df[1], apply(df[2:3],2, function(x){x*1000})))
Basically, use cbind() to select the columns you don't want the function to run on, then use apply() with desired functions on the target columns.
In dplyr we would use mutate_at, in which you can select or exclude specific variables (by preceding a variable name with the "-" minus sign).
You can just name a function
df <- df %>%
mutate_at(vars(columnA), scale)
or create your own
df <- df %>%
  mutate_at(vars(columnA, columnC), function(x) { x * 1000 })
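In current dplyr, mutate_at is superseded by across(); a sketch of the equivalent (multiplying by 1000 as in the original question, with columnA and columnC standing in for your columns):
library(dplyr)
df <- df %>%
  mutate(across(c(columnA, columnC), ~ .x * 1000))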
