I'm interested in reshaping a dataframe, but instead of passing dcast a standard aggregation function like mean, I'd like to use a custom function. Specifically, I'd like to use an ifelse statement to assign binary values.
Here's a reproducible example:
# dataframe that includes extraneous information
df <- data.frame(sale_id = c(1,1,1,2,2,2,3,3,4,5),
                 project_id = c(501,502,503,501,502,503,501,502,504,505),
                 sale_year = c(1990,1991,1993,1990,1992,1990,1991,1993,1990,1992),
                 var1 = c(5,4,3,6,5,4,4,7,2,9),
                 var2 = c(7,3,4,8,5,8,2,3,5,7))
# list of the variables I actually need (I don't need 'sale_year')
varlist <- c("var1","var2")
# selecting out id variables and variables I'm interested in manipulating
dfvars <- df[,c("sale_id","project_id",varlist)]
# melt dataframe
library(reshape2)
mdata <- melt(dfvars, id=c('sale_id','project_id'))
# create custom ifelse function, assign '1' if mean is above a critical value, and '0' if not
funx <- function(u){ifelse(mean(u)>5,1,0)}
# cast data using this function
cdata <- dcast(mdata, sale_id~variable, funx)
It works if I just use a standard function like mean, for example:
cdata <- dcast(mdata, sale_id~variable, mean)
But with my ifelse() function, I get an error about data types (logical vs. double), which doesn't make sense to me: mean(u) > 5 should return a logical (TRUE or FALSE) for the ifelse() part to act on.
I believe this has to do with the details of type coercion: your custom function returns a double for the actual groups, but dcast also applies the aggregation function to a zero-length vector to work out the default fill value and the expected result type. mean(numeric(0)) is NaN, NaN > 5 is NA, and ifelse(NA, 1, 0) is a logical NA, so dcast expects logical results while your groups return doubles. The code works when you make the return type explicit.
Example:
# All three of these work, because the return type is now consistent
funx1 <- function(u){ifelse(mean(u) > 5, TRUE, FALSE)}       # always logical
funx2 <- function(u){as.logical(ifelse(mean(u) > 5, 1, 0))}  # coerced to logical
funx3 <- function(u){as.numeric(ifelse(mean(u) > 5, 1, 0))}  # coerced to numeric
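You can see where the logical comes from by calling the original funx on a zero-length vector yourself, and (as a minimal sketch, not from the original answer) you should also be able to keep the original funx by giving dcast an explicit numeric fill value:
# what dcast evaluates to pick the default fill / expected result type
funx(numeric(0))          # NA, and class(funx(numeric(0))) is "logical"
# keep the original funx but force a numeric fill
cdata <- dcast(mdata, sale_id ~ variable, funx, fill = 0)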
Related
I have been using Stata, where loops like this are easy to write, but in R I have run into errors when looping over variables. I tried some of the code posted here and it does not work. Basically, I am trying to clean the data by logging the values; I had to convert negative values to positive before logging them. I intend to loop over multiple firm statistics in the dataframe, but I get errors when doing so.
varlist <- c("revenue", "profit", "cost")
for (v in varlist) {
data$log_v <- log(abs(ifelse(data$v>1, data$v, NA)))
data$log_v <- ifelse(data$v<0, data$log_v*-1,data$log_v)
}
Error in `$<-.data.frame`(`*tmp*`, "log_v", value = numeric(0)) : replacement has 0 rows, data has 9
It looks like you might be assuming that data$log_v gets read as data$log_profit, but R takes it literally and reads it as a column named "log_v" all 3 times (likewise data$v is read as a column named "v", which doesn't exist, so the right-hand side has length 0 and you get the zero-row replacement error). This example might not be quite everything you're trying to do, but it might help you. It takes a list of variables and references them via their string names.
df <- data.frame(x = rnorm(15), y = rnorm(15))
vars <- c("x", "y")
for (v in vars) {
df[paste0("log_", v)] <- log(abs(df[v]))
}
Here's roughly the same thing in data.table.
library(data.table)
dt <- data.table(x = rnorm(15), y = rnorm(15))
dt[, `:=`(log_x = log(abs(x)), log_y = log(abs(y)))]
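If you want to build the new column names from strings as in the original varlist loop, the same thing can be done in data.table with get() and a computed name on the left of := (a sketch, reusing the dt and vars defined above):
for (v in vars) {
  # (name) forces evaluation of the string as the new column name
  dt[, (paste0("log_", v)) := log(abs(get(v)))]
}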
Here is an explanation of the source of your confusion:
A data.frame is a special type of list: its elements are vectors of the same length – the columns. Normally, you access an element of a list using the [[ function, for example df[["revenue"]]. Instead of "revenue", you can also use a variable, such as df[[varlist[1]]]. So far, so good.
However, lists have a convenience operator, $, which allows you to access elements with less typing: df$revenue. Unfortunately, you cannot use variables this way: this is by design. Since you don't use quotes with $, the operator cannot know whether you mean revenue as the literal name of the element or revenue as a variable that holds the literal name of the element.
Therefore, if you want to use variables, you need to use the [[ function, not $. Since programmers hate typing and want to make code as terse as possible, various ways around this have been invented, such as data.table and the tidyverse (I am exaggerating a bit here).
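For example, here is a minimal sketch of the original loop rewritten with [[ (assuming a data frame called data that contains the columns in varlist; the filtering logic is kept exactly as in the question):
varlist <- c("revenue", "profit", "cost")
for (v in varlist) {
  # data[[v]] extracts the column whose name is stored in v
  logged <- log(abs(ifelse(data[[v]] > 1, data[[v]], NA)))
  # flip the sign back for originally negative values, as in the question
  data[[paste0("log_", v)]] <- ifelse(data[[v]] < 0, -logged, logged)
}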
Also, here is a tidyverse solution.
library(tidyverse)
varlist <- c("revenue", "profit", "cost")
df <- data.frame(revenue=rnorm(100), profit=rnorm(100), cost=rnorm(100))
df <- df %>% mutate_at(varlist, list(log10 = ~ log10(abs(.))))
Explanation:
mutate_at applies log10(abs(.)) to every column listed in varlist. The dot . is a temporary variable that holds the column values for each of the columns.
by default, mutate_at would replace the existing variables. However, if instead of a bare function (~ log10(abs(.))) you provide a named list (list(log10 = ~ log10(abs(.)))), it adds new columns, using log10 as a suffix in the column name.
this method makes it easy to apply several functions to your columns, not just one (see the sketch below).
See? No (obvious) loops at all!
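As a small sketch of the several-functions point (same df and varlist as above; sqrt is just an arbitrary second example):
# the list names log10 and sqrt become suffixes of the new column names
df <- df %>% mutate_at(varlist, list(log10 = ~ log10(abs(.)), sqrt = ~ sqrt(abs(.))))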
I want to write a function that dynamically uses different correlation methods depending on the scale of measure of the feature (continuous, dichotomous, ordinal). The label is always continuous. My idea was to use the apply() function to iterate over every feature (i.e. column), check its scale of measure (numeric, factor with two levels, factor with more than two levels) and then use the appropriate correlation function. Unfortunately my code seems to convert every feature into a character vector, and as a consequence the condition in the if statement is always false for every column. I don't know why my code is doing this. How can I prevent my code from converting my features to character vectors?
set.seed(42)
foo <- sample(c("x", "y"), 200, replace = T, prob = c(0.7, 0.3))
bar <- sample(c(1,2,3,4,5),200,replace = T,prob=c(0.5,0.05,0.1,0.1,0.25))
y <- sample(c(1,2,3,4,5),200,replace = T,prob=c(0.25,0.1,0.1,0.05,0.5))
data <- data.frame(foo,bar,y)
features <- data[, !names(data) %in% 'y']
dyn.corr <- function(x,y){
# print out structure of every column
print(str(x))
# if feature is numeric and has more than two outcomes use corr.test
if(is.numeric(x) & length(unique(x))>2){
result <- corr.test(x,y)[['r']]
} else {
result <- "else"
}
}
result <- apply(features,2,dyn.corr,y)
apply is built for matrices. When you apply it to a data frame, the first thing that happens is that the data frame is coerced to a matrix. A matrix can only have one data type, so all columns are converted to the most general type among them (here character, because of the non-numeric foo column).
Use sapply or lapply to work with columns of a data frame.
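You can see the coercion directly with the data from the question:
# as.matrix() is essentially what apply() does first; the non-numeric column
# foo drags the whole matrix to character, so is.numeric() is FALSE everywhere
class(as.matrix(features)[, "bar"])   # "character"
# sapply()/lapply() hand over each column as-is, keeping its type
sapply(features, class)               # bar stays numeric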
This should work fine (I tried to test, but I don't know what package to load to get the corr.test function.)
result <- sapply(features, dyn.corr, y)
I'm using a data.frame that contains several data.frames. I'm trying to access these sub-data.frames within a loop. Within these loops, the names of the sub-data.frames are held in a string variable. Since this is a string, I can use the [,] notation to extract data from these sub-data.frames, e.g. X <- "sub.df" and then df[42,X] would output the same as df$sub.df[42].
I'm trying to create a single row data.frame to replace a row within the sub-data.frames. (I'm doing this repeatedly and that's why my sub-data.frame name is in a string). However, I'm having trouble inserting this new data into these sub-data.frames. Here is a MWE:
#Set up the data.frames and sub-data.frames
sub.frame <- data.frame(X=1:10,Y=11:20)
df <- data.frame(A=21:30)
df$Z <- sub.frame
Col.Var <- "Z"
#Create a row to insert
new.data.frame <- data.frame(X=40,Y=50)
#This works:
df$Z[3,] <- new.data.frame
#These don't (even though both sides of the assignment give the correct values/dimensions):
df[,Col.Var][6,] <- new.data.frame #Gives Warning and collapses df$Z to 1 dimension
df[7,Col.Var] <- new.data.frame #Gives Warning and only uses first value in both places
#This works, but is a work-around and feels very inelegant(!)
eval(parse(text=paste0("df$",Col.Var,"[8,] <- new.data.frame")))
Are there any better ways to do this kind of insertion? Given my experience with R, I feel like this should be easy, but I can't quite figure it out.
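One approach that should avoid the eval(parse()) workaround (a suggestion, not from the original post): use [[ with the string name, which extracts the sub-data.frame itself, so the row replacement behaves just like df$Z[3,] <- new.data.frame did.
# replacement via [[ keeps the two-column structure of the sub-data.frame
df[[Col.Var]][6, ] <- new.data.frame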
df is a frequency table, where each value in column a was reported as many times as recorded in columns x, y and z. I'm trying to convert the frequency table back to the original data, so I use the rep() function.
How do I loop the rep() function to recover the original data for x, y and z without having to repeat the call several times like I did below?
Also, can I put the result into a data frame, bearing in mind that the outputs will have different column lengths?
a <- (1:10)
x <- (6:15)
y <- (11:20)
z <- (16:25)
df <- data.frame(a,x,y,z)
df
rep(df[,1], df[,2])
rep(df[,1], df[,3])
rep(df[,1], df[,4])
If you don't want to repeat the rep() call for each column, you can use an apply-family function. Note that you cannot store the result in a data.frame because the objects are of different lengths, but you can store it in a list and access the elements in a similar way to a data.frame. Something like this works:
df2<-sapply(df[,2:4],function(x) rep(df[,1],x))
What this sapply call does is: for each column in df[,2:4], apply rep(df[,1], x) to it, where x is that column (df[,2], df[,3] or df[,4]).
The below code just makes sure the apply function is giving the same result as your original way.
identical(df2$x,rep(df[,1], df[,2]))
[1] TRUE
identical(df2$y,rep(df[,1], df[,3]))
[1] TRUE
identical(df2$z,rep(df[,1], df[,4]))
[1] TRUE
EDIT:
If you want it as a data.frame object you can do this:
res<-as.data.frame(sapply(df2, '[', seq(max(sapply(df2, length)))))
Note this introduces NAs into your data.frame so be careful!
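If you would rather avoid the NA padding, one option (a suggestion, not part of the original answer) is to keep the data in long format; stack() works here because df2 is a named list of vectors:
# one row per repeated value, with an ind column recording whether it came
# from x, y or z; no padding required
long <- stack(df2)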
This post contains two questions; the first is related to the second.
First, suppose that I want to define a function that receives two arguments, a data frame and a variable (column), and I would like to do some counts or statistics. First I have to determine the variable's position. For example, suppose that the first two rows of my df are
> df
  person age  rent
1      1  23  1000
2      2  35 1.500
and my function is like this
> myfun<- function(df, var)
{
# determining the variable
ind<- which(names(df) %in% var )
# selecting the variable
v <- df[,ind]
# rest of function
....
}
I think there may be an easier way... Is there some way to get v directly?
Second question: I have a large list of data frames (samples of one population). All the data frames have the same variables, and one of these variables is rent. I would like to calculate the mean of the rent variable for each sample, and I would like to use the lapply function. For one sample, I can do the following:
> mean(sample$rent , na.rm = T)
All I want is to do something like this
> apply(list, mean( , variablefix = rent))
One option is to create a new mean function with the rent argument fixed, or with only one argument, and apply the lapply function:
>mean_rent <- function(df){...}
>lapply(df, mean_rent)
But I want a way to use an apply function directly, in a single line.
Any ideas?
Question one: you can also use names (i.e. a character string), or a variable containing the name, to index data.frames (and vectors, matrices, etc.), so you just have to do:
myfun<- function(df, var) {
# select the column
v <- df[,var]
# rest of function
}
but it is more common to define the function on a vector and then just call it with myfun(df[,var])
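For example, with the df from the question, both of these select the same column, so there is no need to look up the position with which():
var <- "rent"
df[, "rent"]   # literal name
df[, var]      # name held in a variable – same result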
Question two: instead of assigning the new function to a name, you can also just pass it in directly, i.e.
lapply(list_of_dfs, function(df){ mean( df$rent ) })
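If you also want the na.rm = TRUE from your one-sample call, and a plain named vector of means rather than a list, sapply works the same way (list_of_dfs stands for your list of samples):
sapply(list_of_dfs, function(df) mean(df$rent, na.rm = TRUE))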