How to make this simple function more efficient? (R)

I have data on wages; about 95% of them are given in hourly format, but some are given as an annual salary. So I made a function to convert the annual salaries to hourly. However, it takes 1 min 40 sec to run on my dataset of 43,000 rows x 12 columns, which I didn't think was big enough to take this long.
I am curious whether there is a better way to do this than the function I have created. I am new to dplyr and the tidyverse, so ideally I'd like an answer using those capabilities.
Here is some sample data:
NOC4 Region Region_Name Wage_2012 Wage_2013 Wage_2014
0011 ER10 National 28.1 65000 NA
0011 ER1010 Northern NA 30.5 18
0011 ER1020 Southern 42.3 72000 22
0011 ER1030 Eastern 12 NA 45500
0011 ER1040 Western 8 NA 99000
0011 ER10 National NA 65000 NA
Here is what it should look like after the function:
NOC4 Region Region_Name Wage_2012 Wage_2013 Wage_2014
0011 ER10 National 28.1 33.33 NA
0011 ER1010 Northern NA 30.5 18
0011 ER1020 Southern 42.3 36.92 22
0011 ER1030 Eastern 12 NA 23.33
0011 ER1040 Western 8 NA 50.77
0011 ER10 National NA 33.33 NA
Here is the function:
year_to_hour <- function(dataset, salary, startcol){
  # "startcol" should be the first column containing the numeric
  # values that you are trying to convert.
  for(i in startcol:ncol(dataset)){
    for(j in 1:nrow(dataset)){
      if(is.na(dataset[j, i])){
        j = j + 1
      } else if(as.numeric(dataset[j, i]) >= as.numeric(salary)){
        dataset[j, i] = dataset[j, i] / 1950
      } else {
        dataset[j, i] = dataset[j, i]
      }
    }
  }
  return(as_tibble(dataset))
}
converted <- year_to_hour(wage_data_messy, 1000, 4)

R will work much faster if you let it handle the loops under the hood through "vectorized" code.
http://www.noamross.net/blog/2014/4/16/vectorization-in-r--why.html
Here's an approach using dplyr:
library(dplyr)
salary <- 1000
df %>%
  mutate_at(vars(Wage_2012:Wage_2014),          # For these columns...
            ~ . / if_else(. > salary, 1950, 1)) # ...divide by 1950 if > salary

Using dplyr, I would use mutate_if:
salary <- 1000
df %>% mutate_if(is.numeric, ~ifelse(. > salary, ./1950, .))
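In current dplyr (1.0+), mutate_at() and mutate_if() are superseded by across(); a minimal sketch of the same conversion, assuming the Wage_2012:Wage_2014 columns from the sample data (NAs pass through as NA):
library(dplyr)
salary <- 1000
df %>%
  mutate(across(Wage_2012:Wage_2014,
                ~ if_else(.x > salary, .x / 1950, .x)))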

Related

Nested for loop to create a column, using two data sets in R

I want to create a new variable (named "treatment") in a dataset using two different datasets. My actual datasets are large and contain other variables, but for simplicity, let's say I have the following:
#individual-level data, birth years
a <- data.frame(country_code = c(2, 2, 2, 10, 10, 10, 10, 8),
                birth_year = c(1920, 1930, 1940, 1970, 1980, 1990, 2000, 1910))
#country-level reform info with affected cohorts
b <- data.frame(country_code = c(2, 10, 10, 11),
                lower_cutoff = c(1928, 1975, 1907, 1934),
                upper_cutoff = c(1948, 1995, 1927, 1948),
                cohort = c(1938, 1985, 1917, 1942))
Dataset a is individual-level data with birth years, and dataset b is country-level data with reform information. Using dataset b, I want to create a treatment column in dataset a: treatment is 1 if birth_year is between cohort and upper_cutoff, 0 if it is between lower_cutoff and cohort, and NA otherwise.
After creating an empty treatment column, I used the following code below:
for(i in 1:nrow(a)) {
  for(j in 1:nrow(b)){
    a$treatment[i] <- ifelse(a$country_code[i] == b$country_code[j] &
                               a$birth_year[i] >= b$cohort[j] &
                               a$birth_year[i] <= b$upper_cutoff[j], "1",
                             ifelse(a$country_code[i] == b$country_code[j] &
                                      a$birth_year[i] < b$cohort[j] &
                                      a$birth_year[i] >= b$lower_cutoff[j], "0", NA))
  }
}
As well as:
for(i in 1:nrow(a)) {
  for(j in 1:nrow(b)){
    a[i, "treatment"] <- case_when(a[i, "country_code"] == b[j, "country_code"] &
                                     a[i, "birth_year"] >= b[j, "cohort"] &
                                     a[i, "birth_year"] <= b[j, "upper_cutoff"] ~ 1,
                                   a[i, "country_code"] == b[j, "country_code"] &
                                     a[i, "birth_year"] < b[j, "cohort"] &
                                     a[i, "birth_year"] >= b[j, "lower_cutoff"] ~ 0)
  }
}
Both codes run, but they only return NAs. The following is the result I want to get:
treatment <- c(NA, 0, 1, NA, 0, 1, NA, 0)
Any ideas about what is wrong? Or any other suggestions? Thanks in advance!
The problem is that the inner loop keeps overwriting treatment[i] on every iteration over j, so whatever the last row of b produces (usually NA) wins. You can avoid the loops entirely with this approach using dplyr:
library(dplyr)
left_join(a, b) %>%
  mutate(treatment = case_when(
    (birth_year >= cohort & birth_year <= upper_cutoff) ~ 1,
    (birth_year < cohort & birth_year >= lower_cutoff) ~ 0
  ))
Output:
country_code birth_year lower_cutoff upper_cutoff cohort treatment
1 2 1920 1928 1948 1938 NA
2 2 1930 1928 1948 1938 0
3 2 1940 1928 1948 1938 1
4 10 1970 1975 1995 1985 NA
5 10 1970 1907 1927 1917 NA
6 10 1980 1975 1995 1985 0
7 10 1980 1907 1927 1917 NA
8 10 1990 1975 1995 1985 1
9 10 1990 1907 1927 1917 NA
10 10 2000 1975 1995 1985 NA
11 10 2000 1907 1927 1917 NA
12 8 1910 NA NA NA NA
Try this for loop:
for(i in 1:nrow(a)){
  x <- which(a$country_code[i] == b$country_code)
  a$treatment[i] <- NA
  for(j in x){
    if(a$birth_year[i] %in% b$cohort[j]:b$upper_cutoff[j]){
      a$treatment[i] <- 1
    }
    if(a$birth_year[i] %in% b$lower_cutoff[j]:b$cohort[j]){
      a$treatment[i] <- 0
    }
  }
}
Output
country_code birth_year treatment
1 2 1920 NA
2 2 1930 0
3 2 1940 1
4 10 1970 NA
5 10 1980 0
6 10 1990 1
7 10 2000 NA
8 8 1910 NA
I found the mistake in my code: I needed a break to stop the inner loop from overwriting the value once it had been assigned. But I'm still open to other answers.
for(i in 1:nrow(a)) {
  for(j in 1:nrow(b)){
    if(!is.na(a$treatment[i])){break} # stop once a value has been assigned
    a$treatment[i] <- ifelse(a$country_code[i] == b$country_code[j] &
                               a$birth_year[i] >= b$cohort[j] &
                               a$birth_year[i] <= b$upper_cutoff[j], "1",
                             ifelse(a$country_code[i] == b$country_code[j] &
                                      a$birth_year[i] < b$cohort[j] &
                                      a$birth_year[i] >= b$lower_cutoff[j], "0", NA))
  }
}
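For completeness, newer dplyr (>= 1.1.0) can also express this without any loop using a non-equi join; a sketch assuming the a and b frames above (rows with no matching reform window come out NA, since cohort is NA for them):
library(dplyr)
a %>%
  left_join(b, join_by(country_code,
                       between(birth_year, lower_cutoff, upper_cutoff))) %>%
  mutate(treatment = case_when(
    birth_year >= cohort ~ 1,
    birth_year < cohort ~ 0
  ))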

Removing "outer rows" to allow for interpolation (and prevent extrapolation)

I have (left)joined two data frames by country-year.
df<- left_join(df, df2, by="country-year")
leading to the following example output:
country country-year a b
1 France France2000 NA NA
2 France France2001 1000 1000
3 France France2002 NA NA
4 France France2003 1600 2200
5 France France2004 NA NA
6 UK UK2000 1000 1000
7 UK UK2001 NA NA
8 UK UK2002 1000 1000
9 UK UK2003 NA NA
10 UK UK2004 NA NA
I initially wanted to remove all values for which both of the added columns (a,b) were NA.
df<-df[!is.na( df$a | df$b ),]
However, I have since decided that I want to interpolate the data I have (but not extrapolate). So instead I would like to remove all the rows for which I cannot interpolate; in the example:
1 France France2000 NA NA
5 France France2004 NA NA
9 UK UK2003 NA NA
10 UK UK2004 NA NA
I believe there are 2 options. First, I somehow adapt this function:
library(tidyerse)
TRcomplete <- TRcomplete %>%
  group_by(country) %>%
  mutate_at(a:b, ~na.fill(.x, "extend"))
to interpolate only, and then apply df <- df[!is.na(df$a | df$b), ]; or I write code to remove the "outer" rows first and then use "extend" like normal. Desired output:
country country-year a b
2 France France2001 1000 1000
3 France France2002 1300 1600
4 France France2003 1600 2200
6 UK UK2000 1000 1000
7 UK UK2001 1000 1000
8 UK UK2002 1000 1000
Any suggestions?
There are options in na.fill to specify what is done. If you look at ?na.fill, you see that fill can specify the left, interior, and right fills separately, so if you set left and right to NA and the interior to "extend", it will only fill the interior data. You can then filter out the rows that are still NA.
library(tidyverse)
library(zoo)
df %>%
  group_by(country) %>%
  mutate_at(vars(a:b), ~na.fill(.x, c(NA, "extend", NA))) %>%
  filter(!is.na(a) | !is.na(b))
By the way, you have a typo in your library(tidyverse) statement; you are missing the v.
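A closely related alternative (a sketch, assuming the same df and dplyr >= 1.0): zoo::na.approx() linearly interpolates interior NAs, and with na.rm = FALSE it leaves leading and trailing NAs in place, so no extrapolation occurs; the filter then drops those edge rows.
library(dplyr)
library(zoo)
df %>%
  group_by(country) %>%
  mutate(across(a:b, ~ na.approx(.x, na.rm = FALSE))) %>%
  filter(!is.na(a) | !is.na(b))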

Extracting values of a column into a string and replacing values in a data frame column

More than the programming, I am lost on the right approach for this problem. I have 2 data frames, each with a market-name column. Unfortunately, the names vary by a few characters/symbols between the two columns, e.g. Albany.Schenectady.Troy = ALBANY, Boston.Manchester = BOSTON.
I want to standardize the market names in both data frames so I can perform merge operations later.
I thought of tackling the problem in two steps:
1) Create a vector of the unique market names from both tables and use that to create a lookup table. Something that looks like:
Table 1 Markets > "Albany.Schenectady.Troy", "Albuquerque.Santa.Fe", "Atlanta" . . . .
Table 2 Markets > "SPOKANE", "BOSTON" . . .
I tried marketnamesvector <- paste(unique(Table1$Market, sep = "", collapse = ",")) but that doesn't produce the desired output.
2) Change Market names in Table 2 to equivalent market names in Table 1. For any market name not available in Table 1, Table 2 should retain the same value in market name.
I know I could use a looping function like the one below, but I think I still need a lookup table.
replacefunc <- function(data, oldvalue, newvalue) {
  newdata <- data
  for (i in unique(oldvalue)) newdata[data == i] <- newvalue[oldvalue == i]
  newdata
}
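For instance, hypothetical usage with oldvalue/newvalue as parallel vectors forming the lookup (assuming character, not factor, market columns; names absent from oldvalue are left unchanged, as required):
old <- c("ALBANY", "BOSTON")
new <- c("Albany.Schenectady.Troy", "Boston.Manchester")
Table2$Market <- replacefunc(Table2$Market, old, new)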
Table 1: This table is 90 rows x 2 columns and has 90 unique market names.
Market Leads Investment Leads1 Leads2 Leads3
1 Albany.Schenectady.Troy NA NA NA NA NA
2 Albuquerque.Santa.Fe NA NA NA NA NA
3 Atlanta NA NA NA NA NA
4 Austin NA NA NA NA NA
5 Baltimore NA NA NA NA NA
Table 2: This table is 150K rows x 20 columns and has 89 unique market names.
> df
Spot.ID Date Hour Time Local.Date Broadcast.Week Local.Hour Local.Time Market
2 13072765 6/30/14 0 12:40 AM 2014-06-29 1 21 9:40 PM SPOKANE
261 13072946 6/30/14 5 5:49 AM 2014-06-30 1 5 5:49 AM BOSTON
356 13081398 6/30/14 10 10:52 AM 2014-06-30 1 7 7:52 AM SPOKANE
389 13082306 6/30/14 11 11:25 AM 2014-06-30 1 8 8:25 AM SPOKANE
438 13082121 6/30/14 8 8:58 AM 2014-06-30 1 8 8:58 AM BOSTON
469 13081040 6/30/14 9 9:17 AM 2014-06-30 1 9 9:17 AM ALBANY
482 13080104 6/30/14 12 12:25 PM 2014-06-30 1 9 9:25 AM SPOKANE
501 13082120 6/30/14 9 9:36 AM 2014-06-30 1 9 9:36 AM BOSTON
617 13080490 6/30/14 13 1:23 PM 2014-06-30 1 10 10:23 AM SPOKANE
Assume that the data is in data frames df1, df2. The goal is to adjust the market names to be the same, they are currently slightly different.
First, list the markets: use the following commands to get the sorted unique names in each data frame and the differences between the two sets.
mk1 <- sort(unique(df1$market))
mk2 <- sort(unique(df2$market))
dmk12 <- setdiff(mk1,mk2)
dmk21 <- setdiff(mk2,mk1)
Use dmk12 and dmk21 to identify the markets whose names differ. Decide which names to use and how they match up; say we change "Atlanta" in df2 to match "Atlanta, GA" from df1. Then use
df2[df2$market=="Atlanta","market"] = "Atlanta, GA"
The format is
df_to_change[df_to_change[,"column"]=="old data", "column"] = "new data"
If you only have 90 names to correct, I would write out 90 change lines like the one above.
After adjusting all the names, run sort(unique(...)) on both market columns again and use setdiff twice to confirm the two sets of names now match.
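If writing out 90 change lines feels heavy, a vectorized alternative is a named lookup vector plus match(); a sketch with hypothetical name pairs, again assuming character market columns (unmatched names are kept as-is):
lookup <- c(ALBANY = "Albany.Schenectady.Troy",  # hypothetical pairs
            BOSTON = "Boston.Manchester")
idx <- match(df2$market, names(lookup))
df2$market <- ifelse(is.na(idx), df2$market, lookup[idx])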

R getting rid of nested for loops

I did quite a bit of searching on how to simplify the code for the problem below but was not successful. I assume that with some kind of apply magic one could speed things up a little, but so far I still have difficulties with that family of functions ....
I have a data.frame data, structured as follows:
year iso3c gdppc elec solid liquid heat
2010 USA 1567 1063 1118 835 616
2015 USA 1571 NA NA NA NA
2020 USA 1579 NA NA NA NA
... USA ... NA NA NA NA
2100 USA 3568 NA NA NA NA
2010 ARG 256 145 91 85 37
2015 ARG 261 NA NA NA NA
2020 ARG 270 NA NA NA NA
... ARG ... NA NA NA NA
2100 ARG 632 NA NA NA NA
As you can see, I have a historical starting value for 2010 and a complete scenario for gdppc up to 2100. I want to let values for elec, solid, liquid and heat grow according to some elasticity with respect to the development of gdppc, but separately for each country (coded in iso3c).
I have the elasticities defined in a separate data.frame parameters:
item value
elec 0.5
liquid 0.2
solid -0.1
heat 0.1
So far I am using a nested for loop:
for (e in 1:length(levels(parameters$item))){
  for (c in 1:length(levels(data$iso3c))){
    tmp <- subset(data, select = c("year", "iso3c", "gdppc", parameters[e, "item"]),
                  subset = (iso3c == levels(data$iso3c)[c]))
    tmp[tmp$year %in% seq(2015, 2100, 5), parameters[e, "item"]] <-
      tmp[tmp$year == 2010, parameters[e, "item"]] *
      cumprod((1 + (tmp[tmp$year %in% seq(2015, 2100, 5), "gdppc"] /
                      tmp[tmp$year %in% seq(2010, 2095, 5), "gdppc"] - 1) * parameters[e, "value"]))
    data[data$iso3c == levels(data$iso3c)[c] & data$year %in% seq(2015, 2100, 5),
         parameters[e, "item"]] <- tmp[tmp$year > 2010, parameters[e, "item"]]
  }
}
The outer loop loops over the columns and the inner one over the countries. The inner loop runs for every country (I have 180+ countries). First, a subset containing data on one single country and on the variable of interest is selected. Then I let the respective variable grow with a certain elasticity to growth in gdppc and finally put the subset back into place in data.
I have already tried to let the outer loop run in parallel using foreach but was not successful in recombining the results. Since I have to run similar calculations quite often, I would be very grateful for any help.
Thanks
Here's one way. Note I renamed your parameters data.frame to p
library(data.table)
library(reshape2)
dt <- data.table(data)
dt.melt = melt(dt, id = 1:3)
dt.melt[, value := as.numeric(value)] # coerce value column to numeric
dt.melt[, value := head(value, 1) + (gdppc - head(gdppc, 1)) * p[p$item == variable, ]$value,
        by = "iso3c,variable"]
result <- dcast(dt.melt, iso3c + year + gdppc ~ variable)
result
result
# iso3c year gdppc elec solid liquid heat
# 1 ARG 2010 256 145.0 91.0 85.0 37.0
# 2 ARG 2015 261 147.5 90.5 86.0 37.5
# 3 ARG 2020 270 152.0 89.6 87.8 38.4
# 4 ARG 2100 632 333.0 53.4 160.2 74.6
# 5 USA 2010 1567 1063.0 1118.0 835.0 616.0
# 6 USA 2015 1571 1065.0 1117.6 835.8 616.4
# 7 USA 2020 1579 1069.0 1116.8 837.4 617.2
# 8 USA 2100 3568 2063.5 917.9 1235.2 816.1
The basic idea is to use the melt(...) function to reshape your original data into "long" format, where the values in the four columns solid, liquid, elec, and heat are all in one column, value, and the column variable indicates which metric value refers to. Now, using data tables, you can fill in the values easily. Then, reshape the result back into wide format using dcast(...).
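If you'd rather stay in the tidyverse, the same melt-compute-recast idea can carry the question's original cumprod elasticity formula; a sketch assuming the data and parameters frames shown above (and plain character item/iso3c columns):
library(dplyr)
library(tidyr)
data %>%
  pivot_longer(c(elec, solid, liquid, heat), names_to = "item", values_to = "val") %>%
  left_join(parameters, by = "item") %>%   # brings in the elasticity column, value
  group_by(iso3c, item) %>%
  arrange(year, .by_group = TRUE) %>%
  mutate(growth = gdppc / lag(gdppc) - 1,  # growth of gdppc between periods
         val = first(val) * cumprod(if_else(is.na(growth), 1, 1 + growth * value))) %>%
  ungroup() %>%
  select(-growth, -value) %>%
  pivot_wider(names_from = item, values_from = val)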

How can I streamline this R script?

I have a big dataframe in R that all looks about like this:
name amount date1 date2 days_out year
JEAN 318.5 1971-02-16 1972-11-27 650 days 1971
GREGORY 1518.5 <NA> <NA> NA days 1971
JOHN 318.5 <NA> <NA> NA days 1971
EDWARD 318.5 <NA> <NA> NA days 1971
WALTER 518.5 1971-07-06 1975-03-14 1347 days 1971
BARRY 1518.5 1971-11-09 1972-02-09 92 days 1971
LARRY 518.5 1971-09-08 1972-02-09 154 days 1971
HARRY 318.5 1971-09-16 1972-02-09 146 days 1971
GARRY 1018.5 1971-10-26 1972-02-09 106 days 1971
If someone's days_out is less than 60, they get a 90% discount; 60-90, a 70% discount; and so on down the tiers you can see in the code below. I need to find the discounted sum of all the amounts for each year. My utterly embarrassing workaround is to write a Python script that writes an R script that reads like this for each relevant year:
tmp <- members[members$year == 1971, ]
tmp90 <- tmp[tmp$days_out <= 60 & tmp$days_out > 0 & !is.na(tmp$days_out), ]
tmp70 <- tmp[tmp$days_out <= 90 & tmp$days_out > 60 & !is.na(tmp$days_out), ]
tmp50 <- tmp[tmp$days_out <= 120 & tmp$days_out > 90 & !is.na(tmp$days_out), ]
tmp30 <- tmp[tmp$days_out <= 180 & tmp$days_out > 120 & !is.na(tmp$days_out), ]
tmp00 <- tmp[tmp$days_out > 180 | is.na(tmp$days_out), ]
details.1971 <- c(1971, nrow(tmp),
                  nrow(tmp90), sum(tmp90$amount), sum(tmp90$amount) * .9,
                  nrow(tmp70), sum(tmp70$amount), sum(tmp70$amount) * .7,
                  nrow(tmp50), sum(tmp50$amount), sum(tmp50$amount) * .5,
                  nrow(tmp30), sum(tmp30$amount), sum(tmp90$amount) * .9,
                  nrow(tmp00), sum(tmp00$amount))
membership.for.chart <- rbind(membership.for.chart, details.1971)
It works just fine. The tmp frames and vectors get overwritten which is fine. But I know that I've utterly defeated everything that is elegant and efficient about R here. I launched R for the first time a month ago and I think I've come a long way. But I would really like to know how I should have gone about this?
Wow, you wrote a Python script that generates an R script? Consider my eyebrows raised...
Hopefully this will get you started:
#Import your data; add dummy column to separate 'days' suffix into its own column
dat <- read.table(text = " name amount date1 date2 days_out dummy year
JEAN 318.5 1971-02-16 1972-11-27 650 days 1971
GREGORY 1518.5 <NA> <NA> NA days 1971
JOHN 318.5 <NA> <NA> NA days 1971
EDWARD 318.5 <NA> <NA> NA days 1971
WALTER 518.5 1971-07-06 1975-03-14 1347 days 1971
BARRY 1518.5 1971-11-09 1972-02-09 92 days 1971
LARRY 518.5 1971-09-08 1972-02-09 154 days 1971
HARRY 318.5 1971-09-16 1972-02-09 146 days 1971
GARRY 1018.5 1971-10-26 1972-02-09 106 days 1971",header = TRUE,sep = "")
#Repeat 3 times
df <- rbind(dat,dat,dat)
#Create new year variable
df$year <- rep(1971:1973,each = nrow(dat))
#Breaks for discount levels
ct <- c(0,60,90,120,180,Inf)
#Cut into a factor
df$fac <- cut(df$days_out,ct)
#Create discount amounts for each row
df$discount <- c(0.9,0.7,0.5,0.9,1)[df$fac]
df$discount[is.na(df$discount)] <- 1
#Calc adj amount
df$amount_adj <- with(df,amount * discount)
#I use plyr a lot, but there are many, many
# alternatives
library(plyr)
ddply(df, .(year), summarise,
      amt = sum(amount_adj),
      total = length(year),
      d60 = length(which(fac == "(0,60]")))
I only calculated a few of your summary values in the last ddply command. I'm assuming you can extend it yourself.
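For what it's worth, a minimal dplyr equivalent of that ddply summary (a sketch assuming the df with fac and amount_adj built above):
library(dplyr)
df %>%
  group_by(year) %>%
  summarise(amt = sum(amount_adj),
            total = n(),
            d60 = sum(fac == "(0,60]", na.rm = TRUE))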
You can use either the cut function or the findInterval function. The exact code will depend on the internals of the object, which are not unambiguously communicated by console output. If days_out is a difftime object, then something like this might work:
disc_amt <- with(tmp, amount * c(.9, .7, .5, .9, 1)[
  findInterval(days_out, c(0, 60, 90, 120, 180, Inf))])
You should post the output of dput() on that tmp object, or perhaps dput(head(tmp, 20)) if it's really big, so testing can proceed. (The actual discounts did not seem to be ordered in a manner I would have expected.)
