How to calculate growth rate in R? [duplicate]

I have a data frame and would like to calculate the growth rate of nominal GDP in R. I know how to do it in Excel with the formula ((GDP of this year - GDP of last year) / GDP of last year) * 100. What command could be used in R to calculate it?
year nominal gdp
2003 7696034.9
2004 8690254.3
2005 9424601.9
2006 10520792.8
2007 11399472.2
2008 12256863.6
2009 12072541.6
2010 13266857.9
2011 14527336.9
2012 15599270.7
2013 16078959.8

You can also use the lag() function from dplyr, which gives the previous values of a vector. Here is an example:
library(dplyr)

data <- data.frame(year = 2003:2013,
                   gdp = c(7696034.9, 8690254.3, 9424601.9, 10520792.8,
                           11399472.2, 12256863.6, 12072541.6, 13266857.9,
                           14527336.9, 15599270.7, 16078959.8))

# percentage growth: (x_t / x_{t-1} - 1) * 100; the first value is NA
growth_rate <- function(x) (x / lag(x) - 1) * 100
data$growth_rate <- growth_rate(data$gdp)
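One caveat: base R's stats::lag() has the same name but shifts a time-series index rather than the values, so if dplyr is not attached, x / lag(x) on a plain vector quietly comes out as all 1s. A defensive variant qualifies the namespace:
# qualify the namespace so dplyr's lag() is always used
growth_rate <- function(x) (x / dplyr::lag(x) - 1) * 100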

It's probably best for you to get familiar with data.table, and do something like this:
library(data.table)
dt_gdp <- data.table(df)
# Producto.interno.bruto..PIB. is the GDP column in this answerer's own data;
# substitute your column name
dt_gdp[, growth_rate_of_gdp := 100 * (Producto.interno.bruto..PIB. - shift(Producto.interno.bruto..PIB.)) /
         shift(Producto.interno.bruto..PIB.)]
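For comparison, the same shift() pattern applied to the example data frame from the dplyr answer (generic gdp column name instead of the answerer's):
library(data.table)
dt_gdp <- data.table(data)
dt_gdp[, growth_rate := 100 * (gdp - shift(gdp)) / shift(gdp)]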

A base-R solution:
with(data,
     c(NA,                 # growth rate unknown in year 1
       diff(gdp) /         # gdp(t) - gdp(t-1)
         head(gdp, -1))    # gdp(t-1)
     * 100)                # scale to percentage growth
head(gdp, -1) is perhaps a little too clever. gdp[-length(gdp)] (i.e. "gdp, excluding the last value") would be slightly more idiomatic.
Or:
(gdp / c(NA, gdp[-length(gdp)]) - 1) * 100
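As an aside, log differences are a common econometric shortcut for percentage growth; a quick sketch (close to the exact rate only when growth is small):
# log-difference approximation to percentage growth
c(NA, 100 * diff(log(gdp)))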

Related

approach to cut dataset to make a new factor variable

Currently, I am trying to cut the dataset into three parts: developed, developing, and under-developed. The cutoffs are quantiles: developed countries are those above the 75th percentile, developing countries fall between the 50th and 75th percentiles, and under-developed countries are below the 50th percentile. However, the quantiles differ by year.
data <- data.frame(country = c("U.S.A", "U.S.A", "Jamaica", "Jamaica", "Congo", "Congo"),
                   year = c(2000, 2001, 2000, 2001, 2000, 2001),
                   gdp_per_capita = c(30000, 40000, 100, 200, 50, 60))
quantiles <- do.call("data.frame",
                     tapply(data$gdp_per_capita, data$year, quantile))
What I did was to calculate the quantiles by year and I got a data frame with just that information. Now, I am trying to use this information to apply above criteria for each year.
Example:
2000: 50% = 3000, 75% = 15999
2001: 50% = 5000, 75% = 18000
(the cut points change from year to year)
Possible results
year country gdp_per_capita status
2000 U.S. 1800000 "developed"
2000 France 200000 "developed"
....more than 500+ obs.
2000 Kenya 300 "under-developed"
2000 Malaysia 1500 "developing"
2001 Malaysia 3000 "developing"
2001 Kenya 500 "under-developed"
2001 Spain 30000 "developed"
2000 India 300 "under-developed"
2001 India 5100 "developing"
What would be the most efficient way to resolve this?
I tried using ifelse and assigning the statuses one by one. That seems like too much work, and there is little point in using a computer if I have to iterate through them by hand.
Instead of data.frame, use rbind in the do.call so the quantile breakpoints become columns; then merge the result back to the original dataset by year. Finally, compute status with nested ifelse logic.
### QUANTILES
quantiles_matrix <- do.call("rbind", tapply(data$gdp_per_capita, data$year, quantile))
quantiles_df <- transform(data.frame(quantiles_matrix),
                          year = row.names(quantiles_matrix))

### MERGE
# data.frame() turns the "50%" and "75%" quantile names into columns X50. and X75.
mdf <- merge(data, quantiles_df, by = "year")

### STATUS COLUMN ASSIGNMENT
final_df <- transform(mdf,
                      status = ifelse(gdp_per_capita > X75., "developed",
                               ifelse(gdp_per_capita >= X50. & gdp_per_capita <= X75., "developing",
                               ifelse(gdp_per_capita < X50., "under-developed", NA))))
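Since the question frames this as cutting the data, a cut()-based sketch per year is another option. Note the boundary handling: a value exactly at the 75th percentile lands in "developed" here, whereas the nested ifelse above labels it "developing":
# per-year cut() with quantile breaks; right = FALSE makes the intervals
# [-Inf, 50%), [50%, 75%), [75%, Inf)
final_cut <- do.call(rbind, by(data, data$year, function(d) {
  q <- quantile(d$gdp_per_capita, c(0.5, 0.75))
  d$status <- cut(d$gdp_per_capita,
                  breaks = c(-Inf, q[[1]], q[[2]], Inf),
                  labels = c("under-developed", "developing", "developed"),
                  right = FALSE)
  d
}))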

How to compute growth rate (1- and 3-year horizon) from panel data in R

I have a panel dataset of several banks, each with annual observations from 1997 to 2015, such that:
CODE COUNTRY YEAR LOANS_NET ...other variables
671405 AT 1997 39028938
671405 AT 1998 41033237
671405 AT 1999 35735062
...
...
671405 AT 2015 130701872
...
30885R DE 2004 200024673
...
...
Using R, I need to compute two additional columns:
1) the LOANS_NET growth rate at a 1-year horizon;
2) the LOANS_NET growth rate at a 3-year horizon, which must be annualized once calculated.
E.g.:
3-year loan growth(i, t) = LOANS_NET(i, t) / LOANS_NET(i, t-3) - 1
NB: the data contain lots of missing values; the code must account for that! :)
@Dan Do you use any packages? I recommend the zoo and data.table packages. Transform the dates in the following way:
DT[, YearNumeric := as.numeric(YEAR)]
DT[, PreviousYearLoanNet := .SD[match(YearNumeric - 1, .SD$YearNumeric), LOANS_NET], by=CODE]
Here, you create a column with the previous year's (t-1) loan values. Then you create a new column with the growth rate:
DT[, Growth1Y := (LOANS_NET - PreviousYearLoanNet) / PreviousYearLoanNet]
And then you do whatever you want:) Cheers!
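The above covers the 1-year horizon. For the 3-year horizon, a sketch along the same lines; annualizing via the cube root of the growth factor is one reading of the question's "must be annualized" requirement, and match() yields NA wherever the t-3 observation is missing, which covers the missing-value concern:
# same match() pattern at t-3; cube root annualizes the 3-year growth factor
DT[, LoanNet3YAgo := .SD[match(YearNumeric - 3, .SD$YearNumeric), LOANS_NET], by = CODE]
DT[, Growth3Y := (LOANS_NET / LoanNet3YAgo)^(1/3) - 1]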

Differencing with respect to specific value of a column

I have a variable called Depression which has 40 quarterly observations running from 2004 to 2013 (e.g. 2004 Q1, 2004 Q2, etc.). I would like to make a new column that differences with respect to the 27th row/observation, which corresponds to 2010 Q3, and set that value to 0. Any help is greatly appreciated!
If I understand your question correctly, this would do it:
# generate sample data
dat <- data.frame(id = paste0("Obs.", 1:40),
                  depression = as.integer(runif(40, 0, 20)))
# create a new variable: difference from the 27th observation's depression score
dat$diff <- dat$depression - dat$depression[27]
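As a quick sanity check, the 27th value of the new column is zero by construction, which matches the requirement that 2010 Q3 be set to 0:
dat$diff[27]  # 0: the reference observation differenced against itself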

Eliminating Existing Observations in a Zoo Merge

I'm trying to do a zoo merge between stock prices from selected trading days and observations about those same stocks (we call these "Nx observations") made on the same days. Sometimes we do not have Nx observations on stock trading days, and sometimes we have Nx observations on non-trading days. We want to place an NA where we have no Nx observation on a trading day, but eliminate Nx observations that fall on non-trading days, since without trading data for the same day, Nx observations are useless.
The following SO question is close to mine, but I would characterize that question as REPLACING missing data, whereas my objective is to truly eliminate observations made on non-trading days. (If necessary, we can change the process by which Nx observations are taken, but it would be a much less expensive solution to leave it alone.)
merge data frames to eliminate missing observations
The script I have prepared to illustrate follows (I'm new to R and SO; all suggestions welcome):
# create Stk_data data.frame for use in the Stack Overflow question
Date_Stk <- c("1/2/13", "1/3/13", "1/4/13", "1/7/13", "1/8/13") # dates for stock prices used in the example
ABC_Stk <- c(65.73, 66.85, 66.92, 66.60, 66.07) # stock prices for tkr ABC for Jan 1 2013 through Jan 8 2013
DEF_Stk <- c(42.98, 42.92, 43.47, 43.16, 43.71) # stock prices for tkr DEF for Jan 1 2013 through Jan 8 2013
GHI_Stk <- c(32.18, 31.73, 32.43, 32.13, 32.18) # stock prices for tkr GHI for Jan 1 2013 through Jan 8 2013
Stk_data <- data.frame(Date_Stk, ABC_Stk, DEF_Stk, GHI_Stk) # create the stock price data.frame
# create Nx_data data.frame for use in the Stack Overflow question
Date_Nx <- c("1/2/13", "1/4/13", "1/5/13", "1/6/13", "1/7/13", "1/8/13") # dates for Nx Observations used in the example
ABC_Nx <- c(51.42857, 51.67565, 57.61905, 57.78349, 58.57143, 58.99564) # Nx scores for stock ABC for Jan 1 2013 through Jan 8 2013
DEF_Nx <- c(35.23809, 36.66667, 28.57142, 28.51778, 27.23150, 26.94331) # Nx scores for stock DEF for Jan 1 2013 through Jan 8 2013
GHI_Nx <- c(7.14256, 8.44573, 6.25344, 6.00423, 5.99239, 6.10034) # Nx scores for stock GHI for Jan 1 2013 through Jan 8 2013
Nx_data <- data.frame(Date_Nx, ABC_Nx, DEF_Nx, GHI_Nx) # create the Nx scores data.frame
# create zoo objects & merge
z.Stk_data <- zoo(Stk_data, as.Date(as.character(Stk_data[, 1]), format = "%m/%d/%Y"))
z.Nx_data <- zoo(Nx_data, as.Date(as.character(Nx_data[, 1]), format = "%m/%d/%Y"))
z.data.outer <- merge(z.Stk_data, z.Nx_data)
The NAs on Jan 3 2013 for the Nx observations are fine (we'll use na.locf), but we need to eliminate the Nx observations that appear on Jan 5 and 6, as well as the associated NAs in the stock-price section of the zoo objects.
I've read the R documentation for merge.zoo regarding the use of all: that it "allows intersection, union and left and right joins to be expressed". But trying every combination of all in the following yielded the same results (as to why would be a secondary question).
z.data.outer <- zoo(merge(x = Stk_data, y = Nx_data, all.x = FALSE)) # try using "all"
While I would appreciate comments on the secondary question, I'm primarily interested in learning how to eliminate the extraneous Nx observations on days when there is no trading of stocks. Thanks. (And thanks in general to the community for all the great explanations of R!)
The all argument of merge.zoo must be (quoting from the help file) a "logical vector having the same length as the number of 'zoo' objects to be merged (otherwise expanded)", and you want to keep all rows from the first argument but not the second, so its value should be c(TRUE, FALSE).
merge(z.Stk_data, z.Nx_data, all = c(TRUE, FALSE))
The reason the all syntax for merge.zoo differs from merge.data.frame is that merge.zoo can merge any number of arguments, whereas merge.data.frame only handles two, so the syntax had to be extended.
Also note that %Y should have been %y in the question's code.
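For completeness, the corrected date parsing (only the format string changes from the question's code):
z.Stk_data <- zoo(Stk_data, as.Date(as.character(Stk_data[, 1]), format = "%m/%d/%y"))
z.Nx_data <- zoo(Nx_data, as.Date(as.character(Nx_data[, 1]), format = "%m/%d/%y"))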
I hope I have understood your desired output correctly ("NAs on Jan 3 2013 for the Nx observations are fine"; "eliminate [...] observations that appear on Jan 5 and 6"). I don't quite see the need for zoo in the merging step.
merge(Stk_data, Nx_data, by.x = "Date_Stk", by.y = "Date_Nx", all.x = TRUE)
# Date_Stk ABC_Stk DEF_Stk GHI_Stk ABC_Nx DEF_Nx GHI_Nx
# 1 1/2/13 65.73 42.98 32.18 51.42857 35.23809 7.14256
# 2 1/3/13 66.85 42.92 31.73 NA NA NA
# 3 1/4/13 66.92 43.47 32.43 51.67565 36.66667 8.44573
# 4 1/7/13 66.60 43.16 32.13 58.57143 27.23150 5.99239
# 5 1/8/13 66.07 43.71 32.18 58.99564 26.94331 6.10034
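From there, the remaining NAs can be carried forward with zoo's na.locf, as the question anticipates. A sketch, assuming the merged column layout shown above (Nx columns in positions 5 through 7):
library(zoo)
m <- merge(Stk_data, Nx_data, by.x = "Date_Stk", by.y = "Date_Nx", all.x = TRUE)
# carry the last Nx observation forward; na.rm = FALSE keeps leading NAs as NA
m[, 5:7] <- na.locf(m[, 5:7], na.rm = FALSE)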

R: Percentile calculations on subsets of data

I have a data set which contains the following identifiers: an rscore, gvkey, sic2, year, and cdom. What I am looking to do is calculate percentile ranks based on summed rscores over all temporal spans (~1500) for a given gvkey, and then calculate percentile ranks within a given temporal span and sic2 based on gvkey.
Calculating the percentiles over all temporal spans is a fairly quick process, but once I add in the sic2 percentile ranks it becomes fairly slow, and we are likely looking at ~65,000 subsets in total. I'm wondering whether there is a way to speed this up.
The data for one temporal time span looks like the following
gvkey sic2 cdom rscoreSum pct
1187 10 USA 8.00E-02 0.942268617
1265 10 USA -1.98E-01 0.142334654
1266 10 USA 4.97E-02 0.88565478
1464 10 USA -1.56E-02 0.445748247
1484 10 USA 1.40E-01 0.979807985
1856 10 USA -2.23E-02 0.398252565
1867 10 USA 4.69E-02 0.8791019
2047 10 USA -5.00E-02 0.286701209
2099 10 USA -1.78E-02 0.430915371
2127 10 USA -4.24E-02 0.309255308
2187 10 USA 5.07E-02 0.893020421
The code to calculate the industry ranks is below, and fairly straightforward.
library(plyr)

# generate percentile ranks within 2-digit SIC industries
dout <- ddply(dfSum, .(sic2), function(x) {
  indPct <- rank(x$rscoreSum) / nrow(x)
  gvkey <- x$gvkey
  x <- data.frame(gvkey, indPct)
})

# merge 2-digit industry SIC percentile ranks with market percentile ranks
dfSum <- merge(dfSum, dout, by = "gvkey")
names(dfSum)[2] <- 'sic2'
Any suggestions to speed the process would be appreciated!
You might try the data.table package for fast operations across relatively large datasets like yours. For example, my machine has no problem working through this:
library(data.table)

# Create a dataset like yours, but bigger
n.rows <- 2e6
n.sic2 <- 1e4
dfSum <- data.frame(gvkey = seq_len(n.rows),
                    sic2 = sample.int(n.sic2, n.rows, replace = TRUE),
                    cdom = "USA",
                    rscoreSum = rnorm(n.rows))

# Now make your dataset into a data.table
dfSum <- data.table(dfSum)

# Calculate the percentiles within each sic2 group
# (no need to re-assign the result: := modifies in place)
dfSum[, indPct := rank(rscoreSum) / length(rscoreSum), by = "sic2"]
whereas the plyr equivalent takes a while.
If you like the plyr syntax (I do), you may also be interested in the dplyr package, which is billed as "the next generation of plyr", with support for faster data stores in the backend.
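If you do go the dplyr route, here is a sketch of the equivalent grouped computation (same rank-based percentile as the data.table call above):
library(dplyr)
dfSum <- dfSum %>%
  group_by(sic2) %>%
  mutate(indPct = rank(rscoreSum) / n()) %>%
  ungroup()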
