Extract time series from netCDF in R

I created this file using the TRMM_3B42_Daily product over 1998-01-01 to 1998-12-31. This is the script I used in R:
library(ncdf4)
library(abind)

# Grid and time dimensions
lon <- seq(-91.875, -86.875, by = 0.25)
lat <- seq(13.875, 16.875, by = 0.25)
x_dim <- ncdim_def("lon", "degrees_east", lon, create_dimvar = TRUE)
y_dim <- ncdim_def("lat", "degrees_north", lat, create_dimvar = TRUE)
t_dim <- ncdim_def("time", "days since 1997-12-31 12:00:00.0 -0:00", 1:365, unlim = FALSE)

mv <- 9999.900390625  # missing value
precipitation_var <- ncvar_def("precipitation", "mm", list(y_dim, x_dim, t_dim), mv)

# Read each daily file and stack it along the time dimension
# (test01 is the character vector of the 365 daily TRMM file paths, defined elsewhere)
nrow <- 13
ncol <- 21
precip <- array(NA, c(nrow, ncol, 1))
for (i in 1:length(test01)) {
  precip_nc <- nc_open(test01[i])
  precip_get_nc <- ncvar_get(precip_nc, "precipitation")
  precip <- abind(precip, precip_get_nc)
  nc_close(precip_nc)
}
precip <- precip[, , -1]  # drop the initial all-NA slice

PRECIPITATION_nc <- nc_create("PRECIPITATION_1998.nc", precipitation_var)
ncvar_put(PRECIPITATION_nc, precipitation_var, precip)
nc_close(PRECIPITATION_nc)
Following this link I tried extracting the values in order to plot a time series, but it seems I am averaging the values of two cells instead of just extracting the values of a single cell. How do I fix this? Is there a way to create a loop so that it extracts the values of each cell separately? (In this case that would be 13 x 21 = 273 cells.)
library(raster)
b <- brick('PRECIPITATION_1998.nc')
be <- crop(b, extent(13.875, 14.125, -91.875, -91.625))
a <- aggregate(be, dim(be)[2:1], na.rm = TRUE)
v <- values(a)
write.csv(v, 'precip.csv', row.names = FALSE)
Also, two other problems I found were that the dates in the exported file have an X in front of them and that the values are shown horizontally instead of vertically. Any help would be greatly appreciated! Thanks

Extraction of point data can be easily accomplished by creating a SpatialPoints object containing the points from which you want to extract data, followed by an extract operation.
Concerning the other topics: the "X"s are added because column names cannot start with numerals, so a character is prepended. The horizontal ordering can easily be changed after extraction with some transposing.
This, for example, should work (it also solves the "X"s problem and changes the format to "column like"):
library(raster)
library(stringr)
library(lubridate)
library(tidyverse)
b <- brick('/home/lb/Temp/buttami/PRECIPITATION_1998.nc')
lon <- c(-91.875, -91.625) # Array of x coordinates
lat <- c(13.875, 14.125)   # Array of y coordinates
points <- SpatialPoints(cbind(lat, lon)) # Build a SpatialPoints object (lat first, matching the dimension order used when the file was created)
# Extract and tidy
points_data <- b %>%
  raster::extract(points, df = TRUE) %>%
  gather(date, value, -ID) %>%
  spread(ID, value) %>% # Can be skipped if you want a "long" table
  mutate(date = ymd(str_sub(names(b), 2))) %>%
  as_tibble()
points_data
# A tibble: 365 × 3
   date         `1`   `2`
   <date>     <dbl> <dbl>
 1 1998-01-01     0     0
 2 1998-01-02     0     0
 3 1998-01-03     0     0
 4 1998-01-04     0     0
 5 1998-01-05     0     0
 6 1998-01-06     0     0
 7 1998-01-07     0     0
 8 1998-01-08     0     0
 9 1998-01-09     0     0
10 1998-01-10     0     0
# ... with 355 more rows
plot(points_data$date,points_data$`1`)
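If you want the time series for all 13 x 21 = 273 cells rather than a handful of points, you do not even need extract(): values() already returns one row per cell. A minimal sketch, reusing the brick and the date-parsing trick above:

library(raster)
library(stringr)
library(lubridate)

b <- brick('PRECIPITATION_1998.nc')
all_vals <- t(values(b))               # 365 rows (dates) x 273 columns (cells)
dates <- ymd(str_sub(names(b), 2))     # strip the leading "X" from the layer names
out <- data.frame(date = dates, all_vals)
write.csv(out, 'precip_all_cells.csv', row.names = FALSE)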

Related

How to add a column to my data frame that calculates the distance between consecutive lat/long points with matching IDs

I have a data frame of individual animals with a unique ID, the lat/long where they were found, and the date they were found. The database has frequent returns of the same individual, and I have over 2000 individuals. I want to add a column to my data frame that calculates the Euclidean distance between each individual's current and previous location, and a second column to tell me which calculation number I'm on for each individual. The data frame is already organized by sequential date. I'm trying to solve this in R.
Event  ID  Lat    Long
1      1   31.89  -80.98
2      2   31.54  -80.12
3      1   31.45  -81.92
4      1   31.64  -81.82
5      2   31.23  -80.98
Add a column so that now it looks like
Event  ID  Lat    Long    Dist.                         Calculation #
1      1   31.89  -80.98  -                             0
2      2   31.54  -80.12  -                             0
3      1   31.45  -81.92  Distance between event 1 & 3  1
4      1   31.64  -81.82  Distance between event 3 & 4  2
5      2   31.23  -80.98  Distance between event 2 & 5  1
Is there a faster way to do this without a for loop? I'm stuck on where to start. I know I can use a distance function from a geospatial package once I have the unique IDs sorted, but I'm having trouble iterating through my data.
Here is one option which leans on the sf package and dplyr. The function sf::st_distance calculates distances between pairs of points (in metres for a geographic CRS), and dplyr::lag can be used to look "one row behind". You will want to confirm your coordinate system, which I guessed here is WGS84/4326.
library(dplyr)
library(sf)
dat <- read.table(text = "Event ID Lat Long
1 1 31.89 -80.98
2 2 31.54 -80.12
3 1 31.45 -81.92
4 1 31.64 -81.82
5 2 31.23 -80.98", h = TRUE)

dat_sf <- st_as_sf(dat, coords = c('Long', 'Lat'), crs = 4326)

dat_sf %>%
  arrange(ID) %>%
  group_by(ID) %>%
  mutate(distance = as.numeric(st_distance(geometry, lag(geometry), by_element = TRUE)),
         calculation = row_number() - 1)
#> Simple feature collection with 5 features and 4 fields
#> Geometry type: POINT
#> Dimension: XY
#> Bounding box: xmin: -81.92 ymin: 31.23 xmax: -80.12 ymax: 31.89
#> Geodetic CRS: WGS 84
#> # A tibble: 5 x 5
#> # Groups:   ID [2]
#>   Event    ID       geometry distance calculation
#> * <int> <int>    <POINT [°]>    <dbl>       <dbl>
#> 1     1     1 (-80.98 31.89)      NA            0
#> 2     3     1 (-81.92 31.45)  101524.           1
#> 3     4     1 (-81.82 31.64)   23155.           2
#> 4     2     2 (-80.12 31.54)      NA            0
#> 5     5     2 (-80.98 31.23)   88615.           1
Created on 2022-11-14 by the reprex package (v2.0.0)
Try this:
- load library geosphere
- create demo data
- get all unique IDs and sort the dataframe by ID and event
- append the last known coords of each animal to each row
- apply the distance function to each row
library(geosphere)
library(dplyr)
library(magrittr) # for %<>%

df <- data.frame(
  event = seq(5),
  id = c(1, 2, 1, 1, 2),
  lat = c(31.89, 31.54, 31.45, 31.64, 31.23),
  long = c(-80.98, -80.12, -81.92, -81.82, -80.98)
)

keys <- df$id %>% unique
df %<>% dplyr::arrange(id, event)

# For each animal, shift its coordinates down one row to get the previous location
df <- keys %>% lapply(
  function(key){
    tmp <- df[df$id == key, ]
    tmp$last_lat <- tmp$lat
    tmp$last_long <- tmp$long
    tmp[2:nrow(tmp), ]$last_lat <- tmp[seq_len(nrow(tmp) - 1), ]$lat
    tmp[2:nrow(tmp), ]$last_long <- tmp[seq_len(nrow(tmp) - 1), ]$long
    return(tmp)
  }
) %>% do.call(rbind, .)

df %<>% mutate(dist = distHaversine(cbind(long, lat), cbind(last_long, last_lat)))
Since you said you need speed, below is the same code as above but run in parallel:
library(tictoc)
library(parallel)

tic()
clust <- makeCluster(detectCores() - 1)

df <- data.frame(
  event = seq(5),
  id = c(1, 2, 1, 1, 2),
  lat = c(31.89, 31.54, 31.45, 31.64, 31.23),
  long = c(-80.98, -80.12, -81.92, -81.82, -80.98)
)

keys <- df$id %>% unique
df %<>% dplyr::arrange(id, event)

clusterExport(clust, "df")
clusterEvalQ(clust, library(magrittr))

df <- keys %>% parLapply(
  clust, .,
  function(key){
    tmp <- df[df$id == key, ]
    tmp$last_lat <- tmp$lat
    tmp$last_long <- tmp$long
    tmp[2:nrow(tmp), ]$last_lat <- tmp[seq_len(nrow(tmp) - 1), ]$lat
    tmp[2:nrow(tmp), ]$last_long <- tmp[seq_len(nrow(tmp) - 1), ]$long
    return(tmp)
  }
) %>% do.call(rbind, .)

df %<>% mutate(dist = distHaversine(cbind(long, lat), cbind(last_long, last_lat)))

stopCluster(clust) # release the workers
toc()
Above, tictoc just records the execution time. I created a cluster with the number of your CPU cores minus 1 and changed the lapply part to parLapply. The second version will be slower than the first if you have a small dataset (due to the overhead of setting up the parallel computation), but if you have a large dataset, the second version will be much faster.

How can I join elements (columns from dataframes) from two lists by row names using R?

I need help please. I have two lists: the first contains ndvi time series for distinct plots, the second contains precipitation time series for the same plots (the plots are in the same order in the two lists).
I need to combine the two lists. I want to add the column called precipitation from one list to the corresponding ndvi column from the other list, respecting the dates (represented here by letters in the row names), for a posterior analysis of correlation between the columns. However, the ndvi and precipitation time series have distinct lengths and distinct dates.
I created the two lists to be used as an example of my dataset. However, in my actual dataset the row names are monthly dates in the format "%Y-%m-%d".
library(tidyverse)
set.seed(100)

# First variable is ndvi.mon1 (monthly ndvi)
ndvi.mon1 <- vector("list", length = 3)
for (i in seq_along(ndvi.mon1)) {
  aux <- data.frame(ndvi = sample(randu$x,
                                  sample(c(seq(1, 20, 1)), 1),
                                  replace = T))
  ndvi.mon1[i] <- aux
  ndvi.mon1 <- ndvi.mon1 %>% map(data.frame)
  rownames(ndvi.mon1[[i]]) <- sample(letters, size = seq(letters[1:as.numeric(aux %>% map(length))]) %>% length)
}

# Second variable is precipitation
precipitation <- vector("list", length = 3)
for (i in seq_along(ndvi.mon1)){
  prec_aux <- data.frame(precipitation = sample(randu$x*500,
                                                26,
                                                replace = T))
  row.names(prec_aux) <- seq(letters[1:as.numeric(prec_aux %>% map(length))])
  precipitation[i] <- prec_aux
  precipitation <- precipitation %>% map(data.frame)
  rownames(precipitation[[i]]) <- letters[1:(as.numeric(precipitation[i] %>% map(dim) %>% map(first)))]
}
Can someone help me please?
Thank you!!!
Marcio.
Maybe like this?
library(dplyr)
library(purrr)
library(tibble) # for rownames_to_column

precipitation2 <- precipitation %>%
  map(rownames_to_column) %>%
  map(rename, precipitation = 2)

ndvi.mon2 <- ndvi.mon1 %>%
  map(rownames_to_column) %>%
  map(rename, ndvi = 2)

purrr::map2(ndvi.mon2, precipitation2, left_join, by = "rowname")
[[1]]
   rowname     ndvi precipitation
1        k 0.354886      209.7415
2        x 0.596309      103.3700
3        r 0.978769      403.8775
4        l 0.322291      354.2630
5        c 0.831722      348.9390
6        s 0.973205      273.6030
7        h 0.949827      218.6430
8        y 0.443353       61.9310
9        b 0.826368        8.3290
10       d 0.337308      291.2110
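One thing to keep in mind: left_join keeps every ndvi date and fills precipitation with NA where a date has no match. If you only want dates present in both lists, use inner_join instead (or full_join to keep everything).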
The below will return a list of data.frames that have been merged using their rownames:
lapply(seq_along(ndvi.mon1), function(i) {
merge(
x = data.frame(date = rownames(ndvi.mon1[[i]]), ndvi = ndvi.mon1[[i]][,1]),
y = data.frame(date = rownames(precipitation[[i]]), precip = precipitation[[i]][,1]),
by="date"
)
})
Output:
[[1]]
   date     ndvi   precip
1     b 0.826368   8.3290
2     c 0.831722 348.9390
3     d 0.337308 291.2110
4     h 0.949827 218.6430
5     k 0.354886 209.7415
6     l 0.322291 354.2630
7     r 0.978769 403.8775
8     s 0.973205 273.6030
9     x 0.596309 103.3700
10    y 0.443353  61.9310
[[2]]
  date     ndvi   precip
1    g 0.415824 283.9335
2    k 0.573737 311.8785
3    p 0.582422 354.2630
4    y 0.952495 495.4340
[[3]]
   date     ndvi   precip
1     b 0.656463 332.5700
2     c 0.347482  94.7870
3     d 0.215425 431.3770
4     e 0.063100 499.2245
5     f 0.419460 304.5190
6     g 0.712057 226.7125
7     h 0.666700 284.9645
8     i 0.778547 182.0295
9     k 0.902520  82.5515
10    l 0.593219 430.6630
11    m 0.788715 443.5345
12    n 0.347482 132.3950
13    q 0.719538  79.1835
14    r 0.911370 100.7025
15    s 0.258743 309.3575
16    t 0.940644 142.3725
17    u 0.626980 335.4360
18    v 0.167640 390.4915
19    w 0.826368  63.3760
20    x 0.937211 439.8685
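The same merge can also be written with Map, which walks the two lists in parallel instead of indexing by position. A sketch equivalent to the lapply above:

Map(function(n, p) {
  merge(
    x = data.frame(date = rownames(n), ndvi = n[, 1]),
    y = data.frame(date = rownames(p), precip = p[, 1]),
    by = "date"
  )
}, ndvi.mon1, precipitation)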

Calculate n-dimensional euclidean distance from group centroids for each sample and select the lowest 3 for each group in R

This is a two-part question and is pretty complex.
First, I want to calculate the n-dimensional Euclidean distance between each individual sample in dataframe "ind_scores" and its respective group centroid in dataframe "centroids".
Then, I want to select the 3 individual samples that are closest to their respective group centroids. I want to save these to a new dataframe with the info for 'individual sample name', 'group' and 'distance to centroid'.
Here is an example of the data:
ind_scores <- data.frame(row.names = c("OP2413iiiaMOU","OP2413iiibMOU","OP2413iiicMOU","OP2645ii_aPOR","OP2645ii_bPOR","OP2645ii_cPOR","OP2645ii_dPOR","OP2645ii_ePOR","OP3088i__aPOR","OP5043___aWAT","OP5043___bWAT","OP5044___aMOU","OP5044___bMOU","OP5044___cMOU","OP5046___aWAT","OP5046___bWAT","OP5046___cWAT","OP5046___dWAT","OP5046___eWAT","OP5047___aPHA","OP5047___bPHA","OP5048___bPHA","OP5048___cPHA","OP5048___dPHA","OP5048___ePHA","OP5048___fPHA","OP5048___gPHA","OP5048___hPHA","OP5049___aWAT","OP5049___bWAT","OP5051DNAaCOM","OP5051DNAbCOM","OP5051DNAcCOM","OP5052DNAaWAT","OP5053DNAaPHA","OP5053DNAbPHA","OP5053DNAcPHA","OP5054DNAaMOU","OP5054DNAbMOU","OP5054DNAcMOU"),
group = c("4","4","4","1","1","1","1","1","3","3","3","5","5","5","2","5","2","2","5","3","3","3","3","1","3","3","3","3","2","2","4","5","4","2","3","3","3","5","5","5"),
CV.1 = c(-13.3864612433581,-12.8079930877268,-12.8078461023615,11.609290941109,10.5489837203281,10.8802079446603,11.7559827821839,10.769027609963,2.93788199576291,5.14343682437333,1.1768471575429,-3.59878541566711,-3.69656648297924,-3.13205394000296,-1.88190759998412,-3.50181277277038,0.563858206656491,-1.38629942623866,-3.73771209413208,3.40039117982473,2.86962877144321,3.80869463338469,4.15722705333298,9.08529455175736,3.15497802125988,2.42193314853044,0.600699372070624,4.14515087614032,-3.3599436881205,-1.8893406509868,-13.355031250023,-4.10118631444206,-11.4911993949333,-1.55841778422586,2.91834267480086,1.58762181687645,3.08125993208779,-3.84248479288043,-3.60800082570682,-3.47369634755007),
CV.2 = c(-5.98931418061097,-6.48685652483353,-6.48781938591041,-5.4121748521578,-4.56051914391762,-5.14772881585026,-4.3883054106957,-3.06298578319138,0.25688954313487,1.01459325674394,1.47381593062751,5.11285501685872,6.32219277017476,4.93757903863915,-1.98974199849122,6.8029453586845,-4.47482073821288,-2.89353901685366,6.19654462202962,1.44791941276988,2.01950206487354,3.29347544821835,1.70411388918498,-3.36842394773708,0.843537649290457,1.53904192617335,-0.0653393231022099,2.43481086719558,-2.28081054006986,-1.12101221091068,-5.74678650527647,2.81164429296665,-4.7739502651084,-0.836323550526183,1.21550795042252,1.3943021883996,1.4814166592311,5.83324212843683,5.74898742272061,5.20153475667944),
CV.3 = c(-1.98030009996666,-0.130982057250324,-0.13182806033636,4.66419380929057,5.76073945060135,4.68132496125842,4.76343610149589,4.14550671815003,-4.32639082067268,-4.24665489024982,-4.41960026466873,3.48306980151309,3.33978102573513,5.7630709271421,1.72213262278476,3.4138699327986,-0.214011687254588,-1.35717946591182,3.99742433050098,-4.11899265115508,-4.850265219848,-4.56241597162798,-5.1673124571133,3.88620294769555,-7.55945071289283,-5.18624310325486,-2.64740221288213,-3.34585676732483,-0.146912983782168,0.183282683148834,0.341803164827804,3.08878325423758,0.402559648490399,-0.589462854225432,-4.66295564242554,-4.70902036477095,-3.15037329091412,4.46721009678144,4.19323467451728,5.20598542755799),
CV.4 = c(-1.85773720384766,-3.29816018270707,-3.29805035723744,-1.0463680864694,-0.164642808251456,-1.88434766843655,-2.76184052196793,-1.69491772471098,0.0194432918943446,0.900426089523736,-0.581953934607345,-0.230042890025999,-1.79667524325622,-2.45893275735924,6.71016957191989,1.8888359729478,5.48587185602468,7.45260127587355,-0.447573770298677,-1.61748546155154,-2.01415972868345,-1.50135791552696,-0.439840157186184,-1.26569596255966,-1.04297110114946,-1.59978271452128,-0.471298592990895,-0.466524983137062,6.36590517153234,6.62852590954231,-3.04695209017556,-0.936146169909344,-2.4145719914164,5.10804058988218,-0.0744344020096521,-1.17738342385673,-1.67635978290671,-1.05954691377259,-0.0467102661118772,1.81264507750015))
centroids <- data.frame(group = c("1","2","3","4","5"),
CV.1 = c(10.7747979250003,-1.58534182381657,2.95743524695937,-12.7697062156805,-3.63247766512568),
CV.2 = c(-4.32335632559164,-2.26604134251075,1.43239910451168,-5.89694537234795,5.44083615635448),
CV.3 = c(4.65023399808197,-0.0670252808734024,-4.49663816927149,-0.299749480847027,4.1058254967538),
CV.4 = c(-1.469635462066,6.29185239579583,-0.838834486907799,-2.78309436507683,-0.363794106698444))
Many thanks in advance! Cheers. Deon.
Personally, I like working with tidy tibbles (no more row names, and data in long form), so I'll first convert your dataframes to that.
library(tidyverse)
ind_scores <- ind_scores %>%
  as_tibble(rownames = "name") %>%
  pivot_longer(cols = starts_with("CV"),
               names_to = "CV")

centroids <- centroids %>%
  pivot_longer(cols = starts_with("CV"),
               names_to = "CV")
Now it is easy to join the correct centroid values to the individuals, group by individual, and calculate each one's Euclidean distance. The resulting tibble contains columns name and distance. Sorting by distance gives the closest samples at the top.
ind_scores %>%
  left_join(centroids, by = c("group", "CV"), suffix = c("", "_centroid")) %>%
  group_by(group, name) %>%
  summarise(distance = sqrt(sum((value - value_centroid)^2))) %>% # Euclidean distance
  top_n(-3, distance) %>% # bottom 3 sorted by distance
  arrange(group, distance) # sort them
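As an aside, top_n() is superseded in recent dplyr; a minimal sketch of the same selection with slice_min() (assuming dplyr >= 1.0):

ind_scores %>%
  left_join(centroids, by = c("group", "CV"), suffix = c("", "_centroid")) %>%
  group_by(group, name) %>%
  summarise(distance = sqrt(sum((value - value_centroid)^2)), .groups = "drop") %>%
  group_by(group) %>%
  slice_min(distance, n = 3) %>% # the 3 smallest distances per group
  arrange(group, distance)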
Bas's version is prettier and easier to understand, but if you were thinking of translating it to Rcpp (personally I work a lot with clustering, and for big data R is a bit too slow), this might also help. Note that this version recomputes the centroids from the data as group means rather than using the supplied centroids dataframe.
groups <- ind_scores[, 1] %>% as.character() %>% as.numeric()
ind_scores[, 1] <- NULL

# Compute each group's centroid (column means) and append it as an extra row of ind_scores
centroids <- list()
j <- 1
for (i in unique(groups)) {
  centroids[[j]] <- ind_scores[groups == i, ] %>% apply(., 2, mean)
  names(centroids)[j] <- i
  ind_scores <- rbind(ind_scores, centroids[[j]])
  j <- j + 1
}
## the last j-1 rows of ind_scores are now the centroids
dist_mat <- dist(ind_scores %>% as.matrix()) %>% as.matrix() ## get the distance matrix
# > dist_mat[1:5,1:5]
# OP2413iiiaMOU OP2413iiibMOU OP2413iiicMOU OP2645ii_aPOR OP2645ii_bPOR
# OP2413iiiaMOU 0.000000 2.465150984 2.464681274 25.882974 25.253460
# OP2413iiibMOU 2.465151 0.000000000 0.001294793 25.008458 24.367816
# OP2413iiicMOU 2.464681 0.001294793 0.000000000 25.008508 24.367942
# OP2645ii_aPOR 25.882974 25.008458155 25.008508380 0.000000 1.956891
# OP2645ii_bPOR 25.253460 24.367815962 24.367941651 1.956891 0.000000
## do not touch j
thresh <- 3
new_data_frame <- data.frame(sample_name = NA, group = NA, centr_distance = NA)
for (i in nrow(dist_mat):(nrow(dist_mat) - j + 2)) {
  distances_to_cluster <- dist_mat[i, -i]
  indexes <- order(distances_to_cluster, decreasing = F)[1:thresh]
  ## collect the thresh minimum distances, e.g.
  ## [1] 2 3 31
  for (z in indexes) {
    ## get the sample name, its group, and its distance to the centroid
    tmp <- c(rownames(dist_mat)[z], groups[z], distances_to_cluster[z])
    new_data_frame <- rbind(new_data_frame, tmp)
  }
}
new_data_frame[order(new_data_frame$group),] %>% na.omit()
# 11 OP2645ii_cPOR 1 0.929329939818952
# 12 OP2645ii_ePOR 1 1.376251766813
# 13 OP2645ii_aPOR 1 1.4357069775206
# 2 OP5049___bWAT 2 1.25678563440431
# 3 OP5049___aWAT 2 1.77800330839339
# 4 OP5046___dWAT 2 1.85612687904496
# 8 OP5053DNAaPHA 3 0.812735500386649
# 9 OP5047___aPHA 3 0.972298470684858
# 10 OP5048___fPHA 3 1.16307022957174
# 14 OP2413iiibMOU 4 0.802020132014482
# 15 OP2413iiicMOU 4 0.802473693143821
# 16 OP5051DNAaCOM 4 0.919980313623531
# 5 OP5054DNAbMOU 5 0.451374395540337
# 6 OP5044___aMOU 5 0.717231370935914
# 7 OP5046___eWAT 5 0.775202821859753

How can I add more columns to a data frame with a for loop

I am a beginner in R. I need to transfer some EViews code to R. In EViews, there is loop code that adds 10 or more columns/variables computed with some function of the data.
Here is example EViews code to estimate deflators:
for %x exp con gov inv cap ex im
frml def_{%x} = gdp_{%x}/gdp_{%x}_r*100
next
I used the dplyr package with the mutate function, but this way it is very hard to add many variables.
library(dplyr)
nominal_gdp <- rnorm(4)
nominal_inv <- rnorm(4)
nominal_gov <- rnorm(4)
nominal_exp <- rnorm(4)
real_gdp <- rnorm(4)
real_inv <- rnorm(4)
real_gov <- rnorm(4)
real_exp <- rnorm(4)
df <- data.frame(nominal_gdp, nominal_inv, nominal_gov, nominal_exp,
                 real_gdp, real_inv, real_gov, real_exp)
df <- df %>% mutate(deflator_gdp = nominal_gdp / real_gdp * 100,
                    deflator_inv = nominal_inv / real_inv,
                    deflator_gov = nominal_gov / real_gov,
                    deflator_exp = nominal_exp / real_exp)
print(df)
Please help me do this in R with a loop.
The answer is that your data is not as "tidy" as it could be.
This is what you have (with an added observation ID for clarity):
library(dplyr)
df <- data.frame(nominal_gdp = rnorm(4),
                 nominal_inv = rnorm(4),
                 nominal_gov = rnorm(4),
                 real_gdp = rnorm(4),
                 real_inv = rnorm(4),
                 real_gov = rnorm(4))
df <- df %>%
  mutate(obs_id = 1:n()) %>%
  select(obs_id, everything())
which gives:
  obs_id nominal_gdp nominal_inv nominal_gov    real_gdp   real_inv  real_gov
1      1  -0.9692060  -1.5223055 -0.26966202  0.49057546  2.3253066 0.8761837
2      2   1.2696927   1.2591910  0.04238958 -1.51398652 -0.7209661 0.3021453
3      3   0.8415725  -0.1728212  0.98846942 -0.58743294 -0.7256786 0.5649908
4      4  -0.8235101   1.0500614 -0.49308092  0.04820723 -2.0697008 1.2478635
Consider if you had instead, in df2:
   obs_id variable        real     nominal
1       1      gdp  0.49057546 -0.96920602
2       2      gdp -1.51398652  1.26969267
3       3      gdp -0.58743294  0.84157254
4       4      gdp  0.04820723 -0.82351006
5       1      inv  2.32530662 -1.52230550
6       2      inv -0.72096614  1.25919100
7       3      inv -0.72567857 -0.17282123
8       4      inv -2.06970078  1.05006136
9       1      gov  0.87618366 -0.26966202
10      2      gov  0.30214534  0.04238958
11      3      gov  0.56499079  0.98846942
12      4      gov  1.24786355 -0.49308092
Then what you want to do is trivial:
df2 %>% mutate(deflator = real / nominal)
   obs_id variable        real     nominal    deflator
1       1      gdp  0.49057546 -0.96920602 -0.50616221
2       2      gdp -1.51398652  1.26969267 -1.19240392
3       3      gdp -0.58743294  0.84157254 -0.69801819
4       4      gdp  0.04820723 -0.82351006 -0.05853872
5       1      inv  2.32530662 -1.52230550 -1.52749012
6       2      inv -0.72096614  1.25919100 -0.57256297
7       3      inv -0.72567857 -0.17282123  4.19901294
8       4      inv -2.06970078  1.05006136 -1.97102841
9       1      gov  0.87618366 -0.26966202 -3.24919196
10      2      gov  0.30214534  0.04238958  7.12782060
11      3      gov  0.56499079  0.98846942  0.57158146
12      4      gov  1.24786355 -0.49308092 -2.53074800
So the question becomes: how do we get to the nice dplyr-compatible data.frame.
You need to gather your data using tidyr::gather. However, because you have 2 sets of variables to gather (the real and nominal values), it is not straightforward. I have done it in two steps, there may be a better way though.
real_vals <- df %>%
  select(obs_id, starts_with("real")) %>%
  # the line below is where the magic happens
  tidyr::gather(variable, real, starts_with("real")) %>%
  # extracting the variable name (by erasing up to the underscore)
  mutate(variable = gsub(variable, pattern = ".*_", replacement = ""))

# Same thing for nominal values
nominal_vals <- df %>%
  select(obs_id, starts_with("nominal")) %>%
  tidyr::gather(variable, nominal, starts_with("nominal")) %>%
  mutate(variable = gsub(variable, pattern = ".*_", replacement = ""))

# Merging them... Now we have something we can work with!
df2 <- full_join(real_vals, nominal_vals, by = c("obs_id", "variable"))
Note the importance of the observation id when merging.
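With current tidyr, the two gather steps and the join can be collapsed into a single pivot_longer() call via its ".value" sentinel. A sketch, assuming tidyr >= 1.0 (it yields the same df2 up to column order):

library(tidyr)

# Each name like "nominal_gdp" is split on "_": the first piece becomes
# an output column (".value"), the second the variable label.
df2 <- df %>%
  pivot_longer(cols = -obs_id,
               names_to = c(".value", "variable"),
               names_sep = "_")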
We can grep the matching names, and sort:
x <- colnames(df)
df[ sort(x[ (grepl("^nominal", x)) ]) ] /
df[ sort(x[ (grepl("^real", x)) ]) ] * 100
Similarly, if the columns were sorted, then we could just:
df[ 1:4 ] / df[ 5:8 ] * 100
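And since the question explicitly asks for a loop, a direct for-loop translation of the EViews snippet might look like this (a sketch; the variable list mirrors the question's example data):

vars <- c("gdp", "inv", "gov", "exp")
for (x in vars) {
  df[[paste0("deflator_", x)]] <-
    df[[paste0("nominal_", x)]] / df[[paste0("real_", x)]] * 100
}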
We can loop over column names using purrr::map_dfc, then apply a custom function over the selected columns (i.e. the columns that match the current name from nms).
library(dplyr)
library(purrr)
#Replace anything before _ with an empty string
nms <- unique(sub('.*_','',names(df)))
#Use map if you need the output as a list, not a dataframe
map_dfc(nms, ~deflator_fun(df, .x))
Custom function
deflator_fun <- function(df, x){
  #browser()
  nx <- paste0('nominal_', x)
  rx <- paste0('real_', x)
  select(df, matches(x)) %>%
    mutate(!!paste0('deflator_', quo_name(x)) := !!ensym(nx) / !!ensym(rx) * 100)
}
#Test
deflator_fun(df, 'gdp')
  nominal_gdp     real_gdp deflator_gdp
1  -0.3332074  0.181303480   -183.78433
2  -1.0185754 -0.138891362    733.36121
3  -1.0717912  0.005764186 -18593.97398
4   0.3035286  0.385280401     78.78123
Note: learn more about quo_name, !!, and ensym, which are tools for programming with dplyr, here.

Count based on multiple conditions from other data.frame

I am migrating analysis from Excel to R, and would like some input on how best to perform something similar to Excel's COUNTIFS in R.
I have two data.frames, statedf and memberdf.
statedf = data.frame(state = c('MD','MD','MD','NY','NY','NY'), week = 5:7)
memberdf = data.frame(memID = 1:15, state = c('MD','MD','NY','NY','MD'),
                      finalweek = c(3,3,5,3,3,5,3,5,3,5,6,5,2,3,5),
                      orders = c(1,2,3))
This data is for a subscription-based business. I would like to know the number of members who newly lapsed for each week/state combo in statedf, where newly lapsed is defined by statedf$week - 1 = memberdf$finalweek. Further, I would like separate counts for each orders value (1, 2, 3).
The desired output would look like
out <- data.frame(state = c('MD','MD','MD','NY','NY','NY'), week = 5:7,
                  oneorder = c(0,1,0,0,0,0),
                  twoorder = c(0,0,1,0,1,0),
                  threeorder = c(0,3,0,0,1,0))
I asked (and got a great response for) a simpler version of this question yesterday; the answers revolved around creating a new data.frame based on memberdf. However, I need to append the data to statedf, because statedf has state/week combos that don't exist in memberdf, and vice versa. If this were in Excel, I'd use COUNTIFS, but I am struggling for a solution in R.
Thanks.
Here is a solution with the dplyr and tidyr packages:
library(tidyr) ; library(dplyr)
counts <- memberdf %>%
  mutate(lapsedweek = finalweek + 1) %>%
  group_by(state, lapsedweek, orders) %>%
  tally()
counts <- counts %>% spread(orders, n, fill = 0)

out <- left_join(statedf, counts, by = c("state", "week" = "lapsedweek"))
out[is.na(out)] <- 0 # convert rows with all NAs to 0s
names(out)[3:5] <- paste0("order", names(out)[3:5]) # rename columns
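For what it's worth, spread() is superseded in current tidyr; the counting and reshaping can be sketched with count() and pivot_wider() instead (assuming tidyr >= 1.1 for the scalar values_fill):

counts <- memberdf %>%
  mutate(lapsedweek = finalweek + 1) %>%
  count(state, lapsedweek, orders) %>%
  tidyr::pivot_wider(names_from = orders, values_from = n,
                     values_fill = 0, names_prefix = "order")

# The left_join onto statedf and the NA handling stay the same as above.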
We could create a new variable ('week1') in the 'statedf' dataset, merge 'memberdf' with 'statedf', and then reshape from 'long' to 'wide' format with dcast. I changed the 'orders' column to match the column names in 'out'.
statedf$week1 <- statedf$week - 1
df1 <- merge(memberdf[-1], statedf, by.x = c('state', 'finalweek'),
             by.y = c('state', 'week1'), all.y = TRUE)
lvls <- paste0(c('one', 'two', 'three'), 'order')
df1$orders <- factor(lvls[df1$orders], levels = lvls)
library(reshape2)
out1 <- dcast(df1, state + week ~ orders, value.var = 'orders', length)[-6]
out1
out1
#  state week oneorder twoorder threeorder
#1    MD    5        0        0          0
#2    MD    6        1        0          3
#3    MD    7        0        1          0
#4    NY    5        0        0          0
#5    NY    6        0        1          1
#6    NY    7        0        0          0
all.equal(out, out1)
#[1] TRUE
