reshape untidy data frame, spreading rows to column names [duplicate] - r

This question already has answers here:
Transpose a data frame
(6 answers)
Closed 2 years ago.
I have searched the threads but can't find a solution that solves the problem with the data frame that I have.
My current data frame (df):
# A tibble: 8 x 29
`Athlete` Monday...2 Tuesday...3 Wednesday...4 Thursday...5 Friday...6 Saturday...7 Sunday...8
<chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr>
1 Date 29/06/2020 30/06/2020 43837.0 43868.0 43897.0 43928.0 43958.0
2 HR 47.0 54.0 51.0 56.0 59.0 NA NA
3 HRV 171.0 91.0 127.0 99.0 77.0 NA NA
4 Sleep Duration 9.11 7.12 8.59 7.15 8.32 NA NA
5 Sleep Efficien~ 92.0 94.0 89.0 90.0 90.0 NA NA
6 Recovery Score 98.0 66.0 96.0 72.0 46.0 NA NA
7 Life Stress NO NO NO NO NO NA NA
8 Sick NO NO NO NO NO NA NA
I have tried to use spread and pivot_wider, but I know additional functions would be required to get the desired output, which is beyond my level of understanding in R.
Do I need to u
Desired output:
Date HR HRV Sleep Duration Sleep Efficiency Recovery Score Life Stress Sick
29/06/2020 47.0 171.0 9.11
30/06/2020 54.0 91.0 7.12
43837.0 51.0 127.0 8.59
43868.0 56.0 99.0 7.15
43897.0 59.0 77.0 8.32
43928.0 NA NA NA
43958.0 NA NA NA
etc.
Thank you

In base R you can do:
type.convert(setNames(data.frame(t(df[-1]), row.names = NULL), df[,1]))
Date HR HRV Sleep Duration Sleep Efficien~ Recovery Score Life Stress Sick
1 29/06/2020 47 171 9.11 92 98 NO NO
2 30/06/2020 54 91 7.12 94 66 NO NO
3 43837.0 51 127 8.59 89 96 NO NO
4 43868.0 56 99 7.15 90 72 NO NO
5 43897.0 59 77 8.32 90 46 NO NO
6 43928 NA NA NA NA NA <NA> <NA>
7 43958 NA NA NA NA NA <NA> <NA>
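For completeness, a tidyverse sketch of the same transpose (an alternative, assuming the tibble is named df and its first column is `Athlete`, as printed above):
library(dplyr)
library(tidyr)
df %>%
  pivot_longer(-Athlete, names_to = "day", values_to = "value") %>%
  pivot_wider(names_from = Athlete, values_from = value) %>%
  select(-day) %>%                 # drop the helper column holding the old weekday names
  type.convert(as.is = TRUE)       # re-infer numeric/character column types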

Related

Time series forecasting by lm() using lapply

I was trying to forecast a time series problem using lm() and my data looks like below
Customer_key date sales
A35 2018-05-13 31
A35 2018-05-20 20
A35 2018-05-27 43
A35 2018-06-03 31
BH22 2018-05-13 60
BH22 2018-05-20 67
BH22 2018-05-27 78
BH22 2018-06-03 55
Converted my df to a list format by
df <- dcast(df, date ~ customer_key,value.var = c("sales"))
df <- subset(df, select = -c(dt))
demandWithKey <- as.list(df)
Trying to write a function such that applying this function across all customers
my_fun <- function(x) {
  fit <- lm(ds_load ~ date, data = df)          ## After changing to a list, the ds_load and date
                                                ## column names are no longer available for the formula
  fit_b <- forecast(fit$fitted.values, h = 20)  ## forecast using lm()
  return(data.frame(c(fit$fitted.values, fit_b[["mean"]])))
}
fcast <- lapply(df, my_fun)
I know the above function doesn't work, but basically I'm looking for getting both the fitted values and forecasted values for a grouped data.
I've also tried other methods using tslm() (converting to time series data) and so on, but with no luck; I can get lm() to work on just one customer, though. Also, many questions/posts cover only fitting the model, whereas I would like to forecast at the same time as well.
lm() is for a regression model,
but here you have a time series, so to forecast the series you have to use one of the time series models (ARMA, ARCH, GARCH, ...).
You can use the auto.arima() function from the "forecast" package in R.
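A minimal sketch of that suggestion for a single customer (the sales numbers are taken from the question's customer A35; everything else is illustrative):
library(forecast)
sales <- ts(c(31, 20, 43, 31), frequency = 52)  # one customer's weekly sales
fit <- auto.arima(sales)                        # automatically selects an ARIMA model
forecast(fit, h = 20)                           # forecast 20 weeks ahead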
I don't know what you're up to exactly, but you could make this less complicated.
Using by avoids the need to reshape your data, it splits your data e.g. by customer ID as in your case and applies a function on the subsets (i.e. it's a combination of split and lapply; see ?by).
Since you want to compare fitted and forecasted values somehow in your result, you probably need predict rather than $fitted.values, otherwise the values won't be of the same length. Because your independent variable is a date in weekly intervals, you may use seq.Date and take the first date as a starting value; the sequence's length is the number of actual values (nrow per customer) plus the h= argument of the forecast.
For demonstration purposes I add the fitted values as first column in the following.
library(forecast)  ## for forecast()
res <- by(dat, dat$cus_key, function(x) {
  H <- 20                                   ## forecast horizon 'h'
  fit <- lm(sales ~ date, x)
  fitted <- fit$fitted.values
  pred <- predict(fit, newdata = data.frame(
    date = seq(x$date[1], length.out = nrow(x) + H, by = "week")))
  fcst <- c(fitted, forecast(fitted, h = H)$mean)
  fit.na <- `length<-`(unname(fitted), length(pred)) ## pad fitted with NA, for demonstration
  return(cbind(fit.na, pred, fcst))
})
Result
res
# dat$cus_key: A28
# fit.na pred fcst
# 1 41.4 41.4 41.4
# 2 47.4 47.4 47.4
# 3 53.4 53.4 53.4
# 4 59.4 59.4 59.4
# 5 65.4 65.4 65.4
# 6 NA 71.4 71.4
# 7 NA 77.4 77.4
# 8 NA 83.4 83.4
# 9 NA 89.4 89.4
# 10 NA 95.4 95.4
# 11 NA 101.4 101.4
# 12 NA 107.4 107.4
# 13 NA 113.4 113.4
# 14 NA 119.4 119.4
# 15 NA 125.4 125.4
# 16 NA 131.4 131.4
# 17 NA 137.4 137.4
# 18 NA 143.4 143.4
# 19 NA 149.4 149.4
# 20 NA 155.4 155.4
# 21 NA 161.4 161.4
# 22 NA 167.4 167.4
# 23 NA 173.4 173.4
# 24 NA 179.4 179.4
# 25 NA 185.4 185.4
# ----------------------------------------------------------------
# dat$cus_key: B16
# fit.na pred fcst
# 1 49.0 49.0 49.0
# 2 47.7 47.7 47.7
# 3 46.4 46.4 46.4
# 4 45.1 45.1 45.1
# 5 43.8 43.8 43.8
# 6 NA 42.5 42.5
# 7 NA 41.2 41.2
# 8 NA 39.9 39.9
# 9 NA 38.6 38.6
# 10 NA 37.3 37.3
# 11 NA 36.0 36.0
# 12 NA 34.7 34.7
# 13 NA 33.4 33.4
# 14 NA 32.1 32.1
# 15 NA 30.8 30.8
# 16 NA 29.5 29.5
# 17 NA 28.2 28.2
# 18 NA 26.9 26.9
# 19 NA 25.6 25.6
# 20 NA 24.3 24.3
# 21 NA 23.0 23.0
# 22 NA 21.7 21.7
# 23 NA 20.4 20.4
# 24 NA 19.1 19.1
# 25 NA 17.8 17.8
# ----------------------------------------------------------------
# dat$cus_key: C12
# fit.na pred fcst
# 1 56.4 56.4 56.4
# 2 53.2 53.2 53.2
# 3 50.0 50.0 50.0
# 4 46.8 46.8 46.8
# 5 43.6 43.6 43.6
# 6 NA 40.4 40.4
# 7 NA 37.2 37.2
# 8 NA 34.0 34.0
# 9 NA 30.8 30.8
# 10 NA 27.6 27.6
# 11 NA 24.4 24.4
# 12 NA 21.2 21.2
# 13 NA 18.0 18.0
# 14 NA 14.8 14.8
# 15 NA 11.6 11.6
# 16 NA 8.4 8.4
# 17 NA 5.2 5.2
# 18 NA 2.0 2.0
# 19 NA -1.2 -1.2
# 20 NA -4.4 -4.4
# 21 NA -7.6 -7.6
# 22 NA -10.8 -10.8
# 23 NA -14.0 -14.0
# 24 NA -17.2 -17.2
# 25 NA -20.4 -20.4
As you can see, prediction and forecast yield the same values, since both methods are based on the same single explanatory variable date in this case.
Toy data:
set.seed(42)
dat <- transform(expand.grid(cus_key = paste0(LETTERS[1:3], sample(12:43, 3)),
                             date = seq.Date(as.Date("2018-05-13"), length.out = 5, by = "week")),
                 sales = sample(20:80, 15, replace = TRUE))
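If a single data frame is preferred over the list-like by object, the pieces can be bound together; a small sketch (assuming res from the by() call above):
res_df <- do.call(rbind, lapply(names(res), function(k)
  data.frame(cus_key = k, res[[k]])))
head(res_df)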

R: Why is merge dropping data? How to interpolate missing values for a merge

I am trying to merge two relatively large datasets. I am merging by SiteID - which is a unique indicator of location, and date/time, which are comprised of Year, Month=Mo, Day, and Hour=Hr.
The problem is that the merge is dropping data somewhere. Minimum, Maximum, Mean, and Median values all change, when they should be the same data, simply merged. I have made the data into characters and checked that the character strings match, yet I still lose data. I have tried left_join as well, but that doesn't seem to help. See below for more details.
EDIT: Merge is dropping data because data do not exist for every ("SiteID", "Year","Mo","Day", "Hr"). So, I needed to interpolate missing values from dB before I could merge (see answer below).
END EDIT
See the link at the bottom of the page to reproduce this example.
PC17$Mo<-as.character(PC17$Mo)
PC17$Year<-as.character(PC17$Year)
PC17$Day<-as.character(PC17$Day)
PC17$Hr<-as.character(PC17$Hr)
PC17$SiteID<-as.character(PC17$SiteID)
dB$Mo<-as.character(dB$Mo)
dB$Year<-as.character(dB$Year)
dB$Day<-as.character(dB$Day)
dB$Hr<-as.character(dB$Hr)
dB$SiteID<-as.character(dB$SiteID)
# confirm that data are stored as characters
str(PC17)
str(dB)
Now to compare my SiteID values, I use unique to see what character strings I have, and setdiff to see if R recognizes any as missing. One siteID is missing from each, but this is okay, because it is truly missing in the data (not a character string issue).
sort(unique(PC17$SiteID))
sort(unique(dB$SiteID))
setdiff(PC17$SiteID, dB$SiteID) ## TR2U is the only one missing, this is ok
setdiff(dB$SiteID, PC17$SiteID) ## FI7D is the only one missing, this is ok
Now when I look at the data (summarize by SiteID), it looks like a nice, full dataframe - meaning I have data for every site that I should have.
library(dplyr)
dB %>%
group_by(SiteID) %>%
summarise(
min_dBL50=min(dbAL050, na.rm=TRUE),
max_dBL50=max(dbAL050, na.rm=TRUE),
mean_dBL50=mean(dbAL050, na.rm=TRUE),
med_dBL50=median(dbAL050, na.rm=TRUE)
)
# A tibble: 59 x 5
SiteID min_dBL50 max_dBL50 mean_dBL50 med_dBL50
<chr> <dbl> <dbl> <dbl> <dbl>
1 CU1D 35.3 57.3 47.0 47.6
2 CU1M 33.7 66.8 58.6 60.8
3 CU1U 31.4 55.9 43.1 43.3
4 CU2D 40 58.3 45.3 45.2
5 CU2M 32.4 55.8 41.6 41.3
6 CU2U 31.4 58.1 43.9 42.6
7 CU3D 40.6 59.5 48.4 48.5
8 CU3M 35.8 75.5 65.9 69.3
9 CU3U 40.9 59.2 46.6 46.2
10 CU4D 36.6 49.1 43.6 43.4
# ... with 49 more rows
Here, I merge the two data sets PC17 and dB by "SiteID", "Year","Mo","Day", "Hr" - keeping all PC17 values (even if they don't have dB values to go with it; all.x=TRUE).
However, when I look at the summary of this data, now all of the SiteID have different values, and some sites are missing completely such as "CU3D" and "CU4D".
PCdB<-(merge(PC17, dB, by=c("SiteID", "Year","Mo","Day", "Hr"), all.x=TRUE))
PCdB %>%
group_by(SiteID) %>%
summarise(
min_dBL50=min(dbAL050, na.rm=TRUE),
max_dBL50=max(dbAL050, na.rm=TRUE),
mean_dBL50=mean(dbAL050, na.rm=TRUE),
med_dBL50=median(dbAL050, na.rm=TRUE)
)
# A tibble: 59 x 5
SiteID min_dBL50 max_dBL50 mean_dBL50 med_dBL50
<chr> <dbl> <dbl> <dbl> <dbl>
1 CU1D 47.2 54 52.3 54
2 CU1M 35.4 63 49.2 49.2
3 CU1U 35.3 35.3 35.3 35.3
4 CU2D 42.3 42.3 42.3 42.3
5 CU2M 43.1 43.2 43.1 43.1
6 CU2U 43.7 43.7 43.7 43.7
7 CU3D Inf -Inf NaN NA
8 CU3M 44.1 71.2 57.6 57.6
9 CU3U 45 45 45 45
10 CU4D Inf -Inf NaN NA
# ... with 49 more rows
I set everything to characters with as.character() in the first lines. Additionally, I have checked Year, Day, Mo, and Hr with setdiff and unique just as I did above with SiteID, and there don't appear to be any issues with those character strings not matching.
I have also tried dplyr function left_join to merge the datasets, and it hasn't made a difference.
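For reference, a quick way to see exactly which key combinations fail to match is an anti join (a diagnostic sketch, using the same key columns as the merge):
library(dplyr)
missing_keys <- anti_join(PC17, dB, by = c("SiteID", "Year", "Mo", "Day", "Hr"))
nrow(missing_keys)  # rows of PC17 that will get NA dB values after a left join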
Probably solved when using na.rm = TRUE in your summarising functions...
a data.table approach:
library( data.table )
dt.PC17 <- fread( "./PC_SO.csv" )
dt.dB <- fread( "./dB.csv" )
#data.table left join on "SiteID", "Year","Mo","Day", "Hr", and the summarise...
dt.PCdB <- dt.dB[ dt.PC17, on = .( SiteID, Year, Mo, Day, Hr ) ]
#summarise, and order by SiteID
result <- setorder( dt.PCdB[, list( min_dBL50  = min( dbAL050, na.rm = TRUE ),
                                    max_dBL50  = max( dbAL050, na.rm = TRUE ),
                                    mean_dBL50 = mean( dbAL050, na.rm = TRUE ),
                                    med_dBL50  = median( dbAL050, na.rm = TRUE ) ),
                            by = "SiteID" ],
                    SiteID )
head( result, 10 )
# SiteID min_dBL50 max_dBL50 mean_dBL50 med_dBL50
# 1: CU1D 47.2 54.0 52.300 54.00
# 2: CU1M 35.4 63.0 49.200 49.20
# 3: CU1U 35.3 35.3 35.300 35.30
# 4: CU2D 42.3 42.3 42.300 42.30
# 5: CU2M 43.1 43.2 43.125 43.10
# 6: CU2U 43.7 43.7 43.700 43.70
# 7: CU3D Inf -Inf NaN NA
# 8: CU3M 44.1 71.2 57.650 57.65
# 9: CU3U 45.0 45.0 45.000 45.00
# 10: CU4D Inf -Inf NaN NA
If you would like to perform a left join, but exclude hits that cannot be found (so you do not get rows like the one above on "CU3D") use:
dt.PCdB <- dt.dB[ dt.PC17, on = .( SiteID, Year, Mo, Day, Hr ), nomatch = 0L ]
this will result in:
# SiteID min_dBL50 max_dBL50 mean_dBL50 med_dBL50
# 1: CU1D 47.2 54.0 52.300 54.00
# 2: CU1M 35.4 63.0 49.200 49.20
# 3: CU1U 35.3 35.3 35.300 35.30
# 4: CU2D 42.3 42.3 42.300 42.30
# 5: CU2M 43.1 43.2 43.125 43.10
# 6: CU2U 43.7 43.7 43.700 43.70
# 7: CU3M 44.1 71.2 57.650 57.65
# 8: CU3U 45.0 45.0 45.000 45.00
# 9: CU4M 52.4 55.9 54.150 54.15
# 10: CU4U 51.3 51.3 51.300 51.30
In the end, I answered this question with a better understanding of the data. The merge function itself was not dropping any values; it was only doing exactly what it was told to do. However, since the datasets were merged by SiteID, Year, Mo, Day, Hr, the result was Inf, NaN, and NA values for a few SiteID.
The reason for this is that dB is not a fully continuous dataset to merge with. Thus, Inf, NaN, and NA values for some SiteID were returned because data did not overlap in all variables (SiteID, Year, Mo, Day, Hr).
So I solved this problem with interpolation. That is, I filled the missing values in based on values from dates on either side of the missing values. The package imputeTS was valuable here.
So I first interpolated the missing values in between the dates with data, and then I re-merged the datasets.
library(imputeTS)
library(tidyverse)
### We want to first interpolate dB values on the siteID first in dB dataset, BEFORE merging.
### Why? Because the merge drops all the data that would help with the interpolation!!
dB<-read.csv("dB.csv")
dB_clean <- dB %>%
mutate_if(is.integer, as.character)
# Create a wide table with spots for each minute. Missing will
# show up as NA's
# All the NA's here in the columns represent
# missing jDays that we should add. jDay is an integer date 'julian date'
dB_NA_find <- dB_clean %>%
count(SiteID, jDay) %>%
spread(jDay, n)
dB_NA_find
# A tibble: 59 x 88
# SiteID `13633` `13634` `13635` `13636` `13637` `13638` `13639` `13640` `13641`
# <fct> <int> <int> <int> <int> <int> <int> <int> <int> <int>
# 1 CU1D NA NA NA NA NA NA NA NA
# 2 CU1M NA 11 24 24 24 24 24 24
# 3 CU1U NA 11 24 24 24 24 24 24
# 4 CU2D NA NA NA NA NA NA NA NA
# 5 CU2M NA 9 24 24 24 24 24 24
# 6 CU2U NA 9 24 24 24 24 21 NA
# 7 CU3D NA NA NA NA NA NA NA NA
# 8 CU3M NA NA NA NA NA NA NA NA
# 9 CU3U NA NA NA NA NA NA NA NA
# 10 CU4D NA NA NA NA NA NA NA NA
# Take the NA minute entries and make the desired line for each
dB_rows_to_add <- dB_NA_find %>%
gather(jDay, count, 2:88) %>%
filter(is.na(count)) %>%
select(-count, -NA)
# Add these lines to the original, remove the NA jDay rows
# (these have been replaced with jDay rows), and sort
dB <- dB_clean %>%
bind_rows(dB_rows_to_add) %>%
filter(jDay != "NA") %>%
arrange(SiteID, jDay)
length((dB$DailyL50.x[is.na(dB$DailyL50.x)])) ## How many NAs do I have?
# [1] 3030
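As an aside, tidyr::complete() can add the same set of missing SiteID/jDay rows in one step (a sketch, not part of the original workflow; it assumes dB_clean as created above):
library(tidyr)
library(dplyr)
dB_completed <- dB_clean %>%
  complete(SiteID, jDay) %>%   # one new all-NA row per missing SiteID/jDay combination
  arrange(SiteID, jDay)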
## Here is where we do the na.interpolation with package imputeTS
# prime the for loop with zeros
D<-rep("0",17)
sites<-unique(dB$SiteID)
for(i in 1:length(sites)){
temp<-dB[dB$SiteID==sites[i], ]
temp<-temp[order(temp$jDay),]
temp$DayL50<-na.interpolation(temp$DailyL50.x, option="spline")
D<-rbind(D, temp)
}
# delete the first row of zeros from above 'priming'
dBN<-D[-1,]
length((dBN$DayL50[is.na(dBN$DayL50)])) ## How many NAs do I have?
# [1] 0
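The same per-site interpolation can also be sketched without the priming loop, using dplyr (an alternative, assuming the dB built above; newer imputeTS versions name the function na_interpolation rather than na.interpolation):
library(dplyr)
library(imputeTS)
dBN2 <- dB %>%
  group_by(SiteID) %>%
  arrange(jDay, .by_group = TRUE) %>%
  mutate(DayL50 = na_interpolation(DailyL50.x, option = "spline")) %>%
  ungroup()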
Because I did the above interpolation of NAs based on jDay, I am missing the Month (Mo), Day, and Year information for those rows.
dBN$Year<-"2017" #all data are from 2017
##I could not figure out how jDay was formatted, so I created a manual 'key'
##to get Mo and Day by counting from a known date/jDay pair in original data
#Example:
# 13635 is Mo=5 Day=1
# 13665 is Mo=5 Day=31
# 13666 is Mo=6 Day=1
# 13695 is Mo=6 Day=30
key4<-data.frame("jDay"=c(13633:13634), "Day"=c(29:30), "Mo"=4)
key5<-data.frame("jDay"=c(13635:13665), "Day"=c(1:31), "Mo"=5)
key6<-data.frame("jDay"=c(13666:13695), "Day"=c(1:30), "Mo"=6)
key7<-data.frame("jDay"=c(13696:13719), "Day"=c(1:24), "Mo"=7)
#make master 'key'
key<-rbind(key4,key5,key6,key7)
# Merge 'key' with dataset so all rows now have 'Mo' and 'Day' values
dBM<-merge(dBN, key, by="jDay", all.x=TRUE)
#clean unnecessary columns and rename 'Mo' and 'Day' so they match the PC17 dataset
dBM<-dBM[ , -c(2,3,6:16)]
colnames(dBM)[5:6]<-c("Day","Mo")
#I noticed an issue with duplication - merge with PC17 created a massive dataframe
dBM %>% ### Have too many observations per day, will duplicate merge out of control.
count(SiteID, jDay, DayL50) %>%
summarise(
min=min(n, na.rm=TRUE),
mean=mean(n, na.rm=TRUE),
max=max(n, na.rm=TRUE)
)
## to fix this I only kept distinct observations so that each day has 1 observation
dB<-distinct(dBM, .keep_all = TRUE)
### Now run above line again to check how many observations per day are left. Should be 1
Now when you do the merge with dB and PC17, the interpolated values (that were missing NAs before) should be included. It will look something like this:
> PCdB<-(merge(PC17, dB, by=c("SiteID", "Year","Mo","Day"), all.x=TRUE, all=FALSE,no.dups=TRUE))
> ### all.x=TRUE is important. This keeps all PC17 data, even stuff that DOESNT have dB data that corresponds to it.
> library(dplyr)
#Here is the NA interpolated 'dB' dataset
> dB %>%
+ group_by(SiteID) %>%
+ dplyr::summarise(
+ min_dBL50=min(DayL50, na.rm=TRUE),
+ max_dBL50=max(DayL50, na.rm=TRUE),
+ mean_dBL50=mean(DayL50, na.rm=TRUE),
+ med_dBL50=median(DayL50, na.rm=TRUE)
+ )
# A tibble: 59 x 5
SiteID min_dBL50 max_dBL50 mean_dBL50 med_dBL50
<chr> <dbl> <dbl> <dbl> <dbl>
1 CU1D 44.7 53.1 49.4 50.2
2 CU1M 37.6 65.2 59.5 62.6
3 CU1U 35.5 51 43.7 44.8
4 CU2D 42 52 47.8 49.3
5 CU2M 38.2 49 43.1 42.9
6 CU2U 34.1 53.7 46.5 47
7 CU3D 46.1 53.3 49.7 49.4
8 CU3M 44.5 73.5 61.9 68.2
9 CU3U 42 52.6 47.0 46.8
10 CU4D 42 45.3 44.0 44.6
# ... with 49 more rows
# Now here is the PCdB merged dataset, and we are no longer missing values!
> PCdB %>%
+ group_by(SiteID) %>%
+ dplyr::summarise(
+ min_dBL50=min(DayL50, na.rm=TRUE),
+ max_dBL50=max(DayL50, na.rm=TRUE),
+ mean_dBL50=mean(DayL50, na.rm=TRUE),
+ med_dBL50=median(DayL50, na.rm=TRUE)
+ )
# A tibble: 60 x 5
SiteID min_dBL50 max_dBL50 mean_dBL50 med_dBL50
<chr> <dbl> <dbl> <dbl> <dbl>
1 CU1D 44.8 50 46.8 47
2 CU1M 59 63.9 62.3 62.9
3 CU1U 37.9 46 43.6 44.4
4 CU2D 42.1 51.6 45.6 44.3
5 CU2M 38.4 48.3 44.2 45.5
6 CU2U 39.8 50.7 45.7 46.4
7 CU3D 46.5 49.5 47.7 47.7
8 CU3M 67.7 71.2 69.5 69.4
9 CU3U 43.3 52.6 48.1 48.2
10 CU4D 43.2 45.3 44.4 44.9
# ... with 50 more rows

Elegant way to sum values by time intervals (whilst accounting for missing values)

I'm trying to take something like this
df <- data.frame(times = c("0915", "0930", "0945", "1000", "1015", "1030", "1045", "1100", "1130", "1145", "1200"),
                 values = c(1, 2, 3, 4, 1, 2, 3, 4, 1, 3, 4))
> df
times values
1 0915 1
2 0930 2
3 0945 3
4 1000 4
5 1015 1
6 1030 2
7 1045 3
8 1100 4
9 1130 1
10 1145 3
11 1200 4
12 1215 1
13 1245 3
14 1300 4
15 1330 2
16 1345 4
And turn it into something like this
> df2
times values
1 0930 3
2 1000 7
3 1030 3
4 1100 7
5 1130 NA
6 1200 7
7 1230 NA
8 1300 7
9 1330 NA
10 1400 NA
Essentially, take values measured in 15 minute intervals, and convert them into values measured across 30 minute intervals (summing is sufficient for this).
I can think of an okay solution if I can be certain I have two 15 minute readings for each half hourly reading. I could just add elements pairwise and get what I want. But I can't be certain of that in my data set. As my demo also shows, there could be multiple consecutive values missing.
So I thought some kind of number recognition was necessary, e.g. recognises the time is between 9:15 and 9:30, and just sums those two. So I have a function already called hr2dec which I created to convert these times to decimal so it looks like this
> hr2dec(df$times)
[1] 9.25 9.50 9.75 10.00 10.25 10.50 10.75 11.00 11.50 11.75 12.00
I mention this in case it's easier to solve this problem with decimal instead of 4 digit time.
I also have this data for 24 hours, and multiple days. So if I have a solution that loops, it would need to reset to 0015 after 2400, as these are the first and last measurements for each day. A full set of data with dates included could be generated like so (with decimals for times, like I said, either is fine for me):
set.seed(42)
full_df <- data.frame(date = rep(as.Date(c("2010-02-02", "2010-02-03")), each = 96),
                      dec_times = seq(0.25, 24, 0.25),
                      values = rnorm(96))
full_df <- full_df[-c(2,13,15,19,95,131,192),]
The best solution I can come up with so far is a pairwise comparative loop. But even this is not perfect.
Is there some elegant way to do what I'm after? I.e. check the first and last values (in terms of date and time), and sum each half hourly interval? I'm not satisfied with my loop that...
Checks first and last date-time value to work out the range of half hours
Checks items in order, pair at a time to decide whether or not I have two values that belong to that half hourly period.
Sums if I do, places NA if I do not.
You should check out the tibbletime package -- specifically, you'll want to look at collapse_by() which collapses a tbl_time object by a time period.
library(tibbletime)
library(dplyr)
# create a series of 7 days
# 2018-01-01 to 2018-01-07 by 15 minute intervals
df <- create_series('2018-01-01' ~ '2018-01-07', period = "15 minute")
df$values <- rnorm(nrow(df))
df
#> # A time tibble: 672 x 2
#> # Index: date
#> date values
#> <dttm> <dbl>
#> 1 2018-01-01 00:00:00 -0.365
#> 2 2018-01-01 00:15:00 -0.275
#> 3 2018-01-01 00:30:00 -1.50
#> 4 2018-01-01 00:45:00 -1.64
#> 5 2018-01-01 01:00:00 -0.341
#> 6 2018-01-01 01:15:00 -1.05
#> 7 2018-01-01 01:30:00 -0.544
#> 8 2018-01-01 01:45:00 -1.10
#> 9 2018-01-01 02:00:00 0.0824
#> 10 2018-01-01 02:15:00 0.477
#> # ... with 662 more rows
# Collapse into 30 minute intervals, group, and sum
df %>%
collapse_by("30 minute") %>%
group_by(date) %>%
summarise(sum_values = sum(values))
#> # A time tibble: 336 x 2
#> # Index: date
#> date sum_values
#> <dttm> <dbl>
#> 1 2018-01-01 00:15:00 -0.640
#> 2 2018-01-01 00:45:00 -3.14
#> 3 2018-01-01 01:15:00 -1.39
#> 4 2018-01-01 01:45:00 -1.64
#> 5 2018-01-01 02:15:00 0.559
#> 6 2018-01-01 02:45:00 0.581
#> 7 2018-01-01 03:15:00 -1.50
#> 8 2018-01-01 03:45:00 1.36
#> 9 2018-01-01 04:15:00 0.872
#> 10 2018-01-01 04:45:00 -0.835
#> # ... with 326 more rows
# Alternatively, you can use clean = TRUE
df %>%
collapse_by("30 minute", clean = TRUE) %>%
group_by(date) %>%
summarise(sum_values = sum(values))
#> # A time tibble: 336 x 2
#> # Index: date
#> date sum_values
#> <dttm> <dbl>
#> 1 2018-01-01 00:30:00 -0.640
#> 2 2018-01-01 01:00:00 -3.14
#> 3 2018-01-01 01:30:00 -1.39
#> 4 2018-01-01 02:00:00 -1.64
#> 5 2018-01-01 02:30:00 0.559
#> 6 2018-01-01 03:00:00 0.581
#> 7 2018-01-01 03:30:00 -1.50
#> 8 2018-01-01 04:00:00 1.36
#> 9 2018-01-01 04:30:00 0.872
#> 10 2018-01-01 05:00:00 -0.835
#> # ... with 326 more rows
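A similar 30-minute roll-up can also be sketched without tibbletime, using lubridate's ceiling_date() with dplyr (an alternative, assuming a plain data frame with a POSIXct date column and numeric values as above):
library(dplyr)
library(lubridate)
df %>%
  group_by(date = ceiling_date(date, "30 minutes")) %>%
  summarise(sum_values = sum(values))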
If you're more into videos (< 20 minutes), check out the The Future of Time Series and Financial Analysis in the Tidyverse by David Vaughan.
I'm the OP. After a bit of playing I got something which I think is a more elegant solution than the loop I originally had. Decided to post it as an answer for discussion. I still wouldn't mind something more elegant.
Using full_df I create an index, which is just all the 15-minute periods I would expect given the days I've been supplied.
index <- data.frame(date = rep(seq(full_df$date[1], full_df$date[nrow(full_df)], by = "+1 day"), each = 96),
                    dec_times = rep(seq(0.25, 24, 0.25), length(unique(full_df$date))))
Then I merge this with full_df by the two matching columns, keeping all index rows so that combinations which aren't in full_df come through as NA (i.e. my missing values).
index <- merge(full_df, index, by.y=c("date", "dec_times"), all.y=T)
Then I go ahead and create a column which lists what half hour each 15-minute interval belongs to, using plyr's round_any function
index$half_hour <- plyr::round_any(index$dec_times, 0.5, ceiling)
Then I use plyr's ddply function to sum based on the new half_hour column (taking advantage of the fact that anything + an NA is an NA).
df2 <- plyr::ddply(index[,c("half_hour","values")], "half_hour", sum)
I believe the resulting data frame is exactly what I was after.
> df2
date half_hour values
1 2010-02-02 0.5 NA
2 2010-02-02 1.0 0.99599102
3 2010-02-02 1.5 0.29814381
4 2010-02-02 2.0 1.41686296
5 2010-02-02 2.5 1.95570961
6 2010-02-02 3.0 3.59151505
7 2010-02-02 3.5 NA
8 2010-02-02 4.0 NA
9 2010-02-02 4.5 -2.94070834
10 2010-02-02 5.0 NA
11 2010-02-02 5.5 -2.08794703
12 2010-02-02 6.0 1.04275734
13 2010-02-02 6.5 1.46472433
14 2010-02-02 7.0 -2.02043247
15 2010-02-02 7.5 -0.17989752
16 2010-02-02 8.0 1.16028746
17 2010-02-02 8.5 0.42617715
18 2010-02-02 9.0 -1.21205356
19 2010-02-02 9.5 -1.63536660
20 2010-02-02 10.0 -2.37808504
21 2010-02-02 10.5 -0.15505870
22 2010-02-02 11.0 0.03145841
23 2010-02-02 11.5 -0.93546302
24 2010-02-02 12.0 0.63270809
25 2010-02-02 12.5 0.22420168
26 2010-02-02 13.0 -0.46191368
27 2010-02-02 13.5 2.21862683
28 2010-02-02 14.0 0.36631139
29 2010-02-02 14.5 0.76912170
30 2010-02-02 15.0 -2.70820713
31 2010-02-02 15.5 -0.18200408
32 2010-02-02 16.0 1.98156055
33 2010-02-02 16.5 0.57525057
34 2010-02-02 17.0 1.37435422
35 2010-02-02 17.5 1.64160673
36 2010-02-02 18.0 -1.13330533
37 2010-02-02 18.5 -0.33000520
38 2010-02-02 19.0 0.03816768
39 2010-02-02 19.5 1.23194633
40 2010-02-02 20.0 -1.98555720
41 2010-02-02 20.5 1.77062845
42 2010-02-02 21.0 -0.03245631
43 2010-02-02 21.5 -0.58233200
44 2010-02-02 22.0 -0.39989655
45 2010-02-02 22.5 1.75511944
46 2010-02-02 23.0 0.91594245
47 2010-02-02 23.5 2.04145902
48 2010-02-02 24.0 NA
49 2010-02-03 0.5 0.80626028
50 2010-02-03 1.0 0.99599102
51 2010-02-03 1.5 0.29814381
52 2010-02-03 2.0 1.41686296
53 2010-02-03 2.5 1.95570961
54 2010-02-03 3.0 3.59151505
55 2010-02-03 3.5 -1.66764947
56 2010-02-03 4.0 0.50262906
57 2010-02-03 4.5 -2.94070834
58 2010-02-03 5.0 -1.12035358
59 2010-02-03 5.5 -2.08794703
60 2010-02-03 6.0 1.04275734
61 2010-02-03 6.5 1.46472433
62 2010-02-03 7.0 -2.02043247
63 2010-02-03 7.5 -0.17989752
64 2010-02-03 8.0 1.16028746
65 2010-02-03 8.5 0.42617715
66 2010-02-03 9.0 NA
67 2010-02-03 9.5 -1.63536660
68 2010-02-03 10.0 -2.37808504
69 2010-02-03 10.5 -0.15505870
70 2010-02-03 11.0 0.03145841
71 2010-02-03 11.5 -0.93546302
72 2010-02-03 12.0 0.63270809
73 2010-02-03 12.5 0.22420168
74 2010-02-03 13.0 -0.46191368
75 2010-02-03 13.5 2.21862683
76 2010-02-03 14.0 0.36631139
77 2010-02-03 14.5 0.76912170
78 2010-02-03 15.0 -2.70820713
79 2010-02-03 15.5 -0.18200408
80 2010-02-03 16.0 1.98156055
81 2010-02-03 16.5 0.57525057
82 2010-02-03 17.0 1.37435422
83 2010-02-03 17.5 1.64160673
84 2010-02-03 18.0 -1.13330533
85 2010-02-03 18.5 -0.33000520
86 2010-02-03 19.0 0.03816768
87 2010-02-03 19.5 1.23194633
88 2010-02-03 20.0 -1.98555720
89 2010-02-03 20.5 1.77062845
90 2010-02-03 21.0 -0.03245631
91 2010-02-03 21.5 -0.58233200
92 2010-02-03 22.0 -0.39989655
93 2010-02-03 22.5 1.75511944
94 2010-02-03 23.0 0.91594245
95 2010-02-03 23.5 2.04145902
96 2010-02-03 24.0 NA
What I like about this solution
No loops
Works within the data frame
What I don't like about this solution
Chunkiness in creating the index
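For comparison, the same index-and-round idea can be sketched without building the index by hand, using tidyr::complete() with dplyr (an aside, assuming full_df as generated above; the NA-propagating sum is kept on purpose to flag incomplete half hours):
library(dplyr)
library(tidyr)
full_df %>%
  complete(date, dec_times = seq(0.25, 24, 0.25)) %>%   # fill in the missing 15-minute slots as NA
  mutate(half_hour = ceiling(dec_times * 2) / 2) %>%    # same as round_any(dec_times, 0.5, ceiling)
  group_by(date, half_hour) %>%
  summarise(values = sum(values), .groups = "drop")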

Download hourly weatherdata : Error

Hi, I am downloading hourly historical weather data with the "rwunderground" package using the code below.
Library("rwunderground")
rwunderground::set_api_key("MY_API_KEY")
history(set_location(zip_code = "90210"), "20170101")
After executing the above lines, the error I am getting was:
"Error in curl::curl_fetch_memory(url, handle = handle) :
Timeout was reached: Connection timed out after 10000 milliseconds"
Please help me to modify / update the above code.
Thanks in advance.
The code worked for me fine.
If you load the package with library(rwunderground), you don't need to also prefix calls with the package name: rwunderground::set_api_key can simply be set_api_key. This tidies the code's layout, but it won't speed up the function.
I'll include the code and output below; at least if it doesn't work for you, you can copy it from here:
library(rwunderground)
set_api_key("0d5f3d47ea78fa83")
history(set_location(zip_code = "90210"), "20170101")
[1] "Requesting: http://api.wunderground.com/api/0d5f3d47ea78fa83/history_20170101/q/90210.json"
# A tibble: 24 x 21
date temp dew_pt hum wind_spd wind_gust dir vis pressure wind_chill heat_index precip precip_rate
<dttm> <dbl> <dbl> <dbl> <dbl> <dbl> <chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
1 2017-01-01 00:51:00 45.0 41.0 86. 4.60 NA West 10. 29.9 42.6 NA NA NA
2 2017-01-01 01:51:00 44.1 39.0 82. 4.60 NA NNE 10. 29.9 41.5 NA NA NA
3 2017-01-01 02:51:00 43.0 39.9 89. 4.60 NA East 10. 29.9 40.3 NA NA NA
4 2017-01-01 03:51:00 44.1 39.9 85. 3.50 NA South 10. 29.9 42.5 NA NA NA
5 2017-01-01 04:51:00 43.0 39.9 89. 0. NA North 10. 29.9 NA NA NA NA
6 2017-01-01 05:51:00 43.0 39.9 89. 0. NA North 10. 29.9 NA NA NA NA
7 2017-01-01 06:51:00 43.0 39.9 89. 4.60 NA NNE 10. 29.9 40.3 NA NA NA
8 2017-01-01 07:51:00 44.1 41.0 89. 4.60 NA NE 10. 29.9 41.5 NA NA NA
9 2017-01-01 08:51:00 48.0 42.1 80. 5.80 NA NE 10. 29.9 NA NA NA NA
10 2017-01-01 09:51:00 52.0 44.1 74. 5.80 NA Vari… 10. 29.9 NA NA NA NA
# ... with 14 more rows, and 8 more variables: precip_total <dbl>, cond <chr>, fog <dbl>, rain <dbl>, snow <dbl>,
# hail <dbl>, thunder <dbl>, tornado <dbl>
Of note, since the time it takes your computer to run the code is an issue, I thought I'd show you how long it takes for mine.
time <- Sys.time()
set_api_key("0d5f3d47ea78fa83")
history(set_location(zip_code = "90210"), "20170101")
Sys.time() - time
Time difference of 0.526396 secs
time <- Sys.time()
rwunderground::set_api_key("0d5f3d47ea78fa83")
history(set_location(zip_code = "90210"), "20170101")
Sys.time() - time
Time difference of 0.5350232 secs
Repeating the above gives different but similar values; they're about the same speed.
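If the timeout persists on your machine, one thing to try (a suggestion, not part of the answer above) is raising httr's request timeout before calling history(); this assumes rwunderground issues its requests through httr, which the curl_fetch_memory error message suggests:
library(httr)
set_config(timeout(60))  # allow up to 60 seconds instead of the ~10 s that triggered the error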

R Creating new data.table with specified rows of a single column from an old data.table

I have the following data.table:
Month Day Lat Long Temperature
1: 10 01 80.0 180 -6.383330333333309
2: 10 01 77.5 180 -6.193327999999976
3: 10 01 75.0 180 -6.263328333333312
4: 10 01 72.5 180 -5.759997333333306
5: 10 01 70.0 180 -4.838330999999976
---
117020: 12 31 32.5 310 11.840003833333355
117021: 12 31 30.0 310 13.065001833333357
117022: 12 31 27.5 310 14.685003333333356
117023: 12 31 25.0 310 15.946669666666690
117024: 12 31 22.5 310 16.578336333333358
For every location (given by Lat and Long), I have a temperature for each day from 1 October to 31 December.
There are 1,272 locations consisting of each pairwise combination of Lat:
Lat
1 80.0
2 77.5
3 75.0
4 72.5
5 70.0
--------
21 30.0
22 27.5
23 25.0
24 22.5
and Long:
Long
1 180.0
2 182.5
3 185.0
4 187.5
5 190.0
---------
49 300.0
50 302.5
51 305.0
52 307.5
53 310.0
I'm trying to create a data.table that consists of 1,272 rows (one per location) and 92 columns (one per day). Each element of that data.table will then contain the temperature at that location on that day.
Any advice about how to accomplish that goal without using a for loop?
Here we use ChickWeights as the data, where we use "Chick-Diet" as the equivalent of your "lat-lon", and "Time" as your "Date":
dcast.data.table(data.table(ChickWeight), Chick + Diet ~ Time)
Produces:
Chick Diet 0 2 4 6 8 10 12 14 16 18 20 21
1: 18 1 1 1 NA NA NA NA NA NA NA NA NA NA
2: 16 1 1 1 1 1 1 1 1 NA NA NA NA NA
3: 15 1 1 1 1 1 1 1 1 1 NA NA NA NA
4: 13 1 1 1 1 1 1 1 1 1 1 1 1 1
5: ... 46 rows omitted
You will likely need something like Lat + Long ~ Month + Day for your formula.
In the future, please make your question reproducible as I did here by using a built-in data set.
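Applied to the question's own columns, the call would be a sketch along these lines (assuming the data.table is named dt and has the columns shown above):
library(data.table)
wide <- dcast(dt, Lat + Long ~ Month + Day, value.var = "Temperature")
dim(wide)  # expect 1,272 rows and 2 + 92 columns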
First create a date value using the lubridate package (I assumed year = 2014, adjust as necessary):
library(lubridate)
df$datetext <- paste(df$Month,df$Day,"2014",sep="-")
df$date <- mdy(df$datetext)
Then one option is to use the tidyr package to spread the columns:
library(tidyr)
spread(df[,-c(1:2,6)],date,Temperature)
Lat Long 2014-10-01 2014-12-31
1 22.5 310 NA 16.57834
2 25.0 310 NA 15.94667
3 27.5 310 NA 14.68500
4 30.0 310 NA 13.06500
5 32.5 310 NA 11.84000
6 70.0 180 -4.838331 NA
7 72.5 180 -5.759997 NA
8 75.0 180 -6.263328 NA
9 77.5 180 -6.193328 NA
10 80.0 180 -6.383330 NA
