I know this has been asked several times, but I could not find the right way to work around my problem. I have a very simple CSV file that I upload, which looks like:
27.07.2015,100
28.07.2015,100.1504
29.07.2015,100.1957
30.07.2015,100.5044
31.07.2015,100.7661
03.08.2015,100.9308
04.08.2015,100.8114
05.08.2015,100.6927
06.08.2015,100.7501
07.08.2015,100.7194
10.08.2015,100.8197
11.08.2015,100.8133
Now I need to convert my data.frame into xts so I can use the PerformanceAnalytics package. My data.frame has the structure:
> str(mpey)
'data.frame': 243 obs. of 2 variables:
$ V1: Factor w/ 243 levels "01.01.2016","01.02.2016",..: 210 218 228 234 241 21 30 38 45 52 ...
- attr(*, "names")= chr "5" "6" "7" "8" ...
$ V2: Factor w/ 242 levels "100","100.0062",..: 1 4 5 10 16 20 17 13 15 14 ...
- attr(*, "names")= chr "5" "6" "7" "8" ...
I tried different things with the as.xts function but could not make it work.
Could you please help me get past this?
Here's a solution using the tidyquant package, which contains as_xts() for coercing data frames to xts objects and as_tibble() for coercing time series objects such as xts to tibbles ("tidy" data frames).
Recreate your data
> data_df
# A tibble: 12 × 2
date value
<fctr> <fctr>
1 27.07.2015 100
2 28.07.2015 100.1504
3 29.07.2015 100.1957
4 30.07.2015 100.5044
5 31.07.2015 100.7661
6 03.08.2015 100.9308
7 04.08.2015 100.8114
8 05.08.2015 100.6927
9 06.08.2015 100.7501
10 07.08.2015 100.7194
11 10.08.2015 100.8197
12 11.08.2015 100.8133
First, we need to reformat your data frame. The dates and values are both stored as factors, and they need to be of class Date and double, respectively. We'll load tidyquant and reformat the data frame. Note that tidyquant loads the tidyverse and financial packages, so you don't need to load anything else. The date can be converted with lubridate::dmy(), which parses characters in day-month-year format into Date objects. The value needs to go from factor to character and then from character to double, which is done by nesting as.numeric() and as.character().
> library(tidyquant)
> data_tib <- data_df %>%
mutate(date = dmy(date),
value = as.numeric(as.character(value)))
> data_tib
# A tibble: 12 × 2
date value
<date> <dbl>
1 2015-07-27 100.0000
2 2015-07-28 100.1504
3 2015-07-29 100.1957
4 2015-07-30 100.5044
5 2015-07-31 100.7661
6 2015-08-03 100.9308
7 2015-08-04 100.8114
8 2015-08-05 100.6927
9 2015-08-06 100.7501
10 2015-08-07 100.7194
11 2015-08-10 100.8197
12 2015-08-11 100.8133
Now, we can coerce to xts using the tidyquant::as_xts() function. Just specify date_col = date.
> data_xts <- data_tib %>%
as_xts(date_col = date)
> data_xts
value
2015-07-27 100.0000
2015-07-28 100.1504
2015-07-29 100.1957
2015-07-30 100.5044
2015-07-31 100.7661
2015-08-03 100.9308
2015-08-04 100.8114
2015-08-05 100.6927
2015-08-06 100.7501
2015-08-07 100.7194
2015-08-10 100.8197
2015-08-11 100.8133
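For reference, the same coercion also works without tidyquant by calling the xts() constructor directly (newer tidyquant versions have moved as_xts() out to timetk::tk_xts()). A minimal sketch, assuming a cleaned data frame shaped like data_tib above:

```r
library(xts)
# Stand-in for the cleaned data_tib above (dates as Date, values as double)
data_tib <- data.frame(date  = as.Date(c("2015-07-27", "2015-07-28")),
                       value = c(100.0000, 100.1504))
# xts() takes the data part plus a date-time index via order.by
data_xts <- xts(data_tib$value, order.by = data_tib$date)
colnames(data_xts) <- "value"
```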
I am trying to read dates from different Excel files, and each of them stores the dates in a different format (character or date). As a result, the date column of each file is read either as a character like "28/02/2020" or as the numeric conversion Excel applies to dates, e.g. "452344" (the number of days since 1900).
files1 = list.files(pattern="*.xlsx")
df = lapply(files1, read_excel,col_types = "text")
df = do.call(rbind, df)
How can I make R read the character type "28/02/2020" and not the "452344" numeric type?
For multiple date formats in one column I suggest using lubridate::parse_date_time() (or any other date converter that turns an ambiguous format into NA instead of throwing an error).
I assume your df should look something like this:
# A tibble: 6 x 2
id date
<chr> <chr>
1 1 43889
2 2 43889
3 3 43889
4 1 28/02/2020
5 2 28/02/2020
6 3 28/02/2020
Then you should use this code:
library(lubridate)
df <- as.data.frame(df)
df$date2 <- parse_date_time(x = df$date, orders = "d m y") #converts rows like "28/02/2020" to date
df[is.na(df$date2),"date2"] <- as.Date(as.numeric(df[is.na(df$date2),"date"]), origin = "1899-12-30") #converts rows like "43889"
R output:
id date date2
1 1 43889 2020-02-28
2 2 43889 2020-02-28
3 3 43889 2020-02-28
4 1 28/02/2020 2020-02-28
5 2 28/02/2020 2020-02-28
6 3 28/02/2020 2020-02-28
str(df)
'data.frame': 6 obs. of 3 variables:
$ id : chr "1" "2" "3" "1" ...
$ date : chr "43889" "43889" "43889" "28/02/2020" ...
$ date2: POSIXct, format: "2020-02-28" "2020-02-28" "2020-02-28" "2020-02-28" ...
I know it is not the nicest solution, but it should work for you as well.
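The same two-pass idea can also be written as a single assignment with dplyr::coalesce(), which takes the first non-NA value per row. This is just a sketch of the approach above, with a made-up df matching the example:

```r
library(dplyr)
library(lubridate)
df <- data.frame(id   = c("1", "2", "3", "1", "2", "3"),
                 date = c("43889", "43889", "43889",
                          "28/02/2020", "28/02/2020", "28/02/2020"))
df$date2 <- coalesce(
  # rows like "28/02/2020": parsed by parse_date_time, NA otherwise
  as.Date(parse_date_time(df$date, orders = "dmy")),
  # rows like "43889": Excel serial day numbers counted from 1899-12-30
  as.Date(suppressWarnings(as.numeric(df$date)), origin = "1899-12-30")
)
```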
I have a data frame with a specific date range in each row.
stuID stdID roleStart roleEnd
1 1 7 2010-11-18 2020-06-14
2 2 2 2012-08-13 2014-04-01
3 2 4 2014-04-01 2015-10-01
4 2 3 2015-10-01 2018-10-01
5 2 6 2018-10-01 2020-06-14
6 3 4 2014-03-03 2015-10-01
I need to expand the rows based on the weeks between the dates. To be precise, each row should be repeated once for every week between roleStart and roleEnd in the given data frame.
I tried to achieve this using the following piece of code
extendedData <- reshape2::melt(setNames(lapply(1:nrow(df), function(x) seq.Date(df[x, "roleStart"],
df[x, "roleEnd"], by = "1 week")),df$stuID))
But when I execute this, I am getting the error message
Error in seq.int(0, to0 - from, by) : wrong sign in 'by' argument
This is the structure of the dataframe
'data.frame': 350 obs. of 4 variables:
$ stuID : int 1 2 2 2 2 3 3 3 4 4 ...
$ stdID : int 7 2 4 3 6 4 3 6 1 2 ...
$ roleStart: Date, format: "2010-11-18" "2012-08-13" "2014-04-01" "2015-10-01" ...
$ roleEnd : Date, format: "2020-06-14" "2014-04-01" "2015-10-01" "2018-10-01" ...
Can anyone say what's wrong with the code?
Thanks in advance!!
Here's a way to do this using tidyverse functions :
library(dplyr)
df %>%
mutate(date = purrr::map2(roleStart, roleEnd, seq, by = 'week')) %>%
tidyr::unnest(date)
As far as your code is concerned, it works fine up to this step, i.e. generating the weekly dates:
lapply(1:nrow(df), function(x)
seq.Date(df[x, "roleStart"], df[x, "roleEnd"], by = "1 week"))
I am not sure what you are trying to do with the setNames and melt calls there.
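For what it's worth, the same expansion can be done in base R without melt: repeat each row once per generated week and attach the dates. A sketch with made-up data:

```r
df <- data.frame(stuID     = c(1, 2),
                 stdID     = c(7, 2),
                 roleStart = as.Date(c("2010-11-18", "2012-08-13")),
                 roleEnd   = as.Date(c("2010-12-02", "2012-08-27")))
# One vector of weekly dates per row
weeks <- Map(seq, df$roleStart, df$roleEnd, by = "week")
# Repeat each row as many times as it has weeks, then attach the dates
out <- df[rep(seq_len(nrow(df)), lengths(weeks)), ]
out$date <- as.Date(unlist(weeks), origin = "1970-01-01")
```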
I have one table with two columns DATE and Q.
DATE Q
--------------------
2013-01-04 932
2013-01-05 409
2013-01-08 511
2013-01-11 121
2013-01-12 252
2013-01-13 201
2013-01-14 40
2013-01-15 66
2013-01-17 NA
2013-01-18 123
Classes ‘tbl_df’, ‘tbl’ and 'data.frame': 10 obs. of 2 variables:
$ DATE: POSIXct, format: "2013-01-04" "2013-01-05" "2013-01-08" "2013-01-11" ...
$ Q: num 932 409 511 121 252 201 40 66 NA 123 ..
As you can see from the data, the frequency is irregular. The first column holds the dates (already converted to a date format) and the second column is numeric. My intention is to convert this table into a time series object, for further projections with the forecast package.
So can anyone help me with some code to convert this table into a ts object?
DATE <- seq(as.Date("2018-1-1"), as.Date("2019-1-1"), by = 1)
df <- data.frame(DATE = DATE)
output <- dplyr::left_join(df, YOUR_TABLE, by = "DATE")
Your table should have a date column named "DATE". Now you have NA values wherever your data are missing, and you can transform the result into a time series. I don't know if this will help; for me it sometimes does. You may want to tackle the NA problem with some replacement method.
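To make this concrete, here is a small self-contained version of the same idea (the table and values are made up to match the question):

```r
library(dplyr)
tbl <- data.frame(DATE = as.Date(c("2013-01-04", "2013-01-05", "2013-01-08")),
                  Q    = c(932, 409, 511))
# Regular daily grid spanning the observed range
grid <- data.frame(DATE = seq(min(tbl$DATE), max(tbl$DATE), by = 1))
output <- left_join(grid, tbl, by = "DATE")
# output now has one row per day, with NA for the missing dates;
# something like zoo::na.approx() could then fill the gaps before building a ts object
```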
I have a dataframe DF in which I have numerous of columns, one is with Dates and an other is the Hour.
My point is that I need to find the PRICE (in the same data frame) from 36 hours before. Not all of my days have 24 hours, so I can't just shift my data set.
My idea was to look for the day before in my data set and 12 hours before.
This is what I wrote but this is not working:
for (i in 38:nrow(DF)){
RefDay=as.Date(DF$Date[i])
HourRef=DF$Hour[i]
DF$P24[i]=DF[which(DF$Date == (RefDay-1))& which(DF$Hour == (HourRef-36)),"PRICE"]
}
Here is my DF:
'data.frame': 20895 obs. of 45 variables:
$ Hour : Factor w/ 24 levels "0","1","2","3",..: 1 2 3 4 5 6 7 8 9 10 ...
$ Date : POSIXct, format: "2016-07-01" "2016-07-01" "2016-07-01" "2016-07-01" ...
$ PRICE : num 29.4 24.7 23.4 21.9 20.2 ...
Here is a sample of my data:
DF.Hour DF.Date DF.PRICE
1 0 2016-07-01 29.36
2 1 2016-07-01 24.69
3 2 2016-07-01 23.42
4 3 2016-07-01 21.91
5 4 2016-07-01 20.19
6 5 2016-07-01 22.44
Try to fill the data frame so that every day has all of its hours. You can do that with complete() from tidyr; it fills the missing combinations with NA.
If you still have NAs in the completed data frame, you can fall back to the element 36 rows earlier with, for example, lag(PRICE, 36).
library(dplyr)
library(tidyr)
DF <- complete(DF, Hour, Date) %>% arrange(Date)
DF$PRICE[is.na(DF$PRICE)] <- lag(DF$PRICE, 36)[is.na(DF$PRICE)]
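As a toy illustration of what complete() does here (a made-up miniature of DF):

```r
library(tidyr)
toy <- data.frame(Hour  = c(0, 2),
                  Date  = as.Date("2016-07-01"),
                  PRICE = c(29.36, 23.42))
# Expand to every Hour/Date combination; the missing hours get NA prices
full <- complete(toy, Hour = 0:3, Date)
```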
I have a data frame "RH" with hourly data and I want to convert it to daily maximum and minimum values. The code from this question was very useful: Aggregating hourly data into daily aggregates
RH$Date <- strptime(RH$Date, format="%Y/%m/%d")
RH$day <- trunc(RH$Date,"day")
require(plyr)
x <- ddply(RH,.(Date),
summarize,
aveRH=mean(RH),
maxRH=max(RH),
minRH=min(RH)
)
But my first 5 years of data are recorded every 3 hours, not hourly, so there are no results for those years. Any suggestions? Thank you in advance.
'data.frame': 201600 obs. of 3 variables:
$ Date: chr "1985/01/01" "1985/01/01" "1985/01/01" "1985/01/01" ...
$ Hour: int 1 2 3 4 5 6 7 8 9 10 ...
$ RH : int NA NA 93 NA NA NA NA NA 79 NA ...
The link you provided is an old one. The code is still perfectly good and would work, but here's a more modern version using dplyr and lubridate
df <- read.table(text='date_time value
"01/01/2000 01:00" 30
"01/01/2000 02:00" 31
"01/01/2000 03:00" 33
"12/31/2000 23:00" 25',header=TRUE,stringsAsFactors=FALSE)
library(dplyr);library(lubridate)
df %>%
mutate(date_time=as.POSIXct(date_time,format="%m/%d/%Y %H:%M")) %>%
group_by(date(date_time)) %>%
summarise(mean=mean(value,na.rm=TRUE),max=max(value,na.rm=TRUE),
min=min(value,na.rm=TRUE))
`date(date_time)` mean max min
<date> <dbl> <dbl> <dbl>
1 2000-01-01 31.33333 33 30
2 2000-12-31 25.00000 25 25
EDIT
Since there's already a date column, this should work:
RH %>%
group_by(Date) %>%
summarise(mean=mean(RH,na.rm=TRUE),max=max(RH,na.rm=TRUE),
min=min(RH,na.rm=TRUE))
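For completeness, the same daily aggregates can be computed in base R with aggregate(), with no extra packages. A sketch assuming the Date/RH column names from the question (aggregate() drops NA rows by default, which matches na.rm = TRUE above):

```r
rh <- data.frame(Date = c("1985/01/01", "1985/01/01", "1985/01/02", "1985/01/02"),
                 RH   = c(93, 79, 85, 90))
# One row per Date; the RH column of the result is a matrix
# with mean/max/min columns
daily <- aggregate(RH ~ Date, data = rh,
                   FUN = function(x) c(mean = mean(x), max = max(x), min = min(x)))
```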