I'm a beginner with R. I generally have csv files that I read with read.csv.
The files have 2 columns:
1st is date: "2013-01-01 22:20:00"
2nd is value: 0
So far I have only used the second column for my analysis, but now I need the date as well. Is it possible to read this date? And can I select the values between two dates, or exclude the values that fall between two times of day?
What is the right data format, how do I convert to it, and what do I get by default if I just use read.csv?
Thank you!
Say your csv file is called "foo.csv" and contains:
date, value
"2013-01-01 22:20:00", 3
"2013-01-02 12:20:00", 5
You need to tell R what kinds of things the columns are. By default it won't know that the first column is a date-time (older versions of R would even turn the string into a factor), which is not what you want, so:
f <- read.csv("foo.csv", colClasses=c("POSIXct", "integer"))
should do the trick.
Learn how read.csv works by doing:
?read.csv
and read carefully. If you do:
str(f)
you'll see that your date is POSIXct, as you asked. Do
?POSIXct
to learn how to do comparisons.
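Once the column is POSIXct, the usual comparison operators work on it directly. Here is a minimal sketch of selecting rows between two dates and excluding a time-of-day window, using the column names from the example csv above (the cut-off values are made up):
start <- as.POSIXct("2013-01-01 00:00:00")
end   <- as.POSIXct("2013-01-02 23:59:59")
f[f$date >= start & f$date <= end, ]        # rows whose date falls between start and end

hhmm <- format(f$date, "%H:%M")             # clock time as "HH:MM" text
f[!(hhmm >= "22:00" & hhmm <= "22:30"), ]   # drop rows between 22:00 and 22:30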
Newbie here, first post (please be gentle). I have been trying to resolve this for several hours, so finally decided it was time to ask for advice.
I have a large spreadsheet which I am importing with readxl. It contains one column with the date (format dd/mm/yyyy) and several time columns in hh:mm format.
Essentially I want to be able to import both time and date columns and combine them, so that I can then do some other calculations, like time elapsed.
If I import letting readxl guess the column types, it converts the times to POSIXct, but these then have a date in 1899 attached to them.
If I force readxl to read the time column as numeric, I get a decimal (e.g. 0.315972222 for 07:35), which I then tried converting using syntax similar to
format(as.POSIXct(Sys.Date() + 0.315972222), "%Y-%m-%d %H:%M:%S", tz="UTC")
i.e.
df$datetime <- format(as.POSIXct(df$date + df$time), "%Y-%m-%d %H:%M", tz="UTC")
which results in the correct date, but with a time of 00:00, not the time that was passed in.
I have tried searching here and found posts that are not quite the same question (e.g. Combining date and time columns into dd/mm/yyyy hh:mm), and have read widely, including about lubridate, but as I'm only 6 months into R, I'm finding some explanations a bit cryptic.
Suggestions or signposting appreciated (if there are solutions I haven't found).
If you subtract the number of days between 1899-12-30 and 1970-01-01 and then multiply that (shifted) Excel numeric value by 86400 (the number of seconds in a day), you should come close to the number of seconds since the start of 1970. You could then convert to POSIXct with as.POSIXct(x, origin="1970-01-01"). That does seem to be "the hard way", however.
It would be far easier and probably more accurate to convert the date-times to YYYY-MM-DD H:M:S format in Excel and then export as csv to be imported into R as text. There is a "POSIXct" value for the colClasses argument of read.csv, although it doesn't handle separate columns of date and time. For that you would be advised to import them as character values and then paste the dates and times together. Then watch your format strings for as.POSIXct. The dd/mm/yyyy "format" would be specified by "%d/%m/%Y".
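A minimal sketch of that paste-then-parse route on made-up values (the column names df$date and df$time are assumptions; import them as text, e.g. readxl::read_excel(..., col_types = "text")):
df <- data.frame(date = c("01/06/2017", "02/06/2017"),
                 time = c("07:35", "19:55"),
                 stringsAsFactors = FALSE)
df$datetime <- as.POSIXct(paste(df$date, df$time),
                          format = "%d/%m/%Y %H:%M", tz = "UTC")
df$datetime
## [1] "2017-06-01 07:35:00 UTC" "2017-06-02 19:55:00 UTC"
## The "hard way" for an Excel numeric date-time x (days since 1899-12-30) would be
## as.POSIXct((x - 25569) * 86400, origin = "1970-01-01", tz = "UTC"),
## where 25569 is the number of days from 1899-12-30 to 1970-01-01.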
I have a very large data set (CSV) with information about bicycle counts from a bike share system. The information I'm working with is the time at which bicycles were taken out of the racks (departure time) and the total travel time. What I want to do is add them so I can get the arrival time at the arrival station. The departure time variable is FECHA_HORA_RETIRO and the travel time variable is TIEMPO_USO. The former, which is read by R as a factor, is in the following format: "23/01/2017 19:55:16". The latter is read by R as a character and is in the following format: "0:17:46".
> head(viajes_ecobici_2017$FECHA_HORA_RETIRO)
[1] 28/01/2017 13:51 17/01/2017 16:24 12/01/2017 16:38 25/01/2017 10:31
> head(viajes_ecobici_2017$TIEMPO_USO)
[1] "1:35:37" "0:11:17" "0:32:51" "0:31:29" "1:31:59" "0:21:43" "0:5:43"
I first used strptime to get everything in the desired format
> viajes_ecobici_2017$FECHA_HORA_RETIRO =format(strptime(viajes_ecobici_2017$FECHA_HORA_RETIRO,format = "%d/%m/%Y %H:%M"),format = "%d/%m/%Y %H:%M:%S")
> viajes_ecobici_2017$TIEMPO_USO = format(strptime(viajes_ecobici_2017$TIEMPO_USO, format="%H:%M:%S"), format="%H:%M:%S")
This works with most observations. However, several observations became NA values after running this code. I went back to the original data to see why this was happening and created a variable with just the observations that became NA. When I looked closer at these observations I saw they have this format: "\t\t01/06/2017 00:01". How can I get rid of the "\t\t" while preserving the rest of the information?
Thanks in advance for your help.
trimws() trims white space (including tab characters, \t) from the ends of a character variable:
viajes_ecobici_2017$FECHA_HORA_RETIRO <- trimws(viajes_ecobici_2017$FECHA_HORA_RETIRO)
For what it's worth, readr::read_csv() has a built-in trim_ws argument (which is TRUE by default).
Since the values with the problem ("\t\t01/06/2017 00:01") are in FECHA_HORA_RETIRO, a simple regex would take care of the tab characters ("\t"):
viajes_ecobici_2017$FECHA_HORA_RETIRO <- gsub("^\\t\\t", "", viajes_ecobici_2017$FECHA_HORA_RETIRO)
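For example, on made-up values of the same shape (a sketch, not your actual data), trimming first lets the tab-contaminated entries parse instead of turning into NA:
x <- c("28/01/2017 13:51", "\t\t01/06/2017 00:01")
x <- trimws(x)                              # or gsub("^\\t+", "", x)
strptime(x, format = "%d/%m/%Y %H:%M")      # both values now parse; no NAs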
I have 2 Date variables in a .csv file with formats of "07-JUL-16 06.05.54.000000 AM". I want to use these in a regression model. Should I be reading these into a data frame as factors or characters? How can I take a difference of the 2 dates in each case?
Read them in as characters (e.g. stringsAsFactors=FALSE or tidyverse functions), then use as.POSIXct, e.g.
as.POSIXct("07-JUL-16 06.05.54.000000 AM",format="%d-%b-%y %I.%M.%OS %p")
## [1] "2016-07-07 06:05:54 EDT"
(I'm assuming that you are intending a day-month-year format rather than a month-day-year format -- but actually I don't have any evidence to support that thought!)
Once you've done this, subtracting the values should just work (giving you a difftime object) -- but be careful with units when converting to numeric!
For what it's worth, lubridate::ymd_hms thinks it can guess the format, but guesses wrong (?? assuming I guessed right above: with a two-digit year, and without any year values greater than 31, there's really nothing to distinguish years and days ...)
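For the subtraction itself, here is a minimal sketch on made-up values (the second timestamp is invented just to have something to subtract):
t1 <- as.POSIXct("07-JUL-16 06.05.54.000000 AM", format="%d-%b-%y %I.%M.%OS %p")
t2 <- as.POSIXct("08-JUL-16 09.35.54.000000 PM", format="%d-%b-%y %I.%M.%OS %p")
t2 - t1                                        ## Time difference of 1.645833 days
difftime(t2, t1, units="hours")                ## Time difference of 39.5 hours
as.numeric(difftime(t2, t1, units="hours"))    ## 39.5 -- a plain number, usable in a regression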
The question is quite simple: I have txt data imported into R. However, I forgot to change the date format to dd/mm/yyyy beforehand. For example, instead of having 30/09/2015 I have 42277.
Of course I could go back to Excel, change the column format from number to date, and get the dd/mm/yyyy format easily. But I was wondering whether there is a way of doing that inside R. I have several packages installed, such as XLConnect, but found nothing there.
Here's how to convert Excel-style dates:
as.Date(42277, origin="1899-12-30")
The help file for as.Date discusses the vagaries of conversion from other time systems and includes a discussion and example for Excel.
## Excel is said to use 1900-01-01 as day 1 (Windows default) or
## 1904-01-01 as day 0 (Mac default), but this is complicated by Excel
## incorrectly treating 1900 as a leap year.
## So for dates (post-1901) from Windows Excel
as.Date(35981, origin = "1899-12-30") # 1998-07-05
## and Mac Excel
as.Date(34519, origin = "1904-01-01") # 1998-07-05
## (these values come from http://support.microsoft.com/kb/214330)
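Applied to a whole imported column (the column name date_num is just an assumption for illustration), and reformatted to the dd/mm/yyyy display asked for:
df <- data.frame(date_num = c(42277, 42278))
df$date <- as.Date(df$date_num, origin = "1899-12-30")   # Windows Excel origin
format(df$date, "%d/%m/%Y")
## [1] "30/09/2015" "01/10/2015"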
The R code that I am working on is supposed to use data collected at five-minute intervals.
The data is saved in csv format. However, due to inconsistency in the data collected, the time column sometimes contains a full timestamp instead of just a time (dd/mm/yyyy HH:MM instead of HH:MM).
This causes an error in my code, as it reads the data as having multiple different values for the same time. Therefore, I would like to drop the date portion of the timestamps so that only the time value is read.
My failed attempt was:
as.Date(data[[1]],"%H:%M")
which gave me all NA values for the time column.
I have searched for similar questions on SO, but I did not manage to find a clear answer to my question. Can anyone suggest some possible functions to use?
I appreciate your help.
You could just strip the date portion of the text and then use as.POSIXct to convert them all to a %H:%M timestamp, e.g.:
x <- c("10:25","01/01/2014 10:30")
x <- gsub("^.+(\\d{2}:\\d{2})$","\\1",x)
as.POSIXct(x,format="%H:%M",tz="UTC")
#[1] "2014-06-02 10:25:00 UTC" "2014-06-02 10:30:00 UTC"