I hope we're all doing great
I have several decades of daily rainfall data from several monitoring stations. The data all begin on separate dates. I have combined them into a single data frame with the date in the first column and the rainfall depth in the second column. I want to sort the variable 'Total' by the variable 'Date and time' (please see the links below)
ms1 <- read.csv('ms1.csv')
ms2 <- read.csv('ms2.csv')
# ...and so on for the remaining stations
df <- merge(ms1, ms2, by = "Date and Time")
The problem is that the range of dates differs for each monitoring station (csv file). There may also be missing dates within a range. Is there a way around this?
Would I have to create a separate vector with the greatest possible date range? Or would it automatically detect the earliest start date from the imported data?
for monitoring station 1 (ms1)
for monitoring station 2 (ms2)
Note: the data continues to the current date
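One way around this, sketched below under the assumption that each CSV has a "Date and Time" column and a "Total" column (the file names and the date format string are placeholders): an outer join with all = TRUE keeps every date that appears in any station and fills the others with NA, so differing start dates and missing dates within a range are handled automatically.

ms1 <- read.csv("ms1.csv", check.names = FALSE)
ms2 <- read.csv("ms2.csv", check.names = FALSE)

# Rename each station's rainfall column so the merged columns stay distinct
names(ms1)[names(ms1) == "Total"] <- "Total_ms1"
names(ms2)[names(ms2) == "Total"] <- "Total_ms2"

station_list <- list(ms1, ms2)   # add further stations here
df <- Reduce(function(x, y) merge(x, y, by = "Date and Time", all = TRUE),
             station_list)

# Sort chronologically (adjust the format string to match your files)
df <- df[order(as.POSIXct(df$`Date and Time`, format = "%d/%m/%Y %H:%M")), ]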
My data contains several measurements in one day. It is stored in a CSV file and looks like this:
The V1 column is of factor type, so I'm adding an extra column which is of date-time type: vd$Vdate <- as_datetime(vd$V1):
Then I'm trying to convert the vd data into a time series: vd.ts <- ts(vd, frequency = 365)
But then the dates are gone:
I just cannot figure out what I am doing wrong! Could someone help me, please?
Your dates are gone because you need to build the ts object from your measurement variables (V2, ..., V7), disregarding the date field; the ts() call itself tells R how to structure the time index.
Also, I noticed that you have what seems like hourly data, so you need to provide a frequency that is appropriate to your sampling interval, not 365. Considering what you posted, your frequency seems a bit odd. I recommend finding a way to establish the frequency correctly. For example, if I have hourly data for every day of the year, then I have a frequency of 365.25*24 (the 0.25 accounts for leap years).
So the following is just an example; it still may not work properly with what I see (it is a limited view of your dataset, so I am not 100% sure).
# Build ts data (univariate)
vd.ts <- ts(vd$V2, frequency = 365, start = c(2019, 4))   # use a measurement column (V2 assumed), not the date column
# check to see if it is structured correctly
print(vd.ts, calendar = T)
Finally my time series is working properly. I used
ts <- zoo(measurements, date_times)
and I found out that date_times had to be converted with as_datetime(), as otherwise the values were of character type. The measurements are stored as a data.frame.
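A minimal sketch of that zoo-based approach, using assumed column names (V1 for the date-times, V2 and V3 for measurements) and a hypothetical file name; binding the result to a name other than ts also avoids masking base R's ts() function:

library(zoo)
library(lubridate)

vd <- read.csv("vd.csv", stringsAsFactors = FALSE)   # hypothetical file name
date_times   <- as_datetime(vd$V1)                   # character dates -> POSIXct
measurements <- vd[, c("V2", "V3")]                  # assumed measurement columns

vd_zoo <- zoo(measurements, order.by = date_times)   # irregular time series
head(vd_zoo)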
In R, my dataframe ("sampledata") looks like this:
The timestamp column is POSIXct, format: "2018-10-01 00:03:23"
The state column is Factor w/ 3 levels "AVAILABLE", "MUST_NOT_RUN", "MUST_RUN"
There are 6 unique device_id. The timestamps for each device are not the same, meaning data was not always collected at the same minute for each device. In some cases, there are multiple records per minute for the same device.
I want to transform the data into a visualization that shows distribution of "state" across a "typical" day. Ideally, something like this:
I've tried to count each occurrence of "state" grouped by timestamp minutes but failed (Error: can't sum factors). I've been trying to use ggplot and geom_area for the visualization, but believe I need to restructure my data before it will work. Very new to R (obviously). Happy to read any tutorials or links provided as background and appreciate any help you can provide. Thanks!
Other information that may/may not be helpful:
There are a handful of columns in the dataframe not shown.
223,446 entries between 10/2/18 - 11/8/18.
You can take the hours from the timestamps and then compute proportions of your states by hour:
library(ggplot2)
library(plyr)
#get hours from timestamp
obj$hour <- as.POSIXlt(obj$timestamp)$hour
#get average state proportions per hour
plot_obj <- ddply(obj, .(hour),   # take data.frame "obj" and group by "hour"
                  function(x) with(x,
                    data.frame(100 * table(state) / length(state))))

ggplot(plot_obj, aes(x = hour, y = Freq, fill = state)) +
  geom_area()
I am extracting Google Trends data looking at interest_over_time and interest_by_city.
I've noticed the interest_by_city data frame doesn't contain any date information. As I am looking to monitor changes over time, this is problematic.
Is there a way to add a new variable for the date where each observation will be the date and time the data was extracted?
If what you want is to add Sys.time() to the data you just extracted, then you can call df$time <- Sys.time().
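For example, a minimal sketch (the data frame name interest_by_city and the new column name extracted_at are assumptions, not names taken from the extraction output):

# Stamp each observation with the moment the data was pulled
interest_by_city$extracted_at <- Sys.time()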
There is a data frame like this:
The first two columns in the df describe the start date (month and year) and the end date (month and year). The remaining column names correspond to every single month and year of a certain time period.
I need a function/loop that inserts "1" or "0" in each cell - "1" when the date from the given column name falls within the period described by the first two columns, and "0" if not.
I would appreciate any help.
You want to do two different things: (a) create a dummy variable and (b) see if a particular date is in an interval.
Making a dummy variable is the easier part; in base R you can use ifelse(). For example, with the iris data frame:
iris$dummy <- ifelse(iris$Sepal.Width > 2.5, 1, 0)
Now, working with dates is more complicated. In this answer we will use the lubridate library. First you need to convert all those dates in 'Month Year' format into something that R can understand. For example, for February you could do:
library(lubridate)
new_format_february_2016 <- interval(ymd('2016-02-01'), ymd('2016-03-01') - dseconds(1))
#[1] 2016-02-01 UTC--2016-02-29 23:59:59 UTC
This is February: the interval of time from the 1st of February to one second before the 1st of March. You can do the same with your start date column and your end date column.
To compare two intervals of time (that is, to see if a particular month falls into one of your other intervals) you can do:
int_overlaps(new_format_february_2016, other_interval)
If this returns TRUE, the two intervals (one particular month and another one) overlap. This is not the same as one being inside the other, but in your case it will work. Using this you can iterate over the different columns and rows and build your dummy variable, as in the sketch below.
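A minimal sketch of that iteration, with a hypothetical data frame layout and hypothetical month labels standing in for the structure described in the question:

library(lubridate)

df <- data.frame(start = c("Jan 2016", "Mar 2016"),
                 end   = c("Feb 2016", "May 2016"),
                 stringsAsFactors = FALSE)
month_cols <- c("Jan 2016", "Feb 2016", "Mar 2016", "Apr 2016", "May 2016")

# Turn a "Month Year" label into an interval covering that whole month
month_interval <- function(label) {
  first_day <- myd(paste(label, "1"))                     # e.g. "Jan 2016 1"
  interval(first_day, first_day + months(1) - dseconds(1))
}

# Interval described by the first two columns of each row
row_intervals <- interval(myd(paste(df$start, "1")),
                          myd(paste(df$end, "1")) + months(1) - dseconds(1))

# One 0/1 column per month: 1 when that month overlaps the row's period
for (m in month_cols) {
  df[[m]] <- as.integer(int_overlaps(month_interval(m), row_intervals))
}
df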
But before doing so, I would recommend cleaning your data, as your current format is complicated to work with. To get all the power that vector types in R provide, ideally you would want one row per observation and one variable per column. This does not seem to be the case with your data frame. Take a look at the 'Tidy data' chapter of 'R for Data Science', especially the spreading and gathering subsection:
Tidy data
I have a csv file that contains many thousands of timestamped data points. The file includes the following columns: Date, Tag, East, North & DistFromMean. The following is a sample of the data in the file:
The data is recorded approximately every 15 minutes for 12 tags over a month. What I want to do is select subsets of the data, starting from the first date entry, e.g. every 3 hours, but because the tags transmit at slightly different rates I need a minimum and maximum value for the start and end times.
I have found a related previous question but don't understand the answer well enough to implement it.
The solution could first ask for the Tag number, then the period required, perhaps in minutes from the start time (i.e. every 3 hrs or 180 minutes), then the minimum and maximum time range, both of which would be constant for whatever time period was used. The minimum and maximum would probably need to be plus and minus 6 minutes from the period selected.
As the code below shows, I've managed to read in the file, change the Date format to POSIXlt and extract data within a specific time frame, but the bit I'm stuck on is extracting the data every nth minute and within a range.
TestData <- read.csv("TestData.csv", header = TRUE, as.is = TRUE)
TestData$Date <- strptime(TestData$Date, "%d/%m/%Y %H:%M")
TestData[TestData$Date >= as.POSIXlt("2014-02-26 7:10:00") &
         TestData$Date < as.POSIXlt("2014-02-26 7:18:00"), ]
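Building on that, here is a minimal, untested sketch of one way to take the reading closest to each 3-hour mark, within a plus or minus 6 minute tolerance, for a single tag; the tag value "T1" and the tolerance are assumptions:

# POSIXct is easier to subset and do arithmetic on than POSIXlt
TestData$Date <- as.POSIXct(TestData$Date)

tag_data  <- TestData[TestData$Tag == "T1", ]   # hypothetical tag value
period    <- 180 * 60                           # 3 hours, in seconds
tolerance <- 6 * 60                             # +/- 6 minutes, in seconds

# Target times: every 3 hours from the first observation to the last
targets <- seq(from = min(tag_data$Date), to = max(tag_data$Date), by = period)

# For each target time, keep the closest reading if it lies within the tolerance
picked <- lapply(targets, function(t) {
  diffs <- abs(as.numeric(difftime(tag_data$Date, t, units = "secs")))
  if (min(diffs) <= tolerance) tag_data[which.min(diffs), ] else NULL
})
subset_3hr <- do.call(rbind, picked)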