I couldn't find an answer to this problem, as I don't really know how to describe what I want to achieve in a few words. Basically, I have two columns (sunrise and sunset) with a certain number of rows, and I want to interleave them into a single column: the first value of the combined column is row 1 of the first column, the second is row 1 of the second column, the third is row 2 of the first column, and so on. With data, we start with this:
df <- structure(list(sunrise = structure(c(1439635810.57809, 1439722237.7463,
1439808664.71935, 1439895091.49609, 1439981518.07612, 1440067944.45978
), class = c("POSIXct", "POSIXt")), sunset = structure(c(1439682771.28069,
1439769119.75559, 1439855467.39929, 1439941814.23447, 1440028160.28404,
1440114505.57116), class = c("POSIXct", "POSIXt"))), .Names = c("sunrise",
"sunset"), row.names = c(NA, 6L), class = "data.frame")
sunrise sunset
1 2015-08-15 06:50:10 2015-08-15 19:52:51
2 2015-08-16 06:50:37 2015-08-16 19:51:59
3 2015-08-17 06:51:04 2015-08-17 19:51:07
4 2015-08-18 06:51:31 2015-08-18 19:50:14
5 2015-08-19 06:51:58 2015-08-19 19:49:20
6 2015-08-20 06:52:24 2015-08-20 19:48:25
The desired outcome should look like:
data.frame(c("2015-08-15 06:50:10", "2015-08-15 19:52:51", "2015-08-16 06:50:37",
"2015-08-16 19:51:59", "2015-08-17 06:51:04", "2015-08-17 19:51:07",
"2015-08-18 06:51:31", "2015-08-18 19:50:14", "2015-08-19 06:51:58",
"2015-08-19 19:49:20", "2015-08-20 06:52:24", "2015-08-20 19:48:25"
))
output
1 2015-08-15 06:50:10
2 2015-08-15 19:52:51
3 2015-08-16 06:50:37
4 2015-08-16 19:51:59
5 2015-08-17 06:51:04
6 2015-08-17 19:51:07
7 2015-08-18 06:51:31
8 2015-08-18 19:50:14
9 2015-08-19 06:51:58
10 2015-08-19 19:49:20
11 2015-08-20 06:52:24
12 2015-08-20 19:48:25
I can then assign day/night to each row and use these bins to categorize my data into day and night with the findInterval function.
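For context, a minimal sketch of that follow-up step (with hypothetical timestamps): findInterval() returns the bin index of each observation against the combined breaks vector, and odd bins fall between a sunrise and the following sunset, i.e. daytime.
breaks <- as.POSIXct(c("2015-08-15 06:50:10", "2015-08-15 19:52:51"))
obs <- as.POSIXct(c("2015-08-15 12:00:00", "2015-08-15 22:00:00"))
# odd bin index = between a sunrise and the next sunset
ifelse(findInterval(obs, breaks) %% 2 == 1, "day", "night")
# [1] "day"   "night"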
Any help is greatly appreciated.
EDIT: Thanks for the answers; they work like a charm.
Extract the rows iteratively and then convert the result into a vector:
data.frame(output = as.POSIXct(Reduce(c, apply(df, 1, c))))
# output
#1 2015-08-15 05:50:10
#2 2015-08-15 18:52:51
#3 2015-08-16 05:50:37
#4 2015-08-16 18:51:59
#5 2015-08-17 05:51:04
#6 2015-08-17 18:51:07
#7 2015-08-18 05:51:31
#8 2015-08-18 18:50:14
#9 2015-08-19 05:51:58
#10 2015-08-19 18:49:20
#11 2015-08-20 05:52:24
#12 2015-08-20 18:48:25
#NOTE: the values are different because of timezone
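If the timezone shift matters, a small sketch that avoids the character round-trip entirely (assuming the same df as above): concatenate the two POSIXct columns and reorder, so the values never leave POSIXct.
# interleave row-wise without converting to character first
idx <- order(rep(seq_len(nrow(df)), 2))   # 1 7 2 8 3 9 ...
data.frame(output = c(df$sunrise, df$sunset)[idx])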
Or index the values from the data.frame directly:
as.POSIXct(df[cbind(sort(rep(1:NROW(df), NCOL(df))), rep(1:NCOL(df), NROW(df)))])
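To see what this one-liner does, here is the (row, column) index matrix that the cbind() call produces for a two-row, two-column case:
cbind(sort(rep(1:2, 2)), rep(1:2, 2))
#      [,1] [,2]
# [1,]    1    1
# [2,]    1    2
# [3,]    2    1
# [4,]    2    2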
## create a matrix of indices then order it
o <- order(matrix(1:prod(dim(df)), nrow(df), byrow = TRUE))
## create the new data frame from the concatenated dates and the order vector
data.frame(output = do.call("c", c(df, use.names = FALSE))[o])
# output
# 1 2015-08-15 03:50:10
# 2 2015-08-15 16:52:51
# 3 2015-08-16 03:50:37
# 4 2015-08-16 16:51:59
# 5 2015-08-17 03:51:04
# 6 2015-08-17 16:51:07
# 7 2015-08-18 03:51:31
# 8 2015-08-18 16:50:14
# 9 2015-08-19 03:51:58
# 10 2015-08-19 16:49:20
# 11 2015-08-20 03:52:24
# 12 2015-08-20 16:48:25
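To see why the ordering works for the six-row example, inspect o: it interleaves the column-major positions, which steps through the concatenated dates row by row.
o
# [1]  1  7  2  8  3  9  4 10  5 11  6 12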
Related
I have a large data set that spans a month, with each row stamped with a date in a column called txn_date like the one below (this is a toy reproduction of it):
dat1 <- read.table(text = "var1 txn_date
5 2020-10-25
1 2020-10-25
3 2020-10-26
4 2020-10-27
1 2020-10-27
3 2020-10-31
3 2020-11-01
8 2020-11-02 ", header = TRUE)
Ideally I would like a column in my data frame for each date in the data. I think this could be done by first getting a single column that is 1 for the first date that appears, 2 for the second, and so on.
So something like this
dat1 <- read.table(text = "var1 txn_date day
5 2020-10-25 1
1 2020-10-25 1
3 2020-10-26 2
4 2020-10-27 3
1 2020-10-27 3
3 2020-10-31 7
3 2020-11-01 8
8 2020-11-02 9 ", header = TRUE)
I'm not quite sure how to get this. The txn_date column is as.Date in my actual data frame. If I could get the single day column listed above (and then convert it to a factor), I could always one-hot encode its levels if I need to. Ultimately I need to use the day of the experiment as a regressor in a regression I'm going to run.
Something along the lines of y ~ x + day_1 + day_2 +...+ error
Would this be suitable?
library(tidyverse)
dat1 <- read.table(text = "var1 txn_date
5 2020-10-25
1 2020-10-25
3 2020-10-26
4 2020-10-27
1 2020-10-27
3 2020-10-31
3 2020-11-01
8 2020-11-02 ", header = TRUE)
dat1$txn_date <- as.Date(dat1$txn_date)
dat1 %>%
mutate(days = txn_date - txn_date[1] + 1)
# var1 txn_date days
#1 5 2020-10-25 1 days
#2 1 2020-10-25 1 days
#3 3 2020-10-26 2 days
#4 4 2020-10-27 3 days
#5 1 2020-10-27 3 days
#6 3 2020-10-31 7 days
#7 3 2020-11-01 8 days
#8 8 2020-11-02 9 days
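If you need a plain integer rather than a difftime (e.g. to convert to a factor later), a small tweak is to wrap the difference in as.integer(), assuming the data are sorted so that row 1 holds the earliest date:
dat1 %>%
  mutate(days = as.integer(txn_date - txn_date[1] + 1))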
We create a sequence of dates based on the min and max of 'txn_date' and match:
dates <- seq(min(as.Date(dat1$txn_date)),
max(as.Date(dat1$txn_date)), by = '1 day')
dat1$day <- with(dat1, match(as.Date(txn_date), dates))
dat1$day
#[1] 1 1 2 3 3 7 8 9
Or use the factor route:
with(dat1, as.integer(factor(txn_date, levels = as.character(dates))))
#[1] 1 1 2 3 3 7 8 9
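For the regression mentioned in the question, there is no need to build one-hot columns by hand: wrapping the integer day in factor() inside the model formula makes lm() expand it into one indicator per day. A sketch with a made-up response y (hypothetical, for illustration only):
set.seed(1)
dat1$y <- rnorm(nrow(dat1))   # placeholder response, not real data
fit <- lm(y ~ var1 + factor(day), data = dat1)
summary(fit)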
This question already has answers here:
Complex non-equi merge in R
(3 answers)
I wrote a nested for loop in R, but the loop is taking far too long to run. I have two big datasets. For every row in dfA and for every row in dfB, the loop should check whether Date in dfA falls within the Date interval in dfB. If it does, the two datasets should merge on a given column for that row. I'm not sure whether the code I wrote will even work without error, because the loop is still running.
Any insight would be appreciated.
dfA:
Common a Date
1 20141331123 1 2005-01-01
2 20141331123 2 2005-01-02
3 20141331123 3 2005-01-03
4 20141331123 4 2005-01-04
5 20141331123 5 2005-01-05
6 20141331123 6 2005-01-06
dfB:
cDate bDate common
1 2005-01-01 2005-06-13 20141331123
library(lubridate)
dfB$Interval <- interval(ymd(dfB$cDate), ymd(dfB$bDate))
for (i in 1:nrow(dfA)) {
for (i in 1:nrow(dfB)) {
if (dfA$Date[i] %within% dfB$Interval[i] == TRUE) {
merged <- merge(dfA, dfB, by.x = c("common"), by.y = c("Common"))
}
}
return(merged)
}
Non-equi joins are supported natively in SQL, and in data.table within R. Neither base R nor tidyverse functions support them locally[1].
library(data.table)
setDT(dfA)
setDT(dfB)
dfB[dfA, on = .(common == Common, cDate <= Date, bDate >= Date)]
# cDate bDate common a
# 1: 2005-01-01 2005-01-01 20141331123 1
# 2: 2005-01-02 2005-01-02 20141331123 2
# 3: 2005-01-03 2005-01-03 20141331123 3
# 4: 2005-01-04 2005-01-04 20141331123 4
# 5: 2005-01-05 2005-01-05 20141331123 5
# 6: 2005-01-06 2005-01-06 20141331123 6
The sample data is a little uninteresting in that everything fits in the single interval, but perhaps this will work with your more varied data.
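One caveat: in the output above, cDate and bDate have been overwritten by the joined Date values. If you want to keep dfB's original interval endpoints, one sketch is to select the columns explicitly in j, using the x. prefix for dfB's own columns:
dfB[dfA, on = .(common == Common, cDate <= Date, bDate >= Date),
    .(common, a, Date = i.Date, cDate = x.cDate, bDate = x.bDate)]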
[1]: Since SQL supports it, it's supported in dbplyr using sql_on.
Data:
dfA <- structure(list(Common = c("20141331123", "20141331123", "20141331123", "20141331123", "20141331123", "20141331123"), a = 1:6, Date = structure(c(12784, 12785, 12786, 12787, 12788, 12789), class = "Date")), row.names = c(NA, -6L), class = "data.frame")
dfB <- structure(list(cDate = structure(12784, class = "Date"), bDate = structure(12947, class = "Date"), common = "20141331123"), row.names = c(NA, -1L), class = "data.frame")
Consider a straightforward merge and subset if data size allows it.
final_df <- subset(merge(dfA, dfB, by.x="Common", by.y="common"),
Date >= cDate & Date <= bDate)
final_df
# Common a Date cDate bDate
# 1 20141331123 1 2005-01-01 2005-01-01 2005-06-13
# 2 20141331123 2 2005-01-02 2005-01-01 2005-06-13
# 3 20141331123 3 2005-01-03 2005-01-01 2005-06-13
# 4 20141331123 4 2005-01-04 2005-01-01 2005-06-13
# 5 20141331123 5 2005-01-05 2005-01-01 2005-06-13
# 6 20141331123 6 2005-01-06 2005-01-01 2005-06-13
I am trying to use the prepData function in the R package moveHMM. I am getting "Error in prepData(x, coordNames = c("lon", "lat")) : Each animal's observations must be contiguous."
x is a data.frame with column names "ID", "lon", "lat". The ID column is the name of each animal as a character, and lon/lat are numeric. There are no NA values and no missing rows.
I do not know what this error means or how to fix it. Help please.
x <- data.frame(ID = dat$ID, lon = dat$lon, lat = dat$lat)
hmmgps <- prepData(x, coordNames=c("lon", "lat"))
The function prepData assumes that the rows for each track (or each animal) are grouped together in the data frame. The error message indicates that it is not the case, and that at least one track is split. For example, the following (artificial) data set would cause this error:
> data
ID lon lat
1 1 54.08658 12.190313
2 1 54.20608 12.101203
3 1 54.18977 12.270896
4 2 55.79217 9.943341
5 2 55.88145 9.986028
6 2 55.91742 9.887342
7 1 54.25305 12.374541
8 1 54.28061 12.190078
This is because the track with ID "1" is split into two parts, separated by the track with ID "2".
The tracks need to be contiguous, i.e. all observations with ID "1" should come first, followed by all observations with ID "2". One possible solution would be to order the data by ID and by date.
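As a quick sketch, you can test for contiguity before calling prepData: in a contiguous data set, the ID sequence changes value exactly (number of distinct IDs - 1) times.
> ## TRUE iff each ID forms one uninterrupted block of rows
> with(data, sum(ID[-1] != ID[-length(ID)]) == length(unique(ID)) - 1)
[1] FALSE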
Consider the same data set, with a "date" column:
> data
ID lon lat date
1 1 54.08658 12.190313 2019-09-06 14:20:00
2 1 54.20608 12.101203 2019-09-06 15:20:00
3 1 54.18977 12.270896 2019-09-06 16:20:00
4 2 55.79217 9.943341 2019-09-04 07:55:00
5 2 55.88145 9.986028 2019-09-04 08:55:00
6 2 55.91742 9.887342 2019-09-04 09:55:00
7 1 54.25305 12.374541 2019-09-06 17:20:00
8 1 54.28061 12.190078 2019-09-06 18:20:00
You can then define the ordered data set with:
> data_ordered <- data[with(data, order(ID, date)),]
> data_ordered
ID lon lat date
1 1 54.08658 12.190313 2019-09-06 14:20:00
2 1 54.20608 12.101203 2019-09-06 15:20:00
3 1 54.18977 12.270896 2019-09-06 16:20:00
7 1 54.25305 12.374541 2019-09-06 17:20:00
8 1 54.28061 12.190078 2019-09-06 18:20:00
4 2 55.79217 9.943341 2019-09-04 07:55:00
5 2 55.88145 9.986028 2019-09-04 08:55:00
6 2 55.91742 9.887342 2019-09-04 09:55:00
Then, the ordered data (excluding the date column) can be passed to prepData:
> hmmgps <- prepData(data_ordered[,1:3], coordNames = c("lon", "lat"))
> hmmgps
ID step angle x y
1 1 16.32042 NA 54.08658 12.190313
2 1 18.85560 2.3133191 54.20608 12.101203
3 1 13.37296 -0.6347523 54.18977 12.270896
4 1 20.62507 -2.4551318 54.25305 12.374541
5 1 NA NA 54.28061 12.190078
6 2 10.86906 NA 55.79217 9.943341
7 2 11.60618 -1.6734604 55.88145 9.986028
8 2 NA NA 55.91742 9.887342
I hope that this helps.
I'm new here, so I apologize if I miss any conventions.
I have a ~2000 row dataset with data on unique cases happening in a three year period. Each case has a start date and an end date. I want to be able to get a new dataframe that shows how many cases occur per week in this three year period.
The structure of the dataset I have is like this:
ID Start_Date End_Date
1 2015-01-04 2017-11-02
2 2015-01-05 2015-10-26
3 2015-01-07 2015-03-04
4 2015-01-12 2016-05-17
5 2015-01-15 2015-04-08
6 2015-01-21 2016-07-31
7 2015-01-21 2015-07-16
8 2015-01-22 2015-03-03
This problem could be solved more easily with the sqldf package, but I decided to stick with dplyr.
The approach:
library(dplyr)
library(lubridate)
# First create a data frame having all weeks from chosen start date to end date.
# 2015-01-01 to 2017-12-31
df_week <- data.frame(weekStart = seq(floor_date(as.Date("2015-01-01"), "week"),
as.Date("2017-12-31"), by = 7))
df_week <- df_week %>%
mutate(weekEnd = weekStart + 7,
weekNum = as.character(weekStart, "%V-%Y"),
dummy = TRUE)
# The dummy column is only for joining purposes.
# Header looks like
#> head(df_week)
# weekStart weekEnd weekNum dummy
#1 2014-12-28 2015-01-04 52-2014 TRUE
#2 2015-01-04 2015-01-11 01-2015 TRUE
#3 2015-01-11 2015-01-18 02-2015 TRUE
#4 2015-01-18 2015-01-25 03-2015 TRUE
#5 2015-01-25 2015-02-01 04-2015 TRUE
#6 2015-02-01 2015-02-08 05-2015 TRUE
# Prepare the data as mentioned in OP
df <- read.table(text = "ID Start_Date End_Date
1 2015-01-04 2017-11-02
2 2015-01-05 2015-10-26
3 2015-01-07 2015-03-04
4 2015-01-12 2016-05-17
5 2015-01-15 2015-04-08
6 2015-01-21 2016-07-31
7 2015-01-21 2015-07-16
8 2015-01-22 2015-03-03", header = TRUE, stringsAsFactors = FALSE)
df$Start_Date <- as.Date(df$Start_Date)
df$End_Date <- as.Date(df$End_Date)
df <- df %>% mutate(dummy = TRUE) # just for joining
# Use dplyr to join, filter and then group on week to find number of cases
# in each week
df_week %>%
left_join(df, by = "dummy") %>%
select(-dummy) %>%
filter((weekStart >= Start_Date & weekStart <= End_Date) |
(weekEnd >= Start_Date & weekEnd <= End_Date)) %>%
group_by(weekStart, weekEnd, weekNum) %>%
summarise(cases = n())
# Result
# weekStart weekEnd weekNum cases
# <date> <date> <chr> <int>
# 1 2014-12-28 2015-01-04 52-2014 1
# 2 2015-01-04 2015-01-11 01-2015 3
# 3 2015-01-11 2015-01-18 02-2015 5
# 4 2015-01-18 2015-01-25 03-2015 8
# 5 2015-01-25 2015-02-01 04-2015 8
# 6 2015-02-01 2015-02-08 05-2015 8
# 7 2015-02-08 2015-02-15 06-2015 8
# 8 2015-02-15 2015-02-22 07-2015 8
# 9 2015-02-22 2015-03-01 08-2015 8
#10 2015-03-01 2015-03-08 09-2015 8
# ... with 139 more rows
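One caveat: the filter above can miss a case whose Start_Date and End_Date both fall strictly inside a single week (i.e. a case shorter than a week that starts mid-week). The standard interval-overlap test covers that edge case too:
df_week %>%
  left_join(df, by = "dummy") %>%
  select(-dummy) %>%
  # two intervals overlap iff each starts on or before the other's end
  filter(weekStart <= End_Date & weekEnd >= Start_Date) %>%
  group_by(weekStart, weekEnd, weekNum) %>%
  summarise(cases = n())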
Welcome to SO!
Before solving the problem, be sure to install the required packages by running
install.packages(c("readr", "dplyr", "lubridate"))
if you haven't installed them yet.
I'll present a solution built on those packages next.
This is a way to solve it:
library(readr)
library(dplyr)
library(lubridate)
raw_data <- 'id start_date end_date
1 2015-01-04 2017-11-02
2 2015-01-05 2015-10-26
3 2015-01-07 2015-03-04
4 2015-01-12 2016-05-17
5 2015-01-15 2015-04-08
6 2015-01-21 2016-07-31
7 2015-01-21 2015-07-16
8 2015-01-22 2015-03-03'
curated_data <- read_delim(raw_data, delim = " ") %>% # the sample text above is space-separated
mutate(start_date = as.Date(start_date)) %>% # convert column 2 to date format assuming the date is yyyy-mm-dd
mutate(weeks_lapse = as.integer((start_date - min(start_date))/dweeks(1))) # count how many weeks passed since the lowest date in the data
curated_data %>%
group_by(weeks_lapse) %>% # I group to count by week
summarise(cases_per_week = n()) # now count by group by week
And the solution is:
# A tibble: 3 x 2
weeks_lapse cases_per_week
<int> <int>
1 0 3
2 1 2
3 2 3
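Equivalently, the week index can be computed without lubridate by integer-dividing the day gap by 7; a quick check on the same curated_data reproduces the counts:
with(curated_data, table(as.integer(start_date - min(start_date)) %/% 7))
# 0 1 2
# 3 2 3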
I have a data frame with missing values for "SNAP_ID". I'd like to fill in the missing values with floating point values based on a sequence from the previous non-missing value (lag()?). I would really like to achieve this using just dplyr if possible.
Assumptions:
There will never be missing data in the first or last row; I'm generating the missing dates based on the missing days between the min and max of the data set
There can be multiple gaps in the data set
Current data:
end SNAP_ID
1 2015-06-26 12:59:00 365
2 2015-06-26 13:59:00 366
3 2015-06-27 00:01:00 NA
4 2015-06-27 23:00:00 NA
5 2015-06-28 00:01:00 NA
6 2015-06-28 23:00:00 NA
7 2015-06-29 09:00:00 367
8 2015-06-29 09:59:00 368
What I want to achieve:
end SNAP_ID
1 2015-06-26 12:59:00 365.0
2 2015-06-26 13:59:00 366.0
3 2015-06-27 00:01:00 366.1
4 2015-06-27 23:00:00 366.2
5 2015-06-28 00:01:00 366.3
6 2015-06-28 23:00:00 366.4
7 2015-06-29 09:00:00 367.0
8 2015-06-29 09:59:00 368.0
As a data frame:
df <- structure(list(end = structure(c(1435323540, 1435327140, 1435363260,
1435446000, 1435449660, 1435532400, 1435568400, 1435571940), tzone = "UTC", class = c("POSIXct",
"POSIXt")), SNAP_ID = c(365, 366, NA, NA, NA, NA, 367, 368)), .Names = c("end",
"SNAP_ID"), row.names = c(NA, -8L), class = "data.frame")
This was my attempt at achieving this goal, but it only works for the first missing value:
df %>%
arrange(end) %>%
mutate(SNAP_ID=ifelse(is.na(SNAP_ID),lag(SNAP_ID)+0.1,SNAP_ID))
end SNAP_ID
1 2015-06-26 12:59:00 365.0
2 2015-06-26 13:59:00 366.0
3 2015-06-27 00:01:00 366.1
4 2015-06-27 23:00:00 NA
5 2015-06-28 00:01:00 NA
6 2015-06-28 23:00:00 NA
7 2015-06-29 09:00:00 367.0
8 2015-06-29 09:59:00 368.0
The outstanding answer from @mathematical.coffee is below:
df %>%
arrange(end) %>%
group_by(tmp=cumsum(!is.na(SNAP_ID))) %>%
mutate(SNAP_ID=SNAP_ID[1] + 0.1*(0:(length(SNAP_ID)-1))) %>%
ungroup() %>%
select(-tmp)
EDIT: new version works for any number of NA runs.
This one doesn't need zoo, either.
First, notice that tmp=cumsum(!is.na(SNAP_ID)) groups the SNAP_IDs such that each group (rows sharing the same tmp) consists of one non-NA value followed by its run of NA values.
Then group by this variable and just add .1 to the first SNAP_ID to fill out the NAs:
df %>%
arrange(end) %>%
group_by(tmp=cumsum(!is.na(SNAP_ID))) %>%
mutate(SNAP_ID=SNAP_ID[1] + 0.1*(0:(length(SNAP_ID)-1)))
end SNAP_ID tmp
1 2015-06-26 12:59:00 365.0 1
2 2015-06-26 13:59:00 366.0 2
3 2015-06-27 00:01:00 366.1 2
4 2015-06-27 23:00:00 366.2 2
5 2015-06-28 00:01:00 366.3 2
6 2015-06-28 23:00:00 366.4 2
7 2015-06-29 09:00:00 367.0 3
8 2015-06-29 09:59:00 368.0 4
Then you can drop the tmp column afterwards (add %>% select(-tmp) to the end).
EDIT: this is the old version which doesn't work for subsequent runs of NAs.
If your aim is to fill each NA with the previous value + 0.1, you can use zoo's na.locf (which fills each NA with the previous value), along with cumsum(is.na(SNAP_ID))*0.1 to add the extra 0.1.
library(zoo)
df %>%
arrange(end) %>%
mutate(SNAP_ID=ifelse(is.na(SNAP_ID),
na.locf(SNAP_ID) + cumsum(is.na(SNAP_ID))*0.1,
SNAP_ID))