I am working with a dataset that contains variables measured from permanent plots. These plots are remeasured every few years. The data looks roughly like the table at the bottom. I used the following code to slice out the initial measurement at t1. Now I want to slice t2, the remeasurement that is one step greater than the minimum Cycle (or the minimum Measured_year). This is particularly a problem for plots that have more than two remeasurements (num_obs > 2) and whose Measured_year intervals and Cycle intervals differ.
I would really appreciate the help. I have been stuck on this for quite some time now.
df_Time1 <- df %>%
  group_by(State, County, Plot) %>%
  slice(which.min(Cycle))
State County Plot Measured_year basal_area tph Cycle num_obs
    1      1    1          2006         10  10     8       2
    2      1    2          2002         20  20     7       3
    1      1    1          2009         30  30     9       2
    2      1    1          2005         40  40     6       3
    2      1    1          2010         50  50     8       3
    2      1    2          2013         60  60    10       2
    2      1    2          2021         70  70    12       3
    2      1    1          2019         80  80    13       3
Create a t variable for yourself based on the Cycle order. Note that this needs to be computed on the full df rather than the already-sliced df_Time1 (which holds only one row per plot), and that row_number() gives the within-group rank, whereas order() returns a sorting permutation:
df <- df %>%
  group_by(State, County, Plot) %>%
  mutate(t = row_number(Cycle))
You can then filter on t == 1 or t == 2, etc.
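Put together for the t2 slice asked about, a minimal sketch using the data above (df_Time2 is just an illustrative name):
library(dplyr)

df_Time2 <- df %>%
  group_by(State, County, Plot) %>%
  mutate(t = row_number(Cycle)) %>%  # t = 1 is the initial Cycle, t = 2 the next one
  filter(t == 2) %>%
  ungroup()
Because t follows the within-plot Cycle order rather than the raw Cycle or Measured_year values, this also works for plots with num_obs > 2 and uneven remeasurement intervals.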
I have a dataset with a million records that I need to aggregate after first subsetting the data. It is difficult to provide a good reproducible sample because the sample size would need to be rather large, but I will try anyway.
A random sample of the data that I am working with looks like this:
> df
auto_id user_id month
164537 7124 240249 10
151635 7358 226423 9
117288 7376 172463 9
177119 6085 199194 11
128904 7110 141608 9
157194 7143 241964 9
71303 6090 141646 7
72480 6808 175910 7
108705 6602 213098 8
97889 7379 185516 8
184906 6405 212580 12
37242 6057 197905 8
157284 6548 162928 9
17910 6885 194180 10
70660 7162 161827 7
8593 7375 207061 8
28712 6311 176373 10
144194 7324 142715 9
73106 7196 176153 7
67065 7392 171039 7
77954 7116 161489 7
59842 7107 162637 7
101819 5994 182973 9
183546 6427 142029 12
102881 6477 188129 8
In every month there are many repeated users, so first we should subset by month and make a frequency table of the users and the number of trips taken (unfortunately, in the random sample above there is only one trip per user, but in the larger dataset this is not the case):
full_data <- full_data[full_data$month == 7,]
users <- as.data.frame(table(full_data$user_id))
head(users)
Var1 Freq
1 100231 10
2 100744 17
3 111281 1
4 111814 2
5 113716 3
6 117493 3
As we can see, in the full dataset, in the month of July (month == 7), users have taken multiple trips. Now the important part: subset only the top 10% of these users (the top 10% in terms of Freq):
tenPercent = round(nrow(users)/10)
users <- users[order(-users$Freq),]
topten <- head(users, n = tenPercent)
Now the new data frame, topten, can be summed to get the number of trips taken by the top ten percent of users:
sum(topten$Freq)
[1] 12147
In the end the output should look like this:
> output
month trips
1 7 12147
2 8 ...
3 9 ...
4 10 ...
5 11 ...
6 12 ...
Is there a way to automate this process using dplyr, specifically the subsetting by the top ten percent? I have tried:
output <- full_data %>%
  group_by(month) %>%
  summarise(n = n())
But this only aggregates total trips by month. Could someone suggest a way to integrate this part into the dplyr query?
tenPercent = round(nrow(users)/10)
users <- users[order(-users$Freq),]
topten <- head(users, n = tenPercent)
The code below counts the number of rows for each user_id in each month, and then selects the 10% of users with the most rows in each month and sums them. Let me know if it solves your problem.
library(dplyr)
full_data %>%
  group_by(month, user_id) %>%
  tally() %>%
  group_by(month) %>%
  filter(percent_rank(n) >= 0.9) %>%
  summarise(n_trips = sum(n))
UPDATE: Following up on your comment, let's do a check with some fake data. Below we have 30 different values of user_id and 10,000 total rows. I've also used the prob argument so that the probability of a user_id being selected is proportional to its value (i.e., user_id 1 is the least likely to be chosen and user_id 30 is the most likely to be chosen).
set.seed(3)
full_data = data.frame(user_id = sample(1:30, 10000, replace = TRUE, prob = 1:30),
                       month   = sample(1:12, 10000, replace = TRUE))
Let's look at the number of rows for each user_id for month == 1. The code below counts the number of rows for each user_id and sorts from most to least common. Note that the three most common values of user_id (28, 29, 26) comprise 171 rows (60 + 57 + 54). Since there are 30 different values of user_id, the top three users represent the top 10% of users:
full_data %>%
  filter(month == 1) %>%
  group_by(month, user_id) %>%
  tally() %>%
  arrange(desc(n)) %>%
  as.data.frame()
month user_id n
1 1 28 60
2 1 29 57
3 1 26 54
4 1 30 53
5 1 27 49
6 1 22 43
7 1 21 41
8 1 20 40
9 1 23 40
10 1 24 38
11 1 25 38
12 1 19 37
13 1 18 33
14 1 16 28
15 1 15 27
16 1 17 27
17 1 14 26
18 1 9 20
19 1 12 20
20 1 13 20
21 1 10 17
22 1 11 17
23 1 6 15
24 1 7 13
25 1 8 13
26 1 4 9
27 1 5 7
28 1 2 3
29 1 3 2
30 1 1 1
So now let's take the next step and select the top 10% of users. To answer the question in your comment, filter(percent_rank(n) >= 0.9) keeps only the top 10% of user_id values, based on the value of n (which is the number of rows for each user_id). percent_rank is one of several ranking functions in dplyr that have different ways of dealing with ties (which may be the reason you're not getting the results you expect). See ?percent_rank for details:
full_data %>%
  filter(month == 1) %>%
  group_by(month, user_id) %>%
  tally() %>%
  group_by(month) %>%
  filter(percent_rank(n) >= 0.9)
month user_id n
1 1 26 54
2 1 28 60
3 1 29 57
And the sum of n (the total number of trips for the top 10%) is:
full_data %>%
  filter(month == 1) %>%
  group_by(month, user_id) %>%
  tally() %>%
  group_by(month) %>%
  filter(percent_rank(n) >= 0.9) %>%
  summarise(n_trips = sum(n))
month n_trips
1 1 171
So it looks like the code does what we'd naively expect, but maybe the issue is related to how ties are dealt with. Let me know if you're still getting anomalous results in your real data or if I've misunderstood what you're trying to accomplish.
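If tie handling does turn out to be the issue, here is a sketch of an alternative that takes an explicit fraction of users per month with dplyr's slice_max() (available in dplyr 1.0.0 and later); with with_ties = FALSE it keeps exactly the top 10% of user rows instead of everyone above a rank threshold:
library(dplyr)

full_data %>%
  count(month, user_id) %>%                        # n = trips per user per month
  group_by(month) %>%
  slice_max(n, prop = 0.1, with_ties = FALSE) %>%  # keep the top 10% of users by n
  summarise(n_trips = sum(n))
This should match the percent_rank() version except in months where ties straddle the 10% cutoff.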
Is there any way in Oracle to have my months start after every 28 days?
Example:
24-Dec-2015 to 20-Jan-2016 (we label this Dec 2015)
21-Jan-2016 to 17-Feb-2016 (we label this Jan 2016)
select rownum as month_number
     , day1 + (rownum - 1) * 28 as gregorian_month_start
     , day1 + rownum * 28 - 1   as gregorian_month_end
from (select date '2015-12-24' as day1  -- anchor date: start of the first 28-day "month"
      from dual
      connect by level <= 13);
1 24/DEC/2015 20/JAN/2016
2 21/JAN/2016 17/FEB/2016
3 18/FEB/2016 16/MAR/2016
4 17/MAR/2016 13/APR/2016
5 14/APR/2016 11/MAY/2016
6 12/MAY/2016 08/JUN/2016
7 09/JUN/2016 06/JUL/2016
8 07/JUL/2016 03/AUG/2016
9 04/AUG/2016 31/AUG/2016
10 01/SEP/2016 28/SEP/2016
11 29/SEP/2016 26/OCT/2016
12 27/OCT/2016 23/NOV/2016
13 24/NOV/2016 21/DEC/2016
Note: this doesn't handle the 365th day of normal years, or the 366th day of leap years, since 13 × 28 = 364. You would need to specify which month these leftover days should be added to.
I have a dataset which looks something like this:
Key Days
A 1
A 2
A 3
A 8
A 9
A 36
A 37
B 14
B 15
B 44
B 45
I would like to split the individual keys based on the days into groups of 7. For example:
Key Days
A 1
A 2
A 3
Key Days
A 8
A 9
Key Days
A 36
A 37
Key Days
B 14
B 15
Key Days
B 44
B 45
I could use ifelse and specify buckets of 1-7, 8-14, etc. up to 63-70 (the max possible value of Days). However, the issue lies with the Days column. There are lots of cases where a run of consecutive days straddles a bucket boundary: take days 14-15 as an example, which would fall into two separate brackets if split using the ifelse logic (8-14 and 15-21).
The ideal method of splitting would be to take the first day of a range, add 7 to it, and check how many rows of data actually fall under that window. I think we need to use loops for this. I could do it in Excel, but I have 20,000 rows of data for 2,000 keys, hence I'm using R. I would need a loop which checks each key value, and for each key further checks the value of Days and buckets the rows in groups of 7, anchored at the first day value of each range.
We create a grouping variable by applying %/% on the 'Days' column and then split the dataset into a list based on that 'grp':
grp <- df$Days %/% 7
split(df, factor(grp, levels = unique(grp)))
#$`0`
# Key Days
#1 A 1
#2 A 2
#3 A 3
#$`1`
# Key Days
#4 A 8
#5 A 9
#$`5`
# Key Days
#6 A 36
#7 A 37
#$`2`
# Key Days
#8 B 14
#9 B 15
#$`6`
# Key Days
#10 B 44
#11 B 45
Update
If we need to split by 'Key' as well:
lst <- split(df, list(factor(grp, levels = unique(grp)), df$Key), drop=TRUE)
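Note that %/% 7 uses fixed buckets (0-6, 7-13, ...), so consecutive days like 6 and 7 can still land in different groups. If you want the start-anchored windows described in the question (take the first day of a range, add 7, open a new window once a day falls outside it), here is a base R sketch; new_window and grp7 are hypothetical names:
# assign start-anchored 7-day windows within one key's sorted Days
new_window <- function(days) {
  grp <- integer(length(days))
  start <- days[1]
  g <- 1
  for (i in seq_along(days)) {
    if (days[i] > start + 6) {  # outside the current window: open a new one
      g <- g + 1
      start <- days[i]
    }
    grp[i] <- g
  }
  grp
}

df$grp7 <- ave(df$Days, df$Key, FUN = new_window)  # per-Key window index
split(df, list(df$Key, df$grp7), drop = TRUE)
For the sample data above this reproduces exactly the five groups shown in the question.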
My problem is simple. I have a table where each row is an event (month, day, hour, and minute are given). However, the machine was set to record 24/7, so I have more events (rows) than I need. How do I remove the surplus rows from the daytime and keep only the rows from the night (from sunset to sunrise)?
The dreadful thing is that the timing of sunrise/sunset is slightly different each day.
In this example I provide two tables. The first is the table with all events; the second contains the timings of sunset/sunrise for each day.
Also, since EACH night spans two dates, would it be possible to insert an additional column into the table containing an ID for each night? (see scheme below)
# table with all events
my.table <- data.frame(event = 1:34,
day = rep(c(30,31,1,2,3), times = c(8,9,7,8,2)),
month = rep(c(3,4), each = 17),
hour = c(13,13,13,13,22,
22,23,23,2,2,2,
14,14,14,19,22,22,
2,2,2,14,15,22,22,
3,3,3,14,14,14,
23,23,2,14),
minute = c(11,13,44,55,27,
32,54,57,10,14,
26,12,16,46,30,
12,13,14,16,45,
12,15,12,15,24,
26,28,12,16,23,12,13,11,11))
# timings of sunset/sunrise for each day
sun.table <- data.frame(day = c(30,31,31,1,1,2,2,3),
month = rep(c(3,4), times = c(3,5)),
hour = rep(c(19,6), times = 4),
minute = c(30,30,31,29,32,
28,33,27),
type = rep(c("sunset","sunrise"), times = 4))
# right solution: the reduced table would contain only rows:
# 5,6,7,8,9,10,11,16,17,18,19,20,23,24,25,26,27,31,32,33.
# nrow("reduced table") == 20
Here's one possible strategy:
# convert sunset/sunrise times to proper date-times
ss <- with(sun.table, ISOdate(2000, month, day, hour, minute))
sunset  <- ss[seq(1, length(ss), by = 2)]  # odd rows are sunsets
sunrise <- ss[seq(2, length(ss), by = 2)]  # even rows are sunrises
Here I assume the table is ordered, starts with a sunset, alternates back and forth, and ends with a sunrise. Date values also need a year; here I just hard-coded 2000. As long as your data doesn't span years (or leap days) that should be fine, but you'll probably want to pop in the actual year of your observations.
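If the alternating order isn't guaranteed in your real table, a small sketch that selects by the type column instead (same ss as above, one sunset/sunrise pair per night):
sunset  <- sort(ss[sun.table$type == "sunset"])
sunrise <- sort(ss[sun.table$type == "sunrise"])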
Now do the same for events
tt <- with(my.table, ISOdate(2000, month, day, hour, minute))
Find rows during the night (events falling between a sunset and the following sunrise)
nighttime <- sapply(tt, function(x) any(sunset < x & x < sunrise))
and extract those rows
my.table[nighttime, ]
# event day month hour minute
# 5 5 30 3 22 27
# 6 6 30 3 22 32
# 7 7 30 3 23 54
# 8 8 30 3 23 57
# 9 9 31 3 2 10
# 10 10 31 3 2 14
# 11 11 31 3 2 26
# 16 16 31 3 22 12
# 17 17 31 3 22 13
# 18 18 1 4 2 14
# 19 19 1 4 2 16
# 20 20 1 4 2 45
# 23 23 1 4 22 12
# 24 24 1 4 22 15
# 25 25 2 4 3 24
# 26 26 2 4 3 26
# 27 27 2 4 3 28
# 31 31 2 4 23 12
# 32 32 2 4 23 13
# 33 33 3 4 2 11
Here we only grab values that fall after a sunset and before the following sunrise. Since sun.table has no sunset entry after the final sunrise, there isn't enough information to test whether row 34 falls in a later night, so it is not returned.
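For the night-ID column the question asks about, here is a minimal sketch built on the objects above (night_id is a hypothetical column name; it assumes each kept event falls in exactly one sunset-sunrise pair, whose index then serves as the night's ID):
# which sunset/sunrise pair contains each kept event; the pair index is the night ID
night <- my.table[nighttime, ]
night$night_id <- sapply(which(nighttime),
                         function(i) which(sunset < tt[i] & tt[i] < sunrise))
night
Since each night spans two calendar dates, this labels the kept events with nights 1 through 4 regardless of the date change at midnight.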