Count of values in intervals of latitude and years - r

I have several data frames, each with a column containing the latitudes of some records (latitude) and another column with the date of each record (datecollected).
I would like to count the records falling in each combined interval of latitude (5 degrees) and year (two years), and export the counts to a new data frame.

(Hint: you'll make it easier for us to answer by providing some sample data.)
dataset <- data.frame(datecollected=
sample(as.Date("2000-01-01")+(0:3650),1000,replace=TRUE),
latitude=90*runif(1000))
We round datecollected down to the nearest even year:
year.index <- (as.POSIXlt(dataset$datecollected)$year %/% 2)*2+1900
Similarly, we round the latitude down to the nearest multiple of 5 degrees:
latitude.index <- (floor(dataset$latitude) %/% 5)*5
Then we simply build a table on the rounded years and latitudes:
table(year.index,latitude.index)
latitude.index
year.index 0 5 10 15 20 25 30 35 40 45 50 55 60 65 70 75 80 85
2000 12 9 15 7 11 10 11 14 9 13 11 10 8 11 13 25 10 18
2002 11 9 11 16 11 15 12 5 12 13 7 15 8 7 11 7 10 13
2004 8 12 9 10 12 16 12 13 9 7 16 11 6 13 4 15 12 10
2006 14 8 13 10 12 9 12 9 6 11 11 9 13 9 10 5 5 12
2008 8 12 17 12 12 8 12 8 14 12 11 11 10 10 14 16 17 13
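Since the question asks to export the counts to a new data frame rather than a table, a small addition converts the table to long format with one row per year/latitude combination (a sketch; the column name n is my own choice):
counts <- as.data.frame(table(year.index, latitude.index),
                        responseName = "n")
head(counts)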
EDIT: after a bit of discussion in the comments, I'll post my current script. It seems like there may be an issue when you read the data into R. This is what I do and what I get:
rm(list=ls())
dataset <- read.csv("GADUS.csv",header=TRUE,sep=",")
year.index <- (as.POSIXlt(as.character(dataset$datecollected),
                          format="%Y-%m-%d")$year %/% 2)*2+1900
latitude.index <- (floor(dataset$latitude) %/% 5)*5
table(year.index,latitude.index)
latitude.index
year.index 0 5 20 35 40 45 50 55 60 65 70 75
1752 0 0 0 0 0 20 0 0 0 0 0 0
1754 0 0 0 0 0 27 0 3 0 0 0 0
1756 0 0 0 0 0 21 0 1 0 0 0 0
1758 0 0 0 0 0 46 0 2 0 0 0 0
...
Does this give the same result for you? If not, please edit your question and post the result of str(dataset[,c("datecollected","latitude")]).
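As an aside, a quick sanity check on the date parsing (a sketch, reusing the column names from the question) can show whether implausible years like 1752 come from the raw data or from rows that failed to parse:
dd <- as.Date(as.character(dataset$datecollected), format = "%Y-%m-%d")
sum(is.na(dd))           # rows that did not match the expected format
range(dd, na.rm = TRUE)  # implausible extremes point at a data problem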

Related

Finding the k-largest clusters in dbscan result

I have a dataframe df consisting of 2 columns: x and y coordinates.
Each row refers to a point.
I feed it into the dbscan function to obtain the clusters of the points in df.
library("fpc")
db = fpc::dbscan(df, eps = 0.08, MinPts = 4)
plot(db, df, main = "DBSCAN", frame = FALSE)
By using print(db), I can see the result returned by dbscan.
> print(db)
dbscan Pts=13131 MinPts=4 eps=0.08
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21
border 401 38 55 5 2 3 0 0 0 8 0 6 1 3 1 3 3 2 1 2 4 3
seed 0 2634 8186 35 24 561 99 7 22 26 5 75 17 9 9 54 1 2 74 21 3 15
total 401 2672 8241 40 26 564 99 7 22 34 5 81 18 12 10 57 4 4 75 23 7 18
22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44
border 4 1 2 6 2 1 3 7 2 1 2 3 11 1 3 1 3 2 5 5 1 4 3
seed 14 9 4 48 2 4 38 111 5 11 5 14 111 6 1 5 1 8 3 15 10 15 6
total 18 10 6 54 4 5 41 118 7 12 7 17 122 7 4 6 4 10 8 20 11 19 9
45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68
border 2 4 2 1 3 2 1 1 3 1 0 2 2 3 0 3 3 3 3 0 0 2 3 1
seed 15 2 9 11 4 8 12 4 6 8 7 7 3 3 4 3 3 4 2 9 4 2 1 4
total 17 6 11 12 7 10 13 5 9 9 7 9 5 6 4 6 6 7 5 9 4 4 4 5
69 70 71
border 3 3 3
seed 1 1 1
total 4 4 4
From the above summary, I can see that cluster 2 consists of 8186 seed points (core points), cluster 1 consists of 2634 seed points and cluster 5 consists of 561.
I define the largest cluster as the one containing the largest number of seed points. So, in this case, the largest cluster is cluster 2, and the 1st, 2nd and 3rd largest clusters are 2, 1 and 5.
Is there any direct way to return the rows (points) in the largest cluster, or in the k-largest cluster in general?
I can do it in an indirect way:
1. I can obtain the assigned cluster number of each point from db$cluster.
2. Hence, I can create a new dataframe df2 with db$cluster as an additional column besides the original x and y columns.
3. Then, I can aggregate df2 by the cluster numbers in the third column and find the number of points in each cluster.
4. After that, I can find the k-largest groups, which are 2, 1 and 5 again.
5. Finally, I can select the rows in df2 whose third column equals 2 to return the points in the largest cluster.
But the above approach re-computes many known results as stated in the summary of print(db).
The dbscan function doesn't appear to retain the data.
library(fpc)
set.seed(665544)
n <- 600
df <- data.frame(x=runif(10, 0, 10)+rnorm(n, sd=0.2), y=runif(10, 0, 10)+rnorm(n,sd=0.2))
(dbs <- dbscan(df, 0.2))
#dbscan Pts=600 MinPts=5 eps=0.2
# 0 1 2 3 4 5 6 7 8 9 10 11
#border 28 4 4 8 5 3 3 4 3 4 6 4
#seed 0 50 53 51 52 51 54 54 54 53 51 1
#total 28 54 57 59 57 54 57 58 57 57 57 5
attributes(dbs)
#$names
#[1] "cluster" "eps" "MinPts" "isseed"
#$class
#[1] "dbscan"
Your indirect steps are not that indirect (only two lines needed), and these commands won't recalculate the clusters. So just run those commands, or put them in a function and then call the function in one command.
cluster_k <- function(dbs, data, k){
  # k-th most frequent cluster label (by total points, noise label 0 included)
  kth <- names(rev(sort(table(dbs$cluster)))[k])
  data[dbs$cluster == kth, ]
}
cluster_k(dbs=dbs, data=df, k=1)
## x y
## 3 6.580695 8.715245
## 13 6.704379 8.528486
## 23 6.809558 8.160721
## 33 6.375842 8.756433
## 43 6.603195 8.640206
## 53 6.728533 8.425067
## a data frame with 59 rows
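One caveat worth noting: in fpc::dbscan the label 0 denotes noise points, so on very noisy data the function above could return the noise "cluster". A sketch of a variant that excludes it first (same assumptions as above):
cluster_k_no_noise <- function(dbs, data, k){
  sizes <- table(dbs$cluster[dbs$cluster != 0])  # drop the noise label 0
  kth <- names(sort(sizes, decreasing = TRUE))[k]
  data[dbs$cluster == kth, ]
}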

Trying to integrate over discrete points from a data frame

I have several months of weather data; an example day is here:
Hour Avg.Temp
1 1 11
2 2 11
3 3 11
4 4 10
5 5 10
6 6 11
7 7 12
8 8 14
9 9 15
10 10 17
11 11 19
12 12 21
13 13 22
14 14 24
15 15 23
16 16 22
17 17 21
18 18 18
19 19 16
20 20 15
21 21 14
22 22 12
23 23 11
24 24 10
I need to figure out the total number of hours above 15 degrees by integrating in R. I'm analyzing degree days, a concept in agriculture that gives valuable information about relative growth rate. For example, hour 10 is 2 degree-hours and hour 11 is 4 degree-hours above 15 degrees. This can help predict when to harvest fruit. How can I write the code for this?
Another column could potentially work with a simple subtraction. Then I would have to take a cumulative sum after cancelling out all the negative numbers. That is the approach I'm setting out to take right now. Is there an integral I could write to get the answer in one step?
This solution subtracts your threshold (i.e., 15°), fits a function to the result, then integrates this function. Note that if the temperature is below the threshold, it contributes zero to the total rather than a negative value.
df <- read.table(text = "Hour Avg.Temp
1 1 11
2 2 11
3 3 11
4 4 10
5 5 10
6 6 11
7 7 12
8 8 14
9 9 15
10 10 17
11 11 19
12 12 21
13 13 22
14 14 24
15 15 23
16 16 22
17 17 21
18 18 18
19 19 16
20 20 15
21 21 14
22 22 12
23 23 11
24 24 10", header = TRUE)
with(df, integrate(approxfun(Hour, pmax(Avg.Temp - 15, 0)),
                   lower = min(Hour), upper = max(Hour)))
#> 53.00017 with absolute error < 0.0039
Created on 2019-02-08 by the reprex package (v0.2.1.9000)
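If only the number itself is needed (say, to store in a variable), note that integrate() returns a list, so the estimate can be pulled out of its value component:
deg_hrs <- with(df, integrate(approxfun(Hour, pmax(Avg.Temp - 15, 0)),
                              lower = min(Hour), upper = max(Hour)))$value
deg_hrs
#> [1] 53.00017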
The OP has requested to figure out the total number of hours above 15 degrees by integrating in R.
It is not fully clear to me what the expected result is: does the OP want to count the number of hours above 15 degrees, or to sum up the degrees above 15 ("integrate")?
However, the code below computes both figures. Provided the data is sampled at each hour without gaps (as suggested by the OP's sample dataset), cumsum() and sum() can be used, respectively:
library(data.table)
setDT(DT)[, c("deg_hrs_sum", "deg_hrs_cnt") :=
            .(cumsum(pmax(0, Avg.Temp - 15)), cumsum(Avg.Temp > 15))]
Hour Avg.Temp deg_hrs_sum deg_hrs_cnt
1: 1 11 0 0
2: 2 11 0 0
3: 3 11 0 0
4: 4 10 0 0
5: 5 10 0 0
6: 6 11 0 0
7: 7 12 0 0
8: 8 14 0 0
9: 9 15 0 0
10: 10 17 2 1
11: 11 19 6 2
12: 12 21 12 3
13: 13 22 19 4
14: 14 24 28 5
15: 15 23 36 6
16: 16 22 43 7
17: 17 21 49 8
18: 18 18 52 9
19: 19 16 53 10
20: 20 15 53 10
21: 21 14 53 10
22: 22 12 53 10
23: 23 11 53 10
24: 24 10 53 10
Hour Avg.Temp deg_hrs_sum deg_hrs_cnt
Alternatively,
setDT(DT)[, .(deg_hrs_sum = sum(pmax(0, Avg.Temp - 15)),
deg_hrs_cnt = sum(Avg.Temp > 15))]
returns only the final result (last row):
deg_hrs_sum deg_hrs_cnt
1: 53 10
Data
library(data.table)
DT <- fread("
rn Hour Avg.Temp
1 1 11
2 2 11
3 3 11
4 4 10
5 5 10
6 6 11
7 7 12
8 8 14
9 9 15
10 10 17
11 11 19
12 12 21
13 13 22
14 14 24
15 15 23
16 16 22
17 17 21
18 18 18
19 19 16
20 20 15
21 21 14
22 22 12
23 23 11
24 24 10", drop = 1L)

Loop linear regression model

I have data like this, where Amount is the dependent variable and len, Age, quantity and Pos are explanatory variables. I am trying to run a regression of Amount on Age, quantity and Pos using stepwise selection.
ID Sym Month Amount len Age quantity Pos
11 10 1 500 5 17 0 12
22 10 1 300 6 11 0 11
33 10 1 200 2 10 0 10
44 10 1 100 2 11 0 11
55 10 1 150 4 15 0 12
66 10 1 250 4 16 0 14
11 20 1 500 5 17 0 12
22 20 1 300 6 11 0 11
33 20 1 200 2 10 0 10
44 20 1 100 2 11 0 11
55 20 1 150 4 15 0 12
66 20 1 250 4 16 0 14
77 20 1 700 4 17 0 11
88 20 1 100 2 16 0 12
11 30 1 500 5 17 0 12
22 30 1 300 6 11 0 11
33 30 1 200 2 10 0 10
44 30 1 100 2 11 0 11
55 30 1 150 4 15 0 12
66 30 1 250 4 16 0 14
11 40 1 500 5 17 2000 12
22 40 1 300 6 11 1000 11
33 40 1 200 2 10 1000 10
44 40 1 100 2 11 1000 11
55 40 1 150 4 15 1000 12
66 40 1 250 4 16 1000 14
And the output I want after running all the regressions is a data frame that looks like this (it should help me detect outliers):
Id Month Sym Amount len Age Quantity Pos R^2 CookDistance Residuals UpperLimit LowerLimit
11 1 10 500 5 17 null 12 0.7 1.5 -350 -500 1000
22 1 10 300 6 11 null 11 0.8 1.7 -400 -500 1000
This is the code I am trying to run for Sym = 10, Sym = 20, Sym = 30, Sym = 40.
I have something like 400 Sym values to run a regression analysis on.
fit[i] <- step(lm(Sym[i]$Sum ~ len + Age + Quantity,
                  na.action=na.omit), direction="backward")
R_Sq <- summary(fit[i])$r.squared
Res[i] <- resid(fit[i])
D[i] <- cooks.distance(fit[i])
Q[i] <- quantile(resid(fit[i]), c(.25, .50, .75, .99))
L[i] <- Q[1][i] - 2.2*(Q[3][i]-Q[1][i])
U[i] <- Q[3][i] + 2.2*(Q[3][i]-Q[1][i])
"i" means the results for the regression of sym = i (10,20..).
Any way to do this on loop for every Sym value?
Any help will be highly appreciate.
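One possible way to loop over the Sym values is sketched below (untested, and based on assumptions: the data sits in a data frame called dataset, Amount is the response as in the desired output, and no rows are dropped for missing values, since otherwise the per-row diagnostics would not align with the original rows):
# Split by Sym, fit a backward stepwise model per group, and bind the
# per-row diagnostics next to the original columns.
res <- lapply(split(dataset, dataset$Sym), function(d) {
  fit <- step(lm(Amount ~ len + Age + quantity + Pos, data = d),
              direction = "backward", trace = 0)
  q <- quantile(resid(fit), c(.25, .75))
  data.frame(d,
             R2           = summary(fit)$r.squared,
             CookDistance = cooks.distance(fit),
             Residuals    = resid(fit),
             LowerLimit   = q[1] - 2.2 * (q[2] - q[1]),
             UpperLimit   = q[2] + 2.2 * (q[2] - q[1]))
})
out <- do.call(rbind, res)
Note that in the sample data quantity is constant within several Sym groups; lm() then returns an NA coefficient for it, and such groups may need the constant predictor removed before calling step().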

R: calculate and superimpose on a ggplot graph minute-based totals for an event series

Consider a data frame df with an extract from a web server access log, with two fields (sample below, duration is in msec and to simplify the example, let's ignore the date).
time,duration
18:17:26.552,8
18:17:26.632,10
18:17:26.681,12
18:17:26.733,4
18:17:26.778,5
18:17:26.832,5
18:17:26.889,4
18:17:26.931,3
18:17:26.991,3
18:17:27.040,5
18:17:27.157,4
18:17:27.209,14
18:17:27.249,4
18:17:27.303,4
18:17:27.356,13
18:17:27.408,13
18:17:27.450,3
18:17:27.506,13
18:17:27.546,3
18:17:27.616,4
18:17:27.664,4
18:17:27.718,3
18:17:27.796,10
18:17:27.856,3
18:17:27.909,3
18:17:27.974,3
18:17:28.029,3
qplot(time, duration, data=df) gives me a graph of the duration. I'd like to superimpose a line showing the number of requests per minute. Ideally, this line would have a single data point per minute, at the :30 sec mark. If that's too complicated, an acceptable alternative is a step line, with the same value (the count of requests) throughout each minute.
One way is to trunc(df$time, units=c("mins")), then calculate the count of requests per minute into a new column, then graph it.
I'm asking if there is, perhaps, a more direct way to accomplish the above. Thanks.
The following may be helpful. Create a data frame with the steps and plot it:
time duration sec sec2 diffsec2 step30s steps
1 18:17:26.552 8 26.552 552 0 0 0
2 18:17:26.632 10 26.632 632 80 1 1
3 18:17:26.681 12 26.681 681 49 0 0
4 18:17:26.733 4 26.733 733 52 1 1
5 18:17:26.778 5 26.778 778 45 0 0
6 18:17:26.832 5 26.832 832 54 1 1
7 18:17:26.889 4 26.889 889 57 1 2
8 18:17:26.931 3 26.931 931 42 0 0
9 18:17:26.991 3 26.991 991 60 1 1
10 18:17:27.040 5 27.040 040 -951 0 0
11 18:17:27.157 4 27.157 157 117 1 1
12 18:17:27.209 14 27.209 209 52 1 2
13 18:17:27.249 4 27.249 249 40 0 0
14 18:17:27.303 4 27.303 303 54 1 1
15 18:17:27.356 13 27.356 356 53 1 2
16 18:17:27.408 13 27.408 408 52 1 3
17 18:17:27.450 3 27.450 450 42 0 0
18 18:17:27.506 13 27.506 506 56 1 1
19 18:17:27.546 3 27.546 546 40 0 0
20 18:17:27.616 4 27.616 616 70 1 1
21 18:17:27.664 4 27.664 664 48 0 0
22 18:17:27.718 3 27.718 718 54 1 1
23 18:17:27.796 10 27.796 796 78 1 2
24 18:17:27.856 3 27.856 856 60 1 3
25 18:17:27.909 3 27.909 909 53 1 4
26 18:17:27.974 3 27.974 974 65 1 5
27 18:17:28.029 3 28.029 029 -945 0 0
> ggplot(ddf)+geom_point(aes(x=time, y=duration))+geom_line(aes(x=time, y=steps, group=1),color='red')
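For reference, a minimal sketch of the truncate-and-count approach mentioned in the question (the column names minute and requests are my own; it assumes time is parsed into POSIXct first):
library(ggplot2)
df$time   <- as.POSIXct(df$time, format = "%H:%M:%OS")
df$minute <- as.POSIXct(trunc(df$time, units = "mins")) + 30  # the :30 sec midpoint
counts <- aggregate(list(requests = df$duration),
                    list(minute = df$minute), length)
ggplot(df, aes(time, duration)) +
  geom_point() +
  geom_line(data = counts, aes(minute, requests), colour = "red")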

cumulative variable construction in longitudinal data set

The problem:
I would like to construct a variable that measures cumulative work experience within a person-year longitudinal data set. The problem applies to all sorts of longitudinal data sets and many variables might be constructed in this cumulative way (e.g., number of children, cumulative education, cumulative dollars spend on vacations, etc.)
The case:
I have a large longitudinal data set in which every row constitutes a person year. The data set contains thousands of persons (variable “ID”) followed through their lives (variable “age”), resulting in a data frame with about 1.2 million rows. One variable indicates how many months a person has worked in each person year (variable “work”). For example, when Dan was 15 years old he worked 3 months.
ID age work
1 Dan 10 0
2 Dan 11 0
3 Dan 12 0
4 Dan 13 0
5 Dan 14 0
6 Dan 15 3
7 Dan 16 5
8 Dan 17 8
9 Dan 18 5
10 Dan 19 12
11 Jeff 20 0
12 Jeff 16 0
13 Jeff 17 0
14 Jeff 18 0
15 Jeff 19 0
16 Jeff 20 0
17 Jeff 21 8
18 Jeff 22 10
19 Jeff 23 12
20 Jeff 24 12
21 Jeff 25 12
22 Jeff 26 12
23 Jeff 27 12
24 Jeff 28 12
25 Jeff 29 12
I now want to construct a cumulative work experience variable, which adds the value of year x to year x+1. The goal is to know, at each age of a person, how many months they have worked over their entire career. The variable should look like "cumwork" below.
ID age work cumwork
1 Dan 10 0 0
2 Dan 11 0 0
3 Dan 12 0 0
4 Dan 13 0 0
5 Dan 14 0 0
6 Dan 15 3 3
7 Dan 16 5 8
8 Dan 17 8 16
9 Dan 18 5 21
10 Dan 19 12 33
11 Jeff 20 0 0
12 Jeff 16 0 0
13 Jeff 17 0 0
14 Jeff 18 0 0
15 Jeff 19 0 0
16 Jeff 20 0 0
17 Jeff 21 8 8
18 Jeff 22 10 18
19 Jeff 23 12 30
20 Jeff 24 12 42
21 Jeff 25 12 54
22 Jeff 26 12 66
23 Jeff 27 12 78
24 Jeff 28 12 90
25 Jeff 29 12 102
A poor solution: I can construct such a cumulative variable using the following simple loop:
# Generate test data set
x=data.frame(ID=c(rep("Dan",times=10),rep("Jeff",times=15)),
             age=c(10:20,16:29),
             work=c(rep(0,times=5),3,5,8,5,12,rep(0,times=6),8,10,rep(12,times=7)),
             stringsAsFactors=F)
# Generate cumulative work experience variable
x$cumwork=x$work
for(r in 2:nrow(x)){
  if(x$ID[r]==x$ID[r-1]){
    x$cumwork[r]=x$cumwork[r-1]+x$cumwork[r]
  }
}
However, my dataset has 1.2 million rows, so looping through each row is highly inefficient: running this loop would take hours. Does any brilliant programmer have a suggestion for how to construct this cumulative measure most efficiently?
Many thanks in advance!
Best,
Raphael
ave is convenient for these types of tasks. The function you want to use with it is cumsum:
x$cumwork <- ave(x$work, x$ID, FUN = cumsum)
x
# ID age work cumwork
# 1 Dan 10 0 0
# 2 Dan 11 0 0
# 3 Dan 12 0 0
# 4 Dan 13 0 0
# 5 Dan 14 0 0
# 6 Dan 15 3 3
# 7 Dan 16 5 8
# 8 Dan 17 8 16
# 9 Dan 18 5 21
# 10 Dan 19 12 33
# 11 Jeff 20 0 0
# 12 Jeff 16 0 0
# 13 Jeff 17 0 0
# 14 Jeff 18 0 0
# 15 Jeff 19 0 0
# 16 Jeff 20 0 0
# 17 Jeff 21 8 8
# 18 Jeff 22 10 18
# 19 Jeff 23 12 30
# 20 Jeff 24 12 42
# 21 Jeff 25 12 54
# 22 Jeff 26 12 66
# 23 Jeff 27 12 78
# 24 Jeff 28 12 90
# 25 Jeff 29 12 102
However, given the scale of your data, I would also strongly suggest the "data.table" package, which gives you access to convenient syntax:
library(data.table)
DT <- data.table(x)
DT[, cumwork := cumsum(work), by = ID]
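For completeness, a dplyr equivalent (a sketch, assuming the same x as above) reads much the same way:
library(dplyr)
x <- x %>%
  group_by(ID) %>%
  mutate(cumwork = cumsum(work)) %>%
  ungroup()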
