An R-like approach to averaging by histogram bin

As a person transitioning from Matlab, I would appreciate any advice on a more efficient way to find the average of the DepDelay values whose indices (indxs) fall within histogram bins (edges). In Matlab and my current R script, I have these commands:
edges = seq(min(t), max(t), by = dt)
indxs = findInterval(t, edges, all.inside = TRUE)
listIndx = sort(unique(indxs))
n = length(edges)
avgDelay = rep(1, n) * 0
for (i in 1:n) {
  id = listIndx[i]
  jd = which(id == indxs)
  if (length(jd) > minFlights) {
    avgDelay[id] = mean(DepDelay[jd])
  }
}
I know that using for-loops in R is a potentially fraught issue, but I ask this question in the interests of improved code efficiency.
Sure. A few snippets of the relevant vectors:
DepDelay[1:20] = [1] -4 -4 -4 -9 -6 -7 -1 -7 -6 -7 -7 -5 -8 -3 51 -2 -1 -4 -7 -10
and associated indxs values:
indxs[1:20] = [1] 3 99 195 291 387 483 579 675 771 867 963 1059 1155 1251 1351 1443 1539 1635 1731 1827
minFlights = 3
Thank you.
BSL

There are many ways to do this in R, all involving variations on the "split-apply-combine" strategy (split the data into groups, apply a function to each group, combine the results by group back into a single data frame).
Here's one method using the dplyr package. I've created some fake data for illustration, since your data is not in an easily reproducible form:
library(dplyr)
# Create fake data
set.seed(20)
dat = data.frame(DepDelay = sample(-50:50, 1000, replace=TRUE))
# Bin the data
dat$bins = cut(dat$DepDelay, seq(-50,50,10), include.lowest=TRUE)
# Summarise by bin
dat %>% group_by(bins) %>%
  summarise(count = n(),
            meanByBin = mean(DepDelay, na.rm=TRUE))
bins count meanByBin
1 [-50,-40] 111 -45.036036
2 (-40,-30] 110 -34.354545
3 (-30,-20] 95 -24.242105
4 (-20,-10] 82 -14.731707
5 (-10,0] 92 -4.304348
6 (0,10] 109 5.477064
7 (10,20] 93 14.731183
8 (20,30] 93 25.182796
9 (30,40] 103 35.466019
10 (40,50] 112 45.696429
data.table is another great package for this kind of task:
library(data.table)
datDT = data.table(dat)
setkey(datDT, bins)
datDT[, list(count=length(DepDelay), meanByBin=mean(DepDelay, na.rm=TRUE)), by=bins]
And here are two ways to calculate the mean by bin in base R:
tapply(dat$DepDelay, dat$bins, mean)
aggregate(DepDelay ~ bins, FUN=mean, data=dat)
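Mapped back onto the variables from the question (t, dt, DepDelay and minFlights), the same split-apply-combine idea might look roughly like the base R sketch below; treat it as a sketch rather than a tested drop-in replacement for the loop:
# Sketch only: assumes t, dt, DepDelay and minFlights exist as in the question
edges <- seq(min(t), max(t), by = dt)
bins <- cut(t, edges, include.lowest = TRUE)
# Count and mean delay per time bin in one pass
byBin <- aggregate(DepDelay, by = list(bin = bins),
                   FUN = function(x) c(n = length(x), meanDelay = mean(x)))
byBin <- data.frame(bin = byBin$bin, byBin$x)
# Keep only bins with more than minFlights flights
byBin[byBin$n > minFlights, ]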

Related

How to know if a number is in a given interval in R

I have a dataset with 3 columns: Default, Height and Weight.
I binned the variables and stored the result (I have to do it this way) in a list. Each bin has an associated WoE (weight of evidence), and now I want to add those WoE values to the original data frame depending on which bucket each observation falls into:
For example, the data frame
df1 <- data.frame(default=sample(c(0,1), replace=TRUE, size=100, prob=c(0.9,0.1)),
                  height=sample(150:180, 100, replace=T),
                  weight=sample(50:80, 100, replace=T))
> head(df1)
# default height weight
# 1 0 172 54
# 2 0 169 71
# 3 0 164 61
# 4 0 156 55
# 5 0 180 66
# 6 0 162 63
The bins (I will just show the first one)
bins <- lapply(c("height","weight"), function(x) woe.binning(df1, "default", x,
                                                             min.perc.total=0.05,
                                                             min.perc.class=0.05, event.class=1,
                                                             stop.limit = 0.05)[2])
# [[1]]
# [[1]][[1]]
# woe cutpoints.final cutpoints.final[-1] iv.total.final 0 1 col.perc.a col.perc.b iv.bins
# (-Inf,156] -46.58742 -Inf 156 0.1050725 21 5 0.24137931 0.38461538 0.0667299967
# (156,168] 23.91074 156 168 0.1050725 34 4 0.39080460 0.30769231 0.0198727638
# (168,169] -10.91993 168 169 0.1050725 6 1 0.06896552 0.07692308 0.0008689599
# (169, Inf] 25.85255 169 Inf 0.1050725 26 3 0.29885057 0.23076923 0.0176007627
# Missing NA Inf Missing 0.1050725 0 0 0.00000000 0.00000000
Now I want to see which bins my data falls into.
My desired output is something similar to this
# default height weight woe_height woe_weight
# 1 0 160 54 23.91074 -8.180032
# 2 0 140 71 -46.58742 -7.640947
Is there any way to do it? The main problem I see here is that the intervals (a,b) are strings. I was thinking about using substr() or something similar to turn the strings into logical conditions, but I don't think that would work, and it's not very elegant.
Any help will be welcome, thanks in advance.
Does this work fine for you?
apply_woe_binning <- function(df, x){
  # woe binning
  w <- woe.binning(df, "default", x,
                   min.perc.total=0.05,
                   min.perc.class=0.05,
                   event.class=1,
                   stop.limit = 0.05)[[2]]
  # create new column name
  new_col <- paste("woe", x, sep = "_")
  # define cuts
  cuts <- cut(df[[x]], w$cutpoints.final)
  # add new column
  df[[new_col]] <- w[cuts, "woe", drop = TRUE]
  df
}
# one by one
df2 <- apply_woe_binning(df1, "height")
df2 <- apply_woe_binning(df2, "weight")
# in a functional
df2 <- Reduce(function(y, x) apply_woe_binning(df = y, x = x),
              c("height","weight"),
              init = df1)
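If you want to avoid matching on the interval labels entirely, a related idea is to look values up by bin index with findInterval(). The snippet below only illustrates that lookup: the cutpoints and WoE values are copied from the printed binning output above rather than extracted programmatically from the woe.binning object, and the heights are made up.
# Illustrative lookup by bin index; cutpoints/WoE values copied from the output above
cutpoints <- c(-Inf, 156, 168, 169, Inf)
woe_vals <- c(-46.58742, 23.91074, -10.91993, 25.85255)   # one WoE per height bin
heights <- c(150, 160, 168, 175)                           # hypothetical observations
# left.open = TRUE reproduces right-closed bins such as (156,168]
bin_idx <- findInterval(heights, cutpoints, left.open = TRUE)
woe_vals[bin_idx]
# [1] -46.58742  23.91074  23.91074  25.85255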

Normalise only some columns in R

I'm new to R and still getting to grips with how it handles data (my background is spreadsheets and databases). The problem I have is as follows. My data looks like this (it is held in a CSV file):
RecNo Var1 Var2 Var3
41 800 201.8 Y
43 140 39 N
47 60 20.24 N
49 687 77 Y
54 570 135 Y
58 1250 467 N
61 211 52 N
64 96 117.3 N
68 687 77 Y
Column 1 (RecNo) is my observation number; while it is a number, it is not required for my analysis. Column 4 (Var3) is a Yes/No column which, again, I do not currently need for the analysis but will need later in the process to add information in the output.
I need to normalise the numeric data in my dataframe to values between 0 and 1 without losing the other information. I have the following function:
normalize <- function(x) {
  x <- sweep(x, 2, apply(x, 2, min))
  sweep(x, 2, apply(x, 2, max), "/")
}
However, when I apply it to my above data by calling
myResult <- normalize(myData)
it returns an error because of the text in Column 4. If I set the text in this column to binary values it runs fine, but then also normalises my case numbers, which I don't want.
So, my question is: How can I change my normalize function above to accept the names of the columns to transform, while outputting the full dataset (i.e. without losing columns)?
I could not get TUSHAr's suggestion to work, but I have found two solutions that work fine:
1. akrun's suggestion above:
myData2 <- myData1 %>% mutate_at(2:3, funs((.-min(.))/max(.-min(.))))
This produces the following:
RecNo Var1 Var2 Var3
1 41 0.62184874 0.40601834 Y
2 43 0.06722689 0.04195255 N
3 47 0.00000000 0.00000000 N
4 49 0.52689076 0.12693105 Y
5 54 0.42857143 0.25663508 Y
6 58 1.00000000 1.00000000 N
7 61 0.12689076 0.07102414 N
8 64 0.03025210 0.21718329 N
9 68 0.52689076 0.12693105 Y
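(Side note, not part of the original answer: funs() is deprecated in newer dplyr versions, so the same min-max scaling would now typically be written with across(), roughly as follows.)
library(dplyr)
# Hedged modern-dplyr equivalent of the mutate_at/funs() call above
myData2 <- myData1 %>%
  mutate(across(c(Var1, Var2), ~ (.x - min(.x)) / (max(.x) - min(.x))))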
Alternatively, there is the BBmisc package, which allowed me to do the following after transforming my record numbers to factors:
> myData <- myData %>% mutate(RecNo = factor(RecNo))
> myNorm <- normalize(myData, method="range", range = c(0,1), margin = 1)
> myNorm
RecNo Var1 Var2 Var3
1 41 0.62184874 0.40601834 Y
2 43 0.06722689 0.04195255 N
3 47 0.00000000 0.00000000 N
4 49 0.52689076 0.12693105 Y
5 54 0.42857143 0.25663508 Y
6 58 1.00000000 1.00000000 N
7 61 0.12689076 0.07102414 N
8 64 0.03025210 0.21718329 N
9 68 0.52689076 0.12693105 Y
EDIT: For completeness I include TUSHAr's solution as well, showing as always that there are many ways to approach a single problem:
normalize <- function(x){
  minval = apply(x[,c(2,3)], 2, min)
  maxval = apply(x[,c(2,3)], 2, max)
  #print(minval)
  #print(maxval)
  y = sweep(x[,c(2,3)], 2, minval)
  #print(y)
  sweep(y, 2, (maxval - minval), "/")
}
df[,c(2,3)] = normalize(df)
Thank you for your help!
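For reference, here is a minimal sketch of a normalize() variant that takes the column names to transform and returns the full data frame; this is an added illustration (the function name and usage are hypothetical), not one of the posted answers.
# Sketch: min-max scale only the named columns, leaving the rest of the data frame intact
normalize_cols <- function(df, cols) {
  df[cols] <- lapply(df[cols], function(x) (x - min(x)) / (max(x) - min(x)))
  df
}
# e.g. myResult <- normalize_cols(myData, c("Var1", "Var2"))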

R - Sum range over look-back period, divided by sum over look-back - Excel to R

I am looking to work out a percentage total over a look-back range in R.
I know how to do this in excel with the following formula:
=SUM(B2:B4)/SUM(B2:B4,C2:C4)
This sums column B over a range from today looking back 3 rows. It then divides this sum by the total sum of columns B + C, again looking back 3 rows.
I am looking to achieve the same calculation in R to run across my matrix.
The output would look something like this:
adv dec perct
1 69 376
2 113 293
3 270 150 0.355625492
4 74 371 0.359559402
5 308 96 0.513790386
6 236 173 0.491255962
7 252 134 0.663886572
8 287 129 0.639966969
9 219 187 0.627483444
This is a line of code to which I could perhaps add the look-back range:
perct <- apply(data.matrix[,c('adv','dec')], 1, function(x) { x[1] / (x[1] + x[2]) } )
If I could get x[1] to sum over the previous 3 rows, and x[2] to also sum over the previous 3 rows, that would give me what I need.
I am still learning how to apply forward and look-back periods within R, so any additional explanation in the answer would be appreciated!
Here are some approaches. The first 3 use rollsumr and/or rollapplyr from zoo and the last one uses only base R.
1) rollsumr Create a matrix with rollsumr whose columns contain the rolling sums, convert that to row proportions and take the "adv" column. Finally assign that to a new column frac in DF. This approach has the shortest code.
library(zoo)
DF$frac <- prop.table(rollsumr(DF, 3, fill = NA), 1)[, "adv"]
giving:
> DF
adv dec frac
1 69 376 NA
2 113 293 NA
3 270 150 0.3556255
4 74 371 0.3595594
5 308 96 0.5137904
6 236 173 0.4912560
7 252 134 0.6638866
8 287 129 0.6399670
9 219 187 0.6274834
1a) This variation is similar except instead of using prop.table we write out the ratio. The code is longer but you may find it clearer.
m <- rollsumr(DF, 3, fill = NA)
DF$frac <- with(as.data.frame(m), adv / (adv + dec))
1b) This is a variation of (1) that is the same except it uses a magrittr pipeline:
library(magrittr)
DF %>% rollsumr(3, fill = NA) %>% prop.table(1) %>% `[`(TRUE, "adv") -> DF$frac
2) rollapplyr We could use rollapplyr with by.column = FALSE like this. The result is the same.
ratio <- function(x) sum(x[, "adv"]) / sum(x)
DF$frac <- rollapplyr(DF, 3, ratio, by.column = FALSE, fill = NA)
3) Yet another variation is to compute the numerator and denominator separately:
DF$frac <- rollsumr(DF$adv, 3, fill = NA) /
rollapplyr(DF, 3, sum, by.column = FALSE, fill = NA)
4) base This uses embed followed by rowSums on each column to get the rolling sums and then uses prop.table as in (1).
DF$frac <- prop.table(sapply(lapply(rbind(NA, NA, DF), embed, 3), rowSums), 1)[, "adv"]
Note: The input used in reproducible form is:
Lines <- "adv dec
1 69 376
2 113 293
3 270 150
4 74 371
5 308 96
6 236 173
7 252 134
8 287 129
9 219 187"
DF <- read.table(text = Lines, header = TRUE)
Consider an sapply that loops through the number of rows in order to index two rows back:
DF$pred <- sapply(seq(nrow(DF)), function(i)
  ifelse(i >= 3, sum(DF$adv[(i-2):i]) / (sum(DF$adv[(i-2):i]) + sum(DF$dec[(i-2):i])), NA))
DF
# adv dec pred
# 1 69 376 NA
# 2 113 293 NA
# 3 270 150 0.3556255
# 4 74 371 0.3595594
# 5 308 96 0.5137904
# 6 236 173 0.4912560
# 7 252 134 0.6638866
# 8 287 129 0.6399670
# 9 219 187 0.6274834
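If you would rather avoid both zoo and an explicit loop, a width-3 rolling sum can also be built from cumsum. This is an added sketch, not part of either answer above, and assumes DF as defined in the reproducible input:
# Sketch: rolling sums of width 3 via differences of cumulative sums
roll3 <- function(x) {
  cs <- cumsum(x)
  c(NA, NA, cs[-(1:2)] - c(0, head(cs, -3)))
}
adv3 <- roll3(DF$adv)
dec3 <- roll3(DF$dec)
DF$frac <- adv3 / (adv3 + dec3)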

Using ddply across numerous variables when calculating descriptive statistics

Here's my data. It shows the amount of fish I found at three different sites.
Selidor.Bay Enlades.Bay Cumphrey.Bay
1 39 29 187
2 70 370 50
3 13 44 52
4 0 65 20
5 43 110 220
6 0 30 266
What I would like to do is create a script to calculate basic statistics for each site.
If I re-arrange the data by stacking it, i.e.:
values site
1 29 Selidor.Bay
2 370 Selidor.Bay
3 44 Selidor.Bay
4 65 Enlades.Bay
I'm able to use the following:
data <- ddply(df, c("site"), summarise,
              N = length(values),
              mean = mean(values),
              sd = sd(values),
              se = sd / sqrt(N),
              sum = sum(values)
)
data
My question is how can I use the script without having to stack my dataframe?
Thanks.
A slight variation on @docendodiscimus' comment:
library(reshape2)
library(dplyr)
DF %>%
  melt(variable.name="site") %>%
  group_by(site) %>%
  summarise_each(funs( n(), mean, sd, se=sd(.)/sqrt(n()), sum ), value)
# site n mean sd se sum
# 1 Selidor.Bay 6 27.5 27.93385 11.40395 165
# 2 Enlades.Bay 6 108.0 131.84688 53.82626 648
# 3 Cumphrey.Bay 6 132.5 104.29909 42.57992 795
melt does what the OP referred to as "stacking" the data.frame. There is likely some analogous function in the tidyr package.
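For example, the tidyr analogue might look something like this with pivot_longer(); this is a sketch assuming tidyr >= 1.0 and the wide data frame DF used above:
library(tidyr)
library(dplyr)
DF %>%
  pivot_longer(everything(), names_to = "site", values_to = "values") %>%
  group_by(site) %>%
  summarise(N = n(),
            mean = mean(values),
            sd = sd(values),
            se = sd / sqrt(N),
            sum = sum(values))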

R - setting equiprobability over a specific variable when sampling

I have a data set with more than 2 million entries which I load into a data frame.
I'm trying to grab a subset of the data. I need around 10000 entries but I need the entries to be picked with equal probability on one variable.
This is what my data looks like with str(data):
'data.frame': 2685628 obs. of 3 variables:
$ category : num 3289 3289 3289 3289 3289 ...
$ id: num 8064180 8990447 747922 9725245 9833082 ...
$ text : chr "text1" "text2" "text3" "text4" ...
As you can see, I have 3 variables: category, id and text.
I have tried the following :
> sample_data <- data[sample(nrow(data),10000,replace=FALSE),]
Of course this works, but the sampling probability is not equal across categories. Here is the output of count(sample_data$category):
x freq
1 3289 707
2 3401 341
3 3482 160
4 3502 243
5 3601 1513
6 3783 716
7 4029 423
8 4166 21
9 4178 894
10 4785 31
11 5108 121
12 5245 2178
13 5637 387
14 5946 1484
15 5977 117
16 6139 664
Update: Here is the output of count(data$category) :
x freq
1 3289 198142
2 3401 97864
3 3482 38172
4 3502 59386
5 3601 391800
6 3783 201409
7 4029 111075
8 4166 6749
9 4178 239978
10 4785 6473
11 5108 32083
12 5245 590060
13 5637 98785
14 5946 401625
15 5977 28769
16 6139 183258
But when I try setting the probability I get the following error :
> catCount <- length(unique(data$category))
> probabilities <- rep(c(1/catCount),catCount)
> train_set <- data[sample(nrow(data),10000,prob=probabilities),]
Error in sample.int(x, size, replace, prob) :
incorrect number of probabilities
I understand that the sample function is randomly picking among row numbers, but I can't figure out how to tie that to a probability defined over the categories.
Question : How can I sample my data over an equal probability for the category variable?
Thanks in advance.
I guess you could do this with some simple base R operations, though you should remember that you are using probabilities within sample, so getting the exact count per category won't work with this method; you can, however, get close enough for a large enough sample.
Here's an example data
set.seed(123)
data <- data.frame(category = sample(rep(letters[1:10], seq(1000, 10000, by = 1000)), 55000))
Then
probs <- 1/prop.table(table(data$category)) # Calculating relative probabilities
data$probs <- probs[match(data$category, names(probs))] # Matching them to the correct rows
set.seed(123)
train_set <- data[sample(nrow(data), 1000, prob = data$probs), ] # Sampling
table(train_set$category) # Checking frequencies
# a b c d e f g h i j
# 94 103 96 107 105 99 100 96 107 93
Edit: So here's a possible data.table equivalent
library(data.table)
setDT(data)[, probs := .N, category][, probs := .N/probs]
train_set <- data[sample(.N, 1000, prob = probs)]
Edit #2: Here's a very nice solution using the dplyr package, contributed by @Khashaa and @docendodiscimus.
The nice thing about this solution is that it returns the exact sample size within each group:
library(dplyr)
train_set <- data %>%
  group_by(category) %>%
  sample_n(1000)
Edit #3:
It seems that the data.table equivalent of dplyr::sample_n would be
library(data.table)
train_set <- setDT(data)[data[, sample(.I, 1000), category]$V1]
This will also return the exact sample size within each group.
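For completeness, an exact-size stratified sample can also be sketched in base R without extra packages; this is an added illustration using the same example data as above (each category in that data has at least 1000 rows):
# Sketch: take exactly 1000 rows per category with split() + sample()
set.seed(123)
idx <- unlist(lapply(split(seq_len(nrow(data)), data$category),
                     function(rows) sample(rows, 1000)))
train_set <- data[idx, ]
table(train_set$category)   # each category should show exactly 1000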
