I would like to subset rows of my data
library(data.table); set.seed(333); n <- 100
dat <- data.table(id=1:n, group=rep(1:2,each=n/2), x=runif(n,100,120), y=runif(n,200,220), z=runif(n,300,320))
> head(dat)
id group x y z
1: 1 1 109.3400 208.6732 308.7595
2: 2 1 101.6920 201.0989 310.1080
3: 3 1 119.4697 217.8550 313.9384
4: 4 1 111.4261 205.2945 317.3651
5: 5 1 100.4024 212.2826 305.1375
6: 6 1 114.4711 203.6988 319.4913
in several stages within each group. I need to automate this and it might happen that the subset is empty. For example, focusing only on group 1,
dat1 <- dat[1:50]
> s <- subset(dat1, x>119)
> s
id group x y z
1: 3 1 119.4697 217.8550 313.9384
2: 50 1 119.2519 214.2517 318.8567
the second step subset(s, y>219) would come up empty, but I would still want to apply the third step subset(s, z>315). If I were to set the thresholds manually, Frank has provided an excellent solution here that outputs
> f(dat1, x>119, y>219, z>315)
cond skip
1: x > 119 FALSE
2: y > 219 TRUE
3: z > 315 FALSE
id group x y z
1: 50 1 119.2519 214.2517 318.8567
and reports which parts were skipped.
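(Frank's actual code is in the linked answer; purely to make the behaviour concrete, a minimal hypothetical sketch of such a "skip empty steps" helper might look like this.)
library(data.table)
# hypothetical sketch, not Frank's code: apply each condition in turn and
# skip any condition that would leave zero rows
f <- function(DT, ...) {
  conds <- as.list(substitute(list(...)))[-1L]  # capture unevaluated conditions
  skip  <- logical(length(conds))
  for (i in seq_along(conds)) {
    keep <- eval(conds[[i]], DT, parent.frame())
    if (any(keep)) DT <- DT[keep] else skip[i] <- TRUE
  }
  print(data.table(cond = sapply(conds, deparse), skip = skip))
  DT[]
}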
My problem is that I need to apply this to different groups simultaneously, where the thresholds for each group are given in a separate data.table. The goal is to have at least one id per group. For example, if my thresholds were
c <- data.table(group=1:2, x=c(119,119), y=c(219,219), z=c(315,319))
> c
group x y z
1: 1 119 219 315
2: 2 119 219 319
I would like to end up with
> res
id group x y z
1: 50 1 119.2519 214.2517 318.8567
2: 55 2 119.2634 219.0044 315.6556
I could apply Frank's function repeatedly within a for-loop but I am sure there are cleverer ways that save time. I wonder, for instance, whether the function can be applied to each group within data.table. Or perhaps there is a way within the tidyverse, which I am not really familiar with yet.
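The straightforward for-loop route, reusing the hypothetical f sketched above together with the threshold table c, would be roughly (res_loop is just an illustrative name):
res_loop <- rbindlist(lapply(unique(dat$group), function(g) {
  th <- c[group == g]   # this group's thresholds
  f(dat[group == g], x > th$x, y > th$y, z > th$z)
}))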
Another possible approach using standard evaluation:
#convert conditions into long format, storing operator in data.table as well
cond <- data.table(group=1:2, bop=c(`>`, `>`), x=c(119,119), y=c(219,219), z=c(315,319))
thres <- melt(cond, id.vars=c("group","bop"))
#convert data into long format and lookup filter and thresholds
mdat <- melt(dat, id.vars=c("id", "group"))[
  thres, on=.(group, variable), c("bop","thres") := mget(c("bop","i.value"))]
#apply filtering
ss <- mdat[mapply(function(f, x, y) f(x, y), bop, value, thres)]
#apply sequential subsetting: per group, intersect the surviving ids with each
#variable's passing ids in turn, keeping the previous set whenever the
#intersection would be empty; the ids left after the last variable are kept
dat[id %in% ss[, {
  idx <- id
  ans <- .SD[, {
    x <- intersect(idx, id)
    if (length(x) > 0) {
      idx <- x
    }
    idx
  }, .(variable)]
  ans[variable == last(variable), V1]
}, .(group)]$V1
]
output:
id group x y z
1: 50 1 119.2519 214.2517 318.8567
2: 55 2 119.2634 219.0044 315.6556
3: 58 2 119.2211 214.0305 319.3097
4: 72 2 114.0802 217.7402 313.3655
5: 90 2 116.8115 215.1576 317.0261
6: 99 2 119.2964 212.9973 308.9360
data:
library(data.table)
set.seed(333)
n <- 100
dat <- data.table(id=1:n, group=rep(1:2, each=n/2),
                  x=runif(n, 100, 120), y=runif(n, 200, 220), z=runif(n, 300, 320))
I would like to remove outliers from each cluster of a dataset. The dataset contains 3 columns with different variables and a column indicating the cluster to which each point is allocated. If even one of the 3 variables is an outlier, the entire row should be removed. Outliers are identified by determining the interval spanning the mean plus/minus three standard deviations, but I could also use the outlier function.
I am able to remove outliers without considering clusters, using:
#data: each row has 3 different variables and the allocating cluster (k)
dat <- cbind.data.frame(v1=c(sample(5:10, 100, replace=T), sample(1:5, 5)),
                        v2=c(sample(20:25, 100, replace=T), sample(5:10, 5)),
                        v3=c(sample(30:35, 100, replace=T), sample(10:20, 5)),
                        k=c(rep(1:5, 21)))
### find outliers without considering clusters
#(obviously only the last 5 samples in this example)
rmv <- c()
for(i in 1:3){
  variable <- dat[, i]
  rmv.tm <- which(variable >= (mean(variable) + sd(variable)*3)
                  | variable <= (mean(variable) - sd(variable)*3))
  rmv <- c(rmv, rmv.tm)
}
rmv<-unique(rmv)
rmv
### remove outliers
dat_clean <- dat[-rmv,]
However, I am not able to detect outliers CONSIDERING the clusters, i.e., determining the intervals within each cluster rather than across the entire population. I thought of nesting another loop, but I am finding it difficult to code.
Any help would be much appreciated.
Here's a dplyr approach:
library(dplyr)
dat %>%
  group_by(k) %>%
  filter_all(all_vars(abs(mean(.) - .) < 3 * sd(.)))
# # A tibble: 100 x 4
# # Groups: k [5]
# v1 v2 v3 k
# <int> <int> <int> <int>
# 1 9 20 30 1
# 2 5 24 35 2
# 3 8 20 30 3
# 4 8 23 32 4
# 5 6 23 35 5
# 6 9 24 32 1
# 7 9 22 33 2
# 8 9 23 31 3
# 9 7 21 35 4
# 10 9 23 32 5
# # ... with 90 more rows
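As a side note, filter_all()/all_vars() have since been superseded; with dplyr >= 1.0 the same per-group filter can be written with if_all(), for example:
library(dplyr)
# same idea with the newer across/if_all interface (dplyr >= 1.0)
dat %>%
  group_by(k) %>%
  filter(if_all(v1:v3, ~ abs(mean(.x) - .x) < 3 * sd(.x)))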
Base R:
dat <- cbind.data.frame(v1=c(sample(5:10, 100, replace=T), sample(1:5, 5)),
                        v2=c(sample(20:25, 100, replace=T), sample(5:10, 5)),
                        v3=c(sample(30:35, 100, replace=T), sample(10:20, 5)),
                        k=c(rep(1:5, 21)))
get_remove <- function(x, index, a = 3) {
  # per-cluster limits: mean +/- a standard deviations
  lower_limit <- tapply(x, index, function(x) mean(x) - a * sd(x))
  upper_limit <- tapply(x, index, function(x) mean(x) + a * sd(x))
  vals <- split(x, index)
  res <- lapply(seq_along(vals), function(i)
    (vals[[i]] < lower_limit[i]) | (vals[[i]] > upper_limit[i]))
  # map the per-cluster flags back to the original row order
  unsplit(res, index)
}
mask <- apply(do.call(cbind,
                      lapply(dat[, c("v1", "v2", "v3")],
                             get_remove, index = dat$k)),
              MARGIN = 1, any)
dat[!mask, ]
print("removed:")
dat[mask, ]
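For completeness, the nested-loop version the question was reaching for (per-cluster limits instead of whole-population limits) could be sketched roughly as:
rmv <- c()
for (cl in unique(dat$k)) {          # outer loop over clusters
  rows <- which(dat$k == cl)
  for (i in 1:3) {                   # inner loop over the three variables
    variable <- dat[rows, i]
    out <- rows[variable >= mean(variable) + 3 * sd(variable) |
                variable <= mean(variable) - 3 * sd(variable)]
    rmv <- c(rmv, out)
  }
}
rmv <- unique(rmv)
dat_clean <- if (length(rmv)) dat[-rmv, ] else dat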
I am basically new to using R software.
I have a list of repeating codes (numeric/categorical) from an Excel file. I need to add another column of values (even random ones) such that every occurrence of the same code gets the same value.
Codes Value
1 122
1 122
2 155
2 155
2 155
4 101
4 101
5 251
5 251
Thank you.
We can use match:
n <- length(code0 <- unique(code))
value <- sample(4 * n, n)[match(code, code0)]
or factor:
n <- length(unique(code))
value <- sample(4 * n, n)[factor(code)]
The random integers generated are between 1 and 4 * n. The number 4 is arbitrary; you can also put 100.
Example
set.seed(0); code <- rep(1:5, sample(5))
code
# [1] 1 1 1 1 1 2 2 3 3 3 3 4 4 4 5
n <- length(code0 <- unique(code))
sample(4 * n, n)[match(code, code0)]
# [1] 5 5 5 5 5 18 18 19 19 19 19 12 12 12 11
Comment
The above gives the most general treatment, assuming that code is not readily sorted or taking consecutive values.
If code is sorted (no matter what value it takes), we can also use rle:
if (!is.unsorted(code)) {
n <- length(k <- rle(code)$lengths)
value <- rep.int(sample(4 * n, n), k)
}
If code takes consecutive values 1, 2, ..., n (but not necessarily sorted), we can skip match or factor and do:
n <- max(code)
value <- sample(4 * n, n)[code]
Further note: if code is not numeric but categorical, the match and factor methods will still work.
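For instance, a character code works unchanged with the match form (using a hypothetical code_chr vector for illustration):
code_chr <- c("a", "a", "b", "c", "c", "b")
n <- length(code0 <- unique(code_chr))
value <- sample(4 * n, n)[match(code_chr, code0)]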
What you could also do is the following; it is perhaps more intuitive for a beginner:
data <- data.frame('a' = c(122,122,155,155,155,101,101,251,251))
duplicates <- unique(data)
duplicates[, 'b'] <- rnorm(nrow(duplicates))
data <- merge(data, duplicates, by='a')
So I have a table m consisting of a random number of rows and columns (it can be any size)...
I want to do this calculation with each row/column total:
r[i] * c[j] / n;
where r <- rowSums(m), c <- colSums(m), and n <- sum(m).
I can do it with a double for-loop, but I'm hoping to implement it now using while loops.
I wasn't going to use while loops, but since the table size can differ, I figured it was wise to.
I'm storing each value as it's found in a test vector.
This is my attempt, but I'm messing up the indices:
while(i < nrow(m)){
  while(j < ncol(m)){
    test[i] <- r[i]*c[j] / n;
    j=j+1;
    i=i+1;
  }
  j=j+1;
  i=i+1;
}
Any guidance to help me sort out my loops would be much appreciated. Thanks in advance.
update
See below for an example and expected result:
m <- t(matrix(c(28,48,10,114), nrow=2, ncol=2));
r <- rowSums(m); #76 124 (sum of rows)
c <- colSums(m); #38 162 (sum of cols)
n <- sum(m); #200 (sum of all cells)
test <- rep(0, times=length(m)); #preallocated result vector
#inside while loops, calc for each combination of r and c indices:
test[1] <- 76 *38 /200 #first calc to test[i] where i=1
test[2] <- 124*38 /200
test[3] <- 76*162 /200
test[4] <- 124*162/200 #last calc to test[i] where i==length(m)
I would avoid using a for or while loop and do something like this instead:
> as.vector(outer(r,c, function(x,y) x*y/n))
[1] 14.44 23.56 61.56 100.44
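For completeness, the index handling in the question's nested loop can also be patched directly; a rough sketch (both counters initialised, the inner counter reset on every pass, and the result written in column-major order so it matches the expected test above):
i <- 1
test <- numeric(length(m))            # preallocate the result
while (i <= nrow(m)) {
  j <- 1                              # reset the column counter for each row
  while (j <= ncol(m)) {
    test[(j - 1) * nrow(m) + i] <- r[i] * c[j] / n   # column-major position
    j <- j + 1
  }
  i <- i + 1
}
test
# [1]  14.44  23.56  61.56 100.44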
No need to use a while loop. It is always best to use vector operations in R (and any other array-based language). It makes for clearer and faster code.
nrows <- sample(1:100, 1) # a random number of rows
ncols <- sample(1:100, 1) # a random number of columns
#create a matrix of random numbers with our random dimensions
m <- matrix(runif(nrows*ncols), nrow=nrows)
n <- sum(m)
#read up on outer: it creates the cartesian product of your vectors,
#so you will have every r[i] multiplied with every c[j]... i.e. what your loop is doing
r <- outer(rowSums(m), colSums(m), function(x, y) x*y/n)
Hope this helps; let me know if you have any questions.
A more R-like solution would be to use expand.grid instead of a nested while loop:
Set-up:
> m <- matrix(1:12, 3, 4)
> m
[,1] [,2] [,3] [,4]
[1,] 1 4 7 10
[2,] 2 5 8 11
[3,] 3 6 9 12
> n <- sum(m)
> r <- rowSums(m)
> c <- colSums(m)
Now:
> test <- expand.grid(r,c)
> test
Var1 Var2
1 22 6
2 26 6
3 30 6
4 22 15
5 26 15
6 30 15
7 22 24
8 26 24
9 30 24
10 22 33
11 26 33
12 30 33
> test <- test[,1] * test[,2] / n
> test
[1] 1.692308 2.000000 2.307692 4.230769 5.000000 5.769231 6.769231
[8] 8.000000 9.230769 9.307692 11.000000 12.692308
I would like to aggregate an R data.frame by equal amounts of the cumulative sum of one of the variables in the data.frame. I googled quite a lot, but probably I don't know the correct terminology to find anything useful.
Suppose I have this data.frame:
> x <- data.frame(cbind(p=rnorm(100, 10, 0.1), v=round(runif(100, 1, 10))))
> head(x)
p v
1 10.002904 4
2 10.132200 2
3 10.026105 6
4 10.001146 2
5 9.990267 2
6 10.115907 6
7 10.199895 9
8 9.949996 8
9 10.165848 8
10 9.953283 6
11 10.072947 10
12 10.020379 2
13 10.084002 3
14 9.949108 8
15 10.065247 6
16 9.801699 3
17 10.014612 8
18 9.954638 5
19 9.958256 9
20 10.031041 7
I would like to reduce the x to a smaller data.frame where each line contains the weighted average of p, weighted by v, corresponding to an amount of n units of v. Something of this sort:
> n <- 100
> cum.v <- cumsum(x$v)
> f <- cum.v %/% n
> x.agg <- aggregate(cbind(v*p, v) ~ f, data=x, FUN=sum)
> x.agg$'v * p' <- x.agg$'v * p' / x.agg$v
> x.agg
f v * p v
1 0 10.039369 98
2 1 9.952049 94
3 2 10.015058 104
4 3 9.938271 103
5 4 9.967244 100
6 5 9.995071 69
First question: is there a better (more efficient) approach than the code above? The second, more important, question is how to correct the code above in order to obtain more precise bucketing. Namely, each row in x.agg should contain exactly 100 units of v, not just approximately, as is the case above. For example, the first row contains the aggregate of the first 17 rows of x, which correspond to 98 units of v. The next row (row 18) contains 5 units of v and is fully included in the next bucket. What I would like instead is to attribute 2 units of row 18 to the first bucket and the remaining 3 units to the following one.
Thanks in advance for any help provided.
Here's another method that does this without repeating each p, v times. The way I understand it, the place where it crosses 100 (see below)
18 9.954638 5 98
19 9.958256 9 107
should be changed to:
18 9.954638 5 98
19.1 9.958256 2 100 # ---> 2 units will be considered with previous group
19.2 9.958256 7 107 # ----> remaining 7 units will be split for next group
The code:
n <- 100
# get cumulative sum, an id column (for retrace) and current group id
x <- transform(x, cv = cumsum(x$v), id = seq_len(nrow(x)), grp = cumsum(x$v) %/% n)
# Paste these two lines in R to install IRanges
source("http://bioconductor.org/biocLite.R")
biocLite("IRanges")
require(IRanges)
ir1 <- successiveIRanges(x$v)
ir2 <- IRanges(seq(n, max(x$cv), by=n), width=1)
o <- findOverlaps(ir1, ir2)
# gets position where multiple of n(=100) occurs
# (where we'll have to do something about it)
pos <- queryHits(o)
# how much do the values differ from multiple of 100?
val <- start(ir2)[subjectHits(o)] - start(ir1)[queryHits(o)] + 1
# we need "pos" new rows of "pos" indices
x1 <- x[pos, ]
x1$v <- val # corresponding values
# reduce the group by 1, so that multiples of 100 will
# belong to the previous row
x1$grp <- x1$grp - 1
# subtract val in the original data x
x$v[pos] <- x$v[pos] - val
# bind and order them
x <- rbind(x1,x)
x <- x[with(x, order(id)), ]
# remove unnecessary entries
x <- x[!(duplicated(x$id) & x$v == 0), ]
x$cv <- cumsum(x$v) # updated cumsum
x$id <- NULL
require(data.table)
x.dt <- data.table(x, key="grp")
x.dt[, list(res = sum(p*v)/sum(v), cv = tail(cv, 1)), by=grp]
Running on your data:
# grp res cv
# 1: 0 10.037747 100
# 2: 1 9.994648 114
Running on #geektrader's data:
# grp res cv
# 1: 0 9.999680 100
# 2: 1 10.040139 200
# 3: 2 9.976425 300
# 4: 3 10.026622 400
# 5: 4 10.068623 500
# 6: 5 9.982733 562
Here's a benchmark on relatively big data:
set.seed(12345)
x <- data.frame(cbind(p=rnorm(1e5, 10, 0.1), v=round(runif(1e5, 1, 10))))
require(rbenchmark)
benchmark(out <- FN1(x), replications=10)
# test replications elapsed relative user.self
# 1 out <- FN1(x) 10 13.817 1 12.586
It takes about 1.4 seconds on 1e5 rows.
If you are looking for precise bucketing, I am assuming the value of p is the same for the 2 "split" units of v,
i.e. in your example, the value of p for the 2 units of row 18 that go into the first bucket is 9.954638.
With the above assumption, you can do the following for not-super-large datasets:
> set.seed(12345)
> x <- data.frame(cbind(p=rnorm(100, 10, 0.1), v=round(runif(100, 1, 10))))
> z <- unlist(mapply(function(x,y) rep(x,y), x$p, x$v, SIMPLIFY=T))
This creates a vector with each value of p repeated v times for each row, and the result is combined into a single vector using unlist.
After this, aggregation is trivial using the aggregate function:
> aggregate(z, by=list((1:length(z)-0.5)%/%100), FUN=mean)
Group.1 x
1 0 9.999680
2 1 10.040139
3 2 9.976425
4 3 10.026622
5 4 10.068623
6 5 9.982733