Rank, percentile, and quintile in a specific format in R

I have a column, say the first column below ('RawData'). How do I calculate the quintile, rank, and rank percentile from the RawData column, in the format below?
RawData  Quintile   Rank  Rank Percentile
1.20     1            87                3
0.58     2           897               30
0.16     5         2,564               84
1.04     1           145                5
NA       NA           NA               NA
0.32     4         1,966               64
0.18     5         2,471               81
0.22     4         2,374               78
0.89     1           241                9
0.46     3         1,362               45

RawData <- c(1.20, 0.16, 0.58, 1.04)
In general, you can combine the outputs of individual per-observation statistics into a data.frame using cbind:
df <- cbind(
    RawData,
    rank = rank(RawData)
)
(Note that quantile(RawData) returns the five quartile break points, a length-5 vector, so it cannot be column-bound to a length-4 column without recycling; it is better kept as a separate summary.)
However, in the data you shared, the rank values run far higher than the number of entries in the data set. Are you asking how you would calculate these specific values of rank, quintile, and so on, given these particular raw values?

Perhaps something like this (it does not reproduce your figures, but presumably your sample is just part of a larger table)...
df <- data.frame(RawData = c(1.2, 0.58, 0.16, 1.04, NA, 1966, 2471, 2374, 241, 1362))
# bin into quintiles using the 0%, 20%, ..., 100% quantiles as break points
df$Quintile <- cut(df$RawData, quantile(df$RawData, seq(0, 1, 0.2), na.rm = TRUE),
                   labels = 1:5, include.lowest = TRUE)
# rank the raw values, keeping NA as NA
df$Rank <- rank(df$RawData, na.last = "keep")
# express rank as a percentage of the highest rank
df$Percentile <- 100 * df$Rank / max(df$Rank, na.rm = TRUE)
df
RawData Quintile Rank Percentile
1 1.20 2 4 44.44444
2 0.58 1 2 22.22222
3 0.16 1 1 11.11111
4 1.04 2 3 33.33333
5 NA <NA> NA NA
6 1966.00 4 7 77.77778
7 2471.00 5 9 100.00000
8 2374.00 5 8 88.88889
9 241.00 3 5 55.55556
10 1362.00 4 6 66.66667
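If you want to mirror the layout in the question exactly (quintile 1 holding the highest values, rank 1 for the highest value, and whole-number percentiles), here is a minimal sketch building on the df above; the reversed labels and the rounding are my assumptions based on the question's sample output:
df$Quintile   <- cut(df$RawData, quantile(df$RawData, seq(0, 1, 0.2), na.rm = TRUE),
                     labels = 5:1, include.lowest = TRUE)  # highest values get quintile 1
df$Rank       <- rank(-df$RawData, na.last = "keep")       # highest value gets rank 1
df$Percentile <- round(100 * df$Rank / max(df$Rank, na.rm = TRUE))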

Related

Add numbers corresponding to different hours associated with two different dates in R

I have the following data frame:
set.seed(1000)
data <- data.frame(date = sort(rep(Sys.Date() - 1:3, 5)),
                   hour = rep(0:4, 3),
                   values = round(rexp(15), 2))
date hour values
1 2016-04-25 0 1.00
2 2016-04-25 1 0.52
3 2016-04-25 2 2.44
4 2016-04-25 3 2.16
5 2016-04-25 4 0.48
6 2016-04-26 0 0.17
7 2016-04-26 1 1.56
8 2016-04-26 2 0.51
9 2016-04-26 3 0.96
10 2016-04-26 4 0.05
11 2016-04-27 0 0.75
12 2016-04-27 1 1.69
13 2016-04-27 2 0.61
14 2016-04-27 3 0.85
15 2016-04-27 4 2.23
I want to add up the numbers in the values column over the hours from 2 to 1, inclusive, where hour 2 falls on one date and hour 1 falls on the following date.
I want a final dataframe like
date sumvalue
2016-04-26 6.81
2016-04-27 3.96
Does anyone know an elegant way to do this? I want to do the same with a huge data frame.
Kind regards
Here is one way to get the expected output:
library(data.table)
setDT(data)[, {
    Un1 <- unique(date)
    # window starts: hour 2 on every date except the last
    i1 <- which(hour == 2 & date %in% Un1[-length(Un1)])
    # window ends: hour 1 on every date except the first
    i2 <- which(hour == 1 & date %in% Un1[-1])
    # sum the values over each start:end window
    v1 <- unlist(Map(function(x, y) sum(values[seq(x, y)]), i1, i2))
    list(date = Un1[-1], sumvalue = v1)
}]
# date sumvalue
#1: 2016-04-26 6.81
#2: 2016-04-27 3.96
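The same sums can also be had in base R. Here is a sketch of my own, assuming the rows are ordered by date and hour as above: locate each window's start (hour 2, all but the last date) and end (hour 1, all but the first date), then sum between them.
data <- data[order(data$date, data$hour), ]   # ensure chronological order
starts <- head(which(data$hour == 2), -1)     # hour 2 on all but the last date
ends   <- which(data$hour == 1)[-1]           # hour 1 on all but the first date
data.frame(date = unique(data$date)[-1],
           sumvalue = mapply(function(s, e) sum(data$values[s:e]), starts, ends))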

stratified sampling or proportional sampling in R

I have a data set generated as follows (N is the total number of rows; 200, judging by the proportions below):
N <- 200
myData <- data.frame(a = 1:N, b = round(rnorm(N), 2), group = round(rnorm(N, 4), 0))
The data looks like this [screenshot omitted].
I would like to generate a stratified sample of myData with a given sample size, i.e., 50. The resulting sample should follow the proportional allocation of the original data set in terms of "group". For instance, assume myData has 20 records belonging to group 4; then the resulting data set should have 50 * 20 / 200 = 5 records belonging to group 4. How can I do that in R?
You can use my stratified function, specifying a value < 1 as your proportion, like this:
## Sample data. Seed for reproducibility
set.seed(1)
N <- 50
myData <- data.frame(a = 1:N, b = round(rnorm(N), 2), group = round(rnorm(N, 4), 0))
## Taking the sample
out <- stratified(myData, "group", .3)
out
# a b group
# 17 17 -0.02 2
# 8 8 0.74 3
# 25 25 0.62 3
# 49 49 -0.11 3
# 4 4 1.60 3
# 26 26 -0.06 4
# 27 27 -0.16 4
# 7 7 0.49 4
# 12 12 0.39 4
# 40 40 0.76 4
# 32 32 -0.10 4
# 9 9 0.58 5
# 42 42 -0.25 5
# 43 43 0.70 5
# 37 37 -0.39 5
# 11 11 1.51 6
Compare the group counts in the result with what we would have expected.
round(table(myData$group) * .3)
#
# 2 3 4 5 6
# 1 4 6 4 1
table(out$group)
#
# 2 3 4 5 6
# 1 4 6 4 1
You can also easily take a fixed number of samples per group, like this:
stratified(myData, "group", 2)
# a b group
# 34 34 -0.05 2
# 17 17 -0.02 2
# 49 49 -0.11 3
# 22 22 0.78 3
# 12 12 0.39 4
# 7 7 0.49 4
# 18 18 0.94 5
# 33 33 0.39 5
# 45 45 -0.69 6
# 11 11 1.51 6
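If you prefer to avoid an external function, here is a minimal base-R sketch of the same proportional idea (my own; it assumes rounding to the nearest whole row within each group is acceptable):
set.seed(1)
out2 <- do.call(rbind, lapply(split(myData, myData$group), function(d) {
    # sample ~30% of the rows of each group
    d[sample(nrow(d), round(nrow(d) * 0.3)), , drop = FALSE]
}))
table(out2$group)  # should match round(table(myData$group) * .3)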

Ranking variables with conditions

Say I have the following data frame:
df <- data.frame(store = LETTERS[1:8],
                 sales = c(9, 128, 54, 66, 23, 132, 89, 70),
                 successRate = c(.80, .25, .54, .92, .85, .35, .54, .46))
I want to rank the stores according to successRate, with ties going to the store with more sales, so first I do this (just to make visualization easier):
df <- df[order(-df$successRate, -df$sales), ]
In order to actually create a ranking variable, I do the following:
df$rank <- ave(df$successRate, FUN = function(x) rank(-x, ties.method='first'))
So df looks like this:
store sales successRate rank
4 D 66 0.92 1
5 E 23 0.85 2
1 A 9 0.80 3
7 G 89 0.54 4
3 C 54 0.54 5
8 H 70 0.46 6
6 F 132 0.35 7
2 B 128 0.25 8
The problem is I don't want small stores to be part of the ranking. Specifically, I want stores with less than 50 sales not to be ranked. So this is how I define df$rank instead:
df$rank <- ifelse(df$sales < 50, NA,
                  ave(df$successRate, FUN = function(x) rank(-x, ties.method = 'first')))
The problem is that even though this correctly removes stores E and A, it doesn't reassign the rankings they were occupying. df looks like this now:
store sales successRate rank
4 D 66 0.92 1
5 E 23 0.85 NA
1 A 9 0.80 NA
7 G 89 0.54 4
3 C 54 0.54 5
8 H 70 0.46 6
6 F 132 0.35 7
2 B 128 0.25 8
I've experimented with conditions inside and outside ave(), but I can't get R to do what I want! How can I get it to rank the stores like this?
store sales successRate rank
4 D 66 0.92 1
5 E 23 0.85 NA
1 A 9 0.80 NA
7 G 89 0.54 2
3 C 54 0.54 3
8 H 70 0.46 4
6 F 132 0.35 5
2 B 128 0.25 6
Super easy to do with data.table:
library(data.table)
dt = data.table(df)
# do the ordering you like (note, could also use setkey to do this faster)
dt = dt[order(-successRate, -sales)]
dt[sales >= 50, rank := .I]
dt
# store sales successRate rank
#1: D 66 0.92 1
#2: E 23 0.85 NA
#3: A 9 0.80 NA
#4: G 89 0.54 2
#5: C 54 0.54 3
#6: H 70 0.46 4
#7: F 132 0.35 5
#8: B 128 0.25 6
If you must do it with a data.frame, then after applying your preferred ordering, run:
df$rank <- NA
df$rank[df$sales >= 50] <- seq_len(sum(df$sales >= 50))
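For completeness, here is a base-R variant of my own that does not depend on the rows already being sorted: rank just the qualifying stores directly, breaking successRate ties by sales with nested order() calls.
df$rank <- NA
big <- df$sales >= 50
# order(order(...)) converts an ordering into ranks
df$rank[big] <- order(order(-df$successRate[big], -df$sales[big]))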

Calculate sum of a column based on ranking of another column

I have a data set:
Security  %market value  return  Quintile*
       1           0.07     100          3
       2           0.10      88          2
       3           0.08      78          1
       4           0.12      59          1
       5           0.20     106          4
       6           0.04      94          3
       7           0.05     111          5
       8           0.10      83          2
       9           0.06      97          3
      10           0.03      90          3
      11           0.15     119          5
The actual data set has more than 5,000 rows, and I would like to use R to create 5 quintiles, where each quintile holds 20% of the total market value. In addition, the quintiles have to be ranked by the magnitude of return: the 1st quintile should contain the securities with the lowest returns, and the 5th quintile the securities with the highest returns. I would like to create the column "Quintile"; different quintiles may contain different numbers of securities, but the total %market value per quintile should be the same.
I have tried several methods, and I am very new to R, so please kindly provide me some help. Thank you very much in advance!
Samuel
You can order your data and then use findInterval, adding a small delta to the break points so that a cumulative share landing exactly on a break falls in the lower (right-closed) interval:
raw_data <- raw_data[order(raw_data$return), ]
raw_data$Q2 <- findInterval(cumsum(raw_data$marketvalue),
                            seq(0, 1, 0.2) + 0.000001, rightmost.closed = TRUE)
raw_data
# Security marketvalue return Quintile Q2
#4 4 0.12 59 1 1
#3 3 0.08 78 1 1
#8 8 0.10 83 2 2
#2 2 0.10 88 2 2
#10 10 0.03 90 3 3
#6 6 0.04 94 3 3
#9 9 0.06 97 3 3
#1 1 0.07 100 3 3
#5 5 0.20 106 4 4
#7 7 0.05 111 5 5
#11 11 0.15 119 5 5
The following works with your data.
First, sort by increasing return:
dat <- dat[order(dat$return), ]
Then, compute the cumulative market share and cut every 0.2:
dat$Quintile <- ceiling(cumsum(dat$market) / 0.2)
Finally, sort things back by Security:
dat <- dat[order(dat$Security), ]
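For reference, here is a self-contained run of the findInterval approach on the sample data; the data-entry step and the tapply check at the end are my own additions:
raw_data <- data.frame(Security    = 1:11,
                       marketvalue = c(0.07, 0.10, 0.08, 0.12, 0.20, 0.04,
                                       0.05, 0.10, 0.06, 0.03, 0.15),
                       return      = c(100, 88, 78, 59, 106, 94, 111, 83, 97, 90, 119))
raw_data <- raw_data[order(raw_data$return), ]
raw_data$Q2 <- findInterval(cumsum(raw_data$marketvalue),
                            seq(0, 1, 0.2) + 0.000001, rightmost.closed = TRUE)
tapply(raw_data$marketvalue, raw_data$Q2, sum)  # ~0.2 of market value per quintile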

selecting and identifying a subset of elements based on criteria

I would like to select a subset of elements from a whole that satisfy certain conditions. There are about 20 elements, each having multiple attributes. I would like to select five elements that offer the least amount of discrepancy from a fixed criterion on one attribute, and offers the highest average value on another attribute.
Lastly, I would like to apply the function over multiple sets of 20 elements.
Thus far, I have been able to identify the subsets "by hand," but I'd like to be able to return the index of the values in addition to returning the values themselves.
Objectives:
I would like to find the set of five values for X1 that are the least discrepant from a fixed value (55), and provide the largest value for the average of X2.
I would like to do this for multiple sets.
##### generating example data
##### this has five groups, each with two variables x1 and x2
set.seed(271828)
grp <- gl(5, 20)
x1 <- round(rnorm(100, 45, 12), digits = 0)
x2 <- round(rbeta(100, 2, 4), digits = 2)
id <- seq(1, 100, 1)
##### this is how the data would arrive for me to analyze
dat <- as.data.frame(cbind(id, grp, x1, x2))
The data would arrive in this format, with id as a unique identifier for each element.
##### pulling out the first group for demonstration
dat.grp.1 <- dat[ which(grp == 1), ]
crit <- 55
x <- t(combn(dat.grp.1$x1, 5))
y <- t(combn(dat.grp.1$x2, 5))
mean.x <- rowMeans(x)
mean.y <- rowMeans(y)
k <- (mean.x - crit)^2
out <- cbind(x, mean.x, k, y, mean.y)
##### finding the sets with the least amount of discrepancy
pick <- out[ which(k == min(k)), ]
pick
##### finding the sets with low discrepancy and high values of y (means of X2) by "hand"
sorted <- out[order(k), ]
head(sorted, n=20)
With respect to the values in pick, I can see that the values of X1 are:
> pick
mean.x k mean.y
[1,] 55 47 48 48 52 50 25 0.62 0.08 0.31 0.18 0.54 0.346
[2,] 55 48 48 47 52 50 25 0.62 0.31 0.18 0.48 0.54 0.426
I would like to return the id values for these elements, so that I know to pick elements 3, 8, 10, 11, and 18 (choosing set 2, since the discrepancy k is the same but the mean of y is higher).
> dat.grp.1
id grp x1 x2
1 1 1 45 0.12
2 2 1 27 0.34
3 3 1 55 0.62
4 4 1 39 0.32
5 5 1 41 0.18
6 6 1 29 0.47
7 7 1 47 0.08
8 8 1 48 0.31
9 9 1 35 0.48
10 10 1 48 0.18
11 11 1 47 0.48
12 12 1 31 0.29
13 13 1 39 0.15
14 14 1 36 0.54
15 15 1 36 0.20
16 16 1 38 0.40
17 17 1 30 0.31
18 18 1 52 0.54
19 19 1 44 0.37
20 20 1 31 0.20
Doing this "by hand" works for now, but it would be good to make this as "hands-off" as possible.
Any help is greatly appreciated.
You are almost there. You can change your definition of sorted to
sorted <- out[order(k, -mean.y), ]
And then sorted[1,] (or if you prefer sorted[1,,drop=FALSE]) is your selected set.
If you want the indexes rather than/in addition to the points, then you can include that earlier. Replace:
x <- t(combn(dat.grp.1$x1, 5))
y <- t(combn(dat.grp.1$x2, 5))
with
idx <- t(combn(1:nrow(dat.grp.1), 5))
x <- t(apply(idx, 1, function(i) {dat.grp.1[i,"x1"]}))
y <- t(apply(idx, 1, function(i) {dat.grp.1[i,"x2"]}))
and include idx in out later.
Putting it all together:
##### pulling out the first group for demonstration
dat.grp.1 <- dat[ which(grp == 1), ]
crit <- 55
idx <- t(combn(1:nrow(dat.grp.1), 5))
x <- t(apply(idx, 1, function(i) {dat.grp.1[i,"x1"]}))
y <- t(apply(idx, 1, function(i) {dat.grp.1[i,"x2"]}))
mean.x <- rowMeans(x)
mean.y <- rowMeans(y)
k <- (mean.x - crit)^2
out <- cbind(idx, x, mean.x, k, y, mean.y)
##### finding the sets with the least amount of discrepancy and among
##### those the largest second mean
pick <- out[order(k, -mean.y)[1],,drop=FALSE]
pick
which gives
mean.x k mean.y
[1,] 3 8 10 11 18 55 48 48 47 52 50 25 0.62 0.31 0.18 0.48 0.54 0.426
EDIT: A description of applying over idx was requested; I want more room than a comment allows, so I'm adding it to my answer. I will also address looping over subsets.
idx is a matrix (15504 x 5), each row of which is a set of (5) indexes into the data frame. apply with margin 1 goes through it row by row and does something with each row. That something is: take the index values and use them to pick out the desired rows of dat.grp.1, pulling out the corresponding x1 values. I could have written dat.grp.1[i,"x1"] as dat.grp.1$x1[i]. Each row of idx becomes a column in the result and the values retrieved from dat.grp.1 become the rows, so the whole thing needs to be transposed.
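One caveat worth adding: the number of rows in idx is choose(n, 5), which grows very quickly with group size, so this exhaustive search is only practical for small groups.
choose(20, 5)   # 15504 candidate sets per group of 20
choose(30, 5)   # 142506
choose(40, 5)   # 658008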
You can break the apply apart to see how each step works if you like. Make the function a named, non-anonymous function:
f <- function(i) {dat.grp.1[i, "x1"]}
and pass it one row of idx at a time.
> f(idx[1,])
[1] 45 27 55 39 41
> f(idx[2,])
[1] 45 27 55 39 29
> f(idx[3,])
[1] 45 27 55 39 47
> f(idx[4,])
[1] 45 27 55 39 48
These are what get bundled into x
> head(x,4)
[,1] [,2] [,3] [,4] [,5]
[1,] 45 27 55 39 41
[2,] 45 27 55 39 29
[3,] 45 27 55 39 47
[4,] 45 27 55 39 48
As for looping over subsets, the plyr library is very handy for this. The way you have set it up (assign the subset of interest to a variable and work with that) makes the transformation easy. Everything you do to create the answer for one subset goes into a function with that part as a parameter.
find.best.set <- function(dat.grp.1) {
    crit <- 55
    idx <- t(combn(1:nrow(dat.grp.1), 5))
    x <- t(apply(idx, 1, function(i) {dat.grp.1[i, "x1"]}))
    y <- t(apply(idx, 1, function(i) {dat.grp.1[i, "x2"]}))
    mean.x <- rowMeans(x)
    mean.y <- rowMeans(y)
    k <- (mean.x - crit)^2
    out <- cbind(idx, x, mean.x, k, y, mean.y)
    out[order(k, -mean.y)[1], , drop = FALSE]
}
This is basically what you had before, but getting rid of some unnecessary assignments.
Now wrap this in a plyr call.
library("plyr")
ddply(dat, .(grp), find.best.set)
which gives
grp V1 V2 V3 V4 V5 V6 V7 V8 V9 V10 V11 V12 V13 V14 V15 V16 V17 V18
1 1 3 8 10 11 18 55 48 48 47 52 50 25 0.62 0.31 0.18 0.48 0.54 0.426
2 2 8 10 12 15 16 53 35 55 76 56 55 0 0.71 0.20 0.43 0.50 0.70 0.508
3 3 4 10 15 17 20 47 48 73 55 52 55 0 0.67 0.54 0.28 0.42 0.31 0.444
4 4 2 11 13 17 19 47 46 70 62 50 55 0 0.35 0.47 0.18 0.13 0.47 0.320
5 5 3 6 10 17 19 72 40 58 66 39 55 0 0.33 0.42 0.32 0.32 0.51 0.380
I don't know that that is the best format for your results, but it mirrors the example you gave.
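If the unnamed V1...V18 columns bother you, one option (a sketch; find.best.set2 and its column names are my own invention) is to wrap find.best.set so it returns a labeled one-row data frame:
find.best.set2 <- function(d) {
    best <- find.best.set(d)
    data.frame(ids    = paste(best[1, 1:5], collapse = ","),  # within-group indices of the chosen rows
               mean.x = best[1, "mean.x"],
               k      = best[1, "k"],
               mean.y = best[1, "mean.y"])
}
ddply(dat, .(grp), find.best.set2)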
