Get column mean for every block of n rows based on condition - r

I have this dataframe
r2 distance
1 33.64 67866
2 8.50 77229
3 15.07 109119
4 24.35 142279
5 7.74 143393
6 8.21 177670
7 12.26 216440
8 12.66 253751
9 26.31 282556
10 39.08 320816
I need to calculate the mean of column r2 for every block of rows where the difference between two values in the distance column is less than or equal to 100000.
For this example the desired output would be:
mean_r2 diff_of_distance
1 17.86 75527 ## mean of rows 1 to 5; distance 5 - distance 1
2 13.91 66164 ## mean of rows 2 to 5; distance 5 - distance 2
3 13.84 68551 ## mean of rows 3 to 6; distance 6 - distance 3
4 13.14 74161 ## mean of rows 4 to 7; distance 7 - distance 4
5 9.40 73047 ## mean of rows 5 to 7; distance 7 - distance 5
6 11.04 76081 ## mean of rows 6 to 8; distance 8 - distance 6
and so on.
Edit 1: I have more than 100,000 rows.
Thanks.

Loop through each value of distance, subtract it from every value in the distance vector, and test whether the result is less than 100000. This creates a logical vector; because distance is sorted ascending, summing it identifies the last index at which the difference is still below 100000 (i.e. where the logical flips to FALSE). Use this index to identify your block, then take the mean of r2 within each block.
To speed up the code, define your vector types and lengths up front (to avoid "growing" vectors on each iteration).
means         <- vector("numeric",   length = nrow(df))
rows          <- vector("character", length = nrow(df))
distance_diff <- vector("numeric",   length = nrow(df))
for (i in seq_along(df$distance)) {
  dis_val <- df$distance[i]                    # the ith distance value
  bools <- (df$distance - dis_val) < 100000    # TRUE where the difference from row i is below 100000
  block_range <- sum(bools)                    # index of the last row still within 100000 of row i
  rows[i] <- paste(i, "-", block_range)
  means[i] <- mean(df$r2[i:block_range])       # mean of r2 over rows i to block_range
  distance_diff[i] <- df$distance[block_range] - dis_val  # distance spanned by the block
}
data.frame(mean_r2 = means, rows = rows, diff_of_distance = distance_diff)
mean_r2 rows diff_of_distance
1 17.860000 1 - 5 75527
2 13.915000 2 - 5 66164
3 13.842500 3 - 6 68551
4 13.140000 4 - 7 74161
5 9.403333 5 - 7 73047
6 11.043333 6 - 8 76081
7 17.076667 7 - 9 66116
8 26.016667 8 - 10 67065
9 32.695000 9 - 10 38260
10 39.080000 10 - 10 0
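Given the edit mentioning more than 100,000 rows, note that the loop above compares every row against the full distance vector, which is O(n^2). A fully vectorized sketch, assuming distance is sorted ascending as in the example: findInterval() locates each block end in one pass, and running sums from cumsum() give the block means.
n   <- nrow(df)
i   <- seq_len(n)
# last row whose distance is still within 100000 of row i's distance
end <- findInterval(df$distance + 100000, df$distance, left.open = TRUE)
cs  <- c(0, cumsum(df$r2))                   # running sum of r2, padded with 0
data.frame(mean_r2          = (cs[end + 1] - cs[i]) / (end - i + 1),
           rows             = paste(i, "-", end),
           diff_of_distance = df$distance[end] - df$distance)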

You can try:
# your data
d <- read.table(text="r2 distance
1 33.64 67866
2 8.50 77229
3 15.07 109119
4 24.35 142279
5 7.74 143393
6 8.21 177670
7 12.26 216440
8 12.66 253751
9 26.31 282556
10 39.08 320816", header=T)
library(tidyverse) #dplyr_0.7.2
d %>%
  mutate(index = 1:n()) %>%   # add row index
  group_by(index) %>%         # group by this index
  # calculate difference and find max row where diff < 100000
  mutate(max_row = max(which(.$distance - distance < 100000))) %>%
  # calculate mean
  mutate(mean_r2 = mean(.$r2[index:max_row])) %>%
  # calculate the difference
  mutate(diff_of_distance = .$distance[max_row] - .$distance[index]) %>%
  # unite the columns
  unite(rows, index, max_row, sep = "-")
# A tibble: 10 x 5
r2 distance rows mean_r2 diff_of_distance
* <dbl> <int> <chr> <dbl> <int>
1 33.64 67866 1-5 17.860000 75527
2 8.50 77229 2-5 13.915000 66164
3 15.07 109119 3-6 13.842500 68551
4 24.35 142279 4-7 13.140000 74161
5 7.74 143393 5-7 9.403333 73047
6 8.21 177670 6-8 11.043333 76081
7 12.26 216440 7-9 17.076667 66116
8 12.66 253751 8-10 26.016667 67065
9 26.31 282556 9-10 32.695000 38260
10 39.08 320816 10-10 39.080000 0
This works because group_by() subsets the data frame, so within mutate() you can access each group's own distance value, while .$distance accesses the complete column regardless of the group_by().

creating a dataframe of means of 5 randomly sampled observations

I'm currently reading "Practical Statistics for Data Scientists" and following along in R as they demonstrate some code. There is one chunk of code I'm particularly struggling to follow the logic of and was hoping someone could help. The code in question is creating a dataframe with 1000 rows where each observation is the mean of 5 randomly drawn income values from the dataframe loans_income. However, I'm getting confused about the logic of the code as it is fairly complicated with a tapply() function and nested rep() statements.
The code to create the dataframe in question is as follows:
samp_mean_5 <- data.frame(income = tapply(sample(loans_income$income, 1000*5),
                                          rep(1:1000, rep(5, 1000)),
                                          FUN = mean),
                          type = 'mean_of_5')
In particular, I'm confused about the nested rep() statements and the 1000*5 portion of the sample() function. Any help understanding the logic of the code would be greatly appreciated!
For reference, the original dataset loans_income simply has a single column of 50,000 income values.
You have 50,000 income values in a single vector. Let's break your code down:
tapply(sample(loans_income$income,1000*5),
rep(1:1000,rep(5,1000)),
FUN = mean)
I will replace 1000 with 10 and income with random numbers, so it's easier to explain. I also set set.seed(1) so the result can be reproduced.
sample(loans_income$income,1000*5)
We draw 50 random incomes without replacement. They are (temporarily) put into a vector of length 50, so the output looks like this:
> sample(runif(50000),10*5)
[1] 0.73283101 0.60329970 0.29871173 0.12637654 0.48434952 0.01058067 0.32337850
[8] 0.46873561 0.72334215 0.88515494 0.44036341 0.81386225 0.38118213 0.80978822
[15] 0.38291273 0.79795343 0.23622492 0.21318431 0.59325586 0.78340477 0.25623138
[22] 0.64621658 0.80041393 0.68511759 0.21880083 0.77455662 0.05307712 0.60320912
[29] 0.13191926 0.20816298 0.71600799 0.70328349 0.44408218 0.32696205 0.67845445
[36] 0.64438336 0.13241312 0.86589561 0.01109727 0.52627095 0.39207860 0.54643661
[43] 0.57137320 0.52743012 0.96631114 0.47151170 0.84099503 0.16511902 0.07546454
[50] 0.85970500
rep(1:1000,rep(5,1000))
Now we are creating an indexing vector of length 50:
> rep(1:10,rep(5,10))
[1] 1 1 1 1 1 2 2 2 2 2 3 3 3 3 3 4 4 4 4 4 5 5 5 5 5 6 6 6
[29] 6 6 7 7 7 7 7 8 8 8 8 8 9 9 9 9 9 10 10 10 10 10
Those indices "group" the samples from step 1. So basically this vector tells R that the first 5 entries of your "sample vector" belong together (index 1), the next 5 entries belong together (index 2) and so on.
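As an aside, rep(1:10, rep(5, 10)) is simply a more explicit spelling of rep() with the each argument, which may be easier to read:
identical(rep(1:10, rep(5, 10)), rep(1:10, each = 5))
#> [1] TRUE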
FUN = mean
Just apply the mean-function on the data.
tapply
So tapply takes the sampled data (sample-part) and groups them by the second argument (the rep()-part) and applies the mean-function on each group.
If you are familiar with data.frames and the dplyr package, take a look at this (only the first 10 rows are displayed):
set.seed(1)
df <- data.frame(income=sample(runif(5000),10*5), index=rep(1:10,rep(5,10)))
income index
1 0.42585569 1
2 0.16931091 1
3 0.48127444 1
4 0.68357403 1
5 0.99374923 1
6 0.53227877 2
7 0.07109499 2
8 0.20754511 2
9 0.35839481 2
10 0.95615917 2
I attached an index to the random numbers (your income). Now we calculate the mean per group:
df %>%
  group_by(index) %>%
  summarise(mean = mean(income))
which gives us
# A tibble: 10 x 2
index mean
<int> <dbl>
1 1 0.551
2 2 0.425
3 3 0.827
4 4 0.391
5 5 0.590
6 6 0.373
7 7 0.514
8 8 0.451
9 9 0.566
10 10 0.435
Compare it to
set.seed(1)
tapply(sample(runif(5000), 10*5),
       rep(1:10, rep(5, 10)),
       mean)
which yields basically the same result:
1 2 3 4 5 6 7 8 9
0.5507529 0.4250946 0.8273149 0.3905850 0.5902823 0.3730092 0.5143829 0.4512932 0.5658460
10
0.4352546

Calculate percentage change in dataframe from first row

I want to calculate the per cent change in my dataframe using the first row as the reference. For example, my dataframe:
Set rate field
A 3 10
B 2 17
C 5 4
Using row A as the reference, I want to calculate the percentage change from row A to every other row for all columns in the dataframe.
which will result in
Set rate field
A 3 10
B -33 70
C 66.66 -60
or
Set rate field pct_rate pct-field
A 3 10 0 0
B 2 17 -33 70
C 5 4 66.66 -60
My code:
z %>%
mutate(pct_rate = (rate - lag(rate)/ rate ) * 100)
which doesn't give me the desired result
library(data.table)
df <- fread("Set rate field
A 3 10
B 2 17
C 5 4")
Solution using dplyr: we can use dplyr's first() function to refer to the first element of a vector (your attempt with lag() is very close to this solution). I also used first(rate) in the denominator to calculate the percentage difference, to get the numbers in your example...
library(dplyr)
df %>%
  mutate(pct_rate  = (rate - first(rate)) / first(rate) * 100,
         pct_field = (field - first(field)) / first(field) * 100)
Returns:
Set rate field pct_rate pct_field
1: A 3 10 0.00000 0
2: B 2 17 -33.33333 70
3: C 5 4 66.66667 -60
You can use z$rate[1] or z$field[1] to get the first element and then do the calculations with all values.
z$pct_rate <- 100 * (z$rate - z$rate[1]) / z$rate[1]
z$pct_field <- 100 * (z$field - z$field[1]) / z$field[1]
z
# Set rate field pct_rate pct_field
#1 A 3 10 0.00000 0
#2 B 2 17 -33.33333 70
#3 C 5 4 66.66667 -60
or for many columns:
rbind(z[1, ],
      do.call(cbind.data.frame,
              c(z[1], lapply(z[-1], function(x) 100 * (x - x[1]) / x[1])))[-1, ])
# Set rate field
#1 A 3.00000 10
#2 B -33.33333 70
#3 C 66.66667 -60
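On a more recent dplyr (>= 1.0.0), across() handles the many-columns case more readably. A sketch of the same calculation, assuming every column except Set is numeric (the pct_ prefix is just an illustrative naming choice):
library(dplyr)
df %>%
  mutate(across(-Set, ~ (.x - first(.x)) / first(.x) * 100,
                .names = "pct_{.col}"))
This keeps the original columns and adds pct_rate and pct_field alongside them.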

Developing a row extraction rule

I want to develop a rule to extract certain rows from a matrix. I set up the example as follows:
mat1 = data.frame(matrix(nrow = 508, ncol = 5))
mat1[1:20, 1] = rep(1, 20)
mat1[1:20, 2:5] = rnorm(20*4, 0, 1)
mat2 = data.frame(matrix(nrow = 508, ncol = 5))
seq1 <- seq(1, 3, 1)
mat2[1:27, 1] = rep(seq1, 9)
mat2[1:27, 2:5] = rnorm(27*4, 0, 1)
mat3 = data.frame(matrix(nrow = 508, ncol = 5))
mat3[1:32, 1] = rep(seq(1, 4, 1), 8)
mat3[1:32, 2:5] = rnorm(32*4, 0, 1)
colnames(mat1) = colnames(mat2) = colnames(mat3) = c("Cohort Number", "Alpha(t-1)", "date1", "date2", "date3")
mat.list <- list(mat1,mat2,mat3)
Example matrix
Cohort Number Alpha(t-1) date1 date2 date3
1 1 -1.76745451 -1.3227308 2.7099501 -0.13797329
2 1 -0.72651808 -0.8714317 1.3200554 0.76964663
3 1 -0.50325892 0.0742336 -0.6460628 0.30148135
4 1 0.79592650 0.1353875 -0.5694022 -0.59019913
5 1 1.94064961 0.2255595 0.3156252 -0.90996475
6 1 0.27134932 0.3966957 -1.9198976 0.23998928
7 1 -1.13272507 -0.8603225 -1.2042036 0.06609958
8 1 -2.12392748 1.0905405 -0.3788234 0.92850110
9 1 0.22038996 0.4500683 -1.4617004 0.58498275
10 1 0.26348734 -0.8340913 1.2631368 -1.48490518
11 1 0.26931077 -0.5230622 -0.6615288 1.45668453
12 1 -2.03067695 -0.6432484 0.4801026 0.01808834
13 1 1.25915656 -0.1116544 -0.3004298 -1.04072722
14 1 -2.27894271 -2.1058424 -0.3351053 -1.04132045
15 1 0.47742052 2.1564274 -0.4733351 -0.53152019
16 1 -1.57680089 -0.1340645 -0.3134633 0.53223567
17 1 0.25245813 -0.8243152 0.5998211 -1.01892301
18 1 0.18391447 -1.3500645 1.6059798 1.43359399
19 1 -0.09602031 1.4921338 -0.6455687 0.66385823
20 1 -0.13613759 2.2474816 0.7311762 -2.46849071
mat2[1:27,]
Cohort Number Alpha(t-1) date1 date2 date3
1 1 -0.76033920 1.317636591 -0.09684526 -0.08796725
2 2 0.05123185 -0.731591674 -0.37247406 0.04470346
3 3 -0.78460201 0.890336570 1.26737475 -0.39062992
4 1 -0.14111920 1.255008475 -0.32799815 -0.77277716
5 2 -0.46044451 1.175157970 0.82187906 0.54326905
6 3 -0.46804365 0.704203273 -2.04539007 -1.74782065
7 1 0.42009824 0.488807461 3.21093186 -0.13745029
8 2 1.27083389 -1.316989452 0.43565921 0.07870330
9 3 -0.16581119 1.872955624 -0.22399155 -0.79334562
10 1 -1.33436656 0.589311311 -1.03871415 -1.06221057
11 2 1.56584985 0.020699064 0.45691456 0.15858065
12 3 1.07756426 -0.045200151 0.05124461 -1.86633279
13 1 -1.01264994 -0.229406681 1.24954420 0.88846407
14 2 -0.09950713 -0.515798138 1.62560454 -0.20191909
15 3 -0.28319479 0.450854419 1.42963386 -1.11964154
16 1 0.51771608 -1.407248379 0.62626313 0.97775246
17 2 -0.43951262 -0.368739441 0.66564013 -0.79980882
18 3 -0.15865277 -0.231475146 0.37582330 0.93685867
19 1 -0.57758129 0.235550070 0.42480442 -0.14379249
20 2 -0.81726414 -1.207593079 -0.30000514 0.68967230
21 3 -0.72926703 -0.458849409 1.51162785 1.40921409
22 1 -0.32220454 0.334996561 1.26073381 -2.03405958
23 2 -0.51450039 -0.305634241 1.51021957 0.39775430
24 3 1.15476297 -1.040126709 -0.36192432 -0.37346894
25 1 -0.88053587 -0.006829769 -0.89855797 -0.39840858
26 2 -0.64435448 0.209561006 -0.13986834 -0.61308957
27 3 1.22492942 0.812693992 -1.32371617 -1.21852365
and
> mat3[1:32,]
Cohort Number Alpha(t-1) date1 date2 date3
1 1 -0.7657871 -0.35390862 -0.23539987 -1.8365309
2 2 -0.6631690 1.36450837 0.78403072 -0.8344993
3 3 -1.0134022 -0.28380021 0.72149463 -0.7890273
4 4 2.6419455 0.26998803 2.03606725 0.8099134
5 1 -0.1383910 0.90845134 1.09273919 0.4651443
6 2 -0.7549340 -0.23185551 2.21119705 -0.1386960
7 3 0.7296121 -1.09145187 -1.18092505 0.1510642
8 4 -0.5583415 0.71988405 0.09454476 -0.8661514
9 1 -0.2420894 -0.03215026 -2.51249946 1.1659027
10 2 -0.6434337 -0.13910557 -1.10373674 1.2377968
11 3 -0.6297123 2.09797419 0.87128407 -0.1351845
12 4 0.6674166 0.48707847 0.36373509 1.0680623
13 1 0.6254708 -0.61311671 0.82542494 1.7320687
14 2 -2.4704173 0.98460064 -1.10416042 2.9627952
15 3 -0.2544887 0.63177246 -0.39138717 1.6942072
16 4 -0.9807623 1.11882794 -0.47669974 1.2383798
17 1 -0.6900549 1.68086482 -0.01405476 -1.3099288
18 2 1.4510505 -0.04752782 1.49735258 0.2963673
19 3 -1.1355194 -1.76263532 -1.49318214 1.3524114
20 4 0.7168833 -0.76833639 0.60752304 -1.0647885
21 1 2.0004745 2.13931057 -1.35036048 -0.7694501
22 2 2.0985591 0.01569677 0.33975952 -1.4979973
23 3 0.1703261 -1.47625208 -1.13228671 0.5686501
24 4 0.2632233 -0.55672667 0.33428217 0.5341078
25 1 -0.2741324 -1.61301237 0.78861248 0.4982554
26 2 -0.8793897 -1.07266362 -0.78158128 0.9127354
27 3 0.3920579 -0.59869834 -0.76775259 1.8137107
28 4 -1.4088488 -0.54954542 0.32421016 0.7284813
29 1 -1.2421837 0.50599077 1.62464999 0.6801672
30 2 -2.8980422 0.42197236 0.45243582 1.4939070
31 3 0.3965108 -1.35877353 1.52230797 -1.6552039
32 4 0.8112229 0.51970084 0.30830797 -2.0563928
What I want to do:
For every matrix in mat.list I want to extract 6 rows of data, according to certain criteria, and place these rows as a data.frame in a list labelled Output1. I want to store all remaining rows as a data.frame in Output2.
The process:
1) Group data by cohort number.
2a. If there is 1 group (Cohort Number can only = 1), move to column 2 and extract the 6 rows of the matrix with the highest values for "Alpha(t-1)". Store these rows as a data.frame in a list named "Output1". Store all remaining rows as a data.frame in a list named "Output2".
2b. If there are 2 groups (Cohort Number can = 1 or 2), move to column 2 and extract the 3 rows with the largest "Alpha(t-1)" corresponding to Cohort Number == 1 and the 3 rows with the largest "Alpha(t-1)" corresponding to Cohort Number == 2. Place the 6 extracted rows as a data.frame in Output1. Place all remaining rows as a data.frame in Output2.
2c. If there are 3 groups (Cohort Number can = 1, 2 or 3), move to column 2 and extract the 2 rows with the largest "Alpha(t-1)" corresponding to Cohort Number == 1, the 2 rows with the largest "Alpha(t-1)" corresponding to Cohort Number == 2 and the 2 rows with the largest "Alpha(t-1)" corresponding to Cohort Number == 3. Store the 6 rows in Output1 and the remainder in Output2, as before.
2d. If there are 4 groups (Cohort Number can = 1, 2, 3 or 4), move to column 2. Extract the 2 rows with the largest "Alpha(t-1)" corresponding to Cohort Number == 1, the 2 rows with the largest "Alpha(t-1)" corresponding to Cohort Number == 2, the 1 row with the largest "Alpha(t-1)" corresponding to Cohort Number == 3 and the 1 row with the largest "Alpha(t-1)" corresponding to Cohort Number == 4. Store the 6 key rows as a data.frame in Output1. Store all remaining rows as a data.frame in the list Output2.
Desired Output:
Output1 <- list()
Output2 <- list()
Output1[[1]] = mat1 %>% group_by(`Cohort Number`) %>% top_n(6, `Alpha(t-1)`)
Output1[[2]] = mat2 %>% group_by(`Cohort Number`) %>% top_n(2, `Alpha(t-1)`)
> Output1[[1]]
# A tibble: 6 x 5
# Groups: Cohort Number [1]
`Cohort Number` `Alpha(t-1)` date1 date2 date3
<dbl> <dbl> <dbl> <dbl> <dbl>
1 1 0.796 0.135 -0.569 -0.590
2 1 1.94 0.226 0.316 -0.910
3 1 0.271 0.397 -1.92 0.240
4 1 0.269 -0.523 -0.662 1.46
5 1 1.26 -0.112 -0.300 -1.04
6 1 0.477 2.16 -0.473 -0.532
> Output1[[2]]
# A tibble: 6 x 5
# Groups: Cohort Number [3]
`Cohort Number` `Alpha(t-1)` date1 date2 date3
<dbl> <dbl> <dbl> <dbl> <dbl>
1 1 0.420 0.489 3.21 -0.137
2 2 1.27 -1.32 0.436 0.0787
3 2 1.57 0.0207 0.457 0.159
4 1 0.518 -1.41 0.626 0.978
5 3 1.15 -1.04 -0.362 -0.373
6 3 1.22 0.813 -1.32 -1.22
Overall I need a function to do this because I have over 1,000 matrices in my actual application and can't do this manually.
We can count the number of distinct values in Cohort Number and, based on that, select the value of n in top_n. When there are more than 3 distinct values, we create a vector of the number of rows to select in top_n for each Cohort Number.
library(tidyverse)
output1 <- map(mat.list, function(x) {
  dist <- n_distinct(x$`Cohort Number`, na.rm = TRUE)
  if (dist <= 3)
    x %>%
      group_by(`Cohort Number`) %>%
      top_n(6 / dist, `Alpha(t-1)`)
  else
    map2_df(list(2, 2, 1, 1),
            x %>% na.omit %>% group_split(`Cohort Number`),
            ~ .y %>% top_n(.x, `Alpha(t-1)`))
})
and for output2, we use map2 with anti_join:
output2 <- map2(mat.list, output1, anti_join)
Confirming the output
map_dbl(output1, nrow)
#[1] 6 6 6
map_dbl(output2, nrow)
#[1] 502 502 502
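If the number of cohorts can ever exceed 4, the hard-coded list(2, 2, 1, 1) can be generalized: split the budget of 6 rows as evenly as possible, giving earlier cohorts the remainder. rows_per_cohort() below is a hypothetical helper, not part of the answer above; its output could replace list(2, 2, 1, 1) via as.list(rows_per_cohort(dist)).
rows_per_cohort <- function(k, total = 6) {
  # k groups share `total` picks; the first total %% k groups get one extra
  rep(total %/% k, k) + (seq_len(k) <= total %% k)
}
rows_per_cohort(4)
#> [1] 2 2 1 1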

Normalize/scale data set

I have the following data set:
dat<-as.data.frame(rbind(10,8,2,7,10,10,1,10,14,9,2,6,10,8,10,8,10,10,7,11,10))
colnames(dat)<-"Score"
print(dat)
Score
10
8
2
7
10
10
1
10
14
9
2
6
10
8
10
8
10
10
7
11
10
These are the test scores which students obtained. A student could get a maximum of 15 or a minimum of 0 in this test (by the way, nobody got the max or the min); the lowest score actually obtained was 1 and the highest was 14.
Now, I want to normalize/scale this data to a scale of 0 to 20.
How can I achieve this in Excel, or in R?
My final goal is to normalize the scores in this test to the above scale and to compare them with another set of data for which the max and min are 5 and 0 respectively.
How can I compare these two differently scaled data sets correctly against each other?
What I tried:
I went through a lot of material on the internet and came up with the feature-scaling formula (x - min(x)) / (max(x) - min(x)), which I got from Wikipedia.
Is this method reliable?
In your case I would use the feature scaling formula you posted in your question. (x - min(x)) / (max(x) - min(x)) will essentially convert your test marks to the range 0-1.
Since your theoretical edges are 0 and 15 rather than the observed 1 and 14, your min(x) = 0 and your max(x) = 15. Once you have your marks between 0 and 1, you just multiply by 20.
i.e.
tests <- read.table(header=T, file='clipboard')
tests2 <- (tests - 0) / (15 - 0) #or equally tests / 15
And multiply by 20 to get marks between 0-20:
> tests2 * 20
Score
1 13.333333
2 10.666667
3 2.666667
4 9.333333
5 13.333333
6 13.333333
7 1.333333
8 13.333333
9 18.666667
10 12.000000
11 2.666667
12 8.000000
13 13.333333
14 10.666667
15 13.333333
16 10.666667
17 13.333333
18 13.333333
19 9.333333
20 14.666667
21 13.333333
The results are intuitive and the function is reliable. For example, the person who scored 14/15 should get the highest mark (and very close to 20), which is the case here: after the transformation they scored 18.67.
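To compare against the second data set (min 0, max 5), the same idea applies with those edges. A minimal sketch, where other is a hypothetical vector of scores on the 0-5 scale:
other  <- c(4, 2.5, 5, 1)              # hypothetical scores on the 0-5 scale
other2 <- (other - 0) / (5 - 0) * 20   # same feature scaling, edges 0 and 5
other2
#> [1] 16 10 20  4
Both data sets then live on a common 0-20 scale and can be compared directly.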
In Excel, if you want the normalized data to have a min of 0 and a max of 20, then we need to solve:
y = A * x + B
for two points.
Put the max of the raw data in C1:
=MAX(A:A)
Put the min of the raw data in C2:
=MIN(A:A)
Put the desired max in D1 and the desired min in D2. Put the formula for the A-coefficient in C3:
=($D$1-$D$2)/($C$1-$C$2)
and the formula for the B-coefficient in C4:
=$D$1-$C$3*$C$1
Finally put the scaling formula in B1:
=A1*$C$3+$C$4
and copy down.
Naturally, if you want the scaling to be independent of the observed raw max or min, you would use 15 in C1 and 0 in C2. With C1 = 15, C2 = 0, D1 = 20 and D2 = 0, this gives A = 20/15 and B = 0, so a raw score of 10 maps to 13.33, matching the R results above.
You can scale between 0 and 20 with this command in R:
newvalue <- 20 / (max(score) - min(score)) * (score - min(score))
Note that this uses the observed minimum and maximum (1 and 14), so the lowest actual score maps to exactly 0 and the highest to exactly 20, unlike the divide-by-15 approach above.
The math is fairly straightforward if the floor for both scales is 0:
new_value = new_ceiling * old_value / old_ceiling
The next formula accounts for different floors on each scale:
new_value = new_floor + (new_ceiling - new_floor) * ((old_value - old_floor) / (old_ceiling - old_floor))
which is actually the formula you posted from Wikipedia. ;)
Hope this helps!
That is very simple. Since both of those grading scales are linear with a floor of 0, a simple ratio does the work: each grade in your set needs to be multiplied by 20/15.
Here's a little R function which can help if you need to repeat the operation, and which gives you some flexibility in what you rescale to. One must also be careful of NA values, because min() and max() do not drop them by default and would then return NA; therefore I provided an option for handling NA values (they are dropped by default).
# function rescales data from 0 to 1 and optionally multiplies by new max
rescale <- function(x, new_max = 1, na.rm = TRUE) {
  as.vector(new_max * scale(x,
                            center = min(x, na.rm = na.rm),
                            scale  = max(x, na.rm = na.rm) - min(x, na.rm = na.rm)))
}
# old scores
scores <- c(10,8,2,7,10,10,1,10,14,9,2,6,10,8,10,8,10,10,7,11,10)
# new scores
data.frame(old = scores,
           new = rescale(scores, new_max = 20))
#> old new
#> 1 10 13.846154
#> 2 8 10.769231
#> 3 2 1.538462
#> 4 7 9.230769
#> 5 10 13.846154
#> 6 10 13.846154
#> 7 1 0.000000
#> 8 10 13.846154
#> 9 14 20.000000
#> 10 9 12.307692
#> 11 2 1.538462
#> 12 6 7.692308
#> 13 10 13.846154
#> 14 8 10.769231
#> 15 10 13.846154
#> 16 8 10.769231
#> 17 10 13.846154
#> 18 10 13.846154
#> 19 7 9.230769
#> 20 11 15.384615
#> 21 10 13.846154
Created on 2022-03-10 by the reprex package (v2.0.1)

Count values in a data set that exceed a threshold in R

I have 2 data sets. The first data set has a vector of p-values from 0.5 to 0.001 and the corresponding threshold that meets each p-value. For example, for 0.05 the threshold is 13: any value greater than 13 has a p-value of < 0.05. This data set contains all the thresholds I'm interested in, like so:
V1 V2
1 0.500 10
2 0.200 11
3 0.100 12
4 0.050 13
5 0.010 14
6 0.001 15
The 2nd data set is just one long list of values. I need to write an R script that counts the number of values in this set that exceed each threshold. For example, count how many values in the 2nd data set exceed 13 (and therefore have a p-value of < 0.05), and do this for each threshold value.
Here are the first 15 values of the 2nd data set (1000 total):
1 11.100816
2 8.779858
3 10.510090
4 9.503772
5 9.392222
6 10.285920
7 8.317523
8 10.007738
9 11.021283
10 9.964725
11 9.081947
12 11.253643
13 10.896120
14 10.272814
15 10.282408
A function which will help you count the values exceeding a single threshold (here 13, the 0.05 cutoff):
length(which(dat2$V2 > 13))  # or, equivalently: sum(dat2$V2 > 13)
Assuming dat1 and dat2 both have a V2 column, something like this:
colSums(outer(dat2$V2, setNames(dat1$V2, dat1$V2), ">"))
# 10 11 12 13 14 15
# 9 3 0 0 0 0
(reads as follows: 9 items have a value greater than 10, 3 items have a value greater than 11, etc.)
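An equivalent, arguably more readable way to get the same counts is to loop over the thresholds instead of building the full outer() matrix:
sapply(setNames(dat1$V2, dat1$V2), function(th) sum(dat2$V2 > th))
# 10 11 12 13 14 15
#  9  3  0  0  0  0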
