Divide a vector by different values based on the result of the division - r

I have a df like this:
x y z
<dbl> <dbl> <dbl>
1 408001.9 343 0
2 407919.2 343 0
3 407839.6 343 0
4 407761.2 343 0
5 407681.7 343 0
6 407599.0 343 0
7 407511.0 343 0
8 407420.5 343 0
9 407331.0 343 0
10 407242.0 343 0
11 407152.7 343 0
12 407062.5 343 0
13 406970.7 343 0
14 406876.6 342 0
15 406777.1 342 0
16 406671.0 342 0
17 406560.9 342 0
18 406449.4 342 0
19 406339.0 342 0
20 406232.5 342 0
... ... ... ...
with x decreasing.
And a vector like
vec = c(a1, a2, a3, a4, a5, a6, ...)
with a1 < a2 < a3 < a4 ...
Now I want to divide df$x by vec[1], which will give the same (rounded) result as df$y.
But when the value in df$z drops by one to 342, I want to divide df$x by vec[2] from then on, to get the new df$z values.
From here on the result will differ from df$y, because for df$y the divisor is always vec[1] and never changes.
Every time the value I get for df$z drops by one, the following df$z values shall be calculated with the corresponding vec[i], where i is the number of drops so far plus 1.
In the end I want a vector df$z whose values are df$x / vec[i], where vec[i] depends on what the last value of df$z was.
Reproducible example:
test <- data.frame(x = sort(seq(500, 600, 2), decreasing = TRUE))
vec <- seq(10, 10.9, 0.03)
for (i in 1:31) {
  test[i + 1] <- round(test$x / vec[i])
}
This will give you a df with one column for every value of vec that test$x was divided by.
Now, in the end, my vector shall contain the values of col2 until the value in col2 drops from 60 to 59. After that I want the values from col3 until the value in col3 drops below 59 to 58. Then I want the values from col4, and so on.
How can I achieve this with any data (like mine above, which is not linearly distributed like this example)?
I tried some for and while loops, but none worked. I didn't even get close to what I want.
I think my problem is that I don't know how to make the condition depend on a value (the value of df$z at point i) that I want to calculate in the same operation. I want to calculate the value of df$z[i] with the vec[t] that has been used so far. But if the value of df$z drops by one at a certain observation i, vec[t+1] shall be used for the division from then on.
Thanks for your help.

I hope I've understood what you are asking. This might be it...
test <- data.frame(x = sort(seq(500, 600, 2), decreasing = TRUE))
vec <- seq(10, 10.9, 0.03)
# this function determines the index of `vec` to use for each element of x
xcol <- function(v){
  x <- rep(NA, length(v))
  x[1] <- 1
  for (i in 2:length(v)) {
    # carry the current index forward...
    x[i] <- x[i - 1]
    # ...and advance it when the rounded quotient drops relative to the previous row
    if (round(v[i] / vec[x[i]]) < round(v[i - 1] / vec[x[i]])) {
      x[i] <- x[i] + 1
    }
  }
  return(x)
}
test$xcol <- xcol(test$x)
test$z <- round(test$x / vec[test$xcol])
test
x xcol z
1 600 1 60
2 598 1 60
3 596 1 60
4 594 2 59
5 592 2 59
6 590 2 59
7 588 2 59
8 586 3 58
9 584 3 58
10 582 3 58
11 580 3 58
12 578 4 57
...
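Assuming your real df (with columns x, y, z) and the real vec are in scope, the same function should carry over directly; a minimal sketch, not a tested run against the actual data:
# sketch: recompute df$z from df$x with the drop-dependent divisor
# note that xcol() reads `vec` from the enclosing environment
df$xcol <- xcol(df$x)
df$z <- round(df$x / vec[df$xcol])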

Why does the frequency reduce if I use the ifelse function in R? Is there a way to create categories from the combination of 2 variables/columns?

When I do
table(df$strategy.x)
0 1 2 3
70 514 223 209
table(df$strategy.y)
0 1 2 3
729 24 7 4
I want to create a variable with both of these combined. I tried this
df <- df %>%
mutate(nstrategy1 = ifelse(strategy.x==1| strategy.y==1 , 1, 0))
table(df$nstrategy1)
0 1
399 519
I am supposed to get 514 + 24 = 538 but I got 519 instead
df <- df %>% mutate(nstrategy2 = ifelse(strategy.x==2| strategy.y==2 , 1, 0))
table(df$nstrategy2)
0 1
578 228
Similarly, I am supposed to get 223 + 7 = 230, but I got 228 instead
Is there a good way to merge both strategy.x and strategy.y and end up with a table like the following with 4 categories?
0 1 2 3
799 538 230 213
table(mtcars$am) # 13 1's
table(mtcars$vs) # 14 1's
mtcars$ones = ifelse(mtcars$am == 1 | mtcars$vs == 1, 1, 0)
table(mtcars$ones) # 20 1's < 13 + 14 = 27
Why is it showing only 20 1's instead of 27? It's because there are 7 + 6 + 7 = 20 cars with a 1 in at least one of am and vs. There are 13 with am == 1 (6 + 7), and 14 with vs == 1 (7 + 7). Seven cars are in the bottom-right cell because they have 1's in both dimensions, and those are the ones you are expecting/seeking to count twice.
table(mtcars$am, mtcars$vs)
# 0 1
# 0 12 7
# 1 6 7
The simplest way to get the sum of the two results would be by adding the two table objects:
table(mtcars$am) + table(mtcars$vs)
# 0 1
# 37 27
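Applied to the question's data, the same table addition should reproduce the desired counts directly, provided strategy.x and strategy.y share the same four levels (a sketch using the question's own figures):
table(df$strategy.x) + table(df$strategy.y)
#   0   1   2   3
# 799 538 230 213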

How to know if a number is in a determinated interval in R

I have a dataset with 3 columns: Default, Height and Weight.
I binned the variables and stored the result (I have to do it this way) in a list. Every bin has a WoE (weight of evidence) value associated with it, and now I want to write those WoE values back into the original data frame, depending on which bucket each observation falls into:
For example, the data frame
df1 <- data.frame(default = sample(c(0, 1), replace = TRUE, size = 100, prob = c(0.9, 0.1)),
                  height = sample(150:180, 100, replace = TRUE),
                  weight = sample(50:80, 100, replace = TRUE))
> head(df1)
# default height weight
# 1 0 172 54
# 2 0 169 71
# 3 0 164 61
# 4 0 156 55
# 5 0 180 66
# 6 0 162 63
The bins (I will just show the first one)
bins <- lapply(c("height", "weight"), function(x)
  woe.binning(df1, "default", x,
              min.perc.total = 0.05,
              min.perc.class = 0.05,
              event.class = 1,
              stop.limit = 0.05)[2])
# [[1]]
# [[1]][[1]]
# woe cutpoints.final cutpoints.final[-1] iv.total.final 0 1 col.perc.a col.perc.b iv.bins
# (-Inf,156] -46.58742 -Inf 156 0.1050725 21 5 0.24137931 0.38461538 0.0667299967
# (156,168] 23.91074 156 168 0.1050725 34 4 0.39080460 0.30769231 0.0198727638
# (168,169] -10.91993 168 169 0.1050725 6 1 0.06896552 0.07692308 0.0008689599
# (169, Inf] 25.85255 169 Inf 0.1050725 26 3 0.29885057 0.23076923 0.0176007627
# Missing NA Inf Missing 0.1050725 0 0 0.00000000 0.00000000
Now I want to see which bins my data fall into.
My desired output is something similar to this
# default height weight woe_height woe_weight
# 1 0 160 54 23.91074 -8.180032
# 2 0 140 71 -46.58742 -7.640947
Is there any way to do it? The main problem I see here is that the intervals (a,b) are strings. I was thinking about using substr() or something similar to turn the strings into logical conditions, but I don't think that would work, and it's not very elegant.
Any help will be welcome, thanks in advance.
Does this work fine for you?
apply_woe_binning <- function(df, x){
  # woe binning
  w <- woe.binning(df, "default", x,
                   min.perc.total = 0.05,
                   min.perc.class = 0.05,
                   event.class = 1,
                   stop.limit = 0.05)[[2]]
  # create new column name
  new_col <- paste("woe", x, sep = "_")
  # assign each value to a bin; cut() returns a factor whose integer codes
  # follow the interval order of the woe table
  cuts <- cut(df[[x]], w$cutpoints.final)
  # add new column: indexing rows by a factor uses its integer codes
  df[[new_col]] <- w[cuts, "woe", drop = TRUE]
  df
}
# one by one
df2 <- apply_woe_binning(df1, "height")
df2 <- apply_woe_binning(df2, "weight")
# or in a functional style
df2 <- Reduce(function(y, x) apply_woe_binning(df = y, x = x),
              c("height", "weight"),
              init = df1)
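The factor-as-index trick is worth seeing in isolation. A tiny standalone illustration with made-up breaks (not the woe.binning output):
# cut() maps numeric values to interval bins; as.integer() exposes the bin number,
# which is what lets the factor act as a row index into the woe table
cuts <- cut(c(150, 160, 169, 175), breaks = c(-Inf, 156, 168, 169, Inf))
as.integer(cuts)
# [1] 1 2 3 4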

How to find correlation coefficients in a loop?

I have a dataset like this:
Account_tenure_years = c(982,983,984,985,986,987,988)
N=c(12328,18990,21255,27996,32014,15487,4347)
Y=c(76,64,61,76,94,55,11)
df_table_account_tenure_vs_PPC = data.frame(Account_tenure_years,N,Y)
The dataset looks like this:
Account_tenure_years N Y
982 12328 76
983 18990 64
984 21255 61
985 27996 76
986 32014 94
987 15487 55
988 4347 11
What I want to do is this:
I want to take any two of the Account_tenure_years values, for example 982 and 983, and find the correlation coefficient with the N and Y columns, i.e. I want to find the correlation coefficient of the table below:
Account_tenure_years N Y
982 12328 76
983 18990 64
Now I want to repeat this 7C2 times, i.e. 21 times, taking a different pair of rows and finding the correlation coefficient in each case.
In the next iteration I would want:
Account_tenure_years N Y
983 18990 64
984 21255 61
And find its correlation coefficient. After I have collected all 21 correlation coefficients, I average them to get a mean correlation coefficient for the entire dataset.
How do I do this in R?
OK, let's get this straight: if I find the correlation coefficient between the columns Account_tenure_years and N, and likewise between the columns Account_tenure_years and Y, and I get a negative coefficient in each case, can we infer anything from that?
Calculating a correlation coefficient for each pair of rows is not ideal. It should be calculated over the entire dataset:
Account_tenure_years <- c(982, 983, 984, 985, 986, 987, 988)
N <- c(12328, 18990, 21255, 27996, 32014, 15487, 4347)
Y <- c(76, 64, 61, 76, 94, 55, 11)
df <- data.frame(Account_tenure_years, N, Y)
cor(df$Account_tenure_years, df$N)
cor(df$Account_tenure_years, df$Y)
Output is as shown below:
> cor(df$Account_tenure_years,df$N)
[1] -0.1662244
> cor(df$Account_tenure_years,df$Y)
[1] -0.5332263
You can infer that the data are negatively correlated: as Account_tenure_years increases, N and Y tend to decrease, and vice versa. Note that the correlation with N (-0.17) is weak, while the one with Y (-0.53) is moderate.
Please feel free to correct me!
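With only seven observations it is also worth checking whether these coefficients are distinguishable from zero; cor.test() reports a p-value and confidence interval (a quick sketch on the df above):
# test whether each correlation differs significantly from zero
cor.test(df$Account_tenure_years, df$N)
cor.test(df$Account_tenure_years, df$Y)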
It would be easier to do this if you transpose your data, and the best part is that you don't even need to write a loop.
Try this:
dt <- data.table::fread("
Account_tenure_years N Y
982 12328 76
983 18990 64
984 21255 61
985 27996 76
986 32014 94
987 15487 55
988 4347 11
")
# transpose the N and Y columns, keeping the years as column names
dt.t <- as.data.frame(t(dt[, 2:3]))
colnames(dt.t) <- dt$Account_tenure_years
dt.t
#> 982 983 984 985 986 987 988
#> N 12328 18990 21255 27996 32014 15487 4347
#> Y 76 64 61 76 94 55 11
# calculate correlation matrix, read more help(cor)
cor(dt.t)
#> 982 983 984 985 986 987 988
#> 982 1 1 1 1 1 1 1
#> 983 1 1 1 1 1 1 1
#> 984 1 1 1 1 1 1 1
#> 985 1 1 1 1 1 1 1
#> 986 1 1 1 1 1 1 1
#> 987 1 1 1 1 1 1 1
#> 988 1 1 1 1 1 1 1
(Every entry is exactly 1 because each column holds only two observations, N and Y, and N > Y in every column, so any pair of columns lies on a line with matching slope sign.)
Created on 2018-07-20 by the reprex package (v0.2.0.9000).
I do not understand how you want to compute correlation coefficients between two variables with only one observation for each. Therefore, I assume you have more rows than provided here.
First define all combinations:
combinations <- combn(df_table_account_tenure_vs_PPC$Account_tenure_years, 2)
For each combination, you want to extract the corresponding rows and compute the correlation coefficients for each variable:
coefficients <- apply(combinations, 2, function(x, df_table_account_tenure_vs_PPC){
  coef <- sapply(c("N", "Y"), function(v, x, df_table_account_tenure_vs_PPC){
    # select the rows matching each year, then correlate the variable across them
    cor(df_table_account_tenure_vs_PPC[df_table_account_tenure_vs_PPC$Account_tenure_years == x[1], v],
        df_table_account_tenure_vs_PPC[df_table_account_tenure_vs_PPC$Account_tenure_years == x[2], v])
  }, x, df_table_account_tenure_vs_PPC)
  return(c(x, coef))
}, df_table_account_tenure_vs_PPC)
Then, you can aggregate your results in a data.frame:
df <- as.data.frame(t(coefficients))
colnames(df) <- c("Year1", "Year2", "N_cor", "Y_cor")
This should work. Please tell me if you have any problem.
Again, make sure you have more than one observation in each condition if you want a meaningful correlation coefficient.
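From there, the mean coefficient the question asks for is one line per variable (a sketch, assuming the df assembled above, with na.rm = TRUE in case degenerate pairs produced NAs):
mean(df$N_cor, na.rm = TRUE)
mean(df$Y_cor, na.rm = TRUE)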

R - Sum range over lookback period, divided sum of look back - excel to R

I am looking to work out a percentage total over a look-back range in R.
I know how to do this in excel with the following formula:
=SUM(B2:B4)/SUM(B2:B4,C2:C4)
This sums column B over a range of the current row looking back 3 lines. It then divides this sum by the total sum of columns B and C, again looking back 3 lines.
I am looking to achieve the same calculation in R to run across my matrix.
The output would look something like this:
adv dec perct
1 69 376
2 113 293
3 270 150 0.355625492
4 74 371 0.359559402
5 308 96 0.513790386
6 236 173 0.491255962
7 252 134 0.663886572
8 287 129 0.639966969
9 219 187 0.627483444
This is a line of code I could perhaps add the look-back range to (with parentheses added around the denominator):
perct <- apply(data.matrix[, c('adv', 'dec')], 1, function(x) { x[1] / (x[1] + x[2]) })
If I could get x[1] to sum the previous 3-line range, and x[2] to also sum the previous 3-line range, that would do it.
Still learning how to apply look-forward and look-back periods within R, so any additional explanation in the answer would be appreciated!
Here are some approaches. The first 3 use rollsumr and/or rollapplyr from zoo, and the last one uses only base R.
1) rollsumr Create a matrix with rollsumr whose columns contain the rolling sums, convert that to row proportions, and take the "adv" column. Finally assign that to a new column frac in DF. This approach has the shortest code.
library(zoo)
DF$frac <- prop.table(rollsumr(DF, 3, fill = NA), 1)[, "adv"]
giving:
> DF
adv dec frac
1 69 376 NA
2 113 293 NA
3 270 150 0.3556255
4 74 371 0.3595594
5 308 96 0.5137904
6 236 173 0.4912560
7 252 134 0.6638866
8 287 129 0.6399670
9 219 187 0.6274834
1a) This variation is similar except instead of using prop.table we write out the ratio. The code is longer but you may find it clearer.
m <- rollsumr(DF, 3, fill = NA)
DF$frac <- with(as.data.frame(m), adv / (adv + dec))
1b) This is a variation of (1) that is the same except it uses a magrittr pipeline:
library(magrittr)
DF %>% rollsumr(3, fill = NA) %>% prop.table(1) %>% `[`(TRUE, "adv") -> DF$frac
2) rollapplyr We could use rollapplyr with by.column = FALSE like this. The result is the same.
ratio <- function(x) sum(x[, "adv"]) / sum(x)
DF$frac <- rollapplyr(DF, 3, ratio, by.column = FALSE, fill = NA)
3) Yet another variation is to compute the numerator and denominator separately:
DF$frac <- rollsumr(DF$adv, 3, fill = NA) /
rollapplyr(DF, 3, sum, by.column = FALSE, fill = NA)
4) base This uses embed followed by rowSums on each column to get the rolling sums and then uses prop.table as in (1).
DF$frac <- prop.table(sapply(lapply(rbind(NA, NA, DF), embed, 3), rowSums), 1)[, "adv"]
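Since embed() is less commonly seen, here is a quick look at what it produces; each row holds the current value alongside the previous ones, which is why rowSums yields rolling sums:
# embed(x, 3) lines up x[i], x[i-1], x[i-2] in each row
embed(1:5, 3)
#      [,1] [,2] [,3]
# [1,]    3    2    1
# [2,]    4    3    2
# [3,]    5    4    3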
Note: The input used in reproducible form is:
Lines <- "adv dec
1 69 376
2 113 293
3 270 150
4 74 371
5 308 96
6 236 173
7 252 134
8 287 129
9 219 187"
DF <- read.table(text = Lines, header = TRUE)
Consider an sapply that loops through the number of rows in order to index two rows back:
DF$pred <- sapply(seq(nrow(DF)), function(i)
  ifelse(i >= 3,
         sum(DF$adv[(i-2):i]) / (sum(DF$adv[(i-2):i]) + sum(DF$dec[(i-2):i])),
         NA))
DF
# adv dec pred
# 1 69 376 NA
# 2 113 293 NA
# 3 270 150 0.3556255
# 4 74 371 0.3595594
# 5 308 96 0.5137904
# 6 236 173 0.4912560
# 7 252 134 0.6638866
# 8 287 129 0.6399670
# 9 219 187 0.6274834

Custom sorting of a dataframe in R

I have a binomial dataset that looks like this:
df <- data.frame(replicate(4,sample(1:200,1000,rep=TRUE)))
addme <- data.frame(replicate(1,sample(0:1,1000,rep=TRUE)))
df <- cbind(df,addme)
df <- df[order(df$replicate.1..sample.0.1..1000..rep...TRUE..), ]
The data is currently sorted in a way that shows the instances belonging to the 0 group first, then the ones belonging to the 1 group. Is there a way I can sort the data in a 0-1-0-1-0... fashion? I mean to show a row that belongs to the 0 group, then a row belonging to the 1 group, then the 0 group again, and so on...
All I can think about is complex functions. I hope there's a simple way around it.
Thank you,
Here's an attempt, which will add any extra 1's at the end:
First make some example data:
set.seed(2)
df <- data.frame(replicate(4, sample(1:200, 10, rep = TRUE)),
                 addme = sample(0:1, 10, rep = TRUE))
Then order:
with(df, df[unique(as.vector(rbind(which(addme==0),which(addme==1)))),])
# X1 X2 X3 X4 addme
#2 141 48 78 33 0
#1 37 111 133 3 1
#3 115 153 168 163 0
#5 189 82 70 103 1
#4 34 37 31 174 0
#6 189 171 98 126 1
#8 167 46 72 57 0
#7 26 196 30 169 1
#9 94 89 193 134 1
#10 110 15 27 31 1
#Warning message:
#In rbind(which(addme == 0), which(addme == 1)) :
# number of columns of result is not a multiple of vector length (arg 1)
Here's another way using dplyr, which makes it suitable for within-group ordering. It's also probably pretty quick. If there are unbalanced numbers of 0's and 1's, it will leave the extras at the end.
library(dplyr)
df %>%
  arrange(addme) %>%
  mutate(n0 = sum(addme == 0),
         # zeros keep positions 1..n0; ones are shifted to 1.5, 2.5, ... so they interleave
         orderme = seq_along(addme) - (n0 * addme) + (0.5 * addme)) %>%
  arrange(orderme) %>%
  select(-n0, -orderme)
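A base R sketch of the same interleaving idea, for comparison (assumes an addme column as above): rank each row within its group with ave(), then order by that within-group rank, breaking ties so 0 precedes 1:
# within-group position via ave(), then interleave by (position, group)
df[order(ave(df$addme, df$addme, FUN = seq_along), df$addme), ]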
