Binary coding of pairwise comparisons in R

I'm working on a questionnaire where three statements are always presented at a time and participants have to rank order them according to their preferences (3 = most preferred, 1 = least preferred).
For further analyses I have to transform these rankings into pairwise comparisons within each block of three. Below is code that does this for the first six items (2 blocks) of the questionnaire.
data <- matrix(c(1,2,3,1,2,3,2,1,3,3,1,2),2,6)
i1i2 <- ifelse(data[,1] > data[,2], 1, 0)
i1i3 <- ifelse(data[,1] > data[,3], 1, 0)
i2i3 <- ifelse(data[,2] > data[,3], 1, 0)
i4i5 <- ifelse(data[,4] > data[,5], 1, 0)
i4i6 <- ifelse(data[,4] > data[,6], 1, 0)
i5i6 <- ifelse(data[,5] > data[,6], 1, 0)
result <- cbind(i1i2, i1i3, i2i3, i4i5, i4i6, i5i6)
print(result)
I extended this code to fit a 45-item questionnaire and it works fine. Now I'd like to write a function which does this job automatically for n items. I experimented with while and for loops but couldn't succeed.
Can anyone give me a hint, a reference to the relevant functions I need, or an example of how to do this?
Related: Brown, A., & Maydeu-Olivares, A. (2011). Item response modeling of forced-choice questionnaires. Educational and Psychological Measurement, 71(3), 460–502.

First off, drop the individual ifelse calls and apply a single one at the end instead:
i1i2 <- data[,1] > data[,2]
i1i3 <- data[,1] > data[,3]
i2i3 <- data[,2] > data[,3]
…
result <- ifelse(cbind(i1i2, i1i3, i2i3, i4i5, i4i6, i5i6), 1, 0)
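As an aside (my addition, not part of the original answer): since TRUE/FALSE coerce to 1/0 in R, you can skip ifelse() entirely:
result <- +cbind(i1i2, i1i3, i2i3, i4i5, i4i6, i5i6)  # unary + turns the logical matrix into 0/1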
Next, avoid unnecessary repetition.
three_way_compare <- function(data, index) {
    cbind(data[, index + 0] > data[, index + 1],
          data[, index + 0] > data[, index + 2],
          data[, index + 1] > data[, index + 2])
}
result <- ifelse(do.call(cbind, lapply(seq(1, ncol(data), by = 3),
                                       three_way_compare, data = data)), 1, 0)
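As a quick sanity check (my addition), running this on the 2 x 6 example matrix from the question reproduces the hand-coded result:
data <- matrix(c(1,2,3,1,2,3,2,1,3,3,1,2), 2, 6)
ifelse(do.call(cbind, lapply(seq(1, ncol(data), by = 3),
                             three_way_compare, data = data)), 1, 0)
#      [,1] [,2] [,3] [,4] [,5] [,6]
# [1,]    0    0    1    0    1    1
# [2,]    1    0    0    0    0    1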

While there are probably more efficient alternatives, you could convert your matrix to a list of vectors of length 3 and apply the ifelse statements to them through a function.
Update:
If you have multiple rows in your matrix, you need to use t(data) inside split() to get the correct values.
# Put data in lists of 3
blocks <- split(t(data), ceiling(seq_along(data)/3))
# Define function
comparison <- function(x) {
    i1 <- ifelse(x[1] > x[2], 1, 0)
    i2 <- ifelse(x[1] > x[3], 1, 0)
    i3 <- ifelse(x[2] > x[3], 1, 0)
    return(cbind(i1, i2, i3))
}
# Apply function to list
lapply(blocks, comparison)
# $`1`
# i1 i2 i3
# [1,] 0 0 1
#
# $`2`
# i1 i2 i3
# [1,] 0 1 1
#
# $`3`
# i1 i2 i3
# [1,] 1 0 0
#
# $`4`
# i1 i2 i3
# [1,] 0 0 1
# Or unlist to get vector
unlist(lapply(blocks, comparison))
# 11 12 13 21 22 23 31 32 33 41 42 43
# 0 0 1 0 1 1 1 0 0 0 0 1
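If you need the same one-row-per-respondent layout as in the question's result, you can reshape the unlisted vector (my addition; this works because t(data) orders the blocks respondent by respondent):
matrix(unlist(lapply(blocks, comparison)), nrow = nrow(data), byrow = TRUE)
#      [,1] [,2] [,3] [,4] [,5] [,6]
# [1,]    0    0    1    0    1    1
# [2,]    1    0    0    0    0    1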

Related

Randomly remove some numeric data from a matrix in R?

I have a large data matrix with many numeric values (counts) in it. I would like to remove 10% of all counts. So, for example, a matrix which looks like this:
30 10
0 20
The sum of all counts here is 60. 10% of 60 is 6. So I want to randomly remove 6. A correct output could be:
29 6
0 19
(As you can see it removed 1 from 30, 4 from 10 and 1 from 20). There cannot be negative values.
How could I program this in R?
Here is a way. It subtracts 1 from positive matrix elements until the required total to remove is reached.
subtract_int <- function(X, n){
    inx <- which(X != 0, arr.ind = TRUE)
    N <- nrow(inx)
    while(n > 0){
        i <- sample(N, 1)
        if(X[ inx[i, , drop = FALSE] ] > 0){
            X[ inx[i, , drop = FALSE] ] <- X[ inx[i, , drop = FALSE] ] - 1
            n <- n - 1
        }
        if(any(X[inx] == 0)){
            inx <- which(X != 0, arr.ind = TRUE)
            N <- nrow(inx)
        }
    }
    X
}
set.seed(2021)
to_remove <- round(sum(A)*0.10)
subtract_int(A, to_remove)
# [,1] [,2]
#[1,] 30 6
#[2,] 0 18
Data
A <- structure(c(30, 0, 10, 20), .Dim = c(2L, 2L))
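Note that although the positions chosen are random, the function always removes exactly n counts, so the new total is deterministic:
sum(subtract_int(A, to_remove))  # always 60 - 6 = 54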
Maybe this helps you at least to get on the right track; it's nothing more than a draft though:
randomlyRemove <- function(mat) {
    to_remove <- round(sum(mat) * 0.1)  # total amount to take away
    while (to_remove > 0) {
        # pick a random cell
        x <- round(runif(1, 1, dim(mat)[1]), digits = 0)
        y <- round(runif(1, 1, dim(mat)[2]), digits = 0)
        # draw a random integer amount, but never take more than the cell holds
        dec <- min(round(runif(1, 0, to_remove)), mat[x, y])
        mat[x, y] <- mat[x, y] - dec
        to_remove <- to_remove - dec
    }
    return(mat)
}
You might want to play with the random number generator process to get more evenly distributed subtractions.
edit: added round(digits = 0) to get only integer (dimension) values and modified the random (dimension) value generation to start from 1 (not zero).
I think we can make it work using sample(). This solution is a lot more compact.
The data
A <- structure(c(30, 0, 11, 20), .Dim = c(2L, 2L))
sum(A)
#> [1] 61
The logic
UseThese <- (1:length(A))[A > 0]  # choose indices that may be modified because > 0
Sample <- sample(UseThese, round(sum(A) * 0.1), replace = TRUE)  # draw a sample of indices
# Tabulate how often each eligible index was drawn (including zero times),
# so the subtraction vector always lines up with UseThese
A[UseThese] <- A[UseThese] - as.vector(table(factor(Sample, levels = UseThese)))
Check the result
A
#> [,1] [,2]
#> [1,] 28 8
#> [2,] 0 19
sum(A) # should be the value above minus 6
#> [1] 55
One disadvantage of this solution is that it could still lead to negative values: an index with a small count can be drawn more often than its count. So check with:
any(A < 0)
#> [1] FALSE
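A variant that avoids this problem entirely (my suggestion, not part of the original answer) is to sample individual count units without replacement, so no cell can ever lose more than it holds:
A <- structure(c(30, 0, 11, 20), .Dim = c(2L, 2L))  # start again from the original data
units   <- rep(seq_along(A), as.vector(A))          # one entry per unit of count
removed <- sample(units, round(sum(A) * 0.1))       # draw 6 units without replacement
A[] <- A - tabulate(removed, nbins = length(A))     # subtract the per-cell tallies
any(A < 0)
#> [1] FALSE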

Choose closest x elements by index in a list/vector

If I have a vector such as x <- c(1,2,3,4,5,6,7,8,9), I want a function f(vector, index, num) that takes the vector and gives me the num elements "closest" to the one at that index.
Examples:
f(x,3,4) = c(1,2,4,5)
f(x,1,5) = c(2,3,4,5,6)
f(x,8,3) = c(6,7,9)
Since with an odd num there is also the issue of whether to pick the extra element from the left or the right side, let's go with the left side (the right side would be OK too),
i.e. f(x,4,5) = c(1,2,3,5,6) and f(x,7,3) = c(5,6,8)
I hope my question is clear, thank you for any help/responses!
edit: The original vector of c(1:9) is arbitrary; the vector could be a vector of strings, or a vector of length 1000 with shuffled numbers with repeats, etc.,
e.g. c(1,7,4,2,3,7,2,6,234,56,8)
num_closest_by_indices <- function(v, idx, num) {
    # Try the base case, where idx is not within (num/2) of the edge
    i <- abs(seq_along(v) - idx)
    i[idx] <- +Inf # sentinel
    # If there are not enough elements in the base case, incrementally add more
    for (cutoff_idx in seq(floor(num/2), num)) {
        if (sum(i <= cutoff_idx) >= num) {
            # This will add two extra indices every iteration. Strictly, with an
            # even count we should add the leftmost one first and continue, to
            # break ties towards the left.
            return(v[i <= cutoff_idx])
        }
    }
}
Here's an illustration of this algorithm: we rank the indices in order of desirability, then pick the lowest num legal ones:
> v <- 1:9; idx <- 3
> seq_along(v)
1 2 3 4 5 6 7 8 9
> seq_along(v) - idx
-2 -1 0 1 2 3 4 5 6
> i <- abs(seq_along(v) - idx)
2 1 0 1 2 3 4 5 6
> i[idx] <- +Inf # sentinel to prevent us returning the element itself
2 1 Inf 1 2 3 4 5 6
Now we can just find the num elements with the smallest values (breaking ties arbitrarily, unless you have a preference for the left).
Our first guess is all indices <= (num/2); this might not be enough if idx is within (num/2) of the start/end.
> i <= 2
TRUE TRUE FALSE TRUE TRUE FALSE FALSE FALSE FALSE
> v[i <= 2]
1 2 4 5
So, adapting @dash2's code to handle the corner cases where some indices are illegal, i.e. nonpositive or greater than length(vec): if elems[1] < 1 there are 1 - elems[1] illegal indices at the low end, so we shift the whole window up by that amount (and analogously down when max(elems) runs past the end).
Notes:
in the end the code is simpler and faster handling it by three piecewise cases.
it actually seems to simplify things if we pick (num+1) indices, then remove elem before returning the answer, using setdiff(elems, elem).
Like so:
f <- function(vec, elem, n) {
    elems <- seq(elem - ceiling(n/2), elem + floor(n/2))
    if (max(elems) > length(vec)) elems <- elems - (max(elems) - length(vec))
    if (elems[1] < 1) elems <- elems + (1 - elems[1])
    elems <- setdiff(elems, elem)
    vec[elems]
}
Giving results:
> f(1:9, 1, 5)
[1] 2 3 4 5 6
> f(1:9, 9, 5)
[1] 4 5 6 7 8
> f(1:9, 2, 5)
[1] 1 3 4 5 6
> f(1:9, 4, 5)
[1] 1 2 3 5 6
> f(1:9, 4, 4)
[1] 2 3 5 6
> f(1:9, 2, 4)
[1] 1 3 4 5
> f(1:9, 1, 4)
[1] 2 3 4 5
> f(1:9, 9, 4)
[1] 5 6 7 8
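And, per the edit to the question, the input need not be 1:9; f() only works with positions, so it handles strings or arbitrary numbers too, e.g.:
> f(c("a","b","c","d","e"), 3, 2)
[1] "b" "d"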
Start the function with the variable argument x first, and the reference table and n after:
.nearest_n <- function(x, table, n) {
The algorithm assumes that table is numeric, without any duplicates, and with all values finite; n has to be less than or equal to the length of table.
    ## assert & setup
    stopifnot(
        is.numeric(table), !anyDuplicated(table), all(is.finite(table)),
        n <= length(table)
    )
Sort the table and then 'clamp' the maximum and minimum values:
    ## sort and clamp
    table <- c(-Inf, sort(table), Inf)
    len <- length(table)
Find the interval in table where x occurs; findInterval() uses an efficient search. Use the interval index as the initial lower index, and add 1 for the upper index, making sure to stay in bounds:
    ## where to start?
    lower <- findInterval(x, table)
    upper <- min(lower + 1L, len)
Find the nearest n neighbors by comparing the lower and upper index distances to x, recording the nearer value, and incrementing the lower or upper index as appropriate while staying in bounds:
    ## find
    nearest <- numeric(n)
    for (i in seq_len(n)) {
        if (abs(x - table[lower]) < abs(x - table[upper])) {
            nearest[i] <- table[lower]
            lower <- max(1L, lower - 1L)
        } else {
            nearest[i] <- table[upper]
            upper <- min(len, upper + 1L)
        }
    }
Then return the solution and finish the function:
    nearest
}
The code might seem verbose, but it is actually relatively efficient, because the only operations on the entire vector (sort(), findInterval()) are implemented efficiently in R.
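For example, with a single query point (my addition; same table as in the vectorized example further down):
table <- c(5, 29, 41, 42, 50, 51, 79, 83, 86, 91)
.nearest_n(30, table, 4)
# [1] 29 41 42 50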
A particular advantage of this approach is that it can be vectorized in its first argument, calculating the test for using lower (use_lower = ...) as a vector and using pmin() / pmax() as clamps.
.nearest_n <- function(x, table, n) {
    ## assert & setup
    stopifnot(
        is.numeric(table), !anyDuplicated(table), all(is.finite(table)),
        n <= length(table)
    )
    ## sort and clamp
    table <- c(-Inf, sort(table), Inf)
    len <- length(table)
    ## where to start?
    lower <- findInterval(x, table)
    upper <- pmin(lower + 1L, len)
    ## find
    nearest <- matrix(0, nrow = length(x), ncol = n)
    for (i in seq_len(n)) {
        use_lower <- abs(x - table[lower]) < abs(x - table[upper])
        nearest[, i] <- ifelse(use_lower, table[lower], table[upper])
        lower[use_lower] <- pmax(1L, lower[use_lower] - 1L)
        upper[!use_lower] <- pmin(len, upper[!use_lower] + 1L)
    }
    ## return
    nearest
}
For instance
> set.seed(123)
> table <- sample(100, 10)
> sort(table)
[1] 5 29 41 42 50 51 79 83 86 91
> .nearest_n(c(30, 20), table, 4)
[,1] [,2] [,3] [,4]
[1,] 29 41 42 50
[2,] 29 5 41 42
Generalize this by taking any argument and coercing it to the required form, using a reference look-up table table0 and indexes into it, table1:
nearest_n <- function(x, table, n) {
    ## coerce to common form
    table0 <- sort(unique(c(x, table)))
    x <- match(x, table0)
    table1 <- match(table, table0)
    ## find nearest
    m <- .nearest_n(x, table1, n)
    ## result in original form
    matrix(table0[m], nrow = nrow(m))
}
As an example...
> set.seed(123)
> table <- sample(c(letters, LETTERS), 30)
> nearest_n(c("M", "Z"), table, 5)
[,1] [,2] [,3] [,4] [,5]
[1,] "o" "L" "O" "l" "P"
[2,] "Z" "z" "Y" "y" "w"

Find closest value with condition

I have a function that finds the nearest values for each row in a matrix. It then reports a list with the indices of the nearest rows. However, I want it to exclude values if they are both +1 in the first AND +1 in the second column away from a particular row (-1 in both the first and the second column should also be removed). Moreover, +1 in the first column and -1 in the second column with respect to the row of interest should also be avoided.
As an example, if I want things closest to c(2, 1), it should accept c(3,1) or c(2,2) or c(1,1), but NOT c(3,2) and not c(1,0).
Basically, for an output to be reported, either column 1 or column 2 should be a value of 1 away from the row of interest, but not both.
The input looks like this:
x
v1 v2
[1,] 3 1
[2,] 2 1
[3,] 3 2
[4,] 1 2
[5,] 8 5
myfunc(x)
The output looks like this. Notice that the closest things to row 2 ($V2 in the output) are rows 1, 3 and 4. The answer should only be 1 though.
$V1
[1] 2 3
$V2
[1] 1 3 4
$V3
[1] 1 2
$V4
[1] 2
$V5
integer(0)
Here is myfunc:
myfunc = function(t){
    d1 <- dist(t[,1])
    d2 <- dist(t[,2])
    dF <- as.matrix(d1) <= 1 & as.matrix(d2) <= 1
    diag(dF) <- NA
    colnames(dF) <- NULL
    dF2 <- lapply(as.data.frame(dF), which)
    return(dF2)
}
Basically, the rows that you want to find should differ from your reference row by +1 or -1 in one column and be identical in the other column. That means that the sum of the absolute values of the differences is exactly one. For your example c(2, 1), this works as follows:
c(3, 1): difference is c(1, 0), thus sum(abs(c(1, 0))) = 1 + 0 = 1
c(1, 1): difference is c(-1, 0), thus sum(abs(c(-1, 0))) = 1 + 0 = 1
etc.
The following function checks exactly this:
myfunc <- function(x) {
    do_row <- function(r) {
        r_mat <- matrix(rep(r, length = length(x)), ncol = ncol(x), byrow = TRUE)
        abs_dist <- abs(r_mat - x)
        return(which(rowSums(abs_dist) == 1))
    }
    return(apply(x, 1, do_row))
}
do_row() does the job for a single row, and apply() is then used to run it on each row. For your example, I get:
myfunc(x)
## [[1]]
## [1] 2 3
##
## [[2]]
## [1] 1
##
## [[3]]
## [1] 1
##
## [[4]]
## integer(0)
##
## [[5]]
## integer(0)
Using sweep(), one can write a shorter function:
myfunc2 <- function(x) {
    apply(x, 1, function(r) which(rowSums(abs(sweep(x, 2, r))) == 1))
}
But this seems harder to understand, and it turns out to be slower by about a factor of two for your matrix x. (I have also tried it with a large matrix, and there the efficiency seems about the same.)
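For reference, the timing claim can be checked along these lines (my addition, assuming the microbenchmark package is installed):
library(microbenchmark)
microbenchmark(myfunc(x), myfunc2(x))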

Assign an element value based on element adjacencies in R

I have a data frame with {0,1} indicating whether a product was Small, Medium or Large.
dat <- data.frame(Sm = c(1,0,0), Med = c(0,1,0), Lg = c(0,0,1))
Sm Med Lg
1 1 0 0
2 0 1 0
3 0 0 1
I'm looking to assign 1's to the 0's leading up to a 1 in a given row. For example in row 2 the product is a "Med", so I'm looking to assign a 1 to the 0 in the "Sm" column.
Allocation size is a consideration so I'm looking for a vectorized approach without using a for loop please. The final solution should output the following:
Sm Med Lg
1 1 0 0
2 1 1 0
3 1 1 1
I've tried several variations of the code below, but the closest I can get is a ragged array which assigns all of the 1's correctly while dropping the elements that have legitimate 0's.
apply(dat, 1, function(x) {
    x[1:which.max(x)] <- 1
})
[1] 1 1 1
And below, which gets close but without the needed trailing 0's
apply(dat, 1, function(x) {
    temp <- x[1:which.max(x)]
    unlist(lapply(temp, function(y) {
        y <- 1
    }))
})
[[1]]
Sm
1
[[2]]
Sm Med
1 1
[[3]]
Sm Med Lg
1 1 1
First, convert to matrix and use max.col to get the index of the 1 in each row:
mat <- as.matrix(dat)
mc <- max.col(mat)
logical construction
Overwrite the matrix:
mat = +(col(mat) <= mc)
or construct an index of matrix positions to change and change 'em:
logical indexing
mat[col(mat) < mc] <- 1L
# or
mat[which(col(mat) < mc)] <- 1L
matrix indexing
idx <- do.call(rbind, lapply(seq_along(mc), function(i)
    if (mc[i] == 1L) NULL
    else cbind(i, seq_len(mc[i] - 1))
))
mat[idx] <- 1L
vector indexing
nr <- nrow(mat)
idx <- unlist(lapply(seq_along(mc), function(i)
    if (mc[i] == 1L) NULL
    else seq(from = i, by = nr, length.out = mc[i] - 1L)
))
mat[idx] <- 1L
The help for all three indexing methods can be found at help("[<-").
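As a quick check (my addition), the logical-construction variant reproduces the desired output; the indexing variants give the same values (and keep the column names, since they modify mat in place):
mat <- as.matrix(dat)
mc  <- max.col(mat)
+(col(mat) <= mc)
#      [,1] [,2] [,3]
# [1,]    1    0    0
# [2,]    1    1    0
# [3,]    1    1    1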
This will do what you want (for the three-column case):
dat[which(dat$Med == 1), ]$Sm <- 1
dat[which(dat$Lg == 1), ]$Med <- 1
dat[which(dat$Lg == 1), ]$Sm <- 1

subsetting quickly over many columns

I have some code that identifies outliers in a data frame and then either removes or caps them. I'm trying to speed up the removal process using an apply() function (or perhaps another method).
Example data
https://github.com/crossfitAL/so_ex_data/blob/master/subset
# this is the contents of a csv file, you will need to load it into your R session.
# set up an example decision-matrix
# rm.mat is a {length(cols) x 4} matrix -- in this example 8 x 4
# rm.mat[,1:2] - identify the values for min/max outliers, respectively.
# rm.mat[,3:4] - identify if you wish to remove min/max outliers, respectively.
cols <- c(1, 6:12) # specify the columns you wish to examine
rm.mat <- matrix(nrow = length(cols), ncol = 4,
                 dimnames = list(names(fico2[cols]),
                                 c("out.min", "out.max", "rm outliers?", "rm outliers?")))
# add example decision criteria
rm.mat[, 1] <- apply(fico2[, cols], 2, quantile, probs= .05)
rm.mat[, 2] <- apply(fico2[, cols], 2, quantile, probs= .95)
rm.mat[, 3] <- replicate(4, c(0,1))
rm.mat[, 4] <- replicate(4, c(1,0))
Here's my current code for subsetting:
df2 <- fico2 # create a copy of the data frame
cnt <- 1 # add a count variable
for (i in cols) {
    # for each column of interest in the data frame, determine if there are
    # min/max outliers that you wish to remove, and remove them
    if (rm.mat[cnt, 3] == 1 & rm.mat[cnt, 4] == 1) {
        # subset / remove min and max outliers
        df2 <- df2[df2[, i] >= rm.mat[cnt, 1] & df2[, i] <= rm.mat[cnt, 2], ]
    } else if (rm.mat[cnt, 3] == 1 & rm.mat[cnt, 4] == 0) {
        # subset / remove min outliers
        df2 <- df2[df2[, i] >= rm.mat[cnt, 1], ]
    } else if (rm.mat[cnt, 3] == 0 & rm.mat[cnt, 4] == 1) {
        # subset / remove max outliers
        df2 <- df2[df2[, i] <= rm.mat[cnt, 2], ]
    }
    cnt <- cnt + 1
}
Proposed solution:
I think I should be able to do this via an apply-type function, with the removal of the for loop / vectorization speeding up the code. The problem I'm running into is that I'm trying to apply a function if and only if the decision matrix indicates that I should, i.e. using a logical vector, rm.mat[, 3] or rm.mat[, 4], to determine whether subsetting ("[") should be applied to the data frame df2.
Any help you have would be greatly appreciated! Also, please let me know if the example data / code is sufficient.
Here is a solution, just to clarify your code. I hope that others can use it to give a better answer.
So if I understand correctly, you have a decision matrix that looks like this:
rm.mat
c1 c2 c3 c4
amount.funded.by.investors 27925.000 NA 0 1
monthly.income 11666.670 NA 1 0
open.credit.lines 18.000 NA 0 1
revolving.credit.balance 40788.750 NA 1 0
inquiries.in.the.last.6.months 3.000 NA 0 1
debt.to.inc 28.299 NA 1 0
int.rate 20.490 NA 0 1
fico.num 775.000 NA 1 0
And you are trying to filter a big data frame according to the values of this decision matrix:
colnames(rm.mat) <- paste('c', 1:4, sep = '')
rm.mat <- as.data.frame(rm.mat)
apply(rm.mat, 1, function(y){
    h <- paste(y['c3'], y['c4'], sep = '')
    switch(h,
           '11' = apply(df2, 2, function(x)
               df2[x >= y['c1'] & x <= y['c2'], ]),  ## we never have this case!
           '10' = apply(df2, 2, function(x)
               df2[x >= y['c1'], ]),                 ## here we apply by columns!
           '01' = apply(df2, 2, function(x)
               df2[x <= y['c2'], ]))                 ## c2 is NA here, so this fails!
})
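For what it's worth, here is a sketch of a loop-free alternative (my addition, assuming the setup from the question, i.e. fico2, cols, and a fully filled rm.mat): encode inactive bounds as -Inf/Inf, test all bounds at once with sweep(), and subset a single time.
X <- as.matrix(fico2[, cols])
lo <- ifelse(rm.mat[, 3] == 1, rm.mat[, 1], -Inf)  # inactive min bounds never bind
hi <- ifelse(rm.mat[, 4] == 1, rm.mat[, 2],  Inf)  # inactive max bounds never bind
keep <- rowSums(sweep(X, 2, lo, `>=`) & sweep(X, 2, hi, `<=`)) == length(cols)
df2 <- fico2[keep, ]
This matches the sequential for loop, because filtering step by step is the same as keeping only the rows that satisfy every active bound.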
