Compare array against matrix without a loop - R

I need to compare an array Y of integers and NA against a matrix and return TRUE, FALSE, or NA. I'm limited in how I can write this - no loops or if statements; it has to be very plain. The issue is that my code only compares as many values as the array is long instead of recycling over the rest of the matrix, and it also isn't correctly recognizing FALSE values.
I know the problem is my apply() call, but I don't know how to get the comparison to repeat over the matrix without writing a loop.
answer <- function(x, y) {
  y <- as.matrix(y)
  z <- apply(apply(x, 2, `==`, y), 1, any)
  q <- as.matrix(z)
  print(q)
}

It depends on how you see the matrix, but R is a mostly vectorized language: you don't need loops to compare objects of different sizes, though you should be mindful of the direction of the comparison and of recycling.
answer <- function(x, y) {
  cat('+++++Solution 4+++++\n')
  q <- x == y
  print(q)
}
x <- matrix(c(1,0,1,0,1,1,1,1,0,1,0,1), nrow=4, ncol=4)
y <- c(1, 1, 1, NA)
answer(x,y)
Or, a solution by row (admittedly ugly):
answer <- function(x, y) {
  cat('+++++Solution 4+++++\n')
  q <- matrix(apply(t(y), 1, `==`, t(x)), nrow = 4, byrow = TRUE)
  print(q)
}
answer(x,y)
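If you want the row-wise comparison to be explicit, sweep() is another option. The sketch below is my own suggestion rather than part of the answer above, and answer2 is just a hypothetical name:
answer2 <- function(x, y) {
  x <- as.matrix(x)
  # sweep y across the columns of x, so each row of x is compared
  # element-wise with y; NA entries in y propagate as NA
  q <- sweep(x, 2, y, `==`)
  print(q)
}
answer2(x, y)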


Why does smart rounding work differently with map/lapply than without?

I would like to smartly round my results so that they sum up to the same total after rounding.
Can someone explain to me why the result is different when I do it with map or lapply?
v <- c(
0.9472164,
71.5330771,
27.5197066)
smart.round <- function(x, digits = 0) {
  up <- 10 ^ digits
  x <- x * up
  y <- floor(x)
  indices <- tail(order(x - y), round(sum(x)) - sum(y))
  y[indices] <- y[indices] + 1
  y / up
}
### works correctly
smart.round(v)
### lapply and map give a different (wrong) result
library(purrr)
lapply(v, smart.round)
map(v, smart.round)
(I think this is merely a comment, but I have not yet earned the right to add comments.)
lapply and purrr::map process your input sequentially. In your example, lapply takes the first value of v and calls smart.round, then moves on to the second value of v, and so on...
In total smart.round is called three times, each time without any knowledge of the other two values in v.
I'm not entirely sure why you are trying to use lapply here; if this is part of a more complex situation you might want to expand your question.
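As a hedged illustration (my own addition, not part of the original answer): each lapply call hands smart.round a single number, so the step that redistributes the rounding error has nothing to balance against.
smart.round(0.9472164)   # a single value is just rounded on its own
smart.round(v)           # the whole vector is needed for the total to be preserved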
I have written my own solution. Definitely a bit cumbersome but it works.. :) My initial goal was just to input a dataframe and output the rounded dataframe.
The whole example here:
v <- data.frame(a = c(0.9472164,
71.5330771,
27.5197066),
b = c(4.6472164,
5.6330771,
27.1197066))
smart.round <- function(x, digits = 0) {
  up <- 10 ^ digits
  x <- x * up
  y <- floor(x)
  indices <- tail(order(x - y), round(sum(x)) - sum(y))
  y[indices] <- y[indices] + 1
  y / up
}
rounding_function <- function(input_df) {
  output_df <- data.frame(matrix(ncol = ncol(input_df), nrow = nrow(input_df)))
  for (i in 1:nrow(input_df)) {
    # round each row so that its own total is preserved
    a <- smart.round(as.numeric(input_df[i, ]))
    for (k in 1:ncol(input_df)) {
      output_df[i, k] <- a[k]
    }
    colnames(output_df) <- colnames(input_df)
  }
  return(output_df)
}
v_rounded <- rounding_function(v)
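For comparison, a shorter version is sketched below under the same assumption (each row is rounded so that its own total is preserved); rounding_function2 is just a hypothetical name, not part of the original post:
rounding_function2 <- function(input_df) {
  # apply() runs smart.round on each row; t() restores the original orientation
  out <- as.data.frame(t(apply(input_df, 1, smart.round)))
  colnames(out) <- colnames(input_df)
  out
}
rounding_function2(v)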

For loop question in R: number of items to replace is not a multiple of replacement length

Hi all,
I'm new to R. I've tried many ways and still cannot solve this. Can anyone help check?
I am trying to produce 3 sets of 100 random values that follow a chi-square distribution. The console says "number of items to replace is not a multiple of replacement length". Any hint on how to fix it?
for(i in 1:3) {
x1[i] <- rchisq(100, df=2)
n1[i] <- length(x1[i])
}
As an explanation for your problem: You are trying to store a vector of 100 elements into a single element, the ith element, of a vector, x1. To illustrate, you could put a vector of values into a vector of the same length:
x <- rnorm(6, 0, 1)
x[1:3] <- c(1,2,3)
x
## [1] 1.0000000 2.0000000 3.0000000 -0.8652300 1.3776699 -0.8817483
You could store them in a list; each element of a list is a vector that can be of any length. You will need double square brackets.
x1 <- list()
n1 <- numeric(3)
for (i in 1:3) {
  x1[[i]] <- rchisq(100, df = 2)
  n1[i] <- length(x1[[i]])
}
Lists and vectors are different types of data structures in R; you can read a lot about them in Advanced R.
It depends on what container you want to use. There are two that come to mind: a list or a matrix.
# list format
x1 <- list()
n1 <- vector()
for (i in 1:3) {
  x1[[i]] <- rchisq(100, df = 2)
  n1[i] <- length(x1[[i]])
}
Note the double brackets [[i]], as mentioned in the comments.
# matrix format
x1 <- matrix(NA, nrow = 100, ncol = 3)
n1 <- vector()
for (i in 1:3) {
  x1[, i] <- rchisq(100, df = 2)
  n1[i] <- length(x1[, i])
}
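As a side note (my own suggestion, not from the answers above), this particular task does not need a loop at all; replicate() collects the draws directly:
x1 <- replicate(3, rchisq(100, df = 2))   # a 100 x 3 matrix, one column per draw
n1 <- rep(nrow(x1), ncol(x1))             # each column holds 100 values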

Is it possible to use vector math in R for a summation involving intervals?

Title's a little rough, open to suggestions to improve.
I'm trying to calculate time-average covariances for a 500 length vector.
This is the equation we're using:
\bar{\gamma}(k) = \frac{1}{n-k} \sum_{t=k+1}^{n} (x_t - \bar{x})(x_{t-k} - \bar{x})
The result I'm hoping for is a vector with an entry for k from 0 to 500 (0 would just be the variance of the whole set).
I've started with something like this, but I know I'll need to reference the gap (i) in the first mean comparison as well:
x <- rnorm(500)
xMean <-mean(x)
i <- seq(1, 500)
dfGam <- data.frame(i)
dfGam$gamma <- (1/(500-dfGam$i))*(sum((x-xMean)*(x[-dfGam$i]-xMean)))
Is it possible to do this using vector math or will I need to use some sort of for loop?
Here's the for loop that I've come up with for the solution:
gamma_func <- function(input_vec) {
  output_vec <- c()
  input_mean <- mean(input_vec)
  iter <- seq(1, length(input_vec) - 1)
  for (val in iter) {
    iter2 <- seq((val + 1), length(input_vec))
    gamma_sum <- 0
    for (val2 in iter2) {
      gamma_sum <- gamma_sum + (input_vec[val2] - input_mean) * (input_vec[val2 - val] - input_mean)
    }
    output_vec[val] <- (1 / length(iter2)) * gamma_sum
  }
  return(output_vec)
}
Thanks
Using data.table, mostly for the shift function to make x_{t - k}, you can do this:
library(data.table)
gammabar <- function(k, x){
  xbar <- mean(x)
  n <- length(x)
  # pair x_t with x_{t-k} and drop the first k rows, where the lag is undefined
  df <- data.table(xt = x, xtk = shift(x, k))[!is.na(xtk)]
  df[, sum((xt - xbar) * (xtk - xbar)) / n]
}
gammabar(k = 10, x)
# [1] -0.1553118
The filter [!is.na(xtk)] starts the sum at t = k + 1, because xtk will be NA for the first k indices due to being shifted by k.
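If all lags are needed at once, the same idea can be wrapped in sapply. This is a hedged base-R sketch of my own (gamma_all is a hypothetical name; like gammabar it divides by n):
gamma_all <- function(x) {
  n <- length(x)
  xc <- x - mean(x)
  # lag k pairs x[(k + 1):n] with x[1:(n - k)]
  sapply(0:(n - 1), function(k) sum(xc[(k + 1):n] * xc[1:(n - k)]) / n)
}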
Reproducible x
x <- c(0.376972124936433, 0.301548373935665, -1.0980231706536, -1.13040590360378,
-2.79653431987176, 0.720573498411587, 0.93912102300901, -0.229377746707471,
1.75913134696347, 0.117366786802848, -0.853122822287008, 0.909259181618213,
1.19637295955276, -0.371583903741348, -0.123260233287436, 1.80004311672545,
1.70399587729432, -3.03876460529759, -2.28897494991878, 0.0583034949929225,
2.17436525195634, 1.09818265352131, 0.318220322390854, -0.0731475581637693,
0.834268741278827, 0.198750636733429, 1.29784138432631, 0.936718306241348,
-0.147433193833294, 0.110431994640128, -0.812504663900505, -0.743702167768748,
1.09534507180741, 2.43537370755095, 0.38811846676708, 0.290627670295127,
-0.285598287083935, 0.0760147178373681, -0.560298603759627, 0.447188372143361,
0.908501134499943, -0.505059597708343, -0.301004012157305, -0.726035976548133,
-1.18007702699501, 0.253074712637114, -0.370711296884049, 0.0221795637601637,
0.660044122429767, 0.48879363533552)

Parallel programming and the <<- assignment operator in R

I have a question regarding parallel programming and the <<- operator in R.
I want to apply the xfun function on matrix x.
If the entry in the first column is smaller than 0.5, it's supposed to append the entry in the second column to the vector vec outside of the function in the global environment.
At the end the function should return the first column plus the random number in y.
If I use the regular apply function, it works exactly like it should. However, I want to apply a similar function to a huge dataset and therefore want to do it in parallel
via the future_apply function from the package "future.apply". But when I do this, the <<- operator does not work and vec stays empty.
Is there anyone who knows why that is and if there is a way to make it work?
Thanks in advance
x <- matrix(runif(20), nrow = 10, ncol = 2)
vec <- NULL
xfun <- function(x) {
  y <- runif(1)
  if (x[1] < 0.5) {
    vec <<- append(vec, x[2])
  }
  return(x[1] + y)
}
# works
xy <- apply(x, 1, xfun)
library(future.apply)
# vec stays empty
xy <- future_apply(x, 1, xfun)
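One likely explanation, offered here as a hedged note: with a parallel plan such as multisession, future_apply evaluates xfun in separate R processes, so vec <<- ... modifies a copy of vec inside each worker rather than the vec in your session. A workaround under that assumption is to return everything from the function and collect it afterwards (xfun2 is a hypothetical name):
library(future.apply)
plan(multisession)   # workers are separate R processes

xfun2 <- function(row) {
  y <- runif(1)
  # return both pieces instead of writing to the global environment
  list(result = row[1] + y,
       keep   = if (row[1] < 0.5) row[2] else NULL)
}

out <- future_apply(x, 1, xfun2)
xy  <- vapply(out, `[[`, numeric(1), "result")
vec <- unlist(lapply(out, `[[`, "keep"))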

Re-expressing a simple operation as a function in R

I am trying to construct a new variable, z, using two pre-existing variables - x and y.  Suppose for simplicity that there are only 5 observations (corresponding to 5 time periods) and that x=c(5,7,9,10,14) and y=c(0,2,1,2,3). I’m really only using the first observation in x as the initial value, and then constructing the new variable z using depreciated values of x[1] (depreciation rate of 0.05 per annum) and each of the observations over time in the vector, y. The variable I am constructing takes the form of a new 5 by 1 vector, z, and it can be obtained using the following simple commands in R:
z <- NULL
for (i in 1:length(x)) {
  n <- seq(1, i, by = 1)
  z[i] <- sum(c(0.95^(i - 1) * x[1], 0.95^(i - n) * y[n]))
}
The problem I am having is that I need to define this operation as a function. That is, I need to create a function f that will spit out the vector z whenever any arbitrary vectors x and y are plugged into the function, f(x,y). I’ve been going around in circles for days now and I was wondering if someone would be kind enough to provide me with a suggestion about how to proceed. Thanks in advance.
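For reference, my reading of the loop above (a reconstruction, not something stated in the original post) is
z_i = 0.95^{i-1} x_1 + \sum_{n=1}^{i} 0.95^{i-n} y_n,  for i = 1, ..., length(x).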
I hope the following will work for you...
x=c(5,7,9,10,14)
y=c(0,2,1,2,3)
getZ <- function(x, y) {
  z <- NULL
  for (i in 1:length(x)) {
    n <- seq(1, i, by = 1)
    z[i] <- sum(c(0.95^(i - 1) * x[1], 0.95^(i - n) * y[n]))
  }
  return(z)
}
z <- getZ(x, y)
z
## [1]  5.000000  6.750000  7.412500  9.041875 11.589781
This will allow .05 (or any other value) to be passed in as r.
ConstructZ <- function(x, y, r) {
  n <- length(y)
  d <- 1 - r
  Z <- vector(length = n)
  for (i in seq_along(x)) {
    n <- seq_len(i)
    Z[i] <- sum(c(d^(i - 1) * x[1], d^(i - n) * y[n]))
  }
  return(Z)
}
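As a quick check (my own note, not from the answer), calling it with r = 0.05 should reproduce the values shown above:
ConstructZ(x, y, 0.05)
## expected to match 5.000000 6.750000 7.412500 9.041875 11.589781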
Here is a cool (if I say so myself) way to implement this as an infix operator (since you called it an operation).
ff <- function(x, y, i) {
  n <- seq.int(i)
  sum(c(0.95^(i - 1) * x[[1]],
        0.95^(i - n) * y[n]))
}
`%dep%` <- function(x, y) sapply(seq_along(x), ff, x = x, y = y)
x %dep% y
[1] 5.000000 6.750000 7.412500 9.041875 11.589781
Doing the loop multiple times and recalculating the exponents every time may be inefficient. Here's another way to implement your calculation
getval <- function(x, y, lambda = .95) {
  n <- length(y)
  pp <- lambda^(1:n - 1)   # lambda^0, lambda^1, ..., lambda^(n-1)
  yy <- sapply(1:n, function(i) {
    sum(y * c(pp[i:1], rep.int(0, n - i)))
  })
  pp * x[1] + yy
}
Testing with @vrajs5's sample data
x=c(5,7,9,10,14)
y=c(0,2,1,2,3)
getval(x,y)
# [1] 5.000000 6.750000 7.412500 9.041875 11.589781
but appears to be about 10x faster when testing on larger data such as
set.seed(15)
x <- rpois(200,20)
y <- rpois(200,20)
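A hedged sketch of how such a timing comparison might be done with base R's system.time (the roughly 10x figure above is the answerer's own measurement, not reproduced here):
system.time(for (j in 1:100) getZ(x, y))
system.time(for (j in 1:100) getval(x, y))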
I'm not sure how often you will run this or on what size of data, so perhaps efficiency isn't a concern for you. I guess readability is often more important in the long term for maintenance.
