R: Update with value of previous row (subject to condition)

I would like to update the values in a table with the values of the previous row, within groups (and probably stop the updates on a given condition).
Here is an example:
library(data.table)
set.seed(12345)
field <- data.table(time=1:3, player = letters[1:2], prospects = round(rnorm(6),2))
setkey(field, player, time)
field[time == 1, energy := round(rnorm(2),2)] #initial level - this is what I want to propagate down the table
#let 'prospects < 0.27' be the condition that stops the process, and sets 'energy = 0'
#player defines the groups within which the updates are made
Here is the table I have.
> field
time player prospects energy
1: 1 a 0.81 -0.32
2: 2 a 0.25 NA
3: 3 a 2.05 NA
4: 1 b 1.63 -1.66
5: 2 b 2.20 NA
6: 3 b 0.49 NA
Here is the table I want.
> field
time player prospects energy
1: 1 a 0.81 -0.32
2: 2 a 0.25 0
3: 3 a 2.05 0
4: 1 b 1.63 -1.66
5: 2 b 2.20 -1.66
6: 3 b 0.49 -1.66
Thanks in advance

Probably there are better ways, but this is what came to my mind. It makes use of the roll=TRUE argument. The idea is to first set energy = 0.0 where prospects < 0.27:
field[prospects < 0.27, energy := 0.0]
Then, if we remove the NA values from field, we can use roll=TRUE by doing a join with all combinations as follows:
field[!is.na(energy)][CJ(c("a", "b"), 1:3), roll=TRUE][, prospects := field$prospects][]
# player time prospects energy
# 1: a 1 0.81 0.63
# 2: a 2 0.25 0.00
# 3: a 3 2.05 0.00
# 4: b 1 1.63 -0.28
# 5: b 2 2.20 -0.28
# 6: b 3 0.49 -0.28
We have to reset prospects because the roll join changes it too. You could do this more cleanly, but you get the idea.
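In case CJ() is unfamiliar: it simply builds every key combination for the join to roll over (a quick check; the exact print format depends on your data.table version):
CJ(c("a", "b"), 1:3)
#    V1 V2
# 1:  a  1
# 2:  a  2
# 3:  a  3
# 4:  b  1
# 5:  b  2
# 6:  b  3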
A variation, so that the roll is performed only on the energy column:
field[!is.na(energy)][CJ(c("a", "b"), 1:3), list(energy),
roll=TRUE][, prospects := field$prospects][]
Or it may be simpler to use na.locf from the zoo package:
field[time == 1, energy := round(rnorm(2),2)]
field[prospects < 0.27, energy := 0.0]
require(zoo)
field[, energy := na.locf(energy, na.rm=FALSE)]
which works if the first row of each group is guaranteed to be non-NA, which it is here by construction. But if not, you can run na.locf by group, too:
field[, energy := na.locf(energy, na.rm=FALSE), by=player]
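As a side note, if your data.table is recent enough (>= 1.12.4; an assumption about your setup), nafill() can do the same last-observation-carried-forward fill without the zoo dependency:
# assumes data.table >= 1.12.4, which provides nafill()
field[, energy := nafill(energy, type = "locf"), by = player]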

Something like this?
library(plyr)
ddply(field, 'player', function(x) {
  baseline <- x[x$time == 1, 'energy']
  x$energy <- baseline
  ind <- which(x$prospects < 0.27)
  if (length(ind)) {
    x[min(ind):nrow(x), 'energy'] <- 0
  }
  x
})
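If you prefer dplyr over plyr, a rough equivalent sketch of the same per-group logic (assuming dplyr is installed; cumany() flags every row from the first prospects < 0.27 onwards):
library(dplyr)
field %>%
  group_by(player) %>%
  mutate(energy = ifelse(cumany(prospects < 0.27), 0, energy[time == 1])) %>%
  ungroup()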

Related

Using rle function with condition on a column in r

My dataset has 523 rows and 93 columns and it looks like this:
data <- structure(list(`2018-06-21` = c(0.6959635416667, 0.22265625,
0.50341796875, 0.982942708333301, -0.173828125, -1.229259672619
), `2018-06-22` = c(0.6184895833333, 0.16796875, 0.4978841145833,
0.0636718750000007, 0.5338541666667, -1.3009207589286), `2018-06-23` = c(1.6165364583333,
-0.375, 0.570800781250002, 1.603515625, 0.5657552083333, -0.9677734375
), `2018-06-24` = c(1.3776041666667, -0.03125, 0.7815755208333,
1.5376302083333, 0.5188802083333, -0.552966889880999), `2018-06-25` = c(1.7903645833333,
0.03125, 0.724609375, 1.390625, 0.4928385416667, -0.723074776785701
)), row.names = c(NA, 6L), class = "data.frame")
Each row is a city, and each column is a day of the year.
After calculating the row average in this way
data$mn <- apply(data, 1, mean)
I want to create another column data$duration that indicates the average length of a period of consecutive days where the values are > than data$mn.
I tried with this code:
data$duration <- apply(data[-6], 1, function(x) with(rle(x > data$mean), mean(lengths[values])))
But it does not seem to work. In particular, it appears that rle(x > data$mean) fails to recognize the end of a row.
What are your suggestions?
Many thanks
EDIT
Reference dataframe has been changed into a [6x5]
The main challenge you're facing in your code is getting apply (which focuses on one row at a time) to look at the right values of the mean. We can avoid this entirely by keeping the mean out of the data frame, and doing the comparison data > mean to the whole data frame at once. The new columns can be added at the end:
mn = rowMeans(data)
dur = apply(data > mn, 1, function(x) with(rle(x), mean(lengths[values])))
dur
# 1 2 3 4 5 6
# 3.0 1.5 2.0 3.0 4.0 2.0
data = cbind(data, mean = mn, duration = dur)
print(data, digits = 2)
# 2018-06-21 2018-06-22 2018-06-23 2018-06-24 2018-06-25 mean duration
# 1 0.70 0.618 1.62 1.378 1.790 1.2198 3.0
# 2 0.22 0.168 -0.38 -0.031 0.031 0.0031 1.5
# 3 0.50 0.498 0.57 0.782 0.725 0.6157 2.0
# 4 0.98 0.064 1.60 1.538 1.391 1.1157 3.0
# 5 -0.17 0.534 0.57 0.519 0.493 0.3875 4.0
# 6 -1.23 -1.301 -0.97 -0.553 -0.723 -0.9548 2.0
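If the rle() idiom is unfamiliar, here is a tiny standalone illustration (made-up logical vector) of why mean(lengths[values]) gives the average length of the above-mean runs:
x <- c(TRUE, TRUE, FALSE, TRUE, TRUE, TRUE, FALSE)
r <- rle(x)
r$lengths                  # 2 1 3 1
r$values                   # TRUE FALSE TRUE FALSE
mean(r$lengths[r$values])  # (2 + 3) / 2 = 2.5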

How to obtain percentile of variable and the rate of dummy for all

How can I do the following for all columns of the object x automatically, using base R or SAS?
Here is example using R:
# sample data
set.seed(123)
x <- data.frame(var1=runif(100), var2=runif(100), flag=rbinom(100, size=1, prob=0.7))
x
# calculate percentile of each column
r <- apply(x, 2, function(x) quantile(x, probs=seq(0,1,0.05)))
res <- data.frame(item_id=rownames(r), r, row.names = NULL)
# assign group for each percentile
res$group <- seq_along(res$item_id)
res
# find the bin of the variable (var1, var2, ...) given percentile bin (interval);
x$bin_var1 <- findInterval(x$var1, res$var1)
x
# calculate the occurrence and rate of the dummy flag column (no = no occurrence; yes = occurrence of flag==1; total = total obs per bucket; rate_var = rate of var1)
op <- data.frame(with(x, aggregate(flag, list(bin_var1), FUN=function(x) c(sum(x==0),sum(x==1), length(x), sum(x==1)/length(x)))))
op1 <- data.frame(do.call(data.frame, op))
colnames(op1) <- c("group","no","yes","total","rate_var1")
op1
# merge
final <- merge(res, op1, by="group")
final
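For the base-R side, a rough sketch of how the per-column steps above could be repeated automatically (the names vars and rate_by_bin are made up for illustration):
vars <- c("var1", "var2")                 # columns to process
rate_by_bin <- lapply(vars, function(v) {
  bin <- findInterval(x[[v]], res[[v]])   # res holds the percentile cut points per column
  agg <- aggregate(x$flag, list(group = bin), function(f)
    c(no = sum(f == 0), yes = sum(f == 1), total = length(f), rate = mean(f == 1)))
  data.frame(group = agg$group, agg$x)
})
names(rate_by_bin) <- vars
rate_by_bin$var1                          # should reproduce the numbers in op1 above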
In the SAS solution below I'm missing how to include the rate, i.e. the ratio of flag=1 to all flag values. In R I'm using the findInterval function to assign the bin and then calculate the rate, sum(flag=1)...this part I'm not sure how to do in SAS.
Example:
data x;
length groups $12;
input groups Var1 Var2 Flag;
datalines;
constrict 3.50 1.09 1
constrict 0.75 1.50 0
constrict 0.70 3.50 1
no_constrict 1.10 1.70 1
no_constrict 0.90 0.45 1
no_constrict 0.55 2.75 1
no_constrict 1.40 2.33 0
constrict 2.30 1.64 1
constrict 0.85 1.415 0
no_constrict 1.80 1.80 1
no_constrict 0.95 1.36 1
no_constrict 1.50 1.36 0
constrict 0.60 1.50 0
constrict 0.95 1.90 0
constrict 1.60 0.40 1
constrict 2.35 0.03 1
no_constrict 1.10 2.20 0
constrict 0.80 3.33 0
no_constrict 0.75 1.90 0
;
proc univariate data=x noprint;
class groups;
var var1
var2
flag
;
output out=res pctlpts=0 to 100 by 10 pctlpre=var1_
var2_
flag_
;
proc sql;
create table op1
as select a.*,
b.*
from x
as a
left join res
as b
on a.groups=b.groups;
quit;

Apply conditional function in every row of a data frame

I'm new to R and I'm struggling with this df, which looks like this:
Date Group Factor 1 Factor 2 Spread
2019-04-01 a 1.01 1.011 0.01
2019-04-02 a 1.02 1.012 0.02
2019-04-03 a 1.03 1.013 0.03
2019-04-01 b 1.005 1.004 0.01
2019-04-02 b 1.0051 1.0041 0.02
2019-04-03 b 1.0052 1.0042 0.03
I would like to check the Group in each row: if the Group is "a", compute Factor1 / Factor1 (1-day lag) * Factor2 + Spread, and if the Group is not "a", do the same but do not add the Spread.
Since you are conditioning on the group, this is a good example of by (base R), dplyr::group_by, or data.table's x[,,by=].
The equation is effectively the same in all three, capitalizing on the fact that (Group[1] == "a") will be coerced from logical to numeric when multiplied by a number; since FALSE translates to 0, this effectively disables adding Spread.
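A one-line way to see that coercion at work:
("a" == "a") * 0.02   # 0.02: for group "a" the Spread term is kept
("b" == "a") * 0.02   # 0:    FALSE coerces to 0, so the Spread term drops out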
Base
I use within here to make the internals a little more readable, but this is not a requirement (in which case you'd need to prepend x$ in front of all of the variable names).
The lagging can be done using dplyr::lag (even if you don't use the rest of the package for this) or many other techniques. I don't find stats::lag to be the most intuitive in applications like this, but I'm sure somebody will suggest a way to incorporate it into an answer. The use of c(NA, ...) ensures that we don't bring in a different group's data or impute data we don't have, since we have no value to bring in on the first row of a group. Finally, head(..., n = 1) returns the first element of a vector/list, while head(..., n = -1) (negative) returns all but the last.
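For instance, the c(NA, head(...)) construction produces the one-step lag:
v <- c(10, 20, 30)
c(NA, head(v, n = -1))   # NA 10 20, i.e. each element's previous value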
newx <- by(x, x$Group, function(y) {
  within(y, {
    NewVal = Factor2 * Factor1 / c(NA, head(Factor1, n = -1)) + (Group[1] == "a") * Spread
  })
})
newx
# x$Group: a
# Date Group Factor1 Factor2 Spread NewVal
# 1 2019-04-01 a 1.01 1.011 0.01 NA
# 2 2019-04-02 a 1.02 1.012 0.02 1.042020
# 3 2019-04-03 a 1.03 1.013 0.03 1.052931
# -------------------------------------------------------
# x$Group: b
# Date Group Factor1 Factor2 Spread NewVal
# 4 2019-04-01 b 1.0050 1.0040 0.01 NA
# 5 2019-04-02 b 1.0051 1.0041 0.02 1.0042
# 6 2019-04-03 b 1.0052 1.0042 0.03 1.0043
This is really just a list with some fancy by-specific formatting, so you can treat it as such and combine the pieces in an efficient base-R way:
do.call("rbind.data.frame", c(newx, stringsAsFactors = FALSE))
# Date Group Factor1 Factor2 Spread NewVal
# a.1 2019-04-01 a 1.0100 1.0110 0.01 NA
# a.2 2019-04-02 a 1.0200 1.0120 0.02 1.042020
# a.3 2019-04-03 a 1.0300 1.0130 0.03 1.052931
# b.4 2019-04-01 b 1.0050 1.0040 0.01 NA
# b.5 2019-04-02 b 1.0051 1.0041 0.02 1.004200
# b.6 2019-04-03 b 1.0052 1.0042 0.03 1.004300
dplyr
Many find that the tidyverse family of packages reads intuitively.
library(dplyr)
x %>%
  group_by(Group) %>%
  mutate(NewVal = Factor2 * Factor1 / lag(Factor1) + (Group[1] == "a") * Spread) %>%
  ungroup()
# # A tibble: 6 x 6
# Date Group Factor1 Factor2 Spread NewVal
# <chr> <chr> <dbl> <dbl> <dbl> <dbl>
# 1 2019-04-01 a 1.01 1.01 0.01 NA
# 2 2019-04-02 a 1.02 1.01 0.02 1.04
# 3 2019-04-03 a 1.03 1.01 0.03 1.05
# 4 2019-04-01 b 1.00 1.00 0.01 NA
# 5 2019-04-02 b 1.01 1.00 0.02 1.00
# 6 2019-04-03 b 1.01 1.00 0.03 1.00
data.table
On a different note, many find data.table better because of efficiencies gained from in-place modification (most of R's operations are copy-on-write, meaning some operations re-copy the object or a portion of it with each change).
library(data.table)
X <- as.data.table(x)
X[, NewVal := Factor2 * Factor1 / shift(Factor1) + (Group[1] == "a") * Spread, by = "Group"]
X
# Date Group Factor1 Factor2 Spread NewVal
# 1: 2019-04-01 a 1.0100 1.0110 0.01 NA
# 2: 2019-04-02 a 1.0200 1.0120 0.02 1.042020
# 3: 2019-04-03 a 1.0300 1.0130 0.03 1.052931
# 4: 2019-04-01 b 1.0050 1.0040 0.01 NA
# 5: 2019-04-02 b 1.0051 1.0041 0.02 1.004200
# 6: 2019-04-03 b 1.0052 1.0042 0.03 1.004300
The "in-place" part is evident on the second line here, where it appears as if the [ operation should just return a subset or something of the data ... but in this case using := causes the columns to be created (or changed) in-place.

Assigning a value to each range of consecutive numbers with same sign in R

I'm trying to create a data frame where a column exists that holds values representing the length of runs of positive and negative numbers, like so:
Time V Length
0.5 -2 1.5
1.0 -1 1.5
1.5 0 0.0
2.0 2 1.0
2.5 0 0.0
3.0 1 1.75
3.5 2 1.75
4.0 1 1.75
4.5 -1 0.75
5.0 -3 0.75
The Length column sums the length of time that the value has been positive or negative. Zeros are given a 0 since they are an inflection point. If there is no zero separating the sign change, the values are averaged on either side of the inflection.
I am trying to approximate the amount of time that these values are spending either positive or negative. I've tried this with a for loop with varying degrees of success, but I would like to avoid looping because I am working with extremely large data sets.
I've spent some time looking at sign and diff as they are used in this question about sign changes. I've also looked at this question that uses transform and aggregate to sum consecutive duplicate values. I feel like I could use this in combination with sign and/or diff, but I'm not sure how to retroactively assign these sums to the ranges that created them or how to deal with spots where I'm taking the average across the inflection.
Any suggestions would be appreciated. Here is the sample dataset:
dat <- data.frame(Time = seq(0.5, 5, 0.5), V = c(-2, -1, 0, 2, 0, 1, 2, 1, -1, -3))
First find indices of "Time" which need to be interpolated: consecutive "V" which lack a zero between positive and negative values; they have abs(diff(sign(V))) equal to two.
id <- which(abs(c(0, diff(sign(dat$V)))) == 2)
Add rows with average "Time" between relevant indices and corresponding "V" values of zero to the original data. Also add rows of "V" = 0 at "Time" = 0 and at last time step (according to the assumptions mentioned by #Gregor). Order by "Time".
d2 <- rbind(dat,
            data.frame(Time = (dat$Time[id] + dat$Time[id - 1])/2, V = 0),
            data.frame(Time = c(0, max(dat$Time)), V = c(0, 0)))
d2 <- d2[order(d2$Time), ]
Calculate time differences between time steps which are zero and replicate them using "zero-group indices".
d2$Length <- diff(d2$Time[d2$V == 0])[cumsum(d2$V == 0)]
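The indexing trick on that line may be easier to see on a made-up example:
zero_times <- c(0, 1.5, 2.5, 5)   # times at which V == 0 (made up)
seg_len <- diff(zero_times)       # 1.5 1.0 2.5: length of each zero-to-zero segment
is_zero <- c(TRUE, FALSE, FALSE, TRUE, FALSE, TRUE)   # V == 0 indicator per row
seg_len[cumsum(is_zero)]          # 1.5 1.5 1.5 1.0 1.0 2.5: each row picks up its segment's length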
Add values to original data:
merge(dat, d2)
# Time V Length
# 1 0.5 -2 1.50
# 2 1.0 -1 1.50
# 3 1.5 0 1.00
# 4 2.0 2 1.00
# 5 2.5 0 1.75
# 6 3.0 1 1.75
# 7 3.5 2 1.75
# 8 4.0 1 1.75
# 9 4.5 -1 0.75
# 10 5.0 -3 0.75
Set "Length" to 0 where V == 0.
This works, at least for your test case. And it should be pretty efficient. It makes some assumptions, I'll try to point out the big ones.
First we extract the vectors and stick 0s on the beginning. We also set the last V to 0. The calculation will be based on time differences between 0s, so we need to start and end with 0s. Your example seems to tacitly assume V = 0 at Time = 0, hence the initial 0, and it stops abruptly at the maximum time, so we set V = 0 there as well:
Time = c(0, dat$Time)
V = c(0, dat$V)
V[length(V)] = 0
To fill in the skipped 0s, we use approx to do linear approximation on sign(V). It also assumes that your sampling frequency is regular, so we can get away with doubling the frequency to get all the missing 0s.
ap = approx(Time, sign(V), xout = seq(0, max(Time), by = 0.25))
The values we want to fill in are the durations between the 0s, both observed and approximated. In the correct order, these are:
dur = diff(ap$x[ap$y == 0])
Lastly, we need the indices of the original data to fill in the durations. This is the hackiest part of this answer, but it seems to work. Maybe someone will suggest a nice simplification.
# first use rleid to get the sign groupings
group = data.table::rleid(sign(dat$V))
# then we need to set the groups corresponding to 0 values to 0
# and reduce any group numbers following 0s correspondingly
# lastly we add 1 to everything so that we can stick 0 at the
# front of our durations and assign those to the 0 V values
ind = (group - cumsum(dat$V == 0)) * (dat$V != 0) + 1
# fill it in
dat$Length = c(0, dur)[ind]
dat
# Time V Length
# 1 0.5 -2 1.50
# 2 1.0 -1 1.50
# 3 1.5 0 0.00
# 4 2.0 2 1.00
# 5 2.5 0 0.00
# 6 3.0 1 1.75
# 7 3.5 2 1.75
# 8 4.0 1 1.75
# 9 4.5 -1 0.75
# 10 5.0 -3 0.75
It took me longer than I care to admit, but here is my solution.
Because you said you want to use it on large datasets (thus speed matters), I use Rcpp to write a loop that does all the checking. For speed comparisons I also create another sample dataset with 500,000 data points and check the speed (I tried to compare against the other answers but couldn't translate them to data.table; without that it would be an unfair comparison...). If supplied, I will gladly update the speed comparisons!
Part 1: My solution
My solution looks like this:
(in length_time.cpp)
#include <Rcpp.h>
using namespace Rcpp;

// [[Rcpp::export]]
NumericVector length_time(NumericVector time, NumericVector v) {
  double start = 0;
  double time_i, v_i;
  bool last_positive = v[0] > 0;
  bool last_negative = v[0] < 0;
  int length_i = time.length();
  NumericVector ret_vec(length_i);

  for (int i = 0; i < length_i; ++i) {
    time_i = time[i];
    v_i = v[i];
    if (v_i == 0) { // inflection point
      if (i > 0) { // if this is not the beginning, then a regime has ended!
        ret_vec[i - 1] = time_i - start;
        start = time_i;
      }
    } else if ((v_i > 0 && last_negative) || (v_i < 0 && last_positive)) {
      ret_vec[i - 1] = (time_i + time[i - 1]) / 2 - start;
      start = (time_i + time[i - 1]) / 2;
    }
    last_positive = v_i > 0;
    last_negative = v_i < 0;
  }
  ret_vec[length_i - 1] = time[length_i - 1] - start;

  // ret_vec now only has the values at the last observation of each regime;
  // do something like a reverse na_locf...
  double tmp_val = ret_vec[length_i - 1];
  for (int i = length_i - 1; i >= 0; --i) {
    if (v[i] == 0) {
      ret_vec[i] = 0;
    } else if (ret_vec[i] == 0) {
      ret_vec[i] = tmp_val;
    } else {
      tmp_val = ret_vec[i];
    }
  }
  return ret_vec;
}
and then in an R-file (i.e., length_time.R):
library(Rcpp)
# setwd("...") #to find the .cpp-file
sourceCpp("length_time.cpp")
dat$Length <- length_time(dat$Time, dat$V)
dat
# Time V Length
# 1 0.5 -2 1.50
# 2 1.0 -1 1.50
# 3 1.5 0 0.00
# 4 2.0 2 1.00
# 5 2.5 0 0.00
# 6 3.0 1 1.75
# 7 3.5 2 1.75
# 8 4.0 1 1.75
# 9 4.5 -1 0.75
# 10 5.0 -3 0.75
Which seems to work on the sample dataset.
Part 2: Testing for Speed
library(data.table)
library(microbenchmark)
n <- 10000
set.seed(1235278)
dt <- data.table(time = seq(from = 0.5, by = 0.5, length.out = n),
v = cumsum(round(rnorm(n, sd = 1))))
dt[, chg := v >= 0 & shift(v, 1, fill = 0) <= 0]
plot(dt$time, dt$v, type = "l")
abline(h = 0)
for (i in dt[chg == T, time]) abline(v = i, lty = 2, col = "red")
This results in 985 crossings (the dashed vertical lines in the plot).
Testing the speed with microbenchmark results in
microbenchmark(dt[, length := length_time(time, v)])
# Unit: milliseconds
# expr min lq mean median uq max neval
# dt[, `:=`(length, length_time(time, v))] 2.625714 2.7184 3.054021 2.817353 3.077489 5.235689 100
Resulting in about 3 milliseconds for calculating with 500,000 observations.
Does that help you?
Here is my attempt done completely in base R.
Joseph <- function(df) {
  is.wholenumber <- function(x, tol = .Machine$double.eps^0.5) abs(x - round(x)) < tol
  v <- df$V
  t <- df$Time
  sv <- sign(v)
  nR <- length(v)
  v0 <- which(v == 0)
  id <- which(abs(c(0, diff(sv))) > 1)  ## This line and (t[id] + t[id - 1L])/2 from #Henrik
  myZeros <- sort(c(v0*t[1L], (t[id] + t[id - 1L])/2))
  lenVals <- diff(c(0, myZeros, t[nR]))  ## Actual values that will populate the Length column
  ## remove values that result from repeating zeros from the df$V column
  lenVals <- lenVals[lenVals != t[1L] | c(!is.wholenumber(myZeros/t[1L]), F)]
  ## Below we need to determine how long to replicate
  ## each of the lenVals above, so we need to find
  ## the starting place and length of each run...
  ## rle is a great candidate for both of these
  m <- rle(sv)
  ml <- m$lengths
  cm <- cumsum(ml)
  zm <- m$values != 0     ## non-zero values i.e. we won't populate anything here
  rl <- m$lengths[zm]     ## non-zero run-lengths
  st <- cm[zm] - rl + 1L  ## starting index
  out <- vector(mode = 'numeric', length = nR)
  for (i in 1:length(st)) {out[st[i]:(st[i] + rl[i] - 1L)] <- lenVals[i]}
  df$Length <- out
  df
}
Here is the output of the given example:
Joseph(dat)
Time V Length
1 0.5 -2 1.50
2 1.0 -1 1.50
3 1.5 0 0.00
4 2.0 2 1.00
5 2.5 0 0.00
6 3.0 1 1.75
7 3.5 2 1.75
8 4.0 1 1.75
9 4.5 -1 0.75
10 5.0 -3 0.75
Here is a larger example:
set.seed(142)
datBig <- data.frame(Time=seq(0.5,50000,0.5), V=sample(-3:3, 10^5, replace=TRUE))
library(compiler)
library(data.table)
library(microbenchmark)
c.Joseph <- cmpfun(Joseph)
c.Henrik <- cmpfun(Henrik)
c.Gregor <- cmpfun(Gregor)
microbenchmark(c.Joseph(datBig), c.Gregor(datBig), c.Henrik(datBig), David(datBig), times = 10)
Unit: milliseconds
expr min lq mean median uq max neval cld
David(datBig) 2.20602 2.617742 4.35927 2.788686 3.13630 114.0674 10 a
c.Joseph(datBig) 61.91015 62.62090 95.44083 64.43548 93.20945 225.4576 10 b
c.Gregor(datBig) 59.25738 63.32861 126.29857 72.65927 214.35961 229.5022 10 b
c.Henrik(datBig) 1511.82449 1678.65330 1727.14751 1730.24842 1816.42601 1871.4476 10 c
As #Gregor pointed out, the goal is to find the x-distance between each occurrence of zero. This can be seen visually by plotting (again, as pointed out by #Gregor; many kudos btw). For example, if we plot the first 20 values of datBig (plot not shown here), we can see that the x-distances over which the graph is either positive or negative (i.e. not zero; this happens when there are repeats of zeros) are approximately:
2.0, 1.25, 0.5, 0.75, 2.0, 1.0, 0.75, 0.5
t1 <- c.Joseph(datBig)
t2 <- c.Gregor(datBig)
t3 <- c.Henrik(datBig)
t4 <- David(datBig)
## Correct values according to the plot above
## 2.00 2.00 2.00 0.00 1.25 1.25 0.50 0.75 0.00 0.00 2.00 2.00 2.00 0.00 0.00 0.00 1.00 0.00 0.75 0.50
## all correct
t1$Length[1:20]
[1] 2.00 2.00 2.00 0.00 1.25 1.25 0.50 0.75 0.00 0.00 2.00 2.00 2.00 0.00 0.00 0.00 1.00 0.00 0.75 0.50
## mostly correct (5 incorrect values, at positions 11-13, 17 and 20)
t2$Length[1:20]
[1] 2.00 2.00 2.00 0.00 1.25 1.25 0.50 0.75 0.00 0.00 0.75 0.75 0.75 0.00 0.00 0.00 0.50 0.00 0.75 0.25
## least correct (13 incorrect values)
t3$Length[1:20]
[1] 2.00 2.00 2.00 0.50 1.00 1.25 0.75 1.25 0.00 1.75 1.75 0.00 1.50 1.50 0.00 0.00 1.25 1.25 1.25 1.25
## all correct
t4$Length[1:20]
[1] 2.00 2.00 2.00 0.00 1.25 1.25 0.50 0.75 0.00 0.00 2.00 2.00 2.00 0.00 0.00 0.00 1.00 0.00 0.75 0.50
# agreement with David's solution
all.equal(t4$Length, t1$Length)
[1] TRUE
Well, it seems the Rcpp solution provided by David is not only accurate but blazing fast.

Assign different values to a large number of columns

I have a large set of financial data that has hundreds of columns. I have cleaned and sorted the data based on date. Here is a simplified example:
df1 <- data.frame(matrix(vector(),ncol=5, nrow = 4))
colnames(df1) <- c("Date","0.4","0.3","0.2","0.1")
df1[1,] <- c("2000-01-31","0","0","0.05","0.07")
df1[2,] <- c("2000-02-29","0","0.13","0.17","0.09")
df1[3,] <- c("2000-03-31","0.03","0.09","0.21","0.01")
df1[4,] <- c("2004-04-30","0.05","0.03","0.19","0.03")
df1
Date 0.4 0.3 0.2 0.1
1 2000-01-31 0 0 0.05 0.07
2 2000-02-29 0 0.13 0.17 0.09
3 2000-03-31 0.03 0.09 0.21 0.01
4 2000-04-30 0.05 0.03 0.19 0.03
I assigned individual weights (based on market value from the raw data) as column headers, because I don’t care about the company names and I need the weights for calculating the result.
My ultimate goal is to get: 1. Sum of the weighted returns; and 2. Sum of the weights when returns are non-zero. With that being said, below is the result I want to get:
Date SWeightedR SWeights
1 2000-01-31 0.017 0.3
2 2000-02-29 0.082 0.6
3 2000-03-31 0.082 1
4 2000-04-30 0.07 1
For instance, the SWeightedR for 2000-01-31 = 0.4x0+0.3x0+0.2x0.05+0.1x0.07, and SWeights = 0.2+0.1.
My initial idea was to assign the weights to each column like WCol2 <- 0.4, then use cbind to create new columns and use c(as.matrix() %*% ) to get the sums. I soon realized that this is not feasible as there are hundreds of columns. Any advice or suggestion is appreciated!
Here's a simple solution using matrix multiplications (as you were suggesting yourself).
First of all, your data seem to be of character type; I'm not sure whether that is the case with the real data, but I would first convert them to an appropriate type:
df1[-1] <- lapply(df1[-1], type.convert)
Next, we will convert the column names to a numeric class too
vec <- as.numeric(names(df1)[-1])
Finally, we could easily create the new columns in two simple steps. This does have a matrix-conversion overhead, but maybe you should work with matrices in the first place. Either way, this is fully vectorized:
df1["SWeightedR"] <- as.matrix(df1[, -1]) %*% vec
df1["SWeights"] <- (df1[, -c(1, ncol(df1))] > 0) %*% vec
df1
# Date 0.4 0.3 0.2 0.1 SWeightedR SWeights
# 1 2000-01-31 0.00 0.00 0.05 0.07 0.017 0.3
# 2 2000-02-29 0.00 0.13 0.17 0.09 0.082 0.6
# 3 2000-03-31 0.03 0.09 0.21 0.01 0.082 1.0
# 4 2004-04-30 0.05 0.03 0.19 0.03 0.070 1.0
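A quick sanity check of the first row against the hand calculation in the question:
w  <- c(0.4, 0.3, 0.2, 0.1)
r1 <- c(0, 0, 0.05, 0.07)
sum(w * r1)         # 0.017: the SWeightedR for 2000-01-31
sum(w * (r1 > 0))   # 0.3:   the SWeights for 2000-01-31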
Or, you could convert to a long format first (here's a data.table example), though I believe it will be less efficient as these are basically by-row operations:
library(data.table)
res <- melt(setDT(df1), id = 1L, variable.factor = FALSE)[
  , c("value", "variable") := .(as.numeric(value), as.numeric(variable))]
res[, .(SWeightedR = sum(variable * value),
        SWeights   = sum(variable * (value > 0))), by = Date]
# Date SWeightedR SWeights
# 1: 2000-01-31 0.017 0.3
# 2: 2000-02-29 0.082 0.6
# 3: 2000-03-31 0.082 1.0
# 4: 2004-04-30 0.070 1.0
