How to subtract a column by row? - r

I want to do a simple subtraction in R, but I don't know how to solve it. Do I have to write a loop, or is there a function for this?
I have a column of numeric values, and I would like to subtract the previous value from each value (row n minus row n-1).
Time_Day Diff
      10   10
      15    5
      45   30
      60   15
Thus, I would like to compute the variable "Diff".

You can also try the dplyr package:
library(dplyr)
mutate(df, dif = Time_Day - lag(Time_Day))
#   Time_Day Diff dif
# 1       10   10  NA
# 2       15    5   5
# 3       45   30  30
# 4       60   15  15
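If you want the first row to show 10 instead of NA (treating the value before the first observation as 0), lag's default argument handles that:
mutate(df, dif = Time_Day - lag(Time_Day, default = 0))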

Does this do what you need?
Here we save the column as a variable (using x rather than c as a name, since c shadows the c() function):
x <- c(10, 15, 45, 60)
Now we prepend a 0 and cut off the last element, giving each row's previous value:
xm1 <- c(0, x)[1:length(x)]
Now we subtract the two:
dif <- x - xm1
If we print that out, we get what you're looking for:
dif # 10 5 30 15
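The same shift can be written in one line with head(), which drops the last element (an equivalent base R variant):
dif <- x - c(0, head(x, -1))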

With diff:
df <- data.frame(Time_Day = c(10, 15, 45, 60))
df$Diff <- c(df$Time_Day[1], diff(df$Time_Day))
df
##   Time_Day Diff
## 1       10   10
## 2       15    5
## 3       45   30
## 4       60   15
It works fine in dplyr too:
library("dplyr")
df <- data.frame(Time_Day = c(10, 15, 45, 60))
df %>% mutate(Diff = c(Time_Day[1], diff(Time_Day)))
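For completeness, data.table's shift() does the same job, with fill controlling what is used before the first row (a variant not shown in the original answers):
library(data.table)
setDT(df)[, Diff := Time_Day - shift(Time_Day, fill = 0)]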

Related

In R, conditionally adding values where one of the variables has to be positive (using rowSums)

I have used the following code previously to add values across a row:
subset$EBIT <- rowSums(subset[c("rorresul", "resand", "rteinknc",
                                "rteinext", "rteinov")], na.rm = TRUE)
However, I would actually need to include the condition that "resand" should only be included if it is positive. The other values can be either positive or negative; that does not matter. I used rowSums because otherwise my total ended up missing whenever one of the variables had a missing value.
If you need sample of data, here is some:
rorresul resand rteinknc rteinext rteinov
      40     30        2        2       2
      50    -40        5        5       5
      30      0        1        1       1
Super appreciative of any help! Thanks!
I would just sum everything, and then subtract resand afterwards where it is negative:
library(dplyr)
df %>%
  mutate(
    EBIT = rowSums(across(everything())),
    EBIT = ifelse(resand < 0, EBIT - resand, EBIT)
  )
#   rorresul resand rteinknc rteinext rteinov EBIT
# 1       40     30        2        2       2   76
# 2       50    -40        5        5       5   65
# 3       30      0        1        1       1   33
Here is the data:
df <- data.frame(
  rorresul = c(40, 50, 30),
  resand   = c(30, -40, 0),
  rteinknc = c(2, 5, 1),
  rteinext = c(2, 5, 1),
  rteinov  = c(2, 5, 1)
)
Edit
In case you have variables that shouldn't be included in the rowSums, you can prespecify them:
sumVars <- c("rorresul", "resand", "rteinknc", "rteinext", "rteinov")
df %>%
  mutate(
    EBIT = rowSums(across(all_of(sumVars))),
    EBIT = ifelse(resand < 0, EBIT - resand, EBIT)
  )
You can use pmax to turn the negative resand values into 0 and then calculate rowSums:
cols <- c("rorresul", "resand", "rteinknc", "rteinext", "rteinov")
df$EBIT <- rowSums(transform(df, resand = pmax(resand, 0))[cols])
df
#  rorresul resand rteinknc rteinext rteinov EBIT
#1       40     30        2        2       2   76
#2       50    -40        5        5       5   65
#3       30      0        1        1       1   33
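A close variant that avoids modifying the column first: subtract the negative part directly, since pmin(resand, 0) is resand when it is negative and 0 otherwise (a small sketch, not from the original answers):
df$EBIT <- rowSums(df[cols]) - pmin(df$resand, 0)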

R: efficiently merge 1000+ variables

I have 1000+ datasets with the exact same dimensions and the same column "A" that I need to load from the web (using jsonlite) and then merge. I can choose the data.frame names but not change the data itself. I could do it all manually, but there might be a more efficient way. Let me show what I mean with this example of three datasets.
cola <- c(1, 2, 3, 4)
x0001 <- c(10, 11, 12, 13)
x0002 <- c(20, 22, 25, 29)
x0003 <- c(30, 31, 33, 38)
df0001 <- data.frame(cola, x0001)
colnames(df0001) <- c("A","B")
df0002 <- data.frame(cola, x0002)
colnames(df0002) <- c("A","B")
df0003 <- data.frame(cola, x0003)
colnames(df0003) <- c("A","B")
# data.frame names do not matter to me
alldata <- Reduce(function(x,y) merge(x=x, y=y, by="A"), list(df0001, df0002, df0003))
colnames(alldata) <- c("A", "df0001", "df0002", "df0003")
The merge into alldata and the colnames() call below would get very long if I did it manually by listing all 1000+ variables. Maybe there is a better way, perhaps with a loop?
If the objects are all in memory, you can gather them into a list with the mget and ls(pattern = ...) functions.
dfs <- mget(ls(pattern = "df[0-9]+"))
dfs
#$df0001
#  A  B
#1 1 10
#2 2 11
#3 3 12
#4 4 13
#
#...
#
#$df0003
#  A  B
#1 1 30
#2 2 31
#3 3 33
#4 4 38
If the data.frames always have the same columns, in the same order, you can use do.call:
cbind(dfs[[1]], do.call(cbind, lapply(dfs[-1], `[`, , -1)))
#  A  B df0002 df0003
#1 1 10     20     30
#2 2 11     22     31
#3 3 12     25     33
#4 4 13     29     38
Otherwise, you can use Reduce:
Reduce(function(x, y) merge(x, y, by = "A"), dfs)
#  A B.x B.y  B
#1 1  10  20 30
#2 2  11  22 31
#3 3  12  25 33
#4 4  13  29 38
The drawback of Reduce is that it results in significant intermediate memory allocation.
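To also get the column names the question asks for without typing 1000+ names, the list names from mget can be reused by renaming each value column before merging. A minimal sketch, assuming every data frame has exactly two columns, A plus one value column:
dfs <- mget(ls(pattern = "df[0-9]+"))
dfs <- Map(function(d, nm) setNames(d, c("A", nm)), dfs, names(dfs))
alldata <- Reduce(function(x, y) merge(x, y, by = "A"), dfs)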

Calculate mean of specific row pattern

I have a dataframe like this:
V1 <- paste0("AB", 1:48)
V2 <- 1:48
test <- data.frame(name = V1, value = V2)
I want to calculate means of the value column over specific sets of rows.
The pattern of the rows is pretty complicated:
Rows of MeanA1: 1, 5, 9
Rows of MeanA2: 2, 6, 10
Rows of MeanA3: 3, 7, 11
Rows of MeanA4: 4, 8, 12
Rows of MeanB1: 13, 17, 21
Rows of MeanB2: 14, 18, 22
Rows of MeanB3: 15, 19, 23
Rows of MeanB4: 16, 20, 24
Rows of MeanC1: 25, 29, 33
Rows of MeanC2: 26, 30, 34
Rows of MeanC3: 27, 31, 35
Rows of MeanC4: 28, 32, 36
Rows of MeanD1: 37, 41, 45
Rows of MeanD2: 38, 42, 46
Rows of MeanD3: 39, 43, 47
Rows of MeanD4: 40, 44, 48
As you can see, it starts at 4 different points (1, 13, 25, 37), always steps by +4 within a group, and each of the following 4 means just starts 1 row further down.
I would like to have an output of all these means in one list.
Any ideas? NOTE: In this example the mean is of course always the middle number, but my real df is different.
Not quite sure about the output format you require, but the following code calculates what you want anyhow.
calc_mean1 <- function(x) mean(test$value[seq(x, by = 4, length.out = 3)])
calc_mean2 <- function(x){sapply(x:(x+3), calc_mean1)}
output <- lapply(seq(1, 37, 12), calc_mean2)
names(output) <- paste0('Mean', LETTERS[seq_along(output)]) # remove this line if more than 26 groups.
output
## $MeanA
## [1] 5 6 7 8
## $MeanB
## [1] 17 18 19 20
## $MeanC
## [1] 29 30 31 32
## $MeanD
## [1] 41 42 43 44
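The same numbers can also be produced with a single reshape (a sketch, not part of the original answer, assuming the 4 x 4 x 3 layout described in the question): put value into a 4-row matrix so that column k holds rows 4k-3 to 4k, then average triples of columns. Columns of the result correspond to groups A-D.
m <- matrix(test$value, nrow = 4)  # column k holds rows 4k-3 .. 4k
sapply(split(1:12, rep(1:4, each = 3)), function(j) rowMeans(m[, j]))
#      1  2  3  4
# [1,] 5 17 29 41
# [2,] 6 18 30 42
# [3,] 7 19 31 43
# [4,] 8 20 32 44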
An idea via base R is to create a grouping variable for every 4 rows, split the data every 12 rows (nrow(test) / 4) and aggregate to find the mean, i.e.
test$new <- rep(1:4, nrow(test) %/% 4)
lapply(split(test, rep(1:4, each = nrow(test) %/% 4)), function(i)
  aggregate(value ~ new, i, mean))
# $`1`
#   new value
# 1   1     5
# 2   2     6
# 3   3     7
# 4   4     8
# $`2`
#   new value
# 1   1    17
# 2   2    18
# 3   3    19
# 4   4    20
# $`3`
#   new value
# 1   1    29
# 2   2    30
# 3   3    31
# 4   4    32
# $`4`
#   new value
# 1   1    41
# 2   2    42
# 3   3    43
# 4   4    44
And yet another way.
fun <- function(DF, col, step = 4){
  run <- nrow(DF) / step^2
  res <- lapply(seq_len(step), function(inc){
    inx <- seq_len(run * step) + (inc - 1) * run * step
    dftmp <- DF[inx, ]
    tapply(dftmp[[col]], rep(seq_len(step), run), mean, na.rm = TRUE)
  })
  names(res) <- sprintf("Mean%s", LETTERS[seq_len(step)])
  res
}
fun(test, 2, 4)
#$MeanA
# 1 2 3 4
# 5 6 7 8
#
#$MeanB
# 1  2  3  4
#17 18 19 20
#
#$MeanC
# 1  2  3  4
#29 30 31 32
#
#$MeanD
# 1  2  3  4
#41 42 43 44
Since you said you wanted a long list of the means, I assumed it could also be a vector where you just have all these values. You would get that like this, looping only over the 16 valid starting rows (1-4, 13-16, 25-28 and 37-40) rather than over every row:
V1 <- paste0("AB", 1:48)
V2 <- 1:48
test <- data.frame(name = V1, value = V2)
starts <- as.vector(outer(0:3, c(1, 13, 25, 37), "+"))  # 1,2,3,4, 13,14,15,16, ...
meanVector <- NULL
for (i in starts) {
  x <- c(test$value[i], test$value[i + 4], test$value[i + 8])
  m <- mean(x)
  meanVector <- c(meanVector, m)
}
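The same result without growing the vector inside a loop (a small variant, not from the original answer):
meanVector <- vapply(starts, function(i) mean(test$value[c(i, i + 4, i + 8)]), numeric(1))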

Creating a vector from data.table row without using apply

Let's say I want to create a column in a data.table, in which the value in each row is equal to the standard deviation of the values in three other cells in the same row. E.g., if I make
DT <- data.table(a = 1:4, b = c(5, 7, 9, 11), c = c(13, 16, 19, 22), d = c(25, 29, 33, 37))
DT
   a  b  c  d
1: 1  5 13 25
2: 2  7 16 29
3: 3  9 19 33
4: 4 11 22 37
and I'd like to add a column that contains the standard deviation of a, b, and d for each row, like this:
   a  b  c  d abdSD
1: 1  5 13 25 12.86
2: 2  7 16 29 14.36
3: 3  9 19 33 15.87
4: 4 11 22 37 17.39
I could of course write a for-loop or use an apply function to calculate this. Unfortunately, what I actually want to do needs to be applied to millions of rows, isn't as simple a function as calculating a standard deviation, and needs to finish within a fraction of a second, so I really need a vectorized solution. I want to write something like
DT[, abdSD := sd(c(a, b, d))]
but unfortunately that doesn't give the right answer. Is there any data.table syntax that can create a vector out of different values within the same row, and make that vector accessible to a function populating a new cell within that row? Any help would be greatly appreciated.
Depending on the size of your data, you might want to convert the data into a long format, then calculate the result as follows:
complexFunc <- function(x) sd(x)
cols <- c("a", "b", "d")
rowres <- melt(DT[, rn := .I], id.vars = "rn", variable.factor = FALSE)[,
  list(abdRes = complexFunc(value[variable %chin% cols])), by = .(rn)]
DT[rowres, on = .(rn)]
or if your complex function has 3 arguments, you can do something like
DT[, abdSD := mapply(complexFunc, a, b, d)]
As @Frank mentioned, I could avoid adding a column by doing by = 1:nrow(DT):
DT[, abdSD := sd(c(a, b, d)), by = 1:nrow(DT)]
Output:
   a  b  c  d    abdSD
1: 1  5 13 25 12.85820
2: 2  7 16 29 14.36431
3: 3  9 19 33 15.87451
4: 4 11 22 37 17.38774
If you add a row id column, it becomes very easy:
DT$row_id <- row.names(DT)
Simply grouping by row_id gets you the result you want:
DT[, abdSD := sd(c(a, b, d)), by = row_id]
The result:
   a  b  c  d row_id    abdSD
1: 1  5 13 25      1 12.85820
2: 2  7 16 29      2 14.36431
3: 3  9 19 33      3 15.87451
4: 4 11 22 37      4 17.38774
If you want row_id removed, simply append [, row_id := NULL]:
DT[, abdSD := sd(c(a, b, d)), by = row_id][, row_id := NULL]
That one line gets everything you want:
   a  b  c  d    abdSD
1: 1  5 13 25 12.85820
2: 2  7 16 29 14.36431
3: 3  9 19 33 15.87451
4: 4 11 22 37 17.38774
You just have to do the computation by row: data.table evaluates j on whole columns within each group, so a per-row result needs a grouping that puts every row in its own group.
Hope this helps.
I think you should try the matrixStats package:
library(matrixStats)
# sample data
dt <- data.table(a = 1:4, b = c(5, 7, 9, 11), c = c(13, 16, 19, 22), d = c(25, 29, 33, 37))
dt[, abdSD := rowSds(as.matrix(.SD), na.rm = TRUE), .SDcols = c('a', 'b', 'd')]
dt
Output is:
   a  b  c  d    abdSD
1: 1  5 13 25 12.85820
2: 2  7 16 29 14.36431
3: 3  9 19 33 15.87451
4: 4 11 22 37 17.38774
Not an answer, but just trying to show the difference between using apply and the solution provided by Prem above.
I have blown up the sample data to 40,000 rows to show solid time differences:
library(matrixStats)
# sample data
dt <- data.table(a = 1:40000, b = rep(c(5, 7, 9, 11), 10000), c = rep(c(13, 16, 19, 22), 10000), d = rep(c(25, 29, 33, 37), 10000))
df <- data.frame(a = 1:40000, b = rep(c(5, 7, 9, 11), 10000), c = rep(c(13, 16, 19, 22), 10000), d = rep(c(25, 29, 33, 37), 10000))
t0 <- Sys.time()
dt[, abdSD := rowSds(as.matrix(.SD), na.rm = TRUE), .SDcols = c('a', 'b', 'd')]
print(paste("Time taken for data table operation =", Sys.time() - t0))
# [1] "Time taken for data table operation = 0.117115020751953"
t0 <- Sys.time()
df$abdSD <- apply(df[, c("a", "b", "d")], 1, function(x) sd(x))
print(paste("Time taken for apply operation =", Sys.time() - t0))
# [1] "Time taken for apply operation = 2.93488311767578"
Using data.table and matrixStats clearly wins the race.
It's not hard to vectorize the sd for this situation:
vecSD <- function(x) {
  n <- ncol(x)
  sqrt((n / (n - 1)) * (Reduce(`+`, x * x) / n - (Reduce(`+`, x) / n)^2))
}
DT[, vecSD(.SD), .SDcols = c('a', 'b', 'd')]
#[1] 12.85820 14.36431 15.87451 17.38774
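To store the result as a column rather than just print it (a small follow-up using the same function):
DT[, abdSD := vecSD(.SD), .SDcols = c('a', 'b', 'd')]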

Split a vector in R depending on entries

I input a vector vec <- c(2, 3, 4, 8, 10, 12, 15, 19, 20, 23, 27, 28, 39, 47, 52, 60, 64, 75), and the size of the intervals that I want to break the vector entries into.
In this example I want to break this into 9 different vectors based on the size of each entry.
I want vector number 1 to be the entries in the interval [1,9], vector 2 the entries in [10,18], etc.
In other words:
vec1: 2 3 4 8
vec2: 10 12 15
vec3: 19 20 23 27
etc.
I have tried using the split function, but I do not know how to build a grouping that will work.
Maybe the following will do what you want. Note that the breaks have to extend past max(vec); with seq(0, max(vec), by = 9) they would stop at 72 and the largest value (75) would become NA:
f <- cut(vec, seq(0, 9 * ceiling(max(vec) / 9), by = 9), include.lowest = TRUE)
sp <- split(vec, f)
sp <- sp[sapply(sp, function(x) length(x) != 0)]
sp
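A slightly more compact way to drop the empty groups, if you prefer (base R's Filter keeps the elements with nonzero length):
sp <- Filter(length, sp)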
Use integer division %/% to return a vector of which group each value belongs in. Then split into separate vectors. Use (vec-1) to be "inclusive", i.e. 27 goes with group 3, not group 4.
split(vec, (vec - 1) %/% 9)
Edit:
Another way using dplyr and cut, which explicitly tags each interval (the data frame needs its own name so the vec column can still be referenced inside mutate):
require(dplyr)
df <- data.frame(vec = vec)
df %>% mutate(interval = cut(vec, breaks = seq(0, ((max(vec) %/% 9) + 1) * 9, 9), include.lowest = TRUE, right = TRUE))
   vec interval
1    2    [0,9]
2    3    [0,9]
3    4    [0,9]
4    8    [0,9]
5   10   (9,18]
6   12   (9,18]
7   15   (9,18]
8   19  (18,27]
9   20  (18,27]
10  23  (18,27]
11  27  (18,27]
...
Maybe this:
library(purrr)
vec <- c(2, 3, 4, 8, 10, 12, 15, 19, 20, 23, 27, 28, 39, 47, 52, 60, 64, 75)
vec1 <- keep(vec, function(x) x >= 1 & x <= 9)
vec2 <- keep(vec, function(x) x >= 10 & x <= 18)
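To get all nine vectors without writing one keep() call per interval (a sketch generalizing the same idea):
starts <- seq(1, 9 * ceiling(max(vec) / 9), by = 9)  # 1, 10, 19, ..., 73
groups <- lapply(starts, function(s) keep(vec, function(x) x >= s & x <= s + 8))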
