I have a numeric vector that I would like to extend with a sequence of three additional numbers between each two values of the vector.
Consider the following example:
# Example vector
set.seed(123)
x <- round(runif(5, 0, 100))
x
### 29 79 41 88 94
I want to insert 3 new numbers between each pair of these five values; the new numbers should form an evenly spaced sequence of length 3 between each pair.
The output should look as follows:
# Desired output
c(seq(29, 79, length.out = 5),
seq(79, 41, length.out = 5)[- 1],
seq(41, 88, length.out = 5)[- 1],
seq(88, 94, length.out = 5)[- 1])
# 29.00 41.50 54.00 66.50 79.00 69.50 60.00 50.50 41.00 52.75 64.50 76.25 88.00 89.50 91.00 92.50 94.00
The added sequence between 29 and 79 is 41.5, 54, 66.5; the added sequence between 79 and 41 is 69.5, 60, 50.5; and so on...
How could I do such an operation in an automated way?
One option is Map: build the sequence between each pair of adjacent numbers by pairing the vector without its last element with the vector without its first element, drop the first element of each sequence in the resulting list, unlist the output, and prepend the first element of the vector:
c(x[1], unlist(lapply(Map(seq, x[-length(x)], x[-1],
MoreArgs = list(length.out = 5)), `[`, -1)))
#[1] 29.00 41.50 54.00 66.50 79.00 69.50 60.00 50.50 41.00 52.75
#[11] 64.50 76.25 88.00 89.50 91.00 92.50 94.00
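Since the inserted values are evenly spaced, this is plain linear interpolation, so base R's approx can do it in one call; a minimal sketch (3 inserted points per gap means 4 segments per original gap):
approx(seq_along(x), x, n = (length(x) - 1) * 4 + 1)$y
# 29.00 41.50 54.00 66.50 79.00 69.50 60.00 50.50 41.00 52.75 64.50 76.25 88.00 89.50 91.00 92.50 94.00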
A less fancy solution is:
set.seed(123)
x <- round(runif(5, 0, 100))
x
c(x[1], unlist(lapply(X = 1:(length(x)-1), function(i) seq(x[i], x[i+1], length.out = 5)[-1])))
[1] 29.00 41.50 54.00 66.50 79.00 69.50 60.00 50.50 41.00 52.75 64.50 76.25 88.00 89.50 91.00 92.50 94.00
Actually, your seed generates these values: 29 79 41 88 94
I have a data frame. Here is a small example:
a <- rnorm(100, 5, 2)
b <- rnorm(100, 10, 3)
c <- rnorm(100, 15, 4)
df <- data.frame(a, b, c)
And I have a character variable vect <- "c('a','b')"
When I try to calculate the sum of these variables using the command
df$d <- df[vect]
which I assumed to be equivalent to
df$d <- df[c('a','b')]
I instead get an error:
Error in `[.data.frame`(df, vect) : undefined columns selected
Your assumption that
vect <- "c('a','b')"
df$d <- df[vect]
is equivalent to
df$d <- df[c('a','b')]
is incorrect.
As @Karthik points out, you should remove the quotation marks in the assignment to vect.
However, from your question it sounds like you then want to sum the elements of the columns specified in vect and assign the result to d. To do this you need to change your code slightly:
vect <- c('a','b')
df$d <- apply(X = df[vect], MARGIN = 1, FUN = sum)
This computes the row-wise sum over the columns of df specified by vect. MARGIN = 1 specifies that we want to apply sum row-wise rather than column-wise.
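For purely numeric columns, rowSums is a shorter (and typically faster) equivalent; a minimal sketch, assuming every column named in vect is numeric:
df$d <- rowSums(df[vect])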
EDIT:
As @ThomasIsCoding points out below, if for some reason vect has to be a string, you can parse the string into an R expression using str2lang:
vect <- "c('a','b')"
parsed_vect <- eval(str2lang(vect))
df$d <- apply(X = df[parsed_vect], MARGIN = 1, FUN = sum)
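Note that str2lang was introduced in R 3.6.0; on older versions, the classic equivalent is eval(parse(text = vect)):
parsed_vect <- eval(parse(text = vect))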
Perhaps you can try
> df[eval(str2lang(vect))]
a b
1 8.1588519 9.0617818
2 3.9361214 13.2752377
3 5.5370983 8.8739725
4 8.4542050 8.5704234
5 3.9044461 13.2642793
6 5.6679639 12.9529061
... (rows 7-100 omitted)
I have data with a grouping variable 'grps' and a value 'x'. I have calculated the cumulative max of 'x' within each group, 'cmx'. Now I need to find the second-highest cumulative value of 'x' within each group, 'scmx'.
Some data, including the desired column scmx:
library(data.table)
d = structure(list(date = structure(rep(c(18690, 18691, 18692, 18693, 18694, 18695, 18696, 18697), 2), class = "Date"),
x = c(18, 70, 57, 94, 94, 13, 98, 23, 20, 72, 59, 96, 96, 15, 100, 25),
grps = c(rep("g1", 8), rep("g2", 8))),
row.names = c(NA, -16L), class = c("data.table", "data.frame"))
d[, cmx := cummax(x), by = .(grps)]
d[, scmx := c(18, 18, 57, 70, 70, 70, 94, 94, 20, 20, 59, 72, 72, 72, 96, 96)]
Context
If x corresponds to a performance rating, what I am trying to do is locate the date when they achieved their best performance and their second best. A similar question of mine where I needed to locate the row which corresponded to the highest cumulative value in a column:
Fill down first row within each cumulative max, with a twist
A data.table alternative:
d[ , scmx2 := {
c(x[1], sapply(seq(.N)[-1], function(i){
v = x[1:i]
v[frank(-v, ties.method = "dense") == 2][1]
}))
}, by = grps]
# date x grps cmx scmx scmx2
# 1: 2021-03-04 18 g1 18 18 18
# 2: 2021-03-05 70 g1 70 18 18
# 3: 2021-03-06 57 g1 70 57 57
# 4: 2021-03-07 94 g1 94 70 70
# 5: 2021-03-08 94 g1 94 70 70
# 6: 2021-03-09 13 g1 94 70 70
# 7: 2021-03-10 98 g1 98 94 94
# 8: 2021-03-11 23 g1 98 94 94
# 9: 2021-03-04 20 g2 20 20 20
# 10: 2021-03-05 72 g2 72 20 20
# 11: 2021-03-06 59 g2 72 59 59
# 12: 2021-03-07 96 g2 96 72 72
# 13: 2021-03-08 96 g2 96 72 72
# 14: 2021-03-09 15 g2 96 72 72
# 15: 2021-03-10 100 g2 100 96 96
# 16: 2021-03-11 25 g2 100 96 96
Within each group (by = grps), loop (sapply) over a sequence from 2 to the number of rows in the current group (seq(.N)[-1]). In each step, subset 'x' from the start of the vector up to index 'i' (v = x[1:i]).
Calculate dense rank and check if the rank is 2 (frank(-v, ties.method = "dense") == 2), i.e. the rank of the second largest number. Use the logical indices to subset 'v' (v[...). Select the first match ([1]; in case of several values with rank 2). Concatenate the result from this 'expanding window' with the first element of 'x' (c(x[1], ...).
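A quick illustration of the ranking step on a toy vector (not from the data above; frank is data.table's fast rank): dense rank 2 on the negated vector picks out the second-highest distinct value.
v <- c(70, 94, 94, 57)
frank(-v, ties.method = "dense")
# [1] 2 1 1 3
v[frank(-v, ties.method = "dense") == 2][1]
# [1] 70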
In the first window, with only one value, there is clearly no second-highest value. Here the OP has chosen to return the first value. The same choice needs to be made for longer windows where all values are equal, which will occur when there are leading runs of equal values. If we would rather return NA than the first value, then replace the x[1] in the line
c(x[1], sapply(seq(.N)[-1], function(i){
...with NA_real_.
Small demo:
d = data.table(grps = c(1, 1, 2, 2, 2), x = c(3, 3, 4, 4, 5))
d[ , scmx2 := {
c(NA_real_, sapply(seq(.N)[-1], function(i){
v = x[1:i]
v[frank(-v, ties.method = "dense") == 2][1]
}))
}, by = grps]
# grps x scmx2
# 1: 1 3 NA # grp 1: all values equal in all windows -> all NA
# 2: 1 3 NA
# 3: 2 4 NA
# 4: 2 4 NA
# 5: 2 5 4 # grp 2: only the last window has a second highest value
This question is indeed similar to the post I linked to above (Finding cumulative second max per group in R). However, here OP asked for a data.table solution.
Here is another option using non-equi join:
d[, s2 := .SD[.SD, on=.(grps, date<=date, x<cmx), by=.EACHI, max(x.x)]$V1]
d[is.na(s2), s2 := x][]
output:
date x grps cmx scmx s2
1: 2021-03-04 18 g1 18 18 18
2: 2021-03-05 70 g1 70 18 18
3: 2021-03-06 57 g1 70 57 57
4: 2021-03-07 94 g1 94 70 70
5: 2021-03-08 94 g1 94 70 70
6: 2021-03-09 13 g1 94 70 70
7: 2021-03-10 98 g1 98 94 94
8: 2021-03-11 23 g1 98 94 94
9: 2021-03-04 20 g2 20 20 20
10: 2021-03-05 72 g2 72 20 20
11: 2021-03-06 59 g2 72 59 59
12: 2021-03-07 96 g2 96 72 72
13: 2021-03-08 96 g2 96 72 72
14: 2021-03-09 15 g2 96 72 72
15: 2021-03-10 100 g2 100 96 96
16: 2021-03-11 25 g2 100 96 96
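In words: for each row of d (by = .EACHI), the non-equi join collects all rows in the same group with an earlier or equal date and a value strictly below the current cumulative max, and takes the maximum of those values (max(x.x)). Rows with no match, at the start of each group, come back as NA and are filled with x itself in the second step.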
Create a sequence along the column x. For each index, apply a function to the values of x from position 1 up to that index, keeping only the unique values; Rfast::nth then takes the 2nd-highest number in each of those vectors. (There is no second-highest value in the first window, so nth returns an undefined value there; the output below was computed on the first group's eight values.)
library(Rfast)
sapply(seq_along(d$x), function(i) {
  nth(unique(d$x[1:i]), 2, descending = TRUE)
})
[1] 2.652495e-315 1.800000e+01 5.700000e+01 7.000000e+01
[5] 7.000000e+01 7.000000e+01 9.400000e+01 9.400000e+01
To do it per group, we can still use the function created above: reshape the data so that each group's values are in their own column, then use lapply with zoo's rollapplyr over expanding windows to capture the 2nd-largest unique value.
library(dplyr)
library(tidyr)
library(zoo)  # rollapplyr
d1 <- d %>%
  select(-cmx, -scmx) %>%  # drop the helper columns so each group pivots to its own column
  pivot_wider(names_from = grps, values_from = x)
lapply(d1[-1], function(x) {  # [-1] drops the date column
  rollapplyr(x, seq_along(x), function(w) nth(sort(unique(w), decreasing = TRUE), 2))
})
I have 2880 observations in my data.frame. I have to create a new data.frame in which I select rows 25 to 77 of every block of 96 rows.
df.new = df[seq(25, nrow(df), 77), ] # extract from 25 to 77
The above code does not do that; I want every row from 25 to 77 within every block of 96 rows.
One option is to create a vector of indices with which to subset the data frame.
idx <- rep(25:77, times = nrow(df)/96) + 96*rep(0:29, each = 77-25+1)
df[idx, ]
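A quick sanity check of the index vector (assuming nrow(df) is 2880):
head(idx)
# [1] 25 26 27 28 29 30
length(idx)
# [1] 1590   # 30 blocks * 53 rows each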
You can use recycling technique to extract these rows :
from = 25
to = 77
n = 96
df.new <- df[rep(c(FALSE, TRUE, FALSE), c(from - 1, to - from + 1, n - to)), ]
To explain, for this example it works as follows:
length(rep(c(FALSE, TRUE, FALSE), c(24, 53, 19))) #returns
#[1] 96
In these 96 values, values 25 to 77 are TRUE and the rest of them are FALSE, which we can verify by:
which(rep(c(FALSE, TRUE, FALSE), c(24, 53, 19)))
# [1] 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46
#[23] 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68
#[45] 69 70 71 72 73 74 75 76 77
Now this vector is recycled for all the remaining rows in the dataframe.
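To see the recycling in action (assuming nrow(df) is 2880, an exact multiple of 96):
idx <- rep(c(FALSE, TRUE, FALSE), c(24, 53, 19))
sum(rep_len(idx, 2880))
# [1] 1590   # 53 rows kept in each of the 30 blocks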
First, define a Group variable, with values 1 to 30, each value repeating 96 times. Then define RowWithinGroup and filter as required. Finally, undo the changes introduced to do the filtering.
library(dplyr)
library(tibble)
df <- tibble(X = rnorm(2880)) %>%
  add_column(Group = rep(1:30, each = 96)) %>%
  group_by(Group) %>%
  mutate(RowWithinGroup = row_number()) %>%
  filter(RowWithinGroup >= 25 & RowWithinGroup <= 77) %>%
  ungroup() %>%
  select(-Group, -RowWithinGroup)
Welcome to SO. This question may not have been asked in this exact form before, but the principles required have been referenced in many, many questions and answers.
A one-liner base solution.
lapply(split(df, cut(1:nrow(df), nrow(df)/96, F)), `[`, 25:77, )
Note: the empty argument after the last comma keeps all columns.
The code above returns a list. To combine all data together, just pass the result above into
do.call(rbind, ...)
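Putting the two steps together (df assumed to be the 2880-row data frame from the question; df.new is a hypothetical name for the result):
chunks <- lapply(split(df, cut(1:nrow(df), nrow(df)/96, FALSE)), `[`, 25:77, )
df.new <- do.call(rbind, chunks)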
I am working on a data set which is large and has many columns, and I am using data.table to speed up the calculations. However, at certain points I am not sure how to proceed in data.table, so I convert back to data.frame and do the calculation there, which slows down the process. It would help a lot to have suggestions on how I can write the code below in data.table. Here is a snippet of my code on dummy data -
library(data.table)
#### set the seed value
set.seed(9901)
#### create the sample variables for creating the data
p01 <- sample(1:100,1000,replace = T)
p02 <- sample(1:100,1000,replace = T)
p03 <- sample(1:100,1000,replace = T)
p04 <- sample(1:100,1000,replace = T)
p05 <- sample(1:100,1000,replace = T)
p06 <- sample(1:100,1000,replace = T)
p07 <- sample(1:100,1000,replace = T)
#### create the data.table
data <- data.table(cbind(p01,p02,p03,p04,p05,p06,p07))
###user input for last column
lcol <- 6
###calculate start column as last - 3
scol <- lcol-3
###calculate average for scol:lcol
data <- data[,avg:= apply(.SD,1,mean,na.rm=T),.SDcols=scol:lcol]
###converting to data.frame since do not know the solution in data.table
data <- as.data.frame(data)
###calculate the trend in percentage
data$t01 <- data[,lcol-00]/data[,"avg"]-1
data$t02 <- data[,lcol-01]/data[,"avg"]-1
data$t03 <- data[,lcol-02]/data[,"avg"]-1
data$t04 <- data[,lcol-03]/data[,"avg"]-1
data$t05 <- data[,lcol-04]/data[,"avg"]-1
###converting back to data.table
data <- as.data.table(data)
###calculate the min and max for the trend
data1 <- data[,`:=` (trend_min = apply(.SD,1,min,na.rm=T),
trend_max = apply(.SD,1,max,na.rm=T)),.SDcols=c(scol:lcol)]
###calculate flag if any of t04 OR t05 is an outlier for min and max values. This would be many columns in actual data
data1$flag1 <- ifelse(data1$t04 < data1$trend_min | data1$t04 > data1$trend_max,1,0)
data1$flag2 <- ifelse(data1$t05 < data1$trend_min | data1$t05 > data1$trend_max,1,0)
data1$flag <- ifelse(data1$flag1 == 1 | data1$flag2 == 1,1,0)
So basically, how can I -
calculate the percentages based on a user-supplied column index (note it is not a simple division but a percentage change), and
create the flag variable? I think I need to use the any function, but I am not sure how.
Some steps can be made more efficient: instead of using apply with MARGIN = 1, the mean, min and max can be replaced with rowMeans, pmin and pmax.
library(data.table)
data[ , avg:= rowMeans(.SD, na.rm = TRUE) ,.SDcols=scol:lcol]
data[, sprintf('t%02d', 1:5) := lapply(.SD, function(x) x/avg - 1),
.SDcols = patterns("^p0[1-5]")]
data[,`:=` (trend_min = do.call(pmin, c(.SD,na.rm=TRUE)),
trend_max = do.call(pmax, c(.SD,na.rm=TRUE)) ),.SDcols=c(scol:lcol)]
data
# p01 p02 p03 p04 p05 p06 p07 avg t01 t02 t03 t04 t05 trend_min trend_max
# 1: 35 53 22 82 100 59 69 65.75 -0.46768061 -0.19391635 -0.6653992 0.24714829 0.5209125 22 100
# 2: 78 75 15 65 70 69 66 54.75 0.42465753 0.36986301 -0.7260274 0.18721461 0.2785388 15 70
# 3: 15 45 27 61 63 75 99 56.50 -0.73451327 -0.20353982 -0.5221239 0.07964602 0.1150442 27 75
# 4: 41 80 13 22 63 84 17 45.50 -0.09890110 0.75824176 -0.7142857 -0.51648352 0.3846154 13 84
# 5: 53 9 75 47 25 75 66 55.50 -0.04504505 -0.83783784 0.3513514 -0.15315315 -0.5495495 25 75
# ---
# 996: 33 75 9 61 74 55 57 49.75 -0.33668342 0.50753769 -0.8190955 0.22613065 0.4874372 9 74
# 997: 24 68 74 11 43 75 37 50.75 -0.52709360 0.33990148 0.4581281 -0.78325123 -0.1527094 11 75
# 998: 62 78 82 97 56 50 74 71.25 -0.12982456 0.09473684 0.1508772 0.36140351 -0.2140351 50 97
# 999: 70 88 93 4 39 75 93 52.75 0.32701422 0.66824645 0.7630332 -0.92417062 -0.2606635 4 93
#1000: 20 50 99 94 62 66 98 80.25 -0.75077882 -0.37694704 0.2336449 0.17133956 -0.2274143 62 99
and then create the 'flag'
data[, flag := +(Reduce(`|`, lapply(.SD, function(x)
x < trend_min| x > trend_max))), .SDcols = t04:t05]
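Reduce(`|`, ...) collapses the list of per-column logical vectors into a single logical vector that is TRUE wherever any of the columns is out of range, and the unary + turns TRUE/FALSE into 1/0. A tiny illustration:
Reduce(`|`, list(c(TRUE, FALSE, FALSE), c(FALSE, FALSE, TRUE)))
# [1] TRUE FALSE TRUE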
I'd like to grep for "nitrogen" in the following character vector and want to get
back only the entry that is exactly "nitrogen", not the entries that merely contain it (e.g. "nitrogen fixation"):
varnames=c("nitrogen", "dissolved organic nitrogen", "nitrogen fixation", "total dissolved nitrogen", "total nitrogen")
I tried something like this:
grepl(pattern= "![[:space:]]nitrogen![[:space:]]", varnames)
But this doesn't work.
Although Dason's answer is easier, you could do an exact match using grep via:
varnames=c("nitrogen", "dissolved organic nitrogen", "nitrogen fixation", "total dissolved nitrogen", "total nitrogen")
grep("^nitrogen$",varnames,value=TRUE)
[1] "nitrogen"
grep("^nitrogen$",varnames)
[1] 1
To get the indices that are exactly equal to "nitrogen" you could use
which(varnames == "nitrogen")
Depending on what you want to do you might not even need the 'which' as varnames == "nitrogen" gives a logical vector of TRUE/FALSE. If you just want to do something like replace all of the occurances of "nitrogen" with "oxygen" this should suffice
varnames[varnames == "nitrogen"] <- "oxygen"
Or use fixed = TRUE if you want to match the literal string with no regex involved. Note that fixed = TRUE still matches substrings; it only amounts to an exact match when, as in the vector below, no entry merely contains "nitrogen":
v <- sample(c("nitrogen", "potassium", "hidrogen"), size = 100, replace = TRUE, prob = c(.8, .1, .1))
grep("nitrogen", v, fixed = TRUE)
# [1] 3 4 5 6 7 8 9 11 12 13 14 16 19 20 21 22 23 24 25
# [20] 26 27 29 31 32 35 36 38 39 40 41 43 44 46 47 48 49 50 51
# [39] 52 53 54 56 57 60 61 62 65 66 67 69 70 71 72 73 74 75 76
# [58] 78 79 80 81 82 83 84 85 86 87 88 89 91 92 93 94 95 96 97
# [77] 98 99 100
Dunno about the speed issues, I like to test stuff before claiming that approach A is faster than approach B, but in theory, at least in my experience, indexing/binary operators should be the fastest, so I vote for @Dason's approach. Also note that regexes are always slower than fixed = TRUE grepping.
A little proof is attached below. Note that this is a lame test, and system.time should be put inside replicate to get (more) accurate differences, you should take outliers into account, etc. But surely this one proves that you should use which! =)
(a0 <- system.time(replicate(1e5, grep("^nitrogen$", v))))
# user system elapsed
# 5.700 0.023 5.724
(a1 <- system.time(replicate(1e5, grep("nitrogen", v, fixed = TRUE))))
# user system elapsed
# 1.147 0.020 1.168
(a2 <- system.time(replicate(1e5, which(v == "nitrogen"))))
# user system elapsed
# 1.013 0.020 1.033