I have a data set similar to the following with 1 column and 60 rows:
value
1 0.0423
2 0.0388
3 0.0386
4 0.0342
5 0.0296
6 0.0276
7 0.0246
8 0.0239
9 0.0234
10 0.0214
.
40 0.1424
.
60 -0.0312
I want to reorder the rows so that certain conditions are met. For example, one condition could be: sum(df$value[4:7]) > 0.1000 & sum(df$value[4:7]) < 0.1100
The data set could then look like this, for example:
value
1 0.0423
2 0.0388
3 0.0386
4 0.1312
5 -0.0312
6 0.0276
7 0.0246
8 0.0239
9 0.0234
10 0.0214
.
.
.
60 0.0342
What I tried was using repeat and sample, as in the following:
repeat {
  df1 <- as_tibble(sample(df$value, replace = TRUE))
  if (sum(df1$value[4:7]) > 0.1000 & sum(df1$value[4:7]) < 0.1100) break
}
Unfortunately, this method takes quite some time, and I was wondering if there is a faster way to reorder rows based on mathematical conditions such as sums or products.
Here's a quick implementation of the hill-climbing method I outlined in my comment. I've had to slightly reframe the desired condition as "distance of sum(x[4:7]) from 0.105" to make it continuous, although you can still use the exact condition when doing the check that all requirements are satisfied. The benefit is that you can add extra conditions to the distance function easily.
# Using same example data as Jon Spring
set.seed(42)
vs = rnorm(60, 0.05, 0.08)
get_distance = function(x) {
  distance = abs(sum(x[4:7]) - 0.105)
  # Add to the distance with further conditions if needed
  distance
}
max_attempts = 10000
best_distance = Inf
swaps_made = 0
for (step in 1:max_attempts) {
  # Copy the vector and swap two random values
  new_vs = vs
  swap_inds = sample.int(length(vs), 2, replace = FALSE)
  new_vs[swap_inds] = rev(new_vs[swap_inds])
  # Keep the new vector if the distance has improved
  new_distance = get_distance(new_vs)
  if (new_distance < best_distance) {
    vs = new_vs
    best_distance = new_distance
    swaps_made = swaps_made + 1
  }
  complete = (sum(vs[4:7]) < 0.11) & (sum(vs[4:7]) > 0.1)
  if (complete) {
    print(paste0("Solution found in ", step, " steps"))
    break
  }
}
sum(vs[4:7])
There's no real guarantee that this method will reach a solution, but I often try this kind of basic hill-climbing when I'm not sure if there's a "smart" way to approach a problem.
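As a hedged extension of that idea (not part of the original answer), you can wrap the climb in a few random restarts so a run that stalls simply reshuffles and tries again; this sketch reuses get_distance() and vs from above.
hill_climb_once <- function(x, max_attempts = 10000) {
  best <- get_distance(x)
  for (step in 1:max_attempts) {
    # Swap two random positions and keep the swap only if it helps
    cand <- x
    ij <- sample.int(length(x), 2)
    cand[ij] <- rev(cand[ij])
    d <- get_distance(cand)
    if (d < best) {
      x <- cand
      best <- d
    }
    # Return as soon as the exact condition is met
    if (sum(x[4:7]) > 0.1 && sum(x[4:7]) < 0.11) return(x)
  }
  NULL  # this run did not find a solution
}

result <- NULL
for (r in 1:20) {                        # up to 20 random restarts
  result <- hill_climb_once(sample(vs))
  if (!is.null(result)) break
}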
Here's an approach using combn from base R, and then filtering using dplyr. (I'm sure there's a way without it, but my base-fu isn't there yet.)
With only 4 numbers from a pool of 60, there are "only" about 488k different combinations (ignoring order; 60*59*58*57/(4*3*2*1) = 487,635), so it's quick to brute force in about a second.
# Make a vector of 60 numbers like your example
set.seed(42)
my_nums <- rnorm(60, 0.05, 0.08);
all_combos <- combn(my_nums, 4) # Get all unique combos of 4 numbers
library(tidyverse)
combos_table <- all_combos %>%
  t() %>%
  as_tibble() %>%
  mutate(sum = V1 + V2 + V3 + V4) %>%
  filter(sum > 0.1, sum < 0.11)
> combos_table
# A tibble: 8,989 x 5
V1 V2 V3 V4 sum
<dbl> <dbl> <dbl> <dbl> <dbl>
1 0.160 0.00482 0.0791 -0.143 0.100
2 0.160 0.00482 0.101 -0.163 0.103
3 0.160 0.00482 0.0823 -0.145 0.102
4 0.160 0.00482 0.0823 -0.143 0.104
5 0.160 0.00482 -0.0611 -0.00120 0.102
6 0.160 0.00482 -0.0611 0.00129 0.105
7 0.160 0.00482 0.0277 -0.0911 0.101
8 0.160 0.00482 0.0277 -0.0874 0.105
9 0.160 0.00482 0.101 -0.163 0.103
10 0.160 0.00482 0.0273 -0.0911 0.101
# … with 8,979 more rows
This says that in this example, there are about 9000 different sets of 4 numbers from my sequence which meet the criteria. We could pick any of these and put them in positions 4-7 to meet your requirement.
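As a small sketch of that last step (not part of the original answer): take one qualifying row of combos_table and rebuild the vector with those four values in positions 4-7. match() is adequate here because the rnorm() values are effectively unique.
chosen <- unlist(combos_table[1, c("V1", "V2", "V3", "V4")])
idx <- match(chosen, my_nums)          # where those four values sit in the original vector

reordered <- c(my_nums[-idx][1:3],     # any three of the remaining values first
               my_nums[idx],           # the chosen combo in positions 4-7
               my_nums[-idx][-(1:3)])  # everything else afterwards

sum(reordered[4:7])                    # falls between 0.1 and 0.11 by construction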
I am trying to take the square root of negative numbers. I take the absolute value of the data; for positive numbers I use the square root of the absolute value directly, otherwise I add a negative sign to the result. However, all the numbers I get are negative.
My code
Results shown
I am trying to get both negative and positive results, but I only get negative numbers.
Library and Data
I'm not sure exactly what you are doing because your original data frame isn't included in the question. However, I have simulated a dataset that should emulate what you want. First, I load the tidyverse package for data wrangling (creating/manipulating variables), then set a random seed so you can reproduce the simulated data.
#### Load Library ####
library(tidyverse)
#### Set Random Seed ####
set.seed(123)
Now I create a randomly distributed x value that is both positive and negative.
#### Create Randomly Distributed X w/Neg Values ####
tib <- tibble(
  x = rnorm(n = 100)
)
Creating Variables
Now we can make absolute values, followed by square roots, which are made negative if the original raw value was negative.
#### Create Absolute and Sqrt Values ####
new.tib <- tib %>%
  mutate(
    abs.x = abs(x),
    sq.x = sqrt(abs.x),
    final.x = ifelse(x < 0,
                     sq.x * -1,
                     sq.x)
  )
new.tib
If you print new.tib, the end result will look like this:
# A tibble: 100 × 4
x abs.x sq.x final.x
<dbl> <dbl> <dbl> <dbl>
1 2.20 2.20 1.48 1.48
2 1.31 1.31 1.15 1.15
3 -0.265 0.265 0.515 -0.515
4 0.543 0.543 0.737 0.737
5 -0.414 0.414 0.644 -0.644
6 -0.476 0.476 0.690 -0.690
7 -0.789 0.789 0.888 -0.888
8 -0.595 0.595 0.771 -0.771
9 1.65 1.65 1.28 1.28
10 -0.0540 0.0540 0.232 -0.232
If you just want to select the final x values, you can simply select them, like so:
new.tib %>%
  select(final.x)
Giving you just this column of values:
# A tibble: 100 × 1
final.x
<dbl>
1 1.48
2 1.15
3 -0.515
4 0.737
5 -0.644
6 -0.690
7 -0.888
8 -0.771
9 1.28
10 -0.232
# … with 90 more rows
Using the first example in ?ifelse:
x <- c(6:-4)
x
[1]  6  5  4  3  2  1  0 -1 -2 -3 -4
sqrt(ifelse(x >= 0, x, -x))
[1] 2.449490 2.236068 2.000000 1.732051 1.414214 1.000000
[7] 0.000000 1.000000 1.414214 1.732051 2.000000
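A compact equivalent (a sketch, not from the original answers): sign() carries the original sign over to the square root of the absolute value.
x <- c(6:-4)
sign(x) * sqrt(abs(x))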
I am fairly new to Stack Overflow but did not find this in the search engine. Please let me know if this question should not be asked here.
I have a very large text file. It has 16 entries and each entry looks like this:
AI_File 10
Version
Date 20200708 08:18:41
Prompt1 LOC
Resp1 H****
Prompt2 QUAD
Resp2 1012
TransComp c-p-s
Model Horizontal
### Computed Results
LAI 4.36
SEL 0.47
ACF 0.879
DIFN 0.031
MTA 40.
SEM 1.
SMP 5
### Ring Summary
MASK 1 1 1 1 1
ANGLES 7.000 23.00 38.00 53.00 68.00
AVGTRANS 0.038 0.044 0.055 0.054 0.030
ACFS 0.916 0.959 0.856 0.844 0.872
CNTCT# 3.539 2.992 2.666 2.076 1.499
STDDEV 0.826 0.523 0.816 0.730 0.354
DISTS 1.008 1.087 1.270 1.662 2.670
GAPS 0.028 0.039 0.034 0.032 0.018
### Contributing Sensors
### Observations
A 1 20200708 08:19:12 x 31.42 38.30 40.61 48.69 60.28
L 2 20200708 08:19:12 1 5.0e-006
B 3 20200708 08:19:21 x 2.279 2.103 1.408 5.027 1.084
B 4 20200708 08:19:31 x 1.054 0.528 0.344 0.400 0.379
B 5 20200708 08:19:39 x 0.446 1.255 2.948 3.828 1.202
B 6 20200708 08:19:47 x 1.937 2.613 5.909 3.665 5.964
B 7 20200708 08:19:55 x 0.265 1.957 0.580 0.311 0.551
Almost all of this is junk information, and I am looking to run some code for the whole file that will only give me the lines for "Resp2" and "LAI" for all 16 of the entries. Is a task like this doable in R? If so, how would I do it?
Thanks very much for any help and please let me know if there's any more information I can give to clear anything up.
I've saved your file as a text file and read in the lines. Then you can use regex to extract the desired rows. However, I feel that my approach is rather clumsy; I bet there are more elegant ways (maybe also with Unix command-line tools).
data <- readLines("testfile.txt")
library(stringr)
resp2 <- as.numeric(str_trim(str_extract(data, "(?m)(?<=^Resp2).*$")))
lai <- as.numeric(str_trim(str_extract(data, "(?m)(?<=^LAI).*$")))
data_extract <- data.frame(
  resp2 = resp2[!is.na(resp2)],
  lai = lai[!is.na(lai)]
)
data_extract
resp2 lai
1 1012 4.36
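For comparison, a base R sketch of the same extraction (not from the original answer; it reuses the data vector from readLines() above):
resp2 <- as.numeric(sub("^Resp2\\s+", "", grep("^Resp2\\b", data, value = TRUE)))
lai <- as.numeric(sub("^LAI\\s+", "", grep("^LAI\\b", data, value = TRUE)))
data.frame(resp2 = resp2, lai = lai)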
A solution based on the tidyverse could look as follows.
library(dplyr)
library(vroom)
library(stringr)
library(tibble)
library(tidyr)
vroom_lines('data') %>%
  enframe() %>%
  filter(str_detect(value, 'Resp2|LAI')) %>%
  transmute(value = str_squish(value)) %>%
  separate(value, into = c('name', 'value'), sep = ' ')
# name value
# <chr> <chr>
# 1 Resp2 1012
# 2 LAI 4.36
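The value column comes out as character; if you need numbers downstream, a small follow-up sketch (here extracted is a hypothetical name for the result of the pipe above):
extracted %>%
  mutate(value = as.numeric(value))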
First and foremost, regardless of whether you have input or not, thank you for taking the time to view my question.
Let me break down what I am doing, the sample dataset, and the error.
What I currently have is data for several different IDs that lists the dispersion per day (you will see it below). I want to loop through the dates and add two columns to the data: a rolling mean column and a rolling standard deviation column.
The code I have written out so far is this:
library(zoo)
library(dplyr)

Testing1 <- function(dataset, k) {
  ops <- data.frame()
  for (i in unique(dataset$Date)) {
    ops <- dataset %>%
      mutate(rolling_mean = rollmean(dataset$Dispersion, k)) %>%
      mutate(rolling_std = rollapply(dataset$Dispersion, width = k, FUN = sd))
  }
  Results <<- ops
}
However, I get the following error:
Error in mutate_impl(.data, dots) :
Column rolling_mean must be length 30 (the number of rows) or one, not 26
I am assuming that the row differential is due to me specifying a 5-day window for the rolling average, meaning it won't calculate it for the first 4 rows. But how do I go about telling R that it's OK to put NAs in those rows? Or if you have any other solution, that would work as well. Please do help.
Here's a sample of the data:
Identifier Date Dispersion
1000 2/15/2018 0.390
1000 2/16/2018 0.664
1000 2/17/2018 0.526
1000 2/18/2018 0.933
1000 2/19/2018 0.009
1000 2/20/2018 0.987
1000 2/21/2018 0.517
1000 2/22/2018 0.641
1000 2/23/2018 0.777
1000 2/24/2018 0.613
1001 2/15/2018 0.617
1001 2/16/2018 0.234
1001 2/17/2018 0.303
1001 2/18/2018 0.796
1001 2/19/2018 0.359
1001 2/20/2018 0.840
1001 2/21/2018 0.291
1001 2/22/2018 0.699
1001 2/23/2018 0.882
1001 2/24/2018 0.467
1002 2/15/2018 0.042
1002 2/16/2018 0.906
1002 2/17/2018 0.077
1002 2/18/2018 0.156
1002 2/19/2018 0.350
1002 2/20/2018 0.060
1002 2/21/2018 0.457
1002 2/22/2018 0.770
1002 2/23/2018 0.433
1002 2/24/2018 0.366
You get this error because the length of the rolling means/SDs does not match the length of Dispersion. Simply add k - 1 NAs at the beginning of your means/SDs vectors.
Below is a working example. You can modify this based on your needs.
library(dplyr)
library(zoo)

my_function <- function(df, k) {
  df %>%
    mutate(
      rolling_mean = c(rep(NA, k - 1), rollmean(Dispersion, k)),
      rolling_std = c(rep(NA, k - 1), rollapply(Dispersion, width = k, FUN = sd))
    )
}
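Hypothetical usage of this helper (a sketch; mydata stands for the data frame shown in the question, with a 5-day window):
Results <- my_function(mydata, k = 5)
head(Results)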
For example, you may want to add group_by to compute these values for each Identifier:
my_function <- function(df, k) {
  df %>%
    group_by(Identifier) %>%
    mutate(
      rolling_mean = c(rep(NA, k - 1), rollmean(Dispersion, k)),
      rolling_std = c(rep(NA, k - 1), rollapply(Dispersion, width = k, FUN = sd))
    )
}
Update, following up on G. Grothendieck's comment:
It turns out the zoo package already has comprehensive features for NA handling, so the code above can be refactored as:
my_function <- function(df, k) {
  df %>%
    mutate(
      rolling_mean = rollmeanr(Dispersion, k, fill = NA),
      rolling_std = rollapplyr(Dispersion, width = k, FUN = sd, fill = NA)
    )
}
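Combining the two ideas above (a sketch, not from the original answer): per-Identifier rolling statistics with zoo's built-in NA padding, where mydata stands for the question's data frame.
library(dplyr)
library(zoo)

mydata %>%
  group_by(Identifier) %>%
  mutate(
    rolling_mean = rollmeanr(Dispersion, 5, fill = NA),
    rolling_std = rollapplyr(Dispersion, width = 5, FUN = sd, fill = NA)
  ) %>%
  ungroup()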
I'd take a look at tibbletime.
Assuming your data frame is named mydata and the Date column is a character: first convert the Date, then convert to a time-aware tibble:
library(dplyr)
library(tibbletime)
mydata <- mydata %>%
  mutate(Date = as.Date(Date, "%m/%d/%Y")) %>%
  as_tbl_time(index = Date)
Now you can define functions for rolling mean and sd:
mean_5 <- rollify(mean, window = 5)
sd_5 <- rollify(sd, window = 5)
mydata %>%
  mutate(rolling_mean = mean_5(Dispersion),
         rolling_std = sd_5(Dispersion))
# A time tibble: 30 x 5
# Index: Date
Identifier Date Dispersion rolling_mean rolling_std
<int> <date> <dbl> <dbl> <dbl>
1 1000 2018-02-15 0.39 NA NA
2 1000 2018-02-16 0.664 NA NA
3 1000 2018-02-17 0.526 NA NA
4 1000 2018-02-18 0.933 NA NA
5 1000 2018-02-19 0.009 0.504 0.342
6 1000 2018-02-20 0.987 0.624 0.393
7 1000 2018-02-21 0.517 0.594 0.394
8 1000 2018-02-22 0.641 0.617 0.393
9 1000 2018-02-23 0.777 0.586 0.367
10 1000 2018-02-24 0.613 0.707 0.182
# ... with 20 more rows
I have a large dataset I need to divide into multiple balanced sets.
The set looks something like the following:
> data<-matrix(runif(4000, min=0, max=10), nrow=500, ncol=8 )
> colnames(data)<-c("A","B","C","D","E","F","G","H")
The sets, each containing for example 20 rows, need to be balanced across multiple variables so that each subset ends up with similar means of B, C, and D compared to all the other subsets.
Is there a way to do that with R? Any advice would be much appreciated. Thank you in advance!
library(tidyverse)
# Reproducible data
set.seed(2)
data<-matrix(runif(4000, min=0, max=10), nrow=500, ncol=8 )
colnames(data)<-c("A","B","C","D","E","F","G","H")
data=as.data.frame(data)
Updated Answer
It's probably not possible to get similar means across sets within each column if you want to keep observations from a given row together. With 8 columns (as in your sample data), you'd need 25 20-row sets where each column A set has the same mean, each column B set has the same mean, etc. That's a lot of constraints. Probably there are, however, algorithms that could find the set membership assignment schedule that minimizes the difference in set means.
However, if you can separately take 20 observations from each column without regard to which row they came from, then here's one option:
# Group into sets with same means
same_means = data %>%
  gather(key, value) %>%
  arrange(value) %>%
  group_by(key) %>%
  mutate(set = c(rep(1:25, 10), rep(25:1, 10)))
# Check means by set for each column
same_means %>%
  group_by(key, set) %>%
  summarise(mean = mean(value)) %>%
  spread(key, mean) %>%
  as.data.frame
set A B C D E F G H
1 1 4.940018 5.018584 5.117592 4.931069 5.016401 5.171896 4.886093 5.047926
2 2 4.946496 5.018578 5.124084 4.936461 5.017041 5.172817 4.887383 5.048850
3 3 4.947443 5.021511 5.125649 4.929010 5.015181 5.173983 4.880492 5.044192
4 4 4.948340 5.014958 5.126480 4.922940 5.007478 5.175898 4.878876 5.042789
5 5 4.943010 5.018506 5.123188 4.924283 5.019847 5.174981 4.869466 5.046532
6 6 4.942808 5.019945 5.123633 4.924036 5.019279 5.186053 4.870271 5.044757
7 7 4.945312 5.022991 5.120904 4.919835 5.019173 5.187910 4.869666 5.041317
8 8 4.947457 5.024992 5.125821 4.915033 5.016782 5.187996 4.867533 5.043262
9 9 4.936680 5.020040 5.128815 4.917770 5.022527 5.180950 4.864416 5.043587
10 10 4.943435 5.022840 5.122607 4.921102 5.018274 5.183719 4.872688 5.036263
11 11 4.942015 5.024077 5.121594 4.921965 5.015766 5.185075 4.880304 5.045362
12 12 4.944416 5.024906 5.119663 4.925396 5.023136 5.183449 4.887840 5.044733
13 13 4.946751 5.020960 5.127302 4.923513 5.014100 5.186527 4.889140 5.048425
14 14 4.949517 5.011549 5.127794 4.925720 5.006624 5.188227 4.882128 5.055608
15 15 4.943008 5.013135 5.130486 4.930377 5.002825 5.194421 4.884593 5.051968
16 16 4.939554 5.021875 5.129392 4.930384 5.005527 5.197746 4.883358 5.052474
17 17 4.935909 5.019139 5.131258 4.922536 5.003273 5.204442 4.884018 5.059162
18 18 4.935830 5.022633 5.129389 4.927106 5.008391 5.210277 4.877859 5.054829
19 19 4.936171 5.025452 5.127276 4.927904 5.007995 5.206972 4.873620 5.054192
20 20 4.942925 5.018719 5.127394 4.929643 5.005699 5.202787 4.869454 5.055665
21 21 4.941351 5.014454 5.125727 4.932884 5.008633 5.205170 4.870352 5.047728
22 22 4.933846 5.019311 5.130156 4.923804 5.012874 5.213346 4.874263 5.056290
23 23 4.928815 5.021575 5.139077 4.923665 5.017180 5.211699 4.876333 5.056836
24 24 4.928739 5.024419 5.140386 4.925559 5.012995 5.214019 4.880025 5.055182
25 25 4.929357 5.025198 5.134391 4.930061 5.008571 5.217005 4.885442 5.062630
Original Answer
# Randomly group data into 20-row groups
set.seed(104)
data = data %>%
  mutate(set = sample(rep(1:(500/20), each=20)))
head(data)
A B C D E F G H set
1 1.848823 6.920055 3.2283369 6.633721 6.794640 2.0288792 1.984295 2.09812642 10
2 7.023740 5.599569 0.4468325 5.198884 6.572196 0.9269249 9.700118 4.58840437 20
3 5.733263 3.426912 7.3168797 3.317611 8.301268 1.4466065 5.280740 0.09172101 19
4 1.680519 2.344975 4.9242313 6.163171 4.651894 2.2253335 1.175535 2.51299726 25
5 9.438393 4.296028 2.3563249 5.814513 1.717668 0.8130327 9.430833 0.68269106 19
6 9.434750 7.367007 1.2603451 5.952936 3.337172 5.2892300 5.139007 6.52763327 5
# Mean by set for each column
data %>%
  group_by(set) %>%
  summarise_all(mean)
set A B C D E F G H
1 1 5.240236 6.143941 4.638874 5.367626 4.982008 4.200123 5.521844 5.083868
2 2 5.520983 5.257147 5.209941 4.504766 4.231175 3.642897 5.578811 6.439491
3 3 5.943011 3.556500 5.366094 4.583440 4.932206 4.725007 5.579103 5.420547
4 4 4.729387 4.755320 5.582982 4.763171 5.217154 5.224971 4.972047 3.892672
5 5 4.824812 4.527623 5.055745 4.556010 4.816255 4.426381 3.520427 6.398151
6 6 4.957994 7.517130 6.727288 4.757732 4.575019 6.220071 5.219651 5.130648
7 7 5.344701 4.650095 5.736826 5.161822 5.208502 5.645190 4.266679 4.243660
8 8 4.003065 4.578335 5.797876 4.968013 5.130712 6.192811 4.282839 5.669198
9 9 4.766465 4.395451 5.485031 4.577186 5.366829 5.653012 4.550389 4.367806
10 10 4.695404 5.295599 5.123817 5.358232 5.439788 5.643931 5.127332 5.089670
# ... with 15 more rows
If the total number of rows in the data frame is not divisible by the number of rows you want in each set, then you can do the following when you create the sets:
data = data %>%
  mutate(set = sample(rep(1:ceiling(500/20), each=20))[1:n()])
In this case, the set sizes will vary a bit when the number of data rows is not divisible by the desired number of rows in each set.
The following approach could be worth trying for someone in a similar position.
It is based on the numerical balancing in groupdata2's fold() function, which allows creating groups with balanced means for a single column. By standardizing each of the columns and numerically balancing their rowwise sum, we might increase the chance of getting balanced means in the individual columns.
I compared this approach to creating groups randomly a few times and selecting the split with the least variance in means. It seems to be a bit better, but I'm not too convinced that this will hold in all contexts.
# Attach dplyr and groupdata2
library(dplyr)
library(groupdata2)
set.seed(1)
# Create the dataset
data <- matrix(runif(4000, min = 0, max = 10), nrow = 500, ncol = 8)
colnames(data) <- c("A", "B", "C", "D", "E", "F", "G", "H")
data <- dplyr::as_tibble(data)
# Standardize all columns and calculate row sums
data_std <- data %>%
  dplyr::mutate_all(.funs = function(x){(x - mean(x)) / sd(x)}) %>%
  dplyr::mutate(total = rowSums(across(where(is.numeric))))
# Create groups (new column called ".folds")
# We numerically balance the "total" column
data_std <- data_std %>%
  groupdata2::fold(k = 25, num_col = "total") # k = 500/20 = 25
# Transfer the groups to the original (non-standardized) data frame
data$group <- data_std$.folds
# Check the means
data %>%
  dplyr::group_by(group) %>%
  dplyr::summarise_all(.funs = mean)
> # A tibble: 25 x 9
> group A B C D E F G H
> <fct> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
> 1 1 4.48 5.05 4.80 5.65 5.04 4.60 5.12 4.85
> 2 2 5.57 5.17 3.21 5.46 4.46 5.89 5.06 4.79
> 3 3 4.33 6.02 4.57 6.18 4.76 3.79 5.94 3.71
> 4 4 4.51 4.62 4.62 5.27 4.65 5.41 5.26 5.23
> 5 5 4.55 5.10 4.19 5.41 5.28 5.39 5.57 4.23
> 6 6 4.82 4.74 6.10 4.34 4.82 5.08 4.89 4.81
> 7 7 5.88 4.49 4.13 3.91 5.62 4.75 5.46 5.26
> 8 8 4.11 5.50 5.61 4.23 5.30 4.60 4.96 5.35
> 9 9 4.30 3.74 6.45 5.60 3.56 4.92 5.57 5.32
> 10 10 5.26 5.50 4.35 5.29 4.53 4.75 4.49 5.45
> # … with 15 more rows
# Check the standard deviations of the means
# Could be used to compare methods
data %>%
  dplyr::group_by(group) %>%
  dplyr::summarise_all(.funs = mean) %>%
  dplyr::summarise(across(where(is.numeric), sd))
> # A tibble: 1 x 8
> A B C D E F G H
> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
> 1 0.496 0.546 0.764 0.669 0.591 0.611 0.690 0.475
It might be best to compare the means and mean variances (or standard deviations as above) of different methods on the standardized data though. In that case, one could calculate the sum of the variances and minimize it.
data_std %>%
  dplyr::select(-total) %>%
  dplyr::group_by(.folds) %>%
  dplyr::summarise_all(.funs = mean) %>%
  dplyr::summarise(across(where(is.numeric), sd)) %>%
  sum()
> 1.643989
Comparing multiple balanced splits
The fold() function allows creating multiple unique grouping factors (splits) at once. So here, I will perform the numerically balanced split 20 times and find the grouping with the lowest sum of the standard deviations of the means. I'll further convert it to a function.
create_multi_balanced_groups <- function(data, cols, k, num_tries){

  # Extract the variables of interest
  # We assume these are numeric but we could add a check
  data_to_balance <- data[, cols]

  # Standardize all columns
  # And calculate rowwise sums
  data_std <- data_to_balance %>%
    dplyr::mutate_all(.funs = function(x){(x - mean(x)) / sd(x)}) %>%
    dplyr::mutate(total = rowSums(across(where(is.numeric))))

  # Create `num_tries` unique numerically balanced splits
  data_std <- data_std %>%
    groupdata2::fold(
      k = k,
      num_fold_cols = num_tries,
      num_col = "total"
    )

  # The new fold columns are named ".folds_1", ".folds_2", etc.
  fold_col_names <- paste0(".folds_", seq_len(num_tries))

  # Remove total column
  data_std <- data_std %>%
    dplyr::select(-total)

  # Calculate score for each split
  # This could probably be done more efficiently without a for loop
  variance_scores <- c()
  for (fcol in fold_col_names){
    score <- data_std %>%
      dplyr::group_by(!!as.name(fcol)) %>%
      dplyr::summarise(across(where(is.numeric), mean)) %>%
      dplyr::summarise(across(where(is.numeric), sd)) %>%
      sum()
    variance_scores <- append(variance_scores, score)
  }

  # Get the fold column with the lowest score
  lowest_fcol_index <- which.min(variance_scores)
  best_fcol <- fold_col_names[[lowest_fcol_index]]

  # Add the best fold column / grouping factor to the original data
  data[["group"]] <- data_std[[best_fcol]]

  # Return the original data and the score of the best fold column
  list(data, min(variance_scores))
}
# Run with 20 splits
set.seed(1)
data_grouped_and_score <- create_multi_balanced_groups(
  data = data,
  cols = c("A", "B", "C", "D", "E", "F", "G", "H"),
  k = 25,
  num_tries = 20
)
# Check data
data_grouped_and_score[[1]]
> # A tibble: 500 x 9
> A B C D E F G H group
> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <fct>
> 1 5.86 6.54 0.500 2.88 5.70 9.67 2.29 3.01 2
> 2 0.0895 4.69 5.71 0.343 8.95 7.73 5.76 9.58 1
> 3 2.94 1.78 2.06 6.66 9.54 0.600 4.26 0.771 16
> 4 2.77 1.52 0.723 8.11 8.95 1.37 6.32 6.24 7
> 5 8.14 2.49 0.467 8.51 0.889 6.28 4.47 8.63 13
> 6 2.60 8.23 9.17 5.14 2.85 8.54 8.94 0.619 23
> 7 7.24 0.260 6.64 8.35 8.59 0.0862 1.73 8.10 5
> 8 9.06 1.11 6.01 5.35 2.01 9.37 7.47 1.01 1
> 9 9.49 5.48 3.64 1.94 3.24 2.49 3.63 5.52 7
> 10 0.731 0.230 5.29 8.43 5.40 8.50 3.46 1.23 10
> # … with 490 more rows
# Check score
data_grouped_and_score[[2]]
> 1.552656
By commenting out the num_col = "total" line, we can run this without the numerical balancing. For me, this gave a score of 1.615257.
Disclaimer: I am the author of the groupdata2 package. The fold() function can also balance a categorical column (cat_col) and keep all data points with the same ID in the same fold (id_col) (e.g. to avoid leakage in cross-validation). There's a very similar partition() function as well.
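For completeness, a minimal sketch of the other features mentioned above (the data frame and column names here are hypothetical):
folded <- groupdata2::fold(
  df,
  k = 5,
  cat_col = "diagnosis",   # balance a categorical column across the folds
  id_col = "participant"   # keep all rows with the same ID in the same fold
)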
The problem I would like to solve involves a sliding window going over the measurement data with a defined window width and a controllable step width (here 1). Within each window I need to detect the values within a certain range of the first value, e.g. 2.2 +- 0.3, and count the number of such values in a row, e.g. 2.2, 2.3, 2.1, 1.8, 2.2, 2.5, 2.1 --> 3, 1, 3.
d <- read.table(text="Number Time.s Potential.V Current.A
1 0.0000 0.075 -0.7653
2 0.0285 0.074 -0.7597
3 0.0855 0.076 -0.7549
17 0.8835 0.074 -0.7045
18 0.9405 0.073 -0.5983
19 0.9975 0.071 -0.1370
19 1.0175 0.070 -0.1370
20 1.0545 0.072 0.1295
21 1.1115 0.073 0.2680
8013 1.6555 0.076 -1.1070
8014 1.7125 0.075 -1.1850
8015 1.7695 0.073 -1.2610
8016 1.8265 0.072 -1.3460
8017 1.8835 0.071 -1.4380
8018 1.9405 0.070 -1.4350
8019 1.9975 0.061 -1.0720
8020 2.1045 0.062 -0.8823
8021 2.1115 0.058 -0.7917
8022 2.1685 0.060 -0.7481", header=TRUE)
rle(round(diff(d$Time.s[d$Time.s>1 & d$Time.s<2]),digits=2))
I can't use rle, because there is no acceptance interval one could define. Working with a for loop is possible, but seems very un-R-ish.
width <- 4
bound.low <- 0.00
bound.high <- 0.03

Ergebnis <- data.frame(
  Potential.V = seq(1, (nrow(d) - width), by = 1),
  count = seq(1, (nrow(d) - width), by = 1))

for (a in 1:(nrow(d) - width)) {
  temp <- d[a:(a + width), c("Time.s", "Potential.V")]
  counter <- 0
  for (b in 1:nrow(temp)) {
    if (temp$Potential.V[1] >= (temp$Potential.V[b] - bound.low) &
        temp$Potential.V[1] <= (temp$Potential.V[b] + bound.high)) {
      counter <- counter + 1
    } else {
      break()
    }
  }
  Ergebnis$Potential.V[a] <- temp$Potential.V[1]
  Ergebnis$count[a] <- counter
}
print(Ergebnis)
print(Ergebnis)
Result
Potential.V count
1 0.075 2
2 0.074 1
3 0.076 5
4 0.074 5
5 0.073 5
6 0.071 2
7 0.070 1
8 0.072 1
9 0.073 1
10 0.076 5
11 0.075 5
12 0.073 5
13 0.072 5
14 0.071 5
15 0.070 5
rle(Ergebnis$count)
Run Length Encoding
lengths: int [1:6] 1 1 3 1 3 6
values : num [1:6] 2 1 5 2 1 5
So I find the needed counts in the lengths vector.
Is there a more elegant way of solving such problems? My experiments with xts and zoo didn't work out like I thought they would.
Best regards,
IInatas
P.S.
The reason for this data analysis is log data from an experiment that has a degradation problem whose severity increases in relation to certain voltages. In the end there is a lifetime account, and I am trying to calculate how much of it is left, based on this log data.
Here's a solution using zoo::rollapply to calculate Ergebnis, but you still need to run rle on the result.
library(zoo)

# the function we're going to apply to each window
f <- function(x, upper=0.03, lower=0.00) {
  # logical test
  l <- x[1] >= (x - lower) & x[1] <= (x + upper)
  # first FALSE value
  m <- if (any(!l)) which.min(l) else length(l)
  c(Potential.V = x[1], count = sum(l[1:m]))
}
Ergebnis <- data.frame(rollapply(d$Potential.V, 5, f, align='left'))
rle(Ergebnis$count)
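For reference, align = 'left' makes each window start at the current row, which matches the d[a:(a+width), ] indexing in the question's loop, and rollapply's width of 5 corresponds to the loop's width = 4 (the current row plus the four following rows).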