R Markdown: two columns should have the same names

I have a table where I've performed two statistical tests, so I have a statistic and a p.value twice. R appended ".x" after the first pair and ".y" after the second, because two columns cannot have the same name in R.
Now I want to insert my data frame into R Markdown and convert it to a PDF file. Is there a way to reshape the table so that the names of both column pairs are the same?
Here is my current table:
# A tibble: 6 x 4
  statistic.x p.value.x statistic.y p.value.y
        <dbl> <chr>           <dbl> <chr>
1       0.533 0.595          115806 0.791
2       0.276 0.783           60380 0.674
3      -0.481 0.633           28392 0.116
4       2.68  0.008 * *       94507 0.195
5       1.95  0.054           56902 0.349
6       2.00  0.051           37688 0.397
And I want to have this table in R Markdown:
# A tibble: 6 x 4
  statistic p.value   statistic p.value
      <dbl> <chr>         <dbl> <chr>
1     0.533 0.595        115806 0.791
2     0.276 0.783         60380 0.674
3    -0.481 0.633         28392 0.116
4     2.68  0.008 * *     94507 0.195
5     1.95  0.054         56902 0.349
6     2.00  0.051         37688 0.397
Here is the code for my data:
structure(list(statistic.x = c(0.533, 0.276, -0.481, 2.678, 1.95,
1.996), p.value.x = c("0.595", "0.783", "0.633", "0.008 * *",
"0.054", "0.051"), statistic.y = c(115806, 60380, 28392, 94507,
56902, 37688), p.value.y = c("0.791", "0.674", "0.116", "0.195",
"0.349", "0.397")), class = c("tbl_df", "tbl", "data.frame"), row.names = c(NA,
-6L))

I didn't have any problem setting the names using colnames():
> colnames(dat) <- c( "statistic", "p.value", "statistic", "p.value" )
> dat
statistic p.value statistic p.value
1 0.533 0.595 115806 0.791
2 0.276 0.783 60380 0.674
3 -0.481 0.633 28392 0.116
4 2.678 0.008 * * 94507 0.195
5 1.950 0.054 56902 0.349
6 1.996 0.051 37688 0.397
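If the goal is only how the table looks in the rendered PDF, another option (a minimal sketch, assuming the table is printed with knitr::kable(); dat stands for the data frame, whether it still carries the .x/.y names or not) is to override the displayed headers at print time via the col.names argument:
library(knitr)
# col.names only changes the printed header, so duplicate labels are allowed
kable(dat, col.names = c("statistic", "p.value", "statistic", "p.value"))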

Related

Loop on several variables with the same suffix in R

I have a database which looks like this, but with many more rows and columns.
Several variables (x, y, z) are measured at different times (1, 2, 3).
library(tibble)

df <-
  tibble(
    x1 = rnorm(10),
    x2 = rnorm(10),
    x3 = rnorm(10),
    y1 = rnorm(10),
    y2 = rnorm(10),
    y3 = rnorm(10),
    z1 = rnorm(10),
    z2 = rnorm(10),
    z3 = rnorm(10),
  )
I am trying to create dummy variables from the variables with the same suffix (measured at the same time), like this:
df <- df %>%
  mutate(var1 = ifelse(x1 > 0 & (y1 < 0.5 | z1 < 0.5), 0, 1)) %>%
  mutate(var2 = ifelse(x2 > 0 & (y2 < 0.5 | z2 < 0.5), 0, 1)) %>%
  mutate(var3 = ifelse(x3 > 0 & (y1 < 0.5 | z3 < 0.5), 0, 1))
I am used to coding in SAS or Stata, so I would like to use a function or a loop, because I have many more variables in my database.
But I don't think I have the right approach in R to deal with this.
Thank you very much for your help!
{dplyover} makes this kind of operation easy (disclaimer: I'm the maintainer). Note, though, that your desired output seems to contain a typo: I think you want to use all variables with the same digit (1, 2, 3 and so on) in each calculation:
df <- df %>%
  mutate(var1 = ifelse(x1 > 0 & (y1 < 0.5 | z1 < 0.5), 0, 1)) %>%
  mutate(var2 = ifelse(x2 > 0 & (y2 < 0.5 | z2 < 0.5), 0, 1)) %>%
  mutate(var3 = ifelse(x3 > 0 & (y3 < 0.5 | z3 < 0.5), 0, 1))
If that is the case, we can use dplyover::over to apply the same function over a vector. Here we construct that vector with cut_names("^[a-z]{1}"), which cuts the leading letter off our variable names and leaves the time suffixes: c(1, 2, 3) (extract_names("[0-9]{1}$") would give the same result by extracting the trailing digits). We can then construct the variable names using a special syntax: .("x{.x}"). Here .x evaluates to the current number in our vector, so in the first iteration it returns the object name x1 (not a string!), which we can use inside the function argument of over.
library(dplyr)
library(dplyover) # Only on GitHub: https://github.com/TimTeaFan/dplyover

df %>%
  mutate(over(cut_names("^[a-z]{1}"),
              ~ ifelse(.("x{.x}") > 0 & (.("y{.x}") < 0.5 | .("z{.x}") < 0.5), 0, 1),
              .names = "var{x}"))
#> # A tibble: 10 x 12
#> x1 x2 x3 y1 y2 y3 z1 z2 z3 var1
#> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
#> 1 0.690 0.550 0.911 0.203 -0.111 0.530 -2.09 0.189 0.147 0
#> 2 -0.238 1.32 -0.145 0.744 1.05 -0.448 2.05 -1.04 1.50 1
#> 3 0.888 0.898 -1.46 -1.87 -1.14 1.59 1.91 -0.155 1.46 0
#> 4 -2.78 -1.34 -0.486 -0.0674 0.246 0.141 0.154 1.08 -0.319 1
#> 5 -1.20 0.835 1.28 -1.32 -0.674 0.115 0.362 1.06 0.515 1
#> 6 0.622 -0.713 0.0525 1.79 -0.427 0.819 -1.53 -0.885 0.00237 0
#> 7 -2.54 0.0197 0.942 0.230 -1.37 -1.02 -1.55 -0.721 -1.06 1
#> 8 -0.434 1.97 -0.274 0.848 -0.482 -0.422 0.197 0.497 -0.600 1
#> 9 -0.316 -0.219 0.467 -1.97 -0.718 -0.442 -1.39 -0.877 1.52 1
#> 10 -1.03 0.226 2.04 0.432 -1.02 -0.535 0.954 -1.11 0.804 1
#> # ... with 2 more variables: var2 <dbl>, var3 <dbl>
Alternatively, we can use dplyr::across together with cur_column(), get() and gsub() to alter the name of the column on the fly. To name the new variables correctly, we use gsub() in the .names argument of across and wrap the expression in curly braces {} so that it gets evaluated.
library(dplyr)

df %>%
  mutate(across(starts_with("x"),
                ~ {
                  cur_c <- dplyr::cur_column()
                  ifelse(.x > 0 & (get(gsub("x", "y", cur_c)) < 0.5 | get(gsub("x", "z", cur_c)) < 0.5), 0, 1)
                },
                .names = '{gsub("x", "var", .col)}'))
#> # A tibble: 10 x 12
#> x1 x2 x3 y1 y2 y3 z1 z2 z3 var1
#> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
#> 1 -0.423 -1.42 -1.15 -1.54 1.92 -0.511 -0.739 0.501 0.451 1
#> 2 -0.358 0.164 0.971 -1.61 1.96 -0.675 -0.0188 -1.88 1.63 1
#> 3 -0.453 -0.758 -0.258 -0.449 -0.795 -0.362 -1.81 -0.780 -1.90 1
#> 4 0.855 0.335 -1.36 0.796 -0.674 -1.37 -1.42 -1.03 -0.560 0
#> 5 0.436 -0.0487 -0.639 0.352 -0.325 -0.893 -0.746 0.0548 -0.394 0
#> 6 -0.228 -0.240 -0.854 -0.197 0.884 0.118 -0.0713 1.09 -0.0289 1
#> 7 -0.949 -0.231 0.428 0.290 -0.803 2.15 -1.11 -0.202 -1.21 1
#> 8 1.88 -0.0980 -2.60 -1.86 -0.0258 -0.965 -1.52 -0.539 0.108 0
#> 9 0.221 1.58 -1.46 -0.806 0.749 0.506 1.09 0.523 1.86 0
#> 10 0.0238 -0.389 -0.474 0.512 -0.448 0.178 0.529 1.56 -1.12 1
#> # ... with 2 more variables: var2 <dbl>, var3 <dbl>
Created on 2022-06-08 by the reprex package (v2.0.1)
You could restructure your data along the principles of tidy data (see e.g. https://cran.r-project.org/web/packages/tidyr/vignettes/tidy-data.html).
Here, pivoting to a long format using the tidyverse:
library(tidyverse)

df <-
  df |>
  pivot_longer(everything()) |>
  separate(name, c("var", "time"), sep = "(?=[0-9])") |>
  pivot_wider(id_cols = "time",
              names_from = "var",
              names_prefix = "var_",
              values_from = "value",
              values_fn = list) |>
  unnest(-time) |>
  mutate(new_var = ifelse(var_x > 0 & (var_y < 0.5 | var_z < 0.5), 0, 1))
df
You would probably want to keep the data in a long format, but if you want, you can pivot_wider and get back to the format you started with. E.g.
df |>
  pivot_wider(values_from = c(starts_with("var_"), "new_var"),
              names_from = "time",
              values_fn = list) |>
  unnest(everything())
As you suggested, a solution using a loop is definitely possible.
# times as unique non-alphabetic parts of the column names
times <- unique(gsub('[[:alpha:]]', '', names(df)))

for (time in times) {
  # column names for the current time
  xyz <- paste0(c('x', 'y', 'z'), time)
  df[[paste0('var', time)]] <-
    ifelse(df[[xyz[1]]] > 0 & (df[[xyz[2]]] < .5 | df[[xyz[3]]] < .5), 0, 1)
}
Another way I can think of is transforming the data into a 3D array (observation × time × variable), so that you can do the computation for all variables at once.
times <- unique(gsub('[[:alpha:]]', '', names(df)))

df.arr <- sapply(c('x', 'y', 'z'),
                 function(var) as.matrix(df[, paste0(var, times)]),
                 simplify = 'array')

new.vars <- ifelse(df.arr[, , 1] > 0 & (df.arr[, , 2] < 0.5 | df.arr[, , 3] < 0.5), 0, 1)
colnames(new.vars) <- paste0('var', times)
cbind(df, new.vars)
Here, sapply creates a matrix of the measurements for each variable at the different times and stacks these matrices into a 3D array.
If you trust (or ensure) the correct ordering of columns in the data frame, instead of using sapply you can create the array just by modifying the object's dimensions. I didn't do any benchmarking, but I guess this could be the most computationally efficient solution (if that matters).
df.arr <- as.matrix(df)
# divide the number of columns by 3 (the number of variables) and add a third
# dimension, giving an observation x time x variable array; this relies on the
# columns being ordered x1..x3, y1..y3, z1..z3
dim(df.arr) <- c(dim(df.arr) / c(1, 3), 3)
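A quick sketch for completeness, assuming the columns really are ordered x1..x3, y1..y3, z1..z3 as in the example df: the reshaped array can then be used exactly like the sapply version above.
new.vars <- ifelse(df.arr[, , 1] > 0 & (df.arr[, , 2] < 0.5 | df.arr[, , 3] < 0.5), 0, 1)
colnames(new.vars) <- paste0('var', times)
cbind(df, new.vars)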

How to flip column variable names to row labels? [duplicate]

This question already has answers here:
Transposing a dataframe maintaining the first column as heading
(5 answers)
Transposition of a Tibble Using Pivot_Longer() and Pivot_Wider (Tidyverse) [duplicate]
(1 answer)
Closed 1 year ago.
I have the below tibble.
# A tibble: 2 x 6
  Trial_Type CT_tib_all CT_lum_all CT_tho_all CT_gps_all CT_vest_all
* <fct>           <dbl>      <dbl>      <dbl>      <dbl>       <dbl>
1 Pre             0.244      0.209      0.309      0.315       0.310
2 Post            0.254      0.211      0.302      0.313       0.316
I would like to flip the rows and columns so I end up with a 6 x 2 tibble, but I'm not sure of the easiest way to do this. How do I get the column variable names to become row labels and the row labels as column variables (Pre and Post)?
You can use pivot_longer and pivot_wider -
library(dplyr)
library(tidyr)
df %>%
  pivot_longer(cols = -Trial_Type) %>%
  pivot_wider(names_from = Trial_Type, values_from = value)
# name Pre Post
# <chr> <dbl> <dbl>
#1 CT_tib_all 0.244 0.254
#2 CT_lum_all 0.209 0.211
#3 CT_tho_all 0.309 0.302
#4 CT_gps_all 0.315 0.313
#5 CT_vest_all 0.31 0.316
In data.table -
library(data.table)
dcast(melt(setDT(df), id.vars = 'Trial_Type'),
      variable ~ Trial_Type, value.var = 'value')
t(), i.e. the transpose function in base R, may also be used, in combination with tibble::rownames_to_column and tibble::column_to_rownames.
library(tibble)
library(dplyr)
df <- read.table(text = 'Trial_Type CT_tib_all CT_lum_all CT_tho_all CT_gps_all CT_vest_all
Pre 0.244 0.209 0.309 0.315 0.310
Post 0.254 0.211 0.302 0.313 0.316', header = T)
df %>%
  tibble::column_to_rownames('Trial_Type') %>%
  t() %>%
  as.data.frame() %>%
  rownames_to_column('Trial_Type')
#> Trial_Type Pre Post
#> 1 CT_tib_all 0.244 0.254
#> 2 CT_lum_all 0.209 0.211
#> 3 CT_tho_all 0.309 0.302
#> 4 CT_gps_all 0.315 0.313
#> 5 CT_vest_all 0.310 0.316
Created on 2021-05-28 by the reprex package (v2.0.0)
We can use transpose from data.table
data.table::transpose(df, make.names = 'Trial_Type', keep.names = 'name')
# name Pre Post
#1 CT_tib_all 0.244 0.254
#2 CT_lum_all 0.209 0.211
#3 CT_tho_all 0.309 0.302
#4 CT_gps_all 0.315 0.313
#5 CT_vest_all 0.310 0.316
A base R option using reshape
reshape(
  cbind(name = df$Trial_Type, stack(df[-1])),
  direction = "wide",
  idvar = "ind",
  timevar = "name"
)
gives
ind values.Pre values.Post
1 CT_tib_all 0.244 0.254
3 CT_lum_all 0.209 0.211
5 CT_tho_all 0.309 0.302
7 CT_gps_all 0.315 0.313
9 CT_vest_all 0.310 0.316

Odd behavior of dplyr::between and filter

I have a data.frame I want to filter based on whether the range from low to high contains zero. Here's an example
head(toy)
# A tibble: 6 x 3
difference low high
<dbl> <dbl> <dbl>
1 0.0161 -0.143 0.119
2 0.330 0.0678 0.656
3 0.205 -0.103 0.596
4 0.521 0.230 0.977
5 0.328 0.177 0.391
6 -0.0808 -0.367 0.200
I could swear I have used dplyr::between() to do this kind of filtering operation a million times (even with columns of class datetime, where it warns about S3 objects). But I can't find what's wrong with this one.
# Does not find anything
toy %>%
  filter(!dplyr::between(0, low, high))

# Maybe it's because it needs `x` to be a vector, using mutate
# Does not find anything either
toy %>%
  mutate(zero = 0) %>%
  filter(!dplyr::between(zero, low, high))

# If we check the logic, all "keep" values come out FALSE
toy %>%
  mutate(zero = 0,
         keep = !dplyr::between(zero, low, high))

# data.table::between works
toy %>%
  filter(!data.table::between(0, low, high))

# regular logic works
toy %>%
  filter(low > 0 | high < 0)
The data below:
> dput(toy)
structure(list(difference = c(0.0161058505175378, 0.329976207353122,
0.20517072042705, 0.520837282826481, 0.328289597476641, -0.0807728725339096,
0.660320444135006, 0.310679750033675, -0.743294517440579, -0.00665462977775899,
0.0890903981794149, 0.0643321993757249, 0.157453334405998, 0.107320325893175,
-0.253664041938671, -0.104025850079389, -0.284835573264143, -0.330557762091307,
-0.0300387610595219, 0.081297046765014), low = c(-0.143002432870633,
0.0677907794288728, -0.103344717845837, 0.229753302951895, 0.176601773133456,
-0.366899428200429, 0.403702557199546, 0.0216878391530755, -1.01129163487875,
-0.222395625167488, -0.135193611295608, -0.116654715121314, -0.168581379777843,
-0.281919444558125, -0.605918194917671, -0.364539852350809, -0.500147478407119,
-0.505906196974183, -0.233810558283787, -0.193048952382206),
high = c(0.118860787421672, 0.655558974886329, 0.595905673925067,
0.97748896372657, 0.391043536410999, 0.199727242557477, 0.914173497837859,
0.633804982827898, -0.549942089679123, 0.19745782761473,
0.340823604797603, 0.317956343103116, 0.501279107093568,
0.442497779066522, 0.0721480109893818, 0.280593530192991,
-0.0434862536882377, -0.229723776097642, 0.22550243301984,
0.252686968655449)), row.names = c(NA, -20L), class = c("tbl_df",
"tbl", "data.frame"))
Just in case somebody finds it useful
> "between" %in% conflicts()
[1] FALSE
> packageVersion("dplyr")
[1] ‘1.0.2’
dplyr::between() is not vectorized over its left and right arguments; it expects the bounds to be scalars (this is the behaviour in dplyr 1.0.2, the version used here). One thing you could do is:
toy %>%
  rowwise() %>%
  filter(!dplyr::between(0, low, high))
difference low high
<dbl> <dbl> <dbl>
1 0.330 0.0678 0.656
2 0.521 0.230 0.977
3 0.328 0.177 0.391
4 0.660 0.404 0.914
5 0.311 0.0217 0.634
6 -0.743 -1.01 -0.550
7 -0.285 -0.500 -0.0435
8 -0.331 -0.506 -0.230
data.table::between() is vectorized: that's the reason why it works.
We could use map2
library(dplyr)
library(purrr)

toy %>%
  filter(!map2_lgl(low, high, ~ between(0, .x, .y)))
-output
# A tibble: 8 x 3
difference low high
<dbl> <dbl> <dbl>
1 0.330 0.0678 0.656
2 0.521 0.230 0.977
3 0.328 0.177 0.391
4 0.660 0.404 0.914
5 0.311 0.0217 0.634
6 -0.743 -1.01 -0.550
7 -0.285 -0.500 -0.0435
8 -0.331 -0.506 -0.230
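Both approaches above call between() once per row. A fully vectorized alternative (a sketch, equivalent to the "regular logic" already noted in the question) avoids that entirely:
library(dplyr)
# keep rows whose [low, high] interval does not contain zero
toy %>%
  filter(!(low <= 0 & high >= 0))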

A loop in R to go through variable names and create new lagged variables

Hi, I am trying to create new lagged variables based on existing variables in a loop. Here is my failed attempt:
varlist = c("retail", "grocery", "parks", "transit", "work", "residential")

for (x in varlist) {
  cases %<>%
    mutate(x7 = lag(x, n = 7L))
}
where the code would generate retail7, grocery7, etc. The issue seems to be that when I write x7 I mean retail7 (or another variable with 7 appended), but R doesn't know what to make of it because x holds a string.
Thanks
You don't need a loop to achieve this. Using some random example data, this can be achieved via dplyr::mutate_at like so:
library(dplyr)
cases <- data.frame(state = rep(c("A", "B"), 10),
                    retail = runif(20), grocery = runif(20), parks = runif(20),
                    transit = runif(20), work = runif(20), residential = runif(20))

cases <- cases %>%
  arrange(state) %>%
  group_by(state) %>%
  mutate_at(vars("retail", "grocery", "parks", "transit", "work", "residential"),
            list(`7` = ~ lag(.x, n = 7L)))
head(cases)
#> # A tibble: 6 x 13
#> # Groups: state [1]
#> state retail grocery parks transit work residential retail_7 grocery_7
#> <chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
#> 1 A 0.113 0.989 0.805 0.937 0.966 0.800 NA NA
#> 2 A 0.981 0.755 0.274 0.491 0.191 0.897 NA NA
#> 3 A 0.763 0.0676 0.780 0.249 0.363 0.592 NA NA
#> 4 A 0.758 0.905 0.236 0.556 0.202 0.842 NA NA
#> 5 A 0.0250 0.681 0.868 0.204 0.194 0.692 NA NA
#> 6 A 0.592 0.404 0.509 0.478 0.692 0.128 NA NA
#> # ... with 4 more variables: parks_7 <dbl>, transit_7 <dbl>, work_7 <dbl>,
#> # residential_7 <dbl>
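Side note: mutate_at() has since been superseded by across(). A minimal sketch of the same idea with across() (assuming dplyr >= 1.0.0); the .names glue spec also lets you produce names like retail7 without the underscore:
library(dplyr)

varlist <- c("retail", "grocery", "parks", "transit", "work", "residential")

cases <- cases %>%
  arrange(state) %>%
  group_by(state) %>%
  # lag each listed column by 7 within state, naming the results retail7, grocery7, ...
  mutate(across(all_of(varlist), ~ lag(.x, n = 7L), .names = "{.col}7")) %>%
  ungroup()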

Produce a matrix from every row (tidyverse)

I am trying to implement analyses across a posterior of matrices. What I start with is a tibble of k^2 columns, where k is the dimension of the matrix; the ith row forms the matrix of the ith iteration.
So, for example for a 3x3 matrix, this is:
library(tibble)

set.seed(12)
n <- 1000
z1z1 <- rnorm(n, 5, 1)
z2z2 <- rnorm(n, 5, 1)
z3z3 <- rnorm(n, 5, 1)
z1z2 <- rnorm(n, 0, 1)
z1z3 <- rnorm(n, 0, 1)
z2z3 <- rnorm(n, 0, 1)

post3 <- as_tibble(matrix(c(z1z1, z1z2, z1z3,
                            z1z2, z2z2, z2z3,
                            z1z3, z2z3, z3z3),
                          ncol = 9))
post3
Giving:
# A tibble: 1,000 x 9
V1 V2 V3 V4 V5 V6 V7 V8 V9
<dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
1 3.52 -0.618 2.96 -0.618 2.48 -0.634 2.96 -0.634 5.98
2 6.58 -0.827 0.0909 -0.827 5.52 -1.84 0.0909 -1.84 6.20
3 4.04 1.48 -1.66 1.48 6.58 0.166 -1.66 0.166 5.58
4 4.08 -1.01 0.809 -1.01 5.49 0.607 0.809 0.607 6.55
5 3.00 0.582 -0.485 0.582 6.20 0.0765 -0.485 0.0765 6.38
6 4.73 0.718 1.97 0.718 4.00 -0.147 1.97 -0.147 4.35
7 4.68 -0.372 0.572 -0.372 4.65 -1.68 0.572 -1.68 3.83
8 4.37 -0.809 0.883 -0.809 3.96 0.985 0.883 0.985 4.97
9 4.89 0.405 0.686 0.405 6.02 0.252 0.686 0.252 6.29
10 5.43 0.124 0.199 0.124 5.75 0.354 0.199 0.354 4.20
# ... with 990 more rows
Where this is the matrix in the first iteration:
k <- sqrt(length(post3))
matrix(post3[1,], nrow = k)
[,1] [,2] [,3]
[1,] 3.519432 -0.618137 2.962622
[2,] -0.618137 2.479522 -0.6338298
[3,] 2.962622 -0.6338298 5.977552
I am then working along this posterior to calculate the dominance of the first eigenvector:
post3 %>%
  rowwise %>%
  mutate(pre_eig = list(eigen(matrix(c(V1, V2, V3, V4, V5, V6, V7, V8, V9), nrow = k))),
         dom = pre_eig[[1]][1] / sum(pre_eig[[1]][1:k])) %>%
  select('dom')
Giving:
# A tibble: 1,000 x 1
dom
<dbl>
1 0.676
2 0.437
3 0.462
4 0.427
5 0.414
6 0.504
7 0.474
8 0.429
9 0.394
10 0.383
# ... with 990 more rows
What I would like to do is make this script versatile so that it can take posteriors for any value of k. The issue I am having is in how to define the matrix without having to hand write all the column names - when applying this to 2000x2000 matrices I don't want to write out V1, V2, V3... V4000000!
I tried a few things (including ...eigen(matrix(c(paste0('V', 1:(k^2))), nrow = k))..., which I think is not working because it wants V1, V2, ... rather than "V1", "V2", ...), and I am all out of ideas. How do I get it to automatically take the column names from the posterior tibble?
I would then be able to use the exact same piece of script for example on post3 <- as_tibble(matrix(c(z1z1, z1z2, z1z2, z2z2), ncol = 4))...
You can avoid naming all the columns explicitly if you gather each row's values into key-value pairs:
library(tidyr)
library(dplyr)

post3 %>%
  # add row ID (so that results can be sorted back into original order)
  mutate(row.id = seq(1, n())) %>%
  # convert each row to long format, with values sorted from 1st to k^2th column
  gather(position, value, -row.id) %>%
  mutate(position = as.numeric(gsub("^V", "", position))) %>%
  arrange(row.id, position) %>%
  select(-position) %>%
  # group by row ID & calculate
  group_by(row.id) %>%
  summarise(pre_eig = list(eigen(matrix(value, nrow = k))[["values"]]),
            dom = pre_eig[[1]][1] / sum(pre_eig[[1]][1:k])) %>%
  ungroup() %>%
  # sort results in original order
  arrange(row.id) %>%
  select(dom)
The results should be the same as before:
# A tibble: 1,000 x 1
dom
<dbl>
1 0.676
2 0.437
3 0.462
4 0.427
5 0.414
6 0.504
7 0.474
8 0.429
9 0.394
10 0.383
# ... with 990 more rows
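Another route that avoids the column names entirely (a sketch using base R's apply(), under the same setup as the example data): treat the tibble as a plain matrix and rebuild each row's k x k matrix on the fly. Because the example matrices are symmetric, the eigenvalues are real and the dominance can be computed directly:
k <- sqrt(ncol(post3))
dom <- apply(as.matrix(post3), 1, function(r) {
  ev <- eigen(matrix(r, nrow = k))$values
  ev[1] / sum(ev[1:k])
})
head(dom)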
