Fine-tuning table output of function 'univariateTable' in R

I've found the function univariateTable extremely helpful for handling larger data and producing a nice, clean table output. But there are a couple of things that I still need to do manually after the table is exported to CSV, and I would rather do them in R to automate the process and avoid human errors.
Here is example code with the table output that I then export as CSV:
value<-cbind(c(rnorm(103.251,503.24,90),rnorm(103.251,823.24,120)))
genotype<-cbind(c(rep("A",100),rep("B",100)))
gender<-rep(c("M","F","F","F"),50)
df<-cbind(value,genotype,gender)
df<-as.data.frame(df)
colnames(df)<-c("value","genotype","gender")
df$value<-as.numeric(as.character(df$value))
library(Publish)
summary(univariateTable(gender ~ Q(value) + genotype, data=df))
The two problems I have are these:
Is there a way to round the numbers in the table, similar to round(99.73)?
Is there a way to substitute , with - in the interquartile range output, similar to gsub(", ","-","[503.7, 793.3]"), and, instead of median [iqr], have it print median [IQR]?
Again, I do these manually after exporting the tables, but for larger tables it is much more convenient to automate the process.

univariateTable has a digits argument that you can use for rounding. To modify the formatting, you can inspect the list returned by univariateTable to figure out where to find the values that need to be changed.
Your example data threw an error, so I've modified it to make it run and also cleaned up the code a bit.
# devtools::install_github("tagteam/Publish")
library(Publish)
value <- c(rnorm(90, 103.251,503.24),rnorm(110, 103.251,823.24))
genotype <- rep(c("A","B"), each=100)
gender <- rep(c("M","F","F","F"),50)
df <- data.frame(value,genotype,gender)
The digits argument to univariateTable can be used for rounding (see ?univariateTable for the help information on the function).
tab = univariateTable(gender ~ Q(value) + genotype, data=df, digits=0)
To change the commas to hyphens, we need to see where those values are stored in the list returned by univariateTable. Run str(tab), which shows you the structure of the list. Note that the formatted values in the table look like they're stored in tab$summary.groups$value and tab$summary.totals$value, so we'll edit those:
tab$summary.groups$value = gsub(", ", " - ", tab$summary.groups$value)
tab$summary.totals$value = gsub(", ", " - ", tab$summary.totals$value)
tab
Variable Level gender = F (n=150) gender = M (n=50) Total (n=200) p-value
1 value median [iqr] -6 [-481 - 424] 203 [-167 - 544] 80 [-433 - 458] 0.118
2 genotype A 75 (50) 25 (50) 100 (50)
3 B 75 (50) 25 (50) 100 (50) 1.000
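The question also asked for median [IQR] instead of median [iqr]. Here is a minimal sketch of one way to do that, assuming summary() returns a plain data frame whose Level column holds the "median [iqr]" label (as the printed output above suggests); the CSV file name is just a placeholder:
# Post-process the summarized table right before exporting it.
out <- summary(tab)
out$Level <- gsub("iqr", "IQR", out$Level, fixed = TRUE)
write.csv(out, "univariate_table.csv", row.names = FALSE)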


Argument is not numeric

I would like to visualize the number of people infected with COVID-19, but I cannot obtain the mortality rate per 100,000 population for each prefecture because the deaths column is not numeric.
What I want to achieve
I want to compute covid19j_20200613$deaths / covid19j_20200613$POP2019 * 100 by setting the data type of covid19j_20200613$deaths to num.
Error message.
Error in covid19j_20200613$deaths/covid19j_20200613$POP2019:
Argument of binary operator is not numeric
Source code in question.
library(spdep)
library(sf)
library(spatstat)
library(tidyverse)
library(ggplot2)
needs::prioritize(magrittr)
covid19j <- read.csv("https://raw.githubusercontent.com/kaz-ogiwara/covid19/master/data/prefectures.csv",
                     header = TRUE)
# Below is an example for June 13, 2020.
# Month and date may be changed.
covid19j_20200613 <- dplyr::filter(covid19j,
                                   year == 2020,
                                   month == 6,
                                   date == 13)
covid19j_20200613$CODE <- 1:47
covid19j_20200613[is.na(covid19j_20200613)] <- 0
pop19 <- read.csv("/Users/carlobroschi_imac/Documents/lectures/EGDS/07/covid19_data/covid19_data/pop2019.csv", header=TRUE)
covid19j_20200613 <- dplyr::inner_join(covid19j_20200613, pop19,
                                       by = c("CODE" = "CODE"))
# Load Japan prefecture administrative boundary data
jpn_pref <- sf::st_read("/Users/carlobroschi_imac/Documents/lectures/EGDS/07/covid19_data/covid19_data/jpn_pref.shp")
# Data and concatenation
jpn_pref_cov19 <- dplyr::inner_join(jpn_pref, covid19j_20200613, by=c("PREF_CODE"="CODE"))
ggplot2::ggplot(data = jpn_pref_cov19) +
  geom_sf(aes(fill = testedPositive)) +
  scale_fill_distiller(palette = "RdYlGn") +
  theme_bw() +
  labs(title = "Tested Positive of Covid19 (2020/06/13)")
# Mortality rate per 100,000 population
# Population number in units of 1000
as.numeric(covid19j_20200613$deaths)
covid19j_20200613$deaths_rate <- covid19j_20200613$deaths / covid19j_20200613$POP2019 * 100
Data in question.
prefectures.csv
https://docs.google.com/spreadsheets/d/11C2vVo-jdRJoFEP4vAGxgy_AEq7pUrlre-i-zQVYDd4/edit?usp=sharing
pop2019.csv
https://docs.google.com/spreadsheets/d/1CbEX7BADutUPUQijM0wuKUZFq2UUt-jlWVQ1ipzs348/edit?usp=sharing
What I tried
I tried putting as.numeric(covid19j_20200613$deaths) before the calculation to set the deaths column to type num, but I got the same error message during the calculation.
Additional information (FW/tool versions, etc.)
iMac M1 2021, R 4.2.0
as.numeric() does not permanently change the data type - it only converts temporarily, returning a copy.
So when you run as.numeric(covid19j_20200613$deaths), this shows you the column deaths as numeric, but the column itself stays character.
So if you want to coerce the data type, you need to also reassign:
covid19j_20200613$deaths <- as.numeric(covid19j_20200613$deaths)
covid19j_20200613$POP2019 <- as.numeric(covid19j_20200613$POP2019)
# Now you can do calculations
covid19j_20200613$deaths_rate <- covid19j_20200613$deaths / covid19j_20200613$POP2019 * 100
It's easier to read if you use mutate from dplyr:
covid19j_20200613 <- covid19j_20200613 |>
  mutate(
    deaths = as.numeric(deaths),
    POP2019 = as.numeric(POP2019),
    deaths_rate = deaths / POP2019 * 100
  )
Result
deaths POP2019 deaths_rate
1 91 5250 1.73333333
2 1 1246 0.08025682
3 0 1227 0.00000000
4 1 2306 0.04336513
5 0 966 0.00000000
PS: your question is really difficult to follow! There is a lot of stuff that we don't actually need to answer it, so that makes it harder for us to identify where the issue is. For example, all the data import, the join, the ggplot...
When writing a question, please only include the minimal elements that lead to a problem. In your case, we only needed a sample dataset with the deaths and POP2019 columns, and the two lines of code that you tried to fix at the end.
If you look at str(covid19j) you'll see that the deaths column is a character column containing a lot of blanks. You need to figure out the structure of that column to read it properly.
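As a quick illustration of that coercion behaviour (with made-up values, not the real CSV):
# Blanks in a character column become NA when coerced;
# any NAs introduced this way still need handling after the conversion.
deaths_chr <- c("91", "", "0", "1")
deaths_num <- as.numeric(deaths_chr)  # "" becomes NA, with a coercion warning
deaths_num[is.na(deaths_num)] <- 0
deaths_num  # 91 0 0 1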

Is there a way to import the results of HSD.test from agricolae directly into geom_text() in ggplot2?

I'm creating figures that show the efficacy of several warning signals relative to the event they warn about. The figure is based off a dataframe which is produced by a function that runs a model multiple times and collates the results like this:
t type label early
4 847 alarm alarm.1 41
2 849 alarm alarm.2 39
6 853 alarm alarm.3 35
5 923 alarm alarm.4 -35
7 1003 alarm alarm.5 -115
But the real data has a dozen alarms and n values for each alarm (typically 20-100), each slightly different depending on random and stochastic variables built into the model.
I'm putting the results in an lm
a.lm <- lm(log(early + 500) ~ label, data = alarm.data)
and, after checking the assumptions are met, running a one-way ANOVA:
anova(a.lm)
then a Tukey post hoc test:
HSD.test(a.lm, trt = "label", console = TRUE)
which produces:
log(early + 500) groups
alarm.1 6.031453 a
alarm.2 6.015221 a
alarm.3 6.008366 b
alarm.4 5.995150 b
alarm.5 5.921384 c
I have a function which generates a ggplot2 figure based on the collated data, to which I then manually add + geom_text(label = c("a", "a", "b", "b", "c")) or whatever the appropriate letters are. Is there a way to generalise that last step, calling the letters directly from the result of the HSD.test? If I put the results of the HSD.test into an object
a.test <- HSD.test(a.lm, trt = "label", console = TRUE)
I can access the results using a.test$groups, and the letter groupings specifically using a.test$groups$groups, but I don't know enough about manipulating lists to make that useful to me. While the order of the labels in the ggplot is predictable, the order of the groups in the HSD.test result isn't, and can vary a lot between iterations of the model-running function.
If anyone has any insights I would be grateful.
Okay I actually bumped into a solution just after I posted the question.
If you take the output of the HSD.test and make it into an object
a.test <- HSD.test(a.lm, trt = "label")
Then convert the groups list into a dataframe
a.df <- as.data.frame(a.test$groups)
The row names are the alarm names rather than numbers:
a.df
log(early + 500) groups
alarm.1 6.849082 a
alarm.2 6.842465 a
alarm.3 6.837438 a
alarm.4 6.836437 a
alarm.5 6.812714 a
so they can be called specifically into geom_text inside the function:
a.plot +
  geom_text(label = c(a.df["alarm.1", 2],
                      a.df["alarm.2", 2],
                      a.df["alarm.3", 2],
                      a.df["alarm.4", 2],
                      a.df["alarm.5", 2]))
Even though this doesn't use the same functions to get the compact letter display, I think it may be a more efficient way of doing it.
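Since the row names carry the alarm names, the lookup can also be generalised instead of listing each alarm by hand. A minimal sketch, assuming the plot draws the five labels in the order given by lab_order:
# Index by row name so the letter order always matches the plot,
# regardless of how HSD.test happens to order its groups.
lab_order <- paste0("alarm.", 1:5)
a.plot +
  geom_text(label = a.df[lab_order, "groups"])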

What is the best way to manage/store results from either posthoc.kruskal.dunn.test() or dunn.test(), where my input data is in dataframe format?

I am a newbie in R programming and seek help in analyzing metabolomics data: 118 metabolites with 4 conditions (3 replicates per condition). I would like to know, for each metabolite, which conditions are significantly different from which. Here is part of my data:
> head(mydata)
Conditions HMDB03331 HMDB00699 HMDB00606 HMDB00707 HMDB00725 HMDB00017 HMDB01173
1 DMSO_BASAL 0.001289121 0.001578235 0.001612297 0.0007772231 3.475837e-06 0.0001221674 0.02691318
2 DMSO_BASAL 0.001158363 0.001413287 0.001541713 0.0007278363 3.345166e-04 0.0001037669 0.03471329
3 DMSO_BASAL 0.001043537 0.002380287 0.001240891 0.0008595932 4.007387e-04 0.0002033625 0.07426482
4 DMSO_G30 0.001195253 0.002338346 0.002133992 0.0007924157 4.189224e-06 0.0002131131 0.05000778
5 DMSO_G30 0.001511538 0.002264779 0.002535853 0.0011580857 3.639661e-06 0.0001700157 0.02657079
6 DMSO_G30 0.001554804 0.001262859 0.002047611 0.0008419137 6.350990e-04 0.0000851638 0.04752020
This is what I have so far.
I learned the first line from this post
kwtest_pvl = apply(mydata[,-1], 2, function(x) kruskal.test(x,as.factor(mydata$Conditions))$p.value)
and this is where I loop through the metabolites that passed the KW test:
tCol = colnames(mydata[,-1])[kwtest_pvl <= 0.05]
for (k in tCol){
  output = posthoc.kruskal.dunn.test(mydata[,k], as.factor(mydata$Conditions),
                                     p.adjust.method = "BH")
}
I am not sure how to manage my output such that it is easy to handle for all the metabolites that passed the KW test. Perhaps saving the output from each iteration and appending it to an Excel file? I also tried the dunn.test package, since it has an option of table or list output, but it still leaves me at the same point. Kinda stuck here.
Moreover, should I also perform some kind of p-value adjustment (FWER, FDR, BH) right after the KW test, before performing the post hoc test?
Any suggestion(s) would be greatly appreciated.
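One possible way to keep the per-metabolite results together, sketched under the assumption that posthoc.kruskal.dunn.test() stores its pairwise p-values in a $p.value matrix (as documented for the PMCMR package): collect each result in a named list, flatten the matrices into one data frame, and export once. The KW p-values can also be adjusted first with p.adjust(), which speaks to the FDR question above. The output file name is just a placeholder:
# Adjust the Kruskal-Wallis p-values (BH/FDR) before selecting metabolites.
kwtest_adj <- p.adjust(kwtest_pvl, method = "BH")
tCol <- colnames(mydata[,-1])[kwtest_adj <= 0.05]
# Collect each post-hoc result in a named list keyed by metabolite.
results <- lapply(tCol, function(k)
  posthoc.kruskal.dunn.test(mydata[[k]], as.factor(mydata$Conditions),
                            p.adjust.method = "BH"))
names(results) <- tCol
# Flatten the pairwise p-value matrices: one row per comparison per metabolite.
pvals <- do.call(rbind, lapply(tCol, function(k) {
  m <- results[[k]]$p.value
  data.frame(metabolite = k,
             comparison = paste(rownames(m)[row(m)], colnames(m)[col(m)],
                                sep = " vs "),
             p.value = as.vector(m))
}))
pvals <- pvals[!is.na(pvals$p.value), ]  # drop the empty upper triangle
write.csv(pvals, "dunn_posthoc_results.csv", row.names = FALSE)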

Juxtaposing Replicate Data

I have provided a sample dataset that I have arranged in column format (called "full.table").
These data were extracted from a 96-well PCR plate, and while collecting my data I always ran a duplicate experiment, meaning each variable (aka test) has 1 replicate. I would like to take all replicates and juxtapose them (have them be side by side), which would allow me to easily visualize replicates next to each other and, finally, calculate an average value of the variable "Cq" between the two.
The complications stem from having done multiple tests over several days (complication one) and from NOT having my samples always run in the same fashion on the PCR plate (complication two). Typically, as you see in my dataset below, well A1 has a duplicate in well B1; however, this is not always the case. Occasionally, well A7 matches well A8 (and NOT B7).
Replicates were always run on the same day, so an important variable here is "date", which I added via R before uploading to Stack Exchange. I am confused about how to rearrange the data to get my desired result (not even sure where to start).
I have provided an example of what I would like in the end, called “sample.finished.table”
Logically, with 768 observations in this example, juxtaposing should halve that, resulting in 384 total lines of data (385 with the header).
I appreciate any feedback. Thank you
full.table<- read.table("https://pastebin.com/raw/kTQhuttv", header=T, sep="")
sample.finished.table <- read.table("https://pastebin.com/raw/Phg7C9xD", header=T, sep="")
You can use dplyr here to group by sample and extract the requested values:
library(dplyr)
full.table %>%
  group_by(sample, date) %>%
  summarise(
    Well1 = first(Well), Cq1 = first(Cq),
    Well2 = last(Well), sample1 = last(sample), Cq2 = last(Cq),
    Cq_mean = mean(Cq[Cq > 0])
  )
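This relies on each (sample, date) pair having exactly two rows. As an optional sanity check, a small sketch with the same dplyr verbs to confirm that before averaging:
# Any group without exactly two wells would need a closer look.
full.table %>% count(sample, date) %>% filter(n != 2)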

How to use the functcomp function in R

I am having trouble using the functcomp function (from the FD package) in R.
I have 2 datasets: one with species frequencies, and the other listing the functional traits of my species. The frequency dataset has 264 species listed in the first row and 27 sites listed in the first column; all values in the dataset are between 0 and 1. The functional trait dataset has the same 264 species (copied & pasted from the frequency dataset to make sure they are identical) listed in the first column, and 5 different functional traits listed in the first row (height, life history, life form, origin, palatability).
I am using the following code:
traits.df <- read.table("species_functional_traits_6_ August.txt", header = TRUE)
frequency.df <- read.table("Spring 2014 - combined table - 6 August.txt", header = TRUE)
x <- as.matrix(traits.df)
a <- as.matrix(frequency.df)
functcomp(x, a, CWM.type = c("dom", "all"), bin.num = height)
But I keep getting the following error message:
Error in functcomp(x, a, CWM.type = c("dom", "all"), bin.num = height) :
Different number of species in 'x' and 'a'.
I have tried fiddling with a couple of things in the code and datasets, but cannot work out what I am doing wrong here. Any help would be greatly appreciated!
Here are links to the frequency & trait data (a subset of it, but I still get the same error message with this data) as tab-delimited text files:
frequency: https://www.dropbox.com/s/girs3nrq1ciyg1a/frequency%20-%20small.txt?dl=0
traits: https://www.dropbox.com/s/l888sallx7mu3f6/traits%20-%20small.txt?dl=0
Try stating row.names=1 when reading in your table; this solved my problem. - Anna
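For concreteness, here is a sketch of that suggestion applied to the code above. The idea is that without row.names = 1 the species names are read as an ordinary data column, so 'x' and 'a' end up with different species dimensions. I have dropped the bin.num argument, which per the FD documentation flags binary traits to be treated as numeric and would in any case need a trait name rather than a bare identifier:
library(FD)
# Read species names as row names so they are not counted as a data column.
traits.df <- read.table("species_functional_traits_6_ August.txt",
                        header = TRUE, row.names = 1)
frequency.df <- read.table("Spring 2014 - combined table - 6 August.txt",
                           header = TRUE, row.names = 1)
# functcomp() accepts a data frame of traits, which suits the mix of
# numeric and categorical columns better than a character matrix.
a <- as.matrix(frequency.df)  # sites as rows, species as columns
functcomp(traits.df, a, CWM.type = "dom")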
