How to run a Student's t-test in a loop in R?

I'm new to R and what I want to do is something very simple, but I need help.
I have a database that looks like the example below, where Spot.number = the "name" of a protein, grupo = group (I and II) and APF = a fluorescence reading.
I want to run a Student's t-test for each protein, comparing groups I and II, but in a loop.
In the example below there is only 1 protein (147), but in my real database I have 444 proteins.

Starting with some fake data:
set.seed(0)
Spot.number <- rep(147:149, each=10)
grupo <- rep(rep(1:2, each=5), 3)
APF <- rnorm(30)
gel <- data.frame(Spot.number, grupo, APF)
> head(gel)
  Spot.number grupo        APF
1         147     1  2.1780699
2         147     1 -0.2609347
3         147     1 -1.6125236
4         147     1  1.7863384
5         147     1  2.0325473
6         147     2  0.6261739
You can use lapply to loop through the subsets of gel, split by the Spot.number:
tests <- lapply(split(gel, gel$Spot.number), function(spot) t.test(APF ~ grupo, spot))
or just
tests <- by(gel, gel$Spot.number, function(spot) t.test(APF ~ grupo, spot))
You can then move on to e.g. taking only the p values:
sapply(tests, "[[", "p.value")
#       147       148       149
# 0.2941609 0.9723856 0.5726007
or the confidence intervals:
sapply(tests, "[[", "conf.int")
#             147       148        149
# [1,] -0.985218 -1.033815 -0.8748502
# [2,]  2.712395  1.066340  1.4240488
And the resulting vector or matrix will already have the Spot.number as names which can be very helpful.
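Since you have 444 proteins you are running a lot of tests at once, so you may also want to adjust the p values for multiple testing. A minimal sketch building on the tests list above, using base R's p.adjust (the Benjamini-Hochberg method is just one common choice):
# collect the raw p values, adjust them, and keep everything in one data frame
pvals <- sapply(tests, "[[", "p.value")
res <- data.frame(
  Spot.number = names(pvals),
  p.value     = pvals,
  p.adjusted  = p.adjust(pvals, method = "BH")  # Benjamini-Hochberg FDR correction
)
head(res)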

You can perform a t.test within each group using dplyr and my broom package. If your data is stored in a data frame called dat, you would do:
library(dplyr)
library(broom)
results <- dat %>%
  group_by(Spot.number) %>%
  do(tidy(t.test(APF ~ grupo, .)))
This works by performing t.test(APF ~ grupo, .) on each group defined by Spot.number. The tidy function from broom then turns it into a one-row data frame so that it can be recombined. The results data frame will then contain one row per protein (Spot.number) with columns including estimate, statistic, and p.value.
See this vignette for more on the combination of dplyr and broom.
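Note that do() has since been superseded in dplyr. A rough equivalent with the nest/map idiom, as a sketch assuming current tidyr, purrr and broom versions and the gel data frame built at the top of this thread:
library(dplyr)
library(tidyr)
library(purrr)
library(broom)
results <- gel %>%
  nest(data = -Spot.number) %>%                                      # one row per protein
  mutate(tidied = map(data, ~ tidy(t.test(APF ~ grupo, data = .x)))) %>%
  unnest(tidied) %>%
  select(-data)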


Create loop to generate multiple plots from multiple dataframes for several designated factor names in r

Best wishes for the new year.
I have a question... I would like to create a loop to generate multiple plots from multiple dataframes for several designated factor names in R.
I have different datasets with the same fish species. There are more than 50 species (plaice, flounder, etc.). For each of these species I have different datasets with different data in each set, but the species used are the same. For each of these species I want to create the same ggplots and other files. Right now I have set up the script so that when I fill in a species name in:
###########################
USEDSPECIES <- "Plaice"
###########################
I can run the scripts all at once to generate the plots etc. that I want. The only downside is that I have to write down each species and run the whole thing about 50 times... Is there a way for me to make a kind of loop?
In the past I created several loop constructions like:
MaxValue <- rep(FALSE, length(data$Column))
tags <- unique(data$FishID)
for (i in 1:length(tags)) {
  t.ss <- which(data$FishID == tags[i])
  MaxValue[t.ss][which(data$Column[t.ss] == max(data$Column[t.ss], na.rm = TRUE))] <- TRUE
}
But this way only one dataframe is used, instead of multiple, and without indicating a factor name (such as species).
Thank you in advance!
An example of the advantages of combining the scattered dataframes and working on from there:
library(dplyr)
## example data frames for Plaice and Flounder:
df_plaice <- data.frame(FishID = rep('Plaice', 8), Weight = sample(500:800, 8))
df_flounder <- data.frame(FishID = rep('Flounder', 8), Weight = sample(500:800, 8))
## row-bind single data frames:
all_the_fish <-
  list(df_plaice, df_flounder) |> Reduce(f = rbind)
> all_the_fish
     FishID Weight
1    Plaice    553
2    Plaice    776
## ...
15 Flounder    580
16 Flounder    794
## species-wise aggregates:
all_the_fish |>
  group_by(FishID) |>
  summarize(MaxWeight = max(Weight, na.rm = TRUE),
            AverageWeight = mean(Weight, na.rm = TRUE)
  )
# A tibble: 2 x 3
  FishID   MaxWeight AverageWeight
  <chr>        <int>         <dbl>
1 Flounder       794          674.
2 Plaice         776          620.
plotting (this part also needs ggplot2):
library(ggplot2)
all_the_fish |>
  ggplot() +
  geom_boxplot(aes(Weight)) +
  coord_flip() +
  facet_wrap(~ FishID)
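If you still want one output file per species, you can loop over the unique species names once the data are combined. A minimal sketch, assuming the combined all_the_fish data frame from above and purely illustrative file names:
library(ggplot2)
for (sp in unique(all_the_fish$FishID)) {
  p <- all_the_fish |>
    subset(FishID == sp) |>
    ggplot(aes(Weight)) +
    geom_boxplot() +
    coord_flip() +
    ggtitle(sp)
  # one PNG per species, e.g. "weight_boxplot_Plaice.png"
  ggsave(filename = paste0("weight_boxplot_", sp, ".png"), plot = p, width = 6, height = 4)
}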

Replace words in data frame using list of words in another data frame in R

I am trying to convert a large R data frame (3 million rows of patients by 400 columns of diagnoses) from a short description, e.g. “ESSENTIAL HYPERTENSION, BENIGN”, to the corresponding ICD code, e.g. “I10”. The short descriptions are in a data frame “DATA”, which also includes patient ID and age columns that I do not want changed. The data frame “Dictionary” is my lookup table containing the short descriptions and the associated ICD codes. There are over 20k combinations of descriptions and ICD codes in the actual dictionary. To make the problem reproducible, I have included code that recreates small samples of my larger DATA and Dictionary data frames.
DATA Sample
PAT_ID <- c(1,2,3)
DX_1 <- c('OTHER&UNSPECIFIED HYPERLIPIDEMIA','NA','ESSENTIAL HYPERTENSION, BENIGN' )
DX_AGE_1 <- c(66,68,75)
DX_2 <- c('ESSENTIAL HYPERTENSION, BENIGN','SPECIAL SCR MALIG NEOPLASM PROS','NA' )
DX_AGE_2 <- c(67,69,77)
DATA <- data.frame(PAT_ID, DX_1, DX_AGE_1,DX_2,DX_AGE_2)
Dictionary Sample
From <- c('OTHER&UNSPECIFIED HYPERLIPIDEMIA','ESSENTIAL HYPERTENSION, BENIGN','SPECIAL SCR MALIG NEOPLASM PROS')
To <- c('E784', 'I10', 'Z125')
Dictionary <- data.frame(From, To)
My Desired output would look like the output of this code
PAT_ID <- c(1,2,3)
DX_1 <- c('E784','NA','I10' )
DX_AGE_1 <- c(66,68,75)
DX_2 <- c('I10','Z125','NA' )
DX_AGE_2 <- c(67,69,77)
OUTPUT <- data.frame(PAT_ID, DX_1, DX_AGE_1,DX_2,DX_AGE_2)
Your help is greatly appreciated!
Using base R, you could do:
vec <- do.call(setNames, rev(unname(Dictionary)))   # named vector: names = From, values = To
rapply(DATA, function(x) vec[x], 'character', how = 'replace')
PAT_ID DX_1 DX_AGE_1 DX_2 DX_AGE_2
1 1 E784 66 I10 67
2 2 <NA> 68 Z125 69
3 3 I10 75 <NA> 77
Also you could use str_replace_all from stringr (this reuses the named vector vec from above):
library(stringr)
setNames(type.convert(data.frame(
  array(str_replace_all(as.matrix(DATA), vec), dim(DATA)))), names(DATA))
PAT_ID DX_1 DX_AGE_1 DX_2 DX_AGE_2
1 1 E784 66 I10 67
2 2 <NA> 68 Z125 69
3 3 I10 75 <NA> 77
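A tidyverse alternative, as a sketch assuming the same named lookup vector vec from the base R answer and that the diagnosis columns are exactly those starting with "DX_" but not "DX_AGE" (adjust the selection to your real column names):
library(dplyr)
OUTPUT <- DATA %>%
  mutate(across(starts_with("DX_") & !starts_with("DX_AGE"),
                ~ unname(vec[.x])))   # look up each description; unmatched values become NA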

Rerun the same code but changing column names at each loop

I have a script in R to calculate body condition residuals. I would like to apply this code to each set of columns, where each set corresponds to a specific category of individual.
For example, I would run this code to calculate body condition residuals of all individuals that are in category 1:
1) Select rows of interest
Data1 = RawData %>% select(ID, temperature, Bodysize1, mass1, year) %>% filter(temperature %in% c(20:29) & Bodysize1 %in% c(20:100) & mass1 %in% c(15:40))
2) Create a new model with the created data
Model1 = lmer(log(mass1) ~ log(Bodysize1) + temperature + (1|year), data = Data1)
3) Extract residuals and add ID to the residuals
ResModel1 = resid(Model1)
ID = Data1$ID
Res1 = data.frame(ResModel1, ID)
4) Add residuals to my RawData
RawData2.0 = merge(RawData, Res1, by = c("ID"), all.x = T)
In order to avoid rerunning this code and manually changing all the 1s to 2s, then all the 2s to 3s, etc., is there a way to make this automatic with loops and the apply family?
My data
ID TEMPERATURE BODYSIZE1 MASS1 BODYSIZE2 MASS2 YEAR
81-012 0.03830645 200 1450 205 1425 1981
84-069 0.26923078 200 1473 205 1498 1984
84-134 0.32692307 209 1448 195 1323 1984
84-145 0.27884614 197 1373 197 1498 1984
84-190 0.31129807 191 1248 195 1323 1984
85-155 0.33056709 198 1637 229 1988 1985
Thanks in advance
Without example data it is tough to say if this will work, but creating a function may simplify your workflow:
library(tidyverse)
library(lme4)   # needed for lmer()
get_resid <- function(df, filters) {
  df_to_model <- df %>% filter({{ filters }})
  df_to_keep  <- df %>% filter(!({{ filters }}))
  Data1 <- df_to_model %>%
    select(ID, temperature, Bodysize1, mass1, year)
  Model1 <- lmer(log(mass1) ~ log(Bodysize1) + temperature + (1|year), data = Data1)
  ResModel1 <- resid(Model1)
  ID <- Data1$ID
  Res1 <- data.frame(ResModel1, ID)
  Res1 %>%
    bind_rows(df_to_keep)
}
Then you may use this function in your pipes:
RawData %>%
  get_resid(temperature %in% c(20:29) & Bodysize1 %in% c(20:100) & mass1 %in% c(15:40))
You might also try to: i) create a tibble and list all dependent variables as strings in the first column,
ii) list your models of independent variables in the second column,
iii) create a formula in the third column,
iv) run your model in the fourth column.
df <- tibble(dep = paste0("log(var", seq(1, 10, 1), ")"),
             x   = "~ your_x_vars") %>%
  mutate(formula = paste0(dep, x)) %>%
  mutate(reg = map(formula, ~ lm(as.formula(.x), data = your_data)))   # your_data = the data frame holding the variables
then you can easily extract the residuals.
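To answer the original question more directly, here is a sketch that loops over the numeric suffixes (1, 2, ...) and builds the column names and the model formula with paste0(). It assumes lme4 is available, that the columns really are named temperature, Bodysize1/mass1, Bodysize2/mass2 and so on, and that the filter thresholds are the same for every category:
library(lme4)
RawData2.0 <- RawData
for (i in 1:2) {                        # extend 1:2 to however many categories you have
  size_col <- paste0("Bodysize", i)
  mass_col <- paste0("mass", i)
  keep <- RawData$temperature %in% 20:29 &
          RawData[[size_col]] %in% 20:100 &
          RawData[[mass_col]] %in% 15:40
  dat_i <- RawData[keep, ]
  form  <- as.formula(paste0("log(", mass_col, ") ~ log(", size_col, ") + temperature + (1|year)"))
  mod_i <- lmer(form, data = dat_i)
  res_i <- data.frame(ID = dat_i$ID, resid(mod_i))
  names(res_i)[2] <- paste0("Res", i)   # residual columns named Res1, Res2, ...
  RawData2.0 <- merge(RawData2.0, res_i, by = "ID", all.x = TRUE)
}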

loess regression on each group with dplyr::group_by()

Alright, I'm waving my white flag.
I'm trying to compute a loess regression on my dataset.
I want loess to compute a different set of points that plots as a smooth line for each group.
The problem is that the loess calculation is escaping the dplyr::group_by function, so the loess regression is calculated on the whole dataset.
Internet searching leads me to believe this is because dplyr::group_by wasn't meant to work this way.
I just can't figure out how to make this work on a per-group basis.
Here are some examples of my failed attempts.
test2 <- test %>%
  group_by(CpG) %>%
  dplyr::arrange(AVGMOrder) %>%
  do(broom::tidy(predict(loess(Meth ~ AVGMOrder, span = .85, data = .))))
> test2
# A tibble: 136 x 2
# Groups: CpG [4]
CpG x
<chr> <dbl>
1 cg01003813 0.781
2 cg01003813 0.793
3 cg01003813 0.805
4 cg01003813 0.816
5 cg01003813 0.829
6 cg01003813 0.841
7 cg01003813 0.854
8 cg01003813 0.866
9 cg01003813 0.878
10 cg01003813 0.893
This one works, but I can't figure out how to apply the result to a column in my original dataframe. The result I want is column x. If I apply x as a column in a separate line, I run into issues because I called dplyr::arrange earlier.
test2 <- test %>%
  group_by(CpG) %>%
  dplyr::arrange(AVGMOrder) %>%
  dplyr::do({
    predict(loess(Meth ~ AVGMOrder, span = .85, data = .))
  })
This one simply fails with the following error.
"Error: Results 1, 2, 3, 4 must be data frames, not numeric"
Also it still isn't applied as a new column with dplyr::mutate
fems <- fems %>%
  group_by(CpG) %>%
  dplyr::arrange(AVGMOrder) %>%
  dplyr::mutate(Loess = predict(loess(Meth ~ AVGMOrder, span = .5, data = .)))
This was my first attempt and mostly resembles what I want to do. The problem is that this one performs the loess prediction on the entire dataframe and not on each CpG group.
I am really stuck here. I read online that the purrr package might help, but I'm having trouble figuring it out.
data looks like this:
> head(test)
X geneID CpG CellLine Meth AVGMOrder neworder Group SmoothMeth
1 40 XG cg25296477 iPS__HDF51IPS14_passage27_Female____165.592.1.2 0.81107210 1 1 5 0.7808767
2 94 XG cg01003813 iPS__HDF51IPS14_passage27_Female____165.592.1.2 0.97052120 1 1 5 0.7927130
3 148 XG cg13176022 iPS__HDF51IPS14_passage27_Female____165.592.1.2 0.06900448 1 1 5 0.8045080
4 202 XG cg26484667 iPS__HDF51IPS14_passage27_Female____165.592.1.2 0.84077890 1 1 5 0.8163997
5 27 XG cg25296477 iPS__HDF51IPS6_passage33_Female____157.647.1.2 0.81623880 2 2 3 0.8285259
6 81 XG cg01003813 iPS__HDF51IPS6_passage33_Female____157.647.1.2 0.95569240 2 2 3 0.8409501
unique(test$CpG)
[1] "cg25296477" "cg01003813" "cg13176022" "cg26484667"
So, to be clear, I want to do a loess regression on each unique CpG in my dataframe and apply the resulting "regressed y axis values" to a column matching the original y axis values (Meth).
My actual dataset has a few thousand of those CpG's, not just the four.
https://docs.google.com/spreadsheets/d/1-Wluc9NDFSnOeTwgBw4n0pdPuSlMSTfUVM0GJTiEn_Y/edit?usp=sharing
This is a neat Tidyverse way to make it work:
library(dplyr)
library(tidyr)
library(purrr)
library(ggplot2)
models <- fems %>%
  tidyr::nest(-CpG) %>%
  dplyr::mutate(
    # Perform loess calculation on each CpG group
    m = purrr::map(data, loess,
                   formula = Meth ~ AVGMOrder, span = .5),
    # Retrieve the fitted values from each model
    fitted = purrr::map(m, `[[`, "fitted")
  )
# Apply fitted y's as a new column
results <- models %>%
  dplyr::select(-m) %>%
  tidyr::unnest()
# Plot with loess line for each group
ggplot(results, aes(x = AVGMOrder, y = Meth, group = CpG, colour = CpG)) +
  geom_point() +
  geom_line(aes(y = fitted))
You may have already figured this out -- but if not, here's some help.
Basically, you need to feed the predict function a data.frame (a vector may work too but I didn't try it) of the values you want to predict at.
So for your case:
fems <- fems %>%
  group_by(CpG) %>%
  arrange(CpG, AVGMOrder) %>%
  mutate(Loess = predict(loess(Meth ~ AVGMOrder, span = .5, data = .),
                         data.frame(AVGMOrder = seq(min(AVGMOrder), max(AVGMOrder), 1))))
Note, loess requires a minimum number of observations to run (~4? I can't remember precisely). Also, this will take a while to run so test with a slice of your data to make sure it's working properly.
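Since loess needs a handful of points per group, it can also be worth dropping very small groups before fitting. A small sketch, assuming the fems data from the question and an arbitrary cut-off of 4 observations per CpG:
library(dplyr)
fems_ok <- fems %>%
  group_by(CpG) %>%
  filter(n() >= 4) %>%   # arbitrary minimum; loess errors out on very small groups
  ungroup()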
Unfortunately, the approaches described above did not work in my case. Thus, I implemented the Loess prediction into a regular function, which worked very well. In the example below, the data is contained in the df data frame while we group by df$profile and want to fit the Loess prediction into the df$daily_sum values.
# Define important variables
span_60 <- 60/365   # 60 days of a year
span_365 <- 365/365 # a whole year
# Group and order the data set
df <- as.data.frame(
  df %>%
    group_by(profile) %>%
    arrange(profile, day)
)
# Define the loess function. x is the data frame that has to be passed
predict_loess <- function(x) {
  # Declare that the loess columns exist, but are blank
  x$loess_60 <- NA
  x$loess_365 <- NA
  # Identify all unique profile IDs
  all_ids <- unique(x$profile)
  # Iterate through the unique profile IDs, determine the length of each vector (which should
  # correspond to 365 days) and isolate the rows that belong to that profile ID.
  for (i in all_ids) {
    len_entries <- length(which(x$profile == i))
    queried_rows <- x[which(x$profile == i), ]
    # Run the loess fit and write the result to the corresponding column
    fit_60 <- predict(loess(daily_sum ~ seq(1, len_entries), data = queried_rows, span = span_60))
    fit_365 <- predict(loess(daily_sum ~ seq(1, len_entries), data = queried_rows, span = span_365))
    x[which(x$profile == i), "loess_60"] <- fit_60
    x[which(x$profile == i), "loess_365"] <- fit_365
  }
  # Return the data frame with the new columns
  return(x)
}
# Run the Loess prediction and put the results into two columns - one for a short and one for a long time span
df <- predict_loess(df)
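For completeness, newer dplyr versions make the group-wise mutate approach from the question workable, because pick() hands the current group's columns to loess(). A minimal sketch, assuming dplyr >= 1.1.0 and the fems data from above:
library(dplyr)
fems_smoothed <- fems %>%
  group_by(CpG) %>%
  arrange(AVGMOrder, .by_group = TRUE) %>%
  mutate(Loess = predict(loess(Meth ~ AVGMOrder, span = 0.5,
                               data = pick(Meth, AVGMOrder)))) %>%  # pick() = this group's columns
  ungroup()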

parse multiple XML files based on a vector and rbind in a dataframe

With some effort and help from the stackers, I have been able to parse a webpage and save it as a dataframe. I want to repeat the same operation on multiple xml files and rbind the list. Here is what I tried and did successfully:
library(XML)
xml.url <- "http://www.ebi.ac.uk/ena/data/view/ERS445758&display=xml"
doc <- xmlParse(xml.url)
x <- xmlToDataFrame(getNodeSet(doc,"//SAMPLE_ATTRIBUTE"))
x$UNITS <- NULL
x_t <- t(x)
x_t <- as.data.frame(x_t)
names(x_t) <- as.matrix(x_t[1, ])
x_t <- x_t[-1, ]
x_t[] <- lapply(x_t, function(x) type.convert(as.character(x)))
The above code works well. Now, when I try to apply a function to do the same for multiple xml files:
ERS_ID <- c("ERS445758","ERS445759", "ERS445760", "ERS445761", "ERS445762")
xml_url_test = as.vector(sprintf("http://www.ebi.ac.uk/ena/data/view/ERS445758&display=xml",
ERS_ID))
XML_parser <- function(XML_url){
doc <- xmlParse(XML_url)
x <- xmlToDataFrame(getNodeSet(doc,"//SAMPLE_ATTRIBUTE"))
x$UNITS <- NULL
x_t <- t(x)
x_t <- as.data.frame(x_t)
names(x_t) <- as.matrix(x_t[1, ])
x_t <- x_t[-1, ]
x_t[] <- lapply(x_t, function(x) type.convert(as.character(x)))
return(x_t)
}
major_test <- sapply(xml_url_test, XML_parser)
It works, but it gives me a long list that is not in the right data frame format, unlike what I generated for the single XML file.
Finally, I would also like to add a column to the final dataframe that has the ERS number from the ERS_ID vector,
something like x_t$ERSid <- ERS_ID in the function.
Can someone point out what am I missing in the function as well as any better ways to do the task?
Thanks!
Your main issue is using sapply over lapply(): the latter returns a list while the former attempts to simplify the result to a vector or matrix (here, a matrix).
major_test <- lapply(xml_url_test, XML_parser)
Of course, sapply is a wrapper for lapply and can also return a list with sapply(..., simplify=FALSE):
major_test <- sapply(xml_url_test, XML_parser, simplify=FALSE)
However, a few other items came up:
At the beginning, you are not concatenating your ERS_ID to the url stem with sprintf's %s operator, so right now the same url is repeated.
At the end, you are not binding your list of data frames into a compiled final single dataframe.
Add the new ERS column inside your defined function, passing in the ERS_ID vector. While creating the column, also remove the ERS prefix with gsub.
R code (adjusted)
XML_parser <- function(eid) {
XML_url <- as.vector(sprintf("http://www.ebi.ac.uk/ena/data/view/%s&display=xml", eid))
doc <- xmlParse(XML_url)
x <- xmlToDataFrame(getNodeSet(doc,"//SAMPLE_ATTRIBUTE"))
x$UNITS <- NULL
x_t <- t(x)
x_t <- as.data.frame(x_t)
names(x_t) <- as.matrix(x_t[1, ])
x_t <- x_t[-1, ]
x_t[] <- lapply(x_t, function(x) type.convert(as.character(x)))
x_t$ERSid <- gsub("ERS", "", eid) # ADD COL, REMOVE ERS
x_t <- x_t[,c(ncol(x_t),2:ncol(x_t)-1)] # MOVE NEW COL TO FIRST
return(x_t)
}
major_test <- lapply(ERS_ID, XML_parser)
# major_test <- sapply(ERS_ID, XML_parser, simplify=FALSE)
# BIND DATA FRAMES TOGETHER
finaldf <- do.call(rbind, major_test)
# RESET ROW NAMES
row.names(finaldf) <- seq(nrow(finaldf))
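When looping over many IDs it can also help to guard each download, so that one failing URL does not abort the whole run. A sketch wrapping the XML_parser function above in tryCatch (failures return NULL, which do.call(rbind, ...) simply drops):
safe_parser <- function(eid) {
  tryCatch(
    XML_parser(eid),
    error = function(e) {
      message("Failed for ", eid, ": ", conditionMessage(e))
      NULL
    }
  )
}
major_test <- lapply(ERS_ID, safe_parser)
finaldf <- do.call(rbind, major_test)   # NULL entries are dropped by rbind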
Using xml2 and the tidyverse you can do something like this:
require(xml2)
require(purrr)
require(tidyr)
urls <- rep("http://www.ebi.ac.uk/ena/data/view/ERS445758&display=xml", 2)
identifier <- LETTERS[seq_along(urls)] # Take a unique identifier per url here
parse_attribute <- function(x){
out <- data.frame(tag = xml_text(xml_find_all(x, "./TAG")),
value = xml_text(xml_find_all(x, "./VALUE")), stringsAsFactors = FALSE)
spread(out, tag, value)
}
doc <- map(urls, read_xml)
out <- doc %>%
map(xml_find_all, "//SAMPLE_ATTRIBUTE") %>%
set_names(identifier) %>%
map_df(parse_attribute, .id="url")
Which gives you a 2x36 data.frame. To parse the column types I would suggest using readr::type_convert(out).
out looks as follows:
url age body product body site body-mass index chimera check collection date
1 A 28 mucosa Sigmoid colon 16.95502 ChimeraSlayer; Usearch 4.1 database 2009-03-16
2 B 28 mucosa Sigmoid colon 16.95502 ChimeraSlayer; Usearch 4.1 database 2009-03-16
disease status ENA-BASE-COUNT ENA-CHECKLIST ENA-FIRST-PUBLIC ENA-LAST-UPDATE ENA-SPOT-COUNT
1 remission 627051 ERC000015 2014-12-31 2016-10-21 1668
2 remission 627051 ERC000015 2014-12-31 2016-10-21 1668
environment (biome) environment (feature) environment (material) experimental factor
1 organism-associated habitat organism-associated habitat mucus microbiome
2 organism-associated habitat organism-associated habitat mucus microbiome
gastrointestinal tract disorder geographic location (country and/or sea,region) geographic location (latitude)
1 Ulcerative Colitis India 72.82807
2 Ulcerative Colitis India 72.82807
geographic location (longitude) host subject id human gut environmental package investigation type
1 18.94084 1 human-gut metagenome
2 18.94084 1 human-gut metagenome
medication multiplex identifiers pcr primers phenotype project name
1 ASA;Steroids;Probiotics;Antibiotics TGATACGTCT 27F-338R pathological BMRP
2 ASA;Steroids;Probiotics;Antibiotics TGATACGTCT 27F-338R pathological BMRP
sample collection device or method sequence quality check sequencing method sequencing template sex target gene
1 biopsy software pyrosequencing DNA male 16S rRNA
2 biopsy software pyrosequencing DNA male 16S rRNA
target subfragment
1 V1V2
2 V1V2
purrr is really helpful here, as you can iterate over a vector of URLs or a list of XML files with map, or within nested elements with at_depth, and simplify the results with the *_df forms and flatten.
library(tidyverse)
library(xml2)
# be kind, don't call this more times than you need to
x <- c("ERS445758","ERS445759", "ERS445760", "ERS445761", "ERS445762") %>%
sprintf("http://www.ebi.ac.uk/ena/data/view/%s&display=xml", .) %>%
map(read_xml) # read each URL into a list item
df <- x %>% map(xml_find_all, '//SAMPLE_ATTRIBUTE') %>% # for each item select nodes
at_depth(2, as_list) %>% # convert each (nested) attribute to list
map_df(map_df, flatten) # flatten items, collect pages to df, then all to one df
df
## # A tibble: 175 × 3
## TAG VALUE UNITS
## <chr> <chr> <chr>
## 1 investigation type metagenome <NA>
## 2 project name BMRP <NA>
## 3 experimental factor microbiome <NA>
## 4 target gene 16S rRNA <NA>
## 5 target subfragment V1V2 <NA>
## 6 pcr primers 27F-338R <NA>
## 7 multiplex identifiers TGATACGTCT <NA>
## 8 sequencing method pyrosequencing <NA>
## 9 sequence quality check software <NA>
## 10 chimera check ChimeraSlayer; Usearch 4.1 database <NA>
## # ... with 165 more rows
You can retrieve multiple IDs with a single REST url using a comma-separated list or range like ERS445758-ERS445762 and avoid multiple queries to the ENA.
This code gets all 5 samples into a node set and then applies functions using a leading dot in the xpath string so it's relative to that node.
ERS_ID <- c("ERS445758","ERS445759", "ERS445760", "ERS445761", "ERS445762")
url <- paste0( "http://www.ebi.ac.uk/ena/data/view/", paste(ERS_ID, collapse=","), "&display=xml")
doc <- xmlParse(url)
samples <- getNodeSet( doc, "//SAMPLE")
## check the first node
samples[[1]]
## get the sample attribute node set and apply xmlToDataFrame to that
x <- lapply( lapply(samples, getNodeSet, ".//SAMPLE_ATTRIBUTE"), xmlToDataFrame)
# labels for bind_rows
names(x) <- sapply(samples, xpathSApply, ".//PRIMARY_ID", xmlValue)
library(dplyr)
y <- bind_rows(x, .id="sample")
z <- subset(y, TAG %in% c("age","sex","body site","body-mass index") , 1:3)
sample TAG VALUE
15 ERS445758 age 28
16 ERS445758 sex male
17 ERS445758 body site Sigmoid colon
19 ERS445758 body-mass index 16.9550173
50 ERS445759 age 58
51 ERS445759 sex male
...
library(tidyr)
z %>% spread( TAG, VALUE)
sample age body site body-mass index sex
1 ERS445758 28 Sigmoid colon 16.9550173 male
2 ERS445759 58 Sigmoid colon 23.22543185 male
3 ERS445760 26 Sigmoid colon 20.76124567 female
4 ERS445761 30 Sigmoid colon 0 male
5 ERS445762 36 Sigmoid colon 0 male
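Note that spread() has since been superseded by pivot_wider() in tidyr; with a recent tidyr version the last step could equally be written as:
library(tidyr)
z %>% pivot_wider(names_from = TAG, values_from = VALUE)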
