Looping through columns of csv-file in R [duplicate]

This question already has answers here:
How to apply a shapiro test by groups in R?
(3 answers)
Closed 9 years ago.
This probably is an easy question, but I'm just starting to learn how to use R.
I have a csv-file filled with columns containing numbers. For every column of numbers I want R to conduct a Shapiro-Wilk test of normality. So, I want to loop through the columns from left to right, conducting shapiro.test(file$column1), shapiro.test(file$column2), etc.
All columns have a name as their header, and they don't contain the same number of rows.
How do I go about this? Many thanks in advance!

Try
apply(file, 2, shapiro.test)
and take a look at ?apply
Another way is to use sapply, which works on the data frame's columns directly (apply first coerces the data frame to a matrix):
sapply(file, shapiro.test, simplify=FALSE)
Also take a look at ?sapply
An example using airquality dataset
> data(airquality)
> head(airquality)
Ozone Solar.R Wind Temp Month Day
1 41 190 7.4 67 5 1
2 36 118 8.0 72 5 2
3 12 149 12.6 74 5 3
4 18 313 11.5 62 5 4
5 NA NA 14.3 56 5 5
6 28 NA 14.9 66 5 6
# Applying shapiro.test function
> Test <- apply(airquality, 2, shapiro.test)
# Showing results in a nice format
> sapply(Test, function(x) unlist(x[c( "statistic", "p.value")]))
Ozone Solar.R Wind Temp Month Day
statistic.W 8.786661e-01 9.418347e-01 0.9857501 0.976173252 8.880451e-01 9.531254e-01
p.value 2.789638e-08 9.493099e-06 0.1178033 0.009320041 2.258290e-09 5.047775e-05
> sapply(Test, function(x) c(x["statistic"], x["p.value"])) # same results as above
Ozone Solar.R Wind Temp Month Day
statistic 0.8786661 0.9418347 0.9857501 0.9761733 0.8880451 0.9531254
p.value 2.789638e-08 9.493099e-06 0.1178033 0.009320041 2.25829e-09 5.047775e-05
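Tying the answer back to the original question: a minimal sketch, assuming the file is called data.csv (hypothetical name) with column headers, and that the shorter columns are padded with NA when read in. shapiro.test() drops incomplete cases internally, so the differing column lengths are not a problem:

```r
file <- read.csv("data.csv")

# keep only the numeric columns and test each one;
# simplify = FALSE returns the full htest object per column
results <- sapply(Filter(is.numeric, file), shapiro.test, simplify = FALSE)
results$column1   # W statistic and p-value for one column
```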

Related

LOCF and NOCF methods for missing data: how to plot data?

I'm working on the following dataset and its missing data:
# A tibble: 27 x 6
id sex d8 d10 d12 d14
<dbl> <chr> <dbl> <dbl> <dbl> <dbl>
1 1 F 21 20 21.5 23
2 2 F 21 21.5 24 25.5
3 3 NA NA 24 NA 26
4 4 F 23.5 24.5 25 26.5
5 5 F 21.5 23 22.5 23.5
6 6 F 20 21 21 22.5
7 7 F 21.5 22.5 23 25
8 8 F 23 23 23.5 24
9 9 F NA 21 NA 21.5
10 10 F 16.5 19 19 19.5
# ... with 17 more rows
I would like to fill in the missing data via the Last Observation Carried Forward (LOCF) and Next Observation Carried Backward (NOCB) methods, produce a graphical representation plotting the individual profiles over age by sex (highlighting the imputed values), and compute the means and standard errors at each age by sex. Can you suggest a way to set the arguments of the plot() function properly?
Does anyone have a clue about this?
I've included some code below, in case it turns out useful, drawn from another dataset as an example.
par(mfrow = c(1, 1))
Oz <- airquality$Ozone
# carry the last observed value forward over runs of NAs
locf <- function(x) {
  a <- x[1]
  for (i in 2:length(x)) {
    if (is.na(x[i])) x[i] <- a
    else a <- x[i]
  }
  return(x)
}
Ozi <- locf(Oz)
colvec <- ifelse(is.na(Oz), mdc(2), mdc(1))  # mdc() comes from the mice package
### Figure
plot(Ozi[1:80], col = colvec, type = "l", xlab = "Day number", ylab = "Ozone (ppb)")
points(Ozi[1:80], col = colvec, pch = 20, cex = 1)
Next Observation Carried Backward / Last Observation Carried Forward is probably a very bad choice for your data.
These algorithms are usually used for time series data, where carrying the last observation forward can be a good idea. E.g. if you think of 10-minute temperature measurements, the actual outdoor temperature will quite likely be similar to the temperature 10 minutes ago.
For cross-sectional data (it seems you are looking at persons), the previous person is usually no more similar to the actual person than any other random person.
Take a look at the mice R package for your cross-sectional dataset.
It offers far better algorithms for your case than LOCF/NOCB.
Here is an overview of the functions it offers: https://amices.org/mice/reference/index.html
It also includes different plots to assess the imputations.
Usually when using mice you create multiple possible imputations (it is worth reading about the technique of multiple imputation), but you can also produce just one imputed dataset with the package.
There are the following functions for visualizing your imputations:
bwplot() (Box-and-whisker plot of observed and imputed data)
densityplot() (Density plot of observed and imputed data)
stripplot() (Stripplot of observed and imputed data)
xyplot() (Scatterplot of observed and imputed data)
I hope this helps a little. My advice would be to take a look at this package and then start a new approach with that knowledge.
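As a minimal sketch of that advice — assuming the asker's 27-row tibble is stored in a data frame called dat (hypothetical name) — a basic mice workflow could look like this; the pmm method and m = 5 are just the package defaults, not a recommendation for this particular dataset:

```r
library(mice)

# create 5 multiple imputations with predictive mean matching
imp <- mice(dat, method = "pmm", m = 5, seed = 1)

# compare observed vs. imputed values
densityplot(imp)
stripplot(imp)

# extract one completed dataset if a single filled-in table is needed
completed <- complete(imp, 1)
```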

How do I fill a new data frame based on a value and the result of a calculation using that value?

For graphical purpose, I want to create a new data frame with two columns.
The first column is the dose of the treatment received (i; 10 grammes up to 200 grammes).
The second column must be filled with the result of a calculation corresponding to the dose received, i.e. the percentage of patients developing the disease at the corresponding dose, which is given by the formula below:
The dose is extracted from a much larger dataset (data_fcpa) of more than 1 000 rows (patients).
percent_i <- round(prop.table(table(data_fcpa$n_chir_act[data_fcpa$cyproterone_dose > i] > 1))[2] * 100, 1)
I know how to create a new data frame (df) with the doses I want to explore:
df <- data.frame(dose = seq(10, 200, by = 10))
names(df) <- c("cpa_dose")
> df
cpa_dose
1 10
2 20
3 30
4 40
5 50
6 60
7 70
8 80
9 90
10 100
11 110
12 120
13 130
14 140
15 150
16 160
17 170
18 180
19 190
20 200
For example for a dose of 10 grammes the result is:
> round(prop.table(table(data_fcpa$n_chir_act[data_fcpa$cyproterone_dose > 10] > 1))[2] * 100, 1)
TRUE
11.7
I suspect that a loop is needed to produce an output like the little example provided below, but I have no idea how to do it.
cpa_dose percentage
1 10 11.7
2 20
3 30
4 40
Any suggestions are welcome.
Thank you in advance for your help.
It seems that you are describing a situation where you want to show predicted effects from a statistical model? In that case, ggeffects is your best friend.
library(tidyverse)
library(ggeffects)
lm(mpg ~ hp, mtcars) %>%
  ggpredict() %>%
  as_tibble()
By the way, in order to answer your question properly it would help if you provided some data and showed what you have tried.
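That said, the loop the asker suspects is needed can be written directly with sapply(), reusing the formula from the question — a sketch assuming data_fcpa really has the n_chir_act and cyproterone_dose columns used above:

```r
df <- data.frame(cpa_dose = seq(10, 200, by = 10))

# evaluate the asker's formula once per dose and store the percentages
df$percentage <- sapply(df$cpa_dose, function(i) {
  round(prop.table(table(data_fcpa$n_chir_act[data_fcpa$cyproterone_dose > i] > 1))[2] * 100, 1)
})
```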

Take the mean of three variables containing NAs to create new variable using dplyr [duplicate]

This question already has answers here:
R: How to calculate mean for each row with missing values using dplyr
(3 answers)
Closed 3 years ago.
I have three measures in my dataset that I am trying to combine into one new variable that represents the mean value across those three variables for each row in turn (each row represents a participant). Each of the original three variables contains NA values.
I've tried the code below that I've applied here to a sample dataset from R that contains NA values (airquality):
airquality %>% mutate(New = mean(airquality$Solar.R,airquality$Ozone,airquality$Wind))
But I keep getting the error message:
Error in mean.default(airquality$Solar.R, airquality$Ozone, airquality$Wind) :
  'trim' must be numeric of length one
In addition: Warning message:
In if (na.rm) x <- x[!is.na(x)] :
  the condition has length > 1 and only the first element will be used
I have also tried :
airquality %>% filter(!is.na(airquality$Solar.R,airquality$Ozone,airquality$Wind)) %>% mutate(New = mean(airquality$Solar.R,airquality$Ozone,airquality$Wind))
But this gives me the same error.
Can anyone advise on how to solve this problem?
Thanks so much in advance!
You can use row_mean_ from hablar, which takes the mean by row while ignoring missing values.
library(dplyr)
library(hablar)
airquality %>%
  mutate(New = row_mean_(Solar.R, Ozone, Wind))
Result
Ozone Solar.R Wind Temp Month Day New
1 41 190 7.4 67 5 1 79.466667
2 36 118 8.0 72 5 2 54.000000
3 12 149 12.6 74 5 3 57.866667
4 18 313 11.5 62 5 4 114.166667
5 NA NA 14.3 56 5 5 14.300000
6 28 NA 14.9 66 5 6 21.450000
7 23 299 8.6 65 5 7 110.200000
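If you would rather avoid an extra dependency, base R's rowMeans() with na.rm = TRUE computes the same row-wise mean; a sketch on the same airquality columns:

```r
# row-wise mean of the three columns, ignoring NAs in each row
airquality$New <- rowMeans(airquality[, c("Solar.R", "Ozone", "Wind")], na.rm = TRUE)
head(airquality$New)
```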

How do I create a column using values of a second column that meet the conditions of a third in R?

I have a dataset Comorbidity in RStudio, where I have added columns such as MDDOnset: if the age at onset of MDD < the onset of OUD, it equals 1, and if the opposite is true, it equals 2. I also have another column, PhysDis, that has numeric values from 0 to 100.
What I want to do is make a new column that includes the values of PhysDis, but only if MDDOnset == 1, and another if MDDOnset == 2. I want to make these columns so that I can run a t-test on them and compare the two groups (those with MDD prior to OUD, and those who had MDD after OUD) with regard to which group has the greater physical disability score. I want any case where MDDOnset is not 1 to be NA.
ttest1 <-t.test(Comorbidity$MDDOnset==1, Comorbidity$PhysDis)
ttest2 <-t.test(Comorbidity$MDDOnset==2, Comorbidity$PhysDis)
When I ran the t-test twice, once where MDDOnset = 1 and again where it equaled 2, the mean for y (Comorbidity$PhysDis) was the same both times. When I looked into the original csv file, it turned out that this was the mean of the entire column, not just of the cases where MDDOnset was one or two. If there is a different way to run the t-tests that uses the mean of PhysDis only when MDDOnset == 1, and another with the mean of PhysDis only when MDDOnset == 2, without making new columns, please tell me. Sorry if there are any similar questions or if my approach is way off; I'm new to R and programming in general, and thanks in advance.
Here's a smaller data frame where I tried to replicate the error where the new columns have switched lengths. The issue would be that the length of C would be 4, and the length of D would be 6 if I could replicate the error.
> A <- sample(1:10)
> B <-c(25,34,14,76,56,34,23,12,89,56)
> alphabet <-data.frame(A,B)
> alphabet$C <-ifelse(alphabet$A<7, alphabet$B, NA)
> alphabet$D <-ifelse(alphabet$A>6, alphabet$B, NA)
> print(alphabet)
A B C D
1 7 25 NA 25
2 9 34 NA 34
3 4 14 14 NA
4 2 76 76 NA
5 5 56 56 NA
6 10 34 NA 34
7 8 23 NA 23
8 6 12 12 NA
9 1 89 89 NA
10 3 56 56 NA
> length(which(alphabet$C>0))
[1] 6
> length(which(alphabet$D>0))
[1] 4
I would use the mutate command from the dplyr package. Note that the "no match" cases should be NA rather than "": mixing numbers with an empty string coerces the whole column to character, which would break a later t-test.
Comorbidity <- mutate(Comorbidity, newColumn = ifelse(MDDOnset == 1, PhysDis, NA), newColumn2 = ifelse(MDDOnset == 2, PhysDis, NA))
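For the comparison the question is actually after (mean PhysDis in the MDDOnset == 1 group vs. the MDDOnset == 2 group), a two-sample t-test can also be run directly with a formula, no new columns needed — a sketch assuming Comorbidity has the columns described:

```r
# restrict to the two onset groups and compare PhysDis between them
t.test(PhysDis ~ MDDOnset, data = subset(Comorbidity, MDDOnset %in% c(1, 2)))
```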

Avoid using a loop to get sum of rows in R, where I want to start and stop the sum on different columns for each row

I am relatively new to R, coming from Stata. I have a data frame that has 100+ columns and thousands of rows. Each row has a start value, a stop value, and 100+ columns of numerical values. The goal is to get, for each row, the sum from the column corresponding to the start value to the column corresponding to the stop value. This is direct enough to do in a loop, which looks like this (the data frame is df, the start column is start, the stop column is stop):
for (i in 1:nrow(df)) {
  df$out[i] <- rowSums(df[i, df$start[i]:df$stop[i]])
}
This works great, but it is taking 15 minutes or so. Does anyone have any suggestions on a faster way to do this?
You can do this using some algebra (if you have a sufficient amount of memory):
DF <- data.frame(start=3:7, end=4:8)
DF <- cbind(DF, matrix(1:50, nrow=5, ncol=10))
# start end 1 2 3 4 5 6 7 8 9 10
#1 3 4 1 6 11 16 21 26 31 36 41 46
#2 4 5 2 7 12 17 22 27 32 37 42 47
#3 5 6 3 8 13 18 23 28 33 38 43 48
#4 6 7 4 9 14 19 24 29 34 39 44 49
#5 7 8 5 10 15 20 25 30 35 40 45 50
take <- outer(seq_len(ncol(DF) - 2) + 2, DF$start - 1, ">") &
  outer(seq_len(ncol(DF) - 2) + 2, DF$end + 1, "<")
diag(as.matrix(DF[, -(1:2)]) %*% take)
#[1] 7 19 31 43 55
If you are dealing with values of all the same types, you typically want to do things in matrices. Here is a solution in matrix form:
rows <- 10^3
cols <- 10^2
start <- sample(1:cols, rows, replace=T)
end <- pmin(cols, start + sample(1:(cols/2), rows, replace=T))
# first 2 cols of matrix are start and end, the rest are
# random data
mx <- matrix(c(start, end, runif(rows * cols)), nrow=rows)
# use `apply` to apply a function to each row; here the
# function drops the first two values (start and end) and
# sums the remaining values from the start position to the
# end position
apply(mx, 1, function(x) sum(x[-(1:2)][x[[1]]:x[[2]]]))
# df version
df <- as.data.frame(mx)
df$out <- apply(df, 1, function(x) sum(x[-(1:2)][x[[1]]:x[[2]]]))
You can convert your data.frame to a matrix with as.matrix. You can also run the apply directly on your data.frame as shown, which should still be reasonably fast. The real problem with your code is that you are modifying a data frame nrow times, and modifying data frames is very slow. By using apply you generate the whole answer (the $out column) at once, which you can then cbind back to your data frame (so you modify the data frame just once).
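Another fully vectorized option, sketched under the same setup as the matrix example above (start and end index into the data columns), is to compute row-wise cumulative sums once and then recover each range sum as a difference, indexing with two-column matrices:

```r
rows <- 10^3
cols <- 10^2
start <- sample(1:cols, rows, replace = TRUE)
end <- pmin(cols, start + sample(1:(cols / 2), rows, replace = TRUE))
m <- matrix(runif(rows * cols), nrow = rows)

# cumulative sums along each row: sum over start:end is cs[end] - cs[start - 1]
cs <- t(apply(m, 1, cumsum))
out <- cs[cbind(seq_len(rows), end)] -
  ifelse(start > 1, cs[cbind(seq_len(rows), pmax(start - 1, 1))], 0)
```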
