Subset Duplicated Values >10 - r

I am looking at a data frame and trying to subset rows that have the same pressure value for more than 5 rows, or to delete rows that do not have 5 duplicate pressure values...
File Turbidity Pressure
1 3.2 46
2 3.4 46
3 5.4 46
4 3.2 46
5 3.1 46
6 2.3 46
7 2.3 45
8 4.5 45
9 2.3 45
10 3.2 44
11 4.5 44
12 6.5 43
13 3.2 42
14 3.1 41
15 1.2 41
16 2.3 41
17 2.4 41
18 2.1 41
19 1.4 41
25 1.3 41
So basically I am trying to keep the rows that have a pressure of 46 or 41 and delete the rows in between. This is a small portion of my dataset; I just need code that keeps rows whose pressure value appears 5 or more times and deletes the others.

Try
library(dplyr)
df %>% group_by(Pressure) %>% filter(n() >= 5)
Which gives:
#Source: local data frame [13 x 3]
#Groups: Pressure
#
# File Turbidity Pressure
#1 1 3.2 46
#2 2 3.4 46
#3 3 5.4 46
#4 4 3.2 46
#5 5 3.1 46
#6 6 2.3 46
#7 14 3.1 41
#8 15 1.2 41
#9 16 2.3 41
#10 17 2.4 41
#11 18 2.1 41
#12 19 1.4 41
#13 25 1.3 41
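Note that the result is still grouped by Pressure (see the #Groups: Pressure line above); if you plan to keep working with it ungrouped, a small optional addition of my own is:
df %>% group_by(Pressure) %>% filter(n() >= 5) %>% ungroup()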

Here's a data.table solution (relies crucially on Pressure not repeating itself later on):
library(data.table)
setDT(df)[,if(.N>=5) .SD,by=Pressure]
Addendum:
If you expect Pressure values to repeat later on, e.g.:
df <- data.frame(File = c(1:19, 25:28),
                 Pressure = rep(c(46:41, 46), c(6, 3, 2, 1, 1, 7, 3)))
Then you'll need to use rleid in order to keep only groups of at least 5 in a row (no gaps):
setDT(df)[,ct:=rleid(Pressure)][,if (.N>=5) .SD,by=ct]
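For comparison, here is a dplyr sketch of the same run-based grouping (my addition, not from the original answer), building a run id with cumsum() instead of rleid():
library(dplyr)
df %>%
  # a new run starts whenever Pressure differs from the previous row
  group_by(run = cumsum(Pressure != lag(Pressure, default = first(Pressure)))) %>%
  filter(n() >= 5) %>%
  ungroup() %>%
  select(-run)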

Here is a solution using base R:
# Example data: five rows with Pressure 1, then five unique values
df <- data.frame(id = 1:10, Pressure = c(rep(1, 5), 6:10))
# Count how many times each pressure value occurs
p.counts <- table(df[, "Pressure"])
# Keep only the pressure values that occur at least 5 times
good.pressures <- as.numeric(names(p.counts))[p.counts >= 5]
# Subset the data frame to those pressures
df.sub <- df[df[, "Pressure"] %in% good.pressures, ]
Note that I'm using df as an example data set, so you can delete that first line of code and replace all instances of df with the name of your data.frame.
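If you prefer a one-liner, the same counting idea can be written with ave(); this is a sketch of mine, assuming Pressure is numeric:
# keep rows whose Pressure value occurs at least 5 times overall
df.sub <- df[ave(df$Pressure, df$Pressure, FUN = length) >= 5, ]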

Related

Stack every n-th column of data.frame

I have a large excel sheet which I need to import into R. My issue is that the sheet contains many small tables which are separated by empty rows and columns. The table titles are also separated from the table itself by an empty row.
You can look at the excel output here.
I would like to have a list of all data.frames on the sheet which are correctly named.
Edit: I have removed all empty rows and columns, so now I am left with many columns in which a new table begins at every fourth column.
results[1:8,1:10]
c01 c02 c03 c04 c05 c06
1 response of AUT_E3CI to AUT_E3CI shocks <NA> <NA> <NA> response of AUT_E3CI to AUT_HICP shocks <NA>
2 <NA> lower bound median upper bound <NA> lower bound
3 1.0 1.0 1.0 1.0 1.0 0.0
4 2.0 -0.0017003281610081986 0.06962694180009907 0.11535402387039681 2.0 0.0320783162263377
5 3.0 -0.01232670095304385 0.08512933823225599 0.22707701169618283 3.0 0.023014339564983166
6 4.0 -0.09369238952198494 0.0904834548010296 0.47546694990618454 4.0 -0.05144787826814039
7 5.0 -0.2301596729710727 0.03155987208641617 0.6502910254692547 5.0 -0.11117305002762731
8 6.0 -0.4024596506843293 0.10607314703381547 1.3257182896467359 6.0 -0.47648920768332415
Now I would like to rbind all columns into a single data.frame of four columns, that is, stack all the tables on top of each other in one long table of four columns. I have tried a while-loop which rbinds the data.frame onto itself in sets of four columns:
while (ncol(results) > 4) {
  results <- rbind(results[, 1:ncol(results)/2],
                   results[, (ncol(results)/2) + 1:ncol(results)])
}
However, this gives me the following error:
Error in `[.data.frame`(results, , (ncol(results)/2) + 1:ncol(results)) :
undefined columns selected
Does anyone know how to fix the loop or knows an alternative approach?
(The error, incidentally, comes from operator precedence: ':' binds tighter than '/' and '+', so (ncol(results)/2)+1:ncol(results) means ncol(results)/2 + (1:ncol(results)), which runs past the last column.) I think you're starting with something similar to this:
dat <- structure(list(V1 = 11:13, V2 = 21:23, V3 = 31:33, V4 = 41:43, V5 = 51:53, V6 = 61:63, V7 = 71:73, V8 = 81:83, V9 = 91:93), class = "data.frame", row.names = c(NA, -3L))
dat
# V1 V2 V3 V4 V5 V6 V7 V8 V9
# 1 11 21 31 41 51 61 71 81 91
# 2 12 22 32 42 52 62 72 82 92
# 3 13 23 33 43 53 63 73 83 93
where you want to take every set of (say) 3 columns and stack them. Try this:
# Split the columns into consecutive groups of 3
datspl <- split.default(dat, (seq_along(dat) - 1) %/% 3)
# Give every group the same column names as the first group
datspl <- lapply(datspl, `colnames<-`, colnames(datspl[[1]]))
# Stack the groups row-wise
newdat <- do.call(rbind, datspl)
newdat
# V1 V2 V3
# 0.1 11 21 31
# 0.2 12 22 32
# 0.3 13 23 33
# 1.1 41 51 61
# 1.2 42 52 62
# 1.3 43 53 63
# 2.1 71 81 91
# 2.2 72 82 92
# 2.3 73 83 93
This assumes that the names from the first set of columns are appropriate for all the other sets.
(Replace my 3 with 4 if you're doing every 4 columns, everything else should just work.)
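If you need this for several sheets, the two steps above wrap naturally into a small helper; stack_every_n is a hypothetical name of my own, not from the answer:
# Stack every consecutive set of n columns, reusing the first set's names
stack_every_n <- function(df, n) {
  groups <- split.default(df, (seq_along(df) - 1) %/% n)
  groups <- lapply(groups, setNames, names(groups[[1]]))
  do.call(rbind, groups)
}
newdat <- stack_every_n(dat, 3)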
You could use reshape in R. For example, using the data provided by r2evans, we could do:
n_cols <- 3
reshape(dat, matrix(names(dat), n_cols), dir = 'long')
time V1 V2 V3 id
1.1 1 11 21 31 1
2.1 1 12 22 32 2
3.1 1 13 23 33 3
1.2 2 41 51 61 1
2.2 2 42 52 62 2
3.2 2 43 53 63 3
1.3 3 71 81 91 1
2.3 3 72 82 92 2
3.3 3 73 83 93 3
Then get rid of the id and time columns if you do not need them.
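For completeness, dropping those two columns might look like this (my sketch, continuing from the code above):
long <- reshape(dat, matrix(names(dat), n_cols), dir = 'long')
long <- long[, !(names(long) %in% c("time", "id"))]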

How to transfer a column from a dataset sharing the same one with another one

I have two versions of datasets sharing the same columns (more or less). Let's take as an example
db = airquality
db1 = airquality[,-c(6)]
db1$Ozone[db1$Ozone < 30] <- 24
db1$Month[db1$Month == 5] <- 24
db
db1
I would like to transfer the two columns 'Ozone' and 'Wind' from the dataset 'db1' to the 'db' dataset, using the pipe operator %>% or another method. What code would you suggest?
Thanks
You can do:
library(dplyr)
db1 %>%
select(Ozone, Wind) %>%
bind_cols(db)
Note that in this example, since some column names will be duplicated in the final result, dplyr will automatically rename the duplicates by appending numbers to the end of the column names.
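If you'd rather control the names yourself, one option (a sketch of mine; the _db1 suffix is an arbitrary choice) is to rename while selecting, so nothing needs repairing:
db1 %>%
  select(Ozone_db1 = Ozone, Wind_db1 = Wind) %>%
  bind_cols(db)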
Base R:
cbind(db, db1[,c(1,3)])
Ozone Solar.R Wind Temp Month Day Ozone Wind
1 41 190 7.4 67 5 1 41 7.4
2 36 118 8.0 72 5 2 36 8.0
3 12 149 12.6 74 5 3 24 12.6
4 18 313 11.5 62 5 4 24 11.5
5 NA NA 14.3 56 5 5 NA 14.3
6 28 NA 14.9 66 5 6 24 14.9
7 23 299 8.6 65 5 7 24 8.6
8 19 99 13.8 59 5 8 24 13.8
9 8 19 20.1 61 5 9 24 20.1
10 NA 194 8.6 69 5 10 NA 8.6
11 7 NA 6.9 74 5 11 24 6.9
12 16 256 9.7 69 5 12 24 9.7
...

Wrong Fit using nls function

When I try to fit an exponential decay and my x axis has decimal numbers, the fit is never correct. Here's my data below:
exp.decay = data.frame(time,counts)
time counts
1 0.4 4458
2 0.6 2446
3 0.8 1327
4 1.0 814
5 1.2 549
6 1.4 401
7 1.6 266
8 1.8 182
9 2.0 140
10 2.2 109
11 2.4 83
12 2.6 78
13 2.8 57
14 3.0 50
15 3.2 31
16 3.4 22
17 3.6 23
18 3.8 20
19 4.0 19
20 4.2 9
21 4.4 7
22 4.6 4
23 4.8 6
24 5.0 4
25 5.2 6
26 5.4 2
27 5.6 7
28 5.8 2
29 6.0 0
30 6.2 3
31 6.4 1
32 6.6 1
33 6.8 2
34 7.0 1
35 7.2 2
36 7.4 1
37 7.6 1
38 7.8 0
39 8.0 0
40 8.2 0
41 8.4 0
42 8.6 1
43 8.8 0
44 9.0 0
45 9.2 0
46 9.4 1
47 9.6 0
48 9.8 0
49 10.0 1
fit.one.exp <- nls(counts ~ A*exp(-k*time),data=exp.decay, start=c(A=max(counts),k=0.1))
plot(exp.decay, col='darkblue',xlab = 'Track Duration (seconds)',ylab = 'Number of Particles', main = 'Exponential Fit')
lines(predict(fit.one.exp), col = 'red', lty=2, lwd=2)
I always get this weird fit. It seems to me that the fit is not recognizing the right x axis, because when I use a different data set, with only integers on the x axis (time), the fit works! I don't understand why it's different with different units.
You need one small modification:
lines(predict(fit.one.exp), col = 'red', lty=2, lwd=2)
should be
lines(exp.decay$time, predict(fit.one.exp), col = 'red', lty=2, lwd=2)
This way you make sure to plot against the desired values on your abscissa: with a single argument, lines() plots the fitted values against their index (1, 2, 3, ...) rather than against time.
I tested it like this:
data = read.csv('exp_fit_r.csv')
A0 <- max(data$count)
k0 <- 0.1
fit <- nls(data$count ~ A*exp(-k*data$time), start=list(A=A0, k=k0), data=data)
plot(data)
lines(data$time, predict(fit), col='red')
which gives me a plot where the red fitted curve tracks the data points closely. As you can see, the fit describes the actual data very well; it was just a matter of plotting against the correct abscissa values.
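One side note of my own, not part of the original answer: if the start values for nls() ever cause trouble with data like this, a common trick is to seed them from a log-linear fit (rows with zero counts are dropped, since log(0) is -Inf):
pos <- subset(exp.decay, counts > 0)            # log() needs positive counts
lmfit <- lm(log(counts) ~ time, data = pos)     # log(counts) = log(A) - k*time
start <- list(A = unname(exp(coef(lmfit)[1])),  # intercept -> A
              k = unname(-coef(lmfit)[2]))      # slope -> -k
fit.one.exp <- nls(counts ~ A*exp(-k*time), data = exp.decay, start = start)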

comparing recent averaged values to a current value in R

I am using RStudio (version 0.99.903) on a PC (Windows 8). I have a follow-up question from yesterday, as the problem became more complicated. Here is what the data looks like:
Number Trial ID Open date Enrollment rate
420 NCT00091442 9 1/28/2005 0.2
1476 NCT00301457 26 2/22/2008 1
10559 NCT01307397 34 7/28/2011 0.6
6794 NCT00948675 53 5/12/2010 0
6451 NCT00917384 53 8/17/2010 0.3
8754 NCT01168973 53 1/19/2011 0.2
8578 NCT01140347 53 12/30/2011 2.4
11655 NCT01358877 53 4/2/2012 0.3
428 NCT00091442 55 9/7/2005 0.1
112 NCT00065325 62 10/15/2003 0.2
477 NCT00091442 62 11/11/2005 0.1
16277 NCT01843374 62 12/16/2013 0.2
17386 NCT01905657 62 1/8/2014 0.6
411 NCT00091442 66 1/12/2005 0
What I need to do is compare the enrollment rate of the most current date within a given ID to the average of those values that are up to one year prior to it. For instance, for ID 53, the date of 1/19/2011 has an enrollment rate of 0.2 and I would want to compare this against the average of 8/17/2010 and 5/12/2010 enrollment rates (e.g., 0.15).
If there are no other dates within the ID prior to the current one, then the comparison should not be made. For instance, for ID 26, there would be no comparison. Similarly, for ID 53, there would be no comparison for 5/12/2010.
When I say "compare" I am not doing any analysis or visualization. I simply want a new column that takes the average value of those enrollment rates up to one year prior to the current one (I will be plotting them and percentile ranking them later). There are >20,000 data points. Any help would be much appreciated.
A verbose but possibly high-performance way of doing this: no giant for loops over all the rows of the data frame. The two sapply loops only operate on a big numeric vector, which should be relatively quick regardless of your data row count. But I'm sure someone will waltz in with a trivial dplyr solution soon enough.
The approach assumes that your data is first sorted by ID, then by Opendate. If it is not sorted, you need to sort it first.
# Find indices where the same ID appears both above and below
A = which(unlist(sapply(X = rle(df$ID)$lengths,
                        FUN = function(x) {
                          if (x == 1) return(F)
                          if (x == 2) return(c(F, F))
                          if (x >= 3) return(c(F, rep(T, x - 2), F))
                        })))
# Store the vector of dates; should speed up the code a tiny bit
V_opendate = df$Opendate
# Further filter A to indices where both neighbouring date gaps are < 365 days
B = A[sapply(A, function(x) (abs(V_opendate[x] - V_opendate[x - 1]) < 365) &
                            (abs(V_opendate[x] - V_opendate[x + 1]) < 365))]
# Return the actual indices of the rows above (x - 1) and below (x + 1)
C = sapply(B, function(x) c(x - 1, x + 1), simplify = F)
# Take the mean enrollment rate of those neighbouring rows
D = sapply(C, function(x) mean(df[x, ]$Enrollmentrate))
# Create the new column Rate and fill it with the values of D.
# You can do the comparison from here.
df[B, "Rate"] = D
Number Trial ID Opendate Enrollmentrate Rate
1 420 NCT00091442 9 2005-01-28 0.2 NA
2 1476 NCT00301457 26 2008-02-22 1.0 NA
3 10559 NCT01307397 34 2011-07-28 0.6 NA
4 6794 NCT00948675 53 2010-05-12 0.0 NA
5 6451 NCT00917384 53 2010-08-17 0.3 0.10
6 8754 NCT01168973 53 2011-01-19 0.2 1.35
7 8578 NCT01140347 53 2011-12-30 2.4 0.25
8 11655 NCT01358877 53 2012-04-02 0.3 NA
9 428 NCT00091442 55 2005-09-07 0.1 NA
10 112 NCT00065325 62 2003-10-15 0.2 NA
11 477 NCT00091442 62 2005-11-11 0.1 NA
12 16277 NCT01843374 62 2013-12-16 0.2 NA
13 17386 NCT01905657 62 2014-01-08 0.6 NA
14 411 NCT00091442 66 2005-01-12 0.0 NA
The relevant rows are calculated. You can do your comparison with the newly created Rate column.
You might have to change the code a little since I removed the spaces in the column names:
df = read.table(text = " Number Trial ID Opendate Enrollmentrate
420 NCT00091442 9 1/28/2005 0.2
1476 NCT00301457 26 2/22/2008 1
10559 NCT01307397 34 7/28/2011 0.6
6794 NCT00948675 53 5/12/2010 0
6451 NCT00917384 53 8/17/2010 0.3
8754 NCT01168973 53 1/19/2011 0.2
8578 NCT01140347 53 12/30/2011 2.4
11655 NCT01358877 53 4/2/2012 0.3
428 NCT00091442 55 9/7/2005 0.1
112 NCT00065325 62 10/15/2003 0.2
477 NCT00091442 62 11/11/2005 0.1
16277 NCT01843374 62 12/16/2013 0.2
17386 NCT01905657 62 1/8/2014 0.6
411 NCT00091442 66 1/12/2005 0", header = T)
# Convert Opendate from text to Date so the 365-day arithmetic above works
df$Opendate = as.Date(df$Opendate, format = "%m/%d/%Y")
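As the answer above anticipated, a dplyr version is possible. Here is a hedged sketch of my own (not from the original answer) that follows the question more literally, averaging only the rates from dates up to one year before each row; it assumes Opendate has already been converted to Date as above:
library(dplyr)
df %>%
  group_by(ID) %>%
  arrange(Opendate, .by_group = TRUE) %>%
  mutate(Rate = sapply(seq_along(Opendate), function(i) {
    # rates strictly before this row's date, but within 365 days of it
    prior <- Enrollmentrate[Opendate < Opendate[i] &
                            Opendate >= Opendate[i] - 365]
    if (length(prior) > 0) mean(prior) else NA_real_
  })) %>%
  ungroup()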

Adding information on a graph using R

I would like to add some information on my graph which was plotted from this data set:
EDITED:
#data set:
day <- c(0:28)
ndied <- c(342,335,240,122,74,64,49,60,51,44,35,48,41,34,38,27,29,23,20,15,20,16,17,17,14,10,4,1,2)
pdied <- c(19.1,18.7,13.4,6.8,4.1,3.6,2.7,3.3,2.8,2.5,2.0,2.7,2.3,1.9,2.1,1.5,1.6,1.3,1.1,0.8,1.1,0.9,0.9,0.9,0.8,0.6,0.2,0.1,0.1)
pmort <- data.frame(day,ndied,pdied)
> pmort
day ndied pdied
1 0 342 19.1
2 1 335 18.7
3 2 240 13.4
4 3 122 6.8
5 4 74 4.1
6 5 64 3.6
7 6 49 2.7
8 7 60 3.3
9 8 51 2.8
10 9 44 2.5
11 10 35 2.0
12 11 48 2.7
13 12 41 2.3
14 13 34 1.9
15 14 38 2.1
16 15 27 1.5
17 16 29 1.6
18 17 23 1.3
19 18 20 1.1
20 19 15 0.8
21 20 20 1.1
22 21 16 0.9
23 22 17 0.9
24 23 17 0.9
25 24 14 0.8
26 25 10 0.6
27 26 4 0.2
28 27 1 0.1
29 28 2 0.1
I have put together this script and am still trying to improve on it so that the rest of the information can be added:
> barplot(pmort$pdied,xlab="Age(days)",ylab="Percent",xlim=c(0,28),ylim=c(0,20),legend="Mortality")
I am trying to insert the numbers 0 to 28 (age in days) on the x-axis but could not, and I know it could be a simple script. Secondly, I would like to add the number that died, ndied (342 down to 2), below each day (0 to 28) along the x-axis.
Example:
0 1 2 3 4 5 and so on...
(N=342) (N=335) (N=240) (N=122) (N=74) (N=64)
Graph:
Any help would be appreciated.
Baz
Here are two ways to plot the info: one with the counts above the bars, one with them below the axis. You can tweak either to meet your needs.
# Option 1: counts above each bar
barX <- barplot(pmort$pdied, xlab="Age(days)",
                ylab="Percent", names=pmort$day,
                xlim=c(0,28), ylim=c(0,20), legend="Mortality")
text(cex=.5, x=barX, y=pmort$pdied+par("cxy")[2]/2, pmort$ndied, xpd=TRUE)

# Option 2: counts below the axis
barX <- barplot(pmort$pdied, xlab="Age(days)",
                ylab="Percent", names=pmort$day,
                xlim=c(0,28), ylim=c(0,20), legend="Mortality")
text(cex=.5, x=barX, y=-.5, pmort$ndied, xpd=TRUE)
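If you want the "(N=...)" format from the question rather than the raw counts, a small variant (my sketch, not part of the original answer) writes the labels one margin line below the day numbers with mtext():
barX <- barplot(pmort$pdied, xlab="Age(days)",
                ylab="Percent", names=pmort$day,
                xlim=c(0,28), ylim=c(0,20), legend="Mortality")
# Place "(N=...)" under each day number, on margin line 2
mtext(paste0("(N=", pmort$ndied, ")"), side=1, line=2, at=barX, cex=.5)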
