I have a question about filtering in my dataset. My dataset looks like this:
PROJECT FREQ
1 <NA> NA
2 <NA> NA
3 FSHD 0.01282051
4 <NA> NA
5 <NA> NA
6 GROEI,CMS 0.02564103
7 <NA> NA
8 GROEI 0.00000132
9 <NA> NA
10 NMD,BRCA 0.03846154
Here is my problem: I want to throw away all the rows that don't have GROEI in the PROJECT field and a FREQ bigger than 0.01.
I thought about something like this, but this isn't the way..
a1 <- a[!(a$PROJECT != "GROEI" & a$FREQ >= 0.01), ]
Can anyone help me with this?
Thanks!
Since you want to match on a partial string, you can use grepl to match a regular expression with your data:
na.omit(a[!grepl("GROEI", a$PROJECT), ])
n PROJECT FREQ
3 3 FSHD 0.01282051
10 10 NMD,BRCA 0.03846154
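If the FREQ cut-off also needs to be enforced explicitly, one possibility (assuming the frequency column really is called FREQ, as in the printed data, and that the goal is to keep only rows whose PROJECT does not contain GROEI and whose FREQ is above 0.01, which matches the output above) is to combine the grepl() test with a numeric comparison:
# keep non-GROEI rows with FREQ above the cut-off; the FREQ test also drops the NA rows
a1 <- a[!grepl("GROEI", a$PROJECT) & !is.na(a$FREQ) & a$FREQ > 0.01, ]
This keeps rows 3 and 10 without needing na.omit().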
I am trying to delete some rows according to a filter.
clean <-function(z){
f <- z[!(z$"Current_status" == "T" & z$"Start_date" == 2012),]
return(f)
}
It worked on all data frames but one. On this one, the rows I want to delete were fully emptied (NA value in each column), but the rows themselves remain. This is what I get:
Current_status Start_date
1 O 2005
2 O 2004
3 O 2004
4 O 2002
5 O 2002
NA <NA> NA
NA.1 <NA> NA
8 O 0
9 O 0
10 O 0
11 O 0
NA.2 <NA> NA
I tried several methods but none worked.
My hypothesis is that the problem is due to the fact that the row numbers changed and also became NA.
How could I get rid of these rows?
Many thanks!
You could subset with the help of is.na():
f <- f[!is.na(f$Current_status) & !is.na(f$Start_date), ]
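If you would rather handle the NAs inside clean() itself, one option (a sketch reusing the column names from the question) is subset(), which drops rows where the condition evaluates to NA instead of turning them into all-NA rows:
clean <- function(z) {
  # subset() treats an NA condition as FALSE, so no empty rows are produced
  subset(z, !(Current_status == "T" & Start_date == 2012))
}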
I have these columns:
text.NANA text.22 text.32
1 Female RNDM_MXN95.tif No NA
12 Male RNDM_QOS38.tif No NA
13 Female RNDM_WQW90.tif No NA
14 Male RNDM_BKD94.tif No NA
15 Male RNDM_LGD67.tif No NA
16 Female RNDM_AFP45.tif No NA
I want to create a column that contains only the barcode, which starts with RNDM_ and ends with .tif, but without the .tif itself. The tricky part is getting rid of the gender information that is in the same column. There is a random number of spaces between the gender information and the RNDM_:
text.NANA text.22 text.32 BARCODE
1 Female RNDM_MXN95.tif No NA RNDM_MXN95
12 Male RNDM_QOS38.tif No NA RNDM_QOS38
13 Female RNDM_WQW90.tif No NA RNDM_WQW90
14 Male RNDM_BKD94.tif No NA RNDM_BKD94
15 Male RNDM_LGD67.tif No NA RNDM_LGD67
16 Female RNDM_AFP45.tif No NA RNDM_AFP45
I made a very poor attempt with this, but it didn't work:
dfrm$BARCODE <- regexpr("RNDM_", dfrm$text.NANA)
# [1] 8 6 9 7 7 8 9 9 8 8 9 9 6 6 7 8 9 8
# attr(,"match.length")
# [1] 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5
# attr(,"useBytes")
# [1] TRUE
Please help. Thanks!
So you just want to remove the file extension? Use file_path_sans_ext() from the tools package:
dfrm$BARCODE = tools::file_path_sans_ext(dfrm$text.NANA)
If there’s more stuff in front, you can use the following regular expression to extract just the suffix:
dfrm$BARCODE = stringr::str_match(dfrm$text.NANA, '(RNDM_.*)\\.tif')[, 2]
Note that I'm using the {stringr} package here because the base R functions for extracting regex matches (regmatches() with regexpr()) are clumsy by comparison.
I strongly recommend against using strsplit here because it’s underspecified: from reading the code it’s absolutely not clear what the purpose of that code is. Write code that is self-explanatory, not code that requires explanation in a comment.
You can use sapply() and strsplit() to do it easily, let me show you:
sapply(strsplit(dfrm$text.NANA, "_"),"[", 1)
That should work.
Edit: splitting on "_" leaves the gender prefix attached to the first piece, so split on runs of spaces and dots instead and take the second piece:
sapply(strsplit(dfrm$text.NANA, "[ .]+"), "[", 2)
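To fill the BARCODE column shown in the question, the result can be assigned directly (a usage sketch, assuming the column names above):
dfrm$BARCODE <- sapply(strsplit(dfrm$text.NANA, "[ .]+"), "[", 2)
dfrm$BARCODE
# [1] "RNDM_MXN95" "RNDM_QOS38" "RNDM_WQW90" "RNDM_BKD94" "RNDM_LGD67" "RNDM_AFP45"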
I am new to R and struggling to understand its quirks. I'm trying to do something which should be really simple, but is turning out to be apparently very complicated.
I am used to Excel, SQL and Minitab, where you can enter a value in one column which includes references to other columns and parameters. However, R doesn't seem to be allowing me to do this.
I have a table with (currently) four columns:
Date Pallets Lt Tt
1 28/12/2011 491 NA NA
2 29/12/2011 385 NA 0.787890411
3 30/12/2011 662 NA NA
4 31/12/2011 28 NA NA
5 01/01/2012 46 NA NA
6 02/01/2012 403 NA NA
7 03/01/2012 282 NA NA
8 04/01/2012 315 NA NA
9 05/01/2012 327 NA NA
10 06/01/2012 458 NA NA
and have a parameter "beta", with a value which I have assigned as 0.0002.
All I want to do is assign a formula to rows 3:10 which is:
Tt[t] = beta*(Pallets[t] - Pallets[t-1]) + (1-beta)*Tt[t-1]
I thought that the appropriate code might be:
Table[3:10,4]<-beta*(Table[3:10,"Pallets"]-Table[2:9,"Pallets"])+(1-beta)*Table[2:9,"Tt"]
However, this doesn't work. The first time I enter this formula, it generates:
Date Pallets Lt Tt
1 28/12/2011 491 NA NA
2 29/12/2011 385 NA 0.7878904
3 30/12/2011 662 NA 0.8431328
4 31/12/2011 28 NA NA
5 01/01/2012 46 NA NA
6 02/01/2012 403 NA NA
7 03/01/2012 282 NA NA
8 04/01/2012 315 NA NA
9 05/01/2012 327 NA NA
10 06/01/2012 458 NA NA
So it's generated the correct answer for the second item in the series, but not for any of the subsequent values.
It seems as though R doesn't automatically update each row, and the relationship to each other row, when you enter a formula, as Excel does. Having said that, Excel actually would require me to enter the formula in cell [4,Tt], and then drag this down to all of the other cells. Perhaps R is the same, and there is an equivalent to "dragging down" which I need to do?
Finally, I also noticed that when I change the value of the beta parameter, e.g. with beta<-0.5, and then print the Table values again, they are unchanged - so the table hasn't updated even though I have changed the value of the parameter.
Appreciate that these are basic questions, but I am very new to R.
In R, the computations are not made "cell by cell", but are vectorised - in your example, R takes the vectors Table[3:10,"Pallets"], Table[2:9,"Pallets"] and Table[2:9,"Tt"] as they are at the moment, computes the resulting vector, and finally assigns it to Table[3:10,4].
If you want to make some computations "cell by cell", you have to use a for loop:
beta <- 0.5
df <- data.frame(v1 = 1:12, v2 = 0)
for (i in 3:10) {
df[i, "v2"] <- beta * (df[i, "v1"] - df[i-1, "v1"]) + (1 - beta) * df[i-1, "v2"]
}
df
v1 v2
1 1 0.0000000
2 2 0.0000000
3 3 0.5000000
4 4 0.7500000
5 5 0.8750000
6 6 0.9375000
7 7 0.9687500
8 8 0.9843750
9 9 0.9921875
10 10 0.9960938
11 11 0.0000000
12 12 0.0000000
As for your second question: R will never update any values on its own (imagine having set manual calculation in Excel), so you need to repeat the computations after changing beta.
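For completeness, the same recursion can also be computed without an explicit loop by folding over the per-row differences with Reduce(accumulate = TRUE); this is just a sketch against the toy data frame above and reproduces the v2 column shown:
d <- c(0, diff(df$v1))          # per-row differences, with 0 as a placeholder for row 1
df$v2[3:10] <- Reduce(
  function(prev, delta) beta * delta + (1 - beta) * prev,
  d[3:10],
  init = 0,                     # the value of v2 in row 2
  accumulate = TRUE
)[-1]                           # drop the init element that accumulate = TRUE returns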
Although it's generally bad design, you can iterate over the rows in a loop:
Table$temp <- c(0, diff(Table$Pallets, 1))
prevTt = 0
for (i in 1:10)
{
  Table$Tt[i] = Table$temp[i] * beta + (1 - beta) * prevTt
  prevTt = Table$Tt[i]
}
Table$temp <- NULL
This is my data frame:
ID <- c('TZ1','TZ2','TZ3','TZ4')
hr <- c(56,32,38,NA)
cr <- c(1,4,5,2)
data <- data.frame(ID,hr,cr)
ID hr cr
1 TZ1 56 1
2 TZ2 32 4
3 TZ3 38 5
4 TZ4 NA 2
I want to remove the rows where data$hr = 56. This is what I want the end product to be:
ID hr cr
2 TZ2 32 4
3 TZ3 38 5
4 TZ4 NA 2
This is what I thought would work:
data = data[data$hr !=56,]
However the resulting data frame looks like this:
ID hr cr
2 TZ2 32 4
3 TZ3 38 5
NA <NA> NA NA
How can I modify my code to incorporate the NA value so this doesn't happen? Thank you for your help, I can't figure it out.
EDIT: I also want to keep the NA value in the data frame.
The issue is that == or != returns NA when the value being compared is NA, and indexing with an NA produces an all-NA row for that element. So one way to build a logical index with only TRUE/FALSE values is to also use is.na in the comparison.
data[!(data$hr==56 & !is.na(data$hr)),]
# ID hr cr
#2 TZ2 32 4
#3 TZ3 38 5
#4 TZ4 NA 2
We could also apply the reverse logic
subset(data, hr!=56|is.na(hr))
# ID hr cr
#2 TZ2 32 4
#3 TZ3 38 5
#4 TZ4 NA 2
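Another compact option is %in%, which never returns NA, so the NA row is kept automatically (using the same data as above):
data[!(data$hr %in% 56), ]
# ID hr cr
#2 TZ2 32 4
#3 TZ3 38 5
#4 TZ4 NA 2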
I'm having trouble with the na.spline() function in the zoo package. Although the documentation explicitly states that this is an interpolation function, the behaviour I'm getting includes extrapolation.
The following code reproduces the problem:
require(zoo)
vector <- c(NA,NA,NA,NA,NA,NA,5,NA,7,8,NA,NA)
na.spline(vector)
The output of this should be:
NA NA NA NA NA NA 5 6 7 8 NA NA
This would be interpolation of the internal NA, leaving the leading and trailing NAs in place. But, instead I get:
-1 0 1 2 3 4 5 6 7 8 9 10
According to the documentation, this shouldn't happen. Is there some way to avoid extrapolation?
I recognise that in my example I could use linear interpolation, but this is an MWE. Although I'm not necessarily wedded to the na.spline() function, I need some way to interpolate using cubic splines.
This behavior appears to be coming from the stats::spline function, e.g.,
spline(seq_along(vector), vector, xout=seq_along(vector))$y
# [1] -1 0 1 2 3 4 5 6 7 8 9 10
Here is a workaround, using the fact that na.approx strictly interpolates.
replace(na.spline(vector), is.na(na.approx(vector, na.rm=FALSE)), NA)
# [1] NA NA NA NA NA NA 5 6 7 8 NA NA
Edit
As @G.Grothendieck suggests in the comments, another, no doubt more performant, way is:
na.spline(vector) + 0*na.approx(vector, na.rm = FALSE)
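If this is needed more than once, the workaround can be wrapped in a small helper (a sketch; the function name is made up and it only reuses the zoo calls already shown):
library(zoo)
spline_fill_internal <- function(x) {
  # na.approx() with na.rm = FALSE fills only internal NAs, so adding 0 times its
  # result to the spline fit re-blanks the leading and trailing NAs
  na.spline(x) + 0 * na.approx(x, na.rm = FALSE)
}
spline_fill_internal(vector)
# [1] NA NA NA NA NA NA  5  6  7  8 NA NA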