I have an imported dataframe Measurements that contains many observations from an experiment.
Measurements <- data.frame(X = 1:4,
                           Data = c(90, 85, 100, 105))
X Data
1 90
2 85
3 100
4 105
I want to add another column Condition that specifies the treatment group for each datapoint. I know which observation ranges are from which condition (e.g. observations 1:2 are from the control and observations 3:4 are from the experimental group).
I have devised two solutions already that give the desired output, but neither is ideal. First:
Measurements["Condition"] <- c(rep("Cont", 2), rep("Exp", 2))
X Data Condition
1 90 Cont
2 85 Cont
3 100 Exp
4 105 Exp
The benefit of this is that it is one line of code/one command. But it is not ideal, since I need to do the arithmetic separately (e.g. 3:4 = 2 observations), which gets tricky/unclear/indirect with larger datasets and more conditions (e.g. 47:83 = ? observations), and it is liable to propagate errors, because a small error in the length of an early assignment also shifts the assignment of every later group (e.g. if the rep count for Cont is mistakenly 1, then Exp mistakenly gets assigned to rows 2:3 too).
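A quick illustration of that failure mode:
c(rep("Cont", 1), rep("Exp", 2))
# [1] "Cont" "Exp" "Exp"   <- row 2 is mislabeled, and the vector is now too short for 4 rows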
I also thought of assigning like this, which gives the desired output too:
Measurements[1:2, "Condition"] <- "Cont"
Measurements[3:4, "Condition"] <- "Exp"
X Data Condition
1 90 Cont
2 85 Cont
3 100 Exp
4 105 Exp
This makes it more clear/simple/direct which rows will receive which assignment, but this requires separate assignments and repetition. I feel like there should be a way to "vectorize" this assignment, which is the solution I'm looking for.
I'm having trouble finding documentation of more complex indexing rules online. Here is my first intuitive guess at how to achieve this:
Measurements[c(1:2, 3:4), "Condition"] <- list("Cont", "Exp")
X Data Condition
1 90 Cont
2 85 Cont
3 100 Cont
4 105 Cont
But this doesn't work. It seems to combine 1:2 and 3:4 into a single equivalent range (1:4) and assign only the first condition to that range, which suggests I also need to specify the column again. When I try to specify the column again:
Measurements[c(1:2, 3:4), c("Condition", "Condition")] <- list("Cont", "Exp")
X Data Condition Condition.1
1 90 Cont Exp
2 85 Cont Exp
3 100 Cont Exp
4 105 Cont Exp
For some reason this creates a second new column (??), and it again seems to combine 1:2 and 3:4 into essentially 1:4. So I think I need to index the two row ranges in a way that keeps them separate and only specify the column once, but I'm stuck on how to do this. I assume the solution is simple but I can't seem to find an example of what I'm trying to do. Maybe to keep them separate I do have to assign them separately, but I'm hoping there is a way.
Can anyone help? Thank you a ton in advance from an R noobie!
If you already have a list of observations which belong to each condition you could use dplyr::case_when to do a conditional mutate. Depending on how you have this information stored you could use something like the following:
library(dplyr)
Measurements <- data.frame(X = 1:4,
                           Data = c(90, 85, 100, 105))
# set which observations belong to each condition
Cont <- 1:2
Exp <- 3:4
Measurements %>%
  mutate(Condition = case_when(
    X %in% Cont ~ "Cont",
    X %in% Exp ~ "Exp"
  ))
# X Data Condition
# 1 90 Cont
# 2 85 Cont
# 3 100 Exp
# 4 105 Exp
Note that this does not require the observations to be in consecutive rows.
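For instance, a quick sketch with interleaved (hypothetical) groups:
Cont <- c(1, 4)
Exp <- c(2, 3)
Measurements %>%
  mutate(Condition = case_when(
    X %in% Cont ~ "Cont",
    X %in% Exp ~ "Exp"
  ))
# X Data Condition
# 1 90 Cont
# 2 85 Exp
# 3 100 Exp
# 4 105 Cont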
I normally see this done with a merge operation. The trick is getting your conditions data into a nice shape.
composeConditions <- function(...) {
  conditions <- list(...)
  data.frame(
    X = unname(unlist(conditions)),
    condition = unlist(unname(lapply(
      names(conditions),
      function(x) rep(x, times = length(conditions[x][[1]]))
    )))
  )
}
conditions <- composeConditions(Cont = 1:2, Exp = 3:4)
> conditions
X condition
1 1 Cont
2 2 Cont
3 3 Exp
4 4 Exp
merge(Measurements, conditions, by = "X")
X Data condition
1 1 90 Cont
2 2 85 Cont
3 3 100 Exp
4 4 105 Exp
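If you'd rather not hand-roll the reshaping, base R's stack() builds the same lookup table from a named list; a minimal sketch:
conditions <- setNames(stack(list(Cont = 1:2, Exp = 3:4)), c("X", "condition"))
merge(Measurements, conditions, by = "X")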
An efficient approach for larger datasets is to separate the labels from the pattern that assigns them.
Measurements <- data.frame(X = 1:4, Data = c(90, 85, 100, 105))
dat <- c("Cont","Exp")
pattern <- c(1,1,2,2)
Or derive the pattern from the data itself, e.g. conditionally on Measurements$Data:
pattern <- sapply(Measurements$Data >= 100, function(x) if (x) 2 else 1)
# [1] 1 1 2 2
Then you can add the data simply by doing:
Measurements$Condition <- dat[pattern]
# X Data Condition
#1 1 90 Cont
#2 2 85 Cont
#3 3 100 Exp
#4 4 105 Exp
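If the groups are contiguous, as in the question, the pattern can also be generated from the group sizes instead of typed out; a sketch (the sizes vector is mine):
sizes <- c(Cont = 2, Exp = 2)
pattern <- rep(seq_along(sizes), times = sizes)
# [1] 1 1 2 2
Measurements$Condition <- names(sizes)[pattern]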
I'm trying to find a way to add some constraints into a linear programme to force the solution to have a certain level of uniqueness to it. I'll try to explain what I mean here. Take the example below: the linear programme returns the max possible Score for a combination of 2 males and 1 female.
Looking at the Team/Grade/Rep columns however we can see that there is a lot of duplication from row to row. In fact Shana and Jason are identical.
Name<-c("Jane","Brad","Harry","Shana","Debra","Jason")
Sex<-c("F","M","M","F","F","M")
Score<-c(25,50,36,40,39,62)
Team<-c("A","A","A","B","B","B")
Grade<-c(1,2,1,2,1,2)
Rep<-c("C","D","C","D","D","D")
df<-data.frame(Name,Sex,Score,Team,Grade,Rep)
df
Name Sex Score Team Grade Rep
1 Jane F 25 A 1 C
2 Brad M 50 A 2 D
3 Harry M 36 A 1 C
4 Shana F 40 B 2 D
5 Debra F 39 B 1 D
6 Jason M 62 B 2 D
library(Rglpk)
num <- length(df$Name)
obj<-df$Score
var.types<-rep("B",num)
matrix <- rbind(as.numeric(df$Sex == "M"),as.numeric(df$Sex == "F"))
direction <- c("==","==")
rhs<-c(2,1)
sol <- Rglpk_solve_LP(obj = obj, mat = matrix, dir = direction, rhs = rhs,types = var.types, max = TRUE)
df[sol$solution==1,]
Name Sex Score Team Grade Rep
2 Brad M 50 A 2 D
4 Shana F 40 B 2 D
6 Jason M 62 B 2 D
What I am trying to work out is how to limit the level of duplication across those last three columns. For example, I would like no more than, say, 2 columns to be the same across any two rows. This would mean that either the Shana row or the Jason row would be replaced in the model with an alternative.
I'm not sure if this is something that can be easily added into the Rglpk model? Appreciate any help that can be offered.
It sounds like you're asking how to prevent having a pair of individuals who are "too similar" from being returned by your optimization model. Once you have determined a rule for what makes a pair of people "too similar", you can simply add a constraint for each pair, limiting your solution to have no more than one of those two people.
For instance, if we use your rule of having no more than 2 columns the same, we could easily identify all pairs that we want to block:
pairs <- t(combn(nrow(df), 2))
(blocked <- pairs[rowSums(sapply(df[,c("Team", "Grade", "Rep")], function(x) {
x[pairs[,1]] == x[pairs[,2]]
})) >= 3,])
# [,1] [,2]
# [1,] 1 3
# [2,] 4 6
We want to block the pairs Jane/Harry and Shana/Jason. This is easy to do with linear constraints:
library(Rglpk)
num <- length(df$Name)
obj<-df$Score
var.types<-rep("B",num)
matrix <- rbind(as.numeric(df$Sex == "M"), as.numeric(df$Sex == "F"),
outer(blocked[,1], seq_len(num), "==") + outer(blocked[,2], seq_len(num), "=="))
direction <- rep(c("==", "<="), c(2, nrow(blocked)))
rhs<-c(2, 1, rep(1, nrow(blocked)))
sol <- Rglpk_solve_LP(obj = obj, mat = matrix, dir = direction, rhs = rhs,types = var.types, max = TRUE)
df[sol$solution==1,]
# Name Sex Score Team Grade Rep
# 2 Brad M 50 A 2 D
# 5 Debra F 39 B 1 D
# 6 Jason M 62 B 2 D
The approach of computing every pair to block is attractive because we could have a much more complicated rule for which pairs to block, since we don't need to encode the rule into the linear program. All we need to be able to do is to compute every pair that needs to be blocked.
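For example, a hypothetical stricter rule, blocking same-Team pairs whose Scores differ by less than 15, needs no change to the LP itself, only a different blocked computation:
pairs <- t(combn(nrow(df), 2))
blocked <- pairs[df$Team[pairs[, 1]] == df$Team[pairs[, 2]] &
                 abs(df$Score[pairs[, 1]] - df$Score[pairs[, 2]]) < 15, , drop = FALSE]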
For each group of rows having the same last 3 columns we construct a constraint such that at most one of those rows may appear. If a is an indicator vector of the rows of such a group then the constraint would look like this:
a'x <= 1
To do that, split the row numbers by the last 3 columns into a list of vectors s, each component of which is a vector of row numbers for rows having the same last 3 columns. Keep only those components having more than 1 row number, giving s1. In this case the first component of s1 is c(1, 3), referring to the Jane and Harry rows, and the second component is c(4, 6), referring to the Shana and Jason rows. In this particular data there were 2 rows in each of the groups, but in other data there could be more than 2 rows in a group. excl has one row (constraint) for each element of s1.
The data in the question only has groups of size 2, but in general, if there were k rows in some group, one would need k choose 2 constraint rows to ensure that only one of the k were chosen if this were done pairwise, whereas the approach here requires only one constraint row for the entire group. For example, if k = 10 then choose(10, 2) = 45, so this uses 1 constraint in place of 45.
Finally, rbind excl to matrix, giving matrix2, and adjust the other Rglpk_solve_LP arguments accordingly:
nr <- nrow(df)
s <- split(1:nr, df[4:6])
s1 <- s[lengths(s) > 1]
excl <- t(sapply(s1, "%in%", x = 1:nr)) + 0
matrix2 <- rbind(matrix, excl)
direction2 <- c(direction, rep("<=", nrow(excl)))
rhs2 <- c(rhs, rep(1, nrow(excl)))
sol2 <- Rglpk_solve_LP(obj = obj, mat = matrix2,
dir = direction2, rhs = rhs2, types = "B", max = TRUE)
df[ sol2$solution == 1, ]
giving:
Name Sex Score Team Grade Rep
2 Brad M 50 A 2 D
5 Debra F 39 B 1 D
6 Jason M 62 B 2 D
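As a quick sanity check (a sketch), you can verify that no two selected rows share all of Team, Grade and Rep:
sel <- df[sol2$solution == 1, c("Team", "Grade", "Rep")]
any(duplicated(sel))
# [1] FALSE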
I am trying to check the value of one variable, and if it meets a certain condition, a new variable gets set to 1; otherwise it gets set to 0.
I am having difficulty with this in R.
This simple code does not work:
attach(data)
if (Drug = 1) {
  Drug_factor <- 0
} else {
  if (Drug = 2) {
    Drug_factor <- 1
  } else Drug_factor <- 0
}
I do not understand why this will not work.
Why does R use such complicated conventions for doing basic stuff ?
You can either use ifelse
Data$Drug_factor <- with(Data, ifelse(Drug==1, 0, 1))
Or use the factor approach
Data$Drug_factor <- with(Data, as.numeric(as.character(factor(Drug,
levels=1:2, labels=0:1))))
Or
Data$Drug_factor <- c(0,1)[(Data$Drug==2)+1]
Or even shorter, assuming that 'Drug' is numeric:
Data$Drug_factor <- c(0,1)[Data$Drug]
All these cases assume that there are only two unique elements in 'Drug'.
Suppose you have more than 2 unique elements in 'Drug'. From the code, it seems to me that only when 'Drug==2' should the value returned be 1. Creating another value in 'Drug':
Data$Drug[4] <- 3
In this case, we can change the ifelse condition so that when 'Drug' is 2 it returns 1, and 0 for all others.
Data$Drug_factor <- with(Data, ifelse(Drug==2, 1, 0))
A similar option by indexing is,
Data$Drug_factor <- c(0,1)[(Data$Drug==2)+1]
data
set.seed(24)
Data <- data.frame(Drug= sample(1:2, 10, replace=TRUE), val=rnorm(10))
There are two different kinds of problem here.
In the simple case, you want to change a small number of values to some other value. For this purpose, I find that using mapvalues() from plyr is a good solution. For example:
#lets pretend we have loaded some data where missing data is coded as 99
set.seed(1) #reproducible results
test_data = sample(c(0:5, 99), size = 1000, replace = T)
# table of our data
table(test_data)
Output:
test_data
0 1 2 3 4 5 99
138 145 150 150 127 142 148
Recode:
#recode 99 to NA
library(plyr)
test_data_noNA = mapvalues(test_data, 99, NA)
table(test_data_noNA, exclude = NULL) #also count NAs
Output:
test_data_noNA
0 1 2 3 4 5 <NA>
138 145 150 150 127 142 148
In the other case, you want to conditionally change values to some other value, but there is a large/indefinite/infinite number of values it could be.
Example:
#continuous data
set.seed(1) #reproducible results
test_data = rnorm(1000) #normally distributed data
hist(test_data) #plot with histogram
However, let's say we want to deal with outliers, which we define as beyond 2 SD from the mean. We don't just want to exclude them, so instead we will recode them.
# change values above 2 to 2
test_data[test_data > 2] = 2
# change values below -2 to -2
test_data[test_data < -2] = -2
hist(test_data) #plot with histogram
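As an aside, the same two-sided clipping can be written in one pass with pmin()/pmax():
test_data <- pmax(pmin(test_data, 2), -2)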
Horrible title question, but this is what I am trying to achieve. For Table1 I want to add the column "BETWEEN", verifying whether the "POSITION" falls between any of the "START" and "STOP" values for the corresponding "BIN" in Table2.
Table1. BIN names (character) and POSITION in BIN (numeric):
BIN POSITION
1 12
1 52
1 86
7 6
7 22
X 112
X 139
MT 3
MT 26
Table2: BIN name (character) and START and STOP positions (numeric)
BIN START STOP
1 2 64
1 90 110
7 20 100
7 105 200
X 1 5
MT 1 1000
And the desired result - Table 1 with "BETWEEN":
BIN POSITION BETWEEN
1 12 TRUE
1 52 TRUE
1 86 FALSE
7 6 FALSE
7 22 TRUE
X 112 FALSE
X 139 FALSE
MT 3 TRUE
MT 26 TRUE
My Table 1 has about 4,000,000 rows, and Table 2 about 500,000 rows, and anything I came up with was very slow.
As an example of bigger tables, use the following:
positions <- seq(1,100000,10)
bins <- c("A","B","C","D","E","F","G","H","I","J")
tab1 <- data.table(bin = rep(bins, 1, each = length(positions)), pos = rep(positions, 10))
tab2 <- data.table(bin = rep(bins, 1, each = 2000), start = seq(5, 100000, 50))
tab2[, stop := start + 25]  # stop can't reference start inside the data.table() call itself
The desired output would be:
tab1
bin pos between
1: A 1 FALSE
2: A 11 TRUE
3: A 21 TRUE
4: A 31 FALSE
5: A 41 FALSE
The following method requires that, for a given bin, the bounds are mutually exclusive (e.g. you can't have bin A with bounds 1-5 and another bin A with bounds 4-8). Also, I modified your example a bit.
positions <- seq(1,100000,10)
bins <- c("A","B","C","D","E","F","G","H","I","J")
tab1 <- data.table(bin = rep(bins,1,each=length(positions)), pos = rep(positions,10))
setkey(tab1,"bin","pos")
tab2 <- data.table(bin = rep(bins, 1, each = 2000), start = seq(5, 100000, 50))
tab2[, end := start + 25]
tab2[, pos := start]
setkey(tab2, "bin", "pos")
x <- tab2[tab1, roll = TRUE, nomatch = 0]
tab2[, pos := end]
setkey(tab2, "bin", "pos")
y <- tab2[tab1, roll = -Inf, nomatch = 0]
setkey(x, "bin", "pos", "start")
setkey(y, "bin", "pos", "start")
inBin <- x[y, nomatch = 0]
inBin[, between := TRUE]
setkey(tab1, "bin", "pos")
setkey(inBin, "bin", "pos")
result <- inBin[, list(bin, pos, between)][tab1]
result[is.na(between), between := FALSE]
I don't have the time to explain my solution in depth right now. Instead I'll take the cheap way out and refer you to research the roll parameter of data.table. The basic methodology above is that I'm joining tab1 and tab2, rolling pos forward to the nearest end bound. Then I join tab1 and tab2, rolling pos backward to the nearest start bound. Then I do an inner join on those two sets, giving me all rows in tab1 which fall inside the bounds of a bin. From that point, it's just grunt work.
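As an aside, newer versions of data.table (1.9.8+) also support non-equi joins, which can express the containment test directly; a minimal sketch, assuming tab1 (bin/pos) and tab2 (bin/start/end) as built above:
idx <- tab2[tab1, on = .(bin, start <= pos, end >= pos),
            which = TRUE, mult = "first"]
tab1[, between := !is.na(idx)]  # NA index means pos fell inside no bin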
The most straightforward approach is to nest the matching loops, I think. You may need to handle factors slightly differently. I haven't tested what happens if it does not find a bin match.
BIN <- c("1","1","1","7","7","X","X","MT","MT")
POSITION <- c(12,52,86,6,22,112,139,3,26)
npos <- length(POSITION)
BETWEEN <- vector(mode="logical", length=npos)
tab1 <- as.data.frame(cbind(BIN,POSITION))
BIN2 <- c("1","1","7","7","X","MT")
START <- c(2,90,20,105,1,1)
STOP <- c(64,110,100,200,5,1000)
tab2 <- as.data.frame(cbind(BIN2,START,STOP))
bins <- unique(tab1$BIN)
for(bin in bins){
  #print(paste("bin=",bin))
  t1.bin.matches <- which(tab1$BIN==bin)
  t2.bin.compares <- which(tab2$BIN2==bin)
  #print(t1.bin.matches)
  #print(t2.bin.compares)
  for(match in t1.bin.matches){
    between = FALSE
    candidate = as.numeric(as.vector(tab1$POSITION)[match])
    for(compare in t2.bin.compares){
      comp.start <- as.numeric(as.vector(tab2$START)[compare])
      comp.stop <- as.numeric(as.vector(tab2$STOP)[compare])
      if(candidate >= comp.start && candidate <= comp.stop){
        between = TRUE
        break
      }
    }
    #print(paste(comp.start,candidate,comp.stop,between))
    BETWEEN[match] = between
  }
}
tab1 <- as.data.frame(cbind(tab1,BETWEEN))
tab1
Make sure your BIN columns are character and POSITION, START, STOP are numeric.
Table1$BIN = as.character(Table1$BIN)
Table1$POSITION = as.numeric(Table1$POSITION)
Table2$BIN = as.character(Table2$BIN)
Table2$START = as.numeric(Table2$START)
Table2$STOP = as.numeric(Table2$STOP)
Convert your data.frames with library(data.table), because the code below might otherwise be slow.
Table1 = as.data.table(Table1)
Table2 = as.data.table(Table2)
Generate desired output
z = apply(Table1, 1, function(x) {nrow(Table2[(as.numeric(x[2])>START) & (as.numeric(x[2])<STOP) & (BIN == as.character(x[1])),])>0})
cbind(Table1, z)
Old function is z(), new is y(). With the sample Table1, Table2, the new function is 30% faster. I don't know how this advantage will scale as nrow increases, but I'm guessing this scaling will be very positive. Let me know.
z = function(a){apply(Table1, 1, function(x) {z = subset(Table2, Table2$BIN == as.character(x[1]))
any(as.numeric(x[2])>z$START & as.numeric(x[2])<z$STOP)})}
y = function(a){apply(Table1, 1, function(x) {nrow(Table2[(as.numeric(x[2])>START) & (as.numeric(x[2])<STOP) & (BIN == as.character(x[1])),])>0})}
microbenchmark(z(), y(), times = 1000L)
expr min lq median uq max neval
z() 1168.283 1219.793 1237.791 1276.267 3481.576 1000
y() 809.575 848.052 863.257 885.909 1683.383 1000
edit: you might need to muck with the as.numeric and as.character in the subsetting. I lost the data.table I created earlier and directly used the data.frame from the answer above.
I am trying to use a huge dataframe (180000 x 400) to calculate another one that would be much smaller.
I have the following dataframe
df1 = data.frame(LOCAT = c(1,2,3,4,5,6),
                 START = c(120,345,765,1045,1347,1879),
                 END = c(150,390,802,1120,1436,1935),
                 CODE1 = c(1,1,0,1,0,0),
                 CODE2 = c(1,0,0,0,-1,-1))
df1
LOCAT START END CODE1 CODE2
1 1 120 150 1 1
2 2 345 390 1 0
3 3 765 802 0 0
4 4 1045 1120 1 0
5 5 1347 1436 0 -1
6 6 1879 1935 0 -1
This is a sample dataframe. The rows continue until 180000 and the columns are over 400.
What I need to do is create a new dataframe, based on each column, that tells me the size of each continuous run of "1" or "-1" and returns it with the location, size and value.
Something like this for CODE1:
LOCAT SIZE VALUE
1 1 to 2 270 POS
2 4 to 4 75 POS
And like this for CODE2:
LOCAT SIZE VALUE
1 1 to 1 30 POS
2 5 to 6 588 NEG
Unfortunately I still haven't figured out how to do this. I have been trying several lines of code to develop a function that does this automatically, but I get lost or stuck in loops and nothing seems to work.
Any help would be appreciated.
Thanks in advance
Below is code that gives you the answer in the exact format that you wanted, except I split your "LOCAT" column into two columns entitled "Starts" and "Stops". This code will work for your entire data frame, no need to replicate it manually for each CODE (CODE1, CODE2, etc).
It assumes that the only non-CODE columns have the names "LOCAT", "START" and "END".
# need package "plyr"
library("plyr")
# test2 is the example data frame that you gave in the question
test2 <- data.frame(
"LOCAT"=1:6,
"START"=c(120,345,765, 1045, 1347, 1879),
"END"=c(150,390,803,1120,1436, 1935),
"CODE1"=c(1,1,0,1,0,0),
"CODE2"=c(1,0,0,0,-1,-1)
)
codeNames <- names(test2)[!names(test2)%in%c("LOCAT","START","END")] # the names of columns that correspond to different codes
test3 <- reshape(test2, varying=codeNames, direction="long", v.names="CodeValue", timevar="Code") # reshape so the different codes are variables grouped into the same column
test4 <- test3[,!names(test3)%in%"id"] #remove the "id" column
sss <- function(x){ # sss gives the starting points, stopping points, and sizes (sss) in a data frame
rleX <- rle(x[,"CodeValue"]) # rle() to get the size of consecutive values
stops <- cumsum(rleX$lengths) # cumulative sum to get the end-points for the indices (the second value in your LOCAT column)
starts <- c(1, head(stops,-1)+1) # the starts are the first value in your LOCAT column
ssX0 <- data.frame("Value"=rleX$values, "Starts"=starts, "Stops"=stops) #the starts and stops from X (ss from X)
ssX <- ssX0[ssX0[,"Value"]!=0,] # remove the rows that correspond to CODE_ values that are 0 (not POS or NEG)
# The next 3 lines calculate the equivalent of your SIZE column
sizeX1 <- x[ssX[,"Starts"],"START"]
sizeX2 <- x[ssX[,"Stops"],"END"]
sizeX <- sizeX2 - sizeX1
sssX <- data.frame(ssX, "Size"=sizeX) # Combine the Size to the ssX (start stop of X) data frame
return(sssX) #Added in EDIT
}
answer0 <- ddply(.data=test4, .variables="Code", .fun=sss) # use the function ddply() in the package "plyr" (apply the function to each CODE, why we reshaped)
answer <- answer0 # duplicate the original, new version will be reformatted
answer[,"Value"] <- c("NEG",NA,"POS")[answer0[,"Value"]+2] # reformat slightly so that we have POS/NEG instead of 1/-1
Hopefully this helps, good luck!
Use run-length encoding to determine groups where CODE1 takes the same value.
rle_of_CODE1 <- rle(df1$CODE1)
For convenience, find the points where the value is non-zero, and the lengths of the corresponding blocks.
CODE1_is_nonzero <- rle_of_CODE1$values != 0
n <- rle_of_CODE1$lengths[CODE1_is_nonzero]
Ignore the parts of df1 where CODE1 is zero.
df1_with_nonzero_CODE1 <- subset(df1, CODE1 != 0)
Define a group based on the contiguous blocks we found with rle.
df1_with_nonzero_CODE1$GROUP <- rep(which(CODE1_is_nonzero), times = n)
Use ddply to get summary stats for each group.
library(plyr)
summarised_by_CODE1 <- ddply(
  df1_with_nonzero_CODE1,
  .(GROUP),
  summarise,
  MinOfLOCAT = min(LOCAT),
  MaxOfLOCAT = max(LOCAT),
  SIZE = max(END) - min(START)
)
summarised_by_CODE1$VALUE <- ifelse(
  rle_of_CODE1$values[CODE1_is_nonzero] == 1,
  "POS",
  "NEG"
)
summarised_by_CODE1
## GROUP MinOfLOCAT MaxOfLOCAT SIZE VALUE
## 1 1 1 2 270 POS
## 2 3 4 4 75 POS
Now repeat with CODE2.
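To avoid repeating these steps by hand for every code column, the recipe can be wrapped in a function and applied over the column names; a sketch (summarise_code is a name I made up):
summarise_code <- function(colname, data) {
  r <- rle(data[[colname]])
  nonzero <- r$values != 0
  n <- r$lengths[nonzero]
  d <- data[data[[colname]] != 0, ]
  d$GROUP <- rep(which(nonzero), times = n)
  out <- ddply(d, .(GROUP), summarise,
               MinOfLOCAT = min(LOCAT),
               MaxOfLOCAT = max(LOCAT),
               SIZE = max(END) - min(START))
  out$VALUE <- ifelse(r$values[nonzero] == 1, "POS", "NEG")
  out
}
lapply(c("CODE1", "CODE2"), summarise_code, data = df1)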
I have a data frame consisting of about 22 fields, some system ids and some measurements, such as
bsystemid dcesystemid lengthdecimal heightquantity
2218 58 22 263
2219 58 22 197
2220 58 22 241
What I want:
1. loop through a list of field ids
2. define a function to test for a condition
3. such that both x and y can vary
Where does the definition of y belong if both x and y are to vary? Or is some other structure needed?
This code block works for a single field and value of y:
varlist4<-names(brg) [c(6)]
f1<-(function(x,y) count(brg[,x]<y) )
lapply(varlist4, f1, y=c(7.5))
This code block executes, but the counts are off:
varlist4<-names(brg) [c(6,8,10,12)]
f1<-(function(x,y) count(brg[,x]<y) )
lapply(varlist4, f1, y=c(7.5,130,150,0))
For example,
varlist4<-names(brg) [c(6)]
f1<-(function(x,y) count(brg[,x]<y) )
lapply(varlist4, f1, y=c(7.5))
returns (correctly),
x freq
1 FALSE 9490
2 TRUE 309
3 NA 41
whereas the multiple x,y block of code above returns this for the first case,
x freq
1 FALSE 4828
2 TRUE 4971
3 NA 41
Thanks for any comments.
Update:
What I would like is to automate counting the occurrences of values in specified fields of a df that meet some condition. The conditions are numeric constants or text strings, one for each field. For example, I might want to count occurrences meeting the condition >360 in field1, >0 in field2, etc. What I thus mean by allowing x and y to vary is reading x and y vectors, with the field names and corresponding conditions, into a looping structure.
I'd like to automate this task because it involves around 30 tables, each with up to 50 or so fields. And I'll need to do it twice, scanning once for values exceeding a maximum and once for values less than a minimum. Better still might be loading the conditions into a table and referencing that in the loop. That may be the next step but I'd like to understand this piece first.
This working example
t1<-18:29
t2<-c(76.1,77,78.1,78.2,78.8,79.7,79.9,81.1,81.2,81.8,82.8,83.5)
t3<-c(1.2,-0.2,-0.3,1.2, 2.2,0.4,0.6,0.4,-0.8,-0.1,5.0,3.1)
t<-data.frame(v1=t1,v2=t2,v3=t3)
varlist<-names(t) [c(1)]
f1<-(function(x,y) count(t[,x]>y) )
lapply(varlist, f1, y=c(27))
illustrates the correct answer for the first field, returning
x freq
1 FALSE 10
2 TRUE 2
But if I add in other fields and the corresponding conditions (the y's) I get something different for the first case:
varlist<-names(t) [c(1,2,3)]
f1<-(function(x,y) count(t[,x]>y) )
lapply(varlist, f1, y=c(27,83,3))
[[1]]
x freq
1 FALSE 8
2 TRUE 4
[[2]]
x freq
1 FALSE 1
2 TRUE 11
[[3]]
x freq
1 FALSE 11
2 TRUE 1
My sense is I'm not going about structuring the y part correctly.
Thanks for any comments.
You can use mapply. Let's create some data:
set.seed(123) # to get exactly the same results
brg = data.frame(x = rnorm(100), y=rnorm(100), z=rnorm(100))
brg$x[c(10, 15)] = NA # some NAs
brg$y[c(12, 21)] = NA # more NAs
Then you need to define the function to do the job. The inner function .f1 counts the data and ensures there are always three levels (TRUE, FALSE, NA). Then f1 uses .f1 inside mapply so that both x and y can vary. Finally, there are some improvements to the output (clearer column names).
f1 = function(x, y, data) {
  .f1 = function(x, y, data) {
    out = factor(data[, x] < y,
                 levels = c("TRUE", "FALSE", NA), exclude = NULL)
    return(table(out))
  }
  out = mapply(.f1, x, y, MoreArgs = list(data = data))  # check ?mapply
  colnames(out) = paste0(x, "<", y)  # clearer names for the output
  return(out)
}
Finally, the test:
varlist = names(brg)
threshold = c(0, 1, 1000)
f1(x=varlist, y=threshold, data=brg)
And you should get
x<0 y<1 z<1000
TRUE 46 87 100
FALSE 52 11 0
<NA> 2 2 0
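This also fits the idea from your update of loading the conditions into a table: keep the field names and thresholds as columns and pass them straight to f1. A sketch, assuming the same brg data:
cond <- data.frame(field = c("x", "y", "z"),
                   max = c(0, 1, 1000),
                   stringsAsFactors = FALSE)
f1(x = cond$field, y = cond$max, data = brg)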