I am working on subsetting multiple variables in a dataset to remove data points that are not useful. When I enter the subset command for the first variable and check the dataset, the variable has been properly subset. However, after doing the same with the second variable, the first is no longer subset in the dataset; it seems as though the second subset command is overriding the first. In the example I came up with below, the first variable (Height) is no longer subset once I subset the second variable (Weight). Any thoughts on how to resolve this?
rTestDataSet = TestDataSet
rTestDataSet = subset(TestDataSet, TestDataSet$Height < 4)
rTestDataSet = subset(TestDataSet, TestDataSet$Weight < 3)
You are applying both subsets to the original data. What you need to do is apply one subset, save the result to a variable, and then apply the second subset to that new variable. Also, as already pointed out, you don't need the $ inside subset().
Try this:
Make some reproducible data:
set.seed(50)
TestDataSet <- data.frame(Height = sample(1:10, 30, replace = TRUE), Weight = sample(1:10, 30, replace = TRUE))
rTestDataSet = TestDataSet
rTestDataSet = subset(rTestDataSet, Height < 4)
rTestDataSet
Height Weight
3 3 5
6 1 7
9 1 4
10 2 5
12 3 9
14 1 1
15 3 1
19 1 8
20 2 9
22 2 8
28 3 6
rTestDataSet = subset(rTestDataSet, Weight < 3)
rTestDataSet
Height Weight
14 1 1
15 3 1
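If these are the only two conditions you need, you can also apply them in a single base R call; this gives the same rows as chaining the two subsets:
rTestDataSet <- subset(TestDataSet, Height < 4 & Weight < 3)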
Why not use the tidyverse? Chain the operations together to build up your own logic. Instead of subset, you can use filter to keep rows conditionally:
library(tidyverse)
TestDataSet %>%
  filter(Height < 4) %>%
  filter(Weight < 3)
or
TestDataSet %>%
  filter(Height < 4 & Weight < 3)
I'd like to make a data frame using only the last computed values from a Repeat loop.
For the repeat and sample functions, I'm using this data. The numbers in the Prob column are the probabilities of each number occurring.
[image of the sample data frame a, with columns Plus and Prob, omitted]
b <- 1
repeat {
  c <- sample(a$Plus, size = 1, prob = a$Prob)
  cat(b, '\t', c, '\n')
  b <- b + 1
  if (c >= 10) {
    break
  }
}
# I'm only interested in results of 10 or greater
If I run the code above, then it will compute something like
1 4
2 8
3 13
If I run it again, it will produce different results, like:
1 9
2 3
3 7
4 3
5 11
What I'd like to do is make a data frame using only the last output of each run of the loop.
For example, using the computed data above, I'd like to make a frame that looks like
Trial Result
3 13
5 11
Is there any way to repeat this loop as many times as I want and build a data frame from only the last output of each repetition?
You can use a user-defined function to do this. Since you haven't given your data frame a, I've defined it as follows:
library(tidyverse)
a <- tibble(
  Plus = 1:15,
  Prob = seq(from = 15, to = 1, by = -1)
)
The following function does the same thing as your repeat loop, but stores the relevant results in a tibble. I've left your variable b out of it because, as far as I can see, it doesn't contribute to your desired output.
samplefun <- function(a) {
  # Draw a weighted permutation of Plus, then locate the first value >= 10
  c <- sample(a$Plus, size = length(a$Plus), prob = a$Prob)
  res <- tibble(
    Trial = which(c >= 10)[1],      # position of the first draw >= 10
    Result = c[which(c >= 10)[1]]   # the value drawn at that position
  )
  return(res)
}
Then use map_dfr to return as many samples as you like:
nsamples <- 5
map_dfr(1:nsamples, ~ samplefun(a))
Output:
# A tibble: 5 x 2
Trial Result
<int> <int>
1 4 11
2 6 14
3 5 11
4 2 10
5 4 15
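Note that samplefun draws a single weighted permutation of Plus without replacement, whereas your repeat loop draws one value at a time with replacement. If you want to mirror the original loop exactly, here is a minimal sketch of a variant (the name samplefun_repeat is mine, not part of the answer above):
samplefun_repeat <- function(a) {
  trial <- 0
  repeat {
    trial <- trial + 1
    draw <- sample(a$Plus, size = 1, prob = a$Prob)  # independent weighted draw
    if (draw >= 10) break                            # stop at the first value >= 10
  }
  tibble(Trial = trial, Result = draw)
}
map_dfr(1:nsamples, ~ samplefun_repeat(a))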
> df = data.frame(id = 1:5, ch_1 = 11:15, ch_2 = 10:14, selection = c(11, 13, 12, 14, 12))
> df
id ch_1 ch_2 selection
1 1 11 10 11
2 2 12 11 13
3 3 13 12 12
4 4 14 13 14
5 5 15 14 12
Given this data set, I need an additional column that follows these rules:
if selection matches one of the two choices (ch_1 or ch_2), return the number of the matching choice (1 or 2)
if selection matches neither of the two choices, return 3
I need a way to do this for every row. For a single row, the following code works just fine, but I can't seem to find a way to use it with apply to run it on each row of the data frame. I'm looking for a solution that can be applied to more than just two columns and that runs faster than a traditional loop.
df <- df[1, ]  # keep a single row for illustration
if (df$selection %in% df[, paste("ch_", 1:2, sep = "")]) {
  a <- which(df[, paste("ch_", 1:2, sep = "")] == df$selection)
} else {
  a <- 3
}
# OR
ifelse(df$selection %in% df[, paste("ch_", 1:2, sep = "")], 1, 3)
# OR
match(df$selection, df[, paste("ch_", 1:2, sep = "")])
Compare the selection vector to the choice columns with ==, add a final column that is always TRUE, and then take the index of the first TRUE in each row using max.col:
max.col(cbind(df$selection == df[c("ch_1","ch_2")], TRUE), "first")
#[1] 1 3 2 1 3
This should easily extend to n columns then.
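A sketch of the n-column case, assuming every choice column shares the ch_ prefix (choice_cols is a helper name I'm introducing here, not from the answer above):
choice_cols <- grep("^ch_", names(df), value = TRUE)
max.col(cbind(df$selection == df[choice_cols], TRUE), "first")
# unmatched rows get index n + 1 (the extra TRUE column); with ch_1 and ch_2 this
# reproduces the result above: 1 3 2 1 3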
You could do this with nested ifelse,
with(df, ifelse(selection == ch_1, 1L, ifelse(selection == ch_2, 2L, 3L)))
# [1] 1 3 2 1 3
but I'm rarely fond of nesting them. If this is all you need (and you never need more than two), then this might suffice.
One alternative is using dplyr::case_when,
with(df, dplyr::case_when(selection == ch_1 ~ 1, selection == ch_2 ~ 2, TRUE ~ 3))
and it can be easily used within a dplyr::mutate if you are already using the package.
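Inside mutate it might look like this (a sketch; the column name choice is mine):
library(dplyr)
df %>%
  mutate(choice = case_when(
    selection == ch_1 ~ 1L,
    selection == ch_2 ~ 2L,
    TRUE ~ 3L
  ))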
I have two columns in my data frame, value and num_leads. I'd like to create a third column that stores the value of value from n rows below, where n is whatever number is stored in num_leads. Here's an example:
df1 <- data.frame(value = 1:5,
                  num_leads = c(2, 3, 1, 1, 0))
Desired output:
value num_leads result
1 1 2 3
2 2 3 5
3 3 1 4
4 4 1 5
5 5 0 5
I have tried using the lead function in dplyr, but unfortunately it seems the lead must use the same fixed offset for every row.
Using indexing:
with(df1, value[seq_along(value) + num_leads])
where seq_along(value) gives the row number; adding num_leads to it pulls out the right value.
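To store it as the new column, for example:
df1$result <- with(df1, value[seq_along(value) + num_leads])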
This is what I came up with (note that it only works here because value happens to equal the row number; with general data you would index by row position instead):
df1$result <- df1$value[df1$value + df1$num_leads]
I want to create a subset using another subset as a condition. I can't show my actual data, but I can show an example that deals with the core of my problem.
For example, say I have 10 subjects with 10 observations each, so example data can be created with this simple data frame:
ID <- rep(1:10, each = 10)
x <- rnorm(100)
y <- rnorm(100)
df <- data.frame(ID,x,y)
Which creates:
ID x y
1 1 0.08146318 0.26682668
2 1 -0.18236757 -1.01868755
3 1 -0.96322876 0.09565239
4 1 -0.64841436 0.09202456
5 1 -1.15244873 -0.38668929
6 1 0.28748521 -0.80816416
7 1 -0.64243912 0.69403155
8 1 0.84882350 -1.48618271
9 1 -1.56619331 -1.30379070
10 1 -0.29069417 1.47436411
11 2 -0.77974847 1.25704185
12 2 -1.54139896 1.25146126
13 2 -0.76082748 0.22607239
14 2 -0.07839719 1.94448322
15 2 -1.53020374 -2.08779769
etc.
Some of these subjects were positive for an event (for example, subjects 3, 5, and 7), so I have created a subset for them using:
event_pos <- subset(df, ID %in% c("3","5","7"))
Now, I also want to create a subset for the subjects who were negative for an event. I could use something like this:
event_neg <- subset(df, ID %in% c("1","2","4","6","8","9","10"))
The problem is, my data set is too large to specify all the individuals of the negative group. Is there a way to use my subset event_pos to get all the subjects with negative events in one subset?
TL;DR
Can I get subset_2 by removing subset_1 from the data frame?
You can use:
ind_list <- c("3","5","7")
event_neg <- subset(df, (ID %in% ind_list) == FALSE)
or
event_neg <- subset(df, !(ID %in% ind_list))
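Or, using the event_pos subset you already created (a small sketch of the same idea):
event_neg <- subset(df, !(ID %in% unique(event_pos$ID)))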
Hope that helps.
I have a function, remove_fun, that removes rows from a data frame based on some conditions. (That function is too verbose to include, so here's a simplified example.)
Let's say I have a data frame called block_2, with two columns:
Treatment seq
1 29
1 23
3 60
1 6
2 41
1 5
2 44
For the sake of this example, let's say my function removes one row from block_2 at a time, the one with the highest value in block_2$seq. The function works well when I run it once; i.e. remove_fun(block_2) returns the following output:
Treatment seq
1 29
1 23
1 6
2 41
1 5
2 44
However, what I can't figure out is how to apply remove_fun repeatedly until block_2 is reduced to a certain number of rows.
My idea is to do something like this:
while (dim(block_2_df)[1] > 1) {  # dim(block_2_df)[1] is the number of rows of block_2_df
  remove_fun(block_2_df)
}
This would theoretically reduce block_2_df until only the observation corresponding to the lowest seq number remains.
However, this doesn't work. I think the problem is that I don't know how to use the 'updated' block_2_df iteratively. What I'd like to accomplish is some code that does something like this:
new_df_1 <- remove_fun(block_2)
new_df_2 <- remove_fun(new_df_1)
new_df_3 <- remove_fun(new_df_2)
etc...
I'm not necessarily looking for an exact solution (as I didn't provide remove_fun), but I'd appreciate some insight into a general approach to the problem.
Edit: here's my actual code with some example data:
#Start from a block of 10*6 balls, with lambda*(wj) balls of each class
#Allocation ratios
class_1<-"a"
class_2<-"b"
class_3<-"c"
ratio_a<-3
ratio_b<-2
ratio_c<-1
#Min_set
min_set<-c(rep(class_1,ratio_a),rep(class_2,ratio_b),rep(class_3,ratio_c))
min_set_num<-ifelse(min_set=='a',1,ifelse(min_set=='b',2,3))
table_key <- table(min_set_num)
#Number of min_sets
lamb<-10
#Active urn
block_1<-matrix(0,lamb,length(min_set))
for (i in 1:lamb){
  block_1[i,] <- min_set
}
#Turn classes into a vector
block_1<-as.vector(block_1)
block_1<-ifelse(block_1=='a',1,ifelse(block_1=='b',2,3))
#Turn into a df w/ identifying numbers:
block_1_df<-data.frame(block_1,seq(1:length(block_1)))
#Enumerate all sampling outcome permutations
library('dplyr')
#Create inactive urn
#Sample from block_1 until min_set is achieved, store in block_2#####
#Random sample :
block_2<-sample(block_1,length(block_1),replace=F)
block_2_df<-block_1_df[sample(nrow(block_1_df), length(block_1)), ]
colnames(block_2_df)<-c('Treatment','seq')
#Generally:####
remove_fun <- function(dat){
  # Build an empty data frame to hold the observations that form the minimum set
  min_set_obs_mat <- matrix(0, length(block_1), 2)
  min_set_obs_df <- as.data.frame(min_set_obs_mat)
  colnames(min_set_obs_df) <- c('Treatment', 'seq')
  for (i in 1:length(block_1)){
    if ((sum(min_set_obs_df[,1] == 1) < 3) || (sum(min_set_obs_df[,1] == 2) < 2) || (sum(min_set_obs_df[,1] == 3) < 1)){
      min_set_obs_df[i,] <- dat[i,]
    }
  }
  # Get rid of empty rows in the df:
  min_set_obs_df <- min_set_obs_df %>% filter(Treatment > 0)
  # Return the sampled 'balls' which satisfy the minimum set into block_2_df (randomized block_1),
  # keeping the 'extra' balls in a new df, extra_df
  # Question: does the order of returning matter?
  # Identify min_set
  outcome_df <- min_set_obs_df %>% group_by(Treatment) %>% do({
    head(., coalesce(table_key[as.character(.$Treatment[1])], 0L))
  })
  # This removes extra observations 'chronologically'
  # Identify extra balls; extra_df is the 'inactive' urn
  extra_df <- min_set_obs_df %>% filter(!(min_set_obs_df$seq %in% outcome_df$seq))
  # Question: is the number of pts equal to the block size (lambda*W)?
  # Return min_df back to block_2_df, remove extra_df from block_2_df:
  dat <- dat %>% filter(!(seq %in% extra_df$seq))
  return(dat)
}
Your while loop doesn't reassign block_2_df. This should work:
while (dim(block_2_df)[1] > 1) {
  block_2_df <- remove_fun(block_2_df)
}
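If you also want to guard against remove_fun ever failing to shrink the data frame, you could add an iteration cap; max_iter here is a name I'm introducing purely for illustration:
max_iter <- 1000  # hypothetical safety cap
iter <- 0
while (nrow(block_2_df) > 1 && iter < max_iter) {
  block_2_df <- remove_fun(block_2_df)
  iter <- iter + 1
}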
If all you need is a way to subset the data frame...
df <- data.frame(Treatment = c(1, 1, 3, 1, 2, 1, 2),
seq = c(29, 23, 60, 6, 41, 5, 44))
df
Treatment seq
1 1 29
2 1 23
3 3 60
4 1 6
5 2 41
6 1 5
7 2 44
# Decide how many rows you want in output
n <- 6
# Find the n lowest values of the seq variable
head(sort(df$seq), n)
[1] 5 6 23 29 41 44
# Use them in the subset criteria
df[df$seq %in% head(sort(df$seq), n), ]
Treatment seq
1 1 29
2 1 23
4 1 6
5 2 41
6 1 5
7 2 44
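An equivalent one-liner keeps the n rows with the smallest seq values, in their original order (a sketch; ties.method = "first" just guards against ties in seq):
df[rank(df$seq, ties.method = "first") <= n, ]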