Using semi_join to find similarities, but it mistakenly returns none - r

I am trying to find the genes common to two columns, so that I can later work with just those genes. Below is my code:
top100_1Beta <- data.frame(grp1_Beta$my_data.SYMBOL[1:100])
top100_2Beta<- data.frame(grp2_Beta$my_data.SYMBOL[1:100])
common100_Beta <- semi_join(top100_1Beta, top100_2Beta)
When I run the code I get the following error:
Error: by required, because the data sources have no common variables
This is wrong, since when I open top100_1Beta and top100_2Beta I can see that at least the first few rows list the exact same genes: ATP2A1, SLMAP, MEOX2,...
I am confused as to why it then reports that there are no commonalities.
Any help would be greatly appreciated.
Thanks!

I don't think you need any form of *_join here; instead, it seems you're looking for intersect:
intersect(grp1_Beta$my_data.SYMBOL[1:100], grp2_Beta$my_data.SYMBOL[1:100])
This returns a vector of the entries common to the first 100 entries of grp1_Beta$my_data.SYMBOL and grp2_Beta$my_data.SYMBOL.
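As a minimal sketch (with made-up gene vectors standing in for the real columns):

```r
# Hypothetical stand-ins for the first 100 entries of each SYMBOL column
g1 <- c("ATP2A1", "SLMAP", "MEOX2", "GENE_A")
g2 <- c("ATP2A1", "SLMAP", "MEOX2", "GENE_B")

intersect(g1, g2)
# [1] "ATP2A1" "SLMAP"  "MEOX2"
```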

Without a full working example, I'm guessing that your top100_1Beta and top100_2Beta data frames do not have the same column names. They are probably grp1_Beta.my_data.SYMBOL.1.100. and grp2_Beta.my_data.SYMBOL.1.100., which means the semi_join function doesn't know which columns to match the data frames on. Renaming the columns so they share a name should fix the issue.
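A minimal sketch of the renaming fix, assuming dplyr is loaded and using made-up symbol values in place of the real data:

```r
library(dplyr)

# Hypothetical stand-ins for top100_1Beta / top100_2Beta with mismatched names
top100_1Beta <- data.frame(grp1_Beta.my_data.SYMBOL.1.100. = c("ATP2A1", "SLMAP", "GENE_A"))
top100_2Beta <- data.frame(grp2_Beta.my_data.SYMBOL.1.100. = c("ATP2A1", "SLMAP", "GENE_B"))

# Give both data frames the same column name, then semi_join can match on it
names(top100_1Beta) <- "SYMBOL"
names(top100_2Beta) <- "SYMBOL"

common100_Beta <- semi_join(top100_1Beta, top100_2Beta, by = "SYMBOL")
```

Passing by = "SYMBOL" explicitly also silences the "joining by" message dplyr prints when it has to guess the column.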

Related

Filtering with dplyr not working as expected

How are you?
I have the following problem, which is very weird because the task is very simple.
I want to filter one of my factor variables in R, but the outcome is an empty data frame.
My data frame is called "data_2022". If I execute this code:
sum(data_2022$CANALDEVENTA=="WEB")
The result is 2704800, which is the number of times that this filter is TRUE. But when I run:
a= data_2022 %>% filter(CANALDEVENTA=="WEB")
This returns an empty data frame.
I know I am not an expert in R, but I have done this kind of thing a million times and I have never had this error before.
Do you have a clue about what the problem is?
Sorry I did not make a reproducible example.
Thank you in advance.
You could use the subset function:
a<-subset(data_2022, CANALDEVENTA=="WEB")
Using the tidyverse, make sure you are calling the function as dplyr::filter. If filter has been masked by another package (for example stats::filter, which applies linear filtering to a time series rather than subsetting a data frame), you will get unexpected results. Try this code too:
my_names<-c("WEB")
a<-dplyr::filter(data_2022, CANALDEVENTA %in% my_names)
Hope it works.
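A minimal sketch with a toy data_2022 standing in for the real one, showing that both approaches return the matching rows:

```r
library(dplyr)

# Toy stand-in for the real data_2022
data_2022 <- data.frame(CANALDEVENTA = factor(c("WEB", "TIENDA", "WEB")))

sum(data_2022$CANALDEVENTA == "WEB")           # 2 rows match

a <- dplyr::filter(data_2022, CANALDEVENTA == "WEB")
nrow(a)                                        # 2

b <- subset(data_2022, CANALDEVENTA == "WEB")
nrow(b)                                        # 2
```

If the toy example works but the real data does not, compare the two calls carefully; a mismatch between sum(...) and filter(...) on the same condition usually points to filter being masked.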

In R, dataframe[-NULL] returns an empty dataframe

I'm creating some routines in R to ease model creation and to distinguish several groups based on several parameters (e.g. original watches vs. fake ones, using common watch attributes).
During the process, I keep track of the potentially excluded lines in a vector (empty at first), and I get rid of them at the end using:
model$var <- raw_data[-line_excluded,]
The problem is that if line_excluded is c() (i.e. no line excluded), model$var is an empty data frame, whereas in that case I want all the lines of the data frame.
The only solution I have thought of is the use of:
if (!is.null(line_excluded)) {
  model$var <- raw_data[-line_excluded, ]
}
But that's not really pretty, and I have several tracking variables like line_excluded which would need this.
Thanks for the help
You can do it another way using setdiff(), which can deal with an empty line_excluded, i.e.,
model$var <- raw_data[setdiff(seq(nrow(raw_data)),line_excluded),]
You can also try:
model$var <- raw_data[!(1:nrow(raw_data) %in% line_excluded),]
This is similar to what @ThomasIsCoding suggested: you look for the row numbers that are not in your line_excluded.
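A quick sketch with toy data showing that the setdiff() approach handles an empty line_excluded correctly:

```r
# Toy stand-in for raw_data
raw_data <- data.frame(x = 1:5)

line_excluded <- c()   # nothing excluded
keep <- setdiff(seq_len(nrow(raw_data)), line_excluded)
nrow(raw_data[keep, , drop = FALSE])   # 5 -- all rows kept

line_excluded <- c(2, 4)
keep <- setdiff(seq_len(nrow(raw_data)), line_excluded)
nrow(raw_data[keep, , drop = FALSE])   # 3
```

The key difference from negative indexing is that setdiff(seq_len(n), NULL) returns all of 1..n, whereas raw_data[-c(), ] selects nothing.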

Finding missing values

I have a couple of questions as I am new to this.
How can I read in data using -- to look for missing values?
How can I determine how many values are missing in each variable?
I tried using the summary command and is.na but can't seem to get it right.
The first question is not clear; for the second one you can use:
sapply(yourdataframe, function(x) sum(is.na(x)))
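If the first question means the file marks missing values with --, read.csv's na.strings argument reads those as NA; and colSums(is.na(...)) or the sapply call counts the NAs per variable. A short sketch with a made-up inline file:

```r
# Read data where "--" marks a missing value (toy inline file)
txt <- "a,b\n1,--\n--,x\n3,y"
df <- read.csv(textConnection(txt), na.strings = "--")

# Count missing values per variable
colSums(is.na(df))
# a b
# 1 1
sapply(df, function(x) sum(is.na(x)))
```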

How to subset (without filtering) multiple columns from a data frame in R

I'm sorry if this has been done to death, but all the answers I've found veer all over the map into extreme exotica. I can subset a single column using [[]] (I've learned from Stack Overflow that I'm not supposed to use subset() and similar in my scripts, since they're intended for interactive use), but I can't figure out how to make the leap to more than one column. These two work, of course:
outcomeA <- outcome[['Hospital.Name']]
outcomeB <- outcome[['TX']]
But I've tried a dozen permutations to get both of those columns, like so:
outcomeC <- outcome[[c('Hospital.Name', 'TX')]] (gives "subscript out of bound")
outcomeC <- outcome[c('Hospital.Name', 'TX')] (gives "undefined columns selected")
etc, but they all fail. Can someone please put me out of my misery and help me select more than one column?
Thanks - Ed
Did you try this with a comma and single brackets?
outcomeC <- outcome[,c('Hospital.Name', 'TX')]
Also, you can only get column names that exist in your data. Check them against:
names(outcome)
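A minimal sketch with a made-up outcome data frame:

```r
# Hypothetical stand-in for the real outcome data frame
outcome <- data.frame(Hospital.Name = c("A", "B"),
                      TX = c(10, 20),
                      Other = c(1, 2))

# Single brackets with a comma select multiple columns, keeping a data frame
outcomeC <- outcome[, c("Hospital.Name", "TX")]
names(outcomeC)
# [1] "Hospital.Name" "TX"
```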

subsetting complete cases of a dataframe in a list of dataframes

I have a list of dataframes called mylist.
This list contains 300 dataframes.
I need to subset each of these to keep only its complete cases.
I am very new to R, I started studying it two weeks ago, and I tried just this:
mylist[[1]] [[!(complete.cases(mylist[[1]])),]]
but it doesn't seem to work, as
Error in `[[.data.frame`(mylist[[1]], !(complete.cases(mylist[[1]])), :
argument "..2" is missing, with no default
I am searching on the web, but probably I am not asking the right question.
If someone could help me, even just by posting a link where I can look at the right function, I would be grateful.
Try this
lapply(mylist,function(x){x[complete.cases(x),]})
Also,
lapply(mylist, function(x) x[!rowSums(is.na(x)),])
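A small sketch with a two-element toy list standing in for the 300 data frames:

```r
# Toy stand-in for mylist
mylist <- list(
  data.frame(a = c(1, NA, 3), b = c("x", "y", NA)),
  data.frame(a = 1:2, b = 3:4)
)

# Keep only the rows with no NA in each data frame
complete_list <- lapply(mylist, function(x) x[complete.cases(x), ])
sapply(complete_list, nrow)
# [1] 1 2
```

Note the original attempt used !complete.cases(...), which would keep the incomplete rows instead; dropping the ! keeps the complete ones.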