I am trying to use the mRMRe package in R to do feature selection on a gene expression dataset. I have RNA-seq data containing over 10K genes, and I would like to find the optimal set of features for fitting the classification model. I am wondering how to find the optimal feature count. Here is my code:
mrEnsemble <- mRMR.ensemble(data = Xdata, target_indices = c(1), feature_count = 100, solution_count = 1)
mrEnsemble_genes <- as.data.frame(apply(solutions(mrEnsemble)[[1]], 2, function(x, y) { return(y[x]) }, y=featureNames(Xdata)))
View(mrEnsemble_genes)
I just set feature_count = 100, but I am wondering how to find the optimal feature count for classification without having to fix that number in advance.
The result after extracting mrEnsemble_genes is a list of genes like:
gene05
gene08
gene45
gene67
Are they ranked by a score calculated from mutual information? I mean, does the first-ranked gene have the highest MI, so that it may be a good gene for classifying the class of a sample, i.e. cancer vs. normal? Thank you.
As far as I understand, the MRMR method simply ranks the N features you ask for according to their MRMR score. It is then up to you to decide which features to keep and which to discard.
According to the mRMRe package documentation, the MRMR score is computed as follows:
For each target, the score of a feature is defined as the
mutual information between the target and this feature minus the average mutual information of
previously selected features and this feature.
So in other words,
Relevancy = Mutual information with the target
Redundancy = Average mutual information with the previous predictors
MRMR Score = Relevancy - Redundancy.
The way I interpret this, the scores themselves don't offer a clear-cut answer for keeping or discarding features. Higher scores are better, but a zero / negative score does not mean the feature has no effect on the target: It could have some mutual information with the target, but higher average mutual information with the other predictors, leading to a negative MRMR score. Finding the exact optimal feature set requires experimentation.
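For example, a rough sketch of that experimentation: rank the features once using solutions() (described below), then choose the feature count by cross-validated error of a simple classifier. Here expr_df (a data frame of expression values whose column names match featureNames(Xdata)) and labels (a cancer/normal factor) are assumptions, and glm() is only a placeholder classifier:

# Sketch only -- not part of mRMRe
ranked_genes <- featureNames(Xdata)[as.vector(solutions(mrEnsemble)[[1]])]

set.seed(1)
folds <- sample(rep(1:5, length.out = nrow(expr_df)))   # 5-fold CV assignment

cv_error <- sapply(c(5, 10, 25, 50, 100), function(k) {
  d <- data.frame(y = labels, expr_df[, ranked_genes[1:k], drop = FALSE])
  mean(sapply(1:5, function(f) {
    fit  <- glm(y ~ ., family = binomial, data = d[folds != f, ])
    prob <- predict(fit, d[folds == f, ], type = "response")
    mean((prob > 0.5) != (d$y[folds == f] == levels(d$y)[2]))
  }))
})
cv_error   # pick the feature count with the lowest cross-validated error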
To retrieve the indexes of the features (in the original data), ranked from highest to lowest MRMR score, use:
solutions(mrEnsemble)
To retrieve the actual MRMR scores, use:
scores(mrEnsemble)
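For example, a small sketch (reusing the objects from the question) pairing each selected gene with its score, in ranked order:

idx <- as.vector(solutions(mrEnsemble)[[1]])
data.frame(gene  = featureNames(Xdata)[idx],
           score = as.vector(scores(mrEnsemble)[[1]]))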
I'm making a rating survey in R (Shiny) and I'm trying to find a metric that can evaluate the agreement, but for only one of the "questions" in the survey. The ratings range from 1 to 5. There are multiple raters, and each rater rates a set of 10 questions on that scale.
I've used Fleiss' Kappa and Krippendorff's Alpha for the whole set of questions and raters and it works, but when evaluating each question separately these metrics give negative values. I tried calculating them by hand (from the formulas) and I still get the same results, so I guess they don't work for a small sample of subjects (in this case a sample of 1).
I've looked at other metrics like rwg in the multilevel package, but so far I can't seem to make it work. According to the R documentation:
rwg(x, grpid, ranvar=2)
Where:
x = A vector representing the construct on which to estimate agreement.
grpid = A vector identifying the groups from which x originated.
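For reference, a minimal sketch of the input shapes these arguments seem to expect (hypothetical data; ranvar = 2 corresponds to the expected random variance (5^2 - 1)/12 of a uniform 1-5 scale):

library(multilevel)

# Hypothetical ratings of ONE question by raters belonging to two groups
ratings <- c(4, 5, 4, 3, 2, 2, 3, 3)
groups  <- c(1, 1, 1, 1, 2, 2, 2, 2)

# x = the ratings themselves, grpid = the group each rating came from,
# ranvar = expected random variance for the rating scale
rwg(x = ratings, grpid = groups, ranvar = 2)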
Can someone explain to me what the rwg function expects from me?
If someone knows of another agreement metric that might work better, please let me know.
Thanks.
I'm rather new to R and especially to the method of matching by propensity scores. My dataset includes two groups of people that differ in whether they were treated or not. Unfortunately they also differ significantly in age and disease duration, hence my wish to match them.
So far this is my code:
set.seed(2208)
mod_match <- matchit(TR ~ age + disease_duration + sex + partner + work + academic,
                     data = Data_nomiss,
                     method = "nearest",
                     caliper = .025)
summary(mod_match)
This code works fine, but I wondered whether there is a possibility to weight the importance of the covariates for the accuracy of the matching. For me it is crucial that the groups are as close as possible concerning age and disease duration (numeric), whereas the remaining variables (factors) should also be matched, but for my purposes may differ in their means a little more than the first two.
While searching for a solution to my problem I came across a post from someone who had basically the same problem: http://r.789695.n4.nabble.com/matchit-can-I-weight-the-parameters-td4633907.html
In that case it was proposed to combine nearest-neighbor and exact matching, but transferred to my dataset this leads to a disproportionate reduction of my sample. In the end, what I'd like to have is some sort of customized matching process focusing on age and disease duration while also involving the remaining variables, but in a weaker way.
Does anyone happen to have an idea how this could be realized? I'd be really glad to receive any kind of tips on this matter, and thank you for your time!
Unfortunately, MatchIt does not provide this functionality. There are two ways to do this without MatchIt, but they are slightly more advanced. Note that neither uses propensity scores. The point of propensity score matching is to match on a single number, the propensity score, which makes the matching procedure blind to the original covariates on which balance is desired.
The first is to use the Matching package and supply your own weight matrix via the Weight.matrix argument of Match(). You could upweight age and disease duration in that weight matrix.
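A rough sketch of this first approach (untested; the variable coding and the weights below are assumptions, and Weight = 3 tells Match() to use the user-supplied Weight.matrix):

library(Matching)

# Covariates as a numeric matrix (factors coded numerically for illustration)
X <- data.matrix(Data_nomiss[, c("age", "disease_duration", "sex",
                                 "partner", "work", "academic")])

# Diagonal weight matrix: larger entries make a covariate count more in the
# generalized distance. The 10-vs-1 values are just assumptions.
W <- diag(c(10, 10, 1, 1, 1, 1))

m <- Match(Tr = Data_nomiss$TR, X = X,
           Weight = 3,            # use the user-supplied Weight.matrix
           Weight.matrix = W)
summary(m)
MatchBalance(TR ~ age + disease_duration + sex + partner + work + academic,
             data = Data_nomiss, match.out = m)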
The second is to use the package designmatch to do cardinality matching, which allows you to specify balance constraints, and it will use optimization to find the largest sample that meets those constraints. In bmatch(), enter your covariates of interest into the mom argument, which also allows you to include specific balance constraints for each variable. You can require stricter balance constraints for age and disease duration.
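A very rough sketch of the second approach (heavily hedged: untested, the 0.05/0.2 tolerances are assumptions, and bmatch() expects the data sorted so that treated units come first):

library(designmatch)

# Sort so treated units come first, as bmatch() expects (TR assumed 0/1)
dat   <- Data_nomiss[order(-Data_nomiss$TR), ]
t_ind <- dat$TR

mom_covs <- data.matrix(dat[, c("age", "disease_duration", "sex",
                                "partner", "work", "academic")])

# Balance tolerances via absstddif(); tighter for age and disease duration
mom_tols <- absstddif(mom_covs, t_ind, 0.2)
mom_tols[1:2] <- absstddif(mom_covs, t_ind, 0.05)[1:2]

out <- bmatch(t_ind = t_ind,
              dist_mat = NULL,       # no pairwise distances: cardinality matching
              subset_weight = 1,     # favor matching as many units as possible
              mom = list(covs = mom_covs, tols = mom_tols))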
I have a large matrix of 500K observations to cluster using hierarchical clustering. Due to the large size, I do not have the computing power to calculate the distance matrix.
To overcome this problem I chose to aggregate my matrix, merging identical observations, which reduces it to about 10K observations. I have the frequency of each row in this aggregated matrix. I now need to incorporate this frequency as a weight in my hierarchical clustering.
The data is a mixture of numerical and categorical variables for the 500K observations, so I have used the daisy() function from the cluster package to calculate the Gower dissimilarity for my aggregated dataset. I want to use hclust from the stats package on the aggregated dataset, but I want to take the frequency of each observation into account. The help page for hclust lists the arguments as follows:
hclust(d, method = "complete", members = NULL)
The help page describes the members argument as: NULL or a vector with length the size of d; see the 'Details' section. The Details section says: if members != NULL, then d is taken to be a dissimilarity matrix between clusters instead of dissimilarities between singletons, and members gives the number of observations per cluster. This way the hierarchical cluster algorithm can be 'started in the middle of the dendrogram', e.g., in order to reconstruct the part of the tree above a cut (see examples). Dissimilarities between clusters can be efficiently computed (i.e., without hclust itself) only for a limited number of distance/linkage combinations, the simplest one being squared Euclidean distance and centroid linkage. In this case the dissimilarities between the clusters are the squared Euclidean distances between cluster means.
From the above description, I am unsure whether I can assign my frequency weights to the members argument, as it is not clear whether this is the purpose of that argument. I would like to use it like this:
hclust(d, method = "complete", members = df$freq)
Where df$freq is the frequency of each row in the aggregated matrix. So if a row is duplicated 10 times this value would be 10.
If anyone can help me that would be great,
Thanks
Yes, this should work fine for most linkages, in particular single, group-average and complete linkage. For Ward etc. you need to take the weights into account correctly yourself.
But even that part is not hard. Just make sure to use the cluster sizes, because you need to pass the distance between two clusters, not between two points. So the matrix should contain the distance between n1 points at location x and n2 points at location y. For min/max/mean this n disappears or cancels out; for Ward you should get an SSQ-like formula.
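Under those caveats, a minimal sketch (assuming agg is the aggregated data frame with the mixed-type columns plus a freq column of row frequencies):

library(cluster)

# Gower dissimilarity on the aggregated (de-duplicated) rows only
d <- daisy(agg[, setdiff(names(agg), "freq")], metric = "gower")

# Pass the row frequencies as cluster sizes; complete linkage is one of the
# linkages for which this is safe, as noted above
hc <- hclust(d, method = "complete", members = agg$freq)
plot(hc)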
I'm working in R with the topicmodels package. I'm trying to work through the code/package and understand it better. In most of the tutorials and documentation I'm reading, people define topics by their 5 or 10 most probable terms.
Here is an example:
library(topicmodels)
data("AssociatedPress", package = "topicmodels")
lda <- LDA(AssociatedPress[1:20,], k = 5)
topics(lda)
terms(lda)
terms(lda,5)
So the last part of the code returns the 5 most probable terms for each of the 5 topics I've defined.
In the lda object, I can access the gamma element, which contains, for each document, the probability of belonging to each topic. Based on this I can extract the topics with a probability greater than whatever threshold I prefer, instead of having the same number of topics for every document.
But my second step would then be to find out which words are most strongly associated with the topics. I can use the terms(lda) function to pull this out, but it only gives me the top N.
In the output I've also found
lda@beta
which contains the beta value per word per topic. But this is a beta value that I'm having a hard time interpreting. They are all negative values, and though I see some values around -6 and others around -200, I can't interpret these as a probability or as a measure of which words are associated with a topic and how strongly. Is there a way to pull out or calculate anything that can be interpreted as such a measure?
many thanks
Frederik
The beta matrix gives you a matrix with dimension #topics x #terms. The values are logged probabilities, so you exponentiate them (exp()).
The resulting probabilities are of the type P(word|topic), and they only add up to 1 if you sum over the words, not over the topics: the sum of P(word|topic) over all words in a topic is 1, but the sum of P(word|topic) over all topics for a given word is NOT 1.
What you are searching for is P(topic|word) but I actually do not know how to access or calculate it in this context. You will need P(word) and P(topic) I guess. P(topic) should be:
colSums(lda@gamma)/sum(lda@gamma)
This becomes more obvious if you look at the gamma matrix, which is #documents x #topics. The given probabilities are P(topic|document) and can be interpreted as "what is the probability of topic x given document y". The sum over all topics should be 1, but not the sum over all documents.
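A quick sketch (reusing the lda object from the question) that checks both statements:

word_probs <- exp(lda@beta)    # #topics x #terms matrix of P(word | topic)
rowSums(word_probs)            # each row sums to 1 (sum over words, per topic)

doc_topics <- lda@gamma        # #documents x #topics matrix of P(topic | document)
rowSums(doc_topics)            # each row sums to 1 (sum over topics, per document)

# Most probable terms per topic, recovered directly from the probabilities
apply(word_probs, 1, function(p) lda@terms[order(p, decreasing = TRUE)[1:5]])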
I want to rank a set of sellers. Each seller is described by parameters var1, var2, var3, ..., var20. I want to score each of the sellers.
Currently I am calculating the score by assigning weights to these parameters (say 10% to var1, 20% to var2 and so on), and these weights are determined based on my gut feeling.
My score equation looks like
score = w1*var1 + w2*var2 + ... + w20*var20
score = 0.1*var1 + 0.5*var2 + 0.05*var3 + ... + 0.0001*var20
My score equation could also look like
score = w1^2*var1 + w2*var2 + ... + w20^5*var20
where var1, var2, ..., var20 are normalized.
Which equation should I use?
What are the methods to scientifically determine what weights to assign?
I want to optimize these weights to revamp the scoring mechanism using some data oriented approach to achieve a more relevant score.
For example, I have the following features for sellers:
1] Order fulfillment rates [numeric]
2] Order cancel rate [numeric]
3] User rating [1-5] {1-2: Worst, 3: Average, 5: Good} [categorical]
4] Time taken to confirm the order (the shorter the time, the better the seller) [numeric]
5] Price competitiveness
Are there better algorithms or approaches to solve this problem, i.e. to calculate the score? So far I have simply added the various features linearly; I want to know a better approach to build the ranking system.
How do I come up with the values for the weights?
Apart from the features above, a few more that I can think of are the ratio of positive to negative reviews, the rate of damaged goods, etc. How will these fit into my score equation?
As a disclaimer, I don't think this is a concise answer, but your question is quite broad. This has not been tested; it is the approach I would most likely take given a similar problem.
As a possible direction to go, consider the multivariate Gaussian density
f(x) = (2*pi)^(-n/2) * det(Sigma)^(-1/2) * exp(-1/2 * (x - mu)' * Sigma^(-1) * (x - mu))
The idea would be that each parameter is in its own dimension and can therefore be weighted by importance.
For example, with Sigma = [1,0,0; 0,2,0; 0,0,3] and a vector [x1, x2, x3], x1 would have the greatest importance, because the smallest variance penalizes deviations in that dimension the most.
The covariance matrix Sigma takes care of the scaling in each dimension. To achieve this, simply put the weights on the diagonal of an n x n matrix; you are not really concerned with the cross terms.
Mu is the average over all records in your data for your sellers and is a vector.
x is the mean of every category for a particular seller, as a vector x = {x1, x2, x3, ..., xn}. This is a continuously updated value as more data are collected.
The parameters of the function, based on the total dataset, should evolve as well. That way, biased voting, especially in the "feelings"-based categories, can be weeded out.
After that setup, the evaluation of the function f(x) can be played with to give the desired results. It is a probability density function, but its utility is not restricted to statistics.
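As an untested sketch of how that evaluation could look in R (the feature choices, means and diagonal values below are all assumptions):

# Score one seller with a multivariate Gaussian density; smaller diagonal
# entries in Sigma make that feature count more, as in the Sigma example above.
score_seller <- function(x, mu, var_diag) {
  Sigma <- diag(var_diag)
  n <- length(x)
  dens <- (2 * pi)^(-n / 2) * det(Sigma)^(-1 / 2) *
    exp(-0.5 * t(x - mu) %*% solve(Sigma) %*% (x - mu))
  as.numeric(dens)
}

# Hypothetical normalized features: fulfillment rate, cancel rate, user rating
mu <- c(0.85, 0.10, 0.60)    # averages over all sellers
var_diag <- c(1, 1, 3)       # the rating dimension is least important here
score_seller(c(0.95, 0.05, 0.40), mu, var_diag)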