I am using R (RStudio) to construct an index/synthetic indicator to evaluate, say, commercial efficiency. I am using the PCA() command from the FactoMineR package, with 7 distinct variables. I have previously created similar indexes by calculating the weight of each variable on the first component (which can be obtained through PCA()$var$coord[,1]), with no problems, since each variable had a positive weight. However, there is one particular variable whose weight has an undesired sign: negative. The variable is ‘delivery speed’, and this sign would imply that the greater the speed, the less efficient the process. What is going on? How would you address this, preferably still using PCA?
The sign of variable weights shouldn't matter in PCA. Since on the whole, all of the components perfectly represent the original data (when p < n), for some components it is natural that there will be some positive weights and some negative weights. That doesn't mean that that particular variable has an undesired weight, rather that for that particular extracted signal (say, first principal component) the variable weight is negative.
For a better understanding, let's take the classical two-dimensional example, which I took from this very useful discussion:
Can you see from the graph that one of the weights will necessarily be negative for the 2nd principal component?
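A minimal simulated sketch of that two-dimensional case (the data here are made up; the package and accessor are the ones from your question):

library(FactoMineR)

# Two positively correlated variables: PC1 runs "along" the cloud, so both of
# its weights are positive; PC2, being orthogonal to PC1, necessarily gets one
# positive and one negative weight.
set.seed(1)
x1 <- rnorm(200)
x2 <- 0.8 * x1 + rnorm(200, sd = 0.5)

res <- PCA(data.frame(x1, x2), graph = FALSE)
res$var$coord   # Dim.1: both weights positive; Dim.2: mixed signs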
Finally, if that variable does actually disturb your analysis, one possible solution would be to apply sparse PCA (SPCA). Under cross-validated regularization, that method is able to shrink some of the weights to exactly zero. If in your case that negative weight is not significant enough, it might get reduced to zero under SPCA.
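A hedged sketch of that route, assuming the spca() function from the elasticnet package (one of several SPCA implementations); 'dat' stands in for your 7-variable data frame, and the penalty value is arbitrary here and should ideally be chosen by cross-validation:

library(elasticnet)

X <- scale(dat)                      # standardize, as PCA() does by default
fit <- spca(X, K = 1, para = 0.5,    # para: L1 penalty for the 1st component (placeholder value)
            type = "predictor", sparse = "penalty")
fit$loadings                         # sufficiently weak weights are shrunk to exactly 0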
I am using phylopars() from the Rphylopars package in R to fill in missing values in a large dataset of animal body traits (e.g. body size). (https://www.rdocumentation.org/packages/Rphylopars/versions/0.2.9/topics/phylopars)
This method is called imputation; it phylogenetically estimates the missing data.
However, the output of the imputation contains some negative values, which make no sense because all the estimated traits have to be greater than zero.
I wonder how I can fix this issue, or how to set a minimum limit for the estimated values.
I'm not new to R, but I am new to Rphylopars, so this question may be pretty naive; I just couldn't find a solution.
For correlated traits, values from one trait will influence another trait, which can result in negative estimates. phylopars() allows you to specify whether traits are correlated, so you can try setting phylo_correlated = FALSE or imputing the traits individually.
Alternatively, transforming the trait can keep the estimates within range, depending on the nature of your data. Log-transforming before imputation (and back-transforming afterwards), then assessing the distribution, can help; see the sketch below.
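A hedged sketch of both options (trait_data, tree, and body_size are placeholders; the anc_recon slot name is an assumption based on the version I am familiar with, so check str(fit) for yours):

library(Rphylopars)

# Impute on the log scale, then back-transform: exp() guarantees values > 0.
trait_data_log <- trait_data                 # data frame with a 'species' column
trait_data_log$body_size <- log(trait_data_log$body_size)

fit <- phylopars(trait_data = trait_data_log, tree = tree)
# Or, as above, treat traits as phylogenetically uncorrelated:
# fit <- phylopars(trait_data = trait_data, tree = tree, phylo_correlated = FALSE)

imputed <- exp(fit$anc_recon)                # back-transform the reconstructed values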
In a question on Cross Validated (How to simulate censored data), I saw that the optim function was used as a kind of solver instead of as an optimizer. Here is an example:
optim(1, fn=function(scl){(pweibull(.88, shape=.5, scale=scl, lower.tail=F)-.15)^2})
# $par
# [1] 0.2445312
# ...
pweibull(.88, shape=.5, scale=0.2445312, lower.tail=F)
# [1] 0.1500135
I have found a tutorial on optim here, but I am still not able to figure out how to use optim as a solver. I have several questions:
What is the first parameter (i.e., the value 1 being passed in)?
What is the function that is passed in?
I can understand that it is taking the Weibull probability distribution and subtracting 0.15, but why are we squaring the result?
I believe you are referring to my answer. Let's walk through a few points:
The OP (of that question) wanted to generate (pseudo-)random data from a Weibull distribution with specified shape and scale parameters, where the censoring would be applied for all data past a certain censoring time, and end up with a prespecified censoring rate. The problem is that once you have specified any three of those, the fourth is necessarily fixed. You cannot specify all four simultaneously unless you are very lucky and the values you specify happen to fit together perfectly. As it happened, the OP was not so lucky with the four preferred values; it was impossible to have all four, as they were inconsistent. At that point, you can decide to specify any three and solve for the last. The code I presented gave examples of how to do that.
As noted in the documentation for ?optim, the first argument is par "[i]nitial values for the parameters to be optimized over".
Very loosely, the way the optimization routine works is that it calculates an output value given a function and an input value. Then it 'looks around' to see if moving to a different input value would lead to a better output value. If that appears to be the case, it moves in that direction and starts the process again. (It stops when it does not appear that moving in either direction will yield a better output value.)
The point is that it has to start somewhere, and the user is obliged to specify that value. In each case, I started with the OP's preferred value (although really I could have started almost anywhere).
The function that I passed in is ?pweibull. It is the cumulative distribution function (CDF) of the Weibull distribution. It takes a quantile (X value) as its input and returns the proportion of the distribution that has been passed through up to that point. Because the OP wanted to censor the most extreme 15% of that distribution, I specified that pweibull return the proportion that had not yet been passed through instead (that is the lower.tail=F part). I then subtracted .15 from the result.
Thus, the ideal output (from my point of view) would be 0. However, it is possible to get values below zero by finding a scale parameter that makes the output of pweibull < .15. Since optim (or really almost any optimizer) finds the input value that minimizes the output value, that is what it would have done. To keep that from happening, I squared the difference. That means that when the optimizer went 'too far' and found a scale parameter that yielded an output of .05 from pweibull, the difference was -.10 (i.e., < 0), but the squaring makes the ultimate output +.01 (i.e., > 0, or worse). This pushes the optimizer back towards the scale parameter that makes pweibull output exactly .15, so that the objective is (.15-.15)^2 = 0.
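To make the squaring step concrete, here is a small check (values are approximate):

f_raw <- function(scl) pweibull(.88, shape=.5, scale=scl, lower.tail=F) - .15
f_sq  <- function(scl) f_raw(scl)^2

f_raw(0.05)   # about -0.135: "better" than 0 if we minimized the raw difference
f_sq(0.05)    # about  0.018: after squaring, overshooting is worse than hitting 0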
In general, the distinction you are making between an "optimizer" and a "solver" is opaque to me. They seem like two different views of the same elephant.
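For instance, the same problem can be handed to an explicit root-finder; a minimal sketch with base R's uniroot (search interval chosen by me) recovers essentially the same scale parameter as the optim call above:

uniroot(function(scl) pweibull(.88, shape=.5, scale=scl, lower.tail=F) - .15,
        interval = c(0.01, 10))$root
# about 0.2445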
Another possible confusion here involves optimization vs. regression. Optimization is simply about finding an input value[s] that minimizes (maximizes) the output of a function. In regression, we conceptualize data as draws from a data generating process that is a stochastic function. Given a set of realized values and a functional form, we use optimization techniques to estimate the parameters of the function, thus extracting the data generating process from noisy instances. Part of regression analyses partakes of optimization then, but other aspects of regression are less concerned with optimization and optimization itself is much larger than regression. For example, the functions optimized in my answer to the other question are deterministic, and there were no "data" being analyzed.
The post Classification functions in linear discriminant analysis in R from user Tyler provides a function to produce the classification functions (not discriminant functions!) from an LDA model generated with lda().
I used these classification functions to calculate all classification scores for my data. I want to use this additional information, e.g. to find out which was the second most probable class and to understand the development across different time slices.
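For context, here is roughly the kind of computation these classification functions perform, sketched with MASS::lda on the built-in iris data (my own paraphrase of Fisher's classification functions using a pooled within-class covariance; not necessarily identical to the function from that post):

library(MASS)

fit <- lda(Species ~ ., data = iris)         # example model; substitute your own
X   <- as.matrix(iris[, 1:4])
grp <- iris$Species

# Pooled within-class covariance matrix
W <- Reduce(`+`, lapply(levels(grp), function(k) {
  Xk <- X[grp == k, , drop = FALSE]
  (nrow(Xk) - 1) * cov(Xk)
})) / (nrow(X) - nlevels(grp))
W_inv <- solve(W)

# Classification score for class k: x' W^-1 mu_k - 0.5 mu_k' W^-1 mu_k + log(prior_k)
scores <- sapply(levels(grp), function(k) {
  mu <- fit$means[k, ]
  drop(X %*% W_inv %*% mu) - 0.5 * drop(mu %*% W_inv %*% mu) + log(fit$prior[k])
})

head(scores)                                 # one column of scores per class; they can be negative
apply(scores, 1, function(s) names(sort(s, decreasing = TRUE))[2])   # second most probable class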
Now I would like to ask for your help in interpreting the following scenarios:
Scores close to or exactly zero (is it possible to claim that this class effectively was not recognized?)
A single negative score with a higher absolute value than the highest positive value (does it mean anything at all?)
Results in which all scores are negative (in the original interpretation, the highest score determines the classification; is this intended by LDA, or does it mean that none of the classifications is a good fit and one could say that no known pattern could be identified?)
A single very low positive value while all the others are negative with large absolute values (can I argue that the "signal strength" is low in this case?)
I know this is more of a statistical than a programming problem. I thought of it as a follow-up to the post mentioned at the beginning of this entry.
Thank you very much for your help!
I have a data set with 20 classes, and it has a pretty non-uniform distribution. Is there any functionality in R that allows us to balance the data set (weighted perhaps)?
I want to use the balanced data with Weka for classification. Since my class distribution is skewed, I am hoping to get better results if there's no single majority class.
I have tried to use the SMOTE filter and Resample filter but they don't quite do what I want.
I don't want any instances to be removed; repetition is fine.
I think there's a misunderstanding in your terminology. Your question's title refers to sampling, and yet the question text involves weighting.
To clarify:
With sampling, you end up with either fewer, the same, or more instances than in the original set; the unique members of a sample can be either a strict subset of the original set or identical to the original set (with replacement, i.e., duplicates).
By weighting, you simply adjust weights that may be used for some further purpose (e.g. sampling, machine learning) to address or impose some (im)balance relative to a uniform weighting.
I believe that you are referring to weighting, but the same answer should work in both cases. If the total # of observations is N and the frequency of each class is an element of the 20-long vector freq (e.g. the count of items in class 1 is freq[1]*N), then simply use a weight vector of 1/freq to normalize the weights. You can scale it by some constant, e.g. N, though it wouldn't matter. In case any frequency is 0 or very close to it, you might address this by using a vector of smoothed counts (e.g. Good-Turing smoothing).
As a result, each class will carry an equal proportion of the total weight; see the sketch below.
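A minimal sketch of that weighting in R ('y' stands in for your vector of class labels and 'data' for the original data frame; the last two lines show the oversampling-with-repetition variant you mentioned):

freq     <- table(y) / length(y)                     # relative frequency of each of the 20 classes
class_wt <- 1 / freq                                 # inverse-frequency weight per class
w        <- as.numeric(class_wt[as.character(y)])    # one weight per observation

tapply(w, y, sum)                                    # every class now carries the same total weight

# If duplicated instances are preferred over weights, sample with replacement:
idx      <- sample(seq_along(y), size = length(y), replace = TRUE, prob = w / sum(w))
balanced <- data[idx, ]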
I'd like to combine a few metrics of nodes in a social network graph into a single value for rank ordering the nodes:
in_degree + betweenness_centrality = informal_power_index
The problem is that in_degree and betweenness_centrality are measured on different scales, say 0-15 vs. 0-35000, and follow a power-law distribution (or at least definitely not a normal distribution).
Is there a good way to rescale the variables so that one won't dominate the other in determining the informal_power_index?
Three obvious approaches are:
Standardizing the variables (subtract the mean and divide by the standard deviation). This seems like it would squash the distribution too much, hiding the massive difference between a value in the long tail and one near the peak.
Re-scaling variables to the range [0,1] by subtracting min(variable) and dividing by max(variable). This seems closer to fixing the problem since it won't change the shape of the distribution, but maybe it won't really address the issue? In particular the means will be different.
Equalize the means by dividing each value by mean(variable). This won't address the difference in scales, but perhaps the mean values are more important for the comparison?
Any other ideas?
You seem to have a strong sense of the underlying distributions. A natural rescaling is to replace each variate with its probability. Or, if your model is incomplete, choose a transformation that approximately achieves that. Failing that, here's a related approach: if you have a lot of univariate data from which to build a histogram (of each variate), you could convert each to a 10-point scale based on whether it falls in the 0-10th percentile, the 10th-20th percentile, ..., or the 90th-100th percentile. These transformed variates have, by construction, a uniform distribution on 1, 2, ..., 10, and you can combine them however you wish.
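A minimal sketch of that decile idea in R (in_degree and betweenness_centrality are placeholders for your node metrics):

to_deciles <- function(x) {
  ceiling(ecdf(x)(x) * 10)   # ecdf(x)(x) is the empirical percentile in (0, 1]
}

informal_power_index <- to_deciles(in_degree) + to_deciles(betweenness_centrality)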
You could translate each to a percentage and then apply each to a known quantity, then use the sum of the new values:
((1 - (in_degree / 15)) * 2000) + ((1 - (betweenness_centrality / 35000)) * 2000) = ?
Very interesting question. Could something like this work:
Let's assume that we want to scale both variables to the range [-1, 1].
Take the example of betweenness_centrality, which has a range of 0-35000.
Choose a large number of the order of the range of the variable. As an example, let's choose 25,000.
Create 25,000 bins in the original range [0, 35000] and 25,000 bins in the new range [-1, 1].
For each value x_i, find the bin it falls into in the original range. Call this bin B_i.
Find the corresponding bin B_i in the new range [-1, 1].
Use either the max or the min of that bin in [-1, 1] as the scaled version of x_i.
This preserves the power-law distribution while scaling it down to [-1, 1], and it does not have the problem you described with (x - mean) / sd.
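A rough sketch of these steps in R (bin count and ranges as in the example above; findInterval does the bin lookup):

rescale_binned <- function(x, old_min = 0, old_max = 35000,
                           new_min = -1, new_max = 1, n_bins = 25000) {
  old_breaks <- seq(old_min, old_max, length.out = n_bins + 1)
  new_breaks <- seq(new_min, new_max, length.out = n_bins + 1)
  bin <- findInterval(x, old_breaks, all.inside = TRUE)   # B_i, in 1..n_bins
  new_breaks[bin + 1]                                     # upper edge of B_i in [-1, 1]
}

rescale_binned(c(0, 17500, 35000))   # roughly -1, 0, 1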
Normalizing to [0, 1] would be my short-answer recommendation for combining the 2 values, as it will maintain the distribution shape, as you mentioned, and should solve the problem of combining the values.
If the distributions of the 2 variables are different, which sounds likely, this won't really give you what I think you're after, which is a combined measure of where each variable sits within its own distribution. You would have to come up with a metric that determines where in the given distribution the value lies. This could be done many ways, one of which would be to determine how many standard deviations away from the mean the given value is; you could then combine these 2 values in some way to get your index (addition may no longer be sufficient).
You'd have to work out what makes the most sense for the data sets you're looking at. Standard deviations may well be meaningless for your application, but you need to look at statistical measures that relate to the distribution and combine those, rather than combining absolute values, normalized or not.
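A small sketch of the two options just described (variable names are placeholders):

minmax <- function(x) (x - min(x)) / (max(x) - min(x))   # plain [0, 1] scaling
zscore <- function(x) (x - mean(x)) / sd(x)              # position within the distribution

index_minmax <- minmax(in_degree) + minmax(betweenness_centrality)
index_z      <- zscore(in_degree) + zscore(betweenness_centrality)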