I have two vectors with 11 dimensions.
a <- c(-0.012813841, -0.024518383, -0.002765056, 0.079496744, 0.063928973,
0.476156960, 0.122111977, 0.322930189, 0.400701256, 0.454048860,
0.525526219)
b <- c(0.64175768, 0.54625694, 0.40728261, 0.24819750, 0.09406221,
0.16681692, -0.04211932, -0.07130129, -0.08182200, -0.08266852,
-0.07215885)
cosine_sim <- cosine(a,b)
which returns:
-0.05397935
I used cosine() from the lsa package.
For some inputs I am getting a negative cosine_sim, like the one above. I am not sure how the similarity can be negative; I thought it should be between 0 and 1.
Can anyone explain what is going on here?
The nice thing about R is that you can often dig into the functions and see for yourself what is going on. If you type cosine (without parentheses or arguments), R prints out the body of the function. Poking through it (which takes some practice), you can see that there is a bunch of machinery for computing the pairwise similarities of the columns of a matrix (the part wrapped in the if (is.matrix(x) && is.null(y)) condition), but the key line of the function is
crossprod(x, y)/sqrt(crossprod(x) * crossprod(y))
Let's pull this out and apply it to your example:
> crossprod(a,b)/sqrt(crossprod(a)*crossprod(b))
[,1]
[1,] -0.05397935
> crossprod(a)
[,1]
[1,] 1
> crossprod(b)
[,1]
[1,] 1
So your vectors are already normalized (each denominator term is 1), and only crossprod(a, b) is left to look at. In your case this is equivalent to
> sum(a*b)
[1] -0.05397935
(for genuinely matrix-valued arguments, crossprod is much more efficient than constructing the equivalent t(x) %*% y by hand).
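For instance, a quick illustrative timing sketch (exact numbers vary by machine and BLAS):

X <- matrix(rnorm(1e6), 1000, 1000)
system.time(t(X) %*% X)    # forms the transpose explicitly
system.time(crossprod(X))  # same result, typically faster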
As @Jack Maney's answer says, the dot product of two vectors a and b (which equals ||a|| * ||b|| * cos(theta), where theta is the angle between them) can be negative ...
For what it's worth, I suspect that the cosine function in lsa might be more easily/efficiently implemented for matrix arguments as as.dist(crossprod(x)) ...
edit: in comments on a now-deleted answer below, I suggested that the square of the cosine-distance measure might be appropriate if one wants a similarity measure on [0,1] -- this would be analogous to using the coefficient of determination (r^2) rather than the correlation coefficient (r) -- but that it might also be worth going back and thinking more carefully about the purpose/meaning of the similarity measures to be used ...
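A minimal sketch of that squared-cosine idea (cos2.sim is a hypothetical name, not part of lsa):

cos2.sim <- function(a, b) {
  # squared cosine similarity, guaranteed to lie in [0, 1]
  (sum(a * b) / sqrt(sum(a^2) * sum(b^2)))^2
}
cos2.sim(a, b)  # about 0.0029 for the vectors a, b above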
The cosine function returns
crossprod(a, b)/sqrt(crossprod(a) * crossprod(b))
In this case, both terms in the denominator are 1, but crossprod(a, b) is about -0.054.
The cosine function can take on negative values.
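For example, anti-parallel vectors give the minimum value:

library(lsa)
cosine(c(1, 0), c(-1, 0))  # returns -1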
While the cosine of two vectors can take any value between -1 and +1, cosine similarity (in document retrieval) usually takes values in the [0,1] interval. The reason is simple: a word-by-document matrix contains no negative values, so the maximum angle between two vectors is 90 degrees, for which the cosine is 0.
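For example, with nonnegative entries the similarity cannot drop below 0:

library(lsa)
cosine(c(3, 0, 1, 2), c(0, 2, 5, 1))  # about 0.34, within [0, 1]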
Related
I have a question for an assignment I'm doing.
Q:
"Set the seed at 1, then using a for-loop take a random sample of 5 mice 1,000 times. Save these averages.
What proportion of these 1,000 averages are more than 1 gram away from the average of x?"
I understand that, basically, I need to write code that answers: what percentage of the nulls is more than 1 gram above or below the average of x? I'm not really certain how to write that, given that the course is asking us to do this before it has shown us how. Any help on how to do so?
library(downloader)  # download() below comes from the downloader package
url <- "https://raw.githubusercontent.com/genomicsclass/dagdata/master/inst/extdata/femaleControlsPopulation.csv"
filename <- basename(url)
download(url, destfile=filename)
x <- unlist(read.csv(filename))
set.seed(1)
n <- 1000
nulls <- vector("numeric", n)
for(i in 1:n){
  control <- sample(x, 5)
  nulls[i] <- mean(control)
}
## I know my last line should be something like this:
## mean(nulls "+ or - 1") > or < mean(x)
## not certain if they're asking for abs() to be involved,
## or if the question is asking only for those that are 1 gram MORE than the avg of x.
Thanks for any help.
Z
I do think that the absolute distance is what they're after here.
Vectors in R are nice in that you can perform arithmetic operations between a vector and a scalar and they are applied element-wise, so computing the absolute value of nulls - mean(x) is easy. The abs function also accepts vectors as arguments.
Logical operators (such as < and >) can also be used in the same way, making it equally simple to compare the result with 1. This will yield a vector of booleans (TRUE/FALSE) where TRUE means the value at that index was indeed greater than 1, but booleans are really just numbers (1 or 0), so you can just sum that vector to find the number of TRUE elements.
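For instance, on a toy vector (deliberately not the assignment data):

v <- c(0.4, 1.7, -2.3, 0.9)
abs(v - 0.5)            # element-wise absolute deviations from 0.5
abs(v - 0.5) > 1        # TRUE where a deviation exceeds 1
sum(abs(v - 0.5) > 1)   # number of TRUEs: here, 2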
I don't know what programming level you are on, but I hope this helps without giving the solution away completely (since you said it's for an assignment).
I am trying to use solve() in R to find a solution for a 10x10 matrix. Specifically, I am looking for x in Ax = b, where b is a ten-dimensional zero vector. When I input solve(A, rep(0, 10)), R returns the trivial solution, namely rep(0, 10). I also checked: det(A) is nonzero, so A is not singular.
So how can I stop R from returning this result?
Since det(A) != 0, A is invertible. Premultiplying both sides of the equation by A^{-1} gives x = A^{-1} b, and the right-hand side is the zero vector because b is the zero vector. So the trivial solution is the only one.
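If you genuinely need a nontrivial solution to Ax = 0, then A must be singular, and the solutions form the null space of A. A minimal sketch using the SVD (with a made-up singular matrix, since yours is invertible):

A <- matrix(c(1, 2, 2, 4), 2, 2)   # rank 1, so det(A) == 0
s <- svd(A)
x <- s$v[, 2]                      # right singular vector for the zero singular value
A %*% x                            # numerically zero: x spans the null space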
I have a matrix containing the values 0, 1, and 2. 99% of the values are 0. The matrix has 1 million rows and 700 columns, and every row has at least one non-zero value.
I need to compute the distance between each pair of columns, using this formula for the distance between columns x and y:
D = sum(|x_i - y_i|, i = 1..L) / (2L), where L = 1,000,000 is the number of rows.
I wrote a piece of R code, but it's taking too long to run. Is it possible to use dynamic programming to do it faster? Here is my code:
#mac is the matrix
nCols=ncol(mac)
nRows=nrow(mac)
#the pairwise distance matrix
distMat=matrix(data=-1,nrow=nCols,ncol=nCols)
abs.dist=function(x){return(abs(x[1]-x[2]))}
for(i in 1:(nCols-1)){
  for(j in (i+1):nCols){
    d1=apply(mac[,c(i,j)],1,abs.dist)
    k=sum(d1)/(2*nRows)
    distMat[i,j]=k
    distMat[j,i]=k
  }
}
for(i in 1:nCols) distMat[i,i]=0
Thanks a lot for any help!
I will just summarize what is in the comments already:
#mac is the matrix
nCols=ncol(mac)
nRows=nrow(mac)
#the pairwise distance matrix
distMat=matrix(data=-1,nrow=nCols,ncol=nCols)
for(i in 1:(nCols-1)){
  for(j in (i+1):nCols){
    d1=abs(mac[,i]-mac[,j])
    k=sum(d1)/(2*nRows)
    distMat[i,j]=k
    distMat[j,i]=k
  }
}
diag(distMat) <- 0
This is approximately 100 times faster for a 2000x500 matrix.
It took about half a minute for a 1e6x700 matrix.
Computing a distance matrix means you need (n^2-n)/2 operations. I'm not surprised it is taking a while.
Since you need all pairs, the calculations are independent of one another, so dynamic programming will not help. DP helps when you can build a solution up from smaller, overlapping subproblems; everything here is independent, so DP won't help (as far as I know).
You said most entries are 0. Try looking at a sparse matrix library. This blog post may give you some ideas for doing this in R.
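For what it's worth, the double loop can also be collapsed into a single call to base R's dist(), which computes all pairwise Manhattan distances between rows (hence the transpose); whether this is feasible at your sizes depends on memory:

# columns of mac become rows of t(mac); divide by 2L as in the formula
distMat <- as.matrix(dist(t(mac), method = "manhattan")) / (2 * nrow(mac))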
I'm converting a rather complicated set of code from Matlab to R. I have zero experience in Matlab and am a functioning novice in R.
I have a segment of code which reads (in matlab):
dSii=(sum(tao.*Sik,1))'-(sum(m'))'.*Sii-beta.*Sii./N.*(Iii+sum(Iik)');
I've simplified it and will focus on the first segment (if I can solve the first segment, I'm confident I can handle the rest):
J = (sum(A.*B,1))' - ...
tao (or A) and Sik (or B) are matrices. So my assumption is I'm performing matrix multiplication here (A * B) and summing the resulting columns. The '1' is what is throwing me off in that statement. In R, that 1 would likely indicate we're talking about a sum of rows as opposed to columns (indicated by 2). But I can't find any supporting documentation for that kind of Matlab statement.
I was thinking of using a statement like this (but of course, too many '1's and ',')
J<- (apply(A*B, 1), 1, sum)
Thanks for all your help. I searched for other examples here and elsewhere and couldn't find an answer. I'm willing to work for it, but this is akin to studying French (which I don't know) to translate into Spanish (which I'm moderate in) while interpreting the whole process in English. :D
Because of the different conventions in R and Matlab, the idiosyncrasies of each have to be learned (just like your language analogy!). The Matlab command sum(A.*B,1) means: multiply A and B element-wise (so they must be the same shape), then sum along dimension 1, i.e. add the entries within each column to get the column sums. Dimension 1 is the default, so sum(A.*B) does the same thing as sum(A.*B,1). Because R's * is element-wise multiplication, the following Matlab and R codes produce the same column of numbers in J:
Matlab:
A=[[1,2,3];[4,5,6];[7,8,9]];
B=[[10,11,12];[13,14,15];[16,17,18]];
J=sum(A.*B,1)'; %the ' means to transpose the column sums to be a 3x1 matrix
R:
A<-matrix(c(1,2,3,4,5,6,7,8,9),3,byrow=T)
B<-matrix(c(10,11,12,13,14,15,16,17,18),3,byrow=T)
J<-matrix(colSums(A*B)) # no transpose needed here: nrow(J)==3
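Since apply was your first instinct, it may help to see the equivalent call. Note that MARGIN = 2 in R's apply means 'per column', so the dimension numbers end up differing between the two languages for the same operation:

J2 <- matrix(apply(A * B, 2, sum))  # same as matrix(colSums(A * B))
all.equal(J, J2)                    # TRUE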
Problem:
A few R packages feature Levenshtein distance implementations for computing the similarity of two strings, e.g. http://finzi.psych.upenn.edu/R/library/RecordLinkage/html/strcmp.html.
The distances computed can easily be normalised for string length, e.g. by dividing the Levenshtein distance by the length of the longest string involved or by dividing it by the mean of the lengths of the two strings.
For some applications in linguistics (e.g. dialectometry and receptive multilingualism research), however, it is recommended that the raw Levenshtein distance be normalised for the length of the longest least-cost alignment (Heeringa, 2004: 130-132).
This tends to produce distance measures that make more sense from a perceptual-linguistic point of view.
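For concreteness, here is a minimal sketch of the two simple normalisations (by longest string and by mean length), using base R's adist and the German/Swedish pair from the example below:

lev <- adist("tsYklUs", "sYkEl")          # raw Levenshtein distance: 4
lev / max(nchar(c("tsYklUs", "sYkEl")))   # by longest string: 4/7
lev / mean(nchar(c("tsYklUs", "sYkEl")))  # by mean length: 4/6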
Example:
The German string "tsYklUs" (Zyklus = cycle) can be converted into its Swedish cognate "sYkEl" (cykel = (bi)cycle) in a 7-slot alignment with two insertions (I) and two substitutions (S), for a total transformation cost of 4.
Normalised Levenshtein distance: 4/7
(A)
t--s--Y--k--l--U--s
---s--Y--k--E--l---
===================
I-----------S--S--I = 4
It is also possible to convert the strings in an 8-slot alignment with 3 insertions (I) and 1 deletion (D), also for a total alignment cost of 4.
Normalised Levenshtein distance: 4/8
(B)
t--s--Y--k-----l--U--s
---s--Y--k--E--l------
======================
I-----------D-----I--I = 4
The latter alignment makes more sense linguistically, because it aligns the [l]-phonemes with each other rather than with the [E] and [U] vowels.
Question:
Does anyone know of any R function that would allow me to normalise Levenshtein distances for the longest least-cost alignment rather than for string length proper?
Thanks for your input!
Reference:
W.J. Heeringa (2004), Measuring dialect pronunciation differences using Levenshtein distance. PhD thesis, University of Groningen. http://www.let.rug.nl/~heeringa/dialectology/thesis/
Edit - Solution: I think I figured out a solution. The adist function can return the alignment and seems to default to the longest low-cost alignment. To take up the example above, here's the alignment associated with sykel to tsyklus:
> attr(adist("sykel", "tsyklus", counts = TRUE), "trafos")
[,1]
[1,] "IMMMDMII"
To compute length-normalised distances as recommended by Heeringa (2004), we can write a modest function:
normLev.fnc <- function(a, b) {
  # raw edit distance divided by the length of the least-cost alignment,
  # i.e. the length of the "trafos" transformation string
  drop(adist(a, b) / nchar(attr(adist(a, b, counts = TRUE), "trafos")))
}
For the example above, this returns
> normLev.fnc("sykel", "tsyklus")
[1] 0.5
This function also returns the correct normalised distances for Heeringa's (2004: 131) examples:
> normLev.fnc("bine", "bEi")
[1] 0.6
> normLev.fnc("kaninçen", "konEin")
[1] 0.5555556
> normLev.fnc("kenEeri", "kenArje")
[1] 0.5
To compare several pairs of strings:
> L1 <- c("bine", "kaninçen", "kenEeri")
> L2 <- c("bEi", "konEin", "kenArje")
> diag(normLev.fnc(L1, L2))
[1] 0.6000000 0.5555556 0.5000000
In case any linguists stumble upon this post, I'd like to point out that the algorithms provided by the RecordLinkage package are not necessarily optimal for comparing non-ASCII strings, e.g.:
> levenshteinSim("väg", "way")
[1] -0.3333333
> levenshteinDist("väg", "way")
[1] 4
> levenshteinDist("väg", "wäy")
[1] 2
> levenshteinDist("väg", "wüy")
[1] 3