Normalize a vector?

How do you normalize an M*N vector so that the sum of all its elements equals 1? I browsed online a little, and nothing seems to quite match what I need. Thanks!

You add up all the elements, then divide each element by the sum.
Obviously, the division (at least) needs to be done in floating point. Since that means you're working with floating-point values, doing the summation while maintaining maximum accuracy is non-trivial.
Just for example, if you have one large element, and a lot of small elements, you'll probably get a more accurate result from adding all the small elements together, then adding that sum to the large element, than if you added each small element to the large one individually.
Edit: I suppose I should add that the usual way to deal with this is called Kahan summation, after the high guru of numerical analysis, William Kahan.
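A minimal sketch of both steps in plain Python (the function names here are my own; math.fsum offers similar protection out of the box):

```python
# Kahan (compensated) summation, then sum-normalization.
def kahan_sum(values):
    total = 0.0
    compensation = 0.0            # running estimate of lost low-order bits
    for v in values:
        y = v - compensation      # correct the next term
        t = total + y             # low-order digits of y may be lost here
        compensation = (t - total) - y   # recover what was lost
        total = t
    return total

def normalize_to_unit_sum(vec):
    s = kahan_sum(vec)
    return [v / s for v in vec]

# One large element plus many tiny ones, the scenario described above:
vec = [1e8] + [1e-8] * 1000
print(sum(normalize_to_unit_sum(vec)))   # ~1.0
```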

I think you have to divide every vector component by the Euclidean length (norm) of the vector. (Note: that gives a unit-length vector, which is different from making the elements sum to 1, as asked above.)

Array-processing: Eigenstructure of the Spatial Covariance Matrix

I've been staring at the following underlined statement from this book for hours, and I cannot for the life of me figure out how it can be right:
For some definitions:
is an r x r matrix (we may ignore its contents for this purpose).
A is an N x r matrix defined as the following matrix of column vectors, where each vector is N elements long:
First of all I'm convinced that when they write:
they really mean:
otherwise it simply would not make sense from the start. My confusion is with the claim that it is a linear combination of the column vectors of A.
At first I thought maybe it just wasn't obvious to me, so I started doing the calculations as an exercise.
My calculation (please don't make me type all this into a text equation editor):
I THINK my calculation is correct, but there was a lot to keep track of, so...
I did not do the multiplication with because it's trivial, and it doesn't solve the problem.
How can the products of different elements (complex conjugated no less) of the vectors in A end up as a linear combination of the columns of A?
Am I forgetting something fundamental here? Maybe something to do with the fact that is an eigenvector of ...?
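For reference, here is the standard argument, assuming the usual narrowband model in which the spatial covariance matrix is R = A S A^H + sigma^2 I, with S the r x r source covariance (an assumption on my part, since the book's equations did not survive). For an eigenvector e of R with eigenvalue lambda > sigma^2:

```latex
\begin{aligned}
R e &= \lambda e \\
(A S A^{H} + \sigma^{2} I)\, e &= \lambda e \\
A\,(S A^{H} e) &= (\lambda - \sigma^{2})\, e \\
e &= A \cdot \frac{S A^{H} e}{\lambda - \sigma^{2}}
\end{aligned}
```

The last line is A times an r x 1 vector, i.e. a linear combination of the columns of A; the eigenvector property is exactly the fact needed.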

generating completely new vector based on other vectors

Assume I have four vectors (v1, v2, v3, v4), and I want to create a new vector (vec_new) that is not close to any of those four vectors. I was thinking about interpolation and extrapolation. Do you think they are suitable? Do they also apply to vectors of, let's say, 300 dimensions? Another possible option would be a transformation matrix, but I am not sure it fits my concern. I think averaging and concatenation are not good options, as the result might be close to some of those four vectors.
To elaborate on my problem: imagine I divided my vectors into two categories. I need to find a vector that belongs to neither of those categories.
Any other ideas?
Per my comment, I wouldn't expect the creation of synthetic "far away" examples to be useful for realistic goals.
Even things like word antonyms are not maximally cosine-dissimilar from each other, because among the realm of all word-meaning-possibilities, antonyms are quite similar to each other. For example, 'hot' and 'cold' are considered opposites, but are the same kind of word, describing the same temperature-property, and can often be drop-in replacements for each other in the same sentences. So while they may show an interesting contrast in word-vector space, the "direction of difference" isn't going to be through the origin -- as would create maximal cosine-dissimilarity.
And in classification contexts, even a simple 2-category classifier will need actual 'negative' examples. With only positive examples, the 'vector space' won't necessarily model anything about hypothesized-but-not-actually-present negative examples. (It's nearly impossible to divide the space into two categories without training examples showing the real "boundaries".)
Still, there's an easy way to make a vector that is maximally dissimilar to another single vector: negate it. That creates a vector that's in the exact opposite direction from the original, and thus will have a cosine-similarity of -1.0.
If you have a number of vectors against which you want to find a maximally-dissimilar vector, I suspect you can't do much better than negating the average of all the vectors. That is, average the vectors, then negate that average-vector, to find the vector that's pointing exactly-opposite the average.
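A quick NumPy sketch of that idea (the function names are illustrative, not from any particular library):

```python
import numpy as np

def cosine_similarity(a, b):
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def maximally_dissimilar(vectors):
    """Negate the mean: points exactly opposite the average direction."""
    return -np.mean(vectors, axis=0)

rng = np.random.default_rng(0)
vs = rng.normal(size=(4, 300))        # four 300-dimensional vectors
new = maximally_dissimilar(vs)
print([round(cosine_similarity(new, v), 3) for v in vs])  # typically all negative
```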
Good luck!

How do I create a genetic-algorithm child with constrained rows and columns for the N-Queens problem?

For the N-Queen problem found here, I am trying to implement a genetic algorithm to solve it.
However, let's say that I am trying to constrain the problem. We know that to get an attacking value of 0, you can't have two queens in the same row or column. So I limit the boards to always have a different row and column for each queen. I want the genetic algorithm to find a solution where the diagonals are also not attacking.
My problem is with creating a child for this solution using a genetic algorithm. What is a good way to generate a child from two parent boards that follows that the children must not have queens in overlapping rows and columns?
Avoiding both overlapping rows and columns is difficult in a genetic algorithm. The typical approach is to implicitly represent the columns by the index in an array, and then have the queens represented by numbers 1..N.
So, a solution to the 8-queen problem would be represented by (5 1 8 4 2 7 3 6).
If you take any subset of the array you can mix it with another array and be guaranteed that there is one queen in each column (or row - however you prefer to think of this).
You can avoid both column and row overlap by using combinatorics (so there are N! arrangements instead of N^N), but the issue is that the representation required to do this (you can essentially use integers to represent full configurations) doesn't work as well for crossover operations. You will also run into the limit of integer representations. Using an array as above works fairly well, so I would suggest exploring that approach first.
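As a concrete starting point, order crossover (OX) is one standard way to recombine two permutation-encoded parents so that the child is still a permutation, i.e. exactly one queen per column and per row (diagonal conflicts remain for the fitness function to sort out). A minimal Python sketch, with the cut-point choice being my own:

```python
import random

def order_crossover(p1, p2):
    """Copy a random slice from p1; fill the rest with p2's genes in order."""
    n = len(p1)
    i, j = sorted(random.sample(range(n), 2))
    child = [None] * n
    child[i:j + 1] = p1[i:j + 1]
    used = set(child[i:j + 1])
    fill = iter(g for g in p2 if g not in used)
    for k in range(n):
        if child[k] is None:
            child[k] = next(fill)
    return child

p1 = [5, 1, 8, 4, 2, 7, 3, 6]    # the example board from above
p2 = [3, 6, 2, 5, 8, 1, 7, 4]
print(order_crossover(p1, p2))    # still a permutation of 1..8
```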

Finding a closest looking segment of data in another sequence

I am doing image processing, in which I came across a situation where I have to compare two vectors and find an instance of the smaller vector in the larger one.
Say the two vectors are A, with 100 elements (or entries), and B, with 10 elements. B is a model and may not be present exactly as-is in vector A. I can compare 10 elements at a time and find the difference. In the ideal case, B is present somewhere and the difference is zero; otherwise the minimum will occur at some random location, and I will miss the true location.
Please help me with an algorithm so that I can find B's closest instance in A.
What you are looking for is the cross-correlation function. The peak of the cross-correlation of the two vectors will be the point where vector B is most similar to vector A.
You may want to look at how it is implemented in MATLAB, as that gives an easier explanation of how this operation can be implemented in software.
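If you want something more hands-on, here is a brute-force sketch of the same idea in Python/NumPy: slide B along A and score every offset (sum of squared differences here, whose minimum plays the role of the cross-correlation peak); the variable names are mine:

```python
import numpy as np

def best_match(A, B):
    """Return the offset in A where the window most resembles B."""
    n, m = len(A), len(B)
    scores = [np.sum((A[k:k + m] - B) ** 2) for k in range(n - m + 1)]
    return int(np.argmin(scores))

rng = np.random.default_rng(1)
A = rng.normal(size=100)
B = A[42:52] + rng.normal(scale=0.01, size=10)  # noisy copy of A[42:52]
print(best_match(A, B))                          # expect 42
```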

Movement data analysis in R; Flights and temporal subsampling

I want to analyse angles in movement of animals. I have tracking data that has 10 recordings per second. The data per recording consists of the position (x,y) of the animal, the angle and distance relative to the previous recording and furthermore includes speed and acceleration.
I want to analyse the speed an animal has while making a turn of a particular angle; however, since the temporal resolution of my data is so high, each turn consists of a number of minute angles.
I figured there are two possible ways to work around this problem for both of which I do not know how to achieve such a thing in R and help would be greatly appreciated.
The first: reducing my temporal resolution by a certain factor. However, this brings the disadvantage of losing possibly important parts of the data. Despite this, how would I automatically subsample, for example, every 3rd or 10th recording of my data set? (See the subsampling sketch after the second option below.)
The second: by converting straight movement into so-called 'flights': rule-based aggregation of steps in approximately the same direction, separated by acute turns (see the figure). A flight between two points ends when the perpendicular distance from the main direction of that flight is larger than x, a value that can be set arbitrarily. Does anyone have an idea how to do that with the x,y coordinate positional data that I have?
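On the first option: subsampling every nth recording is a one-liner in most data-frame tools. A minimal sketch in Python/pandas with made-up column names (in R the equivalent is seq() indexing, e.g. track[seq(1, nrow(track), by = 3), ]):

```python
import pandas as pd

# Hypothetical tracking data: 10 recordings per second.
track = pd.DataFrame({
    "x": range(100), "y": range(100),
    "angle": [0.1] * 100, "speed": [1.0] * 100,
})
every_3rd = track.iloc[::3]     # keep rows 0, 3, 6, ...
every_10th = track.iloc[::10]   # keep rows 0, 10, 20, ... (one per second)
print(len(every_3rd), len(every_10th))   # 34 10
```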
It sounds like there are three potential things you might want help with: the algorithm, the math, or R syntax.
The algorithm you need may depend on the specifics of your data. For example, how much data do you have? What format is it in? Is it 2D or 3D? One possibility is to iterate through your data set: with each new point, you check all the previous points to see whether they fall within your desired column. If the data set is large, however, this might be really slow. In the worst case, all the data points are in a single flight segment, meaning you would check the first point as many times as you have data points, the second point one time fewer, and so on. That means n + (n-1) + (n-2) + ... + 1 = n(n+1)/2 operations, which is O(n^2); the running time could grow quadratically with the size of your data set. Hence, you may need something more sophisticated.
The math to check whether a point is within your desired column of width x is pretty straightforward, although more sophisticated math could help inform a better algorithm. One approach is vector arithmetic. For example, suppose you have points A, B, and C, and your goal is to see if B falls in a column of width x around the vector from A to C. To do this, find a vector v orthogonal to the vector from A to C, then check whether the magnitude of the scalar projection of the vector from A to B onto v is less than x. There is lots of literature available for help with this sort of thing; here is one example.
I think this is where I might start (with a boolean function for an individual point), since it seems like an R function to determine this would be convenient. Then write another function that takes a set of points, calculates the vector v, and calls the first function for each point in the set. Then run some data and see how long it takes.
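Something like this minimal sketch (2D points, shown in Python/NumPy rather than R since the logic carries over directly; all names are mine):

```python
import numpy as np

def in_column(A, B, C, x):
    """Is B within perpendicular distance x of the direction from A to C?"""
    d = C - A
    v = np.array([-d[1], d[0]]) / np.linalg.norm(d)  # unit vector orthogonal to A->C
    return abs(np.dot(B - A, v)) <= x                # scalar projection onto v

def all_in_column(points, x):
    """Check every interior point against the first-to-last direction."""
    A, C = points[0], points[-1]
    return all(in_column(A, B, C, x) for B in points[1:-1])

pts = [np.array(p, dtype=float) for p in [(0, 0), (1, 0.2), (2, -0.1), (3, 0)]]
print(all_in_column(pts, x=0.5))   # True: all points stay near the A->C line
```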
I'm afraid I won't be of much help with R syntax, although it is on my list of things I'd like to learn. I checked out the manual for R last night and it had plenty of useful examples. I believe this is very doable, even for an R novice like myself. It might be kind of slow if you have a big data set. However, with something that works, it might also be easier to acquire help from people with more knowledge and experience to optimize it.
Two quick clarifying points in case they are helpful:
The above suggestion is just to start with the data for a single animal, so when I talk about growth of data I'm talking about the average data sample size for a single animal. If that is slow, you'll probably need to fix that first. Then you'll need to potentially analyze/optimize an algorithm for processing multiple animals afterwards.
I'm implicitly assuming that a flight segment is defined as the largest set of contiguous data points in which no "sub" flight segment violates the column rule. That is to say, I think I could come up with a set of points that satisfies your rule of falling within a column of width x around the vector to the last point, but where, if you looked at the column of width x around the vector to the second-to-last point, one point would no longer meet the criterion. Depending on how you define the flight segment (e.g. if you want it to be the largest possible set of points that meets your condition and don't care about what happens inside), you may need something different (e.g. working backwards instead of forwards).
