Vectors and Matrices

I've got a little question about vectors and matrices in the programming world.
I've read a lot of articles where people use "vector" or "matrix" to mean arrays.
I can understand the "why" for matrices, because they are like an imaginary table with rows and columns, so they can be compared to multi-dimensional arrays.
But I cannot explain to myself the similarity between a one-dimensional array and a vector.
A vector is a thing which has a direction and a magnitude, representable by an arrow, so why compare it to a one-dimensional array, which is a collection of scalar values?
I also read about a recent vulnerability in the Yahoo toolbar:
Anyone using the Y! Toolbar could simply get their Yahoo, Google, YouTube, and other accounts hijacked by visiting any of those websites containing an XSS vector. Since these are highly reputable websites, it is easier for attackers to hijack accounts, because users trust sites with a good reputation even when they contain malicious code designed for an attack.
I know what XSS is, but an "XSS vector"... oh my god, no idea.
Regards

The direction and magnitude of a vector can be described in the form of coordinates of the end point of the vector when the start of the vector is at the origin. When expressing a vector as a one-dimensional array, the elements of the array are the coordinates of that end point.
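For example, here is a minimal sketch in Python (the values are just for illustration) of a 2D vector stored as a one-dimensional array whose elements are the end-point coordinates:

import math

v = [3.0, 4.0]                      # the vector (3, 4), start assumed at the origin
magnitude = math.hypot(v[0], v[1])  # length of the arrow: 5.0
direction = math.atan2(v[1], v[0])  # angle of the arrow in radians
print(magnitude, direction)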
With respect to security vulnerabilities, the use of the word is a bit different: there, the "vector" is the route by which security is bypassed. It's related only in the sense that the "vector" for a security vulnerability can, metaphorically, be thought of as an arrow getting past the security measures.

Related

R geographic address validation

I am trying to calculate physical distances between geographic locations (addresses) with the mapdist function from the ggmap package in R. Apart from the uncomfortable fact that Google Maps allows only 2500 queries per session, I have to cope with misspelled or otherwise imperfect "addresses". The most typical problem is that the exact address strings have other information appended to them (floor, door, etc.), and it is very hard to detect any pattern in these additions that would allow applying a regular expression.
My goal is:
Check if the address string is recognizable to Google Maps;
If not, find a way to truncate it to an acceptable form, perhaps by parsing words off the string step by step.
Has anybody coped with this kind of problem?
Thanks.
There are a couple of factors running into each other here. One is the misspellings and other complexities in the addresses; the other is pinpointing (geocoding) a given address. Although they are related problems, each must be handled to accomplish your objectives.
There are numerous service providers out there that can do either or both at minimal cost. They can be found with a simple Google search. You can then investigate each to see whether it matches your use case and licensing requirements.
All of that considered, you'll want to get your address list cleaned up at a minimum. Doing that will enable you to utilize any number of geocoding providers.
Depending upon the size of your list, you can get your list cleaned up and geocoded for perhaps $20.
In the interest of full disclosure, I'm the founder of SmartyStreets. We provide a web interface (to help clean up the address list) as well as an API (which can be used on a continual basis to keep addresses clean). We also geocode your list at no extra charge. Further, we don't have any licensing restrictions on the number of lookups that can be performed during a given timeframe. (We have customers that hit us hundreds of millions of times per day.) The entire process of signing up and cleaning up your list takes just a few minutes.
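Regarding the truncation idea from the question, here is a rough sketch (in Python rather than R, with a hypothetical geocode() stub standing in for ggmap/mapdist or any other geocoding service) of parsing words off the string until the geocoder recognizes it:

def geocode(address):
    # Placeholder: return coordinates for a recognized address, or None when the
    # service cannot resolve it. Replace with a real API call (e.g. ggmap's mapdist
    # in R, or any geocoding web service).
    raise NotImplementedError("plug in a real geocoding call here")

def geocode_with_truncation(address, min_words=2):
    # Try the full address first, then drop trailing words (floor, door, etc.)
    # one at a time until the geocoder recognizes the string.
    words = address.split()
    while len(words) >= min_words:
        candidate = " ".join(words)
        result = geocode(candidate)
        if result is not None:
            return candidate, result
        words = words[:-1]          # drop the last word and retry
    return None, None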

Selecting optimal combinations

I have a problem that I am currently solving via brute force, but am looking for a more elegant solution. I have a system that runs various functions across multiple nodes. Each function is defined by a 'role'. Each 'role' can be defined so that one or more clients are allowed to hold it. Additionally, preference may be given to a particular client (or clients) over other clients.
The complexity comes in that it is also possible for 'roles' to be related to each other. For example, a client may only be able to hold 'RoleA' if they don't hold 'RoleB', or a client may only be able to hold 'RoleC' if they hold 'RoleD'. Additionally, roles can be related preferentially (i.e. it is preferred that a client holding 'RoleE' holds 'RoleF', but that this is not mandatory).
A client may advertise its willingness to hold any number of roles, but is not required to do so. For example, 'client1' may advertise for roles 'A', 'B', and 'C', while 'client2' may only advertise for roles 'A' and 'B'.
I have solved this problem in a brute force fashion, but obviously, as the number of related roles increases, solving it takes exponentially longer.
Currently, my algorithm is:
Work out all of the possible combinations of clients advertising a given role, then assess that role in isolation to generate a list of legal combinations, ordered by preference.
Generate all possible combinations from the lists produced in the previous step and iterate over them, deciding which is the 'most optimal' based on heuristics around mandatory, illegal, favoured, and unfavoured relationships within the group of roles. This is the part that explodes exponentially as the number of related roles increases.
I have tried some 'early out' approaches whereby a theoretical maximum possible 'score' is determined from the role relationships, and as soon as we encounter a combination whose 'score' reaches that maximum we stop processing, but I'm wondering if there's a more mathematical solution. Any solution is presumably going to be an approximation of the optimal combination, but that is fine.
Ideally I need something that can run in under a second.
Hopefully my explanation is not too vague and someone can point me in the right direction!
Thanks in advance.
Cam
Sounds like the Boolean satisfiability problem (SAT) with some extra complications. SAT is NP-complete, so no known algorithm solves it in better than exponential time in the worst case; however, there are algorithms (mentioned in the link) that do much better than brute force in practice.
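As one concrete alternative to exhaustive enumeration, a local-search heuristic (hill climbing with random restarts) often finds a good approximate assignment quickly. Below is a minimal sketch in Python; the role names, the advertised map, and the score() function are toy stand-ins for your real mandatory/illegal/favoured/unfavoured heuristics:

import random

# Toy data: which clients advertise which roles (all names are illustrative).
advertised = {
    "RoleA": ["client1", "client2"],
    "RoleB": ["client1", "client2"],
    "RoleC": ["client1"],
}

def score(assignment):
    # Stand-in for the real heuristics: reward preferred pairings, penalise illegal ones.
    s = 0
    if assignment["RoleA"] == assignment["RoleB"]:   # illegal: same client holds A and B
        s -= 100
    if assignment["RoleC"] == assignment["RoleA"]:   # preferred: same client holds C and A
        s += 10
    return s

def random_assignment():
    return {role: random.choice(clients) for role, clients in advertised.items()}

def hill_climb(restarts=20, steps=200):
    best, best_score = None, float("-inf")
    for _ in range(restarts):
        current = random_assignment()
        for _ in range(steps):
            role = random.choice(list(advertised))
            neighbour = dict(current)
            neighbour[role] = random.choice(advertised[role])
            if score(neighbour) >= score(current):   # accept sideways or uphill moves
                current = neighbour
        if score(current) > best_score:
            best, best_score = current, score(current)
    return best, best_score

print(hill_climb())

This runs in well under a second on small instances and scales far better than enumerating every combination, at the cost of only approximating the optimum.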

Give me a practical use-case of Multi-set

I would like to know a few practical use cases (if they are not related/tied to any programming language, even better). I can associate Sets, Lists, and Maps with practical use cases.
For example, if you wanted a glossary of a book where the terms you want are listed alphabetically and a location/page number is the value, you would use a TreeMap (an ordered Map).
Somehow, I can't associate multisets with any "practical" use case. Does someone know of any uses?
http://en.wikipedia.org/wiki/Multiset does not tell me enough :)
PS: If you guys think this should be community-wiki'ed it is okay. The only reason I did not do it was "There is a clear objective way to answer this question".
Lots of applications. For example, imagine a shopping cart. It can contain more than one instance of an item - e.g. 2 CPUs, 3 graphics boards, etc. So it is a multiset. One simple implementation is to keep track of the count of each item - i.e. store the information "2 CPUs", "3 graphics boards", and so on.
I'm sure you can think of lots of other applications.
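A minimal sketch of that count-based implementation, using Python's collections.Counter as the multiset (the item names are just examples):

from collections import Counter

cart = Counter()                 # the cart is a multiset: a count per item
cart["cpu"] += 2
cart["graphics board"] += 3
cart["cpu"] += 1                 # adding another instance of an existing item
print(cart["cpu"])               # 3
print(sum(cart.values()))        # 6 items in total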
A multiset is useful in many situations in which you'd otherwise have a Map. Here are three examples.
Suppose you have a class Foo with an accessor getType(), and you want to know, for a collection of Foo instances, how many have each type.
Similarly, a system could perform various actions, and you could use a Multiset to keep track of how many times each action occurred.
Finally, to determine whether two collections contain the same elements, ignoring order but paying attention to how often instances are repeated, simply call
HashMultiset.create(collection1).equals(HashMultiset.create(collection2))
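The same three use cases can be sketched with Python's collections.Counter, which plays the role of HashMultiset here (the type and value names are made up for the example):

from collections import Counter

# 1. How many instances of each type are in a collection of Foo objects?
types = ["widget", "gadget", "widget"]          # stand-in for [foo.getType() for foo in foos]
type_counts = Counter(types)                    # Counter({'widget': 2, 'gadget': 1})

# 2. How many times did each action occur?
actions = Counter()
actions["login"] += 1
actions["login"] += 1
actions["logout"] += 1

# 3. Do two collections contain the same elements, ignoring order but respecting multiplicity?
same = Counter([1, 2, 2, 3]) == Counter([3, 2, 1, 2])   # True
print(type_counts, actions, same)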
In some fields of mathematics, a set is treated as a multiset for all purposes. For example, in linear algebra, a set of vectors is treated as a multiset when testing for linear dependency.
Thus, implementations in these fields should benefit from the use of multisets.
You may say linear algebra isn't practical, but that is a whole different debate...
A Shopping Cart is a MultiSet. You can put several instances of the same item in a Shopping Cart when you want to buy more than one.

Is information a subset of data?

I apologize as I don't know whether this is more of a math question that belongs on mathoverflow or if it's a computer science question that belongs here.
That said, I believe I understand the fundamental difference between data, information, and knowledge. My understanding is that information carries both data and meaning. One thing that I'm not clear on is whether information is data. Is information considered a special kind of data, or is it something completely different?
The words data, information, and knowledge are value-based concepts used to categorize, in a subjective fashion, the general "conciseness" and "usefulness" of a particular information set.
These words have no precise meaning because they are relative to the underlying purpose and methodology of information processing; in the field of information theory they have no distinct meaning at all, because all three are the same thing: a collection of "information" (in the information-theoretic sense).
Yet they are useful, in context, to summarize the general nature of an information set, as loosely explained below.
Information is obtained (or sometimes induced) from data, but it can be richer, as well as cleaner (whereby some values have been corrected) and "simpler" (whereby some irrelevant data has been removed). So in the set-theory sense, Information is not a subset of Data, but a separate set [which typically intersects, somewhat, with the data but can also have elements of its own].
Knowledge (sometimes called insight) is yet another level up; it is based on information and, likewise, is not a [set-theory] subset of information. Indeed, knowledge typically doesn't refer directly to information elements, but rather tells a "meta story" about the information/data.
The unfounded idea that, along the Data -> Information -> Knowledge chain, the higher levels are subsets of the lower ones probably stems from the fact that there is [typically] a reduction in the volume of [IT-sense] information. But qualitatively this information is different, hence no real [set-theory] subset relationship.
Example:
Raw stock exchange data from Wall Street is ... Data
A "sea of data"! Someone has a hard time finding what he/she needs, directly, from this data. This data may need to be normalized. For example the price info may sometimes be expressed in a text string with 1/32th of a dollar precision, in other cases prices may come as a true binary integer with 1/8 of a dollar precision. Also the field which indicate, say, the buyer ID, or seller ID may include typos, and hence point to the wrong seller/buyer. etc.
A spreadsheet made from the above is ... Information
Various processes were applied to the data:
- cleaning / correcting various values
- cross-referencing (for example, looking up associated codes, such as adding a column to display the actual name of the individual/company next to the Buyer ID column)
- merging, when duplicate records pertaining to the same event (but, say, from different sources) are used to corroborate each other and are also combined into one single record
- aggregating: for example, making the sum of all transaction values for a given stock (rather than showing all the individual transactions)
All this (and then some) turned the data into Information, i.e. a body of [IT-sense] information that is easily usable, where one can quickly find some "data", such as, say, the opening and closing rate for the IBM stock on June 8th, 2009.
Note that while it is more convenient to use, in part more exact/precise, and also boiled down, there is no real [IT-sense] information in there which couldn't be located or computed from the original by relatively simple (if painstaking) processes.
A financial analyst's report may contain ... knowledge
For example, the report might indicate [bogus example] that whenever the price of oil goes past a certain threshold, the value of gold starts declining but then quickly spikes again, around the time the prices of coffee and tea stabilize. This particular insight constitutes knowledge. The knowledge may have been hidden in the data all along, but it only became apparent when someone applied some fancy statistical analysis and/or enlisted a human expert to find or confirm such patterns.
By the way, in the information-theory sense of the word Information, "data", "information" and "knowledge" all contain [IT-sense] information.
One could possibly get on the slippery slope of stating that "as we go up the chain, the entropy decreases", but that is only loosely true because:
entropy decrease is not directly or systematically tied to "usefulness for humans"
(a typical example is that a zipped text file has less entropy yet is no fun to read)
there is effectively a loss of information (in addition to the entropy loss)
(for example, when data is aggregated, the [IT-sense] information about individual records gets lost)
there is, particularly in the case of Information -> Knowledge, a change in the level of abstraction
A final point (if I haven't confused everybody yet...) is the idea that the data -> info -> knowledge chain is effectively relative to the intended use/purpose of the [IT-sense] information.
ewernli, in a comment below, provides the example of the spell checker: when the focus is on English orthography, the most insightful paper from a Wall Street genius is merely a string of words, effectively "raw data", some of it in need of improvement (along the orthography purpose chain).
Similarly, a linguist using thousands of newspaper articles which typically (we can hope...) contain at least some insight/knowledge (in the general sense) may just consider these articles raw data, which will help him/her automatically create a French-German lexicon (this would be information); and, as he works on the project, he may discover a systematic semantic shift in the use of common words between the two languages, and hence gain insight into the distinct cultures.
Define information and data first, very carefully.
What is information and what is data is very dependent on context. An extreme example is a picture of you at a party which you email. For you it's information, but for the ISP it's just data to be passed on.
Sometimes just adding the right context changes data into information.
So, to answer your question: no, information is not a subset of data. It could be any of the following:
A superset, when you add context
A subset, as in a needle-in-a-haystack situation
A function of the data, e.g. a digest
There are probably more situations.
This is how I see it...
Data is dirty and raw. You'll probably have too much of it.
... Jason ... 27 ... Denton ...
Information is the data you need, organised and meaningful.
Jason.age=27
Jason.city=Denton
Knowledge is why there are wikis, blogs: to keep track of insights and experiences. Note that these are human (and community) attributes. Except for maybe a weird science project, no computer is on Facebook telling people what it believes in.
information is an enhancement of data:
data is inert
information is actionable
note that information without data is merely an opinion ;-)
Information could be data if you had some way of representing the additional content that makes it information. A program that tries to 'understand' written text might transform the input text into a format that allows for more complex processing of the meaning of that text. This transformed format is a kind of data that represents information, when understood in the context of the overall processing system. From outside the system it appears as data, whereas inside the system it is the information that is being understood.

Way to infer the size of a site's userbase by sampling taken usernames

Suppose you wanted to estimate the size of a userbase of a site which does not publicize this information.
Different usernames are likely to have been taken with different probabilities. For instance, if the username 'nick' doesn't exist on the system, the site is likely to have an extremely small userbase. If the username 'starbaby' is taken, it's likely to be a much larger site. It seems like a straightforward Bayesian problem.
There is the problem that different sites may have a different space of allowable usernames. The biggest problem would be the legality of common characters such as spaces, I imagine. Another issue that could taint the prior distribution is whether the site suggests names when the one you want is taken, or leaves you to think of a more creative name yourself.
How could you build a training set of the frequency of occurrence of usernames across different sized systems? Is there a way to use Bayes to do numeric estimation rather than classification into fixed-width buckets?
What you need to do is accurately estimate the probability that a certain username is present given the number of users registered. Let's say N is the number of users and, for a given username, u = 1 if that username is present and 0 if it is absent.
First of all, make the assumption that the probability distributions for each user name are independent of each other. This is not going to be true - and you've already come up with one reason why - but it will probably be necessary since it makes the data collection and the maths a lot easier.
You are going to need a lot of data from sites with registered usernames and the total number of users of each site. Now, take any specific username and imagine your data points on a 2D plot (with N on the x axis and u on the y axis); there will be one horizontal line of points at y=0 and another at y=1. You can either bin the x axis as you suggest and take the mean y coordinate of all the data points in each bin to get a discrete function, or you could try to fit the points on the graph to some class of functions. I don't really know what that class of functions would be - maybe some kind of power law? (I'm thinking of Zipf's law.)
You now have the probability distributions to apply Bayes' rule. I don't know what kind of prior for N you would want to use. A uniform distribution (up to some large number) would make no assumptions, but I would guess most sites have a small user base.
I suspect that in order to make this work, when you sample users from a site you will need to do so for a specific set of users. I'm betting that the popularity of user names is going to have a very long tail and so a random sample of users is going to give you a lot of very infrequently used names and therefore a lot of uninformative evidence.
EDIT: I had another thought; in most forums (and on StackOverflow) users have consecutive user ids, so you can use a single site with a large number of users to give you estimates for all smaller N.
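To make the Bayes' rule step concrete, here is a rough sketch in Python; the candidate N grid, the prior, and the p_taken() curve are all placeholders that would have to be estimated from the training data described above:

import numpy as np

# Candidate userbase sizes and a prior favouring smaller sites (both illustrative).
N_values = np.arange(1, 10_000_001, 10_000)
prior = 1.0 / N_values
prior /= prior.sum()

def p_taken(username, N):
    # Placeholder: probability that `username` is taken on a site with N users.
    # In practice this curve would be fitted from data, e.g. with a Zipf-like model.
    popularity = {"nick": 1e-3, "starbaby": 1e-6}.get(username, 1e-7)
    return 1.0 - np.exp(-popularity * N)

# Observations from the target site: (username, taken?)
observations = [("nick", True), ("starbaby", False)]

posterior = prior.copy()
for name, taken in observations:
    likelihood = p_taken(name, N_values)
    posterior *= likelihood if taken else (1.0 - likelihood)
posterior /= posterior.sum()

print("posterior mean of N:", (N_values * posterior).sum())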
I think this is a cool idea!
You may be able to put together a data set by using UserNameCheck.com for some different usernames and cross-referencing the results with the stated userbase sizes of those sites that give them out.
Note: that website does not seem to check whether the usernames are valid for the site, so e.g. it thinks Gmail would let you register "nick@gmail.com" even though that's too short.
The only way is to get a large set of taken usernames on systems for which you know the size of the userbase. Data may be skewed in userbases where certain names are more common. Even a tiny userbase from a Lord of the Rings forum will likely contain the username Strider, for example.
