LinkedIn: 2nd-degree connection checking

We have a list of about 400 people who are prospective biz dev targets that we are cultivating.
We're looking for a way to see who on that list I'm a 2nd-degree connection with (and also who the mutual connections are). It can be done manually for each one of the 400, but this takes a long time. Also, when I add a new contact, I'd like to see if they are linked to any of the 400.
Perhaps it's possible with the API or some such?

Related

Multiple SabreCommandLLSRQ HostCommands

Anyone know how to send multiple chained commands via SabreCommandLLSRQ HostCommand?
I'm trying to avoid the network back-and-forth (it gets slow when you need 50 commands to get one full page).
Ex:
FQYYZBKK15JAN-AC + all the necessary move downs, or get the fare rules for all the 99 or so fares there.
Sounds like you want to scrape all available fares plus their attributes (penalties, baggage, etc.). I can't ask for clarification in a comment because of my reputation.
If so, you might be better off with Vayant or some fare-caching company, as it is way cheaper (you are producing tons of looks but no books, and might break the look-to-book limit that has been agreed upon with Sabre).
If you want to stay with Sabre, you should use FareLLSRQ (also called Air Fare by City Pairs) plus OTA_AirRulesLLSRQ.

Google Analytics Add-on limitations

I would like to get some data from GA via the spreadsheet add-on, as I did a few weeks ago (I gathered ~200,000 rows). I am using the same metrics, dimensions, and the rest of the settings, but I keep getting this error:
https://i.stack.imgur.com/hTpIg.png
I found that I do get some data when I do not set "max-results", but the default is 1000 rows, which is not enough for my needs. Why?
What I have tried to solve this problem, without success:
change GA views
change dimensions and metrics
change time range
create new spreadsheet
set up sharing settings of spreadsheet to "public on web"
I found the page about API limits and quotas (https://developers.google.com/analytics/devguides/config/mgmt/v3/limits-quotas#), which says I am allowed only 50,000 requests per project, a number I would apparently exceed on the very first run. So another question: how is it even possible that I got more data than I was supposed to?
Should I really order more requests, or does "request" mean something other than "one row"? And if so, what?
There is no explanation given for the error.
Perhaps I am missing something, appreciate your help.
In short: while one can only guess what causes your problem, it's almost certainly not the API limit. Rows and requests are not at all the same thing; every request may fetch up to 10,000 rows.
"Request" is a call to the API, which might include one or many rows of data (unless your script somehow only requests one row at a time, which would be unusual).
If you exceeded your API quota the error message would say pretty much that.
The default is 1000 rows because that's a sensible compromise between convenience and performance. The API will return at most 10,000 rows per request, so to fetch 200,000 results the add-on only has to make 20 requests, not 50,000.
Also, a Google spreadsheet supports at most 2 million cells, which your result set might exceed.
"Service error" is a very unspecific error message that can have a variety of causes, from out-of-bounds ranges to script timeouts or network latency. Sometimes the spreadsheet service dumps an additional error message in the browser console, so you should check your developer tools.

How to spoof location so the Google autocomplete API will provide local results, ideally with R

Google has an API for downloading search suggestions:
https://www.google.com/support/enterprise/static/gsa/docs/admin/70/gsa_doc_set/xml_reference/query_suggestion.html
Unfortunately, as far as I can tell, these results are specific to your location. For an analysis, I would like to be able to define the city/location that Google thinks it is making the suggestion for. Here's what happens when I scrape from Dar es Salaam, Tanzania:
http://suggestqueries.google.com/complete/search?client=firefox&q=insurance
["insurance",["insurance","insurance companies in tanzania","insurance group of tanzania","insurance principles","insurance act","insurance policy","insurance act tanzania","insurance act 2009","insurance definition","insurance industry in tanzania"]]
I understand that a VPN would partially solve this issue, but only by giving me a different location, not lots of locations. Is there a reasonable way to replicate this sort of thing quickly and easily from, say, the 100 largest cities in the United States?
(I have confirmed that results do differ within the USA.)
Thanks!
Google will use your IP and your location history (if turned on) to determine your location.
To get around that, you can spoof your IP while logged out of your Google account (though I don't know whether Google would consider this an attempt at hacking, whatever your intentions are).
Another way is to use the Tor Browser (even though that is not its original purpose). You can configure Tor to exit from a certain country using the ExitNodes parameter in the torrc config file.
As found in the docs:
ExitNodes node,node,…
A list of identity fingerprints, country codes, and address patterns of nodes to use as exit node
But if you want a fast way to do it, I don't think that's possible, since Google wants to know the real location of its users and has put a lot of effort into making such tricks fail.
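As a rough illustration of the Tor route (the question asks for R, but this Python sketch of the same idea translates directly to httr): assume Tor is running locally with its SOCKS proxy on 127.0.0.1:9050, torrc pins the exit country (e.g. ExitNodes {us} plus StrictNodes 1), and the requests[socks] extra is installed.

    # Hedged sketch: query the suggest endpoint through a local Tor SOCKS proxy.
    import requests

    TOR_PROXIES = {
        "http": "socks5h://127.0.0.1:9050",
        "https": "socks5h://127.0.0.1:9050",
    }

    def suggest(query, client="firefox"):
        """Fetch Google autocomplete suggestions via the current Tor exit node."""
        resp = requests.get(
            "http://suggestqueries.google.com/complete/search",
            params={"client": client, "q": query},
            proxies=TOR_PROXIES,
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json()[1]  # second element is the list of suggestions

    print(suggest("insurance"))

Note that ExitNodes selects by country (or specific relays), not by city, so this only gets you part of the way toward the 100-largest-US-cities goal.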
The hl param for interface language changes the search results, but I can't tell if it's actually changing the location. For example:
http://suggestqueries.google.com/complete/search?client=chrome&q=why&hl=FR
Here's an example with 5 different values of hl:
http://jsbin.com/tusacufaza/edit?js,output
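A quick Python version of that experiment (a sketch only), looping over a handful of hl values against the endpoint already shown above:

    # Compare suggestions across interface languages. Note this changes hl only,
    # which may not be the same thing as changing the inferred location.
    import requests

    ENDPOINT = "http://suggestqueries.google.com/complete/search"

    for hl in ["en", "fr", "de", "es", "sw"]:
        resp = requests.get(
            ENDPOINT,
            params={"client": "firefox", "q": "why", "hl": hl},
            timeout=30,
        )
        resp.raise_for_status()
        print(hl, resp.json()[1])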

How to Sample Adobe Analytics (Omniture) Data

I can't find anything on the web about how to sample Adobe Analytics data. I need to integrate Adobe Analytics into a new website with a ton of traffic, so the stakeholders want to sample the data to avoid exorbitant server calls. I'm using DTM, but I'm not sure whether that will help or be a non-factor. Can anyone either point me to some documentation or give me some direction on how to do this?
Adobe Analytics does not have any built-in method for sampling data, neither on their end nor in the JS code.
DTM doesn't offer anything like this either. It doesn't have any (exposed) mechanisms in place to evaluate all requests made to a given property (container); any rules that extend state beyond "hit" scope are cookie-based.
Adobe Target does offer the ability to output code based on a percentage of traffic, so you could achieve sampling that way, but really you're just trading one server-call cost for another.
Basically, your only solution would be to create your own server-side framework for conditionally outputting the Adobe Analytics (or DTM) tag, to achieve sampling with Adobe Analytics.
Update:
@MichaelJohns' comment below:
We have a file that we use as a bootstrap file to serve the DTM file. What I think we are going to do is use some JS logic and cookies around that to determine if a visitor should be served the DTM code.
Okay, well, maybe I'm misunderstanding what your goal here is (but I don't think I am), but that's not going to work.
For example, if you only want to output tracking for 50% of visitors, how would you use JavaScript and cookies alone to achieve this? In order to know that you are only filtering 50%, you need to know the total number of people in play. By themselves, JavaScript and cookies only know about ONE browser, ONE person; they have no way of knowing anything about the other 99 people out of a given 100 unless you have some sort of shared state between all of them, like keeping track of a count in a database server-side.
The best you can do solely with JavaScript and cookies is basically flip a coin. In this example of 50%, you'd pick a random number between 1 and 100: the lower half gets no tracking, the higher half gets tracking.
The problem with this is that it is possible for the pendulum to swing 100% one way or the other. It is the same principle as flipping a coin 100 times in a row: it is entirely possible for it to land on tails all 100 times.
In theory, the trend over time should show an overall average of 50/50, but this has a major flaw: you may go one month with a ton of traffic and another month with very little, or a week with very little traffic followed by one day with a lot. And you really have no idea how that will manifest over time; you can't know which way your pendulum is swinging unless you ARE actually recording 100% of the traffic to begin with. The effect of all this is that it will absolutely destroy your trended data, which is at the core of making any kind of meaningful analysis.
So basically, if you really want to reliably output tracking to a % of traffic, you will need a mechanism in place that does in fact record 100% of traffic. If I were going to roll my own homebrewed "sampler", I would do this:
In either a flatfile or a database table I would have two columns, one representing "yes", one representing "no". And each time a request is made, I look for the cookie. If the cookie does NOT exist, I count this as a new visitor. Since it is a new visitor, I will increment one of those columns by 1.
Which one? It depends on what percentage of traffic I want to (not) track. In this example, we're doing a very simple 50/50 split, so really, all I need to do is increment whichever one is lower, and if they are currently equal, pick one at random. If you want a more uneven split, e.g. 30% tracked, 70% not tracked, the formula becomes a bit more complex. But that's a different topic for discussion (also, there are plenty of papers, documents, and wikis out there, published by people a lot smarter than me, that can explain it far better than I can!).
Then, if it turned out that I incremented the "yes" column, I set the "track" cookie to "yes". Otherwise I set the "track" cookie to "no".
Then, in my controller (or bootstrap, router, whatever all requests go through), I would look for the cookie called "track" and see if it has a value of "yes" or "no". If "yes", I output the tracking script. If "no", I do not.
So in summary, process would be:
Request is made
Look for cookie.
If cookie is not set, update database/flatfile incrementing either yes or no.
Set cookie with yes or no.
If cookie is set to yes, output tracking
If cookie is set to no, don't output tracking
Note: Depending on the language/technology of your server, the cookie won't actually be set until the next request, so you may need to add logic that looks for a returned value from the db/flatfile update and falls back to looking for the cookie value in the last two steps.
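Here is a minimal sketch of that flow, assuming a Python/Flask front controller and an SQLite table holding the two counters; the table name, column names, and tag URL are illustrative, not taken from the answer above:

    # Illustrative only: cookie-gated 50/50 sampling backed by a shared counter table.
    import random
    import sqlite3
    from flask import Flask, request, make_response

    app = Flask(__name__)
    DB = "sampler.db"  # placeholder path

    def pick_bucket():
        """Increment whichever counter is lower (random on a tie) and return it."""
        con = sqlite3.connect(DB)
        con.execute("CREATE TABLE IF NOT EXISTS counts (yes_count INTEGER, no_count INTEGER)")
        row = con.execute("SELECT yes_count, no_count FROM counts").fetchone()
        if row is None:
            con.execute("INSERT INTO counts VALUES (0, 0)")
            row = (0, 0)
        if row[0] < row[1]:
            bucket = "yes"
        elif row[1] < row[0]:
            bucket = "no"
        else:
            bucket = random.choice(["yes", "no"])
        col = "yes_count" if bucket == "yes" else "no_count"
        con.execute(f"UPDATE counts SET {col} = {col} + 1")
        con.commit()
        con.close()
        return bucket

    @app.route("/")
    def page():
        # Returning visitors keep their bucket; new visitors get assigned one.
        bucket = request.cookies.get("track") or pick_bucket()
        tag = '<script src="//example.com/dtm-or-analytics-tag.js"></script>' if bucket == "yes" else ""
        resp = make_response(f"<html><head>{tag}</head><body>...</body></html>")
        resp.set_cookie("track", bucket)  # value is known immediately, so no second request is needed
        return resp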
Another (more general) note: in general, you should be wary of sampling. It is true that some tracking tools (most notably Google Analytics) sample data. But the thing is, they initially record all of the data and then use complex algorithms to sample from there, including excluding/exempting certain key metrics from being sampled (like purchases, goals, etc.).
Just think about that for a minute. Even if you take the time to set up a proper "sampler" as described above, you are basically throwing out data proving that people are doing key things on your site, the important things that help you decide how to give visitors a better experience. The only way around that is to start recording everything internally and factoring those things into whether or not to send the data to AA.
But all that aside: look, I will agree that hits are something to be concerned about on some level. I've worked with very, very large clients with effectively unlimited budgets, and even they worry about hit costs racking up.
But the bottom line is that you are paying for an enterprise-level tool. If you are concerned about the cost of Adobe Analytics relative to your site traffic, maybe you should consider moving away from Adobe Analytics towards a different tool like GA, or some other tool that doesn't charge by the hit. Adobe Analytics is an enterprise-level tool that offers a lot more than most other tools, and it is priced accordingly. No offense, but IMO that's like leasing a Mercedes and then cheaping out on the quality of gasoline you use.

Standard and reliable way to track RSS subscribers?

What's the best way to track RSS subscribers reliably without using Feedburner? Some of the obvious approaches, like tracking by IP or by the number of hits, have fatal flaws. IP addresses can change with each request, or multiple users can share the same IP. Also, feed readers can request a feed multiple times per day or even per hour. Both problems make it really hard to get reliable stats on unique subscribers.
I've read articles by both Leo Notenboom and Tim Bray on the topic, but none of their suggestions seems to really solve how to track subscribers in an accurate and reliable way. Leo suggests having a unique ID generated programmatically and appended to the RSS feed URL each time the referring page is loaded. Tim advocates having RSS readers generate a unique hashtag and also has suggestions ranging from tracking referrers to using cookies. A unique URL would be reliable, but it has two flaws: it's not a user-friendly URL, and it creates duplicate content for SEO. Are there any other reliable methods of tracking RSS subscribers? How does Feedburner estimate subscribers?
There isn't really a standard way to do this. Subscriber counting is always unreliable but you can get good estimates with it.
Here's how Google does it (source):
Subscribers counts are calculated by matching IP address and feed reader combinations, then using our detailed understanding of the multitude of readers, aggregators, and bots on the market to make additional inferences.
Of course part of this is easy for Google, as they can first calculate how many Google Reader users are subscribed to the feed in question. After that they use IP address matching also, and that's what you should use too.
You could count individual (i.e. unique) IP addresses from the web server's logs, but that would count 10 people as 1 if they all use the same address. That's why you should inspect the HTTP headers sent by the client, more specifically the HTTP_X_FORWARDED_FOR and HTTP_VIA fields. You could use the HTTP_VIA address as the "main" address and then count how many unique HTTP_X_FORWARDED_FOR addresses are subscribed to the feed. If the subscriber doesn't have these proxy-added fields, it is counted as a unique IP address. This should be handled in the code that generates the feed. You could also add a GeoIP lookup for the IPs and store everything in a database, which would let you see which country has the most subscribers to your feed.
This has its problems too. Not all proxies use these fields, and it doesn't fix the problem of counting subscribers behind NAT gateways. It is, however, a good estimate. Besides, you are probably more interested in the order of magnitude than in the exact count of subscribers, aren't you? If the counter says that you have 5989 subscribers, you probably have more, since the counter gives you a lower bound.
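A rough sketch of that header-inspection idea as a small Python/Flask feed handler (the route, the in-memory store, and render_feed are illustrative); HTTP_X_FORWARDED_FOR and HTTP_VIA correspond to the X-Forwarded-For and Via request headers:

    # Sketch: record one (Via, client IP) pair per request to the feed, so several
    # readers behind the same proxy are not collapsed into a single subscriber.
    from flask import Flask, Response, request

    app = Flask(__name__)
    seen = set()  # in practice, a database table, possibly with GeoIP data attached

    def render_feed():
        # Stand-in for whatever actually generates the RSS XML.
        return "<rss version='2.0'><channel><title>Example</title></channel></rss>"

    @app.route("/feed.xml")
    def feed():
        via = request.headers.get("Via", "")               # proxy chain, if any
        fwd = request.headers.get("X-Forwarded-For", "")   # client IPs added by proxies
        client = fwd.split(",")[0].strip() if fwd else request.remote_addr
        seen.add((via, client))
        return Response(render_feed(), mimetype="application/rss+xml")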
"Standard" and "reliable" are not exactly words in the RSS dictionary :-) Remember that the thing doesn't even have a standard XSD after all these years. If by tracking you mean the "count", there are a few things you can do, and the tactics depend on the purpose, i.e. are you demonstrating a big number or a small number? It is a marketing thing, so you have to define your goals :-)
For a start, you may have to classify IP numbers, building a basic collection of big/corporate/umbrella IPs. For those, you can use the referrer as a reasonable filtering criterion and count everything else as unique unless proven otherwise. The vast majority of IP numbers remain stable for about two days, but again, it is always good to use basic referrer logic as a filter for people who just keep "clicking", so to speak.
Then you need a decent list of aggregators and a classification of how they process URLs. If they obscure end readers completely, you need either published or inferred averages; it's always fair game to assume an equitable distribution of an average count. Using cookies may help you collect aggregator IPs and differentiate between automated agents and individuals.
One very important thing to keep in mind is that you can't use just one method and expect it to be a silver bullet; you need to use these three or four approaches at the same time, plus basic statistical reasoning.
You could query your web server logs for traffic to your RSS feed, perhaps filtering by IP to get the number of uniques.
The problem is that this relies on people checking the feed daily. The frequency of hits to your RSS feed by one individual could vary from day to day, and the resulting number could be lower than the real subscriber count.
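A quick Python sketch of that log-based approach (the log file name, combined log format, and /feed.xml path are assumptions):

    # Count unique client IPs requesting the feed, per day, from an access log.
    import re
    from collections import defaultdict

    LOG_LINE = re.compile(r'^(?P<ip>\S+) \S+ \S+ \[(?P<day>[^:]+)[^\]]*\] "(?P<method>\S+) (?P<path>\S+)')

    uniques_per_day = defaultdict(set)
    with open("access.log") as log:
        for line in log:
            m = LOG_LINE.match(line)
            if m and m.group("path").startswith("/feed.xml"):
                uniques_per_day[m.group("day")].add(m.group("ip"))

    for day, ips in sorted(uniques_per_day.items()):
        print(day, len(ips))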
If you configure your RSS feed to require some kind of authentication, you can do user-based metrics instead of ip-based metrics. Although this would be a technically-correct solution, getting people to opt into an authenticated blog in anything other than an Intranet scenario is a stretch.
