I have a client who wants to build a web application targeted at college students. They want the students to be able to pick which class they're in from a valid list of classes and teachers. Websites like Koofers, Schedulizer, and NoteSwap all have lists from many universities that stay accurate year after year.
How do these companies aggregate this data? Do these universities have some API for this specific purpose? Or do these companies pay students from these universities to input this data every year?
We've done some of this for a client, and in each case we had to scrape the data. If you can get an API, definitely use it, but my guess is that the vast majority will need to be scraped.
I would guess that these companies have some kind of agreements and use an API for data exchange. If you don't have access to such an API, though, you can still build a simple web scraper that extracts the data for you.
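To give a feel for the scraper route, here's a minimal sketch. The URL and markup are hypothetical; every university's catalog page is structured differently, so expect one custom parser per school:

```python
# Minimal course-catalog scraping sketch. The URL and HTML structure
# are hypothetical -- every university lays out its catalog differently,
# so you'll likely need a custom parser per school.
import requests
from bs4 import BeautifulSoup

CATALOG_URL = "https://registrar.example.edu/courses/fall"  # hypothetical

resp = requests.get(CATALOG_URL, timeout=30)
resp.raise_for_status()
soup = BeautifulSoup(resp.text, "html.parser")

courses = []
for row in soup.select("table.courses tr"):  # hypothetical markup
    cells = [td.get_text(strip=True) for td in row.find_all("td")]
    if len(cells) >= 3:
        courses.append({"code": cells[0], "title": cells[1], "teacher": cells[2]})

print(f"Scraped {len(courses)} courses")
```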
I've been doing some research on intent data and I have some technical questions, especially about how two businesses might be collecting "contact level" (i.e. personally identified) web traffic details without using third-party cookies.
Some quick background: most of the large providers of intent data (Bombora, Big Willow/Aberdeen/Spiceworks Ziff Davis, TechTarget, etc.) offer "account"-based intent data. Essentially, when users visit websites in their network, they do a reverse IP address lookup, match it to known IP addresses of large companies (usually companies with at least 250 employees), and note which topics are "surging," i.e. showing unusual traffic in a given week. This largely makes sense to me. I'm assuming that when a visitor shows up at your site, Google Analytics and similar tools can tell you which Google search keywords were used to arrive at your site, and that's how they can say things like: we can "observe intent signals across an unlimited number of contextual keyword categories, allowing you to customize your keywords and layer these insights onto your campaigns for optimal performance." Third-party cookies and data from DSPs (demand-side platforms, which enable ad buyers to buy ads across many platforms) are also involved in providing data, though these will be less useful sources once Google sunsets third-party cookies in Chrome.
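As a rough illustration of that account-matching step (my sketch, not any vendor's actual pipeline), reverse IP lookup amounts to checking a visitor's IP against a table of company-owned ranges. The ranges below are invented; real vendors license large IP-to-firmographic databases:

```python
# Sketch of matching a visitor IP to a company by its known IP ranges.
# The ranges below are invented; real intent vendors license large
# IP-to-company (firmographic) databases for this lookup.
import ipaddress

COMPANY_RANGES = {
    "Acme Corp": [ipaddress.ip_network("203.0.113.0/24")],
    "Globex Inc": [ipaddress.ip_network("198.51.100.0/25")],
}

def company_for_ip(ip: str) -> str | None:
    addr = ipaddress.ip_address(ip)
    for company, networks in COMPANY_RANGES.items():
        if any(addr in net for net in networks):
            return company
    return None

print(company_for_ip("203.0.113.42"))  # -> "Acme Corp"
```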
Two providers, intentdata.io and intentflow.com, are offering contact level intent data. You can imagine why that would be of interest: if the director of sales is interested in your sales SaaS tool, you have a better idea of how qualified that lead is and who to reach out to. Only one of the two providers is specific about what exactly they're collecting, i.e. what "intent" they are capturing and how they're collecting it.
Intentdata.io:
Intentdata.io looks like a tiny company (two employees on LinkedIn). The most specific statement I've found about what their data actually is came from an Impact+ podcast interview: Ed, the CRO at intentdata.io, mentions that the data is analogous to commenting on a Forbes article or a conversation on LinkedIn. But he's clear: "that's just an analogy." They also say elsewhere that the data they provide specifies exactly what action the contact took that landed them in the provided data.
Ed from intentdata.io is also asked about GDPR compliance in his Impact+ interview. He basically says that some lawyers will disagree, but he believes their data to be GDPR compliant, and it is in use by some firms in the EU. He does mention, though, that some firms have asked them to exclude certain columns from the data, like email addresses.
Edit: Found a bit more on intentdata.io. It looks like they build a custom setup to pull "intent" data for each customer: they don't have a database monitoring companies' interactions with content across social media and B2B sites; instead you provide them with "lists (names and URLs) of customers, competitors, influencers, events, target accounts and key terms that would indicate intent at different stages in the buying journey. Pull together important hashtags, details on your ideal buyer (job titles, functions, seniority) and firmographics (size, industry, location)". Then they create a custom "algorithm" from this info, and they iterate on that "algorithm" a bit over time.
They also make this statement on their site: "IntentData.io's data is collected from observing public actions that users are taking around the web. That means that first, we observe action (not reading, searching, browsing, being shown an ad, etc.) which we believe is a more concrete manifestation of intent. Second, people are taking these actions publicly for the world to see. We do not use any cookies, bidstream data or reverse IP lookups."
Finally, one piece of their sales collateral asks: "What ad budget do you have for PPC nurturing ads?" So there may be some targeted PPC ads involved in the "algorithm."
Edit 2: Their sales collateral also states that they use "a third-party intent data methodology that uses multi-variable linear regression analysis to correlate observed actions with a specific contact. This is the method that the LeadSift engine of IntentData.io data uses."
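To make that quoted methodology concrete, here is a toy version of what a multi-variable linear regression for action-to-contact matching might look like. The features, training data, and threshold are entirely invented; the vendor doesn't disclose theirs:

```python
# Hypothetical sketch of scoring action->contact matches with
# multi-variable linear regression (the quoted methodology).
# Features and training data are invented for illustration.
import numpy as np
from sklearn.linear_model import LinearRegression

# Features per candidate match: [name_similarity, employer_match,
# location_match, profile_link_present]; label: 1 = same person.
X_train = np.array([
    [0.95, 1, 1, 1],
    [0.40, 0, 1, 0],
    [0.88, 1, 0, 1],
    [0.10, 0, 0, 0],
])
y_train = np.array([1, 0, 1, 0])

model = LinearRegression().fit(X_train, y_train)

candidate = np.array([[0.90, 1, 1, 0]])
score = model.predict(candidate)[0]
print(f"match score: {score:.2f}")  # treat scores above a threshold as a match
```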
Intentflow.com:
Intentflow.com seems like the sketchier of the two providers, if I'm honest. They provide a video walkthrough of how they get their data at intentflow.com/thesis, but I'm not following how "traceable URLs" with no cookies involved could give you contact level information. They also say they look up the most popular articles/pages for 5k to 40k unique keywords or phrases related to the 10-50 keywords or phrases you give them to target, and they use "traceable URLs" to track who visits those sites. Again, no cookies involved. Supposedly it's fully compliant, at least with US laws. They don't provide data for the EU "by design," so presumably they're not GDPR compliant? They also claim they can identify the individuals who are visiting your website, again using "traceable URLs"; it seems clear from the pitch that you're asked to reach out to your backlink providers around the web to use these traceable URLs.
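For what it's worth, the mechanics of a cookie-less "traceable URL" are easy enough when the sender already knows who gets each link; here's a toy sketch (all names, URLs, and the secret are invented):

```python
# Sketch of a cookie-less "traceable URL": embed a unique token per
# contact in links you distribute, then match logged tokens back to
# people server-side. All names, URLs and the secret are hypothetical.
import hashlib

SECRET = b"campaign-secret"  # hypothetical signing secret

def token_for(contact_email: str) -> str:
    return hashlib.sha256(SECRET + contact_email.encode()).hexdigest()[:16]

contacts = ["director@example.com", "analyst@example.com"]
token_map = {token_for(c): c for c in contacts}

def traceable_url(contact_email: str) -> str:
    return f"https://vendor.example/article?t={token_for(contact_email)}"

# When the server logs a hit on ?t=<token>, look up who clicked:
hit_token = token_for("director@example.com")
print(token_map[hit_token])  # -> director@example.com
```

But that only identifies people the links were distributed to in the first place, which is why the claim of identifying arbitrary site visitors this way doesn't add up for me.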
I've seen an interview where a rep from Bombora says they tried for a while to do contact level intent data and it wasn't very useful - and it wasn't really doable in a compliant way. Ed seems to be aware they've said that publicly, and he says "that's just not true."
So what's going on here? How exactly are these two small firms getting contact level intent data? Do you think they're doing it in a compliant way?
Got more information:
Intentdata.io uses public comments, likes, shares, etc. on blogs and social posts, gathered via web crawling and scraping, for the events, influencers, hashtags, articles, etc. that the customer deems worth tracking. They do some work to try to connect the commenters with an identifiable contact. They bill on a quarterly basis for this.
Intentflow.com doesn't seem to use "traceable urls" at all. They take bidstream data and identify the individual visitors via an "identity graph." They provide a minimum of 5k contacts per month at $2 per contact, making their data very expensive ($120k+ per year). You can't go lower than however many contacts their system spits out per month, so it seems like there's no firm limit on what you will be charged. They say they can identify ~70% of web traffic, and they only provide data on US site visitors. Each row of their output includes not just the contact, but the site that contact was shown an ad on. Definitely interesting data, but I'm guessing they will be heavily affected by upcoming changes to third-party cookies, privacy laws, etc.
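For reference, an "identity graph" in this context is essentially a many-to-one mapping from observed identifiers (hashed emails, device IDs, IPs pulled from the bidstream) to a person record. A toy sketch with invented data:

```python
# Toy "identity graph": many observed identifiers resolve to one
# contact record. Real graphs are built from licensed and bidstream
# data; everything here is invented for illustration.
IDENTITY_GRAPH = {
    "hashed_email:5f4dcc3b": "contact_001",
    "device_id:a1b2c3d4": "contact_001",
    "ip:203.0.113.42": "contact_001",
    "device_id:ffee9900": "contact_002",
}

CONTACTS = {
    "contact_001": {"name": "Jane Doe", "title": "Director of Sales"},
    "contact_002": {"name": "John Roe", "title": "IT Manager"},
}

def resolve(identifier: str) -> dict | None:
    person_id = IDENTITY_GRAPH.get(identifier)
    return CONTACTS.get(person_id) if person_id else None

# A bidstream record carrying a device ID resolves to a named contact:
print(resolve("device_id:a1b2c3d4"))  # -> {'name': 'Jane Doe', ...}
```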
For a university project (Big Data lecture), I'd like to analyze auctions on eBay. I wasn't able to find reliable information so far on whether it's possible to get all current auctions on eBay via their API. I only need the auction title and the current price, and I am aware that this is a huge load of data, but I'm just curious.
I don't think it's possible, in part because of the huge amount of data, and perhaps also because I don't think eBay wants people downloading data en masse like that. Doing so might allow people to do data mining and market research from a vantage point that is too publicly revealing for them.
If you're willing to settle for a large segment of data, look into eBay's Large Merchant Services and their LMS API.
For your research project, you should be able to make sense of an even smaller subset of data by just pulling from eBay's Finding API in a few large automated chunks.
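For example (a sketch, not production code), a single findItemsByKeywords call returns the title and current price per item. It assumes you have a free eBay App ID, and the parsing follows the Finding API's JSON convention of wrapping fields in single-element lists; verify against the current docs:

```python
# Sketch of pulling titles and current prices from eBay's Finding API.
# Requires a (free) eBay App ID; field layout follows eBay's JSON
# convention of single-element lists -- verify against current docs.
import requests

APP_ID = "YOUR-APP-ID"  # placeholder

params = {
    "OPERATION-NAME": "findItemsByKeywords",
    "SERVICE-VERSION": "1.0.0",
    "SECURITY-APPNAME": APP_ID,
    "RESPONSE-DATA-FORMAT": "JSON",
    "keywords": "vintage camera",
    "paginationInput.entriesPerPage": "100",
}
resp = requests.get("https://svcs.ebay.com/services/search/FindingService/v1",
                    params=params, timeout=30)
resp.raise_for_status()

items = (resp.json()["findItemsByKeywordsResponse"][0]
         ["searchResult"][0].get("item", []))
for item in items:
    title = item["title"][0]
    price = item["sellingStatus"][0]["currentPrice"][0]["__value__"]
    print(title, price)
```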
I have participated in a Hackathon in my city, and the traffic department made public a dataset with more than 250 thousand traffic accident datapoints, each one containing Latitude, Longitude, type of accident, vehicles involved, etc.
I made a test to display the data using the Google Maps API and Google Fusion Tables, but the usage limits were quickly reached with just the first two of the 13 years of records.
The data for two years can be displayed and filtered here.
So my question is:
Which free online services could I use in order to interactively display and filter 250 thousand such datapoints as map layers?
It is important that the service be free, because we are volunteering our time for non-profit public good. Currently our City Hall is implementing an API, but it is not ready yet, and it would be useful to present them with some popular, well-accepted use cases to build political pressure for further API development on THEIR server (especially remotely querying a database instead of crawling a bunch of .csv files, as it is now...)
An alternative would be to put everything on GitHub and load the whole dataset client-side to be manipulated with D3.js, for example, but that seems very inefficient both for the client/user and for the server.
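If we do end up going client-side, one mitigation would be to pre-aggregate the points in a one-off script so the browser only renders binned cells rather than 250 thousand raw rows. A rough sketch (column names are guesses; our CSVs may differ):

```python
# Sketch: pre-aggregate accident points into a lat/lon grid so the
# client only renders a few thousand binned cells instead of raw rows.
# Column names are hypothetical; adapt to the actual CSV layout.
import csv
from collections import Counter

GRID = 0.005  # ~500 m cells; tune for the city's extent

bins: Counter = Counter()
with open("accidents.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        lat = round(float(row["latitude"]) / GRID) * GRID
        lon = round(float(row["longitude"]) / GRID) * GRID
        bins[(lat, lon)] += 1

with open("accidents_binned.csv", "w", newline="", encoding="utf-8") as f:
    w = csv.writer(f)
    w.writerow(["lat", "lon", "count"])
    for (lat, lon), n in bins.most_common():
        w.writerow([f"{lat:.4f}", f"{lon:.4f}", n])
```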
Thanks for reading, and feel free to re-tag if needed.
You need the Google Maps API for Business to achieve what you want, but it costs a lot of money.
However, in some cases you can get this Business licence if you work for a non-profit organization. I can't find the exact eligibility rules for this free licence; I tried googling but couldn't find anything. I only found this link; take a look to see if it answers your problem.
You should be able to do that with Google Fusion Tables. The limit is 100,000 points per table, but you can overlay 5 layers onto a single map, so in effect you can reach 500,000 points. I implemented the website below and have run it with over 200,000 points.
http://www.skyscan.co.uk/mapsearch.html
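If your data arrives as one big CSV, a quick way to stay under the per-table cap is to split it into chunks of at most 100,000 rows and upload each chunk as its own table/layer. A sketch, with hypothetical file names:

```python
# Split a large CSV into <=100k-row chunks, one per Fusion Table layer
# (five layers on one map covers up to 500k points). File names are
# hypothetical.
import csv

CHUNK = 100_000

with open("accidents.csv", newline="", encoding="utf-8") as src:
    reader = csv.reader(src)
    header = next(reader)
    out, writer, n, part = None, None, 0, 0
    for row in reader:
        if n % CHUNK == 0:
            if out:
                out.close()
            part += 1
            out = open(f"accidents_part{part}.csv", "w",
                       newline="", encoding="utf-8")
            writer = csv.writer(out)
            writer.writerow(header)
        writer.writerow(row)
        n += 1
    if out:
        out.close()
```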
I may start working on a project very similar to Hipmunk.com, which pulls hotel cost information by calling different APIs (Expedia, Orbitz, Travelocity, Hotels.com, etc.)
I did some research on this, but I am not able to find any unique hotel ID or any field to match hotels between the several APIs. Does anyone have experience with how to compare a hotel from Expedia with Orbitz or Travelocity, etc.?
Thanks
EDIT: Google is also doing the same thing: http://www.google.com/hotelfinder/
From what I have seen of GDS systems and these APIs, there is rarely a unique identifier between systems for, e.g., hotels.
Airports, airlines and countries do have unique standardized identifiers: http://www.iso-code.com/airports.2.html
I would guess you are going to have to maintain your own internal mapping to identify and disambiguate the properties.
:|
When you get started with hotel APIs, the choice of free ones isn't really that big, see e.g. here for an overview.
The most extensive and accessible one is Expedia's EAN (http://developer.ean.com/), which includes Sabre and Venere inventory with unique IDs, but each is still structured differently.
That is, you are looking into different database tables.
You do get several identifiers, such as name, address, and coordinates, which can serve for unique identification, assuming they are free of errors. Which is quite an assumption.
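To illustrate that kind of internal mapping: matching usually comes down to fuzzy name similarity plus a distance check on the coordinates. A sketch; the thresholds are invented and would need tuning against real data:

```python
# Sketch of cross-API hotel matching: fuzzy name similarity plus a
# coordinate distance check. Thresholds are invented; tune on real data.
import math
from difflib import SequenceMatcher

def name_similarity(a: str, b: str) -> float:
    return SequenceMatcher(None, a.lower().strip(), b.lower().strip()).ratio()

def distance_m(lat1, lon1, lat2, lon2) -> float:
    # Haversine distance in meters.
    r = 6_371_000
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def same_hotel(h1: dict, h2: dict) -> bool:
    return (name_similarity(h1["name"], h2["name"]) > 0.85   # guessed threshold
            and distance_m(h1["lat"], h1["lon"], h2["lat"], h2["lon"]) < 150)

expedia = {"name": "Grand Hotel Downtown", "lat": 40.7128, "lon": -74.0060}
orbitz = {"name": "The Grand Hotel - Downtown", "lat": 40.7130, "lon": -74.0059}
print(same_hotel(expedia, orbitz))  # -> True
```

Address normalization (abbreviations, unit numbers, transliteration) tends to be the messy part, which is why the error-free assumption above rarely holds.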
I was wondering how and where companies like Foursquare/Gowalla find their lists of locations/businesses and keep them up to date.
Is it a web service? Do they buy a directory and enter it into a database?
This is from a comment I found at http://www.quora.com/Where-or-how-does-a-company-like-Foursquare-get-a-directory-of-all-locations-and-their-addresses
Companies usually get place data from one of the following:
- Data licenses: Companies like Localeze, InfoUSA, Amacai etc. license location data. Big players like TeleAtlas and Navteq serve as global aggregators of this data. There are also lots of small niche players that license e.g. restaurant data only, or ATM data only, on a per-country basis.
- Crowd sourcing: Some companies crowd-source their data.
- Open data sets: There are some data sets with a Creative Commons or other license from which location-related data can be extracted, e.g. GeoCommons and Wikipedia.
- APIs: A number of companies provide APIs by which you can access data on the fly. These include GeoAPI.com, Google, Yelp and others.
In general, this data is fragmented both in type (e.g. POI vs neighborhood or geocode) and place (US vs UK vs South Africa vs Wherever)
Google has a geocoding service that's freely available for personal use.
For business use, it costs a few $, but it's still pretty reasonable.
And the API is pretty straightforward:
http://code.google.com/apis/maps/documentation/javascript/v2/services.html
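The linked page documents the old v2 JavaScript API; the equivalent HTTP geocoding endpoint is just as simple. A minimal sketch (you supply your own API key):

```python
# Minimal call to Google's HTTP Geocoding API (successor to the linked
# v2 docs). Requires your own API key.
import requests

params = {
    "address": "1600 Amphitheatre Parkway, Mountain View, CA",
    "key": "YOUR-API-KEY",  # placeholder
}
resp = requests.get("https://maps.googleapis.com/maps/api/geocode/json",
                    params=params, timeout=30)
resp.raise_for_status()
data = resp.json()
if data["status"] == "OK":
    loc = data["results"][0]["geometry"]["location"]
    print(loc["lat"], loc["lng"])
```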