Is there an API call for searching datastreams? - xively

In the Xively API, I found a way to search for feeds, but what I really want is a way to search within datastreams. For example, I want the most recent five datapoints whose value == 1. I looked at the "historical data" API and could not find such functionality. Do I have to write the search algorithm manually?

This is currently not provided by Xively API v2; you will indeed need to fetch the historical datapoints and check the values manually.
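For illustration, a minimal Python sketch of that manual approach, assuming a feed ID, datastream ID, and API key of your own; the endpoint and response shape follow Xively's v2 historical-data docs as I remember them, so verify the details:

import requests

# Placeholders, substitute your own values.
API_KEY = "YOUR_XIVELY_API_KEY"
FEED_ID = "123456"
DATASTREAM_ID = "sensor1"

# Pull recent historical datapoints for the datastream (v2 API).
resp = requests.get(
    f"https://api.xively.com/v2/feeds/{FEED_ID}/datastreams/{DATASTREAM_ID}.json",
    headers={"X-ApiKey": API_KEY},
    params={"duration": "1day", "interval": 0},  # last 24h, raw datapoints
)
resp.raise_for_status()
datapoints = resp.json().get("datapoints", [])

# Filter client-side: the newest five datapoints whose value == 1.
matches = [dp for dp in datapoints if float(dp["value"]) == 1]
latest_five = sorted(matches, key=lambda dp: dp["at"], reverse=True)[:5]

for dp in latest_five:
    print(dp["at"], dp["value"])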

Related

Where to find Google Analytics dimension values

I'm trying to implement my own Google Analytics reporting UI where users pick metrics and dimensions, and then pick filters according to what they have already chosen. My question: is there any way to get all the values a dimension can take? For example, the dimension "country" can take the value of any country (USA, France, etc.). I've done some research on Google but found nothing. Any help is appreciated!
I would suggest using something like the Analytics Edge extension for Excel to do exactly what you're trying to do.
If that doesn't completely solve your problem, the extension does pull all the pullable fields from GA. It also gives you insight into what kind of request it sends to Google to pull the data. Very useful for debugging, but also for learning the Reporting API.
Alternatively, there's documentation on the API here: https://developers.google.com/analytics/devguides/reporting/core/v4
Make sure you know whether you're using GA4 or Universal Analytics (UA); they have different APIs.
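There is no endpoint that enumerates every possible value of a dimension, but you can recover the values actually observed in your property by requesting a report grouped by that dimension with no filters. A minimal sketch against the Universal Analytics Reporting API v4 (the OAuth2 access token and view ID are placeholders):

import requests

ACCESS_TOKEN = "YOUR_OAUTH2_ACCESS_TOKEN"  # placeholder
VIEW_ID = "12345678"                       # placeholder UA view (profile) ID

body = {
    "reportRequests": [{
        "viewId": VIEW_ID,
        "dateRanges": [{"startDate": "30daysAgo", "endDate": "today"}],
        "dimensions": [{"name": "ga:country"}],   # the dimension to enumerate
        "metrics": [{"expression": "ga:sessions"}],
    }]
}

resp = requests.post(
    "https://analyticsreporting.googleapis.com/v4/reports:batchGet",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json=body,
)
resp.raise_for_status()

# Each row's first dimension value is a country observed in your data.
rows = resp.json()["reports"][0]["data"].get("rows", [])
countries = [row["dimensions"][0] for row in rows]
print(countries)

Note this returns only the values that appear in your own data; a full list of all conceivable values (every country, every browser version) is not exposed by the API.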

Using LinkedIn API to retrieve advertising reports

I'm working on a simple app to programmatically retrieve ads performance within LinkedIn. I have general API experience, but this is the first time I've gotten my feet wet with the LinkedIn API.
One example from the LinkedIn API documentation suggests something that would get me started:
GET https://api.linkedin.com/v2/adAnalyticsV2?q=analytics&dateRange.start.month=1&dateRange.start.day=1&dateRange.start.year=2016&timeGranularity=MONTHLY&pivot=CREATIVE&campaigns=urn:li:sponsoredCampaign:112466001
I am encountering two problems:
First, this example implies that you already know the campaign ID. However, I am unable to find a way to retrieve a list of campaign IDs for a given account.
Second, if I manually pull a campaign ID, I receive an error: {"serviceErrorCode":2,"message":"Too many fields requested. Maximum possible fields to request: 20","status":400}. Pretty clear error.
A little research tells me that by adding the parameter "&fields=" I will be able to limit my query to fewer than 20 fields (I really only need a dozen anyway), but I can't find any documentation regarding the names of the fields available.
Any help or pointers will be appreciated.
Please refer to the link below; scroll down and you will see the field names listed as metrics. These are the fields you can pass in the "fields" parameter.
https://learn.microsoft.com/en-us/linkedin/marketing/integrations/ads-reporting/ads-reporting?tabs=http#analytics-finder
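For illustration, a rough Python sketch of both steps: finding campaign IDs for an account, then limiting the analytics query with "fields". The adCampaignsV2 search finder, its parameter encoding, and the metric names impressions, clicks, and costInLocalCurrency are taken from memory of LinkedIn's marketing API docs; treat them as assumptions to verify against your API version:

import requests

ACCESS_TOKEN = "YOUR_LINKEDIN_OAUTH_TOKEN"  # placeholder
ACCOUNT_ID = "512345678"                    # placeholder ad account ID
headers = {"Authorization": f"Bearer {ACCESS_TOKEN}"}

# 1) List campaigns for an account via the adCampaignsV2 search finder.
camp = requests.get(
    "https://api.linkedin.com/v2/adCampaignsV2",
    headers=headers,
    params={
        "q": "search",
        "search.account.values[0]": f"urn:li:sponsoredAccount:{ACCOUNT_ID}",
    },
)
camp.raise_for_status()
campaign_urns = [f"urn:li:sponsoredCampaign:{c['id']}" for c in camp.json()["elements"]]

# 2) Pull analytics, limiting to a handful of fields to stay under the 20-field cap.
stats = requests.get(
    "https://api.linkedin.com/v2/adAnalyticsV2",
    headers=headers,
    params={
        "q": "analytics",
        "dateRange.start.day": 1,
        "dateRange.start.month": 1,
        "dateRange.start.year": 2016,
        "timeGranularity": "MONTHLY",
        "pivot": "CREATIVE",
        "campaigns": campaign_urns[0],
        "fields": "impressions,clicks,costInLocalCurrency,dateRange",
    },
)
stats.raise_for_status()
print(stats.json())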

How to determine how many free Google distance queries are left on my account?

I'm pulling distance/time information for a large number of origin/destination pairs using the Google Maps API in R. I'm currently using the gmapsdistance package but have looked at a few others.
My premium API key includes 100k free queries per day. Are there any packages that can return how many are remaining? For example, the ggmap package has geocodeQueryCheck(). The problem is I don't think this function actually returns the number remaining on your account; it doesn't even ask for your API key. My guess is that it just keeps track of how many queries it has made today. The latest GitHub version has a register_google() function that does allow you to set your API key, but when I make API requests with the gmapsdistance package, geocodeQueryCheck() doesn't update.
In summary, I just want to know how many queries are left, even if I need to construct the URL request directly. When I look at the API documentation, I don't even see URL calls for it, which doesn't give me much hope.
As confirmed by @SymbolixAU, there is currently no way to do this.
Sorry, I guess this is late, but have you tried this?
sum(.GoogleDistQueryCount$elements)
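# Caveat: .GoogleDistQueryCount appears to be ggmap's internal per-session
# counter, so this only reflects queries made through ggmap in the current
# R session, not your account-level quota.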

Scrape all Google search results for a specific name

I think this question has been answered here before, but I could not find the desired topic. I am a newbie in web scraping. I have to develop a script that will take all the Google search results for a specific name, then grab the related data for that name; if more than one result is found, the data will be grouped according to name.
All I know is that Google has some kind of restriction on scraping, and they provide a Custom Search API. I have not used that API yet, but I am hoping to get all the result links for a query from it. However, I could not figure out what the ideal process would be for scraping the information from those links. Any tutorial link or suggestion is very much appreciated.
You should have provided a bit more about what you have been doing; it does not sound like you even tried to solve it yourself.
Anyway, if you are still on it:
You can scrape Google in two ways: one is allowed, one is not.
a) Use their API; you can get around 2k results a day.
You can up it to around 3k a day for 2000 USD/year, and raise it further by getting in contact with them directly.
You will not be able to get accurate ranking positions with this method; if you only need a low number of requests and are mainly interested in getting some websites for a keyword, that's the choice.
Starting point would be here: https://code.google.com/apis/console/
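As a rough illustration of route a), a minimal Python sketch against the Custom Search JSON API; the API key and search engine ID (cx) are placeholders you create in the console:

import requests

API_KEY = "YOUR_GOOGLE_API_KEY"   # placeholder
SEARCH_ENGINE_ID = "YOUR_CX_ID"   # placeholder custom search engine ID

resp = requests.get(
    "https://www.googleapis.com/customsearch/v1",
    params={
        "key": API_KEY,
        "cx": SEARCH_ENGINE_ID,
        "q": '"John Smith"',  # the name you are searching for
        "start": 1,           # pagination: 1, 11, 21, ... (10 results per page)
    },
)
resp.raise_for_status()

for item in resp.json().get("items", []):
    print(item["title"], item["link"])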
b) You can scrape the real search results
That's the only way to get the true ranking positions, for SEO purposes or to track website positions. It also allows you to get a large number of results, if done right.
You can Google for code, the most advanced free (PHP) code I know is at http://scraping.compunect.com
However, there are other projects and code snippets.
You can start off at 300-500 requests per day, and this can be multiplied by using multiple IPs. Look at the linked article if you want to go that route; it explains things in more detail and is quite accurate.
That said, if you choose route b) you break Google's terms, so either do not accept them or make sure you are not detected. If Google detects you, your script will be blocked by IP ban/captcha. Not getting detected should be a priority.

How to Stream Through Large Amounts of Twitter Data?

I'll be working on a project that will require live output of the number of tweets users have hashtagged on Twitter, as well as the tweets themselves. Something along the lines of MTV's Twitter Tracker: http://vma-twittertracker.mtv.com/live/#buzz.
What intrigued me about this site is how they can constantly make API calls to Twitter without breaching the request limit.
I'd appreciate it if anyone could guide me on the most effective way to accomplish this. From the research I've carried out thus far, I presume I will need to use Twitter's Streaming API.
Since there is a chance that the number of tweets output to my page could be in the thousands (AJAX-loaded), along with stats on the number of retweets/favourites, what would be the most scalable approach within my .NET site? Any examples or guidance would be appreciated.
Check out Linq2Twitter. It is a great wrapper around the Twitter API and provides two mechanisms that will help you:
There is a search function that allows you to search for hashtags, etc., which will limit the amount of data you are getting back.
You have the option to request only data since a certain tweet ID. You can therefore search the feed incrementally by passing, in each subsequent call, the ID you left off at.
I have used this many times to search the public feed and have not had any issues to date. I think the search function is key to not requesting too much. Good luck!
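Linq2Twitter is a .NET library, but the incremental pattern described above is essentially Twitter's standard since_id paging. For illustration, a rough sketch of the same idea in Python against the v1.1 search endpoint (the OAuth credentials are placeholders; verify the details against your API version):

import requests
from requests_oauthlib import OAuth1

# Placeholder v1.1 OAuth credentials.
auth = OAuth1("CONSUMER_KEY", "CONSUMER_SECRET", "ACCESS_TOKEN", "ACCESS_SECRET")

since_id = None  # remembers the newest tweet ID we've seen

def fetch_new_tweets(hashtag):
    """Fetch tweets for a hashtag newer than the last one we saw."""
    global since_id
    params = {"q": hashtag, "count": 100}
    if since_id is not None:
        params["since_id"] = since_id  # only tweets after this ID
    resp = requests.get(
        "https://api.twitter.com/1.1/search/tweets.json",
        auth=auth, params=params,
    )
    resp.raise_for_status()
    tweets = resp.json().get("statuses", [])
    if tweets:
        since_id = max(t["id"] for t in tweets)  # advance the cursor
    return tweets

# Poll periodically instead of streaming; each call returns only new tweets.
for tweet in fetch_new_tweets("#vma"):
    print(tweet["user"]["screen_name"], tweet["text"])

Polling with since_id keeps each request small and avoids re-fetching tweets you already have, which is what makes it practical under the rate limit.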
You can look into the Storm framework. Below are a few links for further reference:
http://storm-project.net/
https://github.com/nathanmarz/storm
Thanks for all your responses.
It looks like sites like this that display a lot of Twitter stats/data use approved third-party providers that have direct access to Twitter's Firehose API.
I have managed to get in contact with an approved provider to supply us with the feeds of data required (and it ain't cheap!).
