I'm curious to see when accounts started following me on Twitter (and when I started following accounts). It'd be interesting to see my user activity related to the types of accounts I follow, as well as maps of my followers/followings over time + season.
I've tried getting followers and looking up users in the following manner:
library(rtweet)  # get_followers() and lookup_users() come from rtweet

followers <- get_followers("twitterhandlehere", n = 50)
followers_data <- lookup_users(followers$user_id)
followers_data is a data frame with user info, including profile picture, bio, and when the user's account was created, but nowhere in it does it indicate when the relationship started, as far as I can tell.
Nor does this function seem to indicate the date on which the follow/following relationship started:
lookup_friendship("BarackObama", "MyUsername")
It appears the API didn't support this functionality in the past, and I understand I can stream this data going forward, but is there any way to recover this specificity from past data?
No, this is not available in the API. You would have to have been regularly polling the friends and followers endpoints to record those changes. You cannot discover it from the API at a specific point in time; you'd have to make the record of follower-list changes yourself.
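If you want to start building that record going forward, here is a minimal sketch of the polling approach with rtweet; the handle, file name, and n are placeholders, and you'd run this on a schedule (e.g. a daily cron job):

library(rtweet)

snapshot_file <- "followers_log.csv"
current <- get_followers("twitterhandlehere", n = 5000)
current$snapshot_date <- as.character(Sys.Date())

if (file.exists(snapshot_file)) {
  previous <- read.csv(snapshot_file, stringsAsFactors = FALSE)
  new_ids  <- setdiff(current$user_id, previous$user_id)   # followed since the last poll
  lost_ids <- setdiff(previous$user_id, current$user_id)   # unfollowed since the last poll
  # an ID's first snapshot_date approximates when that follow began
  write.csv(rbind(previous, current[current$user_id %in% new_ids, ]),
            snapshot_file, row.names = FALSE)
} else {
  write.csv(current, snapshot_file, row.names = FALSE)
}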
I'm working on a simple app to programmatically retrieve ads performance within LinkedIn. I have general API experience, but this is the first time I've gotten my feet wet with the LinkedIn API.
One example from the LinkedIn API documentation suggests something that would get me started:
GET https://api.linkedin.com/v2/adAnalyticsV2?q=analytics&dateRange.start.month=1&dateRange.start.day=1&dateRange.start.year=2016&timeGranularity=MONTHLY&pivot=CREATIVE&campaigns=urn:li:sponsoredCampaign:112466001
I am encountering two problems:
First, this example implies that you already know the campaign ID. However, I am unable to find a way to retrieve a list of campaign IDs for a given account.
Second, if I manually pull a campaign ID, I receive an error: {"serviceErrorCode":2,"message":"Too many fields requested. Maximum possible fields to request: 20","status":400}. Pretty clear error.
A little research tells me that by adding the parameter "&fields=" I should be able to limit my query to fewer than 20 fields (I really only need a dozen anyway), but I can't find any documentation regarding the names of the fields available.
Any help or pointers will be appreciated.
Please refer to the link below; scroll down and you will see the field names listed as "metrics". These are the fields:
https://learn.microsoft.com/en-us/linkedin/marketing/integrations/ads-reporting/ads-reporting?tabs=http#analytics-finder
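To illustrate, here is a rough sketch of the same call with a fields list, written in R with httr (to match the other code in this thread); the bearer-token handling and the three metric names I picked are assumptions on my part, so substitute the exact names from the metrics table on that page:

library(httr)

res <- GET(
  "https://api.linkedin.com/v2/adAnalyticsV2",
  query = list(
    q = "analytics",
    pivot = "CREATIVE",
    timeGranularity = "MONTHLY",
    "dateRange.start.day" = 1,
    "dateRange.start.month" = 1,
    "dateRange.start.year" = 2016,
    campaigns = "urn:li:sponsoredCampaign:112466001",
    # keep this list under the 20-field limit from the error message
    fields = "impressions,clicks,costInLocalCurrency"
  ),
  add_headers(Authorization = paste("Bearer", Sys.getenv("LINKEDIN_TOKEN")))
)
content(res)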
Asking here after first asking Clockify support.
Trying to extend some of Clockify's capabilities to create extra reporting for our clients,
I've been playing with your API, specifically the endpoint /reports/{reportsId}.
• My goal:
Get all the time entries of a specific "saved report" (usually saved by our Project Managers)
• What I EXPECT from "/reports/{reportsId}":
To get all the info and entities (users, time entries, projects, etc.) regarding only that particular reportId
• What I GET from "/reports/{reportsId}":
Lots of info regarding the whole workspace; only summaryReport looks more "specific to the saved report itself"...
• Questions:
1- Is this the correct behavior?
2- How do you filter down time entries of specific reports in URLs like https://clockify.me/bookmarks/BOOKMARK_HASH_HERE ?
Do you only call "/reports/{reportsId}" and filter down on the client side? (it seems that way from exploring the Network tab)
If that's the way, what's the point of calling the report endpoint? Only for the summaryReport object?
3- Is "/reports/{reportsId}" the best endpoint I can use to reach my goal? ...or which approach would you recommend?
summaryReport.timeEntries will contain all the individual time entries from that particular report. Each entry has a user, project, client, time, etc. Grouping by project is done on the client side.
I'm not sure I fully understand your specific problem, though. Are you suggesting that the entries you get from the report endpoint do not belong to the given report?
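For what it's worth, a minimal sketch of that flow in R with httr; the base URL and X-Api-Key header are my assumptions from Clockify's docs, and the report ID is a placeholder:

library(httr)

report_id <- "REPORT_ID_HERE"   # placeholder: the saved report's ID
res <- GET(
  paste0("https://api.clockify.me/api/reports/", report_id),
  add_headers("X-Api-Key" = Sys.getenv("CLOCKIFY_API_KEY"))
)
report  <- content(res, as = "parsed")
entries <- report$summaryReport$timeEntries   # one element per individual time entry
# each entry carries user, project, client, and time fields; grouping
# (e.g. by project) is up to you, since the UI does it client-side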
I started to learn R, but now I am stuck.
I want to analyse the followers of a specific Twitter account. The problem is that the profile has a lot of followers, so getting all of them would take a long time. And I am just interested in the followers from Switzerland.
So I wonder if it's possible to load only the data of followers who come from Switzerland?
This is what I already have:
library("twitteR")
consumer_key <- "my_key"
consumer_secret <- "my_secret"
access_token <- "my_token"
access_secret <- "my_secret"
options(httr_oauth_cache=T) #This will enable the use of a local file to cache OAuth access credentials between R sessions.
setup_twitter_oauth(consumer_key,
consumer_secret,
access_token,
access_secret)
[1] "Using direct authentication"
trump <- getUser("RealDonaldTrump")
follower <- trump$getFollowers(retryOnRateLimit=180)
The last line of code would obviously take hours to run, so I need a better solution. Thanks :)
Could you elaborate on what information you want about each follower? Do you want a count of how many followers list "Switzerland" as their home country? Or do you want more information about each user?
My understanding is that the API doesn't permit filtering the followers output on a field such as country. Thus, it seems to me that one would need to collect all users' information and then filter on country after the fact.
I collected the user ID numbers for all of Donald Trump's followers in June 2016 (when he had fewer than 10 million followers, I think). It took some time, but with the smappR package and its smappR::getFriends function it was easy to do. I'm sure it will take longer now that he has many more followers, but the procedure with smappR::getFriends should still work.
It will, however, require some additional time to download the user information for each user ID. I think you'll need to make a separate query to a Twitter API to get the user information, as smappR::getFriends returns only the user IDs (and maybe the user names). You would then query an API with a function like smappR::getUsers to get their user information, including country of residence. I admit that my understanding of the Twitter APIs is incomplete, but I hope this response helps.
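To make the collect-then-filter step concrete, here is a sketch with twitteR, assuming ids is your vector of follower IDs (however you collected them). Keep in mind that location is free text typed by the user, so matching a few spellings of Switzerland is only a heuristic:

batches <- split(ids, ceiling(seq_along(ids) / 100))   # lookupUsers takes up to 100 IDs per call
swiss <- lapply(batches, function(b) {
  df <- twListToDF(lookupUsers(b))                     # user details, including the location field
  df[grepl("switzerland|schweiz|suisse|svizzera", df$location, ignore.case = TRUE), ]
})
swiss_followers <- do.call(rbind, swiss)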
Recently started exploring the Firebase data via the Data Studio Firebase connector. I'm doing some custom reports based on the user_engagement event to compare with data we previously reported on in Flurry.
When looking at some DAU figures they are pretty close, but the MAU figures tend to be inflated. (I saw this behavior first on the Firebase Events Report Template.) Digging into it a little more, we have a pattern where users frequently reinstall the app, which generates a new app_instance_id. So as a fallback I'm using the resettable_device_id, but then there's the situation where advertising tracking is disabled on the device, resulting in a zeroed value. (Or, for a brief period in January, nulled-out values; I'm not sure if this was the client or part of the Firebase link.)
I'm currently thinking of something roughly following the logic below, falling back to app_instance_id if the advertising identifier was not set. What approaches would be worth looking into to get a reliable user identifier for metrics reporting? (In the future I will call setUserID to use our own identifier, but I'm looking to match up historical data.)
-- fall back to app_instance_id when the resettable ID is missing or zeroed out
IF(user_dim.device_info.resettable_device_id IS NOT NULL,
   IF(user_dim.device_info.resettable_device_id = '00000000-0000-0000-0000-000000000000',
      user_dim.app_info.app_instance_id,
      user_dim.device_info.resettable_device_id),
   user_dim.app_info.app_instance_id
) AS unique_user_identifier,
Thanks in advance.
A simpler way to deal with the cases where a resettable_device_id is not available:
-- when ad tracking is limited the resettable ID is not usable, so use app_instance_id
IF(user_dim.device_info.limited_ad_tracking,
   user_dim.app_info.app_instance_id,
   user_dim.device_info.resettable_device_id) AS unique_user_identifier
This may be a duplicate of this question, but according to all the Google Analytics documentation I really should be able to pull my list of custom segments.
Since I have a very large list of them, it would be suboptimal to manually copy the segment IDs over one at a time.
I'm following this walkthrough. Steps to reproduce:
1. Create a custom segment using "date of first session" in your Google Analytics account.
2. Authorize the Google Analytics guide to access your Google Analytics account.
3. Try their on-page query tester, and inspect whether your custom segment is there.
One thing I've already ruled out is the user who created the segment: I manually created a segment with the same user that I'm querying the API with, and it still does not show. Is there a flag I need to set somewhere to include custom segments?
Edit:
It turns out that it will list some custom segments, but not ones created with date of first session, so this is a duplicate of this question, which means that there is a bug in the Google Analytics API.
There was a bug, which has now been fixed, so it is now possible to list "date of first session" segments in the Google Analytics Management API by calling the segments.list() method.
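From R, something like the following should now surface them; the googleAnalyticsR package and the exact shape of its return value are my assumptions here, and any Management API client will do:

library(googleAnalyticsR)

ga_auth()                    # interactive OAuth the first time
segs <- ga_segment_list()
str(segs, max.level = 1)     # inspect: includes each segment's id, name, and definition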
So after days of trying to solve this one I've come to the conclusion that it cannot be done as asked.
There is, however, another way to do it. For every segment, set up a daily (or weekly, etc.) email report delivered as a TSV attachment. In each email body, specify the name of the segment, so that when you're consuming the emails you know which segment the attached TSV is for. It doesn't look like the daily reports were designed with segments in mind, since none of the metadata included in the TSV mentions which segment it is for.
From there it's trivial. Connect to the email address using an IMAP client once a day and update the numbers.
Note that the daily email only contains the numbers for that day (not a specified range), so you'll first need to generate the report once with the historical data to load in.
While hacky, one nice thing about this approach is that it keeps your reports in sync with your (faked-through-email) API code, provided you match the column headings in the TSV. So if, for example, a new filter is added to a report, the new daily fields will continue to update.
Unfortunately, though, the past data won't reflect the change.
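To sketch the consumption side, assuming the IMAP job has already saved each attachment as reports/<segment>_<date>.tsv and that any metadata rows above the column headings have been stripped (the layout and naming here are hypothetical):

files <- list.files("reports", pattern = "\\.tsv$", full.names = TRUE)
daily <- lapply(files, function(f) {
  df <- read.delim(f, stringsAsFactors = FALSE)
  # recover the segment name from the file name, since the TSV metadata doesn't carry it
  df$segment <- sub("_\\d{4}-\\d{2}-\\d{2}\\.tsv$", "", basename(f))
  df
})
all_days <- do.call(rbind, daily)   # requires matching column headings across reports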
Obviously this isn't great, but if you are monitoring daily cohorts it's the best you've got if you need to stay with Google Analytics. I have raised this as a bug with the Google Analytics developers, but I haven't heard back on whether they plan to fix it.