I have a realtime platform where users stay on pages for a long time. I found that after roughly 5 minutes, GA realtime stops showing them, so I created a timer that sends a pageview every 4 minutes; this way all users remain "connected" to GA.
I wonder whether this is a good approach, or whether it may produce inaccurate data in the reports later.
Has anyone experienced this?
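For reference, this is roughly what my timer does — a minimal sketch assuming the classic analytics.js snippet is already loaded on the page; the 4-minute interval is my own choice, not anything GA requires:

```typescript
// Heartbeat sketch: resend a pageview every 4 minutes so long-lived
// visitors keep appearing in the GA realtime report.
// Assumes the classic analytics.js snippet is already loaded.
declare function ga(...args: unknown[]): void;

const HEARTBEAT_MS = 4 * 60 * 1000; // just under the ~5 minute realtime window

setInterval(() => {
  ga('send', 'pageview', window.location.pathname);
}, HEARTBEAT_MS);
```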
Your terminology is a little off: users do not become "disconnected" from Google Analytics. The difference between realtime reports and data from the Reporting API is that the former shows only a subset of ad-hoc computed dimensions and metrics, whereas the Reporting API shows, after some processing latency, the full set of metrics and dimensions, including data that requires more processing time, such as session- and user-scoped data.
Other than that, your approach is fine. There is a limit on the number of API calls you are allowed to make; the documentation has an example of how to calculate your calls to stay within the limits, and Google suggests implementing some sort of server-side caching if you need a lot of realtime dashboards.
But this is not going to affect the data quality of reports in any way. The Realtime API is a read-only API; the worst thing that can happen is that you exceed your quota and get blocked for the rest of the day. So there is no way this would create "inaccurate data on the reports later".
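If you do end up with many realtime dashboards, the server-side caching Google suggests can be as simple as memoizing the response for a short TTL. A sketch, assuming the Node googleapis client; the `auth` object, TTL, and view ID are placeholders:

```typescript
import { google } from 'googleapis';

declare const auth: any; // an already-authorized service-account / OAuth2 client

// Cache sketch: memoize the Realtime API response for a short TTL so many
// dashboard viewers share one API call instead of each making their own.
const analytics = google.analytics({ version: 'v3', auth });
const TTL_MS = 30_000; // placeholder TTL

let cached: { at: number; data: unknown } | null = null;

export async function activeUsers(viewId: string): Promise<unknown> {
  if (cached && Date.now() - cached.at < TTL_MS) return cached.data;
  const res = await analytics.data.realtime.get({
    ids: `ga:${viewId}`, // placeholder view id
    metrics: 'rt:activeUsers',
  });
  cached = { at: Date.now(), data: res.data };
  return cached.data;
}
```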
Context: I am a total Google Cloud beginner, and I have just convinced my company's heads to use Firestore Realtime Database for pushing transaction status to our mobile application. We have around 4 million users who will use our application heavily for small money transfers. Today we poll our microservice endpoints from Android/iOS; this will be replaced by the Firebase SDK imported into our mobile app, which will listen to/observe our Firestore collection, following a few Firestore rules. Since every money transfer is confirmed/denied in a short time (from a few seconds to 1 or 2 minutes), the idea of replacing polling with a truly reactive approach straight from Firestore sounded good, and coding is already under way.
The issue: first, I don't want to compare solutions. It is just my reality: our production support operators must watch our internal dashboard, and they are not allowed to look at the Google Console dashboards (please accept this for this question). I need on-demand metrics for our Firestore. This has nothing to do with Google pricing; it is simply our requirement: they want to see metrics like:
how many users are listening at the same time now
how many users hit an exception during connection
is there any user holding a connection for more than X minutes
when was the connection peak this morning
any exception of any type around our Firestore database
I read the code samples carefully, following them step by step, trying to figure out whether there is some API providing the answers I am looking for.
So, my straight question is: is there such a type of Google API providing metrics about my Firestore database? Maybe, following the same idea as Performance Monitoring, which works on the mobile side, there is a similar approach on the Firestore side.
*** Edited
Future readers may also find it worth reading about a way to get Firestore metrics straight from curl/Postman.
A couple of things: you mentioned both Firestore and the Realtime Database; I just wanted to make sure you are aware that those are two different databases offered under the Firebase umbrella.
how many users are listening at the same time now
is there any user holding a connection for more than X minutes
Yes, there's a dashboard: https://support.google.com/firebase/answer/6317517?hl=en. It includes lots of options, such as users active in the last 30 minutes.
how many users hit an exception during connection
any exception of any type around our Firestore database
Yes, you can track errors and other logging via Stackdriver Logging, which can give you reports on your Cloud Functions.
https://cloud.google.com/functions/docs/monitoring
Where can I find Stackdriver in Firebase console?
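As a minimal, hedged sketch of that (the endpoint name is hypothetical): anything a function logs with console.error, or any uncaught exception, appears in that function's logs:

```typescript
import * as functions from 'firebase-functions';

// Sketch: console.error calls and uncaught exceptions in a Cloud Function
// show up in that function's Stackdriver logs. `transferStatus` is a
// hypothetical endpoint name, not something from the question.
export const transferStatus = functions.https.onRequest(async (req, res) => {
  try {
    // ... your handler logic ...
    res.status(200).send('ok');
  } catch (err) {
    console.error('transferStatus failed', err); // visible in Stackdriver
    res.status(500).send('error');
  }
});
```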
when was the connection peak this morning
For this one, I'm not sure whether you mean A: when somebody logged on in the morning, or B: the time of peak usage. If B, see the dashboard above. If A:
The Realtime Database has the concept of presence, which lets you know whether a user is currently connected. See examples from the official documentation:
https://firebase.google.com/docs/firestore/solutions/presence
and this post
How to make user presence mechanism using Firebase?
This also applies to your:
is there any user holding a connection for more than X minutes
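To make the presence idea concrete, here is a minimal sketch of the pattern from the documentation linked above, using the Realtime Database's onDisconnect hook; the /status path and field names are illustrative only:

```typescript
import firebase from 'firebase/app';
import 'firebase/database';

// Presence sketch following the linked docs: each client writes its own
// /status/<uid> node and registers an onDisconnect cleanup on the server.
// The path and field names are illustrative, not required by Firebase.
function trackPresence(uid: string): void {
  const statusRef = firebase.database().ref(`/status/${uid}`);

  firebase.database().ref('.info/connected').on('value', (snap) => {
    if (snap.val() === false) return; // not connected yet

    // Queue the offline write on the server first, then mark online.
    statusRef
      .onDisconnect()
      .set({ state: 'offline', lastChanged: firebase.database.ServerValue.TIMESTAMP })
      .then(() => {
        statusRef.set({ state: 'online', lastChanged: firebase.database.ServerValue.TIMESTAMP });
      });
  });
}
```

Since each status node carries a lastChanged timestamp, "holding a connection for more than X minutes" becomes a query over /status for online entries older than X.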
Edit in response to comments: I believe you are experiencing the XY problem (https://meta.stackexchange.com/questions/66377/what-is-the-xy-problem), where you are focused on a particular solution even though your problem has other solutions. User metrics, database events, and errors are all accessible through both dashboards and Cloud Functions. You can cURL Cloud Functions if you wish, set up cron functions to report automatically, or set up database trigger functions to log errors; a sketch of one such function follows below. So while the exact mechanism you want may not exist, you just need to connect existing tools to get the result you want.
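For instance, a sketch of the "cURL a Cloud Function" route: a hypothetical HTTPS function that counts the presence records from the sketch above, so your internal dashboard can poll it on demand:

```typescript
import * as functions from 'firebase-functions';
import * as admin from 'firebase-admin';

admin.initializeApp();

// Sketch: an HTTPS function your internal dashboard (or curl/Postman) can
// hit for on-demand metrics. It reads the illustrative /status nodes
// written by the presence sketch above.
export const connectionMetrics = functions.https.onRequest(async (_req, res) => {
  const snap = await admin.database().ref('/status').once('value');
  let online = 0;
  snap.forEach((child) => {
    if (child.val()?.state === 'online') online++;
  });
  res.json({ onlineNow: online, sampledAt: Date.now() });
});
```

Once deployed, this can be polled with something like `curl https://<region>-<project>.cloudfunctions.net/connectionMetrics`.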
We're using Firebase for analytics on our mobile apps, but Firebase appears to report active users only over 1, 7, and 28-day rolling periods. These are not the industry-standard reporting metrics I'm looking for.
We also have a web app, where we count unique active users in Google Analytics, and we'd like to be able to compare (and combine) MAUs from our apps in Firebase with web MAUs calculated in GA.
Is this possible without BigQuery?
If no, how much will BigQuery cost us?
It seems crazy to have to purchase BigQuery for this purpose alone. Any help is appreciated.
Is [it] possible [to get MAU] without BigQuery?
If the intervals in the analytics reports in the Firebase console don't suit your needs, you will have to roll your own; there is nothing built into Firebase for custom intervals. Most developers use BigQuery for such custom reporting, especially since it is quite easy to do by tweaking the default Data Studio template.
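As an illustration: once the Firebase-to-BigQuery export is enabled, a rolling 30-day MAU is a single query over the export tables. A sketch using the Node BigQuery client; `my-project.analytics_123456` is a placeholder for your own export dataset:

```typescript
import { BigQuery } from '@google-cloud/bigquery';

// Sketch: count distinct users over an arbitrary window from the Firebase
// Analytics export. The dataset name is a placeholder; adjust the date
// suffixes to whatever interval you need.
const bigquery = new BigQuery();

async function monthlyActiveUsers(): Promise<number> {
  const query = `
    SELECT COUNT(DISTINCT user_pseudo_id) AS mau
    FROM \`my-project.analytics_123456.events_*\`
    WHERE _TABLE_SUFFIX BETWEEN
      FORMAT_DATE('%Y%m%d', DATE_SUB(CURRENT_DATE(), INTERVAL 30 DAY))
      AND FORMAT_DATE('%Y%m%d', CURRENT_DATE())
  `;
  const [rows] = await bigquery.query({ query });
  return rows[0].mau;
}
```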
If no, how much will BigQuery cost us?
If you have a look at the BigQuery pricing page, you'll see that pricing is quite involved, which makes this hard to answer without knowing your exact amount of data. In general: the more data you store and process (i.e. the more users in your app, or the more reports), the more you pay. Luckily there is now a BigQuery sandbox, which allows you to process significant data without paying (even without entering a credit card). This gives you an option to try BigQuery before committing to it.
I have a Google service account set up for the Calendar API, but it seems as though I can only make 5 requests per second. I've only figured that out by trial and error; there are no per-second rate-limit settings in my developer console.
I have 'queries per day' and 'queries per 100 seconds per user', both of which are currently set to 1,000,000.
I'm definitely not hitting these limits, so I can only assume there is a hidden per-second rate limit being applied. Does anyone know if that is the case?
Thanks!
I think this documentation will help you understand the Calendar usage limits better:
Google Calendar puts certain limits in place to protect our users and infrastructure from abusive behavior. When these limits are reached by a user, Google Calendar will go into read-only mode for that user, and all edit actions will fail for a certain period of time. Most users will never hit these limits, as they are well above the activity level of a typical Calendar user.
I'd also like to add a few tips to work efficiently with your quota:
Use push notifications instead of polling.
If you cannot avoid polling, make sure you only poll when necessary (for example, poll only rarely at night).
Use incremental synchronization with sync tokens for all collections instead of repeatedly retrieving all the entries.
Increase page size to retrieve more data at once by using the maxResults parameter.
Update events when they change, avoid re-creating all the events on every sync.
Use exponential backoff for error retries (see the sketch after this list).
Check the performance tips for the Calendar API.
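On the exponential-backoff tip above, a generic sketch; `call` stands in for whatever Calendar request you are making, and the retryable status codes follow the usual guidance for Google APIs:

```typescript
// Sketch of exponential backoff with jitter for Calendar API calls.
// `call` stands in for any request; 403/429/5xx are treated as
// retryable per the usual guidance for Google APIs.
async function withBackoff<T>(call: () => Promise<T>, maxRetries = 5): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await call();
    } catch (err: any) {
      const status = err?.code ?? err?.response?.status;
      const retryable = status === 403 || status === 429 || (status >= 500 && status < 600);
      if (!retryable || attempt >= maxRetries) throw err;
      const delay = Math.min(2 ** attempt * 1000, 32_000) + Math.random() * 1000;
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
}
```

Usage would be, for example, `withBackoff(() => calendar.events.list({ calendarId: 'primary' }))`.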
I am planning to develop a website that will allow registered users to view their analytics data from various sites, like Google Analytics, in one dashboard, somewhat similar to http://www.cyfe.com/ which provides an all-in-one dashboard.
I am thinking of two approaches to implement this application.
Approach #1: once the user logs in to my web application and requests data, my application makes a call to the analytics site using its API (e.g. the Google Analytics API) and displays the response data.
Approach #2: run a job at a fixed interval (say every 30 minutes) that retrieves analytics data for all registered users and saves it in my application's database. When a user requests data, my application displays the data from its own database instead of sending a request to the analytics site.
Can anyone please suggest the pros/cons of each approach, and which one is better to implement?
Remember that Google Analytics data isn't done processing for 24-48 hours, so requesting data every 30 minutes is overkill; the data won't be complete or accurate yet. Run your application once a day, fetching data for two days ago.
The main problem you are going to have is the limit of 7 dimensions and 10 metrics per request. There is no primary key, so there is no way to link the data from one request back to the data of another request.
Another issue is that a request can return at most 10k rows. Depending on how many rows your query produces, you may end up making a large number of requests against the API, which will be hard on your quota.
You may also run into quota issues: you can make a max of 10k requests per profile per day. Once you have hit that quota, you cannot make any more requests against that profile until the next day. This quota cannot be extended.
You can also make a max of 10 requests per second per user/profile. You can tweak this a little using the quotaUser parameter, but your application will not run very fast: it takes on average half a second for each request to return data. Things are going to take time unless you run multiple instances of your extraction application, and again that requires tweaking quotaUser. This quota cannot be extended.
Your application can make a max of 50k requests against the API per day, across all profiles. Once you reach 80% of that quota, I recommend applying for an extension; it can take a month or more to be granted, so it is a good idea to plan ahead.
Note: I am the lead developer on a business intelligence application that exports data from Google Analytics into a data warehouse daily, and I have run into each of these issues. What you are planning is possible; you just need to understand the limitations of the Google Analytics API before you begin your development process.
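To make the quota arithmetic above concrete, here is a sketch of a once-daily pull that pages through results 10k rows at a time and tags requests with quotaUser. It assumes the Node googleapis client; the `auth` object, view ID, metrics, and dimensions are placeholders:

```typescript
import { google } from 'googleapis';

declare const auth: any; // an already-authorized service-account client

// Sketch: once-daily extract for "two days ago", paging 10k rows per
// request (the API max) and tagging requests with quotaUser so the
// 10-requests/sec limit is applied per end user rather than per app.
const analytics = google.analytics({ version: 'v3', auth });

async function dailyExtract(viewId: string, userKey: string): Promise<any[]> {
  const rows: any[] = [];
  for (let start = 1; ; start += 10000) {
    const res = await analytics.data.ga.get({
      ids: `ga:${viewId}`,
      'start-date': '2daysAgo',
      'end-date': '2daysAgo',
      metrics: 'ga:sessions,ga:pageviews',   // placeholder metrics
      dimensions: 'ga:date,ga:pagePath',     // placeholder dimensions
      'max-results': 10000,
      'start-index': start,
      quotaUser: userKey, // spreads the per-user rate limit across users
    });
    rows.push(...(res.data.rows ?? []));
    if (!res.data.rows || res.data.rows.length < 10000) break;
  }
  return rows;
}
```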
I'm looking for a good way to track CPU and memory usage for a variety of web applications and to cross-reference that information with Google Analytics data. For example, I'd like to generate a report that shows CPU and memory usage alongside the number of hits, averaged per minute. One way I thought this could be solved is by adding custom page-level variables in Google Analytics to track CPU and memory usage. My questions:
For those familiar with GA reporting as it pertains to custom variables, is this possible?
Is there a better way to generate the kind of report I'm seeking? Perhaps even without using GA?
Thanks.
You can use the Google Analytics API to push this data directly from the web page via JavaScript, or from the server using whatever language is relevant.
I've seen at least one large implementation use the API for UX A/B testing by way of event tracking, but there's no reason you couldn't store whatever related data you'd like.
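To sketch the server-side variant: the Measurement Protocol lets you post hits with custom metrics over plain HTTP. The property ID, client ID, and custom-metric slots below are placeholders you would configure in GA (custom metrics must be registered in the property first):

```typescript
// Sketch: push a CPU/memory sample to Universal Analytics via the
// Measurement Protocol. UA-XXXXX-Y, the client id, and the cm1/cm2
// custom-metric slots are placeholders configured in GA, not fixed names.
async function reportUsage(cpuPercent: number, memMb: number): Promise<void> {
  const params = new URLSearchParams({
    v: '1',
    tid: 'UA-XXXXX-Y',                   // placeholder property id
    cid: 'server-1',                     // placeholder client id
    t: 'event',
    ec: 'performance',
    ea: 'sample',
    cm1: String(Math.round(cpuPercent)), // custom metric slot 1: CPU %
    cm2: String(Math.round(memMb)),      // custom metric slot 2: memory MB
  });
  await fetch('https://www.google-analytics.com/collect', {
    method: 'POST',
    body: params,
  });
}
```

Called from a cron job once a minute, this would let you chart the custom metrics against hits per minute in GA's custom reports.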