How often can I ping Google Calendar without getting banned?

We are writing our custom scheduling app for our website.
Necessarily, it requests Google Calendar data to see when one of our 3 team members is available, and then offers the visitor an array of available time slots.
Problem is, this takes too damn long to get the updated info.
I'm wondering if we could simply fetch all this data in the background and let visitors pick from data that is a few seconds old :)
So my question is: how often can we poll without getting banned by Google?

Here you go. The limit you're looking for depends on the type of Google account you're using.
https://developers.google.com/apps-script/guides/services/quotas
Also, you won't get banned; the call just won't run. If you're on a consumer account you could ping it once every 18 seconds without it failing. That's as close as you can get to "live data".
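For the background-cache idea, a minimal sketch in TypeScript using the Node googleapis client might look like the following. The OAuth setup, the calendar IDs, and the 20-second interval are placeholder assumptions, not anything Google prescribes:

```typescript
import { google } from 'googleapis';

// Assumes an OAuth2 client with Calendar scopes is already configured.
const auth = new google.auth.OAuth2(/* clientId, clientSecret, redirectUri */);
const calendar = google.calendar({ version: 'v3', auth });

// In-memory cache of busy intervals; visitors read this cache instead
// of hitting the Calendar API on every page load.
const cachedBusy: Record<string, Array<{ start?: string | null; end?: string | null }>> = {};

async function refreshAvailability(): Promise<void> {
  const now = new Date();
  const weekOut = new Date(now.getTime() + 7 * 24 * 60 * 60 * 1000);
  const res = await calendar.freebusy.query({
    requestBody: {
      timeMin: now.toISOString(),
      timeMax: weekOut.toISOString(),
      // Placeholder IDs for the three team members' calendars.
      items: [
        { id: 'alice@example.com' },
        { id: 'bob@example.com' },
        { id: 'carol@example.com' },
      ],
    },
  });
  for (const [id, cal] of Object.entries(res.data.calendars ?? {})) {
    cachedBusy[id] = cal.busy ?? [];
  }
}

// One poll every 20 seconds is ~4,320 calls a day, comfortably inside
// the one-call-per-18-seconds consumer rate mentioned above.
setInterval(() => refreshAvailability().catch(console.error), 20_000);
```

Visitors then get time slots computed from cachedBusy, which is at most about 20 seconds stale, and the API sees a fixed, predictable call rate no matter how much traffic the site gets.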

Related

What would cause events to suddenly stop being sent to/received by Firebase?

We are using Firebase/Google Analytics for our Android and iOS app. Everything seemed to be sending data correctly and we were able to view the data in BigQuery etc. However, we started to notice that some data seemed to be getting lost.
We detected an odd situation where some users' analytics data stopped showing on Firebase/Google Analytics/Big Query, despite having previously received data from that user in the past. The data seems to just stop at a random point in time, for random users.
Confusingly, in_app_purchase events from those players were still appearing in the data on dates where they didn't have any other events. We checked our backend service (GameSparks) for their accounts and could see that they were active players who had been using the app very recently, i.e. after their last event appeared in BigQuery.
After investigating some more, we started finding other users who had the same issue. They would be sending data without issue, and then all of a sudden we would receive nothing from them, except for in_app_purchase/notification events etc., which are sent via a separate service (the app store etc.) rather than the client.
After scouring our implementation and going over it line by line against the samples/documentation, we couldn't really see any issues, and even the automatic events (session_start etc.) stop appearing. We made sure we were using the latest versions of the Firebase SDKs in the hope it would fix things, but it made no difference.
One peculiar thing is that when we find an in_app_purchase event from one of these 'broken' players, things like the user properties and default parameters for that player have changed since they stopped sending data, so it seems like the lost data exists somewhere but isn't being logged anywhere.
I was wondering if it is possible for specific users to stop their app from sending any analytics data to Firebase via a device/Google account setting?
While looking into the documentation we noticed that if Google Play Services is installed on the device, data is sent via that, rather than via the client/firebase sdk itself. Is there any known issue with players changing their Google Play Services settings that could cause something like this?
I wondered if this was a known issue, but please let me know what other information you might need.
EDIT: I also wanted to mention that although we can't be 100% certain, we believe this is only happening to our Android users. We haven't found any iOS users that have the same issue.
Thanks,
Matt

Best strategy to develop the back end of an app with a large userbase, taking into account limitations of bandwidth, concurrent connections, etc.?

I am developing an Android app which basically does this: on the landing (home) page it shows a couple of words. These words need to be updated on a daily basis. Secondly, there is an 'experiences' tab in which a list of user experiences (around 500) shows up with their profile pic, description, etc.
This basic app is expected to get around 1 million daily users who will open the app at least once a day to see those couple of words. Many may occasionally open up the experiences section.
Thirdly, the app needs to have a push notification feature.
I am planning to purchase managed WordPress hosting, set up a website, and add a post each day with those couple of words, then use the JSON API to extract those words and display them on the app's home page. Similarly for the experiences: I will add each one as a WordPress post and extract them from the WordPress database. The reason I am choosing WordPress is that it has ready-made interfaces for data entry, which will save my time and effort.
But I am stuck on this: will the WordPress DB be able to handle such a large number of queries? With such a large userbase and spiky traffic, I suspect I might cross the max concurrent connections limit.
What's the best strategy in my case? Should I use WP, or use Firebase or some other service? I also need to make sure the scheme is cost-effective.
My app is basically very similar to this one:
https://play.google.com/store/apps/details?id=com.ekaum.ekaum
For push notifications, I am planning to use third party services.
Kindly suggest the best strategy I should go with for designing the back end of this app.
Thanks in advance to everyone out there who is willing to help me with this.
I have never used WordPress, so I don't know if or how it could handle that load.
You can still use WP for data entry, and write a scheduled function that would use WP's JSON API to copy that data into Firebase.
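A rough sketch of that scheduled copy, as a Cloud Function for Firebase in TypeScript. The WordPress site URL, the schedule, and the Firestore layout (a 'daily' collection) are all assumptions for illustration:

```typescript
import * as functions from 'firebase-functions';
import * as admin from 'firebase-admin';

admin.initializeApp();

// Once a day, pull the newest post from the WordPress REST API and
// mirror it into Firestore. Assumes a Node 18+ runtime (global fetch).
export const syncDailyWords = functions.pubsub
  .schedule('every day 00:05')
  .onRun(async () => {
    const res = await fetch(
      'https://your-wp-site.example.com/wp-json/wp/v2/posts?per_page=1&orderby=date'
    );
    const [post] = (await res.json()) as Array<{
      id: number;
      date: string;
      title: { rendered: string };
      content: { rendered: string };
    }>;
    await admin.firestore().collection('daily').doc('today').set({
      postId: post.id,
      date: post.date,
      title: post.title.rendered,
      content: post.content.rendered,
    });
  });
```

This way data entry stays in WP, but the million app reads per day hit Firebase rather than the WordPress database.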
RTDB-vs-Firestore scalability states that RTDB can handle 200 thousand concurrent connections and Firestore 1 million concurrent connections.
However, if I get it right, your app doesn't need connections to be active (i.e. receive real-time updates). You can get your data once, then close the connection.
For RTDB, Enabling Offline Capabilities on Android states that
On Android, Firebase automatically manages connection state to reduce bandwidth and battery usage. When a client has no active listeners, no pending write or onDisconnect operations, and is not explicitly disconnected by the goOffline method, Firebase closes the connection after 60 seconds of inactivity.
So the connection should close by itself after 1 minute if you remove your listeners, or you can force-close it earlier using goOffline.
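On the web SDK, that read-once pattern looks roughly like this (on Android the equivalents are get() on a DatabaseReference and FirebaseDatabase.goOffline(); the database URL and path here are placeholders):

```typescript
import { initializeApp } from 'firebase/app';
import { getDatabase, ref, get, goOffline } from 'firebase/database';

// Placeholder config; the real values come from the Firebase console.
const app = initializeApp({ databaseURL: 'https://your-project.firebaseio.com' });
const db = getDatabase(app);

// Fetch the daily words once, then drop the connection immediately
// instead of waiting out the 60-second idle timeout.
async function fetchDailyWords(): Promise<unknown> {
  const snapshot = await get(ref(db, 'daily/words'));
  goOffline(db);
  return snapshot.val();
}
```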
For Firestore, I don't know if it happens automatically, but you can do it manually.
In Firebase Pricing you can see that 100K Firestore document reads cost $0.06, so 1M reads (for the two words) should cost $0.60 plus some network traffic. In RTDB the cost is based on data volume, so it requires some calculation, but it shouldn't be much. I am not familiar with the finer pricing details, so you should do some more research.
In the app you mentioned, the experiences don't seem to change very often. You might want to try to build your own caching manually, and add the required versioning info in the daily data.
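One possible shape for that manual cache, sketched in TypeScript: the small daily document carries a version number for the rarely-changing experiences list, and the client only refetches the big payload when that version changes. The field names, storage keys, and endpoint are made up for illustration (on Android you'd use SharedPreferences or a local file instead of localStorage, but the idea is the same):

```typescript
// The daily document is tiny (two words plus a version number), so it
// is cheap to fetch on every launch; the ~500 experiences are not.
interface DailyDoc {
  words: string[];
  experiencesVersion: number;
}

async function loadExperiences(daily: DailyDoc): Promise<unknown> {
  const cachedVersion = Number(localStorage.getItem('expVersion') ?? '-1');
  const cached = localStorage.getItem('experiences');
  if (cached !== null && cachedVersion === daily.experiencesVersion) {
    return JSON.parse(cached); // cache hit: no network call at all
  }
  const res = await fetch('/api/experiences'); // hypothetical endpoint
  const experiences = await res.json();
  localStorage.setItem('experiences', JSON.stringify(experiences));
  localStorage.setItem('expVersion', String(daily.experiencesVersion));
  return experiences;
}
```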
Edit:
It would possibly be more efficient and less costly if you used Firebase Hosting, instead of RTDB/Firestore directly. See Serve dynamic content and host microservices with Cloud Functions and Manage cache behavior.
In short, you create an HTTP function that reads your database and returns the data you need. You configure Hosting to call that function, and configure the cache so that subsequent requests are served the cached result via Hosting (without extra function invocations).
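A minimal sketch of such a function, assuming Firestore as the backing store; the function name, Hosting path, and cache lifetimes are placeholders:

```typescript
import * as functions from 'firebase-functions';
import * as admin from 'firebase-admin';

admin.initializeApp();

// firebase.json would route a Hosting path to this function, e.g.:
//   "hosting": { "rewrites": [{ "source": "/daily", "function": "daily" }] }
export const daily = functions.https.onRequest(async (_req, res) => {
  const doc = await admin.firestore().collection('daily').doc('today').get();
  // s-maxage lets the Hosting CDN serve cached copies for 10 minutes,
  // so repeat requests within that window never reach the function
  // (and cost no Firestore reads).
  res.set('Cache-Control', 'public, max-age=300, s-maxage=600');
  res.json(doc.data());
});
```

With a 10-minute CDN cache, the bulk of a million daily opens are served from the edge rather than invoking the function at all.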

Google Analytics real-time - keep alive

I have a realtime platform where users stay on pages for a long duration. I found that after 5 minutes (more or less) the GA realtime report stops showing them, so I created a timer that sends a pageview every 4 minutes; this way all users remain "connected" to GA.
I wonder if this is a good approach, or whether it may produce inaccurate data in the reports later.
Has anyone experienced this?
Your terminology seems a little off: users do not become "disconnected" from Google Analytics. The difference between realtime reports and data from the Reporting API is that the former shows only a subset of ad-hoc computed dimensions and metrics, whereas the Reporting API shows, after some processing latency, the full set of metrics and dimensions, including things that need more processing time, like session- and user-scoped data.
Other than that, your approach is fine. There is a limit on the number of API calls you are allowed to make; the documentation has an example of how to calculate your calls to stay within the limits, and Google suggests implementing some sort of server-side caching if you need a lot of realtime dashboards.
But this is not going to affect the data quality of reports in any way. The Realtime API is a read-only API; the worst thing that can happen is that you exceed your quota and get blocked for the rest of the day. So there is no way this would create "inaccurate data in the reports later".
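For reference, the keep-alive timer the question describes might look like this with gtag.js (assuming the standard gtag snippet is already on the page; the 4-minute interval follows the question):

```typescript
// Re-send a page_view every 4 minutes while the tab is visible, so
// long-dwelling users keep showing up in the realtime report.
declare function gtag(...args: unknown[]): void; // provided by gtag.js

setInterval(() => {
  if (document.visibilityState === 'visible') {
    gtag('event', 'page_view', {
      page_location: window.location.href,
      page_title: document.title,
    });
  }
}, 4 * 60 * 1000);
```

The visibility check is a small refinement so that backgrounded tabs don't inflate the pageview counts.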

Check if anyone is currently using an ASP.NET app (site)

I build ASP.NET websites (hosted under IIS 6 usually, often with SQL Server backends and forms authentication).
Clients sometimes ask if I can check whether there are people currently browsing (and/or users currently logged in to) their website at a given moment, usually so they can safely do a deployment (they want a hotfix, for example).
I know the web is basically stateless, so I can't be sure whether someone has closed the browser window, but I imagine there'd be some count of not-yet-timed-out sessions or something, and surely logged-in users...
Is there a standard and/or easy way to check this?
Jakob's answer is correct but does rely on installing and configuring the Membership features.
A crude but simple way of tracking users online would be to store a counter in the Application object. This counter could be incremented/decremented upon their sessions starting and ending. There's an example of this on the MSDN website:
Session-State Events (MSDN Library)
Because the default Session Timeout is 20 minutes the accuracy of this method isn't guaranteed (but then that applies to any web application due to the stateless and disconnected nature of HTTP).
I know this is a pretty old question, but I figured I'd chime in. Why not use Google Analytics and view their real time dashboard? It will require minor code modifications (i.e. a single script import) and will do everything you're looking for...
You may be looking for the Membership.GetNumberOfUsersOnline method, although I'm not sure how reliable it is.
Sessions, suggested by other users, are a basic way of doing things, but are not too reliable. They can also work well in some circumstances, but not in others.
For example, if users are downloading large files, watching videos, or listening to podcasts, they may stay on the same page for hours (unless the requests to the binary data are tracked by ASP.NET too), but they are still using your website.
Thus, my suggestion is to use the server logs to detect if the website is currently used by many people. It gives you the ability to:
See what sort of requests are done. It's quite easy to detect humans and crawlers, and with some experience, it's also possible to see if the human is currently doing something critical (such as writing a comment on a website, editing a document, or typing her credit card number and ordering something) or not (such as browsing).
See who is doing those requests. For example, if Google is crawling your website, it is a very bad idea to go offline, unless the search rating doesn't matter for you. On the other hand, if a bot is trying for two hours to crack your website by doing requests to different pages, you can go offline for sure.
Note: if a website has some critical areas (for example, having written this long answer, I would be angry if Stack Overflow went offline a few seconds before I submitted it), you can also send regular AJAX requests to the server while the user stays on the page; a sketch follows below. Of course, you must be careful when implementing such a feature, and take into account that it will increase the bandwidth used and will not work if the user has JavaScript disabled.
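A sketch of that heartbeat in TypeScript; the /heartbeat endpoint and the 30-second interval are hypothetical, and the server side would count recent pings per session to estimate who is genuinely active:

```typescript
// Ping a (hypothetical) /heartbeat endpoint every 30 seconds while the
// page is open; failures are ignored since this is best-effort.
setInterval(() => {
  fetch('/heartbeat', { method: 'POST', keepalive: true }).catch(() => {
    /* best-effort presence tracking; ignore errors */
  });
}, 30_000);
```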
You can run the netstat command and see how many active connections exist to your website's ports.
Default port for http is *:80.
Default port for https is *:443.

Google Calendar API DoS prevention

It appears that the Google calendar API effectively locks you out if you create and delete a few (less than 10) calendars within a short space of time.
This has made it basically impossible for me to test my app, because it creates/deletes a calendar for each user that is added/removed from the app. Currently, I'm "working around" this issue by creating a new Google account each time I get locked out of the Calendar API. Clearly, this solution is less than satisfactory.
Is there any way I can avoid this over-zealous DoS prevention?
Thanks,
Don
If your application doesn't require an instantaneous call out to the Google API, your code can queue the actions and throttle the calls to x calls over y seconds. Not an ideal solution, but it would reduce the likelihood of hitting the quota limit.
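A minimal sketch of that queue in TypeScript; the one-call-per-2-seconds rate and the createCalendarForUser helper are placeholders:

```typescript
// Calendar operations are enqueued and drained at a fixed rate, so a
// burst of user signups never hits the Google API all at once.
type Task = () => Promise<void>;

const queue: Task[] = [];
let draining = false;

function enqueue(task: Task): void {
  queue.push(task);
  if (!draining) void drain();
}

async function drain(): Promise<void> {
  draining = true;
  while (queue.length > 0) {
    const task = queue.shift()!;
    await task().catch(console.error);
    await new Promise((r) => setTimeout(r, 2_000)); // throttle interval
  }
  draining = false;
}

// Usage, with a hypothetical helper:
// enqueue(() => createCalendarForUser(newUser));
```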
Can't you just "reset" (i.e. delete all entries) the test calendars instead of re-creating them every time?
Try creating a local version of the calendar that you either have the user save (upload to Google) with a button click, or save on an event (e.g. closing the program, or every x minutes). Store all the data locally, and only upload to Google as needed. I don't know how necessary it is that the data be available immediately, but if your app can handle some delay, then this may work for you.
