Google Calendar API DoS prevention - google-calendar-api

It appears that the Google Calendar API effectively locks you out if you create and delete a few (fewer than 10) calendars within a short space of time.
This has made it basically impossible for me to test my app, because it creates/deletes a calendar for each user that is added/removed from the app. Currently, I'm "working around" this issue by creating a new Google account each time I get locked out of the Calendar API. Clearly, this solution is less than satisfactory.
Is there any way I can avoid this over-zealous DoS prevention?
Thanks,
Don

If your application doesn't require an instantaneous call out to the Google API, your code can queue the actions and throttle the calls to x calls over y seconds. It's not an ideal solution, but it would reduce the likelihood of hitting the quota limit.
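A minimal sketch of that queue-and-throttle idea, assuming a Node/TypeScript environment; `ThrottledQueue`, `createCalendarForUser`, and the one-call-per-2-seconds rate are hypothetical names and values for illustration, not a documented Google limit:

```typescript
// Hypothetical placeholder for your real Calendar API call.
async function createCalendarForUser(email: string): Promise<void> {
  console.log(`would create a calendar for ${email}`);
}

type CalendarAction = () => Promise<void>;

// Actions are queued and drained at a fixed rate instead of being sent immediately.
class ThrottledQueue {
  private queue: CalendarAction[] = [];
  private running = false;

  constructor(private intervalMs: number) {}

  enqueue(action: CalendarAction): void {
    this.queue.push(action);
    if (!this.running) {
      this.running = true;
      void this.drain();
    }
  }

  private async drain(): Promise<void> {
    while (this.queue.length > 0) {
      const action = this.queue.shift()!;
      try {
        await action();
      } catch (err) {
        console.error("Calendar action failed:", err);
      }
      // Wait between calls so we stay under x calls per y seconds.
      await new Promise((resolve) => setTimeout(resolve, this.intervalMs));
    }
    this.running = false;
  }
}

// Usage: at most one calendar call every 2 seconds.
const calendarQueue = new ThrottledQueue(2000);
calendarQueue.enqueue(() => createCalendarForUser("alice@example.com"));
```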

Can't you just "reset" (i.e. delete all entries) the test calendars instead of re-creating them every time?
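If resetting works for your tests, a rough sketch of emptying a test calendar with the `googleapis` Node client could look like the following; it assumes an already-authorized OAuth2 client, and the calendar ID is a placeholder:

```typescript
import { google } from "googleapis";

// Sketch: empty a test calendar instead of deleting and recreating it.
// `auth` is assumed to be an already-authorized OAuth2 client.
async function resetTestCalendar(auth: any, calendarId: string): Promise<void> {
  const calendar = google.calendar({ version: "v3", auth });

  let pageToken: string | undefined;
  do {
    // List events page by page and delete each one.
    const res = await calendar.events.list({ calendarId, pageToken });
    for (const event of res.data.items ?? []) {
      if (event.id) {
        await calendar.events.delete({ calendarId, eventId: event.id });
      }
    }
    pageToken = res.data.nextPageToken ?? undefined;
  } while (pageToken);
}
```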

Try creating a localized version of the calendar that you either have the user save (upload to Google) with a button click, or that you save on an event (i.e. closing the program, or every x minutes). Store all the data locally and only upload to Google as needed. I don't know how necessary it is that the data be available immediately, but if your app can handle some delay, this may work for you.
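A minimal sketch of that buffer-locally, upload-later idea; `PendingChange`, `uploadToGoogle`, and the 10-minute interval are hypothetical stand-ins for your own types and API calls:

```typescript
// Sketch: record changes locally and only push to Google on demand.
interface PendingChange {
  userId: string;
  action: "create" | "delete";
}

const pendingChanges: PendingChange[] = [];

function recordChange(change: PendingChange): void {
  // Store the change locally instead of calling the API immediately.
  pendingChanges.push(change);
}

async function uploadToGoogle(changes: PendingChange[]): Promise<void> {
  // Hypothetical placeholder for the real Calendar API calls.
  console.log(`uploading ${changes.length} queued changes`);
}

async function flush(): Promise<void> {
  if (pendingChanges.length === 0) return;
  const batch = pendingChanges.splice(0, pendingChanges.length);
  await uploadToGoogle(batch);
}

// Flush on a button click / app shutdown, or every x minutes:
setInterval(() => void flush(), 10 * 60 * 1000);
```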

Related

Why Firebase events are counted less often than they are actually triggered

We are developing a multiplayer game made with Unity, and every time a match starts, after our API has been successfully executed (on our custom backend servers), I immediately log a custom Firebase event called MatchStart.
However, when we compare the number of matches started in our server database versus Firebase Analytics, we find that the number in our database is 2.5x - 3x larger than the number of events reported by Firebase Analytics.
The same happens with other events: for example, I send a custom Register event every time a user registers, and the numbers are off again; in the database they're 2.5x - 3x larger compared to Firebase Analytics.
Any idea why this is happening?
NOTE: I know that Firebase may take some time to process events, so I made sure to also test the number of events sent a few days ago. We also double-checked our database, and everything seems to be OK.

How often can I ping Google Calendar without getting banned?

We are writing our custom scheduling app for our website.
Necessarily, it requests Google Calendar data to see when one of our 3 team members is available and then offers the visitor an array of available time slots.
Problem is, it takes too long to get the updated info.
I'm wondering if we could simply fetch all this data in the background and let visitors pick from data that is a few seconds old :)
So my question is: how often can we do this without getting banned by Google?
Here you go. The limit you're looking for depends on the type of Google account you're using.
https://developers.google.com/apps-script/guides/services/quotas
Also, you won't get banned; the call just won't run. If you're on a consumer account, you could ping it once every 18 seconds without it failing. That's as close as you can get to "Live Data".
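One way to put that into practice is to poll availability in the background at that interval and serve visitors from the cached result, which is roughly the "few seconds old" idea from the question. A minimal sketch, where `fetchTeamAvailability`, `TimeSlot`, and the 18-second interval are assumptions to be checked against the quota page linked above:

```typescript
// Sketch: poll calendar availability in the background and serve visitors
// from the cached copy, instead of hitting the Calendar API per visitor.
interface TimeSlot {
  start: string;
  end: string;
}

let cachedSlots: TimeSlot[] = [];
let lastUpdated = 0;

async function fetchTeamAvailability(): Promise<TimeSlot[]> {
  // Placeholder: query the three team calendars and compute free slots.
  return [];
}

async function refreshCache(): Promise<void> {
  try {
    cachedSlots = await fetchTeamAvailability();
    lastUpdated = Date.now();
  } catch (err) {
    console.error("availability refresh failed:", err);
  }
}

// Visitors always read from the cache, which is at most ~18 seconds old.
export function getAvailableSlots(): { slots: TimeSlot[]; ageMs: number } {
  return { slots: cachedSlots, ageMs: Date.now() - lastUpdated };
}

void refreshCache();
setInterval(() => void refreshCache(), 18_000);
```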

Is there a way to cache some Firebase data and not download it on every page refresh

I am using AngularFire's sync arrays and the JavaScript SDK of Firebase. I need to download about 5 MB of data to fill my array so the app can work offline for a short time if it loses connection. I am afraid that, the way I do things, the size of this array could easily bring me a high bill at the end of the month.
What if the user refreshes or starts and stops the app 100 times a month? What if 100 users do it? Is there some way to cache the array offline and only apply changes to it after I refresh the app?
I suggest that you take a look at AngularFire Offline, which I am using for a similar use case within an Ionic hybrid mobile app, and so far it seems to handle things well.
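If you end up rolling something by hand instead of using that library, a rough sketch of the caching idea with the plain Firebase JavaScript SDK (modular API; config values and the "items" path are placeholders) is below. Note that a naive value listener still re-downloads the node, so this mainly gives you an instant initial render and an offline fallback; actually saving bandwidth requires a more granular data structure or a library like the one suggested above.

```typescript
import { initializeApp } from "firebase/app";
import { getDatabase, ref, onValue } from "firebase/database";

// Placeholder config: only the databaseURL is shown here.
const app = initializeApp({ databaseURL: "https://your-project.firebaseio.com" });
const db = getDatabase(app);

const CACHE_KEY = "items-cache";

// 1. Render from the cached copy immediately on page load, if any.
const cached = localStorage.getItem(CACHE_KEY);
let items: unknown[] = cached ? JSON.parse(cached) : [];

// 2. Keep the cache up to date whenever Firebase delivers fresh data.
onValue(ref(db, "items"), (snapshot) => {
  items = snapshot.val() ?? [];
  localStorage.setItem(CACHE_KEY, JSON.stringify(items));
});
```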

Google Analytics real-time - keep alive

I have a real-time platform where users stay on pages for a long duration. I found that after 5 minutes (more or less) the GA realtime report stops showing them, so I created a timer that sends a pageview every 4 minutes, and this way all users remain "connected" to GA.
I wonder whether this is a good approach or whether it may produce inaccurate data in the reports later.
Has anyone experienced this?
Your terminology seems a little off - users do not become "disconnected" from Google Analytics. The difference between realtime reports and data from the Reporting API is that the former shows only a subset of ad-hoc computed dimensions and metrics, whereas the Reporting API shows, after some processing latency, the full set of metrics and dimensions, including things that require more processing time, such as session- and user-scoped data.
Other than that, your approach is fine. There is a limit on the number of API calls you are allowed to make - the documentation has an example of how to calculate your calls so that you stay within the limits, and Google suggests implementing some sort of server-side caching if you need a lot of realtime dashboards.
But this is not going to affect the data quality of the reports in any way. The Realtime API is a read-only API; the worst thing that can happen is that you exceed your quota and get blocked for the rest of the day. So there is no way this would create "inaccurate data in the reports later".
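For reference, a minimal sketch of the keep-alive timer described in the question; it assumes the classic analytics.js `ga` global is already loaded on the page (with gtag.js the call would differ), and the 4-minute interval is simply the figure from the question:

```typescript
// Assumes analytics.js has already defined the global `ga` command queue.
declare const ga: (command: string, hitType: string) => void;

const KEEP_ALIVE_MS = 4 * 60 * 1000;

// Resend a pageview every 4 minutes so long-lived sessions keep
// appearing in the realtime report.
setInterval(() => {
  ga("send", "pageview");
}, KEEP_ALIVE_MS);
```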

Sending notifications according to database value changes

I am working on a vendor portal. An owner of a shop will log in, and in the navigation bar (similar to Facebook) I would like the number of items sold to appear INSTANTLY, WITHOUT ANY REFRESH. On Facebook, new notifications pop up immediately. I am using SQL Azure as my database. Is it possible to detect a change in the database and INSTANTLY INFORM the user?
Part 2 of my project will consist of a mobile phone app for the vendor. In this app I, too, would like to have the same notification mechanism. In this case, would I be correct to look into push notifications and apply them?
At the moment my main aim is to solve the problem in paragraph 1. I am able to retrieve the number of notifications, but how on earth is it possible to show the changes INSTANTLY? Thank you very much.
First you need to define what INSTANT means to you. For some, it means within a second 90% of the time. For others, they would be happy to have a 10-20 second gap on average. And more importantly, you need to understand the implications of your requirements; in other words, is it worth it to have near zero wait time for your business? The more relaxed your requirements, the cheaper it will be to build and the easier it will be to maintain.
You should know that near-real-time notification can be very expensive in terms of computing and locking resources. The more you refresh, the more web round trips are needed (even if they are minimal in this case). Keeping data fresh to the second can also be costly for the database because you are potentially creating a high volume of requests, which in turn could affect otherwise well-performing queries. For example, if your website runs with 1000 users logged on, you may need 1000 database requests per second (assuming that's your definition of INSTANT), which could in turn create a throttling condition in SQL Azure if not designed properly.
An approach I used in the past, for a similar requirement (although the precision wasn't to the second; more like to the minute) was to load all records from a table in memory in the local website cache. A background thread was locking and refreshing the in memory data for all records in one shot. This allowed us to reduce the database traffic by a factor of a thousand since the data presented on the screen was coming from the local cache and a single database connection was needed to refresh the cache (per web server). Because we had multiple web servers, and we needed the data to be exactly the same on all web servers within a second of each other, we synchronized the requests of all the web servers to refresh the cache every minute. Putting this together took many hours, but it allowed us to build a system that was highly scalable.
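A minimal sketch of that cache-and-refresh idea in one web tier, using Node/Express purely for illustration; `loadCountsFromDatabase`, the `/notifications/:vendorId` route, and the one-minute interval are hypothetical names and values, not the original implementation:

```typescript
import express from "express";

const app = express();
let itemsSoldByVendor = new Map<string, number>();

async function loadCountsFromDatabase(): Promise<Map<string, number>> {
  // Placeholder for one query that loads all vendors' counts in one shot.
  return new Map();
}

// A single background refresh keeps the in-memory cache current:
// one database round trip per interval per web server, regardless of
// how many vendors are polling.
async function refreshCache(): Promise<void> {
  try {
    itemsSoldByVendor = await loadCountsFromDatabase();
  } catch (err) {
    console.error("cache refresh failed:", err);
  }
}

void refreshCache();
setInterval(() => void refreshCache(), 60_000);

// Browsers poll this cheap endpoint (or receive the value over a push
// channel) and never touch the database themselves.
app.get("/notifications/:vendorId", (req, res) => {
  res.json({ itemsSold: itemsSoldByVendor.get(req.params.vendorId) ?? 0 });
});

app.listen(3000);
```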
The above technique may not work for your requirements, but my point is that the higher the need for fresh data, the more design/engineering work you will need to make sure your system isn't too impacted by the freshness requirement.
Hope this helps.
