Parameter for Ecommerce Transaction timestamp in the Google Measurement Protocol? - google-analytics

I am migrating the pushing of e-commerce transactions from a controller to a cron job that will run every minute.
However, I cannot seem to find a parameter in the Measurement Protocol that I could use to specify the exact time at which the transaction occurred.
Does anyone have any ideas? Is this even necessary, since the maximum delay will be one minute?

You cannot specify a timestamp for the transaction (you can add a custom dimension with a timestamp, but GA will happily ignore this for session aggregation).
What you can do is add an offset in milliseconds between the actual transaction time (or the time of any other hit) and the time you finally send the hit to Google. This is called "queue time" (the qt parameter); I think it was originally intended for native/web apps that might be offline for some time.
For just a one-minute delay I probably would not bother. However, it might be useful for cases when your cron job fails for some reason and you want to pick up and send the transactions later.
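As a rough illustration (a Python sketch using the requests library; the tracking ID, client ID, and transaction values are placeholders, not your actual setup), the cron job could compute the queue time and attach it as qt like this:

import time
import requests

COLLECT_URL = "https://www.google-analytics.com/collect"

def send_transaction(tracking_id, client_id, transaction_id, revenue, occurred_at):
    # Send a Measurement Protocol transaction hit, reporting the send delay via qt.
    queue_time_ms = int((time.time() - occurred_at) * 1000)  # ms between transaction and send
    payload = {
        "v": "1",                  # protocol version
        "tid": tracking_id,        # property id, e.g. "UA-XXXXXXX-1" (placeholder)
        "cid": client_id,          # client id
        "t": "transaction",
        "ti": transaction_id,      # unique transaction id
        "tr": str(revenue),        # transaction revenue
        "qt": str(queue_time_ms),  # queue time: offset in milliseconds
    }
    requests.post(COLLECT_URL, data=payload, timeout=10).raise_for_status()

# Example: a transaction the controller recorded 45 seconds ago
# send_transaction("UA-XXXXXXX-1", "5555", "T-1001", 19.99, time.time() - 45)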

Related

WSO2 lags in processing fixed-time APIs

We have a backend API which runs in almost constant time (it does "sleep" for a given period). When we run a managed API which proxies to it for a long time, we see that from time to time its execution time increases up to twice the average.
From analyzing the Amazon ALB data in production, it seems that the time the request spends inside Synapse remains the same, but the connection time (the time the request waits in the queue before processing) is high.
In an isolated environment we noticed that those lags happen approximately every 10 minutes. In production, where we have multiple workers that get requests all the time, the picture is less clear, as it happens more often (possibly the lags accumulate).
Is anyone aware of any periodic activity in the worker which results in delays entering the queue every
few minutes? Is there any parameter that controls this? Otherwise, any idea how to figure out the cause?
Attached is an image demonstrating it.
This could be due to gateway token cache invalidation. The default timeout is 15 minutes.
Once key validation is done via the Key Manager, the key validation info is cached in the gateway. For subsequent API invocations, key validation is done from this cache, and during this time the execution time will be lower.
After the cache is invalidated, token validation is done via the Key Manager again (hitting the database), which causes an increase in the execution time.
Investigating further, we found two causes for the spikes.
Our handler writes logs to a shared file system that was mounted as sync instead of async, which caused delays; fixing this removed most of the spikes.
The additional spikes seem to be related to registry updates. We did not investigate those, as they were more sporadic.

Missing a significant number of transactions when using measurement protocol and non-interactive

Using Google Analytics and its Measurement Protocol, I am trying to track ecommerce transactions for my customers (who aren't end-consumers, meaning there aren't sparse unique user IDs, locations, etc.) which have a semantic idea of a "sale" with revenue.
The problem is that not all of my logged requests to the GA MP API are resulting in "rows" of transactions when looking at Conversions -> Ecommerce -> Transactions, and the reported revenue is correspondingly missing too. As an example of the discrepancy: listing all of my API calls with non-zero transaction revenue, I should see 321 transactions in the analytics dashboard. However, I see only 106, about 30%. This is about the same every day, even after tweaking attributes which I would expect to force uniqueness of a session or transaction.
A semantic difference is that a unique consumer (cid or uid) can send a "t=transaction" with a unique "ti" (transaction id), and these overlap and are not serial. I mention this to suggest that maybe there is some session-related deduplication happening, even though my "ti" attribute is definitely unique across my notion of a "transaction". In other words, a particular cid/uid may have many different ti's in the same minute.
I have no Google Analytics JavaScript or client-side components in use; they are simply not applicable to how I need to use Google Analytics, which takes me to the Measurement Protocol.
Using the hit-builder, /debug/collect, and logging of any non-200 HTTP responses, I see absolutely no indication that any of my "t=transaction" messages would not be received and processed. I think the typical debugging points are eliminated by this list of what I have tried:
sent message via /collect
sent multiple messages via /batch (t=transaction and t=item)
sent my UUID of my consumer as cid=, uid= and both
tried with and without "sc=start" to ensure there was no session deduplication of a transaction
tried with and without ua (user-agent) and uip (ip override) since it's server side but hits from consumers do come from different originations sometimes
took into consideration my timezone (UTC-8) and how my server logs these requests (UTC)
waited 24 to 48 hours to ensure data
ecommerce is turned on for my view
the number of calls to the measurement protocol is < 10,000 per day, so I don't think I am hitting any limits
I have t=event messages too, although I am taking a step back from using them for now until I can see that the transaction data is at least 90%+ represented.
Here is an example t=transaction call.
curl \
--verbose \
--request POST \
--header 'Content-Type: application/x-www-form-urlencoded' \
--data 'ta=customer1&t=transaction&sc=start&v=1&cid=4b014cff-ccce-44c2-bfb8-e0f05fc7827c&tr=0.0&uid=4b014cff-ccce-44c2-bfb8-e0f05fc7827c&tid=UA-xxxxxxxxx-1&ti=5ef618370b01009807f780c5' \
'https://www.google-analytics.com/collect'
You've done a very good job debugging, so unfortunately there isn't much left to suggest. A few things left to check:
View bot/spider filter: disable this option to be on the safe side
500 hits / session: if you're sending lots of hits for the same cid/uid within 30 minutes (or whatever your session timeout is), these would be recorded as part of the same session and thus you could reach the quota limit
10M hits / property / month: you didn't mention overall property volume, so I'm mentioning this just in case
Payload limit of 8192 bytes per hit: I've seen people run into this issue when tracking transactions with a crazy number of products attached to them
Other view filters: same thing, you didn't mention them, so I'm mentioning this just in case
Further debugging could include:
Using events instead of transactions: the problem with transactions is that they're a black box; if they don't show up, you have nothing to debug. I personally always track my transactions via events and set a copy of the ecommerce payload as the event label (a JSON string) for debugging. If the event is present, I know it's not a data-ingestion issue but most likely a malformed ecommerce payload (and I have the event label to debug it); if the event is missing, then it's a data-ingestion problem. See the example below, replacing UA-XXXXXXX-1 with your own:
v=1&t=event&tid=UA-XXXXXXX-1&cid=1373730658.1593409595&ec=Ecommerce&ea=Purchase&ti=T12345&ta=Online%20Store&tr=15.25&tt=0.00&ts=0.00&tcc=SUMMER_SALE&pa=purchase&pr1nm=Triblend%20Android%20T-Shirt&pr1id=12345&pr1pr=15.25&pr1br=Google&pr1ca=Apparel&pr1va=Gray&pr1qt=1&pr1cc=&el=%7B%22purchase%22%3A%7B%22actionField%22%3A%7B%22id%22%3A%22T12345%22%2C%22affiliation%22%3A%22Online%20Store%22%2C%22revenue%22%3A%2215.25%22%2C%22tax%22%3A%220.00%22%2C%22shipping%22%3A%220.00%22%2C%22coupon%22%3A%22SUMMER_SALE%22%7D%2C%22products%22%3A%5B%7B%22name%22%3A%22Triblend%20Android%20T-Shirt%22%2C%22id%22%3A%2212345%22%2C%22price%22%3A%2215.25%22%2C%22brand%22%3A%22Google%22%2C%22category%22%3A%22Apparel%22%2C%22variant%22%3A%22Gray%22%2C%22quantity%22%3A1%2C%22coupon%22%3A%22%22%7D%5D%7D%7D
Using a data collection platform like Segment: this will give you an extra level of debugging via their debugger (although their payload syntax differs from GA's, which introduces another level of complexity, it does help me from time to time to spot issues with the underlying data, as I'm familiar with its syntax).
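To make the event-with-label idea above concrete, here is a hedged Python sketch of how such a hit could be built server-side (the order dict shape and the helper name are made up for illustration; only the Measurement Protocol parameters themselves are real):

import json
import requests

COLLECT_URL = "https://www.google-analytics.com/collect"

def send_purchase_event(tracking_id, client_id, order):
    # Track a purchase as an event, copying the raw ecommerce payload into
    # the event label (el) so missing transactions can still be debugged.
    hit = {
        "v": "1",
        "t": "event",
        "tid": tracking_id,
        "cid": client_id,
        "ec": "Ecommerce",        # event category
        "ea": "Purchase",         # event action
        "el": json.dumps(order),  # debug copy of the ecommerce payload
        "pa": "purchase",         # enhanced ecommerce product action
        "ti": order["id"],        # transaction id
        "tr": order["revenue"],   # transaction revenue
    }
    # Attach products as pr1*, pr2*, ... (order["products"] is an assumed shape)
    for i, product in enumerate(order.get("products", []), start=1):
        hit[f"pr{i}id"] = product["id"]
        hit[f"pr{i}nm"] = product["name"]
        hit[f"pr{i}pr"] = product["price"]
        hit[f"pr{i}qt"] = product["quantity"]
    requests.post(COLLECT_URL, data=hit, timeout=10).raise_for_status()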

Can the Google Calendar API events watch be used without risking exceeding the usage quotas?

I am using the Google Calendar API to preprocess events that are being added (adjust their content depending on certain values they may contain). This means that theoretically I need to update any number of events at any given time, depending on how many are created.
The Google Calendar API has usage quotas, especially one stating a maximum of 500 operations per 100 seconds.
To tackle this I am using a time-based trigger (every 2 minutes) that does up to 500 operations (and only updates sync tokens when all events are processed). The downside of this approach is that I have to run a check every 2 minutes, whether or not anything has actually changed.
I would like to replace the time-based trigger with a watch. I'm not sure, though, whether there is any way to limit the number of watch calls so that I can ensure the 100-seconds quota is not exceeded.
My research so far shows me that it cannot be done. I'm hoping I'm wrong. Any ideas on how this can be solved?
AFAIK, that is one of the best practices suggested by Google. Using watch and push notifications allows you to eliminate the extra network and compute costs involved with polling resources to determine whether they have changed. Here are some tips from this blog on how best to work within the quota:
Use push notifications instead of polling.
If you cannot avoid polling, make sure you only poll when necessary (for example, poll very seldom at night).
Use incremental synchronization with sync tokens for all collections instead of repeatedly retrieving all the entries.
Increase page size to retrieve more data at once by using the maxResults parameter.
Update events when they change, avoid re-creating all the events on every sync.
Use exponential backoff for error retries (see the sketch below).
Also, if you cannot avoid exceeding your current limit, you can always request additional quota.
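As a rough illustration of the incremental-sync and exponential-backoff tips above, here is a Python sketch; it assumes an already-authorized google-api-python-client service object and a previously stored sync token, and fetch_changed_events is just an illustrative name:

import random
import time

from googleapiclient.errors import HttpError

def fetch_changed_events(service, calendar_id, sync_token, max_retries=5):
    # Fetch only the events changed since the stored sync token, retrying
    # with exponential backoff on quota or transient errors.
    for attempt in range(max_retries):
        try:
            events = []
            request = service.events().list(
                calendarId=calendar_id,
                syncToken=sync_token,  # incremental sync: only changes since the last run
                maxResults=250,        # bigger pages mean fewer requests
            )
            while request is not None:
                response = request.execute()
                events.extend(response.get("items", []))
                request = service.events().list_next(request, response)
            # Persist nextSyncToken and pass it back in on the next run
            return events, response.get("nextSyncToken")
        except HttpError as error:
            if error.resp.status in (403, 429, 500, 503):
                # Quota or transient error: back off exponentially, with jitter
                time.sleep((2 ** attempt) + random.random())
            else:
                raise
    raise RuntimeError("Calendar API still failing after retries")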

What is the rate limit for direct use of the Google Analytics Measurement Protocol API?

The documentation for Google Analytics Collection Limits and Quotas gives the rate limits that are implemented by the various Google-provided libraries. I can't seem to find a published rate limit for users that are POSTing directly to the Measurement Protocol (https://www.google-analytics.com/collect).
Is there one and if so what is it?
Edit on 10 July 2015 -
A few commenters asked for an example of the kind of data I am sending in. I am using a series of calls to wget with a sleep of one second between each call.
Here is an example with the app name and tracking code removed:
wget -nv --post-data 'ul=en&qt=7150000&av=0.0.1&ea=PLET&v=1&tid=<my_tracking_code>&ec=Move+to+Object&cid=1434738538-738-654031&an=<my_app_name>&t=event' -O /dev/null 'https://www.google-analytics.com/collect'
I've tried sending these queries to the /debug endpoint and all of them are valid. My first upload worked as expected and reports looked good. Subsequent uploads of the same data set to different GA properties have had mixed results. Sometimes no data appears in reports. Sometimes partial data appears in reports. During upload, realtime reports always show activity, though.
Directly from the documentation, Google Analytics Collection Limits and Quotas:
These limits apply to the Web Property / Property / Tracking ID.
10 million hits per month per property
Universal Analytics enabled (this applies to analytics.js, the Android/iOS SDKs, and the Measurement Protocol):
200,000 hits per user per day
500 hits per session, not including ecommerce (item and transaction hit types)
If you go over either of these limits, additional hits will not be processed for that session / day, respectively. These limits apply to Premium as well.
Now, I agree it doesn't specifically state the per-second rate for the Measurement Protocol, but the above lumps the Measurement Protocol in with analytics.js, so I think we can assume it is the same:
analytics.js: Each analytics.js tracker object starts with 20 hits that are replenished at a rate of 2 hits per second. This applies to all hits except ecommerce (item or transaction).
But just to make sure, I am sending an email off to the development team; they should make it clearer where the per-second rate of the Measurement Protocol lies. I will repost here when I hear from them.
Response from Google
The Measurement Protocol does not do any kind of rate limiting or quota-ing by IP address or tracking ID or anything like that. However, most of the client libraries do rate limit in some form or another.
As Linda points out in her answer, there are various limits and quotas imposed by the back end, but those are done at processing time, not collection time.
Conclusion
There is no limit on sending data through the Measurement Protocol, but limits may be applied when the data is processed. I think they may be referring to the 10 million hits a month quoted above. It seems it's the client libraries that apply limits on how fast you can send data, not the Measurement Protocol directly.
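Since the throttling lives in the client libraries rather than in the protocol, one option when POSTing directly is to mimic the analytics.js behaviour yourself. Here is a rough Python sketch of such a client-side throttle, assuming the 20-hit burst and 2-hits-per-second refill figures quoted above (pending_hits and the payload dicts are placeholders):

import time

class HitThrottle:
    # Token bucket mirroring the analytics.js client-side limit:
    # a burst of 20 hits, refilled at 2 hits per second.
    def __init__(self, capacity=20, refill_per_second=2):
        self.capacity = capacity
        self.refill_per_second = refill_per_second
        self.tokens = float(capacity)
        self.last_refill = time.monotonic()

    def wait_for_slot(self):
        # Block until a hit may be sent, then consume one token.
        while True:
            now = time.monotonic()
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last_refill) * self.refill_per_second)
            self.last_refill = now
            if self.tokens >= 1:
                self.tokens -= 1
                return
            time.sleep((1 - self.tokens) / self.refill_per_second)

# Usage (pending_hits is a placeholder list of payload dicts):
# throttle = HitThrottle()
# for hit in pending_hits:
#     throttle.wait_for_slot()
#     requests.post("https://www.google-analytics.com/collect", data=hit)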
Last update: please watch this video, which explains all GA quota policies:
https://youtu.be/1UfER93ALxo
In particular, your issue might be a result of the 10 requests / 1 second limitation:
https://youtu.be/1UfER93ALxo?t=5m27s
I can confirm the same thing. In my case I had my own buildHitTask which constructs the URL for a measurement protocol request (MPR) and stores it in the hitPayload field. But instead of the original GA reporting, I was saving those URLs into cookies for delayed reporting.
In my experiment, only 10-20% of 2,000 measurement protocol requests were actually "stored".
The rest of the hits are not available in the GA Reporting UI, the API, or BigQuery. Each request was sent with a 2-second delay via the new Image() method, slowing down in case of errors. The received results are not consistent: both successful and failed hits are randomly distributed across the whole time period.
Please let me know in case if you find more details on this constraint!

How to implement real time updates in ASP.NET

I have seen several websites that show you a real time update of what's going on in the database. An example could be
A stock ticker website that shows stock prices in real time
Showing data like "What other users are searching for currently.."
I'd assume this would involve some kind of polling mechanism that queries the database every few seconds and renders the result on a web page, but the thought scares me when I think about it from a performance standpoint.
In an application I am working on, I need to display the real-time status of an operation that a user has submitted. Users wait for the process to be completed. As and when an operation is completed, its status is updated by another process (which could be a Windows service). Should I query the database every second to get the updated status?
It's not necessarily done in the database. As you suggested, that's expensive. Although the database might be the backing store, a more efficient mechanism is likely used for the polling operation, such as keeping the real-time status in memory in addition to eventually persisting it to the database. You can poll memory much more efficiently than running SELECT status FROM Table every second.
Also, as I mentioned in a comment, in some circumstances you can get a lot of mileage out of forging the appearance of status updates through animations and such, employing estimation, and checking the data source less often.
Edit
(an optimization to use less db resources for real time)
Instead of polling the database per user to check job status every X seconds, slightly alter the behaviour of the situation. Each time a job is added to the database, read the database once to put metadata about all jobs into the cache. So, for example, the memory cache will reflect [user49 ... user3, user2, user1, userCurrent], 50 users' jobs at 1 job each. (Maybe I should have written it as [job49 ... job2, job1, job_current], but it's the same idea.)
Then individual users' web pages poll that cache, which is always kept current. In this example the DB was read just 50 times into the cache (once per job submission). If those 50 users wait an average of 1 minute for job processing and poll for status every second, then the user base polls the cache a total of 50 users x 60 secs = 3000 times.
That's 50 database reads instead of 3000 over a period of 50 minutes (on average one per minute). The cache is always fresh and yet handles the load. It's much less scary than considering hitting the DB every second for each user. You can store other stats and info in the cache to help out with estimation and such. As long as a fresh cache provides more efficiency, it's a viable alternative to massive DB hits.
Note: by cache I mean a global store like Application or a 3rd-party solution, not the ASP.NET page cache, which goes out of scope quickly. Caching using ASP.NET's mechanisms might not suit your situation.
Another note: the DB will know when another job record is appended, no matter from where, so a trigger can initiate the cache update.
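A minimal sketch of that pattern, written in Python only to illustrate the idea (in ASP.NET the job_cache dict below would correspond to the Application object or another global store; all names are made up):

import threading

# Global in-memory store of job statuses: written by the job processor
# (or a hook fired by the DB trigger), read by the per-user polling requests.
job_cache = {}
cache_lock = threading.Lock()

def on_job_updated(job_id, status):
    # Called whenever a job record changes; keeps the cache current.
    with cache_lock:
        job_cache[job_id] = status

def poll_status(job_id):
    # Called once per second per user; never touches the database.
    with cache_lock:
        return job_cache.get(job_id, "pending")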
Despite a good database solution, so many users polling frequently is likely to create problems with web server connections and you might need a different solution at that level, depending on traffic.
Maybe have a cache and work with it so you don't hit the database each time the data is modified, and update the database every few seconds or minutes or whatever you like.
The problem touches many layers of a web application.
On the client, you either use an iframe whose content auto-refreshes every n seconds using the meta refresh tag (HTML), or JavaScript which is triggered by a timer and updates a named div (AJAX).
On the server, you have at least two places to cache your data:
One is in the Application object, where you keep a timestamp of the last update, and refresh the cached data as your refresh interval elapses.
If you want to present data from a database, keep aggregated values or cache relevant data for faster retrieval.
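A minimal sketch of that interval-based refresh, again in Python purely to show the shape of the idea (in ASP.NET the module-level variables would live in the Application object; load_from_db stands in for whatever aggregated query you run):

import time

REFRESH_INTERVAL = 5.0  # seconds between database reads

_cached_data = None
_last_update = 0.0

def get_data(load_from_db):
    # Return cached data, re-reading from the database only when the
    # refresh interval has elapsed since the last update.
    global _cached_data, _last_update
    now = time.time()
    if _cached_data is None or now - _last_update >= REFRESH_INTERVAL:
        _cached_data = load_from_db()
        _last_update = now
    return _cached_data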
