I am stuck on a case where Google Analytics is recording multiple eCommerce transactions. We have added code on the server side to execute the GA eCommerce posting code only once, yet the issue is still reproducible for some transactions. The duplicate eCommerce transactions have the same transaction ID but occur on different dates.
While researching I found that this happens on small devices (mobiles, tablets). Their browsers cache the whole webpage, and when the browser is reopened it reloads the page from the cache. So each time the user opens the browser, the page loads from the cache and fires the transaction code again, causing this issue.
Can anyone help me on this?
Thanks
"Ignore double transaction ids" would be quite a useful setting and we should try and make this a feature request. However at the moment it does not exist.
The only way I can think of would be to use an API script that selects the transaction ids for the last "n" days and then inserts a heap of filters via the Management API to exclude hits with those transaction ids. After some time (when the caches have presumably expired) you could throw out the old filters. This would only be feasible if you have a small number of transactions (I think there is an upper limit to the number of filters a view can have). A sketch of this follows after the caveat below.
Or, if your transaction ids are somehow sequential (e.g. if they contain the date), you might be able to construct a regex that matches earlier parts of the sequence (e.g. previous dates) and only lets a transaction pass if it is higher up in the sequence than the last recorded transaction id (or does not let it pass if the date in the transaction id is lower than the current date - remember to update your filter at midnight).
Caveat: I have not actually tried something like this, but it sounds like it should work.
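Purely as a sketch of the first idea, and with the same caveat: a Node.js script along the following lines could insert one exclude filter per duplicate transaction id via the Management API. The filter field name and the auth setup are assumptions (verify them against the Management API filter reference), and a new filter still has to be linked to the view before it takes effect:

    const { google } = require('googleapis');

    // auth: an already-authorized OAuth2/JWT client (assumed).
    // accountId and txnId are placeholders for your account and one
    // duplicate transaction id selected via the reporting API.
    async function excludeTransactionId(auth, accountId, txnId) {
      const analytics = google.analytics({ version: 'v3', auth });
      await analytics.management.filters.insert({
        accountId,
        requestBody: {
          name: 'Exclude duplicate transaction ' + txnId,
          type: 'EXCLUDE',
          excludeDetails: {
            field: 'ECOMMERCE_TRANSACTION_ID', // assumed field name - check the filter reference
            matchType: 'EQUAL',
            expressionValue: txnId
          }
        }
      });
      // Remember: the filter must also be linked to the view with
      // management.profileFilterLinks.insert before it does anything.
    }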
I am installing ecommerce tracking for a pretty simple ecommerce site. I am tracking the conversion on the order confirmation page, recording the transaction ID and order value, and everything's working fine.
However, sometimes the system issues the customer an offer to make an additional purchase on the order confirmation page with a single click (some small accessories that are discounted). If the customer chooses to make an additional purchase, I would like to be able to update the previously sent conversion. I do not want to assign a new transaction ID, because that would artificially inflate my conversion rate. I have tried sending the new amount of revenue with the same transaction ID; however, that does not seem to produce consistent results (sometimes it is ignored, sometimes the value is simply doubled).
I cannot hold back sending the conversion to GA until the customer makes a decision, because often the customer simply exits the browser without accepting or declining the offer - in that case no conversion data would be sent at all.
Any ideas? Is there something in the GA library that I'm missing for this situation? Thanks
Nope, there is nothing. Even if a transaction with the same id goes through, it is internally treated as a second transaction with respect to the conversion rate.
If you want to get really fancy you could try to collect the transaction hit on your own server, wait a few minutes to see if you need to add another product, and add a queue time parameter to offset for the actual collection time before you send it to Google. While this would work in theory, I am not sure it is really feasible in a production environment (and in any case it would probably be more work than it's worth).
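To illustrate the idea, here is a minimal Node.js sketch of that approach using the Measurement Protocol. The parameter names (v, tid, cid, t, ti, tr, qt) are from the Measurement Protocol reference; the function, the delay policy, and how you captured the client id are assumptions:

    const https = require('https');
    const querystring = require('querystring');

    // Called after holding the transaction back for a few minutes to see
    // whether more products need to be added. Item hits for the products
    // would be sent the same way with t=item.
    function sendTransaction(clientId, txn, receivedAt) {
      const payload = querystring.stringify({
        v: 1,                          // protocol version
        tid: 'UA-XXXXX-Y',             // your property id (placeholder)
        cid: clientId,                 // client id captured from the browser cookie
        t: 'transaction',
        ti: txn.id,                    // transaction id
        tr: txn.revenue,               // total revenue
        qt: Date.now() - receivedAt    // queue time: how long the hit was held back
      });
      const req = https.request({
        hostname: 'www.google-analytics.com',
        path: '/collect',
        method: 'POST',
        headers: { 'Content-Length': Buffer.byteLength(payload) }
      });
      req.end(payload);
    }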
I have a question regarding the "Add Calendar By URL" function in Google Calendar:
How often is it updated? (Most sources I've found say once per 24 hours.) Does the caladress.ics?noCache workaround still work?
How is it updated? If I have a large calendar (e.g. 2008-2016) and add a single event, does Calendar re-upload the whole calendar or check for a diff? If it checks for a diff, are there any limitations?
Is there any limit on how long events can be? E.g. is it possible to set a 5-year event?
1. How often is it updated? (Most sources I've found say once per 24 hours.) Does the caladress.ics?noCache workaround still work?
Based on the Google thread, updates may take a few hours before the new information is parsed and viewable by your users.
Note: It might take up to 12 hours for changes to show in your Google Calendar.
You can use no-cache to indicate that the returned response cannot be used to satisfy a subsequent request to the same URL without first checking with the server if the response has changed. Here is the documentation and example.
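For instance, if you control the server hosting the .ics feed, a sketch like the following sets that header (Express is assumed here, and icsBody stands in for your serialized calendar):

    const express = require('express');
    const app = express();

    const icsBody = 'BEGIN:VCALENDAR\r\n...\r\nEND:VCALENDAR'; // placeholder feed content

    app.get('/calendar.ics', (req, res) => {
      // Tell clients and proxies to revalidate with the server before
      // reusing a cached copy of the feed.
      res.set('Cache-Control', 'no-cache');
      res.type('text/calendar');
      res.send(icsBody);
    });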
2. How is it updated? If I have a large calendar (e.g. 2008-2016) and add a single event, does Calendar re-upload the whole calendar or check for a diff? If it checks for a diff, are there any limitations?
Calendar is updated based on how you implement the "incremental synchronization" of calendar data. It can be an initial full sync or an incremental sync.
Initial full sync is performed once at the very beginning in order to fully synchronize the client’s state with the server’s state. You can optionally restrict the list request using request parameters if you only want to synchronize a specific subset of resources.
Incremental sync, on the other hand, allows you to retrieve all the resources that have been modified since the last sync request. To do this, you perform a list request with your most recent sync token specified in the syncToken field. Keep in mind that the result will always contain deleted entries, so that clients get the chance to remove them from storage.
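A minimal sketch of that flow with the Node.js client (the auth object and how you persist the token are assumed; see the Calendar API sync guide for the full rules):

    const { google } = require('googleapis');

    async function syncEvents(auth, savedSyncToken) {
      const calendar = google.calendar({ version: 'v3', auth });
      const events = [];
      let pageToken;
      let syncToken = savedSyncToken; // undefined on the initial full sync

      do {
        const res = await calendar.events.list({
          calendarId: 'primary',
          syncToken,   // when set, only changed/deleted events come back
          pageToken
        });
        events.push(...res.data.items);
        pageToken = res.data.nextPageToken;
        // The last page carries the token to persist for the next run.
        if (res.data.nextSyncToken) syncToken = res.data.nextSyncToken;
      } while (pageToken);

      // Deleted events arrive with status 'cancelled' so clients can drop them.
      return { events, syncToken };
    }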
3. Is there any limit on how long events can be? E.g. is it possible to set a 5-year event?
As for limits, the Google Calendar API has a courtesy limit of 1,000,000 queries per day; you can see the calendar usage limits here. Setting an event of that length is possible, as long as you haven't reached the limit on the number of events you can create.
I wish to extract (via the Analytics Core Reporting API) all the transactions made TODAY by users that had a specific ga:eventCategory a few weeks ago.
I'm looking to see the date of a transaction and all dates of the events that are related to that transaction.
If GA were SQL, I would join on the GA user and pull in as dimensions both his transaction dates and his event dates...
Thanks.
Noam.
As I have indicated in my comment, you can segment the data to include only those users who have the specific event. Segmentation works fine with the Core Reporting API.
Your segment definition would look like this:
users::condition::ga:eventCategory==[myEventCategory]
(where obviously the thing in [brackets] is a placeholder that needs to be substituted with the event category name). The "users::" prefix means you are segmenting by user scope (as opposed to sessions), so this will include all sessions in the selected timeframe for users who had the event in at least one of their sessions (even if the event was outside the selected timeframe).
Select transactionId as a dimension and some metric (revenue) with today's date and you are done (a sketch of the full query follows at the end of this answer). Or rather, you would be done if this were actually going to work, but there are at least two caveats:
Google Analytics does not work in real time, so it's unlikely that TODAY's transactions are fully available (Google says it takes up to 24 hours until the data is processed - it might actually happen faster, but you cannot rely on it).
If a user has deleted their cookie they won't be recognized as a returning user and GA will be unable to segment them out. The longer the interval between the event and the transaction, the less likely it is that the GA cookie is still present.
So even with a technically correct query it might be that you won't get the data you need.
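For reference, a minimal sketch of the query with the Node.js client (the view id and event category are placeholders; the auth setup is assumed):

    const { google } = require('googleapis');

    async function todaysTransactionsForEventUsers(auth) {
      const analytics = google.analytics({ version: 'v3', auth });
      const res = await analytics.data.ga.get({
        ids: 'ga:12345678',          // your view (profile) id - placeholder
        'start-date': 'today',
        'end-date': 'today',
        metrics: 'ga:transactionRevenue',
        dimensions: 'ga:transactionId',
        segment: 'users::condition::ga:eventCategory==myEventCategory'
      });
      return res.data.rows; // [[transactionId, revenue], ...]
    }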
I would like some feedback on tracking user activity on an ecommerce website using the Google Analytics ecommerce capabilities.
I can't fully understand these 3 parts:
Adding an item (ecommerce:addItem): obviously, when a user adds a thing to the cart
Adding a Transaction (ecommerce:addTransaction): that's where I'm very confused
Sending the data (ecommerce:send): that's obvious
Can those 3 events happen at different moments? In what manner?
What would be a real-world use case that would make you execute ecommerce:addTransaction and ecommerce:send at different moments?
This makes me wonder a lot, and I'd like some experienced feedback on it, as you can easily break your stats if something is not done well enough.
Thanks in advance
EDIT
So the main purpose here is to get stats for the pending orders (you added stuff to your cart) and the completed orders (you paid for the things you added).
Right now I only send it all when the order is complete, and things are working pretty well in Analytics, but I just don't know anything about the orders that were not completed.
This question came from a lack of knowledge.
The simple ecommerce plugin has nothing to do with the enhanced ecommerce plugin.
You won't track much with the first one, except the checkouts: a plain, one-order-at-a-time revenue value.
If you want deep insight into your users' behavior (and when I say deep, I mean it), you have to go for the second one (a small illustration follows below).
We might debate the uselessness of the first one, and the fact that its very existence next to the second is misleading, since when you first get in, as usual with Google, you get flooded by endless documentation.
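To make the difference concrete, here is a hedged sketch of the enhanced ecommerce ("ec") plugin tracking the cart and checkout steps that the simple plugin cannot see. The analytics.js snippet is assumed to be loaded, and the product values are placeholders:

    // Enhanced ecommerce is required instead of the simple 'ecommerce' plugin.
    ga('require', 'ec');

    // A "pending order": the user puts a product in the cart.
    ga('ec:addProduct', { id: 'P12345', name: 'T-Shirt', price: '29.90', quantity: 1 });
    ga('ec:setAction', 'add');
    ga('send', 'event', 'UX', 'click', 'add to cart');

    // Later: the user enters the checkout funnel.
    ga('ec:addProduct', { id: 'P12345', name: 'T-Shirt', price: '29.90', quantity: 1 });
    ga('ec:setAction', 'checkout', { step: 1 });
    ga('send', 'pageview');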
ecommerce:addItem does not add items to a cart; it adds items to a transaction (with "conventional" ecommerce tracking there is no cart tracking; you'd have to use enhanced ecommerce tracking for that. Actually, your title refers to enhanced ("ec:") tracking and your question to conventional ("ecommerce:") tracking).
So ecommerce:addTransaction starts a transaction; this is where the stuff that affects the transaction as a whole goes, like the transaction id, tax on the total purchase, or shipping costs.
Now that you have started the transaction you can add items to it that are associated via the transaction id.
Finally, the ecommerce:send command tells Universal Analytics that the transaction should be processed on the server. "send" is actually a misnomer; addItem and addTransaction already send data to the server (they each create a request to the tracking server and thus count towards your hit quota).
The reason for this is, as far as I can tell, that the information is transmitted via URL parameters (you call the Google Analytics endpoint, which returns a transparent pixel). The maximum length of a URL request is limited (the actual limits depend on the browser and browser version).
So the transaction is broken up into multiple parts not because you want to execute the commands at different moments, but so that it can be transmitted via URL parameters without being truncated. The send command merely signals that you are finished adding parts to the transaction and that the data can now be processed.
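Put together, the documented analytics.js sequence looks like this (the values are placeholders in the style of Google's own example):

    ga('require', 'ecommerce');

    // Transaction-wide data first; the id ties the items to the transaction.
    ga('ecommerce:addTransaction', {
      id: '1234',
      affiliation: 'Acme Clothing',
      revenue: '11.99',
      shipping: '5',
      tax: '1.29'
    });

    // Items reference the transaction via the same id.
    ga('ecommerce:addItem', {
      id: '1234',
      name: 'Fluffy Pink Bunnies',
      sku: 'DD23444',
      category: 'Party Toys',
      price: '11.99',
      quantity: '1'
    });

    // Signal that the transaction is complete and can be processed.
    ga('ecommerce:send');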
Ok! So I have spoken to a Google representative about this issue; however, since I am not enterprise level, he couldn't push me to tech support and suggested that I use SO for answers. Here is the question...
In Google Maps Terms it states the following:
(b) No Pre-Fetching, Caching, or Storage of Content. You must not pre-fetch, cache, or store any Content, except that you may store: (i) limited amounts of Content for the purpose of improving the performance of your Maps API Implementation if you do so temporarily (and in no event for more than 30 calendar days), securely, and in a manner that does not permit use of the Content outside of the Service; and (ii) any content identifier or key that the Maps APIs Documentation specifically permits you to store. For example, you must not use the Content to create an independent database of "places" or other local listings information.
This originally led me to believe that Google would not allow caching of any kind of information. However, I then read the following:
When to Use Client-Side Geocoding
The basic answer is "almost always." As geocoding limits are per user session, there is no risk that your application will reach a global limit as your userbase grows. Client-side geocoding will not face a quota limit unless you perform a batch of geocoding requests within a user session. Therefore, running client-side geocoding, you generally don't have to worry about your quota.
Two basic architectures for client-side geocoding exist.
Run the geocoding and display entirely in the browser. For instance, the user enters an address on your page. Your application geocodes it. Then your page uses the geocode to create a marker on the map. Or your app does some simple analysis using the geocode. No data is sent to your server. This reduces load on your server, but doesn't give you any sense of what your users are doing.
Run the geocode in the browser and then send it to the server. For instance, the user enters an address. Your application geocodes it in the browser. The app then sends the data to your server. The server responds with some data, such as nearby points of interest. This allows you to customize a response based on your own data, and also to cache the geocode if you want. This cache allows you to optimize even more. You can even query the server with the address, see if you have a recently cached geocode for it, and if you do, use that. If you don't, then return no result to the browser, and let it geocode the result and send it back to the server to for caching.
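As a concrete illustration of that second architecture, a hedged browser-side sketch (the Geocoder calls are from the Maps JavaScript API; the /api/geocode-cache endpoint and the fixed address are invented for the example):

    // Runs in the browser with the Maps JavaScript API already loaded.
    const geocoder = new google.maps.Geocoder();
    const address = '1600 Amphitheatre Parkway'; // would come from user input

    geocoder.geocode({ address }, (results, status) => {
      if (status === 'OK' && results[0]) {
        // Send the coordinates to your own server for caching / enrichment.
        fetch('/api/geocode-cache', {           // placeholder endpoint
          method: 'POST',
          headers: { 'Content-Type': 'application/json' },
          body: JSON.stringify({
            address,
            location: results[0].geometry.location.toJSON()
          })
        });
      }
    });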
So one side says you cannot cache, while the other tells you that you should. Another suggestion is to always use client-side geocoding when you can, but then this becomes a grey area as well, because both examples state that the user must input data. What if jQuery read data from a div or span and then geocoded the information? The user wouldn't have actually triggered the geocode, but it would still be done client-side. I'm trying to create a site that has a bunch of events generated by users, and this site could get pretty loaded, so I am trying to determine the best practice for doing this. Google suggested SO, so before you go and say this is "off-topic", please note that this is where they told me to post.
Any feedback would be greatly appreciated.
The first quote does not explicitly forbid caching data at all. It is ambiguous as to how much you can cache (what exactly counts as "limited amounts"?), but it does not forbid caching.
You are allowed to cache the data if it helps improve the performance of your site as long as you retain the data for no longer than 30 days and do not make it available in any way to any other service except the service that originally retrieved the data.
Regarding user interaction - if your user explicitly enters a page with the expectation that they will be shown geocoded information I would assume that this would fulfill "user interaction".
As an example, on a project I worked on last year I had it set up to do the following:
- Show markers on the map
- If the user clicked a marker they were shown a popup with data from the cache if available; otherwise a geocode would be performed and the returned information cached along with the date/time of caching.
Another page of the site showed a history of these markers at 5-minute intervals throughout the day. If cached data was present (from clicking the map marker as in the previous part) it would be shown; otherwise a geocode would be performed and the data cached as before. The user clicking to run the report was (in my opinion) enough "user interaction" for it not to count as pre-fetching, as the user had to manually select a timeframe before the report would be displayed.
A cronjob then ran every day at midnight and went through each record whose cached data was over 25 days old and removed it.
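In today's terms that purge could look something like this (node-cron and the Cache model are assumptions for the sketch; any scheduler and datastore would do):

    const cron = require('node-cron');
    const Cache = require('./models/cache'); // hypothetical geocode cache model

    // Every day at midnight, drop cached geocodes older than 25 days,
    // staying comfortably inside the 30-day cap from the terms.
    cron.schedule('0 0 * * *', async () => {
      const cutoff = new Date(Date.now() - 25 * 24 * 60 * 60 * 1000);
      await Cache.deleteMany({ cachedAt: { $lt: cutoff } }); // Mongoose-style query
    });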
As it was, I was caching much less than 10% of the marker positions being shown (20+ markers were being updated every minute, but the report was being run on maybe 3-5 markers each day, and only every 5th point was geocoded).