I am working on a project where I do not have access to the page source, nor can I ask the client's IT team to create an in-page/onload dataLayer (i.e. a dataLayer declared before the GTM tag). Is it remotely possible to implement enhanced e-commerce tracking via dataLayer.push, i.e. building the dataLayer as you go? I have very little knowledge of dataLayer.push (which I am starting to read up on). My questions are:
1. Is dataLayer.push the correct way to move forward? Has anyone done this before?
2. What issues might I face, e.g. with "add to cart", "remove from cart" or category page events? Is there a working example? I still don't have a clear view of the workflow here.
3. What are the downsides of doing it this way, besides style/CSS-based triggers potentially breaking during a future site redesign?
Thanks.
Your main problem is that you need to get the data from somewhere. Usually, without a dataLayer, this means DOM scraping and then assembling the scraped data into a dataLayer in a custom JavaScript function (a rough sketch follows at the end of this answer). The drawbacks are the same as with your style/CSS-based triggers:
Implementation is strongly coupled to the page layout
Data might be missing, or might require cleaning (if mixed with HTML or unrelated text)
Potentially expensive in the client's CPU time
Custom JavaScript introduces new points of failure (i.e. can you test rigorously enough to guarantee that there are no side effects?)
If you build workarounds for your client's IT team's shortcomings, remember that you own them - you will be responsible if the workarounds break, have side effects, or need to be amended to account for new features. Make sure that you are very well compensated for that risk (and personally, with the experience I have now, I would not do this at all).
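To make that concrete, here is a minimal sketch of the kind of scraping-and-pushing function this implies, assuming a Universal Analytics enhanced e-commerce setup. Every selector and the event name are assumptions about a hypothetical page, not something you can copy as-is:

```javascript
// Hypothetical custom JavaScript: scrape product details from the page and
// push an enhanced e-commerce "add to cart" event. All selectors below are
// assumptions about the page markup.
function pushAddToCart() {
  var nameEl  = document.querySelector('.product-title');
  var priceEl = document.querySelector('.product-price');
  var skuEl   = document.querySelector('[data-sku]');

  // If the markup has changed and scraping failed, bail out quietly.
  if (!nameEl || !priceEl || !skuEl) { return; }

  window.dataLayer = window.dataLayer || [];
  window.dataLayer.push({
    event: 'addToCart',
    ecommerce: {
      add: {
        products: [{
          name: nameEl.textContent.trim(),
          id: skuEl.getAttribute('data-sku'),
          // Strip currency symbols and other non-numeric characters.
          price: priceEl.textContent.replace(/[^0-9.]/g, ''),
          quantity: 1
        }]
      }
    }
  });
}

// Either wire this to a GTM click trigger, or attach it directly:
document.addEventListener('click', function (e) {
  if (e.target.closest && e.target.closest('.add-to-cart-button')) {
    pushAddToCart();
  }
});
```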
I think it's important to acknowledge here that the Data Layer is just one way of storing this information, and is used solely because it's possible to push data to it from the page.
If you can't write the code into the site itself, don't push information to the data layer. Just keep that information for yourself, in GTM's variables. You'll save yourself a huge headache, and a bit of computation too.
DOM scraping is a perfectly reasonable way to get hold of information (a small Custom JavaScript Variable sketch follows the list below), but you will run into some barriers.
You're going to have to write a lot of JavaScript to get the data you need.
Some buttons may turn out to be composed of several elements that you'll have to cover with your triggers etc.
Any changes to the site will potentially ruin your code.
Not everything is available to you, especially verification of data (for example, checking that a purchase actually went through before reporting the transaction is probably no longer possible).
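For illustration, a Custom JavaScript Variable in GTM is just an anonymous function whose return value becomes the variable's value, so the scraping can live there rather than in a dataLayer push. The selector is a placeholder for whatever element actually holds the data on your page:

```javascript
// Hypothetical GTM Custom JavaScript Variable: returns the product price
// scraped from the page, or undefined if the element is missing.
function () {
  var el = document.querySelector('.product-price'); // assumed selector
  if (!el) { return undefined; }
  return el.textContent.replace(/[^0-9.]/g, ''); // strip currency symbols
}
```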
I'm currently implementing a proof-of-concept project in Next.js 12 to check whether it is possible for us to integrate it. I'm not able to update to 13 yet, since I cannot support React 18 due to some in-house packages that we use, so for the purposes of this discussion let's pretend that Next 13 does not exist yet.
We all know that Next officially discourages the use of getInitialProps and recommends using getServerSideProps wherever possible. I'm aware of all the downsides of getInitialProps compared to getServerSideProps, the main one being the constant client/server context switching that you have to be mindful of.
However, I cannot understand how casually it is discouraged. Apart from Next.js itself, I've seen a lot of blog posts calling it the worst thing ever, and so on. It seems to me that people who say such things have not had realistic use cases, and that the opinion mostly comes from toy projects (a notes app, a todo app, a blog, etc.).
Anyway, the purpose of this question is twofold: one, to verify whether it is at all possible in my case to avoid getInitialProps, and two, to see if anyone else thinks this discouragement is somewhat unfounded and not based on reality.
The reason I decided to use Next at all was to achieve SSR in React. The entire point of that, at least I believe, is to enable SSR while still preserving the main benefits of React, such as seamless SPA-like navigation on certain pages. If that were not the case, I would have gone for a traditional SSR framework such as Ruby on Rails, Django, etc.
The reason why I need to use getInitialProps, and why I believe I cannot possibly avoid it, is based on two aspects:
1. Every single page that I have requires certain global data, which I don't want to refetch on every route change.
The perfect example of this is the page header. The header of every one of my pages depends on user data and translations (i18n), both of which I fetch from an external API. If I were to use GSSP, then every route and every sub-route of every page would have to re-perform this data fetching, which seems like a huge performance hit. I have no way to properly persist this data across GSSP navigations without resorting to hacks such as sending hidden query parameters that indicate whether the data was already fetched. If we could assume that the user always enters a page through its root URL, this would work, but that assumption is extremely unrealistic.
By using getInitialProps in combination with redux and next-redux-wrapper, it is very easy to check whether the data was already fetched (even better, if you use RTK Query you don't even have to check explicitly). A rough sketch of the pattern follows.
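This is only an illustration of the pattern, not a drop-in solution: the store wiring, slice names and thunks below are hypothetical placeholders for whatever your next-redux-wrapper setup exposes.

```javascript
// pages/index.js - "fetch global data once, reuse on client-side navigation"
import { getOrCreateStore, fetchGlobalData, fetchTranslations } from '../store'; // hypothetical wiring

function HomePage() {
  return null; // markup omitted; components read the global data via useSelector
}

HomePage.getInitialProps = async (ctx) => {
  const store = getOrCreateStore(ctx); // assumption: the wrapper makes the store reachable here

  // Runs on the server for the first load and on the client for subsequent
  // route changes, so the "already fetched?" check is what saves the refetch.
  const { global } = store.getState();
  if (!global.userLoaded) {
    await store.dispatch(fetchGlobalData());
  }
  if (!global.translationsLoaded) {
    await store.dispatch(fetchTranslations(ctx.locale)); // assumes Next's i18n routing provides ctx.locale
  }

  return {}; // nothing page-specific to pass; the data lives in the store
};

export default HomePage;
```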
2. Big pages where I want SPA-like behaviour are not possible with GSSP.
In my case I have one page with about 5 sub-routes. On its home page we display a list of certain entities, fetched from the API. The API is such that the entities can only be fetched as a list; you cannot fetch them individually. Then, when you click on an entity, you go to a sub-page where you see its specific info.
The only natural way to do this is to fetch all the data on the first page visit and then reuse it throughout the page as we navigate. Re-fetching everything on every navigation is also a performance hit. The only way I was able to implement this while preserving seamless SPA-like navigation was with getInitialProps.
What's interesting about this use case is that the hidden-query-param hack would not actually do the trick: even though I can make GSSP aware that the data has already been fetched, I cannot access that data, so I cannot do any server-side route validation. If a user were to land on the home page, where all the entities are fetched, and then somehow visit an entity page like page/123 where no entity with id 123 exists, I could not validate that and handle it properly in GSSP without re-fetching the entire list of entities - which is, once more, a performance hit. A sketch of how getInitialProps handles this follows.
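For completeness, here is a sketch of that route validation done in getInitialProps against the already-fetched list. As before, the store wiring and slice names are hypothetical.

```javascript
// pages/entities/[id].js - validate the sub-route without re-fetching the list
import { getOrCreateStore, fetchEntities } from '../../store'; // hypothetical wiring

function EntityPage({ id, notFound }) {
  return null; // renders the entity selected from the store, or a "not found" state
}

EntityPage.getInitialProps = async (ctx) => {
  const store = getOrCreateStore(ctx);

  // Only hits the API on a direct/full page load; when navigating from the
  // list page the entities are already in the store.
  if (!store.getState().entities.loaded) {
    await store.dispatch(fetchEntities());
  }

  const entity = store.getState().entities.byId[ctx.query.id];
  if (!entity) {
    if (ctx.res) {
      ctx.res.statusCode = 404; // proper status when rendering on the server
    }
    return { notFound: true }; // client-side: render the page's own 404 state
  }

  return { id: ctx.query.id };
};

export default EntityPage;
```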
So, in conclusion, I'd like to hear opinions on the discouragement of getInitialProps. To me it seems borderline impossible to migrate completely to getServerSideProps if your app is at all realistic and uses translations, global data, etc.
Thanks in advance.
My department has analytics rule conditions in DTM that trigger events based on particular classes or custom data attributes. I'm concerned that if our dev team makes a change that breaks a rule, we won't find out until someone discovers that the metric is no longer tracking.
We're trying to future-proof our scripts to allow for changing conditions (e.g. using regex to tolerate changing class names, and/or functions that traverse the DOM to find a condition without it needing to be hardcoded), but I thought someone here might have experience with this type of issue. How was it handled at your company?
**EDIT:** I'm exploring using custom Data Elements within DTM, created with JavaScript that tries multiple conditions for traversing the DOM in the ways we've identified - a sort of data layer that's controllable by my team. Something along the lines of the sketch below.
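(All selectors here are illustrative, and I'm assuming the usual DTM behaviour of wrapping a custom-script Data Element in a function and using its return value.)

```javascript
// Hypothetical DTM custom-script Data Element: resolve the product name by
// trying several selectors in order, so a single markup change doesn't
// silently break every rule that uses it.
var candidates = [
  '[data-product-name]', // preferred: an explicit attribute the devs own
  'h1.product-title',    // current markup
  '.pdp-header h1'       // older markup, kept as a fallback
];
for (var i = 0; i < candidates.length; i++) {
  var el = document.querySelector(candidates[i]);
  if (el && el.textContent) {
    return el.textContent.trim();
  }
}
return ''; // rule conditions can treat an empty string as "not found"
```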
Note: This isn't really an actual coding question; it's more about [analytics/marketing tag] coding principles/best practices. So I'm not entirely sure this question belongs on SO (maybe one of the other Stack Exchange sites, perhaps superuser.com?). But I'll answer here anyway.
TL;DR - You need to get site devs involved and have them take on some level of initial and ongoing ownership of it.
Tag managers sell themselves on being able to deploy tags without getting site devs involved, and many times this works out in the short term. But in my experience, this kind of passive deployment just doesn't work out in the long term, especially for websites that have active and regular changes over time.
In my experience, the only way to effectively prevent site devs from inadvertently breaking the tracking is to include them in the deployment and have them take ownership of it on some level, so that it is something they are aware of within their own system/workflow.
Sometimes it is as easy as having designated classes or attributes added to HTML tags on the page. For example, you can write a spec for the site devs to add data-analytics='true' to any header, footer, or CTA link on a given page, and tell them this is something they need to keep as part of their workflow whenever they make changes to the site.
For more complicated things, you could spec for them to broadcast a custom event for you to listen for. For example, maybe you have a purchase confirmation page, and right now you have code in DTM that triggers based on the URL, or that scrapes the page for details about the purchase to push to tags. Instead, write a spec instructing the devs to put those details in a data layer object and dispatch a custom event, and then create an event-based rule for it - something like the sketch below.
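The dev-facing part of such a spec could be as small as this (the object shape and event name are illustrative, not an existing standard):

```javascript
// Hypothetical spec handed to the site devs for the purchase confirmation
// page: they own this snippet, and the tag manager only listens for the event.
window.digitalData = window.digitalData || {};
window.digitalData.transaction = {
  id: 'ORD-12345',   // example values, populated server-side by the devs
  revenue: 149.99,
  items: [{ sku: 'SKU-1', qty: 2, price: 74.99 }]
};

// DTM (and most tag managers) can attach an event-based rule to a custom
// DOM event like this one.
document.dispatchEvent(new CustomEvent('analytics:purchase', {
  detail: window.digitalData.transaction
}));
```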
The overall theme here is to create a spec document for all the things you want to be able to track on the website that you know you can't reliably track passively without it breaking sooner or later, hand the document to the devs, and tell them they need to make it part of their workflow when making changes to the site. Bonus points if you can get them to loop you in whenever changes are about to be pushed to production, so that you can go to the dev/QA version of the site and check that your tracking still looks good.
The broader theme is that in order to prevent site devs from breaking your tracking, you need to be more proactive about making them, and keeping them, aware of it - and in practice, this usually means putting some of the code work on their plate to own, so it's in their history and in front of their face. It is a lot easier for a dev to take notice of a data-analytics='true' on the header nav links they are about to restructure than to know that some piece of code in DTM relies on the current structure - something that isn't directly in front of them in their own code editor/environment.
Yes, actually accomplishing the above is often easier said than done. But it is the reality of the situation. Passively tracking things in a tag manager rarely works out for longer term stuff, short of "every page" tags that have little or no customization requirements at all.
I can tell you from over 10 years of experience in the digital marketing and analytics industry, specifically in implementation: I have seen this time and time again - too many times to count. Clients often want to take, and actually do take, the easier route of leaving the site devs out of the loop, with all tracking requirements handled solely through whatever the tag manager is capable of doing.
I've seen setups with hundreds of rules whose trigger conditions are based on scraping the page for some id or class, or on some complex CSS selector that depends on 5 levels of HTML structure not changing, or on some random cookie you just assume means what you think it means. You end up spending more and more of your time playing whack-a-mole, re-adjusting and fixing individual rules and selectors as the next random change happens - and then one day comes a full site redesign, and it's a nuclear bomb on all your tracking efforts.
And time and time again, without fail, they eventually wind up asking exactly what you've asked here, kicking themselves for the time and money already spent on that "quick win", because nobody is confident in the data and they're wondering why they're allocating money and resources to tracking if it's just a bunch of broken, pothole-ridden trash data. And the solution has always been site dev awareness.
If it helps: one card I sometimes play when I have a hard time convincing the dev team or other powers-that-be to get on board is to point out that one of the biggest reasons for tracking a website is to help the company determine whether or not it's worth investing money in said website. If they can't determine that, they may not be so inclined to invest, which means the need to even have a site dev team may also decline. To be more candid: the tracking is something that helps justify their jobs.
We are just about to release a big update to our website. It is a complete rebuild. Our domain is staying the same and the website generally functions the same way (the business hasn't changed and we are still selling the same stuff). But every page has changed and most page URLs have also changed.
My question is, how should we deal with Google Analytics in this situation? Should we stick with the original GA account and simply start feeding information from the new website instead? Or should we make a new account, or just a new property, or view?
I think it makes sense to stick with the original account and property, so we can easily derive meaningful stats about the effect of the website upgrade on performance. However, I'm worried that having a completely different URL structure mixed in with the older structure will make things difficult to dig through.
Am I right that sticking with the original account and property is sensible in this situation, and does anyone have any other general pointers?
Thanks.
I'm working on a pretty big project right now and am trying to implement an MVP architecture. I'm starting to run across instances where I think jQuery or JavaScript might be better suited than server-side code. I'm looking for feedback on how others are implementing client-side programming in their enterprise applications. How are you structuring the client-side code, and how do you determine when to use it?
Things that can make the user say "wow". For example: populating search results when the user has typed just 3-4 characters of the search term. Think back to Yahoo or Hotmail, which used to post back to the server when you clicked "Create Message"; when Google came along, they did it on the client side without going to the server. I bet you said "wow" to that. At least I did.
Things that can reduce server load. For example: adding an extra data-entry row to an HTML table on the client instead of doing a round trip, increasing/decreasing a quantity, etc.
These are just a few examples to cite. Even to do these things properly you may need to go to the server, but that happens behind the scenes using AJAX (see the sketch below). Beyond that, you'll want to pick a few jQuery plugins to use in your project - jQuery UI, jQuery Validation, jQuery AnythingSlider, to name some. There are plenty of them.
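As a rough sketch of the search-as-you-type idea (the endpoint URL and the #search/#results markup are made up):

```javascript
// Debounced "search after 3 characters" with jQuery and a background AJAX call.
var debounceTimer;
$('#search').on('input', function () {
  var term = $(this).val();
  clearTimeout(debounceTimer);
  if (term.length < 3) { return; } // don't hit the server too early

  // Wait until the user pauses typing before calling the server.
  debounceTimer = setTimeout(function () {
    $.ajax({
      url: '/api/search', // hypothetical endpoint
      data: { q: term },
      dataType: 'json'
    }).done(function (results) {
      var $list = $('#results').empty();
      $.each(results, function (_, item) {
        $('<li>').text(item.name).appendTo($list);
      });
    });
  }, 250);
});
```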
http://cleartrip.com is one site whose UX I envy. Visit it from a mobile device and you will get further clues about their UX work. Besides just coding, you need a person on your team who can work on these UX aspects.
Regarding how this fits into DDD: I've only recently started my journey into DDD, but one hears a lot about command/query separation in those circles. Certainly, if you are doing something that hits your domain (like fetching data for auto-completion, or allowing partial page submission to accomplish a domain command), you have to decide how it gets there and how the domain is structured to handle it.
I think two decisions are most relevant.
First, bits that live entirely in the browser - and even those specifically in your application layer - are outside your domain, and thus, although covered in the layered-architecture part of the DDD discussion, do not land in the entity/value/event/service etc. discussion. If, however, you are using AJAX to interact with your application layer and in turn need to access your domain, there are again two things to consider in my mind.
(a) Are you separating commands and queries simply by using different methods on your domain? That is fine if you have a relatively small demand for either queries or commands and it will not feel like "noise" in your domain API. Otherwise, you have a separate bounded context - another domain modelled just for the queries your UI needs, to avoid cluttering your core domain. Either way, the flow is something like JS -> AJAX handler in the application layer -> domain (including a domain service).
(b) Is this a command or a query? Once you have (a) figured out, this tells you where the access will land; then use the presentation layer's use case to elaborate the domain concept and put it into your ubiquitous language. A browser-side sketch of the split is below.
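A tiny browser-side sketch of that split, with hypothetical endpoint paths, just to show that queries and commands travel to different application-layer handlers:

```javascript
// Query: read-only, safe to call repeatedly (e.g. while the user types).
function searchCustomers(term) {
  return $.getJSON('/api/queries/customers', { q: term }); // hypothetical endpoint
}

// Command: changes state, so it goes through a different handler and a POST.
function renameCustomer(id, newName) {
  return $.ajax({
    url: '/api/commands/rename-customer', // hypothetical endpoint
    type: 'POST',
    contentType: 'application/json',
    data: JSON.stringify({ id: id, newName: newName })
  });
}
```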
Second, you have the DTO vs. direct-to-domain decision. This can be a religious-war topic, but usually the answer is "it depends". I think there are cases for using DTOs and cases for not doing so (within the same architecture); just search the many discussions on the topic and apply the pattern only where it adds value. I won't try to cover the details here.
Hope this provides some insight, or at least a conversation magnet to which others will add.
I guess this question is a little too subjective. Looks like I'm just going to grab a few books on advanced JavaScript and study up on the jQuery library.
Stack Overflow members,
How do you currently find the balance between JavaScript and code-behind? I have recently come across some extremely bad (in my eyes) legacy code that lends itself to chaos (someHugeJavafile.js), containing a lot of the logic used across many of the pages.
Let's say for example that you have a Form that you need to complete.
1. Personal Details
2. Address Information
3. Little bit more about yourself
You don't want to overload the person with all the fields at once, so you decide to split it up into steps.
Do you create separate pages for Personal Details, Address Information and a little bit more about yourself?
Do you create controls for each and hide and show them on a postback or using some update panel?
Do you use jQuery and do some checking to ensure that the person has completed the required fields for the step and show the new "section" by using .show()?
How do you usually find the balance?
First of all, let's step back on this for a moment:
Is there a CMS behind the site that should be considered when creating this form? Many sites use some system for managing content, and to my mind this shouldn't be forgotten or ignored at first glance.
Is there a reason for having 3 separate parts to the form? I might set up a Wizard control to go through each step, but this presumes that the same outline would work and that the trade-offs in using it are acceptable. If not, user controls would be the next logical size, as I don't think a complete page per step is worth adopting here.
While JavaScript validation is a good idea, there may be browsers with JavaScript disabled that should be considered here. Should they be supported? Should the form warn that it needs JavaScript?
Balance is in the eye of the beholder, and every project is different.
Consider outlining general themes for your project. For example: "We're going to do all form validation client-side." or "We're going to have a 0 refresh policy, meaning all forms will submit via AJAX." etc.
Having themes helps answers questions like the one you posted and keeps future developers looking in the right places for the right code.
When in doubt, try to see your code through the eyes of someone who has never seen it before (or, as is often the case, yourself 2 to 3 years down the road), and ask yourself: "Based on the rest of the code, where would I look for this function?"
Personally, I like option number 3, but that's just because it fits best with the project I'm currently working on, and I have no need to post back or create additional pages. A rough sketch of that approach is below.
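For what it's worth, a minimal sketch of option 3 with jQuery - the class names and markup conventions are assumptions, not a prescription:

```javascript
// Validate the current step's required fields, then reveal the next section.
function showNextStep($currentStep) {
  var valid = true;

  // Hypothetical convention: required inputs carry a "required" class.
  $currentStep.find('.required').each(function () {
    if (!$.trim($(this).val())) {
      $(this).addClass('field-error');
      valid = false;
    }
  });

  if (valid) {
    $currentStep.hide();
    $currentStep.next('.form-step').show();
  }
}

// Usage: each section is a .form-step containing a .next-step button.
$('.form-step .next-step').on('click', function (e) {
  e.preventDefault();
  showNextStep($(this).closest('.form-step'));
});
```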