On the discouragement of getInitialProps - next.js

I'm currently implementing a POC project in Next.js 12 to check whether it is possible for us to integrate it. I'm not able to upgrade to 13 yet, since I cannot support React 18 due to some in-house packages we use, so for the purposes of this discussion let's pretend that Next 13 does not exist yet.
We all know that Next officially discourages the use of getInitialProps and recommends getServerSideProps wherever possible. I'm aware of the downsides of getInitialProps compared to getServerSideProps, the main one being the constant client/server context switching you have to be mindful of.
However, I cannot understand how casually it is discouraged. Apart from Next.js itself, I've seen a lot of blog posts calling it the worst thing ever. It seems to me that people who say this have not faced realistic use cases, and that the opinion mostly comes from toy projects (a notes app, a todo app, a blog, etc.).
Anyway, the purpose of this question is twofold: first, to verify whether it is at all possible in my case to avoid getInitialProps, and second, to see if anyone else thinks this discouragement is somewhat unfounded and not based on reality.
The reason I decided to use Next at all was to achieve SSR in React. The entire point of that, as I see it, is to enable SSR while preserving the main benefits of React, such as seamless SPA-like navigation on certain pages. If that were not the case, I would have gone with a traditional server-rendered framework such as Ruby on Rails or Django.
The reason I need to use getInitialProps, and why I believe I cannot possibly avoid it, comes down to two aspects:
Every single page that I have requires certain global data, which I don't want to refetch on every route.
The perfect example of this is the page header. The header of every one of my pages depends on user data and translations (i18n), both of which I fetch from an external API. If I were to use GSSP, then every route and every sub-route of every page would have to re-perform this data fetching, which seems like a huge performance kill. I have no way to properly persist this data across GSSP navigations without resorting to hacks such as sending hidden query parameters that indicate the data was already fetched. If we could assume that the user always enters a page solely through its root URL, this would work, but that assumption is extremely unrealistic.
By using getInitialProps in combination with Redux and next-redux-wrapper, it is very easy to check whether the data was already fetched (even better, with RTK Query you don't even have to check explicitly).
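To make this concrete, here is a rough sketch of that pattern, written against a next-redux-wrapper v7-style `wrapper`; the `globalsSlice` module, `selectGlobals`, and `fetchGlobals` are hypothetical names standing in for whatever your store actually uses:

```js
// pages/some-page.js - sketch only; store setup and slice names are assumptions
import { wrapper } from '../store';                                  // hypothetical next-redux-wrapper instance
import { selectGlobals, fetchGlobals } from '../store/globalsSlice'; // hypothetical slice

function SomePage({ title }) {
  return <h1>{title}</h1>;
}

// Runs on the server for the first request and on the client for subsequent
// SPA navigations, so the check against the store means the external API is
// only called when the globals (user, translations) are not there yet.
SomePage.getInitialProps = wrapper.getInitialPageProps((store) => async () => {
  if (!selectGlobals(store.getState())) {
    await store.dispatch(fetchGlobals());
  }
  return { title: 'Some page' };
});

export default SomePage;
```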
With GSSP, big pages where I want SPA-like behaviour are not feasible.
In my case I have one page with about five sub-routes. On its index route we display a list of entities fetched from the API. The API only allows fetching the entities as a whole list; you cannot fetch them individually. Clicking an entity takes you to a sub-page showing its specific info.
The only natural way to do this is to fetch the entire list on the first visit and reuse it throughout the page as we navigate. Re-fetching the whole list on every navigation is, again, a performance kill. The only way I was able to implement this while preserving seamless SPA-like navigation was with getInitialProps.
What's interesting about this use case is that the hidden-query-param hack would not even do the trick here: although I can force GSSP to be aware that the data was already fetched, I cannot access that data, so I cannot do any server-side route validation. If a user lands on the index page, where all the entities are fetched, and then navigates to an entity page such as page/123 where no entity with id 123 exists, I cannot validate that and handle it properly in GSSP without re-fetching the entire list, which is, once more, a performance kill.
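For the entity example, the same cached list can also drive server-side route validation; again, `entitiesSlice`, `selectEntities`, and `fetchEntities` are hypothetical, and this is only a sketch of the idea:

```js
// pages/page/[id].js - validate a sub-route against the already-fetched list
import Error from 'next/error';
import { wrapper } from '../../store';                                     // hypothetical
import { selectEntities, fetchEntities } from '../../store/entitiesSlice'; // hypothetical

function EntityPage({ notFound, entity }) {
  if (notFound) return <Error statusCode={404} />;
  return <h1>{entity.name}</h1>;
}

EntityPage.getInitialProps = wrapper.getInitialPageProps((store) => async ({ query, res }) => {
  // Reuse the list fetched on the index page during client-side navigation;
  // only refetch when the user deep-links straight into /page/123.
  let entities = selectEntities(store.getState());
  if (!entities) {
    await store.dispatch(fetchEntities());
    entities = selectEntities(store.getState());
  }

  const entity = entities.find((e) => String(e.id) === query.id);
  if (!entity && res) res.statusCode = 404; // on the server, send a real 404
  return entity ? { entity } : { notFound: true };
});

export default EntityPage;
```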
So, in conclusion, I'd like to hear opinions on the discouragement of getInitialProps. To me it seems borderline impossible to migrate completely to getServerSideProps if your app is at all realistic and uses translations, global data, and so on.
Thanks in advance.

Related

Enhanced e-commerce using GTM but no in-page dataLayer

I am working on a project where I do not have access to the page source, nor can I ask the client's IT team to create an in-page/onload dataLayer, e.g. declaring the dataLayer before the GTM container tag. Is implementing enhanced e-commerce tracking remotely possible via dataLayer.push, i.e. building the dataLayer as you go? I have very little knowledge of dataLayer.push (which I am starting to read up on). My questions are:
Is dataLayer.push the correct way to move forward? Has anyone done this before?
What issues might I face, e.g. with "add to cart", "remove from cart", or category page events? Are there any working examples? I still don't have a clear view of the workflow here.
What are the downsides of doing it this way, besides the risk that style/CSS-based triggers might break during a future site redesign?
Thanks.
Your main problem is that you need to get the data from somewhere. Usually, without a dataLayer, this means DOM scraping and then assembling the scraped data into a dataLayer push in a custom JavaScript function (a sketch follows the list below). The drawbacks are the same as with your style/CSS-based triggers:
The implementation is strongly coupled to the page layout
Data might be missing or require cleaning (if mixed with HTML or unrelated text)
It is potentially expensive in the client's CPU time
Custom JavaScript introduces new points of failure (can you test rigorously enough to guarantee that there are no side effects?)
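As a minimal sketch of what that custom function can look like (for example inside a GTM Custom HTML tag fired on list pages): all selectors, data attributes, and the exact ecommerce payload shape below are assumptions that have to be matched to the client's real markup and GA setup.

```js
// Scrape product rows from the rendered page and assemble a dataLayer push.
(function () {
  var items = [];
  document.querySelectorAll('.product-row').forEach(function (row) {   // hypothetical selector
    var priceEl = row.querySelector('.product-price');
    items.push({
      id: row.getAttribute('data-sku'),                                // hypothetical attribute
      name: (row.querySelector('.product-name') || { textContent: '' }).textContent.trim(),
      price: priceEl ? parseFloat(priceEl.textContent.replace(/[^0-9.]/g, '')) || 0 : 0
    });
  });

  window.dataLayer = window.dataLayer || [];
  window.dataLayer.push({
    event: 'productImpressions',          // custom event name to fire the analytics tag on
    ecommerce: { impressions: items }     // loosely follows the enhanced e-commerce impressions shape
  });
})();
```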
If you build workarounds for your client's IT team's shortcomings, remember that you own them: you will be responsible if the workarounds break, have side effects, or need to be amended to account for new features. Make sure that you are very well compensated for that risk (and personally, with the experience I have now, I would not do this at all).
I think it's important to acknowledge here that the Data Layer is just one way of storing this information, and is used solely because it's possible to push data to it from the page.
If you can't write the code into the site itself, don't push information to the data layer. Just keep that information for yourself, in GTM's variables. You'll save yourself a huge headache, and a bit of computation too.
DOM scraping is a perfectly reasonable way to get hold of information, but you will run into some barriers.
You're going to have to write a lot of JavaScript to get the data you need.
Some buttons may turn out to be composed of several elements that you'll have to cover with your triggers etc.
Any changes to the site will potentially ruin your code.
Not everything is available to you, especially verification of data (checking if a purchase went through is probably no longer possible before reporting the transaction).
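If you do keep the data in GTM instead of the data layer, a Custom JavaScript variable is the usual vehicle. The following is only a sketch with a hypothetical selector; it simply returns undefined when the element is not on the page:

```js
// GTM Custom JavaScript variable: read a value off the DOM on demand.
function () {
  var el = document.querySelector('.order-total');   // hypothetical selector
  if (!el) return undefined;
  var total = parseFloat(el.textContent.replace(/[^0-9.]/g, ''));
  return isNaN(total) ? undefined : total;
}
```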

Why should I use Drupal's Form API & Ajax framework for this rather than just implementing my own solution that calls node_save()?

I want users to be able to submit nodes and comments via AJAX. I also want to do some fairly extensive customization of the node and comment forms.
I've spent time looking through documentation and code examples for Drupal 7's Form API and Ajax framework, but I find it very complex. Therefore, I simply want to create my own form in HTML and use my own JavaScript code to submit it via Ajax. I'll also set up a specific URL for processing these Ajax requests, which will ultimately call node_save() or comment_save() when appropriate.
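For context, the hand-rolled approach I'm describing boils down to something like the following sketch, where the /my-module/ajax/node callback and the response shape are entirely hypothetical:

```js
// Custom HTML form submitted via Ajax to a custom menu callback that would
// eventually call node_save() on the server. Not Drupal's Form API or Ajax framework.
jQuery('#my-node-form').submit(function (e) {
  e.preventDefault();
  jQuery.post('/my-module/ajax/node', jQuery(this).serialize(), function (result) {
    if (result.status === 'ok') {
      jQuery('#messages').text('Saved node ' + result.nid);
    } else {
      jQuery('#messages').text('Error: ' + result.message);
    }
  }, 'json');
});
```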
What are the downsides to doing it this way as opposed to going through the Form API and Ajax framework? I'm not creating modules for contribution to the community. Everything is just for my own site.
Technically, yes, you can do this with no ill effects. There have been times when I have had to import data from feeds and have called node_save() manually.
What you are missing out on is the flexibility that Drupal offers. For example, want to add a new checkbox to the form to indicate a featured item? Now you have to manually update the form to add the field and update the submit handler to save the data. Had you used Drupal's system, it would have been auto-populated for you.
Further on flexibility: say, for example, you decide you want to add a CAPTCHA to your form. All you would have to do is enable the CAPTCHA module and specify which form you want it on, and it would be done for you. There are a bunch of third-party modules that let you do things like this.
Drupal's form system also lets you add more complicated items such as date selectors, or even managed file uploads, which can save you a lot of time once you are familiar with the API.
If you're looking just to get the project done and not spend time learning something new, sure, you can do it all manually. I can promise you that any Drupal developer who looks at your code in the future will have a very low opinion of your work. Depending on your situation this may or may not be important. But really, the biggest things you are missing out on are ease of maintenance and flexibility.
So just to recap:
pros:
it will work
quick and easy
comfortable
cons:
loss of flexibility
harder to maintain
inability to take advantage of Drupal's form widgets / helpers
inability to take advantage of 3rd party modules
shame from other developers
Lack of sleep from dirty feeling
eternal damnation
The usual argument would be about portability, but if you're not going to be porting these modules to another Drupal site then that argument falls down.
The same can be said for offering other modules the chance to alter your form based on some global/inherited setting, but again if you really don't want/need this functionality then it can't really be used as an argument against.
The one thing you will lose out on is the built-in Cross-Site Request Forgery protection. As long as you're rolling your own version of that, though, you should be OK.
If you plan to use Drupal a lot I'd recommend getting used to the FAPI though...after a while it actually becomes a lot easier to use the FAPI than write out custom HTML.

ASP.NET Client vs Server View Rendering

Using ASP.NET MVC: I am caught in the middle between rendering my views on the server vs. on the client (say, with jQuery templates). I do not like the idea of mixing the two, which I hear some people advocate. For example, some say they render the initial page (say, a list of comments) server-side, and then use client-side templating when a new comment is added. The requirement to have the same rendering logic in two different areas of your code makes me wonder how people convince themselves it is worth it.
What are the reasons you use to decide which to use where?
How does your argument change when using ASP.NET Web Forms?
One reason people do that is that they want their sites to be indexed by search engines but also want the best user experience, so they write client-side code for the latter. You have to decide what makes sense given your constraints and goals. Unfortunately, what makes the most business sense won't always seem to make the most sense from a technical perspective.
One advantage of server-side rendering is that your clients don't have to run JavaScript for your pages to be functional. If you're relying on jQuery templates, you pretty much have to assume that your page won't have any content when rendered without JavaScript. For some people this is important.
As you say, I would prefer not to use the same rendering logic twice, since you run the risk of letting it get out of sync.
I generally prefer to just leverage partial views to generate most content server-side. Pages with straight HTML tend to render a bit faster than pages that have to be "built" after they've loaded, making the initial load a little speedier.
We've developed an event-based AJAX architecture for our application which allows us to generate a piece of content in response to the result of an action, and essentially send back any number of commands to the client-side code to say "Use the results of this rendered partial view to replace the element with ID 'X'", or "Open a new modal popup dialog with this as the content." This is beneficial because the server-side code can have a lot more control over the results of an AJAX request, without having to write client-side code to handle every contingency for every action.
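On the client, handling such a response only needs a small command dispatcher. The response shape below ({ commands: [{ type, targetId, html, title }] }) is purely illustrative rather than part of any framework, and the modal case assumes jQuery UI is loaded:

```js
// Post the form, then let the server-supplied commands drive the UI.
$.post('/comments/add', $('#comment-form').serialize(), function (response) {
  (response.commands || []).forEach(function (cmd) {
    switch (cmd.type) {
      case 'replace':     // swap a rendered partial view into the page
        $('#' + cmd.targetId).replaceWith(cmd.html);
        break;
      case 'openDialog':  // show server-rendered content in a modal (jQuery UI)
        $('<div>').html(cmd.html).dialog({ modal: true, title: cmd.title });
        break;
      default:
        console.warn('Unknown command', cmd);
    }
  });
}, 'json');
```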
On the other hand, putting the server in control like this means that the request has to return before the client-side knows what to do. If your view rendering was largely client-based, you could make something "happen" in the UI (like inserting the new comment where it goes) immediately, thereby improving the "perceived speed" of your site. Also, the internet connection is generally the real speed bottleneck of most websites, so just having less data (JSON) to send over the wire can often make things more speedy. So for elements that I want to respond very smoothly to user interaction, I often use client-side rendering.
In the past, search-engine optimization was a big issue here as well, as Jarrett Widman says. But my understanding is that most modern search engines are smart enough to evaluate the initial JavaScript for pages they visit and figure out what the page would actually look like after it loads. Google even recommends the use of the "hashbang" (#!) in your URLs to help them know how to index pages that are dynamically loaded by AJAX.

Where Does JQuery/Client-Side Programming Fit Into MVP and DDD

I'm working on a pretty big project right now and am trying to implement an MVP architecture. I'm starting to run across instances where I think jQuery or JavaScript might be better suited than server-side code. I'm looking for feedback on how others are implementing client-side programming in their enterprise applications. How are you structuring the client-side code, and how do you determine when to use it?
Things that can make the user say "wow". For example: populating search results while the user has typed only 3-4 characters of the search term. Go back in time and think about Yahoo or Hotmail, which used to post back to the server when you clicked "Create Message". When Google came along, they did it on the client side without going to the server. I bet you said "wow" to that. At least I did.
Things that can reduce server load. For example: adding an extra data-entry row to an HTML table without a round trip, increasing or decreasing a quantity, etc.
These are just a few examples to cite. Even to do these things properly you need to go to the server, but that happens behind the scenes using Ajax. Beyond this, you need to pick the jQuery plugins you will use in your project, such as jQuery UI, jQuery Validation, jQuery AnythingSlider, etc. There are plenty of them.
http://ClearTrip.com is one site whose UX I envy. Visit it from a mobile device and you will get further clues about their UX work. Besides just coding, you need a person on your team who can work on these UX aspects.
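To make the first example above concrete, here is a minimal jQuery sketch of search-as-you-type, assuming a hypothetical /api/search endpoint that returns a JSON array of { title } objects:

```js
// Debounced lookup: wait for at least 3 characters, then query the server via Ajax.
var searchTimer;
$('#search-box').on('keyup', function () {
  var term = $(this).val();
  clearTimeout(searchTimer);
  if (term.length < 3) return;

  searchTimer = setTimeout(function () {
    $.getJSON('/api/search', { q: term }, function (results) {
      var $list = $('#search-results').empty();
      $.each(results, function (_, item) {
        $('<li>').text(item.title).appendTo($list);
      });
    });
  }, 250); // debounce so we do not hit the server on every keystroke
});
```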
Regarding how this fits into DDD: I've just recently started my journey into DDD but one hears a lot about command/query separation in that circle. Certainly if you are doing something that hits your domain (like fetching for auto-completion or certainly if you allow partial page submission to accomplish a domain command) you have to decide how it gets there and how the domain is structured to handle it.
I think two decisions are most relevant.
First, bits that live entirely in the browser, and even those specifically in your application layer, are outside your domain and thus, though covered in the layered-architecture part of the DDD discussion, do not land in the entity/value/event/service, etc. discussion. If, however, you are using AJAX to interact with your application layer and in turn need to access your domain, you again need to consider two things, in my mind.
(a) Are you separating commands and queries simply using different methods on your domain? Fine if you have a relatively small demand for either queries or commands and this will not seem like "noise" in your domain API. Otherwise, you have a separate bounded context...another domain modeled just for queries that your UI needs to avoid clutter on your domain. Regardless, you are doing something like JS->AJAX handler in application layer->domain (including a domain service).
(b) Is this a command or a query? Once you have (a) figured out, this lets you know where the access will land...then use the presentation layer's use case to elaborate the domain concept and put it into your ubiquitous language.
Second, you have the DTO vs. direct-to-domain decision. This can turn into a religious war, but usually the answer is "it depends." I think there are cases for using DTOs and cases for not (within the same architecture); just search for the discussions around the topic and apply the pattern only where it adds value. I won't try to cover the details here.
Hope this provides some insight or at least conversation magnet to which others will add.
I guess this question is a little too subjective. Looks like I'm just going to grab a few books on advanced JavaScript and study up on the jQuery library.

Integrating AspDotNetStorefront and Sitecore

Has anyone ever tried to integrate AspDotNetStorefront and Sitecore? I've been trying for the past couple of days to come up with a way to get the two systems to play nicely together, but it doesn't seem feasible from what I can tell. A couple of issues I've run across so far:
Authentication between the two (AspDotNetStorefront has its own implementation, Sitecore just uses/extends .NET Membership)
The main DLL for AspDotNetStorefront is what pops up in the stack trace when I get yellow-screened, but that DLL is obfuscated so I can't figure out what the problem is.
The biggest issue is that we need to keep our existing AspDotNetStorefront application as an e-commerce backend and use Sitecore to do everything else. AspDotNetStorefront has a CMS as part of it, but it's really not an acceptable solution for anything but really basic content pages.
Any thoughts on how I might go about this?
EDIT:
I've decided to break this whole thing down into the different problems that I am facing at the moment and solve each one as efficiently as I know how. I'll detail the ones I have here and then update when I run into new ones.
Problem 1: Authentication between the two systems.
This one isn't too bad actually if you're knowledgeable about forms authentication tickets, which I wasn't at the time but am learning quickly enough. As long as the two systems share the same encryption info, it's easy enough to pass information back and forth between them using cookies, as stated below in the accepted answer. The other kicker is that I needed to set the CustomerGUID in the AspDotNetStorefront Customer table to be the user ID from the Sitecore user tables (standard ASP.NET membership). So far this approach seems to work pretty well (I'm still in the proof-of-concept stage at the moment).
Another thing to keep in mind if you ever need to attempt this is that AspDotNetStorefront comes with a web service that you can use to do basically anything you need. Since the two sites use the same encryption keys, I am able to log in on the storefront side using this service more securely than just passing over clear-text passwords (I had to write the method myself; I don't believe it comes standard, so if I am mistaken please let me know). Although I doubt it's a huge deal, since it all happens server-side anyway.
Problem 2: Getting at the product data
This one was a little more troublesome. The aforementioned web service has a few issues I've had difficulty working around. However, since the databases are going to be on the same server, and all I really need is the price and ID, I decided to set the ProductGUID column of each product in the Storefront database to match the Sitecore item ID of the corresponding item in the Sitecore database. This way I just need a quick query to grab the ProductID and price information, which is only used in a few places. Everything else is going to be housed in Sitecore.
If anyone has anything to add feel free, as far as I can tell from Google, no one has actually done this before, so I'm having a lot of trouble finding resources on this particular topic.
UPDATE:
The integration is in fact possible, and our site has been up for a week and a half now with very few integration-related problems. This isn't something I would personally recommend doing, but it is in fact possible to pull off.
I know ASPDotNetStorefront and other CMS systems (but not Sitecore). If I were approaching this, I would probably start simple and create a custom URL structure for Sitecore 'content' pages that ASPDNSF would direct to Sitecore to handle [possibly replacing the existing topics system in ASPDNSF]. So, for example, a URL such as www.domain.com/p-1234-aproductpage.aspx would be handled by ASPDNSF, whereas www.domain.com/content/123/a-content-page would get sent to Sitecore to render. This is a straightforward web.config edit.
Sharing security across the systems should be possible on the same domain, as the cookie information will be available (you should be able to write some code in Sitecore using ASPDNSFCommon.dll and cast HttpContext.Current.User to an AspDotNetStorefrontPrincipal to detect whether a customer is logged in).
Another way to approach the problem would be to write a function that retrieved Sitecore content from the database based on a URL id and then write an ASPDNSF XML template to use the function to retrieve this content based on the URL. For example, you could create a custom URL structure in ASPDNSF such as www.domain.com/sc-1234-sitecore-content-item.aspx which is sent to your custom code; 1234 is used as the sitecore content id and the XML template retrieves the content and renders it on screen.
This second approach has the advantage of using Sitecore for all non-product content management while keeping the live application in ASPDNSF. It also means one set of design templates, and the security-sharing issues go away.
