Has anyone ever tried to integrate AspDotNetStorefront and Sitecore? I've been trying for the past couple of days to come up with a way to get the two systems to play nicely together, but it doesn't seem feasible from what I can tell. A couple issues I've run across so far:
Authentication between the two (AspDotNetStorefront has its own implementation, Sitecore just uses/extends .NET Membership)
The main DLL for AspDotNetStorefront is what pops up in the stack trace when I get yellow-screened, but that DLL is obfuscated so I can't figure out what the problem is.
The biggest issue is that we need to keep our existing AspDotNetStorefront application as an e-commerce backend and use Sitecore to do everything else. AspDotNetStorefront includes a CMS of sorts, but it's not an acceptable solution for anything beyond basic content pages.
Any thoughts on how I might go about this?
EDIT:
I've decided to break this whole thing down into the different problems that I am facing at the moment and solve each one as efficiently as I know how. I'll detail the ones I have here and then update when I run into new ones.
Problem 1: Authentication between the two systems.
This one isn't too bad, actually, if you're knowledgeable about forms authentication tickets, which I wasn't at the time but am learning quickly enough. As long as the two systems share the same encryption info, it's easy enough to pass information back and forth between them using cookies, as stated below in the accepted answer. The other kicker is that I needed to set the CustomerGUID in the AspDotNetStorefront Customer table to be the user ID from the Sitecore user tables (standard ASP.NET Membership). So far this approach seems to work pretty well (I'm still in the proof-of-concept stage at the moment).
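For anyone attempting the same thing, here is a minimal sketch of what "sharing the same encryption info" means in practice: both applications need identical machineKey entries in web.config so that a forms-authentication cookie issued by one side can be decrypted by the other. The key values and cookie name below are placeholders, not real values:

<!-- In web.config on BOTH applications; the keys shown are placeholders -->
<system.web>
  <machineKey validationKey="PLACEHOLDER_VALIDATION_KEY"
              decryptionKey="PLACEHOLDER_DECRYPTION_KEY"
              validation="SHA1" decryption="AES" />
  <authentication mode="Forms">
    <forms name=".SHAREDAUTH" path="/" />
  </authentication>
</system.web>

Either side can then read the shared ticket with the standard .NET API:

// Decrypt the shared forms-auth cookie on either side (standard System.Web.Security API).
HttpCookie authCookie = HttpContext.Current.Request.Cookies[FormsAuthentication.FormsCookieName];
if (authCookie != null)
{
    FormsAuthenticationTicket ticket = FormsAuthentication.Decrypt(authCookie.Value);
    string userName = ticket.Name; // ties back to the CustomerGUID / membership user ID mapping
}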
Another thing to keep in mind if you ever need to attempt this is that AspDotNetStorefront comes with a web service that you can use to do basically anything you need. Since the two systems use the same encryption keys, I am able to log in on the storefront side through this service more securely than just passing clear-text passwords around (I had to write that method myself; I don't believe it comes standard, but if I'm mistaken please let me know). Although I doubt it's a huge deal since it all happens server-side anyway.
Problem 2: Getting at the product data
This one was a little more troublesome. The aforementioned web service has a few issues I've had difficulty working around. However, since the databases are going to be on the same server, and since all I really need is the price and ID, I decided to set the ProductGUID column of each product in the storefront database to match the Sitecore item ID of the corresponding item in the Sitecore database. This way I just need a quick query to grab the ProductID and price information, which is only used in a few places. Everything else is going to be housed in Sitecore.
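For illustration, the quick query looks roughly like this; note that the table and column names (dbo.Product, dbo.ProductVariant) and the surrounding variables are assumptions based on a typical ASPDNSF schema, not necessarily what you'll find in yours:

using System.Data.SqlClient;

// Look up the storefront ProductID and price for a Sitecore item, using the
// Sitecore item ID that was copied into ProductGUID. Schema names are assumed;
// storefrontConnectionString and sitecoreItemId are placeholders.
using (var conn = new SqlConnection(storefrontConnectionString))
using (var cmd = new SqlCommand(
    @"SELECT p.ProductID, pv.Price
        FROM dbo.Product p
        JOIN dbo.ProductVariant pv ON pv.ProductID = p.ProductID
       WHERE p.ProductGUID = @SitecoreItemID", conn))
{
    cmd.Parameters.AddWithValue("@SitecoreItemID", sitecoreItemId);
    conn.Open();
    using (SqlDataReader reader = cmd.ExecuteReader())
    {
        if (reader.Read())
        {
            int productId = reader.GetInt32(0);
            decimal price = reader.GetDecimal(1);
        }
    }
}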
If anyone has anything to add, feel free. As far as I can tell from Google, no one has actually done this before, so I'm having a lot of trouble finding resources on this particular topic.
UPDATE:
The integration is in fact possible, and our site has been up for a week and a half now with very few integration-related problems. This isn't something I'd personally recommend doing, but it is in fact possible to pull off.
I know ASPDotNetStorefront and other CMS systems (but not Sitecore). If I were approaching this, I would probably start simple and create a custom URL structure for Sitecore 'content' pages that ASPDNSF would direct to Sitecore to handle [possibly replacing the existing topics system in ASPDNSF]. So, for example, a URL such as www.domain.com/p-1234-aproductpage.aspx would be handled by ASPDNSF, whereas www.domain.com/content/123/a-content-page would get sent to Sitecore to render. This is a straightforward web.config edit.
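As a rough sketch of that split (the module name and target page below are illustrative placeholders, not ASPDNSF or Sitecore APIs), you can register a small IHttpModule in web.config that leaves the storefront's own URLs alone and rewrites /content/* paths to whatever page hosts the Sitecore renderer:

using System;
using System.Web;

// Sketch: rewrite /content/* requests to a Sitecore-backed page; everything
// else falls through to ASPDNSF as normal. The target page is a placeholder.
public class ContentRoutingModule : IHttpModule
{
    public void Init(HttpApplication app)
    {
        app.BeginRequest += (sender, e) =>
        {
            string path = app.Request.Path;
            if (path.StartsWith("/content/", StringComparison.OrdinalIgnoreCase))
            {
                app.Context.RewritePath("/SitecoreContent.aspx",
                    null, app.Request.QueryString.ToString());
            }
        };
    }

    public void Dispose() { }
}

The module is then registered under <httpModules> (or <modules> in the integrated pipeline) in web.config.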
Security sharing across the systems should be possible on the same domain, as the cookie information will be available. (You should be able to write some code in Sitecore using ASPDNSFCommon.dll and a cast of HttpContext.Current.User to an AspDotNetStorefrontPrincipal to detect whether a customer is logged in.)
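Something like this (a sketch; apart from the cast itself, which comes from the obfuscated ASPDNSF assemblies, the members used are standard IPrincipal):

// Detect an ASPDNSF customer login from the Sitecore side (sketch).
var principal = HttpContext.Current.User as AspDotNetStorefrontPrincipal;
bool customerLoggedIn = principal != null && principal.Identity.IsAuthenticated;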
Another way to approach the problem would be to write a function that retrieves Sitecore content from the database based on a URL ID, and then write an ASPDNSF XML template that uses the function to retrieve this content based on the URL. For example, you could create a custom URL structure in ASPDNSF such as www.domain.com/sc-1234-sitecore-content-item.aspx which is sent to your custom code; 1234 is used as the Sitecore content ID and the XML template retrieves the content and renders it on screen.
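The ID extraction for that URL scheme is just a small parsing step, e.g. (sketch; where this lives is up to you):

using System.Text.RegularExpressions;
using System.Web;

// Pull "1234" out of a URL like /sc-1234-sitecore-content-item.aspx.
Match m = Regex.Match(HttpContext.Current.Request.Path,
    @"^/sc-(\d+)-", RegexOptions.IgnoreCase);
if (m.Success)
{
    int sitecoreContentId = int.Parse(m.Groups[1].Value);
    // Hand sitecoreContentId to the content-retrieval function / XML template.
}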
This second approach has the advantage of using Sitecore for all non-product content management while keeping the live application in ASPDNSF. You also end up with one set of design templates, and all your security issues go away.
I'm currently implementing a POC project in NextJs 12 to check whether it is possible for us to integrate it. I'm not able to update to 13 yet since I cannot support React 18 due to some in-house packages that we use, so for the purposes of this discussion let's pretend that Next 13 does not exist yet.
We all know that Next officially discourages the use of getInitialProps and recommends using getServerSideProps wherever possible. I'm aware of all the downsides of getInitialProps as opposed to getServerSideProps, main one being the constant client/server context switch which you have to be mindful of.
However, I cannot understand how readily it is discouraged. Apart from NextJs itself, I've seen a lot of blog posts calling it the worst thing ever and so on. It seems to me that people who say such things haven't faced realistic use cases, and that the opinion mostly comes from toy projects (a notes app, a todo app, a blog app, etc.).
Anyway the purpose of this question is twofold. One, to verify if it is at all possible in my case to avoid getInitialProps, and two, to see if anyone else thinks that this discouragement is somewhat unfounded and not based on reality.
The reason I've decided to use Next at all was to achieve SSR in React. The entire point of that, at least as I see it, is to enable SSR while still preserving the main benefits of React, such as seamless SPA-like navigation on certain pages. If that were not the case I would have gone for a traditional SSR framework such as Ruby on Rails, Django, etc.
The reason why I need to use getInitialProps, and why I believe I cannot possibly avoid it, is based on two aspects:
Every single page that I have requires certain global data, which I don't want to refetch on every route.
The perfect example of this is the page header. The header of every one of my pages depends on user data and translations (i18n), both of which I fetch from an external API. If I were to use GSSP (getServerSideProps), then every route and every sub-route of every page would have to re-perform this data fetching, which seems like a huge performance kill. I have no way to properly persist this data through GSSP navigation without resorting to hacks such as sending hidden query parameters which check whether the data was already fetched. If we were to assume that the user always enters a page solely through its root URL, then this would work, but that assumption is extremely unrealistic.
By using getInitialProps in combination with redux and next-redux-wrapper, it is very easy to check whether data was already fetched (even better, with RTK Query you don't even have to check explicitly).
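A minimal sketch of that check (the wrapper setup, selectUser and fetchUser are hypothetical names; the API shown is next-redux-wrapper v7 style):

import type { NextPage } from 'next';
import { wrapper } from '../store'; // hypothetical next-redux-wrapper setup
import { selectUser, fetchUser } from '../store/userSlice'; // hypothetical slice

const MyPage: NextPage = () => null; // page body omitted for brevity

MyPage.getInitialProps = wrapper.getInitialPageProps((store) => async () => {
  // On client-side navigation the redux store survives, so we skip the
  // refetch entirely when the data is already present.
  if (!selectUser(store.getState())) {
    await store.dispatch(fetchUser());
  }
  return {};
});

export default MyPage;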
Big pages where I want SPA-like behaviour are not possible with GSSP.
In my case I have one page which has about 5 sub-routes. On its homepage, we display a list of certain entities, which is fetched from the API. The API is such that entities can only be fetched as a list, and you cannot fetch entities individually. Then, for each entity, when you click on it, you can go to a sub-page where you see its specific info.
The only natural way to do this is to have the entire data fetched on the first page visit and then reused throughout the page as we navigate. Re-fetching the whole data on every page navigation is also a performance kill. The only way I was able to implement this and preserve a seamless SPA-like navigation was with getInitialProps.
What's interesting about this use case is that hacks with sending hidden query params would actually not do the trick: even though I can force GSSP to be aware that the data was already fetched, I cannot access that data, and therefore I cannot do any server-side route validation. What I mean is that if a user were to land on the home page, where all the entities are fetched, and then somehow visit an entity page, like page/123, where an entity with ID 123 does not exist, I cannot validate that and handle it properly in GSSP without re-fetching the entire list of entities, which is, once more, a performance kill.
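For completeness, the sub-route validation described above in the same sketch style (selectEntities and fetchEntities are again hypothetical names):

import type { NextPage } from 'next';
import { wrapper } from '../../store'; // hypothetical, as above
import { selectEntities, fetchEntities } from '../../store/entitySlice'; // hypothetical

const EntityPage: NextPage = () => null; // page body omitted for brevity

EntityPage.getInitialProps = wrapper.getInitialPageProps(
  (store) => async ({ query, res }) => {
    // Fetch the list only if a previous page visit hasn't already done so.
    let entities = selectEntities(store.getState());
    if (entities.length === 0) {
      await store.dispatch(fetchEntities());
      entities = selectEntities(store.getState());
    }

    // Server-side route validation: /page/123 must refer to a real entity.
    const entity = entities.find((e) => String(e.id) === query.id);
    if (!entity && res) {
      res.statusCode = 404;
    }
    return { entityId: query.id };
  }
);

export default EntityPage;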
So, in conclusion, I'd like to hear opinions on the discouragement of getInitialProps. To me it seems borderline impossible to completely migrate to getServerSideProps if your app is at all realistic and uses translations, global data, etc.
Thanks in advance.
Before I get to the question, let me say that I saw a similar question here with a fairly detailed response:
https://wordpress.stackexchange.com/questions/115211/loading-wordpress-stuff-on-laravel-site
And it was the closest thing I have found on the web to what I'm looking for, but the potential solution looked like it might end up being so laborious as to not be worth the time. Here is my situation:
I develop and maintain a small custom SaaS program that typically functions on a subdomain of a client site (say, software.client.com). The latest version of the code was rewritten using Laravel and there were a lot of gains associated with that. In the past, when the program was basically procedural spaghetti, if we had a client with a Wordpress site on their primary domain, we ran some atrocious (by best-practices standards) hack-around code to pull the Wordpress header and footer onto the pages of my program - sitting, of course, outside the CMS - while modifying meta tags and doing a number of other things. It wasn't pretty but it worked.
Now I'm in a situation where I'd like to solve the same problem - that is, to at least pull the Wordpress headers and footers onto some of the Laravel subdomain views - but nothing I have found on the web so far has enabled me to make much progress in that direction. I have found a lot of tutorials explaining how to integrate Laravel and Wordpress, or use one for frontend and the other for backend, etc., but nothing yet about the specific type of integration I'm talking about.
What I have tried so far is implementing some of the code I've used elsewhere into various parts of the Laravel codebase. Most of the recent experiments have been in public/index.php and in Controller methods. Laravel will let me get as far as including the WordPress config, but if I attempt to go any further I cause a 500 error. Here's an example snippet that actually attempts to do more than I need, but I can't even get past $wp->init(). Imagine the following code in a Laravel Controller method. The first two lines are OK, but:
public function index() {
    define('WP_USE_THEMES', false);
    require '/path_to/wp-config.php'; // pulls in wp-settings.php, which creates the global $wp object

    $wp->init(); // from here on, 500 errors occur
    $wp->parse_request();
    $wp->query_posts();
    $wp->register_globals();

    // And then, at some point, I would call and modify get_header()
}
(I didn't really expect this to work from the Controller, but it doesn't work from anywhere else in the codebase I've tried either.) This is not a situation where I want to hand off control from Laravel to WordPress for these URLs (I need Laravel functions / DB queries and more flexibility, and I know I could just do that hand-off through public/index.php and routes.php if it would solve the problem). And for these installations, I don't need to grab posts or other items from WordPress. I would just like to find a way to pull the header and footer into these views directly from WordPress while maintaining control of the views in Laravel. If I can't, then among other things the design team will end up rebuilding headers and footers for every program install on a WordPress client (for the time being), and they will have to make changes in at least two places whenever things are modified or updated.
If we have to, we will find a way to live with that until the next program version rollout, but if I can build a solution in what my superiors will deem a reasonable amount of time, we would all be happier. I hope that I have just missed something simple somewhere and I will be embarrassed to find out that I could have solved this in less time than it took to explain the problem. Thank you for any and all helpful responses and potential solutions.
You're not going to be able to cleanly merge the two codebases together. That would cause a disaster.
The complexity of the solution depends on the complexity of the information you need to share. The simplest possible solution would be to write something custom on the WordPress side that builds a document with no body data and just supplies a token, like {!! $body !!}. Then, in Laravel, you can make an HTTP request to localhost to fetch this tokenized content. Store the result in a memory cache and use Blade to render the final view.
Essentially, my suggestion boils down to: Create a Blade layout with WordPress.
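A rough sketch of that flow in a Laravel controller, assuming a hypothetical /layout-shell URL on the WordPress side that emits the tokenized document (a plain str_replace stands in for full Blade rendering here):

use Illuminate\Support\Facades\Cache;

// Inside a controller:
public function index()
{
    // Fetch the tokenized WordPress chrome at most once per hour.
    $shell = Cache::remember('wp-shell', 3600, function () {
        return file_get_contents('http://localhost/layout-shell'); // hypothetical endpoint
    });

    // Render the Laravel view that supplies the page body.
    $body = view('pages.index')->render();

    // Swap in the {!! $body !!} token the WordPress template emitted.
    return str_replace('{!! $body !!}', $body, $shell);
}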
There's a thousand different ways to do this, and all of them are wrong.
I'm working on a pretty big project right now and am trying to implement an MVP architecture. I'm starting to run across instances where I think jQuery or JavaScript might be better suited than server-side code. I'm looking for feedback on how others are implementing client-side programming in their enterprise applications. How are you structuring the client-side code, and how do you determine when to use it?
Things that can make the user say "wow". For example: populating search results when the user has typed just 3-4 characters of the search term. Think back to Yahoo or Hotmail, which used to post back to the server when you clicked "Create Message". When Google came along, they did it on the client side without going to the server. I bet you said "wow" to that. At least I did.
Things that can reduce server load. For example: adding an extra data-entry row to an HTML table on the client instead of doing it through a round trip, increasing/decreasing a quantity, etc.
These are just a few examples to cite. Even to do these things properly you need to go to the server, but that happens behind the scenes using AJAX. Beyond this, you need to select the jQuery plugins you will use in your project. To name a few: jQuery UI, jQuery Validation, jQuery AnythingSlider, etc. There are plenty of them.
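To make the first example concrete, a minimal jQuery sketch (the /search endpoint, the element IDs, and the response shape are assumptions):

$('#search').on('keyup', function () {
    var term = $(this).val();
    if (term.length >= 3) {
        // Hit a hypothetical /search endpoint once the user has typed 3+ characters.
        $.getJSON('/search', { q: term }, function (results) {
            var $list = $('#results').empty();
            $.each(results, function (i, r) {
                $list.append($('<li>').text(r.title)); // assumes each result has a title
            });
        });
    }
});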
http://ClearTrip.com is one site whose UX I envy. Visit the site from a mobile device and you will get further clues about their UX work. Besides just coding, you need a person on your team who can work on these UX aspects.
Regarding how this fits into DDD: I've just recently started my journey into DDD but one hears a lot about command/query separation in that circle. Certainly if you are doing something that hits your domain (like fetching for auto-completion or certainly if you allow partial page submission to accomplish a domain command) you have to decide how it gets there and how the domain is structured to handle it.
I think two decisions are most relevant.
First, bits that live entirely in the browser, and even those specifically in your application layer, are outside your domain and thus, though covered in the layered-architecture part of the DDD discussion, do not land in the entity/value/event/service, etc. discussion. If, however, you are using AJAX to interact with your application layer and in turn need to access your domain, you need to consider two further things, to my mind.
(a) Are you separating commands and queries simply by using different methods on your domain? That's fine if you have a relatively small demand for either queries or commands, so that they will not seem like "noise" in your domain API. Otherwise, you want a separate bounded context: another domain modeled just for the queries your UI needs, to avoid cluttering your core domain. Regardless, you are doing something like JS -> AJAX handler in the application layer -> domain (including a domain service).
(b) Is this a command or a query? Once you have (a) figured out, this lets you know where the access will land...then use the presentation layer's use case to elaborate the domain concept and put it into your ubiquitous language.
Second, you have the DTO vs direct-to-domain decision. This can turn into a religious war, but usually the answer is "it depends." I think there are cases for using DTOs and cases for not (within the same architecture); just search for the discussions around the topic and apply the pattern only where it adds value. I won't try to cover the details here.
Hope this provides some insight, or at least a conversation magnet to which others will add.
I guess this question is a little too subjective. It looks like I'm just going to grab a few books on advanced JavaScript and study up on the jQuery library.
Three associates and I want to integrate our individual Drupal websites so that a user can move fairly seamlessly between them. We're all new at Drupal, so our planned approach avoids "doing it the right way" by combining modules and database tables.
Rather, we plan on simply having each site's menu system include links to the other sites, and loading the selected site via iframes so that the overall user experience is more like that of a single, integrated system. We'll adopt a common theme for all sites and pass the user ID through the HTML call (and then process it via normal Drupal code) to avoid the need for more than one logon.
What are the negatives of this simple approach and are they so severe that a more traditional site-integration approach should be used?
To be honest, that sounds like a rather nasty can of worms you're looking at opening there. The mere mention of IFrames has me shuddering!
It seems to me like you'd be better off simply having one Drupal instance, with you and your associates as different content authors on the same site.
If you're looking at having the same theme across the three integrated sites, how will the users know which one they're on? And if the aim is to tightly integrate them, why not have the four of you simply contribute to the same core site?
If I had to make the decision, I would use the Drupal multisite feature. You can even use the "single sign on" module to get all your users logged in to all sites. It is a bit of work, but I think it is well worth it.
Once you start throwing things into frames, your users/visitors will lose the ability to bookmark the correct page. For example, if they find a page they like and bookmark it, they will get 'www.site.com/index.php' rather than 'www.site.com/article/article.php?Id=12345'. When they come back, they'll get the default page where the frame lives rather than the page they expected.
Since all of your sites are based on the same data schema, it would probably be better to 'do it right' the first time around rather than hacking something together that in the end will cause more headaches than solutions.
Good luck on your project and hope this helps some.
Quick question.
There is a legacy website (that is not under my control and cannot be modified), that gives users a form to fill in data and then the user 'submits' the form for processing. There is virtually no error checking on this form, and very little help for the user (i.e. it was very poorly designed about 12 years ago and hasn't been updated since).
Nonetheless, the back end of this application performs a critical function.
My question is, is it possible (without having any ability to modify the legacy website), to write my own new front-end in asp.net (with proper pre-submit validation) living on a different server & domain, and then simulate the 'submit' to another webserver as long as I reproduce the form/data that is being sent?
The key question here, I guess: is it possible to submit a form produced on one website to another, and can this be done without ANY changes to the legacy site?
Comments appreciated.
The key question here, I guess: is it possible to submit a form produced on one website to another, and can this be done without ANY changes to the legacy site?
Yes, I've done this before - provided that the target site doesn't do any referer checking. A POST request is a POST request, no matter where it originates from.
You just need to make sure that all the fields are exactly the same in your request as they would be coming from the original page, i.e. - same field names, same encoding etc.
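If you'd rather do the submission server-side, so your new ASP.NET front-end validates first and then relays, here's a minimal sketch; the URL and field names are placeholders that must mirror the legacy form exactly:

using System.Collections.Specialized;
using System.Net;
using System.Text;

// Field names must match the legacy form exactly; the values come from your
// already-validated front-end. URL and names here are placeholders.
var fields = new NameValueCollection
{
    { "CustomerName", customerName },
    { "AccountNumber", accountNumber }
};

using (var client = new WebClient())
{
    byte[] response = client.UploadValues(
        "https://legacy.example.com/process.asp", "POST", fields);
    string body = Encoding.UTF8.GetString(response); // inspect the legacy site's reply
}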
The short answer is "yes", the long answer is "it depends". The basics of HTML and HTTP allow for it, but without knowing a little more about the implementation of the legacy site you can't know for sure that it will work.
In theory you just need to make sure that the names of the fields are the same and set the target of the form to the legacy site's page URL.
In practice the legacy site could be doing various things that make it difficult or impossible to achieve (it could require cookies set correctly or hold internal state for example).
The best thing would be just to try it. It shouldn't take long to mock up the basic fields and post the form to see if it works. Once you know it works, you can worry about adding your extra validation, etc.
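For the quick mock-up, something like this is enough (the action URL and field names are placeholders for whatever the legacy form actually posts):

<!-- Minimal test harness: point a bare form at the legacy handler. -->
<form method="post" action="https://legacy.example.com/process.asp">
  <input type="text" name="field1" />
  <input type="text" name="field2" />
  <input type="submit" value="Test submit" />
</form>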
Beware that if the existing site is authenticating users you'll need to find a way to also collect and pass that info along. Otherwise, though, Phill's point is spot-on.