What are the main differences between Resumability, Hydration and Reconciliation?
We know Resumability is the future of web apps. Is it possible to make most of the current meta-frameworks (Next.js, Remix, SvelteKit, SolidStart, etc.) resumable?
SSR means server-side rendering. It is desirable for search engine optimization and faster load times. However, a server written in JavaScript does not have the same API as the browser, so there is no way to render the full application there. Even if it were possible, it would not make much sense, since the runtime environments are tailored for different use cases; for example, there are no click events on the server side. So SSR returns a partially rendered application plus client-side code.
When the client-side code executes, it hydrates the application, meaning it takes the partially rendered app returned from the server, calculates the new state, binds events, and so on. The client-side application does less work than its client-only version, but some tasks are still repeated. Resumable frameworks like Qwik try to address this shortcoming.
In resumability, there is no hydration. Client-side logic is infused into the server-returned code: Qwik serializes the application's state and the framework's state into the HTML. Events such as click handlers are bound to the UI upon the user's interaction, e.g. when the user clicks a button.
Reconciliation means reconciling two states, in other words diffing and patching the previously rendered state of the application. React uses a virtual DOM and re-renders on every state change, but for a large application recomputing everything is costly. So, rather than recalculating the whole DOM, it keeps the unchanged parts and re-renders only the changed branches. In the context of server-side rendering, reconciliation means reconciling the server-rendered state of the application with the client-side rendering logic.
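As a rough illustration of that idea (this is only a sketch, not Qwik's actual API or serialization format): the server can emit markup carrying serialized handler references, and a single global listener loads and runs a handler only on the first interaction, so nothing is re-executed on load.

    // Minimal sketch of resumable event binding (not Qwik's real implementation).
    // The server would render something like:
    //   <button on-click="./counter.js#increment">+1</button>
    // No component code runs on load; one global listener resolves handlers lazily.
    document.addEventListener('click', async (event) => {
      const target = (event.target as Element | null)?.closest('[on-click]');
      if (!target) return;

      // "module#export" reference serialized into the HTML by the server.
      const [modulePath, exportName] = target.getAttribute('on-click')!.split('#');

      // The handler module is downloaded only now, on the first interaction.
      const handlerModule = await import(modulePath);
      handlerModule[exportName](event, target);
    });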
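As a toy sketch of the diff-and-patch idea (nothing like React's actual reconciler, and the flat list of text items is made up for illustration): compare the previous and next descriptions and touch only what changed.

    // Toy reconciliation sketch: patch only the children whose text changed.
    // Real reconcilers diff full element trees with keys; this only handles a
    // flat list of text items to show the principle.
    function reconcile(container: HTMLElement, prev: string[], next: string[]): void {
      // Update or append items present in the next state.
      next.forEach((text, i) => {
        const node = container.children[i] as HTMLElement | undefined;
        if (!node) {
          const el = document.createElement('li');   // item was added
          el.textContent = text;
          container.appendChild(el);
        } else if (prev[i] !== text) {
          node.textContent = text;                    // item changed: patch in place
        }                                             // unchanged items are left alone
      });
      // Remove surplus nodes from the end (items deleted from the state).
      while (container.children.length > next.length) {
        container.lastElementChild!.remove();
      }
    }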
We know Resumability is the future of web apps.
This is a bold statement. In computer science everything is a tradeoff.
Is it possible to make most of the current meta-frameworks resumable?
I don't think so. Maybe some of them, but definitely not all, because resumability is hard to retrofit and may require a complete rewrite. Also, not all applications need SSR or search engine optimization.
Related
This seems like a really simple thing to do, yet I am having trouble finding the right architecture for it.
Here's the scenario:
We have an API route api/templates that should, in theory, be called on every single route/page of the app. It fetches all the different templates, and all the data in the app belongs to one of those templates. These are dynamic and can change over time, so they are not an 'importable JSON'.
Every page should get these assets on load, but...
once it's loaded, and you start navigating through pages, the app should NOT re-fetch them on every single page navigation
We will implement a socket notification to alert an already-loaded client when templates change in the database
The problem is that, since this is needed on every page, SSR still needs to be able to access it on every page, and our SEO policy requires server-side rendering to send these pages fully rendered to the client.
So, what we are looking for is:
to have a somewhat 'conditional' getServerSideProps that fetches the templates on a full reload but skips the fetch if they are already in the client's memory
we have looked into SWR, which, in theory, would work, but it still makes the API call as an afterthought; it helps on the client side but defeats the objective of not actually making the call, so the backend is still 'burdened' with an unnecessary request
Honestly, this looks like a very 'common' pattern, yet I have completely failed to achieve a proper solution within the NextJS app environment. Maybe it's an "anti-pattern" and we shouldn't be doing this?
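One hedged sketch of the 'conditional' getServerSideProps being asked for, assuming the pages router. The /_next/data prefix check is an internal detail of Next.js client-side transitions, not a documented API, so treat it as a fragile workaround; the template type, fetcher and URL below are placeholders.

    import type { GetServerSideProps } from 'next';

    // Hypothetical template shape and fetcher; replace with the real api/templates logic.
    type Template = { id: string; name: string };
    const fetchTemplates = async (): Promise<Template[]> => {
      const res = await fetch('https://example.com/api/templates'); // placeholder URL
      return res.json();
    };

    export const getServerSideProps: GetServerSideProps = async ({ req }) => {
      // On client-side transitions Next.js requests /_next/data/...json rather than
      // the page URL itself. Caution: this is an internal detail, not a public contract.
      const isClientNavigation = req.url?.startsWith('/_next/data') ?? false;

      if (isClientNavigation) {
        // Skip the fetch; the client keeps the templates it loaded on the first
        // full render (e.g. held in a context or an SWR cache).
        return { props: { templates: null } };
      }

      // Full page load or crawler: fetch so the page is server-rendered with data.
      return { props: { templates: await fetchTemplates() } };
    };

On the first full load (and for crawlers) the page is server-rendered with the templates; subsequent client navigations skip the backend call entirely and reuse the copy already in memory.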
I am writing a web application using ASP.NET (not MVC), with .NET v4 (not v4.5).
I fetch some of the data which I must display from a 3rd-party web service, one of whose methods takes a long time (several seconds) to complete. The information to be fetched/prefetched varies depending on the users' initial requests (because different users ask for details about different objects).
In a single-user desktop application, I might:
Display my UI as quickly as possible
Have a non-UI background task to fetch the information in advance
Therefore hope to have an already-fetched/cached version of the data by the time the user drills down into the UI to request it
To do something similar using ASP.NET, I figured I could:
Use a BackgroundWorker, passing the Session instance as a parameter to the worker
On completion of the worker's task, write fetched data to the Session
If the user's request for data arrives before the task is complete, then block until it has completed
Do you foresee problems? Can you suggest improvements?
[There are other questions on StackOverflow about ASP.NET and background tasks, but these all seem to be about fetching and updating global application data, not session-specific data.]
Why not use the same discipline as in a desktop application:
Load the page without the data from the service ( = Display my UI as quickly as possible)
Fetch the service data using an ajax call (= Have a non-UI background task to fetch the information in advance)
This is actually the same, although you can show an animated gif indicating that you are still in progress... (= therefore hope to have an already-fetched/cached version of the data by the time the user drills down into the UI to request it)
In order to post example code, it would be helpful to know whether you are using jQuery, plain JavaScript, something else, or no JavaScript at all.
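In the meantime, here is a sketch assuming plain JavaScript/fetch with no library; '/api/service-data', '#spinner' and '#details' are placeholders for your own endpoint and elements.

    // Plain-JavaScript flavour: render the page first, then fill in the slow data.
    async function loadServiceData(): Promise<void> {
      const spinner = document.querySelector<HTMLElement>('#spinner')!;
      const details = document.querySelector<HTMLElement>('#details')!;

      spinner.hidden = false;                         // the "still in progress" gif
      try {
        const response = await fetch('/api/service-data');
        const data = await response.json();
        details.textContent = JSON.stringify(data);   // render however suits the page
      } catch {
        details.textContent = 'Could not load the data, please try again.';
      } finally {
        spinner.hidden = true;
      }
    }

    document.addEventListener('DOMContentLoaded', () => void loadServiceData());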
Edit
I am not sure if this was your plan, but another idea is to fetch the data on the server side as well and cache it for future requests.
In this case the stages will be:
1. Get a request.
2. Is the service data cached?
   2.a. Yes? Serve the page with the full data.
   2.b. No? Serve the page without the service data.
      2.b.i. On the server side, fetch the service data and cache it for future requests.
      2.b.ii. On the client side, fetch the service data and cache it for the current session (see the sketch below).
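A sketch of step 2.b.ii, again assuming plain JavaScript; '/api/service-data' and the cache key are placeholders.

    // Cache the fetched service data for the current browser session.
    const CACHE_KEY = 'serviceData';

    async function getServiceData(): Promise<unknown> {
      const cached = sessionStorage.getItem(CACHE_KEY);
      if (cached !== null) {
        return JSON.parse(cached);       // already fetched earlier in this session
      }
      const data = await (await fetch('/api/service-data')).json();
      sessionStorage.setItem(CACHE_KEY, JSON.stringify(data));
      return data;
    }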
Edit 2:
Bear in mind that the downside of this approach is that if the way you fetch the data changes, you will have to remember to modify it on both the server and the client side.
Given that the web is a very one-way synchronous architecture, is there any benefit to an NSB-enabled web application using MVC4?
I love the fault tolerance and ease of development that comes with NSB, but since the technology is all about one-way asynchronous messaging, how can I design my application around it in such a way that the user doesn't (often) notice their commands not being complete by the time a postback occurs? What paradigm should I adopt in designing my UI to naturally fit the curvature of NServiceBus?
Indeed, it seems NSB is an unnecessary complexity between a website and its SQL store because users always assume the "work" is done when their browser is done refreshing. Am I wrong in this regard?
Edit: I've seen other solutions whereby each command handler publishes an event when the work is done by the NSB service, and event handlers in the ASP.NET project create "stub files" that a JavaScript-enabled page constantly polls to detect that an operation has completed. Is this the only way to bridge the gap between one-way sync and async platforms?
NServiceBus fits in quite nicely with any web front-end. You are just going to need to be aware of how asynchronous message processing affects your UI. In most instances one could simply indicate to the user that the request has been accepted and will be processed. But in other cases you may need to forgo eventual consistency for immediate consistency.
For instance, for user registration I typically check availability of the user name and then register the user immediately but I send a command to e-mail the activation message so that the user does not have to wait for that. The user will eventually receive their e-mail. So a message is displayed indicating that an e-mail will be sent and that they need to click the activation link even though the mail may only be sent in 5 minutes.
Another example is an application where the user could convert various document formats to TIFF. The request would be sent and the web front-end would poll to wait for the result of the conversion and then display the converted pages.
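A sketch of that polling pattern on the web front-end; the endpoint names are hypothetical, and behind them the web app would send the NServiceBus command and record the handler's published result.

    // Poll for an asynchronous conversion result from the browser.
    async function convertAndWait(documentId: string): Promise<string> {
      // Kick off the conversion; the bus processes it asynchronously.
      await fetch(`/api/conversions/${documentId}`, { method: 'POST' });

      // Poll until the result is ready, giving up after roughly a minute.
      for (let attempt = 0; attempt < 30; attempt++) {
        const response = await fetch(`/api/conversions/${documentId}/status`);
        const { done, tiffUrl } = await response.json();
        if (done) return tiffUrl;                                  // converted pages ready
        await new Promise((resolve) => setTimeout(resolve, 2000)); // wait 2s, retry
      }
      throw new Error('Conversion timed out');
    }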
So it is going to affect how your UI/UX works. It is definitely still useful and in some instances makes your life a whole lot easier.
In my case I used my FOSS Shuttle Service Bus: http://shuttle.codeplex.com/ --- but the concepts apply anyway.
NServiceBus has hooks into the typical MVC web application that allow you to cause the user's postback to wait until a response arrives over the bus. See the AsyncPages sample for how it's done.
I want to make a form where people can sign up for a course. The number of places for a course is limited. I want to make a page where a user can see how many places are still available, with that number updated dynamically, so if another user signs up for the course, the first one sees the change. When the number of available places reaches 0, the signup button should be disabled. Such a task should be easy to implement, but I am afraid it is not. I suppose some Ajax will be involved, but how do I handle the server-side counting? Web services? I am having trouble designing the logic behind all of this.
The technology/technique you're looking for is called Server Push.
Basic idea: the client should respond to certain events happening on the server.
Possible solutions:
Polling some server action via AJAX at regular intervals;
Keeping a long-running AJAX request open on the server side until a timeout occurs or the event happens, then processing the acquired result on the client (determining whether it was a server action or just a timeout) and re-establishing the connection from the client if necessary;
and a couple of other solutions which are basically variations of the above two. The solution will also depend a lot on the server-side technology you're using.
Google has a short yet very informative article on what this technique is and how it can be implemented here. It's (almost) technology agnostic so it should help you to understand concepts and possible solutions.
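A sketch of the second option (long polling) for the "places left" counter; '/api/course/:id/places' is hypothetical, and the server would hold the request open until the count changes or its own timeout elapses.

    // Long-poll the number of free places and push each change to the UI.
    async function watchPlaces(courseId: string, onChange: (left: number) => void): Promise<void> {
      for (;;) {
        try {
          const response = await fetch(`/api/course/${courseId}/places?wait=30`);
          const { placesLeft } = await response.json();
          onChange(placesLeft);            // update the counter; disable signup at 0
          if (placesLeft === 0) return;
        } catch {
          // Timeout or network hiccup: back off briefly, then reconnect.
          await new Promise((resolve) => setTimeout(resolve, 5000));
        }
      }
    }

    // Usage:
    // watchPlaces('algebra-101', (left) => {
    //   document.querySelector<HTMLButtonElement>('#signup')!.disabled = left === 0;
    // });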
I'd use a database on the server. For the "courses" table, have an associated table containing the "bookings". Add them up in a SQL query.
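For example, something along these lines on the server, where db.query merely stands in for whatever data-access layer is actually in use:

    // Sketch of the availability query against a courses/bookings schema.
    type Db = { query: (sql: string, params: unknown[]) => Promise<Array<Record<string, unknown>>> };

    async function placesLeft(db: Db, courseId: number): Promise<number> {
      const rows = await db.query(
        `SELECT c.capacity - COUNT(b.id) AS places_left
           FROM courses c
           LEFT JOIN bookings b ON b.course_id = c.id
          WHERE c.id = ?
          GROUP BY c.capacity`,
        [courseId],
      );
      return Number(rows[0]?.places_left ?? 0);
    }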
Say, for example, you are caching data within your ASP.NET web app that isn't updated often. Another process running outside the app occasionally updates this data; when that happens, you would like the cached data to be cleared immediately so that the next request picks up the new data straight away.
The caching service is running in the context of your web app and not externally - what is a good method of calling into the web app to get it to update the cache?
You could, of course, just hack together a page or web service called ClearTheCache that does it. This could then be called by your other process. Of course, you don't want this endpoint to be externally usable or visible on your web app, so perhaps you could check that incoming requests to this page come from localhost and, if not, throw a 404. Is this acceptable? Could it be spoofed at all (for instance, if you used HttpApplication.Request.Url.Host)?
I can think of many different ways to go about this, mainly revolving around creating a page or web service and limiting requests to it somehow, but I'm not sure any are particularly elegant. Neither do I like the idea of the web app routinely polling out to another service to check if it needs to execute something; I'd really like a PUSH solution.
Note: The caching scenario is just an example, I could use out-of-process caching here if needed. The question is really concentrating on invoking code, for any given reason, within a web app externally but in a controlled context.
Don't worry about limiting to localhost; you may want to push from a different server in the future. Instead, share a key (asymmetric or symmetric, it doesn't really matter) between the two, have the PUSH service encrypt a block of data (control data, for example) and have the receiver decrypt it. If the block decrypts correctly and the data is readable, you can safely assume that only the service that was supposed to call you has done so, and you can perform the required actions! Not the neatest solution, but it allows you to scale beyond a single server.
EDIT
Having said that, an asymmetric key would be better: have the PUSH service hold the private part and the website the public part.
EDIT 2
Have the PUSH service put the date/time at which it generated the ciphertext into the data block; the client can then be sure that a replay attack hasn't taken place by checking that the date/time falls within an acceptable window (say, a minute).
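To make the mechanics concrete, here is a sketch in TypeScript/Node purely for illustration (the same idea ports to ASP.NET's crypto APIs). It uses a shared symmetric key with AES-256-GCM; with an asymmetric pair the same freshness check applies, just with different key material. The key source and the one-minute window are assumptions.

    import { createCipheriv, createDecipheriv, randomBytes } from 'node:crypto';

    // Shared 32-byte key, distributed out of band (e.g. configuration on both sides).
    const key = Buffer.from(process.env.PUSH_SHARED_KEY ?? '', 'hex');

    // PUSH service: encrypt the control block, embedding the current timestamp.
    function buildPushToken(command: string): string {
      const iv = randomBytes(12);
      const cipher = createCipheriv('aes-256-gcm', key, iv);
      const payload = JSON.stringify({ command, issuedAt: Date.now() });
      const body = Buffer.concat([cipher.update(payload, 'utf8'), cipher.final()]);
      const tag = cipher.getAuthTag();
      return [iv, tag, body].map((b) => b.toString('base64')).join('.');
    }

    // Web app: decrypt, and reject anything older than a minute to stop replays.
    function verifyPushToken(token: string): { command: string } {
      const [iv, tag, body] = token.split('.').map((p) => Buffer.from(p, 'base64'));
      const decipher = createDecipheriv('aes-256-gcm', key, iv);
      decipher.setAuthTag(tag);
      const payload = JSON.parse(
        Buffer.concat([decipher.update(body), decipher.final()]).toString('utf8'),
      );
      if (Date.now() - payload.issuedAt > 60_000) {
        throw new Error('Stale push token (possible replay)');
      }
      return { command: payload.command };   // e.g. 'clear-cache'
    }

Because GCM authenticates as well as encrypts, a tampered or wrongly keyed token fails at decryption, and the timestamp check rejects captured tokens replayed later.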
Consider an external caching mechanism like EL's caching block, which would be available to both the web and the service, or a file to cache data to.
HTH.