Imagine I have this component:
<input @bind="Name" @bind:event="oninput" />
<p>Your name is @Name</p>

@code {
    string _name;
    string Name
    {
        get => _name;
        set => _name = value.ToUpper();
    }
}
When I type in the input, the text is transformed directly to uppercase and shown in the paragraph.
I think (please correct me if I'm wrong) that server-side Blazor runs the .NET MSIL code on the server and sends the DOM changes to the client over a SignalR connection.
The connection to the server can be delayed, especially on poor Internet connections.
In the case of this input, could the text be transformed to uppercase only some seconds after the user types? If so, how can I solve it? Only by using client-side Blazor?
Using server-side Blazor has a couple of drawbacks which you must take into account when deciding which flavor of Blazor to use. Server-side Blazor is mostly recommended for a private intranet, for instance an enterprise network with a few hundred users accessing the application at the same time. In such a case, you'll experience no rendering delay at all.
Using server-side Blazor on the public Internet can be problematic in that respect (it has other downsides to seriously consider) and can result in unacceptable rendering delays. But if its use is forced upon you, it's essential to look for ways to mitigate this issue. For instance, don't use the input event; use the change event instead. An input event is triggered every time you hit a key, and each one results in a call to the server to process the event. If you use the change event instead, you greatly reduce the number of calls to the server.
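In current Blazor syntax, the only difference between the two is the event you bind to; a minimal sketch:

```razor
@* Fires a server round-trip on every keystroke *@
<input @bind="Name" @bind:event="oninput" />

@* Fires a single round-trip when the input loses focus *@
<input @bind="Name" @bind:event="onchange" />
```

With the second form, the user sees their raw keystrokes locally and the uppercase transformation is applied only once the field loses focus, so a slow connection costs one delayed update instead of one per keystroke.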
Hope this helps...
The rendering delay on a good/average connection is < 50 ms.
Even on a bad connection you ought to stay well below 1 sec.
When your connection gets so bad that you get above that then there is a reconnect dialog built-in.
But yes, if you want your app to be available even on a very bad connection (or no connection at all), it would be better to use client-side Blazor.
I've started experimenting with SignalR. I've been trying to come up with a flexible way of storing information about each connected client. For example, storing the name in a chat app rather than passing it with each message.
At the moment, I have a static dictionary which matches the connectionId to an object which contains these properties. I add to this dictionary on connection, and remove on disconnection.
The issue I'm having is that I don't seem to get all disconnect events. If I close a tab in Chrome, the disconnect seems to go through. However, if I rapidly reload a tab, the disconnect doesn't seem to occur (at least not cleanly). For example, if I reload the same tab over and over, it'll tell me my dictionary has multiple items when - in theory - it should still have one.
Is there a standard way of storing this kind of per-connection information? Otherwise, what might be causing the issue I'm having?
You are actually handling connection id data correctly. Just ensure that you only create your user data in OnConnected and remove it in OnDisconnected.
When spamming refresh on your page, there are situations in which the OnDisconnected event is not triggered immediately. However, you should not worry about this, because SignalR will time out the connection and trigger OnDisconnected after a designated timeout (DisconnectTimeout).
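A minimal sketch of that pattern (assuming the ASP.NET SignalR 2.x API, where OnDisconnected receives a stopCalled flag); using a ConcurrentDictionary instead of a plain static Dictionary avoids race conditions when many connections come and go at once:

```csharp
using System.Collections.Concurrent;
using System.Threading.Tasks;
using Microsoft.AspNet.SignalR;

public class ChatHub : Hub
{
    // Thread-safe map of connection id -> per-user state (here just a name).
    private static readonly ConcurrentDictionary<string, string> Users =
        new ConcurrentDictionary<string, string>();

    public override Task OnConnected()
    {
        Users.TryAdd(Context.ConnectionId, "anonymous");
        return base.OnConnected();
    }

    public override Task OnDisconnected(bool stopCalled)
    {
        // Fires either immediately (clean close) or after DisconnectTimeout
        // when the client vanished without saying goodbye (rapid refresh).
        string removed;
        Users.TryRemove(Context.ConnectionId, out removed);
        return base.OnDisconnected(stopCalled);
    }
}
```

With this shape, the "extra" dictionary entries you see after spamming refresh are simply connections waiting out their DisconnectTimeout; they clean themselves up.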
If you do come across scenarios where there is not a 1-to-1 correlation for OnConnected and OnDisconnected events (after a significant amount of time) make sure to file a bug at https://github.com/SignalR/SignalR/issues.
Lastly if you're looking at doing some advanced chat mechanics and looking for some inspiration check out JabbR, it's open source!
https://github.com/davidfowl/JabbR
Hope this helps!
Given that the web is a very one-way synchronous architecture, is there any benefit to an NSB-enabled web application using MVC4?
I love the fault tolerance and ease of development that comes with NSB, but since the technology is all about one-way asynchronous messaging, how can I design my application around it in such a way that the user doesn't (often) notice their commands not being complete by the time a postback occurs? What paradigm should I adopt in designing my UI to naturally fit the curvature of NServiceBus?
Indeed, it seems NSB is an unnecessary complexity between a website and its SQL store because users always assume the "work" is done when their browser is done refreshing. Am I wrong in this regard?
Edit: I've seen other solutions whereby each command handler publishes an event when the work is done by the NSB service, and event handlers in the ASP.NET project create "stub files" that a JavaScript-enabled page constantly polls to detect that an operation completed. Is this the only way to bridge the gap between one-way sync and async platforms?
NServiceBus fits in quite nicely with any web front-end. You just need to be aware of how asynchronous message processing affects your UI. In most instances one can simply indicate to the user that the request has been accepted and will be processed. But in other cases you may need to forgo eventual consistency in favor of immediate consistency.
For instance, for user registration I typically check availability of the user name and then register the user immediately but I send a command to e-mail the activation message so that the user does not have to wait for that. The user will eventually receive their e-mail. So a message is displayed indicating that an e-mail will be sent and that they need to click the activation link even though the mail may only be sent in 5 minutes.
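The registration flow described above might look something like this (a sketch only: the controller, repository, and SendActivationEmail message type are illustrative names, and the IBus.Send call assumes an NServiceBus v4/v5-era API):

```csharp
public class AccountController : Controller
{
    public IBus Bus { get; set; }          // injected by the container
    private readonly IUserRepository _users;

    [HttpPost]
    public ActionResult Register(RegisterModel model)
    {
        // Immediately consistent parts: check and write happen in-request.
        if (_users.NameTaken(model.UserName))
            return View("NameUnavailable");

        _users.Create(model.UserName, model.Email);

        // Eventually consistent part: fire-and-forget; the activation
        // mail may only go out minutes later, and that's fine.
        Bus.Send(new SendActivationEmail { Email = model.Email });

        // Tell the user now what will happen later.
        return View("CheckYourEmail");
    }
}
```

The split is the whole point: anything the user must see reflected on the next page load stays synchronous; anything that can tolerate a delay goes onto the bus.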
Another example is an application where the user could convert various document formats to TIFF. The request would be sent, and the web front-end would poll for the result of the conversion and then display the converted pages.
So it is going to affect how your UI/UX works. It is definitely still useful and in some instances makes your life a whole lot easier.
In my case I used my FOSS Shuttle Service Bus: http://shuttle.codeplex.com/ --- but the concepts apply anyway.
NServiceBus has hooks into the typical MVC web application that allow you to cause the user's postback to wait until a response arrives over the bus. See the AsyncPages sample to see how it's done.
I have an ASP.NET web page that connects to a number of databases and uses a number of files. I am not clear what happens if the end user closes the web page before it was finished loading i.e. does the ASP.NET life cycle end or will the server still try to generate the page and return it to the client? I have reasonable knowledge of the life cycle but I cannot find any documentation on this.
I am trying to locate a potential memory leak, and to establish whether all of the code will run - i.e. whether the connection will be disposed, etc.
The code would still run. There is a property IsClientConnected on the HttpRequest object that can indicate whether the client is still connected if you are doing operations like streaming output in a loop.
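For the streaming case, the check looks something like this (a sketch; the reader and FormatRow helper are illustrative):

```csharp
// Streaming a large result in chunks; bail out if the client has gone away.
while (reader.Read())
{
    if (!Response.IsClientConnected)
        break; // the browser tab was closed mid-stream; stop wasting work

    Response.Write(FormatRow(reader));
    Response.Flush(); // push this chunk to the client now
}
```

Note that IsClientConnected only helps when you flush output incrementally; for an ordinary buffered page, the disconnect is not discovered until the full response is sent.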
Once the request for the page has been made, it will run through the life cycle all the way to Unload. The server has no idea the client is gone until it tries to send the response.
A unique aspect of this is the Dynamic Compilation portion. You can read up on it here: http://msdn.microsoft.com/en-us/library/ms366723
For more information on the ASP.NET page life cycle, look here:
http://msdn.microsoft.com/en-us/library/ms178472.aspx#general_page_lifecycle_stages
So basically: a page is requested, ASP.NET uses dynamic compilation to create the page, and then it attempts to send the page to the client. All of the code you have specified will run, whether or not the client is still there to receive the result.
This is a very simplified answer, but that is the basics. Your code is compiled, the request generates the response, then the response is sent. It isn't sent in pieces unless you explicitly tell it to.
Edit: Thanks to Chris Lively for the recommendation on changing the wording.
You mention tracking down a potential memory leak and the word "connection". I'm going to guess you mean a database connection.
You should ALWAYS wrap all of your connections and commands in using statements. This guarantees the connection/command is properly disposed regardless of whether an error occurs, the client disconnects, etc.
There are plenty of examples here, but it boils down to something like:
using (SqlConnection conn = new SqlConnection(connStr))
using (SqlCommand cmd = new SqlCommand(queryText, conn))
{
    conn.Open();
    // do something here.
}
If, for some reason, your code doesn't allow you to do it this way then I'd suggest the next thing you do is restructure it as you've done it wrong. A common problem is that some people will create a connection object at the top of the page execution then re-use that for the life of the page. This is guaranteed to lead to problems, including but not limited to: errors with the connection pool, loss of memory, random query issues, complete hosing of the app...
Don't worry about the performance of establishing (and discarding) connections at the point you need them in code. ADO.NET maintains a connection pool that is lightning fast and keeps physical connections open for as long as needed, even after your app signals that it's done with one.
Also note: you should use this pattern every time you work with a class that wraps an unmanaged resource. Such classes implement IDisposable.
I want to make a form where people can sign up for a course. The number of people per course is limited. I want to make a page where a user can see how many places are still available, with that number dynamically updated, so that if another user signs up for a course, the first one sees the change. When the number of available places reaches 0, the signup button should be disabled. Such a task should be easy to implement, but I'm afraid it is not. I suppose some Ajax will be involved, but how do I handle the server-side counting? Web services? I'm having trouble designing the logic behind all of this.
The technology/technique you're looking for is called Server Push.
Basic idea: Client should respond to some events happening on Server.
Possible solutions:
Polling some server action via AJAX in a timely fashion;
Keeping long-running AJAX request open on server-side until timeout occurs or event happens, then process acquired result on client (determine if it was server action or just timeout), reestablish connection from client if necessary.
and a couple of other solutions that are basically variations of the above two. The right choice will also depend heavily on the server-side technology you're using.
Google has a short yet very informative article on what this technique is and how it can be implemented here. It's (almost) technology-agnostic, so it should help you understand the concepts and possible solutions.
I'd use a database on the server. For the "courses" table, have an associated table containing the "bookings". Add them up in a SQL query.
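Assuming that two-table layout (the table and column names here are illustrative), the available-places check becomes a single aggregate query that each poll can run:

```csharp
// Illustrative schema: Courses(Id, Capacity) and Bookings(Id, CourseId).
const string sql =
    @"SELECT c.Capacity - COUNT(b.Id)
      FROM Courses c
      LEFT JOIN Bookings b ON b.CourseId = c.Id
      WHERE c.Id = @courseId
      GROUP BY c.Capacity";

using (var conn = new SqlConnection(connStr))
using (var cmd = new SqlCommand(sql, conn))
{
    cmd.Parameters.AddWithValue("@courseId", courseId);
    conn.Open();
    int placesLeft = (int)cmd.ExecuteScalar();
    // Return placesLeft to the polling client; the page disables
    // the signup button when it reaches 0.
}
```

Because the count is computed from the bookings table on every request rather than cached, two users polling at the same time always see the same number; just make sure the signup action itself re-checks the count inside a transaction before inserting.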
I am not able to make more than one request at a time in asp.net while the session is active. Why does this limitation exist? Is there a way to work around it?
This issue can be demonstrated with a WebForms app with just 3 simple aspx pages (although the limitation still applies in asp.net mvc).
Create an asp.net 3.5 web application.
There should be just three pages:
NoWait.aspx, Wait.aspx, and SessionStart.aspx
NoWait.aspx has this single nugget added between the default div tags: <%=DateTime.Now.Ticks %>. The code-behind for this page is the default (empty).
Wait.aspx looks just like NoWait.aspx, but it has one line added to Page_Load in the code-behind: Thread.Sleep(3000); //wait 3 seconds
SessionStart.aspx also looks just like NoWait.aspx, but it has this single line in its code-behind: Session["Whatever"] = "Anything";
Open a browser and go to NoWait.aspx. It properly shows a number in the response, such as "633937963004391610". Keep refreshing and it keeps changing the number. Great so far! Create a new tab in the same browser and go to Wait.aspx. It sits for 3 seconds, then writes the number to the response. Great so far! Now, try this: go to Wait.aspx and, while it's spinning, quickly tab over to NoWait.aspx and refresh. Even while Wait.aspx is sleeping, NoWait.aspx WILL provide a response. Great so far. You can continue to refresh NoWait.aspx while Wait.aspx is spinning, and the server happily sends a response each time. This is the behavior I expect.
Now is where it gets weird.
In a 3rd tab, in the same browser, visit SessionStart.aspx. Next, tab over to Wait.aspx and refresh. While it's spinning, tab over to NoWait.aspx and refresh. NoWait.aspx will NOT send a response until Wait.aspx is done running!
This proves that while a session is active, you can't make concurrent requests with the same user. Requests are all queued up and served synchronously. I do not expect or understand this behavior. I have tested this on Visual Studio 2008's built in web server, and also IIS 7 and IIS 7.5.
So I have a few questions:
1) Am I correct that there is indeed a limitation here, or is my test above invalid because I am doing something wrong?
2) Is there a way to work around this limitation? In my web app, certain things take a long time to execute, and I would like users to be able to do things in other tabs while they wait for a big request to complete. Can I somehow configure the session to allow "dirty reads"? That could prevent it from being locked during the request.
3) Why does this limitation exist? I would like to gain a good understanding of why this limitation is necessary. I think I'd be a better developer if I knew!
Here is a link talking about session state and locking. It does take an exclusive lock.
The easiest way around this is to make the long-running tasks asynchronous. You can run them on a separate thread, or use an asynchronous delegate, and return a response to the browser immediately. The client-side page can then send requests to the server (most likely through Ajax) to check whether the work is done, and notify the user when the server says it's finished. That way, although the server handles the session's requests one at a time, it doesn't look like that to the user.
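One way to sketch that idea (names are illustrative, and note the caveat below: the background work must not touch the request's HttpContext after the response has been returned):

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;
using System.Web.Mvc;

public class ReportController : Controller
{
    // Completed results keyed by a ticket the client polls with.
    private static readonly ConcurrentDictionary<Guid, string> Results =
        new ConcurrentDictionary<Guid, string>();

    [HttpPost]
    public ActionResult Start()
    {
        var ticket = Guid.NewGuid();
        Task.Run(() => Results[ticket] = BuildBigReport()); // the slow part
        return Json(new { ticket });   // browser gets this back at once
    }

    [HttpGet]
    public ActionResult Poll(Guid ticket)  // called repeatedly via Ajax
    {
        string result;
        return Results.TryRemove(ticket, out result)
            ? Json(new { done = true, result }, JsonRequestBehavior.AllowGet)
            : Json(new { done = false }, JsonRequestBehavior.AllowGet);
    }
}
```

Since Start and Poll each hold the session lock only briefly, the user's other tabs stay responsive while BuildBigReport runs in the background.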
This does have its own set of problems, and you'll have to make sure you account for the HTTP context closing, as that will dispose certain functionality tied to the ASP.NET session. One example you'll probably have to account for is releasing a lock on the session, if one is actually being held.
It isn't too surprising that this is a limitation. Each browser has its own session, and before the advent of Ajax, postback requests were synchronous. Making the same session handle concurrent requests could get really ugly, and I can see how that wouldn't have been a priority for the IIS and ASP.NET teams to add.
For reasons Kevin described, users can't access two pages that might write to their session state at the same time - the framework itself can't exert fine-grained control over the locking of the session store, so it has to lock it for entire requests.
To work around this, pages that only read session data can declare that they do so, and ASP.NET won't take a session-state write lock for them:

<%@ Page EnableSessionState="ReadOnly" %>

(Use EnableSessionState="False" if the page doesn't need access to session state at all.)