I want to use the JBoss 5.1 asynchronous invocation feature as described in https://docs.jboss.org/ejb3/docs/reference/build/reference/en/html/jboss_extensions.html - however, I'm a bit unclear on how to keep the returned Future around between requests. Since it's not serializable, I can't properly store it in my Wicket page or in the HTTP session. How do you properly handle a Future if you need to wait for longer than one request?
This thread contains a discussion about how to handle Future instances in Wicket:
http://apache-wicket.1842946.n4.nabble.com/Handling-futures-td3688341.html
After deploying my Web API to Azure, I noticed that I have very high response latency. I stopwatched a method that awaits an HTTP request to a controller that just returns a "Hello" string. The times I measure are also not consistent, but most of the time I get something around 0.9 seconds. The problem is that my database queries take forever; even the least fancy ones take around two seconds (and when my UI updates multiple elements, it takes up to 4 seconds until the whole thing is loaded).
I have really no idea where to start diagnosing this issue, so any help (even the most basic) would be highly appreciated!
I found my problem: I called Database.EnsureCreated() on every service request, since it was called in the constructor of the DbContext I inject on Startup. Seems pretty inefficient. Whoopsie!
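For anyone hitting the same thing, a minimal sketch of the fix, assuming ASP.NET Core with EF Core (MyDbContext is an illustrative name): run the schema check once at startup instead of inside the DbContext constructor.

```csharp
using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.DependencyInjection;

public class Startup
{
    public void Configure(IApplicationBuilder app)
    {
        // Resolve a scoped DbContext once and ensure the database exists.
        // This runs a single time at startup, not on every request.
        using (var scope = app.ApplicationServices.CreateScope())
        {
            var db = scope.ServiceProvider.GetRequiredService<MyDbContext>();
            db.Database.EnsureCreated();
        }
    }
}
```

The DbContext constructor then stays cheap, which matters because a new instance is created for every request.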
I have an ASP.NET application, using NGINX as a server and Servicestack 3.
When it comes to PUT requests, I'd like them to be processed synchronously as they come in to ServiceStack from the NGINX server (never have multiple requests processed at the same time, since this could lead to them finishing their work in a different order from the one they were called in). As I understand it, it's possible to control the number of threads used for incoming requests - if I were able to restrict that number to 1, I imagine I'd get the result that I want.
How would I go about achieving this from the level of ServiceStack configuration (if it's even possible - the accepted answer to this question makes me think it isn't; but then if that's true, how am I supposed to enforce synchronisation)?
It's unlikely you want to block the whole HTTP Server, that could cause all other requests to fail.
Have you just tried using a singleton lock? e.g. you could use the Type of your Service for this:
lock (typeof(MyService))
{
    // Only one request at a time can execute this critical section
}
But locking in a Web Server like this is generally a bad idea. You should consider using some kind of optimistic concurrency control that throws an error when trying to update a stale record. For example, OrmLite has a RowVersion feature for optimistic concurrency: clients send back the RowVersion representing the state of the record they have, and if it's been updated since, OrmLite throws an OptimisticConcurrencyException, which ServiceStack automatically converts to a 409 Conflict HTTP error response.
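A minimal sketch of that pattern, assuming ServiceStack with OrmLite (the Item and UpdateItem names are made up for the example; your DTOs will differ):

```csharp
// Adding a ulong RowVersion property is all OrmLite needs to enable
// optimistic concurrency checks on this table.
public class Item
{
    public int Id { get; set; }
    public string Name { get; set; }
    public ulong RowVersion { get; set; }
}

public class MyService : Service
{
    public object Put(UpdateItem request)
    {
        var item = Db.SingleById<Item>(request.Id);
        item.Name = request.Name;
        item.RowVersion = request.RowVersion; // the version the client last saw

        // Throws OptimisticConcurrencyException (returned to the client as a
        // 409 Conflict) if another request updated the row in the meantime.
        Db.Update(item);
        return item;
    }
}
```

The client then retries with the fresh record on a 409, instead of the server serializing all PUTs behind a lock.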
From what my empirically gathered knowledge suggests, .NET WebForms is probably using a queue of requests: only when the first request has been handled does the next one move to the head of the queue, and so on. This behavior recently led to a misunderstanding, where we thought that a feature was very slow, but in fact some other features that always ran before it were the slow ones. That was only a minor misunderstanding, though; I can imagine more serious problems, for instance a longer request blocking the other requests. I haven't yet found the time to test this across multiple sessions to see whether this queue is session-level, but I think it should be, if I'm even right about its existence. Hence my question: why do later requests wait for earlier requests to be processed in .NET WebForms projects?
Probably Session.
Requests from the same session that use session state don't run concurrently. This means that applications can use session state without needing to worry about race conditions.
There is no blocking for calls from different sessions. There is also no blocking for calls from the same client that have session state disabled or read-only.
See the MSDN description of the PagesEnableSessionState Enumeration
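For example, a page that only reads session state can declare that in its @Page directive, which replaces the exclusive session lock with a shared read lock so requests from the same session can run concurrently:

```aspx
<%@ Page Language="C#" EnableSessionState="ReadOnly" %>
```

The same setting can be applied site-wide via `<pages enableSessionState="ReadOnly" />` in web.config, with individual pages overriding it as needed.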
I am trying to make 6 asynchronous jQuery ajax calls to my .NET page methods all at once on document.ready to request different sets of data from the database and render them as charts to users.
The problem is that when one chart takes a long time to generate, it blocks generation of the next 5 charts. For instance, when each chart takes 1 min to generate, the user ends up waiting approximately 6 mins, instead of the 1-2 mins I thought it would be when using async ajax calls with page methods processed in parallel.
After reading a lot of useful posts in this forum, I found that this is because I have to read and write session objects within the page methods, and ASP.NET locks the session for the whole request, as a result making them run sequentially.
I have seen people suggest setting the session state to read-only in the @Page directive, but that will not address my problem because I need to write to the session as well. I have considered moving from InProc session to SQL database session, but my session object is not serializable and is used across the whole project. I also cannot change to using Cache instead because the session contains user-specific details.
Can anyone please help and point me in the right direction? I have been spending days investigating this page inefficiency and still haven't found a nice way around it.
Thanks in advance
From my personal experience, switching to SQL session will NOT help this problem, as all of the concurrent threads will block in SQL: the first thread in will hold an exclusive lock on one or more rows in the database.
I'm curious as to why your session object isn't serializable. The only solution that I can think of is use a database table to store the user specific data that you are keeping in session and then only holding onto a database lock for as long as it takes you to update the user data.
You can use the ASP.NET session id or other unique cookie value as the database key.
The problem may not be server side at all.
Browsers have a built-in limit on how many concurrent HTTP requests they will make - this is part of the HTTP/1.1 spec, which suggests a limit of 2.
In IE7 the limit is 2; in IE8 it is 6. But when a page loads you could easily hit 6 due to the concurrent requests for CSS, JS, images etc.
A good source of info about these limits is BrowserScope (see Connections per Hostname column).
What about combining those 6 requests into 1 request? This will also load a little faster.
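As a sketch of what combining them could look like (the endpoint name "Dashboard.aspx/GetCharts" and the chart ids are hypothetical; your page method would need to accept a list of ids and return all datasets in one payload):

```javascript
// Build a single batched request in place of six separate page-method calls.
// One round trip means the session lock is taken only once on the server.
function buildBatchRequest(chartIds) {
  return {
    url: "Dashboard.aspx/GetCharts",
    type: "POST",
    contentType: "application/json; charset=utf-8",
    data: JSON.stringify({ ids: chartIds })
  };
}

var req = buildBatchRequest(["sales", "traffic", "errors", "uptime", "users", "revenue"]);
// Pass req to a single $.ajax(req) call and render each chart
// from the combined result in the done() handler.
```

This also saves the fixed per-request overhead (headers, connection scheduling) five times over.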
I'm making a small web application in Seaside. I have a login component, and after the user logs in I want to send along a cookie when the next component renders itself. Is there a way to get at the object handling the response so I can add something to the headers it will output?
I'm trying to avoid using WASession>>redirectWithCookies since it seems pretty kludgey to redirect only because I want to set a cookie.
Is there another way that already exist to add a cookie that will go out on the next response?
There is currently no built-in way to add cookies during the action/callback phase of request processing. This is most likely a defect and is noted in this issue: http://code.google.com/p/seaside/issues/detail?id=48
This is currently slated to be fixed for Seaside 2.9 but I don't know if it will even be backported to 2.8 or not.
Keep in mind that there is already (by default) a redirection between the action and rendering phases to prevent a Refresh from re-triggering the callbacks, so in the grand scheme of things, one more redirect in this case isn't so bad.
If you still want to dig further, have a look at WARenderContinuation>>handleRequest:. That's where callback processing is triggered and the redirect or rendering phase begun.
Edited to add:
The issue has now been fixed and (in the latest development code) you can now properly add cookies to the current response at any time. Simply access the response object in the current request context and add the cookie. For example, you might do something like:
self requestContext response addCookie: aCookie
This is unlikely to be backported to Seaside 2.8 as it required a fairly major shift in the way responses are handled.
I've just looked into this in depth, and the answer seems to be no. Specifically, there's no way to get at the response from the WARenderCanvas or anything it can access (it holds onto the WARenderingContext, which holds onto the WAHtmlStreamDocument, which holds onto the response's stream but not the response itself). I think it would be reasonable to give the context access to the current response, precisely to be able to set headers on it, but you asked if there was already a way, so: no.
That said, Seaside does a lot of extra redirecting, and it doesn't seem to have much impact on the user experience, so maybe the thing to do is to stop worrying about it seeming kludgey and go with the flow of the API that's already there :)