I would like to understand the internals of Session#setMaxInactiveInterval.
I understand that if no HTTP requests are received within that interval, the session is invalidated and all objects belonging to the session are gone.
Does that also mean that requests which are already in progress as part of the session will be terminated?
I have a scenario where a large file transfer happens over a single request.
So if one request (A) is long-running and no other requests are sent within the time interval, will request A be terminated?
Best Regards,
Saurav
I have an application that retrieves data from a specific database and sends this data to the integration through a SOA client. I have several threads instantiating this client and sending the data in parallel. However, the number of submissions is limited to 1,000,000 per hour, so when I reach this limit I have to send the remaining records in the next window, and so on. What implementation/technology can I use to ensure that all records are submitted?
Sounds like any persistent queue would be helpful here. I'd make it so that all requests behave the same: the server only replies with a place to get the data (or the client gives a callback to which the data should be sent), and all the server does on a request is queue it and return the next step. A separate process can then read from the queue and process the requests in whichever way makes sense.
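A minimal in-process sketch of that pattern (the class and the SubmitToIntegration call are invented here; the hourly cap is the one from the question). A real setup would back this with a durable queue, such as a database table or a message broker, rather than an in-memory one:

using System;
using System.Collections.Concurrent;
using System.Threading;

// Sketch: producers enqueue records instead of calling the integration directly;
// one background worker drains the queue while respecting the hourly cap, so
// anything over the limit simply waits for the next window instead of being lost.
class ThrottledSubmitter
{
    private readonly ConcurrentQueue<string> _queue = new ConcurrentQueue<string>();
    private const int MaxPerHour = 1000000; // the limit from the question

    public void Enqueue(string record)
    {
        _queue.Enqueue(record);
    }

    public void Run(CancellationToken token)
    {
        DateTime windowStart = DateTime.UtcNow;
        int sentThisWindow = 0;

        while (!token.IsCancellationRequested)
        {
            if (DateTime.UtcNow - windowStart >= TimeSpan.FromHours(1))
            {
                windowStart = DateTime.UtcNow;   // a new hourly window begins
                sentThisWindow = 0;
            }

            string record;
            if (sentThisWindow >= MaxPerHour)
            {
                // Cap reached: wait out the rest of the current window.
                TimeSpan wait = windowStart.AddHours(1) - DateTime.UtcNow;
                if (wait > TimeSpan.Zero)
                    Thread.Sleep(wait);
            }
            else if (_queue.TryDequeue(out record))
            {
                SubmitToIntegration(record);     // hypothetical SOA client call
                sentThisWindow++;
            }
            else
            {
                Thread.Sleep(1000);              // queue empty: poll again shortly
            }
        }
    }

    private void SubmitToIntegration(string record)
    {
        // Placeholder for the real SOA client submission.
    }
}

Making the queue durable (a table the worker polls, or a broker) is what actually guarantees no record is lost across restarts; the in-memory ConcurrentQueue above is only there to show the shape of the flow.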
I am currently debugging an issue around this.
We have an ASP.NET web application and I am debugging on Cassini. When I use IE and send a request to the server, some time (about 20 minutes) is needed to process it before the response is sent back.
With a multi-tab IE, I tried sending requests from different tabs to the same server at about the same time, but the second response is only handled after one of the responses has been sent out.
If new instances of IE are started and the requests are sent from these different instances, the server can process them and send out the responses almost simultaneously. After doing some research I found that IIS Express might solve my problem, but it does not. Has anyone experienced a similar problem, or have I missed something really important to check first?
Thank you for your help.
This is primarily due to ASP.NET session state and the fact that only one request at a time may have read/write access to a particular session (as identified by the SessionID cookie).
Any additional request requiring any form of session access (and read/write is the default) will be blocked until the previous request has completed.
Based on the following links:
http://johnculviner.com/asp-net-concurrent-ajax-requests-and-session-state-blocking/
https://msdn.microsoft.com/en-us/library/ms178581.aspx?f=255&MSPPError=-2147217396
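As a concrete illustration of the workaround described in those links (a sketch, not code taken from them): if a long-running endpoint only needs to read the session, implementing IReadOnlySessionState instead of IRequiresSessionState keeps it from taking the exclusive session lock, so other requests from the same SessionID are not queued behind it.

using System.Web;
using System.Web.SessionState;

// Sketch: a long-running handler that only reads session state.
// IReadOnlySessionState requests do not take the exclusive per-session lock,
// so parallel requests from the same browser (same SessionID) are not blocked by this one.
public class SlowReportHandler : IHttpHandler, IReadOnlySessionState
{
    public void ProcessRequest(HttpContext context)
    {
        var userName = (string)context.Session["UserName"]; // read-only use of the session
        System.Threading.Thread.Sleep(20000);               // stand-in for the slow work
        context.Response.Write("Report for " + userName);
    }

    public bool IsReusable
    {
        get { return false; }
    }
}

For ASPX pages the equivalent is setting EnableSessionState="ReadOnly" in the @ Page directive, which the second link covers.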
I think you are missing the point that the session locks all requests, letting only one run at a time.
Read about that, and why, here:
Replacing ASP.Net's session entirely
Also: Web app blocked while processing another web app on sharing same session
The reason is that sessions in ASP.NET are not thread-safe. Therefore ASP.NET serializes access to requests from the same session.
If you have a multi-tab IE, your tabs share one session. The first request is executed right away and the other ones are queued. If you have different IE instances, each of them creates a new session, and therefore the requests are executed in parallel.
I have a servlet app deployed inside OC4J.
I am trying to invalidate the user session after 1 minute using:
session.setMaxInactiveInterval(1 * 60);
But what happens is that it takes over 1 minute (sometimes up to a minute and a half) before the session gets destroyed.
Is this an implementation issue, or what?
You seem to be checking the destroy by waiting until HttpSessionListener#sessionDestroyed() gets called, instead of actually sending an HTTP request to the server after exactly 1 minute.
On most servers, session destruction is managed by a background job which runs at intervals, which can be every minute or more depending on the server make/version, configuration, and possibly also load. This job checks all open sessions for expiry and sweeps the expired ones accordingly. So it is not true that the session destroy is called at the very second the session expires, as long as the client hasn't sent a request. The background job does not run every second; that would be too CPU intensive.
The session destroy will, however, be called immediately whenever the server receives a request with a session ID while the session is still present in the server's memory but has already expired.
So, you'd either have to accept it or to change your testing methodology.
Let's imagine there are 2 pages on the web site: quick and slow. Requests to the slow page take 1 minute to execute; requests to the quick page take 5 seconds.
Throughout my development career I thought that if the first request started is the slow one, it will make a (synchronous) call to the DB and wait for the answer, and if during this time a request to the quick page is made, that request will be processed while the system is waiting for the response from the DB.
But today I've found:
http://msdn.microsoft.com/en-us/library/system.web.httpapplication.aspx
One instance of the HttpApplication class is used to process many requests in its lifetime. However, it can process only one request at a time. Thus, member variables can be used to store per-request data.
Does it mean that my original thoughts are wrong?
Could you please clarify what they mean? I am pretty sure that things are as I expect...
The requests have to be processed in sequential order on the server side if both requests use the same session state with read/write access, because of ASP.NET session locking.
You can find more information here:
http://msdn.microsoft.com/en-us/library/ie/ms178581.aspx
Concurrent Requests and Session State
Access to ASP.NET session state is exclusive per session, which means that if two different users make concurrent requests, access to each separate session is granted concurrently. However, if two concurrent requests are made for the same session (by using the same SessionID value), the first request gets exclusive access to the session information. The second request executes only after the first request is finished. (The second session can also get access if the exclusive lock on the information is freed because the first request exceeds the lock time-out.) If the EnableSessionState value in the @ Page directive is set to ReadOnly, a request for the read-only session information does not result in an exclusive lock on the session data. However, read-only requests for session data might still have to wait for a lock set by a read-write request for session data to clear.
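To map this onto the quick/slow scenario from the question, here is a sketch (handler names invented): the slow page declares read/write session access and therefore holds the exclusive session lock for its full minute; a second read/write request from the same browser queues behind it, while a page that never touches session state is unaffected.

using System;
using System.Threading;
using System.Web;
using System.Web.SessionState;

// Sketch: the "slow" page. IRequiresSessionState means read/write session access,
// so this request holds the exclusive lock for its SessionID for the whole minute.
public class SlowPageHandler : IHttpHandler, IRequiresSessionState
{
    public void ProcessRequest(HttpContext context)
    {
        context.Session["SlowStarted"] = DateTime.UtcNow;
        Thread.Sleep(TimeSpan.FromMinutes(1));   // stand-in for the 1-minute workload
        context.Response.Write("slow done");
    }

    public bool IsReusable
    {
        get { return false; }
    }
}

// Sketch: the "quick" page, written so that it never touches session state
// (it implements neither IRequiresSessionState nor IReadOnlySessionState).
// It takes no session lock, so it still answers in about 5 seconds from the
// same browser while the slow page is running. If it also declared read/write
// session access, it would instead wait until the slow request finished.
public class QuickPageHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        Thread.Sleep(TimeSpan.FromSeconds(5));   // stand-in for the 5-second workload
        context.Response.Write("quick done");
    }

    public bool IsReusable
    {
        get { return false; }
    }
}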
Your original thoughts are right, and so is the documentation. The IIS worker process can spawn many threads, each with their own instance of the HttpApplication class.
ASP.NET hosts your web application in an AppDomain inside a worker process (w3wp.exe). A single worker process may even host AppDomains for several different web applications if they are assigned to the same app pool.
Each AppDomain that ASP.NET creates can host multiple HttpApplication instances, which serve requests and walk through the ASP.NET lifecycle. Each HttpApplication can (as you've said) respond to only one request at a time.
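As an illustration of that last point (a sketch; the timing field is invented): because each HttpApplication instance processes only one request at a time, an instance field in Global.asax is effectively per-request state, even though many instances may be running in parallel on different threads.

using System;
using System.Diagnostics;
using System.Web;

// Sketch: Global.asax code-behind. ASP.NET pools HttpApplication instances and
// hands each incoming request to a free instance, so this instance field is only
// ever touched by one request at a time and can safely carry per-request data.
public class Global : HttpApplication
{
    private Stopwatch _requestTimer;   // per-request, because the instance serves one request at a time

    protected void Application_BeginRequest(object sender, EventArgs e)
    {
        _requestTimer = Stopwatch.StartNew();
    }

    protected void Application_EndRequest(object sender, EventArgs e)
    {
        if (_requestTimer != null)
        {
            _requestTimer.Stop();
            Debug.WriteLine(Request.Path + " handled in " + _requestTimer.ElapsedMilliseconds + " ms");
        }
    }
}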
Our ASP.NET 2.0 application has a very long (synchronous) process before sending the response back to the client. I observed that a second request, exactly the same as the initial one, was sent after the client (IE8) had been waiting for the response for a long period of time while our application was still processing the first request.
I use the page session with a predefined key to store a flag when the initial request arrives; the long process then starts while the client IE waits for the response, so if a second request comes in, our application checks the session value. After our application sets the session flag and starts processing, I use Fiddler's "Abort Session" to abort the initial request. Right away the second request (same as the first one) is sent automatically, but the session value set earlier no longer seems to exist.
Any thoughts?
When the second request comes in during your ongoing process, isn't it overwriting your current request's value, since you are only storing one item? This assumes both requests are coming in under the same session.
Maybe consider storing a list of items so that you can add the second item to your list of flags and then find any previous items and delete them.
Maybe kill the request currently in the session before starting the second request's session?
I don't really understand your problem / solution all that well but hopefully that helps.
Edit based on your comment:
If it no longer exists, it's probably because your session timed out and wiped the values, so the second request wouldn't be able to access it. Is the second connection coming in under the exact same session? Compare the session IDs in both cases. Also check your timeout.
You could also store this information in your application Cache with a really long expiry. Have a dictionary with the key being the session ID, or even the user if you only want one process per user, and then store your value. This way, when the second request comes in from the same user, you will be able to find it regardless of the session ID. Just make sure you clear it once your process is complete.
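A sketch of that suggestion (class and key names invented): keep the "a long process is already running" flag in HttpRuntime.Cache keyed by user (or by session ID), so a retried request that arrives under a brand-new session can still find it.

using System;
using System.Web;
using System.Web.Caching;

// Sketch: track an in-progress flag per user in the application-level cache
// instead of in session state, so it survives the session being recreated.
public static class LongProcessTracker
{
    private static string KeyFor(string user)
    {
        return "long-process:" + user;
    }

    public static bool TryStart(string user)
    {
        string key = KeyFor(user);
        if (HttpRuntime.Cache[key] != null)
            return false;                       // a process is already running for this user

        // Long absolute expiry as a safety net in case Complete() is never called.
        // Note: this check-then-insert is not race-free; it is only a sketch.
        HttpRuntime.Cache.Insert(key, DateTime.UtcNow, null,
            DateTime.UtcNow.AddHours(4), Cache.NoSlidingExpiration);
        return true;
    }

    public static void Complete(string user)
    {
        HttpRuntime.Cache.Remove(KeyFor(user)); // clear the flag once the process finishes
    }
}

TryStart would be called when the initial request arrives and Complete at the end of the long process (including on failure), mirroring the set-and-clear flow the answer describes.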