ASP.NET WebForms Communication Failure in Production

I am experiencing a problem in production with two specific WebForms pages that post back to the server to run calculations.
There is a <button runat="server" onserverclick="doMath">Calc</button>.
All of the data for the calculations is on the web page and there is no database communication, but the code is written old-school: everything happens server-side via postbacks, with no AJAX panels, etc.
When the button is pressed in production, some users get a "page cannot be displayed" error after 30-60 seconds. The application log on the server has a matching entry stating that an object reference was null. After testing and further testing, it is clear that the data behind the null reference is being sent to the web server, but it is not arriving in its entirety, and no response makes it back to the user even though an error is logged.
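For context, a sketch of the kind of handler in play (my reconstruction: doMath is the handler named in the markup above, while the control names and the calculation itself are made up). A truncated postback would surface exactly as a null reference in code like this unless the posted values are guarded:

```csharp
// Code-behind sketch. "txtA", "txtB", and "lblResult" are hypothetical
// control names; the real pages' calculation logic is not shown here.
protected void doMath(object sender, EventArgs e)
{
    decimal a, b;
    // TryParse handles null safely, so a truncated postback that loses
    // a field fails gracefully instead of throwing a
    // NullReferenceException deeper in the calculation.
    bool ok = decimal.TryParse(Request.Form["txtA"], out a)
           && decimal.TryParse(Request.Form["txtB"], out b);

    if (!ok)
    {
        lblResult.Text = "Incomplete request - please try again.";
        return;
    }

    lblResult.Text = (a * b).ToString();
}
```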
The code itself does not seem to be the culprit; if it were, I think I would see this happening on more than two pages. And these two pages are very similar and related to each other. However, because the problem is intermittent and only happens to some users, I also think it is a network communication problem. For example:
From home, I can use the Calc button over and over and only get the error once in 1,000 clicks.
From the office, I get the error on almost every click.
The problem never occurs in dev or QA. I am hoping for help with a method to isolate the source of the problem, or maybe someone has seen this before.
EventValidation is off.
Path pings show that some nodes are dropping packets, but they are not "our" servers.
After cracking open Wireshark, I have discovered some additional information: when the "timeout" takes place, a handshake is failing (a bad handshake?).
Unfortunately, I am not a network guru. Even if this is the problem, I am still concerned, since it only seems to happen with two specific pages.

Related

ASP.NET: Session state has created a session id, but cannot save it because the response was already flushed by the application

An old ASP.NET Web Forms application, developed in Visual Studio 2010, suddenly does not run anymore, and this message appears in the log file:
Session state has created a session id, but cannot save it because the response was already flushed by the application.
No new deployment has been made, and no code modifications have taken place.
So far I have not found any solution to this.
What should I check?
Note that the source code is no longer available, so it would be very difficult to change the code and proceed with a new deployment.
Thanks in advance.
Luis
This would suggest that someone might be hitting the site and jumping directly to some URL (and thus code) that, say, does a response redirect to another page or some such.
Remember, when code-behind runs and, say, redirects to another page, in most cases the running code for the current page is terminated, and that is normal behavior.
However, the idea that you are going to debug a web site when you don't have the code to debug? Gee, I don't see how that's going to work at all. As noted, if this just started, then it sounds like incoming requests are hitting pages that don't expect to be hit "first" - pages that expect to be called ONLY from other pages in the site, after some session() and important values have been set up BEFORE such pages are hit.
It is also not clear whether the site is using SQL-based sessions or just in-memory sessions. In-memory can be (and is) faster, but it is also not particularly reliable. Now, if you deployed to a new web server or new hosting, then session errors can often start to appear, and this is due to the MASSIVE difference between cloud-based hosting and older hosting solutions that run on a single server.
Cloud computing is real utility computing, and thus when you host a web site on such systems, in-memory session() cannot be used anymore, since multiple servers can and will be used to "dish out" web pages. Since more than one server might be used, in-memory session() obviously can't work: a few pages might be served by one server and a few more by another, and in-memory sessions are limited to ONE server, since multiple servers don't and can't transfer their memory to other servers.
So, this suggests that you want to be sure that SQL Server-based sessions are being used here. For any kind of server farm, or any kind of system that load-balances between more than one server, you HAVE to use SQL Server-based sessions, since in-memory can't work in that kind of environment.
The error could also be due to excessive server load - the session database is often "locked" for a short period of time, and can thus become a bottleneck. So for years you might not see an issue, but then as load and use of the web site increase, this can become noticeable where in the past it was not. I suppose the database used for storing sessions could be checked or looked at, since, as you note, you don't have the ability to test and debug the code. I would REALLY work towards solving this lack of source code for the web site, since without it you have no real means to manage, maintain, and fix issues for that site.
But abrupt terminations of web pages? As noted, this could be an error triggered in code, in which case the page never finished what it was supposed to do. And, as noted, a page that expects some session() values but does not have them, as explained above, could be the problem. It's not clear whether your errors also show which URL this was occurring for.
While nothing seems to have changed - something obviously did.
Ultimately, you need to get that source code, or deal with the people and vendor that supply the code for that site. If you don't have a vendor and you don't have source code, you are pretty much attempting to work on a car whose hood you can't even open to check what's going on underneath.
So, one suggestion here? Someone is hitting a page that expected some value(s) in session to exist. Often the simple solution is to shove ANY dummy value into session so that a session REALLY does get created; then, when the page attempts to save the session, there is one to save!
In other words, this error often occurs when a save of the session is attempted, but no session exists. For such pages, as noted, a tiny code change like Session["zoozoo"] = "my useless text"; will fix this error. So, it sounds like session is being lost.
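For reference, a minimal sketch of that workaround in C# (hooking Session_Start in Global.asax is just one convenient, central place for it; the "zoozoo" key is the throwaway value from above):

```csharp
// Global.asax.cs
// Touch the session as soon as it is created, so that every visitor -
// even one landing on a page "out of order" - has a real session that
// can be saved, avoiding the "response was already flushed" error.
protected void Session_Start(object sender, EventArgs e)
{
    Session["zoozoo"] = "my useless text"; // any dummy value will do
}
```

Of course, as noted below, this assumes you can change and redeploy the code at all.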
As noted, an error on a web page can also trigger an app-pool restart. If the app-pool restarts, then session is lost (for in-memory sessions). Now, with session lost, any page that terminates early AND has also used session() will trigger this error.
So, this sounds like the app-pool is being restarted and session is being lost. (You can google the many reasons why an app-pool restarts.) Critical to this issue, however: are you using SQL-based sessions, or in-memory (server) sessions? It sounds like some code is triggering an error; with an error triggered, the app-pool restarts; and with the app-pool restarted, the in-memory session is blown away. And now, without ANY session at all, the attempt to save the session triggers the exact error message you see. (This is also why shoving a dummy value into the session can fix some pages - you can't save a "nothing" session, and if you try, you get that exact error message.)
But, as noted, you can't make these simple changes to the code anyway, right?
But first, on this issue: are you using memory-based sessions or not? That feature can be set up and configured in IIS without changes to the code base. So, one quick fix might be to turn on SQL Server-based sessions. It will cost some web site performance (around 10%), but the increased reliability is more than worth the performance hit.
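A sketch of what that change looks like in web.config (the connection string is a placeholder, and the ASPState database must first be provisioned with the aspnet_regsql.exe tool):

```xml
<!-- web.config: switch session state from in-memory to SQL Server. -->
<system.web>
  <sessionState mode="SQLServer"
                sqlConnectionString="Data Source=YOUR_SQL_SERVER;Integrated Security=SSPI"
                timeout="20" />
</system.web>
```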
Another area to look at: are AJAX calls being made to a page, again without any previous session having been created? Once again, we are down to a change in end-user behavior - possibly people hitting a page before having logged in or done other things - and again one would see this error crop up.

Web App in Flaky Internet connection

I have a PHP/MySQL/JS-jQuery based web site that records finish times for racers and sends each time back to the server. The server inserts the finish time in the DB, calculates the finish place based on a handicapping formula, stores that, and sends the finish place back to the web page, where it is updated on the screen.
It uses jQuery AJAX calls, so the page never gets reloaded.
Everything works fine if the data connection is good.
If the data connection is bad, my first version of this page would put up a message saying that the connection was bad.
Now I am trying to make it a bit smarter, so I have started with the HTML5 feature that tells the browser whether it is on- or offline (I realize this may not be the best way, but it works for concept testing).
When a new finish time is recorded (or updated) while we are offline, the JS just adds a notSent class to the finish time's tag. The finish place, and all of the finish places that would normally come from the server, are greyed out, indicating that the data is no longer valid (until the page can communicate with the server).
When the browser finds itself back online, a simple jQuery each loop over every notSent element re-sends the AJAX requests, and if they all complete, it processes the returned finish-place information and displays it as up to date.
It also disables all external links on the page while the browser is offline. This keeps the user from losing the data-entry page by accidentally clicking a link that would give them a page-not-found error.
So my last issue is the browser's reload and close buttons: if the user clicks these while offline, they lose the data-entry screen and are out of luck until the connection comes back.
Can I disable these functions as well? A quick Stack Overflow search indicates it can be done, but most answers give the old "you really shouldn't, and if you think you need to, you should rethink your design" warning.
So, rethinking my design, I started learning about:
HTML5 local storage (decided I don't need it, since my data is already stored in an input box);
the app-cache manifest, for controlling the caching of the page so that if it is reloaded offline, the browser would serve the cached version. After much reading I came to the conclusion that this could work for a static page but not for mine, where the data is updated all the time. Then I found that most browsers are deprecating it anyway;
Service Workers, which seem to be the likely future for controlling offline caching, but not all browsers support them, they are fairly cumbersome to learn, and they are still very new.
Now I am stuck, leaning towards preventing browser reloads and deferring learning Service Workers until there is more support and better examples for dynamic-content pages like mine.
Bottom line: am I missing something here? Is there an easy solution?
I think the best option is to use PouchDB to sync between the client and server, and to use Background Sync to wake a Service Worker when you regain connectivity. If Service Workers are not available in the browser, it can sync the next time the user opens the browser.
There is a similar example of deferred requests explained in the Service Worker Cookbook.

Application Insights removing telemetry after it has been logged

I've had Application Insights set up on my ASP.NET project for a couple of months with no issues. I use Custom Events for logging certain things.
Recently, I tried to add a Custom Event after a user has authenticated, in order to track login behavior. My custom event DOES get logged during an Application Insights debug session. I know this because I can see it in the telemetry when paused on a breakpoint just after the event.
However, when I continue running the application, my custom event no longer shows up in the telemetry. It just disappears.
I cannot understand what the issue is. Does anyone familiar have any (application) insights? I couldn't help myself ;)
There are some things to check:
Are you logging to one resource (iKey) and searching on another? A lot of people send data to one resource in dev/debug and a different resource in release/prod environments, so make sure you're sending to the place you expect and searching the place you expect.
Is the data actually going out successfully? You may need to use Fiddler or some other tool to watch your outbound HTTP for calls to dc.services.visualstudio.com. There could be something wrong with the data you're sending, or you might be getting capped or throttled by the service. If that's the case, the outbound requests will have responses other than 200 and will generally tell you the reason any rejected items weren't accepted.
If the data is being sent successfully and going where you expect, there might just be a delay in backend processing. You can always check aka.ms/aistatus to see if there are any current issues with the service.
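One related thing worth ruling out (my addition, not covered by the three checks above): the SDK batches telemetry in memory and sends it periodically, so an event tracked shortly before the app stops can be lost in the buffer. A minimal sketch of forcing it out:

```csharp
using System.Collections.Generic;
using System.Threading;
using Microsoft.ApplicationInsights;

public static class LoginTelemetry
{
    // Hypothetical helper around the login event from the question.
    public static void TrackLogin(string userName)
    {
        var client = new TelemetryClient();
        client.TrackEvent("UserLoggedIn",
            new Dictionary<string, string> { ["user"] = userName });

        // Push the in-memory buffer out now; the short sleep gives the
        // HTTP transmission time to finish in short-lived processes.
        client.Flush();
        Thread.Sleep(1000);
    }
}
```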
I am confused, however, by what you mean when you say
However, when I continue running the application, my custom event no longer shows up the telemetry. It just disappears.
What do you mean, "it just disappears"? If you see it in the output window, then the SDK saw it, and it will get sent, barring any of the above 3 items. Where is it "disappearing" from? Unless you clear the output window, it's never gone from there. If you're talking about the Visual Studio search tools that show data sent by the AI SDK during debugging, that tool currently caps out at the most recent 250 items from the debug session.

.NET Session variable is null - for all users

Problem description
I have an ASP.NET app in which users have different rights and are logged in through Facebook. The app includes (among other things) filling out some forms. Some users have access to forms that others don't. The forms can sometimes require some searching in books and/or on the internet before they can be submitted.
As such, we were having problems with what seemed to be session timeouts, where users would be met with "Not authorized to see this page/form" after doing research somewhere else.
Attempted solutions
I've created a log function that logs the state of a handful of variables at strategic points in the application. I've pinpointed the problem to the fact that the Session variable "UserRole" is null when the problem occurs.
Relogging
The obvious solution is: "Have you tried relogging?" - which should reset the session and allow the user back to the form they want. On logout, I use
Session.Clear();
Session.RemoveAll();
and I create a new session with relevant variables (including UserRole) on login. This doesn't help, though.
Keeping session alive
One way to do it is just to increase the standard 20-minute session length to an arbitrary, higher number (say, 2 hours). Although that could be viable during beta (there are only around 5 users right now), it is not a viable solution in the long haul, as the server would have to keep the Session objects of many users alive for longer, significantly increasing server demands.
Instead, I created a 'dummy' .ashx handler, "RefreshSession.ashx", that receives a POST request and returns a 200 status code. I then created a jQuery function in the shared part of the app (which all the pages use) that calls this handler every 10 minutes in order to refresh the session as long as the tab is open in the browser. I've checked the network traffic, and it works as intended, calling the handler even if the window is minimized or the user is viewing another tab. This did not solve the problem either.
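For reference, a minimal sketch of such a keep-alive handler (my reconstruction, since the original code isn't shown; the session key is made up). One classic pitfall: the handler must implement IRequiresSessionState, or ASP.NET never touches the session for that request and the sliding timeout is not reset:

```csharp
// RefreshSession.ashx.cs
using System;
using System.Web;
using System.Web.SessionState;

// Without IRequiresSessionState, this handler runs outside session
// state and the keep-alive POSTs would not reset the session timer.
public class RefreshSession : IHttpHandler, IRequiresSessionState
{
    public void ProcessRequest(HttpContext context)
    {
        // Touching any session value marks the session as used for
        // this request, which resets its expiry.
        context.Session["LastKeepAlive"] = DateTime.UtcNow;
        context.Response.StatusCode = 200;
    }

    public bool IsReusable
    {
        get { return true; }
    }
}
```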
A caveat
When one of the users encounters the problem, they call me or my programming partner up. Of course, we go and see if we get the same issue. We all have the same (admin) rights. The 'funny' thing is that we see the exact same error on the same subpage - even if we haven't had any contact with the application for days.
The problem will 'fix itself' (i.e. let users with the proper role back onto the subpage) after a while, but not even republishing the app to the server will reset it manually.
Therefore, it does not seem to be a simple session error, as one would suppose from the "UserRole" session variable being null after 15-20 minutes of inactivity. It seems to be saved somewhere internally in the server's state.
My problem is that I now have no idea where to look or how to proceed. I was hoping that someone here might have an idea for a solution, or at least be able to point me in the right direction? :-)
Thank you all for your time, it is much appreciated.
Based on MaCron's comment on the question, we decided to keep the information in the user's cookies instead of in session variables. Everything seemed to point to us having exactly that issue, and with deadlines being deadlines and me not being able to figure out how to disable the synchronization of worker processes, this seemed a feasible and comparatively easy fix.
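A sketch of that kind of cookie-based fix (my reconstruction under stated assumptions: the UserRole name comes from the question, while the helper, the cookie name, and the use of FormsAuthentication for encryption are mine):

```csharp
using System;
using System.Web;
using System.Web.Security;

public static class UserRoleCookie
{
    // Keep the role in an encrypted, HTTP-only cookie so it survives
    // app-pool recycles and multiple worker processes, unlike an
    // in-memory session variable.
    public static void Write(HttpResponse response, string user, string role)
    {
        var ticket = new FormsAuthenticationTicket(
            1, user, DateTime.Now, DateTime.Now.AddHours(2),
            false, role); // the role travels in the ticket's UserData slot
        var cookie = new HttpCookie("UserRole", FormsAuthentication.Encrypt(ticket));
        cookie.HttpOnly = true; // keep it away from client-side script
        response.Cookies.Add(cookie);
    }

    public static string Read(HttpRequest request)
    {
        var cookie = request.Cookies["UserRole"];
        if (cookie == null) return null;
        var ticket = FormsAuthentication.Decrypt(cookie.Value);
        return (ticket == null || ticket.Expired) ? null : ticket.UserData;
    }
}
```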

Multiple postbacks of ASPX page

I have an ASPX page with a simple form for sending emails to pre-defined lists of users. On the longer lists the page usually times out before the emails finish sending, but this has never been an issue.
Today something weird happened: each user got four emails. In the log I could see three new threads crank up, one at a time, and start over, sending from the beginning of the list.
Any ideas? I absolutely know I didn't intentionally refresh the web page myself, and certainly not three times. But could the browser (IE8) have done it? Would it post again to try to re-establish a connection when it timed out? Or when I switched back to the browser window from another app? I have never seen behavior like this before.
The first question is whether there is any reason to run a long task synchronously, i.e. tie up a thread that should be serving web requests on something that could be done in the background, while the browser sits and waits for a response it is probably not going to get. I'd look into running this asynchronously unless there's a very deliberate reason not to.
Secondly, have you looked into creating some kind of locking mechanism so that the process can't be started more than once? I have processes where I add a token to the application cache (and remove it when I'm done), so that if the token exists the process won't run again (the call to the async task isn't made), and that does the job. That way it doesn't matter how many clients call your code: you prevent things from happening more often than they should. A sketch of the idea follows.
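A minimal sketch of that token pattern (my reconstruction, not the answerer's actual code; the cache key is made up). Cache.Add, unlike Cache.Insert, returns the existing entry when the key is already present, which makes it usable as an atomic test-and-set:

```csharp
using System;
using System.Web;
using System.Web.Caching;

public static class BulkMailGuard
{
    private const string TokenKey = "BulkMailRunning"; // hypothetical key

    // Returns true only for the caller that acquired the token.
    public static bool TryStart()
    {
        object existing = HttpRuntime.Cache.Add(
            TokenKey, true, null,
            DateTime.Now.AddHours(1),      // safety net if Finish() never runs
            Cache.NoSlidingExpiration,
            CacheItemPriority.NotRemovable, null);
        return existing == null; // null means nothing was there before
    }

    public static void Finish()
    {
        HttpRuntime.Cache.Remove(TokenKey);
    }
}
```

Wrap the send in try/finally so Finish() always runs, even when the mail loop throws.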
