I have a number of Azure websites (10) with the same code. Inside Global.asax, in Application_Error, I have some simple logging code, the crucial line of which is:
System.Diagnostics.Trace.TraceError(err);
Where err is a string.
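For context, the handler looks roughly like this (a sketch only; the actual error-formatting code differs, but err is built from the caught exception):

// Global.asax.cs - a minimal sketch of the handler described above
protected void Application_Error(object sender, EventArgs e)
{
    // Grab the last unhandled exception for this request
    Exception ex = Server.GetLastError();
    string err = ex == null ? "Unknown error" : ex.ToString();

    // The crucial line: write to the application trace, which Azure
    // forwards to table storage when application logging is enabled
    System.Diagnostics.Trace.TraceError(err);
}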
I've set each website to log errors straight to table storage, but my problem is that only some of them actually work. The ones that work seem fine and I can generate new entries into the storage tables by simulating an error, but randomly some of the sites just refuse to log anything at all.
I've republished all 10 sites to make sure they are all running the most recent code. They are all using the same subscription, and all appear to have the same application logging setting on (each pointing to their own table, all of which exist).
I've also tried live streaming and the same ones that fail to log in the tables fail to stream anything either.
What am I doing wrong here?
An old ASP.NET Web Forms application, developed in Visual Studio 2010, suddenly stopped running, and this message appears in the log file:
Session state has created a session id, but cannot save it because the response was already flushed by the application.
No new deployment has been made, and no code modifications have taken place.
So far I haven't found any solution to this.
What should I check?
Note that the source code is no longer available, so it would be very difficult to change the code and do a new deployment.
Thanks in advance.
Luis
This suggests that someone might be hitting the site and jumping directly to some URL (and thus code) that, say, does a response redirect to another page or some such.
Remember, when code-behind runs and, say, redirects to another page, in most cases the running code for the current page is terminated, and that is normal behaviour.
However, the idea that you're going to debug code and debug a web site when you don't have the code to debug? Gee, I don't see how that's going to work at all. As noted, if this just started, then it sounds like incoming requests are hitting pages that don't expect to be hit "first" - pages that expect to be called ONLY from other pages in the site, after some session() and other important values have been set up BEFORE such pages are hit.
It's also not clear whether the site is using SQL-based sessions or just in-memory sessions. In-memory can be (and is) faster, but it's also not particularly reliable. Now, if you deployed to a new web server or new hosting, session errors can often start to appear, and this is due to the MASSIVE difference between cloud-based hosting and older hosting solutions that run on a single server.
Cloud computing is real utility computing, and when you host a web site on such systems, in-memory session() can no longer be used, since multiple servers can and will be used to dish out web pages. Since more than one server might be used, in-memory session() obviously can't work: a few web pages might be served by one server, and a few more by another. A session held in memory is limited to ONE server, since multiple servers don't and can't transfer their memory to other servers.
So this suggests that you want to be sure SQL Server based sessions are being used here - for any kind of server farm, or any system that load-balances across more than one server, you HAVE to use SQL Server based sessions, since in-memory sessions can't work in that kind of environment.
The error could also be due to excessive server load - the session database is often "locked" for short periods and can become a bottleneck. So for years you might not see an issue, but as load and use of the web site increase, this becomes noticeable where in the past it was not. I suppose the database used for storing sessions could be checked or looked at, since, as you note, you don't have the ability to test and debug the code. I would REALLY, but REALLY, work towards solving this lack of source code for the web site, since without it you have no real means to manage, maintain, and fix issues for that site.
But abrupt terminations of web pages? As noted, this could be an error triggered in code, so the page never finished what it was supposed to do. And, as noted, a page that expects some session() values but doesn't have them, as explained above, could be the problem. It's not clear whether your errors also show which URL this was occurring for.
While nothing seems to have changed - something obviously did.
Ultimately, you need to get that source code, or deal with the people/vendor that supplies the code for that site. If you don't have a vendor and you don't have source code, you're pretty much attempting to work on a car whose hood you can't even open to check what's going on underneath.
So, one suggestion here? Someone is hitting a page that expects some value(s) in session to exist. Often the simple solution is to shove ANY simple dummy value into session so that a session REALLY does get created, and then, when the page attempts to save the session(), there is one to save!
In other words, this error often occurs when an attempt is made to save the session but no session exists. For such pages, as noted, a tiny code change that simply does Session("zoozoo") = "my useless text" will fix this error. So it sounds like session is being lost.
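If the code were available, the change is tiny. A sketch in C# (the original app may well be VB, and "zoozoo" is just a throwaway key) that seeds the session as soon as it is created, e.g. in Global.asax:

// Global.asax.cs - put something into session the moment it starts,
// so there is always a real session to save when the page ends
protected void Session_Start(object sender, EventArgs e)
{
    Session["zoozoo"] = "my useless text";  // any dummy value will do
}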
As noted, an error on a web page can also trigger an app-pool restart. If the app pool restarts, then session is lost (for in-memory sessions). With session being lost, any page that terminates early AND has also used session() will trigger this error.
So this sounds like the app pool is being restarted and session is being lost. (You can google the many reasons why an app pool restarts.) However, critical to this issue: are you using SQL-based sessions or in-memory (server) sessions? It sounds like some code is triggering an error, and with an error triggered, the app pool restarts. With the app pool restarted, the in-memory session is blown away. And now, without ANY session at all, attempts to save the session trigger the exact error message you see. (This is why shoving a dummy value into the session can fix some pages - you can't save a "nothing" session, and if you try, you get that exact error message.)
But, as noted, you can't make these simple changes to the code anyway, right?
But first, on this issue: are you using memory-based sessions or not? That feature can be set up and configured in IIS, without changes to the code base. So one quick fix might be to turn on SQL Server based sessions (a sketch of the config change follows below). It will cost some web site performance (around 10%), but the increased reliability is more than worth the performance hit.
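For reference, switching to SQL Server sessions is a configuration change rather than a code change. A web.config sketch (the server name and security settings are placeholders, and the ASPState database must first be created with aspnet_regsql.exe):

<!-- web.config: switch session state from in-memory (InProc) to SQL Server -->
<system.web>
  <sessionState mode="SQLServer"
                sqlConnectionString="Data Source=MySqlServer;Integrated Security=SSPI;"
                cookieless="false"
                timeout="20" />
</system.web>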
Another area to look at? Are AJAX calls being made to a page, again without any previous session having been created? So, once again, we're down to a change in end-user behaviour - possibly people hitting a page first, before having logged in or done other things - and again you would see this error crop up.
I have two applications running independently of one another. At present, ApplicationSend is web-based and ApplicationReceive is desktop-based, but tomorrow it might be the opposite. There is no shared UI.
ApplicationSend keeps logging info into a database or text file. When ApplicationSend logs info, it needs to be displayed in ApplicationReceive at the same time, without any delay.
ApplicationReceive uses a REST API to display the data.
I have two questions:
1. How can this be achieved in C#? Please note the applications are completely unaware of each other. Also, one application can be Windows and the other can be web.
2. When the REST API has displayed 10 logs, should it next time query for all the logs or only the new ones? Ideally it should be only the new logs, but how would one get just the new logs? (See the sketch below.)
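On question 2, one common pattern (a sketch only; the /api/logs endpoint, the afterId parameter, and the LogEntry shape are all hypothetical) is for the receiver to remember the highest log id it has seen and ask the REST API only for rows after it:

// ApplicationReceive side - poll a hypothetical REST endpoint for new logs only.
using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;
using Newtonsoft.Json;   // assumption: JSON.NET is available

public class LogEntry
{
    public long Id { get; set; }        // monotonically increasing key
    public string Message { get; set; }
}

public class LogPoller
{
    private static readonly HttpClient Http = new HttpClient();
    private long _lastSeenId = 0;

    public async Task PollOnceAsync()
    {
        // Ask only for rows newer than the last one we displayed
        string url = $"http://applicationsend.example.com/api/logs?afterId={_lastSeenId}";
        string json = await Http.GetStringAsync(url);
        var entries = JsonConvert.DeserializeObject<List<LogEntry>>(json) ?? new List<LogEntry>();

        foreach (var entry in entries)
        {
            Console.WriteLine(entry.Message);               // display in ApplicationReceive
            if (entry.Id > _lastSeenId) _lastSeenId = entry.Id;
        }
    }
}

Polling keeps the two applications unaware of each other; for true no-delay push you would look at something like SignalR instead, but the same "last seen id" idea applies either way.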
Azure has been acting very strangely. All was fine until an update to my cloud service took more than the usual 20 minutes. I then decided to delete and redeploy, this time not preserving the previous certificate.
So I repackaged and redeployed; however, this time I was unable to connect to the web app via the usual http://youraddrhere.cloudapp.net. Every time I try, Chrome disappointingly tells me ERR_CONNECTION_TIMED_OUT. When I remote into the cloud service and open the app from within IIS (i.e. via its internal IP), it miraculously works! But that's no use if I can't expose my cloud service as a public site...
A quick check through my errors in the Event Viewer shows the following:
"A fatal error occurred when attempting to access the SSL server credential private key. The error code returned from the cryptographic module is 0x8009030d. The internal error state is 10001."
I have tried deleting, creating, uploading, deleting, recreating, and re-uploading numerous .cer and .pfx files, but to no avail.
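One check worth running on the instance itself: confirm that the certificate installed into the machine store actually has its private key, since 0x8009030d generally means the key is missing or inaccessible. A C# sketch that lists every cert in LocalMachine\My and whether a private key is present:

// Quick check: do the installed certificates have an accessible private key?
using System;
using System.Security.Cryptography.X509Certificates;

class CertCheck
{
    static void Main()
    {
        var store = new X509Store(StoreName.My, StoreLocation.LocalMachine);
        store.Open(OpenFlags.ReadOnly);

        foreach (X509Certificate2 cert in store.Certificates)
        {
            // HasPrivateKey is false when only the .cer (public part) was deployed
            Console.WriteLine("{0}  {1}  HasPrivateKey={2}",
                cert.Thumbprint, cert.Subject, cert.HasPrivateKey);
        }

        store.Close();
    }
}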
What am I doing wrong this time round?
Try to delete the deployment and re-publish your cloud service.
Thereafter, a friend and colleague of mine RDPed into the cloud service and tried all sorts of methods to diagnose the issue, but found nothing. There were no glaring issues in the Event Viewer, no endpoint binding conflicts in IIS, nothing wrong with the memory management. Overall, no problem at all.
His suggestion was then to recreate the Azure deployment project (.ccproj) from scratch, add in all the relevant roles, and attempt to republish. Initially I was skeptical, but the deployment proved successful, and as of now I am able to navigate to my cloud service both from within and outside.
I only attempted this because I had utterly no clue what the issue was. From past experience, though, the Event Viewer will prove to be a great starting point for any of your headaches.
We recently migrated our application (IIS server + DB server) to AWS and also modified the network architecture a little. The entry point of the system is an Astaro firewall (we use the AWS AMI), which also hosts the SSL certificate for the web server. Everything related to the firewall has been done by a vendor, and we only have some read-only privileges.
We are getting 403 errors in a few situations, but I will explain one, as they may all be related.
We have a form which queries the database and returns a report in HTML format (the report also has some checkboxes to do updates). The first time the form is submitted, we always get the report back. If we post the form again, updated with new data, it fails, returning error 403. We noted that it doesn't fail when the first result returned a very low number of rows (or none).
Looking at the details of the POSTs in Developer Tools, the only apparent difference between a working request and a 403 reply is the size of the data posted. The second POST is always bigger because it contains the data of the first report (the page also has the option to checkbox the rows).
Also, looking at the IIS logs, we don't see any trace of the POST that fails. Nothing at all.
This problem happens only in production. In the dev environment it all works flawlessly. The only difference is that production has the firewall/SSL, while development is all open. This is why we think it may be related to SSL.
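One way to check whether it is the firewall rejecting large bodies rather than IIS (a sketch only; the URL and form field name are placeholders) is to post progressively larger payloads from outside the firewall and see where the 403 starts:

// Probe: find the request-body size at which the firewall starts returning 403.
using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;

class PostSizeProbe
{
    static async Task Main()
    {
        using (var http = new HttpClient())
        {
            // Try payloads from 1 KB up to 1 MB, doubling each time
            for (int kb = 1; kb <= 1024; kb *= 2)
            {
                var form = new FormUrlEncodedContent(new Dictionary<string, string>
                {
                    { "report", new string('x', kb * 1024) }   // dummy field, dummy data
                });

                var response = await http.PostAsync("https://your-production-site.example.com/report", form);
                Console.WriteLine("{0,6} KB -> {1}", kb, (int)response.StatusCode);
            }
        }
    }
}

If the status flips to 403 at a consistent size in production while the same probe against the open dev environment never does, that points at a request-size or WAF rule on the Astaro box rather than at IIS.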
The vendor is not the most helpful, so we are looking for help to pinpoint the issue and trying to take the situation into our own hands.
Any input appreciated.
Here's the scenario...
We have an internal website that is running the latest version of the ODAC (Oracle Client). It opens database connections, runs a stored procedure or packaged method, then disconnects. Connection pooling is turned on, and we are currently under version 11g in both our development and test environments, but under 10gR2 in our production environment. This happens on Production.
A few days ago, a process began firing off an ORA-2020 error. The process is called from a webpage on our internal website. The user simply sets a date, hits a button, and a job is started on another system that is separate from the website. The call itself, however, uses a database link to run a function.
We've scoured the SQL and found that it only uses that one database link. And since these links are on a per-session basis and the user isn't exceeding the default limit of 4, how is it possible that we are getting an ORA-2020 error?
We have run a number of tests to try to push past the default limit of 4. ODAC, from what I recall, runs a commit after each connection, and I can't seem to produce any errors by running 4 DB links and then a piece of SQL with 1 DB link directly after. The only way I can bring up this error is to run a query with 4 DB links, then a function or piece of dynamic SQL with a database link inside it. We don't have that pattern here, and this issue is sporadic - it isn't ALWAYS happening.
Questions
Is it possible that connection pooling is allowing User B to use User A's connection after the initial process was run, thus adding to the open links number if User B runs a SQL statement with more database links?
Is this a scenario where we should up our limit past 4? What are the disadvantages of increasing the number?
Do I need to explicitly close open database links before disconnecting from the database? Oracle documentation seems to suggest it should automatically happen, but "on occasion"... doesn't.
Firstly, the simple solution: I'd double-check that the open links limit in the production database is actually 4.
select *
from v$system_parameter
where name = 'OPEN_LINKS'
Assuming you're not going to get off that lightly:
Is it possible that connection pooling is allowing User B to use User
A's connection after the initial process was run, thus adding to the
open links number if User B runs a SQL statement with more database
links?
You say that you explicitly close the session, which, according to the documentation, should mean that all links associated with that session are closed. Other than that I confess complete ignorance on this point.
Is this a scenario where we should up our limit past 4? What are the
disadvantages of increasing the number?
There aren't any disadvantages that I can think of. Tom Kyte suggests, albeit a long time ago, that each open database link uses about 500k of PGA memory. If you don't have much to spare then this will obviously cause a problem, but it should be more than fine for most situations.
There are, however, unintended consequences: imagine that you up this number to 100. Somebody codes something that continually opens links and draws a lot of data through all of them (select * from my_massive_table or similar). Instead of 4 links doing this you have 100, which attempts to transfer hundreds of gigabytes simultaneously. Your network dies under the strain...
There's probably more but you get the picture.
Do I need to explicitly close open database links before disconnecting
from the database? Oracle documentation seems to suggest it should
automatically happen, but "on occasion"... doesn't.
As you've noted, the best answer is "probably not", which isn't much help. You don't mention exactly how you're terminating the session, but if you're killing it rather than closing it gracefully, then definitely close the links yourself.
Using a database link spawns a child process on the remote server. Because your server is no longer in absolute charge of this process there's a myriad of things that could cause it to become orphaned or otherwise not close on termination of the parent process. By no means does this happen the whole time but it can and does.
I would do two things.
First: in your process, if an exception is encountered, e-mail the results of the following query to yourself.
select *
from v$dblink
At a minimum, you will at least know which database links are open in the session, which gives you some way of tracing them.
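A sketch of that first suggestion using ODP.NET (the managed driver, the SMTP server, and the addresses are assumptions; adapt to whatever data access the process already uses):

// On error: capture the session's open database links and e-mail them.
using System.Net.Mail;
using System.Text;
using Oracle.ManagedDataAccess.Client;   // assumption: managed ODP.NET driver

public static class DbLinkDiagnostics
{
    public static void ReportOpenLinks(OracleConnection conn)
    {
        var body = new StringBuilder("Open database links in this session:\n");

        // Requires SELECT privilege on v$dblink for the connecting user
        using (var cmd = new OracleCommand("select db_link from v$dblink", conn))
        using (var reader = cmd.ExecuteReader())
        {
            while (reader.Read())
            {
                body.AppendLine(reader.GetString(0));
            }
        }

        using (var smtp = new SmtpClient("smtp.internal.example.com"))
        {
            smtp.Send(new MailMessage("app@example.com", "me@example.com",
                "ORA-2020 diagnostics", body.ToString()));
        }
    }
}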
Second, follow the documentation's advice; specifically the following:
"You may have occasion to close the link manually. For example, close
links when:
The network connection established by a link is used infrequently in an application.
The user session must be terminated."
The first seems to fit your situation exactly. Unless your process is time-sensitive, which doesn't seem to be the case, what have you got to lose? The syntax is:
alter session close database link <linkname>
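And a sketch of that second suggestion from the application side (again ODP.NET-flavoured, as in the sketch above; the link name is whatever your code actually uses): close the link explicitly before the pooled connection is handed back.

// After the work that used the link is finished, close it explicitly
// so the pooled session is not left holding it open.
// Uses Oracle.ManagedDataAccess.Client, as in the previous sketch.
public static void CloseDbLink(OracleConnection conn, string linkName)
{
    // Note: commit or roll back first - closing a link that is still
    // in use by a transaction raises ORA-02080.
    using (var cmd = new OracleCommand(
        "alter session close database link " + linkName, conn))
    {
        cmd.ExecuteNonQuery();
    }
}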
We ended up increasing the link amount, but we never did find the root cause.