Strange Code First Migration behavior, or IIS issue? - ef-code-first

OK, so the background is this.
I have created a hardware controller for a fingerprint reader, plus a web application that lets users who have scanned in do things in that application. The web application was created using Code First, and the communication is done through SignalR 2.0.
The problem I am having is this: everything works beautifully for about a day. It used to be about half a day, but in IIS 7.0 I changed the idle timeout on the application pool to 200 minutes, which has extended how long it stays running. I am still getting an error at random times on the web server, though. What confuses me, and why I cannot seem to get a handle on what is happening, is that when it does go down:
A) I do not know why (I am leaning towards a timeout somewhere).
B) The error message is the same one you get when you change the database structure and forget to run Update-Database from the Package Manager Console, yet no one is changing the database.
C) If you leave it alone, it will fix itself, and I do not know why or how.
Has anyone seen behavior like this? If so, what caused it and how did you fix it? Or can anyone suggest how I could debug this?
Thanks so much for any help!
Kelso

If the exception is "The model backing the 'YourContext' context has changed since the database was created. Consider using Code First Migrations to update the database", you could try to catch that exception, log the return value of the following method, and compare it to what the same method returned in Application_Start (or whenever things last worked for you):
((System.Data.Entity.DbContext)(context)).InternalContext.QueryForModel(0)
The method gives you an XML representation of your database schema.
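Note that InternalContext is not part of the public API, so the call above is really only usable from a debugger or via reflection. As a rough alternative sketch using only the public EF 5/6 surface (YourContext and the log4net logger log are placeholders), you could record model compatibility at startup and again when the exception is caught:

protected void Application_Start()
{
    using (var context = new YourContext())
    {
        // False when the Code First model no longer matches the model the
        // database was created with - the condition behind that exception.
        bool compatible = context.Database.CompatibleWithModel(throwIfNoMetadata: false);
        log.InfoFormat("Model compatible with database: {0}", compatible);
    }
}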

Just to update on this issue: it turns out that the IIS server had been set to use only a single CPU and a single thread (a VMware setting), and that thread was getting hung and could not create a new thread to continue processing. Once we updated the CPUs and increased the thread count to 5, everything worked like a dream.

Related

ORA-22337: the type of accessed object has been evolved - in application

Setting: an ASP.NET application with an Oracle backend; we use User Defined Types (UDTs) and ODP.NET to communicate them between the front and back ends.
Problem: I had to alter the length of one of my UDT attributes. Once I did that and tested it in the backend, it worked fine, but when I run my site I keep getting the ORA-22337 error (in the subject line)!
You will not find much if you research this problem online; other than the unhelpful Oracle error documentation, you will not find anything useful. The Oracle documentation says to close and re-open the connection, but that does not apply to my scenario.
I already solved the problem by dropping and recreating the UDTs and NTs, but it is inefficient to have to do that every time I need to modify one of my core UDTs. Any ideas how to solve this without dropping and recreating everything?
If the error info says "Close and reopen the connection" as the solution, and you are using an OracleConnection, which has a connection pool behind it, then simply calling Close() is not good enough. The connection will just go back to the pool still open, and when you "reconnect" you will get it back again. You need to close all open connections and then call ClearPool() to make sure that all old connections in the pool are removed.
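A minimal sketch of that workaround, assuming ODP.NET (Oracle.DataAccess.Client) and a placeholder connection string connStr:

using (var conn = new OracleConnection(connStr))
{
    conn.Open();
    // ... the call that touches the evolved UDT and raises ORA-22337 ...
    conn.Close();
    // Close() alone just returns the connection to the pool still open;
    // ClearPool() discards all pooled connections with this connect string.
    OracleConnection.ClearPool(conn);
}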

How do I create a long-lived object within IIS/ASP.NET?

I am currently using SignalR in an ASP.NET MVC 4 application. I am consuming messages from RabbitMQ, and in essence I need to broadcast via SignalR whenever a new message hits the queue. The problem is that, under normal usage, there is no place where a long-lived object can live within IIS. I am getting about 1,000 messages a second, so the standard approach of pushing each message into IIS by making a request from an outside queue-monitoring service/app would pretty much kill my IIS.
I have a general idea of creating a singleton instance on a background thread. I am not sure what the best way to do this in IIS is; I would want the singleton to be recreated automatically if the application dies.
Are you thinking you would have something in a background thread that would check for messages every so often?
I have used Quartz.NET to create scheduled jobs in the past. They are fairly simple to set up: you can basically just say, execute this job every x interval starting at y time. Whatever solution you go with, you will probably need to add in error handling. I think your Quartz job would keep trying to execute every x interval even if it threw an exception, but you will need to make sure that you clear out whatever caused the exception in the first place; otherwise it will fail every time it runs, e.g. if there is something wrong with your message such that it errors every time you try to broadcast it.
Watch out for the IdleTimeout of your application: if IIS puts your web app to sleep, the singleton in your background worker/Quartz job goes to sleep too. If you set IdleTimeout to 0, your application will never sleep.
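For example, the idle timeout can be zeroed out from the command line (the application pool name is a placeholder):

%windir%\system32\inetsrv\appcmd set apppool "MyAppPool" /processModel.idleTimeout:00:00:00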
If you init your job/worker in Application_Start() in Global.asax.cs, your job will always start when your web app does (see the sketch at the end of this answer).
When you first deploy your app, update it, or restart it, you will need to make sure your app is actually running. I am not sure if there is a setting for this in IIS, but normally your application doesn't start up until a request is made to it. The same applies if your app crashes for some other reason: you need something to restart your web app. Good luck, and let me know if you find a solution to this.
Hope that helps!
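To make the Application_Start idea concrete, here is a rough sketch of a single long-lived consumer. The hub name (MessagesHub), queue name and connection settings are assumptions, not anything from the question:

using System.Text;
using Microsoft.AspNet.SignalR;
using RabbitMQ.Client;
using RabbitMQ.Client.Events;

public static class QueueListener
{
    private static IConnection _connection;
    private static IModel _channel;

    // Call once from Application_Start() in Global.asax.cs.
    public static void Start()
    {
        var factory = new ConnectionFactory { HostName = "localhost" };
        _connection = factory.CreateConnection();
        _channel = _connection.CreateModel();

        // One long-lived consumer inside the worker process; no per-message
        // HTTP requests into IIS.
        var hub = GlobalHost.ConnectionManager.GetHubContext<MessagesHub>();
        var consumer = new EventingBasicConsumer(_channel);
        consumer.Received += (sender, args) =>
        {
            var message = Encoding.UTF8.GetString(args.Body);
            hub.Clients.All.newMessage(message);   // broadcast to all clients
        };
        _channel.BasicConsume("events", true, consumer);   // autoAck = true
    }
}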

IIS Worker process hangs forever on first request

I am working on solving a problem that I have had for a couple of days now. Every time one of my sites is rebuilt or its AppPool is recycled, the first page load hangs forever (well, I've only waited up to 30 minutes). It only happens on one particular site out of ~10 sites. It is an ASP.NET site.
Here are the things I have observed:
In IIS Manager, under Worker Processes, I can see the request: Verb = GET, State = ExecuteRequestHandler, Module Name = ManagedPipelineHandler. Time Elapsed just keeps increasing, of course.
If I close down the browser in which I made the initial request and then open a new one to make another request, the page will load instantly.
In my code, Application_Start in my Global.asax file is not called on the first request; it is called on the second request.
The worker process is causing the memory usage on my machine to go through the roof.
I'm inexperienced in troubleshooting IIS, but hours and hours of searching has led me nowhere.
The only major code change we have made to the site recently is that we have started implementing logging using log4net. I have, though, tried to remove all log4net code, both from my web.config file and from Global.asax - still no luck.
Has anyone else experienced this and if so how did you solve it?
Any and all help will be much appreciated.
ADD:
If I place a .txt file in the root of the site and load that as the first thing after a build, it loads instantly.
However, the worker process still acts exactly as before and the memory usage still goes through the roof.
Final edit:
I feel like such an idiot. I can't explain why, but for some reason my breakpoints in Global.asax suddenly got hit and I was able to identify the problem. It was a call to the database via Entity Framework that was badly written: the filtering was done in memory, after all the rows from the table in question had been fetched, and to make it worse, the filtering was done inside a foreach loop. Anyway, now everything is back to normal and I'm happy.
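For anyone curious, the shape of the bug was roughly this (the names are made up; this is an illustration, not the actual code):

// Bad: ToList() materializes the whole table on every iteration,
// so the Where filter runs in memory.
foreach (var customer in customers)
{
    var orders = db.Orders.ToList().Where(o => o.CustomerId == customer.Id);
    // ...
}

// Better: keep the filter inside the query so EF translates it to SQL.
foreach (var customer in customers)
{
    var orders = db.Orders.Where(o => o.CustomerId == customer.Id).ToList();
    // ...
}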
Possibly stating the obvious, but have you got any silly code in the Application_Start of your Global.asax that could be causing this? It sounds like an infinite loop or something.
Just a quick note on what happened in my case:
Neither Process Monitor nor Failed Request Tracing was of any help; the website simply loaded (nearly) forever.
Finally, after waiting for several minutes, an error occurred stating that it "cannot locate the network path".
The reason was that I had entered a connection string pointing to a non-existent SQL Server instance, so it kept searching for the server until a timeout eventually occurred.
The solution was simply to specify the correct SQL Server in the connection string inside Web.config.
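For reference, the relevant piece of Web.config looks something like this (server and database names are placeholders):

<connectionStrings>
  <add name="DefaultConnection"
       connectionString="Data Source=CORRECTSERVER\SQL2012;Initial Catalog=MyDb;Integrated Security=True"
       providerName="System.Data.SqlClient" />
</connectionStrings>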

long-running HTTP process - how to put it in a separate process?

I know that similar questions have been asked all over the place, but I'm having trouble finding one that relates directly to what I'm after.
I have a website where a user uploads a data file; that file is then transformed and imported into SQL. The file can be up to 50 MB in size, and sometimes this process can take 30 minutes or even longer.
I realise I need to palm the actual work off to another process and poll that process from the web page. I'm wondering what the best approach would be, though. Being a web developer by trade, I'm finding all this Windows Service stuff a bit confusing, and I just want somewhere to start.
So:
Can I / should I be doing this with a Windows Service? If so, how?
Should I use WCF? If this runs under IIS, will I have problems with aspnet_wp.exe recycling and timing out my process?
clarifications
The data is imported into SQL; there is no file distribution taking place.
If there is a failure, it absolutely MUST be reported to the user. The web page will poll every, let's say, 5 seconds from the time the async task begins, to get the status of the import. Once it's finished, another response will tell the page to stop polling for status updates.
queries on final decision
OK, so as I thought, it seems that a Windows Service is the best idea. As to HOW to get it working, it seems the 'put the file there and wait for the service to pick it up' idea is the generally accepted way. Is there a way I can start a process run by the service without it having to constantly check a database table/folder? As I said earlier, I don't have any experience with Windows Services - if I put a public method in the service, can I call it somehow?
well ...
var thread = new Thread(() =>
{
    // your action
});
thread.Start();
but you will have problems with that:
what if the import to SQL fails? should there be any response to the client?
if it fails, how do you ensure the file gets picked up again on a later request?
what if the application shuts down? this newly created and started thread will be killed as well
...
it's not always a good idea to store everything in SQL (especially files...). if you want to make the file available to several servers, why not distribute it via FTP?
i believe that your whole concept is a bit messed up (sorry for assuming this), and it might be helpful if you elaborate and give us more information about your intentions!
edit:
Can I / should I be doing this with a Windows Service? If so, how?
you can :) i advise you to create a simple console program and convert it with srvany and sc. you can get a rough overview of how to do it here (note: insert blanks after the = signs... that's a silly pitfall, illustrated below)
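the pitfall in question: sc requires a space after each = sign, e.g. (service name and path are placeholders):

sc create MyImportService binPath= "C:\Tools\srvany.exe" start= auto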
the term should is relative, because you did not answer the most important question:
what if a record is persisted to the database, telling a consumer that the file test.img should be persisted, but your service hasn't captured it or hasn't transformed it yet?
so ... next on
Should I use WCF? If this runs under IIS, will I have problems with aspnet_wp.exe recycling and timing out my process?
you probably could create a WCF service which receives some binary data and then stores it to a database. this request could be async, yes. but what for?
once again:
please give us more insight into your workflow: what exactly are you trying to achieve? which "environmental conditions" do you have (e.g. app A polls the db and expects file records which are referenced in table x to be persisted)?
edit:
so you want to import a .csv file. well, that changes everything :)
but i won't advise you to use a WCF service (there could be a usage, e.g. a WCF service which has a method to insert a single row, but then your iteration through the file would be implemented in another app... not that good, though).
i would suggest the following:
at first, do everything in your webapp (as you've already done), but use some sort of bulk insert (sketched below) and do your transformation/logic on the database.
if you then run into some sort of bottleneck, i would suggest something like a minor job-service, e.g.:
the webapp uploads the file and inserts a row into a job-table. the job-service either polls the table continuously or gets informed via WCF by the webapp (hey, hey, finally some sort of usage for WCF in your scenario... :) ) and then does the import job, writing a finish-note to a table or setting the state of the job to finished...
but this may be a bit overkill :)
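a rough sketch of the bulk-insert idea (staging table and variable names are placeholders): stream the parsed .csv rows into a staging table with SqlBulkCopy, then do the transformation with set-based SQL on the server:

// using System.Data.SqlClient;
using (var bulk = new SqlBulkCopy(connectionString))
{
    bulk.DestinationTableName = "dbo.ImportStaging";
    bulk.BulkCopyTimeout = 0;          // disable the timeout for a long import
    bulk.WriteToServer(dataTable);     // dataTable holds the parsed .csv rows
}
// then run the set-based transformation (e.g. a stored procedure)
// against dbo.ImportStaging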
Please see if my comments below help you to resolve your issue:
• Can I / should I be doing this with a Windows Service? If so, how?
Yes, you can do this with a Windows Service, and I think that is the way you should be doing it. You can implement your own service to process your requests, or you can use the open source Job Processor code.
Basically the idea is:
1. You submit a request to process the CSV file in a database table, with a status of Not Started.
2. Your Windows Service picks up the requests from the database table that are Not Started and updates them to an In Progress status (a minimal sketch of this polling loop is shown after the list).
3. Once the processing completes successfully/unsuccessfully, your service updates the database table with a status of Completed/Failed.
4. Your ASP.NET page can poll the database table for the current status every 5 seconds or so.
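A minimal sketch of that polling loop (table, column and helper names are all placeholders, and ProcessJob is assumed to mark the row Completed or Failed):

// using System.Data.SqlClient; using System.Threading;
while (!stopRequested)
{
    using (var conn = new SqlConnection(connectionString))
    {
        conn.Open();
        // Atomically claim one Not Started job and mark it In Progress.
        var cmd = new SqlCommand(
            @"UPDATE TOP (1) dbo.ImportJobs
              SET Status = 'InProgress'
              OUTPUT inserted.Id, inserted.FilePath
              WHERE Status = 'NotStarted'", conn);
        using (var reader = cmd.ExecuteReader())
        {
            if (reader.Read())
            {
                ProcessJob(reader.GetInt32(0), reader.GetString(1));
            }
        }
    }
    Thread.Sleep(TimeSpan.FromSeconds(5));   // poll interval
}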
• Should I use WCF? If this runs under IIS, will I have problems with aspnet_wp.exe recycling and timing out my process?
You should not be using WCF for this purpose.

sporadic ASP.NET data error: "Cannot find table 0"

Having deployed a new build of an ASP.NET site in a production environment, I am logging dozens of data errors every second, almost always with the error "Cannot find table 0." We use DataSets and frequently refer to Tables[0], and while I understand the defensive coding practice of checking the DataSet for tables before accessing Tables[0], it has never been a problem in the past. A certain page will load fine one second, and then be missing one of its data-driven components the next. Just seeing if this rings a bell for anyone.
More detail: I used a different build server this time, and while I imagine the compiler settings are the same on both, I have a hard time believing there's a switch that makes 50% of my database calls come back with no tables. I also switched the project to VS 2008, but then reverted all of those changes when I switched back to VS 2005. I notice that the built assembly has a new MyLibrary.XmlSerializers.dll where it didn't before, but I also can't imagine that's causing all the trouble. (It also doesn't fall down on calls to MyLibrary, or at least no more than at any other time.)
Updated to add: I've discovered that the troublesome build is a "Release" build, whereas the working build was compiled as "Debug". Could that explain it?
Rolling back to the build before these changes fixed it. (Rebooting the SQL Server, the step we tried before that, did not.)
The trouble also seems to be load-based: this cruised through our integration and QA environments without a problem, and even our smoke-test environment - the one that points to production data - is fine under light load.
Does this have the distinguishing characteristics of anything you might have seen in the past?
Bumping this old question because we have encountered the same issue, and perhaps our solution will give more insight into what causes it.
Essentially, the problem occurs in a production environment under very heavy load, in a Windows service that uses multiple threads to process several jobs simultaneously (100 users use the same DB via an ASP.NET web app, and there are about 60 transactions/second on older hardware with SQL Server 2000).
No variables are shared; that is, connections are opened anew, a transaction is started, operations are executed, the transaction is committed, and the connection is closed.
Under heavy load sometimes one of the following exceptions occurs:
NullReferenceException: Object reference not set to an instance of an object.
at System.Data.SqlClient.SqlInternalConnectionTds.get_IsLockedForBulkCopy()
or
System.Data.SqlClient.SqlException:
The server failed to resume the transaction. Desc:3400000178
or
New request is not allowed to start because it should come with valid transaction descriptor
or
This SqlTransaction has completed; it is no longer usable
It seems that somehow a connection within the pool becomes corrupted and remains associated with previously used transactions. Furthermore, if such a connection is retrieved from the pool, then sqlAdapter.Fill(dataset) results in an empty dataset, causing "Cannot find table 0". Because our service would retry the operation (reading the job list) on failure, and would always get the same corrupt connection from the pool, it would fail with this error until restarted.
We removed the issue by calling SqlConnection.ClearPool(connection) on exception, to make sure the connection is discarded from the pool, and by restructuring the application so that fewer threads access the same resources simultaneously.
I have no clue what exactly caused this issue, so I am not sure we have really fixed it; maybe we have just made it so rare that it has not occurred again yet.
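A sketch of that workaround (adapter, dataSet and connection stand in for whatever your data layer uses):

try
{
    adapter.Fill(dataSet);   // comes back empty on the corrupt connection
}
catch (SqlException)
{
    // Evict every pooled connection with this connection string, so a
    // retry cannot be handed the same corrupt connection again.
    SqlConnection.ClearPool(connection);
    throw;
}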
I've fought precisely this error message before. The key is that an underlying data method is swallowing a timeout exception.
You're probably doing something like this:
var table = GetEmployeeDataSet().Tables[0];
GetEmployeeDataSet is swallowing an exception, probably a timeout exception, which is why the error only happens sporadically: it happens under load. You need to do the following to fix it:
Modify the underlying code so it doesn't swallow the exception, but rather lets it bubble up to the next level so you can identify it properly.
Identify the query (or queries) causing the problem, and then rewrite, reindex, denormalize, or throw hardware at the problem. See this for more info: System.Data.SqlClient.SqlException: Timeout expired
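The anti-pattern, sketched (GetEmployeeDataSet is the hypothetical method from the example above):

public DataSet GetEmployeeDataSet()
{
    var ds = new DataSet();
    try
    {
        adapter.Fill(ds);    // times out under load
    }
    catch (SqlException)
    {
        // Swallowed - this is the bug. Log and rethrow instead, so the
        // timeout surfaces rather than an empty DataSet.
    }
    return ds;               // no tables => "Cannot find table 0" upstream
}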
I've seen something similar. I believe our problem had to do with failed sessions being reused (once the session object failed, it went into a bad state and could not recover). We fixed it by increasing the memory for the session pool and increasing the frequency of the web application recycling.
It was also "caused" by a new version that at first blush did not seem to contain any change that could have such an effect. Eventually, however, it became clear that the new logic was opening and closing a lot more connections (maybe 20% more) than the old version did. This small change pushed past the limit of our prior configuration.
You might check the SQL Server logs for errors, or the web server event log. It sounds like your connection pool could be out of open connections, or your database could be running out of resources.
Which database calls changed between versions?
The error is obviously telling you that one of your database calls occasionally isn't returning any data; I can't think of any case where a code/assembly issue would cause that.
I have seen something like this when using NHibernate sessions in a non-thread-safe manner. That would explain why you only see it under load. I would need to see your code to guess at what isn't thread-safe, though.
