This question already has answers here:
Entity framework very slow to load for first time after every compilation
(7 answers)
Closed 4 years ago.
We're using Entity Framework 6.1.3 in a modular ASP.NET application. When an entity like a 'sales order' is created, this implies initializing the DbContexts of several modules. Typically the first 'sales order' created after a server reboot takes tens of seconds to complete, and this often results in a client-side time-out, which is awful in terms of user experience.
Is this inevitable due to EF nature, or is something that can be mitigated?
This "first request slowness" issue is typical EF behavior. There are several things you can do to remedy the issue, but nothing to completely make it go away. The initial creation of the contexts takes time due to the initial creation of the metadata that is used by EF. Here is an article that shows several things you can do to improve this startup time.
Another thing we have done in the past is to force the initial context creation to happen when our service endpoint app starts; this way the hit is taken by the service app and is not seen by your end users. You can also do something similar by executing some type of script that pings your service endpoint (if using SOA) after deploying, so that the first request is absorbed by something that isn't a true consumer.
https://www.fusonic.net/developers/2014/07/09/3-steps-for-fast-entity-framework-6-1-code-first-startup-performance/
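As a rough illustration of that warm-up idea, here is a minimal sketch that touches a context once at application start so the EF metadata is built before a real user arrives. SalesContext and SalesOrders are placeholder names for whatever your modules actually use:

// Global.asax.cs - warm up EF at startup so the first real request doesn't pay the cost.
// Requires: using System.Linq; using System.Threading.Tasks;
protected void Application_Start()
{
    // Fire-and-forget so application startup itself isn't blocked.
    Task.Run(() =>
    {
        using (var ctx = new SalesContext())        // placeholder DbContext
        {
            // Any trivial query forces EF to build and cache its model metadata.
            ctx.SalesOrders.FirstOrDefault();
        }
    });
}

Repeat the same trick for each module's context if several of them contribute to that first 'sales order'.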
Background: I have an ASP.NET Core App and have an API method that takes a file name of a blob that the frontend has uploaded to Azure Blob. It then needs to create a thumbnail version of the blob and return the name of the newly uploaded thumbnail Blob. Sometimes, for exactly the same file size it can take up to 40 seconds to complete. Mostly, it's around 400ms.
Below is the end to end from App Insights, I have a few things I don't understand:
1) The request duration is 37.5 s but yet the other operations add up to nowhere near this time
2) Why are there calls to master db? We are using EF6 with multiple contexts
3) The app is using an Azure App Service and SQL Azure. I don't understand why the response time is so inconsistent.
Any help would be much appreciated!
I've noticed multiple times that the first request after an application is deployed to Azure, or after a long period during which no requests were made, takes significantly longer to get a response.
As far as I remember this is related to the start-up time of the site (if you're using an App Service on a Windows-based underlying VM, it still uses IIS as a reverse proxy).
I solved the issue by configuring health checks that periodically send requests to the app.
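For reference, a minimal sketch of the kind of health-check endpoint I mean, assuming ASP.NET Core 2.2 or later; an Application Insights availability test (or any external pinger, or the App Service "Always On" setting) can then hit this URL so the site never fully idles out:

// Startup.cs - expose a cheap endpoint an external ping can hit periodically.
public void ConfigureServices(IServiceCollection services)
{
    services.AddHealthChecks();               // built-in health check services
}

public void Configure(IApplicationBuilder app)
{
    // GET /health returns 200 while the app is up; point your pinger here.
    app.UseHealthChecks("/health");
}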
Also, in addition to Application Insights (which logs information only after the application has started), you can try the tools listed here to see more information.
Hope it helps!
1.
The way the request timeline is displayed gives you only the time-span for the whole request (37.5s) and the individual time-spans for each dependency.
A dependency is another call that reports its run time to Application Insights.
In your example each call to the database is automatically tracked as a dependency. The code running after each database call is not, though.
So, for example, requesting a database entry that takes 200ms, then issuing a Thread.Sleep of 2 seconds, and then requesting another database entry that takes 300ms would result in a 2-second gap between the two database-call dependencies, which will be listed with 200ms and 300ms respectively.
You can use TelemetryClient.TrackDependency to wrap parts of your own code into its own dependency. This way you will see your own code as an entry on the request timeline.
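As a sketch (not your actual code), this is roughly how you could wrap the thumbnail generation in its own dependency so it shows up on the timeline; telemetryClient, GenerateThumbnail and blobName are assumed names:

// Requires: using System.Diagnostics; using Microsoft.ApplicationInsights;
var telemetryClient = new TelemetryClient();
var start = DateTimeOffset.UtcNow;
var timer = Stopwatch.StartNew();
var success = true;
try
{
    GenerateThumbnail(blobName);               // placeholder for your own code
}
catch
{
    success = false;
    throw;
}
finally
{
    // Appears as its own bar on the request timeline, explaining the "gap".
    telemetryClient.TrackDependency("InProc", "GenerateThumbnail", blobName,
        start, timer.Elapsed, success);
}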
2.
Depending on your Entity Framework database initializer, EF will connect to the master db on context creation (e.g. to create the database if it does not exist).
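If you never want EF to create or migrate the database, you can switch the initializer off so it does not need to probe master at all. A sketch, assuming a code-first context called MyContext:

// Requires: using System.Data.Entity;
public class MyContext : DbContext
{
    static MyContext()
    {
        // No initializer: EF will not check for / create the database,
        // so it has no reason to connect to master for that purpose.
        Database.SetInitializer<MyContext>(null);
    }
}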
3.
Try tracking your own code to find out which parts of it are slow. EF has a few performance issues to consider; try to understand the performance caveats of the libraries you use. If your calls are inconsistently slow, it might be an issue with resources being over-utilized or caches being emptied too early (like EF warm vs. cold queries).
This question already has answers here:
ASP.NET -> WCF concurrency problem
(4 answers)
Closed 9 years ago.
I have a REST service running on ASP.NET 3.5, and multiple concurrent asynchronous calls from the client hit the WCF service. They all get serialized (executed sequentially) instead of running in parallel. Here are my settings:
InstanceContextMode = InstanceContextMode.PerSession
ConcurrencyMode = ConcurrencyMode.Multiple
web.config - throttling values
<serviceThrottling maxConcurrentCalls="64" maxConcurrentInstances="64" maxConcurrentSessions="64"/>
There are no session writes in the code.
What am I missing to make my WCF concurrent?
I had a similar problem, but I was using plain old .aspx pages. No matter how many calls I made, IIS was handling them serially rather than in parallel.
It turns out the problem was that the current page takes a lock on the session, so only one page can access the session at a time, hence the serial behavior.
I have no idea if your problem is caused by the same issue, but I figure it might be a good place to start.
I assume this happens because of the PerSession mode. Try invoking the service from different clients; I think you will see parallel calls then.
With a per-session service, Max Instances is both the total number of concurrently active instances and the number of concurrent sessions.
You can use a per-call service instead; then the number of instances is the same as the number of concurrent calls. Or you can use asynchronous methods on the client, such as 'Begin...()'.
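For reference, a sketch of what the per-call variant looks like (the service and contract names are placeholders):

// Per-call activation: every request gets its own service instance,
// so calls are not funneled through one session-bound instance.
[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall,
                 ConcurrencyMode = ConcurrencyMode.Multiple)]
public class MyRestService : IMyRestService
{
    // ...
}

Note that if the service runs in ASP.NET compatibility mode and touches a writable session, the ASP.NET session lock described in the other answer can still serialize calls regardless of these attributes.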
My question is the same as this question, but with a little to add to it. My problem is that the users of my web app are allowed to create new versions of a record. Every new version of a record results in creating corresponding "new versions" in 50 other related tables to record the individual changes for that particular version. This creation of a new version, with about 50 tables involved, runs within a transaction (to roll back all changes in the event of an error). This procedure is often slow, understandably due to a lengthy transaction with too many table insertions involved.
I am looking at a better solution/design to implement such a scenario.
Is there a better way to maintain "versions" of the same record, particularly when it creates so many duplicates in multiple tables?
I don't feel the design in itself is good, with so many records getting inserted for every row version, but I would at least like to address the immediate problem, the lengthy transaction, which causes delays at times. There may be no way out, but I still wanted to ask: if I don't put the versioning inside a transaction, is there a better way to roll back in the event of an error (because the transaction appears to block other OLTP queries, due to the inserts of new versions into all the primary tables)?
The versioning query runs for about 10 seconds right now, but gets worse at times. Any thoughts are appreciated
Do you have to return the "created" or "failed" message in real time? The following might be overkill for your situation, but it does give you a scalable solution.
The web server could post a message (to a queue of some sort) requesting the action. At this point the user could continue using the site to do other things. A Windows service in the background can process the message (out of the context of the website) and then notify the user, either by a message inside the website (similar to Stack Overflow notifications) or by email, that the task has either run or failed.
If you can get away with decoupled processing in near real time, you can design your Windows service to scale. You can have a thread pool to manage requests, so maybe you only have 5 threads running at any one time to limit load. If you run into more performance problems you can scale out and develop a system with 2 or more processors of the queue (though that does add its own problems/complexity).
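A very rough sketch of that throttled worker inside the Windows service; VersioningRequest, the queue, and ProcessVersioningRequest are all made-up names:

// Requires: using System.Collections.Concurrent; using System.Threading; using System.Threading.Tasks;
var queue = new BlockingCollection<VersioningRequest>();   // filled by whatever receives the messages
var throttle = new SemaphoreSlim(5);                       // at most 5 jobs at once

foreach (var request in queue.GetConsumingEnumerable())
{
    throttle.Wait();
    Task.Run(() =>
    {
        try { ProcessVersioningRequest(request); }         // the 50-table insert, in its own transaction
        finally { throttle.Release(); }
    });
}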
We are building an extranet loan status check website using ASP.NET MVC with a WCF backend. It's a pretty standard design, with the MVC site using a WCF service reference to get customer objects. The service uses an Oracle backend + HTTP binding, and won't be hosted on the same server as the MVC site (so we can't use TCP binding to reduce latency).
The problem we encountered is that every call to the service is resulting in a 7-8s response time which is unacceptable for an extranet site and much higher than the 2s magic mark. The service method(s) call 12 stored procedures to create the customer object. The database is, unfortunately, denormalized (we can't change it as its also used by other inhouse production systems) so most of the calls are basic select statements which populate the customer object and its associated objects. The service proxy is properly opened and closed/disposed in the MVC actions so there are no instances of any service connection leaks. A new client proxy is created for every request (i.e., we are not using the singleton pattern for the service).
Any ideas how we can speed this up ?
Thanks
It sounds like you already know where the problem is - it's the database.
I've never heard of a WCF operation taking more than a fraction of a second to set up and tear down, excluding any logic inside. So even if you could shave off 1-2 seconds of latency (which is probably an optimistic estimate), that doesn't really help if the database operation takes 5-6 seconds by itself.
Honestly? Running 12 stored procedures to create a customer is completely off-the-wall. The purpose of a stored procedure is to encapsulate all of the logic necessary to perform a complex database operation. The very first thing you need to do is change this to be one stored procedure - then if it's still slow, profile the database to see what's taking so long and fix it accordingly. Usually poor database performance is due to one or more missing indexes.
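As a sketch of what the single round trip could look like from the .NET side, assuming ODP.NET and a consolidated procedure that returns several ref cursors; the procedure and parameter names here are invented:

// Requires: using System.Data; using Oracle.ManagedDataAccess.Client;
using (var conn = new OracleConnection(connectionString))
using (var cmd = new OracleCommand("GET_CUSTOMER", conn))
{
    cmd.CommandType = CommandType.StoredProcedure;
    cmd.Parameters.Add("p_customer_id", OracleDbType.Int32).Value = customerId;
    cmd.Parameters.Add("p_customer", OracleDbType.RefCursor, ParameterDirection.Output);
    cmd.Parameters.Add("p_addresses", OracleDbType.RefCursor, ParameterDirection.Output);

    conn.Open();
    using (var reader = cmd.ExecuteReader())
    {
        while (reader.Read()) { /* map the customer row */ }
        reader.NextResult();                    // move to the next ref cursor
        while (reader.Read()) { /* map the address rows */ }
    }
}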
Until you accurately measure what is really happening, don't be too quick to assume where the bottleneck is.
You really need to do an Oracle extended SQL trace to see where that slowness is coming from. Anything other than that is mostly guesswork. Here is a paper from Cary Millsap (of Method R and formerly of Hotsos) that you can download, which details how to do this:
http://method-r.com/downloads/doc_details/10-for-developers-making-friends-with-the-oracle-database-cary-millsap
We have an ASP.Net application that provides administrators to work with and perform operations on large sets of records. For example, we have a "Polish Data" task that an administrator can perform to clean up data for a record (e.g. reformat phone numbers, social security numbers, etc.) When performed on a small number of records, the task completes relatively quickly. However, when a user performs the task on a larger set of records, the task may take several minutes or longer to complete. So, we want to implement these kinds of tasks using some kind of asynchronous pattern. For example, we want to be able to launch the task, and then use AJAX polling to provide a progress bar and status information.
I have been looking into using the BackgroundWorker class, but I have read some things online that make me pause. I would love to get some additional advice on this.
For example, I understand that the BackgroundWorker will actually use the thread pool from the current application. In my case, the application is an ASP.Net web site. I have read that this can be a problem because when the application recycles, the background workers will be terminated. Some of the jobs I mentioned above may take 3 minutes, but others may take a few hours.
Also, we may have several hundred administrators all performing similar operations during the day. Will the ASP.Net application thread pool be able to handle all of these background jobs efficiently while still performing its normal request processing?
So, I am trying to determine if using the BackgroundWorker class and approach is right for our needs. Should I be looking at an alternative approach?
Thanks and sorry for such a long post!
Kevin
In your case it actually sounds like the solution you are looking for is multifaceted (and not a simple one-and-done project).
Since you said that some processes can last for hours, that is absolutely not something for ASP.NET to own. This should be run inside a Windows service and managed with native Windows threading.
You will need to implement some type of work queue in your service and a way to communicate with the queue. One way is to expose a WCF service for all actions your service will govern. Another would be to have the service poll a database table and pick up work from the table.
To be able to express the status of the process, the ASP.NET application needs some reference to it, for example a GUID identifier returned by the WCF service. Then you have a method that, given the processID, returns the status of the process. You can implement the polling of that service call using AJAX and display any type of modal you wish.
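A bare-bones sketch of that contract (all names here are invented, not an existing API):

// Start a job, get back an id, poll for status from the ASP.NET side.
[ServiceContract]
public interface IBatchJobService
{
    [OperationContract]
    Guid StartPolishData(int[] recordIds);     // kicks off the long-running job

    [OperationContract]
    JobStatus GetStatus(Guid processId);       // called from the AJAX poll
}

[DataContract]
public class JobStatus
{
    [DataMember] public int PercentComplete { get; set; }
    [DataMember] public bool IsFinished { get; set; }
    [DataMember] public string Message { get; set; }
}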
Another thing to remember is that you need to design your processes with knowledge of where they are and where they will be when finished, so they can report their state. For example, BatchJobA is run and has 1000 records to process. The service needs to know which record it's on, or the current % of completion, to be able to return information to the UI. For SQL queries that take a very long time to execute this can be very hard to gauge accurately, unless you do a lot of pre- and post-processing with temp tables whose contents you can read mid-run to see how far along the job is.
Based on what you are saying I think that BackgroundWorker is not a good choice.
Furthermore, keeping this functionality as part of your main app can be problematic, specifically because you do not want the submitted processing to be interrupted if the main app recycles. You can play with async processing, but it will still be part of the main app's AppDomain - all of it will die if the app recycles.
I would suggest building a separate app implementing this functionality. In a similar situation I separated the background processing into a Windows service and hosted a web service in it as a means of communication.
You might consider a slightly different approach.
For example, have a command and control table in which you send commands like "REFORMAT PHONE NUMBERS" or whatever.
Then have a windows service monitoring that table. Whenever a record shows up, run the command.
This eliminates any sort of worry about a background thread. Further you have a bit more flexibility with regards to what's in the queue, order of operations including priority, etc. Finally, you would have a definitive list of what is running or needs to run.
As an option, instead of a windows service you might just use a SQL job to execute every so often to watch your control table and perform the requested action.
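A rough sketch of what the Windows-service side of that control-table pattern could look like; the table and column names are made up:

// Requires: using System.Data.SqlClient;
// Called from a timer inside the Windows service: pick up pending commands and run them.
private void PollControlTable()
{
    using (var conn = new SqlConnection(connectionString))
    {
        conn.Open();
        var select = new SqlCommand(
            "SELECT Id, Command FROM dbo.ControlQueue WHERE Status = 'PENDING' ORDER BY Priority", conn);
        using (var reader = select.ExecuteReader())
        {
            while (reader.Read())
            {
                var id = reader.GetInt32(0);
                var command = reader.GetString(1);      // e.g. "REFORMAT PHONE NUMBERS"
                RunCommand(id, command);                // updates Status to RUNNING/DONE on its own connection
            }
        }
    }
}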