How to cache Entity Framework Core model? - .net-core

I have an Azure Service Fabric cluster with .NET Core 2.2 microservices inside it. The services use EF Core to communicate with Azure SQL databases, and the cluster sits behind a load balancer.
The database context has a scoped lifetime and is injected into the controllers using dependency injection.
Everything works well while the same client is consistently routed to the same service instance, since the load balancer guarantees sticky sessions for at least 4 minutes. However, when the load balancer decides to send the user to a different instance, the database context is created again (since the lifetime is scoped, a context is created per new web request). Unfortunately, the model-building process takes quite a long time, and for that reason the first query is always much slower than the subsequent ones (on the same web request).
My question is: is it possible to somehow cache the EF Core model so that it doesn't have to be rebuilt every time the situation described above occurs?
I mean something similar to classic EF, where an .edmx file is created once and loaded on context creation.

As of 3/3/2020, this is still not possible: https://github.com/dotnet/efcore/issues/1906
If you need model caching for performance reasons, you will need to use EF 6. The good news is that EF 6.3 and the SQL Server provider for EF 6 were ported to run on .NET Core 3.0. However, other providers may or may not port their code over, so support may be spotty.
https://devblogs.microsoft.com/dotnet/announcing-ef-core-3-0-and-ef-6-3-general-availability/#what-s-new-in-ef-6-3
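
Until that issue is resolved, one mitigation (a warm-up rather than true model caching) is to force the model build once at service startup, since EF Core caches the built model per context type for the lifetime of the process. A minimal sketch for ASP.NET Core 2.x, assuming a hypothetical OrdersContext registered with AddDbContext and the usual Startup class:

```csharp
using Microsoft.AspNetCore;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.DependencyInjection;

public class Program
{
    public static void Main(string[] args)
    {
        var host = WebHost.CreateDefaultBuilder(args)
            .UseStartup<Startup>()
            .Build();

        // Resolve the scoped context once and touch its Model property.
        // This forces EF Core to build the model (which it then caches
        // per context type for the lifetime of the process), so the first
        // real request no longer pays the model-building cost.
        using (var scope = host.Services.CreateScope())
        {
            var context = scope.ServiceProvider.GetRequiredService<OrdersContext>();
            var _ = context.Model;
        }

        host.Run();
    }
}
```

With this in place, each new service instance still pays the build cost once at startup, but no user request does.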

Related

Make a .NET Core service run on multiple machines to make it highly available, but have the work done by only one node

I have a .NET Core application that consists of some background tasks (hosted services) and web APIs (which control and get the statuses of those background tasks). Other applications (e.g. clients) communicate with this service through these web API endpoints. We want this service to be highly available, i.e. if a service instance crashes then another instance should start doing the work automatically. Also, the client applications should be able to switch to the next service automatically (clients should call the APIs of the new instance instead of the old one).
The other important requirement is that the task (computation) this service performs in the background can't be shared between two instances. We have to make sure only one instance does this task at a given time.
What I have done up to now is run two instances of the same service and use a SQL Server-based distributed locking mechanism (SqlDistributedLock) to acquire a lock. If a service can acquire the lock, it goes and does the operation while the other node waits to acquire it. If one service crashes, the next node is able to acquire the lock. On the client side, I used a Polly-based retry mechanism to switch the calling URL to the next node in order to find the working one. A sketch of this locking pattern is shown below.
But this design has an issue: if the node that acquired the lock loses connectivity to the SQL Server, the second service manages to acquire the lock and starts doing the work while the first service is still in the middle of doing the same.
I think I need some sort of leader election (it seems I've done it wrongly). Can anyone help me with a better solution for this kind of problem?
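
For reference, the locking pattern described above might look like this minimal sketch, assuming the Medallion DistributedLock library's SqlDistributedLock (the class names are illustrative):

```csharp
using System.Threading;
using Medallion.Threading.Sql;

public class LeaderWorker
{
    private readonly string _connectionString;

    public LeaderWorker(string connectionString)
    {
        _connectionString = connectionString;
    }

    public void Run(CancellationToken ct)
    {
        var distributedLock = new SqlDistributedLock("BackgroundTaskLeader", _connectionString);

        // Blocks until this instance becomes the leader.
        using (distributedLock.Acquire())
        {
            while (!ct.IsCancellationRequested)
            {
                DoWork(); // the computation that must run on exactly one node
            }
        }
        // Caveat (the problem described above): if this node loses its SQL
        // connection, the lock is released server-side while DoWork() may
        // still be running here, allowing a second node to become leader.
    }

    private void DoWork() { /* ... */ }
}
```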
This problem is not specific to .NET or any other framework, so it may help to phrase the question more generally to make it more accessible. Generally, the solution to this kind of problem lies in the domain of Enterprise Integration Patterns, so consult those references, as the status quo may change.
At first sight, and based on my own experience developing distributed systems, I suggest two solutions:
use a load balancer or gateway to distribute requests between your service instances.
use a shared message queue broker to put requests in and let each service instance dequeue a request for processing (see the sketch below).
Either approach is fine; I have used both in my own designs.
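
To make the second option concrete, here is a minimal competing-consumers sketch; IMessageQueue is a hypothetical stand-in for your broker's client API (RabbitMQ, Azure Service Bus, etc.):

```csharp
using System.Threading;
using System.Threading.Tasks;

// Hypothetical abstraction over whatever broker you choose; the broker
// guarantees that a given message is delivered to only one consumer.
public interface IMessageQueue
{
    Task<WorkItem> DequeueAsync(CancellationToken ct);
    Task AcknowledgeAsync(WorkItem item);
}

public class WorkItem
{
    public string Id { get; set; }
    public string Payload { get; set; }
}

public class CompetingConsumer
{
    private readonly IMessageQueue _queue;

    public CompetingConsumer(IMessageQueue queue)
    {
        _queue = queue;
    }

    // Every service instance runs this loop; only the instance that
    // receives a given message processes it, so work is never duplicated.
    public async Task RunAsync(CancellationToken ct)
    {
        while (!ct.IsCancellationRequested)
        {
            var item = await _queue.DequeueAsync(ct);

            await ProcessAsync(item);            // the background computation
            await _queue.AcknowledgeAsync(item); // ack only after success, so a
                                                 // crashed node's message is redelivered
        }
    }

    private Task ProcessAsync(WorkItem item) => Task.CompletedTask; // placeholder
}
```

Because the broker hands each message to exactly one consumer, the "only one node does the work" requirement is enforced by the broker rather than by a lock, which avoids the split-brain problem described in the question.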

AddScoped service lifetime in an ASP.NET Core multithreaded application

I know that AddSingleton() creates a single instance of the service when it is first requested and reuses that same instance everywhere the service is needed.
If my ASP.NET Core application is multi-threaded, does that mean all HTTP requests from all users will share the same object instance created by dependency injection (DI)?
If so, that would not be a good fit when the application processes data that must be stored per user. Are there any best practices?
As mentioned in the Microsoft documentation, Service lifetimes, it depends on your specific case.
If you have a service A and you want a new instance created on every single request, use AddScoped() rather than AddSingleton(); a scoped service is created once per client request.
If, on the other hand, the service holds shared data that doesn't change between requests, such as values computed at application startup and reused throughout the lifetime of the application, then a singleton is the appropriate choice.
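
To illustrate, a minimal registration sketch (the interface and class names are hypothetical):

```csharp
using Microsoft.Extensions.DependencyInjection;

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        // One instance for the whole application. Safe only for state that is
        // immutable or thread-safe, e.g. lookup data computed at startup.
        services.AddSingleton<IStartupCache, StartupCache>();

        // One instance per HTTP request. Per-user data stored here can never
        // leak between concurrent requests, even though the app is multi-threaded.
        services.AddScoped<IShoppingCart, ShoppingCart>();

        // A new instance every time the service is resolved.
        services.AddTransient<IEmailBuilder, EmailBuilder>();
    }
}
```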

WCF service architecture query

I have an application that consists of a web application and multiple Windows services; only one Windows service is installed, depending on which version of the backend software is used.
Currently, data is saved by the web app in a database, then the relevant installed service picks up the data and posts it into the backend system.
I want to change this to use WCF services so the resulting data is returned directly to the web app.
I have not used WCF services before, but I'm assuming I can do something like this:
WebApp.Objects.dll - contains database objects, e.g. a PurchaseOrder object
WebApp.Service.Contracts.dll - here I can describe the service methods; this will reference WebApp.Objects.dll so I can take a PurchaseOrder object as a parameter
WebApp.Service.2011.dll - the actual service for the 2011 version of the backend system; this will reference the WebApp.Service.Contracts dll
WebApp.Service.2012.dll - the actual service for the 2012 version of the backend system; this will reference the WebApp.Service.Contracts dll
So, my question is: does the web app need to know the specifics of which backend WCF service is used? I just want to call a service with the specified interface and not care about how it's implemented or what it does internally, just that it returns the purchase order that was created in the backend system (whether it returns an interface or a concrete class).
Will I be able to create a service client without needing to know whether it's the 2011 or 2012 WCF service being used?
As long as you are able to use the exact same contract for all versions, the web application does not need to know which version of the WCF service it is accessing.
In the configuration of the web application you specify the URL and the contract. However, besides the contract there might be other differences between the services. In an extreme example, v2011 might use a different binding than v2012 of the backend, which is not very likely from your description. But subtle differences in the configuration or behavior of the services should also be addressed in the configuration files; e.g. if v2012 needs longer for an action than v2011 does, the timeouts need to be configured so that the longer time of v2012 does not lead to an expiration.
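
To make that concrete, a minimal sketch using the assembly layout from the question (the member names and the endpoint name are illustrative assumptions):

```csharp
using System.Runtime.Serialization;
using System.ServiceModel;

// Shared contract assembly (WebApp.Service.Contracts.dll). Both the 2011 and
// 2012 services implement this, so the client never needs to know which one
// it is talking to.
[ServiceContract]
public interface IPurchaseOrderService
{
    [OperationContract]
    PurchaseOrder CreatePurchaseOrder(PurchaseOrder order);
}

// Lives in WebApp.Objects.dll in the layout described in the question.
[DataContract]
public class PurchaseOrder
{
    [DataMember]
    public string Number { get; set; }
}

// Client side: "PurchaseOrderEndpoint" is a hypothetical endpoint name in the
// web app's config; pointing it at the 2011 or the 2012 service is purely a
// configuration change (URL and, if necessary, binding/timeouts).
public static class PurchaseOrderClient
{
    public static PurchaseOrder Create(PurchaseOrder order)
    {
        var factory = new ChannelFactory<IPurchaseOrderService>("PurchaseOrderEndpoint");
        var channel = factory.CreateChannel();
        try
        {
            return channel.CreatePurchaseOrder(order);
        }
        finally
        {
            ((IClientChannel)channel).Close();
            factory.Close();
        }
    }
}
```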

Mixing VB6 "legacy code" and a web application

I'm working on a new website, written in VB.Net using ASP.NET MVC2, and there is a need to call "legacy" VB6 code for various complex bits of business logic. The VB6 is a framework consisting of many DLLs and is very stateful; we are pretty much emulating how the framework is used in our client application, i.e. the application runs (lots of state setup), a user logs on (even more state) and then loads a file (even more state).
I've been provided with a "web service interface framework" to get this up and running for use in the web app; this "web framework" hides the legacy code behind a thin layer running under IIS. The idea is that thread pooling provided by IIS will reduce memory use, etc. I can't help but believe that the guy who provided this has missed the point: since each instance is so stateful, there is no way a thread pool can work, because once a user logs on using one particular object from the pool, no other object will be capable of servicing that client (since it won't have the state)! Also, adding a web service interface and the associated SOAP marshalling is a huge overhead compared to calling the objects directly.
The only ways I can think of doing this are either a single legacy interface instance that is used by all clients and blocked by each call until it completes, or a thread per client, with each legacy interface object being created in a new thread and living for the life of the client.
Neither of these is ideal, but with the amount of code in question and the prolonged migration programme to .NET (2+ years and still stateful) I can't think of an alternative. We run the original client app in a Citrix environment for some customers, so I expect it could also run OK with a thread per client, given a beefy enough server, and the overheads of the framework itself should be lower than when the client app is involved.
Any ideas?
I suggest that you take a look at the Visual WebGui framework. I am an employee of this company and therefore won't sound objective, but I believe Visual WebGui has solved some of the major issues with scaling stateful applications and turning a single-user environment into a multi-user one. Worth a look.
Here's an option, but it won't be pretty.
It sounds like you need to associate a long-lived object (the stateful object from your backend tier) with individual users.
You could store this object in Application state and associate it with the user's Session state with a key. You'd need to provide a wrapper to keep track of them all. When the session dies you could capture the event and destroy the backend object. A sketch of such a wrapper is shown below.
Application state is a key/value store just like Session. You can access it through HttpContext.Application.
The big downside is that the objects you put in there stick around until you destroy them, so your wrapper and session-destroying code need to be spot on. Other than that, this might be a quick way to get up and running.
Like I said, it won't be optimal, but it'll probably work.
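
A minimal sketch of that wrapper, with LegacyEngine standing in for the stateful VB6 interop object (all names are hypothetical):

```csharp
using System;
using System.Web;

// Hypothetical wrapper around the stateful VB6 framework.
public sealed class LegacyEngine : IDisposable
{
    public void Dispose()
    {
        // release the underlying COM references here
    }
}

public static class LegacyEngineStore
{
    private static string KeyFor(string sessionId)
    {
        return "LegacyEngine_" + sessionId;
    }

    public static LegacyEngine GetOrCreate(HttpContext context)
    {
        HttpApplicationState app = context.Application;
        string key = KeyFor(context.Session.SessionID);

        app.Lock(); // Application state is shared, so serialize access to it
        try
        {
            var engine = (LegacyEngine)app[key];
            if (engine == null)
            {
                engine = new LegacyEngine(); // the expensive, stateful setup
                app[key] = engine;
            }
            return engine;
        }
        finally
        {
            app.UnLock();
        }
    }

    // Call from Session_End in Global.asax so the backend object dies with the session.
    public static void Destroy(HttpApplicationState app, string sessionId)
    {
        string key = KeyFor(sessionId);
        var engine = (LegacyEngine)app[key];
        if (engine != null)
        {
            app.Remove(key);
            engine.Dispose();
        }
    }
}
```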
More info on implications:
http://msdn.microsoft.com/en-us/library/bf9xhdz4(VS.71).aspx
EDIT:
You could also make this work in a web farm environment. Store the information needed to recreate your stateful legacy object in Session state, which can be shared between the machines using the built-in SQL provider. If a user bounces to a server where the object doesn't exist, your Application state wrapper can just recreate it from the Session state info.
This just leaves how to clean up the stateful object on servers where it isn't needed. In your retrieval wrapper, update a hashtable or similar with the access time each time a given stateful object is accessed. Have a periodic cleanup routine in the wrapper destroy the stateful objects that haven't been accessed for a little more than the session timeout value of your web app.

ASP.NET, IIS and COM

I'm not that familiar with COM and was hoping that someone out there who is could help verify that what I have below is correct.
If I have two completely separate requests (request 1 and request 2), then this creates two separate instances of my web application. So far so boring.
If each instance then contacts the SAME web service, then presumably two instances of the web service are also instantiated.
This is where it gets interesting.
These web services call into a .NET assembly, which in turn references an in-process (registered via regsvr32) COM DLL (via interop).
Is my diagram correct?
This COM DLL connects to the database, performs a query and returns data to the web service, which then returns the data as JSON to the client. All done AJAX-style.
The other question I have is: is this okay performance-wise? I don't see why it shouldn't scale and still be able to return data to the user promptly.
Seems OK as a logical pattern. But, as always, the devil is in the detail.
This all hinges on the implementation of your services' use of COM components, and specifically the COM components' handling of threads. If your COM components are thread-safe and marked to use the MTA (multi-threaded apartment), you should be OK. However, many COM objects are marked as STA (and so use the single-threaded apartment).
In relation to "is this okay performance-wise?": if your COM component is an STA component (which it will be if it was created in VB 6.0), you will have to do a bit of thread untangling, otherwise all your service requests will queue up and performance will get worse under load.
This article explains both the problem and the solution to this (for ASMX services)...
http://msdn.microsoft.com/en-us/magazine/cc163544.aspx
...and the solution if you're using WCF services...
http://blogs.catalystss.com/blogs/scott_seely/archive/2007/09/27/203.aspx
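
To illustrate the shape of that "thread untangling", here is a minimal sketch that marshals a call onto a dedicated STA thread; a production fix would pool STA threads as the linked articles describe. LegacyComComponent is a hypothetical interop wrapper:

```csharp
using System;
using System.Threading;

public static class StaInvoker
{
    // Runs the given delegate on a thread whose apartment state is STA,
    // instead of calling the COM component directly from the (MTA) worker
    // thread. Spinning up a thread per call is heavyweight and shown only
    // for clarity.
    public static T Run<T>(Func<T> call)
    {
        T result = default(T);
        Exception error = null;

        var thread = new Thread(() =>
        {
            try { result = call(); }
            catch (Exception ex) { error = ex; }
        });

        thread.SetApartmentState(ApartmentState.STA); // must be set before Start()
        thread.Start();
        thread.Join();

        if (error != null) throw error;
        return result;
    }
}

// Usage, with "LegacyComComponent" standing in for your interop wrapper:
// var data = StaInvoker.Run(() => new LegacyComComponent().RunQuery(sql));
```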
