While reading an article on N-Tiered Applications, I came across the following passage regarding concurrency tokens and change tracking:
Another important concept to understand is that while the
default-generated entities support serialization, their
change-tracking information is stored in the ObjectStateManager (a
part of the ObjectContext), which does not support serialization.
My question is three-fold:
Does the same apply when using DbContext?
If the only interaction with the database happens in a Repository class inside a using statement, does disposing the context when execution leaves the using block remove any possibility of change tracking?
Can this be leveraged as/with a Concurrency Token?
Yes. DbContext is just a wrapper around ObjectContext, and it exposes change-tracking information through the ChangeTracker property (which returns a DbChangeTracker) and, for a particular entity, through the Entry method (which returns a DbEntityEntry<T>).
Yes. Disposing the context discards all change-tracking information.
Concurrency tokens and change tracking are two completely different concepts. Change tracking tells the context which operations it has to execute against the database when you call SaveChanges; it tracks the changes you made to your entities since you loaded them into the current context instance. A concurrency token resolves optimistic concurrency in the database: it validates that another process, thread, user, or context instance didn't change the same record your context is about to modify during SaveChanges.
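For illustration, here is a minimal sketch (the Order entity, ShopContext, and property names are mine, not from the article) showing both concepts side by side: Entry exposes the tracked state, while a [Timestamp] property acts as a concurrency token that is only checked during SaveChanges.

using System.ComponentModel.DataAnnotations;
using System.Data.Entity;
using System.Linq;

public class Order
{
    public int Id { get; set; }
    public string Status { get; set; }

    // [Timestamp] maps to a SQL Server rowversion column and makes EF
    // treat the property as a concurrency token.
    [Timestamp]
    public byte[] RowVersion { get; set; }
}

public class ShopContext : DbContext
{
    public DbSet<Order> Orders { get; set; }
}

public static class ChangeTrackingDemo
{
    public static void Run()
    {
        using (var context = new ShopContext())
        {
            var order = context.Orders.First();
            order.Status = "Shipped";

            // Change tracking: the context records the modification locally.
            var state = context.Entry(order).State;   // EntityState.Modified

            // The concurrency token is only validated when SaveChanges runs;
            // if another context changed the same row in the meantime, EF
            // throws a DbUpdateConcurrencyException.
            context.SaveChanges();
        }
    }
}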
I am looking at the AxonIQ framework and have managed to get a test application up and running. But I have a question about how event handlers should be treated when the Read Model uses a persistent store.
From my (possibly naive) understanding, @EventHandler-annotated methods in my Projection class get called from the beginning when the application is first launched. This mechanism seems to assume that the Projection uses some kind of volatile store (e.g. an in-memory SQL database like H2) which is re-created from scratch during application boot.
However, if the store were persistent, in something like Elasticsearch, I would want the @EventHandler to resume from the last persisted event instead of from the beginning.
Is there any way to control the behaviour of the @EventHandler in this way?
Axon has two types of Event Processors: Subscribing and Tracking.
The Subscribing mode (which was the default up to Axon 3) will handle events in the thread that delivers them. That means you're at "the mercy" of the delivery guarantees of whichever component delivers the events.
The Tracking mode (which is the default since Axon 4 when using an Event Store or otherwise a source that supports it) will have events handled in dedicated threads, managed by the Event Processor itself. That means events are handled asynchronously from the actual publication mechanism.
The Tracking Event Processor uses Tokens to keep track of progress. These Tokens are stored in a TokenStore and updated as the Processor correctly processes each incoming event (possibly in batches). You decide where those tokens are stored: if you update a relational database, we recommend storing the tokens in the same database, so that projection changes and token updates are committed atomically.
If you don't specify any TokenStore, what happens depends on your setup. On Spring Boot, Axon will attempt to detect a suitable TokenStore implementation for you. Otherwise, it may very well be an in-memory TokenStore, which causes Processors to re-initialize on every startup (and possibly start from the beginning).
To configure a TokenStore:
On Spring (Boot), simply add a bean of type TokenStore with the implementation you want to use.
When using Axon's Configuration API, on the EventProcessingConfigurer, use one of the registerTokenStore(...) methods.
When the Tracking Processor starts, it will check the Token Store for previous progress, and continue from there automatically.
Background (TLDR: I need parallel queries)
I am building a REST service that needs to be able to answer queries very fast.
As such I'm pre-loading a large part of the database into memory and answering using that data instead of making complex database queries for each request. This works great, and the average response time of the API is well below the requirements and a lot faster than direct database queries.
But I have a problem. The service takes about 5 minutes to start and pre-load all of its information. During this time it cannot answer queries.
Problem
I want to change this so that, during the pre-load phase, the service answers requests with direct database queries until the in-memory cache is loaded.
This leads me to a problem: I need to have multiple active queries against my database. Anyone who has tried this in EF Core has probably seen this message.
System.InvalidOperationException: A second operation started on this context before a previous operation completed. This is usually caused by different threads using the same instance of DbContext. For more information on how to avoid threading issues with DbContext, see https://go.microsoft.com/fwlink/?linkid=2097913.
The first sentence on the linked page is
Entity Framework Core does not support multiple parallel operations
being run on the same DbContext instance.
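A minimal repro of what triggers this error might look as follows (assuming a DataContext with Orders and Customers sets; both set names are illustrative):

using System.Threading.Tasks;
using Microsoft.EntityFrameworkCore;

public static class ParallelQueryRepro
{
    // Two async queries racing on a single DbContext instance.
    public static async Task RunAsync(DataContext context)
    {
        var orders = context.Orders.ToListAsync();
        var customers = context.Customers.ToListAsync(); // second operation starts
                                                         // before the first completes
        await Task.WhenAll(orders, customers);           // -> InvalidOperationException
    }
}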
I thought this would be easily solved by wrapping my cache-loading in one class and the direct queries in another, and having each of them require its own instance of the database context. My service can then get both injected and use the two dependencies in parallel.
This should be what I have:
I have also set up my database context registration so that every part of it is transient:
services.AddDbContext<IDataContext, DataContext>(
    options => options.UseSqlServer(connectionString),
    ServiceLifetime.Transient, ServiceLifetime.Transient);
I have also enabled MultipleActiveResultSets=True in the connection string.
All of this however results in the exact same error as listed above.
Again, everything is transient except the HandlerService, which is a singleton because I want it to keep a copy of the cache in memory rather than load it for every request.
What have I failed to understand about the EF Core database context, or about DI in general?
I figured out what the problem was. As described above, there is one singleton handler, and it receives one (indirect) context through DI for fulfilling requests until the cache is loaded. When multiple parallel queries hit the API before the cache is loaded, the error occurs because each of those requests uses the same context instance. In my tests I was always issuing the parallel requests as part of startup, so the singleton service was trying to use the same DbContext for multiple requests.
My solution is to step outside the "normal" dependency injection in this one place and use the IServiceScopeFactory to get a fresh instance of the dependency that resolves requests before the cache is loaded. Bohdan's answer led me to this conclusion and the ultimate solution.
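A minimal sketch of the resulting handler (the type and method names are illustrative, not my exact classes):

using Microsoft.Extensions.DependencyInjection;

public class HandlerService
{
    private readonly IServiceScopeFactory _scopeFactory;
    private volatile bool _cacheLoaded;

    public HandlerService(IServiceScopeFactory scopeFactory)
    {
        _scopeFactory = scopeFactory;
    }

    public string Handle(string query)
    {
        if (_cacheLoaded)
            return AnswerFromCache(query);

        // Until the cache is ready, give every request its own scope (and
        // therefore its own context) so parallel requests never share a
        // DbContext instance.
        using (var scope = _scopeFactory.CreateScope())
        {
            var context = scope.ServiceProvider.GetRequiredService<IDataContext>();
            return AnswerFromDatabase(context, query);
        }
    }

    private string AnswerFromCache(string query) { /* ... */ return null; }
    private string AnswerFromDatabase(IDataContext context, string query) { /* ... */ return null; }
}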
I'm not sure whether it qualifies for a full answer but it's too broad for a comment.
When writing .NET Core background services, which are obviously singletons too, I use IServiceScopeFactory to create services with a limited lifetime.
Here's how I create a context:
// A dedicated scope gives this code its own DbContext instance
using (var scope = _scopeFactory.CreateScope())
{
    var context = scope.ServiceProvider.GetRequiredService<DbContext>();
    // ... use the context; it is disposed together with the scope
}
My guess is that you could inject IServiceScopeFactory into your handler and use it like this too. That would also allow you to leave the context as scoped (which is the default lifetime, by the way) instead of transient.
Hope that helps.
I'm finding that in the application I'm currently working on, I retrieve several entities (related to the authenticated user's account) in almost every controller. These entities are cached at the ORM layer; even so, they seem like good candidates to load once at authentication time, adding a few properties to the application's custom IPrincipal object.
Another option I was thinking of was creating a custom context object (with the user's related account objects) and passing it around with the current request.
Benefits / drawbacks to either approach? Is there another way of dealing with commonly used objects like this?
It sounds like you are missing the fact that the instance of IPrincipal/IIdentity is recreated on every request. It is not persisted anywhere unless you persist it explicitly.
I don't think, then, that there is a performance difference between a custom principal class holding the data and a cached ambient property.
On the other hand, the drawback of custom authentication classes is that you have to provide a custom authentication module so that these instances are recreated during the AuthenticateRequest event in the processing pipeline. In other words, you'd have to replace the FormsAuthenticationModule with your own. This is not difficult, but I wouldn't do it unless absolutely necessary.
Note also that some data can be persisted in the UserData section of the forms authentication cookie. This means you can keep it for as long as the cookie is valid and create it only once.
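As a rough sketch of that last option (the method and parameter names are mine; accountData stands for whatever serialized form you choose), you write the data into the ticket once at login and read it back on later requests without hitting the database:

using System;
using System.Web;
using System.Web.Security;

public static class FormsUserData
{
    // At login: serialize the account data once into the UserData section.
    public static void IssueTicket(HttpResponse response, string userName, string accountData)
    {
        var ticket = new FormsAuthenticationTicket(
            1, userName, DateTime.Now, DateTime.Now.AddMinutes(30),
            false, accountData);

        response.Cookies.Add(new HttpCookie(
            FormsAuthentication.FormsCookieName,
            FormsAuthentication.Encrypt(ticket)));
    }

    // On later requests (e.g. during AuthenticateRequest): read it back.
    public static string ReadUserData(HttpRequest request)
    {
        var cookie = request.Cookies[FormsAuthentication.FormsCookieName];
        if (cookie == null) return null;

        FormsAuthenticationTicket ticket = FormsAuthentication.Decrypt(cookie.Value);
        return ticket.UserData;
    }
}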
I'm working on a project in ASP.NET using WebForms. I'm using Entity Framework to save data to Microsoft SQL Server.
My question is:
Is it possible to use a static class to keep the EF ObjectContext alive and put/get entities that have NOT been saved inside the ObjectContext?
I want to create an object and add it with AddObject on the ObjectContext, but NOT call SaveChanges. All of this in one webform. Then, in another webform, I want to access the ObjectContext and get the object that was added.
Is this possible?
My rules for using ObjectContext:
Do not use static context.
Do not share context.
You are trying to violate both rules. If you do, your application will behave nondeterministically. Create a new ObjectContext instance for each request. Sharing one context is like opening a single connection and starting a single transaction and sharing them among all requests, instead of opening a new connection and starting a new transaction per request.
There is further explanation here. Also check the linked questions in the right-hand column and you will see what kinds of problems people run into just by violating one or both of these rules.
Also, in a web application it becomes even more interesting, because ObjectContext is not thread-safe.
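Following both rules looks roughly like this in a webform (MyEntities is an illustrative ObjectContext name):

using System;
using System.Linq;

public partial class ProductPage : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // One context per request: created, used and disposed inside it.
        using (var context = new MyEntities())
        {
            var product = context.Products.First(p => p.Id == 1);
            // ... work with the entity ...
        }   // nothing of the context outlives the request
    }
}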
You could add it to the application items collection. See this blog post for syntax and such.
http://www.informit.com/articles/article.aspx?p=27315&seqNum=3
Generally, you don't want to. An ObjectContext is intended to be a unit of work, alive for a single set of related transactions. In an ASP.NET application, that generally corresponds to a single request.
If you must keep it alive across multiple requests, I wouldn't use a static class or the application context. Instead, I'd recommend using the Cache, attaching callbacks that let you make sure all your transactions are committed before the context gets evicted, just in case.
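A rough sketch of that approach (MyEntities is an illustrative ObjectContext name; whether committing at eviction time is safe for your workload is for you to decide):

using System;
using System.Web;
using System.Web.Caching;

public static class ContextCache
{
    public static void Store(string key, MyEntities context)
    {
        HttpRuntime.Cache.Insert(
            key, context,
            null,                               // no cache dependencies
            Cache.NoAbsoluteExpiration,
            TimeSpan.FromMinutes(20),           // sliding expiration
            CacheItemPriority.Default,
            OnRemoved);                         // called when the item is evicted
    }

    private static void OnRemoved(string key, object value, CacheItemRemovedReason reason)
    {
        var context = (MyEntities)value;
        context.SaveChanges();                  // commit any pending changes
        context.Dispose();
    }
}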
I'm wondering what strategies exist to handle object integrity in a stateful client like a Flex or Silverlight app.
What I mean is the following: consider an application where you have a Group and a Member entity. Groups contain multiple members, and members can belong to multiple groups. A view lists the different groups, which are lazy-loaded (no members initially). When the details of a group are requested, all of its members are loaded and cached, so the next time we don't need to invoke a service to fetch the group's details and members.
Now, when we request the details of another group that contains a member of a group that was already loaded, do we care that this member is already in memory?
If we don't, I can see a potential data conflict: when we edit the member (referenced in the first group), the changes are not applied to the other member instance. To solve this, we could check the result of the service call (that gets the group details) for members that are already loaded, and replace those with the cached instances.
Any tips, ideas or experiences to share?
What you are describing is something that is usually solved by a "first-level cache" (in Hibernate, the "Session"; in JPA, the "EntityManager") which ensures that only one instance of a particular entity exists in a particular context. As you suggest, this could be applied to objects as they are fetched from the server to ensure that all references to a particular entity are in fact references to the same object instance. You would also need a mechanism to ensure that entities created inside the AVM exist in that same context so they have similar logic applied to them.
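The core of such a first-level cache is an identity map. Here is a minimal sketch (in C# for brevity; the idea translates directly to ActionScript, and it assumes entities expose a numeric id):

using System.Collections.Generic;

public class IdentityMap<TEntity> where TEntity : class
{
    private readonly Dictionary<long, TEntity> _instances = new Dictionary<long, TEntity>();

    // Funnel every entity fetched from the server through this method, so a
    // given id always resolves to the same in-memory instance.
    public TEntity Resolve(long id, TEntity fetched)
    {
        TEntity existing;
        if (_instances.TryGetValue(id, out existing))
            return existing;   // a real implementation would also merge the
                               // fresh server state into the cached instance

        _instances[id] = fetched;
        return fetched;
    }
}

When the second group's details come back, each member is passed through Resolve, so an edit to a member is visible through every group that references it.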
The Granite Data Services project has a project called "Tide" which aims to solve this problem:
http://www.graniteds.org/confluence/display/DOC/6.+Tide+Data+Framework
As far as DDD goes, it's important not to design the backend as a simple data access API, such as simply exposing a set of DAOs or Repositories. The client application cannot be trusted and in fact is very easy to manipulate with a debugging proxy such as Charles. I always design a services API that is tailored to the UI (so that data for a screen can be fetched in a single call) and has necessary security or validation logic enforced, often using annotations and Spring AOP.
What I would do is create a client-side application service that does the caching and services requests for data; it would know whether an object already exists in the cache. If you are using DDD, then you'll need to decide which is going to be your aggregate root entity: Group or Member. You can't have both controlling each other; there needs to be one point for managing loading, etc. Check out this video on DDD from the Canadian ALT.NET OpenSpaces: http://altnetpedia.com/Calgary200808.ashx