In an ASP.NET Core 2.2 project with EF Core (latest everything; I ran all NuGet updates today) I have this operation:
return Ok(_db.GlobalRoles
.Include(gr => gr.GlobalRoleFeatures)
.ThenInclude(grf => grf.Feature)
.Include(gr => gr.GlobalRoleCompanyGroupRoles)
.ThenInclude(grcgr => grcgr.CompanyGroupRole)
.ThenInclude(cgr => cgr.CompanyGroupRoleFeatures)
.ThenInclude(cgrf => cgrf.Feature)
.ToList());
For the most part the details aren't important, suffice to say it's a tree of entities I want to eager-load. When I profile the DB this ultimately results in 4 queries. At first I found that unexpected, but shrugged it off as perhaps just how EF has optimized fetching these results. No big deal. And the resulting data is correct.
But when I wrap it in an IMemoryCache:
return Ok(_cache.GetOrCreate(nameof(GlobalRole), entry =>
{
entry.SlidingExpiration = TimeSpan.FromMinutes(_appSettings.DataCacherExpiryMinutes);
return _db.GlobalRoles
.Include(gr => gr.GlobalRoleFeatures)
.ThenInclude(grf => grf.Feature)
.Include(gr => gr.GlobalRoleCompanyGroupRoles)
.ThenInclude(grcgr => grcgr.CompanyGroupRole)
.ThenInclude(cgr => cgr.CompanyGroupRoleFeatures)
.ThenInclude(cgrf => cgrf.Feature)
.ToList();
}));
While the first fetch of this data works as expected, subsequent fetches from the cache result in an exception:
Newtonsoft.Json.JsonSerializationException: Error getting value from 'GlobalRoleCompanyGroupRoles' on 'Castle.Proxies.GlobalRoleProxy'. ---> System.InvalidOperationException: Error generated for warning 'Microsoft.EntityFrameworkCore.Infrastructure.LazyLoadOnDisposedContextWarning: An attempt was made to lazy-load navigation property 'GlobalRoleCompanyGroupRoles' on entity type 'GlobalRoleProxy' after the associated DbContext was disposed.'. This exception can be suppressed or logged by passing event ID 'CoreEventId.LazyLoadOnDisposedContextWarning' to the 'ConfigureWarnings' method in 'DbContext.OnConfiguring' or 'AddDbContext'.
It appears that when serializing the object, the eager-loaded lists of contained entities aren't there. (Or perhaps they are, but it's still trying to load them again? Or in some way query the context?) Naturally the context instance has long since been disposed; only the fully materialized list should be cached.
When I debug, the top-level list is indeed returned from the cache. But upon inspection the GlobalRoleFeatures and GlobalRoleCompanyGroupRoles properties of any object therein result in the same above exception.
Note: Same behavior using .ToListAsync() on the query and async all the way up through .GetOrCreateAsync() and through the controller action.
Am I overlooking something? Is there a way to get the fully materialized list, no longer dependent on the DB context, into a memory cache?
The problem is using IMemoryCache. You are not, in fact, serializing the items into your cache. Objects are cached directly in memory, which means that the ties they have to things like a DbContext persist, even though the DbContext does not.
Specifically, the way lazy loading works is that EF actually creates a dynamic proxy of your entity class and overrides (hence the need for the virtual keyword) the reference or collection property with a custom getter that checks the EF object cache for the items and, if they cannot be found, makes a query to get them. Because you're caching directly in memory, you're caching these proxy class instances, which still have this logic on them.
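Conceptually, the generated proxy looks something like this (an illustrative sketch only - the real class is emitted at runtime by Castle DynamicProxy, and ILazyLoader here stands in for EF's internal plumbing):
public class GlobalRoleProxy : GlobalRole
{
    private readonly ILazyLoader _lazyLoader;

    public override ICollection<GlobalRoleCompanyGroupRole> GlobalRoleCompanyGroupRoles
    {
        get
        {
            // If the collection isn't loaded yet, this queries the context -
            // which throws if the context has been disposed, which is exactly
            // the exception seen when reading the cached instance.
            _lazyLoader.Load(this, nameof(GlobalRoleCompanyGroupRoles));
            return base.GlobalRoleCompanyGroupRoles;
        }
        set { base.GlobalRoleCompanyGroupRoles = value; }
    }
}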
It's a bad idea to use IMemoryCache regardless. Instead, you should always use IDistributedCache. There is a MemoryDistributedCache provider (which is actually the default) if you still want to actually cache in memory, but using IDistributedCache does two things for you:
It's more generic than IMemoryCache, so you can later sub in any cache provider (Redis, SQL Server, etc.) without changing your app code.
Specifically to your issue here, it will force you to actually serialize the cache value, even if using the memory cache provider, meaning you won't have this same non-obvious problem.
That does mean it's a little more work. You'll need to use something like JsonConvert to serialize and deserialize to/from the cache, but you can add extensions to IDistributedCache to take care of that for you.
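For example, a minimal sketch of such extensions using Newtonsoft.Json (the method names SetJson/GetJson are our own invention, not part of the framework):
public static class DistributedCacheExtensions
{
    public static void SetJson<T>(this IDistributedCache cache, string key, T value, DistributedCacheEntryOptions options)
    {
        // Serializing here is what severs the tie to the DbContext: only the
        // materialized object graph is stored, not the lazy-loading proxies.
        string json = JsonConvert.SerializeObject(value);
        cache.SetString(key, json, options);
    }

    public static T GetJson<T>(this IDistributedCache cache, string key)
    {
        string json = cache.GetString(key);
        return json == null ? default(T) : JsonConvert.DeserializeObject<T>(json);
    }
}
Note that serializing an entity graph with two-way navigation properties may also require ReferenceLoopHandling.Ignore in your JsonSerializerSettings to avoid reference cycles.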
I'm trying to improve myself with .NET Web API and I am trying to return a custom error in Swagger. But when this custom error is returned, the response shows which line the error was thrown on (the stack trace). How can I prevent this?
public async Task<BookCreateDTO> CreateBook(BookCreateDTO bookCreateDto)
{
if (await _context.Books.AnyAsync(x => x.Name == bookCreateDto.Name))
{
throw new BookExistException("Book already exist");
}
var book = _mapper.Map<Book>(bookCreateDto);
_context.Books.Add(book);
await _context.SaveChangesAsync();
return book;
}
What should I do to see only this exception message in the Swagger response?
Thank you for your help.
Exceptions should be exceptional: Don't throw exceptions for non-exceptional errors.
I don't recommend specifying your web-service's response DTO type in the C# action method return type because it limits your expressiveness (as you're discovering).
Instead use IActionResult or ActionResult<T> to document the default (i.e. HTTP 2xx) response type and then list error DTO types in [ProducesResponseType] attributes with their corresponding HTTP status codes.
This also means that each response status code should only be associated with a single DTO type.
While Swagger is not expressive enough to allow you to say "if the response status is HTTP 200 then the response body/DTO is one-of DtoFoo, DtoBar, DtoQux", in practice a well-designed web-service API should not exhibit that kind of response DTO polymorphism anyway.
And if it did, how else would a client know what the type is just from the HTTP headers? (Well, you could put the full DTO type-name in a custom HTTP response header, but that introduces other problems...)
For error conditions, add the errors to ModelState (with the Key, if possible) and let ASP.NET Core handle the rest for you with ProblemDetails.
If you do throw an exception, then ASP.NET Core can be configured to automatically render it as a ProblemDetails - or it can show the DeveloperExceptionPage - or something else entirely.
I note that a good reason not to throw an exception inside a controller for non-exceptional errors is that your logging framework may log extra detail about unhandled exceptions in ASP.NET Core's pipeline, which would result in useless extraneous entries in your logs that make it harder to find the "real" exceptions you need to fix.
Document the DTOs used, and their corresponding HTTP status codes, with [ProducesResponseType]: this is very useful when using Swagger/NSwag to generate online documentation and client libraries.
Also: do not use EF entity types as DTOs or ViewModels.
Reason 1: When the response (with EF entity objects) is serialized, entities with lazy-loaded properties will cause your entire database object-graph to be serialized (because the JSON serializer will traverse every property of every object).
Reason 2: Security! If you directly accept an EF entity as an input request body DTO or HTML form model then users/visitors can set properties arbitrarily, e.g. POST /users with { accessLevel: 'superAdmin' }. While you can exclude or restrict which properties of an object can be set by a request, it just adds to your project's maintenance workload (as it's another non-local, manually-written list or definition in your program that you need to keep in sync with everything else).
Reason 3: Self-documenting intent: an entity-type is for in-proc state, not as a communications contract.
Reason 4: the members of an entity-type are never exactly what you'll want to expose in a DTO.
For example, your User entity will have Byte[] PasswordHash and Byte[] PasswordSalt properties (I hope...), and obviously those two properties must never be exposed; but in a User DTO for editing a user you might want different members, like NewPassword and ConfirmPassword - which don't map to DB columns at all (see the sketch just after this list of reasons).
Reason 5: On a related note to Reason 4, using Entity classes as DTOs automatically binds the exact design of your web-service API to your database model.
Supposing that one day you absolutely need to make changes to your database design: perhaps someone told you the business requirements changed; that's normal and happens all the time.
Supposing the DB design change was from allowing only 1 address per customer (because the street addresses were being stored in the same table as customers) to allowing customers to have many addresses (i.e. the street-address columns are moved to a different table)...
...so you make the DB changes, run the migration script, and deploy to production - but suddenly all of your web-service clients stop working, because they all assumed your Customer object had inline Street address fields and now they're missing (your Customer EF entity type no longer has street-address columns; those are over in the CustomerAddress entity class).
If you had been using a dedicated DTO type specifically for Customer objects then during the process of updating the design of the application you would have noticed builds breaking sooner (rather than inevitably later!) due to C# compile-time type-checking in your DTO-to-Entity (and Entity-to-DTO) mapping code - that's a benefit right there.
But the main benefit is that it allows you to completely abstract-away your underlying database design - and so, in our example, if you have remote clients that depend on Customer address information being inline then your Customer DTO can still emulate the older design by inlining the first Customer Address into the original Customer DTO when it renders its JSON/XML/Protobuf response to the remote client. That saves time, trouble, effort, money, stress, complaints, firings, unnecessary beatings, grievous bodily harm and a scheduled dental hygienist's appointment.
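To make Reason 4 concrete, here is a minimal sketch (the class and member names are hypothetical, not from your project):
public class User // EF entity: in-proc state, mapped to the DB
{
    public int Id { get; set; }
    public byte[] PasswordHash { get; set; } // must never leave the server
    public byte[] PasswordSalt { get; set; }
}

public class UserEditDto // wire contract: only what the client may see/send
{
    public int Id { get; set; }
    public string NewPassword { get; set; } // no matching DB column at all
    public string ConfirmPassword { get; set; }
}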
Anyway, I've modified your posted code to follow the guidance above:
I added [ProducesResponseType] attributes.
I appreciate it is redundant to specify the default response type BookCreateDTO twice (in [ProducesResponseType] as well as in ActionResult<BookCreateDTO>) - you should be able to remove either one of those without affecting the Swagger output.
I added an explicit [FromBody], just to be safe.
If the "book-name is unused" check fails, it returns the model validation message in ASP.NET's stock BadRequest response, which is rendered as an IETF RFC 7807 response, aka ProblemDetails instead of throwing an exception and then hoping that you configured your ASP.NET Core pipeline (in Configure()) to handle it as a ProblemDetails instead of, say, invoking a debugger or using DeveloperExceptionPage.
Note that in the case of a name conflict we want to return HTTP 409 Conflict and not HTTP 400 Bad Request, so conflictResult.StatusCode is overwritten with 409.
The final response is generated from a new BookCreateDTO instance via AutoMapper and Ok() instead of serializing your Book entity object.
[ProducesResponseType(typeof(BookCreateDTO), StatusCodes.Status200OK)]
[ProducesResponseType(typeof(ProblemDetails), StatusCodes.Status409Conflict)]
public async Task< ActionResult<BookCreateDTO> > CreateBook( [FromBody] BookCreateDTO bookCreateDto )
{
// Does a book with the same name exist? If so, then return HTTP 409 Conflict.
if( await _context.Books.AnyAsync(x => x.Name == bookCreateDto.Name) )
{
this.ModelState.AddModelError( nameof(BookCreateDTO.Name), "Book already exists" );
BadRequestObjectResult conflictResult = this.BadRequest( this.ModelState );
// `BadRequestObjectResult` is HTTP 400 by default, change it to HTTP 409:
conflictResult.StatusCode = 409;
return conflictResult;
}
Book addedBook;
{
addedBook = this._mapper.Map<Book>( bookCreateDto );
_ = this._context.Books.Add( addedBook );
_ = await this._context.SaveChangesAsync();
}
BookCreateDTO responseDto = this._mapper.Map<BookCreateDTO>( addedBook );
return this.Ok( responseDto );
}
I've got a Web API OData controller which is using Delta to do partial updates of my entity.
In my entity framework model I've got a Version field. This is a rowversion in the SQL Server database and is mapped to a byte array in Entity Framework with its concurrency mode set to Fixed (it's using database first).
I'm using Fiddler to send back a partial update using a stale value for the Version field. I load the current record from my context and then I patch my changed fields over the top, which changes the values in the Version column without throwing an error; then, when I save changes on my context, everything is saved without error. Obviously this is expected - the entity being saved has not been detached from the context - so how can I implement optimistic concurrency with a Delta?
I'm using the very latest versions of everything (or was, just before Christmas), so Entity Framework 6.0.1 and OData 5.6.0.
public IHttpActionResult Put([FromODataUri]int key, [FromBody]Delta<Job> delta)
{
using (var tran = new TransactionScope())
{
Job j = this._context.Jobs.SingleOrDefault(x => x.JobId == key);
delta.Patch(j);
this._context.SaveChanges();
tran.Complete();
return Ok(j);
}
}
Thanks
I've just come across this too using Entity Framework 6 and Web API 2 OData controllers.
The EF DbContext seems to use the original value of the timestamp, obtained when the entity was loaded at the start of the PUT/PATCH method, for the concurrency check when the subsequent update takes place. Updating the current value of the timestamp to a value different from that in the database before saving changes does not result in a concurrency error.
I've found you can "fix" this behaviour by forcing the original value of the timestamp to be that of the current in the context.
For example, you can do this by overriding SaveChanges on the context, e.g.:
public partial class DataContext
{
public override int SaveChanges()
{
foreach (DbEntityEntry<Job> entry in ChangeTracker.Entries<Job>().Where(u => u.State == EntityState.Modified))
entry.Property("Timestamp").OriginalValue = entry.Property("Timestamp").CurrentValue;
return base.SaveChanges();
}
}
(Assuming the concurrency column is named "Timestamp" and the concurrency mode for this column is set to "Fixed" in the EDMX)
A further improvement to this would be to write and apply a custom interface to all your models requiring this fix and just replace "Job" with the interface in the code above.
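For example, a sketch of that refactoring (IHasTimestamp is a name invented here; your Job entity would implement it via a partial class, which EDMX-generated classes already are):
public interface IHasTimestamp
{
    byte[] Timestamp { get; set; }
}

public partial class Job : IHasTimestamp { } // the generated class already has Timestamp

public partial class DataContext
{
    public override int SaveChanges()
    {
        // EF6's ChangeTracker.Entries<T>() also matches entities that merely
        // implement T, so one loop covers every entity type opted in to the fix.
        foreach (DbEntityEntry<IHasTimestamp> entry in ChangeTracker.Entries<IHasTimestamp>().Where(e => e.State == EntityState.Modified))
            entry.Property(e => e.Timestamp).OriginalValue = entry.Property(e => e.Timestamp).CurrentValue;
        return base.SaveChanges();
    }
}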
Feedback from Rowan in the Entity Framework Team (4th August 2015):
This is by design. In some cases it is perfectly valid to update a concurrency token, in which case we need the current value to hold the value it should be set to and the original value to contain the value we should check against. For example, you could configure Person.LastName as a concurrency token. This is one of the downsides of the "query and update" pattern being used in this action. The logic you added to set the correct original value is the right approach to use in this scenario.
When you're posting the data to the server, you need to send the RowVersion field as well. If you're testing with Fiddler, get the latest RowVersion value from your database and add it to your request body.
It should be something like:
RowVersion: "AAAAAAAAB9E="
If it's a web page, then while you're loading the data from the server, also get the RowVersion field, keep it in a hidden field, and send it back to the server along with the other changes.
Basically, when you call the PATCH method, the RowVersion field needs to be in your patch object.
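For example, the raw request might look something like this (the URL, entity key, and changed field are placeholders):
PATCH /odata/Jobs(1) HTTP/1.1
Content-Type: application/json

{
  "SomeField": "new value",
  "RowVersion": "AAAAAAAAB9E="
}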
Then update your code like this:
Job j = this._context.Jobs.SingleOrDefault(x => x.JobId == key);
// Concurrency check
if (!j.RowVersion.SequenceEqual(delta.GetEntity().RowVersion))
{
    return Conflict();
}
delta.Patch(j); // apply the incoming changes after the check passes
this._context.Entry(j).State = EntityState.Modified; // Probably you need this line as well?
this._context.SaveChanges();
Simple - the way you always do it with Entity Framework: add a Timestamp field and set that field's Concurrency Mode to Fixed. That makes sure EF knows this timestamp field is not ordinary data but is used to determine versioning: it is added to the WHERE clause of every UPDATE, so a stale value affects zero rows and triggers a concurrency exception.
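The question uses database-first, where Concurrency Mode is set on the property in the EDMX designer. For reference, the rough code-first equivalent of the same setup is:
public class Job
{
    public int JobId { get; set; }

    // Maps to a SQL Server rowversion column; [Timestamp] also marks the
    // property as a concurrency token (the code-first analogue of Fixed).
    [Timestamp]
    public byte[] Version { get; set; }
}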
See also http://blogs.msdn.com/b/alexj/archive/2009/05/20/tip-19-how-to-use-optimistic-concurrency-in-the-entity-framework.aspx
Some background:
Working with:
.NET 4.5 (thinking of migrating to 4.5.1 if it's painless)
Web Forms
Entity Framework 5, Lazy Loading enabled
Context Per Request
IIS 8
Windows 2012 Datacenter
Point of concern: Memory Usage
In the project we are currently working on - probably our first bigger project - we often read large chunks of data, coming from CSV imports, that are likely to stay the same for very long periods of time.
Unless someone explicitly re-imports the CSV data, it is guaranteed to be the same. This happens in more than one place in our project, and a similar approach is used for some regular documents that are often read by users. We've decided to cache this data in the HttpRuntime cache.
It goes like this (we pull about 15,000 records consisting mostly of strings):
//myObject and related methods are placeholders
public static List<myObject> GetMyCachedObjects()
{
if (CacheManager.Exists(KeyConstants.keyConstantForMyObject))
{
return CacheManager.Get(KeyConstants.keyConstantForMyObject) as List<myObject>;
}
else
{
List<myObject> myObjectList = framework.objectProvider.GetMyObjects();
CacheManager.Add(KeyConstants.keyConstantForMyObject, myObjectList, true, 5000);
return myObjectList;
}
}
The data retrieving for the above method is very simple and looks like this:
public List<myObject> GetMyObjects()
{
return context.myObjectsTable.AsNoTracking().ToList();
}
There are probably things to be said about the code structure, but that's not my concern at the moment.
I began profiling our project as soon as I saw high memory usage and found many parts where our code could be optimized. I had never faced 300 simultaneous users before, and our internal tests, done by ourselves, were not enough to surface the memory issues. I've highlighted and fixed numerous memory leaks, but I'd like to understand some Entity Framework related unknowns.
Given the above example, and using ANTS Profiler, I've noticed that 'myObject' and other similar objects are referencing many System.Data.Entity.DynamicProxies.myObject instances; additionally there are lots of EntityKeys which hold on to integers. They aren't taking much memory, but their count is relatively high.
For instance, 124 instances of 'myObject' are referencing nearly 300 System.Data.Entity.DynamicProxies instances.
Usually the reference chain looks like this, whatever the object is: a cache entry, then the object I've cached (and I now noticed many of them had been detached from the dbContext prior to caching), then the dynamic proxies, then the ObjectContext. I've no idea how to untie them.
My progress:
I did some research and found out that I might be caching something Entity Framework related together with those objects. I've pulled them with NoTracking, but there are still those DynamicProxies in memory, which probably hold on to other things as well.
Important: I've observed some live instances of ObjectContext (74), slowly growing, but no instances of my unitOfWork, which holds the dbContext. Those seem to be disposed properly on a per-request basis.
I know how to detach, attach, or modify the state of an entry in my dbContext, which is wrapped in a unitOfWork, and I often do it. However, that doesn't seem to be enough, or I am asking for the impossible.
Questions:
Basically, what am I doing wrong with my caching approach when it comes to Entity Framework?
Is the growing number of ObjectContexts in memory a concern? I know the cache will eventually expire, but I'm worried about open connections or anything else this context might be holding.
Should I be detaching everything from the context before inserting it into the cache?
If yes, what is the best approach? Especially with a List I cannot think of anything else but iterating over the collection and calling Detach one by one.
Bonus question: About 40% of the consumed memory is free (unallocated); I've no idea why .NET is reserving so much free memory in advance.
You can try projecting into a non-entity class containing only the specific properties you need, using the Select method.
public class MyObject2 {
    public int ID { get; set; }
    public string Name { get; set; }
}

public List<MyObject2> GetObjects() {
    return framework.provider.GetObjects().Select(
        x => new MyObject2 {
            ID = x.ID,
            Name = x.Name
        }).ToList();
}
Since you will be storing plain C# objects, you will not have to worry about dynamic proxies, and you will not have to call Detach on anything at all. Also, you can store only the few properties you actually need.
Even if you disable tracking, you will still see dynamic proxies, because EF uses a dynamic class derived from your class which stores extra metadata for the entity (relationship information, e.g. the names of foreign keys to other entities).
Steps to reduce memory here:
Renew the context often, e.g. context = new MyContext();
Don't try to delete content from the context or set entries to detached - it hangs around like a fart in a phone box.
But if possible you should be using short-lived contexts; that's best practice:
using (var context = new MyContext()) { .... }
With your context you can also set configuration options:
this.Configuration.LazyLoadingEnabled = false;
this.Configuration.ProxyCreationEnabled = false; //<<<<<<<<<<< THIS one
this.Configuration.AutoDetectChangesEnabled = false;
You can disable proxies if you still feel they are hogging memory, but that may be unnecessary if you apply using to the context in the first place.
I would redesign the solution a bit:
You are storing all the data as a single cache entry: I would move this and have an entry per cached item.
You are using the HttpRuntime cache: I would use AppFabric Caching - also from MS, also free.
I'm not sure where you are calling that code from: I would call it on application start, so all the data is in memory by the time a user needs it.
You are using Entity SQL: for this I would use an EntityDataReader http://msdn.microsoft.com/en-us/library/system.data.entityclient.entitydatareader(v=vs.110).aspx
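A rough sketch of the EntityDataReader approach (System.Data.EntityClient; the connection-string name and entity-set name are placeholders):
using (var conn = new EntityConnection("name=MyEntities"))
{
    conn.Open();
    using (EntityCommand cmd = conn.CreateCommand())
    {
        cmd.CommandText = "SELECT VALUE o FROM MyEntities.myObjectsTable AS o";
        // EntityCommand requires SequentialAccess
        using (EntityDataReader reader = cmd.ExecuteReader(CommandBehavior.SequentialAccess))
        {
            while (reader.Read())
            {
                // Read columns in ordinal order (required by SequentialAccess)
                // and build plain objects - no proxies, no change tracking.
            }
        }
    }
}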
See also:
http://msdn.microsoft.com/en-us/data/hh949853.aspx
I'm trying to query UserMetaData for a single record using the following query
using (JonTestDataEntities context = new JonTestDataEntities())
{
return context.UserMetaData.Single(user => user.User.ID == id);
}
The error I'm receiving is: "The ObjectContext instance has been disposed and can no longer be used for operations that require a connection." It is trying to lazy-load Group for the UserMetaData record. How can I change my query to prevent this error?
As the message says, you cannot lazily load it after the function returns, because you've already disposed the context. If you want to be able to access Group, you can make sure you fetch it earlier. The extension method .Include(entity => entity.NavigationProperty) (from the System.Data.Entity namespace) is how you can express this:
using (JonTestDataEntities context = new JonTestDataEntities())
{
return context.UserMetaData.Include(user => user.Group).Single(user => user.User.ID == id);
}
Also consider adding .AsNoTracking(), since your context will be gone anyway.
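Combined, the query would look something like this:
using (JonTestDataEntities context = new JonTestDataEntities())
{
    return context.UserMetaData
        .AsNoTracking()
        .Include(user => user.Group)
        .Single(user => user.User.ID == id);
}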
You need to create a strong type that matches the signature of your result set. Entity Framework is creating an anonymous type and the anonymous type is disposed after the using statement goes out of scope.
So assigning to a strong type avoids the issue altogether. I'd recommend creating a class called UserDTO since you're really creating a data transfer object in this case. The benefit of the dedicated DTO is you can include only the necessary properties to keep your response as lightweight as possible.
Basically I want to make my script service serialise only the properties that are not null on an array of objects I am returning... So this:
{"k":"9wjH38dKw823","s":10,"f":null,"l":null,"j":null,"p":null,"z":null,"i":null,"c":null,"m":0,"t":-1,"u":2}
would be
{"k":"9wjH38dKw823","s":10,"m":0,"t":-1,"u":2}
Does anyone know if this is possible?
Basically the reason for this is that null values represent unchanged properties. A local copy is kept in the JavaScript that is just updated, to reduce traffic to the server. Changed values are then merged.
You can create a custom JavaScriptConverter class for the JSON serialization process to use to handle your object, and then put the necessary logic in the Serialize method of that class to exclude the properties that are null.
This article has a clear step-by-step discussion of the process involved in creating it.
You probably would not need to actually implement the Deserialize method (it can throw a NotImplementedException) if you are not passing that type of object as an input parameter to your web services.
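A minimal sketch of such a converter (MyDto stands in for your return type):
public class SkipNullsConverter : JavaScriptConverter
{
    public override IEnumerable<Type> SupportedTypes
    {
        get { return new[] { typeof(MyDto) }; }
    }

    public override IDictionary<string, object> Serialize(object obj, JavaScriptSerializer serializer)
    {
        var result = new Dictionary<string, object>();
        foreach (PropertyInfo property in obj.GetType().GetProperties())
        {
            object value = property.GetValue(obj, null);
            if (value != null) // omit null (unchanged) properties entirely
                result[property.Name] = value;
        }
        return result;
    }

    public override object Deserialize(IDictionary<string, object> dictionary, Type type, JavaScriptSerializer serializer)
    {
        throw new NotImplementedException(); // output-only type
    }
}
Register it in code via serializer.RegisterConverters(new[] { new SkipNullsConverter() }), or for ASP.NET script services via the <jsonSerialization><converters> element in web.config.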