Can I use NHibernate Criteria to project an entity and its child collection onto a class?

I'm using NH Criteria to retrieve an entity and project selective fields onto a custom class (a bit like projecting data onto a ViewModel for display on an MVC view).
This is easy enough using ProjectionList:
var emailCriteria = mSession.CreateCriteria<Email>();
emailCriteria.SetProjection(
Projections.ProjectionList()
.Add(Projections.Property("Subject"), "Subject")
);
emailCriteria.SetResultTransformer(Transformers.AliasToBean<EmailDataModel>());
var result = emailCriteria.List<EmailDataModel>();
However, my entity contains a collection, and I want to bring that back too, and project it as a collection onto my custom class.
My domain model looks (in simplified form) like this:
public class Email {
    public string Subject { get; set; }
    public List<EmailAttachment> Attachments { get; set; }
    // etc...
}
public class EmailAttachment {
    public UploadedFile File { get; set; }
}
public class UploadedFile {
    public string Filename { get; set; }
    public UploadedFileData Data { get; set; }
}
public class UploadedFileData {
    public byte[] Data { get; set; }
}
Here are the "data model" classes I want to project onto:
public class EmailDataModel {
    public string Subject { get; set; }
    public List<EmailAttachmentDataModel> Attachments { get; set; }
}
public class EmailAttachmentDataModel {
    public string Filename { get; set; }
    public byte[] Data { get; set; }
}
Now I know these models look very similar, and you'd be forgiven for thinking "what's the point?", but that's because I've simplified them. It's nice to be able to flatten my domain objects into handy data models.
My big problem is figuring out how to access the necessary fields from deep down in my child objects (in this case, UploadedFile.Filename and UploadedFileData.Data), and project them as an EmailAttachmentDataModel collection onto my EmailDataModel.
I've read a lot of articles online which discuss accessing child collections - using either EmailCriteria.CreateAlias or EmailCriteria.CreateQuery - but I haven't found anything which explains how to project a child collection AS a collection.
I hope this will be a useful exercise for anyone interested in tinkering with NH Criteria queries.

OK, I think I've resolved this by upgrading to NHibernate 3 and using QueryOver. Here's what my code looks like now:
//Declare aliases for the entities involved
Email email = null;
EmailAttachment attachment = null;
UploadedFile file = null;
UploadedFileData fileData = null;
//Select data from the parent and its child objects
var results = mSession.QueryOver<Email>(() => email)
    .JoinAlias(() => email.Attachments, () => attachment, JoinType.LeftOuterJoin)
    .JoinAlias(() => attachment.File, () => file, JoinType.LeftOuterJoin)
    .JoinAlias(() => file.Data, () => fileData, JoinType.LeftOuterJoin)
    .TransformUsing(Transformers.DistinctRootEntity)
    .List<Email>()
    //Loop through the results, projecting fields onto the POCO
    .Select(x => new EmailDataModel()
    {
        Id = x.Id,
        Body = x.Body,
        AttachmentCount = x.Attachments.Count(),
        FromAddress = x.FromAddress,
        //Loop through the child collection, projecting fields onto the POCO
        Attachments = x.Attachments.Select(attach => new EmailAttachmentDataModel()
        {
            Data = attach.File.Data.Data,
            Filename = attach.File.Filename,
            Id = attach.Id
        }).ToArray() //NB Now projecting this collection as an array, not a list
    }).ToArray();
So there it is. Our result is a flattened class which contains the data we need, plus a collection of attachments (which each contain just two fields from our data structure - nicely flattened).
Why should you do this?
It simplifies the result by flattening into only the fields I really want.
My data is now safely encapsulated in a class which can be passed around without fear of accidentally updating my data (which could happen if you just pass back NH data entities).
Finally (and most importantly), the code above generates only one SELECT statement. Had I stuck with my original Criteria query, it would have generated one SELECT for each row, plus more for the children further down the chain. That's fine if you're dealing with small numbers, but not if you're potentially returning thousands of rows (as I will be in this instance - it's a web service for an email engine).
I hope this has been useful for anybody wishing to push a bit further into NHibernate querying. Personally I'm just happy I can now get on with my life!

Related

EF Core Update with List

To update a record in SQL Server using Entity Framework Core, I query the record I need to update, make changes to the object, and then call .SaveChanges(). This works nicely and cleanly.
For example:
var emp = _context.Employee.FirstOrDefault(item => item.IdEmployee == Data.IdEmployee);
emp.IdPosition = Data.IdPosition;
await _context.SaveChangesAsync();
But is there a standard method if I want to update multiple records?
My first approach was to pass a list to the controller, but then I would need to go through that list and save changes for each item; I never really finished this option because I regarded it as suboptimal.
For now, instead of passing a list to the controller, I pass each object to the controller inside a for loop (which is kind of the same thing...):
for (int i = 0; i < ObjectList.Count; i++)
{
    /* Some code */
    var httpResponseObject = await MyRepositories.Post<Object>(url + "/Controller", Object);
}
And then, in the controller, I do the same thing as before when updating a single record, once for each of the records...
I don't feel this is the best possible approach, but I haven't found another way, yet.
What would be the optimal way of doing this?
Your question has nothing to do with Blazor... However, I'm not sure I understand what the issue is. When you call the SaveChangesAsync method, all changes in your context are committed to the database. You don't have to pass one object at a time... you can pass a list of objects.
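For illustration, a controller action along these lines would accept the whole list and commit everything with a single SaveChangesAsync call (the DTO and property names here are hypothetical, not taken from the question):
[HttpPut]
public async Task<IActionResult> UpdateEmployees([FromBody] List<EmployeeDto> employees)
{
    foreach (var dto in employees)
    {
        // Load the tracked entity and apply the change
        var emp = await _context.Employee.FindAsync(dto.IdEmployee);
        if (emp != null)
        {
            emp.IdPosition = dto.IdPosition;
        }
    }
    // One call persists every tracked change in a single transaction
    await _context.SaveChangesAsync();
    return NoContent();
}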
Hope this helps...
Updating records in bulk using Entity Framework or other Object Relational Mapping (ORM) libraries is a common challenge because they will run an UPDATE command for every record. You could try using Entity Framework Plus, which is an extension to do bulk updates.
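As a rough sketch of what that can look like with Entity Framework Plus (the filter and values below are hypothetical; check the library's documentation for the exact batch-update API before relying on this):
using Z.EntityFramework.Plus; // EF Plus extension methods

// Updates the matching rows in a single set-based statement, without loading the entities
_context.Employee
    .Where(e => e.IdPosition == oldPosition)
    .Update(e => new Employee { IdPosition = newPosition });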
If updating multiple records with a single call is critical for you, I would recommend just writing a stored procedure and calling it from your service. Entity Framework can also run direct queries and stored procedures.
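For example, a set-based stored procedure could be invoked from EF Core roughly like this (the procedure name and parameter are placeholders, not something defined in the question):
// Executes a hypothetical stored procedure that updates many rows in one database call
var affected = await _context.Database.ExecuteSqlRawAsync(
    "EXEC dbo.ReorderEmployeePositions @IdDepartment = {0}", departmentId);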
It looks like the user makes some changes and then a save action needs to persist multiple records at the same time. You could trigger multiple AJAX calls or, if you need to, just one.
What I would do is create an endpoint (an API controller and an action) that's specific to your needs. For example, to update the position of records in a table:
Controller:
/DataOrder
Action:
[HttpPut]
public async Task Update([FromBody] DataChanges changes)
{
    foreach (var change in changes.Items)
    {
        var dbRecord = _context.Employees.Find(change.RecordId);
        dbRecord.IdPosition = change.Position;
    }
    await _context.SaveChangesAsync();
}
public class DataChanges
{
    public List<DataChange> Items { get; set; }

    public DataChanges()
    {
        Items = new List<DataChange>();
    }
}
public class DataChange
{
    public int RecordId { get; set; }
    public int Position { get; set; }
}
The foreach statement will execute an UPDATE for every record. If you want a single database call, however, you can write a SQL query or have a stored procedure in the database and pass the data as a DataTable parameter instead.
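A hedged sketch of the DataTable route (this assumes a user-defined table type and a matching stored procedure exist in the database; both names below are hypothetical):
using System.Data;
using Microsoft.Data.SqlClient;

// Build a DataTable that matches the shape of the table type dbo.PositionChangeType
var table = new DataTable();
table.Columns.Add("RecordId", typeof(int));
table.Columns.Add("Position", typeof(int));
foreach (var change in changes.Items)
{
    table.Rows.Add(change.RecordId, change.Position);
}

// Pass the whole set to the database in a single call
var parameter = new SqlParameter("@Changes", SqlDbType.Structured)
{
    TypeName = "dbo.PositionChangeType",
    Value = table
};
await _context.Database.ExecuteSqlRawAsync("EXEC dbo.UpdatePositions @Changes", parameter);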

Loading multiple sets of data in a content page in ASP.NET MVC Entity Framework

I need to load multiple entity types in my View page. I am using a ViewModel for this purpose. However, I need to make around 5-6 database calls to load each set of data and assign them to the relevant properties of the ViewModel. I wonder if this is a recommended approach, since it requires multiple database calls. Or am I being over-concerned about this? Here is a snapshot of my code:
var model = new EntryListVM();
string userid = "";
if (ViewBag.CurrentUserId == null)
userid = User.Identity.GetUserId();
else
userid = ViewBag.CurrentUserId;
ViewBag.CurrentUserId = userid;
//First database call
model.DiscussionWall = db.DiscussionWalls.Find(wallId);
//Second database call, to learn whether the current student has any entry
model.DiscussionWall.DoesStudentHasEntry = db.Entries.Any(ent => ent.DiscussionWallId == wallId && ent.UserId == userid);
model.PageIndex = pageIndex;
//Third database call
model.TeacherBadges = db.Badges.Where(b => b.CourseId == model.DiscussionWall.CourseId && b.IsSystemBadge == false && b.IsEnabled == true).ToList();
//Fourth database call
model.Reactions = db.Reactions.Where(re => re.CourseId == model.DiscussionWall.CourseId).ToList();
int entryPageSize = Convert.ToInt32(ConfigurationManager.AppSettings["EntryPageSize"]);
int firstChildSize = Convert.ToInt32(ConfigurationManager.AppSettings["FirstChildSize"]);
List<ViewEntryRecord> entryviews = new List<ViewEntryRecord>();
bool constrainedToGroup = false;
if (!User.IsInRole("Instructor") && model.DiscussionWall.ConstrainedToGroups)
{
constrainedToGroup = true;
}
//Fifth database call USING VIEWS
//I used views here because of paginating also to bring the first
//two descendants of every entry
entryviews = db.Database.SqlQuery<ViewEntryRecord>("DECLARE @return_value int;EXEC @return_value = [dbo].[FetchMainEntries] @PageIndex = {0}, @PageSize = {1}, @DiscussionWallId = {2}, @ChildSize={3}, @UserId={4}, @ConstrainedToGroup={5};SELECT 'Return Value' = @return_value;", pageIndex, entryPageSize, wallId, firstChildSize, userid, constrainedToGroup).ToList();
model.Entries = new List<Entry>();
//THIS FUNCTION MAPS entryviews to POCO classes
model.Entries = ControllerUtility.ConvertQueryResultsToEntryList(entryviews);
//Sixth database call
var user = db.Users.Single(u => u.Id == userid);
model.User = user;
I wonder if this is too much of a burden for the initial page load.
I could use a SQL view to read all the data at once, but I suspect the resulting data set would be too complicated to manage.
Another option could be using AJAX to load the additional results after the page (with the main data) has loaded. For example, I could load TeacherBadges with AJAX once the page has loaded.
I wonder which strategy is more effective and recommended? Are there specific cases when a particular strategy could be more useful?
Thanks!
It all depends on your scenario - different scenarios call for different approaches. There is no single right way of doing things that are similar in nature; what works for me might not work for you. Ever heard the saying that there are many ways to kill a cat? Well, this certainly applies to programming.
I am going to answer based on what I think you are asking. Your questions are very broad and not that specific.
However, I am not sure if this is a recommended approach since it requires multiple database calls.
Sometimes you need to do one database call to get data, and sometimes you need to do more than one database call to get the data. For example:
User details with addresses: one call for user and one call for addresses
User details: one call
I am using ViewModel for this purpose.
Using view models for your views is a good thing. If you want to read up more on what I had to say about view models then you can go and read an answer that I gave on the topic:
What is ViewModel in MVC?
View models are ideal for when you have data that is coming from multiple datasets. View models can also be used to display data coming from one dataset, for example:
Displaying user details with multiple addresses
Displaying only user details
I read the data in the controller in separate linq statements, and assign them to the relevant List property of the ViewModel.
I would not always return a list - it all depends on what you need.
If I have a single object to return then I will populate a single object:
User user = userRepository.GetById(userId);
If I have a list of objects to return then I will return a list of objects:
List<User> users = userRepository.GetAll();
It is of no use to return a single object and then to populate a list for this object:
List<User> user = userRepository.GetByUserId(userId).ToList();
Second option could be using SQL-View to read all data with one database call, and then map them to the entities properly in controller.
This is similar to your first question: how you return your data at the database level is up to you. It can be stored procedures or views. I personally prefer stored procedures; I have never used views before. Irrespective of what you choose, your above-mentioned repository methods should still look the same.
Third option could be using Ajax to load the additional results after the page loading (with the main data) is completed.
You can do this if you want to, but I would not do it if it is not really needed. I try to load data on page load and get as much of it onto the screen as possible before the page has fully loaded. There have been times when I had to go the AJAX route after the page was loaded, for example making an AJAX call to populate an HTML table.
If you really just need to display data, then do just that; you do not need any fancy ways of doing this. If you later need to change the data on screen, then AJAX is a good fit.
I wonder which strategy is more effective and recommended? Are there specific cases when a particular strategy could be more useful?
Let us say you want to display a list of users. We do a database call and return the list to the view. I do not normally use view models if I only return a list:
public class UserController : Controller
{
    private IUserRepository userRepository;
    private IAddressRepository addressRepository;

    public UserController(IUserRepository userRepository, IAddressRepository addressRepository)
    {
        this.userRepository = userRepository;
        this.addressRepository = addressRepository;
    }

    public ActionResult Index()
    {
        List<User> users = userRepository.GetAll();
        return View(users);
    }
}
And your view could look like this:
@model List<YourProject.Models.User>
@if (Model.Count > 0)
{
    foreach (var user in Model)
    {
        <div>@user.Name</div>
    }
}
If you need to get a single user's details and a list of addresses, then I will make use of a view model because now I need to display data coming from multiple datasets. So a user view model can look something like this:
public class UserViewModel
{
    public UserViewModel()
    {
        Addresses = new List<Address>();
    }

    public int Id { get; set; }
    public string Name { get; set; }
    public List<Address> Addresses { get; set; }
}
Then your details action method could look like this:
public ActionResult Details(int id)
{
    User user = userRepository.GetById(id);
    UserViewModel model = new UserViewModel();
    model.Name = user.Name;
    model.Addresses = addressRepository.GetByUserId(id);
    return View(model);
}
And then you need to display the user details and addresses in the view:
@model YourProject.ViewModels.UserViewModel
<div>First Name: @Model.Name</div>
<div>
    @if (Model.Addresses.Count > 0)
    {
        foreach (var address in Model.Addresses)
        {
            <div>@address.Line1</div>
            <div>@address.Line2</div>
            <div>@address.Line3</div>
            <div>@address.PostalCode</div>
        }
    }
</div>
I hope this helps. It might be too broad an answer, but it can guide you on the correct path.
Includes for linked data
For linked data it's simple (you probably know this way):
var users = context.Users.Include(user => user.Settings).ToList();
It queries all users and pre-loads Settings for each user.
Use anonymous class for different data sets
Here is an example:
context.Users.Select(user => new
{
User = user,
Settings = context.Settings
.Where(setting => setting.UserId == user.Id)
.ToList()
}).ToList();
You still kinda need to choose your main query collection (Users in this case), but it's an option. Hope it helps.

Azure appfabric cache, serialization issues with anonymous classes from Linq to SQL

I'm having trouble putting a List<> of an anonymous class into the cache.
var cache = new DataCacheFactory().GetCache("default");
var parts = somethingIQueryable.Select(i => new { i.s1, i.s2 } );
cache.Put("somekey", parts.ToList(), TimeSpan.FromMinutes(2));
This throws a serialization exception. However it works if I put the data in a class like this:
public class A { public string s1; public string s2; }
var cache = new DataCacheFactory().GetCache("default");
var parts = somethingIQueryable.Select(i => new A { s1 = i.s1, s2 = i.s2 } );
cache.Put("somekey", parts.ToList(), TimeSpan.FromMinutes(2));
I would rather not have to define classes for every little bit of data going into the cache though, and was wondering if there is a way to make the first example work?
You will not be able to serialize anonymous types and store them in the cache like this; unfortunately, you would need to create a List<A> and store that instead.
This is because there is nothing to compare the anonymous type against to do the serialization and deserialization. Put simply, the cache has no way of knowing what the anonymous type is because, as its name implies, it is anonymous.

different hashtable cacheItem with similar data values or separate cacheItems for each data value – which is an efficient approach?

I have broadly two different classes of data caching requirements based on data size:
1) Very small data (2-30 characters) – this includes things such as the type code for a given entityId. The system is based upon the concept of a parent-child entity hierarchy, and actions are authorized against values that are built in combination with the entity type code. Caching these type codes for different entities saves a database fetch.
2) Medium/large data – this is general data like product descriptions and pages.
I'm confused as to which approach is better suited for the first class of data.
I can cache it like this:
HttpRuntime.Cache.Insert("typeCode" + entityId, entityTypeCode);
or like this:
Dictionary<int, string> etCodes =
(Dictionary<int, string>)HttpRuntime.Cache["typeCode"];
etCodes[entityId] = entityTypeCode;
Clearly, in the second approach I'm saving on unnecessary cache items for each entityId.
Or is it okay to have the Cache object populated with several items of such small size?
Which of these approaches is better in terms of performance and overhead?
Personally I would take your second approach of one single object, but use a custom object instead of a Dictionary.
This would enable me to later control more aspects, like expiration of items within the object, or to change the implementation.
I would do it similarly to this:
public class MyCacheObject
{
    public static MyCacheObject Current
    {
        get
        {
            // ...Omitted locking here for simplification...
            var o = HttpRuntime.Cache["MyCacheObject"] as MyCacheObject;
            if (o == null)
            {
                o = new MyCacheObject();
                HttpRuntime.Cache["MyCacheObject"] = o;
            }
            return o;
        }
    }

    public object GetEntity(string id, string code)
    {
        // ...
    }

    public void SetEntity(object entity, string id, string code)
    {
        // ...
    }

    // ...
}
If you have a custom base class for the entities, the GetEntity and SetEntity methods could be optimized further.
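The original answer leaves GetEntity and SetEntity empty; purely as a hypothetical sketch (the key scheme and locking strategy are my assumptions, not part of the answer), they might be backed by a simple dictionary inside MyCacheObject:
private readonly Dictionary<string, object> mItems = new Dictionary<string, object>();
private readonly object mLock = new object();

public object GetEntity(string id, string code)
{
    lock (mLock)
    {
        object entity;
        mItems.TryGetValue(code + "|" + id, out entity);
        return entity; // null when the entity is not cached
    }
}

public void SetEntity(object entity, string id, string code)
{
    lock (mLock)
    {
        mItems[code + "|" + id] = entity;
    }
}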

How to dispose data context after usage

I have a class member that returns an IQueryable from a data context:
public static IQueryable<TB_Country> GetCountriesQ()
{
IQueryable<TB_Country> country;
Bn_Master_DataDataContext db = new Bn_Master_DataDataContext();
country = db.TB_Countries
.OrderBy(o => o.CountryName);
return country;
}
As you can see, I don't dispose the data context after usage, because if I dispose it, the code that calls this method cannot use the IQueryable (perhaps because of deferred execution?). How can I force immediate execution in this method, so that I can dispose the data context?
Thank you :D
The example given by Codeka is correct, and I would advise writing your code like this when the method is called by the presentation layer. However, disposing DataContext classes is a bit tricky, so I'd like to add something about this.
The domain objects generated by LINQ to SQL (in your case the TB_Country class) often contain a reference to the DataContext class. This internal reference is needed for lazy loading. When you access, for instance, a list of referenced objects (say, TB_Country.States), LINQ to SQL will query the database for you. This will also happen with lazy-loaded columns.
When you dispose the DataContext, you prevent it from being used again. Therefore, when you return a set of objects as you've done in your example, it is impossible to call the States property on a TB_Country instance, because it will throw an ObjectDisposedException.
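Purely as an illustration (not code from the original answer): if GetCountriesQ disposed its context before returning, a caller would run into exactly this problem.
// Hypothetical caller, assuming GetCountriesQ disposed its DataContext internally
IQueryable<TB_Country> countries = GetCountriesQ();
var first = countries.First(); // deferred execution: the query would already fail here, because the disposed context can no longer run it
var states = first.States;     // and even on a materialized entity, lazy loading States would throw ObjectDisposedException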
This does not mean that you shouldn't dispose the DataContext, because I believe you should. How you solve this depends a bit on the architecture you choose, but IMO you've basically got two options:
Option 1. Supply a DataContext to the GetCountriesQ method.
You normally want to do this when your method is an internal method in your business layer and it is part of a bigger (business) transaction. When you supply a DataContext from the outside, it is created outside the scope of the method, so the method shouldn't dispose it; you can dispose it at a higher layer. In that situation your method basically looks like this:
public static IQueryable<TB_Country> GetCountriesQ(
Bn_Master_DataDataContext db)
{
return db.TB_Countries.OrderBy(o => o.CountryName);
}
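For illustration (the filter here is hypothetical, just to show where execution happens), a caller of Option 1 could look like this, creating and disposing the context at the higher layer:
using (var db = new Bn_Master_DataDataContext())
{
    var countries = GetCountriesQ(db)
        .Where(c => c.CountryName.StartsWith("A"))
        .ToList(); // the query executes here, while db is still alive
    // ...use the countries before leaving the using block...
}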
Option 2. Don't return any domain objects from the GetCountriesQ method.
This solution is useful when the method is a public method in your business layer and will be called by the presentation layer. You can wrap the data in a specially crafted object (a DTO) that contains only data and no hidden references to the DataContext. This way you have full control over the communication with the database and you can dispose the DataContext as you should. I've written more about this on SO here. In that situation your method basically looks like this:
public static CountryDTO[] GetCountriesQ()
{
    using (var db = new Bn_Master_DataDataContext())
    {
        var countries =
            from country in db.TB_Countries
            orderby country.CountryName
            select new CountryDTO()
            {
                Name = country.CountryName,
                States = (
                    from state in country.States
                    orderby state.Name
                    select state.Name).ToList()
            };
        return countries.ToArray();
    }
}
public class CountryDTO
{
    public string Name { get; set; }
    public List<string> States { get; set; }
}
As you will read here there are some smart things you can do that make using DTOs less painful.
I hope this helps.
You can convert the queryable to a list, like so:
public static List<TB_Country> GetCountriesQ()
{
using(var db = new Bn_Master_DataDataContext())
{
return db.TB_Countries
.OrderBy(o => o.CountryName).ToList();
}
}
