Entity Framework - Including tables not mapped in data model? - asp.net

I think this question is probably fairly simple, but I've been searching around and haven't been able to find what I'm looking for.
My team and I are adding a new module to our existing web application. We already have an existing data model hooked up to our SQL database, and it's pretty huge... So for the new module I created a new EF data model directly from our database with the new tables for the new module. These new tables reference some of our existing tables via foreign keys, but when I add those tables, all of the foreign keys need to be mapped for that table, and their tables, and their tables... and it seems like a huge mess.
My question is: instead of adding the old tables to the data model, since I'm only referencing the IDs of our existing tables for foreign key purposes, can I just do an .Include("OldTable") somewhere in the DataContext class, or should I go back and add those tables to the model and remove all of their relationships? Or maybe some other method I'm not even aware of?
Sorry for the lack of code, this is more of a logic issue rather than a specific syntax issue.

The simple answer is no. You cannot include an entity that is not part of your model (= not mapped in the EDMX used by your current context).
The more complex answer is: in some very special cases you can, but it requires big changes to your development process and the way you work with EF and EDMX. Are you ready to maintain all EDMX files manually as XML? In that case EF offers a way to reference a whole conceptual model in another one and use one-way relations from the new model to the old model. It is a cheat because you will have multiple conceptual models (CSDL) but a single mapping file (MSL), a single storage description (SSDL) and a single context using all of them. Check this article for an example.

I'm not aware that you can use Include to reference tables outside of the EF diagram. To start working with EF you only need to bring a portion of the database in - if your first project works with a discrete functional area, which it probably does. This gets around the alarming mess you get when you import an entire legacy database. It scared me when I tried to do it.
In our similar situation - a big legacy system that used stored procedures - we only added the tables that we were directly working with at the time. Later on you can always add additional tables as and when you require them. Don't worry about foreign keys in the EF diagram that reference tables that aren't included. Entity Framework happily copes with this.
It does mean running two business layers though: one for Entity Framework and one for the old-style data access. Not a problem for us though. In fact, from what I've read about legacy system programming, it's probably the way to go - you have a business layer with your scruffy old stuff and a business layer with your sparkly new stuff. Keep moving from the old to the new until one day the old business layer evaporates into nothing.

You have to apply the [Include()] attribute to the member.
For example:
// This class allows you to attach custom attributes to properties
// of the Frame class.
//
// For example, the following marks the Xyz property as a
// required property and specifies the format for valid values:
// [Required]
// [RegularExpression("[A-Z][A-Za-z0-9]*")]
// [StringLength(32)]
// public string Xyz { get; set; }
internal sealed class FrameMetadata
{
    // Metadata classes are not meant to be instantiated.
    private FrameMetadata()
    {
    }

    [Include()]
    public EntityCollection<EventFrame> EventFrames { get; set; }

    public Nullable<int> Height { get; set; }
    public Guid ID { get; set; }
    public Layout Layout { get; set; }
    public Nullable<Guid> LayoutID { get; set; }
    public Nullable<int> Left { get; set; }
    public string Name { get; set; }
    public Nullable<int> Top { get; set; }
    public Nullable<int> Width { get; set; }
}
And the LINQ query should use .Include("BaseTable.IncludedTable") syntax.
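For example, eager loading through the navigation property might look like this (a sketch; the Frames set and the layoutId variable are assumptions, not from the question):

// Eager-load the EventFrames navigation property using the string overload of Include.
var frames = context.Frames
    .Include("EventFrames")
    .Where(f => f.LayoutID == layoutId)
    .ToList();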
And for the entities which are not part of your model you have to create some view classes.

Related

How to make changes in POCO file in Entity Framework so changes are retained

I am using the Database First approach in Entity Framework. I have a table which contains a field called CustomerName and it is NOT NULL.
The generated POCO is given below.
public partial class Customers
{
public string CustomerName {get; set;}
}
I have two questions.
How can I make this a required field so that my code becomes like the class shown below? As you know, the POCO is automatically generated, so after I add the attribute and then update the model from the database, my change is removed.
public partial class Customers
{
[Required]
public string CustomerName {get; set;}
}
The second question is: why doesn't EF automatically apply [Required] to this field when generating the code? The field is NOT NULL in the database, so shouldn't this be done automatically without having to write [Required] manually?
Here's the answer if you're using EF6:
Notice that the generated Customers class is partial; we're going to leverage that. First, we'll need to create a new Customers partial class with the exact same name within the exact same namespace:
namespace WebApp.TheSameNamespaceAsTheGeneratedCustomersClass
{
public partial class Customers
{
}
}
Now both of these partials make up the same class; its source code is simply split across different files, one of which is generated by the tool and one that you write by hand. The difference, of course, is that you can change the latter without it getting rewritten all the time.
Note that the namespace has to match but the folder that contains the class file doesn't.
Now we need to create the metadata class that contains all the necessary attributes and decorate our Customers partial with it, like so:
namespace WebApp.TheSameNamespaceAsTheGeneratedCustomersClass
{
[MetadataType(typeof(CustomersMetadata))] //decorating the entity with the metadata
public partial class Customers
{
}
public class CustomersMetadata //metadata class
{
[Required] //your data annotations
public string CustomerName { get; set; } //the property name has to match
}
}
and that's it.
Is it verbose? Yeah, but that decision was made when db first was chosen.
A word of caution:
If you're doing this to use entity classes as data models in MVC, generally speaking, that's considered a bad practice. The recommended way is to create separate model classes and map data from and to entities. There are some security reasons for that, which you should research before you make the final decision.
If you are using EF Core, try adding the --data-annotations flag to your scaffold command.
For more info see: https://learn.microsoft.com/en-us/ef/core/managing-schemas/scaffolding?tabs=dotnet-core-cli#fluent-api-or-data-annotations
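For example (a sketch; the connection string and provider are placeholders for your own):

dotnet ef dbcontext scaffold "Server=.;Database=MyDb;Trusted_Connection=True;" Microsoft.EntityFrameworkCore.SqlServer --data-annotations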
EF doesn't have any means of validating your data in your POCO classes when it generates SQL. That is why it is recommended to have a corresponding model layer (model classes that correspond to your entities) that your application can manipulate. You can use something like AutoMapper to map between models and entities. That way you can modify your model classes without impacting your EF entities.
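A minimal sketch of that idea with AutoMapper (CustomerModel is a hypothetical hand-written class, not something generated by EF):

// One-time configuration mapping the EF entity to the application model class.
var config = new MapperConfiguration(cfg => cfg.CreateMap<Customers, CustomerModel>());
var mapper = config.CreateMapper();

// Map an entity loaded by EF into the model the rest of the application uses.
CustomerModel model = mapper.Map<CustomerModel>(customerEntity);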

DDD Accessing an Entity by an indirect parent Entity Id

I am building an app that integrates Plaid API to access user bank info (logins, accounts, transactions, etc.). I'm trying to follow DDD principles.
Here is a general idea of how the Plaid API flow works:
A user provides his email/password for some bank institution. If valid, a plaid Item is created. This object associates a user to a set of bank credentials and contains an access token which can be used to further interact with the API.
Every plaid Item has access to a certain set of bank Accounts.
Every bank Account has a set of Transactions
So far, I created 3 entities in my domain layer: Item, Account and Transaction. I created a repository with basic CRUD operations for each.
public class Item
{
    public string Id { get; set; }
    public string AccessToken { get; set; }
    public string UserId { get; set; }
    ...
}

public class Account
{
    public string Id { get; set; }
    public string ItemId { get; set; }
    ...
}

public class Transaction
{
    public string Id { get; set; }
    public string AccountId { get; set; }
    ...
}
As you can see, the relationship between these entities is:
User HAS Item -> Item HAS Accounts -> Account HAS Transactions
My question is, what happens when I need to find an entity by an indirect parent? For example: GetTransactionsByItemId or GetAccountsByUserId. Based on DDD, where should this logic go?
Because of how my data is structured (a NoSQL chain of 1-many relations), I know I have to do these sorts of queries in multiple steps. However, I've read that a Repository should only be concerned with its own entity, so I suspect that injecting the ItemsRepository and AccountsRepository into the TransactionsRepository to add a GetTransactionsByItemId method might not be a good idea.
I also read about injecting many repositories to a Service and managing all these "joins" from inside. However, I can't come up with a name for this Service, so I'm worried that's because conceptually this doesn't make much sense.
I also read about Aggregates but I'm not sure if I recognize a root in these entities.
Another option I can think of is to try shortening the relationships, for example by adding an ItemId to every transaction. However, this would have to be a hack because of how I get the data from the API.
I would say your aggregate root would be the Item. If I got the structure right, Accounts cannot exist without Items and Transactions cannot exist without an Account. So you could be fine with just the ItemsRepository:
public class ItemsRepository
{
    public async Task<Item> GetById(string id, IncludesSpec includes)
    {
        return await this.context.Items
            .Where(c => c.Id == id)
            .Include(c => c.Accounts).ThenInclude(c => c.Transactions)
            .SingleOrDefaultAsync();
    }
}
Then you get an Item with all the related data loaded into it. The IncludesSpec is up to you: it would describe which includes should be made, and the includes would be added dynamically in the repository method.
As of EF Core 5 you can do filtered Includes, like .Include(c => c.Accounts.Where(...)), so you could further narrow the actual include down based on your requirements. You could pass another parameter which would contain this filter information.
Also, your Item should expose Accounts as a read-only collection (use a backing field for EF) and provide an AddAccount() method, so that nobody can modify your DDD Item as a plain entity.
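A rough sketch of that shape, assuming EF Core is configured to use the _accounts backing field (names follow the entities in the question):

public class Item
{
    private readonly List<Account> _accounts = new List<Account>();

    public string Id { get; private set; }
    public string AccessToken { get; private set; }
    public string UserId { get; private set; }

    // Callers only get a read-only view; EF Core populates the backing field.
    public IReadOnlyCollection<Account> Accounts => _accounts.AsReadOnly();

    public void AddAccount(Account account)
    {
        if (account == null) throw new ArgumentNullException(nameof(account));
        _accounts.Add(account);
    }
}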
What would make the most sense, I believe, would be to have a Service with multiple Repositories injected into it.
You could have one ItemRepository which returns Item objects, one AccountRepository which returns Accounts, one TransactionRepository returning Transactions and one UserRepository returning Users.
If your data model makes it cumbersome to do your query in one request, then have a function in your service which is transactional (ACID: either it all completes or it's all rolled back) and which makes separate queries to each injected repository, then builds the objects and returns them.
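As a rough sketch (the service name and the repository methods GetByItemId/GetByAccountId are made up for illustration):

public class PlaidQueryService
{
    private readonly AccountsRepository _accounts;
    private readonly TransactionsRepository _transactions;

    public PlaidQueryService(AccountsRepository accounts, TransactionsRepository transactions)
    {
        _accounts = accounts;
        _transactions = transactions;
    }

    // Item -> Accounts -> Transactions, resolved as separate queries per repository.
    public async Task<List<Transaction>> GetTransactionsByItemId(string itemId)
    {
        var result = new List<Transaction>();
        foreach (var account in await _accounts.GetByItemId(itemId))
        {
            result.AddRange(await _transactions.GetByAccountId(account.Id));
        }
        return result;
    }
}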
If you do see a way to make it one query, you can hard-code that query inside the relevant repository. From Domain-Driven Design, Evans:
Hard-coded queries can be built on top of any infrastructure and without a lot of investment, because they do just what some client would have to do anyway.
On projects with a lot of querying, a REPOSITORY framework can be built that allows more flexible queries.[...]
One particularly apt approach to generalizing REPOSITORIES through a framework is to use SPECIFICATION-based queries. A SPECIFICATION allows a client to describe (that is, specify) what it wants without concern for how it will be obtained. In the process, an object that can actually carry out the selection is created.[...]
Even a REPOSITORY design with flexible queries should allow for the addition of specialized hard-coded queries. They might be convenience methods that encapsulate an often-used query or a query that doesn't return the objects themselves, such as a mathematical summary of selected objects. Frameworks that don't allow for such contingencies tend to distort the domain design or get bypassed by developers.
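A bare-bones version of the SPECIFICATION idea in C# could look like this (purely illustrative):

public interface ISpecification<T>
{
    // Expression form so a repository can translate the criteria into a WHERE clause.
    Expression<Func<T, bool>> ToExpression();
}

public class TransactionsByAccountSpec : ISpecification<Transaction>
{
    private readonly string _accountId;

    public TransactionsByAccountSpec(string accountId)
    {
        _accountId = accountId;
    }

    public Expression<Func<Transaction, bool>> ToExpression()
    {
        return t => t.AccountId == _accountId;
    }
}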

.NET Core 2.1 Identity : Creating a table for each Role + bridge M:M table

I'm having issues in figuring out the best design that fits my needs regarding a Role based authorizations using Identity in a .NET Core 2.1 project.
I already extended the User class from Identity with an ApplicationUser class.
I need 5 different roles to control the access to the different features of the app :
Admin, Teacher, Student, Parent and Supervisor
All the common attributes are kept in User and ApplicationUser, but I still require different relationships to other tables depending on the User's Role.
User in Role Teacher is linked to 1-N School
User in Role Student is linked to 1-N GroupOfStudents (but not to a School directly)
User in Role Parent is linked to 1-N Student (but not to a School)
...
The other requirement is that a User must be able to be in 1-N Role.
What would be the best practice in my case?
Is there something I'm missing in the features of Identity?
My idea at first was to use nullable FKs, but as the number of roles increases, it doesn't look like a good idea to have so many empty fields on all those records.
I was thinking of using a "bridge table" to link a User to other tables for each role.
Have a many-to-many relationship between ApplicationUser and the bridge table, and a 0-1 relationship between the bridge table and individual tables for each role. But that's not really helping either, since every record will produce the same amount of empty fields.
I'm fairly new to .NET Core and especially Identity; I'm probably missing some keywords to do effective research, because this looks to me like a really basic scenario (nothing really fancy in the requirements).
Thanks for reading !
EDIT :
I don't really have an error right now since I'm trying to figure out the best practice before going deeper into the project. Since it's the first time I've faced this kind of requirement, I'm trying to find documentation on the pros and cons.
I followed Marco's idea and used inheritance for my role-based models, as that was my first idea. I hope it helps to understand my concern.
public class ApplicationUser : IdentityUser
{
public string CustomTag { get; set; }
public string CustomTagBis { get; set; }
}
public class Teacher : ApplicationUser
{
public string TeacherIdentificationNumber { get; set; }
public ICollection<Course> Courses { get; set; }
}
public class Student : ApplicationUser
{
public ICollection<StudentGroup> Groups { get; set; }
}
public class Parent : ApplicationUser
{
public ICollection<Student> Children { get; set; }
}
public class Course
{
public int Id { get; set; }
public string Title { get; set; }
public string Category { get; set; }
}
public class StudentGroup
{
public int Id { get; set; }
public string Name { get; set; }
}
This creates the database having one big table for the User containing all the attributes :
User table generated
I can use this and it will work.
A user can have any of those nullable fields filled in if he needs to be in a different role.
My concern is that for each record I will have a huge number of "inappropriate fields" that will remain empty.
Let's say that on 1000 users 80% of the users are Students.
What are the consequences of having 800 lines containing :
- an empty ParentId FK
- an empty TeacherIdentificationNumber
And this is just a small piece of the content of the models.
It doesn't "feel" right, am I wrong?
Isn't there a better way to design the entities so that the User table only contains the attributes common to all users (as it is supposed to) and still be able to link each user, through another table, to 1-N Teacher/Student/Parent/... tables?
Diagram of the Table-Per-Hierarchy approach
EDIT 2:
Using the answer of Marco, I tried to use the Table-Per-Type approach.
When modifying my context to implement the Table-Per-Type approach, I encountered this error when I wanted to add a migration :
"The entity type 'IdentityUserLogin' requires a primary key to be defined."
I believe this happens because I removed :
base.OnModelCreating(builder);
Resulting in having this code :
protected override void OnModelCreating(ModelBuilder builder)
{
//base.OnModelCreating(builder);
builder.Entity<Student>().ToTable("Student");
builder.Entity<Parent>().ToTable("Parent");
builder.Entity<Teacher>().ToTable("Teacher");
}
I believe those Identity keys are mapped in base.OnModelCreating.
But even if I uncomment that line, I get the same result in my database.
After some research, I found this article that helped me go through the process of creating Table-per-type models and apply a migration.
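Roughly, a Table-Per-Type mapping that keeps the Identity key mappings looks like this (a sketch assuming the context derives from IdentityDbContext<ApplicationUser>; not the exact code from the article):

protected override void OnModelCreating(ModelBuilder builder)
{
    // Keep the Identity mappings (keys for IdentityUserLogin, etc.) from the base class.
    base.OnModelCreating(builder);

    builder.Entity<Student>().ToTable("Student");
    builder.Entity<Parent>().ToTable("Parent");
    builder.Entity<Teacher>().ToTable("Teacher");
}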
Using that approach, I have a schema that looks like this :
Table-Per-Type approach
Correct me if I'm wrong, but both techniques fit my requirements and it is more a matter of design preference? It doesn't have big consequences for the architecture nor for the Identity features?
For a third option, I was thinking of using a different approach, but I'm not too sure about it.
Does a design like this could fit my requirements and is it valid?
By valid I mean: it feels weird to link a Teacher entity to a Role and not to a User. But in a way, the Teacher entity represents the features that a User will need when in the Teacher role.
Role to Entities
I'm not yet too sure of how to implement this with EF core and how overriding the IdentityRole class will affect the Identity features. I'm on it but haven't figured it out yet.
I suggest you take advantage of the new features of asp.net core and the new Identity framework. There is a lot of documentation about security.
You can use policy based security, but in your case resource-based security seems more appropriate.
The best approach is to not mix contexts. Keep a separation of concerns: the Identity context (using UserManager) and the business context (school, your DbContext).
Because putting the ApplicationUser table in your 'business context' means that you are directly accessing the Identity context. This is not the way you should use Identity. Use the UserManager for IdentityUser related queries.
In order to make it work, instead of inheriting the ApplicationUser table, create a user table in your school context. It is not a copy but a new table. In fact the only thing in common is the UserId field.
Check my answer here for thoughts about a more detailed design.
Move fields like TeacherIdentificationNumber out of the ApplicationUser. You can either add this as a claim to the user (AspNetUserClaims table):
new Claim("http://school1.myapp.com/TeacherIdentificationNumber", 123);
or store it in the school context.
Also, instead of roles, consider using claims, where you can distinguish the claims by type name (e.g. http://school1.myapp.com/role):
new Claim("http://school1.myapp.com/role", "Teacher");
new Claim("http://school2.myapp.com/role", "Student");
Though I think in your case it may be better to store the information in the school context.
The bottom line: keep the Identity context as is and add tables to the school context instead. You don't have to create two databases; just don't add cross-context relations. The only thing that binds the two is the UserId, but you don't need an actual database relation for that.
Use UserManager, etc. for Identity queries and your school context for your application. When it's not for authentication, you should not use the Identity context.
Now on to the design: create one user table that has a matching UserId field to link the current user. Add fields like name, etc. only when you want to show them (e.g. on a report).
Add a table for Student, Teacher, etc. where you use a composite key: School.Id, User.Id. Or add a common Id and use a unique constraint on the combination of School.Id, User.Id.
When a user is present in the table this means that the user is a student at school x or teacher at school y. No need for roles in the Identity context.
With the navigation properties you can easily determine the 'role' and access the fields of that 'role'.
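In EF Core, that composite key (or the unique-constraint alternative) could be configured roughly like this (a sketch; the SchoolId/UserId property names are assumptions):

protected override void OnModelCreating(ModelBuilder builder)
{
    // A user is a teacher at a given school: the pair (SchoolId, UserId) identifies the row.
    builder.Entity<Teacher>()
        .HasKey(t => new { t.SchoolId, t.UserId });

    // Alternative: a surrogate Id plus a unique constraint on the combination.
    builder.Entity<Student>()
        .HasIndex(s => new { s.SchoolId, s.UserId })
        .IsUnique();
}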
What you do is completely up to your requirements. What you currently have implemented is called Table-Per-Hierarchy. This is the default approach that Entity Framework takes when discovering its model(s).
An alternative approach would be Table-Per-Type. In this case, Entity Framework would create 4 tables.
The User table
The Student table
The Teacher table
The Parent table
Since all those entities inherit from ApplicationUser the database would generate a FK relationship between them and their parent class.
To implement this, you need to modify your DbContext:
public class FooContext : DbContext
{
    public DbSet<ApplicationUser> Users { get; set; }

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        modelBuilder.Entity<Student>().ToTable("Students");
        modelBuilder.Entity<Parent>().ToTable("Parents");
        modelBuilder.Entity<Teacher>().ToTable("Teachers");
    }
}
This should be the most normalized approach. There is, however, a third approach, where you'd end up with 3 tables and the parent ApplicationUser class would be mapped into its concrete implementations. However, I have never implemented this with ASP.NET Identity, so I don't know whether it will work or whether you'd run into key conflicts.

Load Partial Views from Database

I have a requirement where the number of partial views could grow tomorrow, and they could be a composition of any number of values of any type. Yes, I could do this with partial views themselves, but every time I add a new partial view I would have to recompile the application, which I want to avoid. It would work very much like a CMS, where you just specify the fields and the form is generated on the fly based on the fields and the types you specify.
Edit 1
Let's say, for example, you're building a survey application where you have multiple types of questions and an associated partial view for every type. Now, if tomorrow you need to add one or more question types, how would you create a partial view dynamically, on the fly, for the new question type?
This is where the idea came from to store view definitions in an XML file or in the database, so that you can just add an entry for a new partial view and you're good to display a new view for a new question type without re-compiling and restarting your server.
Can we do something like that in ASP.NET MVC 5 using a data store (Any DB: SQL Server / MySQL or XML File / Flat File)? Any thoughts, pointers, tips are highly appreciated!
Please correct me if I'm wrong.
Yes, you can use an object container that has multiple values:
Model of partial views:
public List<DynamicQuestion> dynamicQuestionList { get; set; }
public class DynamicQuestion
{
public string question{ get; set; }
public string ask{ get; set; }
}
With that you can get a List of DynamicQuestion, so that should work for you.
In the DB you should have a "question" table containing
id, question
which holds all the questions,
and an "ask" table containing
id, idQuestion, response
which holds all the "ask" entries.
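A very rough sketch of loading those rows into the model above with plain ADO.NET (the table and column names simply follow this answer's description):

public List<DynamicQuestion> LoadQuestions(string connectionString)
{
    var questions = new List<DynamicQuestion>();
    using (var connection = new SqlConnection(connectionString))
    using (var command = new SqlCommand(
        "SELECT q.question, a.response FROM question q JOIN ask a ON a.idQuestion = q.id", connection))
    {
        connection.Open();
        using (var reader = command.ExecuteReader())
        {
            while (reader.Read())
            {
                questions.Add(new DynamicQuestion
                {
                    question = reader.GetString(0),
                    ask = reader.GetString(1)
                });
            }
        }
    }
    return questions;
}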

XML Serializiation migration to MySql

I have an ASP.NET project that uses XML serialization as the main mechanism for saving data. The project was supposed to stay small relative to the size of the data. However, the amount of data has ballooned, as it always will, and now I'm considering moving to a SQL-based alternative for managing the data.
For now I have multiple objects defined that are simply storage classes for saving the data the project needs in order to work.
public class Customer
{
    public Customer() { }
    public string Name { get; set; }
    public string PhoneNumber { get; set; }
}

public class Order
{
    public Order() { }
    public int ID { get; set; }
    public DateTime OrderDate { get; set; }
    public string Product { get; set; }
}
Something along these lines although not so rudimentary. Migrating to SQL seems to be a no-brainer and I've landed on using MySql because of the free availability of the service. What I'm running into is that the only way I can see to do this now is to have a solution where there is a storage class, Order, and a class built to Load/Save the data, OrderIO.
The project relies heavily on using List<> to populate the data fields on the page. I'm not using any built-in .NET controls such as DataGrid to assist in displaying the data. Simple TextBox or ComboBox controls that are populated on Page_Load.
I'm aware it would make better sense to pick a way in which the data fields could bind to the SQL through a Repeater but I'm not looking at a full redesign, just a difference on the infrastructure to manage the data.
I would like to be able to create a class that can return an object similar to what I'm dealing with now, such as List<>, from the SQL statements I'm executing. I'm having some trouble getting started on the best method of approach.
Any suggestions on how best to Load/Save this data using SQL or some tutorials on ideas using the .NET framework would be helpful. This is quite a generalized question but I'm open to most ideas. Thanks.
What you need is a Data Access Layer (DAL) that takes care of running the SQL code and returning the required data in the List<> format that you require. I would definitely recommend you read the two series of articles by Imar Spaanjar on Building an N-Layer Application. Note that there are two sets of series, but I linked to the second set because it contains links to the first one.
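A minimal sketch of such a DAL method for the Order class above, using the MySQL ADO.NET provider (the connection string handling and the Orders table/column names are assumptions):

public class OrderRepository
{
    private readonly string _connectionString;

    public OrderRepository(string connectionString)
    {
        _connectionString = connectionString;
    }

    // Returns the same List<Order> shape the pages already bind to on Page_Load.
    public List<Order> GetOrders()
    {
        var orders = new List<Order>();
        using (var connection = new MySqlConnection(_connectionString))
        using (var command = new MySqlCommand("SELECT ID, OrderDate, Product FROM Orders", connection))
        {
            connection.Open();
            using (var reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    orders.Add(new Order
                    {
                        ID = reader.GetInt32(0),
                        OrderDate = reader.GetDateTime(1),
                        Product = reader.GetString(2)
                    });
                }
            }
        }
        return orders;
    }
}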
Also, it might be beneficial to know that SQL Server 2008 R2 Express Edition is free to use but has a limit of 10 GB per database. I am not saying that you shouldn't use MySQL; I just wanted to inform you, in case you didn't know, that there is a free edition of SQL Server available.
