I have a normal Universal App, nothing fancy, really simple.
There is a SQLite database that weighs more than 200 KB (it could even reach 20 MB, keep that in mind) and I want to share this database between Windows 8.1 and Windows Phone 8.1 devices.
I guess that roaming data isn't enough, am I right?
So I googled and found Azure Mobile Services, and it's really sweet, but the examples that are provided are too simple and I don't know how to extend them.
My database has 7 tables and some of them are connected by foreign keys. Here is an example of one of my classes (the database is based on it).
public class Meeting : INotifyPropertyChanged
{
    public Meeting() { }

    [PrimaryKey, AutoIncrement]
    public int Id { get; set; }

    public int AdressId { get; set; }

    [Ignore]
    public Adress Adress { get; set; }

    // Required to satisfy INotifyPropertyChanged
    public event PropertyChangedEventHandler PropertyChanged;
}
Then there is this method:
private async Task InitLocalStoreAsync()
{
    if (!App.MobileService.SyncContext.IsInitialized)
    {
        var store = new MobileServiceSQLiteStore("MyDatabase.db");
        store.DefineTable<Adress>();
        store.DefineTable<Contractor>();
        store.DefineTable<Item>();
        store.DefineTable<ItemGroup>();
        store.DefineTable<Meeting>();
        store.DefineTable<Note>();
        store.DefineTable<Order>();
        await App.MobileService.SyncContext.InitializeAsync(store);
    }
}
and I get the exception "Error getting value from 'Adress' on 'Projekt.DataModel.Meeting'."
The thing is: it is really hard to work with this. Isn't there a simpler solution? At the moment I need nothing but synchronization of my database. Please remember that I have tables that are connected by foreign keys. Maybe I skipped some worthwhile example or tutorial?
Thanks for all your help.
You're right that this database would be too big for roaming settings. You should be able to get Azure Mobile Services working for your scenario.
One way to manage the relationships, especially if they are 1-to-many and not many-to-many, is to create database views in the backend that join the tables. Then you just sync against them. See my messages on this forum thread that describes how this would work: https://social.msdn.microsoft.com/Forums/azure/en-US/4c056373-d5cf-4ca6-9f00-f152d2b01372/best-practice-for-syncing-related-tables-in-offline-mode?forum=azuremobile
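To make the idea concrete: with a view-based approach the client syncs against a flattened type rather than against related entities. A minimal sketch, assuming a hypothetical backend view named "MeetingDetails" that joins Meeting with Adress (the property names here are illustrative, not from the original schema):

```csharp
// Hypothetical flattened type matching a backend view that joins Meeting and Adress.
public class MeetingDetails
{
    public int Id { get; set; }
    public int AdressId { get; set; }
    public string Street { get; set; } // illustrative column from the joined Adress row
    public string City { get; set; }   // illustrative column from the joined Adress row
}

// Registered like any other table before initializing the sync context:
// store.DefineTable<MeetingDetails>();
```

The navigation property disappears entirely on the client, which also avoids the serializer error on 'Adress'.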
You might also be interested in a solution accelerator we have built, that shows how you would connect everything in a real app: https://code.msdn.microsoft.com/windowsapps/Field-Engineer-501df99d
If you have more questions, feel free to post in our forums that the product team monitors regularly: https://social.msdn.microsoft.com/Forums/azure/en-US/home?forum=azuremobile
I am building an app that integrates Plaid API to access user bank info (logins, accounts, transactions, etc.). I'm trying to follow DDD principles.
Here is a general idea of how the Plaid API flow works:
A user provides his email/password for some bank institution. If valid, a Plaid Item is created. This object associates a user with a set of bank credentials and contains an access token which can be used to further interact with the API.
Every Plaid Item has access to a certain set of bank Accounts.
Every bank Account has a set of Transactions.
So far, I created 3 entities in my domain layer: Item, Account and Transaction. I created a repository with basic CRUD operations for each.
public class Item
{
    public string Id { get; set; }
    public string AccessToken { get; set; }
    public string UserId { get; set; }
    ...
}
public class Account
{
    public string Id { get; set; }
    public string ItemId { get; set; }
    ...
}
public class Transaction
{
    public string Id { get; set; }
    public string AccountId { get; set; }
    ...
}
As you can see, the relationship between these entities is:
User HAS Item -> Item HAS Accounts -> Account HAS Transactions
My question is, what happens when I need to find an entity by an indirect parent? For example: GetTransactionsByItemId or GetAccountsByUserId. Based on DDD, where should this logic go?
Because of how my data is structured (a No-SQL chain of 1-many relations), I know I have to do these sorts of queries in multiple steps. However, I've read that a Repository should only be concerned with its own entity, so I suspect that injecting the ItemsRepository and AccountsRepository into the TransactionsRepository to add a GetTransactionsByItemId method might not be a good idea.
I also read about injecting many repositories to a Service and managing all these "joins" from inside. However, I can't come up with a name for this Service, so I'm worried that's because conceptually this doesn't make much sense.
I also read about Aggregates but I'm not sure if I recognize a root in these entities.
Another option I can think of is to try shortening relationships by adding an ItemId to every transaction for example. However, this would need to be a hack because of how I get the data from the api.
I would say your aggregate root would be an Item. If I got the structure right, Accounts cannot exist without Items, and Transactions cannot exist without Accounts. So you could be fine with just an ItemsRepository:
public class ItemsRepository
{
    public async Task<Item> GetById(long id, IncludesSpec includes)
    {
        return await this.context.Items
            .Where(c => c.Id == id)
            .Include(c => c.Accounts).ThenInclude(c => c.Transactions)
            .SingleOrDefaultAsync();
    }
}
Then you get an Item with all the related data loaded into it. The IncludesSpec is up to you: it would describe which includes should be made, and the includes would be added dynamically in the repository method.
As of EF Core 5 you can do filtered includes, like .Include(c => c.Accounts.Where(...)), so you could further narrow the actual include down based on your requirements. You could pass another parameter containing this filter information.
Also your Item should expose Accounts as read-only collection (use backing field for EF) and provide a method AddAccount() so that nobody can modify your DDD item as pure entity.
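A minimal sketch of that read-only shape (property names are illustrative; by convention EF Core binds the collection to the backing field when one is configured):

```csharp
public class Item
{
    // Backing field that EF Core populates; consumers never see the mutable list.
    private readonly List<Account> _accounts = new List<Account>();

    public long Id { get; private set; }

    // Read-only view of the accounts belonging to this aggregate root.
    public IReadOnlyCollection<Account> Accounts => _accounts.AsReadOnly();

    public void AddAccount(Account account)
    {
        if (account == null) throw new ArgumentNullException(nameof(account));
        _accounts.Add(account);
    }
}
```

This way all mutations go through AddAccount, where you can enforce invariants of the aggregate.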
What would make the most sense, I believe, would be to have a Service with multiple Repositories injected into it.
You could have one ItemRepository which returns Item objects, one AccountRepository which returns Accounts, one TransactionRepository returning Transactions and one UserRepository returning Users.
If your data model makes it cumbersome to do your query in one request, then have a function in your service which is transactional (ACID: either it all completes or it's all rolled back) which runs different queries against each injected repository, then builds the objects and returns them.
If you do see a way to make it one query, you can hard-code that query inside the relevant repository. From Domain-Driven Design, Evans:
Hard-coded queries can be built on top of any infrastructure and without a lot of investment, because they do just what some client would have to do anyway.
On projects with a lot of querying, a REPOSITORY framework can be built that allows more flexible queries.[...]
One particularly apt approach to generalizing REPOSITORIES through a framework is to use SPECIFICATION-based queries. A SPECIFICATION allows a client to describe (that is, specify) what it wants without concern for how it will be obtained. In the process, an object that can actually carry out the selection is created.[...]
Even a REPOSITORY design with flexible queries should allow for the addition of specialized hard-coded queries. They might be convenience methods that encapsulate an often-used query or a query that doesn't return the objects themselves, such as a mathematical summary of selected objects. Frameworks that don't allow for such contingencies tend to distort the domain design or get bypassed by developers.
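As a rough sketch of the service-with-multiple-repositories idea from the answer above (the repository interfaces and method names here are assumptions, not an existing API):

```csharp
public class BankDataService
{
    private readonly IAccountRepository _accounts;
    private readonly ITransactionRepository _transactions;

    public BankDataService(IAccountRepository accounts, ITransactionRepository transactions)
    {
        _accounts = accounts;
        _transactions = transactions;
    }

    // Walks the 1-many chain in two steps: Item -> Accounts -> Transactions.
    public async Task<List<Transaction>> GetTransactionsByItemId(string itemId)
    {
        var result = new List<Transaction>();
        foreach (var account in await _accounts.GetByItemId(itemId))
        {
            result.AddRange(await _transactions.GetByAccountId(account.Id));
        }
        return result;
    }
}
```

Each repository still only knows its own entity; the service is the place where the chain of lookups is composed.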
I'm having issues in figuring out the best design that fits my needs regarding a Role based authorizations using Identity in a .NET Core 2.1 project.
I already extended the User class from Identity with an ApplicationUser class.
I need 5 different roles to control the access to the different features of the app :
Admin, Teacher, Student, Parent and Supervisor
All the common attributes are kept in User and ApplicationUser but I still require different relationships to other tables depending of the User's Role.
User in Role Teacher is linked to 1-N School
User in Role Student is linked to 1-N GroupOfStudents (but not to a School directly)
User in Role Parent is linked to 1-N Student (but not to a School)
...
The other requirement is that a User must be able to be in 1-N Role.
What would be the best practice in my case?
Is there something I'm missing in the features of Identity?
My idea at first was to use nullable FK, but as the number of role increased, it doesn't look like a good idea to have so many empty fields for all those records.
I was thinking of using a "bridge table" to link a User to other tables for each role.
Have a many-to-many relationship between ApplicationUser and the bridge table, and a 0-1 relationship between the bridge table and individual tables for each role. But that's not really helping either, since every record will produce the same number of empty fields.
I'm fairly new with .NET Core and especially Identity, I'm probably missing some keywords to make an effective research because it looks to me that it's a really basic system (nothing really fancy in the requirements).
Thanks for reading !
EDIT :
I don't really have an error right now, since I'm trying to figure out the best practice before going deeper into the project. Since it's the first time I've faced this kind of requirement, I'm trying to find documentation on the pros/cons.
I followed Marco's idea and used inheritance for my role based models as it was my first idea. I hope it will help understand my concern.
public class ApplicationUser : IdentityUser
{
    public string CustomTag { get; set; }
    public string CustomTagBis { get; set; }
}
public class Teacher : ApplicationUser
{
    public string TeacherIdentificationNumber { get; set; }
    public ICollection<Course> Courses { get; set; }
}
public class Student : ApplicationUser
{
    public ICollection<StudentGroup> Groups { get; set; }
}
public class Parent : ApplicationUser
{
    public ICollection<Student> Children { get; set; }
}
public class Course
{
    public int Id { get; set; }
    public string Title { get; set; }
    public string Category { get; set; }
}
public class StudentGroup
{
    public int Id { get; set; }
    public string Name { get; set; }
}
This creates a database with one big User table containing all the attributes:
(screenshot: the generated User table)
I can use this and it will work.
A user can have any of those nullable fields filled if he needs to be in different roles.
My concern is that each record will have a huge number of "inappropriate" fields that remain empty.
Let's say that out of 1000 users, 80% are Students.
What are the consequences of having 800 rows containing:
- an empty ParentId FK
- an empty TeacherIdentificationNumber
And this is just a small piece of the content of the models.
It doesn't "feel" right; am I wrong?
Isn't there a better way to design the entities so that the User table only contains the attributes common to all users (as it is supposed to) and still be able to link each user to 1-N Teacher/Student/Parent/... tables?
Diagram of the Table-Per-Hierarchy approach
EDIT 2:
Using the answer of Marco, I tried to use the Table-Per-Type approach.
When modifying my context to implement the Table-Per-Type approach, I encountered this error when I wanted to add a migration :
"The entity type 'IdentityUserLogin' requires a primary key to be defined."
I believe this happens because I removed :
base.OnModelCreating(builder);
Resulting in having this code :
protected override void OnModelCreating(ModelBuilder builder)
{
    //base.OnModelCreating(builder);
    builder.Entity<Student>().ToTable("Student");
    builder.Entity<Parent>().ToTable("Parent");
    builder.Entity<Teacher>().ToTable("Teacher");
}
I believe those Identity keys are mapped in base.OnModelCreating.
But even if I uncomment that line, I keep getting the same result in my database.
After some research, I found this article that helped me go through the process of creating Table-per-type models and apply a migration.
Using that approach, I have a schema that looks like this :
Table-Per-Type approach
Correct me if I'm wrong, but both techniques fit my requirements, and it is more a matter of design preference? It doesn't have big consequences for the architecture or the Identity features?
For a third option, I was thinking to use a different approach but I'm not too sure about it.
Does a design like this could fit my requirements and is it valid?
By valid, I mean: it feels weird to link a Teacher entity to a Role and not to a User. But in a way, the Teacher entity represents the features that a User will need when in the Teacher role.
Role to Entities
I'm not yet too sure of how to implement this with EF core and how overriding the IdentityRole class will affect the Identity features. I'm on it but haven't figured it out yet.
I suggest you take advantage of the new features of asp.net core and the new Identity framework. There is a lot of documentation about security.
You can use policy based security, but in your case resource-based security seems more appropriate.
The best approach is to not mix contexts. Keep a separation of concerns: the Identity context (using UserManager) and the business context (school, your DbContext).
Putting the ApplicationUser table in your 'business context' means that you are directly accessing the Identity context. This is not the way you should use Identity. Use the UserManager for IdentityUser-related queries.
In order to make it work, instead of inheriting from ApplicationUser, create a user table in your school context. It is not a copy but a new table. In fact, the only thing they have in common is the UserId field.
Check my answer here for thoughts about a more detailed design.
Move fields like TeacherIdentificationNumber out of the ApplicationUser. You can either add this as claim to the user (AspNetUserClaims table):
new Claim("http://school1.myapp.com/TeacherIdentificationNumber", 123);
or store it in the school context.
Also instead of roles consider to use claims, where you can distinguish the claims by type name (e.g. http://school1.myapp.com/role):
new Claim("http://school1.myapp.com/role", "Teacher");
new Claim("http://school2.myapp.com/role", "Student");
Though I think in your case it may be better to store the information in the school context.
The bottom line, keep the Identity context as is and add tables to the school context instead. You don't have to create two databases, just don't add cross-context relations. The only thing that binds the two is the UserId. But you don't need an actual database relation for that.
Use UserManager, etc. for Identity queries and your school context for your application. When not for authentication you should not use the Identity context.
Now to the design, create one user table that has a matching UserId field to link the current user. Add fields like name, etc only when you want to show this (on report).
Add a table for Student, Teacher, etc. where you use a composite key: School.Id, User.Id. Or add a common Id and use a unique constraint on the combination of School.Id, User.Id.
When a user is present in the table this means that the user is a student at school x or teacher at school y. No need for roles in the Identity context.
With the navigation properties you can easily determine the 'role' and access the fields of that 'role'.
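A minimal sketch of the composite-key mapping in EF Core (entity and property names here are illustrative, not from the original model):

```csharp
protected override void OnModelCreating(ModelBuilder builder)
{
    base.OnModelCreating(builder);

    // One row means "this user is a teacher at this school".
    builder.Entity<Teacher>()
        .HasKey(t => new { t.SchoolId, t.UserId });
}
```

The same pattern applies to a Student table, keyed by the group or school plus the user id.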
What you do is completely up to your requirements. What you currently have implemented is called Table-Per-Hierarchy. This is the default approach, that Entity Framework does, when discovering its model(s).
An alternative approach would be Table-Per-Type. In this case, Entity Framework would create 4 tables.
The User table
The Student table
The Teacher table
The Parent table
Since all those entities inherit from ApplicationUser the database would generate a FK relationship between them and their parent class.
To implement this, you need to modify your DbContext:
public class FooContext : DbContext
{
    public DbSet<ApplicationUser> Users { get; set; }

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        modelBuilder.Entity<Student>().ToTable("Students");
        modelBuilder.Entity<Parent>().ToTable("Parents");
        modelBuilder.Entity<Teacher>().ToTable("Teachers");
    }
}
This should be the most normalized approach. There is, however, a third approach, where you'd end up with 3 tables and the parent ApplicationUser class would be mapped into its concrete implementations. However, I have never implemented this with ASP.NET Identity, so I don't know whether it would work or whether you'd run into key conflicts.
I think this question is probably fairly simple, but I've been searching around and haven't been able to find what I'm looking for.
My team and I are adding a new module to our existing web application. We already have an existing data model which is hooked up to our SQL db, and it's pretty huge... So for the new module I created a new EF data model directly from our database with the new tables for the new module. These new tables reference some of our existing tables via foreign keys, but when I add those tables, all of the foreign keys need to be mapped for that table, and their tables, and their tables... and it seems like a huge mess.
My question is: instead of adding the old tables to the data model, since I'm only referencing the IDs of our existing tables for foreign-key purposes, can I just do a .Include("old table") somewhere in the DataContext class, or should I go back and add those tables to the model and remove all of their relationships? Or is there maybe some other method I'm not even aware of?
Sorry for the lack of code, this is more of a logic issue rather than a specific syntax issue.
The simple answer is no. You cannot include an entity which is not part of your model (= is not mapped in the EDMX used by your current context).
The more complex answer is: in some very special cases you can, but it requires big changes to your development process and to the way you work with EF and EDMX. Are you ready to maintain all EDMX files manually as XML? In that case EF offers a way to reference a whole conceptual model from another one and use one-way relations from the new model to the old model. It is a cheat, because you will have multiple conceptual models (CSDL) but a single mapping file (MSL), a single storage description (SSDL) and a single context using all of them. Check this article for an example.
I'm not aware that you can use Include to reference tables outside of the EF diagram. To start working with EF you only need to include a portion of the database - if your first project works with a discrete functional area, which it probably would. This might get around the alarming mess when you import an entire legacy database. It scared me when I tried to do it.
In our similar situation - a big legacy system that used stored procedures - we only added the tables that we were directly working with at the time. Later on you can always add in additional tables as and when you require them. Don't worry about foreign keys in the EF diagram that reference tables that aren't included; Entity Framework happily copes with this.
It does mean running two business layers, though: one for Entity Framework and one for the old-style data access. Not a problem for us, though. In fact, from what I've read about legacy system programming it's probably the way to go - you have a business layer with your scruffy old stuff and a business layer with your sparkly new stuff. Keep moving from the old to the new until one day the old business layer evaporates into nothing.
You have to use [Include()] on the member.
For example:
// This class allows you to attach custom attributes to properties
// of the Frame class.
//
// For example, the following marks the Xyz property as a
// required property and specifies the format for valid values:
// [Required]
// [RegularExpression("[A-Z][A-Za-z0-9]*")]
// [StringLength(32)]
// public string Xyz { get; set; }
internal sealed class FrameMetadata
{
    // Metadata classes are not meant to be instantiated.
    private FrameMetadata()
    {
    }

    [Include()]
    public EntityCollection<EventFrame> EventFrames { get; set; }

    public Nullable<int> Height { get; set; }
    public Guid ID { get; set; }
    public Layout Layout { get; set; }
    public Nullable<Guid> LayoutID { get; set; }
    public Nullable<int> Left { get; set; }
    public string Name { get; set; }
    public Nullable<int> Top { get; set; }
    public Nullable<int> Width { get; set; }
}
And the LINQ query should use the
.Include("BaseTable.IncludedTable")
syntax.
And for entities which are not part of your model, you have to create some view classes.
I have an ASP.NET project that uses XML serialization as its main mechanism for saving data. The project was meant to stay small relative to the size of its data. However, the amount of data has ballooned, as it always will, and now I'm considering moving to a SQL-based alternative for managing it.
For now, I have multiple objects defined that are simply storage classes for the data the project needs.
public class Customer
{
    public Customer() { }
    public string Name { get; set; }
    public string PhoneNumber { get; set; }
}
public class Order
{
    public Order() { }
    public int ID { get; set; }
    public DateTime OrderDate { get; set; }
    public string Product { get; set; }
}
Something along these lines, although not quite so rudimentary. Migrating to SQL seems like a no-brainer, and I've landed on MySQL because the server is freely available. What I'm running into is that the only way I can see to do this now is to have a solution with a storage class, Order, and a class built to load/save the data, OrderIO.
The project relies heavily on using List<> to populate the data fields on the page. I'm not using any built-in .NET controls such as DataGrid to assist in displaying the data. Simple TextBox or ComboBox controls that are populated on Page_Load.
I'm aware it would make better sense to pick a way in which the data fields could bind to the SQL through a Repeater but I'm not looking at a full redesign, just a difference on the infrastructure to manage the data.
I would like to be able to create a class that can return an object similar to what I'm dealing with now, such as List<>, from the SQL statements I'm executing. I'm having some trouble getting started on the best method of approach.
Any suggestions on how best to Load/Save this data using SQL or some tutorials on ideas using the .NET framework would be helpful. This is quite a generalized question but I'm open to most ideas. Thanks.
What you need is a Data Access Layer (DAL) that takes care of running the SQL code and returning the required data in the List<> format that you require. I would definitely recommend you read the two series of articles by Imar Spaanjaars on Building an N-Layer Application. Note that there are two sets of series, but I linked to the second set, because it contains links to the first one.
Also, it might be beneficial to know that SQL Server 2008 R2 Express Edition is free to use but has a limit of 10 GB per database. I am not saying you shouldn't use MySQL; I just wanted to inform you in case you didn't know that a free edition of SQL Server is available.
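As a rough illustration of such a DAL method (using the MySQL Connector/NET classes; the table and column names are made up to match the sample classes above):

```csharp
public class OrderRepository
{
    private readonly string _connectionString;

    public OrderRepository(string connectionString)
    {
        _connectionString = connectionString;
    }

    // Returns the same List<Order> shape the page code already consumes.
    public List<Order> GetOrders()
    {
        var orders = new List<Order>();
        using (var connection = new MySqlConnection(_connectionString))
        using (var command = new MySqlCommand(
            "SELECT ID, OrderDate, Product FROM Orders", connection))
        {
            connection.Open();
            using (var reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    orders.Add(new Order
                    {
                        ID = reader.GetInt32(0),
                        OrderDate = reader.GetDateTime(1),
                        Product = reader.GetString(2)
                    });
                }
            }
        }
        return orders;
    }
}
```

The Page_Load code that currently populates TextBox and ComboBox controls from the XML-backed lists can then call this class instead, with no change to the UI layer.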
I may have the wrong "pattern" here, but I think it's a fair topic.
I have an ASP.NET MVC application that calls out to a WCF service to get back the ViewModels that will be rendered. (The reason it's using a WCF service is so that other small MVC apps can also call on it for these ViewModels. It's only used internally, not publicly available, so I can change anything on either side of the service. The idea is to move the logic that was in the website closer to the server/database so the round trips aren't so costly, and to make only one round trip overall from the web server to the database server.)
I'm trying to work out the best form in which to return these "ViewModels" from the service. There are lots of common little bits of functionality, but each page may want to display different subsets of these things (the homepage maybe a list of tables; the next page a list of tables plus the users that are available).
So what's the best way of returning the information that the page wants, hopefully without the webservice knowing about the page?
Edit:
It's been suggested below that I move the logic in-process. This would normally be a lot faster, except that's what we're moving away from, because in this case it is actually a lot slower. The reason is that the database is on one server and the webapp is on another, and the webapp is particularly chatty at points (there are pages that could end up doing 2K round trips - I have no control over reducing this number, before that's suggested), so moving the logic closer to the db is the next best way of making it more performant.
I would look at creating a ViewModel for each MVC app/view. The service could just return the maximum amount of data for the "view" in a logical sense, and each MVC app uses the information it wants when composing the ViewModel for its view.
Your service is then only responsible for one thing, returning data specific to a view's function. The controller of each app is responsible for using/not using pieces of the returned data.
This will be more flexible as your ViewModels may require different validation rules as well. ViewModels also have MVC-specific needs(SelectList etc..) that shouldn't really be returned by a service layer. It seems like something can be shared at a glance, but there are generally lots of small differences that make sharing ViewModels a bad idea.
class MyServiceViewResult
{
    public int SomethingEveryViewNeeds { get; set; }
    public bool OnlyOneViewMightNeedThis { get; set; }
}
class ViewModel1
{
    public int IdProperty { get; set; }

    public ViewModel1(MyServiceViewResult result)
    {
        IdProperty = result.SomethingEveryViewNeeds;
    }
}
class ViewModel2
{
    public int IdProperty { get; set; }
    public bool IsAllowed { get; set; }

    public ViewModel2(MyServiceViewResult result)
    {
        IdProperty = result.SomethingEveryViewNeeds;
        IsAllowed = result.OnlyOneViewMightNeedThis;
    }
}
Instead of having a web service, why don't you just implement the service as a reusable library that encapsulates the desired functionality?
This will also allow you to use polymorphism to implement customizations. WCF doesn't support polymorphism in a flexible way...
Using an in-proc service will also be a lot faster.
See this related question for outlines of a polymorphic solution: Is this a typical use case for IOC?