ASP.NET plugin architecture: reference to other modules

We're currently migrating our ASP Intranet to .NET and we started to develop this Intranet in one ASP.NET website. This, however, raised some problems regarding Visual Studio (performance, compile-time, ...).
Because our Intranet basically consists of modules, we want to separate our project into subprojects in Visual Studio (each module being a subproject).
This also raises some problems because the modules have references to each other.
Module X uses Module Y and vice versa... (circular dependencies).
What's the best way to develop such an Intranet?
I'll give an example because it's difficult to explain.
We have a module to maintain our employees. Each employee has different documents (a contract, documents created by the employee, ...).
All documents inside our Intranet are maintained by a document module.
The employee-module needs to reference the document-module.
What if in the future I need to reference the employee-module in the document-module?
What's the best way to solve this?

It sounds to me like you have two problems.
First you need to break the business-oriented functionality of the system down into cohesive parts; in terms of object-oriented design there are a few principles which you should be using to guide your thinking:
Common Reuse Principle
Common Closure Principle
The idea is that things which are closely related - to the extent that 'if one needs to be changed, they all are likely to need to be changed' - should be packaged together.
Single Responsibility Principle
Don't try to have a component do too much.
I think you also need to look at your dependency structure more closely - as soon as you start getting circular references it's probably a sign that you haven't broken the various "things" apart correctly. Maybe you need to understand the problem domain more? It's a common problem - well, not so much a problem as simply a part of designing complex systems.
Once you get this sorted out it will make the second part much easier: system architecture and design.
Luckily there's already a lot of existing material on plugins, try searching by tag, e.g:
https://stackoverflow.com/questions/tagged/plugins+.net
https://stackoverflow.com/questions/tagged/plugins+architecture
Edit:
Assets is defined in a different module than Employees. But the Assets class defines a property 'AssignedTo' which is of the type 'Employee'. I've been racking my brain over how to disconnect these two.
There are two parts to this, and you might want to look at using both:
Using a Common Layer containing simple data structures that all parts of the system can share.
Using Interfaces.
Common Layer / POCOs
POCO stands for "Plain Old CLR Object"; the idea is that POCOs are simple data structures that you can use for exchanging information between layers - or in your case between modules that need to remain loosely coupled. POCOs don't contain any business logic. Treat them like you'd treat the String or DateTime types.
So rather than referencing each other, the Asset and Employee classes reference the POCOs.
The idea is to define these in a common assembly that the rest of your application / modules can reference. The assembly which defines these needs to be devoid of unwanted dependencies - which should be easy enough.
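For example, a minimal sketch (the assembly name Intranet.Common and the class EmployeeInfo are just illustrative, not part of the original design):

// Defined in a shared assembly (e.g. Intranet.Common) that has no
// dependencies on any module; every module can safely reference it.
public class EmployeeInfo
{
    public int Id { get; set; }
    public string DisplayName { get; set; }
}

// The Asset class in the asset module then holds the POCO,
// not the Employee class from the employee module.
public class Asset
{
    public EmployeeInfo AssignedTo { get; set; }
}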
Interfaces
This is pretty much the same, but instead of referring to a concrete object (like a POCO) you refer to an interface. These interfaces would be defined in a similar fashion to the POCOs described above (common assembly, no dependencies).
You'd then use a Factory to go and load up the concrete object at runtime. This is basically Dependency Inversion.
So rather than referencing each other, the Asset and Employee classes reference the interfaces, and concrete implementations are instantiated at runtime.
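A rough sketch of what that could look like - IAssignable, the factory and the type name are assumptions for illustration, not a prescribed design:

// In the common assembly (no dependencies):
public interface IAssignable
{
    int Id { get; }
    string DisplayName { get; }
}

// In the asset module - only the interface is referenced:
public class Asset
{
    public IAssignable AssignedTo { get; set; }
}

// A very naive factory; in practice you'd likely use an IoC container.
public static class AssignableFactory
{
    public static IAssignable Create()
    {
        // The concrete type is resolved by name at runtime, so the asset
        // module never takes a compile-time reference to the employee module.
        Type type = Type.GetType("Intranet.Employees.Employee, Intranet.Employees");
        return (IAssignable)Activator.CreateInstance(type);
    }
}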
This article might be of assistance for both of the options above: An Introduction to Dependency Inversion
Edit:
I've got the following method GetAsset( int assetID ); In this method, the property asset.AssignedTo (type IAssignable) is filled in. How can I assign this properly?
This depends on where the logic sits, and how you want to architect things.
If you have a Business Logic (BL) Layer - which is mainly a comprehensive Domain Model (DM) (of which both Asset and Employee are members) - then it's likely Assets and Employees would know about each other, and when you made a call to populate the Asset you'd probably get the appropriate Employee data as well. In this case the BL / DM is asking for the data - not the isolated Asset and Employee classes.
In this case your "modules" would be another layer that was built on top of the BL / DM described above.
A variation on this is that inside GetAsset() you only get asset data, and at some point after that you get the employee data separately. No matter how loosely you couple things there is going to have to be some point at which you define the connection between Asset and Employee, even if it's just in data.
This suggests some sort of Registry pattern - a place where "connections" are defined - so that any time you deal with a type which is IAssignable you know to check the registry for any possible assignments.
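As a sketch of the idea (all names hypothetical), the registry could be as simple as a dictionary of lookup delegates that each module populates at startup:

// Requires System and System.Collections.Generic.
public static class AssignmentRegistry
{
    private static readonly Dictionary<Type, Func<int, IAssignable>> lookups =
        new Dictionary<Type, Func<int, IAssignable>>();

    // Each module registers how assignments for its types are resolved.
    public static void Register<T>(Func<int, IAssignable> lookup)
    {
        lookups[typeof(T)] = lookup;
    }

    // Called by e.g. GetAsset() to fill in asset.AssignedTo.
    public static IAssignable Resolve<T>(int id)
    {
        Func<int, IAssignable> lookup;
        return lookups.TryGetValue(typeof(T), out lookup) ? lookup(id) : null;
    }
}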

I would look into creating interfaces for your plug-ins; that way you will be able to add new modules, and as long as they follow the interface specifications your projects will be able to call them without explicitly knowing anything about them.
We use this to create plug-ins for our application. Each plugin is encapsulated in a user control that implements a specific interface. We then add new modules whenever we want, and because they are user controls we can store the path to the control in the database and use LoadControl to load them, and we use the interface to manipulate them; the page that loads them doesn't need to know anything about what they do.
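To give a flavour of that approach - IPluginModule and the helper that reads the path are made up for this example, but LoadControl is the standard ASP.NET method:

// In the hosting page: the control path was stored in the database.
string controlPath = GetPluginPathFromDatabase(moduleId); // hypothetical helper
Control plugin = LoadControl(controlPath);
PluginPlaceHolder.Controls.Add(plugin);

// The page only talks to the plugin through the shared interface.
IPluginModule module = plugin as IPluginModule;
if (module != null)
{
    module.Initialize();
}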

Related

Generating files to multiple paths with Swagger Codegen?

I'm creating a template for our server-side codegen implementation, but I ran into an issue with a feature request...
The developers who are going to use the generated code want the following pattern (the generator is based on the dotnetcore one):
Controllers
    v{apiVersion}
        {endpoint}ApiController : Controller, I{endpoint}Api
Interfaces
    v{apiVersion}
        I{endpoint}Api
        I{endpoint}DataProvider
DataProviders
    v{apiVersion}
        {endpoint}DataProvider : I{endpoint}DataProvider
Both interfaces are the same, describing the endpoints. The DataProvider implementation will allow us to use DI to hot-swap the actual data provider/business logic layer during runtime.
The generated ApiControllers will refer to the IDataProviders, and use the actual implementation (the currently active one, that is). For that we're going to use dotnetcore's built-in dependency injection system.
However I can't seem to find a way to have the operations generator output to three different folders, based on the template. It will all end up jumbled in a single folder, and I will need to manually move them.
Is there a way to meet these requirements, or will I have to keep moving the files manually?

Whose responsibility should it be to paginate: controller/domain service/repository?

My question might seem strange to pros, but please take into account that I am coming from the Ruby on Rails world =)
So, I am learning ASP.NET Core. And I like what I am seeing in it compared to Rails. But there is always that but... Let me describe the theoretical problem.
Let's say I have a Product model. And there are over 9000 records in the database. It is obvious that I have to paginate them. I've read this article, but it seems to me that something is wrong there, since the controller shouldn't use the context directly. It should use some repository (but that example might be written that way only for simplicity).
So my question is: who should be responsible for pagination? Should it be the controller, which will receive some queryable object from the repository and take only those records it needs? Or should it be my own business service which does the same? Or should the repository have a method like public IEnumerable<Product> ListProducts(int offset, int page)?
One Domain-Driven-Design solution to this problem is to use a Specification. The Specification design pattern describes a query in an object. So you might create a PagedProduct specification which would take in any necessary parameters (pageSize, pageNumber, filter). Then one of your repository methods (usually a List() overload) would accept an ISpecification and would be able to produce the expected result given the specification. There are several benefits to this approach. The specification has a name (as opposed to just a bunch of LINQ) that you can reason about and discuss. It can be unit tested in isolation to ensure correctness. And it can easily be reused if you need the same behavior (say on an MVC View action and a Web API action).
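As a rough illustration of the shape this can take (the interface here is deliberately simplified; real Specification implementations usually carry criteria, includes, etc., and the names are illustrative):

public interface ISpecification<T>
{
    IQueryable<T> Apply(IQueryable<T> query);
}

public class PagedProducts : ISpecification<Product>
{
    private readonly int pageNumber;
    private readonly int pageSize;

    public PagedProducts(int pageNumber, int pageSize)
    {
        this.pageNumber = pageNumber;
        this.pageSize = pageSize;
    }

    public IQueryable<Product> Apply(IQueryable<Product> query)
    {
        // A stable ordering is required before paging.
        return query.OrderBy(p => p.Id)
                    .Skip((pageNumber - 1) * pageSize)
                    .Take(pageSize);
    }
}

// A repository List() overload then executes any specification:
public IEnumerable<Product> List(ISpecification<Product> spec)
{
    return spec.Apply(context.Products).ToList();
}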
I cover the Specification pattern in the Pluralsight Design Patterns Library.
First, I would like to remind you that all the examples you linked are overly simplified, so you shouldn't conclude that that is the correct way. Simple things with fewer abstraction layers are easier to follow and understand (at least in the case of simple examples for beginners, when the reader may not know where to look for what), and that's why they are presented like that.
Regarding the question: I would say none of the above. If I had to decide between them then I would say the service and/or the repository, but that depends on how you define your storage layer, etc.
"None of the above", then what? My preference is to implement an intermediary layer between the service layer and the Web UI layer. The service layer exposes manipulation functionality but for read operations, exposes the whole collection as an IQueryable, and not as an IEnumerable, so that you can utilize LINQ-to-whatever-storage.
Why am I doing this, many may ask. Because almost all the time you will use specialized viewmodels. To display the list of products on an admin page, for example, you would need to display values of columns in the products table, but you are very likely to need to display its category as well. Very rarely is it the case that you need data from only one table, and by exposing the items as an IQueryable<T> you get the benefit of being able to do Selects like this:
public IEnumerable<ProductAdminTableViewModel> GetProducts(int page, int pageSize)
{
    // EF requires a stable ordering before Skip/Take can be translated.
    return backingQueryable
        .OrderBy(prod => prod.Id)
        .Skip((page - 1) * pageSize)
        .Take(pageSize)
        .Select(prod => new ProductAdminTableViewModel
        {
            Id = prod.Id,
            Category = prod.Category.Name, // your provider will likely resolve this to a Join
            Name = prod.Name
        })
        .ToList();
}
As commented, by using the backing store as an IQueryable you will be able to do projections before your query hits the DB and thus you can avoid any nasty Select N+1s.
The reason that this sits in an intermediary layer is simply that you do not want to add references to your web project in either your repository or your service layer (project), but because of this you cannot implement the viewmodel-specific queries in your service layer, simply because the viewmodels cannot be resolved there. This implies that the viewmodels reside in this same project as well, and to this end the MVC project only contains views, controllers and the ASP.NET MVC-related plumbing of your app. I usually call this intermediate layer 'SolutionName.Web.Core', and it references the service layer to be able to access the IQueryable<T>-returning method.

How can I implement additional entity properties for Entity Framework?

We have a requirement to allow customising our core product and adding additional fields on a per-client basis, e.g. on the People entity some client wants to record their favourite colour, etc. As far as I know we can't add properties to EF at runtime, as it needs the classes defined at startup. Each customer has their own database, but we are deploying the same solution to all customers with all additional code. We then detect which customer they are and run customer-specific services, etc.
Now the last thing I want is to be forking my project, or alternatively adding all fields for all clients. That seems likely to become a nightmare. Also, more often than not the extra fields would only be required in a very limited number of places. Maybe some reports, a couple of screens, etc.
I found this article from Jeremy Miller http://codebetter.com/jeremymiller/2010/02/16/our-extension-properties-story/ describing how they are adding extension properties and having them go from the domain to the web front end.
Has anyone else implemented anything similar using EF? How did it work out? Are there any blogs/samples that anyone has seen? I am not sure if I am searching for the right thing; even if someone could just tell me the generic name for what we want to do, that would help. I'm guessing it is a problem that comes up for other people.
The linked question still requires some forking, or implementing all possible extensions in a single solution, because you are still creating strongly typed extensions upfront (= you know upfront what extensions the customer wants). It is not a generally extensible solution. If you want a generic, extensible solution you must leave the strongly typed world and describe extensions as data.
You will need to use some metamodel. Your entity classes will contain only the properties used by all customers, plus a navigation property to a special extension entity (an additional table per extensible entity) where you will be able to put additional properties as name/value pairs (you can add other columns like type, validation, etc. if needed).
This in general moves part of your model from a hardcoded scenario to a configuration-based scenario, and your customers will even be allowed to define extensions at runtime (if you implement such a feature).
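A minimal sketch of such a metamodel, assuming the People entity from the question (all names illustrative):

public class Person
{
    public int Id { get; set; }
    public string Name { get; set; }
    // One extension row per additional, customer-defined property.
    public virtual ICollection<PersonExtension> Extensions { get; set; }
}

public class PersonExtension
{
    public int Id { get; set; }
    public int PersonId { get; set; }
    public string Name { get; set; }   // e.g. "FavouriteColour"
    public string Value { get; set; }  // stored as text; add Type/Validation columns if needed
    public virtual Person Person { get; set; }
}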

Entity Framework (Questions on POCO, Context, and DTO)

I have been reading about Entity Framework over the past couple of days and have managed to get a fair idea of using it, but I still have a couple of questions, some of which might seem a bit too basic. For perspective, I am using Entity Framework 4.0 in an ASP.NET web application. If you can answer any of the questions, please go ahead.
What advantage do I get by using POCO templates? I understand that if I wish to get persistence ignorance and keep my entities clear of any information related to storage, POCO entities are the way to go. Also, I could switch from Entity Framework to, say, NHibernate with relative ease when using POCO entities? Apart from loose coupling, is there any significant reason for me to go towards POCO entities? Also, if I do use POCO, do I end up losing anything? I still get change tracking and lazy loading with the help of proxies?
Is it normal practice to use the entities of the EF model as Data Transfer Objects or Business Objects? For example, I have a separate class library for my entity model. Supposing I am using MVP, where I want a list of Employees in a company: the presenter would call my business logic functions, which would query the entity model for the list of Employees and return the list of entities to the presenter. In this case my presenter would need to have a reference to the EF model. Is this the correct way? In the case of my ASP.NET web application it shouldn't be a problem, but if I am using web services, how does this work? Is this the reason to go towards POCO entities?
Supposing the Employee entity has a navigation property to a Company table: if I wrap the data context in a 'using' block and try to access the navigation property in the BL, I am assuming I would get an exception. Would I also get an exception if I turned off lazy loading and used an 'Include' LINQ query to get the entity? On a previous post someone recommended I use a context per request, implying that the context remains active even when I am in the BL. I am assuming I would still need to detach the object and attach it to the context on my next request if I wish to persist any changes I make? Or instead should I just query for the object again with the new context and update it?
This question has more to do with organizing files/best practices and is a follow-up to a question I posted earlier. When I am using separate files based on entities to organize my data access layer, what is the best practice for organizing my queries involving joins between multiple tables? I am still a bit hazy on organization. I have tried searching online but haven't had much help.
Terrific question. My first recommendation is to think in patterns. With that said...
You pretty much nailed the advantages of using POCO. There are some distinct advantages to decoupling your business objects (POCO entities) from your data access layer, but the primary reason is, like you said, the ability to change or modify the layers below. However, using POCO you are essentially following the Code First (CF) approach. Personally, I consider it Code In Parallel, depending upon your software development life cycle. You still have all the bells and whistles that the data-first or model-first approaches have, and then some, since you can extend DbContext, which is ObjectContext under the hood. I read an article, which I cannot seem to find, arguing that CF is the future of Entity Framework. Lastly, the nice thing with POCO is that you are able to incorporate validation rules here or elsewhere. You can also provide projections. Let's say you have Date of Birth but you want an Age property as well. That now becomes a no-brainer, as the Age property is ignored when mapping to the database.
Personally, I create my own business objects (POCO) for large projects that tend to have a life of their own, where change is a way of life. Another thought is scalability and maintainability. What if down the road I choose to split functionality between applications where, like you mentioned with web services, functionality is now delivered from two disparate locations? If you have encapsulated your business objects and DAL within the same code block, separation and scalability have now become a bit more complex. However, consider the project. It may be small with very little future change, so there's no need to throw a grenade to kill a fly. In that case data-first might be the way to go, letting the EDMX file represent your objects. So don't marry yourself to one technology or one methodology/pattern. Do what makes sense for your time and business.
Using statements are perfectly fine. In fact, I've recently been turned on to wrapping them within a TransactionScope; if an error occurs, rollbacks are inherent. Next, something to consider is the Unit of Work. The UnitOfWork pattern encapsulates a snapshot of what needs to be performed, where the Data Context defines the boundary you work within. For each UnitOfWork you have a subject on which work is to be performed - for example, an Employee. So if you are to save Employee information, to keep it simple, you would make a call to the BL service or repository (whichever). There you pass in the Employee Id and perform some work under that UnitOfWork, where it is either instantiated in the constructor or supplied via Dependency Injection (DI/IoC) - an easy starter is StructureMap. The service then makes the necessary calls to your UnitOfWork (DbContext) and returns control back upstream (e.g. to the UI).
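A small sketch of the using + TransactionScope combination described above (MyDbContext, EmployeeRepository and employeeId are illustrative names; TransactionScope lives in System.Transactions):

using (var scope = new TransactionScope())
using (var context = new MyDbContext())
{
    // The context is the UnitOfWork; the repository works within its boundary.
    var repository = new EmployeeRepository(context);

    var employee = repository.GetById(employeeId);
    employee.Department = "HR";

    context.SaveChanges();
    scope.Complete(); // without this, everything rolls back on dispose
}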
The best way to learn here is to view others code. I'd start with some Microsoft examples. I'd start with Nerd Dinner (http://nerddinner.codeplex.com/) then build off that.
Additional Reading:
Use prototype pattern or not
http://weblogs.asp.net/manavi/archive/2011/05/17/associations-in-ef-4-1-code-first-part-6-many-valued-associations.aspx
[EDIT]
NightHawk457, I'm terribly sorry for not responding to your questions. Hopefully you figured it out but for future readers...
To help everyone visualize, imagine the below Architecture using the Domain Model and Repository as an example. Remember, there are many ways to skin a cat so take this and make it your own and don't forget my Grenade comment above.
Data Layer (Data Access): MyDbContext : DbContext, IUnitOfWork, where IUnitOfWork contracts the CRUD operations.
Data Repository (Data Access / Business Logic): MyDomainObjectRepository : IMyDomainObjectRepository, which receives IUnitOfWork by Factory class or Dependency Injection. Calls MyDomainObject validation on CRUD operations.
Domain Model (Business Logic): MyDomainObject using [Custom] Validation Attributes. Read this for pros/cons.
MVVM / MVC / WCF (Presentation / Service Layers): Whatever additional layers you choose, you now have access to your data, wrapped nicely in smaller modules each encapsulating its own function. The presentation layer (e.g. ViewModel, Controller, Code-Behind, etc.) can then receive an IMyDomainObjectRepository from a Factory class or by Dependency Injection; a rough sketch of these interfaces follows.
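A bare-bones sketch of how those pieces line up (signatures are illustrative, not prescriptive; method bodies are stubbed):

public interface IUnitOfWork
{
    void Commit();
}

public interface IMyDomainObjectRepository
{
    MyDomainObject GetById(int id);
    void Add(MyDomainObject item);
    void Remove(MyDomainObject item);
}

public class MyDomainObjectRepository : IMyDomainObjectRepository
{
    private readonly IUnitOfWork unitOfWork;

    // The unit of work arrives by factory or dependency injection.
    public MyDomainObjectRepository(IUnitOfWork unitOfWork)
    {
        this.unitOfWork = unitOfWork;
    }

    public MyDomainObject GetById(int id) { /* query via unitOfWork */ return null; }
    public void Add(MyDomainObject item) { /* validate, then persist */ }
    public void Remove(MyDomainObject item) { /* validate, then delete */ }
}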
Tips:
Pass connection string into MyDbContext so you can reuse MyDbContext.
MySql does not play well with System.Transactions.TransactionScope, example. I don't recall exactly, but it was something MySql did not support. This makes testing a bit difficult since we have created this level of separation.
Create a Test project for each layer and at the minimum test general functionality/rules.
Each Domain Object should extend a base object with an ID field at minimum. Also, do not implement Key attributes here; a Domain Object should not describe architecture but rather the specific data as an entity. Even with Code First this can be achieved via the Fluent API.
Think generics when creating MyDbContext. ;) Read Diego's post.
In ASP.NET, the repositories are nice to use with ObjectDataSources.
As you can see, there is a clear separation of roles, where IUnitOfWork and IMyDomainObjectRepository are the interfaces which expose the above layers' functionality. And as an example, IUnitOfWork could be NHibernate, Entity Framework, LinqToSql or ADO.NET, where a change to the factory class or dependency injection registration is all that has to change. FYI, I've heard the Repository called the Service Layer as well; personally I like the first name, so as not to be confused with web services. The next big takeaway from this structure is realizing the scope of your Database Context (IUnitOfWork). A simple example would be an ASP.NET page, where for each page there is one and only one IUnitOfWork, either for each repository or for that scope of work. The same holds true for ViewModels, Controllers, etc. So let's say you need to utilize two repositories, EmployeeRepository and HRRepository. You could then share the IUnitOfWork between both, or not. To cross page, ViewModel or Controller boundaries, we use the ID for entities, which are then pulled from the DB and the work is performed. You could alternatively pass a DTO across boundaries and attach it to the context, but then you begin losing the separation of layers.
To continue, POCO classes do not have to be auto-generated. In fact you can create your entity classes from scratch and perform the mapping in your extended DbContext class inside the OnModelCreating(DbModelBuilder mb) method. Start here, then here and note the Additional Resources, google "Fluent API" and read this post by Diego.
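For instance, a minimal hand-rolled mapping might look like this (the Employee entity and its columns are made up for the example):

public class MyDbContext : DbContext
{
    public DbSet<Employee> Employees { get; set; }

    protected override void OnModelCreating(DbModelBuilder mb)
    {
        // Keys and table names live here, not as attributes on the domain object.
        mb.Entity<Employee>()
          .ToTable("Employees")
          .HasKey(e => e.Id);

        mb.Entity<Employee>()
          .Property(e => e.Name)
          .IsRequired()
          .HasMaxLength(100);
    }
}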
As for validation, this is an interesting point, because it would be GREAT if all business rules could be validated in one location. Well, as we all know, that doesn't work really well. So here is my recommendation: keep all data-level validation (i.e. required, range, format, etc.) as data annotations as much as possible in the domain object, and leave process validation in the Repository, with clear roles for each Repository (i.e. if (isEmployee) do this, else that). I say clear, such that you do not want to add an Employee in two different Repositories where the validation has to be duplicated. To call the validation, start here. Capture the ValidationResults and send them upstream with a MyRepositoryValidationException which contains a collection of validation errors (e.g. "Employee is required") which can be presented to the presentation layer. With all that said, don't forget to perform validation at the presentation layer too. You don't want to need a post back just to find out that an Employee's email is invalid, for example.
Just remember to balance time and effort with complexity. For something simple, use Data First or Model First with your EDMX file. Then lay a repository on top of that which also contains all the validation rules.

What is Reflection?

I am VERY new to ASP.NET. I come from a VB6 / ASP (classic) / SQL Server 2000 background. I am reading a lot about Visual Studio 2008 (have installed it and am poking around). I have read about "reflection" and would like someone to explain, as best as you can to an older developer of the technologies I've written above, what exactly Reflection is and why I would use it... I am having trouble getting my head around that. Thanks!
Reflection is how you can explore the internals of different types without normally having access (i.e. to private, protected, etc. members).
It's also used to dynamically load DLLs and get access to the types and methods defined in them without statically compiling them into your project.
In a nutshell: Reflection is your toolkit for peeking under the hood of a piece of code.
As to why you would use it, it's generally only used in complex situations, or code analysis. The other common use is for loading precompiled plugins into your project.
Reflection lets you programmatically load an assembly, get a list of all the types in an assembly, get a list of all the properties and methods in these types, etc.
As an example:
myobject.GetType().GetProperty("MyProperty").SetValue(myobject, "wicked!", null);
It allows the internals of an object to be reflected to the outside world (code that is using said objects).
A practical use in statically typed languages like C# (and Java) is to allow invocation of methods/members at runtime via a string (eg the name of the method - perhaps you don't know the name of the method you will use at compile time).
In the context of dynamic languages I haven't heard the term as much (as generally you don't worry about the above), other than perhaps to iterate through a list of methods/members, etc.
Reflection is .NET's means to manipulate or extract information from an assembly, class or method at run time. For example, you can create a class at runtime, including its methods. As stated by monoxide, reflection is used to dynamically load assemblies as plugins, or in advanced cases, to create a .NET compiler targeting .NET, like IronPython.
Updated: You may refer to the topic on metaprogramming and its related topics for more details.
When you build any assembly in .NET (ASP.NET, Windows Forms, Command line, class library etc), a number of meta-data "definition tables" are also created within the assembly storing information about methods, fields and types corresponding to the types, fields and methods you wrote in your code.
The classes in the System.Reflection namespace in .NET allow you to enumerate and iterate over these tables, providing an "object model" for you to query and access items in these tables.
One common use of Reflection is providing extensibility (plug-ins) to your application. For example, Reflection allows you to load an assembly dynamically from a file path, query its types for a specific useful type (such as an Interface your application can call) and then actually invoke a method on this external assembly.
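For example, a typical plugin loader looks something like this (IPlugin, its Execute method and the DLL path are assumptions for the example; requires System.Reflection):

// Load an assembly from disk and instantiate every type that
// implements the interface our application knows how to call.
Assembly assembly = Assembly.LoadFrom(@"plugins\ReportModule.dll");

foreach (Type type in assembly.GetTypes())
{
    if (typeof(IPlugin).IsAssignableFrom(type) && !type.IsAbstract && !type.IsInterface)
    {
        IPlugin plugin = (IPlugin)Activator.CreateInstance(type);
        plugin.Execute();
    }
}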
Custom attributes also go hand in hand with reflection. For example, the NUnit unit testing framework lets you mark a testing class and its test methods by adding [TestFixture] and [Test] attributes to your own code.
The NUnit test runner must then use reflection to load your assembly, search for all occurrences of methods that have the test attribute, and then actually call your test.
This is simplifying it a lot, however it gives you a good practical example of where Reflection is essential.
Reflection certainly is powerful; however, beware that it allows you to completely disregard the fundamental concept of access modifiers (encapsulation) in object-oriented programming.
For example, you can easily use it to retrieve a list of private methods in a class and actually call them. For this reason you need to think carefully about how and where you use it, to avoid bypassing encapsulation and creating very tightly coupled (bad) code.
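To illustrate (Widget, Recalculate and widgetInstance are invented names): this compiles and runs, but it silently depends on a private member that may change at any time.

// Fetch and invoke a private instance method - possible, but it
// bypasses encapsulation, so use it sparingly and deliberately.
MethodInfo method = typeof(Widget).GetMethod(
    "Recalculate",
    BindingFlags.Instance | BindingFlags.NonPublic);
method.Invoke(widgetInstance, null);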
Reflection is the process of inspecting the metadata of an application. In other words, when reading attributes you've already looked at some of the functionality that reflection offers. Reflection enables an application to collect information about itself and act on this information. Reflection is slower than normally executing static code; it can, however, give you a flexibility that static code can't provide.
