Plugin-based WCF web service - ASP.NET

I am implementing an IIS-hosted WCF web service to accept leads from third parties. There are plenty of operations that may happen pre/post saving the information. I am thinking of implementing this as a plug-in-based architecture.
Examples of pre-save operations are:
duplicate checking before saving
making sure the information is valid (not Mickey Mouse)
Examples of post-save operations are:
zip-code-based routing to the correct warehouse
lead assignment.
I have been reading about MEF, but I have been unable to decide whether it is actually worth implementing, since loading and unloading plugins on every call would likely add overhead. Is there a way to just load all the plugins once in some magic Application_Start?

I agree with Steven that you don't need a plugin architecture for this. You are good to go with just properly designed services. There are some very good hints about this in Steven's blog post, Writing Highly Maintainable WCF Services.
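To make that concrete, here is a minimal sketch in the spirit of that post, applied to this question (all type names here are hypothetical, not from the blog post): each pre/post-save operation becomes a decorator around a core save handler, so you add behavior by registering another decorator instead of loading plugins.

    // Command/handler abstraction in the blog post's style.
    public interface ICommandHandler<TCommand>
    {
        void Handle(TCommand command);
    }

    public class SaveLeadCommand
    {
        public string Email { get; set; }
        public string ZipCode { get; set; }
    }

    // Core handler: just persists the lead.
    public class SaveLeadHandler : ICommandHandler<SaveLeadCommand>
    {
        public void Handle(SaveLeadCommand command)
        {
            // ... save to the database ...
        }
    }

    // Pre-save decorator: duplicate check, then delegate inward.
    public class DuplicateCheckDecorator : ICommandHandler<SaveLeadCommand>
    {
        private readonly ICommandHandler<SaveLeadCommand> inner;

        public DuplicateCheckDecorator(ICommandHandler<SaveLeadCommand> inner)
        {
            this.inner = inner;
        }

        public void Handle(SaveLeadCommand command)
        {
            // ... reject or merge duplicates here ...
            inner.Handle(command);
        }
    }

A post-save step (zip-code routing, lead assignment) is the mirror image: a decorator that calls the inner handler first and does its work afterwards.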
Nonetheless, to answer the second part of the question: there is nothing stopping you from initializing the MEF composition container in your Application_Start and storing it statically (other than the fact that it introduces global state, which is often a bad design decision). It would then be shared across requests, and you could use it to compose parts as needed without the overhead of repeated export discovery.
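A minimal sketch of that, assuming a Global.asax code-behind and MEF's System.ComponentModel.Composition.Hosting namespace (the plugin folder path is illustrative):

    // Global.asax.cs - compose the MEF container once at startup.
    public class Global : System.Web.HttpApplication
    {
        // Static container = global state; weigh that trade-off.
        public static CompositionContainer Container { get; private set; }

        protected void Application_Start(object sender, EventArgs e)
        {
            var catalog = new DirectoryCatalog(Server.MapPath("~/bin/Plugins"));
            Container = new CompositionContainer(catalog);
        }
    }

Each service call can then ask the shared container for its parts, e.g. Global.Container.GetExportedValues<IPreSaveStep>() (IPreSaveStep being whatever contract your plugins export), without rescanning the plugin directory on every request.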

Related

ASP.NET infrastructure in MediatR handlers

I prefer to keep my handlers free from ASP.NET infrastructure that is very hard to test (yes, even in ASP.NET Core). But sometimes it happens, and you end up with a dependency like UserManager (I'd like to know one day why it's not an interface), HttpContext, etc., and unit tests turn into a mocking hell.
I tried using integration testing to deal with it by creating a TestServer and having all the ASP.NET infrastructure initialized for every API call. It works quite well but sometimes seems like overkill when I want to test the simple logic of my handler. And while it solves the technical problem of mocking ASP.NET infrastructure, it keeps the architectural problem (if you consider it one) of having ASP.NET infrastructure in your handlers.
What are the recommended approaches for dealing with this?
I feel your pain. I stumbled across a fantastic blog post from Jimmy Bogard that handles this problem by using what Martin Fowler calls Subcutaneous Tests. I will leave the deep explanation to those experts, but in a nutshell, subcutaneous tests simply avoid all the difficult-to-test aspects of the UI.
Shameless plug: I am currently in the process of writing up a wiki that demonstrates these patterns in a sample end-to-end project on GitHub. It's not difficult to follow but is probably too much code to post in an SO answer.
To Summarize:
If you are using MediatR correctly, your controllers should be very thin, which makes testing them pointless.
What you want to test are your handlers.
However, you want to test your handlers as part of your real world pipeline.
To Solve:
Wrap the HTTP request in a transaction.
Build a test fixture that mimics the application's Startup.cs.
Set up a test database server to execute queries and commands against, which is also reset after each test.
That's basically it. Every time you run an integration test against one of your handlers:
The hosting environment is mocked, but your application is started up as in the real world.
Your query or command is wrapped in a transaction, mimicking your DbContext.
The handler is executed against a real database, which is then reset.
I would add more code examples to my answer, but between the blog post and the wiki I provided, it is much easier to follow the code examples there.
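For orientation only, here is a rough sketch of the shape such a test ends up taking; the fixture members (SendAsync, FindAsync) are illustrative stand-ins, not the actual API from those sources:

    // Illustrative subcutaneous test against a MediatR handler (xUnit).
    public class CreateOrderTests : IClassFixture<SliceFixture>
    {
        private readonly SliceFixture fixture;

        public CreateOrderTests(SliceFixture fixture)
        {
            this.fixture = fixture;
        }

        [Fact]
        public async Task Should_create_an_order()
        {
            // The fixture builds the real service provider from Startup,
            // wraps the call in a transaction, and resets the test database
            // afterwards, so the handler runs in its real pipeline.
            await fixture.SendAsync(new CreateOrder.Command { CustomerId = 42 });

            var order = await fixture.FindAsync<Order>(o => o.CustomerId == 42);
            Assert.NotNull(order);
        }
    }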
Edit 8/2021:
Stick with the source. Jimmy Bogard keeps the Contoso University project current on his GitHub page. Another great and slightly more advanced example is the modular monolith project by Kamil Grzybek. That is also updated regularly on his GitHub page.
MediatR or not, you should always try to have only very basic pass-it-along logic in your controllers and call injected business logic classes from there to do the actual work. Because you inject that business logic through interfaces, your controllers' dependencies are easily mocked in your unit tests, and your tests can focus on whether the controllers use those interfaces properly and do only the basic work of routing input/output. And your actual business logic can be tested even more easily.
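A bare-bones illustration of that shape (IOrderService and CreateOrderDto are hypothetical names):

    // The controller only routes input/output; the injected interface
    // carries all the business logic and is trivially mocked in tests.
    [ApiController]
    [Route("api/orders")]
    public class OrdersController : ControllerBase
    {
        private readonly IOrderService orders;

        public OrdersController(IOrderService orders)
        {
            this.orders = orders;
        }

        [HttpPost]
        public async Task<IActionResult> Create(CreateOrderDto dto)
        {
            var id = await orders.CreateAsync(dto);
            return Ok(id);
        }
    }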
For those classes that are static, for instance reading web.config settings, one strategy I like a lot is to make an interfaced wrapper class around them. While ConfigurationManager is static, I can still write a regular class with an interface, with methods or properties (preferably semantically named) that read a specific setting from the ConfigurationManager. Now I can easily mock any configured setting (or the absence of one) in my tests by just mocking the interface and setting up different return values.
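For example (the setting name here is made up):

    // Interfaced wrapper so a configured setting can be mocked in tests.
    public interface IAppSettings
    {
        int MaxUploadSizeMb { get; }
    }

    public class AppSettings : IAppSettings
    {
        public int MaxUploadSizeMb
        {
            get
            {
                // Reads <appSettings> from web.config via the static API.
                var raw = System.Configuration.ConfigurationManager
                    .AppSettings["MaxUploadSizeMb"];
                return int.Parse(raw ?? "10");
            }
        }
    }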
I'd say it depends on the level of confidence you want to get in the end. If you want to make sure the whole system works as expected, then integration tests using a TestServer are probably the way to go.
One advantage of MediatR, though, is that it allows you to decouple your business logic from the application using it, which is why, at the very top level (let's say in controllers), there's no logic, just delegation to the mediator.
That being said, you're right that sometimes your logic needs information from the hosting application. An example would be the user making the request, which is accessible in the HTTP context.
In that case, if you want to avoid having to set up a test HTTP server to test your logic works, you could represent that information in an abstraction and your handler would then take a dependency on that abstraction. Your tests could then mock that dependency while using the real system for everything else.
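A small sketch of that idea (the interface and names are illustrative):

    // Abstraction over "who is making this request".
    public interface ICurrentUser
    {
        string UserId { get; }
    }

    // Production implementation reads the real HTTP context.
    public class HttpContextCurrentUser : ICurrentUser
    {
        private readonly IHttpContextAccessor accessor;

        public HttpContextCurrentUser(IHttpContextAccessor accessor)
        {
            this.accessor = accessor;
        }

        public string UserId =>
            accessor.HttpContext?.User?.FindFirst("sub")?.Value;
    }

The handler depends only on ICurrentUser, so a unit test can hand it a stub with a fixed UserId and never touch HttpContext; only the thin HttpContextCurrentUser adapter needs an integration test.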
Does that make sense?

When to choose Enterprise Application Integration Middleware?

I work in a software company that delivers a software product. Many times we must integrate with other applications; 70% of the time we integrate with a single application. Currently we do not use middleware (Mule ESB, BizTalk, ...) in these situations: the data conversion, transport conversion, etc. are handled inside the applications.
Wouldn't it be better to ALWAYS use a middleware solution, no matter whether you're integrating with one system or more? This way, all the customizations (data formatting, restructuring, transport conversions) for both parties can be handled by the middleware instead of by the applications.
Logically this seems to me the right approach. But I ask myself: is middleware justifiable in the case of just two applications?
In practical terms, you will always be using a "Middleware" solution, either custom or packaged.
I'd look at it this way: rather than requiring a "middleware" package, I'd focus on making the app itself integration-friendly by using a consistent and as-standard-as-practical API for exchanging data.
Then the decision on which "middleware" to use is driven more by the circumstances on site. If the customer has only one app to integrate, a simple custom solution might be perfectly serviceable. If they have five apps and defined processes for each, a package makes more sense.
I don't always use middleware, but when I do, I use BizTalk. ;)
I found this to be a useful read when considering appropriate architecture. If you get a chance to see Richard Seroter's "Decision Framework" presentation, you should.
The Wikipedia article on the publish/subscribe pattern (http://en.wikipedia.org/wiki/Publish/subscribe) shows an interesting comparison of how publish/subscribe relates to client-server.

Implementation of an ASP.NET-based portal-like application

There is a requirement to write a portal-like ASP.NET-based web application.
There should be a lightweight central application that implements the primary navigation and authentication. The design is achieved with master pages.
Then there are several more or less independent applications (old and new ones!) that should be easily and independently integrated into this central application (which should be their entry point).
Which approaches, architectures, patterns, and techniques can help achieve these aims? For example, does it make sense to run the (sub)applications in an iframe?
Are there lightweight and easy-to-learn portal frameworks that could be used (not big things like DotNetNuke)?
Many thanks in advance for your hints, tips, and help!
DON'T REINVENT THE WHEEL! The thing about DotNetNuke is that it can be as big or as small as you make it. If you use it properly, you will find that you can limit it to what you need. Don't put yourself through the same pain that others have already put themselves through. Unless of course you are only interested in learning from your pain.
I'm not saying that DNN is the right one for you. It may not be, but do spend the time to investigate a number of open source portals before you decide to write your own. The features that you describe will take thousands of hours to develop and test if you write them all from scratch.
@Michael Shimmins makes some good suggestions about what to use to implement a portal app with some of the newer technologies and best-practice patterns. I would say yes, these are very good recommendations, but I would encourage you to either find someone who has already done it this way or start a new open source project on CodePlex and get others to help you.
Daniel Dyson makes a fine point, but if you really want to implement it yourself (there may be a reason), I would consider the following components:
MVC 2.0
Inversion of Control/Dependency Injection (StructureMap for instance)
Managed Extensibility Framework
NHibernate (either directly or through a library such as S#arp Architecture or Spring.NET)
A service bus (NServiceBus for instance).
This combination gives you a flexible user interface through MVC, which can easily be extended via plugins (exposed and consumed via MEF); a standard data access library (NHibernate), which can easily be configured by the individual plugins to connect to specific databases; and the ability to publish events and "pick them up" in components composed at runtime (NServiceBus).
Using IoC and DI, you can pass around interfaces that are resolved at runtime based on your required configuration. MEF gives you the flexibility of defining "what" each plugin can do and then leaving it up to the plugins to do it, while your central application controls cross-cutting concerns such as authentication, logging, etc.
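As a taste of the MEF piece of that stack, a sketch under the assumption of a hypothetical IPortalModule plugin contract (the Export/ImportMany attributes are real MEF):

    // Contract the central application defines for its plugins.
    public interface IPortalModule
    {
        string Name { get; }
        void RegisterRoutes(RouteCollection routes);
    }

    // A plugin assembly exports an implementation...
    [Export(typeof(IPortalModule))]
    public class NewsModule : IPortalModule
    {
        public string Name { get { return "News"; } }

        public void RegisterRoutes(RouteCollection routes)
        {
            // ... register this module's routes ...
        }
    }

    // ...and the shell discovers every available module at composition time.
    public class PortalShell
    {
        [ImportMany]
        public IEnumerable<IPortalModule> Modules { get; set; }
    }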

Best Practices: Server-Side Scripting or Web Services

Let me start off by stating that I am a novice developer, so please excuse the elementary nature of my question(s).
I am currently working on a Flex application and am getting more and more confused about when to use server-side scripting and when to develop web services. For most of the functionality I am working on, I take various files from the user (client), upload them to the server for processing/conversion, then send them back to the client in the new format.
I am accomplishing most of this using ASP.NET generic handler (.ashx) files, but I am not very confident that this is best practice. At the same time, would building web services make any more sense? What would be considered best practice for this? Any suggestions would be greatly appreciated.
The way I look at it is as follows:
Web Services mean Established Best Practice.
For most of our development, we don't need to create "Web Services" in the sense of what I think of with REST, SOAP, and the Twitter API. You only need to start doing that once you've got something you're going to be using every day for years.
Clean and DRY code will lead you to creating a Web Service
If you spend the time to clearly define the parts of your upload-process-render architecture, and you find that it can be applied to almost everything you are doing, then all you need to do to make it a Web Service is define a clear, 1-2-3 set of rules for using the system (GET/POST data, etc.). As long as you are consciously building an architecture the whole way, you'll end up creating a Web Service if it's worth it. Otherwise there's no need.
It sounds like you have a clear workflow going; I don't know anything about ASP.NET, though.
As far as it being confusing sometimes, and best practices, I suggest the following:
Create a Flex library project for your generic .ashx-file-handling Flex classes. Give it a cool, simple name.
Create a .NET library project that encapsulates all the logic for your server-side file processing. Host it online and make it open source; I recommend GitHub. Test it as you go, and document it, its purpose, and the theory behind it.
If you don't have to do any more work at this point, and it's just plug and chug, then you've probably arrived at something that might become a Web Service, though that's probably a few years down the road.
I don't think you should try to create a Web Service right off the bat. Just write some clean and reusable code, make a few examples, get it online and open source, have others contribute and give feedback, and if it solves a specific problem, then make it a web service. You can probably just use REST for now and build your system around that. RestfulX is a great library for that.
Best,
Lance
Making web services without any need makes no sense. ;)
Now, in the world of Flex (AS3 with Flash Player 10), you can easily read local files, modify them with whatever modification algorithm, and save them locally without pinging the server.
You only have to use web services if you want to get some data from the server or send some data to it. That's all.
RSTanvir
Flash/Flex uses a simple HTTP POST approach for file uploads, so trying to do that using SOAP web services will be problematic. Your approach of using ASHX here sounds reasonable to me.
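For reference, a generic handler is just an IHttpHandler; here is a stripped-down sketch of the upload-process-return flow (the "Filedata" field name is, as far as I know, what Flash's FileReference.upload sends by default, and the processing step is a placeholder):

    // Upload.ashx code-behind: accept a file POSTed from Flex,
    // process it, and stream the result back.
    public class Upload : System.Web.IHttpHandler
    {
        public bool IsReusable { get { return true; } }

        public void ProcessRequest(System.Web.HttpContext context)
        {
            var file = context.Request.Files["Filedata"];
            if (file == null)
            {
                context.Response.StatusCode = 400;
                return;
            }

            byte[] result = Process(file.InputStream);
            context.Response.ContentType = "application/octet-stream";
            context.Response.BinaryWrite(result);
        }

        private byte[] Process(System.IO.Stream input)
        {
            // Placeholder: echo the file back; real conversion goes here.
            using (var ms = new System.IO.MemoryStream())
            {
                input.CopyTo(ms);
                return ms.ToArray();
            }
        }
    }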
To send/receive data that isn't file based (e.g. a list of files the user has uploaded previously), I would recommend looking at the open source FluorineFX library. Fluorine uses AMF, which is a highly performant way of doing data transfer with Flash. It's also purely configuration-based, which means you don't need to code against any of its APIs; just configure Fluorine to expose your .NET service classes. You could easily add attributes to those same classes to expose them as SOAP web services via WCF if you need that in the future. I would not recommend using SOAP with Flex, however, due to the performance losses and also because the Flex implementation of SOAP has a history of bugs and interoperability problems.

Concurrency in ASP.NET: best practices and worst practices

In which cases do you need to watch out for concurrency problems (and use lock, for instance) in ASP.NET?
Are there "best practices" around on this topic?
Documentation?
Examples?
"Worst practices", or things you've seen that can cause a disaster?
I'm curious about, for instance, singletons (even though they are considered bad practice - don't start a discussion on this), static functions (do you need to watch out here?), etc.
Since ASP.NET is a web framework and is mainly stateless, there are very few concurrency concerns that need to be addressed.
The only thing I have ever had to deal with is managing the application cache, but this is easily done with a cache-management type that wraps the .NET caching mechanisms.
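A minimal sketch of such a wrapper (assuming the classic System.Web cache; a real one would add an expiration policy):

    // The ASP.NET cache itself is thread-safe, but a "check, then build,
    // then add" sequence needs a lock so the value isn't built twice.
    public class CacheManager
    {
        private static readonly object SyncRoot = new object();

        public T GetOrAdd<T>(string key, Func<T> factory) where T : class
        {
            var cache = System.Web.HttpRuntime.Cache;
            var value = cache[key] as T;
            if (value == null)
            {
                lock (SyncRoot)
                {
                    value = cache[key] as T; // re-check inside the lock
                    if (value == null)
                    {
                        value = factory();
                        cache.Insert(key, value);
                    }
                }
            }
            return value;
        }
    }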
One huge problem that caused us a lot of grief was using Modules instead of Classes in our main web service. This was before we really knew what we were doing, and it has since been fixed.
The big problem with using modules is that, by default, any module-level variables are shared across every request handled by the ASP.NET worker process. We pass in multiple datasets, manipulate them, and return them to the client. Because we were using modules, the variables holding these datasets were getting corrupted by multiple calls occurring at the same time.
This was not caught in testing and was difficult to reproduce until we figured out how to properly load-test our web services. It took something like 10-20 requests per second before we could reproduce it reliably.
In the end, we changed all the modules to classes and used instances of those classes instead of calls to the modules. This cleared up the concurrency issue, as each instantiated class had its own copy of the dataset in memory.
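The VB.NET module pitfall translates directly to static members in C#; a condensed, hypothetical illustration of the bug and the fix:

    // BROKEN: a module-level (static) field is one shared copy for the
    // whole worker process, so concurrent requests overwrite each other.
    public static class LeadProcessorModule
    {
        public static DataSet Current; // shared across all requests!

        public static DataSet Process(DataSet input)
        {
            Current = input; // request B can clobber request A right here
            // ... manipulate Current ...
            return Current;
        }
    }

    // FIXED: an instance field - each request news up its own processor,
    // so each request works on its own copy of the data.
    public class LeadProcessor
    {
        private DataSet current;

        public DataSet Process(DataSet input)
        {
            current = input;
            // ... manipulate current ...
            return current;
        }
    }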
