Extending/overriding System.Net.Mail.SmtpClient Send(message As MailMessage) method - asp.net

Scenario
Around 20 ASP.NET (VB) applications share the same code framework and, when deployed, also share a common web.config. Throughout the various applications we use System.Net.Mail.SmtpClient/MailMessage to send e-mails, and now we would like to implement an e-mail opt-out feature for our users with a minimal amount of change to the existing code. That rules out the simplest approach: inheriting a class from SmtpClient, say OurSmtpClient, and overriding the Send() method to remove all users who have opted out of e-mails, as that would mean changing every New SmtpClient() to New OurSmtpClient() throughout the apps.
Alternatives
We've previously used tagMapping to remap tags to our in-house, derived alternatives. Is there anything similar for classes, so that every SmtpClient automatically becomes OurSmtpClient and thus uses the overridden Send() method?
We've also looked at extension methods, but the problem there is that we can't override existing methods, only add new ones.
The next alternative we considered is reflection, but we couldn't work out how to actually implement it.
Events .. Oh, if there was a Sending event ...
Code (cause everyone likes it)
Here is the inherit approach, just to understand what we are looking for:
Public Class OurSmtpClient
    Inherits SmtpClient

    ' Hides the base Send(MailMessage) overload and strips opted-out
    ' recipients before delegating to the real implementation.
    Public Overloads Sub Send(message As MailMessage)
        For i As Integer = message.To.Count - 1 To 0 Step -1
            With message.To(i)
                If .Address.Contains("test") Then
                    message.To.RemoveAt(i)
                End If
            End With
        Next
        MyBase.Send(message)
    End Sub
End Class
Any suggestions? How can this be done without changing the code in the existing applications - only the shared code (which lives in App_Code in the apps) or the shared web.config?

I would change this at the data layer instead of in the mail client.
I'm assuming you store all the information about your users somewhere central, along with the fact that they would rather not receive any further e-mails. So in my eyes, the change would be to simply no longer return those users whenever you ask for the list of users to send e-mails to.
I don't know enough about the way your current applications work, but that does seem like the most convenient place to change it.
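To make the data-layer idea concrete, here is a minimal, hypothetical sketch - the User type, the EmailOptOut flag, and the repository name are all assumptions, not your actual schema. The point is that the opt-out filter lives in one shared query, so none of the apps' mail-sending call sites need to change.

```csharp
using System.Collections.Generic;
using System.Linq;

public class User
{
    public string Address { get; set; }
    public bool EmailOptOut { get; set; }
}

public static class UserRepository
{
    // Central query used by every app: opted-out users are excluded here,
    // once, instead of in every SmtpClient call site.
    public static IEnumerable<User> GetMailableUsers(IEnumerable<User> allUsers)
    {
        return allUsers.Where(u => !u.EmailOptOut);
    }
}
```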

The fact that you are struggling to implement what should be a straightforward requirement is a big clue that you've built up too much technical debt, and your post conveys a strong reluctance to pay it down. That is something you must avoid; instead, embrace merciless refactoring. Bite the bullet and introduce that specialised SMTP class.

Properly scope functions and methods

I wanted to ask a question about securing my web site. I have an ASP.NET site, using VB. I have decided on a 3-tiered approach: one solution with three projects. The first project is the website, the second is a class library for business logic, and the last is a class library for database interactions.
My question is: how do I protect the functions/subs in my various tiers from being accessed and used outside of the project? I can't make the functions/subs Private, Protected, or Friend, as each library is in its own project. But I want to make sure other entities, outside of the project, can't access these resources. Here is an example layout of my project:
MySolution
MyWebSite - Has references to MyBusinessLayer and MyDataLayer class libraries
MyBusinessLayer - Has a reference to MyDataLayer
MyDataLayer - has no References
Example code would be:
MyWebsite load:
Public DATA_PROXY As New MyDataLayer
Protected Sub Page_Load(ByVal sender As Object, ByVal e As EventArgs) Handles Me.Load
    Dim someData As Users = DATA_PROXY.getSomeData()
    ' do stuff with data
End Sub
MyBusinessLayer
Public Function getSomeData() As Users
    Dim conn As New MyDataLayer
    Dim dr As SqlDataReader
    conn.Connect()
    dr = conn.getSomeDataFromDB()
    ' ...more logic done here; builds and returns a Users object
End Function
MyDataLayer
Public Function getSomeDataFromDB() As SqlDataReader
    Dim dr As SqlDataReader
    ' ... do stuff to get data from the database
    Return dr
End Function
So, you can see everything is Public. Is this a bad pattern to use? Can other, outside entities, access these DLLs once they are deployed? What are some security concerns I should have? Also, are there any issues with using this pattern of development?
This is a partial answer.
You have the beginnings of going down the right path. The use of layers is a concept to hide (or encapsulate) various functions from those who do not need to know the implementation. But your implementation of the layers will give you grief. I will use your three layers. I will also use the term "function" liberally where I mean either function or sub-routine depending on the context.
The use of (server-side) ASP helps, because the output is a rendered page, not an exposed series of function calls or scripts (always one of my beefs with client-side JavaScript).
MyWebsite should have the public facing code. This can be the functions that render information onto the page, or validate and accept input from the users. Tightly manage the function calls (arguments/parameters) so that you only get inputs that you expect.
With MyBusinessLayer and MyDataLayer, create some interface/adapter functions that do nothing except validate the inputs (the function arguments/parameters), call the working functions, and then validate the outputs (so that the site does not expose code and cross-site scripting cannot occur). All the other functions that do the work (and thus expose details of the actual database etc.) can be Private or Friend. The tighter you can make it, the easier it is to secure.
These extra functions initially seem like more work - but they aid maintenance at a later stage. If your database details change, you can update the Private working code, but if it gives the same type of output then your higher-level interfaces do not change, and neither do any of the layers above. From a security perspective, you have now minimised the work needed to check for security issues after a code change.
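As a hedged illustration of the adapter idea (all names here are invented; shown in C# for brevity, where internal corresponds to VB's Friend):

```csharp
using System;

public class CustomerAdapter
{
    // Public interface function: validate inputs, delegate, validate outputs.
    public string GetCustomerName(int customerId)
    {
        if (customerId <= 0)
            throw new ArgumentOutOfRangeException("customerId");

        string name = LoadCustomerName(customerId);

        // Output validation: never let raw markup leak to the page.
        if (name.Contains("<"))
            throw new InvalidOperationException("Unexpected markup in data.");
        return name;
    }

    // Working code: internal (Friend), so other assemblies cannot call it
    // directly and bypass the validation above.
    internal string LoadCustomerName(int customerId)
    {
        // ... a real implementation would query the database here ...
        return "Customer " + customerId;
    }
}
```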
Going back to your original post:
Is this a bad pattern to use? The pattern itself is not bad - as I have noted above it is the implementation of that pattern that is important.
Can other, outside entities, access these DLLs once they are deployed? If they are Public - yes.
What are some security concerns I should have? If people can bypass your interface functions (because your working code is also Public), they can bypass your validations. Sure, you can put validations in your working code too, but in my experience this results in much duplication and many mistakes, and is much harder to maintain. This is a good question though - design with security in mind, because it is much harder to add security later. So: minimise what others can see and access.
Also, are there any issues with using this pattern of development? I have used this pattern of development myself (User interface, business logic, data adapters) and it works. It is more work initially and, if you are like me (code - run - code - work out what I am doing - design a bit - code - run etc), sometimes seems to be a lot of rework. But, from my experience, this short-term pain is definitely worth the long-term gain.

Passing ViewModel from Presentation to Service - Is it Okay?

In one of my views, I have a ViewModel which I populate from two tables, and then bind a List<ViewModel> to an editable GridView (ASP.NET Web Forms).
Now I need to send that edited List<ViewModel> back to the Services layer to update it in the database.
My question is - is it Okay to send the ViewModel back to Services, or should it stay in the Presentation? If not - should I better use a DTO? Many thanks.
Nice question!
After several (hard) debates with my teammates, plus my experience with MVC applications, I would not recommend passing view models to your service/domain layer.
The ViewModel belongs to the presentation layer, no matter what.
Because a view model can be a combination of different models (e.g. one view model built from ten models), your service layer should work only with your domain entities.
Otherwise, your service layer will end up unusable, because it is constrained by view models that are specific to one view.
Nice tools like AutoMapper (https://github.com/AutoMapper/AutoMapper) were made to do the mapping job.
I would not do it. My rule is: supply service methods with everything they need to do their job and nothing more.
Why?
Because it reduces coupling. More often than not service methods are addressed from several sources (consumers). It is much easier for a consumer to fulfil a simple method signature than having to build a relatively complex object like a view model that it otherwise may have nothing to do with. It may even need a reference to an assembly it wouldn't need otherwise.
It greatly reduces maintenance effort. I think an average developer spends more than 50% of his time inspecting and tracking existing code (maybe even much more). Now everybody knows that looking for something that is not there takes disproportionately long: you must have been everywhere to be sure. If a method receives arguments (or an object with properties) that are not used directly or further down the call stack, you or others will walk this long road time and again.
So if there is anything in the view model that does not play a part in the service method, don't use it to call the method.
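A small, hypothetical sketch of this rule (all type names are invented): the service method asks only for what it needs, and the presentation layer does the mapping at the boundary, so the service assembly never references the view model.

```csharp
public class CustomerViewModel
{
    public int Id { get; set; }
    public string DisplayName { get; set; }
    public string CssClass { get; set; } // presentation-only detail
}

public class CustomerService
{
    // Takes exactly what it needs: an id and a name, nothing more.
    public string Rename(int customerId, string newName)
    {
        // ... a real implementation would persist the change; we echo it here ...
        return "Customer " + customerId + " renamed to " + newName;
    }
}

public static class CustomerController
{
    public static string Save(CustomerViewModel vm, CustomerService service)
    {
        // Mapping happens at the boundary; CssClass never crosses it.
        return service.Rename(vm.Id, vm.DisplayName);
    }
}
```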
Yes, I am pretty sure it is okay.
Try using MS Entity Framework; it will help you a lot.

Dynamic form creation in asp.net c#

So, I need some input refactoring an asp.net (c#) application that is basically a framework for creating dynamic forms (any forms). From a high level point of view, there is a table that has the forms, and then there is a table that has all the form fields, where it is one to many between the two. There is a validation table, where each field can have multiple types of validation, and it is a one to many from the form fields table to the validation table.
So the issue is that this application has been sold as the be-all-end-all customizable solution for all the clients. The idea is that whatever form they want, we can build it just using DB configuration. The thing is, that is not always possible, because there are complex relationships between the fields, and complex relationships between the forms themselves. Also, there is only one codebase, and this is for multiple clients - all of whom host it on their own. There is very specific logic for each of the clients, and it is ALL in the same codebase, with no real separation. Sometimes it was too difficult to make it generic, so there are instances with hard-coded logic (as in if formID = XXX then do _). You can also have nested forms, as in, one set of fields on its own within each form.
So usually, when one client requests a change, we make the change and deploy it to that client - but then another client requests a different change, and we make the change and deploy it for THAT client, and the change from the earlier client breaks it, and it's a headache trying to debug, because EVERYTHING is dynamic. There is no way we can roll back the earlier change, because then the other client would be screwed.
It's not done in a real 3-tier architecture - it's a web site with references to a DB class and a class library. There is business logic in the web site itself, in the class library, and in the database stored procs (validation is done in the stored procs).
I've been put in charge of re-organizing the whole thing, and these are my thoughts/questions:
I think this is a bad model in general, because one of the things I heard one of the developers say is that anytime any client makes a change, we should deploy to everybody - but that is not realistic, if we have say 20 clients - there will need to be regression testing on EVERYTHING, since we don't know the impact...
There are about 100 forms in total, and there is some similarity between them (not much). But I think the idea that a dynamic engine can solve ALL form requests was not realistic either. Clients come up with the weirdest requests. For example, they have this engine doing a regular data-entry form AND a search form.
There is a lot of preserving of state between pages, and it is all done using session variables, which is OK, except that it is not really tracked, so sessions from the same user keep getting overwritten, and I think the sessions should be gotten rid of.
Should I really just rewrite the whole thing? This app is about 3 years old, with lots of testing done and serious business logic implemented, so I hate to throw all that away (Joel's advice). But it's really a mess of spaghetti code, everything takes forever to do, and things break all the time because of minor changes.
I've been reading Martin Fowler's "Refactoring" and Michael Feathers' "Working Effectively with Legacy Code" - they are good, but I feel they were written for applications that were 'slightly' better architected, where there is still a 3-tier architecture and 'some' semblance of logic.
Thoughts/input anyone?
Oh, and "Help!"
My current project sounds like almost exactly the same product you're describing. Fortunately, I learned most of my hardest lessons on a former product, and so I was able to start my current project with a clean slate. You should probably read through my answer to this question, which describes my experiences, and the lessons I learned.
The main thing to focus on is the idea that you are building a product. If you can't find a way to implement a particular feature using your current product feature set, you need to spend some additional time thinking about how you could turn this custom one-off feature into a configurable feature that can benefit all (or at least many) of your clients.
So:
If you're referring to the model of being able to create a fully customizable form that makes client-specific code almost unnecessary, that model is perfectly valid and I have a maintainable working product with real, paying clients that can prove it. Regression testing is performed on specific features and configuration combinations, rather than a specific client implementation. The key pieces that make this possible are:
An administrative interface that is effective at disallowing problematic combinations of configuration options.
A rules engine that allows certain actions in the system to invoke customizable triggers and cause other actions to happen.
An Integration framework that allows data to be pulled from a variety of sources and pushed to a variety of sources in a configurable manner.
The option to inject custom code as a plugin when absolutely necessary.
Yes, clients come up with weird requests. It's usually worthwhile to suggest alternative solutions that will still solve the client's problem while still allowing your product to be robust and configurable for other clients. Sometimes you just have to push back. Other times you'll have to do what they say, but use wise architectural practices to minimize the impact this could have on other client code.
Minimize use of the session to track state. Each page should have enough information on it to track the current page's state. Information that needs to persist even if the user clicks "Back" and starts doing something else should be stored in a database. I have found it useful, however, to keep a sort of breadcrumb tree on the session, to track how users got to a specific place and where to take them back to when they finish. But the ID of the node they're actually on currently needs to be persisted on a page-by-page basis, and sent back with each request, so weird things don't happen when the user is browsing to different pages in different tabs.
Use incremental refactoring. You may end up re-writing the whole thing twice by the time you're done, or you may never really "finish" the refactoring. But in the meantime, everything will still work, and you'll have new features every so often. As a rule, rewriting the whole thing will take you several times as long as you think it will, so don't try to take the whole thing in a single bite.
I have a number of similar apps for building dynamic forms that I support.
There's a whole lot of things you could/could not do & you're right to think hard before throwing away 3 years of testing/development.
My input for you to consider is to implement a plug-in architecture on top of what you've got. Any custom code for a form goes in a plug-in, and the name of this plug-in is stored with the form. When you generate a form, the correct plug-in is called to enhance the base functionality. That way you get to move all the custom code out of the existing library. It should also mean fewer breaking changes, as each plug-in only affects the form it's attached to.
From that point it'll be easy to refactor the core engine as it's common functionality across all clients & forms.
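A rough sketch of what such a plug-in seam could look like (IFormPlugin, FormEngine, and the convention of storing the plug-in type name with the form's DB record are all assumptions, not an existing API):

```csharp
using System;

public class Form
{
    public string Title { get; set; }
}

public interface IFormPlugin
{
    void Enhance(Form form);
}

// Client-specific behaviour lives here, outside the core engine.
public class AcmeCorpPlugin : IFormPlugin
{
    public void Enhance(Form form)
    {
        form.Title = form.Title + " (Acme)";
    }
}

public static class FormEngine
{
    // pluginTypeName comes from the form's DB record, e.g. "AcmeCorpPlugin".
    public static Form Generate(string title, string pluginTypeName)
    {
        var form = new Form { Title = title };
        if (!string.IsNullOrEmpty(pluginTypeName))
        {
            var type = Type.GetType(pluginTypeName);
            var plugin = (IFormPlugin)Activator.CreateInstance(type);
            plugin.Enhance(form);
        }
        return form;
    }
}
```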
Since your application seems to have become a big ball of mud, a complete (or almost complete) rewrite might make sense.
You should also take into account newer technologies like document-oriented databases (CouchDB, MongoDB).
Most of the form definitions would probably fit pretty well in a document-oriented database. For example:
To define a customer form, you could use a document that looks like:
{Type: "FormDefinition",
 EntityType: "Customer",
 Fields: [
   {FieldName: "CustomerName",
    FieldType: "String",
    Validations: [
      {ValidationType: "Required"},
      {ValidationType: "StringLength", Minimum: 15, Maximum: 50}
    ]},
   ...
   {FieldName: "CustomerType",
    FieldType: "Dropdown",
    PossibleValues: ["Standard", "Valued", "Gold"],
    DefaultValue: ["Standard"],
    Validations: [
      {ValidationType: "Required"},
      {ValidationType: "Custom",
       ValidationClass: "MySystem.CustomerName.CustomValidations.CustomerStatus"}
    ]},
   ...
 ]
};
With this kind of document to define your forms, you could easily add forms and validations which are customer specific.
You could easily add subforms using a fieldtype of SubForm or whatever.
You could define FieldTypes for all common types of fields like e-mail, phone numbers, address, etc.
namespace MySystem.CustomerName.CustomValidations {
    public class CustomerStatus : IValidator {
        private FormContext form;
        private List<ValidationError> validationErrors;

        public CustomerStatus(FormContext fc) {
            this.validationErrors = new List<ValidationError>();
            this.form = fc;
        }

        public List<ValidationError> Validate() {
            if (this.form.Fields["CustomerType"] == "Gold" && int.Parse(this.form.Fields["OrderCount"]) < 10) {
                this.validationErrors.Add(new ValidationError("A gold customer must have at least 10 orders"));
            }
            if (this.form.Fields["CustomerType"] == "Valued" && int.Parse(this.form.Fields["OrderCount"]) < 5) {
                this.validationErrors.Add(new ValidationError("A valued customer must have at least 5 orders"));
            }
            return this.validationErrors;
        }
    }
}
A record of a document with that definition could look like this:
{Type: "Record",
 EntityType: "Customer",
 Fields: [
   {FieldName: "CustomerName", Value: "ABC Corp."},
   {FieldName: "CustomerType", Value: "Gold"},
   ...
 ]
};
Sure, this solution is a lot of work, but once realized it would make creating/updating/customizing forms really easy.
This is a common but (IMO) somewhat naive design approach: "Instead of solving the customer's problem, let's build a tool to let them solve their own problems!" But the reality is that customers generally want YOU to solve their ACTUAL problems. So build things that solve their problems.
If you can architect it in a way that allows you to reuse some parts for different customers, fine. But that is generally what the frameworks have done for you already - work out the common features that applications need and make them available in neat packages.

What's the use of having private properties if you can alter them with reflection?

I know it sounds like a stupid question, but how safe is your API if you can access private properties and change them?
What's the point of having locks on your door when people can just kick the door down? Using reflection requires more skill and more effort. For the most part the code is fine. Reflection doesn't work well in non full trust environments anyway.
In a language not supporting reflection there's always a possibility of circumventing API through direct memory access.
Encapsulation is not about protecting your API from misuse; it is about hiding away the parts of the code that are subject to change. If client code uses the official interface, it will continue to work after such a change. If not, it was the author of that code who shot himself in the foot.
Well, in .NET at least, you can disallow reflection by using .NET permissions.
Also, the purpose of visibility levels in classes and class members is not only the access security. It is also a means to organize and document your code: when you see a private member you know that it is not intended to be used outside the class, and while maybe you can use it via reflection, you will normally not do it as it can cause unexpected behavior in your application.
Anyway I find this question sort of like "What's the purpose of doors having locks if I can smash them with a big enough hammer?" :-)
That's right, it's not totally safe, but reflection can also be immensely useful. And you can still only set a property if it has a setter, so it isn't all "bad".
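For illustration, a small C# sketch of the scenario under discussion - a private setter is unreachable for normal code, but reflection (in a full-trust environment) can invoke it. The Account type is invented for the example:

```csharp
using System;
using System.Reflection;

public class Account
{
    // Normal code outside this class cannot assign Balance.
    public decimal Balance { get; private set; }

    public Account(decimal opening) { Balance = opening; }
}

public static class Demo
{
    public static decimal Tamper(Account account, decimal newBalance)
    {
        // GetSetMethod(true) also returns non-public setters.
        PropertyInfo prop = typeof(Account).GetProperty("Balance");
        prop.GetSetMethod(true).Invoke(account, new object[] { newBalance });
        return account.Balance;
    }
}
```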
Even though reflection is indeed very useful, it's considered an indirect method of changing properties, and not necessarily a method that should be endorsed or supported by your API.
Having said that, making a property private ensures that it won't be changed by those accessing it by normal means.
The use of private properties is the same with reflection as without it; but anyone considering using reflection to access private members of a third-party class should be very sure of what they are doing - and should know that it can break operability.
You can prevent access to private properties by installing a SecurityManager. So if you need it, you can really make it private (and pay the price: some 3rd party libraries won't work anymore).
Like laws, private members are a price tag. They say: "If you don't follow the rules I impose, there'll be a price to pay." It doesn't mean you must follow the rules (just as outlawing killing didn't stop murder).

Fetch userdata on each request

My problem is quite simple - I think. I'm doing an ASP.NET MVC project. It's a project that requires the user to be logged in at all times. I'll probably need the current user's information in the MasterPage, like so: "Howdy, Mark - you're logged in!".
But what if I need the same information in the view? Or some validation in my servicelayer?
So how to make sure this information is available when I need it, and where I need it?
How much user information do you need? You can always access Thread.CurrentPrincipal and get the user's name - and possibly use that to look up more info on the user in a database.
Or if you really really really need some piece of information at all times, you could implement your own custom principal implementing IPrincipal (this is really not a big deal!), add those bits of information there, and when the user logs in, create an instance of MyCustomPrincipal and attach it to the current thread. Then it'll be available anywhere, everywhere, anytime.
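A minimal sketch of such a custom principal (the FavouriteColour property is just a stand-in for whatever extra data you need; the role logic is omitted):

```csharp
using System.Security.Principal;

public class MyCustomPrincipal : IPrincipal
{
    private readonly IIdentity identity;

    // The "extra" per-user data that rides along on the principal.
    public string FavouriteColour { get; private set; }

    public MyCustomPrincipal(string userName, string favouriteColour)
    {
        this.identity = new GenericIdentity(userName);
        this.FavouriteColour = favouriteColour;
    }

    public IIdentity Identity { get { return this.identity; } }

    public bool IsInRole(string role) { return false; } // sketch only
}

// At login:
//   Thread.CurrentPrincipal = new MyCustomPrincipal("mark", "green");
// Anywhere later:
//   var user = Thread.CurrentPrincipal as MyCustomPrincipal;
```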
Marc
I've had exactly the same issue, and have yet to find a satisfactory answer. All the options we've explored have had various issues. In the specific example you mention, you could obviously store that data in the session, as that would work for that example. We've had other scenarios where that may not work, but simple user info like that would be fine in the session.
We've just set up a BaseController that makes sure that info is always set and correct for each view. Depending on how you're handling authentication, etc., you will have some user data available in HttpContext.User.Identity.Name at all times, which can also be referenced.
Build a hierarchy of your models and put the shared information in the base model. This way it will be available to any view or partial view.
Of course it has to be retrieved on each request since web applications are not persistent.
You should store this in Session and retrieve it into your controllers via a custom ModelBinder.
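As a hedged sketch of that suggestion (ASP.NET MVC; the UserProfile type and the "CurrentUser" session key are assumptions): a custom binder pulls the profile out of Session, so actions can simply declare a UserProfile parameter.

```csharp
using System.Web.Mvc;

public class UserProfile
{
    public string DisplayName { get; set; }
}

public class UserProfileModelBinder : IModelBinder
{
    public object BindModel(ControllerContext controllerContext,
                            ModelBindingContext bindingContext)
    {
        // Stored in Session at login; returned for any UserProfile parameter.
        return controllerContext.HttpContext.Session["CurrentUser"];
    }
}

// In Global.asax Application_Start:
//   ModelBinders.Binders.Add(typeof(UserProfile), new UserProfileModelBinder());
//
// Then in any controller action:
//   public ActionResult Index(UserProfile profile) { ... }
```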
Not sure if I get what you want to ask, but if you are looking for things like authentication and role-based authorization, ASP.NET actually provides a great framework to work with and start from.
This article (and its second part) is something I recently discovered and read; it is a really good introduction to the provider pattern, which helps in understanding the underlying authentication framework of ASP.NET. Be sure to read about the MembershipProvider class and the RoleProvider class on MSDN too; together they make a great framework for most basic role-based authentication (if you are comfortable with the functionality they provide, you don't even need to code the data-access part - it is all provided in the default implementation!).
PS: Check out the Context.User property too! It stores the current authenticated user's information.
HttpContext.Current.User.Identity returns the current user's information, though I am not sure whether it gets passed implicitly when you make a web service call.
