I wanted to ask a question about securing my web site. I have an ASP.NET site, written in VB. I have decided on a 3-tiered approach: one solution with three projects. The first project is the website, the second is a class library for business logic, and the last is a class library for database interactions.
My question is: how do I protect the functions/subs in my various tiers from being accessed and used from outside the solution? I can't make the functions/subs Private, Protected, or Friend, as each library is its own project. But I want to make sure that entities outside of the solution cannot access these resources. Here is an example layout of my solution:
MySolution
MyWebSite - Has references to MyBusinessLayer and MyDataLayer class libraries
MyBusinessLayer - Has a reference to MyDataLayer
MyDataLayer - has no References
Example code would be:
MyWebsite load:
Public DATA_PROXY As New MyBusinessLayer 'the page talks to the business layer

Protected Sub Page_Load(ByVal sender As Object, ByVal e As EventArgs) Handles Me.Load
    Dim someData As Users = DATA_PROXY.getSomeData()
    'do stuff with data
End Sub
MyBusinessLayer
Public Function getSomeData() As Users
    Dim conn As New MyDataLayer 'the data layer, not the business layer itself
    Dim dr As SqlDataReader
    Dim result As Users
    conn.Connect()
    dr = conn.getSomeDataFromDB()
    '...more code for logic done here; builds result from dr
    Return result
End Function
MyDataLayer
Public Function getSomeDataFromDB() As SqlDataReader
    Dim dr As SqlDataReader
    '... do stuff to get data from the database
    Return dr
End Function
So, you can see everything is Public. Is this a bad pattern to use? Can other, outside entities, access these DLLs once they are deployed? What are some security concerns I should have? Also, are there any issues with using this pattern of development?
This is a partial answer.
You have the beginnings of going down the right path. The use of layers is a concept to hide (or encapsulate) various functions from those who do not need to know the implementation. But your implementation of the layers will give you grief. I will use your three layers. I will also use the term "function" liberally where I mean either function or sub-routine depending on the context.
The use of (server-side) ASP helps, because the output is a rendered page, not an exposed series of function calls or scripts (always one of my beefs with client-side JavaScript).
MyWebsite should have the public-facing code. This can be the functions that render information onto the page, or validate and accept input from the users. Tightly manage the function calls (arguments/parameters) so that you only get inputs that you expect.
With MyBusinessLayer and MyDataLayer, create some interface/adapter functions that do nothing except validate the inputs (the function arguments/parameters), call the working functions, and then validate the outputs (so that the website does not expose implementation details and cross-site scripting does not occur). All the other functions that do the work (and thus expose details of the actual database etc.) can be Private or Friend. The tighter you can make it, the easier it is to secure.
These extra functions initially seem like more work, but they aid maintenance at a later stage. If your database details change, you can update the Private working code; if it gives the same type of output, then your higher-level interfaces do not change, and neither do the layers above. From a security perspective, you have now minimised the work needed to check for security issues after a code change.
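To make this concrete, here is a minimal sketch of such an adapter pair in MyDataLayer (the function names and the validation rule are invented for illustration):

Imports System.Data.SqlClient

Public Class MyDataLayer

    ' Public adapter: does nothing except validate the inputs,
    ' call the working function, and hand back the result.
    Public Function GetUsersByPage(ByVal pageId As Integer) As SqlDataReader
        If pageId < 0 Then
            Throw New ArgumentOutOfRangeException("pageId")
        End If
        Return GetUsersByPageFromDB(pageId)
    End Function

    ' Friend working function: holds the real database details and is
    ' invisible to code outside this assembly.
    Friend Function GetUsersByPageFromDB(ByVal pageId As Integer) As SqlDataReader
        ' ... open the connection and execute the query here ...
        Return Nothing ' placeholder for the real reader
    End Function

End Class

Only the validating wrapper is visible outside the assembly; everything that knows the database details stays Friend.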
Going back to your original post:
Is this a bad pattern to use? The pattern itself is not bad - as I have noted above it is the implementation of that pattern that is important.
Can other, outside entities, access these DLLs once they are deployed? If they are Public - yes.
What are some security concerns I should have? If people can bypass your interface functions (because your working code is also public), they can bypass your validations. Sure, you can put validations in your working code, but in my experience this leads to much duplication, more mistakes, and code that is much harder to maintain. This is a good question though - design with security in mind, because it is much harder to add security later. So: minimise what others can see and access.
Also, are there any issues with using this pattern of development? I have used this pattern of development myself (User interface, business logic, data adapters) and it works. It is more work initially and, if you are like me (code - run - code - work out what I am doing - design a bit - code - run etc), sometimes seems to be a lot of rework. But, from my experience, this short-term pain is definitely worth the long-term gain.
Related
Scenario
Around 20 ASP.NET (VB) applications share the same code framework and, when deployed, also share a common web.config. Throughout the various applications we use System.Net.Mail.SmtpClient/MailMessage to send e-mails, and now we would like to implement an e-mail opt-out feature for our users with a minimal amount of change to the existing code. That rules out the simplest approach: inheriting a class from SmtpClient, say OurSmtpClient, and overriding the Send() method to remove all users that have opted out of receiving e-mails, as that would mean changing every New SmtpClient() to New OurSmtpClient() throughout the apps.
Alternatives
We've previously used tagMapping to remap tags to our in-house, derived alternatives. Is there anything similar for classes, so that every SmtpClient automatically becomes an OurSmtpClient and thus uses the overridden Send() method?
We've also looked at extension methods, but the problem there is that we can't override existing methods, only add new ones.
The next alternative we considered was reflection, but we couldn't work out how to actually implement it.
Events... oh, if only there was a Sending event...
Code (cause everyone likes it)
Here is the inheritance approach, just to show what we are looking for:
Public Class OurSmtpClient
    Inherits SmtpClient

    ' Hides SmtpClient.Send by name and signature (Send is not
    ' Overridable) and strips unwanted recipients before handing
    ' off to the base implementation.
    Public Overloads Sub Send(ByVal message As MailMessage)
        ' Iterate backwards so RemoveAt does not shift upcoming indexes.
        For i As Integer = message.To.Count - 1 To 0 Step -1
            With message.To(i)
                If .Address.Contains("test") Then
                    message.To.RemoveAt(i)
                End If
            End With
        Next
        MyBase.Send(message)
    End Sub
End Class
Any suggestions? How can this be done without changing the code in the existing applications, only the shared code (which lives in App_Code in the apps) or the shared web.config?
I would change this at the data layer instead of in the mail client.
I'm assuming you store all the information about your users somewhere central, along with the fact that they would rather not receive any further e-mails. So in my eyes, the change would be to simply no longer return those users whenever you ask for the list of users to send e-mails to.
I don't know enough about the way your current applications work, but that does seem like the most convenient place to change it.
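To illustrate, a minimal sketch, assuming a central Users table with a hypothetical OptOut column (none of these names come from your post):

Imports System.Collections.Generic
Imports System.Data.SqlClient

Public Module RecipientQueries

    ' Returns only the addresses of users who have not opted out,
    ' so the mail-sending code itself never has to change.
    Public Function GetEmailRecipients(ByVal conn As SqlConnection) As List(Of String)
        Dim recipients As New List(Of String)
        Using cmd As New SqlCommand("SELECT Email FROM Users WHERE OptOut = 0", conn)
            Using dr As SqlDataReader = cmd.ExecuteReader()
                While dr.Read()
                    recipients.Add(dr.GetString(0))
                End While
            End Using
        End Using
        Return recipients
    End Function

End Module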
The fact that you are struggling to implement what should be a straightforward requirement is a big clue that you've built up too much technical debt. Your post conveys a strong reluctance to pay that debt down; resist it, and instead embrace Merciless Refactoring. Bite the bullet and introduce that specialised SMTP class.
I am creating my assignment project from scratch. I started off by creating a DAL that handles the connection to SQL Server and can execute stored procedures.
The project will later be a CMS or blog.
My DAC, or DAL, reads the connection string from web.config, accepts an array of SqlParameters along with a stored procedure name, and returns the output of the executed stored procedure in a DataTable.
I am calling my DAC as follows (using class functions and sometimes code-behind):
Dim dt As DataTable = Dac.ReturnDataTable("category_select_bypage", New SqlParameter("@pageid", pageid), New SqlParameter("@id", id))
This is working fine, and I plan to complete the whole application by calling this DAC to insert and retrieve data using stored procedures, or maybe simple queries later.
But when I showed this work to my teacher, he told me that I am using the wrong approach and that my code does not follow a correct DAL approach.
So I am now quite confused about DAL and DAC. Anyhow, here are my main questions:
1. What really is the difference between a DAL and a DAC?
2. What kind of application is my approach good for? Can I build a shopping cart with it?
3. What is wrong with my approach?
I think you can start with these links:
- http://msdn.microsoft.com/en-us/library/aa581778.aspx
- http://martinfowler.com/eaaCatalog/ (Data Source Architectural Patterns)
- http://www.simple-talk.com/dotnet/.net-framework/.net-application-architecture-the-data-access-layer/ (it talks about DataSets; I guess you can use them while the application is small or pretty simple: they're well known and deeply integrated into many VS tools).
Take a look at the SO question too: How to organize Data Access Layer (DAL) in ASP.NET
Adriano provided a very good link; certainly a must-read. But I do see one thing. When creating logical layers in an application, each layer is not aware of the inner workings of those beneath it. For example, the presentation layer has no idea how the data layer gets data; it's just magic. The reason for this: what if you decide not to use SqlClient and choose a different technology? By using "Technology Hiding" with layers you can make that change quite easily.
So, given what you have, I'm assuming the DataTable is in the presentation (or application) layer. If this is true, your DAC method call should not expose anything about what or how it is retrieving data. In your case this rule is violated by the SqlParameter parameters. Perhaps you could pass strings or ints instead. For example:
// on the Dac class:
public DataTable GetPage(int pageId, int id)
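In VB (the question's language), such a method could wrap the existing ReturnDataTable internally, so SqlParameter never escapes the DAC; GetPage is a hypothetical name:

' Inside the Dac class:
Public Shared Function GetPage(ByVal pageId As Integer, ByVal id As Integer) As DataTable
    ' The SqlClient details stay hidden from the calling layers.
    Return ReturnDataTable("category_select_bypage", _
                           New SqlParameter("@pageid", pageId), _
                           New SqlParameter("@id", id))
End Function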
Nevertheless, best of luck. I'm glad to see those willing to learn and those willing to teach.
I don't think returning DataSets could readily be described as 'wrong'.
Your DAL serves purely as an abstraction so (hypothetically) we can swap databases, or change from a database to an XML file or whatever. The rest of the application is unchanged.
As long as your business logic is happy with a contract that passes DataSets up from the DAL, I don't think it's an architectural problem. It might be suboptimal, since DataSets can be quite expensive in terms of resources, but they are very well supported in .NET (they are the foundation of simpler architectures like the Transaction Script and Table Module patterns).
What you might have instead are some custom objects, with the DAL transforming its SQL query results into those objects so they can then be processed by the business logic layer.
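For example, a rough sketch (the User class and column names are invented):

Imports System.Collections.Generic
Imports System.Data.SqlClient

Public Class User
    Public Id As Integer
    Public Name As String
End Class

Public Class UserDal
    ' Turns raw reader rows into plain objects, so the upper
    ' layers never see any ADO.NET types.
    Public Function MapUsers(ByVal dr As SqlDataReader) As List(Of User)
        Dim users As New List(Of User)
        While dr.Read()
            Dim u As New User
            u.Id = dr.GetInt32(dr.GetOrdinal("Id"))
            u.Name = dr.GetString(dr.GetOrdinal("Name"))
            users.Add(u)
        End While
        Return users
    End Function
End Class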
But then I can think of situations where you just might want DataSets…
All subjective, of course ;)
Function in Data Access SQLServer Layer:
public override DataTable abc()
{
    // connection with database
    return null; // placeholder
}
Function in Data Access Layer:
public abstract DataTable abc();
Function in Business Logic Layer:
public DataTable abc()
{
    // preparing the list
    return null; // placeholder
}
Function in Presentation Layer:
using (DataTable DT = (new BusinessLogicLayerClass()).abc())
{
}
The above approach passes the DataTable all the way up. Instead, it should return a list of objects to the presentation layer; the list preparation job should be done in the Business Logic Layer, as sketched below.
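A sketch of that change, in VB to match the question's code (CategoryItem and the column names are invented):

Imports System.Collections.Generic
Imports System.Data
Imports System.Data.SqlClient

Public Class CategoryItem
    Public Id As Integer
    Public Title As String
End Class

Public Class CategoryLogic
    ' Business Logic Layer: converts the DAL's DataTable into a typed
    ' list, so the presentation layer never touches a DataTable.
    Public Function GetCategories(ByVal pageId As Integer, ByVal id As Integer) As List(Of CategoryItem)
        Dim dt As DataTable = Dac.ReturnDataTable("category_select_bypage", _
            New SqlParameter("@pageid", pageId), New SqlParameter("@id", id))
        Dim items As New List(Of CategoryItem)
        For Each row As DataRow In dt.Rows
            Dim item As New CategoryItem
            item.Id = CInt(row("id"))
            item.Title = CStr(row("title"))
            items.Add(item)
        Next
        Return items
    End Function
End Class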
The problem with DataSet/DataTable is that it's a representation of your database. Using it everywhere would create high coupling to your database.
That means you have to update all dependent code every time you change your db.
It's better to create simple classes which represent the database. By doing so, you'll only have to change the internals of those objects (or the mapping layer) when the db is changed.
I recommend that you read about the Repository pattern. It's the most common way to abstract away the data source.
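A minimal Repository sketch (hypothetical names), where the interface is all the calling code ever sees:

Public Class User
    Public Id As Integer
    Public Name As String
End Class

Public Interface IUserRepository
    Function GetById(ByVal id As Integer) As User
    Sub Save(ByVal user As User)
End Interface

' The SQL-backed implementation can be swapped for an XML-backed or
' in-memory one without touching any calling code.
Public Class SqlUserRepository
    Implements IUserRepository

    Public Function GetById(ByVal id As Integer) As User Implements IUserRepository.GetById
        ' ... query the database and map the row to a User ...
        Return Nothing ' placeholder
    End Function

    Public Sub Save(ByVal user As User) Implements IUserRepository.Save
        ' ... INSERT or UPDATE ...
    End Sub
End Class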
(TLDR? Skip to the last couple of paragraphs for the questions...)
I have a classic ASP.Net site which was originally built around a data access layer consisting of static methods wrapping around ADO.Net. More recently, we've been introducing abstraction layers separated by interfaces which are glued together by StructureMap. However, even in this new, layered approach the repository layer still wraps around the old static ADO.Net classes (we weren't prepared to take on the task of implementing an ORM whilst simultaneously reorganising our application architecture).
This was working fine - until today. Whilst investigating some unexpected performance issues we've been having lately, we noticed a couple of things about our data access classes:
Our SqlConnection instances aren't always being closed.
The connection objects are being stored in static variables.
Both of the above are bad practice and likely to be contributing significantly to our performance problems. The reason why the connections were being stored in static variables was to share them across data access methods, which is a good idea in theory - it's just a terrible implementation.
Our solution is to convert the static data access classes/methods into objects - with our core DataManager class being instantiated once at the beginning of a request and disposed once at the end (via a new PageBase class in the web layer - much of our code is not yet separated into layers). This means we have one instance of the data access class which gets used for the entire life cycle of the request and therefore only one connection.
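Here is a rough sketch of what we have in mind (assuming our DataManager implements IDisposable; the member names are illustrative):

Public Class PageBase
    Inherits System.Web.UI.Page

    Protected DataAccess As DataManager

    Protected Overrides Sub OnInit(ByVal e As EventArgs)
        ' One DataManager (and therefore one connection) per request.
        DataAccess = New DataManager()
        MyBase.OnInit(e)
    End Sub

    Protected Overrides Sub OnUnload(ByVal e As EventArgs)
        ' Disposed once at the end of the page life cycle.
        If DataAccess IsNot Nothing Then
            DataAccess.Dispose()
        End If
        MyBase.OnUnload(e)
    End Sub
End Class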
The problem starts when we get to the areas of the site using the newer layered architecture. With the older code, we could just pass a reference to the DataManager instance directly to the data access layer from the code-behinds, but this doesn't work when the layers are separated by interfaces and only StructureMap has knowledge of the different parts.
So, with all of the background out of the way here's the questions:
Is it possible for StructureMap to create instances by passing previously instantiated objects as dependencies - within the context of a single ASP.Net Page lifecycle?
If it is possible, how is this achieved? I haven't seen anything obvious in my searching and haven't had to do this in the past.
If it is not possible, what might be an alternative solution to the problem I've described above?
NOTE: This may or may not be relevant: we're calling ObjectFactory.BuildUp( this ) in a special base page for those pages which have been converted to use the new Architecture - ASP.Net doesn't provide a good access point.
Okay, this wasn't so hard in the end. Here's the code:
var instantiatedObject = new PropertyType();
ObjectFactory.Configure(x =>
{
    x.For<IPropertyType>().Use(() => instantiatedObject);
    x.SetAllProperties(p => p.OfType<IPropertyType>());
});
We just put this in our PageBase class before the ObjectFactory.BuildUp( this ) line. It feels slightly dirty to be putting IoC configuration in the main code like this - but it's classic ASP.Net and there aren't many alternatives. I guess we could have provided some abstraction.
I am currently working on several Flex projects that have gone, in a relatively short amount of time, from prototypes to quite large applications.
The time has come for some refactoring to be done, so obviously the MVC principle came to mind.
For reasons out of my control, a framework (i.e. Robotlegs etc.) is not an option.
Here comes the question... what general guidelines should I take into consideration when designing the architecture?
Also...say for example that I have the following: View, Ctrl, Model.
From View:
var ctrl:Ctrl = new Ctrl();
ctrl.performControllerMethod();
In Controller
public function performControllerMethod():void {
    // do some sort of processing and store the result in the model
    Model.instance.result = method_scope_result;
}
and based on the stored values update the view.
As far as storing values in the model that will later be used dynamically in the application (via time filtering or other operations), everything is clear. But in cases where data just needs to go in (say, a tree that gets populated once at loading time), is it really necessary to use the view -> controller -> model -> view update scheme, or can I just make the controller implement IEventDispatcher and dispatch some custom events that hold the necessary data once the controller operation has finished?
Ex:
View:
var ctrl:Ctrl = new Ctrl();
ctrl.addEventListener(CustomEv.HAPPY_END, onHappyEnd);
ctrl.addEventListener(CustomEv.SAD_END, onSadEnd);
ctrl.performControllerMethod();
Controller
public function performControllerMethod():void {
    (processOk) ? dispatchEvent(new CustomEv(CustomEv.HAPPY_END, theData))
                : dispatchEvent(new CustomEv(CustomEv.SAD_END));
}
When one of the event handlers kicks into action, do a cleanup of the event listeners (via event.currentTarget).
As I realize that this might not be a question, but rather a discussion, I would love to get your opinions.
Thanks.
IMO, this whole question is framed in a way that misses the point of MVC, which is to avoid coupling between the model, view, and controller tiers. The View should know nothing of any other tier, because as soon as it starts having references to other parts of the architecture, you can't reuse it.
Static variables that are not constant are basically just asking for trouble (see http://misko.hevery.com/2009/07/31/how-to-think-about-oo/). Some people believe you can mitigate this by making sure you only access these globals through a Controller, but as soon as every View has a Controller, you have a situation where the static Model can be changed from literally anywhere.
If you want to use the principles of a framework without using a particular framework, check out http://www.developria.com/2010/04/combining-the-timeline-with-oo.html and http://www.developria.com/2010/05/pass-the-eventdispatcher-pleas.html. But keep in mind that established frameworks have solved most of the issues you will encounter; you're probably better off just using one.
Is there a way to mock/fake the session object in ASP.Net Web forms when creating unit tests?
I am currently storing user details in a session variable which is accessed by my business logic.
When testing my business logic in isolation, the session is not available. This seems to indicate a bad design (though I'm not sure). Should the business logic layer be accessing session variables in the first place?
If so, then how would I go about swapping the user details with a fake object for testing?
You can do it with essentially 4 lines of code. Although this doesn't speak to the previous comment of moving session out of your business logic layer, sometimes you might need to do this anyway if you're working with legacy code that is heavily coupled to the session (my scenario).
The namespaces:
using System.Web;
using System.IO;
using System.Web.Hosting;
using System.Web.SessionState;
The code:
HttpWorkerRequest _wr = new SimpleWorkerRequest(
    "/dummyWorkerRequest", @"c:\inetpub\wwwroot\dummy",
    "default.aspx", null, new StringWriter());
HttpContext.Current = new HttpContext(_wr);

var sessionContainer = new HttpSessionStateContainer(
    "id", new SessionStateItemCollection(),
    new HttpStaticObjectsCollection(), 10, true,
    HttpCookieMode.AutoDetect, SessionStateMode.InProc, false);

SessionStateUtility.AddHttpSessionStateToContext(
    HttpContext.Current, sessionContainer);
You can then refer to the session without getting a NullReferenceException error:
HttpContext.Current.Session.Add("mySessionKey", 1);
This is a combination of code I compiled from the articles below:
Unit testing with HttpContext
Faking a current HttpContext
SimpleWorkerRequest, HttpContext, and session state
Unit testing code that uses HttpContext.Current.Session
In ASP.NET, you can't create a Test Double of HttpSessionState because it is sealed. Yes, this is bad design on the part of the original designers of ASP.NET, but there's not a lot to do about it.
This is one of many reasons why TDD'ers and other SOLID practitioners have largely abandoned ASP.NET in favor of ASP.NET MVC and other, more testable frameworks. In ASP.NET MVC, the HTTP session is modelled by the abstract HttpSessionStateBase class.
You could take a similar approach and let your objects work on an abstract session, and then wrap the real HttpSessionState class when you are running in the ASP.NET environment. Depending on circumstances, you may even be able to reuse the types from System.Web.Abstractions, but if not, you can define your own.
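For example, a sketch reusing the System.Web.Abstractions types (the service class and session key are invented):

Imports System.Web

' The business code depends only on the abstract session type.
Public Class UserDetailsService
    Private ReadOnly _session As HttpSessionStateBase

    Public Sub New(ByVal session As HttpSessionStateBase)
        _session = session
    End Sub

    Public Function GetUserName() As String
        Return CStr(_session("UserName"))
    End Function
End Class

' In the ASP.NET page, wrap the real session:
'     Dim svc As New UserDetailsService(New HttpSessionStateWrapper(Session))
' In a unit test, pass a hand-rolled fake derived from HttpSessionStateBase.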
In any case, your business logic is your Domain Model and it should be modeled independently of any particular run-time technology, so I would say that it shouldn't be accessing the session object in the first place.
If you absolutely need to use Test Doubles for unit tests involving HttpSessionState, this is still possible with certain invasive dynamic mocks, such as TypeMock or Moles, although these carry plenty of disadvantages as well (see this comparison of dynamic mocks).
Your instincts are correct: you shouldn't be accessing pieces of the ASP.NET framework from your business logic. This includes Session.
To answer your first question: you can mock static classes using a product like Typemock Isolator, but you'll be better off if you refactor your code to wrap access to Session in an interface (e.g. IHttpSession). You can then mock IHttpSession.
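A rough sketch of that refactoring (IHttpSession and both implementations are illustrative, not an existing API):

Imports System.Collections.Generic
Imports System.Web

Public Interface IHttpSession
    Property Item(ByVal key As String) As Object
End Interface

' Production implementation backed by the real session.
Public Class AspNetSession
    Implements IHttpSession

    Public Property Item(ByVal key As String) As Object Implements IHttpSession.Item
        Get
            Return HttpContext.Current.Session(key)
        End Get
        Set(ByVal value As Object)
            HttpContext.Current.Session(key) = value
        End Set
    End Property
End Class

' Hand-rolled fake for unit tests: no HttpContext required.
Public Class FakeSession
    Implements IHttpSession

    Private ReadOnly _items As New Dictionary(Of String, Object)

    Public Property Item(ByVal key As String) As Object Implements IHttpSession.Item
        Get
            If _items.ContainsKey(key) Then Return _items(key)
            Return Nothing
        End Get
        Set(ByVal value As Object)
            _items(key) = value
        End Set
    End Property
End Class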
In ASP.NET Web Forms, you cannot escape the fact that the framework's entry into your code comes from .aspx pages. I agree that your business layer should not touch ASP.NET-specific components directly, but you do need somewhere to store your model, and session state in ASP.NET is a good place for it. Thus, one possible approach is to create an ISessionManager for your business layer to interact with, and then implement the concrete type using HttpSessionState. By the way, a good trick is to use HttpContext.Current.Session to implement the accessors/getters over HttpSessionState.
Your next challenge would be how to wire it all together.
One approach is to pass a lambda expression to your code that takes a string (or some other object) as input, and uses it to set either the Session object or a test container.
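For example (a sketch only; AccountLogic and the delegate shape are invented):

' The business code receives a delegate for storing a value, so it
' never references the Session type directly.
Public Class AccountLogic
    Private ReadOnly _store As Action(Of String, Object)

    Public Sub New(ByVal store As Action(Of String, Object))
        _store = store
    End Sub

    Public Sub RememberUser(ByVal name As String)
        _store("UserName", name)
    End Sub
End Class

' In production: New AccountLogic(Sub(key, value) HttpContext.Current.Session(key) = value)
' In a test:     New AccountLogic(Sub(key, value) testContainer(key) = value)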
However, as others have said, it's a good idea to move access to the Session object out of your BLL.