How secure are singletons in ASP.NET?

Are singletons in ASP.NET shared between users/sessions? And if they are, are there any safety considerations? Think serialization/deserialization vulnerabilities, thread safety, etc.
Is a singleton the way to go for settings loaded from the database that are the same for all users?

Hand crafting the anti-pattern called "Singleton" in C# code is a really bad idea in general, ASP.NET or not.
The singleton lifetime that is supported in the dependency injection framework is a good idea if it does what you need.
I would advise you to only use it for read-only data, like settings, though. You are no longer writing a desktop application of old. Your application might be recycled on the fly, or stretched across multiple nodes in a server farm. So suddenly your "singleton" is only a singleton if a single instance of your program is running. Building your application so that this becomes an artificial constraint (i.e. the framework would support scaling out, but your own code is built to fail if you actually do so) is not a smart way to go about this.
So to summarize: singleton lifetime in your dependency injection container? Might be okay, depending on your use case. An actual "Singleton" pattern in your code? Bad. Very bad. It tells me you don't actually do any unit testing and that there is no plan to take this application beyond a few thousand hobbyist users who don't care if your app is down every time you deploy.
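As a minimal sketch of that singleton-lifetime approach (assuming ASP.NET Core's built-in container; the settings provider and loader below are hypothetical names, not anything from the original question):

```csharp
using System.Collections.Generic;
using Microsoft.Extensions.DependencyInjection;

public interface ISettingsProvider
{
    string Get(string key);
}

// Immutable after construction, so one shared instance is safe across all requests.
public class DatabaseSettingsProvider : ISettingsProvider
{
    private readonly IReadOnlyDictionary<string, string> _settings;

    public DatabaseSettingsProvider(IReadOnlyDictionary<string, string> settings)
        => _settings = settings; // loaded once at startup, never mutated afterwards

    public string Get(string key) => _settings[key];
}

public static class SettingsRegistration
{
    public static void Register(IServiceCollection services)
    {
        // One instance for the whole application lifetime.
        services.AddSingleton<ISettingsProvider>(
            new DatabaseSettingsProvider(LoadSettingsFromDatabase()));
    }

    // Placeholder for your own data access; stubbed here for illustration.
    private static IReadOnlyDictionary<string, string> LoadSettingsFromDatabase()
        => new Dictionary<string, string> { ["SiteName"] = "Example" };
}
```

Because the provider holds read-only data, it sidesteps the thread-safety and recycling concerns above; nothing breaks if a second node spins up with its own copy.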

Related

Sharing stored procedures across multiple apps

Team A has an enterprise app that uses ADO.NET for data access and executes stored procedures. The data access is encapsulated in its own project (let's call it DAL.dll).
Team B is creating another, unrelated app that reuses the stored procedures in the enterprise app. This app currently uses the MS application block for data access. The issue we run into is that whenever Team A makes any change to the input/output params of the stored procedures, there is a runtime error in Team B's app, and that app needs to be updated to accommodate the added (or removed) params. Most of these changes go unnoticed until a user complains. At the very least, we would like the app to throw a compilation error so that the build process warns us of the changes.
One way to do this is to have Team B's project add a reference to the DAL.dll
I'd like to know if there are any other cleaner ways of solving the issue. We are ready to replace Team B's MS Data application block to use a different technology (Entity Framework?) if necessary.
Among the other answers, I'd strongly suggest getting those stored procedures into source control, in a Database Project. You then may be able to use the features of your source control system to do several things:
Lock some of the code so that it cannot be changed
Give you notifications if the code is changed
Warn you if the stored procedures change in a way that would prevent them from being called
Branch the stored procedures so that each team can have their own version of changed code, while keeping the unchanged stored procedures common. You of course will need to separate the different versions in the database.
I agree with the other posters on this thread that you should not share stored procedures across different .NET DLLs; that is just a recipe for disaster. I would also shy away from ORMs like Entity Framework if you are doing anything at all complicated with your database schema, because ORMs excel at translating a simple object model from your .NET application classes into SQL tables and SPs, but traditionally do poorly at optimizing them for performance on the database side. There will be people who claim otherwise, and they may have a valid point if, like them, you are an expert in wrangling an ORM to do what you want, but chances are you are not, and it will cause you headaches in the long run.
A shared data access layer might work, but conceptually you are then just changing the implementation of the dependency from some code that a DBA wrote to some code that a .NET programmer wrote. Yes, you can use integration tests to achieve better verifiability, but the same case could be made for SQL with tools like Red Gate's SQL Test. I would shy away from this approach if the two applications are already experiencing some sort of pain from sharing SPs. That is an indication that the dependency should just be done away with.
If it were up to me, I'd just make a new schema for Team B's app. You can read more about schemas in SQL Server here: MSDN Schema description for 2008 R2. You can think of them as namespaces for SQL Server, but with some additional bells and whistles like permissions and access control. Separating your different applications into separate schemas on the same shared database will probably make for the most flexible implementation in the long run.
unrelated app that's reusing the stored procedures in the enterprise app
If these two applications are really unrelated, why are they sharing stored procedures, or even the same database? I know this is a long read, but I recommend you read this: A Better Path to Enterprise Architectures
The partitioning concept in there relates to the bounded context in Domain-Driven Design:
Multiple models are in play on any large project. Yet when code based on distinct models is combined, software becomes buggy, unreliable, and difficult to understand. Communication among team members becomes confusing. It is often unclear in what context a model should not be applied.
Therefore: Explicitly define the context within which a model applies. Explicitly set boundaries in terms of team organization, usage within specific parts of the application, and physical manifestations such as code bases and database schemas. Keep the model strictly consistent within these bounds, but don’t be distracted or confused by issues outside.
You can expect to end up with problems when you don't explicitly deal with this. You're lucky you're seeing failures early, as this can turn into problems that are much harder to find in the long run.
Analyze the problem again with the above in mind. Consider if you're missing some explicit context where this common functionality should live.
My question is: which team owns the stored procedures and the shared database? As a matter of good architecture/design, you usually should not have two different apps sharing the same database/procedures.
A better way to share data/functionality between two different applications is through a service or API, so that the team who owns the functionality is responsible for maintaining it.
Also, good communication between both teams is highly recommended.
Depending on the owner of the DAL project, you could host web services and share the API. That way, you separate the Data Access Layer from the business logic, which allows anyone to use the same DAL without having to publish it to each different location.
From my point of view, it looks like both Team A and Team B should share the same core model and look at Multitier architecture as a possible solution.
It sounds like it would make sense to create a shared DAL that both applications use.
I would add unit tests (or really integration tests) to make sure the DAL is compatible with the apps after changes. That way your tests would fail if incompatible changes have been made; see the sketch below.
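As a hedged sketch of what such a test might assert (the helper, connection string, and procedure names are illustrative, not from the original post), you could derive the procedure's actual parameter list from the server and compare it against what your code expects:

```csharp
using System.Data;
using System.Data.SqlClient;
using System.Linq;

// A build-time contract check: fails when a stored procedure's
// parameters drift from what the calling app was compiled against.
public static class StoredProcContractCheck
{
    public static bool SignatureMatches(string connectionString,
        string procName, params string[] expectedParams)
    {
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(procName, conn)
            { CommandType = CommandType.StoredProcedure })
        {
            conn.Open();
            // Asks SQL Server for the procedure's actual parameter list.
            SqlCommandBuilder.DeriveParameters(cmd);

            var actual = cmd.Parameters.Cast<SqlParameter>()
                .Select(p => p.ParameterName)
                .Where(n => n != "@RETURN_VALUE");
            return actual.SequenceEqual(expectedParams);
        }
    }
}
```

Run one such assertion per shared procedure in Team B's integration test suite, and a parameter change by Team A breaks the build instead of surfacing as a runtime error for users.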
"I'd like to know if there are any other cleaner ways of solving the issue."
The cleanest way is for Team B to sit down with Team A and encapsulate the relevant business logic into a shared API. It doesn't matter so much how you implement that API; what does matter is that the API's interface is documented and versioned so everyone knows what to expect.
One reasonable mechanism for this in a .NET environment is to use Microsoft's WebAPI.
In short, the question of "how do we share a stored procedure?" is most likely looking at the wrong level of abstraction.
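As a rough sketch of what such a versioned API surface might look like in ASP.NET Web API 2 (the controller, route, types, and repository below are purely illustrative assumptions, not anything from the thread):

```csharp
using System.Web.Http;

public class OrderSummary
{
    public int OrderId { get; set; }
    public decimal Total { get; set; }
}

// A versioned, documented endpoint wrapping what used to be a shared stored procedure.
[RoutePrefix("api/v1/orders")]
public class OrdersController : ApiController
{
    [HttpGet, Route("{id:int}/summary")]
    public IHttpActionResult GetSummary(int id)
    {
        // Team A's DAL (and its stored procedures) stays behind this boundary;
        // Team B depends only on the documented, versioned contract.
        OrderSummary summary = OrderRepository.GetSummary(id);
        return summary == null ? (IHttpActionResult)NotFound() : Ok(summary);
    }
}

// Stubbed for illustration; in reality this would call Team A's DAL.
public static class OrderRepository
{
    public static OrderSummary GetSummary(int id)
        => new OrderSummary { OrderId = id, Total = 0m };
}
```

The "v1" in the route is the point: Team A can ship a "v2" with different parameters while "v1" keeps serving Team B unchanged.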

Make a .Net DLL Thread-safe for Web App Consumption?

I've written a class in VB.Net that is consumed in an ASP.NET Web Application running on IIS 7. I use .NET Framework 4.0. The class performs a REST request to an online service and retrieves an XML response containing strongly typed data.
This class also performs caching using a SQL Server database. The class is compiled to a .DLL and referenced by the Web Application. It works excellently, and now I need to know how to make the class thread-safe.
I have no experience with making code 'thread-safe', and I don't know where to begin in determining whether or not it is. I'm assuming, because I didn't pay attention to this during development, that it is NOT thread-safe, and that since the web application will be used by many users at the same time, this must be paid attention to.
Can anyone point me to how to test for thread-safety? Are there any resources online that will give me some ideas? Are there any rules of thumb that will tip me off as to where my main concerns are?
The easiest possible thing you can look out for is the use of "static" (C#) or "Shared" (VB.NET) variables. If these variables can be modified throughout the lifetime of the application you will likely run into threading issues which can really often result in "random looking" problems.
I would also be concerned about how you are doing the caching in your database as multiple .NET threads hitting SQL (for the cache) could cause issues as well depending on how its designed.
Bottom line: you are likely going to need to learn more about threading if you want to be sure this is not going to have issues. Probably the best book I have ever read covering simple to very complex C# topics is C# 4.0 in a Nutshell. I would take a look at that book, especially the threading chapters. (Seriously, read the whole thing though.) If you get that read through and have a good understanding of the concepts mentioned, you should be fine.
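As a minimal illustration of that first point, here is a sketch (names made up) of the classic static-field hazard and two standard ways to guard it:

```csharp
using System.Threading;

public static class Counter
{
    // UNSAFE: a static field mutated by concurrent requests without synchronization.
    public static int UnsafeCount;

    private static readonly object _gate = new object();
    private static int _safeCount;

    public static void Increment()
    {
        UnsafeCount++;      // read-modify-write: lost updates under concurrent load

        lock (_gate)        // serialize access to the shared state
        {
            _safeCount++;
        }

        // For a simple counter, Interlocked is lighter-weight than a lock:
        Interlocked.Increment(ref _safeCount);
    }
}
```

The unsafe increment is exactly the kind of bug that only shows up as "random looking" problems under load, which is why a load test, not a unit test, usually finds it.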

IOC Containers and Web applications

I have started to work on a .NET web application that uses an IoC container (Windsor) to create business managers and keep them in memory until IIS recycles them. Basically, these business managers have their own state and data, the content of which is modified from background threads fired at Application_Start. This is not the way I was expecting a web application to work (they are supposed to be stateless, with one thread per request), and I'm not quite sure this implementation is sustainable/scalable. Has anybody tried things in this manner? If so, what are the consequences/pros that you see?
We use statics in the application, only for the core features. Static classes are shared across all requests, so their use should be kept fairly limited. In the development world, we're seeing statics pop up more and more: ASP.NET MVC 3 utilizes them for various areas of the application, as do other popular open source libraries.
As long as there aren't a lot of them, you should be OK... but you can always verify with a memory profiler to see how big they are getting, and whether they are sucking up too much memory.
The other alternative could be to place them in the cache, or to rebuild them and store them in each request. To store something globally within a single request, use the HttpContext.Current.Items collection.
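For example, a small helper along these lines (the key name and ExpensiveObject type are hypothetical) keeps an object scoped to the current request:

```csharp
using System.Web;

// HttpContext.Current.Items lives for exactly one request, so each request
// gets its own copy -- no locking needed, unlike statics or Application state.
public static class RequestCache
{
    private const string Key = "ExpensiveObject"; // illustrative key name

    public static ExpensiveObject Get()
    {
        var items = HttpContext.Current.Items;
        if (items[Key] == null)
            items[Key] = new ExpensiveObject(); // built at most once per request
        return (ExpensiveObject)items[Key];
    }
}

public class ExpensiveObject { /* placeholder for the real type */ }
```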

Workflow design advice for ASP.Net web application?

My team has been tasked with designing a web application that is workflow driven. I need some advice regarding the design.
The workflows need to be dynamic. Meaning, users can define the workflows through some interface and apply those workflows to a given scenario (The definitions will live in a SQL 2008 Database). The scenarios are defined by the business and will never change. So there may be only 2 types of scenarios a workflow can be defined for. The workflows are not necessarily linear. Some sort of state will drive the workflow. States will also be dynamic, but only exist in a workflow.
I have been looking at examples of workflows and state machines and my head is spinning. I am not sure whether I want to leverage Workflow Foundation or something we develop ourselves. I have seen this and think it may work, but I am not sure the stateful implementation will work for us.
You can do this using WF4. I have never used Objectflow, so I can't really comment on that, but it appears to be an in-memory solution, and with an ASP.NET web site hosted in IIS that means you will occasionally lose state as IIS recycles an AppDomain. Usually not a big problem, as it doesn't happen often, but a WF4 InstanceStore will take care of that. It will also allow you to run on a web farm without sticky sessions and have the workflow migrate from machine to machine.
Another nice thing is the workflow designer. It's a WPF control you can rehost in your own app. Not in an ASP.NET or Silverlight app, but you can provide a smart client so that users can update the workflow definition using the same designer you use in VS2010.
The biggest problem with WF4 is the asynchronous execution nature. You will need to use a SynchronizationContext to execute the activities and wait for the workflow to go idle in the new state before you return the resulting HTML to the browser.
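For illustration, a minimal sketch of the persistence wiring mentioned above (the connection string and workflow activity are placeholders, and this assumes the SQL instance store schema has already been installed in the database):

```csharp
using System;
using System.Activities;
using System.Activities.DurableInstancing;

// Wires a SqlWorkflowInstanceStore into a WorkflowApplication so workflow
// state survives IIS/AppDomain recycles and can move between farm nodes.
public static class WorkflowHost
{
    public static Guid Start(Activity myWorkflow, string connectionString)
    {
        var app = new WorkflowApplication(myWorkflow)
        {
            InstanceStore = new SqlWorkflowInstanceStore(connectionString),
            // Persist and unload whenever the workflow goes idle in a state.
            PersistableIdle = e => PersistableIdleAction.Unload
        };
        app.Run();
        return app.Id; // keep this id to resume the instance on a later request
    }
}
```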

Concurrency ASP.NET best-practices worst-practices

In which cases do you need to watch out for concurrency problems (and use lock, for instance) in ASP.NET?
Are there 'best practices' around on this topic?
Documentation?
Examples?
'worst practices...' or things you've seen that can cause a disaster...?
I'm curious about for instance singletons (even though they are considered bad practice - don't start a discussion on this), static functions (do you need to watch out here?), ...?
Since ASP.NET is a web framework and is mainly stateless, there are very few concurrency concerns that need to be addressed.
The only thing that I have ever had to deal with is managing application cache but this is easily done with a cache-management type that wraps the .NET caching mechanisms.
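As a rough sketch of such a wrapper (the CacheManager name and GetOrAdd shape are illustrative, built on System.Runtime.Caching.MemoryCache):

```csharp
using System;
using System.Runtime.Caching;

// A small cache-management wrapper. The double-checked lock keeps two
// simultaneous requests from both running the expensive factory for one key.
public static class CacheManager
{
    private static readonly object _gate = new object();

    public static T GetOrAdd<T>(string key, Func<T> factory, TimeSpan ttl)
        where T : class
    {
        var cached = MemoryCache.Default.Get(key) as T;
        if (cached != null)
            return cached;

        lock (_gate)
        {
            cached = MemoryCache.Default.Get(key) as T; // re-check inside the lock
            if (cached == null)
            {
                cached = factory();
                MemoryCache.Default.Set(key, cached,
                    DateTimeOffset.UtcNow.Add(ttl));
            }
            return cached;
        }
    }
}
```

A single global lock is the simplest correct choice; per-key locking is an optimization you only need if the factories are slow and the keys are hot.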
One huge problem that caused us a lot of grief was using Modules vs. Classes in our main Web Service. This was before we really knew what we were doing and has since been fixed.
The big problem with using modules is that, by default, any module-level variables are shared across every request handled by the ASP.NET worker process. We pass in multiple datasets, manipulate them, then return them to the client. Because we were using modules, the variables holding these datasets were getting corrupted by multiple calls occurring at the same time.
This was not caught in testing and was difficult to reproduce until we figured out how to properly load test our web services. It took something like 10-20 requests per second before we could reproduce it accurately.
In the end, we just changed all the modules to classes and used instances of those classes instead of calls to the modules. This cleared up the concurrency issue, as each instantiated class had its own copy of the dataset in memory.
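For readers more familiar with C#, the same hazard looks roughly like this (types and names are illustrative): a VB.NET module-level variable behaves like a C# static field, while an instance field gives each request its own copy.

```csharp
using System.Data;

public class ReportBuilder
{
    // BAD: one copy shared by all concurrent requests in the worker process;
    // two overlapping calls can overwrite each other's data.
    private static DataSet _sharedData;

    // GOOD: one copy per instance; each request news up its own ReportBuilder.
    private DataSet _instanceData;

    public void Load(DataSet input)
    {
        _sharedData = input;   // race condition under concurrent calls
        _instanceData = input; // safe: isolated to this instance
    }
}
```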
