WCF intra-machine communication - is security necessary?

I'm working on a WCF service that, at least initially, will only be consumed by an ASP.Net application on the same machine. This may change eventually, but for the time being, all communication with the WCF service will be intra-machine.
Is any sort of security necessary, and indeed, is any sort of authentication necessary? It seems to me that in order to compromise the security, one would have to compromise the security of the machine - nothing from the outside is going to be able to connect in, particularly if I'm using named pipes...
I'm confident in the security of the box - is that good enough?
TIA.

Seems fine to me. However, I think it's a good idea to get the base stuff (like secure channels and authentication) figured out early in development. Retrofitting it later can be a pain.
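As a concrete illustration (the contract and service names below are hypothetical), a named-pipe endpoint can be set up programmatically; NetNamedPipeBinding only accepts local-machine callers and defaults to transport-level security, so you get a reasonable baseline with almost no extra effort:

using System;
using System.ServiceModel;

// Hypothetical contract and service, purely for illustration.
[ServiceContract]
public interface IOrderLookup
{
    [OperationContract]
    string GetStatus(int orderId);
}

public class OrderLookup : IOrderLookup
{
    public string GetStatus(int orderId) { return "Pending"; }
}

class PipeHost
{
    static void Main()
    {
        // Named pipes are reachable only from the local machine, and
        // NetNamedPipeBinding uses transport security by default.
        var binding = new NetNamedPipeBinding(NetNamedPipeSecurityMode.Transport);

        using (var host = new ServiceHost(typeof(OrderLookup),
            new Uri("net.pipe://localhost/OrderLookup")))
        {
            host.AddServiceEndpoint(typeof(IOrderLookup), binding, "");
            host.Open();
            Console.WriteLine("Listening on net.pipe://localhost/OrderLookup");
            Console.ReadLine();
        }
    }
}

Leaving the default security in place costs very little, and dropping to NetNamedPipeSecurityMode.None later is a one-line change if profiling ever shows it matters.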


ASP.NET Web App, WCF Services and Database hosting

I have a general question about database hosting in relation to WCF and ASP.NET. We are currently developing a new online web application in ASP.NET, which reads and writes data in our MSSQL database through a WCF service (a three-tier infrastructure).
Later in development we will be launching the website and hosting it with an external provider. We are unsure whether to keep the website's database internally on our own servers, or host it externally with the same provider (they offer database hosting options as well).
If we hosted it externally, we would obviously back it up internally using batch scripts etc.
One major concern is the security of the database, as we are only a small business with not much experience in web security architecture. Due to this, we are leaning towards an external provider for both the website and database, who would obviously have experience and the equipment to manage such things.
Could you please offer some opinions on the matter?
Thanks!
There's always a risk associated with handing sensitive data off to an outside party, and trusting them to be as secure as you need.
There's no mystery here, someone at the provider will have enough access to look at your data if they really wanted to. So it all boils down to how sensitive is your data? Is there bank account info or social security numbers? For these reasons, our company cannot hand off such data to an outside party.
I'm a little confused though about one thing: if you could potentially host the database server when you go to production, why couldn't you host the website as well? Is it a matter of being able to handle high traffic?
Update in response to your comment:
It sounds like your data is somewhat sensitive, not highly sensitive. In which case, if we're not being totally bonkers pedantic here, you can reasonably assume a reputable hosting company will take the proper measures to secure your data, and from the sounds of it, they're probably more capable in this respect than your own company (not because you're careless or wet behind the ears, just because they have considerable experience in this area where your company does not).
Now for the performance and hardware setup part of your comment... if you don't have the hardware or network infrastructure to meet your requirements, then you either a) upgrade your own infrastructure and hire the appropriate personnel to set it up and maintain it, or b) pay someone else to do it. Sounds like a no-brainer for you guys to go with option b.

Restrict number of requests for particular mapping in Spring context

I am unsure whether Spring has any mechanism to prevent users or malicious bots from spamming, for example, the registration request hundreds of times on my web app.
Does Spring offer this kind of protection under the hood, and if it does not, which direction should I look in? Some magical property in Spring Security?
Also, does AWS provide any protection against this kind of brute-force attack when my application is deployed there?
The short answer to both your questions is no. There are no built-in mechanisms in either Spring or Amazon Web Services to prevent this.
You will likely have to provide your own implementation to prevent excessive access to your API.
There are a couple of useful resources that can help:
Jeff Atwood's piece on throttling failed login attempts should give you a good starting point on how to implement a good strategy for this (a minimal sketch of the idea appears at the end of this answer).
Spring Security's Authorization architecture is really well designed and you can plug in your own implementations fairly easily. It is well documented too.
There is the official Amazon Web Services documentation for using Security Groups, which again should help you ensure you're running on AWS with the least network access permissions necessary.
Finally, you could look at a tool like Fail2Ban for monitoring log files and blocking malicious requests.
So, in short, there isn't really a simple ready-to-roll solution, but using the above resources should get you on the road to something that follows best practices for preventing malicious attempts to access your system.
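The throttling idea itself is framework-agnostic: keep a per-key counter over a rolling window and reject requests once a limit is exceeded. Below is a rough, hypothetical sketch of that core logic (written in C#, with made-up names; this is not a Spring or AWS API). In a Spring app the equivalent code would typically live in a servlet filter or handler interceptor in front of the registration endpoint.

using System;
using System.Collections.Concurrent;

// Illustrative in-memory throttle: allow at most 'limit' attempts per key
// (e.g. IP address or username) within a rolling time window.
public class SimpleThrottle
{
    private readonly int limit;
    private readonly TimeSpan window;
    private readonly ConcurrentDictionary<string, (DateTime WindowStart, int Count)> counters
        = new ConcurrentDictionary<string, (DateTime, int)>();

    public SimpleThrottle(int limit, TimeSpan window)
    {
        this.limit = limit;
        this.window = window;
    }

    public bool IsAllowed(string key)
    {
        var now = DateTime.UtcNow;
        var entry = counters.AddOrUpdate(
            key,
            _ => (now, 1),
            (_, current) => now - current.WindowStart > window
                ? (now, 1)                                   // window expired: start counting again
                : (current.WindowStart, current.Count + 1)); // same window: record another attempt
        return entry.Count <= limit;
    }
}

// Usage sketch: reject the registration request once the caller exceeds the limit.
// var throttle = new SimpleThrottle(5, TimeSpan.FromMinutes(1));
// if (!throttle.IsAllowed(clientIp)) { /* return HTTP 429 */ }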

Is Silverlight more friendly to load balancing than ASP.NET?

I was discussing load balancing with a colleague at lunch. I admit that I know very little about this topic. We were discussing the various ways of maintaining session state in an ASP.NET application -- none of which suited the high-performance load balancing that he was looking for.
What about Silverlight? says I. As far as I know it is stateless: you've got the app running in the browser, and you've got services on the server that feed/process data.
Does Silverlight totally negate the need for Session state management? Is it more friendly to load-balancing? Is it something in between?
I would say that Silverlight is likely to be a little more load-balancer friendly than ASP.NET. You have much more sophisticated mechanisms for maintaining state (such as isolated local storage), and pretty much, you only need to talk to the server (a) when you initially download the application, and (b) when you make a web service call to retrieve or update data. It's analogous in this sense to an Ajax application written entirely in C#.
To put it another way, so long as either (a) your server-side persistence layer knows who your client is, or (b) you pass in all relevant data on each WCF call, it doesn't matter which web server instance the call goes to. You don't have to muck about with firewall-level persistence to make sure your HTTP call goes back to the right web server.
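As a concrete (hypothetical) sketch, a per-call WCF service whose operations carry everything they need has no affinity to any particular web server behind the load balancer:

using System.Runtime.Serialization;
using System.ServiceModel;

// Hypothetical contract: each call carries everything the server needs,
// so any machine behind the load balancer can answer it.
[ServiceContract]
public interface ICustomerService
{
    [OperationContract]
    CustomerDto GetCustomer(string authToken, int customerId);
}

// PerCall instancing: no in-memory state survives between requests,
// which is what makes the service safe to load-balance.
[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall)]
public class CustomerService : ICustomerService
{
    public CustomerDto GetCustomer(string authToken, int customerId)
    {
        // Validate the token and load the customer from the shared database.
        // (Persistence details omitted in this sketch.)
        return new CustomerDto { Id = customerId };
    }
}

[DataContract]
public class CustomerDto
{
    [DataMember] public int Id { get; set; }
    [DataMember] public string Name { get; set; }
}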
I'd say it depends on your application. If it's a banking application, then yes, I want something timing out after 5 minutes and asking for my password again. If it's Facebook, then not so much.
Silverlight depends on XMLHttpRequest like any other Ajax implementation and is therefore capable of maintaining a session, forms authentication, roles, profiles, etc.
The benefit you are getting is eliminating virtually all of the page traffic. JSON requests are negligible compared to serving full pages. Even the .xap can be cached on the client.
I would say you are getting the best of both worlds in regards to your question.

How to upgrade my asp.net app to support more users?

When an asp.net website has about 1,000 active users, it works good.
What should I do if the website has about 100,000 active users?
How to upgrade my asp.net app to support a larger number of users?
Should I change the web app's architecture, or buy more web servers?
I just wonder, in the real world, how do other people build an ASP.NET website that supports millions of users? What's the app architecture of a website that supports that?
Any suggestion will be welcome.
First, make sure you're with a first-rate hosting provider.
Second, download a performance profiler (I always suggest Red Gate Performance Profiler) and profile your app. Find the bottlenecks and eliminate them. Repeat until you get your desired performance metric.
If your application is querying a database or other web services, try to use asynchronous methods. Using async methods frees up the web server to handle a lot more client requests while it is waiting for a response from the database server or web service.
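A rough sketch of what that can look like in an ASP.NET MVC controller (the controller, query, and connection string below are placeholders):

using System.Data.SqlClient;
using System.Threading.Tasks;
using System.Web.Mvc;

public class OrdersController : Controller
{
    // While the query is in flight, the request thread is returned to the
    // ASP.NET thread pool, so the server can service other requests.
    public async Task<ActionResult> Count()
    {
        // Placeholder connection string and query, for illustration only.
        using (var conn = new SqlConnection("Server=.;Database=Shop;Integrated Security=true"))
        using (var cmd = new SqlCommand("SELECT COUNT(*) FROM Orders", conn))
        {
            await conn.OpenAsync();
            var count = (int)await cmd.ExecuteScalarAsync();
            return Content(count.ToString());
        }
    }
}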
You say it "works good" at the moment. It's impossible to know what the point at which this may change will be wihtout knowing a whole lot more about the nature of your traffic, current set up, what else runs on the server, etc ,etc. It could be that it continues to "work good" with a million users as it is.
When you need to make changes (and slowly reducing performance will alert you), that's whne you need to worry. And then, as Justin says, knowing the potential bottelnecks will give you pointers as to what solution you need.
Buying more servers is one strategy. So is changing the architecture. The easiest and most cost-effective is throwing more servers at it. It does depend a little on the current application architecture, but nothing that can't be easily overcome.
What I suggest is to load-test your application. See what happens as you increase the number of active users. Who knows, it might handle 100k active users; maybe it won't, but at least you will know the tipping point.
In regards to what you should do, that really depends on your business needs. If your company has the $$ and this is a core product, then it makes sense to architect a robust application. If it's not, maybe throwing hardware at the problem is good enough.
It would also help if you could define an active user. Is it someone who is visiting your site and has a session? Is it 100k concurrent requests to the server...?
In terms of hardware scaling: Scaling Up or Scaling out
Software scaling - Profile your app

When should a web service not be used?

Using a web service is often an excellent architectural approach. And, with the advent of WCF in .Net, it's getting even better.
But, in my experience, some people seem to think that web services should always be used in the data access layer for calls to the database. I don't think that web services are the universal solution.
I am thinking of smaller intranet applications with a few dozen users. The web app and its web service are deployed to one web server, not a web farm. There isn't going to be another web app in the future that can use this particular web service. It seems to me that the cost of calling the web service unnecessarily increases the burden on the web server. There is a performance hit to inter-process calls. Maintaining and debugging the code for the web app and the web service is more complicated. So is deployment. I just don't see the advantages of using a web service here.
One could test this by creating two versions of the web app, with and without the web service, and do stress testing, but I haven't done it.
Do you have an opinion on using web services for small-scale web apps? Any other occasions when web services are not a good architectural choice?
Web Services are an absolutely horrible choice for data access. It's a ton of overhead and complexity for almost zero benefit.
If your app is going to run on one machine, why deny it the ability to do in-process data access calls? I'm not talking about directly accessing the database from your UI code, I'm talking about abstracting your repositories away but still including their assemblies in your running web site.
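In code, that can be as simple as an interface the site consumes directly, with no HTTP hop involved (the types below are made up for illustration):

// The web app depends on an interface, not on a web service proxy.
public interface IOrderRepository
{
    Order GetById(int id);
    void Save(Order order);
}

// The implementation lives in its own assembly that the web site references,
// so every call to it is a plain in-process method call.
public class SqlOrderRepository : IOrderRepository
{
    public Order GetById(int id)
    {
        // ADO.NET / ORM code would go here; stubbed out for this sketch.
        return new Order { Id = id };
    }

    public void Save(Order order)
    {
        // ...
    }
}

public class Order
{
    public int Id { get; set; }
}

// In a page or controller:
// IOrderRepository repo = new SqlOrderRepository(); // or resolved from an IoC container
// Order order = repo.GetById(42);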
There are cases where I'd recommend web services (and I'm assuming you mean SOAP) but that's mostly for interoperability.
The granularity of the services is also in question here. A service in the SOA sense will encapsulate an operation or a business process. Data access methods are only part of that process.
In other words:
someService.SaveOrder(order); // <-- bad
// some other code for shipping, charging, emailing, etc.

someService.FulfillOrder(order); // <-- better
// the service encapsulates the entire process
Web services for the sake of web services is irresponsible programming.
Nick Harrison, a brilliant developer in Charlotte, suggested these scenarios where using a web service makes sense:
On a Web farm, where there are multiple web servers hosting website(s), all pointing to web service(s) running on another web server. This allows for distributing the load over multiple servers.
Client/server, where Windows forms apps can call a web service.
Cross platform
Passing through a firewall
Just because the tool generates a bunch of stubs doesn't mean it's a good use. WS-* excels in scenarios where you expose services to external parties. This means that each operation should be on the granularity of business process as opposed to data access.
The multitude of standards can be used to describe different facets of your contract in great detail, and a (hypothetical) fully compliant WS stack can take away a lot of pain from third-party developers and even allow the fabled point-and-click integration à la Yahoo Pipes. With good governance controls you can evolve your public interface and manage backward compatibility as needed.
All this is next to impossible to be generated automatically. The C# stub generator knows only the physical interface of your class, but doesn't have any idea about the semantics involved. See this paper for more detailed discussion.
If you are building a web site, then build a web site. If you want asynchronous messaging inside your application, use MSMQ. If you want to expose data to internal clients, use POX. If you need efficient binary message format, check Google's Protocol Buffers or if you need RPC check Hessian for C# or DCOM.
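For example, queuing a message with MSMQ from .NET takes only a few lines (the queue path below is a placeholder, and the project needs a reference to System.Messaging):

using System.Messaging;

class QueueDemo
{
    static void Main()
    {
        // Placeholder private queue path; create it once if it doesn't exist.
        const string queuePath = @".\Private$\orders";

        if (!MessageQueue.Exists(queuePath))
            MessageQueue.Create(queuePath);

        using (var queue = new MessageQueue(queuePath))
        {
            // The default XmlMessageFormatter serializes the message body.
            queue.Send("order:42 placed", "NewOrder");
        }
    }
}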
Web services are a coarse-grained integration solution. They are rigid, they are slower than the alternatives, and they take too much effort to do well (and when not done well they are next to pointless).
To summarize: "When should a web service not be used?" - anytime you can get away without it
If you are just coding a tiny (less than 50 users) web application for your intranet, a web service seems overkill. Especially if its primary function (providing a single point of access to many services) won't be used.
I agree that the use of a web service in a small scale web app adds a layer of complexity that does not seem justified. Most of my solutions, internet and intranet, 10-50 users, do not employ web services. I am glad others feel the same...I thought I was the only one.
For a small-scale web app, I think that using web services is often quite a good idea; you can use them to easily decouple the web server from the data tier. With the straightforward development requirements and great tooling, I don't see the problem.
However don't use web services in the following scenarios:
When you must use HTTP as the transport and XML serialization of your data, and you need lots of different bits of data, synchronously and often. Whether REST, SOAP or WS-*, you're going to suffer performance issues: the more calls you make, the slower your system will be. If you want medium-sized chunks of data less frequently and asynchronously, and you can use straight TCP/IP (e.g. WCF's netTcpBinding), you'd be better off.
When you need to query and join data from your web service with other data sources. In that case, consider a data warehouse instead, which can be populated with properly consolidated and rationalized data from across the enterprise.
This is my experience, hope it helps.
For a small-scale web app (You have to ask the question, "Will it always remain small scale?" though) using web services, separate business layers, data layers, and so on and so forth can be overkill.
Before anyone shoots me, I do agree that separation of logic between layers along with unit tests, continuous integration, et al are bloody brilliant. In my current role I'd be utterly lost and rocking in the corner without them. However for a very small-scale web app being used to, for example, track contact numbers and addresses for a company of 36 employees, the cost/benefit analysis would suggest that all the "niceties" listed above would be overkill.
However... Remember to ask the question "Will it always remain small scale?" :-)
