Detecting resubmitted SOAP messages - ASP.NET

I've got an ASP.NET website that calls an ASP.NET web service.
How can I detect a resubmitted SOAP message? Is that something I need to worry about if the app and the service are both .NET and on the same server?
Since .NET creates the proxy and takes care of the SOAP details, how can you detect resubmitted SOAP messages?

I can't speak to the specifics of ASP.NET; it might have some "magic" to help in this area, but generally your web services need to be designed to be resilient to duplicate requests unless there is cleverness in the infrastructure.
There are several scenarios:
The web app makes a service call. Does the infrastructure make any attempt to retry if an answer does not come within a certain time? I don't know what ASP.NET does here; my guess is "no", but that is very much a guess. Equally, your web app itself might choose to retry. In either case we might get exactly the same request sent twice. Any responsibility for detecting the duplicate request lies with the server. This is much easier if the request contains some unique label (see the sketch after this list).
A second scenario is the "impatient user": the user hits submit too often. A well-designed web app will prevent or detect this and not resubmit the request to the web service. In this case the service should not see any duplicate.
A more difficult scenario is "did you really mean to buy two Rolls-Royces?" The customer submits a request; let's pretend it's a high-value request. Then their computer crashes, or they lose connectivity. Later, maybe even from a different computer, the user attempts the same purchase, not realising that the first one actually worked. Now that's a hard one to spot, and many vendors don't even try, but in some scenarios you need a really clever service whose duplicate detection does quite sophisticated pattern matching - for really high-value customers this extra effort may be essential.
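To make the "unique label" idea concrete, here's a minimal sketch of server-side duplicate detection. The RequestId parameter, service name, and in-memory store are all hypothetical illustrations; a real service would need a durable, expiring store rather than a static dictionary.

```csharp
// A sketch of duplicate detection keyed on a client-supplied request ID.
using System;
using System.Collections.Concurrent;
using System.Web.Services;

public class OrderService : WebService
{
    // Maps RequestId -> cached result, so a retried request gets the
    // original answer instead of being processed twice. In production
    // this would live in a durable store with an expiry policy.
    private static readonly ConcurrentDictionary<Guid, string> Processed =
        new ConcurrentDictionary<Guid, string>();

    [WebMethod]
    public string PlaceOrder(Guid requestId, string order)
    {
        // If we've already seen this label, return the stored result.
        return Processed.GetOrAdd(requestId, _ => ProcessOrder(order));
    }

    private static string ProcessOrder(string order)
    {
        // ... real order processing would go here ...
        return "accepted:" + order;
    }
}
```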

Related

How to Design a Database Monitoring Application

I'm designing a database monitoring application. Basically, the database will be hosted in the cloud, and record-level access to it will be provided via custom-written clients for Windows, iOS, Android, etc. The basic scenario can be implemented via web services (ASP.NET Web API). For example, the client will make a GET request to the web service to fetch an entry. However, one of the requirements is that the client should automatically refresh its UI if another user (using a different instance of the client) updates the same record, AND the auto-refresh needs to happen within a second of the record being updated, so that the info is always up to date.
Polling could be an option, but the active clients could number in the hundreds of thousands, so I'm looking for a more robust solution that is lightweight on the server. I'm versed in .NET and C++/Windows, and I could roll out a complete solution in C++/Windows using I/O completion ports, but that feels like overkill and would require too much development time. I've looked into ASP.NET Web API, but its limitation is that it can't push notifications out to clients. Are there any frameworks/technologies in the Windows ecosystem that can address this scenario and scale easily as well? Any good options outside the Windows ecosystem, e.g. node.js?
You did not specify a database, so if you are able to use MS SQL Server, you may want to look up the SqlDependency feature. If configured and used correctly, you will be notified when there are any changes in the database.
Pair this with SignalR or any real-time front-end framework of your choice and you'll have real-time updates as you described.
One catch, though, is that SqlDependency only tells you that something changed; whatever it was, you are responsible for tracking down which record it is. That adds an extra layer of difficulty, but it's much better than polling.
You may want to search through the sqldependency tag here on SO to go from here to where you want your app to be.
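To make that pairing concrete, here's a minimal sketch, assuming SQL Server with Service Broker enabled and classic SignalR 2; RecordHub, the dbo.Records table, and the query are hypothetical stand-ins for your own schema.

```csharp
// SqlDependency paired with SignalR to push "something changed" to clients.
using System.Data.SqlClient;
using Microsoft.AspNet.SignalR;

public class RecordHub : Hub { }

public class RecordWatcher
{
    private readonly string _connectionString;

    public RecordWatcher(string connectionString)
    {
        _connectionString = connectionString;
        SqlDependency.Start(_connectionString); // once per app domain
    }

    public void Watch()
    {
        using (var connection = new SqlConnection(_connectionString))
        // Query Notifications require an explicit column list and
        // two-part table names - no SELECT *.
        using (var command = new SqlCommand(
            "SELECT Id, Value FROM dbo.Records", connection))
        {
            var dependency = new SqlDependency(command);
            dependency.OnChange += OnChange;

            connection.Open();
            using (var reader = command.ExecuteReader())
            {
                while (reader.Read()) { /* refresh your cache here */ }
            }
        }
    }

    private void OnChange(object sender, SqlNotificationEventArgs e)
    {
        // SqlDependency only says "something changed"; as noted above,
        // working out *which* record changed is your job.
        GlobalHost.ConnectionManager.GetHubContext<RecordHub>()
                  .Clients.All.recordsChanged();

        Watch(); // each notification fires once, so re-subscribe
    }
}
```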
My first thought was a web service call that "stays alive", or the HTML5 protocol called WebSockets. You can maintain lots of connections, but hundreds of thousands seems too large. Therefore the web service needs a way to contact the clients over stateless connections. So build a web service into the client that the server can communicate with. Firewalls may make this an issue.
If firewalls are not an issue, then you may not need a web service in the client; you can instead implement a server socket on the client.
For mobile clients, if implementing a server socket is not a possibility then use push notifications. Perhaps look at https://stackoverflow.com/a/6676586/4350148 for a similar issue.
Finally, you may want to consider a content delivery network.
One last point: hopefully you don't need to contact all 100,000 users within 1 second. I am assuming that with so many users you have quite a few servers.
Take a look at Maximum concurrent Socket.IO connections regarding the max number of open WebSocket connections.
Also consider whether your estimate of on the order of 100,000 simultaneous users is accurate.

Can an ASP.NET application handle NServiceBus events?

Most if not all of the NSB examples for ASP.NET (or MVC) have the web application sending a message using Bus.Send and possibly registering for a simple callback, which is essentially how I'm using it in my application.
What I'm wondering is if it's possible and/or makes any sense to handle messages in the same ASP.NET application.
The main reason I'm asking is caching. The process might go something like this:
User initiates a request from the web app.
Web app sends a message to a standalone app server, and logs the change in a local database.
On future page requests from the same user, the web app is aware of the change and lists it in a "pending" status.
A bunch of stuff happens on the back-end and eventually the request gets approved or rejected. An event is published referencing the original request.
At this point, the web app should start displaying the most recent information.
Now, in a real web app, it's almost a sure thing that this pending request is going to be cached, quite possibly for a long period of time, because otherwise the app has to query the database for pending changes every time the user asks for the current info.
So when the request finally completes on the back-end - which might take a minute or a day - the web app needs, at a minimum, to invalidate this cache entry and do another DB lookup.
Now I realize that this can be managed with SqlDependency objects and so on, but let's assume that they aren't available - perhaps it's not a SQL Server back-end or perhaps the current-info query goes to a web service, whatever. The question is, how does the web app become aware of the change in status?
If it is possible to handle NServiceBus messages in an ASP.NET application, what is the context of the handler? In other words, the IoC container is going to have to inject a bunch of dependencies, but what is their scope? Does this all execute in the context of an HTTP request? Or does everything need to be static/singleton for the message handler?
Is there a better/recommended approach to this type of problem?
I've wondered the same thing myself - what's an appropriate level of coupling for a web app with the NServiceBus infrastructure? In my domain, I have a similar problem to solve involving the use of SignalR in place of a cache. Like you, I've not found a lot of documentation about this particular pattern. However, I think it's possible to reason through some of the implications of following it, then decide if it makes sense in your environment.
In short, I would say that I believe it is entirely possible to have a web application subscribe to NServiceBus events. I don't think there would be any technical roadblocks, though I have to confess I have not actually tried it - if you have the time, by all means give it a shot. I just get the strong feeling that if one starts needing to do this, then there is probably a better overall design waiting to be discovered. Here's why I think this is so:
A relevant question to ask relates to your cache implementation. If it's a distributed or centralized model (think SQL, MongoDB, Memcached, etc.), then the approach that @Adam Fyles suggests sounds like a good idea. You wouldn't need to notify every web application: updating your cache can be done by a single NServiceBus endpoint that's not part of your web application. In other words, every instance of your web application and the "cache-update" endpoint would access the same shared cache. If your cache is in-process, however, like Microsoft's web cache, then of course you are left with a much trickier problem to solve, unless you can lean on eventual consistency as was suggested.
If your web app subscribes to a particular NServiceBus event, then it becomes necessary for you to have a unique input queue for each instance of your web app. Since it's best practice to consider scale-out of your web app using a load balancer, that means that you could end up with N queues and at least N subscriptions, which is more to worry about than a constant number of subscriptions. Again, not a technical roadblock, just something that would make me raise an eyebrow.
The David Boike article that was linked raises an interesting point about app pools and how their lifetimes might be uncertain. Also, if you have multiple app pools running simultaneously for the same application on a server (a common scenario), they will all be trying to read from the same message queue, and there's no good way to determine which one will actually handle the message. More often than not, that will matter. Sending commands, in contrast, does not require an input queue, according to this post by Udi Dahan. This is why I think one-way commands sent by web apps are much more commonly seen in practice.
There's a lot to be said for the Single Responsibility Principle here. In general, I would say that if you can delegate the "expertise" of sending and receiving messages to an NServiceBus Host as much as possible, your overall architecture will be cleaner and more manageable. Through experience, I've found that if I treat my web farm as a single entity, i.e. strip away all acknowledgement of individual web server identity, that I tend to have less to worry about. Having each web server be an endpoint on the bus kind of breaks that notion, because now "which server" comes up again in the form of message queues.
Does this help clarify things?
An NSB endpoint can be created to subscribe to the published event and update the cache. The event shouldn't be published until the actual update is made, so you don't get out of sync. The web app would continue to pull data from the cache on the next request, or you can build in some kind of delay.
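For illustration, here's a minimal sketch of such a cache-updating endpoint using the classic IHandleMessages API; RequestApproved and SharedCache are hypothetical names standing in for your own event type and out-of-process cache.

```csharp
// A standalone NServiceBus endpoint that keeps the shared cache current.
using System;
using NServiceBus;

public class RequestApproved : IEvent
{
    public Guid RequestId { get; set; }
}

public class RequestApprovedHandler : IHandleMessages<RequestApproved>
{
    public void Handle(RequestApproved message)
    {
        // Invalidate the shared cache entry so every web server behind
        // the load balancer sees the new status on its next read.
        SharedCache.Invalidate("request:" + message.RequestId);
    }
}

public static class SharedCache
{
    public static void Invalidate(string key)
    {
        // Stand-in: delegate to Redis/Memcached/etc. in a real system.
    }
}
```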

Check if anyone is currently using an ASP.Net app (site)

I build ASP.NET websites (hosted under IIS 6 usually, often with SQL Server backends and forms authentication).
Clients sometimes ask if I can check whether there are people currently browsing (and/or whether there are users currently logged in to) their website at a given moment, usually so they can safely do a deployment (they want a hotfix, for example).
I know the web is basically stateless so I can't be sure whether someone has closed the browser window, but I imagine there'd be some count of not-yet-timed-out sessions or something, and surely logged-in-users...
Is there a standard and/or easy way to check this?
Jakob's answer is correct but does rely on installing and configuring the Membership features.
A crude but simple way of tracking users online would be to store a counter in the Application object. This counter could be incremented/decremented as sessions start and end. There's an example of this on the MSDN website:
Session-State Events (MSDN Library)
Because the default session timeout is 20 minutes, the accuracy of this method isn't guaranteed (but then, that applies to any web application, due to the stateless and disconnected nature of HTTP).
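A minimal sketch of that counter, following the pattern of the MSDN example (note that Session_End only fires with in-process session state):

```csharp
// Global.asax.cs - tracking a rough "users online" count via session events.
using System;
using System.Web;

public class Global : HttpApplication
{
    protected void Application_Start(object sender, EventArgs e)
    {
        Application["UsersOnline"] = 0;
    }

    protected void Session_Start(object sender, EventArgs e)
    {
        Application.Lock(); // serialize access to shared application state
        Application["UsersOnline"] = (int)Application["UsersOnline"] + 1;
        Application.UnLock();
    }

    // Only raised for InProc session state, and only after the timeout
    // elapses - so the count lags users who simply close the browser.
    protected void Session_End(object sender, EventArgs e)
    {
        Application.Lock();
        Application["UsersOnline"] = (int)Application["UsersOnline"] - 1;
        Application.UnLock();
    }
}
```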
I know this is a pretty old question, but I figured I'd chime in. Why not use Google Analytics and view its Real-Time dashboard? It requires only a minor code modification (i.e. a single script import) and will do everything you're looking for...
You may be looking for the Membership.GetNumberOfUsersOnline method, although I'm not sure how reliable it is.
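For what it's worth, the call itself is a one-liner; it counts users whose last activity falls within Membership.UserIsOnlineTimeWindow, which is part of why its accuracy is questionable:

```csharp
using System.Web.Security;

public static class OnlineUsers
{
    // Assumes a configured Membership provider (e.g. SqlMembershipProvider).
    // Counts users active within Membership.UserIsOnlineTimeWindow minutes.
    public static int Count()
    {
        return Membership.GetNumberOfUsersOnline();
    }
}
```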
Sessions, suggested by other users, are a basic way of doing things, but are not too reliable: they work well in some circumstances, but not in others.
For example, if users are downloading large files, watching videos, or listening to podcasts, they may stay on the same page for hours (unless the requests for the binary data are tracked by ASP.NET too), but they are still using your website.
Thus, my suggestion is to use the server logs to detect if the website is currently used by many people. It gives you the ability to:
See what sort of requests are being made. It's quite easy to tell humans from crawlers, and with some experience it's also possible to see whether the human is currently doing something critical (such as writing a comment on a website, editing a document, or typing her credit card number and ordering something) or not (such as browsing).
See who is making those requests. For example, if Google is crawling your website, it is a very bad idea to go offline, unless the search rating doesn't matter to you. On the other hand, if a bot has been trying for two hours to crack your website by making requests to different pages, you can go offline for sure.
Note: if a website has some critical areas (for example, having written this long answer, I would be angry if Stack Overflow went offline a few seconds before I submitted it), you can also send regular AJAX requests to the server while the user stays on the page. Of course, you must be careful when implementing such a feature, and take into account that it will increase the bandwidth used and will not work if the user has JavaScript disabled.
You can run the netstat command and see how many active connections exist to your website's ports.
The default port for HTTP is *:80.
The default port for HTTPS is *:443.
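For example, on a Windows server something like this gives a rough count of established connections on the default HTTP port (crude, since the substring also matches other ports containing ":80"):

```
netstat -an | find ":80" | find /c "ESTABLISHED"
```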

Is there a different way to do ASP.Net forms authentication that's already built and audited?

Like a lot of people, I've gone with ASP.NET forms authentication because it's already written, and we're told that writing our own security code is generally a bad idea.
With the current problems with ASP.NET, I'm thinking it might be a good time to look at alternatives.
Important: ASP.NET Security Vulnerability - ScottGu
Video demonstrating attack
Microsoft advisory including workaround
From what I understand, Microsoft tends to store things on the client side because it makes it easier to operate over server farms without needing database access calls.
I don't really care about server farms though and I'd like to simply have an opaque cookie that demonstrates my lack of trust in the callers.
Is there a decent solution that's already been proven solid?
Update: to clarify my question, I'm talking about the authentication-token part of forms authentication that I'd like to replace. The back end is quite easy to replace: you can implement the interfaces to store your users and roles quite easily. You can also use existing libraries like http://www.memberprotect.net/, which has been mentioned here.
I'd like to change the front-end part of the process to use a token that doesn't give the client any leverage. Sticking with the existing back-end infrastructure would be useful, but not essential.
I've been working on an HttpModule that basically does what you're looking for. When a forms authentication cookie and FormsAuthenticationTicket are generated, before the response is sent to the client (i.e., during the processing of the postback of the login page/action), all of the details about the cookie and ticket are stored on the server. Also, the UserData from the ticket is moved to the server (if present) and replaced with a salted SHA-512 hash of the other properties in the ticket, along with a GUID that serves as a key into the server-side store of the ticket.
The validation of the cookie and ticket compares everything the client provided (optionally including their IP address) with all of the properties that were known about them at the time they were issued. If anything doesn't match, they are removed from the request before the FormsAuthenticationModule even kicks in. If everything does match, the server's UserData is stuck back in the FormsAuthenticationTicket in case you have any modules or code that depend on it. It's all transparent. Plus, it can detect suspicious and blatantly malicious requests and insert a random delay into the processing. It also has some explicit padding oracle workarounds in there.
The demo app actually lets you create/modify your cookie & ticket values on the server, with the server encrypting your ticket for you with the machine keys. This way you can prove to yourself that you can't create a ticket/cookie that gets around the server validation unless you write the exact set of data to the server (which should be impossible under normal circumstances).
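To illustrate the salted-hash part of that description, here's a hypothetical sketch (not the actual code from the project linked below) of hashing a ticket's properties together with the GUID store key:

```csharp
// Hash the ticket's properties plus the server-side store key into the
// value that replaces UserData. Names here are illustrative only.
using System;
using System.Security.Cryptography;
using System.Text;
using System.Web.Security;

public static class TicketHasher
{
    public static string HashTicket(
        FormsAuthenticationTicket ticket, Guid serverKey, byte[] salt)
    {
        var material = string.Join("|", new[]
        {
            ticket.Name,
            ticket.IssueDate.Ticks.ToString(),
            ticket.Expiration.Ticks.ToString(),
            ticket.Version.ToString(),
            serverKey.ToString("N") // key into the server-side ticket store
        });

        using (var sha = SHA512.Create())
        {
            var bytes = Encoding.UTF8.GetBytes(material);
            var salted = new byte[salt.Length + bytes.Length];
            Buffer.BlockCopy(salt, 0, salted, 0, salt.Length);
            Buffer.BlockCopy(bytes, 0, salted, salt.Length, bytes.Length);
            return Convert.ToBase64String(sha.ComputeHash(salted));
        }
    }
}
```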
http://sws.codeplex.com/
http://www.sholo.net/post/2010/09/21/Padding-Oracle-vs-Forms-Authentication.aspx
http://www.sholo.net/post/2010/09/22/Sholo-Web-Security-and-the-EnhancedSecurityModule.aspx
-Scott
If you have your keys in the web.config and the attacker gets to it, they are pretty much done.
If that's not the case (they don't get the keys from your .config), then as far as I know the padding oracle shouldn't allow them to sign a new auth ticket. The paper explains the ability to encrypt by taking advantage of CBC mode, which leaves a tiny bit of garbage at the end; that should be enough to make the signature invalid.
As for the video where they get the keys with the tool, it's against a DotNetNuke install. A default DotNetNuke install has those keys in the web.config.
Implement the workaround, keep your keys out of your site-level web.config, disable the WebResource.axd and ScriptResource.axd handlers if you don't use them, and apply the patch as soon as MS releases it.
I will simply recommend taking a look at InetSolution's MemberProtect product. It's a component designed with security in mind for the banking and financial services industries, but it is widely applicable to any site built on ASP.NET or application built on the .NET platform. It provides support for encrypting user information and a host of authentication methods, from the simplistic to the very advanced, and the various methods and functions are designed to be used as the developer sees fit, so it's not a canned solution so much as a very flexible one; this may or may not be a good thing depending on the particular situation. It's also a very solid foundation on which to build new member-based websites and applications in general.
You can find out more about it at http://www.memberprotect.net
I am the developer for MemberProtect and I work at InetSolution :)
This isn't a valueless question, but I have to say that I think your logic is suspect. It's no bad idea to consider alternative authentication solutions, but the newly announced ASP.NET vulnerability should not push you to abandon a current (presumably working) solution. I'm also not entirely sure what the relevance of this comment is:
From what I understand, Microsoft tends to store things on the client side because it makes it easier to operate over server farms without needing database access calls.
What is it about the vulnerability that makes you think that ASP.NET forms auth is broken any more than another solution?
The detail of the MS advisory would seem to suggest that pretty much any other authentication system could be rendered similarly vulnerable to attack. For example, any solution that uses the web.config file to store settings would still have its settings open to the world, assuming a successful attack.
The real solution here is not to change security, but to apply the published workaround to the problem. You might switch authentication providers only to find that you are still vulnerable, and your effort has gained nothing.
Regarding tokens/sessions: you have to push something to the client for authentication to work (whether you call it a token or not), and it's not this part of the process that causes the current security issue: it's the way the server responds to certain calls that makes this secret vulnerable to attack.

Is Silverlight more friendly to load balancing than ASP.NET?

I was discussing load balancing with a colleague at lunch. I admit that I know very little about this topic. We were discussing the various ways of maintaining session state in an ASP.NET application, none of which suited the high-performance load balancing he was looking for.
"What about Silverlight?" says I. As far as I know it is stateless: you've got the app running in the browser, and you've got services on the server that feed/process data.
Does Silverlight totally negate the need for Session state management? Is it more friendly to load-balancing? Is it something in between?
I would say that Silverlight is likely to be a little more load-balancer friendly than ASP.NET. You have much more sophisticated mechanisms for maintaining state (such as isolated local storage), and pretty much, you only need to talk to the server when (a) you initially download the application, and then (b) when you make a web service call to retrieve or update data. It's analogous in this sense to an Ajax application written entirely in C#.
To put it another way, so long as either (a) your server-side persistence layer knows who your client is, or (b) you pass in all relevant data on each WCF call, it doesn't matter which web server instance the call goes to. You don't have to muck about with firewall-level persistence to make sure your HTTP call goes back to the right web server.
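A hypothetical sketch of that second style, where each call carries everything needed so any server behind the load balancer can handle it (the contract and types are illustrative, not a real API):

```csharp
using System.ServiceModel;

public enum OrderStatus { Pending, Approved, Rejected }

[ServiceContract]
public interface IOrderService
{
    // Stateless: the user token and all relevant data travel with every
    // call, so no server-side session affinity is required.
    [OperationContract]
    OrderStatus GetOrderStatus(string authToken, int orderId);
}
```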
I'd say it depends on your application. If it's a banking application, then yes, I want something timing out after 5 minutes and asking for my password again. If it's Facebook, then not so much.
Silverlight depends on XMLHttpRequest like any other Ajax implementation and is therefore capable of maintaining a session, forms authentication, roles, profiles, etc.
The benefit you are getting is avoiding virtually all of the page-serving traffic: JSON requests are negligible compared to serving pages. Even the .xap can be cached on the client.
I would say you are getting the best of both worlds in regards to your question.
