Can user authentication become a performance bottleneck in a centralized, asynchronous server?

To explain the context, let us take the example of a single-threaded chat server that uses asynchronous event notification (e.g. epoll) to handle I/O and makes extensive use of PKI and other cryptographic tools during the user authentication and registration processes, both of which are handled and completed locally. While message indirection (from source to destination) is a gentle-on-CPU, data-intensive process, a few cryptographic operations, such as signing a message with an RSA key, are computationally heavy and may become the slow path in the whole I/O loop.
Can an attacker exploit this slow path to substantially degrade the server's performance by making too many registration requests in a short period? If so, what are the methods to reduce the impact? And if this is a real threat, how do large service providers manage it?
Let us expand the discussion to cover federated XMPP servers.

I'm really not an expert in this, but the following common-sense reflections might answer your questions.
Too many registrations
This is why most registration forms use captchas.
Too many authentications
Authentications aren't computationally heavy; they usually involve comparisons of (salted) hashes.
As @SergeBallesta pointed out, this is plain wrong: password hashing is deliberately slow by design in order to resist brute-force attacks. In fact, this post mentions the issue of vulnerability to DDoS and suggests an IP ban as a counter-measure (see the paragraphs below, and see also this thread for the recommended number of rounds with bcrypt).
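To make that cost concrete, here is a rough Python timing sketch; it requires the third-party bcrypt package, and the exact figures depend entirely on your hardware:

```python
# Rough timing illustration (pip install bcrypt). Numbers are hardware-dependent.
import time
import bcrypt

password = b"correct horse battery staple"

for rounds in (10, 12, 14):
    start = time.perf_counter()
    bcrypt.hashpw(password, bcrypt.gensalt(rounds=rounds))
    elapsed = time.perf_counter() - start
    # Each +1 to the cost parameter roughly doubles the hashing time,
    # which is what makes a flood of authentication requests expensive.
    print(f"rounds={rounds}: {elapsed:.3f}s per hash")
```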
The number of authentication attempts is usually limited using session parameters, and it doesn't seem unreasonable to deny authentication requests that aren't associated with an existing session.
Massive numbers of authentication requests coming from the same IP in a short period can be monitored and the IP banned temporarily. This would typically involve a monitor process independent of the application code itself.
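As an illustration of such a monitor, here is a minimal fixed-window rate limiter in Python. It assumes a local Redis instance, and the window and threshold values are arbitrary:

```python
# Minimal per-IP fixed-window rate limiter (pip install redis).
import redis

r = redis.Redis(host="localhost", port=6379)

WINDOW_SECONDS = 60  # measurement window (assumed value)
MAX_ATTEMPTS = 20    # auth attempts allowed per window (assumed value)

def allow_auth_attempt(ip: str) -> bool:
    key = f"auth_attempts:{ip}"
    count = r.incr(key)                # atomic increment
    if count == 1:
        r.expire(key, WINDOW_SECONDS)  # start the window on the first hit
    return count <= MAX_ATTEMPTS       # beyond the limit: temp-ban candidate
```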
Too many messages
I am not entirely sure here; I would test what I'm about to say if faced with this situation. I think that in the worst case the overhead due to transport-level encryption (e.g. TLS/SSL) is comparable to the application processing time, so I wouldn't worry about that.
Regarding message routing, you could generate a new token every time an authenticated user sends N messages, update the token's validity at each submission, and trigger a renewal when it expires. This might cause a slight overhead, but it lets you limit the rate of message submission per user, and therefore control the overall server load due to message routing.
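A rough Python sketch of that token scheme might look like the following; the TTL, the per-token message budget, and the in-process dictionary are all illustrative assumptions (a real server would use shared storage such as Redis):

```python
# Sketch of a per-user submission token with a message budget and a TTL.
import secrets
import time

TOKEN_TTL = 30            # seconds a token stays valid (assumed)
MESSAGES_PER_TOKEN = 100  # N messages allowed per token (assumed)

tokens = {}  # user_id -> (token, expiry_timestamp, remaining_messages)

def issue_token(user_id: str) -> str:
    token = secrets.token_urlsafe(16)
    tokens[user_id] = (token, time.time() + TOKEN_TTL, MESSAGES_PER_TOKEN)
    return token

def accept_message(user_id: str, token: str) -> bool:
    entry = tokens.get(user_id)
    if entry is None:
        return False
    stored, expiry, remaining = entry
    if stored != token or time.time() > expiry or remaining <= 0:
        return False  # client must request a fresh token
    # Refresh validity on each submission, as suggested above.
    tokens[user_id] = (stored, time.time() + TOKEN_TTL, remaining - 1)
    return True
```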
Hope this helps.

Related

Reduce captcha calls by remembering successful captcha users

I'm looking for a solution to reduce the number of captcha calls I need to make.
I have a website that allows free usage for one app with a text box and button. Users might use it 10 times, and thus I call captcha 10 times. However, this adds up to a ridiculous expense I can't continue to afford. I need a solution to track successful captcha so a user receives only one captcha if successful.
My Thought:
On successful captcha:
store a real-user identifier in Redis (a hash of the IP, User-Agent, and WebRTC data; I'm not fully aware of what WebRTC is, but I was recommended to use it)
on future calls, check whether the user is valid by looking up their hashed identifier (TTL of 90 days)
Any other recommendations or suggestions? Any potential problems?
PS: info about WebRTC in this use case would be helpful as well
I decided to use a hash of the IP and User-Agent, considering that everything I take from the client can be spoofed (even the User-Agent). To combat spoofing, I added IP rate limiting.
Now, before making API calls from the client, I make a call to verify that the user is a verified user (I consider a validated captcha a non-robot verification for 24 hours; I'll experiment with increasing this over time).
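For illustration, a minimal Python version of this verification check might look like the sketch below; the Redis key names and the SHA-256 identifier are assumptions, not part of any captcha API:

```python
# Remember users who passed a captcha, keyed by a hash of IP + User-Agent.
import hashlib
import redis

r = redis.Redis()

VERIFIED_TTL = 24 * 3600  # 24 hours, matching the answer above

def identifier(ip: str, user_agent: str) -> str:
    return hashlib.sha256(f"{ip}|{user_agent}".encode()).hexdigest()

def mark_verified(ip: str, user_agent: str) -> None:
    # Called after a successful captcha solve.
    r.setex(f"captcha_ok:{identifier(ip, user_agent)}", VERIFIED_TTL, 1)

def needs_captcha(ip: str, user_agent: str) -> bool:
    return not r.exists(f"captcha_ok:{identifier(ip, user_agent)}")
```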
This should effectively reduce my captcha calls by 60%.

How can I create a queue with multiple workers?

I want to create a queue where clients can put in requests, then server worker threads can pull them out as they have resources available.
I'm exploring how I could do this with a Firebase repository, rather than an external queue service that would then have to inject data back into Firebase.
With security and validation tools in mind, here is a simple example of what I have in mind:
user pushes a request into a "queue" bucket
a server pulls the request out and deletes it (how do I ensure only one server gets the request?)
server validates data and retrieves from a private bucket (or injects new data)
server pushes data and/or errors back to the user's bucket
A simplified example of where this might be useful would be authentication:
user puts authentication request into the public queue
his login/password goes into his private bucket (a place only he can read/write into)
a server picks up the authentication request, retrieves login/password, and validates against the private bucket only the server can access
the server pushes a token into user's private bucket
(certainly there are still some security loopholes in a public queue; I'm just exploring at this point)
Some other examples for usage:
read-only status queue (user status is communicated via the private bucket; the server writes it to a public bucket which is read-only for the public)
message queue (messages are sent by the user; the server decides which discussion buckets they get dropped into)
So the questions are:
Is this a good design that will integrate well into the upcoming security plans? What are some alternative approaches being explored?
How do I get all the servers to listen to the queue, but only one to pick up each request?
Wow, great question. This is a usage pattern that we've discussed internally, so we'd love to hear about your experience implementing it (support@firebase.com). Here are some thoughts on your questions:
Authentication
If your primary goal is actually authentication, just wait for our security features. :-) In particular, we're intending to have the ability to do auth backed by your own backend server, backed by a Firebase user store, or backed by third-party providers (Facebook, Twitter, etc.).
Load-balanced Work Queue
Regardless of auth, there's still an interesting use case for using Firebase as the backbone for some sort of workload balancing system like you describe. For that, there are a couple approaches you could take:
As you describe, have a single work queue that all of your servers watch and remove items from. You can accomplish this using transaction() to remove the items; transaction() deals with conflicts so that only one server's transaction will succeed. If one server beats a second server to a work item, the second server can abort its transaction and try again on the next item in the queue. This approach is nice because it scales automatically as you add and remove servers, but there's an overhead for each transaction attempt, since it has to make a round-trip to the Firebase servers to make sure nobody else has already grabbed the item from the queue. If the time it takes to process a work item is much greater than the time for a round-trip to the Firebase servers, this overhead probably isn't a big deal; if you have lots of servers (i.e. more contention) and/or lots of small work items, the overhead may be a killer. (There's a sketch of this transaction-based claim after this answer.)
Push the load-balancing to the client by having them choose randomly among a number of work queues. (e.g. have /queue/0, /queue/1, /queue/2, /queue/3, and have the client randomly choose one). Then each server can monitor one work queue and own all of the processing. In general, this will have the least overhead, but it doesn't scale as seamlessly when you add/remove servers (you'll probably need to keep a separate list of work queues that servers update when they come online, and then have clients monitor the list so they know how many queues there are to choose from, etc.).
Personally, I'd lean toward option #2 if you want optimal performance. But #1 might be easier for prototyping and be fine at least initially.
In general, your design is definitely on the right track. If you experiment with implementation and run into problems or have suggestions for our API, let us know (support@firebase.com :-)!
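For readers arriving later, here is a hedged sketch of option #1's transaction-based claim, written against the modern firebase_admin Python Admin SDK (which postdates this answer); the service-account path, database URL, and /queue layout are all assumptions:

```python
# Claim one item from a shared Realtime Database queue using a transaction.
import firebase_admin
from firebase_admin import credentials, db

cred = credentials.Certificate("service-account.json")  # assumed path
firebase_admin.initialize_app(cred, {
    "databaseURL": "https://example-project.firebaseio.com"  # assumed URL
})

def claim_one_item():
    queue_ref = db.reference("queue")
    claimed = {}

    def pop_first(current):
        claimed.clear()  # the SDK may re-run this on conflict
        if not current:
            return current              # queue empty: nothing to claim
        key = sorted(current)[0]        # take the oldest-looking child
        claimed["item"] = current.pop(key)
        return current                  # write back the queue minus our item

    # transaction() retries on conflict, so only one server wins each item.
    queue_ref.transaction(pop_first)
    return claimed.get("item")          # None if another server won the race
```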
This question is pretty old, but in case someone makes it here anyway:
Since mid-2015, Firebase offers something called Firebase Queue, a fault-tolerant, multi-worker job pipeline built on Firebase.
Q: Is this a good design that will integrate well into the upcoming security plans?
A: Your design suggestion fits perfectly with Firebase Queue.
Q: How do I get all the servers to listen to the queue, but only one to pick up each request?
A: Well, that is pretty much what Firebase Queue does for you!
References:
Introducing Firebase Queue (blog entry)
Firebase Queue (official GitHub-repo)

detecting resubmitted SOAP messages

I've got an ASP.NET website that calls an ASP.NET web service.
How can I detect a resubmitted SOAP message? Is that something I need to worry about if the app and the service are both .NET and on the same server?
Since .NET creates the proxy and takes care of the SOAP details, how can you detect resubmitted SOAP messages?
I can't speak to the specifics of ASP.NET; it might have some "magic" to help in this area, but generally your web services need to be designed to be resilient to duplicate requests unless there is cleverness in the infrastructure.
There are several scenarios:
The Web App makes a service call. Does the infrastructure make any attempts to retry if an answer does not come within a certain time? I don't know what ASP.NET does here; my guess is "no", but that is very much a guess. Equally, your Web App itself might choose to retry. In either case we might get exactly the same request sent twice. Any responsibility for detecting the duplicate request would lie with the server, and this is very much easier if the request contains some unique label.
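A minimal sketch of that unique-label idea, in Python for neutrality since the pattern isn't ASP.NET-specific; the request_id field and do_actual_work() are hypothetical names, and a real service would use a shared, expiring store rather than a module-level dict:

```python
# Server-side duplicate detection keyed on a client-supplied request ID.
processed: dict[str, dict] = {}  # request_id -> cached response

def do_actual_work(payload: dict) -> dict:
    # Placeholder for the real business logic.
    return {"status": "ok", "echo": payload}

def handle_request(request_id: str, payload: dict) -> dict:
    if request_id in processed:
        # A duplicate (e.g. an automatic retry): return the original
        # response instead of performing the operation a second time.
        return processed[request_id]
    response = do_actual_work(payload)
    processed[request_id] = response
    return response
```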
A second scenario is the "impatient user" scenario: the user hits submit too often. A well-designed Web App will prevent or detect this and not resubmit to the web service. In this case the service should not see any duplicates.
A more difficult scenario is "did you really mean to buy two Rolls-Royces?" The customer submits a request; let's pretend it's a high-value request. Then their computer crashes, or they lose connectivity. Later, maybe even from a different computer, the user attempts the same purchase, not realising that the first one actually worked. Now that's a hard one to spot, and many vendors don't even try, but in some scenarios you need a really clever service whose duplicate detection does quite sophisticated pattern matching; for really high-value customers this extra effort may be essential.

Is there a different way to do ASP.Net forms authentication that's already built and audited?

Like a lot of people, I've gone with ASP.NET Forms authentication because it's already written, and we're told that writing our own security code is generally a bad idea.
With the current problems with ASP.NET, I'm thinking it might be a good time to look at alternatives.
Important: ASP.NET Security Vulnerability - ScottGu
Video demonstrating attack
Microsoft advisory including workaround
From what I understand, Microsoft tends to store things on the client side because it makes it easier to operate over server farms without needing database access calls.
I don't really care about server farms, though, and I'd like to simply have an opaque cookie that reflects my lack of trust in the callers.
Is there a decent solution that's already been proven solid?
Update: to clarify my question, it's the authentication-token part of forms authentication that I'd like to replace. The back end is quite easy to replace: you can implement the interfaces to store your users and roles quite easily, or use existing libraries like http://www.memberprotect.net/, which has been mentioned here.
I'd like to change the front-end part of the process to use a token that doesn't give the client any leverage. Sticking with the existing back-end infrastructure would be useful but not essential.
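For what it's worth, the opaque-token pattern the update describes is easy to sketch in a language-neutral way. The following Python is illustrative only: the names and TTL are assumptions, it is not part of any ASP.NET API, and a real deployment would persist sessions in a shared store:

```python
# Opaque-token sketch: the client only ever holds a random value;
# all authentication state lives on the server.
import secrets
import time

SESSION_TTL = 30 * 60  # 30 minutes (assumed)
sessions = {}          # token -> (username, expiry)

def issue_token(username: str) -> str:
    token = secrets.token_urlsafe(32)  # opaque: carries no client-readable data
    sessions[token] = (username, time.time() + SESSION_TTL)
    return token

def authenticate(token: str):
    entry = sessions.get(token)
    if entry is None or time.time() > entry[1]:
        sessions.pop(token, None)
        return None        # unknown or expired: treat as unauthenticated
    return entry[0]        # the username the token maps to
```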
I've been working on an HttpModule that basically does what you're looking for. When a FormsAuthentication cookie and FormsAuthenticationTicket are generated, before the response is sent to the client (i.e., during the processing of the postback of the Login page/action), all of the details about the cookie and ticket are stored on the server. Also, the UserData from the ticket is moved to the server (if present) and replaced with a salted SHA-512 hash of the other properties in the ticket, along with a GUID that serves as a key into the server-side store of the ticket.
The validation of the cookie and ticket compares everything the client provided (optionally including their IP address) with all of the properties that were known about them at the time they were issued. If anything doesn't match, they are removed from the request before the FormsAuthenticationModule even kicks in. If everything does match, the server's UserData is put back in the FormsAuthenticationTicket in case you have any modules or code that depend on it. It's all transparent. Plus, it can detect suspicious and blatantly malicious requests and insert a random delay into the processing. It also has some explicit padding-oracle workarounds in there.
The demo app actually lets you create/modify your cookie & ticket values on the server, with the server encrypting your ticket for you with the machine keys. This way you can prove to yourself that you can't create a ticket/cookie that gets around the server validation unless you write the exact set of data to the server (which should be impossible under normal circumstances).
http://sws.codeplex.com/
http://www.sholo.net/post/2010/09/21/Padding-Oracle-vs-Forms-Authentication.aspx
http://www.sholo.net/post/2010/09/22/Sholo-Web-Security-and-the-EnhancedSecurityModule.aspx
-Scott
If you have your keys in the web.config and the attacker gets to it, they are pretty much done.
If that's not the case (they don't get the keys from your .config), then AFAIK the padding oracle shouldn't allow them to sign a new auth ticket. The paper explains the ability to encrypt by taking advantage of CBC mode, but that leaves a tiny bit of garbage at the end, which should be enough to make the signature invalid.
As for the video where they get the keys with the tool, it's against a DotNetNuke install. A default DotNetNuke has those keys in the web.config.
Implement the workaround, keep your keys out of your site-level web.config, disable the WebResource.axd and ScriptResource.axd handlers if you don't use them, and apply the patch as soon as MS releases it.
I will simply recommend taking a look at InetSolution's MemberProtect product. It's a component designed with security in mind for the banking and financial-services industries, but it is widely applicable to any site or application built on the .NET platform. It provides support for encrypting user information and a host of authentication methods, from the simplistic to the very advanced, and the various methods and functions are designed to be used as the developer sees fit; it's not a canned solution so much as a very flexible one, which may or may not be a good thing depending on the particular situation. It's also a very solid foundation on which to build new member-based websites and applications in general.
You can find out more about it at http://www.memberprotect.net
I am the developer for MemberProtect and I work at InetSolution :)
This isn't a valueless question, but I have to say that I think your logic is suspect. It's no bad idea to consider alternative authentication solutions, but the newly announced ASP.NET vulnerability should not push you to abandon a current (presumably working) solution. I'm also not entirely sure what the relevance of this comment is:
From what I understand Microsoft tend to store things on the client side because it makes it easier to operate over server farms without needing database access calls.
What is it about the vulnerability that makes you think that ASP.NET forms auth is broken any more than another solution?
The detail of the MS advisory suggests that pretty much any other authentication system could be rendered similarly vulnerable to attack. For example, any solution that stores its settings in the web.config file would still have those settings open to the world, assuming a successful attack.
The real solution here is not to change security, but to apply the published workaround to the problem. You might switch authentication providers only to find that you are still vulnerable, and your effort has gained nothing.
Regarding tokens/sessions: you have to push something to the client for authentication to work (whether you call it a token or not), and it's not this part of the process that causes the current security issue; it's the way the server responds to certain calls that makes this secret vulnerable to attack.

Storing last 10 web service calls

I have a SOAP web service and I'm trying to figure how to save/log the last 10 requests for each user. Each user is required to send their user/pass in each request, so it's easy to know who the request originated from. With these last 10 requests saved, my goal is to develop some sort of page that will allow them to log-in with their credentials and view the raw request, the actual SOAP message, http header information, and anything relevant that I can think of.
The point is to allow people to troubleshoot their own connection issues instead of having to contact me each time they can't connect, have trouble formatting their request, etc....
My first thought was to store all this information in memory in a hashtable or something, but that may have scalability issues when hundreds or thousands of users are hitting the web service.
We could use our database to store these requests. Instead of hitting the database each time, I may need to create some "buffer" mechanism that will only update the database after the buffer gets to a certain number of requests. Is there an existing library or mechanism that will do this?
We can't store these requests on the file system on the machine hosting the web service. Since these requests can potentially contain sensitive information, it's a business decision that I'll need to work around.
Or maybe there's a better way to achieve what I'm trying to do?
I see two alternatives.
1.- Mix your two approaches: create the hashtable in memory and, when it hits a limit (say, 1,000 requests), push the entries to the database. No need for a special library for this; the hashtable is your in-memory buffer (see the sketch below).
2.- Set up a totally different process sniffing the requests, and offer the clients the ability to download the packet captures (or process and present them yourself). This is arguably more work, but it separates the request saving from your application. You could even move the sniffer to another machine if load demanded it.
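A quick Python sketch of alternative 1's buffer-then-flush idea; the sqlite3 database, schema, and flush threshold are assumed for illustration, and a real version would also prune to the last 10 rows per user:

```python
# In-memory buffer that flushes to the database in batches.
import sqlite3
import threading

FLUSH_THRESHOLD = 1000  # requests buffered before a flush (assumed)

class RequestLog:
    def __init__(self, db_path="requests.db"):
        self._buffer = []
        self._lock = threading.Lock()
        self._conn = sqlite3.connect(db_path, check_same_thread=False)
        self._conn.execute(
            "CREATE TABLE IF NOT EXISTS requests (user TEXT, raw TEXT)"
        )

    def record(self, user: str, raw_request: str) -> None:
        with self._lock:
            self._buffer.append((user, raw_request))
            if len(self._buffer) >= FLUSH_THRESHOLD:
                self._flush()

    def _flush(self) -> None:
        # One batched INSERT instead of one database hit per request.
        self._conn.executemany(
            "INSERT INTO requests VALUES (?, ?)", self._buffer
        )
        self._conn.commit()
        self._buffer.clear()
```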
