How can a CAPTCHA be tested in a high-request situation (load & stress testing)? - asp.net

I have a project implemented in ASP.NET, and its requirements call for a CAPTCHA, so I searched for a good one and finally chose one.
The selected CAPTCHA stores its state in ViewState; after some simple functional testing it looked fine, but because the load on the site is high (1000 requests per minute), it failed.
I concluded that I should load test it before using it, but that raised a question: how can I load test it when a computer cannot read the text?
Another question I have is: what's the difference between using ViewState and Session in a CAPTCHA? (If you know a good CAPTCHA - other than reCAPTCHA, which is very hard for humans(!) to read - please let me know about it.)
Thanks in advance.

reCAPTCHA may be difficult to read, but that's one reason why it works. It can also definitely handle 1000 requests a minute. You'll probably find your request volume going down after you implement a good CAPTCHA. Or have you considered asking users to register?

You could easily set up a load test with JMeter, for example, provided you give your load-testing tool the URL and the correct CAPTCHA result as a data pool (make sure to also include a few test cases where the wrong result is entered, to keep the test realistic).
You can then scale up the number of virtual users and see how the response time evolves.
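To make the data-pool idea concrete, here is a minimal load-driver sketch in C# along the same lines (JMeter's CSV Data Set Config does the equivalent job). Everything specific is assumed for illustration: the /captcha.aspx URL, the CaptchaId/CaptchaAnswer form fields, and a captcha-answers.csv file pairing each CAPTCHA id with its known answer. The key point is that the tool never reads the image; the correct answers come from a pre-generated pool.

    using System;
    using System.Collections.Generic;
    using System.Diagnostics;
    using System.IO;
    using System.Linq;
    using System.Net.Http;
    using System.Threading.Tasks;

    class CaptchaLoadTest
    {
        static async Task Main()
        {
            var baseUrl = "http://localhost/captcha.aspx";   // hypothetical form URL
            // Each line of the pool: captcha id, known plaintext answer.
            var pool = File.ReadAllLines("captcha-answers.csv")
                           .Select(line => line.Split(','))
                           .ToArray();

            using var client = new HttpClient();
            var timings = new List<long>();

            // 50 virtual users, each posting one pooled answer.
            var tasks = Enumerable.Range(0, 50).Select(async i =>
            {
                var row = pool[i % pool.Length];
                var form = new FormUrlEncodedContent(new Dictionary<string, string>
                {
                    ["CaptchaId"] = row[0],        // hypothetical field names
                    ["CaptchaAnswer"] = row[1],
                });

                var sw = Stopwatch.StartNew();
                await client.PostAsync(baseUrl, form);
                sw.Stop();
                lock (timings) timings.Add(sw.ElapsedMilliseconds);
            }).ToList();

            await Task.WhenAll(tasks);
            Console.WriteLine($"average response: {timings.Average():F0} ms over {timings.Count} requests");
        }
    }

Scale the virtual-user count up in steps and watch how the average and worst-case timings move.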

Related

ASP.NET page to reflect server status

I'm looking to create a webpage that will reflect the status of one of my company's servers automatically. Frequently there will be a minor error that only lasts 2-3 minutes, and it would be great to have this reflected on a self-generated page, which might prevent 50-60 unhappy clients from calling in simultaneously and asking what's wrong.
I'm not quite sure where to begin - would anyone have suggestions for good resources to study? Programming examples? I'm not referring to the basics of writing an ASP.NET page, of course, but rather to process interaction in Windows.
Thanks.
To pull this off, you'd need a separate page that essentially runs server diagnostics; otherwise the page wouldn't know whether the server was up or down. Also, that page would need to be isolated from the sorts of problems that kill other people's requests, such as cache problems, memory starvation, high CPU usage and insufficient bandwidth. So ideally the diagnostics would run in a separate app pool and a separate virtual directory, or on a separate machine.
Many of the interesting diagnostics would require a WMI call, but some you can get from the My.Computer namespace.
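As a rough illustration, here is what a couple of those WMI calls might look like in C#. The WMI classes and properties are standard, but the rest is only a sketch (you need a reference to System.Management.dll):

    using System;
    using System.Management;   // reference System.Management.dll

    class ServerDiagnostics
    {
        static void Main()
        {
            // Free vs. total memory from Win32_OperatingSystem (values in KB).
            using (var searcher = new ManagementObjectSearcher(
                "SELECT FreePhysicalMemory, TotalVisibleMemorySize FROM Win32_OperatingSystem"))
            {
                foreach (ManagementObject os in searcher.Get())
                {
                    ulong freeKb = (ulong)os["FreePhysicalMemory"];
                    ulong totalKb = (ulong)os["TotalVisibleMemorySize"];
                    Console.WriteLine("Memory free: {0:F1}%", 100.0 * freeKb / totalKb);
                }
            }

            // Current load per processor from Win32_Processor.
            using (var searcher = new ManagementObjectSearcher(
                "SELECT LoadPercentage FROM Win32_Processor"))
            {
                foreach (ManagementObject cpu in searcher.Get())
                    Console.WriteLine("CPU load: {0}%", cpu["LoadPercentage"]);
            }
        }
    }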
Also, are you going to do this on every server, or do you want one web server to display the status of several different servers?
It also depends on the type of errors your servers are encountering.
If they are going down completely, or losing their internet connection, then pinging them at an interval will tell you whether they are up or not.
If a specific process running on a server becomes unavailable, that can be a little trickier.
Your best bet is to find a way to make a simple request against the important services/applications and see whether you get a response: if you do, the server is likely up; if not, it likely isn't.
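A minimal sketch of both checks, with placeholder host names and URLs:

    using System;
    using System.Net;
    using System.Net.NetworkInformation;

    class UptimeChecks
    {
        // "Is the machine up at all?" - plain ICMP ping with a 2 s timeout.
        static bool IsHostAlive(string host)
        {
            using (var ping = new Ping())
                return ping.Send(host, 2000).Status == IPStatus.Success;
        }

        // "Is the specific service responding?" - cheap HEAD request.
        static bool IsServiceAlive(string url)
        {
            try
            {
                var request = (HttpWebRequest)WebRequest.Create(url);
                request.Method = "HEAD";      // headers only, no body
                request.Timeout = 2000;
                using (var response = (HttpWebResponse)request.GetResponse())
                    return response.StatusCode == HttpStatusCode.OK;
            }
            catch (WebException)
            {
                return false;                 // timeout, DNS failure, 5xx, ...
            }
        }

        static void Main()
        {
            Console.WriteLine("host up:    " + IsHostAlive("appserver01"));
            Console.WriteLine("service up: " + IsServiceAlive("http://appserver01/health.aspx"));
        }
    }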
Anything you can do to reduce the number of support calls you get is a good idea, but I'd also focus some time and try to figure out why your servers are going down so often.
Also, telling your users that the server is down, but not giving a reason why may not give the effect you are looking for. Users will still be confused and frustrated when they can't get their work done.
I know you were looking to build a webpage to display the server diagnostics, but there are plenty of server monitoring tools that produce webpages for an easy dashboard view of the history.
A quick google returned the following link:
http://www.webdesignbooth.com/10-really-useful-server-monitoring-tools/

ASP.NET: What's the best way to produce a trial version for customers to download?

I've written an ASP.NET app that I hope to sell to businesses. I could host the trial, but the app is designed to connect to the customer's data, so customers will certainly want to install it locally to do a proper evaluation.
I've never produced anything commercial before, so I'm looking for advice on how best to limit the trial. A 30-day trial seems most common - do you simply rely on the clock of the PC/server they install it on? Any other suggestions welcome; please keep in mind this is an ASP.NET app, so it will be installed on their web server.
Thanks
Craig
I would just do it via the PC's clock. At the end of the day they could simply change the clock and continue to use your software, though that's probably not going to work in practice (i.e. most software actually uses the date/time for other things as well, and changing it is going to break those).
Generally, you can trust businesses more than you can trust the general public. The liability of a business is much higher than that of an individual, so if it came to it, you could potentially sue them for quite a bit. That alone means most businesses will purchase licenses for all of their software: a few hundred (or even thousand) dollars for a software license is much better than the risk of getting sued.
When they sign up for the demo, make sure you get all of their contact details and so on.
I would set up a web service on your server to authenticate the demo application. The web service should be called periodically, and if the call fails, the application shuts down. That way you have complete control over the trial (you can extend it or shut it down remotely).
You should give them some sort of key, placed in the app's web.config, that identifies them as a customer.
Make sure you take the usual precautions of encrypting / hashing both the key and the web service traffic so the check can't be bypassed.
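A hedged sketch of what that periodic check might look like; the endpoint, key name, and response format are all invented for illustration:

    using System;
    using System.Configuration;
    using System.Net;
    using System.Security.Cryptography;
    using System.Text;

    static class TrialLicense
    {
        public static bool IsLicenseValid()
        {
            // <appSettings><add key="LicenseKey" value="..."/></appSettings>
            string key = ConfigurationManager.AppSettings["LicenseKey"];
            if (string.IsNullOrEmpty(key))
                return false;

            // Send a hash rather than the raw key, so the key itself
            // never travels over the wire.
            string hash;
            using (var sha = SHA256.Create())
                hash = Convert.ToBase64String(sha.ComputeHash(Encoding.UTF8.GetBytes(key)));

            try
            {
                using (var client = new WebClient())
                {
                    // Hypothetical endpoint that replies "valid" or "expired".
                    string reply = client.DownloadString(
                        "https://licensing.example.com/check?h=" + Uri.EscapeDataString(hash));
                    return reply.Trim() == "valid";
                }
            }
            catch (WebException)
            {
                // Policy decision: failing open keeps customers working through
                // outages; failing closed (as here) is stricter.
                return false;
            }
        }
    }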
This sort of thing has been well covered on SO in the past.
You cannot make it unbreakable, but you can make it very difficult for the client to break your trial period.
One way to do it is to take the first-run time, encrypt that info, and store it in your web.config or database. This has a weakness, though: what do you do if the value is not present where you expect it to be?
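As a sketch of that first approach, you could encrypt the first-run timestamp with DPAPI and keep it in a file. The file name is invented; a file is used rather than web.config because writing to web.config at runtime recycles the app domain. Note the weakness from above is still there: delete the file and the trial restarts.

    using System;
    using System.Globalization;
    using System.IO;
    using System.Security.Cryptography;   // ProtectedData lives in System.Security.dll
    using System.Text;

    static class TrialClock
    {
        // Hypothetical location; in a web app, map this under App_Data.
        const string StampFile = @"App_Data\trial.bin";
        static readonly TimeSpan TrialLength = TimeSpan.FromDays(30);

        public static bool IsTrialActive()
        {
            DateTime firstRun;
            if (File.Exists(StampFile))
            {
                byte[] clear = ProtectedData.Unprotect(
                    File.ReadAllBytes(StampFile), null, DataProtectionScope.LocalMachine);
                firstRun = DateTime.Parse(Encoding.UTF8.GetString(clear),
                    CultureInfo.InvariantCulture, DateTimeStyles.RoundtripKind);
            }
            else
            {
                // First run: record "now", encrypted so it can't be hand-edited.
                firstRun = DateTime.UtcNow;
                byte[] cipher = ProtectedData.Protect(
                    Encoding.UTF8.GetBytes(firstRun.ToString("o")),
                    null, DataProtectionScope.LocalMachine);
                File.WriteAllBytes(StampFile, cipher);
            }

            return DateTime.UtcNow - firstRun < TrialLength;
        }
    }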
Another option is to ping a web service that you host. If the web service says the trial is over, you can render the appropriate page to tell them so. This has the advantage that the web service is beyond their control and cannot be tampered with. It has the disadvantage that not every client will want their web app phoning home, and connectivity issues could interfere with the functioning of your app.
So you might want to come up with a variety of options, and then implement a licencing module using the Provider pattern, so that you can swap in the licencing module most suitable for that client.
Put a counter in the web.config, and of course give the counter an unrelated name so the customer does not know what it is for. Every time they access the application, increment the counter, and give them x number of log-ins.
If you want, you can encrypt the counter so the customer can't figure out that it is incrementing.

Prioritise ASP.NET requests

I would be surprised if this is possible, but you never know.
Is there a way in which I could prioritise ASP.NET requests? For example, if the request is a NEW request (coming from Location X) I would like it to take priority over a request coming from a known location.
This will be running under IIS 7, so can I make use of the integrated pipeline to pre-process requests before they take threads out of the ThreadPool?
Hmmm. Any feedback welcomed, even if it's to say No!
Thanks
Duncan
I don't think what you're after is possible in the truest sense of what you're asking for, but it might be possible to 'simulate' it at the application level. John's right: requests are processed first come, first served. But you might be able to give some kind of priority to new visitors by setting a cookie for all visitors and checking whether that cookie is present before you render your homepage. If it is not present, you can assume the request is new and render your homepage as usual. If it is present, you might choose to redirect the visitor to another page (or perhaps a cached copy of your page).
Like I said, this isn't the 'truest' sense of what you are after, but if your homepage is particularly process-intensive right now and you want some way to separate recurring visitors from new ones, it might do the trick.
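For what it's worth, a sketch of that cookie check as an integrated-pipeline IHttpModule might look like this. The cookie name and the "lite" page are placeholders, and the module would still need registering under <system.webServer><modules> in web.config:

    using System;
    using System.Web;

    public class VisitorPriorityModule : IHttpModule
    {
        public void Init(HttpApplication app)
        {
            app.BeginRequest += (sender, e) =>
            {
                var ctx = ((HttpApplication)sender).Context;
                if (ctx.Request.Path != "/default.aspx")   // only guard the expensive page
                    return;

                if (ctx.Request.Cookies["returning"] == null)
                {
                    // New visitor: render the full homepage and mark them for next time.
                    ctx.Response.Cookies.Add(new HttpCookie("returning", "1")
                    {
                        Expires = DateTime.UtcNow.AddDays(30)
                    });
                }
                else
                {
                    // Known visitor: hand them a cheaper, cacheable variant instead.
                    ctx.RewritePath("/default-lite.aspx");
                }
            };
        }

        public void Dispose() { }
    }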
Since you've asked, though - I'd have to ask you why it is necessary in your implementation to prioritise requests as you have mentioned. Is load on your web server a problem, and you want to appear more responsive to new customers?
Just hazarding a guess - interesting question, though! :)
Best,
Richard.

Why shouldn't data be modified on an HTTP GET request?

I know that using non-GET methods (POST, PUT, DELETE) to modify server data is The Right Way to do things. I can find multiple resources claiming that GET requests should not change resources on the server.
However, if a client were to come up to me today and say "I don't care what The Right Way to do things is, it's easier for us to use your API if we can just use call URLs and get some XML back - we don't want to have to build HTTP requests and POST/PUT XML," what business-conducive reasons could I give to convince them otherwise?
Are there caching implications? Security issues? I'm kind of looking for more than just "it doesn't make sense semantically" or "it makes things ambiguous."
Edit:
Thanks for the answers so far regarding prefetching. I'm not too concerned about prefetching, since this is mostly about internal network API use rather than visitable HTML pages whose links a browser could prefetch.
Prefetch: a lot of web browsers use prefetching, which means they load a page before you click on the link, anticipating that you will click it later.
Bots: There are several bots that scan and index the internet for information. They will only issue GET requests. You don't want to delete something from a GET request for this reason.
Caching: GET HTTP requests should not change state and they should be idempotent. Idempotent means that issuing a request once, or issuing it multiple times gives the same result. I.e. there are no side effects. For this reason GET HTTP requests are tightly tied to caching.
The HTTP standard says so: the HTTP standard says what each HTTP method is for. Many programs are built around the HTTP standard, and they assume you will use it the way you are supposed to. If you don't, you will get undefined behavior from a slew of random programs.
How about Google finding a link to that page with all the GET parameters in the URL and revisiting it every now and then? That could lead to a disaster.
There's a funny article about this on The Daily WTF.
GETs can be forced on a user and result in cross-site request forgery (CSRF). For instance, if you have a logout function at http://example.com/logout.php that changes the server-side state of the user, a malicious person could place an image tag on any site with that URL as its source; merely loading the page would log the user out. Not a big deal in this example, but if that URL were a command to transfer funds out of an account, it would be a big deal.
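The standard defence, beyond an anti-forgery token, is simply to refuse state changes on GET. A minimal sketch of the method check alone (the Logout.ashx handler is invented for illustration):

    using System.Web;
    using System.Web.SessionState;

    // Logout.ashx - refuses to change state on GET, defusing the <img> trick.
    public class LogoutHandler : IHttpHandler, IRequiresSessionState
    {
        public void ProcessRequest(HttpContext context)
        {
            if (context.Request.HttpMethod != "POST")
            {
                // An <img src="/Logout.ashx"> can only issue a GET,
                // so the forged request now fails here.
                context.Response.StatusCode = 405;   // Method Not Allowed
                return;
            }

            context.Session.Abandon();               // the actual state change
        }

        public bool IsReusable
        {
            get { return true; }
        }
    }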
Good reasons to do it the right way...
They are industry standard, well documented and easy to secure. While you fully support making life as easy as possible for the client, you don't want to implement something that's easier in the short term in preference to something that's not quite so easy for them but offers long-term benefits.
One of my favourite quotes:
Quick and Dirty... long after the Quick has departed, the Dirty remains.
For you this one is "A stitch in time saves nine" ;)
Security:
CSRF is so much easier in GET requests.
Using POST won't protect you by itself, but GET allows easier exploitation, and mass exploitation, via forums and other places that accept image tags.
Depending on what you do server-side, GET can also help an attacker launch a DoS (denial of service) attack. An attacker can spam thousands of websites with your expensive GET request in an image tag, and every single visitor to those websites will then carry out that expensive GET request against your web server, costing you a lot of CPU cycles.
I'm aware that some pages are heavy anyway and this is always a risk, but the risk is bigger if you add ten big records on every single GET request.
Security for one. What happens if a web crawler comes across a delete link, or a user is tricked into clicking a hyperlink? A user should know what they're doing before they actually do it.
I'm kind of looking for more than just "it doesn't make sense semantically" or "it makes things ambiguous."
...
I don't care what The Right Way to do things is, it's easier for us
Tell them to think of the worst API they've ever used. Can they not imagine how that was caused by a quick hack that got extended?
It will be easier (and cheaper) in 2 months if you start with something that makes sense semantically. We call it the "Right Way" because it makes things easier, not because we want to torture you.

ASP.Net, Capture image/screenshot of client error

We currently have fairly robust error handling functionality in our ASP.Net application.
We log all errors to the database and to a text file on the server, and also send automated emails containing the error details to our support people.
This all happens on the server of course.
We would like to capture (and retrieve) an image of the client browser at the time the error occurred, to provide additional information for troubleshooting.
Is this at all possible?
If so what would be an elegant approach to this problem?
This is not technically impossible, but it is so impractical for nearly all purposes that it might as well be impossible. You would need a plugin running on the client's machine which can receive instructions from your error page to take the screenshot, connect to the server and upload it.
If your client screens have complex data which affects the state surrounding the exception, you should revisit your design to ensure all of that is recorded before it's sent to the client, so you can keep all relevant state tracked with a given exception.
Saying something is "impractical" is usually easier than actually trying to solve something that is difficult, but not technically impossible.
I have done some more research and have come across an approach that allows one to get hold of the rendered HTML server side.
Furthermore, there are ways to convert HTML to images, so I will implement the solution using a combination of the two.
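For anyone following the same route, the first half (capturing the rendered HTML) can be done by overriding Page.Render and buffering the output, roughly like this (the diagnostics call is hypothetical):

    using System.IO;
    using System.Web.UI;

    public class CapturingPage : Page
    {
        protected override void Render(HtmlTextWriter writer)
        {
            using (var buffer = new StringWriter())
            {
                // Render into a buffer instead of straight to the response.
                base.Render(new HtmlTextWriter(buffer));
                string html = buffer.ToString();

                SaveForDiagnostics(html);   // hypothetical: stash it with the error details

                writer.Write(html);         // still send the page to the client
            }
        }

        void SaveForDiagnostics(string html)
        {
            // For illustration only - a real app would tie this to the error id.
            File.AppendAllText(@"App_Data\last-render.html", html);
        }
    }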
Capturing a client browser screenshot is not possible for security and privacy reasons. What you can (and IMHO should) do is capture the URL and the browser version, and try to reproduce the error in the same environment.
