What is CAPTCHA for, security-wise? - asp.net

What does CAPTCHA do as far as security is concerned? The registration forms of many sites have this field, but how does it work?

Completely Automated Public Turing Test To Tell Computers and Humans Apart
It prevents hackers from submitting forms using automated scripts, by requiring the user to input data read from images that are difficult to read automatically. The text can also be in the form of a sound, as per @BeRecursive's comment. See this site.
It is used for logins as well as on other data entry forms. Here on Stack Overflow, if you edit answers or questions a number of times, you will be prompted before further edits are accepted.
There are two main forms: one has a single combination of characters that the user has to enter; the other, such as the one on SO, has two.
The CAPTCHA with two words usually consists of a word known to the web application and a second word that it is trying to decipher. See this site (thanks @Piskvor). The first word is used for validating the user, and the answers to the second word are compared to other users' answers for that word; in this way the probable meaning of the text is determined. This is performed as a public service to organisations such as libraries and public archives that are scanning large numbers of historical documents. Optical Character Recognition (OCR) is not perfect and sometimes the meaning cannot be determined, so the word is made available in the CAPTCHA of a participating website and the meaning is determined there. This process has no effect on the user of the website, as only the first word is used to determine whether they are a robot.
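For illustration only, here is a minimal C# sketch of that split (the class and method names are made up, not reCAPTCHA's actual API): the user is verified against the control word alone, and the answer to the unknown word is simply tallied so that agreement across many users can settle its transcription.

```csharp
using System;
using System.Collections.Generic;

// Hypothetical illustration of the two-word idea: only the control word decides
// whether the user passes; the guess for the unknown word is merely recorded.
public class TwoWordCaptchaChecker
{
    private readonly Dictionary<string, List<string>> unknownWordVotes =
        new Dictionary<string, List<string>>();

    public bool Verify(string challengeId, string controlWordExpected,
                       string controlWordEntered, string unknownWordEntered)
    {
        // 1. The user is judged only on the word the system already knows.
        bool isHuman = string.Equals(controlWordExpected.Trim(),
                                     controlWordEntered.Trim(),
                                     StringComparison.OrdinalIgnoreCase);
        if (!isHuman) return false;

        // 2. The guess for the scanned (unknown) word is collected; once enough
        //    users agree on the same text, it can be treated as transcribed.
        if (!unknownWordVotes.TryGetValue(challengeId, out var votes))
            unknownWordVotes[challengeId] = votes = new List<string>();
        votes.Add(unknownWordEntered.Trim().ToLowerInvariant());

        return true;
    }
}
```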

Its purpose is as a challenge-response test to demonstrate that the person using it is a human being and not an automated program. It doesn't really "secure" a website, it just makes it increasingly difficult for an automated system to access that functionality of the site. The idea is that some functions (such as posting a comment on a forum) should be done by real humans only and not automated processes.
This complexity can vary wildly. There's the common "distorted text" CAPTCHA, which requires the user to enter text displayed in an image designed to be difficult for a computer to read, but those are getting increasingly easier to beat with software. For accessibility purposes there are audio CAPTCHAs, which play a short clip of a word and the user enters what they hear. I've even seen ones that ask simple questions that any reasonable person should be able to answer but may stump a computer that wasn't prepared for them. Some of my favorites show a matrix of pictures and say "click on the cat" or something else innocuous, which again a computer probably won't be able to do easily but a human can.
See Wikipedia: http://en.wikipedia.org/wiki/CAPTCHA
See Captcha.net: http://www.captcha.net/

A CAPTCHA is a "Completely Automated Public Turing test to tell Computers and Humans Apart". This basically means it is a simple test that makes it easy for a programmer to tell if a user is a computer or a person. It is usually visual and it relies on the fact that object recognition (including characters) is in its infancy at the moment. Recognizing letters is trivial for a human, however.
This ensures that the only users who will be able to fill in the form are those who can easily identify the objects in the CAPTCHA, usually characters. This is generally used to prevent automated form filling by bots (and to prevent spam).
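As a rough sketch of that challenge-response round trip in an ASP.NET-style handler (the helper names and session storage are assumptions, not a particular CAPTCHA library): generate a random string, keep the answer server-side, render it as a distorted image, and compare on postback.

```csharp
using System;
using System.Web;   // classic ASP.NET; session-based storage assumed

public static class SimpleCaptcha
{
    private const string Chars = "ABCDEFGHJKLMNPQRSTUVWXYZ23456789"; // skip ambiguous glyphs

    public static string NewChallenge(HttpSessionStateBase session, int length = 6)
    {
        var rng = new Random();
        var answer = new char[length];
        for (int i = 0; i < length; i++)
            answer[i] = Chars[rng.Next(Chars.Length)];

        session["captcha_answer"] = new string(answer); // never sent to the client as text
        return new string(answer);                      // render this as a distorted image
    }

    public static bool Validate(HttpSessionStateBase session, string userInput)
    {
        var expected = session["captcha_answer"] as string;
        session["captcha_answer"] = null;               // one attempt per challenge
        return expected != null &&
               string.Equals(expected, userInput?.Trim(), StringComparison.OrdinalIgnoreCase);
    }
}
```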

It is an attempt to stop bots from registering on a site. It works by generating an image with text on it; the idea is that it is very difficult (though apparently not impossible) to write a bot that can recognize the text within an image. This is also why the text is in weird fonts (sometimes making it impossible for a human, well, me, to read!).
Here is a good link.

CAPTCHA is just a riddle in the form of image or sound. "Stupid" bots can't solve the riddle and so they can't enter the correct answer to the riddle. If the correct answer is not entered, then there is no registration. Simple as that:)


Front-end and back-end terminology

While designing a web application in ASP.NET, I usually split the project into 2 parts: the back-end (the admin part) and the front-end (the visitors/SEO part). Let's say that my visitors can log in on the website and will do a lot of tasks, like filling in a profile, sending messages, etc.
That part (the authenticated-user part) looks to me like a different "layer" between front-end and back-end, and it is somewhat hard to define whether it is front-end (because visitors/users handle it, but not admins) or back-end (because the user will proactively make changes in the database, like admins, but with fewer rights).
Is there a term to define that "layer", like "mid-end", or is my mind floating to the wrong places here and there is a clear definition for this (unknown to me)?
I would call these:
public area
user area
admin area
Collectively, I'd identify these as either 'areas' or 'zones'. To me, 'back-end' means code running on a server, and 'front-end' means the output from that code. I'd avoid using that terminology.
This is a very subjective answer, but that seems to be the nature of your question.
Funny how sometimes the biggest obstacle in development is: "what should I call it?!"
Front-end vs. back-end seems very subjective, in that it depends on the context, circumstance, and sometimes on individual interpretation. I think Wikipedia does a good job defining it, but there's still not always a clear distinction.
For that middle layer, I prefer the term "Mediator".

Quick questions re moving to InfoPath forms

I’ve been asked to look into how best to move forms into InfoPath and have a couple of basic questions about your experiences so I can get an insider’s lay of the land. Even some short, quick bullets would be really helpful -- thank you!
Are you starting from scratch in InfoPath, or are you converting from paper or a different e-format? (Jetform, PDF, etc.)
Are you trying to re-create the layout of a specific paper form, or is a regular online form OK? (trying to learn what the latest thinking is about this)
Do you need only simple fill and submit capabilities, or do you need programming for calculations, validation, database lookup/entry/reporting, etc. as well? (don’t know how much harder it is to do all this vs. not)
How long does each form take to finish? (I know it depends, but is there a rough guideline for planning purposes?)
Who’s doing the actual work? (by title or function)
What is especially straightforward or challenging about moving to InfoPath forms? (forewarned is forearmed!)
We are in the process of moving all our company forms to InfoPath. This is approximately 70 total encompassing things such as Vacation Requests, Time Card Adjustments, etc. I will answer your questions based on that.
The current forms are in Word format and people print them and fill them out. There is actually a function in InfoPath to import/convert Word documents so our "forms" department can create them fairly easily without developer support (and the forms have to identically match the Word versions - even if there are ways to improve, it is a political factor not a technical one). This process is very quick (the form can be created in an hour or so). At this time we are just using simple fill in and submit capabilities although we would like to add prepopulation of certain fields in the future.
The two most challenging aspects we faced (so far) are digital signatures and publishing the forms. The idea of digital signatures is great and we definitely wanted to use them but understanding how they work and making sure the form is designed correctly took a little training for the nontechnical people creating the forms. Publishing took a little explaining as well. Our users were creating a form locally and then just emailing it around or copying it to shares without ever publishing it - which just errors out for any user but the author. Teaching them the proper process (and explaining why it was setup that way) took a bit of time.

Breaking CAPTCHAs for a Noble Purpose

CAPTCHAs that ask users to read distorted text are fine for sighted people, but a terrible barrier for those who are blind or have other disabilities. Audio alternatives are occasionally available but still don't help those who are both deaf and blind and can be hard to use with a screenreader (which is already reading words to you).
There exist a couple of solutions that use humans to solve the CAPTCHA on behalf of the user, such as WebVisium and Solona, but these rely on the availability of volunteer operators (for example, Solona apparently has just one volunteer so you have to hope he is awake when you want help).
It occurs to me that the volume of CAPTCHA solutions needed by blind people is very low - I'd guess less than a few hundred per day in a populous country like the UK. This means that unlike the bad folks who want to perform an action many times in a short period, a CAPTCHA assistance service for blind people could afford to devote considerable computational resource - for example, a cloud of computers in Amazon EC2 - to identifying the presented text.
My question is this: assuming you don't care about speed very much, and you have lots of computers available, are there algorithms that let you solve the text-distortion CAPTCHAs that are common today, such as those used by reCaptcha? Or are these problems really intractable even with lots of resource and time?
A few notes:
At this point, my question is just theoretical, but clearly any such service would have to carefully control access to keep spammers out. Perhaps only registered blind people would be allowed to use it.
I am aware that an old Yahoo CAPTCHA was broken a few years ago using an algorithm that runs in seconds on a single computer. I am asking whether modern CAPTCHAs can be broken, perhaps more slowly and with more resource.
I am aware that some new CAPTCHA types are appearing, which ask users to identify kittens or orient a picture. These aren't widespread yet, so I'm just asking about text-distortion for now.
Basically solving a text distortion CAPTCHA consists of three individual steps:
Find out where the interesting parts are
Segment the text into individual letters
Recognize the letters
The only problem left that is pretty hard for computers is the second one. The first usually isn't very hard, unless you happen to stumble upon the CAPTCHA from hell. And the third gets solved by computers with a much better success rate than by humans.
An interesting site for learning how CAPTCHAs are broken is the one by the OCR Research Team.
CAPTCHA was created to prevent machines from reading the words. It's meant to be read by humans only. Making it more readable for blind/deaf people adds the risk of machines being able to understand it again, thus nullifying the effect.
Spammers did find a very effective way to break the more popular CAPTCHAs, though. They just hire cheap labourers to read them, in return for a few cents per working account. As a result, there's a small industry around breaking CAPTCHAs to create millions of accounts that can then be used to send more spam. Compared to the amount gained by the spammers, the cost is almost nothing. A similar solution could be used by blind/deaf people, who would send the CAPTCHA image to some cheap labourer in China or wherever, who would reply with the correct words so the blind/deaf person can proceed. Unfortunately, blind people need this service only a few times while spammers need a continuous flow, so those labourers will prefer to work for spammers instead. (The pay is better.) Still, the best solution would be to send the CAPTCHA to some friend, let them read and/or decipher it, and return the answer.
The reCAPTCHA style also reads out the words. A simple speech recognition application might be able to recognise whatever is said, although speech recognition still needs more optimization. Still, you might want to work from that angle, getting the application to listen to the audio clip instead.
When it is possible to break CAPTCHA's, they will just think of better CAPTCHA-like methods. OCR techniques are still improving thus more work will be done to make CAPTCHA's harder. That is, until OCR has become as good as the human eye at recognizing words...
An algorithm could be created, although it would be slow. With 26 lowercase letters, 26 uppercase letters, and 10 digits, it should not be too difficult to come up with one. With serif and sans-serif fonts, the number of combinations would need to be doubled, though. Still, if you try to curve each candidate letter in a similar way to the letter in the CAPTCHA, you should be able to detect which letter is covered by the CAPTCHA letter the most, and that would be the most likely candidate. You still need to clear lines, dirt, and other artefacts from the image, which the human eye has less trouble ignoring than a computer does. You'd need the following steps:
1. Clean up the image.
2. Detect the locations of the letters.
3. For every letter:
3a. Determine the curve of the letter by checking its left side.
3b. Do an overlay of every possible letter/digit to find the one that covers it the best. (That's the most likely letter.)
4. Once you've found the word, do a dictionary check to make sure it's a real word. (Unless the CAPTCHA doesn't use real words.)
Even though they can twist the letters in the CAPTCHA, it should be possible to detect the twist/rotation used simply by looking at the left side of every letter and then trying to apply the same curve to every candidate letter. (52 combinations, plus 10 digits, if digits are also used.) Basically, you'd try to put a box around every letter, then check which candidate letter leaves the least amount of white space inside the box. That's the most likely letter.
The main reason why this isn't often used for OCR is basically the need for speed. Step 3a/3b tends to be slow, especially if you have to take font style into consideration.
Making this answer bigger but in reply to one of the comments:
There are several ways to cleanup an image. You'd need some color filtering, noise reduction and an algorithm that's able to recognise the noisy lines through an image. The DEFCON slideshow that you've pointed to shows a few simple techniques to filter away some of the noise. It shows that a basic image processing tool can already make an image a lot clearer for a machine to read. A simple blur will clean up random dots and thin lines while color filters would filter away the noisy colours. A next step would be to try to put a box around every letter in the CAPTCHA, hoping the system is able to recognise their locations. I don't know any practical algorithms for this but there should be ways to recognise them. There's software that can create vector images from bitmaps, thus there should be software that's able to calculate a box around a letter.
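A minimal sketch of such a cleanup pass, assuming System.Drawing and dark-text-on-light CAPTCHAs (the threshold and the isolated-pixel rule are guesses that would need tuning per CAPTCHA):

```csharp
using System.Drawing;

// Convert to grayscale, threshold away pale noise colours, then drop isolated
// black pixels, which are usually dots or specks rather than letter strokes.
static Bitmap CleanUp(Bitmap source, int threshold = 128)
{
    var result = new Bitmap(source.Width, source.Height);

    for (int y = 0; y < source.Height; y++)
    {
        for (int x = 0; x < source.Width; x++)
        {
            Color c = source.GetPixel(x, y);
            int gray = (c.R + c.G + c.B) / 3;
            // Anything lighter than the threshold is treated as background.
            result.SetPixel(x, y, gray < threshold ? Color.Black : Color.White);
        }
    }

    // Very small "blur": a black pixel with no black neighbours is probably noise.
    for (int y = 1; y < result.Height - 1; y++)
    {
        for (int x = 1; x < result.Width - 1; x++)
        {
            if (result.GetPixel(x, y).R != 0) continue;
            int blackNeighbours = 0;
            for (int dy = -1; dy <= 1; dy++)
                for (int dx = -1; dx <= 1; dx++)
                    if ((dx != 0 || dy != 0) && result.GetPixel(x + dx, y + dy).R == 0)
                        blackNeighbours++;
            if (blackNeighbours == 0)
                result.SetPixel(x, y, Color.White);
        }
    }
    return result;
}
```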
It is likely that this box won't have rectangular corners, thus you would have to distort all 52 letters to match the same box. Italic or bold shouldn't make much of a difference since these styles are just additional distortions. Serif or Sans-serif does make a difference, though. Serif fonts tend to have a few more spikes and ornaments. Fortunately, there are algorithms that can transform a box to any other figure with four corners.
Regular OCR applications will assume that letters are mostly straight and will just check a few hotspots to find a match. Thus, they sometimes get it wrong because of noise. To crack CAPTCHA, you would need a more sensitive match, preferably "XOR-ing" the CAPTCHA letter image with an image of one of the 52 letters, then counting the number of black and white spots to calculate the ratio. Assuming white=1 and black=0, the result of the XOR should be almost black for the best match.
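A sketch of that XOR-style comparison, assuming you already have a cropped letter region and a set of pre-rendered glyph templates at roughly the right size and distortion (building those templates is the hard part and is not shown here):

```csharp
using System;
using System.Collections.Generic;
using System.Drawing;

// For one candidate region, compare against a template for each glyph and keep
// the glyph with the fewest mismatching pixels (the "XOR" count from the text).
static char BestMatch(Bitmap letterRegion, IDictionary<char, Bitmap> templates)
{
    char best = '?';
    int bestMismatch = int.MaxValue;

    foreach (var pair in templates)
    {
        Bitmap template = pair.Value;
        int mismatch = 0;
        int w = Math.Min(letterRegion.Width, template.Width);
        int h = Math.Min(letterRegion.Height, template.Height);
        for (int y = 0; y < h; y++)
        {
            for (int x = 0; x < w; x++)
            {
                bool a = letterRegion.GetPixel(x, y).R < 128; // black in the CAPTCHA
                bool b = template.GetPixel(x, y).R < 128;     // black in the template
                if (a ^ b) mismatch++;                        // pixels that disagree
            }
        }
        if (mismatch < bestMismatch)
        {
            bestMismatch = mismatch;
            best = pair.Key;
        }
    }
    return best;
}
```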
I think several spammers have already found some useful algorithms to crack CAPTCHA's but for them, keeping these algorithms a secret just keeps them in business.
Another comment, more text. :-)
Segmentation would be a problem, but it's not impossible to solve. It's just extremely complex. But once you've cleaned the image, it should be possible to calculate two lines: one line that touches the bottom of every letter and a second line that touches the top. However, good CAPTCHAs won't put letters on the same lines any more, but those not-so-good ones could be cracked by just following the lines. (Guess what? reCAPTCHA puts letters between two lines!) With two lines, you know the first letter will start at the left, so you can try overlaying all 52 possibilities there until you've found a match. When you find one, move to the right for the second letter, and so on until you've read all the letters. With two lines to guide you, you don't need a complete box.
Letters tend to use a constant ratio between width and height. With two lines, you can calculate the height of the complete letter and thus get a good estimation of the matching width.
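For completeness, here is a naive sketch of the segmentation step on a cleaned image: a vertical projection that treats empty columns as gaps between letters. Real CAPTCHAs defeat this by overlapping or connecting letters, so treat it only as a starting point.

```csharp
using System.Collections.Generic;
using System.Drawing;

// Returns (start, end) column ranges that contain ink; each range is a letter candidate.
static List<(int Start, int End)> SegmentColumns(Bitmap cleaned)
{
    var segments = new List<(int, int)>();
    int? start = null;

    for (int x = 0; x < cleaned.Width; x++)
    {
        bool columnHasInk = false;
        for (int y = 0; y < cleaned.Height; y++)
        {
            if (cleaned.GetPixel(x, y).R < 128) { columnHasInk = true; break; }
        }

        if (columnHasInk && start == null) start = x;   // a letter begins
        if (!columnHasInk && start != null)             // a letter ends
        {
            segments.Add((start.Value, x - 1));
            start = null;
        }
    }
    if (start != null) segments.Add((start.Value, cleaned.Width - 1));
    return segments;
}
```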
Still, working out the correct algorithm to calculate this all is a bit too much for my poor math skills. You'd need an expert mathematician to crack this algorithm.
My answer to your question "are these problems really intractable even with lots of resource and time?" is to point out that this is the very reason that CAPTCHAs work.
My understanding is that the purpose of a CAPTCHA is to prove that you are human rather than a spam bot. reCAPTCHAs are a novel take on this theme because they take images that represent text that cannot be resolved by OCR (optical character recognition) engines. The difference between a person and a machine in this instance is that specialized algorithms have tried to interpret the image and failed, while a "normal" person has the intrinsic ability to interpret the text in a consistently human way. That being said, in the future we hope that someone will come up with better OCR engines so that there needs to be less human intervention in digitizing the world's information. We hope that someone will come up with a tractable solution to this particular problem.
From your point of view of trying to make CAPTCHAs more accessible to blind people -- who still need to prove that they're people rather than spam bots -- the community needs to become aware of this issue and find a way to identify people in a less vision-centric way.
The introduction of CAPTCHA has certainly made the web less accessible to the visually impaired, and I agree with you in citing this as a significant problem that deserves more attention and concern. However, while CAPTCHA can be and has been inconsistently bypassed on popular web sites, I don't think this is a viable long-term solution for those in need. Indeed, the day that the CAPTCHA variants currently present on sites like Facebook, Google, MySpace etc. can be reliably and consistently broken is the day they will become obsolete and abandoned for either stronger variants of the same or an entirely new solution (as you implied, distinguishing cats from dogs in pictures has been a popular alternative trend).
When it comes to online accessibility, what I think those with disabilities need most right now is advocacy. The more people contact software companies, open source groups, and standards bodies and speak out about this need, the more awareness will be raised and that will (hopefully) lead to more action on behalf of the development community. Ultimately, it would be great to see sites like Google or Facebook offering alternative access methods just for their visually impaired users.
Idealism aside, I think it is productive to pursue other avenues like you mentioned with the CAPTCHA volunteer network, possibly even the development of something like OpenID for those with relevant disabilities as a universal form validation pass.
As for the technical aspect of your question, I don't think the availability of additional processing power alone will allow you to reliably and consistently break CAPTCHA. There is A LOT of money in spam, and you can be sure that shady SEO companies and Spammers alike have a great number of servers at their disposal. As Johannes Rössel mentioned, if you want to learn more about how this is done and where the technical difficulty lies, research Optical Character Recognition (OCR) and look at the wide variety of number/letter skewing that occurs on high traffic sites.
This related SO question has a number of good ideas in it, including a DEFCON talk that claims using multiple OCRs and voting breaks many simple CAPTCHAs. This suggests a candidate solution method: distribute the problem over several servers, each of which runs one or more OCR tools in parallel, collect the results, and take the most popular answer. Comments welcome.
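A hedged sketch of that voting idea, assuming you have wrappers around whatever OCR tools are available (the IOcrEngine interface is hypothetical, not an existing library): run every engine over the same image and keep the most common transcription.

```csharp
using System.Collections.Generic;
using System.Drawing;
using System.Linq;

public interface IOcrEngine          // assumed wrapper around whatever OCR tools you have
{
    string Recognize(Bitmap captcha);
}

public static class OcrVoter
{
    public static string Solve(Bitmap captcha, IEnumerable<IOcrEngine> engines)
    {
        var answers = engines
            .Select(e => e.Recognize(captcha)?.Trim().ToLowerInvariant())
            .Where(a => !string.IsNullOrEmpty(a));

        // Majority vote: the transcription produced by the most engines wins.
        return answers
            .GroupBy(a => a)
            .OrderByDescending(g => g.Count())
            .Select(g => g.Key)
            .FirstOrDefault() ?? string.Empty;
    }
}
```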

Best way to incorporate spell checkers with a build process

I try to externalize all strings (and other constants) used in any application I write, for many reasons that are probably second-nature to most stack-overflowers, but one thing I would like to have is the ability to automate spell checking of any user-visible strings. This poses a couple problems:
Not all strings are user-visible, and it's non-trivial to separate them and keep that separation in place (but it is possible)
Most, if not all, string externalization methods I've used involve significant text that will not pass a spell checker such as aspell/ispell (eg: theStrName="some string." and comments)
Many spellcheckers (once again, aspell/ispell) don't handle many words out of the box (generally technical terms, proper nouns, or just 'new' terminology, like metadata).
How do you incorporate something like this into your build procedures/test suites? It is not feasible to have someone manually spell check all the strings in an application each time they are changed -- and there is no chance that they will all be spelled correctly the first time.
We do it manually, if errors aren't picked up during testing then they're picked up by the QA team, or during localization by the translators, or during localization QA. Then we lodge a bug.
Most of our developers are not native English speakers, so it's not an uncommon problem for us. The number that slip through the cracks is so small that this is a satisfactory solution for us.
Nothing over a few hundred lines is ever 100% bug-free (well... maybe the odd piece of embedded code). Just think of spelling mistakes as bugs and don't waste too much time on it.
As soon as your application matures, over 90% of strings won't change between releases, and it would be a reasonably trivial exercise to compare two versions of your resources, figure out what's new (check them first), what's changed/updated (check next) and what hasn't changed (no need to check these).
So think of it more like I need to check ALL of these manually the first time, and I'm only going to have to check 10% of them next time. Now ask yourself if you still really need to automate spell checking.
I can think of two ways to approach this semi-automatically:
Have the compiler help you differentiate between strings used in the UI and strings used elsewhere. Overload different variants of the string datatype depending on its purpose, and overload the output methods to only accept that type - that way you can create a fake UI that just outputs the UI strings, and do the spell checking on that.
Whether this is doable of course depends on the platform and the overall architecture of the application.
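A small C# sketch of that wrapper-type idea (all names are illustrative): user-visible text gets its own type, output helpers accept only that type, and a tool or fake UI can then enumerate exactly the strings that need spell checking.

```csharp
// Only UiString values reach the user; plain strings won't compile at the output call site.
public readonly struct UiString
{
    public string Value { get; }
    public UiString(string value) => Value = value;
    public override string ToString() => Value;
}

public static class Ui
{
    public static void Show(UiString text) => System.Console.WriteLine(text.Value);
}

// Usage: internal identifiers stay as plain strings and are never spell checked.
// Ui.Show(new UiString("Your changes have been saved."));   // user-visible
// var xpath = "//order/item[@id]";                          // not user-visible
```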
Another approach could be to simply update the spell checker's database with all the strings that appear in the code - comments, xpaths, table names, you name it - and regard them as perfectly cromulent. This will of course reduce the precision of the spell checking.
First thing, regarding string externalization: GNU GetText (if used properly) creates string files that contain almost no text other than the actual content of the strings (there are some headers, but it's easy to make a spell checker ignore them).
Second thing, what I would do is to run the spell checker in a continuous integration environment and have the errors fed externally, probably through a web interface but email will also work. Developers can then review the errors and either fix them in the code or use some easy interface to let the spell check know that a misspelling should be ignored (a web interface can integrate both the error view and the spell checker interface).
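As one possible shape for that CI step, here is a rough C# sketch that pulls the msgstr lines out of a GetText .po file (the file name is an assumption) and pipes them through aspell's "list" mode, which prints only the words it considers misspelled; a non-zero exit code can then fail the build or feed the report.

```csharp
using System;
using System.Diagnostics;
using System.IO;
using System.Linq;

class SpellCheckStrings
{
    static int Main()
    {
        // Collect only the translated string content, not keys or comments.
        var strings = File.ReadLines("messages.po")                    // assumed path
            .Where(l => l.StartsWith("msgstr "))
            .Select(l => l.Substring(7).Trim('"'));

        var psi = new ProcessStartInfo("aspell", "--lang=en list")
        {
            RedirectStandardInput = true,
            RedirectStandardOutput = true,
            UseShellExecute = false
        };

        using var aspell = Process.Start(psi);
        foreach (var s in strings) aspell.StandardInput.WriteLine(s);
        aspell.StandardInput.Close();

        var misspelled = aspell.StandardOutput.ReadToEnd()
            .Split(new[] { '\n', '\r' }, StringSplitOptions.RemoveEmptyEntries)
            .Distinct()
            .ToList();
        aspell.WaitForExit();

        foreach (var word in misspelled) Console.WriteLine("Possible misspelling: " + word);
        return misspelled.Count == 0 ? 0 : 1;   // non-zero exit fails the CI step
    }
}
```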
If you're using java and are storing your localized strings in resource bundles then you could check the Bundle.properties files and validate the bundle strings. You could also add a special comment annotation that your processor could use to determine if an entry should be skipped.
This method will allow you to give a hint as to the locale and provide a way of checking multiple languages within the one build process.
I can't answer how you would perform the actual spell checking itself, though I think what I've presented will guide you as to the method of performing the spell checking.
Use aspell. It's a programme, it's available for unixoids and cygwin, it can be run over lots of kinds of source code. Use it.
First point, please don't put it into your build process. I would be a vengeful coder if I (meaning my computer) had to spell check all the content on the site every time I tried to debug or build a new feature. I don't even think this kind of operation belongs as a unit test (you're testing a human interface, not a computerised one).
Second point, don't write a script. You're going to have so many false positives that people will stop reading the reports, and you are no better off than when you started.
Third point, this is probably most easily solved by having humans do it: QA team, copy writers, beta testers, translators, etc. All the big sites with internationalised content that I've built had the same process: we took the copy from the copy writers, sent it to the translating service/agency, put it into the persistence layer, and deployed it. Testers (QA, developers, PMs, designers, etc.) would find spelling or grammatical mistakes and lodge bug reports. There is just too much red tape and pairs of eyes for that many spelling/grammar errors to slip through.
Fourth point, there will always be spelling and grammar mistakes on your page. Even major newspaper web sites haven't gotten around this and they have whole office buildings filled with editors.

Developing an online exam application, how do I prevent cheaters?

I have the task of developing online examination software for a small university, and I need to implement measures to prevent cheating...
What are your ideas on how to do this?
I would like to possibly disable all IE/Firefox tabs, or somehow log internet activity so I know if they are googling answers... Is there any realistic way to do such things from a Flex / web application?
Simply put, no there is no realistic way to accomplish this if it is an online exam (assuming they are using their own computers to take the exam).
Is opening the browser window full screen an option? You could possibly also check for the window losing focus and start a timer that stops the test after some small period of time.
@Chuck - a good idea.
If the test was created in Flash/Flex, you could force the user to make the application fullscreen in order to start the test (fullscreen mode has to be user-initiated). Then, you can listen for the Flash events dispatched when flash exits fullscreen mode and take whatever appropriate action you want (end the test, penalize the user, etc.).
Flash/Flex fullscreen event info.
blog.flexexamples.com has an example of creating a fullscreen-capable app.
Random questions and large banks of questions help. Randomizing even the same question (say changing the numbers, and calculating the result) helps too. None of these will prevent cheating though.
In the first case, if the pool is large enough that no two students get the same question, all that means is that students will compile a list of questions over the course of several semesters. (It is also a ton of work for the professors to come up with so many questions; I've had to do it as a TA, and it is not fun.)
In the second case, all you need is one smart student to solve the general case, and all the rest just take that answer and plug in the values.
Online review systems work well with either of these strategies (no benefit in cheating.) Online tests? They won't work.
Finally, as for preventing googling... good luck. Even if your application could completely lock down the machine, the user could always run a VM or a second machine and do whatever they want.
My school has always had a download link for the Lockdown browser, but I've never taken a course that required it. You can probably force the student to use it with a user agent check, but it could probably be spoofed with some effort.
Proctored tests are the only way to prevent someone from cheating. All the other methods might make it hard enough to not be worth the effort for most, but don't discount the fact that certain types of people will work twice as hard to cheat than it would have taken them to study honestly.
Since you can't block them from using google, you've got to make sure they don't have time to google. Put the questions in images so they can't copy and paste (randomize the image names each time they are displayed).
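A quick sketch of that image trick using GDI+ (paths, size, and font are placeholders): render the question text to a PNG under a fresh random name each time it is displayed.

```csharp
using System;
using System.Drawing;
using System.Drawing.Imaging;
using System.IO;

// Renders the question as an image so the text cannot be selected, and returns
// the randomized file name to embed as <img src="...">.
static string RenderQuestionImage(string questionText, string outputDirectory)
{
    var fileName = Guid.NewGuid().ToString("N") + ".png";   // new name on every display
    using var bmp = new Bitmap(800, 200);
    using var g = Graphics.FromImage(bmp);
    using var font = new Font("Arial", 14);

    g.Clear(Color.White);
    g.DrawString(questionText, font, Brushes.Black,
                 new RectangleF(10, 10, 780, 180));          // wraps long questions

    var path = Path.Combine(outputDirectory, fileName);
    bmp.Save(path, ImageFormat.Png);
    return fileName;
}
```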
Make the question longer (100 words or more) and you will find that people would rather answer the question than retype the whole thing in google.
Give them a very short time, like 30-45 seconds: time to read the question, think for a moment, and click A, B, C, D, or E.
(Having just graduated from CSUN, I can tell you scantron tests work.)
For essay questions? do a reverse google lookup (meaning put their answer into google as soon as they click submit) and see if you get exact matches. If so, you know what to do.
Will they always take the test on test machines, or will they be able to take the test from any machine on the network? If it will be specific machines, just use the hosts file to prevent them from getting out to the web.
If it is any machine, then I would look at having the testing backend change the firewall rules for the machine the test is running on so the machine cannot get out to the interwebs.
I'd probably implement a simple winforms (or WPF) app that hosts a browser control in it -- which is locked in to your site. Then you can remove links to browsers and lock down the workstations so that all they can open is your app.
This assumes you have control over the workstations on which the students are taking the tests, of course.
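A minimal sketch of such a locked-down host (the exam URL is a placeholder, and a real kiosk setup still needs OS-level lockdown): a borderless, topmost WinForms window with a WebBrowser control that refuses to navigate away from the exam site.

```csharp
using System;
using System.Windows.Forms;

class ExamKiosk : Form
{
    private readonly WebBrowser browser = new WebBrowser { Dock = DockStyle.Fill };
    private const string ExamUrl = "https://exams.example.edu/start";   // placeholder

    public ExamKiosk()
    {
        FormBorderStyle = FormBorderStyle.None;   // no title bar, no close button
        WindowState = FormWindowState.Maximized;
        TopMost = true;                           // stay above other windows
        browser.IsWebBrowserContextMenuEnabled = false;
        browser.Navigating += (s, e) =>
        {
            // Block navigation away from the exam site.
            if (!e.Url.Host.EndsWith("exams.example.edu")) e.Cancel = true;
        };
        Controls.Add(browser);
        browser.Navigate(ExamUrl);
    }

    [STAThread]
    static void Main() => Application.Run(new ExamKiosk());
}
```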
As a teacher, I can tell you the single best way would be to have human review of the answers. A person can sense copy/paste or an answer that doesn't make sense given the context of the course, expected knowledge level of the students, content of the textbook, etc, etc, etc.
A computer can do things like check for statistical similarity of answers, but you really need a person for final review (or, alternatively, build a massive statistical-processing, AI stack that will cost 10x the cost of human review and won't be as good ;-))
No, browsers are designed to limit the amount of damage a website or application can do to the system. You might be able to accomplish your goals through Java, an ActiveX control, or a custom plugin, but other than that you aren't going to be able to 'watch' what they're doing on their system, much less control it. (Think if you could: I could put a spy on this webpage, and if you had it open I'd get to see what other websites you have open.)
Even if you could do this, using a browser inside a VM would give them the ability to use one computer to browse during the test, and if you could fix that they could simply use a library computer with their laptop next to it, or read things from a book.
The reality is that such unmonitored tests either have to be considered "open book" or "honor" tests. You must design the test questions in such a manner that references won't help solve the problems, which also means that each student needs to get a slightly different test so there is no way for them to collude and generate a key.
You have to develop an application that runs on their computer, but even then you can't solve the VM problem easily, and cannot solve the side by side computers or book problem at all.
-Adam
Randomize questions, ask a random set of questions from a large bank... time the answers...
Unless you mean hacking your site, which is a different question.
Short of having the application run completely on the user's machine, I do not believe there is a way to make sure they are not google-ing the answers. Even then it would be difficult to check for all possible loop-holes.
I have taken classes that used web based quiz software and used to work for a small college as well. For basic cheating prevention I would say randomize the questions.
Try adding SMS messages into the mix.
I agree with Adam, that even with the limitations that I suggested, it would still be trivial to cheat. Those were just "best effort" suggestions.
Your only hopes are a strong school honor code and human proctoring of the room where the test is being given.
As many other posters have said, you can't control the student's computer, and you certainly can't keep them from using a second computer or an iPhone along side the one being used for the test -- note that an iPhone (or other cellular device) can bypass any DNS or firewall on the network, since it uses the cellular provider's network, not the college's.
Good luck; you're going to need it.
Ban them from using any wireless device or laptop and keylog the machines?
You could enforce a small time window during which the test is available. This could reduce the chance that a student who knows the answers will be free to help one who doesn't (since they both need to be taking the test at the same time).
If it's math-related, use different numbers for different students. In general, try to have different questions for different copies of the test.
If you get to design the entire course: try to have some online homeworks as well, so that you can build a profile for each student, such as a statistical analysis of how often they use certain common words and punctuations. Some students use semi-colons often; others never, for example. When they take the test you get a good idea of whether or not it's really them typing.
You could also ask a couple questions you know they don't know. For example, list 10 questions and say they must answer any 6 out of the 10. But make 3 of the questions based on materials not taught in class. If they choose 2 or 3 of these, you have good reason to be suspicious.
Finally, use an algorithm to compare for similar answers. Do a simple hash to get rid of small changes. For example, hash an answer to a list of lower-cased 3-grams (3 words in a row), alphabetize it, and then look for many collisions between different users. This may sound like an obvious technique, but as a teacher I can assure you this will catch a surprising number of cheaters.
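A sketch of that trigram comparison (the 0.6 overlap threshold is a guess to tune against real answers): normalise each answer, collect its word trigrams, and flag pairs of students whose answers share an unusually large fraction of them.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

static class CollusionCheck
{
    // Lower-case, split on whitespace/punctuation, and collect word trigrams.
    static HashSet<string> Trigrams(string answer)
    {
        var words = answer.ToLowerInvariant()
            .Split(new[] { ' ', '\t', '\n', '.', ',', ';', ':' },
                   StringSplitOptions.RemoveEmptyEntries);
        var grams = new HashSet<string>();
        for (int i = 0; i + 2 < words.Length; i++)
            grams.Add(words[i] + " " + words[i + 1] + " " + words[i + 2]);
        return grams;
    }

    // Yields student pairs whose answers share at least `threshold` of their trigrams.
    public static IEnumerable<(string A, string B, double Overlap)> SuspiciousPairs(
        IDictionary<string, string> answersByStudent, double threshold = 0.6)
    {
        var grams = answersByStudent.ToDictionary(kv => kv.Key, kv => Trigrams(kv.Value));
        var students = grams.Keys.ToList();

        for (int i = 0; i < students.Count; i++)
        {
            for (int j = i + 1; j < students.Count; j++)
            {
                var a = grams[students[i]];
                var b = grams[students[j]];
                if (a.Count == 0 || b.Count == 0) continue;
                double overlap = (double)a.Intersect(b).Count() / Math.Min(a.Count, b.Count);
                if (overlap >= threshold)
                    yield return (students[i], students[j], overlap);
            }
        }
    }
}
```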
Sadly, the real trouble is to actually enforce punishment against cheaters. At the colleges where I have taught, if a student objects to your punishment (such as flunking them on the test in question), the administration will usually give the student something back, such as a positive grade change. I guess this is because the student('s parents) have paid the university a lot of money, but it is still very frustrating as a teacher.
The full-screen suggestions are quite limited in their effectiveness, as the user can always use a second computer or, with a multi-monitor setup, a second screen to perform their lookups. In the end it is probably better to just assume the students are going to cheat and then not count online tests for anything important.
If the tests are helpful for the students they will then do better on the final / mid term exams that are proctored in a controlled setting. Otherwise, why have them in the first place...
Make the questions and answers jpeg images so that you cannot copy and paste blocks of text into a search engine or IDE (if it is a coding test). This combined with a tight time limit to answer each question, say three minutes, makes it much harder to cheat.
I second what Guy said. We also created a Flex based examination system which was hosted in a custom browser built in .NET. The custom browser launched fullscreen, all toolbars were hidden and shortcuts were disabled.
Here is tutorial on how to create a custom browser with C# and VB.NET.
This will solve your problem. http://www.neuber.com/usermonitor/index.html
This will allow you to view the student's browser history during and after the test as well as look in on their screen during the test. Any urls visited during test time will be logged, so you can show them the log when you put a big F on their report card. :)
No one can stop people from cheating, but everyone can receive different questions altogether.
I suggest you buy an available online script from the market as a starting point. This will save you time, cost, and testing effort.
Below is one of the fine scripts that I worked with, and it worked like a charm. Using it as a base, I developed an online testing portal for over 1,000 users using computer-adaptive testing.
http://codecanyon.net/item/online-skills-assessment/9379895
It is a good starting point for people looking to develop an online exam system.
I customized the script with the help of their support.
