Is there anything not good about using POST instead of GET? - http

I know the difference between POST and GET, but if I used POST instead of GET, would there be any downside besides not conforming to the W3C/HTTP standards?
Any inefficiency, insecurity, or anything else?

See the answer from deceze:
POST requests can't be bookmarked.
In all the interviews I've done, all the teaching I've done, this is the best place to start. There's a lot more, but start with this.
Ignore anything anyone says about security. A good hacker can change POST to GET easily.
If you get this far, know that POST changes data (adds a membership, or charges a credit card), whereas GET only fetches data (searches for red shirts). The makers of browsers make their browsers behave differently for the results of POST vs GET. The results of POST have side effects that you may not want to repeat (such as adding another membership or double charging a credit card).
If you understand THIS, then read about the POST-Redirect-GET pattern, and understand it well. (Then know that GET has a URL length limit, and that you may need to resort to POST in this case.)
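To make the Post-Redirect-Get pattern concrete, here is a minimal sketch assuming Node.js with Express; the /membership routes and the in-memory members array are invented for illustration, not taken from the answer above.

```javascript
// Minimal Post-Redirect-Get sketch (assumes Express is installed: npm install express)
const express = require('express');
const app = express();
app.use(express.urlencoded({ extended: false }));

const members = []; // stand-in for a real data store

// POST changes data: it adds a membership...
app.post('/membership', (req, res) => {
  members.push({ name: req.body.name });
  // ...then redirects, so reloading the result page re-issues a harmless GET
  // instead of re-submitting the POST (no double signup, no double charge).
  res.redirect(303, '/membership/confirmation');
});

// GET only fetches data and is safe to repeat, bookmark, or reload.
app.get('/membership/confirmation', (req, res) => {
  res.send('Thanks for joining! Current member count: ' + members.length);
});

app.listen(3000);
```

The 303 redirect is the point of the pattern: hitting refresh on the confirmation page only repeats the harmless GET, never the membership-creating POST.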

Never use POST requests for normal view-only pages. POST requests can't be bookmarked, sent in an email, or otherwise reused. They break proper navigation with the browser's back/forward buttons. Only ever use them for sending data to the server in one unique operation, and (usually) have the server answer with a redirect.
Other than that, they're not more or less efficient or secure than GET requests, they're just for a different purpose.

Related

Hampering website parsing by adding useless data inside actual data

I want to prevent or hamper the parsing of the classifieds website that I'm improving.
The website uses an API with JSON responses. As a solution, I want to mix useless data in with the real data, since scrapers will most likely parse by ID, and give no hint about it in either the JSON response body or the headers, so they won't be able to distinguish it without close inspection.
To prevent users from seeing it, I won't serve that "useless data" to my users unless they request it explicitly by ID. From an SEO perspective, I know that Google won't index pages containing the useless data if there are no internal or external links to them.
How reliable would that technique be? And what problems/disadvantages/drawbacks do you think could occur in terms of user experience or SEO? Any ideas or suggestions will be very much appreciated.
P.S. I'm also rate-limiting large numbers of requests made in a short time, but it doesn't help, which is why I'm considering this technique.
I don't think banning scrapers would work any better, because they can change IPs and so on.
Maybe I could do better by requiring a login to access more than, say, 50 item details (would that also stop Selenium?). Requiring registration would make scraping harder, and even if scrapers do register, I can identify those accounts and slow down their responses.
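If it helps to picture the asker's idea, here is a hedged sketch of interleaving decoy records into a JSON listing; the /items routes, the realItems array, and the decoy generator are all made up for the example, and the legitimate front end would simply never request the decoy IDs.

```javascript
// Hypothetical sketch of padding a JSON listing with decoy records.
const express = require('express');
const app = express();

const realItems = [
  { id: 101, title: 'Bike' },
  { id: 102, title: 'Sofa' },
];

// Fabricate records that look real but don't correspond to anything.
function makeDecoy() {
  return { id: 900000 + Math.floor(Math.random() * 99999), title: 'Vintage lamp' };
}

app.get('/items', (req, res) => {
  const padded = [];
  for (const item of realItems) {
    padded.push(item);
    padded.push(makeDecoy()); // decoys interleaved with real data
  }
  res.json(padded); // nothing in the body or headers marks which entries are decoys
});

// The legitimate front end only requests the IDs it already knows about,
// so users never see the decoys; a scraper parsing the listing ingests them all.
app.get('/items/:id', (req, res) => {
  const item = realItems.find(i => i.id === Number(req.params.id));
  if (!item) return res.status(404).end();
  res.json(item);
});

app.listen(3000);
```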

Is it true that POST can be used instead of GET in all scenarios?

I've read lots of articles about the differences between GET and POST. Lots of them are available here at StackOverflow.
A summary of the important differences is:
POST can send its information via the request body, while GET should not (although in practice it can be done).
Some browsers cache the GET results and rely on the idempotent behavior of GET requests.
Using GET is much easier than using POST for most developers.
To summarize: using GET in situations that call for POST is bad and dangerous.
But is it true that, convenience aside, POST can be used as a replacement for GET requests, since it seems to cover everything GET requires?
To be clear (I'm not crazy!), I'm not actually going to use POST instead of GET; this question is just to check whether I understand the difference between GET and POST correctly.
No, POST is not a replacement of GET requests. There are two important things that a POST request cannot do that a GET request can.
You cannot generate a POST request simply by typing a URL in the address bar of the browser. This always generates a GET request.
You cannot generate a POST request using an ordinary link in HTML. This has far-reaching consequences. You cannot find a page that is only accessible via a POST request with any search engine, and you cannot link to it except through an HTML form or JavaScript.
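As an illustration of that second point, a plain link can only ever issue a GET; to produce a POST the page has to use a form or a script. A minimal sketch with fetch follows, where the /search endpoint is hypothetical.

```javascript
// A plain link like <a href="/search?q=shoes"> always produces a GET.
// To produce a POST from a page, it has to build the request itself,
// for example with fetch (or an HTML <form method="post">):
fetch('/search', {              // hypothetical endpoint, for illustration only
  method: 'POST',
  headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
  body: new URLSearchParams({ q: 'shoes' }),
}).then(response => response.text())
  .then(html => console.log(html));
```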
It's good practice to classify your transactions. These methods are especially important when you are developing an API in a service-oriented architecture, or even single-page applications.
GET - used to retrieve data. (It also has a URL length limitation; parameters are exposed in the URL and URL-encoded.)
POST - used for saving/adding data (the payload travels in the request body rather than in the URL)
EX:
GET /items - means you are getting the list of items.
POST /items - means you are saving/adding item(s)
Later on you might need to learn PUT and DELETE too.
But for now, always use POST in your form or AJAX request when saving/adding data, and GET when retrieving data.
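A hedged sketch of what those two calls could look like from the browser using fetch; the /items endpoint and the item fields are invented for the example.

```javascript
// GET /items - retrieve the list of items (safe, cacheable, bookmarkable)
fetch('/items')
  .then(response => response.json())
  .then(items => console.log(items));

// POST /items - save/add an item (the payload travels in the request body)
fetch('/items', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ name: 'Red shirt', price: 19.99 }),
})
  .then(response => response.json())
  .then(created => console.log('created item', created));
```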

How can I tell the difference between a post from a browser, and someone trying to post programmatically

Is there a way to determine whether a request coming to a handler (let's assume the handler responds to GET and POST) is being performed by a real browser versus a programmatic client?
I already know that it is easy to spoof things like the User-Agent and the Referer, but are there other headers that are more difficult to spoof? Maybe headers that are not commonly available in classes like .NET's HttpWebRequest?
The other path that I looked at is maybe using the Encrypted View State to send a value to the browser that gets validated on the server side, though couldn't that value simply be scraped from the previous response and added as a post parameter to the next request?
Any help would be much appreciated,
Cheers,
There is no easy way to differentiate, because in the end a programmatic POST looks the same to the server as a POST made by a user from a browser.
As mentioned, CAPTCHAs can be used to control posting, but they are not perfect (it is very hard, but not impossible, for a computer to solve them). They can also annoy users.
Another route is to only allow authenticated users to post, but this can still be done programmatically.
If you want to get a good feel for how people are going to try to abuse your site, then you may want to look at http://seleniumhq.org/
This is very similar to the famous Halting Problem in computer science. See some more on the proof, and Alan Turing here: http://webcache.googleusercontent.com/search?q=cache:HZ7CMq6XAGwJ:www-inst.eecs.berkeley.edu/~cs70/fa06/lectures/computability/lec30.ps+alan+turing+infinite+loop+compiler&cd=1&hl=en&ct=clnk&gl=us
The most common way is using CAPTCHAs. Of course CAPTCHAs have their own issues (users don't really care for them), but they do make it much more difficult to programmatically post data. They don't really help with GETs, though you can force clients to solve a CAPTCHA before delivering content.
There are many ways to do this, like dynamically generated XHR requests that can only be made after human interaction.
Here's a great article on NP-Hard problems. I can see a huge possibility here:
http://www.i-programmer.info/news/112-theory/3896-classic-nintendo-games-are-np-hard.html
One way: you could use some tricky JS to handle tokens on click. Your server issues token IDs to elements on the page during the back-end render phase and logs them in a database or data file. Then, when users click around and submit, you compare the IDs sent via the onclick() function. There are plenty of ways around this, but you could apply some heuristics to determine whether posts are too fast to come from a human; that is, even if someone scripted the hijacking of the token IDs and auto-submitted, you could check whether the time between click events looks automated. Signed up for a Twitter account lately? They use passive human detection that, while not 100% foolproof, is slower and more difficult to break. Many if not all of the spam accounts there had to be opened by a human.
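Something along the lines of that token idea, sketched with Node.js/Express and an in-memory store; every name here (the /form and /submit routes, the tokens map, the two-second threshold) is invented for illustration, and a real implementation would persist the tokens and tune the heuristic.

```javascript
// Hypothetical sketch of per-render tokens plus a click-timing heuristic.
const crypto = require('crypto');
const express = require('express');
const app = express();
app.use(express.urlencoded({ extended: false }));

const tokens = new Map(); // tokenId -> timestamp when the page was rendered

// Render phase: issue a token and log when it was handed out.
app.get('/form', (req, res) => {
  const tokenId = crypto.randomBytes(16).toString('hex');
  tokens.set(tokenId, Date.now());
  res.send(`
    <form method="post" action="/submit">
      <input type="hidden" name="token" value="${tokenId}">
      <input name="comment"><button>Post</button>
    </form>`);
});

// Submit phase: reject unknown tokens and "too fast to be human" submissions.
app.post('/submit', (req, res) => {
  const issuedAt = tokens.get(req.body.token);
  tokens.delete(req.body.token); // one-time use
  if (!issuedAt) return res.status(403).send('Unknown or reused token');
  if (Date.now() - issuedAt < 2000) {
    return res.status(429).send('Submitted suspiciously fast'); // crude heuristic
  }
  res.send('Accepted');
});

app.listen(3000);
```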
Another Way: http://areyouahuman.com/
As long as you are using encrypted methods, verifying humanity without a clunky CAPTCHA is possible. Don't ignore your headers either; these are complementary approaches.
The key is to have enough complexity that the number of ways of solving the total set of problems is extraordinary, along the lines of an NP-complete problem. http://en.wikipedia.org/wiki/NP-complete
When the day comes when AI can solve multiple complex Human problems on their own, we will have other things to worry about than request tampering.
http://louisville.academia.edu/RomanYampolskiy/Papers/1467394/AI-Complete_AI-Hard_or_AI-Easy_Classification_of_Problems_in_Artificial
Another company doing interesting research is http://www.vouchsafe.com/play-games; they use games designed so that the reverse Turing test (RTT) effectively trains itself to become solvable only by humans!

What is the difference between GET and POST in the context of creating an AJAX request?

I have an AJAX request that sends a GET: 'getPendingList'. This request should return a JSON string containing a list of pending requests that need to be approved. I'm a little confused about whether I should be using a GET or a POST here.
From this website:
GET requests can be cached
GET requests can remain in the browser history
GET requests can be bookmarked
GET requests can be distributed & shared
GET requests can be hacked (ask Jakob!)
So I'm thinking: I don't want the results of this GET to be cached because the pending list could change. On the other hand, using POST doesn't seem to make much sense either.
How should I think about GET and POST? I've been told that GET is the same as a 'read'; it doesn't (or shouldn't) change anything on the server side. This makes sense. What doesn't make sense is the caching part; it wouldn't work for me if someone else cached my GET request because I'm expecting the data to change.
Yahoo's best practices might be worth reading over. They recommend using GET primarily for retrieving information and using POST for updating information. In a separate item, they also recommend that you make AJAX requests cacheable where it makes sense. Check it out, it's a good read.
In short, GET requests should be idempotent. POST requests are not.
If you are altering state, use POST - otherwise use GET.
And don't forget, when talking about caching with GET/POST, that is browser-caching.
Nothing stopping you from caching the data server-side.
Also, in general - JSON calls should be POST (here's why)
So, after some IRC'ing, it looks like the best way to do this is to use GET (in this particular instance), but to prevent caching. There are two ways to do this:
1) Append a random string to your GET request.
This seems like a hacky way to do this but it sounds like it might be the only solution for IE: Prevent browser caching of jQuery AJAX call result.
2) In your response from the server, set the headers to no-cache.
It's not clear what the definitive behavior is on this. Some folks (see the previous link) claim that IE doesn't respect the no-cache directives. Other folks seem to think that this works: Internet Explorer 7 Ajax links only load once.
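On the client, jQuery's cache: false option (or manually appending something like '?_=' + Date.now() to the URL) covers option 1. For option 2, here is a hedged Express sketch of sending no-cache headers; the /getPendingList route and its data are invented for the example.

```javascript
// Hedged Express sketch: serve the pending list over GET but forbid caching.
const express = require('express');
const app = express();

// Stand-in for whatever actually produces the pending-approval list.
const loadPendingList = () => [{ id: 1, item: 'expense report', status: 'pending' }];

app.get('/getPendingList', (req, res) => {
  // Option 2: instruct browsers (including older IE) not to cache this response.
  res.set('Cache-Control', 'no-cache, no-store, must-revalidate');
  res.set('Pragma', 'no-cache');
  res.set('Expires', '0');
  res.json(loadPendingList());
});

app.listen(3000);
```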

So why should we use POST instead of GET for posting data? [duplicate]

Possible Duplicates:
How should I choose between GET and POST methods in HTML forms?
When do you use POST and when do you use GET?
Obviously, you should. But apart from doing so to fulfil the HTTP protocol, are there any reasons to do so? Less overhead? Some kind of security thing?
Because GET must not alter the state of the server, by definition.
See RFC 2616, section 9.1.1, Safe Methods:
9.1.1 Safe Methods
Implementors should be aware that the software represents the user in their interactions over the Internet, and should be careful to allow the user to be aware of any actions they might take which may have an unexpected significance to themselves or others.
In particular, the convention has been established that the GET and HEAD methods SHOULD NOT have the significance of taking an action other than retrieval. These methods ought to be considered "safe". This allows user agents to represent other methods, such as POST, PUT and DELETE, in a special way, so that the user is made aware of the fact that a possibly unsafe action is being requested.
If you use GET to alter the state of the server then a search engine bot or some link prefetching extension in a web browser can wreak havoc on your site and (for example) delete all user data just by following links to your site.
There is a nice paper by the W3C about this: URIs, Addressability, and the use of HTTP GET and POST.
1.3 Quick Checklist for Choosing HTTP GET or POST
Use GET if:
The interaction is more like a question (i.e., it is a safe operation such as a query, read operation, or lookup).
Use POST if:
The interaction is more like an order, or
The interaction changes the state of the resource in a way that the user would perceive (e.g., a subscription to a service), or
The user is to be held accountable for the results of the interaction.
Because, if you use GET to alter state, Google can delete your stuff.
If you accept GETs to perform write operations, then a malicious hacker could inject links somewhere to perform an unauthorized operation. Your user clicks on a link, and something is deleted from a database. Or maybe some amount of money is transferred out of the user's account while they're still logged in to their online banking.
http://superbank.com/TransferMoney?amount=1000&recipient=2342524
Send a malicious email with an embedded image referencing this link, and as soon as the document is opened, something funny has happened behind the scenes.
GET is limited by the length of URL the browser/server can handle. This used to be as short as 256 characters.
There is at least one situation where you want a GET to change data on the server: when a GET returns data and you need to record which data was given to a user and when it was given.
If you use complex data types, they must go in a POST; they cannot go in a GET. For example, testing a WCF web service in a browser can only be done when the contract uses simple data types.
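To illustrate that point about complex data, here is a hedged fetch sketch (the /orders endpoint and its fields are invented): a nested object drops naturally into a POST body as JSON, whereas a GET would force everything to be flattened into query-string parameters.

```javascript
// Complex, nested data fits naturally in a POST body...
fetch('/orders', {                       // hypothetical endpoint
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    customer: { id: 42, name: 'Ada' },
    lines: [
      { sku: 'SHIRT-RED', qty: 2 },
      { sku: 'MUG-BLUE', qty: 1 },
    ],
  }),
});

// ...whereas with GET everything must be flattened into the URL:
fetch('/orders?customerId=42&sku1=SHIRT-RED&qty1=2&sku2=MUG-BLUE&qty2=1');
```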
Using GET and POST where it is expected helps to keep your program understandable.
When you use GET, you can see the information being sent in the address bar of the web browser. This is not the case when you use the POST method.
This article was somewhere on http://www.w3schools.com/ Once I've found the exact page it was on, I'll repost. :-)

Resources