Philosophically, I am accustomed to always using GET for HTTP requests that do not alter state, and POST for requests that do. However, lately I have run into some difficulties with this that have caused me to make exceptions. I was curious if there is any non-philosophical downside to using the wrong HTTP verbs, such as security concerns like cross-site attacks.
Exception #1
I wanted to trigger a download of a requested list of files dynamically packaged into an archive. However, the list of files could grow so large that, when encoded as querystring parameters in the URL, they exceeded the URL length limit in Internet Explorer. To work around this, I ended up triggering the download with a POST.
Exception #2
There is a button that is always displayed, regardless of whether you are logged in or not, but it can only alter state if you are logged in. If you press it when you are not logged in, you are taken to the login page with a querystring parameter indicating the place you were intending to go next. When you log in, it redirects you there to complete your action. However, the redirect can only generate a GET, not a POST. So we have allowed GETs to alter state in this situation.
Are there any exploits or downsides to these exceptions? Do these allow any cross-site request forgery scenarios that cannot be prevented by checking the Referer header?
Answer to question in subject: Yes
Exception #1: A GET request can technically have a body, so you don't have to put everything in the URL (though be aware that many servers, proxies, and client libraries ignore or mishandle bodies on GET requests).
Exception #2: Alter the form to use GET when not logged in and POST if logged in.
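As a rough illustration of that second point, the form rendering could branch on login state; this is only a sketch, and the "user" session attribute and /do-thing action are my own placeholders, not anything from the question:

```java
import java.io.IOException;
import javax.servlet.http.*;

public class ActionFormServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws IOException {
        // The "user" session attribute is a placeholder for your own auth check.
        HttpSession session = request.getSession(false);
        boolean loggedIn = session != null && session.getAttribute("user") != null;

        // Logged-in users get a state-changing POST form; anonymous users get a
        // harmless GET form that just leads them to the login page.
        String method = loggedIn ? "post" : "get";
        response.setContentType("text/html");
        response.getWriter().printf(
                "<form action=\"/do-thing\" method=\"%s\"><button>Go</button></form>%n",
                method);
    }
}
```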
Relying on the Referer header is not recommended: it can be suppressed or spoofed in all sorts of ways, and some corporate software strips it for privacy reasons.
I highly recommend a token-based approach to CSRF mitigation.
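For example, here is a minimal sketch of the synchronizer-token pattern in Java; the attribute and parameter names (csrfToken) are my own placeholders, not anything standard:

```java
import java.security.SecureRandom;
import java.util.Base64;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpSession;

// Synchronizer-token sketch: store a random token in the session, embed it
// in every form, and reject POSTs whose token doesn't match.
public final class CsrfTokens {
    private static final SecureRandom RANDOM = new SecureRandom();

    /** Returns the session's token, creating one on first use. */
    public static String tokenFor(HttpSession session) {
        String token = (String) session.getAttribute("csrfToken");
        if (token == null) {
            byte[] bytes = new byte[32];
            RANDOM.nextBytes(bytes);
            token = Base64.getUrlEncoder().withoutPadding().encodeToString(bytes);
            session.setAttribute("csrfToken", token);
        }
        return token;
    }

    /** Call this at the top of every state-changing handler. */
    public static boolean isValid(HttpServletRequest request) {
        HttpSession session = request.getSession(false);
        if (session == null) return false;
        String expected = (String) session.getAttribute("csrfToken");
        String actual = request.getParameter("csrfToken");
        // Production code should prefer a constant-time comparison here.
        return expected != null && expected.equals(actual);
    }
}
```

Since the attacker's page can't read the victim's session token, a forged cross-site request will fail the isValid check even if the cookies ride along.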
Related
I'm developing a scraping app to extract some information from a site. To get that information I have to be logged in to that site.
So I send an HTTP POST with the login data as FormData, log in successfully, and can then browse the private content of the site.
My question is: how can I tell whether the user is logged in? What is a simple way to do that using session cookies or something like that?
I'm currently checking the connection by sending an HTTP GET request to a URL that I know is available only to registered users.
So before I try to log in again, I call this isLoggedIn method to check the connection. But it's not perfect; it feels kind of tricky and doesn't seem like the best way to do it.
Currently I'm using Dio, a library for making HTTP requests in Dart, but I think this is a general HTTP question.
Just for the record...
I solved this by checking the difference between a logged-in and a not-logged-in response. In my specific case, when I send a GET request to the login page while logged in, the response comes back with a 'CUSTOMER_AUTH' cookie set to a random string; otherwise, this cookie is not present.
So I just check whether this cookie is present and whether it has a valid value.
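The original is in Dart with Dio, but the same cookie check in Java's built-in HttpClient would look roughly like this; the URL is a placeholder, and "valid value" here is reduced to a simple non-empty check:

```java
import java.net.CookieManager;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class LoginCheck {
    public static void main(String[] args) throws Exception {
        // The CookieManager collects any cookies the server sets.
        CookieManager cookies = new CookieManager();
        HttpClient client = HttpClient.newBuilder().cookieHandler(cookies).build();

        // GET the login page; a logged-in session receives a CUSTOMER_AUTH cookie.
        URI loginPage = URI.create("https://example.com/login"); // placeholder URL
        client.send(HttpRequest.newBuilder(loginPage).GET().build(),
                HttpResponse.BodyHandlers.discarding());

        // Logged in iff the CUSTOMER_AUTH cookie is present and non-empty.
        boolean loggedIn = cookies.getCookieStore().getCookies().stream()
                .filter(c -> c.getName().equals("CUSTOMER_AUTH"))
                .anyMatch(c -> c.getValue() != null && !c.getValue().isEmpty());
        System.out.println("logged in: " + loggedIn);
    }
}
```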
I have a website that has been experiencing errors because of null references due to poorly coded logic around the user agent. Basically, there has been a slew of incoming requests that contain no user agent, which leads to null reference exceptions in the user-agent tracking (it contained a call to Request.UserAgent.ToLower()). I am correcting this logic to avoid the error condition. Since I'm certain these requests are coming from specialized tools and not ordinary users, I'm also blocking empty user agents via URL rewrite rules.
I need to test both of these changes. However, I can't seem to find a user-agent spoofer that will let me generate a simple GET request with NO USER AGENT. All of the tools I have tried allow a custom agent string, but they won't let that string be left empty, and I can't find any option to tell them to send no user agent at all.
So my question is, what tools are available, for a Windows-based system, that I can use to emulate a browser request with NO USER AGENT so that I can verify that my changes are working properly?
I believe that value comes from the request headers. If so, just try Fiddler: go to the Composer tab. By default it adds a User-Agent header to the request, but when you delete it in the Composer, it disappears from the request entirely.
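If you'd rather script the test than click through Fiddler, a raw socket guarantees that no User-Agent header is sent, because you write every byte of the request yourself. A minimal Java sketch, where the host and port are placeholders for your test server:

```java
import java.io.InputStream;
import java.io.OutputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class NoUserAgentRequest {
    public static void main(String[] args) throws Exception {
        // A raw socket sends exactly the bytes we write: no User-Agent header
        // unless we add one ourselves. Host and port are placeholders.
        try (Socket socket = new Socket("localhost", 80)) {
            String request = "GET / HTTP/1.1\r\n"
                    + "Host: localhost\r\n"
                    + "Connection: close\r\n"
                    + "\r\n";
            OutputStream out = socket.getOutputStream();
            out.write(request.getBytes(StandardCharsets.US_ASCII));
            out.flush();

            // Dump the raw response so you can verify the server's behavior.
            try (InputStream in = socket.getInputStream()) {
                in.transferTo(System.out);
            }
        }
    }
}
```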
When I forward a request from within a POST method, a confirmation alert appears with the message "page cannot be refreshed without resending the information".
But this alert box doesn't appear when the forward is done from a GET method.
What is the reason?
Please help.
Because a POST, per the HTTP specification, is intended for requests that are non-idempotent: they modify state on the server (for example, by adding a new product to a category), and that state would be modified again if the request were resubmitted (it would create yet another new product in the category, for example).
A GET, on the other hand, is intended for requests that are idempotent. For example, a Google search is idempotent: searching twice for the same thing doesn't modify anything on the server, and resubmitting the same request doesn't have any unwanted effect.
The browser expects web applications to respect this convention, and thus warns the user about this unwanted side-effect before re-submitting a POST request.
The usual practice is to follow the post-redirect-get pattern, which lets the user refresh after a POST without the annoying popup and without unwanted side effects.
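For instance, a minimal post-redirect-get handler in a Java servlet might look like this; the /order-confirmation URL and the createOrder helper are placeholders:

```java
import java.io.IOException;
import javax.servlet.http.*;

public class OrderServlet extends HttpServlet {
    @Override
    protected void doPost(HttpServletRequest request, HttpServletResponse response)
            throws IOException {
        createOrder(request); // the non-idempotent work happens exactly once, here

        // Redirect so the browser's current page becomes the result of a GET;
        // refreshing it won't repeat the POST or trigger the resubmit alert.
        response.sendRedirect("/order-confirmation");
    }

    private void createOrder(HttpServletRequest request) {
        // placeholder for the actual state change
    }
}
```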
Because a GET request includes those parameters in the URL (e.g. the URL ends with ?param1=foo&param2=bar). GET requests usually don't involve sensitive data or actions that change the state of the server. From the URL, you know what you're sending.
With a POST, the parameters are "hidden", submitted in the background as part of your HTTP request, and you can't see them by looking at the URL. Those params cause the server to change state, and problems could arise if the same data were transmitted twice (e.g. you'd accidentally purchase something twice from a web store). The browser lets you know in case you don't realize you'd be resending it.
Is there a way to know if the request has been redirected or forwarded in the doGet method of a Servlet?
In my application, when a user (whose session has timed out) clicks on a file download link, they're shown the login page, which is good. When they login, they are immediately sent the file they requested, without updating the page they see, which is bad. Basically, they get stuck on the login screen (a refresh is required).
What I want to do is interrupt this and simply redirect to the page with the link, when a file is requested as a result of a redirect.
Perhaps there are better ways to solve this?
The redirect happens client-side. The browser is instructed by the previous request to send a new request, so to the server it does not make a difference. The Referer header might contain some useful information, but it's not certain.
When redirecting you can append a parameter, like ?targetPage=downloadpage, and then check whether the parameter exists. You may have to put it in a hidden field on the login page if you want it carried through multiple pages.
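A rough sketch of that check in the download servlet's doGet; the targetPage value and /downloadpage URL are just the examples above, and streamFile is a placeholder for the real file-serving logic:

```java
import java.io.IOException;
import javax.servlet.http.*;

public class DownloadServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws IOException {
        // If this GET was reached straight from the login redirect, send the
        // user back to the page that holds the link instead of streaming the file.
        if ("downloadpage".equals(request.getParameter("targetPage"))) {
            response.sendRedirect("/downloadpage");
            return;
        }
        streamFile(response); // normal download path
    }

    private void streamFile(HttpServletResponse response) {
        // placeholder for the actual file-serving logic
    }
}
```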
If you're using container managed authentication, then I don't believe you can detect this since the server will only involve your resource once authentication has been completed successfully.
If you're managing authentication differently, please explain.
It seems common in the Rails community, at least, to respond to successful POST, PUT or DELETE requests by redirecting instead of returning success. For instance, if I PUT a legal change to my user profile, the idiomatic response would be a 302 Redirect to the profile page.
Isn't this wrong? Shouldn't we be returning 200 OK from the request? Or a 201 Created, in the case of a POST request? Either of those, per the HTTP/1.1 status definitions, is allowed (or required) to include a response body anyway.
I guess I'm wondering, before I go and "fix" my application, whether there is a darn good reason why the community has gone the way of redirects instead of successful responses.
I'll assume, your use of the PUT verb notwithstanding, that you're talking about a web app that will be accessed primarily through the browser. In that case, the usual reason for following up a POST with a redirect is the post-redirect-get pattern, which avoids duplicate requests caused by a user refreshing or using the back and forward controls of their browser. It seems that in many instances this pattern is overloaded by redirecting not to a success page, but to the next most likely place the user would visit. I don't think either way you mention is necessarily wrong, but doing the redirect may be more user-friendly at the expense of not strictly adhering to the semantics of HTTP.
It's called the POST-Redirect-GET (PRG) pattern. This pattern prevents clients from accidentally re-executing non-idempotent requests when, for example, navigating back and forth in the browser's history.
It's good general web-development practice that doesn't only apply to RoR. I'd just keep it as is.
In a perfect world, yes, probably. However, HTTP clients and servers are a mess when it comes to standardization and don't always agree on proper protocol. Redirecting after a POST helps avoid things like duplicate form submissions.