Could I send this response
HTTP/1.1 404 Nobody home
instead of
HTTP/1.1 404 Not found
Or is that an RFC violation?
Please have a look here, especially at this quote:
The reason phrases listed here are only recommendations -- they MAY be
replaced by local equivalents without affecting the protocol.
If they can be replaced with local variants, they can be replaced with other messages as well; however, this is not recommended. A better idea might be to provide a custom code (within the appropriate class, of course, e.g. 4XX) along with a good reason phrase.
Related
In my Zend Framework MVC application I am using only two request methods: GET and POST. I am wondering whether I should put a check in my base controller to throw an exception if the other request types are received (e.g. PUT or DELETE).
As far as I can see there are two areas for consideration:
Would it improve security at all? Am I giving potential hackers a head start if I allow the framework to respond to PUT, DELETE, et al?
Would it interfere with correct operation of the site? For example, do search engine bots rely on requests other than GET and POST?
Your ideas are much appreciated!
The correct response code would be 405 Method Not Allowed, including an Allow: GET, POST header.
10.4.6 405 Method Not Allowed
The method specified in the Request-Line is not allowed for the resource identified by the Request-URI. The response MUST include an Allow header containing a list of valid methods for the requested resource.
http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html
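To make that concrete, here is a minimal sketch using Python's http.server; the handler class and the GET/POST bodies are illustrative, not part of the question:

```python
# Sketch: reject methods other than GET/POST with 405 plus an Allow header.
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"ok"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    do_POST = do_GET  # accept POST the same way in this sketch

    def _method_not_allowed(self):
        # RFC 2616 10.4.6: a 405 response MUST include an Allow header
        # listing the methods that are valid for the resource.
        self.send_response(405)
        self.send_header("Allow", "GET, POST")
        self.send_header("Content-Length", "0")
        self.end_headers()

    do_PUT = do_DELETE = do_PATCH = _method_not_allowed
```

Start it with `HTTPServer(("127.0.0.1", 8080), Handler).serve_forever()`; a PUT or DELETE then comes back as 405 with Allow: GET, POST.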
People violate the API of your app/framework/site etc. either by mistake or on purpose, to probe your site for weaknesses. (Whether your site is internal-only or on the public net only affects how often this happens.)
If your site supports developers, that would be a possible reason to reply with a 405 Method Not Allowed. Perhaps do so only if the session (assuming sessions) is marked as being in developer mode.
If you don't expect valid developers, then I recommend silently swallowing any bad input to make it harder for the bad guys.
Another reason not to give error messages in the normal case: the lack of an error message in a particular case can be read as a sign that the bad data made it further into your stack than other data, outlining a possible attack route.
And finally, error returns (type, delay before responding, and more) can be used to characterize a particular version of an app/framework etc. This can then be used to quickly locate other vulnerable installations once an attack vector is found.
Yes, the above is pessimistic, and I fondly remember the 80's when everybody responded to ping, echo and other diagnostic requests. But the bad guys are here and it is our responsibility to harden our systems. See this TED video for more.
I'm implementing a 'testing mode' on my website which will forbid access to certain pages while they are under construction, making them accessible only to administrators for private testing. I was planning on using the 401 status code, since the page does exist but visitors are not allowed to use it; they may or may not be authenticated, and only certain users (basically me) would still be allowed to access the page.
The thing I'm wondering is whether the text after the HTTP/1.1 401 part matters. Does it have to be Unauthorized, or can it basically be whatever you want to put after it, so long as the 401 is still appropriate for the error? I wanted to send a message such as Temporarily Unavailable to indicate that the page is normally available to all visitors, but is undergoing reconstruction and is temporarily unavailable. Should I do this or not?
You may change them.
The status messages (technically called "reason phrases") are only recommendations and "MAY be replaced by local equivalents without affecting the protocol."
See http://www.w3.org/Protocols/rfc2616/rfc2616-sec6.html#sec6.1.1
However, you SHOULD :-) still use the codes properly and give meaningful messages. Only use a 401 if your condition is what the RFC says a 401 should be.
Yes, the reason phrase can be changed. It doesn't affect the meaning of the message.
But if you need to say "temporarily unavailable", you need to use a 5xx (server error) code. 503 Service Unavailable seems right here (see RFC 2616, Section 10.5.4).
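A sketch of that combination, assuming Python's http.server: the numeric 503 carries the semantics, and the optional second argument to send_response replaces the default reason phrase. A Retry-After header is a natural addition for a temporarily unavailable page.

```python
# Sketch: 503 with a custom reason phrase and a Retry-After header.
from http.server import BaseHTTPRequestHandler, HTTPServer

class MaintenanceHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # send_response(code, message) overrides the reason phrase;
        # clients key off the numeric code, not the text.
        self.send_response(503, "Temporarily Unavailable")
        self.send_header("Retry-After", "3600")  # seconds until retry
        self.send_header("Content-Length", "0")
        self.end_headers()
```

Run it with `HTTPServer(("127.0.0.1", 8080), MaintenanceHandler).serve_forever()`; the status line becomes HTTP/1.0 503 Temporarily Unavailable.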
You MAY change the text (very few HTTP clients pay any attention to it), but it is better to use the most applicable response code. After all, indicating the reason for failure is how the various response codes were intended to be used.
Perhaps this fits:
404 Not Found: The requested resource could not be found but may be available again in the future. Subsequent requests by the client are permissible.
Is it true that some old proxies/caches will not honor some custom HTTP headers? If so, can you prove it with sections from the HTTP spec or some other information online?
I'm designing a REST API interface. For versioning I'm debating whether to use version as a part of the URL like (/path1/path2/v1 OR /path1/path2?ver=1) OR to use a custom Accepts X-Version header.
I was just reading in O'Reilly's Even Faster Web Sites about how mainly internet security software, but really anything that has to check the contents of a page, might filter the Accept-Encoding header in order to reduce the CPU time spent decompressing and reading the file. The book cites that about 15% of users have this issue.
However, I see no reason why other, custom headers would be filtered. On the other hand, there isn't really any reason to send the version as a header rather than in the GET query string, is there? It's not really part of the HTTP protocol; it's just your API.
Edit: Also, see the actual section of the book I mention.
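As a sketch of the two options, here is how a server might resolve the version from either the query string or a custom header. The X-Version header name and the fallback default are assumptions for illustration, not part of any standard:

```python
# Sketch: prefer an explicit ?ver= query parameter, fall back to a
# custom X-Version header (a hypothetical name), then to a default.
from urllib.parse import urlparse, parse_qs

def resolve_version(url, headers, default="1"):
    query = parse_qs(urlparse(url).query)
    if "ver" in query:
        return query["ver"][0]             # e.g. /path1/path2?ver=1
    return headers.get("X-Version", default)

print(resolve_version("/path1/path2?ver=2", {}))            # 2
print(resolve_version("/path1/path2", {"X-Version": "3"}))  # 3
```

The query-string form has the practical advantage that it survives any intermediary that strips unknown headers, and it is visible in logs and bookmarkable.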
I want to change first line of the HTTP header of my request, modifying the method and/or URL.
The (excellent) Tamperdata firefox plugin allows a developer to modify the headers of a request, but not the URL itself. This latter part is what I want to be able to do.
So something like...
GET http://foo.com/?foo=foo HTTP/1.1
... could become ...
GET http://bar.com/?bar=bar HTTP/1.1
For context, I need to tamper with (make correct) an erroneous request from Flash, to see if an error can be corrected by fixing the url.
Any ideas? Sounds like something that may need to be done on a proxy level. In which case, suggestions?
Check out Charles Proxy (multiplatform) and/or Fiddler2 (Windows only) for more client-side solutions - both of these run as a proxy and can modify requests before they get sent out to the server.
If you have access to the webserver and it's running Apache, you can set up some rewrite rules that will modify the URL before it gets processed by the main HTTP engine.
For those coming to this page from a search engine, I would also recommend the Burp Proxy suite: http://www.portswigger.net/burp/proxy.html
Although more specifically targeted towards security testing, it's still an invaluable tool.
If you're trying to intercept the HTTP packets and modify them on the way out, then Tamperdata may be the route you want to take.
However, if you want minute control over these things, you'd be much better off simulating the entire browser session using a utility such as curl.
Curl: http://curl.haxx.se/
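If scripting the corrected request is acceptable, it can also be rebuilt from scratch with Python's standard library. The URL below is the bar.com placeholder from the example above:

```python
# Sketch: rather than tampering with the request in flight, rebuild the
# corrected request from scratch and send it yourself.
import urllib.request

req = urllib.request.Request(
    "http://bar.com/?bar=bar",  # the corrected URL from the example
    method="GET",
)
# Uncomment to actually send it:
# with urllib.request.urlopen(req) as resp:
#     print(resp.status, resp.read(200))
```

This gives the same kind of full control over the request line and headers that curl offers, in script form.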
Suppose I have a page on my website to show media releases for the current month
http://www.mysite.com/mediareleases.aspx
And for reasons that are too mundane to go into*, this page MUST be given a query string with the current day of the month in order to produce this list:
http://www.mysite.com/mediareleases.aspx?prevDays=18
As such I need to redirect clients requesting http://www.mysite.com/mediareleases.aspx to http://www.mysite.com/mediareleases.aspx?prevDays=whateverDayOfTheMonthItIs
My question is, if I want google to index the page without the query parameter, should I use status code 302 or 307 to perform the redirect?
Both indicate that the page has "temporarily" moved - which is what I want because the page "moves" every day if you get my meaning.
[*] I'm using a feature of a closed-source .NET CMS so my hands are tied.
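For illustration only (the question concerns a closed-source .NET CMS, so this is a Python stand-in using http.server), the redirect being asked about boils down to something like:

```python
# Sketch: redirect a bare request to the same page with today's day of
# the month as the prevDays query parameter.
import datetime
from http.server import BaseHTTPRequestHandler, HTTPServer

class MediaReleasesHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if "?" not in self.path:
            # no query string: redirect to today's prevDays value
            day = datetime.date.today().day
            self.send_response(302)  # the answers below weigh 302 vs 307
            self.send_header("Location", f"{self.path}?prevDays={day}")
            self.send_header("Content-Length", "0")
            self.end_headers()
        else:
            self.send_response(200)
            self.send_header("Content-Length", "0")
            self.end_headers()
```

Start it with `HTTPServer(("127.0.0.1", 8080), MediaReleasesHandler).serve_forever()`.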
Google's documentation seems to indicate that both 302 and 307 are treated equivalently, and that "Googlebot will continue to crawl and index the original location."
But in the face of ambiguity, you might as well dig into the RFCs and try to do the Right Thing, with the naïve hope that the crawlers will do the same. In this case, RFC 2616 § 10.3 contains nearly identical definitions for each response code, with one exception:
302: Since the redirection might be altered on occasion, the client SHOULD continue to use the Request-URI for future requests.
307: Since the redirection MAY be altered on occasion, the client SHOULD continue to use the Request-URI for future requests.
Which does not strike me as a significant distinction. My reading is that 302 instructs clients that webmasters are untrustworthy, and 307 explicitly tells webmasters that clients will not trust them, so they may freely alter the redirect.
I think the more telling point is the note in 302's definition:
Note: RFC 1945 and RFC 2068 specify that the client is not allowed to change the method on the redirected request. However, most existing user agent implementations treat 302 as if it were a 303 response, performing a GET on the Location field-value regardless of the original request method. The status codes 303 and 307 have been added for servers that wish to make unambiguously clear which kind of reaction is expected of the client.
Which, to me, indicates that 302 and 307 are largely equivalent, but that HTTP/1.0 clients failed to implement 302 correctly the first time around.
Short answer: neither. In most cases the code you really want to use is 303.
For the long answer, first we need some background.
When receiving a redirect code, the client can (A) load the new location using the same request method, or (B) override it and use GET.
The HTTP 1.0 spec did not have 303 and 307, it only had 302, which mandated the (A) behavior. But in practice it was discovered that (A) led to a problem with submitted forms.
Say you have a contact form; the visitor fills it in and submits it, and the client gets a 302 to a page saying "thanks, we'll get back to you". The form was sent using POST, so the thanks page is also loaded using POST. Now suppose the visitor hits reload: the request is resent the same way it was made the first time, i.e. as a POST with the same payload in the body. End result: the form gets submitted twice (and once more for every reload). Even if the client asks the user for confirmation first, it's still annoying in most cases.
This problem became so prevalent that client implementers decided to override the spec and issue GET requests for the redirected location. Basically, it was an oversight in the HTTP 1.0 spec. What clients needed most was a 303 (behavior (B) above), but instead all they got was 302 (and behavior (A)).
If HTTP 1.0 had offered both 302 and 303, there would have been no problem. But it didn't, so the result was a 302 that nobody used correctly. So HTTP 1.1 added 303 (badly needed) but also decided to add 307, which is technically identical to 302 but is a sort of "explicit 302"; it says "yes, I know the issues surrounding 302, I know what I'm doing, give me behavior (A)".
Now, back to our question. You see now why in most cases you will want 303.
Cases where you want to preserve the request method are very rare. And if you do find yourself in such a case, the answer is simple: use 302. Either the client speaks HTTP 1.0, in which case it can't understand 307; or it speaks HTTP 1.1, which means it has no reason to preserve the rebellious behavior of old clients, i.e. it implements 302 correctly, so use it!
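The Post/Redirect/Get flow described above can be sketched with Python's http.server; the paths and class name here are illustrative:

```python
# Sketch of Post/Redirect/Get: the POST is answered with 303 See Other,
# so the follow-up (and any subsequent reload) is a plain GET and the
# form cannot be resubmitted by accident.
from http.server import BaseHTTPRequestHandler, HTTPServer

class ContactHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # read (and here, discard) the submitted form body
        length = int(self.headers.get("Content-Length", 0))
        self.rfile.read(length)
        self.send_response(303)
        self.send_header("Location", "/thanks")
        self.send_header("Content-Length", "0")
        self.end_headers()

    def do_GET(self):
        body = b"thanks, we'll get back to you"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
```

Run it with `HTTPServer(("127.0.0.1", 8080), ContactHandler).serve_forever()`; reloading /thanks just re-issues a harmless GET.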
5 years on... note that the behaviour of 307 has been updated by RFC-7231#6.4.7 in June 2014, and is now significantly different from a 302, in that the request method must not change:
The 307 (Temporary Redirect) status code indicates that the target
resource resides temporarily under a different URI and the user agent
MUST NOT change the request method if it performs an automatic
redirection to that URI.
Probably not an issue for the original question, but may be relevant to others who come across this question just looking for the difference.
I feel your pain. As for a solution, it's hard to say what search engines will do. It seems that each one has its own way of handling redirects. This link suggests that a 302 will index the contents of the redirected page but still use the main page link, but it's not clear what a 307 will do.
Another way you could consider proceeding is with a javascript redirect and a <noscript> tag explaining what's going on. That will also foul up non-javascript browsers, and you'd have to proceed with caution to avoid Google's sneaky-site detection routine, but I suspect that as long as your noscript contains a hyperlink that matches the new URL you'd be OK.
Either way I'd still pursue doing a purely server-side request if at all possible. Heck, if your expected traffic is light, you could treat your home page as a proxy in the case where there's no querystring. Have it use a background thread to request itself with the querystring and pipe out the results. :-)
Edit: just saw you're using .NET. Maybe consider this answer from SO: C# Can I modify Request.Form's variables?