I found out that some servers respond to HEAD requests with HTTP 405 (or another status code, including 404, which is confusing IMO, but that's not important now) even though GET requests for the same resource are answered with HTTP 200. HTTP 405 is defined as...
405 Method Not Allowed
The method specified in the Request-Line is not allowed for the
resource identified by the Request-URI. The response MUST include an
Allow header containing a list of valid methods for the requested
resource.
Alright, I looked at the Allow header and found out that I can use GET to get the resource (even though I only wanted to find out whether the resource exists). Problem solved.
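For illustration, here is a minimal sketch (Java 11+, java.net.http, with a made-up URL) of that fallback: try HEAD first and, if the server answers 405, repeat the check with GET and throw the body away.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ExistenceCheck {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        URI uri = URI.create("https://example.com/some/resource"); // hypothetical URL

        // First try HEAD: cheap, no response body to download.
        HttpRequest head = HttpRequest.newBuilder(uri)
                .method("HEAD", HttpRequest.BodyPublishers.noBody())
                .build();
        HttpResponse<Void> headResponse =
                client.send(head, HttpResponse.BodyHandlers.discarding());

        boolean exists;
        if (headResponse.statusCode() == 405) {
            // Server disallows HEAD; fall back to GET and discard the body.
            HttpRequest get = HttpRequest.newBuilder(uri).GET().build();
            exists = client.send(get, HttpResponse.BodyHandlers.discarding())
                           .statusCode() == 200;
        } else {
            exists = headResponse.statusCode() == 200;
        }
        System.out.println("Resource exists: " + exists);
    }
}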
However, my question is... Why would a server disallow the HEAD method? What is the advantage? Are there possible security reasons?
I think it is because they're getting a log full of "the requested resource can only be accessed via SSL" errors when their HTTPS-only site gets tons of HEAD requests via HTTP. I think they're conflating "method" with "protocol" and so thinking the 405 makes sense.
See this post for an example of someone asking about the issue and being told to give a 405.
Are the two URLs equivalent with respect to browser caching and website SEO?
1 - http://example.com/resource.html?a=a&b=b
2 - http://example.com/resource.html?b=b&a=a
If the resource for the first URL is cached and the browser needs to find the resource for the second URL, can it use the cached resource? I want to know about the caching because I can ensure all internal links use the same parameter order to improve cache performance.
Also, if my server treats these URLs as the same resource, which URL will be indexed by Google? With consideration to SEO, will this count as duplicate content? If so, I could use a 301 response to redirect to the correct URL. (This should also fix the caching problem.)
If the resource for the first URL is cached and the browser needs to find the resource for the second URL, can it use the cached resource?
That is not certain. It depends on each browser implementation. But if you always use the same parameter order, then it is not an issue in the first place.
Also, if my server treats these URLs as the same resource, which URL will be indexed by Google?
Both will be indexed by Google, but if you use canonical links or if you configure URL parameters in Google Webmaster Tools, they can be treated as one when displaying search results (i.e., Google will give preference to one URL and it will collect PageRank and other signals from the other).
With consideration to SEO, will this count as duplicate content? If so, I could use a 301 response to redirect to the correct URL.
Yes, it will be considered duplicate content, because the URLs are different. Using a redirect in this case is shooting flies with a cannon (IMHO), but it would work.
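As a rough sketch of that normalization idea, and only as an illustration (the question is not tied to any particular stack), a Java servlet filter could 301-redirect requests whose query parameters are out of order to one canonical form:

import java.io.IOException;
import java.util.Arrays;
import java.util.stream.Collectors;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Illustrative filter: 301-redirect any request whose query parameters are not
// in alphabetical order to a single canonical form. The sort is deliberately
// naive (it ignores percent-encoding corner cases).
public class CanonicalQueryFilter implements Filter {
    public void init(FilterConfig config) { }

    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest request = (HttpServletRequest) req;
        HttpServletResponse response = (HttpServletResponse) res;

        String query = request.getQueryString();
        if (query != null && !query.isEmpty()) {
            String canonical = Arrays.stream(query.split("&"))
                    .sorted()
                    .collect(Collectors.joining("&"));
            if (!canonical.equals(query)) {
                response.setStatus(HttpServletResponse.SC_MOVED_PERMANENTLY);
                response.setHeader("Location", request.getRequestURI() + "?" + canonical);
                return;
            }
        }
        chain.doFilter(req, res);
    }

    public void destroy() { }
}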
After receiving a POST request in a servlet, I have to redirect to another server and at the same time pass an XML file in the redirected request.
For example, I have to redirect from my servlet to "http://www.abc.com" and pass an XML file in the request.
I tried the following, but it didn't work.
response.sendRedirect - it creates only GET requests, so it cannot send the XML
HttpClient or URLConnection - it creates a new request instead of performing an actual redirection
Intermediate JSP - forwarded the request to an intermediate JSP and submitted from there, but it sends the XML as a request parameter and not in the InputStream
Please let me know how to achieve this.
A redirect (whether HTTP or HTML based) can only operate on a URL, not on a form submission (which has built-in support for uploading files via the "multipart/form-data" form encoding). You would therefore have to encode your file within the URL itself, which severely limits you: the lowest-common-denominator (Internet Explorer) maximum URL length is around 2000 characters. If your files are smaller than that, you could encode the file as a URL query parameter. Otherwise it's probably not possible, but I will stand corrected if others know of a way to achieve it.
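To make that concrete, here is a rough sketch of the only redirect-based option just described, with an invented target path and parameter name: the small XML payload is URL-encoded into the redirect URL, and the receiving server has to read it back out of the query string.

import java.io.IOException;
import java.net.URLEncoder;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Illustration only: URL-encode a small XML document into the redirect URL.
// The "/receive" path and the "xml" parameter name are invented for the
// example; the target server must agree to read the XML from there.
public class RedirectWithXmlServlet extends HttpServlet {
    protected void doPost(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        String xml = "<order><id>42</id></order>"; // must stay well under ~2000 chars
        String encoded = URLEncoder.encode(xml, "UTF-8");
        resp.sendRedirect("http://www.abc.com/receive?xml=" + encoded);
    }
}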
In my Zend Framework MVC application I am using only two request methods: GET and POST. I am wondering whether I should put a check in my base controller to throw an exception if other request methods are received (e.g. PUT or DELETE).
As far as I can see there are two areas for consideration:
Would it improve security at all? Am I giving potential hackers a head start if I allow the framework to respond to PUT, DELETE, et al?
Would it interfere with correct operation of the site? For example, do search engine bots rely on requests other than GET and POST?
Your ideas are much appreciated!
The correct response code would be 405 Method Not Allowed, including an Allow: GET, POST header.
10.4.6 405 Method Not Allowed
The method specified in the Request-Line is not allowed for the resource identified by the Request-URI. The response MUST include an Allow header containing a list of valid methods for the requested resource.
http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html
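The question concerns Zend Framework (PHP), but the check itself is purely an HTTP matter; as a rough illustration, the same idea expressed as a Java servlet filter could look like this:

import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Anything other than GET or POST is answered with 405 and the required
// Allow header; allowed requests pass through untouched.
public class MethodWhitelistFilter implements Filter {
    public void init(FilterConfig config) { }

    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest request = (HttpServletRequest) req;
        HttpServletResponse response = (HttpServletResponse) res;

        String method = request.getMethod();
        if (!"GET".equals(method) && !"POST".equals(method)) {
            response.setHeader("Allow", "GET, POST");
            response.sendError(HttpServletResponse.SC_METHOD_NOT_ALLOWED);
            return;
        }
        chain.doFilter(req, res);
    }

    public void destroy() { }
}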
People violate the API of your app/framework/site etc. either by mistake or on purpose, to probe your site for weaknesses. (Whether the site is internal only or on the public net only changes the frequency.)
If your site supports developers, that would be a possible reason to reply with a 405 Method Not Allowed code, perhaps only if the session (assuming sessions) is marked as being in developer mode.
If you don't expect valid developers, then I recommend silently swallowing any bad input to make it harder for the bad guys.
Another reason not to give error messages in the normal case: the lack of an error message in a particular case can then be interpreted to mean that the bad data made it further into your stack than other data, outlining a possible attack route.
And finally, error returns (type, delay before responding, and more) can be used to characterize a particular version of an app/framework etc. This can then be used to quickly locate other vulnerable installations once an attack vector is found.
Yes, the above is pessimistic, and I fondly remember the 80's when everybody responded to ping, echo and other diagnostic requests. But the bad guys are here and it is our responsibility to harden our systems. See this TED video for more.
One of our sites recently received a lot of attacks that all look similar. Luckily, we have an error logging framework that sends us an error log email when something strange happens or an error is raised on the server.
Here is what happens:
Error : The file '/(A(u76U7llazAEkAAAAZTJmYmE1NmMtZTE4YS00YzQ2LTlmYzItNGIxMzZjMzNjOTc4vkp-I-8cYbLrHx25-IfNdMvuKao1))/MostOfOurPublicPage.aspx' does not exist.
Request:
URL: http://Ourwebsite.com/(A(u76U7llazAEkAAAAZTJmYmE1NmMtZTE4YS00YzQ2LTlmYzItNGIxMzZjMzNjOTc4vkp-I-8cYbLrHx25-IfNdMvuKao1))/MostOfOurPublicPage.aspx
User Agent: Mozilla/5.0 (compatible; SiteBot/0.1; +http://www.sitebot.org/robot/)
Referrer:
Host: 213.186.122.2 (Ukraine)
SecuredConnection: False
The user agent shows SiteBot/0.1, but I'm pretty sure it's not... at least I have never heard of site bots doing things like that.
Question
So, does anyone have any idea what the heck that is, and what can I do to prevent it? It makes our error logging framework send us something like 100 error logs a day!
Note: I usually speak French, so sorry for my English.
This is just a search bot or crawler. Place a robots.txt file on your web server root (http://www.example.com/robots.txt) and put the text below in it.
User-agent: sitebot
Disallow: /
That should keep it away.
Also, the strange URL it uses is just a session identifier passed in the URL string instead of in a cookie.
These kinds of issues crop up from time to time. You probably don't want to fully suppress such errors, as they can occasionally help you find bad links. What I have done in the past is filter out the bot traffic:
either block the traffic at your firewall, or
filter the bot traffic out of your error reporting (a rough sketch follows below).
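The second option is language-agnostic; as a rough illustration only (the site in the question is ASP.NET, so treat the Java below as pseudocode), the error-mailing code could consult a small gate before sending anything:

import java.util.Set;

// Before emailing an error report, skip user agents you have identified as
// noisy bots. The "sitebot" entry mirrors the log excerpt above; extend the
// set with whatever agents actually flood your logs.
public class ErrorReportGate {
    private static final Set<String> IGNORED_AGENTS = Set.of("sitebot");

    public static boolean shouldEmail(String userAgent) {
        if (userAgent == null) {
            return true; // no user agent: keep reporting, it may be interesting
        }
        String lower = userAgent.toLowerCase();
        return IGNORED_AGENTS.stream().noneMatch(lower::contains);
    }
}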
The question can be seen as both practical and theoretical.
I am designing a system involving an HTTP client (a Flash Player application) and an HTTP server "backend". There are registered users, each with their own private image library. Images can be uploaded and of course subsequently retrieved.
Since users authenticate with cookies carrying session identifiers, it suddenly became clear to me (and hence the question) that I can provide the following kind of URL for an authenticated client to retrieve an image ('asset' in my terminology). Note that asset identifiers are unique even across users, i.e. no two users will both have an asset with ID of say, 555. Also, an asset identifier is assumed to be REALLY persistent, i.e. the ID is non-reusable. The URL I was thinking of is:
http://myserver/user/asset/<asset_id>
The angle brackets denote a variable value, i.e. obviously these and the 'asset_id' are not to be taken verbatim here; they stand for the actual asset identifier. The HTTP request "to" the above URL is expected to carry a cookie header with the user session identifier, which uniquely authenticates and authorizes the user as the owner of the requested asset.
I am very much after permanent URLs ("Cool URIs don't change", as Tim Berners-Lee once said), but obviously, since the asset resources are private to the user who uploads/owns them, they must not be cached by any intermediate proxies, only by user agents.
Can we consider the URL above a good way to identify a user asset? My worry is that the response will vary depending on whether a valid session identifier cookie header is supplied or not, so there is not a one-to-one relationship between the URL and the response. But there is not much one can do, is there? The server HAS to check that the user is authorized to retrieve the asset, right? If you have any better suggestions for a solution here, I am also anxious to hear them. Thanks.
You've said it all; I wouldn't change a thing about your strategy :-) If an unauthorized user tries to access some asset, simply give them a 403 HTTP code... that's the correct and expected response in that case.
Just because a URL doesn't change doesn't mean that every request to that URL must be successful (or even return the same object/asset).
You can easily use that as an identifier and simply tell unauthenticated clients that they are 401 Unauthorized, or even that they can't access it at all: 403 Forbidden.
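Pulling the thread together, here is a rough sketch of the scheme, assuming a servlet-style backend, an in-memory placeholder store, and a hypothetical "userId" session attribute (all invented for the example): the URL stays permanent, and the status code depends on the session that accompanies the request.

import java.io.IOException;
import java.util.Map;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpSession;

// Sketch of a handler mapped to /user/asset/*.
public class AssetServlet extends HttpServlet {

    static class Asset {
        final String ownerId;
        final String contentType;
        final byte[] bytes;
        Asset(String ownerId, String contentType, byte[] bytes) {
            this.ownerId = ownerId;
            this.contentType = contentType;
            this.bytes = bytes;
        }
    }

    // Placeholder store keyed by globally unique, non-reusable asset id.
    private final Map<String, Asset> assets = Map.of(
            "555", new Asset("user-1", "image/png", new byte[0]));

    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        String assetId = req.getPathInfo().substring(1); // "/555" -> "555"

        HttpSession session = req.getSession(false);
        Object userId = (session == null) ? null : session.getAttribute("userId");
        if (userId == null) {
            resp.sendError(HttpServletResponse.SC_UNAUTHORIZED); // 401: no valid session
            return;
        }

        Asset asset = assets.get(assetId);
        if (asset == null) {
            resp.sendError(HttpServletResponse.SC_NOT_FOUND);    // 404: unknown asset id
            return;
        }
        if (!asset.ownerId.equals(userId)) {
            resp.sendError(HttpServletResponse.SC_FORBIDDEN);    // 403: not the owner
            return;
        }

        resp.setHeader("Cache-Control", "private"); // browser cache only, no shared proxies
        resp.setContentType(asset.contentType);
        resp.getOutputStream().write(asset.bytes);
    }
}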