How does the server know to return gzipped data?

In the header exchange below I see that the server is returning the page Gzipped but I don't see where my browser ever indicated that it could accept GZip. How did the server know?

The content you have reproduced here is not what was sent by your browser; the "general" part is a mix of some of the request data and some of the response data. If you want to see the actual request and response, use something like Wireshark.
Incidentally, it is worth noting that some so-called security products will interfere with your browser's request - a common "enhancement" is to remove or mangle the header asking for compression. Your web server will honour such requests in the absence of specific configuration to force compression. When Google sees such behaviour it delivers a compressed JavaScript file to the client - if that file runs on the client, Google starts sending compressed content. There are Apache config snippets on the web which can detect and override some such tampering.
But there's no evidence here to suggest that is the case with your setup. You're just not seeing the request headers.
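For reference, a typical exchange looks roughly like this (a sketch, not a capture from your setup - exact values vary by browser and server). The browser advertises compression support with Accept-Encoding, and the server confirms what it actually applied with Content-Encoding:

GET / HTTP/1.1
Host: www.example.com
Accept-Encoding: gzip, deflate

HTTP/1.1 200 OK
Content-Encoding: gzip
Vary: Accept-Encoding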

Related

Is HTTP Origin header reliable?

We can change the Origin header in an AJAX request, or by using Chrome's 'Modify Headers' extension.
Therefore we can access data from another host.
So is it a reliable approach for handling CORS?
HTTP_ORIGIN is neither sent by all browsers nor is it secure.
Nothing sent by the browser can ever be considered safe.
HTTP is a plain-text protocol. The ENTIRE request header/body structure can be faked to say anything you want.
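As a quick illustration (the hosts are placeholders), anyone outside a browser can set the Origin header to whatever they like:

curl -H "Origin: https://trusted.example.com" https://api.example.com/data

So an Origin check can at best be a hint for well-behaved browsers, never an authentication mechanism.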

Do any modern browsers ever issue an HTTP HEAD request?

I understand what a HEAD request is, and what it could be used for. Will any standard, modern browser ever send a HEAD request? If so, in what context?
A browser will send a HEAD request if that is explicitly requested in an XMLHttpRequest, but I'm fairly certain that the browser will never send a HEAD request of its own accord. My evidence is that the Tornado web server defaults to returning an error for HEAD requests and I've never heard of anyone running into problems related to this (or even being aware of it).
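For example, a page script can issue a HEAD explicitly (the URL is just a placeholder):

var xhr = new XMLHttpRequest();
xhr.open('HEAD', '/some/resource');
xhr.onload = function () {
  // Only the status line and headers come back; there is no body to read.
  console.log(xhr.getResponseHeader('Content-Length'));
};
xhr.send();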
HEAD is mostly obsolete IMHO: on a dynamic web site it is unlikely to be significantly more efficient than a GET, and it can usually be replaced by one of the following:
Conditional GET with If-Modified-Since or If-None-Match (used for caching)
Partial GET with Range header (used for e.g. streaming video)
OPTIONS (used for CORS preflight requests)
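For instance, the caching case is covered by a conditional GET rather than a HEAD followed by a GET (the header values here are invented):

GET /logo.png HTTP/1.1
Host: www.example.com
If-None-Match: "abc123"

HTTP/1.1 304 Not Modified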

Which CDN solutions support caching with content negotiation?

I'm serving a set of resources through content negotiation.
Concretely, any URL can be represented in different formats,
depending on the client's Accept header.
An example of this can be seen at Facebook:
curl -H "Accept: application/json" http://graph.facebook.com/daft-punk
results in JSON
curl -H "Accept: text/turtle" http://graph.facebook.com/daft-punk
results in Turtle
I'm looking for a CDN that caches content based on URL and the client's Accept header.
Example of what goes wrong
CloudFlare doesn't support this: if one client asks for HTML, then all subsequent requests to that URL receive the HTML representation, regardless of their preferences. Others have similar issues.
For example, if I placed CloudFlare over graph.facebook.com (and configured it to cache “extensionless” resources, which it does not by default), then it would behave incorrectly:
I ask for http://graph.facebook.com/daft-punk in JSON through curl;
in response, CloudFlare requests the JSON representation from the origin server, caches it, and serves it.
I ask for http://graph.facebook.com/daft-punk through my browser (thus in HTML);
in response CloudFlare sends the cached JSON (!) representation, even though the original server would have sent the HTML version.
What would be needed instead
The correct behavior would be that CloudFlare asks the server again, since the second client had a different Accept header.
After this, requests with similar Accept headers can be served from cache.
Which CDN solutions support content-negotiation, and also cache negotiated content?
Note that merely respecting the Accept header is not enough; negotiated responses should be cached too.
PS1: It's easy to make your own caching servers support it. For instance, for nginx:
proxy_cache_key "$scheme$host$request_uri$http_accept";
Note how the client's Accept header is part of the key that indexes the cache. I want that on CDN.
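For context, that directive sits inside an ordinary nginx proxy-cache setup; a minimal sketch (the cache zone name, paths, and the backend upstream are placeholders):

proxy_cache_path /var/cache/nginx keys_zone=negotiated:10m;

server {
    location / {
        proxy_cache negotiated;
        # Include the Accept header so each negotiated representation is cached separately.
        proxy_cache_key "$scheme$host$request_uri$http_accept";
        proxy_pass http://backend;
    }
}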
PS2: It is not an option to use different URLs for different representations. My application is in the Linked Data domain, where URLs play an important role for identification.
It seems MaxCDN can still set up custom nginx rules for content negotiation (despite what their FAQ says) - http://blog.maxcdn.com/how-to-reduce-image-size-with-webp-automagically/#comment-1048561182
I can't think of any way we would impact this at all at this time. We don't, for example, cache HTML by default. Have you actually seen an issue with this? Have you opened a support ticket?

Are JSON web services vulnerable to CSRF attacks?

I am building a web service that exclusively uses JSON for its request and response content (i.e., no form encoded payloads).
Is a web service vulnerable to CSRF attack if the following are true?
Any POST request without a top-level JSON object, e.g., {"foo":"bar"}, will be rejected with a 400. For example, a POST request with the content 42 would thus be rejected.
Any POST request with a content type other than application/json will be rejected with a 400. For example, a POST request with content type application/x-www-form-urlencoded would thus be rejected.
All GET requests will be Safe, and thus not modify any server-side data.
Clients are authenticated via a session cookie, which the web service gives them after they provide a correct username/password pair via a POST with JSON data, e.g. {"username":"user@example.com", "password":"my password"}.
Ancillary question: Are PUT and DELETE requests ever vulnerable to CSRF? I ask because it seems that most (all?) browsers disallow these methods in HTML forms.
EDIT: Added item #4.
EDIT: Lots of good comments and answers so far, but no one has offered a specific CSRF attack to which this web service is vulnerable.
Forging arbitrary CSRF requests with arbitrary media types is effectively only possible with XHR, because a form’s method is limited to GET and POST and a form’s POST message body is also limited to the three formats application/x-www-form-urlencoded, multipart/form-data, and text/plain. However, with the form data encoding text/plain it is still possible to forge requests containing valid JSON data.
So the only threat comes from XHR-based CSRF attacks. And those will only be successful if they come from the same origin, so basically from your own site somehow (e.g. XSS). Be careful not to mistake the absence of CORS (i.e. not setting Access-Control-Allow-Origin: *) for protection: CORS simply prevents clients from reading the response. The whole request is still sent and processed by the server.
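To illustrate the text/plain trick mentioned above: a cross-site form like the following (the target URL is hypothetical) produces a body that is still valid JSON, because the name=value encoding is swallowed by a dummy property:

<form action="https://victim.example/api" method="POST" enctype="text/plain">
  <input name='{"foo":"bar","ignore_me":"' value='test"}'>
</form>

The submitted body is {"foo":"bar","ignore_me":"=test"}. Note, however, that such a request is still sent with Content-Type: text/plain, so check #2 in the question would reject it.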
Yes, it is possible. You can set up an attacker server which sends the victim's machine a 307 redirect to the target server. You need to use Flash to send the POST instead of using a form.
Reference: https://bugzilla.mozilla.org/show_bug.cgi?id=1436241
It also works on Chrome.
It is possible to do CSRF on JSON based Restful services using Ajax. I tested this on an application (using both Chrome and Firefox).
You have to change the contentType to text/plain and the dataType to JSON in order to avoid a preflight request. Then you can send the request, but in order to send session data, you need to set the withCredentials flag in your ajax request.
I discuss this in more detail here (references are included):
http://wsecblog.blogspot.be/2016/03/csrf-with-json-post-via-ajax.html
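A rough sketch of the kind of request described (jQuery assumed; the URL and payload are hypothetical):

$.ajax({
    url: 'https://victim.example/api/items',
    method: 'POST',
    contentType: 'text/plain',            // text/plain is a "simple" type, so no CORS preflight
    dataType: 'json',                     // how to interpret the response
    data: JSON.stringify({ foo: 'bar' }),
    xhrFields: { withCredentials: true }  // attach the victim's session cookie
});

Whether this succeeds depends on the target actually accepting a text/plain body; the strict application/json check from the question would reject it.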
I have some doubts concerning point 3. Although a GET can be considered safe in that it does not alter data on the server side, the data can still be read, and the risk is that it can be stolen.
http://haacked.com/archive/2008/11/20/anatomy-of-a-subtle-json-vulnerability.aspx/
Is a web service vulnerable to CSRF attack if the following are true?
Yes. It's still HTTP.
Are PUT and DELETE requests ever vulnerable to CSRF?
Yes
it seems that most (all?) browsers disallow these methods in HTML forms
Do you think that a browser is the only way to make an HTTP request?

ASP.NET Response.Cache.SetNoStore() vs. Response.Cache.SetNoServerCaching()

Can anyone break down what these two methods do at the HTTP level?
We are dealing with Akamai edge-caching and have been told that SetNoStore() will cause an exclusion so that (for example) form pages will always post back to the origin server. According to {guy} this sets the HTTP header:
Cache-Control: "no-cache, no-store"
As I was implementing this change to our forms I found SetNoServerCaching(). Well that seems to make a bit more sense semantically, and the documentation says "Explicitly denies caching of the document on the origin-server."
So I went down to the sea sea sea to see what I could see see see. I tried both of these methods and reviewed the headers in Firebug and Fiddler.
And from what I can tell, both of these methods set the exact same HTTP header.
Can anyone explain if there are actual differences between these methods and, if so, where they are hiding in the HTTP response?!
There are a few differences:
SetNoStore essentially stops the browser (and any network resource such as a CDN) from saving any part of the response or request, including saving to temp files. This will set the no-store HTTP 1.1 Cache-Control directive.
SetNoServerCaching will essentially stop the server from saving the response. In ASP.NET there are several levels of caching that can happen: data only, partial requests, full pages, and SQL data. This call should stop the HTTP (full and partial) responses from being cached on the server. This method should not set the Cache-Control, no-store or no-cache headers.
There is also
Response.Cache.SetCacheability(HttpCacheability.Public);
Response.Cache.SetMaxAge(new TimeSpan(1, 0, 0));
as a possible way of setting caching; this will set the Cache-Control expiry (max-age) directive.
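With the one-hour max-age from the snippet above, the response would carry roughly this header:

Cache-Control: public, max-age=3600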
For a CDN you probably want to set that expiry header so the CDN knows when to fetch new content if it gets a HIT. You probably don't want no-cache or no-store, as these would cause a refetch on every HIT, essentially nullifying any benefit the CDN brings you - except that it may have a faster backbone connection to the end user than your current ISP, but that would be marginal.
The difference between the two is:
HttpCachePolicy.SetNoStore() or Response.Cache.SetNoStore:
Prevents the browser from caching the ASPX page.
HttpCachePolicy.SetNoServerCaching or Response.Cache.SetNoServerCaching:
Stops all origin-server caching for the current response. Explicitly denies caching of the document on the origin-server. Once set, all requests for the document are fully processed.
When these methods are invoked, caching cannot be reenabled for the current response.
