While it is important that people use approved browsers downloaded from legitimate sites, is there any way for the server to detect whether someone is spoofing the browser (user agent)?
My question is specifically about security. What if someone creates a browser (user agent) that does not respect certain contracts (for example, the same-origin policy for cookies) in order to exploit vulnerabilities? This illegitimate browser could claim to be a genuine user agent by populating the User-Agent header with the standard values used by Firefox or Chrome.
Is there any way, on the server side, to detect whether the user is using a spoofed user agent, so that the server can take countermeasures if needed? Or is it solely the responsibility of the individual to use approved browsers (i.e., servers have no way to detect it)?
Browsers are just high-level user interfaces to HTTP. Prior to the introduction of various security methods, there was not much in place to prevent such attacks. Nowadays browsers (Chrome, Firefox, Edge) have restrictions and abide by certain rules/contracts in order to work properly.
One can spoof (send) anything with an HTTP client (a very lean "browser") such as curl:
curl -A "Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:59.0) Gecko/20100101 Firefox/59.0" http://blah.nonexistent.tld
(The default curl header is something like User-Agent: curl/7.16.3.)
A server could restrict or flag uncommon User-Agent values to deter scraping or scanning, but the user agent is nothing but a "name" and can simply be changed to a common one, as done above.
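For illustration only, here is a minimal sketch, in Python, of the kind of naive User-Agent filtering a server could do; the prefix list and helper name are my own assumptions, and, as the curl example shows, the check is trivially defeated:

    # Minimal sketch of a naive User-Agent filter. The prefix list and the
    # helper name are illustrative assumptions, not a reliable defence:
    # any client can send whatever User-Agent string it likes.
    KNOWN_PREFIXES = ("Mozilla/5.0", "curl/", "Wget/")

    def looks_like_known_agent(user_agent: str) -> bool:
        """Return True if the User-Agent starts with an expected prefix."""
        return user_agent.startswith(KNOWN_PREFIXES)

    # The spoofed curl request above passes this check just as easily as a
    # real Firefox would, which is exactly the point.
    print(looks_like_known_agent(
        "Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:59.0) "
        "Gecko/20100101 Firefox/59.0"))  # True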
The security methods (contracts) that have been added, such as the Same-Origin Policy, Cross-Origin Resource Sharing and HttpOnly cookies, are there to protect the client (browser) and the server. They must be implemented by both in order to function properly (securely) and protect against an attack like the one you describe. If the client and the server don't properly honour the contracts agreed upon, then cookies could be exfiltrated (though a modern browser is designed to fail fast and would still prevent this).
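As a concrete illustration of those protective attributes (my own sketch using Python's standard http.cookies module; the cookie name and value are made up), a server can emit a Set-Cookie header carrying HttpOnly, Secure and SameSite, and it is then up to a compliant browser to enforce them:

    from http.cookies import SimpleCookie

    # Build the value of a Set-Cookie header carrying the protective
    # attributes a compliant browser is expected to honour (Python 3.8+
    # is needed for the SameSite attribute). The session value is made up.
    cookie = SimpleCookie()
    cookie["session"] = "abc123"
    cookie["session"]["httponly"] = True   # hidden from document.cookie
    cookie["session"]["secure"] = True     # only sent over HTTPS
    cookie["session"]["samesite"] = "Lax"  # limits cross-site sends
    cookie["session"]["path"] = "/"

    # Prints something like:
    #   session=abc123; HttpOnly; Path=/; SameSite=Lax; Secure
    print(cookie["session"].OutputString())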
If you mean that you would create your own browser, set its User-Agent to Chrome's, and ignore the contracts put in place by properly configured servers, then those servers might simply ignore you. And what user cookies would you steal from a "custom" browser that few people actually use?
Related
I was just reading https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Set-Cookie:
Lax: The cookie is not sent on cross-site requests, such as calls to load images or frames, but is sent when a user is navigating to the origin site from an external site (e.g. if following a link). This is the default behavior if the SameSite attribute is not specified.
If this is the default, then doesn't this mean CSRF attacks can't happen?
If someone loads a malicious website that runs JavaScript in the background to make a simple POST request to a website the victim is currently logged into, then the default behaviour is that the cookie won't be sent, right?
Also, why would someone choose to use Strict over Lax?
Why would you ever want to prevent a user's browser sending a cookie to the origin website when navigating to that website, which is what Strict does?
CSRF attacks are still possible when SameSite is Lax. It prevents the cross-site POST attack you mentioned, but if a website triggers an unsafe operation with a GET request then it would still be possible. For example, many sites currently trigger a logout with a GET request, so it would be trivial for an attacker to log a user out of their session.
The standard addresses this directly:
Lax enforcement provides reasonable defense in depth against CSRF attacks that rely on unsafe HTTP methods (like "POST"), but does not offer a robust defense against CSRF as a general category of attack:
Attackers can still pop up new windows or trigger top-level navigations in order to create a "same-site" request (as described in section 5.2.1), which is only a speedbump along the road to exploitation.
Features like <link rel='prerender'> can be exploited to create "same-site" requests without the risk of user detection.
Given that, the reason why someone would use Strict is straightforward: it prevents a broader class of CSRF attacks. There's a tradeoff, of course, since it prevents some ways of using your site, but if those use cases aren't important to you then the tradeoff might be justified.
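To make the GET-based gap concrete, here is a rough sketch (my own illustration; the endpoint, session store and host names are hypothetical) of a logout that fires on a plain GET, which an attacker can still trigger via a top-level navigation even when the session cookie is SameSite=Lax:

    from http.server import BaseHTTPRequestHandler, HTTPServer

    SESSIONS = {"abc123": "alice"}  # hypothetical session store

    class AppHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # Anti-pattern: a state-changing action behind a GET request.
            # With SameSite=Lax the session cookie is still attached to a
            # cross-site *top-level* navigation, e.g. an attacker page
            # running  window.location = "https://victim.example/logout";
            # so this endpoint remains CSRF-able.
            if self.path == "/logout":
                if "session=abc123" in self.headers.get("Cookie", ""):
                    SESSIONS.pop("abc123", None)  # victim is logged out
                self.send_response(200)
                self.end_headers()
                self.wfile.write(b"logged out\n")
            else:
                self.send_response(404)
                self.end_headers()

    if __name__ == "__main__":
        HTTPServer(("localhost", 8080), AppHandler).serve_forever()

Requiring an unsafe method (a POST with a CSRF token) for state-changing actions, or using SameSite=Strict, closes this particular hole.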
I developed a client and a server app. In the server's Web.config, I set the property
<add name="Access-Control-Allow-Origin" value="http://domain.tld:4031" />
And, indeed, when I try connecting with a client installed in a different location, I get rejected. But I do not get rejected when I use Chrome's Advanced REST Client from the very same location!
In the extension, the response header indicates
Access-Control-Allow-Origin: http://www.domain.tld:4031
So how come I still get a "200 OK" answer, along with the data I requested?
UPDATE:
I do not think this topic is enough of an answer: How does Google Chrome's Advanced REST client make cross domain POST requests?
My main concern is: how come it is possible to "ask" for those extra permissions? I believe the client shouldn't be allowed to just decide which permissions it receives. I thought that was up to the server only. What if I just "asked" for extra permissions to access the data on your computer? It doesn't make sense to me...
The reason that the REST client (or really the browser) is able to bypass the CORS restriction is that it is a client-side protection. It isn't the responsibility of the server to provide this protection; rather, it is a feature implemented by most modern browser vendors to protect their users from cross-origin hazards.
The following quote from the Wikipedia CORS page sums it up quite well:
"Although some validation and authorization can be performed by the server, it is generally the browser's responsibility to support these headers and respect the restrictions they impose." - Wikipedia
You could, of course, as the quote implies, do some server-side validation of your own. The Access-Control-Allow-Origin header, however, is more of an indication to the browser that it is okay to allow the specified origins. It is then up to the browser to decide whether it wants to honor this header. In previous versions of Chrome, for example, there was even a flag to turn off the same-origin policy altogether.
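A sketch of the kind of server-side validation hinted at above might look like the following; the allow-list and function name are my own assumptions, and note that a non-browser client can omit or forge the Origin header entirely:

    # Assumed allow-list: keep it in sync with whatever you send in
    # Access-Control-Allow-Origin.
    ALLOWED_ORIGINS = {"http://domain.tld:4031"}

    def origin_allowed(request_headers: dict) -> bool:
        """Server-side check of the Origin header, independent of the browser.

        A non-browser client (curl, a REST extension, ...) can omit or forge
        Origin, so treat this as defence in depth, not a complete control.
        """
        origin = request_headers.get("Origin")
        return origin is None or origin in ALLOWED_ORIGINS

    print(origin_allowed({"Origin": "http://evil.example"}))      # False
    print(origin_allowed({"Origin": "http://domain.tld:4031"}))   # True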
Recently I put some hidden links in a web site in order to trap web crawlers. (I used the CSS visibility: hidden style to keep human users from accessing them.)
Anyway, I found plenty of HTTP requests to those hidden links whose User-Agent headers referenced ordinary browsers.
E.g.: "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.31 (KHTML, like Gecko) Chrome/26.0.1410.64 Safari/537.31"
So now my questions are:
(1) Are these web crawlers, or if not, what could they be?
(2) Are they malicious?
(3) Is there a way to profile their behaviour?
I searched the web but couldn't find any valuable information. Can you point me to some resources? Any help would be appreciated.
That is an HTTP user agent string, and it is not malicious in itself. It follows the usual pattern, for example Mozilla/<version> and so on; a browser is one kind of user agent. However, user agent strings can be spoofed by attackers, and this can be identified by looking at anomalies. You can read this paper.
The Hypertext Transfer Protocol (HTTP) identifies the client software originating the request, using a "User-Agent" header, even when the client is not operated by a user.
The answers to your questions, in order:
They are not web crawlers; they are user agents, a common term among web developers.
Generally they aren't malicious, but they can be; as suggested, have a look at the paper (a rough sketch of a simple anomaly check follows this list).
I don't understand what you mean by profiling their behaviour; they aren't malware!
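As an illustration only (the log tuples and trap path below are assumptions), tallying which User-Agent strings hit the hidden links is one simple way to surface anomalies:

    from collections import Counter

    # Hypothetical (ip, path, user_agent) tuples pulled from an access log.
    requests = [
        ("203.0.113.5", "/hidden-trap-link",
         "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.31 (KHTML, like Gecko) "
         "Chrome/26.0.1410.64 Safari/537.31"),
        ("203.0.113.5", "/hidden-trap-link",
         "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.31 (KHTML, like Gecko) "
         "Chrome/26.0.1410.64 Safari/537.31"),
        ("198.51.100.7", "/index.html",
         "Mozilla/5.0 (X11; Linux x86_64; rv:60.0) Gecko/20100101 Firefox/60.0"),
    ]

    # Humans should never reach the hidden links, so any user agent that
    # does is worth a closer look, however browser-like its string is.
    trap_hits = Counter(ua for _, path, ua in requests
                        if path == "/hidden-trap-link")
    for ua, count in trap_hits.most_common():
        print(count, ua)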
If evil.example.com sets a cookie with a domain attribute set to .example.com, a browser will include this cookie in requests to foo.example.com.
The Tangled Web notes that, for foo.example.com, such a cookie is largely indistinguishable from cookies set by foo.example.com itself. But according to the RFC, the domain attribute of a cookie should be sent back to the server, which would make it possible for foo.example.com to distinguish and reject a cookie that was set by evil.example.com.
What is the state of current browsers implementations? Is domain sent back with cookies?
RFC 2109 and RFC 2965 were historical attempts to standardise the handling of cookies. Unfortunately they bore no resemblance to what browsers actually do, and should be completely ignored.
Real-world behaviour was primarily defined by the original Netscape cookie_spec, but this was highly deficient as a specification, which has resulted in a range of browser differences around:
what date formats are accepted;
how cookies with the same name are handled when more than one match;
how non-ASCII characters work (or don't work);
quoting/escapes;
how domain matching is done.
RFC 6265 is an attempt to clean up this mess and definitively codify what browsers should aim to do. It doesn't say browsers should send domain or path, because no browser in history has ever done that.
Because you can't detect that a cookie comes from a parent domain(*), you have to take care with your hostnames to avoid overlapping domains if you want to keep your cookies separate - in particular for IE, where even if you don't set domain, a cookie set on example.com will always inherit into foo.example.com.
So: don't use a 'no-www' hostname for your site if you think you might ever want a subdomain with separate cookies in the future (that shouldn't be able to read sensitive cookies from its parent); and if you really need a completely separate cookie context, to prevent evil.example.com injecting cookie values into other example.com sites, then you have no choice but to use completely separate domain names.
An alternative that might be effective against some attack models would be to sign every cookie value you produce, for example using an HMAC.
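A minimal sketch of that HMAC idea, in Python (the key handling and the value format are simplified assumptions; this detects forged values, not replayed valid ones):

    import hashlib
    import hmac

    SECRET_KEY = b"server-side-secret"  # never exposed to clients

    def sign(value):
        """Return 'value.signature' suitable for storing in a cookie."""
        mac = hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()
        return f"{value}.{mac}"

    def verify(cookie_value):
        """Return the original value if the signature checks out, else None."""
        value, _, mac = cookie_value.rpartition(".")
        expected = hmac.new(SECRET_KEY, value.encode(),
                            hashlib.sha256).hexdigest()
        return value if hmac.compare_digest(mac, expected) else None

    token = sign("user=alice")
    assert verify(token) == "user=alice"
    # A cookie value injected from evil.example.com will not carry a valid
    # signature and is rejected:
    assert verify("user=mallory.deadbeef") is None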
*: there is kind of a way. Try deleting the cookie with the same domain and path settings as the cookie you want. If the cookie disappears when you do so, then it must have had those domain and path settings, so the original cookie was OK. If there is still a cookie there, it comes from somewhere else and you can ignore it. This is inconvenient, impractical to do without JavaScript, and not watertight because in principle the attacker could be deleting their injected cookies at the same time.
The current standard for cookies is RFC 6265. This version has simplified the Cookie: header. The browser just sends cookie name=value pairs, not attributes like the domain. See section 4.2 of the RFC for the specification of this header. Also, see section 8.6, Weak Integrity, where it discusses the fact that foo.example.com can set a cookie for .example.com, and bar.example.com won't be able to tell that it's not a cookie it set itself.
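To illustrate the point (my own sketch), parsing a typical Cookie request header shows there is nothing beyond the name=value pairs for the server to inspect:

    from http.cookies import SimpleCookie

    # What a browser sends back per RFC 6265 section 4.2: name=value pairs
    # only, with no attributes attached.
    jar = SimpleCookie()
    jar.load("session=abc123; theme=dark")

    for name, morsel in jar.items():
        # Domain and Path come out empty: the request header never carries
        # them, so the server cannot tell which host set the cookie.
        print(name, morsel.value, repr(morsel["domain"]), repr(morsel["path"]))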
I would believe Zalewski's book and his https://code.google.com/p/browsersec/wiki/Main over any RFCs. Browsers' implementations of a number of HTTP features are notoriously messy and non-standard.
I have difficulty seeing the point of the Access-Control-Allow-Origin HTTP header.
I thought that if a client (browser) gets a "no" from a server once, then it will not send any further requests. But Chrome and Firefox keep sending requests.
Could anyone give me a real-life example where a header like this makes sense?
Thanks!
The Access-Control-Allow-Origin header names the origin(s) that are "allowed" to access the resource, thus determining which domains can make cross-origin requests to your server for it.
For example, sending back a header of Access-Control-Allow-Origin: * would allow all sites to access the requested resource.
On the other hand, sending back Access-Control-Allow-Origin: http://foo.example.com will allow access only to http://foo.example.com.
There's some more information on this over at the Mozilla Developer Site
For example
Let's suppose we have a URL on our own domain that returns a JSON collection of Music Albums by Artist. It might look like this:
http://ourdomain.com/GetAlbumsByArtist/MichaelJackson.json
We might use some AJAX on our website to get this JSON data and then display it on our website.
But what if someone from another site wishes to use our JSON object for themselves? Perhaps we have another website http://subdomain.ourdomain.com which we own and would like to use our feed from ourdomain.com.
Traditionally we can't make cross-domain requests for this data.
By specifying other domains that are allowed access to our resource, we now open the doors to cross-domain requests.
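A minimal sketch of the server side of that example (my own illustration using Python's standard http.server; the album data is made up) would just attach the header to the JSON response:

    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class AlbumHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # Serve the JSON feed and explicitly allow our subdomain to read
            # it from cross-origin JavaScript.
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Access-Control-Allow-Origin",
                             "http://subdomain.ourdomain.com")
            self.end_headers()
            self.wfile.write(json.dumps(["Thriller", "Bad"]).encode())

    if __name__ == "__main__":
        HTTPServer(("", 8080), AlbumHandler).serve_forever()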
CORS implements a two-part view of cross-origin security. The problem it is trying to solve is that there are many servers sitting out there on the public internet written by people who either (a) assumed that no browser would ever allow a cross-origin request, or (b) didn't think about it at all.
So, some people want to permit cross-origin communications, but the browser-builders do not feel that they can just unlock browsers and suddenly leave all these websites exposed. To avoid this, they invented a two-part structure. Before a browser will permit a cross-origin interaction with a server, that server has to specifically indicate that it is willing to allow cross-origin access. In the simple cases, that's Access-Control-Allow-Origin. In more complex cases, it's the full preflight mechanism.
It's still true that servers have to implement appropriate resource access control on their resources. CORS is just there to allow the server to indicate to browsers that it is aware of all the issues.
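For illustration, here is a rough sketch (my own, with a hypothetical trusted origin) of the two parts on the server: answering the browser's preflight OPTIONS request, then echoing the allowance on the actual response, while normal access control still applies:

    from http.server import BaseHTTPRequestHandler, HTTPServer

    ALLOWED_ORIGIN = "https://app.example"  # hypothetical trusted origin

    class PreflightHandler(BaseHTTPRequestHandler):
        def do_OPTIONS(self):
            # Part one: the browser's preflight for a "non-simple" request
            # (e.g. a JSON POST). The server states what it is willing to
            # allow; the browser enforces the answer.
            self.send_response(204)
            self.send_header("Access-Control-Allow-Origin", ALLOWED_ORIGIN)
            self.send_header("Access-Control-Allow-Methods", "GET, POST")
            self.send_header("Access-Control-Allow-Headers", "Content-Type")
            self.end_headers()

        def do_POST(self):
            # Part two: the actual request. The response still needs the
            # header, and the resource still needs its own access control
            # (authentication, authorisation) on the server side.
            length = int(self.headers.get("Content-Length", 0))
            self.rfile.read(length)
            self.send_response(200)
            self.send_header("Access-Control-Allow-Origin", ALLOWED_ORIGIN)
            self.end_headers()
            self.wfile.write(b"ok\n")

    if __name__ == "__main__":
        HTTPServer(("", 8080), PreflightHandler).serve_forever()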