One-time nonce HTTP digest authentication

I am trying to implement a one-time nonce for the HTTP digest authentication process. First of all, I'm aware of the fact that this authentication is not perfectly secure; please do not tell me to use something else. The authentication process works as expected. When the user authenticates successfully, I append an Authentication-Info header field with the next nonce. The browser (Firefox, in this case) does not use this nonce for further requests.
Authentication-Info: nextnonce="06e8043d3fb8c26156829c4b55afd13040"
Why is the browser not using my new nonce for future requests? It still uses the old, now-invalid one!
RFC7616 describes the header field.
https://www.rfc-editor.org/rfc/rfc7616#section-3.5
The value of the nextnonce parameter is the nonce the server
wishes the client to use for a future authentication response.
The server MAY send the Authentication-Info header field with a
nextnonce field as a means of implementing one-time nonces or
otherwise changing nonces. If the nextnonce field is present, ...
RFC2617 describes the syntax in section 3.2.3
https://www.ietf.org/rfc/rfc2617.txt
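To illustrate, here is a minimal sketch of the server-side flow described above, assuming a Node.js server written in TypeScript; the actual digest verification is elided, and all names and values are illustrative:

import { createServer } from "node:http";
import { randomBytes } from "node:crypto";

// One-time nonces the server is currently willing to accept.
const validNonces = new Set<string>();

function freshNonce(): string {
  const nonce = randomBytes(16).toString("hex");
  validNonces.add(nonce);
  return nonce;
}

createServer((req, res) => {
  // A real server would verify the full Digest response here; this sketch
  // only shows the nonce rotation. (The regex is a simplification, too.)
  const used = /nonce="([^"]+)"/.exec(req.headers.authorization ?? "")?.[1];
  if (used && validNonces.delete(used)) {
    // Authentication succeeded: the nonce is consumed, and the client is
    // handed its next one-time nonce via Authentication-Info.
    res.setHeader("Authentication-Info", `nextnonce="${freshNonce()}"`);
    res.end("ok");
  } else {
    res.writeHead(401, {
      "WWW-Authenticate": `Digest realm="test", qop="auth", nonce="${freshNonce()}"`,
    });
    res.end();
  }
}).listen(8080);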
[Edit]
Is it possible that Firefox does not support this feature? If I search here
https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/WWW-Authenticate
for the header field, I can't find a result.
But it is listed as a standard header field here:
https://www.iana.org/assignments/message-headers/message-headers.xhtml

nextnonce is not supported in Firefox; even the Authentication-Info header itself is not supported.
The bug "next nonce digest auth test fails" was opened 18 years ago and is still not fixed.
I downloaded the source code of the latest Firefox, version 65.0.1, and searched the project. "Authentication-Info" appears only in netwerk/protocol/http/nsHttpDigestAuth.cpp, as a comment, and nowhere else.


Routes using cookie authentication from previous version of Paw no longer work on new version

I have a route that uses an authentication cookie set by another route.
This method no longer works in the new version. Paw complains that there is no set-cookie header in the response from the Authenticate request.
It seems this is because Paw now takes the cookies and handles them differently from other headers. I like this approach because it should make this sort of authentication easier, but, unfortunately, it's not working like I would expect.
Here's how I have configured a newer request:
I've set the cookie header to the Response Cookies dynamic value which, I believe, should pass along the cookies set previously. I would think I should select the Authenticate request from the dropdown (since it's the response from this request that actually sets the cookie), but the cookie value disappears if I do that. Instead, I have left the request value as Current Request, since that seems to contain the correct value.
I've also noticed the Automatically send cookies setting, which I thought might be an easy solution. I removed the manual cookie header from my request, leaving this checked, in hopes it might automatically send over any cookies from the cookie jar along with the request, but that doesn't seem to work either. No matter what I try, my request fails to produce the desired results because of authentication.
Can you help me understand how to configure these requests so that I can continue using Paw to test session-authenticated routes?
Here are a couple of things that will help you understand how cookies work in Paw (starting from version 2.1):
1. Cookies are stored in Jars
To allow users to keep multiple simultaneous sessions, cookies are stored in jars, so you can switch between sessions (jars) easily.
Cookies stored in jars will be sent only if they match the request (hostname, path, is secure, etc.).
2. Cookies from jars are sent by default, unless the Cookie header is overridden
If you set a Cookie header manually, cookies stored in jars won't be sent. (And, obviously, no cookies are sent if Automatically Send Cookies is disabled.)
3. The previous use of "Response Headers" was hacky. Use Response Cookies.
In fact, Set-Cookie (for responses) and Cookie (for requests) have different syntaxes. So you can't send back the original value of Set-Cookie (even though it seemed to be working in most cases).
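For example, a response cookie carries attributes (path, flags, expiry) that are not valid syntax in a request's Cookie header (illustrative values):

Set-Cookie: session=abc123; Path=/; Secure; HttpOnly
Cookie: session=abc123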
The new Response Cookies dynamic value you mentioned has this purpose: send back the cookies set by a specific request.
Now, in your case, I would use a Response Cookies dynamic value all the time. As you have only 1 request doing auth / cookie setting, it might be the easiest to handle. Also, maybe check Ignore Domain, Path, Is Secure and Date to make sure your cookie is always sent even if you switch the host (or something else).
Deleting the cookies in the cookie jar and hitting the authentication endpoint again seems to have fixed the problem. Not sure why, but it now works whether I manually send a cookie or use the Automatically send cookies setting.

Are JSON web services vulnerable to CSRF attacks?

I am building a web service that exclusively uses JSON for its request and response content (i.e., no form encoded payloads).
Is a web service vulnerable to CSRF attack if the following are true?
Any POST request without a top-level JSON object, e.g., {"foo":"bar"}, will be rejected with a 400. For example, a POST request with the content 42 would be thus rejected.
Any POST request with a content-type other than application/json will be rejected with a 400. For example, a POST request with content-type application/x-www-form-urlencoded would be thus rejected.
All GET requests will be Safe, and thus not modify any server-side data.
Clients are authenticated via a session cookie, which the web service gives them after they provide a correct username/password pair via a POST with JSON data, e.g. {"username":"user@example.com", "password":"my password"}.
Ancillary question: Are PUT and DELETE requests ever vulnerable to CSRF? I ask because it seems that most (all?) browsers disallow these methods in HTML forms.
EDIT: Added item #4.
EDIT: Lots of good comments and answers so far, but no one has offered a specific CSRF attack to which this web service is vulnerable.
Forging arbitrary CSRF requests with arbitrary media types is effectively only possible with XHR, because a form’s method is limited to GET and POST and a form’s POST message body is also limited to the three formats application/x-www-form-urlencoded, multipart/form-data, and text/plain. However, with the form data encoding text/plain it is still possible to forge requests containing valid JSON data.
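As a sketch of that last point: with text/plain encoding, each form field is sent as name=value, so splitting the JSON around that "=" yields a valid JSON body. The target URL and field names below are made up:

// Hypothetical attacker page building a hidden cross-site form whose
// text/plain body is valid JSON.
const form = document.createElement("form");
form.method = "POST";
form.action = "https://victim.example.com/api/transfer"; // made-up target
form.enctype = "text/plain";

const field = document.createElement("input");
field.type = "hidden";
field.name = '{"amount": 1000, "padding": "';
field.value = '"}';
// Encoded body: {"amount": 1000, "padding": "="}
form.appendChild(field);

document.body.appendChild(form);
form.submit(); // the victim's session cookie is attached automatically

Note, however, that such a request is still sent with Content-Type: text/plain, so the content-type check in point 2 of the question would reject it.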
So the only threat comes from XHR-based CSRF attacks, and those will only be successful if they come from the same origin, so basically from your own site somehow (e.g., via XSS). Be careful not to mistake disabling CORS (i.e., not setting Access-Control-Allow-Origin: *) for a protection: CORS simply prevents clients from reading the response. The whole request is still sent and processed by the server.
Yes, it is possible. You can set up an attacker server that sends the victim's machine a 307 redirect to the target server. You need to use Flash to send the POST instead of using a form.
Reference: https://bugzilla.mozilla.org/show_bug.cgi?id=1436241
It also works on Chrome.
It is possible to perform CSRF on JSON-based RESTful services using Ajax. I tested this on an application (using both Chrome and Firefox).
You have to change the contentType to text/plain and the dataType to JSON in order to avoid a preflight request. Then you can send the request, but in order to send session data, you need to set the withCredentials flag in your Ajax request.
I discuss this in more detail here (references are included):
http://wsecblog.blogspot.be/2016/03/csrf-with-json-post-via-ajax.html
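A minimal sketch of such a request, using fetch rather than jQuery's $.ajax (the endpoint is made up):

// text/plain is a CORS-safelisted content type, so no preflight is sent.
fetch("https://victim.example.com/api/account", {
  method: "POST",
  credentials: "include", // send the victim's session cookie along
  headers: { "Content-Type": "text/plain" },
  body: JSON.stringify({ email: "attacker@example.com" }),
});
// Without CORS approval the attacker cannot read the response,
// but the server still receives and may process the request.

Note that Chrome and other browsers have since moved to a SameSite=Lax default for cookies, which blocks the cookie on cross-site POSTs like this unless it was explicitly set with SameSite=None.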
I have some doubts concerning point 3. Although a GET can be considered safe in that it does not alter data on the server side, the data can still be read, and the risk is that it can be stolen.
http://haacked.com/archive/2008/11/20/anatomy-of-a-subtle-json-vulnerability.aspx/
Is a web service vulnerable to CSRF attack if the following are true?
Yes. It's still HTTP.
Are PUT and DELETE requests ever vulnerable to CSRF?
Yes
it seems that most (all?) browsers disallow these methods in HTML forms
Do you think that a browser is the only way to make an HTTP request?

Customize the Authorization HTTP header

I need to authenticate a client when he sends a request to an API. The client has an API-token and I was thinking about using the standard Authorization header for sending the token to the server.
Normally this header is used for Basic and Digest authentication. But I don't know if I'm allowed to customize the value of this header and use a custom authentication scheme, e.g.:
Authorization: Token 1af538baa9045a84c0e889f672baf83ff24
Would you recommend this or not? Or is there a better approach for sending the token?
You can create your own custom auth schemes that use the Authorization: header - for example, this is how OAuth works.
As a general rule, if servers or proxies don't understand the values of standard headers, they will leave them alone and ignore them. It is creating your own header keys that can often produce unexpected results - many proxies will strip headers with names they don't recognise.
Having said that, it is possibly a better idea to use cookies to transmit the token rather than the Authorization: header, for the simple reason that cookies were explicitly designed to carry custom values, whereas the specification for HTTP's built-in auth methods does not really say either way - if you want to see exactly what it does say, have a look here.
The other point about this is that many HTTP client libraries have built-in support for Digest and Basic auth but may make life more difficult when trying to set a raw value in the header field, whereas they will all provide easy support for cookies and will allow more or less any value within them.
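For instance, on the client side it is just a matter of setting the header value verbatim; a minimal sketch using fetch, with the token scheme and value from the question (the URL is made up):

const response = await fetch("https://api.example.com/resource", {
  headers: { Authorization: "Token 1af538baa9045a84c0e889f672baf83ff24" },
});
// The scheme name ("Token") and the credentials are opaque to proxies;
// only your own server needs to understand them.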
In the case of CROSS-ORIGIN requests, read this:
I faced this situation and at first I chose to use the Authorization Header and later removed it after facing the following issue.
The Authorization header is not among the headers allowed on a simple cross-origin request. So if a cross-domain request is made with the Authorization header set, the browser first sends a preflight request: an HTTP request using the OPTIONS method that carries none of the actual request's parameters. Your server needs to respond with an Access-Control-Allow-Headers header whose value includes your custom header (the Authorization header).
So, for each request the client (browser) sent, an additional HTTP request (OPTIONS) was being sent by the browser. This deteriorated the performance of my API.
You should check whether adding this degrades your performance. As a workaround, I am sending tokens in HTTP parameters, which I know is not the best way of doing it, but I couldn't compromise on performance.
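One mitigation worth noting: the server can let the browser cache the preflight result via Access-Control-Max-Age, so the OPTIONS request is not repeated for every call. A minimal sketch, assuming a Node.js server (the allowed origin and port are made up):

import { createServer } from "node:http";

createServer((req, res) => {
  if (req.method === "OPTIONS") {
    // Answer the preflight and allow the browser to cache it for 24 hours.
    res.writeHead(204, {
      "Access-Control-Allow-Origin": "https://app.example.com",
      "Access-Control-Allow-Methods": "GET, POST, PUT, DELETE",
      "Access-Control-Allow-Headers": "Authorization, Content-Type",
      "Access-Control-Max-Age": "86400",
    });
    res.end();
    return;
  }
  // Real requests still need the CORS response header.
  res.setHeader("Access-Control-Allow-Origin", "https://app.example.com");
  // ... check the Authorization header and handle the request here ...
  res.end("ok");
}).listen(8080);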
This is a bit dated, but there may be others looking for answers to the same question. You should think about what protection spaces make sense for your APIs. For example, you may want to identify and authenticate client application access to your APIs to restrict their use to known, registered client applications. In this case, you can use the Basic authentication scheme with the client identifier as the user-id and the client shared secret as the password. You don't need proprietary authentication schemes; just clearly identify the one(s) to be used by clients for each protection space. I prefer only one for each protection space, but the HTTP standards allow both multiple authentication schemes on each WWW-Authenticate header response and multiple WWW-Authenticate headers in each response; this can be confusing for API clients deciding which option to use. Be consistent and clear, and your APIs will be used.
I would recommend not to use HTTP authentication with custom scheme names. If you feel that you have something of generic use, you can define a new scheme, though. See http://greenbytes.de/tech/webdav/draft-ietf-httpbis-p7-auth-latest.html#rfc.section.2.3 for details.
Kindly try the below in Postman.
In the header section, this example works for me:
Authorization : JWT eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyIkX18iOnsic3RyaWN0TW9kZSI6dHJ1ZSwiZ2V0dGVycyI6e30sIndhc1BvcHVsYXRlZCI6ZmFsc2UsImFjdGl2ZVBhdGhzIjp7InBhdGhzIjp7InBhc3N3b3JkIjoiaW5pdCIsImVtYWlsIjoiaW5pdCIsIl9fdiI6ImluaXQiLCJfaWQiOiJpbml0In0sInN0YXRlcyI6eyJpZ25vcmUiOnt9LCJkZWZhdWx0Ijp7fSwiaW5pdCI6eyJfX3YiOnRydWUsInBhc3N3b3JkIjp0cnVlLCJlbWFpbCI6dHJ1ZSwiX2lkIjp0cnVlfSwibW9kaWZ5Ijp7fSwicmVxdWlyZSI6e319LCJzdGF0ZU5hbWVzIjpbInJlcXVpcmUiLCJtb2RpZnkiLCJpbml0IiwiZGVmYXVsdCIsImlnbm9yZSJdfSwiZW1pdHRlciI6eyJkb21haW4iOm51bGwsIl9ldmVudHMiOnt9LCJfZXZlbnRzQ291bnQiOjAsIl9tYXhMaXN0ZW5lcnMiOjB9fSwiaXNOZXciOmZhbHNlLCJfZG9jIjp7Il9fdiI6MCwicGFzc3dvcmQiOiIkMmEkMTAkdTAybWNnWHFjWVQvdE41MlkzZ2l3dVROd3ZMWW9ZTlFXejlUcThyaDIwR09IMlhHY3haZWUiLCJlbWFpbCI6Im1hZGFuLmRhbGUxQGdtYWlsLmNvbSIsIl9pZCI6IjU5MjEzYzYyYWM2ODZlMGMyNzI2MjgzMiJ9LCJfcHJlcyI6eyIkX19vcmlnaW5hbF9zYXZlIjpbbnVsbCxudWxsLG51bGxdLCIkX19vcmlnaW5hbF92YWxpZGF0ZSI6W251bGxdLCIkX19vcmlnaW5hbF9yZW1vdmUiOltudWxsXX0sIl9wb3N0cyI6eyIkX19vcmlnaW5hbF9zYXZlIjpbXSwiJF9fb3JpZ2luYWxfdmFsaWRhdGUiOltdLCIkX19vcmlnaW5hbF9yZW1vdmUiOltdfSwiaWF0IjoxNDk1MzUwNzA5LCJleHAiOjE0OTUzNjA3ODl9.BkyB0LjKB4FIsCtnM5FcpcBLvKed_j7rCCxZddwiYnU

How does the AIR cache control work, what is the maximum age of cached content?

I'm currently working on a project that needs to request a URL multiple times. Having studied the traffic in an HTTP proxy (Charles), it seems that AIR will cache the first response and then return the same response for each subsequent request.
Does anybody know how to tell whether a response has been cached, other than setting useCache on the URLRequest? Even that doesn't say whether a given response was a cached one or not. The digest isn't set on the URLRequest either, although the documentation mentions this is for swz content only, so how does it know whether the content is current or not? Are the response headers used to find out how long to hold the cache, i.e.
Cache-Control: max-age=900
Also, does anyone know how to flush/purge the cache, or are we at the whim of the GC, and in that case how does it know whether to leave an item in the cache or not?
This makes sense to me, but still I would like to know how to regulate this cache.
Furthermore: I've tested a setup where ten parallel URLLoaders are created, all opening the same URL, to see what happens in that case. It seems that each parallel request is made until a successful response is received; all subsequent calls are then cached. Requests that were already in flight before the successful one completed do not use the cache and return with correct data.
Additionally, the AIR runtime doesn't even send an "If-Modified-Since" header, so the cache isn't even honoring the HTTP protocol. It seems as if Adobe has implemented its own version of a cache that doesn't even use the HTTP/1.1 header field definitions. Perfect.
Thanks for any help.
Simon
From the documentation of the URLRequest class, it seems that it uses the operating system's HTTP cache. On Windows 7, it appears to be using IE's cache.
You can use an HTTP monitoring tool like Fiddler to verify this.
The first request is a 200, and subsequent requests are 304s. After clearing the IE cache and running the application again, you can see that the first request once more results in an HTTP 200 status.

What takes precedence: the ETag or Last-Modified HTTP header?

For two subsequent requests, which of the following two headers is given more weight by browsers should one of them change: ETag or Last-Modified?
According to RFC 2616, section 13.3.4, an HTTP/1.1 client MUST use the ETag in any cache-conditional requests, and if both an ETag and a Last-Modified are present, it SHOULD use both. The ETag header is considered a strong validator (see section 13.3.3) unless explicitly declared weak by the server, whereas the Last-Modified header is considered weak unless at least a minute's difference exists between it and the Date header. Note, however, that the server is not required to send either (but it SHOULD, if it can).
Note that the client does not check the headers to see if they have changed; it just blindly uses them in the next conditional request. It is up to the server to evaluate whether to send the requested content or a 304 Not Modified response. If the server only sends one, then the client will use that one alone (although only strong validators are useful for a Range request). Of course, it is also at the discretion of intermediate caches (unless they have been prevented from caching via Cache-Control directives) and the server as to how they will act upon the headers; the RFC states that they MUST NOT return a 304 Not Modified if the validators are inconsistent, but since the header values are generated by the server, it has quite a bit of leeway.
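As a sketch of that server-side evaluation, assuming a Node.js server with a single in-memory resource (all values are illustrative; a real server would also handle "*" and comma-separated ETag lists):

import { createServer } from "node:http";

const body = "hello, world";
const etag = '"v1"'; // strong validator, chosen by the server
const lastModified = new Date("2015-10-21T07:28:00Z").toUTCString();

createServer((req, res) => {
  const inm = req.headers["if-none-match"];
  const ims = req.headers["if-modified-since"];

  // The ETag comparison takes precedence; If-Modified-Since is only
  // consulted when the client sent no If-None-Match.
  const notModified =
    inm !== undefined
      ? inm === etag
      : ims !== undefined && Date.parse(ims) >= Date.parse(lastModified);

  res.writeHead(notModified ? 304 : 200, {
    ETag: etag,
    "Last-Modified": lastModified,
  });
  res.end(notModified ? undefined : body);
}).listen(8080);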
In practice, I have noticed that Chrome, Firefox, and IE 7+ all send both headers, if available. I also tested the behavior when sending modified headers, which confirmed what I had already suspected from the information in the RFC: the four clients I tested only sent conditional requests if the page(s) were refreshed or if it was the first time the page had been requested by the current process.
Isn't it more like an "OR" expression? In pseudocode:
if ETagFromServer != ETagOnClient || LastModifiedFromServer != LastModifiedOnClient
    GetFromServer
else
    GetFromCache
!= is the correct comparison operator. The client is required to keep the literal string received from the server, as conversions can create small differences. You cannot assume that 'newer is better'.
Why? Consider the case where the server operator reverts a bad version of a resource. The reverted version is OLDER - but correct.
The client must use the version currently offered by the server; it can use a cached version only if it is the same. Thus the server must check equality, not 'newer'.
