I populate a session cookie server-side on the response to a client request. Over the wire the response looks like the dump below; you can see that mycookie contains JSON with escaped quotes:
21:13:54.006488 IP (tos 0x0, ttl 64, id 45515, offset 0, flags [DF], proto TCP (6), length 303, bad cksum 0 (->89fb)!)
localhost.http-alt > localhost.57738: Flags [P.], cksum 0xff23 (incorrect -> 0x13f5), seq 1:252, ack 247, win 12751, options [nop,nop,TS val 1223327230 ecr 1223325750], length 251
0x0000: 4500 012f b1cb 4000 4006 0000 7f00 0001 E../..#.#.......
0x0010: 7f00 0001 1f90 e18a e6ce bb1d 282c d580 ............(,..
0x0020: 8018 31cf ff23 0000 0101 080a 48ea 7dfe ..1..#......H.}.
0x0030: 48ea 7836 4854 5450 2f31 2e31 2032 3030 H.x6HTTP/1.1.200
0x0040: 204f 4b0d 0a43 6f6e 7465 6e74 2d54 7970 .OK..Content-Typ
0x0050: 653a 2061 7070 6c69 6361 7469 6f6e 2f6a e:.application/j
0x0060: 736f 6e0d 0a43 6f6e 7465 6e74 2d4c 656e son..Content-Len
0x0070: 6774 683a 2032 330d 0a53 6574 2d43 6f6f gth:.23..Set-Coo
0x0080: 6b69 653a 2070 6965 6b61 726d 613d 227b kie:.mycookie="{
0x0090: 5c22 6372 6561 7465 645c 223a 2031 3438 \"created\":.148
0x00a0: 3132 3331 3633 325c 3035 3420 5c22 7365 1231632\054.\"se
0x00b0: 7373 696f 6e5c 223a 207b 5c22 7573 6572 ssion\":.{\"user
0x00c0: 5c22 3a20 5c22 686c 6565 6e65 795c 227d \":.\"my_name\"}
0x00d0: 7d22 3b20 4874 7470 4f6e 6c79 3b20 5061 }";.HttpOnly;.Pa
0x00e0: 7468 3d2f 0d0a 4461 7465 3a20 5468 752c th=/..Date:.Thu,
0x00f0: 2030 3820 4465 6320 3230 3136 2032 313a .08.Dec.2016.21:
0x0100: 3133 3a35 3120 474d 540d 0a53 6572 7665 13:51.GMT..Serve
0x0110: 723a 2050 7974 686f 6e2f 332e 3420 6169 r:.Python/3.4.ai
0x0120: 6f68 7474 702f 312e 312e 360d 0a0d 0a ohttp/1.1.6....
I use the following requests code to read the cookie on the client:
with requests.Session() as s:
    r = s.post(domain + 'login')
    c = s.cookies['mycookie']
And c looks like
'"{created: 1481233488\054 session: {user: hleeney}}"'
so c[0] is the '"' character.
I'm using aiohttp on the server side:
response = web.Response(...)
response.set_cookie('mycookie', json.dumps({"session": {...}}))
I'm not sure who to blame :D Can anyone help?
I suspect you might blame Python's http.cookies.SimpleCookie.
Current aiohttp master might help you; your problem looks similar to an already-solved issue.
As an option you might blame yourself: storing unsigned JSON in a cookie is a bad and insecure idea anyway. Usually people use base64-encoded and cryptographically signed strings.
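A minimal sketch of that signed-string approach, using only the Python standard library (the secret key and payload here are made up for illustration, and this is not the mechanism any of the libraries discussed actually use):

```python
import base64
import hashlib
import hmac
import json

SECRET = b"server-side-secret"  # hypothetical key; keep it out of source control

def sign_cookie(payload: dict) -> str:
    """Serialize, base64-encode, and HMAC-sign a session payload."""
    body = base64.urlsafe_b64encode(json.dumps(payload).encode())
    sig = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    return body.decode() + "." + sig

def verify_cookie(value: str) -> dict:
    """Check the signature and return the payload, or raise ValueError."""
    body, _, sig = value.rpartition(".")
    expected = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        raise ValueError("cookie signature mismatch")
    return json.loads(base64.urlsafe_b64decode(body))

cookie = sign_cookie({"session": {"user": "my_name"}})
# The resulting value is base64 plus a hex digest: no quotes, commas,
# spaces, or backslashes, so it never trips cookie-value restrictions.
```

Because the wire value stays inside the safe character set, neither http.cookies nor requests has any reason to quote or mangle it.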
UPD.
Sorry, aiohttp master will not help you: I missed that the data is mangled by requests, not aiohttp.
To answer 'who to blame' here is difficult. It's probably the user (me) for being a bit ignorant; if he were smarter he would never have run into this problem. But it could also be any of the below, depending on your point of view. It's an interesting case study in the software development life cycle and standards.
1) The authors of requests:
It is indeed a funny line of code in the requests library that was mangling the JSON. At the time of writing it overrides code from http/cookies.py to modify cookie values before returning them across the API. Now, the requests maintainers are really helpful and very cool. They acknowledge this flaw/sub-optimal implementation, although from one perspective it does not really defy RFC 6265 (which supposedly standardises cookie values). The flaw probably supports a 'feature' for compatibility with some server-side cookie code somewhere (my take on it). The module in which the flaw lives is earmarked for obsolescence, so a fix, with its potential interim backwards-compatibility break at a minor version number, is fairly deemed undesirable and a waste.
2) The authors of aiohttp_session:
Well gosh darn, these are the guys who are putting JSON into the cookie value! They are at fault... aren't they? Well, it's again complex. Their intent is to provide a simple API for secure sessions with aiohttp as a server, and they provide a few implementations. The one intended for live use is an encrypted cookie that stores session data as an encrypted JSON string. When it is encrypted there are no issues encoding/decoding the cookie; the cookie is not intended for reading on the client side, so it never exists there as JSON and JSON never gets transmitted. Another implementation they provide is a 'Simple' session storage. Here they forego the encryption and transmit the session as a raw JSON string. This is problematic because JSON isn't really supposed to be transmitted in a cookie value (see 3 below). However, the simple session storage is only meant for testing, not for live use. It might still be better to provide a simple storage that doesn't potentially blow up other APIs, but having that implementation (JSON without the encryption) probably provides some valuable test-coverage scenarios.
3) The authors of RFC 6265:
This RFC was supposed to be definitive in specifying the cookie standard, and it sure is better than what preexisted it. But I'm not convinced it's definitive. The spec for cookie-value is just a bit weird and picky IMHO. For one, the English below is open to slight misinterpretation; for two, there seems to be a typo in the omission of a comma; and for three... well, again, it's weird and picky IMHO (where the H stands for 'ignorant', because I'm fairly sure they know why it makes sense):
cookie-value = *cookie-octet / ( DQUOTE *cookie-octet DQUOTE )
cookie-octet = %x21 / %x23-2B / %x2D-3A / %x3C-5B / %x5D-7E
; US-ASCII characters excluding CTLs,
; whitespace DQUOTE, comma, semicolon,
; and backslash
The way things are these days, storing JSON in a cookie does not sound crazy, and people may want to do it notwithstanding the potential security holes. The Python HTTP APIs seem to me to turn a blind eye to non-compliant cookie values: they escape DQUOTE and send backslashes along with them. Anyhow, not my soapbox.
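For what it's worth, the cookie-octet grammar above translates directly into a quick validity check. This is just an illustrative Python sketch of the rule, not code from any of the libraries discussed:

```python
import re

# RFC 6265 cookie-octet: %x21 / %x23-2B / %x2D-3A / %x3C-5B / %x5D-7E
# (US-ASCII excluding CTLs, whitespace, DQUOTE, comma, semicolon, backslash)
COOKIE_OCTET = re.compile(r'^[\x21\x23-\x2B\x2D-\x3A\x3C-\x5B\x5D-\x7E]*$')

def is_valid_cookie_value(value: str) -> bool:
    """True when value matches RFC 6265 cookie-value (optionally DQUOTEd)."""
    if len(value) >= 2 and value[0] == value[-1] == '"':
        value = value[1:-1]  # the grammar allows one surrounding DQUOTE pair
    return bool(COOKIE_OCTET.match(value))

print(is_valid_cookie_value('abc123'))                   # True
print(is_valid_cookie_value('{"created": 1481231632}'))  # False: inner DQUOTEs
                                                         # and spaces
```

Note that curly braces themselves are legal cookie-octets; it is the quotes, spaces, commas, and backslashes inside serialized JSON that fall outside the grammar.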
4) The user:
Me. Starting out on this journey I was ignorant of all of the above: the cookie standards and their history, the Python http implementation, the requests implementation, the aiohttp_session implementation. I was needlessly testing the plaintext value of the cookie on the client side, although someone may have a genuine reason for doing this in the future. I kinda randomly selected requests to do the client-side stuff too, and so deserve to have had to delve into the source there.
So, in closing and in jest, I blame puny humanity for this one: smart enough to create complexity but not smart enough to avoid SDLC problems like this.
I am using an HttpsURLConnection call to get a response from an HTTP servlet with a message and error code. Following is a snippet from my code:
connection = (HttpsURLConnection) url.openConnection();
connection.setDoInput(true);
connection.setDoOutput(true);
connection.setUseCaches(false);
// Headers
connection.setRequestMethod("POST");
connection.setRequestProperty("Content-type", "text/xml");
connection.setRequestProperty("Accept", "text/plain");
connection.setRequestProperty("Connection", "Keep-Alive");
connection.setRequestProperty("Authorization", authorization);
connection.connect();
On the HttpServlet side, I am setting the status code and description:
response.setStatus(code);
response.getWriter().write(returnDescription);
All of the above is existing code and it works fine, with one exception: it should return the status code as the response code, but a few codes like 1001, 1002, or 1003 do not work. That is, if I set response.setStatus(1001), the client gets -1 from getResponseCode() with "java.io.IOException: Invalid Http response". For any other integer value, like 1101, 1102, or 1232, it works fine. I debugged the code and found that the servlet is setting the correct values but the client is not able to parse the response; change the status code to some other numeric value and it starts working correctly. I get the same behavior over HTTP as well as HTTPS.
It seems like these non-working codes are predefined codes with a specific purpose and cannot be used as status codes, but I didn't find anything on the web. Has anyone experienced the same, and what could be the reason?
Thanks in advance! :)
Short version: OpenJDK and others have a parseHttpHeader method that parses exactly three chars of the HTTP status code number, and anything starting with the string '100' is treated as an HTTP continue. The non-continued nature of this servlet conversation confused the client, so it couldn't open the output stream and gave up.
WAAAAY long version:
This one kinda bugged me, because only 100-599 (ish; actually fewer than that) status codes should really work at all. RFC 2616 says codes must be three digits, and (paraphrasing) you need only understand the class of the first digit (to allow for extensions).
OpenJDK 6's HttpURLConnection implementation was the first I checked (since you didn't specify), and the code basically does the following:
grab the first line of the response.
look for HTTP/1. (Doesn't care about 0.9 apparently, and ignores the second digit).
look for everything at the end for the text reason.
try to parse whatever int is in the middle.
GNU Classpath does pretty much the same.
Notably, OpenJDK doesn't particularly vet that against the RFC rules. You could put a billion in there and it would be more-or-less OK with that (at least as far as getResponseCode() cares, anyway...it looks like getInputStream() will barf on any code >=400 in the concrete implementation in sun.net.www.protocol...).
In any case, that didn't answer why you were seeing this oddball behavior for only 100x. OpenJDK looks like it should have thrown an IOException of the form "Server returned HTTP 1234...
...or so I thought. HttpURLConnection is abstract, so a concrete implementation must override at least the abstract methods. In the concrete implementation of HttpURLConnection, the abstract class's version of getResponseCode() is sorta ignored. Kinda. The implementation calls sun.net.www.http.HttpClient's parseHTTP as part of opening the input stream, which parses out the HTTP/1. and then exactly THREE characters of the code (and then does convoluted things to massage the input stream so all that stuff is retroactively shoved back in, in something called an HttpCapture. Yuck.). And if those three chars happen to come out to 100, then the client thinks it has to continue the conversation to get a working InputStream.
Since your servlet is actually done with the transaction already and it's not continuing, the client is getting confused about WTF your servlet is doing and is therefore returning an error (as it should per RFC).
So mystery solved I think. You could put pretty much anything beginning with "100" and get the same behavior (even "100xyz" if your servlet API lets you).
(Android, btw, also does this three-char parse.)
This all technically violates RFC (though, honestly, it's kind of a silly bug). Strictly speaking, only 2xx codes should be treated as totally OK to pass unmolested, but probably you could use a "000" status and pass OK (again, assuming your API lets you put an arbitrary string in there).
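To make the three-character parse concrete, here is a toy Python re-creation of the behavior described above. The real logic lives in sun.net.www.http.HttpClient's parseHTTP, not in this sketch:

```python
def parse_status(status_line: str) -> int:
    """Parse 'HTTP/1.x CODE reason', reading exactly three code characters,
    mimicking the JDK behavior discussed above."""
    if not status_line.startswith("HTTP/1."):
        raise IOError("Invalid Http response")
    # Skip past 'HTTP/1.x ' and take exactly three characters of the code.
    code = status_line.split(" ", 1)[1][:3]
    return int(code)

print(parse_status("HTTP/1.1 200 OK"))          # 200
print(parse_status("HTTP/1.1 1001 Custom"))     # 100: looks like a Continue!
print(parse_status("HTTP/1.1 100xyz whatever")) # 100 again
```

Anything whose first three code characters read "100" collapses to 100, which is exactly why 1001-1003 trip the continue handling while 1101 or 1232 sail through.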
Hope that answers your question!
Currently I am working on ChipCard EMV device decryption. Below is the related data I have after running a transaction (in TLV, i.e. Tag-Length-Value, format):
<DFDF54> --- It means KSN
0A
950003000005282005B4
<DFDF59> ---- per the instructions, this is called the Encrypted Data Primitive
82 ---- length of value in hex; when the value is more than 255 digits, the 0x82 two-byte length form is used
00D815F35E7846BF4F34E56D7A42E9D24A59CDDF8C3D565CD3D42A341D4AD84B0B7DBFC02DE72A57770D4F795FAB2CE3A1F253F22E0A8BA8E36FA3EA38EE8C95FEBA3767CDE0D3FBB6741A47BE6734046B8CBFB6044C6EE5F98C9DABCD47BC3FD371F777E7E1DCFA16EE5718FKLIOE51A749C7ECC736CB7780AC39DE062DAACC318219E9AAA26E3C2CE28B82C8D22178DA9CCAE6BBA20AC79AB985FF13611FE80E26C34D27E674C63CAC1933E3F9B1BE319A5D12D16561C334F931A5E619243AF398D9636B0A8DC2ED5C6D1C7C795C00D083C08953BC8679C60
I know the BDK for this device is 0123456789ABCDEFFEDCBA9876543210. Per the decryption instructions, DFDF59 contains the following tags:
FC<len>/* container for encrypted generic data */
F2<len>/*container for Batch Data*/
... /*Batch Data tags*/
F3<len>/*container for Reversal Data, if any*/
... /*Reversal Data tags*/
The instructions mention a "MAC variant of MSR DUKPT", where MAC stands for message authentication code, and say: "Parse the data through TLV format. For encrypted data tag, use TDES_Decrypt_CBC to decrypt it".
I tried 3DES DUKPT using the KSN, the BDK, and the encrypted data in DFDF59. It wouldn't work. Can anyone in the decryption field give me some advice? Our vendor is very reluctant to share their knowledge...
I have no idea what role the MAC really plays here in decryption; I thought a MAC was just an integrity check. I am using a 3DES DUKPT session key generated from the KSN and BDK. This works for other decryptions on this device, but doesn't work for DFDF59 (the chip card EMV decryption), which is why I have started to wonder whether I am using the right session key. Feel free to just throw ideas out there. Thank you!
If you look closely at DUKPT internals, it generates a transaction key out of the current future keys and the encryption counter. This transaction key for a specific KSN has several variants (which effectively are just XOR masks that you put on the transaction key to differentiate it for PIN, MAC request, MAC response, and data-encryption request/response usage). These variants mean that you use a different key to generate the PIN block than to encrypt data (so that you cannot, e.g., decrypt/attack the PIN block when able to choose the data buffer arbitrarily). Using the MAC variant means only that for the encryption operation you will be using a certain mask on the DUKPT transaction key.
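As an illustration, applying a variant is just an XOR over the 16-byte transaction key. This Python sketch uses commonly cited ANSI X9.24-1 mask values; treat both the masks and the sample key as assumptions to verify against your device's spec:

```python
# Commonly cited X9.24-1 variant masks (verify against your spec):
PIN_VARIANT  = bytes.fromhex("00000000000000FF00000000000000FF")
DATA_VARIANT = bytes.fromhex("0000000000FF00000000000000FF0000")

def apply_variant(transaction_key: bytes, mask: bytes) -> bytes:
    """XOR the 16-byte DUKPT transaction key with a variant mask."""
    return bytes(k ^ m for k, m in zip(transaction_key, mask))

# Hypothetical 16-byte transaction key, already derived from BDK + KSN:
txn_key = bytes.fromhex("0123456789ABCDEFFEDCBA9876543210")
data_key = apply_variant(txn_key, DATA_VARIANT)
# Note: for the data-encryption variant, X9.24 additionally TDES-encrypts
# each half of the masked key with itself before use (not shown here).
```

So if TDES_Decrypt_CBC of DFDF59 fails with your session key, the first thing to check is whether you applied the variant mask (and the extra data-key self-encryption step) the instructions' "MAC variant" wording implies.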
The response fragment below is part of a PROPFIND reply:
<D:response>
<D:href>https://dav.mystery-meat.com/top</D:href>
<D:propstat>
<D:prop>
<D:creationdate ns0:dt="dateTime.tz">1970-01-01T00:00:00Z</D:creationdate>
<D:getcontentlanguage>en</D:getcontentlanguage>
<D:getcontentlength>16384</D:getcontentlength>
<D:getcontenttype>httpd/unix-directory</D:getcontenttype>
<D:getlastmodified ns0:dt="dateTime.rfc1123">Thu, 01 Jan 1970 00:00:00 GMT</D:getlastmodified>
<D:resourcetype><D:collection/></D:resourcetype>
</D:prop>
<D:status>HTTP/1.1 200 OK</D:status>
</D:propstat>
</D:response>
The getcontentlength value isn't the total bytes of items within this directory. Is there any predefined meaning for this value in WebDAV or is it simply implementor-defined by each server that happens to report a value?
I.e. is it of any real use?
Read the RFC; as usual, it has a perfect definition:
Purpose: Contains the Content-Length header returned by a GET without accept headers.
If that isn't clear: it basically says that if you perform a GET request on the same resource with no Accept-* headers, the response will report a Content-Length equal to this value.
So if you have a WebDAV implementation that conforms to the standard, you should be able to easily test this by just executing a GET request on the collection. Chances are you'll get some automatically generated HTML response.
If the response to this GET request is a different size (in bytes) than reported via {DAV:}getcontentlength, it should be considered a bug.
I think in your particular case it might be a bug. The fact that the reported size for the collection is exactly a power of two leads me to believe that this particular server returns the result of stat() for that directory, which is simply how much space the directory listing takes up on the filesystem (the same number you see with ls).
If my hunch is true, the server basically has broken behavior.
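The check described above is easy to sketch in Python. The comparison itself is trivial; the fetch is left as a comment because it needs a live server, and the URL is the made-up one from the PROPFIND fragment:

```python
def matches_getcontentlength(reported: int, body: bytes) -> bool:
    """True when the GET response body is exactly `reported` bytes long."""
    return len(body) == reported

# RFC 4918 defines the property against a GET *without* Accept headers,
# so strip the defaults a client library would otherwise send, e.g. with
# the requests library (illustrative, not runnable without a server):
#   body = requests.get("https://dav.mystery-meat.com/top",
#                       headers={"Accept": None, "Accept-Encoding": None}).content
#   print(matches_getcontentlength(16384, body))
```

If the server's generated HTML listing is any size other than 16384 bytes, that confirms the stat()-based hunch above.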
I'm writing a small experimental HTTP server in Go using the net/http package, and I need all my replies to use 'identity' transfer encoding. However, the HTTP server in Go always returns responses using 'chunked' transfer encoding.
Is there any way to disable chunked encoding in the Go HTTP server?
It's not clear to me whether responding with "Transfer-Encoding: identity" is valid under the spec (I think maybe you should just leave it out), but...
Inspecting the code here, I see this inside the WriteHeader(code int) function (it's a little bit strange, but this function actually flushes all the headers to the socket):
} else if hasCL {
    w.contentLength = contentLength
    w.header.Del("Transfer-Encoding")
} else if w.req.ProtoAtLeast(1, 1) {
    // HTTP/1.1 or greater: use chunked transfer encoding
    // to avoid closing the connection at EOF.
    // TODO: this blows away any custom or stacked Transfer-Encoding they
    // might have set. Deal with that as need arises once we have a valid
    // use case.
    w.chunking = true
    w.header.Set("Transfer-Encoding", "chunked")
} else {
I believe "hasCL" in the first line above refers to having a Content-Length available. If it is available, the writer removes the "Transfer-Encoding" header altogether; otherwise, if the version is 1.1 or greater, it sets "Transfer-Encoding" to chunked. So, per this code, setting a Content-Length header on your response explicitly is what suppresses chunking; beyond that, because this happens immediately before writing to the socket, I don't think there's currently any way for you to change it.
Can someone please clear up a bit about the MDC and data encryption for me? In RFC 4880, it says:
The plaintext of the data to be encrypted is passed through the SHA-1 hash function, and the result of the hash is appended to the plaintext in a Modification Detection Code packet. The input to the hash function includes the prefix data described above; it includes all of the plaintext, and then also includes two octets of values 0xD3, 0x14. These represent the encoding of a Modification Detection Code packet tag and length field of 20 octets.
At first, it seems like the MDC (without its header data) is just: sha1([data]) -> hash_value
Then the second sentence, up to the semicolon, makes it seem like sha1(OpenPGP_CFB_extra_data + [data]) -> hash_value
The part after the semicolon makes it seem like I am supposed to do sha1([data] + "\xd3\x14") -> hash_value. (This doesn't make sense at all, but it seems to be what is written.)
What is going on?
After computing the correct MDC, what is done with it? Is it its own packet, or is something like this (according to my understanding) done?
tag18_header + encrypt(plaintext + "\xd3\x14" + 20-byte hash)
After reading RFC 4880 and parts of the GnuPG source code (g10/cipher.c seems to be the place where this is handled), I interpret it like this:
0xd3 is the MDC packet tag.
0x14 is the MDC packet length (20 bytes).
The MDC hash is computed like this:
MDC_hash = SHA-1(OpenPGP_CFB_extra_data + [plaintext] + "\xd3\x14")
Then this is appended to the plaintext message and encrypted:
encrypt(OpenPGP_CFB_extra_data + [plaintext] + "\xd3\x14" + MDC_hash)
When decrypting, the hash is verified by computing the SHA-1 of everything but the last 20 bytes and comparing the result to the last 20 bytes, as RFC 4880 writes (page 50):
During decryption, the plaintext data should be hashed with SHA-1, including the prefix data as well as the packet tag and length field of the Modification Detection Code packet. The body of the MDC packet, upon decryption, is compared with the result of the SHA-1 hash.
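Putting that interpretation into code, here is a Python sketch of the MDC computation and check. The CFB prefix data and the symmetric encryption itself are out of scope here; `prefix` simply stands in for the OpenPGP CFB extra data:

```python
import hashlib

def mdc_hash(prefix: bytes, plaintext: bytes) -> bytes:
    """SHA-1 over prefix || plaintext || MDC packet header (0xD3 0x14)."""
    return hashlib.sha1(prefix + plaintext + b"\xd3\x14").digest()

def plaintext_with_mdc(prefix: bytes, plaintext: bytes) -> bytes:
    """What actually gets encrypted: the data plus the trailing MDC packet."""
    return prefix + plaintext + b"\xd3\x14" + mdc_hash(prefix, plaintext)

def verify_mdc(decrypted: bytes) -> bool:
    """Hash everything but the last 20 bytes and compare, per RFC 4880."""
    return hashlib.sha1(decrypted[:-20]).digest() == decrypted[-20:]
```

The symmetry between the last two functions is the whole trick: because the 0xD3 0x14 header is included in the hash input, the verifier can hash the entire decrypted buffer minus the 20-byte digest without parsing packet boundaries.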