I'm trying to do a network analysis of an Android app to see what information Firebase is collecting. I'm running a man-in-the-middle proxy with Fiddler and can observe pings to Firebase's app-measurement server. But when I decrypt the content to see what's being passed, the message appears to be encoded or compressed somehow. The messages look like this when opened in a text editor:
POST https://app-measurement.com/a HTTP/1.1
Content-Length: 761
User-Agent: Dalvik/2.1.0 (Linux; U; Android 5.1; Google Nexus 6 Build/LMY47D)
Host: app-measurement.com
Connection: Keep-Alive
Accept-Encoding: gzip
Content-Type: application/x-www-form-urlencoded
öR
_c
_oauto
_r
_pfo
_sys
_uwa
_sysu _fΞ¶Þ. "
_oauto
_et_eΞ¶Þ. D
_si¹þ½‚ÌÓ¹¨_vsÓ랶Þ. ÎÏž¶Þ.Ξ¶Þ._fot €ºÀ¶Þ.Ξ¶Þ._fi ³Ÿ¶Þ._lte œ ²Ÿ¶Þ.(Ξ¶Þ.0Ó랶Þ.BandroidJ5.1RGoogle Nexus 6Zen-us`Ôýÿÿÿÿÿÿÿjmanual_installr<my-app-bundle-name>‚4.6.19166.0322ˆÄwÄwª <text here>°ñÊîüƒÝÏʸÊ'1:849883241272:android:<my-aaid> òdVZyNR1YbAMøºç’1d5adf7b1442fe22˜ŽºªÜàÙè ð
Is anyone familiar with this encoding? How can I decode it to view the raw text? I've tried gzip, but that doesn't seem to be it. Thank you!
It's not compressed or really obfuscated in any way; you can see plaintext data right there in the payload if you look carefully. What you're capturing is almost certainly a protocol buffer (protobuf) in its binary wire format, and you'll need to get hold of the protobuf definition (.proto file) to decode it properly, short of reverse engineering it yourself. That's not something Google discloses for API endpoints that aren't publicly documented.
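If you just want to peek at the structure without the schema, `protoc --decode_raw` will dump field numbers and raw values, or you can walk the wire format yourself. Here is a minimal sketch of a wire-format reader in Python; field *meanings* are unknowable without the .proto, and the sample payload below is the canonical example from the protobuf encoding docs, not a Firebase message:

```python
def read_varint(buf, i):
    """Decode a base-128 varint starting at index i; return (value, next_index)."""
    shift = result = 0
    while True:
        b = buf[i]
        i += 1
        result |= (b & 0x7F) << shift
        if not (b & 0x80):
            return result, i
        shift += 7

def parse_fields(buf):
    """Yield (field_number, wire_type, value) for each top-level field."""
    i = 0
    while i < len(buf):
        key, i = read_varint(buf, i)
        field, wtype = key >> 3, key & 7
        if wtype == 0:            # varint
            val, i = read_varint(buf, i)
        elif wtype == 1:          # 64-bit fixed
            val, i = buf[i:i + 8], i + 8
        elif wtype == 2:          # length-delimited: string, bytes, or nested message
            length, i = read_varint(buf, i)
            val, i = buf[i:i + length], i + length
        elif wtype == 5:          # 32-bit fixed
            val, i = buf[i:i + 4], i + 4
        else:
            raise ValueError(f"unsupported wire type {wtype}")
        yield field, wtype, val

# Canonical example from the protobuf docs: field 1 = varint 150,
# field 2 = the string "testing".
payload = bytes.fromhex("089601120774657374696e67")
for field, wtype, val in parse_fields(payload):
    print(field, wtype, val)
```

Length-delimited fields (wire type 2) are ambiguous without the schema: the same bytes could be a string, raw bytes, or a nested message, which is why a full decode really does need the definition.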
Our website is receiving HTTP requests from a user that contain a 'Coikoe' header instead of 'Cookie'.
The HTTP request object received from Firefox is shown below:
com.pratilipi.servlet.UxModeFilter doFilter: REQUEST : GET http://www.pratilipi.com/books/gujarati HTTP/1.1
Host: http//www.pratilipi.com
User-Agent: Mozilla/5.0 (Windows NT 6.2; WOW64; rv:39.0) Gecko/20100101 Firefox/39.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Referer: http://www.pratilipi.com/?action=login
Coikoe: _gat=1; visit_count=1; page_count=2
X-AppEngine-Country: XX
X-AppEngine-Region: xx
X-AppEngine-City: XXXXXX
X-AppEngine-CityLatLong: 12.232344,12.232445
The HTTP request object received from Google Chrome is shown below:
com.pratilipi.servlet.UxModeFilter doFilter: REQUEST : GET http//www.pratilipi.com/books/hindi HTTP/1.1
Host: http//www.pratilipi.com
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
User-Agent: Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/43.0.2357.132 Safari/537.36
Referer: http//www.pratilipi.com
Accept-Language: en-US,en;q=0.8,ta;q=0.6
Coikoe: _gat=1; visit_count=1; page_count=1
X-AppEngine-Country: XX
X-AppEngine-Region: xx
X-AppEngine-City: xxxxxx
X-AppEngine-CityLatLong: 12.232344,12.232445
The user is on a Windows 8 system.
Question: Why is this happening, and how can I solve it? I have never seen anything like this before. Has anyone come across anything like this?
Thank you
This user is probably behind some sort of privacy proxy.
The same thing happens to the Connection request header, as explained in "Cneonction and nnCoection HTTP headers": the proxy mangles the header so it won't be recognized by the receiver, but because it merely shuffles some letters around, the TCP packet's checksum remains the same.
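You can verify the checksum claim directly: "Cookie" → "Coikoe" swaps the bytes at offsets 2 and 4, which both sit in the high-order position of their 16-bit word, so an RFC 1071-style Internet checksum cannot tell the two apart. A quick Python check:

```python
def internet_checksum(data: bytes) -> int:
    """Ones'-complement sum of 16-bit big-endian words (RFC 1071 style)."""
    if len(data) % 2:
        data += b"\x00"                                # pad odd-length input
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)       # fold the carry back in
    return (~total) & 0xFFFF

# Swapping offsets 2 and 4 ("o" and "i") keeps both bytes in high-order
# word positions, so the sum, and hence the checksum, is unchanged.
print(internet_checksum(b"Cookie") == internet_checksum(b"Coikoe"))  # True
```

A swap that crosses word positions (e.g. "Cokoie") would change the checksum, which is why the proxy picks this particular shuffle.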
I'm gonna give a rather speculative answer based on some online research.
I went through all the specifications for cookies right from the early drafts and there doesn't seem to be anything about coikoe or misspelling cookies.
I found another user (Pingu) who complained about the same thing on Twitter. His relevant tweets:
(1) Weird problem: have a device that changes "Cookie" to "Coikoe" in TCP stream and don't know which it is. No deep packet inspection in place.
(2) There is a Linksys Wifi Router, a Cisco Switch adding a VLAN-Tag and a Linux box routing the VLAN to Internet router. Nothing else. #Coikoe
I then asked him about it earlier today. This was his reply:
It must have been something with my routing and iptables setup on the Linux box to allow the guests only limited access.
I can remember the problem. But do not remember how I solved it. It happened from Clients connected to my Guest WiFi.
Given my understanding of your discussion in the comments earlier, I'm guessing that the router sends a Coikoe header instead of a Cookie header when the user has limited connectivity and/or problems with the access point.
Also see this Ruby code for how they handled the different cookie header:
def set_cookie_header
  request.headers["HTTP_COOKIE"] ||= request.headers["HTTP_COIKOE"]
end
I looked at lots of other popular forums like Reddit, 4chan, Stack Overflow, Facebook and Google, but I could not find anything else. Good luck with your problem.
This looks like a simple typo. To confirm, run the following PowerShell command in the project directory:
Get-ChildItem -recurse | Select-String -pattern "Coikoe" | group path | select name
and hopefully you will find where the mistake was made.
I am writing an HTTP web server. My server has to handle HTTP multipart requests. In my previous implementation, I extracted the data using the Content-Length header present in every part of the request; the client I was using supplied a Content-Length header with every part (file) in the multipart request.
But another client does not give a Content-Length for each file, and my implementation uses that header to read exactly that many bytes and save them into a file.
Please tell me how I can extract the data now.
The headers I am getting now are:
POST xxxxxxxxxxxxxxxxxxxxxxx&currentTab=PHOTOxxxxxxxxxxxxxxxx HTTP/1.1
Content-Length: 6829
Content-Type: multipart/form-data; boundary=SnlCg9JqTpQIl6t_mPzByTjZ8bD24kUj; charset=UTF-8
Host: host
Connection: Keep-Alive
User-Agent: Apache-HttpClient/xxxxxxxx
Accept-Encoding: gzip
--SnlCg9JqTpQIl6t_mPzByTjZ8bD24kUj
Content-Disposition: form-data; name="file"; filename="imagesCA5L2CL6_jpg(2)_jpg.jpg"
Content-Type: photo/jpg
**Some Data byte array**
--SnlCg9JqTpQIl6t_mPzByTjZ8bD24kUj--
In this request, there is no Content-Length header in the part data.
EDIT:
Earlier this client used to send a Content-Length header with every part, but for some reason it no longer does. Can anybody suggest a reason for that?
Thanks
Like this: Reading file input from a multipart/form-data POST
Take a look at RFC 2616 if you want to implement an HTTP/1.1 server; see section 4.4 on how to determine message length. See RFC 2388 on how to implement multipart/form-data.
The real answer is: don't reinvent the wheel, or you'll have to reimplement a few hundred pages of RFCs. There are tons of libraries and servers out there.
If you do want to write your own web server, for example as an exercise, you would have found those RFCs already, right?
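The key point is that the boundary, not a per-part Content-Length, delimits each part; per-part Content-Length is optional and many clients omit it. A naive Python sketch of boundary-based splitting (it assumes CRLF line endings and ignores edge cases such as the boundary bytes occurring inside a part's data; a real parser must match the boundary only at line starts):

```python
def parse_multipart(body: bytes, boundary: str):
    """Split a multipart body on its boundary and return (headers, data)
    pairs. The boundary alone delimits each part, so no per-part
    Content-Length header is needed."""
    delim = b"--" + boundary.encode()
    parts = []
    for chunk in body.split(delim):
        chunk = chunk.strip(b"\r\n")
        if not chunk or chunk == b"--":      # preamble or the closing "--" marker
            continue
        header_blob, _, data = chunk.partition(b"\r\n\r\n")
        headers = {}
        for line in header_blob.split(b"\r\n"):
            name, _, value = line.partition(b":")
            headers[name.strip().lower()] = value.strip()
        parts.append((headers, data))
    return parts

# A body shaped like the capture in the question:
body = (b"--SnlCg9JqTpQIl6t_mPzByTjZ8bD24kUj\r\n"
        b'Content-Disposition: form-data; name="file"; filename="a.jpg"\r\n'
        b"Content-Type: photo/jpg\r\n"
        b"\r\n"
        b"**Some Data byte array**\r\n"
        b"--SnlCg9JqTpQIl6t_mPzByTjZ8bD24kUj--\r\n")
for headers, data in parse_multipart(body, "SnlCg9JqTpQIl6t_mPzByTjZ8bD24kUj"):
    print(headers[b"content-disposition"], len(data))
```

In practice you would stream the body and scan for `CRLF--boundary` rather than loading it all into memory, but the delimiting logic is the same.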
I'm seeing a behaviour in Firefox which seems to me unexpected. This is not particularly repeatable (unfortunately) but does crop up from time to time. Once it starts, it is repeatable on refreshing the page until a full refresh (ctrl-f5) is done. Last time, I managed to get a trace.
Basically, FF4.0.1 is requesting a resource (from an ASP.NET MVC 3 application running under IIS7):
GET http://www.notarealdomain.com/VersionedContent/Scripts/1.0.40.5653/jquery.all.js HTTP/1.1
Host: www.notarealdomain.com
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:2.0.1) Gecko/20100101 Firefox/4.0.1
Accept: */*
Accept-Language: en-gb,en;q=0.5
Accept-Encoding: gzip, deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive: 115
Proxy-Connection: keep-alive
Referer: http://www.notarealdomain.com/gf
If-Modified-Since: Sat, 01 Jan 2000 00:00:00 GMT
It then gets the following response from the server (via a proxy, but I can see in the IIS server logs that the request was made all the way to the server):
HTTP/1.1 304 Not Modified
Date: Mon, 09 May 2011 14:00:47 GMT
Cache-Control: public
Via: 1.1 corp-proxy (NetCache NetApp/6.0.3P2D5)
This seems reasonable - the client makes a conditional request (if-modified-since), and the server responds "ok - use your copy" (304 Not Modified).
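The server side of that exchange boils down to a date comparison; a minimal sketch (hypothetical helper name, assuming well-formed HTTP dates):

```python
from datetime import datetime, timezone
from email.utils import parsedate_to_datetime

def conditional_get_status(resource_mtime, if_modified_since):
    """Return 304 when the client's cached copy is still current,
    otherwise 200 (the body would be resent)."""
    if if_modified_since is not None:
        try:
            cached = parsedate_to_datetime(if_modified_since)
        except (TypeError, ValueError):
            return 200                 # unparsable date: ignore the header
        if resource_mtime <= cached:
            return 304                 # not modified: client should use its cache
    return 200

# A resource last touched in mid-1999, checked against the header
# from the capture above:
mtime = datetime(1999, 6, 1, tzinfo=timezone.utc)
print(conditional_get_status(mtime, "Sat, 01 Jan 2000 00:00:00 GMT"))  # 304
```

Note the 304 only tells the client *that* its copy is valid; it says nothing about whether the cached copy is intact, which turns out to matter below.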
The problem is that the client in this case then does not serve up the file; it behaves as if there were no content (i.e., an image doesn't appear, a .js file behaves as if it were missing from the page, a .css file leaves the page rendered without its styles, etc.). This is apparent both on the web page itself and in the excellent HttpWatch tool. HttpWatch clearly shows that the browser did have the item in cache, but didn't use it as the content source.
What am I missing here? The above conversation seems reasonable, so why does FF make a conditional request and then not use its cached copy when told to do so? If you then ctrl-F5 to force a full refresh, the behaviour goes away, and only returns sporadically.
We also have some anecdotal evidence that this occurs with FF3 and Chrome too, but we haven't confirmed this with any forensic data.
Has anyone else seen this, and does anyone know what might be going wrong, or what further steps might isolate the problem?
OK - I got to the bottom of this problem. As ever - the answer was staring me in the face!
The problem was not with the caching behaviour at all. Another part of the system was sporadically failing and writing out a zero-length file, which was then cached by Firefox.
So when Firefox sent a conditional request to the server and received its 304 Not Modified, it dutifully went off and used the corrupt zero-length version of the file that it found in its cache.
All the signs were there for me to see - it just took a bit of time to get there :)
Thanks all, for your comments and suggestions.
I'm using Apache Abdera to POST atom multipart data to my server, and am having some odd problems that I can't pin down.
It looks like an issue with chunked transfer encoding, but I'm insufficiently experienced to be certain. The problem manifests as the server throwing an error indicating that the request I sent it contains only one mime part, not two as required. I attached Wireshark to the interface and captured the conversation, and it went like this:
POST /sss/col-uri/2ee98ea1-f9ad-4f01-9b1c-cfa3c4a6dc3c HTTP/1.1
Host: localhost
Expect: 100-continue
Transfer-Encoding: chunked
Content-Type: multipart/related; boundary="1306399868259";type="application/atom+xml;type=entry"
The server's response:
HTTP/1.1 100 Continue
My client continues:
198
--1306399868259
Content-Type: application/atom+xml;type=entry
Content-Disposition: attachment; name="atom"
<entry xmlns="http://www.w3.org/2005/Atom"><title xmlns="http://purl.org/dc/terms/">Richard Woz Ere</title><bibliographicCitation xmlns="http://purl.org/dc/terms/">this is my citation</bibliographicCitation><content type="application/zip" src="cid:48bd9436-e8b6-4f68-aa83-5c88eda52fd4" /></entry>
0
b0e9
--1306399868259
Content-Type: application/zip
Content-Disposition: attachment; name="payload"; filename="example.zip"
Content-ID: <48bd9436-e8b6-4f68-aa83-5c88eda52fd4>
Packaging: http://purl.org/net/sword/package/SimpleZip
And at this point the server responds with:
HTTP/1.1 400 Bad Request
Date: Thu, 26 May 2011 08:51:08 GMT
Server: Apache/2.2.17 (Unix) mod_ssl/2.2.17 OpenSSL/0.9.8l DAV/2 mod_wsgi/3.3 Python/2.6.1
Connection: close
Transfer-Encoding: chunked
Content-Type: text/xml
Indicating the error (which is well understood). My client goes on to stream a pile of base64-encoded bits onto the output stream, but in the meantime the server is no longer listening; it has already decided that the request was erroneous.
Unfortunately, I'm not in charge of the HTTP layer - this is all handled by Abdera using Apache httpclient. My code that does this looks like this:
client.execute("POST", url.toString(), new SWORDMultipartRequestEntity(deposit), options);
Here, the SWORDMultipartRequestEntity is a copy of the standard Abdera MultipartRequestEntity class, with a few extra headers thrown in (see, for example, Packaging in the above snippet); the "deposit" argument is just an object holding the atom part and the inputstream.
When attaching a debugger I get to this line of code fine, and then it disappears into a rat hole and then I get this error back.
Any hints or tips? I've pretty much exhausted my angles of attack!
The only thing that stands out for me is that immediately after the atom:entry document, there is a newline with "0" on it alone, which appears to be chunked transfer encoding speak for "I'm finished". Not sure how it got there, or whether it really has any effect. Help much appreciated.
Cheers,
Richard
The lonely 0 may indeed be the problem. My uninformed guess is that it results from some call to flush(), which then writes the whole buffer as another HTTP chunk. Unfortunately, at the point where flush is called, the buffer has already been flushed and its size is therefore zero, and a zero-length chunk is exactly how chunked encoding marks the end of the body. So the HttpChunkedOutputFilter (or whatever it is called) should be taught that an empty buffer does not need to be flushed.
[Update:] You should set a breakpoint in the ChunkedOutputStream class, especially the flush method. I just looked at its code and it seems to be OK, but maybe I missed something.
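To make the failure mode concrete, here is a toy chunked-encoding writer in Python (a hypothetical class for illustration, not Abdera's or httpclient's actual code). Each flush emits one chunk: a hex length line, the data, and a CRLF. Flushing an empty buffer emits "0\r\n\r\n", which the peer reads as end-of-body, so everything written afterwards, like the second mime part, is ignored:

```python
class ChunkedEncoder:
    """Toy chunked transfer-encoding writer, illustrating why flushing an
    empty buffer is fatal: a zero-length chunk is the body terminator."""

    def __init__(self):
        self.wire = bytearray()   # bytes that would go on the socket
        self.buf = bytearray()    # pending, not-yet-chunked data

    def write(self, data: bytes):
        self.buf += data

    def flush(self, guard_empty: bool = True):
        if guard_empty and not self.buf:
            return                # correct behaviour: nothing to send
        # With guard_empty=False this reproduces the bug: an empty buffer
        # becomes the "0\r\n\r\n" end-of-body marker.
        self.wire += b"%x\r\n%s\r\n" % (len(self.buf), bytes(self.buf))
        self.buf.clear()

enc = ChunkedEncoder()
enc.write(b"part one")
enc.flush()
enc.flush(guard_empty=False)   # stray flush: emits the lone "0" chunk
enc.write(b"part two")         # the peer never sees this as part of the body
enc.flush()
print(bytes(enc.wire))
```

This matches the capture: the atom part, then a bare "0", then the zip part that the server never processes.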
Basically, I was capturing packets with Wireshark on my PS3 while viewing the Motorstorm leaderboards. The leaderboards are sent to my PS3 in XML format, but only after I have been authorized. Can someone please tell me what is happening between these three packets and how I could replicate it in a browser?
Packet 1 From my PS3 to Sony Servers
POST /ranking_view/func/get_player_rank HTTP/1.1
Host: ranking-view-a01.u0.np.community.playstation.net
Connection: Keep-Alive
Content-Length: 213
Authorization: Digest username="c7y-ranking01", realm="c7y-ranking", nonce="2SpsV4WABAA=47a2b36030cd94de1190f6b9f05db1bd5584bc2a", uri="/ranking_view/func/get_player_rank", qop="auth", nc="00000001", cnonce="d4eb1eb60ab4efaea1476869d83a6e0b", response="96b55c6e79f84dd41b46eb66bed1c167"
Accept-Encoding: identity
User-Agent: PS3Community-agent/1.0.0 libhttp/1.0.0
<?xml version="1.0" encoding="utf-8"?><ranking platform="ps3" sv="3.15"><titleid>NPWR00012_00</titleid><board>7</board><jid>Panzerborn#a5.gb.np.playstation.net</jid><option message="false" info="false"/></ranking>
Packet 2 Sony Server Response to my PS3
Date: Fri, 26 Feb 2010 19:06:12 GMT
WWW-Authenticate: Digest realm="c7y-ranking", nonce="a3PFl4WABAA=6d375259676ec79641448a8032a795b8e12ccae4", algorithm=MD5, stale=true, qop="auth"
Content-Length: 401
Keep-Alive: timeout=15, max=100
Connection: Keep-Alive
Content-Type: text/html; charset=iso-8859-1
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>401 Authorization Required</title>
</head><body>
<h1>Authorization Required</h1>
<p>This server could not verify that you
are authorized to access the document
requested. Either you supplied the wrong
credentials (e.g., bad password), or your
browser doesn't understand how to supply
the credentials required.</p>
</body></html>
Packet 3 PS3 response to Sony Servers last packet
POST /ranking_view/func/get_player_rank HTTP/1.1
Host: ranking-view-a01.u0.np.community.playstation.net
Connection: Keep-Alive
Authorization: Digest username="c7y-ranking01", realm="c7y-ranking", nonce="a3PFl4WABAA=6d375259676ec79641448a8032a795b8e12ccae4", uri="/ranking_view/func/get_player_rank", qop="auth", nc="00000001", cnonce="58869490a891002d8c56573496274a3a", response="ca3d6f252d4e398b8f751c201a3f8f08"
Accept-Encoding: identity
User-Agent: PS3Community-agent/1.0.0 libhttp/1.0.0
<?xml version="1.0" encoding="utf-8"?><ranking platform="ps3" sv="3.15"><titleid>NPWR00012_00</titleid><board>7</board><jid>Panzerborn#a5.gb.np.playstation.net</jid><option message="false" info="false"/></ranking>
I tried to replicate this in Firefox by tampering with the headers, as well as with PHP cURL, but I'm getting nowhere. I assume it has to do with the nonce, cnonce and response values that keep changing. Please help!
Nonce, cnonce and so on are part of HTTP Digest Authentication, an authentication mechanism that avoids sending the password in plain text. So if you want to cheat in your PS3 game, you'll first have to recover that password from the MD5 hash, I guess.
Also, these aren't called HTTP packets; at layer 7 you would usually say request/response or similar.
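Concretely, the response value is an MD5 digest over the server's nonce, the client's cnonce, the request line, and, crucially, a password you don't have (here, whatever secret belongs to the "c7y-ranking01" account). That's why replaying captured values fails as soon as the server issues a fresh nonce. A sketch of the RFC 2617 computation (qop=auth, algorithm=MD5), checked against the RFC's own worked example:

```python
import hashlib

def md5hex(s: str) -> str:
    return hashlib.md5(s.encode()).hexdigest()

def digest_response(user, realm, password, method, uri,
                    nonce, nc, cnonce, qop="auth"):
    """RFC 2617 Digest 'response' for algorithm=MD5 with qop=auth."""
    ha1 = md5hex(f"{user}:{realm}:{password}")   # the secret-dependent half
    ha2 = md5hex(f"{method}:{uri}")              # the request-dependent half
    return md5hex(f"{ha1}:{nonce}:{nc}:{cnonce}:{qop}:{ha2}")

# Worked example from RFC 2617, section 3.5:
resp = digest_response("Mufasa", "testrealm@host.com", "Circle Of Life",
                       "GET", "/dir/index.html",
                       "dcd98b7102dd2f0e8b11d0f600bfb0c093",
                       "00000001", "0a4f113b")
print(resp)  # 6629fae49393a05397450978507c4ef1
```

The stale=true in packet 2 simply means the old nonce expired; the PS3 recomputes the response with the new nonce, which is what you'd have to do too, and that requires the password.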
The nonce and cnonce look like hash codes.
One possible defense mechanism against cheaters could be this:
def ps3client_send_score():
    score = "bazillion points"
    nonce = md5(score + "something you don't know about")
    send_to_server(score, nonce)
On the server side:
def get_client_score(score, nonce):
    if md5(score + "something you don't know about") == nonce:
        accept_score(score)
    else:
        reject_score_and_ban_the_fool_if_he_continues_this()
So unless you want to spend weeks trying to find the salt deep in your game, forget it.