HttpBatchResponseMessage responses never fully return on my machine: only the headers come back and the request never completes. I tried the following OData sample, which works just fine on my colleague's machine:
http://aspnet.codeplex.com/SourceControl/latest#Samples/WebApi/OData/v3/ODataEFBatchSample/ReadMe.txt
(super simple to try, download, start, show network requests in the browser or fiddler)
I would really appreciate ideas for further investigations!
The response from a POST to http://localhost:15625/odata/$batch returns only the headers, without any content:
Cache-Control:no-cache
Content-Length:2114
Content-Type:multipart/mixed; boundary=batchresponse_1c6b4001-17b8-41bb-9579-970cdc358dba
DataServiceVersion:3.0
Date:Tue, 10 Mar 2015 17:03:40 GMT
Expires:-1
Pragma:no-cache
Server:Microsoft-IIS/8.0
X-AspNet-Version:4.0.30319
X-Powered-By:ASP.NET
X-SourceFiles:=?UTF-8?B?QzpcVXNlcnNca2phcnRhblxEb3dubG9hZHNcT0RhdGFFRkJhdGNoU2FtcGxlXE9EYXRhRUZCYXRjaFNhbXBsZVxvZGF0YVwkYmF0Y2g=?=
Figured out ...
The returned headers already state the length of the response (Content-Length: 2114). I tried injecting some custom classes and figured out:
It works on my colleague's computer.
The problem occurs when returning an ODataBatchContent; it worked when I used a StringContent instead (which doesn't do the job, but narrows it down).
The ODataBatchContent serializes perfectly fine when written to a MemoryStream.
I tried
Disabling any firewall, antivirus software, etc.
Updating Visual Studio etc.; my configuration is nearly identical to my colleague's.
Debugging ... a lot
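Incidentally, the multipart/mixed framing of a batch response can be sanity-checked outside the .NET stack. Below is a minimal Python sketch, with a made-up boundary and a single hypothetical sub-response, that parses such a body with the standard library's email parser:

```python
from email.parser import Parser

# Hypothetical minimal batch body; the boundary and inner response are made up.
raw = (
    "Content-Type: multipart/mixed; boundary=batchresponse_demo\r\n"
    "\r\n"
    "--batchresponse_demo\r\n"
    "Content-Type: application/http\r\n"
    "\r\n"
    "HTTP/1.1 200 OK\r\n"
    "\r\n"
    "--batchresponse_demo--\r\n"
)

msg = Parser().parsestr(raw)
parts = msg.get_payload()  # for multipart/* this is a list of sub-messages
```

If a captured body parses into zero parts, the batch content never made it onto the wire, which would match the symptom here.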
Request
POST to /odata/$batch
Status Code:202 Accepted
Headers:
Accept:multipart/mixed
Accept-Encoding:gzip, deflate
Accept-Language:de-DE,de;q=0.8,en-US;q=0.6,en;q=0.4
Cache-Control:no-cache
Connection:keep-alive
Content-Length:1201
Content-Type:multipart/mixed;boundary=batch_b58d-91a6-2b46
DataServiceVersion:1.0
Host:localhost:15625
MaxDataServiceVersion:3.0
Origin:http://localhost:15625
Pragma:no-cache
Referer:http://localhost:15625/Index.html
User-Agent:Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/40.0.2214.115 Safari/537.36
Thanks.
Related
I'm trying to do a network analysis of an Android app to see what information Firebase is collecting. I'm doing a man-in-the-middle using Fiddler and can observe pings to Firebase's app-measurement server. But when I decrypt the content to try to see what's being passed, it looks like the message is encoded or compressed somehow. The messages look like this when opened in a text editor:
POST https://app-measurement.com/a HTTP/1.1
Content-Length: 761
User-Agent: Dalvik/2.1.0 (Linux; U; Android 5.1; Google Nexus 6 Build/LMY47D)
Host: app-measurement.com
Connection: Keep-Alive
Accept-Encoding: gzip
Content-Type: application/x-www-form-urlencoded
öR
_c
_oauto
_r
_pfo
_sys
_uwa
_sysu _fΞ¶Þ. "
_oauto
_et_eΞ¶Þ. D
_si¹þ½‚ÌÓ¹¨_vsÓ랶Þ. ÎÏž¶Þ.Ξ¶Þ._fot €ºÀ¶Þ.Ξ¶Þ._fi ³Ÿ¶Þ._lte œ ²Ÿ¶Þ.(Ξ¶Þ.0Ó랶Þ.BandroidJ5.1RGoogle Nexus 6Zen-us`Ôýÿÿÿÿÿÿÿjmanual_installr<my-app-bundle-name>‚4.6.19166.0322ˆÄwÄwª <text here>°ñÊîüƒÝÏʸÊ'1:849883241272:android:<my-aaid> òdVZyNR1YbAMøºç’1d5adf7b1442fe22˜ŽºªÜàÙè ð
Is anyone familiar with this encoding? How can I decode to view the raw text? I've tried gzip, but that doesn't seem to be it. Thank you!
It's not compressed or really obfuscated in any way. You can see plaintext data right there in the payload if you look carefully. What you're capturing is almost certainly a protobuf in binary form, and you'll have to get a hold of the protobuf definition in order to decode it without putting in the work to reverse engineer it. That's not something Google discloses for API endpoints that are not publicly documented.
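If you only need the field structure and not the meanings, "protoc --decode_raw < payload.bin" will dump it. The wire format itself is simple enough to sketch by hand; here is a minimal Python reader for the two most common wire types (varint and length-delimited), run against a hand-built sample message rather than the real payload:

```python
def read_varint(buf, i):
    """Read a base-128 varint starting at offset i; return (value, next offset)."""
    value = shift = 0
    while True:
        byte = buf[i]
        value |= (byte & 0x7F) << shift
        i += 1
        if not byte & 0x80:
            return value, i
        shift += 7

def scan_fields(buf):
    """Yield (field_number, wire_type, payload) for each top-level field."""
    i = 0
    while i < len(buf):
        key, i = read_varint(buf, i)
        field, wtype = key >> 3, key & 7
        if wtype == 0:                       # varint
            payload, i = read_varint(buf, i)
        elif wtype == 2:                     # length-delimited: strings, messages
            length, i = read_varint(buf, i)
            payload, i = buf[i:i + length], i + length
        else:                                # fixed32/fixed64 omitted in this sketch
            break
        yield field, wtype, payload

# Hand-built sample: field 1 = varint 150, field 2 = bytes "hi".
sample = bytes([0x08, 0x96, 0x01, 0x12, 0x02]) + b"hi"
fields = list(scan_fields(sample))
```

Length-delimited fields whose bytes happen to be printable are exactly the plaintext fragments visible in the capture.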
I have a strange problem, and I don't know how to inspect the background request.
I have a newly updated site, "whitestudio.org", and I am trying to test it in the service mentioned above. However, Google returns a 503 response, and I don't know what to do.
I also tried testing from the command line, which succeeded. Can anyone help me?
curl -A "Mozilla/5.0 (compatible; GoogleBot/2.1; +http://www.google.com/bot.htm)" whitestudio.org
By the way, the PageSpeed bot User-Agent for mobile is:
Mozilla/5.0 (iPhone; CPU iPhone OS 8_3 like Mac OS X) AppleWebKit/537.36 (KHTML, like Gecko; Google Page Speed Insights) Version/8.0 Mobile/12F70 Safari/600.1.4
(Noted for those who want to test in the future.)
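The same spoofed-User-Agent test can be scripted in Python; a small sketch using only the standard library (URL and UA string copied from the question):

```python
import urllib.request

# Desktop Googlebot-style UA string from the question.
UA = "Mozilla/5.0 (compatible; GoogleBot/2.1; +http://www.google.com/bot.htm)"

req = urllib.request.Request("http://whitestudio.org/",
                             headers={"User-Agent": UA})
# urllib.request.urlopen(req) would perform the fetch; a 503 here would
# reproduce what PageSpeed sees, while a 200 would point elsewhere.
```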
In the end, the error was caused by an incorrectly configured "A" record (DNS configuration); we had migrated from one host to another.
Our website is receiving HTTP requests from a user which contain a 'Coikoe' header instead of 'Cookie'.
The HTTP request received from Firefox is shown below:
com.pratilipi.servlet.UxModeFilter doFilter: REQUEST : GET http://www.pratilipi.com/books/gujarati HTTP/1.1
Host: http//www.pratilipi.com
User-Agent: Mozilla/5.0 (Windows NT 6.2; WOW64; rv:39.0) Gecko/20100101 Firefox/39.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Referer: http://www.pratilipi.com/?action=login
Coikoe: _gat=1; visit_count=1; page_count=2
X-AppEngine-Country: XX
X-AppEngine-Region: xx
X-AppEngine-City: XXXXXX
X-AppEngine-CityLatLong: 12.232344,12.232445
The HTTP request received from Google Chrome is shown below:
com.pratilipi.servlet.UxModeFilter doFilter: REQUEST : GET http//www.pratilipi.com/books/hindi HTTP/1.1
Host: http//www.pratilipi.com
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
User-Agent: Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/43.0.2357.132 Safari/537.36
Referer: http//www.pratilipi.com
Accept-Language: en-US,en;q=0.8,ta;q=0.6
Coikoe: _gat=1; visit_count=1; page_count=1
X-AppEngine-Country: XX
X-AppEngine-Region: xx
X-AppEngine-City: xxxxxx
X-AppEngine-CityLatLong: 12.232344,12.232445
The user is on a Windows 8 system.
Question: Why is this happening, and how can I solve it? I have never seen anything like this before. Has anyone come across anything like this?
Thank you
This user is most likely behind some sort of privacy proxy.
The same happens with the Connection request header, as explained in "Cneonction and nnCoection HTTP headers": the proxy mangles the header so it won't be recognized by the receiver, but because it merely shuffles some letters around, the TCP packet's checksum remains the same.
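"Cookie" to "Coikoe" swaps the letters at offsets 2 and 4, both high bytes of their 16-bit words, so the RFC 1071 ones'-complement checksum used by TCP is unchanged (assuming the header name starts at an even offset in the checksummed data). A small Python sketch verifies this:

```python
def inet_checksum(data: bytes) -> int:
    """RFC 1071 internet checksum: ones'-complement sum of 16-bit words."""
    if len(data) % 2:
        data += b"\x00"                           # pad odd-length input
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)  # fold the carry back in
    return ~total & 0xFFFF
```

inet_checksum(b"Cookie") equals inet_checksum(b"Coikoe"), while a parity-breaking shuffle such as b"Cooike" yields a different value.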
I'm going to give a rather speculative answer based on some online research.
I went through all the specifications for cookies, right from the early drafts, and there doesn't seem to be anything about 'Coikoe' or misspelled cookie headers.
I found another user (Pingu) who complained about the same thing on Twitter. His relevant tweets:
(1) Weird problem: have a device that changes "Cookie" to "Coikoe" in TCP stream and don't know which it is. No deep packet inspection in place.
(2) There is a Linksys Wifi Router, a Cisco Switch adding a VLAN-Tag and a Linux box routing the VLAN to Internet router. Nothing else. #Coikoe
I then asked him about it earlier today. This was his reply:
It must have been something with my routing and iptables setup on the Linux box to allow the guests only limited access.
I can remember the problem. But do not remember how I solved it. It happened from Clients connected to my Guest WiFi.
Given my understanding from your discussion in the comments earlier, I'm guessing that the router sends a coikoe header instead of a cookie if the user has limited connectivity and/or problems with the access point.
Also see this Ruby code for how they handled the different cookie header:
def set_cookie_header
request.headers["HTTP_COOKIE"] ||= request.headers["HTTP_COIKOE"]
end
I looked at lots of other popular sites like Reddit, 4chan, Stack Overflow, Facebook, and Google, but I could not find anything else. Good luck with your problem.
Well, this looks like a typo somewhere. To confirm, run the following PowerShell command in the project directory:
Get-ChildItem -Recurse | Select-String -Pattern "Coikoe" | Group-Object Path | Select-Object Name
and hopefully you will find where the mistake was made.
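If PowerShell is not available, a rough cross-platform equivalent of that search can be sketched in Python (the function name is my own):

```python
import os

def find_pattern(root, pattern):
    """Walk `root` and return files whose text contains `pattern`."""
    hits = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                with open(path, encoding="utf-8", errors="ignore") as fh:
                    if pattern in fh.read():
                        hits.append(path)
            except OSError:
                continue                     # unreadable file: skip it
    return hits
```

find_pattern(".", "Coikoe") would list every offending file under the project directory.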
Q:
I have a web application published on a server. When I use it from another city, the performance is bad and everything is slow.
Should I make enhancements to my code, or is this mostly related to network factors?
Any advice would be appreciated.
Error/Status Code: 200
Start Offset: 0.194 s
Initial Connection: 193 ms
Time to First Byte: 286 ms
Content Download: 1286 ms
Bytes In (downloaded): 48.6 KB
Bytes Out (uploaded): 0.4 KB
Request Headers:
GET /sch/ScheduleForm.aspx HTTP/1.1
User-Agent: Mozilla/5.0 (Windows NT 6.1; rv:7.0.1) Gecko/20100101 Firefox/7.0.1 PTST/25
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-us,en;q=0.5
Accept-Encoding: gzip, deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Connection: keep-alive
Response Headers:
HTTP/1.1 200 OK
Date: Wed, 21 Dec 2011 15:17:00 GMT
Server: Microsoft-IIS/6.0
MicrosoftOfficeWebServer: 5.0_Pub
X-Powered-By: ASP.NET
X-AspNet-Version: 2.0.50727
Set-Cookie: ASP.NET_SessionId=ane2ncmyyoqwckjmv4bijq45; path=/; HttpOnly
Cache-Control: private
Content-Type: text/html; charset=utf-8
Content-Length: 4938
First of all, you must find out whether the delay is caused by the network, the server itself, the client computer, or your page design.
It is rare to see bad performance only because you change city; maybe the other computer has a slow connection, a bad configuration, or a bad ISP, and that is why it looks slow.
Client Quick Check
The fastest way to quickly check the network response is to ping your site: open a command prompt window, run "ping www.yoursite.com -t", and look at the reported times. If the server is in the same country, they should be under 50 ms.
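If ICMP happens to be blocked, a similar quick check can be done by timing the TCP handshake instead; a small Python sketch (the host name is a placeholder):

```python
import socket
import time

def connect_time_ms(host, port=80, timeout=3.0):
    """Time a TCP handshake to host:port, in milliseconds."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass                                  # connection closed immediately
    return (time.perf_counter() - start) * 1000.0

# connect_time_ms("www.yoursite.com") would give a rough latency figure.
```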
Network - Page design
Second, for a global check, you can use http://www.webpagetest.org/ to test the speed of your page from different locations and get very interesting results, such as the response times.
Server
There is always the chance that you have placed your site on a shared server with thousands of sites and a bad configuration, so at peak times the server performs badly. I have seen this happen many times.
Your Web Application
If the site is slow compared with the development machine, it is either because the live site has a huge database that you have not accounted for, or because a hacker has found a back door and is attacking it (for example by creating empty accounts), or something similar. It is up to you to find out whether the delay comes from the calls inside your program.
And more reasons for the delay exist.
One more note: use the developer tools built into Chrome, Firefox, Opera, and Safari to see the response times as your page loads.
Hope this helps.
Your best starting point would be to find out where the speed issue is coming from. Try the Firebug/Chrome debugging tools to see whether the time is spent on the server (i.e. waiting for the first byte of the response) or is just plain old loading time (e.g. images taking a long time to download).
If it's on the server, then you potentially have architectural/coding issues; if it's just delivery time of content, then you have network/content issues (compress content with GZip, optimise PNGs with PNGOUT, that sort of thing).
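To get a feel for what GZip buys on repetitive markup, here is a tiny Python sketch (the sample HTML is made up):

```python
import gzip

# Made-up repetitive HTML standing in for a typical table-heavy page.
html = b"<html>" + b"<div class='row'>cell</div>" * 200 + b"</html>"
packed = gzip.compress(html)
# Repetitive markup typically compresses to a small fraction of its size.
```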
Good luck :)
That depends heavily on the geographical distance between the two cities. If they are close, you will most likely not notice any difference.
If they are on different continents, you may well notice latency issues.
If you have latency issues, you cannot solve them solely by adjusting your code. You need something like geographical load balancing, hosting servers at different locations.
Facebook, for example, has multiple data centres: one on the West Coast, one on the East Coast, and I think one in Europe as well. Depending on where a request comes from, it is forwarded to the nearest data centre.
I'm seeing a behaviour in Firefox which seems to me unexpected. This is not particularly repeatable (unfortunately) but does crop up from time to time. Once it starts, it is repeatable on refreshing the page until a full refresh (ctrl-f5) is done. Last time, I managed to get a trace.
Basically, FF4.0.1 is requesting a resource (from an ASP.NET MVC 3 application running under IIS7):
GET http://www.notarealdomain.com/VersionedContent/Scripts/1.0.40.5653/jquery.all.js HTTP/1.1
Host: www.notarealdomain.com
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:2.0.1) Gecko/20100101 Firefox/4.0.1
Accept: */*
Accept-Language: en-gb,en;q=0.5
Accept-Encoding: gzip, deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive: 115
Proxy-Connection: keep-alive
Referer: http://www.notarealdomain.com/gf
If-Modified-Since: Sat, 01 Jan 2000 00:00:00 GMT
It then gets the following response from the server (via a proxy, but I can see in the IIS server logs that the request was made all the way to the server):
HTTP/1.1 304 Not Modified
Date: Mon, 09 May 2011 14:00:47 GMT
Cache-Control: public
Via: 1.1 corp-proxy (NetCache NetApp/6.0.3P2D5)
This seems reasonable - the client makes a conditional request (if-modified-since), and the server responds "ok - use your copy" (304 Not Modified).
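For reference, the server side of that exchange boils down to comparing If-Modified-Since against the resource's last-modified time. A simplified Python sketch of the decision (not the actual IIS logic):

```python
from datetime import datetime, timezone
from email.utils import parsedate_to_datetime

def respond(last_modified, if_modified_since):
    """Decide a conditional GET: 304 when the client's copy is still current."""
    if if_modified_since:
        try:
            since = parsedate_to_datetime(if_modified_since)
        except (TypeError, ValueError):
            since = None                     # malformed date: ignore the header
        if since is not None and last_modified <= since:
            return 304, False                # headers only; client re-uses cache
    return 200, True                         # send the full body

# The exchange from the question: resource unchanged since the stamped date.
last_mod = datetime(2000, 1, 1, tzinfo=timezone.utc)
```

respond(last_mod, "Sat, 01 Jan 2000 00:00:00 GMT") yields (304, False), matching the 304 captured above.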
The problem is that the client in this case then does not dish up the file - it behaves as if there was no content (ie if an image, it doesn't appear, if js it behaves as if the .js file is missing on the page, if .css then the page renders without the css styles, etc). This is apparent on both the web page itself, and also when using the excellent HttpWatch tool. HttpWatch clearly shows that the browser did have the item in cache, but didn't use it as the content source.
What am I missing here? The above conversation seems reasonable, so why does FF make a conditional request and then not use its cached copy when told to do so? If you then ctrl-F5 to force a full refresh, the behaviour goes away, and only returns sporadically.
We also have some anecdotal evidence that this occurs with FF3 and Chrome too, but we haven't confirmed this with any forensic data.
Has anyone else seen this, and does anyone know what might be going wrong, or what further steps might isolate the problem?
OK - I got to the bottom of this problem. As ever - the answer was staring me in the face!
The problem was not with the caching behaviour at all. Another part of the system was sporadically failing and writing out a zero length file - which was being cached by Firefox.
So when Firefox sent a conditional request to the server and received its 304 Not Modified, it dutifully went off and used the corrupt zero-length version of the file that it found in its cache.
All the signs were there for me to see - it just took a bit of time to get there :)
Thanks all, for your comments and suggestions.