trace http session

In a development environment (where the browser and the HTTP server are often on the same machine) I want to study the exact details of authentication schemes, so I need to trace every HTTP request/response.
I've tried Wireshark, which is very promising, but on Windows machines there is a problem sniffing traffic on the loopback interface.
Then I tried a browser plugin, HttpFox 0.8.10 on Firefox 12. It is good at showing requests and responses, but in the specific case of authentication it doesn't correctly show the "double hop": it "collapses" the first request (the Unauthorized status code) into the next, successful one.
Then I tried working with the logs of httpd, which is my current server, but it takes non-trivial effort to produce a log that contains the full requests including headers (the Authorization header). So it doesn't seem a good "debug" technique.
Are there other possibilities?

Go with Wireshark. The answer to this question will address the loopback issue. Wireshark is the best because it really understands the formatting of everything related to HTTP (so long as you are not using HTTPS).
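If Wireshark is ruled out by the loopback issue, one more possibility is to put a tiny logging relay between the browser and the server and point the browser at it. Below is a minimal sketch in Python; the listen and upstream addresses are assumptions, so adjust them to your setup. It prints every request and response verbatim, including the Authorization and WWW-Authenticate headers of a challenge-response exchange:

```python
import socket, threading

LISTEN = ("127.0.0.1", 9000)    # point the browser at this address
UPSTREAM = ("127.0.0.1", 8080)  # the real HTTP server (assumed)

def pump(src, dst, label):
    # Copy bytes in one direction, dumping them to stdout so every
    # request/response (including auth headers) is visible.
    try:
        while True:
            data = src.recv(4096)
            if not data:
                break
            print(f"--- {label} ---\n{data.decode('latin-1')}")
            dst.sendall(data)
    except OSError:
        pass
    finally:
        dst.close()

def handle(client):
    server = socket.create_connection(UPSTREAM)
    threading.Thread(target=pump, args=(client, server, "request"),
                     daemon=True).start()
    pump(server, client, "response")

listener = socket.socket()
listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
listener.bind(LISTEN)
listener.listen()
while True:
    conn, _ = listener.accept()
    threading.Thread(target=handle, args=(conn,), daemon=True).start()
```

Unlike HttpFox, this shows the 401 challenge and the retried request as two separate exchanges on the wire.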

Related

When implementing a web proxy, how should the server report lower-level protocol errors?

I'm implementing an HTTP proxy. Sometimes when a browser makes a request via my proxy, I get an error such as ECONNRESET, "address not found", and the like. These indicate errors below the HTTP level. I'm not talking about bugs in my own program, but about how other servers behave when I send them an HTTP request.
Some servers might simply not exist, others close the socket, and still others not answer at all.
What is the best way to report these errors to the caller? Is there a standard method that, if I use it, browsers will convert my HTTP message to an appropriate error message? (i.e. they get a reply from the proxy that tells them ECONNRESET, and they act as though they received the ECONNRESET themselves).
If not, how should it be handled?
Motivations
I really want my proxy to be totally transparent, so that the browser or other client works exactly as if it weren't connected to it. That means replicating the organic behavior of errors such as ECONNRESET, rather than sending an HTTP message with an error code, which would be totally different behavior.
I kind of thought that was the intention when writing an HTTP proxy.
There are several things to keep in mind.
Firstly, if the client is configured to use the proxy (which I'd actually recommend), then fundamentally it will behave differently than if it were connecting directly out over the Internet. This is mostly invisible to the user, but affects things like:
FTP URLs
some caching differences
authentication to the proxy if required
reporting of connection errors etc <= your question.
In the case of reporting errors, a browser will show a connectivity error if it can't connect to the proxy or open a tunnel via the proxy; but for upstream errors, the proxy will be providing a page (depending on the error; e.g. if a response has already been partially sent, the proxy can't do much but close the connection). This page won't look anything like your browser's own error page.
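There is no standard way to make a plain-HTTP client re-experience the raw socket error; the conventional fallback is the gateway status codes (502 Bad Gateway for upstream failures, 504 Gateway Timeout for timeouts). A minimal sketch of that mapping, with illustrative names, in Python:

```python
import errno, socket

def proxy_error_response(exc: OSError) -> bytes:
    # Map a low-level upstream error to the status a proxy
    # conventionally returns: 504 for timeouts, 502 for the rest
    # (ECONNRESET, ECONNREFUSED, DNS failures, ...).
    if isinstance(exc, socket.timeout) or exc.errno == errno.ETIMEDOUT:
        status, reason = 504, "Gateway Timeout"
    else:
        status, reason = 502, "Bad Gateway"
    body = f"Upstream error: {exc}".encode()
    head = (f"HTTP/1.1 {status} {reason}\r\n"
            "Content-Type: text/plain\r\n"
            f"Content-Length: {len(body)}\r\n"
            "Connection: close\r\n\r\n").encode()
    return head + body
```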
If the browser is NOT configured to use a proxy, then you would need to divert or intercept the connection to the proxy. This can cause problems if you decide you want to authenticate your users against the proxy (to identify them / implement user-specific rules etc).
Secondly, HTTPS can be a real pain in the neck. This problem is growing as more and more sites move to HTTPS only. There are several issues:
browsers configured to use a proxy will, for HTTPS URLs, first open a tunnel via the proxy using the CONNECT method. If your proxy wants to prevent this, then any information it provides in the block response is ignored by the browser, and you get the generic browser connectivity error page instead.
if you want to provide any of the other benefits one normally expects from a proxy (e.g. caching / scanning etc.), you need to implement a MitM (man-in-the-middle) and spoof server SSL certificates. In fact you need to do this even if you just want to send back a block page to deny things.
There is a way for a browser to act a bit more like it were directly connected while still going via a proxy, and that's SOCKS. SOCKS has a way to return an error code if there's an upstream connection error; it's not the actual socket error code, however.
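For reference, the SOCKS5 reply field (RFC 1928) is that coarser vocabulary; a sketch of the mapping a SOCKS proxy might apply (Python, names illustrative):

```python
import errno

# SOCKS5 (RFC 1928) reply codes for upstream connection failures.
SOCKS5_REP = {
    errno.ENETUNREACH:  0x03,  # network unreachable
    errno.EHOSTUNREACH: 0x04,  # host unreachable
    errno.ECONNREFUSED: 0x05,  # connection refused
    errno.ETIMEDOUT:    0x06,  # TTL expired (nearest match for timeouts)
}

def socks5_error_reply(err: int) -> bytes:
    rep = SOCKS5_REP.get(err, 0x01)  # 0x01 = general SOCKS server failure
    # VER, REP, RSV, ATYP=IPv4, BND.ADDR=0.0.0.0, BND.PORT=0
    return bytes([0x05, rep, 0x00, 0x01, 0, 0, 0, 0, 0, 0])
```

The client sees "connection refused" or "host unreachable" rather than the exact errno, so this is an approximation of transparency, not the real thing.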
These are all reasons why we wrote the WinGate Internet Client, an LSP-based component of our WinGate product; client applications then see the actual upstream error codes etc. It's not a favoured approach nowadays though, as it requires installing software on the client computer.
I wouldn't provide them too much info. Report what you need through internal logs in case you have to solve the problem. Return a 400, 403 or 418. Why? Perhaps they're just hacking.

Why does Fiddler return "[Fiddler] ReadResponse() failed: The server did not return a complete response for this request." for valid requests?

I have a working console app which sends data to an API. However, as soon as I launch Fiddler, I get the message:
[Fiddler] ReadResponse() failed: The server did not return a complete response for this request. Server returned 257 bytes.
The first header shown in Fiddler is: HTTP/1.1 504 Fiddler - Receive Failure
which seems to be generated directly by Fiddler rather than having come from my API server (.NET).
How can I debug why this is happening, given that Fiddler will not show me the raw response from the server? I presume there is some HTTP header error that my console app tolerates but Fiddler does not.
I have been playing with gzip-compressed requests, so perhaps one of the headers (Content-Length) is incorrect, but with no way to view the raw response it's very hard to debug this problem.
In the end I got some help from @EricLaw on this:
Download DebugView https://learn.microsoft.com/en-us/sysinternals/downloads/debugview
In Fiddler's black QuickExec box under the session list, type !spew and hit Enter. Fiddler will begin spewing verbose logging information to DebugView, including all reads and writes to/from the network.
Far more information about the failed request is then shown in DebugView, which led me to the root cause that my web server was closing the connection early, before sending all content.
All credit to Eric.
How can I debug why this is happening, given that fiddler will not show me the raw results from the server?
Use Wireshark to see the actual network traffic. Fiddler's good (it's great), but it's not Wireshark. You'll need to jump through some hoops if your traffic is HTTPS, though.
Wireshark is not as easy to use as Fiddler, but it is significantly more powerful.
Also, if you're on Windows, you need to use your machine's local network IP address (e.g. 192.168.x.y), rather than localhost. See this question.
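If installing Wireshark is overkill, a raw socket fetch also shows exactly what the server sends, byte for byte, so a wrong Content-Length or an early connection close becomes obvious. A minimal sketch in Python; the host and path are placeholders for your API:

```python
import socket

HOST, PORT = "192.168.1.50", 80  # your machine's LAN IP, not localhost

request = ("GET /api/resource HTTP/1.1\r\n"  # hypothetical endpoint
           f"Host: {HOST}\r\n"
           "Accept-Encoding: gzip\r\n"
           "Connection: close\r\n\r\n").encode()

with socket.create_connection((HOST, PORT), timeout=10) as s:
    s.sendall(request)
    raw = b""
    while chunk := s.recv(4096):
        raw += chunk

# Compare the advertised Content-Length with what actually arrived.
print(f"received {len(raw)} bytes total")
print(raw[:800].decode("latin-1", "replace"))
```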

How to use a webbrowser as a proxy?

Suppose I am logged in and connected to a website in Firefox (or any other browser), so I can make download requests in the browser. Suppose I want wget or curl to use Firefox's session. Is there a way to use Firefox as a system-wide proxy for ports 443 and 80? Here is a usage scenario: this would be interesting for a download manager; if the requests were proxied through and made by the browser, all the credentials stored in the browser could be used.
So the browser would receive the request on port 443 and replicate or forward it. "Proxy" and "forwarding" are probably not the right words in this context.
I am not aware of any feature of Firefox (or any other mainstream browser) that allows it to really be used as some kind of proxy, sorry.
You cannot somehow "use the connection Firefox already has", since there is no permanent connection between client and server in HTTP communication. HTTP is a stateless protocol without a socket that is kept permanently open; instead, each HTTP request is sent separately, each time over a newly opened socket.
However something similar might be "half possible" using a crude workaround:
What you can try, however, is simply starting a new instance of the browser for each request you want to make. In reality this does not start a new instance but reuses the already running one, typically opening a new tab there. That way you can "remote control" your already-started browser in a primitive way and trigger downloads, if and only if the URL you specify results in a download. All of that depends on the browser settings, though; for example, downloads will be stored as files in your local file system, from which you then have to read the payload again.
None of this is really efficient or convenient, which is why it probably does not make much sense. Instead you should create a simple script for such communication, e.g. along the lines of the sketch below; the effort for that is not that high.
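For example, assuming the Python requests package, the browser's authenticated download can be replicated by copying the session cookie out of the browser (dev tools, Network tab) instead of proxying through the browser itself; the URL and cookie name below are placeholders:

```python
import requests  # third-party: pip install requests

URL = "https://example.com/protected/file.zip"       # placeholder
COOKIE = "sessionid=PASTE_VALUE_FROM_YOUR_BROWSER"   # placeholder

# Send the same Cookie header the browser would send, then stream
# the download to disk.
resp = requests.get(URL, headers={"Cookie": COOKIE}, stream=True, timeout=30)
resp.raise_for_status()
with open("file.zip", "wb") as f:
    for chunk in resp.iter_content(65536):
        f.write(chunk)
```

The equivalent with curl or wget is passing the same header, e.g. curl -H 'Cookie: ...'.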

See data that an app is secretly sending to web server in the background

I was playing around with Fiddler (an HTTP proxy) and noticed that some apps make HTTP GET/POST requests in the background, sending data and stats to and from the web. This got me interested, and a little concerned, to see what data various apps were sending, but it seems that most of them are not doing it on port 80 via HTTP but presumably on another port, so you can't see the data in Fiddler. Is there some way to view and/or potentially block the data being sent?
You're asking: "Using Fiddler, I saw that traffic was being sent by clients to servers. How can I see that traffic?"
Might I suggest you use Fiddler?
You can see the process sending the traffic in the Process column, and you can view the contents of the requests and responses using the Inspectors tab.
I would check out Burp Suite. It is a proxy you set up in your web browser, and it shows all of the data that passes through it. There are plenty of tutorials online. Check it out here

asp.net webservice security without changing client side

We need to protect our web services with SSL (https) or some other security mechanism. Our problem is that the current clients (Delphi exes) have references to our http web services fixed in code, and we cannot change that code.
I've tried implementing a URL redirection rule from http to https, but that didn't work because of the handshake... Changing the client to use the https reference did work, but sadly we cannot do that for every client.
I know this question contradicts encryption theory, but I'll fire it anyway in case anyone has any kind of suggestion/idea to make the connection or data transfer at least somewhat more secure (either with or without the SSL protocol) without changing the client side.
Thanks,
Luke
You need some kind of transparent TCP tunneling software/hardware on the clients, so that the encryption happens without the Delphi clients noticing it.
My Google search using the keywords "transparent encrypted tunneling" turned up this vendor of such solutions. There must be other vendors with similar solutions.
This is really a networking question.
PS: hardcoding the URL is the real problem here. Once the tunneling palliative is in place, change that, because it really will cause more headaches in the future.
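To illustrate the idea (not a production solution; tools like stunnel do this properly), here is a sketch of such a client-side relay in Python: the Delphi client keeps talking plain http to 127.0.0.1, and each connection is wrapped in TLS toward the real server. The upstream host is a placeholder, and note the client will still send its original Host header:

```python
import socket, ssl, threading

LISTEN = ("127.0.0.1", 80)          # legacy client connects here
                                    # (binding port 80 may need admin rights)
UPSTREAM = ("ws.example.com", 443)  # placeholder HTTPS endpoint

ctx = ssl.create_default_context()

def pump(src, dst):
    # Relay bytes one way until the source closes.
    try:
        while data := src.recv(4096):
            dst.sendall(data)
    except OSError:
        pass
    finally:
        dst.close()

def handle(client):
    upstream = ctx.wrap_socket(socket.create_connection(UPSTREAM),
                               server_hostname=UPSTREAM[0])
    threading.Thread(target=pump, args=(upstream, client), daemon=True).start()
    pump(client, upstream)

srv = socket.socket()
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(LISTEN)
srv.listen()
while True:
    conn, _ = srv.accept()
    threading.Thread(target=handle, args=(conn,), daemon=True).start()
```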
The client will be connecting over a non-SSL port, and that will need to remain. What you could possibly do, if you allow access over both http and https, is only allow http from specific IP addresses, if you know them. It's still not secure, but at least you know where the calls are coming from and can do something about that.
