Using Wireshark to analyze pcap files - HTTP

I have captured an HTTP request and response in a pcap file by running a curl script from my Linux server. I then opened it in Wireshark. When analyzing the packets I saw a 404 Not Found. Does that mean the request I am running does not exist, or that the requested file ****.html does not exist?
Below is the Wireshark data:
9 2017-06-08 17:27:04.035288 x.x.x.x x.x.x.x HTTP 287 GET /****.htm HTTP/1.1
12 2017-06-08 17:27:04.036517 x.x.x.x x.x.x.x HTTP 73 HTTP/1.1 404 Not Found ..
Sorry for all the x's and *'s; this is a little confidential.

Related

log full HTTP requests and responses going through a given interface and port

I'm looking for a command I can run to watch port 5000 on my local loopback and log the full HTTP requests and responses on it, live, without dumping to a file and post-processing it. The desired output would look like:
GET /index.html HTTP/1.1
Host: www.example.com
HTTP/1.1 200 OK
Date: Mon, 23 May 2005 22:38:34 GMT
Content-Type: text/html; charset=UTF-8
Content-Encoding: UTF-8
Content-Length: 138
Last-Modified: Wed, 08 Jan 2003 23:11:55 GMT
Server: Apache/1.3.3.7 (Unix) (Red-Hat/Linux)
ETag: "3f80f-1b6-3e1cb03b"
Accept-Ranges: bytes
Connection: close
<html>
<head>
<title>An Example Page</title>
</head>
<body>
Hello World, this is a very simple HTML document.
</body>
</html>
I've tried using tshark -i lo0 -d tcp.port==5000,http -Y http, but that doesn't print the full content of the HTTP requests and responses, and it prints a lot of extra output I don't care about:
13 2.644627 127.0.0.1 → 127.0.0.1 HTTP 387 POST /battsim/loadprofile HTTP/1.1 (application/json)
21 2.692109 127.0.0.1 → 127.0.0.1 HTTP 57 HTTP/1.0 200 OK (application/json)
32 2.706703 127.0.0.1 → 127.0.0.1 HTTP 100 PUT /battsim/loadprofile/3b3135f0-b8aa-4ece-94c2-e9baf1c4998e/data HTTP/1.1 (text/csv)
37 2.722450 127.0.0.1 → 127.0.0.1 HTTP 244 HTTP/1.0 500 INTERNAL SERVER ERROR (text/html)
There are several tools for this purpose, often named something like "httpflow". One example is https://github.com/six-ddc/httpflow, which looks like exactly what you want, i.e. it just dumps the request/response data.
The -z follow option of tshark might be what you're looking for. It takes a stream number and outputs the stream content. For instance, tshark -r tmp.pcap -z follow,tcp,ascii,1 outputs the content of TCP stream 1 from the capture file tmp.pcap.
To retrieve all TCP stream contents in your capture file you could use:
for stream in $(tshark -r tmp.pcap -T fields -e tcp.stream | sort -un)
do
    tshark -r tmp.pcap -z follow,tcp,ascii,$stream
done
-z follow, however, needs the whole capture file before it can extract the stream contents. If you want to extract HTTP requests live as they happen, you'll need to use something other than tshark.
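One option, sketched here on my own initiative rather than taken from the answer above, is Scapy's built-in HTTP layer (Scapy 2.4.3 and later), which can reassemble TCP segments into whole HTTP messages during a live sniff. Port 5000 comes from the question; Scapy does not treat it as an HTTP port by default, hence the bind_layers calls:
from scapy.all import sniff, bind_layers
from scapy.sessions import TCPSession
from scapy.layers.inet import TCP
from scapy.layers.http import HTTP, HTTPRequest, HTTPResponse

# Scapy only dissects the usual web ports (e.g. 80) as HTTP out of the box,
# so teach it that port 5000 carries HTTP in both directions.
bind_layers(TCP, HTTP, dport=5000)
bind_layers(TCP, HTTP, sport=5000)

def show_http(pkt):
    # Print the full reassembled HTTP message: request/status line, headers, body
    if pkt.haslayer(HTTPRequest) or pkt.haslayer(HTTPResponse):
        print(bytes(pkt[HTTP]).decode("latin-1", errors="replace"))

# session=TCPSession makes Scapy reassemble segments into complete HTTP messages.
# 'lo' is the Linux loopback interface; on macOS it would be 'lo0'.
sniff(iface="lo", filter="tcp port 5000", session=TCPSession, prn=show_http)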

Python Scapy - Intercept and modify http packet on localhost

I have apache2 running on localhost and I want to intercept and modify an HTTP request from my localhost. By modifying, I mean changing the Accept-Encoding request header to 'identity'. Using Burp Suite, it works just fine. However, with my Scapy script it seems the packet has already been sent, because the HTTP response is still encoded.
The scapy script:
from scapy.all import *

def intercept(pkt):
    if pkt.haslayer(Raw):
        http_content = pkt.getlayer(Raw).load
        http_content = http_content.replace("Accept-Encoding: gzip, deflate", "Accept-Encoding: identity")
        pkt[Raw].load = http_content
        print pkt.show()
        send(pkt)

def main():
    sniff(iface='lo', filter='tcp port 80', prn=intercept)

if __name__ == '__main__':
    main()
This is what I get back as a response:
<skipped>
###[ Raw ]###
load = 'HTTP/1.1 200 OK\r\nDate: Thu, 11 Aug 2016 09:34:38 GMT\r\nServer: Apache/2.4.23 (Debian)\r\nLast-Modified: Thu, 11 Aug 2016 09:34:25 GMT\r\nETag: "7d-539c878b8f8fd-gzip"\r\nAccept-Ranges: bytes\r\nVary: Accept-Encoding\r\nContent-Encoding: gzip\r\nContent-Length: 103\r\nConnection: close\r\nContent-Type: text/html\r\n\r\n\x1f\x8b\x08\x00\x00\x00\x00\x00\x00\x03\xb3\xc9(\xc9\xcd\xb1\xe3\xb2\xc9HML\xb1\xe3RPP\xb0)\xc9,\xc9I\xb5\xf3H\xcd\xc9\xc9W\x08\xcf/\xcaI\xb1\xd1\x87\x08q\xd9\xe8CT\xd9$\xe5\xa7TB\x14g\x18!\xabT\x04\xaa0\x82H\x14#\xc5\x13\xd3\x133\xf3\xf4\xf4\xf4l\xf4\x0b#\x06#t\x02\x95\x81m\x05\x00\x1c\x95F\x1d}\x00\x00\x00'
which is encoded.
Can someone help?
Well, as far as I know, Scapy doesn't give you the ability to modify packets that have already been created by your system: you can craft and inspect packets, but you cannot alter packets the host's stack has already sent.
As is correctly pointed out here, Scapy sniffs packets without interfering with the host's IP stack.
But on Linux you could try combining Scapy with the nfqueue module. nfqueue lets you modify (using Scapy) packets that match a certain iptables rule.
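A rough sketch of that approach, assuming the NetfilterQueue Python bindings (the netfilterqueue package) and an iptables rule such as iptables -I OUTPUT -p tcp --dport 80 -j NFQUEUE --queue-num 1 to divert outgoing port-80 traffic into queue 1; the queue number and rule are illustrative, not from the question:
from netfilterqueue import NetfilterQueue
from scapy.all import IP, TCP, Raw

def modify(nf_packet):
    # Re-dissect the queued packet with Scapy
    pkt = IP(nf_packet.get_payload())
    if pkt.haslayer(Raw):
        # Keep the replacement the same length as the original header value
        # ("identity" padded with spaces) so the TCP sequence numbers of
        # later segments still line up.
        payload = pkt[Raw].load.replace(b"Accept-Encoding: gzip, deflate",
                                        b"Accept-Encoding: identity     ")
        pkt[Raw].load = payload
        # Drop the stale length/checksums so Scapy recomputes them on rebuild
        del pkt[IP].len
        del pkt[IP].chksum
        del pkt[TCP].chksum
        nf_packet.set_payload(bytes(pkt))
    # Re-inject the (possibly modified) packet into the network stack
    nf_packet.accept()

nfqueue = NetfilterQueue()
nfqueue.bind(1, modify)  # queue number must match the iptables rule
try:
    nfqueue.run()
except KeyboardInterrupt:
    pass
finally:
    nfqueue.unbind()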

How to make an HTTP GET request manually with netcat?

So, I have to retrieve temperature from any one of the cities from http://www.rssweather.com/dir/Asia/India.
Let's assume I want to retrieve Kanpur's.
How to make an HTTP GET request with Netcat?
I'm doing something like this.
nc -v rssweather.com 80
GET http://www.rssweather.com/wx/in/kanpur/wx.php HTTP/1.1
I don't know if I'm even heading in the right direction. I am not able to find any good tutorials on how to make an HTTP GET request with netcat, so I'm posting it here.
Of course you could dig into the standards or search Google, but if you only want to fetch a single URL, it isn't worth the effort.
You could also start a netcat in listening mode on a port:
nc -l 64738
(Sometimes nc -l -p 64738 is the correct argument list)
...and then make a request to this port with a real browser: just type http://localhost:64738 into your browser and see what it sends.
In your case the problem is that with HTTP/1.1 the server doesn't close the connection automatically; it waits for the next URL you want to retrieve. The solution is simple:
Use HTTP/1.0:
GET /this/url/you/want/to/get HTTP/1.0
Host: www.rssweather.com
<empty line>
or use a Connection: request header to tell the server you want it to close the connection after this request:
GET /this/url/you/want/to/get HTTP/1.1
Host: www.rssweather.com
Connection: close
<empty line>
Extension: after GET, write only the path part of the URL. The hostname you want to get data from belongs in a Host: header, as you can see in my examples. This is because multiple websites can run on the same web server, so the browser needs to tell it which site the page should be loaded from.
This works for me:
$ nc www.rssweather.com 80
GET /wx/in/kanpur/wx.php HTTP/1.0
Host: www.rssweather.com
And then hit double <enter>, i.e. once for the remote http server and once for the nc command.
source: pentesterlabs
You don't even need to use or install netcat:
Create a TCP socket via an unused file descriptor (I use 88 here)
Write the request into it
Read the response back from the same fd
# Open fd 88 as a TCP connection to rssweather.com port 80 (bash's /dev/tcp feature)
exec 88<>/dev/tcp/rssweather.com/80
# Send the request; Connection: close tells the server to hang up when it is done
echo -e "GET /dir/Asia/India HTTP/1.1\nhost: www.rssweather.com\nConnection: close\n\n" >&88
# Read the response from fd 88 and strip the HTML tags
sed 's/<[^>]*>/ /g' <&88
On MacOS, you need the -c flag as follows:
Little-Net:~ minfrin$ nc -c rssweather.com 80
GET /wx/in/kanpur/wx.php HTTP/1.1
Host: rssweather.com
Connection: close
[empty line]
The response then appears as follows:
HTTP/1.1 200 OK
Date: Thu, 23 Aug 2018 13:20:49 GMT
Server: Apache
Connection: close
Transfer-Encoding: chunked
Content-Type: text/html
The -c flag is described as "Send CRLF as line-ending".
To be HTTP/1.1 compliant, you need the Host header, as well as the "Connection: close" header if you want to disable keep-alive.
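If you would rather script the request than type it into nc, the same thing can be done with a few lines of Python's standard socket module; this is just a sketch, using the host and path from the question:
import socket

# Hand-written HTTP/1.1 request; Connection: close makes the server close
# the socket when it is done, so reading until EOF gets the whole response.
request = (
    "GET /wx/in/kanpur/wx.php HTTP/1.1\r\n"
    "Host: www.rssweather.com\r\n"
    "Connection: close\r\n"
    "\r\n"
)

with socket.create_connection(("www.rssweather.com", 80)) as sock:
    sock.sendall(request.encode("ascii"))
    chunks = []
    while True:
        chunk = sock.recv(4096)
        if not chunk:
            break
        chunks.append(chunk)

print(b"".join(chunks).decode("iso-8859-1"))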
Test it out locally with python3 http.server
This is also a fun way to test it out. On one shell, launch a local file server:
python3 -m http.server 8000
Then on the second shell, make a request:
printf 'GET / HTTP/1.1\r\nHost: localhost\r\n\r\n' | nc localhost 8000
The Host: header is required in HTTP 1.1.
This shows an HTML listing of the directory, just as you would see from:
firefox http://localhost:8000
Next you can try to list files and directories and observe the response:
printf 'GET /my-subdir/ HTTP/1.1\r\nHost: localhost\r\n\r\n' | nc localhost 8000
printf 'GET /my-file HTTP/1.1\r\nHost: localhost\r\n\r\n' | nc localhost 8000
Every time you make a successful request, the server prints:
127.0.0.1 - - [05/Oct/2018 11:20:55] "GET / HTTP/1.1" 200 -
confirming that it was received.
example.com
This IANA-maintained domain is another good test URL:
printf 'GET / HTTP/1.1\r\nHost: example.com\r\n\r\n' | nc example.com 80
and compare with: http://example.com/
https SSL
nc does not seem to be able to handle https URLs. Instead, you can use:
sudo apt-get install nmap
printf 'GET / HTTP/1.1\r\nHost: github.com\r\n\r\n' | ncat --ssl github.com 443
See also: https://serverfault.com/questions/102032/connecting-to-https-with-netcat-nc/650189#650189
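The ncat --ssl approach can also be scripted in Python, wrapping the socket with the standard library's ssl module before speaking HTTP (a sketch, reusing github.com from the command above):
import socket
import ssl

request = (
    "GET / HTTP/1.1\r\n"
    "Host: github.com\r\n"
    "Connection: close\r\n"
    "\r\n"
)

context = ssl.create_default_context()  # verifies the server certificate
with socket.create_connection(("github.com", 443)) as raw_sock:
    # server_hostname enables SNI and hostname checking
    with context.wrap_socket(raw_sock, server_hostname="github.com") as tls_sock:
        tls_sock.sendall(request.encode("ascii"))
        data = b""
        while True:
            chunk = tls_sock.recv(4096)
            if not chunk:
                break
            data += chunk

print(data.decode("utf-8", errors="replace"))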
If you try nc, it just hangs:
printf 'GET / HTTP/1.1\r\nHost: github.com\r\n\r\n' | nc github.com 443
and trying port 80:
printf 'GET / HTTP/1.1\r\nHost: github.com\r\n\r\n' | nc github.com 80
just gives a redirect response to the https version:
HTTP/1.1 301 Moved Permanently
Content-Length: 0
Location: https://github.com/
Connection: keep-alive
Tested on Ubuntu 18.04.

Wireshark and Continuation or non-HTTP traffic

I'm writing some code to integrate an in-house app into a DVR to retrieve a video file. This is all reverse engineered as there isn't any official documentation, and I'm having trouble understanding the following sequence of events (captured by playing with the DVR's Android app).
936 72.985204 192.168.0.1 192.168.0.200 HTTP 468 POST /cgi-bin/supervisor/NetworkBk.cgi HTTP/1.1 (application/x-www-form-urlencoded)
937 72.985368 192.168.0.200 192.168.0.1 TCP 54 mit-ml-dev > 41859 [ACK] Seq=1 Ack=415 Win=65535 Len=0
938 73.933676 192.168.0.200 192.168.0.1 HTTP 275 HTTP/1.0 200 OK (video/mpeg4)
939 73.933983 192.168.0.1 192.168.0.200 TCP 54 41859 > mit-ml-dev [ACK] Seq=415 Ack=222 Win=15544 Len=0
940 74.004433 192.168.0.200 192.168.0.1 TCP 74 [TCP segment of a reassembled PDU]
941 74.004887 192.168.0.1 192.168.0.200 TCP 54 41859 > mit-ml-dev [ACK] Seq=415 Ack=242 Win=15544 Len=0
942 74.024669 192.168.0.200 192.168.0.1 HTTP 1346 Continuation or non-HTTP traffic
The HTTP POST requests the video file, which then results in an HTTP 200 OK. I get confused about what happens next. Isn't the request complete when the HTTP 200 is received? Why then does it continue to receive TCP data and then get "HTTP Continuation or non-HTTP traffic"? The subsequent TCP packets contain the video file I'm intending to download. When I manually craft an HTTP POST I get the HTTP 200 OK response and then I'm stumped.
This is the code I use to simulate the HTTP POST.
import requests
dc = {"action":"download", "start_time":"2013 7 1 13 59 00", "end_time":"2013 7 14 3 0", "num":"255", "ch":"5"}
r = requests.post("http://192.168.0.200/cgi-bin/supervisor/NetworkBk.cgi", data=dc, auth=(username, password))
This code gets the HTTP 200 OK response; how do I get to the Continuation or non-HTTP traffic? I'm new to this and so am unsure if I've provided enough details. I can provide the HTTP headers if that will help.
Addendum
This is the raw response of the HTTP 200 OK reply. As far as I can tell, there is nothing in it about expecting extra content.
HTTP/1.0 200 OK
Date: Mon, 01 Jul 2013 15:01:34 GMT
Server: Linux/2.x UPnP/1.0 Avtech/1.0
Expires: 0
Pragma: no-cache
Cache-Control: no-cache
Connection: close
Content-Type: video/mpeg4
Content-Length: 5
0
OK
Why then is it continuing to receive TCP data and then getting a HTTP
Continuation or non-HTTP traffic?
By default, when you make a request, the body of the response is downloaded immediately.
So in this case, once the successful POST request is made, the DVR will immediately start sending the video data over TCP, most probably as an H.264 byte stream. That would account for the non-HTTP traffic you are seeing.
This code gets the HTTP 200 OK response, how do I get to the
Continuation or non-HTTP traffic?
You can override this default behaviour and defer downloading the response body until you access the Response.content attribute, by passing the stream=True parameter. You can then use something like r.iter_content to iterate over the response data in chunks and write them to a file, e.g.
import requests

url = "http://192.168.0.200/cgi-bin/supervisor/NetworkBk.cgi"
dc = {"action":"download", "start_time":"2013 7 1 13 59 00", "end_time":"2013 7 14 3 0", "num":"255", "ch":"5"}

# stream=True defers downloading the body until it is iterated over below
r = requests.post(url, data=dc, auth=(username, password), stream=True)
if r.status_code == 200:
    with open(path, 'wb') as f:
        for chunk in r.iter_content():
            f.write(chunk)

Unable to test HTTP PUT-based file upload via Squid Proxy

I can upload a file to my Apache web server using Curl just fine:
echo "[$(date)] file contents." | curl -T - http://WEB-SERVER/upload/sample.put
However, if I put a Squid proxy server in between, then I am not able to:
echo "[$(date)] file contents." | curl -x http://SQUID-PROXY:3128 -T - http://WEB-SERVER/upload/sample.put
Curl reports the following error:
Note: This error response was in HTML format, but I've removed the tags for ease of reading.
ERROR: The requested URL could not be retrieved
ERROR
The requested URL could not be retrieved
While trying to retrieve the URL:
http://WEB-SERVER/upload/sample.put
The following error was encountered:
Unsupported Request Method and Protocol
Squid does not support all request methods for all access protocols.
For example, you can not POST a Gopher request.
Your cache administrator is root.
My squid.conf doesn't seem to have any ACL/rule that would disallow the request based on the source or destination IP addresses, the protocol, or the HTTP method... and I can do an HTTP POST just fine between the same client and web server, with the same proxy sitting in between.
For the failing HTTP PUT case, to see the request and response traffic that was actually occurring, I placed a netcat process between curl and Squid, and this is what I saw:
Request:
PUT http://WEB-SERVER/upload/sample.put HTTP/1.1
User-Agent: curl/7.15.5 (i686-redhat-linux-gnu) libcurl/7.15.5 OpenSSL/0.9.8b zlib/1.2.3 libidn/0.6.5
Host: WEB-SERVER
Pragma: no-cache
Accept: */*
Proxy-Connection: Keep-Alive
Transfer-Encoding: chunked
Expect: 100-continue
Response:
HTTP/1.0 501 Not Implemented
Server: squid/2.6.STABLE21
Date: Sun, 13 May 2012 02:11:39 GMT
Content-Type: text/html
Content-Length: 1078
Expires: Sun, 13 May 2012 02:11:39 GMT
X-Squid-Error: ERR_UNSUP_REQ 0
X-Cache: MISS from SQUID-PROXY-FQDN
X-Cache-Lookup: NONE from SQUID-PROXY-FQDN:3128
Via: 1.0 SQUID-PROXY-FQDN:3128 (squid/2.6.STABLE21)
Proxy-Connection: close
<SNIPPED the HTML error response already shown earlier above>
Note: I have anonymized the IP addresses and server names throughout for readability reasons.
Thanks to Amos Jeffries for answering this on the squid-users forum. The issue is basically that Squid before version 3.1 does not implement HTTP/1.1 and thus rejects the chunked transfer encoding.
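If upgrading Squid isn't an option, one workaround (my own sketch, not from the thread) is to avoid chunked encoding altogether by giving the client a body of known size, so it sends a Content-Length header that the old Squid can handle. For example, with Python's requests library, using the same anonymized proxy and server names as above:
import requests
from datetime import datetime

proxies = {"http": "http://SQUID-PROXY:3128"}
body = "[{}] file contents.\n".format(datetime.now()).encode()

# A bytes body of known size makes requests send a Content-Length header
# instead of Transfer-Encoding: chunked, which pre-3.1 Squid rejects.
r = requests.put("http://WEB-SERVER/upload/sample.put",
                 data=body, proxies=proxies)
print(r.status_code, r.reason)
The same idea should apply to curl itself: uploading with -T from a regular file rather than from - (stdin) gives curl a known size, so it can send a Content-Length instead of a chunked body.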
