I have apache2 running on localhost and I want to intercept and modify an HTTP request from my localhost: specifically, I want to change the Accept-Encoding header to 'identity'. Using Burp Suite, it works just fine. However, with my Scapy script it seems the packet has already been sent, because the HTTP response is still encoded.
The scapy script:
from scapy.all import *
def intercept(pkt):
    if pkt.haslayer(Raw):
        http_content = pkt.getlayer(Raw).load
        http_content = http_content.replace("Accept-Encoding: gzip, deflate", "Accept-Encoding: identity")
        pkt[Raw].load = http_content
        print pkt.show()
        send(pkt)

def main():
    sniff(iface='lo', filter='tcp port 80', prn=intercept)

if __name__ == '__main__':
    main()
This is what I get back as a response:
<skipped>
###[ Raw ]###
load = 'HTTP/1.1 200 OK\r\nDate: Thu, 11 Aug 2016 09:34:38 GMT\r\nServer: Apache/2.4.23 (Debian)\r\nLast-Modified: Thu, 11 Aug 2016 09:34:25 GMT\r\nETag: "7d-539c878b8f8fd-gzip"\r\nAccept-Ranges: bytes\r\nVary: Accept-Encoding\r\nContent-Encoding: gzip\r\nContent-Length: 103\r\nConnection: close\r\nContent-Type: text/html\r\n\r\n\x1f\x8b\x08\x00\x00\x00\x00\x00\x00\x03\xb3\xc9(\xc9\xcd\xb1\xe3\xb2\xc9HML\xb1\xe3RPP\xb0)\xc9,\xc9I\xb5\xf3H\xcd\xc9\xc9W\x08\xcf/\xcaI\xb1\xd1\x87\x08q\xd9\xe8CT\xd9$\xe5\xa7TB\x14g\x18!\xabT\x04\xaa0\x82H\x14#\xc5\x13\xd3\x133\xf3\xf4\xf4\xf4l\xf4\x0b#\x06#t\x02\x95\x81m\x05\x00\x1c\x95F\x1d}\x00\x00\x00'
which is encoded.
Can someone help?
Well, as far as I know, Scapy doesn't give you the ability to modify packets that were already created by your system. Of course you can craft and inspect packets, but you cannot modify packets the host has already generated.
As is correctly pointed out here, Scapy sniffs packets without interfering with the host's IP stack.
On Linux, however, you could try to combine Scapy with the nfqueue module. The nfqueue module lets you modify (using Scapy) packets that match a certain iptables rule.
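Here is a minimal, untested sketch of that combination, using the Python NetfilterQueue bindings together with Scapy; the queue number, the iptables rule and the same-length padding trick are assumptions on my part:

from netfilterqueue import NetfilterQueue   # pip install netfilterqueue
from scapy.all import IP, TCP, Raw

# Divert outgoing HTTP traffic into the queue first, e.g.:
#   iptables -I OUTPUT -p tcp --dport 80 -j NFQUEUE --queue-num 1

def rewrite(packet):
    pkt = IP(packet.get_payload())            # parse the queued packet with Scapy
    if pkt.haslayer(Raw) and b"Accept-Encoding: gzip, deflate" in pkt[Raw].load:
        # Pad to the original length so later TCP sequence numbers stay valid.
        pkt[Raw].load = pkt[Raw].load.replace(
            b"Accept-Encoding: gzip, deflate",
            b"Accept-Encoding: identity     ")
        del pkt[IP].len, pkt[IP].chksum, pkt[TCP].chksum   # force recalculation
        packet.set_payload(bytes(pkt))
    packet.accept()                           # re-inject the (possibly modified) packet

nfqueue = NetfilterQueue()
nfqueue.bind(1, rewrite)                      # queue number from the iptables rule
try:
    nfqueue.run()
finally:
    nfqueue.unbind()

The replacement value is padded with spaces to the original header length so that the TCP stream stays consistent for the packets that follow.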
I found that a GET message header looks like this:
:method: GET
:scheme: https
:authority: server.net
:path: /config
accept: */*
accept-encoding: gzip,deflate
What should a CONNECT message header look like?
This example is from the HTTP/2 RFC:
GET /resource HTTP/1.1           HEADERS
Host: example.org          ==>     + END_STREAM
Accept: image/jpeg                 + END_HEADERS
                                     :method = GET
                                     :scheme = https
                                     :path = /resource
                                     host = example.org
                                     accept = image/jpeg
I want to know the equivalent of the CONNECT header in HTTP/2.
In HTTP/1.1 it is:
CONNECT example.org:443 HTTP/1.1
Host: example.org:443
The format of the CONNECT method in HTTP/2 is specified in section 8.3 of RFC 7540.
With the formatting you used above, it looks like this:
:method: CONNECT
:authority: proxy.net:8080
As specified, :scheme and :path must be omitted.
The HTTP/2 CONNECT method can also be used for bootstrapping other protocols (see for example WebSocket over HTTP/2), so that, additionally, the :protocol pseudo-header may also be present.
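For completeness, in the extended-CONNECT case (this example is adapted from the WebSocket-over-HTTP/2 specification, RFC 8441, and the values are only illustrative) :scheme and :path do appear alongside :protocol:
:method: CONNECT
:protocol: websocket
:scheme: https
:path: /chat
:authority: server.example.com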
Remember however that this is only a textual representation of HTTP/2; the bytes that actually travel over the network are different since you must encode them using HPACK.
Unless you are actually writing an HTTP/2 implementation, it is better to use existing libraries (available in virtually any programming language) to send HTTP/2 requests of any kind: the library will take care of converting your CONNECT request into the proper bytes to send over the network.
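For example, here is a minimal, untested sketch with the Python h2 library (proxy.net:8080 is just the placeholder from above) showing that the library performs the HPACK encoding for you:

import h2.connection

conn = h2.connection.H2Connection()
conn.initiate_connection()

# A plain CONNECT carries only :method and :authority; :scheme and :path are omitted.
conn.send_headers(stream_id=1, headers=[
    (":method", "CONNECT"),
    (":authority", "proxy.net:8080"),
])

data = conn.data_to_send()   # connection preface + SETTINGS + HPACK-encoded HEADERS
# 'data' is what you would write to the underlying (usually TLS, ALPN "h2") socket.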
I am trying to create a MediaServer UPNP program in order to stream video from my phones camera to my PC.
I used Intel device spy to send an M-SEARCH request and used Wireshark to capture the network packets.
Here is the M-SEARCH packet
(Src: 192.168.1.28, Dst: 239.255.255.250; Src Port: 50852, Dst Port: 1900, time 2.09)
M-SEARCH * HTTP/1.1
ST: upnp:rootdevice
MAN: "ssdp:discover"
MX: 5
HOST: 239.255.255.250:1900
Here is the UDP reply
(Src: 192.168.1.23, Dst: 192.168.1.28; Src Port: 53359, Dst Port: 50852)
HTTP/1.1 200 OK
CACHE-CONTROL: max-age=1810
DATE: Wed, 1 Feb 2017 02:07:36 GMT
EXT:
LOCATION: http://192.168.1.23:49156/details.xml
SERVER: Linux/2.x.x, UPnP/1.0, pvConnect UPnP SDK/1.0, TwonkyMedia UPnP SDK/1.1
ST: upnp:rootdevice
USN: uuid:3d64febc-ae6a-4584-853a-85368ca80800::upnp:rootdevice
Content-Length: 0
I do not see a follow-up HTTP GET request to 192.168.1.23. I compared the response to other UPnP device responses that worked and could see no difference.
I tried different source ports, but with no success. Any ideas?
@simonc, thank you. I did have a \r\n at the end of my message, but I added another one (to the NOTIFY message as well) and now I can see my device.
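For anyone hitting the same problem, here is a minimal Python sketch of the point above (the header values are copied from the capture in the question, and the destination address and port are assumptions): the reply must end with an empty line, i.e. a final \r\n\r\n.

import socket

# SSDP reply; note the extra "\r\n" after the last header: it produces the
# empty line that terminates the message.
SSDP_REPLY = (
    "HTTP/1.1 200 OK\r\n"
    "CACHE-CONTROL: max-age=1810\r\n"
    "EXT:\r\n"
    "LOCATION: http://192.168.1.23:49156/details.xml\r\n"
    "SERVER: Linux/2.x.x, UPnP/1.0, TwonkyMedia UPnP SDK/1.1\r\n"
    "ST: upnp:rootdevice\r\n"
    "USN: uuid:3d64febc-ae6a-4584-853a-85368ca80800::upnp:rootdevice\r\n"
    "CONTENT-LENGTH: 0\r\n"
    "\r\n"
)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# Unicast the reply back to the source address/port of the M-SEARCH request.
sock.sendto(SSDP_REPLY.encode("ascii"), ("192.168.1.28", 50852))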
So, I have to retrieve the temperature of any one of the cities listed at http://www.rssweather.com/dir/Asia/India.
Let's assume I want to retrieve Kanpur's.
How to make an HTTP GET request with Netcat?
I'm doing something like this.
nc -v rssweather.com 80
GET http://www.rssweather.com/wx/in/kanpur/wx.php HTTP/1.1
I don't know if I'm even heading in the right direction. I was not able to find any good tutorial on making an HTTP GET request with netcat, so I'm posting it here.
Of course you could dig into the standards or search Google, but if you only want to fetch a single URL, it isn't worth the effort.
You could also start a netcat in listening mode on a port:
nc -l 64738
(Sometimes nc -l -p 64738 is the correct argument list)
...and then make a request to this port with a real browser. Just type http://localhost:64738 into your browser and see what it sends.
In your actual case the problem is that HTTP/1.1 doesn't close the connection automatically; it waits for the next URL you want to retrieve. The solution is simple:
Use HTTP/1.0:
GET /this/url/you/want/to/get HTTP/1.0
Host: www.rssweather.com
<empty line>
or use a Connection: request header to tell the server you want it to close the connection after this request:
GET /this/url/you/want/to/get HTTP/1.1
Host: www.rssweather.com
Connection: close
<empty line>
Extension: after GET, write only the path part of the request. The hostname you want to fetch data from belongs in the Host: header, as you can see in my examples. This is because multiple websites can run on the same webserver, so the browser needs to tell it which site it wants to load the page from.
This works for me:
$ nc www.rssweather.com 80
GET /wx/in/kanpur/wx.php HTTP/1.0
Host: www.rssweather.com
And then hit <enter> twice; the second press sends the empty line that terminates the HTTP request headers.
source: pentesterlabs
You don't even need to use or install netcat:
Create a TCP socket via an unused file descriptor (88 is used here)
Write the request into it
Read the response back from the same fd
exec 88<>/dev/tcp/rssweather.com/80
echo -e "GET /dir/Asia/India HTTP/1.1\nhost: www.rssweather.com\nConnection: close\n\n" >&88
sed 's/<[^>]*>/ /g' <&88
On MacOS, you need the -c flag as follows:
Little-Net:~ minfrin$ nc -c rssweather.com 80
GET /wx/in/kanpur/wx.php HTTP/1.1
Host: rssweather.com
Connection: close
[empty line]
The response then appears as follows:
HTTP/1.1 200 OK
Date: Thu, 23 Aug 2018 13:20:49 GMT
Server: Apache
Connection: close
Transfer-Encoding: chunked
Content-Type: text/html
The -c flag is described as "Send CRLF as line-ending".
To be HTTP/1.1 compliant, you need the Host header, as well as the "Connection: close" if you want to disable keepalive.
Test it out locally with python3 http.server
This is also a fun way to test it out. On one shell, launch a local file server:
python3 -m http.server 8000
Then on the second shell, make a request:
printf 'GET / HTTP/1.1\r\nHost: localhost\r\n\r\n' | nc localhost 8000
The Host: header is required in HTTP 1.1.
This shows an HTML listing of the directory, just as you would see from:
firefox http://localhost:8000
Next you can try to list files and directories and observe the response:
printf 'GET /my-subdir/ HTTP/1.1\r\nHost: localhost\r\n\r\n' | nc localhost 8000
printf 'GET /my-file HTTP/1.1\r\nHost: localhost\r\n\r\n' | nc localhost 8000
Every time you make a successful request, the server prints:
127.0.0.1 - - [05/Oct/2018 11:20:55] "GET / HTTP/1.1" 200 -
confirming that it was received.
example.com
This IANA-maintained domain is another good test URL:
printf 'GET / HTTP/1.1\r\nHost: example.com\r\n\r\n' | nc example.com 80
and compare with: http://example.com/
https SSL
nc does not seem to be able to handle https URLs. Instead, you can use:
sudo apt-get install nmap
printf 'GET / HTTP/1.1\r\nHost: github.com\r\n\r\n' | ncat --ssl github.com 443
See also: https://serverfault.com/questions/102032/connecting-to-https-with-netcat-nc/650189#650189
If you try nc, it just hangs:
printf 'GET / HTTP/1.1\r\nHost: github.com\r\n\r\n' | nc github.com 443
and trying port 80:
printf 'GET / HTTP/1.1\r\nHost: github.com\r\n\r\n' | nc github.com 80
just gives a redirect response to the https version:
HTTP/1.1 301 Moved Permanently
Content-Length: 0
Location: https://github.com/
Connection: keep-alive
Tested on Ubuntu 18.04.
I'm connecting to real-time data on a remote server as a client. I want to send the following to a server and keep the connection open. This is a 'push' protocol.
http://server.domain.com:80/protocol/dosomething.txt?POSTDATA=thePostData
I can call this in a browser and it's fine. However, if I try to use telnet directly in a Windows command prompt, the prompt just exits.
GET protocol/dosomething.txt?POSTDATA=thePostData
The same happens if I use Putty.exe and select Telnet as the protocol. I can't see a way to do this with Hercules at all, as I don't think the server would interpret the GET correctly.
Is there any way I can do this?
Thanks.
You have to match the HTTP protocol (RFC2616) to the letter if you want to use telnet. Try something like:
shell$ telnet www.google.com 80
Trying 173.194.43.50...
Connected to www.google.com (173.194.43.50).
Escape character is '^]'.
GET / HTTP/1.1
Host: www.google.com:80
Connection: close
HTTP/1.1 200 OK
Date: Tue, 11 Sep 2012 15:09:51 GMT
...
You need to type the following lines including an "empty line" following the "Connection" line.
GET / HTTP/1.1
Host: www.google.com:80
Connection: close
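If typing into telnet proves too fiddly, a small Python sketch does the same thing and keeps reading, which is what a push-style endpoint needs (the host, path and query string below are the placeholders from the question):

import socket

request = (
    "GET /protocol/dosomething.txt?POSTDATA=thePostData HTTP/1.1\r\n"
    "Host: server.domain.com\r\n"
    "\r\n"                      # the empty line ends the request headers
)

with socket.create_connection(("server.domain.com", 80)) as sock:
    sock.sendall(request.encode("ascii"))
    while True:                 # keep the connection open and print pushed data
        chunk = sock.recv(4096)
        if not chunk:
            break               # server closed the connection
        print(chunk.decode("utf-8", errors="replace"), end="")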
I know HTTP keep-alive is on by default in HTTP 1.1 but I want to find a way to confirm that it is actually working.
Does anyone know of a simple way to test this from a web browser (e.g. how to make sense of the Wireshark capture)? I know I need to look for multiple HTTP requests over the same TCP connection, but I don't know how to confirm that in Wireshark or any other way.
Thanks!
As Ron Garrity said on ServerFault, you can use Curl like this:
curl -Iv http://www.aptivate.org 2>&1 | grep -i 'connection #0'
And it outputs these two lines if keep-alive is working:
* Connection #0 to host www.aptivate.org left intact
* Closing connection #0
And if keep-alive is not working, then it just outputs this line:
* Closing connection #0
If you're on Windows Vista or later, you can use Resource Monitor. The Network tab lists all open TCP connections and the process that started them. Open a browser with one tab, browse to your page, and test.
First, try to capture the traffic to the target website in Wireshark and limit it to what you need with a filter like:
tcp port 80 and host targetwebsite.com
Then load the page in a browser or fetch it with any tool you have. If the target web page refreshes itself or one of the values in it, leave it open until you have captured at least one such refresh.
Now you have enough data and you can stop the capture in Wireshark.
You should see dozens of records whose protocol is TCP or HTTP. For this quick check you won't need the TCP records, so let's remove them with another filter: at the top of the window there is a "filter" field; type http there, and Wireshark will hide all records except those carrying the HTTP protocol.
Now select a record and look at the next level of detail, shown in the second pane below the record list. Just to be sure you are looking at the right place: the first line there starts with "Frame XYZ" and the fourth line starts with "Transmission Control Protocol". Look for the port numbers after "Src Port:" and "Dst Port:". Depending on the record, one of these numbers belongs to the webserver (typically 80) and the other is the port number at your end.
Now check a couple of different GET records (to tell whether a record is a GET request, check the Info column). If a port number at your end is reused for several of those requests, they were made over an HTTP keep-alive connection.
Remember that most browsers open multiple connections, even if the webserver supports keep-alive. So do NOT conclude that keep-alive is broken just because you see more than one client port.
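If you prefer to check a saved capture programmatically, here is a minimal Scapy sketch (the pcap file name is an assumption) that counts how many GET requests each client port issued; a port that appears more than once was reused by keep-alive:

from collections import Counter
from scapy.all import rdpcap, TCP, Raw

ports = Counter()
for pkt in rdpcap("capture.pcap"):
    # Count outgoing HTTP GET requests per client source port.
    if pkt.haslayer(TCP) and pkt.haslayer(Raw) and pkt[Raw].load.startswith(b"GET "):
        ports[pkt[TCP].sport] += 1

for port, count in ports.items():
    print("client port %d: %d GET request(s)" % (port, count))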
The most accurate way is to curl the same URL multiple times.
curl -v http://weibo.com -o /dev/null http://weibo.com -o /dev/null
If the output contains Re-using existing connection, then the HTTP keep-alive feature is working. For example,
* TCP_NODELAY set
* Connected to weibo.com (180.149.138.251) port 80 (#0)
> GET / HTTP/1.1
> Host: weibo.com
> User-Agent: curl/7.68.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 301 Moved Permanently
< ...
< ...
<
{ [236 bytes data]
* Connection #0 to host weibo.com left intact
* Found bundle for host weibo.com: 0x56324121d9a0 [serially]
* Can not multiplex, even if we wanted to!
* Re-using existing connection! (#0) with host weibo.com
* Connected to weibo.com (180.149.138.251) port 80 (#0)
> GET / HTTP/1.1
> Host: weibo.com
> User-Agent: curl/7.68.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 301 Moved Permanently
< ...
< ...
<
{ [236 bytes data]
* Connection #0 to host weibo.com left intact
Another quick way is to test with ab (Apache Bench). But some HTTP servers (uwsgi, for example) might not return the Connection: keep-alive header even when keep-alive is enabled; in such cases ab does NOT send keep-alive requests, so ab can only give a "positive" detection of HTTP keep-alive.
ab -c 5 -n 50 -k https://www.google.com/
If the result shows
...
Complete requests: 50
Failed requests: 0
Keep-Alive requests: 50 # Pay attention to this line
Total transferred:
...
Then the HTTP keep-alive is enabled.