I came across an HTTP HELP method (https://portswigger.net/research/cracking-the-lens-targeting-https-hidden-attack-surface, chapter "Invalid Host") and asked myself:
Are there any more systems that offer something like that?
I was also wondering how the pentester came up with this method.
Google couldn't help me here.
In this specific case, it was about an Apache Traffic Server whose help could be queried as follows:
HELP / HTTP/1.1
Host: XX.X.XXX.XX:8082

HTTP/1.1 200 Connection Established
Date: Tue, 07 Feb 2017 16:33:59 GMT
Transfer-Encoding: chunked
Connection: keep-alive
OK
Traffic Server Overseer Port
commands:
get <variable-list>
set <variable-name> = "<value>"
help
exit
example:
OK
get proxy.node.cache.contents.bytes_free
proxy.node.cache.contents.bytes_free = "56616048"
OK
Variable lists are conf/yts/stats records, separated by commas
This was then applied specifically as follows:
GET / HTTP/1.1
Host: XX.X.XXX.XX:8082
Content-Length: 34

GET proxy.config.alarm_email
HTTP/1.1 200 Connection Established
Date: Tue, 07 Feb 2017 16:57:02 GMT
Transfer-Encoding: chunked
Connection: keep-alive
...
proxy.config.alarm_email = "nobody@yahoo-inc.com"
I figured out the answer:
This is a protocol that was specially customized by Yahoo for an Apache Traffic Server.
Apache Traffic Server allows you to create your own protocols using the "New Protocols Plugin": https://docs.trafficserver.apache.org/en/latest/developer-guide/plugins/new-protocol-plugins.en.html.
The protocol created here appears to be line-based.
The scenario was as follows:
An initial load balancer evaluated the Host header of the incoming HTTP request and forwarded the request to the location named there. This means the attacker could choose the internal location the request would be routed to, in this case an Apache Traffic Server sitting at XX.X.XXX.XX:8082. The underlying attack was a Host header injection (https://portswigger.net/web-security/host-header).
The line-based, home-grown protocol then evaluated the individual lines of the HTTP request; this is how the information shown above was obtained (as explained here: https://www.youtube.com/watch?v=zP4b3pw94s0&feature=youtu.be&t=12m40s).
This means the attacker was able to reach the internal Apache Traffic Server via an HTTP request, and the individual lines of that request were each interpreted as separate commands.
One of the commands Yahoo had implemented there was HELP.
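Below is a minimal sketch (Python, raw socket) of the request shape described above. The front-end hostname is a hypothetical stand-in; the Host header carries the redacted internal IP:port from the write-up, which the load balancer used for routing, and the body line is what the line-based Overseer protocol parsed as a "get" command.

import socket

FRONTEND = ("frontend.example.com", 80)    # hypothetical public entry point
INTERNAL_HOST = "XX.X.XXX.XX:8082"         # redacted internal target from the write-up

body = "GET proxy.config.alarm_email\r\n"  # read by the back end as a line-based "get"
request = (
    "GET / HTTP/1.1\r\n"
    f"Host: {INTERNAL_HOST}\r\n"           # the routing decision is made on this header
    f"Content-Length: {len(body)}\r\n"
    "\r\n"
    + body
)

with socket.create_connection(FRONTEND) as sock:
    sock.sendall(request.encode("ascii"))
    print(sock.recv(4096).decode("ascii", errors="replace"))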
I have a SIM900 GSM module that I use to send GET and POST requests to servers.
Recently I rented a host for this purpose. I wrote a simple page using ASP.NET Web Forms to parse the incoming data from the GSM module. Everything was working until, a few days ago, I noticed that I could no longer receive data from my GSM module.
After investigating further I found out that the host I rented keeps returning HTTP 400 errors to my GSM module. These responses are not from IIS but from Microsoft-HTTPAPI/2.0. The request header is this:
GET /test/data?meow HTTP/1.1
Host : www.whatever.com
Connection : keep-alive
And this is the server response (body omitted):
HTTP/1.1 400 Bad Request
Content-Type: text/html; charset=us-ascii
Server: Microsoft-HTTPAPI/2.0
Date: Sun, 11 Oct 2020 12:08:28 GMT
Connection: close
Content-Length: 339
I used Postman (application) to simulate the same request and everything worked just fine.
I also made an exact copy of a Chrome request header and gave it to the module, but that didn't work either.
Note: I am not using the SIM900's built-in HTTP commands; I am connecting to a port (80 in this case) and making the GET request manually.
Note 2: I have been given a Plesk panel to manage my website and do not have access to certain server settings.
The request passes through the http.sys module before it reaches IIS, and http.sys intercepts requests that do not comply with its rules; that is why your response comes from Microsoft-HTTPAPI/2.0 with a 400 status code. One solution can be to modify the registry, but the right registry settings depend on your application and request, and there is no universal modification method.
How to troubleshoot HTTP 400 errors
Http.sys registry settings for Windows
Another method is to use a tool such as Fiddler to capture the request sent by the SIM900 and the request sent by Postman. After capturing them, compare them in detail to find the differences, and modify the SIM900 request so that it matches the Postman request and conforms to the http.sys rules.
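As a starting point for that comparison, here is a minimal sketch (in Python rather than SIM900 AT commands, so it can be tried from a PC first) of a request that should satisfy http.sys's parser; the host is the placeholder from the question. Note in particular that there is no space before the colon in the header lines: the request captured above uses "Host : ...", and RFC 7230 requires servers to reject whitespace between a header name and the colon with a 400, which would match the behaviour seen here.

import socket

HOST = "www.whatever.com"   # placeholder host from the question

# No whitespace before ':' in header lines; terminate every line with CRLF
# and end the header block with an empty line.
request = (
    "GET /test/data?meow HTTP/1.1\r\n"
    f"Host: {HOST}\r\n"
    "Connection: keep-alive\r\n"
    "\r\n"
)

with socket.create_connection((HOST, 80)) as sock:
    sock.sendall(request.encode("ascii"))
    print(sock.recv(4096).decode("ascii", errors="replace"))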
I'm trying to write a .NET web API that will receive HTTP requests from some devices and handle the data sent. I know the exact format of the data being sent and the ip/port that the data is sent to. The problem is that the API does not even seem to respond to the request as the controller method to handle the POST is never called.
I have tested the API with Postman, using the correct data format and host information, and it works as intended. To confirm that some kind of connection attempt is being made by the device, I listened on the port using a Node.js TCP server. There is data being sent, and this is the header info that precedes it:
POST / HTTP/1.0
Host: xxx
Connection: keep-alive
User-Agent: xxx
Content-Type: application/json
Transfer-Encoding: chunked
Transfer-Content: chunked
I can't post the body data, but it is in JSON format as expected (but separated into chunks).
Since there are requests being made and data being sent, but the API doesn't acknowledge it despite working when tested with Postman, I'm wondering if there is an issue with the headers. While researching the headers, I read that HTTP/1.0 doesn't support chunked transfer-encoding. Could it be that the devices are making erroneous requests? Or are the headers fine and the problem lies elsewhere?
Thank you for your help.
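For reference, here is a rough Python equivalent of the raw TCP listener mentioned above (the question used Node.js): it prints exactly what a device sends, so the request line and headers can be compared byte for byte with a Postman request. The port number is a placeholder.

import socket

LISTEN_PORT = 8080  # placeholder; use the port the devices actually post to

with socket.create_server(("", LISTEN_PORT)) as server:
    conn, addr = server.accept()
    with conn:
        print("connection from", addr)
        while True:
            data = conn.recv(4096)
            if not data:
                break
            # Print the raw request bytes exactly as received from the device
            print(data.decode("utf-8", errors="replace"), end="")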
Suppose that when we request a resource over HTTP, we get a response as shown below:
GET / HTTP/1.1
Host: www.google.co.in
HTTP/1.1 200 OK
Date: Thu, 20 Apr 2017 10:03:16 GMT
...
But when a browser requests many resources at a time, how can it identify which request got which response?
A browser can open one or more connections to a web server in order to request resources. For each of those connections the rules regarding HTTP keep-alive are the same and apply to both HTTP 1.0 and 1.1:
If HTTP keep-alive is off, the request is sent by the client, the response is sent by the server, the connection is closed:
Connection 1: [Open][Request1][Response1][Close]
If HTTP keep-alive is on, one "persistent" connection can be reused for succeeding requests. The requests are still issued serially over the same connection, so:
Connection 1: [Open][Request1][Response1][Request3][Response3][Close]
Connection 2: [Open][Request2][Response2][Request4][Response4][Close]
With HTTP pipelining, introduced in HTTP/1.1, if it is enabled (on most browsers it is disabled by default because of buggy servers), browsers can issue requests one after another without waiting for the responses, but the responses are still returned in the same order as they were requested.
This can happen simultaneously over multiple (persistent) connections:
Connection 1: [Open][Request1][Request2][Response1][Response2][Close]
Connection 2: [Open][Request3][Request4][Response3][Response4][Close]
Both approaches (keep-alive and pipelining) still utilize the default "request-response" mechanism of HTTP: each response will arrive in the order of the requests on that same connection. They also have the "head of line blocking" problem: if [Response1] is slow and/or big, it holds up all responses that follow on that connection.
Enter HTTP/2 multiplexing (see What is the difference between HTTP/1.1 pipelining and HTTP/2 multiplexing?). Here, a response can be fragmented, allowing a single TCP connection to transmit fragments of different requests and responses intermingled:
Connection 1: [Open][Rq1][Rq2][Resp1P1][Resp2P1][Resp2P2][Resp1P2][Close]
It does this by giving each fragment an identifier to indicate to which request-response pair it belongs, so the receiver can recompose the message.
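To make the ordering concrete, here is a rough sketch of keep-alive reuse using Python's standard http.client: two requests issued serially over one persistent connection, each response read in full before the next request is sent. example.com is just a stand-in host.

import http.client

conn = http.client.HTTPConnection("example.com")

conn.request("GET", "/")          # request 1
resp1 = conn.getresponse()
resp1.read()                      # drain the body so the socket can be reused
print("response 1:", resp1.status)

conn.request("GET", "/")          # request 2 reuses the same TCP connection
resp2 = conn.getresponse()
resp2.read()
print("response 2:", resp2.status)

conn.close()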
I think you are really asking about HTTP pipelining here. This is a technique introduced in HTTP/1.1, through which all requests are sent out by the client in order and answered by the server in that very same order. All the gory details are now in RFC 7230, sec. 6.3.2.
HTTP/1.0 had (or has) a comparable mechanism known as keep-alive. It allows a client to issue a new request right after the previous one has been answered. The benefit of this approach is that client and server no longer need to negotiate another TCP handshake for a new request/response cycle.
The important part is that in both methods the order of the responses matches the order of the issued requests over one connection. Therefore, responses can be uniquely mapped to the issuing requests by the order in which the client receives them: the first response matches the first request, the second response matches the second request, and so forth.
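A rough raw-socket sketch of that ordering under pipelining: both requests are written before any response is read, yet the responses come back in request order. example.com and the two paths are stand-ins, and not every server accepts pipelined requests.

import socket

# Two requests written back to back; the second closes the connection so the
# full byte stream can simply be read until EOF.
requests = (
    "GET /first HTTP/1.1\r\nHost: example.com\r\n\r\n"
    "GET /second HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n"
)

with socket.create_connection(("example.com", 80)) as sock:
    sock.sendall(requests.encode("ascii"))
    data = b""
    while chunk := sock.recv(4096):
        data += chunk

# The first status line in the stream answers /first, the second answers /second.
print(data.decode("ascii", errors="replace"))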
I think the answer you are looking for is TCP.
HTTP is a protocol that relies on TCP to establish a connection between the client and the host.
In HTTP/1.0, a separate TCP connection is created for each request/response pair.
HTTP/1.1 introduced pipelining, which allowed multiple request/response pairs to reuse a single TCP connection to boost performance (it didn't work very well).
So a request and its corresponding response are linked by the TCP connection they travel over.
It is then easy to associate a specific request with the response it produced.
PS: HTTP is not bound to use TCP forever; for example, Google is experimenting with other transport protocols such as QUIC, which might end up being more efficient than TCP for the needs of HTTP.
In addition to the explanations above, consider that a browser can open many parallel connections, usually up to six to the same server. For each connection it uses a different socket. For each request-response pair on each socket it is easy to determine the correlation.
In each TCP connection, request and response are sequential. A TCP connection can be re-used after finishing a request-response cycle.
With HTTP pipelining, a single connection can be multiplexed for multiple overlapping requests.
In theory, there can be any number[*1] of simultaneous TCP connections, enabling parallel requests and responses.
In practice, the number of simultaneous connections is usually limited on the browser and often on the server as well.
[*1] The number of simultaneous connections is limited by the number of ephemeral TCP ports the browser can allocate on a given system. Depending on the operating system, ephemeral ports start at 1024 (RFC 6056), 49152 (IANA), or 32768 (some Linux versions).
So, that may allow up to 65,535 - 1023 = 64,512 TCP source ports for an application. A TCP socket connection is defined by its local port number, the local IP address, the remote port number and the remote IP address. Assuming the server uses a single IP address and port number, the limit is the number of local ports you can use.
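As a small illustration of that last point, the four values that identify a TCP connection can be read directly from a connected socket; example.com is just a stand-in host.

import socket

with socket.create_connection(("example.com", 80)) as sock:
    local = sock.getsockname()    # (local IP, local ephemeral port, ...)
    remote = sock.getpeername()   # (remote IP, remote port, ...)
    # Together these four values uniquely identify the connection.
    print("local :", local[0], local[1])
    print("remote:", remote[0], remote[1])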
After capturing HTTP packets from a website, I see a request packet whose HTTP header looks like the one below. What does "OpenNMS HttpMonitor\r\n" mean? Its source address is not from the web page I opened!
GET / HTTP/1.1\r\n
[Expert Info (Chat/Sequence): GET / HTTP/1.1\r\n]
Request Method: GET
Request URI: /
Request Version: HTTP/1.1
Connection: CLOSE \r\n
User-Agent: OpenNMS HttpMonitor\r\n
\r\n
I believe this may well be Rackspace's monitoring solution for cloud servers. Might be wrong though. Might be worth contacting your hosting provider to see if it's them. You can sort of check this by seeing if your server IP is in the same subnet.
Um, not sure why it is appearing in your context, but OpenNMS is a network monitoring suite that we used to use at work to monitor our network nodes.
http://www.opennms.org/
Your IP may be erroneously being monitored by some corporation? ^^