Sending Raw TCP Packet Using NetCat to Erlang Server - tcp

I am trying to create a TCP server which will store incoming TCP packets as binary, for a key/value store. I already have an Erlang client which can send TCP packets to my Erlang server; however, for the sake of completeness, I want to allow the user to send TCP packets from the command line using clients such as NetCat. The user would adhere to a spec for how to format the data in the TCP packet so that the server is able to understand it. For example:
$ nc localhost 8091
add:key:testKey
Key Saved!
add:value:testValue
Value Saved!
get:key:testKey
Value: testValue
The user interacts with the server using the add:key:, add:value:, and get:key: prefixes. Whatever comes after the prefix should be taken literally and passed to the server as-is. That means a situation like the following should be possible, if the user so wanted:
$ nc localhost 8091
add:key:{"Foo","Bar"}
Key Saved!
add:value:["ferwe",324,{2,"this is a value"}]
Value Saved!
get:key:{"Foo","Bar"}
Value: ["ferwe",324,{2,"this is a value"}]
However, this doesn't seem possible, as what actually happens is as follows...
I pre-fill the Erlang key/value store (using ETS) via my Erlang client with a key of {"Foo","Bar"} and a value of ["ferwe",324,{2,"this is a value"}] - a tuple and a list respectively (in this example), as this key/value store has to be able to accommodate ANY valid Erlang term.
So in the example, currently there is 1 element in the ETS table:
Key              Value
{"Foo","Bar"}    ["ferwe",324,{2,"this is a value"}]
I then want to retrieve that entry using NetCat by giving the Key, so I type in NetCat...
$ nc localhost 8091
get:key:{"Foo","Bar"}
My Erlang server receives this as <<"{\"Foo\",\"Bar\"}\n">>.
My Erlang server is set up to receive binary, which is not an issue.
My question is therefore: can NetCat be used to send unencoded packets which don't escape the quote marks,
such that my server is able to receive the key as just <<"{"Foo","Bar"}">>?
Thank you.

My question is therefore: can NetCat be used to send unencoded packets which don't escape the quote marks.
Yes, netcat sends exactly what you give it, so in this case it sends get:key:{"Foo","Bar"} without escaping the quote marks. The backslashes in <<"{\"Foo\",\"Bar\"}\n">> are only how the Erlang shell prints a binary containing quote characters; the bytes on the wire are plain quotes.
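As an aside, you can verify that the quotes travel unescaped with a quick Python sketch (illustrative only; a local socket pair stands in for netcat and the server):

```python
import socket

# A connected pair of sockets: one end plays netcat, the other the server.
client, server = socket.socketpair()

# Exactly what a user would type into netcat (plus the trailing newline).
client.sendall(b'get:key:{"Foo","Bar"}\n')

data = server.recv(1024)
print(data)  # b'get:key:{"Foo","Bar"}\n' - plain quotes, no backslashes

client.close()
server.close()
```

The escaping the asker sees appears only when the Erlang shell renders the received binary, not in the data itself.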
Such that my server is able to receive the key as just <<"{"Foo","Bar"}">>
<<"{"Foo","Bar"}">> is not a syntactically correct Erlang term. Do you want to get the tuple {"Foo","Bar"} instead, in order to look it up in the ETS table? You can do that by parsing the binary:
Bin = <<"{\"Foo\",\"Bar\"}\n">>,
%% the trailing newline is harmless (erl_scan skips whitespace),
%% but we do need to append a dot for erl_parse
{ok, Tokens, _} = erl_scan:string(binary_to_list(Bin) ++ "."),
{ok, Term} = erl_parse:parse_term(Tokens),
ets:lookup(my_table, Term).

Related

Client sends messages through one series of IPs and receives answers through another series of IPs, after the primary IP is made down and up

In the case of SCTP multihoming, the client sends messages through one series of IPs and receives answers through another series of IPs, after the primary IP is made down and up.
Here I have configured two paths: a primary path and a secondary path. Initially, all messages are transmitted on the primary path. When I bring the primary interface down, all messages are transmitted on the secondary path.
Once I bring the primary interface back up, the first transaction is sent via the primary path but the answer arrives on the secondary path.
This happens only for the first transaction after the interface comes back up; from the second transaction on, all messages go out on the primary path and the answers come back on the primary path as well.
Behaviour in this case depends on a few factors, such as:
What the SACK chunk is actually confirming: whether it is the confirmation of the DATA that was received via the primary path, or a confirmation of something received previously.
Whether it is a lone SACK or a SACK bundled with DATA chunks.
Whether it confirms DATA chunks received via one path or via two paths (e.g. the first packet came via your secondary path and another one via the primary). In the first case, according to RFC 4960 chapter 6.4, the SACK should be sent via the primary path; in the second case the behaviour may vary:
An endpoint SHOULD transmit reply chunks (e.g., SACK, HEARTBEAT ACK,
etc.) to the same destination transport address from which it
received the DATA or control chunk to which it is replying. This
rule should also be followed if the endpoint is bundling DATA chunks
together with the reply chunk.
However, when acknowledging multiple DATA chunks received in packets
from different source addresses in a single SACK, the SACK chunk may
be transmitted to one of the destination transport addresses from
which the DATA or control chunks being acknowledged were received.
How strictly a particular implementation follows the RFC. The RFC defines that the SACK should be sent to the same destination transport address as the source address of the received packet. Strictly speaking, the RFC does not define which source IP address should be used. E.g. if a DATA chunk came in an IP packet via the IP1->IP2 path, according to the RFC it is okay to send the SACK via the IP3->IP1 path.

Port specification on an asio tcp client application

I'm rewriting a python twisted server in C++ using asio. I have set up the following examples from
http://www.boost.org/doc/libs/1_39_0/doc/html/boost_asio/tutorial.html:
Daytime.1 - A synchronous TCP daytime client
Daytime.3 - An asynchronous TCP daytime server
and they seem to be functioning correctly. However, what is puzzling me is that when I created the twisted server both it and the client side required explicit specification of the IP addresses and port numbers. I am having a slightly different experience here:
On the client application no specification of the port number is required. I can successfully connect to the server by using only 127.0.0.1 as a command line argument.
Also, I appear to be able to connect to the same server with any legal variant of this IP address, as long as the first byte is 127 (e.g. 127.1.2.3 connects).
There's a literal in the client code specifying to connect using what I assume is an OS provided "daytime" TCP service. However, there is no reference to this in the server code. Why do I have to specify a particular service to connect to? I also suspect that this service could be related to the behaviour in points 1 and 2.
Now I know that the server has an acceptor socket listening that only establishes the connection once it receives a request but I would like some more details here.
Daytime is a well-known service in the *nix world. You can get the list of known services by looking at the /etc/services file, where you can see the records below:
daytime 13/udp # Daytime (RFC 867)
daytime 13/tcp # Daytime (RFC 867)
When a service name is provided along with the host name, the TCP endpoint resolver uses this form of getaddrinfo:
int error = ::getaddrinfo(host, service, &hints, result);
Looking at the man page [Emphasis mine]:
int
getaddrinfo(const char *hostname, const char *servname, const struct addrinfo *hints, struct addrinfo **res)
The hostname and servname arguments are either pointers to NUL-terminated strings or the null pointer. An acceptable
value for hostname is either a valid host name or a numeric host address string consisting of a dotted decimal IPv4
address or an IPv6 address. The servname is either a decimal port number or a *service name listed in services(5)*. At
least one of hostname and servname must be non-null.
So, in short, provided the correct service name, it knows the correct port number to use: 13 in the case of the "daytime" service.
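For illustration, the same services(5) lookup is exposed in many languages; e.g. Python's socket module (a quick sketch, assuming the system has a standard /etc/services):

```python
import socket

# Resolve the well-known "daytime" service to its TCP port via services(5),
# just as getaddrinfo does when given a service name instead of a port.
port = socket.getservbyname("daytime", "tcp")
print(port)  # 13 on systems with a standard /etc/services
```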

Matching up incoming packets with their corresponding request (with noise)

I'm currently building a black-box fuzzing tool and I have encountered the following problem:
Suppose I send a server a fuzzed packet that I construct and get some packets back from the server. I also get some additional packets from other parts of the same server.
Provided I can look at all the incoming and outgoing packets (this is not a request-response system, it's an RPC-based online game) and I have no information what the response should look like, how do I filter out those packets that were sent in response to the fuzzed packet from the rest of the stream?
Just an example: you send an RPC like "give a player a gun with ID 5" and the server sends that player RPCs like "give me an array of the guns you have" and "tell me how much ammo you got". I want to see how the server reacts if I send malformed input, e.g. negative or big integers, in this case. My problem is the fact that the server sends these on a random basis all the time, so I want to filter out the requests that are sent in response to my fuzzed RPC.
A statistical approach will do as I assume there's no way to determine this with full confidence.
The fact that "it's not a request-response system, it's RPC-based" should not change anything about the classic scheme - unless you/I missed some details in your question:
You must construct a tuple from the request with (source IP, destination IP, source port, destination port), and then watch for packets with the reverse tuple (destination IP, source IP, destination port, source port) to catch the response(s).
EDIT: for TCP, of course - for connectionless protocols, well, that's a game of heuristic guesses.
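A minimal sketch of that tuple-matching idea in Python (the packet field names here are assumptions for illustration, not part of any real capture API):

```python
def flow_key(pkt):
    """Normalized 4-tuple: (src_ip, src_port, dst_ip, dst_port)."""
    return (pkt["src_ip"], pkt["src_port"], pkt["dst_ip"], pkt["dst_port"])

def is_response_to(request, pkt):
    """A packet answers the request if its 4-tuple is the exact reverse."""
    s_ip, s_port, d_ip, d_port = flow_key(request)
    return flow_key(pkt) == (d_ip, d_port, s_ip, s_port)

# Example: one fuzzed request and two packets observed from the server.
request = {"src_ip": "10.0.0.1", "src_port": 40000,
           "dst_ip": "10.0.0.2", "dst_port": 7777}
reply   = {"src_ip": "10.0.0.2", "src_port": 7777,
           "dst_ip": "10.0.0.1", "dst_port": 40000}
noise   = {"src_ip": "10.0.0.2", "src_port": 7777,
           "dst_ip": "10.0.0.3", "dst_port": 40001}

print(is_response_to(request, reply))  # True
print(is_response_to(request, noise))  # False
```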

How to write http layer sniffer

I want to write an application layer sniffer (SMTP/ftp/http).
Based on my searches, the first (and perhaps hardest!) step is to reassemble the TCP streams of the sniffed connections.
Indeed, what I need is something like the "follow TCP stream" option of Wireshark, but I need a tool that does it on a live interface and automatically. As far as I know, Tshark can extract TCP stream data from saved pcap files automatically (link) but not from live interfaces. Can Tshark do it on live interfaces?
As far as I know, TCPflow can do exactly what I want; however, it cannot handle IP defragmentation or SSL connections (I want to analyse the SSL content in cases where I have the server's private key).
Finally, I also tried the Bro network monitor. Although it provides the list of TCP connections (conn.log), I was not able to get the TCP connection contents.
Any suggestion about mentioned tools or any other useful tool is welcome.
Thanks in advance, Dan.
The Perl Net::Inspect library might help you. It also comes with a tcpudpflow tool which can write TCP and UDP flows into separate files, similar to tcpflow. It works on pcap files or can do live captures. The library handles IP fragmentation. It also comes with an httpflow tool to extract HTTP requests and responses (including decompression, chunked encoding, ...). It does not currently handle SSL.
As the author of this library, I don't think that extracting TCP flows is the hardest part: the HTTP parser (excluding decompression, including chunked mode) is nearly twice as big as the IP and TCP parts combined.
This example works for reassembling application data of a single protocol:
tshark -Y "tcp.dstport == 80" -T fields -d tcp.port==80,echo -e echo.data
It captures live HTTP data, reassembles it, and outputs it as raw hex.
I can add a small script to parse the hex into ascii if you like.
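For instance, a small Python helper along those lines (it assumes the field output is one hex string per line, with or without ":" separators):

```python
import binascii

def hex_line_to_text(line: str) -> str:
    """Decode one line of hex (with or without ':' separators) to text."""
    raw = binascii.unhexlify(line.strip().replace(":", ""))
    return raw.decode("ascii", errors="replace")

# e.g. feed it the tshark field output line by line:
print(hex_line_to_text("474554202f20485454502f312e31"))  # GET / HTTP/1.1
```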
I want to analyse the SSL content in cases where I have the server's private key
TL;DR: This can't be done with a capturing tool alone.
Why not: because with forward-secret cipher suites (DHE/ECDHE key exchange), each SSL session negotiates a fresh secret session key, and you can't decrypt the session without that key; having the server's private key is not enough. The idea behind this is that if someone captures your SSL traffic, saves it, and then a year later "finds" the server's private key, he still won't be able to decrypt your traffic. (Only with plain RSA key exchange does the server's private key alone suffice to decrypt a capture.)

Nagle-Like Problem

So I have this real-time game, with a C++ server (using the SFML library) with Nagle disabled, and a client using AsyncSocket, also with Nagle disabled. I'm sending 30 packets every second. There is no problem sending from the client to the server, but when sending from the server to the clients, some of the packets get merged. For example, if I send "a" and "b" in completely different packets, the client reads them as "ab". It happens only once in a while, but it causes a real problem in the game.
So what should I do? How can I solve this? Maybe it's something in the server? Maybe OS settings?
To be clear: I am NOT using Nagle, but I still have this problem. I disabled it in both client and server.
For example, if I send "a" and "b" in completely different packets, the client reads them as "ab". It happens only once in a while, but it causes a real problem in the game.
I think you have lost sight of the fundamental nature of TCP: it is a stream protocol, not a packet protocol. TCP neither respects nor preserves the sender's data boundaries. To put it another way, TCP is free to combine (or split!) the "packets" you send, and present them to the receiver any way it wants. The only restriction that TCP honors is this: if a byte is delivered, it will be delivered in the same order in which it was sent. (And nothing about Nagle changes this.)
So, if you invoke send (or write) on the server twice, sending these six bytes:
"packet" 1: A B C
"packet" 2: D E F
Your client side might recv (or read) any of these sequences of bytes:
ABC / DEF
ABCDEF
AB / CD / EF
If your application requires knowledge of the boundaries between the sender's writes, then it is your responsibility to preserve and transmit that information.
As others have said, there are many ways to go about that. You could, for example, send a newline after each quantum of information. This is (in part) how HTTP, FTP, and SMTP work.
You could send the packet length along with the data. The generalized form for this is called TLV, for "Type, Length, Value". Send a fixed-length type field, a fixed-length length field, and then an arbitrary-length value. This way you know when you have read the entire value and are ready for the next TLV.
You could arrange that every packet you send is identical in length.
I suppose there are other solutions, and I suppose that you can think of them on your own. But first you have to realize this: TCP can and will merge or break your application packets. You can rely upon the order of the bytes' delivery, but nothing else.
You have to disable Nagle in both peers. You might want to find a different protocol that's record-based such as SCTP.
EDIT2
Since you are asking for a protocol here's how I would do it:
Define a header for the message. Let's say I pick a 32-bit header.
Header:
MSG Length: 16b
Version: 8b
Type: 8b
Then the real message comes in, having MSG Length bytes.
So now that I have a format, how would I handle things ?
Server
When I write a message, I prepend the control information (the length is the most important, really) and send the whole thing. Having NODELAY enabled or not makes no difference.
Client
I continuously receive stuff from the server, right ? So I have to do some sort of read.
Read bytes from the server. Any amount can arrive. Keep reading until you've got at least 4 bytes.
Once you have these 4 bytes, interpret them as the header and extract the MSG Length
Keep reading until you've got at least MSG Length bytes. Now you've got your message and can process it
This works regardless of TCP options (such as NODELAY), MTU restrictions, etc.
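A sketch of that client-side read loop in Python, using the header layout proposed above (16-bit length, 8-bit version, 8-bit type); a socket pair stands in for the real connection:

```python
import socket
import struct

HEADER = struct.Struct("!HBB")  # 16-bit length, 8-bit version, 8-bit type

def read_exact(sock: socket.socket, n: int) -> bytes:
    """Keep calling recv until exactly n bytes have arrived."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed mid-message")
        buf += chunk
    return buf

def read_message(sock):
    """Read one length-prefixed message: returns (version, type, payload)."""
    length, version, msg_type = HEADER.unpack(read_exact(sock, HEADER.size))
    return version, msg_type, read_exact(sock, length)

# Demo: two messages written back-to-back (as TCP might coalesce them)
# still come out as two distinct messages on the reading side.
left, right = socket.socketpair()
left.sendall(HEADER.pack(1, 1, 7) + b"x" + HEADER.pack(3, 1, 8) + b"yyy")
m1 = read_message(right)
m2 = read_message(right)
print(m1, m2)  # (1, 7, b'x') (1, 8, b'yyy')
left.close()
right.close()
```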
