Can I drop packets manually? - TCP

I need only part of the HTTP response, but I receive too many useless packets. I want to keep the session alive (to avoid repeating the TCP and SSL handshakes), but I only need the first part of the response. Is this possible?
How can I do it?

Related

What is the function of 802.11 (Retry Field)?

I have done some research, but I'm still confused by this field. I know its purpose relates to retransmission when the bit is set to 1. However, is it correct to say that if a frame is sent but no acknowledgement is received, the sender will try to send it again and therefore set the Retry bit to 1?
Is that the correct explanation? If I am wrong, how should it be explained?
It is a wireless frame that is being retransmitted because the receiver did not acknowledge it. For more detailed information, refer to: https://www.youtube.com/watch?v=0wm9zK7kDvU
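If you want to see this in a capture of your own, Wireshark's 802.11 dissector exposes the bit as the wlan.fc.retry field, so a simple display filter shows only retransmitted frames (capture.pcap here is just a placeholder file name):
tshark -r capture.pcap -Y "wlan.fc.retry == 1"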

Understanding tshark output

I am trying to understand the output of network data captured by tshark with the following command:
sudo tshark -i any 'tcp port 80' -V -c 800 -R 'http contains <filter_argument>' > <desired_file_location>
Accordingly, I get some packets in the output, each starting with a line like this:
Frame 5: 1843 bytes on wire (14744 bits), 1843 bytes captured (14744 bits) on interface 0
I have some basic questions regarding a packet:
Are a frame and a packet the same thing (used interchangeably)?
Does a packet logically represent one request (in my case, an HTTP request)? If not, can a request span multiple packets, or can a packet contain multiple requests? A more basic question would be: what does a packet represent?
I see a lot of information being captured in the request. Is there a way, using tshark, to capture just the HTTP headers and HTTP request body? Basically, my motive for this whole exercise is to capture all these requests so I can replay them later.
Any pointers to help answer these questions would be really helpful.
You've asked several questions. Here are some answers.
Are frames and packets the same things?
No. Technically, when you are looking at network data and that data includes the Layer 2 frame header, you are looking at a frame. The IP packet inside of that frame is just data from Layer 2's point of view. When you look at the IP datagram (or strip off the frame header), you are now looking at a packet.
Ultimately, I tell people that you should know the difference and try to use the terms properly, but in practice it's not an extremely important distinction.
Does a packet represent a single request?
This really depends. With HTTP 1.0 and 1.1, you could look at it this way, though there's no reason that, if the client has a significant amount of POST data to send, the request can't span multiple packets. It is better to think of a single "connection" or "session" as a single request/response. (This is not necessarily strictly true with HTTP 1.1, but it is generally true)
With HTTP 2.0, this is by design not true. A single connection or session is used to handle multiple data streams (requests/responses).
How can I get at the request headers?
This is far too lengthy for me to answer here. The simplest thing to do, most likely, is to simply fire up Wireshark, go into the filter bar and type "http." As soon as you hit the dot, you will see a list of all of the different sub-elements that you can look at. You can use these in tshark using the '-Y' option, and you can additionally specify columns that you would like to display (so you can add and remove columns, effectively).
An alternative way to see this information is to use the filter expression button to bring up the protocols selector. If you scroll down to HTTP, you can select it and then see all of the fields that are available.
When looking through these, realize that some of the fields are in the top-level rather than within request or response. For example, content-length appears as a field under http rather than http.request.content_length. This is because content-length is a field common to all requests and responses.
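To illustrate the '-Y' option and field selection described above (capture.pcap and the exact field list are just placeholders; pick whatever fields the filter-bar trick shows you), a command along these lines prints one line per HTTP request with only the chosen fields:
tshark -r capture.pcap -Y "http.request" -T fields -e frame.number -e http.request.method -e http.request.uri -e http.host
Adding further -e options extends the output with any other fields you care about; tshark -G fields lists every field name the dissectors support.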

OnOffApplication with TCP retransmission

I was doing some experiments.
And I used OnOffApplication to generate the traffic.
However, things didn't seem right.
And I use MaxBytes to set the amount of traffic that I want to send.
And the traffic is heavy.
So there will be some packets being dropped.
And it seems OnOffApplication doesn't care about the dropped packets. (I'm not sure; it's my guess.)
It only sends packets until it reaches MaxBytes, and doesn't care whether each packet is received or not.
Is my guess right?
And, if my guess is right, is there any alternative I can use to generate traffic where each flow has a certain size and keeps retransmitting until all packets in the flow are received?
My code is below:
OnOffHelper source ("ns3::TcpSocketFactory", Address (InetSocketAddress(r_ipaddr, port)));
source.SetAttribute ("OnTime", RandomVariableValue (ConstantVariable (1)));
source.SetAttribute ("OffTime", RandomVariableValue (ConstantVariable (0)));
source.SetAttribute ("DataRate", DataRateValue (DataRate(linkBw)));
source.SetAttribute ("PacketSize", UintegerValue (packetSize));
source.SetAttribute ("MaxBytes", UintegerValue (tempsize*1000));
From the application's point of view, OnOff is only a packet generator. It sends packets with specific characteristics (rate, maximum number of bytes, etc.). It does not track them. That's by design.
If you use TCP, though, then the socket will track the data and make sure that any lost segments are retransmitted.
The application will generate MaxBytes worth of load, but the actual packets transmitted on the wire (or the air) may differ, because TCP (by design) does not respect message boundaries; it is a byte-stream-oriented protocol. So it may bundle data together, split it into segments, mix in retransmitted segments, and so on.
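If the requirement is strictly "each flow sends a fixed number of bytes and TCP retransmits until everything is delivered", one alternative worth considering (my suggestion, not something the answer above mentions) is ns-3's BulkSendApplication, which is driven by bytes handed to the socket rather than by a rate. A minimal sketch, reusing r_ipaddr, port, packetSize and tempsize from the question's code, with sourceNode standing in for your sending node:
BulkSendHelper source ("ns3::TcpSocketFactory", Address (InetSocketAddress (r_ipaddr, port)));
source.SetAttribute ("MaxBytes", UintegerValue (tempsize * 1000)); // total bytes for the flow
source.SetAttribute ("SendSize", UintegerValue (packetSize));      // bytes per write to the socket
ApplicationContainer apps = source.Install (sourceNode);
apps.Start (Seconds (1.0));
apps.Stop (Seconds (10.0));
Pairing this with a PacketSink on the receiving node and checking the sink's total received bytes is an easy way to confirm that the whole flow actually arrived.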

Idea to handle lost packets (theory)

Context
We have an unsteady transmission channel. Some packets may be lost.
Sending a single network packet in any direction (from A to B or from B to A) takes 3 seconds.
We allow a signal delay of 5 seconds, no more. So we have a 5-second buffer. We can use those 5 seconds however we want.
Currently we use only 80% of the transmission channel's capacity, so we have room for about 1/4 more traffic than we currently send.
The quality of the video cannot be worsened.
We need to use UDP.
Problem
We need to make the quality better. How do we handle lost packets? We need to use UDP and handle those errors ourselves. How do we do it, then? How can we make sure that fewer packets are lost than at present (we can't guarantee 100%, we only want an improvement), without retransmitting them? We can do anything; this is theory.
There are different ways to handle these things. It depends on what application you are using. Are you doing real-time video streaming, with stringent requirements?
Since you say you have a buffer, you can actually maintain a buffer of packets and then send an acknowledgement for the lost packets (if you feel you can wait).
As this is a video application, send acknowledgements only for the key frames. Make sure that you have a key (I) frame, and then do interpolation on the receive side.
Look into something called forward error correction: fountain codes, Luby transform codes. Here, you encode packets 1 and 2 to produce packet 3. If packet 1 is lost, you use packet 3 and packet 2 to get packet 1 back on the receive side. Basically, you send redundant packets. It's a little harsh on the network, but you get most of the data.
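To make the redundancy idea concrete, here is a minimal sketch (an illustration, not part of the answer above) of the simplest possible scheme: a single XOR parity packet over a pair of equal-length data packets.
#include <cstdint>
#include <cstddef>
#include <vector>

// Build one parity packet by XORing two equal-length data packets.
std::vector<uint8_t> MakeParity (const std::vector<uint8_t>& p1, const std::vector<uint8_t>& p2)
{
  std::vector<uint8_t> parity (p1.size ());
  for (std::size_t i = 0; i < p1.size (); ++i)
    parity[i] = p1[i] ^ p2[i];
  return parity; // transmitted alongside p1 and p2
}

// If exactly one packet of the pair is lost, XOR the survivor with the parity
// packet to reconstruct the missing one.
std::vector<uint8_t> Recover (const std::vector<uint8_t>& survivor, const std::vector<uint8_t>& parity)
{
  std::vector<uint8_t> lost (survivor.size ());
  for (std::size_t i = 0; i < survivor.size (); ++i)
    lost[i] = survivor[i] ^ parity[i];
  return lost;
}
Real codes (Reed-Solomon, fountain/LT codes) generalize this to larger groups and heavier loss, but the trade-off is the same: extra bandwidth in exchange for fewer unrecoverable losses.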

Split CRLF between TCP payloads

I'm currently writing a low-level HTTP parser and have run into the following issue:
I am receiving HTTP data on a packet-by-packet basis, i.e. TCP payloads one at a time. When parsing this data, I am using the HTTP protocol standards of searching for CRLF to delineate header lines, chunk data (in the case of chunked-encoding), and the dual CRLF to delineate header from body.
My question is: do I need to worry about the possibility of CRLF being split between two TCP packet payloads? For example, the HTTP header will finish with CRLFCRLF. Is it possible that two subsequent TCP packets will have CR, and then LFCRLF?
I am assuming that yes, this is a case to worry about, since the application (HTTP) and TCP layers are rather independent of each other.
Any insight into this would be highly appreciated, thank you!
Yes, it is possible for the CRLF to get split across different TCP packets. Just think about the possibility that a header line runs exactly one byte past the end of a TCP segment: in that case, there is only room for the CR, but not for the LF.
So no matter how clever your code gets, it must be able to handle this splitting case.
What language are you working in? Does it not have some form of buffered read functionality for the socket, so you don't have this issue?
The short answer to your question is yes, theoretically you do have to worry about it, because it is possible the packets would arrive like that. It is very unlikely, because most HTTP endpoints will tend to send the header in one packet and the body in subsequent packets. This is less by convention and more by the nature of the way most socket-based programs/languages work.
One thing to bear in mind is that while the protocol standards are quite clear about the CRLF separation, many people who implement HTTP (clients in particular, but to some degree servers as well) don't know or care what they are doing and will not obey the rules. They will tend to separate lines with LF only - particularly the blank line between the head and the body; I have seen more code with this problem than I could quickly count. While this is technically a protocol violation, most servers and clients will accept this behaviour and work around it, so you will need to as well.
If you can't do some kind of buffered read, there is some good news. All you need to do is read a packet at a time into memory and append the data to the previous packet(s). Every time you have read a packet, scan your accumulated data for a double CRLF sequence; if you don't find it, read the next packet, and so on until you find the end of the head. This keeps memory usage relatively small, because the head of any request shouldn't ever be more than 5-6 KB, which, given an Ethernet MTU of around 1450 bytes on average, means you shouldn't ever need to load more than 4 or 5 packets into memory to cope with it.
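Here is a minimal sketch of that accumulate-and-scan idea (an illustration only; the class and method names are made up), assuming you are handed raw TCP payloads one at a time and also want to tolerate the LF-only blank line mentioned above:
#include <cstddef>
#include <string>

// Accumulates TCP payloads and reports when the full HTTP head has arrived,
// regardless of where the CRLFCRLF happens to be split across payloads.
class HeaderAccumulator
{
public:
  // Feed one TCP payload; returns true once the end of the head has been seen.
  bool Feed (const char* data, std::size_t len)
  {
    buffer_.append (data, len);
    // Accept the standard CRLFCRLF as well as the sloppy LF-only blank line.
    headEnd_ = buffer_.find ("\r\n\r\n");
    if (headEnd_ == std::string::npos)
      headEnd_ = buffer_.find ("\n\n");
    return headEnd_ != std::string::npos;
  }

  // The head, without the terminating blank line; valid only after Feed() returns true.
  std::string Head () const { return buffer_.substr (0, headEnd_); }

private:
  std::string buffer_;
  std::size_t headEnd_ = std::string::npos;
};
The same caution applies to individual header lines and chunk-size lines: never assume a CR and its LF arrive in the same payload; always scan the buffered data.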

Resources