I need to classify TCP traffic into video and non-video, so I need to find characteristics of each flow. My classification is flow-based, and one of my variables is incoming packet length, but on its own it is not accurate enough:
P(video | 1200 bytes) = 0.04
P(non-video | 1200 bytes) = 0.22
I need help finding more variables, at least two, so I can decide more accurately whether a flow contains video or not.
Can anyone help me, please?
Thanks
Checking a single packet by itself may not yield a good result; why don't you try profiling based on the source of the packets? If a source sends a long, sustained stream of related TCP packets, it could be video.
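Other flow-level variables worth trying (this is a suggestion, not something derived from your data): mean packet inter-arrival time, downstream byte rate, and flow duration, since streaming sources tend to produce long, sustained, high-volume flows of large packets. Here is a minimal sketch of combining several such features with naive Bayes, so the decision becomes P(video | all features) rather than P(video | packet length) alone; the feature rows and labels below are hypothetical placeholders:

    # Sketch: combine several per-flow features in a naive Bayes classifier.
    # The training rows and labels are hypothetical placeholders; in practice
    # you would extract them from your own labeled captures.
    import numpy as np
    from sklearn.naive_bayes import GaussianNB

    # One row per flow: [mean packet length (bytes), mean inter-arrival (ms),
    #                    downstream rate (kB/s), flow duration (s)]
    X_train = np.array([
        [1380,   8, 450, 600],   # long, fast, high-volume flow -> video
        [1420,   5, 700, 900],   # video
        [ 480,  90,  12,  30],   # short, slow flow -> non-video
        [ 200, 300,   2,   5],   # non-video
    ])
    y_train = np.array([1, 1, 0, 0])  # 1 = video, 0 = non-video

    clf = GaussianNB().fit(X_train, y_train)

    # predict_proba returns P(non-video | features) and P(video | features),
    # conditioned on all four variables jointly instead of length alone.
    new_flow = np.array([[1350, 10, 500, 480]])
    print(clf.predict_proba(new_flow))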
I'm a student currently taking an Operating Systems course. While studying for an exam I stumbled upon a strange answer to a question, and I couldn't find an explanation for it.
Question: Suppose we have an operating system that runs on low physical memory, so the designers decided to make the buffer that handles all network-related work as small as possible. What is the smallest possible size of that buffer?
Answer: It can't be implemented with only one byte, but it can be implemented with a two-byte buffer.
My thoughts: The question has four answer choices, one of which is "3 bytes or more", and I thought that was the right one: to establish a connection you need at least to be able to send a TCP/UDP (or similar) header containing all the connection info. So I have no idea why the two-byte answer is the right one (according to the reference). Maybe some degenerate case?
Thanks for the help.
The buffer has to be at least as large as the packet size on the network, and that will depend on the type of hardware interface. I know of no network system, even going back to the days of dialup, that used anything close to 2 bytes.
Maybe, in theory, you could have a network system that used 2-byte packets. But the same logic would allow 1-byte packets, so there is nothing that makes two bytes the special minimum.
Sometimes I wonder about the questions CS professors come up with. I guess that's why they say:
Those who can do, do;
Those who can't do, teach;
Those who can't do and can't teach, teach PE.
I've been thinking about wireless networking a little bit recently, and last night I came upon a question I can't find an answer to: how do clients know when they can transmit without stomping on another client's transmission?
I assume there is documentation for this sort of thing available, but I've been unable to find anything useful in half an hour of casual Google queries, probably because I don't know the right terms. Apologies in advance if this is a silly question...
Here's why I'm confused: based on my understanding of how RF hardware works, we can model the transmission medium as a safe shared register between the RF clients (because what one client broadcasts can be overwritten by other clients, producing a muddle of the two). But safe registers only have consensus number 1, so how can the clients agree on who transmits at any given point? I'm assuming that only one client can transmit at once -- perhaps this is my fundamental misunderstanding?
Even the use of a randomized consensus protocol seems unwieldy, because the only ones I know of use atomic registers, not safe registers, and also have no upper bound on running time, so two identical devices with the same random seed could proceed for a very long time.
Thanks!
Please check: Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA).
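In short: a station listens before transmitting (carrier sense), and when the medium is busy or a frame goes unacknowledged it waits a random backoff drawn from a contention window that doubles after each failure. The random draw is what breaks the tie between otherwise identical devices, and note that this does not solve consensus in the strong sense: collisions can still happen, they are just made unlikely and recoverable. A toy sketch of the backoff loop, where the three helper functions are hypothetical stubs standing in for the radio hardware:

    # Toy sketch of CSMA/CA-style exponential backoff. The three helpers are
    # hypothetical stubs for what a real radio/driver would provide.
    import random
    import time

    SLOT = 0.00002             # ~20 us slot time, illustrative only
    CW_MIN, CW_MAX = 15, 1023  # contention window bounds, in slots

    def medium_is_idle():      # stub: a real NIC samples channel energy
        return random.random() > 0.3

    def send_frame(frame):     # stub: a real NIC transmits here
        pass

    def ack_received():        # stub: a real NIC waits briefly for an ACK
        return random.random() > 0.2

    def transmit(frame, max_attempts=7):
        cw = CW_MIN
        for _ in range(max_attempts):
            while not medium_is_idle():               # carrier sense: defer while busy
                time.sleep(SLOT)
            time.sleep(random.randint(0, cw) * SLOT)  # random backoff slots
            send_frame(frame)
            if ack_received():
                return True
            cw = min(2 * cw + 1, CW_MAX)  # no ACK: assume collision, widen window
        return False                      # give up; upper layers must recover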
I would like to analyze a given TCP connection and record the total number of losses, along with a detailed breakdown: how many losses were detected by triple-duplicate ACKs, and how many by single, double, or triple timeouts, etc.
Can anybody suggest a good tool for this?
Thanks in advance!
I suggest that you use tcptrace. It has more detailed TCP statistics than Wireshark, along the lines you are looking for (use the long output format, tcptrace -l <capture-file>).
I've been creating a reliable networking protocol similar to TCP, and I was wondering what a good default value for a retransmit threshold should be, i.e. the number of times I resend a packet before assuming the connection is broken. How can I find the optimal number of retries on a network? Not all networks have the same reliability, so I'd imagine this "optimal" value varies between networks. Is there a good way to calculate it? Also, how many milliseconds should I wait before retrying?
This question cannot be answered as presented: there are far, far too many real-world complexities that must be factored in.
If you want TCP, use TCP. If you design a custom transport-layer protocol, you will do worse than the 40 years of cumulative experience coded into TCP.
If you don't look at the existing literature, you will miss a good hundred design considerations that would never occur to you sitting at your desk.
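That said, for the "how many milliseconds" part there is one piece you can borrow directly: TCP's retransmission timeout (RTO) from RFC 6298, computed as a smoothed RTT estimate plus a variance term and doubled on each timeout. A minimal sketch of that estimator; the constants are the RFC's, but the class around them is only an illustration (it omits the clock-granularity term):

    # Sketch of TCP's retransmission timeout estimator per RFC 6298.
    # Constants (alpha, beta, K, the 1 s floor) are the RFC's; the class
    # itself is only an illustration, not a complete implementation.
    class RtoEstimator:
        ALPHA = 1 / 8    # gain for the smoothed RTT
        BETA = 1 / 4     # gain for the RTT variance
        K = 4
        MIN_RTO = 1.0    # seconds
        MAX_RTO = 60.0

        def __init__(self):
            self.srtt = None    # smoothed round-trip time
            self.rttvar = None  # round-trip time variance
            self.rto = 1.0      # initial RTO before any sample

        def on_rtt_sample(self, rtt):
            """Feed one RTT measurement (seconds) from a non-retransmitted packet."""
            if self.srtt is None:
                self.srtt = rtt
                self.rttvar = rtt / 2
            else:
                self.rttvar = (1 - self.BETA) * self.rttvar + self.BETA * abs(self.srtt - rtt)
                self.srtt = (1 - self.ALPHA) * self.srtt + self.ALPHA * rtt
            self.rto = min(max(self.srtt + self.K * self.rttvar, self.MIN_RTO), self.MAX_RTO)

        def on_timeout(self):
            """Exponential backoff: double the RTO on every retransmission."""
            self.rto = min(self.rto * 2, self.MAX_RTO)

    est = RtoEstimator()
    for sample in (0.120, 0.180, 0.100):  # hypothetical RTT samples, seconds
        est.on_rtt_sample(sample)
    print(f"wait {est.rto * 1000:.0f} ms before retransmitting")

The RFC deliberately leaves the give-up point, i.e. your retry threshold, as policy rather than measurement.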
I ended up allowing the application to set this value, with a default value of 5 retries. This seemed to work across a large number of networks in our testing scenarios.
I am looking for a way to calculate the one-way delay in a packet-switched network. I do not want to use NTP or PTP (the Network Time Protocol or the Precision Time Protocol).
Consider the scenario:
Host-1 sends a packet to Host-2. The hosts have different clock rates and are located in different countries.
The packet may be UDP, TCP, or a Layer 2 frame.
Is there any way to sync the clock rates of the two hosts so as to calculate the one-way delay?
How do you calculate the one-way delay without relying on a timing protocol? I am looking for some generic formula to do this.
I would much appreciate any answers to this question.
Thanks a ton in advance.
Synchronizing clocks is exactly what [S]NTP is meant to accomplish. If there were a simpler way, the protocols would be simpler. You can approximate RTT without them, but one-way delay is hard to do.
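To make the "approximate RTT" part concrete: in an echo exchange, both timestamps come from the same clock, so no synchronization is involved; dividing by two then assumes a symmetric path, which is often false. A sketch, where the echo server address is a placeholder:

    # Sketch: measure RTT with a UDP echo exchange, then approximate the
    # one-way delay as RTT/2. Both timestamps come from the *same* clock,
    # so no sync is needed; the halving assumes a symmetric path, which
    # is frequently untrue. The server address below is a placeholder.
    import socket
    import time

    SERVER = ("198.51.100.7", 7)  # hypothetical UDP echo server (RFC 862 port)

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(2.0)

    t_send = time.monotonic()
    sock.sendto(b"probe", SERVER)
    sock.recvfrom(1500)               # block until the echo comes back
    rtt = time.monotonic() - t_send

    print(f"RTT = {rtt * 1000:.1f} ms, one-way ~ {rtt / 2 * 1000:.1f} ms (if symmetric)")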
No, you cannot. Measuring a one-way delay requires synchronized clocks (and NTP is typically not good enough for this task; independent synchronization to reliable clocks is necessary).
Read RFC 4656 for the gory details. There are two available implementations: OWAMP in C and Jowamp in Java.
Refer to uTP in BitTorrent: it calculates the queuing delay (qdelay) with no need to synchronize the two sides. It measures delay variation rather than absolute one-way delay, though, so it may not be what you want.
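For the record, the uTP/LEDBAT trick works like this: the sender stamps each packet with its own clock, the receiver subtracts its own clock on arrival, and the unknown clock offset buried in that difference cancels when you compare against the minimum difference seen so far. What survives is the change in one-way delay, i.e. the queuing delay, never the absolute delay. A receiver-side sketch; the class name and the usage line are illustrative:

    # Sketch of the uTP/LEDBAT one-way-delay trick, receiver side.
    # remote_ts_us is the sender's timestamp carried in the packet header,
    # taken on the sender's clock. (local - remote) = true OWD + clock
    # offset; the offset is constant, so it cancels when we subtract the
    # minimum difference observed, leaving only the queuing-delay change.
    import time

    class QdelayTracker:
        def __init__(self):
            self.base_diff = None  # smallest (local - remote) seen so far

        def on_packet(self, remote_ts_us):
            local_ts_us = int(time.monotonic() * 1_000_000)
            diff = local_ts_us - remote_ts_us   # OWD + unknown clock offset
            if self.base_diff is None or diff < self.base_diff:
                self.base_diff = diff           # assume this packet saw an empty queue
            return diff - self.base_diff        # queuing delay, microseconds

    tracker = QdelayTracker()
    # Hypothetical usage, fed from each arriving packet's header:
    # qdelay_us = tracker.on_packet(remote_ts_us=header.timestamp_us)

Real implementations also age the base delay out over a sliding window to cope with clock drift.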
I use iperf to do network testing like that. You might get some insight by looking at how they do it.