I just read this in my CS book:
At the source computer, the message or the file/document to be sent to another computer is firstly divided into very small parts called Packets.
Each packet is given a serial number, e.g., 1, 2, 3...
All these packets are then sent to the address of the destination computer.
The destination computer receives the packets in a random order (it may even receive packet 10 before packet 1 arrives). If a packet is garbled or lost, it is requested again.
If this is the case (especially the 4th point), then how can I play a song while it's being downloaded? According to the 4th statement, if packets come in a random order, then the song/movie shouldn't start before it's completely downloaded.
I'll explain what happens with video, but for audio streaming it's almost the same; the audio just has a lower bit rate.
The client uses a buffer to mitigate the effect of end-to-end delay.
The client tries to download the video's packets faster than it processes them, and it saves them in the buffer. This operation is called prefetching.
Prefetching allows the client to process and show the video even if some packets arrive later than others.
At the start of the video you have to wait for some packets to arrive in your buffer. When the client's buffer has enough of them, it lets you watch the video. For example, on YouTube you see a little circle until your buffer is full enough.
For example, you start a 35 MB video on YouTube; the client allocates a 500 Kbit buffer and waits until 2 Kbit have arrived. That means you have to wait until the client has downloaded 2 Kbit of video.
If your connection goes down, the client keeps using the packets stored in the buffer until it is empty. At that point you have to wait for the client to download another 2 Kbit of packets.
If your connection is too fast and the buffer becomes full, the client stops requesting packets until the buffer has some space again.
Notice that when you pause a streaming video, your client keeps downloading.
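A toy sketch of the idea (the class name, threshold, and packet sizes below are all invented for illustration): the client holds incoming packets in a play-out buffer keyed by sequence number, starts playback only once enough bytes have been prefetched, and then plays the packets strictly in order even though they arrived out of order.

    # Toy model of a streaming client's play-out buffer (illustrative only).
    import heapq

    class PlayoutBuffer:
        def __init__(self, start_threshold_bytes):
            self.start_threshold = start_threshold_bytes  # how much to prefetch before playing
            self.buffered_bytes = 0
            self.heap = []            # (sequence_number, payload) pairs, ordered by sequence number
            self.next_to_play = 1     # packets are numbered 1, 2, 3, ...

        def add_packet(self, seq, payload):
            # Packets may arrive in any order; the heap reorders them by sequence number.
            heapq.heappush(self.heap, (seq, payload))
            self.buffered_bytes += len(payload)

        def ready_to_play(self):
            return self.buffered_bytes >= self.start_threshold

        def pop_next(self):
            # Hand out the next packet only if it is the one we expect; otherwise stall
            # (in a real player this is where you would see re-buffering).
            if self.heap and self.heap[0][0] == self.next_to_play:
                seq, payload = heapq.heappop(self.heap)
                self.buffered_bytes -= len(payload)
                self.next_to_play += 1
                return payload
            return None

    # Packets arrive out of order, yet playback proceeds in order once enough is buffered.
    buf = PlayoutBuffer(start_threshold_bytes=3000)
    for seq in (2, 1, 4, 3):                     # arrival order, not sequence order
        buf.add_packet(seq, b"x" * 1000)

    if buf.ready_to_play():                      # the "little circle" ends here
        while (chunk := buf.pop_next()) is not None:
            print("playing packet", buf.next_to_play - 1, "-", len(chunk), "bytes")

A real client of course also keeps fetching while playing and pauses the fetch when the buffer is full, exactly as described above.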
I have been learning the nuts and bolts of BLE lately, because I intend to do some development work using a BLE stack. I have learned a lot from the online documentation and the spec, but there is one aspect that I cannot seem to find an answer to.
BLE uses frequency hopping for communication. Once two devices are connected (one master and one slave), it looks like all communication is then initiated via the master and the slave responds to each packet. My question involves loss of packets in the air. There are two major cases I am concerned with:
Master sends a packet that is received by the slave and the slave sends a packet back to the master. The master doesn't receive the packet or if it does, it is corrupt.
Master sends a packet that is not received by the slave.
Case 1 to me is a "don't care" (I think). Basically the master doesn't get a reply, but at the very least the slave got the packet and can "sync" to it. The master does whatever and tries transmitting the packet again at the next connection event.
Case 2 is the harder case. The slave doesn't receive the packet and therefore cannot "sync" its communication to the current frequency channel.
How exactly do devices synchronize the channel hopping sequence with each other when packets are lost in the air (specifically case 2)? Yes, there is a channel map, so the slave technically knows what frequency to jump to for the next connection event. However, the only way I can see all of this happening is via a "self-timed" mechanism based on the connection parameters. Is this good enough? I mean, given clock drift, there will be slight differences in the amount of time the master and slave are transmitting and receiving on the same channel... and eventually they will be off by 1 channel, 2 channels, etc. Is this not really an issue, because for that to happen a lot of time needs to pass, based on the 500 ppm clock spec? I understand there is a supervision timer that would declare the connection dead after no valid data is transferred for some time. However, I still wonder about the "hopping drift", which brings me to the next point.
How much "self-timing" is employed / mandated within the protocol? Do slave devices use a valid start of packet from the master every connection interval to re-synchronize the channel hopping? For example, if (connection interval + some window) elapses, hop to the next channel; or, if a packet is received, re-sync and restart the timeout timer. This would be a hop timer separate from the supervision timer.
I can't really find this information in the Core 5.2 spec. It's pretty dense at over 3,000 pages... If somebody could point me to the relevant sections in the spec or somewhere else, or even answer the questions, that would be great.
The slave knows the channel map. If one packet is not received from the master, it will listen again after one connection interval, on the next channel. If that one is also not received, it adds one more connection interval and moves on to the next channel again.
The slave also stores a timestamp (or event counter) for when the last packet from the master was detected, regardless of whether the CRC was correct. This is called the anchor point. It is not the same time point that is used for the supervision timeout.
The amount of time between the anchor point and the next expected packet is multiplied by the combined master and slave clock accuracy (for example 500 ppm), plus 16 microseconds, to get a receive window. The slave then listens for this amount of time before and after the expected packet arrival time.
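A rough back-of-the-envelope version of that calculation (the ppm values, connection interval, and number of missed events below are example numbers, not anything mandated by the spec):

    # Rough sketch of BLE receive-window widening; example values only, see the
    # Link Layer timing requirements in the core spec for the authoritative rules.

    def window_widening_us(master_ppm, slave_ppm, time_since_anchor_us):
        # Combined clock inaccuracy (parts per million) times the time since the
        # anchor point, plus a 16 us allowance, gives how early/late the packet may arrive.
        combined_ppm = master_ppm + slave_ppm
        return (combined_ppm / 1_000_000) * time_since_anchor_us + 16

    # Suppose the slave missed a few events and its anchor point is 4 connection intervals old.
    conn_interval_us = 50_000                               # 50 ms connection interval
    elapsed_us = 4 * conn_interval_us
    widening = window_widening_us(250, 250, elapsed_us)     # 250 ppm each side -> 500 ppm combined
    print(f"listen +/- {widening:.0f} us around the expected packet time")   # ~116 us

Every packet detected from the master refreshes the anchor point, so the window only has to keep growing while packets are actually being missed.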
Consider the example of a download stream that can be throttled (e.g., a torrent client, Dropbox sync, etc.). How does a program apply backpressure to the network?
My thoughts are that, from a software perspective, you can choose to read from a socket at a certain speed. But how does the socket you're reading from know that you only want your device to receive data at that speed? Does the actual NIC apply backpressure over the network somehow? If so, by what mechanism?
Backpressure is built into TCP itself. If a slow consumer does not read bytes from the connection in a timely manner, the producer cannot put in more bytes than there is buffer memory on the sending and receiving sides.
In contrast, UDP messages are not flow-controlled in this way and can simply be dropped if there is no free memory on the receiver side to store them.
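A small way to see this push-back on a single machine (a sketch; the port, buffer sizes, and write size are made up): a non-blocking sender that keeps writing while the other side never reads will stop making progress once the send and receive buffers fill up and the peer's advertised window closes.

    # Sketch: TCP flow control pushing back on a fast sender over loopback.
    import socket

    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # Keep the kernel buffers small so the effect shows up after a few KB, not many MB.
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 16 * 1024)
    listener.bind(("127.0.0.1", 0))              # any free port
    listener.listen(1)

    sender = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sender.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 16 * 1024)
    sender.connect(listener.getsockname())
    receiver, _ = listener.accept()              # accepted, but we never call recv() on it

    sender.setblocking(False)                    # so we can observe the push-back instead of hanging
    total = 0
    try:
        while True:
            total += sender.send(b"x" * 4096)    # the slow consumer never drains its side
    except BlockingIOError:
        # Both kernel buffers are full and the receiver's window is closed:
        # this is the backpressure described above.
        print(f"send() pushed back after roughly {total} bytes were queued in the kernel")

With a blocking socket the sender would simply sleep inside send() until the consumer reads again, which is how a throttled torrent or sync client ends up slowing the remote sender down.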
I am told to increase TCP buffer size in order to process messages faster.
My question is: no matter what buffer I use for the TCP message (ByteBuffer, DirectByteBuffer, etc.), when the CPU receives an interrupt from, say, the NIC to handle the network request and read the socket data, does the OS maintain a buffer in memory outside the address space of the requesting process (i.e., the process that is listening on that socket),
or
is the network data, however the CPU receives it, always written into a buffer within the process's address space only, with no buffer (including the 'Recv-Q' and 'Send-Q' reported by the netstat command) maintained outside the address space for this communication?
The process by which the Linux network stack receives data is a bit complicated. I wrote a comprehensive guide to the Linux network stack that explains everything you need to know starting from the device driver up to a userland program's socket receive queue.
There are many places buffers are maintained in the kernel:
The DMA ring where packets are written by the NIC after they've arrived.
References to the packets on the DMA ring are used to process the packets.
Eventually, the packet data is added to the process's receive queue, if the receive queue is not full already.
Reads from the socket will pull packets from the process's receive queue.
If packet sniffing is occurring, packet data is duplicated and sent to any filters added by the packet sniffing code.
The full process of how data is moved, accounted for, and dropped (when required) is described in the blog post linked above.
Now, if you want to process messages faster, I assume you mean you want to reduce your packet processing latency, correct? If so, you should consider using SO_BUSY_POLL, which can help reduce packet processing latency.
Increasing the receive buffer just increases the number of packets that can be queued for a userland socket. To increase packet processing power, you need to carefully monitor and tune each component of the network stack. You may need to use something like RPS to increase the number of CPUs processing packets.
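For instance, turning RPS on for one receive queue is just a matter of writing a CPU bitmask into sysfs (a sketch; the interface name "eth0" and the mask are example values, it needs root, and the details are in the kernel's scaling documentation):

    # Sketch: allow CPUs 0-3 to process packets arriving on eth0's first RX queue.
    rps_cpus_path = "/sys/class/net/eth0/queues/rx-0/rps_cpus"
    with open(rps_cpus_path, "w") as f:
        f.write("f")      # hexadecimal CPU mask: bits 0-3 set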
You will also want to monitor each component of your network stack to ensure that the available buffers and CPU processing power are sufficient to handle your packet workload.
See:
http://linux.die.net/man/3/setsockopt
The options are SO_SNDBUF and SO_RCVBUF. If you directly use the C API, the call is setsockopt itself. If you use some kind of framework, look up how to set socket options. This is indeed a kernel-side buffer, not one held by your process. It determines how many bytes the kernel can hold ready for you to fetch with a call to read/receive. It also affects the flow control mechanism of TCP.
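For example, from Python the same two options look like this (a sketch; the sizes are arbitrary). On Linux the kernel doubles the value you request to allow for bookkeeping overhead and caps it at net.core.rmem_max / net.core.wmem_max, which you can see by reading the option back:

    # Sketch: asking the kernel for larger socket buffers (sizes here are arbitrary).
    import socket

    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 1 * 1024 * 1024)   # receive buffer
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 1 * 1024 * 1024)   # send buffer

    # What the kernel actually granted (doubled and capped on Linux).
    print("rcvbuf:", sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))
    print("sndbuf:", sock.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF))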
You are being told to increase the socket send or receive buffer sizes. These are associated with the socket, in the TCP part of the kernel. See setsockopt() and SO_RCVBUF and SO_SNDBUF.
I'm using the lwIP stack on my embedded platform. I have connected the board to my PC via Ethernet. My application running on the board dumps the image data out over Ethernet. The PC application waits for a header; after the header it decodes the data and displays the image.
This is for debug purposes only. My images are 4 MB each and I receive 20 frames per second, so that is 80 MB of data per second.
Is it advisable to use TCP or UDP?
I tried using TCP, but my send buffers become full and it waits around 200 ms to receive an acknowledgement. In the meantime I lose 5-6 images coming from the sensor. Can this be fixed if I use UDP?
Thanks,
Sathya
I suggest you apply some kind of compression to your images before sending them over the network.
That said, if you use UDP you may get a better transfer rate, but you do need receiving code that can handle lost packets (discard the image, request a resend, or pad the affected area).
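As a sketch of the UDP direction (the address, port, header layout, and chunk size below are invented for illustration): give every datagram a small header with a frame id, chunk index, and chunk count, so the receiver can detect gaps and either discard or pad the affected frame. Note also that 4 MB at 20 frames per second is roughly 640 Mbit/s before any overhead, so compression helps whichever transport you pick.

    # Sketch: sending one frame over UDP in MTU-sized chunks with a tiny header
    # so the receiver can spot missing pieces (all field choices are made up).
    import socket
    import struct

    CHUNK = 1400                          # keep each datagram under a typical Ethernet MTU
    HEADER = struct.Struct("!IHH")        # frame_id, chunk_index, chunk_count

    def send_frame(sock, addr, frame_id, data):
        chunks = [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)]
        for index, chunk in enumerate(chunks):
            sock.sendto(HEADER.pack(frame_id, index, len(chunks)) + chunk, addr)

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    frame = b"\x00" * (4 * 1024 * 1024)   # one 4 MB image, as in the question
    send_frame(sock, ("192.168.1.10", 5005), 1, frame)

On the receiving side, any frame whose chunk set is still incomplete after a short timeout is simply dropped (or the missing region padded), which is exactly the trade-off UDP forces on you.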
When downloading a file from a web site, speeds of many megabytes per second can be achieved. If TCP needs to break up and individually send anything larger than 1500 bytes, then how are these speeds possible? Doesn't the client have to wait for every 1500-byte fragment, which should take a while?
Thanks
"Doesn't the client have to wait for every 1500-byte fragment, which should take a while?"
No. That's the magic of TCP: you don't have to ACK every segment; you can ACK once in a while. The server can push lots of segments before the client positively must acknowledge at least some of them.
TCP uses a concept called "windows". A sender can push data into a window, causing it to shrink. The receiver acknowledges data, causing the window to expand. If the receiver doesn't acknowledge data, the transfer grinds to a halt.
In modern TCP, knowing when to acknowledge data is the gist of the protocol. Doing it too often or not often enough has an enormous impact on performance.
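To put rough numbers on it (example figures, not measurements): the achievable throughput is bounded by how much unacknowledged data the window lets you keep in flight, divided by the round-trip time, which is why acknowledging one 1500-byte segment at a time would be hopeless.

    # Back-of-the-envelope: throughput <= window size / round-trip time.
    rtt_s = 0.050                          # 50 ms round trip, an example value

    scenarios = [
        ("one 1500-byte segment per RTT", 1500),
        ("classic 64 KB window", 64 * 1024),
        ("4 MB scaled window", 4 * 1024 * 1024),
    ]
    for label, window_bytes in scenarios:
        mbit_per_s = window_bytes * 8 / rtt_s / 1_000_000
        print(f"{label}: about {mbit_per_s:.1f} Mbit/s")
    # -> roughly 0.2, 10.5 and 671 Mbit/s respectively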