Zigbee device not responding

I am sending a file from a ZigBee coordinator to an end device.
The file is sent in chunks of 64 bytes of data plus a header, which forms a packet of 71 bytes.
Since the file is quite large, about 2000 packets have to be transmitted.
The problem is that I can successfully transmit a few of the packets, and then the device stops responding. The number of packets that go through successfully is not fixed; it varies from around 20 to around 90, but never more than that.
Can anyone tell me what exactly is causing this to happen?

Try reducing the payload size from 64 bytes to 32 bytes.
I was facing the same problem and this solved it.
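A minimal sketch of the chunked send loop with the smaller payload (the 7-byte header, build_header() and zb_send() are hypothetical placeholders; substitute your stack's actual frame format and transmit call):

```cpp
#include <algorithm>
#include <cstring>

// hypothetical stack hooks -- replace with your actual API
void build_header(unsigned char *pkt, size_t seq, size_t len);
void zb_send(const unsigned char *pkt, size_t len);

const size_t PAYLOAD = 32;                     // reduced from 64
unsigned char pkt[7 + PAYLOAD];                // 7-byte header + payload

void send_file(const unsigned char *file, size_t fileSize)
{
    for (size_t off = 0; off < fileSize; off += PAYLOAD) {
        size_t n = std::min(PAYLOAD, fileSize - off);
        build_header(pkt, off / PAYLOAD, n);   // hypothetical: sequence no. + length
        std::memcpy(pkt + 7, file + off, n);
        zb_send(pkt, 7 + n);                   // hypothetical TX call
        // wait for the end device's response before sending the next packet
    }
}
```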

Related

TCP avoid delayed acknowledge

I have an application on a microcontroller with a small TCP/IP stack. This application waits for a connection on a specific TCP port. As soon as the TCP connection is established, the microcontroller needs to send 8 KB of data. Because the TX buffer of the TCP socket is only 1 KB, I need to send the data in 8 segments. The TX buffer size can't be changed!
The problem is that after every segment, a delay of 200 ms occurs. I know that this is caused by the delayed ACK (which is 200 ms on Windows). But in my use case this means the whole transfer includes 1400 ms of pure delay, which is just wasted time.
Is there any way I can force the PC to acknowledge the data instantly (maybe a bit in the TCP header)? I can't change anything on the PC side.
Or should I instead send two 512-byte segments in place of each 1 KB segment? Would that work around the issue? I have read that the PC will acknowledge the data immediately if a second segment arrives.
What is the right way to solve such a use case?
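For what it's worth, here is a sketch of the workaround the question itself proposes (assuming a BSD-style socket API, which a small embedded stack may or may not expose): split each 1 KB block into two segments, so the receiver's "ACK every second segment" rule fires immediately instead of the 200 ms delayed-ACK timer. Nagle's algorithm would otherwise hold back the second small segment until the first one is ACKed, so it has to be disabled:

```cpp
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <cstddef>

// Disable Nagle so the two half-segments go out back to back;
// otherwise the second one would wait for the ACK of the first.
void setup(int sock)
{
    int one = 1;
    setsockopt(sock, IPPROTO_TCP, TCP_NODELAY, &one, sizeof(one));
}

// Send one 1 KB block as two ~512-byte segments so the receiver
// ACKs right away (second-segment rule) instead of after 200 ms.
void send_block(int sock, const char *buf, size_t len)
{
    size_t half = len / 2;
    send(sock, buf, half, 0);
    send(sock, buf + half, len - half, 0);
}
```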

What is the actual bit transfer rate over a network transferring 100 kB/s?

When transferring, say, 1 GB of data over the internet, the data is split into packets, each packet containing a small piece of the data, and each of these packets is part of a frame.
E.g. Windows reports that you are transferring the file at 100 kB/s over a TCP connection, but this appears to be the amount of data from the file being transferred per second, and does not seem to include the IP or TCP headers, or the Ethernet frame.
What is the actual amount of traffic on the network needed to transfer at this speed? Or is that data already included in the reported transfer speed, but just small enough that it makes no significant difference?
Also, IP supports up to 1500 bytes per packet (I think?), but what is the common size of data packets when loading, say, an HD image on reddit?
Sorry for the rather basic questions I probably should have figured out myself by now...
It depends on where you look at the transfer rate:
Task Manager will report all the transferred bytes (i.e. the sum of all the packets including their headers).
A file transfer program will report the transmitted payload.
Task Manager
If you look at Task Manager / Network, you can see the transmitted bytes together with the number of transmitted packets (unicast or non-unicast).
That data comes from the network driver (or at least something close to it), so it makes sense to report the total amount of data here (otherwise each packet would need to be inspected to calculate the payload).
There is also a graph showing the transfer rate. Those numbers could easily be compared with the reported numbers in file transfer software.
File transfer program
A file transfer program, on the other hand, does not know the details about the packets being created in the lower layers (those could be any size). So the only option here is to report the amount of transmitted payload data / the portion of the file, which also makes more sense to the user.
Network packets
On normal networks (there could also be jumbo frames), a TCP packet (a full Ethernet frame) is around 1500 bytes when fully loaded (on my system (IPv4) the packets are 1514 bytes, with a total header size of 54 bytes: 14 for the Ethernet header, 20 for the IP header and 20 for the TCP header). Those could be split into smaller packets along the way in the network, but in most cases they won't be.
Transfer rate
When transferring a file (or another large data stream), on average 2 full packets (1514 bytes) are sent for every 1 small packet (54 bytes) received (the [ACK] packet). In this optimal case we have 2 × 1460 bytes of payload, and 2 × 54 bytes of overhead on the sending side plus 54 bytes on the receiving side. When comparing to the maximum transfer rate of the internet connection, we also have to account for some latency.
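To put numbers on that optimal case: 2 × 1460 = 2920 payload bytes travel in 2 × 1514 + 54 = 3082 bytes on the wire, so roughly 94.7% of the traffic is payload and about 5% is protocol overhead.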
Not all transmissions are optimal:
There could be packets that never arrive, or arrive with a bad checksum, so a retransmit is needed.
In some cases data could be sent in smaller parts, causing a higher overhead-to-payload ratio (though with small chunks Nagle's algorithm can take care of that).
Certain software could be reading the file contents into small buffers (say 4096 bytes). Those could then be split into 2 × 1460 and 1 × 1176 bytes, introducing some extra overhead.
Conclusion
It's hard to tell or calculate the exact ratio transferred_bytes/payload. It depends on the quality of the internet connection (lost packets, retransmits), the software or API calls used to transfer the data, and even the underlying network (small frames vs jumbo frames for example).
A typical full-size TCP/IPv4 packet on the Internet is 1500 bytes (the maximum transmission unit, MTU), of which a minimum of 20 bytes are the TCP header and a minimum of 20 bytes are the IPv4 header. This MTU was chosen to be compatible with Ethernet. Furthermore, application headers (e.g., HTTP for the web, SIP/RTP/RTCP for a voice call, etc.) are included in this packet. The minimum MTU is 576 bytes for IPv4 and 1280 bytes for IPv6. You can see the MTU on Linux with the ifconfig command.
The best way to figure out these values is by using a pcap tool / network analyzer such as Wireshark. Also refer to wiki pages or a good networking book for the headers and fields of these protocols.
I'm pretty sure that the reported transfer rate doesn't include all the headers and overhead of the different layers in the protocol stack, since the reported throughput usually comes from some user-space application which only sees the actual data from the network stream object. It would need to do additional work to find out about all the headers, frames and other overhead that occurred in the different layers and affected the actual physical transmission.

TCP file Transfer window size

I'm trying to reverse engineer an application, and I need help understanding how the TCP window size works. My MTU is 1460.
My application transfers a file using TCP from point A to B. I know the following:
The file is split into segments of size 8 KB.
Each segment is compressed.
Then each segment is sent to point B over TCP. These segments can be around 148 bytes for a text file, and around 6000 bytes for a PDF.
For a text file, am I supposed to see the 148-byte segments attached to one another to form one large TCP stream, which is then split up according to the window size?
Any help is appreciated.
The receiver application should see the data the same way the sender application sent it. TCP uses byte streaming, so it collects all the bytes in order and delivers them to the application. The MTU is largely internal semantics to TCP and does not take application-layer packet boundaries into account. If TCP has enough data to send in its send buffer (each TCP socket has its own send buffer, btw), then it will package its next segment up to the MTU size and send it; to be more precise, it deducts the TCP and IP headers from the MTU size.
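As an illustration of that byte-streaming behaviour (a sketch assuming POSIX sockets; the real application's framing is unknown since you're reverse engineering it): the receiver cannot rely on one recv() call returning one sender-side segment, so it has to loop until it has collected however many bytes the application-level format says belong together:

```cpp
#include <sys/types.h>
#include <sys/socket.h>
#include <cstddef>

// TCP delivers a byte stream, so one application "segment" may
// arrive split across several recv() calls (or glued onto the
// next one). Loop until exactly `len` bytes have been collected.
bool recv_all(int sock, char *buf, size_t len)
{
    size_t got = 0;
    while (got < len) {
        ssize_t n = recv(sock, buf + got, len - got, 0);
        if (n <= 0)
            return false;          // error or peer closed the connection
        got += static_cast<size_t>(n);
    }
    return true;
}
```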

How can a file transfer go so quickly if the size of a packet cannot exceed 1500 bytes?

When downloading a file from a web site, speeds of many megabytes per second can be achieved. If TCP needs to break the data up and individually send packets no larger than 1500 bytes, then how are these speeds possible? Doesn't the client have to wait for every 1500-byte fragment, which should take a while?
Thanks
Doesn't the client have to wait for every 1500-byte fragment, which should take a while?
No. That's the magic of TCP: you don't have to ACK every segment, you can ACK once in a while. The server can push lots of segments before the client positively must acknowledge at least some of them.
TCP uses a concept called "windows". A sender can push data into a window, causing it to shrink. The receiver acknowledges data, causing the window to expand. If the receiver doesn't acknowledge data, the transfer grinds to a halt.
In modern TCP, knowing when to acknowledge data is the gist of the protocol. Doing it too often or not often enough has an enormous impact on performance.
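A small illustration of the window in practice (a sketch, assuming BSD-style sockets): the window a receiver can advertise is bounded by its socket receive buffer, so a client that wants the server to keep more data in flight can enlarge that buffer:

```cpp
#include <sys/socket.h>

// The advertised TCP receive window is limited by the socket's
// receive buffer; a larger buffer lets the sender push more
// unacknowledged segments before it must stop and wait.
void widen_window(int sock)
{
    int rcvbuf = 256 * 1024;       // illustrative size
    setsockopt(sock, SOL_SOCKET, SO_RCVBUF, &rcvbuf, sizeof(rcvbuf));
}
```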

Send large data on UDP socket

I need to send and receive very large data using UDP. Unfortunately UDP provides only 8192 bytes per datagram, so the data needs to be divided into smaller pieces.
I'm using Qt and QUdpSocket. There is a QByteArray with a length of 921600 bytes that I want to send to the client, 8192 bytes at a time.
What is the fastest way to split a QByteArray?
You should never need to explicitly split the data; just step through it 8 KB at a time. Typically the functions that write data to a socket (including QUdpSocket::writeDatagram, it seems) accept a pointer to the first byte and a byte count, so you can just provide a pointer into the array.
Do note that sending 8 KB datagrams is quite aggressive; they will very likely be fragmented at the IP layer, which can affect delivery speed and reliability negatively.
Research the concept of "path MTU" and try to use that for the sends; it might be faster, although it will result in more datagrams.
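A sketch of that pointer-stepping loop (the address, port and the 8 KB chunk size are illustrative):

```cpp
#include <QUdpSocket>
#include <QByteArray>
#include <QHostAddress>

// Step through the array without copying: pass writeDatagram()
// a pointer into the buffer plus the byte count of this chunk.
void sendInChunks(QUdpSocket &sock, const QByteArray &data,
                  const QHostAddress &addr, quint16 port)
{
    const qint64 chunk = 8192;
    const qint64 total = data.size();
    for (qint64 off = 0; off < total; off += chunk) {
        qint64 len = qMin(chunk, total - off);
        sock.writeDatagram(data.constData() + off, len, addr, port);
    }
}
```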
Actually, the length field in a UDP header is 16 bits, so a UDP datagram can be up to ~65k bytes (minus the size of the headers).
However, as unwind pointed out, it will most likely be fragmented along the way to fit the smallest MTU on the path to the destination.
8192 bytes is the default receive buffer size for Windows operating systems. The MTU is likely 1500 bytes if you are using Ethernet. Any UDP datagram larger than that will be fragmented. If your datagram encounters a device along the route with an even smaller MTU, it will be fragmented yet again.
You can use the QByteArray::mid(int start, int len) method (see the documentation) to get a QByteArray of length len starting at start.
Just make len your datagram size and start at 0*len, 1*len, 2*len, ... until everything is sent.
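The same loop written with mid(), reusing the names from the sketch above; it's simpler but copies each slice into a temporary QByteArray:

```cpp
// mid() clips the last slice automatically when fewer than
// 8192 bytes remain, at the cost of one copy per datagram.
for (int off = 0; off < data.size(); off += 8192) {
    sock.writeDatagram(data.mid(off, 8192), addr, port);
}
```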
