How does synchronous data transfer start or end the transmission of data?

I understand that with asynchronous data transfer there is a start bit and a stop bit, and that synchronous transfer uses timing signals from an electronic clock. But how does the computer know when the transmission will start and stop, and whether there are errors in the blocks of data?
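For illustration only: a synchronous receiver typically finds the start of a block by hunting for agreed-upon sync characters, learns where the block ends from a length field or a closing flag, and detects errors with a checksum or CRC computed over the block. Below is a rough C sketch of that idea; the frame layout is entirely made up, and real protocols (HDLC, Bisync, etc.) differ in detail.

```c
#include <stddef.h>
#include <stdint.h>

#define SYN 0x16   /* ASCII SYN, a classic synchronization character */

/* Scan a receive buffer for one complete frame laid out as:
 *   SYN SYN <len> <len payload bytes> <checksum>
 * Returns 1 and sets *start/*len when a good frame is found, 0 if no complete
 * frame is present yet, and -1 if a frame was found but its checksum failed. */
static int find_frame(const uint8_t *buf, size_t n, size_t *start, size_t *len)
{
    for (size_t i = 0; i + 3 < n; i++) {
        if (buf[i] == SYN && buf[i + 1] == SYN) {   /* start of a frame       */
            size_t body = buf[i + 2];               /* length field           */
            if (i + 3 + body + 1 > n)
                return 0;                           /* frame not complete yet */
            uint8_t sum = 0;                        /* simple block checksum  */
            for (size_t j = 0; j < body; j++)
                sum = (uint8_t)(sum + buf[i + 3 + j]);
            if (sum != buf[i + 3 + body])
                return -1;                          /* error in the block     */
            *start = i + 3;
            *len   = body;
            return 1;
        }
    }
    return 0;
}
```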

Related

Need advice on serial port software access implementation on a single-CPU-core device

I am designing software that controls several serial ports; the operating system is OpenWrt. The device the application runs on is a single-core ARM9 @ 450 MHz. The protocol on the serial ports is Modbus.
The problem is with the Modbus slave implementation. I designed it in a real-time manner, looping on reads from the serial port (the port is open in non-blocking mode with 0 characters to wait for and no timeouts). The sequence/data-stream timeout is about 4 milliseconds @ 9600/8/N/1 (3.5 characters, as advised). The timeout is checked only when the application sees nothing in the buffer, so if the application is slower than the incoming stream of characters, the timeout mechanism does not kick in until all characters have been removed from the buffer and the bus is quiet.
But I see that the CPU switches between threads and this thread is skipped for about 40-60 milliseconds, which is far too long for measuring these timeouts. While I guess the serial port buffer still receives the data (how large is that buffer?), I am unable to assess how much time passed between characters, so I may treat the next message as a continuation of the previous one and miss a Modbus request targeted at my device.
Therefore, I guess, something must be redesigned (for the slave; the master is a different story). The first idea that comes to mind is to forget about timeouts and just parse the incoming data after synchronizing with the whole stream (by finding an initial timeout). However, there are several problems: I must know everything about all types of Modbus messages to parse them correctly and find where each one ends and the next one starts, and I do not see a way to differentiate a Modbus request from a Modbus response from a device. If the developers of the Modbus protocol had put a special bit in the function-code field identifying whether a message is a request or a response... but that is not the case, and I see no reliable way to tell whether the message I am receiving is a request or a response without collecting further bytes and checking the CRC16 at the would-be byte counts; that costs time while bytes are still arriving, and I may miss the window for responding to a request targeted at me.
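For reference, the CRC check itself is cheap. Below is a minimal sketch of the standard Modbus RTU CRC-16 (polynomial 0xA001, initial value 0xFFFF) together with a boundary test of the kind described above; the helper names are illustrative, not from any existing library.

```c
#include <stddef.h>
#include <stdint.h>

/* Standard Modbus RTU CRC-16: reflected polynomial 0xA001, init 0xFFFF. */
static uint16_t modbus_crc16(const uint8_t *data, size_t len)
{
    uint16_t crc = 0xFFFF;
    for (size_t i = 0; i < len; i++) {
        crc ^= data[i];
        for (int bit = 0; bit < 8; bit++)
            crc = (crc & 1) ? (crc >> 1) ^ 0xA001 : crc >> 1;
    }
    return crc;
}

/* Illustrative helper: returns 1 if buf[0..len-1] looks like a complete frame,
 * i.e. its last two bytes (low byte first, as Modbus RTU appends them) match
 * the CRC of everything before them. */
static int frame_crc_ok(const uint8_t *buf, size_t len)
{
    if (len < 4)                      /* address + function + 2 CRC bytes */
        return 0;
    uint16_t crc = modbus_crc16(buf, len - 2);
    return buf[len - 2] == (crc & 0xFF) && buf[len - 1] == (crc >> 8);
}
```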
Another idea would be to use the blocking method with the VTIME timeout setting, but this value can only be set in tenths of a second (so the minimum is 100 ms), and that is too much given another +50 ms of possible CPU switching between threads; I think something like a 10 ms timeout is needed here. It is also a good question whether VTIME is hardware time or software/driver time that is likewise subject to CPU thread interruptions, and what the size of the FIFO in the chip/driver is, i.e. how many bytes it can accumulate.
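For context, this is roughly what the VMIN/VTIME blocking mode looks like; it assumes fd is an already opened and otherwise configured serial descriptor, and it shows why the granularity floor is 100 ms (VTIME counts in tenths of a second).

```c
#include <termios.h>

/* Minimal sketch of the "event-based" blocking read setup: block until the
 * first byte arrives, then return from read() once either the buffer fills
 * or 100 ms pass with no further byte. fd is assumed to be an already opened
 * serial port descriptor configured elsewhere (raw mode, baud rate, etc.). */
static int set_intercharacter_timeout(int fd)
{
    struct termios tio;
    if (tcgetattr(fd, &tio) != 0)
        return -1;
    tio.c_cc[VMIN]  = 255;  /* keep collecting bytes up to this many ...     */
    tio.c_cc[VTIME] = 1;    /* ... or until 1 decisecond (100 ms) of silence */
    return tcsetattr(fd, TCSANOW, &tio);
}
```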
Any help/idea/advice will be greatly appreciated.
Update per @sawdust's comments:
non-blocking use turned out not to be a good approach, because I am not polling the hardware but the software, so the whole design is again subject to CPU switching and execution scheduling;
"using the built-in event-based blocking mode" would work only if I could configure the UART timeout (VTIME) in 100 µs steps, and only if I could be sure I am not interrupted during execution; only then would the timing be measured properly and the read restarted in time to assess the next character's timing (a poll()-based alternative is sketched below).

Is buffering time at the transmitting end included in RTT?

Good day!
I know this is a simple question, but I can't find its answer; whenever I look up RTT, it is usually loosely defined. So, is buffering time at the transmitting node included in the RTT reported by ping?
RTT simply means "round-trip time." I'm not sure what "buffering" you're concerned about. The exact points of measurement depend on the exact ping program you're using, and there are many. For BusyBox, the ping implementation can be found here. Reading it shows that the outgoing time is stamped when the outgoing ICMP packet is prepared shortly before sendto() is called, and the incoming time is stamped when the incoming ICMP packet is parsed shortly after recvfrom() is called. (Look for the calls to monotonic_us().) The difference between the two is what's printed. Thus the printed value includes all time spent in the kernel's networking stack, NIC handling and so on. It also, at least for this particular implementation, includes time the ping process may have been waiting for a time slice. For a heavily loaded system with scheduling contention this could be significant. Other implementations may vary.
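To make those measurement points concrete, here is a hedged sketch of the same pattern in plain C, with clock_gettime(CLOCK_MONOTONIC) standing in for BusyBox's monotonic_us(). It is not the BusyBox source; the raw ICMP socket, packet, and destination are assumed to be set up elsewhere.

```c
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/socket.h>
#include <time.h>

static uint64_t now_us(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (uint64_t)ts.tv_sec * 1000000u + (uint64_t)ts.tv_nsec / 1000u;
}

static void ping_once(int icmp_fd, const void *packet, size_t pkt_len,
                      const struct sockaddr *dest, socklen_t dest_len)
{
    char reply[1500];

    uint64_t t_sent = now_us();                      /* stamp before sendto()  */
    sendto(icmp_fd, packet, pkt_len, 0, dest, dest_len);

    recvfrom(icmp_fd, reply, sizeof reply, 0, NULL, NULL);
    uint64_t rtt = now_us() - t_sent;                /* stamp after recvfrom() */

    /* As noted above, this difference includes the kernel networking stack,
     * NIC handling, and any time spent waiting for a time slice. */
    printf("rtt = %llu us\n", (unsigned long long)rtt);
}
```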

Implementation standard of retransmission scheduling in a communication protocol

So I am implementing a communication protocol over UDP, which obviously means that I will be managing all error handling and retransmission myself. My original plan was to have a timestamp as part of the header in my packets and keep all non-acknowledged packets in a retransmission queue. I would then maintain a round-trip delay time and a round-trip timeout (how long to wait for an acknowledgement before retransmitting the packet) by measuring the time at which a packet was acknowledged and comparing it with the timestamp of that packet in the retransmission queue.
Someone then told me that having timestamps in packet headers is normally something used in streaming protocols, since timing is of great importance in those cases (timing is of no importance in my case). However, I did a little research, and it seems that UDT uses timestamps in this way even though UDT is not a streaming protocol. Actually, what I am trying to implement is quite similar to UDT.
I guess I could easily remove the timestamp field from the header and just keep track of the sending times of the packets in a separate queue running in parallel with my retransmission queue.
I'm just curious to hear what you guys think?
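For what it's worth, the usual way to turn those acknowledgement-time measurements into a retransmission timeout is the RFC 6298 estimator (smoothed RTT plus four times the RTT variance). A minimal sketch, with illustrative names and a made-up floor value:

```c
#include <math.h>

/* Times are in milliseconds; the names are illustrative, not from a library. */
struct rto_estimator {
    double srtt;         /* smoothed round-trip time       */
    double rttvar;       /* round-trip time variance       */
    double rto;          /* current retransmission timeout */
    int    have_sample;
};

static void rto_update(struct rto_estimator *e, double sample_ms)
{
    if (!e->have_sample) {
        e->srtt = sample_ms;
        e->rttvar = sample_ms / 2.0;
        e->have_sample = 1;
    } else {
        /* beta = 1/4 and alpha = 1/8, as in RFC 6298 */
        e->rttvar = 0.75 * e->rttvar + 0.25 * fabs(e->srtt - sample_ms);
        e->srtt   = 0.875 * e->srtt + 0.125 * sample_ms;
    }
    e->rto = e->srtt + 4.0 * e->rttvar;
    if (e->rto < 200.0)      /* floor chosen arbitrarily for this sketch */
        e->rto = 200.0;
}
```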

How would I simulate TCP-RTM using/in NS2?

Here is a paper named "TCP-RTM: Using TCP for Real Time Multimedia Applications" by Sam Liang and David Cheriton.
The paper adapts TCP for use in real-time applications.
The two major modifications that I actually want your help with are:
On an application-level read on the TCP connection, if there is no in-sequence data queued to read but one or more out-of-order packets are queued for the connection, the first contiguous range of out-of-order packets is moved from the out-of-order queue to the receive queue, the receive pointer is advanced beyond these packets, and the resulting data is delivered to the application. On reception of an out-of-order packet with a sequence number logically greater than the current receive pointer (rcv_next_ptr) and with a reader waiting on the connection, the packet data is delivered to the waiting receiver, the receive pointer is advanced past this data, and this new receive pointer is returned in the next acknowledgment segment.
In the case that the sender’s send-buffer is full due to large amount of backlogged data, TCP-RTM discards the oldest data segment in the buffer and accepts the new data written by the application. TCP-RTM also advances its send-window past the discarded data segment. This way, the application write calls are never blocked and the timing of the sender application is not broken.
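This is not NS2 code, but a standalone C sketch of that second, sender-side rule: when the send buffer is full, drop the oldest segment and advance the window past it so that application writes never block. All names and sizes are hypothetical, and the rest of the TCP state handling is omitted.

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define SEG_SIZE 1460
#define MAX_SEGS 32

struct rtm_sendbuf {
    uint8_t  data[MAX_SEGS][SEG_SIZE];
    size_t   len[MAX_SEGS];
    uint32_t seq[MAX_SEGS];   /* starting sequence number of each segment   */
    int      head, count;     /* index of the oldest segment, number queued */
    uint32_t snd_una;         /* left edge of the send window (simplified)  */
};

/* Accept a new application write; never block, even when the buffer is full. */
static void rtm_write(struct rtm_sendbuf *b, const uint8_t *p, size_t n,
                      uint32_t seq)
{
    size_t copy = n < SEG_SIZE ? n : SEG_SIZE;

    if (b->count == MAX_SEGS) {
        /* Buffer full: discard the oldest segment and advance the window
         * past it, as described in the paper's sender-side modification. */
        b->snd_una = b->seq[b->head] + (uint32_t)b->len[b->head];
        b->head = (b->head + 1) % MAX_SEGS;
        b->count--;
    }
    int slot = (b->head + b->count) % MAX_SEGS;
    memcpy(b->data[slot], p, copy);
    b->len[slot] = copy;
    b->seq[slot] = seq;
    b->count++;
}
```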
They actually modified the 'TCP Reno with SACK' version of TCP in an old Linux 2.2 kernel, in a real environment.
But I want to simulate this in NS2.
I can work with NS2 (e.g., analyzing traces and making performance graphs), but I have looked through all the related files and can't find where to make the changes.
So, would you please help me do this?

TCP Flow control in AS3?

I am currently working on a Flash socket client for a pre-existing service/standard. The service uses TCP flow control to throttle itself and the Flash socket is reading in everything as fast as it can despite not being able to process it as fast as it's being taken in. This causes the bytesAvailable on the socket to keep increasing and the server never knows that the client has fallen behind.
In short, is there any way to limit the size of bytesAvailable for a Flash Socket object or throttle it in some other way?
Note: rewriting the server isn't a viable option at the current time, as it's a standard and the client's utility drops immensely if server-side changes are required.
After some research I've found that the ActionScript Socket class will start throttling when the CPU is maxed out on the system (likely due to running out of resources/slow response times).
This has actually solved my problem, as I've written the code to strike a balance between how many frames per second the app "wants" and how many bytesAvailable are in the socket. If bytesAvailable is too high, the app processes non-stop and drives the CPU to 100%, ultimately causing the socket to slow down.
I don't think it is possible. There is no low-level API in AS3 that can manipulate bytes at the TCP level, but you can implement throttling at a higher level.
For example: before you put bytes into the Socket's ByteArray, check how much data you have put there over the last couple of seconds. If this value is too high, postpone the operation.
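The AS3 specifics aside, the throttle logic itself is simple. Here is a language-agnostic sketch (written in C) of the "check how much was consumed over the last couple of seconds and postpone if over budget" idea; the window length and byte budget are made-up values.

```c
#include <stdint.h>
#include <time.h>

struct throttle {
    uint64_t window_start;      /* monotonic time, microseconds */
    uint64_t bytes_in_window;
};

static uint64_t mono_us(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (uint64_t)ts.tv_sec * 1000000u + (uint64_t)ts.tv_nsec / 1000u;
}

/* Returns 1 if it is OK to consume n more bytes now, 0 to postpone. */
static int throttle_allow(struct throttle *t, uint64_t n)
{
    const uint64_t WINDOW_US = 2u * 1000000u;   /* "the last couple of seconds" */
    const uint64_t BUDGET    = 512u * 1024u;    /* arbitrary bytes-per-window   */

    uint64_t now = mono_us();
    if (now - t->window_start >= WINDOW_US) {
        t->window_start = now;                  /* start a fresh window         */
        t->bytes_in_window = 0;
    }
    if (t->bytes_in_window + n > BUDGET)
        return 0;                               /* over budget: postpone        */
    t->bytes_in_window += n;
    return 1;
}
```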
