How to calculate Broadcast speed status in Ant Media Server? - ant-media-server

In the Dashboard, I see that my broadcast speed status changes from time to time. How is the stream broadcast speed status calculated?

It is calculated in this function as follows:
speed = (last received packet PTS in ms - first received packet PTS in ms) / (last packet receive time in ms - first packet receive time in ms)
where
received packet PTS in ms: the millisecond equivalent of the presentation timestamp (PTS) of the packet
packet receive time in ms: the Unix time in ms at which the packet was received
Example:
Let Ant Media Server receive the first packet at 1591985868368 (Unix time) and the PTS value for this packet is 0 in ms.
Let Ant Media Server receive the 30th packet at 1591985869500 (Unix time) and the PTS value for this packet is 1230 in ms.
Then
speed = (1230-0)/(1591985869500-1591985868368)
= 1230/1132
= 1.09x
Note that PTS values are generally not in milliseconds but in the stream's time base units, so they must be converted to milliseconds first.
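For illustration only, here is a minimal C sketch of the calculation above using the numbers from the example. The function name and the 1/1000 time base are assumptions for the sketch, not Ant Media Server's actual code.

/* Minimal sketch of the speed calculation described above.
 * Names and time base are illustrative, not Ant Media Server's code. */
#include <stdio.h>
#include <stdint.h>

/* Convert a PTS expressed in a time base of num/den seconds per tick to ms. */
static int64_t pts_to_ms(int64_t pts, int tb_num, int tb_den) {
    return pts * 1000 * tb_num / tb_den;
}

int main(void) {
    /* Values from the example: a 1/1000 time base means PTS is already in ms. */
    int64_t first_pts_ms  = pts_to_ms(0,    1, 1000);
    int64_t last_pts_ms   = pts_to_ms(1230, 1, 1000);
    int64_t first_recv_ms = 1591985868368LL;   /* Unix time of first packet */
    int64_t last_recv_ms  = 1591985869500LL;   /* Unix time of 30th packet  */

    double speed = (double)(last_pts_ms - first_pts_ms) /
                   (double)(last_recv_ms - first_recv_ms);

    printf("speed = %.2fx\n", speed);          /* prints 1.09x */
    return 0;
}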

Related

Are transmission delay and queuing delay overlapping in nature? I am confused: is the time taken by the transmission delay also included in the queuing delay?

Queuing Delay At the queue, the packet experiences a queuing delay as it waits to be transmitted onto the link. The length of the queuing delay of a specific packet will depend on the number of earlier-arriving packets that are queued and waiting for transmission onto the link. If the queue is empty and no other packet is currently being transmitted, then our packet’s queuing delay will be zero. On the other hand, if the traffic is heavy and many other packets are also waiting to be transmitted, the queuing delay will be long. We will see shortly that the number of packets that an arriving packet might expect to find is a function of the intensity and nature of the traffic arriving at the queue. Queuing delays can be on the order of microseconds to milliseconds in practice.
Transmission Delay Assuming that packets are transmitted in a first-come-first-served manner, as is common in packet-switched networks, our packet can be transmitted only after all the packets that have arrived before it have been transmitted. Denote the length of the packet by L bits, and denote the transmission rate of the link from router A to router B by R bits/sec. For example, for a 10 Mbps Ethernet link, the rate is R = 10 Mbps; for a 100 Mbps Ethernet link, the rate is R = 100 Mbps. The transmission delay is L/R. This is the amount of time required to push (that is, transmit) all of the packet's bits into the link. Transmission delays are typically on the order of microseconds to milliseconds in practice.
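As a concrete example, a small C sketch computing the transmission delay L/R for an assumed 1500-byte packet on a 10 Mbps link (the numbers are chosen purely for illustration):

/* Sketch: transmission delay = L / R, using the definitions above. */
#include <stdio.h>

int main(void) {
    double L = 1500.0 * 8.0;   /* packet length in bits (1500-byte frame) */
    double R = 10e6;           /* link rate in bits/sec (10 Mbps)         */
    double d_trans = L / R;    /* transmission delay in seconds           */

    printf("transmission delay = %.3f ms\n", d_trans * 1000.0);  /* 1.200 ms */
    return 0;
}

The queuing delay is whatever time the packet additionally spends waiting in the buffer before this L/R push begins, so the two delays add rather than overlap.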

testing NTP for time sync between nodes in a local network

I need your expertise:
I have a Xilinx Zynq board and a desktop computer that sync their time with an NTP server (stratum 3). The NTP server is a desktop computer that syncs its time with the NTP pool. Now, in order to test and calculate the time difference between the embedded system (Zynq) and the desktop computer, I am using a simple echo method described below:
Note: All communication goes over the wireless network, except between the local NTP server and the NTP pool.
the client sends its time to the server
the server reads the packet, compares its own time to the packet's time, and prints the difference
the server puts its own time into another packet and sends it to the client
the client receives the packet, reads it, and prints the time difference
This gives me a time difference of around 1-2 milliseconds.
Now the problem is that testing with another method, a simple send-and-receive instead of the echo method (meaning one system only sends a packet with its timestamp and the other only reads it and prints the time difference), results in a time difference roughly 10 times larger! I was wondering if you know what could be the reason behind it?
The reason is that the wireless device has a queue that buffers 10 packets before sending any of them, and that makes the process take longer than normal.
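For reference, a minimal C sketch of the client side of the echo method described above might look like the following. The server address, port, and raw-int64 packet format are assumptions for illustration, and serialization/endianness between the two machines is glossed over.

/* Sketch of the echo test's client side: send our wall-clock time,
 * receive the server's time back, and print the difference. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/socket.h>
#include <time.h>
#include <unistd.h>

static int64_t now_ms(void) {
    struct timespec ts;
    clock_gettime(CLOCK_REALTIME, &ts);        /* NTP-disciplined wall clock */
    return (int64_t)ts.tv_sec * 1000 + ts.tv_nsec / 1000000;
}

int main(void) {
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in srv = {0};
    srv.sin_family = AF_INET;
    srv.sin_port = htons(5000);                          /* assumed port   */
    inet_pton(AF_INET, "192.168.1.10", &srv.sin_addr);   /* assumed server */

    int64_t t_client = now_ms();
    sendto(sock, &t_client, sizeof(t_client), 0,
           (struct sockaddr *)&srv, sizeof(srv));        /* step 1: send our time */

    int64_t t_server = 0;
    recvfrom(sock, &t_server, sizeof(t_server), 0, NULL, NULL); /* step 4: server's reply */

    printf("client-server difference: %lld ms\n",
           (long long)(now_ms() - t_server));            /* rough offset incl. return trip */
    close(sock);
    return 0;
}

Because the one-way variant removes the immediate reply, any buffering on the sending side (such as the 10-packet wireless queue mentioned above) is charged entirely to the measured difference instead of being partly cancelled out by the round trip.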

Reasonable RTP stream timeout period

I'm not an expert in networks, so I would like to know: what is the maximum reasonable time between receiving two RTP packets after which the stream should be considered timed out?

TCP Socket no connection timeout

I open a TCP socket and connect it to another socket somewhere else on the network. I can then successfully send and receive data. I have a timer that sends something to the socket every second.
I then rudely interrupt the connection by forcibly losing it (pulling out the Ethernet cable in this case). My socket still reports that it is successfully writing data out every second. This continues for approximately 1 hour and 30 minutes, at which point a write error is finally reported.
What specifies this time-out where a socket finally accepts the other end has disappeared? Is it the OS (Ubuntu 11.04), is it from the TCP/IP specification, or is it a socket configuration option?
Pulling the network cable will not break a TCP connection(1) though it will disrupt communications. You can plug the cable back in and once IP connectivity is established, all back-data will move. This is what makes TCP reliable, even on cellular networks.
When TCP sends data, it expects an ACK in reply. If none comes within some amount of time, it re-transmits the data and waits again. The time it waits between transmissions generally increases exponentially.
After some number of retransmissions or some amount of total time with no ACK, TCP will consider the connection "broken". How many times or how long depends on your OS and its configuration but it typically times-out on the order of many minutes.
From Linux's tcp.7 man page:
tcp_retries2 (integer; default: 15; since Linux 2.2)
The maximum number of times a TCP packet is retransmitted in
established state before giving up. The default value is 15, which
corresponds to a duration of approximately between 13 to 30 minutes,
depending on the retransmission timeout. The RFC 1122 specified
minimum limit of 100 seconds is typically deemed too short.
This is likely the value you'll want to adjust to change how long it takes to detect if your connection has vanished.
(1) There are exceptions to this. The operating system, upon noticing a cable being removed, could notify upper layers that all connections should be considered "broken".
If you want quick socket error propagation to your application code, you may want to try this socket option:
TCP_USER_TIMEOUT (since Linux 2.6.37)
This option takes an unsigned int as an argument. When the
value is greater than 0, it specifies the maximum amount of
time in milliseconds that transmitted data may remain
unacknowledged before TCP will forcibly close the
corresponding connection and return ETIMEDOUT to the
application. If the option value is specified as 0, TCP will
use the system default.
See the full description in linux/man/tcp(7). This option is more flexible than editing tcp_retries2 (you can set it on the fly, right after socket creation), and it applies exactly to the situation where your client's socket is not aware of the state of the server's socket and may get into a so-called half-closed state.
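A minimal C sketch of setting this option on Linux (the 10-second value is only an example):

/* Sketch: enable TCP_USER_TIMEOUT so that writes fail with ETIMEDOUT
 * after ~10 s of unacknowledged data (value chosen for illustration). */
#include <netinet/in.h>
#include <netinet/tcp.h>   /* TCP_USER_TIMEOUT (Linux >= 2.6.37) */
#include <stdio.h>
#include <sys/socket.h>

int main(void) {
    int sock = socket(AF_INET, SOCK_STREAM, 0);
    unsigned int timeout_ms = 10000;   /* 10 seconds of unacked data */

    if (setsockopt(sock, IPPROTO_TCP, TCP_USER_TIMEOUT,
                   &timeout_ms, sizeof(timeout_ms)) < 0) {
        perror("setsockopt(TCP_USER_TIMEOUT)");
        return 1;
    }
    /* ... connect()/send() as usual; data left unacknowledged for longer
     * than timeout_ms eventually makes the connection fail with ETIMEDOUT. */
    return 0;
}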
Two excellent answers are here and here.
TCP user timeout may work for your case: The TCP user timeout controls how long transmitted data may remain unacknowledged before a connection is forcefully closed.
There are three OS-dependent TCP keepalive timeout parameters.
On Linux the defaults are:
tcp_keepalive_time default 7200 seconds
tcp_keepalive_probes default 9
tcp_keepalive_intvl default 75 sec
Total timeout time is tcp_keepalive_time + (tcp_keepalive_probes * tcp_keepalive_intvl), with these defaults 7200 + (9 * 75) = 7875 secs
To set these parameters on Linux:
sysctl -w net.ipv4.tcp_keepalive_time=1800 net.ipv4.tcp_keepalive_probes=3 net.ipv4.tcp_keepalive_intvl=20
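If you prefer per-socket settings over system-wide sysctls, a sketch using the Linux per-socket equivalents (SO_KEEPALIVE plus TCP_KEEPIDLE, TCP_KEEPINTVL, and TCP_KEEPCNT; the values mirror the sysctl example above and are illustrative) could look like this:

/* Sketch: per-socket keepalive, the counterparts of the sysctls above. */
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

int enable_keepalive(int sock) {
    int on = 1, idle = 1800, intvl = 20, cnt = 3;

    if (setsockopt(sock, SOL_SOCKET,  SO_KEEPALIVE,  &on,    sizeof(on))    < 0) return -1;
    if (setsockopt(sock, IPPROTO_TCP, TCP_KEEPIDLE,  &idle,  sizeof(idle))  < 0) return -1; /* tcp_keepalive_time   */
    if (setsockopt(sock, IPPROTO_TCP, TCP_KEEPINTVL, &intvl, sizeof(intvl)) < 0) return -1; /* tcp_keepalive_intvl  */
    if (setsockopt(sock, IPPROTO_TCP, TCP_KEEPCNT,   &cnt,   sizeof(cnt))   < 0) return -1; /* tcp_keepalive_probes */
    return 0;
}

Call it after socket() and before connect() so the idle connection is probed and torn down on the per-socket schedule instead of the system-wide one.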

RTT timing for TCP packet using Wireshark

I want to calculate the Round Trip timing for the TCP packets.
But in Wireshark, I don't see any particular field for the RTT timing of a TCP packet, like there is for RTP packets.
Wireshark does calculate an RTT graph, but I cannot find how it is calculated.
Can someone help me find the formula used for it?
There is nothing inside TCP that gives the round-trip time. It's estimated by the kernel based on how long it takes to receive an ACK to data that was sent. It records the timestamp of when a given sequence number went out and compares it to the timestamp of the corresponding ACK. The initial 3-way handshake gives a decent starting value for this.
However, this is only an estimate as the receiver is free to delay ACKs for a short period if it feels it can respond to multiple incoming packets with a single reply.
RTT frequently changes over the duration of the session due to changing network conditions. The effect is (obviously) more pronounced the farther apart the endpoints are.
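To make the idea concrete, here is a toy C sketch of that bookkeeping: remember when each segment's sequence number was sent, and take an RTT sample when an ACK covers it. It ignores retransmissions and sequence-number wraparound, and it is not the kernel's or Wireshark's actual code.

/* Toy illustration: map sent sequence numbers to timestamps, then
 * measure the gap to the ACK that acknowledges them. */
#include <stdint.h>
#include <stdio.h>

#define MAX_INFLIGHT 64

struct sample { uint32_t end_seq; double sent_at; int used; };
static struct sample inflight[MAX_INFLIGHT];

/* Record the send time of a segment whose last byte has sequence number end_seq. */
void on_segment_sent(uint32_t end_seq, double now) {
    for (int i = 0; i < MAX_INFLIGHT; i++) {
        if (!inflight[i].used) {
            inflight[i].end_seq = end_seq;
            inflight[i].sent_at = now;
            inflight[i].used = 1;
            return;
        }
    }
}

/* An ACK of at least end_seq acknowledges that segment; the gap is one RTT sample. */
void on_ack_received(uint32_t ack, double now) {
    for (int i = 0; i < MAX_INFLIGHT; i++) {
        if (inflight[i].used && ack >= inflight[i].end_seq) {
            printf("RTT sample: %.3f ms\n", (now - inflight[i].sent_at) * 1000.0);
            inflight[i].used = 0;
        }
    }
}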
If you want to get the values of the RTT calculated by wireshark/tshark, the following did the trick for me to print them on stdout:
tshark -r myfile.pcap -Y 'ip.addr == AA.BB.CC.DD' -T fields -e tcp.analysis.ack_rtt
(where I used the display filter after -Y to restrict the analysis to only one remote host)
If you are using Wireshark, it shows the iRTT (initial round-trip time) and the RTT of each sent packet; just open the packet in a new window and look at the SEQ/ACK analysis section.
