I have a C++ application running on a Windows XP 32-bit system, sending and receiving short TCP/IP packets.
Measuring the arrival times accurately, I see them quantized to 16-millisecond units (meaning all packets arrive separated from each other by 16 × N milliseconds).
To avoid packet aggregation I tried to disable the Nagle algorithm by setting the TCP_NODELAY option at the IPPROTO_TCP level on the socket, but it did not help.
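For reference, this is the standard Winsock way of setting that option (a minimal sketch; sock stands for the application's connected socket):

    #include <winsock2.h>

    // Disable Nagle so small writes go out immediately instead of being coalesced.
    void disableNagle(SOCKET sock) {
        BOOL flag = TRUE;
        setsockopt(sock, IPPROTO_TCP, TCP_NODELAY,
                   reinterpret_cast<const char*>(&flag), sizeof(flag));
    }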
I suspect that the problem is related to the Windows scheduler, which also has a roughly 16-millisecond clock tick.
Any idea of a solution to this problem?
Thanks
Use a higher-resolution timer such as QueryPerformanceCounter() or __rdtsc(), being aware of their pitfalls.
Similarly, note that if you are using wait functions you may want to call timeBeginPeriod() for 1 ms resolution, or even implement a busy-wait delay function wrapped around a higher-resolution timer.
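A minimal sketch of both ideas, assuming a Windows build linked against winmm.lib (the delay length is just an example):

    #include <windows.h>

    // Busy-wait delay built on QueryPerformanceCounter for sub-millisecond waits.
    void busyWaitMicroseconds(long long micros) {
        LARGE_INTEGER freq, start, now;
        QueryPerformanceFrequency(&freq);
        QueryPerformanceCounter(&start);
        const long long ticks = micros * freq.QuadPart / 1000000;
        do {
            QueryPerformanceCounter(&now);
        } while (now.QuadPart - start.QuadPart < ticks);
    }

    int main() {
        timeBeginPeriod(1);        // raise the system timer resolution to 1 ms
        busyWaitMicroseconds(250); // example: a 250-microsecond delay
        timeEndPeriod(1);          // always restore the default resolution
    }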
Related
I have an audio-playing app that runs on several distributed devices, each with its own clock. I am using QAudioOutput to play the same audio on each device, and a UDP broadcast from a master device to synchronize the other devices - so far so good.
However, I am having a hard time getting an accurate picture of "what is playing now" from QAudioOutput. I am using QAudioOutput's bufferSize() and bytesFree() to estimate which audio frame is currently being fed to the sound system, but the bytesFree() value progresses in a "chunky" fashion, so (bufferSize() - bytesFree()) / bytesPerFrame doesn't give the number of frames remaining in the buffer, but some smaller number that bounces around relative to it.
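For reference, this is the estimate being described - a fragment rather than a runnable example, assuming audioOutput is the QAudioOutput instance and format its QAudioFormat:

    // Qt 5: estimate the audio still queued in the QAudioOutput buffer.
    qint64 bytesQueued   = audioOutput->bufferSize() - audioOutput->bytesFree();
    int    bytesPerFrame = format.bytesPerFrame();      // channels * bytes per sample
    qint64 framesQueued  = bytesQueued / bytesPerFrame; // advances in period-sized jumps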
The result I am getting now is that when my "drift indicator" updates, it runs around 0 for several seconds, then gives readings in the -15 to -35 ms range every few seconds for maybe 20 seconds, then makes a correcting jump of about +120 ms. Although I could try to analyze this long-term pattern and tease out the true drift rate (maybe a couple of milliseconds per minute), I'd much rather work with more direct information if it's available.
Is there any way to read the true number of frames remaining in the QAudioOutput buffer while it is playing a stream?
I realize I could minimize my problems by radically reducing the buffer size and feeding QAudioOutput with a high priority process, but I'd rather have a solution that uses longer buffers and isn't so fussy about what it runs on - target platforms vary from Windows 10 machines to Raspberry Pi Zero Ws, possibly to Android phones.
I've been using packETH for a while and I have always wondered one thing.
When I set the packet generation speed with the Gen-b option, I realized packETH doesn't actually send packets at the configured rate.
I think that when I use packETH in a virtual machine, the maximum speed tends to decrease.
Even if I set the number of packets to send to 40,000,000 and packets per second to 4,000,000, the operation doesn't finish in 10 seconds. Instead, packETH seems to try to send packets as fast as possible, fails to reach that rate, and simply sends more slowly, so the operation takes longer to finish.
So, what decides packETH's maximum packet generation/transfer speed?
Does it automatically adjust the maximum speed so that the receiving server can take in all the packets correctly?
Thank you so much in advance.
I've read about packETH and found nothing indicating that it is a multi-threaded packet sender, so that is likely the problem. What you want is a multithreaded packet sender that can take any number of packets and send them in parallel. But first, let's focus on packETH:
Which configuration have you tried?
In the Auto mode you can choose one of five distribution modes. Except for the random mode, you can see different timings by choosing a different mode. In the random mode the generator tries to be smart :). Besides timing, you can also specify the amount of traffic for each stream. In the manual mode you select all of the parameters by hand.
Here is where I found it: http://packeth.sourceforge.net/packeth/GUI_version.html
For a multithreaded sender I would suggest trafgen; let me highlight some of its features.
This one means you need not worry about the packet limit:
Process a number of packets and then exit. If the number of packets is 0, then this is equivalent to infinite packets resp. processing until interrupted. Otherwise, a number given as an unsigned integer will limit processing.
This one ensures parallelism:
Specify the number of processes trafgen shall fork(2) off. By default trafgen will start as many processes as CPUs that are online and pin them to each, respectively. Allowed value must be within interval [1,CPUs].
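Putting the two together, a hypothetical invocation might look like this (the option names --num and --cpus correspond to the man page excerpts above; the device and configuration file are placeholders):

    trafgen --dev eth0 --conf packets.cfg --num 40000000 --cpus 4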
Context
We have an unsteady transmission channel. Some packets may be lost.
Sending a single network packet in any direction (from A to B or from B to A) takes 3 seconds.
We allow a signal delay of 5 seconds, no more. So we have a 5-second buffer. We can use those 5 seconds however we want.
Currently we use only 80% of the transmission channel, so we have 1/4 more room to utilize (the spare 20% of capacity is a quarter of what we currently send).
The quality of the video cannot be worsened.
Problem
We need to make the quality better. How to handle lost packets?
Solution proposition
One thing is certain - we cannot use TCP in this case, because when TCP detects a problem it requests retransmission of the lost data. The retransmitted packet would then arrive after 9 seconds (3 s for the original attempt, 3 s for the retransmission request, 3 s for the resend), which is more than the limit.
Therefore we need to use UDP and handle those errors ourselves. How do we do that? How can we make sure that fewer packets are lost than currently, without retransmitting them?
It's a complicated solution, but by far the best option is to add forward error correction (FEC). This is how images are transferred from space probes, where the latency is measured in minutes or hours. It's also used by cell phones, where delayed packets are bad for two-way communication.
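As an illustration only (not a production scheme), here is a minimal sketch of the simplest flavor of FEC - XOR parity. For every GROUP data packets one parity packet is sent, which costs 25% extra bandwidth (conveniently matching the spare quarter of channel capacity mentioned in the question) and lets the receiver rebuild any single lost packet per group:

    #include <array>
    #include <cstddef>
    #include <cstdint>

    constexpr std::size_t PACKET_SIZE = 1024;
    constexpr std::size_t GROUP = 4; // 4 data packets + 1 parity = 25% overhead

    using Packet = std::array<std::uint8_t, PACKET_SIZE>;

    // Sender side: the parity packet is the XOR of all packets in the group.
    Packet makeParity(const Packet (&group)[GROUP]) {
        Packet parity{};
        for (const Packet& p : group)
            for (std::size_t i = 0; i < PACKET_SIZE; ++i)
                parity[i] ^= p[i];
        return parity;
    }

    // Receiver side: XOR the parity with every packet that did arrive;
    // the result is the single missing packet (nullptr marks the lost one).
    Packet recoverMissing(const Packet* received[GROUP], const Packet& parity) {
        Packet missing = parity;
        for (std::size_t k = 0; k < GROUP; ++k)
            if (received[k])
                for (std::size_t i = 0; i < PACKET_SIZE; ++i)
                    missing[i] ^= (*received[k])[i];
        return missing;
    }

Real-world codes (for example Reed-Solomon, the kind used by space probes) tolerate more than one loss per group, but the bandwidth-for-reliability trade-off is the same.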
A not-as-good but easier-to-implement option is to use UDT. This is a library that provides TCP-like retransmission over UDP, but it allows you much more control over the protocol.
I have a switch that supports the SNMP protocol. I want to log or monitor the bandwidth used by the switch and the connected devices/ports; the amount of incoming and outgoing data has to be calculated periodically and written to a simple log file.
As another option, a simple program for monitoring the network bandwidth, total data traffic, etc. of an SNMP network may be useful for me. But it has to be compact and lightweight software; many programs are not freeware and are very large. Is there a solution for doing this? Thanks.
Interfaces monitored through SNMP report their data usage in the ifInOctets and ifOutOctets counters. The numbers they report can't be used directly; you need to sample them every X minutes or seconds, where X gets smaller the faster the interface is. You simply subtract the previous sample from the current one to get how much traffic went by during those X minutes. Watch out for wrapping as the counter reaches the 32-bit integer limit (it certainly won't send negative traffic ;-) ). The value of X is largely determined by how long it takes to wrap a 32-bit counter at the interface's maximum speed.
If you have a high-speed switch, you should ideally use the ifHCInOctets and ifHCOutOctets counters instead, if your switch supports them. These are 64-bit counters that won't wrap nearly as often, so X can be much larger. But not all devices support them.
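A minimal sketch of the delta arithmetic described above (the helper names are made up for illustration; the samples themselves would come from whatever SNMP library you use):

    #include <cstdint>

    // Difference between two samples of a 32-bit SNMP counter such as ifInOctets,
    // assuming at most one wrap occurred between the samples.
    uint64_t counterDelta32(uint32_t previous, uint32_t current) {
        if (current >= previous)
            return current - previous;
        return (UINT64_C(1) << 32) - previous + current; // counter wrapped past 2^32
    }

    // Average throughput in bytes per second over a sampling interval of X seconds.
    double bytesPerSecond(uint32_t previous, uint32_t current, double intervalSeconds) {
        return counterDelta32(previous, current) / intervalSeconds;
    }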
I have a voice-chat service which is experiencing variations in the delay between packets. I was wondering what the proper response to this is, and how to compensate for it?
For example, should I adjust my audio buffers in some way?
Thanks
You don't say whether this is an application you are developing yourself or one you are simply using - you will obviously have more control over the former, so that may be important.
Either way, it may be that your network is simply not good enough to support VoIP, in which case you really need to concentrate on improving the network or using a different one.
VoIP typically requires an end-to-end delay of less than 200 ms (milliseconds) before users perceive an issue.
Jitter is also important - in simple terms it is the variance in end-to-end packet delay. For example, the delay between packet 1 and packet 2 may be 20 ms, while the delay between packet 2 and packet 3 is 30 ms. Having a jitter buffer of 40 ms means your application will wait up to 40 ms between packets, so it would not 'lose' any of these packets.
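If you are developing the application yourself, the usual way to quantify jitter is the running interarrival-jitter estimator from RFC 3550 (the RTP specification). A minimal sketch, with transit times assumed to be in milliseconds:

    #include <cmath>

    // RFC 3550, section 6.4.1: J(i) = J(i-1) + (|D(i-1,i)| - J(i-1)) / 16,
    // where D is the change in relative transit time between consecutive packets.
    double updateJitter(double jitter, double prevTransit, double currTransit) {
        double d = std::fabs(currTransit - prevTransit);
        return jitter + (d - jitter) / 16.0; // exponentially smoothed estimate
    }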
Any packet not received within the jitter buffer window is usually discarded, so there is a relationship between jitter and the effective packet loss for your connection. Packet loss typically impacts users' perception of VoIP quality as well - different codecs have different tolerances - a common target is to keep it below 1%-5%. Packet loss concealment techniques can help if it is just an intermittent problem.
Jitter buffers are either static or dynamic (adaptive) - in either case, the bigger they get, the greater the chance they introduce delay into the call, and you are back to the delay issue above. A typical jitter buffer might be between 20 and 50 ms, either set statically or adapting automatically based on network conditions.
Good references for further information are:
- http://www.voiptroubleshooter.com/indepth/jittersources.html
- http://www.cisco.com/en/US/tech/tk652/tk698/technologies_tech_note09186a00800945df.shtml
It is also worth trying some of the common online connection speed tests, as many include a specific VoIP test that will give you an idea of whether your local connection is good enough for VoIP (although bear in mind that these tests only reflect the conditions at the exact time you run them).