Get total Latency - UDP Audio Communication - networking

Okay, I am currently trying to make voice chat software using NAudio and C#.
But I currently have a problem: latency seems to get worse and worse the longer the application runs.
Now, I am a total beginner, so I have no idea what could be causing it.
But to troubleshoot, I would like to know whether I can get the total latency, to see how much it adds up over time.
Total latency = input buffer + network latency + output buffer (and more if there is any; I am using UDP).
So if I have something like:
Label.Text = TotalLatency();
it will get updated all the time.
while (!bStop)
{
    byte[] datanbefore = waveStream.GetBuffer();
    autoResetEvent.WaitOne();

    waveStream.Position = 0;
    captureBuffer.Read(offset, waveStream, halfBuffer, LockFlag.None);
    readFirstBufferPart = !readFirstBufferPart;
    offset = readFirstBufferPart ? 0 : halfBuffer;

    //TODO: Fix this ugly way of initializing differently.
    //Mute Mic when button is checked
    if (MuteMic.Checked)
    {
        waveStream = new MemoryStream(halfBuffer);
    }

    byte[] datanaudio = waveStream.GetBuffer();
    udpClient.Send(datanaudio, datanaudio.Length, otherPartyIP.Address.ToString(), 5550);
}
So here is the sending part. I am not really sure how the buffering works, as I started the application from a free sample and have been changing it here and there; some parts still remain, but I think the buffering can be improved.
while (!bStop)
{
    // Receive data and queue it for playback.
    byte[] byteData = udpClient.Receive(ref remoteEP);
    waveProvider.AddSamples(byteData, 0, byteData.Length);
}
Here is the receive part, and it's much simpler: it just gets the data from UDP, adds it to a buffer, and plays it.

You can work out roughly the input and output latency by knowing the buffer sizes of WaveIn and WaveOut. By default in NAudio they are each 100 ms.
For network latency, you could try timestamping your audio packets, although the clocks of both machines would need to be in sync.
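To make that concrete, here is a rough sketch of what such a helper could look like in C#, assuming NAudio's default 100 ms buffers and that you prepend a sender timestamp to each packet yourself (the class and member names are hypothetical, and the clock-sync caveat applies):

using System;

// Rough total-latency estimate: input buffer + measured network delay + output buffer.
// InputBufferMs/OutputBufferMs match NAudio's defaults; adjust them if you change the buffer sizes.
class LatencyEstimator
{
    const int InputBufferMs = 100;   // WaveIn default buffer duration
    const int OutputBufferMs = 100;  // WaveOut default buffer duration

    // Updated whenever a timestamped packet arrives.
    public double LastNetworkDelayMs { get; private set; }

    // Call on the receiving side. Assumes the sender prepended 8 bytes produced by
    // BitConverter.GetBytes(DateTime.UtcNow.Ticks) to the audio payload.
    // Only meaningful if both machines' clocks are in sync (e.g. via NTP).
    public void OnPacketReceived(byte[] packet)
    {
        long sentTicks = BitConverter.ToInt64(packet, 0);
        var sentAt = new DateTime(sentTicks, DateTimeKind.Utc);
        LastNetworkDelayMs = (DateTime.UtcNow - sentAt).TotalMilliseconds;
    }

    public double TotalLatencyMs()
    {
        return InputBufferMs + LastNetworkDelayMs + OutputBufferMs;
    }
}

On the receiving side you would strip those 8 timestamp bytes before passing the rest of the datagram to waveProvider.AddSamples, and the label could then show TotalLatencyMs() on a UI timer.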

Related

Memory leak while sending response from rebus handler

I saw some very strange behavior in my Rebus handler, which is self-hosted in an exe. Right after sending a response using the bus.Send method, the memory consumed by the process goes up. I tried to look at the object graph using a memory profiler and found that Rebus is holding the response message in serialized format somewhere.
The object graph showed the following hierarchy down to the root:
System.Message --> CachedBodyMessage --> stream
Give me some pointers if anybody is aware of this.
I understand that a memory leak is a grave concern, but I believe it is unlikely that Rebus contains one.
This belief is rooted in the fact that I have been running Windows Service-hosted Rebus endpoints in production for 1.5 years now, and several of them (e.g. the timeout managers) have sometimes been running for several months without being restarted.
I'd like to be absolutely bulletproof sure though, so I'm willing to investigate the issue you're reporting.
You mention "CachedBodyMessage" - judging by the names of fields inside System.Messaging.Message, it sounds like something within MSMQ. To try to reproduce your issue, I wrote the following test:
[Test, Ignore("Only works in RELEASE mode because otherwise object references are held on to for the duration of the method")]
public void DoesNotLeakMessages()
{
    // arrange
    const string inputQueueName = "test.leak.input";
    var queue = new MsmqMessageQueue(inputQueueName);
    disposables.Add(queue);

    var body = Encoding.UTF8.GetBytes(new string('*', 32768));
    var message = new TransportMessageToSend
    {
        Headers = new Dictionary<string, object> { { Headers.MessageId, "msg-1" } },
        Body = body
    };

    var weakMessageRef = new WeakReference(message);
    var weakBodyRef = new WeakReference(body);

    // act
    queue.Send(inputQueueName, message, new NoTransaction());
    message = null;
    body = null;
    GC.Collect();
    GC.WaitForPendingFinalizers();

    // assert
    Assert.That(weakMessageRef.IsAlive, Is.False, "Expected the message to have been collected");
    Assert.That(weakBodyRef.IsAlive, Is.False, "Expected the body bytes to have been collected");
}
which verifies that the sent transport message is collected as it should be (it will only do so in RELEASE mode though, because of the way DEBUG mode holds on to object references within scope).
I'll try and run the TimePrinter sample now and leave it running for a while to see if I can reproduce the issue. If you stumble upon more information about e.g. exactly which objects are leaking, it would be very helpful.
Thanks again for taking the time to report your worries to me :)
Followup:
I've modified the TimePrinter sample so that it sends 50 msg/s and includes a 64 KB random string payload with each message, and I've tracked the memory usage for almost four hours now. As you can see, it does not look like memory is being leaked.
I'll leave it running the rest of the day, just to be sure.
Maybe you can tell me some more about why you suspected there was a memory leak in the first place?
Update:
As you can see from the trace, it has now been running for 7 hours, and thus more than 1,200,000 messages containing more than 70 GB of data have been sent and consumed by the same process. If cached message bodies were leaking, I am pretty sure we would have seen something rising on the graph.

Lags while sending a lot of data

I'm building a little game with multiplayer. My problem is that when I send the bullets, there is lag once the amount exceeds about 80.
I'm using UDP; my connect code to the server:
udp = socket.udp()
udp:settimeout(0)
udp:setpeername(address, port)
My udp:send of the bullet to the server:
udp:send('%S03'..startX..','..startY..','..bulletAngleX..','..bulletAngleY)
Server: retrieving the bullets and sending them back to the rest of the clients:
elseif code == '%S03' then
    local bulleta = string.gmatch(params, "[^,]+")
    local sX = tonumber(bulleta())
    local sY = tonumber(bulleta())
    local dX = tonumber(bulleta())
    local dY = tonumber(bulleta())
    for i,v in ipairs(clients) do
        udp:sendto('%C01'..math.random(120, 200)..','..sX..','..sY..','..dX..','..dY, v['ip'], tonumber(v['port']))
    end
end
Client: getting the bullets data and creating them in a table:
elseif code == '%C01' then
    local xy = string.gmatch(re, "[^,]+")
    local dis = tonumber(xy())
    local xStart = tonumber(xy())
    local yStart = tonumber(xy())
    local xAngle = tonumber(xy())
    local yAngle = tonumber(xy())
    table.insert(bullets, {distance = dis, sX = xStart, sY = yStart, x = xStart, y = yStart, dx = xAngle, dy = yAngle})
The updating of the bullets' x and y coordinates happens on the client once it receives the bullets' x and y, and bullets are removed when their distance from the starting position exceeds 300 pixels.
But my problem remains: there is lag when I am shooting.
I'm not extremely familiar with networking details; however, it is quite likely you are simply sending too many packets and need to bundle or otherwise compress the data you send, to reduce the overall number of UDP messages.
From this Love2d networking tutorial (also using UDP):
It's very easy to completely saturate a network connection if you aren't careful with the packets we send (or request!), so we hedge our chances by limiting how often we send (and request) updates.
(For the record, ten times a second is considered good for most normal games (including many MMOs), and you shouldn't ever really need more than 30 updates a second, even for fast-paced games.)
We could send updates for every little move, but we'll consolidate the last update-worth here into a single packet, drastically reducing our bandwidth use.
I can't confirm that this is the problem without seeing more of your code. However, if you send the data for each bullet separately, to each client, many times per second, the bandwidth usage can become very high.
I would first try bundling the data for all bullets together before sending it to each client, which will greatly reduce the number of individual packets being sent out. Also, in case you aren't, make sure you are not sending the packets in love.update, which is called very, very often. Instead, make a separate function for updating over the network and use timers to call it only about once every 100 ms, as sketched below.
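The question's code is Lua/LÖVE, but the bundling idea is language-agnostic. Here is a minimal sketch of it in C# (all names are hypothetical; it reuses the post's '%S03' message code) that collects bullet updates and flushes them as one datagram on a timer instead of one packet per bullet:

using System.Collections.Generic;
using System.Net.Sockets;
using System.Text;

// Collects bullet updates and sends them as a single datagram per flush,
// instead of one packet per bullet per client.
class BulletBatcher
{
    readonly List<string> pending = new List<string>();
    readonly UdpClient udp;

    public BulletBatcher(UdpClient udp) { this.udp = udp; }

    public void Add(float sX, float sY, float dX, float dY)
    {
        pending.Add($"{sX},{sY},{dX},{dY}");
    }

    // Call from a timer roughly every 100 ms (about 10 updates/s),
    // not from the per-frame update loop.
    public void Flush(string host, int port)
    {
        if (pending.Count == 0) return;
        // One packet carrying every pending bullet, bullets separated by ';'.
        byte[] data = Encoding.UTF8.GetBytes("%S03" + string.Join(";", pending));
        udp.Send(data, data.Length, host, port);
        pending.Clear();
    }
}

The receiving side would then split the payload on ';' first, and on ',' within each bullet, before creating the bullets as in the code above.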
Let me know if any of this is already being accounted for in the code, or show us a larger context of your networking code.

enqueueWriteBuffer locking up when sending non-32 bit aligned data

I am working on an OpenCL project and I have run into an issue where, if I try to send data from the CPU to global memory, it sometimes locks up the application. This happens sporadically: I can run it x times in a row and the next time it locks. It only appears to happen if I try to send data that is not 32 bits wide. For example, I can send float and int just fine, but when I try short, char, or half I get random lockups. It is not dying from badly initialized data or something, because it does run, just not all the time. I also put some logging in and found that it always locks up just after attempting to write one of these non-standard-size data arrays. I am running on an NVIDIA GeForce GT 330M. Below is a snippet of the code I am running to send the data. I am using the C++ interface on the host side.
cl_half *m_aryTest;
shared_ptr< cl::Buffer > m_bufTest;
m_aryTest = new cl_half[m_iNeuronCount];
m_bufTest = shared_ptr<cl::Buffer>( new cl::Buffer(m_lpNervousSystem->ActiveContext(), CL_MEM_READ_ONLY | CL_MEM_USE_HOST_PTR, sizeof(m_aryTest)*m_iNeuronCount, m_aryTest));
kernel.setArg(8, *(m_bufTest.get()));
printf("m_bufTest.\n");
m_lpQueue->enqueueWriteBuffer(*(m_bufTest.get()), CL_TRUE, 0, sizeof(m_aryTest)*m_iNeuronCount, m_aryTest, NULL, NULL);
Does anyone have any ideas on why this is happening?
Thanks

Simple algorithm for reliable communications

So, I have worked on large systems in the past, like an ISO stack session layer, and something like that is too big for what I need, but I do have some understanding of the big picture. What I have now is a serial point-to-point communications link, where some component is dropping data (often).
So I am going to have to write my own reliable delivery system using it for transport. Can someone point me in the direction of basic algorithms, or even give a clue as to what they are called? I tried Google but ended up with postgraduate theories on genetic algorithms and such. I need the basics, e.g. 10-20 lines of pure C.
XMODEM. It's old, it's bad, but it is widely supported both in hardware and in software, with libraries available for literally every language and market niche.
HDLC - High-Level Data Link Control. It's the protocol which has fathered lots of reliable protocols over the last three decades, including TCP/IP. You can't use it directly, but it is a template for how to develop your own protocol. The basic premise is:
every data byte (or packet) is numbered
both sides of the communication maintain two numbers locally: last received and last sent
every packet contains a copy of those two numbers
every successful transmission is confirmed by sending back an empty (or not) packet with the updated numbers
if a transmission is not confirmed within some timeout, send again.
For special handling (synchronization), add flags to the packet (often only one bit is sufficient to mark the packet as special). And do not forget the CRC.
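As an illustration of that premise, here is a bare-bones stop-and-wait sketch in C#, the simplest ARQ variant: one packet outstanding at a time, an alternating sequence bit, and a timeout resend. Framing, CRC, and the actual transport are left abstract, and all names are hypothetical:

using System;

// Bare-bones stop-and-wait ARQ over an unreliable link.
// transmit(seq, payload) hands a frame to the underlying serial transport;
// framing and CRC belong in that layer.
class StopAndWaitSender
{
    readonly Action<byte, byte[]> transmit;
    static readonly TimeSpan ResendTimeout = TimeSpan.FromMilliseconds(200);

    byte seq;           // sequence bit of the frame currently in flight
    byte[] inFlight;    // unacknowledged payload, or null if none
    DateTime sentAt;

    public StopAndWaitSender(Action<byte, byte[]> transmit) { this.transmit = transmit; }

    public bool TrySend(byte[] payload)
    {
        if (inFlight != null) return false; // only one packet outstanding
        seq ^= 1;                           // alternate the 0/1 sequence bit
        inFlight = payload;
        sentAt = DateTime.UtcNow;
        transmit(seq, payload);
        return true;
    }

    // Call when an ack frame arrives carrying the acknowledged sequence bit.
    public void OnAck(byte ackedSeq)
    {
        if (inFlight != null && ackedSeq == seq) inFlight = null;
    }

    // Call periodically: resend if the ack has not arrived within the timeout.
    public void Tick()
    {
        if (inFlight != null && DateTime.UtcNow - sentAt > ResendTimeout)
        {
            sentAt = DateTime.UtcNow;
            transmit(seq, inFlight);
        }
    }
}

The receiver mirrors this: it acks every frame it gets, delivers the payload only when the sequence bit differs from the last delivered one, and thereby drops duplicates caused by resends.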
Neither of the protocols has any kind of session support. But you can introduce one by simply adding another layer - a simple state machine and a timer:
session starts with a special packet
there should be at least one (potentially empty) packet within a specified timeout
if this side hasn't sent a packet within timeout/2, send an empty packet
if no packet has been seen from the other side of the communication within the timeout, the session has been terminated
one can use another special packet for graceful session termination
That is as simple as session control can get.
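That state machine is small enough to sketch directly (C# again, names hypothetical, timeout split exactly as described):

using System;

// Minimal session keepalive: send something if we have been quiet for
// timeout/2, declare the session dead if the peer has been quiet for the
// full timeout.
class SessionKeepalive
{
    static readonly TimeSpan SessionTimeout = TimeSpan.FromSeconds(5);
    DateTime lastSent = DateTime.UtcNow;
    DateTime lastReceived = DateTime.UtcNow;

    public void OnPacketSent()     { lastSent = DateTime.UtcNow; }
    public void OnPacketReceived() { lastReceived = DateTime.UtcNow; }

    // Call periodically; sendEmptyPacket transmits a (potentially empty) frame.
    // Returns false once the session should be considered terminated.
    public bool Tick(Action sendEmptyPacket)
    {
        var now = DateTime.UtcNow;
        if (now - lastReceived > SessionTimeout)
            return false;
        if (now - lastSent > TimeSpan.FromTicks(SessionTimeout.Ticks / 2))
        {
            sendEmptyPacket();
            lastSent = now;
        }
        return true;
    }
}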
There are (IMO) two aspects to this question.
Firstly, if data is being dropped then I'd look at resolving the hardware issues first, as otherwise you'll have GIGO
As for the comms protocols, your post suggests a fairly trivial system? Are you wanting to validate data (parity, sumcheck?) or are you trying to include error correction?
If validation is all that is required, I've got reliable systems running using RS232 and CRC8 sumchecks - in which case this StackOverflow page probably helps
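For reference, a CRC-8 sumcheck really is only a few lines. This sketch (C#) uses the common 0x07 polynomial; use whatever polynomial and initial value your existing link standardizes on:

// Straightforward bitwise CRC-8 with polynomial x^8 + x^2 + x + 1 (0x07), initial value 0x00.
static byte Crc8(byte[] data)
{
    byte crc = 0x00;
    foreach (byte b in data)
    {
        crc ^= b;
        for (int i = 0; i < 8; i++)
            crc = (byte)((crc & 0x80) != 0 ? (crc << 1) ^ 0x07 : crc << 1);
    }
    return crc;
}

The sender appends Crc8(payload) to each frame; the receiver recomputes it and discards the frame on mismatch, letting the retransmission layer handle recovery.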
If some component is dropping data in a serial point-to-point link, there must be some bugs in your code.
Firstly, you should confirm that there is no problem in the physical layer's communication.
Secondly, you need some knowledge of data communication theory, such as ARQ (automatic repeat request).
Further thoughts, after considering your response to the first two answers... this does indicate hardware problems, and no amount of clever code is going to fix that.
I suggest you get an oscilloscope onto the link, which should help to determine where the fault lies. In particular look at the baud rate of the two sides (Tx, Rx) to ensure that they are within spec... auto-baud is often a problem?!
But look to see if the drop-out is regular, or can be synced with any other activity.
On the sending side:
///////////////////////////////////////// XBee logging
void dataLog(int idx, int t, float f)
{
    ubyte stx[2] = { 0x10, 0x02 };
    ubyte etx[2] = { 0x10, 0x03 };
    nxtWriteRawHS(stx, 2, 1);
    wait1Msec(1);
    nxtWriteRawHS(idx, 2, 1);
    wait1Msec(1);
    nxtWriteRawHS(t, 2, 1);
    wait1Msec(1);
    nxtWriteRawHS(f, 4, 1);
    wait1Msec(1);
    nxtWriteRawHS(etx, 2, 1);
    wait1Msec(1);
}
On the receiving side:
void XBeeMonitorTask()
{
    int[] lastTick = Enumerable.Repeat<int>(int.MaxValue, 10).ToArray();
    int[] wrapCounter = new int[10];
    while (!XBeeMonitorEnd)
    {
        if (XBee != null && XBee.BytesToRead >= expectedMessageSize)
        {
            // read a data element, parse, add it to collection, see above for message format
            if (XBee.BaseStream.Read(XBeeIncoming, 0, expectedMessageSize) != expectedMessageSize)
                throw new InvalidProgramException();
            //System.Diagnostics.Trace.WriteLine(BitConverter.ToString(XBeeIncoming, 0, expectedMessageSize));
            if ((XBeeIncoming[0] != 0x10 || XBeeIncoming[1] != 0x02) || // dle stx
                (XBeeIncoming[10] != 0x10 || XBeeIncoming[11] != 0x03)) // dle etx
            {
                System.Diagnostics.Trace.WriteLine("recover sync");
                while (true)
                {
                    int b = XBee.BaseStream.ReadByte();
                    if (b == 0x10)
                    {
                        int c = XBee.BaseStream.ReadByte();
                        if (c == 0x03)
                            break; // realigned (maybe)
                    }
                }
                continue; // resume at loop start
            }
            UInt16 idx = BitConverter.ToUInt16(XBeeIncoming, 2);
            UInt16 tick = BitConverter.ToUInt16(XBeeIncoming, 4);
            Single val = BitConverter.ToSingle(XBeeIncoming, 6);
            if (tick < lastTick[idx])
                wrapCounter[idx]++;
            lastTick[idx] = tick;
            Dispatcher.BeginInvoke(DispatcherPriority.ApplicationIdle, new Action(() => DataAdd(idx, tick * wrapCounter[idx], val)));
        }
        Thread.Sleep(2); // surely we can keep up with the NXT
    }
}

Wowza + Flex not reproducing whole audio

I'm writing a Flex app which must record audio and then play it back. It records just fine (I can hear the FLV on the server), but when it comes to playback it cuts off the end a little bit, and each time I ask it to play again it cuts off a little more. What could it be? I guess it's something related to buffer management, but I don't know exactly. Any thoughts?
EDIT: Here's the code I'm using for playback. It is called from a mediator:
var streamPlayClient:Object = new Object();
this.stream.client = streamPlayClient;
streamPlayClient.onPlayStatus = function(infoObject:Object):void {
    if (infoObject.code == "NetStream.Buffer.Flush") {
        stopPlayback();
    }
};
this.stream.play("flv:" + this.streamName);
As it turns out, I have to handle the NetStream.Buffer.Empty event instead of NetStream.Play.Complete or NetStream.Buffer.Flush.
