Where does TCP New Reno set the threshold value once a packet is dropped in ns-3 (tcp)?

TCP New Reno sets the threshold (ssthresh) to half of the current cwnd once a packet drop is detected. I need to find the method that does this.
In tcp-l4-protocol.h, TcpClassicRecovery is used as the recovery method. When recovery is entered, TcpClassicRecovery sets the current cwnd with the following code segment:
void
TcpClassicRecovery::EnterRecovery (Ptr<TcpSocketState> tcb, uint32_t dupAckCount,
                                   uint32_t unAckDataCount, uint32_t lastSackedBytes)
{
  NS_LOG_FUNCTION (this << tcb << dupAckCount << unAckDataCount << lastSackedBytes);
  NS_UNUSED (unAckDataCount);
  NS_UNUSED (lastSackedBytes);
  tcb->m_cWnd = tcb->m_ssThresh;
  tcb->m_cWndInfl = tcb->m_ssThresh + (dupAckCount * tcb->m_segmentSize);
}
So I assume that the value is already updated before EnterRecovery is called; I need to find the place where that update to cwnd (via ssthresh) happens.
I also modified TcpNewReno::GetSsThresh and analyzed the output, but it is not the method I need either, as it doesn't cut the cwnd in half.
NOTE: I'm using seventh.cc to inspect cwnd, and it always drops the cwnd to 1072. The graph I'm getting is also included. What I need is for the cwnd to drop to half of its value once a packet is dropped. Maybe seventh.cc is not using the default tcp-l4-protocol.h; if so, how can I change it?

I found the answer. The problem was with seventh.cc: it does not use the default layer-4 TCP protocol.
To run the default layer-4 TCP protocol (TCP New Reno), I found an example, tcp-large-transfer.cc, located at ns-3.30/examples/tcp/tcp-large-transfer.cc.
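For anyone who would rather keep using seventh.cc: one way to select the congestion-control variant in ns-3 (assuming ns-3.30, where the socket type is an attribute of TcpL4Protocol) is to set it with Config::SetDefault before the stack and sockets are created. This is only a skeleton, not the tutorial's actual code:

    #include "ns3/core-module.h"
    #include "ns3/internet-module.h"

    using namespace ns3;

    int
    main (int argc, char *argv[])
    {
      // Select the congestion-control algorithm that TcpL4Protocol will use
      // for every TCP socket created after this point.
      Config::SetDefault ("ns3::TcpL4Protocol::SocketType",
                          StringValue ("ns3::TcpNewReno"));

      // ... build the topology, install the InternetStackHelper, and create
      //     the TCP applications as usual (e.g. as in tcp-large-transfer.cc) ...

      Simulator::Run ();
      Simulator::Destroy ();
      return 0;
    }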

I just wanted to add a quick note: the code to change cwnd is in the very snippet in your question. Specifically, it is this line:
tcb->m_cWnd = tcb->m_ssThresh;
Much of the state of a TCP socket is actually stored in the tcb, which is a Ptr<TcpSocketState>.
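To complete the picture: the halving itself happens when m_ssThresh is recomputed, before EnterRecovery runs. In ns-3.30 the New Reno threshold function looks roughly like the following (paraphrased from src/internet/model/tcp-congestion-ops.cc; check your tree for the exact code):

    uint32_t
    TcpNewReno::GetSsThresh (Ptr<const TcpSocketState> state, uint32_t bytesInFlight)
    {
      NS_LOG_FUNCTION (this << state << bytesInFlight);
      // Halve the data in flight, never going below two segments.
      return std::max (2 * state->m_segmentSize, bytesInFlight / 2);
    }

TcpSocketBase assigns the value returned by GetSsThresh() to tcb->m_ssThresh when it detects the loss, just before invoking the recovery object, so the tcb->m_cWnd = tcb->m_ssThresh line above is where the already-halved value lands in cwnd. Note that it halves the bytes in flight (with a floor of two segments) rather than cwnd itself; incidentally, 1072 is exactly twice the default 536-byte segment size, which may explain the constant value in your graph.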

Related

Understanding UNIX termios VMIN and VTIME

I am currently working on a simple serial interface on a UNIX-based device and can't find a definitive answer to the following:
I am trying to determine whether a 'pure timed read' (VMIN = 0, VTIME > 0) will return partway through reading n_bytes, since the timer is started when read() is called, not when the first character is received.
For example, if I send a message to the device on the other end of the serial interface and I want a response I'd attempt the following (pseudo code):
m_tty.c_cc[VMIN] = 0;
m_tty.c_cc[VTIME] = 5; //i.e. > 0
write(myFileHandle, myData, sizeof(myData));
usleep(sizeof(myData) * 100); //assuming 100 us per char to Tx.
read(myFileHandle, myRxData, expectedMinNumBytes);
I am unclear as to whether read() would return if the first byte arrived just as the timer was about to expire, or whether it would continue reading until 'expectedMinNumBytes' once the first byte is received.
Thanks for the help in advance!
This is a pure timed read. If data is available, the read is satisfied immediately. If there is no data, the timer is started when read() is called, and the read returns either when the timer expires (returning 0) or as soon as a single byte becomes available.
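For illustration, a minimal sketch of such a pure timed read (the device path is a placeholder and error handling is trimmed):

    #include <fcntl.h>
    #include <termios.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/dev/ttyS0", O_RDWR | O_NOCTTY);   // placeholder device path
        if (fd < 0)
            return 1;

        struct termios tio;
        tcgetattr(fd, &tio);
        tio.c_lflag &= ~(ICANON | ECHO);   // non-canonical mode
        tio.c_cc[VMIN]  = 0;               // pure timed read
        tio.c_cc[VTIME] = 5;               // 0.5 s timer, starts when read() is called
        tcsetattr(fd, TCSANOW, &tio);

        char buf[64];
        // Returns as soon as at least one byte is available, or 0 after 0.5 s
        // with no data at all; it does not keep waiting for sizeof(buf) bytes.
        ssize_t n = read(fd, buf, sizeof(buf));

        close(fd);
        return n < 0;
    }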

Identification of packets in a byte stream

I'm having a bit of a problem with communication to an accelerometer sensor. The sensor puts out about 8000 readings/second continuously. It is plugged into a USB port with an adapter and shows up as COM4. My problem is that I can't seem to pick out the sensor reading packets from the byte stream. The packets are five bytes long and have the following format:
         High nibble                      Low nibble
Byte 1   checksum, id for packet start    X high
Byte 2   X mid                            X low
Byte 3   Y high                           Y mid
Byte 4   Y low                            Z high
Byte 5   Y mid                            Y low
X, Y, Z are the acceleration components.
The documentation for the sensor states that the high nibble of the first byte is the checksum (calculated as Xhigh+Xlow+Yhigh+Ylow+Zhigh+Zlow), but also the identification of the packet start. I'm pretty new to programming against external devices and can't really grasp how the checksum can be used as an identifier for the start of the packet (wouldn't the checksum change all the time?). Is this a common way of identifying the start of a packet? Does anyone have any idea how to solve this problem?
Any help would be greatly appreciated.
... can't really grasp how the checksum can be used as an identifier for the start of the packet (wouldn't the checksum change all the time?).
Yes, the checksum would change since it is derived from the data.
But even a fixed-value start-of-packet nibble would (by itself) not be sufficient to (initially) identify (or verify) data packets. Since this is binary data (rather than text), the data can take on the same value as any fixed-value start-of-packet. If you had a trivial scan for this start-nibble, that algorithm could easily misidentify a data nibble as the start-nibble.
Is this a common way for identifying the start of a packet?
No, but given the high data rate, it seems to be a scheme to minimize the packet size.
Does anyone have any idea how to solve this problem?
You probably have to initially scan every sequence of bytes five at a time (i.e. the length of a packet).
Calculate the checksum of this "packet", and compare it to the first nibble.
A match indicates that you (may) have packet alignment.
A mismatch means that you should toss the first byte, and test the next possible packet that would start with what was the second byte (i.e. shift the 4 remaining bytes and append a new 5th byte).
Once packet alignment has been achieved (or assumed), you need to continually verify the checksum of every packet in order to confirm data integrity and ensure packet data alignment. Any checksum error should force another hunt for correct packet data alignment (starting at the 2nd byte of the current "packet").
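Here is a sketch of that alignment hunt. Assumptions (not taken from the sensor manual): readByte() is a placeholder for a blocking single-byte read from COM4, the checksum is the low 4 bits of Xhigh+Xlow+Yhigh+Ylow+Zhigh+Zlow, and the last table row is taken to be "Z mid / Z low"; adjust the nibble positions to match your device.

    #include <stdint.h>

    extern uint8_t readByte(void);               // placeholder: blocking 1-byte read

    static uint8_t lo(uint8_t b) { return b & 0x0F; }
    static uint8_t hi(uint8_t b) { return b >> 4; }

    static int checksumOk(const uint8_t p[5])
    {
        // Assumed nibble layout; verify against the sensor documentation.
        uint8_t sum = lo(p[0])                   // X high
                    + lo(p[1])                   // X low
                    + hi(p[2])                   // Y high
                    + hi(p[3])                   // Y low
                    + lo(p[3])                   // Z high
                    + lo(p[4]);                  // Z low (assumed)
        return (sum & 0x0F) == hi(p[0]);         // compare against checksum nibble
    }

    void readPackets(void)
    {
        uint8_t pkt[5];
        for (int i = 0; i < 5; ++i)
            pkt[i] = readByte();

        for (;;) {
            if (checksumOk(pkt)) {
                // Aligned (probably): decode X/Y/Z from pkt here,
                // then read the next full packet.
                for (int i = 0; i < 5; ++i)
                    pkt[i] = readByte();
            } else {
                // Misaligned or corrupted: drop the first byte, shift the
                // remaining four, and append one new byte before re-testing.
                for (int i = 0; i < 4; ++i)
                    pkt[i] = pkt[i + 1];
                pkt[4] = readByte();
            }
        }
    }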
What you need to do is get a free serial-port terminal, or import one into your C# project, and first check all the data and packets you are getting, unless you have already done that. Then, just to read, you will need to do something like this:
using System;
using System.IO.Ports;
using System.Windows.Forms;

namespace SPE
{
    class SerialPortProgram
    {
        // Create the serial port with basic settings
        private SerialPort port = new SerialPort("COM4", 9600, Parity.None, 8, StopBits.One);

        [STAThread]
        static void Main(string[] args)
        {
            // Instantiate this class
            new SerialPortProgram();
        }

        private SerialPortProgram()
        {
            Console.WriteLine("Incoming Data:");
            // Attach a method to be called when there is data waiting in the port's buffer
            port.DataReceived += new SerialDataReceivedEventHandler(port_DataReceived);
            // Begin communications
            port.Open();
            // Enter an application loop to keep this thread alive
            Application.Run();
        }

        private void port_DataReceived(object sender, SerialDataReceivedEventArgs e)
        {
            // Show all the incoming data in the port's buffer
            Console.WriteLine(port.ReadExisting());
        }
    }
}

OpenBSD serial I/O: -lpthread makes read() block forever, even with termios VTIME set?

I have an FTDI USB serial device which I use via the termios serial API. I set up the port so that it will time out on read() calls in half a second (by using the VTIME parameter), and this works on Linux as well as on FreeBSD. On OpenBSD 5.1, however, the read() call simply blocks forever when no data is available (see below). I would expect read() to return 0 after 500 ms.
Can anyone think of a reason that the termios API would behave differently under OpenBSD, at least with respect to the timeout feature?
EDIT: The no-timeout problem is caused by linking against pthread. Regardless of whether I'm actually using any pthreads, mutexes, etc., simply linking against that library causes read() to block forever instead of timing out based on the VTIME setting. Again, this problem only manifests on OpenBSD -- Linux and FreeBSD work as expected.
if ((sd = open(devPath, O_RDWR | O_NOCTTY)) >= 0)
{
    struct termios newtio;
    char input;

    memset(&newtio, 0, sizeof(newtio));

    // set options, including non-canonical mode
    newtio.c_cflag = (CREAD | CS8 | CLOCAL);
    newtio.c_lflag = 0;

    // when waiting for responses, wait until we haven't received
    // any characters for 0.5 seconds before timing out
    newtio.c_cc[VTIME] = 5;
    newtio.c_cc[VMIN] = 0;

    // set the input and output baud rates to 7812
    cfsetispeed(&newtio, 7812);
    cfsetospeed(&newtio, 7812);

    if ((tcflush(sd, TCIFLUSH) == 0) &&
        (tcsetattr(sd, TCSANOW, &newtio) == 0))
    {
        read(sd, &input, 1); // even though VTIME is set on the device,
                             // this read() will block forever when no
                             // character is available in the Rx buffer
    }
}
from the termios manpage:
Another dependency is whether the O_NONBLOCK flag is set by open() or
fcntl(). If the O_NONBLOCK flag is clear, then the read request is
blocked until data is available or a signal has been received. If the
O_NONBLOCK flag is set, then the read request is completed, without
blocking, in one of three ways:
1. If there is enough data available to satisfy the entire
request, and the read completes successfully the number of
bytes read is returned.
2. If there is not enough data available to satisfy the entire
request, and the read completes successfully, having read as
much data as possible, the number of bytes read is returned.
3. If there is no data available, the read returns -1, with errno
set to EAGAIN.
Can you check whether this is the case?
Cheers.
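One quick way to check (and clear) O_NONBLOCK on the descriptor from the question's snippet, in case it was set elsewhere before the read:

    #include <fcntl.h>

    // Returns non-zero if O_NONBLOCK was set (and clears it, so that the
    // VMIN/VTIME semantics apply again). 'fd' is the serial descriptor
    // opened in the snippet above.
    static int clear_nonblock(int fd)
    {
        int flags = fcntl(fd, F_GETFL, 0);
        if (flags < 0 || !(flags & O_NONBLOCK))
            return 0;
        fcntl(fd, F_SETFL, flags & ~O_NONBLOCK);
        return 1;
    }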
Edit: The OP traced the problem back to linking with pthreads, which caused the read function to block. Upgrading to OpenBSD > 5.2 resolved the issue, thanks to the switch to the new rthreads implementation as the default threading library on OpenBSD. More info in guenther@'s EuroBSDCon 2012 slides.

OMX_ErrorUnsupportedSetting error event after assigning buffers to audio decoder component

I'm getting the OMX_ErrorUnsupportedSetting error event after providing the buffers to an audio decoder component on the Raspberry Pi. I have tried everything I could think of to change the parameters, but the callback still arrives. Is there any way in the OpenMAX standard to investigate which parameter is causing that event?
This is what I'm doing:
Created the component;
disabled all the ports;
set state to idle;
set port format to use OMX_AUDIO_CodingAAC;
set port definition to use OMX_AUDIO_CodingAAC, 4 buffers of 6144 bytes each;
set profile to these values (not sure if needed): profileType.nSampleRate = 48000; profileType.nFrameLength = 0; profileType.nChannels = 6; profileType.nBitRate = 288000; profileType.nAudioBandWidth = 0;
set OMX_PARAM_CODECCONFIGTYPE with bCodecConfigIsComplete to 1;
set OMX_IndexParamBrcmDecoderPassThrough to true.
After all the buffers are sent to the component, I suddenly get the OMX_ErrorUnsupportedSetting event and the port is not enabled. Any idea what I may be doing wrong, or how I can inspect which parameter is causing the error?
I've been told by the manufacturer that the reason this is happening is that no audio decoder except PCM is available at the moment.

Simple algorithm for reliable communications

I have worked on large systems in the past, like an ISO-stack session layer; something like that is too big for what I need, but I do have some understanding of the big picture. What I have now is a serial point-to-point communications link where some component is dropping data (often).
So I am going to have to write my own reliable-delivery system using it for transport. Can someone point me in the direction of basic algorithms, or even give a clue as to what they are called? I tried Google, but ended up with postgraduate theses on genetic algorithms and such. I need the basics, e.g. 10-20 lines of pure C.
XMODEM. It's old, it's bad, but it is widely supported both in hardware and in software, with libraries available for literally every language and market niche.
HDLC - High-Level Data Link Control. It's the protocol that has fathered many reliable protocols over the last three decades, including TCP/IP. You can't use it directly, but it is a template for developing your own protocol. The basic premise is:
every data byte (or packet) is numbered
both sides of communication maintain locally two numbers: last received and last sent
every packet contains a copy of the two numbers
every successful transmission is confirmed by sending back an empty (or not) packet with the updated numbers
if transmission is not confirmed within some timeout, send again.
For special handling (synchronization), add flags to the packet (often a single bit is sufficient to mark the packet as special). And do not forget the CRC. (A minimal stop-and-wait sketch of this send/ack/timeout loop follows below.)
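As an illustration of the numbered-packet / acknowledgement / timeout idea, here is a minimal stop-and-wait sketch. The link_write/link_read functions are placeholders for whatever raw serial I/O you have, and the additive 8-bit check value is purely for illustration; use a real CRC in practice.

    #include <stdint.h>
    #include <string.h>

    extern int link_write(const uint8_t *buf, int len);          // placeholder raw-link TX
    extern int link_read(uint8_t *buf, int len, int timeout_ms); // placeholder raw-link RX

    static uint8_t check8(const uint8_t *p, int n)   // toy check value, not a real CRC
    {
        uint8_t c = 0;
        while (n--)
            c += *p++;
        return c;
    }

    /* Frame layout: [seq][len][payload ...][check] */
    int reliable_send(uint8_t seq, const uint8_t *data, uint8_t len)
    {
        uint8_t frame[3 + 255];
        frame[0] = seq;
        frame[1] = len;
        memcpy(&frame[2], data, len);
        frame[2 + len] = check8(frame, 2 + len);

        for (int tries = 0; tries < 5; ++tries) {
            link_write(frame, 3 + len);

            uint8_t ack[2];                           // [acked seq][check]
            if (link_read(ack, 2, 500) == 2 &&
                ack[1] == check8(ack, 1) &&
                ack[0] == seq)
                return 0;                             // confirmed, done
            /* timeout, corrupted ack, or ack for an old packet: retransmit */
        }
        return -1;                                    // give up after 5 tries
    }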
Neither of the protocols has any kind of session support. But you can introduce one by simply adding another layer - a simple state machine and a timer:
session starts with a special packet
there should be at least one (potentially empty) packet within a specified timeout
if this side hasn't sent a packet within timeout/2, send an empty packet
if no packet has been seen from the other side within the timeout, the session has been terminated
one can use another special packet for graceful session termination
That is as simple as session control can get.
There are (IMO) two aspects to this question.
Firstly, if data is being dropped, then I'd look at resolving the hardware issues first, as otherwise you'll have GIGO (garbage in, garbage out).
As for the comms protocol, your post suggests a fairly trivial system. Are you wanting to validate data (parity, checksum?) or are you trying to include error correction?
If validation is all that is required, I've got reliable systems running using RS232 and CRC-8 checksums, in which case this StackOverflow page probably helps.
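For reference, one common CRC-8 variant (polynomial 0x07, zero initial value); other CRC-8 flavours differ, so the exact parameters must match whatever the other end expects:

    #include <stddef.h>
    #include <stdint.h>

    // Bitwise CRC-8 with polynomial x^8 + x^2 + x + 1 (0x07), initial value 0.
    uint8_t crc8(const uint8_t *data, size_t len)
    {
        uint8_t crc = 0x00;
        while (len--) {
            crc ^= *data++;
            for (int i = 0; i < 8; ++i)
                crc = (crc & 0x80) ? (uint8_t)((crc << 1) ^ 0x07)
                                   : (uint8_t)(crc << 1);
        }
        return crc;
    }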
If some component is dropping data on a serial point-to-point link, there must be some bugs in your code.
Firstly, you should confirm that there is no problem with the physical layer's communication.
Secondly, you need some knowledge of data-communication theory, such as ARQ (automatic repeat request).
Further thoughts, after considering your response to the first two answers: this does indicate hardware problems, and no amount of clever code is going to fix that.
I suggest you get an oscilloscope onto the link, which should help to determine where the fault lies. In particular, look at the baud rates of the two sides (Tx, Rx) to ensure that they are within spec; auto-baud is often a problem.
Also look at whether the drop-outs are regular, or can be synced with any other activity.
On the sending side:
///////////////////////////////////////// XBee logging
void dataLog(int idx, int t, float f)
{
    ubyte stx[2] = { 0x10, 0x02 };
    ubyte etx[2] = { 0x10, 0x03 };

    nxtWriteRawHS(stx, 2, 1);
    wait1Msec(1);
    nxtWriteRawHS(idx, 2, 1);
    wait1Msec(1);
    nxtWriteRawHS(t, 2, 1);
    wait1Msec(1);
    nxtWriteRawHS(f, 4, 1);
    wait1Msec(1);
    nxtWriteRawHS(etx, 2, 1);
    wait1Msec(1);
}
On the receiving side:
void XBeeMonitorTask()
{
    int[] lastTick = Enumerable.Repeat<int>(int.MaxValue, 10).ToArray();
    int[] wrapCounter = new int[10];

    while (!XBeeMonitorEnd)
    {
        if (XBee != null && XBee.BytesToRead >= expectedMessageSize)
        {
            // read a data element, parse, add it to collection, see above for message format
            if (XBee.BaseStream.Read(XBeeIncoming, 0, expectedMessageSize) != expectedMessageSize)
                throw new InvalidProgramException();

            //System.Diagnostics.Trace.WriteLine(BitConverter.ToString(XBeeIncoming, 0, expectedMessageSize));

            if ((XBeeIncoming[0] != 0x10 || XBeeIncoming[1] != 0x02) ||   // dle stx
                (XBeeIncoming[10] != 0x10 || XBeeIncoming[11] != 0x03))   // dle etx
            {
                System.Diagnostics.Trace.WriteLine("recover sync");
                while (true)
                {
                    int b = XBee.BaseStream.ReadByte();
                    if (b == 0x10)
                    {
                        int c = XBee.BaseStream.ReadByte();
                        if (c == 0x03)
                            break; // realigned (maybe)
                    }
                }
                continue; // resume at loop start
            }

            UInt16 idx = BitConverter.ToUInt16(XBeeIncoming, 2);
            UInt16 tick = BitConverter.ToUInt16(XBeeIncoming, 4);
            Single val = BitConverter.ToSingle(XBeeIncoming, 6);

            if (tick < lastTick[idx])
                wrapCounter[idx]++;
            lastTick[idx] = tick;

            Dispatcher.BeginInvoke(DispatcherPriority.ApplicationIdle,
                new Action(() => DataAdd(idx, tick * wrapCounter[idx], val)));
        }

        Thread.Sleep(2); // surely we can keep up with the NXT
    }
}
