Changing NREADINGS in TinyOS

When I change NREADINGS from 1 to 2 in the Oscilloscope header file, I end up getting 4 bytes of data from one sensor. My question is whether these 4 bytes are two 2-byte readings taken at different instants. If so, should I average the two readings before I display them?

The Oscilloscope application samples a sensor every fixed interval (defined as DEFAULT_INTERVAL in the header file), and as soon as it collects NREADINGS samples, it sends a packet containing these readings. Then, the readings counter is reset to zero.
So if you change NREADINGS to 2, a packet will be sent every two samples (and it will contain two readings). Since the size of a sample is 2 bytes (uint16_t), this results in 4 bytes of readings data per packet. What you do with that data depends on what you want to achieve. Oscilloscope comes with a Java application that displays data received by the BaseStation application on a graph (see README.txt).
I think that everything is explained in the source code:
/* Number of readings per message. If you increase this, you may have to
   increase the message_t size. */
NREADINGS = 10,
And the packet definition:
typedef nx_struct oscilloscope {
  nx_uint16_t version;  /* Version of the interval. */
  nx_uint16_t interval; /* Sampling period. */
  nx_uint16_t id;       /* Mote id of sending mote. */
  nx_uint16_t count;    /* The readings are samples count * NREADINGS onwards */
  nx_uint16_t readings[NREADINGS];
} oscilloscope_t;
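As for averaging: that is up to your application. If you do want one value per packet on the receive side, a minimal sketch could look like the following (average_readings is a hypothetical helper, not part of Oscilloscope):

#include <stdint.h>

/* Hypothetical receive-side helper: average the n samples carried in
   one packet's readings[] array (see the struct above). */
uint16_t average_readings(const uint16_t *readings, uint8_t n)
{
  uint32_t sum = 0;                 /* wide enough for n * 65535 */
  for (uint8_t i = 0; i < n; i++) {
    sum += readings[i];             /* each reading is one 2-byte sample */
  }
  return (uint16_t)(sum / n);       /* one averaged value per packet */
}

You would call it as average_readings(msg->readings, NREADINGS). Keep in mind that the two readings are taken one sampling interval apart; bundling only changes how samples are packed into packets, not when they are taken, so averaging trades time resolution for smoothing.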

Related

Where does TCP New Reno set the threshold value once a packet is dropped in ns-3?

TCP New Reno sets the threshold value to half of the current cwnd once a packet drop is identified. I need to find the method that does this.
tcp-l4-protocol.h uses TcpClassicRecovery as the recovery method. When entering the recovery phase, TcpClassicRecovery uses the following code segment to set the current cwnd:
void
TcpClassicRecovery::EnterRecovery (Ptr<TcpSocketState> tcb, uint32_t dupAckCount,
                                   uint32_t unAckDataCount, uint32_t lastSackedBytes)
{
  NS_LOG_FUNCTION (this << tcb << dupAckCount << unAckDataCount << lastSackedBytes);
  NS_UNUSED (unAckDataCount);
  NS_UNUSED (lastSackedBytes);
  tcb->m_cWnd = tcb->m_ssThresh;
  tcb->m_cWndInfl = tcb->m_ssThresh + (dupAckCount * tcb->m_segmentSize);
}
So I assume the cwnd is already updated before the EnterRecovery method is called; I need to find the place where the cwnd is updated.
I also modified TcpNewReno::GetSsThresh and analyzed the output, but it is not the method I need either, as it does not cut the cwnd in half.
NOTE: I'm using seventh.cc to inspect the cwnd, and it always drops the cwnd to 1072. What I need is for the cwnd to drop to half of its value once a packet is dropped. Maybe seventh.cc is not using the default tcp-l4-protocol.h. If so, how can I change that?
I found the answer: the problem was with seventh.cc, which does not use the default layer-4 TCP protocol.
To run the default layer-4 TCP protocol (TCP New Reno), I found an example, tcp-large-transfer.cc, located at ns-3.30/examples/tcp/tcp-large-transfer.cc.
I just wanted to add a quick note: the code that changes the cwnd is in the very snippet in your question. Specifically, it is this line:
tcb->m_cWnd = tcb->m_ssThresh;
Much of the state of a TCP socket is actually stored in the tcb, which is a Ptr<TcpSocketState>.
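For completeness, the halving itself happens when m_ssThresh is computed. In ns-3.30, TcpNewReno::GetSsThresh (in src/internet/model/tcp-congestion-ops.cc) is essentially the following; note that it halves the bytes in flight, not the current cwnd, which matters when you compare against a cwnd/2 expectation:

uint32_t
TcpNewReno::GetSsThresh (Ptr<const TcpSocketState> state,
                         uint32_t bytesInFlight)
{
  NS_LOG_FUNCTION (this << state << bytesInFlight);
  // On loss, ssthresh becomes half the bytes in flight,
  // floored at two segments.
  return std::max (2 * state->m_segmentSize, bytesInFlight / 2);
}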

How to change parameters in Contiki 2.7 simulation?

I have started learning Contiki OS. I am trying to analyze a few parameters, such as energy efficiency, latency, and delivery ratio, under different deployment scenarios. First I should change some parameters:
Channel check rate to 16/s (I use rpl-sink)
RPL mode of operation to NO_DOWNWARD_ROUTE
Send interval to 5s
UDP application packet size to 100 Bytes
Could you please tell me how to change these parameters in Contiki 2.7?
My answers, for reference:
Channel check rate to 16/s (I use rpl-sink)
#undef NETSTACK_RDC_CHANNEL_CHECK_RATE
#define NETSTACK_RDC_CHANNEL_CHECK_RATE 16
RPL mode of operation to NO_DOWNWARD_ROUTE
It's called non-storing mode. To enable it:
#define RPL_CONF_WITH_NON_STORING 1
Send interval to 5s
Depends on the application; there is no standard name for this parameter. If we're talking about ipv6/rpl-collect/, you should #define PERIOD 5 in project-conf.h.
UDP application packet size to 100 Bytes
The payload is constructed in udp-sender.c:
uip_udp_packet_sendto(client_conn, &msg, sizeof(msg),
                      &server_ipaddr, UIP_HTONS(UDP_SERVER_PORT));
So in order to change the payload size, you need to change the size of the locally-defined anonymous struct variable called msg. You can add some dummy fields to it, for example:
struct {
  uint8_t seqno;
  uint8_t for_alignment;
  struct collect_view_data_msg msg;
  char dummy[100 - 2 - sizeof(struct collect_view_data_msg)];
} msg;
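Putting these together, a minimal project-conf.h sketch could look like the following (macro names exactly as in the answers above; verify them against your Contiki 2.7 tree):

#ifndef PROJECT_CONF_H_
#define PROJECT_CONF_H_

/* ContikiMAC channel check rate: 16 checks per second */
#undef NETSTACK_RDC_CHANNEL_CHECK_RATE
#define NETSTACK_RDC_CHANNEL_CHECK_RATE 16

/* RPL non-storing mode (no downward routes) */
#define RPL_CONF_WITH_NON_STORING 1

/* Send interval in seconds (ipv6/rpl-collect) */
#define PERIOD 5

#endif /* PROJECT_CONF_H_ */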

Attaining 100% packet transmissions at high frequency

Using the Oscilloscope application, I am trying to sample at a rate of 10 ms using TinyOS with MICAz motes. Sampling at 10 ms should give me 100 packets/second, but only 50 packets/second are successfully received and displayed on the terminal window. To rectify this I went into the directory /tos/sensorboards/mts300 and opened the AccelP.nc file. The relevant part of the code is shown below:
async command uint8_t ConfigY.getRefVoltage()
{
  return ATM128_ADC_VREF_OFF;
}

async command uint8_t ConfigY.getPrescaler()
{
  return ATM128_ADC_PRESCALE_64;
}

command error_t SplitControl.start()
{
  call AccelPin.makeOutput();
  call AccelPin.set();
  call Timer.startOneShot(14); // originally 17 ms
  return SUCCESS;
}
I changed the timer value in the above code from the original 17 ms to 14 ms. This gave me 100% packet efficiency, i.e. 100 packets/second at a sampling rate of 10 ms. But after doing this, I noticed disturbance in the signal even when the accelerometer was completely still. Is there a way I can eliminate this disturbance while still getting 100% packet transmissions, and am I doing the right thing to get 100% transmission success? Changing the prescaler's return value does not seem to have much effect at all.

Reading a long text from GPRS Shield with Arduino

I am having hell with this and I know it is probably really simple. I am trying to read a text message from my Seeed GPRS shield. I have the shield set up as a software serial, and I am displaying the information received from the GPRS on the serial monitor. I am currently sending all AT commands over serial while I work on my code. To display the data from the software serial on the serial monitor, I am using the following code:
while (GPRS.available() != 0) {
  Serial.write(GPRS.read());
}
GPRS is my software serial, obviously. The problem is, the text is long and I only get a few characters from it. Something like this:
+CMGR: "REC READ","1511","","13/12/09,14:34:54-24" Welcome to TM eos8
This text is a "Welcome to T-Mobile" text that is much longer, and the last few characters shown are scrambled. I have done some research and have seen that I can mod the serial buffer size to 256 instead of the default 64. I want to avoid this because I am sure there is an easier way. Any ideas?
Have you tried reading into a character array, one byte at a time? See if this helps:
if (GPRS.available()) {            // GPRS talking ..
  while (GPRS.available()) {       // As long as it is talking ..
    buffer[count++] = GPRS.read(); // read char into array
    if (count == 64) break;        // Enough said!
  }
  Serial.write(buffer, count);     // Display in Terminal
  clearBufferArray();
  count = 0;
}
You need to declare the variables buffer and count appropriately and define the function clearBufferArray().
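For reference, one possible set of declarations to go with that snippet (sizes and names are just an example):

unsigned char buffer[64];       // holds one chunk of GPRS output
int count = 0;                  // number of bytes currently in buffer

void clearBufferArray() {
  for (int i = 0; i < count; i++) {
    buffer[i] = 0;              // zero out the bytes we used
  }
}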
Let me know if this helps.
Looks like this is simply the result of the lack of flow control in all Arduino serial connections. If you cannot pace your GPRS input byte sequence to a rate that guarantees the input FIFO can't overflow, then your Serial.write() will block when the output FIFO fills. At that point you will be dropping new GPRS input bytes on the floor until the Serial output frees up more space.
Since the captured output is apparently clean up to about 64 bytes, this suggests:
a) a 64-byte buffer,
b) a GPRS data rate much higher than the Serial one, and
c) that the garbage data is actually the occasional valid byte from later in the sequence.
You might confirm this by testing the return code from Serial.write. If you get back zero, that byte is getting lost.
If you were using 9600 for Serial and 57600 for GPRS, I would expect somewhat more than 64 bytes to come through before the output gets mangled, but if the GPRS rate is more than 64x the Serial rate, the entire output FIFO could fill up within a single output byte transmission time.
Capturing to an intermediate buffer should resolve your issue, as long as it is large enough for the whole message. Similarly, extending the size of either the source (in conjunction with testing the Serial.write) or destination (without any additional code) FIFOs to the maximum datagram size should work.
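A rough way to run the return-code test suggested above (lostBytes is a hypothetical counter; exact blocking behavior depends on your core version):

size_t written = Serial.write(GPRS.read());
if (written == 0) {
  lostBytes++;                  // hypothetical counter: this byte was dropped
}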
I've had the same problem trying to read messages and only getting 64 characters. I overcame it by adding a delay(10) in the loop calling the function that does the read from the GPRS; it seems to be enough to overcome the race scenario. (Using an Arduino Mega.)
void loop() {
  ReadmyGPRS();
  delay(10); // A race condition exists to get the data.
}

void ReadmyGPRS() {
  if (Serial1.available()) {            // if data is coming from the GPRS serial port
    count = 0;                          // reset counter
    while (Serial1.available()) {       // reading data into char array
      buffer[count++] = Serial1.read(); // writing data into array
      if (count == 160) break;
    }
    Serial.write(buffer, count);
  }
}

Identification of packets in a byte stream

I'm having a bit of a problem with the communication to an accelerometer sensor. The sensor puts out about 8000 readings/second continuously. The sensor is plugged into a USB port with an adapter and shows up as COM4. My problem is that I can't seem to pick out the sensor reading packets from the byte stream. The packets have a size of five bytes and the following format:
         High nibble                     Low nibble
Byte 1   checksum, id for packet start   X high
Byte 2   X mid                           X low
Byte 3   Y high                          Y mid
Byte 4   Y low                           Z high
Byte 5   Z mid                           Z low
X, Y, and Z are the acceleration axes.
In the documentation for the sensor it states that the high nibble of the first byte is the checksum (calculated as Xhigh+Xlow+Yhigh+Ylow+Zhigh+Zlow) but also the identification of the packet start. I'm pretty new to programming against external devices and can't really grasp how the checksum can be used as an identifier for the start of the packet (wouldn't the checksum change all the time?). Is this a common way of identifying the start of a packet? Does anyone have any idea how to solve this problem?
Any help would be greatly appreciated.
... can't really grasp how the checksum can be used as an identifier for the start of the packet (wouldn't the checksum change all the time?).
Yes, the checksum would change since it is derived from the data.
But even a fixed-value start-of-packet nibble would (by itself) not be sufficient to (initially) identify (or verify) data packets. Since this is binary data (rather than text), the data can take on the same value as any fixed-value start-of-packet. If you had a trivial scan for this start-nibble, that algorithm could easily misidentify a data nibble as the start-nibble.
Is this a common way for identifying the start of a packet?
No, but given the high data rate, it seems to be a scheme to minimize the packet size.
Does anyone have any idea how to solve this problem?
You probably have to initially scan every sequence of bytes five at a time (i.e. the length of a packet).
Calculate the checksum of this "packet", and compare it to the first nibble.
A match indicates that you (may) have packet alignment.
A mismatch means that you should toss the first byte, and test the next possible packet that would start with what was the second byte (i.e. shift the 4 remaining bytes and append a new 5th byte).
Once packet alignment has been achieved (or assumed), you need to continually verify the checksum of every packet in order to confirm data integrity and ensure packet data alignment. Any checksum error should force another hunt for correct packet data alignment (starting at the 2nd byte of the current "packet").
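A sketch of that hunt in C-style code (assuming the checksum is the low four bits of the nibble sum quoted from the docs; adjust to your sensor's manual):

#include <stdint.h>
#include <stddef.h>

/* Checksum of one candidate 5-byte packet, per the quoted formula
   (Xhigh + Xlow + Yhigh + Ylow + Zhigh + Zlow), kept to one nibble.
   The mid nibbles are not part of the documented sum. */
static uint8_t packet_checksum(const uint8_t p[5])
{
  uint8_t x_high = p[0] & 0x0F;   /* byte 1, low nibble  */
  uint8_t x_low  = p[1] & 0x0F;   /* byte 2, low nibble  */
  uint8_t y_high = p[2] >> 4;     /* byte 3, high nibble */
  uint8_t y_low  = p[3] >> 4;     /* byte 4, high nibble */
  uint8_t z_high = p[3] & 0x0F;   /* byte 4, low nibble  */
  uint8_t z_low  = p[4] & 0x0F;   /* byte 5, low nibble  */
  return (uint8_t)((x_high + x_low + y_high + y_low + z_high + z_low) & 0x0F);
}

/* Scan a raw byte stream for the first offset where the high nibble of
   byte 1 matches the checksum of the bytes that follow. Returns -1 if
   no candidate alignment is found. A single match can still be a false
   positive, so keep verifying the checksum of every subsequent packet. */
static int find_alignment(const uint8_t *buf, size_t len)
{
  for (size_t off = 0; off + 5 <= len; off++) {
    if ((uint8_t)(buf[off] >> 4) == packet_checksum(buf + off)) {
      return (int)off;
    }
  }
  return -1;
}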
What you need to do is get some free serial-port terminal written in C#, import it into your project, and first check all the data and packets you are getting, unless you have already done that. Then, just to read, you will need to do something like this:
using System;
using System.IO.Ports;
using System.Windows.Forms;

namespace SPE
{
    class SerialPortProgram
    {
        // Create the serial port with basic settings
        private SerialPort port = new SerialPort("COM4", 9600, Parity.None, 8, StopBits.One);

        [STAThread]
        static void Main(string[] args)
        {
            // Instantiate this class
            new SerialPortProgram();
        }

        private SerialPortProgram()
        {
            Console.WriteLine("Incoming Data:");
            // Attach a method to be called when there is data waiting in the port's buffer
            port.DataReceived += new SerialDataReceivedEventHandler(port_DataReceived);
            // Begin communications
            port.Open();
            // Enter an application loop to keep this thread alive
            Application.Run();
        }

        private void port_DataReceived(object sender, SerialDataReceivedEventArgs e)
        {
            // Show all the incoming data in the port's buffer
            Console.WriteLine(port.ReadExisting());
        }
    }
}
