I started learning on Contiki OS. I am trying to analyze a few parameters, such as energy efficiency, latency, and delivery ratio, under different deployment scenarios. First I need to change some parameters:
Channel check rate to 16/s (I use rpl-sink)
RPL mode of operation to NO_DOWNWARD_ROUTE
Send interval to 5s
UDP application packet size to 100 Bytes
Could you please tell me how to change these parameters in Contiki 2.7?
My answers for reference:
Channel check rate to 16/s (I use rpl-sink)
#undef NETSTACK_RDC_CHANNEL_CHECK_RATE
#define NETSTACK_RDC_CHANNEL_CHECK_RATE 16
RPL mode of operation to NO_DOWNWARD_ROUTE
It's called non-storing mode. To enable it:
#define RPL_CONF_WITH_NON_STORING 1
Send interval to 5s
It depends on the application; there is no standard name for this parameter. If we're talking about examples/ipv6/rpl-collect/, you should #define PERIOD 5 in project-conf.h.
UDP application packet size to 100 Bytes
The payload is constructed in udp-sender.c:
uip_udp_packet_sendto(client_conn, &msg, sizeof(msg),
                      &server_ipaddr, UIP_HTONS(UDP_SERVER_PORT));
So in order to change the payload size, you need to change the size of the locally defined anonymous struct variable called msg. You can add a dummy field to pad it, for example:
struct {
  uint8_t seqno;
  uint8_t for_alignment;
  struct collect_view_data_msg msg;
  char dummy[100 - 2 - sizeof(struct collect_view_data_msg)]; /* pad to 100 bytes total */
} msg;
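Putting it all together, a project-conf.h might look like this (a sketch only; the macro names come from the answers above, and configuration names do vary between Contiki versions, so check them against your 2.7 tree):
#ifndef PROJECT_CONF_H_
#define PROJECT_CONF_H_

/* 1. channel check rate: 16 checks per second */
#undef NETSTACK_RDC_CHANNEL_CHECK_RATE
#define NETSTACK_RDC_CHANNEL_CHECK_RATE 16

/* 2. RPL mode of operation */
#define RPL_CONF_WITH_NON_STORING 1

/* 3. send interval for the rpl-collect example, in seconds */
#undef PERIOD
#define PERIOD 5

#endif /* PROJECT_CONF_H_ */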
Currently I am implementing Bluetooth Low Energy (BLE) code for an STM32L476 + X-NUCLEO-IDB04A1, based on the "Sensor Demo" example.
The "Sensor Demo" example only contains code to send data to the smartphone; there is no code to receive data.
I think I can use the function below to read data:
tBleStatus aci_gatt_read_charac_val(uint16_t conn_handle, uint16_t attr_handle)
And the data can be read from HCI_Event_CB(hciReadPacket->dataBuff).
However, I don't know how to get the parameter uint16_t attr_handle for this function.
Could you explain this to me?
That would be the value of the handle for this connection.
When the IDB04A1 successfully connects to the smartphone, it sends an HCI_LE_META_EVENT with information about this connection. The Connection_Handle can be found in that event; specifically, it is a 16-bit value:
(byte at offset 6 is the high byte, byte at offset 5 the low byte)
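A minimal sketch of pulling that handle out of the raw event buffer (untested; the byte offsets follow the description above and assume the buffer starts with the HCI packet-type byte, so verify them against your BlueNRG stack's packet definitions):
#include <stdint.h>

static uint16_t conn_handle; /* store it for later GATT calls */

/* Hypothetical helper, called with the raw buffer seen in HCI_Event_CB.
 * Assumed layout: buf[0] = HCI packet type, buf[1] = event code
 * (0x3E = LE_Meta_Event), buf[3] = subevent (0x01 = LE Connection
 * Complete), buf[5..6] = Connection_Handle, little-endian. */
void handle_hci_event(const uint8_t *buf)
{
    if (buf[1] == 0x3E && buf[3] == 0x01) {
        conn_handle = ((uint16_t)buf[6] << 8) | buf[5];
        /* conn_handle can now be passed as the first argument of
         * aci_gatt_read_charac_val(conn_handle, attr_handle) */
    }
}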
I would like to use an Arduino as an I2C slave. But I require that the Arduino act as multiple devices by registering itself with multiple I2C addresses.
This is probably not something one would normally do, but here is my reason for doing it:
I want to use an Arduino to act as the telemetry sensors for Spektrum telemetry. The telemetry receiver has a few I2C plugs which connect to multiple sensors (current 0x02, voltage 0x03, airspeed 0x11, etc.), each of which has a fixed I2C address that the telemetry receiver expects.
I would like to use one Arduino to act as all of these devices by registering itself with all of the above addresses and responding appropriately with the readings.
I could use one Arduino per sensor, but that seems silly as I can perform all these readings with one Arduino Pro Mini.
I know you can register the Arduino using
Wire.begin(0x02);
But I require something similar to this (pseudocode):
Wire.begin(0x02, 0x03, 0x11);
And when a request is received, I need to know with what address the Arduino was queried.
For example (pseudocode):
void receiveEvent(byte address, int bytesReceived) {
  if (address == 0x02) {
    // Current reading
  } else if (address == 0x03) {
    // Voltage reading
  } else if (address == 0x11) {
    // Airspeed reading
  }
}
Any advice would be appreciated.
It is not possible to make the Arduino listen to multiple slave addresses using the Wire library, since Wire.begin() only accepts a single slave address.
Even the Atmel ATmega microcontroller on which most Arduinos are based only allows its hardware 2-wire serial interface (TWI) to be set to a single 7-bit address via its 2-wire address register TWAR. However, it is possible to work around this limitation by masking one or more address bits using the TWI address mask register TWAMR as documented (somewhat briefly) in e.g. this ATmega datasheet section 22.9.6:
The TWAMR can be loaded with a 7-bit Salve (sic!) Address mask. Each of the bits in TWAMR can mask (disable) the corresponding address bits in the TWI address Register (TWAR). If the mask bit is set to one then the address match logic ignores the compare between the incoming address bit and the corresponding bit in TWAR.
So we would first have to set up the mask bits based on all the I2C addresses we want to respond to, by OR'ing them together and shifting left to match the TWAMR register layout (TWAMR holds the mask in bits 7:1; bit 0 is unused):
TWAMR = (sensor1_addr | sensor2_addr | sensor3_addr) << 1;
The main problem from here on will be to find out which particular I2C address was queried (we only know it was one that matches the address mask).
If I interpret section 22.5.3 correctly, stating
The TWDR contains the address or data bytes to be transmitted, or the address or data bytes received.
we should be able to retrieve the unmasked I2C address from the TWDR register.
ATmega TWI operation is interrupt-based, more specifically, it utilizes a single interrupt vector for a plethora of different TWI events indicated by status codes in the TWSR status register.
In the TWI interrupt service routine, we'll have to:
1. make sure we entered the ISR because we were queried; this can be done by checking TWSR for status code 0xA8 (own SLA+R has been received), and
2. decide which sensor data to send back to the master, based on which I2C address was actually queried, by checking the last byte seen on the bus in TWDR.
This part of the ISR could look something like this (untested):
if ((TWSR & 0xF8) == 0xA8) { // own SLA+R received (mask out the prescaler bits TWPS1:0)
  byte i2c_addr = TWDR >> 1; // retrieve the queried address from the last byte on the bus
  switch (i2c_addr) {
    case sensor1_addr:
      // send sensor 1 reading
      break;
    case sensor2_addr:
      // send sensor 2 reading
      break;
    case sensor3_addr:
      // send sensor 3 reading
      break;
    default:
      // I2C address does not match any of our sensors'; ignore.
      break;
  }
}
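For completeness, here is an (equally untested) sketch of the TWI slave setup that would make this ISR fire; register and bit names are from the ATmega datasheet, and the sensorN_addr constants are the hypothetical ones used above:
#include <avr/io.h>
#include <avr/interrupt.h>

#define sensor1_addr 0x02  // hypothetical sensor addresses from the question
#define sensor2_addr 0x03
#define sensor3_addr 0x11

void twi_slave_init(void) {
  TWAR  = sensor1_addr << 1;                                  // base slave address
  TWAMR = (sensor1_addr | sensor2_addr | sensor3_addr) << 1;  // address mask, as above
  TWCR  = _BV(TWEN) | _BV(TWEA) | _BV(TWIE);                  // enable TWI, ACK, interrupt
  sei();                                                      // enable global interrupts
}

ISR(TWI_vect) {
  // ... status handling as sketched above ...
  TWCR = _BV(TWINT) | _BV(TWEA) | _BV(TWEN) | _BV(TWIE);      // clear TWINT, keep TWI running
}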
Thanks for asking this interesting question!
I really do like vega8's answer, but I'd also like to mention that if your I2C master isn't going to clock things incredibly fast, using a software-based implementation of I2C would also be feasible and give you the freedom you want.
You might want to consider that approach if rough calculation shows that the time spent in the TWI ISR is too high and interrupts might start to overlap.
#include <Wire.h>

byte tmpSpektrumDataAlt[16]; // telemetry frame buffers, filled elsewhere
byte tmpSpektrumDataSpd[16];

void setup() {
  // 0x11 | 0x12 == 0x13; combined with the TWAMR mask below, this makes the
  // TWI hardware answer on both 0x11 (speed) and 0x12 (altitude), the
  // addresses used by the Spektrum DX for Alt and Speed
  Wire.begin(0x11 | 0x12);
  Wire.onRequest(requestEvent);   // register callback function
  TWAMR = (0x11 | 0x12) << 1;     // set the address filter for the given addresses
}

void requestEvent() {
  int adr = TWDR >> 1;            // shift right to extract the 7-bit I2C address
  if (adr == 0x12)                // check for an altitude request at address 0x12
    Wire.write(tmpSpektrumDataAlt, 16); // send buffer
  if (adr == 0x11)                // check for a speed request at address 0x11
    Wire.write(tmpSpektrumDataSpd, 16); // send buffer
}
This works with a Spektrum DX8 with the telemetry module.
The Spektrum interface was made public on Spektrum's home page, under Technical documents.
Since there could be other devices on the I2C bus, TWAMR should be set with as few mask bits as possible. So I think the better way to calculate the mask is:
AddrOr  = Addr1 | Addr2 | Addr3 | Addr4 ...
AddrAnd = Addr1 & Addr2 & Addr3 & Addr4 ...
TWAMR   = (AddrOr ^ AddrAnd) << 1
while TWAR can be set to either AddrOr or AddrAnd.
In this way we limit the possibility of address conflicts to the minimum.
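As a worked example with the sensor addresses from the question (0x02, 0x03 and 0x11; my own arithmetic, untested):
#include <avr/io.h>

#define ADDR_CURRENT  0x02
#define ADDR_VOLTAGE  0x03
#define ADDR_AIRSPEED 0x11

void twi_addr_init(void) {
  uint8_t addrOr  = ADDR_CURRENT | ADDR_VOLTAGE | ADDR_AIRSPEED;  // 0x13
  uint8_t addrAnd = ADDR_CURRENT & ADDR_VOLTAGE & ADDR_AIRSPEED;  // 0x00
  TWAR  = addrAnd << 1;             // base address (AddrOr would also work)
  TWAMR = (addrOr ^ addrAnd) << 1;  // mask only the bits that differ (0x13)
  // The ISR must still check TWDR: this mask also admits a few unused
  // addresses (e.g. 0x00, 0x10, 0x12) whose unmasked bits happen to match.
}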
I am having hell with this and I know it is probably really simple. I am trying to read a text message from my Seeed GPRS shield. I have the shield set up as a software serial, and I am displaying the information received from the GPRS on the serial monitor. I am currently sending all AT commands over serial while I work on my code. To display the data from the software serial on the serial monitor, I am using the following code:
while (GPRS.available() != 0) {
  Serial.write(GPRS.read());
}
GPRS is my software serial, obviously. The problem is, the text is long and I only get a few characters from it. Something like this:
+CMGR: "REC READ","1511","","13/12/09,14:34:54-24" Welcome to TM eos8
This text is a "Welcome to T-Mobile" text that is much longer. The last few characters shown are scrambled. I have done some research and have seen that I can mod the serial buffer size to 256 instead of the default 64. I want to avoid this because I am sure there is an easier way. Any ideas?
Have you tried reading into a character array, one byte at a time? See if this helps:
if (GPRS.available()) {              // GPRS talking..
  while (GPRS.available()) {         // as long as it is talking..
    buffer[count++] = GPRS.read();   // read char into array
    if (count == 64) break;          // enough said!
  }
  Serial.write(buffer, count);       // display in terminal
  clearBufferArray();
  count = 0;
}
You need to declare the variables buffer and count appropriately and define the function clearBufferArray(); a sketch follows below.
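Something like this (the names and the 64-byte size are assumptions matching the snippet above):
unsigned char buffer[64]; // holds one chunk of GPRS output
int count = 0;            // number of bytes currently in the buffer

void clearBufferArray() {
  for (int i = 0; i < count; i++) {
    buffer[i] = 0; // overwrite stale bytes from the previous chunk
  }
}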
Let me know if this helps.
This looks like it is simply the result of the lack of flow control in all Arduino serial connections. If you cannot pace your GPRS input byte stream to a rate that guarantees the input FIFO can't overflow, then your Serial.write() will block when the output FIFO fills. At that point you will be dropping new GPRS input bytes on the floor until Serial output frees up more space.
Since the captured output is apparently clean up to about 64 bytes, this suggests
a) a 64 byte buffer,
b) a GPRS data rate much higher than the Serial one, and
c) that the garbage data is actually the occasional valid byte from later in the sequence.
You might confirm this by testing the return code from Serial.write. If you get back zero, that byte is getting lost.
If you were using 9600 for Serial and 57600 for GPRS, I would expect somewhat more than 64 bytes to come through before the output gets mangled, but if the GPRS rate is more than 64x the Serial rate, the entire output FIFO could fill up within a single output byte transmission time.
Capturing to an intermediate buffer should resolve your issue, as long as it is large enough for the whole message. Similarly, extending the size of either the source (in conjunction with testing the Serial.write) or destination (without any additional code) FIFOs to the maximum datagram size should work.
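If you do decide to enlarge the SoftwareSerial receive FIFO instead, it is sized by a compile-time constant in the library header (the constant name below is from stock Arduino cores; check your own SoftwareSerial.h):
// In SoftwareSerial.h: enlarge the RX buffer so one whole +CMGR
// response fits (the stock definition is 64 bytes)
#define _SS_MAX_RX_BUFF 256 // RX buffer size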
I've had the same problem trying to read messages and getting only 64 characters. I overcame it by adding a delay(10) in the loop calling the function that does the read from the GPRS. It seems to be enough to overcome the race scenario. (Using an Arduino Mega.)
char buffer[160]; // SMS payloads are at most 160 characters
int count = 0;

void loop() {
  ReadmyGPRS();
  delay(10); // a race condition exists to get the data
}

void ReadmyGPRS() {
  if (Serial1.available()) {            // if data is coming from the GPRS serial port
    count = 0;                          // reset counter
    while (Serial1.available()) {       // read data into the char array
      buffer[count++] = Serial1.read();
      if (count == 160) break;
    }
    Serial.write(buffer, count);
  }
}
I'm having a bit of a problem with the communication to an accelerometer sensor. The sensor puts out about 8000 readings per second, continuously. It is plugged into a USB port with an adapter and shows up as COM4. My problem is that I can't seem to pick out the sensor reading packets from the byte stream. The packets are five bytes long and have the following format:
        High nibble                     Low nibble
Byte 1  checksum (id for packet start)  X high
Byte 2  X mid                           X low
Byte 3  Y high                          Y mid
Byte 4  Y low                           Z high
Byte 5  Z mid                           Z low
X, Y and Z are the acceleration components.
The documentation for the sensor states that the high nibble of the first byte is the checksum (calculated as Xhigh+Xlow+Yhigh+Ylow+Zhigh+Zlow), but also the identification of the packet start. I'm pretty new to programming against external devices and can't really grasp how the checksum can be used as an identifier for the start of the packet (wouldn't the checksum change all the time?). Is this a common way of identifying the start of a packet? Does anyone have any idea how to solve this problem?
Any help would be greatly appreciated.
... can't really grasp how the checksum can be used as an identifier for the start of the package (wouldn't the checksum change all the time?).
Yes, the checksum would change since it is derived from the data.
But even a fixed-value start-of-packet nibble would (by itself) not be sufficient to (initially) identify (or verify) data packets. Since this is binary data (rather than text), the data can take on the same value as any fixed start-of-packet marker. A trivial scan for this start nibble could easily misidentify a data nibble as the start nibble.
Is this a common way for identifying the start of a packet?
No, but given the high data rate, it seems to be a scheme to minimize the packet size.
Does anyone have any idea how to solve this problem?
You probably have to initially scan every sequence of bytes, five at a time (i.e. the length of a packet):
1. Calculate the checksum of this candidate "packet", and compare it to the first nibble.
2. A match indicates that you (may) have packet alignment.
3. A mismatch means that you should toss the first byte and test the next possible packet, starting with what was the second byte (i.e. shift the 4 remaining bytes and append a new 5th byte).
Once packet alignment has been achieved (or assumed), you need to keep verifying the checksum of every packet in order to confirm data integrity and maintain packet alignment. Any checksum error should force another hunt for correct packet alignment (starting at the 2nd byte of the current "packet"). A sketch of this hunt follows below.
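For instance (a hedged sketch in C; the nibble layout follows the table in the question, and the checksum is assumed to be the low nibble of Xhigh+Xlow+Yhigh+Ylow+Zhigh+Zlow, as quoted from the sensor documentation):
#include <stddef.h>
#include <stdint.h>

static int packet_checksum_ok(const uint8_t p[5])
{
    uint8_t x_hi = p[0] & 0x0F;        /* byte 1, low nibble  */
    uint8_t x_lo = p[1] & 0x0F;        /* byte 2, low nibble  */
    uint8_t y_hi = p[2] >> 4;          /* byte 3, high nibble */
    uint8_t y_lo = p[3] >> 4;          /* byte 4, high nibble */
    uint8_t z_hi = p[3] & 0x0F;        /* byte 4, low nibble  */
    uint8_t z_lo = p[4] & 0x0F;        /* byte 5, low nibble  */
    uint8_t sum  = (x_hi + x_lo + y_hi + y_lo + z_hi + z_lo) & 0x0F;

    return sum == (p[0] >> 4);         /* compare against checksum nibble in byte 1 */
}

/* Scan a buffer one byte at a time until a 5-byte window passes the
 * checksum test; returns the candidate packet offset (or len if none
 * was found). Subsequent packets must keep passing the test, otherwise
 * the hunt starts over one byte further on. */
static size_t find_alignment(const uint8_t *buf, size_t len)
{
    for (size_t i = 0; i + 5 <= len; i++) {
        if (packet_checksum_ok(buf + i))
            return i;
    }
    return len;
}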
What you need to do is get some free serial-port terminal written in C#, import it into your project, and first check all the data and packets you are getting, unless you have already done that. Then, just to read, you will need to do something like this:
using System;
using System.IO.Ports;
using System.Windows.Forms;

namespace SPE
{
    class SerialPortProgram
    {
        // Create the serial port with basic settings
        private SerialPort port = new SerialPort("COM4", 9600, Parity.None, 8, StopBits.One);

        [STAThread]
        static void Main(string[] args)
        {
            // Instantiate this class
            new SerialPortProgram();
        }

        private SerialPortProgram()
        {
            Console.WriteLine("Incoming Data:");
            // Attach a method to be called when there is data waiting in the port's buffer
            port.DataReceived += new SerialDataReceivedEventHandler(port_DataReceived);
            // Begin communications
            port.Open();
            // Enter an application loop to keep this thread alive
            Application.Run();
        }

        private void port_DataReceived(object sender, SerialDataReceivedEventArgs e)
        {
            // Show all the incoming data in the port's buffer
            Console.WriteLine(port.ReadExisting());
        }
    }
}
I have an FTDI USB serial device which I use via the termios serial API. I set up the port so that read() calls time out after half a second (using the VTIME parameter), and this works on Linux as well as on FreeBSD. On OpenBSD 5.1, however, the read() call simply blocks forever when no data is available (see below). I would expect read() to return 0 after 500 ms.
Can anyone think of a reason that the termios API would behave differently under OpenBSD, at least with respect to the timeout feature?
EDIT: The no-timeout problem is caused by linking against pthread. Regardless of whether I'm actually using any pthreads, mutexes, etc., simply linking against that library causes read() to block forever instead of timing out based on the VTIME setting. Again, this problem only manifests on OpenBSD; Linux and FreeBSD work as expected.
if ((sd = open(devPath, O_RDWR | O_NOCTTY)) >= 0)
{
    struct termios newtio;
    char input;

    memset(&newtio, 0, sizeof(newtio));

    // set options, including non-canonical mode
    newtio.c_cflag = (CREAD | CS8 | CLOCAL);
    newtio.c_lflag = 0;

    // when waiting for responses, wait until we haven't received
    // any characters for 0.5 seconds before timing out
    newtio.c_cc[VTIME] = 5;
    newtio.c_cc[VMIN] = 0;

    // set the input and output baud rates to 7812
    cfsetispeed(&newtio, 7812);
    cfsetospeed(&newtio, 7812);

    if ((tcflush(sd, TCIFLUSH) == 0) &&
        (tcsetattr(sd, TCSANOW, &newtio) == 0))
    {
        read(sd, &input, 1); // even though VTIME is set on the device,
                             // this read() will block forever when no
                             // character is available in the Rx buffer
    }
}
From the termios manpage:
Another dependency is whether the O_NONBLOCK flag is set by open() or
fcntl(). If the O_NONBLOCK flag is clear, then the read request is
blocked until data is available or a signal has been received. If the
O_NONBLOCK flag is set, then the read request is completed, without
blocking, in one of three ways:
1. If there is enough data available to satisfy the entire
request, and the read completes successfully the number of
bytes read is returned.
2. If there is not enough data available to satisfy the entire
request, and the read completes successfully, having read as
much data as possible, the number of bytes read is returned.
3. If there is no data available, the read returns -1, with errno
set to EAGAIN.
Can you check if this is the case?
Cheers.
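A minimal sketch of that check (assuming the sd descriptor opened in the question's code):
#include <errno.h>
#include <fcntl.h>
#include <unistd.h>

/* Put the port in non-blocking mode and observe whether read() now
 * returns -1/EAGAIN instead of blocking forever. */
static int check_nonblocking_read(int sd)
{
    char input;
    int flags = fcntl(sd, F_GETFL, 0);

    fcntl(sd, F_SETFL, flags | O_NONBLOCK);

    if (read(sd, &input, 1) < 0 && errno == EAGAIN)
        return 1; /* case 3 from the manpage: no data available */
    return 0;
}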
Edit: The OP traced the problem back to linking with pthreads, which caused the read() call to block. Upgrading to OpenBSD > 5.2 resolved the issue, thanks to the switch to the new rthreads implementation as the default threading library on OpenBSD. More info in guenther@'s EuroBSDCon 2012 slides.