How to increase message size on a network - networking

I would like to know how to increase the beacon size using the OMNeT++ simulator. I saw that it is possible to perform this configuration through the cPacket constructor:
cPacket::cPacket(const char *name = NULL, short kind = 0, int64 bitLength = 0)
My goal is to analyze how beacon size influences message-delivery latency between two nodes.

Use setBitLength() or setByteLength(). If you want to increase the current length of an existing cPacket instance, use addBitLength() or addByteLength().

Related

How to change parameters in Contiki 2.7 simulation?

I have started learning Contiki OS. I am trying to analyze a few parameters such as energy efficiency, latency, and delivery ratio with different deployment scenarios. First I need to change some parameters:
Channel check rate to 16/s (I use rpl-sink)
RPL mode of operation to NO_DOWNWARD_ROUTE
Send interval to 5s
UDP application packet size to 100 Bytes
Could you please tell me how to change these parameters in Contiki 2.7?
My answers for reference:
Channel check rate to 16/s (I use rpl-sink)
#undef NETSTACK_RDC_CHANNEL_CHECK_RATE
#define NETSTACK_RDC_CHANNEL_CHECK_RATE 16
RPL mode of operation to NO_DOWNWARD_ROUTE
It's called non-storing mode. To enable it:
#define RPL_CONF_WITH_NON_STORING 1
Send interval to 5s
Depends on the application; there is no standard name for this parameter. If we're talking about ipv6/rpl-collect/, you should #define PERIOD 5 in project-conf.h.
UDP application packet size to 100 Bytes
The payload is constructed in udp-sender.c:
uip_udp_packet_sendto(client_conn, &msg, sizeof(msg),
                      &server_ipaddr, UIP_HTONS(UDP_SERVER_PORT));
So in order to change the payload size, you need to change the size of the locally-defined anonymous struct variable called msg. You can add some dummy fields to it, for example.
struct {
    uint8_t seqno;
    uint8_t for_alignment;
    struct collect_view_data_msg msg;
    char dummy[100 - 2 - sizeof(struct collect_view_data_msg)];
} msg;

Sending 20-byte characteristic values with CurieBLE

The documentation for Arduino/Genuino 101's CurieBLE library states the following, in the section "Service Design Patterns":
A characteristic value can be up to 20 bytes long. This is a key
constraint in designing services. [...] You could also combine readings into a single characteristic, when a given sensor or actuator has multiple values associated with it.
[e.g. Accelerometer X, Y, Z => 200,133,150]
This is more efficient, but you need to be careful not to exceed the 20-byte limit. The accelerometer characteristic above, for example, takes 11 bytes as an ASCII-encoded string.
However, the typed Characteristic constructors available in the API are limited to the following:
BLEBoolCharacteristic
BLECharCharacteristic
BLEUnsignedCharCharacteristic
BLEShortCharacteristic
BLEUnsignedShortCharacteristic
BLEIntCharacteristic
BLEUnsignedIntCharacteristic
BLELongCharacteristic
BLEUnsignedLongCharacteristic
BLEFloatCharacteristic
BLEDoubleCharacteristic
None of these types of Characteristics appear to be able to hold a 20-byte string. (I have tried the BLECharCharacteristic, and it appears to pertain to a single char, not a char array.)
Using CurieBLE, how does one go about using a string as a characteristic value, as described in the documentation as an efficient practice?
Your issue is addressed in an official Arduino 101 example. A few lines of code showing how to set an array:
BLECharacteristic heartRateChar("2A37",  // standard 16-bit characteristic UUID
    BLERead | BLENotify, 2);
...
const unsigned char heartRateCharArray[2] = { 0, (char)heartRate };
heartRateChar.setValue(heartRateCharArray, 2);
As you can see, the characteristic's value is set using the setValue() function with the desired array as an argument. You can pass a string as a char* pointing to an array.

AWS .Net SDK Create an Amazon EC2 instance with a specific EBS Volume Size

I am developing a .NET web application that creates and manages EC2 instances programmatically. As of now, when I create new instances, the size of the disk volume is fixed: defined by the image (AMI), I believe.
I would like to predefine the size of the disk volume when creating a new instance so that I don't need to run a resize operation afterwards. Is that possible? What would be the best approach?
I have a few ideas:
Define the volume size on the RunInstancesRequest object. But I think there is no such option.
Create a copy of the AMI image with a different disk size and use that one to request a new EC2 instance. Can this be done?
Any other/better ways?
In case that helps, I attach the code I currently use to request new instances:
var launchRequest = new RunInstancesRequest()
{
    ImageId = amiID,
    InstanceType = type,
    MinCount = 1,
    MaxCount = 1,
    SecurityGroupIds = groups
};
var launchResponse = ec2Client.RunInstances(launchRequest);
var instances = launchResponse.Reservation.Instances;
var myInstance = instances.First();
You need to set the (integer GiB) value of the VolumeSize of the EbsBlockDevice in the launchRequest.BlockDeviceMappings before launch.
Remember that if you specify a snapshot, the volume size must be equal to or larger than the snapshot size. Also, if you're creating the volume from a snapshot and don't specify a volume size, the default is the snapshot size.
TIP: Always set the Boolean DeleteOnTermination explicitly as well; do not assume it defaults to true for root volumes the way it does in the AWS console.
You can find out more on EbsBlockDevice properties here
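A sketch of what that could look like on the request from the question. This is request configuration only, to be checked against the SDK reference: the property names follow the AWS SDK for .NET, while the device name "/dev/xvda" and the size of 50 GiB are assumptions for the example (the device name must match your AMI's root device):

```csharp
var launchRequest = new RunInstancesRequest()
{
    ImageId = amiID,
    InstanceType = type,
    MinCount = 1,
    MaxCount = 1,
    SecurityGroupIds = groups,
    BlockDeviceMappings = new List<BlockDeviceMapping>
    {
        new BlockDeviceMapping
        {
            DeviceName = "/dev/xvda",      // assumption: must match the AMI's root device name
            Ebs = new EbsBlockDevice
            {
                VolumeSize = 50,           // size in GiB; must be >= the snapshot size
                DeleteOnTermination = true // set explicitly rather than relying on a default
            }
        }
    }
};
```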

Lags while sending a lot of data

I am building a little multiplayer game. My problem is that when I send the bullets, there is lag once their number exceeds about 80.
I am using UDP; my code to connect to the server:
udp = socket.udp()
udp:settimeout(0)
udp:setpeername(address, port)
My udp:send of a bullet to the server:
udp:send('%S03'..startX..','..startY..','..bulletAngleX..','..bulletAngleY)
Server: retrieving the bullets and sending them back to the rest of the clients:
elseif code == '%S03' then
    local bulleta = string.gmatch(params, "[^,]+")
    local sX = tonumber(bulleta())
    local sY = tonumber(bulleta())
    local dX = tonumber(bulleta())
    local dY = tonumber(bulleta())
    for i, v in ipairs(clients) do
        udp:sendto('%C01'..math.random(120, 200)..','..sX..','..sY..','..dX..','..dY, v['ip'], tonumber(v['port']))
    end
end
Client: getting the bullets data and creating them in a table:
elseif code == '%C01' then
    local xy = string.gmatch(re, "[^,]+")
    local dis = tonumber(xy())
    local xStart = tonumber(xy())
    local yStart = tonumber(xy())
    local xAngle = tonumber(xy())
    local yAngle = tonumber(xy())
    table.insert(bullets, {distance = dis, sX = xStart, sY = yStart, x = xStart, y = yStart, dx = xAngle, dy = yAngle})
The updating of the bullets' x and y coordinates happens on the client once it receives the bullet's x and y, and bullets are removed when their distance from the starting position exceeds 300 pixels.
But my problem remains: there are lags when I am shooting.
I'm not extremely familiar with networking details, however it is quite likely you are simply sending too many packets, and need to bundle or otherwise compress the data you send to reduce the overall number of UDP messages you are sending.
From this Love2d networking tutorial (also using UDP):
It's very easy to completely saturate a network connection if you
aren't careful with the packets we send (or request!), so we hedge our
chances by limiting how often we send (and request) updates.
(For the record, ten times a second is considered good for most normal
games (including many MMOs), and you shouldn't ever really need more
than 30 updates a second, even for fast-paced games.)
We could send updates for every little move, but we'll consolidate the
last update-worth here into a single packet, drastically reducing our
bandwidth use.
I can't confirm that this is the problem without seeing more of your code. However, if you send the data for each bullet separately, to each client, many times per second, the bandwidth usage can become very high; sending each bullet in its own packet is the most likely culprit.
I would first try bundling the data for all bullets together before sending it to each client, which will greatly reduce the number of individual packets being sent out. Also, in case you aren't already doing so, make sure you are not sending packets in love.update, which is called very often. Instead, make a separate function for updating over the network and use timers to call it only about once every 100 ms.
Let me know if any of this is already accounted for in the code, or show us a larger portion of your networking code.

Simple algorithm for reliable communications

So, I have worked on large systems in the past, like an ISO-stack session layer, and something like that is too big for what I need, but I do have some understanding of the big picture. What I have now is a serial point-to-point communications link where some component is dropping data (often).
So I am going to have to write my own reliable-delivery scheme on top of it. Can someone point me toward basic algorithms, or even give a clue as to what they are called? I tried Google but ended up with postgraduate theses on genetic algorithms and such. I need the basics, e.g. 10-20 lines of pure C.
XMODEM. It's old, it's bad, but it is widely supported in both hardware and software, with libraries available for literally every language and market niche.
HDLC - High-Level Data Link Control. It's the protocol that has fathered many reliable protocols over the last three decades, including TCP/IP. You can't use it directly, but it is a template for developing your own protocol. The basic premise is:
every data byte (or packet) is numbered
both sides of communication maintain locally two numbers: last received and last sent
every packet carries a copy of these two numbers
every successful transmission is confirmed by sending back an empty (or not) packet with the updated numbers
if transmission is not confirmed within some timeout, send again.
For special handling (synchronization), add flags to the packet (often a single bit is sufficient to mark the packet as special). And do not forget the CRC.
Neither of these protocols has any kind of session support, but you can introduce one by simply adding another layer: a simple state machine and a timer:
session starts with a special packet
there should be at least one (potentially empty) packet within specified timeout
if this side hasn't sent a packet within the timeout/2, send an empty packet
if no packet has been seen from the other side of the communication within the timeout, the session has been terminated
one can use another special packet for graceful session termination
That is as simple as session control can get.
There are (IMO) two aspects to this question.
Firstly, if data is being dropped then I'd look at resolving the hardware issues first, as otherwise you'll have GIGO (garbage in, garbage out).
As for the comms protocol, your post suggests a fairly trivial system. Do you want to validate data (parity, checksum?) or do you also want error correction?
If validation is all that is required, I've got reliable systems running using RS-232 and CRC-8 checksums, in which case this StackOverflow page probably helps
If some component is dropping data on a serial point-to-point link, there are likely bugs in your code.
Firstly, you should confirm that there is no problem in the physical layer's communication.
Secondly, you need some knowledge of data-communication theory, such as ARQ (Automatic Repeat reQuest).
Further thoughts, after considering your response to the first two answers: this does indicate hardware problems, and no amount of clever code is going to fix that.
I suggest you get an oscilloscope onto the link, which should help to determine where the fault lies. In particular, check the baud rates of the two sides (Tx, Rx) to ensure that they are within spec; auto-baud is often a problem.
Also look to see whether the drop-out is regular, or can be synced with any other activity.
On the sending side:
///////////////////////////////////////// XBee logging
void dataLog(int idx, int t, float f)
{
    ubyte stx[2] = { 0x10, 0x02 };
    ubyte etx[2] = { 0x10, 0x03 };
    nxtWriteRawHS(stx, 2, 1);
    wait1Msec(1);
    nxtWriteRawHS(idx, 2, 1);
    wait1Msec(1);
    nxtWriteRawHS(t, 2, 1);
    wait1Msec(1);
    nxtWriteRawHS(f, 4, 1);
    wait1Msec(1);
    nxtWriteRawHS(etx, 2, 1);
    wait1Msec(1);
}
On the receiving side (note the framing check uses ||, not &&, so a frame is rejected when either byte of a delimiter is wrong):
void XBeeMonitorTask()
{
    int[] lastTick = Enumerable.Repeat<int>(int.MaxValue, 10).ToArray();
    int[] wrapCounter = new int[10];
    while (!XBeeMonitorEnd)
    {
        if (XBee != null && XBee.BytesToRead >= expectedMessageSize)
        {
            // read a data element, parse, add it to the collection; see above for the message format
            if (XBee.BaseStream.Read(XBeeIncoming, 0, expectedMessageSize) != expectedMessageSize)
                throw new InvalidProgramException();
            //System.Diagnostics.Trace.WriteLine(BitConverter.ToString(XBeeIncoming, 0, expectedMessageSize));
            if ((XBeeIncoming[0] != 0x10 || XBeeIncoming[1] != 0x02) ||   // DLE STX
                (XBeeIncoming[10] != 0x10 || XBeeIncoming[11] != 0x03))   // DLE ETX
            {
                System.Diagnostics.Trace.WriteLine("recover sync");
                while (true)
                {
                    int b = XBee.BaseStream.ReadByte();
                    if (b == 0x10)
                    {
                        int c = XBee.BaseStream.ReadByte();
                        if (c == 0x03)
                            break; // realigned (maybe)
                    }
                }
                continue; // resume at loop start
            }
            UInt16 idx = BitConverter.ToUInt16(XBeeIncoming, 2);
            UInt16 tick = BitConverter.ToUInt16(XBeeIncoming, 4);
            Single val = BitConverter.ToSingle(XBeeIncoming, 6);
            if (tick < lastTick[idx])
                wrapCounter[idx]++;
            lastTick[idx] = tick;
            Dispatcher.BeginInvoke(DispatcherPriority.ApplicationIdle,
                new Action(() => DataAdd(idx, tick * wrapCounter[idx], val)));
        }
        Thread.Sleep(2); // surely we can keep up with the NXT
    }
}
