ns-3 GetSeconds() problem: using this function to get the time delay, but it fails - simulator

I use ns-3 for my VANET simulation.
When I create a packet, I use Simulator::Now().GetSeconds() to obtain the current time and put it into a header field (create_time).
After receiving this packet, the receiver uses Simulator::Now().GetSeconds() to get receive_time.
The time delay should then equal (receive_time - create_time).
When I check the results, 20% of the delays are smaller than zero.
That doesn't make sense, but I can't tell where the problem is.
Or have I created too many nodes, so that my hardware can't cope?
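A minimal sketch of the timestamping pattern described above, with the creation time carried as integer nanoseconds in a custom header (this class is only an illustration, not something ns-3 ships with):

#include "ns3/header.h"
#include "ns3/nstime.h"
#include "ns3/simulator.h"

using namespace ns3;

// Carries the packet creation time as a 64-bit nanosecond count.
class TimestampHeader : public Header
{
public:
  static TypeId GetTypeId (void)
  {
    static TypeId tid = TypeId ("TimestampHeader")
      .SetParent<Header> ()
      .AddConstructor<TimestampHeader> ();
    return tid;
  }
  virtual TypeId GetInstanceTypeId (void) const { return GetTypeId (); }
  virtual uint32_t GetSerializedSize (void) const { return 8; }
  virtual void Serialize (Buffer::Iterator i) const { i.WriteHtonU64 (m_createTimeNs); }
  virtual uint32_t Deserialize (Buffer::Iterator i) { m_createTimeNs = i.ReadNtohU64 (); return 8; }
  virtual void Print (std::ostream &os) const { os << "createTime=" << m_createTimeNs << "ns"; }

  void SetCreateTime (Time t) { m_createTimeNs = t.GetNanoSeconds (); }
  Time GetCreateTime (void) const { return NanoSeconds ((int64_t) m_createTimeNs); }

private:
  uint64_t m_createTimeNs = 0;
};

// Sender:   TimestampHeader h; h.SetCreateTime (Simulator::Now ()); packet->AddHeader (h);
// Receiver: TimestampHeader h; packet->RemoveHeader (h);
//           double delay = (Simulator::Now () - h.GetCreateTime ()).GetSeconds ();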


What decides packETH's maximum packet generation speed?

I've been using packETH for a while and I have always wondered one thing.
When I set the packet generation speed in the Gen-b option, I realized packETH doesn't really send packets at the rate I set.
I think that when I use packETH on a virtual machine, the maximum speed tends to decrease.
Even if I set the number of packets to send to 40000000 and the packets per second to 4000000, the operation doesn't finish in 10 seconds. Instead, I think packETH tries to send packets as fast as possible, can't quite reach that speed, ends up sending them more slowly, and therefore takes longer to finish.
So, what decides packETH's maximum packet generation/transfer speed?
Does it automatically adjust the maximum speed so that the receiving server can take in all the packets correctly?
Thank you so much in advance.
I've read about packETH and I didn't find anything saying it is a multi-threaded packet sender, so that may be the problem. What you want is a multi-threaded packet sender which can take any number of packets and send them in parallel. But first, let's focus on packETH:
Which configuration have you tried?
In the Auto mode you can choose one of five distribution modes. Apart from the random mode, you see different timings by choosing a different mode. In the random mode the generator tries to be smart :). Besides timing, you can also specify the amount of traffic for each stream. In the manual mode you select all of the parameters by hand.
Here is where I've found it: http://packeth.sourceforge.net/packeth/GUI_version.html
For a multi-threaded sender I would suggest trafgen; let's look at a few of its features:
This one saves you from worrying about the packet limit:
Process a number of packets and then exit. If the number of packets is 0, then this is equivalent to infinite packets resp. processing until interrupted. Otherwise, a number given as an unsigned integer will limit processing.
This will ensure parallelism:
Specify the number of processes trafgen shall fork(2) off. By default trafgen will start as many processes as CPUs that are online and pin them to each, respectively. Allowed value must be within interval [1,CPUs].
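For example, to push out 40000000 packets using 4 parallel processes (the option names are taken from the trafgen man page, so check your version; the device and config file names are placeholders):

trafgen --dev eth0 --conf packets.cfg --num 40000000 --cpus 4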

Best approach for transferring large data chunks over BLE

I'm new to BLE and hope you will be able to point me towards the right implementation approach.
I'm working on an application in which the peripheral (battery operated) device continuously aggregates sensor readings.
On the mobile application side there will be a "sync" button; upon button press, I would like to transfer all the sensor readings that have accumulated in the peripheral to the mobile application.
The maximal duration between syncs can be several days, hence the accumulated data can reach a size of 20 Kbytes.
Now, I'm wondering what will be the best approach to perform the data transfer from the peripheral to the central application.
I thought about creating an array of characteristics where each characteristic will contain a fixed number of samples (e.g. representing 1 hour of readings).
Then, upon sync, I will:
Read the characteristics count (how many 1-hour cells there are).
Then read the characteristics (1-hour cells) one by one.
However, I have no idea whether this is a valid approach.
I'm not sure if this is the most "power efficient" way I can use, and I'm not sure if a characteristic READ is the way to go, or whether I need to use indications instead.
Any help here will be highly appreciated :)
Thanks in advance, Moti.
I would simply use notifications.
Use one characteristic which you write something to in order to trigger the transfer start.
Then have another characteristic which you simply stream data over by sending 20 bytes at a time. Most SDKs for BLE systems-on-a-chip have some way to control the flow of data so you don't send too fast, normally by having a callback triggered when the stack is ready to take the next notification.
In order to know the size of the data being sent, you can for example let the first notification contain the size, and rest of them the data.
This is the most time and power efficient way, since many notifications can be sent per connection interval, compared to doing a lot of reads instead, which normally take two round trips each. Don't use indications, since they also require basically two round trips per indication. They're also quite useless anyway.
You could also increase the speed by some percentage by exchanging a larger MTU (which lowers the L2CAP/ATT header overhead).
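A rough sketch of that scheme on the peripheral side (notify_chunk() is a stand-in for whatever "send notification" call your BLE SDK provides, and the 20-byte chunk assumes the default ATT MTU of 23):

#include <cstdint>
#include <cstring>
#include <algorithm>

// Hypothetical hook into the BLE stack: sends one notification on the
// data characteristic. Replace with your SDK's call.
bool notify_chunk(const uint8_t* data, uint8_t len);

// Streams `total` bytes of sensor history, 20 bytes per notification.
// The very first notification carries only the total size, so the
// central knows how many bytes to expect.
void send_sensor_log(const uint8_t* log, uint32_t total)
{
    uint8_t header[4];
    memcpy(header, &total, sizeof(total));   // size prefix (little-endian here)
    notify_chunk(header, sizeof(header));    // notification #1: the size

    uint32_t offset = 0;
    while (offset < total) {
        uint8_t len = static_cast<uint8_t>(std::min<uint32_t>(20, total - offset));
        // In a real stack, wait for the "ready for next notification"
        // callback before sending the next chunk (flow control).
        notify_chunk(log + offset, len);
        offset += len;
    }
}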

Is set-then-query a sensible way to detect command completion over SCPI?

I'm using a SCPI device over RS-232. I have a number of situations where I want to know that a command has completed before continuing. What seems sensible to me is to end with a query to force the device to send a response line after everything is done, something like the following:
:volt 12.34;volt?
It is my understanding that volt? executes strictly after volt 12.34 and thus, when I get the response line for volt?, I'll know that the voltage has been set. I can also check the response value as a sanity check.
Does this approach achieve what I want? Should the response value always correspond to the value I put in (if it was a valid value)? Is there a better way to do this that I'm not seeing?
(I would have used a SCPI tag, but it does not seem to exist.)
*OPC? is a query that returns 1 when the last operation has completed, so:
:volt 12.34;*OPC?
would return 1 only after the operation has completed.
But if you want to make sure what the voltage actually is, you still need to query it (for example, if you set an out-of-range voltage; I know some devices will still return 1 for *OPC? but leave the voltage as it is, or at the nearest possible value).
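A typical exchange would therefore look like this (responses shown after the arrows; the second query is the optional sanity check):

:volt 12.34;*OPC?    ->  1
:volt?               ->  12.34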

Get static values from MPU6050 DMP

I have a problem getting clean, non-jumping values from the MPU6050 DMP. I used Jeff Rowberg's code. The problem is that when I use his code as-is everything is perfect and YPR is very smooth. But when I use it in my program with a delay, I get jumping values over time. Depending on the delay, the jumping values vary.
I used a delay because I'm reading the serial values from Unity, and Unity needs a little delay on the Arduino side to read the data. Can someone please tell me what the problem is and how I can fix it?
Thanks a lot.
It is likely that the FIFO buffer is overflowing, leading to incorrect data. This would happen if you put in a delay that lasts longer than the DMP output period. One strategy you could use is to read the data from the DMP as fast as you can, but only send it over the serial port every second or third reading, depending on what kind of delay you need between readings, as sketched below.
If you edit your question with your DMP output rate and your desired serial rate, I could try to help more.
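A rough sketch of that strategy, based on Jeff Rowberg's MPU6050_DMP6 example (the function names are from recent versions of his library, and mpu, fifoBuffer, q, gravity and ypr are assumed to be set up exactly as in that example):

const uint8_t SEND_EVERY = 3;   // forward 1 of every 3 DMP packets to Unity
uint8_t packetCounter = 0;

void loop() {
  // Drain the FIFO as fast as possible so it never overflows...
  if (mpu.dmpGetCurrentFIFOPacket(fifoBuffer)) {
    mpu.dmpGetQuaternion(&q, fifoBuffer);
    mpu.dmpGetGravity(&gravity, &q);
    mpu.dmpGetYawPitchRoll(ypr, &q, &gravity);

    // ...but only write to the serial port every Nth packet. This gives
    // Unity the breathing room a delay() would, without letting stale
    // data pile up in the FIFO.
    if (++packetCounter >= SEND_EVERY) {
      packetCounter = 0;
      Serial.print(ypr[0]); Serial.print('\t');
      Serial.print(ypr[1]); Serial.print('\t');
      Serial.println(ypr[2]);
    }
  }
}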

What is the difference between the delay and the jitter in the context of real time applications?

According to Wikipedia, jitter is the undesired deviation from true periodicity of an assumed periodic signal; according to a paper on QoS that I am reading, jitter is referred to as delay variation. Is there a definition of jitter in the context of real-time applications? Are there applications that are sensitive to jitter but not sensitive to delay? If, for example, a streaming application uses some kind of buffer to store packets before showing them to the user, is it possible that this application is not sensitive to delay but is sensitive to jitter?
Delay: the amount of time the data (signal) takes to reach the destination. A higher delay generally means congestion of some sort, or a break in the communication link.
Jitter: the variation of the delay over time. It appears when a system is not in a deterministic state; e.g. video streaming suffers from jitter a lot, because the amount of data transferred is quite large and so there is no way of saying how long it might take to transfer.
If your application is sensitive to jitter, it is definitely sensitive to delay.
In the Real-time Transport Protocol (RTP, RFC 3550), the header contains a timestamp field. Its value usually comes from a monotonically incremented counter, and the frequency of that increment is the clock rate. This clock rate must be the same for every participant that wants to do anything with the timestamp field. The counters have different base offsets, because the start times differ, or because they are randomized for security reasons, etc. All in all, we say the clocks are not synchronized.
To show it in an example, let snd_timestamp and rcv_timestamp be the most recent packet's sender timestamp taken from the RTP header field and the receiver timestamp generated by the receiver using the same clock rate.
The wrong conclusion is that
delay_in_timestamp_unit = rcv_timestamp - snd_timestamp
Since the receiver and sender clocks have different base offsets (and they do), this does not give you the delay; it also doesn't account for the wrap-around of the 32-bit unsigned integer.
But monitoring the time it takes to deliver packets is necessary if we want a proper playout adaptation algorithm or if we want to detect and avoid congestion.
Also note that even if we had synchronized clocks, delay_in_timestamp_unit might not exactly represent the pure network delay, because components at the sender or at the receiver side may hold these packets after the timestamp is added or before it is examined. So if you calculate a 2-second delay between the participants, but you know your network delay is around 100 ms, then your packets suffer additional delays at the sender and/or the receiver side. But that additional delay is somehow (or at least you hope it is) constant, so the only delay that changes over time is, hopefully, the network delay. So you should not say that if the packet delay > 500 ms then we have congestion, because you have no idea what the actual network delay is if you use only one packet's sender and receiver timestamp information.
But the difference between the delays of two consecutive packets might give you some information about whether something is wrong in the network or not.
diff_delay = delay_t0 - delay_t1
If diff_delay equals 0, the delay is the same; if it is greater than 0, the newly arrived packet needed more time than the previous one; and if it is smaller than 0, it needed less time.
And from that relative information, based on two consecutive delays, you can say something.
How do you determine the difference between two delays if the clocks are not synchronized?
Suppose you stored the previous timestamps in rcv_timestamp_t1 and snd_timestamp_t1:
diff_delay = (rcv_timestamp_t0 - snd_timestamp_t0) - (rcv_timestamp_t1 - snd_timestamp_t1)
but that would be a problem without keeping track of the base offsets of the sender and the receiver, so reorder it:
diff_delay = (rcv_timestamp_t0 - rcv_timestamp_t1) - (snd_timestamp_t0 - snd_timestamp_t1)
Here you subtract the rcv timestamps from each other, which eliminates the offsets that rcv and snd contain, and then you subtract the snd difference from the rcv difference, which gives you the difference of the delays of two consecutive packets in units of the clock rate.
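A minimal sketch of this calculation (the timestamps are 32-bit values in clock-rate units; doing the subtractions in unsigned 32-bit arithmetic also takes care of the wrap-around mentioned above):

#include <cstdint>

// Difference of the delays of two consecutive packets, in clock-rate units.
// rcv_* are receiver timestamps, snd_* are the RTP header timestamps.
int32_t DiffDelay(uint32_t rcv_t0, uint32_t snd_t0,
                  uint32_t rcv_t1, uint32_t snd_t1)
{
  uint32_t rcv_diff = rcv_t0 - rcv_t1;   // time between arrivals
  uint32_t snd_diff = snd_t0 - snd_t1;   // time between departures
  return static_cast<int32_t>(rcv_diff - snd_diff);
}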
Now, according to RFC3550 jitter is "An estimate of the statistical variance of the RTP data packet interarrival time".
To finally get to the point of your question, "What is the difference between the delay and the jitter in the context of real time applications?":
A tiny note first: "real-time applications" usually refers to systems processing data within nanosecond ranges, so I think you mean end-to-end systems.
Also, despite the several variant definitions of jitter, they all use the difference between the delays of arriving packets and thus give you information about the relative change of the network delay, whereas the delay itself is an absolute value of the delivery time.
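For completeness, RFC 3550 also gives the running estimator most RTP implementations use: for each arriving packet, compute D exactly as diff_delay above and fold its magnitude into a moving average, J = J + (|D| - J)/16. A minimal sketch, in the same clock-rate units as above:

#include <cstdint>
#include <cstdlib>

// Interarrival jitter estimate as described in RFC 3550, section 6.4.1.
class JitterEstimator
{
public:
  // Call once per received packet with its RTP timestamp (snd) and the
  // arrival timestamp taken by the receiver (rcv), both in clock-rate units.
  void OnPacket(uint32_t snd, uint32_t rcv)
  {
    if (m_hasPrev) {
      // D = (rcv_i - rcv_prev) - (snd_i - snd_prev), wrap-safe in uint32.
      int32_t d = static_cast<int32_t>((rcv - m_prevRcv) - (snd - m_prevSnd));
      m_jitter += (std::abs(d) - m_jitter) / 16.0;   // J += (|D| - J) / 16
    }
    m_prevSnd = snd;
    m_prevRcv = rcv;
    m_hasPrev = true;
  }

  double Jitter() const { return m_jitter; }   // in clock-rate units

private:
  uint32_t m_prevSnd = 0;
  uint32_t m_prevRcv = 0;
  bool m_hasPrev = false;
  double m_jitter = 0.0;
};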
