How can I reduce the packet loss rate? Currently I use the 'ask' scheme: my main node sends 0x12, 0x34 to the child node, then the child node sends its data back to my main node, but the packet loss rate is very high. Is there any way to fix this?
Looking at:
OMNET++: How to obtain wireless signal power?
and
https://github.com/inet-framework/inet/blob/master/examples/wireless/scaling/omnetpp.ini
there seem to be no power-consumption-related settings for packets that are sent via a UnitDiskRadio.
Is there a way of setting packet power consumption in a unit disk radio medium, or, conversely, communication range in ApskScalarRadioMedium?
UnitDiskRadio is a simplified version of a radio, for when you are not interested in the transmission, propagation, attenuation etc. details. You just want a clear-cut transmission distance: above it, the transmission always fails; below it, the transmission always succeeds. This is simple, fast and suitable if you want to simulate high-level behavior like the application level or routing. You really don't care how much your radio draws from a power grid (or battery) in this case.
On the other hand, if you are interested in low-level details, the whole radio transmission process should be modeled. In this case you model the transmission power, and there is no clear-cut transmission range: whether a transmission succeeds is a probabilistic outcome depending on power, antenna configuration, encoding, modulation, noise and a lot of other things, so you cannot set it as a simple "range".
TLDR: No, you cannot set both of them on the same radio.
PS: make sure that you do not mix and match the various power parameters. The first question you linked is about getting the power of a received packet (i.e. how strong that signal was when it was received). The second link shows how to configure the transmission power (what goes out on the antenna), and in your question you refer to power consumption, which is a third thing: how much you draw from a battery to make the transmission. They are NOT the same thing.
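For concreteness, here is roughly how the two radio models are configured in an INET ini file. This is a sketch assuming INET 4.x parameter names, so check them against your version:

    # Option A -- unit disk model: success/failure decided purely by distance
    *.radioMedium.typename = "UnitDiskRadioMedium"
    *.host[*].wlan[0].radio.typename = "UnitDiskRadio"
    *.host[*].wlan[0].radio.transmitter.communicationRange = 250m

    # Option B -- physical model: success is probabilistic, driven by power
    *.radioMedium.typename = "ApskScalarRadioMedium"
    *.host[*].wlan[0].radio.typename = "ApskScalarRadio"
    *.host[*].wlan[0].radio.transmitter.power = 1.4mW

You pick one model or the other per simulation: the first has a range knob but no power knob, the second the reverse.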
I am new to VEINS/OMNeT++, trying out various broadcast suppression techniques, and would like to calculate the packet loss ratio. I assume I have to use this formula:
Packet Loss Ratio = TotalLostPackets / SentPackets
But since some nodes send 0 packets, is there an easy way to handle this in the OMNeT++ .anf config file, or maybe in VEINS, without doing manual adjustments? Otherwise, if any node sends 0 packets, its graph shows up as infinity.
Thank you!
This does not directly answer your question, but I would warn against using this equation in a simulation where not all nodes send the same number of packets or where broadcasts are sent. Each packet sent as a broadcast can potentially be received by many other nodes, meaning that even a simulation where only 1 packet is sent might record 7 successful receptions and 5 packet losses. Your equation would calculate the loss rate as 5/1 = 500%, whereas I would find a rate of 5/12 = 42% more reasonable.
As a side effect of calculating the loss rate as "fail/(success+fail)", you will not need to take special care of nodes that did not send or receive packets.
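In other words, count at the receivers, not at the senders. A minimal sketch of the bookkeeping in plain C (names are illustrative):

    #include <stdio.h>

    /* Per-node reception statistics: every potential reception is either
       a success or a loss, so the denominator is zero only for nodes
       that saw no traffic at all. */
    typedef struct {
        unsigned long success;   /* packets received intact */
        unsigned long fail;      /* packets lost or corrupted */
    } RxStats;

    double loss_ratio(const RxStats *s)
    {
        unsigned long total = s->success + s->fail;
        return total == 0 ? 0.0 : (double)s->fail / (double)total;
    }

    int main(void)
    {
        /* The broadcast example from above: 7 receptions, 5 losses. */
        RxStats s = { 7, 5 };
        printf("loss ratio = %.0f%%\n", 100.0 * loss_ratio(&s)); /* 42% */
        return 0;
    }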
I'm sending 1 KB blocks of data using TCP/IP (FreeRTOS + LwIP). From the documentation I understand that TCP has flow control built into the stack itself, but that this flow control depends on the network buffers. I'm not sure how this plays out in my scenario, which is described below.
Receive data in 1 KB blocks using TCP/IP over Wi-Fi (the incoming data rate will be about 20 Mb/s)
The received Wi-Fi data is put into a queue of 10 KB (10 blocks, each block 1 KB in size)
From the queue, each block is taken and sent to another interface at a lower rate (1 Mb/s)
So in this scenario, do I have to implement flow control manually between the Wi-Fi data and the queue? How can I achieve this?
No, you do not have to implement flow control yourself; the TCP protocol takes care of it internally.
Basically, what happens is that when a TCP segment is received from your sender, LwIP will send back an ACK that includes the available space remaining in its buffers (the window size). Since the data is arriving faster than you can process it, the stack will eventually send back an ACK with a window size of zero. This tells the sender's stack to back off and try again later, which it will do automatically. When you get around to extracting more data from the network buffers, the stack re-ACKs the last segment it received, only this time it opens up the window to say that it can receive more data.
What you want to avoid is something called silly window syndrome, because it can have a drastic effect on your network utilisation and performance. Try to read data off the network in big chunks if you can, and avoid tight loops that fill a buffer one byte at a time.
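As a minimal sketch of what the receive side could look like under these assumptions (LwIP sockets API plus a FreeRTOS queue; block size and queue depth come from your description, everything else is illustrative):

    #include <stdint.h>
    #include "lwip/sockets.h"
    #include "FreeRTOS.h"
    #include "task.h"
    #include "queue.h"

    #define BLOCK_SIZE  1024   /* 1 KB blocks */
    #define QUEUE_DEPTH 10     /* 10 blocks, 10 KB total */

    /* Created elsewhere: blockQueue = xQueueCreate(QUEUE_DEPTH, BLOCK_SIZE); */
    extern QueueHandle_t blockQueue;

    /* Reader task: pull full 1 KB blocks off the socket and enqueue them.
       When the queue is full, xQueueSend() blocks, we stop calling recv(),
       LwIP's receive buffers fill, the advertised window closes, and TCP
       throttles the 20 Mb/s sender down to what the 1 Mb/s side consumes. */
    static void tcp_reader_task(void *arg)
    {
        int sock = (int)(intptr_t)arg;        /* already-connected socket */
        static uint8_t block[BLOCK_SIZE];

        for (;;) {
            int got = 0;
            while (got < BLOCK_SIZE) {        /* big reads, not 1 byte */
                int n = lwip_recv(sock, block + got, BLOCK_SIZE - got, 0);
                if (n <= 0) {                 /* peer closed, or error */
                    lwip_close(sock);
                    vTaskDelete(NULL);
                }
                got += n;
            }
            /* Blocks until the slow consumer frees a slot: this is where
               the backpressure that drives TCP flow control comes from. */
            xQueueSend(blockQueue, block, portMAX_DELAY);
        }
    }

Copying a whole 1 KB block per queue operation is wasteful; in production code you would queue pointers into a block pool instead, but the backpressure mechanism is identical.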
I know I need to get the RTT but I really don't get how to calculate it.
For some context:
I have an ns2 script which simulates a server and a receiver with two routers in between. All three links (server to first router, first router to second router, second router to receiver) have different transmission speeds.
The propagation delay is 10ms for the first two links and 3ms for the third one.
I'm sending a 3 MB file in 1000-byte packets (ns2's default TCP packet size), 3146 packets total.
I don't want you to calculate it (the RTT) for me of course, I just want to know how to do it. :\
You can use Agent/Ping to measure the RTT. Here is an example Tcl snippet (this assumes $ns is your simulator instance and $n0/$n3 are the server and receiver nodes from your script):
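    # Attach a ping agent to each endpoint and connect them.
    set ping0 [new Agent/Ping]
    set ping1 [new Agent/Ping]
    $ns attach-agent $n0 $ping0
    $ns attach-agent $n3 $ping1
    $ns connect $ping0 $ping1

    # Called whenever a ping reply arrives; rtt is in milliseconds.
    Agent/Ping instproc recv {from rtt} {
        $self instvar node_
        puts "node [$node_ id] got ping reply from $from: RTT = $rtt ms"
    }

    $ns at 0.5 "$ping0 send"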
There is only one problem: if there is too much traffic, the ping packets may be dropped (you can see this if you collect a trace with trace-all).
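If you also want an analytic baseline to check the measured value against, the queuing-free RTT decomposes per link into transmission plus propagation delay. As a sketch, with $B_i$ the bandwidth and $d_i$ the propagation delay of link $i$ (here $d_1 = d_2 = 10$ ms, $d_3 = 3$ ms), $S$ the data packet size and $S_{\mathrm{ack}}$ the ACK size:

    \mathrm{RTT} \approx \sum_{i=1}^{3}\left(\frac{S}{B_i} + d_i\right)
                + \sum_{i=1}^{3}\left(\frac{S_{\mathrm{ack}}}{B_i} + d_i\right)

Queuing at the routers only adds to this, so the measured ping RTT should come out at least this large.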
I'm currently designing a sensor network that will have small ATtiny85 probes, each with a temperature sensor, a barometer, and a humidity sensor. I think I will use these (http://goo.gl/TqaDjl) to communicate, as they are low cost and don't need much range. I'm not sure, though, how I will get the probes to communicate with the main controller, since the transmitters send digitally and I will have 20+ probes that all need to send data every minute without their signals overlapping or getting garbled. I think the easiest way would be to time the probes so that their transmissions don't overlap, but I'm not sure.
Questions:
-Is using RF the cheapest and best option for this system?
-How can I prevent communication overlapping?
-What is the easiest way to send data digitally from an Arduino (or ATtiny85)?
I guess I'm late to the party, but I'll offer some insight into collision control with a ton of chattering transmitters on one link, a la 802.11. This assumes the data is sent in discrete packets.
If two transmitters try to transmit at the same time, you're bound to get a mangled mess of rotten bacon at the receivers.
A simplified version of WiFi-style collision avoidance would be good. Basically, WiFi uses preambles that can be detected, and for longer transmissions that have a higher chance of colliding it can use short request-to-send/clear-to-send (RTS/CTS) packets.
While this is likely overkill, I'd go for preambles. Start by transmitting a steady stream of something recognizable, like (in hex) 555533330f0f00ff, which is basically alternating 1s and 0s but with changing frequency (0101, then 0011, then 00001111, and so on): a readily recognizable pattern that is unlikely to be produced by stray radiation or noise.
This pattern could also be bit-shifted, giving a finite set of alternative preambles, each a known shift of the original.
If a transmitter detects this preamble, it should STOP and wait. If you limit all packets to a certain maximum airtime, collisions should not occur as long as you wait long enough between packets. If a preamble is heard during the time of one packet, your station should wait out the full length of that transmission (listening to its length and other header fields so it knows how long to wait). Once the packet is done, your station can transmit its own preamble.
This is where the WiFi resemblance stops and simpler protocols take over.
Note that if two stations are both waiting on a packet, they can start their preambles almost simultaneously. To resolve this, each station should have a different zero bit flipped to one in its preamble. If a station detects a 1 where its own preamble has a 0, it knows another station is preambling and should back off.
Each station should wait a certain delay (up to you) after each packet so other stations can start their transmissions.
Sketching out a few of the communication patterns shows that this is sufficient for your needs.
Now, if it's a master-slave-style system, then as long as you only have one network it should be easier, since there will only ever be one outstanding request, and hence only one slave transmitting at a time.
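As a very rough sketch of the listen-before-talk idea above, in Arduino-style C++ (radio_carrier_detected() and radio_send() are hypothetical stand-ins for whatever your RF library provides, and the timing constants are made up):

    #include <Arduino.h>
    #include <stdint.h>

    // The preamble pattern from above: 0101, 0011, 00001111, then 00ff.
    static const uint8_t PREAMBLE[] = {0x55, 0x55, 0x33, 0x33,
                                       0x0F, 0x0F, 0x00, 0xFF};

    const unsigned long MAX_PACKET_MS = 50;  // every packet fits in this airtime
    const unsigned long GAP_MS        = 20;  // mandatory silence after a packet

    bool radio_carrier_detected();                    // hypothetical
    void radio_send(const uint8_t *buf, size_t len);  // hypothetical

    void send_packet(const uint8_t *payload, size_t len)
    {
        for (;;) {
            // If someone else is preambling or transmitting, wait out a
            // worst-case packet plus the inter-packet gap, then try again.
            if (radio_carrier_detected()) {
                delay(MAX_PACKET_MS + GAP_MS);
                continue;
            }
            radio_send(PREAMBLE, sizeof(PREAMBLE));   // claim the channel
            radio_send(payload, len);
            delay(GAP_MS);                            // let others start
            return;
        }
    }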
Those modules will be by far the cheapest option. As for the best option, there are a variety of much better choices, but they are more expensive. A network of XBee modules comes to mind, but those are far more expensive than $1.25 a pair.
Using the RF modules is very doable, however. To prevent communication overlap, put an RF transmitter and receiver on each sensor node and on the main hub. The hub sends "hey sensor1, give me your data", which is broadcast to all of the sensors. Only sensor1 responds with "hey, I am sensor1, here is my data", which the hub listens for. Then the hub goes on to say "hey sensor2, send me your data", and so on and so forth.
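A minimal sketch of that polling loop on the hub, assuming the VirtualWire library that is commonly used with these cheap ASK modules (the two-byte poll format, pin numbers and timeout are made up for illustration):

    #include <VirtualWire.h>

    const uint8_t NUM_SENSORS = 20;

    void setup() {
        vw_set_tx_pin(12);
        vw_set_rx_pin(11);
        vw_setup(2000);          // bits per second
        vw_rx_start();
    }

    void loop() {
        for (uint8_t id = 1; id <= NUM_SENSORS; id++) {
            uint8_t poll[2] = {'P', id};       // "hey sensor <id>, send data"
            vw_send(poll, sizeof(poll));
            vw_wait_tx();                      // block until it is on the air

            // Wait up to 100 ms for a reply that starts with this id.
            unsigned long deadline = millis() + 100;
            while (millis() < deadline) {
                uint8_t buf[VW_MAX_MESSAGE_LEN];
                uint8_t len = sizeof(buf);
                if (vw_get_message(buf, &len) && len > 1 && buf[0] == id) {
                    // buf[1..len-1] holds sensor <id>'s readings
                    break;
                }
            }
        }
        delay(60000);                          // poll the whole set once a minute
    }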
I think your original approach may be best. The approach of putting a TX and RX on every device may be affordable, but I question whether it will work. With 20 devices transmitting on the same frequency, which one will the receiver "hear"? More importantly, how will a device receive a remote transmitter's signal when its own transmitter is very close by? Keep in mind: these are AM radios and will "send" a carrier even when not sending any data. Get a small number of transmitters working before trying to go full scale.
To avoid the problem of picking out the one active transmitter from the soup of idle transmitters, you want only one transmitter powered at a time. You would control Vcc to one transmitter, turn it on, send the burst of data, and then power it off.
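In Arduino terms the power gating could look something like this (TX_POWER_PIN and radioSend() are hypothetical; put a transistor on that pin if the module draws more than a GPIO can source):

    const uint8_t TX_POWER_PIN = 3;    // assumed wiring
    // in setup(): pinMode(TX_POWER_PIN, OUTPUT);

    void radioSend(const uint8_t *data, uint8_t len);  // hypothetical

    void transmitBurst(const uint8_t *data, uint8_t len)
    {
        digitalWrite(TX_POWER_PIN, HIGH);  // apply Vcc to the transmitter
        delay(5);                          // let the oscillator settle
        radioSend(data, len);              // send the burst
        digitalWrite(TX_POWER_PIN, LOW);   // power the module back off
    }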
-How can I prevent communication overlapping?
You can't -- you have to accept that there will be occasional overlaps. Add a CRC to the transmitted data so that the receiver can detect garbage.
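For example, a small bitwise CRC-8 (polynomial 0x07, MSB first) that the sender appends to the payload and the receiver recomputes and compares; a generic sketch, not tied to any particular RF library:

    #include <stdint.h>
    #include <stddef.h>

    /* CRC-8 with polynomial x^8 + x^2 + x + 1 (0x07), MSB first. */
    uint8_t crc8(const uint8_t *data, size_t len)
    {
        uint8_t crc = 0x00;
        while (len--) {
            crc ^= *data++;
            for (uint8_t i = 0; i < 8; i++)
                crc = (crc & 0x80) ? (uint8_t)((crc << 1) ^ 0x07)
                                   : (uint8_t)(crc << 1);
        }
        return crc;
    }

    /* Sender: transmit payload followed by crc8(payload, len).
       Receiver: accept the packet only if the recomputed CRC matches. */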
The timing of the multiple transmitters is surely a project in itself. You surely don't want to run them all at the same transmission period: they may not collide at the beginning, but once two devices drift together and start colliding, they will stay together and keep colliding for a long time, until the clocks drift apart again.
I would start with something simple. For example, with three devices, run the transmissions at periods of 2000 ms, 2200 ms and 2400 ms (use EEPROM to configure the period per device). That way, if a pair happens to collide at one data point, that pair's next transmissions will be 200 ms apart.
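A tiny sketch of the EEPROM-configured period (Arduino EEPROM library; storing a per-device index at address 0 is an assumed convention, written once when each probe is programmed):

    #include <Arduino.h>
    #include <EEPROM.h>

    unsigned long txPeriodMs;

    void sendReadings() { /* transmit this probe's sensor burst here */ }

    void setup() {
        uint8_t deviceIndex = EEPROM.read(0);       // 0, 1, 2, ...
        txPeriodMs = 2000UL + 200UL * deviceIndex;  // 2000, 2200, 2400 ms, ...
    }

    void loop() {
        sendReadings();
        delay(txPeriodMs);
    }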