What is the meaning of non-trivial QoS in Grid Computing?

What is the meaning of non-trivial QoS in Grid Computing? Can somebody explain to me what that means?

Trivial QoS parameters (basic prerequisites):
error-free transmission (e.g. no distortion)
multicast delivered only to the intended group (multicast is not broadcast)
packets reach the correct destination
...
Non-trivial QoS parameters:
delay
jitter
loss
effort
To guarantee non-trivial transmission properties you need a mechanism such as DiffServ or IntServ.
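To make the DiffServ half of that concrete: an application requests non-trivial treatment (low delay, low jitter) by marking its packets with a DSCP value that DiffServ-aware routers can act on. A minimal Python sketch using the standard Linux socket API (the destination address and port are placeholders, and the routers on the path must of course be configured to honor the mark):

    import socket

    # DSCP "Expedited Forwarding" (EF, decimal 46) is commonly used for
    # latency-sensitive traffic such as voice. The DSCP occupies the upper
    # six bits of the IP TOS byte, hence the shift by 2.
    DSCP_EF = 46

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, DSCP_EF << 2)

    # Every datagram sent on this socket now carries the EF mark, so
    # DiffServ-aware routers can give it priority treatment.
    sock.sendto(b"probe", ("192.0.2.1", 5004))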


UnitDiskRadioMedium no power consumption settings? (omnetpp)

Looking at:
OMNET++: How to obtain wireless signal power?
and
https://github.com/inet-framework/inet/blob/master/examples/wireless/scaling/omnetpp.ini
there seem to be no power-consumption-related settings for packets sent over a UnitDiskRadio.
Is there a way of setting packet power consumption in a unit disk radio medium, or, conversely, communication range in ApskScalarRadioMedium?
UnitDiskRadio is a simplified model of a radio for when you are not interested in the details of transmission, propagation, attenuation and so on. You just want a clear-cut transmission distance: above it, a transmission always fails; below it, it always succeeds. This is simple, fast and suitable if you want to simulate high-level behavior such as the application or routing layer. In this case you really don't care how much your radio draws from a power grid (or battery).
On the other hand, if you are interested in low-level details, the whole radio transmission process should be modeled. In this case you model the transmission power and, based on that, whether each transmission is received; there is no clear-cut transmission range. Whether a transmission succeeds is a probabilistic outcome depending on power, antenna configuration, encoding, modulation, noise and a lot of other things, so you cannot set it as a simple "range".
TL;DR: No, you cannot set both of them on the same radio.
PS: make sure that you do not mix up the various power parameters. The first question you linked is about the power of a received packet (i.e. how strong the signal was when it was received). The second link shows how to configure the transmission power (what goes out on the antenna), and in your question you are referring to power consumption, which is a third thing: how much you draw from a battery to make the transmission. They are NOT the same thing.
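To make the two alternatives concrete, here is a minimal omnetpp.ini sketch of both setups. The parameter names follow recent INET 4.x releases and are an assumption to check against your INET version (older releases used IdealRadio/IdealRadioMedium with maxCommunicationRange instead):

    # Option A: unit disk model - you set a hard communication range;
    # there is no meaningful power parameter to configure.
    [Config UnitDisk]
    *.radioMedium.typename = "UnitDiskRadioMedium"
    *.host*.wlan[0].radio.typename = "UnitDiskRadio"
    *.host*.wlan[0].radio.transmitter.communicationRange = 250m

    # Option B: physical model - you set a transmission power (plus
    # background noise, receiver sensitivity etc.) and the effective
    # range falls out of the physics instead of being set directly.
    [Config ApskScalar]
    *.radioMedium.typename = "ApskScalarRadioMedium"
    *.host*.wlan[0].radio.typename = "ApskScalarRadio"
    *.host*.wlan[0].radio.transmitter.power = 2mW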

IEEE 802.15.4 Superframe Structure Slot alignment reason

Consider the IEEE 802.15.4 protocol superframe structure:
[Figure: IEEE 802.15.4 superframe structure (image source: Google)]
In this structure the Contention Access Period (CAP) is always followed by the Contention-Free Period (CFP).
So is there any particular reason for putting the CAP first and the CFP after it? Could it be the other way around?
Thank you.
It can't really be the other way around, because that is what is in the standard. Obviously you are free to implement your own use of the radio, but then I guess it wouldn't be 802.15.4!
The designers of the standard probably had good reason to place the CAP before the CFP (and if you are really interested, I imagine it is documented somewhere in the IEEE meeting minutes etc.). My guess is that it has the following benefits:
devices have to wake up their receiver to listen for the beacon frame, and thus if they have any ad-hoc comms to perform (like collecting a pending message or negotiating a connection etc) they can do it straight away and then go to sleep for the rest of the superframe
having the CAP first allows any devices that do not have a GTS to power down their radio for as long as possible
having the CAP first provides time for devices to negotiate a GTS before the CFP starts, thus reducing the latency to their first GTS (i.e. it would be possible to hear a beacon, associate, and obtain a GTS prior to the very next CFP)
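To get a feel for the timing involved, here is a small Python sketch of the beacon-enabled superframe durations, using the 2.4 GHz O-QPSK PHY constants from the standard (62,500 symbols/s, aBaseSuperframeDuration = 960 symbols, 16 slots). Where exactly the CAP ends and the CFP begins within those slots is negotiated per network, so it is not computed here:

    # IEEE 802.15.4 beacon-enabled superframe timing (2.4 GHz O-QPSK PHY).
    SYMBOL_RATE = 62_500      # symbols/s (250 kbps at 4 bits per symbol)
    BASE_SUPERFRAME = 960     # aBaseSuperframeDuration, in symbols
    NUM_SLOTS = 16            # aNumSuperframeSlots

    def superframe_ms(superframe_order):
        """Active portion SD = aBaseSuperframeDuration * 2^SO, in ms."""
        return BASE_SUPERFRAME * (2 ** superframe_order) / SYMBOL_RATE * 1000

    for so in (0, 4, 8):
        sd = superframe_ms(so)
        # Slot 0 carries the beacon, the CAP occupies the following slots,
        # and the CFP (up to 7 GTS slots) sits at the end, just before the
        # next beacon.
        print(f"SO={so}: superframe {sd:.2f} ms, slot {sd / NUM_SLOTS:.3f} ms")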

How do clients on wireless networks decide who can transmit at any given time?

I've been thinking about wireless networking a little bit recently, and I came upon a question last night that I can't find an answer to: how do clients know when they can transmit without stomping over another client's transmission?
I assume there is documentation for this sort of thing available, but I've been unable to find anything useful in over half an hour of casual Google queries, probably because I don't know the right terms. Apologies in advance if this is a silly question . . .
Here's why I'm confused: based on my understanding of how RF hardware works, we can model the transmission medium as a safe shared register between the RF clients (because what one client broadcasts can be overwritten by other clients, producing a muddle of the two). But safe registers only have consensus number 1, so how can the clients establish who may transmit at any given point? I'm assuming that only one client can transmit at once -- perhaps this is my fundamental misunderstanding?
Even the use of a randomized consensus protocol seems unwieldy, because the only ones I know of use atomic registers, not safe registers, and also have no upper bound, so two identical devices with the same random seed could keep colliding for a very long time.
Thanks!
Please check: Carrier sense multiple access with collision avoidance (CSMA/CA).
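To expand on that a little: CSMA/CA does not solve consensus at all; it merely makes simultaneous transmission unlikely (carrier sensing plus randomized backoff) and relies on acknowledgements and retransmission when collisions happen anyway, so the safe-register impossibility argument does not apply. A toy Python sketch of the logic follows; the window sizes are illustrative rather than any particular standard's exact parameters, and channel_is_busy / transmit are hypothetical caller-supplied callbacks:

    import random
    import time

    def csma_ca_send(channel_is_busy, transmit, max_attempts=5,
                     slot_time=0.00032):
        """Toy CSMA/CA: wait a random number of backoff slots, sense the
        channel, and retry with an exponentially growing contention window.
        transmit() returns True when an acknowledgement came back."""
        for attempt in range(max_attempts):
            window = 2 ** (attempt + 3)           # contention window doubles
            time.sleep(random.randrange(window) * slot_time)
            if channel_is_busy():
                continue                          # someone else is talking
            if transmit():
                return True                       # ACK received, done
            # No ACK: assume a collision and retry with a larger window.
        return False

As for the "same random seed" worry: it is real in theory, but in practice radios seed their generators from things like hardware noise and device addresses, so two devices rarely stay in lockstep, and the retry limit bounds how long any contest can run.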

Zigbee beaconing vs non beaconing

When using a non-beaconing Zigbee network, I know that the 802.15.4 spec defines the use of CSMA-CA to control when two devices get access to the channel, to make sure no two nodes "step on each other's toes", so to speak. My understanding is that, very simply, it requires each node to "listen before talking". Is that correct? Is there more information on the Zigbee implementation of this? In other words, where do I go to learn more about how to program a Zigbee chip to implement the same?
Also, if I have 20 end nodes sending data asynchronously to one coordinator, is the channel access mechanism enough to ensure that they do not transmit at the same time and flood the coordinator? If five nodes (for example) attempt to transmit at the same time, how will mutual exclusion be ensured? Where can I get some details on that?
Thanks
Rishi
The maximum PHY payload of an 802.15.4 frame is 127 bytes (1016 bits), so the maximum frame duration (at the standard 250 kbps rate on the 2.4 GHz band) is a little over 4 ms once you take the preamble and other PHY overhead into account (call it 5 ms to be safe). If your end devices are polling at 1 poll/second, the channel should easily manage 20 end nodes, I think. If contention gets too high, the exponential backoff should ease the collision rate.
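Working the arithmetic through in Python (pure back-of-the-envelope; ACKs, backoff time and MAC-level headers beyond the PHY are ignored):

    # Rough 802.15.4 airtime and channel utilization estimate (2.4 GHz PHY).
    BITRATE = 250_000     # bits per second
    MAX_PSDU = 127        # max PHY payload in bytes (aMaxPHYPacketSize)
    PHY_OVERHEAD = 6      # preamble (4) + SFD (1) + length field (1), bytes

    airtime = (MAX_PSDU + PHY_OVERHEAD) * 8 / BITRATE
    print(f"max frame airtime: {airtime * 1000:.2f} ms")              # ~4.26 ms

    # 20 end nodes, each sending one max-size frame per second:
    print(f"utilization: {20 * airtime * 100:.1f} % of the channel")  # ~8.5 %

At well under 10% utilization there is plenty of headroom for CSMA-CA to spread the transmissions out.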
I'm sure you've seen these when searching, but just in case:
http://www.prismmodelchecker.org/casestudies/zigbee.php
http://www.dagstuhl.de/Materials/Files/07/07101/07101.FruthMatthias.Slides.pdf
http://www-public.it-sudparis.eu/~gauthier/Tools/802_15_4_MAC_PHY_Usage.pdf

Compensating for jitter

I have a voice-chat service which is experiencing variations in the delay between packets. I was wondering what the proper response to this is, and how to compensate for it?
For example, should I adjust my audio buffers in some way?
Thanks
You don't say if this is an application you are developing yourself or one which you are simply using - you will obviously have more control over the former so that may be important.
Either way, it may be that your network is simply not good enough to support VoIP, in which case you really need to concentrate on improving the network or using a different one.
VoIP typically requires an end-to-end delay of less than 200 ms (milliseconds) before users perceive an issue.
Jitter is also important - in simple terms it is the variance in end-to-end packet delay. For example, the delay between packet 1 and packet 2 may be 20 ms, but the delay between packet 2 and packet 3 may be 30 ms. Having a jitter buffer of 40 ms means your application waits up to 40 ms between packets, so it would not 'lose' any of these packets.
Any packet not received within the jitter buffer window is usually ignored, and hence there is a relationship between jitter and the effective packet loss value for your connection. Packet loss also impacts users' perception of VoIP quality - different codecs have different tolerances, but a common target is that it should be lower than 1%-5%. Packet loss concealment techniques can help if it is just an intermittent problem.
Jitter buffers will either be static or dynamic (adaptive) - in either case, the bigger they get the greater the chance they will introduce delay into the call and you get back to the delay issue above. A typical jitter buffer might be between 20 and 50ms, either set statically or adapting automatically based on network conditions.
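If you are developing the application yourself, the core of a static jitter buffer is quite small. A Python sketch under these assumptions: RTP-style sequence numbers, and a playout clock that asks for one packet per frame period while running a fixed delay (the 40 ms of the example above) behind the sender:

    import heapq

    class JitterBuffer:
        """Static jitter buffer sketch. Packets are queued on arrival; the
        playout side requests one sequence number per frame period, running
        a fixed delay behind the sender. Late packets are dropped."""

        def __init__(self):
            self.heap = []        # min-heap of (sequence_number, payload)
            self.next_seq = 0     # first sequence number not yet played out

        def push(self, seq, payload):
            if seq < self.next_seq:
                return            # arrived after its playout slot: drop it
            heapq.heappush(self.heap, (seq, payload))

        def pop(self, seq_due):
            """Return the payload due for playout, or None for a 'lost'
            packet (to be masked by packet loss concealment)."""
            while self.heap and self.heap[0][0] < seq_due:
                heapq.heappop(self.heap)   # discard stale entries
            self.next_seq = seq_due + 1
            if self.heap and self.heap[0][0] == seq_due:
                return heapq.heappop(self.heap)[1]
            return None

An adaptive buffer would additionally measure recent inter-arrival variance and grow or shrink the playout delay accordingly, trading latency against effective loss as described above.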
Good references for further information are:
- http://www.voiptroubleshooter.com/indepth/jittersources.html
- http://www.cisco.com/en/US/tech/tk652/tk698/technologies_tech_note09186a00800945df.shtml
It is also worth trying some of the common online connection speed tests, as many have a specific VoIP test that will give you an idea of whether your local connection is good enough for VoIP (although bear in mind that these tests only reflect conditions at the exact time you run them).
