How to create an application-specific adaptive congestion control algorithm based on a parameter like the port number in ns2 (TCP)

I have been assigned a project to create an application-specific adaptive congestion control algorithm based on a parameter like the port number in the ns2 network simulator. How do I go about it? I don't know how to use the port number in the ns2 simulator. My professor said that some knowledge of the kernel would be required to proceed. He provided an image for reference, which I have attached. I am confused as to how I should trace the port number.
I know how to do link-state and distance-vector routing in ns2, where you just import rtProto DV/LS. Similar libraries do exist for TCP Tahoe, Reno, Vegas, etc., but how should I highlight the port number in the project? Would it be seen in the trace file?
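
For reference, in ns2 a "port" is the slot an agent is attached to on a node, and the classic wired trace format records the source and destination addresses as node.port, so it does appear in the trace file. Below is a minimal sketch of pulling per-port counts out of such a trace, assuming the default wired trace format and a hypothetical trace file name out.tr:

    # Rough sketch: per-destination-port packet and drop counts from an ns2
    # old-format wired trace. The file name "out.tr" is an assumption.
    from collections import Counter

    enqueued = Counter()
    dropped = Counter()

    with open("out.tr") as trace:
        for line in trace:
            fields = line.split()
            if len(fields) < 12:
                continue
            event = fields[0]            # +, -, r, d
            dst = fields[9]              # destination address written as node.port
            port = dst.split(".")[1]     # the agent slot, i.e. the "port"
            if event == "+":
                enqueued[port] += 1
            elif event == "d":
                dropped[port] += 1

    for port in sorted(enqueued):
        print(f"port {port}: enqueued {enqueued[port]}, dropped {dropped[port]}")

Giving each application its own agent/port in the Tcl script is what makes this kind of per-application filtering possible, and it is the same field an adaptive scheme could key on.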

Related

j1939 custom module communication

I am in the middle of building a custom CAN bus I/O module based on an ATmega2560 chip. The module will have 10 high-current outputs with PWM control and current feedback, 20 digital inputs, 5 analog inputs and 4 analog 0-5 V outputs. I've been working on figuring out the J1939 message structure I'll be using. These modules will be slave devices controlled by a master ECU. Since more than one of these modules may be on the same network, I've added a CAN address switch to each module so the user can select the CAN ID (1-255) of each unit. The plan was to have each module broadcast the state of all the digital inputs in a single CAN message. If I select 0xFF00 as the PGN ID and then use the 8 data bytes to represent the state of each input in bit form for that particular module, how does the master know which module the message came from? Is the module address in the CAN bus message?
I've been looking at the Arduino CAN bus examples and it looks like you can filter based on PGN, but I don't see anything to filter based on source address or destination address.
Can someone provide some clarification on how I might do this?
Yes, as doynax mentioned, the 8 least significant bits are always reserved for the source address of a node. If you will be placing this network on a vehicle's CAN bus, it is important to note that if you do not claim a source address on the vehicle bus, you may get a NAK from the main ECU whenever you try to place foreign information on the bus.
To prevent this issue, you may have to perform an address claim procedure for each custom node on your network. This is where you send out PGN 60928 as a broadcast (destination address 0xFF), and every node on the network should respond on that PGN with its own source address (assuming all nodes comply with this specification; not all do). If the source address you intended to use appears in the replies, you know it is not available.
See the following slideshow for more information, starting on page 39:
J1939
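
To make the identifier layout concrete, here is a small sketch (plain bit manipulation, no CAN library) of how a 29-bit J1939 ID could be packed and how the PGN and source address could be pulled back out of a received ID; the priority and addresses used are example values only:

    # Sketch of J1939 29-bit identifier packing/unpacking.
    # Priority 6 and source address 0x80 below are illustrative values only.

    def build_id(priority, pgn, source_address):
        """Pack priority (3 bits), PGN (18 bits) and source address (8 bits)."""
        return ((priority & 0x7) << 26) | ((pgn & 0x3FFFF) << 8) | (source_address & 0xFF)

    def parse_id(can_id):
        """Return (priority, pgn, destination, source) from a 29-bit identifier."""
        source = can_id & 0xFF            # the 8 least significant bits
        pf = (can_id >> 16) & 0xFF        # PDU format byte
        ps = (can_id >> 8) & 0xFF         # PDU specific byte
        priority = (can_id >> 26) & 0x7
        if pf < 240:                      # PDU1: PS holds the destination address
            pgn = (can_id >> 8) & 0x3FF00
            destination = ps
        else:                             # PDU2 (like 0xFF00): broadcast, PS is part of the PGN
            pgn = (can_id >> 8) & 0x3FFFF
            destination = 0xFF
        return priority, pgn, destination, source

    can_id = build_id(priority=6, pgn=0xFF00, source_address=0x80)
    print(hex(can_id))         # 0x18ff0080
    print(parse_id(can_id))    # (6, 65280, 255, 128)

This is also why a receive filter keyed only on the PGN ignores the sender: to tell the modules apart, the master would additionally have to mask and compare the low 8 bits of the identifier.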

Doing BLE pairing and communication remotely

I tried looking everywhere to understand how BLE pairing works, but I have been unable to find answers. Let's say I have a small device, like a Raspberry Pi with a BLE dongle. What I'd like to do is allow BLE pairing and then subsequent communication with a BLE peripheral (such as a BLE temperature sensor) using software only.
My aim is to see whether I can control the pairing and then get the temperature without touching the sensor at all, so that in the future I can just remotely log into the Raspberry Pi, turn on Bluetooth, obtain the temperature reading and then turn it off again. If I need to obtain the reading again later, I'll repeat the process.
So:
Can this "simple" scenario be achieved using some software-based control?
If not, which parts require manual input and which don't?
The BLE sensor should not be in advertisement mode or broadcasting the information. It should only send the data to paired devices.
Any and all answers appreciated! :-)
Most Bluetooth Low Energy devices do not require pairing at all, so check first that your sensors do have this requirement. If they do, then you need to determine which specific pairing procedure is required. Bluetooth defines various ways to authenticate during pairing; these generally relate to the I/O capabilities of the two devices and are called association models. In some cases, pairing "just works" (the name of the simplest association model) and no user interaction is required. In others, say if one device has a keyboard but the other has a display and no keyboard, the second device will display a random 6-digit number and the user must key that number into the first device. All of this is defined in the Bluetooth Core Specification.
In your case the pairing procedure will be defined for your sensors in the manufacturer documentation, so check there first. Note that you should only have to do this once per device, not every time you want to read the sensor.
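
As a rough sketch of the software-only flow on the Pi (assuming Linux/BlueZ and the Python bleak library; the sensor address and characteristic UUID below are placeholders to be replaced with your sensor's actual values):

    # Sketch: pair with a BLE sensor and read one characteristic using bleak on BlueZ.
    # SENSOR_ADDRESS and TEMP_CHAR_UUID are placeholders - take the real values from your sensor's documentation.
    import asyncio
    from bleak import BleakClient

    SENSOR_ADDRESS = "AA:BB:CC:DD:EE:FF"                     # hypothetical MAC address
    TEMP_CHAR_UUID = "00002a6e-0000-1000-8000-00805f9b34fb"  # standard Temperature characteristic (0x2A6E)

    async def read_temperature():
        async with BleakClient(SENSOR_ADDRESS) as client:
            # Pairing/bonding is only needed once; BlueZ stores the keys afterwards.
            # With the "Just Works" association model no user interaction is required.
            await client.pair()
            raw = await client.read_gatt_char(TEMP_CHAR_UUID)
            # The standard Temperature characteristic is a signed 16-bit value in units of 0.01 degrees C.
            temperature = int.from_bytes(raw[:2], byteorder="little", signed=True) / 100
            print(f"Temperature: {temperature:.2f} C")

    asyncio.run(read_temperature())

If the sensor requires a passkey rather than Just Works, that passkey can usually still be supplied programmatically through a BlueZ pairing agent, so the flow can remain software-only.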
Accessing sensor data remotely needs a gateway, which I guess is what your Pi will do. The Bluetooth SIG defined a set of RESTful HTTP APIs for exactly this purpose. See https://www.bluetooth.com/develop-with-bluetooth/white-papers
The SIG also provides a gateway developer resource for the Raspberry Pi, which you can download including source code written in Node.js. See https://www.bluetooth.com/develop-with-bluetooth/developer-resources-tools
Good luck

how can you count the number of packet losses in a file transfer?

One of my networks course projects has to do with the 802.11 protocol.
My partner and I thought about exploring the "hidden terminal" problem by simulating it.
We've set up a private network. We have 2 wireless terminals that will attempt to send a file to a 3rd terminal that is connected to the router via Ethernet. RTS/CTS will be disabled.
To compare results, we'd like to measure the number of packet collisions that occurred during the transfer, so as to conclude that they are due to RTS being disabled.
We've read that it is impossible to measure packet collisions directly, as a collision is basically noise. We'll have to make do with counting the packets that didn't receive an "ACK", i.e. the number of retransmissions.
How can we do that?
I suggested that instead of sending a file, we could make the 2 wireless terminals ping the 3rd terminal continually. The ping feature automatically counts the ping packets that didn't receive the "pong". Do you think it's a viable approach?
Thank you very much.
No, you'll get incorrect results. Ping is an application, i.e. it works at the application (highest) layer of the network stack. The 802.11 protocol operates at the MAC layer; there are at least 2 layers separating ping from 802.11. Whatever retransmissions happen at the MAC layer are hidden by the layers above it. You'll see a failure in ping only if all the retransmissions initiated by the lower layers have failed.
You need to work at the same layer that you're investigating, in your case the MAC layer. You can use a sniffer (google for it) to get the statistics you want.
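
For example, with a Wi-Fi adapter in monitor mode, you can count data frames that carry the MAC-layer Retry flag (set on every retransmitted frame) with a few lines of scapy; the interface name mon0 and the 60-second capture window below are assumptions:

    # Sketch: count 802.11 retransmissions by watching the Retry bit in the frame control field.
    # Requires root privileges and a wireless interface already in monitor mode ("mon0" is an assumption).
    from scapy.all import sniff
    from scapy.layers.dot11 import Dot11

    total_data = 0
    retries = 0

    def count_retry(pkt):
        global total_data, retries
        if pkt.haslayer(Dot11):
            dot11 = pkt[Dot11]
            if dot11.type == 2:              # type 2 = data frames
                total_data += 1
                if dot11.FCfield & 0x08:     # Retry bit set -> this frame is a retransmission
                    retries += 1

    sniff(iface="mon0", prn=count_retry, timeout=60)  # capture for 60 seconds
    print(f"data frames: {total_data}, retransmissions: {retries}")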

beginner in GSM: develop GSM locator

I'm a beginner, and I am trying to build an embedded GSM device that can send an SMS to a mobile phone, so that the phone can determine the location of the device.
I have searched this website for similar topics; what came up was triangulation calculation.
My question is: how do I know which towers the GSM device is near, and how do I connect to these three towers to calculate the location?
In order to do cell triangulation, you need to know the geographic position of the cell towers.
Either you undertake a huge effort to build a cell tower inventory or you are the network operator. In practice, only the network operators render this service, some allowing locations to be queried via an interface. However, this is not standardized.
You have to purchase a GSM module and connect it to a microcontroller.
Read the AT commands provided by the manufacturer first.
There is an AT command for your application; try searching the datasheet for the following:
AT+CREG
First configure the module using this command (refer to the datasheet of your GSM module).
Then turn on engineering mode using the command:
AT+QENG=2,1 (refer to the datasheet)
It will then automatically give you ncell, BCCH, dBm, BSIC, C1, C2, MCC, MNC, LAC and cell ID,
either periodically or on query, depending on how you configure it.
There are many websites that can triangulate a device if you feed them this information,
e.g. opencellid.org (a sketch of driving these commands from a script follows below).
I hope this helps!
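
If the module ends up wired to a PC or a Raspberry Pi rather than a bare microcontroller, the same exchange can be scripted; a rough sketch using pyserial (the port name /dev/ttyUSB0, the baud rate and the Quectel-style AT+QENG command are assumptions that depend on your module):

    # Sketch: query a GSM module for cell information over a serial link with pyserial.
    # Port name, baud rate and the AT+QENG command are module-dependent assumptions.
    import time
    import serial

    ser = serial.Serial("/dev/ttyUSB0", baudrate=115200, timeout=2)

    def send_at(command):
        """Send one AT command and return whatever the module answers."""
        ser.write((command + "\r\n").encode())
        time.sleep(1)
        return ser.read(ser.in_waiting or 1).decode(errors="replace")

    print(send_at("AT"))           # basic sanity check, expect "OK"
    print(send_at("AT+CREG?"))     # registration status; includes LAC and cell ID when AT+CREG=2 is set
    print(send_at("AT+QENG=2,1"))  # Quectel engineering mode: serving/neighbour cell info (MCC, MNC, LAC, cell ID)

    ser.close()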

microcontroller-based sensor project: Zigbee vs GSM

My application will have many microcontrollers with sensors monitoring a large area. The application requires all these microcontrollers to send their data to a master microcontroller. From the master microcontroller the data must go to a desktop PC via a serial connection and to a mobile application. Which one (Zigbee/GSM) will be suitable?
This completely depends on what you mean by "Wide Area." A few hundred square meters? A few hundred square kilometers? Zigbee is more cost effective and simpler to implement if you're within range. You could even mesh your nodes together to extend the collective reach of your network. Otherwise, well, you have no choice but to use something like GSM.
RF line-of-sight range on readily available XBee modules can be up to 2 miles. Higher-power models can be had with a 40-mile LOS range.
If you are within range, I would recommend Zigbee, as that saves you the cost of having a SIM card in each device.
Buy Zigbee modules that have an SDK and hardware support for mesh networking. That will give you the ability to talk to distant nodes via routing nodes. Unfortunately, Zigbee cannot do ad hoc mesh networking, so you need to know in advance which nodes will be routers, or program the routing yourself. Another 802.15.4 module (by Synapse) can solve all this, since it supports ad hoc mesh networking via the SNAP protocol. It is not Zigbee compatible, but Synapse modules have already been adopted by big players like Garmin, so this should not bother you. They also run much longer on batteries than Zigbee modules. They can also provide onboard analog and digital I/O without the need for an additional MCU (although you can connect one and give it control if you wish). There is also a USB stick that will enable your PC to talk to these modules.
