How to minimize the flooding of RREQ packets in AODV if an intermediate node has replied to the source with the path?

Suppose we have a situation in the AODV protocol where the RREQ (route request) packet keeps moving toward the destination even though a node at TTL=1 has already replied to the route request. For example, n1, n2 and n3 are three nodes at TTL=1 and n2 replies to source S, but n1 and n3 have rebroadcast the RREQ packet toward destination D, which creates unnecessary flooding in the network.
I thought of a naive solution to minimize this flooding: n2 could broadcast another packet announcing that it has already replied to the RREQ from S for D, perhaps carrying a higher destination sequence number or the same broadcast ID as the RREQ. But that just creates another chance of flooding. So, are there any ways to minimize this problem more effectively?
NOTE: AODV is a reactive routing protocol for mobile ad-hoc networks (MANETs) that relies on routing tables.

This is a research topic, and several solutions have been proposed. One efficient solution is the expanding-ring search:
The source node starts by broadcasting the RREQ with a small TTL value of 1. The RREQ reaches the adjacent nodes, which check whether they have an up-to-date route to the destination. Any node with such a route replies with an RREP; the rest cannot rebroadcast because the TTL has expired. If no node has the route, the source rebroadcasts the RREQ with the TTL increased to 2, and so on. This way, the RREQ is rebroadcast only when nodes do not have a path to the destination.
This method still involves some repeated RREQ broadcasts, but it is an optimization trade-off, and it remains one of the better-known methods for this problem.
Hope it's clear now.
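A minimal sketch of that expanding-ring search might look like the following. The constants are the defaults suggested in RFC 3561; `send_rreq` is a hypothetical callback (not a real AODV API) that broadcasts an RREQ with the given TTL and returns a route, or None after a timeout.

```python
# RFC 3561 default ring-search parameters.
TTL_START, TTL_INCREMENT, TTL_THRESHOLD, NET_DIAMETER = 1, 2, 7, 35

def ring_search(send_rreq):
    """Retry route discovery with a growing TTL until an RREP arrives."""
    ttl = TTL_START
    while True:
        route = send_rreq(ttl)
        if route is not None:
            return route
        if ttl >= NET_DIAMETER:
            return None          # searched the whole network, give up
        # No RREP within this ring: widen the search.
        ttl = NET_DIAMETER if ttl >= TTL_THRESHOLD else ttl + TTL_INCREMENT
```

With these defaults a source tries TTLs 1, 3, 5, 7 and then jumps to the full network diameter, so nodes far beyond the replying ring never see the early RREQs.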

By default, a node in the AODV protocol just checks the type of the received packet; if it is an RREQ, it forwards it to all of its neighbours. If you want to minimize RREQ flooding, you can add your own condition in the Recvpacket() function; better still, you can use the number of hops to build a new condition.
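A hedged sketch of what such a hop-count condition could look like in a node's receive handler. All names here (`recv_packet`, `MAX_HOPS`, the packet fields) are assumptions for illustration, not part of any real AODV implementation:

```python
MAX_HOPS = 5  # assumed cut-off; tune for your network diameter

def recv_packet(node, pkt):
    """Rebroadcast an RREQ only if it is new and hasn't travelled too far."""
    if pkt["type"] != "RREQ":
        return node.handle_other(pkt)
    # Drop RREQs already seen (same source + broadcast ID) or too many hops out.
    seen = (pkt["src"], pkt["broadcast_id"])
    if pkt["hop_count"] >= MAX_HOPS or seen in node.seen_rreqs:
        return None
    node.seen_rreqs.add(seen)
    pkt["hop_count"] += 1
    node.broadcast(pkt)
```

The duplicate check (source plus broadcast ID) is what standard AODV already does; the hop-count guard is the extra condition the answer suggests.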

Related

Understanding format of RIB dumps from Oregon Route-views

I am working on a project in which I need to analyse the rib-dumps from the Oregon Routeviews Project.
I download the .bz2 file from here for a specific time and date for a specific node. These files are generated every 2 hours.
Then I unzip it and parse it using a zebra parser.
In the end, I get a text file with almost a million entries in the following format
194.33.63.0/24 58511 8468 31493 31493
There are also a lot of entries with the same last number but different IP in the beginning.
For example
194.28.28.0/22 58511 31500 50911
194.28.28.0/23 58511 31133 50911
My inference is that these numbers are Autonomous System numbers and they somehow denote BGP Hops, but I am not clear how they relate to the IP address in the starting. And what exactly is the source/destination AS?
I really think you should go and do some reading on how BGP works and what the routing information carried by the BGP messages you are looking at means.
To get you started...
...a route in BGP speak is a prefix and some attributes. Key among the attributes are the next-hop and the AS-Path. In announcing a route to a BGP peer (neighbour) the BGP router is saying that it can reach the prefix and if packets with destinations in the prefix are forwarded to the next-hop, they will be forwarded on towards their destination. The AS-PATH lists the ASes through which packets are (expected to) travel on their way to the destination.
So what you are seeing is reachable prefixes and the AS-PATH attribute for each one. I'm guessing you left out the next-hop (for eBGP, that will generally be the/an address of the BGP router which is advertising the route -- but in any case all eBGP routes will generally have the same next-hop).
The AS-PATH can be read from left to right: the first AS is the one from whom the route was learnt, the last AS is the one that contains the prefix. Packets forwarded to the next-hop are (currently) expected to travel through those ASes, in that order, on their way to their destination. So the first AS would be the source -- the immediate source of the route. The last AS can be called the destination, but is also known as the origin -- the origin of the route.
[Technically, the AS-Path should be read from right to left, and lists the ASes which the route has traversed this far. Most of the time that's the same as reading left to right for packets traversing the network towards their destination.]
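To make that concrete, here is a small sketch that splits one of the dump lines from the question into a prefix and an AS-PATH. The field layout is assumed from the examples shown (prefix first, then the AS numbers):

```python
def parse_rib_line(line):
    """Split a parsed zebra-dump line into (prefix, AS-PATH list)."""
    fields = line.split()
    prefix = fields[0]
    as_path = [int(asn) for asn in fields[1:]]
    return prefix, as_path

prefix, path = parse_rib_line("194.28.28.0/22 58511 31500 50911")
# Reading left to right: path[0] is the AS the route was learnt from
# (the immediate source), path[-1] is the origin AS of the prefix.
print(prefix, path[0], path[-1])  # 194.28.28.0/22 58511 50911
```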
In your example: AS 58511 is the source (the AS the route was learnt from), AS 50911 is the origin (or destination), and the prefix 194.28.28.0/22 should belong to the origin AS 50911.
I think you are confused about /23 vs /22. 194.28.28.0/23 is not a different IP; it is the same address with a different prefix length, i.e., /23. Autonomous systems register their IP address blocks with prefix lengths in the IRR. A less specific prefix (/22) covers more end hosts; a more specific prefix (/23) covers fewer. Moreover, you should read about prefix lengths.
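Python's standard-library `ipaddress` module makes the /22 vs /23 point concrete:

```python
import ipaddress

a = ipaddress.ip_network("194.28.28.0/22")  # less specific: 2^(32-22) addresses
b = ipaddress.ip_network("194.28.28.0/23")  # more specific: 2^(32-23) addresses
print(a.num_addresses, b.num_addresses)     # 1024 512
print(b.subnet_of(a))                       # True: the /23 lies inside the /22
```

So both RIB entries describe overlapping address space; the /23 route, being more specific, would win on longest-prefix match.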

Why is an empty TCP segment at right edge of receive window not acceptable?

The TCPv4 specification (RFC 793) classifies a received segment as unacceptable if it has zero length and a sequence number equal to RCV.NXT+RCV.WND while the receive window is non-zero (second row of the acceptability table).
This essentially means that the segment will be discarded, other than possibly sending an ACK. No ACK processing or send window update will be done.
What is the justification for this?
Consider this scenario:
Host A sends all possible data segments to host B, just exhausting the receive window of B.
Host A shortly also sends an empty segment, e.g. a window update or acknowledgement of received data. This segment has sequence number equal to the right edge of the receive window of host B (RCV.NXT+RCV.WND), since it was set to the latest SND.NXT of host A.
The mentioned data packets are lost in the network or delayed, and host B receives the empty segment first.
Host B will classify the empty segment as not acceptable, and drop it, ignoring any acknowledgement or window update.
Is there some part that I am not understanding correctly? Is this scenario really possible?
note: I ask here instead of on networkengineering.stackexchange.com since I encountered the issue while implementing a TCP/IP stack and these protocol details seem closer to programming than what is commonly understood as network engineering.
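For reference, the RFC 793 acceptability table can be sketched as follows (this is a simplification that ignores 32-bit sequence-number wraparound, which a real stack must handle with modular comparisons):

```python
def acceptable(seg_seq, seg_len, rcv_nxt, rcv_wnd):
    """RFC 793 segment acceptability test, without wraparound handling."""
    if seg_len == 0 and rcv_wnd == 0:
        return seg_seq == rcv_nxt
    if seg_len == 0:
        # The row the question is about: the right edge RCV.NXT+RCV.WND
        # is excluded, so an empty segment landing exactly there is dropped.
        return rcv_nxt <= seg_seq < rcv_nxt + rcv_wnd
    if rcv_wnd == 0:
        return False
    # Non-empty segment: accept if either end overlaps the window.
    return (rcv_nxt <= seg_seq < rcv_nxt + rcv_wnd) or \
           (rcv_nxt <= seg_seq + seg_len - 1 < rcv_nxt + rcv_wnd)

# The scenario from the question: RCV.NXT=1000, RCV.WND=500, and an empty
# segment arrives at the right edge (sequence number 1500) -> unacceptable.
print(acceptable(1500, 0, 1000, 500))  # False
```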

Network layer: LAN network

When we send packets from one router to another at the network layer and the packet size is greater than the MTU (maximum transmission unit) of the link, we have to fragment the packet. My question is: suppose we need to add padding bits to the last fragment, where do we add them (at the LSB or MSB end), and how does the destination router differentiate between packet bits and padding bits?
I want you to consider the following things first:
The limit on the maximum size of an IP datagram is imposed by the data link protocol.
IP is the highest-layer protocol implemented at both routers and hosts.
Reassembly of the original datagram is done only at the destination host. This takes the extra work off the routers in the network core.
I will use the following example to help you get to the answer.
Here the initial length of the packet is 2400 bytes, which needs to be fragmented to fit an MTU limit of 1000 bytes.
There are only 13 bits available for the fragment offset, and the offset is given as a multiple of eight bytes. This is why the data fields of the first and second fragments are 976 bytes each (the largest multiple of 8 that is no larger than 1000 − 20 = 980 bytes), making those fragments 996 bytes in total. The last fragment carries the remaining 428 bytes of payload (448 bytes in total).
The offsets are 0, 976/8 = 122, and 1952/8 = 244.
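The arithmetic above can be sketched as a small fragmentation routine (a simplified model assuming a fixed 20-byte header and no options, not a full IP implementation):

```python
def fragment(total_len, mtu, header_len=20):
    """Split a datagram into (offset-in-8-byte-units, payload_len, more_fragments)."""
    payload = total_len - header_len
    # Non-last fragments must carry a multiple of 8 payload bytes,
    # because the fragment offset field counts 8-byte units.
    max_frag_payload = ((mtu - header_len) // 8) * 8
    frags = []
    offset = 0
    while payload > 0:
        size = min(max_frag_payload, payload)
        payload -= size
        frags.append((offset // 8, size, payload > 0))
        offset += size
    return frags

print(fragment(2400, 1000))
# [(0, 976, True), (122, 976, True), (244, 428, False)]
```

Note the last tuple: the final fragment's 428-byte payload is not a multiple of 8, and its more-fragments flag is cleared.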
When these fragments reach the destination host, reassembly needs to be done. The host uses the identification, flags, and fragment offset fields for this task. To determine which fragments belong to which datagram, it uses the source address, destination address, and identification to identify them uniquely. The offset values and more-fragments bits are used to determine whether all fragments have arrived.
Answer to your question
The payload only needs to be a multiple of 8 for non-last fragments. Expressing the offset in 8-byte units lets the host compute the starting position of each subsequent fragment; since nothing follows the last fragment, its payload need not be a multiple of 8 and no padding is required. The host identifies the last fragment by checking the more-fragments flag.
A bit of additional information: it is not the responsibility of the network layer to guarantee delivery of the datagram. If one or more fragments fail to arrive, it simply discards the whole datagram. The transport layer above it will take care of this, if it is using TCP, by asking the source to retransmit the data.
Reference: Computer Networking-A Top Down Approach, James F. Kurose, Keith W. Ross (Fifth Edition)
You don't need to add any padding bits. All the bits are simply pushed down the route until the full frame has been sent.

Chord DHT response method

I am building a Chord DHT in Go (however the language part isn't important).
And I am trying to figure out the response behaviour between nodes. If I want to send a successor request to Node C, but it has to go through Node A -> Node B first before arriving at Node C, what is the best way for Node C to respond to the original node?
I have come up with three distinct methods, but I don't know which one is more idiomatic for DHTs:
1. When a node makes a request, it waits for the response on the original TCP connection, so the response takes the reverse of the path the request took.
2. Make the request and then forget about it; when Node C receives the request, it sends the response directly back to the original node, indicated by the sender (IP address) field in the request message.
3. Send the response to the sender's node ID as if it were any other message, so it is routed around the Chord ring.
I can't figure out which is the best method to use.
The only reason you use routing in Chord is to find resources. That's why each node should know not just its successor and predecessor but also additional nodes at distances of 2^n; this way you can achieve a lookup performance of O(log N). You can read the Wikipedia article about Chord for details.
So you should attach the source node's address to the message you send toward Node C, so that C can respond directly. This will have much better performance overall.
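A hypothetical sketch of that recommendation (the second method from the question): the lookup message carries the originator's address, and whichever node owns the key replies to it directly instead of unwinding the forwarding path. `send`, `owns`, and `closest_preceding_finger` are assumed helpers, not part of any real Chord library:

```python
def handle_lookup(msg, node, send):
    """msg carries the originator's address; send(addr, payload) delivers it."""
    if node.owns(msg["key"]):
        # Final node: answer the originator directly over the network,
        # skipping the A -> B -> C forwarding chain on the way back.
        send(msg["reply_to"], {"successor": node.address})
    else:
        # Not ours: forward toward the key using the finger table.
        send(node.closest_preceding_finger(msg["key"]), msg)
```

This keeps the request path at O(log N) hops while the response is a single hop, and no node has to hold TCP connections open for in-flight lookups.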

Is it possible that in a network, the delay from router A to B is different from the delay from router B to A?

Considering that the metric is delay in a distance-vector routing algorithm, is it possible that the delay from router A to B is different from the delay from router B to A? If yes, under which conditions?
Thanks.
The algorithm assumes the graph is bidirectional. Of course, it's possible for the delays to be different in each direction in practice: for example, if B is transmitting heavily to A, then traffic from A to B is likely to be faster than from B to A, since traffic from B will have to get in line at the end of a queue.
Delay and metric are two different things.
Delay is the time it takes for a packet to traverse the network. If a link is heavily utilized in one direction and there is some kind of buffering device (such as a switch) on the link you might have different delays in the network traffic depending on direction.
Metrics are values associated with entries in a routing table that indicate the "cost" of different routes. If A and B have static routing entries, they can definitely be configured with different metrics for each direction of the same link.
Are you assuming both hypothetical circumstances run at the exact same time? If not I suppose there could be a spike on the traffic for one of the routers at any given time that bogs down your 'wanted' traffic.
Certainly this is possible, but to give you more details you probably need to be more specific with the question.
With regards to your specific question about Metrics and Distance Vector routing algorithms, yes, A can be configured to think that B is further away than B thinks A is, although as mentioned by one of the other answers, that doesn't necessarily mean the delay is different although it may in fact be.
In practice though, there are lots of questions to consider:
Is router A adjacent to router B? If not, then you certainly could have different delays because inbound packets may take a different path than outbound packets.
If they are adjacent, what kind of connectivity do they have? Are they the same kind of router? Imagine a router at the end of an asymmetric DSL line. Of course the propagation delay wouldn't be asymmetric, but the delay could be higher in one direction as a result of traffic congestion. (This scenario also gives a concrete example of why you might want A to think the link to B has a higher cost than B thinks the link to A has.)
In practice, the definition of delay makes a big difference too. Are you thinking of the computed cost? Just propagation delay? Or just the link cost? If router B is sending more traffic than router A, responding packets from B to A may take longer to be processed by B than A takes when sending (the same may apply to intermediary switches, especially for things like multicast packets; some routers and/or switches take longer to process multicast and other "special" packets). So in this scenario the actual delay may differ even though the cost the distance-vector protocol uses says it is the same.
Hope this answer helps. Good luck,
--jed
