Traceroute: Can it trace a path from A to B correctly?

Traceroute is an application that traces the path from A to B (A is your machine and B is the server you want to reach). On Windows, you can type tracert. The main algorithm is (sketched in code below):
Send a UDP packet with TTL = 1.
Server A1 receives it and returns an ICMP packet to A because the TTL has expired.
--> now we know the first machine in between, for example A1.
Send a UDP packet with TTL = 2.
Server A1 receives it and forwards the UDP packet to server A2.
Server A2 receives it and returns an ICMP packet to A because the TTL has expired.
--> now we know the second machine in between, in this example A2.
Repeat until the packet reaches B. We can then track the path: A -> A1 -> A2 -> ... -> B.
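Here is a minimal sketch of that loop in Python, just to make the idea concrete. It is illustrative only: it assumes raw-socket privileges (root) for receiving the ICMP replies, uses the conventional traceroute base port 33434, and leaves out the retries, per-hop timing, and validation that a real traceroute does.

import socket

def trace(dest_name, max_hops=30, port=33434):
    dest_addr = socket.gethostbyname(dest_name)
    for ttl in range(1, max_hops + 1):
        # Raw socket to receive the ICMP "Time Exceeded" reply from the router at this hop.
        recv_sock = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_ICMP)
        recv_sock.settimeout(2.0)
        recv_sock.bind(("", port))
        # UDP socket whose probe will be discarded after `ttl` hops.
        send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
        send_sock.setsockopt(socket.IPPROTO_IP, socket.IP_TTL, ttl)
        send_sock.sendto(b"", (dest_addr, port))
        try:
            _, (hop_addr, _) = recv_sock.recvfrom(512)
        except socket.timeout:
            hop_addr = "*"
        finally:
            send_sock.close()
            recv_sock.close()
        print(ttl, hop_addr)
        if hop_addr == dest_addr:   # reached B
            break

# trace("example.com")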
Does this algorithm work correctly? At different times, an intermediate server can forward a message to a different next hop. For example, the first UDP message might be sent to A1, but a later one might be sent to another server, say B1. In that case, traceroute would not report the path properly.
Did I misunderstand something?

From the man page:
traceroute tracks the route packets take from an IP network on their way to a given host
So if you are trying to find one of the possible paths your packet may take, you'll find a friend in traceroute.
Now, because routing tables do not change every minute, the packets that you send will most probably take the same path as the one traced by traceroute.
Another important point that cannot be missed is the record route option in the IPv4 header.
Once you specify that you want to use this option, every router in the path will add its IP address to the options in the header. You can read more about it here. The catch is that the destination gets to know about the intermediate hops, not the source.
I see that you missed the role of ICMP echo request and reply messages in the description of traceroute. In case this was not intentional, take a look.
Update: You can see the record route option in action by doing a ping -R
ping -R: Turns on route recording for the Echo Request packets, and displays the route buffer on returned packets (ignored by many routers).

The algorithm works properly. Indeed, routing may change due to considerations about different servers along the way, such as server load or availability. Say you want to send a message from A to B. If the route could not change, what would happen if some server on the route went down? If routing couldn't be adjusted dynamically, the message could not be delivered to its destination in that case. Here is a different example: say you have a server that is used for heavy computation during the day but is idle at night. It's possible to allow it to pass traffic only during the night, so any route using it needs to change during the day.
To conclude, we can definitely say that without dynamic routing the Internet couldn't exist in its present form.
Addition:
Tracert sends messages from A to B and shows the hops along the way. These hops constitute a valid route from A to B at the time of execution. There is no guarantee that the connection between two adjacent points along the way is still valid after the hop has been completed. The only thing guaranteed is that, for each hop, there was a link between its two endpoints when the message sent by tracert passed through.

Related

nmap host discovery and data-length option

I am doing host discovery only (the -sn option), trying to determine which hosts are up and running.
My first command was:
nmap -sn -PS21,22,25,53,80,443,3389,8000,8080,42000 -PA80,443,8080,42000 -PU53 xxx.xxx.xxx.xxx/27
I am scanning public IPs, and the above command produces a result stating that 18 hosts are up.
However, when I run the above command with the --data-length option (either 32 or 56), it reports only 8 hosts up.
I was expecting to see more hosts, if anything, but not fewer. (The --data-length option adds bytes of data to every packet to simulate the ping tool, and it may help evade firewall rules set to drop 0-byte packets.)
I am reading Fyodor's book, but I am having trouble understanding the behavior above.
Any ideas?
Thanks
--data-length adds data to every packet. Your TCP discovery options (-PS, -PA) send packets that do not usually contain data, so with a payload attached those probes look unusual and are more likely to be dropped or ignored. The case where --data-length is useful is the -PE (ICMP Echo Request) discovery option. ICMP Echo Request datagrams are usually sent with some data payload, but Nmap defaults to empty probes, so IDS products like Snort will sometimes block or alert on them; adding a payload makes those probes look more like ordinary pings.
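As a rough illustration (untested variants of the command above, reusing the same target notation), you could keep --data-length away from the TCP probes and compare an ICMP-only sweep with and without the payload:
nmap -sn -PE xxx.xxx.xxx.xxx/27
nmap -sn -PE --data-length 56 xxx.xxx.xxx.xxx/27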

HTTP and HTTPS Response in wireshark

Can I measure the response time for an HTTP/HTTPS website using Wireshark packet captures? Most websites/blogs only show how to check the HTTP response time. If I want to know the HTTPS response time, how do I do it? Thank you.
Configure Wireshark to decrypt SSL, and then measure the response time as you would with HTTP (i.e., by subtracting the packet times). One easy way to decrypt SSL traffic is to configure your browser to save pre-master secrets to a log file and to configure Wireshark to look for secrets in that log file. As an example, to configure Chrome, you set the environment variable SSLKEYLOGFILE to the full path of the log file and restart any Chrome processes (including background processes). Then in Wireshark, open Preferences >> Protocols >> SSL and point the Pre-Master-Secret log at the same file. There is a more detailed walkthrough at: https://jimshaver.net/2015/02/11/decrypting-tls-browser-traffic-with-wireshark-the-easy-way/
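For example, on Linux the Chrome side of that setup might look like this (the log file path is arbitrary, and Chrome must be started fresh so it picks up the variable):
export SSLKEYLOGFILE="$HOME/premaster.txt"
google-chrome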
To get the response time, find a packet from the request/response conversation. Some of those packets should be highlighted in green with a protocol of "HTTP" if it was successfully decrypted. Right click on one of the packets and select "Follow >> SSL Stream". This should filter all the packets in the main window, limiting them to the TCP stream of interest. From there, you can scroll to find the last packet from the request, the first packet from the response, and the last packet from the response. Then, depending on what you mean by response time, just subtract the two times which cover that. For example, if you want the time from when the request was sent to the time when the response started, just subtract the time of the last request packet from the time of the first response packet.
You should also be able to use the other websites you referred to in your question to get the response timing. The process is essentially the same once you have the SSL stream decrypted.

sendto() return error code ENETDOWN

I have run into a very strange situation.
In my program, the sendto() function returns the error code ENETDOWN (Network is down) even though the network is up and a ping succeeds.
It happens only when the UDP stream goes to another network through several gateways, and it happens only sometimes, not always.
If I run the same code within the same subnetwork, the ENETDOWN error never occurs.
So I traced the sendto() call down into the kernel.
The neigh_hh_output() function in ip_finish_output2() of ip_output.c calls hh->hh_output(), and that call returns the ENETDOWN error code.
Under normal operation, hh->hh_output() points to dev_queue_xmit() in dev.c and the packet is sent to the network.
When the issue happens, it seems to be assigned to the neigh_blackhole() function in neigh_destroy() of neighbour.c. neigh_blackhole() returns -ENETDOWN.
But I don't know when neigh_destroy() is called, or why.
I have been struggling with this problem for several weeks.
My test setup looks like this:
Test machine --- gateway (1.1.1.1) --- firewall (1.1.1.2) --- network --- Destination.
Initially, a UDP connection is established between my test machine and the destination, and the gateway address of my test machine is 1.1.1.1.
Traffic between the test machine and the destination flows without problems. Then, after some time (or sometimes right away), transmitting suddenly fails with a "Network is down" error (error number 100, ENETDOWN).
At that moment, if I ping the destination from my test machine, the ping responds OK.
When I capture packets in front of my test machine, an ICMP redirect message arrives from the gateway (1.1.1.1). Its information is "Redirect for Host" and the new gateway address is 1.1.1.2.
When my test machine's OS (Linux 3.0.35) receives the ICMP redirect message, it changes the virtual function pointer hh->hh_output() from dev_queue_xmit() to neigh_blackhole(). Eventually, neigh_blackhole() returns -ENETDOWN.
So I changed the gateway address of my test machine to 1.1.1.2. After that, the "Network is down" error did not happen again.
I think this is strange behavior. According to the man page, sendto() shouldn't return ENETDOWN in this situation, but it does.
Anyway, if sendto() returns -ENETDOWN even though the network interface is up, how can I overcome this error? Do I need to re-establish the UDP stream?
I wonder whether this issue is a bug in Linux kernel 3.0.35.
If I learn or find out anything more about this issue, I will update here.
Please refer to my case if anyone runs into a similar issue.
It is claimed that a neighbor will be deleted for a variety of reasons, including the host changing its layer 2 address while retaining its layer 3 address, or becoming unreachable. See this. It can also be deleted if the gateway for the neighbor sends an ICMP redirect and the processing of redirects is enabled in the kernel.
If the neighbor is in the process of being deleted, then the packet is dispatched to neigh_blackhole which unconditionally returns -ENETDOWN. See the code here.
The man page for sendto() would lead you to believe that you shouldn't get -ENETDOWN under such circumstances, but this appears to be incorrect.
I would try to get a network capture when this occurs and look for ICMP messages indicating your destination is not reachable or for a change in the MAC address for the destination (or possibly a duplicate IP address) via ARP packets or the MAC addresses on the arriving packets from the destination.
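On the sending side, a defensive workaround (my own sketch, not something established in this thread) is to treat ENETDOWN from a UDP sendto() as transient and retry, recreating the socket if needed; depending on your network design you might also disable acceptance of ICMP redirects (the net.ipv4.conf.all.accept_redirects sysctl). A minimal Python sketch of the retry idea:

import errno
import socket
import time

def send_with_retry(payload, addr, retries=5, delay=0.2):
    # Retry a UDP send when the kernel transiently reports ENETDOWN,
    # e.g. while a neighbour entry is being torn down after an ICMP redirect.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        for _ in range(retries):
            try:
                sock.sendto(payload, addr)
                return True
            except OSError as exc:
                if exc.errno != errno.ENETDOWN:
                    raise                      # a different, real failure
                time.sleep(delay)              # let the kernel settle
                sock.close()                   # recreate the socket and retry
                sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        return False
    finally:
        sock.close()

# send_with_retry(b"hello", ("192.0.2.10", 5000))  # hypothetical destination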

Matching up incoming packets with their corresponding request (with noise)

I'm currently building a black-box fuzzing tool and I have encountered the following problem:
Suppose I send a server a fuzzed packet that I construct and get some packets back from the server. I also get some additional packets from other parts of the same server.
Provided I can look at all the incoming and outgoing packets (this is not a request-response system, it's an RPC-based online game) and I have no information what the response should look like, how do I filter out those packets that were sent in response to the fuzzed packet from the rest of the stream?
Just an example: you send an RPC like "give a player a gun with ID 5" and the server sends that player RPCs like "give me an array of the guns you have" and "tell me how much ammo you got". I want to see how the server reacts if I send malformed input, e.g. negative or big integers, in this case. My problem is the fact that the server sends these on a random basis all the time, so I want to filter out the requests that are sent in response to my fuzzed RPC.
A statistical approach will do as I assume there's no way to determine this with full confidence.
The fact that "it's not a request-response system, it's RPC-based" should not change anything about the classic scheme, unless you/I missed some details in your question:
You must construct a tuple from the request with (source IP, destination IP, source port, destination port),
and then watch for packets carrying the reverse tuple (destination IP, source IP, destination port, source port) to catch the response(s).
EDIT: for TCP, of course; for connectionless protocols, well, that's a game of heuristic guesses.
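A rough Python sketch of that matching step (the Packet structure and field names are made up for illustration; it assumes you already parse captured packets into addresses and ports):

from collections import namedtuple

# Hypothetical parsed-packet structure; a real tool would fill this from a capture.
Packet = namedtuple("Packet", "src_ip dst_ip src_port dst_port payload")

def flow_key(pkt):
    return (pkt.src_ip, pkt.dst_ip, pkt.src_port, pkt.dst_port)

def responses_to(request, captured_packets):
    # A packet is treated as a response if its flow is the exact reverse
    # of the request's flow (the destination talking back to the source).
    reverse = (request.dst_ip, request.src_ip, request.dst_port, request.src_port)
    return [p for p in captured_packets if flow_key(p) == reverse]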

How does a http client associate an http response with a request (with Netty) or in general?

Is an HTTP endpoint supposed to respond to requests from a particular client in the order they are received?
What if that doesn't make sense, as with requests handled by a cluster behind a proxy, or requests handled with NIO where one request finishes faster than another?
Is there a standard way of associating a unique ID with each HTTP request so it can be matched with its response? How is this handled in clients like HttpComponents HttpClient or curl?
The question comes down to the following case:
Suppose I am downloading a file from a server and the request is not yet finished. Can the client complete other requests on the same keep-alive connection?
Whenever a TCP connection is opened, the connection is recognized by the source and destination ports and IP addresses. So if I connect to www.google.com on destination port 80 (default for HTTP), I need a free source port which the OS will generate.
The reply of the web server is then sent to the source port (and IP). This is also how NAT works, remembering which source port belongs to which internal IP address (and vice versa for incoming connections).
As for your edit: no, a single HTTP connection can execute only one command (GET/POST/etc.) at a time. If you send another command while you are still retrieving data from a previously issued command, the results may vary per client and server implementation. I guess that Apache, for example, will transmit the result of the second request after the data of the first request has been sent.
I won't re-write CodeCaster's answer because it is very well worded.
In response to your edit - no. It is not. A single persistent HTTP connection can only be used for one request at once, or it would get very confusing. Because HTTP does not define any form of request/response tracking mechanism, it simply would not be possible.
It should be noted that there are other protocols which use a similar message format (conforming to RFC822), which do allow for this (using mechanisms such as SIP's cSeq header), and it would be possible to implement this in a custom HTTP app, but HTTP does not define any standard mechanism for doing this, and therefore nothing can be done that could be assumed to work everywhere. It would also present a problem with the response for the second message - do you wait for the first response to finish before sending the second response, or try and pause the first response while you send the second response? How will you communicate this in a way that guarantees messages won't become corrupted?
Note also that SIP (usually) operates over UDP, which does not guarantee packet ordering, making the cSeq system more of a necessity.
If you want to send a request to a server while another transaction is still in progress, you will need to create a new connection to the server, and hence a new TCP stream.
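As a small sketch of that last point, using Python's standard http.client (the host and paths are hypothetical): each request that must be in flight at the same time gets its own connection, and therefore its own TCP stream.

import http.client
from concurrent.futures import ThreadPoolExecutor

def fetch(path):
    # One connection (one TCP stream) per in-flight request.
    conn = http.client.HTTPConnection("example.com", 80, timeout=10)
    try:
        conn.request("GET", path)
        return conn.getresponse().read()
    finally:
        conn.close()

# Overlapping requests go over separate connections rather than one keep-alive socket.
with ThreadPoolExecutor(max_workers=2) as pool:
    large, small = pool.map(fetch, ["/large-file", "/status"])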
Facebook did some research into this while they were building their CDN, and they concluded that you can efficiently have 2 or 3 open HTTP streams at any one time, but any more than that reduces overall transfer time because of the extra packet overhead cost. I would link to the blog entry if I could find the link...
