Check a message on a certain port - networking

I have written an integration test in C++ which sends a message (a simple text) to a client using gRPC.
I have specified localhost for the client and used a certain port, e.g., 8000.
How can I check on localhost (Windows 10) whether the message has really arrived?
The integration test passes, but I want to verify that the result is actually correct.

You can use a network sniffer like Wireshark to capture the traffic: start Wireshark (or any similar tool), set it to capture on the loopback interface, run your test, and filter for the port you are using. You will see whether the messages arrive.
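If you'd rather check programmatically than with a sniffer, here is a minimal sketch in Java (assuming port 8000, as in the question) that stands in for the server and dumps whatever arrives. Point the test client at this listener instead of the real server: the gRPC call itself will fail, because this is not a real gRPC server, but any bytes logged prove the client really sends to that port.

import java.io.InputStream;
import java.net.ServerSocket;
import java.net.Socket;

public class PortDump {
    public static void main(String[] args) throws Exception {
        // Listen on the port the test targets (8000 in the question).
        try (ServerSocket server = new ServerSocket(8000);
             Socket client = server.accept();
             InputStream in = client.getInputStream()) {
            System.out.println("Connection from " + client.getRemoteSocketAddress());
            byte[] buf = new byte[1024];
            int n;
            // With gRPC you will at least see the HTTP/2 connection
            // preface arrive before the client gives up.
            while ((n = in.read(buf)) != -1) {
                System.out.println("Received " + n + " bytes");
            }
        }
    }
}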

Related

How and why do servers use random port numbers?

Well-known services usually use a pre-defined port number on the server-side.
However, I have realized that this is not always the case. Some services, and games for example, seem to pick a random port from a pre-defined range.
When you connect to a pre-defined port number, the request comes first, so the server can determine the client's port from it. But if the service's port is not predetermined, how does the client know which port to send the request to? Also, what is the reason for always using a different port, and how does this happen?
how does the client know to which port to send the request?
This depends on the specific protocol. For example with protocols like SIP, H.323 or FTP there are predefined port numbers for the signaling channel. The actual data transfer though is done by new connections on dynamic ports. These ports are advertised within the signaling channel.
In other cases there is no such signaling channel on a predefined port number. This is typically the case for servers which have no IANA assigned port number. It also happens when multiple instances (with different configurations) of the server should run on the same system and these simply cannot use the same port number. In this case the relevant IP and port might be advertised for example through DNS SRV records. And of course there might be other ways, like publishing the information on some web site or similar.
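As a concrete illustration of the DNS SRV case, here is a small sketch in Java using the JDK's built-in JNDI DNS provider. The service name _sip._tcp.example.com is a placeholder; a real deployment would publish SRV records under its own domain.

import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.directory.Attribute;
import javax.naming.directory.Attributes;
import javax.naming.directory.DirContext;
import javax.naming.directory.InitialDirContext;

public class SrvLookup {
    public static void main(String[] args) throws Exception {
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY,
                "com.sun.jndi.dns.DnsContextFactory");
        DirContext ctx = new InitialDirContext(env);
        // Ask DNS for the SRV records of the (hypothetical) service.
        Attributes attrs =
                ctx.getAttributes("_sip._tcp.example.com", new String[] { "SRV" });
        Attribute srv = attrs.get("SRV");
        if (srv != null) {
            // Each record reads: priority weight port target
            for (int i = 0; i < srv.size(); i++) {
                System.out.println(srv.get(i));
            }
        }
    }
}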
Also, what is the reason for always using a different port ...
Again it depends on the specific protocol. With SIP, H.323 or FTP for example the data connection is specific to the client and it will simply use a port which is free on the system for this. And there can be multiple connections at the same time from the same or from different clients which all use different ports. Any restrictions regarding the range of the port are usually only done to work better with firewalls, so that these don't need to open a huge port range but can allow a smaller port range and thus lower the attack surface.
... and how does this happen?
Just let the system pick a port by not giving a specific value, i.e., by binding to port 0. Or, if a port should be used from a specific range, the server simply figures out which port is available by trying to bind to each one in turn and moving on to the next if the bind fails. Both techniques are sketched below.
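A minimal Java sketch of both techniques; the range 50000-50010 is an arbitrary example:

import java.io.IOException;
import java.net.ServerSocket;

public class PortPicker {
    public static void main(String[] args) throws IOException {
        // Ask for port 0 and the OS assigns any free ephemeral port.
        try (ServerSocket any = new ServerSocket(0)) {
            System.out.println("OS-assigned port: " + any.getLocalPort());
        }

        // Or walk a fixed range and take the first port that binds.
        for (int port = 50000; port <= 50010; port++) {
            try (ServerSocket s = new ServerSocket(port)) {
                System.out.println("Bound to port " + port);
                break; // a real server would keep this socket open
            } catch (IOException bindFailed) {
                // Port is already in use; try the next one.
            }
        }
    }
}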
if the service's port is not predetermined, how does the client know to which port to send the request?
The port has to be known ahead of time, entered by the user, or advertised somewhere the client can find it.
what is the reason for always using a different port
Many reasons: security, network/firewall restrictions, etc.

How does the client know which transport protocol to use?

Let's assume that I start a server at one of the computers in my private network (192.168.10.10:9900).
Now when making a request from some other computer in the same network, how does the client computer (the OS?) know which protocol to use, i.e., which protocol the server follows? [TCP or UDP]
EDIT: As mentioned in the answers, I was basically looking for a default protocol which will be used by the client in the absence of any transport protocol information.
TCP and UDP work at the transport layer of the TCP/IP model. Their main difference is that TCP has mechanisms to ensure that messages arrive, while UDP is lighter, its virtue being faster delivery of information. Which protocol is used is always defined by the application that uses it.
The reference to the private server at 192.168.10.10:9900 is rather vague. To be more precise, let's say we have an Apache web server running at 192.168.10.10:9900 (the default port is 80 when the server is installed, but it can be changed in the configuration).
Web servers (Apache, IIS, etc.) work over TCP because when a client (computer, phone, etc.) requests a page through a browser (Chrome, Firefox, etc.), ideally the whole website should arrive, not just some pieces of it. This kind of server uses TCP so that in the end the user obtains the complete page, even if a few extra milliseconds are sacrificed on the checks TCP performs.
Now for the client side. A user visiting a web page from any browser (Chrome, Firefox, etc.) will use TCP, since the browser is already built to send its requests, and receive the website's responses, over that protocol.
This behavior repeats for any client/server application. For an application on the UDP side, consider DHCP, the service used to obtain an IP address when connecting a device to a Wi-Fi network. This service aims to be as fast as possible rather than as reliable as possible, since you want the device on the network quickly, so it uses UDP, and any device connecting to a Wi-Fi network sends these messages over that protocol.
Finally, if you want to find out quickly whether a specific application uses TCP or UDP, you can use Wireshark, which lets you inspect the messages leaving the device and shows the protocol used at each layer.
There is no reason any client would make a request to your server, so why would it care what protocol it follows? Clients don't just randomly connect to things to see if there's a server there. So it doesn't make any difference to any client.
Normally, the client will use TCP by default. If you start the server in UDP mode, then running curl -XGET 192.168.10.10:9900/test-page will give you back a curl: (7) Failed to connect to 192.168.10.10 port 9900: Connection refused error. You can try it yourself: start a UDP listener with nc -lvp 9900 -u and you will get exactly that result.
The answers here are pointing to some default protocol. It's not that. Whenever you start an application, say an HTTP server, the server's code opens a socket, which can be TCP or UDP; since HTTP on port 80 runs over TCP, the code creates a TCP socket. Similarly, every other network application decides which transport-layer protocol to use (TCP or UDP) based on its requirements. A DNS client, for example, will create a UDP socket to connect to the DNS server, since DNS on port 53 mostly runs over UDP. TCP and UDP have different use cases, advantages, and disadvantages, and the decision to implement a server on one or the other is taken based on those trade-offs.
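To make that concrete, here is a minimal Java sketch of how the choice is baked into the server's code: opening a stream socket commits the application to TCP, opening a datagram socket commits it to UDP. The port numbers are placeholders.

import java.io.IOException;
import java.net.DatagramSocket;
import java.net.ServerSocket;

public class TransportChoice {
    public static void main(String[] args) throws IOException {
        // An HTTP-style server chooses TCP by opening a stream socket.
        try (ServerSocket tcp = new ServerSocket(8080)) {
            System.out.println("TCP listener on port " + tcp.getLocalPort());
        }
        // A DNS-style server chooses UDP by opening a datagram socket.
        try (DatagramSocket udp = new DatagramSocket(5300)) {
            System.out.println("UDP listener on port " + udp.getLocalPort());
        }
    }
}

A client has to match that choice: it connects with a TCP socket or sends datagrams with a UDP socket, depending on what the protocol it speaks prescribes.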

Internet Connectivity Check

Hi all and thanks in advance for your help.
I have a situation where I need to test an unstable Internet connection on one internal network and send out an email alert about any issues (obviously requiring a working Internet connection) through another network.
I have hardware with dual NICs and plan on writing something simple in VB.
Is there a way I can disable ping on my 'good' connection, forcing it through the test network, while still allowing SMTP?
I've looked into routing and done some basic testing, but it seems the ping automatically reroutes through the good network shortly after the bad network fails.
Any advice warmly received.
You should not even think about blocking ICMP. This is a good way to cause many, many problems. Instead, you should explicitly specify which interface to use for the ping requests.
If you're using a command-line ping, you usually do this by specifying the source IP to use for sending the packets. For example, on OS X, you can run the command
ping -c 5 -S 10.0.1.13 8.8.8.8
to ping Google's public DNS server (8.8.8.8) using the interface whose IP address is 10.0.1.13. If the interface with that IP is down, the ping will fail.
The specific flag varies from implementation to implementation.
If you're writing your own ping code, IIRC, you just need to bind the socket to that source address with the bind() system call.
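Java, for instance, cannot send raw ICMP without native code, but the same bind() principle applies to any socket type. A hedged sketch, assuming 10.0.1.13 is the address of the interface under test (as in the ping example above) and using a UDP probe in place of ICMP:

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.net.InetSocketAddress;

public class SourceBoundProbe {
    public static void main(String[] args) throws Exception {
        // Bind to the address of the interface we want to test; this
        // fails outright if no local interface owns that address.
        InetAddress source = InetAddress.getByName("10.0.1.13");
        try (DatagramSocket socket = new DatagramSocket(new InetSocketAddress(source, 0))) {
            byte[] payload = "probe".getBytes("US-ASCII");
            // 33434 is the conventional traceroute base port, an
            // arbitrary choice for a reachability probe.
            DatagramPacket packet = new DatagramPacket(
                    payload, payload.length, InetAddress.getByName("8.8.8.8"), 33434);
            socket.send(packet); // leaves via the interface owning 10.0.1.13
        }
    }
}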

Ethernet Data Traffic hidden from capture

I have a puzzle I am not able to figure out, I would appreciate any help.
I am connected to a remote desktop using the default Windows Remote Desktop utility (Windows 8 locally, Windows 7 remotely).
The remote desktop is not in the same sub-network as my own.
Connection is made through default port 3389. Using Wireshark locally I can confirm the TCP connection being established and the data flow.
Running Wireshark on the remote desktop, I don't see any flow of data between the two computers.
If I send an ICMP ping from the remote desktop to my computer, it works well and I can see it in Wireshark both remotely and locally. But if I send the ICMP ping from my computer to the remote desktop, it fails. I see it leaving my computer in Wireshark, but it never reaches the remote desktop (I don't see it in Wireshark there).
I don't think it is a firewall issue (especially since that can't explain why Wireshark won't capture the port 3389 RDP flow).
Does anyone have any idea of what might be going on?
I found the main issue.
It turns out that in Wireshark it is possible to configure the capture interface with a filter.
To change it, go to: Capture->Interfaces
On the interface being used, stop capturing to enable the Options; there you can see and change the capture filter that was hiding the traffic.

Routing traffic with TUN/TAP interface

I am new to network programming and am trying to understand how to manage traffic via a TUN/TAP interface.
Since my system-programming skills are almost nonexistent and I feel confident in Java, I use the OpenVPN tun/tap driver and a ready-made Java binding for it. It works in TAP mode.
As an example application, I am trying to imitate a no-encryption, no-authentication client-server VPN application.
I can catch Ethernet frames, but at the routing part I failed miserably. (I can modify route/ARP tables.)
Does anybody know how OpenVPN sends packets from client to server, and from server to target? Opening sockets from Java looks like an alternative, but I was hoping that modifying packets (changing IPs and/or MAC addresses) and writing them back to the virtual tap interface would be enough. Is that so?
Can I inject packets to send to other locations, or does a received packet move up toward the application layer by default?
-- Edit:
Scenario:
Client (tap0 on eth0) ----- (tap0 on eth0) Server ----- Target
Goal: a ping from the client moves through the tap interfaces, and the target sees only the server's IP (anonymization).
What I have achieved so far:
I can catch traffic at the client's tap0 interface.
I couldn't forward traffic at the server's tap, so to speed things up I used Java socket programming between client and server.
Now I read packets from the socket at the server and try the OpenVPN tap driver's write method to move them forward, but I am not sure where I fail. I see the packets with tcpdump at the server's tap0, but they do not pass to the server's eth0.
My most important question: if I modify a packet (IP, MAC address) and call the write method, is it possible for the packet to move forward? (Or does it move to the application layer no matter what you change?)
Any help would be appreciated.
1. Routing is a Layer 3 (IP) problem and is handled by the OS. As for the Ethernet frames on Layer 2, you have multiple options. In any case, you'll have to parse the incoming packets' headers, extract the MAC address, and decide based on the MAC where to pass the packet: to a specific client, to all clients (broadcast), or to the local tap interface.
Option 1: On each client, use a tun device, and let the server use a tap device. Assign pseudo MAC addresses to each client, respond accordingly to ARP requests from the server's OS and let the OS on the server take care of the rest. Applicationwise, you'll only have to forward all incoming packets to the tap device and all outgoing packets to the client to which you assigned this MAC.
Option 2: Let the clients choose their own MAC address and forward ARP-requests through the network. The server application has to decide for incoming packets from a client whether to forward the packet to a client, or send it to the local tap device if the address matches the local device's MAC.
In both cases, clients pass all packets from their local tun/tap device to the server and vice versa.
2. You can do almost anything. A packet is only "received" when you decide to write it to the tap device, and you can of course tamper with any packets, or inject new ones, ...
As a final comment, I've found that toying with tun devices is conceptually simpler, because they work on Layer 3. You'll have to open a tun device on the server for each client, but within your application you'll have to do nothing but to forward anything coming from the device to the single client, and vice versa.
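To make option 1's forwarding loops concrete, here is a rough Java sketch. TapDevice is a stand-in for whatever read/write API your tun/tap binding exposes (the real binding's method names may differ), and a plain TCP socket stands in for the VPN transport.

import java.io.InputStream;
import java.io.OutputStream;
import java.net.Socket;

// Hypothetical wrapper around the tun/tap binding's frame I/O.
interface TapDevice {
    int read(byte[] frame) throws Exception;          // blocks until a frame arrives
    void write(byte[] frame, int len) throws Exception;
}

public class TapForwarder {
    // Pump frames from the local tap into the tunnel and frames from
    // the tunnel back into the tap, on two threads.
    public static void run(TapDevice tap, Socket tunnel) throws Exception {
        InputStream fromPeer = tunnel.getInputStream();
        OutputStream toPeer = tunnel.getOutputStream();

        Thread outbound = new Thread(() -> {
            byte[] frame = new byte[2048];
            try {
                while (true) {
                    int len = tap.read(frame);
                    // A real implementation needs length framing here,
                    // since TCP does not preserve message boundaries.
                    toPeer.write(frame, 0, len);
                    toPeer.flush();
                }
            } catch (Exception e) {
                e.printStackTrace();
            }
        });
        outbound.start();

        byte[] frame = new byte[2048];
        int len;
        while ((len = fromPeer.read(frame)) != -1) {
            // Writing to the tap hands the frame to the local network
            // stack, which then routes or bridges it onward.
            tap.write(frame, len);
        }
    }
}

Writing a frame to the tap is what makes the OS "receive" it; whether it then travels on to eth0 is up to the kernel's routing/bridging configuration (IP forwarding enabled, correct routes, NAT if the target should see only the server's IP).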
