I am having a weird problem.
I have a service running on port 8888 on one of my many servers in a cluster.
When I run nmap on my gateway to get all the IPs inside my network, this service mysteriously dies. Since nmap also does a port scan, it might have something to do with it, but I am not sure.
The nmap command I am using is this:
sudo nmap -oX ${FILE_NAME} ${IP_DOMAIN} -A -O --osscan-guess
Can someone tell me what might be happening?
While Nmap developers try to limit the danger, Nmap scans can still crash services. The most likely culprit for crashing a service (as opposed to crashing an entire machine) is the service version detection scan phase (-sV, implied in your command by -A). This scan sends a series of data packets to the service in an attempt to elicit a response which can be matched against Nmap's database of known services. When a match is found, Nmap stops sending probes. That means that an unknown service can get lots of probes sent to it which contain binary data, command strings, and other data that your service is not expecting.
A well-written network service will not crash on any input; your service has a bug of some sort. Avoiding this sort of crash usually means avoiding scanning that service:
You can use the Exclude directive in your nmap-service-probes data file to instruct Nmap to never send these service probes to port 8888.
You can avoid scanning port 8888 at all by changing the ports you scan with -p. Later versions of Nmap also support the --exclude-ports option (see the examples after this list).
You can make sure you are using the latest version of Nmap. If your service's fingerprint was added to the nmap-service-probes file, then Nmap will stop sending probes when it detects it, which may avoid sending the later probe that crashes it.
You can reduce the intensity of the service scan with the --version-intensity option. This prevents Nmap from sending so many service probes, which may eliminate the one that is crashing your service.
Finally, if this service is a standard one and not something custom to your own network, you can report it to The Network Scanning Watch List so that other users can avoid crashing it as well.
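To make the first two options concrete, here is roughly what they might look like - the Exclude line goes into the nmap-service-probes file (the same syntax the stock file uses), and --exclude-ports needs a reasonably recent Nmap:

Exclude T:8888

sudo nmap -oX ${FILE_NAME} ${IP_DOMAIN} -A -O --osscan-guess --exclude-ports 8888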
The nebula-storage service failed to start. The storage logs say the port is occupied, but I checked and port 9780 is not in use. The configuration files are also original and unmodified.
There could be several reasons why the nebula-storage service is failing to start:
Another process is already using port 9780. You can use the lsof -i tcp:9780 command to check which process is using the port.
There is a problem with the nebula-storage service itself. You can try restarting the service.
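For example, you could check the port and then restart the storage daemon like this (ss is an alternative if lsof is not installed; the restart command assumes a default NebulaGraph installation under /usr/local/nebula):

sudo lsof -i tcp:9780
sudo ss -tlnp | grep 9780
sudo /usr/local/nebula/scripts/nebula.service restart storaged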
EDIT: OBE - figured it out. Provided in answer for anyone else who has this issue.
I am working in an offline environment and am unable to connect to a kafka broker, on machine 1, from a separate machine, machine 2, on a LAN connection through a single switch.
Machine 1 (where Kafka and ZK are running):
server.properties
listeners=PLAINTEXT://<ethernet_IPv4_m1>:9092
advertised.listeners=PLAINTEXT://<ethernet_IPv4_m1>:9092
zookeeper.connect=localhost:2181
I am starting Kafka/ZK from the config files located in kafka_2.12-2.8.0/config and then running the appropriate .bat from kafka_2.12-2.8.0/bin/windows.
On machine 2 I am able to ping <ethernet_IPv4_m1> and get results; however, I fail to get a TCP connection if I run Test-NetConnection <ethernet_IPv4_m1> -Port 9092 while Kafka is running. In Python 3.8.11, using KafkaConsumer from kafka-python, I receive the NoBrokersAvailable error when using <ethernet_IPv4_m1>:9092 as the bootstrap_server. Additionally, if I run a python:3.8.12-buster Docker container with a /bin/bash entrypoint and follow along with the kafka-listener walkthrough, I am unable to connect to the broker. I'm in exactly the situation described as Scenario 1 in the link, but the walkthrough assumes you can connect to the broker. I have also tried opening port 9092 in Windows Defender for inbound/outbound traffic (on both machines) and still have no luck.

Neither Kafka nor networking is my strong suit, and every tutorial/answer I find refers to changing listeners and advertised.listeners in the Kafka server.properties file - I think I did this correctly, but am unsure. This is everything I have tried so far; any recommendations would be greatly appreciated. Thank you.
For M1, the private network was the active network profile.
Go to Control Panel -> Firewall & network protection -> Advanced settings (must be admin) -> set up inbound/outbound rules for port 9092 for the active network profile.
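If you prefer the command line, an equivalent inbound rule can be added from an elevated prompt and then verified from machine 2 (the rule name is arbitrary):

netsh advfirewall firewall add rule name="Kafka 9092" dir=in action=allow protocol=TCP localport=9092
Test-NetConnection <ethernet_IPv4_m1> -Port 9092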
What are typical results of nmap 192.168.1.1 for an average Joe? What would be a red flag?
PORT STATE SERVICE
111/tcp filtered rpcbind
What does this mean in context and is it something to worry about?
Basically, rpcbind is a service that enables file sharing over NFS. The rpcbind utility is a server that converts RPC program numbers into universal addresses; it must be running on the host to be able to make RPC calls on a server on that machine. When an RPC service is started, it tells rpcbind the address at which it is listening and the RPC program numbers it is prepared to serve. So if you have a use for file sharing, it's fine; otherwise it is unneeded and a potential security risk.
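You can see which RPC services are registered with rpcbind on your own machine with:

rpcinfo -p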
You can disable them by running the following commands as root:
update-rc.d nfs-common disable
update-rc.d rpcbind disable
That will prevent them from starting at boot, but if they are already running they will keep running until you reboot or stop them yourself.
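On a systemd-based distribution the equivalent would be something like the following (unit names can vary; rpcbind is usually socket-activated, so the socket unit matters too):

sudo systemctl disable --now rpcbind.service rpcbind.socket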
And if you are looking to get into a system through this service, there is plenty of reading material available on Google.
The goal
Allow a browser to exchange information with a service running locally. Allow the service to figure out which user (logon session in Windows) runs the browser. Avoid, if possible, storing a TLS certificate and private key on the machine. A bonus task: provide a solution for a setup where anti-virus software like Kaspersky or Sophos proxies all TCP connections.
The story
The underlying OS is Windows, but it could be any modern OS. There is a daemon running in the system; in the case of Windows this is a Windows service. JavaScript loaded by a browser from a remote server sends data to the daemon. The daemon does not have an HTTP/HTTPS server. Instead the daemon opens N ports and listens for incoming connections. N is a low two-digit number.
The JS initiates TCP connections to a selected group of K ports out of the range N. In the current implementation the JS attempts to load scripts from 127.0.0.1:port-number. The daemon accepts each connection and immediately closes it (kind of like port knocking). The daemon recovers the data from the ports "knocked" by the JS.
In the current implementation the backend chooses a unique tuple of ports, for example a combination of 3 ports. The tuple is a key identifying the browser session. The service collects the "knocks" - the ports accessed by a specific OS process - and queries the backend using the collected ports.
One of the goals of the solution is to avoid implementing an HTTP/HTTPS server in the service and to avoid maintaining an SSL certificate.
The problem
The order in which the JS connects to the ports is not defined. Specifically, two browsers can run knocking sessions simultaneously.
The service can fail to open some of the ports in the range N because those ports are busy.
The order is not critical because the server chooses a unique combination from the range N, but I need the system to tolerate missing ports. I was thinking about choosing more than one tuple and using more than one range N.
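To make that concrete, here is a minimal sketch of the erasure-coding (forward error correction) idea in Python; all parameters are illustrative and not part of the current implementation. The session key is encoded as points on a random polynomial over a prime field, one point per knock, so any K of the N knocks recover the key and busy or missing ports are tolerated:

import random

P = 257   # prime field; a real system would size this to the available port range
K = 3     # knocks needed to recover the key
N = 6     # knocks sent by the JS

def encode(secret):
    # random polynomial of degree K-1 with the secret as the constant term
    coeffs = [secret] + [random.randrange(P) for _ in range(K - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, N + 1)]

def decode(points):
    # Lagrange interpolation at x = 0 recovers the constant term
    secret = 0
    for i, (xi, yi) in enumerate(points):
        num = den = 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * -xj % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

key = 123
knocks = encode(key)
assert decode(random.sample(knocks, K)) == key   # any K of the N knocks suffice

Note that each knocked port would have to encode both coordinates of a point, for example by dedicating a sub-range of ports per x; that mapping is omitted here.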
The question
How can I adapt FEC (forward error correction) to this problem? Does the design make sense?
I am running ssh against several remote servers; some of the servers don't respond and some of them might be down.
To preclude such scenarios I used ConnectTimeout in the ssh command, and it was timing out as I configured it to.
My current way of doing ssh
ssh -o LogLevel=Error -oConnectTimeout=5 -oBatchMode=yes -l becomeaccount servername './command.sh'
All was going well until one day I found a stale ssh connection on one of my servers; it had been open for more than 3 days.
So now I think I might have missed something. I googled around and found there is something called ServerAliveInterval - would that solve my problem? How is it different from ConnectTimeout?
The "ServerAliveInterval" specifies a periodic polling time between the SSH server and client. The intent is twofold:
(1) To close down idle ssh sessions where either
[a] one side or the other crashes hard (i.e.: machine failure/poweroff)
[b] one side or the other changes IP addresses
(2) To MAINTAIN idle ssh sessions over a NAT that would tear down (or terminate) idle TCP sessions
ServerAliveInterval affects the "ssh" client. There's a corresponding parameter for the "sshd" server. (There is also a TCPKeepAlive option.) If you're seeing orphaned sshd sessions on your remote servers, you should consider making appropriate changes in the remote servers' sshd_config. If you can't implement changes in the remote servers' sshd_config but still need idle logins to die, check whether your shell has an idle timeout ("bash" does).
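For example, the client-side command could become the following (ServerAliveInterval and ServerAliveCountMax are standard OpenSSH client options; the values here are only illustrative):

ssh -o LogLevel=Error -o ConnectTimeout=5 -o BatchMode=yes -o ServerAliveInterval=15 -o ServerAliveCountMax=3 -l becomeaccount servername './command.sh'

The server-side counterparts in sshd_config, which let the remote sshd drop dead clients on its own, are:

ClientAliveInterval 300
ClientAliveCountMax 2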