I am running ssh against several remote servers. Some of the servers don't respond, and some of them might be down.
To handle such scenarios I used ConnectTimeout in the ssh command, and it was timing out as I configured it to.
My current way of doing ssh:
ssh -o LogLevel=Error -oConnectTimeout=5 -oBatchMode=yes -l becomeaccount servername './command.sh'
All was going well until one day I found a stale ssh connection on one of my servers. It had been open for more than 3 days.
So now I think I might have missed something. I searched and found an option called ServerAliveInterval. Would that solve my problem, and how is it different from ConnectTimeout?
The "ServerAliveInterval" specifies a periodic polling time between the SSH server and client. The intent is twofold:
(1) To close down idle ssh sessions where either
[a] one side or the other crashes hard (i.e.: machine failure/poweroff)
[b] one side or the other changes IP addresses
(2) To MAINTAIN idle ssh sessions over a NAT that would otherwise tear down idle TCP sessions
ServerAliveInterval affects the "ssh" client. There is a corresponding parameter, ClientAliveInterval, for the "sshd" server. (There is also a TCPKeepAlive option.) If you're seeing orphaned sshd sessions on your remote servers, you should consider making appropriate changes in the remote servers' sshd_config. If you can't change the remote server's sshd_config but still need idle logins to die, check whether your shell has an idle timeout ("bash" does, via the TMOUT variable).
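For example, here is a minimal sketch of both sides. The option names are standard OpenSSH; the 15-second interval and the count of 3 are illustrative values, not recommendations:

# client side: keepalives added to the existing command
ssh -o LogLevel=Error -o ConnectTimeout=5 -o BatchMode=yes \
    -o ServerAliveInterval=15 -o ServerAliveCountMax=3 \
    -l becomeaccount servername './command.sh'

# server side: in /etc/ssh/sshd_config on each remote host
ClientAliveInterval 15    # probe an unresponsive client every 15 seconds
ClientAliveCountMax 3     # close the session after 3 unanswered probes

With settings like these, either end should notice a dead peer within roughly 45 seconds instead of the connection lingering for days.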
What are typical results of nmap 192.168.1.1 for an average Joe? What would be a red flag?
PORT STATE SERVICE
111/tcp filtered rpcbind
What does this mean in context and is it something to worry about?
Basically, rpcbind is a service that enables file sharing over NFS. The rpcbind utility is a server that converts RPC program numbers into universal addresses; it must be running on a host before that host can service RPC calls. When an RPC service is started, it tells rpcbind the address at which it is listening and the RPC program numbers it is prepared to serve. So if you have a use for file sharing, it's fine; otherwise these services are unneeded and a potential security risk.
You can disable them by running the following commands as root:
update-rc.d nfs-common disable
update-rc.d rpcbind disable
That will prevent them from starting at boot, but if they are already running they will keep running until you reboot or stop them yourself.
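To stop them right away and confirm nothing is still registered, something like the following should work (the service names are the same as above; on a systemd-based distribution the equivalents would be systemctl stop and systemctl disable):

service nfs-common stop
service rpcbind stop
rpcinfo -p localhost    # should now fail to connect instead of listing RPC services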
And if you are looking to get into a system through this service, there is plenty of reading material available online.
I have a web server running Alpine Linux and OpenSSH. When I power on the server, for about an hour or two I am able to open SSH connections and send commands fine. After that, however, even though the server is up, it does not respond to pings and I cannot SSH into it. The server is still running, and I can still access the website being served from it. Why does this happen, and how can I avoid it?
I am looking to use asyncssh with python3.7 (asyncio)
Here is what I want to build:
A remote device would run a client that calls home to a centralized server. I want the server to be able to execute commands on the client using reverse ssh tunnels over that incoming connection. I cannot use forward ssh (regular ssh) because the client could be behind NAT and the server might not know the client's address. I prefer the client calling home and the server then managing the client.
The program for a POC should use python3 plus an async implementation of ssh. I see asyncssh as the only viable choice (please suggest an alternative if you have one):
Client: connects to the server and accepts reverse ssh tunnels opened on the same outbound connection.
Server: accepts the connection from the client and keeps the session open. The server then opens reverse ssh tunnels to the client. For example, the server program should open 3 reverse ssh tunnels on the incoming connection, each running one command: ['ls', 'sleep 30 && date', 'sleep 5 && cat /proc/cpuinfo'].
The server program should print the received response for each of these commands (one should come back almost immediately, another after 5 seconds, and the last after 30).
I looked at the documentation, and I could not see examples of using multiple reverse ssh tunnels.
Does anyone have experience using this? Can you point me to examples?
The developer of asyncssh has provided an example:
As of now this is in the develop branch. I have tested it and it does the job perfectly!
https://asyncssh.readthedocs.io/en/develop/#reverse-direction-example
[If you are checking this after a while, you might find it in the master documentation.]
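For reference, here is a condensed sketch along the lines of that linked example, using asyncssh's connect_reverse()/listen_reverse() API. The host name, port, and key file name are placeholders, and known_hosts=None disables host-key checking, which is only acceptable for a throwaway POC:

# client.py - runs on the remote device: dials home, then acts as the
# SSH *server* on its own outbound connection
import asyncio, asyncssh

async def run_client() -> None:
    conn = await asyncssh.connect_reverse(
        'central.example.com', 8022, server_host_keys=['ssh_host_key'])
    await conn.wait_closed()

asyncio.run(run_client())

# server.py - accepts the call-home, then acts as the SSH *client*,
# running all three commands concurrently over the reversed session
import asyncio, asyncssh

async def run_commands(conn: asyncssh.SSHClientConnection) -> None:
    commands = ('ls', 'sleep 30 && date', 'sleep 5 && cat /proc/cpuinfo')
    async with conn:
        tasks = [asyncio.create_task(conn.run(cmd)) for cmd in commands]
        for task in asyncio.as_completed(tasks):   # print results as they finish
            result = await task
            print(result.command, '->')
            print(result.stdout, end='')

async def start_server() -> None:
    await asyncssh.listen_reverse(port=8022, known_hosts=None,
                                  acceptor=run_commands)

loop = asyncio.get_event_loop()
loop.run_until_complete(start_server())
loop.run_forever()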
The goal
Allow a browser to exchange information with a service running locally. Allow the service to figure out the user (the logon session in Windows) who runs the browser. Avoid, if possible, storing a TLS certificate and private key on the machine. A bonus task: provide a solution for setups where anti-virus software like Kaspersky or Sophos proxies all TCP connections.
The story
The underlying OS is Windows, but it can be any modern OS. There is a daemon running in the system; in the case of Windows this is a Windows service. JavaScript loaded by a browser from a remote server sends data to the daemon. The daemon does not have an HTTP/HTTPS server. Instead the daemon opens N ports and listens for incoming connections. N is a low two-digit number.
The JS initiates TCP connections to a selected group of K ports from the range N. In the current implementation the JS attempts to load scripts from 127.0.0.1:port-number. The daemon accepts each connection and immediately closes it (a kind of port knocking). The daemon recovers the data from the ports "knocked" by the JS.
In the current implementation the backend chooses a unique tuple of ports, for example a 3-port combination. The tuple is a key identifying the browser session. The service collects the "knocks" - the ports accessed by a specific OS process - and queries the backend using the collected ports.
One of the goals of the solution is to avoid implementing an HTTP/HTTPS server in the service and to avoid maintaining an SSL certificate.
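As a rough illustration of the daemon's listening side, here is a minimal sketch. The port range is a placeholder, and knocks are grouped by source address for simplicity, whereas the real service groups them by OS process:

import asyncio
from collections import defaultdict

PORTS = range(49200, 49220)       # the N listening ports (placeholder range)
knocks = defaultdict(set)         # source address -> set of ports knocked

def make_handler(port):
    async def handle(reader, writer):
        peer = writer.get_extra_info('peername')[0]
        knocks[peer].add(port)    # record the knock, then close immediately
        writer.close()
    return handle

async def main():
    servers = []
    for port in PORTS:
        try:
            servers.append(await asyncio.start_server(
                make_handler(port), '127.0.0.1', port))
        except OSError:
            pass                  # port already busy: skipped, hence the problem below
    await asyncio.gather(*(server.serve_forever() for server in servers))

asyncio.run(main())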
The problem
The order in which the JS connects to the ports is not defined. Specifically, two browsers can run knocking sessions simultaneously.
The service can fail to open some of the ports in the range N because those ports are busy.
The order is not critical because the server chooses a unique combination from the range N, but I need the system to tolerate missing ports. I was thinking about choosing more than one tuple and using more than one range N.
The question
How can I apply FEC (forward error correction) to this problem? Does the design make sense?
I am having a weird problem.
I have a service running on port 8888 on one of my many servers in a cluster.
When I run nmap against my gateway to get all the IPs inside my network, this service mysteriously dies. Since nmap also does a port scan, that might have something to do with it, but I am not sure.
The nmap command I am using is this:
sudo nmap -oX ${FILE_NAME} ${IP_DOMAIN} -A -O --osscan-guess
Can someone tell me what might be happening?
While Nmap developers try to limit the danger, Nmap scans can still crash services. The most likely culprit for crashing a service (as opposed to crashing an entire machine) is the service version detection scan phase (-sV, implied in your command by -A). This scan sends a series of data packets to the service in an attempt to elicit a response which can be matched against Nmap's database of known services. When a match is found, Nmap stops sending probes. That means that an unknown service can get lots of probes sent to it which contain binary data, command strings, and other data that your service is not expecting.
A well-written network service will not crash on any input; your service has a bug of some sort. Avoiding this sort of crash usually means avoiding scanning that service:
You can use the Exclude directive in your nmap-service-probes data file to instruct Nmap never to send these service probes to port 8888 (see the sketch after this list).
You can avoid scanning port 8888 at all by changing the ports you scan with -p. Later versions of Nmap will support the --exclude-ports option, too.
You can make sure you are using the latest version of Nmap. If your service's fingerprint was added to the nmap-service-probes file, then Nmap will stop sending probes when it detects it, which may avoid sending the later probe that crashes it.
You can reduce the intensity of the service scan with the --version-intensity option. This prevents Nmap from sending so many service probes, which may eliminate the one that is crashing your service.
Finally, if this service is a standard one and not something custom to your own network, you can report it to The Network Scanning Watch List so that other users can avoid crashing it as well.
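For instance, here are rough sketches of the exclusion and intensity options. The Exclude syntax matches the stock nmap-service-probes file, whose path varies by installation:

# in nmap-service-probes (often /usr/share/nmap/nmap-service-probes):
Exclude T:8888

# or skip the port entirely on the command line:
sudo nmap -oX ${FILE_NAME} ${IP_DOMAIN} -A -O --osscan-guess --exclude-ports 8888

# or tone down version detection:
sudo nmap -oX ${FILE_NAME} ${IP_DOMAIN} -A -O --osscan-guess --version-intensity 2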