remsh on ssh-enabled machines - unix

Recently all our testing machines were moved to a secure shell network. As a result, the IP addresses of all these machines have changed, and from now on we have to access them over the SSH protocol.
However, I am not able to use "remsh" to run tasks on any target machine that is also SSH-enabled.
I have checked that the ".rhosts" file exists and that the target machine's IP has an entry in the "/etc/hosts" file.
Kindly let me know if I need to change or look anywhere else to make remsh work.

remsh, rlogin, rsh, and rcp are not secure systems: information is sent as plain text between the machines, and host verification is not done with secret keys but is host-based and can be forged. I would think that you have switched to ssh precisely for these reasons.
Luckily you can do all the same things using ssh. For example, after configuring the machines to use public/private key pairs, you can run commands on a remote machine non-interactively (by supplying a password or by using passwordless keys):
ssh user@remotehost command-to-be-run
If you haven't used ssh much before, there are a lot of things to learn, but isn't that fun? As a result you will also know how to make state-of-the-art secure connections. You will especially want to learn about public key authentication.
There are lots of tutorials on the Internet about how to create and use keys with ssh. http://www.olearycomputers.com/ll/ssh_guide.html seems like a good starting point. https://engineering.purdue.edu/ECN/Support/KB/Docs/SSHReplacingRhosts specifically discusses replacing .rhosts authentication with a key pair.
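For reference, a minimal key-based setup might look like the following sketch (the user name, host name, and remote command are placeholders; it assumes OpenSSH with ssh-copy-id available):

# Generate a key pair on the local machine (no passphrase, for unattended jobs)
ssh-keygen -t ed25519 -N "" -f ~/.ssh/id_ed25519

# Install the public key into ~/.ssh/authorized_keys on the target machine
ssh-copy-id -i ~/.ssh/id_ed25519.pub user@remotehost

# Run a command on the remote machine, much like remsh used to
ssh -i ~/.ssh/id_ed25519 user@remotehost uname -a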

Related

How to encrypt gRPC connections without certificates?

I'm going to be using gRPC for a device-to-device connection over a network (my device will be running Linux and collecting patient data from various monitors; gRPC will be used by a Windows client system to grab and display that data).
I obviously want to encrypt the data on the wire, but dealing with certificates is going to be a problem for various reasons. I can easily have the server not ask for the client cert, but so far I've been unable to find a way around the client validating the server's cert.
I've got several reasons I don't want to bother with a server cert:
The data collection device (the gRPC server) is going to be assigned an IP and name via DHCP in most cases. This means that when that name changes (at install time, or when they move the device to a different part of the hospital), I have to automatically fix up the certs. Other than shipping a self-signed CA cert and key with the device, I don't know how to do that.
There are situations where we're going to want to point the client at the server via IP, not name. Given that gRPC can't do a cert for an IP (https://github.com/grpc/grpc/issues/2691), this becomes a configuration that we can't support without doing something to give a name to a thing we only have an IP for (a hosts file on the Windows client?). Given the realities of operating in a hospital IT environment, NOT supporting the use of IPs instead of names is NOT an option.
Is there some simple way to accommodate this situation? I'm far from an expert on any of this, so it's entirely possible I've missed something very basic.
Is there some simple way to set the name that the client uses to check the server to be different than the name it uses to connect to the server? That way I could just set a fixed name, use that all the time and be fine.
Is there some way to get a gRPC client to not check the server certificate? (I already have the server set up to ignore the client cert.)
Is there some other way to get gRPC to encrypt the connection?
I could conceivably set things up to have the client open an ssh tunnel to the server and then run an insecure gRPC connection across that tunnel, but obviously adding another layer to opening the connection is a pain in the neck, and I'm not at all sure how comfortable the client team is going to be with that.
Thanks for raising this question! Please see my inline replies below:
I obviously want to encrypt the data on the wire, but dealing with certificates is going to be a problem for various reasons. I can easily have the server not ask for the client cert, but so far I've been unable to find a way around the client validating the server's cert.
There are actually two types of checks happening on the client side: the certificate check and the hostname verification check. The former checks the server certificate to make sure it is trusted by the client; the latter checks the target name against the server's identity in the peer certificate. It seems you are running into the latter; I just want to make sure, because you will need to get both of these checks right on the client side in order to establish a good connection.
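As a rough, non-gRPC-specific illustration of the two checks (assuming OpenSSL 1.1.1 or newer on the client box; the address, port, CA file, and expected name are placeholders):

# Certificate check: verify the server cert chains to a CA we trust (-CAfile)
# Hostname check: verify the expected name appears in the cert's SAN/CN (-verify_hostname)
openssl s_client -connect 10.0.0.42:50051 \
  -CAfile ca.crt -verify_hostname grpc-device -verify_return_error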
The data collection device (the gRPC server) is going to be assigned an IP and name via DHCP in most cases. This means that when that name changes (at install time, or when they move the device to a different part of the hospital), I have to automatically fix up the certs. Other than shipping a self-signed CA cert and key with the device, I don't know how to do that.
There are situations where we're going to want to point the client at the server via IP, not name. Given that gRPC can't do a cert for an IP (https://github.com/grpc/grpc/issues/2691), this becomes a configuration that we can't support without doing something to give a name to a thing we only have an IP for (a hosts file on the Windows client?). Given the realities of operating in a hospital IT environment, NOT supporting the use of IPs instead of names is NOT an option.
gRPC supports IP addresses (this is also mentioned in the last comment of the issue you brought up). You will have to put the IP address in the SAN field of the server's certificate instead of the CN field. It's true that this will be a problem if your IP changes dynamically; that is why you normally need a DNS domain name and a PKI infrastructure. If that is too heavy an amount of work for your team, see below :)
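For example, a self-signed certificate that carries the device's IP address in the SAN field could be created roughly like this (a sketch assuming OpenSSL 1.1.1+ for the -addext option; the IP, names, and lifetimes are placeholders):

# Self-signed server certificate with the IP address in the Subject Alternative Name
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout server.key -out server.crt \
  -subj "/CN=grpc-device" \
  -addext "subjectAltName = IP:10.0.0.42"

# Confirm the SAN made it into the certificate
openssl x509 -in server.crt -noout -text | grep -A1 "Subject Alternative Name"

The client would then need to trust server.crt (or whatever CA signed it) as its root.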
Is there some simple way to accommodate this situation? I'm far from an expert on any of this, so it's entirely possible I've missed something very basic.
Is there some simple way to set the name that the client uses to check the server to be different than the name it uses to connect to the server? That way I could just set a fixed name, use that all the time and be fine.
You can connect directly by IP address and override the target name in the channel args. Note that the overridden name should match the certificate sent by the server. Depending on which credential type you use, the details differ slightly. I suggest you read this question.
Is there some way to get a gRPC client to not check the server certificate? (I already have the server set up to ignore the client cert.)
Is there some other way to get gRPC to encrypt the connection?
Note that even if you don't use any certificate on the wire, as long as the correct credential type (SSL/TLS) is used, the data on the wire is encrypted. Certificates help you make sure the endpoint you are connecting to really is who it claims to be. Failing to use certificates leaves your application open to man-in-the-middle attacks. Hope this helps you better understand the goals and make the right judgement for your team.

Accessing web-server in private network from arbitrary clients

I am looking for the best practice to allow accessing a web-server in a private network from a client running on an arbitrary machine (possibly on another private network). Here is the typical setup:
Web-server is running on a Linux machine behind NAT (not directly accessible by the client) and is being used for JPEG streaming.
The client may run on any device and cannot be directly accessed by the web-server.
An intermediate server (Linux machine) which can be directly accessed by both the web-server and clients (since it owns a static IP address).
The first solution that comes to my mind is using the intermediate server as a web proxy and creating an SSH tunnel between the intermediate machine and the web-server; however, this requires clients to change their proxy settings, which is not an option in my case. The other solution that would possibly work is having the intermediate machine forward HTTP requests to the web-server, though I do not know of an easy way to do so. Since this looks like a common case to me, I feel like there should be some service that does this. I am also open to other solutions that would offer transparent access to the web-server.
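For what it's worth, the SSH tunnel mentioned above could be sketched like this (user names, hosts, and ports are placeholders): run on the web-server machine, it publishes the web-server's HTTP port on the intermediate server, which can then forward or reverse-proxy client requests to it.

# Reverse tunnel: port 8080 on the intermediate server now reaches the web-server's port 80
ssh -N -R 8080:localhost:80 user@intermediate-host

# Note: by default the forwarded port binds only to the intermediate host's loopback;
# either set "GatewayPorts yes" in its sshd_config or put a reverse proxy on the
# intermediate host in front of localhost:8080 so clients can reach it transparently.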

Making use of ssh keys for authentication in other applications?

Let's say I want to set up a poor man's authentication scheme for a simple network service.
I don't want to bother with username/password authentication; for simplicity I just want to have a list of public keys in my application, and anyone who can prove they own one of those keys can use my service.
For the purposes of my application it would greatly simplify the authentication process since all my users are on the local network and they all use Unix. Anytime I onboard a new user I can just ask them for their ssh public key.
Is there a simple way to reuse the mechanism involved in ssh public key authentication in a non-ssh application? This question is intended to be language agnostic.
If you just have a list of users who can use your application and you have no need to see who did what, you can do the following.
Set up your server so that it listens only on localhost (127.0.0.1) rather than 0.0.0.0, and provide a restricted sshd, forwarding the port required to connect to the application.
~/.ssh/authorized_keys will provide the list of keys that are authorized to connect.
ssh -i private_key_file <hostname> -L 3000:localhost:3000
For a basic setup and help with configuring your sshd, check out this answer:
https://askubuntu.com/questions/48129/how-to-create-a-restricted-ssh-user-for-port-forwarding
Note: be warned that if you don't lock it down, any of those users will have full shell access on the box where the application is hosted.
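For illustration, a restricted entry in the server's ~/.ssh/authorized_keys could look roughly like this (options as documented for OpenSSH 7.2+, so verify against your sshd version; the key, user, and port are placeholders). It is one way of doing the lock-down mentioned above:

# "restrict" disables PTY allocation and all forwarding; "port-forwarding" plus
# "permitopen" re-allow only tunnels to the application port
restrict,port-forwarding,permitopen="localhost:3000" ssh-ed25519 AAAA...keydata... user@client

# The user then reaches the service only through the tunnel:
ssh -i private_key_file -N -L 3000:localhost:3000 <hostname>

# To block command execution entirely, also force a no-op command or give the
# account a nologin shell, as covered in the answer linked above.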
A dirty hack from the top of my head: could you wrap the application so that it creates an actual SSH tunnel from localhost to your server, and use that for authentication?
Assuming you are talking about a web-based application, what you are really looking for is X.509 client certificates (the TLS client authentication OID, 1.3.6.1.5.5.7.3.2). These will allow you to identify a user individually to your application.
They face the same issues that usually come up around key distribution, which is generally considered a hard problem.
If you wanted to head down this road, here is what you would need to do (a rough openssl sketch follows the list).
Generate a root certificate (once)
Set up a web server with the appropriate modules to parse the certificate (nginx/apache)
Generate a certificate for each user (openssl)
Download the certificate from a centralized server (maybe use their ssh pub key here)
Install the x509 cert locally (OS Dependent)
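As a rough openssl sketch of the certificate-generation steps above (names, lifetimes, and file paths are placeholders; a real deployment would use a proper CA configuration):

# Generate a root certificate (once)
openssl req -x509 -newkey rsa:4096 -nodes -days 3650 \
  -keyout ca.key -out ca.crt -subj "/CN=my-app-root-ca"

# Generate a key and a root-signed certificate for each user
openssl req -newkey rsa:2048 -nodes -keyout alice.key -out alice.csr -subj "/CN=alice"
openssl x509 -req -in alice.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 365 -out alice.crt

# Bundle key and cert so the user can install them locally (browser/OS store);
# this prompts for an export password
openssl pkcs12 -export -inkey alice.key -in alice.crt -out alice.p12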
On the server side, you would need to process the cert as part of the web server (nginx or apache should have modules to do this) and then pass the name on to your application as a header field, which you can then process internally.
This is a much better security solution than usernames and passwords, but it is complex because of the key distribution issue. Most people wouldn't bother, since in most applications it is easy enough to integrate logins with LDAP or RADIUS.

Why is RDP Considered Less Secure Than LogMeIn or VPN?

I've heard from more than one IT manager that they don't allow users to use RDP to connect to their internal network from the outside, because it's not safe. They claim that if they allowed their users to do so, then anyone from the outside would have access to their network as well.
I'm not getting it. In order to use RDP, you need a user name and password, and you can't get in without them. The same is true for Gmail, online banking, and any other web service.
So what do they use instead? LogMeIn, or a VPN connection and then internal RDP. VPN also requires a user name and password.
If they're afraid of a brute-force attack, then someone can brute-force the VPN server or LogMeIn just the same. And if these other technologies have lockouts (after x failed attempts), then why can't the same be set up for RDP?
Similarly, people always say that VPN is very secure because it uses a "tunnel". I don't fully understand what that means, but regardless, why can't the username and password be cracked the same way as on any website or web service that uses a user name and password?
With proper configuration, RDP is capable of 128-bit RC4 encryption, can run on virtually any port or set of ports, and has proven to be relatively bug-free, with only extremely minor flaws ever discovered.
On the other hand, the secure tunnel created by a VPN is far more secure than Remote Desktop on its own. All your data is encrypted for safe transfer from one remote location to another.
Moreover, a VPN only allows shared content to be accessed remotely, which tightens security. If your device falls into the wrong hands, they won't be able to access or manipulate unshared data and resources.
The bottom line is that both RDP and VPN have their own advantages; however, with higher security, better performance, and manageability, VPN seems to be the clear winner in the Remote Desktop vs. VPN comparison.

data encryption between 2 servers on file request

I've a quick question:
I have 2 websites; one has some links to file downloads. Those files are hosted on another server.
I need to encrypt the request data between the 2 servers. Can I do it just by using an SSL certificate?
Any other/better ideas?
Those files are private docs, so I don't want server 2 or anyone else to be able to track the file requests between the servers.
Thanks
Yes, use SSL (or, more accurately, TLS) if you want transport-level security. If these are two servers that you control, you can configure your own self-signed certificates. If you want to make sure that only the two servers can communicate with each other, then require client authentication, where both the server and the client use a certificate/private key pair.
Most of the time the trick is to implement a sensible key management procedure; setting up a web server to handle TLS using certificates should not be too hard.
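A minimal sketch of that setup, assuming two Linux boxes and command-line tools (file names, host name, and paths are placeholders): give each server its own self-signed certificate, make each trust the other's, and require client authentication on the file-hosting server. A download from server 1 could then look like this:

# Self-signed certificate for server 2 (do the same for server 1)
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout server2.key -out server2.crt -subj "/CN=server2.example.com"

# Server 1 fetches a private document: it verifies server 2 against server2.crt
# and presents its own certificate/key pair for client authentication
curl --cacert server2.crt --cert server1.crt --key server1.key \
  -o doc.pdf https://server2.example.com/private/doc.pdf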
An SSL certificate will work fine for ensuring the transfer is encrypted. Even a self-signed certificate will do for this purpose (provided you can tell the client you're going to use to accept the self-signed cert).
Alternatively, if they are two Linux machines, then scp (secure copy) is a great tool: it connects via ssh and grabs the files. (There is probably a Windows scp tool, but I don't know it.)
rsync also supports going via ssh.
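For example, either tool can pull the files over ssh (host and paths are placeholders):

# Copy a single file from the file server over ssh
scp user@server2.example.com:/srv/docs/private.pdf /var/www/downloads/

# Or mirror a whole directory over ssh with rsync
rsync -avz -e ssh user@server2.example.com:/srv/docs/ /var/www/downloads/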
As for tracking the request: there's nothing you can do to prevent a device between your computer and the destination logging the fact that a connection was made, but the encryption should prevent anyone from getting at the actual data you're sending.
