I know that rsync can be run with or without SSH for the file transfer. So, if SSH has been disabled, does that mean rsync does not do any encryption at all?
Also, the reason I asked the above question is that we use the rsync module as part of our file transfer, and nothing in the module specifies that SSH encryption will be used.
If rsync does not use any encryption, then I could theoretically open a port on both the source and destination machines and push the file from source to destination.
If you use the rsync:// protocol scheme (i.e. when you connect to an rsync daemon) then no encryption will be used (although password authentication is done using an MD4-based challenge-response system and is probably still reasonably secure).
If you use the hostname:/some/path scheme then rsync transparently calls SSH, which encrypts everything and uses SSH's native authentication mechanisms. As far as I can tell, some OpenSSH versions supported a "none" cipher option in the configuration file, but this has been removed in later versions.
Generally you shouldn't worry about encryption overhead, unless you are working on a 1 Gbit network or you have old computers.
rsync performs no encryption on its own. If you don't use SSH and don't tunnel the rsync traffic through stunnel or some kind of VPN, then no encryption is performed. Yes, you can save some CPU cycles this way.
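For example, a rough sketch of the two modes (host, module, and path names here are only placeholders):
# Daemon mode (rsync:// scheme): no encryption on the wire
rsync -av /local/dir/ rsync://backup.example.com/module/
# Remote-shell mode (host:/path): rsync runs ssh underneath, so the whole transfer is encrypted
rsync -av -e ssh /local/dir/ user@backup.example.com:/remote/dir/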
My application needs to access files from a remote FTPS or SFTP server depending on what my app user wants to connect to. I need to be able to access file content in a folder or create a folder.
1) What login properties that a user must enter differ between an FTPS and an SFTP server?
2) Is there any way I can detect if it is an SFTP or FTPS server?
SFTP doesn't have any authentication of its own. The SFTP protocol is meant to be used over an SSH connection, so it relies on SSH for authentication, and all of SSH's authentication mechanisms apply. The list of such mechanisms is extensive: you can authenticate using a password, a private key ("public-key authentication"), an X.509 certificate (not a popular option), a keyboard-interactive (challenge-response) dialog, and via GSS-API you can use Kerberos and possibly other mechanisms. FTPS, as FTP-over-TLS, can also use various mechanisms. FTP uses username/password by default, but potentially one can implement some tricky mechanisms using the SITE command. The TLS protocol includes client-side authentication using X.509 certificates, pre-shared symmetric keys, plain PKI keys, or OpenPGP keys.
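As a rough illustration of the typical authentication in each case (host names, user names, and key paths are placeholders):
sftp -i ~/.ssh/id_ed25519 user@sftp.example.com               # SFTP: SSH public-key authentication
curl --ssl-reqd --user user:password ftp://ftps.example.com/   # explicit FTPS: username/password over TLS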
SFTP and FTP/FTPS are completely different protocols, and the servers run on different ports. If you want to implement protocol autodetection, you can try the following: connect to the server, and if it sends an SSH welcome message within 200-500 ms, you know it's an SSH (and potentially SFTP) server. If it sends an FTP welcome message, it's an FTP server (this includes the explicit TLS mode of FTPS). If it sends nothing, it may be a TLS server, and you can have implicit FTPS over this connection.
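A crude way to do this kind of banner check from a shell, assuming the usual ports, an nc variant that prints the server's greeting, and a host name that is only a placeholder:
printf '' | nc -w 3 example.com 22 | head -c 4                # an SSH/SFTP server answers "SSH-" right away
printf '' | nc -w 3 example.com 21 | head -c 3                # an FTP / explicit-FTPS server answers "220"
openssl s_client -connect example.com:990 -quiet </dev/null   # a silent port may be implicit FTPS; try a TLS handshake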
I had a requirement to mount an NFS share. After some trial and error, I could mount an NFS file system from a NAS on my Linux system. We were also evaluating whether CIFS can be used when NFS does not work. The man pages were too confusing and I could not find any lucid explanation on the web. My question is: if NFS is a problem, can mount -t cifs be used in its place? Is CIFS always available as a replacement for NFS?
It's hard to answer, because it depends on the server.
NFS and CIFS aren't different filesystems - they're different protocols for accessing a server side export.
Generally speaking:
NFS is what Unix uses, because it aligns neatly with the Unix permissions model.
CIFS is (generally) what Windows uses. (It uses a different permissions model too).
Key differences between the two are that CIFS operates in a user context - a user accesses a CIFS share - whereas NFS operates in a host context - the host mounts an NFS filesystem, and local users' permissions are mapped (in a variety of ways, depending on NFS version and authentication modes).
But because - pretty fundamentally - they use different permission and authorization mechanisms, you can't reliably just mount an NFS export as CIFS. It relies on the server supporting it and handling the permission mapping, so you would need to ask the person who owns that server for details.
CIFS is not always available (but often is). When it is, NFS still tends to work better for Unix-like clients than CIFS does.
To see whether there's CIFS on the server, use the smbclient(1) program, for example 'smbclient -L servername'.
To use CIFS from Unix, you typically need to know a user name and password for the CIFS server and reference them in the mount command or fstab entry. You can also put the password in a protected file and point the mount at that file.
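A hedged sketch of what that can look like (server, share, user, and path names are placeholders):
# See whether the server offers CIFS shares at all
smbclient -L fileserver -U someuser
# Mount a share using a root-only credentials file instead of a password on the command line;
# /etc/cifs-cred contains two lines:  username=someuser  and  password=secret
sudo mount -t cifs //fileserver/share /mnt/share -o credentials=/etc/cifs-cred,uid=1000,gid=1000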
If you don't know the CIFS server admin well enough to get a user/pass, you have bigger problems.
Recently all our testing machines were moved to a secured network. As a result, the IP addresses of all these machines have changed, and from now on we have to access them using the SSH protocol.
However, I am not able to access any target machine (which is also enabled for SSH) using "remsh" to perform tasks.
I have checked that the ".rhosts" file exists and that the target machine's IP address has an entry in the "/etc/hosts" file.
Kindly let me know if I need to change or look anywhere else to make remsh work.
Remsh, rlogin, rsh, and rcp are not secure systems: information is sent as plain text between the machines, and host verification is not done with secret keys but is host-based and can be forged. I would think that you have changed to ssh precisely for these reasons.
Luckily you can do all the same things using ssh. For example, after configuring the machines to use public/private key pairs, you can run commands on a remote machine automatically (by supplying a password or by using passwordless keys):
ssh user@remotehost command-to-be-run
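To get key-based logins working in the first place, a minimal setup might look like this (the user name, host name, and key type are only examples):
ssh-keygen -t ed25519                      # generate a key pair (optionally protected by a passphrase)
ssh-copy-id user@remotehost                # install the public key on the remote machine
ssh user@remotehost uname -a               # remsh-style remote command, now over ssh
scp localfile user@remotehost:/some/path/  # rcp replacement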
If you haven't used ssh much before, there are a lot of things to learn, but isn't that fun? As a result you will also know how to make state-of-the-art secure connections. You will especially want to learn about public key authentication.
There are lots of tutorials on the Internet about how to create and use keys and how to use ssh. http://www.olearycomputers.com/ll/ssh_guide.html seems like a good starting point. https://engineering.purdue.edu/ECN/Support/KB/Docs/SSHReplacingRhosts specifically discusses replacing .rhosts authentication with a key pair.
I have a quick question:
I have two websites; one has some links to file downloads. Those files are hosted on another server.
I need to encrypt the request data between the two servers. Can I do it just by using an SSL certificate?
Any other/better idea?
Those files are private docs, so I don't want server 2 or anyone else to be able to track the file requests between the servers.
Thanks
Yes, use SSL (or actually TLS) if you want to achieve transport-level security. If these are two servers that you control, you can configure your own self-signed certificates. If you want to make sure that only the two servers can communicate with each other, then require client authentication, where both the server and the client use a certificate/private-key pair.
Most of the time the trick is to implement a sensible key management procedure. Setting up a web server to handle TLS using certificates should not be too hard.
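A hedged sketch with openssl and curl, where the host, file, and key names are placeholders:
# Create a self-signed certificate/key pair for the file server
openssl req -x509 -newkey rsa:4096 -nodes -days 365 \
  -subj "/CN=files.internal.example" -addext "subjectAltName=DNS:files.internal.example" \
  -keyout server.key -out server.crt
# Fetch a private document over TLS, trusting only that certificate and
# presenting a client certificate so the server can authenticate the caller too
curl --cacert server.crt --cert client.crt --key client.key \
  https://files.internal.example/private/doc.pdf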
An SSL certificate will work fine for ensuring the transfer is encrypted. Even a self-signed certificate will be fine for this purpose (provided you can tell the client you're going to use to accept the self-signed cert).
Alternatively, if they are two Linux machines, then scp (secure copy) is a great tool: it connects via ssh and grabs the files. (There is probably a Windows scp tool, but I don't know it.)
Rsync also supports running over ssh.
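For instance, a pull of the private docs from server 2 onto server 1 might look like this (host and path names are placeholders):
scp user@server2.example.com:/srv/docs/report.pdf /var/www/downloads/            # one file over ssh
rsync -avz -e ssh user@server2.example.com:/srv/docs/ /var/www/downloads/docs/   # or keep a directory in sync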
As for tracking the requests: there's nothing you can do to prevent a device between your computer and the destination logging the fact that a connection was made, but the encryption should prevent anyone from getting at the actual data you're sending.
Apart from the enhanced authentication options offered by SSH, is there any difference in the basic working of the SSH and SSL protocols?
I am asking since we can use SFTP or FTP over SSL, and both would require authentication.
What is the difference between SSH and SSL and why would we care?
SSL stands for "Secure Sockets Layer". We care because it enables browsers to transmit data to and from a web server in a secure cryptographic way to make life hard for third party spies monitoring all internet traffic.
SSH stands for "Secure Shell". We care because it enables one networked computer to provide access to a shell on another networked computer. The user can have a level of confidence that spies listening to the insecure channel cannot decrypt data sent between the networked computers.
SSL and SSH both have to do with providing a system to encrypt and decrypt data over an insecure channel.
When a browser visits a URL which begins with "https://", the browser speaks HTTP over an SSL connection.
SSL-enabled web servers (for example Apache HTTP Server) can be configured to use SSL and become "secure web servers". A website served by a secure web server is accessed through URLs beginning with "https://" instead of "http://". With the https scheme, users can have a level of confidence that third-party spies monitoring the internet channel will only receive encrypted content.
SSL is a Protocol that could be implemented in the 6th layer (Presentation layer) of the OSI Model.
SSH has its own transport protocol, independent of SSL, which means that SSH does NOT use SSL under the hood.
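You can watch the two distinct handshakes yourself (the host and user names are placeholders):
openssl s_client -connect www.example.com:443 </dev/null    # TLS: certificate chain, cipher suite, protocol version
ssh -v user@shell.example.com true                          # SSH: key exchange, host key check, authentication methods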
Cryptographically, both Secure Shell and Secure Sockets Layer are equally secure.
An SSL Termination Proxy can handle incoming SSL connections, decrypting the SSL and passing on the unencrypted request to other servers.
SSL lets you use a PKI (public-key infrastructure) via signed certificates. With SSH you have to exchange the key fingerprints out-of-band, e.g. over another protocol or by carrier pigeon.
That is the main difference. But you might want to do without a PKI anyway, in which case it's a tie.
For a nice explanation, see http://www.snailbook.com/faq/ssl.auto.html
SSH and SSL are similar protocols that both use most of the same cryptographic primitives under the hood, so they are equally secure. One advantage of SSH is that key-pair authentication is quite easy to use and built right into the protocol.
With SSL it's a bit of a mess involving CA certificates and other things. Once you have the PKI in place, you also need to configure your services to use it for authentication instead of their internal password databases; this is a nightmare on some services and a piece of cake on others. It also means you need to go to the hassle of signing all of your users' keys so they can log in with them.
Most competent users can grok SSH keys in no time, but it takes a bit longer to get their heads around SSL keys (the extra CA certificates and key certificates confused me when I first encountered them).
Pick what's supportable. SSH+SFTP is great for Unix people, but FTP over SSL is probably easier to do if your users are Windows-based and are pretty clueless about anything other than Internet Exploiter (and you don't mind risking that your users will choose insecure passwords).
Cryptographically they are both equally secure (given that the same ciphers are used). Other than that, they are entirely different protocols...
First of all, server authentication is optional in TLS, since the protocol supports fully anonymous operation; in SSH it is mandatory.
TLS uses X.509 certificates for client and server authentication, which requires some sort of PKI. SSH does not scale as well on this point, but it offers a wider range of authentication methods: password, public key, and so on.
Another difference is that SSH multiplexes multiple channels over one connection and supports remote program execution, terminal management, TCP tunneling, and so on.
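A couple of quick illustrations of those extras (host names are placeholders):
ssh user@gateway.example.com 'uname -a'                                # remote program execution
ssh -N -L 8443:internal-web.example.com:443 user@gateway.example.com   # tunnel local port 8443 to an internal host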