How does scp traffic flow between two remote hosts? - sftp

If you issue an scp command between two remote servers, will the traffic flow directly between the hosts, or will it flow from Remote1 => Local Machine => Remote2?
For example I issue this command on my laptop:
scp user1@remote1.com:/Files user2@remote2.com:/Files

Newer versions of scp (since 2011) have the -3 option, which routes the traffic through your local machine. This is useful if the hosts are on different networks and can't see each other. I found this on Super User. From your linked article, it seems that normally the hosts will try to connect to each other directly.
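For example, using the hosts from the question, routing through your laptop would look like this:
scp -3 user1@remote1.com:/Files user2@remote2.com:/Files
Without -3, scp asks remote1 to copy directly to remote2, so remote1 must be able to reach (and authenticate to) remote2.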

Looks like it can be done.

If your Linux/BSD/Unix or Mac system does not have the -3 option, just compile the latest version from: http://www.openssh.org/portable.html
It is as simple as:
./configure; make; sudo make install
It will be installed in /usr/local/bin by default. I just did that on my Mac OS X Lion.

Related

Ubuntu (Oracle VM) - Mounted Samba shares hang indefinitely

I have a VM instance on Oracle Cloud (Ubuntu 22.04) set up with ZeroTier to act as a web server for some services that should work with my local Synology NAS.
For some of those services I also need to mount three SMB shares from my NAS with the ZeroTier tunnel, but I can't make it work.
I have used mount and mount.cifs plenty of times, with automounting too, but this time it acts very strangely:
running the mount command seems to succeed from the console, but /var/log/syslog reads
CIFS: VFS: \\XXX.XXX.XXX.XXX has not responded in 180 seconds.
Reconnecting...
trying to access one of the shares (with ls, lsof, cd or any other command) succeeds for only one of the shares (always the same one), and only the first time a command is given:
$ ls /temp
folder1 folder2 folder3
any following command just "hangs" as if the system is working on something, and it stays like that indefinitely most of the time:
$ ls /temp
█
Just a few times it spits out this error:
lsof: WARNING: can't stat() cifs file system /temp
Output information may be incomplete.
ls 1475 ubuntu 3r DIR 0,44 0 123207681 /temp
findmnt reads:
└─/temp //XXX.XXX.XXX.XXX/Downloads cifs rw,relatime,vers=2.0,cache=strict, username=[redacted],uid=1005,noforceuid,gid=0,noforcegid,addr=XXX.XXX.XXX.XXX,file_mode=0755,dir_mode=0755,soft,nounix,serverino,mapposix,rsize=65536,wsize=65536,bsize=1048576,echo_interval=60,actimeo=1
the remaining two "mounted" shares don't seem to respond to any command, not even the very first one; they just hang like the one share that at least lets me browse once;
umount and umount -l take at least 2-3 minutes to successfully unmount the shares.
Same behavior when using smbclient and also with NFS shares from the same NAS.
What I have already tried:
update kernel and all packages;
remove, purge and reinstall cifs-utils, smbclient and so on...
tried mounting the same shares in another client / node within the ZeroTier network and it works just fine; also browsing from Windows and Android file manager apps with and without ZeroTier works flawlessly;
tried all SMB versions including SMBv3 and SMBv1 (CIFS);
tried different browsing or mounting methods / commands including mount, mount.cifs, autofs, smbclient;
tried to debug what happens behind the console, but didn't find anything that seems related to this in the logs, htop or anything else. During the "hanging" sessions there is no spike in CPU, RAM or network usage in either the Oracle VM or the Synology NAS;
checked, reset and reconfigured all permissions on my NAS for shares, folders and files recursively and reconfigured users groups permissions.
What I haven't tried yet (I'll try as soon as possible):
reproduce this on another Oracle VM configured the same as the faulty one and another with a different base image (maybe Oracle Linux?);
It seems to me that the mount.cifs process doesn't really succeed in mounting the share correctly, as it doesn't show as mounted anywhere. It also seems to be an issue not related to folder/file permissions, but rather something related to networking.
A note on something that may or may not be related to this: ZeroTier on my Synology NAS does not seem to work with IPv4 only - it remains OFFLINE. The node goes ONLINE only when IPv6 is enabled, but I must say that this is the only node in my ZT network that shows an IPv6 address as its public IP in the ZT web GUI - the other nodes show IPv4 public addresses.
If anyone has any clue about this, I'll be happy to follow up and try any advice. Thank you!
I'm using Tailscale, but I presume it will work the same.
You need to add port 445 to /etc/iptables/rules.v4, just under the SSH rule, like below (the second line is the one you add):
-A INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 445 -j ACCEPT
Then you need to edit the interfaces in /etc/samba/smb.conf to:
interfaces = lo tailscale0 100.0.0.0/24
Obviously, my interface is tailscale0, but yours will be different. Use ip link show to find yours. You may also need to change the IP range to suit ZeroTier's; 100.0.0.0/24 is what Tailscale uses.
Then reboot!
I couldn't get it working without doing this.
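If you'd rather apply the changes without a full reboot, something like this should work (assuming the rules file above and the standard Ubuntu Samba service names):
sudo iptables-restore < /etc/iptables/rules.v4   # reload the firewall rules edited above
sudo systemctl restart smbd nmbd                 # restart Samba so it picks up the new interfaces line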

VPN killswitch using UFW, but now openvpn3 no longer can start automatically

I successfully implemented this, which blocks all internet connections on my Linux machine UNLESS it connects via a specific VPN:
https://www.comparitech.com/blog/vpn-privacy/how-to-make-a-vpn-kill-switch-in-linux-with-ufw/
If I manually execute openvpn3 session-start --config ~/Desktop/config.ovpn, it successfully connects via the VPN.
I used to have this command in a script (that has #!/bin/bash as header) which ran at device bootup without any issues, UNTIL I configured ufw for the killswitch above (now ufw runs on device bootup).
I use openvpn3, so the instructions in the above tutorial for openvpn commands didn't work at all.
I even tried using a sleep in my bash script to get it to wait a while until after bootup. Doesn't work. But if I issue the connection command manually in the command prompt, it works.
Please help! I need it to connect automatically. Much appreciated!
After spending a whole day on this, I figured out a solution. I found an article that guided me: https://www.howtogeek.com/687970/how-to-run-a-linux-program-at-startup-with-systemd/
I set up a service unit using systemd (systemctl) just for that connection command. Here is what my entry looks like:
#/etc/systemd/system/connectvpn.service
[Unit]
Description=Connect VPN
After=ufw.service network.target
Requires=ufw.service
[Service]
Type=oneshot
ExecStart=/usr/local/bin/connect
# Needed so the unit can be enabled to start at boot
[Install]
WantedBy=multi-user.target
#/usr/local/bin/connect
#!/bin/bash
openvpn3 session-start --config /home/xyz/Desktop/config.ovpn
Working nicely now, connects to the VPN on bootup.
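For completeness, a oneshot unit like the one above has to be enabled before it runs at boot; with the paths shown, the steps would be roughly:
sudo chmod +x /usr/local/bin/connect             # make the wrapper script executable
sudo systemctl daemon-reload                     # pick up the new unit file
sudo systemctl enable --now connectvpn.service   # enable at boot and start it immediately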

mount.nfs: requested NFS version or transport protocol is not supported

NFS Mount is not working in my RHEL 7 AWS instance.
When I do a
mount -o nfsvers=3 10.10.11.10:/ndvp2 /root/mountme2/
I get the error:
mount.nfs: requested NFS version or transport protocol is not supported
Can anyone point out where I am going wrong?
Thanks.
Check that the NFS service is started, or restart it:
sudo systemctl status nfs-kernel-server
In my case the service was not running, and the issue was in the /etc/exports file, where I had the same IP address for two machines.
So I commented out one of the IP addresses, restarted nfs-kernel-server using
sudo systemctl restart nfs-kernel-server and rebooted the machine.
It worked.
A point which might be useful for the clueless (like me): systemctl status nfs-server.service and systemctl start nfs-server.service must be executed on the server!
Some additional data
If, like me, you've deleted a VM without shutting it down properly, you might also need to manually edit /etc/exports, because NFS tries to connect to the deleted machine, fails, and doesn't continue with the next entry; it just dies.
After that you can manually restart as mentioned in other answers.
In my case, a simple reload didn't suffice. I had to perform a full restart:
sudo systemctl restart nfs-kernel-server
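As a sketch of that workflow with standard nfs-utils commands: after editing /etc/exports you can try re-exporting first, and fall back to the full restart described above:
sudo exportfs -ra                         # re-read /etc/exports and re-export all entries
sudo systemctl restart nfs-kernel-server  # full restart, if re-exporting isn't enough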
In my case, it didn't work correctly with NFS version 4.1.
So in the Vagrantfile, in each place where type: 'nfs' appears, I added a comma followed by nfs_version: 4, nfs_udp: false, as in the sketch below.
Here is a more detailed explanation: NFS
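A minimal sketch of what such a Vagrantfile entry might look like (the folder paths are placeholders):
config.vm.synced_folder ".", "/vagrant", type: 'nfs', nfs_version: 4, nfs_udp: false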
If you're giving a specific protocol to connect with, also check to make sure your NFS server has that protocol enabled.
I got this error when trying to start up a Vagrant box, and my nfs server was running. It turns out that the command Vagrant uses is:
mount -o vers=3,udp,rw,actimeo=1 192.168.56.1:/dir/on/host /vagrant
Which specifically asks for UDP. My server was running but it was not configured to enable connecting over UDP. After consulting /etc/nfs.conf, I created /etc/nfs.conf.d/10-enable-udp.conf with the following contents to enable udp:
[nfsd]
udp=y
The name of the file doesn't matter, as long as it's in the conf.d directory and ends in .conf. Depending on your distribution it may be configured differently. You can directly edit nfs.conf, but using a conf.d file is more likely to preserve the changes after upgrading your system.
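After adding that file, a hedged way to apply and verify the change (the service is nfs-server on most systemd distributions, nfs-kernel-server on Debian/Ubuntu):
sudo systemctl restart nfs-server   # or nfs-kernel-server
rpcinfo -p localhost | grep nfs     # should now list nfs over both tcp and udp
cat /proc/fs/nfsd/versions          # shows which NFS versions the server has enabled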
Try to ping the IP address of the server from the client; if you get a reply, then install the NFS server on the host. Then edit the /etc/exports file, and don't forget to add the port along with the IP address.
I got the solution: make an entry in the NFS server's /etc/nfsmount.conf with Defaultvers=3.
There will be a line # Defaultvers=3; just uncomment it and then mount from the NFS client.
The issue will be resolved!
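For reference, the relevant stanza in /etc/nfsmount.conf would then look something like this (section name as it appears in the stock file on RHEL-like systems):
[ NFSMount_Global_Options ]
# default to NFSv3 for mounts that don't specify a version
Defaultvers=3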

Is the editor Atom able to open projects on a remote server?

Atom is able to open a project, and to show the whole tree of the project on the left side, a really nice feature.
Now I'm using SSH on the host OS to access a guest OS (say Red Hat Enterprise Linux, RHEL) on VirtualBox; is there a way for Atom, located on the host OS, to open a project located on RHEL?
Well yes there is!
You just need to configure sshfs, optionally with autofs. Then you can access the files as if they were stored locally. I've used this with Atom and it works seamlessly.
Instructions for Ubuntu
Install sshfs
$ sudo apt-get install sshfs
Mount the remote directory on a local mountpoint
$ sshfs [user@]host:[dir] mountpoint
Combining it with autofs
The following link has instructions for a setup using autofs.
Note: This requires you to set up SSH for the root user.
http://www.mccambridge.org/blog/2007/05/totally-seamless-sshfs-under-linux-using-fuse-and-autofs/
Additionally to that post, I've added some tricks for an even more seamless experience.
Enhance performance
I've noticed a significant performance boost by adding this SSH config to /root/.ssh/config:
Ciphers arcfour
Compression no
Note: This does make the connection less secure.
Make it appear as a disk
If you set the mount point to a directory in /media, the mount point will show up as a disk in your file browser. For example /media/sshfs.
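When you're done, an sshfs mount is detached like any other FUSE filesystem, for example:
fusermount -u mountpoint   # or: sudo umount mountpoint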
I would recommend the Remote Sync plugin for this. I have a Python environment set up on a Linux box and I connect to it from my PC.
It allows me to upload changes automatically when I save a file, and also to define files to be monitored for changes.
Not 100% what you're looking for, but there's the Remote-Edit package: https://atom.io/packages/remote-edit
This will allow you to define the connection parameters for the server, and will then allow you to browse and edit the files found on the server.
Complement to Remco's sshfs answer above:
If you use different users in the client and server hosts, consider using the 'idmap' option of sshfs.
I use different users in my working host and in the development or testing VMs.
Example:
Using the option '-o idmap=user' will automatically translate the UID/GID of the remote host to the UID/GID of the connecting user on the local host.
Files owned by the remote user (devuser) on the remote host (devhost1) will appear as belonging to the connecting user (locuser) on the local host (clienthost):
locuser@clienthost:~$ sshfs devuser@devhost1:/var/www ~/dev/www -o idmap=user
locuser@clienthost:~$ ls -lR ~/dev/www
(...)
-rw-rw-r-- 1 locuser locuser 269 abr 1 11:37 index.html
-rw-rw-r-- 1 locuser locuser 249 abr 3 03:59 page1.html
-rw-rw-r-- 1 locuser locuser 1118 abr 2 15:07 page2.html
-rw-rw-r-- 1 locuser locuser 847 abr 3 03:20 page3.html
(...)
The mapping can also be made explicit (userx <-> usery). For more details see man sshfs
I am writing this answer because none of the other answers worked for me.
Mounting as a directory and browsing with Atom (@Remco Haszing's answer) was a brilliant one,
but in my case Atom wants to index the whole remote project, and since it's a heavy one, it stops responding.
Using the remote-sync package is good when you work locally and then want to upload the files to the server.
Actually, remote-edit is the package meant to do this job (editing files remotely over SSH);
the problem with it is that it has been abandoned.
These helped me as its replacements:
https://atom.io/packages/remote-edit-ni
https://atom.io/packages/remote-editor

Why is my Symfony 2.0 site running slowly on Vagrant with Linux host?

I've got a Symfony 2.0 application running using Vagrant with a Linux guest and host O/S (Ubuntu). However, it runs slowly (e.g. several seconds for a page to load, often more than 10s) and I can't work out why. My colleagues who are running the site locally rather than on a Vagrant VM have it running faster.
I've read elsewhere that Vagrant VMs run very slowly if NFS isn't enabled, but I have enabled that. I'm also using the APC cache to try and speed things up, but still the problems remain.
I ran xdebug against my site using the instructions at http://webmozarts.com/2009/05/01/speedup-performance-profiling-for-your-symfony-app, but I'm not clear where to start with analysing the data from this. I've got as far as opening it in KCacheGrind and looking for the high numbers under "Incl." and "Self", but this has just shown that php::session_start takes quite a long time.
Any suggestions as to what I should be trying here? Sorry for the slightly broad question, but I'm stumped!
I've seen a similar problem on my OS X host; I had forgotten to enable NFS!
On a Windows host, the performance impact is less pronounced...
For my very small website, I quickly got to 12649 files... so the 1000+ files limit is quite easily reached.
So my two cents: enable NFS like this in your Vagrantfile:
config.vm.share_folder "v-root", "/vagrant", "..", :nfs => true
And from the experts:
It’s a long known issue that VirtualBox shared folder performance degrades quickly as the number of files in the shared folder increases. As a project reaches 1000+ files, doing simple things like running unit tests or even just running an app server can be many orders of magnitude slower than on a native filesystem (e.g. from 5 seconds to over 5 minutes).
If you’re seeing this sort of performance drop-off in your shared folders, NFS shared folders can offer a solution. Vagrant will orchestrate the configuration of the NFS server on the host and will mount of the folder on the guest for you.
Note: NFS is not supported on Windows hosts. According to VirtualBox, shared folders on Windows shouldn’t suffer the same performance penalties as on unix-based systems. If this is not true, feel free to use our support channels and maybe we can help you out.
Edit:
On Windows, I have found another solution: I am using symlinks (ln -fs) on the vendor folders within my projects that link to non-shared folders, as sketched below. This reduces the number of files seen by the Windows host, the antivirus, etc.
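A sketch of that symlink trick, with hypothetical paths (the real vendor directory lives outside the shared folder, and the project only keeps a link to it):
mkdir -p ~/vagrant-caches/myproject
mv vendor ~/vagrant-caches/myproject/vendor       # move the heavy directory out of the synced folder
ln -sfn ~/vagrant-caches/myproject/vendor vendor  # leave only a symlink inside the project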
Where I work, we've tried out two solutions to the problem of Vagrant + Symfony being slow. I recommend the second one (nfs and bind mounts).
The rsync approach
To begin with, we used rsync. Our approach was slightly different to that outlined in AdrienBrault's answer. Rather, we had code like the following in our Vagrantfile:
config.vm.define :myproj01 do |myproj|
  # Networking & Port Forwarding
  myproj.vm.network :private_network, type: "dhcp"
  # NFS Share
  myproj.vm.synced_folder ".", "/home/vagrant/current", type: 'rsync', rsync__exclude: [
    "/.git/",
    "/vendor/",
    "/app/cache/",
    "/app/logs/",
    "/app/uploads/",
    "/app/downloads/",
    "/app/bootstrap.php.cache",
    "/app/var",
    "/app/config/parameters.yml",
    "/composer.phar",
    "/web/bundles",
    "/web/uploads",
    "/bin/behat",
    "/bin/doctrine*",
    "/bin/phpunit",
    "/bin/webunit",
  ]
  # update VM sooner after files changed
  # see https://github.com/smerrill/vagrant-gatling-rsync#working-with-this-plugin
  config.gatling.latency = 0.5
end
As you might notice from the above, we kept files in sync using the Vagrant gatling rsync plugin.
The improved NFS approach, using bind mounts (recommended solution)
The rsync approach solves the speed issue, but we found some problems with it. In particular, the one-way nature of it (as opposed to sharing folders) was annoying when files (like composer.lock or Doctrine migrations) were generated on the VM, or when we wanted to access code in /vendor. We had to SFTP in to copy things back - and, in the case of new files, do it before they were cleared by the next run of the gatling plugin!
Therefore, we moved to a solution which uses bind mounts to handle folders like cache and logs differently. Not having those shared increased the speed dramatically.
The relevant bits of the Vagrantfile are as follows:
# Bind mounts for folders with dynamic data in them
# This must happen before provisioning, and on every subsequent reboot, hence run: "always"
config.vm.provision "shell",
  inline: "/home/vagrant/current/bin/bind-mounts",
  run: "always"
The bind-mounts script referenced above looks like this:
#!/bin/bash
mkdir -p ~vagrant/current/app/downloads/
mkdir -p ~vagrant/current/app/uploads/
mkdir -p ~vagrant/current/app/var/
mkdir -p ~vagrant/current/app/cache/
mkdir -p ~vagrant/current/app/logs/
mkdir -p ~vagrant/shared/app/downloads/
mkdir -p ~vagrant/shared/app/uploads/
mkdir -p ~vagrant/shared/app/var/
mkdir -p ~vagrant/shared/app/cache/
mkdir -p ~vagrant/shared/app/logs/
sudo mount -o bind ~vagrant/shared/app/downloads/ ~/current/app/downloads/
sudo mount -o bind ~vagrant/shared/app/uploads/ ~/current/app/uploads/
sudo mount -o bind ~vagrant/shared/app/var/ ~/current/app/var/
sudo mount -o bind ~vagrant/shared/app/cache/ ~/current/app/cache/
sudo mount -o bind ~vagrant/shared/app/logs/ ~/current/app/logs/
NFS + bind mounts is the approach I'd recommend.
At the moment, basically, do not put your website code in the /vagrant shared folder.
As it's shared between your VM and the host OS, it's slower, and I didn't find any efficient way to get good performance.
The solution we're using is to serve our development apps from the classic /var/www, and keep them in sync with our local copy using rsync.
Following the instructions in the article Speedup Symfony2 on Vagrant boxes helped me solve this issue, reducing page loads from 6-10 seconds to about 1 second on my Symfony2 project. Basically, the whole fix is to set the sync type between the host and the guest (the Vagrant VM box) to NFS instead of using the VirtualBox shared folder system, which is very slow.
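In current Vagrant syntax, enabling NFS for the default synced folder looks roughly like this (NFS synced folders require a private network; the IP is just an example):
config.vm.network "private_network", ip: "192.168.56.10"
config.vm.synced_folder ".", "/vagrant", type: "nfs"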
Also, adding the code below to AppKernel.php in the Symfony2 project changes the cache and log directories to the shared memory directory (/dev/shm) on the Vagrant box instead of writing them to the NFS share, which improves page load speed even further.
<?php
class AppKernel extends Kernel
{
    // ...
    public function getCacheDir()
    {
        if (in_array($this->getEnvironment(), array('dev', 'test'))) {
            return '/dev/shm/appname/cache/' . $this->getEnvironment();
        }
        return parent::getCacheDir();
    }

    public function getLogDir()
    {
        if (in_array($this->getEnvironment(), array('dev', 'test'))) {
            return '/dev/shm/appname/logs';
        }
        return parent::getLogDir();
    }
}
I use sshfs for sharing directories between the host OS and the VM (ExpanDrive for Windows).
It is much faster than native VirtualBox directory sharing.
