Ubuntu (Oracle VM) - Mounted Samba shares hang indefinitely - mount

I have a VM instance on Oracle Cloud (Ubuntu 22.04) set up with ZeroTier to act as a web server for some services that should work with my local Synology NAS.
For some of those services I also need to mount three SMB shares from my NAS with the ZeroTier tunnel, but I can't make it work.
I've used mount and mount.cifs plenty of times, with automounting too, but this time it acts very strangely:
running the mount command seems to succeed from the console, but /var/log/syslog reads
CIFS: VFS: \\XXX.XXX.XXX.XXX has not responded in 180 seconds.
Reconnecting...
if I try to access one of the shares (with ls, lsof, cd or any other command), it succeeds for only one of the shares (always the same one), and only for the first command given:
$ ls /temp
folder1 folder2 folder3
any following command just "hangs" as if the system were working on something, and most of the time it stays like that indefinitely:
$ ls /temp
█
Just a few times it spits out this error
lsof: WARNING: can't stat() cifs file system /temp
Output information may be incomplete.
ls 1475 ubuntu 3r DIR 0,44 0 123207681 /temp
findmnt reads:
└─/temp //XXX.XXX.XXX.XXX/Downloads cifs rw,relatime,vers=2.0,cache=strict, username=[redacted],uid=1005,noforceuid,gid=0,noforcegid,addr=XXX.XXX.XXX.XXX,file_mode=0755,dir_mode=0755,soft,nounix,serverino,mapposix,rsize=65536,wsize=65536,bsize=1048576,echo_interval=60,actimeo=1
the remaining two "mounted" shares don't respond to any command at all, not even the very first one; they just hang like the share that at least lets me browse once;
umount and umount -l take at least 2-3 minutes to successfully unmount the shares.
Same behavior when using smbclient and also with NFS shares from the same NAS.
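For completeness, the options findmnt reports above correspond roughly to a mount invocation like this (a sketch; <user> is a placeholder, everything else is taken from the findmnt output):
sudo mount -t cifs //XXX.XXX.XXX.XXX/Downloads /temp -o vers=2.0,username=<user>,uid=1005,gid=0,file_mode=0755,dir_mode=0755,soft,echo_interval=60,actimeo=1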
What I have already tried:
update kernel and all packages;
remove, purge and reinstall cifs-utils, smbclient and so on...
tried mounting the same shares in another client / node within the ZeroTier network and it works just fine; also browsing from Windows and Android file manager apps with and without ZeroTier works flawlessly;
tried all SMB versions including SMBv3 and SMBv1 (CIFS);
tried different browsing or mounting methods / commands including mount, mount.cifs, autofs, smbclient;
tried to debug what happens behind the console, but didn't find anything that seems related to this in the logs, htop or anywhere else. During the "hanging" sessions there is no spike in CPU, RAM or network usage on either the Oracle VM or the Synology NAS;
checked, reset and reconfigured all permissions on my NAS for shares, folders and files recursively, and reconfigured user and group permissions.
What I haven't tried yet (I'll try as soon as possible):
reproduce this on another Oracle VM configured the same as the faulty one and another with a different base image (maybe Oracle Linux?);
It seems to me that the mount.cifs process doesn't really succeed in mounting the shares correctly, as the mounts don't show up as working anywhere. It also seems like an issue not related to folder/file permissions, but rather to networking?
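In case it helps, this is the kind of capture I plan to run next to confirm a networking problem (a sketch; the ZeroTier interface name ztabcdef123 is a placeholder - check yours with ip link show):
sudo dmesg -wT | grep -i cifs          # follow CIFS kernel messages while reproducing the hang
sudo tcpdump -ni ztabcdef123 port 445  # in a second terminal, capture SMB traffic on the ZeroTier interface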
A note on something that may or may not be related: ZeroTier on my Synology NAS does not seem to work with IPv4 only - it stays OFFLINE. The node goes ONLINE only when IPv6 is enabled, and I must say this is the only node in my ZT network that shows an IPv6 address as its public IP in the ZT web GUI - the other nodes show public IPv4 addresses.
If anyone has any clue on this, I'll be happy to support and reproduce any advice. Thank you!

I'm using Tailscale, but I presume it will work the same.
You need to add port 445 to /etc/iptables/rules.v4, just under the SSH rule, like below:
-A INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 445 -j ACCEPT
Then you need to edit the interfaces in /etc/samba/smb.conf to:
interfaces = lo tailscale0 100.0.0.0/24
Obviously, my interface is tailscale0, but yours will be different; use ip link show to find yours. You may also need to change the IP range to suit ZeroTier's - 100.0.0.0/24 is what Tailscale uses.
Then reboot!
I couldn't get it working without doing this.
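If you want to avoid a full reboot, something like this should apply the same changes (a sketch; it assumes the iptables-persistent package and the stock Samba service names):
sudo iptables-restore < /etc/iptables/rules.v4   # reload the firewall rules
testparm -s                                      # sanity-check smb.conf
sudo systemctl restart smbd nmbd                 # restart Samba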

Related

mount.nfs: requested NFS version or transport protocol is not supported

NFS Mount is not working in my RHEL 7 AWS instance.
When I do a
mount -o nfsvers=3 10.10.11.10:/ndvp2 /root/mountme2/
I get the error:
mount.nfs: requested NFS version or transport protocol is not supported
Can anyone point me where I am wrong?
Thanks.
Check that the NFS service is started, or restart it:
sudo systemctl status nfs-kernel-server
In my case this service was not running, and the issue was in the /etc/exports file, where I had the same IP address for two machines.
So I commented out one machine's entry, restarted nfs-kernel-server using
sudo systemctl restart nfs-kernel-server, and reloaded the machine.
It worked.
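For illustration, the kind of /etc/exports conflict described above looks something like this (paths and addresses are made up):
# /etc/exports - two entries for the same client IP caused the conflict
/srv/share1 192.168.1.50(rw,sync,no_subtree_check)
# /srv/share2 192.168.1.50(rw,sync,no_subtree_check)   <- commented out before restarting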
A clarification which might be useful for dummies (like me): systemctl status nfs-server.service and systemctl start nfs-server.service must be executed on the server!
Some additional data
If, like me, you've deleted a VM without shutting it down properly, you might also need to manually edit /etc/exports, because NFS keeps trying to connect to the deleted export and fails, but doesn't continue with the next one: it just dies.
After that you can manually restart as mentioned in other answers.
In my case, a simple reload didn't suffice. I had to perform a full restart:
sudo systemctl restart nfs-kernel-server
In my case, it didn't work correctly with NFS version 4.1.
So in the Vagrantfile, in each place where there is type: 'nfs', I added a comma and nfs_version: 4, nfs_udp: false.
Here is a more detailed explanation: NFS
If you're giving a specific protocol to connect with, also check to make sure your NFS server has that protocol enabled.
I got this error when trying to start up a Vagrant box, and my nfs server was running. It turns out that the command Vagrant uses is:
mount -o vers=3,udp,rw,actimeo=1 192.168.56.1:/dir/on/host /vagrant
Which specifically asks for UDP. My server was running but it was not configured to enable connecting over UDP. After consulting /etc/nfs.conf, I created /etc/nfs.conf.d/10-enable-udp.conf with the following contents to enable udp:
[nfsd]
udp=y
The name of the file doesn't matter, as long as it's in the conf.d directory and ends in .conf. Depending on your distribution it may be configured differently. You can directly edit nfs.conf, but using a conf.d file is more likely to preserve the changes after upgrading your system.
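To confirm the change took effect, something like this should now list UDP entries for nfs version 3 (the service is nfs-server on most distros, nfs-kernel-server on Debian/Ubuntu):
sudo systemctl restart nfs-server
rpcinfo -p | grep -w nfs    # should show both tcp and udp lines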
Try to ping the IP address of the server from the client; if you get a reply, then install the NFS server on the host. Then edit the /etc/exports file; don't forget to add the port along with the IP address.
I got the solution: make an entry on the NFS server in /etc/nfsmount.conf with Defaultvers=3.
There will be a commented line # Defaultvers=3; just uncomment it and then mount on the NFS client.
The issue will be resolved!
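For reference, the relevant part of /etc/nfsmount.conf would end up looking roughly like this (the section name is taken from the stock nfs-utils file and may differ on your system):
[ NFSMount_Global_Options ]
# default NFS version used for mounts
Defaultvers=3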

mount: nfs access denied by server

Am trying to mount a NFS device in my linux machine.
My /etc/fstab is like this,
192.168.0.5:/volume2/Asterisk_Recordings /var/spool/newnfs nfs rsize=32768,wsize=32768,intr,noatime 1 0
My /etc/mtab is like this,
192.168.0.5:/volume2/Asterisk_Recordings /var/spool/newnfs nfs rw,addr=192.168.0.5 0 0
I have enabled NFS in my NAS device.
When I run mount -t nfs -v 192.168.0.5:/volume2/Asterisk_Recordings /var/spool/newnfs/ I get this:
mount.nfs: timeout set for Thu Aug 1 07:01:04 2013
mount.nfs: trying text-based options 'vers=4,addr=192.168.0.5,clientaddr=192.168.1.1'
mount.nfs: mount(2): Permission denied
mount.nfs: access denied by server while mounting 192.168.0.5:/volume2/Asterisk_Recordings
Any possible reasons?
Thanks in advance.
This error can also occur if the /etc/hosts file on the nfs server maps the hostname of the client to an incorrect IP address, or the IP address of the client to an incorrect hostname. It is quick and easy to check, so worth doing before looking for other problems. Note that, if you do have to change any entries then the nfs-server has to be stopped and re-started, as it reads the hosts file only when it is started.
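A quick way to check what the server currently resolves for a client (the hostname/IP is a placeholder):
getent hosts <client-hostname-or-ip>    # run on the NFS server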
Is there a config file on the NAS where to put allowances for clients? E.g. in debian based OS the config file is "/etc/exports" and you would put there "/volume2/Asterisk_Recordings 192.168.1.1(rw,sync)" and activate this with "exportfs -a" (your NAS may do this automatically if you update the config via a web interface, I guess.) Check also https://stackoverflow.com/questions/22246477/mounting-nfs-results-in-access-denied-by-server.
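In file/command form, that would be roughly (run on the NAS/server; the path and client IP are the ones from the example above):
# /etc/exports
/volume2/Asterisk_Recordings 192.168.1.1(rw,sync)
# then activate the export:
exportfs -a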
Remember to add the IP addresses/hostnames of your NFS clients to /etc/hosts.allow on the NFS server:
nfs: clienthost1, clienthost2, clienthost3
You might restart the NFS config and NFS service on the NFS server, as well as run the export again:
systemctl restart nfs-config.service
systemctl status nfs.service
exportfs -arv
I have a Debian 10 system with a Debian 10 VM running inside it. I wanted to access a physical partition of the hard drive from the VM. I mounted the physical drive on the host and exported it. I was not able to mount it on the guest, continually getting an access denied error.
The solution, after many hours, was to add the no_all_squash option in the exports file. This is supposed to be the default, but I needed to add it explicitly. As soon as I did that, the problem went away and I could mount the file system. Unfortunately, I could not see the files on it.
/media/dev 192.168.100.0/24(rw,sync,no_subtree_check,no_root_squash,no_all_squash)
On the server I could see the files, but on the guest I could not.
I had to change the line to
/media/dev 192.168.100.0/255.255.255.0(rw,sync,no_subtree_check,no_root_squash,no_all_squash)
to see the actual files that were on the file system.
I saw this error presumably due to an older NFS client; adding -o nfsvers=3 fixed the issue for me, e.g. mount -t nfs -o nfsvers=3 x.x.x.x:/nfs_mount /mnt/nfs_mount
Or in /etc/fstab
x.x.x.x:/nfs_mount /mnt/nfs_mount nfs proto=tcp,port=2049,nfsvers=3 0 0
Ref: https://www.thegeekdiary.com/mount-nfs-access-denied-by-server-while-mounting-how-to-resolve/

Why is my Symfony 2.0 site running slowly on Vagrant with Linux host?

I've got a Symfony 2.0 application running using Vagrant with a Linux guest and host O/S (Ubuntu). However, it runs slowly (e.g. several seconds for a page to load, often more than 10s) and I can't work out why. My colleagues who are running the site locally rather than on Vagrant VM have it running faster.
I've read elsewhere that Vagrant VMs run very slowly if NFS isn't enabled, but I have enabled that. I'm also using the APC cache to try and speed things up, but still the problems remain.
I ran xdebug against my site using the instructions at http://webmozarts.com/2009/05/01/speedup-performance-profiling-for-your-symfony-app, but I'm not clear where to start with analysing the data from this. I've got as far as opening it in KCacheGrind and looking for the high numbers under "Incl." and "Self", but this has just shown that php::session_start takes quite a long time.
Any suggestions as to what I should be trying here? Sorry for the slightly broad question, but I'm stumped!
I've seen a similar problem on my OS X host: I had forgotten to enable NFS!
On a Windows host, the performance impact is less pronounced...
For my very small website, I quickly got to 12649 files... so the 1000+ file limit is easily reached.
So my two cents: enable NFS like this in your Vagrantfile:
config.vm.share_folder "v-root", "/vagrant", ".." , :nfs => true
And from the experts:
It’s a long known issue that VirtualBox shared folder performance degrades quickly as the number of files in the shared folder increases. As a project reaches 1000+ files, doing simple things like running unit tests or even just running an app server can be many orders of magnitude slower than on a native filesystem (e.g. from 5 seconds to over 5 minutes).
If you're seeing this sort of performance drop-off in your shared folders, NFS shared folders can offer a solution. Vagrant will orchestrate the configuration of the NFS server on the host and will mount the folder on the guest for you.
Note: NFS is not supported on Windows hosts. According to VirtualBox, shared folders on Windows shouldn’t suffer the same performance penalties as on unix-based systems. If this is not true, feel free to use our support channels and maybe we can help you out.
Edit:
On Windows, I have found another solution: I use symlinks (ln -fs) on the vendor folders within my projects, pointing to non-shared folders. This reduces the number of files seen by the Windows host, the antivirus, etc.
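For what it's worth, on current Vagrant versions the share_folder line above would be written roughly like this (a sketch; the paths are the same as in the v1 example):
config.vm.synced_folder "..", "/vagrant", type: "nfs"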
Where I work, we've tried out two solutions to the problem of Vagrant + Symfony being slow. I recommend the second one (nfs and bind mounts).
The rsync approach
To begin with, we used rsync. Our approach was slightly different to that outlined in AdrienBrault's answer. Rather, we had code like the following in our Vagrantfile:
config.vm.define :myproj01 do |myproj|
  # Networking & Port Forwarding
  myproj.vm.network :private_network, type: "dhcp"

  # NFS Share
  myproj.vm.synced_folder ".", "/home/vagrant/current", type: 'rsync', rsync__exclude: [
    "/.git/",
    "/vendor/",
    "/app/cache/",
    "/app/logs/",
    "/app/uploads/",
    "/app/downloads/",
    "/app/bootstrap.php.cache",
    "/app/var",
    "/app/config/parameters.yml",
    "/composer.phar",
    "/web/bundles",
    "/web/uploads",
    "/bin/behat",
    "/bin/doctrine*",
    "/bin/phpunit",
    "/bin/webunit",
  ]

  # update VM sooner after files changed
  # see https://github.com/smerrill/vagrant-gatling-rsync#working-with-this-plugin
  config.gatling.latency = 0.5
end
As you might notice from the above, we kept files in sync using the Vagrant gatling rsync plugin.
The improved NFS approach, using bind mounts (recommended solution)
The rsync approach solves the speed issue, but we found some problems with it. In particular, the one-way nature of it (as opposed to sharing folders) was annoying when files (like composer.lock or Doctrine migrations) were generated on the VM, or when we wanted to access code in /vendor. We had to SFTP in to copy things back - and, in the case of new files, do it before they were cleared by the next run of the gatling plugin!
Therefore, we moved to a solution which uses bind mounts to handle folders like cache and logs differently. Not having those shared increased the speed dramatically.
The relevant bits of the Vagrantfile are as follows:
# Binding mounts for folders with dynamic data in them
# This must happen before provisioning, and on every subsequent reboot, hence run: "always"
config.vm.provision "shell",
    inline: "/home/vagrant/current/bin/bind-mounts",
    run: "always"
The bind-mounts script referenced above looks like this:
#!/bin/bash
mkdir -p ~vagrant/current/app/downloads/
mkdir -p ~vagrant/current/app/uploads/
mkdir -p ~vagrant/current/app/var/
mkdir -p ~vagrant/current/app/cache/
mkdir -p ~vagrant/current/app/logs/
mkdir -p ~vagrant/shared/app/downloads/
mkdir -p ~vagrant/shared/app/uploads/
mkdir -p ~vagrant/shared/app/var/
mkdir -p ~vagrant/shared/app/cache/
mkdir -p ~vagrant/shared/app/logs/
sudo mount -o bind ~vagrant/shared/app/downloads/ ~/current/app/downloads/
sudo mount -o bind ~vagrant/shared/app/uploads/ ~/current/app/uploads/
sudo mount -o bind ~vagrant/shared/app/var/ ~/current/app/var/
sudo mount -o bind ~vagrant/shared/app/cache/ ~/current/app/cache/
sudo mount -o bind ~vagrant/shared/app/logs/ ~/current/app/logs/
NFS + bind mounts is the approach I'd recommend.
ATM, basically, do not put your website code on the /vagrant shared folder.
As it's shared between your VM and host O/S, it's slower; and I didn't find any efficient solution to get good performance.
The solution we're using is to serve our development apps from the classic /var/www, and keep them in sync with our local copy with rsync.
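A minimal sketch of that kind of sync (the VM address, user and target path are placeholders):
rsync -av --delete --exclude '.git' ./ vagrant@192.168.33.10:/var/www/myapp/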
Following the instructions in the article Speedup Symfony2 on Vagrant boxes helped me solve this issue, reducing page loads from 6-10 seconds to 1 second on my Symfony2 project. Basically, the whole fix is to set the sync type between the host and the guest (the Vagrant VM box) to NFS instead of using the VirtualBox shared folder system, which is very slow.
Also, adding the code below to AppKernel.php in the Symfony2 project changes the cache and log directories to the shared memory directory (/dev/shm) on the Vagrant box instead of writing them to the NFS share, which improves page load speed even further.
<?php

class AppKernel extends Kernel
{
    // ...

    public function getCacheDir()
    {
        if (in_array($this->getEnvironment(), array('dev', 'test'))) {
            return '/dev/shm/appname/cache/' . $this->getEnvironment();
        }

        return parent::getCacheDir();
    }

    public function getLogDir()
    {
        if (in_array($this->getEnvironment(), array('dev', 'test'))) {
            return '/dev/shm/appname/logs';
        }

        return parent::getLogDir();
    }
}
I use sshfs for sharing directories between the host OS and the VM (ExpanDrive for Windows).
It is much faster than native VirtualBox directory sharing.
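For reference, a typical sshfs invocation looks like this (host, paths and options are placeholders):
sshfs user@devbox:/var/www/myapp ~/mnt/myapp -o reconnect,follow_symlinks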

Cannot connect to beaglebone.local

I need to know how to connect to a beaglebone (or beagleboard) with SSH when I plug it into a new network with an ethernet cable like this:
$ ssh root@beaglebone.local
So far I've only been able to access it like this, if I know the IP address:
$ ssh root@<ip_address>
But I don't always know the IP address of the board on new networks, so I'm hoping to access it with a name like beaglebone.local.
Right now when I try to do this I get this error:
"ssh: Could not resolve hostname beaglebone.local: nodename nor servname provided, or not known"
I checked the hostname and hosts files, and added "127.0.0.1 beaglebone" to the hosts file on the beaglebone, but I'm not sure what else I can do.
# cat /etc/hostname
beaglebone
# cat /etc/hosts
127.0.0.1 localhost.localdomain localhost
127.0.0.1 beaglebone
I had a similar issue running my beaglebone on Angstrom-Cloud9-IDE-GNOME-eglibc-ipk-v2012.05-beaglebone-2012.04.22.img.xz. In this distribution, "beaglebone.local" should appear on the network after the system boots.
About 50% of the time after a reboot, "beaglebone.local" would not appear on the network (although the bone would be available by IP address). When this happened, "systemctl status avahi-daemon.service" showed that the avahi-daemon had failed with "exit code 255". Interestingly, a subsequent "systemctl start avahi-daemon.service" would always be successful and "beaglebone.local" would appear on the network.
Also, "journalctl | grep avahi" returned a single message stating something like "Daemon already running on PID NNN".
So, I "fixed" the problem by adding the line "ExecStartPre=/bin/rm -f /var/run/avahi-daemon/pid" to the [Service] section of /lib/systemd/system/avahi-daemon.service. With this addition, "beaglebone.local" now appears on the network 100% of reboots.
I say "fixed" (i.e., in quotes) because I have not been able to track down the root cause that is leaving around the stray avahi pid file(s) and thus don't have a true fix.
-- Frank
For 'beaglebone.local' to work, your host machine must recognize Zeroconf. The BeagleBone uses Avahi to tell other systems on the LAN that it is there and serving up applications and that it should be called a 'beaglebone'. If there are more than one, the second one is generally called 'beaglebone-2.local'.
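A quick way to check from the host whether the announcement is visible (assuming the avahi-utils package on Linux; dns-sd ships with macOS):
avahi-resolve -n beaglebone.local    # Linux
dns-sd -G v4 beaglebone.local        # macOS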
I hate answering my own questions. The following hack will work until a better way emerges:
This shell script (where xxx.xxx.xxx is the first three octets of your computer's IP) will find your beaglebone or beagleboard (plugged into ethernet on a new network with DHCP) by looping through all the IP addresses on the subnet and attempting to log in to each as root. If it finds one, try your password. If it doesn't work, just hit enter until the loop starts again. If it doesn't find the board, then something else is probably wrong.
for ip in $(seq 1 254); do ssh root@xxx.xxx.xxx.$ip -o ConnectTimeout=5; [ $? -eq 0 ] && echo "xxx.xxx.xxx.$ip UP" || : ; done
UPDATE 1
Today I plugged in the beaglebone and saw Bonjour recognize that it joined the network. So I tried it and it worked. No idea why it suddenly decided to work, but it did. Strange, but true.
I had this issue quite often with Mac OS X 10.7. But unlike Frank Halasz's case, "systemctl status avahi-daemon.service" showed no failure. In fact, the problem was on the Mac side. Restarting Bonjour with the following commands fixed the issue.
$ sudo launchctl unload /System/Library/LaunchDaemons/com.apple.mDNSResponder.plist
$ sudo launchctl load -F /System/Library/LaunchDaemons/com.apple.mDNSResponder.plist

Unable to rsync between my server and my Mac

I have a server where I store data from Mac A and Mac B.
I use rsync to keep the files updated between my Macs.
I run the following code unsuccessfully
#!/bin/zsh
# to copy files from my server to my folder
rsync -Pav $Masi:~/private/ ~/Dropbox/Courses/math/
# to copy files from my folder to my server
rsync -Pav ~/Dropbox/Courses/math $Masi:~/private/
I get the following error message
ssh: connect to host port 22: Connection refused
rsync: connection unexpectedly closed (0 bytes received so far) [receiver]
rsync error: unexplained error (code 255) at io.c(600) [receiver=3.0.5]
ssh: connect to host port 22: Connection refused
rsync: connection unexpectedly closed (0 bytes received so far) [sender]
rsync error: unexplained error (code 255) at io.c(600) [sender=3.0.5]
I have ssh keys in place so the connection should work, since I can use scp without problems.
How can I use rsync between my server and one of my Macs?
I used to do a lot of this. I just ran a test; a few suggestions:
Spell out your entire user@host pattern.
Run the ssh connection without rsync first; you may need to approve the host fingerprint first.
You do not seem to pass a flag to protect extended attributes; this can yield broken files on OS X. If you do not need resource forks, you are OK, but most of the time you do need them.
My test case:
$ rsync -Pav ~/Desktop/ me@remote.example.com:~/rsyc-test
In that case, all the files within ~/Desktop were copied to the remote host, into my home dir. Since the directory 'rsyc-test' did not exist, it was made for me. I had a .app on my Desktop; it made it over and, surprisingly, it works. Even some .webloc files made it and appear to work, though I do not trust it.
I would strongly suggest adding the -E flag:
-E, --extended-attributes
Apple specific option to copy extended attributes, resource
forks, and ACLs. Requires at least Mac OS X 10.4 or suitably
patched rsync.
I ran a new test: I moved an Interarchy bookmark to my desktop; I know for a fact these break if they are copied without resource forks. Running without the -E versus with the -E, there is a difference of 152 bytes in transferred data. The first file on the remote machine did not work; the second transferred file did work.
I cannot help but notice that in your example one of your paths is ~/Dropbox, so this may not matter at all, since Dropbox, the app, does not currently support resource forks at all, though I hear there are plans to in the future.
You are also not passing the --delete flag. If your end goal is a mirror of your data, you are not getting that; if your end goal is a backup that continually grows, keeping everything that was ever on the source, then the lack of --delete is good.
Other notes:
You can exclude those silly .DS_Store files
--exclude '.DS_Store'
You can also set rsync up in a way to be a true mirror, so you would not need to run your other command, see the man page for details.
My final working command to shove the Desktop of my laptop to a remote machine:
$ rsync -PEav --delete --exclude '.DS_Store' ~/Desktop/ me@remote.example.com:~/rsycn-test
Check "$Masi". Is that the hostname you are trying to reach?
Try the following command to debug it:
rsync -e 'ssh -v' -Pav $Masi:~/private/ ~/Dropbox/Courses/math/
The Connection refused usually happens when there is a connection issue to the remote (e.g. firewall).
In your case the problem is that the $Masi variable is empty. If it's not meant to be a variable, use Masi literally.
As per this error:
ssh: connect to host port 22: Connection refused
Notice the double space above after the host word.
The connect to host message doesn't say which host, so you're trying to connect to an empty host. It sounds like a typo in the host name.
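A quick sanity check along those lines (same variable as in the question):
echo "host is: '$Masi'"    # an empty string here means the variable isn't set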
