I'm new to mounting filesystems on Linux, and I have a problem with an NFS mount.
I have two PCs (PC1 and PC2), and PC1 has two hard disks (sda1, sdb1).
On PC1, each disk is mounted like this:
/dev/sda1 on /nfsshare type ext4 (rw,errors=remount-ro)
/dev/sdb1 on /nfsshare/more type ext4 (rw)
and this is PC1's /etc/exports file:
#PC1:/etc/exports file
/nfsshare [PC2's IP] (rw, sync, no_root_squash, no_subtree_check)
/nfsshare/more [PC2's IP] (rw, sync, no_root_squash, no_subtree_check)
I want to see both PC1:/nfsshare and PC1:/nfsshare/more on PC2,
so I mounted them like this (run on PC2):
mount -t nfs PC1:/nfsshare /nfs
mount -t nfs PC1:/nfsshare/more /nfs/more
Both mounts completed without any problem or warning message,
and I can see changes to PC1:/nfsshare from PC2:/nfs.
But I can't see any changes to PC1:/nfsshare/more from PC2:/nfs/more.
How can I fix this? I want to see all changes in the subtree under PC1:/nfsshare from PC2:/nfs.
You'll want to fix the typo in your exports. Your original exports file had
/nfsshare/tron [PC2's IP] (rw, sync, no_root_squash, no_subtree_check)
but you intend to export "more" rather than "tron".
Also, consider testing a simpler non-nested mount point, such as "/nfsmore", and then fixing things up with symlinks.
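As a concrete sketch, the corrected exports on PC1 could look like this. Note that /etc/exports is whitespace-sensitive: a space between the host and the option list actually exports the share world-wide with those options, so the stray spaces shown in the question should be removed too:
# PC1:/etc/exports -- "more" spelled correctly, no space before or inside the option list
/nfsshare      [PC2's IP](rw,sync,no_root_squash,no_subtree_check)
/nfsshare/more [PC2's IP](rw,sync,no_root_squash,no_subtree_check)
After editing, reload the export table on PC1 with exportfs -ra and retry both mounts from PC2.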
I have a VM instance on Oracle Cloud (Ubuntu 22.04) set up with ZeroTier to act as a web server for some services that should work with my local Synology NAS.
For some of those services I also need to mount three SMB shares from my NAS through the ZeroTier tunnel, but I can't make it work.
I've used mount and mount.cifs plenty of times, with automounting too, but this time it acts very strangely:
running the mount command seems to succeed from the console, but /var/log/syslog reads
CIFS: VFS: \\XXX.XXX.XXX.XXX has not responded in 180 seconds.
Reconnecting...
when trying to access one of the shares (with ls, lsof, cd, or any other command), it succeeds for only one of the shares (always the same one), and only the first time a command is given:
$ ls /temp
folder1 folder2 folder3
any following command just "hangs" as if the system is working on something, and it stays like that indefinitely most of the time:
$ ls /temp
█
Just a few times it spits out this error:
lsof: WARNING: can't stat() cifs file system /temp
Output information may be incomplete.
ls 1475 ubuntu 3r DIR 0,44 0 123207681 /temp
findmnt reads:
└─/temp //XXX.XXX.XXX.XXX/Downloads cifs rw,relatime,vers=2.0,cache=strict, username=[redacted],uid=1005,noforceuid,gid=0,noforcegid,addr=XXX.XXX.XXX.XXX,file_mode=0755,dir_mode=0755,soft,nounix,serverino,mapposix,rsize=65536,wsize=65536,bsize=1048576,echo_interval=60,actimeo=1
for the remaining two "mounted" shares, neither seems to respond to any command, not even the very first one; they just hang like the one share that at least lets me browse once;
umount and umount -l take at least 2-3 minutes to successfully unmount the shares.
The same behavior occurs with smbclient, and also with NFS shares from the same NAS.
What I have already tried:
update kernel and all packages;
remove, purge and reinstall cifs-utils, smbclient and so on...
tried mounting the same shares on another client/node within the ZeroTier network, and it works just fine; browsing from Windows and Android file manager apps, with and without ZeroTier, also works flawlessly;
tried all SMB versions including SMBv3 and SMBv1 (CIFS);
tried different browsing or mounting methods / commands including mount, mount.cifs, autofs, smbclient;
tried to debug what happens behind the console, but didn't find anything in the logs, htop, or anywhere else that seems related to this. During the "hanging" sessions there is no spike in CPU, RAM or network usage on either the Oracle VM or the Synology NAS;
checked, reset and reconfigured all permissions on my NAS for shares, folders and files recursively, and reconfigured user group permissions.
What I haven't tried yet (I'll try as soon as possible):
reproduce this on another Oracle VM configured the same as the faulty one and another with a different base image (maybe Oracle Linux?);
It seems to me that the mount.cifs process doesn't really succeed in mounting the share correctly, as it doesn't show as such anywhere. It also seems to be an issue not related to folder/file permissions, but rather to networking.
A note on something that may or may not be related: ZeroTier on my Synology NAS does not seem to work with IPv4 only - it remains OFFLINE. The node goes ONLINE only when IPv6 is enabled, but I must say this is the only node in my ZT network that shows an IPv6 address as its public IP in the ZT web GUI - the other nodes show public IPv4 addresses.
If anyone has any clue about this, I'll be happy to try out and report back on any advice. Thank you!
I'm using Tailscale, but I presume it will work the same.
You need to add port 445 to /etc/iptables/rules.v4, just under the SSH rule, like this:
-A INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 445 -j ACCEPT
Then you need to edit the interfaces in /etc/samba/smb.conf to:
interfaces = lo tailscale0 100.0.0.0/24
Obviously, my interface is tailscale0, but yours will be different; use ip link show to find yours. You may also need to change the IP range to suit ZeroTier's, rather than 100.0.0.0/24, which is what Tailscale uses.
Then reboot!
I couldn't get it working without doing this.
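If you'd rather not do a full reboot, reloading the firewall rules and restarting Samba should apply the same changes; a minimal sketch, assuming Debian/Ubuntu-style service names:
sudo iptables-restore < /etc/iptables/rules.v4
sudo systemctl restart smbd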
I am trying to create a simple hosting platform for my clients. I am deploying all of my apps via Docker on a VPS behind nginx-proxy. For WordPress applications I want to be able to limit disk space so that my clients do not use too much and affect other applications. I bind mount all volumes to a single directory so that I can back up easily with cron.
I've changed the storage driver to overlay2 and am on CentOS 7.
[root@my-ip ~]# docker info
Server:
Containers: 12
Running: 12
Paused: 0
Stopped: 0
Images: 11
Server Version: 19.03.1
Storage Driver: overlay2
Backing Filesystem: xfs
Supports d_type: true
Native Overlay Diff: true
When I run a WordPress container with --storage-opt size=10G I get the following error:
docker: Error response from daemon: --storage-opt is supported only for overlay over xfs with 'pquota' mount option.
This is an example of the bind mount I am using:
-v /DOCKER_VOLUMES/wordpress/appname/www/html:/var/www/html
How do I fix this? Can you please provide a full list of instructions to enable it?
From the docs:
This (size) will allow to set the container rootfs size to 120G at creation time. This option is only available for the devicemapper, btrfs, overlay2, windowsfilter and zfs graph drivers. For the devicemapper, btrfs, windowsfilter and zfs graph drivers, user cannot pass a size less than the Default BaseFS Size. For the overlay2 storage driver, the size option is only available if the backing fs is xfs and mounted with the pquota mount option. Under these conditions, user can pass any size less than the backing fs size.
so pquota needs to be enabled on your system.
You can enable it by editing /etc/default/grub like so:
GRUB_CMDLINE_LINUX_DEFAULT="rootflags=uquota,pquota"
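On CentOS 7 the new kernel argument only takes effect after regenerating the GRUB configuration and rebooting; a sketch, assuming a BIOS install (UEFI systems use /boot/efi/EFI/centos/grub.cfg instead) and that /var/lib/docker lives on the root filesystem:
grub2-mkconfig -o /boot/grub2/grub.cfg
reboot
# after the reboot, the root mount should list prjquota among its options:
mount | grep ' / '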
Then try rerunning your command with --storage-opt size=10G.
I'm trying to keep SELinux enforcing but to allow NGINX to directly access shared OSX folders that are connected via Parallels Desktop.
Host system: Mac OSX 10.10
Parallels Desktop: 10
Running Virtual OS: CentOS 7 (minimal / command line)
I have the Parallels Tools installed, and in CentOS I see the shared folder: /media/psf/Shared-Folder
When I set the Nginx server root to that folder I get a 403 Forbidden. I know it is a configuration parameter that needs editing, because if I change SELinux to permissive, the files are served correctly by NGINX.
When checking how the files are mounted I see this:
root root system_u:object_r:removable_t:s0 /media/psf/Shared-Folder/
I can see the 'removable_t' context - however, my issue is that I cannot seem to find a way to allow the httpd service to serve files that are mounted as removable storage.
I have tried:
chcon -R -t public_content_t /media/psf/Shared_Folder/
chcon -R -t httpd_sys_content_t /media/psf/Development-Projects/
and in all cases I get a "chcon: failed to change context of '...': Operation not supported" error.
Checking /usr/sbin/getsebool -a | grep http I do not see any option to allow httpd to access removable storage mounts.
Last item: I do not believe I can change the way Parallels mounts the shared folders.
Question: Is there a way to keep SELinux enforcing but to allow NGINX to directly access shared OSX folders that are connected via Parallels Desktop?
What you need to do is use semanage. To get it, you have to install policycoreutils-python.
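A minimal sketch of the semanage approach, using the path from the question (note that if the Parallels filesystem refuses relabeling, as the chcon error suggests, these commands may fail the same way; a context= mount option is the usual fallback):
yum install -y policycoreutils-python
semanage fcontext -a -t httpd_sys_content_t "/media/psf/Shared-Folder(/.*)?"
restorecon -Rv /media/psf/Shared-Folder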
The same type of question has already been asked here. Cheers!
I am trying to mount an NFS share on my Linux machine.
My /etc/fstab looks like this:
192.168.0.5:/volume2/Asterisk_Recordings /var/spool/newnfs nfs rsize=32768,wsize=32768,intr,noatime 1 0
My /etc/mtab looks like this:
192.168.0.5:/volume2/Asterisk_Recordings /var/spool/newnfs nfs rw,addr=192.168.0.5 0 0
I have enabled NFS in my NAS device.
When I run mount ("mount -t nfs -v 192.168.0.5:/volume2/Asterisk_Recordings /var/spool/newnfs/") I get this:
mount.nfs: timeout set for Thu Aug 1 07:01:04 2013
mount.nfs: trying text-based options 'vers=4,addr=192.168.0.5,clientaddr=192.168.1.1'
mount.nfs: mount(2): Permission denied
mount.nfs: access denied by server while mounting 192.168.0.5:/volume2/Asterisk_Recordings
Any possible reasons?
Thanks in advance.
This error can also occur if the /etc/hosts file on the NFS server maps the hostname of the client to an incorrect IP address, or the IP address of the client to an incorrect hostname. It is quick and easy to check, so worth doing before looking for other problems. Note that if you do have to change any entries, the NFS server has to be stopped and restarted, as it reads the hosts file only when it is started.
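A quick way to check, run on the NFS server (the hostname here is a placeholder; the IP is the client address from the question):
getent hosts client-hostname   # should print the client's real IP
getent hosts 192.168.1.1       # should print the client's real hostname
systemctl restart nfs-server   # re-read /etc/hosts after fixing entries (unit name varies by distro)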
Is there a config file on the NAS where you can put allowances for clients? E.g. on Debian-based OSes the config file is /etc/exports; you would put "/volume2/Asterisk_Recordings 192.168.1.1(rw,sync)" there and activate it with exportfs -a. (Your NAS may do this automatically if you update the config via a web interface, I guess.) Check also https://stackoverflow.com/questions/22246477/mounting-nfs-results-in-access-denied-by-server.
Remember to add the IP addresses/hostnames of your NFS clients to /etc/hosts.allow on the NFS server:
nfs: clienthost1, clienthost2, clienthost3
You might also restart the NFS config and NFS services on the NFS server, and run the export again:
systemctl restart nfs-config.service
systemctl restart nfs.service
exportfs -arv
I have a Debian 10 system with a Debian 10 VM running inside it. I wanted to access a physical partition from the hard drive in the VM, so I mounted the physical drive on the host and exported it. I was not able to mount it on the guest, continually getting an access denied error.
The solution, after many hours, was to add the no_all_squash option in the exports file. This is supposed to be the default, but I needed to add it explicitly. As soon as I did that, the problem went away and I could mount the file system. Unfortunately, I could not see the files on the fs.
/media/dev 192.168.100.0/24(rw,sync,no_subtree_check,no_root_squash,no_all_squash)
On the server I could see the files, but on the guest I could not. I had to change the line to
/media/dev 192.168.100.0/255.255.255.0(rw,sync,no_subtree_check,no_root_squash,no_all_squash)
to see the actual files that were on the filesystem.
I saw this error, presumably due to an older NFS client, and adding -o nfsvers=3 fixed the issue for me, e.g. mount -t nfs -o nfsvers=3 x.x.x.x:/nfs_mount /mnt/nfs_mount
Or in /etc/fstab
x.x.x.x:/nfs_mount /mnt/nfs_mount nfs proto=tcp,port=2049,nfsvers=3 0 0
Ref: https://www.thegeekdiary.com/mount-nfs-access-denied-by-server-while-mounting-how-to-resolve/
I've got a Symfony 2.0 application running using Vagrant with a Linux guest and host O/S (Ubuntu). However, it runs slowly (e.g. several seconds for a page to load, often more than 10s) and I can't work out why. My colleagues who are running the site locally rather than on Vagrant VM have it running faster.
I've read elsewhere that Vagrant VMs run very slowly if NFS isn't enabled, but I have enabled that. I'm also using the APC cache to try and speed things up, but still the problems remain.
I ran xdebug against my site using the instructions at http://webmozarts.com/2009/05/01/speedup-performance-profiling-for-your-symfony-app, but I'm not clear where to start with analysing the data from this. I've got as far as opening it in KCacheGrind and looking for the high numbers under "Incl." and "Self", but this has just shown that php::session_start takes quite a long time.
Any suggestions as to what I should be trying here? Sorry for the slightly broad question, but I'm stumped!
I've seen a similar problem on my OS X host - I had forgotten to enable NFS!
On a Windows host, the performance impact is less pronounced...
For my very small website, I quickly reached 12,649 files... so the 1000+ file limit is easily exceeded.
So my two cents: enable NFS like this in your Vagrantfile:
config.vm.share_folder "v-root", "/vagrant", ".." , :nfs => true
And from the experts:
It’s a long known issue that VirtualBox shared folder performance degrades quickly as the number of files in the shared folder increases. As a project reaches 1000+ files, doing simple things like running unit tests or even just running an app server can be many orders of magnitude slower than on a native filesystem (e.g. from 5 seconds to over 5 minutes).
If you’re seeing this sort of performance drop-off in your shared folders, NFS shared folders can offer a solution. Vagrant will orchestrate the configuration of the NFS server on the host and will mount the folder on the guest for you.
Note: NFS is not supported on Windows hosts. According to VirtualBox, shared folders on Windows shouldn’t suffer the same performance penalties as on unix-based systems. If this is not true, feel free to use our support channels and maybe we can help you out.
Edit:
On Windows, I have found another solution: I use symlinks (ln -fs) on vendor folders within my projects that link to non-shared folders. This reduces the number of files seen by the Windows host, the antivirus, etc.
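For example, from inside the VM (a sketch; the non-shared target path is hypothetical, and VirtualBox must be configured to allow symlink creation in shared folders):
ln -fs /home/vagrant/vendor-noshare /vagrant/myproject/vendor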
Where I work, we've tried out two solutions to the problem of Vagrant + Symfony being slow. I recommend the second one (nfs and bind mounts).
The rsync approach
To begin with, we used rsync. Our approach was slightly different to that outlined in AdrienBrault's answer. Rather, we had code like the following in our Vagrantfile:
config.vm.define :myproj01 do |myproj|
  # Networking & Port Forwarding
  myproj.vm.network :private_network, type: "dhcp"

  # NFS Share
  myproj.vm.synced_folder ".", "/home/vagrant/current", type: 'rsync', rsync__exclude: [
    "/.git/",
    "/vendor/",
    "/app/cache/",
    "/app/logs/",
    "/app/uploads/",
    "/app/downloads/",
    "/app/bootstrap.php.cache",
    "/app/var",
    "/app/config/parameters.yml",
    "/composer.phar",
    "/web/bundles",
    "/web/uploads",
    "/bin/behat",
    "/bin/doctrine*",
    "/bin/phpunit",
    "/bin/webunit",
  ]

  # update VM sooner after files changed
  # see https://github.com/smerrill/vagrant-gatling-rsync#working-with-this-plugin
  config.gatling.latency = 0.5
end
As you might notice from the above, we kept files in sync using the Vagrant gatling rsync plugin.
The improved NFS approach, using bind mounts (recommended solution)
The rsync approach solves the speed issue, but we found some problems with it. In particular, the one-way nature of it (as opposed to sharing folders) was annoying when files (like composer.lock or Doctrine migrations) were generated on the VM, or when we wanted to access code in /vendor. We had to SFTP in to copy things back - and, in the case of new files, do it before they were cleared by the next run of the gatling plugin!
Therefore, we moved to a solution which uses bind mounts to handle folders like cache and logs differently. Not having those shared increased the speed dramatically.
The relevant bits of the Vagrantfile are as follows:
# Bind mounts for folders with dynamic data in them
# This must happen before provisioning, and on every subsequent reboot, hence run: "always"
config.vm.provision "shell",
  inline: "/home/vagrant/current/bin/bind-mounts",
  run: "always"
The bind-mounts script referenced above looks like this:
#!/bin/bash
mkdir -p ~vagrant/current/app/downloads/
mkdir -p ~vagrant/current/app/uploads/
mkdir -p ~vagrant/current/app/var/
mkdir -p ~vagrant/current/app/cache/
mkdir -p ~vagrant/current/app/logs/
mkdir -p ~vagrant/shared/app/downloads/
mkdir -p ~vagrant/shared/app/uploads/
mkdir -p ~vagrant/shared/app/var/
mkdir -p ~vagrant/shared/app/cache/
mkdir -p ~vagrant/shared/app/logs/
sudo mount -o bind ~vagrant/shared/app/downloads/ ~/current/app/downloads/
sudo mount -o bind ~vagrant/shared/app/uploads/ ~/current/app/uploads/
sudo mount -o bind ~vagrant/shared/app/var/ ~/current/app/var/
sudo mount -o bind ~vagrant/shared/app/cache/ ~/current/app/cache/
sudo mount -o bind ~vagrant/shared/app/logs/ ~/current/app/logs/
NFS + bind mounts is the approach I'd recommend.
At the moment, basically: do not put your website code in the /vagrant shared folder.
As it's shared between your VM and host OS, it's slower, and I didn't find any efficient solution to get good performance.
The solution we're using is to serve our development apps from the classic /var/www, and keep them in sync with our local copy using rsync.
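A minimal sketch of such a sync, assuming the VM is reachable at the hypothetical private IP 192.168.33.10 and the app is served from /var/www/app:
rsync -az --delete --exclude=app/cache --exclude=app/logs ./ vagrant@192.168.33.10:/var/www/app/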
Following the instructions in the article Speedup Symfony2 on Vagrant boxes helped me solve this issue, reducing page loads from 6-10 seconds to 1 second on my Symfony2 project. Basically, the whole fix is to set the sync type between the host and the guest (the Vagrant VM box) to NFS instead of the VirtualBox shared folder system, which is very slow.
Also, adding the code below to AppKernel.php in the Symfony2 project changes the cache and log directory to the shared memory directory (/dev/shm) on the Vagrant box instead of writing them to the NFS share, which improves page load speed even further.
<?php

class AppKernel extends Kernel
{
    // ...

    public function getCacheDir()
    {
        if (in_array($this->getEnvironment(), array('dev', 'test'))) {
            return '/dev/shm/appname/cache/' . $this->getEnvironment();
        }

        return parent::getCacheDir();
    }

    public function getLogDir()
    {
        if (in_array($this->getEnvironment(), array('dev', 'test'))) {
            return '/dev/shm/appname/logs';
        }

        return parent::getLogDir();
    }
}
I use sshfs for sharing directories between the host OS and the VM (ExpanDrive for Windows).
It is much faster than native VirtualBox directory sharing.
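A minimal sshfs invocation looks like this (a sketch; the user, host and paths are hypothetical):
mkdir -p ~/mnt/vm
sshfs vagrant@192.168.33.10:/var/www ~/mnt/vm -o reconnect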