I have the following in my /etc/fstab file:
proc /proc proc defaults 0 0
/dev/mmcblk0p1 /boot vfat defaults 0 2
/dev/mmcblk0p2 / ext4 defaults,noatime 0 1
sv-01:/mnt/UEF/home/user/Videos/complete /home/user/Videos nfs defaults,noauto,user 0 0
and when I issue the command sudo mount -a -v, I get the following output:
mount: proc already mounted on /proc
mount: /dev/mmcblk0p1 already mounted on /boot
nothing was mounted
but when I copy and paste from the above segment and issue the command below, the folder mounts perfectly.
sudo mount sv-01:/mnt/UEF/home/user/Videos/complete /home/user/Videos
What could possibly be causing this?
You specified the "noauto" option for sv-01:/mnt/UEF/home/user/Videos/complete.
From mount manual:
mount -a [-t type] [-O optlist]
(usually given in a bootscript) causes all filesystems mentioned in
fstab (of the proper type and/or having or not having the proper
options) to be mounted as indicated, except for those whose line
contains the noauto keyword. Adding the -F option will make mount
fork, so that the filesystems are mounted simultaneously.
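Because of the noauto keyword, mount -a skips that NFS entry. You can still mount it by naming just the mount point (or the device); mount looks up the rest from fstab:
sudo mount /home/user/Videos
And since the entry also carries the user option, an ordinary user can mount it without sudo:
mount /home/user/Videos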
I also did df -h to see whether it is properly mounted, and it shows that it is:
gs://bucketname/* 1.0P 0 1.0P 0% /home/admin/(directory in vm instance)
But inside the directory in the VM instance, when I type ls -l, I get total 0. What's the reason behind it?
You cannot just make up mount arguments; see gcsfuse's mounting.md for reference.
The command for static mounting is:
mkdir /path/to/mount/point
gcsfuse my-bucket /path/to/mount/point
or, for dynamic mounting, without a bucket name:
mkdir /path/to/mount/point
gcsfuse /path/to/mount/point
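With dynamic mounting, no bucket is named at mount time; instead, you reach each bucket as a subdirectory of the mount point. For example (my-bucket is a placeholder name here):
ls /path/to/mount/point/my-bucket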
To unmount:
fusermount -u /path/to/mount/point
I am writing an initramfs, executed in busybox, in which I mount a partition using these commands:
/bin/busybox mount -n -t proc proc /proc
mount -n -t devtmpfs devtmpfs /dev
mount -n -t sysfs sysfs /sys
mount -n -t tmpfs inittemp /mnt
mkdir /mnt/saved
mount -n -t "${rootfstype}" -o "${rootflags}" ${device} /mnt/saved
But when the system starts up, I have this error:
mount: mounting /dev/mmcblk0p2 on /mnt/saved failed: No such file or directory
I know that when the device is not found, there is a message like Device does not exist, so I think the problem comes from the directory /mnt/saved not being correctly created yet.
I tried adding an ls -l /mnt after the mkdir to check that the directory was correctly created, but most of the time, if I do so, the error disappears. So I thought it might be a synchronization problem (of the tmpfs, weird!). I tried some other things, like creating a dummy file in the directory to force a kind of synchronization. This works, but it is a dirty workaround, and I want to find the real cause of the problem to build a clean solution.
By the time I finished writing my question, I had found the solution myself… I am posting it anyway, just in case somebody is stuck like me.
Actually, busybox's mount command does not show a message about the device if it cannot find it; it always shows No such file or directory.
My problem actually came from the root device, which was not ready yet and therefore not in the /dev directory. To make it work correctly, I simply added this line before the mount:
while ${rootwait} && ! [ -b "${device}" ]; do sleep 1; done
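If the device might never appear, you may prefer to cap the wait rather than loop forever. A minimal sketch, assuming ${device} holds the block device path (the 10-second limit is my own choice, not from the original):
retries=0
while ! [ -b "${device}" ] && [ "${retries}" -lt 10 ]; do
    sleep 1                     # give the kernel time to create the device node
    retries=$((retries + 1))
done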
I am trying to mount a file that will act as a read/write HFS+ filesystem. I am using an Arch Linux-based distro, so I installed hfsprogs and hfsutils. On Debian-based distros, hfsprogs should be enough.
I created an 8G file like this:
dd if=/dev/zero of=test.img bs=1024 count=0 seek=$[1000*8000]
Then I did the formatting:
mkfs.hfsplus -v TestImg test.img
After that when I try to mount the file I get:
mkdir /tmp/sun
sudo mount -t hfsplus -o loop,rw,offset=0 test.img /tmp/sun
mount: /tmp/sun: mount failed: Operation not permitted
Parted shows that the offset is ok:
sudo parted -m test.img unit B print
1:0B:8191999999B:8192000000B:hfs+::;
I also tried using fdisk with the file to create a Sun partition table, but that did not help either. Can you help me with creating an HFS+ read/write filesystem as a file?
I was using the loop device incorrectly.
The correct steps are:
Create file
dd if=/dev/zero of=test.img bs=100MB count=10 seek=$[10*8]
Create a block device mapped to that file:
losetup -fP test.img
At this point, the block device /dev/loop0 has been created.
Create filesystem:
mkfs.hfsplus test.img
Mount it to your folder (create the mount point first):
mkdir -p /tmp/loop_test
mount -o rw /dev/loop0 /tmp/loop_test
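When you are done, unmount the filesystem and detach the loop device:
umount /tmp/loop_test
losetup -d /dev/loop0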
I have a single VMware disk image file with a .vmdk extension.
I am trying to mount it and explore all of its partitions (including hidden ones).
I've tried to follow several guides, such as: http://forums.opensuse.org/showthread.php/469942-mounting-virtual-box-machine-images-host
I'm able to mount the image using vdfuse
vdfuse -w -f windows.vmdk /mnt/
After this I can see one partition and the entire disk exposed:
# ll /mnt/
total 41942016
-r-------- 1 te users 21474836480 Feb 28 14:16 EntireDisk
-r-------- 1 te users 1569718272 Feb 28 14:16 Partition1
Continuing with the guide I try to mount either EntireDisk or Partition1 using
mount -o loop,ro /mnt/Partition1 mnt2/
But that gives me the error 'mount: you must specify a filesystem type'
In trying to find the correct type I tried
dd if=/mnt/EntireDisk | file -
which outputs a ton of information; the notable part is:
/dev/stdin: x86 boot sector; partition 1: ....... FATs ....
So I tried to mount it as vfat, but that gave me
mount: wrong fs type, bad option, bad superblock ...etc
What am I doing wrong?
For newer Linux systems, you can use guestmount to mount the third partition within a VMDK image:
guestmount -a xyz.vmdk -m /dev/sda3 --ro /mnt/vmdk
Alternatively, to autodetect and mount an image (less reliable), you can try:
guestmount -a xyz.vmdk -i --ro /mnt/vmdk
Do note that the flag --ro simply mounts the image as read-only; to mount the image as read-write, just replace it with the flag --rw.
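To unmount the image afterwards, use guestunmount:
guestunmount /mnt/vmdk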
Installation
guestmount is contained in the following packages, per distro:
Ubuntu: libguestfs-tools
OpenSuse: guestfs-tools
CentOS / Fedora: libguestfs-tools-c
Troubleshooting
error: could not create appliance through libvirt
$ guestmount -a file.vmdk -i --ro /mnt/guest
libguestfs: error: could not create appliance through libvirt.
Try running qemu directly without libvirt using this environment variable:
export LIBGUESTFS_BACKEND=direct
Original error from libvirt: Cannot access backing file '/path/to/file.vmdk' of storage file '/tmp/libguestfssF6WKX/overlay1.qcow2' (as uid:107, gid:107): Permission denied [code=38 int1=13]
Solution: use LIBGUESTFS_BACKEND=direct, as suggested:
LIBGUESTFS_BACKEND=direct guestmount -a file.vmdk -i --ro /mnt/guest
fusermount: user has no write access to mountpoint
LIBGUESTFS_BACKEND=direct guestmount -a file.vmdk -i --ro /mnt/guest/
fusermount: user has no write access to mountpoint /mnt/guest
libguestfs: error: fuse_mount failed: /mnt/guest/, see error messages above
Solution: use sudo, or change file permissions on the mountpoint
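For example, to give your user ownership of the mount point (path taken from the error above):
sudo chown "$USER" /mnt/guest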
You can also use qemu:
For .vdi disks
sudo modprobe nbd
sudo qemu-nbd -c /dev/nbd1 ./linux_box/VM/image.vdi
If these tools are not installed, you can install them (with this command on Ubuntu):
sudo apt install qemu-utils
and then mount it with:
mount /dev/nbd1p1 /mnt
For .vmdk disks
sudo modprobe nbd
sudo qemu-nbd -r -c /dev/nbd1 ./linux_box/VM/image.vmdk
Notice that I use the -r option; that's because VMDK version 3 images must be read-only for qemu to be able to mount them.
and then I mount it with
mount /dev/nbd1p1 /mnt
I use nbd1 because nbd0 sometimes gives: 'mount: special device /dev/nbd0p1 does not exist'
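When you are finished, unmount the partition and disconnect the network block device:
umount /mnt
sudo qemu-nbd -d /dev/nbd1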
For .ova disks
tar -tf image.ova
tar -xvf image.ova
The first command lists the archive's contents; the second extracts the .vmdk disk, which you can then mount as described above.
Install affuse (on Debian-based systems it is in the afflib-tools package), then mount using it.
affuse /path/file.vmdk /mnt/vmdk
The raw disk image is now found under /mnt/vmdk.
Check its sector size:
fdisk -l /mnt/vmdk/file.vmdk.raw
# example
Disk file.vmdk.raw: 20 GiB, 21474836480 bytes, 41943040 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x000da525
Device Boot Start End Sectors Size Id Type
/mnt/vmdk/file.vmdk.raw1 * 2048 41943039 41940992 20G 83 Linux
Multiply the start sector by the sector size. In the example, that is 2048*512:
echo '2048*512' | bc
1048576
Mount the raw file using that offset:
mount -o ro,loop,offset=1048576 /mnt/vmdk/file.vmdk.raw /mnt/vmdisk
The disk should now be mounted and readable on /mnt/vmdisk.
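To clean up, unmount the loop mount first, then the FUSE mount:
umount /mnt/vmdisk
fusermount -u /mnt/vmdk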
Here is an answer from commandlinefu.com that worked for me:
kpartx -av <image-flat.vmdk>; mount /dev/mapper/loop0p1 /mnt/vmdk
You can also activate LVM volumes in the image by running
vgchange -a y
and then you can mount the LV inside the image.
To unmount the image, umount the partition/LV, deactivate the VG for the image
vgchange -a n <volume_group>
then run
kpartx -dv <image-flat.vmdk>
to remove the partition mappings.
You can take a look at this article for a download link for the VMware Virtual Disk Development Kit (VDDK). Once it is downloaded and installed:
vmware-mount -p path_to_vmdk will show the partitions inside the VMDK file. For example:
Nr Start Size Type Id System
-- ---------- ---------- ---- -- ------------------------
1 2048 461371392 BIOS 83 Linux
Then just do:
sudo vmware-mount path_to_vmdk 1 /mnt/mount_point
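To unmount it later, vmware-mount provides a -d flag:
sudo vmware-mount -d /mnt/mount_point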
I tried guestmount, but it is very, very slow. Underneath it creates a virtual machine, uses KVM and so on. Crazy stuff, slow as hell.
Have you got the software package for NTFS?
Try
apt-get install ntfs-3g
on Debian-based systems.
I have 2 drives connected to the server; both are 500GB.
drive 1 = /dev/sdc
drive 2 = /dev/sdb
I've partitioned the first drive, /dev/sdc, into 2 partitions, giving /dev/sdc1 & /dev/sdc2.
What I want is to mount the 2 drives under one directory, /home.
So I ran these mount commands:
mount -l /dev/sdb /mnt/sdb
mount -l /dev/sdc1 /mnt/sdc1
mount -l /dev/sdc2 /backup
then mhddfs /mnt/sdb,/mnt/sdc1 /home -o allow_other
So 2 partitions are mounted to /home
And added this to /etc/fstab:
/dev/sdb /mnt/sdb ext3 usrjquota=quota.user,jqfmt=vfsv0 1 1
/dev/sdc1 /mnt/sdc1 ext3 usrjquota=quota.user,jqfmt=vfsv0 1 1
/dev/sdc2 /backup ext4 usrjquota=quota.user,jqfmt=vfsv0 1 1
mhddfs#/mnt/sdb,/mnt/sdc1 /home fuse logfile=/var/log/mhddfs.log defaults,allow_other 0 0
My problem
First of all, when the server reboots, the mhddfs filesystem is not automounted, so I need to run the command manually over ssh: "mhddfs /mnt/sdb,/mnt/sdc1 /home -o allow_other"
And sometimes, when huge files are uploaded to the /home directory, it gets disconnected with the error message "`/home': Transport endpoint is not connected", so I have to umount and remount /home to resolve the problem.
Can you help me understand what's wrong with my steps and what to do to resolve both problems?
I had the same issue. I wanted to extend the /home folder on my server by adding a second drive, and chose to use mhddfs. I already had a whole hard drive entirely dedicated to /home, with the system hosted on a separate drive; this made things easier.
Here is how I proceeded once my new hard disk was set up and formatted:
I created two new mount points: /mnt/home1 and /mnt/home2
I edited the /etc/fstab file to:
change my older hard disk's mount point from /home to /mnt/home1
set up my new hard disk's mount point on /mnt/home2
tell mhddfs to merge /mnt/home1 and /mnt/home2 into /home
Here is the result in my /etc/fstab:
UUID=f29aa9e5-5988-4603-9ecd-5c24dd804d94 /mnt/home1 ext4 defaults 0 2
UUID=e535c3fc-0842-4557-be85-55277912a058 /mnt/home2 ext4 defaults 0 2
mhddfs#/mnt/home1,/mnt/home2 /home fuse defaults,allow_other 0 0
Of course, you have to follow all these steps without restarting the machine (otherwise you will no longer have a /home directory).
It works pretty well. My older hard drive is now almost 100% full and the system has begun writing to the newer one, but practically speaking you don't even notice it. Everything you see is a "normal" /home folder, and mhddfs coordinates it all in a totally transparent way.
I tried forcing an fsck disk check on startup to make sure everything was OK; I set the last parameter of the mhddfs line in /etc/fstab to "0" to make sure fsck does not cause problems. Everything runs well; it seems pretty stable.
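Once mounted, the pooled filesystem reports the combined capacity of both drives; a quick way to verify the setup (output will vary):
df -h /mnt/home1 /mnt/home2 /home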