deal with an improperly unmounted disk - mount

I didn't properly unmount a new external disk, which I had mounted like this:
jeremy#jr:~$ sudo mount -t exfat /dev/sdc1 /home/jeremy/exfat
FUSE exfat 1.2.3
and now I am stuck -
jeremy#jr:~$ df -h
df: /home/jeremy/exfat: Transport endpoint is not connected
Filesystem      Size  Used Avail Use% Mounted on
udev            3.8G     0  3.8G   0% /dev
tmpfs           765M  9.5M  756M   2% /run
/dev/sda1       213G  200G  2.1G 100% /
tmpfs           3.8G  165M  3.6G   5% /dev/shm
tmpfs           5.0M  4.0K  5.0M   1% /run/lock
tmpfs           3.8G     0  3.8G   0% /sys/fs/cgroup
cgmfs           100K     0  100K   0% /run/cgmanager/fs
tmpfs           765M  116K  765M   1% /run/user/1000
I can't go forward (mounting) or backward (unmounting) -
jeremy#jr:~$ sudo mount -t exfat /dev/sdc1 /home/jeremy/exfat
FUSE exfat 1.2.3
fuse: failed to access mountpoint /home/jeremy/exfat: Transport endpoint is not connected
the mtab file looks like
jeremy#jr:~$ more /etc/mtab
....
/dev/sdb1 /home/jeremy/exfat fuseblk rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other,blksize=4096 0 0
my primitive grasp of matters led me to try:
jeremy#jr:~$ ls /mnt
sdb1
jeremy#jr:~$ umount /mnt
umount: /mnt: not mounted
jeremy#jr:~$ umount /mnt/sdb1
umount: /mnt/sdb1: not mounted
jeremy#jr:~$ umount /home/jeremy/exfat
umount: /home/jeremy/exfat: Transport endpoint is not connected
Since the disk now appears as /dev/sdc1 after I unplugged and replugged it (in the vain hope that that would clear the /etc/mtab entry), I tried:
jeremy#jr:~$ sudo mount -t exfat /dev/sdc1 /home/jeremy/exfat
FUSE exfat 1.2.3
fuse: failed to access mountpoint /home/jeremy/exfat: Transport endpoint is not connected
yeesh - this is worse than mating dogs

sudo umount -f /home/jeremy/exfat
seems to have done the trick. A hint in that direction from the OS, instead of just 'not mounted', would have been great.
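For anyone who hits this "Transport endpoint is not connected" state earlier in the process: there is a gentler escalation than jumping straight to a forced unmount. A sketch, printed as a dry run so the order is clear (/home/jeremy/exfat is the mountpoint from the question; run the commands one at a time and stop at the first that succeeds):

```shell
# Escalation order for a wedged FUSE mountpoint, mildest option first.
mp=/home/jeremy/exfat
steps="fusermount -u $mp   # ask the FUSE layer to detach cleanly
sudo umount -l $mp   # lazy unmount: detach now, clean up when no longer busy
sudo umount -f $mp   # force, as a last resort"
printf '%s\n' "$steps"
```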

Related

Failed to write file to disk in WordPress While uploading media files

I am getting the following error while uploading media in WordPress on an AWS server:
"Failed to write file to disk"
I have changed the folder permissions using the following command:
sudo chmod 755 -R /var/www/html/wp-content/uploads
Still, it is not working.
Filesystem      Size  Used Avail Use% Mounted on
devtmpfs        7.9G     0  7.9G   0% /dev
tmpfs           7.9G     0  7.9G   0% /dev/shm
tmpfs           7.9G  417M  7.5G   6% /run
tmpfs           7.9G     0  7.9G   0% /sys/fs/cgroup
/dev/xvda1      8.0G  8.0G  3.7M 100% /
tmpfs           1.6G     0  1.6G   0% /run/user/1000
Any help will be appreciated.
Thanks
Depending on your web server/PHP configuration, you may end up with the folder owned by a different user/group than the one the PHP process runs as.
Start by giving
sudo chmod 777 -R /var/www/html/wp-content/uploads
or even
sudo chmod 777 -R /var/www/html/wp-content
If 777 works but 755 doesn't, that's a sign of a misconfigured system. That may be fine for development, but it should never be used in production.
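A less drastic fix than 777 is to hand the directory to the web server's own user. A sketch, with assumptions: the process names in the pattern and the chown-to-web-user approach are guesses about a typical LAMP/nginx stack, so check what your server actually runs as first.

```shell
# Guess which user the web server / PHP worker runs as, so the uploads
# directory can be chown'd to it instead of opened up with 777.
wwwuser=$(ps axo user,comm | awk '/apache2|httpd|nginx|php-fpm/ {print $1; exit}')
wwwuser=${wwwuser:-www-data}   # fall back to Debian/Ubuntu's default name
echo "web server user: $wwwuser"
# Then, after verifying the user is right (run manually, not blindly):
#   sudo chown -R "$wwwuser": /var/www/html/wp-content/uploads
#   sudo chmod -R 755 /var/www/html/wp-content/uploads
```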
I figured out this issue with the help of @David. As he pointed out, it was a disk space issue on the server side.
As I checked, the disk was full:
/dev/xvda1 8.0G 8.0G 3.7M 100% /
The root filesystem / was full: 8.0 GB used of 8.0 GB.
Solution:
Just upgrade your disk size (or free up space).
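Before resizing, it's worth finding what actually filled the disk; du can rank directories by size. A runnable sketch on a scratch directory (on the real server you would point it at / or /var instead, e.g. sudo du -xsk /var/* | sort -n | tail):

```shell
# Rank subdirectories by size, largest last, using du -sk (KiB summaries).
scratch=$(mktemp -d)
mkdir -p "$scratch/uploads" "$scratch/cache"
head -c 1048576 /dev/zero > "$scratch/cache/big.log"   # 1 MiB of filler
biggest=$(du -sk "$scratch"/* | sort -n | tail -n 1)   # largest entry
echo "largest: $biggest"
rm -rf "$scratch"
```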

Ubuntu16.04: Fatal error: cannot create 'R_TempDir'

At the beginning, I installed R 3.2 on Ubuntu 16.04 with the command sudo apt-get install r-base. But I wanted to update R to 3.4, so I followed the instructions at this link: https://askubuntu.com/questions/909689/upgrading-r-version-3-3-in-ubuntu-16-04.
After I updated R to 3.4 and ran the command R, I got this error:
Fatal error: cannot create 'R_TempDir'
But when I run sudo R, there is no error and I can enter the R environment.
and the df output is:
~$ df
Filesystem       1K-blocks       Used  Available Use% Mounted on
udev             132052640          0  132052640   0% /dev
tmpfs             26414104       9376   26404728   1% /run
/dev/sda1         15348720   14618356          0 100% /
tmpfs            132070516        108  132070408   1% /dev/shm
tmpfs                 5120          0       5120   0% /run/lock
tmpfs            132070516          0  132070516   0% /sys/fs/cgroup
/dev/sdb1       2064113824  520939540 1438300352  27% /data
tmpfs             26414104         12   26414092   1% /run/user/111
tmpfs             26414104          0   26414104   0% /run/user/2098
Could you tell me how to solve this issue? I just want to be able to run plain R to enter the environment.
Thanks!
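Given the df output, / is at 100%, so R presumably cannot create its per-session temp directory there when run as an ordinary user (sudo R works because root has access to the reserved blocks). A plausible workaround, sketched here, is to point TMPDIR at the roomy /data mount; /data/tmp is a hypothetical path, and the mktemp fallback exists only so the snippet runs on machines without /data:

```shell
# Move R's temp space off the full root filesystem before starting R.
export TMPDIR=/data/tmp   # /data shows ~1.4T available in the df output above
mkdir -p "$TMPDIR" 2>/dev/null || TMPDIR=$(mktemp -d)  # fallback if /data is absent
echo "TMPDIR=$TMPDIR"
# R   # R should now start without sudo, since R_TempDir can be created
```

Freeing space on / (old kernels, apt caches, logs) would fix it more permanently.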

How to cleanup docker?

We are using Docker for continuous builds. I have removed the unwanted images and containers; only 4 images of at most 5 GB each remain. But it looks like something else is eating up all the disk space. Any tips on how to clean up and reclaim space?
Filesystem Size Used Avail Use% Mounted on
udev 48G 0 48G 0% /dev
tmpfs 9.5G 26M 9.5G 1% /run
/dev/sda1 456G 428G 5.2G 99% /
tmpfs 48G 7.4M 48G 1% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 48G 0 48G 0% /sys/fs/cgroup
tmpfs 100K 0 100K 0% /run/lxcfs/controllers
tmpfs 9.5G 0 9.5G 0% /run/user/1000
none 456G 428G 5.2G 99% /var/lib/docker/aufs/mnt/4b96935f7fb6b517031df23849292a06eab92013d0610d922588688132013a5e
shm 64M 0 64M 0% /var/lib/docker/containers/c3b48e0215e05e13f79466de64cb0a2b4646cef30e020e651c59cb1950f0d70d/shm
none 456G 428G 5.2G 99% /var/lib/docker/aufs/mnt/4388442c65c13654a7d1cd51894aa5c06137166628a0a52d6854abc230417140
shm 64M 0 64M 0% /var/lib/docker/containers/027ce91cd66eca1ed134decdb8c13e6676fd34b1f6affe406513220037e63936/shm
none 456G 428G 5.2G 99% /var/lib/docker/aufs/mnt/13595cb64024d0d8f3cf7c09f90e37baccee08ea9b9b624c41d971a195d614e0
shm 64M 0 64M 0% /var/lib/docker/containers/3212761e701699313a127d50a423677c1d0ddaf9099ae37e23b25b8caaa72b37/shm
none 456G 428G 5.2G 99% /var/lib/docker/aufs/mnt/149839edbef826cdf289e66988c206dd6afebdb4257cc22e91551f65ea034f77
shm 64M 0 64M 0% /var/lib/docker/containers/9c084651b9ecf035257a34b9dd4415689e4c685e660e3013ad9673955834be
Since Docker 1.13, there are prune commands to clean up the Docker environment.
Use docker system df to get statistics about containers, images, and their disk usage, including reclaimable space.
Use docker system prune to get rid of stopped containers, dangling volumes, and dangling images.
To remove only stopped containers, use docker container prune; to remove all dangling images, use docker image prune.
A common mistake is forgetting to delete the volumes.
For CI/CD it's good practice to use docker rm -v when removing a container; with -v you are sure the auto-created volumes are deleted too.
To check, run docker volume ls: if it prints anything, some of your containers created volumes.
You can get rid of them all at once with docker volume rm $(docker volume ls -q); it will not remove any volume that is currently in use (it displays an error for those instead), but use it with caution.
You can clean up docker artifacts by using the following command (depending on what you need to clean up):
docker container prune
docker image prune
docker network prune
docker volume prune
Of these, docker image prune supports the -a option, which removes all unused images rather than only dangling ones (docker container prune, by contrast, takes no -a: it always removes exactly the stopped containers).
In case you need to clean up everything you may want to use:
docker system prune -a
This is like all of the commands mentioned above combined.
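Putting the pieces together, a periodic cleanup on a CI box might look like this sketch; the 72h window is an arbitrary example, and the --filter until flag on prune assumes a reasonably recent Docker (17.06+):

```shell
# Periodic CI cleanup: report usage, then prune old build leftovers.
cleanup() {
  docker system df                                 # what is using the space
  docker system prune -a -f --filter "until=72h"   # unused images/containers/networks older than 72h
  docker volume prune -f                           # unused volumes (not covered by system prune)
}
if command -v docker >/dev/null 2>&1; then
  cleanup
else
  echo "docker not available; nothing to clean"
fi
```

Running it from cron or as a post-build step keeps /var/lib/docker from creeping toward a full disk.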

mount -a does not mount Fstab file

I have the following in my /etc/fstab file:
proc /proc proc defaults 0 0
/dev/mmcblk0p1 /boot vfat defaults 0 2
/dev/mmcblk0p2 / ext4 defaults,noatime 0 1
sv-01:/mnt/UEF/home/user/Videos/complete /home/user/Videos nfs defaults,noauto,user 0 0
and when I issue the command sudo mount -a -v, I get the following output
mount: proc already mounted on /proc
mount: /dev/mmcblk0p1 already mounted on /boot
nothing was mounted
but when I copy-paste the source and target from that fstab line and issue the command below, the folder mounts perfectly.
sudo mount sv-01:/mnt/UEF/home/user/Videos/complete /home/user/Videos
What could possibly be causing this?
You specified the noauto option for sv-01:/mnt/UEF/home/user/Videos/complete, so mount -a skips it by design.
From the mount manual:
mount -a [-t type] [-O optlist]
(usually given in a bootscript) causes all filesystems mentioned in
fstab (of the proper type and/or having or not having the proper
options) to be mounted as indicated, except for those whose line
contains the noauto keyword. Adding the -F option will make mount
fork, so that the filesystems are mounted simultaneously.
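So if the intent is for mount -a (and boot) to pick the share up automatically, drop noauto from that line. Keeping noauto and mounting explicitly, as the question does, is the usual choice when the NFS server might not be reachable at boot. The automatic variant would look like:

```
sv-01:/mnt/UEF/home/user/Videos/complete /home/user/Videos nfs defaults,user 0 0
```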

Is it possible mount two separate tmpfs filesystems during boot?

When I execute the df command I can see that tmpfs is mounted in several places. What I need is to create a directory in /etc, say tmp, and then mount another tmpfs on /etc/tmp. Can I do it by adding another entry in /etc/fstab saying tmpfs should be mounted on /etc/tmp?
Yes, for example (in /etc/fstab):
tmpfs /etc/tmp tmpfs defaults,size=50% 0 0
Create the /etc/tmp directory first; the entry is then mounted at boot, or immediately with sudo mount /etc/tmp.
