Why do OpenStack Swift services put all their data/files in root and not my specified partition? - openstack

I deployed using kolla-ansible 5.0.0.
I used fdisk to create a new sda4 primary partition (intended for XFS) with a size of 1.7 TB, and then I created the rings following this documentation for kolla-ansible:
https://github.com/openstack/kolla-ansible/blob/master/doc/source/reference/swift-guide.rst
After I deployed, Swift seems to work fine. However, /dev/sda4 is not mounted at /srv/node/sda4, and all of Swift's data ends up on the root filesystem.
Output of fdisk -l showing the sda4 partition I want Swift to use:
[root@openstackstorage1 swift]# fdisk -l
Disk /dev/sda: 1999.8 GB, 1999844147200 bytes, 3905945600 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x000c22f6
Device Boot Start End Blocks Id System
/dev/sda1 * 2048 718847 358400 83 Linux
/dev/sda2 718848 2815999 1048576 82 Linux swap / Solaris
/dev/sda3 2816000 209663999 103424000 8e Linux LVM
/dev/sda4 209664000 3905945599 1848140800 83 Linux
WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. Use at your own discretion.
output of df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/rootvg01-lv_root 98G 3.4G 95G 4% /
devtmpfs 3.9G 0 3.9G 0% /dev
tmpfs 3.9G 0 3.9G 0% /dev/shm
tmpfs 3.9G 9.0M 3.9G 1% /run
tmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup
/dev/mapper/openstackvg01-lv_openstackstorage 2.8T 75G 2.7T 3% /var/lib/docker
/dev/sda1 347M 183M 165M 53% /boot
tmpfs 782M 0 782M 0% /run/user/0
This output of df -h /srv/node/sda4 shows that a logical volume of the root disk is mounted on /srv/node/sda4:
[root@openstackstorage1 swift]# df -h /srv/node/sda4/
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/rootvg01-lv_root 98G 3.4G 95G 4% /
But shouldn't the /dev/sda4 partition I made be mounted at /srv/node/sda4? I'm not sure what I did wrong and would appreciate some guidance.

The reason this was not working was that my /dev/sda4 had no XFS filesystem on it. I just had to run mkfs.xfs -f -i size=1024 -L sda4 /dev/sda4 on the partition I created, and then mount it myself with mount -t xfs -L sda4 /srv/node/sda4.
I then had to restart all Swift services, and now all Swift files and data are stored in /srv/node/sda4, where /dev/sda4 is mounted.
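Putting the fix together, here is a hedged sketch of the full sequence, plus an /etc/fstab entry so the mount survives a reboot (label-based mounting and the noatime option are my assumptions, not something the kolla-ansible guide mandates):

```shell
# DANGER: mkfs.xfs erases the partition -- double-check the device name first.
# The -i size=1024 inode size and the sda4 label match the commands above.
mkfs.xfs -f -i size=1024 -L sda4 /dev/sda4

# Mount it where Swift expects to find the device.
mkdir -p /srv/node/sda4
mount -t xfs -L sda4 /srv/node/sda4

# Persist the mount across reboots (label-based, so it survives device renames).
echo 'LABEL=sda4 /srv/node/sda4 xfs noatime 0 0' >> /etc/fstab
```

Restarting the Swift services afterwards, as described above, lets the object, container, and account servers pick up the newly mounted device.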

Related

Google Cloud Shell `No space left on device` even though disk not full?

I am trying to replace my local development machine with Google Cloud Shell. When running yarn on Cloud Shell, the system says I am out of space, but df tells me there is plenty of space remaining (only 68% used on /home):
user@cloudshell:~/***$ df
Filesystem 1K-blocks Used Available Use% Mounted on
overlay 62742040 43483680 19241976 70% /
tmpfs 65536 0 65536 0% /dev
tmpfs 8200748 0 8200748 0% /sys/fs/cgroup
/dev/disk/by-id/google-home-part1 5028480 3229288 1520716 68% /home
/dev/sda1 62742040 43483680 19241976 70% /root
/dev/root 2006736 1012260 994476 51% /lib/modules
shm 65536 0 65536 0% /dev/shm
tmpfs 8200748 904 8199844 1% /google/host/var/run
user@cloudshell:~/***$ pwd
/home/user/***
user@cloudshell:~/***$ mkdir test
mkdir: cannot create directory ‘test’: No space left on device
Am I missing something? Why does the system say out of space when there is 32% left?
According to the Cloud Shell documentation:
Cloud Shell provisions 5 GB of free persistent disk storage mounted as
your $HOME directory on the virtual machine instance.
...
If you encounter a no space left on device error, you'll need to remove files from your home directory using the Cloud Shell terminal to free up space.
So it is possible that the message pops up once your usage crosses a certain threshold of the 5 GB quota.
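To act on that advice, a quick way to see what is eating the home-directory quota is to sort everything under $HOME by size (a generic sketch, not Cloud-Shell-specific tooling):

```shell
#!/bin/sh
# Show the N largest entries (files and directories) under a path.
# du -a lists files as well as directories; sort -rh sorts the
# human-readable sizes largest-first (GNU coreutils).
top_usage() {
    du -ah "$1" 2>/dev/null | sort -rh | head -"${2:-20}"
}

# The 20 biggest space consumers in the home directory:
top_usage "$HOME" 20
```

Deleting or archiving the entries at the top of the list is usually enough to get back under the quota.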

RStudio Amazon EC2 Instance running out of disk space

I have set up an AWS EC2 instance running RStudio. I have been able to log into RStudio at 000.000.000.000:8787. The last time I used RStudio I was getting errors about writing to disk, along with some memory errors, so I decided to stop the EC2 instance and then start it back up again. All of a sudden I am unable to log back into RStudio using the IP address assigned on port 8787.
It appears I ran out of disk space on the EC2 instance, yet all I have on the instance is a few R scripts and some small datasets, so I believe R may have used up the space somehow. I ran df -Th in the terminal and the output is:
Filesystem Type Size Used Avail Use% Mounted on
udev devtmpfs 7.9G 0 7.9G 0% /dev
tmpfs tmpfs 1.6G 29M 1.6G 2% /run
/dev/xvda1 ext4 49G 49G 0 100% /
tmpfs tmpfs 7.9G 0 7.9G 0% /dev/shm
tmpfs tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs tmpfs 7.9G 0 7.9G 0% /sys/fs/cgroup
/dev/loop0 squashfs 90M 90M 0 100% /snap/core/8268
/dev/loop1 squashfs 18M 18M 0 100% /snap/amazon-ssm-agent/1480
/dev/loop2 squashfs 90M 90M 0 100% /snap/core/8213
/dev/loop3 squashfs 55M 55M 0 100% /snap/core18/1650
/dev/loop4 squashfs 55M 55M 0 100% /snap/core18/1288
/dev/loop5 squashfs 768K 768K 0 100% /snap/gifski/1
tmpfs tmpfs 1.6G 0 1.6G 0% /run/user/1000
So it looks like /dev/xvda1 ext4 49G 49G 0 100% / is using all the space available.
I also ran sudo du -aBM -d 1 . | sort -nr | head -20, which gives:
277M .
102M ./rstudio-1.2.5019-amd64.deb.1
102M ./rstudio-1.2.5019-amd64.deb
37M ./rstudio-server-1.2.1335-amd64.deb
34M ./ROBUSTNESS_Add_FAKE_dates_model_R_code_function_to_make_09_01_2020.out
4M ./R_code_function_to_make_average_shap_plots_07_01_2020.out
1M ./permissions_on_aws.txt
1M ./iris_test.pdf
1M ./R_code_function_to_make_average_shap_plots_07_01_2020.err
1M ./R_code_function_to_make_average_shap_plots.out
1M ./ROBUSTNESS_Add_FAKE_dates_model_R_code_function_to_make_09_01_2020.err
1M ./.ssh
1M ./.profile
1M ./.gnupg
1M ./.cache
1M ./.bashrc
1M ./.bash_logout
1M ./.bash_history
0M ./R_code_function_to_make_average_shap_plots.err
0M ./.sudo_as_admin_successful
I additionally ran find / -size +10M, which gives:
/proc/kcore
find: ‘/proc/3338/task/3338/fd/6’: No such file or directory
find: ‘/proc/3338/task/3338/fdinfo/6’: No such file or directory
find: ‘/proc/3338/fd/5’: No such file or directory
find: ‘/proc/3338/fdinfo/5’: No such file or directory
/home/USER/myRfiles/myEnvironment.RData
Okay, so my myEnvironment.RData is taking up a lot of space. I checked how much by navigating to it with FileZilla, and I see that it is 9.8 GB in size. That's quite a lot, but I have 50 GB of space available, so where has the other 40 GB gone?
I have also run sudo apt-get autoremove, which only freed 4 MB of space. I also removed the .rstudio files in case they were taking up too much space.
EDIT:
I run du -cha --max-depth=1 / | grep -E "M|G" which gives this output:
100M /boot
15M /sbin
5.5G /usr
du: cannot access '/proc/3811/task/3811/fd/4': No such file or directory
du: cannot access '/proc/3811/task/3811/fdinfo/4': No such file or directory
du: cannot access '/proc/3811/fd/3': No such file or directory
du: cannot access '/proc/3811/fdinfo/3': No such file or directory
919M /snap
1.2G /var
224M /lib
5.8M /lib32
7.2M /etc
15M /bin
11G /home
19G /
19G total
I have access to 50 GB of space on the AWS EC2 instance, and here it's telling me I am only using 19 GB...
Additional step (mostly for my thought process):
I then ran du -cha --max-depth=1 /home | grep -E "M|G", since /home takes up the most space, with the following output:
277M /home/ubuntu
11G /home/MYUSER
11G /home
11G total
I go a little deeper: du -cha --max-depth=1 /home/MYUSER | grep -E "M|G"
Which gives:
7.3M /home/MYUSER/MlBayesOpt
728M /home/MYUSER/chapter_3
188M /home/MYUSER/chapter_1
25M /home/MYUSER/gganim
58M /home/MYUSER/.cargo
2.5M /home/MYUSER/financial_markets_R
165M /home/MYUSER/.rstudio
9.6G /home/MYUSER/pollution
11G /home/MYUSER
11G total
So I see 11 GB being used, with 9.6 GB in the pollution folder. I have 50 GB available, so I don't mind spending 10 GB on that folder.
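One hedged diagnostic for this kind of df-versus-du gap (not confirmed to be the cause here): space held by deleted-but-still-open files is counted by df but invisible to du, and `sudo lsof +L1` will list such files. The comparison itself can be scripted:

```shell
#!/bin/sh
# Compare what df reports as used on / with what du can actually see.
# du -x stays on the root filesystem; errors for unreadable paths are
# discarded, so run as root for a complete picture. A small difference
# is normal (metadata, reserved blocks); tens of GB is suspicious.
df_used=$(df -k --output=used / | tail -1 | tr -d ' ')
du_seen=$(du -sxk / 2>/dev/null | cut -f1)
echo "df used:     ${df_used} KB"
echo "du sees:     ${du_seen} KB"
echo "unaccounted: $((df_used - du_seen)) KB"
```

If the unaccounted figure is large, restarting the process that holds the deleted files (or rebooting) releases the space.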

Resizing /dev/sda1: Google Cloud

I am a total noob on this one. I have a Google Cloud SUSE instance running a VM image. I am trying to install a package, but I think it's running out of space.
What I want to do is assign some of the 120G of space to my /dev/sda1 partition. I have read Google's guide, but I am not sure which section I should be following.
>df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 13G 0 13G 0% /dev
tmpfs 13G 0 13G 0% /dev/shm
tmpfs 13G 9.7M 13G 1% /run
tmpfs 13G 0 13G 0% /sys/fs/cgroup
/dev/sda1 36G 34G 0 100% /
tmpfs 2.6G 0 2.6G 0% /run/user/490
tmpfs 2.6G 0 2.6G 0% /run/user/1004
tmpfs 2.6G 0 2.6G 0% /run/user/1006
>sudo lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 120G 0 disk
└─sda1 8:1 0 36G 0 part /
Increase the boot disk size in a GCP VM without a reboot.
First check disk usage with df -h; if usage of /dev/sda1 is above 80%, it's getting dangerous.
To update the disk size on the fly:
Increase the disk size from the console first
SSH into the VM: sudo growpart /dev/sda 1
Resize your file system: sudo resize2fs /dev/sda1
Verify: df -h
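The 80% rule of thumb above can be checked mechanically before resizing (a small helper of my own, not part of any GCP tooling):

```shell
#!/bin/sh
# Print a filesystem's usage percentage and warn past a threshold.
usage_pct() {
    # --output=pcent is GNU df; strip everything but the digits.
    df --output=pcent "$1" | tail -1 | tr -dc '0-9'
}

pct=$(usage_pct /)
if [ "$pct" -gt 80 ]; then
    echo "WARNING: / is ${pct}% full -- consider growing the disk"
else
    echo "/ is ${pct}% full -- OK"
fi
```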
Increase the size of existing persistent disk:
Login to Google Cloud Platform
Goto Compute Engine -> Disks
Locate your VM's boot disk(default disk), open it
Click Edit
Enter a new size; note that you won't be able to decrease the size later.
Reboot your VM, and you should see the new disk size.
This is just an addition to Prateek's answer. After changing the size, you need to reboot (Linux only):
sudo reboot
Give it some time, and close your console if you get no response. Then run df again to see the new size.
Super late to the party, but sudo growpart /dev/sda 1 worked for me.

Persistent disk size is not changing - Google Compute Engine

I changed the size of my persistent disk from 10GB to 20GB (screenshot).
Now when I run the df command on my server, I can still see only 10GB of space.
user@edudrona-prod-vm:~$ df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sda1 10186040 6755924 2889652 71% /
udev 10240 0 10240 0% /dev
tmpfs 1535964 8528 1527436 1% /run
tmpfs 3839908 0 3839908 0% /dev/shm
tmpfs 5120 0 5120 0% /run/lock
tmpfs 3839908 0 3839908 0% /sys/fs/cgroup
tmpfs 767984 0 767984 0% /run/user/1003
I am just running a simple WordPress site using Bitnami. Apart from changing 10GB to 20GB, I did not make any other changes to increase disk space. Do I have to adjust settings anywhere else as well?
Update:
I got the following output from the resize2fs command:
user@edudrona-prod-vm:~$ sudo resize2fs /dev/sda1
resize2fs 1.42.12 (29-Aug-2014)
The filesystem is already 2620416 (4k) blocks long. Nothing to do!
Take a look at the documentation here.
resize2fs alone is not sufficient, because what it does is resize the filesystem to fill the extents of the partition that carries it, and your /dev/sda1 is still its old size. You need to resize the partition first. On some operating systems the root partition is grown automatically on boot, so try rebooting to see if that alone does the trick. Otherwise, you'll need to follow the manual steps. Be careful, and make sure to back things up first.
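The "partition still at its old size" condition can be detected by comparing the disk and partition sizes, e.g. from lsblk -bno SIZE /dev/sda /dev/sda1. A hypothetical helper (the byte figures in the demo are illustrative, matching a 20 GB disk with a 10 GB partition):

```shell
#!/bin/sh
# Decide whether the partition still needs growing before resize2fs
# can do anything. Both sizes are in bytes.
needs_growpart() {
    disk_bytes=$1
    part_bytes=$2
    # Allow ~2 MiB of slack for the partition table and alignment.
    [ $((disk_bytes - part_bytes)) -gt $((2 * 1024 * 1024)) ]
}

# Demo: 20 GB disk, 10 GB partition.
if needs_growpart 21474836480 10737418240; then
    echo "partition is smaller than the disk: run growpart, then resize2fs"
fi
```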

Openstack Instance does not use the entire hard disk

I created a new VM instance using the "Ubuntu Server 10.04 LTS (Lucid Lynx) - 32 bits" image and the m1.small flavour, which has a 20 GB disk (OpenStack Icehouse). When I log in to the VM and run df -h, I find that the VM does not use the entire assigned disk. The command's results are:
Filesystem Size Used Avail Use% Mounted on
/dev/vda1 1.4G 595M 721M 46% /
none 1005M 144K 1005M 1% /dev
none 1007M 0 1007M 0% /dev/shm
none 1007M 36K 1007M 1% /var/run
none 1007M 0 1007M 0% /var/lock
none 1007M 0 1007M 0% /lib/init/rw
The fdisk -l output shows the disk size is 20 GB:
Disk /dev/vda: 21.5 GB, 21474836480 bytes
4 heads, 32 sectors/track, 327680 cylinders
Units = cylinders of 128 * 512 = 65536 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000cb9da
Device Boot Start End Blocks Id System
/dev/vda1 * 17 32768 2096128 83 Linux
I need the VM to take the full space assigned to it. Any idea how I could fix this? I want the solution to apply to each VM I create, so I do not want to manually update each VM after instantiation. I also must use the 10.04 image (I cannot upgrade to 14.04).
The problem here is the image. I grabbed that one and ran it up; it's pretty simple to run
sudo resize2fs /dev/vda1
which will resize the filesystem to the size of the partition, which seems to be 2GB. Beyond that, you have to increase the partition size. For that, I think you're probably best off using virt-resize; there are some good howtos out there, e.g. on askubuntu. In essence:
SSH into your openstack controller node
source keystonerc_admin (or whatever yours may be called)
nova list --all-tenants | grep <instance_name> or just grab the server guid from horizon
nova show <server_guid> and note which nova host your machine is running on. Also note the instance name (e.g. instance-00000adb)
SSH into that nova node
virsh dumpxml instance-00000adb and look for the image file. On mine, this is /var/lib/nova/instances/<server_guid>/disk but that may not always be the case?
yum install libguestfs-tools
truncate -r /var/lib/nova/instances/d887249a-0d95-473e-b4f2-41f71df4dbb5/disk /var/lib/nova/instances/d887249a-0d95-473e-b4f2-41f71df4dbb5/disk.new
truncate -s +2G /var/lib/nova/instances/d887249a-0d95-473e-b4f2-41f71df4dbb5/disk.new
virt-resize --expand /dev/sda1 /var/lib/nova/instances/d887249a-0d95-473e-b4f2-41f71df4dbb5/disk /var/lib/nova/instances/d887249a-0d95-473e-b4f2-41f71df4dbb5/disk.new
mv disk disk.old ; mv disk.new disk
NB: mine didn't quite work when I booted it up again; I haven't had time to investigate yet, but it can't be far off, and hopefully this helps.
Once you've managed to boot that up again, then you can shut it down and create a snapshot from horizon. You can then use that snapshot just like any other image, and launch all subsequent VMs directly from there.
HTH.
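The truncate/virt-resize steps above can be consolidated into one sketch (the instance path follows the answer's example and the +2G figure is that answer's choice; both will differ on your setup). Run it on the nova compute host after stopping the instance:

```shell
#!/bin/sh
# virt-resize comes from libguestfs-tools. Path is the answer's example.
INST=/var/lib/nova/instances/d887249a-0d95-473e-b4f2-41f71df4dbb5

truncate -r "$INST/disk" "$INST/disk.new"   # create disk.new, same size as disk
truncate -s +2G "$INST/disk.new"            # then grow it by 2 GB
virt-resize --expand /dev/sda1 "$INST/disk" "$INST/disk.new"

mv "$INST/disk" "$INST/disk.old"            # keep the original as a backup
mv "$INST/disk.new" "$INST/disk"
```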