openstack image-delete cannot delete image - openstack

Our OpenStack is Kilo and manages VMware 6.
We back up using the following command:
nova backup 762ecf86-f5eb-4c7c-a512-3cf15e23dd1a \
backup-snapshot-a-daily-$(date "+%Y%m%d-%H:%M") daily 1
A new image appears when we run glance image-list.
To delete it, we run glance image-delete, which completes without errors.
Running glance image-list again shows that the image has disappeared.
Yet in the vSphere client, the image's directory is still present on the VMFS datastore.
After one minute, we run glance image-list again and the image is back in the list.
So how can we delete this image?
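For reference, a minimal sketch of the whole sequence (IMAGE_ID is a placeholder for the UUID that glance image-list reports for the backup image):
# create the backup image
nova backup 762ecf86-f5eb-4c7c-a512-3cf15e23dd1a \
backup-snapshot-a-daily-$(date "+%Y%m%d-%H:%M") daily 1
# list images, note the backup's UUID, then delete it
glance image-list
glance image-delete IMAGE_ID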

Related

Artifactory backup directory is not recognized

I wanted to change the backup to a different disk. I mounted the disk at /mnt2 on CentOS, and when I navigate to Admin > Backups > Backup Daily > Edit backup-daily Backup, I see an option called Server Path For Backup. I tried the following two things.
I entered the mount directory /mnt2 and hit Run Now. The background job fails with the following error in the logs:
An error occurred while performing a backup: Backup directory provided
in configuration: '/mnt2' cannot be created or is not a directory.
I also tried creating a tmp2 directory on the local drive, entered /tmp2, and hit Run Now. The background job fails with the same error as above.
Note 1:
I restarted the Docker container just to check whether it wasn't picking up file system changes in real time. That did not work.
Note 2:
There is a browse button next to Server Path For Backup, and I don't see the /mnt2 or /tmp2 directories I created. I couldn't find anything useful in the documentation either.
How do I change the backup directory for Artifactory?
The setup is Artifactory running in Docker.
For an Artifactory Docker instance, a volume needs to be specified so that it maps to a local folder, say /opt/artifactory/.
In my case, /var/opt/jfrog/artifactory (docker) is mapped to /opt/artifactory (local).
I am supposed to create a folder here -- /opt/artifactory/backup_mount -- and give read and write access to the 1039 user and group; see the sketch below. It then shows up in the Artifactory UI as /var/opt/jfrog/artifactory/backup_mount.
Note:
If you create a directory, it shows up without any Docker restart.
If you create a mount, restart Docker so the mount is recognized.
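A minimal sketch of the host-side commands, assuming the volume mapping described above (the paths and the 1039 UID/GID are taken from this setup; adjust for yours):
# on the Docker host
mkdir -p /opt/artifactory/backup_mount
chown 1039:1039 /opt/artifactory/backup_mount   # the UID/GID Artifactory runs as in the container
chmod 770 /opt/artifactory/backup_mount         # read/write for user and group
# in the Artifactory UI, enter /var/opt/jfrog/artifactory/backup_mount as the Server Path For Backup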

How to make devstack persist changes after a system reboot in Ubuntu 16.10?

I created a Debian image of QCOW2 type and launched an instance using that image.
The instance was running successfully and the image creation was successful too.
I want this and all other changes I make in devstack to persist even after a host reboot.
I tried running:
screen -c stack-screenrc
but running that command does not give the expected result (the output was posted as a screenshot, omitted here).
I referred to the following link:
https://ask.openstack.org/en/question/5423/rebooting-with-devstack/
but the rejoin-stack.sh script doesn't exist in my devstack.
Any alternative suggestions?
You only need to run the following:
script /dev/null
screen -c stack-screenrc
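For context: script /dev/null starts a shell that owns its own pseudo-terminal, which works around screen's "Cannot open your terminal" error after switching users (e.g. su to the stack user); screen -c stack-screenrc then recreates the DevStack screen session from the screenrc file.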

Glance image registration from remote server

I am trying to register an image from a server with a remote OpenStack Glance installation. Basically, I have processed the image locally through a shell script and now want to import it into a Glance service running on a different system.
Thanks in advance.
Looks like you just need to upload the image to Glance:
Make sure you have the Glance client installed:
pip install python-glanceclient
source openrc  # an openrc file with creds for that remote OpenStack installation; see [1] for reference
glance image-create --container-format CONTAINER_FORMAT --disk-format DISK_FORMAT \
--name IMAGE_NAME --file a-path-to-local-image-file --progress
See glance help image-create for a description of the parameters.
That's it. The image will be uploaded to the remote Glance installation over HTTP.
You can list the images there via glance image-list.
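As a concrete example, an invocation for a QCOW2 image might look like this (the image name and file path are hypothetical):
source openrc
glance image-create --container-format bare --disk-format qcow2 \
--name my-debian-image --file ./my-debian.qcow2 --progress
glance image-list   # verify the image now shows up on the remote Glance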
[1] http://docs.openstack.org/icehouse/install-guide/install/apt/content/ch_clients_openrc_files.html

sudoers - Google Compute Engine - no access to root

I have a Google Compute Engine VM instance with an Asterisk server running on it. I get this message when I try to run sudo:
sudo: parse error in /etc/sudoers near line 21
sudo: no valid sudoers sources found, quitting
sudo: unable to initialize policy plugin
Is there a password for root so I can try to change it there? Any suggestions on this?
It looks like you have manually edited the /etc/sudoers file, so while you would normally have sudo access, the parse error means you won't be able to use sudo directly.
Here's how to fix this situation.
1. Save the current boot disk
go to the instance view in the Developers Console
find your VM instance and click on its name; you should now be looking at a URL such as
https://console.cloud.google.com/project/[PROJECT]/compute/instancesDetail/zones/[ZONE]/instances/[VM-NAME]
stop the instance
detach the boot disk from the instance
2. Fix the /etc/sudoers on the boot disk
create a new VM instance with its own boot disk; you should have sudo access here
attach the disk saved above as a separate persistent disk
mount the disk you just attached
fix the /etc/sudoers file on the disk
unmount the second disk
detach the second disk from the VM
delete the new VM instance (let it delete its boot disk, you won't need it)
3. Restore the original VM instance
re-attach the boot disk to the original VM
restart the original VM with its original boot disk, with fixed config
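If you prefer the gcloud CLI to the console, the stop/detach/re-attach steps above might look roughly like this (VM-NAME, BOOT-DISK, and ZONE are placeholders):
gcloud compute instances stop VM-NAME --zone ZONE
gcloud compute instances detach-disk VM-NAME --disk BOOT-DISK --zone ZONE
# ...fix /etc/sudoers on the disk from a second VM, as described above...
gcloud compute instances attach-disk VM-NAME --disk BOOT-DISK --boot --zone ZONE
gcloud compute instances start VM-NAME --zone ZONE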
How to avoid this in the future
Always use the visudo command, rather than an ordinary text editor, to edit the /etc/sudoers file; visudo validates the contents of the file before saving it.
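A quick sketch of the safe workflow (visudo ships with sudo on most distributions):
sudo visudo                      # opens /etc/sudoers and refuses to save it with syntax errors
sudo visudo -c -f /etc/sudoers   # check-only mode: validate the file without editing it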
I ran into this issue as well, and hit the same problem Nakilon was reporting when trying the gcloud workaround.
What we ended up doing was configuring a startup script that removes the broken sudoers file.
So in your metadata put something like:
#!/bin/sh
rm "/etc/sudoers.d/broken-config-file"
echo "ok" > /tmp/ok.log
https://cloud.google.com/compute/docs/startupscript
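If you manage the instance with the gcloud CLI, one way to attach such a script is via instance metadata (INSTANCE-NAME, ZONE, and the fix-sudoers.sh file are hypothetical):
gcloud compute instances add-metadata INSTANCE-NAME --zone ZONE \
--metadata-from-file startup-script=fix-sudoers.sh
# then stop and start the instance so the startup script runs on boot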
As you probably figured out, this requires the /etc/sudoers file to be fixed. Since nobody has root access to the instance, you will not be able to do this from inside the instance.
The best way to solve this is to edit the disk from another instance. The basic steps to do this are:
Take a snapshot of your disk as a backup (!)
Shut down your instance, taking care not to delete the boot disk.
Start a new "debugger" instance from one of the stock GCE images.
Attach the old boot disk to the new instance.
In the debugger instance, mount the disk.
In the debugger instance, fix the sudoers file on the mounted disk.
In the debugger instance, unmount the disk.
Shut down the debugger instance.
Create a new instance with the same specs as your original instance using the fixed disk as the boot disk.
The new disk will then have the fixed sudoers file.
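On the debugger instance, the mount-and-fix steps might look roughly like this (the /dev/sdb1 device name and mount point are assumptions; check lsblk for the actual device):
sudo mkdir -p /mnt/broken
sudo mount /dev/sdb1 /mnt/broken             # the old boot disk's root partition
sudo visudo -c -f /mnt/broken/etc/sudoers    # locate the parse error
sudo vi /mnt/broken/etc/sudoers              # fix it, then re-run the check above
sudo umount /mnt/broken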
Since I bumped into this issue too: if you have another instance, or any place where you can run gcloud with sufficient privileges, you can run:
gcloud compute --project "<project id>" ssh --zone "europe-west1-b" "<servername>"
I ran this on a server which had gcloud set up as root, so you log in to the other box as root too! Then fix your issue. (If you don't have a box, just spin up a micro instance with the correct gcloud privileges.) This saves the hassle of all the disk steps.
As mentioned in the comments above, I was getting the same error in a GCP VM:
sudo: parse error in /etc/sudoers near line 21
sudo: no valid sudoers sources found, quitting
sudo: unable to initialize policy plugin
To solve this, I SSHed into another VM, became root, and then ran the gcloud ssh command to our main VM (the one with the sudo error):
gcloud compute --project "<project id>" ssh --zone "europe-west1-b" "<servername>"
And BOOM! You are now logged in as root on the VM.
Now you can access and change the /etc/sudoers file accordingly.
I found this hack better than recreating VMs and disks.
Hope this helps someone!
It is possible to connect to a VM as root from the Google Cloud Shell in the Developers Console. Make sure the VM is running, start the shell, and use this command:
gcloud compute ssh root@<instance-name> --zone <zone> [--project <project-id>]
where instance-name is found in the Compute Engine VM Instances screen. project-id is optional, but required if you are connecting to an instance in a different project from the one where you started the shell.
You can then fix this and other issues that may prevent you from using sudo.
I got a Permission denied error when trying to ssh to the problem instance via gcloud. Using a startup script, as mentioned above by Jorick, works. Instructions for it are here. You will have to stop and restart the VM instance for the startup script to be executed. I modified the script slightly:
rm -f /etc/sudoers.d/google_sudoers >& /tmp/startup.log
After the restart, launch an SSH session from the Cloud Console and check that you are able to view the file contents (with sudo more /etc/sudoers.d/google_sudoers, for example). If that works, your problem has been solved.

Using rhc snapshot-save Returns Empty File

I have a WordPress site on OpenShift and I'm attempting to back up the site. I've used these commands:
rhc tidy-app
and
rhc snapshot-save
After it reports that a snapshot is being pulled down, "Success" is displayed a few seconds later, but only an empty tar.gz file is created (it's supposed to be about ~50 MB).
This has happened before, and usually, after a few repeated attempts, it eventually worked. I've tried several times now without the backup being downloaded.
Anyone have any thoughts? Thanks.
FYI, the gear is well below the size and file count quotas.
I came across this post because I was having the same issue: I was getting empty hello.tar.gz files when running the following command:
rhc snapshot save -a hello
After some research, I found that I was missing an option. The hello.tar.gz file contained the expected contents after running the following command:
rhc snapshot save -a hello --deployment
