OpenStack volume won't attach - openstack

I am using OpenStack to create a CentOS 7 VM.
I can get the VM to run, but the installer hits a snag on the first page.
It needs a disk to install to (Installation Destination).
I thought this would be the volume I attached using the OpenStack dashboard. I used the volume's "Edit Attachments" dialog, and it pops up saying it will attach the volume, but the volume is never listed as attached to ANY instance I attach it to.
The installer also needs an Installation Source, for which I used the URL from the mirror site. Here is the URL:
ISO URL
I used the NetInstall ISO. I tried the same URL for the installation source, and I also tried the URL with "isos" changed to "os":
OS URL
Thanks for any help.

When you create VMs in OpenStack, you are not supposed to go through the installation process. In the cloud you use cloud images that are ready to boot.
You should use a CentOS cloud image.
Try loading this CentOS 7 image into your OpenStack Glance:
http://ubuntu.mirror.cloud.switch.ch/engines/images/2016-04-15/centos7.raw
You should then be able to boot your VM and log in with the username centos and the public key you provide via cloud-init.
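The upload-and-boot flow above can be sketched with the openstack CLI; the flavor, network, and keypair names below are placeholders for your own resources:

```shell
# Download the cloud image and upload it to Glance
curl -O http://ubuntu.mirror.cloud.switch.ch/engines/images/2016-04-15/centos7.raw
openstack image create "centos7" \
  --file centos7.raw \
  --disk-format raw \
  --container-format bare

# Boot an instance from the image; "m1.small", "mynet", and "mykey"
# are placeholders for your flavor, network, and keypair
openstack server create \
  --image centos7 \
  --flavor m1.small \
  --network mynet \
  --key-name mykey \
  centos7-vm

# Once the instance is ACTIVE, log in as the "centos" user with your key
ssh centos@<instance-ip>
```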

Related

Artifactory UI monitoring service status showing "online (0 of 0 nodes)" after migrating JFrog platform to new virtual machine

I have an existing JFrog/Artifactory Pro 7.27.10 RPM-based install (on a CentOS 8 VM) that I recently migrated to a new VM (CentOS Stream 8) running JFrog/Artifactory Pro 7.31.13 (also installed via RPM).
After copying my existing master.key file from the original JFrog install to the relevant directory, I started up the Artifactory Pro service on the new VM and proceeded to migrate my data using the "Simple migration with downtime" process described in this JFrog whitepaper. Everything worked fine; Artifactory is running as expected on the new VM and all my data appears good. I moved my frontend proxy DNS aliases over to the new VM and shut down the proxy on the old VM.
One problem I am now noticing is that in the Artifactory admin UI, Monitoring > Service Status no longer appears to report my Artifactory/JFrog platform microservice status. It does show Artifactory with the correct backend IP address (running on port 8082), but the "Status" shows "Online" with "(0 of 0 nodes)", and the ">" fold-down arrow shows nothing when clicked. I went back to my old Artifactory instance and checked, and it was still showing the single node with all of the individual JFrog platform service statuses properly.
My guess is that I missed something in the migration process, and/or something else needs to be configured to allow the services to show up on the monitoring page; however, I'm at a loss as to what that is or even where to look for it. I looked through system.full-template.yaml, but nothing seems obvious there. And while the Artifactory docs are usually fairly comprehensive, the page about monitoring doesn't give much insight into how this is configured or what to do if it's missing. I'm also not sure whether the initial startup of Artifactory on the new VM before I migrated my data affected how the monitoring was configured, such that it now doesn't work with the imported data (unfortunately, I didn't check the monitoring UI on the new VM before the data migration, so I can't say for sure whether it was initially working).
A couple of other details which may be relevant:
when migrating my VM, I kept the same (FQDN) hostname, but the IP address was different
I used the same frontend (nginx) proxy configurations on both the old and new VMs, though I'm not sure if this is relevant here or not.
With the exception of going from CentOS 8 to Stream 8, the VM configurations themselves should be nearly identical, as I create them from a kickstart (which was only updated for the new Stream repo paths). Again, not sure whether this is relevant at all here.
Any ideas on where I should be looking to figure out how to fix this?

ACORE API, assistance with errors and deployment

I'm having trouble setting up the ACORE APIs and getting them to work on a website.
Background:
AzerothCore running 3.3.5 on a Debian standalone server; this has the database and core files and runs both the world and auth servers, basically the standard setup shown in the how-to wiki.
I also have a standalone web server on the same subnet; it's a separate server running Linux and the normal web server stack, with a WordPress installation and the AzerothCore plugin for user signup etc.
I'm trying to add the player map (https://github.com/azerothcore/playermap) and the ACORE-API set of functions (server status, arena stats, BG queue and WoW statistics) (https://github.com/azerothcore/acore-api).
Problem:
I understand the acore-api must be run in a container (Docker or similar) on the server, which I have done; it binds to port 3000. I can then go to the local ip:3000, and it brings up this error (all DBs etc. are connecting and SOAP is working):
error 404 when navigating to IP:3000
I also get a few errors when running npm install, seen here; I'm not sure if they would be causing any issues or not:
screenshot of NPM errors on install
But further to that, when I put, say, 'serverstatus' on the web server (the separate server) and configure the config.ts file, I can't seem to get anything to display.
I'm not sure what I'm doing wrong, but it is the same scenario for all of the different acore-api functions.
How are these meant to be installed and function? I feel I'm missing a vital step.
Likewise, with PlayerMap I have edited comm_conf.php and set the realmd_id, but when loading the page I do get the map; however, the uptime is missing and no players are shown.
Could someone assist if possible?
This seems like an issue with the Node.js version. Update your Node.js to the latest LTS version, 16.13.0 (https://nodejs.org).
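One way to do that update on a Debian server is via nvm; this is a sketch assuming nvm is acceptable in your environment (the nvm install URL pins a specific nvm release):

```shell
# Install nvm (Node Version Manager) if not already present
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.0/install.sh | bash
export NVM_DIR="$HOME/.nvm"
. "$NVM_DIR/nvm.sh"

# Install and activate the 16.x LTS release
nvm install 16.13.0
nvm use 16.13.0
node --version

# Re-install the project's dependencies against the new Node version
cd /path/to/acore-api
rm -rf node_modules
npm install
```

Reinstalling node_modules matters because native addons compiled against the old Node version will otherwise keep failing.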

How to run ESXi on Openstack under a KVM VM

We run OpenStack with KVM as the hypervisor and now need to run ESXi 6 or 7 inside a VM (nested virtualization). This is mainly for converting disks to proper VMDK disks, not really for running any VMs under ESXi (which is why we are not using bare metal with ESXi as the hypervisor).
We run this very same setup under Proxmox without bigger issues; the main point there was using the vmxnet driver for the NIC. That is exactly where we fail with OpenStack. There seems to be no such driver, and using e1000 does not work. Booting the installation ISO leads to 'no nic found' at the very end.
We are using OpenStack Xena with Debian Buster as compute (running libvirt) on kernel 5.10/5.14.
Any hints how to get this up and running?
Using https://github.com/virt-lightning/esxi-cloud-images I managed to get it working for 6.5/6.7, but not 7.0.
One seems to be unable to install ESXi via ISO on an OpenStack instance directly, since no matter whether you use e1000 (6.x) or e1000e (7.x) for the installation, the installer will not be able to find the NIC. Also, the 6.x installer under OpenStack could not find any disks (with or without the SATA flag).
Instead, I used the repo above to build a pre-installed ESXi image shipped as qcow2; it is built on my local machine and thus my local libvirt. I am not sure yet why this makes such a huge difference; maybe the Nova-based abstraction or something else hinders OpenStack (no verification yet).
Build the 6.5/6.7-based qcow2 image locally, import it via Glance (ensure you use e1000 for 6.x and e1000e for 7.x), and then create a new instance.
This will get you up and running on 6.5/6.7 with proper DHCP and network configuration.
For 7.x the interface is detected, but somehow DHCP is not working. I tried with q35 and various other options, but could not get 7.x to work until now.
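The import-and-boot step can be sketched with the openstack CLI; the hw_vif_model image property is what selects the emulated NIC model (image, flavor, and network names are placeholders):

```shell
# Upload the locally built qcow2 image; hw_vif_model selects the
# emulated NIC model the instance will see (e1000 for ESXi 6.x,
# e1000e for 7.x)
openstack image create "esxi-6.7" \
  --file esxi-6.7.qcow2 \
  --disk-format qcow2 \
  --container-format bare \
  --property hw_vif_model=e1000

# Create the instance; flavor and network names are placeholders
openstack server create \
  --image esxi-6.7 \
  --flavor esxi.large \
  --network mynet \
  esxi-test
```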
I created a fork at https://github.com/EugenMayer/esxi-cloud-images to:
properly expose the credentials one can log in with
remove the Ansible Zuul user with the author's predefined public key
clean up the README

I am unable to vagrant up

I am using a project from GitHub.
It has a Vagrantfile with it. When I run the vagrant up command in my terminal, I get an error.
The terminal should show a READ ABOVE message on a successful download.
I want to type the site's address into my browser to start a local development server.
It's a pretty old file, and the repo was using PuPHPet, but that project seems to have been dead for 2 years; the website is down.
In your case, Vagrant is trying to download the box from the internet, but the owner of this box hosted it under the puphpet domain, which is no longer available.
I am not sure what's the best way to help now:
find another, more recent example and start from there
if you want to fix this, you will need to edit https://github.com/LearnWebCode/vagrant-lamp/blob/master/puphpet/config.yaml#L6 and use a different box available on the Vagrant site; Ubuntu 16.04 is pretty old now, but you can search for a box on Vagrant Cloud
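The second option can be sketched as follows, assuming the PuPHPet config uses a box/box_url pair on the referenced line (the exact key names may differ in your copy of config.yaml):

```shell
# Pick a box that still exists on Vagrant Cloud (app.vagrantup.com),
# e.g. the official Ubuntu 16.04 box:
vagrant box add ubuntu/xenial64

# Then edit puphpet/config.yaml and point it at the new box, roughly:
#     box: ubuntu/xenial64
#     box_url: ubuntu/xenial64
# (the original line 6 points at a puphpet.com URL that is now dead)

# Re-run the provisioning
vagrant up
```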

GCE: cannot log in, "The VM guest environment is outdated and only supports the deprecated 'sshKeys' metadata item"

I cannot SSH into my Google Compute Engine (GCE) WordPress instance anymore.
It was working one month ago when I tried last.
I use the Google built-in SSH client in a Chrome browser window.
Yesterday I tried and got the following message:
The VM guest environment is outdated and only supports the deprecated
'sshKeys' metadata item. Please follow the steps here to update.
The "Steps here" link navigates to https://cloud.google.com/compute/docs/images/configuring-imported-images#install_guest_environment which does not seem to help me much.
I am not aware of any changes that I may have made.
How can I fix this?
It looks like your instance's disk is full, and so the SSH keys can't be created in the temp directory. You can do the following:
Stop your instance and wait for it to shut down
Click on the disk your instance is using, and choose "edit" at the top
Enter a larger disk size, and save
Go back to your instance and start it up again
You should now be able to connect via SSH. While you're in there, check to see what filled up your hard disk so you can prevent this from happening again (maybe a rogue program is printing out too many logs, etc).
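The stop/resize/start steps above can also be done from the command line with gcloud; the instance name, disk name, and zone below are placeholders:

```shell
# Stop the instance, grow its boot disk, and start it again
gcloud compute instances stop my-instance --zone=us-central1-a
gcloud compute disks resize my-instance-disk --size=50GB --zone=us-central1-a
gcloud compute instances start my-instance --zone=us-central1-a

# After logging back in, find what filled the disk
df -h /
sudo du -xh / 2>/dev/null | sort -h | tail -20
```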
If you're seeing this on Debian 8 or 9, the most likely reason is that the google-compute-engine.* packages that allow SSH access to the instance have been removed by apt-get autoremove.
If you still have an open SSH connection to the machine or can use a tool like gcloud, running sudo apt-get update && sudo apt-get install gce-compute-image-packages should fix this.
If you no longer have any SSH access, there is a procedure available on the GCP docs site that can be used to restore it.
I've created a bug report here for this.
Might be a bit late, but you can
1) Stop the VM
2) Edit and enable serial console
3) Use the serial connection to log in and update the VM
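Those three steps can be sketched with gcloud (instance name and zone are placeholders):

```shell
# Enable the interactive serial console on the instance
gcloud compute instances add-metadata my-instance \
  --zone=us-central1-a \
  --metadata serial-port-enable=TRUE

# Connect to the serial console and log in with a local account,
# then update the guest environment packages from inside the VM
gcloud compute connect-to-serial-port my-instance --zone=us-central1-a
```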
In recent days I met a similar problem; later I found that the permissions on my home directory had fooled me: being lazy, I had run chmod 777 ~.
After doing that, I could not SSH in via my terminal, or even via the browser; I only got "The VM guest environment is outdated and only supports the deprecated 'sshKeys' metadata item. Please follow the steps here to update." It sounds like you must set 755 on your home directory, not just take care of the 700 on .ssh or the 600 on authorized_keys.
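The permission fix described above can be sketched as follows, run as the affected user (the mkdir/touch lines just ensure the paths exist before tightening them):

```shell
# sshd refuses key-based login when these are group/world-writable
mkdir -p "$HOME/.ssh"                  # create if missing
touch "$HOME/.ssh/authorized_keys"

chmod 755 "$HOME"                      # home dir: at most 755
chmod 700 "$HOME/.ssh"                 # .ssh dir: owner-only
chmod 600 "$HOME/.ssh/authorized_keys" # key file: owner read/write only

# Verify the resulting modes
stat -c '%a %n' "$HOME" "$HOME/.ssh" "$HOME/.ssh/authorized_keys"
```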
I met a similar issue after I created a FreeBSD VM: gcloud ssh did not work, but I was lucky that I could still use the browser-window SSH to reach my VM. I then manually added the google_compute_engine public key to .ssh/authorized_keys, and now it works; I can use gcloud ssh to connect. I am not sure whether this is the best or most secure way, though.
