Restart of Nova services in DevStack - openstack

I have tried restarting the Nova services after making some modifications to the nova.conf file.
While trying to restart, I get an "unrecognized service" error.
I know the way to restart Nova services in DevStack is different.
Could someone show me the proper way to do this?

Another way:
screen -x to attach to the running screen session.
Ctrl+a, " to get the list of available windows.
Go to n-cpu and stop it with Ctrl+c, then press the up arrow to recall the last command in the terminal and press Enter to run it again.
Depending on your conf changes, you may need to restart other services as well (they start with 'n-', for example n-api, n-cond and so on).
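Roughly, the whole sequence from a fresh terminal looks like this (the session name "stack" is the DevStack default, as far as I recall; adjust if yours differs):
# attach to the DevStack screen session
screen -x stack
# inside screen: Ctrl+a, " opens the window list; pick n-cpu and press Enter
# Ctrl+c stops the running nova-compute process
# press the up arrow and Enter to re-run the last command and start it again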
I'm not sure if it is OK to recommend an IRC channel on Stack Overflow, but there is also the #openstack-nova channel on IRC (on the freenode server). It can be useful (and a faster way to get an answer) to ask there if you have problems with something Nova-related.

I found the way to do this:
* First cd to the devstack folder.
* There you can find the file ./rejoin_stack.sh.
* Execute it.
* It will run and a screen session will be opened for access.
* Press Ctrl+a, then Shift+' (i.e. ").
* It will then list the running services.
* Move to the service that needs to be stopped by scrolling towards it.
* Press Enter on the service that needs to be stopped.
* Then press Ctrl+c; it will stop the service.
* Then press the up arrow key to run the service again.
* The service will be restarted successfully.

Not exactly a way of restarting Nova, but it could help, too. It's a hack, though.
./rejoin_stack.sh has been removed from DevStack, and even screen does not seem to work for me; it does not show the menu and does not accept the Ctrl sequences (I blame running DevStack in Docker for this, but that's not important).
So this is where the hack comes in handy. All DevStack services are restarted by running ./unstack.sh and ./stack.sh, but in the process any nova.conf modifications are dropped. The hack is to modify /devstack/lib/nova, which acts as a template for nova.conf, so that after rerunning ./stack.sh the generated nova.conf contains the intended values.
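If your DevStack version supports post-config sections in local.conf, that may be a cleaner way to persist the same changes across ./stack.sh runs; a rough sketch (the option and value are placeholders):
[[post-config|$NOVA_CONF]]
[DEFAULT]
some_option = some_value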

Newer versions of DevStack run their services as systemd units, so you can use systemctl to manage them.
List all services:
sudo systemctl list-units "devstack@*"
Restart an individual service (replace n-cpu.service with the service name):
sudo systemctl restart devstack@n-cpu.service
Restart all services:
sudo systemctl restart "devstack@*"
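To check that a service came back up cleanly after a restart, you can also follow its journal (assuming the same devstack@ unit naming):
sudo journalctl -u devstack@n-cpu.service -f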
Refer to the OpenStack documentation for more details.

Related

Update .py file on nginx, gunicorn, supervisor

I don't know why this is so difficult, but every time I update a file in my Flask application I have to restart gunicorn so that the file updates on the server. I am mostly a front-end developer and don't play with servers enough to remember these things, and I have to spend hours searching Google for various phrases to find the right commands. This time I can't seem to find anything, and the file I created to save these things has conveniently disappeared.
My server:
Ubuntu 18.04
nginx
gunicorn
supervisor
I am updating a .py file. I placed the updated version on the server using FTP. I'm logged into the server, using SSH, through a Git Bash shell. sudo systemctl restart gunicorn gives me the error Failed to restart gunicorn.service: Unit gunicorn.service not found.. Rereading and restarting supervisor does not do the trick, and neither does restarting nginx. Is there not a simple command to apply updates? I'm used to using servers on general hosting sites, where updating a file via FTP just works. I was really enjoying learning Flask up until this point, but now I regret it. I keep thinking that there has to be some kind of simple trick to make such a simple thing go smoothly, but I'm at the end of my rope trying to figure this out. Any suggestions?
I finally found it.
sudo supervisorctl stop app_name
sudo supervisorctl start app_name
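For what it's worth, supervisorctl also has a restart command that does both steps in one go (app_name is whatever program name you configured in supervisor):
sudo supervisorctl restart app_name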

Udev does not have permissions to write to the file system

I have been fighting with udev all afternoon. Basically I have created a rule that detects when a mass storage device is plugged into the system. This rule works and I can get it to execute a script without any issues; here it is for review purposes:
ACTION=="add", KERNEL=="sd?*", SUBSYSTEM=="block", RUN+="/usr/local/bin/udevhelper.sh"
The problem I am running into is that the script is executed as some sort of strange user that has read-only permissions to the entire system. The script I am executing is quite simple:
#!/bin/sh
# write a simple flag file so other tooling knows a drive was detected
cd /usr/local/bin
touch .drivedetect
echo "1" > .drivedetect
exit
Basically I would like udev to run this script and simply output a 1 to a file named .drivedetect within the /usr/local/bin folder. But as I mentioned before, it sees the rule and executes it when I plug in a drive; however, when it tries to run the script it comes back with "file system is read-only" and the script quits with error code 1.
I am currently running this on a Raspberry Pi Zero and the latest Debian image. udev is still being run from init.d from what I can tell, because there is no systemd service registered for it. Any help would be great, and if you need any more information just ask.
Things I've tried:
MODE="0660"
GROUP="plugdev"
Various combinations of RUN+="/bin/sh -c '/path/to/script'" and /bin/bash
OPTIONS="last_rule"
And last but not least I tried running the script under the main username as well
#!/bin/sh
su pi drivedetect
I had the same issue when I just used
udevadm control --reload-rules
after editing a udev rule. But if I do:
sudo /etc/init.d/udev restart
the script can edit a file.
It's not enough to reboot; I have to do the restart after booting. It then works as expected until the next reboot.
This is on an RPi with Raspbian Stretch Lite.
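If you need to confirm the rule actually fires before digging into permissions, udevadm can help; the device path below is only an example:
# watch udev events live while plugging the drive in
udevadm monitor --udev
# dry-run the rules against a device that is already present
udevadm test /sys/block/sda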

"Homestead Improved" Vagrant VM - Failed to restart php7.0-fpm.service: Unit php7.0-fpm.service not found

I'm trying to get a Homestead Improved Vagrant VM instance running on Windows.
See Homestead Improved on Github. I'm following this easy introduction:
https://www.sitepoint.com/quick-tip-get-homestead-vagrant-vm-running/
My steps are:
git clone https://github.com/swader/homestead_improved my_project
cd my_project
bin/folderfix.sh
vagrant up
The machine boots and is ready. Then the provisioner runs. Then I get the following error message:
==> default: Failed to restart php7.0-fpm.service: Unit php7.0-fpm.service not found.
The SSH command responded with a non-zero exit status. Vagrant
assumes that this means the command failed. The output for this command
should be in the log above. Please read the output to determine what
went wrong.
Any hints what to do?
This has been fixed at the repo level, and should never happen again if you run git pull inside your cloned Homestead Improved folder (but outside of the VM, not SSH-ed into it). If your machine is already running, you might have to apply the steps below, too. But new machines (i.e. new clones of Homestead Improved) will not have this happen any more. An explanation of what happened is here.
@daniel-sixl please try to re-download/re-clone and start from scratch; everything should be working just fine now.
Old solution:
Try to change php7.0-fpm to php7.1-fpm - the box was auto-updated to the new version.
You can do this by going into /etc/nginx/sites-available and changing the required file - its name will match the site you defined, as per that post you linked. So probably /etc/nginx/sites-available/homestead.app.
--
Edit: added more detailed instructions for people very new to it all.
OK, so what you need to do is, once you're in the sites-available folder, edit the homestead.app file. Something like sudo vim homestead.app will do just fine; it'll open a basic text editor (that's quite nightmarish to use when you're new to it, so just be patient :) ). sudo is important, because you are editing a file that only an admin has access to.
Once you're "in", do the following:
press / (this activates "search") and input php7.0-fpm. This should take you to the line which contains that phrase. If you press / again and press Enter, that works like "find next", so it'll go to the next line having the phrase, or restart from the top if no other lines contain it.
when your cursor is on the line with php7.0-fpm (you can move it around with the arrows, of course), press i. This activates "insert" mode. Now you can edit the file.
change the 7.0 to 7.1.
press ESC to exit insert mode and go back into normal (command) mode.
repeat for each line with 7.0
once done, while in normal mode (press ESC to make sure), type :x. Yeah, like an emoticon with cross lips. Press Enter. That's short for "Save and Exit".
you will now be in the folder again, from where you should execute sudo service nginx restart.
The new configuration should now take effect, and everything should start working.
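If you'd rather not do the edit interactively in vim, a sed one-liner should achieve the same substitution (assuming the same file path as above):
sudo sed -i 's/php7\.0-fpm/php7.1-fpm/g' /etc/nginx/sites-available/homestead.app
sudo service nginx restart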

sudoers - Google Compute Engine - no access to root

I have a Google Compute Engine VM instance with an Asterisk server running on it. I get this message when I try to run sudo:
sudo: parse error in /etc/sudoers near line 21
sudo: no valid sudoers sources found, quitting
sudo: unable to initialize policy plugin
Is there a password for root so I can try to change it there? Any suggestions on this?
It looks like you have manually edited the /etc/sudoers file, so while you would normally have sudo access, the parse error means you won't be able to use it directly.
Here's how to fix this situation.
1. Save the current boot disk
go to the instance view in the Developers Console
find your VM instance and click on its name; you should now be looking at a URL such as
https://console.cloud.google.com/project/[PROJECT]/compute/instancesDetail/zones/[ZONE]/instances/[VM-NAME]
stop the instance
detach the boot disk from the instance
2. Fix the /etc/sudoers on the boot disk
create a new VM instance with its own boot disk; you should have sudo access here
attach the disk saved above as a separate persistent disk
mount the disk you just attached
fix the /etc/sudoers file on the disk
unmount the second disk
detach the second disk from the VM
delete the new VM instance (let it delete its boot disk, you won't need it)
3. Restore the original VM instance
re-attach the boot disk to the original VM
restart the original VM with its original boot disk, with fixed config
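For reference, the disk detach/attach steps above can also be done from the command line with gcloud (the instance, disk and zone names below are placeholders):
gcloud compute instances stop broken-vm --zone us-central1-a
gcloud compute instances detach-disk broken-vm --disk broken-boot-disk --zone us-central1-a
gcloud compute instances attach-disk debugger-vm --disk broken-boot-disk --zone us-central1-a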
How to avoid this in the future
Always use the visudo command, rather than just any text editor, to edit the /etc/sudoers file; it validates the contents of the file before saving it.
I ran into this as well, and hit the same problem Nakilon was reporting when trying the gcloud workaround.
What we ended up doing was configuring a startup script that removes the broken sudoers file.
So in your metadata put something like:
#!/bin/sh
# remove the sudoers drop-in that breaks parsing, then leave a marker so we know the script ran
rm "/etc/sudoers.d/broken-config-file"
echo "ok" > /tmp/ok.log
https://cloud.google.com/compute/docs/startupscript
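One way to attach such a script to an existing instance is through its metadata with gcloud (the instance and file names are placeholders); the VM then has to be stopped and started again so the startup script runs:
gcloud compute instances add-metadata broken-vm --metadata-from-file startup-script=fix-sudoers.sh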
As you probably figured out this requires the /etc/sudoers file to be fixed. As nobody has root access to the instance, you will not be able to do this from inside the instance.
The best way to solve this is to edit the disk from another instance. The basic steps to do this are:
Take a snapshot of your disk as a backup (!)
Shutdown your instance, taking care not to delete the boot disk.
Start a new "debugger" instance from one of the stock GCE images.
Attach the old boot disk to the new instance.
In the debugger instance, mount the disk.
In the debugger instance, fix the sudoers file on the mounted disk.
In the debugger instance, unmount the disk
Shutdown the debugger instance.
Create a new instance with the same specs as your original instance using the fixed disk as the boot disk.
The new disk will then have the fixed sudoers file.
Since I bumped into this issue too: if you have another instance, or any place where you can run with gcloud privileges, you can run:
gcloud compute --project "<project id>" ssh --zone "europe-west1-b" "<servername>"
I ran this on a server which had gcloud running as root, so you log in to the other box as root too! Then fix your issue. (If you don't have a box, just spin up a micro instance with the correct gcloud privileges.) It saves the hassle of all the disk shuffling, etc.
As mentioned in the comments above, I was getting the same error as below in a GCP VM.
sudo: parse error in /etc/sudoers near line 21
sudo: no valid sudoers sources found, quitting
sudo: unable to initialize policy plugin
To solve this:
I SSH-ed to another VM and became root, then ran the gcloud ssh command to our main VM (the one where you are getting the sudo error):
gcloud compute --project "<project id>" ssh --zone "europe-west1-b" "<servername>"
And boom! Now you are logged in as root in the VM.
Now you can access/change the /etc/sudoers file accordingly.
I found this hack better than recreating VMs/disks.
Hope this helps someone!
It is possible to connect to a VM as root from the Developers Console Google Cloud Shell. Make sure the VM is running, start the shell and use this command:
gcloud compute ssh root@<instance-name> --zone <zone> [--project <project-id>]
where instance-name is found in the Compute Engine VM Instances screen. project-id is optional but required if you are connecting to an instance in a different project from the project where you started the shell.
You can then fix this and other issues that may prevent you from using sudo.
I got a Permission denied error when trying to SSH to the problem instance via gcloud. Using a startup script as mentioned above by @Jorick works. Instructions for it are here. You will have to stop and restart the VM instance for the startup script to get executed. I modified the script slightly:
rm -f /etc/sudoers.d/google_sudoers >& /tmp/startup.log
After the restart, launch an SSH session from the cloud console and check that you are able to view the file contents (with sudo more /etc/sudoers.d/google_sudoers, for example). If that works, your problem has been solved.

Nginx and passenger 3.0.0 on mac - why does it fail on startup?

I've been trying to set up nginx 0.8.53 and Passenger 3.0.0 on my dev environment - OS X Snow Leopard and REE. I manually compiled nginx with the Passenger module linked in.
When I tried running Passenger, it had a problem - ENV['PATH'] appeared to be null, so the split on it in the call to PlatformInfo.find_command raised an exception. It was called when trying to find out the OS name - looking for the sw_vers command.
I tweaked the source and told it that it was macosx and then it
complained that it couldn't find the Rails 2.3.8 gem. This is
probably related to the first problem.
I'm not sure how to troubleshoot this. When I su -i and sudo nobody, both users let me start irb and see the expected value for ENV['PATH'], so I'm not sure why it's not working when Passenger is running.
One possibility: Passenger launches as the user that owns the config/environment.rb file (or the config.ru file, if you have one) - make sure that file's owner is something sensible.
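A quick way to check (and, if needed, change) that owner; the deploy user and group below are just examples:
ls -l config/environment.rb
sudo chown deploy:staff config/environment.rb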
I don't know how you start Nginx, but you can write a launcher script for Nginx that starts Nginx with a specific environment, like this:
#!/bin/bash
# set whatever environment Nginx (and thus Passenger) should inherit
export PATH=whatever
# replace this shell with the real nginx binary so it keeps the exported environment
exec /path/to/nginx
