How to re-run cloud-init without reboot - openstack

I am using OpenStack to create a VM with the 'nova boot' command. My image is cloud-init enabled. I pass a --user-data script in bash shell format for cloud-init to run during VM boot. All of this happens successfully.
Now my use case is to re-run cloud-init to execute the same user-data script without rebooting the VM. I looked at the /usr/bin/cloud-init options and they do talk about running specific modules, but nothing makes it execute the same user-data script again. How can this be achieved? Any help would be appreciated.
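For context, the setup looks roughly like this; the image, flavor, VM and file names below are placeholders rather than the actual ones:
#!/bin/bash
# user-data.sh - the script cloud-init runs at first boot
echo "bootstrapped by cloud-init" > /var/tmp/bootstrap.marker

# boot the VM with the script attached (names are placeholders)
nova boot --image ubuntu-cloudimg --flavor m1.small --user-data ./user-data.sh my-vm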

While re-running all of cloud-init without a reboot isn't a recommended approach, the following commands will let you accomplish this on a system.
The commands have been updated; to re-run, you first need to clean out the existing config:
sudo cloud-init clean --logs
cloud-init typically runs multiple boot stages in order due to systemd service dependencies. If you want to repeat that process without a reboot you can run the following 4 commands:
Detect local datasource (cloud platform):
sudo cloud-init init --local
Detect any datasources which require network up and run "cloud_init_modules" defined in /etc/cloud/cloud.cfg:
sudo cloud-init init
Run all cloud_config_modules defined in /etc/cloud/cloud.cfg:
sudo cloud-init modules --mode=config
Run all cloud_final_modules defined in /etc/cloud/cloud.cfg:
sudo cloud-init modules --mode=final
Beware: things like SSH host keys may be regenerated.
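Putting the above together as a single sequence, with an optional backup of the current SSH host keys first given the warning above (the backup directory is arbitrary):
# optional: keep a copy of the host keys in case you need to restore them afterwards
sudo mkdir -p /root/ssh-host-key-backup && sudo cp -a /etc/ssh/ssh_host_* /root/ssh-host-key-backup/

sudo cloud-init clean --logs
sudo cloud-init init --local
sudo cloud-init init
sudo cloud-init modules --mode=config
sudo cloud-init modules --mode=final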

In order for cloud-init to reset, you need to execute rm -rf /var/lib/cloud/instances.
Then re-run cloud-init start and it will run the full boot script process again.

Since this keeps popping up in search results, here is what works for me:
Delete the semaphores in /var/lib/cloud/instances/i-xxxxxxx/sem. cloud-init will not re-run if these files are present.
Edit /var/lib/cloud/instances/i-xxxxxxxx/scripts/part-001. This is your user-data script.
Execute only the user-scripts module of cloud-init. This will not re-download the user data, but will execute the already downloaded (and now modified) script from step 2.
sudo /usr/bin/cloud-init single -n cc_scripts_user
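As a rough shell sketch of those three steps, using the /var/lib/cloud/instance symlink to the current instance directory; the semaphore file name config_scripts_user is an assumption based on the usual config_<module> naming:
# 1. remove the per-instance semaphore so the module is allowed to run again
sudo rm -f /var/lib/cloud/instance/sem/config_scripts_user

# 2. edit the already-downloaded user-data script
sudo vi /var/lib/cloud/instance/scripts/part-001

# 3. re-run only the user-scripts module
sudo /usr/bin/cloud-init single -n cc_scripts_user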

Given that this post was actively touched 6 months ago, I wanted to provide a more complete answer here from cloud-init upstream.
The original question: "how to re-run a user-data script again at a later time with cloud-init"
Generally, user scripts are only run once per instance, by the config module config-user-scripts. If the instance-id in the metadata doesn't change, it won't re-run.
The per-instance semaphores can be bypassed with the following command line, which tells cloud-init to run the user-scripts module regardless of instance-id:
sudo cloud-init single --name scripts-user --frequency always
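For example, to see which instance-id the per-instance semaphores are currently keyed on and then force the module anyway (the /var/lib/cloud/data/instance-id path is the usual cloud-init layout, stated here as an assumption):
# the instance-id the per-instance semaphores are tied to
cat /var/lib/cloud/data/instance-id

# run the user-scripts module regardless of that instance-id
sudo cloud-init single --name scripts-user --frequency always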
Regarding the other suggestion to re-run all of cloud-init without a system reboot: it isn't a recommended approach, because some parts of cloud-init run in the systemd generator timeframe to detect new datasource types. That said, the following commands will allow you to accomplish this without a reboot on a system.
cloud-init supports a clean subcommand to remove all semaphore files and allow cloud-init to re-run all config modules again. Beware that this means SSH host keys are regenerated and .ssh config files re-written, so it could impact your ability to get back into the VM.
To clean all semaphores so cloud-init modules will all re-run on next boot:
sudo cloud-init clean --logs
cloud-init typically runs multiple boot stages in sequence due to systemd service dependencies. If you want to repeat that process without a reboot you can run the following 4 commands:
Detect local datasource (cloud platform):
sudo cloud-init init --local
Detect any datasources which require network up and run "cloud_init_modules" defined in /etc/cloud/cloud.cfg:
sudo cloud-init init
Run all cloud_config_modules defined in /etc/cloud/cloud.cfg:
sudo cloud-init modules --mode=config
Run all cloud_final_modules defined in /etc/cloud/cloud.cfg:
sudo cloud-init modules --mode=final

To run the packages module of the cloud-config stage of cloud-init, you can run
# cloud-init-cfg all config
To run the runcmd module of the cloud-config stage of cloud-init, you can run
# cloud-init-cfg all final
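Note that cloud-init-cfg is an older entry point found on some distributions (for example Amazon Linux); on current cloud-init the equivalent calls, as shown in the answer above, are:
sudo cloud-init modules --mode=config   # cloud_config_modules
sudo cloud-init modules --mode=final    # cloud_final_modules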

How to shut down a computer (host)

I know it sounds weird, but I have a case where Deno would need to shut down its own host (and therefore kill its own process). Is this possible?
I specifically need this for Linux (Lubuntu), if that's relevant. I guess this requires sudo rights, which sucks, but it would be an option.
For those interested in the details: I'm writing Minecraft server software, and if the server has had no players for 30 minutes, it shuts itself down to save some power. The Raspberry Pi it runs on is on 24/7 anyway and has a wake-on-LAN feature, so it can boot again. After boot, the server manager software starts automatically as a Linux service.
You can create a subprocess to do this:
await Deno.run({ cmd: ["shutdown", "-h", "now"] }).status();
Concepts
Deno is capable of spawning a subprocess via Deno.run.
--allow-run permission is required to spawn a subprocess.
Spawned subprocesses do not run in a security sandbox.
Communicate with the subprocess via the stdin, stdout and stderr streams.
Use a specific shell by providing its path/name and its string input switch, e.g. Deno.run({cmd: ["bash", "-c", "ls -la"]});
See also command line - Shutdown from terminal without entering password? - Ask Ubuntu for ideas on how to avoid needing sudo to call shutdown or alternative commands that you can invoke from Deno instead.
To extend on what @mfulton2 wrote, here is how I made it work so that I did not need to start the program with sudo rights, but was still able to shut down the computer without using sudo outside or within the app.
Open or create the following file: sudo nano /etc/sudoers
Add the line username ALL = NOPASSWD: /sbin/shutdown for your user,
or add the line %admin ALL = NOPASSWD: /sbin/shutdown for the whole admin group.
In your deno script, write Deno.run({ cmd: ["shutdown", "-h", "now"]}).status();
Execute script!
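A shell sketch of the sudoers part, using visudo so the file is validated before it is saved (the /sbin/shutdown path is taken from the lines above):
# edit sudoers safely; add one of the NOPASSWD lines shown in step 2
sudo visudo

# verify the file still parses cleanly
sudo visudo -c

# note: the NOPASSWD rule only applies when the command is actually invoked through sudo
sudo /sbin/shutdown -h now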
Keep in mind that any experienced Linux user would probably tell you that this is very dangerous (it probably is) and that it might not be the best way. But IMHO the damage this can cause is minor enough, as it only affects the shutdown command.

How to resolve "Error: No module named 'airflow.www'" while starting airflow webserver

Getting the below error while starting the Airflow webserver:
balajee@Balajees-MacBook-Air.local:~$ airflow webserver -p 8080
[2018-12-03 00:29:37,066] {__init__.py:51} INFO - Using executor SequentialExecutor
[2018-12-03 00:29:38,776] {models.py:271} INFO - Filling up the DagBag from /Users/balajee/airflow/dags
Running the Gunicorn Server with:
Workers: 4 sync
Host: 0.0.0.0:8080
Timeout: 120
Logfiles: - -
Error: No module named 'airflow.www'
Fixed for me
pip3 uninstall -y gunicorn
pip3 install gunicorn==19.4.0
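If you want to double-check which gunicorn the webserver will actually pick up after the reinstall, something along these lines (paths vary per machine):
# list every gunicorn on the PATH; ideally only the one in your airflow environment remains
which -a gunicorn
gunicorn --version
pip3 show gunicorn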
I got this problem this morning, and I found a strange solution; maybe it helps you. I think you may just need to change the directory you run the command from.
I installed airflow's basic dependencies in my virtualenv directory venv with PyCharm's help, and I used PyCharm's built-in Terminal tab to access my venv directly. I ran airflow initdb to init the sqlite database that stores all my logs and ops, then, following the official tutorial, ran airflow webserver to start the webserver. But somehow today I used my Mac terminal, started the virtualenv, started airflow webserver, and encountered this problem with:
Running the Gunicorn Server with:
Workers: 4 sync
Host: 0.0.0.0:8080
Timeout: 120
Logfiles: - -
=================================================================
Error: No module named 'airflow.www'
[2019-05-26 07:45:27,130] {cli.py:833} ERROR - No response from gunicorn master within 120 seconds
[2019-05-26 07:45:27,130] {cli.py:834} ERROR - Shutting down webserver
And I tried @Evgeniy Sobolev's solution of reinstalling gunicorn, and nothing changed; but when I kept using my PyCharm Terminal, it still ran successfully. I guess the directory where you first init your db and run the webserver is critical. By default, when I used the PyCharm Terminal to init the db and start the webserver, that was the project root directory, like:
(venv) root@root:~/GitHub/FakeProject$ airflow webserver
But today I went into the subdirectory to start the virtualenv, and the working directory changed!
root@root:~/GitHub/FakeProject/SubDir$ source venv/bin/activate
(venv) root@root:~/GitHub/FakeProject/SubDir$ airflow webserver
** Error **
So in this way it hits Error: No module named 'airflow.www'. I then changed back to the original directory, and the webserver ran successfully, just like in the PyCharm Terminal:
(venv) root@root:~/GitHub/FakeProject/SubDir$ cd ..
(venv) root@root:~/GitHub/FakeProject$ airflow webserver
** It works **
I thought maybe airflow stores some metadata (like a PATH setting, maybe) the first time you init your airflow db, so you cannot change the directory you run the command from.
I hope it helps somebody in the future. Just check your directory!
Looks like you have a problem with gunicorn.
Try executing these two commands:
sudo -H pip3 uninstall -y gunicorn
sudo -H pip3 install gunicorn
It should resolve your problem, because airflow shows you an unclear error message for what are really gunicorn problems.
These were my steps when the problem happened:
create a separate virtualenv only for airflow (I use anaconda distribution)
activate this env with conda activate
install airflow: pip install apache-airflow
at this moment the error No module named 'airflow.www' was shown for me
To fix it, follow these steps:
Look for where your gunicorn is: whereis gunicorn
gunicorn should live only in your virtualenv directory: /home/yourname/anaconda3/envs/airflow_env/bin/gunicorn
If it is in two directories, keep it only in your airflow environment and remove it from any other.
Another way to verify whether gunicorn is in other directories is to print your PATH variable: echo $PATH. Look for gunicorn in /home/yourname/.local/bin and in other anaconda directories on the PATH, and remove all such references. Remove gunicorn from the conda base env as well: pip uninstall gunicorn.
With these steps (sketched below), I think your problem will be solved.
I used the anaconda distribution, but I think the same process can be done without it. I used airflow 1.10.0 and python 3.6.
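A rough shell sketch of that cleanup; the environment and home-directory paths are the examples used above, not real ones:
# see every gunicorn visible on the PATH; only the one inside the airflow env should remain
whereis gunicorn
which -a gunicorn

# with the other environment (e.g. the conda base env) active, remove its stray copy
pip uninstall gunicorn

# confirm the PATH no longer resolves gunicorn from ~/.local/bin or the base anaconda dirs
echo $PATH
which gunicorn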
If you have defined a custom home directory for airflow, other than the default one (~/airflow), during the installation:
You first need to export the custom path:
export AIRFLOW_HOME=/your/custom/path/airflow
Go to the airflow directory and then run the webserver:
airflow webserver -p 8080
Run the scheduler too:
airflow scheduler
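As one sequence (the path below is the same placeholder used above; the scheduler normally goes in a second terminal with the same variable exported):
export AIRFLOW_HOME=/your/custom/path/airflow   # placeholder path
cd "$AIRFLOW_HOME"
airflow webserver -p 8080
# in another terminal, after exporting AIRFLOW_HOME again:
airflow scheduler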
Please check whether gunicorn is already installed on the server. For me it was installed in /usr/local/bin and it took precedence over the gunicorn version installed with airflow. Uninstall the earlier one or fix your $PATH variable.
I solved this by starting the webserver from the airflow folder itself.
I was previously trying to start the server from the home directory, but the required modules could not be found, which may be the case here.
Late to the party, but this could help others who get here.
I got the same issue using the latest airflow version, 2.5.0.
Make sure the env variable AIRFLOW_HOME is pointing to the right location.
Thanks all for sharing.
I added sudo and it actually worked just fine.
I got the same error today, and sudo did the trick for me.

Can't run psql with postgres running as launch daemon

This is the third time I'm setting up Postgres on a new machine (OS X 10.9 this time), and the third time I'm having problems with the connection.
Basically, I'm at the point where I've created a database cluster and can start postgres using:
postgres -D /usr/local/pgsql/data
But I want it to run in the background as a launch daemon, so I run:
sudo launchctl load -w /Library/LaunchDaemons/org.macports.postgresql91-server.plist
It seems like the daemon is launched successfully, but when I type psql I get the same old error message I've been dealing with every single time I try to set up Postgres:
psql: could not connect to server: No such file or directory. Is the server running locally and accepting connections on Unix domain socket "/tmp/.s.PGSQL.5432"?
Any ideas on what might be causing this?
The psql you're running is the old version bundled by Apple with Mac OS X and found on the default PATH. Use the one from Homebrew, by fixing your PATH or entering the full path explicitly.
Alternately, explicitly connect to the server by overriding the default socket directory:
psql -h /tmp
See also:
this superuser answer
How to modify PATH for Homebrew?
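A quick way to check which psql binary you are actually invoking, and to point it explicitly at the server's socket directory (the /tmp socket path and port 5432 come from the error message above):
# list every psql on the PATH; the Apple-bundled one usually sits in /usr/bin
which -a psql
psql --version

# connect via the server's Unix socket directory instead of the default one
psql -h /tmp -p 5432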
Update:
In this case it looks like the server is genuinely not starting. Check the permissions on the data directory (apparently /usr/local/pgsql/data) and check the Console.app logs for relevant messages from launchd.
Update:
You must fix the permissions so the postgres user (or postgres_, depending on how you installed) has ownership. Check the launchd config file to see which user it runs as, and run sudo chown -R postgres /usr/local/pgsql/data to change ownership. Replace postgres with postgres_ if that's what your launchd config says.

Gitlab: Problems running Unicorn, Resque with Passenger/Nginx

I have installed GitLab on a brand new Ubuntu (10.04) and it is working almost correctly. GitLab is reachable over HTTP, and I can push/pull data to the server via git. One thing is missing though: the activity feed is not updating. So I thought there was something wrong with the git hooks. I followed the GitLab installation process completely, except that I'd like to use Passenger with Nginx in order to deploy multiple apps.
I ran sudo -u gitlab -H bundle exec rake gitlab:env:info RAILS_ENV=production to see if everything is set up correctly, but it said Redis is not running. ps aux says redis-server is up, so it is not the git hooks. The GitLab docs say to restart the gitlab service to solve that problem. When I do, I get an error which I think is the problem I need to solve:
$ sudo /etc/init.d/gitlab restart
Error, unicorn not running!
My question is, how can I get around this problem? How can I run unicorn? I thought the gitlab service would start it. Am I not using Nginx? Before I start reinstalling the whole thing, this time without Passenger, I thought I'd ask the question here first.
As mentioned by the OP pabera, nginx and mysql must be started for the other components of GitLab (redis, unicorn, and now sidekiq) to run properly.
The official /etc/init.d/gitlab is here.
I have my own version of gitlabd (here), because I manage sidekiq in my own script, and I don't need to run the script as root.
You can see the run order for all the services in this script (a command-level sketch follows the list):
ssh
Apache and/or NGiNX
mysql
redis
GitLab (which will start unicorn and sidekiq)
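As a sketch, that order translates to something like the following on an Ubuntu box; the exact service names vary by installation, so treat them as assumptions:
sudo service ssh start
sudo service nginx start          # and/or apache2
sudo service mysql start
sudo service redis-server start
sudo /etc/init.d/gitlab start     # starts unicorn and sidekiq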
Kind of a poke in the dark...
In the GitLab installation.md README it states:
"
Start your GitLab instance:
sudo service gitlab start
# or
sudo /etc/init.d/gitlab restart
"
I did the first AND the second and got this exact error. However, I skipped past the "or" block and continued with the Nginx commands, and it seems to work.
Hope this helps!

Can I use node-inspector with meteor?

Can I use node-inspector with Meteor?
I tried the "--debug" option and succeeded in connecting to the debug port.
But I cannot access my code.
exec "$DEV_BUNDLE/bin/node" "--debug" "$METEOR" "$@"
You might have encountered the same problem I did:
On a Linux machine, the Meteor script spawns two processes:
Process 1: node "meteor files"
Process 2: node "your meteor files"
When you run exec "$DEV_BUNDLE/bin/node" "--debug" "$METEOR" "$@", it spawns process 1 in debug mode, but process 2 still runs in normal mode. This is why you cannot see your files.
I just run the regular meteor script and send kill -s USR1 to process 2; then you can see your files in node-inspector.
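A rough shell sketch of that workaround; the grep pattern is an assumption and the pid placeholder needs to be filled in by hand:
# find the second node process (the one running your app's files)
ps aux | grep '[n]ode'

# tell it to enable its debugger listener without restarting
kill -s USR1 <pid-of-process-2>

# then start node-inspector and open the URL it prints in a browser
node-inspector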
