Ansible: updated router hostname not picked up in ansible_net_hostname - networking

I tried to run the "first extended playbook" from the Ansible documentation site https://docs.ansible.com/ansible/latest/network/getting_started/first_playbook.html.
The playbook does the following (a minimal sketch is shown after this list):
Gather facts from a VyOS router
Debug to print ansible_net_hostname
Change the host name
Gather facts again.
Debug to print the updated host name
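Here is that sketch; the group name, connection settings, and the new hostname value are placeholders rather than a verbatim copy of the documentation example:
---
- name: facts demo
  hosts: vyos                        # placeholder inventory group
  connection: network_cli
  gather_facts: no
  tasks:
    - name: gather facts (first pass)
      vyos_facts:

    - name: print the current hostname
      debug:
        msg: "{{ ansible_net_hostname }}"

    - name: change the hostname      # placeholder new name
      vyos_config:
        lines:
          - set system host-name vyos-changed

    - name: gather facts again
      vyos_facts:

    - name: print the (expected) updated hostname
      debug:
        msg: "{{ ansible_net_hostname }}"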
After I updated the host name, the ansible_net_hostname variable was unchanged. Upon investigation I found the following:
The hostname was changed (verified by logging into the router)
I got the same results with Cisco IOS.
If I run the vyos_facts in a second playbook I see the updated host name
If I run setup before I gather the facts the second time, I still get the original name
Same if I run meta: clear_facts
If I run clear_facts but do not gather facts a second time, the fact is cleared and I get an error on the second debug msg, as expected.
If I run pause for 10 seconds after the update, I still see the old value
So why does the second invocation of vyos_facts still return the original value?
Environment:
Docker container under Ubuntu 20.04
Ansible 2.9.6
Python 3.8.5

Related

Euca 5.0 Ansible Console Task Failing

Background:
I am only able to get past the Ansible console install/config tasks by adding --region localhost in /usr/share/eucalyptus-ansible/roles/cloud-post/tasks/console.yml wherever it calls tools that accept that argument.
Otherwise each sub task fails like this: ["euca-describe-images: error: connection error (('Connection aborted.', gaierror(-2, 'Name or service not known')))"]
Running the commands from that playbook directly on the euca server being configured gives the same result unless I specify --region localhost
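For illustration, a patched task in console.yml might look roughly like this; the task name and exact command are assumptions, the only point being the added --region localhost flag:
- name: describe registered images          # hypothetical task name
  command: euca-describe-images --region localhost
  register: console_images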
Problem:
I'm stuck here: [cloud-post : update console route53 system domain for eucalyptus-cloud authentication]
Error: "euform-update-stack: error (ValidationError): No updates are to be performed.", "stderr_lines": ["euform-update-stack: error (ValidationError): No updates are to be performed."]
All services are running except the ImagingBackend, which is Not Ready
No instances are running according to euca-describe-instances
Images are available:
IMAGE ami-5be483c81cf8bd65c eucalyptus-console-image-5-0-823/eucalyptus-console-image-5-0-823.raw.manifest.xml 000216594841 available private x86_64 machine instance-store hvm
TAG image ami-5be483c81cf8bd65c type eucalyptus-console-image
TAG image ami-5be483c81cf8bd65c version 5.0.823
IMAGE ami-f31092ddb73e29af9 eucalyptus-service-image-v5.0.100/eucalyptus-service-image.raw.manifest.xml 000216594841 available private x86_64 machine instance-store hvm
TAG image ami-f31092ddb73e29af9 provides imaging,loadbalancing
TAG image ami-f31092ddb73e29af9 type eucalyptus-service-image
TAG image ami-f31092ddb73e29af9 version 5.0.100
---
all:
  hosts:
    exp-euca.lan.com:
    exp-enc-[01:02].lan.com:
  vars:
    vpcmido_public_ip_range: "192.168.100.5-192.168.100.254"
    vpcmido_public_ip_cidr: "192.168.100.1/24"
    cloud_system_dns_dnsdomain: "cloud.lan.com"
    cloud_public_port: 443
    eucalyptus_console_cloud_deploy: yes
    cloud_service_image_rpm: no
    cloud_properties:
      services.imaging.worker.ntp_server: "x.x.x.x"
      services.loadbalancing.worker.ntp_server: "x.x.x.x"
  children:
    cloud:
      hosts:
        exp-euca.lan.com:
    console:
      hosts:
        exp-euca.lan.com:
    node:
      hosts:
        exp-enc-[01:02].lan.com:
EDIT:
Solved. Details are in the comments of the marked answer.
The name error most likely means that DNS for the domain cloud.lan.com is not being correctly delegated to your deployment. To test this, check if the nameserver is found:
dig +short NS cloud.lan.com
You should see "ns1.cloud.lan.com", and should then be able to use that nameserver to resolve services, e.g.
dig +short ec2.cloud.lan.com @ns1.cloud.lan.com
which should be the IP of the host for the compute service.
The second item is a bug in the ansible playbook that occurs when the stack is already present and up to date. To work around it, you can either update your playbook or delete the stack before running the playbook. Depending on how far the playbook progressed you may have a script to do this:
/usr/local/bin/console-manage-stack -a delete
The related playbook change is https://github.com/AppScale/ats-deploy/pull/36

Error: Could not find pg_ctl executable for version 11 (PostgreSQL 11) + let's encrypt [closed]

I have a VPS with a domain pointing to it.
I have a LAMP stack for my main website, which uses the WordPress CMS.
I am also using Odoo as my back end, with Python and PostgreSQL, on a sub-domain.
Everything was working fine until I installed Certbot (Let's Encrypt) to obtain an SSL certificate by following these tutorials.
For my WordPress site I installed this plugin:
WP Encryption – One Click single / wildcard Free SSL certificate & force HTTPS
This got me into a redirect loop because it forced HTTPS; I will explain this later on.
When the plugin didn't work, I searched for another way to do it for the whole VPS, using these tutorials:
How To Secure Apache with Let's Encrypt on Ubuntu 16.04
How To Secure Apache with Let's Encrypt on Ubuntu 18.04
After completing the second tutorial (Ubuntu 18.04), I noticed that all my domain traffic was going to HTTPS and got stuck in a loop, with the same error as above:
"ERR_TOO_MANY_REDIRECTS which means Site redirected too many times"
I couldn't access the WordPress front end on the domain.
Then, when I applied
"Step 3 — Allowing HTTPS Through the Firewall"
my internet connection got interrupted, and when I got back to the SSH session I found myself locked out of the server with no way to get back in.
When I tried to use the sub-domain that has Odoo on it, I got the same error:
"ERR_TOO_MANY_REDIRECTS which means Site redirected too many times"
At this point I was hopeless and didn't know what to do.
I contacted my VPS provider and told him exactly what happened. Somehow he managed to get me into the server again via a URL to a web terminal (I still couldn't access the server using SSH clients like PuTTY). The first thing I noticed after getting in through that URL was that he had rebooted the VPS; I will get to this in a second.
The first thing I did was remove the WordPress plugin "WP Encryption" and update the WordPress site URL in the wp_options table in the MySQL database; the plugin had changed it from http to https, so I changed it back, and that solved the ERR_TOO_MANY_REDIRECTS for my WordPress website.
The second thing I did was disable the ufw firewall that I had enabled in Step 3 of the tutorial above.
I instantly got my SSH connection back using PuTTY, but I then noticed that the PostgreSQL service was inactive; it had gone down with the reboot of the VPS. I tried to start the service, but it gave me this error:
Failed to start postgresql.service: Unit postgresql.service is masked.
I searched for a solution and found these commands to unmask it:
sudo systemctl unmask postgresql
sudo systemctl enable postgresql
sudo systemctl restart postgresql
The service then started and everything seemed OK when I ran the status command
service postgresql status
The response is:
● postgresql.service - LSB: PostgreSQL RDBMS server
Loaded: loaded (/etc/init.d/postgresql; generated)
Active: active (exited) since Thu 2020-03-26 05:54:09 UTC; 2h 22min ago
Docs: man:systemd-sysv-generator(8)
Tasks: 0 (limit: 2286)
Memory: 0B
CGroup: /system.slice/postgresql.service
But when I try to connect to Postgres on the default port from Odoo, it says:
could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"
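A quick way to see whether the cluster itself is up is pg_lsclusters (from postgresql-common), which lists each cluster with its version, port, and status:
pg_lsclusters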
After many searches I found that the Postgres main cluster was also inactive/down. I tried to start it with this command:
pg_ctlcluster 11 main start
but I get this error:
Job for postgresql@11-main.service failed because the service did not take the steps required by its unit configuration. See "systemctl status postgresql@11-main.service" and "journalctl -xe" for details.
When I run the command as requested:
systemctl status postgresql@11-main.service
I get this error:
● postgresql@11-main.service - PostgreSQL Cluster 11-main
   Loaded: loaded (/lib/systemd/system/postgresql@.service; disabled; vendor preset: enabled)
   Active: failed (Result: protocol) since Thu 2020-03-26 15:22:15 UTC; 14s ago
  Process: 18930 ExecStart=/usr/bin/pg_ctlcluster --skip-systemctl-redirect 11-main start (code=exited, status=1/FAILURE)
along with:
systemd[1]: Starting PostgreSQL Cluster 11-main...
postgresql@11-main[18930]: Error: Could not find pg_ctl executable for version 11
systemd[1]: postgresql@11-main.service: Can't open PID file /run/postgresql/11-main.pid (yet?) after start: No such file or
systemd[1]: postgresql@11-main.service: Failed with result 'protocol'.
systemd[1]: Failed to start PostgreSQL Cluster 11-main.
I guessed Let's Encrypt had added an SSL configuration to pg_hba.conf and postgresql.conf like it did with Apache, so I searched for them, commented out the "ssl on" lines, and restarted the Postgres service along with the main cluster, but nothing changed; still the same error:
Error: Could not find pg_ctl executable for version 11
I know I shouldn't run pg_ctl directly under Ubuntu/Debian; I must use pg_ctlcluster instead, which is installed by postgresql-common. I saw the man page documentation. But when I run "sudo pg_ctlcluster 11 main reload", I always get the above error telling me that it could not find the pg_ctl executable.
I have searched a lot for this problem but nothing worked. How can I fix the missing pg_ctl executable for version 11?
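A quick way to check whether the pg_ctl binary is actually present on disk, and which PostgreSQL packages are installed, is:
ls -l /usr/lib/postgresql/11/bin/pg_ctl
dpkg -l | grep postgresql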
PS:
I am using Ubuntu 19.10 (GNU/Linux 5.3.0-24-generic x86_64)
Odoo 11 with Postgres 11 as the database; Odoo can't connect to Postgres, as I mentioned before.
Edit:
Unfortunately I can't restore or recover the server to fix the Postgres packages, because my last backup of the server was on 19/3 and today is 26/3, and I have important data from that period.
Update 27/3/2020 4:06 AM
I compared my last server backup with the production server and found a lot of Postgres files missing, for example under /usr/lib/postgresql/11/ and /etc/postgresql/11/. I think Postgres somehow got damaged and lost some files in the reboot of the server. However, I did find the data files of the database in /var/lib/postgresql/11/. Can I read them on my backup server? I will try and let you know.
So, finally, after hours of digging:
All the PostgreSQL program files were damaged or missing, and I lost hope of repairing them. I don't know what caused this, but it is related to the accidental reboot of the server.
I managed to find the main cluster data directory with my important database information for the production server at this path:
/var/lib/postgresql/11/
I took a backup of it by zipping the whole folder with this command:
zip -r main.zip main/
Then I did a full purge and reinstall of Postgres using these commands (from here):
apt-get --purge remove postgresql\*
This removes everything PostgreSQL from your system. Just purging the postgresql package isn't enough, since it's just an empty meta-package.
Once all PostgreSQL packages have been removed, run:
rm -r /etc/postgresql/
rm -r /etc/postgresql-common/
rm -r /var/lib/postgresql/
userdel -r postgres
groupdel postgres
Then I installed Postgres with this command, to match Odoo 11:
sudo apt-get install postgresql libpq-dev -y
Then create the Odoo PostgreSQL user:
sudo su - postgres -c "createuser -s odoo" 2> /dev/null || true
Now everything is okay and Odoo should work fine, but you still don't have any databases.
To bring back the backup of the cluster folder we took earlier, we need to move the zip file to the same directory we took it from, which is
/var/lib/postgresql/11/
But before that, stop the Postgres service:
sudo systemctl stop postgresql
and make sure it has stopped
sudo systemctl status postgresql
After that, rename the main cluster directory that Postgres is using right now; it is empty and we don't need it, because we are replacing it with our backed-up cluster:
mv /var/lib/postgresql/11/main /var/lib/postgresql/11/main_old
Then move the zip file from wherever you backed it up to the Postgres cluster folder:
mv /backups/main.zip /var/lib/postgresql/11/
Unzip the archive in the same path:
unzip -a /var/lib/postgresql/11/main.zip
After unzipping, give ownership of the extracted main folder to the postgres user and group:
chown -R postgres:postgres main
Then you are good to go. Start the Postgres service:
sudo systemctl start postgresql
sudo systemctl status postgresql
and make sure you also start the main cluster:
pg_ctlcluster 11 main start
If you stopped Odoo, make sure to start it as well:
service odoo-server start
PS: I solved ERR_TOO_MANY_REDIRECTS for the Odoo sub-domain by commenting out the SSL configuration that Let's Encrypt had added to my Odoo Apache2 virtual host, and everything got back to where it was before installing Let's Encrypt.
I will leave it here and won't use SSL in production again until I figure out how to use it on a test server. Thanks for your time; I hope my question and answer help someone in the future.
Try adding 'pg_path' in your odoo configuration file.
Like: pg_path = /path/to/postgresql/binaries
Generally, '/usr/lib/postgresql/11/bin' is the binary directory.
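For example, a minimal sketch of the relevant part of an Odoo configuration file, assuming the default Debian/Ubuntu binary location and a typical /etc/odoo/odoo.conf path (adjust both to your install):
[options]
; hypothetical excerpt; only pg_path is the point here
db_host = False
db_port = False
db_user = odoo
pg_path = /usr/lib/postgresql/11/bin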

Proxy authentication using wget on cygwin

My institute recently installed a new proxy server for our network. I am trying to configure my Cygwin environment to be able to run wget and download data from a remote repository.
Browsing the internet I found two different solutions to my problem, but neither of them seems to work in my case.
The first one I tried was to follow these instructions, so in Cygwin:
cd /cygdrive/c/cygwin64/etc/
nano wgetrc
at the end of the file, I added:
use_proxy = on
http_proxy=http://username:password@my.proxy.ip:my.port/
https_proxy=https://username:password@my.proxy.ip:my.port/
ftp_proxy=http://username:password@my.proxy.ip:my.port/
(of course, using my user and password)
The second approach was what was suggested by this SO post, so in my Cygwin environment:
export http_proxy=http://username:password@my.proxy.ip:my.port/
export https_proxy=https://username:password@my.proxy.ip:my.port/
export ftp_proxy=http://username:password@my.proxy.ip:my.port/
In both cases, when I test wget, I get the following:
$ wget http://www.google.com
--2020-01-30 12:12:22-- http://www.google.com/
Resolving my.proxy.ip (my.proxy.ip)... 10.1XX.XXX.XX
Connecting to my.proxy.ip (my.proxy.ip)|10.1XX.XXX.XX|:8XXX... connected.
Proxy request sent, awaiting response... 407 Proxy Authentication Required
2020-01-30 12:12:22 ERROR 407: Proxy Authentication Required.
It looks as if my user and password are wrong, but I checked them in my browser and my credentials work just fine.
Any idea on what this could be due to?
This problem was solved thanks to a suggestion from a user of the AskUbuntu community.
Basically, instead of editing the global configuration file wgetrc, I should have created a new .wgetrc with my proxy configuration in my Cygwin home directory.
In summary:
Step 1 - Create a .wgetrc file:
nano ~/.wgetrc
Step 2 - Record the proxy info in this file:
use_proxy=on
http_proxy=http://my.proxy.ip:my.port
https_proxy=https://my.proxy.ip:my.port
ftp_proxy=http://my.proxy.ip:my.port
proxy_user=username
proxy_password=password
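With the per-user file in place, a quick check that the proxy settings are picked up (any reachable URL works; --spider just tests the connection without downloading anything):
wget -q --spider http://www.google.com && echo "proxy OK"
If you prefer not to store the password in a file, wget also accepts --proxy-user and --proxy-password on the command line.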

Error: Port 5000 is not open, could not start functions emulator

✔ Deploy complete!
Project Console: https://console.firebase.google.com/project/socialape-6b2f7/overview
Ayhan-MacBookPro:socialape-functions macbook$ firebase serve
=== Serving from '/Users/macbook/Desktop/socialape-functions'...
Error: Port 5000 is not open, could not start functions emulator.
Run lsof -t -i tcp:5000 | xargs kill from your Terminal.
A common cause of this error is the Firebase emulator not being shut down cleanly (e.g., closing an IDE that is running the emulator in an embedded terminal session). This leaves the process running in the background, occupying the emulator's default port.
To resolve the conflict, find the process ID running on the port (here 5000) from your Terminal command line and then kill it.
The above one-liner finds the process ID and pipes it directly to kill (h/t @manav).
For additional info, check out: Find (and kill) process locking port 3000 on Mac
The bug does not seem to be on your end.
It is caused by a bug in a dependency (node portfinder).
A quick fix might be to use the old version of node portfinder (v1.0.21). Alternatively, you can edit node_modules/firebase-tools/lib/emulator/controller.js and change yield pf.getPortPromise({ port, stopPort: port }) to yield pf.getPortPromise({ port, stopPort: port + 1 }).
You can see the complete answer to your question in this SO link.
If you are facing this issue on macOS, this solution is for you.
In macOS, port 5000 may be claimed by the new "AirPlay Receiver" service.
This can be disabled in Settings -> Sharing:
Disabling the AirPlay Receiver (if you do not need it) frees up port 5000.

Camera (/dev/video0) dependencies in systemd service Ubuntu 16.04

I need to run some services at boot, which I have successfully accomplished using systemd services (lots of answers are already available for that).
Now, one of my services requires access to /dev/video0 at boot, once a certain user is logged in (I am doing auto-login, which works fine).
So how do I check whether /dev/video0 is available before starting my systemd service at boot?
I came across something called udev for doing this and followed this link,
but I am not getting the desired result: after editing the /lib/udev/rules.d/99-systemd.rules file as mentioned in the link and starting my service manually, it does not start. Any help is appreciated.
Finally, after struggling for a day, I found the answer.
I made a unit file in /etc/systemd/system which contains:
[Unit]
Description='some description of my file write according to you'
[Service]
Type=forking
ExecStart='path to script'
[Install]
WantedBy=multi-user.target
and it executes a script which contains:
#!/bin/bash
modprobe uvcvideo
Now, after rebooting, all the services run properly.
The modprobe uvcvideo command loads the video driver at boot so that the camera device is available for my systemd service.
Thanks
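On a side note, the same goal can also be reached without a helper script: once the camera device carries the systemd tag via a udev rule (which is what the linked 99-systemd.rules approach does), systemd exposes it as dev-video0.device, and the service can declare a dependency on it directly. A rough sketch with placeholder names and paths:
# /etc/udev/rules.d/99-camera.rules (hypothetical file)
SUBSYSTEM=="video4linux", KERNEL=="video0", TAG+="systemd"

# /etc/systemd/system/camera-app.service (placeholder unit)
[Unit]
Description=Camera consumer service
Requires=dev-video0.device
After=dev-video0.device

[Service]
# placeholder path to the program that needs the camera
ExecStart=/usr/local/bin/camera-app

[Install]
WantedBy=multi-user.target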

Resources