Cannot attach letsencrypt to dokku - dokku

I get the following error when running dokku letsencrypt on my app (I'm on Dokku 0.7.0):
root@resend:~# dokku letsencrypt production
=====> Let's Encrypt production...
-----> Updating letsencrypt docker image...
latest: Pulling from dokkupaas/letsencrypt-simp_le
420890c9e918: Already exists
e4a2ae244258: Already exists
5c6ac6d1c950: Already exists
Digest: sha256:18a19b34beceba79dd5be458abe7e132fc7486da1da19cc4d0395ad4578031ef
Status: Image is up to date for dokkupaas/letsencrypt-simp_le:latest
done
-----> Enabling ACME proxy for production...
-----> Getting letsencrypt certificate for production...
- Domain 'production.resend.io'
- Domain 'resend.io'
- Domain 'ws.resend.io'
- Domain 'www.resend.io'
darkhttpd/1.11, copyright (c) 2003-2015 Emil Mikulic.
listening on: http://0.0.0.0:80/
2016-08-01 22:34:32,324:INFO:__main__:1211: Generating new account key
2016-08-01 22:34:38,247:INFO:requests.packages.urllib3.connectionpool:758: Starting new HTTPS connection (1): acme-staging.api.letsencrypt.org
2016-08-01 22:34:38,593:INFO:requests.packages.urllib3.connectionpool:758: Starting new HTTPS connection (1): acme-staging.api.letsencrypt.org
2016-08-01 22:34:38,754:INFO:requests.packages.urllib3.connectionpool:758: Starting new HTTPS connection (1): acme-staging.api.letsencrypt.org
2016-08-01 22:34:39,294:INFO:requests.packages.urllib3.connectionpool:758: Starting new HTTPS connection (1): letsencrypt.org
TOS hash mismatch. Found: 6373439b9f29d67a5cd4d18cbc7f264809342dbf21cb2ba2fc7588df987a6221.
Debugging tips: -v improves output verbosity. Help is available under --help.
-----> Certificate retrieval failed!
-----> Disabling ACME proxy for production...
done

Looks like this is temporary while some dependencies update. There is a workaround documented here:
https://github.com/dokku/dokku-letsencrypt/issues/73

For me, the fix was just to update the letsencrypt plugin on Dokku by running
dokku plugin:update letsencrypt master
and then re-running dokku letsencrypt:auto-renew.
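Putting it together, the recovery sequence looks roughly like this (a sketch; plugin commands typically need root, and the app name production is taken from the log above):
```
sudo dokku plugin:update letsencrypt master   # pull the fixed plugin from master
dokku letsencrypt production                  # retry certificate issuance
dokku letsencrypt:auto-renew production       # re-run auto-renew for the app
```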

Related

Why can't an Azure IoT Edge device run a module image pulled from Azure Container Registry? (IOTEDGE_WORKLOADURI is required)

Setup: Azure IoT Edge running on a Raspberry Pi 4 (Linux arm32v7).
IoT Edge version: iotedge 1.4.3
I signed in to the Azure Container Registry from the edge device, built and pushed a custom module to the registry, then pulled that module image from the registry and tried to run it using the docker run <image> command.
But it shows an error:
Unhandled exception. System.InvalidOperationException: Environment variable IOTEDGE_WORKLOADURI is required.
at Microsoft.Azure.Devices.Client.Edge.EdgeModuleClientFactory.CreateInternalClientFromEnvironmentAsync()
at SampleModuletest.ModuleBackgroundService.ExecuteAsync(CancellationToken cancellationToken) in /app/ModuleBackgroundService.cs:line 23
at Microsoft.Extensions.Hosting.Internal.Host.StartAsync(CancellationToken cancellationToken)
at Microsoft.Extensions.Hosting.HostingAbstractionsHostExtensions.RunAsync(IHost host, CancellationToken token)
at Microsoft.Extensions.Hosting.HostingAbstractionsHostExtensions.RunAsync(IHost host, CancellationToken token)
at Microsoft.Extensions.Hosting.HostingAbstractionsHostExtensions.Run(IHost host)
at Program.<Main>$(String[] args) in /app/Program.cs:line 7
I found a post (link), but I'm guessing it's not the same problem; the output is different.
I also have some questions that I'd be grateful if anyone could clear up:
1. What are the possible methods to deploy Azure IoT Edge modules?
2. Is it possible to deploy a module from an edge device using a module image pulled from a container registry?
Thanks in advance. Any suggestions would be appreciated.
Currently running edge modules:
NAME        STATUS    DESCRIPTION   CONFIG
edgeAgent   running   Up an hour    mcr.microsoft.com/azureiotedge-agent:1.4
edgeHub     running   Up an hour    mcr.microsoft.com/azureiotedge-hub:1.4
Docker images: (screenshot)
iotedge check:
```
Configuration checks (aziot-identity-service)
---------------------------------------------
√ keyd configuration is well-formed - OK
√ certd configuration is well-formed - OK
√ tpmd configuration is well-formed - OK
√ identityd configuration is well-formed - OK
√ daemon configurations up-to-date with config.toml - OK
√ identityd config toml file specifies a valid hostname - OK
√ aziot-identity-service package is up-to-date - OK
√ host time is close to reference time - OK
√ preloaded certificates are valid - OK
√ keyd is running - OK
√ certd is running - OK
√ identityd is running - OK
√ read all preloaded certificates from the Certificates Service - OK
√ read all preloaded key pairs from the Keys Service - OK
√ check all EST server URLs utilize HTTPS - OK
√ ensure all preloaded certificates match preloaded private keys with the same ID - OK
Connectivity checks (aziot-identity-service)
--------------------------------------------
√ host can connect to and perform TLS handshake with iothub AMQP port - OK
√ host can connect to and perform TLS handshake with iothub HTTPS / WebSockets port - OK
√ host can connect to and perform TLS handshake with iothub MQTT port - OK
Configuration checks
--------------------
√ aziot-edged configuration is well-formed - OK
√ configuration up-to-date with config.toml - OK
√ container engine is installed and functional - OK
√ configuration has correct URIs for daemon mgmt endpoint - OK
√ aziot-edge package is up-to-date - OK
√ container time is close to host time - OK
√ DNS server - OK
‼ production readiness: logs policy - Warning
Container engine is not configured to rotate module logs which may cause it run out of disk space.
Please see https://aka.ms/iotedge-prod-checklist-logs for best practices.
You can ignore this warning if you are setting log policy per module in the Edge deployment.
‼ production readiness: Edge Agent's storage directory is persisted on the host filesystem - Warning
The edgeAgent module is not configured to persist its /tmp/edgeAgent directory on the host filesystem.
Data might be lost if the module is deleted or updated.
Please see https://aka.ms/iotedge-storage-host for best practices.
‼ production readiness: Edge Hub's storage directory is persisted on the host filesystem - Warning
The edgeHub module is not configured to persist its /tmp/edgeHub directory on the host filesystem.
Data might be lost if the module is deleted or updated.
Please see https://aka.ms/iotedge-storage-host for best practices.
√ Agent image is valid and can be pulled from upstream - OK
√ proxy settings are consistent in aziot-edged, aziot-identityd, moby daemon and config.toml - OK
Connectivity checks
-------------------
√ container on the default network can connect to upstream AMQP port - OK
√ container on the default network can connect to upstream HTTPS / WebSockets port - OK
√ container on the IoT Edge module network can connect to upstream AMQP port - OK
√ container on the IoT Edge module network can connect to upstream HTTPS / WebSockets port - OK
32 check(s) succeeded.
3 check(s) raised warnings. Re-run with --verbose for more details.
2 check(s) were skipped due to errors from other checks. Re-run with --verbose for more details.
```
When you execute the docker run <image> command, it will attempt to spin up your module with no additional configuration. However, you're using the Azure IoT Edge SDK, which requires additional settings. One of those is the IOTEDGE_WORKLOADURI environment variable.
To answer your questions directly:
What are the possible methods to deploy Azure IoT Edge modules?
There's one way of doing this on an Azure IoT Edge device: by creating a deployment manifest in your IoT Hub. That deployment manifest tells the Azure IoT Edge runtime on your device to pull the correct containers and set them up. You can learn how to do that here.
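To make that concrete, here's a trimmed sketch of a deployment manifest; the registry name, credentials, module name, and image URI below are placeholders, not values from your setup:
```
{
  "modulesContent": {
    "$edgeAgent": {
      "properties.desired": {
        "schemaVersion": "1.1",
        "runtime": {
          "type": "docker",
          "settings": {
            "registryCredentials": {
              "myregistry": {
                "address": "myregistry.azurecr.io",
                "username": "<acr-username>",
                "password": "<acr-password>"
              }
            }
          }
        },
        "systemModules": {
          "edgeAgent": {
            "type": "docker",
            "settings": { "image": "mcr.microsoft.com/azureiotedge-agent:1.4" }
          },
          "edgeHub": {
            "type": "docker",
            "status": "running",
            "restartPolicy": "always",
            "settings": { "image": "mcr.microsoft.com/azureiotedge-hub:1.4" }
          }
        },
        "modules": {
          "SampleModuletest": {
            "type": "docker",
            "status": "running",
            "restartPolicy": "always",
            "settings": { "image": "myregistry.azurecr.io/samplemoduletest:0.0.1" }
          }
        }
      }
    },
    "$edgeHub": {
      "properties.desired": {
        "schemaVersion": "1.1",
        "routes": { "upstream": "FROM /messages/* INTO $upstream" },
        "storeAndForwardConfiguration": { "timeToLiveSecs": 7200 }
      }
    }
  }
}
```
When the edgeAgent on the device receives this manifest (via the portal or az iot edge set-modules), it pulls the image and injects IOTEDGE_WORKLOADURI and the other environment variables the SDK expects, which is exactly what a bare docker run does not do.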
Is it possible to deploy a module from an edge device using a pulled module image from container registry?
I'm going to assume you mean on an edge device, not from. You can execute a docker pull command to get the container, but deploying it really needs to happen with the aforementioned deployment manifest.

SSL: Certbot + AWS Lightsail + LetsEncrypt + Really Simple SSL Plugin

Scenario:
Current server @ example.com is running an older Amazon AWS Lightsail image with WordPress (Ubuntu), and we just had a new certificate issued using Let's Encrypt. All is well. The original cert was requested as a wildcard, so it works for any subdomain.
Now we needed to spin up a fresh new server for a subdomain; let's call it development.example.com.
The new AWS Lightsail instances are now Debian rather than Ubuntu!
The idea was to install certbot on the new Debian instance and then copy over the certificate files from the primary server @ example.com.
I've done this successfully in the past when going from Ubuntu to Ubuntu, but now that the new instance is Debian, the Really Simple SSL plugin does not recognize that a certificate is installed.
STEPS I took to move the certificate files:
What I've done before is simply to copy /etc/letsencrypt/* from one server to another and then follow the steps outlined in the AWS documentation here:
https://lightsail.aws.amazon.com/ls/docs/en_us/articles/amazon-lightsail-using-lets-encrypt-certificates-with-wordpress#complete-the-prerequisites-lets-encrypt-wordpress
In this case, I performed steps 7.4, 7.5, and 7.6, and section 8.
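(For reference, the copy itself was essentially the sketch below; host and user names are placeholders. Using tar preserves the live/ and archive/ symlink structure under /etc/letsencrypt that a plain file copy can break.)
```
# On the primary (Ubuntu) server: bundle the Let's Encrypt tree.
sudo tar czf /tmp/letsencrypt.tar.gz -C /etc letsencrypt
scp /tmp/letsencrypt.tar.gz admin@development.example.com:/tmp/

# On the new (Debian) server: unpack into /etc, keeping permissions and symlinks.
sudo tar xzf /tmp/letsencrypt.tar.gz -C /etc
```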
However, steps described in section 8.1 do not appear valid in this document anymore for Debian, because there is no such location on Debian:
sudo chmod 666 /opt/bitnami/apps/wordpress/htdocs/wp-config.php
AND because it seems an .htaccess does not exist either.
sudo chmod 666 /opt/bitnami/apps/wordpress/conf/htaccess.conf
Are there additional steps now which I've missed to be able to copy the necessary files for SSL to work properly on this new subdomain server now running Debian?
I was going to go through a new certificate request in the development server but wouldn't that invalidate the certificate currently installed for the primary domain?
In other words, how to properly copy the SSL files from the main Ubuntu server and configure the Debian subdomain server so that both wordpress installations have SSL correctly installed?
Thank you @mikemoy; indeed, one can issue multiple wildcard certificates for the same domain from different servers. I just went ahead and issued a new certificate.
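For anyone following along: issuing a fresh certificate on the development server does not invalidate the one on the primary server; Let's Encrypt allows multiple concurrent certificates covering the same names (subject to rate limits). A sketch of the issuance, assuming the DNS-01 challenge that wildcard certs require:
```
# On the new Debian instance; wildcard issuance requires a DNS challenge.
sudo certbot certonly --manual --preferred-challenges dns \
  -d "example.com" -d "*.example.com"
```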

Error: Could not find pg_ctl executable for version 11 (PostgreSQL 11) + let's encrypt [closed]

I have a VPS with a domain pointing to it.
I have a LAMP stack for my main website, which uses the WordPress CMS.
I am also using Odoo (Python with PostgreSQL) as my back-end, on a subdomain.
Everything was working fine until I installed Certbot (Let's Encrypt) to obtain an SSL certificate, following these tutorials.
For my WordPress site I installed this plugin:
WP Encryption – One Click single / wildcard Free SSL certificate & force HTTPS
This got me into a redirect loop, because it forced HTTPS; I will explain that below.
When the plugin didn't work, I searched for another way to cover the whole VPS, using these tutorials:
How To Secure Apache with Let's Encrypt on Ubuntu 16.04
How To Secure Apache with Let's Encrypt on Ubuntu 18.04
After completing the second tutorial (Ubuntu 18.04), I noticed that all my domain traffic was going to HTTPS and got stuck in a loop with the same error as above,
"ERR_TOO_MANY_REDIRECTS, which means the site redirected too many times",
and I couldn't access the WordPress front-end on the domain.
Then, when I applied
"Step 3 — Allowing HTTPS Through the Firewall",
my internet connection got interrupted, and when I got back to the SSH session I found myself locked out of the server with no way back in.
When I tried to use the subdomain that has Odoo on it, I got the same error:
"ERR_TOO_MANY_REDIRECTS, which means the site redirected too many times"
At this point I was hopeless and didn't know what to do.
I contacted my VPS provider and told him exactly what had happened. He somehow managed to get me into the server again via a URL to a web terminal (I still couldn't access the server using SSH clients like PuTTY). The first thing I noticed after he provided the URL is that he had rebooted the VPS; I will get to this in a second.
So the first thing I did was remove the WordPress plugin "WP Encryption" and update the WordPress site_url in the wp_options table of the MySQL database, because the plugin had changed it from http to https; changing it back solved ERR_TOO_MANY_REDIRECTS for my WordPress website.
The second thing I did was disable the ufw firewall that I had enabled in Step 3 of the tutorial above.
I instantly got my connection to the server back using the SSH client PuTTY, but I then noticed that the postgres service was inactive; it had gone down with the reboot of the VPS. I tried to start the service, but it gave me this error:
Failed to start postgresql.service: Unit postgresql.service is masked.
I searched for a solution and found these commands to unmask the service:
sudo systemctl unmask postgresql
sudo systemctl enable postgresql
sudo systemctl restart postgresql
The service then started, and everything seems OK when I run the status command
service postgresql status
the response is
● postgresql.service - LSB: PostgreSQL RDBMS server
Loaded: loaded (/etc/init.d/postgresql; generated)
Active: active (exited) since Thu 2020-03-26 05:54:09 UTC; 2h 22min ago
Docs: man:systemd-sysv-generator(8)
Tasks: 0 (limit: 2286)
Memory: 0B
CGroup: /system.slice/postgresql.service
But when I try to connect to Postgres through the default port with Odoo, it says:
could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"
After much searching, I found that the Postgres main cluster is also inactive/down. I tried to start it with this command:
pg_ctlcluster 11 main start
but I get this error:
Job for postgresql@11-main.service failed because the service did not take the steps required by its unit configuration. See "systemctl status postgresql@11-main.service" and "journalctl -xe" for details.
When I run the command as requested,
systemctl status postgresql@11-main.service
I get this error:
● postgresql@11-main.service - PostgreSQL Cluster 11-main
   Loaded: loaded (/lib/systemd/system/postgresql@.service; disabled; vendor preset: enabled)
   Active: failed (Result: protocol) since Thu 2020-03-26 15:22:15 UTC; 14s ago
  Process: 18930 ExecStart=/usr/bin/pg_ctlcluster --skip-systemctl-redirect 11-main start (code=exited, status=1/FAILURE)
along with
systemd[1]: Starting PostgreSQL Cluster 11-main...
postgresql@11-main[18930]: Error: Could not find pg_ctl executable for version 11
systemd[1]: postgresql@11-main.service: Can't open PID file /run/postgresql/11-main.pid (yet?) after start: No such file or directory
systemd[1]: postgresql@11-main.service: Failed with result 'protocol'.
systemd[1]: Failed to start PostgreSQL Cluster 11-main.
I guessed Let's Encrypt had added an SSL configuration to pg_hba.conf and postgresql.conf, like it did with Apache, so I searched for them, commented out the "ssl on" lines, and restarted the postgres service along with the main cluster; but nothing changed, still the same error:
Error: Could not find pg_ctl executable for version 11
I know I shouldn't run pg_ctl directly under Ubuntu/Debian; I must use pg_ctlcluster instead, which is installed by postgresql-common. I read the man page documentation. But when I run the command "sudo pg_ctlcluster 11 main reload", I always get the above error telling me it could not find the pg_ctl executable.
I have searched a lot for this problem, but nothing worked. How can I fix the missing pg_ctl executable for version 11?
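A quick way to see where pg_ctlcluster is looking (a sketch; on Debian/Ubuntu the per-version server binaries live under /usr/lib/postgresql/11/bin):
```
# pg_ctlcluster expects the version-specific binaries here:
ls -l /usr/lib/postgresql/11/bin/pg_ctl

# If the file is missing, the postgresql-11 package is damaged; reinstalling
# just that package restores the binaries without touching the data directory:
sudo apt-get install --reinstall postgresql-11
```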
PS:
I am using Ubuntu 19.10 (GNU/Linux 5.3.0-24-generic x86_64)
Odoo 11 with Postgres 11 as the database; Odoo can't connect to Postgres, as I mentioned before.
Edit:
Unfortunately, I can't restore or recover the server to fix the Postgres packages, because my last backup of the server was on 19/3 and today is 26/3, and I have important data from the period in between.
Update 27/3/2020 4:06 AM
I compared my last server backup with the production server and found a lot of Postgres files missing, e.g. under /usr/lib/postgresql/11/ and /etc/postgresql/11/. I think Postgres somehow got damaged and lost files in the reboot of the server. But I found the data files of the database located in /var/lib/postgresql/11/. Can I read them on my backup server? I will try and let you know.
So, finally, after hours of digging:
All the PostgreSQL program files were damaged or missing, and I lost hope of repairing them. I don't know what caused this, but it is related to the accidental reboot of the server.
I managed to find the main cluster data directory, holding the important database information for the production server, at this path:
/var/lib/postgresql/11/
and I took a backup of it by zipping the whole folder with this command:
zip -r main.zip main/
Then I did a full purge and reinstall of Postgres, using these commands from here:
apt-get --purge remove postgresql\*
This removes everything PostgreSQL from your system; just purging the postgres package isn't enough, since it's just an empty meta-package.
Once all PostgreSQL packages have been removed, run:
rm -r /etc/postgresql/
rm -r /etc/postgresql-common/
rm -r /var/lib/postgresql/
userdel -r postgres
groupdel postgres
Then I installed Postgres with this command, to match Odoo 11:
sudo apt-get install postgresql libpq-dev -y
Then I created the Odoo PostgreSQL user:
sudo su - postgres -c "createuser -s odoo" 2> /dev/null || true
Now everything is okay; Odoo should work fine, but you still don't have any databases.
To bring back the cluster backup we took earlier, we need to move the zip file back to the directory we took it from, which is
/var/lib/postgresql/11/
but before that, you should stop the postgres service:
sudo systemctl stop postgresql
and make sure it has stopped
sudo systemctl status postgresql
After that, rename the main cluster directory that Postgres is currently using; it's empty and we don't need it, because we are replacing it with our backed-up cluster:
mv /var/lib/postgresql/11/main /var/lib/postgresql/11/main_old
Then move the zip file from wherever you backed it up into the Postgres cluster folder:
mv /backups/main.zip /var/lib/postgresql/11/
Unzip it into the same path:
unzip /var/lib/postgresql/11/main.zip -d /var/lib/postgresql/11/
After unzipping, give ownership of the folder to the postgres user and group:
chown -R postgres:postgres /var/lib/postgresql/11/main
Then you are good to go. Start the Postgres service:
sudo systemctl start postgresql
sudo systemctl status postgresql
and make sure you also start the main cluster service
pg_ctlcluster 11 main start
If you stopped Odoo, make sure to start it again as well:
service odoo-server start
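As a quick sanity check (not part of the original steps), you can confirm the cluster is up and the restored databases are visible before reconnecting Odoo:
```
# Show cluster status, then list databases as the postgres superuser.
pg_lsclusters
sudo -u postgres psql -l
```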
PS: I solved ERR_TOO_MANY_REDIRECTS for the Odoo subdomain by commenting out the SSL configuration that Let's Encrypt had added to my Odoo Apache2 virtual host, and everything went back to how it was before installing Let's Encrypt.
I guess I will leave it here and won't use SSL in production again until I have figured it out on a test server. Thanks for your time; I hope my question and answer help someone in the future.
Try adding 'pg_path' to your Odoo configuration file, like:
pg_path = /path/to/postgresql/binaries
Generally, /usr/lib/postgresql/11/bin is the binary directory.
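A minimal sketch of the relevant section of the Odoo config file, assuming the default Debian/Ubuntu layout for PostgreSQL 11:
```
[options]
; point Odoo at the PostgreSQL 11 client binaries
pg_path = /usr/lib/postgresql/11/bin
```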

How to set up a secure connection between Filebeat and Elasticsearch using SSL

I'm unable to set up an SSL connection between Filebeat and Elasticsearch.
My knowledge is lacking when it comes to SSL. I'm using X-Pack to generate a certificate using the certutil command: bin/x-pack/certutil ca generates a certificate authority under the name elastic-stack-ca.p12.
Then
$ bin/x-pack/certutil cert --ca elastic-stack-ca.p12
Which I believe creates a certificate signed by that CA. This results in the file elastic-certificates.p12. From here I'm clueless.
I tried testing to see if the certificates work by setting up a HTTPS connection to ES.
I put
xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.key: /path/to/elastic-certificates.p12
xpack.security.http.ssl.certificate: /path/to/elastic-certificates.p12
xpack.security.http.ssl.certificate_authorities: [ "/path/to/elastic-stack-ca.p12" ]
However, this brings up quite a few errors, one of them being:
caught exception while handling client http traffic, closing connection
When I add the HTTPS address and the CA in Kibana, it fails to connect to ES.
I would like to know how to successfully set up HTTPS. Also, how can an SSL connection be established between two servers: one running Filebeat but no X-Pack, and the receiving server running ES alongside X-Pack?
After adding those SSL settings in your elasticsearch.yml, you also need to add the password to the Elasticsearch keystore and truststore. You should've set a password when you ran the certutil command. You can do that with:
$ echo password | /usr/share/elasticsearch/bin/elasticsearch-keystore add --stdin xpack.security.transport.ssl.keystore.secure_password
$ echo password | /usr/share/elasticsearch/bin/elasticsearch-keystore add --stdin xpack.security.transport.ssl.truststore.secure_password
Make sure you restart Elasticsearch after making these changes.
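On the Filebeat side, no X-Pack is needed; Filebeat only has to trust the CA that signed the Elasticsearch HTTP certificate. Note that Filebeat expects PEM files rather than PKCS#12, so the CA certificate would first need to be exported (for example with openssl pkcs12 -in elastic-stack-ca.p12 -nokeys -out ca.pem). A sketch of the relevant filebeat.yml section; the host, credentials, and paths are placeholders:
```
# filebeat.yml — point the Elasticsearch output at HTTPS and trust the CA.
output.elasticsearch:
  hosts: ["https://my-es-host:9200"]
  username: "elastic"
  password: "<password>"
  ssl.certificate_authorities: ["/etc/filebeat/ca.pem"]
```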

Firebase serve not working

I've been migrating to Firebase 3.0, and with the new changes we have to use firebase serve on the CLI for local development, which I believe defaults to port 5000. However, after going through the init process, running firebase serve doesn't do anything after "Starting Firebase development server...", even when specifying port 5000. Attempted fixes:
Tried with other ports, like 5001
Reinstalled Node (4.x and 6.x)
Reinstalled NPM
Removed firebase-cli (since firebase-tools is now being used)
Reinstalled firebase-tools with npm
Tweaked firebase init endlessly
Tried on different user accounts on my computer
Restarted computer
Checked that port 5000 was free with lsof -i tcp:5000
Tested address variants like localhost:5000, 127.x, and 192.x
Here is the debug log:
[debug] ----------------------------------------------------------------------
[debug] Command: /usr/local/bin/node /usr/local/bin/firebase serve -p 5000 --debug
[debug] CLI Version: 3.0.0
[debug] Platform: darwin
[debug] Node Version: v6.2.0
[debug] Time: Sun May 22 2016 01:29:59 GMT+0200 (CEST)
[debug] ----------------------------------------------------------------------
[debug]
[info] Starting Firebase development server...
[info]
[info] Project Directory: /Users/user/Documents/localdev/spfwork
Any thoughts on how to fix?
Thank you for your help.
Fixed - firebase serve from firebase-tools (npm) was missing a logger for some errors, which I added on a pull request here: https://github.com/firebase/firebase-tools/pull/143
My error was that localhost was not resolving for some reason, so I changed the command to firebase serve -p 5000 -o 127.0.0.1; specifying the listen address explicitly allowed the server to start successfully.
For reference, the error was Error: getaddrinfo ENOTFOUND localhost
You could just change your /etc/hosts file and use firebase serve normally.
To do this:
Launch Terminal
Type sudo nano /etc/hosts and press Return
Enter your admin password
Paste the following:
##
# Host Database
#
# localhost is used to configure the loopback interface
# when the system is booting. Do not change this entry.
##
127.0.0.1 localhost
255.255.255.255 broadcasthost
::1 localhost
Save the file by pressing Ctrl + O
Exit with Ctrl + X
This should fix it.
See this for more
If Firebase cannot find the public folder, this error might show up. In that case, it can be resolved by putting index.html and the other static files and app assets of the website inside the public folder, then executing firebase deploy from the Firebase CLI again.
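For reference, the folder Firebase serves is set by the hosting.public key in firebase.json (created by firebase init); a minimal example using the default folder name:
```
{
  "hosting": {
    "public": "public"
  }
}
```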
