Certbot / Let’s Encrypt adds an extra ".com" to the domain - nginx

I have a server with multiple domains, and Certbot works fine for all of them except one.
Let's use "blog.domain.staging.com" as an example for this question.
certbot certonly -t -n --dry-run --agree-tos --renew-by-default --email "${LE_EMAIL}" --webroot -w /usr/share/nginx/html -d blog.domain.staging.com
Saving debug log to /var/log/letsencrypt/letsencrypt.log
Plugins selected: Authenticator webroot, Installer None
Obtaining a new certificate
Performing the following challenges:
http-01 challenge for blog.domain.staging.com
Using the webroot path /usr/share/nginx/html for all unmatched domains.
Waiting for verification...
Cleaning up challenges
Failed authorization procedure. blog.domain.staging.com (http-01): urn:ietf:params:acme:error:connection :: The server could not connect to the client to verify the domain :: Fetching https://blog.domain.staging.com.com/.well-known/acme-challenge/YWnMAB2mDZFl4-tEO1BCrKc3vP6yeAL2JVZP-A-BRV4: Error getting validation data
IMPORTANT NOTES:
- The following errors were reported by the server:
Domain: blog.domain.staging.com
Type: connection
Detail: Fetching
https://blog.domain.staging.com.com/.well-known/acme-challenge/YWnMAB2mDZFl4-tEO1BCrKc3vP6yeAL2JVZP-A-BRV4:
Error getting validation data
To fix these errors, please make sure that your domain name was
entered correctly and the DNS A/AAAA record(s) for that domain
contain(s) the right IP address. Additionally, please check that
your computer has a publicly routable IP address and that no
firewalls are preventing the server from communicating with the
client. If you're using the webroot plugin, you should also verify
that you are serving files from the webroot path you provided.
You can see that the hostname has an additional ".com", and I don't know why or how to fix it.

There was a typo in the nginx config: the HTTP-to-HTTPS redirect pointed to a hostname with an extra ".com". That was the problem.
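For illustration only (the actual config isn't shown in the question), the broken redirect likely looked something like this, with the domain mistyped in the `return` target:

```nginx
server {
    listen 80;
    server_name blog.domain.staging.com;

    # Buggy: ".com" typed twice in the redirect target
    # return 301 https://blog.domain.staging.com.com$request_uri;

    # Fixed: redirect to the requested hostname instead of hard-coding it
    return 301 https://$host$request_uri;
}
```

Using `$host` rather than a literal hostname avoids this class of typo entirely.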

Related

Set up nginx to route subdomains to specific ports in home server (e.g. for Nextcloud)

I have a home server.
Local IP: 192.168.0.6
And assume public IP: 1.2.3.4
On the server I have Docker and NGINX installed.
Let's say that I also own a domain: mydomain.com.
I would like to have local and remote access to my Nextcloud Web UI through the subdomain nextcloud.mydomain.com, since Nextcloud requires an actual domain for the initial setup.
This subdomain (nextcloud.mydomain.com) currently has its A record set to 1.2.3.4 (my public IP). Is this the correct approach, or should I go with a different one, such as a CNAME record pointing at the main domain (mydomain.com)? I'm not sure about this.
My general question is: how can I configure NGINX so that it points nextcloud.mydomain.com to my Nextcloud container instance?
I was able to access the Nextcloud Web UI locally at 192.168.0.6:8443, so I assume this is the port NGINX should route the subdomain to.
My Nextcloud instance exists as a docker container set up with default attributes:
--publish 80:80 \
--publish 8080:8080 \
--publish 8443:8443 \
I haven't configured NGINX at all yet, so as not to mess anything up.
I would be grateful for an explanation of the steps I should take, and for confirmation that my understanding of the overall concept is correct.
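A minimal reverse-proxy sketch for the setup described above, assuming nginx runs on the same host as the container and the Web UI is on the published port 8443 (names and paths are placeholders; TLS termination in nginx is left out for brevity):

```nginx
server {
    listen 80;
    server_name nextcloud.mydomain.com;

    location / {
        # Forward everything to the Nextcloud container's published port
        proxy_pass https://127.0.0.1:8443;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```

If the container serves HTTPS on 8443 with a self-signed certificate, you may additionally need `proxy_ssl_verify off;`, or proxy to the container's plain-HTTP port instead.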

Certbot: get SSL certificate for HAProxy

I'm new to networking and I need to add an SSL certificate to my load balancer. For that, I'm using Certbot.
Instructions: https://certbot.eff.org/instructions?ws=haproxy&os=ubuntufocal
Basically, it says to log in to the server over SSH and then install Certbot.
Then, to run this command:
sudo certbot certonly --standalone
It tells me to temporarily stop my web server to get the certificate, so I ran:
sudo service ssh stop
After running the certbot command I get the following error:
Could not bind TCP port 80 because it is already in use by another process on
this system (such as a web server). Please stop the program in question and then
try again.
So I ran:
sudo netstat -tulpn | grep :80
Output:
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 59283/nginx: master
tcp6 0 0 :::80 :::* LISTEN 59283/nginx: master
Now, if I stop nginx with "sudo service nginx stop" and run the above command again, nothing is listening on port 80 anymore. So I retry the Certbot command once more:
sudo certbot certonly --standalone
I get the following error:
Certbot failed to authenticate some domains (authenticator: standalone). The Certificate Authority reported these problems:
Domain: totaldomainoftheworldclub.tech
Type: dns
Detail: no valid A records found for totaldomainoftheworldclub.tech; no valid AAAA records found for totaldomainoftheworldclub.tech
Hint: The Certificate Authority failed to download the challenge files from the temporary standalone webserver started by Certbot on port 80. Ensure that the listed domains point to this machine and that it can accept inbound connections from the internet.
And that's it, I don't know what else to do.
If you have trouble with normal validation, you can try the DNS challenge instead.
Please note that for DNS challenges, the following DNS providers are supported: cloudflare, cloudxns, digitalocean, dnsimple, dnsmadeeasy, gehirn, google, linode, luadns, nsone, ovh, rfc2136, route53, sakuracloud.
You can check how to use DNS challenges, and what additional configuration they require, in the Certbot docs. Basically, you will need to create an API key with your DNS provider and supply it to Certbot; during validation, Certbot will then use the API to add a temporary DNS record automatically.
You can also run DNS challenges from a different machine, or even from a Google Cloud Function or AWS Lambda; see certbot-lambda for an example.
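As a hypothetical example using the Cloudflare plugin (the credentials file path is a placeholder, and other providers use analogous `--dns-<provider>` flags):

```shell
# Requires the certbot-dns-cloudflare plugin, plus an API token
# stored in the referenced credentials file
sudo certbot certonly \
  --dns-cloudflare \
  --dns-cloudflare-credentials /root/.secrets/cloudflare.ini \
  -d totaldomainoftheworldclub.tech
```

Because the challenge is answered via DNS records, this works even when port 80 is unreachable from the internet.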

Port-forwarding to a local domain using socat

I'm trying to set up port forwarding from localhost to a local server using socat. The server is available via http://my-local-domain.
Here is what I tried:
socat -d -d tcp-listen:8081,reuseaddr,fork tcp:my-local-domain:80
When I open the browser and go to http://localhost:8081, I see my original localhost page, not the page when I navigate to my-local-domain.
How does one create port-forwarding to a local domain using socat?
I found that I'm not able to use port 80 because the Host header appears as localhost to NGINX, so the localhost site serves the request [which would explain the original issue].
You can verify this by:
Opening nginx.conf
Adding ..."Host=$host"... to the log_format
Tailing the access logs [tail -f /usr/local/nginx/logs/access.log]
You'll notice that Host is always localhost, and so localhost serves the request.
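Concretely, the log_format change might look like this (a sketch based on nginx's default "combined" format, with the Host value appended; the log path is the one used above):

```nginx
http {
    log_format hostlog '$remote_addr - $remote_user [$time_local] '
                       '"$request" $status $body_bytes_sent "Host=$host"';
    access_log /usr/local/nginx/logs/access.log hostlog;
}
```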
The way to solve this is to change the Host header from localhost to my-local-server:
localhost:8081 --> [rewrite Host header] --> my-local-server:80
The way I found to do this was to create a go-between proxy with Node.js, as follows:
Create proxy.js
Copy the contents of the code from this gist and paste into proxy.js
Run the following command in the terminal to create proxy to web server:
PORT_LISTEN=8091 PORT_TARGET=80 HOST_TARGET="my-local-server" HOST_ORIGIN="my-local-server" node proxy.js
Run socat to proxy
socat -d -d tcp-listen:8081,reuseaddr,fork tcp:localhost:8091
So now we have the following:
localhost:8081 --> localhost:8091 --> my-local-server:80
This is what worked.
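You can also confirm that the Host header is what selects the nginx server block by supplying it explicitly with curl against the socat listener (hostname as in this setup; adjust to yours):

```shell
# Present the Host the backend expects, bypassing the rewrite problem
curl -H "Host: my-local-server" http://localhost:8081/
```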

nginx enables https on port 80 and 8080 only

I know almost nothing about nginx; please help me see whether this can be achieved:
A public IP with only ports 80 and 8080 open, such as 182.148.???.135
A domain name with an SSL certificate, such as mini.????.com
The domain name resolves to this IP.
Given the above, how do I enable HTTPS, so that visiting https://mini.????.com reaches the target server 182.148.???.135?
Thank you very much for your help!
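One possible sketch, assuming the certificate files are already on the server (the paths below are placeholders): terminate TLS on one of the open ports, e.g. 8080. Note that browsers default https:// to port 443, which is closed here, so the URL would have to include the port explicitly (https://mini.????.com:8080).

```nginx
server {
    listen 8080 ssl;
    server_name mini.????.com;   # your real domain here

    # Certificate paths are assumptions; use wherever your cert lives
    ssl_certificate     /etc/nginx/ssl/fullchain.pem;
    ssl_certificate_key /etc/nginx/ssl/privkey.pem;

    location / {
        root /usr/share/nginx/html;
    }
}
```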
Just came across an issue; it doesn't matter whether it's a local setup or one with a domain name.
When you create a symbolic link from sites-available to sites-enabled, you have to use the full path for each location.
E.g. you can't do:
cd /etc/nginx/sites-available/
ln -s monitor ../sites-enabled/
It has to be:
ln -s /etc/nginx/sites-available/monitor /etc/nginx/sites-enabled/
Inside /etc/nginx/sites-available you should just edit the default file to change the root web folder and leave the server name part alone. Restart nginx and it should work fine. You don't need to specify the IP of your droplet; that's the whole purpose of the default file.
You only need to copy the default file and change server names when you want to set up virtual hosts.
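The difference is easy to reproduce: a relative symlink target is resolved against the directory the link lives in, not the directory you ran ln from. A quick sketch (the /tmp paths are just for the demo):

```shell
# Start fresh so the demo can be rerun
rm -rf /tmp/nginx-demo
mkdir -p /tmp/nginx-demo/sites-available /tmp/nginx-demo/sites-enabled
echo "server {}" > /tmp/nginx-demo/sites-available/monitor
cd /tmp/nginx-demo/sites-available

# Relative target: the link resolves to sites-enabled/monitor,
# which doesn't exist, so the link dangles
ln -s monitor ../sites-enabled/monitor-broken

# Absolute target: always resolves correctly
ln -s /tmp/nginx-demo/sites-available/monitor /tmp/nginx-demo/sites-enabled/monitor-ok

ls -l /tmp/nginx-demo/sites-enabled/
```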

Could not resolve hostname, ping works

I have installed Raspbian on my RasPi, and now I can't ssh or git clone; only local host names seem to be resolved. And yet ping works:
pi ~ $ ssh test.com
ssh: Could not resolve hostname test.com: Name or service not known
pi ~ $ git clone gitosis@test.com:test.git
Cloning into 'test'...
ssh: Could not resolve hostname test.com: Name or service not known
fatal: The remote end hung up unexpectedly
pi ~ $ ping test.com
PING test.com (174.36.85.72) 56(84) bytes of data.
I sort of worked around it for GitHub by using http://github.com instead of git://github.com, but this is not normal and I would like to pinpoint the problem.
Googling turned up similar issues, but the solutions offered were either typo corrections or adding domains to the hosts file.
This sounds like a DNS issue. Try switching to another DNS server and see if it works.
OpenDNS
208.67.222.222
208.67.220.220
GoogleDNS
8.8.8.8
8.8.4.4
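On Linux, the resolver is typically configured in /etc/resolv.conf (or through whatever network manager owns that file); a minimal example pointing at the Google servers above:

```
# /etc/resolv.conf
nameserver 8.8.8.8
nameserver 8.8.4.4
```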
Try resetting the contents of the DNS client resolver cache.
(For Windows) Fire up a command prompt and type:
ipconfig /flushdns
If you are a Linux or Mac user, there are equivalent ways of flushing the DNS cache.
I had the same error; I just needed to specify a folder:
localmachine $ git pull ssh://someusername@127.0.0.1:38765
ssh: Could not resolve hostname : No address associated with hostname
fatal: The remote end hung up unexpectedly
localmachine $ git pull ssh://someusername@127.0.0.1:38765/
someusername@127.0.0.1's password:
That error message is just misleading.
If you have a network manager installed,
check /etc/nsswitch.conf
If you have the line
hosts: files mdns4_minimal [NOTFOUND=return] dns mdns4
remove the [NOTFOUND=return]
then restart networking: /etc/init.d/networking restart
The [NOTFOUND=return] prevents further lookups if the first name server doesn't respond correctly.
This may be an issue with the proxy. Try unsetting it:
git config --global --unset http.proxy
git config --global --unset https.proxy
