I'm new to networking and I need to add an SSL certificate to my load balancer. For that, I'm using Certbot.
Instructions: https://certbot.eff.org/instructions?ws=haproxy&os=ubuntufocal
Basically, it says to log in to the server using SSH and then install Certbot.
Then it says to run this command:
sudo certbot certonly --standalone
It tells me to temporarily stop my web server to get the certificate, so I ran:
sudo service ssh stop
After running the certbot command I get the following error:
Could not bind TCP port 80 because it is already in use by another process on
this system (such as a web server). Please stop the program in question and then
try again.
So I ran:
sudo netstat -tulpn | grep :80
Output:
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 59283/nginx: master
tcp6 0 0 :::80 :::* LISTEN 59283/nginx: master
Now, if I stop the nginx service with "sudo service nginx stop" and run the above command again, I don't get any services listening on port 80. So I retry the Certbot command once more:
sudo certbot certonly --standalone
I get the following error:
Certbot failed to authenticate some domains (authenticator: standalone). The Certificate Authority reported these problems:
Domain: totaldomainoftheworldclub.tech
Type: dns
Detail: no valid A records found for totaldomainoftheworldclub.tech; no valid AAAA records found for totaldomainoftheworldclub.tech
Hint: The Certificate Authority failed to download the challenge files from the temporary standalone webserver started by Certbot on port 80. Ensure that the listed domains point to this machine and that it can accept inbound connections from the internet.
And that's it, I don't know what else to do.
If you have trouble with normal (HTTP) validation, you can try using the DNS challenge. In your case, the CA reports that your domain has no A/AAAA records at all, so HTTP validation can never reach your machine: either create an A record pointing at this server, or use the DNS challenge, which doesn't require the domain to resolve to your machine.
Please note that for DNS challenges, the following DNS providers are supported: cloudflare, cloudxns, digitalocean, dnsimple, dnsmadeeasy, gehirn, google, linode, luadns, nsone, ovh, rfc2136, route53, sakuracloud.
You can check how to use DNS challenges, and what additional configuration they require, in the certbot docs. But basically, you will need to create an API key with your DNS provider and then provide it to certbot; when validating, certbot will automatically add a temporary DNS record through that API.
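For example, a minimal sketch using the Cloudflare plugin (assuming the certbot-dns-cloudflare plugin is installed; the credentials file location here is just an example):
# /root/.secrets/cloudflare.ini (chmod 600) - example location
dns_cloudflare_api_token = <your-api-token>
# request the certificate via a DNS-01 challenge
sudo certbot certonly --dns-cloudflare \
  --dns-cloudflare-credentials /root/.secrets/cloudflare.ini \
  -d totaldomainoftheworldclub.tech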
You can also run DNS challenges from a different machine, or even from a Google Cloud Function or AWS Lambda. Check certbot-lambda for an example.
Related
I'm trying to renew an expired Certbot SSL certificate for Nginx on Ubuntu 18. I'm getting... well, various weirdness, but the certbot error is:
Certbot failed to authenticate some domains (authenticator: nginx). The Certificate Authority reported these problems:
Domain: mysite.co.uk
Type: connection
Detail: ...: Fetching http://mysite.co.uk/.well-known/acme-challenge/rx6m9QMdK0h16ZOJYsq5sx_AZbxI4zWGvJ6o_kt3b-A: Connection reset by peer
I've got the site running on HTTP:
server {
listen 80;
listen [::]:80;
server_name www.mysite.co.uk mysite.co.uk;
root /var/www/html;
}
...the nginx.conf is telling it to keep its PID in /run/nginx.pid; I can start and stop it via service nginx start|stop and everything's good:
curl -I http://www.mysite.co.uk/
HTTP/1.1 200 OK
I'm not clear how this /.well-known/acme-challenge/ thing is supposed to work - there's certainly no such folder in /var/www/html, but I did read that certbot starts its own server (??) to manage authentication and that it's wise to stop your own while renewing.
So, as root, I do:
cat /run/nginx.pid
> 124876
service nginx stop
lsof -i -P -n | grep LISTEN
> nothing on 80 or 443
cat /run/nginx.pid
> file doesn't exist
certbot certonly --nginx
I know there's a certbot renew command, but I'm getting the same results with each, so... anyway. It correctly picks up the domain name from the existing conf, prompts me to renew, and eventually spits out the error above. I also see a couple of lines added to nginx's error.log:
[notice] 125028#125028: signal process started
[error] 125028#125028: invalid PID number "" in "/run/nginx.pid"
Sure enough, nginx is started and is listening on 80 and 443. I didn't start it. It's also got a new PID. If I try service nginx restart, it fails because it's trying to bind to ports that this other (certbot's ??) Nginx process is already using.
At all times, whether via the "proper" nginx or this certbot zombie one, my site is happily returning HTTP 200s to external requests. I've never got a "Connection reset by peer" error myself. Even when I manually created a /var/www/html/.well-known/acme-challenge/test file, it was always served fine.
So... what in the almighty shenanigans is going on? Why is certbot starting an nginx instance it can't see? Why doesn't it stop it? Is it supposed to be creating something in /.well-known/acme-challenge/? Is my nginx instance somehow interfering? What should be happening? What am I doing wrong??
OK, I still don't understand the weirdness with certbot starting its own nginx and not stopping it and mucking up PIDs and all that... but certbot can now see our server and renew the SSL certs. And after two days of IT swearing blind that it wasn't being blocked by a firewall rule... it was the firewall.
Sigh.
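For anyone else debugging this: the server answering your own requests proves nothing if a firewall only blocks outside traffic. A quick sanity check is to serve a throwaway file and fetch it from a host outside your network (the file name here is just a placeholder):
# on the server: create a test challenge file
mkdir -p /var/www/html/.well-known/acme-challenge
echo ok > /var/www/html/.well-known/acme-challenge/test
# from a machine OUTSIDE your network (e.g. a phone hotspot):
curl -i http://mysite.co.uk/.well-known/acme-challenge/test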
I'm using the Gitea version control system in a Docker environment. The Gitea image used is the rootless variant.
The HTTP port mapping is "8084:3000" and the SSH port mapping is "2224:2222".
I generated the keys on my Linux host and added the generated public key to my Gitea account.
1. Test environment
Later, I created the SSH config file (nano /home/campos/.ssh/config):
Host localhost
HostName localhost
User git
Port 2224
IdentityFile ~/.ssh/id_rsa
After finishing the settings, I created the myRepo repository and cloned it.
To perform the clone, I changed the URL from ssh://git@localhost:2224/campos/myRepo.git to git@localhost:/campos/myRepo.git (the port comes from the SSH config file).
To clone the repository I typed: git clone git@localhost:/campos/myRepo.git
This worked perfectly!
2. Production environment
However, when defining a reverse proxy and a domain name, it was not possible to clone the repository.
Before performing the clone, I changed the ssh configuration file:
Host gitea.domain.com
HostName gitea.domain.com
User git
Port 2224
IdentityFile ~/.ssh/id_rsa
Then I tried to clone the repository again:
git clone git@gitea.domain.com:/campos/myRepo.git
A connection refused message was shown:
Cloning into 'myRepo'...
ssh: connect to host gitea.domain.com port 2224: Connection refused
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
I understand the message appears because, by default, the proxy doesn't handle SSH requests.
Searching a bit, some links say to use "stream" in Nginx.
But I still don't understand how to do this configuration. I need to keep accessing the proxy server itself on port 22, and redirect port 2224 on the proxy to port 2224 on the Docker host.
The gitea.conf configuration file I use is as follows:
server {
listen 443 ssl http2;
server_name gitea.domain.com;
# SSL
ssl_certificate /etc/nginx/ssl/mycert_bundle.crt;
ssl_certificate_key /etc/nginx/ssl/mycert.key;
# logging
access_log /var/log/nginx/gitea.access.log;
error_log /var/log/nginx/gitea.error.log warn;
# reverse proxy
location / {
proxy_pass http://192.168.10.2:8084;
include myconfig/proxy.conf;
}
}
# HTTP redirect
server {
listen 80;
server_name gitea.domain.com;
return 301 https://gitea.domain.com$request_uri;
}
3. Redirection in Nginx
I spent several hours trying to understand how to configure Nginx's "stream" feature. Below is what I did.
At the end of the nginx.conf file I added:
stream {
include /etc/nginx/conf.d/stream;
}
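Note that the stream block must sit at the top level of nginx.conf, outside the http block. On some distros the stream module ships separately and has to be loaded first; this is a sketch, and the module path varies by distro:
# at the very top of nginx.conf, before any stream block
load_module /usr/lib/nginx/modules/ngx_stream_module.so;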
In the stream file in conf.d, I added the content below:
upstream ssh-gitea {
    # SSH port published by the Gitea container on the Docker host
    server 10.0.200.39:2224;
}
server {
    # accept raw TCP on port 2224 and forward it to Gitea's SSH
    listen 2224;
    proxy_pass ssh-gitea;
}
I tested the Nginx configuration and restarted the service:
nginx -t && systemctl restart nginx.service
I checked that ports 80, 443, 22 and 2224 were listening on the proxy server:
ss -tulpn
This configuration made it possible to clone a repository over SSH using the domain name.
4. Cloning with SSH correctly
After all the settings I made, I realized that it is possible to use the original URL ssh://git@gitea.domain.com:2224/campos/myRepo.git in the clone.
When typing the command git clone ssh://git@gitea.domain.com:2224/campos/myRepo.git, it is not necessary to define the SSH config file, because the port is given in the URL.
This link helped me:
https://discourse.gitea.io/t/password-is-required-to-clone-repository-using-ssh/5006/2
The previous sections explain my solution, so I'm marking this question as solved.
I am trying to run Nginx, but I am getting the error below:
bind() to 0.0.0.0:80 failed (10013: An attempt was made to access a
socket in a way forbidden by its access permissions)
What changes do I need to make to get it working?
I have tried running on ports other than 80 and it works, but I need it to be running on 80.
Note: I am running on Windows 7 with command prompt running as Administrator.
If the port is already in use, you can change the default port of 80 to a different port that is not in use (maybe 8070). In conf\nginx.conf:
server {
listen 8070;
...
}
After startup, you should be able to hit localhost:8070.
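You can confirm it responds on the new port with curl:
curl -I http://localhost:8070/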
tl;dr
netsh http add iplisten ipaddress=::
I faced a similar issue. Run the above command in a command prompt running as Administrator.
This should free up port 80, and you'd be able to run nginx.
Description:
netsh http commands are used to query and configure HTTP.sys settings and parameters.
add iplisten :
Adds a new IP address to the IP listen list, excluding the port number.
"::" means any IPv6 address.
For more netsh http commands, refer to the netsh http commands documentation.
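To verify the change, you can list the current HTTP.sys listen list:
netsh http show iplisten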
Hope this helps!!
You have to be admin or root to bind to port 80. If you cannot run as root, one option is to have your application listen on another port, like 8080, and then redirect messages directed to 80 over to 8080. If you are using Linux, you can do this redirect with iptables, as sketched below.
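A minimal sketch of that redirect with iptables (assuming the application listens on 8080; note that the PREROUTING chain only applies to traffic arriving from other hosts, not to localhost):
# redirect inbound TCP port 80 to port 8080
sudo iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 8080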
nginx: [emerg] bind() to 0.0.0.0:80 failed (10013: An attempt was made to access a socket in a way forbidden by its access permissions)
I had a similar problem: my port 80 was being used by IIS (Windows machine). Stopping IIS freed up port 80.
That resolved the problem.
Please check if another proxy is running on port 80. In my case, IIS was running as a reverse proxy, so nginx could not start.
Stopping IIS and then starting NGINX solved the problem.
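If you want to stop IIS from the command line rather than through the management console, one way (from an elevated prompt) is:
iisreset /stop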
My Tomcat server was running on port 80. I changed nginx's port number in the conf\nginx.conf file and it started to work.
This is an old question, but since I had this problem recently I thought I'd post another possible cause.
If you are using Docker, have already tried all the proposed solutions above, and are wondering why port 80 is still being bound although your configuration overrides it with a non-root port (e.g. listen 8080;): the newer NGINX images ship a default.conf in /etc/nginx/conf.d that still listens on port 80.
Sample:
$ grep -r 80 /etc/nginx/
/etc/nginx/conf.d/default.conf: listen 80;
In my case, I removed it in my Dockerfile:
RUN set -x \
&& rm -f /etc/nginx/nginx.conf \
&& rm -f /etc/nginx/conf.d/default.conf
The next step copies in my custom configuration:
COPY ["conf/nginx.conf", "/etc/nginx/nginx.conf"]
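To confirm that nothing in the final image still listens on port 80, you can dump the effective configuration; the image tag my-nginx below is just an example:
docker build -t my-nginx .
docker run --rm my-nginx nginx -T | grep "listen"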
I have Elasticsearch 1.4 and Kibana 4 running on an Amazon EC2 instance running RHEL7.
Kibana 4 is running as a standalone process and is not deployed behind a web server such as nginx. It is listening on port 5601 (the default port). I would like to have Kibana listen on port 80.
Can this be achieved without using nginx? If yes, how?
You need to set the CAP_NET_BIND_SERVICE capability to allow a non-root process to bind to a privileged port (<1024).
To make Kibana listen on port 80:
1- Edit the Kibana port in /etc/kibana/kibana.yml:
server.port: 80
2- Run the following commands:
sudo setcap cap_net_bind_service=+epi /usr/share/kibana/bin/kibana
sudo setcap cap_net_bind_service=+epi /usr/share/kibana/bin/kibana-plugin
sudo setcap cap_net_bind_service=+epi /usr/share/kibana/bin/kibana-keystore
sudo setcap cap_net_bind_service=+epi /usr/share/kibana/node/bin/node
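You can confirm the capability took effect with getcap; the output should look something like this:
getcap /usr/share/kibana/node/bin/node
# /usr/share/kibana/node/bin/node cap_net_bind_service=eip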
Edit file {kibana-directory}/config/kibana.yml. Find this line:
port: 5601
and change it to:
port: 80
Setting port 80 in the config file will trigger the following error:
kibana[11777]: FATAL Error: listen EACCES: permission denied 0.0.0.0:80
because the Kibana service runs as the kibana user by default.
You can change the user to root, but this will trigger the following warning:
kibana[11639]: Kibana should not be run as root. Use --allow-root to continue.
So running the Kibana service as root is not recommended. It is better to add a port-forwarding rule (see the sketch below), or an HTTP redirect if you have a web server.
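A sketch of such a forwarding rule with iptables, leaving Kibana on its default port 5601 (this applies to traffic from other hosts, not to localhost):
# redirect inbound TCP port 80 to Kibana's 5601
sudo iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 5601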
Full settings here: https://www.elastic.co/guide/en/kibana/current/settings.html
This should be added to config/kibana.yml:
server.port: 80
Then run the Kibana server with sudo. Make sure no other process is using port 80 at the same time.
I am using an Ubuntu 12.04 LTS 64-bit PC with JBoss as my local server, and I have a project that uses MySQL as the database and the Struts framework. I can easily access my project using
http://localhost:8080
but when I want to access my project using
https://localhost:8080
it shows an error:
The connection was interrupted
The connection to 127.0.0.1:8080 was interrupted while the page was loading.
I have also checked
$ sudo netstat -plntu | grep 8080
The output is:
"tcp 0 0 0.0.0.0:8080 0.0.0.0:* LISTEN 5444/java"
If I kill this process, my project is killed too.
Note also that my port 80 is free.
Can you tell me what the problem is that prevents me from accessing my project on my local PC using HTTPS?
Thanks in advance for helping.
SSL has to be on a different port. Here is the breakdown:
http:// is watched on one port, typically 80
https:// is watched on a different port, typically 443
You need to run SSL on a different port. (The example below uses Apache httpd syntax; the same idea carries over to other servers.)
Listen 8081
SSL VirtualHost
<VirtualHost *:8081>
# SSL Cert info here
....
</VirtualHost>
> service httpd restart
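For JBoss itself, which in versions of that era embeds a Tomcat-style web container (JBoss Web), the rough equivalent is an HTTPS connector on its own port in server.xml. This is only a sketch; the keystore path and password are placeholders:
<!-- HTTPS connector on its own port; keystore values are placeholders -->
<Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true"
           scheme="https" secure="true"
           keystoreFile="/path/to/keystore.jks" keystorePass="changeit"
           sslProtocol="TLS" />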