I am using an nginx reverse proxy to serve the GitLab web app on port 80, i.e. the reverse proxy forwards requests from http://ip-address/gitlab to http://ip-address:8000/gitlab. I have updated 'external_url' in my 'gitlab.rb' file. Everything is working (I am able to access the GitLab web interface via http://ip-address/gitlab) except the generated git clone URLs. When I create a new git project, the repo URL is shown as http://ip-address:8000/gitlab/user/testproject.git, i.e. the port is still there. How can I remove the port?
The displayed repository URL is generated from the external_url parameter in your gitlab.rb file.
You should set it like this:
external_url 'http://ip-address/gitlab'
Then run sudo gitlab-ctl reconfigure to apply this change.
Add "proxy_set_header Host $http_host;" in your "location / { directive.
Then restart nginx.
It should resolve your issue
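Putting it together, the relevant nginx server block might look like the sketch below; the port 8000 follows the question's setup, and 127.0.0.1 assumes GitLab runs on the same host:
server {
    listen 80;
    server_name ip-address;  # your server's hostname or IP

    location /gitlab {
        # Forward the original Host header so GitLab builds clone URLs
        # without the backend port
        proxy_set_header Host $http_host;
        proxy_pass http://127.0.0.1:8000;
    }
}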
I'm a newbie to Nginx. I cannot access my Node.js application that I deployed on AWS EC2 behind an Nginx reverse proxy. If I run curl http://localhost:3000 on the server I can see the application is running successfully (I'm using pm2 to run the node server). But when I try to access it in my browser or Postman using the public DNS, I get the error "This site can't be reached" and the request times out. Here's my Nginx configuration (I have followed a number of tutorials for this).
The configuration file is named nginx.conf and is in the /etc/nginx/sites-enabled directory. If I run sudo nginx -t it says the syntax is ok and the test is successful. I can also see that Nginx is running using the command sudo systemctl status nginx. What could be the possible reason for this behaviour?
I figured it out: the problem wasn't with the Nginx configuration. I actually needed to allow public access to port 80 on my EC2 instance, which is blocked by default. I allowed port 80 and everything is working fine. This blog helped me; visit it for more details on how to enable port 80 for your EC2 instance.
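For reference, the same change can be made from the AWS CLI; the security group ID below is a hypothetical placeholder:
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp \
    --port 80 \
    --cidr 0.0.0.0/0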
I have a cPanel/WHM server with Nginx installed as a reverse proxy (with the default Nginx manager), so I tried to replace Nginx with Engintron.
I installed Engintron and uninstalled Nginx via the cPanel Nginx manager.
As a result the website stopped working and Nginx couldn't start (status from the WHM Engintron page), so should I reinstall Nginx?
I don't know whether Engintron already contains Nginx or not.
I tried reinstalling Nginx and the website works again, but I don't know whether Engintron or Nginx is serving the website.
Engintron includes its own Nginx installation within its script. So if you install only Engintron, it will run by default instead of Nginx, in reverse proxy mode in front of Apache.
You can check the location of your installed nginx configuration file with
sudo nginx -t
(it will show the location of the configuration file while also testing it).
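Typical output looks like this (the path varies by install):
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful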
See whether the nginx configuration file matches the Engintron configuration file or not. If you are still unsure, run
sudo nginx -T
this will dump the full configuration output, which you can compare against the Engintron config file.
The last option would be to uninstall Nginx entirely and do a clean installation of Engintron.
Try to run these commands first:
/usr/sbin/nginx -s stop
/usr/local/cpanel/scripts/restartsrv_nginx start
If the problem still exists, uninstall Nginx and all Ruby packages from EasyApache via WHM, then reinstall Nginx again.
I know almost nothing about nginx; please help me see whether this can be achieved.
A public IP with only ports 80 and 8080 open, such as 182.148.???.135.
A domain name with an SSL certificate, such as mini.????.com.
This domain name resolves to this IP.
Given the above conditions, how do I enable https, so that a visit to https://mini.????.com reaches the target server 182.148.???.135?
Thank you very much for your help!
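One common approach, sketched here rather than offered as a verified answer: since port 443 is closed, nginx has to terminate TLS on one of the open ports, and that port then has to appear in the URL (e.g. https://mini.????.com:8080). The server name and certificate paths below are hypothetical placeholders:
server {
    listen 8080 ssl;
    server_name mini.example.com;  # stands in for mini.????.com

    # hypothetical paths; point these at your real certificate and key
    ssl_certificate     /etc/nginx/ssl/fullchain.pem;
    ssl_certificate_key /etc/nginx/ssl/privkey.pem;

    location / {
        root /var/www/html;  # or proxy_pass to the backing application
    }
}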
Just came across an issue. It doesn't matter if it's a local setup or one with a domain name.
When you create a symbolic link from sites-available to sites-enabled you have to use the whole path for each location.
e.g. you can't do
cd /etc/nginx/sites-available/
ln -s monitor ../sites-enabled/
It has to be:
ln -s /etc/nginx/sites-available/monitor /etc/nginx/sites-enabled/
(A relative target is resolved against the link's own directory, so /etc/nginx/sites-enabled/monitor would end up pointing at itself.)
Inside /etc/nginx/sites-available you should have just edited the default file to change the root web folder you specified and left the server_name part alone. Restart nginx and it should work fine. You don't need to specify the IP of your droplet; that's the whole purpose of the default file.
You only need to copy the default file and change the server names when you want to set up virtual hosts, as sketched below.
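A sketch of those virtual-host steps; the monitor site name follows the example above:
sudo cp /etc/nginx/sites-available/default /etc/nginx/sites-available/monitor
# edit server_name and root in the new file, then enable it with a full-path symlink
sudo ln -s /etc/nginx/sites-available/monitor /etc/nginx/sites-enabled/
sudo nginx -t && sudo systemctl restart nginx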
I have multiple upstream servers from an nginx load balancer:
upstream app {
# Make each client IP address stick to the same server
# See http://nginx.org/en/docs/http/load_balancing.html
ip_hash;
# Use IP addresses: see recommendation at https://www.nginx.com/resources/wiki/start/topics/tutorials/config_pitfalls/
server 1.1.1.1:6666; # app-server-a
server 2.2.2.2:6666; # app-server-b
}
Right now I use the servers in an active/passive configuration by taking down each server (e.g. systemctl stop myapp) and then letting nginx detect that the server is down.
However, I'd like to be able to change the upstream server dynamically, without having to take either app server or nginx OSS down. I'm aware of the proprietary upstream_conf module for nginx Plus, but I am using nginx OSS.
How can I dynamically reconfigure the upstream server on nginx OSS?
You can use:
openresty, an OSS nginx bundle with Lua scripting ability
nginx with Lua scripting (you can build it yourself from nginx OSS and LuaJIT) to achieve this; a sketch follows below.
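For the Lua route, a minimal sketch using OpenResty's balancer_by_lua_block; the backends shared dict, its "active" key, and the admin mechanism that would populate it are assumptions (port 6666 follows the question), and note this sketch drops the ip_hash stickiness:
http {
    lua_shared_dict backends 1m;  # holds the currently active backend host

    upstream app {
        server 0.0.0.1;  # placeholder; the real peer is chosen in Lua below
        balancer_by_lua_block {
            local balancer = require "ngx.balancer"
            -- read the active backend, set elsewhere (e.g. by an admin location)
            local host = ngx.shared.backends:get("active") or "1.1.1.1"
            local ok, err = balancer.set_current_peer(host, 6666)
            if not ok then
                ngx.log(ngx.ERR, "failed to set peer: ", err)
                return ngx.exit(500)
            end
        }
    }

    server {
        listen 80;
        location / {
            proxy_pass http://app;
        }
    }
}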
dynx can achieve exactly what you are looking for; it's still a work in progress, but the dynamic upstream functionality is there and it's configurable through a REST API.
I'm adding the details on how to deploy and configure dynx:
you need to have a docker swarm up and running (for testing purposes you can have a one-machine swarm); follow the docker documentation to do that.
then you need to deploy the stack, for example with this command (you need to be at the dynx git root):
docker stack deploy -c docker-compose.yml dynx
To check if the application deployed correctly, you can use this command:
docker stack services dynx
To configure a location through the API you can, for instance, do:
curl -v "http://localhost:8888/configure?location=/httpbin&upstream=http://www.httpbin.org/anything&ttl=10"
To test if it works:
curl -v http://localhost:8666/httpbin
Do not hesitate to contact me or open an issue on GitHub if you are not able to get it to work.
I have the following:
a virtual docker repo docker-virtual
a remote docker repo dockerhub
a local docker repo docker-local
docker-local is the default deployment repo. Can I use a multi-domain certificate to configure the virtual repo in my reverse proxy?
Does the certificate need to support the local repo?
"Does the certificate need to support the local repo?"
Not really. As long as you are using the Default Deployment Repository feature of your virtual Docker repository in Artifactory, you only have to use one registry endpoint with the client for pushing and pulling images.
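For illustration, assuming the virtual repo is exposed at a hypothetical endpoint docker-virtual.art-prod.com, both push and pull go through that single hostname:
docker login docker-virtual.art-prod.com
docker tag myimage:1.0 docker-virtual.art-prod.com/myimage:1.0
docker push docker-virtual.art-prod.com/myimage:1.0   # lands in docker-local via the default deployment repo
docker pull docker-virtual.art-prod.com/myimage:1.0   # resolves through docker-virtual (local + remote)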
Wildcard certificates are good if you are going to work with more than just one registry endpoint. For example, consider this Nginx configuration snippet and the "server_name" directive specifically:
server {
listen 443 ssl;
listen 80 ;
server_name ~(?<repo>.+)\.art-prod\.com art-prod;
...
rewrite ^/(v1|v2)/(.*) /artifactory/api/docker/$repo/$1/$2;
...
}
The regular expression here captures the sub-domain portion of the URL, which makes it available for use later when rewriting the URL from "/v2/" to the full URI of the Artifactory API that includes the actual repository name. In this case your configuration will be handling more than just one hostname, so it is best to use a wildcard certificate for *.art-prod.com.