Nginx location / vs /artifactory

I am looking at the following nginx configuration to set up a Docker repository:
###########################################################
## this configuration was generated by JFrog Artifactory ##
###########################################################
## add ssl entries when https has been set in config
ssl_certificate /etc/nginx/ssl/demo.pem;
ssl_certificate_key /etc/nginx/ssl/demo.key;
ssl_session_cache shared:SSL:1m;
ssl_prefer_server_ciphers on;
## server configuration
server {
    listen 443 ssl;
    listen 80;
    server_name ~(?<repo>.+)\.art.local art.local;
    if ($http_x_forwarded_proto = '') {
        set $http_x_forwarded_proto $scheme;
    }
    ## Application specific logs
    ## access_log /var/log/nginx/art.local-access.log timing;
    ## error_log /var/log/nginx/art.local-error.log;
    rewrite ^/$ /artifactory/webapp/ redirect;
    rewrite ^/artifactory/?(/webapp)?$ /artifactory/webapp/ redirect;
    rewrite ^/(v1|v2)/(.*) /artifactory/api/docker/$repo/$1/$2;
    chunked_transfer_encoding on;
    client_max_body_size 0;
    location /artifactory/ {
        proxy_read_timeout 900;
        proxy_pass_header Server;
        proxy_cookie_path ~*^/.* /;
        proxy_pass http://localhost:8081/artifactory/;
        proxy_set_header X-Artifactory-Override-Base-Url $http_x_forwarded_proto://$host:$server_port/artifactory;
        proxy_set_header X-Forwarded-Port $server_port;
        proxy_set_header X-Forwarded-Proto $http_x_forwarded_proto;
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
Why is the location directive set to /artifactory vs. / (the root location)?

The location directive is /artifactory/ and not / because you are using a public context. That is to say, all access to Artifactory will be in the form of servername/artifactory/ and not servername/. This has the advantage that you can use the same URL for multiple applications, for example, something like this:
Artifactory -> servername/artifactory/
Jenkins -> servername/jenkins/
My Custom Service -> servername/myapp/
In other words, it allows you to reuse the same servername (and port) with different contexts for different applications. If your reverse proxy was listening at the root level, all the requests would be forwarded to Artifactory.
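To make this concrete, here is a minimal sketch of one server reused for several apps via location contexts (the Jenkins and custom-service upstream ports are hypothetical, only the Artifactory one comes from the config above):
location /artifactory/ {
    proxy_pass http://localhost:8081/artifactory/;
}
location /jenkins/ {
    proxy_pass http://localhost:8080/jenkins/;   # hypothetical port
}
location /myapp/ {
    proxy_pass http://localhost:9000/;           # hypothetical port
}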
Now to answer your specific question, why does Artifactory do this? That is likely for clarity/consistency, since the default Tomcat shipped with Artifactory uses the artifactory keyword for its context. You are of course free to remove the public context from the NGINX configuration, and everything will work as expected with the root context servername/, provided you make all the necessary changes (removing it from the rewrites, the location, and X-Artifactory-Override-Base-Url), as sketched below.
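For instance, a minimal, untested sketch of those root-context changes against the config above:
rewrite ^/$ /webapp/ redirect;
rewrite ^/(v1|v2)/(.*) /api/docker/$repo/$1/$2;
location / {
    proxy_pass http://localhost:8081/artifactory/;   # backend Tomcat context stays "artifactory"
    proxy_set_header X-Artifactory-Override-Base-Url $http_x_forwarded_proto://$host:$server_port;
    proxy_set_header X-Forwarded-Proto $http_x_forwarded_proto;
    proxy_set_header Host $http_host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}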

Wow, this is old but ranked high on JFrog's 'Community' portal. Basically, if you are trying to configure a reverse proxy in front of Artifactory, you will have an uphill job. From Artifactory 7.x they split the UI functions and API functions across ports 8082 and 8081. Maybe good for some technical reason, but really bad for anyone trying to configure a reverse proxy in front of it. Our only currently working nginx configurations are in front of Artifactory 6.x installations. In 7.x they made things even harder by pulling the reverse proxy config generator. The examples they have for both nginx and HAProxy today on their website DO NOT WORK. The HAProxy example is closest but uses old syntax that has been updated in recent years (reqirep becomes http-request replace-path). Honestly, we are looking for alternatives to Artifactory due to the lack of real support on the internet apart from additional paid service-level plans on top of the license cost.
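For reference, a rough sketch of the 7.x split-port routing as JFrog's docs describe it, assuming the default layout with the router/UI on 8082 and the Artifactory service on 8081 (untested here, given the above):
location / {
    proxy_pass http://localhost:8082;    # JFrog router + new UI
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
location ~ ^/artifactory/ {
    proxy_pass http://localhost:8081;    # Artifactory service (API traffic)
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}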

Related

Nginx/Pyramid custom SSL port

As a preface, I have been using the following stack for some time with great success:
NGINX - web proxy
SSL - configured in nginx
Pyramid web application, served by gunicorn
The above combo works great; here is a working configuration:
server {
    # listen on port 80
    listen 80;
    server_name portalapi.example.com;
    # Forward all traffic to SSL
    return 301 https://www.portalapi.example.com$request_uri;
}
server {
    # listen on port 80
    listen 80;
    server_name www.portalapi.example.com;
    # Forward all traffic to SSL
    return 301 https://www.portalapi.example.com$request_uri;
}
#ssl server
server {
    listen 443 ssl;
    ssl on;
    ssl_certificate /usr/local/etc/letsencrypt/live/portalapi.example.com/fullchain.pem;
    ssl_certificate_key /usr/local/etc/letsencrypt/live/portalapi.example.com/privkey.pem;
    server_name www.portalapi.example.com;
    client_max_body_size 10M;
    client_body_buffer_size 128k;
    location ~ /.well-known/acme-challenge/ {
        root /usr/local/www/nginx/portalapi;
        allow all;
    }
    location / {
        proxy_set_header Host $host;
        proxy_pass http://10.1.1.16:8005;
        #proxy_intercept_errors on;
        allow all;
    }
    error_page 404 500 502 503 504 /index.html;
    location = / {
        root /home/luke/ecom2/dist;
    }
}
Now, this is how I serve my public-facing apps, and it works very well. For all my internal applications, I used to simply direct users to an internal domain, for example http://subdomain.company.domain; again, this worked well for a long time.
Now, in the wake of the KRACK attack, although we have some very thorough firewall rules to prevent a lot of attacks, I want to force all internal traffic through SSL, and I don't want to use a self-signed certificate. I want to use Let's Encrypt so I can auto-renew certificates, which makes administration much easier (and cheaper).
In order to use Let's Encrypt, I need to have a public-facing DNS entry and server to perform the ACME challenge (for auto-renewing). Again, this was a very easy thing to set up in nginx, and the below config works perfectly for serving static content:
What it does: if a user from the internet accesses intranet.example.com, they simply see a forbidden message. However, if a local user tries, they get forwarded to intranet.example.com:8002, and port 8002 is only available locally, so there is no way external users can access a webpage on this site.
geo $local_user {
    192.168.155.0/24 0;
    172.16.10.0/28 1;
    172.16.155.0/24 1;
}
server {
    listen 80;
    server_name intranet.example.com;
    client_max_body_size 4M;
    client_body_buffer_size 128k;
    # Space for lets encrypt to perform challenges
    location ~ /\.well-known/ {
        root /usr/local/www/nginx/intranet;
    }
    if ($local_user) {
        # If user is local, redirect them to SSL proxy only available locally
        return 301 https://intranet.example.com:8002$request_uri;
    }
    # Default block all non local users see
    location / {
        root /home/luke/forbidden_html;
        index index.html;
    }
}
# This server block is only available to local users inside geo $local_user
# this block listens on an internal port only, so it is never available to
# external networks
server {
    listen 8002 default ssl; # listen on a port only accessible locally
    server_name intranet.example.com;
    ssl_certificate /usr/local/etc/letsencrypt/live/intranet.example.com/fullchain.pem;
    ssl_certificate_key /usr/local/etc/letsencrypt/live/intranet.example.com/privkey.pem;
    client_max_body_size 4M;
    client_body_buffer_size 128k;
    location / {
        allow 192.168.155.0/24;
        allow 172.16.10.0/28; # also add in allow/deny rules in this block (extra security)
        allow 172.16.155.0/24;
        root /home/luke/ecom2/dist;
        index index.html;
        deny all;
    }
}
Now here comes the problem of marrying Pyramid and nginx: if I use the same configuration as above, but with the below settings for my server on 8002:
server {
    listen 8002 default ssl; # listen on a port only accessible locally
    server_name intranet.example.com;
    ssl_certificate /usr/local/etc/letsencrypt/live/intranet.example.com/fullchain.pem;
    ssl_certificate_key /usr/local/etc/letsencrypt/live/intranet.example.com/privkey.pem;
    client_max_body_size 4M;
    client_body_buffer_size 128k;
    location / {
        allow 192.168.155.0/24;
        allow 172.16.10.0/28; # also add in allow/deny rules in this block (extra security)
        allow 172.16.155.0/24;
        # Forward all requests to python application server
        proxy_set_header Host $host;
        proxy_pass http://10.1.1.16:6543;
        proxy_intercept_errors on;
        deny all;
    }
}
I run into all sorts of problems. First off, inside Pyramid I was using the following code in my views/templates:
request.route_url # get route url for desired function
Now, using request.route_url with the above settings should cause https://intranet.example.com:8002/login to route to https://intranet.example.com:8002/welcome, but in reality this setup forwards the user to http://intranet.example.com/welcome. Again, this is not correct.
And if I use route_url with the NGINX proxy setting:
proxy_set_header Host $http_host;
NGINX returns a 400 error:
400: The plain HTTP request was sent to HTTPS port
And a request to https://intranet.example.com:8002/ gets reverted to http://intranet.example.com/login (omitting the port and https).
Then I used the same nginx settings (Host $http_host), but thought I would change to using:
request.route_path
My theory was that this should force everything to stay on the same URL prefix, and just forward a user from https://intranet.example.com:8002/login to https://intranet.example.com:8002/welcome, but in reality this setup performed the same way as using route_url.
With the same proxy_set_header Host $http_host; setting, I then get an error when navigating to https://intranet.example.com:8002:
400: The plain HTTP request was sent to HTTPS port
And a request to https://intranet.example.com:8002/ gets reverted to http://intranet.example.com/login (omitting the port and https).
Can anyone assist with the correct setup so that I can serve my application on https://intranet.example.com:8002?
EDIT:
Have also tried:
location / {
    allow 192.168.155.0/24;
    allow 172.16.10.0/28; # also add in allow/deny rules in this block (extra security)
    allow 172.16.155.0/24;
    # Forward all requests to python application server
    proxy_set_header Host $host:$server_port;
    proxy_pass http://10.1.1.16:8002;
    proxy_intercept_errors on;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    # root /home/luke/ecom2/dist;
    # index index.html;
    deny all;
}
Which gives the same result.
I’ve checked a similar configuration and your last example seems correct, at least for a simplistic gunicorn/pyramid app combination. It seems something is missing in your puzzle :)
Here’s my code (I’m new to Pyramid, so something might be done better):
helloworld.py
from pyramid.config import Configurator
from pyramid.renderers import render_to_response

def main(request):
    return render_to_response('templates:test.pt', {}, request=request)

with Configurator() as config:
    config.add_route('main', '/')
    config.add_view(main, route_name='main')
    config.include('pyramid_chameleon')
    app = config.make_wsgi_app()
templates/test.pt
<html>
<body>
Route url: ${request.route_url('main')}
</body>
</html>
My nginx config
server {
    listen 80;
    server_name pyramid.lan;
    location / {
        return 301 https://$server_name:8002$request_uri;
    }
}
server {
    listen 8002;
    server_name pyramid.lan;
    ssl on;
    ssl_certificate /usr/local/etc/nginx/cert/server.crt;
    ssl_certificate_key /usr/local/etc/nginx/cert/server.key;
    location / {
        proxy_set_header Host $host:$server_port;
        proxy_pass http://127.0.0.1:5678;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
This is how I run gunicorn:
gunicorn -w 1 -b 127.0.0.1:5678 helloworld:app
And yes, it works:
$ curl --insecure https://pyramid.lan:8002/
<html>
<body>
Route url: https://pyramid.lan:8002/
</body>
</html>
$ curl -D - http://pyramid.lan
HTTP/1.1 301 Moved Permanently
Server: nginx/1.12.2
Date: Thu, 02 Nov 2017 20:41:50 GMT
Content-Type: text/html
Content-Length: 185
Connection: keep-alive
Location: https://pyramid.lan:8002/
Let's figure out what might go wrong in your case:
HTTP 400 usually pops up when you go over HTTP instead of HTTPS to a server awaiting HTTPS requests. If there's no typo in the post and it indeed occurs when you navigate to https://intranet.example.com:8002, it would be nice to see a curl request showing this and a tcpdump showing what's happening. Actually, you can easily reproduce it by simply typing http://intranet.example.com:8002.
Another idea is that you're doing a redirect from your app and the link gets broken when the redirect occurs. A better description of how the user may navigate from https://intranet.example.com:8002/login to .../welcome would be helpful.
One more idea is that your app is not that simple and you use some middleware / customization that makes the default logic work differently, so your X-Forwarded-Proto header gets ignored - in this case the behavior would be just as you described.
The issue here is, obviously, the missing port within the Location headers that your backend produces.
Now, why is the port missing? Most certainly, because of the following code:
proxy_set_header Host $host;
Note that $host itself does not contain $server_port, unlike $http_host, so, your backend would have no way of knowing which port you meant if you just use $host all by itself.
Note that the default setting, proxy_redirect default, expects the Location header to correspond with the value from proxy_pass in order to do its magic (according to the documentation), so your explicit Host header setting likely interferes with that logic.
As such, from the nginx point of view, I see multiple possible independent solutions:
remove proxy_set_header Host, and let proxy_redirect do its magic;
set proxy_set_header Host appropriately, to include the port number, e.g., using $host:$server_port or $http_host as you see fit (if that doesn't work, then perhaps the deficiency is actually within your upstream app itself, but fear not -- read below);
provide a custom proxy_redirect setting, e.g., proxy_redirect https://pyramid.lan/ / (equivalent to proxy_redirect https://pyramid.lan/ https://pyramid.lan:8002/), which will ensure that all the Location responses will have the proper port; the only way this wouldn't work is if your upstream does non-HTTP redirects with the missing port.
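Applied to the intranet example from the question, that third option might look like this (a sketch, untested):
location / {
    proxy_set_header Host $host;
    proxy_pass http://10.1.1.16:6543;
    # rewrite backend Location headers that omit the port
    proxy_redirect http://intranet.example.com/ https://intranet.example.com:8002/;
    proxy_redirect https://intranet.example.com/ https://intranet.example.com:8002/;
}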

Artifactory Browsing With Nginx & HTTP SSO Too Slow

I have set up a reverse proxy between Nginx and Artifactory, following instructions from here: https://www.jfrog.com/confluence/display/RTF/nginx
I've also enabled HTTP SSO in Artifactory so that a user authenticated by the web server is able to log in to Artifactory automatically. Instructions followed from here: https://www.jfrog.com/confluence/display/RTF/Single+Sign-on
Everything is working except that Artifactory is really slow. When I go to the website (e.g. artifactory.myorg.com/webapp/#/home), a progress wheel comes up and keeps spinning on every page.
If I turn off Nginx and access Artifactory using its embedded Tomcat engine then everything works fine.
Is there anything I can do to fix this ?
Update
The browsing is fine as soon as I turn off the following setting:
proxy_set_header REMOTE_USER $remote_user;
I am guessing that Artifactory is processing this user header on every request, and maybe I need to do something on the Tomcat side or in the Artifactory settings to resolve that.
Here's how my nginx/artifactory config looks (it was generated by the Reverse Proxy setup page in Artifactory 4.4):
ssl_certificate /etc/ssl/certs/dummy.crt;
ssl_certificate_key /etc/ssl/keys/dummy.key;
ssl_session_cache shared:SSL:1m;
ssl_prefer_server_ciphers on;
server {
    listen 443 ssl;
    server_name dummy.net;
    if ($http_x_forwarded_proto = '') {
        set $http_x_forwarded_proto $scheme;
    }
    ## Application specific logs
    access_log /var/log/nginx/dummy-access.log;
    error_log /var/log/nginx/dummy-error.log;
    rewrite ^/$ /artifactory/webapp/ redirect;
    rewrite ^/artifactory$ /artifactory/webapp/ redirect;
    location /artifactory/ {
        auth_pam "Secure Zone";
        auth_pam_service_name "sevice";
        proxy_read_timeout 900;
        proxy_pass_header Server;
        proxy_cookie_path ~*^/.* /;
        proxy_pass http://127.0.0.1:8081/artifactory/;
        proxy_set_header DUMMY_USER $remote_user;
        proxy_set_header X-Artifactory-Override-Base-Url $http_x_forwarded_proto://$host:$server_port/artifactory;
        proxy_set_header X-Forwarded-Port $server_port;
        proxy_set_header X-Forwarded-Proto $http_x_forwarded_proto;
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
Yes. Using Nginx as a reverse proxy should not add noticeable overhead, and it could even speed up the experience if you use it to serve the static assets.
Your testing so far has implicated Nginx, so posting your related Nginx configuration would be helpful.
But I'll go out on a limb and make a guess without seeing it. You are likely using proxy_pass in Nginx to send requests on to Artifactory. If Artifactory is on the same host as Nginx, the proxy_pass address should be a port on 127.0.0.1. If you are instead including a domain name there, then your traffic might be doing something like being routed from Nginx back through a load balancer, through CloudFlare, or along some other inefficient route.
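In other words, something like the first proxy_pass below rather than the commented-out second one (a sketch; the hostname is just an example of the anti-pattern):
location /artifactory/ {
    proxy_pass http://127.0.0.1:8081/artifactory/;            # local hop, fast
    # proxy_pass http://artifactory.myorg.com/artifactory/;   # may route back out via LB/CDN
}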
After trying to reproduce your scenario a few times, I would recommend one more thing to isolate the problem.
Try to set a fixed username in the REMOTE_USER value, instead of a variable:
proxy_set_header REMOTE_USER username;
BTW, from the snippet it appears the header name is DUMMY_USER, while in the example you specified REMOTE_USER. Make sure the header name is the same as configured in Artifactory under Admin > Security | HTTP-SSO.
If this issue still reproduces, please contact support@jfrog.com.

How to create Kubernetes cluster serving its own container with SSL and NGINX

I'm trying to build a Kubernetes cluster with the following services inside:
Docker-registry (which will contain my django Docker image)
Nginx listening on both port 80 and 443
PostgreSQL
Several django applications served with gunicorn
letsencrypt container to generate and automatically renew signed SSL certificates
My problem is a chicken and egg problem that occurs during the creation of the cluster:
My SSL certificates are stored in a secret volume that is generated by the letsencrypt container. To be able to generate the certificate, we need to prove we own the domain name, and this is done by validating that a file is accessible from the server name (basically this consists of Nginx being able to serve a static file over port 80).
So here occurs my first problem: to serve the static file needed by letsencrypt, I need to have nginx started. But the SSL part of nginx can't be started if the secret hasn't been mounted, and the secret is generated only when letsencrypt succeeds...
So, a simple solution could be to have 2 Nginx containers: one listening only on port 80 that will be started first, then letsencrypt, then we start a second Nginx container listening on port 443.
-> This kind of looks like a waste of resources in my opinion, but why not.
Now assuming I have 2 nginx containers, I want my Docker Registry to be accessible over https.
So in my nginx configuration, I'll have a docker-registry.conf file looking like:
upstream docker-registry {
    server registry:5000;
}
server {
    listen 443;
    server_name docker.thedivernetwork.net;
    # SSL
    ssl on;
    ssl_certificate /etc/nginx/conf.d/cacert.pem;
    ssl_certificate_key /etc/nginx/conf.d/privkey.pem;
    # disable any limits to avoid HTTP 413 for large image uploads
    client_max_body_size 0;
    # required to avoid HTTP 411: see Issue #1486 (https://github.com/docker/docker/issues/1486)
    chunked_transfer_encoding on;
    location /v2/ {
        # Do not allow connections from docker 1.5 and earlier
        # docker pre-1.6.0 did not properly set the user agent on ping, catch "Go *" user agents
        if ($http_user_agent ~ "^(docker\/1\.(3|4|5(?!\.[0-9]-dev))|Go ).*$" ) {
            return 404;
        }
        # To add basic authentication to v2 use auth_basic setting plus add_header
        auth_basic "registry.localhost";
        auth_basic_user_file /etc/nginx/conf.d/registry.password;
        add_header 'Docker-Distribution-Api-Version' 'registry/2.0' always;
        proxy_pass http://docker-registry;
        proxy_set_header Host $http_host; # required for docker client's sake
        proxy_set_header X-Real-IP $remote_addr; # pass on real client's IP
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_read_timeout 900;
    }
}
The important part is the proxy_pass directive that redirects toward the registry container.
The problem I'm facing is that my Django Gunicorn server also has its configuration file, django.conf, in the same folder:
upstream django {
    server django:5000;
}
server {
    listen 443 ssl;
    server_name example.com;
    charset utf-8;
    ssl on;
    ssl_certificate /etc/nginx/conf.d/cacert.pem;
    ssl_certificate_key /etc/nginx/conf.d/privkey.pem;
    ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:!aNULL:!MD5;
    client_max_body_size 20M;
    location / {
        # checks for static file, if not found proxy to app
        try_files $uri @proxy_to_django;
    }
    location @proxy_to_django {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        #proxy_pass_header Server;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Scheme $scheme;
        proxy_connect_timeout 65;
        proxy_read_timeout 65;
        proxy_pass http://django;
    }
}
So nginx will successfully start only under 3 conditions:
secret is mounted (this could be addressed by splitting Nginx into 2 separate containers)
registry service is started
django service is started
The problem is that the django container pulls its image from the registry service, so we are in a deadlock situation again.
I didn't mention it, but registry and django have different server_names, so nginx is able to serve them both.
The solution I thought about (but it's quite dirty!) would be to reload nginx several times with more and more configuration:
I start docker registry service
I start Nginx with only the registry.conf
I create my django rc and service
I reload nginx with both registry.conf and django.conf
If there were a way to make nginx start while ignoring a failing configuration, that would probably solve my issues as well.
How can I cleanly achieve this setup?
Thanks for your help
Thibault
Are you using Kubernetes Services for your applications?
With a Service in front of each of your Pods, you have a proxy for the Pods. Even if a Pod is not started, as long as the Service is started, nginx will find it when looking it up, since the Service has an IP assigned.
So you start the Services, then start nginx and whatever Pods you want, in the order you want.
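On the nginx side, that means the existing upstream blocks can stay as they are, provided each name matches a Service (a sketch, assuming Services named registry and django in the same namespace):
# kube-dns gives each Service a stable ClusterIP, so these names resolve
# when nginx starts, even while the Pods behind them are still being created
upstream docker-registry {
    server registry:5000;
}
upstream django {
    server django:5000;
}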

Deploying Meteor to production with Meteor-Up, SSL and NGINX

I'm having difficulty deploying my meteor app ("myApp" below) into production using meteor-up with https and NGINX as a proxy. In particular, I think I am having trouble configuring the correct ports and/or paths.
The deployment has worked in most respects. It is running on a Digital Ocean droplet with a MongoHQ (now Compose.io) database. My mup setup, mup reconfig (run many times now against my mup.json file) and mup deploy commands with meteor-up all report no errors. If I ssh into my Ubuntu environment on Digital Ocean and run status myApp, it reports myApp start/running, process 10049, and when I check my MongoHQ database, I can see the expected collections for myApp were created and seeded. On this basis, I think the app is running properly.
My problem is that I cannot reach the app when visiting the site, and having no experience with NGINX servers, I cannot tell if I am doing something very basic and wrong in setting up the ports and forwarding.
I have reproduced the relevant parts of my NGINX config file and mup.json file below.
The behavior I expected with the setup below is that if my meteor app listens on port 3000 in mup.json, the app should appear when I visit the site. In fact, if I set mup.json's env.PORT to 3000, when visiting the site my browser tells me there is a redirect loop. If I change mup's env.PORT to 80, or leave env.PORT out entirely, I receive a 502 Bad Gateway message - this part is to be expected, because myApp should be listening on localhost:3000 and I wouldn't expect to find anything anywhere else.
All help is MUCH appreciated.
MUP.JSON (in relevant part, let me know if more needs to be shown)
"env": {
    "PORT": 3000,
    "NODE_ENV": "production",
    "ROOT_URL": "http://myApp.com",
    "MONGO_URL": // working ok, not reproduced here,
    "MONGO_OPLOG_URL": // working ok I think,
    "MAIL_URL": // working ok
}
NGINX
server_tokens off;
# according to a digital ocean guide i followed here, https://www.digitalocean.com/community/tutorials/how-to-deploy-a-meteor-js-application-on-ubuntu-14-04-with-nginx, this section is needed to proxy web-socket connections
map $http_upgrade $connection_upgrade {
    default upgrade;
    '' close;
}
# HTTP
server {
    listen 80 default_server;
    listen [::]:80 default_server ipv6only=on;
    server_name myApp.com;
    # redirect non-SSL to SSL
    location / {
        rewrite ^ https://$server_name$request_uri? permanent;
    }
}
# HTTPS
server {
    listen 443 ssl spdy;
    # this domain must match Common Name (CN) in the SSL certificate
    server_name myApp.com;
    root html;
    index index.html index.htm;
    ssl_certificate /etc/nginx/ssl/tempcert.crt;
    ssl_certificate_key /etc/nginx/ssl/tempcert.key;
    ssl_stapling on;
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 5m;
    ssl_prefer_server_ciphers on;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers 'long string I didn't reproduce here';
    add_header Strict-Transport-Security "max-age=31536000;";
    location / {
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
Also note that the SSL certificates are configured and work fine, so I think it is something with how the ports, paths and forwarding are configured. I don't know where the redirect loop is coming from.
For anyone coming across this in the future, I was able to solve things by removing the force-ssl package from my bundled meteor app. Apparently force-ssl and an NGINX proxy are either redundant or, if used together, can cause too many redirects. This was not well documented in the materials I was able to locate.
If there is a configuration that supports using force-ssl together with a proxy that serves some purpose and is preferable to removing the package altogether, please post as I would be interested to know. Thanks.
I believe you can keep the force-ssl package as long as you add the X-Forwarded-Proto header to your Nginx config.
Example:
proxy_set_header X-Forwarded-Proto https;
Additionally, make sure you have X-Forwarded-For set as well, though that's already in the example you posted.
As the documentation of the force-ssl package says, you have to set the x-forwarded-proto header to https.
So your location block in the nginx configuration will look like:
location / {
    # your own config...
    proxy_set_header X-Forwarded-Proto https;
}
I'm running meteor behind an Nginx proxy. I got the error about too many redirects after installing force-ssl.
What worked was to remove force-ssl and then add the following lines to the location block in my nginx config:
proxy_set_header X-Forwarded-Proto https;
proxy_set_header X-Nginx-Proxy true;
Works perfectly now.

Using GitLab behind nginx enabled basic_auth?

I've successfully installed GitLab for management of private repositories (it's quite awesome!).
The problem I am having is that, by default, the GitLab login is presented when anyone hits my subdomain. I would like to protect the entire area with a basic_auth layer before the user gets the GitLab login screen. Unfortunately, enabling this breaks my ability to push/pull from GitLab.
My nginx config to enable basic_auth:
auth_basic "Restricted";
auth_basic_user_file htpasswd;
Any ideas on how I can enable basic_auth without breaking git / gitlab functionality?
Add this to /etc/gitlab/gitlab.rb:
nginx['custom_gitlab_server_config'] = "auth_basic 'Restricted';\n auth_basic_user_file htpasswd;\n"
And run gitlab-ctl reconfigure
Kind of a hack at the moment, but give this a shot.
Edit your nginx site configuration to add / modify the following locations:
location ^~ /api/v3/internal/allowed {
    proxy_read_timeout 300; # https://github.com/gitlabhq/gitlabhq/issues/694
    proxy_connect_timeout 300; # https://github.com/gitlabhq/gitlabhq/issues/694
    proxy_redirect off;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_pass http://gitlab;
}
location / {
    auth_basic "Gitlab Restricted Access";
    auth_basic_user_file /home/git/gitlab/htpasswd.users;
    # serve static files from defined root folder
    # @gitlab is a named location for the upstream fallback, see below
    try_files $uri $uri/index.html $uri.html @gitlab;
}
Leaving your @gitlab location block as is.
The trick is that you let /api/v3/internal/allowed bypass the authentication. If you look at the logs when you do a git pull/push, a request is made to the server asking whether or not to allow it. With the standard nginx config plus htpasswd, that request would be blocked because the server has no idea about the authentication required.
Anyway, not sure if there's a better alternative (couldn't find any), but this seems to work for me.
Your issue is that you want to set a password restriction for public access to GitLab, but let gitlab-shell access the local GitLab instance without restriction.
You can have two nginx configurations that differ by listening interface: change the line listen 0.0.0.0:80 default_server to listen 127.0.0.1:80 default_server for the unrestricted local one, and put basic_auth on the public one (see the sketch below).
https://github.com/gitlabhq/gitlabhq/blob/v7.7.2/lib/support/nginx/gitlab#L37-38
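A sketch of that split (untested; the gitlab upstream name follows the stock GitLab nginx config, and the server_name is hypothetical):
server {
    listen 127.0.0.1:80 default_server;    # local only: gitlab-shell API calls, no basic auth
    server_name gitlab.example.com;
    location / {
        proxy_set_header Host $http_host;
        proxy_pass http://gitlab;
    }
}
server {
    listen 0.0.0.0:80;                     # public interface: basic auth in front of GitLab
    server_name gitlab.example.com;
    auth_basic "Restricted";
    auth_basic_user_file htpasswd;
    location / {
        proxy_set_header Host $http_host;
        proxy_pass http://gitlab;
    }
}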
