Artifactory Browsing With Nginx & HTTP SSO Too Slow

I have set up a reverse proxy between Nginx and Artifactory, following the instructions here: https://www.jfrog.com/confluence/display/RTF/nginx
I've also enabled HTTP SSO in Artifactory so that a user already authenticated by the proxy is logged in to Artifactory automatically. Instructions followed from here: https://www.jfrog.com/confluence/display/RTF/Single+Sign-on
Everything is working except that Artifactory is really slow. When I go to the website (e.g. artifactory.myorg.com/webapp/#/home), a progress wheel comes up and keeps spinning on every page.
If I turn off Nginx and access Artifactory directly through its embedded Tomcat engine, everything works fine.
Is there anything I can do to fix this?
Update
The browsing is fine as soon as I turn off the following setting:
proxy_set_header REMOTE_USER $remote_user;
I am guessing that Artifactory processes this header on every request, and that I may need to change something on the Tomcat side or in the Artifactory settings to resolve it.
Here's how my Nginx/Artifactory config looks (it was generated by the Reverse Proxy setup page in Artifactory 4.4):
ssl_certificate /etc/ssl/certs/dummy.crt;
ssl_certificate_key /etc/ssl/keys/dummy.key;
ssl_session_cache shared:SSL:1m;
ssl_prefer_server_ciphers on;

server {
    listen 443 ssl;
    server_name dummy.net;

    if ($http_x_forwarded_proto = '') {
        set $http_x_forwarded_proto $scheme;
    }

    ## Application specific logs
    access_log /var/log/nginx/dummy-access.log;
    error_log /var/log/nginx/dummy-error.log;

    rewrite ^/$ /artifactory/webapp/ redirect;
    rewrite ^/artifactory$ /artifactory/webapp/ redirect;

    location /artifactory/ {
        auth_pam "Secure Zone";
        auth_pam_service_name "sevice";
        proxy_read_timeout 900;
        proxy_pass_header Server;
        proxy_cookie_path ~*^/.* /;
        proxy_pass http://127.0.0.1:8081/artifactory/;
        proxy_set_header DUMMY_USER $remote_user;
        proxy_set_header X-Artifactory-Override-Base-Url $http_x_forwarded_proto://$host:$server_port/artifactory;
        proxy_set_header X-Forwarded-Port $server_port;
        proxy_set_header X-Forwarded-Proto $http_x_forwarded_proto;
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

Yes. Using Nginx as a reverse proxy should not add noticeable overhead, and it could even speed up the experience if you use it to serve the static assets.
Your testing so far has implicated Nginx, so posting your related Nginx configuration would be helpful.
But I'll go out on a limb and make a guess without seeing it. You are likely using proxy_pass in Nginx to send requests on to Artifactory. If Artifactory is on the same host as Nginx, the proxy_pass address should be a port on 127.0.0.1. If you are instead using a domain name there, your traffic might be taking an inefficient route, such as going from Nginx back through a load balancer, through CloudFlare, or some other external hop.
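For example, a minimal sketch of the two variants (the host and paths are placeholders, not taken from your setup):

# Efficient: Artifactory on the same host, proxied over loopback
location /artifactory/ {
    proxy_pass http://127.0.0.1:8081/artifactory/;
}

# Potentially slow: a public domain name here can send traffic back out
# through a load balancer, CDN, or other external hop before it returns
location /artifactory/ {
    proxy_pass http://artifactory.example.com/artifactory/;
}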

After trying to reproduce your scenario a few times, I would recommend trying one more thing to isolate the problem.
Try setting a fixed username in the REMOTE_USER value, instead of a variable:
proxy_set_header REMOTE_USER username;
BTW, from the snippet it appears the header name is DUMMY_USER, while in the example you specified REMOTE_USER. Make sure the header name is the same as the one configured in Artifactory under Admin > Security > HTTP SSO.
If this issue still reproduces, please contact support@jfrog.com.

Related

Nginx location / vs /artifactory

I am looking at the nginx configuration to set up a docker repository
###########################################################
## this configuration was generated by JFrog Artifactory ##
###########################################################

## add ssl entries when https has been set in config
ssl_certificate /etc/nginx/ssl/demo.pem;
ssl_certificate_key /etc/nginx/ssl/demo.key;
ssl_session_cache shared:SSL:1m;
ssl_prefer_server_ciphers on;

## server configuration
server {
    listen 443 ssl;
    listen 80;
    server_name ~(?<repo>.+)\.art.local art.local;

    if ($http_x_forwarded_proto = '') {
        set $http_x_forwarded_proto $scheme;
    }

    ## Application specific logs
    ## access_log /var/log/nginx/art.local-access.log timing;
    ## error_log /var/log/nginx/art.local-error.log;

    rewrite ^/$ /artifactory/webapp/ redirect;
    rewrite ^/artifactory/?(/webapp)?$ /artifactory/webapp/ redirect;
    rewrite ^/(v1|v2)/(.*) /artifactory/api/docker/$repo/$1/$2;

    chunked_transfer_encoding on;
    client_max_body_size 0;

    location /artifactory/ {
        proxy_read_timeout 900;
        proxy_pass_header Server;
        proxy_cookie_path ~*^/.* /;
        proxy_pass http://localhost:8081/artifactory/;
        proxy_set_header X-Artifactory-Override-Base-Url $http_x_forwarded_proto://$host:$server_port/artifactory;
        proxy_set_header X-Forwarded-Port $server_port;
        proxy_set_header X-Forwarded-Proto $http_x_forwarded_proto;
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
Why is the location directive set to /artifactory/ vs / (the root location)?
The location directive is /artifactory/ and not / because you are using a public context. That is to say, all access to Artifactory will be in the form of servername/artifactory/ and not servername/. This has the advantage that you can use the same URL for multiple applications, for example, something like this:
Artifactory -> servername/artifactory/
Jenkins -> servername/jenkins/
My Custom Service -> servername/myapp/
In other words, it allows you to reuse the same servername (and port) with different contexts for different applications. If your reverse proxy was listening at the root level, all the requests would be forwarded to Artifactory.
Now, to answer your specific question: why does Artifactory do this? That is likely for clarity/consistency, since the default Tomcat shipped with Artifactory uses the artifactory keyword for its context. You are of course free to remove the public context from the NGINX configuration, and everything will work as expected with the root context servername/, provided you make all the necessary changes (removing it from the rewrites, the location, and X-Artifactory-Override-Base-Url).
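For illustration, a rough, untested sketch of what a root-context version of the generated config above might look like (adapted from the question's snippet, not an official template):

server {
    listen 443 ssl;
    server_name art.local;

    # root context: no /artifactory prefix in the rewrite or location
    rewrite ^/$ /webapp/ redirect;

    location / {
        proxy_read_timeout 900;
        proxy_pass_header Server;
        proxy_pass http://localhost:8081/artifactory/;
        # note the override base URL also loses the /artifactory suffix
        proxy_set_header X-Artifactory-Override-Base-Url $http_x_forwarded_proto://$host:$server_port;
        proxy_set_header X-Forwarded-Port $server_port;
        proxy_set_header X-Forwarded-Proto $http_x_forwarded_proto;
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}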
Wow, this is old but it ranked high on JFrog's 'Community' portal. Basically, if you are trying to configure a reverse proxy in front of Artifactory, you will have an uphill job. From Artifactory 7.x they split UI functions and API functions across ports 8082 and 8081. Maybe good for some technical reason, but really bad for anyone trying to configure a reverse proxy in front of it. Our only currently working nginx configurations are in front of Artifactory 6.x implementations. In 7.x they made things even harder by pulling the reverse proxy config generator. The examples they have for both nginx and haproxy today on their website DO NOT WORK. The haproxy example is closest, but it uses old syntax that has been updated in past years (reqirep becomes http-request replace-path). Honestly, we are looking for alternatives to Artifactory due to the lack of real support on the Internet apart from additional paid service-level plans on top of the license cost.
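For what it's worth, a minimal, untested sketch of the kind of split routing a 7.x front end appears to need, assuming the default ports mentioned above (8082 for the UI/router, 8081 for the Artifactory service); treat it as an outline, not a verified configuration:

# UI and general traffic goes to the JFrog router
location / {
    proxy_pass http://127.0.0.1:8082;
    proxy_set_header Host $http_host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
}

# Direct REST/API traffic goes to the Artifactory service
location /artifactory/ {
    proxy_pass http://127.0.0.1:8081/artifactory/;
    proxy_set_header Host $http_host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
}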

How to create Kubernetes cluster serving its own container with SSL and NGINX

I'm trying to build a Kubernetes cluster with the following services inside:
Docker-registry (which will contain my django Docker image)
Nginx listening on both port 80 and 443
PostgreSQL
Several django applications served with gunicorn
letsencrypt container to generate and automatically renew signed SSL certificates
My problem is a chicken and egg problem that occurs during the creation of the cluster:
My SSL certificates are stored in a secret volume that is generated by the letsencrypt container. To be able to generate the certificate, we need to show we own the domain name, and this is done by validating that a file is accessible from the server name (basically, this consists of Nginx being able to serve a static file over port 80).
So here occurs my first problem: to serve the static file needed by letsencrypt, I need to have Nginx started. But the SSL part of Nginx can't be started if the secret hasn't been mounted, and the secret is generated only when letsencrypt succeeds...
So, a simple solution could be to have 2 Nginx containers: one listening only on port 80 that will be started first, then letsencrypt, then we start a second Nginx container listening on port 443.
-> This kind of looks like a waste of resources in my opinion, but why not.
Now assuming I have 2 nginx containers, I want my Docker Registry to be accessible over https.
So in my nginx configuration, I'll have a docker-registry.conf file looking like:
upstream docker-registry {
    server registry:5000;
}

server {
    listen 443;
    server_name docker.thedivernetwork.net;

    # SSL
    ssl on;
    ssl_certificate /etc/nginx/conf.d/cacert.pem;
    ssl_certificate_key /etc/nginx/conf.d/privkey.pem;

    # disable any limits to avoid HTTP 413 for large image uploads
    client_max_body_size 0;

    # required to avoid HTTP 411: see Issue #1486 (https://github.com/docker/docker/issues/1486)
    chunked_transfer_encoding on;

    location /v2/ {
        # Do not allow connections from docker 1.5 and earlier
        # docker pre-1.6.0 did not properly set the user agent on ping, catch "Go *" user agents
        if ($http_user_agent ~ "^(docker\/1\.(3|4|5(?!\.[0-9]-dev))|Go ).*$" ) {
            return 404;
        }

        # To add basic authentication to v2 use auth_basic setting plus add_header
        auth_basic "registry.localhost";
        auth_basic_user_file /etc/nginx/conf.d/registry.password;
        add_header 'Docker-Distribution-Api-Version' 'registry/2.0' always;

        proxy_pass http://docker-registry;
        proxy_set_header Host $http_host; # required for docker client's sake
        proxy_set_header X-Real-IP $remote_addr; # pass on real client's IP
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_read_timeout 900;
    }
}
The important part is the proxy_pass that redirects toward the registry container.
The problem I'm facing is that my Django Gunicorn server also has its configuration file in the same folder, django.conf:
upstream django {
    server django:5000;
}

server {
    listen 443 ssl;
    server_name example.com;
    charset utf-8;

    ssl on;
    ssl_certificate /etc/nginx/conf.d/cacert.pem;
    ssl_certificate_key /etc/nginx/conf.d/privkey.pem;
    ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:!aNULL:!MD5;

    client_max_body_size 20M;

    location / {
        # checks for static file, if not found proxy to app
        try_files $uri @proxy_to_django;
    }

    location @proxy_to_django {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        #proxy_pass_header Server;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Scheme $scheme;
        proxy_connect_timeout 65;
        proxy_read_timeout 65;
        proxy_pass http://django;
    }
}
So nginx will successfully start only under 3 conditions:
secret is mounted (this could be addressed by splitting Nginx into 2 separate containers)
registry service is started
django service is started
The problem is that the django image is pulled from the registry service, so we are in a deadlock situation again.
I didn't mention it, but registry and django have different server names, so Nginx is able to serve them both.
The solution I thought about (but it's quite dirty!) would be to reload Nginx several times with more and more configuration:
I start docker registry service
I start Nginx with only the registry.conf
I create my django rc and service
I reload nginx with both registry.conf and django.conf
If there were a way to make Nginx start while ignoring a failing configuration, that would probably solve my issues as well.
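One technique that might do exactly that is deferring upstream DNS resolution to request time, so Nginx can start before an upstream exists; a rough, untested sketch (the resolver address is hypothetical and must match the cluster's DNS, e.g. the kube-dns Service IP from the pod's /etc/resolv.conf):

server {
    listen 443;
    server_name docker.thedivernetwork.net;

    # hypothetical cluster DNS address; adjust for your cluster
    resolver 10.0.0.10 valid=10s;

    location /v2/ {
        # using a variable makes nginx resolve the name per request,
        # so startup no longer fails when the registry isn't up yet
        set $registry_upstream http://registry:5000;
        proxy_pass $registry_upstream;
    }
}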
How can I cleanly achieve this setup?
Thanks for your help
Thibault
Are you using Kubernetes Services for your applications?
With a Service in front of each of your Pods, you have a proxy for the Pods. Even if a Pod is not started, as long as its Service is created, Nginx will find it when looking it up, since the Service has an IP assigned.
So you start the Services, then start Nginx and whatever Pods you want, in the order you want.
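To sketch what that means on the Nginx side (assuming Services named registry and django in the default namespace, matching the upstream names in the question):

# Service names resolve via cluster DNS to stable ClusterIPs,
# so nginx can start before the backing Pods are ready
upstream docker-registry {
    server registry.default.svc.cluster.local:5000;
}

upstream django {
    server django.default.svc.cluster.local:5000;
}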

Google OAuth2 OmniAuth Provider callback not working with GitLab behind reverse proxy

I've installed GitLab 8.0.2 on a VM, and I have an nginx reverse proxy set up to direct HTTP traffic to the VM. I am able to view the main login page for GitLab, but when I try to log in using the Google OAuth2 method, the callback fails to log me in after I enter my correct credentials. I simply get directed back to the GitLab login page.
Where might the problem be? The reverse proxy settings? GitLab settings (i.e. the Google OAuth config)?
Below is my nginx conf:
upstream gitlab {
    server 192.168.122.134:80;
}

server {
    listen 80;
    server_name myserver.com;

    access_log /var/log/nginx/gitlab.access.log;
    error_log /var/log/nginx/gitlab.error.log;

    root /dev/null;

    ## send request back to gitlab ##
    location / {
        proxy_pass http://gitlab;
        proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;
        proxy_redirect off;
        proxy_buffering off;
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
Interestingly, the old setup I had used iptables to redirect port 81 on the host machine to port 80 on the GitLab VM, and, in that case, the Google OAuth callback worked. I'd prefer to have people simply use standard port 80 for accessing my GitLab instance, though, so I want this reverse proxy method to work.
GitLab 8.x has quite a few new things. Although I don't see anything specifically wrong with your nginx.conf file, it is pretty short compared to the example in the GitLab repository. Look through https://gitlab.com/gitlab-org/gitlab-ce/blob/master/lib/support/nginx/gitlab-ssl to get an idea of the configuration you should consider adding.
Once your nginx.conf file is updated, read through GitLab OmniAuth documentation and the Google OAuth2 integration documentation under 'Providers' on that OmniAuth page. Make sure you provide the correct callback URL to Google when registering.
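One specific thing to verify: the authorized redirect URI registered in the Google Developers Console must exactly match the callback URL GitLab sends, which for the Google provider is normally of this form (scheme and host taken from your setup; the path is the usual OmniAuth convention, so double-check it against your GitLab version's documentation):

http://myserver.com/users/auth/google_oauth2/callback

A mismatch here (http vs. https, or an internal hostname leaking through the proxy) will silently bounce you back to the login page.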

Deploying Meteor to production with Meteor-Up, SSL and NGINX

I'm having difficulty deploying my meteor app ("myApp" below) into production using meteor-up with https and NGINX as a proxy. In particular, I think I am having trouble configuring the correct ports and/or paths.
The deployment has worked in most respects. It is running on a digital ocean droplet with a mongohq (now compose.io) database. My mup setup, mup reconfig (run now many times on my mup.json file) and mup deploy commands with meteor-up all report no errors. If I ssh into my ubuntu environment on digital ocean and run status myApp it reports myApp start/running, process 10049, and when I check my mongohq database, I can see the expected collections for myApp were created and seeded. I think on this basis that the app is running properly.
My problem is that I cannot reach it when visiting the site, and, having no experience with NGINX servers, I cannot tell if I am doing something very basic and wrong in setting up the ports and forwarding.
I have reproduced the relevant parts of my NGINX config file and mup.json file below.
The behavior I expected with the setup below is that if my meteor app listens on port 3000 in mup.json the app should appear when I visit the site. In fact, if I set mup.json's env.PORT to 3000, when visiting the site my browser tells me there is a redirect loop. If I change mup's env.PORT to 80, or leave the env.PORT out entirely, I receive a 502 Bad Gateway message - this part is to be expected because myApp should be listening on localhost:3000 and I wouldn't expect to find anything anywhere else.
All help is MUCH appreciated.
MUP.JSON (in relevant part, lmk if more needs to be shown)
"env": {
"PORT": 3000,
"NODE_ENV": "production",
"ROOT_URL": "http://myApp.com",
"MONGO_URL": // working ok, not reproduced here,
"MONGO_OPLOG_URL": // working ok I think,
"MAIL_URL": // working ok
}
NGINX
server_tokens off;

# according to a digital ocean guide i followed here, https://www.digitalocean.com/community/tutorials/how-to-deploy-a-meteor-js-application-on-ubuntu-14-04-with-nginx, this section is needed to proxy web-socket connections
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

# HTTP
server {
    listen 80 default_server;
    listen [::]:80 default_server ipv6only=on;
    server_name myApp.com;

    # redirect non-SSL to SSL
    location / {
        rewrite ^ https://$server_name$request_uri? permanent;
    }
}

# HTTPS
server {
    listen 443 ssl spdy;

    # this domain must match Common Name (CN) in the SSL certificate
    server_name myApp.com;
    root html;
    index index.html index.htm;

    ssl_certificate /etc/nginx/ssl/tempcert.crt;
    ssl_certificate_key /etc/nginx/ssl/tempcert.key;
    ssl_stapling on;
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 5m;
    ssl_prefer_server_ciphers on;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers 'long string I didn't reproduce here';

    add_header Strict-Transport-Security "max-age=31536000;";

    location / {
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
Also note that the SSL certificates are configured and work fine so I think it is something with how the ports, paths and forwarding is configured. I don't know where the redirect loop is coming from.
For anyone coming across this in the future, I was able to solve things by removing the force-ssl package from my bundled Meteor app. Apparently force-ssl and an NGINX proxy are either redundant or, if used together, can cause too many redirects. This was not well documented in the materials I was able to locate.
If there is a configuration that supports using force-ssl together with a proxy, serves some purpose, and is preferable to removing the package altogether, please post it, as I would be interested to know. Thanks.
I believe you can keep the force-ssl package as long as you add the X-Forwarded-Proto header to your Nginx config.
Example:
proxy_set_header X-Forwarded-Proto https;
Additionally, make sure you have X-Forwarded-For set as well, though that's already in the example you posted.
As the documentation of the force-ssl package says, you have to set the x-forwarded-proto header to https.
So the location block in your nginx configuration will look like:
location / {
    # your own config...
    proxy_set_header X-Forwarded-Proto https;
}
I'm running Meteor behind an NGinx proxy. I got the error about too many redirects after installing force-ssl.
What worked was to remove force-ssl and then add the following lines to the location block in my nginx config:
proxy_set_header X-Forwarded-Proto https;
proxy_set_header X-Nginx-Proxy true;
Works perfectly now.

Using GitLab behind nginx enabled basic_auth?

I've successfully installed GitLab for management of private repositories (it's quite awesome!).
The problem I am having is that, by default, the GitLab login screen is presented when anyone hits my subdomain. I would like to protect the entire area with a basic_auth layer before the user gets the GitLab login screen. Unfortunately, this breaks my ability to push/pull from GitLab when it's enabled.
my nginx config to enable basic_auth:
auth_basic "Restricted";
auth_basic_user_file htpasswd;
Any ideas on how I can enable basic_auth without breaking git / gitlab functionality?
Add this to /etc/gitlab/gitlab.rb:
nginx['custom_gitlab_server_config'] = "auth_basic 'Restricted';\n auth_basic_user_file htpasswd;\n"
And run gitlab-ctl reconfigure
Kind of a hack at the moment, but give this a shot.
Edit your nginx site configuration to add/modify the following locations:
location ^~ /api/v3/internal/allowed {
    proxy_read_timeout 300; # https://github.com/gitlabhq/gitlabhq/issues/694
    proxy_connect_timeout 300; # https://github.com/gitlabhq/gitlabhq/issues/694
    proxy_redirect off;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_pass http://gitlab;
}

location / {
    auth_basic "Gitlab Restricted Access";
    auth_basic_user_file /home/git/gitlab/htpasswd.users;

    # serve static files from defined root folder;
    # @gitlab is a named location for the upstream fallback, see below
    try_files $uri $uri/index.html $uri.html @gitlab;
}
Leave your @gitlab location block as is.
The trick is that you let /api/v3/internal/allowed bypass the authentication. If you look at the logs when you do a git pull/push, a request is made to the server asking whether or not to allow it. With the standard nginx config plus htpasswd, that request would be blocked, because the requesting side has no idea about the authentication required.
Anyway, not sure if there's a better alternative (couldn't find any), but this seems to work for me.
Your issue is that you want to set a password restriction for public access to GitLab, but let GitLab Shell access the local GitLab instance without restriction.
You can have 2 nginx configurations depending on the IP interface: change the line listen 0.0.0.0:80 default_server to listen 127.0.0.1:80 default_server.
https://github.com/gitlabhq/gitlabhq/blob/v7.7.2/lib/support/nginx/gitlab#L37-38
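A rough, untested sketch of the resulting pair of server blocks (gitlab.example.com and the upstream name are illustrative):

# Internal-only server: GitLab Shell talks to GitLab over loopback,
# so no basic_auth here
server {
    listen 127.0.0.1:80 default_server;
    location / {
        proxy_pass http://gitlab;
    }
}

# Public server: everything else goes through basic_auth first
server {
    listen 0.0.0.0:80;
    server_name gitlab.example.com;
    location / {
        auth_basic "Restricted";
        auth_basic_user_file htpasswd;
        proxy_pass http://gitlab;
    }
}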
