Artifactory 7.x behind nginx issue

I have a fresh installation of Artifactory 7.2.1 (Docker based) which is working fine, but I want to access it via an nginx proxy, and that's not working.
My Artifactory is running at http://192.168.211.207:8082/.
The custom base URL is set to http://192.168.211.207:8081/artifactory, which redirects me to http://192.168.211.207:8082/.
Now I have an nginx server running on the same machine, also via Docker.
When I try to access:
http://192.168.211.207 -> redirects me to https://192.168.211.207/artifactory + 502 Bad Gateway
https://192.168.211.207 -> redirects me to https://192.168.211.207/ui + 502 Bad Gateway
http://192.168.211.207/artifactory -> redirects to HTTPS + 502 Bad Gateway
https://192.168.211.207/artifactory -> 502 Bad Gateway
I do not really understand what is behind port 8081, since I am not able to use it under any circumstances. Port 8082 is working, but not behind an nginx proxy.
Here is my docker-compose file:
version: '2'
services:
  artifactory:
    image: docker.bintray.io/jfrog/artifactory-pro:7.2.1
    container_name: artifactory
    ports:
      - 8081:8081
      - 8082:8082
    volumes:
      - /data/artifactory:/var/opt/jfrog/artifactory
    restart: always
    ulimits:
      nproc: 65535
      nofile:
        soft: 32000
        hard: 40000
  nginx:
    image: docker.bintray.io/jfrog/nginx-artifactory-pro:7.2.1
    container_name: nginx
    ports:
      - 80:80
      - 443:443
    depends_on:
      - artifactory
    links:
      - artifactory
    volumes:
      - /data/nginx:/var/opt/jfrog/nginx
    environment:
      - ART_BASE_URL=http://localhost:8081/artifactory
      - SSL=true
      # Set SKIP_AUTO_UPDATE_CONFIG=true to disable auto loading of NGINX conf
      #- SKIP_AUTO_UPDATE_CONFIG=true
    restart: always
    ulimits:
      nproc: 65535
      nofile:
        soft: 32000
        hard: 40000
and here is my nginx config file:
ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3;
ssl_certificate /var/opt/jfrog/nginx/ssl/example.crt;
ssl_certificate_key /var/opt/jfrog/nginx/ssl/example.key;
ssl_session_cache shared:SSL:1m;
ssl_prefer_server_ciphers on;

## server configuration
server {
    listen 443 ssl;
    listen 80;
    server_name ~(?<repo>.+)\.artifactory artifactory;

    if ($http_x_forwarded_proto = '') {
        set $http_x_forwarded_proto $scheme;
    }

    ## Application specific logs
    ## access_log /var/log/nginx/artifactory-access.log timing;
    ## error_log /var/log/nginx/artifactory-error.log;

    if ( $repo != "" ) {
        rewrite ^/(v1|v2)/(.*) /artifactory/api/docker/$repo/$1/$2;
    }

    rewrite ^/$ /ui/ redirect;
    rewrite ^/ui$ /ui/ redirect;

    proxy_buffer_size 128k;
    proxy_buffers 4 256k;
    proxy_busy_buffers_size 256k;
    chunked_transfer_encoding on;
    client_max_body_size 0;

    location / {
        proxy_read_timeout 2400s;
        proxy_pass_header Server;
        proxy_cookie_path ~*^/.* /;
        proxy_pass http://localhost:8082;
        proxy_set_header X-JFrog-Override-Base-Url $http_x_forwarded_proto://$host:$server_port;
        proxy_set_header X-Forwarded-Port $server_port;
        proxy_set_header X-Forwarded-Proto $http_x_forwarded_proto;
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        location ~ ^/artifactory/ {
            proxy_pass http://localhost:8082;
        }
    }
}
I can't figure out what I am doing wrong here, but it is possible I am missing something, since I am not an nginx expert.
Does anyone spot the issue?
Does anyone have an example nginx config file for Artifactory 7.x?

Thank you all for the answers. I was able to get in touch with support, and after talking with a specialist they confirmed that version 7.x no longer supports a web context, so in my case the only way to run two Artifactory instances is to create separate subdomains.
To be clear for future visitors of this topic: JFrog Support confirmed to me that, starting with version 7.0, Artifactory no longer supports the /webcontext feature, and they don't plan to support it.
Therefore mydomain.com/artifactory-one and mydomain.com/artifactory-two are no longer possible; you have to use subdomains instead:
mydomain.com/artifactory-one -> artifactory-one.mydomain.com
mydomain.com/artifactory-two -> artifactory-two.mydomain.com
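For illustration only, here is a minimal sketch (not an official JFrog configuration) of what the subdomain-based routing could look like in nginx, assuming two Artifactory 7.x instances that the proxy can reach as artifactory-one:8082 and artifactory-two:8082 (hypothetical upstream names); the SSL and proxy header directives from the configurations above still apply:

## one server block per subdomain instead of a shared path prefix
server {
    listen 443 ssl;
    server_name artifactory-one.mydomain.com;

    location / {
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass http://artifactory-one:8082;  # hypothetical address of instance one
    }
}

server {
    listen 443 ssl;
    server_name artifactory-two.mydomain.com;

    location / {
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass http://artifactory-two:8082;  # hypothetical address of instance two
    }
}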

The issue is probably here: since nginx runs in its own Docker container, localhost inside that container does not reach Artifactory, so this line does not work -> proxy_pass http://localhost:8082;
Use the IP instead. It worked for me.
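For example, a minimal sketch using the docker-compose service name from the question (artifactory) instead of localhost, assuming nginx and Artifactory share the same compose network:

location / {
    # "artifactory" resolves to the Artifactory container on the compose network;
    # "localhost" inside the nginx container points at the nginx container itself
    proxy_pass http://artifactory:8082;
}

The host IP (192.168.211.207 in the question) should work as well, as this answer suggests.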

try this:
location ~ ^/artifactory/ {
    proxy_pass http://127.0.0.1:8081;
}

Here is my nginx reverse proxy configuration, with an AWS NLB in front of the reverse proxy:
ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3;
ssl_certificate /var/opt/jfrog/nginx/ssl/tls.crt;
ssl_certificate_key /var/opt/jfrog/nginx/ssl/tls.key;
ssl_session_cache shared:SSL:1m;
ssl_prefer_server_ciphers on;

## server configuration
server {
    listen 443 ssl;
    listen 80;
    server_name ~(?<repo>.+)\.artifactory artifactory;

    if ($http_x_forwarded_proto = '') {
        set $http_x_forwarded_proto $scheme;
    }

    ## Application specific logs
    ## access_log /var/log/nginx/artifactory-access.log timing;
    ## error_log /var/log/nginx/artifactory-error.log;

    rewrite ^/$ /ui/ redirect;
    rewrite ^/ui$ /ui/ redirect;
    rewrite ^/artifactory/?$ / redirect;

    if ( $repo != "" ) {
        rewrite ^/(v1|v2)/(.*) /artifactory/api/docker/$repo/$1/$2 break;
    }

    chunked_transfer_encoding on;
    client_max_body_size 0;

    location / {
        proxy_read_timeout 2400;
        proxy_pass_header Server;
        proxy_cookie_path ~*^/.* /;
        proxy_buffer_size 128k;
        proxy_buffers 40 128k;
        proxy_busy_buffers_size 128k;
        proxy_pass http://artifactory:8082/;
        proxy_set_header X-JFrog-Override-Base-Url $http_x_forwarded_proto://$host;
        proxy_set_header Host $http_host;
        add_header Strict-Transport-Security always;

        location /artifactory/ {
            if ( $request_uri ~ ^/artifactory/(.*)$ ) {
                proxy_pass http://artifactory:8081/artifactory/$1;
            }
            proxy_pass http://artifactory:8081/artifactory/;
        }
    }
}

Related

How do I prevent nginx from redirecting to HTTPS in this particular setup?

I have a somewhat messy setup (no choice) where a local computer is made available to the internet through port forwarding. It is only reachable through [public IP]:8000. I cannot get a Let's Encrypt certificate for an IP address, but the part of the app that will be accessed from the internet does not require encryption. So instead, I'm planning on making the app available from the internet at http://[public IP]:8000/, and from the local network at https://[local DNS name]/ (port 80). The certificate used in the latter is issued by our network's root CA. Clients within the network trust this CA.
Furthermore, some small changes are made to the layout of the page when accessed from the internet. These changes are made by setting an embedded query param.
In summary, I need:
+--------------------------+--------------------------+----------+--------------------------------------+
| Accessed using | Redirect to (ideally) | URL args | Current state |
+--------------------------+--------------------------+----------+--------------------------------------+
| http://a.b.c.d:8000 | no redirect | embedded | Arg not appended, redirects to HTTPS |
| http://localhost:8000 | no redirect | embedded | Arg not appended, redirects to HTTPS |
| http://[local DNS name] | https://[local DNS name] | no args | Working as expected |
| https://[local DNS name] | no redirect | no args | Working as expected |
+--------------------------+--------------------------+----------+--------------------------------------+
For the top two rows, I don't want the redirection to HTTPS, and I need ?embedded to be appended to the URL.
Here's my config:
upstream channels-backend {
    server api:5000;
}

# Connections from the internet (no HTTPS)
server {
    listen 8000;
    listen [::]:8000;

    server_name [PUBLIC IP ADDRESS] localhost;

    keepalive_timeout 70;
    access_log /var/log/nginx/access.log;
    underscores_in_headers on;

    location = /favicon.ico {
        access_log off;
        log_not_found off;
    }

    location /admin/ {
        # Do not allow access to /admin/ from the internet.
        return 404;
    }

    location /static/rest_framework/ {
        alias /home/docker/backend/static/rest_framework/;
    }

    location /static/admin/ {
        alias /home/docker/backend/static/admin/;
    }

    location /files/media/ {
        alias /home/docker/backend/media/;
    }

    location /api/ {
        proxy_pass http://channels-backend/;
    }

    location ~* (service-worker\.js)$ {
        add_header 'Cache-Control' 'no-store, no-cache, must-revalidate, proxy-revalidate, max-age=0';
        expires off;
        proxy_no_cache 1;
    }

    location / {
        root /var/www/frontend/;
        # I want to add "?embedded" to the URL if accessed through http://[public IP]:8000.
        # I do not want to redirect to HTTPS.
        try_files $uri $uri/ /$uri.html?embedded =404;
    }
}

# Upgrade requests from local network to HTTPS
server {
    listen 80;

    keepalive_timeout 70;
    access_log /var/log/nginx/access.log;
    underscores_in_headers on;

    server_name [local DNS name] [local IP] localhost;

    # This works; it redirects to HTTPS.
    return 301 https://$http_host$request_uri;
}

# Server for connections from the local network (uses HTTPS)
server {
    listen 443 ssl;
    listen [::]:443 ssl;

    server_name [local DNS name] [local IP] localhost;

    ssl_password_file /etc/nginx/certificates/global.pass;
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 10m;
    ssl_protocols TLSv1.2 TLSv1.1;
    ssl_certificate /etc/nginx/certificates/certificate.crt;
    ssl_certificate_key /etc/nginx/certificates/privatekey.key;

    keepalive_timeout 70;
    access_log /var/log/nginx/access.log;
    underscores_in_headers on;

    location = /favicon.ico {
        access_log off;
        log_not_found off;
    }

    location /admin/ {
        proxy_pass http://channels-backend/admin/;
    }

    location /static/rest_framework/ {
        alias /home/docker/backend/static/rest_framework/;
    }

    location /static/admin/ {
        alias /home/docker/backend/static/admin/;
    }

    location /files/media/ {
        alias /home/docker/backend/media/;
    }

    location /api/ {
        # Proxy to backend
        proxy_read_timeout 30;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Host $server_name;
        proxy_redirect off;
        proxy_pass http://channels-backend/;
    }

    # ignore cache frontend
    location ~* (service-worker\.js)$ {
        add_header 'Cache-Control' 'no-store, no-cache, must-revalidate, proxy-revalidate, max-age=0';
        expires off;
        proxy_no_cache 1;
    }

    location / {
        root /var/www/frontend/;
        # Do not add "?embedded" argument.
        try_files $uri $uri/ /$uri.html =404;
    }
}
In case it matters, the server serves both the frontend (React) and an API (Django REST Framework), and it's deployed using Docker.
Any pointers would be greatly appreciated.
Edit: I commented out everything except the first server (port 8000), and requests are still being redirected to https://localhost:8000 from http://localhost:8000. I don't understand why. I'm using an incognito tab to rule out cache as the problem.
Edit 2: I noticed that Firefox sets an Upgrade-Insecure-Requests header with the initial request to http://localhost:8000. How can I ignore this header and not upgrade insecure requests? This request was made by Firefox, and not the frontend application.
Edit 3: Please take a look at the configuration below, which I'm now using to try to figure out the issue. How can this possibly result in a redirect from HTTP to HTTPS? There's now only one server block, and there's nothing here that could be interpreted as a wish to redirect from http://localhost:8000 to https://localhost:8000. Where does the redirect come from? Notice that I replaced some parts with redirects to Google, Yahoo and Facebook; I'm not redirected to any of these. I'm immediately upgraded to HTTPS, which should not even be possible with this configuration. It's worth mentioning that the redirect ends in SSL_ERROR_RX_RECORD_TOO_LONG. The certificate is accepted when accessing https://localhost/ (port 80) using the original configuration.
upstream channels-backend {
    server api:5000;
}

# Server for connections from the internet (does not use HTTPS)
server {
    listen 8000;
    listen [::]:8000 default_server;

    server_name localhost [public IP];

    keepalive_timeout 70;
    access_log /var/log/nginx/access.log;
    underscores_in_headers on;
    ssl off;

    location = /favicon.ico {
        access_log off;
        log_not_found off;
    }

    location /admin/ {
        # Do not allow access to /admin/ from the internet.
        return 404;
    }

    location /static/rest_framework/ {
        alias /home/docker/backend/static/rest_framework/;
    }

    location /static/admin/ {
        alias /home/docker/backend/static/admin/;
    }

    location /files/media/ {
        alias /home/docker/backend/media/;
    }

    location /api/ {
        proxy_pass http://channels-backend/;
    }

    location / {
        if ($args != "embedded") {
            return 301 https://google.com;
            # return 301 http://$http_host$request_uri?embedded;
        }
        return 301 https://yahoo.com;
        # root /var/www/frontend/;
        # try_files $uri $uri/ /$uri.html =404;
    }
}
Boy, do I feel stupid.
In my docker-compose.yml file, I had accidentally mapped port 8000 to 80:
nginx-server:
  image: nginx-server
  build:
    context: ./
    dockerfile: .docker/dockerfiles/NginxDockerfile
  restart: on-failure
  ports:
    - "0.0.0.0:80:80"
    - "0.0.0.0:443:443"
    - "0.0.0.0:8000:80" # Oops
So any request on port 8000 was received by nginx as a request on port 80. Thus, even a simple config like...
server {
    listen 8000;
    return 301 https://google.com;
}
... would still result in an attempt to upgrade to HTTPS (possible causes include unexpected caching of redirects, default behavior, etc.), because the request was actually being handled on port 80. I was thoroughly confused, but fixing my compose instructions fixed the problem:
nginx-server:
  image: nginx-server
  build:
    context: ./
    dockerfile: .docker/dockerfiles/NginxDockerfile
  restart: on-failure
  ports:
    - "0.0.0.0:80:80"
    - "0.0.0.0:443:443"
    - "0.0.0.0:8000:8000" # Fixed

Senaite LIMS (Plone 4.3.18) css not working on Nginx with https enabled

I've installed and set up senaite.lims, which is a Plone extension, running on Plone 4.3.18 installed with the Unified Installer, by adding senaite.lims to the buildout.cfg eggs.
It's running fine on port 8080, and I can get nginx to work proxying / to :8080, but when I start using https, the CSS of the site suddenly doesn't work anymore.
I looked at the source, and the generated HTML page links to the stylesheet with http://..., which I'm not sure causes problems, but if I try to open the .css file directly in the browser it loads fine.
I tried both redirecting port 80 to https and serving both an http and an https version, but neither one would get the page to render with the CSS. If anyone has any tips, or sees something misconfigured in the nginx config below, any help would be greatly appreciated.
Here is my nginx.conf:
user www-data;
worker_processes auto;
pid /run/nginx.pid;

events {
    worker_connections 768;
}

http {
    default_type application/octet-stream;
    include /etc/nginx/mime.types;
    sendfile on;
    keepalive_timeout 75;

    upstream plone {
        server 127.0.0.1:8080;
    }

    server {
        listen 80;
        listen 443 ssl http2;
        server_name 99.99.99.99; # changed for posting on SO

        ssl_certificate /etc/ssl/certs/nginx-selfsigned.crt;
        ssl_certificate_key /etc/ssl/private/nginx-selfsigned.key;

        error_log /var/log/nginx/nginx.vhost.error.log;

        location / {
            proxy_pass http://localhost:8080/;
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto https;
            proxy_buffer_size 128k;
            proxy_buffers 8 128k;
            proxy_busy_buffers_size 256k;
        }
    }
}
You forgot to rewrite the URL, e.g.:
rewrite ^(.*)$ /VirtualHostBase/$scheme/$host/senaite/VirtualHostRoot/$1 break;
Here is a complete working config for SENAITE:
server {
    listen 80;
    server_name senaite.mydomain.com;
    return 301 https://$server_name$request_uri;
}

server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name senaite.mydomain.com;

    # https://www.digitalocean.com/community/tutorials/how-to-secure-nginx-with-let-s-encrypt-on-ubuntu-16-04
    include snippets/ssl-senaite.mydomain.com.conf;
    include snippets/ssl-params.conf;
    include snippets/well-known.conf;

    access_log /var/log/nginx/senaite.access.log;
    error_log /var/log/nginx/senaite.error.log error;

    # Allow Cross-Origin Resource Sharing from our HTTP domain
    add_header "Access-Control-Allow-Origin" "http://senaite.ridingbytes.com";
    add_header "Access-Control-Allow-Credentials" "true";
    add_header "Access-Control-Allow-Methods" "GET, POST, OPTIONS";
    add_header "X-Frame-Options" "SAMEORIGIN";

    if ($http_cookie ~* "__ac=([^;]+)(?:;|$)" ) {
        # prevent infinite recursions between http and https
        break;
    }

    # rewrite ^(.*)(/logged_out)(.*) http://$server_name$1$2$3 redirect;

    location / {
        set $backend http://haproxy;

        # API calls take a different backend w/o caching
        if ($uri ~* "##API") {
            set $backend http://api;
        }

        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        rewrite ^(.*)$ /VirtualHostBase/$scheme/$host/senaite/VirtualHostRoot/$1 break;

        # proxy_pass $backend;
        proxy_pass http://plone;
    }
}

Configuring docker virtual repository in artifactory

I have a local Docker repository and a remote Docker repository, and I created a virtual Docker repository combining both. In order to access this virtual repository from the client side, does it need to be added to the reverse proxy as well?
Here is the current reverse proxy configuration
upstream artifactory_lb {
    server myserver.mycompany.com:8081 backup;
    server myserver.mycompany.com:8081;
}

log_format upstreamlog '[$time_local] $remote_addr - $remote_user - $server_name to: $upstream_addr: $request upstream_response_time $upstream_response_time msec $msec request_time $request_time';

ssl_certificate /etc/nginx/ssl/multidomain_cert_files/mycert.pem;
ssl_certificate_key /etc/nginx/ssl/multidomain_cert_files/mykey.key;
ssl_protocols TLSv1.1 TLSv1.2;
ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128:AES256:HIGH:!aNULL:!eNULL:!EXPORT:!DES:!MD5:!PSK:!RC4';
ssl_session_cache shared:SSL:10m;

server {
    listen 80;
    listen 443 ssl;

    client_max_body_size 2048M;

    location / {
        proxy_set_header Host $host;
        proxy_pass http://artifactory_lb;
        proxy_read_timeout 90;
    }

    access_log /var/log/nginx/access.log upstreamlog;

    location /basic_status {
        stub_status on;
        allow all;
    }
}

# Server configuration
server {
    listen 2222 ssl;

    if ($http_x_forwarded_proto = '') {
        set $http_x_forwarded_proto $scheme;
    }

    rewrite ^/(v1|v2)/(.*) /api/docker/myrepo_images/$1/$2;

    client_max_body_size 0;
    chunked_transfer_encoding on;

    location / {
        allow all;
        proxy_read_timeout 900;
        proxy_pass_header Server;
        proxy_cookie_path ~*^/.* /;
        proxy_set_header X-Artifactory-Override-Base-Url $http_x_forwarded_proto://$host:$server_port;
        proxy_set_header X-Forwarded-Port $server_port;
        proxy_set_header X-Forwarded-Proto $http_x_forwarded_proto;
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://artifactory_lb/artifactory/;
    }
}
Yes. Docker registries are referenced by their host name only. This means that you'll need two virtual hosts in your reverse proxy with different hostnames (use the server_name directive for that), mapping to different Artifactory repositories.
The following example config (shortened) should do the trick:
server {
    listen 2222 ssl;
    server_name local-repo.my-artifactory.com;
    rewrite ^/(v1|v2)/(.*) /api/docker/myrepo_images/$1/$2;
    # <insert remaining configuration directives here>
}

server {
    listen 2222 ssl;
    server_name virtual-repo.my-artifactory.com;
    rewrite ^/(v1|v2)/(.*) /api/docker/myrepo_virtual/$1/$2;
    # <insert remaining configuration directives here>
}
Now you should be able to access both registries using the regular docker commands:
$ docker pull virtual-repo.my-artifactory.com:2222/foo/bar:latest
$ docker pull local-repo.my-artifactory.com:2222/foo/bar:latest
$ docker push local-repo.my-artifactory.com:2222/foo/bar:latest

Artifactory bad gateway error

I am trying to use Artifactory as a Docker registry, but pushing Docker images gives a Bad Gateway error.
Following is my nginx configuration:
upstream artifactory_lb {
    server artifactory01.mycomapany.com:8081;
    server artifactory01.mycomapany.com:8081 backup;
    server myLoadBalancer.mycompany.com:8081;
}

log_format upstreamlog '[$time_local] $remote_addr - $remote_user - $server_name to: $upstream_addr: $request upstream_response_time $upstream_response_time msec $msec request_time $request_time';

server {
    listen 80;
    listen 443 ssl;

    client_max_body_size 2048M;

    location / {
        proxy_set_header Host $host:$server_port;
        proxy_pass http://artifactory_lb;
        proxy_read_timeout 90;
    }

    access_log /var/log/nginx/access.log upstreamlog;

    location /basic_status {
        stub_status on;
        allow all;
    }
}

# Server configuration
server {
    listen 2222 ssl default_server;

    ssl_certificate /etc/nginx/ssl/self-signed/self.crt;
    ssl_certificate_key /etc/nginx/ssl/self-signed/self.key;
    ssl_protocols TLSv1.1 TLSv1.2;
    ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH';
    ssl_prefer_server_ciphers on;
    ssl_session_cache shared:SSL:10m;

    server_name myloadbalancer.mycompany.com;

    if ($http_x_forwarded_proto = '') {
        set $http_x_forwarded_proto $scheme;
    }

    rewrite ^/(v1|v2)/(.*) /api/docker/docker_repo/$1/$2;

    client_max_body_size 0;
    chunked_transfer_encoding on;

    location / {
        proxy_read_timeout 900;
        proxy_pass_header Server;
        proxy_cookie_path ~*^/.* /;
        proxy_set_header X-Artifactory-Override-Base-Url $http_x_forwarded_proto://$host;
        proxy_set_header X-Forwarded-Port $server_port;
        proxy_set_header X-Forwarded-Proto $http_x_forwarded_proto;
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://myloadbalancer.company.com:8081/artifactory/;
    }
}
The docker command I use to push images is
docker push myloadbalancer:2222/image_name
Nginx error logs show the following error:
24084 connect() failed (111: Connection refused) while connecting to upstream, client: internal_ip, server: , request: "GET /artifactory/inhouse HTTP/1.0", upstream: "http:/internal_ip:8081/artifactory/repo"
What am I missing?
This can be fixed by changing the proxy_pass to point at the upstream group instead of the load balancer host directly:
proxy_pass http://artifactory_lb;
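For context, a sketch of how this could look inside the port-2222 server block from the question, keeping the existing headers and only swapping the proxy_pass target (whether a /artifactory/ path prefix is still needed depends on your repository layout):

location / {
    proxy_read_timeout 900;
    proxy_pass_header Server;
    proxy_cookie_path ~*^/.* /;
    proxy_set_header X-Artifactory-Override-Base-Url $http_x_forwarded_proto://$host;
    proxy_set_header X-Forwarded-Port $server_port;
    proxy_set_header X-Forwarded-Proto $http_x_forwarded_proto;
    proxy_set_header Host $http_host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    # per the answer: point at the upstream group defined earlier
    proxy_pass http://artifactory_lb;
}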

Unable to push docker images to artifactory

I set up Artifactory as a Docker registry and am trying to push an image to it:
docker push nginxLoadBalancer.mycompany.com/repo_name:image_name
This fails with the following error:
The push refers to a repository [nginxLoadBalancer.mycompany.com/repo_name] (len: 1)
unable to ping registry endpoint https://nginxLoadBalancer.mycompany.com/v0/
v2 ping attempt failed with error: Get https://nginxLoadBalancer.mycompany.com/v2/: Bad Request
v1 ping attempt failed with error: Get https://nginxLoadBalancer.mycompany.com/v1/_ping: Bad Request
This is my nginx conf
upstream artifactory_lb {
    server mNginxLb.mycompany.com:8081;
    server mNginxLb.mycompany.com backup;
}

log_format upstreamlog '[$time_local] $remote_addr - $remote_user - $server_name to: $upstream_addr: $request upstream_response_time $upstream_response_time msec $msec request_time $request_time';

server {
    listen 80;
    listen 443 ssl;

    ssl_certificate /etc/nginx/ssl/my-certs/myCert.pem;
    ssl_certificate_key /etc/nginx/ssl/my-certs/myserver.key;

    client_max_body_size 2048M;

    location / {
        proxy_set_header Host $host:$server_port;
        proxy_pass http://artifactory_lb;
        proxy_read_timeout 90;
    }

    access_log /var/log/nginx/access.log upstreamlog;

    location /basic_status {
        stub_status on;
        allow all;
    }
}

# Server configuration
server {
    listen 2222 ssl;
    server_name mNginxLb.mycompany.com;

    if ($http_x_forwarded_proto = '') {
        set $http_x_forwarded_proto $scheme;
    }

    rewrite ^/(v1|v2)/(.*) /api/docker/my_local_repo_key/$1/$2;

    client_max_body_size 0;
    chunked_transfer_encoding on;

    location / {
        proxy_read_timeout 900;
        proxy_pass_header Server;
        proxy_cookie_path ~*^/.* /;
        proxy_pass http://artifactory_lb;
        proxy_set_header X-Artifactory-Override-Base-Url $http_x_forwarded_proto://$host;
        proxy_set_header X-Forwarded-Port $server_port;
        proxy_set_header X-Forwarded-Proto $http_x_forwarded_proto;
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
There are no errors in the nginx error log. What might be wrong?
I verified that the SSL verification works fine with the setup. Do I need to set up authentication before I push images?
I also verified that the Artifactory server is listening on port 2222.
Update:
I added the following to the nginx configuration:
location /v1 {
    proxy_pass http://myNginxLb.company.com:8080/artifactory/api/docker/docker-local/v1;
}
With this, it now gives a 405 Not Allowed error when trying to push to the repository.
I fixed this by removing the location /v1 block and also changing the proxy_pass to point to the upstream servers.
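For reference, a hedged sketch of what the final docker-facing server block could look like after that fix, assembled only from directives already shown in the question (no /v1 location, and proxy_pass going back through the artifactory_lb upstream):

server {
    listen 2222 ssl;
    server_name mNginxLb.mycompany.com;

    # map docker registry API calls onto the Artifactory docker API
    rewrite ^/(v1|v2)/(.*) /api/docker/my_local_repo_key/$1/$2;

    location / {
        # no separate /v1 location; everything is proxied to the upstream group
        proxy_pass http://artifactory_lb;
        proxy_set_header X-Artifactory-Override-Base-Url $http_x_forwarded_proto://$host;
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}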
