I have a local docker repository and a remote docker repository, and I created a virtual docker repository combining both. In order to access this repository from the client side, does it need to be added to the reverse proxy as well?
Here is the current reverse proxy configuration:
upstream artifactory_lb {
server myserver.mycompany.com:8081 backup;
server myserver.mycompany.com:8081;
}
log_format upstreamlog '[$time_local] $remote_addr - $remote_user - $server_name to: $upstream_addr: $request upstream_response_time $upstream_response_time msec $msec request_time $request_time';
ssl_certificate /etc/nginx/ssl/multidomain_cert_files/mycert.pem;
ssl_certificate_key /etc/nginx/ssl/multidomain_cert_files/mykey.key;
ssl_protocols TLSv1.1 TLSv1.2;
ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128:AES256:HIGH:!aNULL:!eNULL:!EXPORT:!DES:!MD5:!PSK:!RC4';
ssl_session_cache shared:SSL:10m;
server {
listen 80;
listen 443 ssl;
client_max_body_size 2048M;
location / {
proxy_set_header Host $host;
proxy_pass http://artifactory_lb;
proxy_read_timeout 90;
}
access_log /var/log/nginx/access.log upstreamlog;
location /basic_status {
stub_status on;
allow all;
}
}
# Server configuration
server {
listen 2222 ssl;
if ($http_x_forwarded_proto = '') {
set $http_x_forwarded_proto $scheme;
}
rewrite ^/(v1|v2)/(.*) /api/docker/myrepo_images/$1/$2;
client_max_body_size 0;
chunked_transfer_encoding on;
location / {
allow all;
proxy_read_timeout 900;
proxy_pass_header Server;
proxy_cookie_path ~*^/.* /;
proxy_set_header X-Artifactory-Override-Base-Url $http_x_forwarded_proto://$host:$server_port;
proxy_set_header X-Forwarded-Port $server_port;
proxy_set_header X-Forwarded-Proto $http_x_forwarded_proto;
proxy_set_header Host $http_host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_pass http://artifactory_lb/artifactory/;
}
}
Yes. Docker registries are referenced by their host name only. This means you'll need two virtual hosts in your reverse proxy with different host names (use the server_name directive for that), each mapping to a different Artifactory repository.
The following example config (shortened) should do the trick:
server {
listen 2222 ssl;
server_name local-repo.my-artifactory.com;
rewrite ^/(v1|v2)/(.*) /api/docker/myrepo_images/$1/$2;
# <insert remaining configuration directives here>
}
server {
listen 2222 ssl;
server_name virtual-repo.my-artifactory.com;
rewrite ^/(v1|v2)/(.*) /api/docker/myrepo_virtual/$1/$2;
# <insert remaining configuration directives here>
}
Now you should be able to access both registries using the regular docker commands:
$ docker pull virtual-repo.my-artifactory.com:2222/foo/bar:latest
$ docker pull local-repo.my-artifactory.com:2222/foo/bar:latest
$ docker push local-repo.my-artifactory.com:2222/foo/bar:latest
File upload fails with JSF 2.3, Jakarta EE 8 and Wildfly 23 / Payara 5: uploading a file with <h:input> or <p:fileUpload> works fine, but fails when Nginx is turned on. The file is never received by the backing bean.
Is there any configuration to add to the application server (Payara or Wildfly)? Or does the Nginx config file contain errors?
app.conf:
upstream payara {
least_conn;
server localhost:8080 max_fails=3 fail_timeout=5s;
server localhost:8181 max_fails=3 fail_timeout=5s;
}
server {
if ($host = nocodefunctions.com) {
return 301 https://$host$request_uri;
} # managed by Certbot
listen 80;
access_log /var/log/nginx/payara-access.log;
error_log /var/log/nginx/payara-error.log;
#Replace with your domain
server_name nocodefunctions.com;
return 301 https://$host$request_uri;
}
server {
listen 443 ssl;
server_name nocodefunctions.com;
ssl_certificate /etc/letsencrypt/live/xxxxx/fullchain.pem; # managed by Certbot
ssl_certificate_key /etc/letsencrypt/live/xxxxx/privkey.pem; # managed by Certbot
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_prefer_server_ciphers on;
ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH";
location /nocodeapp-web-front-1.0 {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_set_header X-Forwarded-Proto https;
proxy_connect_timeout 240;
proxy_send_timeout 240;
proxy_read_timeout 240;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_pass http://payara$request_uri;
}
location = / {
proxy_pass http://payara;
return 301 https://nocodefunctions.com/nocodeapp-web-front-1.0;
}
}
The issue was that my file was larger than nginx's upload size limit, which defaults to 1m.
The solution is to add client_max_body_size 8M; (or any other value) to the config file; more details are available in this SO post.
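For illustration, a minimal sketch of where the directive could go in the app.conf above (8M is just an example value, adjust it to your expected upload size):
server {
    listen 443 ssl;
    server_name nocodefunctions.com;
    # raise the request body limit from the 1m default so larger uploads are not rejected
    client_max_body_size 8M;
    # ... ssl_* directives and location blocks stay as they were
}
The directive can also be set in the http or location context if you want a different scope.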
I am trying to set up nginx for localhost in a Linux container.
Here is the config:
## server configuration
server {
listen 443 ssl;
listen 80 ;
## add ssl entries when https has been set in config
ssl_certificate /etc/ssl/cert.pem;
ssl_certificate_key /etc/ssl/key.pem;
ssl_session_cache shared:SSL:1m;
ssl_prefer_server_ciphers on;
server_name localhost;
if ($http_x_forwarded_proto = '') {
set $http_x_forwarded_proto $scheme;
}
## Application specific logs
## access_log /var/log/nginx/localhost-access.log timing;
## error_log /var/log/nginx/localhost-error.log;
rewrite ^/$ /artifactory/webapp/ redirect;
rewrite ^/artifactory/?(/webapp)?$ /artifactory/webapp/ redirect;
location /artifactory/ {
proxy_read_timeout 900;
proxy_pass_header Server;
proxy_cookie_path ~*^/.* /;
proxy_pass http://localhost:8081/artifactory/;
proxy_set_header X-Artifactory-Override-Base-Url $http_x_forwarded_proto://$host:$server_port/artifactory;
proxy_set_header X-Forwarded-Port $server_port;
proxy_set_header X-Forwarded-Proto $http_x_forwarded_proto;
proxy_set_header Host $http_host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
}
## server configuration
server {
listen 6555 ssl;
server_name localhost;
if ($http_x_forwarded_proto = '') {
set $http_x_forwarded_proto $scheme;
}
## Application specific logs
## access_log /var/log/nginx/localhost-access.log timing;
## error_log /var/log/nginx/localhost-error.log;
rewrite ^/(v1|v2)/(.*) /artifactory/api/docker/docker-virtual/$1/$2;
client_max_body_size 0;
chunked_transfer_encoding on;
location /artifactory/ {
proxy_read_timeout 900;
proxy_pass_header Server;
proxy_cookie_path ~*^/.* /;
proxy_pass http://localhost:8081/artifactory/;
proxy_set_header X-Artifactory-Override-Base-Url $http_x_forwarded_proto://$host:$server_port/artifactory;
proxy_set_header X-Forwarded-Port $server_port;
proxy_set_header X-Forwarded-Proto $http_x_forwarded_proto;
proxy_set_header Host $http_host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
}
After I restart nginx I get the following error:
nginx: [warn] conflicting server name "localhost" on 0.0.0.0:80, ignored
Also, when I navigate to localhost:443 in the browser, I get a connection refused error.
What might be wrong?
Your server can't resolve the domain name localhost to a single IP address.
You may have a duplicate entry for your local virtual host name in the hosts file. Lines like these may be present:
127.0.0.1 localhost
0.0.0.0 localhost
Delete or modify the second one.
This problem can also be caused by running a virtual DNS service such as unbound; if you are running one, make sure it is configured correctly.
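For example, after deleting the duplicate entry, the relevant part of the hosts file would simply be:
127.0.0.1 localhost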
Is it possible to use an Nginx reverse proxy with SSL pass-through so that it can pass requests to a server that requires client certificate authentication?
That means the backend server will need the client's certificate, and will not need a certificate from the Nginx reverse proxy server.
Not sure how well this works in your situation, but newer (1.9.3+) versions of Nginx can pass (encrypted) TLS packets directly to an upstream server, using the stream block:
stream {
server {
listen 443;
proxy_pass backend.example.com:443;
}
}
If you want to target multiple upstream servers, distinguished by their hostnames, this is possible by using the nginx modules ngx_stream_ssl_preread and ngx_stream_map. The concept behind this is TLS Server Name Indication.
Dave T. outlines a solution nicely. See his answer on this network.
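For completeness, a minimal sketch of that approach, assuming illustrative host names and backend addresses (not taken from the question):
stream {
    # pick an upstream based on the SNI host name in the TLS ClientHello
    map $ssl_preread_server_name $backend {
        app1.example.com app1_upstream;
        app2.example.com app2_upstream;
        default          app1_upstream;
    }
    upstream app1_upstream { server 10.0.0.1:443; }
    upstream app2_upstream { server 10.0.0.2:443; }
    server {
        listen 443;
        ssl_preread on;       # read the SNI without terminating TLS
        proxy_pass $backend;  # forward the still-encrypted stream
    }
}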
Since we want to do SSL pass-through, SSL termination takes place at the backend nginx server. I also haven't seen an answer that takes care of plain HTTP connections as well.
The optimal solution is an Nginx acting as a Layer 7 and Layer 4 proxy at the same time. Something else that is rarely discussed is IP address redirection: when we use a proxy, this must be configured on the proxy, not on the backend server as usual.
Lastly, the client IP address must be preserved, so we must use the PROXY protocol to do this correctly.
Sounds confusing? It's not, really.
I came up with a solution that I am currently using in production, and it works flawlessly.
worker_processes 1;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
}
http {
variables_hash_bucket_size 1024;
variables_hash_max_size 1024;
map_hash_max_size 1024;
map_hash_bucket_size 512;
types_hash_bucket_size 512;
server_names_hash_bucket_size 512;
sendfile on;
tcp_nodelay on;
tcp_nopush on;
autoindex off;
server_tokens off;
keepalive_timeout 15;
client_max_body_size 100m;
upstream production_server {
server backend1:3080;
}
upstream staging_server {
server backend2:3080;
}
upstream ip_address {
server backend1:3080; #or backend2:3080 depending on your preference.
}
server {
server_name server1.tld;
listen 80;
listen [::]:80;
location / {
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Host $http_host;
proxy_set_header X-Forwarded-Host $server_name;
proxy_set_header Connection "";
#add_header X-Upstream $upstream_addr;
proxy_redirect off;
proxy_connect_timeout 300;
proxy_http_version 1.1;
proxy_buffers 16 16k;
proxy_buffer_size 64k;
proxy_cache_background_update on;
proxy_pass http://production_server$request_uri;
}
}
server {
server_name server2.tld;
listen 80;
listen [::]:80;
location / {
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Host $http_host;
proxy_set_header X-Forwarded-Host $server_name;
proxy_set_header Connection "";
#add_header X-Upstream $upstream_addr;
proxy_redirect off;
proxy_connect_timeout 300;
proxy_http_version 1.1;
proxy_buffers 16 16k;
proxy_buffer_size 16k;
proxy_cache_background_update on;
proxy_pass http://staging_server$request_uri;
}
}
server {
server_name 192.168.1.1; #replace with your own main ip address
listen 80;
listen [::]:80;
location / {
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Host $http_host;
proxy_set_header X-Forwarded-Host $server_name;
proxy_set_header Connection "";
#add_header X-Upstream $upstream_addr;
proxy_redirect off;
proxy_connect_timeout 300;
proxy_http_version 1.1;
proxy_buffers 16 16k;
proxy_buffer_size 16k;
proxy_cache_background_update on;
proxy_pass http://ip_address$request_uri;
}
}
}
stream {
map $ssl_preread_server_name $domain {
server1.tld production_server_https;
server2.tld staging_server_https;
192.168.1.1 ip_address_https;
default staging_server_https;
}
upstream production_server_https {
server backend1:3443;
}
upstream staging_server_https {
server backend2:3443;
}
upstream ip_address_https {
server backend1:3443;
}
server {
ssl_preread on;
proxy_protocol on;
tcp_nodelay on;
listen 443;
listen [::]:443;
proxy_pass $domain;
}
log_format proxy '$protocol $status $bytes_sent $bytes_received $session_time';
access_log /var/log/nginx/access.log proxy;
error_log /var/log/nginx/error.log debug;
}
Now the only thing left to do is to enable the PROXY protocol on the backend servers. The example below will get you going:
server {
real_ip_header proxy_protocol;
set_real_ip_from proxy;
server_name www.server1.tld;
listen 3080;
listen 3443 ssl http2;
listen [::]:3080;
listen [::]:3443 ssl http2;
include ssl_config;
# Non-www redirect
return 301 https://server1.tld$request_uri;
}
server {
real_ip_header proxy_protocol;
set_real_ip_from 1.2.3.4; # <--- proxy ip address, or proxy container hostname for docker
server_name server1.tld;
listen 3443 ssl http2 proxy_protocol; #<--- proxy protocol to the listen directive
listen [::]:3443 ssl http2 proxy_protocol; # <--- proxy protocol to the listen directive
root /var/www/html;
charset UTF-8;
include ssl_config;
#access_log logs/host.access.log main;
location ~ /.well-known/acme-challenge {
allow all;
root /var/www/html;
default_type "text/plain";
}
location / {
index index.php;
try_files $uri $uri/ =404;
}
error_page 404 /404.php;
# place rest of the location stuff here
}
Now everything should work like a charm.
I am trying to use artifactory as a docker registry, but pushing docker images gives a Bad Gateway error.
Following is my nginx configuration:
upstream artifactory_lb {
server artifactory01.mycompany.com:8081;
server artifactory01.mycompany.com:8081 backup;
server myLoadBalancer.mycompany.com:8081;
}
log_format upstreamlog '[$time_local] $remote_addr - $remote_user - $server_name to: $upstream_addr: $request upstream_response_time $upstream_response_time msec $msec request_time $request_time';
server {
listen 80;
listen 443 ssl;
client_max_body_size 2048M;
location / {
proxy_set_header Host $host:$server_port;
proxy_pass http://artifactory_lb;
proxy_read_timeout 90;
}
access_log /var/log/nginx/access.log upstreamlog;
location /basic_status {
stub_status on;
allow all;
}
}
# Server configuration
server {
listen 2222 ssl default_server;
ssl_certificate /etc/nginx/ssl/self-signed/self.crt;
ssl_certificate_key /etc/nginx/ssl/self-signed/self.key;
ssl_protocols TLSv1.1 TLSv1.2;
ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH';
ssl_prefer_server_ciphers on;
ssl_session_cache shared:SSL:10m;
server_name myloadbalancer.mycompany.com;
if ($http_x_forwarded_proto = '') {
set $http_x_forwarded_proto $scheme;
}
rewrite ^/(v1|v2)/(.*) /api/docker/docker_repo/$1/$2;
client_max_body_size 0;
chunked_transfer_encoding on;
location / {
proxy_read_timeout 900;
proxy_pass_header Server;
proxy_cookie_path ~*^/.* /;
proxy_set_header X-Artifactory-Override-Base-Url $http_x_forwarded_proto://$host;
proxy_set_header X-Forwarded-Port $server_port;
proxy_set_header X-Forwarded-Proto $http_x_forwarded_proto;
proxy_set_header Host $http_host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_pass http://myloadbalancer.company.com:8081/artifactory/;
}
}
The docker command I use to push images is
docker push myloadbalancer:2222/image_name
Nginx error logs show the following error:
24084 connect() failed (111: Connection refused) while connecting to upstream, client: internal_ip, server: , request: "GET /artifactory/inhouse HTTP/1.0", upstream: "http:/internal_ip:8081/artifactory/repo"
What am I missing?
This can be fixed by changing proxy_pass to point to the upstream group instead of a single backend host:
proxy_pass http://artifactory_lb;
I set up artifactory as a docker registry and am trying to push an image to it
docker push nginxLoadBalancer.mycompany.com/repo_name:image_name
This fails with the following error
The push refers to a repository [ nginxLoadBalancer.mycompany.com/repo_name] (len: 1)
unable to ping registry endpoint https://nginxLoadBalancer.mycompany.com/v0/
v2 ping attempt failed with error: Get https://nginxLoadBalancer.mycompany.com/v2/: Bad Request
v1 ping attempt failed with error: Get https://nginxLoadBalancer.mycompany.com/v1/_ping: Bad Request
This is my nginx conf
upstream artifactory_lb {
server mNginxLb.mycompany.com:8081;
server mNginxLb.mycompany.com backup;
}
log_format upstreamlog '[$time_local] $remote_addr - $remote_user - $server_name to: $upstream_addr: $request upstream_response_time $upstream_response_time msec $msec request_time $request_time';
server {
listen 80;
listen 443 ssl;
ssl_certificate /etc/nginx/ssl/my-certs/myCert.pem;
ssl_certificate_key /etc/nginx/ssl/my-certs/myserver.key;
client_max_body_size 2048M;
location / {
proxy_set_header Host $host:$server_port;
proxy_pass http://artifactory_lb;
proxy_read_timeout 90;
}
access_log /var/log/nginx/access.log upstreamlog;
location /basic_status {
stub_status on;
allow all;
}
}
# Server configuration
server {
listen 2222 ssl;
server_name mNginxLb.mycompany.com;
if ($http_x_forwarded_proto = '') {
set $http_x_forwarded_proto $scheme;
}
rewrite ^/(v1|v2)/(.*) /api/docker/my_local_repo_key/$1/$2;
client_max_body_size 0;
chunked_transfer_encoding on;
location / {
proxy_read_timeout 900;
proxy_pass_header Server;
proxy_cookie_path ~*^/.* /;
proxy_pass http://artifactory_lb;
proxy_set_header X-Artifactory-Override-Base-Url $http_x_forwarded_proto://$host;
proxy_set_header X-Forwarded-Port $server_port;
proxy_set_header X-Forwarded-Proto $http_x_forwarded_proto;
proxy_set_header Host $http_host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
}
There are no errors in the nginx error log. What might be wrong?
I verified that SSL verification works fine with this setup. Do I need to set up authentication before I push images?
I also verified that the artifactory server is listening on port 2222.
Update: I added the following to the nginx configuration:
location /v1 {
proxy_pass http://myNginxLb.company.com:8080/artifactory/api/docker/docker-local/v1;
}
With this, it now gives a 405 Not Allowed error when trying to push to the repository.
I fixed this by removing the location /v1 block and changing proxy_pass to point to the upstream servers, roughly as sketched below.
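A sketch of what the relevant part of the working configuration might then look like, reusing the upstream and repository key from the config above (the proxy_set_header directives stay unchanged):
server {
    listen 2222 ssl;
    server_name mNginxLb.mycompany.com;
    rewrite ^/(v1|v2)/(.*) /api/docker/my_local_repo_key/$1/$2;
    client_max_body_size 0;
    chunked_transfer_encoding on;
    location / {
        # single catch-all location; no separate "location /v1" block
        proxy_read_timeout 900;
        proxy_pass_header Server;
        proxy_pass http://artifactory_lb;
        # ... proxy_set_header directives as in the original server block
    }
}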