Parse Server & Dashboard restarting on PM2 causing 502 Bad Gateway - nginx

I have installed Parse Server and Parse Dashboard on an AWS EC2 instance (Ubuntu 14.04). I am using the pm2 process manager to configure and run both applications, and nginx as a reverse proxy.
Both applications work fine and I can access them from the client. The problem, however, is that when I access Parse Dashboard (which internally calls the local Parse Server), the majority of the POST requests come back with a 502 Bad Gateway error.
After some investigation I suspect pm2 is the problem, as it keeps restarting the applications a short while after all the POST requests are issued at once. I set the max_memory_restart parameter to 500M and killed and restarted the apps, but it made no difference.
I should mention this is my first time using pm2. So did I configure pm2 wrong, or am I missing something here? The pm2 error logs are empty.
The nginx error log shows the following:
2016/08/12 08:39:56 [error] 7792#0: *23 connect() failed (111: Connection refused) while connecting to upstream, client: xxx, server: xxx, request: "POST /parse/classes/AccountingDrawer HTTP/1.1", upstream: "http://127.0.0.1:1337/parse/classes/AccountingDrawer", host: "xxx", referrer: "https://xxx/dashboard/apps/myapp/browser/_Role"
Nginx default config
server {
    listen 443;
    server_name xxx;
    root /usr/share/nginx/html;
    index index.html index.htm;
    # log files
    access_log /var/log/nginx/parse.access.log;
    error_log /var/log/nginx/parse.error.log;
    ssl on;
    # Use certificate and key provided by Let's Encrypt:
    ssl_certificate /etc/letsencrypt/live/xxx/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/xxx/privkey.pem;
    ssl_session_timeout 5m;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;
    ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH';
    # Diffie-Hellman parameter for DHE ciphersuites, recommended 2048 bits
    ssl_dhparam /etc/nginx/ssl/dhparam.pem;
    # Pass requests for /parse/ to Parse Server instance at localhost:1337
    location /parse/ {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-NginX-Proxy true;
        proxy_pass http://localhost:1337/parse/;
        proxy_ssl_session_reuse off;
        proxy_set_header Host $http_host;
        proxy_redirect off;
    }
    # Pass requests for /dashboard/ to Parse Dashboard instance at localhost:4040
    location /dashboard/ {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-NginX-Proxy true;
        proxy_pass http://localhost:4040/dashboard/;
        proxy_ssl_session_reuse off;
        proxy_set_header Host $http_host;
        proxy_redirect off;
    }
    location / {
        try_files $uri $uri/ =404;
    }
}
pm2 ecosystem.json
{
  "apps" : [{
    "name" : "parse-server-wrapper",
    "script" : "/usr/bin/parse-server",
    "watch" : true,
    "merge_logs" : true,
    "cwd" : "/home/parse",
    "env": {
      ...
    },
    "max_memory_restart": "500M",
    "instances" : 2,
    "exec_interpreter" : "node",
    "exec_mode" : "cluster"
  },
  {
    "name" : "parse-dashboard-wrapper",
    "script" : "/usr/bin/parse-dashboard",
    "watch" : true,
    "merge_logs" : true,
    "cwd" : "/home/parse",
    "max_memory_restart": "500M",
    "env": {
      "HOST": "localhost",
      "PORT": "4040",
      "MOUNT_PATH": "/dashboard",
      "PARSE_DASHBOARD_ALLOW_INSECURE_HTTP": 1,
      ...
    }
  }]
}
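As a side note, pm2 keeps per-process restart counters and shows the watch flag and memory limit, so commands along these lines (a rough sketch using the app names from the ecosystem file above) should reveal what keeps triggering the restarts:
# List all pm2 apps with status, uptime and restart counts
pm2 list
# Detailed view of each app: restart count, watch flag, max_memory_restart, exec mode
pm2 describe parse-server-wrapper
pm2 describe parse-dashboard-wrapper
# Tail both apps' output while the dashboard fires its POST requests
pm2 logs --lines 100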

Related

Nginx only responding to 1 of multiple https servers

The problem I'm facing is that I have nginx configured with two HTTPS server blocks; one responds and works correctly, but the other, with a nearly identical server config, returns "connection refused".
System:
Description: Ubuntu 22.04 LTS
nginx version: nginx/1.18.0 (Ubuntu)
I am working with a default nginx.conf file, I have unlinked the default sites-available entry, and each server_name is a subdomain with its own SSL cert & key. When I check the access and error logs there are no entries explaining why the subdomain2 connection is refused, or even entries showing that a connection attempt was made. Both cert/key pairs were generated by the IT department at a university, and since one is working fine I have good reason to think both pairs are valid.
I'm no nginx expert, but I've set up multiple subdomains like this on different systems with success and am not sure what's going on. I've double- and triple-checked the basic stuff: a valid symlink exists in sites-enabled, no errors show up on nginx restart or in systemctl status, and the machine itself is listening on 0.0.0.0:https per the netstat output (and subdomain1 works correctly). I've also verified that the proxy_pass destination works when I point subdomain1 at it (also verified with curl on the nginx host).
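For reference, checks along these lines (using the placeholder hostnames from this question; adjust as needed) should show whether the refusal happens at the TLS/SNI level inside nginx or before the connection ever reaches it:
# Is nginx listening on 443 for all interfaces?
sudo ss -tlnp | grep ':443'
# Ask for the subdomain2 SNI name directly and see which certificate nginx presents
openssl s_client -connect 127.0.0.1:443 -servername subdomain2.base.edu </dev/null 2>/dev/null | openssl x509 -noout -subject
# Hit the server block while bypassing DNS, to rule out a DNS or firewall issue in front of nginx
curl -vk --resolve subdomain2.base.edu:443:127.0.0.1 https://subdomain2.base.edu/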
Let me know if there is any other information I can provide.
Any help is appreciated.
Thanks
/etc/nginx/sites-available/subdomain1:
server {
listen 443 ssl;
server_name subdomain1.base.edu;
ssl_certificate /path/server.crt;
ssl_certificate_key /path/server.key;
client_max_body_size 0;
add_header Strict-Transport-Security max-age=15768000;
location / {
proxy_pass http://127.0.0.1:8000;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
proxy_set_header X-Scheme $scheme;
proxy_buffering off;
}
}
/etc/nginx/sites-available/subdomain2:
server {
listen 443 ssl;
server_name subdomain2.base.edu;
ssl_certificate /path/server.crt;
ssl_certificate_key /path/server.key;
location / {
proxy_pass http://127.0.0.1:10123;
}
}
UPDATE (nginx -T output)
user@host:/etc/nginx/sites-available$ sudo nginx -T
[sudo] password:
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
.....
# configuration file /etc/nginx/sites-enabled/subdomain1.base.edu:
# top-level http config for websocket headers
# If Upgrade is defined, Connection = upgrade
# If Upgrade is empty, Connection = close
map $http_upgrade $connection_upgrade {
default upgrade;
'' close;
}
server {
listen 80;
server_name subdomain1.base.edu;
return 302 https://$host$request_uri;
}
server {
listen 443 ssl;
server_name subdomain1.base.edu;
ssl_certificate /path/.ssl/server.crt;
ssl_certificate_key /path/.ssl/server.key;
client_max_body_size 0;
add_header Strict-Transport-Security max-age=15768000;
include /etc/nginx/sites-available/shinyapps;
location / {
proxy_pass http://127.0.0.1:8000;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
# websocket headers
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
proxy_set_header X-Scheme $scheme;
proxy_buffering off;
}
}
# configuration file /etc/nginx/sites-available/shinyapps:
location /5627 {
proxy_pass http://localhost:5627/;
proxy_redirect / $scheme://$http_host/;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
proxy_read_timeout 20d;
proxy_buffering off;
}
# configuration file /etc/nginx/sites-enabled/subdomain2.base.edu:
#
# bustalab1 domain to proxy localhost shiny apps
server {
listen 80;
server_name subdomain2.base.edu;
# Tell all requests to port 80 to be 302 redirected to HTTPS
return 302 https://$host$request_uri;
}
server {
listen 443 ssl;
server_name subdomain2.base.edu;
ssl_certificate /path/.ssl/subdomain2.crt;
ssl_certificate_key /path/.ssl/subdomain2.key;
error_log /var/log/nginx/subdomain2_err.log debug;
access_log /var/log/nginx/subdomain2_acc.log;
location / {
proxy_pass http://127.0.0.1:10123;
}
}

Can't access my Vapor app from tutorial on my web server

I want to deploy a Vapor app on my server to use as the backend for my iOS app.
I'm pretty new to this topic. The only thing I did before was deploying a Django backend on the same server. I rebuilt my server to set up the Vapor backend.
To begin, I wanted to deploy a Vapor app that is as basic as possible.
I followed this tutorial (it's short):
https://medium.com/@ankitank/deploy-a-basic-vapor-app-with-nginx-and-supervisor-1ef303320726
I followed the steps and didn't get any errors.
The problem is that when I call [IP]/hello like in the tutorial, I get 502 Bad Gateway as the response.
Nginx gives me this error:
connect() failed (111: Connection refused) while connecting to upstream, client: [IP], server: _, request: "GET /hello HTTP/1.1", upstream: "http://127.0.0.1:8080/hello", host: "[IP]"
I hope you can help me with this. :)
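Since the error says the connection to 127.0.0.1:8080 was refused, a few generic checks (not from the tutorial) can confirm whether the Vapor app is actually running and listening there:
# Is anything listening on port 8080?
sudo ss -tlnp | grep ':8080'
# Does the app answer locally, bypassing nginx?
curl -v http://127.0.0.1:8080/hello
# The tutorial runs the app under supervisor; check its state
sudo supervisorctl status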
Update 1:
I changed the config to this:
server {
listen 80;
listen [::]:80;
server_name [DOMAIN];
error_log /var/log/[DOMAIN]_error.log warn;
access_log /var/log/[DOMAIN]_access.log;
add_header Strict-Transport-Security "max-age=31536000; includeSubdomains;";
large_client_header_buffers 8 32k;
location / {
# redirect all traffic to localhost:8080;
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-NginX-Proxy true;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_pass http://127.0.0.1:8080/;
proxy_redirect off;
proxy_read_timeout 86400;
# enables WS support
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
# prevents 502 bad gateway error
proxy_buffers 8 32k;
proxy_buffer_size 64k;
reset_timedout_connection on;
tcp_nodelay on;
client_max_body_size 10m;
}
location ~* ^.+\.(jpg|jpeg|gif|css|png|js|ico|xml|html|mp4)$ {
access_log off;
expires 30d;
root /home/[AppName]/Public;
}
}
Unfortunately I still get this one:
2019/12/01 14:48:04 [error] 6801#6801: *1 connect() failed (111: Connection refused) while connecting to upstream, client: [IP], server: [DOMAIN], request: "GET /hello HTTP/1.1", upstream: "http://127.0.0.1:8080/hello", host: [DOMAIN]
Update 2:
The error was related to this line:
proxy_pass http://127.0.0.1:8080/;
I had to change it to this:
proxy_pass http://localhost:8080/;
It seems like localhost and 127.0.0.1 are not treated the same here.
Now I can run the app via "vapor run" and I can access it. :)
Big thanks to @imike for all the help!!!
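A plausible explanation (I have not verified it) is that the app was listening only on the IPv6 loopback ::1, so connections to 127.0.0.1 were refused while localhost resolved to ::1. Commands like these show which loopback address the app is actually bound to:
# Show the exact local address the app is bound to on port 8080
sudo ss -tlnp | grep ':8080'
# Try both loopback addresses explicitly
curl -v http://127.0.0.1:8080/hello
curl -g -v 'http://[::1]:8080/hello'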
You could try my production config, which works 100% with SSL and websocket support:
server {
listen 443;
listen [::]:443;
server_name mydomain.com;
error_log /var/log/mydomain.com_error.log warn;
access_log /var/log/mydomain.com_access.log;
ssl on;
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 10m;
ssl_prefer_server_ciphers on;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_certificate /etc/letsencrypt/live/mydomain.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/mydomain.com/privkey.pem;
ssl_ciphers 'HIGH:!aNULL:!MD5:!kEDH';
add_header Strict-Transport-Security "max-age=31536000; includeSubdomains;";
ssl_stapling on;
ssl_stapling_verify on;
large_client_header_buffers 8 32k;
location / {
# redirect all traffic to localhost:8080;
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-NginX-Proxy true;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_pass http://127.0.0.1:8080/;
proxy_redirect off;
proxy_read_timeout 86400;
# enables WS support
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
# prevents 502 bad gateway error
proxy_buffers 8 32k;
proxy_buffer_size 64k;
reset_timedout_connection on;
tcp_nodelay on;
client_max_body_size 10m;
}
location ~* ^.+\.(jpg|jpeg|gif|css|png|js|ico|xml|html|mp4)$ {
access_log off;
expires 30d;
root /apps/myApp/Public;
}
}
At the end of the config you can see that nginx will serve static files from the Public folder directly, without the Vapor app being involved.
In your configure.swift file you should use FileMiddleware only on macOS, where you test the app without nginx, because this middleware is really slow, so I suggest putting it inside a compiler check:
#if os(macOS)
middlewares.use(FileMiddleware.self) // Serves files from `Public/` directory
#endif
The error was related to this line in the config file:
proxy_pass http://127.0.0.1:8080/;
I had to change it to this:
proxy_pass http://localhost:8080/;
It seems like localhost and 127.0.0.1 were not treated the same.
Now I can run the app via "vapor run" and I can access it. :)
Big thanks to @imike for all the help! He solved it!

nginx: why do all routes work but one returns "301 Moved Permanently"?

I went through similar questions but without any success.
Let's say I have two Node.js apps running on a server:
// App GoodMorning
var express = require('express');
var app = express();
app.post('/breakfast', function (req, res) {
console.log("Eating breakfast");
res.sendStatus(200);
});
app.get('/', function (req, res) {
res.send('GoodMorning');
});
app.listen(3000, function () {
console.log('GoodMorning app listening on port 3000!');
});
and
// App GoodEvening
var express = require('express');
var app = express();
app.post('/diner', function (req, res) {
console.log("Eating diner");
res.sendStatus(200);
});
app.get('/', function (req, res) {
res.send('GoodEvening');
});
app.listen(4000, function () {
console.log('GoodEvening app listening on port 4000!');
});
And let's say nginx is used as a reverse proxy, so it has to send requests to the correct port, right? So the "magic" file looks like this:
# HTTP - redirect all requests to HTTPS:
server {
listen 80;
listen [::]:80 default_server ipv6only=on;
return 301 https://$host$request_uri;
}
# HTTPS - proxy requests on to local Node.js app:
server {
listen 443;
server_name iamhungry.com;
ssl on;
# Use certificate and key provided by Let's Encrypt:
ssl_certificate /etc/letsencrypt/live/iamhungry.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/iamhungry.com/privkey.pem;
ssl_session_timeout 5m;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_prefer_server_ciphers on;
ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH';
# Pass requests for / to localhost:3000:
location / {
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-NginX-Proxy true;
proxy_pass http://localhost:3000/;
proxy_ssl_session_reuse off;
proxy_set_header Host $http_host;
proxy_cache_bypass $http_upgrade;
proxy_redirect off;
}
# Pass requests for /homepageevening to localhost:4000:
location /homepageevening {
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-NginX-Proxy true;
proxy_pass http://localhost:4000/;
proxy_ssl_session_reuse off;
proxy_set_header Host $http_host;
proxy_cache_bypass $http_upgrade;
proxy_redirect off;
}
# Pass requests for /diner to localhost/diner:4000:
location /diner {
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-NginX-Proxy true;
proxy_pass http://localhost/diner:4000/;
proxy_ssl_session_reuse off;
proxy_set_header Host $http_host;
proxy_cache_bypass $http_upgrade;
proxy_redirect off;
}
}
Then the following requests give the following results:
$ curl iamhungry.com
$ GoodMorning // OK
$ curl -X POST iamhungry.com/breakfast
--> I see "Eating brakfast" in the log file of breakfast.js // OK
$ curl iamhungry.com/homepageevening
$ GoodEvening // OK
$ curl -X POST iamhungry.com/diner -I
HTTP/1.1 301 Moved Permanently // why ?!
--> And I see nothing in the log file of evening.js // why ?!
I am not at ease with these proxy concepts. I went through the nginx documentation and found no help on this, so I wonder whether my understanding of how it works is correct.
OK, thank you @RichardSmith. I had to fix the configuration file:
location /diner {
...
proxy_pass http://localhost:4000/diner;
And run my tests with curl http://iamhungry.com -I -L instead of curl http://iamhungry.com -I to actually follow the redirect.
Here is what I was missing:
A 301 is not an error. I was testing in the terminal using curl http://iamhungry.com -I, but adding the -L option (curl http://iamhungry.com -I -L) allowed me to follow the redirection and reach the end of the chain. So the 301 is in fact normal with nginx here, because the first server block's whole job is to redirect HTTP requests to HTTPS.
The port number is attached to the host name, not to the path. Thanks @RichardSmith.
From @RichardSmith's link: "[...] the part of a normalized request URI matching the location is replaced by a URI specified in the directive."
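For completeness, a small sketch of the two curl calls being compared (domain and route taken from the question); the first only shows the redirect target, the second follows it:
# Without -L: curl stops at the HTTP-to-HTTPS 301 and just prints its Location header
curl -sI http://iamhungry.com/diner | grep -i '^location'
# With -L: curl follows the redirect chain through to the final response
curl -I -L http://iamhungry.com/diner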

RoR 5.0.0 ActionCable wss WebSocket handshake: Unexpected response code: 301

Hello, I'm trying to serve a simple chat using RoR 5.0.0 beta (with Puma) in production mode (on localhost there are no problems).
This is my Nginx configuration:
upstream websocket {
server 127.0.0.1:28080;
}
server {
listen 443;
server_name mydomain;
ssl_certificate ***/server.crt;
ssl_certificate_key ***/server.key;
ssl on;
ssl_session_cache builtin:1000 shared:SSL:10m;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers HIGH:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4;
ssl_prefer_server_ciphers on;
access_log /var/log/nginx/jenkins.access.log;
location / {
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_pass http://localhost:3000;
proxy_read_timeout 90;
proxy_redirect http://localhost:3000 https://mydomain;
location /cable/{
proxy_pass http://websocket/;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host $http_host;
break;
}
}
This is config/redis/cable.yml
production:
url: redis://localhost:6379/1
development:
url: redis://localhost:6379/2
test:
url: redis://localhost:6379/3
and config/environments/production.rb
# Action Cable endpoint configuration
config.action_cable.url = 'wss://mydomain/cable'
# config.action_cable.allowed_request_origins = [ 'http://example.com', /http:\/\/example.*/ ]
# Force all access to the app over SSL, use Strict-Transport-Security, and use secure cookies.
config.force_ssl = false
And this is the error I'm receiving:
application-[...].js:27 WebSocket connection to 'wss://mydomain/cable' failed: Error during WebSocket handshake: Unexpected response code: 301
Any tips? :) Thanks
I solved it by adding Phusion Passenger.
The nginx config is now:
server {
listen 80;
passenger_enabled on;
passenger_app_env production;
passenger_ruby /../ruby-2.3.0/ruby;
root /path to application/public;
client_max_body_size 4G;
keepalive_timeout 10;
[...]
location /cable {
passenger_app_group_name websocket;
passenger_force_max_concurrent_requests_per_process 0;
}
}
You have to remove the default config/redis folder and move cable.yml into /config only.
For SSL just enable the default ssl options and it will work. :-)
Thanks everyone for the help
Your websocket URI is /cable/ and not /cable, so the latter will hit the location / block. Try:
location /cable {
rewrite ^/cable$ / break;
rewrite ^/cable(.*)$ $1 break;
proxy_pass http://websocket;
...
}
Also, I'm not sure you need the break; in there. I presume the missing } between the two location blocks is just a typo in the question.
EDIT1: Added rewrite to restore correct upstream mapping.
EDIT2: An alternative solution is to explicitly rewrite /cable to /cable/ like this:
location = /cable { rewrite ^ /cable/ last; }
location /cable/ {
proxy_pass http://websocket/;
...
}
I spent almost 5 hours yesterday trying to solve this particular problem. I ended up using a separate domain for the websocket connection, called ws.example.com, as everything else resulted in a 301 redirect.
Here is my nginx.conf file. I've removed the SSL parts, but you can just insert your own. Note that you need nginx 1.4+, as earlier versions don't support websocket proxying.
upstream socket {
server unix:/mysocket fail_timeout=0;
}
upstream websocket {
server 127.0.0.1:28080;
}
map $http_upgrade $connection_upgrade {
default upgrade;
'' close;
}
server {
listen 443 ssl;
server_name ws.example.com;
access_log off;
# SSL configs here
location / {
proxy_pass http://websocket/;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
}
}
server {
listen 443 ssl;
server_name example.com;
# SSL configs here
location @app {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_pass http://socket;
}
error_page 500 502 503 504 /500.html;
client_max_body_size 4G;
keepalive_timeout 10;
}
I read somewhere that allowed_request_origins didn't work as expected, so I went the safe way (until the bug is fixed) and turned the check off completely using ActionCable.server.config.disable_request_forgery_protection = true.
Here's my cable.ru file for starting Action Cable:
require ::File.expand_path('../../config/environment', __FILE__)
Rails.application.eager_load!
require 'action_cable/process/logging'
Rails.logger.level = 0
ActionCable.server.config.disable_request_forgery_protection = true
run ActionCable.server
I'm also using the latest rails version from Github.
gem "rails", github: "rails/rails"

Unable to push docker images to artifactory

I set up Artifactory as a Docker registry and am trying to push an image to it:
docker push nginxLoadBalancer.mycompany.com/repo_name:image_name
This fails with the following error:
The push refers to a repository [ nginxLoadBalancer.mycompany.com/repo_name] (len: 1)
unable to ping registry endpoint https://nginxLoadBalancer.mycompany.com/v0/
v2 ping attempt failed with error: Get https://nginxLoadBalancer.mycompany.com/v2/: Bad Request
v1 ping attempt failed with error: Get https://nginxLoadBalancer.mycompany.com/v1/_ping: Bad Request
This is my nginx conf
upstream artifactory_lb {
server mNginxLb.mycompany.com:8081;
server mNginxLb.mycompany.com backup;
}
log_format upstreamlog '[$time_local] $remote_addr - $remote_user - $server_name to: $upstream_addr: $request upstream_response_time $upstream_response_time msec $msec request_time $request_time';
server {
listen 80;
listen 443 ssl;
ssl_certificate /etc/nginx/ssl/my-certs/myCert.pem;
ssl_certificate_key /etc/nginx/ssl/my-certs/myserver.key;
client_max_body_size 2048M;
location / {
proxy_set_header Host $host:$server_port;
proxy_pass http://artifactory_lb;
proxy_read_timeout 90;
}
access_log /var/log/nginx/access.log upstreamlog;
location /basic_status {
stub_status on;
allow all;
}
}
# Server configuration
server {
listen 2222 ssl;
server_name mNginxLb.mycompany.com;
if ($http_x_forwarded_proto = '') {
set $http_x_forwarded_proto $scheme;
}
rewrite ^/(v1|v2)/(.*) /api/docker/my_local_repo_key/$1/$2;
client_max_body_size 0;
chunked_transfer_encoding on;
location / {
proxy_read_timeout 900;
proxy_pass_header Server;
proxy_cookie_path ~*^/.* /;
proxy_pass http://artifactory_lb;
proxy_set_header X-Artifactory-Override-Base-Url $http_x_forwarded_proto://$host;
proxy_set_header X-Forwarded-Port $server_port;
proxy_set_header X-Forwarded-Proto $http_x_forwarded_proto;
proxy_set_header Host $http_host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
}
There are no errors in the nginx error log. What might be wrong?
I verified that SSL works fine with the setup. Do I need to set up authentication before I push images?
I also verified artifactory server is listening on port 2222
Update:
I added the following to the nginx configuration:
location /v1 {
proxy_pass http://myNginxLb.company.com:8080/artifactory/api/docker/docker-local/v1;
}
With this, it now gives a 405 Not Allowed error when trying to push to the repository.
I fixed this by removing the location /v1 configuration and also changing proxy_pass to point to the upstream servers.
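A quick way to re-test after that change (registry hostname and repository taken from the question; Artifactory registries normally expect you to authenticate before pushing):
# Reload nginx with the corrected configuration
sudo nginx -t && sudo nginx -s reload
# Log in to the registry, then retry the push
docker login nginxLoadBalancer.mycompany.com
docker push nginxLoadBalancer.mycompany.com/repo_name:image_name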
