Serving multiple Docker microservices behind an nginx proxy

I'm trying to figure out how to dynamically proxy several microservices behind a single nginx proxy via Docker. I have been able to pull it off with a single app, but I would like to dynamically add microservices, and I'd like to do this without restarting nginx and disrupting users.
Is this possible, or should I create a config file for each microservice? I've included samples below:
localhost = simple welcome page
localhost/service1 = microservice
localhost/service2 = microservice
localhost/serviceN = microservice
docker-compose.yml
---
version: '2'
services:
  app:
    build: app
  microservice1:
    image: registry.local:4567/microservice1:latest
  microservice2:
    image: registry.local:4567/microservice2:latest
  proxy:
    build: proxy
    ports:
      - "80:80"
proxy.conf
server {
    listen 80;
    resolver 127.0.0.11 valid=5s ipv6=off;
    set $upstream "http://app";

    location / {
        proxy_pass $upstream$request_uri;
    }
}
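A sketch of the dynamic routing being asked for (my assumption, building on the resolver trick already in proxy.conf above, not a verified setup): route by a named regex capture so any container whose name matches the path is resolved per request through Docker's embedded DNS, with no reload needed. The service\d+ pattern and container names are illustrative.
server {
    listen 80;
    resolver 127.0.0.11 valid=5s ipv6=off;

    # Default site
    location / {
        set $upstream "http://app";
        proxy_pass $upstream$request_uri;
    }

    # /service1, /service2, ... /serviceN: the capture $svc is resolved
    # per request via Docker's embedded DNS, so containers started after
    # nginx can still be reached without a reload.
    location ~ ^/(?<svc>service\d+)(?<rest>/.*)?$ {
        proxy_pass http://$svc$rest$is_args$args;
    }
}
Because proxy_pass uses variables, nginx defers DNS resolution to request time instead of failing at startup when a service is absent.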

I was facing the same issue: I had microservices in Flask, and I had to deploy them on a single EC2 instance as a staging environment.
I had the directory structure below:
SampleProject
├── microservices
│   ├── A
│   │   ├── docker-compose.yml
│   │   └── Dockerfile
│   └── B
│       ├── docker-compose.yml
│       └── Dockerfile
├── docker
│   └── web
│       ├── Dockerfile
│       └── nginx
│           └── nginx.conf
└── docker-compose.yml   (nginx)
For Nginx the docker-compose.yml is given below:
version: '3.7'
services:
  web:
    build:
      context: .
      dockerfile: ./docker/web/Dockerfile
    ports:
      - "80:80"
networks:
  default:
    external:
      name: microservices
And the configuration for nginx is given below:
upstream files_to_text {
    server microserviceA:5000;
}

upstream text_cleaning {
    server microserviceB:5050;
}

server {
    listen 80;

    location /microserviceA {
        proxy_pass http://files_to_text;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_redirect off;
    }

    location /microserviceB {
        proxy_pass http://text_cleaning;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_redirect off;
    }
}
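One detail worth noting (my addition, not something the original answer covers): with location /microserviceA and a bare upstream name in proxy_pass, the /microserviceA prefix is forwarded to Flask unchanged. If the Flask apps serve their routes at /, a trailing slash on both directives strips the prefix:
location /microserviceA/ {
    # The URI part ("/") in proxy_pass replaces the matched prefix,
    # so /microserviceA/predict arrives at Flask as /predict.
    proxy_pass http://files_to_text/;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $host;
    proxy_redirect off;
}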
To enforce SSL I used AWS Certificate Manager along with Application Load Balancer.
There are 3 steps:
1. Create an Application Load Balancer with default settings. Under "register targets", create a target by picking your EC2 instance with the HTTP protocol.
2. Monitor the health of the target group. If healthy, edit the listeners of the Application Load Balancer: remove the default HTTP listener and add an HTTPS listener. While adding the HTTPS listener, set the default action to "Forward to", select your target group, and under "Default SSL certificate" select the certificate you created with AWS Certificate Manager.
3. Finally, add the DNS name of the Application Load Balancer to the name settings where you purchased the domains.

Create a config file for each microservice in /etc/nginx/sites-available/ with a symlink in /etc/nginx/sites-enabled/.
A sample proxy.conf for each, where you substitute app/microservice1/microservice2 for $MICRO_SERVICE:
upstream REPLACEME_SERVICENAME {
    server $MICRO_SERVICE:PORT fail_timeout=0;
}

server {
    listen 80;
    server_name REPLACEME_SITENAME.REPLACEME_DOMAIN;

    # proxy_pass must live inside a location block
    location / {
        proxy_pass http://REPLACEME_SERVICENAME;
    }
}
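For completeness, a minimal sketch (assuming the stock Debian/Ubuntu nginx layout) of how these per-service files get picked up; with this in place, adding a microservice is a new file, a symlink, and a reload rather than an edit to a shared config:
http {
    # Every symlinked site in sites-enabled/ is loaded here.
    include /etc/nginx/sites-enabled/*;
}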
Force-SSL:
upstream REPLACEME_SITENAME.REPLACEME_DOMAIN {
    server $MICRO_SERVICE fail_timeout=0;
}

server {
    # We only redirect from port 80 to 443
    # to enforce encryption
    listen 80;
    server_name REPLACEME_SITENAME.REPLACEME_DOMAIN;
    return 301 https://REPLACEME_SITENAME.REPLACEME_DOMAIN$request_uri;
}

server {
    listen 443 ssl http2;
    server_name REPLACEME_SITENAME.REPLACEME_DOMAIN;

    # If you require basic auth you can use these lines as an example
    #auth_basic "Restricted!";
    #auth_basic_user_file /etc/nginx/private/httplock;

    # SSL
    ssl_certificate /etc/letsencrypt/live/REPLACEME_SITENAME.REPLACEME_DOMAIN/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/REPLACEME_SITENAME.REPLACEME_DOMAIN/privkey.pem;

    proxy_connect_timeout 75s;
    proxy_send_timeout 75s;
    proxy_read_timeout 75s;
    proxy_http_version 1.1;
    send_timeout 75s;

    ssl_ciphers "EECDH+ECDSA+AESGCM EECDH+aRSA+AESGCM EECDH+ECDSA+SHA384 EECDH+ECDSA+SHA256 EECDH+aRSA+SHA384 EECDH";
    ssl_prefer_server_ciphers on;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Host $remote_addr;
        proxy_pass http://REPLACEME_SITENAME.REPLACEME_DOMAIN;
    }
}
I also have a repo where I build a tiny nginx service for a Raspberry Pi in my closet that serves everything in my house to the WAN:
https://github.com/joshuacox/local-nginx/
There's a Makefile to help with creating new services as well.

Related

Setup a Reverse Proxy & a Load Balancer on ports 80 and 443 with Nginx

I want to use Nginx as a reverse proxy serving multiple services on fixed URLs and IPs while managing their SSL certificates, AND as a load balancer sending all other requests to a Traefik container hosted on an orchestrator. When acting as a load balancer, I don't need Nginx to handle SSL certificates.
The issue is that both the reverse proxy and the load balancer have to listen on ports 80 and 443 in order to give access to everything, but I can't figure out how to configure Nginx to do that.
I can only make one of the two work at a time, never both.
The global infrastructure is available at https://i.stack.imgur.com/BxGnP.png
I have a Nomad (orchestrator) & Consul (service mesh) cluster with 3 servers (S0, S1 & S2) and 3 clients (C0, C1 & C2).
On the servers there are the Nomad and Consul web UIs. On the clients there is the Traefik Docker container serving various services as a reverse proxy and handling SSL certificates.
In front of all of that, I have Nginx on Debian 10.
Here are the configuration files I'm currently using. The reverse proxy part is working, but the load balancer part isn't. (I'm using Ansible to deploy everything, so these files are Jinja templates.)
nomad.j2:
upstream nomad_panel {
{% for server_addr in agents["server-neteau_ip_v4"]["value"] %}
    server {{ agents["server-neteau_ip_v4"]["value"][server_addr] }}:4646;
{% endfor %}
}

server {
    listen 80;
    server_name nomad_URL;

    location ^~ /.well-known/ {
        root /var/lib/certbot;
    }

    location / {
        return 301 https://$host$request_uri;
    }
}

server {
    listen 443 ssl;
    server_name nomad_URL;

    ssl_certificate /etc/letsencrypt/live/nomad_URL/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/nomad_URL/privkey.pem;
    ssl_trusted_certificate /etc/letsencrypt/live/nomad_URL/chain.pem;

    location / {
        proxy_pass http://nomad_panel;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_read_timeout 310s;
        proxy_buffering off;

        # The Upgrade and Connection headers are used to establish
        # a WebSockets connection.
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";

        # The default Origin header will be the proxy address, which
        # will be rejected by Nomad. It must be rewritten to be the
        # host address instead.
        proxy_set_header Origin "${scheme}://${proxy_host}";
    }
}
consul.j2:
upstream consul_panel {
{% for server_addr in agents["server-neteau_ip_v4"]["value"] %}
    server {{ agents["server-neteau_ip_v4"]["value"][server_addr] }}:8500;
{% endfor %}
}

server {
    listen 80;
    server_name consul_URL;

    location ^~ /.well-known/ {
        root /var/lib/certbot;
    }

    location / {
        return 301 https://$host$request_uri;
    }
}

server {
    listen 443 ssl;
    server_name consul_URL;

    ssl_certificate /etc/letsencrypt/live/consul_URL/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/consul_URL/privkey.pem;
    ssl_trusted_certificate /etc/letsencrypt/live/consul_URL/chain.pem;

    location / {
        #resolver 8.8.8.8 valid=30s;
        proxy_pass http://consul_panel;
    }
}
traefik.j2:
upstream web {
{% for client_addr in agents["client-neteau_ip_v4"]["value"] %}
    server {{ agents["client-neteau_ip_v4"]["value"][client_addr] }}:8080;
{% endfor %}
}

upstream secure_web {
{% for client_addr in agents["client-neteau_ip_v4"]["value"] %}
    server {{ agents["client-neteau_ip_v4"]["value"][client_addr] }}:8443;
{% endfor %}
}

server {
    listen 80 default_server;
    server_name _;

    location / {
        #resolver 8.8.8.8 valid=30s;
        proxy_pass http://web;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_read_timeout 310s;
        proxy_buffering off;

        # The Upgrade and Connection headers are used to establish
        # a WebSockets connection.
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";

        # Mandatory to preserve URL in host (and to survive upstream's redirections)
        proxy_set_header Host $host;

        # The default Origin header will be the proxy address, which
        # will be rejected by Nomad. It must be rewritten to be the
        # host address instead.
        proxy_set_header Origin "${scheme}://${proxy_host}";
    }
}

server {
    listen 443;
    server_name _;

    location / {
        #resolver 8.8.8.8 valid=30s;
        proxy_pass http://secure_web;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_read_timeout 310s;
        proxy_buffering off;

        # Mandatory to preserve URL in host (and to survive upstream's redirections)
        proxy_set_header Host $host;

        # The Upgrade and Connection headers are used to establish
        # a WebSockets connection.
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";

        # The default Origin header will be the proxy address, which
        # will be rejected by Nomad. It must be rewritten to be the
        # host address instead.
        proxy_set_header Origin "${scheme}://${proxy_host}";
    }
}
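One approach that can reconcile the two roles on a single port (a hedged sketch, not from the original post): use nginx's stream module with ssl_preread to split incoming TLS connections by SNI, sending known hostnames to the local reverse-proxy server blocks (moved to an internal port) and everything else to Traefik, without decrypting anything at this layer. The ports and the Traefik address below are illustrative assumptions.
stream {
    # Route by the SNI hostname in the TLS ClientHello, without decrypting.
    map $ssl_preread_server_name $tls_backend {
        nomad_URL   127.0.0.1:8443;   # local ssl server blocks, moved off 443
        consul_URL  127.0.0.1:8443;
        default     traefik_secure;
    }

    upstream traefik_secure {
        server 10.0.0.1:8443;         # a Traefik client node (assumed address)
    }

    server {
        listen 443;
        ssl_preread on;
        proxy_pass $tls_backend;
    }
}
The reverse-proxy server blocks from nomad.j2 and consul.j2 would then listen on 127.0.0.1:8443 ssl instead of 443, while the plain port 80 servers can coexist as ordinary http virtual hosts.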

WSO2 API Manager 3.2.0 nginx load balancing: "this site can't be reached" error?

I want to use load balancing for WSO2 API Manager 3.2.0 using nginx. When I call https://localhost:443 on the nginx server,
it redirects to https://api.am.wso2.com/publisher, but a "can not reach this site" error occurs.
Could you please guide me, what is wrong?
Nginx config:
user nginx;
worker_processes auto;

events {
    worker_connections 1024;
}

http {
    upstream sslapi.am.wso2.com {
        server 172.24.64.114:9443;
        server 172.24.64.114:9443;
    }

    upstream sslgw.am.wso2.com {
        server 172.24.64.114:8243;
        server 172.24.64.114:8243;
    }

    server {
        listen 80;
        server_name api.am.wso2.com;
        rewrite ^/(.*) https://api.am.wso2.com/$1 permanent;
    }

    server {
        listen 443 ssl;
        server_name api.am.wso2.com;
        proxy_set_header X-Forwarded-Port 443;
        ssl_certificate /etc/nginx/ssl/apimanager.crt;
        ssl_certificate_key /etc/nginx/ssl/apimanager.key;

        location / {
            proxy_set_header X-Forwarded-Host $host;
            proxy_set_header X-Forwarded-Server $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $http_host;
            proxy_read_timeout 5m;
            proxy_send_timeout 5m;
            proxy_pass https://sslapi.am.wso2.com;
        }
    }

    server {
        listen 443 ssl;
        server_name gw.am.wso2.com;
        proxy_set_header X-Forwarded-Port 443;
        ssl_certificate /etc/nginx/ssl/apimanager.crt;
        ssl_certificate_key /etc/nginx/ssl/apimanager.key;

        location / {
            proxy_set_header X-Forwarded-Host $host;
            proxy_set_header X-Forwarded-Server $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $http_host;
            proxy_read_timeout 5m;
            proxy_send_timeout 5m;
            proxy_pass https://sslgw.am.wso2.com;
        }
    }
}
and the deployment.toml config on the server (172.24.64.114):
[transport.https.properties]
proxyPort = 443
[server]
hostname = "api.am.wso2.com"
node_ip = "172.24.64.114"
#offset=0
mode = "single" #single or ha
base_path = "${carbon.protocol}://${carbon.host}:${carbon.management.port}"
#discard_empty_caches = false
server_role = "default"
and the hosts config on the (172.16.11.239) server:
172.0.0.1 localhost
172.24.64.114 api.am.wso2.com
and the hosts config on the (172.24.64.114) server:
172.24.64.114 api.am.wso2.com
After invoking the nginx URL (172.24.64.116), it redirects to 172.24.64.114 and the site is not reachable!
When you configure API Manager with proxy port configurations, you must specify a hostname as well. The same hostname needs to be configured in nginx under the server configurations. Further, under upstream, you have to configure the IP addresses of the API Manager nodes to direct the requests.
Since you have a dedicated nginx server (.116) in the middle, configure the nginx server's IP address (.116) against the hostname of the API Manager (api.am.wso2.com) in the client node's (.239) hosts entry. This makes sure that when you type the hostname api.am.wso2.com on the client node, the request is dispatched to the nginx server, and nginx then communicates with the upstream servers that have been configured.
Try configuring the hosts entries correctly on the client node and verify the behavior. A sample entry in the client's hosts file would be as follows:
172.24.64.116 api.am.wso2.com

Keycloak invalid redirect uri with Couchbase Sync Gateway OpenID Connect Nginx

I am having trouble hooking up OpenID Connect between a Keycloak server and Couchbase Sync Gateway. My setup is as follows: I have nginx providing SSL termination and reverse proxying to Keycloak and Sync Gateway. So my Keycloak authentication address is like:
https://auth.domain.com
And my Sync Gateway bucket is at:
https://sg.domain.com/sync_gateway
I have setup a confidential client in keycloak with Authorization Code and the redirect url for it is:
https://sg.domain.com/sync_gateway/_oidc_callback
I am using the built in OpenIDConnectAuthenticator in Couchbase Lite for .NET. When my app takes a user to the Keycloak login page, I am getting:
Invalid parameter: redirect_uri
The login URL that is being passed to my app is:
https://auth.domain.com/auth/realms/realm/protocol/openid-connect/auth?access_type=offline&client_id=couchbase-sync-gateway&prompt=consent&redirect_uri=http%3A%2F%2Fsg.domain.com%2Fsync_gateway%2F_oidc_callback&response_type=code&scope=openid+email&state=
in which I can see that the redirect_uri is http. It should be https.
My Sync Gateway config is:
{
  "log": ["*"],
  "databases": {
    "sync_gateway": {
      "server": "http://cbserver:8091",
      "bucket": "sync_gateway",
      "users": { "GUEST": { "disabled": true, "admin_channels": ["*"] } },
      "oidc": {
        "providers": {
          "keycloakauthcode": {
            "issuer": "https://auth.domain.com/auth/realms/realm",
            "client_id": "couchbase-sync-gateway",
            "validation_key": "myclientid",
            "register": true
          }
        }
      }
    }
  }
}
My nginx config is:
events {
    worker_connections 768;
    multi_accept on;
}

http {
    proxy_buffer_size 128k;
    proxy_buffers 4 256k;
    proxy_busy_buffers_size 256k;
    large_client_header_buffers 4 32k;

    upstream auth_backend {
        server server1:port1;
    }

    upstream cb_sync_gateway {
        server server2:port2;
    }

    server { # AUTH
        listen 443 ssl;
        server_name auth.domain.com;
        ssl on;
        ssl_certificate /local/ssl/domain_com.crt;
        ssl_certificate_key /local/ssl/domain_com.key;
        add_header Content-Security-Policy upgrade-insecure-requests;

        location / {
            proxy_http_version 1.1;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto https;
            proxy_pass http://auth_backend;
        }
    }

    server {
        listen 443 ssl;
        server_name sg.domain.com;
        ssl on;
        ssl_certificate /local/ssl/domain_com.crt;
        ssl_certificate_key /local/ssl/domain_com.key;
        add_header Content-Security-Policy upgrade-insecure-requests;

        location / {
            proxy_http_version 1.1;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto https;
            proxy_pass http://cb_sync_gateway;
        }
    }
}
Keycloak standalone-ha.xml has proxy setup as per: https://github.com/ak1394/keycloak-dockerfiles
I'm not sure if this is to do with the nginx setup or the keycloak setup.
Any ideas?
I was able to fix this; probably not in the best way, but it is working for now. I needed to also set this in the nginx config:
proxy_redirect http:// https://;
and in Keycloak, add the following valid redirect URL:
http://sg.domain.com/sync_gateway/_oidc_callback
If anyone finds a way to do this without the insecure valid redirect, I would be very keen to know, as I know this is not recommended.
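For context, a sketch of where that directive would sit (my assumption; only the added line differs from the question's config):
server {
    listen 443 ssl;
    server_name sg.domain.com;
    # ...ssl and header directives as in the question's config...

    location / {
        proxy_pass http://cb_sync_gateway;
        # Rewrite http:// to https:// in upstream Location/Refresh
        # response headers on the way back through the proxy.
        proxy_redirect http:// https://;
    }
}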
EDIT:
I have posted in the Couchbase Forums, and it seems like it could be a bug in Couchbase Mobile (Couchbase Lite or Sync Gateway). They have filed a ticket for Couchbase Lite for .NET.

nginx stopped using server directive/proxy stopped working

Suddenly my nginx configuration stopped working.
events {}

http {
    upstream node-app {
        server qa:3000;
    }

    server {
        listen 8080;
        server_name name.com;

        location / {
            proxy_pass http://node-app;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection 'upgrade';
            proxy_set_header Host $host;
            proxy_cache_bypass $http_upgrade;
        }
    }

    server {
        listen 80;
        server_name name.com;
        root /var/www/name.com/webapp;
        auth_basic "Password required";
        auth_basic_user_file /etc/nginx/.htpasswd;

        location ~ \.css {
            include /etc/nginx/mime.types; # css files won't be loaded if the mime type isn't text/css
        }
    }
}
Nothing gets logged and nothing works for connections to port 8080. I tested whether the proxy is the cause by removing the location block and using the configuration from the port 80 server instead; it is still not working.
I am using docker-compose to set up nginx and the server listening on port 3000. Nothing has changed in the Docker configuration since things last worked.
Any help is welcome.
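One common cause worth checking (my guess, not from the original post, and consistent with the resolver trick in the first question above): names inside an upstream block are resolved once, at config load, so if the qa container was recreated with a new IP, nginx keeps proxying to the old address until a reload. Deferring resolution to request time via Docker's embedded DNS avoids that:
server {
    listen 8080;
    server_name name.com;
    resolver 127.0.0.11 valid=10s ipv6=off;  # Docker's embedded DNS

    location / {
        # Using a variable makes nginx re-resolve "qa" per request
        # instead of pinning the IP cached at startup.
        set $node_app http://qa:3000;
        proxy_pass $node_app;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
    }
}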

Unable to push docker images to artifactory

I set up Artifactory as a Docker registry and am trying to push an image to it:
docker push nginxLoadBalancer.mycompany.com/repo_name:image_name
This fails with the following error:
The push refers to a repository [nginxLoadBalancer.mycompany.com/repo_name] (len: 1)
unable to ping registry endpoint https://nginxLoadBalancer.mycompany.com/v0/
v2 ping attempt failed with error: Get https://nginxLoadBalancer.mycompany.com/v2/: Bad Request
v1 ping attempt failed with error: Get https://nginxLoadBalancer.mycompany.com/v1/_ping: Bad Request
This is my nginx conf:
upstream artifactory_lb {
    server mNginxLb.mycompany.com:8081;
    server mNginxLb.mycompany.com backup;
}

log_format upstreamlog '[$time_local] $remote_addr - $remote_user - $server_name to: $upstream_addr: $request upstream_response_time $upstream_response_time msec $msec request_time $request_time';

server {
    listen 80;
    listen 443 ssl;
    ssl_certificate /etc/nginx/ssl/my-certs/myCert.pem;
    ssl_certificate_key /etc/nginx/ssl/my-certs/myserver.key;
    client_max_body_size 2048M;

    location / {
        proxy_set_header Host $host:$server_port;
        proxy_pass http://artifactory_lb;
        proxy_read_timeout 90;
    }

    access_log /var/log/nginx/access.log upstreamlog;

    location /basic_status {
        stub_status on;
        allow all;
    }
}
# Server configuration
server {
    listen 2222 ssl;
    server_name mNginxLb.mycompany.com;

    if ($http_x_forwarded_proto = '') {
        set $http_x_forwarded_proto $scheme;
    }

    rewrite ^/(v1|v2)/(.*) /api/docker/my_local_repo_key/$1/$2;
    client_max_body_size 0;
    chunked_transfer_encoding on;

    location / {
        proxy_read_timeout 900;
        proxy_pass_header Server;
        proxy_cookie_path ~*^/.* /;
        proxy_pass http://artifactory_lb;
        proxy_set_header X-Artifactory-Override-Base-Url $http_x_forwarded_proto://$host;
        proxy_set_header X-Forwarded-Port $server_port;
        proxy_set_header X-Forwarded-Proto $http_x_forwarded_proto;
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
There are no errors in the nginx error log. What might be wrong?
I verified that SSL verification works fine with this setup. Do I need to set up authentication before I push images?
I also verified the Artifactory server is listening on port 2222.
Update:
I added the following to the nginx configuration:
location /v1 {
    proxy_pass http://myNginxLb.company.com:8080/artifactory/api/docker/docker-local/v1;
}
With this, it now gives a 405 Method Not Allowed error when trying to push to the repository.
I fixed this by removing the location /v1 configuration and also changing proxy_pass to point to the upstream servers.
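A hedged reconstruction of what that final configuration likely looked like (the repo path is carried over from the update above; the exact result is not confirmed by the original post):
location /v1 {
    # Point at the upstream block instead of a single backend host.
    proxy_pass http://artifactory_lb/artifactory/api/docker/docker-local/v1;
}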
