How do I serve Jira's static files with nginx? - nginx

My current nginx configuration looks like this:
server {
    listen 443;
    server_name jira.domain.com;

    ssl_certificate /etc/ssl/certs/jira.crt;
    ssl_certificate_key /etc/ssl/private/jira.key;
    ssl on;

    location / {
        proxy_pass http://127.0.0.1:8080/;
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header Host $http_host;
    }
}
How do I serve static files with the Jira standalone edition?

nginx has to be used as a proxy server with the Jira standalone edition. There is a similar question here. Alternatively, you can deploy the Jira JAR edition in a container (JBoss or Tomcat) and couple nginx with it.
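If the goal is to take static load off Tomcat, one option is to let nginx cache Jira's static resources at the proxy rather than serve them from disk. A minimal sketch, assuming Jira's batched static assets live under the /s/ prefix and that the cache directory exists (both are assumptions; proxy_cache_path belongs at the http level):

# http level (assumed cache location)
proxy_cache_path /var/cache/nginx/jira levels=1:2 keys_zone=jira_static:10m inactive=7d;

server {
    listen 443;
    server_name jira.domain.com;
    ssl_certificate /etc/ssl/certs/jira.crt;
    ssl_certificate_key /etc/ssl/private/jira.key;
    ssl on;

    # cache Jira's versioned static bundles (assumed /s/ prefix)
    location /s/ {
        proxy_pass http://127.0.0.1:8080;
        proxy_cache jira_static;
        proxy_cache_valid 200 7d;
    }

    location / {
        proxy_pass http://127.0.0.1:8080/;
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header Host $http_host;
    }
}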

Related

nginx reverse proxy for application

I use nginx as a reverse proxy with a domain name. I have some applications published on IIS and I want to proxy a different location name to each application.
For example:
Domain name on nginx:
example.com.tr
Application endpoints for the app:
1.1.1.1:10
1.1.1.2:10
upstream for app in nginx.conf:
upstream app_1 {
    least_conn;
    server 1.1.1.1:10;
    server 1.1.1.2:10;
}
server {
    listen 443 ssl;
    server_name example.com.tr;
    proxy_set_header X-Forwarded-Port 443;

    ssl_certificate /etc/cert.crt;
    ssl_certificate_key /etc/cert.key;

    location /app_1/ {
        proxy_pass http://app_1/;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Host $host;
        proxy_set_header X-REAL-SCHEME $scheme;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        access_log /etc/nginx/log/access.log;
        error_log /etc/nginx/log/error.log;
    }
}
When I try to access example.com.tr/app_1/, I can reach the application, but not all of its data.
I inspected the site and many of the application's requests failed.
All requests were sent to example.com.tr/uri instead of example.com.tr/app_1/uri. How can I fix this?
Thanks,
You need a transparent path proxy setup, meaning NGINX should pass the requested URI upstream without removing the matched location prefix from it.
proxy_pass http://app_1;
Remove the trailing slash to tell NGINX not to strip the prefix. Using an upstream definition is great, but make sure you apply keepalive.
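A sketch of the combined result, with keepalive applied (the pool size of 32 is an arbitrary choice; note that upstream keepalive also needs HTTP/1.1 and a cleared Connection header):

upstream app_1 {
    least_conn;
    server 1.1.1.1:10;
    server 1.1.1.2:10;
    keepalive 32;                       # keep idle connections to the backends open
}

server {
    listen 443 ssl;
    server_name example.com.tr;

    location /app_1/ {
        proxy_pass http://app_1;        # no trailing slash: the full URI is passed upstream
        proxy_http_version 1.1;         # keepalive requires HTTP/1.1 ...
        proxy_set_header Connection ""; # ... and an empty Connection header
    }
}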

How can I configure access from external ip to internal ip on GCP through nginx reverse proxy?

Can't connect to the application through the external IP.
I started the Gerrit code review application on a GCP VM instance (CentOS 7).
It works on http://localhost:8080, but I can't connect to it through the external IP. I also tried to create an NGINX reverse proxy, but my configuration is probably wrong. By the way, after installing NGINX, its starter page was shown on the external IP.
# nginx configuration /etc/nginx/conf.d/default.conf
server {
    listen 80;
    server_name localhost;
    auth_basic "Welcome to Gerrit Code Review Site!";

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Host $host;
    }
}
gerrit.config
[httpd]
    listenUrl = proxy-http://127.0.0.1:8080/
You use localhost as the server_name. I think that may cause a conflict, because you connect to your server externally. You don't need server_name, since you are going to connect to your server by IP. I also recommend enabling logs in your nginx config; they will help with debugging.
I recommend you try this config:
server {
    listen 80;
    access_log /var/log/nginx/gerrit_access.log;
    error_log /var/log/nginx/gerrit_error.log;

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Host $host;
    }
}
Add a line to /etc/hosts:
127.0.0.1 internal.domain
Then update the proxy config:
proxy_pass http://internal.domain:8080;
That works for me.

NGINX proxy to a Zeit Now deployment

I have several application servers running several Node applications (via PM2).
I have one NGINX server which holds the SSL certificate for the domain and reverse-proxies to the Node applications.
Within the NGINX configuration file I set up the domains with their location blocks like this:
server {
    listen 443 ssl;
    server_name geolytix.xyz;

    ssl_certificate /etc/letsencrypt/live/geolytix.xyz/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/geolytix.xyz/privkey.pem;

    location /demo {
        proxy_pass http://159.65.61.61:3000/demo;
        proxy_set_header HOST $host;
        proxy_buffering off;
    }

    location /now {
        proxy_pass https://xyz-heigvbokgr.now.sh/now;
        proxy_set_header HOST $host;
        proxy_buffering off;
    }
}
This only works for the application server. The proxy to the Zeit Now deployment yields a bad gateway. The application itself works as expected if I go to the Zeit Now address of my deployment.
Does anybody know whether I might be missing some settings to proxy to Zeit Now?
Now servers require the use of SNI for HTTPS connections, like almost all modern web servers.
You need to add
proxy_ssl_server_name on;
to your configuration.
The smallest location block would be the following:
location / {
    proxy_set_header Host my-app.now.sh;
    proxy_ssl_server_name on;
    proxy_pass https://alias.zeit.co;
}

Nginx subdomain setup

I'm trying to set up Nginx so I can have subdomains like
www.MySite.com - main website (works correctly)
jenkins.MySite.com - subdomain for Jenkins
gitlab.MySite.com - subdomain for GitLab
I've tried following various tutorials and I seem to have included everything required to make this work, but still to no avail.
I've followed this: https://www.digitalocean.com/community/tutorials/how-to-configure-nginx-with-ssl-as-a-reverse-proxy-for-jenkins
and various other sources online.
[Nginx Server Block]
I've edited my nginx.conf file, and I've created a new nginx/sites-available conf file for Jenkins and symlinked it to sites-enabled.
These are my default Jenkins JENKINS_ARGS:
JENKINS_ARGS="--webroot=/var/cache/jenkins/war --httpListenAddress=127.0.0.1 --httpPort=$HTTP_PORT -ajp13Port=$AJP_PORT"
This is an example of my Jenkins server block in nginx:
server {
    listen 80;
    return 301 https://$host$request_uri;
}

server {
    listen 443;
    server_name jenkins.MySite.com;

    #ssl_certificate /etc/nginx/cert.crt;
    #ssl_certificate_key /etc/nginx/cert.key;
    #ssl on;
    #ssl_session_cache builtin:1000 shared:SSL:10m;
    #ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    #ssl_ciphers HIGH:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4;
    #ssl_prefer_server_ciphers on;

    access_log /var/log/nginx/jenkins/access.log;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # Fix the "It appears that your reverse proxy set up is broken" error.
        proxy_pass http://127.0.0.1:8080;
        proxy_read_timeout 90;
        proxy_redirect http://127.0.0.1:8080 https://jenkins.MySite.com;
    }
}
I've also created an A record in DigitalOcean - Network,
and also a CNAME.
Any help would be appreciated.
Thanks
All three of these setups need separate nginx config files and supervisor files, as you did for the main site. Put the config files in /etc/nginx/sites-available and soft link them into sites-enabled, and also soft link the supervisor files into /etc/supervisor/conf.d.
To check whether the nginx file is properly configured, you need to test it:
sudo nginx -t
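For example, the wiring could look like this (jenkins.conf and gitlab.conf are hypothetical file names; adjust to your own):

# hypothetical file names; adjust to your setup
sudo ln -s /etc/nginx/sites-available/jenkins.conf /etc/nginx/sites-enabled/jenkins.conf
sudo ln -s /etc/nginx/sites-available/gitlab.conf /etc/nginx/sites-enabled/gitlab.conf
sudo nginx -t && sudo systemctl reload nginx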

How to create Kubernetes cluster serving its own container with SSL and NGINX

I'm trying to build a Kubernetes cluster with the following services inside:
Docker-registry (which will contain my django Docker image)
Nginx listenning both on port 80 and 443
PostgreSQL
Several django applications served with gunicorn
letsencrypt container to generate and automatically renew signed SSL certificates
My problem is a chicken and egg problem that occurs during the creation of the cluster:
My SSL certificates are stored in a secret volume that is generated by the letsencrypt container. To be able to generate the certificate, we need to show we are the owner of the domain name, and this is done by validating that a file is accessible under the server name (basically this consists of Nginx being able to serve a static file over port 80).
So here occurs my first problem: to serve the static file needed by letsencrypt, I need to have nginx started. The SSL part of nginx can't be started if the secret hasn't been mounted, and the secret is generated only when letsencrypt succeeds...
So, a simple solution could be to have 2 Nginx containers: one listening only on port 80 that will be started first, then letsencrypt, then we start a second Nginx container listening on port 443.
-> This kind of looks like a waste of resources in my opinion, but why not.
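The port-80 bootstrap container could be as small as this (a sketch; the webroot where the HTTP-01 challenge files land is an assumption):

server {
    listen 80;
    server_name docker.thedivernetwork.net example.com;

    # letsencrypt's HTTP-01 validation only needs this path served over port 80
    location /.well-known/acme-challenge/ {
        root /var/www/letsencrypt;   # assumed challenge webroot
    }
}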
Now assuming I have 2 nginx containers, I want my Docker Registry to be accessible over https.
So in my nginx configuration, I'll have a docker-registry.conf file looking like:
upstream docker-registry {
    server registry:5000;
}

server {
    listen 443;
    server_name docker.thedivernetwork.net;

    # SSL
    ssl on;
    ssl_certificate /etc/nginx/conf.d/cacert.pem;
    ssl_certificate_key /etc/nginx/conf.d/privkey.pem;

    # disable any limits to avoid HTTP 413 for large image uploads
    client_max_body_size 0;

    # required to avoid HTTP 411: see Issue #1486 (https://github.com/docker/docker/issues/1486)
    chunked_transfer_encoding on;

    location /v2/ {
        # Do not allow connections from docker 1.5 and earlier
        # docker pre-1.6.0 did not properly set the user agent on ping, catch "Go *" user agents
        if ($http_user_agent ~ "^(docker\/1\.(3|4|5(?!\.[0-9]-dev))|Go ).*$" ) {
            return 404;
        }

        # To add basic authentication to v2 use auth_basic setting plus add_header
        auth_basic "registry.localhost";
        auth_basic_user_file /etc/nginx/conf.d/registry.password;
        add_header 'Docker-Distribution-Api-Version' 'registry/2.0' always;

        proxy_pass http://docker-registry;
        proxy_set_header Host $http_host;                          # required for docker client's sake
        proxy_set_header X-Real-IP $remote_addr;                   # pass on real client's IP
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_read_timeout 900;
    }
}
The important part is the proxy_pass that redirects toward the registry container.
The problem I'm facing is that my Django Gunicorn server also has its configuration file in the same folder, django.conf:
upstream django {
    server django:5000;
}

server {
    listen 443 ssl;
    server_name example.com;
    charset utf-8;

    ssl on;
    ssl_certificate /etc/nginx/conf.d/cacert.pem;
    ssl_certificate_key /etc/nginx/conf.d/privkey.pem;
    ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:!aNULL:!MD5;

    client_max_body_size 20M;

    location / {
        # checks for static file, if not found proxy to app
        try_files $uri @proxy_to_django;
    }

    location @proxy_to_django {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        #proxy_pass_header Server;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Scheme $scheme;
        proxy_connect_timeout 65;
        proxy_read_timeout 65;
        proxy_pass http://django;
    }
}
So nginx will successfully start only under 3 conditions:
the secret is mounted (this could be addressed by splitting Nginx into 2 separate containers)
the registry service is started
the django service is started
The problem is that the django pod pulls its image from the registry service, so we are in a dead-lock situation again.
I didn't mention it, but registry and django have different server_names, so nginx is able to serve them both.
The solution I thought about (but it's quite dirty!) would be to reload nginx several times with more and more configuration:
I start the docker registry service
I start Nginx with only registry.conf
I create my django rc and service
I reload nginx with both registry.conf and django.conf
If there were a way to make nginx start while ignoring a failing configuration, that would probably solve my issues as well.
How can I cleanly achieve this setup?
Thanks for your help
Thibault
Are you using Kubernetes Services for your applications?
With a Service in front of each of your Pods, you have a proxy for the Pods. Even if a Pod is not started yet, nginx will find the Service when looking it up, as long as the Service is started, because the Service has an IP assigned.
So you start the Services, then start nginx and whatever Pods you want, in the order you want.
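A minimal sketch of such a Service, assuming the registry Pods carry the label app: registry (the name and labels here are assumptions; the name registry is what the upstream block above resolves):

# hypothetical Service; the Pods only need to match the selector
apiVersion: v1
kind: Service
metadata:
  name: registry        # nginx resolves "registry:5000" via this name
spec:
  selector:
    app: registry       # assumed Pod label
  ports:
    - port: 5000
      targetPort: 5000

Because the Service's ClusterIP exists as soon as the object is created, nginx's startup-time lookup of registry succeeds even before the first registry Pod is running.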
