I have installed a free Artifactory server (Community Edition, license 7.29.8 rev 72908900).
I cannot configure the HTTP or HTTPS URL: when I open the Artifactory web UI over HTTP, the HTTP Settings (Administration ==> General ==> HTTP Settings) are unavailable.
I have also installed an NGINX server, but I cannot reach Artifactory over HTTPS. NGINX and Artifactory run on the same VM.
I have found this documentation: https://www.jfrog.com/confluence/display/JFROG/HTTP+Settings and https://www.jfrog.com/confluence/display/JFROG/Configuring+NGINX
My NGINX server configuration:
## add ssl entries when https has been set in config
##ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3;
ssl_certificate /etc/ssl/certs/domain.crt;
ssl_certificate_key /etc/ssl/private/domain.key;
ssl_session_cache shared:SSL:1m;
##ssl_prefer_server_ciphers on;

## server configuration
server {
    listen 443 ssl;
    listen 8080;
    server_name <Server_Name>;

    if ($http_x_forwarded_proto = '') {
        set $http_x_forwarded_proto $scheme;
    }

    ## Application specific logs
    ## access_log /var/log/nginx/<Server_Name>-access.log timing;
    ## error_log /var/log/nginx/<Server_Name>-error.log;

    rewrite ^/$ /ui/ redirect;
    rewrite ^/ui$ /ui/ redirect;
    chunked_transfer_encoding on;
    client_max_body_size 0;

    location / {
        proxy_read_timeout 2400s;
        proxy_pass_header Server;
        proxy_cookie_path ~*^/.* /;
        proxy_buffer_size 128k;
        proxy_buffers 40 128k;
        proxy_busy_buffers_size 128k;
        proxy_pass https://<Artifactory_IP>:8082;
        proxy_set_header X-JFrog-Override-Base-Url $http_x_forwarded_proto://$host:$server_port;
        proxy_set_header X-Forwarded-Port $server_port;
        proxy_set_header X-Forwarded-Proto $http_x_forwarded_proto;
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        location ~ ^/artifactory/ {
            proxy_pass https://<Artifactory_IP>:8081;
        }
    }
}
And none of it works.
Can you help me?
I just want to reach Artifactory over HTTPS, at https://x.x.x.x:80802 for example.
HTTP Settings is not supported in Artifactory Community Edition. That said, you may want to check out the free-tier option for testing this configuration and additional features at: https://jfrog.com/start-free
A similar question: "HTTPS Settings is disabled in freshly started artifactory-cpp-ce - how do I enable it?"
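For reference, the JFrog reverse-proxy documentation linked above generates configurations in which NGINX terminates TLS and proxies to Artifactory's plain-HTTP ports. A minimal sketch along those lines, assuming the default local ports 8082 (UI/router) and 8081 (Artifactory service) and that TLS is not enabled on Artifactory itself; the server name is a placeholder:

server {
    listen 443 ssl;
    server_name artifactory.example.com;        # placeholder

    ssl_certificate     /etc/ssl/certs/domain.crt;
    ssl_certificate_key /etc/ssl/private/domain.key;

    location / {
        proxy_pass       http://127.0.0.1:8082;  # UI and router listen on plain HTTP by default
        proxy_set_header Host              $http_host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-JFrog-Override-Base-Url $scheme://$host:$server_port;
    }

    location ~ ^/artifactory/ {
        proxy_pass http://127.0.0.1:8081;        # repository traffic
    }
}

With NGINX terminating TLS this way, the HTTP Settings page in the UI is not strictly needed; the X-JFrog-Override-Base-Url header is what tells Artifactory which external base URL to use.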
Related
I have a problem with nginx and proxy_pass. I am trying to secure the connection to an old server that has no option to upgrade Apache, so I cannot establish an SSL connection there with TLS 1.2. I tried to secure it with a reverse proxy in nginx, with some success: when I open the site as http://example.com or https://example.com, the connection is secure and it works well. But other pages have links like https://example.com/login and https://example.com/investitions (basically every URI, example.com/foo/bar/ etc.), and those connections are insecure. My nginx config looks like this:
server {
    listen 443 ssl;
    server_name example.com www.example.com;

    ssl_certificate        ssl.crt;
    ssl_certificate_key    ssl.key;
    ssl_client_certificate ca.crt;

    proxy_ssl_protocols TLSv1.2;
    proxy_ssl_ciphers HIGH:!aNULL:!MD5;
    proxy_ssl_verify on;
    proxy_ssl_verify_depth 2;
    proxy_ssl_session_reuse on;

    location / {
        proxy_set_header X-Scheme https;
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://baza.example.com/;
    }
}
Please help me.
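One observation on the config above (a sketch of nginx behavior, not a confirmed fix for the insecure-link symptom described): the proxy_ssl_* directives only take effect when proxy_pass itself uses an https:// upstream; with proxy_pass http://... they are ignored. A minimal sketch of an HTTPS upstream connection, assuming the backend actually accepts TLS and that ca.crt is the CA that signed its certificate:

location / {
    # proxy_ssl_* applies only because the upstream scheme below is https
    proxy_pass                    https://baza.example.com/;
    proxy_ssl_protocols           TLSv1.2;
    proxy_ssl_verify              on;
    proxy_ssl_trusted_certificate ca.crt;   # CA used to verify the upstream certificate
    proxy_set_header              Host $host;
}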
I am trying to run Odoo in HTTPS mode using nginx, but it's not working. This is how I tried:
sudo apt-get install nginx
cd /etc/nginx/sites-available
sudo openssl genrsa -des3 -passout pass:odoo -out server.temp.key 2048
sudo openssl req -new -passin pass:odoo -key server.temp.key -out server.csr
sudo openssl rsa -in server.temp.key -out server.key
sudo rm server.temp.key
sudo openssl x509 -req -days 365 -in server.csr -signkey server.key -out server.crt
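Note that the commands above were run in /etc/nginx/sites-available, while the config below references /etc/nginx/ssl/; presumably the generated files need to be moved there first, for example:

sudo mkdir -p /etc/nginx/ssl
sudo mv server.crt server.key /etc/nginx/ssl/
sudo chmod 600 /etc/nginx/ssl/server.key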
My nginx configuration file:
upstream odoo {
    server localhost:8069 weight=1 fail_timeout=3000s;
}

server {
    listen 443;
    listen [::]:443 ipv6only=on;
    server_name odoo.example.com;

    ssl on;
    ssl_ciphers ALL:!ADH:!MD5:!EXPORT:!SSLv2:RC4+RSA:+HIGH:+MEDIUM;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;
    ssl_certificate /etc/nginx/ssl/server.crt;
    ssl_certificate_key /etc/nginx/ssl/server.key;

    # Specifies the maximum accepted body size of a client request,
    # as indicated by the request header Content-Length.
    client_max_body_size 200m;

    # add ssl specific settings
    keepalive_timeout 60;

    # increase proxy buffer to handle some OpenERP web requests
    proxy_buffers 16 64k;
    proxy_buffer_size 128k;

    location / {
        proxy_pass http://odoo;

        # Force timeouts if the backend dies
        proxy_next_upstream error timeout invalid_header http_500 http_502 http_503;

        # Set headers
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forward-For $proxy_add_x_forwarded_for;

        # Let the Odoo web service know that we're using HTTPS, otherwise
        # it will generate URL using http:// and not https://
        proxy_set_header X-Forwarded-Proto https;

        # Set timeouts
        proxy_connect_timeout 3600;
        proxy_send_timeout 3600;
        proxy_read_timeout 3600;
        send_timeout 3600;

        # By default, do not forward anything
        proxy_redirect off;
    }

    # Cache some static data in memory for 60mins.
    # under heavy load this should relieve stress on the Odoo web interface a bit.
    location ~* /[0-9a-zA-Z_]*/static/ {
        proxy_cache_valid 200 60m;
        proxy_buffering on;
        expires 864000;
        proxy_pass http://odoo;
    }

    access_log /var/log/nginx/odoo-ssl.access.log;
    error_log /var/log/nginx/odoo-ssl.error.log;
}
After this I restarted nginx, enabled proxy mode in the Odoo config, and restarted the Odoo server, but my site still runs in HTTP mode. I have not given my site a domain name. Is that compulsory before setting up nginx?
OK, let's start from the beginning. In order to set up Odoo with SSL you need:
1) domain name
2) proper config for the reverse proxy (you are using nginx, so it will be an easy fix)
3) ssl certificate
4) updated Odoo config
I have written down some hints on the points above.
1) I assume that you have a domain pointing to your server. If not, you need to visit your domain control panel and set the DNS (simply put your server IP in the "A" record). A sample tutorial on this (see point 5):
https://www.cier.tech/blog/blog-1/post/how-to-publish-your-website-on-amazon-ec2-linux-ubuntu-server-13
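A quick way to verify the A record actually points at your server (the domain is the same placeholder used in the config below):

dig +short odoo.mycompany.com A
# should print the public IP of the machine running nginx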
2) Sample nginx config for Odoo:
upstream odoo {
    server 127.0.0.1:8069;
}
upstream odoochat {
    server 127.0.0.1:8072;
}

# http -> https
server {
    listen 80;
    server_name odoo.mycompany.com;   #replace with your domain
    rewrite ^(.*) https://$host$1 permanent;
}

server {
    listen 443;
    server_name odoo.mycompany.com;   #replace with your domain
    proxy_read_timeout 720s;
    proxy_connect_timeout 720s;
    proxy_send_timeout 720s;

    # Add Headers for odoo proxy mode
    proxy_set_header X-Forwarded-Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header X-Real-IP $remote_addr;

    # SSL parameters - update with your cert details
    ssl on;
    ssl_certificate /etc/ssl/nginx/server.crt;
    ssl_certificate_key /etc/ssl/nginx/server.key;
    ssl_session_timeout 30m;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA';
    ssl_prefer_server_ciphers on;

    # log
    access_log /var/log/nginx/odoo.access.log;
    error_log /var/log/nginx/odoo.error.log;

    # Redirect requests to odoo backend server
    location / {
        proxy_redirect off;
        proxy_pass http://odoo;
    }
    location /longpolling {
        proxy_pass http://odoochat;
    }

    # common gzip
    gzip_types text/css text/less text/plain text/xml application/xml application/json application/javascript;
    gzip on;
}
As you can see, there is also an upstream for the chat, since it runs on a separate port.
Remember to create a symlink in sites-enabled:
ln -s /etc/nginx/sites-available/yoursite.com /etc/nginx/sites-enabled/yoursite.com
Then test the nginx config and restart it:
nginx -t
service nginx restart
The config above comes from:
https://www.odoo.com/documentation/10.0/setup/deploy.html
4) Update your Odoo config with:
- proxy_mode = True
- workers: you need to have more than one worker if you want the "chat" and "discuss" modules to work properly (see the sketch below).
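A minimal sketch of the corresponding odoo.conf entries; the worker count and longpolling port are illustrative values, adjust them to your setup:

[options]
proxy_mode = True
# more than one worker so longpolling (chat) can be served
workers = 2
longpolling_port = 8072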
I have configured my nginx based on the documentation provided and articles available on the web. It's not completely working, specifically the HTTP to HTTPS redirect.
I tried different changes but still have not been able to get it working. Please have a look.
A few important points: my Node.js app is running on port 3000.
The Ghost blog is running on port 2368.
HTTP — redirect all traffic to HTTPS
server {
    listen 80;
    server_name domainname.com www.domainname.com;
    return 301 https://$server_name$request_uri;
}

server {
    listen 443 ssl http2 default_server;
    listen [::]:443 ssl http2 default_server;
    server_name www.domainname.com;
    error_page 497 https://www.domainname.com$request_uri;

    ssl_certificate /etc/letsencrypt/live/domainname.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/domainname.com/privkey.pem;
    ssl_session_timeout 1d;
    ssl_session_cache shared:SSL:50m;
    ssl_session_tickets off;

    # intermediate configuration. tweak to your needs.
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers KEY_HERE;
    ssl_prefer_server_ciphers on;

    # HSTS (ngx_http_headers_module is required) (15768000 seconds = 6 months)
    add_header Strict-Transport-Security max-age=15768000;

    # OCSP Stapling ---
    # fetch OCSP records from URL in ssl_certificate and cache them
    ssl_stapling on;
    ssl_stapling_verify on;
    ## verify chain of trust of OCSP response using Root CA and Intermediate certs
    # ssl_trusted_certificate /etc/ssl/certs/dhparam.pem;
    resolver 8.8.8.8;

    location / {
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }

    location /blog {
        proxy_pass http://localhost:2368;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
This issue is resolved.
Everything is correct in the nginx configuration. The issue was with the Google Cloud Platform console: there is a checkbox in the GCP instance settings named "Allow HTTP traffic", which was unchecked by default. I checked it and everything started working. Thanks for the reply.
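For reference, that checkbox just tags the instance and relies on a firewall rule that allows port 80; roughly the same thing can be done from the CLI. This is a sketch only, and the instance name is a placeholder while the rule/tag names shown are the usual GCP defaults, stated here as an assumption:

gcloud compute instances add-tags my-instance --tags=http-server
gcloud compute firewall-rules create default-allow-http \
    --allow=tcp:80 \
    --target-tags=http-server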
I recommend you do the following:
location / {
    return 301 https://$host$request_uri;
}
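In context, that return would normally sit in the plain-HTTP server block, for example (same domain placeholders as above):

server {
    listen 80;
    listen [::]:80;
    server_name domainname.com www.domainname.com;

    location / {
        return 301 https://$host$request_uri;
    }
}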
I have a Linux box running Ubuntu 14.04 with about 50 GB of memory.
I've got 5 or 6 Ruby on Rails web applications, each with a Unicorn app server, all served by an nginx reverse proxy.
Each app is hosted in a sub-directory.
eg:
www.webserver.com/app1
www.webserver.com/app2
Each app gets maybe 50-100 requests per day. They are all little apps to facilitate business processes at my firm.
My Nginx config file looks something like this:
upstream app1 {
    #path to Unicorn SOCK file;
}
upstream app2 {
    #path to Unicorn SOCK file;
}
upstream app3 {
    #path to Unicorn SOCK file;
}
# ...several more apps

server {
    listen 443 ssl;

    access_log #path;
    error_log #path;

    ssl_certificate #path;
    ssl_certificate_key #path;

    add_header X-UA-Compatible "IE=Edge,chrome=1";

    root /srv/apps/app1/public;

    location /app1 {
        proxy_pass http://app1;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
    }
    location /app2 {
        proxy_pass http://app2;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
    }
    location /app3 {
        proxy_pass http://app3;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
    }
    # ...several more apps
}
This setup has worked without issue for a year or so, but I have this nagging feeling I'm doing this all wrong....
Am I going to run into problems if I keep adding apps? Is there a better way to do this?
Update:
By "problems," I mean:
static resource path collisions?
memory issues? namely, am I using more memory than I need to accomplish the same behavior?
And by "a better way to do this," I mean:
other than sending requests to the relevant unicorn server by parsing out the name of the sub-directory in the URL
should I be using a single Nginx reverse proxy to serve multiple apps?
For the same configuration across different apps, you can use the include directive.
For example, create a file named /etc/nginx/global_proxy.conf with this content:
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_redirect off;
And in nginx.conf, in each /appX section:
location /appX {
    proxy_pass http://appX;
    include /etc/nginx/global_proxy.conf;
}
And to increase security, I recommend adding a dhparam and adding this to your SSL configuration:
# SSL :
# drop SSLv3 (POODLE vulnerability)
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
# Recommended ciphers
ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH';
ssl_prefer_server_ciphers on;
ssl_session_cache shared:SSL:10m;
# Diffie–Hellman key exchange (D–H)
ssl_dhparam /etc/nginx/ssl/dhparam.pem;
# config to enable HSTS(HTTP Strict Transport Security)
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains";
# force timeouts if one of the backends dies
proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;
To generate the dhparam.pem file:
openssl dhparam -out dhparam.pem 4096
I have a clustered GlassFish instance running on an Ubuntu 12.04 server, with nginx as the front end.
I have configured the GlassFish upstream in the nginx conf file and the proxy params are all set.
nginx.conf
glassfish_cluster (upstream name)
Now, the problem is:
I added a file realm in GlassFish with username and password entries to enable basic authentication for one of my applications.
I added the necessary login config params in the web.xml file, bundled the WAR, and deployed it to the GlassFish server. When I hit the URL
http://domain.com/application
it falls into a redirect loop with
https://domain.com/application
It happens only when I enable basic authentication. If I switch it off, everything works as expected.
I think I need to set some proxy header params and change the auth settings for the HTTP listener in the GlassFish admin console?
If anyone has experienced this issue before, please let me know.
In short: how do I make basic authentication work behind an nginx load balancer with GlassFish as the upstream?
UPDATE 1:
nginx.conf
## http redirects to https ##
server {
    #listen [::]:80;
    listen 80;
    server_name domain.com www.domain.com;

    location / {
        try_files $uri $uri/ @backend;
    }

    location @backend {
        proxy_buffering off;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_set_header x-forwarded-for $remote_addr;
        proxy_pass http://glassfish_servers;
        proxy_intercept_errors on;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Host $host;
    }

    # Strict Transport Security
    # add_header Strict-Transport-Security max-age=2592000;
    # rewrite ^/.*$ https://$host$request_uri? permanent;
}

server {
    listen 443 ssl;
    #listen [::]:443 ssl;
    server_name domain.com www.domain.com;

    location / {
        try_files $uri $uri/ @backend;
    }

    ## default location ##
    location @backend {
        proxy_buffering off;
        proxy_pass http://glassfish_servers;
        proxy_intercept_errors on;
        #proxy_http_version 1.1;
        #proxy_set_header Connection "";

        # force timeouts if the backend dies
        proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;

        # set headers
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        #proxy_redirect off;
    }

    ssl_certificate /etc/nginx/ssl/ssl-bundle.crt;
    ssl_certificate_key /etc/nginx/ssl/domain_com.key;
    ssl_session_cache shared:SSL:20m;
    ssl_session_timeout 10m;
    ssl_prefer_server_ciphers on;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!$
}
Answering my own question.
Having this XML configuration in web.xml was the root cause of the redirect loop.
Since I had set "CONFIDENTIAL" as the transport-guarantee value, HTTP requests were getting redirected to HTTPS when they hit the backend GlassFish instance.
I changed this value to "NONE" and everything worked like a charm.
<security-constraint>
    <web-resource-collection>
        <web-resource-name>wholesale</web-resource-name>
        <url-pattern>/acme/wholesale/*</url-pattern>
        <http-method>GET</http-method>
        <http-method>POST</http-method>
    </web-resource-collection>
    <auth-constraint>
        <role-name>PARTNER</role-name>
    </auth-constraint>
    <user-data-constraint>
        <transport-guarantee>CONFIDENTIAL</transport-guarantee>
    </user-data-constraint>
</security-constraint>
Make the following change:
Change <transport-guarantee>CONFIDENTIAL</transport-guarantee>
to
<transport-guarantee>NONE</transport-guarantee>
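(As an aside, a common alternative for keeping all client traffic on HTTPS after setting NONE is to redirect at the proxy itself, which the commented-out rewrite in the port-80 server block above hints at; a minimal sketch with the same domain placeholders:)

server {
    listen 80;
    server_name domain.com www.domain.com;
    # send every plain-HTTP request to HTTPS at the proxy,
    # so the backend never has to enforce the redirect itself
    return 301 https://$host$request_uri;
}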
Also, make sure to set the proper proxy header values in the nginx conf file, or, if you configured the site conf files separately in the sites-available folder, please add the following proxy directives:
proxy_set_header x-forwarded-for $remote_addr;
proxy_intercept_errors on;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header Host $host;
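For context, these directives would normally sit inside the location block that proxies to the GlassFish upstream, roughly like this (using the upstream name from the config above):

location / {
    proxy_pass http://glassfish_servers;
    proxy_intercept_errors on;

    proxy_set_header X-Real-IP       $remote_addr;
    proxy_set_header X-Forwarded-For $remote_addr;
    proxy_set_header Host            $host;
}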