Struggling to set up caching for a PyPI server via NGINX/uWSGI

I'm trying to configure caching of a PyPI server via NGINX/uWSGI and failing miserably.
My /sites-available/pypi config is as follows:
uwsgi_cache_path /mnt/pypi/nginx-cache
                 levels=1:2
                 keys_zone=pypiserver_cache:10m
                 max_size=10g
                 inactive=60m
                 use_temp_path=off;

server {
    listen 80 default_server;
    listen 443 default_server ssl;

    ssl_certificate /etc/ssl/certs/domain.pem;
    ssl_certificate_key /etc/ssl/private/domain.key;

    client_max_body_size 5M;

    location / {
        uwsgi_cache pypiserver_cache;
        uwsgi_buffering on;
        uwsgi_cache_key $request_uri;
        add_header X-uWSGI-Cache $upstream_cache_status;
        include uwsgi_params;
        uwsgi_pass unix:/run/uwsgi/internal_pypi.socket;
    }
}
NGINX runs and reports no errors, but requesting the same package multiple times never caches it: curling the URL always shows the header X-uWSGI-Cache: MISS, and nothing is stored in /mnt/pypi/nginx-cache.
Let me know if I can provide any more helpful info, thanks!
References:
https://github.com/pypiserver/pypiserver#serving-thousands-of-packages
http://nginx.org/en/docs/http/ngx_http_uwsgi_module.html

This is resolved. In my case, the pypiserver Python file needed a change:
application = pypiserver.app(
    root="/mnt/pypi/directory",
    redirect_to_fallback=False,
    password_file="path/to/file",
    cache_control=3600,  # this needed to be added
)
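For context: nginx only stores responses it considers cacheable, so without a uwsgi_cache_valid directive it relies on Cache-Control/Expires headers from the upstream, and pypiserver does not send those unless cache_control is set. If changing the application isn't an option, caching can instead be forced on the nginx side - a rough sketch using standard ngx_http_uwsgi_module directives (the durations are illustrative, not taken from the original setup):
location / {
    uwsgi_cache pypiserver_cache;
    uwsgi_cache_key $request_uri;
    # Ignore upstream headers that would otherwise disable caching
    uwsgi_ignore_headers Cache-Control Expires Set-Cookie;
    # Cache successful responses for an hour even without upstream cache headers
    uwsgi_cache_valid 200 302 60m;
    add_header X-uWSGI-Cache $upstream_cache_status;
    include uwsgi_params;
    uwsgi_pass unix:/run/uwsgi/internal_pypi.socket;
}
Either approach should turn the X-uWSGI-Cache header from MISS into HIT on repeated requests.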

Related

'The change you wanted was rejected' error on all 'Devise' actions after installing SSL certificate

I configured nginx to use an SSL certificate (from sslforfree.com), but some weird behavior started after that. The site runs fine, but I'm unable to do any Devise action; e.g. anyone who was logged in before SSL was enabled can't log out, and others can't log in or register.
I'm configuring this on a DigitalOcean one-click Rails droplet.
Following observations may help:
nginx.error.log:
1 - client closed connection while SSL handshaking
2 - SSL_do_handshake() failed (SSL: error:1408F10B:SSL routines:ssl3_get_record:wrong version number)
I researched and found this happens when there is a problem in the SSL configuration; I tried Mozilla's generated settings but had no success.
Rails server log:
1 - 422 Unprocessable Entity
2 - ActionController::InvalidAuthenticityToken (ActionController::InvalidAuthenticityToken)
nginx.conf
upstream puma {
    server unix:///home/rails/apps/calwinkle/shared/tmp/sockets/calwinkle-puma.sock;
}

server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name calwinkle.com www.calwinkle.com;

    # Redirect all HTTP requests to HTTPS with a 301 Moved Permanently response.
    return 301 https://$host$request_uri;
}

server {
    # listen 80;
    listen 443 ssl default_server;
    listen [::]:443 ssl default_server;
    server_name calwinkle.com www.calwinkle.com;

    ssl_certificate /etc/nginx/ssl/nginx.crt;
    ssl_certificate_key /etc/nginx/ssl/nginx.key;

    # intermediate configuration. tweak to your needs.
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers 'ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:ECDHE-ECDSA-DES-CBC3-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA:!DSS';
    ssl_prefer_server_ciphers on;

    root /home/rails/apps/calwinkle/current/public;
    access_log /home/rails/apps/calwinkle/current/log/nginx.access.log;
    error_log /home/rails/apps/calwinkle/current/log/nginx.error.log info;

    location ^~ /assets/ {
        gzip_static on;
        expires max;
        add_header Cache-Control public;
    }

    try_files $uri/index.html $uri @puma;

    location @puma {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://puma;
    }

    error_page 500 502 503 504 /500.html;
    client_max_body_size 10M;
    keepalive_timeout 10;
}
What I think is happening: somehow my Devise controller is still being accessed over HTTP, and because I've redirected all HTTP requests to HTTPS with a 301, this is causing the authenticity token to expire.
I've tried removing the redirection and accepting both HTTP and HTTPS, but that caused an error in the nginx configuration.
Given your situation, it looks like you are setting the wrong headers, so cookies/sessions are being saved for HTTP.
Try adding the following two lines to your configs in /etc/nginx/sites-available/* and /etc/nginx/sites-enabled/*:
proxy_set_header Host $http_host;
proxy_set_header X-Forwarded-Proto https;
After doing that run:
sudo service nginx restart
Additionally, clear your sessions and cookies in browser.
If your site is live with users (which it shouldn't be without HTTPS), you may need to destroy users' existing sessions.
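For reference, in the config above the two headers belong inside the @puma proxy location - roughly like this (a sketch based on the posted nginx.conf):
location @puma {
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    # Tell Rails the original request came in over HTTPS
    proxy_set_header X-Forwarded-Proto https;
    proxy_redirect off;
    proxy_pass http://puma;
}
With X-Forwarded-Proto set, Rails knows the original request was HTTPS, which is what typically resolves InvalidAuthenticityToken errors behind an SSL-terminating proxy.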
Hope it helps.

How to replace Nginx's default error 400 "The plain HTTP request was sent to HTTPS port" page with a Play! Framework backend

I have a website using Play! framework with multiple domains proxying to the backend, example.com and example.ca.
I have all http requests on port 80 being rewritten to https on port 443. This is all working as expected.
But when I type into the address bar http://example.com:443, I'm served nginx's default error page, which says
400 Bad Request
The plain HTTP request was sent to HTTPS port
nginx
I'd like to serve my own error page for this, but I just can't seem to get it working. Here's a snippet of my configuration.
upstream my-backend {
    server 127.0.0.1:9000;
}

server {
    listen 443;
    ssl on;
    ssl_certificate /etc/ssl/certs/example.crt;
    ssl_certificate_key /etc/ssl/private/example.key;
    keepalive_timeout 70;
    server_name example.com;
    add_header Strict-Transport-Security max-age=15768000; # six months

    location / {
        proxy_pass http://my-backend;
    }

    error_page 400 502 error.html;
    location = /error.html {
        root /usr/share/nginx/html;
    }
}
It works when my Play! application is shut down, but when it's running it always serves up the default nginx page.
I've tried adding the error page configuration to another server block like this
server {
    listen 443;
    ssl off;
    server_name example.com;
    error_page [..]
}
But that fails with the browser complaining about the certificate being wrong.
Ultimately I'd like to be able to catch and handle any errors that aren't handled by my Play! application with a custom page or pages. I'd also like this solution to work if the user manually enters the site's IP into the address bar instead of the server name.
Any help is appreciated.
I found the answer to this here https://stackoverflow.com/a/12610382/4023897.
In my particular case, where I want to serve a static error page under these circumstances, my configuration is as follows
server {
    listen 443;
    ssl on;
    ssl_certificate /etc/ssl/certs/example.crt;
    ssl_certificate_key /etc/ssl/private/example.key;
    keepalive_timeout 70;
    server_name example.com;
    add_header Strict-Transport-Security max-age=15768000; # six months

    location = /error.html {
        root /usr/share/nginx/html;
        autoindex off;
    }

    location / {
        proxy_pass http://my-backend;
    }

    # If they come here using HTTP, bounce them to the correct scheme
    error_page 497 https://$host:$server_port/error.html;
}
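(497 is nginx's non-standard internal code for "a plain HTTP request was sent to an HTTPS port", so the error_page 497 line is what intercepts these requests.) To also catch requests made directly to the server's IP or to an unknown Host, one option is a catch-all default_server block with the same handling - a sketch only, and browsers will still warn about the certificate whenever the name doesn't match:
server {
    listen 443 ssl default_server;
    server_name _;  # anything not matched by the named server blocks
    ssl_certificate /etc/ssl/certs/example.crt;
    ssl_certificate_key /etc/ssl/private/example.key;

    error_page 497 https://$host:$server_port/error.html;
    location = /error.html {
        root /usr/share/nginx/html;
    }
    location / {
        return 404;
    }
}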

Accessing site on nginx by https by default

I have a website on an nginx server. I want the site to be accessed over HTTPS by default (on the specific port given below). That is, when I type mysite.net:90 or www.mysite.net:90 in the browser, it should go to HTTPS instead of HTTP. I've already tried redirecting requests with "rewrite" in the server block, and with "return", but it doesn't work.
This is how my server block looks now:
server {
    listen 90;
    listen 9090 ssl;
    server_name example.com;
    ssl on;
    ssl_certificate /etc/nginx/ssl/nginx.crt;
    ssl_certificate_key /etc/nginx/ssl/nginx.key;
    root /var/www/path;
    fastcgi_param HTTPS on;
    fastcgi_param HTTP_SCHEME https;
    ......
}
You may find this forum post useful:
https://www.digitalocean.com/community/questions/http-https-redirect-positive-ssl-on-nginx
Basically you need to create a redirect in your HTTP server block so that all requests are automatically sent to HTTPS.
Like this:
server {
    listen 90;
    server_name example.com;
    # Redirect all requests to https
    return 301 https://$server_name$request_uri;
}

server {
    listen 9090 ssl;
    server_name example.com;
    ssl on;
    ssl_certificate /etc/nginx/ssl/nginx.crt;
    ssl_certificate_key /etc/nginx/ssl/nginx.key;
    root /var/www/path;
    fastcgi_param HTTPS on;
    fastcgi_param HTTP_SCHEME https;
}
Try that and see if that works for you.
Basically, for the first server block you are simply creating a redirect, and all the real configuration lives in the second one.

nginx CORS Issues with MAXCDN and Easydns with Digital Ocean

I am having issues with CORS, specifically with MaxCDN. CORS was working properly with MaxCDN until a few days ago. I have posted my host config below, and the CORS header is included.
I am stumped at this point, as I have done the following to troubleshoot:
Disabled a rocket-cache specific configuration for nginx included in the server block.
Changed caching methods - rather than redis-hhvm I have tried switching over to fcgi-hhvm with rocket cache.
Disabled rocket cache after clearing its cache, then purged the entire cache and used a third-party WordPress plugin specifically for linking the CDN.
I am using SNI with SPDY on MaxCDN - I have a cert just for the subdomain (cdn.jurisdesk.com) - and I am using DigitalOcean for hosting.
Below is my current nginx config (everything was working properly until a few days ago, which prompted me to speak with MaxCDN support - who are great, by the way, and extremely knowledgeable about advanced nginx configurations).
server {
    server_name www.jurisdesk.com;
    ssl_certificate_key /path/to/key/foobar.key;
    ssl_certificate /path/to/cert/foobar.crt;
    listen *:80;
    listen *:443 ssl spdy;
    listen [::]:80 ipv6only=on;
    listen [::]:443 ssl spdy ipv6only=on;
    return 301 https://jurisdesk.com$request_uri;
}

server {
    server_name jurisdesk.com;
    listen *:80;
    listen [::]:80;
    return 301 https://jurisdesk.com$request_uri;
}

server {
    server_name jurisdesk.com;
    listen *:443 ssl spdy;
    listen [::]:443 ssl spdy;
    ssl on;
    ssl_stapling on;
    ssl_stapling_verify on;
    resolver 8.8.8.8 8.8.4.4 valid=300s;
    resolver_timeout 5s;
    ssl_certificate_key /path/to/key/foobar.key;
    ssl_certificate /path/to/cert/foobar.crt;

    access_log /var/log/nginx/jurisdesk.com.access.log rt_cache_redis;
    error_log /var/log/nginx/jurisdesk.com.error.log;

    root /var/www/jurisdesk.com/htdocs;
    index index.php index.html index.htm;

    include common/redis-hhvm.conf;
    include rocket-nginx/rocket-nginx.conf;
    include common/wpcommon.conf;
    include common/locations.conf;

    location ~ \.(ttf|ttc|otf|eot|woff|woff2|font.css|css|js)$ {
        add_header Access-Control-Allow-Origin "*";
    }
}
I have also added the CORS header to rocket-nginx.conf, as this is something I've been tinkering with lately and it reflects a change to my config; however, I have since removed that directive to rule it out as the cause of the problem.
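One detail worth double-checking in setups like this (an observation, not a confirmed fix for the MaxCDN issue): add_header directives are only inherited from the server level when a location defines none of its own, and by default they are only attached to 2xx/3xx responses. Since nginx 1.7.5, add_header accepts an always flag so the CORS header is also present on error or cached responses the CDN might pick up - a sketch:
location ~ \.(ttf|ttc|otf|eot|woff|woff2|font.css|css|js)$ {
    # "always" attaches the header to every response code (nginx 1.7.5+)
    add_header Access-Control-Allow-Origin "*" always;
}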

Nginx - Stop forcing HTTPS on subdomain

I have a site that runs on nginx, with a load balancer in front and currently only one web server behind it (there is no real traffic yet, so a single web server is enough).
Anyway, in the load balancer's nginx config we force HTTPS on each request:
server {
    listen 80;
    server_name www.xyz.com xyz.com;
    return 301 https://www.xyz.com$request_uri;
}
This works fine, but now I want to say "on this subdomain - dev.xyz.com - allow HTTP too and don't force HTTPS".
At first the server_name param was "any", and I thought that might be the problem, so I explicitly listed the names as in the sample above; but when I go to http://www.dev.xyz.com, I still get redirected to https://www.xyz.com.
Below that server block, we have the SSL definitions too:
server {
    listen 443;
    ssl on;
    ssl_certificate /etc/nginx/ssl/xyz.com.pem;
    ssl_certificate_key /etc/nginx/ssl/xyzPrivateKeyNginx.key;
    keepalive_timeout 70;
    server_name www.xyz.com;
    root /usr/local/nginx/html;
    client_max_body_size 25M;
    client_body_timeout 120s;

    # Add trailing slash if missing
    rewrite ^([^\.]*[^/])$ $1/ permanent;
}
Thanks! :)
It turned out the solution was simple: I just added a plain HTTP server block for the subdomain that proxies straight to the backend, so those requests never hit the catch-all HTTPS redirect:
server {
    listen 80;
    server_name www.dev.xyz.com;

    location / {
        proxy_pass http://xxyyzz;
    }
}
Where xxyyzz is:
upstream xxyyzz {
    ip_hash;
    server 10.100.1.100:80;
}
Thanks anyways!
