I have set up a web server (Nginx) and deployed an application (charset: Shift_JIS).
After deployment I accessed the application, but the Content-Type header in the response returned by Nginx contains no charset.
Could you tell me which Nginx settings I need? I tried the configuration below, but it didn't help.
Settings
# nginx.conf
server {
    location / {
        proxy_pass http://myapp.com:8080/;
        charset Shift-Jis;
    }
}
Response
# for example (CSS)
...
actual:   content-type: text/css
expected: content-type: text/css; charset=UTF-8
...
Best regards.
This works for me:
charset utf-8;
charset_types *;
Set the charset_types value to *.
You are passing the request to a proxy, so the headers and the charset are set by the upstream. Set the proper encoding in the proxied application itself. You can override an existing charset with override_charset:
charset utf-8;
override_charset on;
If you also want to set the source charset:
source_charset koi8-r;
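As a rough sketch, these directives could be combined in the proxy location from the question, with the charset values adjusted to whatever the application actually emits:
location / {
    proxy_pass        http://myapp.com:8080/;
    charset           utf-8;  # placeholder: the charset to advertise in Content-Type
    override_charset  on;     # also apply when the upstream response already carries a charset
    charset_types     *;      # apply to every MIME type (e.g. text/css), not just the default text types
}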
I use this config in nginx.conf.
http {
    map $sent_http_content_type $charset {
        default                 '';
        ~^text/                 utf-8;
        application/javascript  utf-8;
        application/rss+xml     utf-8;
        application/json        utf-8;
        application/geo+json    utf-8;
    }
    charset $charset;
    charset_types *;
    ...
}
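To verify the result after reloading nginx, one way is to inspect the headers of a stylesheet (the URL here is only a placeholder):
curl -sI https://example.com/style.css | grep -i '^content-type'
# expected: content-type: text/css; charset=utf-8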
Related
I have a website on an EC2 instance with NGINX and Ubuntu 18.04. I have set up the server block for the site, and the site is loading correctly. I have also enabled caching for images, JavaScript and CSS files.
I want to exclude certain paths from being cached, and I tried some config examples available on the internet. None of them are working, and I am getting a 404 error for the locations that I am trying to exclude.
My server block is as follows:
# Expires map
map $sent_http_content_type $expires {
    default                 off;
    text/html               epoch;
    text/css                max;
    application/javascript  max;
    ~image/                 max;
    ~font/                  max;
}
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    expires $expires;
    location /path1/ {
        add_header Cache-Control "private";
    }
    location /path2/subpath1/ {
        add_header Cache-Control "private";
    }
    location /path2/subpath2/ {
        add_header Cache-Control "private";
    }
    location /path2/subpath3/ {
        add_header Cache-Control "private";
    }
}
The paths (path1, path2) which I have tried to exclude from caching are returning a 404 not found error. Can someone please help?
I am building a static website on my Olimex Lime2 board (Armbian OS) using Nginx as my web server. My problem is that no matter which static site builder or theme I use, there is no CSS styling when I view the public site. Here is the public site: https://natehn.com
I have tried several themes with Hugo and Jekyll, with little or no modification to the default settings. This is why I think the issue is with Nginx.
I have explored this question and done plenty of Googling, but was unable to find a solution. I'm self-taught and don't always know what I'm looking for. Hopefully I missed something simple and this is an easy fix.
Here is my nginx.conf:
events {}
# Expires map
http {
    map $sent_http_content_type $expires {
        default                 off;
        text/html               7d;
        text/css                max;
        application/javascript  max;
        ~image/                 max;
    }
    server {
        listen 80;
        server_name natehn.com;
        location / {
            return 301 https://$server_name$request_uri;
        }
    }
    server {
        listen 443 ssl http2;
        server_name natehn.com;
        charset UTF-8; # improve page speed by sending the charset with the first response
        location / {
            root /home/nathan/blog/public;
            index index.html;
            autoindex off;
        }
        # Caching (save html pages for 7 days, rest as long as possible, no caching on frontpage)
        expires $expires;
        location @index {
            add_header Last-Modified $date_gmt;
            add_header Cache-Control 'no-cache, no-store';
            etag off;
            expires off;
        }
        #error_page 404 /404.html;
        # redirect server error pages to the static page /50x.html
        #error_page 500 502 503 504 /50x.html;
        #location = /50x.html {
        #    root /var/www/;
        #}
        # Compression
        gzip on;
        gzip_disable "msie6";
        gzip_vary on;
        gzip_comp_level 6;
        gzip_buffers 16 8k;
        gzip_http_version 1.1;
        gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;
        # Logs
        access_log /var/log/nginx/natehn.com.com_ssl.access.log;
        error_log /var/log/nginx/natehn.com_ssl.error.log;
        # SSL settings
        ssl_certificate /etc/letsencrypt/live/natehn.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/natehn.com/privkey.pem;
        # Improve HTTPS performance with session resumption
        ssl_session_cache shared:SSL:10m;
        ssl_session_timeout 5m;
        # Enable server-side protection against BEAST attacks
        ssl_prefer_server_ciphers on;
        ssl_ciphers ECDH+AESGCM:ECDH+AES256:ECDH+AES128:DH+3DES:!ADH:!AECDH:!MD5;
        # Disable SSLv3
        ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        # Lower the buffer size to increase TTFB
        ssl_buffer_size 4k;
        # CAUSED ERROR
        # Diffie-Hellman parameter for DHE ciphersuites
        # $ sudo openssl dhparam -out /etc/ssl/certs/dhparam.pem 4096
        #ssl_dhparam /etc/ssl/certs/dhparam.pem;
        # Enable HSTS (https://developer.mozilla.org/en-US/docs/Security/HTTP_Strict_Transport_Security)
        add_header Strict-Transport-Security "max-age=63072000; includeSubdomains";
        # Enable OCSP stapling (http://blog.mozilla.org/security/2013/07/29/ocsp-stapling-in-firefox)
        ssl_stapling on;
        ssl_stapling_verify on;
        ssl_trusted_certificate /etc/letsencrypt/live/natehn.com/fullchain.pem;
        resolver 192.34.59.80 66.70.228.164 valid=300s;
        resolver_timeout 5s;
    }
}
And here is my sites-available/natehn.com, which is linked to sites-enabled:
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    # SSL configuration
    listen 443 ssl default_server;
    listen [::]:443 ssl default_server;
    root /home/nathan/blog/public;
    # Add index.php to the list if you are using PHP
    index index.html;
    server_name natehn.com www.natehn.com;
    location / {
        # First attempt to serve request as file, then
        # as directory, then fall back to displaying a 404.
        try_files $uri $uri/ =404;
    }
}
I have explored everything I know. Any tips on where to look for potential solutions? Let me know if there is something else you need to look at.
Many thanks :) N
The css link has been blocked because of mixed content - the page is loaded over https:// but the href for the css is plain http://
Same will be true for your favicon etc.
As a general approach, to use the same protocol as the parent page, simply omit the protocol from the href, for example:
<link rel="stylesheet" href="//natehn.com/css/style-white.css">
Edit: A better solution for site builders is to set the base URL, so that constructed hrefs always use the correct protocol, https in your case:
For Hugo, see baseURL in https://gohugo.io/getting-started/configuration/
For Jekyll, see baseurl in https://jekyllrb.com/docs/configuration/options/
Wordpress and others have similar options.
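As a sketch of what that setting might look like, assuming the builders' default config file names and the domain from the question:
# Hugo (config.toml)
baseURL = "https://natehn.com/"
# Jekyll (_config.yml)
url: "https://natehn.com"   # protocol and host used to build absolute URLs
baseurl: ""                 # subpath, empty when the site is served from the root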
I'm using X-Accel to serve protected content, using X-Accel-Redirect.
Is it possible to serve only part of the file? For example, a byte range (0 to x), or the first 5 minutes of a video (my final goal).
It's important to do this on the server side, so that the client cannot access the rest of the file.
Currently this is how I send the whole file:
X-Accel-Cache-Control: no-store, no-cache, must-revalidate
Pragma: no-cache
Content-Type: application/octet-stream
Content-Length: {file_size}
Content-Disposition: attachment; filename="myfile.mp4"
Accept-Ranges: bytes
X-Accel-Buffering: yes
X-Accel-Redirect: /protected/myfile.mp4
Nginx conf:
location /protected {
    internal;
    alias /dir/of/protected/files/;
    if_modified_since off;
    output_buffers 2 1m;
    open_file_cache max=50000 inactive=10m;
    open_file_cache_valid 15m;
    open_file_cache_min_uses 1;
    open_file_cache_errors off;
}
The massive hack would be to have nginx proxy to itself with a Range header that limits the request to a range of bytes.
Something like this (not tested, so it probably won't work as-is, but the idea should work):
http {
    ... snip config ...
    server {
        listen 80 default_server;
        listen [::]:80 default_server;
        root /html;
        index index.html;
        location / {
            proxy_pass http://localhost/content/;
            # the Range header has to go on the proxied request, hence proxy_set_header rather than add_header
            proxy_set_header Range "bytes=0-100000";
        }
        location /content {
            try_files $uri $uri/ =404;
        }
    }
}
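A quick way to sanity-check the hack: if the range is being applied, the response should come back as 206 Partial Content with a Content-Length matching the range rather than the full file size.
curl -s -D - -o /dev/null http://localhost/somefile.mp4   # somefile.mp4 is just a placeholder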
I haven't tested Slice and X-Accel together. If each file can have a different limit defined by the backend you might configure Slice in the location and send the limit with the X-Accel-Redirect URL as below:
X-Accel-Cache-Control: no-store, no-cache, must-revalidate
Pragma: no-cache
Content-Type: application/octet-stream
Content-Length: {file_size}
Content-Disposition: attachment; filename="myfile.mp4"
Accept-Ranges: bytes
X-Accel-Buffering: yes
X-Accel-Redirect: /protected/myfile.mp4?s=0&e=$PHP_VAR
Nginx conf:
location /protected {
    slice;              # enable slicing
    slice_start_arg s;
    slice_end_arg e;
    internal;
    alias /dir/of/protected/files/;
    if_modified_since off;
    output_buffers 2 1m;
    open_file_cache max=50000 inactive=10m;
    open_file_cache_valid 15m;
    open_file_cache_min_uses 1;
    open_file_cache_errors off;
}
A global file limit
You would need to redirect the original request including the Slice parameters to truncate the file being served.
Nginx conf:
location /sample {
    slice;              # enable slicing
    slice_start_arg s;
    slice_end_arg e;
    internal;
    alias /dir/of/protected/files/;
    if_modified_since off;
    output_buffers 2 1m;
    open_file_cache max=50000 inactive=10m;
    open_file_cache_valid 15m;
    open_file_cache_min_uses 1;
    open_file_cache_errors off;
}
location /protected {
    rewrite ^ /sample?s=0&e=1024;  # replace with the desired file limit in bytes
}
If the rewrite directive above doesn't work, I suggest the following option using proxy_pass.
location /protected {
    set $file_limit 1024;  # replace with the desired file limit in bytes
    set $delimiter "";
    if ($is_args) {
        set $delimiter "&";
    }
    set $args $args${delimiter}s=0&e=$file_limit;
    proxy_pass $scheme://127.0.0.1/sample;
}
I would like NGINX to send the content type application/xml for all responses of a location.
My configuration is based on the documentation http://wiki.nginx.org/HttpCoreModule#types:
server {
    server_name my_name;
    listen 8088;
    keepalive_timeout 5;
    location / {
        proxy_pass http://myhost;
        types { }
        default_type application/xml;
    }
}
But the server always sends "text/xml" as the content type.
Any idea?
I am trying to use NginX as a reverse proxy for a few IIS servers. The goal is to have NginX sit in front of the IIS / Apache servers, caching static items such as CSS / JS / images. I am also trying to get NginX to automatically minify JS / CSS files using its perl module.
I found a sample script for minification here:
http://petermolnar.eu/linux-tech-coding/nginx-perl-minify-css-js/
With the script everything works fine, except that the reverse proxy breaks.
Questions:
Is what I am trying to accomplish even possible? I want NginX to minify the scripts before saving them to the cache.
Can NginX automatically set the proper expires headers so that static items are cached as long as possible and only replaced when the query string changes (jquery.js?timestamp=march-2012)?
Can NginX gzip the resources before sending them out?
Can NginX forward requests, or serve up a "Down for Maintenance" page if it cannot connect to the back-end server?
Any help would be greatly appreciated.
Here is what I have in my sites-enabled/default so far.
server {
    location / {
        proxy_pass http://mywebsite.com;
        proxy_set_header Host $host;
        proxy_cache STATIC;
        proxy_cache_valid 200 1d;
        proxy_cache_use_stale error timeout invalid_header updating
                              http_500 http_502 http_503 http_504;
    }
    location @minify {
        perl Minify::minify_handler;
    }
    location ~ \.css$ {
        try_files $uri.min.css @minify;
    }
    location ~ \.js$ {
        expires 30d;
    }
}
Nginx is an ideal solution for a reverse proxy, and it follows the Unix way: "do one thing and do it well". So I'd advise you to split content serving and the minification process apart instead of using third-party plugins to do many things at once.
Best practice is to do the minify & obfuscate phase on your local system before you deploy to production; this is easy to say and not hard to do, see the Google way to compress static assets. Once you have the assets ready to use, we can set up the nginx configuration.
Answers:
Use minify & obfuscate before deploying to production.
You can match assets with a regexp (by directory name or file extension):
location ~ ^/(assets|images|javascripts|stylesheets|swfs|system)/ {
    gzip_static on;
    expires max;
    add_header Cache-Control public;
    add_header Last-Modified "";
    add_header ETag "";
    break;
}
Use gzip on and gzip_static on to serve pre-gzipped files instead of compressing them on every request (see the note after the full config below).
Use try_files to detect whether the maintenance page exists:
try_files $uri /system/maintenance.html @mywebsite;
if (-f $document_root/system/maintenance.html) {
    return 503;
}
See the full nginx config for your case:
http {
    keepalive_timeout 70;

    gzip on;
    gzip_http_version 1.1;
    gzip_disable "msie6";
    gzip_vary on;
    gzip_min_length 1100;
    gzip_buffers 64 8k;
    gzip_comp_level 3;
    gzip_proxied any;
    gzip_types text/plain text/css application/x-javascript text/xml application/xml;

    upstream mywebsite {
        server 192.168.0.1;  # change it to your setting
    }

    server {
        try_files $uri /system/maintenance.html @mywebsite;

        location @mywebsite {
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header Host $http_host;
            proxy_redirect off;
            proxy_pass http://mywebsite;
        }

        location ~ ^/(assets|images|javascripts|stylesheets|swfs|system)/ {
            gzip_static on;
            expires max;
            add_header Cache-Control public;
            add_header Last-Modified "";
            add_header ETag "";
            break;
        }

        if (-f $document_root/system/maintenance.html) {
            return 503;
        }

        location @503 {
            error_page 405 = /system/maintenance.html;
            if (-f $document_root/system/maintenance.html) {
                rewrite ^(.*)$ /system/maintenance.html break;
            }
            rewrite ^(.*)$ /503.html break;
        }
    }
}
}
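One note on gzip_static in the config above: it only serves a pre-compressed file when a .gz sibling already exists next to the original, so the compression has to happen as part of the deploy step. A minimal sketch, assuming the built assets live under a hypothetical /var/www/mywebsite/assets:
# path is a placeholder; keep the originals (-k) and pre-compress CSS/JS at the highest level (-9)
find /var/www/mywebsite/assets -type f \( -name '*.css' -o -name '*.js' \) -exec gzip -k -9 {} \;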