Delete cache files with Lua from inside nginx

I have nginx running and it saves cache files to the local disk. I have to clear that cache manually from time to time. I thought about adding an extra location like /clear_cache where I delete the local files directly with Lua, since it can be embedded in nginx.
I did some research and found things like rewrite_by_lua and content_by_lua. Is it possible to access/modify the underlying filesystem with Lua, or is that restricted?

Yes, you can remove files:
location /clear_cache {
    content_by_lua_block {
        -- create a file ("wb" opens it in binary mode)
        local f = assert(io.open("/newFile.txt", "wb"))
        f:write("some content")
        f:close()
        -- remove the file
        os.remove("/newFile.txt")
    }
}

Related

Is it possible to create Nginx routing configuration from a file?

I tried to look everywhere but I was not able to find any solution. I need to create an Nginx routing configuration based on data in a custom file. The file will be updated automatically and looks like this:
api_key_1: instance_id_1
api_key_2: instance_id_2
And in nginx.conf I expect something like this:
upstream instance_id_1 {
server 127.0.0.1:8080;
}
upstream instance_id_2 {
server 127.0.0.1:8081;
}
map $http_x_instance_id $pool {
api_key_1 "instance_id_1";
api_key_2 "instance_id_2";
}
Is it possible to create the map {} part dynamically according to the content of my config file? How would I solve this task?
Use the include directive in your nginx config:
map $http_x_instance_id $pool {
include /path/to/instances;
}
Install inotify-tools.
Write a script that watches your file. On every file change, execute something like this:
sed -n 's/\(.*\):[[:blank:]]*\(.*\)/\1 "\2";/p' /path/to/your/custom/file >/path/to/instances
nginx -s reload
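If you'd rather not depend on inotify-tools, here is a minimal polling sketch in Python (standard library only; the paths are the placeholder paths from above) that regenerates the include file and reloads nginx whenever the custom file changes:

import os
import re
import subprocess
import time

SRC = '/path/to/your/custom/file'   # the "api_key: instance_id" file
DST = '/path/to/instances'          # the file the map block includes

def regenerate():
    # rewrite 'api_key: instance_id' lines as 'api_key "instance_id";'
    with open(SRC) as src, open(DST, 'w') as dst:
        for line in src:
            m = re.match(r'(.*):\s*(.*)', line.strip())
            if m:
                dst.write('%s "%s";\n' % (m.group(1), m.group(2)))
    # reload nginx so it re-reads the included file (needs privileges)
    subprocess.run(['nginx', '-s', 'reload'], check=True)

last = None
while True:
    mtime = os.stat(SRC).st_mtime
    if mtime != last:
        regenerate()
        last = mtime
    time.sleep(1)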

How to tell when a resource has been completely downloaded in nginx

I need to know when a resource has been fully downloaded from the server. My server is configured with the nginx web server, and I want to do something if and only if the resource has been completely downloaded by the user.
If you are using nginx to handle downloading your files (using X-Sendfile), you should add a specific access_log directive to the download-handling location block in your nginx config (in your "server" block). It would be something like this:
location /download_music/ {
internal;
root /usr/share/nginx/MY_SITE/media/music;
access_log /var/log/nginx/download.MY_SITE.log download;
}
The word "download" at the end of the access_log line is actually a log format, which you should add to the nginx main config file (/etc/nginx/nginx.conf) in the "http" block. It could be something like this:
http {
...
log_format download '{ "remote_addr" : "$remote_addr", "time":"$time_local", "request":"$request", '
'"traffic":$body_bytes_sent, "x_forwarded_for":"$http_x_forwarded_for" }';
...
}
You can change this format to whatever you want (you will use it in your script later). If you monitor this log file (using "tail -f /var/log/nginx/download.MY_SITE.log"), you will see that every time a download is paused or finished, a log line is added to this file.
The next step is using rsyslog and its "imfile" and "omprog" modules. You should add these configs at the end of the rsyslog config file (/etc/rsyslog.conf):
$ModLoad imfile
$InputFileName /var/log/nginx/download.MY_SITE.log
$InputFileTag nginx-access
$InputFileStateFile state-nginx-access
$InputFileSeverity info
$InputFileFacility local3
$InputFilePollInterval 1
$InputRunFileMonitor
$ModLoad omprog
$ActionOMProgBinary /usr/share/nginx/MY_SITE/scripts/download.py
$template LogZillaDbInsert,"%hostname:::lowercase%\9%pri%\9%programname%\9%msg%\n"
local3.* :omprog:;RSYSLOG_TraditionalFileFormat
Pay attention to this line:
/usr/share/nginx/MY_SITE/scripts/download.py
This is the path of the script that will be called every time a log entry is added to the log file; the whole log entry is available in the script via (in Python code):
line = sys.stdin.readline()
Then you can parse the line and find whatever you have logged, including the downloaded file size (on every pause/finish event). Now you can simply include the file size in the download URL, retrieve it in this script, and compare it with the downloaded bytes. If those two numbers are equal, the download finished successfully. Moreover, you can do anything else you want in this script (for example, expire the download link, increase a download count in the DB, ...)
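For illustration, a sketch of what such a download.py could look like, assuming the JSON log_format above and assuming (hypothetically) that the expected file size is embedded in the download URL as a size query argument:

#!/usr/bin/env python3
import json
import sys
from urllib.parse import urlparse, parse_qs

for line in sys.stdin:
    # rsyslog may prepend a syslog header, so cut at the first '{'
    start = line.find('{')
    if start == -1:
        continue
    try:
        entry = json.loads(line[start:])
    except ValueError:
        continue
    # "request" looks like: GET /download_music/song.mp3?size=123456 HTTP/1.1
    method, url, protocol = entry['request'].split(' ', 2)
    args = parse_qs(urlparse(url).query)
    expected = int(args.get('size', ['0'])[0])  # hypothetical size argument
    sent = int(entry['traffic'])                # $body_bytes_sent
    if expected and sent >= expected:
        # download finished completely: expire the link,
        # increase the download count in the DB, ...
        pass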

How can I compress / gzip my minified .js and .css files before publishing to AWS S3?

I ran Google pagespeed and it suggests compressing my .js and .css
Eliminate render-blocking JavaScript and CSS in above-the-fold content
Show how to fix
Enable compression
Compressing resources with gzip or deflate can reduce the number of bytes sent over the network.
Enable compression for the following resources to reduce their transfer size by 210.9KiB (68% reduction).
Compressing http://xx.com/content/bundles/js.min.js could save 157.3KiB (65% reduction).
Compressing http://xx.com/content/bundles/css.min.css could save 35.5KiB (79% reduction).
Compressing http://xx.com/ could save 18.1KiB (79% reduction).
During my publish I have a step that uses Windows PowerShell to move a minified .js and .css bundle to S3, from where it goes to CloudFront.
Is there some step I could add to the PowerShell script that would compress the .js and .css files?
Also, once the files are compressed, do I have to do anything more than change the name, so as to tell the browser that it needs to accept a gzip file?
You can add the code needed to gzip-compress the files to your upload script.
Some example code could be this:
function Gzip-FileSimple
{
    param
    (
        [String]$inFile = $(throw "Gzip-File: No filename specified"),
        [String]$outFile = $($inFile + ".gz"),
        [switch]$delete # Delete the original file
    )
    trap
    {
        Write-Host "Received an exception: $_. Exiting."
        break
    }
    if (! (Test-Path $inFile))
    {
        "Input file $inFile does not exist."
        exit 1
    }
    Write-Host "Compressing $inFile to $outFile."
    # note: $input is an automatic variable in PowerShell, so use another name
    $inStream = New-Object System.IO.FileStream $inFile, ([IO.FileMode]::Open), ([IO.FileAccess]::Read), ([IO.FileShare]::Read)
    $buffer = New-Object byte[]($inStream.Length)
    $byteCount = $inStream.Read($buffer, 0, $inStream.Length)
    if ($byteCount -ne $inStream.Length)
    {
        $inStream.Close()
        Write-Host "Failure reading $inFile."
        exit 2
    }
    $inStream.Close()
    $output = New-Object System.IO.FileStream $outFile, ([IO.FileMode]::Create), ([IO.FileAccess]::Write), ([IO.FileShare]::None)
    $gzipStream = New-Object System.IO.Compression.GzipStream $output, ([IO.Compression.CompressionMode]::Compress)
    $gzipStream.Write($buffer, 0, $buffer.Length)
    $gzipStream.Close()
    $output.Close()
    if ($delete)
    {
        Remove-Item $inFile
    }
}
From this site: Gzip creation in Powershell
PowerShell Community Extensions has a cmdlet for gzipping files, and it's very easy to use:
Write-Gzip foo.js #will create foo.js.gz
mv foo.js.gz foo.js -Force
You don't have to rename your files, just add a Content-Encoding header and set it to gzip.
Since Amazon S3 is intended to serve only static files, it doesn't compress files (assets); that's why you need to compress them yourself:
Compress your .js and .css with gzip: I don't know how to do it with PowerShell, but I do with Python; I suggest making a Python deployment script (even better, a fabfile) and integrating the compression and push code into it.
"Also once the files are compressed then do I have to do anything more than change the name so as to tell my browser that it will need to try and accept a gzip file?": Good question! It is not necessary to change the name of the compressed file; I suggest you don't rename it. However, you:
MUST also set the header "Content-Encoding: gzip", otherwise browsers will not know what to do with the file.
Must set the header "Content-Type": type (type = 'application/javascript' or 'text/css').
Must set the header "x-amz-acl": 'public-read' to make it publicly accessible.
I also suggest setting the header "Cache-Control: max-age=TIME" (example: TIME=31104000, for 360 days) to help browsers cache it (better performance).
This will work whether served from the origin or through CloudFront. But remember: if you serve the files through CloudFront, you will need to invalidate them all after each push, otherwise the old version will live for up to 24 hours after the push. Hope this helps; a minimal Python sketch of this approach follows.
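A minimal sketch of that Python approach, assuming boto3 and hypothetical bucket and key names (only the headers themselves come from the list above):

import gzip
import boto3

def push_gzipped(bucket, key, path, content_type, max_age=31104000):
    # gzip the file in memory before uploading
    with open(path, 'rb') as f:
        body = gzip.compress(f.read())
    boto3.client('s3').put_object(
        Bucket=bucket,
        Key=key,
        Body=body,
        ContentEncoding='gzip',               # MUST: browsers need this header
        ContentType=content_type,             # 'application/javascript' or 'text/css'
        ACL='public-read',                    # make it publicly accessible
        CacheControl='max-age=%d' % max_age,  # ~360 days, helps browser caching
    )

push_gzipped('MY_BUCKET', 'content/bundles/js.min.js',
             'content/bundles/js.min.js', 'application/javascript')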

In nginx config, how do I write something to the log file directly from the config file?

I'm configuring nginx and debugging the config file.
How can I write something from the config file directly to the log file?
For example:
location ..... {
to_log "some string";
}
There is no direct way (it's on the todo list of the echo nginx module), but this solution seems fine: https://serverfault.com/questions/404626/how-to-output-variable-in-nginx-log-for-debugging

How to configure nginx to serve versioned css and javascript files

My CSS files are named main.css?v=1.0.0 and site.css?v=1.0.
How do I configure nginx to serve this kind of file (not ending in a plain .css or .js extension, but with a version number)?
FYI: all the files are in the right path, and the error message in the Chrome dev tools console is file not found (404).
Thanks!
When you're trying to access main.css?v=1.0.0 or main.css?v=2.0.0, any web server will point it to the same file, main.css.
Well, in your situation you could create a separate location for your versioned file, and then use the next code in the nginx config:
location = /main.css {
    if ($arg_v) {
        rewrite ^ /where/maincss/versions/stored/main-$arg_v.css last;
    }
    # otherwise the default main.css is served
}
The same approach will work for any other file.
