Sending multiple HTML files to a web client with HttpLuaModule

When I run this script, only the header is displayed:
ngx.exec('/header.html')
ngx.exec('/footer.html')
What's the best way to serve my template?
I've also tried this:
local f = io.read('/header.html','r')
ngx.print(f)
but I can't seem to get past a 404 error.

Your description is not very clear, but note two things: ngx.exec() performs an internal redirect to another location (optionally with args) and never returns control to your script, so only the first call takes effect; and io.read() does not take a filename, which is why your second attempt fails (you want io.open()).
If you want to serve an HTML file from Lua, you can write a Lua script and use the content_by_lua_file directive.
Nginx config:
location /test {
    default_type "text/html";
    content_by_lua_file "x.lua";
}
Lua script (x.lua):
-- ngx is available as a global in lua-nginx-module handlers
local f = io.open('header.html', 'r')  -- relative paths resolve against nginx's working directory
local html_text = f:read('*a')  -- '*a' reads the whole file; a bare f:read() returns only the first line
f:close()
ngx.print(html_text)
You can now access 127.0.0.1/test through the nginx server and see your HTML file.
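If the goal is a template assembled from several files (header, body, footer), one option is to read and print each file in order from a single content_by_lua_file handler. A minimal sketch, assuming absolute paths under /var/www/html (the paths and file names are placeholders; adjust them to your layout):

-- x.lua: build the page from header, body, and footer
local function read_file(path)
    local f, err = io.open(path, 'r')
    if not f then
        ngx.log(ngx.ERR, 'cannot open ', path, ': ', err)
        return ngx.exit(ngx.HTTP_INTERNAL_SERVER_ERROR)
    end
    local content = f:read('*a')
    f:close()
    return content
end

ngx.print(read_file('/var/www/html/header.html'))
ngx.print(read_file('/var/www/html/body.html'))
ngx.print(read_file('/var/www/html/footer.html'))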

Related

Is it possible to have live reloading of Lua scripts using OpenResty and Docker?

I'm newish to Lua and want to practice some Lua scripting using nginx/OpenResty.
Is there a workflow where I can use Docker to run OpenResty and link my laptop's filesystem to the container, so that when I change a Lua script I can quickly reload the OpenResty server and see the change take effect?
Any help or guidance would be appreciated.
You can disable the Lua code cache (https://github.com/openresty/lua-nginx-module#lua_code_cache) by adding lua_code_cache off; inside the http or server block. This is not actually "hot reload"; it's closer to the PHP request lifecycle:
every request served by ngx_lua will run in a separate Lua VM instance
You can think of it as if the code were hot-reloaded on each request.
However, pay attention to this:
Please note however, that Lua code written inlined within nginx.conf [...] will not be updated
This means you should move all your Lua code out of the nginx config and into Lua modules, and only require them:
server {
    lua_code_cache off;

    location /foo {
        content_by_lua_block {
            -- OK, the module will be imported (recompiled) on each request
            require('mymodule').do_foo()
        }
    }

    location /bar {
        content_by_lua_block {
            -- Bad, this inlined code won't be reloaded unless nginx is reloaded.
            -- Move this code to a function inside a Lua module
            -- (e.g., `mymodule.lua`).
            local method = ngx.req.get_method()
            if method == 'GET' then
                -- handle GET
            elseif method == 'POST' then
                -- handle POST
            else
                return ngx.exit(ngx.HTTP_NOT_ALLOWED)
            end
        }
    }
}
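For completeness, a minimal sketch of the module the config above requires (the name mymodule and the function do_foo come from the config; the body is an assumption):

-- mymodule.lua, somewhere on lua_package_path
local _M = {}

function _M.do_foo()
    -- with lua_code_cache off, edits here take effect on the next request
    ngx.say('Hello from mymodule')
end

return _M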
Then you can mount your Lua code from the host to the container using --mount or --volume: https://docs.docker.com/storage/bind-mounts/
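For example (a sketch using the official openresty/openresty image; the host paths are placeholders for your project layout, and lua_package_path must include the mounted directory):

docker run --rm -p 8080:80 \
    -v "$PWD/nginx.conf:/usr/local/openresty/nginx/conf/nginx.conf:ro" \
    -v "$PWD/lua:/etc/nginx/lua:ro" \
    openresty/openresty:alpine

With lua_code_cache off and the Lua files bind-mounted, saving a module on the host is enough; the next request picks up the change without reloading nginx.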

Upload file to Nginx with cURL

All,
I'm trying to upload a local file to my remote Nginx server via cURL. I have built Nginx from source with the upload module and the DAV module. At the bottom of the Nginx page, there is an example form to upload a file. I'm not sure how I would implement the form, and (several) Google searches have returned little helpful information about uploading directly to Nginx via cURL.
Current tech stack:
Nginx
Green Unicorn
Flask
Of all the different avenues I've tried, the following seems the most appropriate for the task:
curl -X POST -F "image=@example.gif" http://54.226.64.199/upload
However, the response is underwhelming.
I've tried --upload-file as well; the response is a 405. From what I've read, the upload module only accepts POST, not PUT, hence the 405.
I don't need a full solution (though that would be great!), just a pointer in the right direction.
Any help is appreciated. Thanks.
EDIT: sorry, I wanted to include part of my .conf:
location /upload {
    upload_store /tmp;
    #upload_pass @none;
    upload_store_access all:rw;
    upload_cleanup 400 404 499 500-505;
}
You can do this by specifying the filename in the URL, without using any external module:
location ~ "/upload/([0-9a-zA-Z-.]*)$" {
alias /storage/www/upload/$1;
client_body_temp_path /tmp/upload_tmp;
dav_methods PUT DELETE MKCOL COPY MOVE;
create_full_put_path on;
dav_access group:rw all:r;
}
And use: curl -T example.gif http://54.226.64.199/upload/example.gif
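Note that curl -T issues a PUT request, which is why PUT must be listed in dav_methods above. To sanity-check the result (assuming the same location also serves GETs from the aliased directory), something like:

curl -I http://54.226.64.199/upload/example.gif

should return 200 once the file is stored.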

Blank POST with nginx upload module and chunked upload

I am using the nginx upload module to accept large uploads for a PHP application. I have configured nginx following this blog post (modified for my needs).
Here is (the applicable portion of) my nginx configuration:
server {
    # [ ... ]
    location /upload {
        set $upload_field_name "file";
        upload_pass /index.php;
        upload_store /home/example/websites/example.com/storage/uploads 1;
        upload_resumable on;
        upload_max_file_size 0;
        upload_set_form_field $upload_field_name[filename] "$upload_file_name";
        upload_set_form_field $upload_field_name[path] "$upload_tmp_path";
        upload_set_form_field $upload_field_name[content_type] "$upload_content_type";
        upload_aggregate_form_field $upload_field_name[size] "$upload_file_size";
        upload_pass_args on;
        upload_cleanup 400-599;
        client_max_body_size 200M;
    }
}
In the client-side JavaScript, I am using 8 MB chunks.
With this configuration, I am able to upload any file that is one chunk or smaller. However, when I try to upload any file larger than that, the response from the server for each intermediate chunk is blank, and the final chunk triggers the call to the PHP application without any incoming POST data.
What am I missing?
It turns out that @Brandan's blog post actually leaves out one important directive:
upload_state_store /tmp;
I added that and now everything works as expected.
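For clarity, a sketch of where the directive belongs (everything else as in the question's config):

location /upload {
    # ... upload_* directives as above ...
    upload_resumable on;
    upload_state_store /tmp;  # persists per-upload state between chunked requests
    # ...
}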

Message "X-Accel-Mapping header missing" in Nginx error log

I am running a Rails 3 site on Ubuntu 8.04 with Nginx 1.0.0 and Passenger 3.0.7.
In my Nginx error.log I started seeing the message X-Accel-Mapping header missing quite a lot. Googling led me to the docs of Rack::Sendfile and to the Nginx docs.
Now, my app can be accessed through several domains, and I am using send_file in my app to deliver files specific to the domain they are requested from; e.g., if you come to domain1.com/favicon.ico, I look up the favicon at public/websites/domain1/favicon.ico.
This works fine and I don't think I need/want to get Nginx involved and create some private area where I store those files, as the samples in the Rack::Sendfile docs suggest.
How can I get rid of the error message?
This message means that Rack::Sendfile disabled X-Accel-Redirect for you because the configuration for it is missing from your nginx.conf.
I'm using Nginx + Passenger 3 + Rails 3.1.
Gathering information from these pages, I figured it out:
http://wiki.nginx.org/X-accel
http://greenlegos.wordpress.com/2011/09/12/sending-files-with-nginx-x-accel-redirect
http://code.google.com/p/substruct/source/browse/trunk/gems/rack-1.1.0/lib/rack/sendfile.rb?r=355
Serving Large Files Through Nginx via Rails 2.3 Using x-sendfile
I have a controller which maps /download/1 requests to storage files that have their own directory structure, like storage/00/00/1, storage/01/0f/15, etc. So I need to route the request through Rails, but then use the send_file method, which sets X-Accel-Redirect so that nginx sends the final file to the browser directly.
Within the code I have this:
send_file(
    '/var/www/shared/storage/00/00/01',  # absolute path of the file you want to send
    :disposition => :inline,
    :filename => @file.name
)
(I replaced the real filename for the purposes of this example.)
Now I had to add these lines to my nginx.conf:
server {
    # ...
    passenger_set_cgi_param HTTP_X_ACCEL_MAPPING /var/www/shared/storage/=/storage/;
    passenger_pass_header X-Accel-Redirect;
    location /storage {
        root /var/www/shared;
        internal;
    }
    # ...
}
The path /storage is not visible from the outside world; it is internal only.
Rack::Sendfile reads the X-Accel-Mapping header, extracts the mapping from it, and replaces /var/www/shared/storage with /storage in the file path. It then emits the modified header:
X-Accel-Redirect: /storage/00/00/01
which is then processed by nginx internally.
I can see this works correctly, as the file downloads about 100x faster than before and no errors show up in the logs.
Hope this helps.
We used a similar technique to the one NoICE described, but I replaced the hard-coded directory containing all the files with regular expressions describing the folders that contain them.
Sounds hard? Just take a look at this (/etc/nginx/sites-available/my.web.site):
location ~ ^/assets/(.+-[a-z0-9]+\.\w+)$ {
    alias /home/user/my.web.site/public/assets/$1;
    internal;
}
location ~ ^/images/(.+)$ {
    alias /home/user/my.web.site/public/images/$1;
    internal;
}
(Regex locations need the ~ modifier; alias, not root, maps the captured name to the file, and the query string is never part of the location match, so there is no need to match it.)
This should be used together with this check:
location / {
    # ...
    if (-f $request_filename) {
        expires max;
        break;
    }
    # ...
}
to keep static files from being processed by Rails.
I did it following this guide:
https://mattbrictson.com/accelerated-rails-downloads
My server sends the file path /private_upload/file/123/myfile.txt; the actual file is at /data/myapp-data/private_upload/file/123/myfile.txt.
# Allow NGINX to serve any file in /data/myapp-data/private_upload
# via a special internal-only location.
location /private_upload {
    internal;
    alias /data/myapp-data/private_upload;
}

# ---------- BACKEND ----------
location @backend {
    limit_req zone=backend_req_limit_per_ip burst=20 nodelay;
    proxy_pass http://backend;
    proxy_set_header X-Sendfile-Type X-Accel-Redirect;
    proxy_set_header X-Accel-Mapping /=/;  # Rack::Sendfile requires this header to be present; /=/ is an identity mapping
    include /etc/nginx/templates/myapp_proxy.conf;
}

nginx + ssi + remote uri access does not work

I have a setup where nginx sits in front, with Apache+PHP behind it.
My PHP application caches some pages in memcache; nginx serves those directly, except for some dynamic parts, which are built using SSI in nginx.
The first problem I had was that nginx didn't try to use memcache for the SSI URI:
<!--#include virtual="/myuser" -->
So I figured that if I forced it to use a full URL, it would:
<!--#include virtual="http://www.example.com/myuser" -->
But in the log files (both nginx and apache) I can see that a slash has been prepended to the URL:
http ssi filter "/http://www.example.com/myuser"
In the source code of the SSI module I see a PREFIX that seems to be added, but I can't tell whether it can be disabled.
Has anybody else hit this issue?
Nginx version: 0.7.62 on Ubuntu Karmic, 64-bit.
Thanks a lot.
You can configure nginx to include remote URLs even though you cannot reference them directly in SSI instructions. In the site config, create a location with a local path plus a named location that proxies wherever you want. For example:
server {
    # ...
    location /remote {
        # proxy_pass cannot target a named location; try_files hands the request off to it
        try_files /dev/null @long_haul;
    }
    location @long_haul {
        proxy_pass http://porno.com;
    }
    # ...
}
and in the served HTML use an include directive that refers to the /remote path:
<!--#include virtual="/remote/rest-of-url&and=parameters" -->
Note that you can customize the URL that is passed upstream with variables and regex captures. For example:
location ~ ^/remote(.+)$ {
    # nginx needs a "resolver" directive when proxy_pass contains variables
    proxy_pass http://porno.com$1$is_args$args;
}
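With that location, a request for /remote/foo?x=1 captures /foo and proxies to http://porno.com/foo?x=1. (A sketch; as the comment notes, a resolver directive is required so nginx can expand the variables in proxy_pass at runtime.)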
This has nothing to do with nginx; SSI simply doesn't accept remote URIs. You can only specify a local file path. See
http://en.wikipedia.org/wiki/Server_Side_Includes
