Lua's package path in nginx

I am programming in Lua with nginx. I write my Lua file and place it in a directory such as /usr/local/nginx/lua. Then in nginx.conf I add a location, such as:
location /lua {
    lua_need_request_body on;
    content_by_lua_file lua/test.lua;
}
When I access this location through nginx, the Lua script is executed.
In a Lua file, you usually include your own Lua modules and set the search path, for example:
common_path = '../include/?.lua;'
package.path = common_path .. package.path
In ordinary Lua programming, such a relative path is relative to my Lua file.
But with nginx, relative paths are relative to the directory from which I start nginx.
For example, if I am in /usr/local/nginx and execute sbin/nginx, then ../include in package.path resolves to /usr/local/include.
If I am in /usr/local/nginx/sbin and execute ./nginx, it resolves to /usr/local/nginx/include.
I don't think the directory from which I start the nginx server should be constrained like this,
but I don't know how to resolve it.

You want to modify the Lua package.path to search in the directory where you have your source code. For you, that's lua/.
You do this with the lua_package_path directive, in the http block (the docs say you can put it in the top level, but when I tried that it didn't work).
So for you:
http {
    # the scripts in lua/ need to refer to each other,
    # but the server runs in lua/..
    lua_package_path "./lua/?.lua;;";
    ...
}
Now your lua scripts can find each other even though the server runs one directory up.
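For example (a minimal sketch with hypothetical file names, not code from the question): a helper module lua/util.lua can then be pulled in from lua/test.lua with a plain require:
-- lua/util.lua (hypothetical helper module)
local _M = {}

function _M.greet(name)
    return "Hello, " .. name .. " from util.lua"
end

return _M

-- lua/test.lua
local util = require('util')   -- resolved via "./lua/?.lua"
ngx.say(util.greet('nginx'))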

You can use $prefix now.
For example
http {
    lua_package_path "$prefix/lua/?.lua;;";
}
And start your nginx like this
nginx -p /opt -c /etc/nginx.conf
Then the search path will be
/opt/lua
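Tying this back to the original question, the http block could look like this (a sketch based on the question's layout; the relative script path is left as-is because content_by_lua_file already resolves it against the same prefix):
http {
    # "$prefix" is the server prefix, i.e. the path given to "nginx -p"
    lua_package_path "$prefix/lua/?.lua;;";

    server {
        location /lua {
            lua_need_request_body on;
            content_by_lua_file lua/test.lua;
        }
    }
}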

Related

Is it possible to have live reloading of lua scripts using openresty and docker?

I'm newish to Lua and want to practice some Lua scripting using nginx/OpenResty.
Is there a workflow where I can use Docker to run OpenResty and link my laptop's filesystem to the container, so that when I change a Lua script I can quickly reload the OpenResty server and see my changes take effect?
Any help or guidance would be appreciated.
You can disable the Lua code cache (https://github.com/openresty/lua-nginx-module#lua_code_cache) by adding lua_code_cache off inside the http or server block. This is not actually "hot reload"; it's more like the PHP request lifecycle:
every request served by ngx_lua will run in a separate Lua VM instance
You can think of it as if the code is hot-reloaded on each request.
However, pay attention to this:
Please note however, that Lua code written inlined within nginx.conf [...] will not be updated
It means that you should move all your Lua code from the nginx config to Lua modules and only require them:
server {
    lua_code_cache off;

    location /foo {
        content_by_lua_block {
            -- OK, the module will be imported (recompiled) on each request
            require('mymodule').do_foo()
        }
    }

    location /bar {
        content_by_lua_block {
            -- Bad, this inlined code won't be reloaded unless nginx is reloaded.
            -- Move this code to a function inside a Lua module
            -- (e.g., `mymodule.lua`).
            local method = ngx.req.get_method()
            if method == 'GET' then
                -- handle GET
            elseif method == 'POST' then
                -- handle POST
            else
                return ngx.exit(ngx.HTTP_NOT_ALLOWED)
            end
        }
    }
}
Then you can mount your Lua code from the host to the container using --mount or --volume: https://docs.docker.com/storage/bind-mounts/
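For example (a sketch; openresty/openresty is the official image, but the host paths and the container's Lua/config paths are assumptions you should adjust to your own layout):
docker run -p 8080:80 \
    -v /path/on/laptop/nginx.conf:/usr/local/openresty/nginx/conf/nginx.conf \
    -v /path/on/laptop/lua:/usr/local/openresty/nginx/lua \
    openresty/openresty
With lua_code_cache off and the handlers loaded via require from the mounted directory, editing a file on the laptop is picked up on the next request without restarting the container.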

'init_by_lua_block' directive not getting executed on nginx start

I want one of my Lua scripts to be executed whenever the nginx server starts or is reloaded. I tried the init_by_lua_block and init_by_lua_file directives, but I don't see any log traces for the Lua script in init_by_lua_block when I run the nginx Docker container. My http block looks like below. nginx.conf is located in the container at /etc/nginx/nginx.conf.
http {
    sendfile on;
    init_by_lua_block /etc/nginx/lua/init.lua;
    include /etc/nginx/conf.d/proxy-config.conf;
}
Can anyone please tell me what I am missing here?
init_by_lua_block
syntax: init_by_lua_block { lua-script }
https://github.com/openresty/lua-nginx-module#init_by_lua_block
init_by_lua_block expects inlined Lua code, not a path to the Lua file.
Use dofile to execute a Lua script file:
init_by_lua_block {
    dofile('/etc/nginx/lua/init.lua')
}
https://www.lua.org/manual/5.1/manual.html#pdf-dofile
or use init_by_lua_file:
init_by_lua_file /etc/nginx/lua/init.lua;
UPD:
You should use the NOTICE logging level (or higher) in init_by_lua_* directives because your error_log configuration is not yet applied in this phase:
Under the hood, the init_by_lua runs in the nginx configuration loading phase, so your error_log configuration in nginx.conf does not take effect until the whole configuration is loaded successfully (due to the bootstrapping requirements: the configuration loading MAY fail). And nginx initially uses a logger with the NOTICE filtering level upon startup which is in effect in the whole first configuration loading process (but not subsequent configuration (re)loading triggered by the HUP signal).
https://github.com/openresty/lua-nginx-module/issues/467#issuecomment-82647228
So, use ngx.log(ngx.NOTICE, ...) (or ngx.WARN, ngx.ERR, etc. — see https://github.com/openresty/lua-nginx-module#nginx-log-level-constants) to see the output in the log.
Alternatively you can use print. It's equivalent to ngx.log(ngx.NOTICE, ...) under the hood: https://github.com/openresty/lua-nginx-module#print
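So a hypothetical /etc/nginx/lua/init.lua that logs like this should produce visible output during startup:
-- /etc/nginx/lua/init.lua (hypothetical)
-- DEBUG/INFO messages are filtered out at this phase; NOTICE and above get through
ngx.log(ngx.NOTICE, "init.lua loaded")
print("init.lua loaded (print logs at the NOTICE level)")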

uWSGI: How can I mount a paste-deploy (Pyramid) app?

What I have:
I have a Pyramid application that is built from a Paste ini, served by uWSGI and proxied by nginx. It works great. Here is the nginx config:
server {
    listen 80;
    server_name localhost;

    access_log /var/log/myapp/nginx.access.log;
    error_log /var/log/myapp/nginx.error.log warn;

    location / {
        uwsgi_pass localhost:8080;
        include uwsgi_params;
    }
}
Here is the uWSGI ini configuration:
[uwsgi]
socket = 127.0.0.1:8080
virtualenv = /srv/myapp/venv
die-on-term = 1
master = 1
logto = /var/log/myapp/uwsgi.log
This configuration is located inside Pyramid's production.ini, such that I serve the application with this command:
uwsgi --ini-paste-logged production.ini
All of this works just fine.
What I want to do:
One simple change. I want to serve this application as a subfolder, rather than as the root. Rather than serving it from http://localhost, I want to serve it from http://localhost/myapp.
And now everything is broken.
If I change the nginx location directive from / to /myapp or /myapp/, I get 404s, because the WSGI application receives URIs that are all prefixed with /myapp.
The uWSGI solution appears to be to mount the WSGI callable on the subfolder and then pass the --manage-script-name option, at which point uWSGI should magically strip the subfolder prefix from the URI and fix the issue.
However, the documentation and every other resource I've found have only given examples of the form:
mount = /myapp=myapp.py
I don't have a myapp.py that contains a WSGI callable, because my callable is being built by PasteDeploy.
So, is it possible to mount the WSGI callable from within the Paste ini? Or am I going to have to split the uwsgi configuration out of the Paste ini and also define a separate wsgi.py with a call to paste.deploy.loadapp to generate a wsgi callable that I can mount?
Or is there another way to serve this app as a subfolder from nginx while not messing up the url reversing?
Yes, it's definitely possible to mount your Pyramid app as a subdirectory with nginx. What you need is the Modifier1 option from uWSGI, like so:
location /myapp {
    include uwsgi_params;
    uwsgi_param SCRIPT_NAME /myapp;
    uwsgi_modifier1 30;
    uwsgi_pass localhost:8080;
}
The magic value of 30 tells uWSGI to strip the value of SCRIPT_NAME from the start of PATH_INFO in the request. Pyramid then receives the request and processes it correctly.
As long as you're using the standard Pyramid machinery to generate URLs or paths within your application, SCRIPT_NAME will automatically be incorporated, meaning all URLs for links/resources etc are correct.
The documentation isn't the clearest, but there's more on the modifiers available at: https://uwsgi-docs.readthedocs.org/en/latest/Protocol.html
I wanted to do what you suggest but this is the closest solution I could find: if you are willing to modify your PasteDeploy configuration, you can follow the steps at: http://docs.pylonsproject.org/docs/pyramid/en/1.0-branch/narr/vhosting.html
Rename [app:main] to [app:mypyramidapp] and add a section reading:
[composite:main]
use = egg:Paste#urlmap
/myapp = mypyramidapp
I also had to add this to my nginx configuration:
uwsgi_param SCRIPT_NAME '';
and install the paste module
sudo pip3 install paste
I wonder if there is a way to "mount" a PasteDeploy app as the original question asked...
I've hit this very problem with my deployment after switching from Python 2 to Python 3.
With Python 2 I used the uwsgi_modifier1 30; trick, but it doesn't work anymore with Python 3, as described here: https://github.com/unbit/uwsgi/issues/876
It is very badly documented (not at all? I know it from reading the uWSGI source code), but the --mount option accepts the following syntax:
--mount=/app=config:/path/to/app.ini
Please note: with --mount you also need the --manage-script-name option.
There are other problems with it: https://github.com/unbit/uwsgi/issues/2172
It's trivial to write a wrapper script around a PasteDeploy app, which is how I deploy now:
from paste.script.util.logging_config import fileConfig as configure_logging
from paste.deploy import loadapp as load_app
from os import environ

# path to the PasteDeploy .ini file (e.g. production.ini)
config_file = environ['INI_FILE']

# apply the logging sections of the ini, then build the WSGI callable
configure_logging(config_file)
application = load_app('config:' + config_file)
Save it as e.g. app.py and you can use it with --mount /app=app.py; the INI_FILE environment variable should point to your .ini file.
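Putting it together in the [uwsgi] ini section, something like this (a sketch with assumed paths, merged with the configuration from the question):
[uwsgi]
socket = 127.0.0.1:8080
virtualenv = /srv/myapp/venv
master = 1
die-on-term = 1
logto = /var/log/myapp/uwsgi.log

; mount the wrapper at the subfolder and strip it from PATH_INFO
mount = /myapp=app.py
manage-script-name = true

; tell the wrapper where the PasteDeploy ini lives
env = INI_FILE=/srv/myapp/production.ini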
As a side note, I'm considering moving away from uWSGI; it's buggy and the documentation is lacking.

Can I use Clojure with nginx?

This is a follow-up to my question here. I've set up a home server (just my other laptop running Ubuntu and nginx) and I want to serve Clojure files.
I am asking for help understanding how this process works. I am sorry, at this point I am confused and I think I need to start over. I am asking a new question because I want to use nginx, not lein ring server, as suggested in the answer to that question.
First I started a guestbook project with Leiningen, ran lein ring server, and I see "Hello World" at localhost:3000. As far as I understand, this has nothing to do with nginx!
How does nginx enter this process? At first I tried to create a proxy server with nginx, and that worked too, but I did not know how to serve Clojure files with that setup.
This is what I have in my nginx.conf file adapted from this answer:
upstream ring {
    server 127.0.0.1:3000 fail_timeout=0;
}

server {
    root /home/a/guestbook/resources/public;

    # make site accessible from http://localhost
    server_name localhost;

    location / {
        # first attempt to serve request as file
        try_files $uri $uri/ @ring;
    }

    location @ring {
        proxy_redirect off;
        proxy_buffering off;
        proxy_set_header Host $http_host;
        proxy_pass http://ring;
    }

    location ~ ^/(assets|images|javascript|stylesheets|system)/ {
        expires max;
        add_header Cache-Control public;
    }
}
So I want to use my domain example.com (not localhost); how do I go about doing this?
EDIT
As per @noisesmith's comment, I will opt to go with the lein uberjar option. As explained here, it appears very easy to create one:
$ lein uberjar
Unpacking clojure-1.1.0-alpha-20091113.120145-2.jar
Unpacking clojure-contrib-1.0-20091114.050149-13.jar
Compiling helloworld
[jar] Building jar: helloworld.jar
$ java -jar helloworld.jar
Hello world!
Can you also direct me to the right documentation about how I can use this uberjar with nginx?
Please try the Nginx-Clojure module. You can run Clojure Ring handlers with nginx without any Java web server, e.g. Jetty.
For starters, don't use lein to run things in production. You can use lein uberjar to create a jar file with all your deps ready to run, and java -jar to run the app from the resulting jar. There is also the option of running lein ring uberwar to create a war archive to be run inside tomcat, which provides some other conveniences (like log rotation and integration with /etc/init.d as a service etc. on most Linux systems).
nginx sits in front of your app, on port 80. It serves the content by proxying your app. This is useful because nginx has many capabilities (especially regarding security) that you then don't need to implement in your own app, including optional HTTPS and SELinux integration. Using nginx in front of your app also means you don't need to run Java as root (typically only the root user can use port 80). Furthermore, you can let nginx serve static assets directly, rather than having to serve them from your app.
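To address the example.com part of the question, here is a minimal sketch (assuming the uberjar app listens on 127.0.0.1:3000 as in your current setup, and that DNS for example.com points at this server):
server {
    listen 80;
    server_name example.com;

    # serve static files straight from the project's public directory
    root /home/a/guestbook/resources/public;

    location / {
        # try static files first, then fall back to the Ring app
        try_files $uri $uri/ @ring;
    }

    location @ring {
        proxy_set_header Host $http_host;
        proxy_pass http://127.0.0.1:3000;
    }
}
Start the app with java -jar guestbook.jar (or whatever your uberjar is called) and reload nginx; nginx then answers on port 80 for example.com and proxies anything that is not a static file to the JVM process.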

nginx + ssi + remote uri access does not work

I have a setup where nginx is in front with Apache+PHP behind.
My PHP application caches some pages in memcache, and nginx accesses them directly, except for some dynamic parts which are built using SSI in nginx.
The first problem I had was that nginx didn't try to use memcache for the SSI URI:
<!--# include virtual="/myuser" -->
So I figured that if I forced it to use a full URL, it would:
<!--# include virtual="http://www.example.com/myuser" -->
But in the log files (both nginx and Apache) I can see that a slash has been added at the beginning of the URL:
http ssi filter "/http://www.example.com/myuser"
In the source code of the SSI module I see a PREFIX that seems to be added, but I can't really tell if I can disable it.
Has anybody run into this issue?
nginx version: 0.7.62 on Ubuntu Karmic 64-bit
Thanks a lot
You can configure nginx to include remote URLs even though you cannot refer to them directly in SSI instructions. In the site config, create a location with a local path and a named location that points wherever you want. For example:
server {
    ....
    location /remote {
        proxy_pass @long_haul; # or use "try_files" to provide fallback
    }

    location @long_haul {
        proxy_pass http://porno.com;
    }
    ....
}
and in the served HTML use an include directive that refers to the /remote path:
<!--# include virtual="/remote/rest-of-url&and=parameters" -->
Note that you may customize the URL that is passed upstream with variables and a regexp. For example:
location ~ /remote(.+) {
    proxy_pass @long_haul$1?$args;
}
This has nothing to do with nginx; you just can't do that. SSI doesn't accept remote URIs; you can only specify a local file path.
See
http://en.wikipedia.org/wiki/Server_Side_Includes
