Passenger/nginx not loading Sinatra app

I'm trying to run a few Sinatra apps under sub-URIs, but it seems that Passenger isn't picking them up as Rack applications.
From the nginx error log: 403 error, directory index of "/web/archive/sites/archive/app1/" is forbidden. If I place an index.html in that directory, that HTML file renders.
The apps run fine with rackup on my local machine, so I feel the application code is irrelevant. I have also SSH'd in as both the nginx user (user nginx at the top of the nginx config), and as the archive user, which is the user used to deploy all of these applications, and which owns all of the directories and files. I have no problem navigating to any of these files with either user.
Also, this setup works fine if I move it over to a subdomain, like archive.domain.com, and then have the app symlinks live right in /web/archive/sites (rather than /web/archive/sites/archive), and otherwise use pretty much the same nginx config, which is why I don't believe this is a permissions issue.
nginx config
server {
    listen 80;
    server_name domain.com;

    location /archive {
        root /web/archive/sites;
        passenger_enabled on;
        passenger_base_uri /app1;
        passenger_base_uri /app2;
    }
}
directory structure
/web
|
+-- archive/
    |
    +-- sites/
    |   |
    |   +-- archive/
    |       |
    |       +-- app1 -> /web/archive/apps/app1/current/public
    |       |
    |       +-- app2 -> /web/archive/apps/app2/current/public
    |
    +-- apps/
        |
        +-- app1/
        |   |
        |   +-- current/
        |       |
        |       +-- public/
        |       |
        |       +-- config.ru
        |
        +-- app2/
            |
            +-- (same as app1/)

The passenger_base_uri and root in your configuration example do not match the directory structure (e.g. /web/archive/sites/app1 vs /web/archive/sites/archive/app1). As far as I understand, Passenger does not consider the location, only root and passenger_base_uri.
Try changing the configuration to:
server {
    listen 80;
    server_name domain.com;

    location /archive {
        root /web/archive/sites/archive;
        passenger_enabled on;
        passenger_base_uri /app1;
        passenger_base_uri /app2;
    }
}
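If the 403 persists after that change, it is also worth a quick sanity check (paths taken from the layout above) that each symlink resolves and that a config.ru sits one level above the public/ directory it points at, since Passenger only treats a base URI as a Rack app when it can find one:
# each base URI must resolve to a Rack app's public/ directory,
# with config.ru one level above it
ls -l /web/archive/sites/archive/app1
ls /web/archive/apps/app1/current/config.ru
ls /web/archive/apps/app2/current/config.ru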


Extracting a subfolder from (GNU)tar archive

What is the right spell to extract a subfolder from a tar archive when you do not know the base name?
I have a tar archive with the following structure:
myarchive.tar
|
+-- StrangeName_Including_variable_parts
    |
    +-- bin
    |   |
    |   +-- Uninteresting_stuff
    |
    +-- src
        |
        +-- Stuff
        |
        +-- I_need
My problem is to extract "src" and put it somewhere on my disk, regardless of the name of the base directory (StrangeName_Including_variable_parts).
I tried something like:
tar xf myarchive.tar -C destination src
But it doesn't seem to do what I need.
What I would need is:
destination
|
+-- src
    |
    +-- Stuff
    |
    +-- I_need
Extract with wildcard:
tar -xf myarchive.tar -C destination --wildcards '*/src'
See: man tar
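As written, the match still extracts under the variable top-level directory (destination/StrangeName_.../src). If src should land directly inside destination, GNU tar's --strip-components option drops leading path components on extraction:
# drop the leading StrangeName_... component so the result is destination/src/...
tar -xf myarchive.tar -C destination --wildcards --strip-components=1 '*/src'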

After setting up server blocks, Nginx is not serving my domain name

I am in the process of deploying my MERN app to a Digital Ocean droplet (Ubuntu 20.04 server).
I followed the steps in the following tutorial to install Nginx. [I have completed all the previous steps]
https://www.digitalocean.com/community/tutorials/how-to-install-nginx-on-ubuntu-20-04
When I visit http://134.122.112.22, I see the Nginx landing page, as expected.
However, after setting up server blocks, when I visit http://sundaray.io, I get the following error.
sundaray.io is my domain name.
When I check /var/log/nginx/error.log, I see the following:
How can I fix the error?
EDIT-1
SERVER BLOCK
In order to create the server block, I executed the following commands in order:
mkdir -p /var/www/sundaray.io/html
nano /etc/nginx/sites-available/sundaray.io
Then, I pasted in the following configuration block.
server {
    listen 80;
    listen [::]:80;

    root /var/www/sundaray.io/html;
    index index.html index.htm index.nginx-debian.html;

    server_name sundaray.io www.sundaray.io;

    location / {
        try_files $uri $uri/ =404;
    }
}
ERROR
Executing the command cat /var/log/nginx/error.log gave me the following result:
EDIT-2
Executing chown -R nginx:nginx /var/www/sundaray.io/html threw the following error:
EDIT-3
Executing ps -elf |grep nginx gave the following result:
EDIT-4
When I executed the command ls /var/www/sundaray.io/html, I got the following result:
1. chmod 777 is NEVER a good idea.
NGINX operates under a run user. How to check the run user:
ps -elf | grep nginx. Normally it will be nginx. Instead of 777 (open for everyone), do chmod -R 755 /path/to/folder and chown -R nginx:nginx /path/to/folder.
That said, it can be useful for troubleshooting. But back to your problem.
2. Directory listing is disabled.
The error is telling you nginx cannot list the directory contents, which is the default behavior. Make sure that this path exists:
root /var/www/sundaray.io/html;
AND that one of these files is actually located there:
index index.html index.htm index.nginx-debian.html;
Without one of these files, NGINX can't load a default index file for /. Put something into /var/www/sundaray.io/html, for example:
printf "<html>\n<body>\n<h1>Hello from NGINX</h1>\n</body>\n</html>" > /var/www/sundaray.io/html/index.html && chown nginx:nginx /var/www/sundaray.io/html/index.html. This should generate an index.html for you.
If you just want to test your server configuration without any files:
location / {
return 200 "Hello on $host$uri\n";
}
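After editing the server block, the usual workflow (assuming an Ubuntu/systemd setup as in the tutorial) is to validate the configuration and reload nginx; if the site was never enabled, the sites-enabled symlink is also needed:
# enable the site if it isn't already, then test the config and reload
sudo ln -sf /etc/nginx/sites-available/sundaray.io /etc/nginx/sites-enabled/
sudo nginx -t && sudo systemctl reload nginx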

nginx configuration index.html different path

I'm going crazy with the nginx configuration.
I have this directory tree
site
|-- .tmp
|   |-- partials
|   |   +-- partial.js
|   |
|   +-- serve
|       |-- app
|       |   +-- app.css
|       |
|       +-- index.html
|
|-- libs
|   |-- folder_a
|   +-- folder_b
|
+-- transpiled
    +-- app
        +-- app.js
Also in my index.html I have the paths like
<script src="../bower_components/jquery/dist/jquery.js"></script>
<script src="app/app.js"></script>
So, if I'm right, my root should be the transpiled directory.
What is a valid nginx configuration for serving my site?
conf example added
http {
    server {
        location / {
            root html/my_site;
            index .tmp/serve/index.html;
            try_files $uri transpiled/$uri;
        }
    }
}
I don't know nginx, but I have read something about try_files.
The try_files directive should look for the file at $uri and then at transpiled/$uri.
So if I have app/app.js, try_files should look for the file at transpiled/app/app.js.
This doesn't work.
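For what it's worth, one common pattern for this kind of layout (only a sketch; the /path/to/site prefix is assumed, and the ../bower_components references would still need their own location) is to point root at the directory holding index.html and fall back to the built assets with a named location:
server {
    listen 80;
    # index.html and app/app.css live here
    root /path/to/site/.tmp/serve;
    index index.html;

    location / {
        # look in .tmp/serve first, then fall back to the transpiled tree
        try_files $uri $uri/ @transpiled;
    }

    location @transpiled {
        # serves e.g. /app/app.js from /path/to/site/transpiled/app/app.js
        root /path/to/site/transpiled;
    }
}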

How to run BrowserMob Proxy with RobotFramework and Capture HAR files

I have written this code in Robot Framework:
${proxy}= | Evaluate | sys.modules['selenium.webdriver'].Proxy() | sys, selenium.webdriver |
${proxy.http_proxy}= | Set Variable | 127.0.0.1:8080 |
Create Webdriver | Firefox | proxy=${proxy} |
Go To | http://www.knowledgefarm.in/tst/a.html |
And I am running BrowserMob proxy from command line like this:
browsermob-proxy.bat --address 127.0.0.1 --port 8080
Now, when I run Robot Framework, it opens the browser and simply shows this message on the page:
HTTP ERROR: 404
Problem accessing /tst/a.html. Reason:
Not Found
Powered by Jetty://
Two questions:
Why are my pages not loading? (It works if I remove the proxy setting.)
Once that is resolved, how do I generate a HAR file and specify where it should be saved?
The above code does not actually start a proxy. To start a proxy, you need to run these commands.
Create HTTP Context | localhost:8080 | http
Post | /proxy
${json} | Get Response Body
${port} | Get Json Value | ${json} | /port
${proxy}= | Evaluate | sys.modules['selenium.webdriver'].Proxy() | sys,selenium.webdriver
${proxy.http_proxy}= | Set Variable | 127.0.0.1:${port}
Create Webdriver | Firefox | proxy=${proxy}
Go To | ${LOGIN URL}
Set Request Body | pageRef=LOGIN&captureContent=false&captureHeaders=true
PUT | /proxy/${port}/har
${json} | HttpLibrary.HTTP.Get Response Body
OperatingSystem.Create File | D:\\myfile.har | ${json}
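One note on the HAR step (based on the BrowserMob Proxy REST API; treat the exact keywords as a sketch): PUT /proxy/${port}/har starts a new capture, while the recorded traffic is retrieved afterwards with a GET on the same path. So after Go To has loaded the pages you care about, fetching and saving the HAR would look roughly like:
GET | /proxy/${port}/har
${har} | HttpLibrary.HTTP.Get Response Body
OperatingSystem.Create File | D:\\myfile.har | ${har}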

Have nginx access_log and error_log log to STDOUT and STDERR of master process

Is there a way to have the master process log to STDOUT and STDERR instead of to a file?
It seems that you can only pass a filepath to the access_log directive:
access_log /var/log/nginx/access.log
And the same goes for error_log:
error_log /var/log/nginx/error.log
I understand that this simply may not be a feature of nginx; I'd be interested in a concise solution that uses tail, for example. It is preferable, though, that the logging come from the master process, because I am running nginx in the foreground.
Edit: it seems nginx now supports error_log stderr; as mentioned in Anon's answer.
You can send the logs to /dev/stdout. In nginx.conf:
daemon off;
error_log /dev/stdout info;
http {
    access_log /dev/stdout;
    ...
}
Edit: you may need to run ln -sf /proc/self/fd /dev/ when running certain Docker containers, then use /dev/fd/1 or /dev/fd/2.
If the question is Docker related: the official nginx Docker images do this by making symlinks to stdout/stderr:
RUN ln -sf /dev/stdout /var/log/nginx/access.log && ln -sf /dev/stderr /var/log/nginx/error.log
REF: https://microbadger.com/images/nginx
Syntax: error_log file | stderr | syslog:server=address[,parameter=value] | memory:size [debug | info | notice | warn | error | crit | alert | emerg];
Default:
error_log logs/error.log error;
Context: main, http, stream, server, location
http://nginx.org/en/docs/ngx_core_module.html#error_log
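Note that the stderr target above is specific to error_log; access_log has no such keyword, so for access logs you still point at /dev/stdout (or syslog). A minimal example:
# error_log accepts the literal stderr target, no /dev/stderr path needed
error_log stderr info;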
Don't use: /dev/stderr
This will break your setup if you're going to use systemd-nspawn.
For debugging:
/usr/sbin/nginx -g "daemon off;error_log /dev/stdout debug;"
For normal use:
/usr/sbin/nginx -g "daemon off;error_log /dev/stdout info;"
Also required, inside the server block of the config file:
access_log /dev/stdout;
When running Nginx in a Docker container, be aware that a volume mounted over the log directory defeats the purpose of creating symlinks between the log files and stdout/stderr in your Dockerfile, as described in Boeboe's answer.
In that case you can either create the symlinks in your entrypoint (executed after volumes are mounted) or not use a volume at all (e.g. when logs are already collected by a central logging system).
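A minimal entrypoint along those lines (file name and details assumed, adjust to your image) might look like:
#!/bin/sh
# docker-entrypoint.sh: recreate the symlinks after any volume has been mounted over /var/log/nginx
ln -sf /dev/stdout /var/log/nginx/access.log
ln -sf /dev/stderr /var/log/nginx/error.log
exec nginx -g 'daemon off;'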
In the PHP-FPM Docker image, I've seen this approach:
# cat /usr/local/etc/php-fpm.d/docker.conf
[global]
error_log = /proc/self/fd/2
[www]
; if we send this to /proc/self/fd/1, it never appears
access.log = /proc/self/fd/2
In the official Nginx Docker image, this is already in place.
For more information, see the Dockerfile for the 1-alpine version, which symlinks the access log and error log to stdout and stderr respectively. Other Docker tags have it as well.
ref: https://github.com/nginxinc/docker-nginx/blob/1.23.1/stable/alpine/Dockerfile#L118-L119
