I've got my nginx server running a PHP 8.0 and Laravel 8.0 web app on an EC2 instance running Ubuntu 18.04.
The web app works just fine: I can access the DB, look up data and insert new data, and all the requests for these things are POST or GET requests with no params. But when I try to do GET requests that have parameters, the app just shows a white screen, as if nothing is happening. For example:
This GET request doesn't work:
http://mypublicamazonurl.com/clients-update?id=2&client_name=John&client_last_name=Doe&client_email=johndoe%40gmail.com&client_phone_number=9999&action=update
This GET request does work:
http://mypublicamazonurl.com/labels
My server conf file looks like this:
server {
    listen 80;
    root /var/www/Proyecto-Final-IAW/las-olivas/public;
    index index.php index.html index.htm index.nginx-debian.html;
    server_name 'mypublicurl.com' 'www.mypublicurl.com';

    location / {
        try_files $uri $uri/ /index.php$query_string;
    }

    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        fastcgi_pass unix:/run/php/php8.0-fpm.sock;
    }

    location ~ /\.ht {
        deny all;
    }
}
This is my very first time setting up a server, so any help/tip is appreciated, and hopefully I was clear.
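Not a confirmed diagnosis, but one likely culprit worth checking: in the try_files fallback above, $query_string is appended to /index.php without a ?, so a request with parameters gets rewritten to a URI like /index.phpid=2&... that matches no file and no PHP location. The stock Laravel nginx config puts a ? before $query_string; a sketch of that variant:

```nginx
location / {
    # Note the "?" between index.php and $query_string; without it the
    # query string is glued onto the URI itself and requests with
    # parameters never reach index.php.
    try_files $uri $uri/ /index.php?$query_string;
}
```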
I have a django backend and react frontend.
I want to serve the React app on / and use /admin, /api and /auth for Django. Here's what I have in my nginx config.
upstream backend {
    server 127.0.0.1:8000;
}

server {
    listen 80;
    server_name x.x.x.x;

    root /home/user/folder/frontend;
    index index.html index.htm;

    # for serving static
    location /static {
        alias /home/user/folder/backend/staticfiles;
    }

    # for serving react built files
    location / {
        try_files $uri $uri/ /index.html;
    }

    # for everything django
    location ~ ^/(admin|api|auth) {
        include snippets/proxyinfo.conf;
        proxy_pass http://backend;
    }
}
With the above, the expected behavior is:
/ uses the default root folder, /home/user/folder/frontend, and loads the built index files from React accordingly
/(admin|api|auth) points to Django
/static loads static files saved in the /home/user/folder/backend/staticfiles folder.
So I'm not sure why, when I hit example.com/static/myfile.css, nginx goes to /home/user/folder/frontend/static/myfile.css.
As far as I can tell, none of the above configuration says that's what it should do, so what magic is going on?
I thought this answer was self-explanatory enough, yet nginx keeps doing whatever it likes.
I'm using nginx/1.18.0 (if that matters).
Try adding root inside the location / directive too.
Like this:
upstream backend {
    server 127.0.0.1:8000;
}

server {
    listen 80;
    server_name x.x.x.x;

    root /home/user/folder/backend/staticfiles;

    # for serving static
    location /static {
        alias /home/user/folder/backend/staticfiles;
    }

    # for serving react built files
    location / {
        root /home/user/folder/frontend;
        try_files $uri $uri/ /index.html;
    }

    # for everything django
    location ~ ^/(admin|api|auth) {
        include snippets/proxyinfo.conf;
        proxy_pass http://backend;
    }
}
Also have a look at these Q&As:
serve react frontend and php backend on same domain with nginx
Nginx -- static file serving confusion with root & alias
Deploy both django and react on cloud using nginx
From the nginx documentation here, it seems you are missing a / at the end of your paths. This trailing / can cause a lot of pain and is the root cause of many errors.
Please give it a try like this:
# for serving static
location /static/ {
    alias /home/user/folder/backend/staticfiles/;
}
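To make the trailing-slash point concrete, here is a sketch of how alias and root map the same request (paths are the question's; the root variant is a hypothetical alternative, shown for contrast):

```nginx
# With alias and a trailing slash, the location prefix is replaced:
#   GET /static/myfile.css  ->  /home/user/folder/backend/staticfiles/myfile.css
location /static/ {
    alias /home/user/folder/backend/staticfiles/;
}

# With root, the full URI (including /static/) is appended to the root path:
#   GET /static/myfile.css  ->  /home/user/folder/backend/static/myfile.css
# location /static/ {
#     root /home/user/folder/backend;
# }
```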
Quick question, shouldn't be too hard... hopefully.
I'm running Nginx with two Laravel projects being hosted. The following directories are where they are being stored.
/var/www/site.kara
/var/www/site.arkmanager
The projects are served at http://10.0.0.2/kara and http://10.0.0.2/arkmanager.
The site.kara Laravel project is loading images just fine. However, the site.arkmanager project has an issue locating images referenced in my CSS file. It is also having issues loading the FontAwesome webfont files, because the CSS paths resolve against the root of the server and not the project directory. (CSS code below.) I look at the console and there is an error...
So, according to this error, it's trying to get the image file from the server root directory? It isn't adding the /arkmanager portion of the image location... the correct image location is: http://10.0.0.2/arkmanager/images/app/hero-background.png. So I'm thinking that I screwed up my nginx default file somehow, even though it's not giving me any errors when running sudo nginx -t. A little bit of insight would be helpful in solving this issue.
My CSS class property, if it's relevant to the problem; putting it here just in case:
#hero {
    width: 100%;
    height: 100vh;
    background: url("/images/app/hero-background.png") top center;
    background-size: cover;
    position: relative;
}
Here is my /etc/nginx/sites-available/default file contents. I am not running SSL due to the fact that this is a local server (second computer) inside my internal network and not available to the public.
# HTTP Server Block
server {
    # Port that the web server will listen on.
    listen 80 default_server;

    # Root Folder
    root /var/www/;

    # Add index.php to the list if you are using PHP
    index index.php index.html index.htm index.nginx-debian.html;

    # IP ADDRESS
    server_name _;

    # Root Location
    location / {
        # URLs to attempt, including pretty ones.
        try_files $uri $uri/ /index.php$is_args$args;
    }

    # Karas Worlds Nested Location
    location /kara {
        alias /var/www/site.kara/public/;
        try_files $uri $uri/ @kara;

        # PHP FPM configuration.
        location ~ \.php$ {
            # Include Fast CGI Snippets
            include snippets/fastcgi-php.conf;
            # Define the PHP Script Filename
            fastcgi_param SCRIPT_FILENAME $request_filename;
            # With php-fpm (or other unix sockets)
            fastcgi_pass unix:/var/run/php/php7.2-fpm.sock;
        }
    }

    location @kara {
        rewrite /kara/(.*)$ /kara/index.php?/$1 last;
    }
    # End Karas World Nested Location

    # Ark Manager Nested Location
    location /arkmanager {
        alias /var/www/site.arkmanager/public/;
        try_files $uri $uri/ @arkmanager;

        # PHP FPM configuration.
        location ~ \.php$ {
            # Include Fast CGI Snippets
            include snippets/fastcgi-php.conf;
            # Define the PHP Script Filename
            fastcgi_param SCRIPT_FILENAME $request_filename;
            # With php-fpm (or other unix sockets)
            fastcgi_pass unix:/var/run/php/php7.2-fpm.sock;
        }
    }

    location @arkmanager {
        rewrite /arkmanager/(.*)$ /arkmanager/index.php?/$1 last;
    }
    # End Ark Manager Nested Location

    # PHP FPM configuration.
    location ~ \.php$ {
        # Include Fast CGI Snippets
        include snippets/fastcgi-php.conf;
        # With php-fpm (or other unix sockets)
        fastcgi_pass unix:/var/run/php/php7.2-fpm.sock;
    }

    # We don't need .ht files with nginx.
    location ~ /\.ht {
        deny all;
    }
}
Also, as a side note, here is a screenshot of my npm run dev command running and compiling successfully, just in case anyone thought the CSS file might have been borked.
This is what the application looks like on my local dev machine:
https://prnt.sc/vtu6f6
https://prnt.sc/vtu6y0
This is what it looks like on the local ubuntu server running nginx:
https://prnt.sc/vtu6r5
https://prnt.sc/vtu77h
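Not a confirmed fix, but one thing that lines up with the symptom above: url("/images/app/hero-background.png") starts with a /, which makes it root-relative, so the browser always requests it from the server root, ignoring the /arkmanager prefix. A sketch of a relative spelling (the ../ depth is an assumption; it depends on where the compiled CSS file actually lives relative to the images folder):

```css
#hero {
    /* Relative to the CSS file's own URL, so the /arkmanager prefix is kept.
       Adjust the number of ../ segments to match the compiled CSS location. */
    background: url("../images/app/hero-background.png") top center;
}
```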
I'm trying to verify a file upload for an SSL certificate.
The file needs to be at .well-known/acme-challenge/file.
I have successfully placed the file as above, but when accessing the same file from the web at http://weburl.com/.well-known/acme-challenge/file, a 404 error comes up.
When I place the same file in .well-known/, the file can be accessed successfully at the path http://weburl.com/.well-known/file.
My nginx configuration:
server {
    listen 80;
    server_name weburl.com;
    root /var/www/html;

    location ~ /.well-known {
        allow all;
    }

    location ~ /\.well-known/acme-challenge/ {
        allow all;
        root /var/www/html;
        try_files $uri =404;
        break;
    }
}
You have to grant permissions for the www-data user.
sudo chown -R www-data:www-data .well-known
In the first case it looks for /var/www/html/.well-known/file.
In the second case it looks for /var/www/html/file.
What you intend is for it to find /var/www/html/.well-known/acme-challenge/file.
This is because you specify root in the location block, which changes where it reads the file from.
So instead of this:
location ~ /\.well-known/acme-challenge/ {
    allow all;
    root /var/www/html; # <================= Your problem, sir
    try_files $uri =404;
    break;
}
You should have this:
location ~ /\.well-known/acme-challenge/ {
    allow all;
    try_files $uri =404;
    break;
}
Shameless plug: If you're just doing simple virtual hosting and you're familiar with node at all you might like Greenlock.
If you have installed the Let's Encrypt module on Plesk, but for some reason you need to authorize e.g. example.com manually like we do:
Add your authorization code to
/var/www/vhosts/default/htdocs/.well-known/acme-challenge
instead of the expected (domain webroot)
/var/www/vhosts/example.com/htdocs/.well-known/acme-challenge
To find this out, I had to check /var/www/vhosts/system/example.com/conf/httpd.conf
Here's my scenario:
I have a vagrant cloud set up at an IAAS provider. It uses a .json file as its catalog to direct download requests from vagrant over to their corresponding .box files on the server.
My goal is to hide the .json file from the browser, so that a surfer cannot hit it directly at, say, http://example.com/catalog.json and see the JSON output, since that output lists the URL of the box file itself. However, I still need vagrant to be able to download and use the file so it can grab the box.
In the NGINX docs, it mentions the "internal" directive which seems to offer what I want to do via try_files, but I think I'm either mis-interpreting what it does or just plain doing it wrong. Here's what I'm working with as an example:
First, I have two sub-domains.
One for the .json catalog at: catalog.example.com
A second for the box files at: boxes.example.com
These are mapped, of course, to respective folders on the server, etc.
With that in mind, in sites-available/site.conf, I have the following server blocks:
server {
    listen 80;
    listen [::]:80;

    server_name catalog.example.com;
    server_name www.catalog.example.com;

    root /var/www/catalog;

    # Use try_files to trigger internal directive to serve json files
    location / {
        try_files $uri =404;
    }

    # Serve json files to scripts only with content type header application/json
    location ~ \.json$ {
        internal;
        add_header Content-Type application/json;
    }
}
server {
    listen 80;
    listen [::]:80;

    server_name boxes.example.com;
    server_name www.boxes.example.com;

    root /var/www/boxes;

    # Use try_files to trigger internal directive to serve json files
    location / {
        try_files $uri =404;
    }

    # Serve box files to scripts only with content type application/octet-stream
    location ~ \.box$ {
        internal;
        add_header Content-Type application/octet-stream;
    }
}
The NGINX documentation for the internal directive states:
Specifies that a given location can only be used for internal requests. For external requests, the client error 404 (Not Found) is returned. Internal requests are the following:
requests redirected by the error_page, index, random_index, and try_files directives;
Based on that, my understanding is that my server blocks grab any path on those sub-domains, pass it through try_files, and so should serve the file when it is requested by vagrant, yet hide it from the browser if I hit the catalog or a box URL directly.
I can confirm that the files are not accessible from the browser; however, they're inaccessible to vagrant as well.
Am I mis-understanding internal here? Is there a way to achieve my goal?
Make sure that, for the sensitive calls, the server listens on localhost only.
Create a tunnel between the machine running vagrant (using an arbitrary port) and your IAAS provider machine (on the web server port, for example).
Create a user on your IAAS machine who is only allowed to interact with the forwarded web-server port (via sshd_config).
Use details from below:
https://askubuntu.com/questions/48129/how-to-create-a-restricted-ssh-user-for-port-forwarding
Reference the tunneled server using http://<host>:<port>/path in both your catalog.json URL and your box file URL.
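A sketch of the tunnel step (the user name, host, and local port 8080 are placeholder assumptions, not values from the question):

```shell
# Forward local port 8080 to port 80 on the IAAS machine, as the restricted
# user; vagrant can then fetch http://localhost:8080/catalog.json through
# the tunnel while the server itself listens on localhost only.
ssh -N -L 8080:localhost:80 tunneluser@iaas.example.com
```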
Use a server block in your NGINX config which listens on 127.0.0.1:80 only and doesn't use server_name. You can even add default_server to this so that anything that doesn't match another virtual host will hit this block.
Use two locations in your config with different roots to serve files from /var/www/catalog and /var/www/boxes respectively.
Set regex locations for your .json and .box files, and use a try_files block to accept the $uri or fall back to 444 (so you know it hit your block).
Deny /boxes and /catalog otherwise.
See the nginx config below for an example:
server {
    listen 80 default_server;
    listen [::]:80 default_server;

    server_name example.com;
    server_name www.example.com;

    root /var/www;

    location ~ /(catalog|boxes) {
        deny all;
        return 403;
    }
}

server {
    listen 80;
    listen [::]:80;

    server_name store.example.com; # I will use an eCommerce platform eventually

    root /var/www/store;
}

server {
    listen 127.0.0.1:80;
    listen [::]:80;

    root /var/www;

    location ~ \.json$ {
        try_files $uri $uri/ =444;
        add_header Content-Type application/json;
    }

    location ~ \.box$ {
        try_files $uri $uri/ =444;
        add_header Content-Type application/octet-stream;
    }

    location ~ /(catalog|boxes) {
        deny all;
        return 403;
    }
}
I think all you need here is to change the access level of the file. There are three access levels (execute, read and write); you can remove the execute access level from your file. On the server console, run the command:
chmod 766 your_file_name
See here and here for more information.
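To see what that mode actually does (a scratch demo; the file name is just the example from above): 766 gives rwx to the owner and rw- to group and others, i.e. it removes execute for everyone but the owner while adding group/other write.

```shell
# Create a scratch file, apply the suggested mode, and read the bits back.
touch /tmp/your_file_name
chmod 766 /tmp/your_file_name
stat -c '%a' /tmp/your_file_name   # prints 766
```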
The basic installation is working, on Linux Mint OS. Resolving the domain on 'localhost' confirms that nginx is running.
However, the issue I am running into stems from the generation of my own server block. It's very basic:
server {
    listen 80;
    listen [::]:80;

    root /usr/share/nginx/html;
    index index.html index.htm;

    # Make site accessible from alias.
    server_name tokum.com www.tokum.com;

    location / {
        # First attempt to serve request as file, then
        # as directory, then fall back to displaying a 404.
        try_files $uri $uri/ /index.html;

        # Uncomment to enable naxsi on this location
        # include /etc/nginx/naxsi.rules
    }
}
As you can see, I have created an alias for www.tokum.com in this server block. Attempting to resolve this URL in a browser, I am greeted with the lovely 'server not found' message.
My feeling is that it involves the 'try_files' functionality, but I cannot be sure why.
No other resources have been created on the server other than my tokum.com server block file, which is located at the path /etc/nginx/sites-available/tokum.com. Any help is most appreciated.
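Not a confirmed diagnosis, but two things worth checking: 'server not found' is a browser DNS error, raised before any request reaches nginx, so try_files never runs; and nginx on Debian-family systems only loads server blocks from sites-enabled, so a file that exists only in sites-available needs a symlink there. For the DNS side, if tokum.com isn't a registered domain pointing at this machine, a local hosts entry is one way to test the block (the IP below is a placeholder for the server's actual address):

```
# /etc/hosts on the machine running the browser
192.0.2.10   tokum.com www.tokum.com
```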