Nginx can't find phpMyAdmin

Hello, I have installed the newest stable version of Nginx (1.4.4) and also want to install phpMyAdmin. Unfortunately, the following error appears when I try to open phpMyAdmin in my browser via http:// 192 . . . /phpmyadmin:
404 Not Found
nginx/1.4.4
What exactly is the reason that Nginx can't find the phpMyAdmin file?
This is the content of my /etc/nginx/nginx.conf file:
user nginx;
worker_processes 4;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
#tcp_nopush on;
keepalive_timeout 65;
#gzip on;
include /etc/nginx/conf.d/*.conf;
}
Many Greetings

It's really not a good idea to make a symbolic link in your document root; that's just asking for trouble, having a phpmyadmin shortcut in www.
The right way to do it is to create a file called php and add it to /etc/nginx/sites-available. You can copy the file below, but change the port number to something else.
server {
listen 30425;
# Don't want to log accesses.
#access_log /dev/null main;
access_log /var/log/nginx/php.access_log main;
error_log /var/log/nginx/php.error_log info;
root /usr/share/phpmyadmin;
index index.php index.html index.htm;
error_page 401 403 404 /404.php;
location ~ \.php$ {
include fastcgi_params;
fastcgi_pass 127.0.0.1:9000;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_param SERVER_NAME $http_host;
fastcgi_ignore_client_abort on;
}
}
Now you need to make a symbolic link to the sites-enabled directory:
ln -s /etc/nginx/sites-available/php /etc/nginx/sites-enabled
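Note that the stock nginx.conf shown in the question only includes /etc/nginx/conf.d/*.conf, so files in sites-enabled may never be loaded. If that is the case on your system, first add an include for them inside the http block (a minimal sketch, assuming the Debian-style sites-enabled layout):
http {
# ... existing settings ...
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
}
Run nginx -t and nginx -s reload afterwards to verify and apply the change.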
Now you need to add this code to the logging settings part of nginx.conf:
log_format main '$remote_addr - $remote_user [$time_local] '
'"$request" $status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
Now you can access your phpMyAdmin at http://example.com:30425

I have followed the instructions on https://www.digitalocean.com/community/articles/how-to-install-phpmyadmin-on-a-lemp-server and linked nginx and phpMyAdmin through the following command:
sudo ln -s /usr/share/phpmyadmin/ /usr/share/nginx/html
(the folder "www" was replaced in the newer versions of Nginx through "html")
sudo service nginx restart
If I now try to start phpMyAdmin through
"http:// 192 . . . /phpmyadmin"
the following error appears in my browser:
403 Forbidden
nginx/1.4.4
If I try to start phpMyAdmin through
"http:// 192 . . . /phpmyadmin/index.php"
the index.php file just gets downloaded instead of being executed.
Many Greetings
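That download happens when nothing tells nginx to pass .php requests on to PHP-FPM, so it serves the file as plain static content. A minimal sketch of the missing location block, assuming PHP-FPM listens on 127.0.0.1:9000 (use your unix: socket path instead if that is how php-fpm is set up):
location ~ \.php$ {
include fastcgi_params;
# hand the request to PHP-FPM instead of serving the file as-is
fastcgi_pass 127.0.0.1:9000;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
}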

The answers provided here did not help me for reasons unknown. I suspect that other parts of my conf file ruled out going into the phpmyadmin directory.
Not wanting to spend too long fixing it, I simply copied my working conf file from www.example.com to phpmyadmin.example.com and set the root directory to /usr/share/phpmyadmin. I then ran nginx -s reload and was good to go, without ever having to get to the bottom of the 403/502 errors I ran into when doing it properly. I also believe that http://phpmyadmin.example.com is more secure against random hackers trying their scripts than http://www.example.com/phpmyadmin, though it does require the wildcard DNS for your domain to be set up correctly, i.e. so that the address resolves to your server.
Hope this helps anyone frustrated in the simple task of getting phpmyadmin working with nginx.

As user1941083 explains, you just need to create a symbolic link between phpMyAdmin and your site's directory:
sudo ln -s /usr/share/phpmyadmin/ /usr/share/nginx/www
And after that, modify the nginx settings in /etc/nginx/sites-available.
Change:
index index.html index.htm;
to:
index index.html index.htm index.php;

Simply try using http://domain-name-or-ip-address/phpmyadmin/index.php
Sometimes Nginx does not use index.php automatically; we have to tell it to, either by changing the index directive in the nginx.conf file or by adding index.php manually to the URL, just like above.
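For the config route, a minimal sketch of the index directive inside the server block (the other names are the stock defaults; index.php is the addition):
server {
# ... listen, root, etc. ...
index index.php index.html index.htm;
}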

I know this is late, but I had the same problem after following some tutorials and suggestions online, to no avail. I also tried #STH's answer on this page, but the error still persisted.
I had to restart the following services:
sudo systemctl restart nginx.service
sudo systemctl restart php-fpm
THIS DID THE TRICK: Configure the Daemons to Start at Boot, to ensure that all of the LEMP programs start automatically after any server restarts:
sudo systemctl enable nginx mysqld php-fpm

Try to create the symbolic link between the phpmyadmin location and nginx www:
sudo ln -s /usr/share/phpmyadmin/ /usr/share/nginx/html
After this you may still get 403 Forbidden; you should then add index.php inside the server block, where index.html and the others are, in:
/etc/nginx/sites-available/default
and then restart nginx.
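A minimal sketch of the relevant part of that default file after the edit (paths are the stock defaults; a fastcgi location for .php, as shown above, is still needed for PHP to actually execute):
server {
listen 80 default_server;
root /usr/share/nginx/html;
index index.php index.html index.htm;
}
sudo service nginx restart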

Related

Nuxt 3 with client-side only rendering doesn't load

I want to build a client-side only application via Nuxt 3, and just as the docs describe here, I've added ssr: false to my nuxt config.
I then used the nuxi build command to build the application, but it still says it needs to be run using node.
I proceed to run nuxi generate as I would normally do for static hosting.
According to the output from the generate command, I should be able to just deploy the public folder to any static web hosting. However, when I do this, I just get a completely white page.
I have tried running the same commands without ssr: false, and that does render a page, but that causes none of my javascript to work.
Edit: minimal reproducible example
So I've just followed these steps from the nuxt docs.
Without making any code changes, except for editing my nuxt config, I've run generate.
This is what my nuxt config looks like right now:
import { defineNuxtConfig } from 'nuxt'
// https://v3.nuxtjs.org/api/configuration/nuxt.config
export default defineNuxtConfig({
ssr: false,
})
I then ran npx serve .output/public as suggested in the comments, and that seemed to work just fine locally.
I then copied the public folder to my web server, but the same issue persists, just a white screen is visible.
Maybe I should clarify my question a little more: is it still possible to host a nuxt SPA, without running a node process on the server, just as it was before in nuxt 2?
Right now I just switched to a server rendered application, as I don't see another solution.
Using NGINX to serve a statically generated build will currently fail, because nginx does not know the MIME type for .mjs files (they get sent as application/octet-stream).
To fix it, just add "application/javascript" as the MIME type for mjs files in the nginx configuration.
Example to adapt to your needs:
1. Create a file named "nginx.conf" in your project root folder with:
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log notice;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
}
http {
include /etc/nginx/mime.types;
types
{
application/javascript mjs;
}
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
#tcp_nopush on;
keepalive_timeout 65;
#gzip on;
include /etc/nginx/conf.d/*.conf;
}
With "include /etc/nginx/mime.types;" we are loading the default nginx mime types, and after that extending that list with "application/javascript mjs;".
2 - Then you could use that file in your image build step as follow (check the line 2):
FROM nginx:alpine
COPY ./nginx.conf /etc/nginx/nginx.conf
COPY .output/public /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
Now you can deploy the image. Enable the gzip line if you are not using compression already.
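If you want nginx itself to compress, a minimal sketch for the http block (the gzip_types list here is an assumption; tune it to your assets):
gzip on;
gzip_types application/javascript text/css application/json;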

How to configure a proxy with a subdomain servername

I have the following vhost configuration in nginx:
upstream mybackendsrv {
server backend:5432;
}
server {
listen 80;
server_name sub.domain.org;
location / {
proxy_pass http://mybackendsrv;
}
}
When I use a server_name like sub.domain.org, I get the default nginx fallback and my server is not matched.
When I use a server_name like customroute, I get the correct behaviour and my server is matched.
I googled this issue a bit and I believe that subdomain matching is supported in nginx so I'm not sure what's wrong. I checked the access.log and error.log and I get no relevant log.
Any idea how to diagnose this?
I should be able to display route matching logic in debug mode in nginx, but I'm not sure how to accomplish this.
Any help is appreciated.
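For the record, nginx can trace its request processing, including which server block a request is matched against, through the debug log, assuming the binary was built with --with-debug (check nginx -V):
error_log /var/log/nginx/error.log debug;
The output is very verbose, so enable it only while diagnosing.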
After investigation, it seems the problem was unrelated to the fact that our URL was a subdomain.
To debug the situation, a $host variable was introduced in the log_format directive in /etc/nginx/nginx.conf:
log_format main '$remote_addr - $host - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
This $host variable allowed us to see that there was a problem with sub.domain.org: when we accessed sub.domain.org, the host was changed to the NGINX server's hostname, whereas with customroute the host was not changed.
It turned out sub.domain.org was not a simple DNS entry but an Apache proxy pass configuration. Apache was changing the host name when passing the request along, so NGINX did not match the rewritten host: in the request it received its own host instead of the target host.
To correct this behavior, we had to add the following configuration to Apache: ProxyPreserveHost on.
Once we restarted Apache, the host was preserved and our server_name sub.domain.org was correctly matched in NGINX.
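For reference, a minimal sketch of the Apache side (the vhost details are assumptions; ProxyPreserveHost is the relevant line):
<VirtualHost *:80>
ServerName sub.domain.org
ProxyPreserveHost On
ProxyPass / http://nginx-host/
ProxyPassReverse / http://nginx-host/
</VirtualHost>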

How can I host multiple apps under one domain name?

Say I own a domain name: domain, and I host a static blog at www.domain.com. The advantage of having a static site is that I can host it for free on sites like netlify.
I'd now like to have several static webapps under the same domain name, so I don't have to purchase a domain for each webapp. I can do this by adding a subdomain for my apps. Adding a subdomain is easy enough. This video illustrates how to do it with GoDaddy for example. I can create a page for my apps called apps.domain.com where apps is my subdomain.
Say, I have several static webapps: app1, app2, app3. I don't want a separate subdomain for each of these, e.g., app1.domain.com. What I'd like instead is to have each app as a subfolder under the apps subdomain. In other words, I'd like to have the following endpoints:
apps.domain.com/app1
apps.domain.com/app2
apps.domain.com/app3
At the apps.domain.com homepage, I'll probably have a static page listing out the various apps that can be accessed.
How do I go about setting this up? Do I need to have a server of some sort (e.g., nginx) at apps.domain.com? The thing is I'd like to be able to develop and deploy app1, app2, app3 etc. independently of each other, and independently of the apps subdomain. Each of these apps will probably be hosted by netlify or something similar.
Maybe there's an obvious answer to this issue, but I have no idea how to go about it at the moment. I would appreciate a pointer in the right direction.
Something along the lines of below should get you started if you decide to use nginx. This is a very basic setup. You may need to tweak it quite a bit to suit your requirements.
apps.domain.com will serve index.html from /var/www
apps.domain.com/app1 will serve index.html from /var/www/app1
apps.domain.com/app2 will serve index.html from /var/www/app2
apps.domain.com/app3 will serve index.html from /var/www/app3
http {
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
include /etc/nginx/mime.types;
default_type application/octet-stream;
index index.html;
include /etc/nginx/conf.d/*.conf;
server {
listen 80 default_server;
listen [::]:80 default_server;
server_name apps.domain.com;
root /var/www;
# Load configuration files for the default server block.
include /etc/nginx/default.d/*.conf;
location / {
}
location /app1 {
}
location /app2 {
}
location /app3 {
}
error_page 404 /404.html;
location = /40x.html {
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
}
}
}
I initially solved this problem using nginx. But I was very unhappy with that, because I needed to pay for a server, set up the architecture for it, etc.
The easiest way to do this, that I know of today, is to make use of URL rewrites. E.g. Netlify rewrites, Next.js rewrites.
Rewrites allow you to map an incoming request path to a different destination path.
Here is an example usage in my website.
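For example, with a Netlify _redirects file, one rule per app can map a path to an independently deployed site (the target URLs are placeholders; the 200 status makes it a rewrite/proxy rather than a redirect):
/app1/* https://app1-example.netlify.app/:splat 200
/app2/* https://app2-example.netlify.app/:splat 200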
Just one addition: if you're hosting the apps on an external server, you might want to set up nginx and use its proxy module to forward incoming requests from your nginx installation to the external webserver:
web-browser -> nginx -> external-web-server
And for the location that needs to be forwarded:
location /app1 {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_pass https://url-of-external-webserver;
}
It would seem that you're asking the question prematurely: what actual issues are you having when doing this with the naive approach?
It is generally the best idea to have each app run on its own domain or subdomain; this is done to prevent XSS attacks, where vulnerability in one of your apps may result in your whole domain becoming vulnerable. This is because security features are generally implemented in the browser on a per-domain basis, where it is presumed that the whole domain is under the control of a single party (e.g., running a single app, at the end of the day).
Otherwise, there's really nothing special that must be done to have multiple apps on a single domain. Provided that the paths within each app are correct (e.g., they're either relative, or absolute with the full path to the location of the specific app), there aren't really any specific issues to be aware of, frankly.

nginx webdav could not open collection

I have built nginx on a FreeBSD system with the following configuration parameters:
./configure ... --with-http_dav_module
Now this is my configuration file:
user www www;
worker_processes 1;
events {
worker_connections 1024;
}
http {
include mime.types;
default_type application/octet-stream;
# reserve 1MB under the name 'proxied' to track uploads
upload_progress proxied 1m;
sendfile on;
#tcp_nopush on;
client_max_body_size 500m;
#keepalive_timeout 0;
keepalive_timeout 65;
#gzip on;
#upload_store /var/tmp/firmware;
client_body_temp_path /var/tmp/firmware;
server {
server_name localhost;
listen 8080;
auth_basic "Restricted";
auth_basic_user_file /root/.htpasswdfile;
create_full_put_path on;
client_max_body_size 50m;
dav_access user:rw group:r all:r;
dav_methods PUT DELETE MKCOL COPY MOVE;
autoindex on;
root /root;
location / {
}
}
}
Now, the next things I do are check the syntax of the configuration file by issuing nginx -t and then do a graceful reload as follows: nginx -s reload.
Now, when I point my web browser to nginx-ip-address:8080 I get the list of my files and folders and so on and so forth (I think that is due to the autoindex on feature).
But the problem is that when I try to test the webdav using cadaver as follows:
cadaver http://nginx-ip-address:8080/
It asks me to enter authorization credentials, and after I enter them it gives me the following error:
Could not open Collection: 405 Not Allowed
And the following is the nginx-error-log line which occurs at the same time:
*125 no user/password was provided for basic authentication, client: 172.16.255.1, server: localhost, request: "OPTIONS / HTTP/1.1", host: "172.16.255.129:8080"
The username and password work just fine when I try to access it from the web browser, so what is happening here?
It turns out that the WebDAV module built into nginx is incomplete; to enable full WebDAV support, we need to add the following external third-party module: nginx-dav-ext-module.
Link to its github: https://github.com/arut/nginx-dav-ext-module.git
The configure parameter would now be:
./configure --with-http_dav_module --add-module=/path/to/the/above/module
The built-in module provides only the PUT, DELETE, MKCOL, COPY and MOVE DAV methods.
nginx-dav-ext-module adds the following additional DAV methods: PROPFIND and OPTIONS.
You will also need to edit the configuration file to add the following line:
dav_ext_methods PROPFIND OPTIONS;
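In context, the two directives sit next to each other in the server block from the question (a sketch of just the relevant lines):
server {
# ... auth, root, etc. ...
dav_methods PUT DELETE MKCOL COPY MOVE;
dav_ext_methods PROPFIND OPTIONS;
}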
After doing so, check that the syntax of the conf file is intact by issuing nginx -t, and then reload nginx gracefully: nginx -s reload
And voilà! You should now be able to use cadaver or any other DAV client program to get into the directories.
I cannot believe that I solved this, it drove me nuts for a while!

NGINX [emerg] unknown directive "upload_pass" error in a config file

I have installed Nginx 1.2.0 with Passenger on my Mac Mini running Lion Server. I used the instructions from the link below.
https://github.com/coverall/nginx
I will state upfront that I am new to Nginx & Passenger. I am working on a Ruby on Rails project that I would like to host on the server. When I try to start Nginx I get the following error:
[emerg] unknown directive "upload_pass" in /usr/local/etc/nginx/virtualhosts/adam.localhost.coverallcrew.com.conf:20
Here are lines 19 & 20 from the file in question. This is a file that I assume was included in the Nginx installation. The only config file I have done anything with is nginx.conf where I added the lines to hopefully host my Rails application.
# pass request body to here
upload_pass @fast_upload_endpoint;
This is my second attempt at doing extensive web searches on how to correct this error. I had hoped to find that I needed to add something to nginx.conf to get upload_pass defined somewhere, but I only found solutions for cases where the directive's module was indeed missing.
I took a look at nginx.conf. There are a lot of statements commented out. Here are the ones that are not:
http {
include mime.types;
default_type application/octet-stream;
sendfile on;
gzip on;
server_name_in_redirect off;
port_in_redirect off;
client_max_body_size 8m;
client_body_buffer_size 128k;
include upstreams/*.conf;
include virtualhosts/*.conf;
include third-party/*.conf;
server {
listen 8080;
server_name www.lightbesandbox2.com;
root /Sites/iktusnetlive_ror/public;
passenger_enabled on;
}
}
Another question: do I need these virtual hosts that were included in the Nginx install?
Any help would be appreciated.
It appears your Nginx is not compiled with the module that provides the upload_pass directive, so it does not understand it. I am not certain how to do this with Homebrew, but you can compile it in:
./configure --add-module=/path/to/upload_pass/source
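A sketch of the full rebuild, assuming the directive comes from the third-party nginx-upload-module (the URL and paths are illustrative):
git clone https://github.com/vkholodkov/nginx-upload-module.git
cd nginx-1.2.0
./configure --add-module=../nginx-upload-module
make
sudo make install
Afterwards, nginx -V should list the module among the configure arguments.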
