I want to build a client-side-only application with Nuxt 3, and, just as the docs describe here, I've added ssr: false to my Nuxt config.
I then used the nuxi build command to build the application, but the output still says it needs to be run with Node.
I then proceeded to run nuxi generate, as I would normally do for static hosting.
According to the output from the generate command, I should be able to just deploy the public folder to any static web hosting. However, when I do this, I just get a completely white page.
I have tried running the same commands without ssr: false, and that does render a page, but then none of my JavaScript works.
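For reference, these are the commands I'm running, via npx (adjust if you invoke nuxi differently):

npx nuxi build
npx nuxi generate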
Edit: minimal reproducible example
So I've just followed these steps from the nuxt docs.
Without making any code changes, except for editing my nuxt config, I've run generate.
This is what my nuxt config looks like right now:
import { defineNuxtConfig } from 'nuxt'
// https://v3.nuxtjs.org/api/configuration/nuxt.config
export default defineNuxtConfig({
  ssr: false,
})
I then ran npx serve .output/public as suggested in the comments, and that seemed to work just fine locally.
I then copied the public folder to my web server, but the same issue persists, just a white screen is visible.
Maybe I should clarify my question a little more: is it still possible to host a nuxt SPA, without running a node process on the server, just as it was before in nuxt 2?
Right now I just switched to a server rendered application, as I don't see another solution.
Using NGINX to serve a statically generated build will currently fail, because nginx does not know the MIME type for .mjs files (they get sent as application/octet-stream).
To fix it, just add "application/javascript" as the MIME type for mjs files in the nginx configuration.
Example to adapt to your needs:
1 - Create a file named "nginx.conf" in your project root folder with:
user  nginx;
worker_processes  auto;

error_log  /var/log/nginx/error.log notice;
pid        /var/run/nginx.pid;

events {
    worker_connections  1024;
}

http {
    include       /etc/nginx/mime.types;
    types {
        application/javascript mjs;
    }
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    keepalive_timeout  65;

    #gzip  on;

    include /etc/nginx/conf.d/*.conf;
}
With "include /etc/nginx/mime.types;" we are loading the default nginx mime types, and after that extending that list with "application/javascript mjs;".
2 - Then you could use that file in your image build step as follow (check the line 2):
FROM nginx:alpine
COPY ./nginx.conf /etc/nginx/nginx.conf
COPY .output/public /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
Now you can deploy the image. Enable the gzip line if you are not already using compression.
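For example (image name and port are placeholders), you could build and test the image locally, then check that an .mjs asset now comes back with the right Content-Type; the exact file name under /_nuxt/ will differ in your build:

# build the image from the Dockerfile above and run it locally
docker build -t nuxt-spa .
docker run --rm -p 8080:80 nuxt-spa

# in another terminal: the header should now read application/javascript
# instead of application/octet-stream
curl -I http://localhost:8080/_nuxt/entry.mjs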
I'm trying to dabble in Nginx and make my first server thingie, but it's not working. Here is my config file:
worker_processes  1;

events {
    worker_connections  1024;
}

http {
    include       mime.types;
    default_type  application/octet-stream;

    sendfile        on;
    keepalive_timeout  65;

    server {
        location / {
            root ~/42cursus/ft_server/tests/www;
        }

        location /images/ {
            root ~/42cursus/ft_server/tests;
        }
    }

    include servers/*;
}
I've been following this beginner's guide: https://nginx.org/en/docs/beginners_guide.html. The only thing I did differently was deleting the commented lines in the default config file so it would be clearer to me.
Whenever I send nginx -s reload, I get a "signal process started" added to my error.log, and trying to access the site via localhost just shows This.
Could someone help me, keeping in mind that I'm on a school computer and can't use sudo? Thank you in advance.
Added "listen 80;" to my server block, and now I get a 404 instead of nothing, at least that's progress!
Say I own a domain name: domain, and I host a static blog at www.domain.com. The advantage of having a static site is that I can host it for free on sites like netlify.
I'd now like to have several static webapps under the same domain name, so I don't have to purchase a domain for each webapp. I can do this by adding a subdomain for my apps. Adding a subdomain is easy enough. This video illustrates how to do it with GoDaddy for example. I can create a page for my apps called apps.domain.com where apps is my subdomain.
Say, I have several static webapps: app1, app2, app3. I don't want a separate subdomain for each of these, e.g., app1.domain.com. What I'd like instead is to have each app as a subfolder under the apps subdomain. In other words, I'd like to have the following endpoints:
apps.domain.com/app1
apps.domain.com/app2
apps.domain.com/app3
At the apps.domain.com homepage, I'll probably have a static page listing out the various apps that can be accessed.
How do I go about setting this up? Do I need to have a server of some sort (e.g., nginx) at apps.domain.com? The thing is I'd like to be able to develop and deploy app1, app2, app3 etc. independently of each other, and independently of the apps subdomain. Each of these apps will probably be hosted by netlify or something similar.
Maybe there's an obvious answer to this issue, but I have no idea how to go about it at the moment. I would appreciate a pointer in the right direction.
Something along the lines of the following should get you started if you decide to use nginx. This is a very basic setup; you may need to tweak it quite a bit to suit your requirements.
apps.domain.com will serve index.html from /var/www
apps.domain.com/app1 will serve index.html from /var/www/app1
apps.domain.com/app2 will serve index.html from /var/www/app2
apps.domain.com/app3 will serve index.html from /var/www/app3
http {
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile            on;
    tcp_nopush          on;
    tcp_nodelay         on;
    keepalive_timeout   65;
    types_hash_max_size 2048;

    include             /etc/nginx/mime.types;
    default_type        application/octet-stream;

    index index.html;

    include /etc/nginx/conf.d/*.conf;

    server {
        listen       80 default_server;
        listen       [::]:80 default_server;
        server_name  apps.domain.com;
        root         /var/www;

        # Load configuration files for the default server block.
        include /etc/nginx/default.d/*.conf;

        location / {
        }

        location /app1 {
        }

        location /app2 {
        }

        location /app3 {
        }

        error_page 404 /404.html;
        location = /404.html {
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
        }
    }
}
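The directory layout this assumes is one index.html per app under the root:

/var/www/index.html
/var/www/app1/index.html
/var/www/app2/index.html
/var/www/app3/index.html

If the apps use client-side routing, each location will probably also need a fallback such as try_files $uri $uri/ /app1/index.html; as written, the empty location blocks simply serve whatever files exist under root.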
I initially solved this problem using nginx, but I was very unhappy with that because I needed to pay for a server, set up the architecture for it, etc.
The easiest way to do this, that I know of today, is to make use of URL rewrites. E.g. Netlify rewrites, Next.js rewrites.
Rewrites allow you to map an incoming request path to a different destination path.
Here is an example usage in my website.
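As a rough sketch of the idea with Netlify (the app URLs here are placeholders for wherever each app is deployed), a _redirects file on the apps.domain.com site can proxy each path prefix to a separately deployed app:

/app1/*  https://app1.netlify.app/:splat  200
/app2/*  https://app2.netlify.app/:splat  200
/app3/*  https://app3.netlify.app/:splat  200

The 200 status makes these rewrites (proxies) rather than redirects, so the URL in the address bar stays on apps.domain.com.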
Just one addition: if you're hosting the apps on an external server, you might want to set up nginx and use its proxy module to forward incoming requests from your nginx installation to the external web server:
web-browser -> nginx -> external-web-server
And for the location that needs to be forwarded:
location /app1 {
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    proxy_redirect off;
    proxy_pass https://url-of-external-webserver;
}
It would seem that you're asking the question prematurely: what actual issues are you having when doing what you're trying to do with the naive approach?
It is generally best to have each app run on its own domain or subdomain; this is done to prevent XSS attacks, where a vulnerability in one of your apps could make your whole domain vulnerable. This is because security features are generally implemented in the browser on a per-domain basis, on the presumption that the whole domain is under the control of a single party (e.g., running a single app, at the end of the day).
Otherwise, there's really nothing special that must be done to have multiple apps on a single domain. Provided that the paths within each app are correct (e.g., either relative, or absolute with the full path to the location of the specific app), there aren't really any specific issues to be aware of.
I can't get any changes in the /etc/nginx/nginx.conf http block to be used. I'm starting with the simplest thing: I want to change the name of access.log to something else (i.e. a.log). It is a vanilla nginx install (no custom config files yet). Here's what I know:
Changing a value in the head of nginx.conf does affect the configuration (changing worker_processes 4 to worker_processes 2 does change the number of workers)
Making a syntax error in nginx.conf's http block does cause nginx to throw an error on restart
Changing access_log to access_log /var/log/nginx/a.log does not modify the location of the log, and nginx in fact continues logging to /var/log/nginx/access.log
Here is a snippet of my nginx.conf file:
user www-data;
worker_processes 4;
pid /run/nginx.pid;

events {
    worker_connections 768;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    access_log /var/log/nginx/a.log;
    #....
}
Is it something as simple as I'm modifying an http block that gets overwritten by some other config file? Thanks for the help.
Isn't your access_log also defined in a server block? Have a look at the default config in nginx/sites-enabled/.
In that case, the value in the http block is overridden by the one in the lower-level block.
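For example, a default site file typically contains something like this, and the access_log there wins for requests handled by that server (paths are illustrative):

server {
    listen 80 default_server;
    root /var/www/html;
    # this directive overrides the access_log set in the http block
    access_log /var/log/nginx/access.log;
}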
Hello, I have installed the newest stable version of nginx (1.4.4) and also want to install phpMyAdmin. Unfortunately, the following error appears when I try to open phpMyAdmin in my browser through http:// 192 . . . /phpmyadmin:
404 Not Found
nginx/1.4.4
What exactly is the reason that Nginx can't find the phpMyAdmin file?
This is the content of my /etc/nginx/nginx.conf file:
user  nginx;
worker_processes  4;

error_log  /var/log/nginx/error.log warn;
pid        /var/run/nginx.pid;

events {
    worker_connections  1024;
}

http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    keepalive_timeout  65;

    #gzip  on;

    include /etc/nginx/conf.d/*.conf;
}
Many Greetings
It's really not a good idea to make a symbolic link in your document root; that's just asking for trouble, having a phpMyAdmin shortcut in www.
The right way to do it is to create a file called php and add it to /etc/nginx/sites-available. You can copy the file below, but change the port number to something else.
server {
    listen 30425;

    # Don't want to log accesses.
    #access_log /dev/null main;
    access_log /var/log/nginx/php.acces_log main;
    error_log /var/log/nginx/php.error_log info;

    root /usr/share/phpmyadmin;
    index index.php index.html index.htm;
    error_page 401 403 404 /404.php;

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param SERVER_NAME $http_host;
        fastcgi_ignore_client_abort on;
    }
}
Now you need to make a symbolic link in the sites-enabled directory:
ln -s /etc/nginx/sites-available/php /etc/nginx/sites-enabled
Now you need to add this code to the Logging Settings part of nginx.conf:
log_format main '$remote_addr - $remote_user [$time_local] '
                '"$request" $status $body_bytes_sent "$http_referer" '
                '"$http_user_agent" "$http_x_forwarded_for"';
Now you can access your phpmyadmin with http://example.com:30425
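After creating the symlink and editing nginx.conf, check the configuration and reload so nginx picks up the new server:

sudo nginx -t
sudo nginx -s reload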
I have followed the instructions on https://www.digitalocean.com/community/articles/how-to-install-phpmyadmin-on-a-lemp-server and linked nginx and phpMyAdmin through the following command:
sudo ln -s /usr/share/phpmyadmin/ /usr/share/nginx/html
(the folder "www" was replaced in the newer versions of Nginx through "html")
sudo service nginx restart
If I now try to start phpMyAdmin through
"http:// 192 . . . /phpmyadmin"
the following error appears in my browser:
403 Forbidden
nginx/1.4.4
If I try to start phpMyAdmin through
"http:// 192 . . . /phpmyadmin/index.php"
the index.php file only gets downloaded.
Many Greetings
The answers provided here did not help me, for reasons unknown. I suspect that other parts of my conf file ruled out going into the phpmyadmin directory.
Not wanting to spend too long fixing it, I simply copied my working conf file from www.example.com to phpmyadmin.example.com and set the directory to /usr/share/phpmyadmin. I then did nginx -s reload and I was good to go, without ever having to get to the bottom of the 403/502 errors that I encountered doing it properly. I also believe that http://phpmyadmin.example.com is better protected from random hackers trying their scripts than http://www.example.com/phpmyadmin; however, it does require the wildcard DNS for your domain to be set up correctly, i.e. so that the address resolves to your server.
Hope this helps anyone frustrated with the simple task of getting phpMyAdmin working with nginx.
As user1941083 explains, you just need to create a symbolic link between phpMyAdmin and your site's directory.
sudo ln -s /usr/share/phpmyadmin/ /usr/share/nginx/www
And after that, modify nginx setting in /etc/nginx/sites-available.
Change:
index index.html index.htm;
to:
index index.html index.htm index.php;
Simply try using http://domain-name-or-ip-address/phpmyadmin/index.php
Sometimes nginx does not use index.php automatically; we have to tell it to, either by changing the location in the nginx.conf file or by adding index.php manually to the URL, just like above.
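If index.php gets downloaded instead of executed, the server block also needs a PHP location that hands requests to PHP-FPM. A minimal sketch, assuming PHP-FPM listens on 127.0.0.1:9000 (use your socket path instead if it listens on a Unix socket):

location ~ \.php$ {
    include fastcgi_params;
    # adjust to unix:/var/run/php5-fpm.sock (or similar) if PHP-FPM uses a socket
    fastcgi_pass 127.0.0.1:9000;
    fastcgi_index index.php;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
}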
I know this is late, but I had the same problem after following some tutorials and suggestions online, to no avail. I also tried #STH's answer on this page, but the error still existed.
I had to restart the following services:
sudo systemctl restart nginx.service
sudo systemctl restart php-fpm
THIS DID THE TRICK: Configure the Daemons to Start at Boot, to ensure that all of the LEMP programs start automatically after any server restart:
sudo systemctl enable nginx mysqld php-fpm
Try to create the symbolic link between the phpmyadmin location and nginx www:
sudo ln -s /usr/share/phpmyadmin/ /usr/share/nginx/html
After this you may still get a 403 Forbidden; to fix it, add index.php inside the server block, where index.html and the others are:
/etc/nginx/sites-available/default
and then restart nginx
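For example (a sketch; your existing line may list different defaults), the index line in that file becomes:

index index.html index.htm index.php;

followed by:

sudo service nginx restart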
I have an ember.js application I developed on my local machine. I use a restify/node.js server to make it available locally.
When I navigate in my application, the address bar changes like this:
Example 1
1. http://dev.server:3000/application/index.html#about
2. http://dev.server:3000/application/index.html#/items
3. http://dev.server:3000/application/index.html#/items/1
4. http://dev.server:3000/application/index.html#/items/2
I am now trying to deploy it on a remote test server which runs nginx.
Although everything works well locally, on the test server I can navigate through my web application but the part of the URI after the hash is not updated.
In any browser, http://test.server/application/index.html is always displayed in my address bar. For the same sequence of clicks as in Example 1, I always have:
1. http://web.redirection/application/index.html
2. http://web.redirection/application/index.html
3. http://web.redirection/application/index.html
4. http://web.redirection/application/index.html
Moreover, if I directly enter a complete URI such as http://web.redirection/application/index.html#/items/1, the browser will only display the content that is at http://test.server/application/index.html (which is definitely not the expected behaviour).
I suppose this comes from my nginx configuration, since the application works perfectly on a local restify server.
The nginx configuration for this server is:
test.server.conf (which is symlinked into /etc/nginx/sites-enabled/test.server.conf)
server {
    server_name test.server web.redirection;

    root /usr/share/nginx/test;
    index index.html index.htm;

    location / {
        try_files $uri $uri/ /index.html;
    }

    location ~ \.csv$ {
        alias /usr/share/nginx/test/$uri;
    }
}
nginx.conf
user www-data;
worker_processes 4;
pid /var/run/nginx.pid;

events {
    worker_connections 768;
}

http {
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log debug;

    gzip on;
    gzip_disable "msie6";

    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}
EDIT:
Just to be sure that there were no missing files on my test server, I ran a restify/node server (like on my dev machine) and everything works fine when I connect to that server (!). Both the nginx and restify servers point to the same files.
EDIT 2
I discovered that my problem happens when I use a web redirection.
If I use an address like http://test.server/application/index.html everything works fine
If I use http://web.redirection/application/index.html it does not work.
So it is my nginx conf that is not correctly redirecting the web.redirection URI to test.server, or something like that.
Does someone have an idea? What am I missing? What should I change to make this work?
EDIT 3 and solution
The web redirection I used was an A type DNS record. This does not work. Using a CNAME type DNS record solves the issue.
No, this has nothing to do with nginx: anything past the # is never sent to the server; it is handled by JavaScript on the client. I would suggest using Firebug or any browser inspector to make sure that all your JS files are being loaded and that nothing fails with a 404 error; also check for errors in the inspector console.
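To illustrate: whichever of the URLs above you enter, the request the server actually receives is only the part before the fragment, something like:

GET /application/index.html HTTP/1.1
Host: test.server

The #/items/1 part stays in the browser and is handled by Ember's router.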
The problem came from the DNS redirection from web.redirection to test.server.
It was an A-type record: this does not work.
Using a CNAME-type record that points directly to test.server works.