I am new to webpack and am using the Angular 2 starter pack from AngularClass with RC6.
All is well, but I have questions about deploying to production, which I did and which seemed to work with nginx and supervisor.
So I did the below to deploy:
npm run build:prod
npm run server:prod
Supervisor Conf
[program:angular]
directory=/var/my-angular2/
autostart=true
autorestart=true
process_name = angular-%(process_num)s
command = npm run server:prod
--port=%(process_num)s
--log_file_prefix=%(here)s/logs/%(program_name)s-%(process_num)s.log
[group:angular_server]
programs=angular
My Nginx Conf
server {
listen 80 default_server;
listen [::]:80 default_server;
root /var/my-angular2/dist;
# Add index.php to the list if you are using PHP
index index.html index.htm index.php index.nginx-debian.html;
# Here we proxy pass only the base path
location = / {
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header Host $http_host;
proxy_pass http://127.0.0.1:8080;
}
# Here we proxy pass all the browsersync stuff including
# all the websocket traffic
location /browser-sync {
proxy_set_header X-Real-IP $remote_addr;
proxy_pass http://127.0.0.1:8080;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host $host;
}
}
Then I served the app out of the dist/ folder in nginx.
So, in the process of building and running, does the webpack build automatically add the call below, or is there another step?
enableProdMode()
What else am I missing for deploying a production Angular 2 app?
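For reference, here is a minimal sketch of what serving the dist/ output directly from nginx in production could look like, without proxying to the dev server. This is an assumption about a typical setup, not something specific to the AngularClass starter; the try_files fallback assumes the Angular router uses HTML5-style URLs.
server {
listen 80 default_server;
listen [::]:80 default_server;
root /var/my-angular2/dist;
index index.html;
# Fall back to index.html so deep links handled by the Angular router still resolve
location / {
try_files $uri $uri/ /index.html;
}
}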
Related
I have installed code-server on my Plesk VPS, and I was wondering how to expose it to the outside world using a reverse proxy.
Currently code-server is bound to 127.0.0.1:8080, and if I use wget via SSH I get the expected page.
How do I go about exposing code-server to the internet (using a reverse proxy) on Plesk/CentOS?
I’ve tried using the vhost_nginx.config file, but with no luck:
location ~ / {
proxy_pass http://localhost:8080;
proxy_read_timeout 90;
}
You can try using my nginx config below. Change the app URL and app port if needed, put it in /etc/nginx/sites-available, then symlink it into /etc/nginx/sites-enabled, and don't forget to restart nginx.
server {
listen 80;
server_name example.com; #change app url
location / {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $host;
proxy_pass http://127.0.0.1:8080; #change app port
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
# location /overview {
# proxy_pass http://127.0.0.1:8080$request_uri; #change app port
# proxy_redirect off;
# }
}
}
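Since Plesk generates the domain's nginx vhost itself (and a CentOS install may not have sites-available/sites-enabled at all), an alternative worth trying is to put only the proxy directives into the domain's "Additional nginx directives" field under Apache & nginx Settings. A rough sketch, assuming code-server stays bound to 127.0.0.1:8080:
# Goes into Plesk's "Additional nginx directives" box; the surrounding
# server block for the domain is generated by Plesk
location / {
proxy_pass http://127.0.0.1:8080;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}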
I have an ASP.NET-based app (on IIS 8) which loads perfectly when I visit it directly at http://localhost:89/files.
When I visit my nginx reverse proxy URL, http://localhost/files, the browser downloads the file Login.aspx instead of loading the web page. I don't have any issues with reverse proxying the root domain (for regular HTML web pages).
I would like to resolve this issue without modifying my ASP.NET app if at all possible. Below is the configuration I'm using in nginx.conf:
server {
listen 80;
server_name localhost;
location / {
root "C:\inetpub\wwwroot";
index index.html index.htm;
}
location /files {
proxy_pass http://localhost:89/files/;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection keep-alive;
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
}
Maybe there's something I need to change in the Web.config file for the ASP.NET app?
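One nginx-side detail that may be worth ruling out first (this is an assumption about the cause, not a confirmed fix): with location /files and proxy_pass http://localhost:89/files/;, a request for /files/Login.aspx is forwarded upstream as /files//Login.aspx, because the matched prefix /files is replaced by /files/. Keeping the trailing slashes consistent avoids the double slash:
# Sketch: location prefix and proxy_pass URI end the same way
location /files/ {
proxy_pass http://localhost:89/files/;
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
# Optional: send the bare /files to /files/
location = /files {
return 301 /files/;
}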
I am trying to host a static website on EC2, but with no luck.
Here is my nginx config file for the Node app:
server {
listen 80;
server_name localhost;
location / {
proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header Host $http_host;
proxy_pass "http://127.0.0.1:3000";
}
}
I want to host a static website too.
How can I do that on EC2?
I'm not sure I can explain it end to end, but I hope you'll get a basic idea of how it works.
From your question, I understand that you are having some problems with the nginx configuration.
Your nginx config file should look like this:
location / {
# This would be the directory where your frontend code resides
root /var/www/html/;
try_files $uri /index.html;
}
location /api {
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-NginX-Proxy true;
proxy_pass http://localhost:3000/;
proxy_set_header Host $http_host;
proxy_redirect off;
}
You can use PM2 to run the Node.js app on your VM.
Here nginx acts as the web server for your frontend application and as a proxy to your backend application; all requests will hit your nginx server first.
I hope this is what you are looking for.
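For completeness, the two location blocks above would normally sit inside a server block; a minimal sketch (server_name is a placeholder, adjust the paths and port to your setup):
server {
listen 80;
server_name example.com; # placeholder
# Static frontend build
location / {
root /var/www/html/;
try_files $uri /index.html;
}
# API proxied to the Node process (run under PM2, for example)
location /api {
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_pass http://localhost:3000/;
proxy_redirect off;
}
}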
I have two servers running on a DigitalOcean droplet. One is a Django/Wagtail application served with Gunicorn (used as a headless CMS), and the other is a SSR Nuxt.js app (front-end). Using the following nginx configuration I’ve made the Nuxt app available at example.com (works great), and now I’m trying to make my Django/Wagtail application available at the subdomain cms.example.com. (I’ve modified my local hosts file so the domain example.com actually functions)
/etc/nginx/sites-available/default
server {
listen 80;
listen [::]:80;
server_name example.com;
location / {
proxy_pass http://localhost:3000;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
}
}
server {
listen 80;
listen [::]:80;
server_name cms.example.com;
location / {
include proxy_params;
proxy_pass http://unix:/home/thomas/daweb/cms/cms.sock;
}
}
/etc/nginx/proxy_params
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
The result from curl --unix-socket /home/thomas/daweb/cms/cms.sock cms.example.com is the HTML of the default Wagtail landing page, with no errors.
However, navigating to cms.example.com just gives me a connection error. If I swap the two, I can see the Wagtail interface at example.com, so I know they’re both working. However, I can’t seem to figure out how to configure a subdomain, and I struggle to understand the nginx documentation. Also, similar questions about configuring subdomains are usually about making static files available, not listening to active ports.
One extra layer of trouble is that the Wagtail CMS is accessible at /admin of its server root, so I’d like to make that page appear at cms.example.com rather than having to navigate to cms.example.com/admin. Any help would be greatly appreciated!
Check what is contained in /etc/nginx/proxy_params. I would expect something like this:
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
Also, to be sure Gunicorn is working correctly, try:
curl --unix-socket /home/thomas/daweb/cms/cms.sock cms.example.com
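If the socket responds to curl but the browser still cannot reach cms.example.com, the first thing to check is name resolution: the question mentions the local hosts file was edited for example.com, and cms.example.com needs its own entry (or DNS record) as well. For the second part of the question, making the admin appear at the subdomain root could be done with a redirect in the cms server block; a rough sketch, assuming the Wagtail admin lives at /admin/:
server {
listen 80;
listen [::]:80;
server_name cms.example.com;
# Send the bare root to the Wagtail admin (assumes /admin/ is the desired landing page)
location = / {
return 302 /admin/;
}
location / {
include proxy_params;
proxy_pass http://unix:/home/thomas/daweb/cms/cms.sock;
}
}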
I have configured a Meteor server and set up the nginx configuration. The route works; however, when configuring dynamic subdomains to point to a specific part of the web app, the browser gets a 404 error when loading the Meteor file.
I am attempting to direct all *.domain.com to http://localhost:3000/booking/
My configuration is:
server {
server_name *.domain.com;
listen 80;
location / {
proxy_pass http://localhost:3000/booking/;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade; #for websockets
proxy_set_header Connection $connection_upgrade;
proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header Host $host;
}
}
The 404 occurs in the Meteor JS file.
If I remove the above nginx subdomain configuration and go to a subdomain, it works perfectly and loads the route's application. I assume I am missing something needed to load the application correctly.
The issue only occurs when I proxy_pass to a route within the URL, i.e. <url>/booking.
There are different ways of solving the issue.
1 - In case of a 404, try a fallback option without /booking in the URL
server {
server_name *.domain.com;
listen 80;
location / {
proxy_pass http://localhost:3000/booking/;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade; #for websockets
proxy_set_header Connection $connection_upgrade;
proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header Host $host;
proxy_intercept_errors on; # needed so the upstream 404 is handled by error_page
error_page 404 = @fallback;
}
location @fallback {
proxy_pass http://localhost:3000$request_uri; # $request_uri already starts with /
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade; #for websockets
proxy_set_header Connection $connection_upgrade;
proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header Host $host;
}
}
2 - Have a separate block for js and css
server {
server_name *.domain.com;
listen 80;
location / {
proxy_pass http://localhost:3000/booking/;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade; #for websockets
proxy_set_header Connection $connection_upgrade;
proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header Host $host;
}
location ~ \.(js|css|font)$ {
proxy_pass http://localhost:3000$request_uri; # $request_uri already starts with /
proxy_http_version 1.1;
proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header Host $host;
}
}
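Both variants (and the configuration in the question) rely on $connection_upgrade, which is not a built-in variable. It is normally defined once in the http context with a map, as in the nginx WebSocket proxying documentation:
# Goes in the http {} context (e.g. nginx.conf), not inside a server block
map $http_upgrade $connection_upgrade {
default upgrade;
'' close;
}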
I have a similar setup, where I have configured different Meteor applications on several subdomains, plus a static website on the domain root, all pointing internally to different ports.
Here is my setup step by step.
Folder structure, location and proxy pass
The first thing to think about is the folder structure. Depending on your subdomain's vhost root directory, there is a relative path to your subdomain's application folder.
Imagine the following setup:
/www (dir, usually under /var)
  /domain (dir)
    /websitexy (dir, a static website is deployed under this dir)
  /subdomain (dir)
    /books (dir, subdomain app is deployed under this dir)
For such a setup I made my nginx config point to the app's location within the subdomain:
location /books { ... }
I had a similar issue when first starting my app. One thing I found out is that my config worked when setting proxy_pass to my private IP/port combination:
proxy_pass http://172.x.x.x:3000;
This also involves removing the route name (/books) after the port number in this entry. Now your proxy_pass covers all routing within your subdomain.
Note on routing
Note that there can be confusion about routing here. By setting the location directive you set the routing at the nginx level (the server's directory structure), which is why there is no route within your proxy_pass.
Your application may have its own internal routing defined. It is important that your app's internal router receives all requests relative to its application URL root, which is why the proxy_pass should not include any path after the port number.
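In nginx terms, the difference is whether proxy_pass carries a URI part: without one, the original request URI is passed upstream unchanged; with one, the matched location prefix is replaced. A small sketch of the two behaviours (alternatives, not meant to be used together; the IP and path are illustrative):
# No URI part: /books/foo is forwarded upstream as /books/foo
location /books/ {
proxy_pass http://172.x.x.x:3000;
}
# URI part present: the matched prefix is replaced, so /books/foo is forwarded as /foo
location /books/ {
proxy_pass http://172.x.x.x:3000/;
}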
Websocket
I have read some articles on nginx and WebSocket connections. Basically, my initial settings came from this article and looked like the example in this documentation article:
location /app {
proxy_pass http://172.x.x.x;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
}
I also had to add a proxy_read_timeout and proxy_send_timeout because there was an issue with the websocket protocol otherwise:
By default, the connection will be closed if the proxied server does not transmit any data within 60 seconds. This timeout can be increased with the proxy_read_timeout directive.
So I also set the timeout values:
proxy_read_timeout 36000s;
proxy_send_timeout 36000s;
proxy_set_header Connection "upgrade";
Read more on this here and here.
To summarize, my setup looks like the following (using your app's details):
location /books {
proxy_pass http://172.x.x.x:3000;
proxy_http_version 1.1;
proxy_read_timeout 36000s;
proxy_send_timeout 36000s;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header Host $host;
}
So to solve your case, you may check your vhost directory (the one where your app is deployed, see the folder structure above) and change your location and proxy_pass settings accordingly.
If this is not working, you may need to add some more output of your errors, e.g. an excerpt of the log from an attempt to connect.