I have a website (https://checkeden.com) built with Next.js 12 and deployed on an AWS Lightsail server. Whenever I make some changes and create a new build, the page shows a 500 Internal Server Error, and once the build is done it starts working again. I think I am doing something wrong here. Can someone guide me on how to deploy a Next.js application on a Lightsail server without a 500 error while the build is in progress?
I am also using an Nginx reverse proxy to serve my website.
Below is the snippet of my Nginx configuration:
server {
    server_name checkeden.com;

    location / {
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
I tried some YouTube videos and blog posts but didn't find what I was looking for.
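One Nginx-side mitigation, sketched below, is to intercept upstream errors and serve a static maintenance page while the build is running; the /var/www/maintenance path and maintenance.html file here are hypothetical. The underlying issue is usually that next build rewrites the .next directory the running next start process is serving from, so building into a separate directory (or on another machine) and only then swapping it in and restarting avoids the error window entirely.

server {
    server_name checkeden.com;

    location / {
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;

        # Hand 5xx responses from the Next.js upstream back to nginx
        proxy_intercept_errors on;
        error_page 500 502 503 504 /maintenance.html;
    }

    # Static maintenance page shown while the build/restart is in progress
    location = /maintenance.html {
        root /var/www/maintenance;   # hypothetical path
        internal;
    }
}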
Related
I've deployed my .NET Core web application to an Ubuntu 16.04 server with Nginx, and I want to send all incoming requests to my .NET Core application. I used a tutorial from here. My sites-available/default file:
server {
    listen 80;
    server_name example.com *.example.com;

    location / {
        proxy_pass http://localhost:5000;
        proxy_redirect off;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection keep-alive;
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
Everything works fine except for one action where I want to pass parameters to change my image size on the fly:
http://example.com/api/files/get/5beffcb65a8e8f1c700a1a22/image?w=400&h=400
In that case I receive a 404 error, and that error is returned by Nginx. I tested it locally with curl, performing a direct request to my .NET Core app, and it works fine.
So how do I configure Nginx to send all requests, with all parameters, as-is to my .NET Core application?
Don't set proxy_redirect to off. Refer to this link for an explanation:
https://unix.stackexchange.com/questions/290141/nginx-reverse-proxy-redirection
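For context, a minimal sketch of that advice applied to the location block above (upstream address taken from the question): rather than disabling the rewrite, either rely on the default behaviour or map the upstream's Location headers back to the public URL explicitly.

location / {
    proxy_pass http://localhost:5000;
    # The default ("proxy_redirect default;") rewrites Location/Refresh headers
    # from the upstream back to this location's public URL; an explicit mapping
    # does the same thing without needing "proxy_redirect off;".
    proxy_redirect http://localhost:5000/ /;
    proxy_http_version 1.1;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
}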
I have a server with Ubuntu 16.04, Kestrel, and Nginx as a proxy server that redirects to localhost, where my app is. My app is built on ASP.NET Core 2, and I'm trying to add push notifications using SignalR Core. On localhost everything works well, and it also works on a free Windows host with IIS. But when I deploy my app to the Linux server I get an error:
signalr-clientES5-1.0.0-alpha2-final.min.js?v=kyX7znyB8Ce8zvId4sE1UkSsjqo9gtcsZb9yeE7Ha10:1
WebSocket connection to 'ws://devportal.vrweartek.com/chat?id=210fc7b3-e880-4d0e-b2d1-a37a9a982c33' failed: Error during WebSocket handshake: Unexpected response code: 204
But this error occurs only when I request my site from a different machine via my site name; when I request the site from the server itself via localhost:port, everything is fine. So I think the problem is in Nginx. I read that I need to configure it to work with WebSockets, which SignalR uses to establish the connection, but I wasn't successful. Maybe there is just some dumb mistake?
I was able to solve this by using $http_connection instead of keep-alive or upgrade:
server {
    server_name example.com;

    location / {
        proxy_pass http://localhost:5000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $http_connection;
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
I did this because SignalR also uses POST and GET requests to my hubs, so just doing an Upgrade on the connection in a separate server configuration wasn't enough.
The problem is in the Nginx configuration file. If you are using the default settings from the ASP.NET Core deployment guide, then the problem is one of the proxy headers: WebSocket requires the Connection header to be set to "upgrade".
You have to set a separate location for the SignalR hub in the Nginx configuration file, such as:
location /api/chat {
    proxy_pass http://localhost:5000;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
    proxy_cache_bypass $http_upgrade;
}
You can read my full blog post: https://medium.com/@alm.ozdmr/deployment-of-signalr-with-nginx-daf392cf2b93
For SignalR, in my case, besides the "proxy_set_header" settings there is another critical setting: "proxy_buffering off;".
So, a full example now looks like this:
http {
    map $http_upgrade $connection_upgrade {
        default Upgrade;
        ''      close;
    }

    server {
        server_name some_name;
        listen 80 default_server;
        root /path/to/wwwroot;

        # Configure the SignalR Endpoint
        location /hubroute {
            proxy_pass http://localhost:5000;

            # Configure WebSockets
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection $connection_upgrade;
            proxy_cache_bypass $http_upgrade;

            # Configure ServerSentEvents
            proxy_buffering off;

            # Configure LongPolling
            proxy_read_timeout 100s;

            proxy_set_header Host $host;
        }
    }
}
See reference: Document reverse proxy usage with SignalR
I am a newbie to Nginx and reverse proxies.
I am struggling to make access to Kibana through Nginx work with the latest version of Kibana (5.1.1).
This is my current config:
server {
    listen 8070;

    location ~ /analytics/(?<kibana_uri>.*) {
        proxy_pass https://stag.xxxxx.xxxx.es.amazonaws.com:5601/$kibana_uri;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
server.basePath: "/analytics"
I want to redirect all requests for /analytics to my Kibana server instance, but all I am getting is a 404. Please help me with some example config if anyone has done this earlier.
Also, what should the access link be?
For example: http://localhost:8070/analytics
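A minimal alternative sketch, assuming the upstream shown above is actually reachable and that server.basePath is set to "/analytics" in kibana.yml: a plain prefix location with trailing slashes lets nginx strip the /analytics/ prefix, and because proxy_pass contains no variables the query string is forwarded automatically (with a variable like $kibana_uri it is not, which may be part of why requests 404).

server {
    listen 8070;

    # Trailing slashes: /analytics/foo?bar=1 is forwarded upstream as /foo?bar=1
    location /analytics/ {
        proxy_pass https://stag.xxxxx.xxxx.es.amazonaws.com:5601/;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}

With this, the access link would be http://localhost:8070/analytics/ (or the server's hostname on port 8070).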
I have an Ember app running on port 4200 that uses an Express API on port 4500. I have uploaded my API to:
/var/www/my-api-domain.com/public_html/
I have also edited the nginx sites-available file:
location / {
    proxy_pass http://localhost:4500;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection 'upgrade';
    proxy_set_header Host $host;
    proxy_cache_bypass $http_upgrade;
}
I SSH into the server, change directory to my API, and run node server and this works! When I visit my IP in the browser, I see my API working properly:
http://159.203.31.72
I then ran ember build -prod locally and uploaded the contents of the resulting dist folder to:
/var/www/my-ember-domain.com/public_html/
I once again updated the nginx sites-available with:
location /ember {
    proxy_pass http://localhost:4200;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection 'upgrade';
    proxy_set_header Host $host;
    proxy_cache_bypass $http_upgrade;
}
Now what? Typically, when I run the site locally, I'd run ember server, but the resulting files in dist look much different and I don't have Ember CLI installed on the server. From what I've read, that doesn't seem to be the proper approach anyway.
When I hit http://159.203.31.72/ember in the browser, I get an nginx 502 Bad Gateway. How can I serve my Ember app?
ember server starts a development server which should not be used in production. Build your app using ember build --prod. Afterwards you will find your assets in the dist/ folder. Serve those with nginx and you are done. There is an example nginx.conf in the ember-cli docs: https://ember-cli.com/user-guide/#deploying-an-https-server-using-nginx-on-a-unixlinuxmacosx-machine
You could use ember-cli-deploy if you have to set up a more complex deployment workflow.
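For illustration, a minimal nginx sketch for serving the uploaded dist/ contents statically; the server_name and root are taken from the question's paths, and the /api/ prefix for the Express backend is an assumption, not something prescribed by the ember-cli docs.

server {
    listen 80;
    server_name my-ember-domain.com;

    # Serve the contents of Ember's built dist/ folder directly
    root /var/www/my-ember-domain.com/public_html;
    index index.html;

    location / {
        # Single-page app: unknown paths fall back to index.html so
        # client-side routes keep working on refresh and deep links.
        try_files $uri $uri/ /index.html;
    }

    # Keep proxying the Express API; the /api/ prefix is hypothetical.
    location /api/ {
        proxy_pass http://localhost:4500;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
    }
}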
I'm trying to deploy a websocket server to Elastic Beanstalk.
I have a Docker container that contains both nginx and a jar server, with nginx just doing forwarding. The nginx.conf is like this:
server {
    listen 80;

    location /ws/ {  # <-- this part only works locally
        proxy_pass http://127.0.0.1:8090/;  # jar handles websockets on port 8090
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }

    location / {  # <-- this part works locally and on Elastic Beanstalk
        proxy_pass http://127.0.0.1:8080/;  # jar handles http requests on port 8080
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $server_name;
    }
}
I can run this Docker container locally and everything works fine: HTTP requests are served, and I can connect WebSockets using ws://localhost:80/ws/. However, when I deploy to Elastic Beanstalk, HTTP requests are still OK, but trying to connect WebSockets on ws://myjunk.elasticbeanstalk.com:80/ws/ gives a 404 error. Do I need something else to allow WebSockets on Elastic Beanstalk?
OK, got it working. I needed the Elastic Beanstalk load balancer to use TCP instead of HTTP.
To do this from the AWS console (as it's laid out on 5/16/2015), go to your Elastic Beanstalk environment and choose "Configuration" in the left menu; under "Network Tier" there's a "Load Balancing" pane. Click its cog wheel, and you can change the load balancer protocol from HTTP to TCP.