I have Nginx running on server a (port 8000) and uWSGI running on server b (port 8001). b already serves a web socket at ws://b:8001/s. I would like to configure a as a reverse proxy also giving access to this web socket at ws://a:8000/s.
I am interested (if I understand correctly and this is the right approach) in a relaying the original HTTP request to b and in b initiating the protocol upgrade (as would also happen in the absence of a proxy), not in a initiating the protocol upgrade, as seems to happen in this example.
What Nginx location block would allow me to do that?
That proved straightforward enough. The following location block apparently does the trick (for Nginx 1.10.3 and uWSGI 2.0.17.1):
location /s {
    proxy_pass http://b:8001/s;
    proxy_http_version 1.1;
}
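For reference, the nginx documentation on WebSocket proxying also recommends forwarding the upgrade headers explicitly. A slightly fuller sketch of the same block (same host and path assumed), in case the minimal version above is not enough in another setup:

location /s {
    proxy_pass http://b:8001/s;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;   # pass the client's Upgrade header through to b
    proxy_set_header Connection "upgrade";    # hop-by-hop header, must be set explicitly
}

The backend b still performs the actual protocol upgrade; nginx only relays the handshake.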
I have an application running on a server in an EC2 instance, with the frontend (built with React) on port 6001 and the backend (built with Node.js) on port 6002.
When I send a request from the Ubuntu terminal on the instance with curl -X GET http://127.0.0.1:6002/api/works, it works fine and I get the proper data.
Now I open the domain (http://example.com) in a browser. However, only the frontend gets called. When the browser sends a request to the backend server, I get an error: GET http://127.0.0.1:6002/api/works net::ERR_CONNECTION_REFUSED (the domain goes through an ELB).
Here's my nginx config.
server {
    listen 80;
    listen [::]:80;
    server_name example.com;
    root /home/www/my-project/;
    index index.html;

    location / {
        proxy_pass http://127.0.0.1:6001/;
    }

    location /api/ {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_redirect off;
        proxy_pass http://127.0.0.1:6002/;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
My case is similar to this one, where the asker says:
My entry point via the aws dns takes me to the React app 127.0.0.1:4100 that is the port 80 nginx is listening to. The react app is the one that makes the backend calls and it is coded to make the backend api calls on 127.0.0.1:1323/api. The client error you saw was because it is displaying what the react app is trying to call in the server, but nginx is not redirecting that call to the app that is running at that port. Does that make sense?
The selected answer didn't work for me.
Also, according to the comments, the problem was solved by having the React app send the request to http://AWS_IP/, but I'm not sure that is a good solution for me, since then there would be no point in using the ELB, if I understand the concept of ELB correctly. I think the requests need to go through the ELB.
Please help, this is driving me crazy.
From your question, I understood the following:
- Your domain points to an Amazon ELB.
- There is a VM behind this ELB, running Nginx and the two applications.
- Nginx is listening on port 80, the backend application is listening on port 6002, and the frontend is listening on port 6001.
- Your frontend application is calling the backend FROM YOUR LOCAL BROWSER using http://127.0.0.1:6002/api/works.
Here is the problem:
You can curl 127.0.0.1 from the same instance where the application is running (the backend listening on port 6002) because you are hitting the localhost of that instance. It is different when your web application runs in your local browser: your React application (like all JavaScript applications) executes on your local machine, so the backend call hits your own localhost and returns CONNECTION REFUSED.
So the solution is to change the backend URL so that it looks something like http://yourdomain/api/works.
In addition to this, I have a couple of suggestions on your configuration:
- You don't need a separate web server for your frontend, since the same Nginx can serve it (see the sketch below).
- Make sure that your ELB target port is 80, or the same port that Nginx is listening on.
- Close ports 6001 and 6002 if they are publicly accessible.
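A minimal sketch of that consolidated setup, assuming the React build output lives in /home/www/my-project/ (as in your config) and the Node.js backend keeps listening on 127.0.0.1:6002 (both assumptions):

server {
    listen 80;
    listen [::]:80;
    server_name example.com;

    # serve the React build directly from nginx
    root /home/www/my-project/;
    index index.html;

    location / {
        try_files $uri /index.html;   # fall back to index.html for client-side routes
    }

    # proxy API calls to the Node.js backend on the same host
    location /api/ {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        # no trailing URI here, so /api/works is forwarded unchanged,
        # matching the path your curl test used
        proxy_pass http://127.0.0.1:6002;
    }
}

With this in place the React app can simply call /api/works relative to the page's own origin, so the request always goes through the ELB and Nginx rather than 127.0.0.1.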
I am using an NGINX server as a reverse proxy. The NGINX server accepts a request from an external client (HTTP or HTTPS, it doesn't matter) and passes this request to a backend server. The backend server returns a URL to the client, which the client should use to make subsequent API calls. I want this returned URL to have the NGINX host and port number instead of the backend server's host and port number, so that my backend server details are never exposed. For example:
1) Client request:
http://nginx_server:8080
2) Nginx receives this and passes it to the backend, which runs with some functionality at
http://backend_server:8090
3) The backend server receives this request and returns another URL to the client: http://backend_server:8090/allok.
4) The client uses this URL to make subsequent API calls.
What I want is for the "backend_server:port" in the returned URL to be replaced by the nginx host and port from the initial request, for example:
http://nginx_server:8080/allok
However, the response goes back as
http://backend_server:8090/allok
my nginx.conf
http {
    server {
        listen 8080;            # client request port
        server_name localhost;

        location / {
            # Backend server port. The backend service and NGINX will always be
            # on the same machine.
            proxy_pass http://localhost:8090;

            # Not sure if this is correct. Doesn't seem to do what I want to achieve.
            proxy_redirect http://localhost:8090 http://localhost:8080;

            # proxy_set_header Host $host;
        }
    }
}
Thanks in advance
I was able to resolve it. I had to eliminate the proxy_redirect directive from the config.
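For reference, a minimal sketch of what the resulting block might look like (same ports as above assumed). With no proxy_redirect directive, nginx falls back to its default behaviour, which rewrites Location headers that start with the proxy_pass URL; if the backend also builds absolute URLs from the Host header, forwarding it (the line commented out in the question) can help too:

http {
    server {
        listen 8080;
        server_name localhost;

        location / {
            proxy_pass http://localhost:8090;
            # no proxy_redirect: the default behaviour rewrites matching Location headers
            proxy_set_header Host $host;   # optional; lets the backend see the nginx host
        }
    }
}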
I wish to set up nginx as an HTTPS reverse proxy to a local application, failing over to remote hosts in case the local application is down, e.g. during deployment. My problem is that I need the scheme (http or https) to depend on whether the upstream host is local or remote, but I cannot find a way to set it dynamically.
Consider the below configuration.
upstream backend {
    server localhost:8080;            # scheme should be http here
    server a.example.com:443 backup;  # scheme should be https here
    server b.example.com:443 backup;  # scheme should be https here
}

server {
    listen 443;
    ...
    ...

    location / {
        # How can I set the proxy_pass scheme to https when upstream is a remote host?
        proxy_pass http://backend;
    }
}
Is there a way of making the proxy_pass scheme depend on the chosen upstream? I looked into the nginx documentation and could not find any way of defining it dynamically. Am I missing something? Do I have to set up an intermediary server for localhost that handles https and set proxy_pass https://backend? That would be too bad.
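If the intermediary-server workaround mentioned at the end turns out to be the only option, a rough sketch might look like the following (port 8443 and the certificate paths are assumptions; the idea is simply that every member of the upstream then speaks https):

upstream backend {
    server localhost:8443;           # local intermediary, terminates HTTPS
    server a.example.com:443 backup;
    server b.example.com:443 backup;
}

server {
    # intermediary: accepts HTTPS locally and forwards plain HTTP to the local app
    listen 8443 ssl;
    server_name localhost;
    ssl_certificate     /etc/nginx/ssl/local.crt;
    ssl_certificate_key /etc/nginx/ssl/local.key;

    location / {
        proxy_pass http://localhost:8080;
    }
}

server {
    listen 443 ssl;
    ...

    location / {
        proxy_pass https://backend;   # a single scheme now works for every upstream member
    }
}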
I'm running uWSGI behind Nginx and have been using proxy_pass to get Nginx to hit uWSGI. Is there any benefit to switching to uwsgi_pass? If so, what is it?
uwsgi_pass uses the uwsgi protocol. proxy_pass uses normal HTTP to talk to the uWSGI server. The uWSGI docs claim that this protocol is better and faster and can benefit from all of uWSGI's special features.
Are there any real benefits? Yes. You can tell uWSGI what type of data you are sending and which uWSGI plugin should be invoked to generate the response. With HTTP (proxy_pass) you won't get that. You can find more on this in the uWSGI docs.
But even if there weren't any documented benefits of using the uwsgi protocol instead of HTTP for you, you should still use the uwsgi protocol if you can, because uwsgi is the main protocol of the uWSGI server and it simply fits better here.
If you want to use the uwsgi protocol, you must change the http-socket parameter in the uWSGI start script to socket.
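For example, a minimal pairing might look like this on the nginx side (the socket address 127.0.0.1:3031 is only an assumption; it has to match the uWSGI configuration):

location / {
    include uwsgi_params;        # standard parameter file shipped with nginx
    uwsgi_pass 127.0.0.1:3031;   # must match the "socket" option on the uWSGI side
}

On the uWSGI side, the start script or ini file would then use socket = 127.0.0.1:3031 instead of http-socket.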
This is the main idea: I want to use the NGINX or Apache web server as a TCP processor, so that it manages all the threads, connections, and client sockets. All packets received on a port, let's say port 9000, will be redirected to a program written in PHP or Python, and that program will process each request and store the data in a database. The big problem is that this program also needs to send data back to the client socket that is currently connected to the NGINX or Apache server. I've been told I should do something like this instead of creating my own TCP server, which is too difficult and very hard to maintain, since socket communication under heavy load can lead to memory faults or even crash the server. I have done it before, and in fact the server crashed.
Any ideas on how to achieve this?
Thanks.
Apache/nginx is a web server; it can serve static content to your customers and forward application service requests to other application servers.
I only know about Django, and here is a sample nginx configuration from Configuration for Django, Apache and Nginx:
location / {
    # proxy / requests to apache running django on port 8081
    proxy_pass http://127.0.0.1:8081/;
    proxy_redirect off;
}

location /media/ {
    # serve static media directly from nginx
    root /srv/anuva_project/www/;
    expires 30d;
    break;
}
Based on this configuration, nginx serves local static data for URLs under /media/* and forwards other requests to the Django server located at localhost port 8081.
I have the feeling that HAProxy is a tool better suited to your needs, which apparently have to do with TCP rather than HTTP. You should at least give it a try.