I need to use nginx as a proxy to another HTTP proxy, and it doesn't work because nginx doesn't send the host of the original URL, only the path.
If I perform the request with curl, it works and the dump is:
curl --proxy http://localhost:81 http://sample.com/some-path
http://sample.com/some-path
{ host: 'sample.com' }
If I perform the request through nginx with the following config, it doesn't work and the dump is (the domain in the path is missing):
upstream proxies {
    server localhost:81;
}
location / {
    proxy_set_header Host $host;
    proxy_pass http://proxies;
}
/some-path
{ host: 'sample.com' }
How can I make nginx pass the whole URL?
The solution is to add another proxy in between, for example DeleGate. Yes, nginx won't pass the host properly on its own, but DeleGate fixes that.
Your Browser or App -> (NGinx -> DeleGate) -> whatever other proxy or app...
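For illustration, the nginx side of that chain could look like the following sketch (the DeleGate port 8082 is an assumption; DeleGate is set up separately to forward to the real proxy on port 81):
upstream delegate_proxy {
    # DeleGate instance, which in turn forwards to the target proxy
    server localhost:8082;
}
location / {
    proxy_set_header Host $host;
    proxy_pass http://delegate_proxy;
}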
Related
I'm trying to configure my servers with private proxy access. The schema is:
example.com -> nginx -> https proxy -> proxy_pass to server with app
The app server accepts connections only from the proxy's IP.
I tried to find an answer, but everything I found doesn't work for me, because it looks like:
example.com -> nginx with dns or something -> proxy_pass to server with app
or like this: nginx proxy_pass with a socks5 proxy?
but that's not correct for my case.
I think it could work via socat for nginx.service, but I don't know how to set it up; a sketch follows below.
So, how can I set a proxy for proxy_pass?
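One way the socat idea could work (a sketch only; every hostname and port here is an assumption, and it presumes the proxy accepts HTTP CONNECT): run socat as a local tunnel and point proxy_pass at it.
# socat listens on 127.0.0.1:8081 and tunnels each connection
# through the proxy (via CONNECT) to the app server
socat TCP4-LISTEN:8081,fork,reuseaddr,bind=127.0.0.1 PROXY:proxy.example.com:app.internal:8080,proxyport=3128

# nginx then proxies to the local tunnel instead of the app directly
location / {
    proxy_set_header Host $host;
    proxy_pass http://127.0.0.1:8081;
}
To make this persistent, the socat command could be wrapped in its own systemd unit that nginx.service is ordered after.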
I'm using nginx as a web service proxy. I have a REST service as below, and I want to proxy it through my domain
https://www.example.com/myRestservice. The service has methods like this:
http://1.1.1.1:123/api/work/method1
http://1.1.1.1:123/api/work/method2
By the way, the service is published on two servers, as below in nginx.conf.
As a result, I want to access the methods of the service like "https://www.example.com/Restservice/api/work/method1".
When I try to use rewrite in nginx as below, I can access the service.
But then the POST method's request body is empty; I can see that in the service logs.
In my nginx.conf:
upstream Restservice {
    server 1.1.1.1:123;
    server 1.1.1.2:123;
}
server {
    listen 443 ssl;
    server_name www.example.com;

    location ~ ^/Restservice/ {
        add_header Access-Control-Allow-Origin *;
        rewrite ^/Restservice/(.*) /$1 break;
        proxy_pass http://Restservice;
        proxy_http_version 1.1;
    }
}
By the way, I also tried the location part like this; the result is the same.
location /Restservice {
    proxy_pass http://Restservice/;
}
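As an aside, prefix stripping works without a rewrite: when proxy_pass carries a URI part, nginx substitutes it for the matched location prefix. Note that the location prefix and the proxy_pass URI should then both end in a slash, otherwise a double slash ends up in the upstream URI. A minimal sketch:
location /Restservice/ {
    # /Restservice/api/work/method1 is forwarded as /api/work/method1
    proxy_pass http://Restservice/;
    proxy_http_version 1.1;
}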
Normally I can access the soap service with this config via the https link.
Is it about http redirection to https?
In the nginx access log:
status : 500
request: POST /Restservice/api/work/method1 HTTP/1.1
I found the reason: it was because of the encoding.
After choosing the encoding type 'UTF-8', I could see the request body.
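For reference, this can be checked by sending the POST with an explicit charset in the Content-Type header (the endpoint and body below are just placeholders):
curl -X POST "https://www.example.com/Restservice/api/work/method1" \
     -H "Content-Type: application/json; charset=UTF-8" \
     -d '{"sample":"data"}'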
Through Jelastic's dashboard, I created the following environment: I clicked "New environment", then I selected nodejs, and I added a docker image (of mailhog).
Now, I would like port 80 of my environment to serve the nodejs application. This is the default behaviour, so there's nothing to do there.
In addition, I would like port 8080 (or any port other than 80, like port 5000 for example) of my environment to serve mailhog, hosted on the docker image. To do that, I added the following lines to nginx-jelastic.conf (right after the first server block, which serves the nodejs app):
server {
    listen *:8080;
    listen [::]:8080;
    server_name _;

    location / {
        proxy_pass http://mailhog_upstream;
    }
}
where I have also defined mailhog_upstream like this:
upstream mailhog_upstream {
    server 10.102.8.215; ### DEFUPPROTO for common ###
    sticky path=/; keepalive 100;
}
If I now browse my environment's port 8080, then I see ... the nodejs app. If I try any port other than 80 or 8080, I see nothing. Setting another server_name doesn't help. I tried several things, but nothing seems to work. Why is that? What am I doing wrong here?
Then I tried to get rid of the above mailhog_upstream and instead write
server {
    listen *:5000;
    listen [::]:5000;
    server_name _;

    location / {
        proxy_pass http://10.102.8.215;
    }
}
Browsing the environment's port 5000 doesn't work either.
If I replace the IP of the nodejs app with that of my mailhog service, then mailhog runs on port 80. I don't understand how I can make the nodejs app run on port 80 and the mailhog service on port 5000 (or any other port than 80).
Could someone enlighten me please?
After all those failures, I tried another approach. Assume the path of my env is example.com/. What I tried above was to get mailhog to work upon calling example.com:5000, which I failed to do. So instead I tried to make mailhog available through a call to example.com/mailhog. In order to do that, I got rid of all my modifications above and completed the current server block in nginx-jelastic.conf with:
location /mailhog {
    proxy_pass http://10.102.8.96:8025/;
    add_header Set-Cookie "SRVGROUP=$group; path=/";
}
That works in the sense that if I now browse example.com/mailhog, then I get something on the page, but not exactly what I want: it's mailhog's page without any styling. Also, when I call mailhog's API through example.com/mailhog/api/v2/messages, I get a successful response with an empty body, when I should have received:
{"total":0,"count":0,"start":0,"items":[]}
What am I doing wrong this time?
Edit
To be more explicit, I put the following manifest that exhibits the second problem with the nginx location.
The full list of locations for your case is the following
(please pay attention to the URIs in the upstreams; they differ):
location /mailhog {
    proxy_pass http://172.25.2.128:8025/;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}
location /mailhog/api {
    proxy_pass http://172.25.2.128:8025/api;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}
location /css {
    proxy_pass http://172.25.2.128:8025;
}
location /js {
    proxy_pass http://172.25.2.128:8025;
}
location /images {
    proxy_pass http://172.25.2.128:8025;
}
That works for me with your application:
# curl 172.25.2.127/mailhog/api/v2/messages
{"total":0,"count":0,"start":0,"items":[]}
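The extra /css, /js and /images locations are there because the mailhog UI requests its assets by absolute path, so those requests bypass the /mailhog prefix; that is why the page appeared unstyled before. A quick way to check that an asset resolves through the proxy (the asset path here is hypothetical):
# curl -I 172.25.2.127/css/app.css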
The following ports are opened by default: 80, 8080, 8686, 8443, 4848, 4949, 7979.
Additional ports can be opened using:
- endpoints - maps a container's internal port to a random external one via the Jelastic Shared LB
- Public IP - provides direct access to all ports of your container
Read more in the following article: "Container configuration - Ports". This one may also be useful: "Public IP vs Shared Load Balancer".
I have an application running on localhost listening on port 8080
nginx is running as a reverse proxy, listening on port 80
So, a request coming to nginx on port 80 is sent to this application listening on localhost:8080, and the response from the application is sent back to the user.
Now, this application is incapable of reading header variables from the request header and can read only query parameters.
So I want nginx to pass header values as query parameters to this application listening on localhost:8080.
E.g. let us say in the request header there is a custom variable called 'userid'.
How do we pass this userid as &userid=value, appended to the URL, to the application listening on localhost:8080?
My current test file in sites-available (and sites-enabled) is:
server {
    location /test {
        proxy_pass http://localhost:8080;
    }
}
There is no need to do a rewrite or anything else. Simply pass the header parameters that you want as query parameters to the localhost application, as below, by appending them to the arguments.
If you have a custom header parameter like userid, then it would be $http_userid.
server {
    location /test {
        set $args $args&host=$http_host;
        proxy_pass http://localhost:8080;
    }
}
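For the userid header from the question, the same pattern would be (note that if the original request has no query string, this yields a leading &, which most applications tolerate):
server {
    location /test {
        # $http_userid holds the value of the incoming "userid" request header
        set $args $args&userid=$http_userid;
        proxy_pass http://localhost:8080;
    }
}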
If you have a request header called userid, it will be available within an Nginx variable called $http_userid.
You can alter the query parameters of the original request with a rewrite...break statement.
For example:
location /test {
    rewrite ^(.*)$ $1?userid=$http_userid break;
    proxy_pass http://localhost:8080;
}
See this document for details.
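One caveat: because the replacement contains a ?, the original query string is dropped. If it should be preserved, $args can be included in the rewrite (a sketch):
location /test {
    # keep the original query string and append userid
    rewrite ^(.*)$ $1?$args&userid=$http_userid break;
    proxy_pass http://localhost:8080;
}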
I configured nginx as a simple reverse proxy.
I'm just using a basic setting:
location / {
    proxy_pass http://foo.dnsalias.net;
    proxy_pass_header Set-Cookie;
    proxy_pass_header P3P;
}
The problem is that after some time (a few days) the site behind nginx becomes inaccessible. Indeed, nginx tries to call a bad IP (the site behind nginx is at my home behind my box, and I'm using a dyn-dns name because my IP is not fixed). The dyn-dns name is always valid (I can call my site directly), but for some obscure reason nginx gets stuck with the old address.
So, as said, nginx just gives me a 504 Gateway Time-out after some time. It looks like the error appears when my IP changes at home.
Here is a sample of the error log:
[error] ... upstream timed out (110: Connection timed out) while connecting to upstream, client: my.current.ip, server: myreverse.server.com, request: "GET /favicon.ico HTTP/1.1", upstream: "http://my.old.home.ip", host: "myreverse.server.com"
So do you know why nginx is using the IP instead of the domain name?
If the proxy_pass value doesn't contain variables, nginx will resolve domain names to IPs while loading the configuration and cache them until you restart/reload it. This is quite understandable from a performance point of view.
But, in case of a dynamic DNS record change, this may not be desired. So two options are available, depending on the license you possess.
Commercial version (Nginx+)
In this case, use an upstream block and specify which domain name needs to be resolved periodically using a specific resolver. The records' TTL can be overridden using the valid=time parameter. The resolve parameter of the server directive will force the domain name to be resolved periodically.
http {
    resolver X.X.X.X valid=5s;

    upstream dynamic {
        server foo.dnsalias.net resolve;
    }

    server {
        server_name www.example.com;

        location / {
            proxy_pass http://dynamic;
            ...
        }
    }
}
This feature was added in Nginx+ 1.5.12.
Community version (Nginx)
In that case, you will also need a custom resolver, as in the previous solution. But to work around the unavailability of the upstream solution, you need to use a variable in your proxy_pass directive. That way, nginx will use the resolver too, honoring the caching time specified with the valid parameter. For instance, you can use the domain name as a variable:
http {
    resolver X.X.X.X valid=5s;

    server {
        server_name www.example.com;
        set $dn "foo.dnsalias.net";

        location / {
            proxy_pass http://$dn;
            ...
        }
    }
}
Then, you will likely need to add a proxy_redirect directive to handle redirects.
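A sketch of what that could look like, under the same assumptions as above:
location / {
    proxy_pass http://$dn;
    # rewrite Location headers issued by the upstream back to
    # the host the client originally used
    proxy_redirect http://$dn/ /;
}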
Maybe check this out: http://forum.nginx.org/read.php?2,215830,215832#msg-215832
resolver 127.0.0.1;
set $backend "foo.example.com";
proxy_pass http://$backend;
In such a setup, the IP address of "foo.example.com" will be looked up
dynamically, and the result will be cached for 5 minutes.