I am running an Nginx Ingress Controller and an Nginx reverse proxy in my Kubernetes cluster.
I recently started seeing an issue that affects just one user: that user keeps getting HTTP 502 from our Nginx.
When I send the same requests from my machine, they return HTTP 200. I also logged into the console of my NGINX container and was able to execute the request that the redirect results in.
How else can I check the reason for HTTP 502? Are the responses from the address that I am redirected to visible anywhere?
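One way to see what the proxied address actually returned is to log nginx's $upstream_* variables. A minimal sketch for a plain nginx reverse proxy (the ingress controller generates its own log format, but the same variables can be added to it; the log paths and the "backend" upstream name are placeholders):

    # In the http {} block: a log line that records what the upstream answered.
    log_format upstream_debug '$remote_addr "$request" status=$status '
                              'upstream=$upstream_addr '
                              'upstream_status=$upstream_status '
                              'request_time=$request_time '
                              'upstream_response_time=$upstream_response_time';

    server {
        listen 80;
        access_log /var/log/nginx/upstream_debug.log upstream_debug;
        # Connection/read failures that nginx turns into a 502 are logged here.
        error_log /var/log/nginx/error.log warn;

        location / {
            proxy_pass http://backend;
        }
    }

The error log is usually the quickest answer for a 502: it records messages such as "connect() failed" or "upstream prematurely closed connection" together with the upstream address, which tells you whether the backend refused the connection, timed out, or sent a response nginx could not parse.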
Background:
I have my personal website running on a lighttpd server on my Raspberry Pi. I have that server's port (80) forwarded so it can be accessed publicly.
I'm working on a project for which I want a Node.js service that the site served by lighttpd can make requests to. I set up pm2 so the Node.js server is always running, and I have that port (5000) forwarded too. I've verified that this server is working via Postman and the browser.
Problem:
I'm receiving the following error when making requests:
has been blocked by CORS policy: The request client is not a secure context and the resource is in more-private address space private.
Of note: I have Access-Control-Allow-Private-Network: true in the response header and Access-Control-Request-Private-Network: true in the request header. The only other solution I've found that might fix this is getting an SSL cert for the lighttpd server and using HTTPS for it, but I'm struggling to set that up to see if it would work (a rough lighttpd TLS sketch follows the questions below).
Questions:
Would getting an SSL cert for lighttpd allow me to make requests to my pm2 server?
Is there a different solution?
How secure is this setup? I don't expect a lot of traffic...
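On the TLS question: a minimal lighttpd sketch, assuming lighttpd 1.4.56 or newer with mod_openssl available; the certificate path is a placeholder (e.g. a Let's Encrypt certificate combined with its key into one PEM file):

    # /etc/lighttpd/lighttpd.conf (excerpt)
    server.modules += ( "mod_openssl" )

    # Serving the page over HTTPS makes it a "secure context", which is
    # what the Private Network Access error is complaining about.
    $SERVER["socket"] == ":443" {
        ssl.engine  = "enable"
        ssl.pemfile = "/etc/lighttpd/certs/example.org.pem"   # placeholder path
    }

Keep in mind that once the page is served over https://, a plain http:// request to port 5000 will be blocked as mixed content, so the pm2-managed Node.js service would need HTTPS as well, or could be proxied through the same lighttpd host so that everything stays on one origin.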
My program sends HTTP requests to the https://auth.riotgames.com/api/v1/authorization server. As I understand it, Cloudflare sits in front of it, and it blocks my requests with a 403 status code.
The strange part is that if I run HTTP Debugger (https://www.httpdebugger.com/), the server responds fine, and this is independent of whether a proxy is used. It might have something to do with the certificates: I tried connecting with Fiddler installed on another machine (with HTTP Debugger running on mine), and when Fiddler decrypts the HTTPS traffic the server starts responding with 403, but when it does not decrypt, it responds fine.
With HTTP Debugger: (screenshot omitted)
With HTTP Debugger and Fiddler on another machine: (screenshot omitted)
If anything is unclear, please ask. I would be glad of any suggestions; this is very important to me.
In an environment where NGINX is acting as a reverse proxy, does NGINX forward the client's HTTP request or create a new request for the upstream server?
And if NGINX is also configured to perform authentication, then once the user is authenticated, how will NGINX and the upstream servers know on future requests that the user is authenticated?
NGINX does not pass the client's request through untouched; it creates a new HTTP request to the upstream server. By default it redefines two request headers (Host and Connection) and drops request headers with empty values. In the example on the NGINX blog, the requested URI is also placed in an X-Target header when the request is forwarded. Refer to the NGINX blog on nginx.com.
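For a concrete picture of both halves, here is a rough sketch of the auth_request pattern that blog post describes, assuming nginx was built with the auth_request module; the upstream addresses and the /validate endpoint are hypothetical:

    upstream app_backend {
        server 127.0.0.1:8080;
    }

    server {
        listen 80;

        location / {
            # Every request is first checked via an internal subrequest;
            # only a 2xx answer from the auth service lets it through.
            auth_request /_auth;

            # nginx builds a new request to the upstream: Host and
            # Connection are redefined and empty headers are dropped.
            proxy_pass http://app_backend;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }

        location = /_auth {
            internal;
            proxy_pass http://127.0.0.1:9000/validate;   # hypothetical auth service
            proxy_pass_request_body off;
            proxy_set_header Content-Length "";
            proxy_set_header X-Original-URI $request_uri;
        }
    }

As for later requests: nginx itself keeps no authentication state here. The client keeps presenting a session cookie or token, the auth subrequest validates it on every request, and the upstream either trusts headers set by nginx or validates the same credential itself.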
I've successfully managed to set up a reverse proxy which receives data via POST requests from clients and forwards them to a Node.js server for further processing and storage.
Now I would like the nginx reverse proxy to return a blank 200 OK response for all of these requests BEFORE forwarding them to the Node.js server, so that clients receive the response immediately without having to wait for the backend to finish processing.
If I use "return 202;" inside the location block, nginx does respond immediately, but it never forwards the request to the Node.js server.
Can this be achieved with nginx?
Any help would be much appreciated.
Thanks,
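One way to get the "answer first, process later" behaviour is the mirror module (nginx 1.13.4+); the names, ports and paths below are placeholders. A plain "return 202;" runs in nginx's rewrite phase and finalizes the request before the proxy machinery ever runs, which is why it never reached Node.js. In this sketch the immediate 202 comes from proxying to a tiny local vhost, while a mirrored copy of the request (body included) is sent to Node.js and its response is discarded:

    upstream node_backend {
        server 127.0.0.1:3000;
    }

    # Tiny local vhost whose only job is to acknowledge immediately.
    server {
        listen 127.0.0.1:8090;
        return 202;
    }

    server {
        listen 80;

        location /collect {
            mirror /_to_node;         # fire-and-forget copy of the request
            mirror_request_body on;   # include the POST body (the default)

            # Immediate answer for the client from the ack vhost above.
            proxy_pass http://127.0.0.1:8090;
        }

        location = /_to_node {
            internal;
            proxy_pass http://node_backend$request_uri;
        }
    }

One caveat: the mirrored subrequest still ties up an nginx connection until Node.js answers, so this hides the backend latency from clients rather than removing the work.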
I am using an Ubuntu machine with an Ubuntu guest OS. On the guest OS I ran my OpenDaylight controller, built the topologies with Mininet, and viewed them in the OpenDaylight GUI at localhost:8080. Next, I used the Postman REST API client extension in my Chrome browser to make a GET request to my ODL controller:
localhost:8080/restconf/operational/opendaylight-inventory:nodes/
I got the proper response in XML format. Now I have to pass my request through the NGINX proxy to 3scale and authenticate using the app_id and app_key parameters. The request should then be forwarded to the ODL controller so that I can get the proper response.
I have already downloaded the proxy config files from NGINX. What modifications must be made in these files? What should be the request I enter in the Postman Client to get the same response as before?
You should only need to change the location of the nginx_.lua file referenced in nginx_.conf.
If you want to change the port that Nginx listens on, you will also need to change the listen directive in the server block to your desired port, e.g.:
server {
    lua_code_cache off;
    listen 81;
    # ...
}
Also, you will need to ensure that there is an upstream block for your backend, e.g.:
upstream backend_localhost {
    server localhost:8080 max_fails=5 fail_timeout=30;
}
But if you have entered this in the proxy configuration wizard, it should already be there.
That should be all that you need to change/check.
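For orientation, the generated virtual server usually ends up looking roughly like the trimmed sketch below; the Lua file path and upstream name are placeholders for whatever your downloaded config actually contains, and the exact directives can differ between versions of the wizard output:

    upstream backend_localhost {
        server localhost:8080 max_fails=5 fail_timeout=30;
    }

    server {
        lua_code_cache off;
        listen 81;

        location / {
            # The generated Lua code checks app_id/app_key against 3scale
            # before the request is allowed through.
            access_by_lua_file /opt/openresty/nginx/conf/nginx_XXXX.lua;   # placeholder path

            # Once authorized, the request is proxied to the ODL controller.
            proxy_pass http://backend_localhost;
            proxy_set_header Host $host;
        }
    }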
The request in Postman should target Nginx instead of the ODL Controller and pass in the application credentials, e.g. if Nginx is running on port 81:
localhost:81/restconf/operational/opendaylight-inventory:nodes/?app_id=<YOUR_APP_ID>&app_key=<YOUR_APP_KEY>
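If you want to check the same call outside Postman, an equivalent curl request (with the placeholder credentials filled in) would be:

    curl "http://localhost:81/restconf/operational/opendaylight-inventory:nodes/?app_id=<YOUR_APP_ID>&app_key=<YOUR_APP_KEY>"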
Hopefully that clears up any doubts. However, you can always email us at support@3scale.net if you have any further questions, or add a comment here.