I have an API gateway: a request comes in from a browser, the gateway calls a microservice, and the microservice then makes a request back to the gateway.
All of this communication takes place within a single connection, and the same Authorization header is used throughout.
Schematically, the chain of calls looks like this:
Browser -> API Gateway -> Microservice
                               |
                               +-> API Gateway (from the microservice)
Nginx gives me 400 Bad Request, with the following error in the logs:
2018/06/20 23:05:15 [info] 22615#22615: *35468 client sent duplicate header line: "Authorization: Access-Token: 123213213213213", previous value: "Authorization: Access-Token: 123213213213213" while reading client request headers
I gather there is some sort of uniqueness check on the headers... But does it really take into account the follow-up request that goes back from the microservice to the API Gateway within the first nginx session with the API Gateway?
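For reference, the error itself just means that the request nginx received contained the Authorization header line twice; it can be reproduced directly with curl (the gateway host and token below are placeholders):

curl -v http://gateway.example/api \
  -H 'Authorization: Access-Token: 123213213213213' \
  -H 'Authorization: Access-Token: 123213213213213'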
Related
http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_request_buffering
I cannot understand this part: "When buffering is disabled, the request body is sent to the proxied server immediately as it is received. In this case, the request cannot be passed to the next server if nginx already started sending the request body."
What does "the request cannot be passed to the next server if nginx already started sending the request body" mean?
If you have a service with multiple upstream servers, perhaps for load balancing or resilience, and one of the upstream servers fails while nginx is sending the request body to it, nginx may try to use another server.
But it can only try another server if it either has a complete copy of the original request body (i.e. request buffering is enabled) or the client has not started sending the request body yet.
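As a minimal sketch of that interaction (hostnames and ports are placeholders):

upstream backends {
    server backend-a.example:8080;
    server backend-b.example:8080;   # tried if backend-a fails
}

server {
    listen 80;

    location / {
        proxy_request_buffering off;        # stream the request body to the upstream as it arrives
        proxy_next_upstream error timeout;  # conditions under which nginx retries the next server
        proxy_pass http://backends;
    }
}

With proxy_request_buffering off, nginx keeps no complete copy of the body, so once it has begun streaming it to backend-a it cannot replay it to backend-b; the retry can only happen if the failure occurs before any of the body has been sent.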
I have the following configuration:
A Kubernetes cluster based on k3s, deployed on a private cloud.
A service runs on this cluster behind an nginx-based ingress. The service's task is to read a file sent by the client and upload it to S3 storage.
I have a client (running in a Docker container) that sends HTTPS requests to upload files to this service.
For each of those requests, the server is expected to take about 2 to 3 minutes to respond.
My problem is that some responses issued by our service never reach the client (the client times out after waiting 15 minutes), even though the client successfully sent its HTTP request, and the service successfully processed it and responded. In particular, I can see from the nginx logs that a 200 response was returned by our service, yet the client does not receive it.
Here is an example of nginx log from which I deduce our service responded 200:
10.42.0.0 - [10.42.0.0] - - [10/Oct/2019:09:55:12 +0000] "POST /api/upload/ot_35121G3118C_1.las HTTP/1.1" 200 0 "-" "reqwest/0.9.20" 440258210 91.771 [file-upload-service-8000] [] 10.42.0.6:8000 0 15.368 200 583d70db408b6be596f5012e7893a3c3
For example, I let the client perform requests continuously for 24 hours (waiting for the server to respond before issuing a new request), and about one or two requests per hour hit this problem while the other ~18 behave as expected.
Because nginx tells me a 200 response was returned by our service, it feels like the response is lost somewhere between the nginx ingress and the client. Any idea what causes this problem? Is there a way for me to debug it?
EDIT:
To clarify, the exact ingress controller I'm using is the nginx-ingress ingress controller, deployed with Helm.
Also, the failure rate is completely random. The tendency is 1 or 2 per hour, but it can sometimes be more or less. In addition, it does not seem to be correlated with the size of the uploaded file or with the number of requests that have succeeded so far.
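One thing I plan to rule out is a timeout or size limit somewhere on the ingress path. Assuming this is the community ingress-nginx controller, the relevant proxy timeouts and the body-size limit can be raised per Ingress with annotations; the hostname and values below are placeholders, not my actual resource:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: file-upload-service
  annotations:
    nginx.ingress.kubernetes.io/proxy-read-timeout: "900"   # allow slow (2-3 min) upstream responses
    nginx.ingress.kubernetes.io/proxy-send-timeout: "900"   # allow slow request bodies
    nginx.ingress.kubernetes.io/proxy-body-size: "0"        # no limit on uploaded file size
spec:
  rules:
    - host: upload.example.com
      http:
        paths:
          - path: /
            backend:
              serviceName: file-upload-service
              servicePort: 8000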
So we have an RPC endpoint that queries an application which is set up behind nginx (as a reverse proxy).
Internet sending POST JSON RPC -> Nginx at :443 ... proxies to -> Web Application at :8080
The application accepts jsonrpc POST requests coming from the internet, e.g.
{"jsonrpc": "2.0", "method": "subtract", "params": {"subtrahend": 23, "minuend": 42}, "id": 3}
Nginx is used to terminate SSL, do basic load-balancing and add required headers.
The vast majority of requests coming to the RPC endpoint from the internet are identical and the response changes relatively rarely, so we'd like to use some kind of cache to lower the load on the application.
Is it possible to configure nginx to read the body of a JSON-RPC POST request, extract the value of "method", and route the request based on that value either to the application or to the caching service?
Internet sending POST JSON RPC -> Nginx at :443 reads "method" from POST body ...
-> if method == "getCounter" ... proxies to -> Caching service
-> if method != "getCounter" ... proxies to -> Application at :8080
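Stock nginx directives cannot branch on the request body, but the lua-nginx-module (OpenResty) can read it before proxying. A minimal sketch, assuming OpenResty and placeholder upstream addresses and certificate paths (the naive string match stands in for real JSON parsing):

upstream rpc_app   { server 127.0.0.1:8080; }
upstream rpc_cache { server 127.0.0.1:9090; }

server {
    listen 443 ssl;
    ssl_certificate     /etc/nginx/tls/rpc.crt;   # placeholder paths
    ssl_certificate_key /etc/nginx/tls/rpc.key;

    location / {
        set $rpc_backend rpc_app;            # default: the application

        access_by_lua_block {
            ngx.req.read_body()
            local body = ngx.req.get_body_data()
            -- crude match; small JSON-RPC bodies stay in memory, so this is usually enough
            if body and body:find('"method"%s*:%s*"getCounter"') then
                ngx.var.rpc_backend = "rpc_cache"
            end
        }

        proxy_pass http://$rpc_backend;
    }
}

The njs module can do the same job if you'd rather avoid Lua, and either way the caching service has to answer the getCounter calls itself or forward misses to the application.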
We have a load balancer on Amazon which balances 4 servers.
When sending a specific HTTP request to the load balancer I get HTTP error code 400.
But when I send the same request to each one of the servers directly I get HTTP 200 OK.
Other requests are working fine when using the balancer.
Any Ideas?
Thanks.
Don't know if this will help you, but I've had a similar problem. I was using JMeter, and when accessing my instance through the AWS load balancer I would always get HTTP/1.1 400 BAD_REQUEST.
After a lot of debugging I found out that I was sending an empty header (no name and no value) because I had an empty row in the HTTP Header Manager in JMeter. So, I presume, AWS ELB does some header checking and returns HTTP 400, even though I had no problems sending the same request to my instances directly. I don't know if this will help you, but you should double-check your headers for a silly mistake like this one :D
I had a similar problem to this, and it was caused by ALB not accepting HTTP methods that are in lower case.
GET http://myhost.com/foo -> HTTP 200
get http://myhost.com/foo -> HTTP 400
In my case it was a headers issue.
ELB-HealthChecker was sending health-check requests to my web server and nginx replied with 400.
The issue was that ELB-HealthChecker sends no headers with the request.
Depending on the configuration of your web server, this might return a 400 error code.
To check whether this is the case, replicate the "no headers" request with curl:
curl -I -H 'User-Agent:' -H 'Accept:' -H 'Host:' http://yourservice/health/
The solution is to configure an nginx endpoint that returns 200 regardless of the presence of the request headers:
location = /health/ { return 200; }
In my case my target group for port 443 was using the HTTP protocol instead of HTTPS, and I was getting 'Client using HTTP to connect to HTTPS server'.
So I'm trying to implement the following scenario:
An application is protected by Basic Authentication. Let's say it is hosted on app.com
An HTTP proxy, in front of the application, requires authentication as well. It is hosted on proxy.com
The user must therefore provide credentials for both the proxy and the application in the same request, so he has two different username/password pairs: one to authenticate himself against the application, and another to authenticate himself against the proxy.
After reading the specs, I'm not really sure how I should implement this. What I'm thinking of doing is:
The user makes an HTTP request to the proxy without any sort of authentication.
The proxy answers 407 Proxy Authentication Required and returns a Proxy-Authenticate header of the form Proxy-Authenticate: Basic realm="proxy.com". Question: is this Proxy-Authenticate header correctly set?
The client then retries the request with a Proxy-Authorization header whose value is the Basic scheme followed by the Base64 encoding of the proxy username:password.
This time the proxy authenticates the request, but the application then answers with a 401 Unauthorized response. The user was authenticated by the proxy, but not by the application. The application adds a WWW-Authenticate header to the response, such as WWW-Authenticate: Basic realm="app.com". Question: is this header value correct?
The client retries the request again with both the Proxy-Authorization header and an Authorization header whose value is the Base64 encoding of the app's username:password.
At this point, the proxy successfully authenticates the request and forwards it to the application, which authenticates the user as well, and the client finally gets a response back, as sketched below.
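Concretely, the exchange I have in mind would look roughly like this (paths, credentials and the Base64 values are made up):

GET http://app.com/ HTTP/1.1
Host: app.com

HTTP/1.1 407 Proxy Authentication Required
Proxy-Authenticate: Basic realm="proxy.com"

GET http://app.com/ HTTP/1.1
Host: app.com
Proxy-Authorization: Basic cHJveHl1c2VyOnByb3h5cGFzcw==

HTTP/1.1 401 Unauthorized
WWW-Authenticate: Basic realm="app.com"

GET http://app.com/ HTTP/1.1
Host: app.com
Proxy-Authorization: Basic cHJveHl1c2VyOnByb3h5cGFzcw==
Authorization: Basic YXBwdXNlcjphcHBwYXNz

HTTP/1.1 200 OK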
Is the whole workflow correct?
Yes, that looks like a valid workflow for the situation you described, and those Authenticate headers seem to be in the correct format.
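As a quick end-to-end test of the double authentication, curl can supply both sets of credentials at once (proxy host, port and credentials here are placeholders):

curl -v -x http://proxy.com:3128 \
     --proxy-user proxyuser:proxypass \
     -u appuser:apppass \
     http://app.com/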
It's interesting to note that it's possible, albeit unlikely, for a given connection to involve multiple proxies chained together, each of which can itself require authentication. In that case, the client side of each intermediate proxy would itself get back a 407 Proxy Authentication Required message and repeat the request with a Proxy-Authorization header. The Proxy-Authenticate and Proxy-Authorization headers are single-hop headers that do not get passed from one server to the next, whereas WWW-Authenticate and Authorization are end-to-end headers, considered to be from the client to the final server and passed through verbatim by the intermediaries.
Since the Basic scheme sends the password in the clear (base64 is a reversible encoding) it is most commonly used over SSL. This scenario is implemented in a different fashion, because it is desirable to prevent the proxy from seeing the password sent to the final server:
the client opens an SSL channel to the proxy to initiate the request, but instead of submitting a regular HTTP request it would submit a special CONNECT request (still with a Proxy-Authorization header) to open a TCP tunnel to the remote server.
The client then proceeds to create another SSL channel nested inside the first, over which it transfers the final HTTP message including the Authorization header.
In this scenario the proxy only knows the host and port the client connected to, not what was transmitted or received over the inner SSL channel. Further, the use of nested channels allows the client to "see" the SSL certificates of both the proxy and the server, allowing the identity of both to be authenticated.
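In that tunneled variant, the only part of the exchange the proxy can actually read is the CONNECT request itself, something like this (the token is the Base64 of a made-up proxyuser:proxypass pair):

CONNECT app.com:443 HTTP/1.1
Host: app.com:443
Proxy-Authorization: Basic cHJveHl1c2VyOnByb3h5cGFzcw==

HTTP/1.1 200 Connection Established

Everything that follows, including the Authorization header for app.com, travels inside the inner SSL channel.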