This question relates only to the Cloudflare proxy.
Trials:
If I try to establish a connection to mydomain.com:8443 through the browser, it works.
But if I try the same thing with my own TCP program, the connection is dropped before it ever reaches the actual server.
Note that the same TCP program works if I turn off the orange cloud (proxy).
Errors:
So I used Wireshark to see what happens, and it turns out Cloudflare blocks the connection with a 400 Bad Request.
Thoughts/Questions:
Are there any settings in Cloudflare I can fiddle with to forward the traffic to the server even if it is a "bad request" (while keeping the orange-clouded proxy on)? Or am I forced to rewrite the program to use a WebSocket instead?
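If rewriting turns out to be the only option, here is a minimal sketch of what the WebSocket route could look like, assuming a Python client and the third-party websockets package (the wss://mydomain.com:8443 endpoint and the message are just placeholders):

    # Minimal WebSocket client sketch -- assumes `pip install websockets`
    # and that the origin server speaks WebSocket on port 8443.
    import asyncio
    import websockets

    async def main():
        # The Upgrade handshake is ordinary HTTP, so the proxy can parse and forward it,
        # unlike a raw TCP stream that gets rejected as a malformed HTTP request.
        async with websockets.connect("wss://mydomain.com:8443") as ws:
            await ws.send("hello from the client")
            reply = await ws.recv()
            print(reply)

    asyncio.run(main())

Cloudflare proxies WebSocket upgrades on its supported HTTPS ports (8443 among them), so the handshake should reach the origin where the raw TCP program does not.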
Related
We are seeing 499 errors when we close the browser tab before receiving the response to the request. We are using nginx in k8s.
I have tried setting the "proxy_ignore_client_abort: on" property in the ingress configuration, but we still get the error even with that property configured. Please suggest a way to fix this issue.
Firstly, we should know that nginx throws a 499 when the client actively disconnects, so you may not need to pay much attention to it if everything else is good.
Nginx acts as the server to the user and as the client to the backend server, like this:
user -> nginx -> server (tomcat).
In my case, I found that a server like Tomcat will abort connections when it cannot handle any more requests in its accept queue (or is too slow to respond).
In TCP, the real server (e.g. Tomcat) maintains two lists: the first is the SYN list, and the second is the accept queue. Let me elaborate:
1. The client first sends a SYN to the server.
2. The server puts it into the SYN list and returns SYN+ACK.
3. The client sends the ACK back to the server.
4. Finally, the server establishes the connection, removing the entry from the SYN list and putting it into the accept queue.
In your case, if you close the tab before step 2, I think you don't need to do anything at all.
If you close the tab before step 4, you can refactor the interface of your server to be asynchronous, which greatly improves its response speed.
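To make the accept-queue idea concrete, here is a rough Python sketch (port, backlog, and the artificial delay are all illustrative): the listen() backlog bounds how many completed handshakes the kernel will park for the application, and a handler that is too slow to accept() and respond is exactly the situation where waiting clients give up and the proxy logs a 499.

    # Rough illustration of the accept queue: the kernel completes the TCP handshake
    # and parks connections in a queue of at most `backlog` entries until accept() is called.
    import socket
    import time

    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("0.0.0.0", 8080))
    srv.listen(5)                   # backlog: rough upper bound on the accept queue

    while True:
        conn, addr = srv.accept()   # if this loop is slow, the queue fills up
        time.sleep(2)               # simulate a slow, synchronous handler
        conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok")
        conn.close()

Clients that hang up while still waiting in (or before reaching) that queue are the ones nginx reports as 499.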
I found the following documentation on the Nginx website itself: https://www.nginx.com/blog/ip-transparency-direct-server-return-nginx-plus-transparent-proxy/
Question:
The above point is not correct, right? Since HTTP is a synchronous protocol, after a client sends a request over an established TCP connection with the server (here the Nginx reverse proxy), the client expects a response on that TCP connection. If that is the case, the Nginx server cannot close the connection just after receiving the request, correct? Shouldn't the Nginx server keep the connection open until it gets a response from the upstream server connection and then relay that data back over the same client connection?
I believe the way that paragraph is phrased is inaccurate.
The NGINX blog post mentioned in the question is referencing the behavior of UDP in the context of Direct Server Return (DSR). It is not part of their official documentation. I suspect that the author didn't do a good job of communicating how a conventional layer 7 reverse proxy connection works because they were focusing on explaining how DSR works.
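For comparison, this is roughly what a conventional layer-7 reverse proxy does; a toy sketch in Python (addresses are illustrative, and real proxies stream data rather than reading whole messages in a single recv()):

    # Toy layer-7 reverse proxy sketch: the client socket is held open until the
    # upstream answers, then the upstream's response is relayed back on that same socket.
    import socket

    UPSTREAM = ("127.0.0.1", 8080)   # illustrative upstream address

    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("0.0.0.0", 9090))
    srv.listen(16)

    while True:
        client, _ = srv.accept()
        request = client.recv(65536)              # read the client's request
        up = socket.create_connection(UPSTREAM)   # open a connection to the upstream
        up.sendall(request)                       # forward the request upstream
        response = up.recv(65536)                 # wait for the upstream response...
        client.sendall(response)                  # ...and relay it on the same client connection
        up.close()
        client.close()

The key point is that the client connection is only closed (or reused) after the upstream response has been relayed, not right after the request is received.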
I'm trying to diagnose a web service that sits behind some load balancers and proxies. Under load, one of the servers along the way starts to return HTTP 504 errors, which indicates a gateway timeout. With that background out of the way, here is my question:
When a proxy makes a request to the destination server, and the destination server receives the request but doesn't respond in time (thus exceeding the timeout), resulting in a 504, what happens when the destination server does eventually respond? Does it somehow know that the requester is no longer interested in a response? Does it happily send a response with no idea that the gateway already sent an HTTP error response back to the client? Any insight would be much appreciated.
It's implementation-dependent, but any proxy that conforms to RFC 2616 section 8.1.2.1 should include Connection: close on the 504 and close the connection back to the client so it can no longer be associated with anything coming back from the defunct server connection, which should also be closed. Under load there is the potential for race conditions in this scenario so you could be looking at a bug in your proxy.
If the client then wants to make further requests it'll create a new connection to the proxy which will result in a new connection to the backend.
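A toy sketch of that timeout path (not any particular proxy's implementation; the 5-second timeout is illustrative): on an upstream timeout the proxy answers the client with a 504 plus Connection: close and tears down both sockets, so a late upstream response lands on a dead connection (typically a broken pipe or RST on the backend side).

    # Toy sketch of the 504 path: on upstream timeout, answer the client with
    # 504 + Connection: close and tear down both sockets; a late upstream response
    # then hits a closed connection.
    import socket

    def proxy_one_request(client, upstream_addr, timeout=5.0):
        request = client.recv(65536)
        up = socket.create_connection(upstream_addr)
        up.settimeout(timeout)
        up.sendall(request)
        try:
            response = up.recv(65536)
            client.sendall(response)
        except socket.timeout:
            client.sendall(
                b"HTTP/1.1 504 Gateway Timeout\r\n"
                b"Connection: close\r\n"
                b"Content-Length: 0\r\n\r\n"
            )
        finally:
            up.close()       # the defunct upstream connection is closed as well,
            client.close()   # so neither side can be re-associated with a late reply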
I am running a backend server with gunicorn behind nginx 1.6.2 on Ubuntu 12.04.
Recently I noticed a lot of 408s in the nginx logs for upload (POST) requests, and by changing the various timeouts in the nginx config I worked out that they were caused by client_body_timeout.
Taking a tcpdump on the server side, it looked like the client does not send anything after the initial SYN and SYN-ACK packets; after the client_body_timeout expires, the server tries to close the connection by sending FIN-ACK, but the client never ACKs and the server goes into retransmission.
Is there anything I am missing, any HTTP header that needs to be added, or any TCP parameter that needs to be configured?
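For what it's worth, the server-side symptom can be reproduced deliberately with a sketch like this (assuming nginx on localhost, a hypothetical /upload location, and the default 60 s client_body_timeout): send the POST request line and headers, then never send the promised body.

    # Sends a POST's request line and headers, then stalls without sending the body.
    # After client_body_timeout (60 s by default) nginx should log a 408 for this request.
    import socket
    import time

    s = socket.create_connection(("127.0.0.1", 80))
    s.sendall(
        b"POST /upload HTTP/1.1\r\n"
        b"Host: localhost\r\n"
        b"Content-Length: 100000\r\n"       # promise a body...
        b"Content-Type: application/octet-stream\r\n"
        b"\r\n"
    )                                       # ...but never send it
    time.sleep(120)                         # outlive client_body_timeout
    print(s.recv(4096))                     # typically a 408 response (or a closed connection)
    s.close()

This only exercises the timeout itself; it says nothing about why the body never arrives in the first place.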
I found the issue.
Took a tcpdump on the client side and found that only small TCP segments were reaching the client.
Reduced the MSS to 1200 and it worked for me :). I don't know if this is the correct approach.
I'm having a problem with my app in a certain situation.
We have a Java server with an embedded Jetty web server, and an AIR app on the client side.
It works properly except in one situation, at a certain customer's site.
They have a private network that is not administered by them (and has little chance of being changed on request). So, the only ports they allow are 80 and 443.
The communications between the server and the client are through websockets and http.
The "online" check is made through http and, then, we use websockets to notify the client in order to start communication between them.
The thing is, in this situation, the "online" state works properly and any communication send by the client (forced), as it goes through http, gets to the server but, when the server communicates with the client, using websockets, it doesn't work.
We are using Wireshark to check the communications: on a working setup, when the client app starts, a websocket shows up in Wireshark on the server side (registering the client on the server). Here that doesn't happen, and the websockets that are used only from server to client don't show up either.
What could the problem be? Port 80? (The same happens with 443 on that network.)
Could it be a proxy/firewall blocking ws:// messages?
I've read somewhere that wss:// (encrypted websockets) would work?
Thanks for your help.
Edit: so, I tried with HTTPS and wss communication and the same thing happens: no websocket is established between the client and the server (registering the client on the server).
This situation happens with HTTP on the customer network. On my test network, it works with http/ws but not with https/wss.
There are many firewalls and gateways out "in the wild" that do not understand the whole WebSocket HTTP/1.1 GET -> UPGRADE -> WebSocket mechanism.
There are several broken firewall implementations that will attempt to interpret the WebSocket framing as improper content for HTTP/1.1 (which is a bad reading of the HTTP/1.1 spec) and start to muck with it.
The types of firewalls that inspect/filter/analyze the request/response contents are the ones that seem most susceptible.
I would check that the hardware (or software) that they are using to firewall their network is both compliant and upgraded to support WebSocket RFC-6455.
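For reference, the opening handshake those middleboxes need to pass through untouched is just an HTTP/1.1 GET with Upgrade headers; here is a rough Python sketch of what it looks like on the wire (host, path, and port are illustrative):

    # Rough sketch of the RFC 6455 opening handshake as it appears on the wire.
    # A firewall that mangles or strips these headers (or the framed data that
    # follows the 101 response) will break the WebSocket even though plain HTTP works.
    import base64
    import os
    import socket

    key = base64.b64encode(os.urandom(16)).decode()   # random Sec-WebSocket-Key

    s = socket.create_connection(("example.com", 80)) # illustrative host/port
    s.sendall(
        (
            "GET /ws HTTP/1.1\r\n"
            "Host: example.com\r\n"
            "Upgrade: websocket\r\n"
            "Connection: Upgrade\r\n"
            f"Sec-WebSocket-Key: {key}\r\n"
            "Sec-WebSocket-Version: 13\r\n"
            "\r\n"
        ).encode()
    )
    print(s.recv(4096))   # a compliant path returns "HTTP/1.1 101 Switching Protocols"
    s.close()

wss:// wraps this same exchange in TLS, which is often enough to get past middleboxes that only inspect clear-text HTTP, although per the edit above it did not help on this particular network.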