Is it possible to create a full-duplex, persistent HTTP connection through port 80 with nginx, which proxies requests to other internal servers? This is needed to implement bidirectional binary data streaming between desktop applications through port 80 of the HTTP server.
You probably need some kind of HTTP tunnelling, or to implement XMPP over HTTP in your application.
I have an HTTP service running on a server that is to be used by my Android application. I am thinking about various ways for clients to send data to the server securely. One common way is to use the HTTPS protocol and have a load balancer or a proxy that does the SSL termination.
Instead, I am thinking of using WireGuard as a secure medium for communication. I would install a WireGuard client as part of my Android application and send all the traffic through this WireGuard tunnel to the server, which is served from an HTTP endpoint.
Which of the two approaches is better in terms of security and speed?
I have a requirement where I need to forward all requests from different sources to another network over gRPC.
Request Server <-> gRPC Client <-> Internet <-> gRPC Server <-> Resource Server.
The request server and the gRPC client are on the same network.
The resource server and the gRPC server are on the same network.
How do I forward the request server's requests to the port that sends data to the gRPC server?
My gRPC server and client are written in Java, so I am using the grpc-java interface.
It sounds like you want a grpc-java-based proxy. "Grpc Client" in your diagram could be any HTTP/2 proxy. But you could use grpc-java to implement it.
I made an example generic proxy a while back. It does not need any information about the methods it is proxying: you basically create a new outbound RPC for each inbound RPC and plug the inputs of one into the outputs of the other, and vice versa.
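Roughly, the piping looks like the sketch below. This is a simplified, hypothetical version that assumes you have already built a Channel pointing at the gRPC server on the resource-server network; a complete proxy would typically also be registered through ServerBuilder.fallbackHandlerRegistry with identity (byte) marshallers so it needs no compiled knowledge of the proxied methods, and it would handle flow control and cancellation more carefully.

```java
import io.grpc.*;

// Sketch of a generic grpc-java pass-through handler: every inbound server call
// is mirrored by an outbound client call on the backend channel, and messages
// are piped in both directions.
public final class ProxyHandler<ReqT, RespT> implements ServerCallHandler<ReqT, RespT> {

  private final Channel backend;  // channel to the gRPC server on the other network

  public ProxyHandler(Channel backend) {
    this.backend = backend;
  }

  @Override
  public ServerCall.Listener<ReqT> startCall(ServerCall<ReqT, RespT> serverCall, Metadata headers) {
    // Open an outbound RPC for the same method on the backend.
    ClientCall<ReqT, RespT> clientCall =
        backend.newCall(serverCall.getMethodDescriptor(), CallOptions.DEFAULT);

    // Backend -> original caller.
    clientCall.start(new ClientCall.Listener<RespT>() {
      @Override public void onHeaders(Metadata responseHeaders) {
        serverCall.sendHeaders(responseHeaders);
      }
      @Override public void onMessage(RespT message) {
        serverCall.sendMessage(message);
        clientCall.request(1);          // ask for the next response message
      }
      @Override public void onClose(Status status, Metadata trailers) {
        serverCall.close(status, trailers);
      }
    }, headers);
    clientCall.request(1);
    serverCall.request(1);

    // Original caller -> backend.
    return new ServerCall.Listener<ReqT>() {
      @Override public void onMessage(ReqT message) {
        clientCall.sendMessage(message);
        serverCall.request(1);          // ask for the next request message
      }
      @Override public void onHalfClose() {
        clientCall.halfClose();
      }
      @Override public void onCancel() {
        clientCall.cancel("client cancelled", null);
      }
    };
  }
}
```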
I'm not clear on why the handshake for WebSocket is HTTP. Wiki says "The handshake resembles HTTP so that servers can handle HTTP connections as well as WebSocket connections on the same port." What is the benefit of this? Once you start communicating over WebSocket you are using port 80 anyway... so why can't the initial handshake be in the WebSocket format?
Also, how do you have both WebSocket and HTTP servers listening on port 80? Or is it typically the same application functioning as HTTP and WebSocket servers?
Thanks y'all :)
WebSockets are designed to work almost flawlessly with existing web infrastructure. That is the reason why WS connections start as HTTP and then switch to a persistent binary connection.
This way deployment is simplified: you don't need to modify your router's port forwarding or the server's listen ports... Also, because it starts as HTTP, it can be load balanced in the same way as a normal HTTP request, firewalls are more inclined to let the connection through, etc. Last but not least, the HTTP handshake also carries cookies, which is great for integrating with the rest of the app in the same way AJAX does.
Both traditional HTTP request-response and WS can operate on the same port. Basically, the WS client sends an HTTP request asking for "Upgrade: websocket"; if the server accepts WS connections, it replies with an HTTP response indicating "101 Switching Protocols", and from that point the connection remains open and both ends treat it as a binary connection.
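For illustration, here is a rough sketch in Java of what the client side of that handshake looks like on the wire, assuming a hypothetical WebSocket endpoint at example.com, port 80, path /chat. After the 101 response, the very same TCP socket stops carrying HTTP and starts carrying WebSocket frames.

```java
import java.io.*;
import java.net.Socket;
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import java.util.Base64;

// Minimal sketch of the WebSocket opening handshake over port 80.
public class HandshakeSketch {
  public static void main(String[] args) throws IOException {
    // The client key is 16 random bytes, base64-encoded.
    byte[] nonce = new byte[16];
    new SecureRandom().nextBytes(nonce);
    String key = Base64.getEncoder().encodeToString(nonce);

    try (Socket socket = new Socket("example.com", 80)) {
      OutputStream out = socket.getOutputStream();
      String request =
          "GET /chat HTTP/1.1\r\n"
        + "Host: example.com\r\n"
        + "Upgrade: websocket\r\n"            // ask the HTTP server to switch protocols
        + "Connection: Upgrade\r\n"
        + "Sec-WebSocket-Key: " + key + "\r\n"
        + "Sec-WebSocket-Version: 13\r\n"
        + "\r\n";
      out.write(request.getBytes(StandardCharsets.US_ASCII));
      out.flush();

      // Read the HTTP status line; "HTTP/1.1 101 Switching Protocols" means the
      // server accepted the upgrade. (A real client would also verify the
      // Sec-WebSocket-Accept header.)
      BufferedReader in = new BufferedReader(
          new InputStreamReader(socket.getInputStream(), StandardCharsets.US_ASCII));
      System.out.println(in.readLine());
      // From here on, both ends exchange binary WebSocket frames on this socket.
    }
  }
}
```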
WebSocket is designed in such a way that its servers can share a port with HTTP servers, by having its handshake be a valid HTTP Upgrade request.
I have a question about this design philosophy.
After all, the WebSocket protocol is an independent TCP-based protocol.
Why would we need this HTTP handshake (Upgrade request) and protocol switching? Why can't we directly (and independently) follow a WebSocket-like protocol instead?
To quote from the IETF RFC 6455 WebSocket spec:
The WebSocket Protocol attempts to address the goals of existing
bidirectional HTTP technologies in the context of the existing HTTP
infrastructure; as such, it is designed to work over HTTP ports 80
and 443 as well as to support HTTP proxies and intermediaries, even
if this implies some complexity specific to the current environment.
However, the design does not limit WebSocket to HTTP, and future
implementations could use a simpler handshake over a dedicated port
without reinventing the entire protocol.
In other words, there is a vast infrastructure for HTTP and HTTPS that already exists (proxies, firewalls, caches, and other intermediaries). To increase the chances of wide adoption, the WebSocket protocol was designed so that this existing infrastructure only needs adjustments and extensions, rather than being recreated from scratch to support a new protocol on a dedicated port.
It's also important to note that even if the WebSocket protocol were to drop the HTTP-compatible handshake, it would still need a handshake of almost equivalent complexity to meet the security requirements of the modern web: the browser and server need to validate each other, and CORS (cross-origin resource sharing) has to be supported securely. Even "raw" Flash sockets do a handshake with the server via the security policy request prior to creating the actual socket.
I understand that a SOCKS proxy only establishes a connection at the TCP level while an HTTP proxy interprets traffic at the HTTP level. Thus a SOCKS proxy can work for any kind of protocol, while an HTTP proxy can only handle HTTP traffic. But why can an HTTP proxy like Squid support protocols like IRC and FTP? When we use an HTTP proxy for an IRC or FTP connection, what specifically happens? Is there any metadata added to the packets when they are sent to the proxy over the HTTP protocol?
An HTTP proxy is able to support high-level protocols other than HTTP if it supports the CONNECT method, which is primarily used for HTTPS connections. Here is the description from the Squid wiki:
The CONNECT method is a way to tunnel any kind of connection through an HTTP proxy. By default, the proxy establishes a TCP connection to the specified server, responds with an HTTP 200 (Connection Established) response, and then shovels packets back and forth between the client and the server, without understanding or interpreting the tunnelled traffic
If the client software supports connecting through an 'HTTP CONNECT'-enabled (HTTPS) proxy, it can be any high-level protocol that works with such a proxy (VPN, SSH, SQL, version control, etc.).
As others have mentioned, the "HTTP CONNECT" method allows you to establish any TCP-based connection via a proxy. This functionality is needed primarily for HTTPS connections, since with HTTPS the entire HTTP request is encrypted (so it appears to the proxy as a "meaningless" TCP connection). In other words, an HTTPS session over a proxy and an SSH/FTPS session over a proxy will both appear as "encrypted sessions" to the proxy, and it won't be able to tell them apart, so it has to either allow them all or none of them.
During normal operation, the HTTP proxy receives the HTTP request and is "smart enough" to understand it and do high-level things with it (e.g. search its cache to see if it can serve the response without going to the destination server, or consult a whitelist/blacklist to see if the URL is allowed, etc.). In "CONNECT" mode, none of this happens. The proxy establishes a TCP connection to the destination server and simply forwards all traffic from the client to the destination server and all traffic from the destination server to the client. That means any TCP protocol can work (HTTPS, SSH, FTP - even plain HTTP).
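As a concrete, hypothetical illustration in Java, this is roughly what a client does to reach an IRC server through a CONNECT-capable proxy. The host names and ports (proxy.example:3128, irc.example:6667) are placeholders, and a real client would also handle proxy authentication and error responses.

```java
import java.io.*;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

// Minimal sketch of tunnelling an arbitrary TCP protocol through an HTTP proxy
// using CONNECT: after "200 Connection Established", the proxy is a blind relay.
public class ConnectTunnel {
  public static void main(String[] args) throws IOException {
    try (Socket proxy = new Socket("proxy.example", 3128)) {
      OutputStream out = proxy.getOutputStream();
      out.write(("CONNECT irc.example:6667 HTTP/1.1\r\n"
               + "Host: irc.example:6667\r\n"
               + "\r\n").getBytes(StandardCharsets.US_ASCII));
      out.flush();

      BufferedReader in = new BufferedReader(
          new InputStreamReader(proxy.getInputStream(), StandardCharsets.US_ASCII));
      String status = in.readLine();             // e.g. "HTTP/1.1 200 Connection Established"
      // Skip the remaining response headers; a blank line terminates them.
      for (String line = in.readLine(); line != null && !line.isEmpty(); line = in.readLine()) { }

      if (status != null && status.contains(" 200 ")) {
        // The proxy now shovels bytes both ways: anything written here goes
        // straight to irc.example:6667, untouched and uninterpreted.
        out.write("NICK example\r\n".getBytes(StandardCharsets.US_ASCII));
        out.flush();
      }
    }
  }
}
```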