Standard way of using a single port for multiple sockets? - networking

Hey, I am writing an app in Twisted, and as it stands I have 4 servers bound to different ports, all communicating with the client via JSON. Is there any way to bind these 4 servers to the same port and have the interactions remain the same?
For instance, say the client subscribes to two different feeds, each transmitted via a direct socket.
Right now I just do something like
server1.read_string()
server2.read_string()
and it will read the correct JSON string from the respective feeds. Is there any way to maintain this type of functionality while contacting my server on the same port?
I do not want to throw all of the server functionality into one massive server and partition the data by header prefixes.
I don't want to do something like
s = server.read_string()
header = s.split(delimiter)[0]  # some delimiter
if header == "SERVER1":
    # Blahh

It sounds like you have many clients interacting with your servers via HTTP. The standard solution is to throw a reverse proxy between the client and your servers - that proxy then forwards connections to the appropriate server depending on the URL. The reverse proxy can run on any one of your existing servers or on its own server to lighten the load.
If your data is cacheable, the reverse proxy can cache your results too.
There are many reverse proxies available, and you will want to choose one based on the sort of workload you have. Do you need it to be highly configurable? Is the data public or behind logins? How long does each connection last / how many connections do you want to hold open at once?
Squid, Varnish, and HAProxy are good reverse proxies, and even Apache can do this for you.
I plan to use HAProxy for my project, Gridspy, as I have many ongoing connections with my clients and want to place an Orbited server in the same URL path as my Django server. See this tutorial for more information on how to forward many connections on port 80 from one server to many. That tutorial is focused on Comet, but your problem is even simpler than that.
If you are considering an ongoing TCP/IP connection from the browser back to your servers, seriously consider Orbited. See this tutorial about graphs via Orbited and MorbidQ. Orbited will also punch through firewalls and proxies better than most custom solutions, as it looks like normal HTTP traffic.
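For a concrete picture, here is a minimal HAProxy sketch of path-based routing to several backend servers on one port; the backend names, ports, and path prefixes are invented for illustration, not taken from your setup:

defaults
    mode http
    timeout connect 5s
    timeout client 30s
    timeout server 30s

frontend public
    bind *:80
    # Route by URL path prefix to the server that owns each feed.
    acl is_feed1 path_beg /feed1
    acl is_feed2 path_beg /feed2
    use_backend feed1_server if is_feed1
    use_backend feed2_server if is_feed2
    default_backend main_server

backend feed1_server
    server s1 127.0.0.1:8001

backend feed2_server
    server s2 127.0.0.1:8002

backend main_server
    server s0 127.0.0.1:8000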

In order to have multiple servers running on the same machine all bound to the same port, they need to be bound to different IP addresses. The only way to bind to the same port on the same IP is to enable the socket's SO_REUSEADDR option (or SO_REUSEPORT, depending on the platform), but then multiple servers would be able to receive each other's inbound data, really messing up your communications.
Otherwise, having a single server that uses headers to identify the particular feeds is best. Why do you not want to do that?
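For what it's worth, here is a minimal Python sketch of the different-IP approach; the two addresses are placeholders and must already be assigned to the machine:

import socket

# Two listeners on the same port, each bound to a different local IP
# (192.0.2.10 and 192.0.2.11 are placeholder addresses).
server1 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server1.bind(("192.0.2.10", 9000))
server1.listen(5)

server2 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server2.bind(("192.0.2.11", 9000))
server2.listen(5)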

Related

Are there existing solutions for serving multiple websocket clients from a single upstream connection using a proxy server?

I have a data vendor for real-time data that has a strict limit on the number of websocket connections I am allowed to make to their API. I have multiple microservices that need to consume this data, with some overlap in subscriptions. The clients do not need to communicate back anything beyond subscriptions.
I would like to design a system using a proxy-server that maintains a single websocket connection to the data vendor and then relays the appropriate messages to the clients via websocket. Optimally, the clients would be able to interact with the proxy server as if it were the data vendor's API.
I have looked at various (reverse?) proxy server solutions here, but have not found specific language about reducing the number of connections to the upstream data source. For example, I have looked into NGINX, but I can't tell whether the proxy will combine client connections into a single upstream connection. The other solution I have researched is putting all messages into a Kafka Pub/Sub via a connector and having each client subscribe there.
I am curious if there are any existing, out of the box solutions to this problem before I implement my own solution.
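If you do end up rolling your own, the core of such a relay is small. Here is a rough sketch using the Python websockets package (the vendor URL and listening port are placeholders, and subscription handling is left as a stub):

import asyncio
import websockets  # assumes a recent version of the websockets package

VENDOR_URL = "wss://vendor.example.com/feed"  # placeholder
clients = set()

async def relay_upstream():
    # One connection to the vendor; every message fans out to all clients.
    async with websockets.connect(VENDOR_URL) as upstream:
        async for message in upstream:
            websockets.broadcast(clients, message)

async def handle_client(websocket):
    clients.add(websocket)
    try:
        async for message in websocket:
            pass  # merge/deduplicate client subscriptions here
    finally:
        clients.discard(websocket)

async def main():
    async with websockets.serve(handle_client, "0.0.0.0", 8765):
        await relay_upstream()

asyncio.run(main())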

What are my options for custom UDP load balancing/proxying?

I'm looking for software that supports custom UDP proxying/load balancing based on an external routing table. The general idea is that some external service maintains a routing table (maybe using redis/memcached, etc.) that maps a downstream IP/port combo to the address of an upstream internal service. Downstream clients would send all UDP traffic to the load balancer, which should then forward it (preserving the downstream IP/port to support Direct Server Return) to the appropriate internal service based on the routing table.
I've looked at a range of existing load balancers (Envoy, Traefik, NGINX, etc.) but can't seem to find any that would do what I need. Is there something out there that satisfies (or can be twisted into satisfying) these requirements?
I found this question, which seems quite similar but still doesn't fully satisfy my goals: UDP load balancing using customised balancing method with session id inside UDP body.
According to this blog, NGINX seems like a good option for providing the transparent proxying & DSR, but I'm not sure how to achieve the custom routing using an external table.
Thanks!
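For a sense of scale, the lookup-and-forward core of such a balancer is tiny. Here is a rough Python sketch, with a plain dict standing in for the external routing table (redis/memcached in practice); note that it does not preserve the client's source address, so true Direct Server Return would still need raw sockets or kernel-level rewriting:

import socket

# Placeholder routing table: (client_ip, client_port) -> (backend_ip, backend_port).
# In practice this lookup would hit redis/memcached instead of a dict.
ROUTES = {("203.0.113.5", 40000): ("10.0.0.12", 9000)}
DEFAULT_BACKEND = ("10.0.0.10", 9000)

lb = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
lb.bind(("0.0.0.0", 5000))

while True:
    data, addr = lb.recvfrom(65535)
    backend = ROUTES.get(addr, DEFAULT_BACKEND)
    lb.sendto(data, backend)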

What is the simplest way to create http traffic from multiple sources on one virtual machine to another vm?

I'm thinking about just making my own HTTP client with simple GET requests to generate the traffic, but how do I create multiple source IPs for each socket?
Create lots of clones of the HTTP client.
If your client doesn't actually live at the IP address it presents to the destination, it won't receive the responses from the server, and that may cause problems in your testing.
So, that being said, you can quickly script curl to make the HTTP requests. This post shows how to set the source IP address that curl uses.
The other half of this solution would be to bind multiple IP addresses to the NIC on the source machine. This other article explains how to do this on Windows, and this one shows it on Linux.
If you're set on writing your own client, you can alternate the binding of your socket to the different addresses you created above.
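For example, a rough Python sketch; the target host and source addresses are placeholders, and the source IPs must already be bound to the NIC as described above:

import http.client

# Placeholder source addresses, already assigned to the local NIC.
SOURCE_IPS = ["10.0.0.11", "10.0.0.12", "10.0.0.13"]

for src in SOURCE_IPS:
    # source_address binds the outgoing socket to the given local IP.
    conn = http.client.HTTPConnection("192.0.2.10", 80, source_address=(src, 0))
    conn.request("GET", "/")
    print(src, conn.getresponse().status)
    conn.close()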
In vSphere 5, you can have 10 NICs per VM.
But as stated above, creating multiple cloned clients is probably easier to maintain from a networking/routing perspective.

Forwarding HTTP Request with Direct Server Return

I have servers spread across several data centers, each storing different files. I want users to be able to access the files on all servers through a single domain and have the individual servers return the files directly to the users.
The following shows a simple example:
1) The user's browser requests http://www.example.com/files/file1.zip
2) Request goes to server A, based on the DNS A record for example.com.
3) Server A analyzes the request and works out that /files/file1.zip is stored on server B.
4) Server A forwards the request to server B.
5) Server B returns file1.zip directly to the user without going through server A.
Note: steps 4 and 5 must be transparent to the user and cannot involve sending a redirect to the user as that would violate the requirement of a single domain.
From my research, what I want to achieve is called "Direct Server Return" and it is a common setup for load balancing. It is also sometimes called a half reverse proxy.
For step 4, it sounds like I need to do MAC address translation and then pass the request back onto the network; for servers outside server A's network, tunneling will be required.
For step 5, I simply need to configure server B, as per the real servers in a load balancing setup. Namely, server B should have server A's IP address on the loopback interface and it should not answer any ARP requests for that IP address.
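For reference, the usual Linux real-server configuration for that looks something like this (sketched from the standard LVS-DR recipes; the address is a placeholder for server A's public IP):

# On server B: accept traffic addressed to server A's IP without ARPing for it.
ip addr add 198.51.100.10/32 dev lo
sysctl -w net.ipv4.conf.all.arp_ignore=1
sysctl -w net.ipv4.conf.all.arp_announce=2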
My problem is how to actually achieve step 4?
I have found plenty of hardware and software that can do this for simple load balancing at layer 4, but these solutions fall short and cannot handle the kind of custom routing I require. It seems like I will need to roll my own solution.
Ideally, I would like to do the routing / forwarding at the web server level, i.e. in PHP or C# / ASP.net. However, I am open to doing it at a lower level such as Apache or IIS, or at an even lower level, i.e. a custom proxy service in front of everything.
Forgive my ignorance, but why not set up server A to mount the files located on the other servers via either NFS or SMB, depending on whether you're using a Unix variant or Windows?
It seems like what you're trying to do is overcomplicate something that could be very simple. In addition, using network-mounted files will allow you to mount those files on additional machines in the future when you need them. At that point, you could put a load balancer in front of server A (and servers x, y, and z, which also all mount files from server B).
Granted, this would not solve the problem of bypassing server A on the return; technically server A would be returning the file instead of server B. But if a load balancer were put in front of A, then A would effectively become B anyway, so B would still be returning the file, because the load balancer would use direct server return (it's been a standard feature for a long time now).
If I did miss something, please do elaborate.
Edit: Yes I realize this was posted nearly 3 years ago. Oh well.
Why not send an HTTP response of status code 307 Temporary Redirect?
At that point the client will re-issue the request to the correct server.
I know you want a single domain, but you could have both individual subdomains plus a single common domain.
For example:
example.com has IP1, IP2, IP3.
example1.example.com has IP1
example2.example.com has IP2
example3.example.com has IP3
If a request comes to a server that can't handle it itself, that server will forward the user to make another request to the correct specific server. By the way, an HTTP browser will follow this redirect transparently.
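A rough Python sketch of that redirect logic; the path-to-subdomain map and hostnames are invented for illustration:

from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical map of paths to the subdomain that actually holds each file.
FILE_LOCATIONS = {"/files/file1.zip": "example2.example.com"}

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        host = FILE_LOCATIONS.get(self.path)
        if host is None:
            self.send_response(404)  # or serve the file if this server has it
            self.end_headers()
        else:
            # Bounce the client to the server that has the file.
            self.send_response(307)
            self.send_header("Location", "http://%s%s" % (host, self.path))
            self.end_headers()

HTTPServer(("", 8080), Handler).serve_forever()  # port 80 in production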

Troubleshooting an SSL flood

Users connect to our webserver via https, and stay on a secured connection throughout their use of our service. A typical user session will establish a small handful of connections to the server (one or two).
There are a very small number of exceptions we are trying to track down. Particular users will intermittently have several hundred connections established. When we happen to catch the problem in the act, we can see the exchange of the SSL handshake, and from the perspective of the server all appears to be in order. Yet we never observe a payload - the client instead connects on a new port and initiates a new handshake.
We do not have access to the client, and cannot observe the behavior from that side of the connection. Nor do we have a local scenario that can reproduce the problem.
It is our belief (though not confirmed) that the user agent is connecting to our server directly, and not through a proxy.
Does anybody recognize these symptoms? Can anyone suggest steps to further identify the problem?
Are there any patterns you can see to this traffic, aside from making many repeated requests?
For example, do the requests come from the same IP ranges? Possibly search engines or other spiders, or maybe from countries that you normally don't get users from, possibly indicating some sort of weird botnet or at least something you could block?
Do these rogue requests always negotiate to use a particular cipher suite, potentially indicating the client software?
Does it make any difference if you change the cipher suites available for negotiation?
What server software are you using, and are there any firewalls within your network that could potentially be dropping some responses to the user?
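One low-effort way to answer the cipher suite question is to capture the handshakes for offline inspection in Wireshark, e.g. (the interface name is a placeholder):

tcpdump -i eth0 -w ssl-flood.pcap 'tcp port 443'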
I've seen a botnet flooding HTTPS sites being mentioned.
This is probably not your situation, but I thought I'd mention it.
I'm seeing Chrome (12.0.742.60 beta) flooding my server with HTTPS connections - half a dozen or more connections for a single static picture being served... as if it had an optimization to build up connections with ready HTTPS handshakes waiting for requests to send, and then after the page (file) has been served it closes them all.
On plain HTTP I see only two connections (one extra for favicon.ico).
