Connecting to two servers from a client in Qt

I need simultaneous persistent connections to two servers from my Qt TCP client.
Do I need threads for this, or is there another way?
Any examples would be great.

You will need multiple QTcpSocket instances to connect to multiple QTcpServers simultaneously; a single QTcpSocket can only be connected to one server at a time. You do not need extra threads for this: QTcpSocket works asynchronously through signals and slots, so both connections can be driven from the same event loop.
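A minimal sketch of that idea (host names and ports are placeholders, error handling omitted): two sockets owned by the same thread, each reacting to its own readyRead() signal.

```cpp
// Two persistent TCP connections in one thread, driven by the Qt event loop.
#include <QCoreApplication>
#include <QTcpSocket>
#include <QDebug>

int main(int argc, char *argv[])
{
    QCoreApplication app(argc, argv);

    QTcpSocket socketA;
    QTcpSocket socketB;

    // React to data from either server independently; no threads needed,
    // because readyRead() is emitted asynchronously for each socket.
    QObject::connect(&socketA, &QTcpSocket::readyRead, [&socketA]() {
        qDebug() << "Server A:" << socketA.readAll();
    });
    QObject::connect(&socketB, &QTcpSocket::readyRead, [&socketB]() {
        qDebug() << "Server B:" << socketB.readAll();
    });

    // Both connections stay open until disconnectFromHost() is called
    // or the peer closes them.
    socketA.connectToHost("server-a.example.com", 1234);  // placeholder host/port
    socketB.connectToHost("server-b.example.com", 5678);  // placeholder host/port

    return app.exec();
}
```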

Related

Are there existing solutions for serving multiple websocket clients from a single upstream connection using a proxy server?

I have a data vendor for real-time data that has a strict limit on the number of websocket connections I am allowed to make to their API. I have multiple microservices that need to consume this data, with some overlap in subscriptions. The clients do not need to communicate back anything beyond subscriptions.
I would like to design a system using a proxy server that maintains a single websocket connection to the data vendor and then relays the appropriate messages to the clients via websocket. Ideally, the clients would be able to interact with the proxy server as if it were the data vendor's API.
I have looked at various (reverse?) proxy server solutions, but have not found specific language about reducing the number of connections to the upstream data source. For example, I have looked into NGINX, but I can't tell whether the proxy will combine client connections into a single upstream connection. The other solution I have researched is putting all messages into a Kafka pub/sub via a connector and having each client subscribe there.
I am curious if there are any existing, out of the box solutions to this problem before I implement my own solution.
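For reference, the relay behaviour described above reduces to a small amount of bookkeeping regardless of which product implements it. A transport-agnostic sketch (the names and types are hypothetical; a real proxy would layer this on a websocket library for both the upstream and client sides): the proxy tracks each client's subscriptions, subscribes upstream only the first time a topic is requested, and fans each upstream message out to the interested clients.

```cpp
#include <functional>
#include <iostream>
#include <string>
#include <unordered_map>
#include <vector>

struct Client {
    int id;
    std::function<void(const std::string&)> send;  // wraps the client-side websocket
};

class Relay {
public:
    // Record the client's interest; the single upstream connection only needs
    // a new upstream subscription the first time a topic is seen.
    void subscribe(const Client& c, const std::string& topic) {
        bool firstSubscriber = m_subs[topic].empty();
        m_subs[topic].push_back(c);
        if (firstSubscriber)
            std::cout << "subscribe upstream to " << topic << "\n";
    }

    // Called for every message arriving on the single upstream connection.
    void onUpstreamMessage(const std::string& topic, const std::string& payload) {
        for (const Client& c : m_subs[topic])
            c.send(payload);  // relay only to clients subscribed to this topic
    }

private:
    std::unordered_map<std::string, std::vector<Client>> m_subs;
};

int main()
{
    Relay relay;
    relay.subscribe({1, [](const std::string& m) { std::cout << "client 1 got " << m << "\n"; }}, "AAPL");
    relay.subscribe({2, [](const std::string& m) { std::cout << "client 2 got " << m << "\n"; }}, "AAPL");
    relay.onUpstreamMessage("AAPL", "tick 187.55");  // fans out to both subscribers
}
```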

Correct way to get a gRPC client to communicate with one of many ECS instances of the gRPC service?

I have a gRPC client, which is not dockerised, and a gRPC server application, which I want to dockerise.
What I don't understand is that gRPC first creates a connection with a server, which involves a handshake. So, if I deploy the dockerised server on ECS with multiple instances, how will the client switch from one instance to another (e.g., if one gRPC server falls over)?
I know the AWS load balancer now supports HTTP/2, but I can't find information on how to handle the fact that the server might change after the client has already opened a connection to another one.
What is involved?
You don't necessarily need an in-line load balancer for this. By using a Round Robin client-side load balancing policy along with a DNS record that points to multiple backend instances, you should be able to get some level of redundancy.
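If it helps, this is roughly what that looks like in gRPC C++ (the target name is a placeholder, and it is assumed that the DNS record resolves to the addresses of all ECS task instances):

```cpp
#include <memory>
#include <grpcpp/grpcpp.h>

std::shared_ptr<grpc::Channel> MakeRoundRobinChannel()
{
    grpc::ChannelArguments args;
    // Spread RPCs across all addresses the DNS name resolves to instead of
    // pinning every call to the first backend that answered.
    args.SetLoadBalancingPolicyName("round_robin");

    return grpc::CreateCustomChannel(
        "dns:///my-grpc-service.internal:50051",  // placeholder target
        grpc::InsecureChannelCredentials(),       // swap for TLS credentials in production
        args);
}
```

The channel maintains a connection per resolved address and re-resolves/reconnects as backends come and go, so new RPCs flow to the remaining instances if one falls over; RPCs already in flight on the dead connection will still fail and need retry logic on your side.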

Can I avoid using a SignalR backplane behind a load balancer?

I use SignalR to expose RabbitMQ messages to browsers. This works fine with one app instance, obviously. The question is whether it could also work with multiple instances without a backplane. I understand that a SignalR client could be disconnected from pod A and reconnect to pod B, but what exactly is the issue here? I am fine with losing some messages during reconnection. Is that the only issue? Is reconnection to pod B treated as a regular new connection, so that the client is simply subscribed again just as it was originally? Or does the system no longer have the input parameters it had during the initial subscription, and therefore cannot resubscribe without hints?
As long as all of your SignalR servers are getting the same data from RabbitMQ, or each is getting only the data for the clients connected to it, you don't need a backplane.
You will need a backplane if you have one of the following:
Clients can communicate with one another.
Only one SignalR server is connected to RabbitMQ, but clients can connect to multiple SignalR servers.
SignalR servers are connected to different queues or getting different data from the same queue.
I have a similar setup with a database instead of RabbitMQ and need a backplane, either to have only one of the SignalR servers access the database (and have the data sent to all clients) or to share the database load between servers (and have the data sent to all clients). This way, the server that retrieves the data can have it delivered to a client connected to a different server.
I am using SignalR for ASP.NET, and the servers do not know who is subscribed to the other servers. All messages are sent over the backplane, and each server determines whether they apply to its connected clients. This works well for broadcasts, for example, or for making sure that a user with multiple clients receives the same data regardless of which server each client is connected to.

Two connections to the same server on different ports with mbedTLS

I am working on an embedded project with the LwIP and mbedTLS stacks.
I have a thread that manages a connection to a server on port 21. This connection is encrypted with mbedTLS and everything works well.
Now I need to create another connection to the same server on a different port. This connection would be managed by a different thread.
Can I secure this second connection with the same ssl_context that I used to secure the first connection? If yes, how should I do it?
Thank you,
Emmanuel.
An ssl_context is used for a single TLS session, and since you are using two connections you are by definition establishing two TLS sessions, so you should be using two ssl_contexts. Each context is then bound to its own underlying connection (and therefore its own port) when you call mbedtls_ssl_set_bio().
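A rough sketch of the two-context setup (error handling omitted; the host and second port are placeholders, and shared_conf is assumed to be an already initialised mbedtls_ssl_config). This uses the standard mbedtls_net_* helpers; on LwIP you would typically pass your own send/recv callbacks to mbedtls_ssl_set_bio() instead.

```c
#include "mbedtls/net_sockets.h"
#include "mbedtls/ssl.h"

void open_two_tls_connections(mbedtls_ssl_config *shared_conf)
{
    mbedtls_net_context net21, net990;   /* one socket per connection        */
    mbedtls_ssl_context ssl21, ssl990;   /* one TLS session per connection   */

    mbedtls_net_init(&net21);
    mbedtls_net_init(&net990);
    mbedtls_ssl_init(&ssl21);
    mbedtls_ssl_init(&ssl990);

    /* Both contexts may reuse the same, already configured ssl_config. */
    mbedtls_ssl_setup(&ssl21, shared_conf);
    mbedtls_ssl_setup(&ssl990, shared_conf);

    /* Each TLS context is bound to its own TCP connection/port via set_bio(). */
    mbedtls_net_connect(&net21, "example.com", "21", MBEDTLS_NET_PROTO_TCP);
    mbedtls_ssl_set_bio(&ssl21, &net21, mbedtls_net_send, mbedtls_net_recv, NULL);

    mbedtls_net_connect(&net990, "example.com", "990", MBEDTLS_NET_PROTO_TCP);
    mbedtls_ssl_set_bio(&ssl990, &net990, mbedtls_net_send, mbedtls_net_recv, NULL);

    /* Independent handshakes; each thread can then drive its own session. */
    mbedtls_ssl_handshake(&ssl21);
    mbedtls_ssl_handshake(&ssl990);
}
```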

Sending TCP data to a client behind NAT

I am working on a client/server program with one server (not behind NAT) and many clients that are using NAT. I need the server to be able to transfer files to the clients every so often, thus the server must be able to initiate TCP traffic when needed. I have already figured out how to do this with UDP by caching the clients' IPEndPoints and using them later.
Can anyone recommend some sample code or a project (with source) they have seen that can do this? There are lots of Chat or IM projects out there to learn from, but they generally use only UDP across NAT or only work on LANs without NAT being used. C++/C#/VB source with a solution would help a lot. Thanks.
Your best options:
1. Clients poll periodically to discover whether new files are available. This is the simplest option. It may or may not scale, depending on how many clients there are and how often they need to poll.
2. All clients keep a persistent TCP connection to the server, and the server sends files to a specific client when they are ready. This avoids the polling overhead of #1, but can run into trouble if the number of clients reaches into the thousands and the service isn't designed for the C10K problem. (See the sketch after this list.)
3. Clients connect to a notification server that is designed to handle many simultaneously connected clients. A client sends its notification parameters to the file transfer service and disconnects. When the file server has a new file available, it sends a notification through the notification service telling the client to connect back for the awaiting file. The client disconnects from the file server once the transfer is done.
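A minimal sketch of option 2 from the client side (C++ with POSIX sockets; the address and port are placeholders and most error handling is omitted): because the client behind NAT initiates the connection outbound and then keeps it open, the server can push data at any time over that established socket.

```cpp
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdio>

int main()
{
    int sock = socket(AF_INET, SOCK_STREAM, 0);

    // TCP keepalive helps detect dead connections and keeps some NAT
    // mappings from expiring on long-idle links.
    int on = 1;
    setsockopt(sock, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof(on));

    sockaddr_in server{};
    server.sin_family = AF_INET;
    server.sin_port = htons(5000);                          // placeholder port
    inet_pton(AF_INET, "203.0.113.10", &server.sin_addr);   // placeholder address

    if (connect(sock, reinterpret_cast<sockaddr*>(&server), sizeof(server)) != 0) {
        perror("connect");
        return 1;
    }

    // Block waiting for the server to push data; a real client would frame
    // messages (e.g. length prefix + file name) and reconnect on failure.
    char buf[4096];
    for (;;) {
        ssize_t n = recv(sock, buf, sizeof(buf), 0);
        if (n <= 0)
            break;  // connection closed or failed: reconnect here
        fwrite(buf, 1, static_cast<size_t>(n), stdout);
    }

    close(sock);
    return 0;
}
```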
