Two connections to the same server on different ports with mbedTLS (TLS 1.2)

I am working on an embedded project with the LwIP and mbedTLS stacks.
I have a thread that manages a connection to a server on port 21. This connection is encrypted with mbedTLS and everything works well.
Now I need to create another connection to the same server on a different port. This connection would be managed by a different thread.
Can I secure this second connection with the same ssl_context that I used to secure the first connection? If so, how should I do it?
Thank you,
Emmanuel.

An ssl_context is used for a single TLS session, and if you are using two connections, you are by definition establishing two TLS sessions, so you should be using two ssl_contexts. Since you are using two different contexts, each context should have its own socket, with its own port, set when you call mbedtls_ssl_set_bio().
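To make that concrete, here is a minimal sketch, assuming a single mbedtls_ssl_config (g_conf here) that has already been initialised elsewhere with mbedtls_ssl_config_defaults(), the CA chain and a seeded DRBG. The config can be shared between connections, but the ssl_context and the underlying socket must be per-connection. The host name and the second port number are placeholders, and the standard mbedtls_net_* BIO callbacks are used for brevity (on LwIP you would plug your own send/recv callbacks into mbedtls_ssl_set_bio()):

```cpp
#include "mbedtls/ctr_drbg.h"
#include "mbedtls/entropy.h"
#include "mbedtls/net_sockets.h"
#include "mbedtls/ssl.h"

// Shared, read-only after start-up: assumed to be initialised elsewhere with
// mbedtls_ssl_config_defaults(), the CA chain and a seeded CTR-DRBG.
extern mbedtls_ssl_config g_conf;

// Per-connection state: each thread owns one of these.
struct tls_conn {
    mbedtls_net_context net;
    mbedtls_ssl_context ssl;
};

static int tls_conn_open(struct tls_conn *c, const char *host, const char *port)
{
    int ret;
    mbedtls_net_init(&c->net);
    mbedtls_ssl_init(&c->ssl);

    if ((ret = mbedtls_net_connect(&c->net, host, port,
                                   MBEDTLS_NET_PROTO_TCP)) != 0)
        return ret;

    // One mbedtls_ssl_config may back many contexts; the ssl_context itself
    // must be unique per connection.
    if ((ret = mbedtls_ssl_setup(&c->ssl, &g_conf)) != 0)
        return ret;
    if ((ret = mbedtls_ssl_set_hostname(&c->ssl, host)) != 0)
        return ret;

    // Bind THIS context to THIS socket (and therefore this port).
    mbedtls_ssl_set_bio(&c->ssl, &c->net,
                        mbedtls_net_send, mbedtls_net_recv, NULL);

    while ((ret = mbedtls_ssl_handshake(&c->ssl)) != 0) {
        if (ret != MBEDTLS_ERR_SSL_WANT_READ &&
            ret != MBEDTLS_ERR_SSL_WANT_WRITE)
            return ret;
    }
    return 0;
}

// Thread 1: tls_conn_open(&conn_a, "server.example.com", "21");
// Thread 2: tls_conn_open(&conn_b, "server.example.com", "2121");  // hypothetical second port
```

One caveat: an ssl_context is not thread-safe, so each thread must only touch its own context; sharing the config across threads is fine as long as nothing modifies it after setup.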

Related

Correct way to get a gRPC client to communicate with one of many ECS instances of the gRPC service?

I have a gRPC client (not dockerised) and a server application, which I want to dockerise.
What I don't understand is that gRPC first creates a connection with a server, which involves a handshake. So, if I want to deploy the dockerised server on ECS with multiple instances, how will the client switch from one to the other (e.g., if one gRPC server falls over)?
I know the AWS load balancer now works with HTTP/2, but I can't find information on how to handle the fact that the server might change after the client has already opened a connection to another one.
What is involved?
You don't necessarily need an in-line load balancer for this. By using a round-robin client-side load-balancing policy along with a DNS record that points to multiple backend instances, you should be able to get some level of redundancy.
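For illustration, a minimal sketch of how a gRPC C++ client can opt into that policy; the target dns:///grpc-service.internal:50051 is a placeholder for a DNS name that resolves to all of your ECS task IPs:

```cpp
#include <grpcpp/grpcpp.h>

#include <memory>

// Build a channel whose client-side load balancer spreads calls across every
// address the DNS name resolves to, instead of pinning to the first one.
std::shared_ptr<grpc::Channel> make_balanced_channel() {
    grpc::ChannelArguments args;
    args.SetLoadBalancingPolicyName("round_robin");
    return grpc::CreateCustomChannel(
        "dns:///grpc-service.internal:50051",  // placeholder target
        grpc::InsecureChannelCredentials(),    // use TLS credentials in production
        args);
}
```

If a backend dies, its subchannel drops out of the rotation and the client keeps using the remaining addresses; you still need DNS (or another resolver) that tracks the live ECS tasks.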

Connecting to two servers from a client in Qt

I need simultaneous persistent connections to two servers from my Qt TCP client.
Do I need threads for this, or is there another way?
Any examples would be great.
You will need multiple QTcpSockets to connect to multiple QTcpServers simultaneously; a QTcpSocket can connect to only a single server.
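You don't need threads just for this: Qt's event loop can service any number of sockets in one thread. A minimal sketch with placeholder host names and ports:

```cpp
#include <QCoreApplication>
#include <QDebug>
#include <QTcpSocket>

int main(int argc, char *argv[])
{
    QCoreApplication app(argc, argv);

    // Two independent sockets, both driven by the same event loop.
    QTcpSocket socketA;
    QTcpSocket socketB;

    QObject::connect(&socketA, &QTcpSocket::readyRead, [&socketA]() {
        qDebug() << "server A:" << socketA.readAll();
    });
    QObject::connect(&socketB, &QTcpSocket::readyRead, [&socketB]() {
        qDebug() << "server B:" << socketB.readAll();
    });

    socketA.connectToHost("server-a.example.com", 1234);  // placeholder
    socketB.connectToHost("server-b.example.com", 5678);  // placeholder

    return app.exec();
}
```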

Reliable delivery of information between a browser and a locally run service using port knocking

The goal
Allow a browser to exchange information with a service running locally. Allow the service to figure out the user (logon session in Windows) who runs the browser. Avoid, if possible, storing a TLS certificate and private key on the machine. A bonus task: provide a solution for the setup where anti-virus software like Kaspersky or Sophos proxies all TCP connections.
The story
The underlying OS is Windows, but it could be any modern OS. There is a daemon running in the system; in the case of Windows this is a Windows service. A JavaScript, loaded by the browser from a remote server, sends data to the daemon. The daemon does not have an HTTP/HTTPS server. Instead the daemon opens N ports and listens for incoming connections. N is a low two-digit number.
The JS initiates TCP connections to a selected group of K ports in the range N. In the current implementation the JS attempts to load JS scripts from 127.0.0.1:port-number. The daemon accepts each connection and immediately closes it (a kind of port knocking). The daemon recovers the data from the ports "knocked" by the JS.
In the current implementation the backend chooses a unique tuple of ports, for example a combination of 3 ports. The tuple is a key identifying the browser session. The service collects the "knocks" (the ports accessed by a specific OS process) and queries the backend using the collected ports.
One of the goals of the solution is to avoid implementing an HTTP/HTTPS server in the service and to save the maintenance of an SSL certificate.
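For concreteness, a rough sketch of the daemon side of this design, using POSIX sockets for brevity (a Windows service would use the Winsock equivalents); the base port and range size are illustrative:

```cpp
#include <netinet/in.h>
#include <poll.h>
#include <sys/socket.h>
#include <unistd.h>

#include <cstdio>
#include <vector>

int main()
{
    const int base_port = 45000, n = 12;  // illustrative range N
    std::vector<pollfd> fds;

    // Open as many ports of the range as we can; tolerate busy ones.
    for (int i = 0; i < n; ++i) {
        int s = socket(AF_INET, SOCK_STREAM, 0);
        if (s < 0) continue;
        sockaddr_in addr{};
        addr.sin_family      = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);  // 127.0.0.1 only
        addr.sin_port        = htons(base_port + i);
        if (bind(s, (sockaddr *)&addr, sizeof addr) < 0 || listen(s, 8) < 0) {
            close(s);  // port busy: skip it
            continue;
        }
        fds.push_back({s, POLLIN, 0});
    }

    for (;;) {
        if (poll(fds.data(), (nfds_t)fds.size(), -1) <= 0) continue;
        for (pollfd &p : fds) {
            if (!(p.revents & POLLIN)) continue;
            int c = accept(p.fd, nullptr, nullptr);
            if (c < 0) continue;
            sockaddr_in local{};
            socklen_t len = sizeof local;
            getsockname(c, (sockaddr *)&local, &len);
            // Record the knock, then close immediately.
            std::printf("knock on port %d\n", ntohs(local.sin_port));
            close(c);
        }
    }
}
```

Correlating a knock with the OS process that made it (and thus the logon session) would additionally require querying the OS's TCP connection table, which this sketch omits.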
The problem
The order in which the JS connects to the ports is not defined. Specifically, two browsers can run knocking sessions simultaneously.
The service can fail to open some of the ports in the range N because those ports are busy.
The order is not critical because the server chooses a unique combination from the range N, but I need the system to tolerate missing ports. I was thinking about choosing more than one tuple and using more than one range N.
The question
How can I adopt FEC for the problem? Does the design make sense?

How do GCP load balancers manage WebSocket connections?

Clients connect to an API gateway server through a WebSocket connection. This server just orchestrates a swarm of cloud functions that handle all of the data requesting and transforming. The server is stateful: it holds essential session data, which defines, for example, which cloud functions a given user is allowed to request.
This server doesn't use the socket to broadcast data, so socket connections do not interact with each other, and will not need to. So all it needs to handle is single-client-to-server communication.
What will happen if I create a bunch of replicas and put a load balancer in front of all of them (like regular horizontal scaling)? If a user gets connected to a certain server instance, will his connection stick there, or will it be switched between instances by the load balancer?
There is a parameter available for the load balancer that does what you are looking for: session affinity.
"Session affinity if set attempts to send all network requests from the same client to the same virtual machine instance."
Note that even though it seems to be a load balancer setting, you actually set it while creating target pools and/or backends. You should check whether this solution can be applied to your particular configuration.

How can a web server handle multiple users' incoming requests at a time on a single port (80)?

How does a web server handle multiple incoming requests at the same time on a single port (80)?
Example :
At the same time, 300k users want to see an image from www.abcdef.com, which is assigned IP 10.10.100.100 and port 80. So how can www.abcdef.com handle this incoming user load?
Can one server (assigned IP 10.10.100.100) handle this vast number of incoming users? If not, how can one IP address be assigned to more than one server to handle this load?
A port is just a magic number. It doesn't correspond to a piece of hardware. The server opens a socket that 'listens' at port 80 and 'accepts' new connections from that socket. Each new connection is represented by a new socket whose local port is also port 80, but whose remote IP:port is as per the client who connected. So they don't get mixed up. You therefore don't need multiple IP addresses or even multiple ports at the server end.
From tcpipguide
This identification of connections using both client and server sockets is what provides the flexibility in allowing multiple connections between devices that we take for granted on the Internet. For example, busy application server processes (such as Web servers) must be able to handle connections from more than one client, or the World Wide Web would be pretty much unusable. Since the connection is identified using the client's socket as well as the server's, this is no problem. At the same time that the Web server maintains the connection mentioned just above, it can easily have another connection to say, port 2,199 at IP address 219.31.0.44. This is represented by the connection identifier:
(41.199.222.3:80, 219.31.0.44:2199).
In fact, we can have multiple connections from the same client to the same server. Each client process will be assigned a different ephemeral port number, so even if they all try to access the same server process (such as the Web server process at 41.199.222.3:80), they will all have a different client socket and represent unique connections. This is what lets you make several simultaneous requests to the same Web site from your computer.
Again, TCP keeps track of each of these connections independently, so each connection is unaware of the others. TCP can handle hundreds or even thousands of simultaneous connections. The only limit is the capacity of the computer running TCP, and the bandwidth of the physical connections to it—the more connections running at once, the more each one has to share limited resources.
TCP Takes care of client identification
As a.m. said, TCP takes care of the client identification, and the server only sees a "socket" per client.
Say a server at 10.10.100.100 listens to port 80 for incoming TCP connections (HTTP is built over TCP). A client's browser (at 10.9.8.7) connects to the server using the client port 27143. The server sees: "the client 10.9.8.7:27143 wants to connect, do you accept?". The server app accepts, and is given a "handle" (a socket) to manage all communication with this client, and the handle will always send packets to 10.9.8.7:27143 with the proper TCP headers.
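A minimal sketch of that mechanism with POSIX sockets: one listening socket, many accepted sockets, each distinguished by the peer's IP:port (port 8080 is used here instead of 80 to avoid requiring root):

```cpp
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

#include <cstdio>

int main()
{
    int lsock = socket(AF_INET, SOCK_STREAM, 0);
    sockaddr_in addr{};
    addr.sin_family      = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port        = htons(8080);  // one port for every client
    bind(lsock, (sockaddr *)&addr, sizeof addr);
    listen(lsock, SOMAXCONN);

    for (;;) {
        sockaddr_in peer{};
        socklen_t len = sizeof peer;
        // accept() returns a NEW socket per connection; the local port is
        // still 8080, but the (client IP, client port) pair is unique.
        int csock = accept(lsock, (sockaddr *)&peer, &len);
        if (csock < 0) continue;
        char ip[INET_ADDRSTRLEN];
        inet_ntop(AF_INET, &peer.sin_addr, ip, sizeof ip);
        std::printf("connection from %s:%d\n", ip, ntohs(peer.sin_port));
        close(csock);  // a real server would hand this socket to a worker
    }
}
```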
Packets are never simultaneous
Now, physically, there are generally only one or two connections linking the server to the internet, so packets can only arrive in sequential order. The question becomes: what is the maximum throughput through the fiber, and how many responses can the server compute and send in return? Other than CPU time spent or memory bottlenecks while responding to requests, the server also has to keep some resources alive (at least one active socket per client) until the communication is over, and therefore consumes RAM. Throughput is achieved via some optimizations (not mutually exclusive): non-blocking sockets (to avoid pipelining/socket latencies) and multi-threading (to use more CPU cores/threads).
Improving request throughput further: load balancing
And last, the servers on the "front side" of websites generally do not do all the work by themselves (especially the more complicated stuff, like database querying, calculations, etc.); they defer tasks or even forward HTTP requests to distributed servers, while they keep handling as many of the trivial requests per second (e.g., forwarding) as they can. The distribution of work over several servers is called load balancing.
1) How does a web server handle multiple incoming requests at the same time on a single port (80)?
a) One instance of the web service (for example, a Spring Boot microservice) runs and listens on port 80 of the server machine.
b) This web service (the Spring Boot app) needs a servlet container, most commonly Tomcat, which has a thread pool configured.
c) Whenever requests come in from different users simultaneously, the container assigns a thread from the pool to each incoming request (sketched below).
d) Since the server-side web service code will mostly consist of singleton beans (in the case of Java), each thread pertaining to a request calls the singleton APIs, and if there is a need for database access, the database operations of these concurrent threads are coordinated through the @Transactional annotation, which wraps them in transactions.
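As a language-agnostic illustration of the container's thread-pool model from (b) and (c) (Tomcat implements this in Java), a minimal sketch of a fixed pool of workers to which each incoming request is handed:

```cpp
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

class ThreadPool {
public:
    explicit ThreadPool(std::size_t n) {
        for (std::size_t i = 0; i < n; ++i)
            workers_.emplace_back([this] { run(); });
    }
    ~ThreadPool() {
        {
            std::lock_guard<std::mutex> lk(m_);
            done_ = true;
        }
        cv_.notify_all();
        for (std::thread &w : workers_) w.join();
    }
    // The acceptor calls this once per incoming request.
    void submit(std::function<void()> job) {
        {
            std::lock_guard<std::mutex> lk(m_);
            jobs_.push(std::move(job));
        }
        cv_.notify_one();
    }

private:
    void run() {
        for (;;) {
            std::function<void()> job;
            {
                std::unique_lock<std::mutex> lk(m_);
                cv_.wait(lk, [this] { return done_ || !jobs_.empty(); });
                if (done_ && jobs_.empty()) return;
                job = std::move(jobs_.front());
                jobs_.pop();
            }
            job();  // handle one request on this worker thread
        }
    }

    std::vector<std::thread> workers_;
    std::queue<std::function<void()>> jobs_;
    std::mutex m_;
    std::condition_variable cv_;
    bool done_ = false;
};
```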
2) Can one server (assigned IP 10.10.100.100) handle this vast number of incoming users? If not, how can one IP address be assigned to more than one server to handle this load?
This is taken care of by a load balancer together with the routing table.
The answer is: virtual hosts. The HTTP Host header carries the domain name, so the web server knows which site's files to run or send to the client.
