I am developing a client-server application.
When I run the same client multiple times, the server receives data from only one of them and blocks the data from the others.
My question is: is it valid to make multiple connections to the same port from a single client?
Yes, you can. It all depends on how the server-side code is written. You could fork() a separate process after you accept a client connection through accept(), or you could save all the socket descriptors returned by accept() and handle all of them through select().
So yes, it is valid to make multiple connections to the same port from the same client. Each connection uses a different source port on the client side, so the 4-tuple (src_ip, src_port, dst_ip, dst_port) stays unique.
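Below is a minimal sketch of this from the client side using plain POSIX sockets (the address 127.0.0.1:8080 is just an illustrative placeholder for a server that is already listening, and error handling is omitted): both connections target the same destination port, and getsockname() shows that the kernel picked a different ephemeral source port for each one, so the 4-tuples stay distinct.

#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdio>

// Connect once to the assumed server at 127.0.0.1:8080 and return the socket fd.
static int connect_once()
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    sockaddr_in dst{};
    dst.sin_family = AF_INET;
    dst.sin_port = htons(8080);                       // same destination port both times
    inet_pton(AF_INET, "127.0.0.1", &dst.sin_addr);
    connect(fd, reinterpret_cast<sockaddr*>(&dst), sizeof(dst));
    return fd;
}

int main()
{
    int fds[2] = { connect_once(), connect_once() };  // two connections from one client
    for (int fd : fds) {
        sockaddr_in local{};
        socklen_t len = sizeof(local);
        getsockname(fd, reinterpret_cast<sockaddr*>(&local), &len);
        std::printf("local (source) port: %d\n", ntohs(local.sin_port));
        close(fd);
    }
    return 0;
}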
I'm trying to understand how local sockets (Unix domain sockets) work, specifically how to use them with Qt.
It is my understanding that a Unix domain socket is always reliable and no data is lost. Looking at these examples, these are the steps (simplified), considering a server (producer) and a client (consumer):
A QLocalServer (on the server) creates a socket, binds it to a known location and listens for incoming connections.
A QLocalSocket (on the client) connects to the known location. On the server the connection is accepted.
The server can send data to the socket using the write method (of the QLocalSocket instance returned by qLocalServer->nextPendingConnection()).
When the socket buffer is full, any write by the server is blocked as soon as the client reads from the socket and frees the socket buffer. This is the part I don't understand: suppose the socket buffer is full and the server keeps writing. Where are all these data stored, and how can I control the data structure holding them? Is there any way to discard these data?
Imagine a server that produces data 20 times per second and a client that consumes it once per second; I want to drop all the data in between (in my use case, real-time data matters more than ALL the data).
There's nothing unique about the Qt socket classes (for the purposes of this question).
When the socket buffer is full, any write by the server is blocked as soon as the client reads from the socket and frees the socket buffer.
No - the write is blocked until the client performs a read, freeing up some of the buffer. (Perhaps this is what you meant and we just have a minor language barrier.)
Once the server writes to the socket, the data can't be "un-written", so how best to handle this depends on your application, and on whether you have more control over the programming of the server or the client.
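One common server-side approach, sketched here under the assumption that only the latest sample matters (the function name and the threshold are made up for the example): before writing a new sample, check how much previously written data is still queued on the socket and simply skip the write if the client has not caught up yet.

#include <QByteArray>
#include <QLocalSocket>

// Hypothetical helper: write the latest sample only if the previously written
// data has (mostly) drained; otherwise drop this sample so a slow client never
// forces unbounded buffering. 'client' is the QLocalSocket returned by
// qLocalServer->nextPendingConnection().
void sendLatestSample(QLocalSocket *client, const QByteArray &sample)
{
    // bytesToWrite() reports data still queued in Qt's write buffer that has
    // not yet been pushed to the OS; if it is large, the client is lagging.
    const qint64 backlogLimit = 4096;       // assumed threshold, tune as needed
    if (client->bytesToWrite() > backlogLimit) {
        return;                             // drop this sample (stale data is discarded)
    }
    client->write(sample);
    client->flush();                        // ask Qt to push it to the socket now
}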
I have a configuration with the following server/clients:
One server with two bound sockets, a REP and a ROUTER
A client (we will call it a worker) that stays connected to the ROUTER socket
Another (real) client that connects to the REP socket.
I want the server to be able to tell the real client to connect (directly, or somehow through the server) to a WebSocket opened on the worker client. But it seems I cannot retrieve the worker's IP address from a ZeroMQ socket.
How could I achieve this without some dirty IP-address retrieval hacks?
How could I achieve this without some dirty IP-address retrieval hacks?
The best approach would be an explicitly communicated IP-address dialogue / handshake between the server and the worker, taking place during their setup / initialisation, in which the worker advises these configuration details to the server upon being asked to provide them.
Given that, the "new" real client .connect()-s its REQ onto the server's REP and asks the server where to go next; the server can answer this, and the "new" real client thereby receives a legitimate IP-address:port# plus any additional details needed to establish and use the extra TCP/IP-L3 service.
That simple :o)
Design-side epilogue: because each type of ZeroMQ socket Access-Point has further design-side implications hardwired into it, it may be more appropriate to serve a separate REP Access-Point on the server side. That way, no "new" real client becomes dependent on events outside the control of both the server and that client, and both REQ/REP endpoints can (re-)negotiate their temporarily (semi-)private details independently of anything else.
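A minimal sketch of the server side of that dialogue, using the cppzmq C++ bindings; the ports, endpoint string and message contents are all made-up examples, and for brevity the worker registers over a plain REP socket here rather than the ROUTER from the question. The worker advertises its reachable WebSocket endpoint once at startup; every "new" real client that asks is handed that endpoint.

#include <string>
#include <zmq.hpp>

int main()
{
    zmq::context_t ctx;

    // Socket on which the worker registers its own reachable endpoint.
    // (In the question this role is played by the ROUTER socket; a plain REP
    // socket is used here only to keep the sketch short.)
    zmq::socket_t registration(ctx, zmq::socket_type::rep);
    registration.bind("tcp://*:5555");                          // assumed port

    // Socket on which "new" real clients ask where to go next.
    zmq::socket_t directory(ctx, zmq::socket_type::rep);
    directory.bind("tcp://*:5556");                             // assumed port

    // 1) Handshake: the worker advertises something like "ws://10.0.0.7:9000".
    zmq::message_t hello;
    if (!registration.recv(hello, zmq::recv_flags::none))
        return 1;
    const std::string worker_endpoint = hello.to_string();
    registration.send(zmq::str_buffer("OK"), zmq::send_flags::none);

    // 2) Every client that asks is told the worker's advertised endpoint.
    while (true) {
        zmq::message_t request;
        if (!directory.recv(request, zmq::recv_flags::none))
            continue;
        directory.send(zmq::buffer(worker_endpoint), zmq::send_flags::none);
    }
}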
I am trying to create a web server of my own, and I have several questions about how the web servers we use today work. The questions are:
After receiving an HTTP request from a client through port 80, does the server respond using the same port 80?
If yes, then while sending a large file, say a picture several MB in size, will the web server be unable to receive requests from other clients?
Is a computer port duplex or simplex? (Can it send and receive at the same time?)
If another port on the server side is used to send the response to the client, then (if TCP is used, which it generally is) another 3-way handshake will be done, which is overhead...
http://beej.us/guide/bgnet/output/html/singlepage/bgnet.html is a good guide on what's going on with web servers; although it's in C, the concepts are all there. It explains the whole client-server relationship as well as some implementation details.
I'll just give a high-level view of what's going on:
Usually what happens is that when your server gets a new incoming request, it forks a child process to handle it, so that the server is not bogged down by each individual request; when the request comes in, the child process is handed a new file descriptor to write to (again, this is all implementation detail).
So really you have one server waiting for requests, and for each request it receives it spawns a child process to deal with that request (a sketch of this fork-per-connection pattern appears at the end of this answer). I'm sure there are much easier languages than C to implement this in (I have had to write both a C and a Java server in the past), but C really makes you understand what is going on, and I'm betting that is what you are looking for here.
Now there are a couple of things to think about:
How you want the web server to work. The example explains the parent/child process model.
Whether you want to use TCP or UDP; there are differences in the way the payload gets delivered.
You don't have to connect on port 80; that's just the default for the web.
Hopefully the guide will help you.
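As a rough illustration of that fork-per-connection pattern (plain POSIX sockets, an illustrative port instead of 80, a hard-coded reply, and no error handling):

#include <arpa/inet.h>
#include <csignal>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

int main()
{
    int listener = socket(AF_INET, SOCK_STREAM, 0);
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(8080);                        // illustrative port, not 80
    bind(listener, reinterpret_cast<sockaddr*>(&addr), sizeof(addr));
    listen(listener, 16);
    std::signal(SIGCHLD, SIG_IGN);                      // let the kernel reap finished children

    while (true) {
        int client = accept(listener, nullptr, nullptr);  // one fd per connection
        if (fork() == 0) {                              // child: serve this one client
            close(listener);
            // A real server would read and parse the HTTP request here.
            const char reply[] = "HTTP/1.0 200 OK\r\nContent-Length: 2\r\n\r\nok";
            write(client, reply, sizeof(reply) - 1);
            close(client);
            _exit(0);
        }
        close(client);                                  // parent: loop back to accept()
    }
}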
Yes. The server sends the response using the TCP connection established by the client, so it also responds using the same port. The server can handle connections from multiple clients using the same port because TCP connections are identified by the tuple (local-ip, local-port, remote-ip, remote-port), so the server can even handle multiple connections from the same client, provided that the source ports are different.
There are different techniques you can use to serve multiple clients at the same time. These include:
using multiple processes or threads: when one is busy serving a client, the others can serve other clients.
using events: the server listens for events from the OS: when it can write a block of data to a connection it writes it, when a new client connects it accepts the connection, and so on (a small select()-based sketch follows below).
Frequently both approaches are combined.
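A compact sketch of that event-driven variant using select() (again POSIX sockets, an illustrative port, and no error handling): one process watches the listening socket and all client sockets at once.

#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/select.h>
#include <sys/socket.h>
#include <unistd.h>
#include <set>

int main()
{
    int listener = socket(AF_INET, SOCK_STREAM, 0);
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(8080);                        // illustrative port
    bind(listener, reinterpret_cast<sockaddr*>(&addr), sizeof(addr));
    listen(listener, 16);

    std::set<int> clients;
    while (true) {
        fd_set readable;
        FD_ZERO(&readable);
        FD_SET(listener, &readable);
        int maxfd = listener;
        for (int fd : clients) { FD_SET(fd, &readable); if (fd > maxfd) maxfd = fd; }

        select(maxfd + 1, &readable, nullptr, nullptr, nullptr);  // wait for activity

        if (FD_ISSET(listener, &readable))              // new connection arrived
            clients.insert(accept(listener, nullptr, nullptr));

        for (auto it = clients.begin(); it != clients.end(); ) {
            char buf[512];
            if (FD_ISSET(*it, &readable) && read(*it, buf, sizeof(buf)) <= 0) {
                close(*it);                             // client went away
                it = clients.erase(it);
            } else {
                ++it;                                   // a real server would parse the request here
            }
        }
    }
}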
A TCP connection is duplex: you can send and receive at the same time. The HTTP protocol is based on a simple request-response model though: at any given time only one party is "talking."
I need to create an application that:
Has one server
With a client that connects to the server and sends 8 longs (data from 8 sensors: rain, air humidity, wind speed, ...), one long per sensor reading (the sensor data is acquired from a custom USB device)
User clients. The end user runs this type of client to connect to the server and retrieve the sensor data.
I have used Qt before, creating client-server applications with just one type of client. And I managed to create this application too, just at a smaller scale (I used 5 words, and the clients were connected to the server simultaneously). I used the Qt network examples: the threaded fortune server, http://goo.gl/srypT and the blocking fortune client example.
How can I identify which client is which (since they have a different IP every time they connect to the internet)? In my small-scale application I created some kind of protocol, but there must be a more efficient way to do this.
I assume that you want to identify the client type ("sensor client" vs. "user client"), not individual client instances.
The straightforward way to do this is to implement a protocol, as mentioned in the question. For your use case this could be very simple (a sketch follows the list below):
let the "sensor client" send a "write" command (a single character like "w" would be sufficient) followed by your sensor data. The server then receives the "w" command and knows that it needs to read sensor data from the client.
let the "user client" send a "read" command (e.g. the character "r"). When the server receives the "r" command it knows that it needs to send data to the client.
If, for whatever reason, you do not want to implement even such a simple protocol, you could also set up two separate QTcpServer instances which listen on different ports, let's say 8192 and 8193. Your "sensor client" would then connect to port 8192, and the server knows from the port number that the client will send data. Your "user clients" would connect to port 8193, and the server knows that those clients expect data and will send them the required data.
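The two-port alternative could look roughly like this (same assumed port numbers as above; the per-server newConnection handling is left out):

#include <QCoreApplication>
#include <QHostAddress>
#include <QTcpServer>

int main(int argc, char *argv[])
{
    QCoreApplication app(argc, argv);

    // Whatever connects to 8192 is treated as a sensor client (it will send
    // data); whatever connects to 8193 is treated as a user client (it wants
    // data). No in-band protocol is needed to tell them apart.
    QTcpServer sensorServer;
    QTcpServer userServer;
    sensorServer.listen(QHostAddress::Any, 8192);
    userServer.listen(QHostAddress::Any, 8193);

    // Connect each server's newConnection() signal to its own handler here.
    return app.exec();
}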
In any case, you should be aware that there is no authentication and authorization involved, and any client who knows the simple protocol and/or the port numbers can send and receive data.
To identify a client, you have to use some kind of client ID. Usually some kind of hash (an MD5 digest, a UUID or a GUID) is used as the client ID. This client ID has to be sent from the client to the server when the client connects.
What happens after the client has been identified and accepted depends on the type of connection (protocol). If you use a stateful protocol, the same connection is kept open as long as the client uses it, so there is no need to re-identify the client. If you use a stateless protocol (HTTP, for example), you will have to re-send the same ID from the client to the server every time the client requests data (that is: a document, a page, etc.) from the server.
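In Qt, generating and sending such an ID on the client could be as small as this sketch (persisting the ID between runs, for example via QSettings, is left out, and the newline delimiter is just an assumption about the protocol):

#include <QTcpSocket>
#include <QUuid>

// Hypothetical client-side snippet: create a unique client ID once and send it
// as the first thing after connecting, so the server can recognise this client
// regardless of its current IP address.
void identifyToServer(QTcpSocket *socket)
{
    static const QByteArray clientId =
        QUuid::createUuid().toByteArray();   // e.g. "{xxxxxxxx-xxxx-...}"
    socket->write(clientId);
    socket->write("\n");                     // simple delimiter, assumed by the server
}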
A simpler and more efficient way to deal with a client/server architecture like this is to use an existing, proven server of some kind. For example, you could use a RESTful web framework like Wt (http://www.webtoolkit.eu/wt/blog), given that you are already using C++.
Even better, I would use a Ruby- or a Python-based RESTful web service framework like:
http://www.sinatrarb.com/
http://bottlepy.org/docs/dev/
http://flask.pocoo.org/
Or the new Ruby-on-Rails API:
http://blog.steveklabnik.com/posts/2012-11-22-introducing-the-rails-api-project
https://github.com/rails-api/rails-api
Developing the server in Ruby or Python is much faster and easier. The client can be developed in any way (C++ with Qt, JavaScript in a web browser, and many other ways).
I have a defined number of servers that can locally process data in their own way. But after some time I want to synchronize some states that are common to all servers. My idea was to establish a TCP connection from each server to every other server, like a mesh network.
My problem is: in what order do I make the connections, since there is no "master" server here and each server is responsible for creating its own connections to the other servers?
My idea was to make each server connect, and if the server being connected to already has a connection to the connecting server, then just drop the new connection.
But how do I handle the fact that two servers may try to connect at the same time? Because then I get two TCP connections instead of one.
Any ideas?
You will need a handshake protocol when you're connecting to a server so you can verify whether it's OK to start sending/receiving data; otherwise you might end up with one endpoint connecting and immediately starting to send data, only to have the other end drop the connection.
For ensuring that only one connection is up to a given server, you just need something like this pseudocode:
remote_server = accept_connection()
lock(mutex)                                  // guard the shared connection table
if (already_connected(remote_server)) {
    drop_connection(remote_server)           // keep the existing link, close the new one
}
unlock(mutex)
If your server isn't multithreaded you don't need any locks to guard you when you check whether you're already connected - as there won't be any "at the same time" issues.
You will also need a retry mechanism, based on a small random grace period, to reconnect to a server in the event the remote server closed the connection you just set up.
If the connection got closed, you wait a little while, check whether you're already connected (maybe the other end set up a connection to you in the meantime) and try to connect again. This avoids the situation where both ends set up a connection at the same time and each closes the other's because of the above logic.
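A tiny sketch of that random grace period (the delay bounds are arbitrary assumptions):

#include <chrono>
#include <random>
#include <thread>

// Wait a small, randomised amount of time before retrying the connection, so
// two peers that just closed each other's links don't immediately collide again.
void waitRandomGracePeriod()
{
    static std::mt19937 rng{std::random_device{}()};
    std::uniform_int_distribution<int> delayMs(100, 1000);   // assumed bounds
    std::this_thread::sleep_for(std::chrono::milliseconds(delayMs(rng)));
}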
Just as an idea: each server accepts a connection and then finds out that it has two TCP connections to the same peer. One of the connections is then chosen to be closed; you just need to implement the rule for choosing which one. For example, both servers could compare their names (or their IP addresses, or their UIDs), and the connection initiated by the server whose name (or UID) is smaller is the one that gets closed.
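Sketched as a deterministic predicate that both peers can evaluate and agree on (the string IDs are just illustrative; any comparable identifier works):

#include <string>

// Returns true if this particular connection should be closed. Both ends run
// the same rule, so they agree on which of the two duplicate links survives:
// the link initiated by the peer with the smaller ID is the one that is closed.
bool shouldCloseDuplicate(const std::string &myId,
                          const std::string &peerId,
                          bool iInitiatedThisConnection)
{
    const std::string &initiatorId = iInitiatedThisConnection ? myId : peerId;
    const std::string &acceptorId  = iInitiatedThisConnection ? peerId : myId;
    return initiatorId < acceptorId;
}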
While a better solution implies making a separate "LoadBalancer" to which all your servers are connected, here is a small suggestion to make sure that connections are not created simultaneously.
Your servers can start their connection attempts at different times by using
bool CreateConnection = (time(NULL) % i == 0);
if (CreateConnection) { ... }
where i is the ID of the particular server.
and time() could return seconds or fractions of a second, depending on your requirements.
This makes it much less likely that two servers try to connect to each other at the same time. If you do not have IDs for the servers, you can use a random value.