The state of affairs: a GUI program, implemented in Java or C#, communicates with a handful of devices (low single digits) via TCP/IP. Connections are GUI <-> device only, never device <-> device.
Network traffic tends to be low, but the devices tend to send their messages in a somewhat synchronized manner.
The GUI takes a message, reads it, and updates a simple graphical representation of the data. Its CPU usage hovers around 5%.
Although we've yet to see any networking problems, we're thinking of adding send and receive queues to the GUI program.
Is this ...
entirely nonsensical?
well-advised?
urgently needed?
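For what it's worth, the decoupling such queues buy can be sketched in a few lines of Java. This is illustrative only; the class and method names are invented:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Sketch: per-device reader threads enqueue raw messages; the GUI thread
// drains the queue on its own schedule, so synchronized bursts from the
// devices cannot stall either side. All names here are illustrative.
public class DeviceMessageQueue {
    private final BlockingQueue<String> inbound = new LinkedBlockingQueue<>(1000);

    // Called from the per-device reader thread.
    public boolean offer(String message) {
        return inbound.offer(message); // drops when full instead of blocking the reader
    }

    // Called from the GUI/update thread; returns null when nothing is pending.
    public String poll() {
        return inbound.poll();
    }

    public int pending() {
        return inbound.size();
    }
}
```

The GUI thread would call `poll()` on its repaint timer; the reader threads never touch the UI directly.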
Related
I want multiple IoT devices (say 50) communicating with a server directly and asynchronously via TCP. Assume all of them send a heartbeat every 30 seconds and may drop off and reconnect at variable times.
Can anyone advise me on the best way to make sure no data is dropped or blocked when multiple devices communicate simultaneously?
TCP by itself ensures no data loss during communication between a client and a server; it does so using sequence numbers and ACK messages.
Technically, before the actual data transfer happens, a TCP connection is established between the client (which can be an IoT device or anything else) and the server. The data is then split into multiple segments and sent over the network through that connection. All TCP mechanisms, such as flow control, error detection, and congestion control, take place once the data starts to flow.
The Wikipedia page for TCP is a pretty good place to start if you want to learn more about how it works.
Apart from that, as long as your server has enough capacity to support the flow of requests coming from the devices, then everything should work (at least in theory).
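To make the "enough capacity" point concrete, here is a minimal thread-per-connection sketch in Java (illustrative only; real code needs timeouts, limits, and error handling — echoing each line back stands in for actual processing):

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

// Sketch: one thread per device connection. TCP handles retransmission and
// in-order delivery per connection; the server just keeps accepting and
// reading. A dropped device simply reconnects and gets a fresh socket.
public class DeviceServer {
    public static void serve(ServerSocket server) throws Exception {
        while (true) {
            Socket device = server.accept();
            Thread t = new Thread(() -> handle(device));
            t.setDaemon(true);
            t.start();
        }
    }

    static void handle(Socket device) {
        try (Socket s = device;
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(s.getInputStream()));
             PrintWriter out = new PrintWriter(s.getOutputStream(), true)) {
            String line;
            while ((line = in.readLine()) != null) {
                out.println(line); // acknowledge; real code would parse/validate
            }
        } catch (Exception e) {
            // device dropped off; it will reconnect on a fresh socket
        }
    }
}
```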
I don't think you are asking the right question. There is no way to make sure that no data is dropped or blocked; networks do not always work (the word "work" is in "network" only to convince you otherwise).
The right question is: how do I make my distributed system as available and reliable as possible? The answer involves viewing interruption and congestion as part of normal operation, and building your software accordingly.
There is a timeless USENIX/ACM(?) paper from the late 70s or early 80s that popularized the notion that end-to-end protocols are much more effective than over-featured middle-to-middle protocols, and that most middle-to-middle guarantees amount to best effort. If you rely on those guarantees, you are bound to fail. Sorry, I cannot find the reference right now, but it is widely cited.
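Building the software so that interruption is normal operation usually starts with a reconnect loop using capped exponential backoff. A minimal sketch (the class name and constants are invented):

```java
// Sketch: treat disconnection as routine. The client reconnects with
// exponential backoff, capped, rather than assuming the link stays up.
// A real reconnect loop would call this between connection attempts:
//   while (true) { try { connectAndRun(); } catch (Exception e) {
//       Thread.sleep(Reconnector.backoffMillis(failures++)); } }
public class Reconnector {
    // Backoff for the nth consecutive failure: 1s, 2s, 4s, ... capped at 30s.
    public static long backoffMillis(int failures) {
        long delay = 1000L << Math.min(failures, 5); // 2^failures seconds
        return Math.min(delay, 30_000L);
    }
}
```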
I've noticed that I'm losing a lot of server time in calls to send(). I have a service which sends high-frequency small packets to a moderate number of clients (>10, <100).
I suspect that recent OS patches, which have made syscalls slow, might be contributing.
I feel like I want a way to hand a bunch of sockets/packets to a single syscall and have it do the many send() calls internally, reducing per-call overhead.
Does anything like this exist? Are there any hidden or niche APIs available?
Linux has batching send functions: sendmmsg(2) transmits multiple messages on a single socket in one syscall, and recvmmsg(2) is its receive-side counterpart. Note the batching is per socket, not across sockets.
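The batching syscalls referred to above (sendmmsg(2)/recvmmsg(2)) batch per socket. If you are on the JVM rather than in C, the closest stdlib analog is a gathering write (writev under the hood): several buffers, one call, one channel. A small sketch, shown against a FileChannel for determinism; the mechanics are identical for socket channels:

```java
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;

// Sketch: a gathering write hands several buffers to the kernel in one call
// (writev under the hood). Like sendmmsg(2), it amortizes syscall overhead,
// but only within a single channel.
public class GatheringWriteDemo {
    public static long writeAll(FileChannel ch, byte[][] packets) throws Exception {
        ByteBuffer[] bufs = new ByteBuffer[packets.length];
        for (int i = 0; i < packets.length; i++) {
            bufs[i] = ByteBuffer.wrap(packets[i]);
        }
        return ch.write(bufs); // one call, many buffers
    }
}
```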
I'm going to implement a mobile game for Android and iOS in which each player's actions need to be broadcast to other nearby players (nearby in the game) through a server that checks whether the actions are permitted. The requirement is that other players be notified of these actions as soon as possible. The actions don't need to be delivered in the order they were sent, but delivery should be reliable.
I'm considering using ZeroMQ to implement that. Nearby players could subscribe to the same topic and publish/consume messages that contain other players' actions. Using a message queue seems very attractive compared to implementing the communication with some kind of RPC. I have the following doubts, though:
Would ZeroMQ work well over a cellular network, which isn't very reliable?
ZeroMQ doesn't support sending messages over UDP, only TCP. I don't require the messages to be received in order: if a message is lost, I'd like the receiver to be able to process the messages that followed the lost one without waiting until the lost one is resent. Is it possible to achieve that with ZeroMQ?
As an alternative, I was considering using ProtoBuf with Netty, for example, with UDP plus reliability implemented on top. However, this would be more work, and I'm not sure I'd achieve better performance than ZeroMQ, which is considered excellent in that regard.
Actually, is UDP communication over a cellular network/the Internet a good idea at all? Wouldn't there be issues with operators' firewalls, NATs, and the like? I'm assuming it should be fine, since the communication goes through a public server, not peer to peer.
Is there any fast and reliable message queue that supports UDP?
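For what it's worth, the receive-side behavior described above (process whatever arrives, never wait on a gap) is simple to state in code. A toy Java sketch, independent of any particular transport or queue product; all names are invented:

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Sketch: each datagram carries a sequence number. We deliver every message
// as soon as it arrives (no head-of-line blocking) and drop duplicates; a
// lost message is simply never delivered unless the sender retransmits it.
public class LossyOrderlessReceiver {
    private final Set<Long> seen = new HashSet<>();
    private final List<String> delivered = new ArrayList<>();

    public void onDatagram(long seq, String payload) {
        if (seen.add(seq)) {
            delivered.add(payload); // deliver immediately, gaps and all
        }
    }

    public List<String> delivered() {
        return delivered;
    }
}
```

This is exactly the delivery semantics TCP (and therefore stock ZeroMQ) cannot give you, since TCP stalls later data behind a retransmitted gap.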
Here's my scenario:
In my application I have several processes which communicate with each other using QuickFIX, which internally uses TCP sockets. The flow is:
Process 1 sends a QuickFIX message -> Process 2 processes it and sends its own message -> ... -> Process n
Similarly, the acknowledgement messages flow back:
Process n -> ... -> Process 1
All of these processes except the last one (process n) are on the same machine.
I googled and found that TCP sockets are among the slowest IPC mechanisms.
So, is there a way to transmit and receive QuickFIX messages (obviously using their APIs) through other IPC mechanisms? If yes, I could reduce latency by using that mechanism between all the processes that share a machine.
However, if I do so, do those mechanisms guarantee delivery of the complete message the way TCP sockets do?
I think you are doing premature optimization, and I don't think that TCP will be your performance bottleneck. Your local LAN latency will be faster than that of your exterior FIX connection. From experience, I'd expect perf issues to originate in your app's message handling (perhaps due to accidental blocking in OnMessage() callbacks) rather than the IPC stuff going on afterward.
Advice: write your communication component behind an abstraction-layer interface so that later you can swap out TCP for something else (e.g. ActiveMQ, ZeroMQ, or whatever else you may consider) if you decide you need it.
Aside from that, just focus on making your system work correctly. Once you are sure the behavior is correct (hopefully with tests to confirm it), then you can work on performance. Measure your performance before making any optimizations, and then measure again after your "improvements". Don't trust your gut; get numbers.
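One way to cash out the abstraction-layer advice: hide the transport behind a tiny interface, with an in-memory loopback implementation you can test against before any real transport exists. All names here are invented:

```java
import java.util.function.BiConsumer;

// Sketch of an abstraction layer: the rest of the application talks to
// MessageTransport, so TCP can later be swapped for ZeroMQ, ActiveMQ,
// shared memory, etc. without touching business logic.
interface MessageTransport {
    void send(String destination, byte[] payload);
    void onReceive(BiConsumer<String, byte[]> handler);
}

// In-memory loopback implementation: handy for correctness tests long
// before the real transport is chosen.
public class LoopbackTransport implements MessageTransport {
    private BiConsumer<String, byte[]> handler = (d, p) -> {};

    public void send(String destination, byte[] payload) {
        handler.accept(destination, payload); // deliver synchronously in-process
    }

    public void onReceive(BiConsumer<String, byte[]> h) {
        this.handler = h;
    }
}
```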
Although it would be good to hear more details about the requirements associated with this question, I'd suggest looking at a shared memory solution. I'm assuming that you are running a server in a colocated facility with the trade matching engine and using high speed, kernel bypass communication for external communications. One of the issues with TCP is the user/kernel space transitions. I'd recommend considering user space shared memory for IPC and use a busy polling technique for synchronization rather than using synchronization mechanisms that might also involve kernel transitions.
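A minimal illustration of the shared-memory-plus-busy-polling idea, using Java's MappedByteBuffer. This is a toy with invented names: a real implementation needs proper memory fences (e.g. VarHandles) and a ring buffer rather than a single slot:

```java
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

// Sketch: two processes map the same file; the writer publishes a value and
// flips a flag, and the reader busy-polls the flag instead of making
// blocking syscalls, avoiding user/kernel transitions on the hot path.
public class SharedMemoryChannel {
    private final MappedByteBuffer buf;

    public SharedMemoryChannel(String path) throws Exception {
        try (RandomAccessFile f = new RandomAccessFile(path, "rw");
             FileChannel ch = f.getChannel()) {
            // 16 bytes: int flag at offset 0, long payload at offset 8.
            buf = ch.map(FileChannel.MapMode.READ_WRITE, 0, 16);
        }
    }

    public void publish(long value) {
        buf.putLong(8, value); // payload first...
        buf.putInt(0, 1);      // ...then the "ready" flag
    }

    public long busyPollRead() {
        while (buf.getInt(0) == 0) {
            Thread.onSpinWait(); // burn CPU instead of taking a kernel transition
        }
        buf.putInt(0, 0); // consume the slot
        return buf.getLong(8);
    }
}
```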
I'm developing a multi-player game and I know nothing about how to connect from one client to another via a server. Where do I start? Are there any whizzy open source projects that provide the communication framework into which I can drop my message data, or do I have to write a load of complicated multi-threaded socket code? Does the picture change at all if the clients are running on phones?
I am language agnostic, although ideally I'd have a Flash or Qt front end and a Java server, but that may be a bit greedy.
I have spent a few hours googling, but the whole topic is new to me and I'm a bit lost. I'd appreciate help of any kind - including how to tag this question.
If latency isn't a huge issue, you could just implement a few web services to do the message passing. This would not be as slow as you might think, and it is easy to implement across languages. The downside is that the client has to poll the server for updates, so you could be looking at a few hundred milliseconds to get from one client to another.
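The polling model reduces to an append-only log that clients query with their last-seen index. A toy sketch in Java (names invented; in practice `pollSince` would sit behind a web-service endpoint):

```java
import java.util.ArrayList;
import java.util.List;

// Toy sketch of the polling model: the server keeps an append-only log of
// game events, and each client polls with the index of the last message it
// has seen, receiving everything newer.
public class MessageBoard {
    private final List<String> log = new ArrayList<>();

    public synchronized void post(String message) {
        log.add(message);
    }

    // A client polls with its last-seen index and gets everything since.
    public synchronized List<String> pollSince(int lastSeen) {
        return new ArrayList<>(log.subList(lastSeen, log.size()));
    }
}
```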
You can also use the built-in Flex messaging interface. There are provisions there to allow client-to-client interactions.
Typically game engines send UDP packets because of latency. The fact is that TCP is often just not fast enough for real-time games, and reliability is less of a concern than speed is.
Web services would compound the latency issues inherent in TCP due to additional overhead. Further, they would eat up memory, depending on the number of expected players. Finally, they carry a large amount of payload overhead that you just don't need (XML, anyone?).
There are several ways to go about this. One way is centralized messaging (client/server). This means you would have a Java server listening for UDP packets from the clients and rebroadcasting them to the relevant users.
A second way is decentralized (peer to peer). A client registers with the server to state what game/world it's in, and gets back a list of the other clients in that world. The server maintains that list and notifies the clients when people join or drop out.
From that point forward, clients broadcast UDP packets directly to the other users.
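The centralized variant is little more than a receive-and-rebroadcast loop. A rough Java sketch (no error handling; class and method names invented):

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.SocketAddress;
import java.util.ArrayList;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

// Sketch of a centralized UDP relay: every sender is remembered, and each
// incoming packet is rebroadcast to everyone except its original sender.
public class UdpRelay {
    private final Set<SocketAddress> clients = new LinkedHashSet<>();

    // Track every client we have heard from.
    public void register(SocketAddress client) {
        clients.add(client);
    }

    // Everyone except the original sender receives the rebroadcast.
    public List<SocketAddress> recipientsFor(SocketAddress sender) {
        List<SocketAddress> out = new ArrayList<>(clients);
        out.remove(sender);
        return out;
    }

    // Blocking relay loop over a real socket.
    public void serve(DatagramSocket socket) throws Exception {
        byte[] buf = new byte[1500];
        while (true) {
            DatagramPacket p = new DatagramPacket(buf, buf.length);
            socket.receive(p);
            register(p.getSocketAddress());
            for (SocketAddress dst : recipientsFor(p.getSocketAddress())) {
                socket.send(new DatagramPacket(p.getData(), p.getLength(), dst));
            }
        }
    }
}
```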
If you are looking for a high-performance communication framework, take a look at the ACE C++ framework (it has Java bindings).
The official website is: http://www.cs.wustl.edu/~schmidt/ACE-overview.html
You could also look into Flash Media Interactive Server or, if you want a Java implementation, Wowza or Red5. Those use AMF and provide native support for SharedObjects, including syncing the SharedObjects among connected clients.
Those aren't peer to peer, though (yet; it's coming soon, I hear). They use centralized messaging managed by the server.
Good luck