I have 5 applications communicating between Raspberry Pis in a project: one "server" will talk to the other 4 as clients. Which generates more traffic here, and which should I use: plain TCP/IP or MQTT? Which is the more suitable option? Can you help?
Good work...
It depends on several factors...
If your needs are for simplicity of code and design, probably MQTT.
If your communication is large streams of data and point-to-point rather than one-to-many, probably TCP.
Then there's latency to think about... acknowledgement of messages... whether the system should still be able to run with just 3 RasPis... whether it should be expandable to more than 5 RasPis.
For the situation you describe, the main concern is latency.
If the communication consists of large data streams rather than many small messages, TCP should generally be preferred.
Agreeing with the previous answer: if the goal is to minimize code complexity, MQTT would be quite appropriate.
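To make the MQTT option concrete, here is a minimal sketch (not from the original poster) assuming a broker such as Mosquitto running on the server Pi and the paho-mqtt Python client; the broker address and topic names are invented for illustration.

```python
# Minimal MQTT sketch, assuming a broker (e.g. Mosquitto) on the "server" Pi
# and the paho-mqtt client library. Broker address and topics are examples.
import paho.mqtt.client as mqtt
import paho.mqtt.publish as publish

BROKER = "192.168.1.10"   # hypothetical address of the Pi running the broker

# --- on each client Pi: publish a reading ---
def publish_reading(value):
    publish.single("pis/pi3/temperature", payload=str(value),
                   hostname=BROKER, qos=1)

# --- on the server Pi: subscribe to everything the clients publish ---
def on_message(client, userdata, msg):
    print(msg.topic, msg.payload.decode())

def run_server():
    client = mqtt.Client()            # paho-mqtt 1.x style constructor
    client.on_message = on_message
    client.connect(BROKER, 1883)
    client.subscribe("pis/#", qos=1)  # wildcard: all client topics
    client.loop_forever()
```

The broker handles fan-out, reconnection, and (with QoS 1) acknowledgement, which is most of what you would otherwise hand-roll on top of raw TCP.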
I want multiple IoT devices (say 50) communicating with a server directly and asynchronously via TCP. Assume all of them send a heartbeat pulse every 30 seconds and may drop off and reconnect at variable times.
Can anyone advise me on the best way to make sure no data is dropped or blocked when multiple devices are communicating simultaneously?
TCP by itself ensures no data loss during the communication between a client and a server. It does that by the use of sequence numbers and ACK messages.
Technically, before the actual data transfer happens, a TCP connection is created between the client (which can be an IoT device, or any other device) and the server. Then, the data is split into multiple packets and sent over the network through that connection. All TCP-related mechanisms like flow-control, error-detection, congestion-detection, and many others, take place once the data starts to flow.
The wiki page for TCP is a pretty good start if you want to learn more about how it works.
Apart from that, as long as your server has enough capacity to support the flow of requests coming from the devices, then everything should work (at least in theory).
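As a rough illustration of the above, here is a minimal asyncio-based TCP server sketch that handles many devices concurrently; the port, the newline-delimited framing, and the application-level ACK are assumptions for illustration, not part of the original question.

```python
# Minimal asyncio TCP server sketch: each device gets its own connection
# handler, so 50 heartbeating clients can be served concurrently.
# The newline-delimited message format and the ACK reply are assumptions.
import asyncio

async def handle_device(reader, writer):
    addr = writer.get_extra_info("peername")
    try:
        while True:
            line = await reader.readline()   # one heartbeat/message per line
            if not line:                     # device disconnected
                break
            print(f"{addr} -> {line.decode().strip()}")
            writer.write(b"ACK\n")           # application-level acknowledgement
            await writer.drain()
    finally:
        writer.close()
        await writer.wait_closed()

async def main():
    server = await asyncio.start_server(handle_device, "0.0.0.0", 9000)
    async with server:
        await server.serve_forever()

asyncio.run(main())
```

Devices that drop off simply cause their handler to exit; when they reconnect they get a fresh handler, so intermittent clients do not block the others.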
I don't think you are asking the right question. There is no way to make sure that no data is dropped or blocked. Networks do not always work (that is why the word "work" is in "network", to convince you otherwise).
The right question is: how do I make my distributed system as available and reliable as possible? The answer involves viewing interruption and congestion as part of normal operation, and building your software appropriately.
There is a timeless USENIX/ACM(?) paper from the late 70s or early 80s that invigorated the notion that end-to-end protocols are much more effective than over-featured middle-to-middle protocols, and that most middle-to-middle guarantees amount to best effort. If you rely upon those guarantees, you are bound to fail. Sorry, I cannot find the reference right now, but it is widely cited.
I don't have much experience with network programming, but an interesting problem came up that requires it. The server will be transmitting multiple streams of different types of data to other machines. Each machine should be able to choose which of the streams (one or more) it would like to receive. The whole setup is confined to the local network only. Initially, there will be only two clients, but I would like to design a scalable approach, if possible.
The existing server code, which streams only a single stream, uses a TCP streaming socket to do so. However, from some reading on the subject, I am not sure this approach will scale well to multiple streams and multiple clients. The reason: wouldn't two clients who want to receive the same stream, but connect via different TCP sockets, result in wasted bandwidth? Especially compared to UDP, which allows multicast.
Due to my inexperience, I am relying on better-informed people out there to advise me: considering that I do want the stream to be reliable, would it be worth it to start from scratch with UDP and implement reliability on top of it, rather than to keep using TCP? Or will this be better solved by designing an appropriate network structure? I'd be happy to provide more details if needed. Thanks.
UPDATE: I am looking at PGM and Emcaster for reliable multicasting at the moment. It must have a C# implementation on the server side and a Python implementation on the client side.
Since you want a scalable program, UDP would be a better choice, because it does not go to the extra length of verifying that the data has been received, which makes sending data faster.
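If you do try the multicast route, a bare-bones (and deliberately unreliable) UDP multicast sender/receiver in Python looks roughly like this; the group address and port are arbitrary examples, and any reliability layer (e.g. PGM or Emcaster, as mentioned in the question) would still have to sit on top.

```python
# Bare-bones UDP multicast sketch (no reliability layer). The group
# address/port are arbitrary examples from the administratively scoped range.
import socket
import struct

GROUP, PORT = "239.1.2.3", 5007

def send(data: bytes):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
    sock.sendto(data, (GROUP, PORT))

def receive():
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))
    # join the multicast group on all interfaces
    mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    while True:
        data, addr = sock.recvfrom(65535)
        print(addr, data)
```

With this, the server sends each stream once regardless of how many clients are listening, which is the bandwidth advantage the question is asking about; the cost is that loss detection and retransmission become your problem.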
What are the advantages of using multiple ports in a game? I understand why some would use a combination of TCP and UDP for different purposes, but why do some games use multiple TCP or UDP ports? Is there any advantage to this? I am asking because I find myself writing networking code for my game, and I wonder why others go out of their way to use multiple ports.
For example, GTA V uses 5 UDP ports and Assassin's Creed Revelations uses 4 TCP and 4 UDP ports.
There is always a reason.
Quite often they are not (entirely) technical. For instance one team is working on the inter-game chat functionality while another is working on the server-client protocol for game X. Then they are integrated into the same product, but nobody bothers unifying the protocols due to costs, time constraints, concerns regarding future maintainability etc.
There are also purely technical reasons:
If the game server and the chat server run at different locations, it is natural to use multiple connections; the alternative is to use a sort of reverse-NAT box on the server side, but that is risky since it is a bottleneck and a single point of failure.
Stability: if the chat server crashes or malfunctions you don't want it to also bring down the stream between the client and the game server, so it's safer to communicate on parallel connections.
Overall it's a classic example of theory meets practice: it would be great to use a single port (and connection), but it is more practical to separate the different transmissions and interactions for a variety of reasons.
A bit off topic, have you noticed how many connections are opened when accessing a web page nowadays? It's often several dozens. Would probably be an order of magnitude less without all the ads though. Anyways, compared to that, a game opening 5 connections is nothing.
I am working on a project which requires sensor information to be obtained from multiple embedded devices so that it may be used by a master machine. The master currently has classes which contain backing fields for each sensor. Data is continuously read on each sensor and a packet is then written and sent to the master to update that sensor's backing field. I have little experience with TCP/UDP so I am not sure which protocol would work better with this setup.
I am currently using TCP to transfer the data because I am worried about data from our rotary encoders being received out of order. Since my experience with this topic is limited, I am not sure if this is a valid concern.
Does anyone with experience in this area know any reasons that I should prefer one approach over the other?
How much do you care about knowing that a packet was delivered?
How much do you care about knowing that a delivered packet was 100% correct?
How much do you care about the order of packet delivery?
How much do you care about knowing whether the peer is currently connected?
If the answers are "I care a lot", you'd prefer to keep using TCP, because it ensures all four points.
The counterpart is that UDP can be more lightweight and faster to handle if you are dealing with small packets.
Anyway, it's not so easy to choose one or the other. Just try.
And read this brief explanation: http://www.cyberciti.biz/faq/key-differences-between-tcp-and-udp-protocols/
I'm no expert but it seems this might be relevant:
Do you care about losing data?
If so, use TCP. Error recovery is automatic.
If not, use UDP. Lost packets are not re-sent. I also believe ordering here is not guaranteed.
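If out-of-order encoder readings are the main worry, one lightweight alternative (a sketch with invented addresses and packet format, not the poster's code) is to tag each UDP datagram with a sequence number and let the master discard anything older than the newest reading it has seen.

```python
# Sketch: tag each UDP sensor datagram with a sequence number so stale or
# out-of-order encoder readings can be discarded on the master.
import socket
import struct

ADDR = ("192.168.0.5", 6000)   # hypothetical master address

# --- embedded device side ---
def send_reading(sock, seq, value):
    # 4-byte unsigned sequence number + 8-byte double, network byte order
    sock.sendto(struct.pack("!Id", seq, value), ADDR)

# --- master side ---
def receive_loop():
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", 6000))
    last_seq = -1
    while True:
        data, _ = sock.recvfrom(1024)
        seq, value = struct.unpack("!Id", data)
        if seq <= last_seq:        # stale or duplicate packet: ignore it
            continue
        last_seq = seq
        print("encoder:", value)
```

For a "latest value wins" backing field this is often enough; if you need every reading, or guaranteed delivery, TCP remains the simpler choice.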
Are there any libraries which put a reliability layer on top of UDP broadcast?
I need to broadcast large amounts of data to a large number of machines as quickly as possible, and generally it seems like such a problem must have already been solved many times over, but I wasn't able to find anything except for the Spread toolkit, which has a somewhat viral license (you have to mention it in all materials advertising the end product, which I'm not sure our customer will be willing to do).
I was already going to write such a thing myself (because it would be extremely fun to do!) but decided to ask first.
I looked also at UDT (http://udt.sourceforge.net) but it does not seem to provide a broadcast operation.
PS I'm looking at something as lightweight as a library - no infrastructure changes.
How about UDP multicast? Have a look at the PGM protocol for which there are several commercial and open source implementations.
Disclaimer: I'm the author of OpenPGM, an open source implementation of said protocol.
Though some research has been done on reliable UDP multicasting, I haven't yet used anything like that. You should take into consideration that this might not be as trivial as it first sounds.
If you don't have a list of nodes in the target network, you have no idea when and to whom to resend, even if active nodes receiving your messages can acknowledge them. Sending to a large number of nodes and expecting acks from all of them might also cause congestion problems in the network.
I'd suggest rethinking the network architecture of your application, e.g. using some kind of centralized solution, where you submit updates to a server and it sends the message to all connected clients. Or, if the original sender node's address is known a priori, then just let clients connect to it, and let the sender push updates via these connections.
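A rough sketch of that centralized fan-out idea, with invented host/port values: clients keep a TCP connection open to a distribution server, and the server pushes every update to all currently connected clients.

```python
# Sketch of a centralized "fan-out" server: clients connect once over TCP and
# the server pushes every update to all currently connected clients.
import asyncio

clients = set()

async def handle_client(reader, writer):
    clients.add(writer)
    try:
        await reader.read()          # keep the connection open until the client closes it
    finally:
        clients.discard(writer)
        writer.close()

async def broadcast(message: bytes):
    # called by whatever produces the data; pushes it to every live connection
    for w in list(clients):
        w.write(message)
        await w.drain()

async def main():
    server = await asyncio.start_server(handle_client, "0.0.0.0", 7000)
    async with server:
        await server.serve_forever()

asyncio.run(main())
```

You lose the single-packet-on-the-wire efficiency of true multicast, but each client gets TCP's reliability for free and there is an obvious place to buffer and resend for clients that reconnect.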
Have a look around the IETF site for RFCs on Reliable Multicast. There is an entire working group on this. Several protocols have been developed for different purposes. Also have a look around Oracle/Sun for the Java Reliable Multicast Service project (JRMS). It was a research project of Sun, never supported, but it did contain Java bindings for the TRAM and LRMS protocols.