I am studying the MQTT and TCP/IP protocols.
I understand that MQTT is built on top of TCP (and therefore TCP/IP),
and yet we talk about MQTT even though we already have the TCP/IP protocol.
Why don't we use TCP/IP instead of MQTT?
Are there any advantages of MQTT that make it a better solution than the TCP/IP protocol?
Which is more reliable, and which requires fewer data packets to establish communication?
(Note: by TCP/IP I mean forming a network between two devices using the plain TCP/IP protocol, as GSM modems do: connect > transfer data > disconnect.)
Are there any advantages of MQTT that make it a better solution than the TCP/IP protocol?
Yes, it offers things TCP doesn't offer, namely an application layer protocol. Other examples of such protocols are FTP, HTTP, SMTP.
You're asking the wrong question. IP makes sure you can send data to another machine, TCP makes sure this data is received in-order and acknowledged, and application-level protocols make sure you can make sense of the data you receive.
Without an application-level protocol, you have no meaningful communication. When a sockets programming example begins with "WriteLine" and "ReadLine" text message exchanges, that in itself is an application-level protocol (albeit a very rudimentary one), namely "client and server exchange text messages ending in a newline".
So, no, you cannot use TCP/IP without an application level protocol, because as soon as you start writing a program sending and/or receiving data, you have at that moment defined an application level protocol.
With its own problems. That's why you shouldn't invent your own protocol, but use existing ones. Pick the one that suits your needs: if you need to publish or subscribe to messages via a broker, use MQTT.
Unless you know very well what you're doing, don't invent your own.
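To make that concrete, here is a minimal publish/subscribe sketch using the paho-mqtt Python library. The broker address and topic are placeholders, and it assumes the 1.x callback API; the point is only how little plumbing MQTT leaves to you compared with a raw socket.

```python
import time
import paho.mqtt.client as mqtt

BROKER = "test.mosquitto.org"        # example public broker (placeholder)
TOPIC = "demo/sensors/temperature"   # placeholder topic

def on_connect(client, userdata, flags, rc):
    client.subscribe(TOPIC)          # subscribe once the broker accepts us

def on_message(client, userdata, msg):
    print(f"{msg.topic}: {msg.payload.decode()}")

client = mqtt.Client()               # paho-mqtt 1.x constructor (2.x also wants a CallbackAPIVersion)
client.on_connect = on_connect
client.on_message = on_message
client.connect(BROKER, 1883)         # MQTT's default TCP port
client.loop_start()                  # network loop runs in a background thread

time.sleep(1)                        # crude wait for CONNACK; a real client would publish from on_connect
client.publish(TOPIC, "21.5")        # publish a sample reading
time.sleep(1)
client.loop_stop()
```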
The benefits of using MQTT over raw TCP/IP far outweigh the data overhead it introduces. Also, MQTT was devised to solve a specific problem: getting sensor data from a remote system that cannot stay connected to the consumer of that data all the time.
If so, do you know examples of what can go wrong in a non-TCP network?
Learning about MQTT, I came across several mentions of the fact that MQTT relies on the TCP/IP stack. For example, from mqtt.org:
MQTT for Sensor Networks is aimed at embedded devices on non-TCP/IP
networks, whereas MQTT itself explicitly expects a TCP/IP stack.
But if you read the reference documents, you won't find anything like that. Moreover, there's a QoS field that can be used for reliable delivery, and its values other than 0 are essentially useless on TCP/IP networks. Right now I see nothing that would prevent me from establishing an MQTT connection over a UNIX pipe, a domain socket, or a UDP socket rather than a TCP socket.
MQTT just needs delivery that is ordered and reliable; it doesn't have to be TCP. SCTP works just fine, for example, but UDP doesn't, because there is no way to guarantee that your large PUBLISH packet, split across multiple UDP datagrams, will arrive in order and complete.
With regard to TCP reliability, in theory what you are saying is true, but in practice, when an application calls write() and receives a successful return, it has no guarantee of when the data actually left the machine for the remote host. All write() (or send()) does is copy the data to the kernel buffers, at which point you have no further control.
The only way to be sure that a message reaches the remote host at the application level is to have the remote host reply.
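A rough sketch of that application-level acknowledgement in Python (the host, port, and "OK" reply convention are made up for the example): the sender only treats the message as delivered once the remote application replies, not when send() returns.

```python
# send() returning success only means the data reached the local kernel
# buffers. The only application-level proof of delivery is an explicit
# reply from the peer. Host and port are placeholders.
import socket

HOST, PORT = "example.com", 5000    # hypothetical remote endpoint

with socket.create_connection((HOST, PORT)) as sock:
    sock.sendall(b"reading:21.5\n")  # success here != delivered to the app
    ack = sock.recv(16)              # block until the peer answers
    if ack.strip() == b"OK":
        print("peer has processed the message")
    else:
        print("no application-level acknowledgement; treat as undelivered")
```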
MQTT-SN (MQTT for Sensor Networks) is the solution for cases where MQTT cannot run over TCP/IP.
MQTT-SN introduces the concept of an MQTT gateway, which bridges such non-TCP/IP networks to a regular MQTT broker.
http://emqttd-docs.readthedocs.io/en/latest/mqtt-sn.html
In TCP, you can differentiate between servers and clients, because servers are those who bind and accept (TCP listeners) and clients just connect to those servers. Both can send and receive.
But, in UDP how do you differentiate between servers and clients? There is no special behavior to differentiate between servers or clients in UDP, right? It seems that we can only classify machines involved in a UDP connection as senders and receivers. A server could be either, or both. It can receive data from many clients or it can send data to many clients (e.g. multicast server).
Please correct me if I am wrong and point me to the correct forum if I posted the question in the wrong one.
Thanks.
There isn't a server or a client with UDP. There are just peers.
Think about UDP as a Sender -> Receiver communication instead of Client <=> Server.
Since UDP is a connectionless protocol, a response from the Receiver may or may not happen. That (among other things) is why TCP is considered more reliable, but slower, than UDP.
http://en.wikipedia.org/wiki/Connectionless_protocol
http://www.diffen.com/difference/TCP_vs_UDP
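A short Python sketch of that Sender -> Receiver view (port and addresses are placeholders, and the two halves would normally run as separate processes or machines): neither side listens or accepts in the TCP sense; each peer just binds a port and exchanges datagrams.

```python
import socket

# "Receiver" peer: no listen()/accept(), just bind and read datagrams.
# The roles are only a matter of who sends first; either side can do both.
recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.bind(("0.0.0.0", 9999))           # placeholder port
data, addr = recv_sock.recvfrom(4096)       # blocks until a datagram arrives
print(f"got {data!r} from {addr}")
recv_sock.sendto(b"reply", addr)            # the "receiver" can answer, too

# "Sender" peer (normally another process or machine):
send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_sock.sendto(b"hello", ("127.0.0.1", 9999))  # no connection established
```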
Not an expert with networking, but this is my understanding.
TCP and UDP are transport protocols, i.e., they deal with how data is transferred between nodes. If you look at the packet structure of both TCP and UDP, you will find that both have a source section and a destination section. Moreover, a physical machine still exists as the source of the information even with UDP. Whether you call it a server or just a client is a decision for the architecture of the system.
So I think you are referring to a level above the transmission of data, i.e., in my understanding, the architecture of a network application. That is where we talk about client-server applications, or perhaps a P2P kind of architecture where multiple physical machines can provide data. So the terminology depends on the context you are referring to.
To answer your question, yes a server and client can exist in both TCP and UDP. Let the architects decide!
Hope it helps!
I'm trying to send large files over UDP from Adobe AIR to C++. While transferring large files, some packets go missing. How can I recover the data from the missing packets? First I connect the client (AIR) to the server (C++) using TCP. After the connection is established, I start the file transfer. I am planning to find out which data is missing via TCP and then resend the missing packets over TCP. Can anybody tell me how I can work out which packets went missing during the transfer? Thank you.
Can you please clarify what is happening? You say you are sending files over UDP yet connecting to the server with TCP; the two protocols are mutually exclusive over a single connection.
UDP does not provide any mechanism for detecting packet loss (that's what TCP is for) so by default you won't be able to work out whether packets have been lost.
You should use TCP for sending files as it manages sending/resending packets for you.
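If you do keep UDP for the bulk transfer, the usual approach is to add your own sequence numbers so the receiver can work out which chunks never arrived and ask for them again over your TCP control connection. A rough Python sketch of that idea, with a made-up chunk size; the total chunk count would have to be communicated separately (e.g. over TCP):

```python
# Number each UDP chunk so the receiver can detect gaps.
import socket
import struct

CHUNK = 1024
HEADER = struct.Struct("!I")             # 4-byte big-endian sequence number

def send_file(path, addr):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    seq = 0
    with open(path, "rb") as f:
        while (block := f.read(CHUNK)):
            sock.sendto(HEADER.pack(seq) + block, addr)
            seq += 1
    return seq                           # total chunks sent; tell the receiver via TCP

def receive(sock, expected_total, timeout=2.0):
    sock.settimeout(timeout)
    received = {}
    try:
        while len(received) < expected_total:
            packet, _ = sock.recvfrom(CHUNK + HEADER.size)
            seq, = HEADER.unpack(packet[:HEADER.size])
            received[seq] = packet[HEADER.size:]
    except socket.timeout:
        pass                             # stop waiting; some chunks were lost
    missing = [i for i in range(expected_total) if i not in received]
    return received, missing             # report `missing` back over TCP for resending
```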
As stated in the Air ServerSocket documentation (http://help.adobe.com/en_US/air/reference/html/flash/net/ServerSocket.html):
All packets [sent over TCP] are guaranteed to arrive (within reason) — any lost packets are retransmitted. In general, the TCP protocol manages the available network bandwidth better than the UDP protocol. Most AIR applications that require socket communications should use the ServerSocket and Socket classes [TCP] rather than the DatagramSocket class [UDP].
See this page for more information on the Air networking classes:
http://help.adobe.com/en_US/air/html/dev/WSb2ba3b1aad8a27b0-181c51321220efd9d1c-8000.html#WS5b3ccc516d4fbf351e63e3d118a9b90204-7cfb
My guess: TCP is slower because it resends a packet when one gets lost, so that's probably why it's slower. But on the other hand, working out which packets were lost and resending them over UDP yourself will also take time...
I would go for TCP instead of UDP.
Like Sly says, UDP seems the wrong tool to use here.
This might be a silly question:
Does HTTP ever use the User Datagram Protocol?
For example:
If one is streaming MP3 or video using HTTP, does it internally use UDP for transport?
From RFC 2616:
HTTP communication usually takes place over TCP/IP connections. The default port is TCP 80, but other ports can be used. This does not preclude HTTP from being implemented on top of any other protocol on the Internet, or on other networks. HTTP only presumes a reliable transport; any protocol that provides such guarantees can be used; the mapping of the HTTP/1.1 request and response structures onto the transport data units of the protocol in question is outside the scope of this specification.
So although it doesn't explicitly say so, UDP is not used because it is not a "reliable transport".
EDIT - more recently, the QUIC protocol (which is more strictly a pseudo-transport or a session layer protocol) does use UDP for carrying HTTP/2.0 traffic and much of Google's traffic already uses this protocol. It's currently progressing towards standardisation as HTTP/3.
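As an illustration of "HTTP only presumes a reliable transport", here is a minimal sketch that sends a raw HTTP/1.1 request over an ordinary TCP socket; the host is just an example.

```python
# HTTP is just text framed by a reliable transport, here plain TCP on port 80.
import socket

HOST = "example.com"   # example host only

with socket.create_connection((HOST, 80)) as sock:
    request = (
        f"GET / HTTP/1.1\r\n"
        f"Host: {HOST}\r\n"
        "Connection: close\r\n"
        "\r\n"
    )
    sock.sendall(request.encode("ascii"))
    response = b""
    while (chunk := sock.recv(4096)):
        response += chunk

print(response.split(b"\r\n", 1)[0].decode())   # status line, e.g. "HTTP/1.1 200 OK"
```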
Typically, no.
Streaming is seldom used over HTTP itself, and HTTP is seldom run over UDP. See, however, RTP.
For something like your example (in the comment), you're not showing a protocol for the resource. If that protocol were HTTP, then I wouldn't call the access "streaming", even if it is in some sense of the word, since it sends a (possibly large) resource serially over a network. Typically, the resource will be saved to local disk before being played back, so the network transfer is not what's usually meant by "streaming".
As commenters have pointed out, though, it's certainly possible to really stream over HTTP, and that's done by some.
Maybe just a bit of trivia, but UPnP will use HTTP formatted messages over UDP for device discovery.
Yes, HTTP, as an application protocol, can be transferred over UDP transport protocol.
Here are some of the services that use UDP and an underlying protocol for transferring HTTP data and streaming it to the end-user:
XMPP's Jingle Raw UDP Transport Method
A number of services that use UDT (UDP-based Data Transfer Protocol), a protocol built on top of UDP.
The Transport Layer Security (TLS) protocol encapsulating HTTP as well as the above mentioned XMPP and other application protocols does have an implementation that uses UDP in its transport layer; this implementation is called Datagram Transport Layer Security (DTLS).
Push notifications in GNUTella are HTTP requests sent over UDP transport.
This article contains further details on streaming over UDP and its reliable variant, RUDP: Reliable UDP (RUDP): The Next Big Streaming Protocol?
Of course, it doesn't necessarily have to be transmitted over TCP. I implemented HTTP on top of UDP, for use in the Satellite TV Broadcasting industry.
If you are streaming an MP3 or video, it may not necessarily be over HTTP; in fact, I'd be surprised if it was. It would probably be another protocol over TCP, but I see no reason why you cannot stream over UDP.
If you do, you have to take into account that there is no certainty your data will arrive at the other end, but I take it that you already know that about UDP.
To answer your question: no, HTTP does NOT use UDP.
As for what you describe, though, MP3/video streaming COULD happen over UDP and, in my opinion, should never happen over HTTP.
There may be some change on this topic with QUIC:
QUIC (Quick UDP Internet Connections, pronounced quick) is an experimental transport layer network protocol developed by Google and implemented in 2013. QUIC supports a set of multiplexed connections between two endpoints over User Datagram Protocol (UDP), and was designed to provide security protection equivalent to TLS/SSL, along with reduced connection and transport latency, and bandwidth estimation in each direction to avoid congestion. QUIC's main goal is to optimize connection-oriented web applications currently using TCP.
I think some of the answers are missing an important point. The choice between UDP and TCP should not be based on the type of data (e.g., audio or video) or whether the application starts to play it before the transfer is completed ("streaming"), but whether it is real time. Real time data is (by definition) delay-sensitive, so it is often best sent over RTP/UDP (Real Time Protocol over UDP).
Delay is not an issue with stored data from a file, even if it's audio and/or video, so it is probably best sent over TCP so any packet losses can be corrected. The sender can read ahead and keep the network pipe full and the receiver can also use lots of playout buffering so it won't be interrupted by the occasional TCP retransmission or momentary network slowdown. The limiting case is where the entire recording is transferred before playback begins. This eliminates any risk of a playback stall, but is often impractical.
The problem with TCP for real-time data isn't retransmissions so much as excessive buffering as TCP tries to use the pipe as efficiently as possible without regard to latency. UDP preserves application packet boundaries and has no internal storage, so it does not introduce any latency.
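A rough sketch of the playout-buffering idea mentioned above, with a made-up buffer size and hypothetical network_chunks()/play() stand-ins: playback starts only once a few seconds of data are queued, so an occasional TCP retransmission or slowdown is absorbed by the buffer instead of stalling the player.

```python
import collections
import threading

PREBUFFER_CHUNKS = 50                  # made-up threshold, roughly "a few seconds" of media
buffer = collections.deque()
ready = threading.Event()

def receiver(network_chunks):
    # network_chunks is a hypothetical iterator of media chunks arriving over TCP.
    for chunk in network_chunks:
        buffer.append(chunk)
        if len(buffer) >= PREBUFFER_CHUNKS:
            ready.set()                # enough queued; playback may start

def player(play):
    # play is a hypothetical callable that renders one chunk of audio/video.
    ready.wait()                       # don't start until the buffer is filled
    while True:
        if buffer:
            play(buffer.popleft())     # steady playback from the queue
        else:
            ready.clear()              # buffer drained: pause and rebuffer
            ready.wait()
```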
(This is an old question, but it deserves an updated answer.)
In all likelihood, HTTP/3 will be using the QUIC protocol, which is described as
multiplexed transport over UDP
So, from a certain point of view, you could say that HTTP/3 will be using UDP.
The answer: Yes
Reason: See the OSI model.
Explanation:
HTTP is an application-layer protocol, which could be encapsulated in a protocol that uses UDP, providing arguably faster yet still reliable communication than TCP. The server daemon and client would obviously need to support this new protocol. The Quake 2 protocol shows that UDP can be used in place of TCP as the basis for a structured communication system with its own flow control (e.g. chunk IDs).
HTTP over UDP is used by some torrent tracker implementations (and supported by all the main clients).
In theory, yes, it is possible to use UDP for HTTP, but that might be problematic. Say, for instance, that in your example an MP3 or a video is being streamed: there will be ordering problems, and some bits might go missing, because UDP is not connection-oriented and has no retransmit mechanism.
HTTP/3 (aka QUIC) uses UDP instead of TCP.
https://http3-explained.haxx.se/en/the-protocol/feature-udp
UDP is the best protocol for streaming, because it doesn't request retransmission of missing packets the way TCP does. And because it doesn't make those requests, the flow is much faster and without any buffering pauses.
The stream delay is also lower than with TCP. That is because TCP (as the more reliable protocol) requests missing packets, delaying the data that has already arrived.
So TCP does too much to be a good fit for streaming.