erlang: how to receive HTTP/RTSP messages from socket?

I want to manage HTTP or RTSP sessions with Erlang.
For example, a standard session for the RTSP protocol looks like:
OPTIONS rtsp://192.168.1.55/test/ RTSP/1.0\r\n
CSeq: 1\r\n
User-Agent: VLC media player (LIVE555 Streaming Media v2008.07.24)\r\n
...
PLAY rtsp://192.168.1.55/test/ RTSP/1.0\r\n
CSeq: 5\r\n
Session: 1\r\n
Range: npt=0.000-\r\n
User-Agent: VLC media player (LIVE555 Streaming Media v2008.07.24)\r\n
The length of every message is different.
In Erlang, gen_tcp:listen accepts the option {active, true} (to receive an unlimited quantity of data as messages) or {active, false} (to read a fixed length of data with gen_tcp:recv).
Is there a recommended way to receive and parse such variable-length messages?

For HTTP, use one of the HTTP packet modes documented for the inet:setopts/2 function. For example, to set a socket to receive HTTP messages as binaries, you can set the {packet, http_bin} option on the socket. Have a look at my simple web server example to see how to use the HTTP packet modes.
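For illustration, a minimal passive-mode sketch using http_bin might look like this (module and helper names are just placeholders):

    %% Accept one connection and read an HTTP request using the http_bin
    %% packet mode in passive mode. Erlang parses the request line and each
    %% header into tuples; http_eoh marks the end of the headers.
    -module(http_sketch).
    -export([serve/1]).

    serve(Port) ->
        {ok, Listen} = gen_tcp:listen(Port, [binary, {packet, http_bin},
                                             {active, false}, {reuseaddr, true}]),
        {ok, Sock} = gen_tcp:accept(Listen),
        {ok, {http_request, Method, Uri, Version}} = gen_tcp:recv(Sock, 0),
        Headers = read_headers(Sock, []),
        io:format("~p ~p ~p~n~p~n", [Method, Uri, Version, Headers]),
        gen_tcp:close(Sock),
        gen_tcp:close(Listen).

    %% Keep reading parsed header tuples until http_eoh ends the headers.
    read_headers(Sock, Acc) ->
        case gen_tcp:recv(Sock, 0) of
            {ok, {http_header, _, Name, _, Value}} ->
                read_headers(Sock, [{Name, Value} | Acc]);
            {ok, http_eoh} ->
                lists:reverse(Acc)
        end.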
For RTSP, there's no built-in packet parser, but because RTSP headers are line-oriented like HTTP, you can do your own header parsing using the {packet, line} mode. In that mode, you'll receive one header at a time until you receive an empty line indicating the end of the headers. You can then change the socket to {packet, raw} mode to receive any message body. The Content-Length header, if present, indicates the size of any message body.
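A rough sketch of that approach for RTSP, switching from line mode to raw mode once the empty line ends the headers (helper names are made up, and well-formed headers are assumed):

    %% Read one RTSP message: the request line and headers in {packet, line}
    %% mode, then any body in {packet, raw} mode using Content-Length.
    read_rtsp_message(Sock) ->
        ok = inet:setopts(Sock, [{mode, binary}, {packet, line}, {active, false}]),
        {StartLine, Headers} = read_headers(Sock, undefined, []),
        BodyLen = binary_to_integer(
                    proplists:get_value(<<"content-length">>, Headers, <<"0">>)),
        Body = case BodyLen of
                   0 -> <<>>;
                   _ -> ok = inet:setopts(Sock, [{packet, raw}]),
                        {ok, B} = gen_tcp:recv(Sock, BodyLen),
                        B
               end,
        {StartLine, Headers, Body}.

    %% Collect "Name: Value" lines until the empty line that ends the headers.
    read_headers(Sock, StartLine, Acc) ->
        {ok, Line} = gen_tcp:recv(Sock, 0),
        case string:trim(Line, trailing, "\r\n") of
            <<>> ->
                {StartLine, lists:reverse(Acc)};
            Stripped when StartLine =:= undefined ->
                read_headers(Sock, Stripped, Acc);   %% e.g. "PLAY rtsp://... RTSP/1.0"
            Stripped ->
                [Name, Value] = binary:split(Stripped, <<":">>),
                read_headers(Sock, StartLine,
                             [{string:lowercase(string:trim(Name)),
                               string:trim(Value)} | Acc])
        end.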
The {active, true} vs {active, false} socket modes you mention control how data arrive at the controlling process (owner) of the socket.
The {active, true} mode sends all data from the socket to the controlling process as soon as they arrive. In this mode, data arrive as messages on the owner's message queue. Receiving messages on the process message queue is great because it allows the process to also handle other non-socket-related Erlang messages while handling socket data, but {active, true} isn't used that often because it provides no TCP back-pressure to the sender, and so a fast sender can overrun the receiver.
The {active, false} mode requires the receiver to call gen_tcp:recv/2,3 on the socket to retrieve data. This doesn't have the back-pressure problem of {active, true} but it can make message handling awkward since the Erlang process has to actively request the socket data rather than just sitting in a receive loop as it can with the other active modes.
Two other active modes you didn't mention are {active, once} and {active, N}. In {active, once} mode, the receiving process gets a single message via its message queue at a time, with the socket moving to the passive {active, false} mode after each message. To get another message, the receiver has to set {active, once} on the socket again when it's ready for the next message. This mode is nice because messages arrive on the process message queue same as they do with {active, true} mode, but back-pressure still works. The {active, N} mode is similar except that N messages, rather than just one, are received before the socket reverts to passive mode.
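To make the {active, once} flow concrete, here is a minimal receive-loop sketch (handle/1 is a hypothetical handler, not a library function):

    %% Re-arm the socket with {active, once} each time we are ready for the
    %% next message; ordinary Erlang messages can be handled in the same loop.
    loop(Sock) ->
        ok = inet:setopts(Sock, [{active, once}]),
        receive
            {tcp, Sock, Data} ->
                handle(Data),                 %% hypothetical handler
                loop(Sock);
            {tcp_closed, Sock} ->
                ok;
            {tcp_error, Sock, Reason} ->
                {error, Reason};
            stop ->
                gen_tcp:close(Sock)
        end.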

Related

What is the "retry" mechanism for nng req/rep. Are there no retries in pipe even if endpoint is tcp?

From the documentation, a req socket:
...is reliable, in that the requester will keep retrying until a reply is received.
Specifically:
The request is resent if no reply arrives, until a reply is received or the request times out.
Q1:
This just means that a rep socket must package and send a message back to the req socket object to prevent retries, right?
However, using a lower-level reliable transport should make some guarantees about delivery even without req/rep, for example using normal nng_pair, shouldn't it?
For example, if I specify endpoints as "tcp://x.x.x.x", then shouldn't TCP itself perform reliable transport of the packets assuming sockets are connected? And, since nng_socket handles reconnects ...
When the pipe is closed, the dialer attempts to re-establish the connection. Dialers will also periodically retry a connection automatically if an attempt to connect asynchronously fails.
Q2:
... then it seems TCP+pair should be enough to ensure eventual delivery of packets?

What is the difference between {active, false}, {active, true} and {active, once}?

As you probably know, there are three modes of gen_tcp: {active, false}, {active, true} and {active, once}.
I have read some documentation about {active, false}, {active, true} and {active, once}. However, I didn't get it.
What is the difference between {active, false}, {active, true} and {active, once}?
Could you please explain it plainly?
It's about flow control: you have an Erlang process handling incoming network traffic. Usually you want it to react to incoming packets quickly, but you don't want its queue of messages to grow faster than it can process them - though in certain cases you'll have different goals.
With {active, false}, you have explicit control of when the process receives incoming traffic: it only happens when you call gen_tcp:recv. However, while the process is waiting in gen_tcp:recv, it cannot receive other Erlang messages. Perhaps some other Erlang process is sending a message telling it to stop, but it doesn't know that yet because it's concentrating on getting network input.
With {active, true}, network input gets sent to the process as a message as soon as it is available. That means that you could have a receive expression that expects both network traffic and simple Erlang messages from other processes. This mode of operation could be useful if you're confident that your process can handle the input faster than it arrives, but you could end up with a long message queue that never gets cleared.
{active, once} is a compromise between the two: you receive incoming data as Erlang messages, meaning that you can mix network traffic with other work, but after receiving a packet you need to explicitly call inet:setopts with {active, once} again to receive more data, so you get to decide how quickly your process receives messages.
Since Erlang/OTP 17.0 there is yet another option, {active, N}, where N is an integer. That means that you can receive N messages before you have to call inet:setopts again. That could give higher throughput without having to give up flow control.
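For example, a rough sketch of an {active, N} loop (OTP 17 or later; handle/1 is again a hypothetical handler):

    %% Ask for up to 100 data messages at a time; the runtime sends
    %% {tcp_passive, Sock} when the budget is used up and the socket has
    %% reverted to passive mode, at which point we re-arm it.
    start(Sock) ->
        ok = inet:setopts(Sock, [{active, 100}]),
        loop(Sock).

    loop(Sock) ->
        receive
            {tcp, Sock, Data} ->
                handle(Data),                 %% hypothetical handler
                loop(Sock);
            {tcp_passive, Sock} ->
                start(Sock);
            {tcp_closed, Sock} ->
                ok;
            {tcp_error, Sock, Reason} ->
                {error, Reason}
        end.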
{active, false}
You have to read a chunk of data from the socket by calling gen_tcp:recv().
{active, true}
Erlang automatically reads chunks of data from the socket for you and gathers the chunks into a complete message and puts the message in the process mailbox. You read the messages using a receive clause. If some hostile actor floods your mailbox with messages, your process will crash.
{active, once}
Equivalent to {active, true} for the first chunk of data read from the socket, then {active, false} for any subsequent data.
You also need to understand how specifying {packet, N} influences things. See here: Erlang gen_tcp not receiving anything.

When does an application on the receiving side read from a TCP buffer?

Does it only read when the PSH bit is set or the buffer is full, or are there some timings that manage that process? And if so, what are those timings, or at least, what are the recommended ones?
I looked through RFC 1122, but haven't found that specific information. I've searched the web, too, but unsuccessfully.
When does an application on the receiving side read from a TCP buffer?
Does it only read when the PSH bit is set or the buffer is full, or are there some timings that manage that process?
It is up to the application. Application logic determines when to read from the TCP receive socket buffer; TCP does not mandate any rules. If the application does not read, the TCP receive buffer fills up as data keeps flowing in, and flow control kicks in.
You can write a program which never calls recv and hence never takes data out of the TCP buffer. Or you can have a blocking socket, call recv, and be blocked until some data arrives. Or, if the socket is non-blocking, you can rely on polling mechanisms like select to call recv when data arrives on the socket. The TCP buffer need not be full to be read by the application.
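Since the rest of this page is Erlang, here is an Erlang-flavored illustration of the same point: in {active, false} mode the data simply sits in the kernel's receive buffer until the application decides to call gen_tcp:recv. This sketch assumes a binary, passive-mode socket:

    %% The application sets the pace: read at most once per second, taking
    %% whatever has been buffered. If we stop calling recv, the receive
    %% buffer fills up and TCP flow control throttles the sender.
    slow_reader(Sock) ->
        timer:sleep(1000),
        case gen_tcp:recv(Sock, 0, 5000) of
            {ok, Data} ->
                io:format("read ~p bytes~n", [byte_size(Data)]),
                slow_reader(Sock);
            {error, timeout} ->
                slow_reader(Sock);
            {error, closed} ->
                ok
        end.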

Packets sometimes get concatenated

I'm trying to make a simple server/application in Erlang.
My server initialize a socket with gen_tcp:listen(Port, [list, {active, false}, {keepalive, true}, {nodelay, true}]) and the clients connect with gen_tcp:connect(Server, Port, [list, {active, true}, {keepalive, true}, {nodelay, true}]).
Messages received from the server are tested by guards such as {tcp, _, [115, 58 | Data]}.
Problem is, packets sometimes get concatenated when sent or received and thus cause unexpected behaviors as the guards consider the next packet as part of the variable.
Is there a way to make sure every packet is sent as a single message to the receiving process?
Plain TCP is a streaming protocol with no concept of packet boundaries (like Alnitak said).
Usually, you send messages in either UDP (which has limited per-packet size and can be received out of order) or TCP using a framed protocol.
Framed meaning you prefix each message with a size header (usually 4 bytes) that indicates how long the message is.
In Erlang, you can add {packet, 4} to your socket options to get framed packet behavior on top of TCP.
Assuming both sides (client/server) use {packet, 4}, you will only get whole messages.
Note: you won't see the size header; Erlang removes it from the message you see, so your example match at the top should still work just fine.
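A minimal sketch of both ends using {packet, 4} (port, host and payload are placeholders):

    %% Server: passive socket with {packet, 4}; each recv returns exactly one
    %% whole message, with the 4-byte length header already stripped.
    server(Port) ->
        {ok, L} = gen_tcp:listen(Port, [list, {packet, 4}, {active, false},
                                        {reuseaddr, true}]),
        {ok, S} = gen_tcp:accept(L),
        {ok, Msg} = gen_tcp:recv(S, 0),       %% e.g. "s:hello"
        io:format("got: ~p~n", [Msg]),
        gen_tcp:close(S),
        gen_tcp:close(L).

    %% Client: the same {packet, 4} option makes send prepend the length header.
    client(Host, Port) ->
        {ok, S} = gen_tcp:connect(Host, Port, [list, {packet, 4}, {active, false}]),
        ok = gen_tcp:send(S, "s:hello"),
        gen_tcp:close(S).

With {active, true} on the receiving side the framing works the same way, so a match like {tcp, _, [115, 58 | Data]} still sees one whole message at a time.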
You're probably seeing the effects of Nagle's algorithm, which is designed to increase throughput by coalescing small packets into a single larger packet.
You need the Erlang equivalent of enabling the TCP_NODELAY socket option on the sending socket.
EDIT ah, I see you already set that. Hmm. TCP doesn't actually expose packet boundaries to the application layer - by definition it's a stream protocol.
If packet boundaries are important you should consider using UDP instead, or make sure that each packet you send is delimited in some manner. For example, in the TCP version of DNS each message is prefixed by a 2 byte length header, which tells the other end how much data to expect in the next chunk.
You need to implement a delimiter for your packets.
One solution is to use a special delimiter character such as ; or something similar.
The other solution is to send the size of the packet first.
PacketSizeInBytes:Body
Then read that many bytes from the stream; when you reach the end, you have your whole packet.
Nobody mentions that TCP may also split your message into multiple pieces (split your packet into two messages).
So the second solution is the best overall, though a little harder to implement. The first one still works but limits your ability to send packets containing the special character; on the other hand it is the easiest to implement. Of course there are workarounds for all of this. I hope it helps.
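If you cannot use {packet, 4} and have to do the length-prefix framing yourself, a sketch over a raw binary stream could look like this (the buffering strategy around it is up to you):

    %% Prefix each payload with a 32-bit big-endian length on the way out...
    encode(Payload) when is_binary(Payload) ->
        <<(byte_size(Payload)):32, Payload/binary>>.

    %% ...and on the way in, accumulate raw TCP chunks in Buffer and peel off
    %% one complete frame whenever enough bytes have arrived; the remainder
    %% belongs to the next frame.
    decode(Buffer) ->
        case Buffer of
            <<Len:32, Payload:Len/binary, Rest/binary>> ->
                {ok, Payload, Rest};
            _ ->
                {more, Buffer}
        end.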

TCP and POSIX sockets accept() semantics

Situation: The server calls accept(). The client sends a SYN to the server. The server gets the SYN, and then sends a SYN/ACK back to the client. However, the client now hangs up / dies, so it never sends an ACK back to the server.
What happens? Does accept() return as soon as it receives the SYN, or does block until the client's ACK is returned? If it blocks, does it eventually time-out?
The call to accept() blocks until it has a connection. Unless and until the 3-way handshake completes there is no connection, so accept() should not return. For non-blocking sockets it won't block, but neither will it give you info about partially completed handshakes.
If the client never sends an ACK, accept() will either block or return EAGAIN if the socket is marked non-blocking.
It will eventually time out, because that scenario is in actual fact a DoS (Denial of Service) attack, and the resources reserved for the pending connection are returned for use by the operating system. It might cause the listening socket to block, since the client is only connected to the server once accept returns a valid file descriptor.
If an error occurs during the connection from the client, errno will be set, and a good idea would be to log or display an error message. However, read the man pages; they are the best source of information in most cases.
If there is a failure, say a timeout because the handshake does not complete, it will return -1 and set errno. I believe, after looking at the man page, that it will set errno to ECONNABORTED.
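Since the rest of this page is Erlang, the same semantics can be illustrated with gen_tcp: accept only returns once the three-way handshake has completed, or {error, timeout} if you give it a timeout.

    %% gen_tcp:accept/2 blocks until a fully established connection is
    %% available or the 5-second timeout expires.
    accept_one(Port) ->
        {ok, Listen} = gen_tcp:listen(Port, [binary, {active, false},
                                             {reuseaddr, true}]),
        case gen_tcp:accept(Listen, 5000) of
            {ok, Sock} ->
                io:format("connection established: ~p~n", [inet:peername(Sock)]),
                gen_tcp:close(Sock);
            {error, timeout} ->
                io:format("no completed handshake within 5 seconds~n")
        end,
        gen_tcp:close(Listen).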