I want to ask about the mechanism when broadcast communication occurs (many endpoints), where each endpoint enters the bridge through the ARI originate (outgoing call) function.
Suppose there are 3 participants in a communication bridge. When one endpoint is talking, what happens? Does the endpoint create 2 packets with the same payload and send them separately, or does it send only one packet, which Asterisk then duplicates once per endpoint and sends to each endpoint registered on the bridge?
Thank you
You can check the app_conference.c code if you want.
Asterisk makes an independent packet for each party. How that packet is created depends on the confbridge settings.
There is an experimental res_rtp_multicast.c, but it is not used in most scenarios.
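As a rough illustration of "an independent packet for each party" (a conceptual sketch in Java, not Asterisk's actual mixing code; mixForParticipant is a made-up name): for each listener the bridge builds its own frame, typically the sum of everyone else's audio, and sends it as that party's own RTP packets.

// Conceptual sketch of per-party conference mixing (NOT Asterisk code).
// Each participant receives its own packet: the mix of all other participants.
public class BridgeMixSketch {
    // One audio frame (e.g. 20 ms of 16-bit samples) per participant.
    static short[] mixForParticipant(int listener, short[][] frames) {
        short[] out = new short[frames[0].length];
        for (int p = 0; p < frames.length; p++) {
            if (p == listener) continue;          // don't echo the talker back to themselves
            for (int i = 0; i < out.length; i++) {
                int sum = out[i] + frames[p][i];  // naive mix; real mixers scale/clamp
                out[i] = (short) Math.max(Short.MIN_VALUE, Math.min(Short.MAX_VALUE, sum));
            }
        }
        return out;                               // encoded and sent as this party's RTP
    }
}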
I want to broadcast a request to all clients that are connected to my ESP8266 12F access point.
I used this to create a connection per client, meaning if there are 3 clients it will create 3 connections:
for mac, ip in pairs(wifi.ap.getclient()) do
    srv = net.createConnection(net.TCP, 0)
    srv:on("receive", function(client, b_response) srv:close() collectgarbage() end)
    srv:on("connection", function(client, b_request) client:send(request) end)
    srv:connect(80, ip)
end
I tried the broadcast IP, srv:connect(80, "255.255.255.255"), but nothing was sent.
The problem:
Every new srv overwrites the previous srv, so I cannot get a response if it is delayed. I could give each srv a different name, like srv_1, srv_2, srv_3, but that takes too much memory.
What I want:
To create only one connection.
Your code is using TCP, which is inherently a single-connection, point-to-point protocol. There's no such thing as a "broadcast" TCP connection. TCP simply does not work over broadcast. That's like trying to use a car as a boat.
If you're sending a small amount of information, you might try UDP instead. The drawbacks are that UDP is unreliable - you can't be certain your message was received - you'd need more code to receive a response if you want one, and you'd need to build a reliability mechanism (retransmit if no answer is received, detect retransmissions in case the answer was dropped) if you care about that.
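For illustration, a UDP broadcast send looks roughly like this (a Java sketch just to show the pattern - on the ESP8266 you would use the equivalent NodeMCU API - and port 8266 is a made-up example):

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

public class UdpBroadcastSketch {
    public static void main(String[] args) throws Exception {
        byte[] payload = "hello clients".getBytes();
        try (DatagramSocket socket = new DatagramSocket()) {
            socket.setBroadcast(true);  // required before sending to a broadcast address
            DatagramPacket packet = new DatagramPacket(
                    payload, payload.length,
                    InetAddress.getByName("255.255.255.255"), 8266); // hypothetical port
            socket.send(packet);        // one send; every listener on the subnet can receive it
        }
    }
}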
I'd recommend that you check out the MQTT protocol - it's designed to make it easy to communicate with multiple clients. It's lightweight, and MQTT clients run well on NodeMCU and Arduino processors. There's an MQTT client built into NodeMCU's Lua implementation.
The downside is that you'd need an MQTT broker that all of your NodeMCUs will connect to. The broker is usually run on a more capable processor (a Raspberry Pi is a good choice) or externally on the Internet (Adafruit offers a broker at https://io.adafruit.com/), although there are some implementations that run on an ESP8266.
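Conceptually, the broker pattern looks like this (a sketch using the Eclipse Paho Java client purely for illustration - on NodeMCU you'd use the built-in Lua mqtt module instead; the broker address, client id, topic and payload are made up):

import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttMessage;

public class MqttPublishSketch {
    public static void main(String[] args) throws Exception {
        // Every device connects to the same broker (address is a placeholder).
        MqttClient client = new MqttClient("tcp://192.168.4.2:1883", "producer-1");
        client.connect();
        // One publish; the broker forwards it to every client subscribed to the topic.
        client.publish("devices/commands", new MqttMessage("do-something".getBytes()));
        client.disconnect();
    }
}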
I want to know if there's a downside of architecting a multicast pub-sub like system in the way described below:
Problem
There are about 30 applications that need to listen to certain events
There is 1 server that is receiving events from across the network [let's call it PRODUCER]
The 30 applications may be interested in one or more of the 30 events
Solution
We can assign each topic to a multicast group (defined by an IP address in the multicast IP range)
For each event it receives, the PRODUCER multicasts it on the relevant channel/group
The subscribers subscribe to each multicast group/channel that they are interested in (a rough sketch of this follows below)
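A minimal sketch of that design (in Java; the group address 239.1.2.3 and port 5000 are arbitrary placeholders, and the actual topic-to-group mapping is up to you):

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.net.MulticastSocket;

public class TopicMulticastSketch {
    // PRODUCER side: one multicast group per topic (group address is a placeholder).
    static void publish(String payload) throws Exception {
        byte[] data = payload.getBytes();
        try (DatagramSocket socket = new DatagramSocket()) {
            socket.send(new DatagramPacket(data, data.length,
                    InetAddress.getByName("239.1.2.3"), 5000)); // group for "topic A"
        }
    }

    // SUBSCRIBER side: join only the groups (topics) you care about.
    static void subscribe() throws Exception {
        try (MulticastSocket socket = new MulticastSocket(5000)) {
            socket.joinGroup(InetAddress.getByName("239.1.2.3"));
            byte[] buf = new byte[1500];
            DatagramPacket packet = new DatagramPacket(buf, buf.length);
            socket.receive(packet); // blocks until an event arrives on this group
            System.out.println(new String(packet.getData(), 0, packet.getLength()));
        }
    }
}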
My questions are
Is there any overhead in this design as opposed to having each subscriber listening to ALL events and having the PRODUCER send to a single channel - in other words, does sending on multiple channels have a downside?
For UDP Multicast, my understanding is that a group is defined by an IP address (and not the port). Then what is the relevance of a port in specifying a group/endpoint?
As I understand it, BLE indications are a reliable communication method. How do you know if your indication was not communicated? I am writing code for the peripheral/server, and currently when I send a notification, I get a manual response from the central. I read that if I use indications, the acknowledgements take place in the L2CAP layer automatically and communication is therefore faster, but how does my embedded controller know the Bluetooth module was not successful at getting the packet across the link? We are using the Microchip RN4030 Bluetooth module.
Let's make things clear.
The BLE stack looks roughly like the following; it has these layers, in this order:
Link Layer
HCI (if controller and host are separated)
L2CAP
ATT
GATT
Application
The Link Layer is a reliable protocol in the sense that all packets are protected by a CRC and every packet is acknowledged by the receiving device. If a packet is not acknowledged, it is resent until an acknowledgement is received. There can also be only one outstanding packet, which means no reordering of packets is possible. Acknowledgements are normally not presented to upper layers.
The HCI layer is the communication protocol between the controller and the host.
The L2CAP layer does almost nothing if you use the standard MTU size of 23. It has a length header and a type code ("channel") which indicates what type of packet is being sent (usually ATT).
On the ATT level, there are two types of packets that are sent from the server that are not sent as a response to a client request. Those are notifications and indications. Sending one notification or indication has the same "performance" since the type is just a tag of a packet that's sent over the lower layers. The differences are listed below:
Indications:
When an indication packet is sent to the client, the client must acknowledge the packet by sending a confirmation packet when it has received the indication packet. Even if the indication packet is invalid, a confirmation shall be sent back.
Upper layers are not involved in sending back the confirmation.
The server may not send a new indication until it has received a confirmation from the previous one.
The only time you won't receive a confirmation after an indication is if the link is dropped (you should then get a disconnected event), or there is some bug in some of the BLE stacks.
After the app has sent an indication, most BLE stacks report back to the app that a confirmation has been received from the client, i.e. that the indication operation has completed.
Notifications:
No acknowledgements are sent at the ATT layer.
"Commands and notifications that are received but cannot be processed, due to buffer overflows or other reasons, shall be discarded. Therefore, those PDUs must be considered to be unreliable." However I have never noticed an implementation actually following this rule, i.e. all notifications are delivered to the application. Or I've never hit the max buffer size.
The GATT layer is mostly a definition of how the attribute protocol should be used and defines a DB structure of characteristics.
Implications
In my opinion, there are several flaws or issues with the BLE standard. There are two things that might make indications useless and unreliable in practice:
There is no way to send back an error response instead of a confirmation.
The fact that it is the ATT layer that sends back the confirmation directly when it has received the indication, and not the app when it has successfully handled the indication.
This means that if, for example, some bug or other issue prevents the BLE stack from delivering the indication to the app, or your app crashed, or your app found your value to be invalid, there is no way for your peripheral to be aware of that. Since it got the confirmation, it thinks everything is fine.
I can't understand why they defined indications this way. Since the app doesn't send the confirmation but a lower layer does, there seems to be no point at all in having an ATT layer acknowledgement instead of just using the Link Layer acknowledgement. Both are just acknowledgements that the packet has made it partway to its destination.
If we draw a parallel to an HTTP POST over the internet, we could consider the Link Layer acknowledgement as the destination's network card receiving the request, and the ATT confirmation as a confirmation that the TCP stack received the packet. You have no way of knowing that your web server software actually received your request and processed it successfully.
The fact that notifications are allowed to be dropped by the receiver is also bad. Normally notifications are used when the peripheral wants to stream a lot of data to the central, and then you don't want dropped packets. They should have designed the flow control so that the Link Layer stops acknowledging incoming packets until the app is ready to process the next notification. This is even already implemented at the LL + HCI + Host layers.
Windows
One interesting thing about at least the Windows BLE stack is that if it receives indications faster than the app processes them, it starts to drop the indications as well, even though only notifications should be allowed to be dropped due to "buffer overflows or other reasons". It buffers at most 512 indications in my tests.
That said
Just use notifications, and if you want some kind of confirmation, let the client send a write packet when it has received your data and succeeded in processing it.
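A rough sketch of that pattern on the peripheral side, shown with the Android GATT-server API purely as an illustration (the RN4030 exposes this differently through its serial command interface, and the class and characteristic names here are made up):

// Sketch only: notification + application-level ack via a client write.
import android.bluetooth.BluetoothDevice;
import android.bluetooth.BluetoothGatt;
import android.bluetooth.BluetoothGattCharacteristic;
import android.bluetooth.BluetoothGattServer;
import android.bluetooth.BluetoothGattServerCallback;

class NotifyWithAppAckSketch extends BluetoothGattServerCallback {
    BluetoothGattServer server;
    BluetoothGattCharacteristic dataChar; // the characteristic we notify on

    void sendValue(BluetoothDevice device, byte[] value) {
        dataChar.setValue(value);
        // false = notification (no ATT confirmation); the real confirmation is the
        // client's write to our "ack" characteristic, handled below.
        server.notifyCharacteristicChanged(device, dataChar, false);
    }

    @Override
    public void onCharacteristicWriteRequest(BluetoothDevice device, int requestId,
            BluetoothGattCharacteristic characteristic, boolean preparedWrite,
            boolean responseNeeded, int offset, byte[] value) {
        // The client writes here only after it has received AND processed our data,
        // so this write is the application-level acknowledgement.
        if (responseNeeded) {
            server.sendResponse(device, requestId, BluetoothGatt.GATT_SUCCESS, offset, value);
        }
    }
}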
I am preparing for my university exam, and one of the questions last year was "how to make UDP multicast reliable" (like TCP: retransmission of lost packets).
I thought about something like this:
The server sends the multicast using UDP
Every client sends an acknowledgement of receiving the packets (using TCP)
If the server realizes that not everyone received the packets, it resends them via multicast, or via unicast to a particular client
The problem is that there might be one client that frequently loses packets and forces the server to resend.
Is this a good approach?
Every client sends an acknowledgement of receiving the packets (using TCP)
Sending an ACK for each packet, and using TCP to do so, does not scale to a large number of receivers. Using a NACK-based scheme is more efficient.
Each packet sent from the server should have a sequence number associated with it. As clients receive them, they keep track of which sequence numbers were missed. If packets are missed, a NACK message can then be sent back to the server via UDP. This NACK can be formatted as either a list of sequence numbers or a bitmap of received / not received sequence numbers.
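A minimal receiver-side sketch of that bookkeeping (Java; how sequence numbers are extracted from packets and how the NACK is put on the wire are left out):

import java.util.SortedSet;
import java.util.TreeSet;

// Tracks gaps in the received sequence numbers and builds a NACK list.
class NackTrackerSketch {
    private long nextExpected = 0;
    private final SortedSet<Long> missing = new TreeSet<>();

    void onPacket(long seq) {
        missing.remove(seq);                     // a late packet fills its gap
        for (long s = nextExpected; s < seq; s++) {
            missing.add(s);                      // everything we skipped over is missing
        }
        if (seq >= nextExpected) {
            nextExpected = seq + 1;
        }
    }

    // The NACK payload: the sequence numbers still missing (could also be a bitmap).
    SortedSet<Long> buildNack() {
        return new TreeSet<>(missing);
    }
}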
If the server realizes that not everyone received the packets, it resends them via multicast, or via unicast to a particular client
When the server receives a NACK it should not immediately resend the missing packets but wait for some period of time, typically a multiple of the GRTT (Group Round Trip Time -- the largest round trip time among the receiver set). That gives it time to accumulate NACKs from other receivers. Then the server can multicast the missing packets so any clients missing them can receive them.
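On the server side, the same idea looks roughly like this (sketch only; the GRTT estimate and the retransmit call are placeholders):

import java.util.Set;
import java.util.concurrent.ConcurrentSkipListSet;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Accumulates NACKed sequence numbers for roughly a couple of GRTTs, then
// multicasts each missing packet once, no matter how many clients asked for it.
class RepairSchedulerSketch {
    private final Set<Long> pending = new ConcurrentSkipListSet<>();
    private final ScheduledExecutorService timer = Executors.newSingleThreadScheduledExecutor();
    private final long grttMillis; // estimated group round trip time

    RepairSchedulerSketch(long grttMillis) { this.grttMillis = grttMillis; }

    void onNack(Set<Long> missingSeqs) {
        boolean wasIdle = pending.isEmpty();
        pending.addAll(missingSeqs);
        if (wasIdle) {
            // Wait so that NACKs from other receivers can be merged into the same repair.
            timer.schedule(this::flushRepairs, 2 * grttMillis, TimeUnit.MILLISECONDS);
        }
    }

    private void flushRepairs() {
        for (Long seq : pending) {
            retransmit(seq); // multicast the repair so every client that missed it benefits
        }
        pending.clear();
    }

    private void retransmit(long seq) {
        System.out.println("re-multicast packet " + seq); // placeholder for the real send
    }
}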
If this scheme is being used for file transfer as opposed to streaming data, the server can alternately send the file data in passes. The complete file is sent on the first pass, during which any NACKs that are received are accumulated and packets that need to be resent are marked. Then on subsequent passes, only retransmissions are sent. This has the advantage that clients with lower loss rates will have the opportunity to finish receiving the file while high loss receivers can continue to receive retransmissions.
The problem is that there might be one client that frequently loses packets and forces the server to resend
For very high loss clients, the server can set a threshold for the maximum percentage of packets missed. If a client sends back NACKs in excess of that threshold one or more times (how many times is up to the server), the server can drop that client and either not accept its NACKs or send a message to that client informing it that it was dropped.
There are a number of protocols which implement these features:
UFTP - Encrypted UDP based FTP with multicast (disclosure: author)
NORM - NACK-Oriented Reliable Multicast
PGM - Pragmatic General Multicast
UDPCast
Relevant RFCs:
RFC 4654 - TCP-Friendly Multicast Congestion Control (TFMCC): Protocol Specification
RFC 5401 - Multicast Negative-Acknowledgment (NACK) Building Blocks
RFC 5740 - NACK-Oriented Reliable Multicast (NORM) Transport Protocol
RFC 3208 - PGM Reliable Transport Protocol Specification
To make UDP reliable, you have to handle a few things yourself (i.e., implement them):
Connection handling: The connection between the sending and receiving process can drop. Most reliable implementations send keep-alive messages to maintain the connection between the two ends.
Sequencing: Messages need to be split into chunks before sending, and each chunk carries a sequence number.
Acknowledgement: After each message is received, an ACK message needs to be sent to the sending process. These ACK messages can also be sent through UDP; they don't have to go over TCP. The receiving process might realize that it has lost a message. In that case, it stops delivering messages from the holdback queue (the queue that holds received messages, like a waiting room for messages) and requests a retransmission of the missing message - a sketch of such a holdback queue follows after this list.
Flow control: Throttle the sending of data based on the ability of the receiving process to deliver it.
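A bare-bones holdback queue along those lines (Java sketch; deliver and requestRetransmission stand in for real application code):

import java.util.TreeMap;

// Buffers out-of-order messages and delivers them strictly in sequence order.
class HoldbackQueueSketch {
    private final TreeMap<Long, byte[]> holdback = new TreeMap<>();
    private long nextToDeliver = 0;

    void onMessage(long seq, byte[] payload) {
        if (seq < nextToDeliver) return;              // duplicate of something already delivered
        holdback.put(seq, payload);
        if (!holdback.containsKey(nextToDeliver)) {
            requestRetransmission(nextToDeliver);     // gap detected: ask for the missing message
        }
        while (holdback.containsKey(nextToDeliver)) { // deliver every contiguous message we have
            deliver(holdback.remove(nextToDeliver));
            nextToDeliver++;
        }
    }

    private void deliver(byte[] payload) {
        System.out.println("delivered " + payload.length + " bytes"); // placeholder
    }

    private void requestRetransmission(long seq) {
        System.out.println("request retransmit of " + seq);           // placeholder
    }
}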
Usually, processes are organized into groups; each group has a leader and a view of the entire group. This is called virtual synchrony.
How is a packet received by a wireless device when thousands of users/devices are connected to the same network?
If we are using UDP, will it send the packets to all the devices, such that only the authenticated devices accept the packets and the others reject them?
How does the situation change if we use TCP instead of UDP?
UDP and TCP behave the same here, since they are both higher-layer protocols.
Majorly simplified: the device opens a tunnel to a GSN (GPRS Support Node), which is a server installed at the carrier. Which GSN to use is based on the APN (Access Point Name) supplied when the tunnel (PDP context) is requested. The tunnel is assigned an IP address at the GSN, and that is the address used for IP communication. Packets are filtered at the GSN and routed to the specific device. Traffic is tunneled between the GSN and the device using telecom-specific protocols. Packets are not broadcast out to all devices and then filtered there.
P.S. I phrased the answer using GPRS terms. Other 2.5/3/4G protocols use the same structure but sometimes have different names.
What do you mean by authenticated user?
Are you concentrating on the application level, or on lower layers of the network?
Even if it is UDP, it should be thought of as sending to a specific IP; even in a complex network, each system is a unique entity.
Rohith Gowda, actually if you are working with UDP packets at the application level (in Java, C#, ...), you create the packets for a specific IP (the receiver's IP) and send them to that IP, and the receiver has to grab them - I think that is what you actually want, right? There is no need to worry about other hosts with a different IP than the one you are sending to, because you are in the abstracted application layer and your concern is looked after by the lower layers. If you want additional protection against snooping, just encrypt the data you want to send.
One example (in Java):
A UDP DatagramPacket can be created by invoking a new instance of
DatagramPacket(byte[] data, int offset, int length, InetAddress address, int port)
Look at the last two parameters: they specify the server address and the port to transmit to.
I think it is now clear that only the destination server with that IP (server address), listening on that particular port, can grab it.
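A concrete version of that (the address and port here are placeholders):

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

public class UdpUnicastSketch {
    public static void main(String[] args) throws Exception {
        byte[] data = "hello".getBytes();
        // The packet is addressed to one specific receiver; other hosts never have it
        // delivered to their applications.
        DatagramPacket packet = new DatagramPacket(
                data, 0, data.length,
                InetAddress.getByName("192.168.1.42"), 9876); // receiver's IP and port
        try (DatagramSocket socket = new DatagramSocket()) {
            socket.send(packet);
        }
    }
}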