Incoming Emergency Calling from secure network? - gsm

Is it possible to trigger emergency cellular calls on multiple cell phones at a given time when the receivers are on different service providers?


Are Published messages sent to all nodes in a NATS cluster? Or only to nodes with local Subscribers to the message's Subject?

Let's say I have a simple 2-server NATS cluster with servers A and B.
If a client on Server A publishes a message to a Subject for which there are no subscribers on Server B, does that message still get sent from Server A to Server B?
Not sure what the goal is here; more details could help clarify the context. If you're asking specifically whether the nodes can pass messages published on another server to a subscribed client, then yes. How long will a message stay? That depends on the configured expiration time. What if the server goes down before it shares the message with the mesh? That's unlikely, but it can happen.
Also, though you're probably just illustrating, an even number of nodes is generally inadvisable in clusters: it invites split-brain situations where it's impossible to determine the source of truth (the seed server, in this case).
Yes. NATS clustered servers forward messages they have received from a client to the immediately adjacent nats-server instances to which they have routes. Messages received over a route are only distributed to local clients. There are no subscriber checks in the mesh.
It's not a black-and-white answer, because NATS supports many deployment architectures: server-to-client communication, server-to-server cluster communication, and cluster-to-cluster communication through gateways to form super-clusters. Gateways, for example, optimize interest-graph propagation (i.e. they only send data to other clusters if there is interest). See the super-cluster with gateways documentation.
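For illustration, here is a minimal sketch using the Python client (nats-py), assuming two clustered nats-server instances on localhost ports 4222 (server A) and 4223 (server B): a subscriber connected only to B receives a message published by a client connected only to A, because A forwards it over the route.

```python
# Sketch with nats-py; the two-server cluster on ports 4222/4223 is assumed.
import asyncio
import nats

async def main():
    # Subscriber connects to server B only.
    nc_b = await nats.connect("nats://localhost:4223")

    async def handler(msg):
        print("received on B:", msg.data.decode())

    await nc_b.subscribe("updates", cb=handler)
    await nc_b.flush()  # make sure the subscription has been registered

    # Publisher connects to server A only.
    nc_a = await nats.connect("nats://localhost:4222")
    await nc_a.publish("updates", b"hello from A")
    await nc_a.flush()

    await asyncio.sleep(0.5)  # give the message time to cross the route
    await nc_a.close()
    await nc_b.close()

asyncio.run(main())
```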

Are messages sent to groups delivered to every user at the same time?

I was wondering whether every user in a group receives messages at the same time (disregarding network latency), or whether Telegram delivers messages in chunks, which would mean some members see a message earlier than others. Are messages delivered to the Web client and the phone clients at the same time?
Telegram transfers messages at low latency, so all accounts/devices should receive messages at roughly the same time.
You can verify this yourself by using two accounts on different networks and comparing when the messages arrive.

Is it possible to improve this zmq architecture?

Intro:
In the architecture below, there are three key components.
Users - machines where the user application is running.
Applications - which are running inside the remote server.
Gateway/Broker - required for isolation between user devices and server applications.
Message flow between a user device and a server application should happen as follows:
1. A user transmits a message to the remote server, where it will be consumed by one or more server applications.
2. An application broadcasts/publishes a message to all connected users.
3. An application sends a message to a particular user device (unicast).
In addition, one or more users may connect to or disconnect from the server arbitrarily, and one or more applications may be spawned or terminated arbitrarily.
For the above problem statement, I have designed the zmq architecture below.
The Gateway/Broker handles the arbitrary arrival and departure of users and applications and also provides the required isolation. It publishes user messages to all applications, and it aggregates, via a SUB socket, all the messages that applications need to send to users.
An application sends a two-part message: the first part is the user identity and the second part is the actual message. The Gateway/Broker forwards that message to the user with that identity. A special identity is reserved for broadcasts; when the gateway receives the broadcast identity, it publishes the message to all users via its PUB socket.
A user connects to both the gateway's ROUTER and PUB sockets and receives fair-queued data from both. When sending, a user sends only to the gateway's ROUTER socket, never the PUB socket.
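For illustration, here is a minimal pyzmq sketch of that gateway. The port numbers, the empty SUB filter, and the special broadcast identity b"*" are assumptions made for this sketch, not details from the question.

```python
# Gateway/Broker sketch (pyzmq). Assumes users talk DEALER->ROUTER and
# applications send [identity, payload] two-part messages via PUB->SUB.
import zmq

ctx = zmq.Context()

frontend = ctx.socket(zmq.ROUTER)   # users send messages here
frontend.bind("tcp://*:5555")

user_pub = ctx.socket(zmq.PUB)      # broadcasts to all connected users
user_pub.bind("tcp://*:5556")

app_pub = ctx.socket(zmq.PUB)       # republishes user messages to all applications
app_pub.bind("tcp://*:5557")

app_sub = ctx.socket(zmq.SUB)       # aggregates messages coming back from applications
app_sub.bind("tcp://*:5558")
app_sub.setsockopt(zmq.SUBSCRIBE, b"")  # no filtering; take everything

BROADCAST = b"*"                    # assumed special identity for broadcasts

poller = zmq.Poller()
poller.register(frontend, zmq.POLLIN)
poller.register(app_sub, zmq.POLLIN)

while True:
    events = dict(poller.poll())
    if frontend in events:
        # ROUTER prepends the user's identity: [identity, payload]
        identity, payload = frontend.recv_multipart()
        app_pub.send_multipart([identity, payload])
    if app_sub in events:
        # Applications send [target identity, payload]
        identity, payload = app_sub.recv_multipart()
        if identity == BROADCAST:
            user_pub.send(payload)                        # broadcast to every user
        else:
            frontend.send_multipart([identity, payload])  # unicast via ROUTER
```

In this sketch, each application would connect a SUB socket to port 5557 and a PUB socket to port 5558, while each user would connect a DEALER socket to port 5555 and a SUB socket to port 5556.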
Questions:
Q1: Is there any flaw in the above architecture?
Q2: Is it possible to improve it more?
Metrics assumed for Q2:
The users and applications are dynamic in nature; they connect and disconnect on their own, and the design should withstand that.
A user reports its status periodically to the server; the design should support a latency of less than 333 [ms] (a user connected to the server over the Internet, where the WAN connectivity between user and server provides a latency much lower than 333 [ms]).
Lossless transmission between server and users (ACKing at the backend, retransmission if lost).
You can try Malamute, which gives you what you need and more, like credit-based flow control, keep-alives, and tracking.
Malamute is a small broker based on ZeroMQ and part of the ZeroMQ community. You can run Malamute as a component inside your application; you don't need a dedicated service or daemon for it.
If you are using C or C++, it's a no-brainer, as it integrates naturally. It also has bindings for many more languages.
https://github.com/zeromq/malamute

What does "concurrent requests" really mean?

When we talk about the capacity of a web application, we often mention the number of concurrent requests it can handle.
As my other question discussed, Ethernet uses TDM (Time Division Multiplexing), so no two signals can pass along the wire simultaneously. If the web server is connected to the outside world through an Ethernet connection, there will be literally no concurrent requests at all: all requests will come in one after another.
But if the web server is connected to the outside world through something like a wireless network card, I believe multiple signals could arrive at the same time as electromagnetic waves. Only in this situation are there real concurrent requests to talk about.
Am I right on this?
Thanks.
I imagine "concurrent requests" for a web application doesn't get down to the link level. It's more a question of the processing of a request by the application and how many requests arrive during that processing.
For example, if a request takes on average 2 seconds to fulfill (from receiving it at the web server to processing it through the application to sending back the response) then it could need to handle a lot of concurrent requests if it gets many requests per second.
The requests need to overlap and be handled concurrently, otherwise the queue of requests would just fill up indefinitely. This may seem like common sense, but for a lot of web applications it's a real concern because the flood of requests can bog down a resource for the application, such as a database. Thus, if the application has poor database interactions (overly complex procedures, poor indexing/optimization, a slow link to a database shared by many other applications, etc.) then that creates a bottleneck which limits the number of concurrent requests the application can handle, even though the application itself should be able to handle them.
Imagine an HTTP server listening on port 80; what happens is:
a client connects to the server to request some page; it is connecting from some origin IP address, using some origin local port.
the OS (actually the network stack) looks at the incoming request's destination IP (since the server may have more than one NIC) and destination port (80) and verifies that some application is registered to handle data on that port (the HTTP server). The combination of four numbers (origin IP, origin port, destination IP, port 80) uniquely identifies a connection. If such a connection does not exist yet, a new one is added to the network stack's internal table and a connection request is passed on to the HTTP server's listening socket. From then on, the network stack just passes data for that connection on to the application.
Multiple clients can be sending requests, and the above happens for each one. So from the network perspective, everything happens serially, since data arrives one packet at a time.
From the software perspective, the HTTP server is listening for incoming requests. The number of requests it can have queued before clients start getting errors is chosen by the programmer based on the hardware capacity (this is the first bit of concurrency: there can be multiple requests waiting to be processed). For each one it will create a new socket (as fast as possible, in order to keep emptying the request queue) and let the actual processing of the request be done by another part of the application (different threads). These processing routines will (ideally) spend most of their time waiting for data to arrive and react (ideally) quickly to it.
Since the processing of data is usually many times faster than the network I/O, the server can handle many requests while processing network traffic, even if the hardware consists of only one processor. Multiple processors increase this capability. So from the software perspective, everything happens concurrently.
How the actual processing of the data is implemented is where the key to performance lies (you want it to be as efficient as possible). Several possibilities exist (async socket operations as provided by the Socket class, thread pools, dedicated threads, the new parallel features from .NET 4).
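As a rough illustration (the answer's Socket/.NET vocabulary maps onto any socket API), here is a minimal sketch of that accept-then-hand-off pattern using Python's standard library; the port and backlog values are arbitrary:

```python
# Accept loop + per-request worker threads; the backlog is the queue of
# requests waiting before clients start getting connection errors.
import socket
import threading

def handle_request(conn, addr):
    # Runs on its own thread, so the accept loop below can keep draining
    # the listen queue while earlier requests are still being processed.
    data = conn.recv(4096)  # in practice: loop until the full request arrives
    conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok")
    conn.close()

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("0.0.0.0", 8080))
server.listen(128)  # the programmer-chosen queue size mentioned above

while True:
    # Each accept yields a new socket identified by the 4-tuple
    # (origin IP, origin port, destination IP, destination port).
    conn, addr = server.accept()
    threading.Thread(target=handle_request, args=(conn, addr), daemon=True).start()
```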
It's true that no two packets can arrive at exactly the same time (unless multiple network cards are in use, per Gabe's comment). However, a web request usually requires a number of packets. The arrival of these packets is interleaved when multiple requests come in at nearly the same time (whether over wired or wireless access). Also, the processing of these requests can overlap.
Add multithreading (or multiple processors/cores) to the picture, and you can see how lengthy operations such as reading from a database (which requires a lot of waiting around for a response) can easily overlap even though the individual packets arrive in a serial fashion.
Edit: Added note above to incorporate Gabe's feedback.

Game Server: Broadcasting UDP packet over Internet?

A friend of mine made a small LAN-playable game and asked me to change it so it would be playable over the Internet. I don't want to make huge changes to the client application.
When a game is created, the server keeps sending UDP BROADCAST packets to tell everyone that a game has been created. Now I just need to change this BROADCAST so that these packets go to a group of Internet IP addresses.
Can you tell me if the following solution is a good one: I would create a room server, let's call it 'room-broadcast-server', that contains the IP addresses of everyone who joined the room. Then the clients, instead of sending that BROADCAST packet, would send the packet to the room-broadcast-server, which would broadcast it to everyone who joined the room.
The problem is: the clients would receive these packets from the room-broadcast-server and would try to communicate with the room-broadcast-server instead of with the machine that created the game. I would like to fool the clients into thinking the packet came from the game server, not from the room-broadcast-server. How can I do that?
Are you modifying only the server, or both the server and the clients? I would think that it would be simpler to just remove the broadcasts altogether and have the clients explicitly choose the server they want to connect to, rather than relying on the server broadcasting.
As a version 1, you could simply require players to type in the IP address/DNS name of the server in the client in order to connect.
For version 2, you could add support for a "lobby" where you have a (known) central lobby server that both clients and server connect to in order to find each other (so servers connect to the lobby to announce their presence and then clients connect to the lobby in order to browse servers that they want to connect to).
In a game that I have been writing (but on hold currently due to lack of free time :p) I had written the "lobby" server as a simple PHP+MySQL web application and had the clients and servers use HTTP requests to poll it for updates and so on. That way, I could host the central lobby server on a cheap web host and anybody could host games (the drawback is that cheap web hosts don't allow arbitrary socket connections, so I wasn't able to implement NAT punch-through on it, but if/when the game became popular my plan was to move the lobby server to a more expensive host that did allow arbitrary socket connections...)
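As a rough sketch of that lobby idea, here is what the HTTP polling could look like from the game's side; the URL and endpoint names are hypothetical, not from the actual game:

```python
# Hypothetical lobby API: POST /announce registers a game server,
# GET /servers returns the current server list as JSON.
import json
import urllib.request

LOBBY_URL = "http://lobby.example.com"  # assumed cheap web host running the lobby

def announce(name: str, port: int) -> None:
    # A game server registers its presence with the lobby.
    body = json.dumps({"name": name, "port": port}).encode()
    req = urllib.request.Request(
        f"{LOBBY_URL}/announce", data=body,
        headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)

def browse() -> list:
    # A client polls the lobby to browse the servers it can connect to.
    with urllib.request.urlopen(f"{LOBBY_URL}/servers") as resp:
        return json.loads(resp.read())
```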
Spoofing the source address isn't a suitable mechanism for your normal application protocol. It requires special permissions on client machines, it will be dropped by some network filters, and it is just generally gross and antisocial.
Since you're modifying the clients anyway (to send the messages to the game server instead of to the broadcast address), you can simply have the game server append the "true source" of the packet to each packet it sends out, and have the clients expect and process this information in the packets they receive from the game server.
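As a small illustration of this, here is a sketch in Python; the 6-byte header (a 4-byte IPv4 address plus a 2-byte big-endian port prepended to the payload) is an assumed layout for this sketch, not part of the original game protocol:

```python
# Embed the game server's true (IP, port) in each announcement so clients
# can reply to it directly, even though the packet arrives via the relay.
import socket
import struct

def make_announcement(public_ip: str, public_port: int, payload: bytes) -> bytes:
    # 4-byte IPv4 address + 2-byte big-endian port, then the original payload.
    return socket.inet_aton(public_ip) + struct.pack("!H", public_port) + payload

def parse_announcement(data: bytes):
    # Returns the game server's true address and the original payload.
    ip = socket.inet_ntoa(data[:4])
    (port,) = struct.unpack("!H", data[4:6])
    return (ip, port), data[6:]
```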
The basic concept (one central lobby server + multiple game servers) of Dean's second option is good and very common, even in retail online games. One thing I don't understand is why you want to fake IP addresses; clients shouldn't care what the server's IP address is as long as they get valid packets from it. Also, IMO, you don't need to create a separate room server, since a game server can/should manage its own list of clients once they are connected via the lobby server.
