It is my understanding that the Asterisk Manager Interface is single threaded.
Can someone please explain to me if this is true, and if so, explain what some of the limitations of this would be?
If calls to the AMI overload the single thread, do requests get queued up? Can this cause issues on a system (phones losing registration, poor call quality, etc.)?
You can open 10 AMI sessions in 10 threads and create your own pool.
However, if you think you need more than one, you are very likely doing something wrong: AMI responses are usually very fast.
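If you do want a pool, here is a rough sketch (in Rust, with a placeholder host and placeholder credentials) of opening several independent AMI sessions, each logged in over its own TCP connection:

```rust
use std::io::Write;
use std::net::TcpStream;

// Open `n` independent AMI sessions; the host and credentials are placeholders.
fn open_ami_pool(n: usize) -> std::io::Result<Vec<TcpStream>> {
    let mut sessions = Vec::with_capacity(n);
    for _ in 0..n {
        let mut s = TcpStream::connect("127.0.0.1:5038")?;
        // Each session logs in separately; an AMI action is a set of
        // CRLF-terminated key/value lines ended by a blank line.
        s.write_all(b"Action: Login\r\nUsername: manager\r\nSecret: secret\r\n\r\n")?;
        sessions.push(s);
    }
    Ok(sessions)
}

fn main() -> std::io::Result<()> {
    // Hand one session to each worker thread, or guard each with a mutex.
    let _pool = open_ami_pool(10)?;
    Ok(())
}
```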
I have a question and I couldn't find any answer here.
I am using the NamedPipeClientStream/NamedPipeServerStream classes for bidirectional IPC in a .NET 5.0/.NET Framework 4.8 environment, and it works fine. However, I need a way of letting the client know (immediately) if the server is no longer running, and I am not sure what the best way to achieve this is.
Of course I can try, on the client side, to call Connect with a timeout every 3 seconds for example, and if I get a TimeoutException it means the server is not available. However, as far as I have read, doing this (for example with a timer and/or a background thread) is not very efficient. Are there better ways to do this (and still use named pipes)? Can someone point me in another (better) direction?
I had this solved before with WCF; however, since .NET Core doesn't support it, that is no longer an option.
Thank you very much.
BR,
M.
You could do a twofold approach: if the server shuts down nicely, it sends an 'I am shutting down' message to the client, and in addition the server sends a small heartbeat message to clients once per second. This way clients can check, whenever they need to, whether the last heartbeat is stale or the server has shut down.
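For illustration, the client-side staleness check can be as simple as the sketch below (shown in Rust just to show the shape; the timeout value is an assumption, e.g. a few heartbeat intervals):

```rust
use std::time::{Duration, Instant};

// Client-side record of the last heartbeat seen from the server.
struct ServerLiveness {
    last_heartbeat: Instant,
    timeout: Duration, // e.g. 3x the server's heartbeat interval
}

impl ServerLiveness {
    fn new(timeout: Duration) -> Self {
        Self { last_heartbeat: Instant::now(), timeout }
    }

    // Call this whenever a heartbeat (or any other message) arrives on the pipe.
    fn heartbeat_received(&mut self) {
        self.last_heartbeat = Instant::now();
    }

    // The server is treated as gone once it announced a shutdown or the last
    // heartbeat is older than the allowed timeout.
    fn server_alive(&self, got_shutdown_message: bool) -> bool {
        !got_shutdown_message && self.last_heartbeat.elapsed() < self.timeout
    }
}

fn main() {
    let mut liveness = ServerLiveness::new(Duration::from_secs(3));
    liveness.heartbeat_received(); // call this from the pipe-reading loop
    if !liveness.server_alive(false) {
        eprintln!("server appears to be down");
    }
}
```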
Looking for a WCF replacement myself and came across your post ;-)
I have a Tokio TCP back-end application, which, briefly, after receiving a request, reads something from Redis, writes something to PostgreSQL, uploads something via HTTP, sends something to RabbitMQ, etc. Processing each request takes a lot of time, so a separate task is created for each request. As sharing connections is impossible in asynchronous models, some connection pooling is required. For now, new connections are established on each request, which is extremely wasteful.
I have been looking for an asynchronous connection pool implementation in Rust, but have not found any of them up to date.
I would like to hear some advice on how to implement it myself.
The only idea I have come up with is:
Implement a Stream/Sink object with an inner collection of connections. It does not matter whether it is LIFO or FIFO, since the connections are identical. On application startup, N connections are allocated.
Now I am not sure if it is possible to share such a pool among tasks, but if it were possible, tasks would poll the stream for a connection instance (instead of establishing their own), use it, and then put it back.
If there were no connections available, the stream might establish more of them or ask the task to hang on (depending on its configuration).
If a connection fails, it gets dropped and the pool now contains N-1 connections, so it may decide to allocate a new one on the next request.
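For illustration, here is a rough sketch of what I mean (written with tokio's async/await style; Connection is just a stand-in for a real client, and instead of a Stream/Sink it uses a Semaphore plus a Mutex-guarded queue so that tasks wait when the pool is empty):

```rust
use std::collections::VecDeque;
use std::sync::Arc;
use tokio::sync::{Mutex, Semaphore};

// Stand-in for a real client handle (Redis, PostgreSQL, ...).
struct Connection {
    id: usize,
}

struct Pool {
    idle: Mutex<VecDeque<Connection>>,
    slots: Semaphore, // limits how many connections are checked out at once
}

impl Pool {
    fn new(size: usize) -> Arc<Self> {
        // On application startup, N connections are allocated
        // (dummies here; in reality this is where you connect to the backend).
        let idle: VecDeque<Connection> = (0..size).map(|id| Connection { id }).collect();
        Arc::new(Pool {
            idle: Mutex::new(idle),
            slots: Semaphore::new(size),
        })
    }

    async fn checkout(&self) -> Connection {
        // Tasks queue here instead of opening their own connections.
        let permit = self.slots.acquire().await.expect("pool closed");
        permit.forget(); // the permit is returned manually in `checkin`
        self.idle
            .lock()
            .await
            .pop_front()
            .expect("a permit guarantees an idle connection")
    }

    async fn checkin(&self, conn: Connection) {
        // A failed connection could be dropped here instead and a fresh one
        // allocated lazily on the next checkout.
        self.idle.lock().await.push_back(conn);
        self.slots.add_permits(1);
    }
}

#[tokio::main]
async fn main() {
    let pool = Pool::new(4);
    let handles: Vec<_> = (0..16)
        .map(|_| {
            let pool = Arc::clone(&pool);
            tokio::spawn(async move {
                let conn = pool.checkout().await; // shared pool, per-task connection
                // ... do the Redis/PostgreSQL/HTTP/RabbitMQ work with `conn` ...
                pool.checkin(conn).await;
            })
        })
        .collect();
    for h in handles {
        h.await.unwrap();
    }
}
```

Since the pool lives behind an Arc, every task can hold a cheap handle to the same pool, which is exactly the sharing I am unsure about.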
So I have two problems I cannot find proper answers to anywhere:
Must/can/should I share the stream/sink-pool among tasks in some way? In any case, I see there are Shared futures in the futures crate.
There are some murky points in the tokio/futures tutorial. E.g. it does not explain how I notify the uppermost task, that is, how I implement the mythical innermost future, which does not poll anything itself, but still has to notify the upper futures.
Or is my approach completely wrong? I could start playing with it by myself, but I have a strong suspicion that I have missed something, e.g. a one-click solution.
I’m writing a remote application, controlled by a server. The client would be some sort of daemon that’s pretty much always on. The thing is — these remote commands are unpredictable and sparse. The server could go hours or days without sending a message, or it could send several messages in an hour.
I have no experience with networking, so I’m not sure how all this works and I just need pointers for where to look.
What’s the best, most efficient (cheapest) way to do this? I’d be using AWS for all of this.
The first option I thought of was to store in a database the IPs of all the clients, associated with their user IDs. When an AWS Lambda function is called, it makes a new connection to the IP associated with that user ID, sends the message, then closes the connection as the Lambda function exits.
The second option was to host an EC2 instance and actively keep alive connections to all the users. But this would require running the EC2 instance 24/7 with potentially a lot of clients, which could get very expensive.
I’m not sure what best practice is here, or even what protocols to look into for that kind of thing. For example, on the first option, how would the server connect to the client? Wouldn’t it have to port forward because of firewalls or something?
Again, I don’t have any experience with network programming so I’ll take all the pointers I can get as to how this is generally accomplished.
Thanks!
I'm fairly new to Akka and writing concurrent applications, and I'm wondering what a good way is to implement an actor that waits on a Redis list and, once an item becomes available, processes it or sends it to a different actor to process.
Would using the blocking command BRPOPLPUSH be better, or would a scheduler that asks the actor to poll Redis every second be a better way?
Also, on a normal system, how many of these actors can I spawn concurrently without consuming all the resources the system has to offer? How does one decide how many of each actor type an actor system should be able to handle on the system it's running on?
As a rule of thumb you should never block inside receive. Each actor should rely only on CPU and never wait, sleep or block on I/O. When these conditions are met, you can create even millions of actors working concurrently. Each actor is supposed to have a 600-650 byte memory footprint (see: Concurrency, Scalability & Fault-tolerance 2.0 with Akka Actors & STM).
Back to your main question. Unfortunately there is no official Redis client "compatible" with the Akka philosophy, that is, completely asynchronous. What you need is a client that, instead of blocking, returns you a Future object of some sort and allows you to register a callback for when results are available. There are such clients e.g. for Perl and node.js.
However, I found the independent fyrie-redis project, which you might find useful. If you are bound to a synchronous client, the best you can do is either:
poll Redis periodically without blocking and inform some actor by sending it a message with the Redis reply, or
block inside an actor and understand the consequences
See also
Redis client library recommendations for use from Scala
BRPOPLPUSH will block for a long time (up to the timeout you specify), so I would favour a scheduler instead, which still blocks, but for a shorter amount of time every second or so.
Whichever way you go, because you are blocking, you should read this section of the Akka docs which describes methods for working with blocking libraries.
Do you have control over the code that is inserting the item into Redis? If so, you could get that code to send your Akka code a message (maybe over ActiveMQ using the Akka Camel support) to notify it when the item has been inserted into Redis. This would be a more event-driven way of working and prevent you from having to poll, or block for super long periods of time.
I'm having no luck trying to find out how changing the instance count for an ASP.Net web role affects requests currently being processed.
Here's the scenario:
An ASP.Net site is deployed with 6 instances
Via the console I reduce the instance count to 4
Is Azure smart enough not to remove instances from the pool while they are currently processing requests, or does it just kill them mid-request?
I've been through the Azure docs, Google, and a number of emails to MS tech support, none of which were able to answer this seemingly simple question. I know about the events that get triggered by a shutdown etc., but that doesn't really help in a web site scenario with a live person waiting for a response to their request.
You cannot choose which instances to kill off. Primarily this is due to Windows Azure's instance allocation scheme, where your instances are split into different fault domains (meaning different areas of the data center - different rack, etc.). If you were to choose the instances to kill, this could leave you in a state where your remaining instances are in the same fault domain, which would void the SLA.
Having said that: You get an event when your role instance is shutting down (the OnStop() event). If you capture this event, you can do instance cleanup in preparation for VM shutdown. I can't recall if you're taken out of the load balancer at this point, but you could always force yourself out with a simple PowerShell command (Set-RoleInstanceStatus -Busy). This way your asp.net instance stops taking requests, and you can more easily shut down in a graceful manner.
EDIT: Sorry - didn't quite address all of your question. Since you get to capture OnStop(), you might have to implement a mechanism to make sure nothing's being processed in that instance. Since you're out of the load balancer, and assuming your requests are processed fairly quickly (2-5 seconds), you shouldn't have to wait long to clear out remaining requests. There's probably a performance counter to check, to see how many active requests are being handled.
Just to add to David's answer: the OnStop event happens when you are off the load balancer. For web apps, it is usually sufficient time to bleed out all requests after you are disconnected from the LB until the instance is shut down. However, for long-running or stateful connections (perhaps to a worker role), there would be an abrupt disconnect in some cases. While the OnStop method removes you from the LB, it does not terminate open connections. It simply prevents you from getting new connections. For web apps, this is usually enough time to complete the request (and you can delay the shutdown if necessary in the OnStop as well, if you really want to).