My program should notify its subscribers about new calls, ended calls, transfers and so on.
I could listen for CEL events over AMI, but a simpler solution would be to query the database every X seconds and handle the records from there, since I'll have to do that anyway to handle calls that took place while my program wasn't running.
(Yeah, I know, usually pushed events are better than polling, but not in this case, IMO.)
But I'm not sure how fast CEL events are dumped into the database. Is there any delay or queue?
I've tested on my local Asterisk and events appeared in the database right away, but on some highly loaded instances this may not be so.
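For reference, a minimal sketch of the poller I have in mind (Python with sqlite3 just for illustration; the `cel` table and column names are assumptions, since the real schema depends on how the CEL backend is configured):

```python
import sqlite3
import time

POLL_INTERVAL = 5  # the "X seconds" from above

def handle_event(row):
    print("new CEL event:", row)  # notify subscribers here

def poll_cel(db_path="asterisk.db"):
    conn = sqlite3.connect(db_path)
    last_id = 0  # persist this in practice, so a restart picks up missed events
    while True:
        rows = conn.execute(
            "SELECT id, eventtype, eventtime FROM cel WHERE id > ? ORDER BY id",
            (last_id,),
        ).fetchall()
        for row in rows:
            last_id = row[0]
            handle_event(row)
        time.sleep(POLL_INTERVAL)
```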
There is no queue.
When the backend database CEL driver is loaded, it initializes the connection to the database. When an event happens, it basically blocks the execution of that call until the database operation finishes (succeeds, fails, or times out).
If ODBC is used, that is a bit different, as I remember: it handles database transactions and cursors, but there is still no queue. I'm not sure about connection pooling.
I have built an orchestration with a loop to retrieve paged data from a REST web service. From the page size and offset I am able to call the service for the "next page" of data. I then debatch it, map it to an internal format, and process it further. When one page is processed, I request the next page from the REST web service.
As it turns out, the host running the orchestration and send ports sees its memory grow constantly while processing the data, eventually hitting throttling mode.
Why is memory not released when I am done with one pass of the page loop? Is it the "consumed" messages stored in the orchestration that build up the memory? Is it possible to clear the orchestration of these "consumed" messages, to release the memory they use?
(No message tracking active on the orchestration, or send ports.)
Apparently, there is no way to prevent BizTalk Orchestrations from building up a list of messages in the Orchestration, including used/processed/consumed messages. Putting things in a Scope does not prevent this behaviour.
Hence, for long-running Orchestrations a lot of messages can build up. This is especially true for singleton Orchestrations, where the general solution proposed for this problem is to make sure the Orchestration shuts down once in a while (when idle, for example).
My solution was to split the Orchestration into two, and have the initial Orchestration start the second one with a Start Orchestration shape, which in turn starts itself recursively, and so on, until the last page is received and the last Orchestration ends.
Yes, what you need to do is to have scopes, and to have the messages initialised inside the scope rather than at the top level of the orchestration; that means they will also be disposed of at the end of the scope. Note: that means those messages can't be used outside of the scope.
However, if you are just re-using the same messages in the loop, then I wouldn't expect memory usage to increase, so there is possibly something else going on. I suspect that you must be appending each page to a message, and that is what is growing.
The number of AnyOfferChanged notifications varies over time, and we don't have any specific way to read multiple notifications together.
So I am reading them one by one and saving some of the information to a SQL Server database. This takes so long that I can never finish reading all the notifications.
What is the best possible way to achieve this?
Here's what I did: I started by clearing out the queue. Then I started my Windows service, which polled the queue every few seconds. I think I pulled back 10 messages at a time. I would get a total count of messages and then spin up a number of threads suited to the number of messages waiting. One by one, I read each message, added it to my SQL database, then deleted it from SQS.
Over time, I came to understand better how many threads to spin up and how often to poll my queue. As long as my service was running, I would maintain just a handful of SQS messages in the queue at a time and would quickly read and process them. Occasionally, due to bad programming (yeah, it happens), my service would crash and I wouldn't know about it. Tens of thousands of messages would queue up, and I would put my service into "crisis" mode, which polled at an increasing rate and essentially maxed out the number of calls I could make to SQS. Usually within a few hours my service would catch up, and then I would increase the polling interval again. Sometimes, though, I would just dump the queue and start over, as I'd have potentially hundreds of price changes on a single SKU and didn't want to waste the processing time going through them. But most of the time, things ran smoothly.
Why can't you read more than one notification together? Like I said, I believe I read 10 at a time on each thread. Once I got the 10 messages, I processed them in a loop and saved them to a SQL database. Once the 10 were processed, I sent requests to SQS to delete them.
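Roughly, the receive/process/delete loop can be sketched like this with boto3 (for illustration only; the queue URL and the `save_to_sql` helper are placeholders):

```python
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/offer-changes"  # placeholder

def save_to_sql(body):
    ...  # insert the notification into the SQL database

def drain_once():
    resp = sqs.receive_message(
        QueueUrl=QUEUE_URL,
        MaxNumberOfMessages=10,  # SQS returns at most 10 messages per call
        WaitTimeSeconds=10,      # long polling cuts down on empty receives
    )
    for msg in resp.get("Messages", []):
        save_to_sql(msg["Body"])
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
```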
I ran this for several years on an account with over 10,000 SKUs. We had up-to-the-minute price change notifications on all our products and could instantly reprice and update Amazon if needed.
I have a Tokio TCP back-end application which, briefly, after receiving a request, reads something from Redis, writes something to PostgreSQL, uploads something via HTTP, sends something to RabbitMQ, etc. Processing each request takes a lot of time, so a separate task is created for each request. As sharing connections is impossible in asynchronous models, some connection pooling is required. For now, new connections are established for each request, which is extremely wasteful.
I have been looking for an asynchronous connection pool implementation in Rust, but have not found any that are up to date.
I would like to hear some advice on how to implement it myself.
The only idea I have come up with is:
Implement a Stream/Sink object with an inner collection of connections. It does not matter whether it is LIFO or FIFO, since the connections are identical. On application startup, N connections are allocated.
Now I am not sure if it is possible to share such a pool among tasks, but if it were possible, tasks would poll the stream for a connection instance (instead of establishing their own), use it, and then put it back.
If there were no connections available, the stream might establish more of them or ask the task to hang on (depending on its configuration).
If a connection fails, it gets dropped and the pool now contains N-1 connections, so it may decide to allocate a new one on the next request.
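To pin down the semantics I am after, here is the same checkout/check-in shape sketched with Python's asyncio.Queue (just an illustration of the design, not the Rust implementation I am asking about):

```python
import asyncio

class Pool:
    """A bag of identical connections; checkout = get, check-in = put."""

    def __init__(self, connections):
        self._idle = asyncio.Queue()
        for conn in connections:
            self._idle.put_nowait(conn)

    async def checkout(self):
        # If the queue is empty, the task "hangs on" here until
        # another task checks a connection back in.
        return await self._idle.get()

    def checkin(self, conn):
        self._idle.put_nowait(conn)

async def handle_request(pool):
    conn = await pool.checkout()
    try:
        ...  # use the connection
    finally:
        pool.checkin(conn)  # on failure, drop instead and allocate a fresh one
```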
So I have two problems I cannot find proper answers to anywhere:
Must/can/should I share the stream/sink pool among tasks in some way? In any case, I see some Shared futures in the futures crate.
There are some unclear points in the tokio/futures tutorial. E.g., it does not explain how to notify the uppermost task, that is, how to implement the mythical innermost future, which does not poll anything itself but still has to notify the upper futures.
Or is my approach completely wrong? I could start playing with it by myself, but I have a strong suspicion that I have missed something, e.g. a one-click solution.
I need to log to the database every call to my Web API.
Now of course I don't want to go to my database on every call.
So let's say I have a dictionary or a hash table object in my cache, and every 10,000 records I go to the database.
I still don't want the every-10,000th user to have to wait for this operation.
And I can't start a different thread for long operations, since the application pool can be recycled at basically any time.
What is the best solution for this scenario?
Thanks
I would argue that your view of durability is rather inconsistent. Your cache of 10000 objects could also be lost at any time due to an app pool recycle or server crash.
But to the original question of how to perform a large operation without causing the user to wait, here are some options:
1. Put constraints on app pool recycling and deal with the potential data loss.
2. Periodically dump the cached messages to a Windows service for further processing. This is still not 100% guaranteed to preserve data; e.g., the service/server could crash.
3. Use a message queue (MSMQ), possibly with WCF. A message queue can persist to disk, so this can be considered reasonably reliable.
Message Queuing (MSMQ) technology enables applications running at different times to communicate across heterogeneous networks and systems that may be temporarily offline. Applications send messages to queues and read messages from queues.
Message Queuing provides guaranteed message delivery, efficient routing, security, and priority-based messaging. It can be used to implement solutions to both asynchronous and synchronous scenarios requiring high performance.
Taking this a step further...
Depending on your requirements and/or environment, you could probably eliminate your cache and write all messages immediately (and rapidly) to a message queue, without worrying about performance loss or a large write operation.
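To illustrate the enqueue-then-batch pattern, here is a Python sketch under stated assumptions: the in-process `queue.Queue` merely stands in for a durable queue like MSMQ (unlike MSMQ, it is lost on a recycle), and `insert_batch` is a placeholder for the actual database write:

```python
import queue
import threading

log_queue = queue.Queue()  # stand-in for MSMQ; not durable across recycles

def insert_batch(batch):
    ...  # one multi-row INSERT instead of a round trip per record

def log_call(record):
    log_queue.put(record)  # the request path only enqueues, so callers never wait

def writer_loop(batch_size=100):
    batch = []
    while True:
        try:
            batch.append(log_queue.get(timeout=1))
        except queue.Empty:
            pass
        if len(batch) >= batch_size or (batch and log_queue.empty()):
            insert_batch(batch)
            batch.clear()

threading.Thread(target=writer_loop, daemon=True).start()
```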
I'm fairly new to Akka and writing concurrent applications, and I'm wondering what a good way is to implement an actor that waits on a Redis list and, once an item becomes available, processes it or sends it to a different actor to process.
Would using the blocking command BRPOPLPUSH be better, or would a scheduler that asks the actor to poll Redis every second be a better way?
Also, on a normal system, how many of these actors can I spawn concurrently without consuming all the resources the system has to offer? How does one decide how many actors of each type an actor system can handle on the system it's running on?
As a rule of thumb, you should never block inside receive. Each actor should rely only on CPU and never wait, sleep, or block on I/O. When these conditions are met, you can create even millions of actors working concurrently. Each actor is supposed to have a 600-650 byte memory footprint (see: Concurrency, Scalability & Fault-tolerance 2.0 with Akka Actors & STM).
Back to your main question. Unfortunately, there is no official Redis client "compatible" with the Akka philosophy, that is, completely asynchronous. What you need is a client that, instead of blocking, returns a Future object of some sort and allows you to register a callback for when results are available. There are such clients for Perl and node.js, for example.
However, I found the independent fyrie-redis project, which you might find useful. If you are bound to a synchronous client, the best you can do is either:
poll Redis periodically without blocking and inform some actor by sending it a message with the Redis reply, or
block inside an actor and understand the consequences.
See also
Redis client library recommendations for use from Scala
BRPOPLPUSH will block for a long time (up to the timeout you specify), so I would favour a scheduler instead, which still blocks, but for a shorter amount of time every second or so.
Whichever way you go, because you are blocking, you should read this section of the Akka docs which describes methods for working with blocking libraries.
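For illustration, the short-timeout variant looks roughly like this (sketched with redis-py rather than a Scala client; the key names and the `process` helper are placeholders):

```python
import redis

r = redis.Redis()

def process(item):
    ...  # handle the item, or forward it to another actor/worker

def poll_once():
    # Block for at most 1 second; BRPOPLPUSH atomically moves the item to a
    # processing list so it is not lost if the worker dies mid-task.
    item = r.brpoplpush("work", "work:processing", timeout=1)
    if item is not None:
        process(item)
        r.lrem("work:processing", 1, item)
```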
Do you have control over the code that inserts the item into Redis? If so, you could have that code send your Akka code a message (maybe over ActiveMQ using the Akka Camel support) to notify it when the item has been inserted into Redis. This would be a more event-driven way of working and would save you from having to poll, or block for long periods of time.