Skip duplicate async messages - Symfony

I'm using Symfony 4.4 with symfony/messenger in async mode, with the Doctrine transport.
The web application dispatches async messages after certain user actions. But sometimes the user executes the same action multiple times, for example updating a customer.
In that case, if the user updates the customer 10 times, the message is dispatched 10 times and the worker processes all of them, even though the last message alone would have been sufficient.
Is there a way to optimize this and process only the last message?
Thanks for your help.
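One common way to approach this (a minimal sketch, not a built-in Messenger feature: the UpdateCustomerMessage class and its accessors are hypothetical) is to carry the customer id plus the updatedAt timestamp captured at dispatch time inside the message, and have the handler drop any message that is older than the entity's current state:
// Sketch only: UpdateCustomerMessage, getCustomerId() and getDispatchedAt() are hypothetical.
use Doctrine\ORM\EntityManagerInterface;
use Symfony\Component\Messenger\Handler\MessageHandlerInterface;

class UpdateCustomerHandler implements MessageHandlerInterface
{
    private $em;

    public function __construct(EntityManagerInterface $em)
    {
        $this->em = $em;
    }

    public function __invoke(UpdateCustomerMessage $message)
    {
        $customer = $this->em->find(Customer::class, $message->getCustomerId());

        // If the customer changed after this message was dispatched, a newer
        // message for the same customer has already been (or will be) handled,
        // so this one is stale and can be skipped.
        if ($customer === null || $customer->getUpdatedAt() > $message->getDispatchedAt()) {
            return;
        }

        // ... perform the actual work against the latest state only
    }
}
With this check all 10 queued messages are still consumed, but only the one matching the latest state does any real work.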

Related

How to edit a deferred message via discord.py on program restart or with the requests module?

I have this command:
import os, sys
import discord
from discord import app_commands

@bot.tree.command(name="restart", description="Restart the bot")
@app_commands.check(is_owner)
async def restart(interaction: discord.Interaction):
    await interaction.response.defer()
    # replace the current process with a fresh copy of the bot
    os.execl(sys.executable, sys.executable, *sys.argv)
And, as you can see, it restarts the program.
But what I also want it to do is, when it restarts, send a follow-up message editing the deferred message, telling the user (me) that the bot has restarted.
Edit:
I have an on_ready event registered that I could do this in, but I don't know exactly how to do it.
I tried looking at interactions.py but couldn't find the send function that the docs say is tied to it.
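One way this is often handled (a sketch under assumptions, since the interaction token does not survive os.execl; the restart_state.json scratch file and its keys are made up for illustration) is to persist the target channel and message ids before restarting, then edit that message from on_ready:
import json
import os
import discord

STATE_FILE = "restart_state.json"  # hypothetical scratch file

# Inside the restart command, before os.execl(...):
#     msg = await interaction.followup.send("Restarting...")
#     with open(STATE_FILE, "w") as f:
#         json.dump({"channel_id": msg.channel.id, "message_id": msg.id}, f)

@bot.event
async def on_ready():
    try:
        with open(STATE_FILE) as f:
            state = json.load(f)
    except FileNotFoundError:
        return  # normal startup, nothing to edit
    channel = await bot.fetch_channel(state["channel_id"])
    msg = await channel.fetch_message(state["message_id"])
    await msg.edit(content="Bot has restarted!")
    os.remove(STATE_FILE)  # so later startups don't re-edit the message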

Laravel - writing exception for schedule event in model listener

I have a model Product that fires a retrieved event ProductRetrieved, and a listener CheckProductValidity that throws an exception depending on the API path (an if-else condition).
I also have an update query, implemented in Console\Kernel.php, that runs every day at 00:00.
Problem: CheckProductValidity throws an exception for scheduled tasks. How do I add an exception to the listener so that retrieval of Product model data is allowed when it is done by the scheduler?
Possible solution: use unsetEventDispatcher and setEventDispatcher, but this update query may at times take longer than usual. Also, the cron sends notifications and processes jobs (all depending on Product), so that might cause problems.
Not really a solution, but this is how I fixed it.
// Fix: detect whether an artisan command (e.g. schedule:run) initiated this check
$parameters = app('request')->server->get('argv');
$allowed_commands = ['schedule:run', 'migrate:refresh', 'db:seed', 'queue:work'];
if ($parameters && $parameters[0] === 'artisan'
    && in_array($parameters[1], $allowed_commands)) {
    return true;
}
In the listener I added this code, which checks whether the request came from an artisan command or a route.
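A simpler check that is often used instead (a sketch, not part of the answer above): Laravel's built-in runningInConsole() helper, which covers the scheduler, migrations and queue workers without parsing argv by hand:
// In the listener's handle() method: skip validation for console runs,
// since the scheduler, migrations and queue workers all run in console.
public function handle(ProductRetrieved $event)
{
    if (app()->runningInConsole()) {
        return; // allow retrieval without throwing
    }

    // ... existing API-path checks that may throw an exception
}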

Firebase Pub/Sub trigger: executing multiple times sporadically

We're using Firebase for an app that needs to process some data and then send out a series of e-mails once that data has been decided.
Right now I'm triggering a single handler via CRON (which uses Pub/Sub) that processes the data and then publishes a series of messages to a different Pub/Sub topic. That topic in turn has a similar trigger function that goes through a few processes and then sends a single email per execution.
// Triggered by CRON task
const cronPublisher = functions.pubsub.topic('queue-emails').onPublish(async () => {
    // processing
    ...
    // Publish to other topic
    await Promise.all(
        emails.map((email) => publisher.queueSendOffer(email))
    );
});

// Triggered by above, at times twice
const sendEmail = functions.pubsub.topic('send-email').onPublish(async () => {
    // processing and send email
});
The issue I'm running into is that the second topic's trigger is at times executed more than once, sending two identical emails. The main potential cause I've come across via Google involves long execution times resulting in timeouts and retries. That shouldn't be the case here, since our acknowledgment deadline is configured to 300 seconds and the execution times never exceed ~12 seconds.
Also, the Firebase interface doesn't seem to give you any control over how this acknowledgment is sent.
This CRON function runs every day, and the issue only occurs every 4-5 days, but then it duplicates every single email.
Any thoughts?
Appreciated.
If 'every single message' is duplicated, perhaps it is your 'cronPublisher' function that is being called twice? Cloud Pub/Sub offers at-least-once semantics, so your job should be tolerant of this: https://cloud.google.com/pubsub/docs/subscriber#at-least-once-delivery
If you were to persist some information in a Firebase transaction recording that this cron event had been received, and check it before publishing, you could prevent duplicate publishing to the "send-email" topic.
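A minimal sketch of that idea, assuming the Firebase Admin SDK and Firestore are available (the processed_events collection and the use of context.eventId as the dedup key are illustrative, not from the answer):
const functions = require('firebase-functions');
const admin = require('firebase-admin');
admin.initializeApp();

// Triggered by CRON task; context.eventId is unique per delivered Pub/Sub event
const cronPublisher = functions.pubsub.topic('queue-emails').onPublish(async (message, context) => {
    const ref = admin.firestore().collection('processed_events').doc(context.eventId);

    // Atomically claim this event id; bail out if a redelivery already claimed it.
    const firstDelivery = await admin.firestore().runTransaction(async (tx) => {
        const snap = await tx.get(ref);
        if (snap.exists) {
            return false;
        }
        tx.set(ref, { processedAt: admin.firestore.FieldValue.serverTimestamp() });
        return true;
    });
    if (!firstDelivery) {
        return;
    }

    // ... process and publish to the 'send-email' topic as before
});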

How can I send a query to the database after the request is handled?

I have an application that does the following:
After the app receives a GET request, it reads the client's cookies for identification.
It stores the identification information in a PostgreSQL DB.
It sends the appropriate response and finishes the handling process.
But this way the client also has to wait while I store the data in PostgreSQL. I don't want that; what I want is:
After the app receives a GET request, it reads the client's cookies for identification.
It sends the appropriate response and finishes the handling process.
It stores the identification information in a PostgreSQL DB.
In the second version, the storing happens after the client has received the response, so the client doesn't have to wait for it. I've searched for a solution but haven't found anything so far. I'm probably searching with the wrong keywords, because this seems like a common problem.
Any feedback is appreciated.
You should add a callback to the IOLoop, via some code like this:
from functools import partial
from tornado import ioloop

def somefunction(*args):
    # call the DB
    ...

# ... now in your get() or post() handler
...
io_loop = ioloop.IOLoop.instance()
io_loop.add_callback(partial(somefunction, arg, arg2))
# ... rest of your handler ...
self.finish()
This will run on the next iteration through the event loop, after the response has been returned to the user, invoking your DB processor somefunction.
If you don't want to wait for Postgres to respond, you could try:
1) an async Postgres driver, or
2) putting the DB jobs on a queue and letting the queue handle the DB writes; try RabbitMQ.
Remember that because you return to the user before you write to the DB, you have to think about how to handle write errors.
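On newer Tornado versions, the same idea is usually written with IOLoop.current().spawn_callback, which schedules the work for the next loop iteration and logs any exception the background task raises. A minimal sketch, assuming Tornado 5+; save_visit is a made-up coroutine standing in for the Postgres write:
from tornado import ioloop, web

async def save_visit(user_id):
    # hypothetical coroutine that writes to Postgres (e.g. via aiopg/asyncpg)
    ...

class MainHandler(web.RequestHandler):
    async def get(self):
        user_id = self.get_cookie("user_id")
        self.write("hello")
        self.finish()  # the response is sent; the client is no longer waiting

        # schedule the DB write for the next loop iteration; exceptions
        # raised inside save_visit are logged by the IOLoop, not swallowed
        ioloop.IOLoop.current().spawn_callback(save_visit, user_id)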

Symfony, Swift Mailer, CRON JOBS, & Shared Hosting Server

Before I tackle this solution, I wanted to run it by the community to get feedback.
Questions:
Is my approach feasible? i.e. can it even be done this way?
Is it the right/most efficient solution?
If it isn’t the right solution, what would be a better approach?
Problems:
Need to send mass emails through the application.
The shared hosting server only permits a maximum of 500 emails to be sent per hour before we get labeled a spammer.
The server times out while sending batch emails.
Proposed Solution:
Upon task submittal (i.e. the user provides all the necessary email information using a form and frontend template, selects the target audience, etc.), the action will then:
Determine how many records (from a stored DB of contacts) the email will be sent to.
If the number of records in #1 above is more than 400:
Assign a batch number to all of these records in the DB.
Run a CRON job that:
Every hour, selects 400 records in batch "X" and sends the saved email template, until there are no more records with batch "X". Each time a batch of 400 is sent, its batch number is erased (so it won't be selected again the following hour).
If there is an unfinished CRON job scheduled ahead of it (i.e. currently running), it will be placed in a queue.
Other clarification:
To send these emails I simply iterate with Swift Mailer using the following code:
foreach ($list as $record)
{
    mailers::sendMemberSpam($record, $emailParamsArray);
    // where the above simply contains: sfContext::getInstance()->getMailer()->send($message);
}
*where $list is the list of records with a batch_number of "X".
I'm not sure this is the most efficient solution, because it seems to bog down the server, and it will eventually time out if the list or the email is long.
So, I’m just looking for opinions at this point... thanks in advance.
