How to send 12 000 emails via Swiftmailer in Symfony2?

Sometimes we need to send many emails. We select users 100 at a time, create a message for each user, send it (i.e. add it to the spool), and call $em->clear(). But even in the production environment we can't send more than 4,000 emails: we get "Unable allocate memory".
What is the proper way to do this? Add an argument to our command and execute it many times, e.g. with --skip=4000?
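For reference, the loop described above looks roughly like this (just a sketch; the repository name, $body, $mailer and the --skip handling are illustrative):
$batchSize = 100;
$offset = $skip; // e.g. taken from a --skip command option
do {
    $users = $em->getRepository('AcmeUserBundle:User')
        ->findBy(array(), array('id' => 'ASC'), $batchSize, $offset);
    foreach ($users as $user) {
        $message = \Swift_Message::newInstance('Newsletter')
            ->setFrom('noreply@example.com')
            ->setTo($user->getEmail())
            ->setBody($body);
        $mailer->send($message); // with a file spool this only queues the message
    }
    $em->clear();          // detach all entities so memory stays flat
    $offset += $batchSize;
} while (count($users) === $batchSize);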

What we do is thread it... so, let's say you have a table with your users and you've got an ID and an EMAIL column. We assume there will be more or less an equal number of IDs ending in zero as there are ending in 1, 2, etc.
Now one script only sends emails to the people whose IDs end in, say, zero, another script sends to the people whose IDs end in 1, and so on. For example, using a parameter to define this, and assuming your script is called "send-a-lot.php", you'd run these 10 commands:
php send-a-lot.php --ending-on=0
php send-a-lot.php --ending-on=1
php send-a-lot.php --ending-on=2
php send-a-lot.php --ending-on=3
php send-a-lot.php --ending-on=4
php send-a-lot.php --ending-on=5
php send-a-lot.php --ending-on=6
php send-a-lot.php --ending-on=7
php send-a-lot.php --ending-on=8
php send-a-lot.php --ending-on=9
Inside your code, you want to do something like:
if ($id % 10 == $endingOnParameter) {
    // send the mail
}
It's not exactly what you were asking, but at least that's what we did to ease some of our load problems.
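In a Doctrine/DQL setup, the per-shard selection could look roughly like this (the entity name and the $endingOn variable are illustrative):
// Only load the users whose ID ends in the digit handled by this process.
$users = $em->createQuery(
        'SELECT u FROM AcmeUserBundle:User u WHERE MOD(u.id, 10) = :shard'
    )
    ->setParameter('shard', (int) $endingOn)
    ->getResult();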

Let me share my experience:
I sent about 8,000 personalised emails with symfony and SwiftMailer on a shared server with very limited resources. I had a table with users and created a task or command that runs a paginated query, passing the page size as a parameter to the task. I executed this task or command from a cron job every 30 minutes. You can tune this depending on your resources. With the query page size you control how many emails will be sent, and with the cron job you control the time between batches.
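Roughly, each cron run does something like this (sketched with a Doctrine query builder; the repository name and the notified flag are made up):
// Pick up the next slice of users that have not been mailed yet.
$users = $em->getRepository('AcmeUserBundle:User')
    ->createQueryBuilder('u')
    ->where('u.notified = :sent')
    ->setParameter('sent', false)
    ->setMaxResults($pageSize) // the page size comes from the task/command argument
    ->getQuery()
    ->getResult();
foreach ($users as $user) {
    // build and send the personalised message here, then mark the user as done
    $user->setNotified(true);
}
$em->flush();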
I acknowledge that there are more professional and robust solutions, but this was the only way I found on a shared server with limited resources.

Related

Symfony - background task from form setup

Would you know how to run a background task in Symfony 4, based on the submission of a form? This would avoid the user having to stay on the form until the task is finished.
The idea would be that when the form is validated, it starts an independent background task. The user can then continue browsing and come back once the task is finished to get the results.
Thanks for your help,
You need to use the Message Bus pattern. Symfony has had its own implementation of this pattern since version 4.1, when the Messenger component was introduced.
You can see the documentation here: https://symfony.com/doc/current/components/messenger.html
To make it work you need an external broker that implements the AMQP protocol. The most popular one in the PHP world, IMHO, is RabbitMQ.
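A minimal sketch of that setup (the class and property names here are made up, and the transport configuration is left out):
// src/Message/ProcessFormTask.php -- the message describing the work to do
namespace App\Message;

class ProcessFormTask
{
    private $formDataId;

    public function __construct(int $formDataId)
    {
        $this->formDataId = $formDataId;
    }

    public function getFormDataId(): int
    {
        return $this->formDataId;
    }
}

// src/MessageHandler/ProcessFormTaskHandler.php -- runs outside the HTTP request
namespace App\MessageHandler;

use App\Message\ProcessFormTask;
use Symfony\Component\Messenger\Handler\MessageHandlerInterface;

class ProcessFormTaskHandler implements MessageHandlerInterface
{
    public function __invoke(ProcessFormTask $task)
    {
        // do the long-running work here, then store the result for the user
    }
}
In the controller, once the form is valid, you inject Symfony\Component\Messenger\MessageBusInterface and call $bus->dispatch(new ProcessFormTask($id)); a worker started with the messenger:consume console command (messenger:consume-messages in early 4.x releases) then executes the handler outside the request.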
A very simple solution for this could be the following procedure:
Form is valid.
A temporary file is created.
Cronjob gets executed every five minutes and starts a Symfony command.
The command checks if the file exists and if it's empty.
If so, the command works off the background task. But before this, the command writes its process ID into the file to prevent it from being executed a second time.
Remove the file when the command has finished.
As long as the file exists you can show a hint for the user that the task is running.
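A rough sketch of what the command's execute() method could do (the lock file path is illustrative):
// The form handler created this file (empty). If it is missing there is nothing to do;
// if it is non-empty, another process already claimed the task.
$lockFile = '/var/www/var/background-task.lock';
if (!file_exists($lockFile) || filesize($lockFile) > 0) {
    return 0;
}

// Claim the task by writing our process ID into the file.
file_put_contents($lockFile, getmypid());

// ... work off the background task here ...

// Remove the file so the next form submission can trigger a new run.
unlink($lockFile);
return 0;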

How to execute a query in eval.xqy file with app-server id

I need to run a query that imports modules from the pod.
Without importing modules, if I run a simple query with the database ID as below, it works.
let $queryParam := fn:concat("?query=",xdmp:url-encode($query),"&eval=",$dataBaseId,":123")
let $url := fn:concat($hostcqport,"/eval.xqy",$queryParam)
let $response := xdmp:http-post($url, $options)[2]
If the query has import module statements, then it throws an error (File Not Found).
So I tried getting the app-server ID and passing that instead of the database ID, as below:
let $queryParam := fn:concat("?query=",xdmp:url-encode($query),"&eval=",$serverId,":123")
let $url := fn:concat($hostcqport,"/eval.xqy",$queryParam)
let $response := xdmp:http-post($url, $options)[2]
How do I pass the server ID so that the query executes against a particular app server?
Is this MarkLogic 8 or earlier? I ask because rewrite options on 8 allow for dynamic switching of module databases before execution (among lots of other amazing goodies). This may be what you want, because you can look at the query parameters at this point and build logic into the rewrite rules.
Otherwise, can you explain in more detail what you are trying to accomplish in the end? By the time your code ran, it was already executing in the context of a particular app server, so asking to execute against another app server by analysing the query parameters is a bit too late (because you are already using the app server).
[edit] The following is in response to the comments provided since. This is a messy response because the actual ticket and comments still do not paint a completely clear picture, but if you stitch them together, a problem statement now exists that I can respond to.
The original author of the question confirmed via comments that they are "trying to hit an app server on a different node than the one that you actually posted to"
OK.. This is the response to that clarification:
That is not possible. Your request is already being processed by a thread on the node that you hit with your HTTP request. MarkLogic is a cluster, but it does not share threads (or anything else, for that matter). The choices are:
a redirect to the proper node
possibly use the current node to make the request on your behalf.
But that ties up the first thread and the thread on the other node and has the HTTP communication overhead - and you need to have an app server listening for this purpose.
If this is a fire-and-forget type of situation, then you can hit any node and save the data/request in a document in the DB, using a URI naming convention that indicates which app server it is for; then, by way of insert triggers (with a URI prefix for their server ID), pick up the request from the DB and process it.

Wordpress Multi-Master DB Replication: Deadlock when updating cron table in wp_options

We're running Wordpress in an environment that features a multi-master DB behind a load-balancer. The error log was filling up with a deadlock error when WP tried to update the cron table in wp_options. We disabled wp-cron altogether but are still seeing the error, so, two questions:
1) What causes the cron table in wp_options to be updated?
2) It appears to run on every page load. Can this be disabled and a cronjob setup to run it periodically in crontab?
Thanks
Wordpress uses wp-cron.php as a means of running scheduled tasks when the user doesn't have access to, or doesn't want to set up, cronjobs via Unix. This process looks at the scheduled jobs in the cron table in wp_options and, if the specified time (or more) has elapsed, the job executes.
wp-cron.php uses wp-includes/cron.php (the Wordpress Cron API) to run scheduled jobs. In cron.php you'll find a number of functions that update the cron table; these functions are all around the scheduling of events.
Any function of Wordpress or a plugin that requires a scheduled event uses the Cron API to do so. However, the act of scheduling an event (even if it already exists) updates the cron table in wp_options. Even with wp-cron.php totally disabled, these parts of Wordpress/the plugin are still loaded and schedule their events, trying to update the cron table in the process.
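For illustration, the usual pattern a plugin uses to register a recurring job looks like this (the hook and function names are made up); these scheduling calls are what touch the cron entry in wp_options:
// Register the callback and make sure the recurring event is scheduled.
add_action('my_plugin_cleanup', 'my_plugin_do_cleanup');

add_action('init', function () {
    if (!wp_next_scheduled('my_plugin_cleanup')) {
        wp_schedule_event(time(), 'hourly', 'my_plugin_cleanup');
    }
});

function my_plugin_do_cleanup() {
    // the actual scheduled work goes here
}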
I've not figured out exactly why the deadlock occurs, other than knowing it must be related to the DB/site config, but I do now know that Wordpress is behaving itself.
I've run into this same issue -- the databases would go out of sync very quickly. Certain plugins made it occur faster (they scheduled lots of cron jobs), but even with them disabled, eventually the errors would block replication.
I was able to keep replication working by doing two things.
The first, in my.ini, was to add:
slave-skip-errors = 1062
This instructs the MySQL replication slave to skip statements that fail with a duplicate-key error (error 1062). My cluster is set up as active-passive, so in theory there should be no "real" writes to the passive MySQL node unless the active node is down, in which case there will be no "real" writes to that node. The only stuff that gets written to the passive node is the result of wp-cron jobs, which (in theory) are also running on the active node.
The second, in each site's wp-config.php, was to add:
/** disable cron */
define('DISABLE_WP_CRON', true);
This blocks wp-cron from running at all, so either one of these solutions should work on its own.
Another option would be to disable wp-cron, but leave the full database syncing in place, and schedule a script to call each site's wp-cron.php (you'd be accomplishing manually what the wp-cron service does automatically). That way, it will only run on the active node and the data should be synced over to the passive node with no problems.
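For example, a crontab entry along these lines on the active node would do it (the domain and interval are placeholders):
*/15 * * * * curl -s "https://example.com/wp-cron.php?doing_wp_cron" > /dev/null 2>&1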

Symfony, Swift Mailer, CRON JOBS, & Shared Hosting Server

Before I tackle this solution, I wanted to run it by the community to get feedback.
Questions:
Is my approach feasible? i.e. can it even be done this way?
Is it the right/most efficient solution?
If it isn’t the right solution, what would be a better approach?
Problems:
Need to send mass emails through the application.
The shared hosting server only permits a maximum of 500 emails per hour before the sender gets labeled a spammer.
Server timeout while sending batch emails
Proposed Solution:
Upon task submittal (i.e. the user provides all necessary email information using a form and frontend template, selects the target audience, etc.), the action will then:
Determine how many records (from a stored DB of contacts) the email will be sent to.
If the number of records in #1 above is more than 400:
Assign a batch number to all these records in the DB.
Run a CRON job that:
Every hour, selects 400 records in batch “X” and sends the saved email template until there are no more records with batch “X”. Each time a batch of 400 is sent, its batch number is erased (so it won’t be selected again the following hour); see the sketch after this list.
If there is an unfinished CRON JOB scheduled ahead of it (i.e. currently running), it will be placed in a queue.
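As a sketch of what the hourly job would do (assuming a Doctrine 1 model called Contact with a batch_number column, both invented from the description above):
// Select up to 400 contacts still carrying this batch number.
$records = Doctrine_Core::getTable('Contact')
    ->createQuery('c')
    ->where('c.batch_number = ?', $batch)
    ->limit(400)
    ->execute();

foreach ($records as $record) {
    mailers::sendMemberSpam($record, $emailParamsArray);

    // Erase the batch number so the record is not selected again next hour.
    $record->batch_number = null;
    $record->save();
}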
Other clarification:
To send these emails I simply iterate over the SWIFT mailer using the following code:
foreach ($list as $record) {
    mailers::sendMemberSpam($record, $emailParamsArray);
    // where the above simply contains: sfContext::getInstance()->getMailer()->send($message);
}
*where $list is the list of records with a batch_number of “X”.
I’m not sure this is the most efficient of solutions, because it seems to be bogging down the server, and will eventually time out if the list or email is long.
So, I’m just looking for opinions at this point... thanks in advance.

Sending emails asynchronously: spool, queue, and cronjob/daemon

I want to send emails asynchronously for faster and lighter http responses, but I'm struggling with many new concepts.
For example, the documentation talks about the spool. It says I should use a spool with a file, and then send emails with a command. But how should I be running that command? If I set a cronjob to execute that command every minute (the minimum available in cron), users will have to wait 30 seconds on average for their emails to be sent (e.g. the registration email).
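For reference, the file spool setup from the documentation looks like this (the spool path is just an example):
# app/config/config.yml
swiftmailer:
    spool:
        type: file
        path: "%kernel.root_dir%/spool"
and the spooled messages are then flushed with the console command, for example:
php app/console swiftmailer:spool:send --message-limit=100 --env=prod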
So I thought of using a queue instead. I'm already using RabbitMQBundle for image processing (eg, thumbnail creation). But I only use this one periodically, so it is consumed from within a cronjob.
Maybe I should create a daemon that is always waiting for new messages to arrive the email queue and deliver them ASAP?
The solution is to send every email to a queue, and then consume that queue with a service. My service is very simple: it takes items out of the queue, where each item is an array with from, to, body, etc., and sends that email. I'm using Thumper, which makes RabbitMQ easier to use: github.com/videlalvaro/Thumper . And I make sure the service is always up using 'sv' (from runit): smarden.org/runit/sv.8.html . You can use any other service or daemon manager you like.
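As a rough sketch of such a consumer, written against php-amqplib directly rather than Thumper (the queue name, credentials and message fields are assumptions):
<?php
// consumer.php: long-running worker that turns queued arrays into Swift messages.
require __DIR__.'/vendor/autoload.php';

use PhpAmqpLib\Connection\AMQPStreamConnection;
use PhpAmqpLib\Message\AMQPMessage;

$connection = new AMQPStreamConnection('localhost', 5672, 'guest', 'guest');
$channel = $connection->channel();
$channel->queue_declare('emails', false, true, false, false);

$mailer = \Swift_Mailer::newInstance(\Swift_SmtpTransport::newInstance('localhost', 25));

$callback = function (AMQPMessage $msg) use ($mailer) {
    $data = json_decode($msg->body, true);
    $message = \Swift_Message::newInstance($data['subject'])
        ->setFrom($data['from'])
        ->setTo($data['to'])
        ->setBody($data['body']);
    $mailer->send($message);
    // Acknowledge only after the mail was handed to the transport.
    $msg->delivery_info['channel']->basic_ack($msg->delivery_info['delivery_tag']);
};

$channel->basic_consume('emails', '', false, false, false, false, $callback);
while (count($channel->callbacks)) {
    $channel->wait();
}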
I have the same problem you had. How did you finally solve it?
For the moment I run a little script from the crontab that keeps itself running in a loop:
<?php
// LoopMailer.php: flushes the Swift Mailer spool, then relaunches itself in the background.
include('/var/www/vendor/symfony/symfony/src/Symfony/Component/Filesystem/LockHandler.php');
use Symfony\Component\Filesystem\LockHandler;

$lock = new LockHandler('mailer:loop');
if ($lock->lock()) {
    // Only one instance at a time: flush the spool, wait a second, then release the lock.
    system('cd /var/www && php app/console swiftmailer:spool:send');
    sleep(1);
    $lock->release();
    // Relaunch this script in the background so it keeps looping.
    shell_exec('cd /var/www && php LoopMailer.php > /dev/null 2>/dev/null &');
}
It's not very clean, but it does its job.
You need two services: one for spooling messages and another for sending instant emails. Check this
