Sending emails asynchronously: spool, queue, and cronjob/daemon - symfony

I want to send emails asynchronously for faster and lighter HTTP responses, but I'm struggling with many new concepts.
For example, the documentation talks about the spool. It says I should use a file spool, and then send the emails with a console command. But how should I run that command? If I set a cronjob to execute it every minute (the minimum available in cron), users will have to wait an average of 30 seconds for their emails to be sent (e.g., the registration email).
So I thought of using a queue instead. I'm already using RabbitMQBundle for image processing (e.g., thumbnail creation), but I only use that one periodically, so it is consumed from within a cronjob.
Maybe I should create a daemon that is always waiting for new messages to arrive in the email queue and delivers them ASAP?

The solution is to send every email to a queue, and then consume that queue with a service. My service is very simple: it takes items off the queue, where each item is an array with from, to, body, etc., and sends that email. I'm using Thumper, which makes RabbitMQ easier to use: github.com/videlalvaro/Thumper . And I make sure the service is always up using 'sv' (from runit): smarden.org/runit/sv.8.html . You can use any other service or daemon manager you like.
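To give an idea of the shape of such a consumer, here is a minimal sketch assuming Thumper's README-style Consumer API and a JSON message format; the exchange/queue names, credentials, and SMTP transport are placeholders, not part of the original answer:

<?php
require_once __DIR__ . '/vendor/autoload.php';

use PhpAmqpLib\Connection\AMQPStreamConnection;
use Thumper\Consumer;

// Placeholder connection details.
$connection = new AMQPStreamConnection('localhost', 5672, 'guest', 'guest');

$consumer = new Consumer($connection);
$consumer->setExchangeOptions(array('name' => 'emails', 'type' => 'direct'));
$consumer->setQueueOptions(array('name' => 'emails-queue'));

// Each queued item is assumed to be a JSON-encoded array with from, to, subject, body.
$consumer->setCallback(function ($body) {
    $email = json_decode($body, true);

    $mailer = Swift_Mailer::newInstance(Swift_SmtpTransport::newInstance('localhost', 25));
    $message = Swift_Message::newInstance($email['subject'])
        ->setFrom($email['from'])
        ->setTo($email['to'])
        ->setBody($email['body']);

    $mailer->send($message);
});

// Process a batch of messages, then exit and let runit/sv restart the service.
$consumer->consume(100);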

I have the same problem as you had. How did you finally solve it?
For the moment I run a little script from the crontab that keeps itself running in a loop:
<?php
// Quick-and-dirty loop: flush the spool, then re-launch this script in the
// background. The lock keeps cron from starting a second copy.
include('/var/www/vendor/symfony/symfony/src/Symfony/Component/Filesystem/LockHandler.php');

use Symfony\Component\Filesystem\LockHandler;

$lock = new LockHandler('mailer:loop');
if ($lock->lock()) {
    system('cd /var/www && php app/console swiftmailer:spool:send');
    sleep(1);
    $lock->release();

    // Respawn ourselves so the loop continues.
    shell_exec('cd /var/www && php LoopMailer.php > /dev/null 2>/dev/null &');
}
It's not very clean, but it does its job.

You need two services: one that spools messages and another that sends instant emails. Check the sketch below.
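For example, with SwiftmailerBundle you can configure two mailers, one spooled and one that delivers immediately. A sketch; the transports and paths depend on your setup:

# app/config/config.yml
swiftmailer:
    default_mailer: spooled
    mailers:
        spooled:
            transport: smtp
            spool: { type: file, path: '%kernel.cache_dir%/swiftmailer/spool' }
        instant:
            transport: smtp

The instant one is then available as the swiftmailer.mailer.instant service, while the default mailer keeps spooling.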

Related

How to transfer an XFB file using command BTOPUT in unix server

We have one .sh file which contains all the configurations.
We have something like this:
export MARK_REMOTE_NODE= (server name)
The requirement is that we have to send the same file to two different servers. Is it possible to transfer the same XFB file to two different REMOTE_NODEs or servers in UNIX?
When I was searching, I learned that BTOPUT transfers are one file at a time to one partner. So can anyone tell me how to transfer the file to two different servers?
XFB already has a hard job matching different operating and file systems, with optional compression and a retry mechanism. You need to decide what should happen when one transfer fails (only send the second when the first succeeds; shoot-and-forget; always try to send both and trust your incident management to catch the errors thrown by your monitoring; wait for the async transfer for a time depending on file size; ...).
I wouldn't trust the XFB options and would just put a loop in your script doing exactly what you want. The additional advantage is that a migration to another communication tool will be easier.
while read -r targethost; do
    # You need a copy, since XFB will rename and delete the file
    cp outputfile outputfile.${targethost}
    my_send_xfb ${targethost} outputfile.${targethost}
    # Optional: check the result of posting the file to the queue
    if [ $? -ne 0 ]; then
        echo "XFB not ready or configured for ${targethost}"
        # Perhaps break / send alert / ..
    fi
done < myhosts

Monitoring SaltStack

Is there anything out there to monitor SaltStack installations besides Halite? I have it installed, but it's not really what we are looking for.
It would be nice if we could have a web GUI or even a daily email that showed the status of all the minions. I'm pretty handy with scripting, but I don't know what to script.
Anybody have any ideas?
If by monitoring you mean operating Salt, you can try one of the following:
SaltStack Enterprise GUI
Foreman
SaltPad
Molten
Halite (DEPRECATED by SaltStack)
These GUIs will give you more than just knowing whether or not minions are alive: they will allow you to operate on them in the same manner you could with the salt client.
And if by monitoring you mean just checking whether the salt master and salt minions are up and running, you can use general-purpose monitoring solutions like:
Icinga
Naemon
Nagios
Shinken
Sensu
In fact, these tools can monitor different services on the hosts they know about. A host can be any machine that has an IP address, and a service can be any resource that can be queried via the underlying OS: a host could be a server, router, or printer, and a service could be memory, disk, or a process.
Not an absolute answer, but we're developing SaltPad, which is a replacement for and improvement on Halite. One of its features is to display the status of all your minions. You can give it a try: SaltPad project page on GitHub
You might look into Consul. While it isn't specifically for SaltStack, I use it to monitor that salt-master and salt-minion are running on the hosts they should be.
Another simple test would be to run something like:
salt --output=json '*' test.ping
And compare between different runs. It's not amazing monitoring, but at least it shows your minions are up and communicating with your master.
Another option might be to use the salt.runners.manage functions, which include a status function.
In order to print the status of all known salt minions you can run this on your salt master:
salt-run manage.status
salt-run manage.status tgt="webservers" expr_form="nodegroup"
I had to write my own. To my knowledge, there is nothing out there which will do this, and halite didn't work for what I needed.
If you know Python, it's fairly easy to write an application to monitor Salt. For example, my app had a thread which refreshed the list of hosts from the Salt keys from time to time, and a few threads that ran various commands against that list to verify the hosts were up. The monitor threads updated a dictionary with a timestamp and success/fail for each host after they ran. It had a hacked-together HTML display, color-coded to reflect the status of each node. Took me about half a day to write it.
If you don't want to use Python, you could, painfully, do something similar with this inefficient, quick, untested hack using command-line tools in bash.
# Build the list of accepted minions from salt-key's text output
minion_list=$(salt-key --out=txt | grep '^minions:' | sed -e 's/^minions: //' -e "s/[][',]//g")
for minion in ${minion_list}; do
    salt "${minion}" test.ping
    if [ $? -ne 0 ]; then
        echo "${minion} is down."
    fi
done
It would be easy enough to modify it to write a file or send an alert.
Halite was deprecated in favour of the paid UI version; sad, but true, and still SaltStack does the job. I'd guess the best monitoring will be the one you write yourself. Happily, there's the salt-api project (which I believe was part of Halite, though I'm not sure about that); I'd recommend using it with the Tornado backend, as it's better than the CherryPy version.
So if you want a nice interface, you might want to work with the API once you set it up. When setting up Tornado, make sure you're OK with authentication (I had some trouble there). Here's how you can check it:
Using Postman/curl/whatever:
Check if the API is alive:
- GET request to http://masterip:8000/
- no POST data (just to see if the API is alive)
Log in (you'll need the token returned from here to do most operations):
- POST to http://masterip:8000/login
- x-www-form-urlencoded data (raw in Postman):
username: yourUsername
password: yourPassword
eauth: pam
I'm using PAM, so I have a user with yourUsername and yourPassword added on my master server (as a regular user; that's how PAM works).
Get minions: http://masterip:8000/minions (you'll need to pass the token from the login operation).
Get all jobs: http://masterip:8000/jobs (you'll need to pass the token from the login operation).
So basically, if you want to do anything with SaltStack monitoring, just play with that salt-api and get what you want. SaltStack has output formatters, so you can get all the data as JSON (handy if your frontend is JavaScript). It lets you run commands or whatever you want, and the monitoring is left to you, unless you switch from the community to the pro version, or unless you want to use the mentioned SaltPad (which, sorry guys, was last updated a year ago according to the repo).
BTW, you might need to change that 8000 port to something else depending on your SaltStack/Tornado/config versions.
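If it helps, here is the same login-then-query flow in plain PHP, a sketch against salt-api's documented /login and /minions endpoints; masterip and the credentials are placeholders as above:

<?php
// 1) Log in and grab the auth token from the JSON response.
$master = 'http://masterip:8000';
$context = stream_context_create(array('http' => array(
    'method'  => 'POST',
    'header'  => "Content-Type: application/x-www-form-urlencoded\r\nAccept: application/json\r\n",
    'content' => http_build_query(array(
        'username' => 'yourUsername',
        'password' => 'yourPassword',
        'eauth'    => 'pam',
    )),
)));
$login = json_decode(file_get_contents($master . '/login', false, $context), true);
$token = $login['return'][0]['token'];

// 2) List minions, passing the token in the X-Auth-Token header.
$context = stream_context_create(array('http' => array(
    'header' => "Accept: application/json\r\nX-Auth-Token: " . $token . "\r\n",
)));
$minions = json_decode(file_get_contents($master . '/minions', false, $context), true);
print_r($minions['return']);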
Basically, if you want output where you can check the status of all the minions, you can run a command like:
salt '*' test.ping
salt --output=json '*' test.ping  # To get the output in JSON format
salt-run manage.up  # Returns the minions that are up
Or, if you want to visualize the same with a dashboard, you can look at some of the available options like Foreman, SaltPad, etc.

Symfony, Swift Mailer, CRON JOBS, & Shared Hosting Server

Before I tackle this solution, I wanted to run it by the community to get feedback.
Questions:
Is my approach feasible? i.e. can it even be done this way?
Is it the right/most efficient solution?
If it isn’t the right solution, what would be a better approach?
Problems:
Need to send mass emails through the application.
The shared hosting server only permits a maximum of 500 emails to be sent per hour before we get labeled a spammer.
Server timeout while sending batch emails
Proposed Solution:
Upon task submittal (i.e. the user provides all necessary email information using a form and frontend template, selects the target audience, etc.), the action will then:
Determine how many records (from a stored DB of contacts) the email will be sent to.
If the number of records in #1 above is more than 400:
Assign a batch number to all these records in the DB.
Run a cron job that:
Every hour, selects 400 records in batch "X" and sends the saved email template, until there are no more records with batch "X". Each time a batch of 400 is sent, its batch number is erased (so it won't be selected again the following hour). A sketch of this hourly task is shown after this list.
If there is an unfinished cron job scheduled ahead of it (i.e. currently running), it will be placed in a queue.
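A sketch of that hourly task in plain PHP; the contacts table, its columns, the batch argument, and the SMTP transport are hypothetical stand-ins for whatever your app uses:

<?php
// Hourly cron task: send one batch of up to 400 emails, clearing each
// record's batch number so it is not selected again next hour.
require_once __DIR__ . '/vendor/autoload.php';

$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
$batch = isset($argv[1]) ? $argv[1] : 1;

$stmt = $pdo->prepare('SELECT id, email FROM contacts WHERE batch_number = ? LIMIT 400');
$stmt->execute(array($batch));

$mailer = Swift_Mailer::newInstance(Swift_SmtpTransport::newInstance('localhost', 25));
$clear = $pdo->prepare('UPDATE contacts SET batch_number = NULL WHERE id = ?');

while ($row = $stmt->fetch(PDO::FETCH_ASSOC)) {
    $message = Swift_Message::newInstance('Saved subject')
        ->setFrom('noreply@example.com')
        ->setTo($row['email'])
        ->setBody('Saved email template');
    $mailer->send($message);

    // Erase the batch number so this record won't be picked up again.
    $clear->execute(array($row['id']));
}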
Other clarification:
To send these emails I simply iterate over the Swift mailer using the following code:
foreach ($list as $record)
{
    mailers::sendMemberSpam($record, $emailParamsArray);
    // where the above simply contains: sfContext::getInstance()->getMailer()->send($message);
}
where $list is the list of records with a batch_number of "X".
I'm not sure this is the most efficient of solutions, because it seems to bog down the server and will eventually time out if the list or the email is long.
So, I’m just looking for opinions at this point... thanks in advance.

Pagodabox or PHPfog + Gearman

All,
I'm looking for a good way to do some job backgrounding through either of these two services.
I see PHPFog supports IronWorker, but I need something more realtime. Through these cloud-based PaaS services I'm not able to use popen('background.php --token=1234'). So I'm thinking the best solution might be to kick off a Gearman worker to handle the job. (Actually my preferred method would be to use WebSockets to keep a connection open and receive feedback from the job, rather than long-polling a DB table through AJAX, but neither of these providers supports WebSockets.)
Question 1 is: is there a better solution than using Gearman to offload the job?
Question 2 is: I see PagodaBox supports 'worker listeners' (http://help.pagodabox.com/customer/portal/articles/430779)... has anybody set this up with Gearman? Would it work?
Thanks
I am using PagodaBox with a background worker in an application I am building right now. Basically, PagodaBox daemonizes a PHP process for you (meaning it will continually run in the background), so all you really have to do is create a script that checks a database table for tasks to run, runs them, and then sleeps a bit so it's not running too many queries against your database.
This is a simplified version of what I have running:
// Remove time limit
set_time_limit(0);
// Show ALL errors
error_reporting(-1);

// Run daemon
echo "--- Starting Daemon ---\n";
while (true) {
    // Query 'work_queue' table for new tasks
    // Loop over items and do whatever tasks are associated with them
    // Update row to mark task as completed
    // Wait a bit
    sleep(30);
}
A benefit to this approach is that it's easy to test via CLI:
php tasks.php
You will see all the echo statements come through in console as it's running, and of course it's much easier to do than a more complicated setup with other dependencies like Gearman.
So whenever you add a new task to the table, the maximum amount of time you'll wait for that task to be picked up is 30 seconds (or whatever your sleep time is). This is preferable to cron jobs: if you set up a cron job to run every minute (the lowest possible interval) and the work takes longer than a minute, another cron process will start working on the same queue, and you can end up with a lot of duplicated task work that is hard to debug and troubleshoot. If you instead have either a single background worker that runs all tasks, or multiple background workers that each handle different task types, you will never run into this issue.
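For illustration, here is the loop above with its comments filled in against a hypothetical work_queue table; the id, type, payload, and completed columns are made up for the sketch:

<?php
set_time_limit(0);
error_reporting(-1);

$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');

echo "--- Starting Daemon ---\n";
while (true) {
    // Query the 'work_queue' table for new tasks.
    $tasks = $pdo->query('SELECT id, type, payload FROM work_queue WHERE completed = 0')
        ->fetchAll(PDO::FETCH_ASSOC);

    foreach ($tasks as $task) {
        // ... do whatever work $task['type'] / $task['payload'] describe ...

        // Mark the task as completed so it is never picked up twice.
        $pdo->prepare('UPDATE work_queue SET completed = 1 WHERE id = ?')
            ->execute(array($task['id']));
    }

    // Wait a bit so we don't hammer the database.
    sleep(30);
}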

How to send 12 000 emails via Swiftmailer in Symfony2?

Sometimes we need to send many emails. We select users 100 at a time and, for each user, create the mail, send it (add it to the spool), and call $em->clear(). But even in the production environment we can't send much more than 4000 emails: we get "Unable to allocate memory".
What is the proper way to do that? Add an argument to our command and execute it many times using --skip=4000?
What we do is thread it... so, let's say you have a table with your users and you've got an ID and EMAIL column. We assume that there will be more or less an equal number of IDs ending in zero as there are ending in 1, 2, etc.
Now we have one script that only sends emails to the people whose ID ends in, say, zero, another script that sends to the people whose ID ends in 1, and so on. You use parameters to define this; let's say your script is called "send-a-lot.php", then you'll run these 10 commands:
php send-a-lot.php --ending-on=0
php send-a-lot.php --ending-on=1
php send-a-lot.php --ending-on=2
php send-a-lot.php --ending-on=3
php send-a-lot.php --ending-on=4
php send-a-lot.php --ending-on=5
php send-a-lot.php --ending-on=6
php send-a-lot.php --ending-on=7
php send-a-lot.php --ending-on=8
php send-a-lot.php --ending-on=9
Inside your code, you want to do something like:
if ($id % 10 == $endingOnParameter) {
    // send the mail
}
It's not exactly what you were asking, but at least that's what we did to help with "some" of our load problem.
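For completeness, a sketch of what send-a-lot.php could look like; the users table, credentials, and mailer setup are hypothetical:

<?php
// send-a-lot.php --ending-on=N : mail only users whose ID ends in N.
require_once __DIR__ . '/vendor/autoload.php';

$options = getopt('', array('ending-on:'));
$endingOnParameter = (int) $options['ending-on'];

$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
$stmt = $pdo->query('SELECT id, email FROM users');

$mailer = Swift_Mailer::newInstance(Swift_SmtpTransport::newInstance('localhost', 25));

while ($user = $stmt->fetch(PDO::FETCH_ASSOC)) {
    if ($user['id'] % 10 == $endingOnParameter) {
        // send the mail
        $message = Swift_Message::newInstance('Subject')
            ->setFrom('noreply@example.com')
            ->setTo($user['email'])
            ->setBody('...');
        $mailer->send($message);
    }
}

Filtering with WHERE MOD(id, 10) = ? in the query would avoid pulling every row, but the in-PHP check mirrors the snippet above.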
I'll mention my experience:
I sent about 8000 personalized emails with symfony and Swift Mailer on a shared server with very limited resources. I had a table with users and created a task or command that makes a paginated query, passing the page size via a parameter to the task, and I executed this task or command from a cron job every 30 minutes. You can configure it depending on your resources: with the query page size you manage how many emails will be sent, and with the cron job you manage the time between lots. A rough sketch follows below.
I acknowledge that there are more professional and robust solutions, but this was the only way I found on a shared server with limited resources.
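Roughly, the body of such a Symfony2 command might look like this; the repository name, the sent flag, and the mailer wiring are assumptions for the sketch, not the original author's code:

<?php
// Inside the command's execute() method: each cron run sends one lot.
$pageSize = (int) $input->getArgument('page-size');
$em = $this->getContainer()->get('doctrine')->getManager();

$users = $em->getRepository('AppBundle:User')
    ->createQueryBuilder('u')
    ->where('u.sent = false')
    ->setMaxResults($pageSize)
    ->getQuery()
    ->getResult();

$mailer = $this->getContainer()->get('mailer');
foreach ($users as $user) {
    $message = \Swift_Message::newInstance('Subject')
        ->setFrom('noreply@example.com')
        ->setTo($user->getEmail())
        ->setBody('...');
    $mailer->send($message);

    $user->setSent(true); // so the next run picks up the next lot
}

$em->flush();
$em->clear(); // detach entities to keep memory flat between runs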
