I am sending out a nightly email through Rules Scheduler. When I execute it manually it sends out one email, as it should; however, when it runs on the schedule it sends me 10 duplicate emails. I've looked all over and can't seem to find any solution to the problem.
Thanks in advance for any suggestions
Use the Job Scheduler module. With this module you first insert the data into job_schedule and create a queue item for each schedule. When cron runs, it executes each queue item, sends the mail, and then deletes its entry from the job_scheduler table, so it will not send the same mail to the same person again and again. There is proper documentation in the Job Scheduler module for Drupal 7; just go through it.
This sounds like a bug in the Rules module; it has its quirks. I see you have reported this issue in the Rules issue queue: http://drupal.org/node/1314916, which is what I was first going to suggest. So now I know your issue concerns Rules 7.x-2.x-dev integration with Views 7, both of which have more than a few bugs. I strongly suspect this issue has as much to do with Views as with Rules. (The 10x repetition seems unlikely to be a coincidence, since 10 is a default value for results-per-page in Views, etc.)
When you report an issue, it's helpful to include all pertinent information (Drupal version, steps to replicate, what's written to the log, etc.). I'd personally suggest seeing if you can replicate your issue in a clean installation of Drupal with just the modules necessary to run your test. If you can replicate it that way, it's easier to provide enough information for the developers to identify the issue and resolve it. (E.g. use Devel generate to create some nodes and dummy users, then create a very simple view, e.g. just the titles of the five most recent nodes, and use that view as the source for your email content. Does it send 5 copies? You may need to configure a localhost mail server to test this.)
I want to use the forge viewer as a preview tool in my web app for generated data.
The problem I have is that the Model Derivative API is sometimes slow and sometimes fast.
I read that this happens because the files are placed in a queue and processed sequentially.
In my opinion, this can be solved by:
Having the extraction.update webhook also tell me where I am in the queue, so I can give my users better progress information, or decide not to continue with the process when the queue is too long.
Being able to have a private queue. I have no problem paying more credits if necessary.
Being able to generate SVF2 files on my own server.
But I don't know if any of these options are possible. Or if there is another workaround.
Yes, that could be useful. I logged that request in our system: DERI-7940
Might be considered later on, but no plans currently
I'm not aware of any plans for that
We're always working on making the translation service better, but unfortunately, I cannot tell when it will meet your requirements - including the implementation of the webhook feature you mentioned.
SVF2 is specifically for very large models - is that what you are working with? If not, then I'm quite certain that translating to SVF would be faster.
We have a plugin for WordPress that we've been using successfully for many customers; the plugin syncs stock numbers with our warehouse and exports orders to our warehouse.
We have recently had a client move to WP-Engine, which seems to impose a hard 30-second limit on the length of a running request. Because we sometimes have many orders to export, the script simply hits a 502 Bad Gateway error.
According to WP-Engine documentation, this cannot be turned off on a client-by-client basis.
https://wpengine.com/support/troubleshooting-502-error/
My question is, what options do I have to get around a host's 30-second timeout limit? Setting set_time_limit has no effect (as expected, since it is the web server killing the request, not PHP). The only thing I can think of is making heavy modifications to the plugin so that it acts as an API and we simply pull the data from the client's system; however, this is a last resort.
The long-process timeout is 60 seconds.
This cannot be turned off on shared plans, only on plans with dedicated servers. You will not be able to get around it by attempting to modify it, as it is enforced directly on Apache, outside of your particular install.
Your options are:
1. 'Chunk' the upload into smaller pieces.
2. Upload the SQL file to your SFTP _wpeprivate folder and have their support import it for you.
3. Optimize the import so the content is imported more efficiently.
I can see three options here.
1. Change the web host (the easy option).
2. Modify the plugin to process the sync in batches. However, this still won't give you a 100% guarantee with a hard script execution time limit: something may get lost in one or more batches and you won't even know.
3. Contact WP Engine and ask them to raise the limit for this particular client.
We're running into an issue with duplicate notifications being sent to our users by the Notifications module on our Mercury Pressflow installation. The duplicate messages are identical save for one thing: the [node-url] token is replaced with 'default' in one of the messages. All the other tokens in the message are replaced correctly.
The duplicate emails do not happen consistently, maybe on 10-15% of the notifications sent out; however, a duplicated notification always arrives both with the proper URL and with the 'default' URL.
The only major modification we've made to Mercury was spinning MySQL off to its own server and adding replication. We currently have reads set up to round-robin between the two MySQL instances.
I have done the following troubleshooting, based on similar issues I found:
made sure the cron job is calling the correct URL
replaced all configurations named 'default' with the site name (Memcached, Varnish, and Apache configs)
disabled caching in a hook_init() implementation in the Notifications module
Has anyone out there experienced anything similar with Notifications and Mercury? Any and all advice is greatly appreciated.
The "Mercury" stack is external to Drupal and doesn't affect how email is queued or sent. Something within your messaging/notifications configuration or use is causing multiple messages to be created.
If you have any custom code here, I would look at that and try to trace the token variance.
I know that similar questions have been asked all over the place, but I'm having trouble finding one that relates directly to what I'm after.
I have a website where a user uploads a data file, and that file is then transformed and imported into SQL. The file could be up to 50 MB in size, and sometimes this process can take 30 minutes or even longer.
I realise I need to palm off the actual work to another process and poll that process from the web page. I'm wondering what the best approach would be, though. Being a web developer by trade, I'm finding all this new Windows Service stuff a bit confusing, and I just wanted somewhere to start.
So:
Can I / should I be doing this with a Windows Service? If so, how?
Should I use WCF? If this runs under IIS, will I have problems with aspnet_wp.exe recycling and timing out my process?
Clarifications
The data is imported into SQL; there's no file distribution taking place.
If there is a failure, it absolutely MUST be reported to the user. The web page will poll every, let's say, 5 seconds from the time the async task begins, to get the 'status' of the import. Once it's finished, another response will tell the page to stop polling for status updates.
Queries on the final decision
OK, so as I thought, it seems a Windows Service is the best idea. As to HOW to get it to work, it seems the 'put the file there and wait for the service to pick it up' idea is the generally accepted way. Is there a way I can start a process run by the service without it having to constantly check a database table or folder? As I said earlier, I don't have any experience with Windows Services - I wondered, if I put a public method in the service, can I call it somehow?
well ...
// requires: using System.Threading;
var thread = new Thread(() =>
{
    // your action, e.g. transform the uploaded file and import it into SQL
});
thread.Start();   // the request can return while the work continues on this thread
But you will have problems with that:
What if the import to SQL fails? Should there be any response to the client?
If it fails, how do you make sure the file is still available for a later request?
What if the application shuts down? This newly created and started thread will be killed as well.
...
It's not always a good idea to store everything in SQL (especially files...). If you want to make the files available to several servers, why not distribute them via FTP...?
I believe that your whole concept is a bit messed up (sorry for assuming this), and it might be helpful if you elaborate and give us more information about your intentions!
edit:
Can I / should I be doing this with a Windows Service? If so, how?
You can :) I advise you to create a simple console program and convert it into a service with srvany and sc. You can get a rough overview of how to do that here (note: insert blanks after the = ... that's a silly pitfall).
The term 'should' is relative, because you did not answer the most important question:
What if a record is persisted to the database, telling a consumer that the file test.img should be persisted, but your service hasn't captured it or hasn't transformed it yet?
So ... moving on:
Should I use WCF? If this runs under IIS, will I have problems with aspnet_wp.exe recycling and timing out my process?
You probably could create a WCF service which receives some binary data and then stores it in a database. This request could be async, yes. But what for?
Once again:
Please give us more insight into your workflow: what exactly are you trying to achieve? Which "environmental conditions" do you have (e.g. app A polls the DB and expects file records which are referenced in table x to be persisted) ...
edit:
So you want to import a .csv file. Well, that changes everything :)
But I won't advise you to use a WCF service (there could be a use for one, e.g. a WCF service with a method that inserts a single row, with the iteration through the file implemented in another app... not that good, though).
I would suggest the following:
At first, do everything in your web app (as you've already done), but use some sort of bulk insert and do your transformation/logic in the database (a rough sketch follows after these suggestions).
If you then run into some sort of bottleneck, I would suggest something like a small job service, e.g.:
The web app uploads the file and inserts a row into a job table. The job service continuously polls that table, or gets informed via WCF by the web app (hey, hey, finally some sort of use for WCF in your scenario... :) ), and then does the import job, writing a finish note to a table or setting the state of the job to finished ...
But this is a bit of overkill :)
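To illustrate the first suggestion, here is a rough sketch of the bulk-insert idea, assuming a SQL Server target, a hypothetical staging table dbo.ImportStaging with two columns, and a trivial comma-separated file with no quoting or validation:

using System.Data;
using System.Data.SqlClient;
using System.IO;

public static class CsvImporter
{
    public static void BulkImportCsv(string csvPath, string connectionString)
    {
        // Load the file into an in-memory table; a real import would stream and validate.
        var table = new DataTable();
        table.Columns.Add("Name", typeof(string));
        table.Columns.Add("Value", typeof(string));

        foreach (var line in File.ReadAllLines(csvPath))
        {
            var parts = line.Split(',');
            table.Rows.Add(parts[0], parts[1]);
        }

        // One bulk round trip instead of thousands of single-row INSERTs.
        using (var bulk = new SqlBulkCopy(connectionString))
        {
            bulk.DestinationTableName = "dbo.ImportStaging";
            bulk.WriteToServer(table);
        }

        // The transformation/logic can then run as set-based SQL against the staging table.
    }
}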
Please see if my comments below help you resolve your issue:
• Can I / should I be doing this with a Windows Service? If so, how?
Yes, you can do this with a Windows Service, and I think that is the way you should be doing it. You can implement your own service to process your requests, or you can use the open source Job Processor code.
Basically the idea is:
You submit a request to process the CSV file in a database table, with some status such as 'Not Started'.
Then your Windows Service picks up the requests from the database table which are 'Not Started' and updates them to an 'In Progress' status.
Once the processing completes successfully/unsuccessfully, your service updates the database table with a status of 'Completed'/'Failed'.
And your ASP.NET page can poll the database table for the current status every 5 seconds or so.
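A minimal sketch of the service side of that flow, assuming a hypothetical Jobs table with Id, FilePath and Status columns (the upload page and the polling page are not shown, and ImportFile stands in for the actual transform-and-import step):

using System;
using System.Data.SqlClient;
using System.Threading;

public class JobWorker
{
    private readonly string connectionString;

    public JobWorker(string connectionString)
    {
        this.connectionString = connectionString;
    }

    // Run this loop on the Windows Service's worker thread.
    public void Run()
    {
        while (true)
        {
            using (var conn = new SqlConnection(connectionString))
            {
                conn.Open();

                // Claim one 'Not Started' job and mark it 'In Progress' in a single statement.
                var claim = new SqlCommand(
                    @"UPDATE TOP (1) Jobs SET Status = 'In Progress'
                      OUTPUT inserted.Id, inserted.FilePath
                      WHERE Status = 'Not Started'", conn);

                int jobId = -1;
                string filePath = null;
                using (var reader = claim.ExecuteReader())
                {
                    if (reader.Read())
                    {
                        jobId = reader.GetInt32(0);
                        filePath = reader.GetString(1);
                    }
                }

                if (filePath != null)
                {
                    string finalStatus;
                    try
                    {
                        ImportFile(filePath);      // hypothetical: transform the file and import it into SQL
                        finalStatus = "Completed";
                    }
                    catch (Exception)
                    {
                        finalStatus = "Failed";    // the page's next status poll reports this to the user
                    }

                    var done = new SqlCommand("UPDATE Jobs SET Status = @s WHERE Id = @id", conn);
                    done.Parameters.AddWithValue("@s", finalStatus);
                    done.Parameters.AddWithValue("@id", jobId);
                    done.ExecuteNonQuery();
                }
            }

            Thread.Sleep(TimeSpan.FromSeconds(5));  // idle between checks for new work
        }
    }

    private void ImportFile(string path)
    {
        // hypothetical placeholder for the CSV transform and bulk insert
    }
}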
• Should I use WCF? If this runs under IIS, will I have problems with aspnet_wp.exe recycling and timing out my process?
You should not be using WCF for this purpose.
I need to design a bug alert system where the web support team is notified via email when a user of our website encounters an error of any sort (a database exception, or a 404).
What would be the best way to design this section of the project? Any ideas would be appreciated.
You may want to look into using the global.asax file for application-wide error intercepting. A quick search yields this step-by-step walk-through:
http://aspnetresources.com/articles/CustomErrorPages.aspx
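For reference, a bare-bones sketch of what that handler can look like; the addresses and mail host here are placeholders, and (as noted below) mailing on every single error usually needs to be throttled or batched:

// In Global.asax.cs
protected void Application_Error(object sender, EventArgs e)
{
    Exception ex = Server.GetLastError();

    // Placeholder addresses and host; in practice read these from configuration.
    var mail = new System.Net.Mail.MailMessage(
        "noreply@example.com",
        "support@example.com",
        "Unhandled error on " + Request.Url,
        ex.ToString());

    var smtp = new System.Net.Mail.SmtpClient("localhost");
    smtp.Send(mail);
}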
Depending on the volume of traffic you're expecting, sending an e-mail every time an error is intercepted may not be the best approach. At best, you'd flood inboxes (and make the support staff very unhappy), and at worst you'd get your mail servers blacklisted for spamming. The approach that I've used in the past on high-traffic sites is to queue up errors in a table that is read and purged at a set interval by a separate process. The process would aggregate the errors, grouping them by type, number of occurrences, etc, then send out an e-mail report to the support mailing lists.
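A small sketch of the aggregation step in that approach, assuming a hypothetical ErrorRecord shape that the separate process reads out of the error table before purging it:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

public class ErrorRecord
{
    public string ExceptionType { get; set; }
    public string Message { get; set; }
    public DateTime LoggedAt { get; set; }
}

public static class ErrorDigest
{
    // Groups queued errors by type and builds the body of a single summary email.
    public static string Build(IEnumerable<ErrorRecord> errors)
    {
        var report = new StringBuilder();
        foreach (var group in errors.GroupBy(e => e.ExceptionType)
                                    .OrderByDescending(g => g.Count()))
        {
            report.AppendFormat("{0} occurrence(s) of {1}", group.Count(), group.Key);
            report.AppendLine();
            report.AppendLine("  most recent: " + group.Max(e => e.LoggedAt));
        }
        return report.ToString();
    }
}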
ASP.NET health monitoring may be of interest: http://msdn.microsoft.com/en-us/library/ms998306.aspx. It's really simpler to use than this article first appears and doesn't require any additional components - it's all built-in.
I would implement an HttpModule that handles the application's Error event.
This would allow the module to be reused across multiple applications. The destination email addresses, SMTP server, etc. could be set in the HttpModule and overridden in the web.config file for maximum flexibility.
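A rough sketch of such a module, subscribing to HttpApplication.Error; the addresses and host are placeholders that a real module would read from web.config, and the module still needs to be registered in the httpModules section of web.config:

using System;
using System.Net.Mail;
using System.Web;

public class ErrorMailModule : IHttpModule
{
    public void Init(HttpApplication application)
    {
        // Called once per HttpApplication instance; hook the unhandled-error event.
        application.Error += OnError;
    }

    private void OnError(object sender, EventArgs e)
    {
        var application = (HttpApplication)sender;
        Exception ex = application.Server.GetLastError();

        // Placeholder values; read the real ones from web.config for flexibility.
        var mail = new MailMessage(
            "noreply@example.com",
            "support@example.com",
            "Unhandled error on " + application.Request.Url,
            ex.ToString());

        new SmtpClient("localhost").Send(mail);
    }

    public void Dispose() { }
}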