What's the maximum execution time of cron? Is it possible to modify it, and if so, are there any side effects?
The accepted answer above is INCORRECT. Cron's time limit in Drupal is hardcoded to 240 seconds. See the drupal_cron_run function in includes/common.inc, specifically these lines:
drupal_set_time_limit(240);
and
if (!lock_acquire('cron', 240.0)) {
(based on the source of Drupal 7.12)
So there is no way to change this globally without hacking core. I have heard it suggested that you call drupal_set_time_limit yourself inside your hook_cron implementation, since doing so resets PHP's counter. However, that won't help you when it's a third-party module implementing hook_cron.
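For example, a long-running hook_cron implementation could reset the counter before each chunk of work. This is only a minimal sketch for Drupal 7; mymodule and its two helper functions are hypothetical names:

<?php
/**
 * Implements hook_cron().
 */
function mymodule_cron() {
  // mymodule_get_heavy_jobs() is a placeholder for however you
  // fetch your pending work items.
  foreach (mymodule_get_heavy_jobs() as $job) {
    // Reset PHP's execution-time counter before each chunk.
    // drupal_set_time_limit() wraps set_time_limit() and is a
    // no-op on hosts where that function is disabled.
    drupal_set_time_limit(240);
    mymodule_process_job($job);
  }
}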
Maximum execution time for Drupal's cron depends on your php.ini.
For example, if you use wget -O - -q -t 1 http://www.example.com/cron.php as your cron command, Apache's php.ini is used to determine the maximum execution time.
If you use php -f cron.php as your cron command, then php-cli's php.ini is used to determine the maximum execution time.
For longer execution times it is recommended to use php-cli: you can raise the maximum execution time in /etc/php5/cli/php.ini (if you use Debian Linux) with no side effects on Apache while cron runs.
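For instance, a crontab entry using the CLI binary might look like this (the binary and site paths are placeholders for your own setup):

# Run Drupal's cron hourly via php-cli; this honours
# /etc/php5/cli/php.ini, so a long max_execution_time here
# does not affect Apache.
0 * * * * /usr/bin/php -f /var/www/drupal/cron.php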
I don't know if this is necessarily the case, as I've just run cron.php through my browser and I'm getting a max execution time error at 240 seconds while the max execution time in my php.ini is 1200 seconds. So Drupal is grabbing the max execution time from somewhere besides my php.ini file.
That somewhere would be ./includes/common.inc or ./includes/locale.inc. Head in there and you'll find the settings that control how long Drupal allows cron to run before giving up.
This module can help you: Set Cron Time
I installed the Symfony 3 framework-standard-edition. I'm trying to open the home page (app.php, prod environment) and it takes 300-400 ms to load.
This is my profiler information: (screenshot omitted)
I also use PHP 7.
Why is it so slow?
You can try to optimize Zend OPCache.
Here are some recommended settings
opcache.revalidate_freq
Simply put: how often (in seconds) the code cache should expire and check whether your code has changed. 0 means it checks your PHP code on every single request (which adds lots of stat syscalls). Set it to 0 in your development environment. In production it doesn't matter, because of the next setting.
opcache.validate_timestamps
When this is enabled, PHP will check the file timestamp per your opcache.revalidate_freq value.
When it's disabled, opcache.revalidate_freq is ignored and PHP files are NEVER checked for updated code. So if you modify your code, the changes won't actually run until you restart or reload PHP (you can force a reload with kill -SIGUSR2).
Yes, this is a pain in the ass, but you should use it. Why? While you're updating or deploying code, new code files can get mixed with old ones, and the results are unknown. It's unsafe as hell.
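For example, a deploy step could reload the workers once all the new files are in place. This is only a sketch; the service name and PID file path depend on your distribution and PHP version:

# Option 1: reload the php-fpm service after deploying.
sudo service php7.0-fpm reload

# Option 2: send SIGUSR2 to the php-fpm master process directly.
sudo kill -USR2 "$(cat /var/run/php/php7.0-fpm.pid)"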
opcache.max_accelerated_files
Controls how many PHP files, at most, can be held in memory at once. It's important that your project has FEWER files than whatever you set this to. For a codebase of ~6000 files, I set opcache.max_accelerated_files to 8000.
You can run find . -type f -print | grep php | wc -l to quickly calculate the number of files in your codebase.
opcache.memory_consumption
The default is 64 MB. You can use the function opcache_get_status() to tell how much memory OPcache is consuming and whether you need to increase the amount.
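A quick way to eyeball it is a small status script like this (a sketch; serve it through a web request rather than the CLI so you see the FPM pool's cache, and remove it afterwards):

<?php
// Report OPcache memory usage.
$status = opcache_get_status(false);
$mem = $status['memory_usage'];
printf(
  "used: %.1f MB, free: %.1f MB, wasted: %.1f MB (%.1f%% wasted)\n",
  $mem['used_memory'] / 1048576,
  $mem['free_memory'] / 1048576,
  $mem['wasted_memory'] / 1048576,
  $mem['current_wasted_percentage']
);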
opcache.interned_strings_buffer
A pretty neat setting with almost zero documentation. PHP uses a technique called string interning to improve performance. For example, if you have the string "foobar" 1000 times in your code, internally PHP will store one immutable copy of it and just use a pointer to it for the other 999 uses. Cool.
This setting takes it to the next level: instead of each SINGLE php-fpm process having its own pool of these immutable strings, this setting shares the pool across ALL of your php-fpm processes. It saves memory and improves performance, especially in big applications.
The value is set in megabytes, so set it to "16" for 16MB. The default is low, 4MB.
opcache.fast_shutdown
Another interesting setting with no useful documentation. "Allows for faster shutdown".
Oh, okay. Like that helps me. What this actually does is provide a faster mechanism for calling the destructors in your code at the end of a single request, which speeds up the response and recycles PHP workers so they're ready for the next incoming request faster.
Set it to 1 and turn it on.
opcache.enable=1
opcache.memory_consumption=256
opcache.interned_strings_buffer=16
opcache.max_accelerated_files=8000
opcache.validate_timestamps=0
opcache.revalidate_freq=0
opcache.fast_shutdown=1
I hope this helps improve your performance.
[EDIT]
You might also want to look at this answer:
Are Doctrine relations affecting application performance?
TheMrbikus, try optimizing with the following elements:
Use APC
Use bootstrap files
Reference: http://symfony.com/doc/current/performance.html
Use OPcache on PHP 7
Use PHP-FPM with Apache.
The e-mail sending process and form rendering operations may also slow things down. Create a blank test controller to measure the framework's base overhead, as sketched below.
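For instance, a bare controller like the following (the class name and route are hypothetical) returns an empty response, so its timing shows the framework's base overhead without any of your application logic:

<?php
// src/AppBundle/Controller/BlankController.php (hypothetical path)
namespace AppBundle\Controller;

use Sensio\Bundle\FrameworkExtraBundle\Configuration\Route;
use Symfony\Bundle\FrameworkBundle\Controller\Controller;
use Symfony\Component\HttpFoundation\Response;

class BlankController extends Controller
{
    /**
     * @Route("/blank")
     */
    public function blankAction()
    {
        // No templating, no Doctrine: just the kernel round trip.
        return new Response('');
    }
}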
I've set up a hook on my gitlab server to call salt-run fileserver.update from a post-update hook.
How can I disable the schedule that does an update every 60 seconds, to reduce the load on my GitLab server?
The 60 seconds interval in which the Git filesystem is updated is defined by the loop_interval setting, which you can set in your master configuration file:
# The loop_interval option controls the seconds for the master's maintenance
# process check cycle. This process updates file server backends, cleans the
# job cache and executes the scheduler.
#loop_interval: 60
However, this interval controls not only the GitFS update schedule, but also a number of other maintenance tasks, so you should not increase this interval by too much.
From a quick reading of the source code (I'm not a core Salt developer though, so I might be mistaken), the GitFS update is hard-coded to run on the same schedule as these other maintenance tasks. There does not appear to be a way to disable or change the interval of only the GitFS update schedule.
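If you do decide to stretch the interval somewhat, the change is a one-liner in the master configuration (a sketch; restart the salt-master service afterwards):

# /etc/salt/master
# Run the maintenance cycle (including the GitFS update) every
# 120 seconds instead of the default 60.
loop_interval: 120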
I have a few work flows where I would like R to halt the Linux machine it's running on after completion of a script. I can think of two similar ways to do this:
run R as root and then call system("halt")
run R from a root shell script (could run the R script as any user) then have the shell script run halt after the R bit completes.
Are there other easy ways of doing this?
The use case here is scripts running on AWS, where I would like the instance to stop after the script completes so that I don't get charged for machine time after the job runs. The instance I use for data analysis is EBS-backed, so I don't want to terminate it, just suspend it. Issuing a halt command from inside the instance has the same effect as a stop/suspend from the AWS console.
I'm impressed that works. (For anyone else surprised that an instance can stop itself, see notes 1 & 2.)
You can also try "sudo halt", so you don't need to run R as the root user, as long as the user account running R is able to use sudo. This is pretty common on a lot of AMIs on EC2.
Be careful about assuming R will quit cleanly: believe it or not, one can crash R. Issuing the command inside R means that if R crashes, it never reaches the call to halt, and calling it from within another wrapper script can be fragile too. It may be better to have a separate script that watches the R PID and, once that PID is no longer active, halts the instance. If you know Linux well, what you're looking for is the PID from starting R, which you can pass to another script that checks ps, say every second, and then halts the instance once that PID is no longer running.
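A minimal shell sketch of that watchdog approach (the Rscript path is a placeholder, and you could swap sudo halt for an EC2 API call as discussed below):

#!/bin/sh
# Launch the R job in the background and remember its PID.
Rscript /home/ec2-user/analysis.R &
RPID=$!

# Poll once a second until the R process is gone, whether it
# finished cleanly or crashed.
while kill -0 "$RPID" 2>/dev/null; do
  sleep 1
done

# R is no longer running, so stop the machine.
sudo halt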
I think a better solution is to use the EC2 API tools (see: http://docs.amazonwebservices.com/AWSEC2/latest/APIReference/ for documentation) to terminate OR stop instances. There's a difference between the two, and it matters whether your instance is EBS-backed or S3-backed. You needn't run as root to terminate the instance: the fact that you have the private key and certificate shows Amazon that you're the BOSS, way above the hoi polloi who merely have root access on your instance.
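Those legacy API tools have since been superseded by the AWS CLI, where the equivalent call would look something like this (the instance ID is a placeholder):

# Stop (not terminate) an EBS-backed instance from any machine
# where the credentials are available.
aws ec2 stop-instances --instance-ids i-1234567890abcdef0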
Because these credentials can be used for mischief, be careful about running the API tools from a given server: you'll need your certificate and private key on that server, which is a bad idea in the event of a security problem. It would be better to send a message to a master server and have it shut down the instance. If you have any kind of messaging set up between instances, this can do all the work for you.
Note 1: Eric Hammond reports that halt will only suspend an EBS-backed instance, so you still incur storage fees. If you happen to start a lot of such instances, this can clutter things up. Your original question seems unclear about whether you mean to terminate or stop an instance. He has other good advice on this page.
Note 2: A short thread on the EC2 developers forum gives advice for Linux & Windows users.
Note 3: EBS instances are billed for partial hours, even when restarted. (See this thread from the developer forum.) Having an auto-suspend close to the hour mark can be useful, assuming the R process isn't still working, in case you might re-task that instance (i.e. to save on not restarting). Other useful tools to consider: setTimeLimit and setSessionTimeLimit, and various checkpointing tools (I have a question that mentions a couple). Using an auto-kill is useful if you have potentially badly behaved code.
Note 4: I recently learned of the shutdown command in package fun. This is multi-platform. See this blog post for commentary, and code is here. Dangerous stuff, but it could be useful if you want to adapt to Windows. I haven't tried it, though.
Update 1. Three more ideas:
You could use .Last() and runLast = TRUE for q() and quit(), which could shut down the instance (see the sketch after this list).
If you use littler, or a shell script that invokes the job via Rscript, the same command-line functions can be used.
My favorite package of today, tcltk2, has a neat timer mechanism called tclTaskSchedule() that can be used to schedule the execution of an expression. You could then go crazy with the execution of stuff just before an hourly interval has elapsed.
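Here is an R sketch of the first idea, assuming the account running R may execute halt via sudo without a password (as in the sudoers line further down):

# Register a handler that runs when R exits, then quit with
# runLast = TRUE so the handler actually fires.
.Last <- function() {
  system("sudo halt")
}
q(save = "no", runLast = TRUE)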
system("echo 'rootpassword' | sudo halt")
However, the downside is having your root password in plain text in the script.
AFAIK, the ways you mentioned are the only ones. In any case, the script will have to run as root to be able to shut down the machine (if you find a way to do it without root, that's probably an exploit). You ask for an easier way, but system("halt") is just one additional line at the end of your script.
sudo is an option -- it allows you to run certain commands without prompting for any password. Just put something like this in /etc/sudoers
<username> ALL=(ALL) PASSWD: ALL, NOPASSWD: /sbin/halt
(of course replacing <username> with the name of the user running R) and system('sudo halt') should just work.
I need to index 80,000 nodes.
The maximum number of nodes I can index per cron run is 500.
So I need to run cron 80,000 / 500 = 160 times to index the entire website.
How can I automatically schedule these runs (when one run finishes, the next run should start automatically)?
I don't have SSH access, so I cannot use drush.
Thanks
All cron does is visit yoursite.com/cron.php.
So you could use cron, a scheduled task, etc. on a local machine.
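For example, a one-off shell loop run from any machine with web access (the URL is a placeholder) performs all 160 passes back to back, starting each one only after the previous one finishes:

#!/bin/sh
# 80,000 nodes / 500 nodes per run = 160 cron runs, one after another.
for i in $(seq 1 160); do
  echo "cron run $i"
  # wget returns only after cron.php has finished responding,
  # so the runs never overlap.
  wget -O - -q -t 1 http://www.example.com/cron.php
done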
Did you try Poormanscron?
A module which runs the Drupal cron operation using normal browser/page requests instead of having to set up a crontab to request the cron.php script. The module inserts a small amount of JavaScript on each page of your site that when a certain amount of time has passed since the last cron run, calls an AJAX request to run the cron tasks. Your users should not notice any kind of delay or disruption when viewing your site. However, this approach requires that your site gets regular traffic/visitors in order to trigger the cron request.
Why don't you set a cronjob every 4 minutes or so? Just make sure that the interval between cronjobs is longer than the time it takes to run the cron script, so it won't overlap.
Give the Apache Solr Search module for Drupal a try.
To reiterate and clarify other answers: As long as you haven't explicitly blocked it in .htaccess or Apache configuration, you can trigger Drupal's cron.php simply by visiting yoursite.com/cron.php from any browser. You can also set up your local machine (or any other machine that has web access, really) to run its own cronjob which triggers your site's cron.php. This process varies from platform to platform, but for example, on most Linux systems, you could run crontab -e and add a line like this:
0 * * * * wget -O - -q -t 1 http://www.example.com/cron.php
# Run example.com's cron tasks at the beginning of every hour.
or possibly:
*/5 * * * * wget -O - -q -t 1 http://www.example.com/cron.php
# Run example.com's cron tasks every five minutes.
We have scheduled a script to run every 5 minutes.
How can we check whether the script really is running every 5 minutes on Linux?
If anyone knows, please reply.
Thanks in advance.
We created the script, but we don't run it ourselves. Our support team is supposed to run it every 5 minutes, and if they find any errors they report them to us.
How can we confirm whether or not they are running the script properly?
If you don't want to modify your script and you've scheduled it in cron, you could change your cron line to:
*/5 * * * * /home/me/myscript.sh; date >> /tmp/mylog
And check /tmp/mylog - a new line should be added with the date and time every run.
Maybe by making the script log to a logfile with a timestamp, so you can verify that the timestamps look right. Something like:
date >> /tmp/myprocess.log
at the top of the script (or in the loop, if that's how you're "running" it); then you can examine the log file to check.
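To spot missed runs mechanically, you could log epoch seconds instead and diff consecutive entries (a sketch, using the log path above):

# In the script, log seconds-since-epoch:
#   date +%s >> /tmp/myprocess.log
# Then print the gap between consecutive runs; anything much
# larger than 300 seconds means a missed 5-minute interval.
awk 'NR > 1 { print $1 - prev } { prev = $1 }' /tmp/myprocess.log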
Maybe you could just add a timestamped log line in your script?
Then you could see whether the script was effectively run every 5 minutes.
Without changing the script?
atop is a process-monitoring package that can record the history of programs being run and, as root, will catch terminations as well. See http://www.atcomputing.nl/Tools/atop/whyatop.html
Also consider the process accounting tools http://www.faqs.org/docs/Linux-mini/Process-Accounting.html