I have a PHP script that does what the accepted answer described here does.
It doesn't work unless I add the following before fclose($fp):
while (!feof($fp)) {
$httpResponse .= fgets($fp, 128);
}
Even an empty for loop would do the job instead of the above!
But what's the point? I wanted async calls :(
To add to my pain, the same code runs fine without the above snippet in an Apache-driven environment.
Does anybody know whether Nginx or php-fpm has a problem with such requests?
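For context, the fire-and-forget pattern I'm using looks roughly like this (a sketch of the linked approach; the host and path are placeholders):

$fp = fsockopen('example.com', 80, $errno, $errstr, 1);
if ($fp) {
    $out  = "GET /slow-task.php HTTP/1.1\r\n";
    $out .= "Host: example.com\r\n";
    $out .= "Connection: Close\r\n\r\n";
    fwrite($fp, $out);
    // under nginx/php-fpm the request seems to be dropped unless the
    // response is drained before closing:
    while (!feof($fp)) { fgets($fp, 128); }
    fclose($fp);
}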
What you're looking for can only be done on Linux-flavored systems with a PHP build that includes the Process Control functions (the PCNTL extension).
You'll find its documentation here:
http://php.net/manual/en/book.pcntl.php
Specifically, what you want to do is "fork" a process. This creates an identical copy of the current PHP script's process, including all of its memory, and then allows both processes to continue executing simultaneously.
The "parent" script is aware that it is still the primary script, and the "child" script (or scripts; you can do this as many times as you want) is aware that it is a child. This allows you to choose a different action for the parent and the child once the child is spun off and turned into a daemon.
To do this, you'd use something along these lines:
$pid = pcntl_fork(); // store the process ID of the child when the script forks

if ($pid == -1) {
    die('could not fork'); // a -1 return value means the process could not fork properly
} else if ($pid) {
    // a process ID will only be set in the parent script; this is the
    // main script that can output to the user's browser
} else {
    // this is the child script executing; any output from this script
    // will NOT reach the user's browser
}
That will enable a script to spin off a child process that can continue executing alongside (or long after) the parent script outputs its content and exits.
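For example, here is a rough sketch of offloading a slow task to the child (the task function is illustrative, and posix_setsid() requires the POSIX extension):

$pid = pcntl_fork();
if ($pid == -1) {
    die('could not fork');
} else if ($pid) {
    // parent: respond to the browser immediately
    echo 'Your request is being processed.';
} else {
    // child: start a new session so the work can outlive the parent,
    // then run the slow task and exit
    posix_setsid();
    send_bulk_emails(); // illustrative slow task
    exit(0);
}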
You should keep in mind that these functions must be compiled into your PHP build, and that the vast majority of hosting companies will not allow access to them on their servers. To use these functions, you generally need a Virtual Private Server (VPS) or a dedicated server. Even cloud hosting setups will not usually offer these functions, as they can easily bring a server to its knees if used incorrectly (or maliciously).
Using the WP Crontrol plugin, I schedule a process that sends out reminder emails to users. It is working well, but every time I need to test something using actual data, I am scared that the system will send out reminders that should not have been sent, or that have already been sent from the live system.
After restoring the backup from the production server, I quickly go to the SMTP plugin I am using and select the option that drops outgoing emails. That does the job, but there is still a risk that something gets sent before I manage to do that.
So, I am considering my options. One is to wrap the reminder function in a check to see if it is the production server, and only run the function when it is.
I could check using home_url(), and I know it will work because I use this approach for something else.
But I feel there is a better and more correct way, and kindly ask for advice.
I usually use this approach in my projects to separate the code that runs according to the development environment. First I create a constant named WP_ENVIRONMENT in the wp-config.php file and assign it the value development. Then I detect the execution environment using two helper functions:
function prefix_is_development() {
    return defined("WP_ENVIRONMENT") && "development" === strtolower(WP_ENVIRONMENT);
}

function prefix_is_production() {
    return !defined("WP_ENVIRONMENT") || "production" === strtolower(WP_ENVIRONMENT);
}
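Then I wrap anything that must only run in production in a check. For your reminders it could look like this (a sketch; the hook and function names are illustrative, use whatever your WP Crontrol event is hooked to):

add_action('my_reminder_cron_hook', function () {
    if (!prefix_is_production()) {
        return; // never send reminders from a development copy
    }
    send_reminder_emails();
});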
I am working on an issue with the design of a service which is basically redirection.
The request link I get will contain some params (abc.com/param1=v1&param2=v2).
I need to do two tasks on this link:
1. Format the link and redirect the user to another domain, with some params passed (xyz.com/p1=v2) depending on the value of, say, param1. This step should be as fast as possible.
2. Save the link details to my DB after some processing.
I am planning to do this with an nginx + Lua (OpenResty) + (Redis or MongoDB?) combination.
As the two are unrelated tasks, I am planning to split them and do both asynchronously.
As the first task is a redirection, ngx.redirect("/link") seems apt for the case.
But the documentation says the redirect call terminates the processing of the current request.
How can I make these two tasks independent, so that the redirection happens as fast as possible and does not wait for the completion of the second task?
Can the storing be done by another thread, and how do I hand that job off to another thread?
Yes, of course you can. First of all, you have to understand the order in which the Lua module directives run. Then, to do your MongoDB processing in a separate request, call ngx.location.capture($url) from your Lua code, where $url matches a second location block:
location /redirect/handling {
    content_by_lua_file url/to/your/code/forRedirectHandling;
}

location /mongo/save {
    internal; # only reachable via subrequest
    content_by_lua_file url/to/mongodbHandlingdCode;
}
Inside forRedirectHandling, issue the subrequest with ngx.location.capture("/mongo/save").
The ngx.location.capture() call points to your second location block and runs your MongoDB code in its own subrequest within the nginx worker. Note that a captured subrequest is still synchronous from the caller's point of view, so for true fire-and-forget you may want the ngx.timer API instead.
Please see the OpenResty documentation to know which directive to use (access_by_lua, log_by_lua, ...).
Hope this helps :)
I have a simple Meteor application. I would like to run some code periodically on the server end. I need to poll a remote site for XML orders.
It would look something like this (coffee-script):
unless process.env.ORDERS_NO_FETCH
  Meteor.setInterval ->
    checkForOrder()
  , 600000
I am using Velocity to test. I do not want this code to run in the mirrored instance that runs the tests (otherwise it will poach my XML orders and I won't see them in the real instance). So, to that end, I would like to know how to tell if server code is running in the testing environment, so that I can avoid setting up the periodic checks.
EDIT: I realised that I missed faking one of my server calls in the tests, which is why my test code was grabbing one of the XML orders from the real server. So, this might not be an issue. I am not sure yet how the tests are run for the server code, and whether the server code runs in a mirror (is that a client-only concept?).
The server and the client both run in a mirror when using mocha/jasmine integration tests.
If you want to know if you are in a mirror, you can use:
Meteor.call('velocity/isMirror', function(err, isMirror) {
    if (isMirror) {
        // do something
    }
});
Also, on the server you can use:
process.env.IS_MIRROR
You've already got the fake working, and that is the right approach.
I need to create an asynchronous scheduler inside the nginx server to update a variable. Let me give you an example of what I mean by this and why I need it.
Imagine config file that looks something like this:
http {
    lua_shared_dict foo 5m;

    server {
        location /set {
            content_by_lua '
                local foo = ngx.shared.foo
                ngx.say(foo:get("12345"))
            ';
        }
    }
}
I declared a variable foo that resides in shared memory, and all worker processes have access to it. What I want to do is set those values from a Lua script that will be called every minute. For reference, it will go to Redis, retrieve the necessary data, and update this variable. I know I can do this in content_by_lua on every call, but that's highly inefficient for a huge volume of traffic.
I would like a separate process that is triggered every minute or so to do just this one task. Is there anything like this in nginx, or are there any modules that could help me with that?
You can use the new ngx.timer API provided by ngx_lua. See the documentation for details:
https://www.nginx.com/resources/wiki/modules/lua/#ngx-timer-at
You can create a new timer from within your timer handler to keep the timer firing like a cronjob ;)
BTW, the timer is per-worker process, you can use the lua-resty-lock library in your timer handler to ensure that only one timer is active at a time among all the nginx workers: https://github.com/agentzh/lua-resty-lock
You can use the ngx.timer.every API; it is recommended over recursively re-arming ngx.timer.at.
All,
I'm looking for a good way to do some job backgrounding through either of these two services.
I see PHPFog supports IronWorker, but I need something more real-time. Through these cloud-based PaaS services, I'm not able to use popen('background.php --token=1234'). So I'm thinking the best solution might be to kick off a Gearman worker to handle the job. (Actually, my preferred method would be to use WebSockets to keep a connection open and receive feedback from the job, rather than long-polling a DB table through AJAX, but neither of these providers supports WebSockets.)
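For reference, what I mean by kicking off a Gearman job from PHP is roughly this (a sketch, assuming the pecl gearman extension and a reachable gearmand; the function name and payload are illustrative):

$client = new GearmanClient();
$client->addServer('127.0.0.1', 4730); // default gearmand port

// fire-and-forget: returns a job handle immediately without waiting
$handle = $client->doBackground('process_order', json_encode(array('token' => 1234)));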
Question 1: is there a better solution than using Gearman to offload the job?
Question 2: I see PagodaBox supports 'worker listeners' (http://help.pagodabox.com/customer/portal/articles/430779). Has anybody set this up with Gearman? Would it work?
Thanks
I am using PagodaBox with a background worker in an application I am building right now. Basically, PagodaBox daemonizes a PHP process for you (meaning it will continually run in the background), so all you really have to do is create a script that checks a database table for tasks to run, runs them, and then sleeps a bit so it's not running too many queries against your database.
This is a simplified version of what I have running:
// Remove time limit
set_time_limit(0);
// Show ALL errors
error_reporting(-1);

// Run daemon
echo "--- Starting Daemon ---\n";
while (true) {
    // Query 'work_queue' table for new tasks
    // Loop over items and do whatever tasks are associated with them
    // Update each row to mark its task as completed

    // Wait a bit
    sleep(30);
}
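Fleshed out with a database layer, the loop might look like this (a sketch, assuming PDO/MySQL and a work_queue table with id, task, and status columns; all names here are illustrative):

set_time_limit(0);
error_reporting(-1);

// illustrative connection settings
$db = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');

while (true) {
    // grab pending tasks
    $rows = $db->query("SELECT id, task FROM work_queue WHERE status = 'pending'");
    foreach ($rows as $row) {
        run_task($row['task']); // illustrative task dispatcher
        // mark the task as completed
        $stmt = $db->prepare("UPDATE work_queue SET status = 'done' WHERE id = ?");
        $stmt->execute(array($row['id']));
    }
    sleep(30); // wait a bit before polling again
}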
A benefit to this approach is that it's easy to test via CLI:
php tasks.php
You will see all the echo statements come through in the console as it's running, and of course it's much easier to do than a more complicated setup with other dependencies like Gearman.
So whenever you add a new task to the table, the maximum time you'll wait for that task to be picked up is 30 seconds (or whatever your sleep time is). This is preferable to cron jobs, because if you set up a cron job to run every minute (the lowest possible interval) and the work takes longer than a minute, another cron process will start working on the same queue, and you can end up with a lot of duplicated task work that is hard to debug and troubleshoot. If you instead have either a single background worker that runs all tasks, or multiple background workers that work on different task types, you will never run into this issue.