Solution for "Fatal error: Maximum function nesting level of '100' reached, aborting!" in PHP - recursion

I have made a function that finds all the URLs within an HTML file and repeats the same process for each HTML document linked from the discovered URLs. The function is recursive and can go on endlessly. However, I have put a limit on the recursion by setting a global variable that causes the recursion to stop after 100 recursions.
However, PHP returns this error:
Fatal error: Maximum function nesting level of '100' reached, aborting! in D:\wamp\www\crawler1\simplehtmldom_1_5\simple_html_dom.php on line 1355
I found a solution here: Increasing nesting function calls limit, but it is not working in my case.
I am quoting one of the answers from the link mentioned above. Please do consider it.
"Do you have Zend, IonCube, or xDebug installed? If so, that is probably where you are getting this error from.
I ran into this a few years ago, and it ended up being Zend putting that limit there, not PHP. Of course removing it will let you go past the 100 iterations, but you will eventually hit the memory limits."
Is there a way to increase the maximum function nesting level in PHP?

Increase the value of xdebug.max_nesting_level in your php.ini

A simple solution solved my problem. I just commented out this line:
zend_extension = "d:/wamp/bin/php/php5.3.8/zend_ext/php_xdebug-2.1.2-5.3-vc9.dll"
in my php.ini file. This extension was limiting the stack to 100, so I disabled it. The recursive function is now working as anticipated.

Another solution is to add xdebug.max_nesting_level = 200 in your php.ini

Rather than going for a recursive function calls, work with a queue model to flatten the structure.
$queue = array('http://example.com/first/url');
while (count($queue)) {
    $url = array_shift($queue);
    $queue = array_merge($queue, find_urls($url));
}

function find_urls($url)
{
    $urls = array();
    // Some logic filling the variable
    return $urls;
}
There are different ways to handle it. You can keep track of more information if you need some insight about the origin or paths traversed. There are also distributed queues that can work off a similar model.
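For illustration, here is a minimal sketch of what find_urls() could look like, using PHP's built-in DOMDocument; the fetching and parsing details are assumptions, not part of the original answer:
// Hypothetical sketch: fetch a page and collect the href of every <a> tag.
// Assumes allow_url_fopen is enabled; a real crawler would also normalize
// URLs and track visited pages so the queue cannot grow forever.
function find_urls($url)
{
    $urls = array();
    $html = @file_get_contents($url);
    if ($html === false) {
        return $urls; // fetch failed, nothing to enqueue
    }
    $doc = new DOMDocument();
    @$doc->loadHTML($html); // suppress warnings from malformed markup
    foreach ($doc->getElementsByTagName('a') as $anchor) {
        $href = $anchor->getAttribute('href');
        if ($href !== '') {
            $urls[] = $href;
        }
    }
    return $urls;
}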

Rather than disabling Xdebug, you can set a higher limit like:
xdebug.max_nesting_level=500

It's also possible to fix this directly in PHP, for example in the config file of your project:
ini_set('xdebug.max_nesting_level', 200);

Go into your php.ini configuration file and change the following line:
xdebug.max_nesting_level=100
to something like:
xdebug.max_nesting_level=200

On Ubuntu using PHP 5.5.9:
Go to:
/etc/php5/cli/conf.d
and find your xdebug.ini in that dir; in my case it is 20-xdebug.ini.
Add this line:
xdebug.max_nesting_level = 200
or this:
xdebug.max_nesting_level = -1
Set it to -1 and you don't have to worry about changing the value of the nesting level.

This probably happened because of Xdebug.
Try commenting out the following line in your php.ini and restart your server to reload PHP:
  ;xdebug.max_nesting_level

Try looking in /etc/php5/conf.d/ to see if there is a file called xdebug.ini
max_nesting_level is 100 by default
If it is not set in that file add:
xdebug.max_nesting_level=300
to the end of the list so it looks like this:
xdebug.remote_enable=on
xdebug.remote_handler=dbgp
xdebug.remote_host=localhost
xdebug.remote_port=9000
xdebug.profiler_enable=0
xdebug.profiler_enable_trigger=1
xdebug.profiler_output_dir=/home/drupalpro/websites/logs/profiler
xdebug.max_nesting_level=300
You can then use @Andrey's test before and after making this change to see if it worked:
php -r 'function foo() { static $x = 1; echo "foo ", $x++, "\n"; foo(); } foo();'

php.ini:
xdebug.max_nesting_level = -1
I'm not entirely sure if the value will ever overflow and reach -1, but it'll either never reach -1, or it'll set the max_nesting_level pretty high.

You could convert your recursive code into an iterative code, which simulates the recursion. This means that you have to push the current status (url, document, position in document etc.) into an array, when you reach a link, and pop it out of the array, when this link has finished.
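A minimal sketch of that idea, reusing the find_urls() helper from the queue answer above (the names and the depth cutoff are illustrative): each array entry plays the role of one suspended recursive call.
// Hypothetical sketch: simulate the recursion with an explicit stack.
$stack = array(array('url' => 'http://example.com/', 'depth' => 0));
while (!empty($stack)) {
    $state = array_pop($stack); // resume the most recently saved "call"
    if ($state['depth'] >= 100) {
        continue; // the same cutoff the recursive version enforced
    }
    // ... process the document at $state['url'] here ...
    foreach (find_urls($state['url']) as $link) {
        $stack[] = array('url' => $link, 'depth' => $state['depth'] + 1);
    }
}
Popping from the end gives depth-first order like the original recursion; switching to array_shift() turns it into the breadth-first queue shown earlier.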

Check the recursion limit from the command line:
php -r 'function foo() { static $x = 1; echo "foo ", $x++, "\n"; foo(); } foo();'
If the result is greater than 100, then check your memory limit.

You could try to wiggle down the nesting by implementing parallel workers (like in cluster computing) instead of increasing the number of nested function calls.
For example: you define a limited number of slots (e.g. 100) and monitor the number of "workers" assigned to each/some of them. If any slots become free, you put the waiting workers "in them". A rough sketch of the idea follows.
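One way to sketch the slot idea in PHP is with curl_multi, which keeps a bounded number of transfers in flight at once. This is an illustration under assumed names ($queue, $slots), not the answerer's implementation:
// Hypothetical sketch: fetch up to $slots URLs in parallel per batch.
$queue = array('http://example.com/a', 'http://example.com/b');
$slots = 100; // maximum number of "workers" in flight at once
$mh = curl_multi_init();
$handles = array();
foreach (array_splice($queue, 0, $slots) as $url) {
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_multi_add_handle($mh, $ch);
    $handles[] = $ch;
}
do { // drive all transfers until every slot has finished
    curl_multi_exec($mh, $running);
    curl_multi_select($mh);
} while ($running > 0);
foreach ($handles as $ch) {
    $html = curl_multi_getcontent($ch); // parse each fetched page here
    curl_multi_remove_handle($mh, $ch);
    curl_close($ch);
}
curl_multi_close($mh);
Note that the nesting level never grows here: the loop stays flat no matter how many pages are processed.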

<?php
ini_set('xdebug.max_nesting_level', 9999);
// ... your code ...
P.S. Change 9999 to any number you want.

Stumbled upon this bug as well during development.
However, in my case it was caused by an underlying loop of functions calling each other, a result of continuous iterations during development.
For future reference by search engines, the exact error my logs provided me with was:
Exception: Maximum function nesting level of '256' reached, aborting!
If, like in my case, the given answers do not solve your problem, make sure you're not accidentally doing something along the lines of the following simplified situation:
function foo(){
    // Do something
    bar();
}

function bar(){
    // Do something else
    foo();
}
In this case, even if you set ini_set('xdebug.max_nesting_level', 9999); it will still print out the same error message in your logs.
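One way to catch such a loop early during development is an explicit depth guard, so the failure is a meaningful exception instead of the generic nesting error. A sketch; the threshold and the extra parameter are illustrative additions, not from the original answer:
// Hypothetical guard: thread a depth counter through the mutual calls.
function foo($depth = 0) {
    if ($depth > 50) {
        throw new RuntimeException('foo()/bar() appear to be stuck in a loop');
    }
    // Do something
    bar($depth + 1);
}

function bar($depth = 0) {
    // Do something else
    foo($depth + 1);
}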

If you're using Laravel, do
composer update
This should work.

In your case it may well be the crawler instance exceeding the Xdebug limit while tracing errors and debug info.
But in other cases, errors in your own PHP or in core files such as CodeIgniter libraries can create the same situation, and even if you increase the Xdebug nesting level setting the error will not vanish.
So, look into your code carefully :) .
Here was the issue in my case.
I had a service class which is a library in CodeIgniter, with a function inside like this:
class PaymentService {

    private $CI;

    public function __construct() {
        $this->CI =& get_instance();
    }

    public function process() {
        // lots of CI referencing here...
    }
}
My controller was as follows:
$this->load->library('PaymentService');
$this->process_(); // wrong: see below for what it should have been
The function call on the last line was wrong because of a typo; instead it should have been:
$this->Payment_service->process(); // the library class name
I kept getting the nesting exceeded error. I even disabled Xdebug, but nothing helped. Anyway, please check your class name and your code for proper function calls.

I got this error when I was installing many plugins. The error 100 showed up, including the location of the last plugin that I installed: C:\wamp\www\mysite\wp-content\plugins\"...". So I deleted that plugin folder on the C: drive and everything was back to normal. I think I have to limit the number of plugins I install or have activated. Good luck, I hope this helps.

I had this issue with WordPress on cloud9. It turns out it was the W3 Caching plugin. I disabled the plugin and it worked fine.

Another solution if you are running a PHP script in the CLI (cmd):
The php.ini file that needs editing is different in this case. In my WAMP installation the php.ini file loaded on the command line is:
\wamp\bin\php\php5.5.12\php.ini
instead of \wamp\bin\apache\apache2.4.9\bin\php.ini, which loads when PHP is run from the browser.
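To confirm which php.ini file the CLI actually loads, you can ask PHP itself; this is a generic check, not specific to WAMP:
php --ini
php -r "echo php_ini_loaded_file(), PHP_EOL;"
The first command lists every configuration file PHP reads; the second prints just the path of the loaded php.ini.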

You can also modify the {debug} function in modifier.debug_print_var.php, in order to limit its recursion into objects.
Around line 45, before:
$results .= '<br>' . str_repeat(' ', $depth * 2)
    . '<b> ->' . strtr($curr_key, $_replace) . '</b> = '
    . smarty_modifier_debug_print_var($curr_val, ++$depth, $length);
After:
$max_depth = 10;
$results .= '<br>' . str_repeat(' ', $depth * 2)
    . '<b> ->' . strtr($curr_key, $_replace) . '</b> = '
    . ($depth > $max_depth ? 'Max recursion depth:' . (++$depth) : smarty_modifier_debug_print_var($curr_val, ++$depth, $length));
This way Xdebug will still behave normally (limiting recursion depth in var_dump and so on), as this is a Smarty problem, not an Xdebug one!

I had the same problem and I resolved it like this:
1. Open the MySQL my.ini file
2. In the [mysqld] section, add the following line: innodb_force_recovery = 1
3. Save the file and try starting MySQL
4. Remove the line you just added and save

Related

RegDBGetKeyValueEx returns -1

I am using the InstallScript code below to identify whether SharePoint is installed or not, but it's not working: the function returns -1. Not sure what the issue is. Can someone please help? I want to do the steps below:
Reach this registry location first: "SOFTWARE\Microsoft\Shared Tools\Web Server Extensions\15.0"
Read the name "SharePoint" and its value "Installed"
(refer to the attached image)
function IsSharePointInstalled()
    STRING szKey, svValue, szName;
    NUMBER nvType, nvSize;
begin
    RegDBSetDefaultRoot(HKEY_LOCAL_MACHINE);
    szKey = "SOFTWARE\\Microsoft\\Shared Tools\\Web Server Extensions\\15.0";
    szName = "SharePoint";
    if (RegDBKeyExist(szKey) >= 1) then
        MessageBox("Key found", INFORMATION);
        if (RegDBGetKeyValueEx(szKey, szName, nvType, svValue, nvSize) < 0) then
            MessageBox("Failed to get value", INFORMATION);
        else
            MessageBox("Successfully got value", INFORMATION);
        endif;
    endif;
    RegDBSetDefaultRoot(HKEY_CLASSES_ROOT);
end;
InstallScript: Where is your value located? Have you accounted for the 64-bit versus 32-bit sections of the registry?
HKEY_LOCAL_MACHINE\SOFTWARE
HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node
Perhaps investigate the 64-bit option here (REGDB_OPTION_WOW64_64KEY) if you need the 64-bit section of the registry.
AppSearch: for a simple registry retrieval like this, you could use AppSearch instead (System Search View). I don't have the time to make a sample for that right now. You can also see the System Search View / Wizard.

Issue in executing a batch file using PeopleCode in Application engine program

I want to execute a batch file using PeopleCode in an Application Engine program. But the program has an issue: Exec returns a non-zero value (value: 1).
Below is the PeopleCode snippet:
Global File &FileLog;
Global string &LogFileName, &Servername, &commandline;
Local string &Footer;

If &Servername = "PSNT" Then
    &ScriptName = "D: && D:\psoft\PT854\appserv\prcs\RNBatchFile.bat";
End-If;
&commandline = &ScriptName;

/* Need to commit work or Exec will fail */
CommitWork();

&ExitCode = Exec("cmd.exe /c " | &commandline, %Exec_Synchronous + %FilePath_Absolute);
If &ExitCode <> 0 Then
    MessageBox(0, "", 0, 0, ("Batch File Call Failed! Exit code returned by script was " | &ExitCode));
End-If;
Any help on how to resolve this issue would be appreciated.
Best bet is to do a trace of the execution.
Thoughts:
Can you log on to the process scheduler you are running this on and execute the script OK?
Is the AE being scheduled or called at run-time?
You should not need to change directory, as you are using a fully qualified path to the script.
You should not need to call "cmd /c", as this will create an additional shell for your application to run within, making debugging harder, etc.
Run a trace, and drop us the output. :) HTH
What about changing the working directory to D: inside of the script instead? You are invoking two commands and I'm wondering what the shell is returning to exec. I'm assuming you wrote your script to give the appropriate return code and that isn't the problem.
I couldn't tell from the question text, but are you looking for a negative result, such as -1? I think return codes are usually positive. 0 for success, some other positive number for failure. Negative numbers may be acceptable, but am wondering if Exec doesn't like negative numbers?
Perhaps the PeopleCode ChDir function still works as an alternative to two commands in one line? I haven't tried it for a LONG time.
Another alternative that gives you significant control over the process is to use java.lang.Runtime.exec from PeopleCode: http://jjmpsj.blogspot.com/2010/02/exec-processes-while-controlling-stdin.html.

Debugging bitbake pkg_postinst_${PN}: Append to config-file installed by other recipe

I'm writing an openembedded/bitbake recipe for openembedded-classic. My recipe RDEPENDS on keyutils, and everything seems to work except one thing:
I want to append a single line to the /etc/request-key.conf file installed by the keyutils package. So I added the following to my recipe:
pkg_postinst_${PN} () {
    echo 'create ... more stuff ..' >> ${sysconfdir}/request-key.conf
}
However, the intended added line is missing in my resulting image.
My recipe inherits update-rc.d if that makes any difference.
My main question is: how do I debug this? Currently I am constructing an entire rootfs image and then poking around in that to see if the changes show up. Surely there is a better way?
UPDATE:
Changed recipe to:
pkg_postinst_${PN} () {
    echo 'create ... more stuff ...' >> ${D}${sysconfdir}/request-key.conf
}
But still no luck.
As far as I know, postinst runs at rootfs creation time, and only at first boot if it failed at rootfs time.
So there is an easy way to execute something only at first boot. Just check for $D, like this:
pkg_postinst_stuff() {
    #!/bin/sh -e
    if [ x"$D" = "x" ]; then
        : # do something at first boot here (replace ':' with real commands)
    else
        exit 1
    fi
}
postinst scripts are run at rootfs time, so ${sysconfdir} is /etc on your host. Use $D${sysconfdir} to write to the file inside the rootfs being generated.
OE-Classic is pretty ancient so you really should update to oe-core.
That said, do postinsts run at first boot? I'm not sure. Also look in the recipe's work directory under the temp directory, and read the log and run files to see if there are any clues there.
One more thing. If foo RDEPENDS on bar that just means "when foo is installed, bar is also installed". I'm not sure it makes assertions about what is installed during the install phase, when your postinst is running.
If using $D doesn't fix the problem, try editing your postinst to copy the existing file you're trying to edit somewhere else, so you can verify that it exists in the first place. It's possible that you're appending to a file that doesn't exist yet, and that the package that installs the file replaces it.

Has there ever been a unix system call to create a link from an open file descriptor? [duplicate]

In Unix, it's possible to create a handle to an anonymous file by, e.g., creating and opening it with creat() and then removing the directory link with unlink() - leaving you with a file with an inode and storage but no possible way to re-open it. Such files are often used as temp files (and typically this is what tmpfile() returns to you).
My question: is there any way to re-attach a file like this back into the directory structure? If you could do this it means that you could e.g. implement file writes so that the file appears atomically and fully formed. This appeals to my compulsive neatness. ;)
When poking through the relevant system call functions I expected to find a version of link() called flink() (compare with chmod()/fchmod()) but, at least on Linux this doesn't exist.
Bonus points for telling me how to create the anonymous file without briefly exposing a filename in the disk's directory structure.
A patch for a proposed Linux flink() system call was submitted several years ago, but when Linus stated "there is no way in HELL we can do this securely without major other incursions", that pretty much ended the debate on whether to add this.
Update: As of Linux 3.11, it is now possible to create a file with no directory entry using open() with the new O_TMPFILE flag, and link it into the filesystem once it is fully formed using linkat() on /proc/self/fd/fd with the AT_SYMLINK_FOLLOW flag.
The following example is provided on the open() manual page:
char path[PATH_MAX];
fd = open("/path/to/dir", O_TMPFILE | O_RDWR, S_IRUSR | S_IWUSR);
/* File I/O on 'fd'... */
snprintf(path, PATH_MAX, "/proc/self/fd/%d", fd);
linkat(AT_FDCWD, path, AT_FDCWD, "/path/for/file", AT_SYMLINK_FOLLOW);
Note that linkat() will not allow open files to be re-attached after the last link is removed with unlink().
My question: is there any way to re-attach a file like this back into the directory structure? If you could do this it means that you could e.g. implement file writes so that the file appears atomically and fully formed. This appeals to my compulsive neatness. ;)
If this is your only goal, you can achieve this in a much simpler and more widely used manner. If you are outputting to a.dat:
Open a.dat.part for write.
Write your data.
Rename a.dat.part to a.dat.
I can understand wanting to be neat, but unlinking a file and relinking it just to be "neat" is kind of silly.
This question on serverfault seems to indicate that this kind of re-linking is unsafe and not supported.
Thanks to @mark4o posting about linkat(2); see his answer for details.
I wanted to give it a try to see what actually happened when trying to actually link an anonymous file back into the filesystem it is stored on. (often /tmp, e.g. for video data that firefox is playing).
As of Linux 3.16, there still appears to be no way to undelete a deleted file that's still held open. Neither AT_SYMLINK_FOLLOW nor AT_EMPTY_PATH for linkat(2) do the trick for deleted files that used to have a name, even as root.
The only alternative is tail -c +1 -f /proc/19044/fd/1 > data.recov, which makes a separate copy, and you have to kill it manually when it's done.
Here's the perl wrapper I cooked up for testing. Use strace -eopen,linkat linkat.pl - </proc/.../fd/123 newname to verify that your system still can't undelete open files. (Same applies even with sudo). Obviously you should read code you find on the Internet before running it, or use a sandboxed account.
#!/usr/bin/perl -w
# 2015 Peter Cordes <peter@cordes.ca>
# public domain. If it breaks, you get to keep both pieces. Share and enjoy
# Linux-only linkat(2) wrapper (opens "." to get a directory FD for relative paths)

if ($#ARGV != 1) {
    print "wrong number of args. Usage:\n";
    print "linkat old new \t# will use AT_SYMLINK_FOLLOW\n";
    print "linkat - <old new\t# to use the AT_EMPTY_PATH flag (requires root, and still doesn't re-link arbitrary files)\n";
    exit(1);
}

# use POSIX qw(linkat AT_EMPTY_PATH AT_SYMLINK_FOLLOW); #nope, not even POSIX linkat is there
require 'syscall.ph';
use Errno;

# /usr/include/linux/fcntl.h
# #define AT_SYMLINK_NOFOLLOW 0x100 /* Do not follow symbolic links. */
# #define AT_SYMLINK_FOLLOW 0x400 /* Follow symbolic links. */
# #define AT_EMPTY_PATH 0x1000 /* Allow empty relative pathname */
unless (defined &AT_SYMLINK_NOFOLLOW) { sub AT_SYMLINK_NOFOLLOW() { 0x0100 } }
unless (defined &AT_SYMLINK_FOLLOW ) { sub AT_SYMLINK_FOLLOW () { 0x0400 } }
unless (defined &AT_EMPTY_PATH ) { sub AT_EMPTY_PATH () { 0x1000 } }

sub my_linkat ($$$$$) {
    # tmp copies: perl doesn't know that the string args won't be modified.
    my ($oldp, $newp, $flags) = ($_[1], $_[3], $_[4]);
    return !syscall(&SYS_linkat, fileno($_[0]), $oldp, fileno($_[2]), $newp, $flags);
}

sub linkat_dotpaths ($$$) {
    open(DOTFD, ".") or die "open . $!";
    my $ret = my_linkat(DOTFD, $_[0], DOTFD, $_[1], $_[2]);
    close DOTFD;
    return $ret;
}

sub link_stdin ($) {
    my ($newp, ) = @_;
    open(DOTFD, ".") or die "open . $!";
    my $ret = my_linkat(0, "", DOTFD, $newp, &AT_EMPTY_PATH);
    close DOTFD;
    return $ret;
}

sub linkat_follow_dotpaths ($$) {
    return linkat_dotpaths($_[0], $_[1], &AT_SYMLINK_FOLLOW);
}

## main
my $oldp = $ARGV[0];
my $newp = $ARGV[1];

# link($oldp, $newp) or die "$!";
# my_linkat(fileno(DIRFD), $oldp, fileno(DIRFD), $newp, AT_SYMLINK_FOLLOW) or die "$!";

if ($oldp eq '-') {
    print "linking stdin to '$newp'. You will get ENOENT without root (or CAP_DAC_READ_SEARCH). Even then doesn't work when links=0\n";
    $ret = link_stdin( $newp );
} else {
    $ret = linkat_follow_dotpaths($oldp, $newp);
}

# either way, you still can't re-link deleted files (tested Linux 3.16 and 4.2).
# print STDERR
die "error: linkat: $!.\n" . ($!{ENOENT} ? "ENOENT is the error you get when trying to re-link a deleted file\n" : '') unless $ret;

# if you want to see exactly what happened, run
# strace -eopen,linkat linkat.pl
Clearly, this is possible -- fsck does it, for example. However, fsck does it with major localized file system mojo and will clearly not be portable, nor executable as an unprivileged user. It's similar to the debugfs comment above.
Writing that flink(2) call would be an interesting exercise. As ijw points out, it would offer some advantages over current practice of temporary file renaming (rename, note, is guaranteed atomic).
Kind of late to the game but I just found http://computer-forensics.sans.org/blog/2009/01/27/recovering-open-but-unlinked-file-data which may answer the question. I haven't tested it, though, so YMMV. It looks sound.

Compile Flex application without debug? Optimisation options for flex compiler?

I have created a simple test application with the following code:
var i : int;
for (i = 0; i < 3000000; i++) {
    trace(i);
}
When I run the application, it's very slow to load, which means the "trace" is running.
I checked the Flash Player by right-clicking; the debugger option is not enabled.
So I wonder if there is an option to pass to the compiler to exclude the trace calls.
Otherwise, I have to manually remove all the traces in the program.
Are there any other compiler options to optimize the Flex application as much as possible?
There is a really sweet feature built into Flex called the logging API (you can read more about it here http://livedocs.adobe.com/flex/3/html/logging_09.html).
Basically, you log (trace) things in a different way, admittedly with slightly more code than a standard trace, but it allows you much greater flexibility. This is an example:
import mx.logging.Log;
Log.getLogger("com.edibleCode.logDemo").info("This is some info");
Log.getLogger("com.edibleCode.logDemo").error("This is an error");
Then all you need to do is create a trace target in your main application file, something like:
<mx:TraceTarget id="logTarget" fieldSeparator=" - " includeCategory="true" includeLevel="true" includeTime="true">
    <mx:filters>
        <mx:Array>
            <mx:String>*</mx:String>
        </mx:Array>
    </mx:filters>
    <!-- 0 = ALL, 2 = DEBUG, 4 = INFO, 6 = WARN, 8 = ERROR, 1000 = FATAL -->
    <mx:level>0</mx:level>
</mx:TraceTarget>
And register the trace with:
Log.addTarget(logTarget);
This provides several benefits over the normal trace:
You can filter (turn off) traces to only see what you want:
Either by modifying the filters array
Or the level to show only error or fatal messages
You can replace the trace target with any other type of logging interface, e.g.
A TextField
A text file
Use conditional compilation, more here.
In your code set:
CONFIG::debugging {
    trace(i);
}
Then go to Project->Properties->Flex Compiler and add
-define=CONFIG::debugging,false
or
-define=CONFIG::debugging,true
You could do a find/replace on the entire project. search for 'trace(' and replace with '//trace('. That would be quick enough and easily undone.
The mxmlc argument debug allows you to add or remove debug features from SWF files. The value of the debug argument is false by default for the command line compiler, but in Flex Builder, you have to manually create a non-debug SWF. According to the documentation on compiler arguments, debug information added to the SWF includes "line numbers and filenames of all the source files". There is no mention of trace() function calls, and I don't think there's a way to remove them through a compiler argument, but you're welcome to check the linked document for the entire list of available arguments.
There are two compiler options that you should set: -debug=false -optimize=true. In Flex Builder or Eclipse, look under Project->Properties->Flex Compiler and fill in the box labeled "Additional compiler arguments."
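For example, an equivalent command-line build might look like this (the application file name is just a placeholder):
mxmlc -debug=false -optimize=true Main.mxml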
Go to your flex code base directory (and shut down Flex Builder if it's running - it gets uppity if you change things while it's running). Run this to change all your trace statements. I recommend checking the tree into git or something first and then running a diff afterwards (or cp -r the tree and do a diff -r or something). The only major case this will mess up is if you have semicolons inside trace strings:
find . -name '*.as' -exec perl -pe 'BEGIN{ undef $/; }s/trace([^;]*);/CONFIG::debugging { trace $1 ; };/smg;' -i {} \;
find . -name '*.mxml' -exec perl -pe 'BEGIN{ undef $/; }s/trace([^;]*);/CONFIG::debugging { trace $1 ; };/smg;' -i {} \;
Then set up the following in your Project->Properties->Flex Compiler->Additional compiler arguments:
-define=CONFIG::debugging,true -define=CONFIG::release,false
And use:
CONFIG::release { /* code */ }
for the "#else" clause. This was the solution I picked after reading this question and answer set.
Also beware of this:
if( foo )
{
    /*code*/
}
else
    CONFIG::debugging { trace("whoops no braces around else-clause"); };
I.e. if you have ONLY one of these in an if or else or whatever block, and it's a naked block with no braces, then regardless of whether it's compiled out, the compiler will complain.
Something else you could do is define a boolean named debugMode or something in an external constants .as file somewhere and include this file in any project you use. Then, before any trace statement, you could check the status of this boolean first. This is similar to zdmytriv's answer.
Have to say, I like edibleCode's answer and look forward to trying it some time.
