Please help me
GameMaker Studio 2.3 came out a few weeks ago, and in the new version of the GameMaker Language they changed scripts into functions. After converting my project files so they could be reopened, I double-checked all the scripts, but when I start the game it stays on a black screen. It doesn't give me any compilation errors or anything, so what could be the problem?
P.S.
I might sound stupid, but if someone has the same program as me I can send them the project so they can look at the scripts themselves. It's basically just the base project: there is only the script to make the player walk and the one for collisions. I know nobody wants to waste time on this, but I'm asking anyway.
It's possible that your code is stuck in an infinite loop. Here's an example of what that might look like:
var doloop = true
while (doloop == true) {
    x += 1
    y += 1
}
The "doloop" variable is never changed within the while loop, so it is always true and the loop never ends. Because the code never finishes looping, it never gets around to drawing anything, so you end up with a black screen. The easiest way to check for this is to put a breakpoint at the beginning of, and just after, every while/for/do/etc. loop and run the debugger, e.g. (using asterisks "*" to represent breakpoints):
var doloop = true
* while (doloop == true) {
    x += 1
    y += 1
}
*
When you reach one of the loops, remove the first breakpoint and hit the "continue" button in the debugger. If it takes longer than it should to hit the second breakpoint (say you wait anywhere from ten seconds to two minutes, depending on how complex the code is, and it still hasn't hit it), put the breakpoint back at the beginning of the loop and check whether execution is still inside it. If it is, that is likely where the code is getting stuck. Review the loop and everywhere its associated variables are set or changed, and you should be able to find the problem (even if it takes a while).
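A related, language-agnostic trick while hunting a hang: temporarily cap the suspect loop's iterations so a runaway loop fails loudly instead of freezing the game. A minimal Python sketch of the idea (the GML version is analogous; `run_suspect_loop` is an illustrative name, not part of any API):

```python
def run_suspect_loop(max_iters=1_000_000):
    """Run the loop, but bail out loudly if it spins too long."""
    doloop = True
    x = y = 0
    iters = 0
    while doloop:
        x += 1
        y += 1
        # Temporary guard while debugging: a healthy loop finishes
        # well before the cap; a stuck one raises instead of hanging.
        iters += 1
        if iters >= max_iters:
            raise RuntimeError(
                f"loop still running after {max_iters} iterations")
    return x, y
```

Once the loop is fixed, remove the guard.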
Majestic_Monkey_ and the commenters are correct: use the debugger. It's easy and it's your friend. Just place a red circle (a breakpoint) on the very first line of code that runs, click the little bug icon, and you can step through your code easily.
But to address your specific issue (or if anyone in the future has this issue): scripts have changed into files that can have many functions. Where you used to have
//script_name
var num = argument0 + argument1;
return num;
You would now have
function script_name(a, b) {
    var num = a + b;
    return num;
}
All you have to do is create a declaration for your new function:
function my_function_name(argument_names, etc...)
Then wrap all your old code in { }, and replace all those ugly "argument0" things with actual names. It's that easy. Plus you can have more than one function per script!
I'm attempting to run some fairly deep recursive code in R and it keeps giving me this error:
Error: C stack usage is too close to the limit
My output from Cstack_info() is:
Cstack_info()
      size    current  direction eval_depth
  67108864       8120          1          2
I have plenty of memory on my machine; I'm just trying to figure out how I can increase the C stack size for R.
EDIT: Someone asked for a reproducible example. Here's some basic sample code that causes the problem: running f(1,1) a few times, you'll get the error. Note that I've already set --max-ppsize=500000 and options(expressions=500000), so if you don't set those you might get an error about one of those two things instead. As you can see, the recursion can go pretty deep here, and I've got no idea how to get it to work consistently. Thanks.
f <- function(root=1, lambda=1) {
  x <- c(0,1);
  prob <- c(1/(lambda+1), lambda/(lambda+1));
  repeat {
    if (root == 0) {
      break;
    }
    else {
      child <- sample(x, 2, replace=TRUE, prob);
      if (child[1] == 0 && child[2] == 0) {
        break;
      }
      if (child[1] == 1) {
        child[1] <- f(root=child[1], lambda);
      }
      if (child[2] == 1 && child[1] == 0) {
        child[2] <- f(root=child[2], lambda);
      }
    }
    if (child[1] == 0 && child[2] == 0) {
      break;
    }
    if (child[1] == 1 || child[2] == 1) {
      root <- sample(x, 1, replace=TRUE, prob);
    }
  }
  return(root)
}
The stack size is an operating system parameter, adjustable per-process (see setrlimit(2)). You can't adjust it from within R as far as I can tell, but you can adjust it from the shell before starting R, with the ulimit command. It works like this:
$ ulimit -s # print default
8192
$ R --slave -e 'Cstack_info()["size"]'
size
8388608
8388608 = 1024 * 8192; R is printing the same value as ulimit -s, but in bytes instead of kilobytes.
$ ulimit -s 16384 # enlarge stack limit to 16 megs
$ R --slave -e 'Cstack_info()["size"]'
size
16777216
To make a permanent adjustment to this setting, add the ulimit command to your shell startup file, so it's executed every time you log in. I can't give more specific directions than that, because the details depend on which shell you use. I also don't know how to do it for logins into a graphical environment (which will be relevant if you're not running R inside a terminal window).
I suspect that, regardless of stack limit, you'll end up with recursions that are too deep. For instance, with lambda = Inf, f(1) leads to an immediate recursion, indefinitely. The depth of the recursion seems to be a random walk, with some probability r of going deeper, 1 - r of finishing the current recursion. By the time you've hit the stack limit, you've made a large number of steps 'deeper'. This implies that r > 1 / 2, and the very large majority of time you'll just continue to recurse.
Also, it seems like it is almost possible to derive an analytic or at least numerical solution even in the face of infinite recursion. One can define p as the probability that f(1) == 1, write implicit expressions for the 'child' states after a single iteration, and equate these with p, and solve. p can then be used as the chance of success in a single draw from a binomial distribution.
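To see the random-walk intuition concretely, here is a rough Python port of f with an explicit depth guard. This is a sketch, not a verified translation: it assumes sample() with those weights is equivalent to independent weighted coin flips, and f_py and max_depth are names I made up.

```python
import random

def f_py(root=1, lam=1.0, depth=1, max_depth=500):
    """Rough Python port of the R function f(), with a depth guard.

    Returns None instead of recursing past max_depth.
    """
    if depth > max_depth:
        return None
    p1 = lam / (lam + 1.0)  # probability of drawing a 1, as in prob[2]
    while True:
        if root == 0:
            break
        # sample(x, 2, replace=TRUE, prob) ~ two independent draws
        c1 = 1 if random.random() < p1 else 0
        c2 = 1 if random.random() < p1 else 0
        if c1 == 0 and c2 == 0:
            break
        if c1 == 1:
            c1 = f_py(c1, lam, depth + 1, max_depth)
        if c2 == 1 and c1 == 0:
            c2 = f_py(c2, lam, depth + 1, max_depth)
        if c1 is None or c2 is None:
            return None  # depth guard tripped somewhere below
        if c1 == 0 and c2 == 0:
            break
        if c1 == 1 or c2 == 1:
            root = 1 if random.random() < p1 else 0
    return root
```

With lam well below 1 the recursion almost always terminates quickly; as lam grows past 1, the guard trips more and more often, matching the r > 1/2 argument above.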
This error is not due to memory; it is due to recursion: a function is calling itself. This isn't always obvious from examining the definition of only one function. To illustrate the point, here is a minimal example of two functions that call each other:
change_to_factor <- function(x){
  x <- change_to_character(x)
  as.factor(x)
}

change_to_character <- function(x){
  x <- change_to_factor(x)
  as.character(x)
}
change_to_character("1")
Error: C stack usage 7971600 is too close to the limit
The functions will continue to call each other recursively and will, in theory, never complete; even if you increase the limit, it will still be exceeded. Only the checks within your system prevent this from running indefinitely and consuming all of your machine's compute resources. You need to alter the functions to ensure that they won't call themselves (or each other) recursively without a base case.
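The same trap, and the shape of the fix, can be sketched in Python: any recursive cycle must contain a base case that is guaranteed to be reached (function names here are illustrative).

```python
def countdown_broken(n):
    # No base case: each call recurses again -- the Python analogue
    # of R's "C stack usage is too close to the limit" error.
    return countdown_broken(n - 1)

def countdown_fixed(n):
    if n <= 0:            # base case: guaranteed to be reached
        return 0
    return countdown_fixed(n - 1)
```

Calling countdown_broken(5) raises RecursionError (Python's stack check); countdown_fixed(5) returns 0.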
This happened to me for a completely different reason. I accidentally created a superlong string while combining two columns:
output_table_subset = mutate(big_data_frame,
                             combined_table = paste0(first_part, second_part, col = "_"))
instead of
output_table_subset = mutate(big_data_frame,
                             combined_table = paste0(first_part, second_part, sep = "_"))
It took me forever to figure out, as I never expected the paste to have caused the problem.
I encountered the same problem of receiving the "C stack usage is too close to the limit" error (albeit for another application than the one stated by user2045093 above). I tried zwol's proposal but it didn't work out.
To my own surprise, I could solve the problem by installing the newest version of R for OS X (currently: version 3.2.3) as well as the newest version of R Studio for OS X (currently: 0.99.840), since I am working with R Studio.
Hopefully, this may be of some help to you as well.
One issue here can be that you're calling f inside itself:
plop <- function(a = 2){
  pouet <- sample(a)
  plop(pouet)
}
plop()
Error: evaluations nested too deeply: infinite recursion / options(expressions=)?
Error during wrapup: evaluations nested too deeply: infinite recursion / options(expressions=)?
Mine is perhaps a more unusual case, but it may help the few who have this exact problem:
My case had absolutely nothing to do with space usage, yet R still gave:
C stack usage is too close to the limit
I had a defined function which is an upgrade of the base function:
saveRDS()
But accidentally, this function was named saveRDS() instead of safe_saveRDS().
Thus, past that definition, when the code got to the line which actually uses saveRDS(...) (intending the original base version, not the upgraded one), it gave the above error and crashed.
So, if you're getting that error when calling some saving function, check whether you've accidentally overridden it.
On Linux, I have permanently increased the stack and memlock limits as follows:
sudo vi /etc/security/limits.conf
Then, add the following lines at the end of the file.
* soft memlock unlimited
* hard memlock unlimited
* soft stack unlimited
* hard stack unlimited
For everyone's information: I am suddenly running into this with R 3.6.1 on Windows 7 (64-bit). It was not a problem before, and now stack-limit errors seem to pop up everywhere when I try to save(...) data or even do a save.image(...). It's as if the serialization is blowing the stack away.
I am seriously considering dropping back to 3.6.0; it didn't happen there.
I often include a commented-out source("path/to/file/thefile.R") line at the top of an R script, e.g. thefile.R, so I can easily copy-paste this into the terminal to run it. I get this error if I forget to comment out the line, since running the file runs the file, which runs the file, which runs the file, ...
If that is the cause, the solution is simple: comment out the line.
Not sure if we're listing issues here, but it happened to me with leaflet().
I was trying to map a dataframe in which a date column was of class POSIXlt.
Changing back to POSIXct solved the issue.
As Martin Morgan wrote, the problem is that the recursion goes too deep. If the recursion does not converge at all, you need to break it yourself. I hope this code works, because it is not tested, but at least the point should be clear:
f <- function(root=1, lambda=1, depth=1) {
  if (depth > 256) {
    return(NA)  # recursion too deep: give up and signal NA
  }
  x <- c(0,1);
  prob <- c(1/(lambda+1), lambda/(lambda+1));
  repeat {
    if (root == 0) {
      break;
    } else {
      child <- sample(x, 2, replace=TRUE, prob);
      if (child[1] == 0 && child[2] == 0) {
        break;
      }
      if (child[1] == 1) {
        child[1] <- f(root=child[1], lambda, depth+1);
      }
      if (is.na(child[1])) {
        return(NA);  # propagate the give-up signal
      }
      if (child[2] == 1 && child[1] == 0) {
        child[2] <- f(root=child[2], lambda, depth+1);
      }
      if (is.na(child[2])) {
        return(NA);
      }
    }
    if (child[1] == 0 && child[2] == 0) {
      break;
    }
    if (child[1] == 1 || child[2] == 1) {
      root <- sample(x, 1, replace=TRUE, prob);
    }
  }
  return(root)
}
(Note: comparing with == NA never works in R; use is.na(), and return NA is a syntax error, it must be return(NA). The NA checks go right after each recursive call so the later == comparisons never see an NA.)
If you're using plot_ly, check which columns you are passing. It seems that for POSIXlt/POSIXct columns you have to apply as.character() before passing them to plotly, or you get this exception!
Here is how I encountered this error message: I tried to print a data.table in the console, and it turned out I had mistakenly made an extremely long string (by using collapse in paste() when I shouldn't have) in one of the columns.
The caret package has a function called createDataPartition that, in my experience, always results in this error when the dataset to be partitioned has more than 1M rows.
Just for your info.
I faced the same issue. This problem won't be solved by reinstalling R or RStudio, or by increasing the stack size. Here is the solution that worked for me.
If you are sourcing a.R inside b.R and at the same time sourcing b.R inside a.R, then the stack will fill up very fast.
Problem
This is the first file a.R in which b.R is sourced
#---- a.R File -----
source("/b.R")
...
...
#--------------------
This is the second file b.R, in which a.R is sourced
#---- b.R File -----
source("/a.R")
...
...
#--------------------
Solution
Source only one file to avoid the recursive calling of files within each other
#---- a.R File -----
source("/b.R")
...
...
#--------------------
#---- b.R File -----
...
...
#--------------------
OR
#---- a.R File -----
...
...
...
#--------------------
#---- b.R File -----
source("/a.R")
...
...
#--------------------
Another way to cause the same problem:
library(debug)
mtrace(lapply)
The recursive call isn't as obvious here.
I'm working on a problem requiring me to create a GUI (in Tkinter) which shows a different word in the label (referencing from a list) each time the button is pressed.
I've tried researching and have found similar problems but haven't found a working solution yet. I have tried 'for each' and 'while' loops, and 'if' statements, but haven't been able to get the code working correctly.
the_window.counter = 0

if the_window.counter == 0:
    top_label['text'] = words[0]
    the_window.counter + 1
elif the_window.counter == 1:
    top_label['text'] = words[1]
    the_window.counter + 1
The code shown above produces only the first word in the list, and multiple clicks don't have any effect. Does anyone have any ideas?
Thanks.
You need to keep a global counter and update it each time the button is clicked.
The following code illustrates the technique:
# initialized to -1, so that the first time it is called
# it gets set to zero
the_window_counter = -1

def handle_click():
    global the_window_counter
    the_window_counter += 1
    try:
        top_label.configure(text=words[the_window_counter])
    except IndexError:
        top_label.configure(text="no more words")
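The counter logic itself can be factored out and tested without a GUI; make_word_stepper below is an illustrative helper of my own, not part of Tkinter.

```python
def make_word_stepper(words, fallback="no more words"):
    """Return a callback that yields the next word on each call.

    Same advance-and-fall-back logic as handle_click, factored out
    so it can be exercised without creating any widgets.
    """
    state = {"index": -1}   # -1 so the first call yields words[0]

    def next_word():
        state["index"] += 1
        try:
            return words[state["index"]]
        except IndexError:
            return fallback  # past the end of the list
    return next_word
```

In the GUI you would wire it up with something like Button(the_window, text="Next", command=lambda: top_label.configure(text=next_word())).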
I am facing the following error:
Microsoft VBScript runtime error '800a0005'
Invalid procedure call or argument: 'left'
/scheduler/App.asp, line 16
The line is:
point1 = left(point0,i-1)
This code works perfectly on another server, but on this server it shows the error. I can only guess it has to do with system or IIS settings, or maybe something else, but it's nothing to do with the code itself (as it works fine on the other server).
If i is equal to zero, then this calls Left() with -1 as the length parameter, which results in an Invalid procedure call or argument error. Verify that i >= 1, so the length argument is never negative.
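The defensive pattern, in any language, is to validate the length before calling the substring function. A Python analogue (left and safe_prefix here are hypothetical stand-ins sketching VBScript's Left and the guarded call site):

```python
def left(s, n):
    """Mimic VBScript's Left(): first n characters, rejecting n < 0."""
    if n < 0:
        # VBScript raises "Invalid procedure call or argument" here
        raise ValueError("length must be >= 0")
    return s[:n]

def safe_prefix(point0, i):
    # Guard the i == 0 case instead of letting left() blow up.
    return left(point0, i - 1) if i >= 1 else ""
```

The same guard in VBScript would be an If i >= 1 check around the Left(point0, i-1) call.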
Just experienced this problem myself: a script that had been running seamlessly for many months suddenly collapsed with this error. It seems that the scripting engine falls over for whatever reason, and string functions stop being able to handle calculations inside their argument lists.
I appreciate it's been quite a while since this question was asked, but in case anyone encounters this in the future...
Replace
point1 = left(point0, i-1)
with
j = i-1
point1 = left(point0, j)
... and it will work.
Alternatively, simply re-boot the server (unfortunately, simply re-starting the WWW service won't fix it).