I am running a minimal example using BatchJobs, and the results are not as expected. I'm using the global_config settings, with debug=TRUE. I am running the following code in R on my HPC server:
library(BatchJobs)
reg <- makeRegistry(id = "batchtest")
batchMap(reg, identity, 1)
submitJobs(reg)
showStatus(reg)
load("batchtest-files/jobs/01/1-result.RData")
1-result
[1] 0
If I run batchMap(reg, identity, 2) the result is -1, and with batchMap(reg, identity, 3) the result is -2.
Any ideas why this might be happening? The identity function should just return the argument (so it should be 1 for the code above). I find the same issue with other functions. For example, if I use mean(rnorm(100, mean=100)) for the function I send to batchMap, I end up with results around -99. If I run this on multiple nodes, the results from each node are around -100 + node number (so the results from the 5th node are around -95).
Try ls(). The load() call restores an object named result that holds the correct value; what you typed afterwards, 1-result, is evaluated as 1 minus result, which gives exactly the results you described.
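For example, a quick check along those lines (a sketch, assuming the registry and job from the code above; BatchJobs also provides loadResult() for fetching job output):
load("batchtest-files/jobs/01/1-result.RData")
ls()                  # "result" is among the restored objects
result                # [1] 1  -- the job's actual output
1 - result            # [1] 0  -- what the original code computed
loadResult(reg, 1)    # the supported way to read a job's result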
Related
I am having a problem running a spiking-neuron simulator. I keep getting the error message, "operation +: Warning adding a matrix with the empty matrix will give an empty matrix result." I'm writing this program in Scilab, but I'm hoping the problem I am having will be clear to the educated eye regardless. What I am doing is converting an existing MATLAB program to Scilab. The original MATLAB program and an explanation can be found here: https://www.izhikevich.org/publications/spikes.pdf
What happens in my Scilab version is that the first pass through the loop produces all the expected values. I know this because I hit pause at the end of the first run, right before "end", and check all the values and matrix elements. However, if I run the program proper, which includes a loop of 20 iterations, I get the error message above, and all of the matrix values are empty! I cannot figure out what the problem is. I am fairly new to programming, so the answer may be very simple for all I know. Here is the Scilab version of the program:
Ne=8; Ni=2;
re=rand(Ne,1); ri=rand(Ni,1);
a=[0.02*ones(Ne,1); 0.02+0.08*ri];
b=[0.2*ones(Ne,1); 0.25-0.05*ri];
c=[-65+15*re.^2; -65*ones(Ni,1)];
d=[8-6*re.^2; 2*ones(Ni,1)];
S=[0.5*rand(Ne+Ni,Ne), -rand(Ne+Ni,Ni)];
v=60*rand(10,1)
v2=v
u=b.*v;
firings=[];
for t=1:20
    I=[5*rand(Ne,1,"normal");2*rand(Ni,1,"normal")];
    fired=find(v>=30);
    j = length(fired);
    h = t*ones(j,1);
    k=[h,fired'];
    firings=[firings;k];
    v(fired)=c(fired);
    u(fired)=u(fired)+d(fired);
    I=I+sum(S(:,fired),"c");
    v=v+0.5*(0.04*v.^2+5*v+140-u+I);
    v=v+0.5*(0.04*v.^2+5*v+140-u+I);
    u=u+a.*(b.*v-u);
end
plot(firings(:,1), firings(:,2),".");
I tried everything to no avail. The program should run through 20 iterations and produce a "raster plot" of dots representing the fired neurons at each of the 20 time steps.
You can add the following line
oldEmptyBehaviour("on")
at the beginning of your script in order to prevent the default Scilab rule (any algebraic operation with an empty matrix yields an empty matrix). However, you will still get some warnings (although the result will be OK). As a definitive fix I recommend testing whether fired is empty in your code, like this:
Ne=8; Ni=2;
re=rand(Ne,1); ri=rand(Ni,1);
a=[0.02*ones(Ne,1); 0.02+0.08*ri];
b=[0.2*ones(Ne,1); 0.25-0.05*ri];
c=[-65+15*re.^2; -65*ones(Ni,1)];
d=[8-6*re.^2; 2*ones(Ni,1)];
S=[0.5*rand(Ne+Ni,Ne), -rand(Ne+Ni,Ni)];
v=60*rand(10,1)
v2=v
u=b.*v;
firings=[];
for t=1:20
    I=[5*rand(Ne,1,"normal");2*rand(Ni,1,"normal")];
    fired=find(v>=30);
    if ~isempty(fired)
        j = length(fired);
        h = t*ones(j,1);
        k=[h,fired'];
        firings=[firings;k];
        v(fired)=c(fired);
        u(fired)=u(fired)+d(fired);
        I=I+sum(S(:,fired),"c");
    end
    v=v+0.5*(0.04*v.^2+5*v+140-u+I);
    v=v+0.5*(0.04*v.^2+5*v+140-u+I);
    u=u+a.*(b.*v-u);
end
plot(firings(:,1), firings(:,2),".");
[] + 1 is not really defined in a mathematical sense. The operation might fail or produce different results depending on the software you use. For example:
Scilab 5: [] + 1 produces 1
Scilab 6: [] + 1 produces [] and a warning
Julia 1.8: [] .+ 1 produces [], but [] + 1 raises an error
Python + NumPy 1.23: np.zeros((0,0)) + 1 produces []
I suggest checking with size() or a comparison to the empty matrix to avoid such strange behaviour.
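For comparison, R behaves like Scilab 6 here: arithmetic on a zero-length vector silently yields a zero-length vector, so a length() guard is the usual defence. A minimal R sketch:
x <- numeric(0)
x + 1                  # numeric(0) -- the empty vector propagates through arithmetic
if (length(x) > 0) {   # guard before relying on the result
  y <- x + 1
}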
I am doing an R code to evaluate limits.
I am not sure if my code even works. When I run it, it doesn't return anything and R gets stuck: nothing works afterwards, not even print statements, and I have to restart R. I always get the message "R session is currently busy" when I try to close RStudio. fn is supposed to be any function and tol is the error tolerance; I want to stop the program when the difference between consecutive terms is less than 1e-6.
lim<-function(funx,tol=1e-6){
  n<-1
  while(TRUE){
    n<-n+1
    term<-funx
    next_term<-term+funx
    if(abs(term-next_term)<tol){
      break
    }
  }
  return(term)
}
n<-1
fn<-(1/5)**n
lim(fn)
You made some mistakes in your program. For one, you always add the same number (funx), so abs(term - next_term) is always 0.20 and never smaller than the tolerance, and you get an endless loop.
If you want to call a function each time, you have to define this function and pass it to the lim() function. Otherwise, you just define fn as 0.20 and pass it as a double value to the function. It will never change.
If you want to find the limit of (1/5)^n, you can do it like this:
lim = function(f,x=1,tol=0.0001){
  next.diff=tol
  while(next.diff>=tol){
    next.diff = abs(f(x)-f(x+1))
    x = x + 1
  }
  return(list("Iterations"=x,"Limit"=f(x),"Next Value"=f(x+1)))
}
my.fun = function(x){(1/5)^x}
lim(my.fun,1,1e-6)
It will then call the function for increasing values of x and abort the loop as soon as the tolerance is reached. In this example:
> lim(my.fun,1,1e-6)
$Iterations
[1] 10
$Limit
[1] 1.024e-07
$`Next Value`
[1] 2.048e-08
So, at (1/5)^10 you already reach a value where the next iteration differs by less than your tolerance. It's safe to say that the sequence converges to 0.
You can define any function of a value x and pass it to this lim function with a starting value for x and a tolerance level.
EDIT: For the limit of sqrt(x+1)-sqrt(x), you would just have to define a new function of x (or of n, if you wish) and pass it to lim():
> fun2 = function(x){sqrt(x+1)-sqrt(x)}
> lim(fun2,1,1e-6)
$Iterations
[1] 3969
$Limit
[1] 0.007936008
$`Next Value`
[1] 0.007935009
It's unclear what you really want to find out here, but as far as I understand, you want to see where a sequence (not a function) of the type a^n converges. Well, if that is the case, then you need to change your code to something like this:
lim<-function(a,tol=1e-6)
{
  n<-1
  repeat
  {
    term<-a^n;next_term<-a^(n+1)
    if(abs(term-next_term)<tol) break
    n<-n+1
  }
  return(term)
}
OK, so here's what I did:
I assumed that the sequence you input is of the form a^n, where a is a constant term and n increases over the natural numbers.
I initialized n to 1 before the loop and increment it inside, because I want to iterate over all possible values of n, one by one.
Then I defined the first term of the sequence (named term). As assumed, it's a^n initially, so the next term (next_term in my code) is a^(n+1).
Now take their absolute difference. If it satisfies the condition, break out of the loop. Otherwise, increase the value of n by 1 and let the loop run once again.
Finally, return the value of term. That's all.
I hope you can now see where you went wrong. Your approach was similar, but the code was doing something else.
Remember, with this code you don't need to enter the value of n separately when calling the function.
Here's what it returned:
> lim(1/5)
[1] 5.12e-07
> fn<-1/12
> lim(fn)
[1] 3.34898e-07
I am trying to figure out if it is possible, with a sane amount of programming, to create a certain debugging function by using R's metaprogramming features.
Suppose I have a block of code in which each line uses, as all or part of its input, the output from the line before -- the sort of code you might build with pipes (though no pipe is used here).
{
f1(args1) -> out1
f2(out1, args2) -> out2
f3(out2, args3) -> out3
...
fn(out<n-1>, args<n>) -> out<n>
}
Where for example it might be that:
f1 <- function(first_arg, second_arg, ...){my_body_code},
and you call f1 in the block as:
f1(second_arg = 1:5, list(a1 ="A", a2 =1), abc = letters[1:3], fav = foo_foo)
where foo_foo is an object defined in the calling environment of f1.
I would like a function I could wrap around my block that would, for each line of code, create an entry in a list. Each entry would be named (line1, line2) and each line entry would have a sub-entry for each argument and for the function output. The argument entries would consist, first, of the name of the formal to which the actual argument is matched; second, the expression or name supplied to that argument if there is one (and a placeholder if the argument is just a constant); and third, the value of that expression as if it were immediately forced on entry into the function. (I'd rather have the value as of the moment the promise is first kept, but that seems to me like a much harder problem, and the two values will most often be the same.)
All the arguments assigned to the ... (if any) would go in a dots = list() sublist, with entries named if they have names and appropriately labeled (..1, ..2, etc.) if they are assigned positionally. The last element of each line sublist would be the name of the output and its value.
The point of this is to create a fairly complete record of the operation of the block of code. I think of this as analogous to an elaborated version of purrr::safely that is not confined to iteration and keeps a more detailed record of each step, and indeed if a function exits with an error you would want the error message in the list entry as well as as much of the matched arguments as could be had before the error was produced.
It seems to me like this would be very useful in debugging linear code like this. This lets you do things that are difficult using just the RStudio debugger. For instance, it lets you trace code backwards. I may not know that the value in out2 is incorrect until after I have seen some later output. Single-stepping does not keep intermediate values unless you insert a bunch of extra code to do so. In addition, this keeps the information you need to track down matching errors that occur before promises are even created. By the time you see output that results from such errors via single-stepping, the matching information has likely evaporated.
I have actually written code that takes a piped function and eliminates the pipes to put it in this format, just using text manipulation. (Indeed, it was John Mount's "Bizarro pipe" that got me thinking of this.) And if I, or we, or you, can figure out how to do this, I would hope to make a serious run at a second version where each function calls the next, supplying it with arguments internally rather than externally -- like a traceback where you get the passed argument values as well as the function name and formals. Other languages have debugging environments like that (e.g. GDB), and I've been wishing for one for R for at least five years, maybe 10, and this seems like a step toward it.
Just issue the trace() call shown below for each function that you want to trace.
f <- function(x, y) {
  z <- x + y
  z
}
trace(f, exit = quote(print(returnValue())))
f(1,2)
giving the following which shows the function name, the input and output. (The last 3 is from the function itself.)
Tracing f(1, 2) on exit
[1] 3
[1] 3
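If the goal is also to keep every intermediate value (and any error) from a block like the one in the question, a minimal sketch in that direction is below. run_logged() is a hypothetical helper: it records only each expression, its value, and any error, not the matched-argument detail described above.
run_logged <- function(block_expr, env = parent.frame()) {
  exprs <- as.list(block_expr)[-1]   # drop the leading `{`
  log <- vector("list", length(exprs))
  names(log) <- paste0("line", seq_along(exprs))
  for (i in seq_along(exprs)) {
    log[[i]] <- tryCatch(
      list(expr = exprs[[i]], value = eval(exprs[[i]], env)),
      error = function(e) list(expr = exprs[[i]], error = conditionMessage(e))
    )
  }
  log
}
str(run_logged(quote({
  out1 <- sqrt(16)
  out2 <- out1 + 1
})))
Each expression is evaluated in the caller's environment, so later lines still see the earlier outputs, and a failing line leaves its error message in the log instead of stopping the whole block.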
I am running a data set (in the example, "dataobject") through several different functions in R and concatenating the numeric results at the end. See:
a<-median((function1(x=1,dataobject,reps=500)),na.rm=TRUE)
b<-median((function2(x=1,dataobject,reps=500)),na.rm=TRUE)
c<-median((function3(x=1,dataobject,reps=500)),na.rm=TRUE)
d<-median((function4(x=1,dataobject,reps=500)),na.rm=TRUE)
e<-median((function5(x=1,dataobject,reps=500)),na.rm=TRUE)
f<-median((function6(x=1,dataobject,reps=500)),na.rm=TRUE)
c(a,b,c,d,e,f)
However, some of the functions cannot be run with the data set I am using, and so they return an error; e.g. if "function5" can't be run, then when it gets to the concatenation step it gives "Error: object 'e' not found" and does not return anything. Is there any way to tell R at the concatenation step to assign a value of NA to an object that is not found and continue to run the rest of the code instead of stopping? So that the return would be
[1] 99.233 75.435 77.782 92.013 NA 97.558
A simple question, but I could not find any other instances of it being asked. I originally tried to set up a function to run everything and output the concatenated results, but I ran into the same problem (when a function can't be run, the entire wrapper function stops as well, and I don't know how to tell R to skip something it can't compute).
Any thoughts are greatly appreciated! Thanks!
A couple of solutions I can think of:
Initialize all the variables you plan to use, so they start with the default value you want:
a = b = c = d = e = f = NA
then run your code. If an error pops up, you will have NA in the variable.
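For instance (a sketch; function1, function5, and dataobject are the placeholders from the question, and the try() calls keep a failing line from stopping the script):
a <- b <- c <- d <- e <- f <- NA
try(a <- median(function1(x = 1, dataobject, reps = 500), na.rm = TRUE))
try(e <- median(function5(x = 1, dataobject, reps = 500), na.rm = TRUE))
# ... repeat for the remaining functions ...
c(a, b, c, d, e, f)    # any variable whose function failed is still NA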
Use "tryCatch". If you are unaware what this is, I recommend reading on it. It lets you handle errors.
Here is an example from your code,
tryCatch({
  a <- median((function1(x=1,dataobject,reps=500)), na.rm=TRUE)
},
error = function(err){
  print("Error in evaluating a. Initializing it to NA")
  a <<- NA
})
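If you need this for every one of the six calls, a small wrapper keeps the repetition down (a sketch; safe_median is a hypothetical helper name, and function1, dataobject, reps are the placeholders from the question):
safe_median <- function(expr) {
  # expr is only evaluated here, inside the tryCatch, so any error it throws is caught
  tryCatch(median(expr, na.rm = TRUE), error = function(err) NA)
}
a <- safe_median(function1(x = 1, dataobject, reps = 500))
b <- safe_median(function2(x = 1, dataobject, reps = 500))
# ... and so on; a call that fails simply contributes NA to c(a, b, ...)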
I would love to have functionality like this:
print(randomParameter(1,2,3))
-- prints 1 2 or 3... randomly picks a parameter
I have tried using the func(...) vararg, but I can't seem to use the arg table when I pass multiple parameters. I tried this:
function hsv(...)
  return arg[math.random(1,#arg)] -- also tried: return arg[math.random(#arg)]
end
print(hsv(5,32,7))
I have even tried putting #arg into a variable and using it with the rand function, and also making a for loop that sequentially adds to a variable to count the table. Still nothing works.
I remember doing this a while back, and it looked different than this. Can anyone help with this? Thanks!
To elaborate a bit on @EgorSkriptunoff's answer (who needs to change his habit of providing answers in comments ;)): return (select(math.random(select('#',...)),...)).
... provides access to vararg parameter in the function
select('#', ...) returns the number of parameters passed in that vararg
math.random(select('#',...)) gives you a random number between 1 and the number of passed parameters
select(math.random(select('#',...)),...) gives you the element with the index specified by that random number from the passed parameters.
The other solution that is using arg = {...} gives you almost the same result with one subtle difference related to the number of arguments when nil is included as one of the parameters:
> function f(...) print(#{...}, select('#', ...)) end
> f(1,2,3)
3 3
> f(1,2,nil)
2 3
> f(1,2,nil,3)
2 4
As you can see select('#',...) produces more accurate results (this is running LuaJIT, but as far as I remember, Lua 5.1 produces similar results).
function randomNumber(...)
  local t = {...}
  return t[math.random(1,#t)]
end
print(randomNumber(1, 5, 2, 9))
> 1 or 5 or 2 or 9