Incorrect number of arguments in IDL - idl-programming-language

This is the header of my IDL source code:
pro gamow,t_plasma,z1=z1,z2=z2,a1=a1,a2=a2
; displays gamow peak for input value of t (in K)
; default values for protons
if not keyword_set(z1) then z1=1.
if not keyword_set(z2) then z2=1.
if not keyword_set(a1) then a1=1.
if not keyword_set(a2) then a2=1.
I am executing this in the terminal/console, with, for example:
gamow, 1d8
This works, since then z1 = z2 = a1 = a2 = 1.0 (and 1d8 means 100 million). But this doesn't work:
gamow, 1d8, 2, 2, 4, 4
Why?
Best regards

You defined t_plasma as a positional parameter, but z1, z2, a1, and a2 as keyword parameters. Your first example passes a single positional argument, so t_plasma is set and the keywords simply fall back to their defaults, which is fine. Your second example tries to pass all 5 values as positional arguments, but the procedure accepts only one positional parameter, so IDL reports the error "Incorrect number of arguments".
Instead, try this:
gamow,1d8,z1=2,z2=2,a1=4,a2=4
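For comparison only, here is a rough Python sketch of the same distinction (the names simply mirror the IDL example; this is not IDL syntax). Making z1, z2, a1, and a2 keyword-only reproduces the behaviour that they cannot be supplied positionally:
import math  # not needed for the sketch itself, shown only to keep it self-contained

def gamow(t_plasma, *, z1=1.0, z2=1.0, a1=1.0, a2=1.0):
    # t_plasma is positional; the rest are keyword-only with defaults
    print(t_plasma, z1, z2, a1, a2)

gamow(1e8)                           # works: keywords fall back to their defaults
gamow(1e8, z1=2, z2=2, a1=4, a2=4)   # works: keywords passed by name
# gamow(1e8, 2, 2, 4, 4)             # fails (TypeError), analogous to the IDL error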

Related

Get the standard error of the intercept from the LinregressResult given by stats.linregress

So when I run this line in a Jupyter notebook:
stats.linregress(xdata, data)
The result is
LinregressResult(slope=8.762662815890456, intercept=-583.1060100267368, rvalue=0.9764595396878868, pvalue=0.0, stderr=0.0710610032681328, intercept_stderr=31.20585863555493)
Then if I run this code
slope, intercept, r_value, p_value, std_err,std_err_intercept = stats.linregress(xdata, data)
There is an error:
ValueError: not enough values to unpack (expected 6, got 5)
But as you can see, there are 6 values to unpack, and I need that value for my calculations. Could someone help, please?
This is a quirk of the evolving API of functions in scipy.stats. Old versions of linregress returned a tuple with length 5; they did not include the intercept_stderr in the result. New versions return an object with 6 attributes. However, for backwards compatibility, if you unpack that object, it will act like a tuple of length 5.
The simplest way to use the code is to write result = linregress(xdata, ydata), and then refer to the attributes as result.slope, result.intercept, etc. Then the standard error of the intercept is available as result.intercept_stderr.
If, for some reason, you must unpack the result, you can still write
slope, intercept, r_value, p_value, std_err = stats.linregress(xdata, data)
but you lose the intercept_stderr part of the result.
This is explained in the Notes section of the docstring.
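A minimal sketch of that usage, assuming a reasonably recent SciPy (1.6 or later, where intercept_stderr exists) and made-up data in place of your xdata and data:
import numpy as np
from scipy import stats

xdata = np.arange(10.0)
ydata = 8.76 * xdata - 583.1 + np.random.normal(scale=0.5, size=10)

result = stats.linregress(xdata, ydata)
print(result.slope, result.intercept)   # fitted coefficients
print(result.stderr)                    # standard error of the slope
print(result.intercept_stderr)          # standard error of the intercept

# Unpacking still works for backwards compatibility, but only yields 5 values:
slope, intercept, r_value, p_value, std_err = stats.linregress(xdata, ydata)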

Specify the type of variable keyword arguments in Julia [duplicate]

Is it possible to type function kwargs in Julia?
The following works for standard Varargs.
function int_args(args::Integer...)
    args
end
int_args(1, 2, 3)
# (1, 2, 3)
int_args(1, 2, 3.0)
# ERROR: MethodError: `int_args` has no method matching int_args(::Int64, ::Int64, ::Float64)
However, when applying this same syntax to kwargs, all function calls seem to error.
function int_kwargs(; kwargs::Integer...)
    kwargs
end
int_kwargs(x=1, y=2)
# ERROR: MethodError: `__int_kwargs#0__` has no method matching __int_kwargs#0__(::Array{Any,1})
Normal keyword arguments can have types, as in function f(x; a::Int=0), but this doesn't work for "rest" keyword arguments. Also note that since we currently don't dispatch on keyword arguments, the a::Int in this case is a type assertion and not a dispatch specification.
It looks like this case is not handled well, and needs a better error message at least. I'd encourage you to file an issue at https://github.com/JuliaLang/julia/issues.
I'm not sure what the syntax x::T... should mean for keyword arguments. In the case of varargs, it's clear that each element of x should have type T, but for rest keyword arguments each element is actually a symbol-value pair. Of course we could give it the meaning you describe (all values have type T), but this doesn't seem to come up very often. Keyword arguments tend to be quite heterogeneous, unlike varargs which are more like lists or arrays.
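To make the "all values have type T" interpretation concrete, here is a rough Python analogue (Python has no such syntax for rest keyword arguments either, so the check has to be written out explicitly):
def int_kwargs(**kwargs):
    # manual equivalent of requiring every keyword value to be an integer
    for name, value in kwargs.items():
        if not isinstance(value, int):
            raise TypeError(f"{name} must be an integer, got {type(value).__name__}")
    return kwargs

int_kwargs(x=1, y=2)     # {'x': 1, 'y': 2}
int_kwargs(x=1, y=2.0)   # TypeError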

R: Enriched debugging for linear code chains

I am trying to figure out if it is possible, with a sane amount of programming, to create a certain debugging function by using R's metaprogramming features.
Suppose I have a block of code in which each line uses, as all or part of its input, the output from the line before -- the sort of code you might build with pipes (though no pipe is used here).
{
f1(args1) -> out1
f2(out1, args2) -> out2
f3(out2, args3) -> out3
...
fn(out<n-1>, args<n>) -> out<n>
}
Where for example it might be that:
f1 <- function(first_arg, second_arg, ...){my_body_code},
and you call f1 in the block as:
f1(second_arg = 1:5, list(a1 ="A", a2 =1), abc = letters[1:3], fav = foo_foo)
where foo_foo is an object defined in the calling environment of f1.
I would like a function I could wrap around my block that would, for each line of code, create an entry in a list. Each entry would be named (line1, line2, ...) and each line entry would have a sub-entry for each argument and for the function output. The argument entries would consist, first, of the name of the formal to which the actual argument is matched; second, the expression or name supplied to that argument if there is one (and a placeholder if the argument is just a constant); and third, the value of that expression as if it were immediately forced on entry into the function. (I'd rather have the value as of the moment the promise is first forced, but that seems to me like a much harder problem, and the two values will most often be the same.)
All the arguments assigned to the ... (if any) would go in a dots = list() sublist, with entries named if they have names and appropriately labeled (..1, ..2, etc.) if they are assigned positionally. The last element of each line sublist would be the name of the output and its value.
The point of this is to create a fairly complete record of the operation of the block of code. I think of this as analogous to an elaborated version of purrr::safely that is not confined to iteration and keeps a more detailed record of each step, and indeed if a function exits with an error you would want the error message in the list entry as well as as much of the matched arguments as could be had before the error was produced.
It seems to me like this would be very useful in debugging linear code like this. This lets you do things that are difficult using just the RStudio debugger. For instance, it lets you trace code backwards. I may not know that the value in out2 is incorrect until after I have seen some later output. Single-stepping does not keep intermediate values unless you insert a bunch of extra code to do so. In addition, this keeps the information you need to track down matching errors that occur before promises are even created. By the time you see output that results from such errors via single-stepping, the matching information has likely evaporated.
I have actually written code that takes a piped function and eliminates the pipes to put it in this format, just using text manipulation. (Indeed, it was John Mount's "Bizarro pipe" that got me thinking of this). And if I, or we, or you, can figure out how to do this, I would hope to make a serious run at a second version where each function calls the next, supplying it with arguments internally rather than externally -- like a traceback where you get the passed argument values as well as the function name and formals. Other languages have debugging environments like that (e.g. GDB), and I've been wishing for one for R for at least five years, maybe 10, and this seems like a step toward it.
Just issue the trace call shown below for each function that you want to trace.
f <- function(x, y) {
    z <- x + y
    z
}
trace(f, exit = quote(print(returnValue())))
f(1,2)
giving the following, which shows the function name, the inputs and the output. (The last 3 is the value of the call itself, printed at the top level.)
Tracing f(1, 2) on exit
[1] 3
[1] 3
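To make the record-keeping idea from the question concrete, here is a rough Python sketch of the same concept (wrap each step and log its inputs and output). The names logged and steps are just placeholders I made up for the illustration; an R version would instead use trace() or metaprogramming as above:
import functools

def logged(log):
    # decorator that appends (function name, inputs, output) to log after each call
    def wrap(f):
        @functools.wraps(f)
        def inner(*args, **kwargs):
            out = f(*args, **kwargs)
            log.append({"fn": f.__name__, "args": args, "kwargs": kwargs, "out": out})
            return out
        return inner
    return wrap

steps = []

@logged(steps)
def f1(x):
    return x + 1

@logged(steps)
def f2(x, k):
    return x * k

out1 = f1(10)
out2 = f2(out1, k=3)
print(steps)   # one entry per step, with the matched inputs and the output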

Simple function in Lua: pick a random parameter passed into it

I would love to have functionality like this:
print(randomParameter(1,2,3))
-- prints 1 2 or 3... randomly picks a parameter
I have tried using the func(...) vararg syntax, but I can't seem to use the arg table when I pass multiple parameters. I tried this:
function hsv(...)
    return arg[math.random(1,#arg)] -- also tried: return arg[math.random(#arg)]
end
print(hsv(5,32,7))
I have even tried putting #arg into a variable before calling the random function, and also making a for loop that sequentially adds to a variable to count the table. Still nothing works.
I remember doing this a while back, and it looked different than this. Can anyone help with this? Thanks!
To elaborate a bit on @EgorSkriptunoff's answer (who needs to change his habit of providing answers in comments ;)): return (select(math.random(select('#',...)),...)).
... provides access to the vararg parameter in the function
select('#', ...) returns the number of parameters passed in that vararg
math.random(select('#',...)) gives you a random number between 1 and the number of passed parameters
select(math.random(select('#',...)),...) gives you the element with the index specified by that random number from the passed parameters.
The other solution, which uses arg = {...}, gives you almost the same result, with one subtle difference in how the arguments are counted when nil is included as one of the parameters:
> function f(...) print(#{...}, select('#', ...)) end
> f(1,2,3)
3 3
> f(1,2,nil)
2 3
> f(1,2,nil,3)
2 4
As you can see select('#',...) produces more accurate results (this is running LuaJIT, but as far as I remember, Lua 5.1 produces similar results).
function randomNumber(...)
    local t = {...}
    return t[math.random(1,#t)]
end
print(randomNumber(1, 5, 2, 9))
> 1 or 5 or 2 or 9
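For comparison, the same idea in Python (random.choice over the packed positional arguments; note that Python has no nil-hole counting problem, since None is just another value):
import random

def random_parameter(*args):
    # pick one of the positional arguments at random
    return random.choice(args)

print(random_parameter(1, 5, 2, 9))   # prints 1, 5, 2, or 9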

Named parameters with the same name

I'm using the 'caret' library to do some cross-validation on some trees.
The library provides a function called train, that takes in a named argument "method". Via its ellipsis it's supposed to let other arguments fall through to another function that it calls. This other function (rpart) takes an argument of the same name, "method".
Therefore I want to pass two arguments with the same name... and it's clearly failing. I tried to work around things as shown below but I get the error:
"Error in train.default(x = myx, y = myy, method = "rpart2", preProcess = NULL, :
formal argument "method" matched by multiple actual arguments"
Any help is much appreciated! Thanks!
train.wrapper = function(myx, myy, mytrControl, mytuneLenght, ...) {
    result = train(
        x = myx,
        y = myy,
        method = "rpart2",
        preProcess = NULL,
        ...,
        weights = NULL,
        metric = "Accuracy",
        trControl = mytrControl,
        tuneLength = mytuneLenght
    )
    return(result)
}
dtree.train.cv = train.wrapper(training.matrix[,2:1777],
training.matrix[,1],
2, method="class")
Here's a mock-up of your problem with a tr (train) function that calls an rp (rpart) function, passing it ...:
rp <- function(method, ...) method
tr <- function(method, ...) rp(...)
# we want to pass 2 to rp:
tr(method=1, method=2) # Error
tr(1, method=2) # 1, (wrong value!)
tr(method=1, metho=2) # 2 (Yay!)
What magic is this? And why does the last case actually work?! Well, we need to understand how argument matching works in R. A function f <- function(foo, bar) is said to have formal parameters "foo" and "bar", and the call f(foo=3, ba=13) is said to have (actual) arguments "foo" and "ba".
R first matches all arguments that have exactly the same name as a formal parameter. This is why the first "method" argument gets passed to train. Two identical argument names cause an error.
Then, R matches any argument names that partially matches a (yet unmatched) formal parameter. But if two argument names partially match the same formal parameter, that also causes an error. Also, it only matches formal parameters before .... So formal parameters after ... must be specified using their full names.
Then the unnamed arguments are matched in positional order to the remaining formal arguments.
Finally, if the formal arguments include ..., the remaining arguments are put into the ....
PHEW! So in this case, the call to tr fully matches method, and then passes the rest into .... When tr then calls rp, the metho argument partially matches its formal parameter method, and all is well!
...Still, I'd try to contact the author of train and point out this problem so he can fix it properly! Since "rpart" and "rpart2" are supposed to be supported, he must have missed this use case!
I think he should rename his method parameter to something like method. (anything longer than "method"). This will still be backward compatible, but it allows another method argument to be passed through correctly to rpart.
Generally wrappers will pass their parameters in a named list. In the case of train, provision for control is passed in the trControl argument. Perhaps you should try:
dtree.train.cv = train.wrapper(training.matrix[,2:1777],
training.matrix[,1],
2, # will be positionally matched, probably to 'myTuneLenght'
myTrControl=list(method="class") )
After your comment I reviewed the train and rpart help pages again. You could well be correct in thinking that trControl has a different purpose. I suspect that you may need to construct your call with a formula, since rpart only has a formula method. If the y argument is a factor, then method="class" will be assumed by rpart. And ... running modelLookup:
modelLookup("rpart2")
model parameter label seq forReg forClass probModel
154 rpart2 maxdepth Max Tree Depth TRUE TRUE TRUE TRUE
... suggests to me that a "class" method would be assumed by default as well. You may also need to edit your question to include a data example (perhaps from the rpart help page?) if you want further advice.

Resources