What is the equivalent of 'break' in Q#?

How would I break out of a loop when I meet a condition?
For example:
for (i in 0..10){
if (i==3){
// equivalent of break
}
}

There is no break in Q#; however, you can implement this behavior using a repeat-until-success loop.
Q# is not a general-purpose language; it is designed to allow a lot of optimizations for when a program is executed on a quantum device. Loops are one example of such design: if you know beforehand how many iterations your loop will do, use a for loop; if you need to iterate until some condition is met, use a repeat-until-success loop.
Your example (which is not really a good example of why you'd need a break) would be written as something like this:
mutable i = 0;
repeat {
set i = i + 1;
} until (i == 10 || i == 3)
fixup {
();
}

Related

how to move to the next loop in PARI/GP if I use multi-line nested for loops?

My question is: how do I move to the next loop in PARI/GP if I use multi-line nested for loops? For example,
if I use this code:
for(K=1,10,for(i=1,5,if(isprime(2*i*prime(K)+1)==1,print(2*i"*"prime(K)))))
and since 2*(i=1)*prime(K=1)+1 = 5 is prime, I need my machine not to loop over i=2, ..., i=5; I need it to move on to the next K. So:
how do I do this in PARI/GP?
I am sorry if my question is not clear or a duplicate.
You need to use break. But first, let's clean up the presentation so this is more readable:
func()=
{
for(K=1,10,
for(i=1,5,
if(isprime(2*i*prime(K)+1)==1,
print(2*i"*"prime(K))
)
)
);
}
func()
You want to break out of the innermost loop, like so (just giving the function itself):
func()=
{
for(K=1,10,
for(i=1,5,
if(isprime(2*i*prime(K)+1)==1,
print(2*i"*"prime(K));
break
)
)
);
}
But while we're here, there's no need for the == 1; if already branches on nonzero values.
func()=
{
for(K=1,10,
for(i=1,5,
if(isprime(2*i*prime(K)+1),
print(2*i"*"prime(K));
break
)
)
);
}
We could also store the value of prime(K) so we don't need to compute it twice. But better yet, let's use a loop directly over the primes so we don't need the prime() function at all!
func(maxK=10)=
{
my(K=0);
forprime(p=2,prime(maxK),
K++;
for(i=1,5,
if(isprime(2*i*p+1),
print(2*i"*"p);
break
)
)
);
}
Here I have changed the function so you can call it with maximum values other than 10, and I've kept the index in case you want it for some reason. But I think a better approach would be to give a bound directly on how high you want to go in the primes, forgetting about prime indexes entirely:
func(maxP=29)=
{
forprime(p=2,maxP,
for(i=1,5,
if(isprime(2*i*p+1),
print(2*i"*"p);
break
)
)
);
}
In both cases I added a default argument so calling func() will do the same thing as your original function (except that it now breaks the way you want).

How do I mutate and optionally remove elements from a vec without memory allocation?

I have a Player struct that contains a vec of Effect instances. I want to iterate over this vec, decrease the remaining time for each Effect, and then remove any effects whose remaining time reaches zero. So far so good. However, for any effect removed, I also want to pass it to Player's undo_effect() method, before destroying the effect instance.
This is part of a game loop, so I want to do this without any additional memory allocation if possible.
I've tried using a simple for loop and also iterators, drain, retain, and filter, but I keep running into issues where self (the Player) would be mutably borrowed more than once, because modifying self.effects requires a mutable borrow, as does the undo_effect() method. The drain_filter() method in nightly looks useful here, but it was first proposed in 2017, so I'm not holding my breath on that one.
One approach that did compile (see below), was to use two vectors and alternate between them on each frame. Elements are pop()'ed from vec 1 and either push()'ed to vec 2 or passed to undo_effect() as appropriate. On the next game loop iteration, the direction is reversed. Since each vec will not shrink, the only allocations will be if they grow larger than before.
I started abstracting this as its own struct but want to check if there is a better (or easier) way.
This one won't compile. The self.undo_effect() call would borrow self as mutable twice.
struct Player {
effects: Vec<Effect>
}
impl Player {
fn update(&mut self, delta_time: f32) {
for effect in &mut self.effects {
effect.remaining -= delta_time;
if effect.remaining <= 0.0 {
effect.active = false;
}
}
for effect in self.effects.iter_mut().filter(|e| !e.active) {
self.undo_effect(effect);
}
self.effects.retain(|e| e.active);
}
}
The below compiles ok - but is there a better way?
struct Player {
effects: [Vec<Effect>; 2],
index: usize
}
impl Player {
fn update(&mut self, delta_time: f32) {
let src_index = self.index;
let target_index = if self.index == 0 { 1 } else { 0 };
self.effects[target_index].clear(); // should be unnecessary.
while !self.effects[src_index].is_empty() {
if let Some(x) = self.effects[src_index].pop() {
if x.active {
self.effects[target_index].push(x);
} else {
self.undo_effect(&x);
}
}
}
self.index = target_index;
}
}
Is there an iterator version that works without unnecessary memory allocations? I'd be ok with allocating memory only for the removed elements, since this will be much rarer.
Would an iterator be more efficient than the pop()/push() version?
EDIT 2020-02-23:
I ended up coming back to this and I found a slightly more robust solution, similar to the above but without the danger of requiring a target_index field.
std::mem::swap(&mut self.effects, &mut self.effects_cache);
self.effects.clear();
while !self.effects_cache.is_empty() {
if let Some(x) = self.effects_cache.pop() {
if x.active {
self.effects.push(x);
} else {
self.undo_effect(&x);
}
}
}
Since self.effects_cache is unused outside this method, and the method does not require it to have any particular value beforehand, the rest of the code can simply use self.effects and it will always be current.
The main issue is that you are borrowing a field (effects) of Player and trying to call undo_effect while this field is borrowed. As you noted, this does not work.
You already realized that you could juggle two vectors, but you could actually only juggle one (permanent) vector:
struct Player {
effects: Vec<Effect>
}
impl Player {
fn update(&mut self, delta_time: f32) {
for effect in &mut self.effects {
effect.remaining -= delta_time;
if effect.remaining <= 0.0 {
effect.active = false;
}
}
// Temporarily remove effects from Player.
let mut effects = std::mem::replace(&mut self.effects, vec!());
// Call Player::undo_effects (no outstanding borrows).
// `drain_filter` could also be used, for better efficiency.
for effect in effects.iter_mut().filter(|e| !e.active) {
self.undo_effect(effect);
}
// Restore effects
self.effects = effects;
self.effects.retain(|e| e.active);
}
}
This will not allocate because the default constructor of Vec does not allocate.
On the other hand, the double-vector solution might be more efficient as it allows a single pass over self.effects rather than two. YMMV.
If I understand you correctly, you have two questions:
How can I split a Vec into two Vecs (one whose elements fulfill a predicate, the other whose elements don't)?
Is it possible to do this without memory overhead?
There are multiple ways of splitting a Vec into two (or more).
You could use Iterator::partition, which will give you two distinct collections that can be used further.
There is the unstable Vec::drain_filter function, which does the same but on the Vec itself.
You could use splitn (or splitn_mut), which will split your Vec/slice into at most n (2 in your case) subslices.
Depending on what you want to do, all solutions are applicable and good to use.
Is it possible without memory overhead? Not with the solutions above, because you need to create a second Vec which can hold the filtered items. But there is a solution: you can "sort" the Vec so that the first half contains all the items that fulfill the predicate (e.g. are not expired) and the second half all the items that fail the predicate (are expired). You just need to count the number of items that fulfill the predicate.
Then you can use split_at (or split_at_mut) to split the Vec/slice into two distinct slices. Afterwards you can resize the Vec to the length of the good items and the other ones will be dropped.
The best answer is this one in C++.
[O]rder the indices vector, create two iterators into the data vector, one for reading and one for writing. Initialize the writing iterator to the first element to be removed, and the reading iterator to one beyond that one. Then in each step of the loop increment the iterators to the next value (writing) and next value not to be skipped (reading) and copy/move the elements. At the end of the loop call erase to discard the elements beyond the last written to position.
The Rust adaptation to your specific problem is to move the removed items out of the vector instead of just writing over them.
An alternative is to use a linked list instead of a vector to hold your Effect instances.

How to "leap of faith" when using recursion?

For me, writing a recursive method always takes a lot of time, because I make test cases to check whether my recursive case works and I draw a stack diagram. However, when I ask others about it, they just say that I need to trust that it will work. How am I supposed to believe that if I don't know what is going on in the recursive case?
You define what is going on in the recursive case, just as you define the rest of the method. Imagine someone else wrote a method to do what the one you are writing does; you wouldn't have a problem calling that, would you? The only difference is that you are that method's author, and it just happens to be the one being written.
For example: I am writing the following method:
// Sort array a[i..j-1] in ascending order
method sort_array( a, i, j ) {
..
}
The base case is easy:
if ( i >= j-1 ) // there is at most one element to be sorted
return; // a[i..j-1] is already sorted
Now, when that isn't true, I could do the following:
else {
k = index_of_max( a, i, j );
swap( a, j-1, k );
At this point, I know that a[j-1] has the correct value, so I just need to sort what comes before it -- fortunately, I have a method to do just that:
sort_array( a, i, j-1 );
}
No leap of faith is required; I know that recursive call will work because I wrote the method to do just that.
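For concreteness, here is a minimal R sketch of the same recursive selection sort (a 1-based adaptation of the pseudocode above, with j exclusive as before; the name sort_array and its default arguments are just for illustration):
# Sort a[i..j-1] in ascending order (1-based indices, j exclusive)
sort_array <- function(a, i = 1, j = length(a) + 1) {
  if (i >= j - 1) return(a)                       # at most one element: already sorted
  k <- i - 1 + which.max(a[i:(j - 1)])            # index of the maximum in a[i..j-1]
  tmp <- a[j - 1]; a[j - 1] <- a[k]; a[k] <- tmp  # move the maximum into position j-1
  sort_array(a, i, j - 1)                         # sort what comes before it
}
sort_array(c(3, 1, 2))   # 1 2 3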

Explicitly calling return in a function or not

A while back I got rebuked by Simon Urbanek from the R core team (I believe) for recommending that a user explicitly call return at the end of a function (his comment was deleted, though):
foo = function() {
return(value)
}
instead he recommended:
foo = function() {
value
}
Probably in a situation like this it is required:
foo = function() {
if(a) {
return(a)
} else {
return(b)
}
}
His comment shed some light on why not calling return unless strictly needed is a good thing, but this was deleted.
My question is: Why is not calling return faster or better, and thus preferable?
Question was: Why is not (explicitly) calling return faster or better, and thus preferable?
There is no statement in the R documentation making such a claim.
The main page ?'function' says:
function( arglist ) expr
return(value)
Is it faster without calling return?
Both function() and return() are primitive functions, and function() itself returns the last evaluated value even without an explicit return() call.
Calling return() as .Primitive('return') with that last value as an argument does the same job but needs one more call. So this (often unnecessary) .Primitive('return') call can consume additional resources.
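For instance, a quick illustrative check (assuming an interactive R session):
is.primitive(return)                  # TRUE: return is .Primitive('return')
f <- function() { 1 + 1 }             # no return(): the last evaluated value is the result
g <- function() { return(1 + 1) }     # explicit return(): one extra (cheap) primitive call
f() == g()                            # TRUE: both evaluate to 2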
A simple measurement, however, shows that the resulting difference is very small and thus cannot be the reason for not using an explicit return. The following plot was created from data collected this way:
bench_nor2 <- function(x,repeats) { system.time(rep(
# without explicit return
(function(x) vector(length=x,mode="numeric"))(x)
,repeats)) }
bench_ret2 <- function(x,repeats) { system.time(rep(
# with explicit return
(function(x) return(vector(length=x,mode="numeric")))(x)
,repeats)) }
maxlen <- 1000
reps <- 10000
along <- seq(from=1,to=maxlen,by=5)
ret <- sapply(along,FUN=bench_ret2,repeats=reps)
nor <- sapply(along,FUN=bench_nor2,repeats=reps)
res <- data.frame(N=along,ELAPSED_RET=ret["elapsed",],ELAPSED_NOR=nor["elapsed",])
# res object is then visualized
# R version 2.15
The picture above may differ slightly on your platform.
Based on the measured data, the size of the returned object does not cause any difference, and the number of repeats (even if scaled up) makes only a very small difference, which in the real world, with real data and real algorithms, would neither be measurable nor make your script run faster.
Is it better without calling return?
return is a good tool for clearly marking the "leaves" of the code where the routine should end, jump out of the function, and return a value.
# here without calling .Primitive('return')
> (function() {10;20;30;40})()
[1] 40
# here with .Primitive('return')
> (function() {10;20;30;40;return(40)})()
[1] 40
# here return terminates flow
> (function() {10;20;return();30;40})()
NULL
> (function() {10;20;return(25);30;40})()
[1] 25
>
Which style to use depends on the strategy and programming style of the programmer; they can avoid return(), as it is not required.
R core programmers use both approaches, i.e. with and without explicit return(), as can be seen in the sources of the 'base' functions.
Many times only return() (with no argument, returning NULL) is used to conditionally stop the function.
It is not clear whether one is better or not, as a standard user or analyst using R cannot see any real difference.
My opinion is that the question should be: Is there any danger, arising from R's implementation, in using an explicit return?
Or, maybe better, a user writing function code should always ask: What is the effect of not using an explicit return (i.e. of placing the object to be returned as the last leaf of a code branch) in the function code?
If everyone agrees that
return is not necessary at the end of a function's body
not using return is marginally faster (according to #Alan's test, 4.3 microseconds versus 5.1)
should we all stop using return at the end of a function? I certainly won't, and I'd like to explain why. I hope to hear if other people share my opinion. And I apologize if it is not a straight answer to the OP, but more like a long subjective comment.
My main problem with not using return is that, as Paul pointed out, there are other places in a function's body where you may need it. And if you are forced to use return somewhere in the middle of your function, why not make all return statements explicit? I hate being inconsistent. Also I think the code reads better; one can scan the function and easily see all exit points and values.
Paul used this example:
foo = function() {
if(a) {
return(a)
} else {
return(b)
}
}
Unfortunately, one could point out that it can easily be rewritten as:
foo = function() {
if(a) {
output <- a
} else {
output <- b
}
output
}
The latter version even conforms with some programming coding standards that advocate one return statement per function. I think a better example could have been:
bar <- function() {
while (a) {
do_stuff
for (b) {
do_stuff
if (c) return(1)
for (d) {
do_stuff
if (e) return(2)
}
}
}
return(3)
}
This would be much harder to rewrite using a single return statement: it would need multiple breaks and an intricate system of boolean variables for propagating them. All this to say that the single return rule does not play well with R. So if you are going to need to use return in some places of your function's body, why not be consistent and use it everywhere?
I don't think the speed argument is a valid one. A 0.8 microsecond difference is nothing when you start looking at functions that actually do something. The last thing I can see is that it is less typing but hey, I'm not lazy.
This is an interesting discussion. I think that #flodel's example is excellent. However, I think it illustrates my point (and #koshke mentions this in a comment) that return makes sense when you use an imperative instead of a functional coding style.
Not to belabour the point, but I would have rewritten foo like this:
foo = function() ifelse(a,a,b)
A functional style avoids state changes, like storing the value of output. In this style, return is out of place; foo looks more like a mathematical function.
I agree with #flodel: using an intricate system of boolean variables in bar would be less clear, and pointless when you have return. What makes bar so amenable to return statements is that it is written in an imperative style. Indeed, the boolean variables represent the "state" changes avoided in a functional style.
It is really difficult to rewrite bar in functional style, because it is just pseudocode, but the idea is something like this:
e_func <- function() do_stuff
d_func <- function() ifelse(any(sapply(seq(d),e_func)),2,3)
b_func <- function() {
do_stuff
ifelse(c,1,sapply(seq(b),d_func))
}
bar <- function () {
do_stuff
sapply(seq(a),b_func) # Not exactly correct, but illustrates the idea.
}
The while loop would be the most difficult to rewrite, because it is controlled by state changes to a.
The speed loss caused by a call to return is negligible, but the efficiency gained by avoiding return and rewriting in a functional style is often enormous. Telling new users to stop using return probably won't help, but guiding them towards a functional style will pay off.
#Paul return is necessary in imperative style because you often want to exit the function at different points in a loop. A functional style doesn't use loops, and therefore doesn't need return. In a purely functional style, the final call is almost always the desired return value.
In Python, a function needs an explicit return statement to return a value (otherwise it returns None). However, if you program your function in a functional style, you will likely have only one return statement: at the end of your function.
Using an example from another StackOverflow post, let us say we wanted a function that returned TRUE if all the values in a given x had an odd length. We could use two styles:
# Procedural / Imperative
allOdd = function(x) {
for (i in x) if (length(i) %% 2 == 0) return (FALSE)
return (TRUE)
}
# Functional
allOdd = function(x)
all(sapply(x, length) %% 2 == 1)
In a functional style, the value to be returned naturally falls at the ends of the function. Again, it looks more like a mathematical function.
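For concreteness, either version can be called like this (hypothetical example inputs):
allOdd(list("a", 1:3))          # TRUE: element lengths 1 and 3 are both odd
allOdd(list("a", c(1, 2), 3))   # FALSE: the second element has length 2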
#GSee The warnings outlined in ?ifelse are definitely interesting, but I don't think they are trying to dissuade use of the function. In fact, ifelse has the advantage of automatically vectorizing functions. For example, consider a slightly modified version of foo:
foo = function(a) { # Note that it now has an argument
if(a) {
return(a)
} else {
return(b)
}
}
This function works fine when length(a) is 1. But if you rewrote foo with an ifelse
foo = function (a) ifelse(a,a,b)
Now foo works on any length of a. In fact, it would even work when a is a matrix. Returning a value the same shape as test is a feature that helps with vectorization, not a problem.
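To illustrate the vectorization point, a small sketch (assuming b <- 0 just for the example):
b <- 0
foo <- function(a) ifelse(a, a, b)
foo(c(2, 0, -3))   # 2 0 -3 -- the result has the same shape as the input, elementwise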
It seems that without return() it's faster...
library(rbenchmark)
x <- 1
foo <- function(value) {
return(value)
}
fuu <- function(value) {
value
}
benchmark(foo(x),fuu(x),replications=1e7)
test replications elapsed relative user.self sys.self user.child sys.child
1 foo(x) 10000000 51.36 1.185322 51.11 0.11 0 0
2 fuu(x) 10000000 43.33 1.000000 42.97 0.05 0 0
EDIT:
I ran another benchmark (benchmark(fuu(x),foo(x),replications=1e7)) and the result was reversed... I'll try it on a server.
My question is: Why is not calling return faster
It’s faster because return is a (primitive) function in R, which means that using it in code incurs the cost of a function call. Compare this to most other programming languages, where return is a keyword, but not a function call: it doesn’t translate to any runtime code execution.
That said, calling a primitive function in this way is pretty fast in R, and calling return incurs a minuscule overhead. This isn’t the argument for omitting return.
or better, and thus preferable?
Because there’s no reason to use it.
Because it’s redundant, and it doesn’t add useful redundancy.
To be clear: redundancy can sometimes be useful. But most redundancy isn’t of this kind. Instead, it’s of the kind that adds visual clutter without adding information: it’s the programming equivalent of a filler word or chartjunk.
Consider the following example of an explanatory comment, which is universally recognised as bad redundancy because the comment merely paraphrases what the code already expresses:
# Add one to the result
result = x + 1
Using return in R falls in the same category, because R is a functional programming language, and in R every function call has a value. This is a fundamental property of R. And once you see R code from the perspective that every expression (including every function call) has a value, the question then becomes: “why should I use return?” There needs to be a positive reason, since the default is not to use it.
One such positive reason is to signal early exit from a function, say in a guard clause:
f = function (a, b) {
if (! precondition(a)) return() # same as `return(NULL)`!
calculation(b)
}
This is a valid, non-redundant use of return. However, such guard clauses are rare in R compared to other languages, and since every expression has a value, a regular if does not require return:
sign = function (num) {
if (num > 0) {
1
} else if (num < 0) {
-1
} else {
0
}
}
We can even rewrite f like this:
f = function (a, b) {
if (precondition(a)) calculation(b)
}
… where if (cond) expr is the same as if (cond) expr else NULL.
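A quick illustrative check of that equivalence:
identical(if (FALSE) "yes", NULL)             # TRUE: an if with no else yields NULL when the condition fails
identical(if (FALSE) "yes" else NULL, NULL)   # TRUE: same as writing the else branch explicitly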
Finally, I’d like to forestall three common objections:
Some people argue that using return adds clarity, because it signals “this function returns a value”. But as explained above, every function returns something in R. Thinking of return as a marker of returning a value isn’t just redundant, it’s actively misleading.
Relatedly, the Zen of Python has a marvellous guideline that should always be followed:
Explicit is better than implicit.
How does dropping redundant return not violate this? Because the return value of a function in a functional language is always explicit: it’s its last expression. This is again the same argument about explicitness vs redundancy.
In fact, if you want explicitness, use it to highlight the exception to the rule: mark functions that don’t return a meaningful value, which are only called for their side-effects (such as cat). Except R has a better marker than return for this case: invisible. For instance, I would write
save_results = function (results, file) {
# … code that writes the results to a file …
invisible()
}
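The effect of invisible() is easy to see at the prompt (a minimal illustration):
f1 <- function() 42              # returns 42, which auto-prints when called at the prompt
f2 <- function() invisible(42)   # also returns 42, but suppresses the auto-printing
f1()       # [1] 42
f2()       # prints nothing
x <- f2()
x          # [1] 42 -- the value is still returned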
But what about long functions? Won’t it be easy to lose track of what is being returned?
Two answers: first, not really. The rule is clear: the last expression of a function is its value. There’s nothing to keep track of.
But more importantly, the problem in long functions isn’t the lack of explicit return markers. It’s the length of the function. Long functions almost (?) always violate the single responsibility principle and even when they don’t they will benefit from being broken apart for readability.
A problem with not putting 'return' explicitly at the end is that if one adds additional statements at the end of the method, suddenly the return value is wrong:
foo <- function() {
dosomething()
}
This returns the value of dosomething().
Now we come along the next day and add a new line:
foo <- function() {
dosomething()
dosomething2()
}
We wanted our code to return the value of dosomething(), but instead it no longer does.
With an explicit return, this becomes really obvious:
foo <- function() {
return( dosomething() )
dosomething2()
}
We can see that there is something strange about this code, and fix it:
foo <- function() {
dosomething2()
return( dosomething() )
}
I think of return as a trick. As a general rule, the value of the last expression evaluated in a function becomes the function's value -- and this general pattern is found in many places. All of the following evaluate to 3:
local({
1
2
3
})
eval(expression({
1
2
3
}))
(function() {
1
2
3
})()
What return does is not really returning a value (this is done with or without it) but "breaking out" of the function in an irregular way. In that sense, it is the closest equivalent of GOTO statement in R (there are also break and next). I use return very rarely and never at the end of a function.
if(a) {
return(a)
} else {
return(b)
}
... this can be rewritten as if(a) a else b, which is much more readable and less curly-bracketish. No need for return at all here. My prototypical use case for "return" would be something like ...
ugly <- function(species, x, y){
if(length(species)>1) stop("First argument is too long.")
if(species=="Mickey Mouse") return("You're kidding!")
### do some calculations
if(grepl("mouse", species)) {
## do some more calculations
if(species=="Dormouse") return(paste0("You're sleeping until", x+y))
## do some more calculations
return(paste0("You're a mouse and will be eating for ", x^y, " more minutes."))
}
## some more ugly conditions
# ...
### finally
return("The end")
}
Generally, the need for many returns suggests that the problem is either ugly or badly structured.
[EDIT]
return doesn't really need a function to work: you can use it to break out of a set of expressions to be evaluated.
getout <- TRUE
# if getout==TRUE then the value of EXP, LOC, and FUN will be "OUTTA HERE"
# .... if getout==FALSE then it will be `3` for all these variables
EXP <- eval(expression({
1
2
if(getout) return("OUTTA HERE")
3
}))
LOC <- local({
1
2
if(getout) return("OUTTA HERE")
3
})
FUN <- (function(){
1
2
if(getout) return("OUTTA HERE")
3
})()
identical(EXP,LOC)
identical(EXP,FUN)
The argument of redundancy has come up a lot here. In my opinion that is not reason enough to omit return().
Redundancy is not automatically a bad thing. When used strategically, redundancy makes code clearer and more maintainable.
Consider this example: Function parameters often have default values. So specifying a value that is the same as the default is redundant. Except it makes obvious the behaviour I expect. No need to pull up the function manpage to remind myself what the defaults are. And no worry about a future version of the function changing its defaults.
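For example (an illustrative call; decreasing = FALSE is sort()'s documented default):
# Redundant, but it documents the intended behaviour at the call site:
sort(c(3, 1, 2), decreasing = FALSE)   # 1 2 3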
With a negligible performance penalty for calling return() (as per the benchmarks posted here by others) it comes down to style rather than right and wrong. For something to be "wrong", there needs to be a clear disadvantage, and nobody here has demonstrated satisfactorily that including or omitting return() has a consistent disadvantage. It seems very case-specific and user-specific.
So here is where I stand on this.
function(){
#do stuff
...
abcd
}
I am uncomfortable with "orphan" variables like in the example above. Was abcd going to be part of a statement I didn't finish writing? Is it a remnant of a splice/edit in my code and needs to be deleted? Did I accidentally paste/move something from somewhere else?
function(){
#do stuff
...
return(abdc)
}
By contrast, this second example makes it obvious to me that it is an intended return value, rather than some accident or incomplete code. For me this redundancy is absolutely not useless.
Of course, once the function is finished and working I could remove the return. But removing it is in itself a redundant extra step, and in my view more useless than including return() in the first place.
All that said, I do not use return() in short unnamed one-liner functions. There it makes up a large fraction of the function's code and therefore mostly causes visual clutter that makes the code less legible. But for larger, formally defined and named functions, I use it and will likely continue to do so.
return can increase code readability:
foo <- function() {
if (a) return(a)
b
}

What are block expressions actually good for?

I just solved the first problem from Project Euler in JavaFX for the fun of it and wondered: what are block expressions actually good for? Why are they superior to functions? Is it because of the narrowed scope? Less to write? Performance?
Here's the Euler example. I used a block here, but I don't know if it actually makes sense:
// sums up all number from low to high exclusive which are divisible by a or b
function sumDivisibleBy(a: Integer, b: Integer, high: Integer) {
def low = if (a <= b) a else b;
def sum = {
var result = 0;
for (i in [low .. <high] where i mod a == 0 or i mod b == 0) {
result += i
}
result
}
}
Does a block make sense here?
Well, no, not really; it looks like extra complexity with no real benefit. Try removing the sum variable and the block and you will see that the code still works the same.
In general, block expressions can be useful when you want to create an anonymous scope rather than factoring the code out into a function; most of the time, though, you should rather create a function.
