maximum and minimum constraint on a variable

I write a lot of methods looking a bit like this:
/* myVal must be between 10 and 90 */
int myVal = foo;
if(myVal < 10) { myVal = 10; }
else if (myVal > 90) { myVal = 90; }
Is there a more elegant way of doing it? Obviously you could easily write a method, but I wondered if any languages had a more natural way of setting a constraint, or whether there was something else I'm missing.
Language agnostic, because I'm interested in how different languages might deal with it.

In most programming languages, you could use something like
myVal = min(max(foo, 10), 90);
or simply write a clip() macro which does the same thing.
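For example, a minimal sketch in Python (a function rather than a macro; the name clamp is my own choice, not a built-in):

def clamp(value, lo, hi):
    # Constrain value to the closed range [lo, hi].
    return min(max(value, lo), hi)

my_val = clamp(137, 10, 90)   # -> 90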

I've made use of an interval data type from time to time. Most of the effort is in initializing the different cases, since you might want to handle either end being either open or closed. Testing for inclusion is easy for any given case.
Nothing earth-shaking here, but surprisingly convenient. Easy to do in object-oriented, functional, and procedural styles.
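As a rough sketch of what I mean, in Python (the class name and API are my own choices, not a standard library):

class Interval:
    # An interval whose two ends can independently be open or closed.
    def __init__(self, lo, hi, lo_closed=True, hi_closed=True):
        self.lo, self.hi = lo, hi
        self.lo_closed, self.hi_closed = lo_closed, hi_closed

    def __contains__(self, x):
        # Inclusion test: pick >= or > per endpoint.
        above = x >= self.lo if self.lo_closed else x > self.lo
        below = x <= self.hi if self.hi_closed else x < self.hi
        return above and below

print(10 in Interval(10, 90))                    # True: closed lower end
print(10 in Interval(10, 90, lo_closed=False))   # False: open lower end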


Is This A Good Example Of Recursion?

I realize this is a really simple bit of code, and I'm quite sure it's recursive, but I just want to make sure it is what I think it is. (Sorry if this is kind of a lame question; I'm just second-guessing myself about whether I understand what recursion actually is.)
var x = 0

func countToTen() {
    if (x <= 10) {
        println(x)
        x++
        countToTen()
    }
}
Yes, this is definitely recursive! For good style, however, it is best to make x a parameter of the function. It's partly a style issue, but avoiding global variables like the one you have here also makes the code easier to maintain.
Here's what I'm talking about:
func countToTen(x) {
    if (x <= 10) {
        println(x)
        countToTen(x + 1)
    }
}
Now you can just call the function:
countToTen(1)
and it would count from 1 to ten. You did it correctly; my version is just in slightly cleaner form.
In programming, if a method:
Calls itself and,
Moves towards a base case (here, reached once x > 10, at which point the function stops calling itself)
then it is recursive.
Here is a post about real-world examples.

Variable vs Constant

I was wondering about the differences between variables and constants, as I see different declarations of variables/constants in the code written by ex-colleagues.
I know that a variable is something that can be changed throughout the code, while the value of a constant is fixed and can't be changed. So far I've written everything as variables (even when the value will never change). Is my practice incorrect? Perhaps my code is not complicated, which is why I use variables all the time.
Anyhow, if my understanding is proven wrong, please enlighten me with the correct guidelines on this matter.
It is a good code practice to use constants whenever possible.
At compile time it is known that only read operations can be performed on those values, so the compiler can apply some access optimizations automatically, which can improve performance.
Another difference is that constants are stored in a separate, preallocated section of your binary (compiler dependent, but on most compilers this is what happens), which makes them easier to access, and they don't get allocated and deallocated all the time (another performance optimization).
And finally, constant expressions can be evaluated at compile time.
For example, if you have an expression made entirely of constants, something like the following:
float a = const1 * const2 / const3 + const4;
Then the whole expression will be evaluated at compile time, saving cycles at runtime (since the value will always be the same).
Some popular constants that benefit from this sort of optimization are PI, PI/2, PI/4, and 1/PI.
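As an aside, you can even watch this folding happen in CPython, whose bytecode compiler precomputes literal expressions; a small sketch using the standard dis module (the function name is my own):

import dis

def quarter_pi():
    return 3.14159 / 4   # a pure literal expression, like PI/4 above

# The disassembly shows a single precomputed constant (0.7853975)
# being loaded; no division happens at call time.
dis.dis(quarter_pi)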
const int const_a = 10;
int static_a = 70;

public void sample()
{
    static_a = const_a + 10; // this is correct
    // const_a = 88;         // this is wrong: a const can't be reassigned
}

In the example above, once we declare a variable as const we can't assign to it anywhere, but we can still read and use its value.

Big idea/strategy behind turning while/for loops into recursions? And when is conversion possible/not possible?

I've been writing (unsophisticated) code for a decent while, and I feel like I have a somewhat firm grasp on while and for loops and if/else statements. I should also say that I feel like I understand (at my level, at least) the concept of recursion. That is, I understand how a method keeps calling itself until the parameters of an iteration match a base case in the method, at which point the methods begin to terminate and pass control (along with values) to previous instances and eventually an overall value of the first call is determined. I may not have explained it very well, but I think I understand it, and I can follow/make traces of the structured examples I've seen. But my question is on creating recursive methods in the wild, i.e., in unstructured circumstances.
Our professor wants us to write recursively at every opportunity, and has made the (technically inaccurate?) statement that all loops can be replaced with recursion. But, since many times recursive operations are contained within while or for loops, this means, to state the obvious, not every loop can be replaced with recursion. So...
For unstructured/non-classroom situations,
1) how can I recognize that a loop situation can/cannot be turned into a recursion, and
2) what is the overall idea/strategy to use when applying recursion to a situation? I mean, how should I approach the problem? What aspects of the problem will be used as recursive criteria, etc?
Thanks!
Edit 6/29:
While I appreciate the 2 answers, I think maybe the preamble to my question was too long because it seems to be getting all of the attention. What I'm really asking is for someone to share with me, a person who "thinks" in loops, an approach for implementing recursive solutions. (For purposes of the question, please assume I have a sufficient understanding of the solution, but just need to create recursive code.) In other words, to apply a recursive solution, what am I looking for in the problem/solution that I will then use for the recursion? Maybe some very general statements about applying recursion would be helpful too. (note: please, not definitions of recursion, since I think I pretty much understand the definition. It's just the process of applying them I am asking about.) Thanks!
Every loop CAN be turned into recursion fairly easily. (It's also true that every recursion can be turned into loops, but not always easily.)
But, I realize that saying "fairly easily" isn't actually very helpful if you don't see how, so here's the idea:
For this explanation, I'm going to assume a plain vanilla while loop--no nested loops or for loops, no breaking out of the middle of the loop, no returning from the middle of the loop, etc. Those other things can also be handled but would muddy up the explanation.
The plain vanilla while loop might look like this:
1. x = initial value;
2. while (some condition on x) {
3.     do something with x;
4.     x = next value;
5. }
6. final action;
Then the recursive version would be
A. def Recursive(x) {
B. if (some condition on x) {
C. do something with x;
D. Recursive(next value);
E. }
F. else { # base case = where the recursion stops
G. final action;
H. }
I.
J. Recursive(initial value);
So,
the initial value of x in line 1 became the original argument to Recursive on line J
the condition of the loop on line 2 became the condition of the if on line B
the first action inside the loop on line 3 became the first action inside the if on line C
the next value of x on line 4 became the next argument to Recursive on line D
the final action on line 6 became the action in the base case on line G
If more than one variable was being updated in the loop, then you would often have a corresponding number of arguments in the recursive function.
Again, this basic recipe can be modified to handle fancier situations than plain vanilla while loops.
Minor comment: In the recursive function, it would be more common to put the base case on the "then" side of the if instead of the "else" side. In that case, you would flip the condition of the if to its opposite. That is, the condition in the while loop tests when to keep going, whereas the condition in the recursive function tests when to stop.
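To make the recipe concrete, here is a small worked example of my own in Python, summing the integers 1..n; two variables are updated in the loop, so the recursive function gets two corresponding arguments:

# Plain vanilla while loop.
def sum_iter(n):
    i, total = 1, 0              # line 1: initial values
    while i <= n:                # line 2: loop condition
        total = total + i        # line 3: do something
        i = i + 1                # line 4: next value
    return total                 # line 6: final action

# The recursive version produced by the recipe.
def sum_rec(i, total, n):
    if i <= n:                                # condition from line 2
        return sum_rec(i + 1, total + i, n)   # next values become arguments
    else:
        return total                          # base case: final action

print(sum_iter(10), sum_rec(1, 0, 10))   # both print 55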
I may not have explained it very well, but I think I understand it, and I can follow/make traces of the structured examples I've seen
That's cool; if I understood your explanation well, then your idea of how recursion works is correct at first glance.
Our professor wants us to write recursively at every opportunity, and has made the (technically inaccurate?) statement that all loops can be replaced with recursion
That's not inaccurate. That's the truth. And the converse is also possible: every recursive function can be rewritten using iteration. It may be hard and unintuitive (like traversing a tree), but it's possible.
how can I recognize that a loop can/cannot be turned into a recursion
Simple: every loop can be turned into a recursion, so there is nothing to recognize.
what is the overall idea/strategy to use when doing the conversion?
There's no such thing, unfortunately. And by that I mean that there's no universal or general "work-it-all-out" method; you have to think each case through specifically when solving a particular problem. One thing may be helpful, however. When converting from an iterative algorithm to a recursive one, think about patterns. How long, and where exactly, is the part that keeps repeating itself with only a small difference each time?
Also, if you ever want to convert a recursive algorithm to an iterative one, keep in mind that the overwhelmingly popular approach to implementing recursion at the hardware level is a (call) stack. Except for trivially convertible algorithms, such as the beloved factorial or Fibonacci functions, you can always think about how it might look in assembly and create an explicit stack. Dirty, but it works.
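To illustrate that last trick, here is a hedged sketch in Python (the dict-based tree representation is just an assumption for the example); the explicit list plays the role of the call stack:

# Recursive depth-first traversal of a binary tree.
def visit_recursive(node):
    if node is None:
        return
    print(node["value"])
    visit_recursive(node["left"])
    visit_recursive(node["right"])

# The same traversal, with an explicit stack instead of recursion.
def visit_iterative(root):
    stack = [root]
    while stack:
        node = stack.pop()
        if node is None:
            continue
        print(node["value"])
        stack.append(node["right"])   # pushed first, so visited last
        stack.append(node["left"])

leaf = lambda v: {"value": v, "left": None, "right": None}
tree = {"value": 1, "left": leaf(2), "right": leaf(3)}
visit_recursive(tree)   # prints 1 2 3
visit_iterative(tree)   # prints 1 2 3 as well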
for(int i = 0; i < 50; i++)
{
    for(int j = 0; j < 60; j++)
    {
    }
}
Is equivalent to (with the conditions flipped, since the recursion tests when to stop rather than when to keep going):
void rec1(int i)
{
    if (i >= 50)
        return;
    rec2(0);
    rec1(i + 1);
}

void rec2(int j)
{
    if (j >= 60)
        return;
    rec2(j + 1);
}
(kicked off with rec1(0);)
Every loop can be recursive. Trust your professor, he is right!

Terminating an apply-based function early (similar to break?)

I am searching for a way to terminate an apply function early on some condition. With a for loop, I would do something like:
FDP_HCFA = function(FaultMatrix, TestCosts, GenerateNeighbors, RandomSeed) {
  set.seed(RandomSeed)
  ## number of tests, mind the summary column
  nT = ncol(FaultMatrix) - 1
  StartingSequence = sample(1:nT)
  BestAPFD = APFD_C(StartingSequence, FaultMatrix, TestCosts)
  BestPrioritization = StartingSequence
  MakingProgress = TRUE
  NumberOfIterations = 0
  while(MakingProgress) {
    BestPrioritizationBefore = BestPrioritization
    AllCurrentNeighbors = GenerateNeighbors(BestPrioritization)
    ## walk the neighborhood, stopping at the first improvement
    for(CurrentNeighbor in AllCurrentNeighbors) {
      CurrentAPFD = APFD_C(CurrentNeighbor, FaultMatrix, TestCosts)
      if(CurrentAPFD > BestAPFD) {
        BestAPFD = CurrentAPFD
        BestPrioritization = CurrentNeighbor
        break
      }
    }
    ## no improving neighbor found: stop making progress
    if(length(union(list(BestPrioritizationBefore),
                    list(BestPrioritization))) == 1)
      MakingProgress = FALSE
    NumberOfIterations = NumberOfIterations + 1
  }
}
I would like to rewrite this function using some derivative of apply. In particular, I want to terminate the evaluation at the first individual with increased fitness, thereby avoiding the cost of considering the rest of the population.
I reckon that you don't really grasp the apply family and its purpose. Contrary to the general idea, they're not the equivalent of any for-loop. One can say that most for-loops are the equivalent of an apply, but that's another matter.
Apply does exactly what it says: it applies a function to a number of similar arguments sequentially, and returns the result. Hence, by definition, you cannot break out of an apply. You're not operating in the global environment any more, so in principle you cannot keep global counters, check some condition after each execution, and adapt the loop. You can access the global environment and even change variables using assign or <<-, but this is pretty dangerous.
To understand the difference, don't read sapply(1:3, afunc) as for(i in 1:3) afunc(i), but as
afunc(1)
afunc(2)
afunc(3)
in one (block) statement. That better reflects what you're actually doing. An equivalent of break in an apply simply doesn't make sense, as it is more a block of code than a loop.
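(Not R, but the same point can be made in Python, where map plays the role of an apply; the toy fitness function here is my own stand-in for an expensive evaluation:)

neighbors = [3, 7, 12, 5, 20]
best = 20
fitness = lambda n: n * 2     # hypothetical, expensive in real life

# map-style: fitness is applied to *every* element; there is no break.
all_scores = list(map(fitness, neighbors))

# loop-style: we can stop at the first neighbor that beats the best.
first_better = None
for n in neighbors:
    if fitness(n) > best:
        first_better = n
        break

# A lazy generator driven by next() also stops at the first hit.
first_better = next((n for n in neighbors if fitness(n) > best), None)
print(first_better)   # 12; the 5 and the 20 were never evaluated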
Aside from getting your sample code to work, I think this is a clear case where a loop is the right choice. Although R can apply a function to a whole vector of variables [EDIT: but you have to decide what they are before applying], in this case I'd use a while loop to avoid the cost of running unnecessary repetitions. Caveat: I know for loops have compared favorably with apply in timing tests, but I have not seen a similar test for while. Check out some of the options at http://cran.r-project.org/doc/manuals/R-lang.html#Control-structures.
while (statement1) statement2

understanding referential transparency

Generally, I have a headache because something is wrong with my reasoning:
1. For one set of arguments, a referentially transparent function will always return one set of output values.
2. That means that such a function could be represented as a truth table (a table where one set of output values is specified for each set of arguments).
3. That makes the logic behind such functions combinational (as opposed to sequential).
4. That means that with a pure functional language (one that has only referentially transparent functions) it is possible to describe only combinational logic.
The last statement follows from this reasoning, but it's obviously false; that means there is an error in the reasoning. [Question: where is the error in this reasoning?]
UPD2: You guys are saying lots of interesting stuff, but not answering my question. I have now defined it more explicitly. Sorry for messing up the question definition!
Question: where is the error in this reasoning?
A referentially transparent function might require an infinite truth table to represent its behavior. You will be hard pressed to design an infinite circuit in combinational logic.
Another error: the behavior of sequential logic can be represented purely functionally as a function from states to states. The fact that in the implementation these states occur sequentially in time does not prevent one from defining a purely referentially transparent function which describes how state evolves over time.
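A tiny Python sketch of that second point (the toggle flip-flop is my own example): the next state is a pure function of the current state and the input bit, so sequential behavior is described without any mutation.

from functools import reduce

def step(state, bit):
    # Pure next-state function of a toggle flip-flop: flip on a 1 bit.
    return (not state) if bit else state

def run(initial, bits):
    # Folding step over the inputs replays the state's evolution in time.
    return reduce(step, bits, initial)

print(run(False, [1, 0, 1]))   # always False for these same arguments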
Edit: Although I apparently missed the bullseye on the actual question, I think my answer is pretty good, so I'm keeping it :-) (see below).
I guess a more concise way to phrase the question might be: can a purely functional language compute anything an imperative one can?
First of all, suppose you took an imperative language like C and made it so you can't alter variables after defining them. E.g.:
int i;
for (i = 0; // okay, that's one assignment
i < 10; // just looking, that's all
i++) // BUZZZ! Sorry, can't do that!
Well, there goes your for loop. Do we get to keep our while loop?
while (i < 10)
Sure, but it's not very useful. i can't change, so it's either going to run forever or not run at all.
How about recursion? Yes, you get to keep recursion, and it's still plenty useful:
int sum(int *items, unsigned int count)
{
    if (count) {
        // count the first item and sum the rest
        return *items + sum(items + 1, count - 1);
    } else {
        // no items
        return 0;
    }
}
Now, with functions, we don't alter state, but variables can, well, vary. Once a variable passes into our function, it's locked in. However, we can call the function again (recursion), and it's like getting a brand new set of variables (the old ones stay the same). Although there are multiple instances of items and count, sum((int[]){1,2,3}, 3) will always evaluate to 6, so you can replace that expression with 6 if you like.
Can we still do anything we want? I'm not 100% sure, but I think the answer is "yes". You certainly can if you have closures, though.
You have it right. The idea is, once a variable is defined, it can't be redefined. A referentially transparent expression, given the same variables, always yields the same result value.
I recommend looking into Haskell, a purely functional language. Haskell doesn't have an "assignment" operator, strictly speaking. For instance:
my_sum numbers = ??? where
    i = 0
    total = 0
Here, you can't write a "for loop" that increments i and total as it goes along. All is not lost, though. Just use recursion to keep getting new is and totals:
my_sum numbers = f 0 0 where
    f i total =
        if i < length numbers
            then f i' total'
            else total
        where
            i' = i + 1
            total' = total + (numbers !! i)
(Note that this is a stupid way to sum a list in Haskell, but it demonstrates a method of coping with single assignment.)
Now, consider this highly imperative-looking code:
main = do
    a <- readLn
    b <- readLn
    print (a + b)
It's actually syntactic sugar for:
main =
    readLn >>= (\a ->
    readLn >>= (\b ->
    print (a + b)))
The idea is, instead of main being a function consisting of a list of statements, main is an IO action that Haskell executes, and actions are defined and chained together with bind operations. Also, an action that does nothing, yielding an arbitrary value, can be defined with the return function.
Note that bind and return aren't specific to actions. They can be used with any type that calls itself a Monad to do all sorts of funky things.
To clarify, consider readLn. readLn is an action that, if executed, would read a line from standard input and yield its parsed value. To do something with that value, we can't store it in a variable because that would violate referential transparency:
a = readLn
If this were allowed, a's value would depend on the world and would be different every time we called readLn, meaning readLn wouldn't be referentially transparent.
Instead, we bind the readLn action to a function that deals with the action, yielding a new action, like so:
readLn >>= (\x -> print (x + 1))
The result of this expression is an action value. If Haskell got off the couch and performed this action, it would read an integer, increment it, and print it. By binding the result of an action to a function that does something with the result, we get to keep referential transparency while playing around in the world of state.
As far as I understand it, referential transparency just means: A given function will always yield the same result when invoked with the same arguments. So, the mathematical functions you learned about in school are referentially transparent.
A language you could check out in order to learn how things are done in a purely functional language would be Haskell. There are ways to use "updateable storage possibilities" like the Reader Monad, and the State Monad for example. If you're interested in purely functional data structures, Okasaki might be a good read.
And yes, you're right: order of evaluation in a purely functional language like Haskell does not matter the way it does in non-functional languages, because if there are no side effects, there is no reason to do something before/after something else, unless the input of one depends on the output of the other, or means like monads come into play.
I don't really know about the truth-table question.
Here's my stab at answering the question:
Any system can be described as a combinational function, large or small.
There's nothing wrong with the reasoning that pure functions can only deal with combinational logic; it's true, it's just that functional languages hide that from you to some extent or another.
You could even describe, say, the workings of a game engine as a truth table or a combinatorial function.
You might have a deterministic function that takes in "the current state of the entire game" as the RAM occupied by the game engine and the keyboard input, and returns "the state of the game one frame later". The return value would be determined by the combinations of the bits in the input.
Of course, in any meaningful and sane function, the input is parsed down to blocks of integers, decimals and booleans, but the combinations of the bits in those values are still what determines the output of your function.
Keep in mind also that basic digital logic can be described in truth tables. The only reason that that's not done for anything more than, say, arithmetic on 4-bit integers, is because the size of the truth table grows exponentially.
The error in your reasoning is the following:
"That means that such a function could be represented as a truth table."
You conclude that from a functional language's property of referential transparency. So far the conclusion sounds plausible, but you overlook that a function is able to accept collections (of unbounded size) as input and process them, in contrast to the fixed inputs of a logic gate.
Therefore a function does not equal a logic gate, but rather a construction plan for such a logic gate, depending on the actual input (determined at runtime)!
To comment on your comment: functional languages can, although stateless, implement a state machine by constructing the states from scratch each time they are accessed.
