Just wondering why an initialValue is given to functional programming methods such as reduce(). What is the specific reason for having it this way?
Instead of passing it as an argument, it would also be possible to make it the first element of the array, since it behaves the same way.
I'm trying to learn functional programming now, but this point doesn't make sense to me.
"it would also be possible to make it the first element of the array, since it behaves the same way"
Well, no. reduce is a generic function that is supposed to work with arbitrary arrays, not just those that happen to have our expected initial value as their first element. The most important case here is the empty array.
Yes, it would be possible to prepend (unshift) the initial value as the first element of the array, call reduce without the initial value, then remove it from the array again, and you'd get the same result. But this is cumbersome, and might not always work (e.g. with immutable arrays). Also, the accumulator might not even have the same type as the array elements.
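For instance, here's a small sketch where the accumulator could never be the first element of the array, because it has a different type than the elements:

// Sketch: the accumulator (a number) has a different type than the
// array elements (strings), so it cannot live inside the array itself.
const words = ["functional", "programming"];
const totalLength = words.reduce((sum, word) => sum + word.length, 0);
console.log(totalLength); // 21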
Just think of the implementation of reduce, and the pattern it is supposed to abstract:
function reduce(callback, iterable) {
    let val = ???;
    for (const element of iterable)
        val = callback(val, element);
    return val;
}
Taking the ??? as a parameter is the sensible choice.
The main reason for this is the need to correctly handle zero-length data structures.
For multiplication, a good choice is 1; for addition, it's 0. For some data types it's the empty string, the identity matrix, and so on. These are the zeros (or units) of algebraic structures with binary operations.
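For example, with JavaScript's reduce, the identity element of the operation is the natural initialValue:

[1, 2, 3].reduce((a, b) => a + b, 0);   // 6    (0 is the identity of +)
[1, 2, 3].reduce((a, b) => a * b, 1);   // 6    (1 is the identity of *)
["a", "b"].reduce((a, b) => a + b, ""); // "ab" ("" is the identity of concatenation)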
With this arrangement, there are fewer special cases. As an analogy: why do we need signed integers? They exist so we can express subtraction naturally, without exceptions when an operation like 5 - 7 occurs.
Most of functional programming is deeply rooted in mathematical (algebraic) structures. Knowledge of these really helps and simplifies finding beautiful, generic solutions.
This question and its answers may also be of interest: Difference between fold and reduce?
If the array is empty and no initialValue is provided, a TypeError will be thrown.
— MDN Web Docs
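A quick illustration of that behavior:

[].reduce((a, b) => a + b);    // throws a TypeError
[].reduce((a, b) => a + b, 0); // 0 -- the initialValue handles the empty case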
Coming from Wolfram Mathematica, I like the idea that whenever I pass a variable to a function I am effectively creating a copy of that variable. On the other hand, I am learning that in Julia there are the notions of mutable and immutable types, with the former passed by reference and the latter passed by value. Can somebody explain to me the advantage of such a distinction? Why are arrays passed by reference? Naively I see this as a bad aspect, since it creates side effects and ruins the possibility of writing purely functional code. Where am I wrong in my reasoning? Is there a way to make an array immutable, such that when it is passed to a function it is effectively passed by value?
Here is an example:
# x is an Int and so immutable: it is passed by value
x = 10
function change_value(x)
    x = 17
end
change_value(x)
println(x)  # prints 10
# arrays are mutable: they are passed by reference
arr = [1, 2, 3]
function change_array!(A)
    A[1] = 20
end
change_array!(arr)
println(arr)
which indeed modifies the array arr (printing [20, 2, 3]).
There is a fair bit to respond to here.
First, Julia does not pass by reference or pass by value. Rather, it employs a paradigm known as pass-by-sharing. Quoting the docs:
Function arguments themselves act as new variable bindings (new locations that can refer to values), but the values they refer to are identical to the passed values.
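As a minimal sketch of what pass-by-sharing means in practice (the function name here is invented for illustration): mutating the shared value is visible to the caller, while rebinding the local name is not.

function rebind_and_mutate!(v)
    v[1] = 99      # mutation: the caller sees this, v is the same object
    v = [0, 0, 0]  # rebinding: only the local name v changes
    return nothing
end

a = [1, 2, 3]
rebind_and_mutate!(a)
println(a)  # [99, 2, 3]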
Second, you appear to be asking why Julia does not copy arrays when passing them into functions. This one is simple to answer: performance. Julia is a performance-oriented language. Making a copy every time you pass an array into a function is bad for performance, since every copy operation takes time.
This has some interesting side effects. For example, you'll notice that a lot of the mature Julia packages (as well as the Base code) consist of many short functions. This code structure is a direct consequence of the near-zero overhead of function calls. Languages like Mathematica and MATLAB, on the other hand, tend towards long functions. I have no desire to start a flame war here, so I'll merely state that personally I prefer the Julia style of many short functions.
Third, you are wondering about the potential negative implications of pass-by-sharing. In theory you are correct that this can cause problems when users are unsure whether a function will modify its inputs. There were long discussions about this in the early days of the language, and based on your question, you appear to have worked out that the convention is that functions which modify their arguments have a trailing ! in the function name. Interestingly, this standard is not compulsory, so yes, it is in theory possible to end up with a wild-west scenario where users live in a constant state of uncertainty. In practice this has never been a problem (to my knowledge). The convention of using ! is enforced in Base Julia, and in fact I have never encountered a package that does not adhere to it. In summary: yes, it is possible to run into issues with pass-by-sharing, but in practice it has never been a problem, and the performance benefits far outweigh the cost.
Fourth (and finally), you ask whether there is a way to make an array immutable. First things first, I would strongly recommend against hacks to attempt to make native arrays immutable. For example, you could attempt to disable the setindex! function for arrays... but please don't do this. It will break so many things.
As was mentioned in the comments on the question, you could use StaticArrays. However, as Simeon notes in the comments on this answer, there are performance penalties for using static arrays for really big datasets. More than 100 elements and you can run into compilation issues. The main benefit of static arrays really is the optimizations that can be implemented for smaller static arrays.
Another package-based option suggested by phipsgabler in the comments below is FunctionalCollections. This appears to do what you want, although it looks to be only sporadically maintained. Of course, that isn't always a bad thing.
A simpler approach is just to copy arrays in your own code whenever you want to implement pass-by-value. For example:
f!(copy(x))
Just be sure you understand the difference between copy and deepcopy, and when you may need to use the latter. If you're only working with arrays of numbers, you'll never need the latter, and in fact using it will probably drastically slow down your code.
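A small sketch of that distinction:

a = [[1, 2], [3, 4]]
b = copy(a)        # shallow: new outer array, same inner arrays
b[1][1] = 99
println(a[1][1])   # 99 -- the inner arrays are shared
c = deepcopy(a)    # fully independent copy, recursing into the elements
c[1][1] = 0
println(a[1][1])   # still 99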
If you wanted to do a bit of work, you could also build your own array type in the spirit of static arrays, but without all the bells and whistles that static arrays entail. For example:
struct MyImmutableArray{T,N}
    x::Array{T,N}
end

Base.getindex(y::MyImmutableArray, inds...) = getindex(y.x, inds...)
and similarly you could add any other functions you wanted to this type, while excluding functions like setindex!.
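Used like this (a sketch):

y = MyImmutableArray([1, 2, 3])
y[2]      # 2 -- reads are delegated to the wrapped array
y[2] = 5  # MethodError, since setindex! is deliberately not defined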
It seems that tc_expr is constrained to knowledge of the typing context and nothing else, so it is not possible to safely "typecheck" an expression that requires knowledge of the heap state, e.g. a pointer dereference as the condition of a while loop. Why is that, and would it ever be possible for me to prove correct a loop such as:
char *t = ...;
...
while (*t != 0)
{
    ...
    t++;
}
I would think while loops could optionally be proven with a variation of tc_expr that does allow for pointer dereference by accounting for the heap context along with the typing context. I suspect that the thinking is that a loop condition should be a “pure” expression, but I’m ultimately curious if that is really a necessary constraint.
P.S. I realize that I could rewrite this as a for loop. My question still stands knowing that VST allows me to prove this kind of loop albeit with different syntax.
Answer number 1: It's a design decision, one way or the other, and we found that many things are simpler (and more in the spirit of Separation Logic) if expressions do not access memory.
Answer number 2: You can write this while loop, just as it is. Then use clightgen with the -normalize flag (which you should always use anyway), and then you can verify it. However, in such a case, the loop form will not be (strictly speaking) a Clight "while" loop; it will have its loop test (if (?) then /*skip*/; else break;) in the middle of the loop body, so you will use forward_loop to prove it instead of forward_while.
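For illustration, here is a rough C-level sketch of the loop shape described above; the real output of clightgen is a Clight AST rather than C source, and the function and temporary names here are invented:

#include <stddef.h>

size_t my_strlen(const char *t) {
    size_t n = 0;
    for (;;) {
        char tmp = *t; /* the memory load is hoisted into a temporary */
        if (tmp != 0)
            ;          /* skip: fall through into the loop body */
        else
            break;     /* the loop test itself no longer touches memory */
        n++;
        t++;
    }
    return n;
}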
The Go docs say (emphasis added):
Programs using times should typically store and pass them as values, not pointers. That is, time variables and struct fields should be of type time.Time, not *time.Time. A Time value can be used by multiple goroutines simultaneously.
Is the last sentence (about using a Time value in multiple goroutines simultaneously) the only reason they should "typically" be stored and passed as values rather than pointers? Is this common to other structs as well? I tried looking for any logic that specifically enables this in the time.Time declaration and methods, but didn't notice anything special there.
Update: I often have to serve JSON representations of my structs, and I'd rather omit empty/uninitialized times. The json:",omitempty" tag doesn't work with time.Time values, which appears to be the expected behavior, but the best workaround seems to be to use a pointer, which goes against the advice in the docs quoted above.
It's common for many kinds of simple values.

In Go, when a value isn't bigger than one or two words, it's common to simply use it as a value instead of through a pointer, because there's no reason for a pointer if the object is small and you don't pass it to be changed.

You might have to unlearn the practice of languages where everything structured couldn't be handled as a value. It's probably natural for you to use integers or floating-point numbers as values, not pointers. Why not do the same for times?
Regarding your precise problem with JSON, and assuming you don't want to write a specific marshaller just for this, there's no problem in using a *time.Time. In fact, this issue was already mentioned on the golang-nuts list.
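For example, a minimal sketch of the pointer workaround (the Event type and its fields are invented for illustration):

package main

import (
    "encoding/json"
    "fmt"
    "time"
)

type Event struct {
    Name      string     `json:"name"`
    StartedAt *time.Time `json:"started_at,omitempty"` // nil pointer => field omitted
}

func main() {
    // Zero value: the pointer is nil, so the field is dropped from the JSON.
    a, _ := json.Marshal(Event{Name: "pending"})
    fmt.Println(string(a)) // {"name":"pending"}

    // Set: take the address of a time.Time value.
    now := time.Now()
    b, _ := json.Marshal(Event{Name: "running", StartedAt: &now})
    fmt.Println(string(b))
}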
What is the correct definition of destructive and non-destructive constructs in Lisp (or in general)? I have tried to search for the actual meaning, but I have only found lots of usages of these terms without any actual explanation.
From what I understand, a destructive function is a function that changes the meaning of the construct (or variable) passed to it: when I pass a list as a parameter to a function that changes it, the operation is called destructive, because it changes the initial list instead of returning a brand-new one. Is this right, or are there exceptions?
So is, for example, set a destructive function (because it changes the value of x)? I think not, but I don't know how I would justify that.
(set 'x 1)
Sorry if this is a very basic question. Thanks for any answers!
I would not read too much into the word 'destructive'.
In list processing, a destructive operation is one that potentially changes one or more of the input lists as a visible side effect.
Now, you can widen the meaning to operations over arrays, structures, CLOS objects, etc. You can also call variable assignment 'destructive' and so on.
In Common Lisp, it makes sense to talk about destructive operations over sequences (which are lists, strings, and vectors in general) and multi-dimensional arrays.
Practical Common Lisp distinguishes two kinds of destructive operations: for-side-effect operations and recycling operations.
set is destructive and for-side-effect: it always modifies its first argument. Beware that it changes the value binding of a symbol, but not the thing currently bound to that symbol. setf can change either bindings or objects in place.
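A short sketch of the difference:

(set 'x (list 1 2 3)) ; binds the symbol X to a fresh list
(set 'x 99)           ; changes the binding of X; the list (1 2 3) is untouched

(defparameter *l* (list 1 2 3))
(setf (car *l*) 99)   ; setf modifying the list object in place
*l*                   ; => (99 2 3)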
By contrast, nreverse is recycling: it is allowed to modify its argument list, although there's no guarantee that it will, so it should be used just like reverse (take the return value), except that the input argument may be "destroyed" and should no longer be used. [Scheme programmers may call this a "linear update" function.]
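And a sketch of the recycling behavior:

(defparameter *xs* (list 1 2 3))
(reverse *xs*)  ; => (3 2 1); *xs* is still (1 2 3)

(defparameter *ys* (nreverse *xs*))
*ys*            ; => (3 2 1)
;; *xs* may now be any recycled remnant -- treat it as destroyed.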
Playing with Erlang, I've got a process-looping function like:
process_loop(...A long list of parameters here...) ->
    receive
        ...Message processing logic involving the function parameters...
    end,
    process_loop(...Same long list of parameters...).
It looks quite ugly, so I tried a refactoring like this:
process_loop(...A long list of parameters...) ->
    Loop = fun() ->
        receive
            ...Message processing logic...
        end,
        Loop()
    end,
    Loop().
But it turned out to be incorrect, as the Loop variable is unbound inside the Loop fun. So I've arranged a workaround:
process_loop(...A long list of parameters...) ->
    Loop = fun(Next) ->
        receive
            ...Message processing logic...
        end,
        Next(Next)
    end,
    Loop(Loop).
I have two questions:
Is there a way to achieve the idea of snippet #2, but without such "Next(Next)" workarounds?
Do snippets #1 and #3 differ significantly in terms of performance, or are they equivalent?
No. Unfortunately, anonymous functions are just that: anonymous, unless you give them a name.
Snippet #3 is a little bit more expensive. Given that you do pattern matching on messages in the body, I wouldn't worry about it. Optimise for readability in this case. The difference is a very small constant factor.
You might use tuples/records as named parameters instead of passing lots of parameters. You can just reuse the single parameter that the function is going to take.
I guess (but I'm not sure) that this syntax isn't supported by proper tail recursion. If you refactor to use a single parameter, I think you will be back on the right track.
The more conventional way of avoiding repeating the list of parameters in snippet #1 is to put all or most of them in a record that holds the loop state. Then you only have one or a few variables to pass around in the loop. That's easier to read and harder to screw up than playing around with recursive funs.
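For example, a minimal sketch of that approach (the module, message shapes, and #state{} fields are invented for illustration):

-module(loop_demo).
-export([start/0]).

-record(state, {count = 0, log = []}).

start() ->
    spawn(fun() -> process_loop(#state{}) end).

process_loop(State = #state{count = Count, log = Log}) ->
    receive
        {incr, N} ->
            %% Update only the changed field; the rest rides along in State.
            process_loop(State#state{count = Count + N});
        {note, Msg} ->
            process_loop(State#state{log = [Msg | Log]});
        stop ->
            ok
    end.

That way, adding a new piece of loop state means touching the record definition and the clauses that use it, not every recursive call.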
I must say that in all cases where I do this type of recursion I don't think I have ever come across the case where exactly the same set of variables is passed around in the recursion. Usually variables will change reflecting state change in the process loop. It cannot be otherwise as you have to handle state explicitly. I usually group related parameters into records which cuts down the number of arguments and adds clarity.
You can of course use your solution and have some parameters implicit in the fun and some explicit in the recursive calls but I don't think this would improve clarity.
The same answer applies to "normal" recursion where you are stepping over data structures.