I have a quick Ada question. If I have a procedure where I may write out to a variable, or I might leave it alone, should it be an Out parameter or an In Out parameter? I guess this boils down to the question:
What does the caller see if it calls a procedure with a parameter as Out but the procedure doesn't touch the parameter. Does it see the same value? Undefined behavior?
The compiler doesn't complain because it sees an assignment to the Out variable...it just happens to be in a conditional, where it may not be reached, and the compiler doesn't bother to check all paths.
I suspect the safe bet is marking the parameter as In Out, but I'd like to know if this is necessary or just stylistically preferable.
In Ada, when a procedure with an out parameter does not assign anything to that parameter, the value passed back to the caller is undefined. In other words, whatever was in that variable in the caller can come back overwritten with garbage when the procedure returns.
The best practice in Ada is to initialise every out parameter with a suitable default value at the start of the procedure. That way, every code path out of the procedure returns valid data to the caller.
If you have something in the caller that might be changed by a procedure, you must use an in out parameter.
From the Ada 95 RM 6.4.1 (15):
For any other type, the formal parameter is uninitialized. If composite, a view conversion of the actual parameter to the nominal subtype of the formal is evaluated (which might raise Constraint_Error), and the actual subtype of the formal is that of the view conversion. If elementary, the actual subtype of the formal is given by its nominal subtype.
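As a sketch of that best practice (the procedure and names here are invented for illustration, not taken from the question):

procedure Maybe_Update (Value : out Integer; Condition : in Boolean) is
begin
   Value := 0;  --  default assignment, so every path returns a defined value
   if Condition then
      Value := 42;
   end if;
end Maybe_Update;

If the caller's existing value must survive when the procedure decides not to change it, the parameter has to be in out instead:

procedure Maybe_Update (Value : in out Integer; Condition : in Boolean) is
begin
   if Condition then
      Value := 42;
   end if;
end Maybe_Update;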
I know the following from the Julia documentation:
Julia function arguments follow a convention sometimes called "pass-by-sharing", which means that values are not copied when they are passed to functions. Function arguments themselves act as new variable bindings (new locations that can refer to values), but the values they refer to are identical to the passed values. Modifications to mutable values (such as Arrays) made within a function will be visible to the caller. This is the same behavior found in Scheme, most Lisps, Python, Ruby and Perl, among other dynamic languages.
Given this, it's clear to me that to pass by reference, all you need to do is have a mutable type that you pass into a function and edit.
My question then becomes: how can I clearly distinguish between pass by value and pass by reference? Does anyone have an example that shows a function being called twice, once with pass-by-reference behavior and once with pass-by-value behavior?
I saw this post which alludes to some similar ideas, but it did not fully answer my question.
In Julia, functions always have pass-by-sharing argument-passing behavior:
https://docs.julialang.org/en/v1/manual/functions/
This argument-passing convention is also used in most general purpose dynamic programming languages, including various Lisps, Python, Perl and Ruby. A good and useful description can be found here:
https://en.wikipedia.org/wiki/Evaluation_strategy#Call_by_sharing
In short, pass-by-sharing works like pass-by-reference, except that you cannot change which value a binding in the calling scope refers to by reassigning to an argument in the function being called; if you reassign an argument, the binding in the caller is unchanged. This means that in general you cannot use functions to change bindings, for example to swap two variables. (Macros can, however, modify bindings in the caller.) In particular, if a variable in the caller refers to an immutable value like an integer or a floating-point number, its value cannot be changed by a function call: which object the variable refers to cannot be changed by a function call, and the value itself cannot be modified since it is immutable.
If you want something like R or Matlab pass-by-value behavior, you need to explicitly create a copy of the argument before modifying it. This is precisely what R and Matlab do when an argument is passed in, modified, and an external reference to the argument remains. In Julia it must be done explicitly by the programmer rather than automatically by the system. A downside is that the system can sometimes know that no copy is required (no external references remain) when the programmer cannot generally know this. That ability, however, is deeply tied to the reference-counting garbage collection technique, which Julia does not use for performance reasons.
By convention, functions that mutate the contents of an argument have a ! suffix (e.g., sort vs. sort!).
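A small sketch contrasting the behaviours (the function names here are made up for illustration):

function mutate!(v)      # mutation of a mutable argument is visible to the caller
    v[1] = 99
end

function rebind(v)       # reassigning the argument only rebinds the local name
    v = [7, 8, 9]
end

function modify_copy(v)  # explicit copy gives pass-by-value-like behaviour
    w = copy(v)
    w[1] = 99
    return w
end

a = [1, 2, 3]
mutate!(a)
println(a)          # [99, 2, 3]  -- the caller sees the mutation

a = [1, 2, 3]
rebind(a)
println(a)          # [1, 2, 3]   -- the caller's binding is unchanged

a = [1, 2, 3]
b = modify_copy(a)
println(a)          # [1, 2, 3]   -- the original is untouched
println(b)          # [99, 2, 3]  -- only the copy was modified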
So we know Ada supports default parameters like so
procedure Example(param1 : Integer := 1);
But my question is, where is the default parameter evaluated? In all languages I'm familiar with, the default parameter is simply inserted into the calling code, which requires downstream recompilation if the default parameter is changed. Does Ada use this same approach?
I tried searching the ARM 2012 but couldn't find "default parameter" anywhere in the document. So I checked 6.4 and 6.4.1, where it seems the ARM calls the relevant part "default expressions". However, "default expressions" links to 3.7 Discriminants. This might be intended to reduce the number of times something is defined, but if the term is common to two concepts they should do what programmers do and define it separately; the jump is confusing and looks like an error.
Note 59 reads:
The default_expression for a discriminant of a type is evaluated when an object of an unconstrained subtype of the type is created.
Well, that doesn't make any sense with regard to subroutine calls.
So again, when is a "default expression" for a subroutine actually evaluated?
You've been looking in the right place, but you must have missed the important part in RM 6.4 10/2:
For the execution of a subprogram call, the name or prefix of the call is evaluated, and each parameter_association is evaluated (see 6.4.1). If a default_expression is used, an implicit parameter_association is assumed for this rule. These evaluations are done in an arbitrary order.
I found it shortly after posting this question.
6.4.1 (6.25/3) reads:
For a call, any default_expression evaluated as part of the call is considered part of the call.
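So the default is evaluated afresh as part of each call that omits the parameter. A small sketch of what that means in practice (Demo_Defaults, Next_Id and Log are invented names for illustration):

with Ada.Text_IO;

procedure Demo_Defaults is
   Counter : Integer := 0;

   --  A default expression may be any expression of the right type,
   --  including a function call with side effects.
   function Next_Id return Integer is
   begin
      Counter := Counter + 1;
      return Counter;
   end Next_Id;

   procedure Log (Id : Integer := Next_Id) is
   begin
      Ada.Text_IO.Put_Line (Integer'Image (Id));
   end Log;
begin
   Log;        --  default evaluated as part of this call: prints  1
   Log;        --  evaluated again for this call: prints  2
   Log (42);   --  default not used, Next_Id not called: prints  42
end Demo_Defaults;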
Is there a better way to get the reflect.Type of an interface in Go than reflect.TypeOf((*someInterface)(nil)).Elem()?
It works, but it makes me cringe every time I scroll past it.
Unfortunately, there is not. While it might look ugly, it is indeed expressing the minimal amount of information needed to get the reflect.Type that you require. These are usually included at the top of the file in a var() block with all such necessary types so that they are computed at program init and don't incur the TypeOf lookup penalty every time a function needs the value.
This idiom is used throughout the standard library, for instance:
html/template/content.go: errorType = reflect.TypeOf((*error)(nil)).Elem()
The reason for this verbose construction stems from the fact that reflect.TypeOf is part of a library and not a built-in, and thus must actually take a value.
In some languages, the name of a type is an identifier that can be used as an expression. This is not the case in Go. The valid expressions can be found in the spec. If the name of a type were also usable as a reflect.Type, it would introduce an ambiguity for method expressions because reflect.Type has its own methods (in fact, it's an interface). It would also couple the language spec with the standard library, which reduces the flexibility of both.
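For reference, a minimal sketch of the var-block idiom described above (the variable names are arbitrary):

package main

import (
    "fmt"
    "io"
    "os"
    "reflect"
)

// Interface types computed once at package init, so each use avoids a
// repeated reflect.TypeOf lookup.
var (
    errorType  = reflect.TypeOf((*error)(nil)).Elem()
    readerType = reflect.TypeOf((*io.Reader)(nil)).Elem()
)

func main() {
    fmt.Println(errorType)  // error
    fmt.Println(readerType) // io.Reader

    // One common use: checking whether a type implements an interface.
    fileType := reflect.TypeOf((*os.File)(nil))  // reflect.Type for *os.File
    fmt.Println(fileType.Implements(readerType)) // true
}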
There are three 'normal' modes of passing parameters in Ada: in, out, and in out.
But then there's a fourth mode, access… is there any situation in which it's actually required?
(i.e. something that would otherwise be impossible.)
Now, I do know that the GNAT JVM Ada-compiler makes pretty heavy use of them in the imported [library] specifications. (Also, they could arguably be seen as essential for C/C++ translations.)
One of the primary drivers of the access mode was to work around the restriction that, prior to Ada 2012, function parameters could only be of mode 'in'.
So while there may still be areas where access parameters are an appropriate solution, perhaps in bindings, Ada 2012's relaxation of the allowed function parameter modes to include 'in out' will probably significantly reduce the need for them.
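A sketch of that pre-2012 workaround, contrasted with the Ada 2012 form (Next and Counter are invented names; only one of the two declarations would exist in real code):

--  Before Ada 2012: functions could not have 'in out' parameters,
--  so side effects were smuggled in through an access parameter.
function Next (Counter : access Integer) return Integer is
begin
   Counter.all := Counter.all + 1;
   return Counter.all;
end Next;

--  Ada 2012: 'in out' is allowed on functions, so the access mode is
--  no longer needed for this.
function Next (Counter : in out Integer) return Integer is
begin
   Counter := Counter + 1;
   return Counter;
end Next;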
Regardless of what other uses there are for them, I rather like using them when coding bindings to C APIs that take pointers (if and only if 0 is not a valid value for that parameter on the C side).
This way on the Ada side I can deal with a nice object rather than a messy error-prone pointer.
Of course you can just specify in the bindings that the parameter is passed by reference, which gets you the same thing.
In my latest project, the only time I've needed access so far is when defining my own stream subprograms (Read, Write, X'Class'Output, etc.). These subprograms require not null access Ada.Streams.Root_Stream_Type'Class as a parameter.
For example:
with Ada.Streams;

package Example is
   type Printable_Type is private;

   procedure Print_Printable
     (Stream : not null access Ada.Streams.Root_Stream_Type'Class;
      Print  : in Printable_Type);
   for Printable_Type'Write use Print_Printable;
private
   type Printable_Type is new Integer;  --  completion of the private type
end Example;
In Chapter 3, there is an example called "MySecond.hs". What I really don't understand is code like this:
safeSecond :: [a] -> Maybe a
It is always on the first line of the file, and deleting it causes no trouble. Could anyone enlighten me as to what it means? I'm a newbie to functional programming languages.
It is a type annotation. If you don't write it, Haskell will infer it.
In this case safeSecond is the name being annotated. The :: separates the name from the type. The function takes a list of type a (a is a type variable, so the function works on a list of any element type), the -> is the function arrow separating the argument type from the result type, and Maybe a is the return type.
Note that a represents a single type, so if you pass in a list of Int you must get a Maybe Int out. That is to say, all the a's in the type must agree.
Maybe is just a type with two alternatives: Just a or Nothing.
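For concreteness, a definition that fits this signature could look like the following (a sketch; the book's actual MySecond.hs may differ):

safeSecond :: [a] -> Maybe a
safeSecond (_:x:_) = Just x    -- at least two elements: return the second
safeSecond _       = Nothing   -- fewer than two elements: nothing to return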
It's the type signature of the function. It's meant to show what the inputs and outputs of the function are supposed/expected to be. For most Haskell code the compiler can infer it if you don't specify it, but it is highly recommended to always specify it.
Aside from helping you remember what the function should actually do, it's also a nice way for others to get an idea about what the function does.
Besides that, it's also useful for debugging, for instance when the type of the function isn't what you expected it to be. If you have a type signature for the function, you get an error at its definition site, whereas if you don't, you get one at the call site. See Type Signatures and Why use type signatures.
Also since you're reading RWH, Chapter 2 covers this.
This is a type annotation; it acts like a function declaration in C.
In Haskell, type declaration is usually not strictly necessary, as Haskell can usually infer a good type from correct code. However, it is generally a good idea to declare types for important values, because:
If your code is not correct, you tend to get more useful error messages that way (otherwise the compiler can get confused trying to infer your types, and the resulting failure message may not be clearly related to the actual error). If you are getting obscure or verbose error messages, adding a type annotation may improve them.
Especially as a beginner, declaring important types can make you less confused about what you're doing -- it forces you to clarify your thinking as you write the program.
As others have mentioned, a type annotation acts as active documentation, making other people less confused about your code. As usual, "other people" may be you, a few months down the road.