Ada: accessibility levels and the lifetime of local procedure variables

I don't understand accessibility levels. I've been watching tutorial videos from AdaCore, in particular this one:
https://youtu.be/nfBwXxAf7UE?t=1418
The example is as follows:
type An_Access is access all Integer;

procedure Proc is
   W : aliased Integer;
   X : An_Access := W'Access;  -- illegal: W is more deeply nested than An_Access
begin
   null;
end Proc;
The tutorial says "X is of accessibility level 0, W is of accessibility level 1. Allowing this declaration would permit reference to the object outside of the scope."
Why should the point at which the access type is declared affect access variables in different scopes? Both W and X are local to Proc; they go out of scope and become unusable once Proc finishes. So why does it matter where the type of X is declared?

The primary reason is simplicity. The existing rule is easy to understand (you clearly understand it) and easy to implement. The data-flow analysis required (to distinguish between acceptable and unacceptable uses in general) is complex and not normally necessary for a compiler, so it was thought a bad idea to require it of compilers.
Another consideration is Ada's compilation rules. If Proc passes X to another subprogram declared in another package, the data-flow analysis would require the body of that subprogram, but Ada requires that it be possible to compile Proc without the body of the other package.
Finally, the only time* you'll ever need access-to-object types is when you need to declare a large object that won't fit on the stack, and in that case you won't need access all or 'Access, so you won't have to deal with this (see the sketch after the footnote).
*True as a 1st-order approximation (probably true at 2nd- and 3rd-order, too)
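To make that concrete, here's a minimal sketch (names invented) of the large-object case: the object is created by an allocator on the heap, so the access type needs neither all nor 'Access:

procedure Big_Object_Demo is
   type Big_Array is array (1 .. 10_000_000) of Float;
   type Big_Array_Access is access Big_Array;  -- pool-specific: no "all", no 'Access needed
   Data : constant Big_Array_Access := new Big_Array'(others => 0.0);
begin
   Data (1) := 1.0;  -- the big object lives on the heap, not the stack
end Big_Object_Demo;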

In Ada, when you try to reason about accessibility, you have to do it in terms of access types rather than variables. There is no lifetime analysis of individual variables (contrary to what Rust does, I think). So, what's the worst that could happen? If your pointer type's level is shallower (more global) than the target variable's level, accessibility checks will fail, because the pointer might outlive the target.
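To illustrate the rule (a minimal sketch): moving the access type declaration inside Proc puts it at the same level as W, and the 'Access becomes legal, because a pointer of that type can no longer outlive its target:

procedure Proc is
   type Local_Access is access all Integer;  -- same accessibility level as W
   W : aliased Integer := 0;
   X : Local_Access := W'Access;  -- legal: Local_Access cannot outlive W
begin
   null;
end Proc;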
I'm not sure what goes on with anonymous access types, but from what I pick up here and there, that's a whole different mess. Some people recommend not using them at all for variables.


In Ada, it seems to be the general practice to declare specific subtypes, but why?

In this question, I asked how to define an unlimited upper bound for a range (turns out that the answer was fairly obvious, but not to someone new to Ada). In the answer, it was suggested to create a Specific Subtype for this.
A specific subtype of the sort referred to in the question would look like this:
subtype Speed is Float range 0.0 .. Float'Last;
Additionally, I've noticed that a good portion of the code in this Ada project has specific types - like Feet_Float and Meters_Float and the like. Why is this the preferred practice, as opposed to just putting a range constraint on a basic float member variable in the class/package?
Ada doesn't prefer subtypes - Ada programmers do.
New types and subtypes (they are different, and both have their uses) help catch so many errors at so little cost or time penalty it's a mystery why good type systems fell so far out of fashion.
For example, recognise that the index of any array belongs to a subtype (possibly anonymous, but accessible as myArray'Range, as in for i in myArray'Range loop ... end loop; or subtype myIndextype is Integer range myArray'Range; theIndex : myIndextype;), and you'll see that every buffer overflow vulnerability - or attack - ever written was simply a type error - or could have been, in Ada.
When a bug does get past the compiler, the first time your executable falls over with an Exception : Constraint_Error pointing spookily close to the mistake, you'll start to get a sense of the value of range-constrained types.
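A minimal sketch of both points (names invented): indexing through the array's own range cannot overflow, and stepping outside a constrained subtype fails loudly at the point of the mistake:

with Ada.Text_IO;

procedure Range_Demo is
   subtype Percent is Integer range 0 .. 100;
   Scores : array (1 .. 5) of Percent := (others => 0);
   P      : Percent := 100;
begin
   for I in Scores'Range loop    -- cannot index outside the array
      Scores (I) := 50;
   end loop;
   P := P + 1;                   -- raises Constraint_Error: 101 is not a Percent
   Ada.Text_IO.Put_Line (Integer'Image (P));
end Range_Demo;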
To expand on this a bit, I'll refer to a couple more Q&As.
First, note that the compiler you're probably using, GNAT, may not be strictly Ada compliant unless you add a couple of optional flags on the command line (or in the project file) as described in the first example. Recent versions have turned some of them on by default.
Here's an example of a subtype being declared, used, and going out of visible scope (in a declare block), where the range of the subtype is unknown until runtime. Unlike in many dynamically typed languages, this is both fast and safe, because the relevant storage is usually on the stack, if you're interested in the implementation details.
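The pattern looks something like this (a minimal sketch, names invented):

procedure Declare_Block_Demo (N : Positive) is
begin
   declare
      subtype Window is Integer range 1 .. N;   -- bounds fixed on block entry, at run time
      Buffer : array (Window) of Character := (others => ' ');
   begin
      Buffer (Window'First) := 'x';
   end;  -- Window and Buffer go out of scope; their stack storage is reclaimed
end Declare_Block_Demo;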
And an example of how not to use a declare block.
Here's an extreme example of not only declaring subtypes but telling the compiler how to pack them in storage; a miniature sketch of the idea follows. This is common in embedded programming, either where space is tight (I have a complete digital watch in a processor with 1 kbyte of memory!) or for accessing specific bits in hardware registers. (Note that this example would be cleaner if updated to use Ada 2012 aspects.)
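The idea in miniature (register layout invented for illustration, using the pre-2012 representation-clause style mentioned above):

package Register_Map is
   type Status_Register is record
      Ready : Boolean;
      Error : Boolean;
      Count : Natural range 0 .. 63;
   end record;

   --  Map each component onto specific bits of a single byte.
   for Status_Register use record
      Ready at 0 range 0 .. 0;
      Error at 0 range 1 .. 1;
      Count at 0 range 2 .. 7;
   end record;
   for Status_Register'Size use 8;
end Register_Map;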
And this Q&A briefly covers the difference between new types and subtypes, for someone coming from Java; the distinction is sketched below. (I'm a little disappointed none of the Java experts managed an answer, describing how they would handle the same issues, before it was closed.)
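In short (a minimal sketch, names invented): a subtype is the same type with a constraint, so its values mix freely with the base type, while a new type is incompatible without an explicit conversion:

procedure Types_Vs_Subtypes is
   type Celsius is new Float;                        -- a new, incompatible type
   subtype Percentage is Float range 0.0 .. 100.0;   -- still Float, just constrained

   T : Celsius    := 20.0;
   P : Percentage := 50.0;
   F : Float;
begin
   F := P;           -- legal: a Percentage is a Float
   F := Float (T);   -- legal only with an explicit conversion
   --  F := T;       -- illegal: Celsius is not Float
end Types_Vs_Subtypes;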
The declaration of specific types has the following benefits:
Specific types prevent inappropriate mixing of abstractions. For instance, dividing the distance between the Moon and Earth in meters by the gross national product of Belgium expressed in Euros.
The name of the type more clearly documents the intended use of the type.
Use of ranged types clearly documents the valid values for instances of the type.
The paradigm in Ada is to model the problem in the solution. One aspect of this is to model the range, accuracy, and precision of values in the problem space with appropriate scalar type and subtype definitions.
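For instance (a minimal sketch; types and values invented), the Moon-distance-by-GNP mistake from the first point above simply does not compile when the two quantities have distinct types:

procedure Units_Demo is
   type Meters is new Float;
   type Euros  is new Float;

   Moon_Distance : constant Meters := 384_400_000.0;
   Belgian_GNP   : constant Euros  := 5.0E11;  -- value invented for illustration
   X : Float;
begin
   --  X := Float (Moon_Distance / Belgian_GNP);  -- illegal: Meters and Euros don't mix
   X := Float (Moon_Distance) / Float (Belgian_GNP);  -- possible, but now visibly deliberate
end Units_Demo;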
Why should one do this? McCormick, in analyzing why students with C experience and no Ada experience were able to complete his real-time software course's project in Ada but not in C, found that the most important features of Ada were:
Modeling of scalar objects.
Strong typing.
Range constraints.
Enumeration types.
McCormick's paper
McCormick's site

Is the "define" primitive of Scheme an imperative languages feature? Why or why not?

(define hypot
  (lambda (a b)
    (sqrt (+ (* a a) (* b b)))))
This is written in the Scheme programming language.
define creates a variable and a global binding;
lambda creates a procedure.
I would like to know whether define would be considered an imperative language feature. As far as I know, static scoping is an imperative feature. I think define is one, since it creates a global binding, and static scoping looks at global bindings for variable definitions, whereas dynamic scoping looks at the most recently activated binding.
Please help me find the correct answer, and explain why or why not!
In a Scheme program, a (define var expr) form is both a declaration and an initialization. Declarations introduce a new name into a scope; declarations and initializations are present in both imperative and declarative languages.
However, if the same variable is defined twice, define behaves as an assignment, which belongs to the imperative paradigm.
You've put your finger on a subtle and contentious issue. There have long been two informal camps on how define should work, which I would label (very imperfectly, and very controversially!) as the static vs. dynamic camps.
The static camp sees define as a non-side-effecting top-level declaration—it's a syntax that simply defines a name in a top-level scope, just like let is a syntax that defines a name in a local scope. A bit more precisely, this camp tends to see the top-level environment as equivalent to a big letrec with all the defines as the bindings, and all "loose" top-level expressions as the body. This is, incidentally, similar to the way that simple compilers work—read the whole program from one or more files, figure out all of the top-level bindings and generate code with knowledge of the whole program's source text.
The dynamic camp, on the other hand, tends to conceive of the top-level environment as a mutable data structure to which bindings can be added at runtime, and define is then an operation that modifies the top-level environment. This is, incidentally, similar to how simple interactive interpreters work—read definitions interactively from input, one at a time, and incorporate them into the environment as the user provides them.
To give one example, the SLIB library is one that I recall has been criticized for being much too firmly in the "dynamic" camp. If you read Section 1.1 on "features", you see this right from the beginning:
SLIB maintains a list of features supported by a Scheme session. The set of features provided by a session may change during that session.
The documentation for the require form that you use in SLIB to "load" modules continues with this:
Procedure: require feature
If (provided? feature) is true, then require just returns.
Otherwise, if feature is found in the catalog, then the corresponding files will be loaded and (provided? feature) will henceforth return #t. That feature is thereafter provided.
Otherwise (feature not found in the catalog), an error is signaled.
If you read this carefully, you will be struck that it's framing the whole thing as modules being "loaded" at runtime—and not as compile-time linking, which is foreign to the design.
So a "session" is a set of bindings whose keys—not just their values—changes during the runtime of the program. Programs are able to mutate the session with provide and require. They are able to directly observe the mutation with provided?. And it is implied that they can indirectly observe the set of identifiers bound in top-level environment change as a result of require—a call to require causes procedure invocations that would result in a runtime error before its invocation to no longer be so afterwards.
So we can't help but conclude that going by the philosophy of the people who designed this library, define is imperative. But not every Scheme user or implementer shares this philosophy.
First off, Scheme is lexically scoped. define is usually not limited to top-level bindings (just as in Racket); it can create bindings within other procedure bodies.
In some implementations define can manipulate state, but only for top-level definitions. Otherwise it acts like let and binds a variable in the local scope. Actually taking advantage of top-level rebinding programmatically is difficult.
So define doesn't introduce an imperative style into Scheme code. Compare define with set! and its relatives, which modify a variable in whatever environment it is bound, thereby allowing an imperative style in Scheme code.

Localizing global variables

When using the Extended Program Check, I get the following warning:
Do not declare fields and field symbols (variable name) globally.
This comes from declaring global data before the selection screen. The obvious solution is to declare the data locally in a subroutine.
If I do that, though, the data will be out of scope for the other subroutines, so I would end up creating something to the effect of a main() function from C or Java. This sounds like a good idea; however, events such as INITIALIZATION cannot be placed inside subroutines, which forces a break in scope.
Observe the sample program below:
REPORT Z_EXAMPLE.

SELECTION-SCREEN BEGIN OF BLOCK upload WITH FRAME TITLE text-H01.
PARAMETERS: p_infile TYPE rlgrap-filename LOWER CASE OBLIGATORY.
SELECTION-SCREEN END OF BLOCK upload.

AT SELECTION-SCREEN ON VALUE-REQUEST FOR p_infile.
  PERFORM main1 CHANGING p_infile.

INITIALIZATION.
  PERFORM main2.

TOP-OF-PAGE.
  PERFORM main3.
...
main1, main2, and main3 cannot, to my knowledge, pass any data to one another without global declarations. If the data is parsed from the uploaded file p_infile in main1, it cannot be accessed in main2 or main3. Aside from omitting events altogether, is there any way to abide by the warning but still let data be passed across events?
There are a variety of techniques - I prefer to code almost everything except for the basic selection screen handling in a separate controller class. The report simply defers to that class and calls its methods. Other than that - it's just a warning that you can ignore if you know what you're doing. Writing a program without any global variable at all will certainly not be practical - however, you should think at least twice before using global variables or attributes in a place where a method parameter would be more appropriate.
As @vwegert so rightly said, it's almost impossible to write an ABAP program that doesn't have at least a few global variables (the selection screen and events enforce that, unfortunately).
One approach is to use a controller class, another is to have a main subroutine and have it call other subroutines as required, passing values as required. I tend to favour the latter approach in a lot of cases, if only because it's easier to split the subroutines into logical groupings in separate includes (doing so with classes can sometimes be a little ugly). It really is a matter of approach though, but the key thing is reducing global variables to a minimum - unfortunately too few ABAP developers that I've encountered care about such issues.
Update
@Christian has reminded me that as of ABAP AS 7.02, subroutines are considered obsolete:
Subroutines should no longer be created in new programs for the following reasons:
The parameter interface has clear weaknesses when compared with the parameter interface of methods, such as:
positional parameters instead of keyword parameters
no genuine input parameters in pass by reference
typing is optional
no optional parameters
Every subroutine implicitly belongs to the public interface of its program. Generally this is not desirable.
Calling subroutines externally is critical with regard to the assignment of the container program to a program group in the internal session. This assignment cannot generally be defined as static.
Those are all valid points and I think in light of that, using classes for modularisation is definitely the preferred approach (and from a purely aesthetic point of view, they also "fit" better with the syntax enhancements in 7.02 and later).

Ada: pragma Pure / Remote_Types and system types

I'm writing an Ada application that needs to be distributed, and I'm trying to use the DSA to do it, but I'm finding big limitations in what is "allowed" to be "withed" and what isn't.
I won't post source code, since it's quite complex and this is a generic question anyway. I just want some pointers on what I'm not understanding correctly, so please bear with me and correct me if I'm wrong.
So my problem is this: I want to mark a procedure with pragma Remote_Call_Interface so it can be called remotely. However, as soon as I add the pragma, compilation breaks because the unit withs other packages in my project that are not categorized as either Pure or Remote_Types.
So I try to mark the packages I need as either Pure or Remote_Types (depending on whether they have state or not), but this in turn breaks compilation even further, since it turns out that you can't use even basic library types in a Pure/Remote_Types package: you can't use Vectors, you can't use Unbounded_Strings, you can't use Maps, etc. The whole program falls to pieces, since I can't use the data structures I used to build it anymore!
Is there a way around this? Or if I want to distribute my application, must I strictly limit myself to the most basic types like Integers and Booleans and little else? I don't understand whether I'm hitting a limitation of the language or just doing it incorrectly. (Unfortunately, the tutorials I found on the DSA are all very vague; if anyone has some good ones, feel free to link them!)
EDIT: after ajb's answer, let me specify what is bothering me in particular: in the package I want to mark with pragma Remote_Call_Interface, I'm trying to with some packages that are not Pure/Remote_Types, but it only uses the types from those packages locally; it does not contain any procedures that accept such types as parameters, nor functions that return such types. This is what bothers me: since those types never have to travel over the network, why can't I with them? I'm only using them locally. That is why I was trying to make those types Pure/Remote_Types, but now that I've read ajb's explanation (i.e. Remote_Types exists so that objects of those types can travel over the network), I'm even more confused about why I can't use them when I only use them locally.
I'm not an expert on Ada distributed programming, but here's what I do know (or think I know):
The Annotated Ada Reference Manual, Section E.2.3 says, "The restrictions governing a remote call interface library unit are intended to ensure that the values of the actual parameters in a remote call can be meaningfully sent between two active partitions." For example, if a record type has a field that's an access type, you can't send it from one partition to another blindly, because the called partition won't be able to access the memory that the pointer points to. (Unbounded_String, Map, and Vector are implemented using access types as part of the internals.) All types used as parameters or return types must support "external streaming", meaning there has to be a way for the type to be converted to and from a stream of bytes so that the parameter value can be transmitted over a socket. If you have a record with an access type, but you provide 'Read and 'Write attributes so that the type can be written to and read from a byte stream without any actual pointers being transmitted, then you can put your record type in a Remote_Types package.
I'm not sure exactly what your problem is: are there certain types you want to pass as a parameter to a remote call but can't; or are there types that you want to use only in the rest of your application, but are getting in the way?
If it's the second one, then I think the solution is to restructure your packages so that all the "remote types" are separate from the non-remote types.
However, if you're really looking to pass an Unbounded_String, Map, or Vector from one partition to another in a remote call, it's trickier. Unbounded_String really should support external streaming, and there was a proposal to make Unbounded_String a Remote_Types package (see AI05-0204), but it wasn't acted on--I don't know why. Map and Vector would be bigger problems, though, since they are generic packages that have to work on any type, including those that don't support external streaming. In any case, those types aren't set up to be automatically converted to or from bytes to be passed over a socket.
But I think you could make it work like this:
private with Ada.Strings.Unbounded;

package Remote_Types_Package is
   pragma Remote_Types;

   type My_Unbounded_String is private;

private

   type My_Unbounded_String is record
      S : Ada.Strings.Unbounded.Unbounded_String;
   end record;

end Remote_Types_Package;
The Unbounded_String package must be withed with private with; see E.2.2(6). You'll need to provide a function to create a My_Unbounded_String, and you'll need to provide stream read and write routines for My_Unbounded_String and define 'Read and 'Write for the type. You should be able to write the Read and Write attributes in terms of the Read and Write attributes of the underlying Unbounded_String; a sketch follows. Something similar should be doable if you want to use a Vector as a remote call parameter, although you may have to do more work to marshal/unmarshal the type yourself.
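A sketch of that delegation (untested, like the rest of this answer, and assuming the implementation's stream attributes for Unbounded_String stream the string contents rather than internal pointers, as GNAT's do). The spec gains the declarations and attribute clauses; the body just forwards:

--  Added to the visible part of Remote_Types_Package
--  (whose spec now also needs: with Ada.Streams;):
procedure Read
  (Stream : not null access Ada.Streams.Root_Stream_Type'Class;
   Item   : out My_Unbounded_String);
procedure Write
  (Stream : not null access Ada.Streams.Root_Stream_Type'Class;
   Item   : in My_Unbounded_String);
for My_Unbounded_String'Read use Read;
for My_Unbounded_String'Write use Write;

package body Remote_Types_Package is

   procedure Read
     (Stream : not null access Ada.Streams.Root_Stream_Type'Class;
      Item   : out My_Unbounded_String) is
   begin
      --  Forward to the stream attribute of the wrapped Unbounded_String.
      Ada.Strings.Unbounded.Unbounded_String'Read (Stream, Item.S);
   end Read;

   procedure Write
     (Stream : not null access Ada.Streams.Root_Stream_Type'Class;
      Item   : in My_Unbounded_String) is
   begin
      Ada.Strings.Unbounded.Unbounded_String'Write (Stream, Item.S);
   end Write;

end Remote_Types_Package;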
Once again, I have not tried this, and it's possible there are some hitches in this solution.
EDIT: Since it now looks like the question is the simpler one--i.e. you have some types that are not going to be passed between partitions getting in the way--the solution should be simpler. Any types that you define that are going to be communicated between partitions need to be in a Remote_Types package, say P1. Other types should be in a different package, say P2 (or multiple packages). If types in P1 depend on types in P2, you can still get this to work by having P1 say private with P2;, and making sure you have the marshalling and unmarshalling procedures you need. If you run into difficulties, I'd encourage you to ask a new question here.
I don't know why the language required all such types to be quarantined in a Remote_Types package, instead of just saying that any type used in a Remote_Call_Interface package has to have only parts that can be streamed. There may have been some implementation issues. Any code that exists for a Remote_Types package has to be in programs in both partitions, perhaps, and this may have been an attempt to limit the type of code that would have to be linked into multiple partitions. But I'm just guessing.

What is the usefulness of the `access` parameter mode?

There are three 'normal' modes of passing parameters in Ada: in, out, and in out.
But then there's a fourth mode, access… is there anything for which it's actually required?
(i.e. something that would otherwise be impossible.)
Now, I do know that the GNAT JVM Ada compiler makes pretty heavy use of them in the imported [library] specifications. (Also, they could arguably be seen as essential for C/C++ translations.)
One of the primary drivers of the access mode was to work around the restriction that, prior to Ada 2012, function parameters could only be of mode 'in'.
So while there may still be areas where it's an appropriate solution, perhaps in bindings, Ada 2012's relaxation of the allowed function parameter modes to include 'in out' will probably significantly reduce the need for the access mode.
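To illustrate that pre-2012 workaround (a minimal sketch, names invented): a function that needs to modify its argument couldn't take it as 'in out', so an access parameter was used instead:

with Ada.Text_IO;

procedure Access_Mode_Demo is
   type Counter is record
      Count : Natural := 0;
   end record;

   --  Before Ada 2012 a function parameter could not be 'in out',
   --  so modification was smuggled in through an access parameter.
   function Next (C : access Counter) return Natural is
   begin
      C.Count := C.Count + 1;
      return C.Count;
   end Next;

   My_Counter : aliased Counter;  -- must be aliased to take 'Access
begin
   Ada.Text_IO.Put_Line (Natural'Image (Next (My_Counter'Access)));
end Access_Mode_Demo;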
Regardless of what other uses there are for them, I rather like using them when coding bindings to C APIs that take pointers (if and only if 0 is not a valid value for that parameter on the C side).
This way on the Ada side I can deal with a nice object rather than a messy error-prone pointer.
Of course you can just specify in the bindings that the parameter is passed by reference, which gets you the same thing.
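A thin-binding sketch of that idea (the C function and all names here are invented for illustration):

with Interfaces.C;

package C_Binding_Sketch is
   type Handle is limited private;

   --  Binds to a hypothetical C function: int get_value(struct handle *h);
   --  On the Ada side the caller passes Some_Handle'Access, never a raw pointer.
   function Get_Value (H : access Handle) return Interfaces.C.int;
   pragma Import (C, Get_Value, "get_value");

private
   type Handle is limited record
      Raw : Interfaces.C.int := 0;
   end record;
end C_Binding_Sketch;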
In my latest project, the only time I've needed access so far has been when defining my own stream subprograms (Read, Write, X'Class'Output, etc.). These subprograms require not null access Ada.Streams.Root_Stream_Type'Class as a parameter.
For example:
with Ada.Streams;

package Example is
   type Printable_Type is private;

   procedure Print_Printable
     (Stream : not null access Ada.Streams.Root_Stream_Type'Class;
      Print  : in Printable_Type);

   for Printable_Type'Write use Print_Printable;

private
   type Printable_Type is null record;  -- full view omitted in the original; placeholder
end Example;
