I have some difficulty understanding procedural representation (Section 2.2.3) in Chapter 2 of Essentials of Programming Languages (Friedman and Wand).
First of all, why is "having exactly one observer" important for procedural representation?
Secondly, the author gives a recipe for deriving a procedural representation. Does this recipe mean that we can derive the representation fully mechanically, without any thought?
I tried to follow the recipe to solve Exercise 2.12 (implementing a stack datatype using procedural representation), but I found it very difficult to reach the solution by relying on the recipe alone.
Even after I arrive at a procedural representation of a stack, I have no clue how to follow the same approach to implement procedural representations for other datatypes. I can't even tell whether a procedural representation is possible for a given datatype before trying.
Suppose I have a subprogram written in the SPARK subset of Ada whose postcondition verifies some property (for example, that the returned array is sorted), and whose body just calls out to a function external to SPARK (for example, a C/C++ function that sorts arrays). Is there any way to force SPARK to assume, after this call, that the array will be sorted?
In short, GNATprove takes a divide-and-conquer approach when analyzing code. The following explanation is incomplete and in practice things are slightly more complicated, but for the sake of understanding, it gives a useful perspective on how things work.
For each assertion, loop invariant, and pre-/post-condition, GNATprove creates verification conditions (VCs) that must be proven. A VC is discharged using the assumptions in effect at that point together with the semantics of the code.
When a section of code is analyzed and that section starts just after a call to a subprogram, any post-condition of that subprogram is assumed to hold.
If that subprogram is itself implemented in SPARK, then GNATprove will try to prove that its post-conditions indeed hold by analyzing the subprogram. However, if the subprogram is not in SPARK (e.g., it is imported), then the post-conditions remain assumptions, and it is left to the developer to justify them by other means.
A nice example that illustrates the first point can be found in sections 1 and 2 of the recently published article The Work of Proof in SPARK (available here). Note in particular how repeated calls to the Increment function are analyzed by GNATprove.
So, if you want SPARK to assume particular post-conditions to hold for a subprogram that is not in SPARK (an imported function, for example), just provide the post-conditions.
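For example, here is a hedged sketch with invented names (the package Sorting, the type Int_Array, and the ghost function Is_Sorted are not from the question): an array-sorting routine whose body lives outside SPARK is given a contract in its Ada declaration, and GNATprove then assumes the post-condition at every call site, while justifying it is left to the developer.

package Sorting
  with SPARK_Mode
is
   type Int_Array is array (Positive range <>) of Integer;

   --  Ghost function used only in contracts.
   function Is_Sorted (A : Int_Array) return Boolean is
     (for all I in A'First .. A'Last - 1 => A (I) <= A (I + 1))
   with Ghost;

   --  The body is supplied outside SPARK (imported), so GNATprove does not
   --  analyze it; the post-condition below is simply assumed at call sites.
   procedure Sort (A : in out Int_Array)
     with Import, Global => null, Post => Is_Sorted (A);

end Sorting;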
I've been working with SQL, though I don't have much database knowledge. For the last week I've been learning PL/SQL, and I've covered a few basics such as how to write code, variables, and the different blocks. My main confusion is: where should I use PL/SQL, and where is PL/SQL typically used in industry?
I'll be grateful to anyone who can give me a proper response.
I am assuming you have some understanding of when you use SQL, and you are asking where PL/SQL comes into the equation.
SQL is a declarative syntax: you code what you want and let the database figure out how to fulfill it, as long as the "what" is a set.
PL/SQL is the procedural syntax: you code exactly how to fulfill the "what", and the "what" doesn't have to be a set; it can be anything, just as in any other procedural language (e.g., Python, Java, C#). The main difference between PL/SQL and those other procedural languages is that the database is the compiler and executor of the resulting compiled code. It's an intimate way to couple data-access code with your database.
I hope that helps.
In this question, I asked how to define an unlimited upper bound for a range (it turns out the answer was fairly obvious, but not to someone new to Ada). In the answer, it was suggested to create a specific subtype for this.
A specific subtype of the sort referred to in the question would look like this:
subtype Speed is Float range 0.0 .. Float'Last;
Additionally, I've noticed that a good portion of the code in this Ada project declares specific types, like Feet_Float and Meters_Float. Why is this the preferred practice, as opposed to just putting a range constraint on a basic Float variable in the class/package?
Ada doesn't prefer subtypes - Ada programmers do.
New types and subtypes (they are different, and both have their uses) help catch so many errors at so little cost or time penalty that it's a mystery why good type systems fell so far out of fashion.
For example, recognise that the index of any array belongs to a subtype (possibly anonymous, but accessible as myArray'Range, as in for i in myArray'Range loop ... end loop; or named, as in subtype myIndexType is Integer range myArray'Range; theIndex : myIndexType;), and you'll see that every buffer overflow vulnerability - or attack - ever written was simply a type error, or could have been, in Ada.
When a bug does get past the compiler, the first time your executable falls over with a Constraint_Error pointing spookily close to the mistake, you'll start to get a sense of the value of range-constrained types.
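As a small sketch of that effect (an invented example; Percent is not from the original discussion), the range constraint turns a silent off-by-one into a Constraint_Error raised at the offending line:

procedure Range_Demo is
   subtype Percent is Integer range 0 .. 100;
   P : Percent := 0;
begin
   for I in 1 .. 150 loop
      P := P + 1;   --  raises Constraint_Error on the 101st iteration, right at the mistake
   end loop;
end Range_Demo;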
To expand on this a bit, I'll refer to a couple more Q&As.
First, note that the compiler you're probably using, GNAT, may not be strictly Ada-compliant unless you add a couple of optional flags on the command line (or in the project file), as described in the first example. Recent versions have turned some of them on by default.
Here's an example of a subtype being declared, used, and going out of visible scope (in a declare block), where the range of the subtype is unknown until run time. Unlike in many dynamically typed languages, this is both fast and safe because, if you're interested in the implementation details, the relevant storage is usually on the stack.
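Roughly, the idea looks like this (a minimal sketch, not the linked code; Block_Demo and Buffer are invented names): the subtype's bounds are only known at run time, yet the block-local array still lives on the stack.

with Ada.Text_IO; use Ada.Text_IO;

procedure Block_Demo is
   N : constant Natural := Natural'Value (Get_Line);   --  size chosen at run time
begin
   declare
      subtype Index is Positive range 1 .. N;
      Buffer : array (Index) of Character := (others => '*');
   begin
      Put_Line ("Buffer holds" & Integer'Image (Buffer'Length) & " characters");
   end;   --  Index and Buffer cease to exist here
end Block_Demo;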
And an example of how not to use a declare block.
Here's an extreme example of not only declaring subtypes but also telling the compiler how to pack them in storage. This is common in embedded programming, either where space is tight (I have a complete digital watch running in a processor with 1 kbyte of memory!) or for accessing specific bits in hardware registers. (Note that this example would be cleaner if updated to use Ada 2012 aspects.)
And this Q&A briefly covers the difference between new types and subtypes, for someone coming from Java. (I'm a little disappointed none of the Java experts managed an answer, describing how they would handle the same issues, before it was closed.) A rough sketch of that difference follows.
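Here is a minimal sketch of that distinction (not the code from the linked Q&A; Non_Negative_Float is an invented name, Feet_Float echoes the question above): a subtype stays compatible with its base type, while a new type does not.

procedure Type_Or_Subtype is
   subtype Non_Negative_Float is Float range 0.0 .. Float'Last;   --  a constrained view of Float
   type Feet_Float is new Float;                                  --  a brand-new, incompatible type

   F  : Float              := 10.0;
   NN : Non_Negative_Float := 5.0;
   Ft : Feet_Float         := 3.0;
begin
   NN := F;               --  legal: a Non_Negative_Float is still a Float (range checked at run time)
   --  Ft := F;           --  rejected at compile time: Feet_Float is not Float
   Ft := Feet_Float (F);  --  mixing requires an explicit, visible conversion
end Type_Or_Subtype;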
The declaration of specific types has the following benefits:
Specific types prevent inappropriate mixing of abstractions - for instance, accidentally dividing the distance between the Moon and the Earth in meters by the gross national product of Belgium expressed in Euros (see the sketch after this list).
The name of the type more clearly documents its intended use.
Use of ranged types clearly documents the valid values for instances of the type.
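As a rough illustration of the first point (an invented example; the names and figures are made up), declaring Meters and Euros as distinct types makes the meaningless division a compile-time error rather than a silent mistake:

procedure Units_Demo is
   type Meters is new Float;
   type Euros  is new Float;

   Distance_To_Moon : constant Meters := 384_400_000.0;
   Belgian_GNP      : constant Euros  := 5.0E+11;       --  made-up figure

   Ratio : Float;
begin
   --  Ratio := Float (Distance_To_Moon / Belgian_GNP);
   --  The line above does not compile: "/" is not defined between Meters
   --  and Euros, so the mixed-up abstraction cannot slip through unnoticed.
   Ratio := Float (Distance_To_Moon) / Float (Belgian_GNP);   --  only possible deliberately
end Units_Demo;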
The paradigm in Ada is to model the problem in the solution. One aspect of this is to model the range, accuracy, and precision of values in the problem space with appropriate scalar type and subtype definitions.
Why should one do this? McCormick, in analyzing why students with C experience and no Ada experience were able to complete his real-time software course's project in Ada but not in C, found that the most important features of Ada were the following (a brief sketch follows the list):
Modeling of scalar objects.
Strong typing.
Range constraints.
Enumeration types.
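As a small sketch of how several of these combine (an invented example with made-up names and timings, not McCormick's project code): an enumeration type models the problem's scalar objects directly, and a range-constrained fixed-point type pins down the legal values.

procedure Crossing_Demo is
   type Direction   is (North, East, South, West);        --  enumeration models the problem
   type Seconds     is delta 0.1 range 0.0 .. 120.0;      --  range-constrained fixed point
   type Green_Times is array (Direction) of Seconds;      --  safely indexed by Direction

   Phase : constant Green_Times := (North => 30.0, East => 20.0,
                                    South => 30.0, West => 20.0);
   Total : Seconds := 0.0;
begin
   for D in Direction loop
      Total := Total + Phase (D);   --  every Direction is covered; no index can be out of range
   end loop;
end Crossing_Demo;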
McCormick's paper
McCormick's site
Does anyone know where I can get the BNF or EBNF for the LOGO programming language?
A BNF grammar might not be too useful in certain circumstances...
Writing a LOGO that's accurately compatible with existing/historical implementations isn't an easy task (I worked on such a project). The problem is that the parser can't do the full job on its own; the evaluator (interpreter) has to work with partial data. Consider this example:
proc1 a b proc2 c
It could mean proc1(a, b, proc2(c)) or proc1(a, b, proc2(), c), depending on the number of parameters that proc1 and proc2 take.
Furthermore, the LOGO interpreters I know of (for example, Berkeley Logo) seem, from a cursory glance, not to use a traditional parser that knows each procedure and its arity; instead they run the procedures, and each procedure 'eats up' the number of parameters it needs. The parser is therefore fairly naive, most of the work falls to the interpreter, and parsing ends up being rather unusual.
There is no standard LOGO implementation.
Your best bet is probably to look at the source of a popular implementation, such as UCBLogo.
I am a beginner in Ada and I have come across a piece of code which is shown below:
procedure Null_Proc is
begin
null;
end;
As far as I know, a procedure in Ada doesn't return anything. So what does this procedure Null_Proc actually do? I'm not clear about the purpose of its definition.
It does nothing.
It might be useful when a procedure must be supplied but nothing needs to be done; otherwise, it has little value. (I am working from memory; I assume that Ada does allow functions or procedures to be passed as parameters to other subprograms - in C terms, pointers to functions.)
I've been known to write main routines that way when all the "real code" was in the withed packages. This is particularly likely if your program uses tasking, as the main routine cannot accept rendezvous like a task can, so it often ends up with nothing useful to do. Your entire program will stay active until all tasks complete, so the main routine really doesn't have to do anything.
Another possible use would be for implementing some kind of default routine to supply to callbacks.
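For instance, a do-nothing procedure can serve as the default for a callback parameter (a minimal sketch with invented names; Callback, Do_Nothing, and Process are not from the question):

procedure Callback_Demo is
   type Callback is access procedure;

   procedure Do_Nothing is
   begin
      null;                            --  the same "does nothing" body as above
   end Do_Nothing;

   procedure Process (On_Done : Callback := Do_Nothing'Access) is
   begin
      --  ... do the real work here ...
      On_Done.all;                     --  always safe to call; defaults to doing nothing
   end Process;
begin
   Process;                            --  a caller that doesn't care about the callback
end Callback_Demo;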