How to run two procedures in Ada

I have an Ada program with a main procedure, and now I want to add another procedure, but I get an error saying "end of file expected, file can have only one compilation unit". I did some looking and I think it is because you can only have one procedure per file. Do I have to create another file and put the new procedure in it by itself? If so, how would I compile both files and run the whole program together?

As the compiler says, you can have only one compilation unit per file. A main program is a compilation unit, in this case a procedure.
If you want one program to run two procedures which are both compilation units, then you can do it like this:
with One_Procedure,
     Another_Procedure;

procedure Sequential is
begin
   One_Procedure;
   Another_Procedure;
end Sequential;
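To build and run this with GNAT, for instance, it is enough to name the main unit; gnatmake finds and compiles the other units automatically (assuming each compilation unit sits in a file named after it: one_procedure.adb, another_procedure.adb, sequential.adb):
gnatmake sequential.adb
./sequential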
If you want to run the two procedures in parallel, do it like this:
with One_Procedure,
     Another_Procedure;

procedure Parallel is
   task One;
   task Another;

   task body One is
   begin
      One_Procedure;
   end One;

   task body Another is
   begin
      Another_Procedure;
   end Another;
begin
   null;
end Parallel;
The procedures may of course also be declared in the declarative region of the main program or in some packages.

Both recent GNATs and GPRbuild have options for indicating which unit of a multi-unit file you want compiled: -gnateInnn for gcc, and -eInn for gprbuild, as documented here.
Another option is to become familiar with gnatchop for extracting compilation units from files, and with -m for minimal recompilation; the latter avoids recompiling the world just because gnatchop was run when an edit has not "semantically" touched all compilation units in a file (GNAT then ignores time stamps). I sometimes run commands like
gnatchop -r -w -c allofit.ada && gnatmake -Ptest -m someunit.adb
where someunit.adb is generated for compilation unit Someunit (a procedure, a package) contained in file allofit.ada.

You can have only one main procedure, but you can declare several procedures within it:
procedure Main is
   --  declarations ...

   procedure Sub1 is
   begin
      --  ... do something ...
      null;
   end Sub1;

   procedure Sub2 is
   begin
      --  ... do something ...
      null;
   end Sub2;

begin
   Sub1;
   Sub2;
end Main;

Related

Julia Distributed, redundant iterations appearing

I ran
mpiexec -n $nprocs julia --project myfile.jl
on a cluster, where myfile.jl has the following form
using Distributed; using Dates; using JLD2; using LaTeXStrings

@everywhere begin
    using SharedArrays; using QuantumOptics; using LinearAlgebra; using Plots; using Statistics; using DifferentialEquations; using StaticArrays
    # defining some other functions and SharedArrays to be used later, e.g.
    MySharedArray = SharedArray{SVector{Nt,Float64}}(Np, Np)
end

@sync @distributed for pp in 1:Np^2
    for jj in 1:Nj
        # do some stuff with local variables
        for tt in 1:Nt
            # do some stuff with local variables
        end
    end
    MySharedArray[pp] = ...  # using linear indexing
    println("$pp finished")
end

timestr = Dates.format(Dates.now(), "yyyy-mm-dd-HH:MM:SS")
filename = "MyName" * timestr
@save filename*".jld2"

# later on, some other small stuff like making and saving a figure. (This does give an error "no method matching heatmap_edges(::Surface{Array{Float64,2}}, ::Symbol)", but I think that is a technicality about Plots, so not very related to the bigger issue here.)
However, when looking at the output, there are a few issues that make me conclude that something is wrong:
The "$pp finished" output is repeated many times for each value of pp. The number of repetitions actually seems to be equal to 32, i.e. $nprocs.
Despite the code not being finished, "MyName" files are generated. There should be only one, but I get a dozen of them with different timestr components.
EDIT: two more things that I can add:
The output of the different "MyName" files is not identical, but this is expected since random numbers are used in the inner loops. There are 28 of them, a number that I don't easily recognize except that it is again close to the 32 of $nprocs.
Earlier, I wrote that the walltime was exceeded, but this turns out not to be true. The .o file ends with "BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES ... EXIT CODE: 9", pretty shortly after the last output file.
$nprocs is obtained in the PBS script through
#PBS -l select=1:ncpus=32:mpiprocs=32
nprocs=`cat $PBS_NODEFILE | wc -l`
As pointed out by adamslc on the Julia discourse, mpiexec here simply starts $nprocs independent Julia processes, each of which runs the entire script on its own; this explains both the repeated "$pp finished" lines and the multiple output files. The proper way to use Julia on a cluster is to either:
start a session with one core from the job script and add more workers with addprocs() in the Julia script itself (see the sketch below the link), or
use more specialized Julia packages.
https://discourse.julialang.org/t/julia-distributed-redundant-iterations-appearing/57682/3
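For illustration, a minimal sketch of the first approach; the worker count, array shape and rand() payload are made-up placeholders, not the actual computation. The point is that Julia is started as a single process (julia --project myfile.jl, without mpiexec), the script itself spawns the workers, and the @distributed loop is then partitioned once instead of being run in full by each of 32 MPI-launched processes:

using Distributed
addprocs(31)                      # one master process was started by the job script; add 31 workers here

@everywhere using SharedArrays    # worker-side packages must be loaded after addprocs

Np = 10                           # illustrative problem size
A = SharedArray{Float64}(Np, Np)
@sync @distributed for pp in 1:Np^2
    A[pp] = rand()                # each pp is now handled by exactly one worker
end
println("all $(Np^2) iterations ran exactly once")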

Instrumenting R programs using intel-pin

I am instrumenting R programs using the pinatrace.so tool to generate a trace of read and write memory instructions. What I observe is that multiple #eof markers get printed in the trace file at different places (they should actually appear only at the end of the trace). Also, the line immediately after each #eof is distorted and not printed properly.
I am invoking the R shell and my R program using the following command:
../../../pin -follow_execv -t obj-intel64/pinatrace.so -- /home/R-3.5.3./bin/R -f hello.R
The trace file gets printed as shown:
0 0x7ffc812cd1c8
1 0x7ffc812cd1c8
0 0x7f7f8555ee78
#eof
f6971ce8
1 0x6f4518
0 0x7ffc171a0b70
.....
.....
1 0x7ffc6da8f078
0 0x7f7c38786e78
#eof
ffc171a07c8
0 0x6f4e30
0 0x6ff918
What is wrong with this instrumentation?
When Pin is invoked with the -follow_execv knob, it creates a new copy of itself in every child process that is created. The new copy is not aware that another copy is running in the parent, or at all. See here:
If -follow_execv is enabled and the user has not registered to get a notification, Pin will be injected into child/exec-ed process with the same command line as of current process.
If the Pintool wasn't created with -follow_execv in mind, all copies of the Pintool will normally write to the same file. This creates strange artifacts such as what you're seeing, as different processes write to the same file and append a terminator while other processes are still writing after it.
The simplest solution is to add a PID suffix to the file name; another option is to use the Follow Child Process API (linked above) to determine which subprocess is the actual R program you want to trace. Finally, R may have instrumentation support of its own which you could use to instrument the program itself.

How do I determine whether a julia script is included as module or run as script?

I would like to know how, in the Julia language, I can determine whether a file.jl is run as a script, as in the call:
bash$ julia file.jl
Only in this case should it start a function main, for example. That way I could use include("file.jl") without actually executing the function.
To be specific, I am looking for something similar to what was already answered in a Python question:
def main():
    # does something

if __name__ == '__main__':
    main()
Edit:
To be more specific, the method Base.isinteractive (see here) does not solve the problem when include("file.jl") is used from within a non-interactive (e.g. script) environment.
The global constant PROGRAM_FILE contains the script name passed to Julia from the command line (it does not change when include is called).
On the other hand, the @__FILE__ macro gives you the name of the file where it appears.
For instance, if you have the files:
a.jl
println(PROGRAM_FILE)
println(@__FILE__)
include("b.jl")
b.jl
println(PROGRAM_FILE)
println(@__FILE__)
You have the following behavior:
$ julia a.jl
a.jl
D:\a.jl
a.jl
D:\b.jl
$ julia b.jl
b.jl
D:\b.jl
In summary:
PROGRAM_FILE tells you the name of the file that Julia was started with;
@__FILE__ tells you in which file the macro was actually called.
tl;dr version:
if !isdefined(:__init__) || Base.function_module(__init__) != MyModule
    main()
end
Explanation:
There seems to be some confusion. Python and Julia work very differently in terms of their "modules" (even though the two use the same term, in principle they are different).
In Python, a source file is either a module or a script, depending on how you chose to "load" / "run" it: the boilerplate exists to detect the environment in which the source code was run, by querying the __name__ of the embedding module at the time of execution. E.g. if you have a file called mymodule.py and you import it normally, then within the module definition the variable __name__ automatically gets set to the value mymodule; but if you ran it as a standalone script (effectively "dumping" the code into the "main" module), the __name__ variable is that of the global scope, namely __main__. This difference gives you the ability to detect how a Python file was run, so you can act slightly differently in each case, and this is exactly what the boilerplate does.
In Julia, however, a module is defined explicitly in code. Running a file that contains a module declaration will load that module regardless of whether you did using or include; however, in the former case the module will not be reloaded if it is already in the workspace, whereas in the latter case it is as if you "redefined" it.
Modules can have initialisation code via the special __init__() function, whose job is to only run the first time a module is loaded (e.g. when imported via a using statement). So one thing you could do is have a standalone script, which you could either include directly to run as a standalone script, or include it within the scope of a module definition, and have it detect the presence of module-specific variables such that it behaves differently in each case. But it would still have to be a standalone file, separate from the main module definition.
If you want the module to do stuff that the standalone script shouldn't, this is easy: you just have something like this:
module MyModule
__init__() = nothing  # do module-specific initialisation stuff here
include("MyModule_Implementation.jl")
end
If you want the reverse situation, you need a way to detect whether you're running inside the module or not. You could do this, e.g. by detecting the presence of a suitable __init__() function, belonging to that particular module. For example:
### in file "MyModule.jl"
module MyModule
export fun1, fun2;
__init__() = print("Initialising module ...");
include("MyModuleImplementation.jl");
end

### in file "MyModuleImplementation.jl"
fun1(a,b) = a + b;
fun2(a,b) = a * b;

main() = print("Demo of fun1 and fun2. \n" *
               "  fun1(1,2) = $(fun1(1,2)) \n" *
               "  fun2(1,2) = $(fun2(1,2)) \n");

if !isdefined(:__init__) || Base.function_module(__init__) != MyModule
    main()
end
If MyModule is loaded as a module, the main function in MyModuleImplementation.jl will not run.
If you run MyModuleImplementation.jl as a standalone script, the main function will run.
So this is a way to achieve something close to the effect you want; but it is very different from running a module-defining file as either a module or a standalone script, and I don't think you can simply "strip" the module instruction from the code and run the module's "contents" in such a manner in Julia.
The answer is available in the official Julia docs FAQ. I am copy/pasting it here because this question comes up as the first hit on some search engines, and it would be nice if people found the answer on the first-hit site.
How do I check if the current file is being run as the main script?
When a file is run as the main script using julia file.jl, one might want to activate extra functionality like command-line argument handling. A way to determine that a file is run in this fashion is to check whether abspath(PROGRAM_FILE) == @__FILE__ is true.
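A minimal usage sketch of that recipe (the main function here is just illustrative):

# file.jl
main() = println("running as the main script")

if abspath(PROGRAM_FILE) == @__FILE__
    main()   # runs for `julia file.jl`, but not for include("file.jl")
end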

Issue in executing a batch file using PeopleCode in Application engine program

I want to execute a batch file using PeopleCode in an Application Engine program, but I have an issue: Exec returns a non-zero value (value 1).
Below is the PeopleCode snippet:
Global File &FileLog;
Global string &LogFileName, &Servername, &commandline;
Local string &Footer;

If &Servername = "PSNT" Then
   &ScriptName = "D: && D:\psoft\PT854\appserv\prcs\RNBatchFile.bat";
End-If;
&commandline = &ScriptName;

/* Need to commit work or Exec will fail */
CommitWork();

&ExitCode = Exec("cmd.exe /c " | &commandline, %Exec_Synchronous + %FilePath_Absolute);
If &ExitCode <> 0 Then
   MessageBox(0, "", 0, 0, ("Batch File Call Failed! Exit code returned by script was " | &ExitCode));
End-If;
Any help on how to resolve this issue would be appreciated.
Best bet is to do a trace of the execution.
Thoughts:
Can you log on to the process scheduler you are running this on and execute the script OK?
Is the AE being scheduled or called at run-time?
You should not need to change directory, as you are using a fully qualified path to the script.
You should not need to call "cmd /c", as this will create an additional shell for your application to run within, making debugging harder, etc.
Run a trace, and drop us the output. :) HTH
What about changing the working directory to D: inside the script instead? You are invoking two commands, and I'm wondering what the shell is returning to Exec. I'm assuming you wrote your script to give the appropriate return code and that this isn't the problem.
I couldn't tell from the question text, but are you looking for a negative result, such as -1? I think return codes are usually positive: 0 for success, some other positive number for failure. Negative numbers may be acceptable, but I wonder whether Exec dislikes negative numbers.
Perhaps the PeopleCode ChDir function still works as an alternative to two commands in one line? I haven't tried it for a LONG time.
Another alternative that gives you significant control over the process is to use java.lang.Runtime.exec from PeopleCode: http://jjmpsj.blogspot.com/2010/02/exec-processes-while-controlling-stdin.html.

Error: cannot generate code for file random.ads (package spec)

I somehow cannot compile (nor run) my Ada code in GPS. I get an error:
cannot generate code for file random.ads (package spec)
gprbuild: *** compilation phase failed
The random.ads file looks like this:
with Ada.Numerics.Float_Random;
use Ada.Numerics.Float_Random;

package random is
   protected randomOut is
      procedure Inicializal;
      entry Parcel(
         randomout: out Positive;
         from: in Positive;
         to: in Positive := 1
      );
   private
      G: Generator;
      Inicializalt: Boolean := False;
   end randomOut;

   task print is
      entry write(what: in String);
   end print;
end random;
The .gpr file looks as follows:
project Default is
   package Compiler is
      for Default_Switches ("ada") use ("-g", "-O2");
   end Compiler;
   for Main use ("hunting.adb");
end Default;
What does this mean? How can I fix it? Thank you!
You can't generate code for a package specification.
This is normal and expected.
You can compile the package body, random.adb, and generate code for it, but there's usually no need.
Just compile your main program (or your test harness, if you're unit testing) and let the compiler find all its dependencies.
(If it can't, either you haven't written them yet, or it's looking in the wrong place. If you need help with that, add the relevant info to the question.)
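Concretely, with the project file shown in the question, building the declared main looks like this (assuming the project is saved as default.gpr):
gprbuild -P default.gpr
This compiles hunting.adb together with every unit it depends on, including the body of package random.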
The problem was caused by
task print is
   entry write(what: in String);
end print;
Since any task needs a corresponding body, the compiler was faced with something that has to be compiled (the task's body) inside a spec file, which does not get code generated on its own. Moving the task declaration to the .adb file solved the issue.
