What are the differences between runghc and ghc?

I ran a short program that seemed to compile fine with both, except that I got the following with runghc, but not plain ghc:
error:
Variable not in scope: main :: IO a0
Perhaps you meant `min' (imported from Prelude)
It seems that a program run with runghc needs a main function, while one compiled with plain ghc doesn't?
Is that all?

runghc is used to run programs directly, not to compile them. A file without a main IO action can't be run because, by definition, there is nothing to run.
What ghc does with a file not containing a main is compile it as a module, to be imported by other Haskell programs/modules - these are, of course, not runnable.
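For instance, consider a hypothetical file Greet.hs that only defines a library module and no main:
-- Greet.hs: a library module with no main action
module Greet (greeting) where

greeting :: String
greeting = "hello"
Compiling it works, but running it does not, roughly:
$ ghc -c Greet.hs     # compiles to Greet.hi/Greet.o; nothing is linked or run
$ runghc Greet.hs     # fails, much like the "Variable not in scope: main" error above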

Related

How do I determine whether a julia script is included as module or run as script?

I would like to know how, in the Julia language, I can determine whether a file.jl is run as a script, as in the call:
bash$ julia file.jl
Only in that case should it start a function main, for example. That way I could use include("file.jl") without actually executing the function.
To be specific, I am looking for something similar to what was already answered in a Python question:
def main():
    # does something
    pass

if __name__ == '__main__':
    main()
Edit:
To be more specific, the method Base.isinteractive (see here) does not solve the problem when using include("file.jl") from within a non-interactive (e.g. script) environment.
The global constant PROGRAM_FILE contains the script name passed to Julia from the command line (it does not change when include is called).
On the other hand, the @__FILE__ macro gives you the name of the file in which it appears.
For instance, if you have the files:
a.jl
println(PROGRAM_FILE)
println(@__FILE__)
include("b.jl")
b.jl
println(PROGRAM_FILE)
println(@__FILE__)
You have the following behavior:
$ julia a.jl
a.jl
D:\a.jl
a.jl
D:\b.jl
$ julia b.jl
b.jl
D:\b.jl
In summary:
PROGRAM_FILE tells you the name of the file that Julia was started with;
@__FILE__ tells you in which file the macro was actually expanded.
tl;dr version:
if !isdefined(:__init__) || Base.function_module(__init__) != MyModule
    main()
end
Explanation:
There seems to be some confusion. Python and Julia work very differently in terms of their "modules" (even though the two use the same term, in principle they are different).
In python, a source file is either a module or a script, depending on how you chose to "load" / "run" it: the boilerplate exists to detect the environment in which the source code was run, by querying the __name__ of the embedding module at the time of execution. E.g. if you have a file called mymodule.py and you import it normally, then within the module definition the variable __name__ automatically gets set to the value mymodule; but if you ran it as a standalone script (effectively "dumping" the code into the "main" module), the __name__ variable is that of the global scope, namely __main__. This difference gives you the ability to detect how a python file was run, so you can act slightly differently in each case, and this is exactly what the boilerplate does.
In julia, however, a module is defined explicitly as code. Running a file that contains a module declaration will load that module regardless of whether you did using or include; however in the former case, the module will not be reloaded if it's already on the workspace, whereas in the latter case it's as if you "redefined" it.
Modules can have initialisation code via the special __init__() function, whose job is to only run the first time a module is loaded (e.g. when imported via a using statement). So one thing you could do is have a standalone script, which you could either include directly to run as a standalone script, or include it within the scope of a module definition, and have it detect the presence of module-specific variables such that it behaves differently in each case. But it would still have to be a standalone file, separate from the main module definition.
If you want the module to do things that the standalone script shouldn't, this is easy: you just have something like this:
module MyModule
    __init__() = nothing  # do module-specific initialisation stuff here
    include("MyModule_Implementation.jl")
end
If you want the reverse situation, you need a way to detect whether you're running inside the module or not. You could do this, e.g. by detecting the presence of a suitable __init__() function, belonging to that particular module. For example:
### in file "MyModule.jl"
module MyModule
    export fun1, fun2;
    __init__() = print("Initialising module ...");
    include("MyModuleImplementation.jl");
end
### in file "MyModuleImplementation.jl"
fun1(a,b) = a + b;
fun2(a,b) = a * b;
main() = print("Demo of fun1 and fun2. \n" *
               " fun1(1,2) = $(fun1(1,2)) \n" *
               " fun2(1,2) = $(fun2(1,2)) \n");
if !isdefined(:__init__) || Base.function_module(__init__) != MyModule
    main()
end
If MyModule is loaded as a module, the main function in MyModuleImplementation.jl will not run.
If you run MyModuleImplementation.jl as a standalone script, the main function will run.
So this is a way to achieve something close to the effect you want; but it's very different from running a module-defining file as either a module or a standalone script. I don't think you can simply "strip" the module instruction from the code and run the module's "contents" in that manner in Julia.
The answer is available at the official Julia docs FAQ. I am copy/pasting it here because this question comes up as the first hit on some search engines. It would be nice if people found the answer on the first-hit site.
How do I check if the current file is being run as the main script?
When a file is run as the main script using julia file.jl one might want to activate extra functionality like command line argument handling. A way to determine that a file is run in this fashion is to check if abspath(PROGRAM_FILE) == @__FILE__ is true.
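In code, that check is just the following (a minimal sketch, assuming you have defined a main() function in the same file):
if abspath(PROGRAM_FILE) == @__FILE__
    # true only when this file is the one passed to julia on the command line
    main()
end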

Spec does not allow body (remove in gpr file) without touching source files

I got the following error, which is common with generated sources:
spec of this package does not allow a body
I would like to know if there is a rule I can put in the gpr file to ignore this error.
Something like an ignore flag.
As I mentioned, these files are generated, so I have no rights over them (not allowed to delete them nor to rewrite them).
Moreover, it would be nice to have a rule that works for every generation.
If you were to compile
package Guillaume is
end Guillaume;
package body Guillaume is
end Guillaume;
in Ada 1983 mode, you would get e.g.
gnatmake -gnat83 guillaume.ads
gcc -c -gnat83 guillaume.ads
guillaume.ada:1:09: warning: package "Guillaume" does not require a body
guillaume.ada:1:09: warning: body in file "guillaume.adb" will be ignored
Having a body that isn’t required by the spec was made illegal with Ada 95 (it would be possible to change a body and for the compilation process not to notice that it needed to be recompiled, leading to confusion). If your code generator was designed to produce Ada 83, then I guess you may have to face compiling in Ada 83 mode - but GNAT doesn’t, as far as I know, guarantee to be 100% compatible, particularly as far as the run time system is concerned.
Are the offending package bodies all actually empty? If so, you might be able to list them and use the Excluded_Source_List_File attribute in your project. If not, you are in trouble, because there’s no way (without changing package sources) to get the code in them to execute.
(Later): actually, Excluded_Source_List_File doesn’t help; it stops gprbuild looking at the file, but not the compiler; and it’s the compiler that rejects the body. Sorry. But if you could make such a list you could use it to delete the bad bodies.
You can exclude the body from the list of source files:
for Excluded_Source_Files use ("my_body.adb");
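In context, that attribute sits in the project file, roughly like this (project and directory names are hypothetical):
project Generated is
   for Source_Dirs use ("generated");
   for Excluded_Source_Files use ("my_body.adb");
end Generated;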

GNAT Programming Suite - source file not found

Ada is still new to me, so I am trying to find my way around the GPS IDE. I asked another question earlier, but I think this problem takes precedence over that one, and may be at the root of my trouble.
When I compile, I get a long list of "warning: source file ... not found" messages.
In my .gpr file, I have listed all of the spec and body source files and use the following naming scheme:
package Naming is
   for Casing use "mixedcase";
   for Dot_Replacement use ".";
   for Spec_Suffix ("ada") use "_s.ada";
   for Body_Suffix ("ada") use "_b.ada";
end Naming;
What is odd is that the error messages all look either like this:
warning: source file "xxx_b.adb" not found
or this
warning: source file "xxx.adb" not found
Note that neither of these (xxx_b.adb or xxx.adb) conforms to the naming scheme, which says the files should end with .ada.
Can someone explain what is going on here?
I'm 99% sure that the problem is one of the ones I mentioned in answer to your other question: GNAT does not normally support more than one compilation unit in a file. I got exactly the behaviour you describe with GPS and these files:
james_s.ada:
with Jane;
package James is
end James;
jim_s.ada:
package Jim is
end Jim;
package Jane is
end Jane;
The error message on compiling james_s.ada says it can't find Jane_s.ada, but when I ask GPS to go to the declaration of Jane it takes me to the "correct" line in jim_s.ada.
You could use gnatchop to split jim_s.ada, but it doesn't understand project files or naming conventions; you probably want to keep the existing names for the code that works, so you'd rename gnatchop's output as required.
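For example, something like this (a sketch; gnatchop emits GNAT-default file names, presumably jim.ads and jane.ads here, which you would then rename to fit the project's naming scheme):
$ gnatchop jim_s.ada
$ mv jim.ads jim_s.ada
$ mv jane.ads jane_s.ada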
However! to my great surprise, it turns out that GNAT does support having more than one compilation unit in a file, provided package Naming in the project file tells it about each unit in the file:
package Naming is
   for Casing use "mixedcase";
   for Dot_Replacement use ".";
   for Spec_Suffix ("ada") use "_s.ada";
   for Body_Suffix ("ada") use "_b.ada";
   for Spec ("Jim") use "jim_s.ada" at 1;
   for Spec ("Jane") use "jim_s.ada" at 2;
end Naming;
It's up to you whether to do this or to bite the bullet and use gnatchop, either on the multi-unit files or on the whole source tree.
First off, this isn't an Ada problem, it's a GNAT problem. Other Ada compilers have no problem with the file names you are using.
However, GNAT is rather unique in that it expects there to be only one program unit (package body, package spec, stand-alone routine, etc.) per source file. This is because it is also rather unique in that it expects to be able to find the source code for any program unit just by knowing that unit's Ada identifier. Most other Ada compilers maintain some kind of library file that maps file names to program units, and you have to register all your files into it. (Whereas your typical C compiler just leaves the problem of finding the files for all your code up to the user entirely.)
Generally the easiest thing to do with GNAT, the way that will cause you the least trouble, is to just use its default file naming convention (and of course don't put multiple program units in a single file), as sketched below.
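For reference, GNAT's default convention derives the file name directly from the unit name: lowercase, with dots replaced by hyphens, .ads for specs and .adb for bodies. For example:
package Stack (spec)      ->  stack.ads
package Stack (body)      ->  stack.adb
package Stack.Ops (spec)  ->  stack-ops.ads
procedure Main            ->  main.adb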
If you already have some existing Ada code (perhaps developed for another compiler), the easiest way to import it into Gnat is typically to run the gnatchop tool on it all. So that's what I'd suggest you try.
From the GPRbuild User's Guide:
Strings are used for values of attributes or as indexes for these attributes. They are in general case sensitive, except when noted otherwise [...]
Based on this, I believe you have to use "Ada" instead of "ada" as the index for Spec_Suffix and Body_Suffix. I currently do not have access to the tools to test this, so I suggest just trying it out, as in the sketch below.
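That change would make the Naming package look like this (untested, per the caveat above):
package Naming is
   for Casing use "mixedcase";
   for Dot_Replacement use ".";
   for Spec_Suffix ("Ada") use "_s.ada";
   for Body_Suffix ("Ada") use "_b.ada";
end Naming;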

macros defined in verilog file but error shows undefined macros in modelsim

I have defined the macros for all my verilog files in one verilog file, say FabScalarParam.v,
and I compile FabScalarParam.v first in the system.do file, then compile the other verilog files.
But when I run "do system.do" to compile the design, it shows me errors like this:
# ** Error: I:/programming/EDK/project_4/pcores/instruction_side_v1_00_a/hdl/verilog/StallUnit.v(6): (vlog-2163) Macro `MAX_STALL_CYCLES_LOG is undefined.
It says that some macros are not defined. Is there any way to make FabScalarParam.v a global file in the compile list in ModelSim? Due to the large number of macros, I cannot specify them all via Compile --> Compile Options --> Verilog & SystemVerilog --> Other Verilog Options --> Macros.
I use ModelSim 6.5 and Xilinx EDK 12.4.
This has to do with compile order. Compiler directives, which include macros, are processed linearly by the compiler. See IEEE 1364-2001 section 19 or IEEE 1800-2009 section 22 for more detail. Make sure the macro file is the first file that is compiled, for example as shown below.
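A minimal sketch of what that could look like in system.do, assuming the file names from the question and that everything goes into a single vlog invocation (see also the answer further down about compilation units):
vlog -work work FabScalarParam.v StallUnit.v
Listing FabScalarParam.v first means its `define lines have already been processed when StallUnit.v is compiled.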
Manually adding the `include also works; however, your compiler may give "macro redefined" warnings. It is recommended to wrap the macro definitions in an `ifdef/`ifndef guard. Doing so resolves the macro redefined warnings. If multiple files refer to the same `include, then guarding the macros may also improve compiler performance. See the following example.
macros.vh:
`ifndef macros_vh
// NOTE: for Verilog 1995 `ifndef is not supported use `ifdef macros_vh `else
`define macros_vh
/**************
* your macros *
* `define ... *
***************/
`endif
Then in your verilog files (*.v / *.sv):
`include "macros.vh"
/*************
* your code *
*************/
I cannot find anything useful on the Internet about setting a global file in the compile list in ModelSim. So I just manually add `include ... in each file to solve the problem. Though it is clumsy, it works fine.
If someone knows how to set a global file in the compile list in ModelSim, please update this. :-) Thanks.
I had the same problem when I compiled my project with a script.
In the end, I figured out that you can't compile your macro file and your verilog files in separate compilations.
For example, this does not work:
vlog -work work macro.v
vlog -work work project.v
You have to:
vlog -work work macro.v project.v
Compiling them in one vlog command can solve the problem.
You can compile a verilog file and define a preprocessor macro that will be applied to that file by adding the following option to vlog:
+define+<macro_name>[=<macro_text>]
which is the same as the compiler directive: `define macro_name macro_text
for example:
vlog +define+macro_name -work work project.v
This is my workaround, without adding `include in every file that needs the macros:
vlog -mfcu -y <path/to/source/files> +libext+.v+.sv <source file 1> <source file 2> <... source file N>
-mfcu makes vlog treat all source files on one command line as one compilation unit. If macros are defined in <source file 1>, they are visible to all source files that follow it.

Error in Ada separate file

I am translating an Ada 83 file to Ada 95. The problem happens when I try to compile a file which is a separate. The error is "Illegal character" and refers to a preprocessor directive:
with BAS_PUT;
#if ADA_COMPILER="GNAT" then
WITH ADA.GNAT_PUT;
#else
WITH ADA_PUT;
#end if;
separate(A_CALL_PUT)
procedure ....
This problem does not happen when the same preprocessor directive is in an .adb file that is not a separate function.
Can someone help me?
Ada has no preprocessor, so # is indeed an illegal character.
Some compilers (e.g. GNAT) do come with one, but if so it is one of their own devising. If you like, you can set up your build system to run your Ada source files through an external preprocessor (e.g. the C preprocessor). I've never done that, but I'm told it's eminently doable.
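For what it's worth, GNAT's bundled preprocessor is gnatprep, and its directive syntax (#if ... then / #else / #end if;) looks like what is in your source. A rough, untested sketch of how it might be run, with hypothetical file names:
gnatprep -DADA_COMPILER=GNAT a_call_put-put.prep a_call_put-put.adb
This has to happen as a separate step before compilation, since the compiler itself rejects the # lines.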
If your compiler does happen to come with a preprocessor, it is non-standard. Use it if you like, but by definition it will be useless for creating portable source files (which appears to be what you are trying to do with it).
Most folks would consider it better form to just create different source files for your different environments, and have the build environment (make rules?) switch between them.
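One GNAT-specific way of doing that switching (a sketch, not from the question; all names made up) is a scenario variable in the project file that selects per-compiler source directories, so each variant of the unit lives in its own file with no preprocessor lines:
project A_Call_Put is
   type Compiler_Type is ("GNAT", "OTHER");
   Compiler : Compiler_Type := external ("ADA_COMPILER", "GNAT");
   case Compiler is
      when "GNAT"  => for Source_Dirs use ("src", "src/gnat");
      when "OTHER" => for Source_Dirs use ("src", "src/other");
   end case;
end A_Call_Put;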
