I can't seem to get output from the LFortran Jupyter kernel.
I installed the following via conda install:
- lfortran
- jupyter
I can run Jupyter and select the LFortran kernel. However:
I see no "Hello world" output and also no error.
If I call new in a second cell, it crashes the kernel.
The global scope in LFortran is special: to enable interactivity and usage in a notebook, it defines a set of additional rules. You don't actually need a program body to run Fortran code there; using the print statement directly will work:
print *, "Hello world!"
The available extensions to Fortran are described here.
Further, a program itself is not supposed to be callable; rather, it should execute directly after being declared (this might be a bug in LFortran, which I reported in lfortran#648). Instead, you might want to declare a subroutine:
subroutine new
    print *, "Hello world!"
end subroutine new
And then run it with
call new
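which should print Hello world! as the cell output.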
To clarify, somefile.py needs variables that are generated from main.ipynb. So, when I simply do %run somefile.py I get this error:
NameError: name 'viewer' is not defined
This viewer is defined in the main code above. However, if I use %load somefile.py and THEN run it, it works fine. But the whole point of doing this is to not show the users of my script the nitty-gritty details. I am preparing this for some students.
The documentation of the magic command %run covers use of the -i option to "run the file in IPython’s namespace instead of an empty one." You want to add that flag to your %run command:
viewer = "something"
%run -i my_script.py
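For instance, suppose my_script.py does nothing but read the variable (a minimal sketch; the file's contents are illustrative, not from the question):

# my_script.py -- expects viewer to already exist in the calling namespace
print("viewer is:", viewer)

With -i the script sees the notebook's viewer and prints viewer is: something; without it, the script runs in an empty namespace and raises the NameError from the question.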
This applies to the notebook kernel's namespace as well, since IPython was incorporated into the IPython notebook, which later became the Jupyter notebook project.
I have a script main.jl which prints a simple "Hello world" string:
println("Hello world!")
However, when trying to run the script through the terminal like this:
julia> main.jl
I get the error:
ERROR: type #main has no field jl
All the information I can find online suggests calling the script like I do to run it. I have assured that I'm in the correct directory - what am I doing wrong?
You are trying to run the file from the Julia REPL (as indicated by the julia> prompt at the beginning of the line). There, you have to include the file, as @AndreWildberg mentions. This will run the commands from the file as if you had typed them into the REPL.
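For example:

julia> include("main.jl")
Hello world!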
The information you found online might have been about running Julia from a "normal" terminal, i.e. a console shell like bash on Linux. There, running julia main.jl will run the program, although the REPL method above is usually preferred for working with Julia.
(Regarding the question about calling the script with arguments, asked in the comments:)
First of all, I'll mention that this is not the usual workflow with Julia scripts. I've been writing Julia code for years and still had to look up how to handle command-line arguments, because I've never once used them in Julia. Usually what's done instead is that you define the functions you want in the file, perhaps with a main function, and after doing an include, you call the main function (or whichever function you want to try out) with arguments, as sketched below.
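A minimal sketch of that workflow (the function name and greeting are illustrative):

# main.jl: define a function instead of reading ARGS at the top level
function main(name)
    println("Hello, $name!")
end

julia> include("main.jl")
main (generic function with 1 method)

julia> main("world")
Hello, world!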
Now, if your script already uses command-line arguments (and you don't want to change that), what you can do is assign to the variable that holds them, ARGS, before the include statement:
julia> push!(empty!(ARGS), "arg1")
1-element Vector{String}:
"arg1"
julia> include("main.jl")
Here we empty ARGS to make sure any previous values are gone, then push the argument (or arguments) we want into it (for several at once, see the append! sketch below). You can try this out for educational purposes, but if you are new to the language, I would suggest learning and getting used to the more Julian workflow involving function calls that I've mentioned above.
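The same trick with several arguments (the values are illustrative):

julia> append!(empty!(ARGS), ["arg1", "arg2"])
2-element Vector{String}:
 "arg1"
 "arg2"

julia> include("main.jl")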
The julia> prompt means your terminal is in Julia REPL mode and is expecting valid Julia code as input. The Julia code main.jl would mean that you want to return the value of a field named jl inside a variable named main. That is why you get that error. The Julia code you would use to run the contents of a file is include("main.jl"). Here the function include is passed the name of your file as a String. This is how you should run files from the REPL.
The instructions you read online assume your terminal is in shell mode, which is usually indicated by a $ prompt. Here your terminal expects code in whatever language your shell is using, e.g. Bash, PowerShell, or Zsh. When Julia is installed, it will (usually) add a julia command which works in any shell. The julia command by itself switches your terminal from shell mode to REPL mode. The julia command can also take additional arguments, like filenames: julia main.jl will run the file using the Julia program and then return you to your shell. This is how you should run files from the terminal shell.
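For example, from a shell prompt:

$ julia main.jl
Hello world!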
A third way to run Julia files would be to install an IDE like VSCode. Then you can run code from a file with keyboard shortcuts rather than by typing commands.
See the Getting Started Documentation for more information.
Adding to Sundar R's answer, if you want to run a script which takes command-line arguments from the REPL, you can check out this package: https://github.com/v-i-s-h/Runner.jl
It allows you to run your script from the REPL with arguments, like:
julia> @runit "main.jl arg1 arg2"
See the README.md for detailed examples.
I'm trying to replicate the functionality of the code editor on a platform I was previously using called Odoo.sh. The platform would let me create a .ipynb notebook, and in its cells I could reference pre-set variables without any boilerplate code inside the notebook. Extremely convenient.
If you're familiar with Odoo, it was like having odoo-bin shell be implicitly run before executing any of the cells inside the notebook. It was wonderful to work with, but Odoo.sh is proprietary, so I'm trying to replicate the same functionality on my local machine.
A minimal example of what I'm going for here would be to have the following Python code run before executing any of my .ipynb notebook's cells.
example_value = False

def example_func():
    global example_value
    example_value = True

example_func()
So that inside of any notebook's cells I could simply run something like example_value and get an output of True.
In the case of Odoo.sh it almost seemed like there was a special custom kernel set up that was nothing more than a regular Python 3 kernel with some initialization code. This may be exactly what was going on, but I don't know enough about how Jupyter works to know for myself. How do I replicate this functionality?
I figured it out! You need to create a custom kernel, but for this use case you can simply reuse the default IPython kernel and pass some variables into the user namespace.
First, create a Python file for your kernel. Let's use test_kernel.py. Here are the contents:
from ipykernel.ipkernel import IPythonKernel
from ipykernel.kernelapp import IPKernelApp

if __name__ == "__main__":
    # Arbitrary setup code from the question, run before the kernel starts
    example_value = False

    def example_func():
        global example_value
        example_value = True

    example_func()

    # Launch a stock IPython kernel, seeding its user namespace
    IPKernelApp.launch_instance(
        kernel_class=IPythonKernel,
        user_ns={"example_value": example_value})
See how the arbitrary code from the question is run before launching the kernel instance. Using the user_ns argument, we can pass arbitrary data to the user environment.
To get our kernel up and running we need to make a test directory and then a test/kernel.json file. It will have these contents:
{
    "argv": ["python", "-m", "test_kernel", "-f", "{connection_file}"],
    "display_name": "Test"
}
Let's install that bad boy. Run jupyter kernelspec install --user test. In that command, test is the name of the directory we created. The --user argument makes Jupyter install the kernel only for the current user. You don't have to use it if you don't want to.
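If you want to double-check that it was registered, jupyter kernelspec list should now show test in its output.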
Now we should be good to go! Start things up with jupyter notebook and you will see your new kernel is available to use when creating notebooks. And check it out, evaluating a cell shows the variable we passed into the namespace:
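In [1]: example_value
Out[1]: True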
Last of all, note that for this to work your test_kernel.py file will need to be somewhere Python can import it. I'm not an expert on this, but from a bit of Googling I took this to mean that the directory containing the file should either be the current working directory or be on your PYTHONPATH.
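For example, one way to arrange the latter from a shell (the path is illustrative):

export PYTHONPATH="$PYTHONPATH:/path/to/dir"   # the directory containing test_kernel.py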
From the octave CLI or octave GUI, if I run
plot([1,2,3],[1,4,9])
it will display a plot window that I can look at and interact with. If however I create file myPlot.m with the same command as content
plot([1,2,3],[1,4,9])
and run it with
octave myPlot.m
then I can briefly see the plot window appear for a fraction of a second and immediately close itself. How can I prevent this window from closing itself?
Octave 4.2.2
Ubuntu 18.04
Here is a full example, given the confusion in the comments.
Suppose you create a script called plotWithoutExiting.m, meant to be invoked directly from the Linux shell rather than from within the Octave interpreter:
#!/opt/octave-4.4.1/bin/octave
h = plot(1:10, 1:10);
waitfor(h)
disp('Now that Figure Object has been destroyed I can exit')
The first line is the Linux 'shebang' line; this special comment tells the shell which interpreter to run to execute the script below. I have used the location of my octave executable here; yours may be located elsewhere, so adapt accordingly.
I then change the permissions in the bash shell to make this file executable
chmod +x ./plotWithoutExiting.m
Then I can run the file directly:
./plotWithoutExiting.m
Alternatively, you could skip the 'shebang' and executable permissions, and try to run this file by calling the octave interpreter explicitly, e.g.:
octave ./plotWithoutExiting.m
or even
octave --eval "plotWithoutExiting"
You can also add the --no-gui option to prevent the Octave GUI from momentarily popping up, if it does.
The above script should then run, capturing the plot into a figure object handle h.
waitfor(h) then pauses program flow, until the figure object is destroyed (e.g. by closing the window manually).
In theory, if you don't care to collect figure handles, you can just use waitfor(gcf) to pause execution until the last active figure object is destroyed.
Once this has happened, the program continues normally until it exits. If you're not running the octave interpreter in interactive mode, this typically also exits the octave environment (you can prevent this by using the --persist option if this is not what you want).
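For example:

octave --persist ./plotWithoutExiting.m

leaves you at the Octave prompt after the script (and its figure) are done, instead of exiting.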
Hope this helps.
Run in the terminal as (you will need to exit Octave afterwards):
octave --persist myscript.m
or append
waitfor(gcf)
at the end of the script to prevent the plot from closing.
Disclaimer: I use jupyter kernel, but the question is also relevant for jupyter notebook.
According to jupyter kernel --help-all, I should be able to change the jupyter kernel JSON connection file by setting a parameter called --KernelManager.connection_file.
If I understand this correctly, that means that the following command:
jupyter kernel --KernelManager.connection_file=connection.json
should start a kernel and give me a connection file called connection.json.
However, this is what I get:
→ jupyter kernel --KernelManager.connection_file='test-this-thing.json'
[KernelApp] Starting kernel 'python3'
[KernelApp] Connection file: /Users/me/Library/Jupyter/runtime/kernel-1e65d0fe-bf8e-1234-8208-463bd4a1234a.json
Now, jupyter doesn't complain that I've passed a wrong argument or anything; it just doesn't change the connection file.
Am I doing something wrong? How can I correctly change the connection filename?
Essentially, nothing you are doing in the above command is wrong. Previously, the kernel overrode whatever you set as the connection file with a hard-coded file location.
This has now been fixed as per the following pull requests:
https://github.com/jupyter/jupyter_client/pull/399
Removed the static connection file name declaration on kernelapp initialize method.
https://github.com/jupyter/jupyter_client/pull/432
Set the default connection_file such that it preserves an existing configuration.
A useful workaround to set the connection file is to not call jupyter kernel directly, but rather launch the kernel module itself, which is more flexible:
python -m ipykernel_launcher -f ~/kernels/file.json
The above works for current and previous versions of jupyter, so I'd consider it to be more reliable.
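You can then attach a front end to that connection file, for example:

jupyter console --existing ~/kernels/file.json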