Detecting keystrokes in Julia

I have a piece of code in Julia in which a solver iterates many, many times as it seeks a solution to a very complex problem. At present, I have to provide a number of iterations for the code to do, set low enough that I don't have to wait hours for the code to halt in order to save the current state, but high enough that I don't have to keep activating the code every 5 minutes.
Is there a way, with the current state of Julia (0.2), to detect a keystroke instructing the code to either end without saving (in case of problems) or end with saving? I require a method such that the code will continue unimpeded unless such a keystroke event has happened, and that will interrupt on any iteration.
Essentially, I'm looking for a command that will read in a keystroke if a keystroke has occurred (while the terminal that Julia is running in has focus), and run certain code if the keystroke was a specific key. Is this possible?
Note: I'm running julia via xfce4-terminal on Xubuntu, in case that affects the required command.

You can use an asynchronous task to read from STDIN, blocking until something is available to read. In your main computation task, when you are ready to check for input, you can call yield() to lend a few cycles to the read task, and check a global to see if anything was read. For example:
input = ""
#async while true
global input = readavailable(STDIN)
end
for i = 1:10^6 # some long-running computation
if isempty(input)
yield()
else
println("GOT INPUT: ", input)
global input = ""
end
# do some other work here
end
Note that, since this is cooperative multithreading, there are no race conditions.
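For reference, on Julia 1.x the same pattern still works with the lowercase stdin, except that readavailable now returns bytes rather than a string; a minimal sketch (the Ref is my substitution for the global, not part of the original answer):
input = Ref("")
@async while true
    input[] = String(readavailable(stdin))   # blocks until bytes arrive
end
for i in 1:10^6                # some long-running computation
    if isempty(input[])
        yield()                # lend cycles to the reader task
    else
        println("GOT INPUT: ", input[])
        input[] = ""
    end
    # do some other work here
end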

You may be able to achieve this by sending an interrupt (Ctrl+C). This should work from the REPL without any changes to your code – if you want to implement saving you'll have to handle the resulting InterruptException and prompt the user.
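A minimal sketch of that approach, with a stand-in loop body and a hypothetical save_state routine (note: on Julia 1.x a non-interactive script may also need Base.exit_on_sigint(false) for Ctrl+C to arrive as a catchable InterruptException):
function run_solver()
    state = 0
    try
        for i in 1:10^9
            state = i              # stand-in for one solver iteration
        end
    catch err
        err isa InterruptException || rethrow()
        print("Interrupted. Save current state? (y/n) ")
        if startswith(lowercase(readline()), "y")
            # save_state(state)    # hypothetical saving routine
            println("Saved at iteration $state.")
        end
    end
end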

I had some trouble with the answer from steven-g-johnson, and ended up using a Channel to communicate between tasks:
function kbtest()
    # allow 'q' pressed on the keyboard to break the loop
    quitChannel = Channel(10)
    @async while true
        kb_input = readline(stdin)
        if contains(lowercase(kb_input), "q")
            put!(quitChannel, 1)
            break
        end
    end
    start_time = time()
    while (time() - start_time) < 10
        if isready(quitChannel)
            break
        end
        println("in loop @ $(time() - start_time)")
        sleep(1)
    end
    println("out of loop @ $(time() - start_time)")
end
This requires pressing q and then Enter, which works well for my needs.
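Adapted to the original question, the same Channel check drops straight into a solver loop; a sketch in which solver_step! is a hypothetical stand-in for one iteration of the real computation:
quit = Channel{Int}(1)
@async while true
    # a line containing 'q' followed by Enter requests a clean stop
    if contains(lowercase(readline(stdin)), "q")
        put!(quit, 1)
        break
    end
end
for iter in 1:10^9
    isready(quit) && break     # stop cleanly once 'q' has been seen
    # solver_step!(state)      # hypothetical: one solver iteration
    yield()                    # give the reader task a chance to run
end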

Related

How can I test whether stdin has input available in julia?

I would like to detect whether there is input on stdin in a short time window, and continue execution either way, with the outcome stored in a Bool. (My real goal is to implement a pause button on a simulation that runs in the terminal. A second keypress should unpause the program, and it should continue executing.) I have tried to use poll_fd but it does not work on stdin:
julia> FileWatching.poll_fd(stdin, readable=true)
ERROR: MethodError: no method matching poll_fd(::Base.TTY; readable=true)
Is there a way that will work in Julia? I have found a solution that works in Python, and I have considered using it via PyCall, but I am looking for:
- a cleaner, pure-Julia way; and
- a way that does not fight or potentially interfere with Julia's use of libuv.
Use bytesavailable(stdin).
Here is a sample usage. Note that if you capture the keyboard you also need to handle Ctrl+C yourself (in this example only the first byte of the chunk is checked).
If you want to run it fully asynchronously, put @async in front of the while loop. However, if no other code follows, the program will simply exit.
import REPL
term = REPL.Terminals.TTYTerminal("xterm", stdin, stdout, stderr)
REPL.Terminals.raw!(term, true)
Base.start_reading(stdin)
while true
    sleep(1)
    bb = bytesavailable(stdin)
    if bb > 0
        data = read(stdin, bb)
        if data[1] == UInt(3)
            println("Ctrl+C - exiting")
            exit()
        end
        println("Got $bb bytes: $(string(data))")
    end
end
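One practical addition: once you are done reading raw keystrokes, it is worth restoring the terminal to its normal (cooked) mode with the same API:
REPL.Terminals.raw!(term, false)   # back to normal line-buffered input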
Following @Przemyslaw Szufel's response, here is a full solution that allows a keypress to pause/unpause the iteration of a loop:
import REPL
term = REPL.Terminals.TTYTerminal("xterm", stdin, stdout, stderr)
REPL.Terminals.raw!(term, true)
Base.start_reading(stdin)

function read_and_handle_control_c()
    b = bytesavailable(stdin)
    @assert b > 0
    data = read(stdin, b)
    if data[1] == UInt(3)
        println("Ctrl+C - exiting")
        exit()
    end
    nothing
end

function check_for_and_handle_pause()
    if bytesavailable(stdin) > 0
        read_and_handle_control_c()
        while bytesavailable(stdin) == 0
            sleep(0.1)
        end
        read_and_handle_control_c()
    end
    nothing
end

while true
    # [do stuff]
    sleep(0.05)
    check_for_and_handle_pause()
end
This is somewhat suboptimal in that it requires the process to wake up regularly even when paused, but it achieves my goal nevertheless.

Julia @distributed: subsequent code runs before all workers finish

I have been butting my head against a wall for a few days over this code:
using Distributed
using SharedArrays

# Dimension sizes
M = 10
N = 100

z_ijw = zeros(Float64, M, N, M)
z_ijw_tmp = SharedArray{Float64}(M * M * N)
i2s = CartesianIndices(z_ijw)

@distributed for iall = 1:(M * M * N)
    # get index
    i = i2s[iall][1]
    j = i2s[iall][2]
    w = i2s[iall][3]
    # assign function value
    z_ijw_tmp[iall] = sqrt(i + j + w)  # any random function would do
end

# Print the last element of the array
println(z_ijw_tmp[end])
println(z_ijw_tmp[end])
println(z_ijw_tmp[end])
The first printed number is always 0; the second is either 0 or 10.95... (the square root of 120, which is correct). The third is either 0 or 10.95 (if the second is 0).
So it appears that the print code (on the main thread?) is allowed to run before all the workers finish. Is there any way for the print code to run properly the first time, without a wait command?
Without the multiple println calls, I thought it was a problem with scope and spent a few days reading about it.
@distributed with a reducer function, i.e. @distributed (+), will be synced, whereas @distributed without a reducer function will be started asynchronously.
Putting a @sync in front of your @distributed should make the code behave the way you want it to.
This is also noted in the documentation:
Note that without a reducer function, @distributed executes asynchronously, i.e. it spawns independent tasks on all available workers and returns immediately without waiting for completion. To wait for completion, prefix the call with @sync.
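Applied to the code in the question, the fix is a one-word change; a sketch using the same variables as above:
@sync @distributed for iall = 1:(M * M * N)
    i = i2s[iall][1]
    j = i2s[iall][2]
    w = i2s[iall][3]
    z_ijw_tmp[iall] = sqrt(i + j + w)
end
println(z_ijw_tmp[end])   # now reliably prints 10.95... on the first try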

How to run external program from Julia and wait until it finishes, then read its output

I'm trying to execute an external program from Julia via run, then wait until it finishes and store its output into a variable.
The only solution I came up with is this:
callback = function(data)
    print(data)
end

open(`minizinc com.mzn com.dzn`) do f
    x = readall(f)
    callback(x)
end
The problem is that I do not want to use callbacks.
Is there any way to wait until the process has finished and then continue executing?
Thanks in advance
You can just call readall (or readstring on Julia master) on the command object:
julia> readall(`echo Hello`)
"Hello\n"

asynchronous file IO in Tcl

I have a C function that I have wrapped in Tcl that opens a file, reads the contents, performs an operation, and returns a value. Unfortunately, when I call the function to open a large file, it blocks the event loop. The OS is Linux.
I'd like to make the calls asynchronous. How do I do so?
(I can pass the work to another Tcl thread, but that's not exactly what I want).
This is quite difficult to do in general. The issue is that asynchronous file operations don't work very well with ordinary files due to the abstractions involved at the OS level. The best way around this — if you can — is to build an index over the file first so that you can avoid reading through it all and instead just seek to somewhere close to the data. This is the core of how a database works.
If you can't do that but you can apply a simple filter, putting that filter in a subprocess (pipes do work with asynchronous I/O in Tcl, and they do so on all supported platforms) or another thread (inter-thread messages are nice from an asynch processing perspective too) can work wonders.
Use the above techniques if you can. They're what I believe you should do.
If even that is impractical, you're going to have to do this the hard way.
The hard way involves inserting event-loop-aware delays in your processing.
Introducing delays in 8.5 and before
In Tcl 8.5 and before, you do this by splitting your code up into several pieces in different procedures and using a stanza like this to pass control between them through a “delay”:
# 100ms delay, but tune it yourself
after 100 [list theNextProcedure $oneArgument $another]
This is continuation-passing style, and it can be rather tricky to get right. In particular, it's rather messy with complicated processing. For example, suppose you were doing a loop over the first thousand lines of a file:
proc applyToLines {filename limit callback} {
    set f [open $filename]
    for {set i 1} {$i <= $limit} {incr i} {
        set line [gets $f]
        if {[eof $f]} break
        $callback $i $line
    }
    close $f
}
applyToLines "/the/filename.txt" 1000 DoSomething
In classic Tcl CPS, you'd do this:
proc applyToLines {filename limit callback} {
    set f [open $filename]
    Do1Line $f 1 $limit $callback
}
proc Do1Line {f i limit callback} {
    set line [gets $f]
    if {![eof $f]} {
        $callback $i $line
        if {[incr i] <= $limit} {
            after 10 [list Do1Line $f $i $limit $callback]
            return
        }
    }
    close $f
}
applyToLines "/the/filename.txt" 1000 DoSomething
As you can see, it's not a simple transformation, and if you wanted to do something once the processing was done, you'd need to pass around a callback. (You could also use globals, but that's hardly elegant…)
(If you want help changing your code to work this way, you'll need to show us the code that you want help with.)
Introducing delays in 8.6
In Tcl 8.6, though the above code techniques will still work, you've got another option: coroutines! We can write this instead:
proc applyToLines {filename limit callback} {
    set f [open $filename]
    for {set i 1} {$i <= $limit} {incr i} {
        set line [gets $f]
        if {[eof $f]} break
        yield [after 10 [info coroutine]]
        $callback $i $line
    }
    close $f
}
coroutine ApplyToAFile applyToLines "/the/filename.txt" 1000 DoSomething
That's almost the same, except for the line with yield and info coroutine (which suspends the coroutine until it is resumed from the event loop in about 10ms time) and the line with coroutine ApplyToAFile, where that prefix creates a coroutine (with the given arbitrary name ApplyToAFile) and sets it running. As you can see, it's not too hard to transform your code like this.
(There is no chance at all of a backport of the coroutine engine to 8.5 or before; it completely requires the non-recursive script execution engine in 8.6.)
Tcl does support asynchronous I/O on its channels (hence including files) using event-style (callback) approach.
The idea is to register a script as a callback for the so-called readable event on an opened channel set to non-blocking mode, and then, in that script, call read on the channel once, process the data read, and test whether that read operation hit the EOF condition, in which case close the file.
Basically this looks like this:
set data ""
set done false
proc read_chunk fd {
global data
append data [read $fd]
if {[eof $fd]} {
close $fd
set ::done true
}
}
set fd [open file]
chan configure $fd -blocking no
chan event $fd readable [list read_chunk $fd]
vwait ::done
(Two points: a) In case of Tcl ≤ 8.5 you'll have to use fconfigure instead of chan configure and fileevent instead of chan event; b) If you're using Tk you don't need vwait as Tk already forces the Tcl event loop to run).
Note one caveat though: if the file you're reading is located on a fast, physically attached medium (a rotating disk, an SSD, etc.), it will almost always be readable, which means the Tcl event loop will be saturated with readable events on your file. The overall user experience will likely be worse than if you'd read the file in one gulp, because the Tk UI uses idle-priority callbacks for many of its tasks, and they won't get any chance to run until your file is read; in the end you'll have a sluggish or frozen UI anyway, and the file will be read more slowly (in wall-clock terms) than it would be in a single gulp.
There are two possible solutions:
Do use a separate thread.
Employ a hack which gives a chance for the idle-priority events to run — in your callback script for the readable event schedule execution of another callback script with the idle priority:
chan event $fd readable [list after idle [list read_chunk $fd]]
Obviously, this actually doubles the number of events piped through the Tcl event loop in response to the chunks of the file's data becoming "available" but in exchange it brings the priority of processing your file's data down to that of UI events.
You might also be tempted to just call update in your readable callback to force the event loop to process the UI event, — please don't.
There's yet another approach available since Tcl 8.6: coroutines. The chief idea is that instead of using events you interleave reading a file using reasonably small chunks with some other processing. Both tasks should be implemented as coroutines periodically yielding into each other thus creating a cooperative multitasking. Wiki has more info on this.

When will infinite loops, and recursive functions with no stop conditions, eventually stop?

I heard a myth saying that an infinite loop, or a recursive function with no stop condition, will stop when "the stack overflows". Is that right?
For example :
void call()
{
    call();
}
or
for(;;){;}
Will they really stop when the stack overflows?
Update: If it really stops, can I detect after how many recursive calls?
It really depends on the choice of language.
In some languages, your infinite recursive function will halt with a stack overflow based on system- or language-dependent conditions. The reason for this is that many implementations of function call and return allocate new space for each function call, and when space is exhausted the program will fail. However, other languages (Scheme and various gcc optimization levels) will actually have this program run forever because they are smart enough to realize that they can reuse the space for each call.
In some languages, infinite loops will run forever: your program will just keep on running and never make progress. In other languages, infinite loops are allowed to be optimized away by the compiler. For example, the C++ standard says that the compiler is allowed to assume that any loop will either terminate or perform some globally visible action, so if the compiler sees an infinite loop that does neither, it may optimize the loop out of existence, in which case execution proceeds as though the loop had terminated.
In other words, it really depends. There are no hard-and-fast answers to this question.
Hope this helps!
Your first example is a recursive method call - each invocation of call (in most languages and environments) will create a new stack frame, and eventually you'll run out of stack (you'll get a stack overflow condition).
Your second example involves no recursive method invocation - it's just a loop. No stack frames need to be created, and the stack won't overflow.
Give your examples a shot - the first example will cause a stack overflow faster than you may think. The second will make your fans spin really fast, but nothing will happen otherwise until you kill it.
Depending on which language you are using, the loop will end when the maximum allocated memory or the maximum execution time is reached. Some languages will detect an infinite loop and stop it from running.
p.s. It is not a myth. You can actually try it.
"Update: If it really stops, can I detect after how many recursive calls?"
You most certainly can:
call(int n) {
    print(n)
    call(n + 1)
}
Then just call:
call(1)
When the stack overflow occurs, look at the last printed number. This was the number of method calls you had.
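In a language where the overflow surfaces as a catchable exception you can skip the printing entirely; a sketch in Julia, counting via a global purely for illustration:
depth = 0
recurse(n) = (global depth = n; recurse(n + 1))
try
    recurse(1)
catch err
    err isa StackOverflowError || rethrow()
    println("stack overflowed after $depth calls")
end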
Hope this helps!
N.S.
No matter the language, when I create a loop that is potentially infinite, I always build an "emergency exit" into it. I've done this in C#, VB.Net, VBA, Java, and just now, for the first time, in an IBM DB2 SQL function.
Since the concept is valid in any language, I'll present it in pseudocode, as follows:
Begin Procedure
    Declare Variable safety_counter As Integer
    Set safety_counter = 0
    Begin Loop
        ...
        <body of code>
        If some_criteria = True Then      <-- this is your normal exit
            Exit Loop
        End If
        ...
        safety_counter = safety_counter + 1
        If safety_counter >= 1000 Then    <-- set to any value you wish
            Exit Loop                     <-- this is the emergency exit
        End If
    End Loop
End Procedure
Some variations are as follows.
If the loop is simple, the two exit criteria might be written in the same If-Then:
If some_criteria = True              <-- normal exit
   Or safety_counter >= 1000 Then    <-- emergency exit
    Exit Loop
End If
safety_counter = safety_counter + 1
An error value might be returned like this:
Begin Procedure
    Declare Variable result_value As Integer
    Declare Variable safety_counter As Integer
    Set safety_counter = 0
    Begin Loop
        ...
        <body of code>
        If some_criteria = True Then      <-- this is your normal exit
            Set result_value = abc * xyz  <-- the value you seek
            Exit Loop
        End If
        ...
        safety_counter = safety_counter + 1
        If safety_counter >= 1000 Then    <-- set to any sufficient value
            Set result_value = (-123)     <-- indicate "infinite loop error"
            Exit Loop                     <-- this is the emergency exit
        End If
    End Loop
    Return result_value
End Procedure
There are many possible ways to do this, but the key is in safety_counter, and the way of monitoring it to trigger the emergency exit.
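For illustration, here is the same pattern sketched in Julia, with a Newton iteration for sqrt(2) standing in for <body of code>; the convergence test is the normal exit and the iteration cap is the emergency exit:
function guarded_sqrt2(; max_iter = 1000)
    x = 1.0
    for safety_counter in 1:max_iter
        x = (x + 2 / x) / 2        # <body of code>: one Newton step
        if abs(x^2 - 2) < 1e-12    # normal exit
            return x
        end
    end
    error("emergency exit: iteration limit reached")   # emergency exit
end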
