Is it possible to catch a SIGINT to stop a Julia program from running, but doing so in an "orderly" fashion?
function many_calc(number)
    terminated_by_sigint = false
    a = rand(number)
    where_are_we = 0
    for i in eachindex(a)
        where_are_we = i
        # do something slow...
        sleep(1)
        a[i] += rand()
    end
    a, where_are_we, terminated_by_sigint
end
many_calc(100)
Say I want to end it after 30 seconds, because I didn't realize it would take so long, but I don't want to throw away all the results, because I have another method to continue from where_are_we-1. Is it at all possible to stop it (softly) early using the SIGINT signal?
You can just use a try ... catch ... end block and check whether the error is an interrupt.
For your code:
function many_calc(number)
    terminated_by_sigint = false
    a = rand(number)
    where_are_we = 0
    try
        for i in eachindex(a)
            where_are_we = i
            # do something slow...
            sleep(1)
            a[i] += rand()
        end
    catch my_exception
        isa(my_exception, InterruptException) ? (return a, where_are_we, true) : rethrow(my_exception)
    end
    a, where_are_we, terminated_by_sigint
end
This checks whether the exception is an InterruptException and, if so, returns the partial results; any other exception still aborts the function.
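One caveat worth adding: when a script is run non-interactively, Julia by default exits immediately on SIGINT instead of raising an InterruptException, so the catch block never runs; calling Base.exit_on_sigint(false) restores the exception. A minimal sketch (the loop bounds and the -1.0 sentinel are made up for illustration):

```julia
Base.exit_on_sigint(false)  # deliver Ctrl-C as an InterruptException instead of exiting

partial = try
    s = 0.0
    for i in 1:5
        sleep(0.1)       # press Ctrl-C somewhere in here...
        s += rand()
    end
    s
catch e
    e isa InterruptException || rethrow(e)
    -1.0                 # ...and the program still returns a value instead of dying
end
```

In the REPL this call is unnecessary, since interactive sessions already deliver Ctrl-C as an InterruptException.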
Related
I have the following code that runs in about 8 seconds:
using Random

DEBUG = false
PRISONERS = 100
ATTEMPTS = 50

function tryit()
    a = [1:1:PRISONERS;]
    a = shuffle(a)
    for i in 1:PRISONERS
        if DEBUG
            println("prisoner $i")
        end
        count = 0
        check = i
        while count <= ATTEMPTS
            count += 1
            if a[check] == i
                if DEBUG
                    println("Attempt $count: Checking box $check and found my number")
                end
                break
            else
                if DEBUG
                    println("Attempt $count: Checking box $check and found $(a[check])")
                end
                check = a[check]
            end
        end
        if count > ATTEMPTS
            if DEBUG
                println("Prisoner $i failed to find his number in 50 attempts")
            end
            return false
        end
    end
    return true
end

function main()
    tries = 100000
    success = 0.0
    for i in 1:tries
        if tryit()
            success += 1
        end
    end
    println("Ratio of success = $(success / tries)")
end

main()
Moving the global variables inside the function cuts the runtime to a tenth:
using Random

function tryit()
    DEBUG = false
    PRISONERS = 100
    ATTEMPTS = 50
    a = [1:1:PRISONERS;]
    a = shuffle(a)
    for i in 1:PRISONERS
        if DEBUG
            println("prisoner $i")
        end
        count = 0
        check = i
        while count <= ATTEMPTS
            count += 1
            if a[check] == i
                if DEBUG
                    println("Attempt $count: Checking box $check and found my number")
                end
                break
            else
                if DEBUG
                    println("Attempt $count: Checking box $check and found $(a[check])")
                end
                check = a[check]
            end
        end
        if count > ATTEMPTS
            if DEBUG
                println("Prisoner $i failed to find his number in 50 attempts")
            end
            return false
        end
    end
    return true
end

function main()
    tries = 100000
    success = 0.0
    for i in 1:tries
        if tryit()
            success += 1
        end
    end
    println("Ratio of success = $(success / tries)")
end

main()
I could not reproduce the problem with a smaller code example. I am using Julia version 1.7.2 on Linux Mint.
Why globals? If you want to keep these values in global variables for some reason, aliasing them as default arguments in the function signature fixes the slowdown:
function tryit(PRISONERS = PRISONERS, ATTEMPTS = ATTEMPTS, DEBUG = DEBUG)
...
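To illustrate why that helps (a toy sketch with made-up names slow, fast, and X, not the prisoner code): a default argument x = X snapshots the global into a typed local, so the hot loop no longer performs a dynamic global lookup on every iteration.

```julia
X = 10  # non-const global: its type could change at any time, so access is slow

function slow(n)
    s = 0
    for _ in 1:n
        s += X       # dynamic lookup of the global on every iteration
    end
    s
end

function fast(n, x = X)
    s = 0
    for _ in 1:n
        s += x       # x is a typed local copy of X; the loop compiles to fast code
    end
    s
end

slow(100) == fast(100)  # same result, very different speed for large n
```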
Declaring the global variables as constants solved the problem.
using Random

const DEBUG = false
const PRISONERS = 100
const ATTEMPTS = 50

function tryit()
    a = [1:1:PRISONERS;]
    a = shuffle(a)
    for i in 1:PRISONERS
        if DEBUG
            println("prisoner $i")
        end
        count = 0
        check = i
        while count <= ATTEMPTS
            count += 1
            if a[check] == i
                if DEBUG
                    println("Attempt $count: Checking box $check and found my number")
                end
                break
            else
                if DEBUG
                    println("Attempt $count: Checking box $check and found $(a[check])")
                end
                check = a[check]
            end
        end
        if count > ATTEMPTS
            if DEBUG
                println("Prisoner $i failed to find his number in 50 attempts")
            end
            return false
        end
    end
    return true
end

function main()
    tries = 100000
    success = 0.0
    for i in 1:tries
        if tryit()
            success += 1
        end
    end
    println("Ratio of success = $(success / tries)")
end

main()
See Why is there a performance difference between global and local variables in non-interactive Julia programs? for technical explanations.
I have a function that looks for a file named "global" in parent directories of the working dir. This is how I imagined it:
function readglobal()
    if isfile("./global")
        text = readdlm("./global", String, comment_char='/')
    else
        for i = 1:8
            if isfile("../"^i * "global")
                text = readdlm("../"^i * "global", String, comment_char='/')
                break
            end
        end
    end
    isdefined(:text) || error("Could not find global file")
    dict = Dict{String,String}()
    for i in 1:size(text)[1]
        dict[text[i,1]] = text[i,2]
    end
    return dict
end
This doesn't work because isdefined looks for global variables in the current_module() that I call the function from.
Is there a way to make it work as intended, i.e. to evaluate isdefined(:text) inside the function?
I worked around this with the following code, but I think the code above is cleaner.
function readglobal()
    foundit = isfile("./global")
    if foundit
        text = readdlm("./global", String, comment_char='/')
    else
        for i = 1:8
            foundit = isfile("../"^i * "global")
            if foundit
                text = readdlm("../"^i * "global", String, comment_char='/')
                break
            end
        end
    end
    foundit || error("Could not find global file")
    dict = Dict{String,String}()
    for i in 1:size(text)[1]
        dict[text[i,1]] = text[i,2]
    end
    return dict
end
Edit: Somehow I missed that you found the workaround, as you call it, which is pretty much the same as I suggested. I disagree that the first code is cleaner, using isdefined seems hacky to me. A foundit flag is the right way, IMHO.
Original answer:
Don't use isdefined to check whether the file has been found. Instead set a flag, e.g. filefound. Something along these lines (warning, untested):
function readglobal()
    filefound = false
    filepath = "global"
    for i in 0:8
        filefound = isfile(filepath)
        if filefound
            break
        end
        filepath = joinpath("..", filepath)
    end
    filefound || error("Could not find file")
    text = readdlm(filepath, String, comment_char='/')
    dict = Dict{String,String}()
    for i in 1:size(text, 1)
        dict[text[i,1]] = text[i,2]
    end
    return dict
end
Edit 2: Here's a variant:
function readglobal(filename, maxtries = 8)
    tries = 0
    while !isfile(filename) && (tries += 1) <= maxtries
        filename = joinpath("..", filename)
    end
    tries <= maxtries || error("Could not find file")
    text = readdlm(filename, ...
    ...
end
The following is a 1.5-line version of this function:
readglobal() = Dict(mapslices(x->=>(x...),readdlm(first(filter(isfile,
"./$("../"^i)global" for i=0:8)),String;comment_char='/'),2))
It even returns an error if the file is missing ;)
Not an exact answer, but I would use a separate function:
function get_global_content()
    if isfile("./global")
        return readdlm("./global", String, comment_char='/')
    else
        for i = 1:8
            if isfile("../"^i * "global")
                return readdlm("../"^i * "global", String, comment_char='/')
            end
        end
    end
    error("Could not find global file")
end

function readglobal()
    text = get_global_content()
    dict = Dict{String,String}()
    for i in 1:size(text)[1]
        dict[text[i,1]] = text[i,2]
    end
    return dict
end
Alternatively, have a look at Nullable, e.g.,
function readglobalnull()
    text = Nullable()
    if isfile("./global")
        text = Nullable(readdlm("./global", String, comment_char='/'))
    else
        for i = 1:8
            if isfile("../"^i * "global")
                text = Nullable(readdlm("../"^i * "global", String, comment_char='/'))
                break
            end
        end
    end
    isnull(text) && error("Could not find global file")
    text = get(text)
    dict = Dict{String,String}()
    for i in 1:size(text)[1]
        dict[text[i,1]] = text[i,2]
    end
    return dict
end
Version 0.7 of Julia has an @isdefined variable_name macro that can do precisely what I asked for in this question. It works for any local variable.
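A small sketch of the pattern this enables (first_parsable is a made-up example, not the file-reading function): declare the variable with local, assign it inside the loop only on success, then test @isdefined afterwards.

```julia
function first_parsable(strs)
    local n                      # declared here, but not yet assigned
    for s in strs
        m = tryparse(Int, s)
        if m !== nothing
            n = m                # assigns the outer local declared above
            break
        end
    end
    @isdefined(n) || error("no parsable integer found")
    return n
end
```

first_parsable(["a", "42", "7"]) returns 42, while first_parsable(["a", "b"]) hits the error branch, mirroring the "Could not find global file" check.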
I have a function which is supposed to return zero if the input cannot be converted to a number.
But sometimes it fails when the field from a resultset is not a proper value, whatever that is.
Function nulblank(str)
    dim val
    if IsNull(str) then
        val = 0
    else
        str = trim(str)
        if isNumeric(str) then
            val = cDbl(str)
        else
            val = 0
        end if
    end if
    nulblank = val
end function
I get an error 0x80020009 on str = trim(str)
This function is only called on
set rs = conn.execute(sql)
i = nulblank(rs("somefield"))
How can I make this function "failsafe", so it never dies, but returns 0 on "bad" values?
I guess I could do on error resume next and if Err.Number <> 0 then something.
But what can be in a rs("somefield") which is not null, but cannot be trim()'ed?
That error usually relates to an empty recordset.
You should check that the recordset has a row before attempting to retrieve a column value, eg:
set rs = conn.execute(sql)
if not rs.eof then
    i = nulblank(rs("somefield"))
end if
Case #1:
module try;
  string inp = "my_var";
  initial begin
    $display("Here we go!");
    case (inp)
      "my_var" : $display("my_var");
      default  : $display("default");
    endcase
  end
endmodule
Output is my_var
Case #2:
module try;
  string inp = "my_var";
  initial begin
    $display("Here we go!");
    case (inp)
      "*var*" : $display("*var*");
      default : $display("default");
    endcase
  end
endmodule
Output is default.
Is it possible to get a hit with wildcard search in a case statement?
SystemVerilog does not have any string regular-expression matching methods built into the standard. The UVM package has a uvm_re_match() function; you can import the UVM package to get access to it even if you do not use any other UVM testbench features. Some simulators, such as ModelSim/Questa, have these routines built in as an extension to SystemVerilog, so that you can do:
module try;
  string inp = "my_var";
  initial begin
    $display("Here we go!");
    case (1)
      inp.match("*var*") : $display("*var*");
      default            : $display("default");
    endcase
  end
endmodule
I found a workaround:
function string match(string s1, s2);
  int l1, l2;
  l1 = s1.len();
  l2 = s2.len();
  match = "";               // empty string, not 0: the return type is string
  if (l2 > l1)
    return "";
  for (int i = 0; i < l1 - l2 + 1; i++)
    if (s1.substr(i, i + l2 - 1) == s2)
      return s2;
endfunction
module try;
  string target_id = "abc.def.ddr4_0";
  string inp = "ddr4_0";
  string processed_inp;
  initial begin
    $display("Here we go!");
    for (int i = 0; i < 2; i++) begin
      if (i == 1) begin
        inp = "ddr4_1";
        target_id = "abc.def.ddr4_1";
      end
      processed_inp = match(target_id, inp);
      $display("input to case = %0s", processed_inp);
      case (processed_inp)
        "ddr4_0" : $display("ddr4_0 captured!");
        "ddr4_1" : $display("ddr4_1 captured!");
        default  : $display("default");
      endcase
    end
  end
endmodule
Output:
Here we go!
input to case = ddr4_0
ddr4_0 captured!
input to case = ddr4_1
ddr4_1 captured!
PS: I found this solution on the web; I cannot find the link right now and will post the reference soon.
I am new to F# and I have frankensteined the code below from various examples I found online, in an attempt to get a better understanding of how I can use it. Currently the code reads a list of machines from a file and pings each of them. I had to divide the initial array from the file into smaller arrays of 25 machines to control the number of concurrent actions; otherwise it takes far too long to map over the list of machines. I would like to be able to use a thread pool to manage the threads, but I have not found a way to make it work. Any guidance would be great. I am not able to make this work:
let creatework = FileLines|> Seq.map (fun elem -> ThreadPool.QueueUserWorkItem(new WaitCallback(dowork), elem))
Here is the complete code:
open System.Threading
open System
open System.IO

let filePath = @"c:\qa\machines.txt"
let FileLines = File.ReadAllLines(filePath)
let count = FileLines.Length / 25

type ProcessResult = { exitCode : int; stdout : string; stderr : string }

let executeProcess (exe, cmdline) =
    let psi = new System.Diagnostics.ProcessStartInfo(exe, cmdline)
    psi.UseShellExecute <- false
    psi.RedirectStandardOutput <- true
    psi.RedirectStandardError <- true
    psi.CreateNoWindow <- true
    let p = System.Diagnostics.Process.Start(psi, EnableRaisingEvents = true)
    let output = new System.Text.StringBuilder()
    let error = new System.Text.StringBuilder()
    p.OutputDataReceived.Add(fun args -> output.AppendLine(args.Data) |> ignore)
    p.ErrorDataReceived.Add(fun args -> error.AppendLine(args.Data) |> ignore)
    p.BeginErrorReadLine()
    p.BeginOutputReadLine()
    p.WaitForExit()
    { exitCode = p.ExitCode; stdout = output.ToString(); stderr = error.ToString() }

let dowork machinename =
    async {
        let exeout = executeProcess(@"c:\windows\system32\ping.exe", "-n 1 " + machinename)
        let exelines =
            if exeout.stdout.Contains("Reply from") then Console.WriteLine(machinename + " " + "REPLY")
            elif exeout.stdout.Contains("Request timed out.") then Console.WriteLine(machinename + " " + "RTO")
            elif exeout.stdout.Contains("Ping request could not find host") then Console.WriteLine(machinename + " " + "Unknown Host")
            else Console.WriteLine(machinename + " " + "ERROR")
        exelines
    }

printfn "%A" (System.DateTime.Now.ToString())
for i in 0..count do
    let x = i * 25
    let y = if i = count then FileLines.Length - 1 else (i + 1) * 25
    printfn "%s %d" "X equals: " x
    printfn "%s %d" "Y equals: " y
    let filesection = FileLines.[x..y]
    filesection |> Seq.map dowork |> Async.Parallel |> Async.RunSynchronously |> ignore
printfn "%A" (System.DateTime.Now.ToString())
printfn "finished"
UPDATE:
The code below works and provides a framework for what I want to do. The link that was referenced by Tomas Petricek did have the bits of code that made this work. I just had to figure which example was the right one. It is within 3 seconds of duplicate framework written in Java so I think I am headed in the right direction. I hope the example below will be useful to anyone else trying to thread out various executables in F#:
open System
open System.IO
open System.Diagnostics

let filePath = @"c:\qa\machines.txt"
let FileLines = File.ReadAllLines(filePath)

type Process with
    static member AsyncStart psi =
        let proc = new Process(StartInfo = psi, EnableRaisingEvents = true)
        let asyncExit = Async.AwaitEvent proc.Exited
        async {
            proc.Start() |> ignore
            let! args = asyncExit
            return proc
        }

let shellExecute (program : string, args : string) =
    let startInfo =
        new ProcessStartInfo(FileName = program, Arguments = args,
                             UseShellExecute = false,
                             CreateNoWindow = true,
                             RedirectStandardError = true,
                             RedirectStandardOutput = true)
    Process.AsyncStart(startInfo)

let dowork (machinename : string) =
    async {
        let nonbtstat = "NONE"
        use! pingout = shellExecute(@"c:\windows\system32\ping.exe", "-n 1 " + machinename)
        let pingRdToEnd = pingout.StandardOutput.ReadToEnd()
        let pingresults =
            if pingRdToEnd.Contains("Reply from") then (machinename + " " + "REPLY")
            elif pingRdToEnd.Contains("Request timed out.") then (machinename + " " + "RTO")
            elif pingRdToEnd.Contains("Ping request could not find host") then (machinename + " " + "Unknown Host")
            else (machinename + " " + "PING_ERROR")
        if pingresults.Contains("REPLY") then
            use! nbtstatout = shellExecute(@"c:\windows\system32\nbtstat.exe", "-a " + machinename)
            let nbtstatRdToEnd = nbtstatout.StandardOutput.ReadToEnd().Split('\n')
            let nbtstatline = nbtstatRdToEnd |> Array.tryFind (fun elem -> elem.Contains("<00> UNIQUE Registered"))
            return Console.WriteLine(pingresults + nbtstatline.Value)
        else return Console.WriteLine(pingresults + " " + nonbtstat)
    }

printfn "%A" (System.DateTime.Now.ToString())
FileLines |> Seq.map dowork |> Async.Parallel |> Async.RunSynchronously |> ignore
printfn "%A" (System.DateTime.Now.ToString())
printfn "finished"
The main problem with your code is that executeProcess is a synchronous function that takes a long time to run (it runs the ping.exe process and waits for its result). The general rule is that tasks in a thread pool should not block for a long time (because then they block thread pool threads, which means that the thread pool cannot efficiently schedule other work).
I think you can solve this quite easily by making executeProcess asynchronous. Instead of calling WaitForExit (which blocks), you can wait for the Exited event using Async.AwaitEvent:
let executeProcess (exe, cmdline) = async {
    let psi = new System.Diagnostics.ProcessStartInfo(exe, cmdline)
    psi.UseShellExecute <- false
    // [Lots of stuff omitted]
    p.BeginOutputReadLine()
    let! _ = Async.AwaitEvent p.Exited
    return { exitCode = p.ExitCode
             stdout = output.ToString(); stderr = error.ToString() } }
This should unblock threads in the thread pool, so you'll be able to use Async.Parallel on all the machines from the input array without any manual scheduling.
EDIT: As @desco pointed out in a comment, the above is not quite right if the process exits before the AwaitEvent line is reached; the event may be missed. To fix that, you need to use the Event.guard function, which was discussed in this SO question:
Need help regarding Async and fsi