How do I do simple multithreading in Isabelle ML?

I have a use case that calls for starting multiple Isabelle_System.bash processes.
In the source below, I use 3 bash commands. As a simple example, I would like to start them in separate threads so that they run concurrently rather than sequentially.
ML {*
Isabelle_System.bash ("echo '1. Call script to compile in the PIDE console.'");
Isabelle_System.bash ("echo '2. Call script to compile in a Windows console.'");
Isabelle_System.bash ("echo '3. Maybe a third process.'");
(*In an outer syntax command, I have options to allow 1 and 2, so it might be
useful to allow starting both at the same time, to be able to terminate the
PIDE process, and let the Windows console keep running. But, unless
multithreading is used, their execution will be sequential, which is
useless.*)
*}
I did a grep and found src/Pure/Concurrent/simple_thread.ML.
However, this is not a priority, and trial-and-error is not the best way for me to figure out on my own what needs to be done.
If someone can give me a simple plug-n-play template to run the 3 bash commands above, I would appreciate it. Or, maybe someone can tell me why I can't or shouldn't do it.

"Simple multithreading" is an oxymoron, because threads are never simple. If you just want to do parallel programming, which is simple, then the first place to look is the structure Par_List, e.g. the Par_List.map combinator. In the Isabelle/Isar Implementation Manual there is a section on "Parallel skeletons" about it, and more information in the vicinity.

Here, I fill in a few details, based on Makarius Wenzel's answer.
It turns out that, with the Isabelle/ML library in src/Pure, running parallel, independent bash processes is extraordinarily easy:
ML {* (*Running this multiple times shows the order of execution varies.*)
Par_List.map : ('a -> 'b) -> 'a list -> 'b list;
val bash_strings = [
  "echo 'Echo 1.'; echo 'Echo 1.'; echo 'Echo 1.'; echo 'Echo 1.' ",
  "echo 'Bash 2, bro!'",
  "echo 'Bash 3, hombre or señorita!'",
  "echo 'Bash 4, dude!'"];
val _ = Par_List.map Isabelle_System.bash bash_strings
*} (*
OUTPUT:
Bash 4, dude!
Bash 2, bro!
Bash 3, hombre or señorita!
Echo 1.
Echo 1.
Echo 1.
Echo 1.
val it = fn: ('a -> 'b) -> 'a list -> 'b list
val bash_strings =
["echo 'Echo 1.'; echo 'Echo 1.'; echo 'Echo 1.'; echo 'Echo 1.' ",
"echo 'Bash 2, bro!'", "echo 'Bash 3, hombre or señorita!'",
"echo 'Bash 4, dude!'"]:
string list*)
A 5-minute solution to a multi-month implementation problem, not counting the overhead of asking the question, reading the answer, opening the PDF, firing up the PIDE, and typing various things before locking onto some source that shows a simple, working example.
Section 0.9.1 Parallel skeletons, The Isabelle/Isar Implementation, Isabelle2014 (PDF)
src/Pure/par_list.ML

Related

How can I tell if a makefile is being run from an interactive shell?

I have a makefile which runs commands that can take a while. I'd like those commands to be chatty if the build is initiated from an interactive shell but quieter if not (specifically, by cron). Something along the lines of (pseudocode):
foo_opts = -a -b -c
if (make was invoked from an interactive shell):
    foo_opts += --verbose

all: bar baz
	foo $(foo_opts)
This is GNU make. If the specifics of what I'm doing matter, I can edit the question.
This doesn't strictly determine whether make was invoked from an interactive shell, but for a cron job in which the output is redirected to a file, the answer to this question would be the same as for How to detect if my shell script is running through a pipe?:
if [ -t 0 ]
then
    # input is from a terminal
    echo "input is from a terminal"
fi
Edit: To use this to set a variable in a Makefile (in GNU make, that is):
INTERACTIVE:=$(shell [ -t 0 ] && echo 1)
ifdef INTERACTIVE
# is a terminal
else
# cron job
endif
http://www.faqs.org/faqs/unix-faq/faq/part5/section-5.html
5.5) How can I tell if I am running an interactive shell?
In the C shell category, look for the variable $prompt.
In the Bourne shell category, you can look for the variable $PS1,
however, it is better to check the variable $-. If $- contains
an 'i', the shell is interactive. Test like so:
case $- in
    *i*) # do things for interactive shell
         ;;
    *)   # do things for non-interactive shell
         ;;
esac
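A quick way to see the difference from a shell prompt (the exact flag letters vary by shell and platform):
$ echo $-          # typed at an interactive prompt: contains 'i'
$ sh -c 'echo $-'  # run non-interactively: no 'i'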
I do not think you can easily find out. I suggest adopting an alternative strategy, probably by quelling the verbose output from the cron job. I would look to do that using a makefile like this:
VERBOSE = --verbose
foo_opts = -a -b -c ${VERBOSE}

all: bar baz
	foo $(foo_opts)
Then, in the cron job, specify:
make VERBOSE=
This command-line specification of VERBOSE overrides the one in the makefile (and cannot be changed by the makefile). That way, the specialized task (cron job) that you set up once and use many times will be done without the verbose output; the general task of building will be done verbosely (unless you elect to override the verbose-ness on the command line).
One minor advantage of this technique is that it will work with any variant of make; it does not depend on any GNU Make facility.
I'm not really sure what "am interactive" means. Do you mean whether you have a valid /dev/tty? If so, you could check that. Most of us check isatty on stdin, though, because it answers the question we actually want answered: is someone there to type something?
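A short sketch of both checks in shell (the messages are only illustrative):
#!/bin/sh
# Check for a usable controlling terminal by trying to open /dev/tty.
if ( : < /dev/tty ) 2>/dev/null; then
    echo "a controlling terminal exists"
fi
# Check whether stdin itself is a terminal (isatty on fd 0).
if [ -t 0 ]; then
    echo "stdin is a terminal: someone may be there to type"
fi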
Just a note: you can also see the related discussion that I had about detecting redirection of STDOUT from inside a Makefile.
I believe it will be helpful to readers of this question - executive summary:
-include piped.mk

all: piped.mk
ifeq ($(PIPED),1)
	@echo Output of make is piped because PIPED is ${PIPED}
else
	@echo Output of make is NOT piped because PIPED is ${PIPED}
endif
	@rm -f piped.mk

piped.mk:
	@[ -t 1 ] && PIPED=0 || PIPED=1 ; echo "PIPED=$${PIPED}" > piped.mk
$ make
Output of make is NOT piped because PIPED is 0
$ make | more
Output of make is piped because PIPED is 1
In my answer there I explain why the [ -t 1 ] test has to be done in an action and not in a variable assignment (as in the recommended answer here), as well as the various pitfalls regarding re-evaluation of a generated Makefile (i.e. the piped.mk above).
The term interactive in this question seems to imply redirection of STDIN... in which case replacing [ -t 1 ] with [ -t 0 ] in my code above should work as-is.
Hope this helps.

In Unix, is it possible to give getopt a range of values to expect?

Sorry if the title is confusing, but here's what I mean:
If I have a script that can accept several parameters, I'd use the getopt command in order to more easily control script actions based on the parameters passed. However, let's say one of these parameters can be any number from 5 - 9, say. Is there a way to tell getopt that any number between 5 and 9 passed to the script should be taken as a single user command?
My code so far is something like:
#!/bin/sh
args=`getopt -o abc: -- "$@"`
eval set -- "$args"
echo "After getopt"
for i in $args
do
    case "$i" in
        -c) shift; echo "flag c set to $1"; shift;;
        -a) shift; echo "flag a set";;
        -b) shift; echo "flag b set";;
    esac
done
I want to see if I can do something like:
#!/bin/sh
args=`getopt -o ab[0-9]c: -- "$@"`
eval set -- "$args"
echo "After getopt"
for i in $args
do
    case "$i" in
        -c) shift; echo "flag c set to $1"; shift;;
        -a) shift; echo "flag a set";;
        -b) shift; echo "flag b set";;
        -[0-9]) shift; echo $i;;
    esac
done
No, at least not with the one I use (someone may have an enhanced one out there but it won't be standard anywhere).
For that particular case, it's probably okay to use:
args=`getopt -o ab0123456789c: -- "$@"`
although, for larger cases, that might be unwieldy.
Have you thought about not treating them as individual options? In other words, say they're debug levels for a logging procedure. Why could you not use:
args=`getopt -o abc:d: -- "$@"`
and specify them with progname -b -d4 instead of progname -b -4?
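Building on that suggestion, here is a sketch of the -d approach with the 5 - 9 range checked by the script itself; the option letters come from the answer above, the validation logic is mine:
#!/bin/sh
args=`getopt -o abc:d: -- "$@"`
eval set -- "$args"
while [ $# -gt 0 ]; do
    case "$1" in
        -a) echo "flag a set"; shift;;
        -b) echo "flag b set"; shift;;
        -c) echo "flag c set to $2"; shift 2;;
        -d) case "$2" in
                [5-9]) echo "level set to $2";;
                *) echo "level must be 5-9" >&2; exit 1;;
            esac
            shift 2;;
        --) shift; break;;
    esac
done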

In UNIX shell scripting: What does $! expand to?

What is the meaning of $! in the shell or in shell scripting? I am trying to understand a script which has something like the following.
local#usr> a=1
local#usr> echo $a
1
local#usr> echo $!a
a
It is printing the variable back. Is that all it does? What other $x options do we have? A few I know are $$, $*, $?. If anyone can point me to a good source, that would be helpful. BTW, this is on SunOS 5.8, ksh.
The various $… variables are described in the Bash manual. According to the manual, $! expands to the PID of the last process launched in the background. See:
$ echo "Foo"
Foo
$ echo $!
$ true&
[1] 67064
$ echo $!
67064
[1]+ Done true
In ksh it seems to do the same.
From the ksh man page on my system:
${!vname}
Expands to the name of the variable referred to by vname. This
will be vname except when vname is a name reference.
For the shell you are asking about, ksh, use the ksh manual, and read this:
Parameter Substitution
A parameter is an identifier, one or more digits, or any of
the characters *, @, #, ?, -, $, and !.
It is clear that those are the accepted options: $*, $@, $#, $?, $-, $$, and $!.
More could be included in the future.
For the parameter $!, from the manual:
"!" The process number of the last background command invoked.
If you start a background process, like sleep 60 &, then that process has a process number, and the parameter $! will print its number.
$ sleep 60 &
[1] 12329
$ echo "$!"
12329
If there is no background process in execution (as when the shell starts), the expansion is empty. It has a null value.
$ ksh -c 'echo $!'
If there is a background process, it will expand to the PID of such process:
$ ksh -c 'sleep 30 & echo $!'
42586
That is why echo $!a expanded to a. It is because there is no PID to report:
$ ksh -c 'echo $!a'
a
Other shells may have a different (usually quite similar) list of expansions (a parameter formed by one $ and one following character).
For example, bash recognizes *@#?-$!0_ as "Special parameters". Search the Bash manual for the heading "3.4.2 Special Parameters".
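To see the common ones side by side, here is a quick illustrative script (runnable under ksh or bash):
#!/bin/sh
sleep 1 &
echo "\$0 = $0"   # name of the script
echo "\$# = $#"   # number of positional parameters
echo "\$? = $?"   # exit status of the last command
echo "\$\$ = $$"  # PID of this shell
echo "\$- = $-"   # current option flags
echo "\$! = $!"   # PID of the last background command (the sleep above)
wait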
Special Parameters
The shell treats several parameters specially.
It gives the process ID of the last background job or background function.
Please go through the link below:
http://www.well.ox.ac.uk/~johnb/comp/unix/ksh.html#specvar
! is not exactly a "reference operator" in Unix, but several tools use it to escape back to the shell.
Try typing :! in vi; it takes you to a command prompt where you can execute commands as usual until exit.
! in SQLPLUS also executes a command from the command prompt. Try this in sqlplus:
SQL> !ls --- this gives the list of files in the current dir.
$! itself gives the process ID of the most recent background process.

ksh: how to probe stdin?

I want my ksh script to have different behaviors depending on whether there is something incoming through stdin or not:
(1) cat file.txt | ./script.ksh (then do "cat <&0 >./tmp.dat" and process tmp.dat)
vs. (2) ./script.ksh (then process $1 which must be a readable regular file)
Checking whether stdin is a terminal ([ -t 0 ]) is not helpful, because my script is called from another script.
Doing "cat <&0 >./tmp.dat" and checking tmp.dat's size hangs waiting for an EOF from stdin if stdin is "empty" (the 2nd case).
How to just check if stdin is "empty" or not?!
EDIT: You are running on HP-UX
Tested [ -t 0 ] on HP-UX and it appears to be working for me. I have used the following setup:
/tmp/x.ksh:
#!/bin/ksh
/tmp/y.ksh
/tmp/y.ksh:
#!/bin/ksh
test -t 0 && echo "terminal!"
Running /tmp/x.ksh prints: terminal!
Could you confirm the above on your platform, and/or provide an alternate test setup more closely reflecting your situation? Is your script ultimately spawned by cron?
EDIT 2
If desperate, and if Perl is available, define:
stdin_ready() {
    TIMEOUT=$1; shift
    perl -e '
        my $rin = "";
        vec($rin, fileno(STDIN), 1) = 1;
        select($rout = $rin, undef, undef, '$TIMEOUT') < 1 && exit 1;
    '
}
stdin_ready 1 || echo 'stdin not ready in 1 second, assuming terminal'
EDIT 3
Please note that the timeout may need to be significant if your input comes from sort, ssh etc. (all these programs can spawn and establish the pipe with your script seconds or minutes before producing any data over it.) Also, using a hefty timeout may dramatically penalize your script when there is nothing on the input to begin with (e.g. terminal.)
If potentially large timeouts are a problem, and if you can influence the way in which your script is called, then you may want to force the callers to explicitly instruct your program whether stdin should be used, via a custom option or in the standard GNU or tar manner (e.g. script [options [--]] FILE ..., where FILE can be a file name, a - to denote standard input, or a combination thereof, and your script would only read from standard input if - were passed in as a parameter.)
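A minimal sketch of that convention (the file names are illustrative): the script reads standard input only when explicitly told to.
#!/bin/ksh
# Read from stdin only when the caller passes "-"; otherwise $1 must be
# a readable regular file.
case "$1" in
    -)  cat >./tmp.dat;;                  # caller explicitly piped data in
    *)  if [ -f "$1" ] && [ -r "$1" ]; then
            cp "$1" ./tmp.dat
        else
            echo "usage: $0 FILE|-" >&2
            exit 1
        fi;;
esac
# ... process ./tmp.dat ...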
This strategy works for bash, and would likely work for ksh. Poll 'tty':
#!/bin/bash
set -a
if [ "$( tty )" == 'not a tty' ]
then
    STDIN_DATA_PRESENT=1
else
    STDIN_DATA_PRESENT=0
fi

if [ ${STDIN_DATA_PRESENT} -eq 1 ]
then
    echo "Input was found."
else
    echo "Input was not found."
fi
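Hypothetical invocations, assuming the script above is saved as script.sh:
$ ./script.sh              # stdin is a terminal
Input was not found.
$ echo data | ./script.sh  # stdin is a pipe
Input was found.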
Why not solve this in a more traditional way, and use the command line argument to indicate that the data will be coming from stdin?
For an example, consider the difference between:
echo foo | cat -
and
echo foo > /tmp/test.txt
cat /tmp/test.txt

How do you use newgrp in a script then stay in that group when the script exits

I am running a script on a Solaris box, specifically SunOS 5.7. I am not root. I am trying to execute a script similar to the following:
newgrp thegroup <<FOO
source .login_stuff
echo "hello world"
FOO
The script runs. The problem is that it returns to the calling process, which puts me back in the old group, with .login_stuff left unsourced. I understand this behavior. What I am looking for is a way to stay in the subshell. Now, I know I could put an xterm& in the script (see below) and that would do it, but having a new xterm is undesirable.
Passing your current pid as a parameter.
newgrp thegroup <<FOO
source .login_stuff
xterm&
echo $1
kill -9 $1
FOO
I do not have sg available.
Also, newgrp is necessary.
The following works nicely; put the following bit at the top of the (Bourne or Bash) script:
### first become another group
group=admin
if [ $(id -gn) != $group ]; then
    exec sg $group "$0 $*"
fi
### now continue with rest of the script
This works fine on Linuxen. One caveat: arguments containing spaces are broken apart. I suggest you use the env arg1='value 1' arg2='value 2' script.sh construct to pass them in (I couldn't get it to work with $@ for some reason).
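As a sketch of that construct (script.sh and the variable names are illustrative, not part of sg itself):
#!/bin/sh
# Hypothetical script.sh: read values from the environment instead of
# positional parameters, so embedded spaces survive intact.
echo "arg1 is: ${arg1:?arg1 not set}"
echo "arg2 is: ${arg2:?arg2 not set}"
Invoke it as env arg1='value 1' arg2='value 2' ./script.sh.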
The newgrp command can only meaningfully be used from an interactive shell, AFAICT. In fact, I gave up on it about ... well, let's say long enough ago that the replacement I wrote is now eligible to vote in both the UK and the USA.
Note that newgrp is a special command 'built into' the shell. Strictly, it is a command that is external to the shell, but the shell has built-in knowledge about how to handle it. The shell actually exec's the program, so you get a new shell immediately afterwards. It is also a setuid root program. On Solaris, at least, newgrp also seems to ignore the SHELL environment variable.
I have a variety of programs that work around the issue that newgrp was intended to address. Remember, the command pre-dates the ability of users to belong to multiple groups at once (see the Version 7 Unix manuals). Since newgrp does not provide a mechanism to execute commands after it executes, unlike su or sudo, I wrote a program newgid which, like newgrp, is a setuid root program and allows you to switch from one group to another. It is fairly simple: just main() plus a set of standardized error-reporting functions. Contact me (first dot last at gmail dot com) for the source. I also have a much more dangerous command called 'asroot' that allows me (but only me, under the default compilation) to tweak user and group lists much more thoroughly.
asroot: Configured for use by jleffler only
Usage: asroot [-hnpxzV] [<uid controls>] [<gid controls>] [-m umask] [--] command [arguments]
<uid controls> = [-u usr|-U uid] [-s euser|-S euid][-i user]
<gid controls> = [-C] [-g grp|-G gid] [-a grp][-A gid] [-r egrp|-R egid]
Use -h for more help
Option summary:
-a group Add auxilliary group (by name)
-A gid Add auxilliary group (by number)
-C Cancel all auxilliary groups
-g group Run with specified real GID (by name)
-G gid Run with specified real GID (by number)
-h Print this message and exit
-i Initialize UID and GIDs as if for user (by name or number)
-m umask Set umask to given value
-n Do not run program
-p Print privileges to be set
-r euser Run with specified effective UID (by name)
-R euid Run with specified effective UID (by number)
-s egroup Run with specified effective GID (by name)
-S egid Run with specified effective GID (by number)
-u user Run with specified real UID (by name)
-U uid Run with specified real UID (by number)
-V Print version and exit
-x Trace commands that are executed
-z Do not verify the UID/GID numbers
Mnemonic for effective UID/GID:
s is second letter of user;
r is second letter of group
(This program grew: were I redoing it from scratch, I would accept user ID or user name without requiring different option letters; ditto for group ID or group name.)
It can be tricky to get permission to install setuid root programs. There are some workarounds available now because of the multi-group facilities. One technique that may work is to set the setgid bit on the directories where you want the files created. This means that regardless of who creates the file, the file will belong to the group that owns the directory. This often achieves the effect you need - though I know of few people who consistently use this.
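A sketch of that setgid-directory technique (the directory and group names are illustrative):
# Files created under /shared/proj will belong to group 'thegroup',
# regardless of the creating user's current group.
chgrp thegroup /shared/proj
chmod g+s /shared/proj         # set the setgid bit on the directory
touch /shared/proj/newfile
ls -l /shared/proj/newfile     # group owner is 'thegroup'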
newgrp adm << ANYNAME
# You can do more lines than just this.
echo This is running as group \$(id -gn)
ANYNAME
..will output:
This is running as group adm
Be careful -- make sure you escape the '$' with a backslash. The interactions are a little strange, because the here-document is expanded by the current shell (even single quotes do not prevent this) before the new shell executes as the other group. So, if your primary group is 'users', and the group you're trying to use is 'adm', then:
newgrp adm << END
# You can do more lines than just this.
echo 'This is running as group $(id -gn)'
END
..will output:
This is running as group users
..because 'id -gn' was run by the current shell, then sent to the one running as adm.
Anyways, I know this post is ancient, but hope this is useful to someone.
This example was expanded from plinjzaad's answer; it handles a command line which contains quoted parameters that contain spaces.
#!/bin/bash
group=wg-sierra-admin
if [ $(id -gn) != $group ]
then
    # Construct an array which quotes all the command-line parameters.
    arr=("${@/#/\"}")
    arr=("${arr[*]/%/\"}")
    exec sg $group "$0 ${arr[*]}"
fi
### now continue with rest of the script
# This is a simple test to show that it works.
echo "group: $(id -gn)"
# Show all command line parameters.
for i in $(seq 1 $#)
do
    eval echo "$i:\${$i}"
done
I used this to demonstrate that it works.
% ./sg.test 'a b' 'c d e' f 'g h' 'i j k' 'l m' 'n o' p q r s t 'u v' 'w x y z'
group: wg-sierra-admin
1:a b
2:c d e
3:f
4:g h
5:i j k
6:l m
7:n o
8:p
9:q
10:r
11:s
12:t
13:u v
14:w x y z
Maybe
exec $SHELL
would do the trick?
You could use sh& (or whatever shell you want to use) instead of xterm&
Or you could also look into using an alias (if your shell supports this) so that you would stay in the context of the current shell.
In a script file, e.g. tst.ksh:
#! /bin/ksh
/bin/ksh -c "newgrp thegroup"
At the command line:
>> groups fred
oldgroup
>> tst.ksh
>> groups fred
thegroup
sudo su - [user-name] -c exit;
Should do the trick :)
