Is there an egg or some library that would allow me to write CSP style programs in Scheme? By CSP style I mean what's implemented in Go (go/channel/select) or Clojure's core.async.
Chicken Scheme has a channel egg that you could try out.
$ chicken-install -s channel
Chicken now also has a gochan egg (which I wrote). It's much simpler than channel, which seems somewhat complex, and I'm not sure how well channel has been tested.
$ chicken-install -s gochan
$ csi -R gochan -p '(gochan-receive (gochan "hello world"))'
hello world
Scenario:
Let's say a script my_online_searcher <query> opens a browser with the search for query, where query can be multiple words long. This same script also provides a utility flag -s <query> that shows the search engine's suggestions. E.g.:
$ my_online_searcher -s lion rema
lion remake
lion remake cast
lion remake zoo tycoon 2
lion remake zt2
remake lion king
remastered lion's share
remake lion king 2019
remastered lion king
remake lion king trailer
remaking lion king
Desired outcome:
The user would type $ my_online_searcher lion rema[TAB] and the ZSH completion menu would offer the options above.
Attempts to achieve desired outcome:
Create a small completion script _my_online_searcher that essentially calls my_online_searcher -s <query> to produce the options. Something like:
[...]
completions=(${(f)"$(my_online_searcher -s ${arg} ${words:2})"})
_describe 'suggestions' completions
[...]
This allows the spaces to be escaped so that ZSH sees the query as a single argument.
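For reference, a minimal self-contained sketch of that completion file might look like the following (everything here is hypothetical, just reusing the names from this question):
#compdef my_online_searcher
# Sketch only: ask the tool itself for suggestions for everything typed after the command.
_my_online_searcher() {
  local -a completions
  completions=(${(f)"$(my_online_searcher -s ${words[2,-1]})"})
  _describe 'suggestions' completions
}
_my_online_searcher "$@"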
However, per-word completion does not allow progressive completion of the suggestions. Either it repeats the previous words, or it filters out suggestions that do not begin with the first arguments (e.g. $ my_online_searcher lion rema[TAB] -> $ my_online_searcher lion remake lion king).
One idea was to modify the LBUFFER inside the completion script. This turned out not to be possible, as ZSH gives an error that LBUFFER is read-only.
TL;DR:
Is it possible to have the ZSH completion system treat all the arguments after the command as a single modifiable argument? In other words, can I propose completions for multiple arguments at the same time in ZSH?
We are having a discussion at work about which UNIX command-line tool is best for viewing log files. One side says use LESS, the other says use MORE. Is one better than the other?
A common problem is that too many processes write to the same log, so I prefer to filter my log files and control the output using:
tail -f /var/log/<some logfile> | grep <some identifier> | more
This combination of commands allows you to watch an active log file without getting overwhelmed by the output.
I opt for less. One reason is that (with the aid of lessopen) it can read gzipped logs (as archived by logrotate).
As an example, with this single command I can read the dpkg logs in time order, without treating the gzipped ones differently:
less $(ls -rt /var/log/dpkg.log*) | less
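The gzip support comes from less's LESSOPEN input preprocessor; on Debian-style systems, for example, it is typically wired up with something like:
eval "$(lesspipe)"    # sets LESSOPEN (and LESSCLOSE) so less can decompress .gz files on the fly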
Multitail is the best option, because you can view multiple logs at the same time. It also colors stuff, and you can set up regex to highlight entries you're looking for.
You can use any program: less, nano, vi, tail, cat, etc.; they differ in functionality.
There are also many log viewers: gnome-system-log, kiwi, etc. (they can sort logs by date, type, and so on).
Less is more. Although, since I'm typically searching for something specific or just interested in the last few events when I look at my logs, I find myself using cat, pipes and grep or tail rather than more or less.
less is the best, imo. It is lightweight compared to an editor, it allows forward and backward navigation, it has powerful search capabilities, and much more. Hit 'h' for help. It's well worth the time getting familiar with it.
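A few of the keys I find most useful (all listed in less's built-in help):
/pattern   search forward for pattern
?pattern   search backward
n / N      repeat the last search, forward / backward
G / g      jump to the end / beginning of the file
F          follow the file as it grows (like tail -f); interrupt with Ctrl-C to stop
q          quit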
On my Mac, using the standard terminal windows, there's one difference between less and more, namely, after exiting:
less leaves less mess on my screen
more leaves more useful information on my screen
Consequently, if I think I might want to do something with the material I'm viewing after the viewer finishes (for example, copy'n'paste operations), I use more; if I don't want to use the material after I've finished, then I use less.
The primary advantage of less is the ability to scroll backwards; therefore, I tend to use less rather than more, but both have uses for me. YMMV (YMWV; W = Will in this case!).
As your question was generically about 'Unix systems', keep in mind that in some cases you have no choice: on old systems only MORE is available, not LESS.
LESS is part of the GNU tools; MORE comes from the UCB days.
Turn on grep's line buffering mode.
Using tail (Live monitoring)
tail -f fileName
Using less (Live monitoring)
less +F fileName
Using tail & grep
tail -f fileName | grep --line-buffered my_pattern
Using less & grep
tail -f fileName | grep --line-buffered my_pattern | less +F
Using watch & tail to highlight new lines
watch -d tail fileName
Note: for Linux systems.
For my developer work I reside in the *nix shell environment pretty much all day, but I still can't seem to memorize the names and argument specifics of programs I don't use daily. I wonder how other 'casual amnesiacs' handle this. Do you maintain a big cheat sheet? Do you rehearse the emacs shortcuts when you take your weekly shower? Or is your desk covered in sticky notes?
Using bash_completion is one way of not having to remember the precise syntax of program arguments.
> svn [tab][tab]
--help checkout delete lock pdel propget revert
--version ci diff log pedit proplist rm
-h cleanup export ls pget propset status
add co help merge plist pset switch
annotate commit import mkdir praise remove unlock
blame copy info move propdel rename update
cat cp list mv propedit resolved
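If completion isn't already active, it usually just needs to be sourced from your shell startup file; the exact path varies by distribution, but these are common locations:
source /etc/bash_completion
# or, on newer systems:
source /usr/share/bash-completion/bash_completion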
If I don't use a command regularly enough to remember what I want, I tend to just use --help or the man pages when I need to.
Or, if I'm lucky, I use CTRL+R and let bash's history search find when I last used it.
Eventually you just remember them, or at least the set that you use. I used to maintain a README in my home directory when I was starting out, but that disappeared many years ago.
One useful command is man -k: pass it a word and it will return a list of all commands whose man page summary contains that word.
apropos is also a very useful command; it is equivalent to man -k and lists all commands whose man page descriptions contain the keyword.
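For example, to find anything related to compression (the exact list will vary by system):
$ man -k compress
$ apropos compress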
Is there a case of ... or context where cat file | ... behaves differently than ... <file?
When reading from a regular file, cat is in charge of reading the data; it performs the reads as it pleases and may constrain the way it writes them to the pipeline. Obviously, the contents themselves are preserved, but anything else could be affected, for example the block size and the data-arrival timing. Additionally, the pipe in itself isn't always neutral: it serves as an additional buffer between the input and ....
Quick and easy way to make the block size issue apparent:
$ cat large-file | pv >/dev/null
5,44GB 0:00:14 [ 393MB/s] [ <=> ]
$ pv <large-file >/dev/null
5,44GB 0:00:03 [1,72GB/s] [=================================>] 100%
Besides what other users have posted: when using input redirection from a file, standard input is the file itself, but when piping the output of cat, standard input is a pipe carrying the contents of the file. When standard input is the file, the program can seek within it; a pipe does not allow seeking. You can see this by finding a zip file and running the following commands:
zipinfo /dev/stdin < thezipfile.zip
and
cat thezipfile.zip | zipinfo /dev/stdin
The first command will show the contents of the zipfile while the second will show an error, though it is a misleading error because zipinfo does not check the result of the seek call and errors later on.
A useless use of cat is always to be avoided. It's like driving with the handbrake on. It wastes CPU cycles for nothing, the OS constantly context switching between the cat process and the next in the pipe. If all the world's useless cats were gone and stopped being invented, reinvented, passed on from father to son, we wouldn't have global warming because we could easily live with 1.21 Gigawatts of power saved.
Thanks. I feel better now. Please join me in my crusade to stamp out useless use of cat on stackoverflow. This site is, as far as I perceive it, a major contribution to the proliferation of useless cats. I don't blame the newbies, but I do want to teach them. Workers and newbies of the world, loosen the handbrakes and save the planet!!!1!
cat will allow you to pipe multiple files in sequentially. Otherwise, < redirection and cat file | produce the same side effects.
Pipes cause a subshell to be spawned for the command on the right (in Bash, at least). This affects shell variables: anything assigned inside the loop is lost after the first form below, but survives in the second.
cat foo | while read line
do
...
done
echo "$line"
versus
while read line
do
...
done < foo
echo "$line"
One further difference is behavior on a blocking open() of the input file.
For example, assuming input is a FIFO with no writers, one invocation will not spawn any child programs until the input file is opened, while the other will spawn two processes:
prog ... < a_fifo # 'prog' not launched until shell can open file
cat a_fifo | prog ... # 'prog' and 'cat' are running (latter may block on open)
In practice this rarely matters except in contrived circumstances. prog might periodically log or do some cleanup work while waiting for input, for example, which you might want to happen even if no input is available. (Why wouldn't prog be sophisticated enough to open its own input fifo nonblocking?)
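A quick way to observe this, using mkfifo and sleep as a stand-in for prog (check with ps from another terminal):
$ mkfifo a_fifo
$ sleep 60 < a_fifo &        # the forked shell blocks in open(); no sleep process exists yet
$ cat a_fifo | sleep 60 &    # sleep is already running; only cat is blocked in open()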
cat file | starts up another program (cat) that doesn't have to start in the second case. It also makes things more confusing if you want to use "here documents". Otherwise, it should behave the same.
I've looked this up a thousand times, and I always forget it, so, here for eternity:
Solaris has a bit of an awkward syntax for tail.
How do I do the equivalent of BSD's tail -nN?
What I want are the last N lines from tail's input.
Just remove the "n"
tail -100
Or you can use:
/usr/xpg4/bin/tail
which does behave like you want (tail -nN).
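For example (the file is arbitrary; /var/adm/messages is just the usual Solaris system log):
$ /usr/xpg4/bin/tail -n 100 /var/adm/messages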
xpg4 = X/Open Portability Guide Issue 4; the directory contains binaries strictly compliant with several POSIX and other standards. The differences from the default ones are usually details in the options supported and in behavior.
Depending on your release, there are also /usr/xpg6/bin, /usr/openwin/bin (OpenWindows commands), /usr/dt/bin (CDE desktop commands), /usr/sfw/bin (Solaris freeware) and various others.
For instance, Solaris Express is introducing /usr/gnu/bin to provide GNU binaries with their custom extensions and specifics.
Cross-platform variant of tail -n 10 for scripts:
sed -e :a -e '$q;N;11,$D;ba' file
This works the same for Linux and Solaris.
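For reference, here is what each part does; for the last N lines instead of 10, change 11 to N+1 (the script keeps a sliding window of at most 10 lines in sed's pattern space):
:a       label for the loop
$q       on the last input line, quit, printing the buffered window
N        append the next input line to the pattern space
11,$D    from line 11 onward, drop the oldest buffered line and restart the cycle
ba       branch back to :a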