Run jq in fzf preview with arguments - jq

I have some json data and I want to interactively query it with fzf and jq, by sending the data through stdin and typing the jq query into the fzf query box.
My attempt so far shows one result in the preview box, but editing the contents of the query box turns the results blank instead.
fzf-tmux --preview 'jq "$@" <<< {}' <<<'[{"x": 1}, {"y": 2}]'

A recent Hacker News post about using fzf as a REPL had me thinking it would be awesome to live-edit jq filters as well. Using the base implementation from that article, I ended up with:
echo '' | fzf --print-query --preview='jq {q} <(echo "[{\"x\": 1}, {\"y\": 2}]")'
You can clean up the quoting a bit, at the expense of some verbosity, by changing it to:
(export json='[{"x": 1}, {"y": 2}]'; echo '' | fzf --print-query --preview='jq {q} <(echo $json)')
or (somewhat safer for unvalidated input, since printf does not interpret backslash escapes or option-like arguments the way echo can):
(export json='[{"x": 1}, {"y": 2}]'; echo '' | fzf --print-query --preview='jq {q} <(printf "%s" "$json")')
Final example, using the StackExchange API to retrieve this post:
(export json=$(curl -s --compressed -H "Accept-Encoding: GZIP" "https://api.stackexchange.com/2.2/posts/56744579?site=stackoverflow&filter=withbody"); echo '' | fzf --print-query --preview-window=wrap --preview='filter={q}; jq -M -r "${filter}" <(printf "%s" "$json")')
One more example, added around 18 months later. This is the same as the previous example, but for the fish shell. It also uses httpie to clean things up, since httpie automatically handles things like encoding/compression. I left the color output in on this one, too:
begin
    set -lx jq_url 'https://api.stackexchange.com/2.2/posts/56744579?site=stackoverflow&filter=withbody'
    echo '' | fzf --print-query --preview='set -x q {q}; jq -C {q} (http -b GET "$jq_url" | psub)'
end
Note: The begin/end block is only there to keep variables in a local scope. It isn't required for the example to work; it just keeps from polluting the namespace.

If you're expecting "$@" to be expanded by the shell, then the simple fix is to modify the quoting:
fzf-tmux --preview 'jq '"$@"' <<< {}'
If, on the other hand, you want to use the {q} feature of fzf, which seems to be the case, then you may be out of luck; whether that's because of a bug in fzf or some incompatibility between jq and fzf, I cannot tell.
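One workaround, sketched here under a couple of assumptions (a reasonably recent fzf that supports --disabled, and the input saved to a file; data.json is an illustrative name), is to turn off fzf's filtering entirely so that typing a query can never filter out the line being previewed:
echo '' | fzf --disabled --print-query --preview 'jq {q} data.json'
Since nothing is ever filtered out, the preview keeps re-rendering with the current {q} as you type.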
Navigating by paths
Let's suppose $JSON names a file containing a single JSON array or object. Then when running the following, you'll see the paths on the LHS, and the value at the selected path on the RHS:
jq -rc paths "$JSON" |
fzf-tmux --preview 'x={}; jq "getpath($x)" '"$JSON"
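For example, with the input from the original question saved to a file (sample.json is an illustrative name), the LHS entries produced by jq -rc paths would look like:
$ jq -rc paths sample.json
[0]
[0,"x"]
[1]
[1,"y"]
Selecting [0,"x"] then previews getpath([0,"x"]), i.e. 1.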

Related

How to get the name of the current command in zsh? (to email yourself shell output in zsh with a good Subject line)

In tcsh, the most "unscriptable" shell, I often use the following code to email myself output of any command, with the subject line conveniently set exactly to what the command was:
~/script.sh |& mail -s "`history 1| cut -f3-`" user@example.org
Here's the example session to show that the output I get is exactly as the input command:
tcsh% printf "%s\n" "`history 1| cut -f3-`"
printf "%s\n" "`history 1| cut -f3-`"
tcsh%
Note that it works perfectly with the long-term history storage, too, and can be modified without having to be written again.
I've tried using the following in zsh, but it's one command behind, so it does not produce the required output; plus it looks pretty ugly, with the extra -d and tr being required:
\history -1 | tr -s ' ' | cut -d\ -f3-
I've also tried !# in zsh, but it results in broken history (stuff gets duplicated), and doesn't even work properly by itself, either:
zsh% printf "%s\n" "!#"
dquote>
Is there any way in zsh to get the whole command line that's being executed such that it could be used as the subject line for the email?
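One possible direction (a minimal sketch, not a tested answer: it relies on zsh's preexec hook, which receives the command line being executed as its first argument and runs before the line's expansions happen; __cmdline is an illustrative name):
preexec() { __cmdline=$1; }
~/script.sh |& mail -s "$__cmdline" user@example.org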

How to extract individual values from output aws cli --query

I'm running this query with zsh:
output=$(aws sagemaker describe-training-job \
--training-job-name $name \
--query '{S3ModelArtifacts:ModelArtifacts.S3ModelArtifacts,TrainingImage:AlgorithmSpecification.TrainingImage,RoleArn:RoleArn}')
But for the life of me I can't seem to individually extract out S3ModelArtifacts, TrainingImage, and RoleArn.
It seems to be neither an array nor an associative array? But it looks like it's JSON format when I do echo $output.
Ultimately I just want to be able to do something like var=${output[TrainingImage]} but this just gives me the whole response instead of just the TrainingImage value.
Any help appreciated.
You can use the command-line tool jq to parse JSON output like so:
(19-11-27 10:25:38) <0> [~] printf %s "$output" | jq '.TrainingImage'
"123456789877.dkr.ecr.eu-west-1.amazonaws.com/kmeans:1"
Or, as this is a pretty simple query, you can use sed:
(19-11-27 10:25:43) <0> [~] printf %s "$output" | sed -n -e 's/^.*TrainingImage"://p'
"123456789877.dkr.ecr.eu-west-1.amazonaws.com/kmeans:1",
The sed command works because the aws cli pretty-prints its JSON: -n suppresses automatic printing, the substitution strips everything up to and including TrainingImage": on the one line that contains it, and the trailing p prints only that modified line.
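If you want all three values at once, a hedged sketch (variable names are illustrative) is to have jq emit one tab-separated line via @tsv and read it into zsh variables:
IFS=$'\t' read -r s3model image role < <(printf '%s' "$output" | jq -r '[.S3ModelArtifacts, .TrainingImage, .RoleArn] | @tsv')
echo "$image"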

Zsh read output of command into array splitting on newline

I have a command that outputs a bunch of stuff e.g. running mycmd will give:
foobar
derp derp
like so
etc
Some of these lines will have spaces in them.
How do I read these into an array in zsh such that ${arr[1]} gives foobar, ${arr[2]} gives derp derp, etc.?
I have tried the following, but it seems to split the result into characters rather than lines.
IFS=$'\n' read -d '' -r arr <<< "$(mycmd)"
i.e. ${arr[1]} gives f when it should give foobar
Okay, it's actually very simple:
IFS=$'\n' arr=($(mycmd))
I'm not sure exactly why the read usage in the original question didn't work. It's possibly related to mixing <<< and $(). Or maybe the user just had a messed up shell session. Or maybe it was a bug in an older version of Zsh.
In any case, it has nothing to do with the behavior of the read builtin, and the original proposal was very close to correct. The only problem was using <<< $(...) instead of a plain pipe, which should just be a stylistic goof (rather than an error).
The following works perfectly fine in Zsh 5.8.1 and 5.9:
function mycmd {
    print foobar
    print derp derp
    print like so
    print etc
}
typeset -a lines
mycmd | IFS=$'\n' read -r -d '' -A lines
echo ${(F)lines}
You should see:
foobar
derp derp
like so
etc
I prefer this style, instead of ( $(...) ). Not requiring a subshell is useful in many cases, and the quoting/escaping situation is a lot simpler.
Note that -d '' is required to prevent read from terminating at the first newline.
You can wrap this up in a function easily:
function read-lines {
    if (( $# != 1 )); then
        print -u2 'Exactly 1 argument is required.'
        return 2
    fi
    local array="$1"
    IFS=$'\n' read -r -d '' -A "$array"
}
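For example, reusing the mycmd defined above (assuming the fixed function):
mycmd | read-lines lines
echo ${(F)lines}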

When it is allowed to omit the dot filter in jq?

I do not understand when it is allowed to omit the dot expression.
It is possible to convert every line of raw input into a JSON string:
$ echo -e "a\nb" | jq -Rc .
"a"
"b"
In that example it makes no difference when the dot expression is missing:
$ echo -e "a\nb" | jq -Rc
"a"
"b"
Next I can read the output from the first jq and slurp it into an array:
$ echo -e "a\nb" | jq -Rc . | jq -sc .
["a","b"]
Here it also makes no difference when I omit the dot expression:
$ echo -e "a\nb" | jq -Rc . | jq -sc
["a","b"]
But when I omit both dot expressions, I get a usage error and an empty array as the result:
$ echo -e "a\nb" | jq -Rc | jq -sc
jq - commandline JSON processor [version 1.5]
Usage: jq [options] <jq filter> [file...]
...
[]
Why?
Before directly answering the question, I'd like to clarify that:
It is always acceptable to specify a filter explicitly.
Some versions of jq expect that a filter will be specified explicitly.
Different versions of jq behave differently in the absence of an explicit filter.
The main idea guiding jq's evolution with regard to interpreting the absence of a filter intelligently has been that if there's something to read on STDIN, and if a filter has not been specified explicitly, and if it looks like you meant ., then assume you did mean ..
The answer to the question, then, is that the perplexing behavior noted in the question is a bug in a particular version of jq.
(Or if you like, the perplexing behavior reflects the difficulties that arise when developers seek to endow software with the ability to read your mind.)
By the way, the bug has been fixed:
$ jq --version
jq-1.5rc2-150-g1740fd0
$ echo -e "a\nb" | jq -Rc | jq -sc
["a","b"]
The answer is in the rest of the usage message:
Usage: jq [options] <jq filter> [file...]
A filter should therefore be mandatory: a filter takes an input and produces an output. But often you don't need to transform the input and just want the result printed, so . became the default filter (a behavior I believe was introduced in 1.5; before that, you had to include the filter).
So omitting the filter should behave the same as an explicit .; unfortunately, the default interacts badly with how jq checks whether stdin and stdout are pipes. You can read the details in the GitHub issue:
Maybe we should print the usage message only when the program is empty, and stdin and stdout are both terminals? That is, assume . when stdin is not a terminal or when stdout is not a terminal.
So the rule is:
if you want to be a perfectionist, always use a filter, even if . is the filter you want;
if you want the result of your command to be the input of another pipe, you must indicate the filter, even when all you want is the input passed through unchanged.
For the same reason,
echo -e "a\nb" | jq -Rc > test.txt will produce an error, but echo -e "a\nb" | jq -Rc . > test.txt will write the result of the command into the file.

Is there a Unix utility to prepend timestamps to stdin?

I ended up writing a quick little script for this in Python, but I was wondering if there was a utility you could feed text into which would prepend each line with some text -- in my specific case, a timestamp. Ideally, the use would be something like:
cat somefile.txt | prepend-timestamp
(Before you answer sed, I tried this:
cat somefile.txt | sed "s/^/`date`/"
But that only evaluates the date command once when sed is executed, so the same timestamp is incorrectly prepended to each line.)
ts from moreutils will prepend a timestamp to every line of input you give it. You can format it using strftime too.
$ echo 'foo bar baz' | ts
Mar 21 18:07:28 foo bar baz
$ echo 'blah blah blah' | ts '%F %T'
2012-03-21 18:07:30 blah blah blah
$
To install it:
sudo apt-get install moreutils
Could try using awk:
<command> | awk '{ print strftime("%Y-%m-%d %H:%M:%S"), $0; fflush(); }'
You may need to make sure that <command> produces line buffered output, i.e. it flushes its output stream after each line; the timestamp awk adds will be the time that the end of the line appeared on its input pipe.
If awk shows errors, then try gawk instead.
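If <command> block-buffers when its stdout is a pipe, one fix (a sketch, assuming GNU coreutils' stdbuf is available) is to force line buffering:
stdbuf -oL <command> | awk '{ print strftime("%Y-%m-%d %H:%M:%S"), $0; fflush(); }'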
annotate, available as annotate-output in the Debian devscripts package.
$ echo -e "a\nb\nc" > lines
$ annotate-output cat lines
17:00:47 I: Started cat lines
17:00:47 O: a
17:00:47 O: b
17:00:47 O: c
17:00:47 I: Finished with exitcode 0
Distilling the given answers to the simplest one possible:
unbuffer $COMMAND | ts
On Ubuntu, they come from the expect-dev and moreutils packages.
sudo apt-get install expect-dev moreutils
How about this?
cat somefile.txt | perl -pne 'print scalar(localtime()), " ";'
Judging from your desire to get live timestamps, maybe you want to do live updating on a log file or something? Maybe
tail -f /path/to/log | perl -pne 'print scalar(localtime()), " ";' > /path/to/log-with-timestamps
Kieron's answer is the best one so far. If you have problems because the first program is buffering its output, you can use the unbuffer program:
unbuffer <command> | awk '{ print strftime("%Y-%m-%d %H:%M:%S"), $0; }'
It's installed by default on most Linux systems. If you need to build it yourself, it is part of the expect package:
http://expect.nist.gov
Just gonna throw this out there: there are a pair of utilities in daemontools called tai64n and tai64nlocal that are made for prepending timestamps to log messages.
Example:
cat file | tai64n | tai64nlocal
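The tai64n stage prepends an opaque @4000000... TAI64N label to each line; tai64nlocal then converts that label to readable local time, so the final output looks roughly like this (timestamp values illustrative):
2012-03-21 18:07:28.261941500 first line of file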
Use the read(1) command to read one line at a time from standard input, then output the line prepended with the date in the format of your choosing using date(1).
$ cat timestamp
#!/bin/sh
while IFS= read -r line
do
    echo "`date` $line"
done
$ cat somefile.txt | ./timestamp
I'm not a Unix guy, but I think you can use:
gawk '{ print strftime("%d/%m/%y", systime()), $0 }' < somefile.txt
#! /bin/sh
unbuffer "$@" | perl -e '
    use Time::HiRes qw(gettimeofday);
    while (<>) {
        ($s, $ms) = gettimeofday();
        print $s . "." . $ms . " " . $_;
    }'
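Saved as, say, stamp.sh and made executable (the name is illustrative), it wraps an arbitrary command:
./stamp.sh tail -f /var/log/syslog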
$ cat somefile.txt | sed "s/^/`date`/"
You can do this (with GNU sed):
$ some-command | sed "x;s/.*/date +%T/e;G;s/\n/ /g"
example:
$ { echo 'line1'; sleep 2; echo 'line2'; } | sed "x;s/.*/date +%T/e;G;s/\n/ /g"
20:24:22 line1
20:24:24 line2
Of course, you can use other options of the date program; just replace date +%T with what you need.
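For instance, for a full date and time (same GNU sed e-flag trick, just a different format string):
$ some-command | sed "x;s/.*/date '+%F %T'/e;G;s/\n/ /g"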
Here's my awk solution (from a Windows/XP system with MKS Tools installed in the C:\bin directory). It is designed to add the current date and time in the form mm/dd hh:mm to the beginning of each line having fetched that timestamp from the system as each line is read. You could, of course, use the BEGIN pattern to fetch the timestamp once and add that timestamp to each record (all the same). I did this to tag a log file that was being generated to stdout with the timestamp at the time the log message was generated.
/"pattern"/ "C\:\\\\bin\\\\date '+%m/%d %R'" | getline timestamp;
print timestamp, $0;
where "pattern" is a string or regex (without the quotes) to be matched in the input line, and is optional if you wish to match all input lines.
This should work on Linux/UNIX systems as well, just get rid of the C\:\\bin\\ leaving the line
"date '+%m/%d %R'" | getline timestamp;
This, of course, assumes that the command "date" gets you to the standard Linux/UNIX date display/set command without specific path information (that is, your environment PATH variable is correctly configured).
Mixing some answers above from natevw and Frank Ch. Eigler.
It has milliseconds, performs better than calling an external date command for each line, and perl can be found on most servers:
tail -f log | perl -pne '
    use Time::HiRes qw(gettimeofday);
    use POSIX qw(strftime);
    ($s,$ms) = gettimeofday();
    print strftime "%Y-%m-%dT%H:%M:%S+$ms ", gmtime($s);
'
Alternative version with flush and read in a loop:
tail -f log | perl -e '
    use Time::HiRes qw(gettimeofday); use POSIX qw(strftime);
    $|=1;
    while (<>) {
        ($s,$ms) = gettimeofday();
        print strftime "%Y-%m-%dT%H:%M:%S+$ms $_", gmtime($s);
    }'
caerwyn's answer can be run as a subroutine instead of a separate script:
timestamp(){
    while IFS= read -r line
    do
        echo "`date` $line"
    done
}
echo testing 123 |timestamp
Disclaimer: the solution I am proposing is not a Unix built-in utility.
I faced a similar problem a few days ago. I did not like the syntax and limitations of the solutions above, so I quickly put together a program in Go to do the job for me.
You can check the tool here: preftime
There are prebuilt executables for Linux, MacOS, and Windows in the Releases section of the GitHub project.
The tool handles incomplete output lines and has (from my point of view) a more compact syntax.
<command> | preftime
It's not ideal, but I thought I'd share it in case it helps someone.
The other answers mostly work, but have some drawbacks. In particular:
1. Many require installing a command not commonly found on Linux systems, which may not be possible or convenient.
2. Since they use pipes, they don't put timestamps on stderr, and they lose the exit status.
3. If you use multiple pipes for stderr and stdout, then some do not have atomic printing, leading to intermingled lines of output like [timestamp] [timestamp] stdout line \nstderr line
4. Buffering can cause problems, and unbuffer requires an extra dependency.
To solve (4), we can use stdbuf -i0 -o0 -e0, which is generally available on most Linux systems (see How to make output of any shell command unbuffered?).
To solve (3), you just need to be careful to print the entire line at a time.
Bad: ruby -pe 'print Time.now.strftime("[%Y-%m-%d %H:%M:%S] ")' (Prints the timestamp, then prints the contents of $_.)
Good: ruby -pe '$_ = Time.now.strftime("[%Y-%m-%d %H:%M:%S] ") + $_' (Alters $_, then prints it.)
To solve (2), we need to use multiple pipes and save the exit status:
alias tslines-pipe="stdbuf -i0 -o0 ruby -pe '\$_ = Time.now.strftime(\"[%Y-%m-%d %H:%M:%S] \") + \$_'"
function tslines() (
    stdbuf -o0 -e0 "$@" 2> >(tslines-pipe) > >(tslines-pipe)
    status="$?"
    exit $status
)
Then you can run a command with tslines some command --options.
This almost works, except sometimes one of the pipes takes slightly longer to exit than the tslines function does, so the next prompt has already been printed. For example, this command seems to print all the output after the prompt for the next line has appeared, which can be a bit confusing:
tslines bash -c '(for (( i=1; i<=20; i++ )); do echo stderr 1>&2; echo stdout; done)'
There needs to be some coordination between the two pipe processes and the tslines function. There are presumably many ways to do this. One way I found is to have the pipe handlers send a line to a named pipe that the main function listens on, and only exit after it has received data from both of them. Putting that together:
alias tslines-pipe="stdbuf -i0 -o0 ruby -pe '\$_ = Time.now.strftime(\"[%Y-%m-%d %H:%M:%S] \") + \$_'"
function tslines() (
    # Pick a random name for the pipe to prevent collisions.
    pipe="/tmp/pipe-$RANDOM"
    # Ensure the pipe gets deleted when the method exits.
    trap "rm -f $pipe" EXIT
    # Create the pipe. See https://www.linuxjournal.com/content/using-named-pipes-fifos-bash
    mkfifo "$pipe"
    # echo will block until the pipe is read.
    stdbuf -o0 -e0 "$@" 2> >(tslines-pipe; echo "done" >> $pipe) > >(tslines-pipe; echo "done" >> $pipe)
    status="$?"
    # Wait until we've received data from both pipe commands before exiting.
    linecount=0
    while [[ $linecount -lt 2 ]]; do
        read line
        if [[ "$line" == "done" ]]; then
            ((linecount++))
        fi
    done < "$pipe"
    exit $status
)
That synchronization mechanism feels a bit convoluted; hopefully there's a simpler way to do it.
Doing it with date, tr, and xargs on OSX:
alias predate="xargs -I{} sh -c 'date +\"%Y-%m-%d %H:%M:%S\" | tr \"\n\" \" \"; echo \"{}\"'"
<command> | predate
if you want milliseconds:
alias predate="xargs -I{} sh -c 'date +\"%Y-%m-%d %H:%M:%S.%3N\" | tr \"\n\" \" \"; echo \"{}\"'"
but note that on OSX, date doesn't give you the %N option, so you'll need to install gdate (brew install coreutils) and so finally arrive at this:
alias predate="xargs -I{} sh -c 'gdate +\"%Y-%m-%d %H:%M:%S.%3N\" | tr \"\n\" \" \"; echo \"{}\"'"
No need to specify all the parameters in strftime() unless you really want to customize the output format:
echo "abc 123 xyz\njan 765 feb" \
\
| gawk -Sbe 'BEGIN {_=strftime()" "} sub("^",_)'
Sat Apr 9 13:14:53 EDT 2022 abc 123 xyz
Sat Apr 9 13:14:53 EDT 2022 jan 765 feb
It works the same if you have mawk 1.3.4. Even on awk variants without the time features, a quick getline can emulate it:
echo "abc 123 xyz\njan 765 feb" \
\
| mawk2 'BEGIN { (__="date")|getline _;
close(__)
_=_" " } sub("^",_)'
Sat Apr 9 13:19:38 EDT 2022 abc 123 xyz
Sat Apr 9 13:19:38 EDT 2022 jan 765 feb
If you wanna skip all that getline and BEGIN { }, then something like this:
mawk2 'sub("^",_" ")' \_="$(date)"
If the value you are prepending is the same on every line, fire up emacs with the file, then:
Ctrl + <space>
at the beginning of the file (to mark that spot), then scroll down to the beginning of the last line (Alt + > will go to the end of file... which probably will involve the Shift key too, then Ctrl + a to go to the beginning of that line) and:
Ctrl + x r t
Which is the command to insert at the rectangle you just specified (a rectangle of 0 width).
2008-8-21 6:45PM <enter>
Or whatever you want to prepend... then you will see that text prepended to every line within the 0 width rectangle.
UPDATE: I just realized you don't want the SAME date, so this won't work... though you may be able to do this in emacs with a slightly more complicated custom macro, but still, this kind of rectangle editing is pretty nice to know about...
