When using make -j on a makefile with many targets and one of them fails, it can be a bit of a pain to identify the particular command that provoked the error, especially when there's a lot of output.
Can I persuade GNU make to print something like this (preferably at the end of the output):
This particular command failed: "frobniz -foo" with output "frobniz only takes -foo options on a Thursday"?
For now, I resort to using make -j -Otarget inside tmux with its searchable history so that I can locate that "failed" output:
Makefile:41: recipe for target 't4' failed
Alas, the 't4' target has about 40 commands with plenty of output, so I have to do more searching to locate the actual command that failed. It's manageable, but it does feel clumsy.
I tried remake too, but it does not seem to have any options for this.
EDIT: Obviously, I should have put some code here, so let me amend that. This is one target out of 16, each with about 40 commands. The example below has been cut to 4 commands.
t4:
	sso -dump FAIL:B -path /instadm-bin/ktkopdat.start -ttarget "AA" ht1
	sso -dump FAIL:B -path /instadm-bin/ktknyadm -ttarget "BB" ht1
	sso -dump FAIL:B -path /instadm-bin/multiadm -ttarget "CC" ht1
	sso -dump FAIL:B -path /instadm-bin/ktkslet.start -ttarget "DD" ht1
Let's say I run make -j and the second line above fails. The failure and the command are visible in the output from make, but mixed in with a lot of other output.
What make executes to generate the error is: sso -dump FAIL:B -path /instadm-bin/ktknyadm -ttarget "BB" ht1 - and it fails with status -1.
Now, make must know which command failed, since it is able to abort the process at this point. This information is very valuable to me, but it is nevertheless hidden in a jumble of output from a lot of parallel processes. Please note that I could of course rerun make without the -j option, thus ensuring that the needed info shows up at the bottom of the output, but I'd rather not repeat the lengthy build more times than necessary.
I'm not sure what you mean by 40 commands with plenty of output. It would be helpful if you provided an example.
If you mean you have 40 logical recipe lines (not using backslashes to combine 40 physical lines into a single logical line), then as soon as one of those commands fails, make will stop building that recipe and won't run any more of its commands, so the failed command will always be the last one.
If you mean you have a long shell script consisting of 40 physical lines combined into a single logical line with semicolons and backslashes, then make cannot help you, because make has no idea which command failed.
Make invokes one instance of the shell, passes the entire logical recipe line to that shell, and waits for the shell to exit with an exit code of zero or non-zero. If the exit code is zero, make assumes the recipe succeeded; if it is non-zero, make assumes the recipe failed. Make has no way to know that more than one command was invoked, which commands were invoked, which ones may have failed, or which ones emitted which output. All of that information is known only to the shell, not to make.
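If you want the failing command to stand out anyway, one common workaround (not a built-in make feature) is to wrap each recipe line yourself so that a failure re-prints the exact command. A rough sketch, using the t4 target from the question ($@ is make's automatic variable for the target name):
define run
$(1) || { rc=$$?; echo "FAILED in $@ (exit $$rc): $(1)" >&2; exit $$rc; }
endef

t4:
	$(call run,sso -dump FAIL:B -path /instadm-bin/ktkopdat.start -ttarget "AA" ht1)
	$(call run,sso -dump FAIL:B -path /instadm-bin/ktknyadm -ttarget "BB" ht1)
With -j -Otarget the FAILED line should then appear at the end of that target's grouped output, which makes it much easier to find.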
Finally getting around to switching to zsh (from bash)... I'm trying to understand a bit more about the completion system and could use a quick pointer. I have been able to get other completions to work for command arguments, but I'm struggling with path completions.
I use a simple function (cdp) to jump to project directories. I've set up a very basic completion script, which almost works. I just can't seem to get the behavior that I'm hoping for.
Ideally, typing cdp in{tab} would expand to all projects starting with in, such as:
~/Projects/indigo ~/Projects/instant
Instead, I can only get cdp {tab} to get the ~/Projects path. From there, it will expand the first-level directory. I'd like to be able to just run standard completion for cd once the project directory is expanded.
Here is the completion script, saved as _cdp and added to fpath:
#compdef cdp
basedir="$HOME/Projects"

# the function for jumping to directories...
cdp() {
  if [ -z "$1" ] ; then
    cd $basedir
  else
    cd "$1"
  fi
}

# completion helper...
_alternative "directories:user directory:($basedir/*)"
It's pretty basic, I'm just stuck trying to sort out where to go next. Any thoughts or pointers would be great. Thanks!
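A minimal sketch of one possible direction (untested, and assuming cdp is changed to prepend the base directory, e.g. cd "$basedir/$1") would be to let _files do the work in _cdp:
#compdef cdp
# offer only directories, taken relative to ~/Projects
_files -W "$HOME/Projects" -/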
UPDATE
I'm finding that cdpath works fine for most of what I need... It would still be interesting to know how to complete this simple function, but for now at least I have a working solution using cdpath and auto_cd.
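For reference, the cdpath/auto_cd setup mentioned above amounts to something like this in .zshrc:
# with auto_cd you can type a project name to jump straight into it,
# and "cd ind<TAB>" also completes directories found via cdpath
setopt auto_cd
cdpath=(~/Projects)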
I am attempting to bend zsh, my shell of choice, to my will, and am completely at a loss on the syntax and operation of completions.
My use case is this: I wish to have completions for 'ansible-playbook' under the '-e' option support three variations:
Normal file completion: ansible-playbook -e vars/file_name.yml
Prepended file completion: ansible-playbook -e #vars/file_name.yml
Arbitrary strings: ansible-playbook -e key=value
I started out with https://github.com/zsh-users/zsh-completions/blob/master/src/_ansible-playbook which worked decently, but required modifications to support the prefixed file pathing. To achieve this I altered the following lines (the -e line):
...
"(-D --diff)"{-D,--diff}"[when changing (small files and templates, show the diff in those. Works great with --check)]"\
"(-e --extra-vars)"{-e,--extra-vars}"[EXTRA_VARS set additional variables as key=value or YAML/JSON]:extra vars:(EXTRA_VARS)"\
'--flush-cache[clear the fact cache]'\
to this:
...
"(-D --diff)"{-D,--diff}"[when changing (small files and templates, show the diff in those. Works great with --check)]"\
"(-e --extra-vars)"{-e,--extra-vars}"[EXTRA_VARS set additional variables as key=value or YAML/JSON]:extra vars:__at_files"\
'--flush-cache[clear the fact cache]'\
and added the '__at_files' function:
__at_files () {
  # strip an optional leading "#" from the word being completed (the '#' must be
  # quoted, or zsh treats the rest of the line as a comment), then complete files
  compset -P '#'
  _files
}
This may be very noobish, but for someone who has never encountered this before, I was pleased that this solved my problem. Or so I thought.
This fails me if I have multiple '-e' parameters, which is totally a supported model (similar to how docker allows multiple -v or -p arguments). What this means is that the first '-e' parameter will have my prefixed completion work, but any '-e' parameters after that point become 'dumb' and only allow for normal '_files' completion from what I can tell. So the following will not complete properly:
ansible-playbook -e key=value -e #vars/file
but this would complete for the file itself:
ansible-playbook -e key=value -e vars/file
Did I mess up? I see the same type of behavior for this particular completion plugin's '-M' option (it also becomes 'dumb' and does basic file completion). I may have simply not searched for the correct terminology or combination of terms, or perhaps in the rather complicated documentation missed what covers this, but again, with only a few days experience digging into this, I'm lost.
If multiple -e options are valid, the _arguments specification should start with *, so instead of:
"(-e --extra-vars)"{-e,--extra-vars}"[EXTR ....
use:
\*{-e,--extra-vars}"[EXTR ...
The (-e --extra-vars) part indicates a list of options that cannot follow the one being specified. That exclusion isn't needed anymore, because it is presumably valid to do, e.g.:
ansible-playbook -e key=value --extra-vars #vars/file
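Putting it together, the full -e line from the modified excerpt above becomes:
\*{-e,--extra-vars}"[EXTRA_VARS set additional variables as key=value or YAML/JSON]:extra vars:__at_files"\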
Say I need to run a UNIX command-line program on some big file. If I want to see a bit of the output, I can pipe the output to head:
$ cat big_file.txt | head
For cat and many similar programs, this command runs extremely fast. Since only the first few lines are needed, program execution stops as soon as the requirement on the other end of the pipe is satisfied. So we all know and love this kind of pipe command for quickly exploring what's in a file or what kind of output a program will write for us.
How does this work at the code level? Does some kind of signal get sent to cat telling it that its output isn't needed anymore, and it can stop running now?
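A quick way to observe the mechanism (in bash): when the reader exits, the next write to the now-closed pipe normally kills the writer with SIGPIPE (signal 13; a program may instead catch it and see an EPIPE error), so the writer's exit status is reported as 128 + 13 = 141:
$ yes | head -n 1 > /dev/null
$ echo "${PIPESTATUS[0]}"
141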
I have a question regarding the execution of a complex command in the makefile of my current system.
I am currently using the shell function in the makefile to execute the command. However, the command fails because it is a combination of many commands and its execution collects a huge amount of data. The makefile content is something like this:
variable=$(shell ls -lart | grep name | cut -d/ -f2- )
However, make fails with an execvp failure, since the file listing is huge and I need to parse all of it.
Please suggest ways to overcome this issue. Basically, I would like to execute a complex command and assign its output to a makefile variable that I can use later in the program.
(This may take a few iterations.)
This looks like a limitation of the architecture, not a Make limitation. There are several ways to address it, but you must show us how you use variable, otherwise even if you succeed in constructing it, you might not be able to use it as you intend. Please show us the exact operations you intend to perform on variable.
For now I suggest you do a couple of experiments and tell us the results. First, try the assignment with a short list of files (e.g. three) to verify that the assignment does what you intend. Second, in the directory with many files, try:
variable=$(shell ls -lart | grep name)
to see whether the problem is in grep or cut.
Rather than store the list of files in a variable, you can use shell functionality directly in the recipe to get the same result. It's a bit odd that you're flattening a recursive ls just to get the leaves, and then running mkdir -p, which is really only useful if the parent directory doesn't exist; but if you know which depths you want (for example the current directory and all subdirectories one level down), you can do something like this:
# note the doubled $$ so make passes a literal $ through to the shell
directories:
	for path in ./*name* ./*/*name*; do \
	  mkdir "/some/path/$$(basename "$$path")" || exit 1; \
	done
or even, as a single recipe line (using a small sh -c wrapper so that basename is applied to each path found, rather than to the literal {}):
	find . -name '*name*' -exec sh -c 'mkdir "/some/path/$$(basename "$$1")"' _ {} \;
I'm writing a program (in Python) that calls a separate program (via subprocess). I'm finding that in some cases the sub-program gets stuck while running. I can see the sub-program by running top, and if I press "c", I can see its full command line.
What I want is to be able to stick debugging data (like the current thread id, etc.) into the command line when I'm calling the sub-program, so I can further debug my problem.
Is there a way to put comments in command-line arguments such that they show up in top?
I can't think of a direct way, but you could write a little shell script to which you pass the debugging information along with the actual command to run and its arguments. The debugging information would then show up in the top/ps output.
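A rough sketch of such a wrapper (with-note.sh is a made-up name; the first argument is free-form debug text that stays visible on the wrapper's own line in ps/top while the real command runs as its child):
#!/bin/sh
# usage: with-note.sh "thread=42 step=frobnicate" real-command args...
# the note is never used by the script; it is only there so that it
# shows up in the ps/top line of this wrapper process
note="$1"; shift
"$@"
From Python you would just prepend the wrapper and the note to the argument list you already pass to subprocess; ps -ef then shows the note on the parent of the stuck process.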
Instead of making them comments, put them in the environment. For example, if you have a /proc file system, you could do:
FOO=value cmd
When top shows the pid of the command, do:
tr '\000' '\012' < /proc/pid/environ | grep FOO
to see the value of FOO in the environment of the cmd. If the values contain newlines, you will need to be more careful about the display, something like:
perl -n0E 'say if /FOO/' /proc/pid/environ
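For example (a sketch, assuming a Linux-style /proc and bash; long_running_cmd is a placeholder):
$ FOO="thread=42 caller=frobnicate" ./long_running_cmd &
$ tr '\000' '\012' < /proc/$!/environ | grep '^FOO='
FOO=thread=42 caller=frobnicate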