How Do I Programmatically Check for a Program's Existence? - unix

Let's say I'm writing something that depends on external programs, like svn. How do I check for their existence automatically, so I can print a helpful error message when they're absent? Iterating through PATH is possible, but it's hardly elegant or efficient. Are there cleaner solutions?
I've seen this behavior in a bootstrapping script, though I can't remember where. It looked a little like this:
checking for gcc... yes

If you are using bash, you can use the type builtin:
$ type -f svn
svn is /usr/bin/svn
If you want to use it in a script:
$ type -f svn &>/dev/null; echo $?
0
$ type -f svn_doesnt_exist &>/dev/null; echo $?
1
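In a script, that makes the helpful error message straightforward (a minimal sketch; the wording of the message is just an example):
#!/bin/bash
# Abort early with a readable message if svn is missing from PATH.
if ! type -f svn &>/dev/null; then
    echo "error: this script requires svn; please install it first" >&2
    exit 1
fi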

Try to actually call it.
It makes the most sense to call it with -V or whatever other option makes the program report its version; most of the time you want the program to be at least such-and-such a version.
If the program you depend on is itself a shell script, the same approach works there, too.
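For example, with svn (a sketch; the exact format of svn --version output varies between versions):
# Running the program is itself the existence check: failure to
# execute means it is missing or broken, which is what we care about.
if ! svn --version >/dev/null 2>&1; then
    echo "svn is required but does not appear to be installed" >&2
    exit 1
fi
# The first line of --version output typically carries the version number.
svn --version | head -n 1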

Related

Spawn shell and source in one command

I have a script that launches my development environment with multiple tmux tiles. I want to spawn a shell that sources my environment so I don't have to source it myself.
I usually do the following each time I open the tmux tile:
source env/bin/activate
I spawn my shell with $SHELL; I use zsh. I see that bash has the --init-file flag, which sources a file, but it then does not load the bashrc. I guess that's close, but not good enough.
I am looking for something like $SHELL --source ~/env/bin/activate. Any workarounds also help.
I don't think this is possible; your best bet is to implement a workaround in your own .zshenv file, e.g.,
if [[ -e "$MY_INIT_SCRIPT_675" ]]; then
    source "$MY_INIT_SCRIPT_675"
fi
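With that snippet in .zshenv, the tmux script can pass the activation script through the environment when spawning the shell (MY_INIT_SCRIPT_675 is the variable name used above; the path comes from the question):
# Start the shell with the variable set; .zshenv sees it and
# sources the file on startup.
MY_INIT_SCRIPT_675=~/env/bin/activate $SHELL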

Adding --ignore-failed-read to tar causes "unknown function modifier" error

I'm using the tar command in UNIX to perform backups of particular directories. However, some directories contain files/sub-directories which the current user doesn't have read permission on. As a result, the tar command is returning a non-zero exit code.
I came across the following modifier in the man pages, '--ignore-failed-read', which suppresses the non-zero exit code when tar encounters files it cannot read. However, whenever I try using it I get the error 'unknown function modifier'.
Could anyone help me out here?
my tar command looks something like this:
tar --create --ignore-failed-read --file=test.tar my_dir
Your command seems to be perfectly valid and I don't see any typos/mistakes.
To be absolutely sure, I just tried it on my VM running 32-bit Debian 7.1 (wheezy) with the stock 3.2.0.4 kernel. As I suspected, the archive was created successfully (the only change was, of course, the name of the source directory). I also checked the version of my tar with
tar --version
which gave me the following output:
tar (GNU tar) 1.26
First of all, you should check this yourself. If you get the same output (with a possible difference in the version number), that's fine. If not, or if the version seems much older, it's possible that you are using a tar which simply doesn't support this feature.
You can also check whether your tar really DOES support the mentioned flag. To do this, type into the console:
tar --help | grep ignore-failed-read
You should see something like this:
--ignore-failed-read do not exit with nonzero on unreadable files
If the output is empty, this version of tar does not know the flag at all.
See if any of the above helps.
Another option that might work better in this case is --warning=no-file-changed.
tar --warning=no-file-changed -czf backup.tgz dir1 dir2
--warning controls the display of warning messages. You can prefix a message keyword with no- to suppress it, so in this case no-file-changed suppresses the file-changed warning.
cf. https://www.gnu.org/software/tar/manual/html_section/tar_27.html
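One more detail worth knowing: GNU tar distinguishes exit status 1 ("some files differ", e.g. a file changed while being read) from status 2 (fatal error), so a backup script can treat 1 as a warning. A sketch; adjust the policy to your needs:
tar --warning=no-file-changed -czf backup.tgz dir1 dir2
status=$?
# GNU tar: 0 = success, 1 = some files differ, 2 = fatal error.
# Treat only 2 and above as failure.
if [ "$status" -gt 1 ]; then
    echo "backup failed: tar exited with status $status" >&2
    exit "$status"
fi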

how to create a "makefile" for c++?

I usually work in Visual C++ 2010, creating console applications for programming problems. There is a submission which requires me to supply the source of a file called "Makefile", to be run by some command in a unix environment.
all:
g++ program.cc -o program
Since I don't use unix and have never created a "makefile", I don't know how to make this submission. I have read that a makefile is supposed to give the directions, dependencies, etc. for compiling the program. I am using the header files iostream, string, and iterator in the program. I have tried the "all:" command; bash returns "command not found".
Can someone help me with this submission? The code is ready, but the only thing stopping me from submitting is this "makefile". Please include the shell commands as well.
You're missing a newline and a tab (yes, you read that right: a tab, not spaces) after the all: line, something like this:
all:
	g++ helloworld.cc -o helloworld
To invoke make, type make in the directory with the Makefile. Dependencies on system headers are usually not tracked; if your code is just one file, you can safely ignore that.
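For completeness, a slightly fuller Makefile for the submission might look like this (program.cc is the file name from the question; rename it to match your source). Remember that each indented line must start with a tab:
# Default target: build the program.
all: program

# Rebuild only when the source changes.
program: program.cc
	g++ program.cc -o program

# Optional housekeeping target.
clean:
	rm -f program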

GNU make's -j option

Ever since I learned about -j, I've used -j8 blithely. The other day I was compiling an atlas installation and the make failed. Eventually I tracked it down to things being made out of order, and it worked fine once I went back to single-threaded make. This makes me nervous. What sort of conditions do I need to watch for when writing my own makefiles, to avoid doing something unexpected with make -j?
I think make -j will respect the dependencies you specify in your Makefile; i.e. if you specify that objA depends on objB and objC, then make won't start working on objA until objB and objC are complete.
Most likely your Makefile isn't specifying the necessary order of operations strictly enough, and it's just luck that it happens to work for you in the single-threaded case.
In short - make sure that your dependencies are correct and complete.
If you are using a single-threaded make, then you may be blindly ignoring implicit dependencies between targets.
When using parallel make, you can't rely on implicit dependencies; they should all be made explicit. This is probably the most common trap, particularly when using .PHONY targets as dependencies.
This link is a good primer on some of the issues with parallel make.
Here's an example of a problem that I ran into when I started using parallel builds. I have a target called "fresh" that I use to rebuild the target from scratch (a "fresh" build). In the past, I coded the "fresh" target by simply indicating "clean" and then "build" as dependencies.
build: ## builds the default target
clean: ## removes generated files
fresh: clean build ## works for -j1 but fails for -j2
That worked fine until I started using parallel builds, but with parallel builds, it attempts to do both "clean" and "build" simultaneously. So I changed the definition of "fresh" as follows in order to guarantee the correct order of operations.
fresh:
	$(MAKE) clean
	$(MAKE) build
This is fundamentally just a matter of specifying dependencies correctly. The trick is that parallel builds are stricter about this than single-threaded builds are. My example demonstrates that a list of dependencies for a given target does not necessarily indicate the order of execution.
If you have a recursive make, things can break pretty easily. If you're not doing a recursive make, then as long as your dependencies are correct and complete, you shouldn't run into any problems (save for a bug in make). See Recursive Make Considered Harmful for a much more thorough description of the problems with recursive make.
It is a good idea to have an automated test for the -j option of ALL the makefiles. Even the best developers have problems with the -j option of make. The most common issue is also the simplest:
myrule: subrule1 subrule2
	echo done

subrule1:
	echo hello

subrule2:
	echo world
In normal make, you will see hello -> world -> done.
With make -j 4, you might see world -> hello -> done.
Where I have seen this happen most is with the creation of output directories. For example:
build: $(DIRS) $(OBJECTS)
	echo done

$(DIRS):
	-@mkdir -p $@

$(OBJECTS):
	$(CC) ...
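The race here is that $(DIRS) and $(OBJECTS) are siblings in the prerequisite list, so under -j an object can be compiled before its output directory exists. One way to close that race (a sketch using GNU make's order-only prerequisites, assuming the same variables as above):
# Everything to the right of | is order-only: the directories are
# created before any object is built, but their timestamps never
# force the objects to rebuild.
$(OBJECTS): | $(DIRS)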
Just thought I would add to subsetbrew's answer, as it does not show the effect clearly; adding some sleep commands does. Well, it works on Linux.
Then running make shows the differences between:
make
make -j4
all: toprule1

toprule1: botrule2 subrule1 subrule2
	@echo toprule 1 start
	@sleep 0.01
	@echo toprule 1 done

subrule1: botrule1
	@echo subrule 1 start
	@sleep 0.08
	@echo subrule 1 done

subrule2: botrule1
	@echo subrule 2 start
	@sleep 0.05
	@echo subrule 2 done

botrule1:
	@echo botrule 1 start
	@sleep 0.20
	@echo "botrule 1 done (good prerequisite in sub)"

botrule2:
	@echo "botrule 2 start"
	@sleep 0.30
	@echo "botrule 2 done (bad prerequisite in top)"

tool for building software

I need something like make, i.e. dependencies plus executing shell commands, where a failing command stops execution.
But it should be more deeply integrated with the shell: in make, each line is executed in a separate shell, so it is not easy to set a variable on one line and use it on the following line (and I do not want an escape character at the end of every line, because that is not readable).
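To illustrate (a minimal example of the behavior; GNU make's .ONESHELL directive is one workaround):
demo:
	FOO=hello            # first line: its shell exits, FOO is lost
	echo "FOO is $$FOO"  # second line: new shell, prints just "FOO is "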
I want a simple syntax (no XML) with control flow and functions (which are missing from make).
It does not have to support compilation. I just have to bind together several components built using autotools, package them, trigger tests, and publish the results.
I looked at: make, ant, maven, scons, waf, nant, rake, cons, cmake, jam and they do not fit my needs.
Take a look at doit.
You can use shell commands or Python functions to define tasks (builds).
It is very easy to use. You write scripts in Python, but there is "no api": you don't need to import anything in your script.
It has good support for tracking dependencies and targets.
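A minimal dodo.py shows what "no api" means in practice (the task and file names here are made up):
# dodo.py -- doit picks up any function named task_*; nothing to import.
def task_compile():
    return {
        'actions': ['gcc -c main.c'],  # shell command(s) to run
        'file_dep': ['main.c'],        # rerun only if these change
        'targets': ['main.o'],         # what the task produces
    }
Run doit in the same directory; it reruns the task only when main.c changes.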
Have a look at fabricate.
If that does not fulfill your needs or if you would rather not write your build script in Python, you could also use a combination of shell scripting and fabricate. Write the script as you would to build your project manually, but prepend build calls with "fabricate.py" so build dependencies are managed automatically.
Simple example:
#!/bin/bash
EXE="myapp"
CC="fabricate.py gcc"   # let fabricate handle dependencies
FILES="file1.c file2.c file3.c"
OBJS=""

# compile
for F in $FILES; do
    if ! $CC -c $F; then
        echo "Build failed while compiling $F" >&2
        exit 1
    fi
    OBJS="$OBJS ${F/.c/.o}"
done

# link
$CC -o $EXE $OBJS
Given that you want control flow, functions, everything operating in the same environment and no XML, it sounds like you want to use the available shell script languages (sh/bash/ksh/zsh), or Perl (insert your own favourite scripting language here!).
I note you've not looked at a-a-p. I'm not familiar with it, other than that it's a make system from the people who brought us vim, so you may want to look it over.
A mix of makefiles and a scripting language to choose which makefile to run could do it.
I have had the same needs. My current solution is to use makefiles to accurately represent the dependency graph (you have to read "Recursive Make Considered Harmful"). Those makefiles trigger bash scripts that take makefile variables as parameters. This way you do not have to deal with the problem of shell context, and you get a clear separation between the dependencies and the actions.
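A sketch of that layout, with hypothetical file and variable names: the makefile owns the dependency graph, and all multi-line shell logic lives in one script invocation, so variables survive from line to line inside the script:
# Makefile: dependencies only; the action is a single script call.
# $@ is the target, $^ the full list of prerequisites.
package.tar: $(COMPONENTS)
	./package.sh $@ $^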
I'm currently considering waf as it seems well designed and fast enough.
You might want to look at SCons; it's a Make-replacement written in Python.
