I need to make a Makefile, and it should have a run rule. However, the run rule requires some parameters.
Does anyone have any idea how I can pass arguments to a rule when running make? I want to be able to run the run rule with arguments by typing make run foo bar.
I tried this, but it didn’t work:
run:
	make compile
	./scripts/runTrips $1 $2 $PLACES $OUT $VERS
The parameters I want to supply are the first two, $1 and $2.
When passing parameters to a make command on the command line (as VAR=value), reference them in the makefile like you would any other make variable.
If your makefile looks like:
run:
	script $(param1) $(param2)
You can call it with the following syntax:
$> make run param1=20 param2=30
and make should call the script like:
script 20 30
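Applied to the rule from the question, a sketch might look like this (PLACES, OUT and VERS are assumed to be set elsewhere in the makefile, as in the original; $(MAKE) is just the conventional way to invoke make recursively):
run:
	$(MAKE) compile
	./scripts/runTrips $(param1) $(param2) $(PLACES) $(OUT) $(VERS)
which you would then invoke as make run param1=foo param2=bar.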
Make itself doesn't support passing arguments the way scripts do. Usually make is used in the following way: you configure the project, then run a plain 'make'. Configuring can be done by running a shell script, 'configure'; that script is the one you can pass parameters to. For example:
./configure param1 param2
make run
The configure script must parse the parameters and write them out to config.mk, which must contain the following:
PARAM1 = val1
PARAM2 = val2
Your Makefile must include config.mk:
TOP = .
include $(TOP)/config.mk
run:
	make compile
	./scripts/runTrips $(PARAM1) $(PARAM2) $(PLACES) $(OUT) $(VERS)
In your 'configure' script you can also check the parameters for correctness and perform other checks and calculations.
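A minimal sketch of such a configure script, assuming just the two positional parameters and the config.mk layout shown above:
#!/bin/sh
# Validate the two expected parameters, then write them to config.mk.
if [ $# -ne 2 ]; then
    echo "usage: $0 param1 param2" >&2
    exit 1
fi
cat > config.mk <<EOF
PARAM1 = $1
PARAM2 = $2
EOF
After ./configure val1 val2, a plain make run picks the values up through the include.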
In zsh, in my .zshrc file, I'd like to set up a function that cds to a directory I type in, using an existing variable to build the common ~/path/to/parent/directory/$input path.
I've been unable to find out what the correct syntax is for this particular usage. For example, I want to enter
goto mydir
and execute a cd to ~/path/to/parent/directory/mydir
But I get an error: gt:cd:3 no such file or directory ~/path/to/parent/directory/mydir even though that directory exists.
This is the variable declaration and function I am trying:
export SITESPATH="~/path/to/parent/directory"
function gt(){
    echo "your site name is $@"
    echo "SITESPATH: " $SITESPATH "\n"
    cd $SITESPATH/$@
}
It makes no difference if I use the above without quotes, or "cd $SITESPATH/$@" with quotes.
I don't see the point in using $@ in your function, since you expect only one argument. $1 would be sufficient.
The problem is in the tilde contained in your variable SITESPATH. You need to have it expanded. You can either do that by writing
export SITESPATH=~/path/to/parent/directory
when you define the variable, or inside your function by doing a
cd ${~SITESPATH}/$1
A third possibility is to turn on glob_subst in your shell:
setopt glob_subst
In this case, you can keep your current definition of $SITESPATH, and tilde-substitution will happen automatically.
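Putting it together, a minimal sketch of the corrected setup (unquoted tilde so it expands at assignment time, and $1 instead of $@):
export SITESPATH=~/path/to/parent/directory
function gt(){
    echo "your site name is $1"
    cd "$SITESPATH/$1"
}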
I am trying to use environment variables set by ksh and the expect command in the same script. However, if I try to source both of them, it doesn't work. Is there a way to source ksh and expect in the same script?
Do something like
#!/usr/bin/ksh
. /path/to/ksh_stuff.sh
export FOO=bar
# other ksh stuff
expect <<'END_EXPECT'
source /path/to/expect_stuff.exp
send_user "FOO is $env(FOO)\n"
# other expect stuff
END_EXPECT
Adding quotes around the here-doc terminator (<<'END_EXPECT') means that the entire here-doc is single-quoted, so ksh will not attempt any parameter substitution on it. This is an effective way to isolate the expect script's variables from ksh.
In the Korn shell you dot in the other script, for example:
. ${other_script}
This will run in the same process as the parent script. The other script can see any variables that are defined in the parent script. If you want to sub-shell (to run an external command), then you will need to export any variables first.
If you want to reference environment variables in your expect script (those exported by a ksh script that runs expect in a subshell), then your expect script needs to reference the global array env. For example, if your ksh script exports a MYPATH variable and then subshells to expect, the expect script can reference it as $env(MYPATH).
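A short sketch of that pattern, using the MYPATH name from the answer (the path is just an example):
#!/usr/bin/ksh
export MYPATH=/some/dir        # exported, so the expect subshell inherits it
expect -c 'send_user "MYPATH is $env(MYPATH)\n"'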
I wasn't able to find documentation for the widely used autoload command in zsh. Can anybody explain it in plain English?
A bit more specific: What does autoloading of modules mean, for example in this line:
autoload -Uz vcs_info
What does it do?
I've tried autoload --help, man autoload, googling - no success. Thanks!
The autoload feature is not available in bash, but it is in ksh (korn shell) and zsh. On zsh see man zshbuiltins.
Functions are called in the same way as any other command. There can be a name conflict between a program and a function. What autoload does is to mark that name as being a function rather than an external program. The function has to be in a file on its own, with the filename the same as the function name.
autoload -Uz vcs_info
The -U means mark the function vcs_info for autoloading and suppress alias expansion. The -z means use zsh (rather than ksh) style. See also the functions command.
Edit (from a comment, as suggested by @ijoseph):
So it records the fact that the name is a function and not an external program; it does not call it (unless the -X option is used), it just affects the search path when the name is invoked. If the function name does not collide with the name of a program, then autoload is not strictly required. Prefix your functions with something like f_ and you will probably never need it.
For more detail see http://zsh.sourceforge.net/Doc/Release/Functions.html.
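As a sketch of the one-file-per-function convention described above (the ~/.zfunc directory and the f_ prefix are just examples):
# file ~/.zfunc/f_greet -- the file name matches the function name;
# with zsh-style autoload the file holds the function body
print "hello from f_greet, $1"

# in .zshrc
fpath=(~/.zfunc $fpath)
autoload -Uz f_greet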
autoload tells zsh to look for a file in $FPATH/$fpath containing a function definition, instead of a file in $PATH/$path containing an executable script or binary.
Script
A script is just a sequence of commands that get executed when the script is run. For example, suppose you have a file called hello like this:
echo "Setting 'greeting'"
greeting='Hello'
If the file is executable and located in one of the directories in your $PATH, then you can run it as a script by just typing its name. But scripts run in their own shell process, with their own copy of the environment, so anything they do can't affect the calling shell's environment. The assignment to greeting above will be in effect only within the script; once the script exits, it won't have had any impact on your interactive shell session:
$ hello
Setting 'greeting'
$ echo $greeting
$
Function
A function is instead defined once and stays in the shell's memory; when you call it, it executes inside the current shell, and can therefore have side effects:
hello() {
echo "Setting 'greeting'"
greeting='Hello'
}
$ hello
Setting 'greeting'
$ echo $greeting
Hello
So you use functions when you want to modify your shell environment. The Zsh Line Editor (ZLE) also uses functions - when you bind a key to some action, that action is defined as a shell function (which has to be added to ZLE with the zle -N command).
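For instance, a minimal sketch of a ZLE widget (the widget name and key binding are arbitrary):
# insert today's date at the cursor position
insert-date() { LBUFFER+="$(date +%F)" }
zle -N insert-date          # register the function as a ZLE widget
bindkey '^Xd' insert-date   # bind it to Ctrl-X d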
Now, if you have a lot of functions, then you might not want to define all of them in your .zshrc every time you start a new shell; that slows down shell startup and uses memory to store functions that you might not wind up calling during the lifetime of that shell. So you can instead put the function definitions into their own files, named after the functions they define, and put the files into directories in your $FPATH, which works like $PATH.
Zsh comes with a bunch of standard functions in the default $FPATH already. But it won't know to look for a command there unless you've first told it that the command is a function.
That's what autoload does: it says "Hey, Zsh, this command name here is a function, so when I try to run it, go look for its definition in my FPATH, instead of looking for an executable in my PATH."
The first time you run a command that Zsh determines is an autoloaded function, the shell sources the definition file. Then, if there's nothing in the file except the function definition, or if the shell option KSH_AUTOLOAD is set, it proceeds to call the function with the arguments you supplied. But if that option is not set and the file contains code outside the function definition (like initialization of variables used by the function), the function is not called automatically. In that case it's up to you to call the function inside the file, after defining it, so that the first invocation works.
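A sketch of that last case, with a made-up function name (the file would live in a directory that's in $fpath):
# file my_func -- contains more than just the function definition
typeset -g MY_FUNC_CACHE=''     # initialization outside the function

my_func() {
    print "cache is: $MY_FUNC_CACHE"
}

my_func "$@"    # call it here so the first invocation still runs the function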
I have a makefile called test.mak and another one just called GNUmakefile.
The way I execute test.mak from the command line is "make test.mak arg1". Now I would like to execute it from the GNUmakefile. I know I can use "include", so in GNUmakefile I did
"-include test.mak arg1", but arg1 is treated like another makefile. How do I pass the argument to test.mak?
Thanks
I am guessing you can just take the argument in GNUmakefile; since the included file shares the same variables, test.mak should be getting the argument.
There are a few errors in your question that need to be cleared up:
First, to invoke make using test.mak, you should use
make -f test.mak arg1
the "-f" tells make the name of the makefile to use. Without it, make will try to use GNUmakefile, makefile, and Makefile (in that order).
Second, the "arg1" on your command line tells make which target you are trying to make. It is an argument to the make program, but not to the makefile itself. Because of that, it's not necessary to try to "pass" arg1 to test.mak on your include line. The "include" directive's parameters are the files to be included.
The line
include test.mak arg1
tries to include two files (named test.mak and arg1).
What you really want to do in GNUmakefile is just
include test.mak
which will allow you to type
make arg1
to create the "arg1" target using your implicitly-named GNUmakefile.
I want to use a shell script that I can call to set some environment variables. However, after the execution of the script, I don't see the environment variable using "printenv" in bash.
Here is my script:
#!/bin/bash
echo "Hello!"
export MYVAR=boubou
echo "After setting MYVAR!"
When I do "./test.sh", I see:
Hello!
After setting MYVAR!
When I do "printenv MYVAR", I see nothing.
Can you tell me what I'm doing wrong?
This is how environment variables work. Every process has a copy of the environment. Any changes that the process makes to its copy propagate to the process's children. They do not, however, propagate to the process's parent.
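A quick demonstration of that one-way propagation (the variable names are arbitrary):
$ export PARENTVAR=from_parent
$ bash -c 'echo "$PARENTVAR"; export CHILDVAR=from_child'
from_parent
$ echo $CHILDVAR
$
The child sees PARENTVAR, but CHILDVAR never makes it back to the parent shell.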
One way to get around this is by using the source command:
source ./test.sh
or
. ./test.sh
(the two forms are synonymous).
When you do this, instead of running the script in a sub-shell, bash will execute each command in the script as if it were typed at the prompt.
Another alternative would be to have the script print the variables you want to set, with lines like echo "export VAR=value", and then do eval "$(./test.sh)" in your main shell. This is the approach used by various programs (e.g. resize, dircolors) that provide environment variables to set.
This only works if the script has no other output (or if any other output is sent to stderr, with >&2).
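A minimal sketch of that eval approach, reusing the test.sh name from the question; the diagnostic messages are sent to stderr so that only the export line reaches eval:
#!/bin/bash
echo "Hello!" >&2                # diagnostics go to stderr
echo "export MYVAR=boubou"       # only this line reaches stdout, and hence eval
echo "After setting MYVAR!" >&2
Then, in the interactive shell:
$ eval "$(./test.sh)"
Hello!
After setting MYVAR!
$ printenv MYVAR
boubou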