Do Curly Brace Wildcards work in GNU Make 4 (or even POSIX Make)?

I found a difference in behaviour between GNU Make 4.1 and 3.81 and wonder whether my code is not POSIX compliant (something 4.1 enforces more strictly), or whether something else is going on.
I distilled the failure case down to this Makefile:
.POSIX:
all: test-b
test-a:
	cat a.txt b.txt c.txt >results.txt
test-b:
	cat {a,b,c}.txt >results.txt
Assuming those files have been created beforehand (e.g. with touch {a,b,c}.txt in an interactive shell that supports brace expansion), target test-a always works, yet test-b works on Make 3.81 but fails on 4.1.
The output for 3.81:
$ make --version
GNU Make 3.81
Copyright (C) 2006 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.
There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A
PARTICULAR PURPOSE.
This program built for i386-apple-darwin11.3.0
$ make
cat {a,b,c}.txt >results.txt
$ echo $?
0
The output for 4.1:
$ make --version
GNU Make 4.1
Built for x86_64-pc-linux-gnu
Copyright (C) 1988-2014 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
$ make
cat {a,b,c}.txt >results.txt
cat: {a,b,c}.txt: No such file or directory
Makefile:9: recipe for target 'test-b' failed
make: *** [test-b] Error 1
$ echo $?
2
It's possible the cat command is actually failing in 3.81 as well and 3.81 just isn't reporting it, since later versions of GNU Make mention passing the -e flag to the shell when invoking recipe commands to be more POSIX compliant, but I can't see how that command could be failing.
I assume the wildcards are handled solely by the shell, so I can't see how invoking the shell via a make target command should be any different.
Which of these behaviours is correct? If wildcards like that don't work in Makefiles, which other wildcards can I assume will work?
test-b still fails in 4.1 even if .POSIX: is removed from the file.

Recipes are sent to the shell. They are not interpreted by make. So your question is really, are curly-brace expansions supported by the shell?
That depends on which shell make uses. They are not supported by POSIX standard sh. They are supported by bash (and many other shells).
Make always invokes /bin/sh, regardless of what shell you personally use, unless you specifically set the make SHELL variable to something else. On some systems, /bin/sh is a symlink to /bin/bash, so they are the same thing (bash runs in a "POSIX emulation" mode when invoked as /bin/sh, but most bash features are still available). Other systems use a different shell, such as dash, as /bin/sh, and those do not have the extra bash features.
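A quick way to check what /bin/sh actually is on a given system (readlink -f assumes GNU coreutils):
ls -l /bin/sh
readlink -f /bin/sh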
So, you can either (a) not have a portable makefile and assume /bin/sh is the same as /bin/bash, (b) set SHELL := /bin/bash in your makefile to force it to use bash always (but fail on systems that don't have bash installed), or (c) write your makefile recipes to use only POSIX sh features so it works regardless of which shell is used for /bin/sh.
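For example, here is a minimal sketch of options (b) and (c) applied to the makefile above (SHELL := is GNU make syntax; option (b) assumes bash is installed at /bin/bash):
# (b) force bash for every recipe so that brace expansion works
SHELL := /bin/bash
test-b:
	cat {a,b,c}.txt >results.txt
Or, for (c), keep the default /bin/sh and use only POSIX features in the recipe:
test-b:
	cat a.txt b.txt c.txt >results.txt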

Related

How to enable helpful error messages for gmake?

Example
I have gmake 4.0 from a MinGW installation, and it shows no helpful error messages.
For example I have this makefile:
# test.mk
all: somefile.c
Since I have no file named somefile.c present, it fails, but without any helpful message:
$ gmake -f test.mk
gmake: ***. Stop.
When I add the --debug flag I get some hints:
$ gmake -f test.mk --debug
GNU Make 4.0
Built for x86_64-w64-mingw32
Copyright (C) 1988-2013 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
Reading makefiles...
Updating goal targets....
File 'all' does not exist.
File 'somefile.c' does not exist.
Must remake target 'somefile.c'.
gmake: ***. Stop.
Expectation
For an older gmake 3.80 from another tool set, I get a helpful message:
$ gmake-old -f test.mk
gmake-old: *** No rule to make target `somefile.c', needed by `all'. Stop.
Environment
I am running these commands within Git Bash under Windows. I get a similar message when calling gmake within Windows cmd.
Question
Are there options for gmake or environment variables that I might set in order to get gmake 4.0 to provide helpful messages like No rule to make target 'somefile.c', needed by 'all'?

How to use additional flags when creating lisp script file to start from console

I want to create a .lisp file which I can start as a script, i.e., with a leading #!/usr/bin/sbcl --script. This works just fine.
File:
#!/usr/bin/sbcl --script
(format t "test~%")
Output:
$> ./test.lisp
test
However, I also need to adjust the dynamic space size for that particular script to work. But this somehow prevents the --script flag from working
File:
#!/usr/bin/sbcl --dynamic-space-size 12000 --script
(format t "test~%")
Output:
$> ./test.lisp
This is SBCL 1.4.5.debian, an implementation of ANSI Common Lisp.
More information about SBCL is available at <http://www.sbcl.org/>.
SBCL is free software, provided as is, with absolutely no warranty.
It is mostly in the public domain; some portions are provided under
BSD-style licenses. See the CREDITS and COPYING files in the
distribution for more information.
*
How can I increase the dynamic space size while keeping the convenience of starting the lisp program/script from the command prompt?
This is a general limitation of shebang lines and not SBCL specific.
Newer env versions (in GNU — FreeBSD has had this a bit longer) understand a -S option to split the argument:
https://www.gnu.org/software/coreutils/manual/html_node/env-invocation.html#g_t_002dS_002f_002d_002dsplit_002dstring-usage-in-scripts
#!/usr/bin/env -S sbcl --dynamic-space-size 9000 --script
Apparently the #! line passes everything following the command as a single argument, so --dynamic-space-size 12000 --script is treated as one string and not as a parameter plus a flag.
My current solution is to create an additional .sh file:
#!/bin/bash
sbcl --dynamic-space-size 12000 --script ./test.lisp "$@"
However, this has the obvious downside that the script needs to be started from the same directory as the .lisp file. Consequently, I am still looking for the 'perfect' solution; this is rather a stop-gap.
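A sketch of a wrapper that avoids that downside by resolving its own directory first (it assumes the wrapper sits next to test.lisp):
#!/bin/bash
# Work out the directory this wrapper lives in, so it can be run from anywhere.
dir="$(cd "$(dirname "$0")" && pwd)"
exec sbcl --dynamic-space-size 12000 --script "$dir/test.lisp" "$@"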

Gcc versions later than 7 are not supported by CUDA 10 - Qt Error in Arch Linux

I am running Arch Linux and trying to build a project in Qt; however, Qt spits out the following error:
/opt/cuda/include/crt/host_config.h:129: error: #error -- unsupported GNU version! gcc versions later than 7 are not supported!
I have already tried a suggestion from a previous Stack Overflow post found here:
CUDA incompatible with my gcc version
I did not use the exact command, as my CUDA gcc is located at /opt/cuda/bin/gcc; I ran the same command for g++. However, the terminal reports that these files are already linked, and I confirmed this by going to the actual file and looking at its properties.
Can someone please suggest a solution to my issue?
I managed to do so using these two lines, which update CUDA's symbolic links to point to gcc 7:
ln -s /usr/bin/gcc-7 /usr/local/cuda/bin/gcc
ln -s /usr/bin/g++-7 /usr/local/cuda/bin/g++
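If, as in the question, CUDA lives under /opt/cuda rather than /usr/local/cuda, the same idea should apply with the paths adjusted (this sketch assumes gcc-7 and g++-7 are at /usr/bin as above; -f replaces an existing link):
ln -sf /usr/bin/gcc-7 /opt/cuda/bin/gcc
ln -sf /usr/bin/g++-7 /opt/cuda/bin/g++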
The issue comes from cuda-10.0/targets/x86_64-linux/include/crt/host_config.h in the main CUDA-10 directory tree. The target for your architecture was placed in /opt.
Some posts recommend faking the version check by changing the inequality
#if __GNUC__ > 7
to read
#if __GNUC__ > 8
but that is a bad idea. Using
make 'NVCCFLAGS=-m64 -D__GNUC__=7' -k
is permissible in some trivial cases, but still fundamentally the same bad hack.
You probably have alternatives configured on your system that have set up symbolic links pointing to the version 8 GNU toolchain files. That's why you get an indication that version 7 is already linked.
You can learn how to modify those alternatives for just your developer users, but NOT for root or any system admin accounts. You may also want to remember how to switch back and forth between 7 and 8, so that you only use 7 when actually needed, since many other things may be tested only with 8.
If that doesn't work for you, you can build gcc 7 from source. The preparatory system admin work includes a dnf install, a build from source, an installation of the 7.4 GNU compiler, and a setup of paths for CUDA development only. If you already have GNU gcc and g++ version 8 installed and working with the appropriate standard libraries, the version 7 compiler can be installed with relative ease.
Browse https://gcc.gnu.org/mirrors.html to find the nearest mirror, copy the link location for gcc-7.4.0.tar.xz, and place it in the shell variable u, as in this example:
u="http://mirrors.concertpass.com/gcc/releases/gcc-7.4.0/gcc-7.4.0.tar.xz"
Then you can do the rest as commands.
sudo dnf install libmpc-devel
cd
mkdir -p scratch
cd scratch
wget -O - "$u" |tar Jxf -
cd gcc-7.4.0
mkdir build
cd build
../configure --prefix=/usr/local/gcc-7
make
sudo bash -c "cd \"`pwd`\"; make install"
Then execute the following in the shells and tools you develop with. Do NOT put this in the system login apparatus or in .bashrc or .bash_profile, for the same reason as above: other things may be tested with version 8 only. Instead, place these settings in your development environment, where they belong.
LD_LIBRARY_PATH=/usr/local/gcc-7/lib64:$LD_LIBRARY_PATH
LD_LIBRARY_PATH=/usr/local/gcc-7/lib:$LD_LIBRARY_PATH
LD_LIBRARY_PATH=/usr/local/cuda-10.0/NsightCompute-1.0/host/linux-desktop-glibc_2_11_3-glx-x64/Plugins:$LD_LIBRARY_PATH
LD_LIBRARY_PATH=/usr/local/cuda-10.0/NsightCompute-1.0/target/linux-desktop-glibc_2_11_3-glx-x64:$LD_LIBRARY_PATH
LD_LIBRARY_PATH=/usr/local/cuda-10.0/targets/x86_64-linux/lib/stubs:$LD_LIBRARY_PATH
PATH=/usr/local/gcc-7/bin:$PATH
PATH=/usr/local/cuda-10.0/bin:$PATH
PATH=$HOME/big/cuda.samples/NVIDIA_CUDA-10.0_Samples/bin/x86_64/linux/release:$PATH
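One way to keep these settings out of your login files, as advised above, is to put them in a small script that you source only in CUDA development shells; a sketch (the filename is just illustrative, and the CUDA library paths above can be added in the same way):
# cuda-gcc7-env.sh -- source this in CUDA development shells only, not from .bashrc
export PATH=/usr/local/gcc-7/bin:/usr/local/cuda-10.0/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/gcc-7/lib64:/usr/local/gcc-7/lib:$LD_LIBRARY_PATH
Then, in the shell you build from: . ~/cuda-gcc7-env.sh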

make sees different PATH than account's .bashrc-configured PATH

I want to call ShellCheck, a Haskell program for linting shell scripts, from a Makefile.
When I install ShellCheck via cabal install, it is installed as ~/.cabal/bin/shellcheck. So, I have configured Bash accordingly:
$ cat ~/.bashrc
export PATH="$PATH:~/.cabal/bin"
$ source ~/.bashrc
$ shellcheck -V
ShellCheck - shell script analysis tool
version: 0.3.4
license: GNU Affero General Public License, version 3
website: http://www.shellcheck.net
This enables me to run shellcheck from any directory in Bash. However, when I try to call it from a Makefile, make cannot find shellcheck.
$ cat Makefile
shlint:
	-shlint lib/
shellcheck:
	-shellcheck lib/**
lint: shlint shellcheck
$ make shellcheck
shellcheck lib/**
/bin/sh: 1: shellcheck: not found
make: [shellcheck] Error 127 (ignored)
I think that make is not receiving the same PATH as my normal Bash shell. How can I fix this?
Try using $HOME, not ~:
export PATH="$PATH:$HOME/.cabal/bin"
The ~-means-home-directory feature is not supported in pathnames everywhere, in all shells. When make runs a recipe it doesn't use the user's shell (that would be a disaster!); it always uses /bin/sh.
On some systems (particularly Debian/Ubuntu-based GNU/Linux distributions) the default /bin/sh is not bash, but rather dash. Dash doesn't support ~ being expanded in the PATH variable.
In general, reserve ~ as a shorthand for interactive command lines. In scripts and configuration you should always prefer to write out $HOME.
ETA:
Also, the double-star syntax lib/** is a non-standard feature of shells like bash and zsh and will not do anything special in make recipes. It is identical to writing lib/*.
You can force make to use a different shell than /bin/sh by adding:
SHELL := /bin/bash
to your makefile, for example, but this makes it less portable (if that's an issue).
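Alternatively, here is a minimal sketch of fixing it from inside the makefile, by putting the cabal bin directory on the PATH that recipes see (export and := are GNU make features; the SHELL line is only needed if a recipe genuinely relies on bash-isms):
export PATH := $(PATH):$(HOME)/.cabal/bin
SHELL := /bin/bash
shellcheck:
	-shellcheck lib/**
Note that even under bash, lib/** behaves like lib/* here unless the globstar option is enabled.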

Redirect Error Stream to File and Console in Windows

I want to redirect the error stream of a Java console application to a file and to the console. Normally the errors are displayed only in the console; I want them displayed in both the console and a file. How can I achieve this? When I write:
java -classpath lib.jar com.hertz.test.Blad 2>error.log
Then the errors are redirected to the file, but I don't see them on the console. Also, does anybody know how to add date and time stamps to the logs in this situation?
I'm working on Windows Server 2003.
This is of course a simple exercise in piping the output through a filter, in this case the tee command, which is done in Microsoft's command interpreter much the same as in JP Software's TCC/LE and (non-C-shell family) Unix shells:
java -classpath lib.jar com.hertz.test.Blad 2>&1 | tee error-and-output.log
Treating standard output and standard error differently is little more than an exercise in redirection syntax (the example here is just one of several possibilities) and is really a separate question.
java -classpath lib.jar com.hertz.test.Blad 2>&1 1>con | tee error.log
All that remains is to obtain a tee command. There are several possibilities:
Use a port of a Unix tee command. There are several choices. Oft-mentioned are GNUWin32, cygwin, and unxutils. Less well known, but in some ways better, are the tools in the SFUA utility toolkit, which run in the Subsystem for UNIX-based Applications that comes right there in the box with Windows 7 Ultimate edition and Windows Server 2008 R2. (For Windows XP and Windows Server 2003, one can download and install Services for UNIX version 3.5.) This toolkit has a large number of command-line TUI tools, from mv and du, through the Korn and C shells, to perl and awk. It comes in both x86-64 and IA64 flavours as well as x86-32. The programs run in Windows' native proper POSIX environment, rather than with emulator DLLs (such as cygwin1.dll) layering things over Win32. And yes, the toolkit has tee, as well as some 300 others.
Use one of the many native Win32 tee commands that people have written and published. One such is Ritchie Lawrence's MTEE, which as you can see has /D and /T options to add time and date stamps to each line that it processes.
Use a replacement command interpreter that comes with a built-in TEE command. JP Software's TCC/LE is one such; its built-in TEE likewise has /D and /T options to add time and date stamps to each line that it processes.
As an aside: It's better for your application to add date and time stamps itself than for them to be post-processed by the TEE command. For several reasons, relating both to how applications behave when their standard streams are pipes and to how pipes work, each line of output will not necessarily be processed by TEE at the time that your application generated it in the first place. The leeway will affect both the relative (to one another) and the absolute (to the wall clock) accuracy of the timestamps that you see.
