I'm catching a make error:
make: *** No rule to make target `|', needed by `cryptest.exe'. Stop.
Here's the recipe:
cryptest.exe: public_service | libcryptopp.a $(TESTOBJS)
	$(CXX) -o $@ $(CXXFLAGS) $(TESTOBJS) ./libcryptopp.a $(LDFLAGS) $(LDLIBS)
The | introduces an order-only prerequisite. Order-only prerequisites usually work for me, so I'm not sure what the trouble is in this instance.
Why am I receiving the make error?
My apologies for the question. I imagine this has been asked and answered many times before. Unfortunately, neither Google nor Bing seems to handle searches for characters like |.
Order-only prerequisites are a feature introduced in GNU Make 3.80; an older make does not understand the | separator and instead treats it as an ordinary prerequisite name, which produces exactly the error you are seeing.
Excerpt from the NEWS file:
Version 3.80
A new feature exists: order-only prerequisites. These prerequisites affect the order in which targets are built, but they do not impact the rebuild/no-rebuild decision of their dependents. That is to say, they allow you to require target B be built before target A, without requiring that target A will always be rebuilt if target B is updated. Patch for this feature provided by Greg McGary <…#mcgary.org>.
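To confirm which side of 3.80 you are on, run make --version. For reference, here is a minimal order-only sketch (the names are made up for illustration): the objs directory must exist before the object file is built, but touching the directory never forces a recompile.

OBJDIR := objs

$(OBJDIR)/foo.o: foo.c | $(OBJDIR)
	$(CC) -c -o $@ $<

$(OBJDIR):
	mkdir -p $@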
gmake doesn't seem to have a value for $(INSTALL). Is this supposed to be defined by the user?
$(CC) works fine, and most sample Makefiles I went over didn't have an explicit definition of $(INSTALL)...
If it has to be defined by the user, what are the best practices (other than aliasing it as INSTALL_PROGRAM and INSTALL_DATA)? Why prefer install over cp?
Makefile
helloworld:
	echo 'hello, world' >helloworld

install:
	$(INSTALL) ${HOME}/ helloworld
log
$ make helloworld
$ make install
/home/<username>/ helloworld
make: /home/kevins/: Permission denied
make: *** [Makefile:5: install] Error 127
version info
GNU Make 4.3
Built for x86_64-pc-linux-gnu
There is no default value defined for INSTALL. You can see all the default rules and variables by running:
make -p -f/dev/null
Whether install or cp is a better fit depends entirely on your use case. install does a lot more than cp (it can set ownership, permissions, and strip binaries in one step). On the other hand, you can run other commands alongside cp to take care of those things, and install is not available on every system. So pick whichever suits you best.
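For reference, here is a minimal sketch following the GNU coding-standards convention (the variable names are conventional, not built into make, and the bindir default is my assumption):

INSTALL ?= install
INSTALL_PROGRAM ?= $(INSTALL) -m 755
INSTALL_DATA ?= $(INSTALL) -m 644
bindir ?= $(HOME)/bin

install: helloworld
	mkdir -p $(bindir)
	$(INSTALL_PROGRAM) helloworld $(bindir)/helloworld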
I am running Arch Linux and trying to build a project in Qt; however, Qt spits out the following error:
/opt/cuda/include/crt/host_config.h:129: error: #error -- unsupported GNU version! gcc versions later than 7 are not supported!
I have already tried a suggestion from a previous Stack Overflow post found here:
CUDA incompatible with my gcc version
I did not use the exact command, as my CUDA gcc is located at /opt/cuda/bin/gcc; I ran the same command for g++ as well. However, the terminal reports that these files are already linked, which I confirmed by opening the actual files and looking at their properties.
Can someone please suggest a solution to my issue?
I managed to fix it using these two lines, which update CUDA's symbolic links to point at gcc 7:
ln -s /usr/bin/gcc-7 /usr/local/cuda/bin/gcc
ln -s /usr/bin/g++-7 /usr/local/cuda/bin/g++
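As an aside (my addition, not part of the original answer): nvcc can also be pointed at a specific host compiler with -ccbin, which avoids touching the symlinks at all. For example:

nvcc -ccbin /usr/bin/gcc-7 -o hello hello.cu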
The issue comes from cuda-10.0/targets/x86_64-linux/include/crt/host_config.h in the main CUDA-10 directory tree; on your system the target tree for your architecture was placed under /opt.
Some posts recommend faking the inequality, changing

#if __GNUC__ > 7

to read

#if __GNUC__ > 8

but that is a bad idea. Using

make 'NVCCFLAGS=-m64 -D__GNUC__=7' -k

is permissible in some trivial cases, but it is still fundamentally the same bad hack.
You probably have the alternatives system on your machine, and it has constructed symbolic links pointing at the version 8 GNU toolchain files; that is why you get an indication that version 7 is already linked.
You can modify the alternatives for just your developer users, but NOT for root or any system admin accounts. You will also want to be able to switch back and forth between 7 and 8, using 7 only when actually needed, since many other things on the system may have been tested only with version 8. One way to do the per-user switch is sketched below.
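Here is that per-user sketch (my addition; it assumes gcc-7 and g++-7 binaries exist under /usr/bin):

# Shadow the system compilers for this user only.
mkdir -p "$HOME/bin"
ln -sf /usr/bin/gcc-7 "$HOME/bin/gcc"
ln -sf /usr/bin/g++-7 "$HOME/bin/g++"
export PATH="$HOME/bin:$PATH"

# Switch back to version 8 by removing the shadows.
rm -f "$HOME/bin/gcc" "$HOME/bin/g++"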
If that doesn't work for you, you can build gcc 7 from source. The preparatory system admin work amounts to a dnf install, a build and install of the 7.4 GNU compiler, and a path setup used for CUDA development only. If GNU gcc and g++ version 8 are already installed and working, with the appropriate standard libraries, the version 7 compiler can be built with relative ease.
Browse https://gcc.gnu.org/mirrors.html for the nearest mirror, copy the link location for gcc-7.4.0.tar.xz, and place it in the shell variable u, as in this example:
u="http://mirrors.concertpass.com/gcc/releases/gcc-7.4.0/gcc-7.4.0.tar.xz"
Then you can do the rest as commands.
sudo dnf install libmpc-devel
cd
mkdir -p scratch
cd scratch
wget -O - "$u" |tar Jxf -
cd gcc-7.4.0
mkdir build
cd build
../configure --prefix=/usr/local/gcc-7
make
sudo bash -c "cd \"`pwd`\"; make install"
Then execute the following in the shells and tools you develop with. Do NOT put this in the system-wide login scripts or in .bashrc or .bash_profile, for the same reason as above: other things may have been tested with version 8 only. Instead, place these settings in your development environment, where they belong.
LD_LIBRARY_PATH=/usr/local/gcc-7/lib64:$LD_LIBRARY_PATH
LD_LIBRARY_PATH=/usr/local/gcc-7/lib:$LD_LIBRARY_PATH
LD_LIBRARY_PATH=/usr/local/cuda-10.0/NsightCompute-1.0/host/linux-desktop-glibc_2_11_3-glx-x64/Plugins:$LD_LIBRARY_PATH
LD_LIBRARY_PATH=/usr/local/cuda-10.0/NsightCompute-1.0/target/linux-desktop-glibc_2_11_3-glx-x64:$LD_LIBRARY_PATH
LD_LIBRARY_PATH=/usr/local/cuda-10.0/targets/x86_64-linux/lib/stubs:$LD_LIBRARY_PATH
PATH=/usr/local/gcc-7/bin:$PATH
PATH=/usr/local/cuda-10.0/bin:$PATH
PATH=$HOME/big/cuda.samples/NVIDIA_CUDA-10.0_Samples/bin/x86_64/linux/release:$PATH
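A quick sanity check after this setup (my addition): confirm which toolchain is now first on the PATH.

which gcc g++ nvcc
gcc --version | head -n1    # expect 7.4.0
nvcc --version | tail -n1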
Looking at the libs (:/Qt/5.11.0/5.11.0/gcc_64/lib/) of Qt 5.11.0, I was wondering which one links openssl. So I ran the following:
for lib in `ls *.so`; do ldd $lib | grep ssl; done
And I got no output, suggesting that none of the libraries are linking against openssl. But I believe that Qt must link against it somehow (e.g. for networking).
How is the linking done? And where does it look for openssl? Do I have a way to know which one Qt found on my system (e.g. given I have multiple ones)?
Qt loads some libraries dynamically at runtime, which is why nothing showed up in the ldd output. To see which libraries are actually loaded, one can check the diagnostic output of the dynamic linker, as hinted by Matteo:
Say my executable is called QGroundcontrol:
On Linux:
LD_DEBUG=libs ./QGroundcontrol 2>&1 | grep -E "ssl|crypto"
On macOS:
DYLD_PRINT_LIBRARIES=1 ./QGroundcontrol 2>&1 | grep -E "ssl|crypto"
From this I could see that Qt finds openssl on the system.
Now, I still don't know how I can tell Qt to look somewhere else (in case I want to link another openssl), but that's another question.
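One hedged guess at that follow-up (my addition, not something the original answer verified): since the library is resolved through the normal dynamic-linker search order, prepending a directory to LD_LIBRARY_PATH should steer Qt toward a different OpenSSL build.

# Assumption: /opt/openssl-1.1/lib holds the alternative libssl/libcrypto.
LD_LIBRARY_PATH=/opt/openssl-1.1/lib ./QGroundcontrol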
I'm using GNU make to build a group of static libraries, using the implicit make rules for doing so. These rules run the ar(1) command to update the library / archive. Profiling has shown that the build time would be reduced if I used the -j option to make to run parallel jobs during the build.
Unfortunately, the GNU make manual has a section
http://www.gnu.org/software/make/manual/html_node/Archive-Pitfalls.html that pretty much says that make provides no concurrency guards for running ar(1), and thus it can (and does) corrupt the archive. The manual further teases that this may be fixed in the future.
One solution to this is to use http://code.google.com/p/ipcmd, which basically does semaphore locking before running a command, thus serializing the ar(1) commands building the archive. This particular solution isn't good for me because I'm building with mingw-based cross-compilation tools on Windows.
Is there a simpler or better solution to this problem?
Do the archiving as a single step, rather than trying to update the archive incrementally:
libfoo.a: $(OBJS)
	-rm -f $@
	$(AR) rc $@ $^
	$(RANLIB) $@
Try the following -
AR := flock make.lock $(AR)

clean::
	rm -f make.lock
Now ar(1) will execute while holding an exclusive lock on the file make.lock, thereby serializing access to the library.
You can add a command to delete the file make.lock after the ranlib command.
Add export AR to propagate the definition to sub-makes, if necessary.
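Putting those suggestions together (a sketch on my part, not from the original answer), using make's archive-member syntax so the implicit ar rules still apply:

AR := flock make.lock $(AR)
export AR

libfoo.a: libfoo.a($(OBJS))
	$(RANLIB) $@
	rm -f make.lock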
Ever since I learned about -j I've used -j8 blithely. The other day I was compiling an atlas installation and the make failed. Eventually I tracked it down to things being made out of order - and it worked fine once I went back to single-threaded make. This makes me nervous. What sort of conditions do I need to watch for when writing my own makefiles, to avoid doing something unexpected with make -j?
I think make -j will respect the dependencies you specify in your Makefile; i.e. if you specify that objA depends on objB and objC, then make won't start working on objA until objB and objC are complete.
Most likely your Makefile isn't specifying the necessary order of operations strictly enough, and it's just luck that it happens to work for you in the single-threaded case.
In short - make sure that your dependencies are correct and complete.
With a single-threaded make you can get away with blindly ignoring implicit dependencies between targets; serial execution happens to build things in a workable order.
With parallel make you can't rely on that implicit ordering: every dependency should be made explicit. This is probably the most common trap, particularly when .PHONY targets are used as dependencies.
This link is a good primer on some of the issues with parallel make.
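To illustrate the trap (a sketch with hypothetical names, not taken from the primer): both objects need a generated header, but only one rule says so, and under -j the other can start compiling before the header exists.

foo.o: foo.c config.h
bar.o: bar.c            # missing config.h - races under -j

config.h:
	./genconfig > $@    # hypothetical generator script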
Here's an example of a problem that I ran into when I started using parallel builds. I have a target called "fresh" that I use to rebuild the target from scratch (a "fresh" build). In the past, I coded the "fresh" target by simply listing "clean" and then "build" as dependencies.
build: ## builds the default target
clean: ## removes generated files
fresh: clean build ## works for -j1 but fails for -j2
That worked fine until I started using parallel builds: with parallel builds, make attempts to run both "clean" and "build" simultaneously. So I changed the definition of "fresh" as follows, in order to guarantee the correct order of operations.
fresh:
	$(MAKE) clean
	$(MAKE) build
This is fundamentally just a matter of specifying dependencies correctly. The trick is that parallel builds are stricter about this than single-threaded builds. My example demonstrates that a list of dependencies for a given target does not necessarily dictate the order of execution.
If you have a recursive make, things can break pretty easily. If you're not doing a recursive make, then as long as your dependencies are correct and complete, you shouldn't run into any problems (save for a bug in make). See Recursive Make Considered Harmful for a much more thorough description of the problems with recursive make.
It is a good idea to have an automated test for the -j option of ALL your makefiles. Even the best developers have problems with -j. The most common issue is also the simplest:
myrule: subrule1 subrule2
	echo done

subrule1:
	echo hello

subrule2:
	echo world
In a normal make, you will see hello -> world -> done.
With make -j 4, you might see world -> hello -> done.
Where I have seen this happen most is with the creation of output directories. For example:
build: $(DIRS) $(OBJECTS)
	echo done

$(DIRS):
	-@mkdir -p $@

$(OBJECTS):
	$(CC) ...
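A sketch of the usual fix (my addition, not part of the answer): declare the directories as an order-only prerequisite of the objects, so they are guaranteed to exist before any compile starts, even under -j.

$(OBJECTS): | $(DIRS)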
Just thought I would add to subsetbrew's answer, since it does not show the effect clearly; adding some sleep commands does. (This works on Linux, at least.)
Then running the makefile below shows the difference between:
make
make -j4
all: toprule1

toprule1: botrule2 subrule1 subrule2
	@echo toprule 1 start
	@sleep 0.01
	@echo toprule 1 done

subrule1: botrule1
	@echo subrule 1 start
	@sleep 0.08
	@echo subrule 1 done

subrule2: botrule1
	@echo subrule 2 start
	@sleep 0.05
	@echo subrule 2 done

botrule1:
	@echo botrule 1 start
	@sleep 0.20
	@echo "botrule 1 done (good prerequisite in sub)"

botrule2:
	@echo "botrule 2 start"
	@sleep 0.30
	@echo "botrule 2 done (bad prerequisite in top)"