Passing multiple paths to cmdenv in Hadoop Streaming

I am using the Hadoop Streaming jar and trying to pass an environment variable that points to multiple paths using -cmdenv.
hadoop jar ../hadoop-streaming.jar \
-libjars .../something.jar \
-inputFormat ..CustomInputFormat \
-file mapper.py \
-file stream.py \
-cacheFile ../files#aliasname \
-cmdenv LD_LIBRARY_PATH=/home/files/1/xyz:/home/files/2/:/home/files/1/abc/ \
-mapper mapper.py \
-input hdfs:/inputfiles/ \
-output hdfs:/outputfiles/ \
-reducer NONE \
-verbose
I have a couple of queries:
1. The environment variable defined with -cmdenv is not visible to the mapper script.
2. Can I provide HDFS directories as paths in the environment variable?
When I execute the Hadoop command, I get an error in the application logs: "error while loading shared libraries: xyz.so: cannot open shared object file: No such file or directory".
I can also see the environment variable being set in the streaming job configuration:
STREAM: stream.addenvironment=HADOOP_ROOT_LOGGER= LD_LIBRARY_PATH=<path1>/:<path2>/..
Can you suggest where I am going wrong?

I was able to resolve the issue by uploading the C code and its dependency libraries to HDFS, distributing them with -cacheFile symlinks, and referencing those symlinks in the environment variable passed via -cmdenv. See below.
hadoop jar /usr/lib/hadoop-mapreduce/hadoop-streaming.jar \
-libjars /<path>/jars/Hadoop_Streaming.jar \
-inputformat com.hadoop.IMCFInputFormat \
-outputformat org.apache.hadoop.mapred.TextOutputFormat \
-file mapper.sh \
-file stream.py \
-cacheFile hdfs:/<hdfspath>/IMCF/Common/#IMCF_Common \
-cacheFile hdfs:/<hdfspath>/IMCF/STMC/Common/#IMCF_STMC_Common \
-cacheFile hdfs:/<hdfspath>/IMCF/STMC/StarLabResults/#IMCF_STMC_StarLabResults \
-cacheFile hdfs:/<hdfspath>/IMCF/STMC/StarMeasureMembership/#IMCF_STMC_StarMeasureMembership \
-cacheFile hdfs:/<hdfspath>/IMCF/STMC/StarMedicalCase/#IMCF_STMC_StarMedicalCase \
-cacheFile hdfs:/<hdfspath>/IMCF/STMC/StarMedicalClaim/#IMCF_STMC_StarMedicalClaim \
-cacheFile hdfs:/<hdfspath>/IMCF/STMC/StarPrcdrTracking/#IMCF_STMC_StarPrcdrTracking \
-cacheFile hdfs:/<hdfspath>/IMCF/STMC/StarRxClaim/#IMCF_STMC_StarRxClaim \
-cacheFile hdfs:/<hdfspath>/IMCF/STMC/Stars/#IMCF_STMC_Stars \
-cacheFile hdfs:/<hdfspath>/IMCF/STMC/StarDerived/#IMCF_STMC_StarDerived \
-cacheFile hdfs:/<hdfspath>/lkup_files/MFW_meas_comp_def.lkup#MFW_meas_comp_def.lkup \
-cacheFile hdfs:/<hdfspath>/lkup_files/MFW_msr_cdset_x_cdset_lst.lkup#MFW_msr_cdset_x_cdset_lst.lkup \
-cacheFile hdfs:/<hdfspath>/lkup_files/MFW_msr_criteria.lkup#MFW_msr_criteria.lkup \
-cacheFile hdfs:/<hdfspath>/lkup_files/MFW_run_params.properties#MFW_run_params_HMO.env \
-cacheFile hdfs:/<hdfspath>/IMCF/MainModule/imcf.exe#imcf.exe \
-mapper mapper.sh \
-input /<hdfspath>/Stars_Ext_Tbl/IMCF_CIF_EXT/ \
-output /<hdfspath>/stream_output/ \
-reducer NONE \
-cmdenv LD_LIBRARY_PATH=IMCF_Common:IMCF_STMC_Common:IMCF_STMC_StarLabResults:IMCF_STMC_StarMeasureMembership:IMCF_STMC_StarMedicalCase:IMCF_STMC_StarMedicalClaim:IMCF_STMC_StarPrcdrTracking:IMCF_STMC_StarRxClaim:IMCF_STMC_Stars:IMCF_STMC_StarDerived \
-verbose
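For anyone verifying this, a minimal mapper.sh sketch is shown below (illustrative only; it reuses the symlink names from the command above, and the real mapper may do more). Anything written to stderr ends up in the task attempt logs, which makes it easy to confirm that the -cmdenv value actually reached the task:
#!/bin/bash
# Print the environment variable and the cacheFile symlinks, which appear in
# the task's working directory, then run the native binary as usual.
echo "LD_LIBRARY_PATH inside the task: $LD_LIBRARY_PATH" >&2
ls -ld IMCF_Common IMCF_STMC_Common >&2
./imcf.exe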

Related

Can I completely customize the flags for ./configure in a SPEC file?

My rpmbuild log tells me all the flags used when calling configure:
./configure --build=x86_64-redhat-linux-gnu --host=x86_64-redhat-linux-gnu \
--program-prefix= \
--disable-dependency-tracking \
--prefix=/usr \
--exec-prefix=/usr \
--bindir=/usr/bin \
--sbindir=/usr/sbin \
--sysconfdir=/etc \
--datadir=/usr/share \
--includedir=/usr/include \
--libdir=/usr/lib64 \
--libexecdir=/usr/libexec \
--localstatedir=/var \
--sharedstatedir=/var/lib \
--mandir=/usr/share/man \
--infodir=/usr/share/info \
--prefix=/opt/custom/SENSOR/Qt-5.15.2 \
--confirm-license \
--opensource
My problem is that the 'build' and 'host' flags (plus several others) are unknown options for this particular configure script. How can I take complete control of the call to configure in my SPEC file? It's obviously not enough to add new flags to the %configure macro; I need to remove flags that rpmbuild adds by default.

It looks like the answer is to call configure directly instead of using the macro, i.e. replace %configure with:
./configure --prefix=/opt/custom/SENSOR -confirm-license -opensource
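For reference, here is a minimal %build sketch along those lines (the make line and the parallel-build macro are assumptions; adjust them to the package):
%build
# Call configure directly instead of the %configure macro, so rpmbuild does not
# inject --build/--host and the other default flags this script rejects.
./configure --prefix=/opt/custom/SENSOR -confirm-license -opensource
make %{?_smp_mflags}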

Can't run Qt application on 32-bit Yocto build with eglfs

I have built a Yocto image with the following configuration
# Architecture of the host machine
SDKMACHINE ?= "x86_64"
# Extra image configuration defaults
EXTRA_IMAGE_FEATURES = "debug-tweaks ssh-server-openssh"
CORE_IMAGE_EXTRA_INSTALL += "openssh-sftp openssh-sftp-server"
INIT_MANAGER = "systemd"
INHERIT += "rm_work"
# Extra packages
LICENSE_FLAGS_WHITELIST = "commercial"
DISTRO_FEATURES:remove = " \
x11 \
directfb \
vulkan \
wayland \
"
DISTRO_FEATURES:append = " \
alsa \
opengl \
gles2 \
"
IMAGE_INSTALL:append = " \
coreutils \
qtbase-plugins \
qtbase-tools \
qtdeclarative \
qtdeclarative-plugins \
qtdeclarative-qmlplugins \
qtdeclarative-tools \
qtimageformats-plugins \
qtmultimedia \
qtmultimedia-plugins \
qtmultimedia-qmlplugins \
qtquickcontrols2 \
qtquicktimeline \
qtscript \
qtsvg \
qtsvg-plugins \
qtsystems \
qtsystems-qmlplugins \
qtsystems-tools \
rsync \
"
PACKAGECONFIG:append:pn-qtbase = " \
eglfs \
fontconfig \
gles2 \
libpng \
jpeg \
libs \
widgets \
"
# Image file system types to package
IMAGE_FSTYPES = "rpi-sdimg"
# Package management configuration
PACKAGE_CLASSES = "package_ipk"
MACHINE ??= "raspberrypi4"
DISTRO ??= "poky"
BBMULTICONFIG ?= ""
I could successfully build the image and generate an SDK. I can build the Qt app and deploy it to the device.
I set
QT_QPA_PLATFORM=eglfs
DISPLAY=:0
but I get the error
EGL library doesn't support Emulator extensions
Aborted
The error looks similar to
QT Creator can not remote run and debugging on i.Mx6 (buildroot)
but since the Pi doesn't have a Vivante driver, I don't know what to pass. I tried searching for the available integrations:
$ find / -name "*egl*"
/usr/lib/plugins/video/videonode/libeglvideonode.so
/usr/lib/plugins/videoeglvideonode
/usr/lib/plugins/egldeviceintegrations
/usr/lib/plugins/egldeviceintegrations/libqeglfs-emu-integration.so
/usr/lib/plugins/egldeviceintegrations/libqeglfs-kms-integration.so
/usr/lib/plugins/egldeviceintegrations/libqeglfs-kms-egldevice-integration.so
/usr/lib/plugins/platforms/libqminimalegl.so
/usr/lib/plugins/platforms/libqeglfs.so
but eglfs-kms and all the other combinations I have tried don't work.
If I try
QT_QPA_EGLFS_INTEGRATION=none
I get
Unable to query physical screen size, defaulting to 100 dpi.
To override, set QT_QPA_EGLFS_PHYSICAL_WIDTH and QT_QPA_EGLFS_PHYSICAL_HEIGHT (in millimeters).
Cannot find EGLConfig, returning null config
EGL Error : Could not create the egl surface: error = 0x300b
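One thing worth checking, as a hedged sketch (it assumes the image uses the vc4/KMS graphics stack on the Pi 4, and the DRM device path is an assumption): force the KMS device integration explicitly instead of letting eglfs auto-detect, and turn on QPA logging to see which integration and EGLConfig actually get picked:
# Force the KMS/DRM device integration (auto-detection seems to pick the
# emulator integration here) and enable verbose platform-plugin logging.
export QT_QPA_PLATFORM=eglfs
export QT_QPA_EGLFS_INTEGRATION=eglfs_kms
echo '{ "device": "/dev/dri/card0" }' > /tmp/eglfs-kms.json   # device node may differ
export QT_QPA_EGLFS_KMS_CONFIG=/tmp/eglfs-kms.json
export QT_LOGGING_RULES="qt.qpa.*=true"
./your-qt-app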

QML module not found

I copied a project to another directory (forked it), and now I always get "QML module not found (QtQuick.Controls)." and similar warnings.
The original project does not show any errors. I cleaned and rebuilt everything, but that didn't solve it. The project compiles and runs perfectly, so Qt Creator is somehow being fooled into reporting a problem.
What could be the reason for this? I am using Qt 5.11 and Qt Creator 4.7.1.
See attached picture
Project file:
QT += quick network
QT += quickcontrols2
QT += widgets
CONFIG += c++11
# The following define makes your compiler emit warnings if you use
# any feature of Qt which has been marked as deprecated (the exact warnings
# depend on your compiler). Please consult the documentation of the
# deprecated API in order to know how to port your code away from it.
DEFINES += QT_DEPRECATED_WARNINGS
# You can also make your code fail to compile if you use deprecated APIs.
# In order to do so, uncomment the following line.
# You can also select to disable deprecated APIs only up to a certain version of Qt.
#DEFINES += QT_DISABLE_DEPRECATED_BEFORE=0x060000 # disables all the APIs deprecated before Qt 6.0.0
SOURCES += \
blockchainaccount.cpp \
blockies.cpp \
error.cpp \
ethkey.cpp \
hqx.cpp \
hqx2.cpp \
hqx3.cpp \
hqx4.cpp \
identicon.cpp \
walletaccount.cpp \
main.cpp \
aewm.cpp \
acctlist.cpp \
block.cpp \
blocklist.cpp \
txlist.cpp \
vtlist.cpp \
transaction.cpp \
valuetransfer.cpp \
acctcatlist.cpp \
ftokens.cpp \
token.cpp \
txparam.cpp \
ftokops.cpp \
nftokens.cpp \
simres.cpp \
ftapprovals.cpp \
ftholders.cpp \
mainstats.cpp \
prefs.cpp \
blockheader.cpp \
addresslist.cpp \
acctcat.cpp \
balance.cpp \
big.cpp \
tokop.cpp \
ftholder.cpp \
ftapproval.cpp \
utils.cpp
RESOURCES += qml.qrc
# Additional import path used to resolve QML modules in Qt Creator's code model
QML_IMPORT_PATH =
# Additional import path used to resolve QML modules just for Qt Quick Designer
QML_DESIGNER_IMPORT_PATH =
# Default rules for deployment.
qnx: target.path = /tmp/$${TARGET}/bin
else: unix:!android: target.path = /opt/$${TARGET}/bin
!isEmpty(target.path): INSTALLS += target
DISTFILES +=
HEADERS += \
blockchainaccount.h \
blockies.h \
error.h \
ethkey.h \
hqx.h \
hqx2.h \
hqx3.h \
hqx4.h \
identicon.h \
walletaccount.h \
aewm.h \
acctlist.h \
block.h \
blocklist.h \
txlist.h \
vtlist.h \
transaction.h \
valuetransfer.h \
acctcatlist.h \
ftokens.h \
token.h \
txparam.h \
ftokops.h \
nftokens.h \
simres.h \
ftapprovals.h \
ftholders.h \
mainstats.h \
prefs.h \
blockheader.h \
addresslist.h \
acctcat.h \
balance.h \
big.h \
tokop.h \
ftholder.h \
ftapproval.h \
utils.h \
config.h
I too had this problem and was able to resolve it, mostly by just following the instructions in the warning message. A rough approximation of that warning message said, "please add /usr/lib64/qt/qml to your QML_IMPORT_PATH variable."
A quick search in my project found just one instance of QML_IMPORT_PATH in the project's .pro file. I added the suggested path to that line in the .pro file and the problem was resolved.
YMMV
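If it is not obvious which directory to add, the kit itself can report it; a small sketch (the /usr/lib64/qt/qml path is just what my warning suggested and will differ per kit):
# Ask the kit where its QML modules are installed; this is the directory the
# code model needs to know about.
qmake -query QT_INSTALL_QML
# Then add that directory to the existing QML_IMPORT_PATH line in the .pro,
# e.g. QML_IMPORT_PATH = /usr/lib64/qt/qml, and re-run qmake from Qt Creator.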

How to enable HDFS caching on Amazon EMR?

What's the easiest way to enable HDFS caching on EMR?
More specifically, how do I set dfs.datanode.max.locked.memory and increase the "maximum size that may be locked into memory" (ulimit -l) on all nodes?
The following command seems to work fine for dfs.datanode.max.locked.memory, and I could probably write a custom bootstrap action to update /usr/lib/hadoop/hadoop-daemon.sh and call ulimit. Is there a better or faster way?
elastic-mapreduce --create \
--alive \
--plain-output \
--visible-to-all \
--ami-version 3.1.0 \
-a $access_id \
-p $private_key \
--name "test" \
--master-instance-type m3.xlarge \
--instance-group master --instance-type m3.xlarge --instance-count 1 \
--instance-group core --instance-type m3.xlarge --instance-count 10 \
--pig-interactive \
--log-uri s3://foo/bar/logs/ \
--bootstrap-action s3://elasticmapreduce/bootstrap-actions/configure-hadoop \
--args "--hdfs-key-value,dfs.datanode.max.locked.memory=2000000000"
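For the ulimit part, a custom bootstrap action along these lines should work, sketched below (it is a workaround rather than an official EMR hook, and it assumes the daemons run as the hadoop user on the 3.x AMIs):
#!/bin/bash
# Raise the memlock limit on every node before the Hadoop daemons start:
# add a limits.conf entry and force an explicit ulimit in hadoop-daemon.sh.
echo "hadoop - memlock unlimited" | sudo tee -a /etc/security/limits.conf
sudo sed -i '2i ulimit -l unlimited' /usr/lib/hadoop/hadoop-daemon.sh
Once the cluster is up, the cache itself is managed with hdfs cacheadmin -addPool and hdfs cacheadmin -addDirective -path ... -pool ....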

Need Help w/ Annoying Makefile Errors -- g++: g++ and shell errors -- and Multi-Makefile Design Advice

I have a makefile:
#Nice, wonderful makefile written by Jason
CC=g++
CFLAGS=-c -Wall
BASE_DIR:=.
SOURCE_DIR:=$(BASE_DIR)/source
BUILD_DIR:=$(BASE_DIR)/build
TEST_DIR:=$(BASE_DIR)/build/tests
MAKEFILE_DIR:=$(BASE_DIR)/makefiles
DATA_DIR:=$(BASE_DIR)/data
DATA_DIR_TESTS:=$(DATA_DIR)/tests
MOLECULE_UT_SOURCES := $(SOURCE_DIR)/molecule_test/main.cc \
$(SOURCE_DIR)/molecule_manager.h \
$(SOURCE_DIR)/molecule_manager.cpp \
$(SOURCE_DIR)/molecule_manager_main.h \
$(SOURCE_DIR)/molecule_manager_main.cpp \
$(SOURCE_DIR)/molecule_reader.h \
$(SOURCE_DIR)/molecule_reader.cpp \
$(SOURCE_DIR)/molecule_reader_psf_pdb.h \
$(SOURCE_DIR)/molecule_reader_psf_pdb.cpp \
$(SOURCE_DIR)/parameter_manager_lj_molecule.h \
$(SOURCE_DIR)/parameter_manager_lj_molecule.cpp \
$(SOURCE_DIR)/parameter_manager.h \
$(SOURCE_DIR)/parameter_manager.cpp \
$(SOURCE_DIR)/parser.h \
$(SOURCE_DIR)/parser.cpp \
$(SOURCE_DIR)/common.h
MOLECULE_UT_DATA := \
$(DATA_DIR_TESTS)/molecule_test/par_oxalate_and_friends.inp \
$(DATA_DIR_TESTS)/molecule_test/dicarboxy-octane_4.pdb \
$(DATA_DIR_TESTS)/molecule_test/dicarboxy-octane_4.psf
PARAM_UT_SOURCES := $(SOURCE_DIR)/parameter_test/main.cc \
$(SOURCE_DIR)/parameter_manager_lj_molecule.h \
$(SOURCE_DIR)/parameter_manager_lj_molecule.cpp \
$(SOURCE_DIR)/parameter_manager.h \
$(SOURCE_DIR)/parameter_manager.cpp \
$(SOURCE_DIR)/parser.h \
$(SOURCE_DIR)/parser.cpp \
$(SOURCE_DIR)/common.h
PARAM_UT_DATA := $(DATA_DIR_TESTS)/molecule_test/par_oxalate_and_friends.inp
molecule_test : molecule_test_prepare_sources molecule_test_prepare_makefiles \
molecule_test_prepare_data_files
	@$(shell cd $(TEST_DIR)/molecule_unit_test/; \
	make ./bin/molecule_test)
molecule_test_prepare_sources: molecule_test_dir
	@echo Copying sources...
	@cp --preserve $(MOLECULE_UT_SOURCES) \
	$(TEST_DIR)/molecule_unit_test/source
molecule_test_prepare_makefiles: $(MAKEFILE_DIR)/Makefile.molecule_test
	@cp --preserve $(MAKEFILE_DIR)/Makefile.molecule_test \
	$(TEST_DIR)/molecule_unit_test/Makefile
molecule_test_prepare_data_files:
	cp --preserve $(MOLECULE_UT_DATA) $(TEST_DIR)/molecule_unit_test/bin/
molecule_test_dir:
	@if test -d $(BUILD_DIR); then \
	echo Build exists...; \
	else \
	echo Build directory does not exist, making build dir...; \
	mkdir $(BUILD_DIR); \
	fi
	@if test -d $(TEST_DIR); then \
	echo Tests exists...; \
	else \
	echo Tests directory does not exist, making tests dir...; \
	mkdir $(TEST_DIR); \
	fi
	@if test -d $(TEST_DIR)/molecule_unit_test; then \
	echo Molecule unit test directory exists...; \
	else \
	echo Molecule unit test directory does \
	not exist, making build dir...; \
	mkdir $(TEST_DIR)/molecule_unit_test; \
	fi
	@if test -d $(TEST_DIR)/molecule_unit_test/source; then \
	echo Molecule unit test source directory exists...; \
	else \
	echo Molecule unit test source directory does \
	not exist, making build dir...; \
	mkdir $(TEST_DIR)/molecule_unit_test/source; \
	fi
	@if test -d $(TEST_DIR)/molecule_unit_test/obj; then \
	echo Molecule unit test object directory exists...; \
	else \
	echo Molecule unit test object directory does \
	not exist, making object dir...; \
	mkdir $(TEST_DIR)/molecule_unit_test/obj; \
	fi
	@if test -d $(TEST_DIR)/molecule_unit_test/bin; then \
	echo Molecule unit test executable directory exists...; \
	else \
	echo Molecule unit test executable directory does \
	not exist, making executable dir...; \
	mkdir $(TEST_DIR)/molecule_unit_test/bin; \
	fi
param_test : param_test_prepare_sources param_test_prepare_makefiles \
param_test_prepare_data_files
	@$(shell cd $(TEST_DIR)/param_unit_test/; \
	make ./bin/param_test)
param_test_prepare_sources: param_test_dir
	@echo Copying sources...
	@cp --preserve $(PARAM_UT_SOURCES) $(TEST_DIR)/param_unit_test/source
param_test_prepare_makefiles: $(MAKEFILE_DIR)/Makefile.param_test
	@cp --preserve $(MAKEFILE_DIR)/Makefile.param_test \
	$(TEST_DIR)/param_unit_test/Makefile
param_test_prepare_data_files:
	cp --preserve $(PARAM_UT_DATA) $(TEST_DIR)/param_unit_test/bin/
param_test_dir:
	@if test -d $(BUILD_DIR); then \
	echo Build exists...; \
	else \
	echo Build directory does not exist, making build dir...; \
	mkdir $(BUILD_DIR); \
	fi
	@if test -d $(TEST_DIR); then \
	echo Tests exists...; \
	else \
	echo Tests directory does not exist, making tests dir...; \
	mkdir $(TEST_DIR); \
	fi
	@if test -d $(TEST_DIR)/param_unit_test; then \
	echo Param unit test directory exists...; \
	else \
	echo Param unit test directory does \
	not exist, making build dir...; \
	mkdir $(TEST_DIR)/param_unit_test; \
	fi
	@if test -d $(TEST_DIR)/param_unit_test/source; then \
	echo Param unit test source directory exists...; \
	else \
	echo Param unit test source directory does \
	not exist, making build dir...; \
	mkdir $(TEST_DIR)/param_unit_test/source; \
	fi
	@if test -d $(TEST_DIR)/param_unit_test/obj; then \
	echo Param unit test object directory exists...; \
	else \
	echo Param unit test object directory does \
	not exist, making object dir...; \
	mkdir $(TEST_DIR)/param_unit_test/obj; \
	fi
	@if test -d $(TEST_DIR)/param_unit_test/bin; then \
	echo Param unit test executable directory exists...; \
	else \
	echo Param unit test executable directory does \
	not exist, making executable dir...; \
	mkdir $(TEST_DIR)/param_unit_test/bin; \
	fi
That calls a second makefile after it creates and populates the directory structure.
The second makefile is as follows:
#Nice, wonderful makefile written by Jason
CC=g++
CFLAGS=-c -Wall
SOURCE_DIR:=./source
OBJ_DIR:=./obj
EXE_DIR:=./bin
$(EXE_DIR)/molecule_test : $(OBJ_DIR)/main.o \
$(OBJ_DIR)/parameter_manager_lj_molecule.o \
$(OBJ_DIR)/parameter_manager.o $(OBJ_DIR)/parser.o \
$(OBJ_DIR)/molecule_manager.o $(OBJ_DIR)/molecule_manager_main.o \
$(OBJ_DIR)/molecule_reader.o \
$(OBJ_DIR)/molecule_reader_psf_pdb.o
	@$(CC) $(OBJ_DIR)/main.o $(OBJ_DIR)/parameter_manager.o \
	$(OBJ_DIR)/parser.o $(OBJ_DIR)/parameter_manager_lj_molecule.o \
	$(OBJ_DIR)/molecule_manager.o $(OBJ_DIR)/molecule_manager_main.o \
	$(OBJ_DIR)/molecule_reader.o \
	$(OBJ_DIR)/molecule_reader_psf_pdb.o \
	-o molecule_test
	@mv molecule_test $(EXE_DIR)/
$(OBJ_DIR)/main.o: $(SOURCE_DIR)/parameter_manager.h \
$(SOURCE_DIR)/parameter_manager_lj_molecule.h \
$(SOURCE_DIR)/molecule_manager.h \
$(SOURCE_DIR)/molecule_manager_main.h \
$(SOURCE_DIR)/molecule_reader.h \
$(SOURCE_DIR)/molecule_reader_psf_pdb.h \
$(SOURCE_DIR)/common.h $(SOURCE_DIR)/main.cc
	$(CC) $(CFLAGS) $(SOURCE_DIR)/main.cc
	@mv main.o $(OBJ_DIR)/
$(OBJ_DIR)/molecule_reader.o: $(SOURCE_DIR)/parameter_manager.h \
$(SOURCE_DIR)/parameter_manager_lj_molecule.h \
$(SOURCE_DIR)/molecule_manager.h \
$(SOURCE_DIR)/molecule_manager_main.h \
$(SOURCE_DIR)/molecule_reader.h \
$(SOURCE_DIR)/common.h
	$(CC) $(CFLAGS) $(SOURCE_DIR)/molecule_reader.cpp
	@mv molecule_reader.o $(OBJ_DIR)/
$(OBJ_DIR)/molecule_reader_psf_pdb.o: $(SOURCE_DIR)/parameter_manager.h \
$(SOURCE_DIR)/parameter_manager_lj_molecule.h \
$(SOURCE_DIR)/molecule_manager.h \
$(SOURCE_DIR)/molecule_manager_main.h \
$(SOURCE_DIR)/molecule_reader.h \
$(SOURCE_DIR)/molecule_reader_psf_pdb.h \
$(SOURCE_DIR)/common.h
	$(CC) $(CFLAGS) $(SOURCE_DIR)/molecule_reader_psf_pdb.cpp
	@mv molecule_reader_psf_pdb.o $(OBJ_DIR)/
$(OBJ_DIR)/molecule_manager.o: $(SOURCE_DIR)/molecule_manager.h \
$(SOURCE_DIR)/common.h
	$(CC) $(CFLAGS) $(SOURCE_DIR)/molecule_manager.cpp
	@mv molecule_manager.o $(OBJ_DIR)/
$(OBJ_DIR)/molecule_manager_main.o: $(SOURCE_DIR)/molecule_manager.h \
$(SOURCE_DIR)/molecule_manager_main.h \
$(SOURCE_DIR)/common.h
	$(CC) $(CFLAGS) $(SOURCE_DIR)/molecule_manager_main.cpp
	@mv molecule_manager_main.o $(OBJ_DIR)/
$(OBJ_DIR)/parameter_manager_lj_molecule.o: $(SOURCE_DIR)/common.h \
$(SOURCE_DIR)/parameter_manager.h \
$(SOURCE_DIR)/parser.h
	$(CC) $(CFLAGS) $(SOURCE_DIR)/parameter_manager_lj_molecule.cpp
	@mv parameter_manager_lj_molecule.o $(OBJ_DIR)/
$(OBJ_DIR)/parameter_manager.o: $(SOURCE_DIR)/common.h
	$(CC) $(CFLAGS) $(SOURCE_DIR)/parameter_manager.cpp
	@mv parameter_manager.o $(OBJ_DIR)/
$(OBJ_DIR)/parser.o: $(SOURCE_DIR)/parser.h
	@$(CC) $(CFLAGS) $(SOURCE_DIR)/parser.cpp
	@mv parser.o $(OBJ_DIR)/
$(OBJ_DIR)/common.o: $(SOURCE_DIR)/common.h
	$(CC) $(CFLAGS) $(SOURCE_DIR)/common.h
	mv common.h.gch $(OBJ_DIR)/
I admit I'm somewhat of a novice at makefiles. I would like advice both on how to streamline these files (without too much "magic") and on how to fix two errors...
First I have to say everything is working, so to speak. When I build my target it creates all the directories correctly and generates the executable, all my files get copied properly, and everything gets recompiled when I touch files in my base-level source directory. So these aren't "real" errors, just annoying error text I want to get rid of...
The first error occurs when I run make molecule_test and there is actually work to do. Whatever needs to be done gets done, but I also get:
g++: g++: No such file or directory
g++: g++: No such file or directory
g++: g++: No such file or directory
g++: g++: No such file or directory
g++: g++: No such file or directory
g++: g++: No such file or directory
make: *** [molecule_test] Error 1
Again, the build succeeds and creates the executable properly.
The second error occurs when there is nothing to be done; when that happens I get:
/bin/sh: -c: line 0: unexpected EOF while looking for matching ``'
/bin/sh: -c: line 1: syntax error: unexpected end of file
Please be gentle... I've read basic makefile tutorials, including the GNU Make manual, but there seems to be a leap between building a small program from a handful of local sources and building a large program that needs nested directories, data files, etc. I'm trying to make that leap; unfortunately I have no best-practice makefiles from past code, since I'm in a small research group at a university, not a corporate environment.
My basic approach is to create a base directory with the following layout:
[dir] source/
[dir] data/
[dir] makefiles/
[dir] build/ (gets created)
Makefile
The top-level makefile creates a subdirectory in the build directory and copies in the needed sources (say, for a particular test program), the needed data files, and a makefile that builds those sources. The top-level makefile then calls the build-level makefile.
I'd be open to ideas on how to streamline this process, but I'd appreciate it if we first resolve the errors.
Thanks in advance!!!
P.S. I'm running CentOS 5.4, GNU Make 3.81, and gcc 4.1.2 20080704 (Red Hat 4.1.2-44); GNU Make and gcc are both 64-bit builds.
This fix seems to clear up both of your errors:
molecule_test : molecule_test_prepare_sources molecule_test_prepare_makefiles \
molecule_test_prepare_data_files
	@cd $(TEST_DIR)/molecule_unit_test && $(MAKE) ./bin/molecule_test
The $(shell ...) function is for invoking a shell outside of a rule. There's no need for it here, since this is a command in a rule; it is already executed in a subshell. Also note that this uses $(MAKE) instead of make (the reasons are a little subtle; just think of it as a good habit).
You can do it even more concisely and quietly:
molecule_test : molecule_test_prepare_sources molecule_test_prepare_makefiles \
molecule_test_prepare_data_files
	@$(MAKE) -s -C $(TEST_DIR)/molecule_unit_test ./bin/molecule_test
As for streamlining, there's a lot you can do. You can reduce the length of your second makefile by about half, and fix what appear to be a number of bugs, and with the first one you can do even better. It depends on how much "magic" you can tolerate. Here's a quick attempt at streamlining your second Makefile (since I don't have your files to test it with I can't promise it'll work without some touchups).
CC=g++
CFLAGS=-c -Wall
SOURCE_DIR:=./source
INCDIRS := -I$(SOURCE_DIR)
OBJ_DIR:=./obj
EXE_DIR:=./bin
VPATH = $(SOURCE_DIR)
$(EXE_DIR)/molecule_test : $(OBJ_DIR)/main.o \
$(OBJ_DIR)/parameter_manager_lj_molecule.o \
$(OBJ_DIR)/parameter_manager.o $(OBJ_DIR)/parser.o \
$(OBJ_DIR)/molecule_manager.o $(OBJ_DIR)/molecule_manager_main.o \
$(OBJ_DIR)/molecule_reader.o \
$(OBJ_DIR)/molecule_reader_psf_pdb.o
	@$(CC) $^ -o $@

$(OBJ_DIR)/main.o $(OBJ_DIR)/molecule_reader.o \
$(OBJ_DIR)/molecule_reader_psf_pdb.o: \
molecule_manager.h \
molecule_manager_main.h \
parameter_manager.h \
parameter_manager_lj_molecule.h

$(OBJ_DIR)/main.o: main.cpp \
molecule_reader.h \
molecule_reader_psf_pdb.h common.h
	$(CC) $(CFLAGS) $(INCDIRS) $< -o $@

$(OBJ_DIR)/molecule_reader_psf_pdb.o: molecule_reader.h

$(OBJ_DIR)/parameter_manager_lj_molecule.o: parser.h

%.o: %.cpp %.h common.h
	$(CC) $(CFLAGS) $(INCDIRS) $< -o $@
For a start you can get rid of all the mv commands and use make's built-in variables, e.g.
$(OBJ_DIR)/parameter_manager.o: $(SOURCE_DIR)/parameter_manager.cpp $(SOURCE_DIR)/common.h
	$(CC) $(CFLAGS) -o $@ $<
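As a further, smaller streamlining of the top-level makefile (a sketch using the values of its own variables): each of the long if test -d ...; else mkdir ...; fi recipes can collapse to a single mkdir -p, since mkdir -p does nothing for directories that already exist and creates parent directories as needed:
# One command replaces the whole chain of existence checks for the molecule test:
mkdir -p ./build/tests/molecule_unit_test/source \
         ./build/tests/molecule_unit_test/obj \
         ./build/tests/molecule_unit_test/bin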
