I'm quite new to makefiles, but I still cannot figure out how to set up the subdirectories of my source files.
My directory tree is:
i18n/
src/
engine/
graphics/ (currently the only directory used)
I'm using this premade Makefile:
TARGET = caventure
LIBS = -lSDL2
CC = g++
CFLAGS = -Wall
TGTDIR = build
.PHONY: default all clean
default: $(TARGET)
all: default
OBJECTS = $(patsubst %.cpp, %.o, $(wildcard *.cpp))
HEADERS = $(wildcard *.h)
%.o: %.cpp $(HEADERS)
    $(CC) $(CFLAGS) -c $< -o $@
.PRECIOUS: $(TARGET) $(OBJECTS)
$(TARGET): $(OBJECTS)
    $(CC) $(OBJECTS) -Wall $(LIBS) -o $(TGTDIR)/$(TARGET)
clean:
    -rm -f *.o
    -rm -f $(TARGET)
GNU make's wildcard function does not recursively visit all subdirectories.
You need a recursive variant of it, which can be implemented as described in this answer:
https://stackoverflow.com/a/18258352/1221106
So, instead of $(wildcard *.cpp) you need to use that recursive wildcard function.
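For reference, one common formulation of such a helper looks like this (a sketch, not copied verbatim from the linked answer; adjust the starting directory to taste):

# recursive wildcard: $(call rwildcard,<dir>/,<pattern>)
rwildcard = $(wildcard $1$2) $(foreach d,$(wildcard $1*),$(call rwildcard,$d/,$2))

OBJECTS = $(patsubst %.cpp, %.o, $(call rwildcard,src/,*.cpp))

The first argument is the directory to start from (with a trailing slash) and the second is the pattern to match at every level.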
Another simpler way of finding files recursively might be to just use find.
For example, if you have a layout like this.
$ tree .
.
├── d1
│ └── foo.txt
├── d2
│ ├── d4
│ │ └── foo.txt
│ └── foo.txt
├── d3
│ └── foo.txt
└── Makefile
You could write a Makefile like this.
index.txt: $(shell find . -name "*.txt")
    echo $^
Which prints this.
$ make
echo d2/d4/foo.txt d2/foo.txt d1/foo.txt d3/foo.txt
d2/d4/foo.txt d2/foo.txt d1/foo.txt d3/foo.txt
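Applied back to the original question, a minimal sketch using find (assuming the .cpp files live under src/) could be:

# collect every .cpp under src/, at any depth
SOURCES := $(shell find src -name "*.cpp")
OBJECTS := $(patsubst %.cpp, %.o, $(SOURCES))

Note that the object files will then be created next to their sources unless the %.o rule is adjusted to place them elsewhere.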
Related
I have these JSON files in a large directory structure. Some are just "abc.json" and some the added ".finished". I want to rsync only the files without ".finished".
$ find
.
./a
./a/abc.json.finished
./a/abc.json <-- this file
./a/index.html
./a/somefile.css
./b
./b/abc.json.finished
./b/abc.json <-- this file
Sample rsync command that copies all the "abc.json" AND the "abc.json.finished". I just want the "abc.json".
$ rsync --exclude="finished" --include="*c.json" --recursive \
--verbose --dry-run . server:/tmp/rsync
sending incremental file list
created directory /tmp/rsync
./
a/
a/abc.json
a/abc.json.finished
a/index.html
a/somefile.css
b/
b/abc.json
b/abc.json.finished
sent 212 bytes received 72 bytes 113.60 bytes/sec
total size is 0 speedup is 0.00 (DRY RUN)
Update: Added more files to the folders. HTML files, CSS and other files are present in my scenario. Only files ending in "c.json" should be transferred.
Scenario can be recreated with the following commands:
mkdir a
touch a/abc.json.finished
touch a/abc.json
touch a/index.html
touch a/somefile.css
mkdir b
touch b/abc.json.finished
touch b/abc.json
Try the following command. It assumes that you also want to replicate the source directory tree (for any directories containing files which end with c.json) in the destination location:
$ rsync --include="*c.json" --exclude="*.*" --recursive \
--verbose --dry-run . server:/tmp/rsync
Explanation of command:
--include="*c.json" includes only assets whose name ends with c.json
--exclude="*.*" excludes all other assets (i.e. assets whose name includes a dot .)
--recursive recurse into directories.
--verbose log the results to the console.
--dry-run shows what would have been copied, without actually copying the files. This option/flag should be omitted to actually perform the copy task.
. the path to the source directory.
server:/tmp/rsync the path to the destination directory.
EDIT: Unfortunately, the command provided above also copies files whose filename does not include a dot character. To avoid this, consider utilizing both rsync and find as follows:
$ rsync --dry-run --verbose --files-from=<(find ./ -name "*c.json") \
./ server:/tmp/rsync
This utilizes process substitution, i.e. <(list), to pass the output from the find command to the --files-from= option/flag of the rsync command.
source tree
.
├── a
│ ├── abc.json
│ ├── abc.json.finished.json
│ ├── index.html
│ └── somefile.css
└── b
├── abc.json
└── abc.json.finished.json
resultant destination tree
server
└── tmp
└── rsync
├── a
│ └── abc.json
└── b
└── abc.json
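As an aside (not part of the original answer), rsync's own filter rules can express the same selection without find, assuming a reasonably recent rsync:

$ rsync --recursive --prune-empty-dirs --include="*/" --include="*c.json" \
    --exclude="*" --verbose --dry-run . server:/tmp/rsync

Here --include="*/" lets rsync descend into every directory, --include="*c.json" keeps the wanted files, --exclude="*" drops everything else, and --prune-empty-dirs avoids creating directories that would end up empty on the destination.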
A hacky solution is to use grep to create a file containing all the file names we want to transfer.
find |grep "c.json$" > rsync-files
rsync --files-from=rsync-files --verbose --recursive --compress --dry-run \
./ \
server:/tmp/rsync
rm rsync-files
Content of 'rsync-files':
./a/abc.json
./b/abc.json
Output when running rsync command:
sending incremental file list
created directory /tmp/rsync
./
a/
a/abc.json
b/
b/abc.json
I have access to a Unix server through the PuTTY application. Can anyone tell me how I can view/print all the files and directories inside a directory?
I found the commands below by searching the internet, but they are not working, and I'm not sure what they actually do!
find ./ -type d | awk -F "/" '{ ld=0x2500; lt=0x251c; ll=0x2502; for (i=1; i<=NF-2; i++){printf "%c ",ll} printf "%c%c %s\n",lt,ld,$NF }'
and this
ls -R | grep ":$" | sed -e 's/:$//' -e 's/[^-][^\/]*\//--/g' -e 's/^/ /' -e 's/-/|/'
The tool tree will help you; while you're at it, you might also want to install pstree.
19:38:05 dusted@mono~
$ tree test
test
├── a
│ ├── 1
│ ├── 2
│ └── 3
├── b
│ ├── 1
│ ├── b
│ └── c
├── b-files.txt
├── new-b-files.txt
├── newer-b-files.txt
└── test
2 directories, 10 files
Hey there, I did a little searching and stumbled upon a site that explains what you are asking. Let us know if this leads you in the right direction: http://www.centerkey.com/tree/
The 'find' command should do the job:
find /path/to/directory
If you want more information for each entry, you can combine 'find' with 'ls' like this:
find /path/to/directory -exec ls -ld "{}" \;
I have the following directory structure:
I also have this makefile that works fine, but it needs all files in the same directory, and it also creates the *.o and bin files in the same directory. Can someone please show me how to improve this code so that I can move the *.h files into h/ and the *.c files into src/? Also, the *.o files should be created in obj/ and the binary file in bin/.
I was thinking of something like this. This part only creates *.o files, no binary files. However, this is giving me an error right now.
Step 1: h/
Add a variable, make a small change to the %.o rule, and add a vpath directive, so that the %.o rule will know where to look:
INC_DIR = h
%.o: %.c
    $(cc) -I$(INC_DIR) -c $<
vpath %.h $(INC_DIR)
Step 2: src/
Add another variable, change the assignment of objs, add another vpath:
SRC_DIR := src
objs:=$(patsubst $(SRC_DIR)/%.c,%.o,$(wildcard $(SRC_DIR)/*.c))
vpath %.c $(SRC_DIR)
Step 3: obj/
Add a variable, change objs and the %.o rule again, and the clean rule:
OBJ_DIR = obj
objs:=$(patsubst $(SRC_DIR)/%.c,$(OBJ_DIR)/%.o,$(wildcard $(SRC_DIR)/*.c))
$(OBJ_DIR)/%.o: %.c
    $(cc) -Ih -c $< -o $@
clean:
    rm -f *.d $(OBJ_DIR)/*.o $(prog)
Step 4: bin/
Add another variable, and change the assignment of prog:
BIN_DIR := bin
prog:=$(BIN_DIR)/$(notdir $(PWD))
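Putting the four steps together, a minimal sketch of the resulting Makefile might look like the following (recipe lines must be indented with a tab; the directory-creation rules are an added assumption, since the steps above assume obj/ and bin/ already exist):

SRC_DIR := src
INC_DIR := h
OBJ_DIR := obj
BIN_DIR := bin
cc      := gcc
objs    := $(patsubst $(SRC_DIR)/%.c,$(OBJ_DIR)/%.o,$(wildcard $(SRC_DIR)/*.c))
prog    := $(BIN_DIR)/$(notdir $(PWD))

$(prog): $(objs) | $(BIN_DIR)
    $(cc) -o $@ $^

$(OBJ_DIR)/%.o: %.c | $(OBJ_DIR)
    $(cc) -I$(INC_DIR) -c $< -o $@

$(OBJ_DIR) $(BIN_DIR):
    mkdir -p $@

vpath %.c $(SRC_DIR)
vpath %.h $(INC_DIR)

clean:
    rm -f $(OBJ_DIR)/*.o $(prog)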
EDIT:
What you are now asking for is a bad design. But here it is:
obj/makefile:
SRC_DIR := ../src
objs:=$(patsubst $(SRC_DIR)/%.c,%.o,$(wildcard $(SRC_DIR)/*.c))
cc:=gcc
.PHONY: ALL_OBJS
ALL_OBJS: $(objs)
INC_DIR := ../h
%.o: %.c
    $(cc) -I$(INC_DIR) -c $<
vpath %.c $(SRC_DIR)
.PHONY: clean test
clean:
    rm -f *.[od]
-include *.d
bin/makefile:
P:= $(PWD)
P:= $(dir $(P))
prog:= $(notdir $(P:/=))
OBJ_DIR := ../obj
objs:=$(notdir $(wildcard $(OBJ_DIR)/*.o))
cc:=gcc
ccflags:=-lcurses -lgdbm -lgdbm_compat
$(prog): $(objs)
    $(cc) $(ccflags) -o $@ $^
vpath %.o $(OBJ_DIR)
.PHONY: clean test
clean:
    rm -f *.d $(prog)
test: $(prog)
    $(test)
-include *.d
To make it simple, say I have the following folders:
./src/ with many .c files
./obj/ with many .obj files
./output/ with my binaries I want to build
My makefile is as follows:
all: init mybin
# init commands
init:
    mkdir obj
    mkdir output
mybin: project1 project2 project3
    $(CC) src/misc.c ... etc
    $(LK) obj/first.obj obj/second.obj obj/third.obj obj/four.obj obj/five.obj obj/six.obj obj/seven.obj obj/eight.obj obj/nine.obj -o output/myapp.bin
project1: obj/first.obj obj/second.obj obj/third.obj
obj/first.obj: src/first.c
    $(CC) first.c ... etc
obj/second.obj: src/second.c
    $(CC) obj/second.c ... etc
obj/third.obj: src/third.c
    $(CC) obj/third.c ... etc
project2: obj/four.obj obj/five.obj obj/six.obj
obj/four.obj: src/four.c
    $(CC) four.c ... etc
obj/five.obj: src/five.c
    $(CC) obj/five.c ... etc
obj/six.obj: src/six.c
    $(CC) obj/six.c ... etc
project3: obj/seven.obj obj/eight.obj obj/nine.obj
obj/seven.obj: src/seven.c
    $(CC) seven.c ... etc
obj/eight.obj: src/eight.c
    $(CC) obj/eight.c ... etc
obj/nine.obj: src/nine.c
    $(CC) obj/nine.c ... etc
The first time I ran make all, everything compiled fine. Then I did:
$ touch src/four.c
$ make all
$
But make exits without compiling anything. I guess it did not detect that one of the .c files had changed, but I don't see what's wrong with my dependencies.
What I expected:
Touching src/four.c should have marked obj/four.obj obsolete, and project2 as well, hence marking mybin obsolete too. This chain should trigger a recompilation of src/four.c into obj/four.obj and then a relinking of the whole project.
Did you specify the output file of compilation (likely the -o option)? By default (for most toolchains), compiling a .c file produces an .o file, not an .obj one.
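For instance, one of those rules with an explicit output path would look roughly like this (the real flags hidden behind "... etc" are not shown in the question, so this is only a sketch):

obj/four.obj: src/four.c
    $(CC) -c src/four.c -o obj/four.obj

If the recipe never actually creates or updates obj/four.obj, make's dependency tracking on that file cannot behave as expected.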
UPD.
To get Make to update targets when some prerequisites change, you have to provide exact dependencies between files, since Make uses timestamps to determine whether a file has changed.
That is, all and init could remain as so-called .PHONY targets, but it is good practice to make the remaining targets actual files.
OUT_DIR := ./output
SRC_DIR := ./src
OBJ_DIR := ./obj
MYBIN := $(OUT_DIR)/myapp.bin
OBJS := $(addprefix $(OBJ_DIR)/, \
first.obj \
second.obj \
third.obj \
four.obj \
five.obj \
six.obj \
seven.obj \
eight.obj \
nine.obj)
.PHONY : all mkdir-output mkdir-obj
all : $(MYBIN)
mkdir-output :
    @mkdir -p $(OUT_DIR)
mkdir-obj :
    @mkdir -p $(OBJ_DIR)
$(MYBIN) : $(OBJS) | mkdir-output
    $(LK) $^ -o $@
$(OBJS) : | mkdir-obj
$(OBJS) : $(OBJ_DIR)/%.obj : $(SRC_DIR)/%.c
    $(CC) $< -object=$@ $(CC_OPT)
The last rule is a GNU Make static pattern rule, and the mkdir-xxx prerequisites after the pipe sign | are order-only prerequisites.
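As a small illustration of the order-only idea (a sketch, not part of the original answer): the directory is guaranteed to exist before the object file is built, but a later change to the directory's timestamp does not force the object file to be rebuilt.

obj/foo.o: foo.c | obj
    $(CC) -c foo.c -o obj/foo.o
obj:
    mkdir -p obj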
I'm currently in a situation where I have very limited access to a server, but need to upload and download a significant number of files contained within a single directory structure. I don't have SSH access, so I can't use SCP - and rsync isn't an option either, unfortunately.
I'm currently using ncftpput, which is great but seems to be quite slow (in spite of a fast connection).
Is there an alternative / better method I could look into?
(Please accept my apologies if this has been covered, I did a quick search prior to posting but didn't find anything that specifically answered my question)
Try using LFTP:
http://lftp.yar.ru/
or YAFC:
http://yafc.sourceforge.net/index.php
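For example, a one-shot recursive upload with lftp could look like this (host, credentials and paths are placeholders):

$ lftp -u user,password ftp.example.com -e "mirror -R ./localdir /remote/dir; quit"

mirror -R ("reverse mirror") uploads the local tree to the server in a single session.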
If you have a good connection, I would recommend mounting the ftp server via the GNOME or KDE file managers, or else using CurlFtpFS. Then you can treat it like just another folder.
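A sketch of the CurlFtpFS route (assumes the curlftpfs package is installed; host and credentials are placeholders):

$ mkdir -p ~/ftpmount
$ curlftpfs ftp://user:password@ftp.example.com/ ~/ftpmount
$ cp -r ./localdir ~/ftpmount/          # behaves like any other folder
$ fusermount -u ~/ftpmount              # unmount when done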
I'm not familiar with ncftpput. For non-interactive FTP, I've always used the Perl Net::FTP module -- http://perldoc.perl.org/Net/FTP.html
This will be faster because you can log in once and then do all the transfers in that session (it seems from a cursory glance that you execute ncftpput once for each file get/put).
Just remember to NEVER use ASCII mangling! This is the default, so use:
$ftp->binary
ASCII mangling needs to die in the same fire as MySQL's automatic timezone interpretation.
Since I always end up having a problem with this, I'll post my notes here:
One thing I always manage to confuse is the syntax, so below is a bash tester script which creates some temporary directories, then starts a temporary FTP server, and compares rsync (in plain local-file mode, as it doesn't support FTP) with lftp and ftpsync.
The thing is: you can use rsync /path/to/local /path/to/remote/, and rsync will automatically figure out that you want a local subdirectory created under remote; however, for lftp or ftpsync you have to specify the target directory manually, as in ... /path/to/local /path/to/remote/local (if it doesn't exist, it will be created).
You can find ftpserver-cli.py in "How do I temporarily run an FTP server?" on Ask Ubuntu; ftpsync is available here: FTPsync (note, however, that it is buggy; see also "Search/grep ftp remote filenames" on Unix & Linux Stack Exchange).
Here is a shortened output of the puttest.sh script, showing the recursive put behavior in different cases:
$ bash puttest.sh
Recreate directories; populate loctest, keep srvtest empty:
show dirs:
+ tree --noreport -a /tmp/srvtest /tmp/loctest
/tmp/srvtest
/tmp/loctest
├── .git
│ └── tempa2.txt
└── tempa1.txt
*NOTE, rsync can automatically figure out parent dir:
+ rsync -a --exclude '*.git*' /tmp/loctest /tmp/srvtest/
show dirs:
+ tree --noreport -a /tmp/srvtest /tmp/loctest
/tmp/srvtest
└── loctest
└── tempa1.txt
/tmp/loctest
├── .git
│ └── tempa2.txt
└── tempa1.txt
cleanup:
+ rm -rf /tmp/srvtest/loctest
Start a temporary ftp server:
+ sudo bash -c 'python /path/to/pyftpdlib/ftpserver-cli.py --username=user --password=12345 --directory=/tmp/srvtest &'
+ sleep 1
Using: user: user pass: 12345 port: 21 dir: /tmp/srvtest
[I 14-03-02 23:24:01] >>> starting FTP server on 127.0.0.1:21, pid=21549 <<<
[I 14-03-02 23:24:01] poller: <class 'pyftpdlib.ioloop.Epoll'>
[I 14-03-02 23:24:01] masquerade (NAT) address: None
[I 14-03-02 23:24:01] passive ports: None
[I 14-03-02 23:24:01] use sendfile(2): False
test with lftp:
*NOTE, lftp syncs *contents* of local dir (rsync-like syntax doesn't create target dir):
+ lftp -e 'mirror -R -x ".*\.git.*" /tmp/loctest / ; exit' -u user,12345 127.0.0.1
show dirs:
+ tree --noreport -a /tmp/srvtest /tmp/loctest
/tmp/srvtest
└── tempa1.txt
/tmp/loctest
├── .git
│ └── tempa2.txt
└── tempa1.txt
cleanup:
+ rm -rf /tmp/srvtest/tempa1.txt
*NOTE, specify lftp target dir explicitly (will be autocreated):
+ lftp -e 'mirror -R -x ".*\.git.*" /tmp/loctest /loctest ; exit' -u user,12345 127.0.0.1
show dirs:
+ tree --noreport -a /tmp/srvtest /tmp/loctest
/tmp/srvtest
└── loctest
└── tempa1.txt
/tmp/loctest
├── .git
│ └── tempa2.txt
└── tempa1.txt
cleanup:
+ sudo rm -rf /tmp/srvtest/loctest
*NOTE, ftpsync syncs *contents* of local dir (rsync-like syntax doesn't create target dir); also info mode -i is buggy (it puts, although it shouldn't):
*NOTE, ftpsync --ignoremask is for older unused code; use --exclude instead (but it is buggy; need to change in source)
+ /path/to/ftpsync/ftpsync -i -d '--exclude=.*\.git.*' /tmp/loctest ftp://user:12345@127.0.0.1/
show dirs:
+ tree --noreport -a /tmp/srvtest /tmp/loctest
/tmp/srvtest
└── tempa1.txt
/tmp/loctest
├── .git
│ └── tempa2.txt
└── tempa1.txt
cleanup:
+ sudo rm -rf /tmp/srvtest/tempa1.txt
*NOTE, specify ftpsync target dir explicitly (will be autocreated):
+ /path/to/ftpsync/ftpsync -i -d '--exclude=.*\.git.*' /tmp/loctest ftp://user:12345@127.0.0.1/loctest
show dirs:
+ tree --noreport -a /tmp/srvtest /tmp/loctest
/tmp/srvtest
└── loctest
└── tempa1.txt
/tmp/loctest
├── .git
│ └── tempa2.txt
└── tempa1.txt
cleanup:
+ sudo rm -rf /tmp/srvtest/loctest
+ sudo pkill -f ftpserver-cli.py
And, here is the puttest.sh script:
#!/usr/bin/env bash
set -x
# change these to match your installations:
FTPSRVCLIPATH="/path/to/pyftpdlib"
FTPSYNCPATH="/path/to/ftpsync"
{ echo "Recreate directories; populate loctest, keep srvtest empty:"; } 2>/dev/null
sudo rm -rf /tmp/srvtest /tmp/loctest
mkdir /tmp/srvtest
mkdir -p /tmp/loctest/.git
echo aaa > /tmp/loctest/tempa1.txt
echo aaa > /tmp/loctest/.git/tempa2.txt
{ echo "show dirs:"; } 2>/dev/null
tree --noreport -a /tmp/srvtest /tmp/loctest
{ echo -e "\n*NOTE, rsync can automatically figure out parent dir:"; } 2>/dev/null
rsync -a --exclude '*.git*' /tmp/loctest /tmp/srvtest/
{ echo "show dirs:"; } 2>/dev/null
tree --noreport -a /tmp/srvtest /tmp/loctest
{ echo "cleanup:"; } 2>/dev/null
rm -rf /tmp/srvtest/*
{ echo -e "\nStart a temporary ftp server:"; } 2>/dev/null
# https://askubuntu.com/questions/17084/how-do-i-temporarily-run-an-ftp-server
sudo bash -c "python $FTPSRVCLIPATH/ftpserver-cli.py --username=user --password=12345 --directory=/tmp/srvtest &"
sleep 1
{ echo "test with lftp:"; } 2>/dev/null
# see http://russbrooks.com/2010/11/19/lftp-cheetsheet
# The -R switch means "reverse mirror" which means "put" [upload].
{ echo -e "\n*NOTE, lftp syncs *contents* of local dir (rsync-like syntax doesn't create target dir):"; } 2>/dev/null
lftp -e 'mirror -R -x ".*\.git.*" /tmp/loctest / ; exit' -u user,12345 127.0.0.1
{ echo "show dirs:"; } 2>/dev/null
tree --noreport -a /tmp/srvtest /tmp/loctest
{ echo "cleanup:"; } 2>/dev/null
rm -rf /tmp/srvtest/*
{ echo -e "\n*NOTE, specify lftp target dir explicitly (will be autocreated):"; } 2>/dev/null
lftp -e 'mirror -R -x ".*\.git.*" /tmp/loctest /loctest ; exit' -u user,12345 127.0.0.1
{ echo "show dirs:"; } 2>/dev/null
tree --noreport -a /tmp/srvtest /tmp/loctest
{ echo "cleanup:"; } 2>/dev/null
sudo rm -rf /tmp/srvtest/*
{ echo -e "\n*NOTE, ftpsync syncs *contents* of local dir (rsync-like syntax doesn't create target dir); also info mode -i is buggy (it puts, although it shouldn't):"; } 2>/dev/null
{ echo -e "\n*NOTE, ftpsync --ignoremask is for older unused code; use --exclude instead (but it is buggy; need to change ` 'exclude=s' => \$opt::exclude,` in source)"; } 2>/dev/null
$FTPSYNCPATH/ftpsync -i -d --exclude='.*\.git.*' /tmp/loctest ftp://user:12345@127.0.0.1/
{ echo "show dirs:"; } 2>/dev/null
tree --noreport -a /tmp/srvtest /tmp/loctest
{ echo "cleanup:"; } 2>/dev/null
sudo rm -rf /tmp/srvtest/*
{ echo -e "\n*NOTE, specify ftpsync target dir explicitly (will be autocreated):"; } 2>/dev/null
$FTPSYNCPATH/ftpsync -i -d --exclude='.*\.git.*' /tmp/loctest ftp://user:12345@127.0.0.1/loctest
{ echo "show dirs:"; } 2>/dev/null
tree --noreport -a /tmp/srvtest /tmp/loctest
{ echo "cleanup:"; } 2>/dev/null
sudo rm -rf /tmp/srvtest/*
sudo pkill -f ftpserver-cli.py
{ set +x; } 2>/dev/null
No mention of ncftp?
In Ubuntu, sudo apt install ncftp
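With ncftp installed, a whole directory tree can be uploaded in a single session, for example (host, credentials and paths are placeholders):

$ ncftpput -R -u user -p password ftp.example.com /remote/path ./localdir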