How do I write a Yocto/BitBake recipe to copy a directory to the target root file system?

I have a directory of 'binary' (i.e. not to be compiled) files and just want them to be installed onto my target root file system.
I have looked at several articles, none of which seem to work for me.
The desired functionality of this recipe is:
myRecipe/myFiles/ --> myRootFs/dir/to/install
My current attempt is:
SRC_URI += "file://myDir"
do_install() {
install -d ${D}/path/to/dir/on/fs
install -m ${WORKDIR}/myDir ${D}/path/to/dir/on/fs
}
I can't complain about the Yocto documentation overall, it's really good! I just can't find an example of something like this!

You just have to copy these files into your target rootfs. Do not forget to package them if they are not installed in standard locations.
SRC_URI += "file://myDir"
do_install() {
install -d ${D}/path/to/dir/on/fs
cp -r ${WORKDIR}/myDir ${D}/path/to/dir/on/fs
}
FILES_${PN} += "/path/to/dir/on/fs"

Take care: with a simple recursive copy you can end up with host contamination warnings, so you would need to copy with the following parameters:
do_install() {
[...]
cp --preserve=mode,timestamps -R ${S}${anydir}/Data/* ${D}${anyotherdir}/Data
[...]
}
As other recipes in poky do, or just follow the official recommendations to avoid problems with ownership and permissions.
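The effect of those cp flags is easy to see outside Yocto with plain shell. In this sketch every path is invented for the demo, and a restrictive umask stands in for the build host's environment:

```shell
#!/bin/sh
set -e
rm -rf /tmp/cpdemo
mkdir -p /tmp/cpdemo/src/Data /tmp/cpdemo/plain /tmp/cpdemo/preserved
echo payload > /tmp/cpdemo/src/Data/blob.conf
chmod 0640 /tmp/cpdemo/src/Data/blob.conf
# Give the source file an old, fixed timestamp
touch -d '2020-01-01 00:00:00 UTC' /tmp/cpdemo/src/Data/blob.conf

umask 077                     # restrictive umask, as a build host might have

cp -r /tmp/cpdemo/src/Data /tmp/cpdemo/plain/
cp --preserve=mode,timestamps -R /tmp/cpdemo/src/Data /tmp/cpdemo/preserved/

stat -c %a /tmp/cpdemo/plain/Data/blob.conf       # 600: umask stripped the group bit
stat -c %a /tmp/cpdemo/preserved/Data/blob.conf   # 640: source mode kept
date -u -r /tmp/cpdemo/preserved/Data/blob.conf +%Y   # 2020: source mtime kept
```

The plain copy takes on the build user's umask and the current time, which is exactly the kind of difference that triggers the contamination checks.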

For a recipe folder like this:
.
├── files
│   ├── a.txt
│   ├── b.c
│   └── Makefile
└── myrecipe.bb
You can use the following recipe to install it into a specific folder in your rootfs:
SRC_URI = "file://*"
do_install() {
install -d ${D}/my/dir/on/rootfs
install -m 0755 ${S}/* ${D}/my/dir/on/rootfs/
}
FILES_${PN} = "/my/dir/on/rootfs/*"

I think it did not work for you because you forgot to add the mode value after "install -m";
see the man page of the install command:
https://linux.die.net/man/1/install
install -m [mode] src destination
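The point is easy to reproduce with throwaway paths (all invented for this sketch): install needs an explicit mode after -m, and it also refuses directory sources, which is why the cp -r variants above are needed for whole directories:

```shell
#!/bin/sh
set -e
rm -rf /tmp/instdemo
mkdir -p /tmp/instdemo/srcdir
echo hi > /tmp/instdemo/srcdir/app.conf

install -d /tmp/instdemo/dest/etc                    # -d creates the directory
install -m 0644 /tmp/instdemo/srcdir/app.conf /tmp/instdemo/dest/etc/
stat -c %a /tmp/instdemo/dest/etc/app.conf           # 644

# A directory as the source is rejected with an error; use cp -r for that case
if install -m 0644 /tmp/instdemo/srcdir /tmp/instdemo/dest/etc/ 2>/dev/null; then
    echo "unexpected"
else
    echo "install refuses directories"
fi
```

Without the mode value, "-m" would consume the next argument (the source path) as its mode, which is why the original attempt failed in a confusing way.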

Related

qmake INSTALLS for a file not existing yet

Suppose I have a test.pro file with content as followings
unix {
    inspro.path = /tmp
    inspro.files += test.pro
}
!isEmpty(inspro.path): INSTALLS += inspro
unix {
    insdoc.path = /tmp
    insdoc.files += test.txt
}
!isEmpty(insdoc.path): INSTALLS += insdoc
Running qmake test.pro results in a Makefile. The file, test.pro, exists already, and the created Makefile contains install_inspro and uninstall_inspro for the file test.pro:
install_inspro: first FORCE
	@test -d $(INSTALL_ROOT)/tmp || mkdir -p $(INSTALL_ROOT)/tmp
	$(QINSTALL) /home/jianz/test/pro/test.pro $(INSTALL_ROOT)/tmp/test.pro
uninstall_inspro: FORCE
	-$(DEL_FILE) -r $(INSTALL_ROOT)/tmp/test.pro
	-$(DEL_DIR) $(INSTALL_ROOT)/tmp/
However, the corresponding install_insdoc and uninstall_insdoc are created if and only if the file test.txt exists.
In the case that the file test.txt is created as part of QMAKE_POST_LINK, is there a way to force qmake to create install_insdoc and uninstall_insdoc?
I think there's a custom install target CONFIG directive to help with this. Add:
insdoc.CONFIG += no_check_exist
Documented at https://doc.qt.io/qt-5/qmake-variable-reference.html#installs
More details and caveats at https://wiki.qt.io/Undocumented_QMake#Custom_install_config
Related Q/A: qmake copy files created while building
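Applied to the test.pro above, the addition would look like this (a sketch, not verified against any particular Qt version; no_check_exist makes qmake emit the install rule even though test.txt is absent at qmake time):

```
unix {
    insdoc.path = /tmp
    insdoc.files += test.txt
    insdoc.CONFIG += no_check_exist
}
!isEmpty(insdoc.path): INSTALLS += insdoc
```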

Undefined symbol error when trying to depend on RcppArmadillo

I am trying to depend on RcppArmadillo in my package, but when I run R CMD build . in the package directory I get the error unable to load shared object /tmp/Rtmp0LswYZ/Rinst82cbed4eaee/00LOCK-alt.raster/00new/alt.raster/libs/alt.raster.so: undefined symbol: dsyev_. However, following the instructions in https://stackoverflow.com/a/14165455 in an interactive R session works correctly. I have also run R -e 'Rcpp::compileAttributes()' in my package directory, and it seems to generate RcppExports.cpp correctly. What am I doing wrong?
As surmised in the comments above, it is really beneficial to start from a working example.
To create one, we offer the RcppArmadillo.package.skeleton() function. Use it as follows:
edd@rob:/tmp$ Rscript -e 'RcppArmadillo::RcppArmadillo.package.skeleton("demoPkg")'
Calling kitten to create basic package.
Creating directories ...
Creating DESCRIPTION ...
Creating NAMESPACE ...
Creating Read-and-delete-me ...
Saving functions and data ...
Making help files ...
Done.
Further steps are described in './demoPkg/Read-and-delete-me'.
Adding pkgKitten overrides.
>> added .gitignore file
>> added .Rbuildignore file
Deleted 'Read-and-delete-me'.
Done.
Consider reading the documentation for all the packaging details.
A good start is the 'Writing R Extensions' manual.
And run 'R CMD check'. Run it frequently. And think of those kittens.
Adding RcppArmadillo settings
>> added Imports: Rcpp
>> added LinkingTo: Rcpp, RcppArmadillo
>> added useDynLib and importFrom directives to NAMESPACE
>> added Makevars file with Rcpp settings
>> added Makevars.win file with RcppArmadillo settings
>> added example src file using armadillo classes
>> added example Rd file for using armadillo classes
>> invoked Rcpp::compileAttributes to create wrappers
edd@rob:/tmp$
It should create these files:
edd@rob:/tmp$ tree demoPkg/
demoPkg/
├── DESCRIPTION
├── man
│   ├── demoPkg-package.Rd
│   ├── hello.Rd
│   └── rcpparma_hello_world.Rd
├── NAMESPACE
├── R
│   ├── hello.R
│   └── RcppExports.R
└── src
    ├── Makevars
    ├── Makevars.win
    ├── rcpparma_hello_world.cpp
    └── RcppExports.cpp
3 directories, 11 files
edd@rob:/tmp$

How to compile multiple simple projects with GNU make

I am trying to implement various project from a programming book. My intention was to have each project exercise in its own folder and then have a makefile that compiles all of them with something like a make all. The folder structure is like this:
.
├── Makefile
├── bin
│   ├── prog1
│   ├── prog2
│   └── prog3
└── src
    ├── prog1
    │   ├── Makefile
    │   └── main.c
    ├── prog2
    │   ├── Makefile
    │   └── main.c
    └── prog3
        ├── Makefile
        └── main.c
I would like to learn how to set up such a structure, in particular the part where the top-level makefile visits all folders in src, calls make there, and then copies and renames each executable into the bin folder.
Your layout schematic shows a makefile for each exercise, plus the top-level makefile that you seem actually to be asking about. It would be best for the top-level makefile to avoid duplicating the behavior of the per-exercise makefiles, as such duplication would create an additional maintenance burden for you. Additionally, it is likely that you will eventually progress to exercises involving multiple source files, and perhaps to some that have multiple artifacts to be built. This is all the more reason for each per-exercise makefile to contain everything necessary to build the exercise with which it is associated (into the exercise-specific directory), and for the top-level makefile to depend on those.
Following that scheme would leave a well-defined role for the top-level makefile: to perform the per-exercise builds (by recursively running make), and to copy the resulting binaries to bin/. This is not the only way to set up a system of cooperating makefiles, but it is fairly easy, and that will allow you to focus on the exercises instead of on the build system.
Let us suppose, then, that each individual exercise can be built by changing to its directory and running make, with the result being an executable in the same directory, with the same name as the directory. That is, from the top-level directory, executing cd src/prog2; make would produce the wanted executable as src/prog2/prog2. In that case, the top-level makefile needs little more than the names of all the exercises, and a couple of rules:
EXERCISES = prog1 prog2 prog3
BINARIES = $(EXERCISES:%=bin/%)

all: $(BINARIES)

$(BINARIES):
	make -C src/$$(basename $@)
	cp src/$$(basename $@)/$$(basename $@) $@
Note: that uses a feature specific to GNU's implementation of make to compute the names of the wanted binaries from the exercise names. I take that to be acceptable, since you tagged [gnu-make], but in any case, it is a convenience feature, not a necessity.
There are different ways to tackle this, but something like this should work for your example:
PROGS := bin/prog1 bin/prog2 bin/prog3

all: $(PROGS)

$(PROGS):
	$(MAKE) -C src/$(@F)
	mkdir -p $(@D)
	cp src/$(@F)/main $@

.PHONY: clean
clean:
	rm -f $(PROGS)
	for t in $(PROGS); do make -C src/`basename $$t` clean; done
We define a list of targets (PROGS) to build and declare them prerequisites of all. Then we define how they are built: we recursively descend into src/ plus the file-name part of the target ($(@F)) and run make there. We create the directory part of the target ($(@D)) to be sure it exists, and copy main from the directory we descended into to the path of the target.
For good measure, there is a clean target as well that removes all the PROGS and runs make clean recursively in src/.
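Both answers assume each src/progN/Makefile can build its exercise on its own. A minimal per-exercise makefile consistent with the second answer (which expects the executable to be named main) could look like the sketch below; the compiler flags are illustrative, not required:

```
# src/prog1/Makefile: builds ./main from main.c
CC     ?= cc
CFLAGS ?= -Wall -Wextra -O2

main: main.c
	$(CC) $(CFLAGS) -o $@ main.c

.PHONY: clean
clean:
	rm -f main
```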

How to install flyway DB migration tool in CentOS?

I am trying to install flyway on a centOS machine.
I have downloaded Flyway command line tar file and extracted it.
I tried to execute some flyway commands, but they did not work:
it says "-bash: flyway: command not found".
Did I miss anything?
Do I have to install it?
I did not find any tutorials for installation.
No need to install it; it's simply a shell script with a JRE, the Flyway Java libraries and associated resources.
It sounds like you need to add the location of the flyway shell script to your PATH variable if you want to run it without being in its directory or specifying the full path.
e.g.
If you have extracted flyway-commandline-4.1.2-linux-x64.tar.gz to /opt/flyway/flyway-4.1.2 which looks like:
flyway-4.1.2
├── conf
├── flyway # <---- The shell script
├── lib
└── ...
somewhere in your setup you want that on your PATH
export PATH=$PATH:/opt/flyway/flyway-4.1.2
Note the command line documentation mentions the first two steps as
download the tool and extract it
cd into the extracted directory.
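The PATH mechanics themselves can be tried with any script; below, a dummy launcher stands in for the extracted flyway script, and every path is invented for the demo:

```shell
#!/bin/sh
set -e
# Stand-in for /opt/flyway/flyway-4.1.2/flyway
rm -rf /tmp/pathdemo
mkdir -p /tmp/pathdemo/flyway-4.1.2
printf '#!/bin/sh\necho flyway-ok\n' > /tmp/pathdemo/flyway-4.1.2/flyway
chmod +x /tmp/pathdemo/flyway-4.1.2/flyway

# Before: the shell cannot resolve the bare name
command -v flyway >/dev/null 2>&1 || echo "flyway: not on PATH yet"

# After: append the directory to PATH and the bare name resolves
export PATH=$PATH:/tmp/pathdemo/flyway-4.1.2
flyway
```

To make the change permanent you would put the export line in a shell startup file such as ~/.bashrc.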

How do I tar node modules in different directories?

I want to cache node modules for each submodule. How can I do so? For example I have the following directory structure:
/test1/node_modules
/test2/node_modules
How do I tar each node module directory under the main directory so that I can then have a zip file with the following structure
/test1/node_modules
/test2/node_modules
edit
What I mean is that I want to get all the node_modules directories under the main directory. node_modules directory can be under directory test1 or test2 or test3. I want to get them all and zip them, maintaining the directory structure. So in the zip file they will be test1/node_modules, test2/node_modules
... but I also want a "catch all" solution... every node_modules dir should be in my tar.
It's not clear where you're blocked. Here is how I would do it:
Use 2 distinct commands, one to create, one to add:
# create my.tar
tar cf my.tar /test1/node_modules/*
# add second directory with tar uf
tar uf my.tar /test2/node_modules/*
If you have more than test1 & test2, but want to have all test dirs:
tar cf my.tar /test*/node_modules/
If you want every node_modules, then use a find command, piped to your tar command
find / -type d -name node_modules | xargs tar cf my.tar
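Here is a sketch of the catch-all variant on a throwaway tree (directory names invented for the demo). Piping find into tar's -T - option instead of xargs avoids the case where xargs splits a long file list into several tar invocations, each overwriting the archive:

```shell
#!/bin/sh
set -e
rm -rf /tmp/tardemo
mkdir -p /tmp/tardemo/test1/node_modules /tmp/tardemo/test2/node_modules
echo a > /tmp/tardemo/test1/node_modules/a
echo b > /tmp/tardemo/test2/node_modules/b

cd /tmp/tardemo
# -T - reads the list of names to archive from stdin (GNU tar)
find . -type d -name node_modules | tar cf my.tar -T -

tar tf my.tar    # both test1/node_modules and test2/node_modules, structure kept
```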
Assume you have these node_modules
➦ tree ./
./
├── pack.js
├── test1
│   └── node_modules
│       └── a
└── test2
    └── node_modules
        └── b
4 directories, 3 files
You can use a node script to pack the files. /^test\d+/ matches test1, test2, test3, etc.
'use strict';
const fstream = require('fstream');
const zlib = require('zlib');
const tar = require('tar');
const path = require('path');
const dist = path.join(__dirname, 'all.tgz');
fstream.Reader({
path: __dirname,
filter() {
return this.path === __dirname ||
path.relative(__dirname, this.path).match(/^test\d+/);
},
})
.pipe(tar.Pack({ fromBase: true }))
.pipe(zlib.createGzip())
.pipe(fstream.Writer(dist));
Run node pack.js and all node_modules directories will be in one file all.tgz.
vim all.tgz
" tar.vim version v29
" Browsing tarfile
" Select a file with cursor and press ENTER
/
test1/
test1/node_modules/
test1/node_modules/a
test2/
test2/node_modules/
test2/node_modules/b
