I am using PyInstaller to create a single-file executable.
The script executes behave tests within a subprocess call.
Sample:
import subprocess

cmd = ['behave']
proc = subprocess.run(cmd, check=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
Currently this requires behave to be installed on the machine before the executable is run.
Is there a way the behave library can be bundled into the executable itself?
I've been looking into hidden imports, but with no success.
Maybe runtime hooks are a better idea? I welcome any input.
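One direction I've been experimenting with is calling behave in-process instead of shelling out, so that PyInstaller's analysis can see the import. A minimal sketch, assuming behave.__main__.main is the CLI entry point and accepts the usual command-line arguments:

import sys
from behave.__main__ import main as behave_main

# Run the features in ./features in-process; behave_main takes the
# same arguments as the behave CLI and returns an exit code.
exit_code = behave_main(["features"])
sys.exit(exit_code)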
I am in the process of developing an R package with external source code that takes a long time to compile. While compilation time isn't a big problem for a one-off installation, I have to routinely reinstall the package to test new additions. Is it possible to prevent re-compiling the source code if there haven't been any changes to it?
I don't necessarily need this to be automated, but I can't figure out a manual solution either. As my source code is in Rust, the following serves as the most representative example I have (note that it requires Rust cargo to be installed):
git clone https://github.com/r-rust/hellorust
Rscript -e "devtools::install('hellorust', quick = TRUE)"
When I run the above, I see that the hellorust.so file has been created in the src directory, but how do I make devtools::install() use this file rather than recompile everything? It doesn't seem like devtools::install(quick = TRUE) or devtools::install(build = FALSE) are meant for this...
Alternatively, is it possible to achieve the desired behavior on the Rust side of things? I don't understand why cargo would recompile everything if there haven't been any changes and the target directory is still there. That said, I'm quite new to Rust and compiled languages in general so my understanding of the broader concepts involved here is unfortunately quite limited...
I would also be interested to learn if there is a better way to test R packages during development than manually reinstalling them.
Based on the comments by r2evans, the final answer seems to be that this isn't what devtools::install is for.
As per the devtools documentation, there are three main tools for "frequent development tasks":
load_all
document
test
Of these, load_all "simulates installing and reloading your package, loading R code in R/, compiled shared objects in src/ and data files in data/". By default, load_all() will not recompile source code in src/ (unless its recompile argument is set to TRUE).
So the answer is to use load_all rather than install during package development, and to control manually when the source code is compiled using something like devtools::compile_dll.
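Putting that together, a typical development loop looks something like this (a sketch, assuming devtools is installed):

devtools::compile_dll()  # recompile src/ only when the compiled code changed
devtools::load_all()     # reload R code and the existing shared objects
devtools::test()         # run the test suite against the loaded package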
I am a little confused about how to efficiently prepare an R package so that it is compatible across all required platforms; this is needed for the new version of the package to be accepted by CRAN. The main difficulty comes from compiling an external C++ shared library, and optionally a CUDA version if the compiler is available. To support this flow I've created a specific Makefile, which unfortunately uses GNU extensions. It works fine on Linux and OSX, and on Solaris when executed manually via gmake. The relevant part is here:
# Checking whether nvcc compiler is available
NVCC_TEST = $(shell basename $(shell which nvcc 2> /dev/null)"")
ifeq ($(NVCC_TEST),nvcc)
  # nvcc available: build the CUDA library alongside the CPU code
  ALL_LIBS += libcucubes_gpu.so
  ALL_OBJS += $(GPU_OBJS)
  ALL_FLAGS += $(GPU_FLAGS)
else
  # no nvcc: compile the CPU-only fallback instead
  ALL_OBJS += gpu_fallback.o
endif
It turns out that, when running R CMD INSTALL (...) on Solaris, the installation fails with something like this:
make: Fatal error in reader: Makefile, line 39: Unexpected end of line seen
ERROR: compilation failed for package 'libcucubes'
This is caused by Solaris' native make being executed instead of GNU-compatible gmake, even though gmake is available (I've verified that the Makefile works fine under gmake). My question is whether there is any simple way to force the use of gmake for the R package build. In general I know I could use autotools to solve compatibility issues during installation, but that seems to bring too much complexity for this simple case. Any advice will be really appreciated, thanks!
If you can't get your build process to use gmake instead of Solaris's pure POSIX make, you can use this hack:
Make a dedicated directory for this hack: mkdir $HOME/make_hack
Symlink gmake as make in that directory: ln -s /path/to/gmake $HOME/make_hack/make
Set your PATH: PATH=$HOME/make_hack:$PATH
Now, run your build process using that PATH, and it should use gmake. Hopefully it just uses make from its PATH environment variable and not some hardcoded full path.
Yeah, it's a hack. But it's probably a lot easier than modifying the build process to use gmake instead of make.
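Put together, the whole hack is only three commands (paths are illustrative; the package name is taken from your error message):

mkdir -p "$HOME/make_hack"
ln -s "$(command -v gmake)" "$HOME/make_hack/make"   # 'make' now resolves to gmake
PATH="$HOME/make_hack:$PATH" R CMD INSTALL libcucubes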
From Writing R Extensions:
If you really must require GNU make, declare it in the DESCRIPTION file by
SystemRequirements: GNU make
and ensure that you use the value of environment variable MAKE (and not just make) in your scripts.
configure scripts are the preferred solution though. BTW, in general a Makevars file is also preferred over a full Makefile.
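For example, a configure script that needs to invoke make itself should defer to that MAKE variable rather than hardcoding the command. A sketch, not taken from any particular package:

#!/bin/sh
# R sets MAKE to the make program it was configured with; on Solaris,
# with 'SystemRequirements: GNU make' declared, that will be gmake.
"${MAKE:-make}" -C src -f Makefile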
Hi,
When I try to call QIIME with a system call from R, i.e.
system2("macqiime")
R stops responding. It's no problem with other command-line programs, though.
Can certain programs not be called from R via system2()?
MacQIIME version:
MacQIIME 1.8.0-20140103
Sourcing MacQIIME environment variables...
This is the same as a normal terminal shell, except your default
python is DIFFERENT (/macqiime/bin/python) and there are other new
QIIME-related things in your PATH.
(Note that I am primarily interested in calling QIIME from R Markdown with engine = "sh", which fails too, but I strongly suspect the problems are related.)
In my experience, when you call QIIME from the Unix command line, it usually creates a shell of its own to run its commands, which is different from regular system commands like ls or mv. I suspect you may not be able to run QIIME from within R unless you emulate the same shell or configuration QIIME requires. I tried to run it from a Python script and was not successful.
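If you only need individual QIIME scripts rather than the interactive macqiime shell, one workaround sketch is to source the MacQIIME environment in a non-interactive bash and run a single command there. The profile path below is a guess at the MacQIIME install layout:

# Source MacQIIME's environment, then run one QIIME command
# non-interactively instead of entering the blocking macqiime shell.
res <- system2("bash",
               args = c("-c", "source /macqiime/configs/bash_profile.txt && print_qiime_config.py"),
               stdout = TRUE, stderr = TRUE)
cat(res, sep = "\n")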
It seems that PyInstaller puts all the Python scripts into the executable file, and when you run this file, it starts the PyInstaller bootloader first, then prepares a temporary Python environment and runs the scripts.
So I wonder whether my source code is safe. Can someone get the source code from the package when running the executable file?
PyInstaller includes the byte compiled (.pyc) files of your program but not the original source (.py) files. You don't even need to run the executable to get the .pyc files. There are more or less working Python decompilers that turn compiled byte code (.pyc) into equivalent source code (.py).
You need to assess whether this protection is good enough for your purposes. However as a friendly suggestion, I recommend first inventing/writing something that people will want to copy before worrying about how to protect it.
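As a quick illustration of how much survives in the bytecode, you can disassemble any function with the standard library's dis module:

import dis

def secret_formula(price, units_sold):
    margin = 0.37  # literal constants are stored verbatim in the .pyc
    return price * units_sold * margin

# Names, constants, and control flow are all visible in the output;
# decompilers reconstruct readable source from exactly this data.
dis.dis(secret_formula)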
On Windows, after running the grunt build command to create brackets-shell, it finishes with "done, without errors", but I don't see any .exe file generated.
What might be the problem?
Here are some possible solutions:
Are you following the full brackets-shell build instructions, including all prerequisites?
Make sure Brackets isn't running at the same time. The build will fail silently if the .exe file is currently in use (see bug).
Try with a fresh git clone of the repo. If your brackets-shell local copy has been around for a while, sometimes the build & deps folders can get in a bad state. (I'm assuming you haven't modified the source at all. If you have, try with an unmodified copy of the source first to make sure it builds correctly without any of your changes).
Check that python --version shows 2.7.x
Verbose build output would also be helpful in diagnosing issues like this, but unfortunately there's not yet an easy way to get that...
If you follow the instructions on brackets-shell's wiki page, the Windows executable should be created in the Release directory.