Access environment variables inside a jam configuration file - bjam

I am trying to cross-compile the Boost.Python library with the x86_64-w64-mingw32 compiler on a Linux host. I need to specify the paths to the Python libraries and include files inside my user-config.jam file. Instead of hard-coding these paths, I would like to read them from an environment variable.
Below are the contents of my user-config.jam file:
import os ;
local PYTHON_DEPS_1 = os.environ[PYTHON_DEPS] ;
using python : 2.7 : /usr/local/bin/python2.7 : $(PYTHON_DEPS_1)/usr/include/python2.7 : $(PYTHON_DEPS_1)/usr/lib ;
However, the above expands to the following include path on the compiler command line during the Boost.Python module build:
" -I"os.environ[PYTHON_DEPS]/usr/include/python2.7"
Can someone please explain how to use environment variables properly?

Try changing your
local PYTHON_DEPS_1 = os.environ[PYTHON_DEPS] ;
to
local PYTHON_DEPS_1 = [ os.environ PYTHON_DEPS ] ;
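Putting the fix together, a minimal user-config.jam (a sketch, using the same paths as in the question) would look like this:

```
import os ;

local PYTHON_DEPS_1 = [ os.environ PYTHON_DEPS ] ;

using python
    : 2.7
    : /usr/local/bin/python2.7
    : $(PYTHON_DEPS_1)/usr/include/python2.7
    : $(PYTHON_DEPS_1)/usr/lib ;
```

The difference is that `[ os.environ PYTHON_DEPS ]` is a rule invocation (square brackets) on the imported os module, whereas `os.environ[PYTHON_DEPS]` is just a literal string to jam.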


Setting environment variables in jupyter hub

I followed the approach in this thread. I can easily set an env variable in JupyterHub using %env VAR=5. However, when I try to print out this variable in the terminal I get only a blank line, as if the variable did not exist at all. Is it somehow possible to print, in the terminal, an env var defined in the notebook?
Setting environment variables from the notebook results in these variables being available only from that notebook.
%env VAR=TEST
import os
print(os.environ["VAR"])
...
>>> TEST
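The underlying reason the terminal shows nothing: an environment variable set with os.environ (which is what %env does) exists only in the kernel process and in child processes it spawns afterwards; a terminal started separately is not a child of the kernel. A minimal sketch demonstrating this:

```python
import os
import subprocess
import sys

# Setting a variable via os.environ changes the environment
# of THIS process only.
os.environ["VAR"] = "TEST"

# Child processes spawned afterwards inherit it...
child = subprocess.run(
    [sys.executable, "-c", "import os; print(os.environ.get('VAR', 'unset'))"],
    capture_output=True, text=True,
)
print(child.stdout.strip())  # -> TEST

# ...but an independently started terminal is not a child of the
# kernel process, so it never sees the variable.
```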
If you want to persist the variable, you need to put it either in the kernel.json file, or in systemd service file for jupyterhub, or in something like ~/.bashrc.
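For the kernel.json route, the kernel spec format supports an "env" mapping that is applied when the kernel starts. A sketch, based on the stock python3 kernel spec ("{connection_file}" is a placeholder substituted by Jupyter):

```json
{
  "argv": ["python", "-m", "ipykernel_launcher", "-f", "{connection_file}"],
  "display_name": "Python 3",
  "language": "python",
  "env": {"VAR": "TEST"}
}
```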
JupyterHub has a jupyterhub_config.py file (e.g. located in /etc/jupyterhub/jupyterhub_config.py) which can be used to control various aspects of the hub and the notebook environment.
In order to make a specific environment variable available to all notebooks on the hub, add the following lines to it:
c.Spawner.environment = {
    'VAR': 'Test'
}
and restart the JupyterHub service (something like sudo service jupyterhub restart).
P.S. If you would just like to forward an environment variable from the user's environment, there is also
c.Spawner.env_keep = ['VAR']
Following on from #leopold.talirz's answer, for those wanting to modify an environment variable without overwriting it (e.g. appending a path to the PATH variable), I found you can do something like the following:
import os
original_path = os.environ['PATH']
c.Spawner.environment = {
    'PATH': '/path/to/foo:{}'.format(original_path)
}
NOTE: In my case, I'm working with The Littlest JupyterHub, so I put the above in /opt/tljh/config/jupyterhub_config.d/environment.py.

How to get platform dependent output filename with QMake?

Assume I have a qmake project file *.pro:
# some stuff ...
TARGET = my_binary
# other stuff...
include( $$PWD/post.pri )
And inside the post.pri file (because I would like to reuse whatever this *.pri file does), I would like to get the complete name of the output file.
For example, if it is an app, then on Windows I would like to get my_binary.exe and on Linux my_binary. If the project is a shared lib, I would like to get my_binary.dll or libmy_binary.so respectively. Likewise, if it is a static lib, I would expect my_binary.lib and libmy_binary.a.
I have already tried the undocumented qmake variable QMAKE_FILE_OUT but with no success.
You can do this in your .pro script:
load(resolve_target)
message($$QMAKE_RESOLVED_TARGET)
It will output the build path and target name, according to your platform and project TEMPLATE.
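As a sketch, post.pri could resolve the target and, if only the file name is needed, strip the directory with qmake's built-in basename() replace function (note that basename() takes a variable name, not a value):

```
# post.pri
load(resolve_target)

OUT_FILE = $$basename(QMAKE_RESOLVED_TARGET)

message(Full path: $$QMAKE_RESOLVED_TARGET)
message(File name: $$OUT_FILE)
```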

How to enable ngx_stream_core_module in Yocto

I tried to enable ngx_stream_core_module by adding the following code to nginx.inc:
do_configure () {
--with-stream=dynamic
}
FILES_${PN} += "${PN}/*"
SYSROOT_DIRS += "${PN}/"
but a packaging error occurs at build time:
nginx: Files/directories were installed but not shipped in any package:
/usr/modules/ngx_stream_module.so
and I am sure ngx_stream_module.so is generated in nginx/1.12.2-r0/package/usr/modules/.
Can anyone give me some ideas?
In FILES_${PN} you should reference the installation path of the files within the package, plus the files themselves (the latter can be replaced by a wildcard), as follows:
FILES_${PN} += "/usr/modules/*"
Check out https://www.yoctoproject.org/docs/current/mega-manual/mega-manual.html#var-FILES
Furthermore, you should mention the Yocto Project version you're using, as well as the meta-layer that contains your nginx recipe.
P.S.: It is bad practice to modify the *.inc or the *.bb of a recipe from a third-party layer; write a *.bbappend in your own layer instead.
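As a sketch of that advice, a bbappend in your own layer could carry both the configure flag and the packaging fix. This assumes your nginx recipe's do_configure passes ${EXTRA_OECONF} through to ./configure (the meta-webserver recipe does; if yours differs, put the flag wherever the configure line is assembled):

```
# nginx_%.bbappend (sketch, in your own layer)
EXTRA_OECONF += "--with-stream=dynamic"
FILES_${PN} += "/usr/modules/*"
```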

Problems with libraries in premake

I have experienced certain problems when using libraries in premake4 scripts.
1) When creating a shared library (.dll) on Windows 10 using a premake4 script, it creates the dll fine, but it also creates a static library of small size (2K).
In my case, I was creating a shared library named MathLib.dll using a premake4 script. It did that correctly, but it also created a file named libMathLib.a of size 2K. (It may be empty.)
I don't see why the Makefile generated by premake4 needed to create libMathLib.a, when in fact the objective was to create a .dll file. I think this may be a premake4 bug and I have raised it on the premake4 issue tracker on GitHub.
The premake4 lua script is as follows:
-- Dir : Files > C > SW > Applications > Samples >
-- premakeSamples > premake-sharedlib-create
--#!lua

-- A solution contains projects,
-- and defines the available configurations
solution "MathLib"
    configurations { "Debug", "Release" }

    -- A project defines one build target
    project "MathLib"
        kind "SharedLib"
        language "C++"
        files { "**.h", "**.cpp" }
        includedirs { "../../../ProgramLibraries/Headers/" }
        -- Create target library in Files > C > SW >
        -- Applications > ProgramLibraries
        targetdir "../../../ProgramLibraries/"

        configuration "Debug"
            defines { "DEBUG" }
            flags { "Symbols" }

        configuration "Release"
            defines { "NDEBUG" }
            flags { "Optimize" }

-- Register the "runmakefile" action.
newaction
{
    trigger = "runmakefile",
    description = "run the generated makefile to create the executable using the default ('debug' config)",
    execute = function()
        os.execute("make")
    end
}

-- Register the "runmakefilerelease" action.
newaction
{
    trigger = "runmakefilerelease",
    description = "run the generated makefile to create the executable using the 'release' config",
    execute = function()
        os.execute("make config=release")
    end
}
2) The above problem is more serious than it sounds. Supposing I had already created a genuine static library named libMathLib.a in the Libraries dir, using a separate premake4 script. Subsequently, if I also create a shared library named MathLib.dll in the same directory as the static library, a dummy static library (possibly empty) would be created and replace the earlier genuine static library.
3) -- EDIT -- : I had reported this point (use of a static library) as a problem, but it has started working now. I don't know the reason, but the only difference, as far as I am aware, is that I had shut down and restarted my PC (and therefore my MSYS session on Windows 10). Therefore I am deleting this point.
How can I solve the above 2 problems?
That's the import library. You can turn it off with Premake's NoImportLib flag.
flags { "NoImportLib" }
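In the script from the question, the flag goes at project (or configuration) scope, e.g.:

```lua
project "MathLib"
    kind "SharedLib"
    language "C++"
    -- Suppress generation of the import library (libMathLib.a)
    flags { "NoImportLib" }
```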

ScalaJs PhantomJsEnv doesn't work when phantomjs is installed from npm

I am trying to use phantomjs as installed via npm to perform my unit tests for ScalaJS.
When I run the tests I am getting the following error:
/usr/bin/env: node: No such file or directory
I believe that is because of how phantomjs, when installed with npm, loads node. Here is the first line of the phantomjs script:
#!/usr/bin/env node
If I change that first line to hardcode to the node executable (this involves modifying a file installed by npm so it's only a temporary solution at best):
#!/home/bjackman/cgta/opt/node/default/bin/node
Everything works.
I am using PhantomJS, by the way, because moment.js doesn't work in the NodeJSEnv.
Workaround:
After looking through the plugin source, here is the workaround I found:
I am forwarding the environment from sbt to the PhantomJSEnv:
import scala.scalajs.sbtplugin.ScalaJSPlugin._
import scala.scalajs.sbtplugin.env.nodejs.NodeJSEnv
import scala.scalajs.sbtplugin.env.phantomjs.PhantomJSEnv
import scala.collection.JavaConverters._
val env = System.getenv().asScala.toList.map{case (k,v)=>s"$k=$v"}
olibCross.sjs.settings(
  ScalaJSKeys.requiresDOM := true,
  libraryDependencies += "org.webjars" % "momentjs" % "2.7.0",
  ScalaJSKeys.jsDependencies += "org.webjars" % "momentjs" % "2.7.0" / "moment.js",
  ScalaJSKeys.postLinkJSEnv := {
    if (ScalaJSKeys.requiresDOM.value) new PhantomJSEnv(None, env)
    else new NodeJSEnv
  }
)
With this I am able to use moment.js in my unit tests.
UPDATE: The relevant bug in Scala.js (#865) has been fixed. This should work without a workaround.
This is indeed a bug in Scala.js (issue #865). For future reference; if you would like to modify the environment of a jsEnv, you have two options (this applies to Node.js and PhantomJS equally):
Pass in additional environment variables as argument (just like in #KingCub's example):
new PhantomJSEnv(None, env)
// env: Map[String, String]
Passed-in values will take precedence over default values.
Override getVMEnv:
protected def getVMEnv(args: RunJSArgs): Map[String, String] =
  sys.env ++ additionalEnv // this is the default
This will allow you to:
Read/Modify the environment provided by super.getVMEnv
Make your environment depend on the arguments to the runJS method.
The same applies for arguments that are passed to the executable.
