QBS: Explicitly setting qbs.profiles inside Products causes the build to fail

My use-case is this:
I have a static library which I want to be available for some profiles (e.g. "gcc", "arm-gcc", "mips-gcc").
I also have an application which links to this library, but this application should only be built using a specific profile (e.g. "arm-gcc").
For this I am modifying the app-and-lib QBS example.
The lib.qbs file:
import qbs 1.0
Product {
    qbs.profiles: ["gcc", "arm-gcc", "mips-gcc"] // I added only this line
    type: "staticlibrary"
    name: "mylib"
    files: [
        "lib.cpp",
        "lib.h",
    ]
    Depends { name: 'cpp' }
    cpp.defines: ['CRUCIAL_DEFINE']
    Export {
        Depends { name: "cpp" }
        cpp.includePaths: [product.sourceDirectory]
    }
}
The app.qbs file:
import qbs 1.0
Product {
    qbs.profiles: ["arm-gcc"] // I added only this line
    type: "application"
    consoleApplication: true
    files: [ "main.cpp" ]
    Depends { name: "cpp" }
    Depends { name: "mylib" }
}
The app build fails. Qbs wrongly tries to link to the "gcc" version of the library instead of the "arm-gcc" version, as you can see in the log:
Build graph does not yet exist for configuration 'default'. Starting from scratch.
Resolving project for configuration default
Setting up build graph for configuration default
Building for configuration default
compiling lib.cpp [mylib {"profile":"gcc"}]
compiling lib.cpp [mylib {"profile":"arm-gcc"}]
compiling lib.cpp [mylib {"profile":"mips-gcc"}]
compiling main.cpp [app]
creating libmylib.a [mylib {"profile":"gcc"}]
creating libmylib.a [mylib {"profile":"mips-gcc"}]
creating libmylib.a [mylib {"profile":"arm-gcc"}]
linking app [app]
ERROR: /usr/bin/arm-linux-gnueabihf-g++ -o /home/user/programs/qbs/usr/local/share/qbs/examples/app-and-lib/default/app.7d104347/app /home/user/programs/qbs/usr/local/share/qbs/examples/app-and-lib/default/app.7d104347/3a52ce780950d4d9/main.cpp.o /home/user/programs/qbs/usr/local/share/qbs/examples/app-and-lib/default/mylib.eyJwcm9maWxlIjoiZ2NjIn0-.792f47ec/libmylib.a
ERROR: /home/user/programs/qbs/usr/local/share/qbs/examples/app-and-lib/default/mylib.eyJwcm9maWxlIjoiZ2NjIn0-.792f47ec/libmylib.a: error adding symbols: File format not recognized
collect2: error: ld returned 1 exit status
ERROR: Process failed with exit code 1.
The following products could not be built for configuration default:
app
The build fails only when a single profile is selected in app.qbs, and that profile is not the first one listed in the qbs.profiles line of lib.qbs.
When two or more profiles are selected, the build succeeds.
My analysis:
I think this problem is related to multiplexing:
The lib.qbs contains more than one profile. This turns on multiplexing when building the library, which, in turn, adds an additional 'multiplexConfigurationId' to the build-directory name (moduleloader.cpp).
The app.qbs contains only one profile, so multiplexing is not turned on and the build-directory name does not get the extra string.
The problem can be solved by changing the code (moduleloader.cpp) so that multiplexing is turned on even if there is only one profile, i.e. with the following patch:
--- moduleloader.cpp	2018-10-24 16:17:43.633527397 +0300
+++ moduleloader.cpp.new	2018-10-24 16:18:27.541370544 +0300
@@ -872,7 +872,7 @@
         = callWithTemporaryBaseModule<const MultiplexInfo>(dummyContext,
                                                            extractMultiplexInfoFromProduct);
-    if (multiplexInfo.table.size() > 1)
+    if (multiplexInfo.table.size() > 0)
         productItem->setProperty(StringConstants::multiplexedProperty(), VariantValue::trueValue());
     VariantValuePtr productNameValue = VariantValue::create(productName);
@@ -891,7 +891,7 @@
     const QString multiplexConfigurationId = multiplexInfo.toIdString(row);
     const VariantValuePtr multiplexConfigurationIdValue
         = VariantValue::create(multiplexConfigurationId);
-    if (multiplexInfo.table.size() > 1 || aggregator) {
+    if (multiplexInfo.table.size() > 0 || aggregator) {
         multiplexConfigurationIdValues.push_back(multiplexConfigurationIdValue);
         item->setProperty(StringConstants::multiplexConfigurationIdProperty(),
                           multiplexConfigurationIdValue);
This worked for my use case. I don't know if it makes sense from a broader perspective.
Finally, the questions:
Does it all make sense?
Is this normal behavior?
Is this use-case simply not supported?
Is there a better solution?
Thanks in advance.

Yes, the default behavior with multiplexing is that a non-multiplexed product depends on all variants of the dependency. In general, there is no way for a user to change that behavior, but there should be.
However, luckily for you, profiles are special:
Depends { name: "mylib"; profiles: "arm-gcc" }
This should fix your problem.
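For reference, app.qbs with that fix applied could look like this (a minimal sketch of the example above; only the mylib dependency line changes):
import qbs 1.0
Product {
    qbs.profiles: ["arm-gcc"]
    type: "application"
    consoleApplication: true
    files: [ "main.cpp" ]
    Depends { name: "cpp" }
    Depends { name: "mylib"; profiles: "arm-gcc" } // link only the arm-gcc variant of mylib
}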

Related

A module requiring 'fs' does not work inside storybook, but does work in-browser

I've got a basic create-react-app setup. A part of the CRA application is a lexer and a parser generated by antlr4ts.
In-browser, everything runs fine.
However, Storybook (installed via npx sb init) refuses to build, throwing:
ERROR in ./node_modules/antlr4ts/misc/InterpreterDataReader.js
Module not found: Error: Can't resolve 'fs' in '/home/rijndael/projects/antlr-truth-table-generator/node_modules/antlr4ts/misc'
 @ ./node_modules/antlr4ts/misc/InterpreterDataReader.js 16:11-24
 @ ./node_modules/antlr4ts/misc/index.js
 @ ./src/antlr/index.ts
 @ ./src/store/slices/tables/index.ts
 @ ./src/components/TruthTable/index.tsx
 @ ./src/components/TruthTable/stories/index.stories.tsx
 @ ./src sync ^\.(?:(?:^|\/|(?:(?:(?!(?:^|\/)\.).)*?)\/)stories\/(?!\.)(?=.)[^/]*?\.stories\.(js|jsx|ts|tsx))$
 @ ./.storybook/generated-stories-entry.js
 @ multi ./node_modules/@storybook/core/dist/server/common/polyfills.js ./node_modules/@storybook/core/dist/server/preview/globals.js ./.storybook/storybook-init-framework-entry.js ./.storybook/preview.js-generated-config-entry.js ./.storybook/generated-stories-entry.js ./node_modules/webpack-hot-middleware/client.js?reload=true&quiet=false&noInfo=undefined
Indeed, a module I'm requiring has import { Interval } from "antlr4ts/misc";.
However, antlr4ts/misc/index.js doesn't seem to require InterpreterDataReader. Perhaps one of its children does, but then why isn't it listed in the require stack trace?
antlr4ts/misc/index.js
__export(require("./ANTLRInputStream"));
__export(require("./BailErrorStrategy"));
__export(require("./BufferedTokenStream"));
__export(require("./CharStreams"));
__export(require("./CodePointBuffer"));
__export(require("./CodePointCharStream"));
__export(require("./CommonToken"));
__export(require("./CommonTokenFactory"));
__export(require("./CommonTokenStream"));
__export(require("./ConsoleErrorListener"));
__export(require("./DefaultErrorStrategy"));
__export(require("./Dependents"));
__export(require("./DiagnosticErrorListener"));
__export(require("./FailedPredicateException"));
__export(require("./InputMismatchException"));
__export(require("./InterpreterRuleContext"));
__export(require("./IntStream"));
__export(require("./Lexer"));
__export(require("./LexerInterpreter"));
__export(require("./LexerNoViableAltException"));
__export(require("./ListTokenSource"));
__export(require("./NoViableAltException"));
__export(require("./Parser"));
__export(require("./ParserInterpreter"));
__export(require("./ParserRuleContext"));
__export(require("./ProxyErrorListener"));
__export(require("./ProxyParserErrorListener"));
__export(require("./RecognitionException"));
__export(require("./Recognizer"));
__export(require("./RuleContext"));
__export(require("./RuleContextWithAltNum"));
__export(require("./RuleDependency"));
__export(require("./RuleVersion"));
__export(require("./Token"));
__export(require("./TokenStreamRewriter"));
__export(require("./VocabularyImpl"));
Inside InterpreterDataReader.js we indeed have a const fs = require("fs");. Still, it's odd, since everything runs fine when built by react-scripts.
How do I make it run inside Storybook as well?
Why does it run when built by react-scripts, but not inside Storybook?
.storybook/main.js:
module.exports = {
    "stories": [
        "../src/**/stories/*.stories.mdx",
        "../src/**/stories/*.stories.@(js|jsx|ts|tsx)"
    ],
    "addons": [
    ]
}
Update
I fixed it by importing directly: import { Interval } from "antlr4ts/misc/Interval";, thus bypassing the giant require block in antlr4ts/misc/index.js. However, I'm still interested in why it runs in the browser, but not in Storybook.
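For clarity, the change is just the import path, which avoids the barrel file that transitively requires 'fs':
// before: goes through antlr4ts/misc/index.js, which pulls in InterpreterDataReader and its require("fs")
import { Interval } from "antlr4ts/misc";
// after: imports the single module directly, bypassing the barrel file
import { Interval } from "antlr4ts/misc/Interval";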

posix_fallocate() failed: Operation not permitted while opening .realm file

I get the error below when I try to open and download a .realm file in the /tmp directory of a Serverless Framework function.
{"errorType":"Runtime.UnhandledPromiseRejection","errorMessage":"Error: posix_fallocate() failed: Operation not permitted" }
Below is the code:
let realm = new Realm({ path: '/tmp/custom.realm', schema: [schema1, schema2] });
realm.write(() => {
    console.log('completed==');
});
EDIT: this might soon be finally fixed in Realm-Core: see issue 4957.
In case you run into this problem elsewhere, here's a workaround.
This is caused by AWS Lambda not supporting the fallocate and fallocate64 system calls. Instead of returning the correct error code in this case, which would be EINVAL for "not supported on this file system", Amazon has blocked the system call so that it returns EPERM. Realm-Core has code that handles the EINVAL return value correctly but is bewildered by the unexpected EPERM returned from the system call.
The solution is to add a small shared library as a layer to the Lambda: compile the following C file on a Linux machine or inside the lambda-ci Docker image:
#include <errno.h>
#include <fcntl.h>

/* Override posix_fallocate() so it returns EINVAL ("not supported on this
   file system"), which Realm-Core handles gracefully, instead of the EPERM
   produced by Lambda's blocked system call. */
int posix_fallocate(int __fd, off_t __offset, off_t __len) {
    return EINVAL;
}

int posix_fallocate64(int __fd, off_t __offset, off_t __len) {
    return EINVAL;
}
Now, compile this to a shared object with something like:
gcc -shared fix.c -o fix.so
Then add it to the root of a ZIP file:
zip layer.zip fix.so
Create a new Lambda layer from this ZIP and add the layer to your Lambda function.
Finally, make the shared object load by setting the environment variable LD_PRELOAD to /opt/fix.so on your Lambda.
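If you deploy with the Serverless Framework, those last two steps might look like this in serverless.yml (a sketch; the function name, handler, and layer ARN are placeholders for your own values):
# serverless.yml (sketch)
functions:
  myFunction:
    handler: handler.main
    layers:
      - arn:aws:lambda:us-east-1:123456789012:layer:fallocate-fix:1  # hypothetical layer ARN
    environment:
      LD_PRELOAD: /opt/fix.so  # Lambda layers are mounted under /opt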
Enjoy.

Getting a system property doesn't work at the debug log level

I would like to read a system property from the command line and print it in sbt.
I used the following code snippets:
Depend.scala
val myVar = Option(System.getProperty("myVar")).getOrElse("default")
Build.sbt
val showMesg = settingKey[Unit]("Show message")

showMesg := {
  sLog.value.info(myVar)
}
It works well when I use the following command:
sbt -DmyVar=abc compile
[info] abc
But if I output it at the debug log level, it doesn't pick up the system property correctly:
val showMesg = settingKey[Unit]("Show message")

showMesg := {
  sLog.value.debug(myVar)
}
sbt -DmyVar=abc compile -debug
[debug] default
I'm just curious why I can't get the property when the log level is debug.

How do I suppress warnings about the Unsafe API when compiling with SBT

I have a project including both Java and Scala sources. The Java sources include one file which uses the sun.misc.Unsafe API. We are well aware of the risks of using this API and want to suppress the warnings in this file. However, we also want to treat warnings as errors on the rest of the Java and Scala code.
My build.sbt includes these lines:
lazy val root = (project in file(".")).
  settings(
    // ...
    Compile / javacOptions ++= Seq(
      "-Xlint:all",
      "-Werror",
      "-XDenableSunApiLintControl"
    ),
    Compile / fork := true,
    Compile / javaOptions += "-XDenableSunApiLintControl",
    // ...
  )
I understand from an SO comment on a related post that I must fork a new JVM for -XDenableSunApiLintControl to take effect. For this reason, I specified Compile / fork := true and respecified the option in the Compile / javaOptions.
Unfortunately, I still see the warning reported as an error.
My Java file starts this way:
package is.hail.annotations;

import sun.misc.Unsafe;
import java.lang.reflect.Field;

@SuppressWarnings("sunapi")
public final class Memory {
This successfully suppresses the warnings when using Gradle with:
compileJava {
    options.compilerArgs << "-Xlint:all" << "-Werror" << "-XDenableSunApiLintControl"
}

tasks.withType(JavaCompile) {
    options.fork = true // necessary to make -XDenableSunApiLintControl work
}
How do I suppress the sun.misc.Unsafe API warnings in SBT?

SBT Subprojects do not recognize plugin commands

I'm having an issue with getting SBT Subprojects to recognize commands provided by plugins. I have the following plugin source:
object DemoPlugin extends AutoPlugin {
  override lazy val projectSettings = Seq(commands += demoCommand)

  lazy val demoCommand =
    Command.command("demo") { (state: State) =>
      println("Demo Plugin!")
      state
    }
}
Which is used by a project configured as follows:
lazy val root = project in file(".")

lazy val sub = (project in file("sub")).
  enablePlugins(DemoPlugin).
  settings(
    //...
  )
The plugin is, of course, listed in project/plugins.sbt. However, when I open up sbt in the project, I see the following:
> sub/commands
[info] List(sbt.SimpleCommand@413d2cd1)
> sub/demo
[error] Expected ':' (if selecting a configuration)
[error] Not a valid key: demo (similar: doc)
[error] sub/demo
Even stranger, using consoleProject, I can see that the command in the project is the one defined by DemoPlugin!
scala> (commands in sub).eval.map { c => c.getClass.getMethod("name").invoke(c) }
res0: Seq[Object] = List(demo)
I'm looking to be able to type sub/demo, and have it perform the demo command. Any help would be much appreciated!
Commands aren't per-project. They only work for the top-level project.
It's also recommended to use tasks, or, if needed, input tasks in places where you might want to use a command.
If you really need a command, there's a way to have a sort of "holder" task, see the answer to Can you access a SBT SettingKey inside a Command?
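To illustrate the task-based approach, here is a minimal sketch of DemoPlugin rewritten around a taskKey; unlike commands, tasks are scoped per project, so sub/demo resolves:
import sbt._

object DemoPlugin extends AutoPlugin {
  object autoImport {
    // a task key is resolved per project, so it can be invoked as sub/demo
    val demo = taskKey[Unit]("Prints a demo message")
  }
  import autoImport._

  override lazy val projectSettings: Seq[Setting[_]] = Seq(
    demo := { println("Demo Plugin!") }
  )
}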
