Qbs custom module not working - qt

I want to make a module that uses the QtRO repc compiler to produce .h files from .rep files.
I wrote the module, but when I try to load it in an application product it does not load and the product gets disabled.
The modules are in C:\Users\User\qt\qbs.
Qbs Module replica.qbs:
import qbs

Module {
    property bool source: true

    FileTagger {
        patterns: "*.rep"
        fileTags: ["rep"]
    }

    Rule {
        inputs: ["rep"]
        Artifact {
            fileTags: ["txt_output"]
        }
        prepare: {
            var cmd = new Command();
            cmd.program = "repc.exe";
            if source {
                cmd.arguments = ["-i", "rep", "-o", "source", input.filePath];
            } else {
                cmd.arguments = ["-i", "rep", "-o", "replica", input.filePath];
            }
            console.log("repc on : ", input.filePath);
            return [cmd];
        }
    }
}
product.qbs:
import qbs

Application {
    name: "ServiceExposer"

    Depends { name: "cpp" }
    Depends { name: "Qt.core" }
    Depends { name: "Qt.remoteobjects" }
    Depends { name: "replica" }

    files: [
        "main.cpp",
        "service_exposer.rep"
    ]
}
project.qbs:
import qbs

Project {
    references: ["ServiceExposer/ServiceExposer.qbs"]
    qbsSearchPaths: "C:\Users\User\qt\qbs"
}
I don't see where I made the mistake.
Thank you in advance for your help.

If it's a header file, why do you give it the "cpp" tag? Shouldn't it be "hpp"?
What is the reason you are putting the file into the source directory? Do you plan on adding it to your repository? Normally, build artifacts (no matter whether they are binaries or human-readable files) should be located inside the build directory so as not to "pollute" the source tree.
You did not mention in what way the module does not work for you now, so it's hard to diagnose the problem. You should mention what you expected to happen and what happened instead (giving the concrete error message, if there is one).
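For illustration only, a sketch of what the comments above suggest (the file name is made up; it assumes import qbs.FileInfo): in Qbs, an Artifact with a relative filePath is created under the product's build directory rather than the source tree, and a generated header would normally carry the "hpp" tag.
Artifact {
    // a relative filePath is resolved against the build directory, not the source tree
    filePath: "repc_" + FileInfo.baseName(input.fileName) + "_replica.h" // hypothetical name
    fileTags: ["hpp"] // header output, as suggested in the comment above
}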

I managed to make it work after digging a little more into the docs and source code; here is the working module.
When this module is imported and there are any .rep files (QtRO (Qt Remote Objects) remote object definitions) in your project, it invokes the repc compiler on them and puts the resulting .h files in your source directory.
It is still not complete: I have not found a way to manipulate the files property of the Product item to add the generated .h files to it automatically.
import qbs
import qbs.File
import qbs.FileInfo

Module {
    FileTagger {
        patterns: ["*.rep"]
        fileTags: ["repc-rep"]
    }

    Rule {
        inputs: ["repc-rep"]
        Artifact {
            filePath: "repc_" + FileInfo.baseName(input.fileName) + "_source.h"
            fileTags: ["cpp"]
        }
        prepare: {
            var cmd = new Command();
            cmd.description = "repc " + input.fileName;
            cmd.program = "repc.exe";
            cmd.arguments = ["-i", "rep", "-o", "source", input.filePath, output.filePath];
            // copy the generated header next to the .rep file in the source tree
            var cmd2 = new JavaScriptCommand();
            cmd2.silent = true;
            cmd2.sourceCode = function() {
                File.copy(output.filePath, FileInfo.path(input.filePath) + "/" + output.fileName);
            };
            return [cmd, cmd2];
        }
    }
}
For this module to work, repc.exe must be in your PATH.
Any suggestions are welcome.
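One possible refinement (an untested sketch; repcPath is a hypothetical property name, not part of the module above): expose the compiler location as a module property so products can override it instead of relying on PATH.
Module {
    // hypothetical property; the default still relies on repc.exe being in PATH
    property string repcPath: "repc.exe"
    // ... FileTagger and Rule as above, but with:
    //     cmd.program = product.moduleProperty("replica", "repcPath");
}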

Related

How to set up react-native integration test

The react-native docs say to check UIExploreIntegrationTest. It seems that it requires some setup in Xcode, as it uses Objective-C code (*.m). I'm new to Obj-C testing. How do I set up the integration test in Xcode?
With some guesswork I was able to nail down a few steps to get integration tests running on iOS. However, I'm still figuring out how to get Android integration tests working.
Go ahead and copy IntegrationTests.js from the RN GitHub repository and make a new JS file called Tests.js.
Place both of these files in the root of your project. Then change IntegrationTests.js by replacing all of its require statements with a single require for the file you just created: require('./Tests').
Here is a basic implementation of what your Tests.js file should look like:
'use strict';

var React = require('react');
var ReactNative = require('react-native');
var {
  Text,
  View,
} = ReactNative;
var { TestModule } = ReactNative.NativeModules;

var Tests = React.createClass({
  shouldResolve: false,
  shouldReject: false,

  propTypes: {
    TestName: React.PropTypes.string
  },

  getInitialState() {
    return {
      done: false,
    };
  },

  componentDidMount() {
    if (this.props.TestName === "SomeTest") {
      Promise.all([this.SomeTest()]).then(() => {
        TestModule.markTestPassed(this.shouldResolve);
      });
      return;
    }
  },

  async SomeTest() {
    var one = 1;
    var two = 2;
    var three = one + two;
    if (three === 3) {
      this.shouldResolve = true;
    } else {
      this.shouldResolve = false;
    }
  },

  render(): ReactElement<any> {
    return <View />;
  }
});

Tests.displayName = 'Tests';
module.exports = Tests;
Here is a basic implementation of your Tests.m file (inside Xcode):
#import <UIKit/UIKit.h>
#import <XCTest/XCTest.h>
#import <RCTTest/RCTTestRunner.h>
#import "RCTAssert.h"

#define RCT_TEST(name)                  \
- (void)test##name                      \
{                                       \
  [_runner runTest:_cmd module:@#name]; \
}

@interface IntegrationTests : XCTestCase
@end

@implementation IntegrationTests
{
  RCTTestRunner *_runner;
}

- (void)setUp
{
  _runner = RCTInitRunnerForApp(@"IntegrationTests", nil);
}

- (void)test_SomeTest
{
  [_runner runTest:_cmd
            module:@"Tests"
      initialProps:@{@"TestName": @"SomeTest"}
configurationBlock:nil];
}

@end
You also need to add RCTTest from node_modules/react-native/Libraries/RCTTest/RCTTest.xcodeproj to your Libraries. Then you need to drag the product libRCTTest.a of the project you just added into Linked Frameworks and Libraries in the General tab of your main target.
^^ that path might be slightly incorrect
Then you need to edit your scheme and set an environment variable CI_USE_PACKAGER to 1.
If you do all of those steps, you should have a simple test that runs and passes. It should be fairly easy to expand after that. Sorry if my answer is slightly sloppy; let me know if you have any questions.

Location of Intern reporters output files like corbertura or html report

I'm using Grunt with Intern and set some reporters to lcovhtml and cobertura:
grunt.initConfig({
  intern: {
    runner: {
      options: {
        config: 'tests/intern',
        runType: 'runner',
        reporters: ['pretty', 'lcovhtml', 'junit', 'cobertura']
      }
    }
  },
Is there any configuration to control the output directory of these files, either for all reporters or per reporter?
For example, by adding a reportDir parameter to the options object defined in your Gruntfile.js, you can update intern/lib/reporters/lcovhtml.js by replacing:
define([
  'dojo/node!istanbul/lib/collector',
  'dojo/node!istanbul/lib/report/html',
  'dojo/node!istanbul/index'
], function (Collector, Reporter) {
  var collector = new Collector(),
      reporter = new Reporter();
  //...
});
with:
define([
  '../args',
  'dojo/node!istanbul/lib/collector',
  'dojo/node!istanbul/lib/report/html',
  'dojo/node!istanbul/index'
], function (args, Collector, Reporter) {
  var collector = new Collector(),
      reporter = new Reporter({ dir: args.reportDir });
  //...
});
You can propagate a similar update to the cobertura.js and junit.js reporters.
Note: I documented this approach in https://github.com/theintern/intern/issues/71. The patch for the corresponding issue has not been published yet (it was pushed for Intern 2.3).
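For reference, a sketch of what the Gruntfile side could look like under that assumption (reportDir is the parameter name used above; the directory value is only an example):
grunt.initConfig({
  intern: {
    runner: {
      options: {
        config: 'tests/intern',
        runType: 'runner',
        reporters: ['pretty', 'lcovhtml', 'junit', 'cobertura'],
        reportDir: 'reports/coverage' // example value, read via the args module in the patched reporters
      }
    }
  }
});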

How to avoid unnecessary uglifying in GruntJS?

I have a grunt file with the following definition:
uglify: {
  build: {
    src: 'www/temp/application.js', // a concatenation of files via grunt-contrib-concat
    dest: 'www/temp/application.min.js'
  }
},
What I would really like to do is recompute the final application.min.js only if the application.js file has changed. More precisely, I want to add this condition:
# pseudocode
if (getFileContents(application.js) == getFileContents(previously.uglified.application.js)) {
  // do nothing
} else {
  // run uglifying on application.js
}
Reason:
I deploy my project via git, and uglifying is relatively slow (3+ seconds); moreover, it is unnecessary since I don't change the JS files often.
There are several possible solutions:
You can create your own grunt task that checks the files' last modification times using, for example, fs.stat, and then runs the uglify task through grunt.task.run with the prepared options as an argument (a sketch of this approach appears at the end of this answer).
Or you can build the files object dynamically, passing it through a filter function:
var fs = require('fs');

module.exports = function (grunt) {

  // Remove destination/source pairs whose source file has not changed since the
  // last run; the last modification time is remembered in a "<src>.mtime" file.
  function filterChanged(files) {
    var mtime = '',
        stats;
    for (var dest in files) {
      stats = fs.statSync(files[dest]);
      try {
        mtime = fs.readFileSync(files[dest] + '.mtime', 'utf8');
      }
      catch (ex) {
        // no stored mtime yet: record it and return the files unfiltered
        fs.writeFileSync(files[dest] + '.mtime', stats.mtime, 'utf8');
        return files;
      }
      if (stats.mtime == mtime || !mtime) {
        delete files[dest];
      }
      else {
        fs.writeFileSync(files[dest] + '.mtime', stats.mtime, 'utf8');
      }
    }
    return files;
  }

  grunt.initConfig({
    uglify: {
      build: {
        files: filterChanged({
          'www/temp/application.min.js': 'www/temp/application.js'
        })
      }
    }
  });
};
This causes the filterChanged function to be invoked every time the uglify task runs.
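For the first approach, here is a rough sketch (untested; the task name uglifyIfChanged, the stamp file, and the log message are made up for illustration) of a custom task that compares modification times and only queues uglify when the source changed:
var fs = require('fs');

module.exports = function (grunt) {
  grunt.registerTask('uglifyIfChanged', function () {
    var src = 'www/temp/application.js';
    var stampFile = src + '.mtime';               // remembers the mtime of the last build
    var mtime = String(fs.statSync(src).mtime);
    var previous = grunt.file.exists(stampFile) ? grunt.file.read(stampFile) : '';

    if (mtime !== previous) {
      grunt.file.write(stampFile, mtime);
      grunt.task.run('uglify:build');             // queue the real uglify target
    } else {
      grunt.log.writeln('application.js unchanged, skipping uglify.');
    }
  });
};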

run concat task dynamically with destination file dynamically created

I'm running a grunt concat task on one of my projects and it looks something like this:
/**
 * Concatenate | Dependencies Scripts
 */
concat: {
  dependencies: {
    files: {
      "./Ditcoop/js/plugins.min.js": ["./Ditcoop/js/vendor/**/*.min.js", "!./Ditcoop/js/vendor/modernizr/*.js", "!./Ditcoop/js/vendor/jquery/*.js"],
      "./Global/js/plugins.min.js": ["./Global/js/vendor/**/*.min.js", "!./Global/js/vendor/modernizr/*.js", "!./Global/js/vendor/jquery/*.js"],
      "./Webshop/js/plugins.min.js": ["./Webshop/js/vendor/**/*.min.js", "!./Webshop/js/vendor/modernizr/*.js", "!./Webshop/js/vendor/jquery/*.js"]
    }
  }
}
My question would be if I could somehow make that more dynamic without having to specify each root folder. I was thinking of something like this:
concat: {
  dependencies: {
    files: {
      "./*/js/plugins.min.js": ["./*/js/vendor/**/*.min.js", "!./*/js/vendor/modernizr/*.js", "!./*/js/vendor/jquery/*.js"],
    }
  }
}
I'm pretty sure I cannot do it this way, but I could probably use the expand option. I'm just not sure how to use it so that each destination file ends up under the right root folder, without creating the same destination file as many times as I run concat.
Always remember that Gruntfiles are JavaScript :)
grunt.initConfig({
  concat: {
    dependencies: {
      files: (function() {
        var files = Object.create(null);
        grunt.file.expand({filter: 'isDirectory'}, '*').forEach(function(dir) {
          files[dir + '/js/plugins.min.js'] = [
            dir + '/js/vendor/**/*.min.js',
            '!' + dir + '/js/vendor/modernizr/*.js',
            '!' + dir + '/js/vendor/jquery/*.js'
          ];
        });
        return files;
      }()),
    },
  },
});
But if your dependency handling logic is this complex you may want to consider using a module loader such as browserify or requirejs. The concat task is really just for joining simple files together.

How can a Qbs build Rule use a product

I want to use Qbs to compile an existing project. This project already contains a code-transformation tool (my_tool) that is used heavily throughout.
So far I have (simplified):
import qbs 1.0

Project {
    Application {
        name: "my_tool"
        files: "my_tool/main.cpp"
        Depends { name: "cpp" }
    }

    Application {
        name: "my_app"
        Group {
            files: 'main.cpp.in'
            fileTags: ['cpp_in']
        }
        Depends { name: "cpp" }
        Rule {
            inputs: ["cpp_in"]
            Artifact {
                fileName: input.baseName
                fileTags: "cpp"
            }
            prepare: {
                var mytool = /* Reference to my_tool */;
                var cmd = new Command(mytool, input.fileName, output.fileName);
                cmd.description = "Generate\t" + input.baseName;
                cmd.highlight = "codegen";
                return cmd;
            }
        }
    }
}
How can I get the reference to my_tool for the command?
This answer is based on an email from Qbs author Joerg Bornemann, who allowed me to cite it here.
The usings property of Rule allows adding artifacts from product dependencies to a rule's inputs.
In this case we are interested in "application" artifacts.
The list of applications can then be accessed as inputs["application"].
Application {
    name: "my_app"
    Group {
        files: 'main.cpp.in'
        fileTags: ['cpp_in']
    }
    Depends { name: "cpp" }
    // we need this dependency to make sure that my_tool exists before building my_app
    Depends { name: "my_tool" }
    Rule {
        inputs: ["cpp_in"]
        usings: ["application"] // dependent "application" products appear in inputs
        Artifact {
            fileName: input.completeBaseName
            fileTags: "cpp"
        }
        prepare: {
            // inputs["application"] is a list of "application" products
            var mytool = inputs["application"][0].fileName;
            var cmd = new Command(mytool, [inputs["cpp_in"][0].fileName, output.fileName]);
            cmd.description = "Generate\t" + input.baseName;
            cmd.highlight = "codegen";
            return cmd;
        }
    }
}
Unfortunately, the usings property of Rule has been deprecated since Qbs 1.5.0. At the moment I have the same requirement: using a product artifact in a non-multiplex Rule.
The problem with a multiplex Rule is that if a single file in the input set changes, all input artifacts are re-processed, which is rather time consuming in my case.
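As a hedged sketch only (not from the cited email): in Qbs 1.5 and later, inputsFromDependencies is documented as the replacement for usings, so the rule above would change roughly like this, with the rest of the Artifact and prepare blocks kept as in the answer:
Rule {
    inputs: ["cpp_in"]
    inputsFromDependencies: ["application"] // replaces the deprecated usings property
    // Artifact and prepare as above; the tool can still be reached via
    // inputs["application"][0].filePath (filePath supersedes fileName in newer Qbs).
}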
