Multiple, prioritized ways to satisfy a dependency (build process)

Suppose I have a target A that can be built only when either B or C has already been built. Building C is much more expensive than building B.
How can I write an optimal SConstruct file that satisfies the following constraints when I ask it to build A?
If either B or C is present and up to date, directly build A.
If neither B nor C is present and up to date, first build B and then build A.
If scons does not provide this capability, does any other build tool provide it?

You might try using SideEffect() on the builder for B or C, and also have a builder to generate A.
I haven't tried this, but it might work.
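One way to sketch the fallback logic is to pick the prerequisite at SConstruct-read time. This is untested, and the command strings (`generate_b`, `build_a`) are placeholders for whatever actually builds your targets:

```python
# SConstruct -- untested sketch of "use C if it exists, otherwise build B"
import os

env = Environment()

if os.path.exists('C'):
    # An existing C can satisfy the dependency; build A straight from it.
    prereq = File('C')
else:
    # Fall back to the cheaper target B, letting SCons build it if needed.
    prereq = env.Command('B', [], 'generate_b > $TARGET')

env.Command('A', prereq, 'build_a $SOURCE $TARGET')
```

Note this only checks for C's existence, not whether it is up to date; a fuller solution would need a custom Decider or scanner.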

PDAL pipelines and getting the difference

I have 2 overlapping point clouds - A.las and B.las.
A is from 2015 and B is from 2016, both are from the same area.
I have PDAL (through OSGeo4W64). I'm trying to create a new file containing all the points which differ; this can be in two files (A_diff and B_diff) or a single All_diff.
I've tried to use diff within PDAL and PCL, but I'm not sure how to write the syntax of the JSON file, and the www.pdal.io site is not great for beginners. Can anyone provide me with an example?
Here's the PCL info: http://docs.pointclouds.org/trunk/classpcl_1_1_segment_differences.html
Thank you for any help.
It is not possible to do this as a PDAL pipeline with the current suite of stages.
The problem is that all reader stages will be subject to the same filter stages (not entirely true, there is a concept of branching pipelines, but it is not widely used). Regardless, there is no way to query one input cloud from another in the pipeline setup. The only workaround that comes immediately to mind would be to develop a custom filter that accepts as one of its inputs the filename of the cloud to query against. We do something similar when colorizing points from a raster. You'd have to develop two pipelines (A to B, and B to A) and write the partial diffs.
I think the easiest way forward is to create a new PDAL kernel that does exactly what you need.
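If you do end up computing the two partial diffs yourself outside of PDAL, the nearest-neighbor "difference" idea can be sketched in plain NumPy on point arrays extracted from each cloud. The tolerance value is an assumption you would tune to your data:

```python
import numpy as np

def cloud_diff(a, b, tol=0.01):
    """Return the points of cloud `a` with no neighbor in cloud `b` within `tol`.

    Brute force O(len(a) * len(b)) -- fine for a sketch; use a KD-tree
    (e.g. scipy.spatial.cKDTree) for real-sized point clouds.
    """
    dists = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
    return a[dists.min(axis=1) > tol]

# Toy stand-ins for the XYZ arrays you would read out of A.las and B.las.
a = np.array([[0.0, 0.0, 0.0], [1.0, 1.0, 1.0], [5.0, 5.0, 5.0]])
b = np.array([[0.0, 0.0, 0.0], [1.0, 1.0, 1.0]])

a_diff = cloud_diff(a, b)  # points in A but not in B
b_diff = cloud_diff(b, a)  # points in B but not in A
```

Running both directions gives you the A_diff and B_diff halves, which you could then write back out (or concatenate into a single All_diff).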

Robot Framework Test Flow

Is it possible to require the execution of a specific test case before the execution of the current test case?
My test cases are organized in several folders, and it's possible that a test requires the execution of another test placed in another folder (see the image below).
Any suggestions?
There is nothing you can do if the test cases are in different files, short of reorganizing your tests.
You can control the order that suites are run, and you can control the order of tests within a file, but you can't control the order of tests between files.
Best practices suggest that tests should be independent and not depend on other tests. In practice that can be difficult, but at the very least you should strive to make test suites independent of one another.
This is not a good / recommended / possible way to go.
Robot Framework doesn't support it, and for a good reason: it is not sustainable to create such dependencies in the long term (or even the short term).
Tests shouldn't depend on other tests, especially not on tests from a different suite. What if the other suite was not run?
You can work around the issue in two ways:
You can define a file called __init__.robot in a directory. The suite setup and suite teardown in that file will run before anything in the underlying folders.
You can also turn the other test into a keyword, so that:
Test C simply calls a keyword that performs Test C's steps and also updates a global variable (Test_C_already_runs).
Test B would then issue:
Run Keyword If    '${Test_C_already_runs}' != 'true'    Test C Keyword
You would have to set a value for Test_C_already_runs before that anyway (as part of a variable import, or as part of some suite setup) to prevent a variable-not-found error.
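Putting the keyword workaround together, the suite might look roughly like this (the variable and keyword names are illustrative, and the keyword bodies are placeholders for your real steps):

```robotframework
*** Variables ***
${Test_C_already_runs}    false

*** Test Cases ***
Test B
    Run Keyword If    '${Test_C_already_runs}' != 'true'    Test C Keyword
    Log    Rest of Test B runs here

*** Keywords ***
Test C Keyword
    Log    Actual steps of Test C run here
    Set Global Variable    ${Test_C_already_runs}    true
```

`Run Keyword If` and `Set Global Variable` are built-in keywords, so no extra library is needed.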

How to reuse parts of Windows Workflows?

I have 2 different WF workflows, which both use an identical piece of logic.
Currently that part is copy-pasted from one workflow to the other.
How can I use a part of a workflow in two different workflows without duplicating it?
Basically, I want to have one "library" workflow, which is used by the 2 "real" workflows.
Just add a new "Activity Library" project to your solution.
Create an activity in the library that contains the common part.
Add a reference to the Activity Library project.
Build all in release.
Drop the common activity into your workflows.
You could make a new type of activity which is able to start a workflow definition.
You end up with workflow definition A and workflow definition B, and within these definitions you can place your custom activity, which can execute a new workflow definition C.
So A and B point to C.
Workflow definition C can be defined as a child workflow of A and B.
I think this link is a good starting point.
http://wf.codeplex.com/wikipage?title=How%20do%20I%20invoke%20a%20Child%20Workflow%3f&referringTitle=Microsoft.Activities%20Overview
You can also just make a workflow with the common part and use LoadAndInvokeWorkflow activity to execute it in another workflow.

Association of any kind between QC projects?

I want to associate one QC project with another (e.g., manual testing and automation testing). I use QC 11.00
I would like to know what kind of association there can be between two QC projects (on the same domain), so I do not have to maintain two projects and then copy paste what I need e.g. common repositories etc.
I'm not sure that you can do this. A project in QC is supposed to be a self-contained entity, that is, there is no way (that I know of) that you can automatically move data between projects.
Sure, you can copy and paste data, as well as create a project with another one as base, but that is probably not what you want.
I would rather have manual testing and automation in the same project, which makes more sense I think. The point is that the project is supposed to identify the test object, rather than the test methodology - the latter can be done better in Test Plan where you specify a Test Type when you create your test.
This way, you will have all defects and test reports for your test object in the same project which will make it all the easier to track what is going on.
As a general rule, you would want to keep all project data for one project in that project, and you want project data from that project to be unique and separate from all other projects.
That being said... if you really wanted to do this (and were able to convince a QC subject matter expert that it was a good idea?), then it should be a relatively simple matter to amend the workflow with additional code to interface with another project.

Download a file in D

How do I download a file in D? I have checked out the standard library and the sample. I would rather use Phobos with the newest dmd2 than Tango. All I need to do is download a file (hopefully using std.socket and std.socketstream). I could also use etc.c.curl.
etc.c.curl provides the C bindings for curl, so you could use that. That's really the only way that I know of to do it using Phobos at the moment, unless you want to do it with std.socket and handle the HTTP requests and responses yourself (which I assume that you don't really want to do).
However, a D wrapper for the C curl bindings is currently in review in the digitalmars.D newsgroup, which would give you a D API for interacting with curl. Assuming that it passes review (which it probably will, though it may change a fair bit during the review process), it'll end up in Phobos. Once it's merged in, it'll be in the following release. So, it'll probably be in either 2.055 or 2.056, depending on when 2.055 gets released.
Until then, however, you're pretty much going to need to either use the C bindings or download the D curl wrapper currently under review. You can find the documentation here and the code here if you want to try it out. If you do that however, it would be much appreciated if you chimed in on the review in the newsgroup to give feedback on it so that it can be appropriately ironed out and improved prior to inclusion in Phobos.
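For reference, that wrapper did eventually land in Phobos as std.net.curl, where a download is a one-liner (the URL and filename below are placeholders):

```d
// Requires a dmd/Phobos recent enough to include std.net.curl.
import std.net.curl : download;

void main()
{
    download("http://example.com/file.zip", "file.zip");
}
```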
