how to execute a dependent task when the parent task is called with the -c option - bitbake

I have executed a task with bitbake -c task1. When this task is executed, task2 also needs to be executed. How can I make sure that task2 will be called by bitbake? I also need to be able to run task2 as an independent task with bitbake -c.

Bitbake tasks form a chain of functions with a defined hierarchy. You cannot execute do_install before do_compile, in place of do_compile, or without do_compile; such an ability is absent because it wouldn't make much sense.
bitbake -c task1 means "perform everything that task1 demands before execution, execute task1, and stop". So there is no way to execute a task2 that comes after task1 with the command bitbake -c task1.
In my opinion the -c argument should be used for debugging rather than for regular work. Ideally you should describe all relations between tasks and initiate the build by image name, not by individual tasks. So it seems like you have chosen the wrong tool for your purposes. What are you trying to do?
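If the ordering between the two tasks really is fixed, the clean way is to describe it in the recipe itself so the relation is known to bitbake. A minimal sketch, assuming hypothetical tasks do_task1 and do_task2 in a .bb file:

do_task2() {
	bbnote "runs whenever do_task1 has run"
}
# make do_task1 a dependency of do_task2 and hook do_task2 into the normal build
addtask task2 after do_task1 before do_build

With this, bitbake -c task2 <recipe> pulls in do_task1 automatically, and a regular build runs both.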


Clear Failed Airflow DAG But Don't Restart

I'm running a custom backfill script that backfills a DAG serially. (If I run the backfill concurrently, I either run into a deadlock problem or a Serializable isolation problem.) As part of the process, I sometimes have failed DAG runs mixed in with non-existing dates. To backfill a failed date, I need to clear it first. The problem is that a cleared DAG automatically restarts and conflicts with the first date running in the backfill.
airflow clear dag_id -f -s 01-01-20 -e 01-12-20
Because of the way it is built, I'll have to rewrite it from scratch and clear each backfill individually. If I can prevent a cleared DAG from rerunning, it would save me some time. Is there a way to do this in the CLI?
You could try setting the max_active_runs argument to 1 when creating the DAG object. This will ensure that no more than one execution is active at a time; that way you can clear as many as you'd like and let Airflow handle the rest.
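As a minimal sketch (the dag_id and schedule here are made up):

from datetime import datetime
from airflow import DAG

dag = DAG(
    dag_id="my_backfill_dag",           # hypothetical
    start_date=datetime(2020, 1, 1),
    schedule_interval="@daily",
    max_active_runs=1,                  # only one run may execute at a time
)

Cleared runs then queue up and execute one by one instead of all restarting at once.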

how does bitbake parse a task with the -c option

When I executed bitbake -c task, the task execution was successful. I want to know how bitbake parses the task provided with the -c option. Can anyone help me with the code flow?
There are various tasks like do_compile, do_configure, do_rootfs and so on that bitbake performs when building a given recipe. Check here for the list of tasks.
When we want to perform only a specific task of our choice, we need the -c <task> switch for bitbake. Here is an example of such a context.
In my case, the kernel under Yocto is managed by the recipe sources/meta-fsl-arm/recipes-kernel/linux/linux-imx_4.9.11.bb. I've added some modules that I've created on my own. In this situation, even though you've modified the kernel (linux-imx) and the timestamps are updated, bitbake relies on sstate-cache information to decide whether a given recipe needs to be rebuilt.
However, since the recipe's input has changed, it needs to be compiled again. In such a case you opt for the command bitbake -c compile linux-imx -f, which instructs bitbake to run the "compile" task of the recipe linux-imx, forced by -f. You can perform any other task in the same way: -c menuconfig for the configuration menu, -c distrodata for data about the distribution, -c checkpkg for checking the list of packages, and so on.
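As a sketch of that workflow (the follow-up full build is my assumption about the usual next step):

bitbake -c compile -f linux-imx     # force do_compile to run again
bitbake linux-imx                   # then let the remaining tasks complete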
For better understanding, go through the latest version of the Yocto Reference Manual.

robotframework : How to ignore failure from --rerunfailed when all tests are passed

I run my robotframework test suite as a TeamCity/Jenkins build with two simple steps as below:
build step #1: pybot
build step #2: pybot --rerunfailed Results\output.xml
When all the tests in step 1 pass, the build fails because step 2 (--rerunfailed) triggers an error: [ ERROR ] Collecting failed tests from 'Results\output.xml' failed: All tests passed.
Could someone please suggest how to ignore or overcome this error, so that I can show the build as passed in this case?
I had a similar problem; I fixed it this way:
robot -d %ResultPath% %TestSuitName% || robot --rerunfailed %ResultPath%\output.xml --output %ResultPath%\output1.xml -l log.html -r report.html %TestSuitName% || rebot --merge --output %ResultPath%\output.xml -l log.html -r report.html %ResultPath%\output.xml %ResultPath%\output1.xml
The || chains these sequentially: each following command runs only if the previous one exits with a non-zero status (i.e. had failing tests), so the rerun and the merge are simply skipped when everything passes.
Make build step #2 dependent on build step #1 failing. That is, only run pybot --rerunfailed if the first pybot exited with a non-zero exit status.
The simplest way to do this is to create a custom test runner in bash or python or powershell that does both the running of pybot and re-running of pybot as a single step. You then configure this shell script as a single step.
Another way to do it is to have your second build step either look at the return code of the previous step (if possible), or scans the output.xml to see if there are failures. If there are no failures, it returns without doing any work.
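A minimal sketch of the single-step wrapper idea in bash, assuming a Results output directory and a hypothetical suite directory tests/:

#!/bin/bash
robot -d Results tests/ && exit 0       # all green: skip the rerun entirely
# some tests failed: rerun only those, then merge the two result files
robot --rerunfailed Results/output.xml -d Results --output rerun.xml tests/
rebot --merge -d Results --output final.xml Results/output.xml Results/rerun.xml
# the script exits with rebot's status: non-zero only if failures remain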
There is also the robot:skip-on-failure tag: if it is set, a failing test case is marked as skipped instead of failed, and it behaves normally when the test passes. It requires Robot Framework 5.0.

How to run one airflow task and all its dependencies?

I suspected that
airflow run dag_id task_id execution_date
would run all upstream tasks, but it does not. It simply fails when it sees that not all dependent tasks have been run. How can I run a specific task and all its dependencies? I am guessing this is not possible because of an Airflow design decision, but is there a way to get around this?
You can run a task independently by using the -i/-I/-A flags along with the run command.
But yes, the design of Airflow does not permit running a specific task and all its dependencies.
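For example, with the Airflow 1.x CLI the question is using (dag and task ids hypothetical):

airflow run -A my_dag_id my_task_id 2020-01-01    # -A ignores all dependency checks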
You can backfill the dag after removing non-related tasks from the DAG, for testing purposes.
A bit of a workaround, but if you have given your tasks their task_ids consistently, you can try backfilling from the Airflow CLI (Command Line Interface):
airflow backfill -t TASK_REGEX ... dag_id
where TASK_REGEX corresponds to the naming pattern of the task you want to rerun and its dependencies.
(remember to add the rest of the command line options, like --start_date).
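For instance, assuming task ids that share a common etl_ prefix (all names hypothetical):

airflow backfill -t "etl_.*" -s 2020-01-01 -e 2020-01-07 my_dag_id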

GNU make's -j option

Ever since I learned about -j I've used -j8 blithely. The other day I was compiling an atlas installation and the make failed. Eventually I tracked it down to things being made out of order - and it worked fine once I went back to singlethreaded make. This makes me nervous. What sort of conditions do I need to watch for when writing my own make files to avoid doing something unexpected with make -j?
I think make -j will respect the dependencies you specify in your Makefile; i.e. if you specify that objA depends on objB and objC, then make won't start working on objA until objB and objC are complete.
Most likely your Makefile isn't specifying the necessary order of operations strictly enough, and it's just luck that it happens to work for you in the single-threaded case.
In short - make sure that your dependencies are correct and complete.
If you are using single-threaded make, you can be blindly ignoring implicit dependencies between targets and still get a working build.
When using parallel make you can't rely on implicit dependencies; they should all be made explicit. This is probably the most common trap, particularly when .PHONY targets are used as dependencies.
This link is a good primer on some of the issues with parallel make.
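A sketch of the trap with hypothetical files: gen.h comes from a generator script, and main.c #includes it, but the makefile never says so.

all: gen.h main.o

gen.h:
	./generate-headers.sh	# hypothetical script that writes gen.h

main.o: main.c
	$(CC) -c main.c	# quietly needs gen.h as well

With -j1 this happens to work because gen.h is listed first under all; with -j2 both recipes can run at once. The fix is to make the dependency explicit: main.o: main.c gen.h.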
Here's an example of a problem that I ran into when I started using parallel builds. I have a target called "fresh" that I use to rebuild the target from scratch (a "fresh" build). In the past, I coded the "fresh" target by simply indicating "clean" and then "build" as dependencies.
build: ## builds the default target
clean: ## removes generated files
fresh: clean build ## works for -j1 but fails for -j2
That worked fine until I started using parallel builds, but with parallel builds, it attempts to do both "clean" and "build" simultaneously. So I changed the definition of "fresh" as follows in order to guarantee the correct order of operations.
fresh:
	$(MAKE) clean
	$(MAKE) build
This is fundamentally just a matter of specifying dependencies correctly. The trick is that parallel builds are more strict about this than single-threaded builds are. My example demonstrates that a list of dependencies for a given target does not necessarily indicate the order of execution.
If you have a recursive make, things can break pretty easily. If you're not doing a recursive make, then as long as your dependencies are correct and complete, you shouldn't run into any problems (save for a bug in make). See Recursive Make Considered Harmful for a much more thorough description of the problems with recursive make.
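As a sketch of why, assume a hypothetical layout where two sub-projects share a library:

all: appA appB

appA:
	$(MAKE) -C appA	# its own makefile rebuilds ../libcommon

appB:
	$(MAKE) -C appB	# rebuilds ../libcommon as well

Under make -j2 both sub-makes run at once and can rebuild libcommon simultaneously, because neither instance of make knows what the other is doing.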
It is a good idea to have an automated test for the -j option of ALL your makefiles. Even the best developers have problems with the -j option of make. The most common issues are the simplest ones.
myrule: subrule1 subrule2
	echo done

subrule1:
	echo hello

subrule2:
	echo world
In a normal make, you will see hello -> world -> done.
With make -j 4, you might see world -> hello -> done.
Where I have seen this happen most is with the creation of output directories. For example:
build: $(DIRS) $(OBJECTS)
	echo done

$(DIRS):
	-@mkdir -p $@

$(OBJECTS):
	$(CC) ...
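One common fix, as a sketch: declare the directories as order-only prerequisites of the objects, so they are guaranteed to exist before any compile starts, without their timestamps ever triggering a rebuild:

$(OBJECTS): | $(DIRS)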
Just thought I would add to subsetbrew's answer, as it does not show the effect clearly; adding some sleep commands does. (This works on Linux.)
Then running make shows the difference between:
make
make -j4
all: toprule1

toprule1: botrule2 subrule1 subrule2
	@echo toprule 1 start
	@sleep 0.01
	@echo toprule 1 done

subrule1: botrule1
	@echo subrule 1 start
	@sleep 0.08
	@echo subrule 1 done

subrule2: botrule1
	@echo subrule 2 start
	@sleep 0.05
	@echo subrule 2 done

botrule1:
	@echo botrule 1 start
	@sleep 0.20
	@echo "botrule 1 done (good prerequisite in sub)"

botrule2:
	@echo "botrule 2 start"
	@sleep 0.30
	@echo "botrule 2 done (bad prerequisite in top)"
