How can I get the Differential ID in Phabricator from a branch - phabricator

I made a commit on a particular branch and then created a Differential revision with Arcanist (by running arc diff).
Now I have forgotten the Differential ID of that revision and want to get it back. How can I achieve this?

If you check out that branch and run arc which, it should tell you which Differential revision the working copy corresponds to. Also, if you used arc diff to create the revision, the commit message of that commit should have been amended with a URL to the revision.

Related

Airflow - branching without joining

I am still very new to airflow and am trying to make a workflow that only processes data on business days. The flow logic should be:
if date.is_busday():
    process_data()
else:
    do_nothing()  # but mark the run as successful or skipped
I used the BranchPythonOperator but did not join the two branches at the end. If it is not a business day, I have it run a dummy operator branch. Otherwise, it goes on to a normal operator branch. Here is what the flow looks like:
DAG diagram
I am able to successfully skip non-business days using this method. However, I get this warning: WARNING - No DAG RUN present - this should not happen
Additionally, when I set SLA, I started getting emails for all the dummy tasks that were not run.
When I started reading about branch operators, all the examples I found eventually joined the branches again (branch documentation, ex. 1, ex. 2, ex. 3). This question is the closest I found to mine, but it seemed to be solved by joining the branches: similar question. This question doesn't join the branches, but its use case seems unique and complicated: ex. 4
Since I get warnings and cannot find any examples of un-joined branches, I am now doubting my approach. My questions are:
What is the best/simplest way to make this work in airflow?
If my approach is correct, how do I get rid of the warning message or is it just something I have to live with?
If I had to join the branches, could I join the load_data branch and the dummy branch with another dummy branch? This seems kind of absurd to me but could it be the right way to avoid all the warnings?
Thanks in advance.
I would suggest you use ShortCircuitOperator instead of a branch operator.
The branch operator always expects at least 2 downstream tasks.
ShortCircuitOperator lets you evaluate a condition: if it is a business day, the flow continues; if not, ShortCircuitOperator stops it.
ShortCircuitOperator also expects a Python callable; that function can simply return True / False to continue or exit.
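A minimal sketch of this approach, assuming a plain Monday-to-Friday business-day check (the task IDs and DAG wiring below are illustrative, not taken from the question's actual DAG):

```python
from datetime import date

def is_business_day(d: date) -> bool:
    # Plain weekday check (Mon=0 .. Fri=4); this deliberately ignores
    # public holidays - swap in numpy.is_busday or a holiday calendar
    # if you need those excluded too.
    return d.weekday() < 5

# Hypothetical DAG wiring (Airflow 2.x import path):
#
# from airflow.operators.python import ShortCircuitOperator
#
# check_busday = ShortCircuitOperator(
#     task_id="check_busday",
#     python_callable=lambda ds, **_: is_business_day(date.fromisoformat(ds)),
#     dag=dag,
# )
# check_busday >> process_data  # process_data is skipped on non-business days
```

When the callable returns False, ShortCircuitOperator marks all downstream tasks as skipped, so no dummy branch and no join task are needed.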

XProc - Pausing a pipeline and continuing it when a certain event occurs

I'm fairly new to XProc and XPath, but I've been asked to solve the following problem:
Step 2 receives data via the secondary port from step 1. Step 2 contains a p:for-each, which saves a document into a folder for each element that passes the for-each.
(Part A)
These documents (let's say I receive 6 documents from the for-each) lie in the same directory, get filtered by p:directory-list, and are eventually stored in one single document containing the full path of every document the for-each created. (Part B)
So far, so good.
The problem is that Part A seems to be too slow: Part B already tries to read the data Part A stores while the directory is still empty. In other words, I have a synchronization problem.
And now comes the question:
Is it possible to let the pipeline wait and to let it continue as soon as a certain event occurs?
That's what I'm imagining:
Step B waits as long as necessary until the directory Step A stores the data in is no longer empty. I read something about dbxml:breakpoint, but unfortunately I couldn't find more information than the name and a short description of what it seems to do:
Set a breakpoint, optionally based upon a condition, that will cause pipeline operation to pause at the breakpoint, possibly requiring user intervention to continue and/or issuing a message.
It would be awesome if you know more about it and could give an example of how it's used. It would also help if you know a workaround or another way to solve this problem.
UPDATE:
After searching Google for half an eternity, I found SMIL, whose timesheets seem to do the trick. Does anyone have experience with combining XML / XProc and SMIL?
Back towards the end of 2009 I proposed the concept of 'Orchestrating XProc with SMIL' in a blog post on the O'Reilly Network: http://broadcast.oreilly.com/2009/09/xproc-and-smil-orchestrating-p.html
However, I'm not sure that this (XProc + time) is the solution to your problem. It's not entirely clear to me from your description what's happening. Are you implying that you're trying to write something to disk and then read it in a subsequent step? You need to keep stuff in the pipeline in order to ensure you can connect outputs to subsequent inputs.
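XProc itself has no standard "wait for an event" step, so if the documents really must go to disk, one workaround is to synchronize in a driver script around the two pipeline parts. A sketch of that polling idea in Python (the directory name, timeout, and poll interval are illustrative):

```python
import os
import time

def wait_for_files(directory: str, timeout: float = 30.0, poll: float = 0.5) -> bool:
    """Block until `directory` exists and is non-empty, or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if os.path.isdir(directory) and os.listdir(directory):
            return True  # Part A has produced at least one document
        time.sleep(poll)
    return False  # timed out; directory is still empty
```

You would run Part A, call wait_for_files(...) on the output directory, and only then invoke the pipeline containing p:directory-list. That said, as noted above, keeping the documents in the pipeline avoids the race entirely and is the more idiomatic XProc design.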

Fossil SCM: Merge leafs back to trunk

I have been working with Fossil SCM for some time, but I still see something I don't quite get.
In the screenshot you can see that there are two leaves present in the repository, but sadly I can't find a way to merge them back into trunk (it is annoying to have the 'Leaf' mark on all my commits).
I had Leaves before and I normally merged them by doing
fossil update trunk
fossil merge <merged_changeset_id>
But now I just get the message:
fossil: cannot find a common ancestor between the current checkout and ...
Update: this repository is a complete import from a Git repository; I'm going to try to reproduce the error.
ravenspoint is right, more or less: using --baseline BASELINE, especially with the initial empty commit of the branch you are trying to merge into, will link your independent branches into a single graph.
You can also hide the leaves you do not want to see from the timeline through the web ui, or mark them as closed.
Updated, 2017-01-12: this approach stopped working for me at some point. I now get "lack both primary and secondary files" errors when I try it. I suspect this is dependent on the schema, possibly the changes associated with Fossil 1.34.
Have you tried:
--baseline BASELINE Use BASELINE as the "pivot" of the merge instead
of the nearest common ancestor. This allows
a sequence of changes in a branch to be merged
without having to merge the entire branch.

Hadoop MapReduce implementation of shortest path in a graph, not just the distance

I have been looking for a MapReduce implementation of shortest-path search algorithms.
However, all the instances I could find computed only the shortest distance from node x to y, and none actually output the actual shortest path, like x-a-b-c-y.
What I am trying to achieve: I have graphs with hundreds of thousands of nodes, and I need to perform frequent-pattern analysis on shortest paths among the various nodes. This is for a research project I am working on.
It would be a great help if someone could point me to an implementation (if it exists) or give some pointers on how to modify the existing SSSP implementations to generate the paths along with the distances.
Basically these implementations work with some kind of messaging: messages are sent through HDFS between the map and reduce stages.
In the reducer the messages for a vertex are grouped and filtered by distance, and the lowest distance wins. When the distance is updated, you also have to record the vertex (well, some ID for it) the winning message came from.
This costs additional space per vertex, but it lets you reconstruct every shortest path in the graph.
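The parent-pointer idea can be illustrated outside MapReduce: store, next to each vertex's best distance, the predecessor that produced it, then walk the predecessors back from the target. A minimal sketch (the vertex names follow the x-a-b-c-y example from the question; the dict-based parent map is an assumption, not the actual Writable classes):

```python
def reconstruct_path(parent, source, target):
    """Walk parent pointers back from `target` to `source` and reverse."""
    path = [target]
    while path[-1] != source:
        path.append(parent[path[-1]])  # a KeyError here means target is unreachable
    path.reverse()
    return path

# parent[v] = predecessor of v on the best path found so far,
# filled in by the reducer whenever it lowers v's distance
parents = {"a": "x", "b": "a", "c": "b", "y": "c"}
```

With that map, reconstruct_path(parents, "x", "y") yields ["x", "a", "b", "c", "y"], i.e. the full path rather than just the distance.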
Based on your comment:
yes, probably I will need to write another vertex class to hold this additional information. Thanks for the tip, though it would be very helpful if you could point out where and when I can capture the information of where the minimum weight came from, anything from your blog maybe :-)
Yes, that could be quite a cool topic, also for Apache Hama. Most of the implementations only consider the costs, not the actual path. In your case (from the blog you've linked above) you will have to extend the vertex class that holds the adjacent vertices as LongWritable (maybe a list instead of the split on the Text object) and simply add a parent or source ID as a field (of course also a LongWritable).
You set this field when propagating in the mapper, i.e. in the for loop that iterates over the adjacent vertices of the current key node.
In the reducer you update the lowest distance while iterating over the grouped values; there you have to set the source vertex of the key vertex to the vertex that updated it to the minimum.
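The mapper/reducer steps just described can be sketched as plain functions. This is a simplified model, not the blog's actual Hadoop classes: the (distance, source_id) message tuple and the adjacency-list format are assumptions for illustration.

```python
def map_vertex(vertex_id, distance, adjacency):
    """Propagate: emit a candidate distance to every neighbour,
    tagged with the sender so the path can be rebuilt later."""
    for neighbour, edge_weight in adjacency:
        yield neighbour, (distance + edge_weight, vertex_id)

def reduce_messages(messages):
    """Group-and-filter: the minimum-distance message wins,
    and its sender becomes the vertex's parent."""
    best_distance, parent = min(messages)
    return best_distance, parent
```

For example, reduce_messages([(7, "b"), (5, "c")]) returns (5, "c"): distance 5 wins, and "c" is recorded as the parent for later path reconstruction.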
You can actually get some of the vertex classes from my blog:
Source
or directly from the repository:
Source
Maybe it helps you; the code is quite unmaintained, so please come back to me if you have a specific question.
Here is the same algorithm in BSP with Apache Hama:
https://github.com/thomasjungblut/tjungblut-graph/blob/master/src/de/jungblut/graph/bsp/SSSP.java

How can I re-enter my flowchart workflow at a previous, higher level

I have a .xamlx flowchart workflow that models an approval process. If the submitter changes the document before the flowchart has finished, I want the submitter to 'resubmit' the document. I thought I would just be able to call the first Receive activity again, but I think the workflow recognizes it is already further along and exits.
Do I need to 'cancel' the workflow before 'resubmitting'? Or perhaps I just need another method later in the flowchart that the submitter calls? Any help is appreciated.
You can create any branch you want in a flowchart. So you can just loop back to the top of the workflow. You do need to model this explicitly in your workflow though.
Maurice was correct, more or less. I could copy the original Receive to a later place in the workflow. But for it to work correctly, the first Receive needed both its CorrelatesOn and CorrelationInitializers properties set to the same correlation handle variable. The 'copy' only needed the CorrelatesOn property set. That may have been obvious, but since I didn't know it, I'm documenting it here in case anyone else hits the same issue.
