Can I execute several flows from ONE Corda flow?
Can I create several different transactions for different states from one flow?
The task involves creating different states within one flow.
Any examples?
Thanks.
Yes, you can do that with the help of subflows. Refer to this and the flow cookbook.
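For illustration, here is a minimal sketch in Java of a parent flow that calls two subflows, each of which could build and finalise its own transaction for a different state type. The child flow names (CreateStateAFlow, CreateStateBFlow) are placeholders, not flows from the Corda library; only FlowLogic, subFlow(), and the annotations are real Corda API.

```java
import co.paralleluniverse.fibers.Suspendable;
import net.corda.core.flows.FlowException;
import net.corda.core.flows.FlowLogic;
import net.corda.core.flows.InitiatingFlow;
import net.corda.core.flows.StartableByRPC;
import net.corda.core.transactions.SignedTransaction;

// Child flow #1: would build, sign, and finalise its own transaction
// for the first state type (details elided in this sketch).
class CreateStateAFlow extends FlowLogic<SignedTransaction> {
    @Suspendable
    @Override
    public SignedTransaction call() throws FlowException {
        return null; // placeholder for TransactionBuilder + FinalityFlow work
    }
}

// Child flow #2: same idea for a second, unrelated state type.
class CreateStateBFlow extends FlowLogic<SignedTransaction> {
    @Suspendable
    @Override
    public SignedTransaction call() throws FlowException {
        return null; // placeholder
    }
}

@InitiatingFlow
@StartableByRPC
public class ParentFlow extends FlowLogic<Void> {
    @Suspendable
    @Override
    public Void call() throws FlowException {
        // Each subFlow call runs a complete child flow to completion
        // before control returns to the parent, so one flow can produce
        // several transactions for several different states.
        subFlow(new CreateStateAFlow());
        subFlow(new CreateStateBFlow());
        return null;
    }
}
```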
As a follow-up to my question, I've been looking further at the design of an application that I see consistent issues with. While delving into it properly (its documentation is outdated), I've come across a part of the orchestration where I don't understand why it is designed that way.
The application is a singleton design:
Now while I understand that the application is implementing a singleton-like design, I don't understand why the scope circled in red is repeated before the listen shape. I've not seen anything online that documents a design like this and I can't figure out what functionality it adds. So my question is: what function does this accomplish? Is it needed?
You have collapsed the first scope, so we cannot see what that contains, but you indicated that it is the same code. Usually that will be processing the first Activating message. The scope inside the listen will be for subsequent messages that match the correlation.
Sometimes if you have a lot of duplicated code/logic, you might want to have it Call another Orchestration that contains that code/logic.
The other option, of course, would be to put the process straight after the first loop, followed by the listen, which would be cleaner and remove the duplicated code. See BizTalk Singleton Orchestration Design.
I see that Rule Flow supports actions, so it may be possible to build some types of workflow on top of this. In my situation I have a case management application with tasks for different roles, all working on a "document" that flows through different states; depending on the state, a different role will see it in their queue to work on.
I'm not sure what your question is, but InRule comes with direct support for Windows Workflow Foundation, so executing any InRule RuleApplication, including those with RuleFlow definitions, is certainly possible.
If you'd like assistance setting up this integration, I would suggest utilizing the support knowledge base and forums at http://support.inrule.com
Full disclosure: I am an InRule Technology employee.
For case management scenarios, you can use decisions specifically to model a process. Create a custom table or flags in your cases that depict the transition points in your process (steps). As you transition steps, call a decision which will determine if the data state is good enough to make the transition. If it is, then set the flag for the new state. Some folks allow for multiple states at the same time. InRule is a stateless platform; however, when used with CRM it provides 95% of the process logic and relies on CRM to do the persistence. I have written about this pattern in a white paper:
https://info.inrule.com/rs/inruletechnology/images/White_Paper_InRule_Salesforce_Integration.pdf
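For illustration, here is a minimal, vendor-neutral Java sketch of that transition-guard pattern. DecisionService, Case, and Step are hypothetical placeholders, not InRule or CRM API; the point is calling a decision at each transition point and only setting the flag for the new state when the decision passes.

```java
import java.util.EnumSet;
import java.util.Set;

enum Step { INTAKE, REVIEW, APPROVAL, CLOSED }

class Case {
    // A set of flags, since some folks allow multiple states at once.
    final Set<Step> activeSteps = EnumSet.of(Step.INTAKE);
}

interface DecisionService {
    // Stand-in for an external rules engine call: returns true if the
    // case data is good enough to enter targetStep.
    boolean evaluate(Case c, Step targetStep);
}

class TransitionManager {
    private final DecisionService decisions;

    TransitionManager(DecisionService decisions) {
        this.decisions = decisions;
    }

    // Call the decision at the transition point; only clear the old flag
    // and set the new one if the rules allow it. Persistence is left to
    // the host system (e.g. CRM), keeping the rules engine stateless.
    boolean transition(Case c, Step from, Step to) {
        if (!c.activeSteps.contains(from)) return false;
        if (!decisions.evaluate(c, to)) return false;
        c.activeSteps.remove(from);
        c.activeSteps.add(to);
        return true;
    }
}
```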
In my application, I have a domain model which is essentially a graph. I need to perform the following operations and then send the resulting graph to the client over the network.
Operations to be performed
Filter certain nodes based on business policy
Augment with more nodes and relationships (potentially from other data providers)
After filtering, I need a serialization mechanism as well. After working with Neo4j and TinkerPop, I feel TinkerPop fits my use case well, as it has:
In-memory graph support (TinkerGraph)
Serialization mechanisms: GraphML, GML, and GraphSON
I am wondering if my understanding is accurate and my approach is correct. Please advise.
Sounds right. I often extract subgraphs and store them in a TinkerGraph for follow-on processing. I also use GraphSON for serialization. Seems like you're on the right track.
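For illustration, a minimal sketch using the TinkerPop 3 Java API (if you are on TinkerPop 2 the classes and package names differ): it builds a small in-memory TinkerGraph, extracts a filtered subgraph with the subgraph() step, and serializes that subgraph as GraphSON.

```java
import org.apache.tinkerpop.gremlin.process.traversal.dsl.graph.GraphTraversalSource;
import org.apache.tinkerpop.gremlin.structure.Graph;
import org.apache.tinkerpop.gremlin.structure.io.IoCore;
import org.apache.tinkerpop.gremlin.tinkergraph.structure.TinkerGraph;

public class SubgraphExample {
    public static void main(String[] args) throws Exception {
        // Build a small in-memory graph.
        TinkerGraph graph = TinkerGraph.open();
        GraphTraversalSource g = graph.traversal();
        g.addV("person").property("name", "alice").as("a")
         .addV("person").property("name", "bob").as("b")
         .addE("knows").from("a").to("b")
         .iterate();

        // Filter: pull only the edges (and incident nodes) you want
        // into a new in-memory subgraph.
        Graph sub = (Graph) g.E().hasLabel("knows")
                .subgraph("sg").cap("sg").next();

        // Serialize the filtered subgraph as GraphSON for the client.
        sub.io(IoCore.graphson()).writeGraph("filtered-graph.json");
    }
}
```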
Here are two good sources for additional information:
gremlindocs.com
https://groups.google.com/forum/#!forum/gremlin-users
I'm working on a BPMS project with WF4. To implement human activities I used a custom native activity that performs several functions. It creates a bookmark, the WF instance is persisted, and the workflow is unloaded until the next call.
My exact problem is fork-join in Workflow Foundation 4. I don't know how to do it.
I found that the Parallel activity executes each of its child activities, and the workflow continues only when all of them have finished. I also know about the Pick branch and its functionality. But my project needs another kind of activity, something like a combined parallel and branching activity.
I want multiple sequence branches that can run alongside each other and reach the end without any dependency on the other sequences. I think it's like multiple-instance workflows. I also need to join branches in some situations, which is fork-join. Maybe one branch reaches the end of the workflow while another is still in the middle of its sequence.
Does WF4 support multi-branching? Can I do it?
WF4 doesn't support fork-joins. You need to model this using a Parallel activity and/or custom activities with bookmarks.
I wanted to use StudyCompletionDate (0032,1050), but since it is retired I would like to know which attribute to use to determine whether a study is complete or ready for archival.
We are writing an archival solution for a PACS server, and I would like to query the PACS server for the DICOM images that are marked for archival. I want to know if there is any flag that indicates that a DICOM image is marked for archival.
The preferred method for this type of operation would be through the use of the DICOM Instance Availability service, which is defined in DICOM Supplement 93. The beginning of the supplement describes several use cases similar to what you're discussing.
As far as just performing a DICOM C-FIND and determining the study status goes, there's no real method to find out what you're looking for. The Instance Availability tag only tells you whether the study is Online, Nearline, or Offline, not whether it is complete. You could, however, monitor the Number of Study Related Instances tag to see whether the number of instances is still increasing. If it has been stable for a configured amount of time, you could assume the study is complete.
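As a rough illustration of that heuristic, here is a hedged Java sketch. StudyQueryClient and countStudyRelatedInstances() are hypothetical placeholders for whatever DICOM toolkit you use to issue the C-FIND that returns Number of Study Related Instances (0020,1208); the polling and stability-window logic are the point, not the query plumbing.

```java
interface StudyQueryClient {
    // Hypothetical wrapper around a C-FIND returning the current value
    // of Number of Study Related Instances (0020,1208) for a study.
    int countStudyRelatedInstances(String studyInstanceUid) throws Exception;
}

class StudyStabilityMonitor {
    private final StudyQueryClient client;
    private final long stableWindowMillis;   // e.g. 30 minutes
    private final long pollIntervalMillis;   // e.g. 1 minute

    StudyStabilityMonitor(StudyQueryClient client,
                          long stableWindowMillis,
                          long pollIntervalMillis) {
        this.client = client;
        this.stableWindowMillis = stableWindowMillis;
        this.pollIntervalMillis = pollIntervalMillis;
    }

    // Polls the instance count; once it has not changed for the configured
    // window, assume the study is complete. This is only a heuristic: as
    // noted below, DICOM never guarantees a study is "done".
    void waitUntilStable(String studyUid) throws Exception {
        int lastCount = client.countStudyRelatedInstances(studyUid);
        long stableSince = System.currentTimeMillis();
        while (System.currentTimeMillis() - stableSince < stableWindowMillis) {
            Thread.sleep(pollIntervalMillis);
            int count = client.countStudyRelatedInstances(studyUid);
            if (count != lastCount) {
                lastCount = count;
                stableSince = System.currentTimeMillis(); // reset the clock
            }
        }
    }
}
```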
I'm afraid Steve is correct. There really is no way to tell that a study has been completed externally using only DICOM data. The real solution is to build your system to not expect a 'done' state. DICOM assumes a study is never static. This is probably because a study is not a concrete thing. A study is inferred via the actual instances. If you make the same assumption (that studies are not static), you should be fine. Good luck on your data model! :)
Also, you could use a performed procedure step to know that the study has moved forward in the workflow. The problem with that is that now you're talking HL7, and you're not talking to the PACS, you're talking to the RIS, which talks to the PACS. Maybe too indirect for a well-engineered solution. And it does not tell you that a study is complete, only that it has moved. The problem with the number of related instances is that it tells you how many images were shot; it tells you nothing about how many images actually exist (read: whether the tech deleted bad images or not).
Oh, and use Storage Commitment for verifying archival.