I have 2 different WF workflows, which both use an identical piece of logic.
Currently that part is copy-pasted from one workflow to the other.
How can I use a part of a workflow in two different workflows without duplicating it?
Basically, I want to have one "library" workflow, which is used by the 2 "real" workflows.
Just add a new "Activity Library" project to your solution.
Create an activity in the library that contains the common part.
Add a reference to the Activity Library project.
Build all in release.
Drop the common activity into your workflows.
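The shared activity can be a plain code activity in the library project. A minimal sketch (the activity name and its argument are placeholders, not from the original question):

```csharp
using System.Activities;

// Hypothetical shared activity living in the Activity Library project.
// Both "real" workflows drop this onto their design surface.
public sealed class NormalizeInput : CodeActivity<string>
{
    // Input supplied by the hosting workflow.
    public InArgument<string> RawValue { get; set; }

    protected override string Execute(CodeActivityContext context)
    {
        // The common logic that used to be copy-pasted lives here, once.
        string raw = context.GetValue(RawValue);
        return raw == null ? null : raw.Trim().ToUpperInvariant();
    }
}
```

After building the library, the activity shows up in the toolbox for any workflow project that references it.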
You could make a new type of activity which is able to start a workflow definition.
So you end up with workflow definition A and workflow definition B, and within these definitions you place your custom activity, which executes a new workflow definition C.
So both A and B point to C.
Workflow definition C can be defined as a child workflow of A and B.
I think this link is a good starting point.
http://wf.codeplex.com/wikipage?title=How%20do%20I%20invoke%20a%20Child%20Workflow%3f&referringTitle=Microsoft.Activities%20Overview
You can also just make a workflow with the common part and use LoadAndInvokeWorkflow activity to execute it in another workflow.
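As a sketch of what such a "run another definition" activity could look like (assuming the standard `WorkflowInvoker` host; the class name is illustrative):

```csharp
using System.Activities;

// Hypothetical activity that runs a child workflow definition inline.
// Workflow definitions A and B would each contain one of these,
// pointing at the shared definition C.
public sealed class InvokeChildWorkflow : CodeActivity
{
    // The child definition (workflow C), assigned when the activity
    // is composed into the parent workflow.
    public Activity ChildDefinition { get; set; }

    protected override void Execute(CodeActivityContext context)
    {
        // Runs the child synchronously; a production version would
        // forward arguments and surface the child's results.
        WorkflowInvoker.Invoke(ChildDefinition);
    }
}
```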
I have 2 overlapping point clouds - A.las and B.las.
A is from 2015 and B is from 2016, both are from the same area.
I have PDAL (through OSGeo4W64), and I'm trying to create a new file containing all the points that differ. This could be two files, i.e. A_diff and B_diff, or a single All_diff.
I've tried to use diff within PDAL and PCL, but I'm not sure how to write the syntax of the JSON pipeline file, and the www.pdal.io site is not great for beginners. Can anyone provide me with an example?
Here's the PCL info: http://docs.pointclouds.org/trunk/classpcl_1_1_segment_differences.html
Thank you for any help.
It is not possible to do this as a PDAL pipeline with the current suite of stages.
The problem is that all reader stages will be subject to the same filter stages (not entirely true, there is a concept of branching pipelines, but it is not widely used). Regardless, there is no way to query one input cloud from another in the pipeline setup. The only workaround that comes immediately to mind would be to develop a custom filter that accepts as one of its inputs the filename of the cloud to query against. We do something similar when colorizing points from a raster. You'd have to develop two pipelines (A to B, and B to A) and write the partial diffs.
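For reference, the general shape of a PDAL pipeline JSON is a single array of stages: readers first, then filters, then a writer. This sketch does not perform the cross-cloud diff (which, as above, is not expressible in the current stage suite); it just shows the syntax, using `filters.crop` with illustrative bounds:

```json
{
  "pipeline": [
    "A.las",
    {
      "type": "filters.crop",
      "bounds": "([0, 100], [0, 100])"
    },
    "A_cropped.las"
  ]
}
```

You would run it with `pdal pipeline pipeline.json`. A custom filter of the kind described above would slot into the middle of such a pipeline, taking B.las as an option.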
I think the easiest way forward is to create a new PDAL kernel that does exactly what you need.
I want to associate one QC project with another (e.g., manual testing and automation testing). I use QC 11.00
I would like to know what kind of association there can be between two QC projects (on the same domain), so I do not have to maintain two projects and then copy paste what I need e.g. common repositories etc.
I'm not sure that you can do this. A project in QC is supposed to be a self-contained entity, that is, there is no way (that I know of) that you can automatically move data between projects.
Sure, you can copy and paste data, as well as create a project with another one as base, but that is probably not what you want.
I would rather have manual testing and automation in the same project, which makes more sense I think. The point is that the project is supposed to identify the test object, rather than the test methodology - the latter can be done better in Test Plan where you specify a Test Type when you create your test.
This way, you will have all defects and test reports for your test object in the same project which will make it all the easier to track what is going on.
As a general rule, you want to keep all project data for one project in that project, and you want project data from that project to remain unique and separate from all other projects.
That being said... if you really want to do this (and are able to convince a QC subject-matter expert that it is a good idea), it should be a relatively simple matter to amend the workflow with additional code that interfaces with the other project.
Is it possible to place an item in workflow from the event system? The problem I am facing is that we would like to direct components to two different workflows based on what folder they are in (instead of what schema they use), which the Tridion UI doesn't seem to support. I was hoping to write an event that is triggered on check-in of a component, so I can then determine which folder that component is in and direct it into the appropriate workflow from the event.
I'm fine with creation of the event, I'm just totally lost on where to start as far as adding the item to workflow goes. I have looked at the TOM.NET API documentation but really haven't found anything that helps. Any assistance or examples would be appreciated.
As @Jeremy suggests, what you are actually trying to do is not possible.
However you can achieve the same outcome by making the second step of your workflow an automated decision which creates 2 separate workflow branches within one workflow process definition. Your automated decision can read the Org Item of the component and direct it accordingly.
This is not possible - a Component is only added to a WF process when it is saved and its Schema has an associated process definition.
I'm designing an information system (in ASP.NET) which will handle different modules once it's done.
I don't have enough time or money to build all of the modules at once, so I've decided to do a few modules first and continue with the rest later, when I have time or money.
Now the question is: is there a generic way to call a module from a list? For example, I would create a directory where I plan to drop the modules' .dlls, so when I make a new one I will put the new .dll there. On the other hand, I want to build something like a skeleton that generically calls all the modules in that directory via code, without having to rewrite the skeleton's code whenever new modules are dropped into the directory. Finally, I've planned that each module should have three layers: one for DB access, another for logic, and the last one for drawing the interface, so each module is independent of the others.
Is this possible? How should I do it? I've been looking but can't find anything yet.
Is there a better way you would suggest?
You would definitely need to create common interfaces that the modules implement, along with common data contracts. If you need to load DLLs dynamically, it is possible, but you will need to use reflection. Look here:
http://dranaxum.wordpress.com/2008/02/25/dynamic-load-net-dll-files-creating-a-plug-in-system-c/
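A minimal sketch of the approach, assuming a hypothetical `IModule` contract that every dropped-in dll implements:

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Reflection;

// Shared contract, defined in a common assembly that both the
// skeleton and every module reference. Names are illustrative.
public interface IModule
{
    string Name { get; }
    void Initialize();
}

public static class ModuleLoader
{
    // Scans a folder for dlls and instantiates every concrete
    // IModule implementation found via reflection.
    public static IEnumerable<IModule> LoadModules(string folder)
    {
        foreach (string dll in Directory.GetFiles(folder, "*.dll"))
        {
            Assembly assembly = Assembly.LoadFrom(dll);
            foreach (Type type in assembly.GetTypes()
                         .Where(t => typeof(IModule).IsAssignableFrom(t)
                                     && !t.IsAbstract && t.IsClass))
            {
                yield return (IModule)Activator.CreateInstance(type);
            }
        }
    }
}
```

The skeleton calls `ModuleLoader.LoadModules("Modules")` at startup and iterates the results; dropping a new dll into the folder requires no change to the skeleton.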
We're currently migrating our ASP Intranet to .NET and we started to develop this Intranet in one ASP.NET website. This, however, raised some problems regarding Visual Studio (performance, compile-time, ...).
Because our Intranet basically consists of modules, we want to separate our project into subprojects in Visual Studio (each module being a subproject).
This raises also some problems because the modules have references to each other.
Module X uses Module Y and vice versa... (circular dependencies).
What's the best way to develop such an Intranet?
I'll give an example because it's difficult to explain.
We have a module to maintain our employees. Each employee has different documents (a contract, documents created by the employee, ...).
All documents inside our Intranet are maintained by a document module.
The employee-module needs to reference the document-module.
What if in the future I need to reference the employee-module in the document-module?
What's the best way to solve this?
It sounds to me like you have two problems.
First you need to break the business-oriented functionality of the system down into cohesive parts; in terms of object-oriented design there are a few principles which should guide your thinking:
Common Reuse Principle
Common Closure Principle
The idea is that things which are closely related, to the extent that if one needs to be changed they all are likely to need to be changed, should be packaged together.
Single Responsibility Principle
Don't try to have a component do too much.
I think you also need to look at your dependency structure more closely - as soon as you start getting circular references it's probably a sign that you haven't broken the various "things" apart correctly. Maybe you need to understand the problem domain better? It's a common problem - well, not so much a problem as simply a part of designing complex systems.
Once you get this sorted out it will make the second part much easier: system architecture and design.
Luckily there's already a lot of existing material on plugins, try searching by tag, e.g:
https://stackoverflow.com/questions/tagged/plugins+.net
https://stackoverflow.com/questions/tagged/plugins+architecture
Edit:
Assets is defined in a different module than employees. But the Assets-class defines a property 'AssignedTo' which is of the type 'Employee'. I've been breaking my head how to disconnect these two
There are two parts to this, and you might want to use both:
Using a Common Layer containing simple data structures that all parts of the system can share.
Using Interfaces.
Common Layer / POCO's
POCO stands for "Plain Old CLR Object"; the idea is that POCOs are simple data structures that you can use for exchanging information between layers - or in your case, between modules that need to remain loosely coupled. POCOs don't contain any business logic. Treat them like you'd treat the String or DateTime types.
So rather than referencing each other, the Asset and Employee classes reference the POCOs.
The idea is to define these in a common assembly that the rest of your application / modules can reference. The assembly which defines these needs to be devoid of unwanted dependencies - which should be easy enough.
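A minimal sketch of the POCO approach, using illustrative names (none of these types are from the original question):

```csharp
// Defined in a shared "Common" assembly with no other dependencies.
// Pure data, no business logic.
public class EmployeeInfo
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// In the Assets module: reference the POCO, not the Employee class
// from the employee module, so the circular reference disappears.
public class Asset
{
    public string Tag { get; set; }
    public EmployeeInfo AssignedTo { get; set; }
}
```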
Interfaces
This is pretty much the same, but instead of referring to a concrete object (like a POCO) you refer to an interface. These interfaces would be defined in a similar fashion to the POCOs described above (common assembly, no dependencies).
You'd then use a Factory to go and load up the concrete object at runtime. This is basically Dependency Inversion.
So rather than referencing each other, the Asset and Employee classes reference the interfaces, and concrete implementations are instantiated at runtime.
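A sketch of the interface-plus-factory variant; `IAssignable` comes from the question, but the factory and its members are illustrative assumptions:

```csharp
using System;

// In the shared assembly: the contract only, no implementation.
public interface IAssignable
{
    int Id { get; }
    string DisplayName { get; }
}

// In the Assets module: depends only on the interface.
public class Asset
{
    public string Tag { get; set; }
    public IAssignable AssignedTo { get; set; }
}

// A minimal registration point. At startup the employee module
// registers a resolver; the Assets module resolves by id without
// ever referencing the employee module directly.
public static class AssignableFactory
{
    private static Func<int, IAssignable> _resolver;

    public static void Register(Func<int, IAssignable> resolver)
    {
        _resolver = resolver;
    }

    public static IAssignable Resolve(int id)
    {
        return _resolver(id);
    }
}
```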
This article might be of assistance for both of the options above: An Introduction to Dependency Inversion
Edit:
I've got the following method: GetAsset(int assetID). In this method, the property asset.AssignedTo (of type IAssignable) is filled in. How can I assign this properly?
This depends on where the logic sits, and how you want to architect things.
If you have a Business Logic (BL) Layer - which is mainly a comprehensive Domain Model (DM) (of which both Asset and Employee are members) - then it's likely Assets and Employees would know about each other, and when you made a call to populate the Asset you'd probably get the appropriate Employee data as well. In this case the BL / DM is asking for the data - not isolated Asset and Employee classes.
In this case your "modules" would be another layer that was built on top of the BL / DM described above.
A variation on this is that inside GetAsset() you only get asset data, and at some point after that you get the employee data separately. No matter how loosely you couple things, there will have to be some point at which you define the connection between Asset and Employee, even if it's just in data.
This suggests some sort of registry pattern: a place where "connections" are defined, and any time you deal with a type that is IAssignable you know you need to check the registry for any possible assignments.
I would look into creating interfaces for your plug-ins that way you will be able to add new modules, and as long as they follow the interface specifications your projects will be able to call them without explicitly knowing anything about them.
We use this to create plug-ins for our application. Each plug-in is encapsulated in a user control that implements a specific interface. We add new modules whenever we want, and because they are user controls we can store the path to the control in the database and use LoadControl to load them. We use the interface to manipulate them, so the page that loads them doesn't need to know anything about what they do.
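A sketch of that hosting pattern, assuming WebForms; the interface name, control path, and placeholder are illustrative, and in practice the path would come from the database as described:

```csharp
using System;
using System.Web.UI;

// Shared contract every plug-in user control implements.
public interface IPluginControl
{
    void Configure(string settings);
}

public partial class HostPage : Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // Illustrative hard-coded path; read from the database in practice.
        string path = "~/Plugins/ReportsModule.ascx";

        Control control = LoadControl(path);
        PlaceHolderMain.Controls.Add(control);

        // The host page only knows the interface, never the concrete type.
        IPluginControl plugin = control as IPluginControl;
        if (plugin != null)
        {
            plugin.Configure(string.Empty);
        }
    }
}
```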