I am new to Alfresco.
I am working on a project which uses Alfresco as the document repository.
There is a requirement to create an approval workflow for the documents.
We are still undecided about using the Activiti engine embedded in Alfresco for the implementation.
If you can help me with the following questions, it will be very helpful in making that decision:
1) The rules for approval will change dynamically. Can a rule engine such as Drools be integrated with Activiti in Alfresco?
If yes, how?
2) The created tasks have to be shown in an external application.
How feasible is it to query the Alfresco database from an external application? Can SQL be used for this, or do we need to rely on the APIs?
3) How can I check the database schema of Activiti inside Alfresco?
4) If a rules file can be used for decision making, can that rules file be changed dynamically from the external application? If yes, how?
These questions may sound silly, but they are driving me crazy.
Please help.
Thanks,
Abhishek
If you want to write applications that use Activiti outside of Alfresco as well, I think you should go with standalone Activiti and integrate it with Alfresco only where you need to upload documents to the repository and the like. The level of integration between Alfresco and Activiti is quite deep.
Regarding 2) and 3): if you really want to access the Activiti tables without going through the Alfresco APIs, yes, it is possible. The tables in the database are the same as described in the Activiti documentation (http://www.activiti.org/userguide/#database.tables.explained).
1) Although jBPM is a fork of Drools, I don't think Alfresco has native support for that. I'm not completely sure about it, though; we have always used jBPM or Activiti.
2) You should use the Alfresco workflow APIs to achieve what you want. That is at least the correct and highly recommended way of developing workflows in Alfresco (see the sketch after this list for calling the workflow REST API from an external application).
3) You can do that by opening the activiti-engine jar in the Alfresco package. Inside you should find a file named activiti.mysql.create.sql, which may help you. If you follow this link you can also find some useful commands for inspecting the Alfresco and Activiti tables.
4) Please define what you mean by a rules file. If you are talking about modifying Activiti workflow files: no, that shouldn't be done once they are deployed. If you are talking about ending tasks or taking actions in a workflow, you should use the APIs for that.
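To make item 2 concrete, here is a minimal sketch of an external Java client listing workflow tasks through Alfresco's REST (Web Script) layer instead of SQL. The endpoint shown existed in the Alfresco 4.x line, but the host, credentials and the "authority" filter are assumptions you should check against your version.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.Base64;

public class ListAlfrescoTasks {

    public static void main(String[] args) throws Exception {
        // Workflow task-instances web script; host, port and filter value are assumptions.
        URL url = new URL("http://localhost:8080/alfresco/service/api/task-instances?authority=admin");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();

        // Basic auth for brevity; a real client would negotiate an Alfresco ticket.
        String auth = Base64.getEncoder().encodeToString("admin:admin".getBytes("UTF-8"));
        conn.setRequestProperty("Authorization", "Basic " + auth);

        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), "UTF-8"))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line); // JSON list of task instances
            }
        }
    }
}
```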
Not sure about embedding Drools, but you have two types of code-based tasks in Activiti: a script task and a service task. Script tasks are probably easier, as they are JavaScript by default, but you can also load other script engines like Groovy if you need to. Service tasks are Java classes that need to be deployed onto the Alfresco classpath to be used.
I would definitely stick to the APIs, as was mentioned.
Alch3mi5t answered this, but again I would steer clear of querying the database directly.
I would use a service task for this, as you can call out from your Java code into another system.
If you use Activiti Explorer to add a service task to a diagram, you'll notice a property called 'Service class', which is relevant when the selected type is 'java class'. You would put the fully qualified class name here, e.g. org.example.activiti.CustomLogic. This class has to implement one of a couple of interfaces, typically JavaDelegate (whose execute() method receives a DelegateExecution) or the lower-level ActivityBehavior (which works with an ActivityExecution). Either way, you implement an execute() method in which you put your logic and any external callouts.
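As a rough illustration only, a service task class using plain Activiti's JavaDelegate interface might look like the sketch below; inside Alfresco you may prefer Alfresco's own delegate base classes, and the class and variable names here are made up.

```java
package org.example.activiti;

import org.activiti.engine.delegate.DelegateExecution;
import org.activiti.engine.delegate.JavaDelegate;

public class CustomLogic implements JavaDelegate {

    @Override
    public void execute(DelegateExecution execution) {
        // Read a process variable set earlier in the workflow (variable name is hypothetical).
        String documentName = (String) execution.getVariable("documentName");

        // Put your custom logic or the call-out to an external system here.
        System.out.println("Processing " + documentName);

        // Write results back as process variables for later tasks to use.
        execution.setVariable("approved", Boolean.TRUE);
    }
}
```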
To deploy, compile it into a jar (in Eclipse, 'Create deployment artefacts') and copy the jar onto Alfresco's classpath, e.g. webapps/alfresco/WEB-INF/lib, then restart Alfresco.
According to the Activiti documentation, you can use Drools rules in a "Business Rule" task:
A Business Rule task is used to synchronously execute one or more rules. Activiti uses Drools Expert, the Drools rule engine to execute business rules. Currently, the .drl files containing the business rules have to be deployed together with the process definition that defines a business rule task to execute those rules. This means that all .drl files that are used in a process have to be packaged in the process BAR file like for example the task forms
I want to share the pact file from the consumer to Bitbucket so that the provider can use it from the same location.
Has anybody implemented this?
Thanks in advance.
It's worth noting that it's very much a non-standard way of doing it and I would highly recommend you don't do it this way (see https://docs.pact.io/pact_nirvana/step_4/). You're going to have to build a lot of the workflows again, and that will require investment in building tooling and coming up with ideas to evolve the contract. At some point, you'll be rebuilding key features the Pact Broker already has and would be better off just running that or using a hosted service like pactflow.io.
So, without a pact broker, you don't get the can-i-deploy tool, versioning, environment management and all of these powerful workflows.
With that said, if you do want to use Bitbucket:
You'll need to create a manual process (i.e. a script) to upload the pact from the consumer and download it on the provider side.
You'll only want to upload from CI, so that's easier to control.
For provider verification, you'll also want to be able to run this on a laptop, so you'll need a standard approach for pulling down the correct contract to verify both locally and on CI (a pact-jvm sketch follows this list). Every team member will need credentials to read from that repo.
Pact doesn't have a mechanism to pull contracts over git protocols, but you could potentially add that feature to the language implementations you need it for.
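For illustration, here is a minimal sketch of provider verification with pact-jvm's JUnit 4 support, reading pact files from a local directory that your checkout script has pulled from the Bitbucket repo. The package names follow the pact-jvm 3.x line, and the provider name, directory and port are assumptions.

```java
package org.example.pact;

import au.com.dius.pact.provider.junit.PactRunner;
import au.com.dius.pact.provider.junit.Provider;
import au.com.dius.pact.provider.junit.loader.PactFolder;
import au.com.dius.pact.provider.junit.target.HttpTarget;
import au.com.dius.pact.provider.junit.target.Target;
import au.com.dius.pact.provider.junit.target.TestTarget;
import org.junit.runner.RunWith;

@RunWith(PactRunner.class)
@Provider("order-service")                 // hypothetical provider name
@PactFolder("build/pacts-from-bitbucket")  // directory populated by your checkout script
public class OrderServicePactTest {

    // The provider under test, assumed to be running locally on this port.
    @TestTarget
    public final Target target = new HttpTarget("localhost", 8080);

    // @State methods would go here if your pacts declare provider states.
}
```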
Can anyone explain Java callouts? A little help will do. I have several doubts about where to add the expressions and message-flow jars, and where to add my custom jar.
Can I access the resources/java folder directly, and can I use it to store my data?
First, check the Apigee docs:
Customize an API using Java
http://apigee.com/docs/api-services/content/customize-api-using-java
Keep in mind Java Callouts are only supported in the paid, Apigee Edge product, not the free Developer platform.
As you decide how to use Java, you should consider this basic hierarchy of policy management:
Policy configuration first: Apigee policy configurations are in broad use, tested daily by clients, and the most performant option.
JavaScript callout: for things you can't do in a standard policy there is JavaScript -- keep in mind this is "compiled JavaScript", which means that when you deploy your project the JS gets compiled by the Java Rhino engine and then runs like native code. It is very fast, very scalable, and very easy to manage, as your code is all in plain text files.
Java: you need a pretty compelling reason to use Java. The most common cases are where you have some complex connection that needs to be negotiated with custom encryption schemes, or where you are manipulating binary content. While performant, it is the most difficult code to manage (you upload compiled jars, so if someone takes over your work, the source code lives in a separate place from your deployment bundle), and it is the most difficult to debug in the event of a failure.
To your specific question: All Apigee variables are available in Java and Java gives you pretty much god-like powers on the local server where the code is executed. Keep in mind, Apigee's physical architecture is distributed -- your jar may run on different servers for different API calls, so any persistent data (that you might want to store locally) should really be put into Key Value Map and read as needed. Keep your API development as stateless as possible.
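As a sketch of what such a Java callout can look like (using the standard com.apigee.flow callout interface; the class name, flow variables and header are made up), the policy class reads and writes flow variables via the MessageContext:

```java
package com.example.apigee;

import com.apigee.flow.execution.ExecutionContext;
import com.apigee.flow.execution.ExecutionResult;
import com.apigee.flow.execution.spi.Execution;
import com.apigee.flow.message.MessageContext;

public class HeaderStampCallout implements Execution {

    @Override
    public ExecutionResult execute(MessageContext messageContext,
                                   ExecutionContext executionContext) {
        try {
            // Read a flow variable set earlier in the proxy and stamp it onto a request header.
            String clientId = (String) messageContext.getVariable("client_id");
            messageContext.setVariable("request.header.X-Client-Id",
                    clientId == null ? "unknown" : clientId);
            return ExecutionResult.SUCCESS;
        } catch (Exception e) {
            // Expose the failure to the fault flow instead of swallowing it.
            messageContext.setVariable("callout.error", e.getMessage());
            return ExecutionResult.ABORT;
        }
    }
}
```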
Hope that helps.
We are going to build a custom workflow solution for our clients, and most of the time we need to integrate it with their existing systems, which predominantly use Microsoft technologies such as Exchange Server and SharePoint.
Clients expect to use workflow to automate tasks from HR processes and product inventory management. They may or may not have their own CRM, but some of them are already using SharePoint for some processes, and they are willing to move away from it if we can offer a more robust, flexible and economical solution.
I found Alfresco and Activiti very promising, but I am not sure which I should adopt. From my research, Alfresco is a full-blown ECM platform with cloud and workflow support (using Activiti as the engine), whereas Activiti is the workflow engine on its own.
How should I judge when to go for Alfresco and when for Activiti alone?
TIA
Alfresco is, first and foremost, a repository. If you need a place to store files (either end-users storing files or applications storing files) you should consider using Alfresco as your repository for those files.
As you point out, Alfresco has embedded the Activiti workflow engine. This includes an abstracted service layer that wraps the engine so that, for many operations, when working with Java or server-side JavaScript, you don't need to know much about Activiti. (Obviously you do need to know how to define BPMN 2.0 process definitions to create the workflow).
So if you need to store files and you need to route those files in a business process, Alfresco's embedded workflow engine makes it very easy to do that.
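To give a feel for that abstraction, here is an illustrative sketch (not production code) of starting the embedded Activiti review workflow through Alfresco's Java WorkflowService; in practice you would inject the service via Spring and supply more parameters, such as an assignee.

```java
import java.io.Serializable;
import java.util.HashMap;
import java.util.Map;

import org.alfresco.repo.workflow.WorkflowModel;
import org.alfresco.service.cmr.repository.NodeRef;
import org.alfresco.service.cmr.workflow.WorkflowDefinition;
import org.alfresco.service.cmr.workflow.WorkflowService;
import org.alfresco.service.namespace.QName;

public class StartReviewWorkflow {

    private WorkflowService workflowService; // injected by Spring in a real module

    public void start(NodeRef workflowPackage) {
        // Look up the out-of-the-box review definition by its engine-qualified name.
        WorkflowDefinition definition =
                workflowService.getDefinitionByName("activiti$activitiReview");

        Map<QName, Serializable> params = new HashMap<>();
        // The workflow package node carries the documents being routed.
        params.put(WorkflowModel.ASSOC_PACKAGE, workflowPackage);

        workflowService.startWorkflow(definition.getId(), params);
    }
}
```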
If your primary use case is more general than that (ie, you don't always need to route files in a business process) then you may want to consider a standalone workflow engine. Alfresco can still participate in those workflows, of course, but if your primary use case isn't about files, why go to the trouble of setting up and maintaining a document repository?
In the end, there is no hard and fast rule here. The beauty is that both Alfresco and Activiti are open source. You can try them out, dig into the details, and decide for yourself what is the best fit.
How do I browse a Jackrabbit repository using a Spring MVC webapp?
How do I map incoming URL requests in the Spring web controllers to nodes in the repository? I'd like users to be able to open a Word document in OpenOffice or Word by opening a URL like the following, and to save back to it via WebDAV.
http://localhost:8080/my-app/my-doc.doc
Thanks in advance for any ideas.
Éamonn
The Jackrabbit repository and the associated JSR standard for Java Content Repositories on their own provide a fairly low-level persistence API, which you can use to build repositories for domain objects by mapping the data to repository structures such as JCR nodes and properties. You use the JCR API in the javax.jcr.* package to manipulate the repository (and for maximum portability). In a sentence, you can use Jackrabbit to replace your database.
A quick google search showed that there are indeed projects that aim to provide similar convenience wrappers to the ones you probably know and love for JDBC and Hibernate, only for JCRs. I found for example the Spring Modules project: http://java.net/projects/springmodules/ which was unfortunately last updated about two years ago, so it is still on JCR 1.0. For sample usage take a look at http://java.net/projects/springmodules/sources/svn/content/trunk/samples/jcr/src/org/springmodules/examples/jcr/JcrService.java?rev=2110
Still, you could probably write your own JCR2Template without a lot of effort, and encapsulate repetitive tasks such as connection and exception handling by using the Template Method pattern.
So as for the request mapping, you can run the JCR on a separate server, just like you would with a relational database, and connect to it via RMI. Here's an example: http://dev.day.com/content/docs/en/crx/current/developing/accessing_jcr_connectors.html
I would consider this the "clean" way to use a JCR in Spring MVC applications.
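As a rough sketch of that approach (the repository URL and credentials are assumptions, and JcrUtils comes from jackrabbit-jcr-commons), a controller or service could resolve an incoming request path to a node with plain javax.jcr calls:

```java
import javax.jcr.Node;
import javax.jcr.Repository;
import javax.jcr.Session;
import javax.jcr.SimpleCredentials;

import org.apache.jackrabbit.commons.JcrUtils;

public class JcrLookup {

    public static void main(String[] args) throws Exception {
        // JcrUtils can resolve remoting URLs (RMI/DavEx) exposed by a Jackrabbit server.
        Repository repository =
                JcrUtils.getRepository("http://localhost:8080/jackrabbit/server");
        Session session = repository.login(
                new SimpleCredentials("admin", "admin".toCharArray()));
        try {
            // Map an incoming request path such as /my-doc.doc to a repository node.
            Node node = session.getNode("/my-doc.doc");
            System.out.println("Found node: " + node.getPath());
        } finally {
            session.logout();
        }
    }
}
```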
As for the WebDAV saving part... I know Jackrabbit does indeed support the mounting of Repositories as WebDAV drives, but I don't really have any experience with it and I honestly can't imagine a way to tell Word to upload a file upon edit somewhere... But I am not a Word expert at all, sorry....
Now, the Apache Sling framework, on the other hand, provides an interesting approach to building RESTful applications that integrate well with the repository model and some higher-level abstractions of the repository structure. The way servlets are resolved in Sling, however, is completely different from plain Spring MVC (see http://dev.day.com/content/ddc/blog/2008/07/cheatsheet/_jcr_content/par/download/file), so it would take a bit of work to reconcile the two approaches.
Hope there's some info in there you can use.
Cheers,
Johannes
How can we extend the Alfresco database? I need to add new tables to the existing database structure.
Does Alfresco support this?
Thanks in advance,
sri
I think changing the Alfresco database model is never a good solution. Some Alfresco upgrades are applied using schema upgrade scripts, and that could get messy.
Have you tried extending the Alfresco content model?
Alfresco supports a number of data types, allowing you to persist data, and the Web Script framework lets you manipulate all of the data in your content model.
If your data is not suitable for a "content model", I think you should create a separate database to hold it.
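For instance, a Java-backed Web Script controller can expose data from your content model over HTTP. The sketch below is illustrative only: the custom property, URL argument and bean wiring are assumptions, and the web script descriptor and FreeMarker template are omitted.

```java
import java.util.HashMap;
import java.util.Map;

import org.alfresco.service.cmr.repository.NodeRef;
import org.alfresco.service.cmr.repository.NodeService;
import org.alfresco.service.namespace.QName;
import org.springframework.extensions.webscripts.Cache;
import org.springframework.extensions.webscripts.DeclarativeWebScript;
import org.springframework.extensions.webscripts.Status;
import org.springframework.extensions.webscripts.WebScriptRequest;

public class GetInvoiceStatusWebScript extends DeclarativeWebScript {

    private NodeService nodeService; // injected via Spring

    public void setNodeService(NodeService nodeService) {
        this.nodeService = nodeService;
    }

    @Override
    protected Map<String, Object> executeImpl(WebScriptRequest req,
                                              Status status, Cache cache) {
        // nodeRef passed as a URL argument, e.g. ?nodeRef=workspace://SpacesStore/...
        NodeRef nodeRef = new NodeRef(req.getParameter("nodeRef"));

        // Hypothetical custom property from an extended content model.
        QName statusProp = QName.createQName("http://example.com/model/invoice/1.0", "status");

        Map<String, Object> model = new HashMap<>();
        model.put("status", nodeService.getProperty(nodeRef, statusProp));
        return model; // rendered by the accompanying FreeMarker template
    }
}
```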
Well, it is just a database. So you can create as many new tables as you want just like you would in any other database.
Obviously Alfresco won't use them because it doesn't know about them, but you can query those tables as you like.
The advice from Alfresco engineers is: do not touch the Alfresco database. Please take a look at this page:
http://forums.alfresco.com/forum/general/non-technical-alfresco-discussion/where-alfreso-user-details-are-stored-i-alfresco
Changing the Alfresco database is not recommended; the content model is the better way. If such a requirement is mandatory, then:
You can use Spring with Hibernate for the database connection. The properties required for connecting to the database are already declared inside alfresco-global.properties, which is located in "tomcat/shared/classes/".
For Spring bean injection you can declare beans inside any file whose name ends with "-context.xml" and resides in the "tomcat/shared/classes/alfresco/extension" folder.
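If you do go down that road, a minimal sketch (not a recommendation) of reusing the standard db.* connection properties from alfresco-global.properties for plain JDBC access to your own tables might look like this; whether the file is visible on your classpath depends on your setup.

```java
import java.io.InputStream;
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class CustomTableAccess {

    public static Connection openConnection() throws Exception {
        Properties props = new Properties();
        // tomcat/shared/classes is on the shared classpath, so the file can be loaded by name.
        try (InputStream in = CustomTableAccess.class.getClassLoader()
                .getResourceAsStream("alfresco-global.properties")) {
            props.load(in);
        }
        // db.driver, db.url, db.username and db.password are the standard Alfresco property names.
        Class.forName(props.getProperty("db.driver"));
        return DriverManager.getConnection(
                props.getProperty("db.url"),
                props.getProperty("db.username"),
                props.getProperty("db.password"));
    }
}
```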
I would still recommend developers use the content model.
Depending on your use case, you may or may not need to work directly with a database. I think your use case should fall into one of the following:
Use case 1:
You need to set up some metadata on folders and/or documents. You may have to nest multiple levels of nodes with different sets of custom metadata at each level.
You probably need to extend the Alfresco models in order to define custom document/folder models that best suit your business requirements. Please check jpotts' tutorial to learn how to do so.
Use case 2:
You need to define multiple lists with different sets of properties; those lists may or may not be linked to some content in your Alfresco repo.
You probably need to learn more about Alfresco sites' datalists. Once you do, you may be interested in learning how to extend the out-of-the-box Alfresco content model (jpotts' tutorial would be a good starting point), and then you should check this tutorial in order to learn how to manage datalists in standalone Aikau apps/Share pages.
Use case 3:
You need to leverage a relational database in order to implement complex business logic that does not fall into any of the use cases defined above.
Are you sure you do not want to build a brand-new app using a technology you are familiar with and have it communicate with Alfresco using the RESTful API/CMIS/...?
Are you sure Alfresco is THE way to go? If so, and you still want to keep your custom, complex business model in a bare relational database:
Please consider using a separate database instance/database for your custom extension. This way you can be sure that any new patch/upgrade to Alfresco that changes the database structure won't affect your extension (or at least won't give you a hard time upgrading it).
If you are really tied to a single database instance/schema, you will probably want to prefix your table names with a distinctive prefix and hope that no future Alfresco upgrade introduces new tables with the same prefix. You also need to manage your database configuration wisely (connection pools, etc.) so that neither your Alfresco instance nor your custom extension is starved; make sure you close the connections you open.
Alfresco and Activiti come with their own database, and it is not good to access that database directly. Doing so can cause unexpected locking behavior or exhaust the connection pools on the DB, which leads to performance problems and other kinds of issues. If you want to update Alfresco or Activiti, do it through the APIs.