ArchestrA: list all engines on a platform (enumerate)

I need to write a script that prints a list of all the engines on an ArchestrA platform.
I've tried creating a script at the platform level, but I haven't found any information about how to obtain the list of engines the way the IDE does it, for instance.
Does anybody have an idea how to do this?

AFAIK, the IDE uses SQL to populate the tree. There is a table with all the properties of each object, including where the object is running.

It is not possible to use the exact same mechanism as the ArchestrA IDE, but there are several options:
1. Externally build and save the Galaxy hierarchy using GRAccess. This is the recommended technique.
2. Connect directly to the Galaxy database and build the hierarchy. This is not an officially supported technique, but it is very fast.
3. Build the list of Engines through scripting, either using MXAccess or possibly from an ArchestrA script.
For #3 (MXAccess/scripting), it is possible to use relative tag references to find the engines:
MyPlatform._Engines[-1] -> iterate through to get the engines
${EngineName}.Objects[-1] -> if needed, lists the objects in the engine
MyPlatform.Tagname -> name of Platform
MyPlatform.Host -> host name of Platform
If you try the ArchestrA scripting route, you will need to use the Indirect type and BindTo(). Be aware of the asynchronous nature of the responses: the script will need delays and polling to check whether the data is available. Communication within the same engine is usually fairly quick, though.

How to create a Gremlin custom predicate in JavaScript

I am trying to create a regex custom predicate for my query.
I have seen that this can be done from the Gremlin console as below:
f = {x,y -> x ==~ y}
g.V().has('desc',test(f,/^Dal.*/)).values('desc')
However, I am wondering how I can create a custom predicate in a JavaScript client?
I am using the npm package (https://www.npmjs.com/package/gremlin) and TypeScript.
The example you found works because, when working with a local (embedded) graph such as a TinkerGraph, you can essentially create custom classes and closures in Java and/or Groovy. You can think of this as extending Gremlin locally.
However, the Gremlin JavaScript client is designed to work with a remote graph. Many hosted graph providers limit, or block entirely, the use of such code for security reasons. If you have control over the Gremlin Server you are connecting to, or your provider allows closures/lambdas, then you may be able to take advantage of that; see [1] and the sketch after the links below.
If you control the Gremlin Server you are using, you could also just add the scripts that create the custom predicates to its configuration files. For completeness, to help others who find this post, I have included a link to the discussion of predicates that I believe you are referring to in your question [2].
[1] https://tinkerpop.apache.org/docs/current/reference/#gremlin-javascript-lambda
[2] http://www.kelvinlawrence.net/book/PracticalGremlin.html#pred
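
If your server does permit Groovy closures, one way to use them from a remote client is to submit the whole traversal as a script string, so the closure is compiled on the server rather than in the client process. Below is a minimal sketch using the TinkerPop Java driver (the JavaScript client you are using exposes a similar Client with a submit() method); the localhost:8182 endpoint is an assumption, and the 'desc' property comes from the question:

    import java.util.List;
    import org.apache.tinkerpop.gremlin.driver.Client;
    import org.apache.tinkerpop.gremlin.driver.Cluster;
    import org.apache.tinkerpop.gremlin.driver.Result;

    public class RegexPredicateViaScript {
        public static void main(String[] args) throws Exception {
            // Connect to a Gremlin Server that still allows Groovy script evaluation.
            Cluster cluster = Cluster.build("localhost").port(8182).create();
            Client client = cluster.connect();
            try {
                // The closure travels inside the script string and is compiled
                // server-side; a provider that blocks lambdas will reject this.
                String script = "f = {x, y -> x ==~ y}\n"
                        + "g.V().has('desc', test(f, /^Dal.*/)).values('desc')";
                List<Result> results = client.submit(script).all().get();
                results.forEach(r -> System.out.println(r.getString()));
            } finally {
                cluster.close();
            }
        }
    }

If the provider rejects script evaluation, the submit() call fails on the server side, which is exactly the limitation described above.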

How to declare namespaces in one place in MarkLogic and then import/invoke them in various XQuery files?

I have more than 50 namespaces used in my MarkLogic APIs, and the count can keep increasing. I am looking for a way to store them in the database, or to add them to the app server, and then invoke them in all the XQuery files. Until now, they have been maintained manually in every file whenever a new namespace is added.
Yes! If you go to the Admin Interface (port 8001) and look under either your Group or your App Server, you'll see a Namespaces section on the left where you can enter your commonly used namespaces. After that, they will simply exist in all your code automatically.

Custom report in Alfresco?

Currently I am generating a report of files uploaded within a given time range.
I am getting all files and folders, iterating over the result, and checking the created date one by one. That takes too much time, approximately 8 minutes to return the results. Can anyone tell me whether there is an Alfresco reporting API that I can use, or how to fetch the result using Solr?
I like to follow an approach which is maybe not really orthodox. Usually you don't want to report on all documents, only on documents with a specific type or aspect. So what I do is create a Java behaviour on onCreate, onUpdate and onDelete that updates a custom database with only the metadata I'm interested in (a sketch follows after this answer). Then I can connect any OOTB reporting tool such as Pentaho, Jasper or Tableau. You of course have some other traditional alternatives, such as:
Using this module developed by a community member: http://fcorti.com/alfresco-audit-analysis-reporting/
Or using the module provided by Alfresco: http://docs.alfresco.com/analytics/concepts/analytics-using.html
SOLR/Lucene is not an option, and querying the DB directly is not an option either (performance-wise).
I would suggest using one of the options available (AAAR, for instance) or developing something of your own following the same principles.
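
A minimal sketch of the behaviour approach, assuming Spring injection; ReportDao is a hypothetical interface standing in for whatever custom reporting database you use:

    import java.io.Serializable;
    import java.util.Map;

    import org.alfresco.model.ContentModel;
    import org.alfresco.repo.node.NodeServicePolicies;
    import org.alfresco.repo.policy.Behaviour;
    import org.alfresco.repo.policy.JavaBehaviour;
    import org.alfresco.repo.policy.PolicyComponent;
    import org.alfresco.service.cmr.repository.ChildAssociationRef;
    import org.alfresco.service.cmr.repository.NodeRef;
    import org.alfresco.service.cmr.repository.NodeService;
    import org.alfresco.service.namespace.QName;

    public class ReportingBehaviour implements NodeServicePolicies.OnCreateNodePolicy {

        // Hypothetical DAO for the external reporting database.
        public interface ReportDao {
            void insert(String nodeId, String name, Serializable created);
        }

        private PolicyComponent policyComponent; // injected via Spring
        private NodeService nodeService;         // injected via Spring
        private ReportDao reportDao;             // injected via Spring

        public void init() {
            // Fire after commit so only persisted nodes reach the reporting DB.
            policyComponent.bindClassBehaviour(
                    NodeServicePolicies.OnCreateNodePolicy.QNAME,
                    ContentModel.TYPE_CONTENT,
                    new JavaBehaviour(this, "onCreateNode",
                            Behaviour.NotificationFrequency.TRANSACTION_COMMIT));
        }

        @Override
        public void onCreateNode(ChildAssociationRef childAssocRef) {
            NodeRef nodeRef = childAssocRef.getChildRef();
            Map<QName, Serializable> props = nodeService.getProperties(nodeRef);
            // Copy only the metadata the report needs to the custom DB.
            reportDao.insert(nodeRef.getId(),
                    (String) props.get(ContentModel.PROP_NAME),
                    props.get(ContentModel.PROP_CREATED));
        }

        // Spring setters omitted for brevity.
    }

The onUpdate and onDelete cases bind the same way, against NodeServicePolicies.OnUpdatePropertiesPolicy and NodeServicePolicies.BeforeDeleteNodePolicy.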
I did a little investigation on this and found the link below.
http://docs.alfresco.com/4.0/tasks/audit-recording-values.html
I think you can use the AuditService in Alfresco to get this done. There are a few Alfresco web services (related to audit) already available which allow you to filter the response. In case you need to customize it, you can create a webscript and use the AuditService in it (a sketch follows below).
You can use the URL below to browse all your Alfresco web services.
http://localhost:8080/alfresco/service/index
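
A minimal sketch of querying the AuditService from Java (for example inside such a webscript), assuming auditing is enabled; the use of the built-in alfresco-access audit application and the cap of 100 entries are assumptions:

    import java.io.Serializable;
    import java.util.Map;

    import org.alfresco.service.cmr.audit.AuditQueryParameters;
    import org.alfresco.service.cmr.audit.AuditService;

    public class CreatedFilesReport {

        private AuditService auditService; // injected via Spring

        public void printEntries(long fromMillis, long toMillis) {
            AuditQueryParameters params = new AuditQueryParameters();
            // "alfresco-access" is the audit application shipped with Alfresco.
            params.setApplicationName("alfresco-access");
            params.setFromTime(fromMillis);
            params.setToTime(toMillis);
            params.setForward(true);

            auditService.auditQuery(new AuditService.AuditQueryCallback() {
                @Override
                public boolean valuesRequired() {
                    return true; // we want the recorded values, not just entry ids
                }

                @Override
                public boolean handleAuditEntry(Long entryId, String applicationName,
                        String user, long time, Map<String, Serializable> values) {
                    System.out.println(time + " " + user + " " + values);
                    return true; // keep iterating
                }

                @Override
                public boolean handleAuditEntryError(Long entryId, String errorMsg,
                        Throwable error) {
                    return true; // skip bad entries and continue
                }
            }, params, 100); // cap at 100 entries
        }
    }

Because the audit table is indexed by time, filtering on a time window this way avoids iterating every node the way the original approach does.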

Programmatically execute Alfresco rules

I have created a rule programmatically in Java and attached it to a space. It works fine whenever a new document is inserted into that space. But what if I already have some documents uploaded to the space and want to run the rule on them? I know I can do this via Explorer, as described in the following article.
http://docs.alfresco.com/4.0/index.jsp?topic=%2Fcom.alfresco.enterprise.doc%2Ftasks%2Flibrary-folder-rules-run.html
But I want to achieve the same using Java code.
Can anyone please suggest a solution?
I am using Alfresco Enterprise 4.0.2.
If you are into Java, I would recommend binding behaviours to policies instead:
http://wiki.alfresco.com/wiki/Policy_Component#Binding_Behaviour_to_a_Policy
My personal experience is that, as a developer, you get much more control over events in the repository using behaviours (as opposed to rules). But maybe that's just me :)
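
That said, if you do want to replay the rules already attached to the space against its existing documents, here is a minimal sketch using the public RuleService and ActionService; running each rule's action with condition checking enabled is an assumption about the desired behaviour:

    import org.alfresco.service.cmr.action.ActionService;
    import org.alfresco.service.cmr.repository.ChildAssociationRef;
    import org.alfresco.service.cmr.repository.NodeRef;
    import org.alfresco.service.cmr.repository.NodeService;
    import org.alfresco.service.cmr.rule.Rule;
    import org.alfresco.service.cmr.rule.RuleService;

    public class RunRulesOnExistingContent {

        private RuleService ruleService;     // injected via Spring
        private ActionService actionService; // injected via Spring
        private NodeService nodeService;     // injected via Spring

        public void runRules(NodeRef spaceRef) {
            // Fetch the rules attached to the space and replay each rule's
            // action against every existing child of the space.
            for (Rule rule : ruleService.getRules(spaceRef)) {
                for (ChildAssociationRef child : nodeService.getChildAssocs(spaceRef)) {
                    NodeRef nodeRef = child.getChildRef();
                    // checkConditions = true: only run when the rule's
                    // conditions match this node; last argument keeps it synchronous.
                    actionService.executeAction(rule.getAction(), nodeRef, true, false);
                }
            }
        }
    }

Driving the existing Rule objects through the ActionService keeps the behaviour consistent with what the rule already does for newly inserted documents.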

Better way to handle page that links to hundreds of binaries?

I've struggled to find a better solution for the following setup. I'm not actively working on this, but I know some who might appreciate other ways of handling it.
Setup:
Tridion-managed page has a single "linked list" component
Single component has component links to other components in Tridion
Linked-to components often link to multimedia component (mm)
An XSLT component template (XSLT CT) renders XML with above content and with links to PDF
XSL document() function used to grab embedded (linked-to) content, all content converted to XML nodes and attributes
TCMScriptAssistant namespace with publishBinary() publishes related PDF and other media
Page template just outputs the result of the CT
Business requirements:
improved publishing (last I worked on this, some of these files created a 2GB publishing transaction because of the PDFs)
published XML content file must reference the associated PDFs; hyperlinks work but identifiers might not help because of...
no Tridion content delivery APIs, mainly for independence from the storage database but also to avoid Tridion-specific code on the presentation server (loosely coupled setup and less training for developers)
The biggest issue is the huge transport package during publishing. The second problem is that publishing any of the linked-to PDFs causes the page to be republished.
How could this setup be improved or re-engineered, preferably without too many changes to the existing templates? Modular templating could be considered, though.
Dynamic component presentations could possibly work, but they would need to be published to the file system and must not rely on dynamic linking or broker objects (e.g. no criteria filters, binary metadata, etc.).
There are indeed 2 questions. I will handle them in reverse order.
To prevent the page from being republished when you publish a binary, you can use the event system in older versions of Tridion (pre-2011) to turn off link resolving, or with newer versions you can use a custom resolver. There is an article by Nuno which explains this: http://nunolinhares.blogspot.com/2011/10/tridion-publisher-and-custom-resolvers.html
Your second question is a bit tougher, in no small part because of your requirement of not using the SDL Tridion Content Delivery APIs. I would have suggested publishing the binaries separately (this would keep the size of your transaction package down) and using binary linking to resolve the paths at request time.
Given that this is not an option, I think the only way I would approach it would be to still use dynamic component presentations, and then use predictable, unique file names for the PDFs (i.e. something like 317-12345.pdf, based on the item's URI) with one directory for all the binaries. That way you can write the paths to the binaries from your XSLT template, since you know where they will be located later. You could then use a custom resolver to publish the binaries whenever you publish the main list component or page.
Hope that helps
Chris
