How to declare namespaces in one place in MarkLogic and then import/invoke them in various XQuery files? - xquery

I have more than 50 namespaces used in my MarkLogic APIs, and the count keeps increasing. I am looking for a way to store them in the database or configure them on the app server, and then invoke them from all the XQuery files; until now they have been updated manually in every file whenever a new namespace is added.

Yes! If you go to the Admin Interface (port 8001) and look under either your Group or your App Server, you'll see a Namespaces section on the left where you can enter your commonly used namespaces. After that, those prefixes are available in all the code on that server automatically.
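For illustration, here is a small XQuery sketch (the prefix and URI are hypothetical) showing what the server-level configuration saves you from repeating in every module:

(: Before: each XQuery module must declare the prefix itself. :)
declare namespace ex = "http://example.com/ns/demo";  (: hypothetical URI :)

/ex:root/ex:item

(: After adding prefix "ex" = "http://example.com/ns/demo" under the
   App Server's Namespaces section, the declare line above can be
   removed; the prefix resolves from the server configuration. :)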

Related

ArchestrA: list all engines on a platform

I need to write a script that can print a list of all the engines existing on an ArchestrA platform.
I've tried to create a script at the platform level, but I haven't found any information about how to obtain the list of engines the way the IDE does it, for instance.
Does anybody have an idea how to do this?
AFAIK, the IDE uses SQL to populate the tree. There is a table with all the properties of the objects, including where each object is running.
It is not possible to use the exact same mechanism as the ArchestrA IDE, but there are several options:
1. Externally build and save the Galaxy hierarchy using GRAccess. This is the recommended technique.
2. Connect directly to the Galaxy database and build the hierarchy. This is not an officially supported technique, but it is very fast.
3. Build the list of Engines through scripting, either using MXAccess or possibly from an ArchestrA script.
For #3 (MXAccess/scripting), it is possible to use relative tag references to find the engines:
MyPlatform._Engines[-1] -> iterate through to get the engines
${EngineName}.Objects[-1] -> if needed, lists the objects in the engine
MyPlatform.Tagname -> name of Platform
MyPlatform.Host -> host name of Platform
If you try the ArchestrA scripting route, you will need to use the Indirect type and BindTo, but be aware of the asynchronous nature of the responses: you will need delays and polling to check whether the data is available. Communication within the same engine is usually fairly quick, though.

How to develop a custom connector in SailPoint

I am a novice in the field of Identity and Access Management.
So far I know that SailPoint provides direct connectors to integrate known systems like LDAP, HR systems, OIM and databases, and that it also supports disconnected applications through custom connectors.
My question is: how do I develop a custom connector?
I do not have the JAR file provided by SailPoint that contains the AbstractConnector class, so I cannot write my own class against it.
I also do not understand what to do with that class if I do have the JAR: how will SailPoint refer to my class? Do I need to deploy it somewhere?
I am looking for the complete flow for developing and deploying a custom connector. If anyone has done this, please help.
If you unzip your identityiq.war, you'll find a JAR file called WEB-INF/lib/connector-bundle.jar. This is the JAR where you'll find AbstractConnector. Once you've written your connector code, you will need to compile it and bundle it into a JAR file, which you will place into WEB-INF/lib.
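As a rough illustration of the class side, here is a hedged skeleton (the package name, class name and method bodies are assumptions, and the exact AbstractConnector contract varies between IdentityIQ versions; check the classes in connector-bundle.jar for the real signatures):

// Hedged sketch of a custom connector. Compile against
// WEB-INF/lib/connector-bundle.jar, package it as a JAR, and drop it
// into WEB-INF/lib as described above.
package com.example.connector;  // hypothetical package

import sailpoint.connector.AbstractConnector;
import sailpoint.connector.ConnectorException;
import sailpoint.object.Application;

public class MyAppConnector extends AbstractConnector {

    public MyAppConnector(Application application) {
        super(application);
    }

    // Called when an admin tests the Application's connectivity.
    public void testConfiguration() throws ConnectorException {
        // Open a session to the target system using the Application's
        // configuration attributes; throw ConnectorException on failure.
    }

    // You would also implement the aggregation/provisioning methods your
    // use case needs (e.g. iterating account objects); see the Connector
    // interface in connector-bundle.jar for the exact contract.
}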
Finally, you will need to update the ConnectorRegistry object (under Configuration on the debug screen) to reference the new class, which will make it available as an Application type. If it has custom connection parameters (as most do), you will also need an xhtml page that will be embedded into the Sailpoint UI to prompt the user configuring the Application.
If you have Compass access, they have a whitepaper called Custom Connectors that you will find helpful.
All that said, I encourage you to try to find a way to use an out-of-box connector if possible.
Most of the time it will be better to use the DelimitedFile connector: you can import a CSV of identity data and make it work within SailPoint's workflow. You will be able to map fields, correlate accounts and create multi-valued group memberships rapidly. Of course, this means that SailPoint will not be connected directly to the application, and you will have to develop a workflow to extract the identities and upload them. But at least you can integrate without going the custom connector way.

Custom report in Alfresco?

Currently I am generating a report of the files uploaded within a given time range.
I am fetching all files and folders, iterating over the result, and checking the created date one by one. That takes too much time, roughly 8 minutes to return results. Can anyone tell me whether there is an Alfresco reporting API I can use, or how to fetch the result using Solr?
I like to follow an approach which is maybe not really orthodox. Usually you don't want to report on all documents, only on documents with a specific type or aspect. So what I do is create a Java behaviour on onCreate, onUpdate and onDelete that updates a custom database with only the metadata I'm interested in (a rough sketch follows below). Then I can connect any OOTB reporting tool, such as Pentaho, Jasper or Tableau. You have of course some other traditional alternatives, such as:
Using this module developed by a community member: http://fcorti.com/alfresco-audit-analysis-reporting/
Or using the module provided by Alfresco: http://docs.alfresco.com/analytics/concepts/analytics-using.html
SOLR/Lucene is not an option, and querying the DB directly is not an option either (performance-wise).
I would suggest using one of the options available (AAAR, for instance) or developing something of your own following the same principles.
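For the behaviour approach mentioned above, here is a minimal sketch using Alfresco's OnCreateNodePolicy (the onUpdate/onDelete cases are analogous). The class name, the my-report type URI and the saveToReportingDb helper are all hypothetical; the bean would be wired via Spring:

import java.io.Serializable;
import java.util.Map;

import org.alfresco.repo.node.NodeServicePolicies;
import org.alfresco.repo.policy.Behaviour;
import org.alfresco.repo.policy.JavaBehaviour;
import org.alfresco.repo.policy.PolicyComponent;
import org.alfresco.service.cmr.repository.ChildAssociationRef;
import org.alfresco.service.cmr.repository.NodeRef;
import org.alfresco.service.cmr.repository.NodeService;
import org.alfresco.service.namespace.QName;

// Hedged sketch: bind an onCreateNode behaviour to a custom type and
// push selected metadata to an external reporting database.
public class ReportingBehaviour implements NodeServicePolicies.OnCreateNodePolicy {

    // Hypothetical custom content type this behaviour is bound to.
    private static final QName TYPE_REPORTED =
            QName.createQName("http://example.com/model/1.0", "report");

    private PolicyComponent policyComponent; // injected via Spring
    private NodeService nodeService;         // injected via Spring

    public void init() {
        // Fire only after the creating transaction commits successfully.
        policyComponent.bindClassBehaviour(
                NodeServicePolicies.OnCreateNodePolicy.QNAME,
                TYPE_REPORTED,
                new JavaBehaviour(this, "onCreateNode",
                        Behaviour.NotificationFrequency.TRANSACTION_COMMIT));
    }

    public void onCreateNode(ChildAssociationRef childAssocRef) {
        NodeRef node = childAssocRef.getChildRef();
        saveToReportingDb(node, nodeService.getProperties(node));
    }

    // Hypothetical helper: write only the metadata you report on to your
    // custom database (e.g. via plain JDBC).
    private void saveToReportingDb(NodeRef node, Map<QName, Serializable> props) {
        // ...
    }

    public void setPolicyComponent(PolicyComponent policyComponent) {
        this.policyComponent = policyComponent;
    }

    public void setNodeService(NodeService nodeService) {
        this.nodeService = nodeService;
    }
}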
I did a little bit of investigation on this and found the link below.
http://docs.alfresco.com/4.0/tasks/audit-recording-values.html
I think you can use auditService in Alfresco to get this done. There are a few Alfresco web scripts (related to audit) already available which allow you to filter the response. In case you need to customize things, you can create a web script and use auditService in it.
You can use the URL below to browse all your Alfresco web scripts.
http://localhost:8080/alfresco/service/index
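If you do end up writing a custom web script, here is a hedged Java sketch of querying auditService directly. It assumes the default "alfresco-access" audit application is enabled and a Spring-injected AuditService; the class name is hypothetical, and you should adjust names to your own audit configuration:

import java.io.Serializable;
import java.util.Map;

import org.alfresco.service.cmr.audit.AuditQueryParameters;
import org.alfresco.service.cmr.audit.AuditService;

// Hedged sketch: list audit entries recorded after a given time.
public class AuditReport {

    private AuditService auditService; // injected via Spring

    public void printEntriesSince(long fromTimeMillis) {
        AuditQueryParameters params = new AuditQueryParameters();
        params.setApplicationName("alfresco-access"); // default audit app
        params.setFromTime(fromTimeMillis);

        auditService.auditQuery(new AuditService.AuditQueryCallback() {
            public boolean valuesRequired() {
                return true; // we want the audited values, not just IDs
            }

            public boolean handleAuditEntry(Long entryId, String applicationName,
                    String user, long time, Map<String, Serializable> values) {
                System.out.println(time + " " + user + " " + values);
                return true; // keep iterating
            }

            public boolean handleAuditEntryError(Long entryId, String errorMsg,
                    Throwable error) {
                return true; // skip broken entries, keep iterating
            }
        }, params, 100); // cap at 100 entries for this sketch
    }

    public void setAuditService(AuditService auditService) {
        this.auditService = auditService;
    }
}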

Clarification on web2py apps sharing a single database

Sorry, I'm a little unclear on the web2py manual explanation.
As an example, given app1 and app2:
I want app2 to share the database I have built in app1.
So do I change the app2/models/db.py file to read: db = DAL('sqlite://storage.sqlite',migrate='false') ?
And do I include all the other myModel.py files in the app2/models directory as well?
If the database is in app1/databases/, how does app2 know how to find the correct database file?
This thread begins to answer the question, but I'm still unclear on how to define where the shared database lives.
Note, DAL(..., migrate=False) just sets the default value of migrate for each table -- it will not have any effect on the migration status of tables whose define_table() calls include their own explicit migrate argument. If you want to completely disable migrations for an entire db connection (regardless of the individual define_table() calls), instead use:
DAL(..., migrate_enabled=False)
Also, to share model definitions between applications, rather than simply copying the model files, you could put the definitions in functions or classes within modules and then import the modules. Another option is to use auto_import:
DAL(..., auto_import=True)
Note, auto_import will import the field names and types, but it will not include DAL-specific attributes, such as validators and defaults, so its usage is somewhat limited.
I can't test this right now, but the answer should be: you can override the folder in the DAL (see the docs and this thread):
db = DAL('sqlite://storage.sqlite', folder='path/to/app/databases')
So both apps should point to the same file.
And yes, you need the model files in both apps too, otherwise the apps won't know how to access the db.
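Putting both answers together, a minimal sketch of app2/models/db.py under these assumptions (app1 owns the database, both apps live in the standard web2py applications/ folder, and the relative path is illustrative):

# app2/models/db.py -- minimal sketch; the path below is illustrative.
# In web2py model files, DAL and request are already in scope.
import os

db = DAL('sqlite://storage.sqlite',
         folder=os.path.join(request.folder, '..', 'app1', 'databases'),
         migrate_enabled=False)  # let app1 own all migrations

# Then either copy app1's table definitions (model files) into app2, or
# pass auto_import=True above to recover field names/types from the
# .table files (losing validators and defaults, as noted above).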

Calling a web/wcf service from orchestration: adding a generated item vs adding service reference

If I want to call a web service or wcf method from an orchestration, I can do it by either adding a service reference to the project or adding a generated item. What is the advantage of either approach - is there a best practice?
Steef-Jan Wiggers answers a similar question here.
TL;DR - Always use the Generated Items wizard.
My 10c - although the .xsd files imported by Add Service Reference are added as schemas and set to BtsCompile, there are some limitations:
Add Service Reference will add the client proxy, which isn't needed in a BizTalk project (and which might 'tempt' your devs to do silly things like calling this proxy from a custom assembly).
Add Service Reference makes a mess of importing complicated WSDL (e.g. with generics or dependencies on other schemas); see Considerations when consuming Web Services.
Using the Add Generated Items wizard does extra work for you:
Adds in a Port Type for accessing the service, already preconfigured for the correct message types. Note however that it adds the Port type to a dummy .odx - i.e. don't delete the odx until you've moved the Port type elsewhere.
Allows you to create the Send Port bindings at the same time.
One thing I would recommend with the wizard is to create a folder for the WCF reference and always import all the artifacts into that folder (i.e. don't do the usual separation of Schemas from Ports, and leave the dummy .odx there as well). This way, if you need to regenerate the items, you can just delete everything in the folder and start again (sadly, the wizard doesn't have an Update Service Reference equivalent).
Also note that if you do move the generated Schemas and Port Types into a separate assembly, you will need to change the type access modifier to Public (it is internal by default).
