Newcomer to Alfresco and Web Development here, so bear with me. I've so far installed Alfresco and was able to use the Maven AMP archetype to create my own custom content model for the data I need to store in it. Now I need to access this data from an external site by querying the Alfresco repository.
I've followed what I can find on CMIS and was able to execute a query using curl, getting the results I expect as a large XML stream. My colleague fought an uphill battle trying to interpret those results in ColdFusion. I've searched around and understand that, to interpret these results and make the process a bit easier, it is better to use some kind of client like OpenCMIS (or Chemistry; I'm still a bit confused by the terminology here).
So far we've tried the PHP client, but received errors from the xmlLoad function failing on 'nbsp' characters. PHP seemed like the easiest option to implement, though we're considering moving to Java if that works better. However, documentation seems scarce on either end. Are there better examples that we may have missed, or maybe some other way to do this? It seems like this should be simple to implement, yet the apparent brick wall of Alfresco and CMIS has stalled us.
If you don't want to use a library, the CMIS Browser Binding might work better for you. It returns JSON instead of XML.
Try:
http://<host>/alfresco/api/-default-/public/cmis/versions/1.1/browser?cmisselector=query&succinct=true&q=SELECT * FROM cmis:document
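For example, the query above can be run and its JSON read with just a few lines. This is a minimal sketch using Node 18+'s built-in fetch, run as an ES module; the localhost host/port and the admin:admin credentials are placeholders for your own setup:

```typescript
// Minimal sketch (Node 18+, built-in fetch); host and credentials are placeholders.
const base =
  "http://localhost:8080/alfresco/api/-default-/public/cmis/versions/1.1/browser";
const params = new URLSearchParams({
  cmisselector: "query",
  succinct: "true",
  q: "SELECT * FROM cmis:document",
});

const response = await fetch(`${base}?${params}`, {
  headers: {
    // Basic auth; swap in real credentials for your repository.
    Authorization: "Basic " + Buffer.from("admin:admin").toString("base64"),
  },
});

const data = await response.json();
// With succinct=true, each result carries a flat succinctProperties map,
// which is much easier to consume than the AtomPub XML.
for (const result of data.results) {
  console.log(result.succinctProperties["cmis:name"]);
}
```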
Shouldn't be a brick wall at all. Here are some resources:
- The custom content types tutorial has a section on CMIS, including CMIS queries, which may be helpful to you even if you do not need custom types.
- The CMIS & Apache Chemistry book from Manning is a good resource (disclosure: Florian and I co-authored it along with another colleague, Jay Brown).
- There are some Java examples on Google Code.
- There are additional resources and helpful links on the Alfresco CMIS page.
Please assist:
I'm familiar with VBA and C++, but not with Java, and I now want to delve into Office Scripts.
However, I want to know if I can achieve the same as in VBA:
I am logging into niche websites and fetching data in tables using VBA Internet Controls (getElementByID()), etc.
As far as I know, these niche websites do not have an API, unlike the web-scraping sample scenario on the Microsoft website:
https://learn.microsoft.com/en-us/office/dev/scripts/resources/scenarios/noaa-data-fetch
I would like to know if I can log onto these websites and then fetch information from the HTML using getElementByID() or similar?
I am just unsure whether I can use Office Scripts directly, or whether I need to include some library.
Any guidance would be appreciated.
Currently, there is no way to do this through Office Scripts alone. The fetch command and REST APIs are the only ways for a script to get data directly from web services. If you'd like to request the addition of a specific library, please use the Send feedback button in the Office Scripts Code Editor.
The discussion in the comments about using Power Automate is a reasonable path to pursue. The linked video (https://www.youtube.com/watch?v=_O9eEotCT0U) is a good place to start.
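To illustrate the supported pattern, this is roughly what fetching JSON from a REST endpoint looks like in an Office Script, along the lines of the NOAA scenario linked above. The URL and the shape of the response below are invented placeholders:

```typescript
// Sketch of the supported pattern: fetch JSON from a REST endpoint and
// write it into the workbook. URL and response shape are assumptions.
async function main(workbook: ExcelScript.Workbook) {
  const response = await fetch("https://example.com/api/data");
  // Assumes the service returns a JSON array of rows (at least one).
  const rows: (string | number)[][] = await response.json();

  // Write the fetched rows starting at A1 of the active sheet.
  const sheet = workbook.getActiveWorksheet();
  sheet.getRangeByIndexes(0, 0, rows.length, rows[0].length).setValues(rows);
}
```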
I am working on a demo for which I need to expose some data stored in a SQLite DB file as a SPARQL endpoint that can be queried. This doesn't have to be fancy; at the moment, just a way to expose static data as RDF is fine for me.
I am wondering if anyone knows how to achieve this using any open-source or free tools available.
I understand that I might have to write an R2RML mapping file and other configuration, but I am unable to find a way to do so. I hear Apache Jena can do the trick, but I can't find a good example of how to achieve this.
Does someone know a good tutorial that shows how this can be done?
After looking at various options, I found http://d2rq.org/ to be the easiest way to expose my RDBMS data as a SPARQL endpoint.
Follow the documentation on that site, which is generous in explaining how this can be achieved.
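For anyone following the same path: once D2R Server is up, its endpoint is plain HTTP, so you can query it from anything. A minimal sketch in TypeScript/Node (18+), assuming D2R Server's default port 2020 and /sparql path:

```typescript
// Query a running D2R Server endpoint; port and path are D2R defaults.
const endpoint = "http://localhost:2020/sparql";
const query = "SELECT * WHERE { ?s ?p ?o } LIMIT 10";

const response = await fetch(`${endpoint}?query=${encodeURIComponent(query)}`, {
  // Ask for the standard SPARQL JSON results format.
  headers: { Accept: "application/sparql-results+json" },
});

const json = await response.json();
// SPARQL JSON results: results.bindings is an array of variable -> term maps.
for (const binding of json.results.bindings) {
  console.log(binding.s?.value, binding.p?.value, binding.o?.value);
}
```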
My question is very simple. I have an Orbeon form, and I want to store the form's XML file in Alfresco. What is the easiest way to do this integration?
I know that Orbeon PE has this feature but I would like to use CE.
I also checked http://blog.ossgeeks.org/2011/12/alfresco-persistence-layer-for-orbeon.html but I couldn't get it to work.
Using or creating an Alfresco persistence layer is the way to go. Most likely this means either:
- Getting the persistence layer Alexey mentions in that blog post to work. You might get feedback on the specific issue you hit by posting a more detailed description as a comment on that post.
- Writing your own persistence layer, as sketched below. This might be simpler than you imagine, especially if you don't need all the operations (e.g. if you don't need search because you won't be using the summary page).
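To give an idea of the second option's scope, here is a very rough, purely illustrative sketch in TypeScript/Node. Orbeon's persistence API is REST-style (the form runner PUTs the form's XML to a CRUD URL and GETs it back), so a thin HTTP service can accept that XML and forward it to Alfresco; the port and the storage step below are assumptions, not a drop-in solution:

```typescript
// Skeleton of a custom persistence service: accept the XML Orbeon PUTs.
import * as http from "node:http";

http
  .createServer((req, res) => {
    if (req.method === "PUT") {
      let xml = "";
      req.on("data", (chunk) => (xml += chunk));
      req.on("end", () => {
        // Here you would store `xml` in Alfresco, e.g. via the CMIS
        // browser binding shown earlier on this page.
        console.log(`Received ${req.url} (${xml.length} bytes)`);
        res.statusCode = 201;
        res.end();
      });
    } else {
      // GET, DELETE, and search would be needed for a complete layer.
      res.statusCode = 501;
      res.end();
    }
  })
  .listen(8081);
```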
I haven't dug into the details of what and how SDL Tridion stores data in its internal search engine (SOLR), but I need to build a GUI extension that performs searching on component/metadata fields across publications.
I can't see any reason not to have a look into SOLR, but before I invest the time, does anyone know any reason why this would be a bad idea?
Thanks in advance!
It's a bad idea in general to bypass the API and directly query SOLR.
From your question, I see no reason to do so.
Do you need to index more data than what is already indexed by Tridion?
If not, surely you can just search using the API?
If you do, you could consider implementing a custom Search Indexing Handler for the additional data. Although this is not very well documented at the moment, it seems rather straightforward to create (implement ISearchIndexingHandler and update your CM and SOLR configuration). The benefit is that your data can also be found using the standard Tridion search.
It really depends on your search requirements. If it's just simple search, then it's probably fine; but if you want to make Tridion-specific searches, it will be quite difficult, as SDL Tridion does a lot of post-processing on SOLR results. Why can't you just use the CoreService and have a convenient, supported search interface?
As Peter said, it's really a bad idea to interact directly with the SOLR instance that comes with Tridion. Tridion has an abstraction layer to hide the complexity of the SOLR queries; for example, Tridion hides the case sensitivity of the search keyword.
I strongly recommend using the Tridion search API to build your interface. The Tridion search API also supports executing a SOLR query directly, but that is not recommended.
For indexing additional data, you can implement ISearchIndexingHandler. There is some complexity with the SOLR config files (adding new fields).
What's your preferred method of providing a search facility on a website? Currently I prefer to use Lucene.net over Indexing Service / SQL Server full-text search (as there's nothing to set up server-side), but what other ways are being used out there?
We have used Lucene.net, Indexing Service, and SQL Server full-text search. For a project with large and heavy DB search functionality, SQL search has the upper hand in terms of performance and resource hit; otherwise, Lucene is much better in all respects.
Take a look at Solr. It uses Lucene for text indexing, but it is a full-blown HTTP server, so you can post documents over HTTP and search using URLs. The best part is that it gives you faceted searching out of the box, which would require a lot of work to build yourself.
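To make the "post documents over HTTP and search using URLs" point concrete, here is a rough sketch against a local Solr (Node 18+ fetch; the core name and the _t/_s dynamic field suffixes are assumptions based on Solr's example schema):

```typescript
// Sketch of Solr's HTTP interface; core name and field names are placeholders.
const solr = "http://localhost:8983/solr/mycore";

// Index a document and commit in the same request.
await fetch(`${solr}/update?commit=true`, {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify([{ id: "1", title_t: "Hello Solr", category_s: "demo" }]),
});

// Search it back as JSON, with facet counts on the category field.
const res = await fetch(
  `${solr}/select?q=title_t:hello&facet=true&facet.field=category_s&wt=json`
);
console.log(await res.json());
```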
You could use Google. It's not going to be the fastest indexer, but it does provide great results when you have no budget.
dtSearch is one we've often used, but I'm not really that big a fan of it.
A lot of people are using Google's custom search these days; even a couple of banks that I know of use it for their intranet.
If you need to index all the pages of your site (not just the ones Google indexes), or if you want to create a search for your intranet web sites, the Google Mini is pretty sweet. It costs money, but it is really easy to have it up and running within just a couple of hours. Depending on how many pages you need to index, though, it can get expensive.
I'm using dtSearch and I (kind of) like it. The API isn't the greatest in the world for .NET but it can get the job done and it's pretty fast. And it's cheap, so your boss will like it (~$1,000 US).
The results leave something to be desired, as it doesn't do any kind of semantic relevance ranking or anything fancy. It does a better job than anything you can get out of MS SQL Server, though.
It has a web spider that makes it easy to build quick search apps for a website. If you need to, you can use the API to create hooks into your database and to provide item-level security, but you have to do the work yourself. Their forum leaves something to be desired as well, but maybe people will start posting dtSearch stuff here. :)
Has anyone tried Microsoft search server express?
http://www.microsoft.com/enterprisesearch/serverproducts/searchserverexpress/default.aspx
I haven't tried it yet, but it could potentially be powerful.
From the site it looks primarily geared toward SharePoint users, but given its SDK, I don't see why you couldn't use it for a regular old site search.
I also recommend Solr. It's easy to set up, maintain, and configure. I've found it to be stable and easy to scale. There's a C# package for interfacing with Solr.