I am trying my hand at the MarkLogic Query Console. I understand what Document, Format and Collections are and what they are used for, but I am unable to understand what the Properties tab is for. When I click on it, it shows the properties for the particular document. But why do we need properties? What are they used for? Please give me a real-world scenario.
Appreciate your help
The Properties tab in Query Console displays the XML properties fragment for the context document in the database. Properties are optional, so nothing shows if there are none. They are usually used to store metadata about the associated document, and they share that document's URI. Properties can be accessed in XPath using the property:: axis. A properties document can also stand alone at a URI with no associated document. JSON documents can have properties too, but the properties themselves must be stored as XML. Element indexes can also be created on elements in properties documents. There is an API for working with properties, including xdmp:document-properties to read properties, xdmp:document-add-properties to add properties to a document, and xdmp:document-set-properties, which can also be used to create standalone properties documents.
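For a concrete feel, here is a minimal Query Console sketch; the document URI and property names are invented, and each semicolon-terminated statement runs as its own transaction:

xdmp:document-add-properties(
  "/docs/example.xml",
  <reviewed-by>jane</reviewed-by>
);

(: read the whole properties fragment back :)
xdmp:document-properties("/docs/example.xml");

(: or address one property directly via the property axis :)
doc("/docs/example.xml")/property::reviewed-by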
A properties fragment is a sidecar document fragment that keeps metadata about a document separate from the content of the document. Common uses include:
Properties projected out of binary files so they are searchable
Timestamps on documents including bitemporal documents
Workflow state for documents including the state for CPF (the Content Processing Framework)
For more detail, see:
https://docs.marklogic.com/guide/app-dev/properties#id_19516
Hoping that helps,
I am using the Firebase Translate Text extension to translate a few document fields in my project. I want to add one more field that lives in documents in a nested collection:
So each document in the collection "spots_test" has a sub-collection "reviews". I want to translate one field in each new review added, and I am wondering how I can set this up in the Translate Text extension. I tried something like this, but it didn't work:
Is there any way to handle nested collections?
I wasn't able to find proper documentation, but I experimented a bit. It seems to work this way on my side (LevelOne is a collection, test is a sub-collection in any document of that collection):
LevelOne/{doc}/test
I don't think what's inside the braces matters; I tested {something} as well, and it works fine.
Since this feature is based on Cloud Functions, I tried the same wildcard logic as in Cloud Functions background triggers for Firestore. To be honest, as I didn't find anything about this in the extension docs, I am not sure whether it is intended behavior, but it works.
UPDATE:
I have continued testing. The extension generates a function that is visible in the Functions tab of the Firebase console, along with its trigger. The trigger's value is generated from the extension's "Collection path" configuration plus {messageId}. So, for example, you can set up:
{collection}/{doc}/{subcollection}
In this situation the Translate Text extension will work on every document in a second-level collection, no matter what the path is.
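For reference, the generated function corresponds roughly to a hand-written background trigger like this sketch (plain Cloud Functions code; the path matches the question's layout, and the body is only illustrative, not the extension's actual logic):

const functions = require("firebase-functions");

// fires for every review document under any spots_test document
exports.translateReview = functions.firestore
  .document("spots_test/{doc}/reviews/{messageId}")
  .onWrite((change, context) => {
    // the real extension translates its configured field here
    console.log("review changed:", context.params.messageId);
    return null;
  });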
I've come up with the mapping that follows while working on the REST API of a system where users are able to create and manage resources of different types.
// READ OPERATIONS
GET /objects => read collection meta
GET /objects/[id] => read single element
GET /objects/?[query] => read a number of elements
GET /objects/?all => read all elements
// CREATE / UPDATE OPERATIONS
PUT /objects => possibly create the collection and update its meta
PUT /objects/[id] => possibly create and update a single element
PUT /objects/?all => update the entire content of the collection
POST /objects => create new objects or update existing objects
PATCH /objects => partially update the collection meta
PATCH /objects/[id] => partially update a single element
PATCH /objects/?all => partially update all the elements
PATCH /objects/?[query] => partially update a number of elements
// DELETE OPERATIONS
DELETE /objects => delete the collection
DELETE /objects/[id] => delete a single element
DELETE /objects/?all => empty the collection
DELETE /objects/?[query] => delete a number of elements
Here's some more information on the system:
each resource can be either a simple one or a collection-like one;
each resource, collection or not, has properties of its own that need to be accessed and manipulated;
the API must support bulk (not batch) operations.
I've also examined the following alternatives:
using /collection to access the collection's set of elements and /collection?meta to access the collection's own data;
using a whole new resource to access a collection's own data, such as /collections/path/to/collection.
I do not like alternative no. 1 because it feels semantically poor to me. By comparison, when I refer to a box I am actually referring to the box itself, not to its content.
I do not like alternative no. 2 because a resource ends up having its own data exposed by another resource, duplicating URLs and making the question of "which URL should I use" less trivial than I'd like it to be.
Therefore, my questions:
Is the mapping I have proposed a valid, proper mapping for a REST API? Is it respectful of REST principles? I'm not asking whether it's the best mapping out there or not. I'm asking about its validity.
If not, which one of the alternatives is the better one and why?
Please excuse my english, I'm not a native speaker of the language.
I thought the API design looked OK, but then I re-read this comment of yours at the start:
where users are able to create and manage resources of different types.
If the resources of your system are of different types, why are you exposing them with a neutral, type-less API that works only with generic objects?
The first part of RESTful API design is the identification of the nouns in your system, and those nouns should be considered strongly as candidates for exposure as URIs. I would strongly encourage you to try and get more specific than object and model the business functionality of your system with clearer URIs.
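For instance, with nouns invented purely for illustration, compare how much more these say than a generic /objects:

GET /invoices/42          => read one invoice
GET /customers/7/invoices => read one customer's invoices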
And your English is fine!
First of all, the semantics of URIs aren't relevant to REST. "RESTful URI" is almost an oxymoron. The only constraint a URI must follow to be RESTful is that it references one and only one resource.
Obviously, that doesn't mean REST URIs can be obscure. They should be as clear, intuitive and descriptive as possible, but whatever scheme you decide to use is fine, as long as it's consistent. If you're so concerned with this, it probably means you're not using HATEOAS and should take a look at it.
Second, you're not considering the media types, and that's why you end up with the problem of using URIs to designate different media types. Let's say that retrieving all elements of a collection should be simply:
GET /objects
And retrieving a single element of a collection should be:
GET /objects/[id]
Now, if the client needs only the metadata for a resource, either a collection or a single element, it should specify that through the Accept header, not by going to a separate URI you point to in the documentation, or even worse, by adding query string parameters.
So, for instance, if the media type for your object is application/vnd.mycompany.myobject+json, your clients get the full object representation when using that media type in the Accept header, and get the metadata by using something like application/vnd.mycompany.myobjectmetadata+json.
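A metadata request would then look like this (using the hypothetical media type above and an arbitrary id):

GET /objects/42 HTTP/1.1
Accept: application/vnd.mycompany.myobjectmetadata+json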
I guess this probably isn't what you expected, but that's what REST is. Your documentation and design effort should be focused on your media types, not your URIs. When you use HATEOAS, URI design is irrelevant, and if you're not using HATEOAS, you're not using REST.
What are the differences between getRawSomething and getSomething methods on Archetypes contents (eg. ATNewsItem)?
For example, what is the difference between getRawImage and getImage? Or getRawRelatedItems and getRelatedItems? etc.
getRaw* gives you the direct, unprocessed raw data as stored on the object. The get* methods are allowed to transform that data in some way as needed.
For example, TextField fields will transform text to safe HTML when using get(), but getRaw() gives you the untransformed data, be that Markdown, reStructuredText or unprocessed HTML.
From the developer documentation:
Archetypes has two kinds of access methods:
normal, getSomething(), which filters output;
raw, the so-called edit accessor, getRawSomething(), which does not filter output.
When you want to edit the current contents of a field, use getRaw*; when rendering the contents, use get*.
Specifically, related items are stored in a reference field: the getRaw() method returns object UIDs, while the get() method returns the objects themselves, having first resolved the UIDs for you.
Image fields, like file fields, wrap the data in the associated object type (OFS.Image for image fields) on .get() if it is not already of that type, but return whatever the underlying storage holds on .getRaw(). Usually the object is already wrapped, though.
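To make the difference concrete, a minimal sketch; obj is assumed to be an ATNewsItem or similar, and the accessors are the standard generated ones mentioned above:

def show_accessors(obj):
    # filtered output, e.g. text transformed to safe HTML for rendering
    html = obj.getText()
    # raw stored source (Markdown, reStructuredText, ...), for edit forms
    source = obj.getRawText()
    # reference field: resolved objects vs. the stored UID strings
    items = obj.getRelatedItems()
    uids = obj.getRawRelatedItems()
    return html, source, items, uids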
I have a set of data rendered using ASP.Net (VB.Net) to a web page. I now want to export that data to XML. I have created some code to generate a schema, however, I don't know what to do next. I want to have the schema be in-line with the XML data, and I would like the compiler to check to make sure that the data I'm entering for the XML content validates against the included schema. Anyone know of a way to do this? The idea is for me to be able to open the resultant file in Excel with fields of the correct type.
I've built XML documents before, but this is the first schema document I've created programmatically. However, I've never worked with inline schemas, much less used them to strongly type the XML being added to the document.
I've read over the following, which were quite helpful, but neither of which addressed the issue I mention above:
http://www.aspfree.com/c/a/XML/Generating-XML-Schema-Dynamically-Using-VBNET-2005-Essentials/
http://blogs.msdn.com/b/kaevans/archive/2007/06/05/inline-an-xml-schema-into-your-xml-document.aspx
I have no idea what you mean by "... I would like the compiler to check to make sure that the data I'm entering for the XML content validates against the included schema."
The compiler never checks that. If you want to validate your XML document against a schema programmatically, you should probably use XmlSchemaValidator: http://msdn.microsoft.com/en-us/library/system.xml.schema.xmlschemavalidator.aspx.
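As a minimal VB.NET sketch of programmatic validation, here is the simpler XmlReaderSettings route, which drives the same validation machinery over a whole document; the file names are made up:

Imports System.Xml

Module ValidateExample
    Sub Main()
        Dim settings As New XmlReaderSettings()
        settings.ValidationType = ValidationType.Schema
        ' Nothing = take the namespace from the schema's targetNamespace
        settings.Schemas.Add(Nothing, "MySchema.xsd")
        AddHandler settings.ValidationEventHandler,
            Sub(sender, e) Console.WriteLine("Validation: " & e.Message)
        ' reading the document is what triggers validation
        Using reader As XmlReader = XmlReader.Create("MyData.xml", settings)
            While reader.Read()
            End While
        End Using
    End Sub
End Module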
But for inlining the schema with your document, you sort of answered your own question. The second link in your question, to http://blogs.msdn.com/b/kaevans/archive/2007/06/05/inline-an-xml-schema-into-your-xml-document.aspx, is exactly what you are trying to do.
You can think of an inline XML Schema as a document-within-a-document. Well, using Kirk's example, the outermost document is more of a container which uses the undefined namespace (no schema). His example uses a document root of "DerekDoc" that belongs to the undefined namespace. You can name yours whatever you want.
Inside that root are essentially two documents. One is the inline XML Schema. You would just add it as a child element of the root. The other is the XML document that you intended to conform to the XML Schema. You will need to use the xmlns attribute to set this element to the namespace defined by your XML Schema (the target namespace of the schema).
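A minimal sketch of that layout, reusing Kirk's "DerekDoc" container; the target namespace and element names here are invented:

<DerekDoc>
  <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
             targetNamespace="urn:example:data"
             elementFormDefault="qualified">
    <xs:element name="Item" type="xs:string" />
  </xs:schema>
  <Item xmlns="urn:example:data">some validated content</Item>
</DerekDoc>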
It might work (I haven't tried it) to set the root element to the target namespace of the schema, but it might be harder for clients to validate the document since it's a forward reference.
I have a local SQLite database that contains a tree (as Nested Sets). In an AIR application, I want to display that tree in a tree control and provide means to change the nodes' names and copy, move, add or delete nodes.
Now, I'm hiccupping a little on where to put which code. Obviously, I have a class which will perform operations like load / update / insert / delete against the database. This would load the whole tree into some storage variable and save changes made by the user back to the db.
Should this class be the dataProvider, the dataDescriptor or an extension of the Tree control itself? And when the user requests an operation like adding a node, should that update the dataProvider and let the database handler react on an event, or should it call the database handler's method and then update the dataProvider? I'd say that the latter is better, because it's easier to not update the Tree's data if something goes wrong with the db query.
There are methods to add and remove nodes in the DefaultDataDescriptor and in the Tree class (protected methods in the latter); should I use / extend those, or ignore them?
The reason I'm confused about this is that, according to the docs, a Tree control uses the object stored in its 'dataDescriptor' property to parse and manipulate the actual data which is stored inside its 'dataProvider' property.
This seems to make sense, until you realize that unless you subclass it, it's never the Tree control that manipulates data (with the exception of drag&drop, if that's enabled), and it's not the dataDescriptor, either. Rather, in all examples, manipulating data happens directly via the dataProvider object and that triggers event handlers in the Tree control.
What is it I don't get here?
Take a look at mx.controls.treeClasses.HierarchicalCollectionView. It is not part of the public API, but its full source is available as part of Flex. The Tree control uses this class internally to handle various data sources.
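To illustrate the "call the database handler first, then update the dataProvider" order the question leans toward, here is a rough ActionScript sketch; dbHandler and its callback-style insertNode() are hypothetical:

import mx.controls.treeClasses.DefaultDataDescriptor;

// persist first; only touch the Tree's data once the DB has accepted it
private function addNode(parent:Object, newChild:Object):void
{
    dbHandler.insertNode(newChild, function(success:Boolean):void
    {
        if (!success)
            return; // nothing to undo in the UI
        // passing the dataProvider as the model makes the descriptor
        // dispatch the collection events the Tree reacts to
        var descriptor:DefaultDataDescriptor = new DefaultDataDescriptor();
        descriptor.addChildAt(parent, newChild, 0, tree.dataProvider);
    });
}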