Should JSON API entities include a relationship for their parent?

I haven't been able to find a clear answer, hoping someone can help.
Say we have a blog with posts, each post can have comments, and each comment has a related user. Is it against convention, if I request a comment, to include both the user and the post in its relationships?
data: {
  type: 'comments',
  id: '1',
  relationships: {
    post: {...}, // should this be here?
    user: {...},
  },
  attributes: {...},
},
included: {...}

As paulsm4 correctly stated: "it is up to you".
But I can give you some advice about that.
In such situations, you can give the caller of the API the choice of having such links or not via a querystring flag, like
?relationships=post,user
With this approach, if the caller does not specify the relationships flag, they get the plain comment data; alternatively, you can include all relationships by default and treat the flag as a sort of filter.
In some APIs I've also seen a more invasive approach: embedding the related objects directly in the returned JSON.
With the same technique as before:
?embed=post,user
This produces embedded JSON objects in the reply, containing the same data you would get by requesting something like "GET /post/123" or "GET /user/456" separately. This can be handy in some situations.
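For illustration only, a sketch of what such an embedded reply could look like (the endpoint, flag, and field names are made up):
// GET /comments/1?embed=post,user   (hypothetical endpoint and flag)
{
  "id": "1",
  "body": "Nice post!",
  "post": { "id": "123", "title": "Hello world" },
  "user": { "id": "456", "name": "Jane Doe" }
}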
Often this flag is named "expand", denoting the same or similar behaviour.
For an example, open the API documentation from Atlassian and search for "expand".
There is an older "standard" for your problem called HAL that covers linking and embedding in REST APIs.
Even the WordPress API offers such features; have a look at the official documentation.
An alternative is to rewrite the entire API in GraphQL, moving away from the REST approach.

Q: Should JSON API entities include a relationship for their parent?
A: I assume that's entirely up to you!
If your JSON is defined by some third party, then you have to live with what they gave you. Please post details on how the JSON is specified.
Otherwise, if you're "inventing" the format yourself:
One possibility is to have a relationships: field with a link to the "parent".
Perhaps a better solution might be to invent a "container" (perhaps a simple array!) to hold your "records".
If this were a database, I'd have a "posts" table, and a "comments" table. The "comments" table would have a "Post ID" column as a foreign key into the "posts" table.
'Hope that helps ... at least a bit...

The JSON API specification does not make any requirements about which attributes and relationships are included in a resource object. The specification only says how they must be formatted if they are included. If I did not miss anything, the specification does not even require that all resource objects of the same type have the same attributes and relationships.
But I would argue that there isn't any value in omitting the relationships. The JSON API specification does not require a relationship object to include resource linkage data. On the contrary, it only talks about resource linkage data in the context of a compound document, in which it's used "to link together all of the included resource objects without having to GET any URLs via links."
It's totally valid, and could be considered best practice, to only provide a related resource link if the related resources are not included in the payload. Constructing such a link puts hardly any load on your server since it does not require querying the database. It also does not make any relevant difference in payload size.
An example of a payload using both techniques would look like this. It assumes that the request explicitly asked to include the related user using the include query param.
// GET https://examples.com/api/v1/comments/1?include=user
{
  data: {
    type: 'comments',
    id: '1',
    relationships: {
      post: {
        links: {
          related: 'https://examples.com/api/v1/comments/1/post'
        }
      },
      user: {
        data: {
          type: 'users',
          id: '2'
        },
        links: {
          related: 'https://examples.com/api/v1/comments/1/user'
        }
      },
    }
  },
  included: [
    {
      type: 'users',
      id: '2',
      attributes: {
        name: 'John Doe'
      }
    }
  ]
}
You may also want to include a relationship link, which "allows the client to directly manipulate the relationship." The Updating Relationships chapter of the spec gives a deep dive into what you can accomplish using relationship links.
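For illustration, a minimal sketch of replacing the comment's related post via such a relationship link (the relationship URL is assumed; it is not part of the example above):
// PATCH https://examples.com/api/v1/comments/1/relationships/post
// The body is a resource identifier object naming the new related post.
{
  data: { type: 'posts', id: '2' }
}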

Related

Sparse fields on complex JSON API attributes

According to #document-resource-object-attributes it is allowed to have 'complex' values for attributes, i.e. any valid JSON value.
With #fetching-sparse-fieldsets it is possible to select a subset of the content. However, all examples match on the attribute name.
For example:
{
  "data": [
    {
      "type": "dogs",
      "id": "3f02e",
      "attributes": {
        "name": "doggy",
        "body": {
          "head": "small",
          "legs": [
            {
              "position": "front",
              "side": "right"
            },
            {
              "position": "front",
              "side": "left"
            }
          ],
          "fur": {
            "color": "brown"
          }
        }
      }
    }
  ]
}
In the result I am only interested in the name, body.head and body.fur.color.
What would be a correct way to solve this (preferably without requiring relations, since this data is valid)?
JSON:API's Sparse Fieldsets feature allows a client to request only specific fields of a resource:
A client MAY request that an endpoint return only specific fields in the response on a per-type basis by including a fields[TYPE] parameter.
https://jsonapi.org/format/#fetching-sparse-fieldsets
A field is either an attribute or a relationship in JSON:API:
A resource object’s attributes and its relationships are collectively called its “fields”.
https://jsonapi.org/format/#document-resource-object-fields
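For example, requesting only the name field of dogs resources would look like this (the path is illustrative):
// GET /dogs?fields[dogs]=name
// The response then contains only the "name" attribute of each dogs resource;
// nested parts of a complex attribute such as body.fur.color cannot be selected this way.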
Sparse Fieldsets are not meant to have an impact on the value of an attribute or a relationship. If you have such a need you shouldn't model the data as a complex value but expose it as a separate resource.
Please note that your database schema and the resources exposed by your API do not need to be the same. It often makes sense not to have a 1-to-1 relationship between database tables and resources in your JSON:API.
Don't be afraid of having multiple resources. It's often much better for the long-term than having one resource with complex objects:
You can include the related resource (e.g. dog-bodies, dog-legs, dog-furs in your case) by default.
You can generate the IDs for those resources automatically based on the persisted ID of the parent resource.
You can have much stricter constraints and easier documentation for your API with separate resources.
You can reduce the risk of collisions by supporting updates of specific parts (e.g. the color attribute of a dog-furs resource) rather than replacing the full body value of a dogs resource (see the sketch after this list).
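As an illustration only (the resource type dog-furs and the IDs are made up), the fur could be exposed as its own resource and referenced from the dog:
// GET /dogs/3f02e?include=fur   (hypothetical endpoint)
{
  "data": {
    "type": "dogs",
    "id": "3f02e",
    "attributes": { "name": "doggy" },
    "relationships": {
      "fur": { "data": { "type": "dog-furs", "id": "3f02e-fur" } }
    }
  },
  "included": [
    {
      "type": "dog-furs",
      "id": "3f02e-fur",
      "attributes": { "color": "brown" }
    }
  ]
}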
The main drawback I currently see with having multiple resources instead of one is the limitation that you can't create or update more than one resource in the same request with JSON:API v1.0. But it's very likely that the upcoming v1.1 won't have that limitation anymore. An official extension called Atomic Operations has been proposed for that use case by a member of the core team working on the spec.

What is the correct JSONAPI way to post multiple related entities in a single request?

At some point in my hypothetical app, I want to create multiple related entities of different types in a single request, for efficiency's sake. In the example below I serialize the request so that it contains the data about the new User as well as its related Avatar.
// POST /api/users
{
  data: {
    attributes: { ... },
    type: 'user',
    relationships: {
      avatar: {
        data: {
          attributes: { ... },
          type: 'avatar',
        }
      }
    }
  }
}
The question is, what would be the correct/recommended way (if there's any) to do that in JSONAPI?
Creating or updating multiple resources in a single request is not supported by the JSON:API spec yet. However, there is a proposal for an Atomic Operations extension for the upcoming v1.1 of the spec.
But in most cases such a feature is not required for efficiency. You might even cause more load on the server by bundling multiple create or update requests into one. Issuing multiple requests in parallel is cheap with HTTP/2 nowadays.
It might not be as performant as doing it in one request if the operations depend on each other (e.g. a post must be created before a comment for that post can be created). But in that case atomic transactions are also a strong requirement, and that is the main driver behind the extension.
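For independent resources, issuing the two create requests in parallel is straightforward; a minimal sketch (endpoints and payload shapes are assumptions, not taken from the question):
// Fire both create requests at once; with HTTP/2 they share one connection.
async function createUserAndAvatar(userPayload, avatarPayload) {
  const headers = { 'Content-Type': 'application/vnd.api+json' };
  const [userRes, avatarRes] = await Promise.all([
    fetch('/api/users', { method: 'POST', headers, body: JSON.stringify(userPayload) }),
    fetch('/api/avatars', { method: 'POST', headers, body: JSON.stringify(avatarPayload) }),
  ]);
  return Promise.all([userRes.json(), avatarRes.json()]);
}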
So to answer your question:
It's currently not supported by the JSON:API spec.
There is a good chance that it will be supported in the next version (v1.1) via an extension.
If efficiency is the only reason you are looking for such a feature, you might not need it at all.
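For reference, a rough sketch of what a request under the proposed Atomic Operations extension might look like for the user-plus-avatar case. The extension is still a draft, so the media type parameter and the local-id ("lid") linking shown here are assumptions that may change:
// POST /operations   (draft Atomic Operations extension)
// Content-Type: application/vnd.api+json; ext="https://jsonapi.org/ext/atomic"
{
  "atomic:operations": [
    {
      "op": "add",
      "data": { "type": "avatars", "lid": "avatar-1", "attributes": { ... } }
    },
    {
      "op": "add",
      "data": {
        "type": "users",
        "attributes": { ... },
        "relationships": {
          "avatar": { "data": { "type": "avatars", "lid": "avatar-1" } }
        }
      }
    }
  ]
}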
Since it is common, and moreover often encouraged, to decouple REST API resources from internal representations, nothing speaks against defining a specific 'virtual' endpoint whose attributes in turn become attributes of two or more different resources under different endpoints.
It may not solve your problem if you want such a feature in general, but if it is only needed for some resource combinations, you can always create a dedicated endpoint for a resource that incorporates the attributes of all related resources.
In your case it could be something like:
// POST /api/users_with_avatar
{
  data: {
    attributes: {
      "user_attribute_1": "...",
      "user_attribute_2": "...",
      "user_attribute_3": "...",
      "avatar_attribute_1": "...",
      "avatar_attribute_2": "..."
    },
    type: 'user-with-avatar'
  }
}

What is a Payload in Redux context

Can someone please explain what exactly a payload is in the Redux context? In layman's terms please; the technical definition wasn't helpful, hence the continued confusion.
What I understand is that a payload is the actual data transmitted over a network. Does this mean that, in the Redux context, the payload of an action is the data passed along when an action is dispatched to change the Redux state?
tl;dr
Payload is an unofficial, community-accepted (de facto) naming convention for the property that holds the actual data in a Redux action object.
Official Documentation
The official documentation only states that a Redux action has to be a plain object and needs a string action type:
A plain object describing the change that makes sense for your application. ... Actions must have a type field that indicates the type of action being performed. Types can be defined as constants and imported from another module. It's better to use strings for type than Symbols because strings are serializable. Other than type, the structure of an action object is really up to you. If you're interested, check out Flux Standard Action for recommendations on how actions could be constructed.
Community Best Practices
Lots of things are not standardized in Redux, so you have maximum flexibility to do things your own way; but since most of us don't want to come up with a custom solution for every little detail, the community tends to establish best practices.
To separate the action's type from the regular data, the payload property is used. What should go into payload and what should sit on the same level as it is debatable, but a popular standard (recommended by the official docs too) is the Flux Standard Action, which states that besides the official requirements you may add payload, error, and meta properties. Here the payload is defined as:
The optional payload property MAY be any type of value. It represents the payload of the action. Any information about the action that is not the type or status of the action should be part of the payload field. By convention, if error is true, the payload SHOULD be an error object.
Payload is the keyed data (the key-value pairs) in your actions that is passed around between reducers in your Redux application. For example:
const someAction = {
  type: "Test",
  payload: { user: "Test User", age: 25 },
}
It is a generally accepted convention to have a type and a payload for an action. The payload can be any valid JS type (array, object, etc.).
Hope this clarifies your doubt!
An action object has a type:
{
  type: "DELETE_POST",
  id: 123
}
Besides the type, it usually has some data that provides more information about the action. This is called the "payload". In the above action object, the id is the payload.
Some programmers would write it in such a way:
{
  type: "DELETE_POST",
  payload: {
    id: 123
  }
}
and it is mainly a matter of style / convention.
Some more details:
In some other situations, the payload could be an array of data, such as an array of JavaScript objects.
Wikipedia defined it quite well:
In computing and telecommunications, the payload is the part of transmitted data that is the actual intended message. Headers and metadata are sent only to enable payload delivery.
Can we say that the action type is also part of the payload? Perhaps we can, if we look at it this way:
instruction: BUY_FROM_SUPERMARKET
items: ["egg", "milk", "lettuce"]
In this case, I think it is reasonable to consider BUY_FROM_SUPERMARKET part of the message and therefore part of the payload; in Redux, however, it is not considered part of the payload. I guess that is just the way it is.
https://redux.js.org/tutorials/fundamentals/part-2-concepts-data-flow
Actions
An action is a plain JavaScript object that has a type field. You can think of an action as an event that describes something that happened in the application.
The type field should be a string that gives this action a descriptive name, like "todos/todoAdded". We usually write that type string like "domain/eventName", where the first part is the feature or category that this action belongs to, and the second part is the specific thing that happened.
An action object can have other fields with additional information about what happened. By convention, we put that information in a field called payload.
A typical action object might look like this:
const addTodoAction = {
  type: 'todos/todoAdded',
  payload: 'Buy milk'
}
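To show how that payload is actually consumed, here is a minimal reducer sketch handling the addTodoAction above (the state shape is assumed for illustration):
const initialState = { todos: [] };

function todosReducer(state = initialState, action) {
  switch (action.type) {
    case 'todos/todoAdded':
      // action.payload carries the new todo's text ('Buy milk' above)
      return { ...state, todos: [...state.todos, { text: action.payload, done: false }] };
    default:
      return state;
  }
}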

How to remove Cloud Firestore field type specifiers when using REST API?

I totally made up the name "type specifiers." What I mean is the stringValue key in front of a value. Usually I would expect a more-standard response: "name" : "name_here".
{
  "fields": {
    "name": {
      "stringValue": "name_here"
    }
  }
}
Is it possible to remove those when making a GET call?
More importantly, it would be nice to understand why it's structured like this, even for POST-ing data. The easy answer is probably that Cloud Firestore, unlike the Realtime Database, needs to know the specific types, but what are the deeper reasons? Is there an "official" name for formatting like this, so I could do more research?
For example, is the reasoning at all related to Protocol Buffers? Is there a way to request a protobuf instead of JSON?
Is it possible to remove those when making a GET call?
In short: no. The Firestore REST API GET returns an instance of Document.
See https://firebase.google.com/docs/firestore/reference/rest/v1beta1/projects.databases.documents#Document
Schema:
{
  "name": string,
  "fields": {
    string: {
      object(Value)
    },
    ...
  },
  "createTime": string,
  "updateTime": string,
}
Regarding the "Protocol Buffer": When the data is deserialized you could just have a function to convert into the structure you wish to use, e.g. probably using the protocol buffers if you wish but as there appear to be libraries for SWIFT, OBJECTIVE-C, ANDROID, JAVA, PYTHON, NODE.JS, GO maybe you won’t need to use the REST API and craft a Protocol Buffer.
Hopefully this addresses your "more importantly" comment:
As you alluded to in your question, Firestore has a different data model from the Realtime Database.
The Realtime Database data model allows JSON objects with whatever schema and keywords you want to define.
As you point out, the Firestore data model uses predefined schemas; in that respect, some of the keywords and structure cannot be changed.
The Cloud Firestore Data Model is described here: https://firebase.google.com/docs/firestore/data-model
Effectively, the data model is a hierarchy of collections and documents, where a document can contain a subcollection, and the keywords "name", "fields", "createTime", "updateTime" belong to a Firestore document (a pre-defined JSON document schema).
A successful Firestore REST API GET request results in a Document instance, which could contain a collection of documents or a single document. See https://firebase.google.com/docs/firestore/reference/rest/. The API discovery document also gives some detail about the API:
https://firestore.googleapis.com/$discovery/rest?version=v1beta1
An example REST API URL structure is of the form:
https://firestore.googleapis.com/v1beta1/projects/<yourprojectid>/databases/(default)/documents/<collectionName>/<documentID>
It is possible to mask certain fields in a document, but the Firestore Document schema will still persist. See the three example GET requests:
collection https://pastebin.com/98qByY7n
document https://pastebin.com/QLwZFGgF
document with mask https://pastebin.com/KA1cGX3k
Looking at another example, the REST API to run Queries
https://firebase.google.com/docs/firestore/reference/rest/v1beta1/projects.databases.documents/runQuery
the response body is of the form:
{
"transaction": string,
"document": {
object(Document)
},
"readTime": string,
"skippedResults": number,
}
In summary:
The Realtime Database REST API returns the JSON for the object according to the path/nodes, as per your "more-standard response".
The Firestore REST API returns a specific Firestore predefined response structure.
There are API libraries available for several languages, so it may not be necessary to use the REST API and craft your own Protocol Buffer, but if you needed to, it's probably feasible.
I don't understand why somebody would just say "you can't" without trying to think of a solution to help! Is that really problem solving?
Anyway, I created a script that may help you (maybe it's too late now, haha).
The script encodes the JSON and then works on it as a string to modify and remove the Google type fields (a cheap operation).
It's simple code; I know you can improve it if necessary!
WARNING!!
You may have problems with values that contain '{}' or '[]'. This can be solved with a foreach that replaces those characters, inside every string value that contains them, with some other char (like '◘' or '♦', a char you know will never appear in a value).
E.g.: Hi {Lorena}! ------> Hi ◘Lorena♦!
After the processing, convert them back to '{}' or '[]'.
YOU CAN'T HAVE FIELDS WITH THE SAME NAMES AS GOOGLE'S TYPE FIELDS
E.g.: stringValue, arrayValue, etc.
You can see and download the script in this link:
https://github.com/campostech/scripts-helpers/blob/master/CLOUD%20STORE%20JSON%20FIELDS%20REMOVER/csjfr.php

Reading data from CMF/PHPCR

I'm trying to use the CMF for back-office content editing. For the purposes of local content editing, the CMF works fine. But then I want to send this data to another server using a custom data structure, one that is completely different from what PHPCR uses.
Does the CMF provide any kind of API or service to query its repository? For instance, my entities are Pages, which contain Sections, which contain Articles that finally contain the properties "title" and "body". I want to send this structure as JSON to another server without all the overhead present in PHPCR.
{
  pageTitle: "Home",
  sections: [
    {
      sectionTitle: "firstSection",
      articles: [
        {
          title: "Hello",
          body: "Welcome to this page"
        }
      ]
    }
  ]
}
The CMF provides quite a few components, so I am not sure exactly which you want to use and which you want to skip.
For example, for inline editing you could point things to a different JSON-LD-capable backend.
If you want to use Sonata for administration, then it gets a bit more complicated. In theory you can create a new Jackalope transport layer that simply talks to some REST service which will enable CMF to read/write from it just like with the other Jackalope transports. In fact I have done a proof of concept once to use the Prismic.io service in exactly this way, though I only implemented the read part.
Maybe the best way to answer your question would be for you to state which parts of the CMF you do want to use, rather than just saying you do not want to use any of the existing PHPCR implementations.
