How to make dynamic input parameters based on another query response in an Integromat app? - make.com

I am implementing an API that has an endpoint which returns a list of custom fields for objects such as company, lead, etc.
When creating a lead, the user should be able to enter custom fields alongside the hardcoded ones.
How can this be done dynamically in the Mappable parameters of a module?

You can create an RPC (a remote procedure that fetches the dynamic input fields via your API) and combine it with static parameters, as in the example below:
Mappable parameters:
[
    {
        "name": "id",
        "type": "uinteger",
        "label": "Static parameter"
    },
    "rpc://getDynamicFields"
]
Ref: https://docs.integromat.com/apps/structure-blocks/expect#rpc
How to create an RPC is described here: https://docs.integromat.com/apps/app-structure/rpcs#fields-rpc
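A Fields RPC of this kind is defined in the app's RPC section. A minimal sketch is below; note that the endpoint path, query parameter, and response property names are assumptions about your API, not something the docs prescribe:

```json
{
    "url": "/custom-fields?object=lead",
    "method": "GET",
    "response": {
        "iterate": "{{body.fields}}",
        "output": {
            "name": "{{item.key}}",
            "label": "{{item.label}}",
            "type": "text"
        }
    }
}
```

The `iterate` directive walks the array returned by your custom-fields endpoint, and `output` maps each item to a mappable parameter definition that is appended after the static parameters.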

Related

How to create swagger documentation for dynamic request/response

We have an API application that is driven dynamically by SQL-defined metadata. It is a reporting application, and the report is determined by the kind of request passed into the API. This means that if a JSON request is passed like this -
ex 1
{
    "Field1": "Value1",
    "GroupBy": ["GroupByValue1", "GroupByValue2"]
}
Then it gives the resultset below -
{
    "GroupByValue1": "SomeValue1",
    "GroupByValue2": "SomeValue2",
    ... other fields based on GroupByValue1 and GroupByValue2
}
ex 2
{
    "Field1": "Value1",
    "GroupBy": ["GroupByValue3"]
}
Then it gives the resultset below -
{
    "GroupByValue3": "SomeValue1",
    ... other fields based on GroupByValue3
}
So the mapping of request vs. fields for that request is defined in the SQL database.
We need to generate Swagger documentation for this kind of dynamic request. Since we use Swashbuckle for the Swagger documentation, we have to give it a specific resultset based on the request being passed. We have APIs that give us the request/resultset relationship, but is there a way to generate the documentation completely dynamically (using C# or TypeScript) based on that API?
You can create dynamic Swagger documentation using an IDocumentFilter.
We also drive this from dynamic SQL metadata in our case.
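The IDocumentFilter itself is C#-specific, but the core idea, turning the metadata-defined request/resultset mapping into schema objects, can be sketched independently of Swashbuckle. The metadata shape and field names below are hypothetical placeholders:

```python
# Hypothetical metadata: GroupBy value -> result fields, as it might
# be loaded from the SQL-defined metadata tables.
METADATA = {
    "GroupByValue1": ["SomeField1", "SomeField2"],
    "GroupByValue3": ["SomeField3"],
}

def response_schema(groupby_values):
    """Build an OpenAPI object schema for the resultset produced by the
    given GroupBy values, using the metadata mapping."""
    properties = {}
    for value in groupby_values:
        # each GroupBy value appears as a key in the resultset...
        properties[value] = {"type": "string"}
        # ...and contributes its own metadata-defined result fields
        for field in METADATA.get(value, []):
            properties[field] = {"type": "string"}
    return {"type": "object", "properties": properties}
```

A document filter would then call something like this per known request shape and register the resulting schemas in the Swagger document.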

Sparse fields on complex JSON API attributes

According to #document-resource-object-attributes it is allowed to have 'complex' values for attributes, i.e. any valid JSON value.
With #fetching-sparse-fieldsets it is possible to select a subset of the content. However, all the examples match on the attribute name.
For example:
{
    "data": [
        {
            "type": "dogs",
            "id": "3f02e",
            "attributes": {
                "name": "doggy",
                "body": {
                    "head": "small",
                    "legs": [
                        {
                            "position": "front",
                            "side": "right"
                        },
                        {
                            "position": "front",
                            "side": "left"
                        }
                    ],
                    "fur": {
                        "color": "brown"
                    }
                }
            }
        }
    ]
}
In the result I am only interested in the name, body.head and body.fur.color.
What would be a correct way to solve this (preferably without requiring relations, since this data is valid)?
JSON:API's Sparse Fieldsets feature allows a client to request only specific fields of a resource:
A client MAY request that an endpoint return only specific fields in the response on a per-type basis by including a fields[TYPE] parameter.
https://jsonapi.org/format/#fetching-sparse-fieldsets
A field is either an attribute or a relationship in JSON:API:
A resource object’s attributes and its relationships are collectively called its “fields”.
https://jsonapi.org/format/#document-resource-object-fields
Sparse Fieldsets are not meant to have an impact on the value of an attribute or a relationship. If you have such a need you shouldn't model the data as a complex value but expose it as a separate resource.
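To make that all-or-nothing granularity concrete, here is a rough sketch of what a server applying a fields[dogs]=name parameter does. The whole "body" attribute is kept or dropped in full; there is no spec-defined way to keep only body.head:

```python
def apply_sparse_fieldset(resource, requested_fields):
    """Apply a JSON:API fields[TYPE] parameter to one resource object.
    Attributes are kept or dropped whole; nested paths such as
    "body.head" are not part of the Sparse Fieldsets feature."""
    attributes = resource.get("attributes", {})
    return {
        "type": resource["type"],
        "id": resource["id"],
        "attributes": {
            name: value
            for name, value in attributes.items()
            if name in requested_fields
        },
    }
```

With the dogs example above, requesting fields[dogs]=name returns only the name attribute; the entire body object disappears, which is why modeling head, legs, and fur as separate resources is the recommended route.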
Please note that your database schema and the resources exposed by your API don't need to be the same. Actually, it often makes sense not to have a 1-to-1 relationship between database tables and the resources in your JSON:API.
Don't be afraid of having multiple resources. It's often much better for the long-term than having one resource with complex objects:
You can include the related resource (e.g. dog-bodies, dog-legs, dog-furs in your case) by default.
You can generate the IDs for those resources automatically based on the persisted ID of a parent resource.
You can have much stricter constraints and easier documentation for your API if you have separate resources.
You can reduce the risk of collisions, as you can support updating specific parts (e.g. the color attribute of a dog-furs resource) rather than replacing the full body value of a dogs resource.
The main drawback that I currently see with having multiple resources instead of one is the limitation that you can't create or update more than one resource in the same request with JSON:API v1.0. But it's very likely that the upcoming v1.1 won't have that limitation anymore. An official extension called Atomic Operations has been proposed for that use case by a member of the core team working on the spec.

How to remove Cloud Firestore field type specifiers when using REST API?

I totally made up the name "type specifiers." What I mean is the stringValue key in front of a value. Usually I would expect a more-standard response: "name" : "name_here".
{
    "fields": {
        "name": {
            "stringValue": "name_here"
        }
    }
}
Is it possible to remove those when making a GET call?
More importantly, it would be nice to understand why it's structured the way it is, even for POST-ing data. The easy answer is probably that Cloud Firestore, unlike the Realtime Database, needs to know the specific types, but what are the deeper reasons? Is there an "official" name for formatting like this, so I could do more research?
For example, is the reasoning related to Protocol Buffers? Is there a way to request a protobuf instead of JSON?
Is it possible to remove those when making a GET call?
In short, no. The Firestore REST API GET returns an instance of Document.
See https://firebase.google.com/docs/firestore/reference/rest/v1beta1/projects.databases.documents#Document
{
    "name": string,
    "fields": {
        string: {
            object(Value)
        },
        ...
    },
    "createTime": string,
    "updateTime": string
}
Regarding the "Protocol Buffer": when the data is deserialized, you could simply have a function that converts it into the structure you wish to use, possibly via protocol buffers. But as there appear to be libraries for Swift, Objective-C, Android, Java, Python, Node.js and Go, maybe you won't need to use the REST API and craft a Protocol Buffer at all.
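As a sketch of such a conversion function, here is one way to strip the type specifiers recursively in Python. It assumes the Value type names from the Firestore REST reference (stringValue, integerValue, mapValue, arrayValue, etc.) and is not an official client:

```python
def unwrap(value):
    """Convert one Firestore REST `Value` object into a plain Python value."""
    if "mapValue" in value:
        return {k: unwrap(v)
                for k, v in value["mapValue"].get("fields", {}).items()}
    if "arrayValue" in value:
        return [unwrap(v) for v in value["arrayValue"].get("values", [])]
    if "integerValue" in value:
        # Firestore encodes int64 values as JSON strings
        return int(value["integerValue"])
    if "doubleValue" in value:
        return float(value["doubleValue"])
    # stringValue, booleanValue, timestampValue, nullValue, ...:
    # the wrapped value can be used as-is
    return next(iter(value.values()))

def document_to_dict(doc):
    """Strip the type specifiers from a Firestore REST Document."""
    return {k: unwrap(v) for k, v in doc.get("fields", {}).items()}
```

Unlike string replacement, this walks the structure, so field values containing '{}' or '[]' are handled safely.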
Hopefully this addresses your "More importantly" comment:
As you alluded to in your question, Firestore has a different data model from the Realtime Database.
The Realtime Database data model allows JSON objects with the schema and keywords as you want to define them.
As you point out, the Firestore data model uses predefined schemas; in that respect some of the keywords and structure cannot be changed.
The Cloud Firestore data model is described here: https://firebase.google.com/docs/firestore/data-model
Effectively the data model is collection/document, where a document can contain a subcollection, and the keywords "name", "fields", "createTime", "updateTime" belong to a Firestore document (a pre-defined JSON document schema).
A successful Firestore REST API GET request results in a Document instance, which could contain a collection of documents or a single document. See https://firebase.google.com/docs/firestore/reference/rest/. The API discovery document also gives some detail about the API:
https://firestore.googleapis.com/$discovery/rest?version=v1beta1
An example REST API URL structure is of the form:
https://firestore.googleapis.com/v1beta1/projects/<yourprojectid>/databases/(default)/documents/<collectionName>/<documentID>
It is possible to mask certain fields in a document, but the Firestore Document schema will still persist. See the three example GETs:
collection https://pastebin.com/98qByY7n
document https://pastebin.com/QLwZFGgF
document with mask https://pastebin.com/KA1cGX3k
Looking at another example, the REST API to run Queries
https://firebase.google.com/docs/firestore/reference/rest/v1beta1/projects.databases.documents/runQuery
the response body is of the form:
{
    "transaction": string,
    "document": {
        object(Document)
    },
    "readTime": string,
    "skippedResults": number
}
In summary:
The Realtime database REST API will return the JSON for the object according to the path/nodes as per your “more-standard response”.
The Firestore REST API returns a specific Firestore predefined response structure.
There are API libraries available for several languages, so it may not be necessary to use the REST API and craft your own Protocol Buffer, but if you needed to, it's probably feasible.
I don't understand why somebody would just say "you can't" without trying to think of a solution that helps! Is that really problem-solving?
Anyway, I created a script that will help you (maybe it's too late now, hahaha).
The script encodes the JSON and then manipulates it as a string to remove the Google type fields (a slow process).
It's simple code; I know you can improve it if necessary!
WARNING!!
Maybe you will have problems with values that contain '{}' or '[]'. This can be solved with a foreach that converts all strings containing these elements to some other char (like '◘' or '♦', some char that you know won't appear in a value).
Ex.: Hi {Lorena}! ------> Hi ◘Lorena♦!
After the process, convert back to '{}' or '[]'.
YOU CAN'T HAVE FIELDS WITH THE SAME NAMES AS THE GOOGLE FIELDS
Ex.: stringValue, arrayValue, etc.
You can see and download the script in this link:
https://github.com/campostech/scripts-helpers/blob/master/CLOUD%20STORE%20JSON%20FIELDS%20REMOVER/csjfr.php

Cannot create a new entity (created with ECK) through API using REST module

Here is my situation: I am using the ECK module with Drupal 8 to create entities and bundles, and the new REST core module to create API features.
I have installed the REST_UI module, and I enabled the route for the entity I am interested in.
Here's my issue: I created an entity type and a bundle with ECK, and I can then create a new entity when I am calling the /entity/entity_type_name endpoint with a POST request, giving the following parameter as json:
{
    "type": [{"target_id": "bundle_name"}],
    "field_test_text": [{"value": "test"}]
}
However, this only works when I have a single entity type in my list of entities. If, for example, I create a new entity type and then run the same request, I get the following error message:
Drupal\Core\Entity\Exception\AmbiguousEntityClassException: Multiple entity types found for Drupal\eck\Entity\EckEntity
I understand that apparently, now that I have multiple entity types, the Entity API is not able to understand what should be the type of the entity it has to create (which I find pretty weird, considering that I am providing it in the URL under this form /entity/entity_type_name and that there are different routes available for the different types of entities that I have).
I guess I need to pass an extra parameter in my JSON for Drupal to understand what kind of entity it should create, but what would that parameter be? I've been trying to look online and in the documentation, but I cannot figure out how to do this.
I had the same problem, and here is how I resolved it:
Enable the HAL module.
Enable hal_json under Accepted request formats in /admin/config/services/rest for that particular resource.
Then, in your POST request, use headers:
Content-Type: application/hal+json
X-CSRF-Token: [AUTH SESSION TOKEN]
And the body of the request being:
{
    "_links": {
        "type": {
            "href": "http://localhost:8080/rest/type/[ENTITY_TYPE]/[ENTITY_BUNDLE]"
        }
    },
    "title": [
        {"value": "This is a new entity title"}
    ],
    "field_example": [
        {"value": "This is an example of a custom text field value."}
    ]
}
Drupal is reading the entity type and bundle from the _links.type.href string.
For example, if your entity type was automobile and your bundle was car, your URL would be "http://localhost:8080/rest/type/automobile/car"
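Putting the pieces of this answer together, the request could be assembled like this in Python. The base URL, endpoint path, `_format=hal_json` query string, token, and field names are placeholders based on the question and answer above, not verified against a particular Drupal site:

```python
import json
import urllib.request

def build_entity_request(base, entity_type, bundle, fields, csrf_token):
    """Assemble the HAL+JSON POST request that Drupal's REST module expects.
    `fields` holds the entity field values,
    e.g. {"title": [{"value": "..."}]}."""
    body = {"_links": {"type": {"href": f"{base}/rest/type/{entity_type}/{bundle}"}}}
    body.update(fields)
    return urllib.request.Request(
        f"{base}/entity/{entity_type}?_format=hal_json",
        data=json.dumps(body).encode(),
        headers={
            "Content-Type": "application/hal+json",
            "X-CSRF-Token": csrf_token,
        },
        method="POST",
    )
```

Sending the request is then a matter of `urllib.request.urlopen(req)`; the key point is that the entity type and bundle travel in `_links.type.href`, not only in the URL path.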

How to send Json to Azure Appinsights with c# library

I'm implementing Azure's Application Insights, and the API I found only lets me send a Dictionary of type string and string. Likewise, if I use TraceTelemetry, its Properties member is again a dictionary of string and string.
However, when I add a field to the custom properties ("cars" in my case) whose value is serialized JSON, it results in the following payload being sent to Application Insights:
"baseData": {
    "ver": 2,
    "message": "Test Message",
    "properties": {
        "cars": "[{\"Id\":0,\"Price\":{\"Value\":12.32,\"Currency\":.....
    }
}
Notice the backslashes making it one JSON value.
But the Application Insights portal will understand it - and parse it.
So I can use the Microsoft-provided C# API, but it just looks ugly, and it seems the underlying API is JSON anyway, so why is the API limited to Dictionary<string, string>?
It is because of filtering in the Azure Portal. The main purpose of Properties (Dictionary<string, string>) is to provide the ability to find specific requests, exceptions, etc. You are also limited in the number of properties (it was about 200). Typical properties are "username", "isAuthenticated", "role", "score", "isAnonymous", "portalName", "group", "product", etc. - typically global properties.
If you want to send a whole object / JSON, you can use TrackTrace(). You can then find all the traces related to a concrete request in the portal.
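The pattern from the question, serializing complex values into the string/string dictionary yourself, is straightforward in any language. A Python sketch of that flattening step (the telemetry client itself is not shown; this only prepares the properties dictionary):

```python
import json

def to_telemetry_properties(values):
    """Flatten arbitrary values into the string->string dictionary that
    trace/event properties accept; complex values are JSON-serialized,
    exactly as in the "cars" example from the question."""
    return {
        key: value if isinstance(value, str) else json.dumps(value)
        for key, value in values.items()
    }
```

Simple, filterable values (username, role, ...) stay as plain strings, while structured payloads ride along as one JSON-encoded property value that the portal can still parse.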
