Working on dynamic responses from a data source in Klipfolio

Basically, in our data source I have used the Facebook Graph API to get the list of all posts, including likes, shares, and comments for each post.
The JSON response I receive is inconsistent about the "shares" key: within the JSON array of Facebook post objects, some posts have a "shares" key and value and some do not have it at all.
So, when this data source is used in our Klip, the "shares" values are not mapped correctly against the other post details.
This is because a reference like #/data/shares/count returns the share count for each post, but where the "shares" key is absent for a post, the value is filled in by the next "shares" value that is found. It needs to be 0 instead, so that the data in the Klip is mapped exactly.
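For illustration, the parsed response looks roughly like this (shown here as a Python literal; the IDs, messages, and counts are made up, and note that the second post has no "shares" key at all):
posts_response = {
    "data": [
        {"id": "123_456", "message": "First post", "shares": {"count": 12}},
        {"id": "123_789", "message": "Second post"},  # no "shares" key at all
    ]
}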
DATA SOURCE
KLIP

Because Facebook does not return a JSON record when there is no data for it, you will need to manipulate the data with XPath axes. If you want to pad a 0 where there is no record, you will also need to use LOOKUP(). For example:
Shares = #/data/shares/count
66 records
IDs where there are shares = #/data/id[preceding-sibling::count]
66 records
LOOKUP(#/data/id,
#/data/id[preceding-sibling::count],
#/data/shares/count)
LOOKUP will return 100 records and pad where there is no share count value.
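The same padding logic, sketched in Python for clarity (this only models what the LOOKUP formula does, it is not Klipfolio syntax; the sample data is made up):
# posts as returned by the Graph API; some have no "shares" key
posts = [
    {"id": "123_456", "shares": {"count": 12}},
    {"id": "123_789"},                        # this post has no "shares" key
    {"id": "123_999", "shares": {"count": 3}},
]
# keys / results, analogous to #/data/id[preceding-sibling::count] and #/data/shares/count
counts_by_id = {p["id"]: p["shares"]["count"] for p in posts if "shares" in p}
# look up every post id (analogous to #/data/id) and pad missing entries with 0
padded = [counts_by_id.get(p["id"], 0) for p in posts]
print(padded)  # [12, 0, 3]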

Related

Need help to iterate through array of JSON response in TOSCA

I just started working with TOSCA, and I need help with a technical issue I am facing.
I have an API that returns an array of objects, and my test condition should validate a particular field in every object in the array.
When I scanned my API in TOSCA, I see that an item attribute is created which has all the fields within it. As per the source, we can extract data from an item either by setting the item to "$index" or by setting the value to a specific index (index==1),
or like this.
But I don't want to iterate like this, because the number of items may vary for each set of test data and I don't want to hard-code the response by index. It fails with a new data set, as below:
with one set of test data I got four records, but in the next iteration the response had only three records and the data had changed, so the verification failed.
Can someone help me find a solution to iterate/loop through all the items at once and extract the data into a buffer?
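TOSCA's API scan and verification steps are configured in the tool itself, but the intended logic can be sketched in Python; the field name "status" and the expected value are placeholders for whatever you need to validate:
import json

# response_body would come from the API under test
response_body = '[{"status": "OK"}, {"status": "OK"}, {"status": "FAILED"}]'
items = json.loads(response_body)

# validate the field on every object, however many items the response contains
failures = [i for i, item in enumerate(items) if item.get("status") != "OK"]
if failures:
    print(f"Validation failed for items at indexes {failures}")
else:
    print("All items passed")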

NetSuite System Notes CASE formula returning all notes

I've added a Formula (Date/Time) column to a saved search in NetSuite to return a system note's date.
My CASE formula is returning all of the system notes rows, but I would like a specific row's date, i.e. the 'POP Host Int ID' date.
How can I specify which row to return the date from, or remove the irrelevant rows that have no date?
CASE
WHEN {systemnotes.field} = 'POP Host Int ID' AND {systemnotes.type} = 'Set'
THEN {systemnotes.date}
ELSE NULL
END
It appears that my WHEN logic correctly identifies that the record's system notes contain an entry for 'POP Host Int ID', but in THEN I'm not specifying which row to take the date from, so it returns all rows. I could be wrong about this part.
Example results
Example System Notes for 1 record
Thank you for your assistance.
The CASE statement doesn't determine which rows are returned, only what data is returned for that field. Rather, it is the reference to the systemnotes table that creates a join, which causes each record result to be repeated for every system note entry.
To avoid this, add {systemnotes.field} = 'POP Host Int ID' and {systemnotes.type} = 'Set' as Filters in the Criteria tab instead of in the WHEN conditions. You can then just add the field under results instead of needing a formula.
Edit in response to comment below:
In cases where you need one result per base record (user), but they don't all have valid values from the joined table (system notes), I'd suggest grouping the results by user and using aggregate functions for all the columns. E.g. for the column in question, I'm assuming you are getting one valid result and a lot of blanks per user. If you group by user and set the summary function to MAX, you should get just one result, with the valid value returned. If no valid value exists in the system notes, you would still get a result for the user and that field would be blank.
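As a rough model of that grouping behaviour (this is plain Python, not NetSuite; the rows and dates are made up):
from collections import defaultdict

# one row per (user, system note); the date is blank unless the note matched the CASE
rows = [
    {"user": "A", "pop_host_date": None},
    {"user": "A", "pop_host_date": "2023-01-15"},
    {"user": "B", "pop_host_date": None},
]

# grouping by user and taking MAX collapses the blanks: one row per user,
# keeping the single non-blank date where one exists
grouped = defaultdict(list)
for row in rows:
    grouped[row["user"]].append(row["pop_host_date"])

result = {user: max((d for d in dates if d is not None), default=None)
          for user, dates in grouped.items()}
print(result)  # {'A': '2023-01-15', 'B': None}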
If you are creating a saved search, the place to do this is in the Criteria section.
The views you've shared are for the System Notes pertaining to a single record.
For those views you could just use the Field selector in the Filters section to select your POP Host Int ID field.
For a saved search, you would use the Advanced view and scroll down the criteria field list. Near the bottom are the System Notes fields; you can filter on Field, Date, etc.

Kibana: How to aggregate for all UUIDs

I am tracking my url hit counts and want to aggregate them.
I have a few URLs of the following form:
example.com/service/{uuid}
When I view this in Kibana, it lists the total hit count of each URL individually, so my table has something like:
example.com/homepage 100 count
example.com/service/uuid1 10 count
example.com/service/uuid2 5 count
Is there an easy way to combine all uuids into 1 entry?
I was thinking of replacing the UUIDs with a static string; however, the admins have blocked regex support, which makes the replacement very difficult. So I am trying to see if there is any other way before doing that.
Thanks!
I would suggest creating a new field with scripted fields.
The new field would return the value example.com/service/uuid if the URL contains the word "uuid"; otherwise it would return the URL as it is.
Then you could do the aggregation on the new field.
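A quick sketch of that normalization logic in Python (Kibana scripted fields are actually written in Painless; this only models the idea, and the URL values are made up):
def normalize_url(url: str) -> str:
    # mirror the scripted-field logic: collapse any URL containing "uuid" into one bucket
    # (in practice you might check the /service/ path prefix instead)
    if "uuid" in url:
        return "example.com/service/uuid"
    return url

hits = ["example.com/homepage", "example.com/service/uuid1", "example.com/service/uuid2"]
print([normalize_url(h) for h in hits])
# ['example.com/homepage', 'example.com/service/uuid', 'example.com/service/uuid']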

Internal : Collection fields are defined but cannot be matched to the incoming collection - in blueprism

I want to collect financial historical data from the NASDAQ page https://www.nasdaq.com/symbol/ge/historical. I am spying the date element and, using "Get Table", I can get the whole table data (date, open, high, ...), which I put into a collection. The problem is that I am not able to give the collection columns their names. I made six fields, one for each of the six columns, as you can see in the attached images. But when I run the program I get the error "Internal : Collection fields are defined but cannot be matched to the incoming collection - The collection definition does not contain the field Column1". If I don't add the fields, I get the data in a collection with the default column names Column1, Column2, ..., Column6, but I want the columns to have their specific names. I think the problem is the data type I am using when creating the fields in the collection; I have tried different combinations of data types, but still no luck. Please help me with this.
image 1, image 2, image 3, image 4
The error is exactly what it says: the fields cannot be matched; in other words, the field names have to match. Since you get the default field names from the Read stage, you should either rename the fields before passing the collection to the process, or have the collection that receives it at the process level have no fields defined (it will then get the headers defined by the object, and you can rename the fields after that or just use the default column names, though that isn't practical).
To rename the fields, you can use the default object "Utility - Collection Manipulation", with either the "Rename Collection Fields" or the "Rename Field" action.
Rename Collection Fields
You will have to supply the collection containing the read table (Main Collection) and a second collection (New Headers) that has the same headers as the read-table collection and holds, in its first row, the new header names (it was designed like that; it's not that intuitive and took me a good while to figure out). The New Headers collection should look like the one below:
Rename Field
For this one, you will need to loop over each header. Collection In will be the collection containing the read table, and you feed in each header to change, one at a time (e.g. the first loop iteration will have Column1 as Field Name and date as New Name, the second will have Column2 and open, etc.).
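Conceptually, either action is just applying a mapping from the default names to the real headers. A Python sketch of that mapping (the last three column names, "low", "close" and "volume", are guesses for the remaining columns, and the sample row values are made up):
# default headers produced by the Get Table read stage
default_headers = ["Column1", "Column2", "Column3", "Column4", "Column5", "Column6"]
# desired headers; "low", "close" and "volume" are assumptions for the remaining three columns
new_headers = ["date", "open", "high", "low", "close", "volume"]
rename_map = dict(zip(default_headers, new_headers))

# sample row keyed by the default field names
rows = [{"Column1": "02/01/2019", "Column2": "8.05", "Column3": "8.13",
         "Column4": "7.92", "Column5": "8.01", "Column6": "101000000"}]

renamed = [{rename_map[key]: value for key, value in row.items()} for row in rows]
print(renamed[0]["date"])  # 02/01/2019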

How to describe (enumerate) picklist entries valid for a specific record type in Salesforce?

In Apex code I want to enumerate the legal values for a picklist field. To do this I can just call Account.Foobar__c.getDescribe().getPicklistValues() and I get a list of Schema.PicklistEntry values.
However, it's possible to set up multiple record types for a given sObject. For example, Account might have "Manufacturer", "Distributor" and "Retailer" record types. In the Salesforce setup it is possible to edit (limit) the picklist entries for each field based on record type, so Retailer-type accounts might only use a subset of the picklist values for the Foobar field.
So basically I want Account.Foobar__c.getDescribe().getPicklistValues('Retailer'), but that is not valid syntax. The validFor method looks promising, but it seems to apply only to field-dependent picklists; a picklist filtered only by record type returns false for isDependentPicklist.
I know this is an old post, but maybe the info below will help someone who still needs the answer.
I found here that one can actually get a list of record-type-specific picklist values by making a describeLayout() call.
Using your example (C#):
DescribeLayoutResult result = binding.describeLayout("Account", new string[] { "01230000000xxXxXXX" } );
PicklistEntry[] values = result.recordTypeMappings[0].picklistsForRecordType[12345].picklistValues;
Replace "01230000000xxXxXXX" with a RecordTypeId of your Retailer record type object. Use the query "SELECT Id FROM RecordType WHERE Name = 'Retailer'" to get the value.
Replace 12345 with an index of your picklist object that you would like to get values of.
You can't do it in pure Apex AFAIK, unfortunately. The metadata API does expose it.
Related opinions: http://boards.developerforce.com/t5/Apex-Code-Development/Any-way-to-obtain-picklist-values-by-record-type/td-p/287563
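If REST is an option, the User Interface API also exposes record-type-specific picklist values. A rough Python sketch (this is a different technique from the SOAP describeLayout() call above; the instance URL, API version, access token, and record type id are placeholders, and the endpoint shape is taken from the standard UI API picklist-values resource):
import requests

INSTANCE_URL = "https://yourInstance.my.salesforce.com"  # placeholder
ACCESS_TOKEN = "<access token>"                          # placeholder
RECORD_TYPE_ID = "01230000000xxXxXXX"                    # Retailer record type id

url = (f"{INSTANCE_URL}/services/data/v52.0/ui-api/object-info/"
       f"Account/picklist-values/{RECORD_TYPE_ID}/Foobar__c")
resp = requests.get(url, headers={"Authorization": f"Bearer {ACCESS_TOKEN}"})
resp.raise_for_status()

# "values" holds only the picklist entries valid for that record type
for entry in resp.json()["values"]:
    print(entry["label"], entry["value"])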
