How to call a tabular function inside a for loop - azure-data-explorer

I have one table with 3 columns:
id
start
end
I am applying a function with 3 arguments to this table, e.g. Function1(id, start, end).
The function works when I pass static values to it, but I want to run it over the full table.
How can I do this?
This is my use case, actually:
datatable(id:string, start:datetime, end:datetime)
[
'Z213',datetime(2021-08-06T02:12:37.1597030Z),datetime(2021-08-06T17:37:21.8962890Z),
'Z213',datetime(2021-08-06T00:00:25.7896310Z),datetime(2021-08-06T01:59:50.1172850Z),
'Z213',datetime(2021-08-06T02:04:37.1243340Z),datetime(2021-08-06T02:12:37.1352020Z),
'Z213',datetime(2021-08-06T17:45:19.7289570Z),datetime(2021-08-06T23:56:44.8047730Z),
'Z213',datetime(2021-08-06T17:43:23.7238020Z),datetime(2021-08-06T17:43:28.7256000Z),
'Z213',datetime(2021-08-06T02:04:17.1238770Z),datetime(2021-08-06T02:04:24.1256730Z),
'Z213',datetime(2021-08-06T02:02:15.1199760Z),datetime(2021-08-06T02:02:15.1204780Z),
]
|invoke function1('Z213',datetime(2021-08-06T02:12:37.1597030Z),datetime(2021-08-06T17:37:21.8962890Z))
So when I pass the values from the 1st row like this, it works fine, but I want to do something like:
|invoke Function1(id, start, end)

Take a look at tabular functions. Here is an example:
let append_to_column_a=(T:(a:string), what:string) {
T | extend a=strcat(a, " ", what)
};
datatable (a:string) ["sad", "really", "sad"]
| invoke append_to_column_a(":-)")
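One option (just a sketch, assuming Function1 can be rewritten so its per-row logic lives in an extend; the duration column here is only a placeholder for whatever Function1 actually computes) is to make Function1 a tabular function whose first parameter is the table itself, so invoke runs it over every row:
// sketch: tabular version of Function1; 'duration' is a placeholder calculation
let Function1 = (T:(id:string, start:datetime, end:datetime)) {
    T | extend duration = end - start
};
datatable(id:string, start:datetime, end:datetime)
[
'Z213',datetime(2021-08-06T02:12:37.1597030Z),datetime(2021-08-06T17:37:21.8962890Z),
'Z213',datetime(2021-08-06T00:00:25.7896310Z),datetime(2021-08-06T01:59:50.1172850Z)
]
| invoke Function1()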

Related

Querying Humio (LogScale)

I'm trying to create this query for Humio and can't figure out the syntax I need.
I want to use the regex() function on fieldA, put its result in a new field RESULT, and then look for RESULT inside the text of fieldB.
I tried something like:
regex(regex="...(?<RESULT>...)...", field=fieldA)
| in(field=fieldB, values=[*RESULT*])
but it seems I can only pass free text as the values of the in() function.
How can I pass the value of the new field?

Vectorized Operation in R causing problems with custom function

I'm writing out some functions for inventory management. I recently wanted to add a "photo url" column to my spreadsheet by using an API I had used successfully while initially building my inventory. My spreadsheet header looks like the following:
SKU | NAME | OTHER STUFF
I have a getProductInfo function that returns a list of product info from an API I'm calling.
getProductInfo <- function(barcode) {
  # Input: UPC
  # Output: list of product info
  info <- CallAPI(barcode)
  # ... process the API return, remove garbage ...
  return(info)
}
I made a new function that takes my inventory csv as input, and attempts to add a new column with product photo url.
get_photo_url_from_product_info_output <- function(in_list){
  # Input: getProductInfo output. Returns photo URL, or "" if
  # it doesn't exist
  if(in_list$DisplayStockPhotos == TRUE){
    return(in_list$StockPhotoURL)
  } else {
    return("")
  }
}
add_Photo_URL <- function(in_csv){
  # Input: CSV data frame; appends photo url column
  # Requires SKU (UPC); assumes no photo url column yet
  out_csv <- mutate(in_csv, photo =
    get_photo_url_from_product_info_output(
      getProductInfo(SKU)
    )
  )
  return(out_csv)
}
#Call it
new <- add_Photo_URL(old)
My thinking was that R would simply take the SKU from each row, put it through the double function call "as is", and the vectorized dplyr function mutate would just handle it. Unfortunately I was running into all sorts of problems I couldn't understand. Eventually I figured out that the API call was crashing because the SKU field was all messed up as it was being passed in. I put in a breakpoint and found out that it wasn't passing in just one SKU, but instead an entire list (I think?) of SKUs, every row all at once. Something like this:
#Variable 'barcode' inside getProductInfo function contains:
[1] 7.869368e+11 1.438175e+10 1.256983e+10 2.454357e+10 3.139814e+10 1.256983e+10 1.313260e+10 4.339643e+10 2.454328e+10
[10] 1.313243e+10 6.839046e+11 2.454367e+10 2.454363e+10 2.454367e+10 2.454348e+10 8.418870e+11 2.519211e+10 2.454375e+10
[19] 2.454381e+10 2.454381e+10 2.454383e+10 2.454384e+10 7.869368e+11 2.454370e+10 2.454390e+10 1.913290e+11 2.454397e+10
[28] 2.454399e+10 2.519202e+10 2.519205e+10 7.742121e+11 8.839291e+11 8.539116e+10 2.519211e+10 2.519211e+10 2.519211e+10
Obviously my initial getProductInfo function can't handle that, so it'll crash.
How should I modify my code, whether in the input or in the API call, to avoid this vectorized operation issue?
Well, it's not totally elegant, but it works.
I figured out I needed to use lapply, which is usually not my strong suit. Initially I tried to nest them like so:
lapply(SKU, get_photo_url_from_product_info_output(getProductInfo())
But that didn't work. So I just came up with the bright idea of making another function:
get_photo_url_from_sku <- function(barcode){
  return(get_photo_url_from_product_info_output(getProductInfo(barcode)))
}
Call that in the lapply:
out_csv<- mutate(in_csv, photocolumn = lapply(SKU, get_photo_url_from_sku))
And it works great. My speed is only limited by my API calls.
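A small variant (just a sketch, assuming dplyr is loaded and the same helper as above): sapply() simplifies the result to a character vector, so the photo column ends up as plain text rather than a list column:
# sapply returns a character vector instead of a list, so photocolumn
# is an ordinary text column in the data frame
out_csv <- mutate(in_csv, photocolumn = sapply(SKU, get_photo_url_from_sku))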

Filemaker Pro - Using Script to populate report layout

I have a problem where I have a list of fields from a table (not static; it can be modified by the user), and I need to generate a report using these user-selected fields. The report can show all the rows; there is no need for aggregation or filtering.
I thought I could create a report layout and then use a FileMaker script to populate it, but I can't seem to find the right commands. Can someone let me know how I could achieve this?
I'm using FileMaker Pro 18 Advanced.
Thanks in advance!
EDIT: Since you want a dynamic report, I recommend you look up a technique called "Virtual List" for rendering the data.
Here's an example script that iterates over a found set of records and builds the virtual list data in a variable (it doesn't show how to render it though):
# Field names and delimiter
Set Variable [ $delim ; Value: Char(9) // tab character ]
# Set these dynamically with a script parameter
Set Variable [ $fields ; Value: List ( "Contacts::nameFirst" ; "Contacts::nameCompany" ; "Contacts::nameLast" ) ]
Set Variable [ $fieldCount ; Value: ValueCount ( $fields ) ]
Go to Layout [ “Contacts” (Contacts) ; Animation: None ]
Show All Records
Go to Record/Request/Page [ First ]
# Loop over all the records and append a row in the $data variable for each
Set Variable [ $data ; Value: "" ]
Loop
# Get the delimited field values
Set Variable [ $i ; Value: 0 ]
Set Variable [ $row ; Value: "" ]
Loop
Exit Loop If [ Let ( $i = $i + 1 ; $i > $fieldCount ) ]
Set Variable [ $value ; Value: GetField ( GetValue ( $fields ; $i ) ) ]
Insert Calculated Result [ Target: $row ; If ( $i > 1 ; $delim ) & $value ]
End Loop
# Append the new row of data to the list variable
Insert Calculated Result [ Target: $data ; If ( Get ( RecordNumber ) > 1 ; ¶ ) & $row ]
Go to Record/Request/Page [ Next ; Exit after last: On ]
End Loop
# Save to a global variable to show in a virtual list layout
Set Variable [ $$DATA ; Value: $data ]
Exit Script [ Text Result: ]
Please note this code is just one of many possible formats the virtual list can take. A lot of people, myself included, prefer to use JSON objects or arrays for each row of the list, since that automatically handles field values with carriage returns. This is sort of the old-fashioned way. Kevin Frank at FileMaker Hacks has some good recent articles about virtual list techniques if you're interested.
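For reference, the JSON-per-row variant mentioned above could look something like this sketch: initialize $row to "[]" instead of an empty string, and replace the Insert Calculated Result step in the inner loop with the built-in JSONSetElement function:
Set Variable [ $row ; Value: JSONSetElement ( $row ; "[" & ( $i - 1 ) & "]" ; $value ; JSONString ) ]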
PS: another great technique for rendering table data dynamically is to collect the data in a JSON array and render it in a web viewer with https://datatables.net/
I did something like this for the oncology department of UM in 1980 or so, using 4th Dimension and a new plug-in that needed only one line of code to create a web browser with all the functions a doctor might want. The data was placed inside a variable as it was sent/returned, and 4D could use a variable in the report to display the data.
FileMaker does not have this ability built in the way 4D did, so you will have to do it yourself. JSON is the most likely tool that I am familiar with; YouTube has many videos on JSON.
You have two classes of variables for your report: column headers and the column data to display. Fortunately FileMaker is quite good and very easy to design with. Just make a typical report layout and replace the text/header or field names with a JSON variable, e.g. $ColumnName = JSON variable.
Create a JSON calculated field in the database. In that calculated field set the JSON variable, and this can be used for all of the columns.
This is the essence of the idea, with the final result to be determined by you. What you are asking for is not easy and would require serious work by a skilled JSON scripter.
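To illustrate that idea (a hypothetical sketch, not part of the answer above; $$DATA_ROW and "ColumnName" are placeholder names), a single column value can be pulled out of a JSON row with the built-in JSONGetElement function:
JSONGetElement ( $$DATA_ROW ; "ColumnName" )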

How to save a global variable with table format from NetLogo Behaviorspace

I have written fairly complicated code for my ABM (634 agents having interactions; each has different variables, some of which are lists with multiple values that are updated each tick). As I need to save the updated values for all agents, I have defined a global variable using table:make. This table has 634 keys (one key per agent), and each key holds the list of values (from that agent's agents-own list variable) for the corresponding agent. But when I use the name of this table as one of my outputs in BehaviorSpace, the result in the csv file is not a table with keys; it contains only a reference such as {{table: 1296}}. So I was wondering how I could change this variable to be able to get all the values.
If you're happy to do some post-processing with R or something after the fact, then table:to-list might be all you need. For example, with a simple setup example like:
extensions [ table ]
globals [ example-table ]
turtles-own [ turtle-list ]
to setup
  ca
  crt 3 [
    set turtle-list ( list random 10 one-of [ "A" "B" "C" ] random 100 )
  ]
  set example-table table:make
  foreach sort turtles [ t ->
    table:put example-table ( word "turtle_" [who] of t ) [turtle-list] of t
  ]
  reset-ticks
end
And a to-report to clean each table item such that the first item is the key and all other items are the items in the list:
to-report easier-read-table [ table_ ]
  let out []
  foreach table:to-list table_ [ i ->
    set out lput ( reduce sentence i ) out
  ]
  report out
end
You can set up your BehaviorSpace experiment so that one of your reporters is that reporter, e.g. easier-read-table example-table. In the resulting .csv file, the reporter column outputs a list of lists that you can process however you like.
However, I probably wouldn't use the basic BehaviorSpace output for this, but would instead have the experiment call a manual table-output procedure. For example, using the csv extension to make this output-table procedure:
; note: this uses the csv extension in addition to table, i.e. extensions [ table csv ]
to output-table [ filename_ table_ ]
  let out [ ["key" "col1" "col2" "col3"] ]
  foreach table:to-list table_ [ i ->
    set out lput ( reduce sentence i ) out
  ]
  csv:to-file filename_ out
end
This outputs a much more analysis-ready table if you're less comfortable cleaning up the list-of-lists output that, as far as I know, is what you would get from BehaviorSpace. You can call it at the end of your experiment to get a table that is a little nicer to deal with, and you can obviously modify it to report more often if needed, for example at each tick of the experiment (you can also do this in your code to make it a little easier).
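For instance (a sketch reusing the names from the setup above; behaviorspace-run-number is a built-in reporter identifying the current run), the experiment's final commands could be:
output-table ( word "table_output_run_" behaviorspace-run-number ".csv" ) example-table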

Xquery variable concat based on condition

I am trying to concatenate the value of an element based on a certain condition, but I am unable to do so. What's wrong here?
For the sample structure given below, I need to concatenate the values of CId based upon the OutcomeCode code. Say we have OutcomeCode values of OC and PC; then we should display the concatenated value of CId in a string variable.
<v4:ValidateResponse xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:v4="http://service.com/v4">
<v4:Details>
<v4:Detail>
<v4:CId>001</v4:CId>
</v4:Detail>
<v4:OutcomeCode>FC</v4:OutcomeCode>
</v4:Details>
<v4:Details>
<v4:Detail>
<v4:CId>002</v4:CId>
</v4:Detail>
<v4:OutcomeCode>PC</v4:OutcomeCode>
</v4:Details>
<v4:Details>
<v4:Detail>
<v4:CId>003</v4:CId>
</v4:Detail>
<v4:OutcomeCode>OC</v4:OutcomeCode>
</v4:Details>
</v4:ValidateResponse>
Here is my transformation
as xs:string
{
for $Details in $ValidateResponse /*:Details
let $OutcomeCode := data($Details/*:OutcomeCode)
return
if (($OutcomeCode ='OC') or ($OutcomeCode='PC'))
then
contact('CID is-',data($Details/*:Detail/*:CId))
else
fn:data('Technical_Check')
};
I am unable to get the concatenated values.
Expected result should be like: CID is- 002,003
as these 2 meet the OC and PC condition.
You could simplify this for expression and combine the criteria into a single XPath that selects the CId from the Details that have an OutcomeCode of "OC" or "PC".
Then use string-join() to produce a comma-separated value, and concat() to produce a string with the prefix and the CSV value:
concat('CID is- ',
string-join(
$ValidateResponse/*:Details[*:OutcomeCode =('OC','PC')]/*:Detail/*:CId,
",")
)
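Put together as a self-contained sketch (the sample document is bound to $ValidateResponse here, and string() makes the atomization of the CId elements explicit), this returns the expected string:
let $ValidateResponse :=
  <v4:ValidateResponse xmlns:v4="http://service.com/v4">
    <v4:Details>
      <v4:Detail><v4:CId>002</v4:CId></v4:Detail>
      <v4:OutcomeCode>PC</v4:OutcomeCode>
    </v4:Details>
    <v4:Details>
      <v4:Detail><v4:CId>003</v4:CId></v4:Detail>
      <v4:OutcomeCode>OC</v4:OutcomeCode>
    </v4:Details>
  </v4:ValidateResponse>
return
  concat('CID is- ',
    string-join(
      $ValidateResponse/*:Details[*:OutcomeCode = ('OC','PC')]/*:Detail/*:CId/string(),
      ','))
(: result: "CID is- 002,003" :)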
