I have a map like this:
firstMap = [ name1:[ value1:10, value2:'name1', value3:150, value4:20 ],
name2:[ value1:10, value2:'name2', value3:150, value4:20 ] ]
I have a list where the values are name1, name2, etc.
I want to pull the entry for a given name, e.g. name1, as:
[ name1:[ value1:10, value2:'name1', value3:150, value4:20 ] ]
firstMap.subMap(["name1"]) did work for me, but I have a list, and I need to pull the values by looping over that list:
namesList.each { record ->
newMap = firstMap.subMap(record)
}
I have tried subMap([offer]), subMap(["offer"]), subMap(["offer?.stringValue()"]), subMap(['offer']), etc., but none of them work for me.
You don't need subMap at all; it's only really useful when you want to grab a few keys at once, or when you need the original key in the result.
Try:
firstMap = [ name1:[ value1:10, value2:'name1', value3:150, value4:20 ],
name2:[ value1:10, value2:'name2', value3:150, value4:20 ] ]
def namesList = [ 'name1', 'name2' ]
namesList.each { name ->
println firstMap[ name ]
}
Or if you need a Map result with the original query key:
namesList.each { name ->
println firstMap.subMap( [ name ] )
}
Or indeed:
namesList.each { name ->
println( [ (name):firstMap[ name ] ] )
}
That would give you the same result (i.e. it creates a new map with the key name and the value from my first example).
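If you want to collect all of the entries for the names in your list into one new map (rather than just printing them), a sketch along the same lines should work:
def newMap = namesList.collectEntries { name ->
    [ (name): firstMap[ name ] ]
}
// newMap now contains the name1 and name2 entries from firstMap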
I have a cell in a table column whose type is dynamic. It was ingested from .NET as a Dictionary, but in Kusto it looks like an array of objects, each with a key and a value property:
[
{"key":"ProjectId","value":"1234"},
{"key":"ProjectName","value":"Albatros"},
{"key":"User","value":"Bond"}
]
I want to convert the contents of the cell in my Kusto query to the following dynamic:
{
"ProjectId": "1234",
"ProjectName": "Albatros",
"User": "Bond"
}
I can't figure out how to write the expression that converts it from the array into the new dynamic format.
Can anyone point me in the right direction?
you can use a combination of mv-apply and make_bag():
print d = dynamic([
{"key": "value"},
{"ProjectId": "1234"},
{"ProjectName": "Albatros"},
{"User": "Bond"}
])
| mv-apply d on (
summarize result = make_bag(d)
)
result
{ "key": "value", "ProjectId": "1234", "ProjectName": "Albatros", "User": "Bond"}
UPDATE based on your change to the original question:
print d = dynamic([
{"key":"ProjectId","value":"1234"},
{"key":"ProjectName","value":"Albatros"},
{"key":"User","value":"Bond"}
])
| mv-apply d on (
summarize result = make_bag(pack(tostring(d.key), d.value))
)
result
{ "ProjectId": "1234", "ProjectName": "Albatros", "User": "Bond"}
I am trying to do a simple if/then/else using JMESPath.
For example: 'if the input is a string, return the string, else return the "value" property of the input'. An input of "abc" would return "abc"; an input of {"value":"def"} would return "def".
With jq this is easy: if .|type == "string" then . else .value end
With JMESPath, I can get the type
type(@)
or the input:
@
or the value property:
value
but I have not found a way to combine them into an if-then-else. Is there any way to do this?
It is possible, but not cleanly. The general form is to:
Make the value you are testing an array (wrap it in square brackets)
Apply the map function to map the filtered array to the value you want if the condition is true
At this point you have an array that is populated with one (true) item if the array filter passed, otherwise it is empty
Concatenate one item to that array (the false value)
Finally, take item at index 0 in this array - which will be the result of the condition
This should allow you to also derive possible transformations for both the false and true conditions
For example, if the test data is as so:
{
"test": 11
}
Depending on the value, you get one of the following results (using test data of 11 and 2 as examples):
"Yes, the value is 11 which is greater than 10"
OR
"No the value is 2 which is less than or equal to 10"
Like so:
[
map(
&join(' ', ['Yes, the value is', to_string(#), 'which is greater than 10']),
[test][? # > `10`]
),
join(' ', ['No the value is', to_string(test), 'which is less than or equal to 10'])
][] | @[0]
So, as an abstract template:
[
map(
&<True Expression Here>,
[<Expression you are testing>][? @ <Test Expression>]
),
<False Expression Here>
][] | @[0]
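Applied to the original string-vs-value question, the template would come out roughly like this (a sketch; not tested against every JMESPath implementation):
[
map(
&@,
[@][?type(@) == 'string']
),
value
][] | @[0]
For "abc" the filter keeps the string, so @[0] returns "abc"; for {"value":"def"} the filter result is empty, so the only element left after flattening is the false branch, value, i.e. "def".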
people[?general.id != `100`] || people
{
"people": [
{
"general": {
"id": 100,
"age": 20,
"other": "foo",
"name": "Bob"
},
"history": {
"first_login": "2014-01-01",
"last_login": "2014-01-02"
}
},
{
"general": {
"id": 101,
"age": 30,
"other": "bar",
"name": "Bill"
},
"history": {
"first_login": "2014-05-01",
"last_login": "2014-05-02"
}
}
]
}
The if/else behaviour works here via ||: if the filter on the left returns an empty result, the expression on the right (the whole people list) is used instead.
CustomerSearch.Customers.Select ("ARUNDEL, CLAUDE")
How do I get a different one from the list every time?
You could use the following:
[ ] INTEGER i
[-] for(i = 1; i < 4; ++i)
[ ] TestApplication.ListBoxDialog.TheListBox.Select(i)
[ ] String sList = TestApplication.ListBoxDialog.TheListBox.GetItemText(i)
[ ] Print("List Box Selection = " + sList)
John
I have an existing map in Groovy.
I want to create a new map that has the same keys but different values in it.
E.g.:
def scores = ["vanilla":10, "chocolate":9, "papaya": 0]
//transformed into
def preference = ["vanilla":"love", "chocolate":"love", "papaya": "hate"]
Is there any way of doing it through some sort of closure, like:
def preference = scores.collect {//something}
You can use collectEntries
scores.collectEntries { k, v ->
[ k, 'new value' ]
}
An alternative to using a map for the ranges would be to use a switch
def grade = { score ->
switch( score ) {
case 10..9: return 'love'
case 8..6: return 'like'
case 5..2: return 'meh'
case 1..0: return 'hate'
default : return 'ERR'
}
}
scores.collectEntries { k, v -> [ k, grade( v ) ] }
A nice, functional-style solution (including your ranges, and easy to modify):
def scores = [vanilla:10, chocolate:9, papaya: 0]
// Store somewhere
def map = [(10..9):"love", (8..6):"like", (5..2):"meh", (1..0):"hate"]
def preference = scores.collectEntries { key, score -> [key, map.find { score in it.key }.value] }
// Output: [vanilla:love, chocolate:love, papaya:hate]
def scores = ["vanilla":10, "chocolate":9, "papaya": 0]
def preference = scores.collectEntries {key, value -> ["$key":(value > 5 ? "like" : "hate")]}
Then the result would be
[vanilla:like, chocolate:like, papaya:hate]
EDIT: If you want a map, then you should use collectEntries like tim_yates said.
I want to import data from a text file which contains hundreds of thousands of records.
I am using bulk insert to do it, like this:
BULK
INSERT vw_bulk_insert_test
FROM '\\server\c$\csvtext.txt'--\\server\SQLEXPRESS\csvtest.txt'
WITH
(FIRSTROW=2,
CHECK_CONSTRAINTS,
FIELDTERMINATOR = '~',
ROWTERMINATOR = '\n'
)
GO
But before the insert I want to validate the values of each column, without using a cursor. For example, if the second row has values for all fields except the unit_number column, then it should create an error log entry specifying that the unit_number value is missing.
Personally, I would bulk-insert into a temp table, and then do the validations/conversions from the temp table into the table where things will ultimately reside, using either plain T-SQL or stored procedures created for this purpose.
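For example, a minimal sketch of that approach could look like the following (the staging columns, the dbo.error_log table, and the column names on vw_bulk_insert_test are assumptions about your schema):
CREATE TABLE #staging
(
    unit_number VARCHAR(50) NULL,
    other_col   VARCHAR(50) NULL
);

BULK INSERT #staging
FROM '\\server\c$\csvtext.txt'
WITH (FIRSTROW = 2, FIELDTERMINATOR = '~', ROWTERMINATOR = '\n');

-- Log rows that fail validation, e.g. a missing unit_number
INSERT INTO dbo.error_log (error_message)
SELECT 'unit_number value is missing'
FROM #staging
WHERE unit_number IS NULL OR unit_number = '';

-- Load only the rows that pass validation into the real table
INSERT INTO vw_bulk_insert_test (unit_number, other_col)
SELECT unit_number, other_col
FROM #staging
WHERE unit_number IS NOT NULL AND unit_number <> '';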
You have this syntax
BULK INSERT
[ database_name . [ schema_name ] . | schema_name . ] [ table_name | view_name ]
FROM 'data_file'
[ WITH
(
[ [ , ] BATCHSIZE = batch_size ]
[ [ , ] CHECK_CONSTRAINTS ]
[ [ , ] CODEPAGE = { 'ACP' | 'OEM' | 'RAW' | 'code_page' } ]
[ [ , ] DATAFILETYPE =
{ 'char' | 'native'| 'widechar' | 'widenative' } ]
[ [ , ] FIELDTERMINATOR = 'field_terminator' ]
[ [ , ] FIRSTROW = first_row ]
[ [ , ] FIRE_TRIGGERS ]
[ [ , ] FORMATFILE = 'format_file_path' ]
[ [ , ] KEEPIDENTITY ]
[ [ , ] KEEPNULLS ]
[ [ , ] KILOBYTES_PER_BATCH = kilobytes_per_batch ]
[ [ , ] LASTROW = last_row ]
[ [ , ] MAXERRORS = max_errors ]
[ [ , ] ORDER ( { column [ ASC | DESC ] } [ ,...n ] ) ]
[ [ , ] ROWS_PER_BATCH = rows_per_batch ]
[ [ , ] ROWTERMINATOR = 'row_terminator' ]
[ [ , ] TABLOCK ]
[ [ , ] ERRORFILE = 'file_name' ]
)]
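In particular, the MAXERRORS and ERRORFILE options above let the load itself capture rows that fail formatting or type checks, although they do not cover business rules such as a missing unit_number (the error-file path below is just an example):
BULK INSERT vw_bulk_insert_test
FROM '\\server\c$\csvtext.txt'
WITH
(   FIRSTROW = 2,
    FIELDTERMINATOR = '~',
    ROWTERMINATOR = '\n',
    MAXERRORS = 100,
    ERRORFILE = '\\server\c$\csvtext_errors.log'
)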