Trying to add an array element to a possibly existing config file - jq

I'm testing the following:
#!/usr/bin/env bash
element="$(jo -p \
  locations=["$(jo \
    name="$name" \
    address="$address" \
    latitude=$lat \
    longitude=$long \
    gridId=$grid_id \
    gridX=$grid_X \
    gridY=$grid_Y \
    countyUGC="$countyUGC" \
    zoneUGC="$zoneUGC" \
    stationId="$stationId")"])"
if [[ -e weather.locations ]]; then
  jq --argjson element "$element" '. += $element' \
    weather.locations >weather.locations.temp && mv weather.locations.temp weather.locations
else
  echo "$element" >weather.locations
fi
The first time I run this, the file does not exist (hence my [[ ]] test). On subsequent runs, though, it always replaces the initial element instead of appending. My goal: append a new element to the locations array. (I don't see any need for updating existing entries, but "name" would be the field to key off of.)
Output of the jo command (with all the fields filled in, of course):
{
"locations": [
{
"name": "home",
"address": "123 Main St, MyTown MyState",
"latitude": null,
"longitude": null,
"gridId": null,
"gridX": null,
"gridY": null,
"countyUGC": null,
"zoneUGC": null,
"stationId": null
}
]
}
Thanks for any pointers

Your jo command produces an object with a "locations" field, whose value then replaces that of the existing object: in your filter, . refers to the enclosing top-level object, not to its "locations" array.
Instead you'll want the following:
jq --argjson element "$element" '.locations += $element.locations'
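Putting the fix together, the update branch might look like this (a sketch; the inline $element is a stand-in for the jo output shown above):

```shell
element='{"locations":[{"name":"work","address":"456 Oak Ave"}]}'  # stand-in for the jo output

if [[ -e weather.locations ]]; then
  # Append the new entry to the existing array instead of replacing it.
  jq --argjson element "$element" '.locations += $element.locations' \
    weather.locations >weather.locations.temp &&
    mv weather.locations.temp weather.locations
else
  printf '%s\n' "$element" >weather.locations
fi
```

Running the snippet twice (with a different $element each time) leaves a locations array with two entries.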

Related

How to put specific filenames into a specific JSON format using bash or Perl?

Assume I'm in a folder like this:
➜ tmp.lDrLPUOF ls
1.txt 2.txt 3.txt 1.zip 2.rb
I want to put all the filenames of text files into a specific JSON format like this:
{
"": [
{
"title": "",
"file": "1"
},
{
"title": "",
"file": "2"
},
{
"title": "",
"file": "3"
}
]
}
Now I just know how to list all the filenames:
➜ tmp.lDrLPUOF ls *'.txt'
1.txt 2.txt 3.txt
Can I use bash or Perl to achieve this purpose? Thank you very much!
Edit
Thanks to @Charles Duffy and @Shawn for the great answers. But it was my fault to leave out another important piece of information: time. I want the filenames in the resulting JSON ordered by their creation time.
The creation times are as below:
➜ tmp.lDrLPUOF ls -lTr
total 0
-rw-r--r-- 1 administrator staff 0 Oct 12 09:35:05 2022 3.txt
-rw-r--r-- 1 administrator staff 0 Oct 12 09:35:08 2022 2.txt
-rw-r--r-- 1 administrator staff 0 Oct 12 09:35:12 2022 1.txt
So the resulting JSON I wanted should be like this:
{
"": [
{
"title": "",
"file": "3"
},
{
"title": "",
"file": "2"
},
{
"title": "",
"file": "1"
}
]
}
{ shopt -s nullglob; set -- *.txt; printf '%s\0' "$@"; } | jq -Rn '
{"": [ input
| split("\u0000")[]
| select(. != "")
| {"title": "",
"file": . | rtrimstr(".txt")
}
]
}
'
Let's break this down into pieces.
On the bash side:
shopt -s nullglob tells the shell that if *.txt matches nothing, it should expand to nothing at all, instead of yielding the literal string *.txt as a result.
set -- overwrites the argument list in the current context (because this block is on the left-hand side of a pipeline, that context is transient, so "$@" is unchanged in code outside the pipe).
printf '%s\0' "$@" prints our arguments, with a NUL character after each one; if there are no arguments at all, it prints only a NUL.
On the jq side:
-R specifies that the input is raw data, not json.
-n specifies that we don't automatically consume any inputs, but will instead use input or inputs to specify where input should be read.
split("\u0000") splits the input on NULs. (This is important because the NUL is the only character that can't exist in a filename, which is why we used printf '%s\0' on the shell end; that way we work correctly with filenames with newlines, literal quotes, whitespace, and all the other weirdness that's able to exist).
select(. != "") ignores empty strings.
rtrimstr(".txt") removes .txt from the name.
Addendum: Sorting by mtime
The jq parts don't need to be modified here: to sort by mtime you can adjust only the shell. On a system with GNU find, sort and sed, this might look like:
find . -maxdepth 1 -type f -name '*.txt' -printf '%T@ %P\0' |
sort -zn |
sed -z -re 's/^[[:digit:].]+ //g' |
jq -Rn '
...followed by the same jq given above.
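Assembled, with the jq program from above inlined, the whole pipeline might look like this (a sketch assuming GNU find, sort, and sed):

```shell
# List *.txt files with their mtimes, sort by mtime (ascending),
# strip the timestamp prefix, and build the JSON object.
find . -maxdepth 1 -type f -name '*.txt' -printf '%T@ %P\0' |
  sort -zn |
  sed -z -re 's/^[[:digit:].]+ //g' |
  jq -Rn '
    {"": [ input
           | split("\u0000")[]
           | select(. != "")
           | {"title": "", "file": (. | rtrimstr(".txt"))}
         ]
    }
  '
```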
If installed, tree can be a good alternative for listing the contents of directories, as it can encode its output as well-defined JSON, which comes in handy when dealing with strange file names (and especially when your desired output is JSON anyway).
tree -JtL 1 -P '*.txt'
[
{"type":"directory","name":".","contents":[
{"type":"file","name":"3.txt"},
{"type":"file","name":"2.txt"},
{"type":"file","name":"1.txt"}
]}
,
{"type":"report","directories":0,"files":3}
]
tree -J outputs JSON
tree -t sorts by last modification time
tree -L 1 recurses only 1 level deep
tree -P '*.txt' restricts the list to files matching the pattern *.txt
Of course, you can also add more details, if needed, such as
tree -p includes file permissions
tree -u and tree -g include user and group names
tree -s includes the file size in bytes
tree -D --timefmt '%F %T' includes the last modification time
tree -JtL 1 -P '*.txt' -pusD --timefmt='%F %T'
[
{"type":"directory","name":".","mode":"0755","prot":"drwxr-xr-x","user":"hustnzj","size":4096,"time":"2022-10-12 09:35:00","contents":[
{"type":"file","name":"3.txt","mode":"0644","prot":"-rw-r--r--","user":"hustnzj","size":123,"time":"2022-10-12 09:35:05"},
{"type":"file","name":"2.txt","mode":"0644","prot":"-rw-r--r--","user":"hustnzj","size":456,"time":"2022-10-12 09:35:08"},
{"type":"file","name":"1.txt","mode":"0644","prot":"-rw-r--r--","user":"hustnzj","size":789,"time":"2022-10-12 09:35:12"}
]}
,
{"type":"report","directories":0,"files":3}
]
A note regarding tree -t (sort by last modification time): there is also an option tree -c to sort by (and, combined with tree -D, to display) the last status change time instead, but there is no dedicated option (that I know of) to sort by creation/birth time (even where the file system supports it).
Then, using that JSON output as input, you can employ jq for further filtering and formatting:
tree … | jq --arg ext '.txt' '
{"": (first.contents | map(
select(.type == "file") | {title: "", file: .name | rtrimstr($ext)}
))}
'
{
"": [
{
"title": "",
"file": "3"
},
{
"title": "",
"file": "2"
},
{
"title": "",
"file": "1"
}
]
}
Note: This includes the filter select(.type == "file") as tree would also include the names of subdirectories. Drop it if you want them included.
Using just jq, any shell:
$ jq -n --args '{"": [ $ARGS.positional[] | rtrimstr(".txt") | { title: "", file: . } ] }' *.txt
{
"": [
{
"title": "",
"file": "1"
},
{
"title": "",
"file": "2"
},
{
"title": "",
"file": "3"
}
]
}
The filenames passed on the command line (the expansion of *.txt) are in the jq variable $ARGS.positional. For each one, remove the .txt extension and use the rest in an object of the desired structure.
Or with a perl one-liner:
$ perl -MJSON::PP -E 'say encode_json({"" => [ map { { title => "", file => s/\.txt$//r } } @ARGV ] })' *.txt
{"":[{"file":"1","title":""},{"title":"","file":"2"},{"file":"3","title":""}]}
My take (note that the final jq -s only works here because the bare names parse as JSON numbers, hence the tostring):
stat -c '%Y:%n' *.txt \
| sort -t: -n \
| cut -d: -f2- \
| xargs basename -s .txt \
| jq -s 'map({title: "", file: tostring}) | {"": .}'

DynamoDB: How to append values into the List (Array) of an existing Item

(I'm using the AWS PHP SDK.)
Let's say I have a table:
Table name: article
Primary partition key: article_id (Number)
With a sample manually created Item:
{
"article_id": 10010,
"updated": "2018-02-22T20:15:19.28800Z",
"comments": [ "Nice article!", "Thank you!" ]
}
Adding new comment:
I know how to entirely update (overwrite) this existing Item, in this way:
$key = $marshaler->marshalJson('
{
"article_id": 10010
}
');
$eav = $marshaler->marshalJson('
{
":u": "2018-02-22T20:15:19.28800Z",
":c": [ "Nice article!", "Thank you!", "This is the new one!" ]
}
');
$params = [
'TableName' => 'article',
'Key' => $key,
'ExpressionAttributeValues'=> $eav,
'UpdateExpression' => 'set updated=:u, comments=:c',
'ReturnValues' => 'UPDATED_NEW'
];
I can sort of APPEND new values (i.e. add a new comment) this way. But this still recreates the entire Item, which is not what I want.
How do I simply APPEND new values to a List/Array inside an existing Item?
Add elements to the list
When you use SET to update a list element, the contents of that element are replaced with the new data that you specify. If the element does not already exist, SET will append the new element at the end of the list.
Create Table
aws dynamodb create-table \
--table-name article \
--attribute-definitions AttributeName=article_id,AttributeType=N \
--key-schema AttributeName=article_id,KeyType=HASH \
--provisioned-throughput ReadCapacityUnits=1,WriteCapacityUnits=1
Add item
aws dynamodb put-item \
--table-name article \
--item '{
"article_id": {"N": "123"},
"updated": {"S": "00:00:00"},
"comments": {
"L": [
{ "S": "Nice article!" },
{ "S": "Thank you!" }
]}
}'
Update Item
aws dynamodb update-item \
--table-name article \
--key '{"article_id":{"N":"123"}}' \
--update-expression "SET comments[50] = :c, updated=:u" \
--expression-attribute-values '{
":u": {"S": "01:01:01"},
":c": {"S": "This is the new one!"}
}' \
--return-values ALL_NEW
List comments[] contains two elements (indexes 0 and 1) before the update. As comments[50] doesn't exist, the SET operation appends the new element to the end of the list (as comments[2]).
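Alternatively, DynamoDB's list_append function expresses the append without a hard-coded index. A sketch of the same update; note that :c must then be wrapped in a list (L) rather than passed as a bare string:

```shell
# Append to the comments list via list_append instead of an out-of-range index.
aws dynamodb update-item \
  --table-name article \
  --key '{"article_id":{"N":"123"}}' \
  --update-expression "SET comments = list_append(comments, :c), updated = :u" \
  --expression-attribute-values '{
    ":u": {"S": "01:01:01"},
    ":c": {"L": [{"S": "This is the new one!"}]}
  }' \
  --return-values ALL_NEW
```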

How to parse a JSON array dynamically in a shell script using jq

Suppose I have the following json in a file json.txt
{
"first_name": "John",
"last_name": "Smith",
"things_carried": [
"apples",
"hat",
"harmonica"
],
"children": [
{
"first_name": "Bobby Sue",
"last_name": "Smith"
},
{
"first_name": "John Jr",
"last_name": "Smith"
}
]
}
In my shell script I wrote the following logic to find the size of the children array using the jq tool:
size=$(cat json.txt | jq '.children | length')
i=0
while [ $i -le $size ]
do
array[$i]=$(cat json.txt | jq '.children[$i]')
i=`expr $i + 1`
done
On running this it gives the following error:
.children[$i] 1 compile error
It seems that jq is not able to substitute the variable i inside the .children[] expression, because if we give the expression
array[$i]=$(cat json.txt | jq '.children[0]')
it runs fine.
Can someone help me?
You're using single quotes around the jq program. Shells do not interpolate variables inside single quotes; this is intentional, and the jq manual recommends using single quotes around programs for this reason.
jq provides an argument syntax for exactly this purpose: it lets you set jq variables to the values of shell variables. You could replace your current jq invocation with this:
array[$i]=$(cat json.txt | jq --arg i "$i" '.children[$i | tonumber]')
It looks like you're just trying to load the children into a bash array variable.
You don't need to loop; just set the array directly.
$ IFS=$'\n'; array=($(jq -c '.children[]' json.txt))
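If your bash is 4.0 or newer, mapfile (a.k.a. readarray) avoids the IFS juggling. A sketch; it creates a sample json.txt so it is self-contained:

```shell
# Sample input matching the question's structure.
cat > json.txt <<'EOF'
{"children":[{"first_name":"Bobby Sue","last_name":"Smith"},{"first_name":"John Jr","last_name":"Smith"}]}
EOF

# jq -c emits one compact JSON object per line; mapfile puts one line per array slot.
mapfile -t array < <(jq -c '.children[]' json.txt)

printf '%s\n' "${array[0]}"   # the first child object
```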
You should use the following syntax:
array[$i]=$(cat json.txt | jq '.children['${i}']')

Unix: run a script with different parameters depending on the file's line count

I want to run the script with different parameters depending on whether the wc of the text file is greater than zero or not!
My Script:
#!/bin/sh
x= echo `wc -l "/scc/ftp/mrdr_rpt/yet_to_load.txt"`
if [ $x -gt 0 ]
then
sh /scc/ftp/mrdr_rpt/eam.ksh /scc/ftp/mrdr_rpt/vinu_mrdr_rpt.txt /scc/ftp/mrdr_rpt/yet_to_load.txt from@from.com to.name@to.com
elif
sh /scc/ftp/mrdr_rpt/eam.ksh /scc/ftp/mrdr_rpt/vinu_mrdr_rpt.txt /scc/ftp/mrdr_rpt/yet_to_load.txt from@from.com to.name@to.com, hi.name@hi.com
fi
You need to capture the output of wc accurately, and you need to avoid getting a file name in its output. You have:
x= echo `wc -l "/scc/ftp/mrdr_rpt/yet_to_load.txt"`
if [ $x -gt 0 ]
The space after the = is wrong. The echo is not wanted. You should use input redirection with wc. (wc is a little peculiar. If you give it a file name to process, it includes the file name in the output; if you have it process standard input, it doesn't include a file name in the output.) You should use $(…) in preference to back-quotes.
x=$(wc -l < "/scc/ftp/mrdr_rpt/yet_to_load.txt")
if [ $x -gt 0 ]
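To see the difference the redirection makes, try it on a throwaway file (the path here is just for illustration):

```shell
printf 'line one\nline two\n' > /tmp/yet_to_load.txt

wc -l /tmp/yet_to_load.txt     # file name passed as argument: name appears in the output
wc -l < /tmp/yet_to_load.txt   # standard input: the count alone (possibly space-padded)
```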
If you want to check if the file is not empty (rather than being a file with data but no newlines), then you can use a more direct test:
if [ -s "/scc/ftp/mrdr_rpt/yet_to_load.txt" ]
You should probably be using a name such as
DIR="/scc/ftp/mrdr_rpt"
and then referencing it to reduce the ugly repetitions in your code:
if [ $x -gt 0 ]
then
sh "$DIR/eam.ksh" "$DIR/vinu_mrdr_rpt.txt" "$DIR/yet_to_load.txt" \
from@from.com to.name@to.com
else
sh "$DIR/eam.ksh" "$DIR/vinu_mrdr_rpt.txt" "$DIR/yet_to_load.txt" \
from@from.com to.name@to.com, hi.name@hi.com
fi
However, I think the comma in the second line is probably not needed, and it might be better to use:
who="from@from.com to.name@to.com"
if [ -s "$DIR/yet_to_load.txt" ]
then who="$who hi.name@hi.com"
fi
sh "$DIR/eam.ksh" "$DIR/vinu_mrdr_rpt.txt" "$DIR/yet_to_load.txt" $who
Then you've only one line with all the names in it. And you might do even better with an array instead of string:
who=("from#from.com" "to.name#to.com")
if [ -s "$DIR/yet_to_load.txt" ]
then who+=("$who hi.name#hi.com" "Firstname Lastname <someone#example.com>")
fi
sh "$DIR/eam.ksh" "$DIR/vinu_mrdr_rpt.txt" "$DIR/yet_to_load.txt" "${who[#]}"
Using arrays means you can handle blanks in the names correctly where a simple string doesn't.
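A quick illustration of why the quoted array expansion matters (hypothetical recipients):

```shell
who=("from@from.com" "Firstname Lastname <someone@example.com>")

# "${who[@]}" expands to exactly one argument per element,
# even though the second element contains spaces.
printf 'arg: %s\n' "${who[@]}"
```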

Dynamodb. How to put-item conditional on two unique attributes where one is hash key?

I am really not understanding at all how dynamodb's condition expression is supposed to work and I can't find any relief reading the documentation or searching for examples.
In the runnable below I am trying to only permit a put-item into the table when inserting the item would retain the uniqueness of the hash key and one other table attribute.
It seems simple enough to define the condition expression to be as shown below, but it doesn't work.
My question is how can I make put-item conditional on two attributes being separately unique in the table?
#!/usr/bin/env bash
TABLE_NAME="Test"
read -r -d '' ATTRIBUTE_DEFINITIONS << EOF
[
{
"AttributeName": "hashKey",
"AttributeType": "S"
}
]
EOF
read -r -d '' KEY_SCHEMA << EOF
{
"AttributeName": "hashKey",
"KeyType": "HASH"
}
EOF
read -r -d '' THROUGHPUT << EOF
{
"ReadCapacityUnits": 1,
"WriteCapacityUnits": 1
}
EOF
read -r -d '' ITEM1 << EOF
{
"hashKey": { "S": "one" },
"blah": { "S": "foo" }
}
EOF
read -r -d '' ITEM2 << EOF
{
"hashKey": { "S": "one" },
"blah": { "S": "baz" }
}
EOF
read -r -d '' ITEM3 << EOF
{
"hashKey": { "S": "two" },
"blah": { "S": "baz" }
}
EOF
CONDEXP="hashKey<>:hk AND blah<>:bh"
read -r -d '' EXPVALUES2 << EOF
{
":hk": { "S": "two" },
":bh": { "S": "baz" }
}
EOF
read -r -d '' EXPVALUES3 << EOF
{
":hk": { "S": "two" },
":bh": { "S": "baz" }
}
EOF
aws dynamodb create-table \
--table-name "$TABLE_NAME" \
--attribute-definitions "$ATTRIBUTE_DEFINITIONS" \
--key-schema "$KEY_SCHEMA" \
--provisioned-throughput "$THROUGHPUT"
aws dynamodb put-item \
--table-name "$TABLE_NAME" \
--item "$ITEM1"
# BUG: I want this to fail because the hashKey in
# ITEM2 is already in the table. It doesn't fail
aws dynamodb put-item \
--table-name "$TABLE_NAME" \
--item "$ITEM2" \
--condition-expression "$CONDEXP" \
--expression-attribute-values "$EXPVALUES2"
# BUG: I want this to fail because the blah in
# ITEM3 is already in the table
aws dynamodb put-item \
--table-name "$TABLE_NAME" \
--item "$ITEM3" \
--condition-expression "$CONDEXP" \
--expression-attribute-values "$EXPVALUES3"
Condition expressions only work within the context of a single item, whereas uniqueness of an attribute is a table-global property. You can condition a write (PutItem/UpdateItem/DeleteItem) on the non-existence of the key itself using attribute_not_exists(hashKey), but the table will not enforce uniqueness of any other attribute by itself.
One workaround: enable a Stream on the table and attach a Lambda function to it that populates a materialized view keyed on your second attribute (bh). You can then check the materialized view for bh uniqueness before doing a conditional write on the base table. NB: there is a race condition between the materialized-view check and the conditional write in this approach, which you may have to address depending on your application.
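For the hash-key half specifically, the conditional put needs no expression attribute values at all. A sketch reusing the question's $TABLE_NAME and $ITEM2; the call is expected to fail with a ConditionalCheckFailedException once an item with that hashKey exists:

```shell
# Reject the put if an item with this hash key already exists.
aws dynamodb put-item \
  --table-name "$TABLE_NAME" \
  --item "$ITEM2" \
  --condition-expression "attribute_not_exists(hashKey)"
```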
