I'm trying to scan items where a field called reason is a String, with:
aws dynamodb scan --table-name my_table --condition-expression "attribute_type(reason, :v_sub)" --expression-attribute-values file://expression-attribute-values.json
expression-attribute-values.json is:
{
":v_sub":{"S":"SS"}
}
I took the example from: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Expressions.ConditionExpressions.html but I'm getting this error:
any hint?
This is because the scan operation does not accept condition-expression as a parameter; the available options are:
--table-name <value>
[--index-name <value>]
[--attributes-to-get <value>]
[--select <value>]
[--scan-filter <value>]
[--conditional-operator <value>]
[--return-consumed-capacity <value>]
[--total-segments <value>]
[--segment <value>]
[--projection-expression <value>]
[--filter-expression <value>]
[--expression-attribute-names <value>]
[--expression-attribute-values <value>]
[--consistent-read | --no-consistent-read]
[--cli-input-json | --cli-input-yaml]
[--starting-token <value>]
[--page-size <value>]
[--max-items <value>]
[--generate-cli-skeleton <value>]
[--cli-auto-prompt <value>]
However, the put-item operation does accept --condition-expression and --expression-attribute-values parameters; see the complete list below. Additionally, notice that a condition expression is "A condition that must be satisfied in order for a conditional PutItem operation to succeed".
put-item
--table-name <value>
--item <value>
[--expected <value>]
[--return-values <value>]
[--return-consumed-capacity <value>]
[--return-item-collection-metrics <value>]
[--conditional-operator <value>]
[--condition-expression <value>]
[--expression-attribute-names <value>]
[--expression-attribute-values <value>]
[--cli-input-json | --cli-input-yaml]
[--generate-cli-skeleton <value>]
[--cli-auto-prompt <value>]
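For instance, a conditional put using one of those expression functions might look like this (a hedged sketch with a made-up key schema and item, not taken from the original question):

```shell
# Hypothetical item; only insert it if no item with this id exists yet.
aws dynamodb put-item \
    --table-name my_table \
    --item '{"id": {"S": "123"}, "reason": {"S": "example"}}' \
    --condition-expression "attribute_not_exists(id)"
```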
Checking the docs, only condition expressions accept functions like attribute_type. See Syntax for Condition Expressions and Condition Expression Examples.
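Since a scan's filter expression uses the same syntax as a condition expression (per the DynamoDB docs), the original command can most likely be fixed by switching to --filter-expression (untested sketch, reusing the question's table and values file):

```shell
aws dynamodb scan \
    --table-name my_table \
    --filter-expression "attribute_type(reason, :v_sub)" \
    --expression-attribute-values file://expression-attribute-values.json
```

Keep in mind that a filter is applied after the items are read, so the scan still consumes read capacity for every item it examines.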
References
Scan Operation
Put-Item Operation
Related
I am looping over array of s3 buckets in a json file.
[
  [
    {
      "response": {
        "Buckets": [
          {"Name": "foo"},
          {"Name": "bar"}
        ]
      }
    }
  ]
]
I want to loop over each bucket, call the AWS S3 API to get the region for each bucket, append {"region": "region_name"} to each object inside the Buckets array, and persist the changes to the file.
I am struggling to write the modified data back to the file without losing the rest of the data.
The script below writes to a temp.json file, but it keeps overwriting the data on each run; in the end I only see the last element of the Buckets array written to temp.json.
I want only the region key added to each element, keeping all other contents of the file the same.
jq -r '.[][0].response.Buckets[].Name' $REPORT/s3.json |
while IFS= read -r bucket
do
region=$(aws s3api get-bucket-location --bucket $bucket | jq -r '.LocationConstraint')
jq -r --arg bucket "$bucket" --arg region "$region" '.[][0].response.Buckets[] | select(.Name==$bucket) | .region=$region' $REPORT/s3.json | sponge $REPORT/temp.json
done
Keep the context by adding parentheses around the LHS path, as in (.[][0].response.Buckets[] | select(.Name==$bucket)).region = $region.
cp "$REPORT/s3.json" "$REPORT/temp.json"
jq -r '.[][0].response.Buckets[].Name' "$REPORT/s3.json" |
while IFS= read -r bucket
do
region=$(aws s3api get-bucket-location --bucket "$bucket" | jq -r '.LocationConstraint')
jq --arg bucket "$bucket" --arg region "$region" '(.[][0].response.Buckets[] | select(.Name==$bucket)).region=$region' "$REPORT/temp.json" | sponge "$REPORT/temp.json"
done
With a little more refactoring, you could also provide the bucket name directly to the first inner jq call, which can already build the final objects; those can then be fed as an array to the outer jq call using --slurpfile, for instance. This moves the second inner jq call out of the loop, roughly halving the total number of jq invocations.
jq --slurpfile buckets <(
  jq -r '.[][0].response.Buckets[].Name' "$REPORT/s3.json" |
  while IFS= read -r bucket; do
    aws s3api get-bucket-location --bucket "$bucket" | jq --arg bucket "$bucket" '{Name: $bucket, region: .LocationConstraint}'
  done
) '.[][0].response.Buckets = $buckets' "$REPORT/s3.json" | sponge "$REPORT/temp.json"
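One caveat worth noting (documented s3api behavior, not something the original post handles): get-bucket-location returns "LocationConstraint": null for buckets in us-east-1, so a fallback via jq's // operator may be needed:

```shell
# A null LocationConstraint means the bucket lives in us-east-1;
# the // operator substitutes the fallback when the field is null.
region=$(aws s3api get-bucket-location --bucket "$bucket" \
    | jq -r '.LocationConstraint // "us-east-1"')
```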
Given the following JSON:
{ "doc": "foobar",
  "pages": [
    { "data": { "keyA": { "foo": "foo", "bar": "bar" }, "keyB": { "A": "123", "B": "c" } } },
    ...
  ]
}
I am performing the following jq search:
#!/usr/bin/jq -f
.. | .keyA?, .keyB?
Which is great:
{"foo": "foo", "bar": "bar"} # pretend it is formatted
{"A": "123", "B": "c"}
But now I'd like to format the output like:
{"key": "keyA", "value": {...}}
{"key": "keyB", "value": {...}}
However, there doesn't seem to be a reasonable way to use the recursive operator .. and do this.
Instead, I'd have to run N independent searches for the N keys to get them all separated out. This is painful.
Is it possible to reform the data so that the output of the search function gets labeled with the key that yielded the object?
You can use object construction:
$ jq -c '.. | { keyA, keyB }?' sample.json
{"keyA":null,"keyB":null}
{"keyA":null,"keyB":null}
{"keyA":{"foo":"foo","bar":"bar"},"keyB":{"A":"123","B":"c"}}
{"keyA":null,"keyB":null}
{"keyA":null,"keyB":null}
Which you can then filter out:
$ jq -c '.. | { keyA, keyB }? | to_entries[] | select(.value)' sample.json
{"key":"keyA","value":{"foo":"foo","bar":"bar"}}
{"key":"keyB","value":{"A":"123","B":"c"}}
to_entries turns an object into an array of key-value pairs, and the -c flag makes the output "compact". Checking the key name explicitly, as in the following variant, also lets you find values that are null:
#!/usr/bin/env -S jq -c -f
.. | objects | to_entries[] | select(.key == ("keyA", "keyB"))
{"key":"keyA","value":{"foo":"foo","bar":"bar"}}
{"key":"keyB","value":{"A":"123","B":"c"}}
I ended up on a mission to document/solve zparseopts this afternoon. I think I found a bug with the -K option, or as I call it, -KillYourself.
I think -K is not handling a flag option correctly: the output skews previously existing key/value pairs one half-pair to the left, as shown below.
Can a Bash/Zsh guru make sure that I am doing this correctly?
Expected Behavior with -K and additional options and flags:
Dictionary:
--key -> greg
--flag ->
default -> Mr. Yutz Was Here
Expected Behavior with -K and no other options:
Dictionary:
default -> Mr. Yutz Was Here
Curious Possibly Broken Behavior:
Dictionary:
--key -> greg
--flag -> default
Mr. Yutz Was Here ->
It looks like the Associative Array gets "bumped" a half value to the left.
#!/usr/bin/env zsh
declare -a pargs #numeric index array
declare -A paargs #associative array - or dictionary
pargs=("Mr. Yutz Was Here")
paargs[default]="Mr. Yutz Was Here"
echo "\n=====\npargs currently:\n$pargs"
echo "\n=====\npaargs currently:\n$paargs"
# -K - KILL YOURSELF - and KEEP previous settings, like defaults you pumped into the array before processing, as long as they are not overwritten below (otherwise this is meaningless).
# Seriously....if you use this you will want to kill yourself. If you have a flag, the Associative array breaks. Ugh.
# to recreate bug, run without arguments. Then run again with --flag --key FooBar. Delete -K and do it again.
# The "Dictionary Output will be borked."
# -D - DELETE - pops each item from the input and processes the next element as "the first one" of $1 if you want to be all shelly about it
# -E - NO ERROR - don't stop on an error. keep going until done or -- is hit
# -a - ARRAY for the results indexed from 1
# -A - DICTIONARY for the results in key / value pairs. Flags don't have values. Just "presence"
zparseopts -D -E -a pargs -A paargs -flag -key:
printf "\n=====\nArray of Results:\n\n"
for ((i = 1; i <= $#pargs; i++))
do
echo "item: $i \t-> $pargs[$i]"
done
echo "\n====="
printf "Dictionary:\n\n"
for key value in ${(kv)paargs}
do
echo "$key \t-> $value"
done
echo "=====\n"
# Flag detection.
printf "flag=%s key=%s\n\n" ${pargs[(I)--flag]} ${paargs[--key]}
printf "%s\n\n" "$*"
This behaves as I would expect:
Output without -K:
greg@api:~/projects/test_swarm$ ./zsh_test.sh --flag --key greg
=====
pargs currently:
1: Mr. Yutz Was Here
=====
paargs currently:
default: Mr. Yutz Was Here
=====
Array of Results:
item: 1 -> --flag
item: 2 -> --key
item: 3 -> greg
=====
Dictionary:
--key -> greg
--flag ->
=====
flag=1 key=greg
Output with -K:
greg@api:~/projects/test_swarm$ ./zsh_test.sh --flag --key greg
=====
pargs currently:
1: Mr. Yutz Was Here
=====
paargs currently:
default: Mr. Yutz Was Here
=====
Array of Results:
item: 1 -> --flag
item: 2 -> --key
item: 3 -> greg
=====
Dictionary:
--key -> greg
--flag -> default
Mr. Yutz Was Here ->
=====
flag=1 key=greg
The issue is in the printout of the values in the associative array. Try replacing the paargs for loop with this:
printf 'paargs %s: %s\n' "${(kv@)paargs}"
The significant changes are the addition of double quotes and the @ parameter expansion flag, which together keep empty values as separate words instead of dropping them. zsh is usually very forgiving with quoting, but there are cases like this where it makes a difference.
It's often easier to display variable values with the typeset builtin:
typeset -p pargs paargs
(meta note: do we need a zparseopts tag?)
I wish to pass a file through to my instance, but there is no --file option, even though the documentation says it should be available?
stack#openstack:/tmp$ nova boot --config-drive true --flavor 8755fc00-4c24-418d-a505-592c809108d9 --image 2ca99129-4b09-4a2a-a43d-b22c76dc4efc --file license=FGVM080000109836.lic --user-data cloudinitfile test
usage: nova [--version] [--debug] [--os-cache] [--timings]
[--os-region-name <region-name>] [--service-type <service-type>]
[--service-name <service-name>]
[--os-endpoint-type <endpoint-type>]
[--os-compute-api-version <compute-api-ver>]
[--endpoint-override <bypass-url>] [--profile HMAC_KEY]
[--insecure] [--os-cacert <ca-certificate>]
[--os-cert <certificate>] [--os-key <key>] [--timeout <seconds>]
[--os-auth-type <name>] [--os-auth-url OS_AUTH_URL]
[--os-system-scope OS_SYSTEM_SCOPE] [--os-domain-id OS_DOMAIN_ID]
[--os-domain-name OS_DOMAIN_NAME] [--os-project-id OS_PROJECT_ID]
[--os-project-name OS_PROJECT_NAME]
[--os-project-domain-id OS_PROJECT_DOMAIN_ID]
[--os-project-domain-name OS_PROJECT_DOMAIN_NAME]
[--os-trust-id OS_TRUST_ID]
[--os-default-domain-id OS_DEFAULT_DOMAIN_ID]
[--os-default-domain-name OS_DEFAULT_DOMAIN_NAME]
[--os-user-id OS_USER_ID] [--os-username OS_USERNAME]
[--os-user-domain-id OS_USER_DOMAIN_ID]
[--os-user-domain-name OS_USER_DOMAIN_NAME]
[--os-password OS_PASSWORD]
<subcommand> ...
error: unrecognized arguments: --file test
Try 'nova help ' for more information.
Please try the alternative command using the OpenStack CLI:
openstack server create --image 2ca99129-4b09-4a2a-a43d-b22c76dc4efc --flavor 8755fc00-4c24-418d-a505-592c809108d9 --config-drive True --file license=FGVM080000109836.lic --user-data cloudinitfile test
Note: also check the injected-files quota.
Yes, the --file option is there, and with it you can pass a file to your instance. You can check the available optional parameters using "nova help <command>", e.g.:
nova help boot
usage: nova boot [--flavor <flavor>] [--image <image>]
[--image-with <key=value>] [--boot-volume <volume_id>]
[--snapshot <snapshot_id>] [--min-count <number>]
[--max-count <number>] [--meta <key=value>]
[--file <dst-path=src-path>] [--key-name <key-name>]
[--user-data <user-data>]
[--availability-zone <availability-zone>]
[--security-groups <security-groups>]
[--block-device-mapping <dev-name=mapping>]
[--block-device key1=value1[,key2=value2...]]
[--swap <swap_size>]
[--ephemeral size=<size>[,format=<format>]]
[--hint <key=value>]
[--nic <net-id=net-uuid,v4-fixed-ip=ip-addr,v6-fixed-ip=ip-addr,port-id=port-uuid>]
[--config-drive <value>] [--poll] [--admin-pass <value>]
[--access-ip-v4 <value>] [--access-ip-v6 <value>]
<name>
Boot a new server.
I'm trying to filter AWS ECR image list returned as JSON with jq and regular expressions.
The following command works as expected and returns a filtered list:
aws ecr list-images --registry-id 123456789012 --repository-name repo | jq '.imageIds | map(select(.imageTag)) | map(select(.imageTag | test("[a-z0-9]-[0-9]")))'
[
{
"imageTag": "bbe3d9-2",
"imageDigest": "sha256:4c0e92098010fd26e07962eb6e9c7d23315bd8f53885a0651d06c2e2e051317d"
},
{
"imageTag": "3c840a-1",
"imageDigest": "sha256:9d05e04ccd18f2795121118bf0407b0636b9567c022908550c54e3c77534f8c1"
},
{
"imageTag": "1c0d05-141",
"imageDigest": "sha256:a62faabb9199bfc449f0e0a6d3cdc9be57b688a0890f43684d6d89abcf909ada"
}
]
But when I try to pass the regular expression as an argument to jq, it returns an empty array.
aws ecr list-images --registry-id 123456789012 --repository-name repo | jq --arg reg_exp "[a-z0-9]-[0-9]" '.imageIds | map(select(.imageTag)) | map(select(.imageTag | test("$reg_exp")))'
[]
I have tried multiple ways to pass that variable, but just can't get it to work. Possibly relevant: I'm using zsh on a Mac, and my jq version is jq-1.5. Any help is appreciated.
$reg_exp is a variable referring to your regular expression; "$reg_exp" is just a literal string, so jq looks for the literal text $reg_exp. Remove the quotes. The two map/select steps can also be merged, though the select(.imageTag) guard is worth keeping, since test errors on entries without an imageTag:
jq --arg reg_exp "[a-z0-9]-[0-9]" '.imageIds | map(select(.imageTag) | select(.imageTag | test($reg_exp)))'
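To see the difference without calling AWS, here is a self-contained check with made-up tags; running the same filter with the quoted "$reg_exp" instead returns [] on this input:

```shell
# The unquoted $reg_exp is substituted by jq as a variable, so the
# regex actually matches the digit-dash-digit tag.
echo '[{"imageTag":"bbe3d9-2"},{"imageTag":"latest"}]' |
jq -c --arg reg_exp "[a-z0-9]-[0-9]" 'map(select(.imageTag | test($reg_exp)))'
# -> [{"imageTag":"bbe3d9-2"}]
```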