AWS SAM Template - Error: Failed to parse template: while parsing a block mapping - aws-serverless

I have this template:
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: >
  example
Parameters:
  EnvironmentPrefix:
    Type: String
    Default: "test-"
Resources:
  S3JsonLoggerFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: src/handlers/s3-json-logger.s3JsonLoggerHandler
      Runtime: nodejs18.x
      Architectures:
        - x86_64
      MemorySize: 128
      Timeout: 60
      Policies:
        - S3ReadPolicy:
            BucketName:
              !Fn::ImportValue:
                !Sub "${EnvironmentPrefix}example-bucket"
      Events:
        S3NewObjectEvent:
          Type: S3
          Properties:
            Bucket:
              !Fn::ImportValue:
                !Sub "${EnvironmentPrefix}example-bucket"
            Events: s3:ObjectCreated:*
            Filter:
              S3Key:
                Rules:
                  - Name: prefix
                    Value: "changelog/"
Why does it fail with this error?
Error: Failed to parse template: while parsing a block mapping
  in "<unicode string>", line 23, column 13:
            BucketName:
            ^
expected <block end>, but found '<tag>'
  in "<unicode string>", line 25, column 17:
                !Sub "${EnvironmentPrefix}example-b ...
                ^
I've been staring at this and comparing it to the examples at https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/serverless-policy-templates.html and can't figure out what's different.

I think your !Fn::ImportValue: syntax is the problem.
The ! prefix is YAML's short-form tag for an intrinsic function (for example, !ImportValue is the short form of Fn::ImportValue), so !Fn::ImportValue mixes the two notations and the YAML parser rejects it as an unexpected tag. Use the long form Fn::ImportValue: here, which also lets you keep the nested !Sub in short form (you can't combine two short-form functions directly):
Policies:
  - S3ReadPolicy:
      BucketName:
        Fn::ImportValue:
          !Sub "${EnvironmentPrefix}example-bucket"
Docs are here:
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference-importvalue.html
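The same !Fn::ImportValue: construct appears a second time in your template, under Events -> S3NewObjectEvent -> Properties -> Bucket, so presumably that occurrence needs the same change:
Events:
  S3NewObjectEvent:
    Type: S3
    Properties:
      Bucket:
        Fn::ImportValue:
          !Sub "${EnvironmentPrefix}example-bucket"
      Events: s3:ObjectCreated:*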

Related

Retrieving JFrog Artifact via Concourse pipeline - "empty version - return current version"

Trying to test artifactory-resource by running through the example pipeline.
groups:
  - name: all
    jobs:
      - set-pipeline
      - trigger-when-new-file-is-added-to-artifactory

jobs:
  - name: set-pipeline
    serial: true
    plan:
      - in_parallel:
          - get: ea-terraform-module-aws-rds
            trigger: true
      - set_pipeline: deploying-rds-instance-from-jfrog-artifact
        file: ea-terraform-module-aws-rds/examples/concourse/ea-terraform-module-aws-rds.yml

  - name: trigger-when-new-file-is-added-to-artifactory
    plan:
      - get: ea-rds-jfrog-repo
      - task: use-new-file
        config:
          platform: linux
          image_resource:
            type: docker-image
            source:
              repository: ubuntu
          inputs:
            - name: ea-rds-jfrog-repo
          run:
            path: cat
            args:
              - "./ea-rds-jfrog-repo/ea-terraform-module-aws-rds*.zip"

resource_types:
  - name: artifactory
    type: docker-image
    source:
      repository: pivotalservices/artifactory-resource

resources:
  - name: ea-rds-jfrog-repo
    type: artifactory
    check_every: 1m
    source:
      endpoint: https://xxx.jfrog.io/artifactory
      repository: "ea-terraform-module-aws-rds-1.4.0.zip"
      regex: "ea-terraform-module-aws-rds-(?<version>.*).zip"
      username: ${JF_USER}
      password: ${JF_PASSWORD}

  - name: ea-terraform-module-aws-rds
    type: git
    source:
      private_key: ((github_private_key))
      uri: git#github.com:xxx/xxx
      branch: SAAS-27134
Concourse Error: pipeline path -> deploying-rds-isntance-from-jfrog-artifact/ea-rds-jfrog-repo
(screenshot of the Concourse error)
Repo on JFrog Artifactory (screenshot)
I also tried adding a version parameter.
That error indicates that the Concourse resource is making a call to the Artifactory API but, instead of a JSON structure, it gets a null response. The resource then passes that null response to the jq utility, which expects an iterable object.
So why does it get a null response from the API?
It looks like, at minimum, the repository: in the ea-rds-jfrog-repo resource definition is incorrect.
Based on the second screenshot, I'm going to guess it should be set to:
repository: "/ea-terraform-module-aws-rds/terraform-module/aws"
I also recommend using https://github.com/spring-io/artifactory-resource, which is in active development, instead of the unmaintained one from pivotalservices.
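Putting that together with the rest of the original resource definition, the corrected resource would look roughly like this (the repository path is a guess based on the screenshot, so adjust it to the actual folder layout in Artifactory):
resources:
  - name: ea-rds-jfrog-repo
    type: artifactory
    check_every: 1m
    source:
      endpoint: https://xxx.jfrog.io/artifactory
      # folder path inside the repository, not the artifact file name
      repository: "/ea-terraform-module-aws-rds/terraform-module/aws"
      regex: "ea-terraform-module-aws-rds-(?<version>.*).zip"
      username: ${JF_USER}
      password: ${JF_PASSWORD}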

Ansible filter to extract specific keys from a dict into another dict

Using the nios lookup modules, I can get a list of dicts of records
- set_fact:
    records: "{{ lookup('community.general.nios', 'record:a', filter={'name~': 'abc.com'}) }}"
This returns something like
- ref: record:a/someBase64:name/view
  name: abc.com
  ipv4addr: 1.2.3.4
  view: default
- ref: record:a/someBase64:name/view
  name: def.abc.com
  ipv4addr: 1.2.3.5
  view: default
- ref: record:a/someBase64:name/view
  name: ghi.abc.com
  ipv4addr: 1.2.3.6
  view: default
I want to convert this into a dict of dicts of {name}: a: {ipv4addr}
abc.com:
  a: 1.2.3.4
def.abc.com:
  a: 1.2.3.5
ghi.abc.com:
  a: 1.2.3.6
So that I can then run a similar lookup to get other record types (e.g. cname) and combine them into the same dict. The items2dict filter seems halfway there, but I want the added a: key underneath.
If you just wanted a dictionary that maps name to an ipv4 address, like:
{
  "abc.com": "1.2.3.4",
  ...
}
You could use a simple json_query expression. Take a look at the
set_fact task in the following example:
- hosts: localhost
  gather_facts: false
  vars:
    data:
      - ref: record:a/someBase64:name/view
        name: abc.com
        ipv4addr: 1.2.3.4
        view: default
      - ref: record:a/someBase64:name/view
        name: def.abc.com
        ipv4addr: 1.2.3.5
        view: default
      - ref: record:a/someBase64:name/view
        name: ghi.abc.com
        ipv4addr: 1.2.3.6
        view: default
  tasks:
    - set_fact:
        name_map: "{{ dict(data|json_query('[].[name, ipv4addr]')) }}"
    - debug:
        var: name_map
Running that playbook will output:
PLAY [localhost] ***************************************************************

TASK [set_fact] ****************************************************************
ok: [localhost]

TASK [debug] *******************************************************************
ok: [localhost] => {
    "name_map": {
        "abc.com": "1.2.3.4",
        "def.abc.com": "1.2.3.5",
        "ghi.abc.com": "1.2.3.6"
    }
}

PLAY RECAP *********************************************************************
localhost                  : ok=2    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
You could use a similar structure to extract other data (e.g. cname records). This would get you dictionary per type of data, rather than merging everything together in a single dictionary as you've requested, but this might end up being easier to work with.
To get exactly the structure you want, you can use set_fact in a loop, like this:
- hosts: localhost
  vars:
    data:
      - ref: record:a/someBase64:name/view
        name: abc.com
        ipv4addr: 1.2.3.4
        view: default
      - ref: record:a/someBase64:name/view
        name: def.abc.com
        ipv4addr: 1.2.3.5
        view: default
      - ref: record:a/someBase64:name/view
        name: ghi.abc.com
        ipv4addr: 1.2.3.6
        view: default
  gather_facts: false
  tasks:
    - set_fact:
        name_map: "{{ name_map|combine({item.name: {'a': item.ipv4addr}}) }}"
      loop: "{{ data }}"
      vars:
        name_map: {}
    - debug:
        var: name_map
This will produce:
PLAY [localhost] ***************************************************************

TASK [set_fact] ****************************************************************
ok: [localhost] => (item={'ref': 'record:a/someBase64:name/view', 'name': 'abc.com', 'ipv4addr': '1.2.3.4', 'view': 'default'})
ok: [localhost] => (item={'ref': 'record:a/someBase64:name/view', 'name': 'def.abc.com', 'ipv4addr': '1.2.3.5', 'view': 'default'})
ok: [localhost] => (item={'ref': 'record:a/someBase64:name/view', 'name': 'ghi.abc.com', 'ipv4addr': '1.2.3.6', 'view': 'default'})

TASK [debug] *******************************************************************
ok: [localhost] => {
    "name_map": {
        "abc.com": {
            "a": "1.2.3.4"
        },
        "def.abc.com": {
            "a": "1.2.3.5"
        },
        "ghi.abc.com": {
            "a": "1.2.3.6"
        }
    }
}

PLAY RECAP *********************************************************************
localhost                  : ok=2    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
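To fold other record types into the same structure, the same loop pattern can be extended. A sketch for cname records, assuming the community.general.nios lookup returns objects with name and canonical fields (run it after the loop above so that name_map already exists):
    - set_fact:
        name_map: "{{ name_map | combine({item.name: {'cname': item.canonical}}, recursive=True) }}"
      loop: "{{ lookup('community.general.nios', 'record:cname', filter={'name~': 'abc.com'}, wantlist=True) }}"
recursive=True makes combine merge the per-name sub-dicts instead of overwriting them, so a name that has both an a and a cname record keeps both keys.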

Get a YAML file with HTTP and use it as a variable in an Ansible playbook

Background
I have a YAML file like this on a web server. I am trying to read it and make user accounts in the file with an Ansible playbook.
users:
  - number: 20210001
    name: Aoki Alice
    id: alice
  - number: 20210002
    name: Bob Bryant
    id: bob
  - number: 20210003
    name: Charlie Cox
    id: charlie
What I tried
To confirm how to read a downloaded YAML file dynamically with include_vars, I had written a playbook like this:
- name: Add users from list
  hosts: workstation
  tasks:
    - name: Download yaml
      get_url:
        url: http://fqdn.of.webserver/path/to/yaml.yml
        dest: "/tmp/tmp.yml"
      notify:
        - Read yaml
        - List usernames
  handlers:
    - name: Read yaml
      include_vars:
        file: /tmp/tmp.yml
        name: userlist
    - name: List usernames
      debug:
        var: "{{ item }}"
      loop: "{{ userlist.users }}"
Problem
In the handler Read yaml, I got the following error message. On the target machine (workstation.example.com), /tmp/tmp.yml is downloaded correctly.
RUNNING HANDLER [Read yaml] *****
fatal: [workstation.example.com]: FAILED! => {"ansible_facts": {"userlist": []},
"ansible_included_var_files": [], "changed": false, "message": "Could not find or
access '/tmp/tmp.yml' on the Ansible Controller.\nIf you are using a module and
expect the file to exist on the remote, see the remote src option"}
Question
How can I get a YAML file with HTTP and use it as a variable with include_vars?
Another option would be to use the uri module to retrieve the value into an Ansible variable, then the from_yaml filter to parse it.
Something like:
- name: Add users from list
  hosts: workstation
  tasks:
    - name: Download YAML userlist
      uri:
        url: http://fqdn.of.webserver/path/to/yaml.yml
        return_content: yes
      register: downloaded_yaml

    - name: Decode YAML userlist
      set_fact:
        userlist: "{{ downloaded_yaml.content | from_yaml }}"
Note that uri works on the Ansible Controller, while get_url works on the target host (or on the host specified in delegate_to); depending on your network configuration, you may need to use different proxy settings or firewall rules to permit the download.
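From there the decoded data can be consumed directly, without writing anything to disk on either host. A minimal sketch continuing the same play (the task name is illustrative):
    - name: List usernames
      debug:
        msg: "{{ item.id }}"
      loop: "{{ userlist.users }}"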
The include_vars task looks for files on the local (control) host, but you've downloaded the file to /tmp/tmp.yml on the remote host. There are a number of ways of getting this to work.
Perhaps the easiest is just running the download task on the control machine instead (note the use of delegate_to):
tasks:
  - name: Download yaml
    delegate_to: localhost
    get_url:
      url: http://fqdn.of.webserver/path/to/yaml.yml
      dest: "/tmp/tmp.yml"
    notify:
      - Read yaml
      - List usernames
This will download the file to /tmp/tmp.yml on the local system, where it will be available to include_vars. For example, if I run this playbook (which grabs YAML content from an example gist I just created)...
- hosts: target
  gather_facts: false
  tasks:
    - name: Download yaml
      delegate_to: localhost
      get_url:
        url: https://gist.githubusercontent.com/larsks/70d8ac27399cb51fde150902482acf2e/raw/676a1d17bcfc01b1a947f7f87e807125df5910c1/example.yaml
        dest: "/tmp/tmp.yml"
      notify:
        - Read yaml
        - List usernames
  handlers:
    - name: Read yaml
      include_vars:
        file: /tmp/tmp.yml
        name: userlist
    - name: List usernames
      debug:
        var: item
      loop: "{{ userlist.users }}"
...it produces the following output:
RUNNING HANDLER [Read yaml] ******************************************************************
ok: [target]

RUNNING HANDLER [List usernames] *************************************************************
ok: [target] => (item=bob) => {
    "ansible_loop_var": "item",
    "item": "bob"
}
ok: [target] => (item=alice) => {
    "ansible_loop_var": "item",
    "item": "alice"
}
ok: [target] => (item=mallory) => {
    "ansible_loop_var": "item",
    "item": "mallory"
}
Side note: based on what I see in your playbook, I'm not sure you want to be using notify and handlers here. If you run your playbook a second time, nothing will happen because the file /tmp/tmp.yml already exists, so the handlers won't get called.
With @larsks's answer, I made this playbook that works correctly in my environment:
- name: Download users list
  hosts: 127.0.0.1
  connection: local
  become: no
  tasks:
    - name: Download yaml
      get_url:
        url: http://fqdn.of.webserver/path/to/yaml/users.yml
        dest: ./users.yml

- name: Add users from list
  hosts: workstation
  tasks:
    - name: Read yaml
      include_vars:
        file: users.yml
    - name: List usernames
      debug:
        msg: "{{ item.id }}"
      loop: "{{ users }}"
Points
Run get_url on the control host
As @larsks said, you have to run the get_url module on the control host rather than the target host.
Add become: no to the task run on the control host
Without "become: no", you will get the following error message:
TASK [Gathering Facts] ******************************************************
fatal: [127.0.0.1]: FAILED! => {"ansible_facts": {}, "changed": false, "msg":
"The following modules failed to execute: setup\n setup: MODULE FAILURE\nSee
stdout/stderr for the exact error\n"}
Use connection: local rather than local_action
If you use local_action rather than connection: local like this:
- name: test get_url
  hosts: workstation
  tasks:
    - name: Download yaml
      local_action:
        module: get_url
        url: http://fqdn.of.webserver/path/to/yaml/users.yml
        dest: ./users.yml
    - name: Read yaml
      include_vars:
        file: users.yml
    - name: output remote yaml
      debug:
        msg: "{{ item.id }}"
      loop: "{{ users }}"
You will get the following error message:
TASK [Download yaml] ********************************************************
fatal: [workstation.example.com]: FAILED! => {"changed": false, "module_stderr":
"sudo: a password is required\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee
stdout/stderr for the exact error", "rc": 1}
get_url stores a file on the control host
In this situation, the get_url module stores users.yml on the control host (in the current directory), so you have to delete users.yml afterwards if you don't want to leave it behind.
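If you want that cleanup to happen automatically, a final play on the control host can remove the file. A minimal sketch (play and task names are illustrative):
- name: Clean up downloaded users list
  hosts: 127.0.0.1
  connection: local
  become: no
  tasks:
    - name: Remove users.yml
      file:
        path: ./users.yml
        state: absent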

Argo artifacts gives error "http: server gave HTTP response to HTTPS client"

I was setting up Argo in my k8s cluster, in the argo namespace.
I also installed MinIO as an artifact repository (https://github.com/argoproj/argo-workflows/blob/master/docs/configure-artifact-repository.md).
I am configuring a workflow which tries to access that non-default artifact repository:
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: artifact-passing-
spec:
  entrypoint: artifact-example
  templates:
    - name: artifact-example
      steps:
        - - name: generate-artifact
            template: whalesay
        - - name: consume-artifact
            template: print-message
            arguments:
              artifacts:
                # bind message to the hello-art artifact
                # generated by the generate-artifact step
                - name: message
                  from: "{{steps.generate-artifact.outputs.artifacts.hello-art}}"

    - name: whalesay
      container:
        image: docker/whalesay:latest
        command: [sh, -c]
        args: ["cowsay hello world | tee /tmp/hello_world.txt"]
      outputs:
        artifacts:
          # generate hello-art artifact from /tmp/hello_world.txt
          # artifacts can be directories as well as files
          - name: hello-art
            path: /tmp/hello_world.txt
            s3:
              endpoint: argo-artifacts-minio.argo:9000
              bucket: my-bucket
              key: /my-output-artifact.tgz
              accessKeySecret:
                name: argo-artifacts-minio
                key: accesskey
              secretKeySecret:
                name: argo-artifacts-minio
                key: secretkey

    - name: print-message
      inputs:
        artifacts:
          # unpack the message input artifact
          # and put it at /tmp/message
          - name: message
            path: /tmp/message
            s3:
              endpoint: argo-artifacts-minio.argo:9000
              bucket: my-bucket
              accessKeySecret:
                name: argo-artifacts-minio
                key: accesskey
              secretKeySecret:
                name: argo-artifacts-minio
                key: secretkey
      container:
        image: alpine:latest
        command: [sh, -c]
        args: ["cat /tmp/message"]
I created the workflow in the argo namespace with:
argo submit --watch artifact-passing-nondefault-new.yaml -n argo
But the workflow fails with an error:
STEP PODNAME DURATION MESSAGE
✖ artifact-passing-z9g64 child 'artifact-passing-z9g64-150231068' failed
└---⚠ generate-artifact artifact-passing-z9g64-150231068 12s failed to save outputs: Get https://argo-artifacts-minio.argo:9000/my-bucket/?location=: http: server gave HTTP response to HTTPS client
Can someone help me to solve this error?
Since the MinIO setup runs without TLS, the workflow must specify that it should connect to the artifact repository insecurely.
Including a field insecure: true in the s3 definition section of the workflow solves the issue.
s3:
  endpoint: argo-artifacts-minio.argo:9000
  insecure: true
  bucket: my-bucket
  key: /my-output-artifact.tgz
  accessKeySecret:
    name: argo-artifacts-minio
    key: accesskey
  secretKeySecret:
    name: argo-artifacts-minio
    key: secretkey
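Note that the workflow above declares an s3 block in two places (the output artifact of whalesay and the input artifact of print-message); presumably insecure: true is needed in both, since each block drives its own request to MinIO. For the input side that would look like:
s3:
  endpoint: argo-artifacts-minio.argo:9000
  insecure: true
  bucket: my-bucket
  accessKeySecret:
    name: argo-artifacts-minio
    key: accesskey
  secretKeySecret:
    name: argo-artifacts-minio
    key: secretkey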

Using parameter placeholders in codeception.yml

I'm setting up Codeception's Db module and would like to use the parameters from my Symfony 2 parameters.yml file.
Basically something like this:
paths:
  tests: tests
  log: tests/_log
  data: tests/_data
  helpers: tests/_helpers
settings:
  bootstrap: _bootstrap.php
  suite_class: \PHPUnit_Framework_TestSuite
  colors: true
  memory_limit: 1024M
  log: true
modules:
  config:
    Symfony2:
      app_path: 'app'
      var_path: 'app'
      environment: 'test'
    Db:
      dsn: "mysql:host='%test_database_host%';dbname='%test_database_name%'"
      user: "%test_database_user%"
      password: "%test_database_password%"
      dump: tests/_data/test_data.sql
The placeholders (e.g. %test_database_user%) aren't replaced by the values in the parameters.yml file in Symfony 2's app/config directory.
parameters.yml:
parameters:
  test_database_name: testdb
  test_database_host: 127.0.0.1
  test_database_user: root
  test_database_password: thisismypassword
What's the best way to do this?
Thanks.
In order to use params, you should add a params section to codeception.yml:
params:
  - env         # to get params from environment vars
  - params.yml  # to get params from a yaml file (Symfony style)
  - params.env  # to get params from .env (Laravel style)
Then you can reference param values using % placeholders:
modules:
  enabled:
    - Db:
        dsn: %DB_DSN%
        user: %DB_USER%
        password: %DB_PASS%
This has been possible since this PR: https://github.com/Codeception/Codeception/pull/2855
In your example, you can add the params section to your codeception.yml like this:
params:
  - app/config/parameters.yml
paths:
  tests: tests
  log: tests/_log
  data: tests/_data
  helpers: tests/_helpers
settings:
  bootstrap: _bootstrap.php
  suite_class: \PHPUnit_Framework_TestSuite
  colors: true
  memory_limit: 1024M
  log: true
modules:
  config:
    Symfony2:
      app_path: 'app'
      var_path: 'app'
      environment: 'test'
    Db:
      dsn: "mysql:host='%test_database_host%';dbname='%test_database_name%'"
      user: "%test_database_user%"
      password: "%test_database_password%"
      dump: tests/_data/test_data.sql
Be aware that you cannot access container parameters like %kernel.project_dir%.
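For illustration, with the parameters.yml from the question in place, the Db module would end up configured with the resolved values (shown expanded here, assuming the placeholder substitution works as described above):
Db:
  dsn: "mysql:host='127.0.0.1';dbname='testdb'"
  user: "root"
  password: "thisismypassword"
  dump: tests/_data/test_data.sql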
