Simple Swagger.v2 array definition not responding - swagger-2.0

I am trying to learn Swagger by creating a mock server for a simple API that returns:
a) an object of the form {id: someid, name: some name}
b) an array of those objects
I have the first part working, but the second just won't work. Could someone have a look at my YAML definition below?
swagger: "2.0"
info:
  version: 1.0.0
  title: Simple API
  description: A simple API to learn how to write OpenAPI Specification
schemes:
  - http
host: localhost:8080
basePath: /
Here are the two paths. The first (/api/dataset) works; the second (/api/datasets) does not.
paths:
  /api/dataset:
    get:
      summary: summary
      description: desc
      responses:
        200:
          description: dataset
          schema:
            $ref: '#/definitions/dataset'
  /api/datasets:
    get:
      summary: summary
      description: desc
      responses:
        200:
          description: datasets list
          schema:
            $ref: '#/definitions/datasets'
These are the definitions; I suspect I'm doing something wrong here...
definitions:
  dataset:
    type: object
    properties:
      dataset_id:
        type: string
      name:
        type: string
    required:
      - dataset_id
      - name
    example:
      dataset_id: FunnyJokesData
      name: FunnyJokesData
  datasets:
    type: array
    items:
      $ref: '#/definitions/dataset'
    example:
      - dataset_id: example_01
        name: example_01
      - dataset_id: example_02
        name: example_02
      - dataset_id: example_03
        name: example_03
After generating a stub server with this definition, the curl response for
/api/dataset has a response body:
$ curl -X GET "http://localhost:8080/api/dataset" -H "accept: application/json"
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100    64    0    64    0     0   4000      0 --:--:-- --:--:-- --:--:--  4000
{
  "dataset_id": "FunnyJokesData",
  "name": "FunnyJokesData"
}
but the curl response for '/api/datasets' is empty:
$ curl -X GET "http://localhost:8080/api/datasets" -H "accept: application/json"
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
I can't figure out why one works and the other does not.
Thanks for looking.

I guess "won't work" applies just to the Node.js server stub, because the Python/Flask stub returns the string "do some magic!" instead of JSON for both endpoints.
It looks like Swagger Codegen's Node.js generator does not support an array-level example; that's why the corresponding response is empty. You can submit a bug report here: https://github.com/swagger-api/swagger-codegen/issues.
One workaround that seems to be working with the Node.js generator is to use response examples instead:
/api/datasets:
  get:
    summary: summary
    description: desc
    responses:
      200:
        description: datasets list
        schema:
          $ref: '#/definitions/datasets'
        examples:
          application/json:
            - dataset_id: example_01
              name: example_01
            - dataset_id: example_02
              name: example_02
            - dataset_id: example_03
              name: example_03
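After regenerating the Node.js stub, /api/datasets should then return the response example. A sketch of the expected result (curl's progress output omitted; exact formatting may differ):
$ curl -X GET "http://localhost:8080/api/datasets" -H "accept: application/json"
[
  { "dataset_id": "example_01", "name": "example_01" },
  { "dataset_id": "example_02", "name": "example_02" },
  { "dataset_id": "example_03", "name": "example_03" }
]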

Related

Why does OpenAPI Generator name the API service class "DefaultService" in Angular 13

I added OpenAPI Generator to my Angular project, but when I generate the API with the command "openapi-generator-cli generate -i ../openApi/src/main/resources/api.yml -g typescript-angular -o src/app/core/api/v1", the name of the generated API service is default.service.ts. How can I change the generated name?
api.yml:
openapi: 3.0.3
info:
  title: Test
  description:
  version: 1.0.0
servers:
  - url: /api
paths:
  /tests:
    $ref: './controllers/test.yml#/tests'
test.yml:
tests:
  get:
    operationId: loadTests
    responses:
      '200':
        description:
        content:
          application/json:
            schema:
              type: array
              items:
                $ref: '../model/test.yml#/b'
script added to package.json:
"generate:api": "openapi-generator-cli generate -i ../openApi/src/main/resources/api.yml -g typescript-angular -o src/app/core/api/v1"
version of openapi-generator-cli:
"@openapitools/openapi-generator-cli": "^2.4.26"
You can achieve this using tags in your operation object.
tests:
  get:
    tags:
      - testtag
    operationId: loadTests
    responses:
      '200':
        description:
        ...
In the above example, you'd get testtag.service.ts. You may specify more than one tag, but be aware: for each tag there will be one .service.ts file. You can, but don't have to, declare these tags separately to add a description.
All paths that have no tag end up in the default.service.ts file.
To learn more about tags and operations, have a look at the Operation Object and/or Tag Object chapters of the specification.
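If you do want a tag description, a minimal sketch of the optional top-level tags section (testtag and its description are just placeholders) would look like this:
openapi: 3.0.3
info:
  title: Test
  version: 1.0.0
tags:
  - name: testtag
    description: Operations on tests   # optional description for the generated service's tag
paths:
  ...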

How to use a Serverless environment variable in a Step Functions parameter

I have a query with hardcoded dates in the Parameters section. Instead, I want to pass them as environment variables. Any suggestions on how to parameterize the QueryString parameter?
service: service-name
frameworkVersion: '2'
provider:
  name: aws
  runtime: go1.x
  lambdaHashingVersion: 20201221
  stage: ${opt:stage, self:custom.defaultStage}
  region: us-east-1
  tags: ${self:custom.tagsObject}
  logRetentionInDays: 1
  timeout: 10
  deploymentBucket: lambda-repository
  memorySize: 128
  tracing:
    lambda: true
plugins:
  - serverless-step-functions
configValidationMode: error
stepFunctions:
  stateMachines:
    callAthena:
      name: datasorting-dev
      type: STANDARD
      role: ${self:custom.datasorting.${self:provider.stage}.iam}
      definition:
        Comment: "Data Refresh"
        StartAt: Refresh Data
        States:
          Refresh Data:
            Type: Task
            Resource: arn:aws:states:::athena:startQueryExecution.sync
            Parameters:
              QueryString: >-
                ALTER TABLE table.raw_data ADD IF NOT EXISTS
                PARTITION (YEAR=2021, MONTH=02, DAY=15, hour=00)
              WorkGroup: primary
              ResultConfiguration:
                OutputLocation: s3://output/location
            End: true
You can replace any value in your serverless.yml enclosed in ${} brackets; see the Serverless Framework guide to variables:
https://www.serverless.com/framework/docs/providers/aws/guide/variables/
For example, you can create a custom: section that looks up environment variables and falls back to default values when they are not present:
service: service-name
frameworkVersion: '2'
custom:
  year: ${env:YEAR, 'default-year'}
  month: ${env:MONTH, 'default-month'}
  day: ${env:DAY, 'default-day'}
  hour: ${env:HOUR, 'default-hour'}
stepFunctions:
  stateMachines:
    callAthena:
      ...
      Parameters:
        QueryString: >-
          ALTER TABLE table.raw_data ADD IF NOT EXISTS
          PARTITION (YEAR=${self:custom.year}, MONTH=${self:custom.month}, DAY=${self:custom.day}, hour=${self:custom.hour})
      ...
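The values can then be supplied as environment variables at deploy time; a minimal sketch, assuming a Unix shell and the standard serverless CLI:
$ YEAR=2021 MONTH=02 DAY=15 HOUR=00 serverless deploy --stage dev
Any variable left unset falls back to the default after the comma (e.g. 'default-year').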

HOT template for cinder volume with or without volume_type

I am trying to write a HOT template for an OpenStack volume and need to have the volume_type as a parameter. I also need to support the case when the parameter is not given, defaulting to the Cinder default volume type.
My first attempt was to pass null as the volume_type, hoping it would give the default volume type. However, no matter what I pass (null, ~, default, ""), there seems to be no way to get the default volume type.
type: OS::Cinder::Volume
properties:
  name: test
  size: 1
  volume_type: { if: ["voltype_given", { get_param: [typename] }, null] }
Is there any way to get the default volume type when the volume_type property is defined?
Alternatively, is there any way to put the volume_type property itself behind a conditional? I tried several ways, but no luck. Something like:
type: OS::Cinder::Volume
properties:
  if: ["voltype_given", [volume_type: { get_param: [typename] }], ""]
  name: test
  size: 1
ERROR: TypeError: : resources.kk-test-vol: : 'If' object is not iterable
Could you do something like this?
---
parameters:
  typename:
    type: string
    default: ''  # empty string means "use the Cinder default volume type"
conditions:
  use_default_type: {equals: [{get_param: typename}, '']}
resources:
  MyVolumeWithDefault:
    condition: use_default_type
    type: OS::Cinder::Volume
    properties:
      name: test
      size: 1
  MyVolumeWithExplicit:
    condition: {not: use_default_type}
    type: OS::Cinder::Volume
    properties:
      name: test
      size: 1
      volume_type: {get_param: typename}
  # e.g. if you need to refer to the volume from another resource
  MyVolumeAttachment:
    type: OS::Cinder::VolumeAttachment
    properties:
      instance_uuid: some-instance-uuid
      volume_id:
        if:
          - use_default_type
          - {get_resource: MyVolumeWithDefault}
          - {get_resource: MyVolumeWithExplicit}
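The stack can then be created with or without the parameter; for example (volume.yaml and fast-ssd are placeholders, pick a real type from openstack volume type list):
$ openstack stack create -t volume.yaml my-stack
$ openstack stack create -t volume.yaml --parameter typename=fast-ssd my-stack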

Not able to execute lifecycle operation using script plugin

I'm trying to learn how to use the script plugin. I'm following the script plugin docs here but am not able to make it work.
I've tried to use the plugin in two ways. The first maps the cloudify.interfaces.lifecycle.start operation directly to a script:
tosca_definitions_version: cloudify_dsl_1_3
imports:
  - 'http://www.getcloudify.org/spec/cloudify/4.5.5/types.yaml'
node_templates:
  Import_Project:
    type: cloudify.nodes.WebServer
    capabilities:
      scalable:
        properties:
          default_instances: 1
    interfaces:
      cloudify.interfaces.lifecycle:
        start:
          implementation: scripts/create_project.sh
          inputs: {}
The second maps the operation directly to the script plugin's run task:
tosca_definitions_version: cloudify_dsl_1_3
imports:
  - 'http://www.getcloudify.org/spec/cloudify/4.5.5/types.yaml'
node_templates:
  Import_Project:
    type: cloudify.nodes.WebServer
    capabilities:
      scalable:
        properties:
          default_instances: 1
    interfaces:
      cloudify.interfaces.lifecycle:
        start:
          implementation: script.script_runner.tasks.run
          inputs:
            script_path: scripts/create_project.sh
I've created a directory named scripts and placed the below create_project.sh script in this directory:
#! /bin/bash -e
ctx logger info "Hello to this world"
hostname
I'm getting errors while validating the blueprint.
Error when operation is mapped directly to a script:
[2019-04-13 13:29:40.594] [DEBUG] DslParserExecClient - got output from dsl parser Could not extract plugin from operation mapping 'scripts/create_project.sh', which is declared for operation 'start'. In interface 'cloudify.interfaces.lifecycle' in node 'Import_Project' of type 'cloudify.nodes.WebServer'
in: /opt/cloudify-composer/backend/dev/workspace/2/tmp-27O0e1t813N6as
in line: 3, column: 2
path: node_templates.Import_Project
value: {'interfaces': {'cloudify.interfaces.lifecycle': {'start': {'implementation': 'scripts/create_project.sh', 'inputs': {}}}}, 'type': 'cloudify.nodes.WebServer', 'capabilities': {'scalable': {'properties': {'default_instances': 1}}}}
Error when using a direct mapping:
[2019-04-13 13:25:21.015] [DEBUG] DslParserExecClient - got output from dsl parser node 'Import_Project' has no relationship which makes it contained within a host and it has a plugin 'script' with 'host_agent' as an executor. These types of plugins must be installed on a host
in: /opt/cloudify-composer/backend/dev/workspace/2/tmp-279QCz2CV3Y81L
in line: 2, column: 0
path: node_templates
value: {'Import_Project': {'interfaces': {'cloudify.interfaces.lifecycle': {'start': {'implementation': 'script.script_runner.tasks.run', 'inputs': {'script_path': 'scripts/create_project.sh'}}}}, 'type': 'cloudify.nodes.WebServer', 'capabilities': {'scalable': {'properties': {'default_instances': 1}}}}}
What is missing to make this work?
I also found that the Cloudify Script Plugin examples from the documentation do not work: https://docs.cloudify.co/4.6/working_with/official_plugins/configuration/script/
However, I found I can make the examples work by adding an executor line alongside the implementation line to override the host_agent executor, as follows:
tosca_definitions_version: cloudify_dsl_1_3
imports:
  - 'http://www.getcloudify.org/spec/cloudify/4.5.5/types.yaml'
node_templates:
  Import_Project:
    type: cloudify.nodes.WebServer
    capabilities:
      scalable:
        properties:
          default_instances: 1
    interfaces:
      cloudify.interfaces.lifecycle:
        start:
          implementation: scripts/create_project.sh
          executor: central_deployment_agent
          inputs: {}
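Another way around the second error ("These types of plugins must be installed on a host") would be to contain the node in a host, so the default host_agent executor has somewhere to run. A rough sketch, assuming a hypothetical host_vm compute node (its configuration depends on your IaaS plugin):
node_templates:
  host_vm:
    type: cloudify.nodes.Compute   # hypothetical host node
  Import_Project:
    type: cloudify.nodes.WebServer
    relationships:
      - type: cloudify.relationships.contained_in
        target: host_vm
    interfaces:
      cloudify.interfaces.lifecycle:
        start:
          implementation: scripts/create_project.sh
          inputs: {}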

How to find a key in an Ansible (YAML) dictionary from its value?

I have this dictionary:
MyClouds:
  Devwatt:
    ExternalNetwork: PublicRSC
    Flavors:
      - Flavor_1cpu_1gb: Devwatt_1cpu_1gb
      - Flavor_1cpu_2gb: Devwatt_1cpu_2gb
      - Flavor_1cpu_4gb: Devwatt_1cpu_4gb
  Fuga:
    ExternalNetwork: Internet
    Flavors:
      - Flavor_1cpu_1gb: Fuga_1cpu_1gb
      - Flavor_1cpu_2gb: Fuga_1cpu_2gb
      - Flavor_1cpu_4gb: Fuga_1cpu_4gb
      - Flavor_1cpu_8gb: Fuga_1cpu_8gb
I have to migrate from one OpenStack cloud to another, and one of my problems is to find correspondences between flavors.
I want to find which flavor (key) has the value "Devwatt_1cpu_2gb" in "Devwatt", and then get the value of the same key in "Fuga".
I tried a lot of solutions (with_dict, when, Jinja filters, json_query) but I can't find a way to do that.
Could you help me?
Inspired by Eric's answer and this useful resource, I finally used this solution:
I changed my data structure a little and put it in a file matrice.yml:
MyClouds:
  Devwatt:
    ExternalNetwork: PublicRSC
    Flavors:
      - name: Flavor_1cpu_1gb
        FlavorName: Devwatt_1cpu_1gb
      - name: Flavor_2cpu_1gb
        FlavorName: Devwatt_2cpu_1gb
      - name: Flavor_1cpu_2gb
        FlavorName: Devwatt_1cpu_2gb
  Fuga:
    ExternalNetwork: Internet
    Flavors:
      - name: Flavor_1cpu_1gb
        FlavorName: Fuga_1cpu_1gb
      - name: Flavor_2cpu_1gb
        FlavorName: Fuga_2cpu_1gb
      - name: Flavor_1cpu_2gb
        FlavorName: Fuga_1cpu_2gb
Then I used these filters in my playbook:
---
- hosts: localhost
  connection: local
  gather_facts: false
  vars:
    SourceFlavorName: "Devwatt_2cpu_1gb"
  tasks:
    - name: Get flavors matrice
      include_vars:
        file: matrice.yml
    - name: Get generic name from flavor name of source cloud
      debug:
        msg: "{{ MyClouds.Devwatt.Flavors | selectattr('FlavorName', 'search', '^' + SourceFlavorName + '$') | map(attribute='name') | list }}"
      register: result
    - name: Get flavor name for target cloud from generic name
      debug:
        msg: "{{ MyClouds.Fuga.Flavors | selectattr('name', 'search', '^' + result.msg[0] + '$') | map(attribute='FlavorName') | list }}"
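With SourceFlavorName set to "Devwatt_2cpu_1gb", the first task should print ["Flavor_2cpu_1gb"] and the second ["Fuga_2cpu_1gb"].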
With this solution I can have any number of clouds and easily find correspondences between flavors from a source cloud to a target cloud.
Why not use a simple mapping with a dict where the keys are "Devwatt" flavors and the values are "Fuga" flavors, like this:
---
- hosts: localhost
  vars:
    FlavorsMapping:
      Devwatt_1cpu_1gb: Fuga_1cpu_1gb
      Devwatt_1cpu_2gb: Fuga_1cpu_2gb
      Devwatt_1cpu_4gb: Fuga_1cpu_4gb
  tasks:
    - debug:
        var: FlavorsMapping['Devwatt_1cpu_2gb']
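Running that play (e.g. ansible-playbook mapping.yml, assuming that file name) should print something like:
ok: [localhost] => {
    "FlavorsMapping['Devwatt_1cpu_2gb']": "Fuga_1cpu_2gb"
}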
