Why does OpenAPI Generator name the generated API service class "DefaultService" in Angular 13? - openapi-generator

I added OpenAPI Generator to my Angular project, but when I generate the API with the command "openapi-generator-cli generate -i ../openApi/src/main/resources/api.yml -g typescript-angular -o src/app/core/api/v1", the generated API service is named default.service.ts. How can I change the generated name?
api.yml:
openapi: 3.0.3
info:
  title: Test
  description:
  version: 1.0.0
servers:
  - url: /api
paths:
  /tests:
    $ref: './controllers/test.yml#/tests'
test.yml:
tests:
  get:
    operationId: loadTests
    responses:
      '200':
        description:
        content:
          application/json:
            schema:
              type: array
              items:
                $ref: '../model/test.yml#/b'
Script added to package.json:
"generate:api": "openapi-generator-cli generate -i ../openApi/src/main/resources/api.yml -g typescript-angular -o src/app/core/api/v1"
Version of openapi-generator-cli:
"@openapitools/openapi-generator-cli": "^2.4.26",

You can achieve this using tags in your operation object.
tests:
  get:
    tags:
      - testtag
    operationId: loadTests
    responses:
      '200':
        description:
        ...
In the above example, you'd get testtag.service.ts. You may specify more than one tag, but be aware that each tag produces its own .service.ts file. You can, but don't have to, declare these tags separately to add a description.
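For example, a tag can optionally be declared at the top level of the spec to attach a description (the description text here is only a placeholder):
tags:
  - name: testtag
    description: Operations that load tests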
All paths that have no tag will end up in the default.service.ts-file.
To learn more about tags and operations, have a look at the operation object chapter and/or the tag object chapter of the specification.

Related

Convert protoc to Prototool.yaml

How do I convert these protoc commands to a prototool.yaml file?
protoc --proto_path=api/proto/v1 --proto_path=third_party --go_out=plugins=grpc:pkg/api/v1 todo-service.proto
protoc --proto_path=api/proto/v1 --proto_path=third_party --grpc-gateway_out=logtostderr=true:pkg/api/v1 todo-service.proto
protoc --proto_path=api/proto/v1 --proto_path=third_party --swagger_out=logtostderr=true:api/swagger/v1 todo-service.proto
With the old swagger name in grpc-gateway, you should be able to do something like:
# old name
- name: swagger
  type: gogo
  flags: logtostderr=true,grpc_api_configuration=path/to/router.yaml
  output: .

# new name
- name: openapiv2
  type: gogo
  flags: logtostderr=true,grpc_api_configuration=path/to/router.yaml
  output: .
All options can be found in https://github.com/grpc-ecosystem/grpc-gateway/blob/master/protoc-gen-openapiv2/main.go
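Putting the three protoc invocations together, a rough prototool.yaml sketch might look like this (the import_path is a placeholder for your own Go module path; adjust flags and outputs as needed):
protoc:
  includes:
    - third_party
generate:
  go_options:
    import_path: github.com/example/todo/pkg/api/v1
  plugins:
    # replaces: protoc --go_out=plugins=grpc:pkg/api/v1
    - name: go
      type: go
      flags: plugins=grpc
      output: pkg/api/v1
    # replaces: protoc --grpc-gateway_out=logtostderr=true:pkg/api/v1
    - name: grpc-gateway
      type: go
      flags: logtostderr=true
      output: pkg/api/v1
    # replaces: protoc --swagger_out=logtostderr=true:api/swagger/v1
    - name: swagger
      type: gogo
      flags: logtostderr=true
      output: api/swagger/v1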

Not able to execute lifecycle operation using script plugin

I'm trying to learn how to use the script plugin. I'm following the script plugin docs here but am not able to make it work.
I've tried to use the plugin in two ways. The first maps the cloudify.interfaces.lifecycle.start operation directly to a script:
tosca_definitions_version: cloudify_dsl_1_3
imports:
  - 'http://www.getcloudify.org/spec/cloudify/4.5.5/types.yaml'
node_templates:
  Import_Project:
    type: cloudify.nodes.WebServer
    capabilities:
      scalable:
        properties:
          default_instances: 1
    interfaces:
      cloudify.interfaces.lifecycle:
        start:
          implementation: scripts/create_project.sh
          inputs: {}
The second uses a direct mapping to the plugin task:
tosca_definitions_version: cloudify_dsl_1_3
imports:
  - 'http://www.getcloudify.org/spec/cloudify/4.5.5/types.yaml'
node_templates:
  Import_Project:
    type: cloudify.nodes.WebServer
    capabilities:
      scalable:
        properties:
          default_instances: 1
    interfaces:
      cloudify.interfaces.lifecycle:
        start:
          implementation: script.script_runner.tasks.run
          inputs:
            script_path: scripts/create_project.sh
I've created a directory named scripts and placed the following create_project.sh script in it:
#! /bin/bash -e
ctx logger info "Hello to this world"
hostname
I'm getting errors while validating the blueprint.
Error when operation is mapped directly to a script:
[2019-04-13 13:29:40.594] [DEBUG] DslParserExecClient - got output from dsl parser Could not extract plugin from operation mapping 'scripts/create_project.sh', which is declared for operation 'start'. In interface 'cloudify.interfaces.lifecycle' in node 'Import_Project' of type 'cloudify.nodes.WebServer'
in: /opt/cloudify-composer/backend/dev/workspace/2/tmp-27O0e1t813N6as
in line: 3, column: 2
path: node_templates.Import_Project
value: {'interfaces': {'cloudify.interfaces.lifecycle': {'start': {'implementation': 'scripts/create_project.sh', 'inputs': {}}}}, 'type': 'cloudify.nodes.WebServer', 'capabilities': {'scalable': {'properties': {'default_instances': 1}}}}
Error when using a direct mapping:
[2019-04-13 13:25:21.015] [DEBUG] DslParserExecClient - got output from dsl parser node 'Import_Project' has no relationship which makes it contained within a host and it has a plugin 'script' with 'host_agent' as an executor. These types of plugins must be installed on a host
in: /opt/cloudify-composer/backend/dev/workspace/2/tmp-279QCz2CV3Y81L
in line: 2, column: 0
path: node_templates
value: {'Import_Project': {'interfaces': {'cloudify.interfaces.lifecycle': {'start': {'implementation': 'script.script_runner.tasks.run', 'inputs': {'script_path': 'scripts/create_project.sh'}}}}, 'type': 'cloudify.nodes.WebServer', 'capabilities': {'scalable': {'properties': {'default_instances': 1}}}}}
What is missing to make this work?
I also found that the Cloudify script plugin examples from the documentation do not work: https://docs.cloudify.co/4.6/working_with/official_plugins/configuration/script/
However, I found I can make the examples work by adding an executor line alongside the implementation line to override the host_agent executor, as follows:
tosca_definitions_version: cloudify_dsl_1_3
imports:
  - 'http://www.getcloudify.org/spec/cloudify/4.5.5/types.yaml'
node_templates:
  Import_Project:
    type: cloudify.nodes.WebServer
    capabilities:
      scalable:
        properties:
          default_instances: 1
    interfaces:
      cloudify.interfaces.lifecycle:
        start:
          implementation: scripts/create_project.sh
          executor: central_deployment_agent
          inputs: {}
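Alternatively, the second error message hints at the other way to satisfy the parser: give the node a containment relationship with a host, so that the default host_agent executor becomes legal. A sketch of that approach, assuming a hypothetical my_host compute node (all property values are placeholders):
tosca_definitions_version: cloudify_dsl_1_3
imports:
  - 'http://www.getcloudify.org/spec/cloudify/4.5.5/types.yaml'
node_templates:
  my_host:
    type: cloudify.nodes.Compute
    properties:
      ip: 192.0.2.10
      agent_config:
        install_method: remote
        user: ubuntu
        key: /path/to/key.pem
  Import_Project:
    type: cloudify.nodes.WebServer
    relationships:
      - type: cloudify.relationships.contained_in
        target: my_host
    interfaces:
      cloudify.interfaces.lifecycle:
        start:
          implementation: scripts/create_project.sh
          inputs: {}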

QBS: Explicitly setting qbs.profiles inside Products causes the build to fail

My use-case is this:
I have a static library which I want to be available for several profiles (e.g. "gcc", "arm-gcc", "mips-gcc").
I also have an application which links to this library, but this application should only be built with one specific profile (e.g. "arm-gcc").
For this I am modifying the app-and-lib QBS example.
The lib.qbs file:
import qbs 1.0

Product {
    qbs.profiles: ["gcc", "arm-gcc", "mips-gcc"] // I added only this line
    type: "staticlibrary"
    name: "mylib"
    files: [
        "lib.cpp",
        "lib.h",
    ]
    Depends { name: 'cpp' }
    cpp.defines: ['CRUCIAL_DEFINE']
    Export {
        Depends { name: "cpp" }
        cpp.includePaths: [product.sourceDirectory]
    }
}
The app.qbs file:
import qbs 1.0

Product {
    qbs.profiles: ["arm-gcc"] // I added only this line
    type: "application"
    consoleApplication: true
    files: [ "main.cpp" ]
    Depends { name: "cpp" }
    Depends { name: "mylib" }
}
The app build fails. Qbs wrongly tries to link to the "gcc" version of the library instead of the "arm-gcc" version, as you can see in the log:
Build graph does not yet exist for configuration 'default'. Starting from scratch.
Resolving project for configuration default
Setting up build graph for configuration default
Building for configuration default
compiling lib.cpp [mylib {"profile":"gcc"}]
compiling lib.cpp [mylib {"profile":"arm-gcc"}]
compiling lib.cpp [mylib {"profile":"mips-gcc"}]
compiling main.cpp [app]
creating libmylib.a [mylib {"profile":"gcc"}]
creating libmylib.a [mylib {"profile":"mips-gcc"}]
creating libmylib.a [mylib {"profile":"arm-gcc"}]
linking app [app]
ERROR: /usr/bin/arm-linux-gnueabihf-g++ -o /home/user/programs/qbs/usr/local/share/qbs/examples/app-and-lib/default/app.7d104347/app /home/user/programs/qbs/usr/local/share/qbs/examples/app-and-lib/default/app.7d104347/3a52ce780950d4d9/main.cpp.o /home/user/programs/qbs/usr/local/share/qbs/examples/app-and-lib/default/mylib.eyJwcm9maWxlIjoiZ2NjIn0-.792f47ec/libmylib.a
ERROR: /home/user/programs/qbs/usr/local/share/qbs/examples/app-and-lib/default/mylib.eyJwcm9maWxlIjoiZ2NjIn0-.792f47ec/libmylib.a: error adding symbols: File format not recognized
collect2: error: ld returned 1 exit status
ERROR: Process failed with exit code 1.
The following products could not be built for configuration default:
app
The build fails only when a single profile is selected in app.qbs and that profile is not the first one listed in the qbs.profiles line of lib.qbs.
When two or more profiles are selected, the build succeeds.
My analysis:
I think this problem is related to multiplexing:
The lib.qbs file contains more than one profile. This turns on multiplexing when building the library, which, in turn, adds an additional 'multiplexConfigurationId' to the build-directory name (moduleloader.cpp).
The app.qbs file contains only one profile, so multiplexing is not turned on and the build-directory name does not get the extra string.
The problem can be solved by changing the code (moduleloader.cpp) so that multiplexing is turned on even if there is only one profile, i.e. with the following patch:
--- moduleloader.cpp	2018-10-24 16:17:43.633527397 +0300
+++ moduleloader.cpp.new	2018-10-24 16:18:27.541370544 +0300
@@ -872,7 +872,7 @@
             = callWithTemporaryBaseModule<const MultiplexInfo>(dummyContext,
                                                                extractMultiplexInfoFromProduct);
-    if (multiplexInfo.table.size() > 1)
+    if (multiplexInfo.table.size() > 0)
         productItem->setProperty(StringConstants::multiplexedProperty(), VariantValue::trueValue());
     VariantValuePtr productNameValue = VariantValue::create(productName);
@@ -891,7 +891,7 @@
     const QString multiplexConfigurationId = multiplexInfo.toIdString(row);
     const VariantValuePtr multiplexConfigurationIdValue
         = VariantValue::create(multiplexConfigurationId);
-    if (multiplexInfo.table.size() > 1 || aggregator) {
+    if (multiplexInfo.table.size() > 0 || aggregator) {
         multiplexConfigurationIdValues.push_back(multiplexConfigurationIdValue);
         item->setProperty(StringConstants::multiplexConfigurationIdProperty(),
                           multiplexConfigurationIdValue);
This worked for my use case. I don't know if it makes sense in a broader view.
Finally, the questions:
Does it all make sense?
Is this a normal behavior?
Is this use-case simply not supported?
Is there a better solution?
Thanks in advance.
Yes, the default behavior with multiplexing is that a non-multiplexed product depends on all variants of the dependency. In general, there is no way for a user to change that behavior, but there should be.
However, luckily for you, profiles are special:
Depends { name: "mylib"; profiles: "arm-gcc" }
This should fix your problem.
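Applied to the app.qbs from the question, that looks like:
import qbs 1.0

Product {
    qbs.profiles: ["arm-gcc"]
    type: "application"
    consoleApplication: true
    files: [ "main.cpp" ]
    Depends { name: "cpp" }
    // restrict the dependency to the matching variant of the library
    Depends { name: "mylib"; profiles: "arm-gcc" }
}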

How can I find out whether the Rscript ran successfully in the Docker container via CWL?

I have written CWL code that runs an Rscript command in a Docker container. I have two files, a CWL file and a YAML file, and I run them with the command:
cwltool --debug a_code.cwl a_input.yaml
The output says the process status is success, but there is no output file, and the result reports "output": null. I want to know whether there is a way to confirm that the Rscript ran successfully in the container, and I actually want to know why the output files are null.
The final part of the result is:
[job a_code.cwl] {
    "output": null,
    "errorFile": null
}
[job a_code.cwl] Removing input staging directory /tmp/tmpUbyb7k
[job a_code.cwl] Removing temporary directory /tmp/tmpkUIOnw
Removing intermediate output directory /tmp/tmpCG9Xs1
{
    "output": null,
    "errorFile": null
}
Final process status is success
R code:
library(cummeRbund)
cuff<-readCufflinks( dbFile = "cuffData.db", gtfFile = NULL, runInfoFile = "run.info", repTableFile = "read_groups.info", geneFPKM = "genes.fpkm_trac .... )
#setwd("/scripts")
sink("cuff.txt")
print(cuff)
sink()
My cwl file code is:
class: CommandLineTool
cwlVersion: v1.0
id: cummerbund
baseCommand:
  - Rscript
inputs:
  - id: Rfile
    type: File?
    inputBinding:
      position: 0
  - id: cuffdiffout
    type: 'File[]?'
    inputBinding:
      position: 1
  - id: errorout
    type: File?
    inputBinding:
      position: 99
      prefix: 2>
      valueFrom: |
        error.txt
outputs:
  - id: output
    type: File?
    outputBinding:
      glob: cuff.txt
  - id: errorFile
    type: File?
    outputBinding:
      glob: error.txt
label: cummerbund
requirements:
  - class: DockerRequirement
    dockerPull: cummerbund_0
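One note on the errorout input above: cwltool runs the base command without a shell, so prefix: 2> is passed to Rscript as a literal argument rather than performing a redirect. A minimal sketch of the usual CWL v1.0 way to capture stderr instead:
# stderr is a top-level CommandLineTool field; type: stderr is
# shorthand for a File output globbed from that stderr target
stderr: error.txt
outputs:
  - id: errorFile
    type: stderr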
My input file (YAML) is:
inputs:
  Rfile:
    basename: run_cummeR.R
    class: File
    nameext: .R
    nameroot: run_cummeR
    path: run_cummeR.R
  cuffdiffout:
    - class: File
      path: cuffData.db
    - class: File
      path: genes.fpkm_tracking
    - class: File
      path: read_groups.info
    - class: File
      path: genes.count_tracking
    - class: File
      path: genes.read_group_tracking
    - class: File
      path: isoforms.fpkm_tracking
    - class: File
      path: isoforms.read_group_tracking
    - class: File
      path: isoforms.count_tracking
    - class: File
      path: isoform_exp.diff
    - class: File
      path: gene_exp.diff
  errorout:
    - class: File
      path: error.txt
Also, this is my Dockerfile for creating the image:
FROM r-base
COPY . /scripts
RUN apt-get update
RUN apt-get install -y \
    libcurl4-openssl-dev \
    libssl-dev \
    libmariadb-client-lgpl-dev \
    libmariadbclient-dev \
    libxml2-dev \
    r-cran-plyr \
    r-cran-reshape2
WORKDIR /scripts
RUN Rscript /scripts/build.R
ENTRYPOINT /bin/bash
I got the answer!
There were two problems in my program:
1. The Docker image was not pulled correctly, so the CWL runner couldn't produce any output.
2. The inputs and outputs were not defined as mandatory, so I got a success status even when I did not have proper inputs and outputs.
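To illustrate the second point with a minimal sketch: declaring the output as required (File instead of File?) makes cwltool end with permanentFail when cuff.txt is missing, rather than quietly reporting "output": null:
outputs:
  - id: output
    type: File          # required: the run now fails if cuff.txt is not produced
    outputBinding:
      glob: cuff.txt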

No code coverage in functional test with Codeception and C3

I've written a basic functional test in Codeception and I'm not seeing any coverage in either the HTML or XML reports. The pass/fail reports are generated correctly.
<?php
class MyFirstCest {
    public function _before(FunctionalTester $I) {
    }

    public function _after(FunctionalTester $I) {
    }

    // tests
    public function tryToTestLoginPage(FunctionalTester $I) {
        $I->amOnPage('/app/login/');
        $I->seeInSource('html');
    }
}
The output is:
There was 1 failure:
---------
1) MyFirstCest: Try to test login page
Test tests\functional\MyFirstCest.php:tryToTestLoginPage
Step See in source "html"
Fail Failed asserting that on page /app/login/
--> #!/usr/bin/env php
<br />
<font size='1'><table class='xdebug-error xe-notice' dir='ltr' border='1' cellspacing='0' cellpadding='1'>
<tr><th align='left' bgcolor='#f57900' colspan="5"><span style='background-color: #cc0000; color: #fce94f; font-size: x-large;'>( ! )</span> Notice: Undefined offset:
[Content too long to display. See complete response in 'C:\server\Apache24\htdocs\localhost\tests/_output\' directory]
--> contains "html".
The error occurs in codecept.phar, but I assume other people are able to use the same version (2.3.6) without errors.
codeception.yml:
paths:
  tests: tests
  output: tests/_output
  data: tests/_data
  support: tests/_support
  envs: tests/_envs
actor_suffix: Tester
settings:
  shuffle: true
extensions:
  enabled:
    - Codeception\Extension\RunFailed
coverage:
  enabled: true
  blacklist:
    include:
      - tests\*
  low_limit: 0
  high_limit: 50
  c3_url: 'http://localhost'
  # remote: true
functional.suite.yml:
# Codeception Test Suite Configuration
#
# Suite for functional tests
# Emulate web requests and make application process them
# Include one of framework modules (Symfony2, Yii2, Laravel5) to use it
# Remove this suite if you don't use frameworks
actor: FunctionalTester
modules:
  enabled:
    - PhpBrowser:
        url: 'http://localhost'
        auth: ['admin', '123345']
        curl:
          CURLOPT_RETURNTRANSFER: true
        cookies:
          cookie-1:
            Name: userName
            Value: john.doe
          cookie-2:
            Name: authToken
            Value: 1abcd2345
            Domain: subdomain.domain.com
            Path: /admin/
            Expires: 1292177455
            Secure: true
            HttpOnly: false
coverage:
  enabled: true
I'm running Codeception with:
php codecept.phar run --html report.html --coverage-html coverage --xml report.xml --coverage-xml coverage.xml functional
The output file says: Notice: Undefined offset: 0 in phar://C:/usr/bin/codecept.phar/src/Codeception/Command/SelfUpdate.php on line 44, which points to an error inside Codeception itself. However, I'm sure other people get this version to work without errors, so I assume I've got a configuration error somewhere.
