How to generate some models for java with OpenApi Generator? - swagger-2.0

I successfully generated a REST client in Java from a Swagger/OpenAPI v2.0 spec, using OpenApi Generator CLI 3.3.2-SNAPSHOT.
But I already have a REST client, so I just want to generate some models from the spec.
This succeeds:
java -Dmodels -DmodelDocs=false \
-jar modules/openapi-generator-cli/target/openapi-generator-cli.jar generate \
-i swagger.json \
-g java \
-o /temp/my_models
But when I want to generate only specific models with
java -Dmodels=Body,Header -DmodelDocs=false \
-jar modules/openapi-generator-cli/target/openapi-generator-cli.jar generate \
-i swagger.json \
-g java \
-o /temp/my_selected_models
I'm getting this ERROR:
[main] INFO o.o.c.languages.AbstractJavaCodegen - Environment
variable JAVA_POST_PROCESS_FILE not defined so the Java code may not
be properly formatted. To define it, try 'export
JAVA_POST_PROCESS_FILE="/usr/local/bin/clang-format -i"' (Linux/Mac)
What is this JAVA_POST_PROCESS_FILE, and how can I specify a valid formatter for the generated models?
Why does the code generation succeed with all models but fail with a subset?

That message is purely informational. It tells you that the auto-generated Java code can be post-processed by a code formatter (clang-format in this case) if you point an environment variable at one:
export JAVA_POST_PROCESS_FILE="/usr/local/bin/clang-format -i"
In other words, it does not affect the code generation process if the environment variable is not specified.

Related

Get 90th percentile in jmeter in unix command line

I would like to display the 90th percentile for a request in JMeter using the Unix command line, but I am not able to do so.
I have enabled these flags in jmeter.properties: aggregate_rpt_pct1=90, aggregate_rpt_pct2=95 and aggregate_rpt_pct3=99.
But I am still not able to display it. I am using the command:
./jmeter.sh -n -t examples/LTTest_unix.jmx -l /testing12.csv -e -o /bin/outputreports
I am not able to get the 90th percentile.
What should I do to get the 90th and 99th percentiles in non-GUI mode on Linux?
If you want to see the 90th percentile from the Unix command line, you need to install the JMeterPluginsCMD Command Line Tool and run an extra command that generates it from the .jtl results file.
Install the JMeter Plugins Manager. You will need to execute the following command from the "bin" folder of your JMeter installation:
wget https://jmeter-plugins.org/get/ -O ../lib/ext/jmeter-plugins-manager.jar
Install cmdrunner. You will need to run the following command:
wget https://repo1.maven.org/maven2/kg/apc/cmdrunner/2.3/cmdrunner-2.3.jar -P ../lib/
Install the necessary plugins, JMeterPluginsCMD Command Line Tool and Synthesis Report. You will need to run the following commands:
java -cp ../lib/ext/jmeter-plugins-manager.jar org.jmeterplugins.repository.PluginManagerCMDInstaller
and
./PluginsManagerCMD.sh install jpgc-cmd,jpgc-synthesis
Run your test using the same command you're using now.
Generate the CSV version of the Aggregate Report; you can run the below command:
./JMeterPluginsCMD.sh --generate-csv /desired/path/to/CSV/version/of/aggregate/report.csv --input-jtl /path/to/your/test/result.jtl --plugin-type AggregateReport
In addition
You don't need to amend any JMeter properties to get the 90th percentile; all your overrides match the default values anyway.
Any configuration overrides should go into the user.properties file, so that they survive upgrades.
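If you only need the percentile numbers and would rather avoid extra plugins, they can also be computed directly from the .jtl results file with a few lines of Python. This is only a sketch: it assumes the JTL is in CSV format with a header row containing an elapsed column (the default in recent JMeter versions), and uses the nearest-rank percentile method.

```python
import csv

def percentile(values, pct):
    """Nearest-rank percentile: the ceil(pct/100 * N)-th smallest value."""
    ordered = sorted(values)
    rank = max(1, -(-len(ordered) * pct // 100))  # integer ceiling
    return ordered[int(rank) - 1]

def percentiles_from_jtl(path, pcts=(90, 95, 99)):
    """Read elapsed times (ms) from a CSV-format JTL and report percentiles."""
    with open(path, newline="") as fh:
        elapsed = [int(row["elapsed"]) for row in csv.DictReader(fh)]
    return {p: percentile(elapsed, p) for p in pcts}
```

Usage would be something like percentiles_from_jtl("/testing12.csv"). Note that different tools use slightly different percentile interpolation rules, so values may differ by a sample or two from the JMeter report.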

Compiling Jaeger gRPC proto files with Python

I'm currently playing around with Jaeger Query and trying to access its content through the API, which uses gRPC. I'm not familiar with gRPC, but my understanding is that I need to use the Python gRPC compiler (grpcio_tools.protoc) on the relevant proto file to get useful Python definitions. What I'm trying to do is find out ways to access Jaeger Query by API, without the frontend UI.
Currently, I'm very stuck on compiling the proto files. Every time I try, I get dependency issues (Import "fileNameHere" was not found or has errors.). The Jaeger query.proto file contains import references to files outside the repo. Whilst I can find these and manually collect them, they also have dependencies. I get the impression that following through and collecting each of these one by one is not how this was intended to be done.
Am I doing something wrong here? The direct documentation through Jaeger is limited for this. The below is my basic terminal session, before including any manually found files (which themselves have dependencies I would have to go and find the files for).
$ python -m grpc_tools.protoc --grpc_python_out=. --python_out=. --proto_path=. query.proto
model.proto: File not found.
gogoproto/gogo.proto: File not found.
google/api/annotations.proto: File not found.
protoc-gen-swagger/options/annotations.proto: File not found.
query.proto:20:1: Import "model.proto" was not found or had errors.
query.proto:21:1: Import "gogoproto/gogo.proto" was not found or had errors.
query.proto:22:1: Import "google/api/annotations.proto" was not found or had errors.
query.proto:25:1: Import "protoc-gen-swagger/options/annotations.proto" was not found or had errors.
query.proto:61:12: "jaeger.api_v2.Span" is not defined.
query.proto:137:12: "jaeger.api_v2.DependencyLink" is not defined.
Thanks for any help.
A colleague of mine provided the answer... It was hidden in the Makefile, which hadn't worked for me as I don't use Golang (and it had been more complex than just installing Golang and running it, but I digress...).
The following .sh will do the trick. This assumes the query.proto file is a subdirectory from the same location as the script below, under model/proto/api_v2/ (as it appears in the main Jaeger repo).
#!/usr/bin/env sh
set +x
rm -rf ./python_out 2> /dev/null
mkdir ./python_out
PROTO_INCLUDES="
-I model/proto \
-I idl/proto \
-I vendor/github.com/grpc-ecosystem/grpc-gateway \
-I vendor/github.com/gogo/googleapis \
-I vendor/github.com/gogo/protobuf/protobuf \
-I vendor/github.com/gogo/protobuf"
python -m grpc_tools.protoc ${PROTO_INCLUDES} --grpc_python_out=./python_out --python_out=./python_out model/proto/api_v2/query.proto
This will definitely generate the needed Python file, but it will still be missing dependencies.
I did the following to get the Jaeger gRPC Python APIs:
git clone --recurse-submodules https://github.com/jaegertracing/jaeger-idl
cd jaeger-idl/
make proto
Use the files inside proto-gen-python/.
Note:
While importing the generated code, if you face the error:
AttributeError: module 'google.protobuf.descriptor' has no attribute '_internal_create_key'
Do:
pip3 install --upgrade pip
pip3 install --upgrade protobuf

running pyspark kafka stream with an error

When I tried to run the example code for Spark Streaming, "kafka_wordcount.py",
under the folder /usr/local/spark/examples/src/main/python/streaming
The code explicitly describes the instruction to execute it:
$ bin/spark-submit --jars \
external/kafka-assembly/target/scala-*/spark-streaming-kafka-assembly-*.jar \
examples/src/main/python/streaming/kafka_wordcount.py \
localhost:2181 test
where test is the topic name. But I cannot find the jar at the path
external/kafka-assembly/target/scala-*/spark-streaming-kafka-assembly-*.jar
So instead I created a folder "streaming/jars/", put into it all the jars from
http://search.maven.org/#search%7Cga%7C1%7Ca%3A%22spark-streaming-kafka-assembly_2.10%22 and then ran
spark-submit --jars ~/stream-example/jars/spark-streaming-kafka-assembly_*.jar kafka_wordcount.py localhost:2181 topic
which shows
Error: No main class set in JAR; please specify one with --class
Run with --help for usage help or --verbose for debug output
What is wrong with that? Where are jars?
A ton of Thanks!!
This question was asked long ago, so I assume you have figured it out by now. But as I just had the same problem, I will post the solution that worked for me.
The deployment section of this guide (http://spark.apache.org/docs/latest/streaming-kafka-integration.html) says you can pass the library with the --packages argument, like below:
bin/spark-submit \
--packages org.apache.spark:spark-streaming-kafka_2.10:1.6.2 \
examples/src/main/python/streaming/kafka_wordcount.py \
localhost:2181 test
You can also download the jar itself here: http://search.maven.org/#search%7Cga%7C1%7Ca%3A%22spark-streaming-kafka-assembly_2.10%22
Note: I didn't run the command above; I tested with this other example, but it should work the same way:
bin/spark-submit \
--packages org.apache.spark:spark-streaming-kafka_2.10:1.6.2 \
examples/src/main/python/streaming/direct_kafka_wordcount.py \
localhost:9092 test

How to set QT_QPA_PLATFORM_PLUGIN_PATH properly (concept)?

I have Qt Creator and Qt 5.5 installed.
QT_QPA_PLATFORM_PLUGIN_PATH = C:\Qt\5.5\msvc2013\plugins
If I disable the environment variable, I get an error when I launch an application from QtC. So the variable seems to be required.
My problem is:
When I run other Qt-based applications (e.g. Teamspeak), they fail; I always have to disable (delete) QT_QPA_PLATFORM_PLUGIN_PATH first
When I use kits in QtC and switch between Qt versions (e.g. 5.4, 5.6), the variable is not kept in sync with that particular version
How is this supposed to work?
The best solution I have found so far is to set it on the QtC Project page for that specific build
The solution that helped me:
In the Windows 10 search box, enter sysdm.cpl
Advanced -> Environment Variables -> System Variables -> add to:
PATH
C:\Users\~\AppData\Local\Programs\Python\Python36-32\Lib\site-packages\pyqt5_tools\plugins\platforms\ (your path to qminimal.dll, qoffscreen.dll, qwebgl.dll)
The DLLs are taken from the official site: https://www.riverbankcomputing.com/software/pyqt/download5
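Because a machine-wide QT_QPA_PLATFORM_PLUGIN_PATH leaks into every Qt application on the system (which is exactly the Teamspeak problem above), an alternative for PyQt scripts is to set the variable per process, before any Qt code loads. A sketch, using the pyqt5_tools path from the answer above as an example (adjust it to the directory that actually contains your platform DLLs):

```python
import os

# Point Qt at a specific platform-plugin directory for THIS process only,
# instead of setting a system-wide environment variable.
plugin_dir = os.path.join(
    os.path.expanduser("~"), "AppData", "Local", "Programs", "Python",
    "Python36-32", "Lib", "site-packages", "pyqt5_tools", "plugins", "platforms",
)
os.environ["QT_QPA_PLATFORM_PLUGIN_PATH"] = plugin_dir
# ...only now import PyQt5 and create the QApplication
```

Setting it per process (or per build on the QtC Projects page, as noted above) avoids the variable fighting with other Qt applications and other Qt versions.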

How to pass variable file to a test when running robot framework tests on sauce labs?

I am trying to run some Robot Framework tests on Sauce Labs, but I am not able to pass a variable file to my pybot command.
When I run my tests locally I use this command :
pybot -vbrowser:firefox -vbaseur --variablefile ../VariableFiles/superdesk.py mytest.robot
On sauce labs I need to pass some other variables: sauce username, key...
pybot -v browser:firefox -v baseurl:http://myurl.fr -v sauce_apikey:mykey -v sauce_platform:linux -v sauce_username:myusername mytest.robot
How can I pass a variable file to the second command? As soon as I pass --variablefile, my tests run locally and not on Sauce Labs.
I have tried this command
pybot -v browser:firefox -v baseurl:http://myurl.fr -v sauce_apikey:mykey -v sauce_platform:linux -v sauce_username:myusername --variablefile myvarfile.py mytest.robot
When running the command above, the variable file is not taken into account: my tests run with the default variables.
The literal answer to your question is "you pass a variable file to a test the same way no matter whether you are using saucelabs or not".
Unless robot is throwing an error, your variable file is being passed to robot when you include --variablefile myvarfile.py. You can write a simple test to verify that, by having the test log the values from the variable file.
If you are seeing different behaviours, the behaviours must be in your test cases, or in your own variable file. There is no feature in robot that behaves differently when running on saucelabs or not.
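To verify that the file really is picked up, a minimal variable file plus a test that logs its values works well. A Robot Framework variable file is just a Python module; every module-level name becomes a ${VARIABLE} in the tests. A sketch (the variable names here are illustrative, not the asker's actual ones):

```python
# myvarfile.py -- a minimal Robot Framework variable file.
# Robot imports this as a plain Python module; BROWSER becomes
# ${BROWSER}, BASEURL becomes ${BASEURL}, and so on.
BROWSER = "firefox"
BASEURL = "http://myurl.fr"
SAUCE_PLATFORM = "linux"
```

Run with pybot --variablefile myvarfile.py mytest.robot and Log ${BROWSER} in a test case to confirm the values. Note also that individually set variables (-v/--variable on the command line) take precedence over values from a variable file, which matters when you pass both.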
