Configuration:
zerocode-tdd.1.3.2
${host} (the placeholder used in host.properties)
At runtime, the system property is set with the -D Java option and the placeholder resolves. All is well.
Problem / What I Need:
At unit test time, the system property is not set, so the host is not resolved.
The app uses JUnit and Zerocode; I would like to simply configure Zerocode to set the system property.
Example:
host.properties
web.application.endpoint.host:${host}
web.application.endpoint.port=
web.application.endpoint.context=
More Info:
The requirement is for configuration only. I can't introduce new Java code or IDE-specific entries.
Any help out there? Any ideas are appreciated.
This feature is available in zerocode version 1.3.9 and higher.
Please use a placeholder like ${SYSTEM.PROP:host}; for example, ${SYSTEM.PROPERTY:java.vendor} resolves to Oracle Corporation or Azul Systems, Inc.
Example link:
https://github.com/authorjapps/zerocode/blob/master/README.md#general-place-holders
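Applied to the host.properties from the question, that would look roughly like this (a sketch assuming the SYSTEM.PROPERTY form of the placeholder and a system property named host; match the prefix to whatever your Zerocode version supports):
web.application.endpoint.host=${SYSTEM.PROPERTY:host}
web.application.endpoint.port=
web.application.endpoint.context=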
I found a solution, but I'm not sure if it is the correct way to do this.
Step 1: Create a config class that loads the system properties.
Config.java
import java.util.HashMap;
import java.util.Map;

public class Config {

    public Map<String, Object> readProperties(String optionalString) {
        Map<String, Object> propertiesMap = new HashMap<>();
        final String host = System.getProperty("host");
        propertiesMap.put("host", host);
        return propertiesMap;
    }
}
Step 2: Add a step (before the other steps) to the .json file that loads the properties.
test.json
{
    "scenarioName": "Test ...",
    "steps": [
        {
            "name": "config",
            "url": "com.test.Config",
            "operation": "readProperties",
            "request": "",
            "assertions": {}
        }
    ]
}
Step 3: Use the property loaded by the config step in later steps.
test.json
{
    "scenarioName": "Test ...",
    "steps": [
        {
            "name": "config",
            "url": "com.test.Config",
            "operation": "readProperties",
            "request": "",
            "assertions": {}
        },
        {
            "name": "test",
            "url": "${$.config.response.host}/test/xxx",
            "operation": "GET",
            "request": {},
            "assertions": {
                "status": 200
            }
        }
    ]
}
That's it. It works, but I am looking for a better approach.
Some possible options I am trying are:
Common step for load/config (in one place)
Directly using properties as {host} in json files
Custom client
Again any help/ideas are appreciated.
My question is: why are you trying to access the actual host/port? Sorry for the long answer, but bear with me. I think there is an easier way to achieve what you are attempting. I find it's best to think about Zerocode usage in two ways:
live integration tests (which is what I think you're trying to do) [meaning the test calls a live endpoint/service], or
what I refer to as a thintegration test (an integration test that uses a mock endpoint/service).
Thinking about it this way gives you the opportunity for two different metrics:
when using the mock endpoint/service, how performant/resilient is my code, and
when using live integration tests, what is the rough real-life performance (expect it to be a bit slower than an external load test due to data setup/test setup).
This lets you evaluate both yourself and your partner service.
So outside of the evaluation above, why would you want to build a thintegration test? The real value is that you still make it all the way through your code, like you would in an integration test, but you control the result of the test, like you would in a standard unit test. Additionally, since you control the result of the test, this may improve build-time test stability versus a live API.
It seems you already know how to set up an integration test, so I'll assume you're good to go there. But what about the thintegration tests?
To set up a thintegration test you really have two options:
1. use the Postman mock server (https://learning.postman.com/docs/designing-and-developing-your-api/mocking-data/setting-up-mock/)
a. more work to set up
b. external config to maintain
c. monthly API call limits
2. use WireMock (http://wiremock.org/)
a. lives with your code
b. all local, so no limits
If you already have integration tests you can copy them to a new file and make the updates, or just convert your existing ones.
**** To address your specific question ****
When using WireMock you can set up a local server URL with a dynamic port using the following.
protected String urlWithPort;

@Rule
public WireMockRule wireMockRule = new WireMockRule(wireMockConfig().dynamicPort().dynamicHttpsPort());

protected String getUriWithPort() {
    return "http://localhost:" + wireMockRule.port();
}
Note: The above was tested using WireMock version 2.27.1 and ZeroCode 1.3.27
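If you also need that dynamic URL to flow back into Zerocode's placeholder resolution (as in the original question), one possible approach is sketched below. It assumes host.properties references a system property named host (for example via ${SYSTEM.PROPERTY:host} as described earlier) and uses a class rule so the port is known before the scenario runs; depending on when your runner resolves the placeholder you may need to set the property even earlier, so treat this as a sketch rather than a verified recipe.

import static com.github.tomakehurst.wiremock.core.WireMockConfiguration.wireMockConfig;

import com.github.tomakehurst.wiremock.junit.WireMockRule;
import org.junit.BeforeClass;
import org.junit.ClassRule;

public class DynamicHostPropertyTest {

    // Start WireMock once for the class so the random port is known early.
    @ClassRule
    public static WireMockRule wireMockRule =
            new WireMockRule(wireMockConfig().dynamicPort());

    @BeforeClass
    public static void exposeHostProperty() {
        // "host" is a hypothetical property name; it must match the placeholder
        // used in your host.properties, e.g. ${SYSTEM.PROPERTY:host}.
        System.setProperty("host", "http://localhost:" + wireMockRule.port());
    }
}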
Hope that helps you answer how to dynamically get a server/port for your tests.
Related
I have a .NET Lambda API for which I was previously using Swashbuckle to generate a swagger.json file that was given to an external site to use. I am now trying to set things up so the swagger.json file is generated by the API and available through a URL for the external site to use, i.e. mylambdaapi.com/swagger/v2/swagger.json. I was able to get this working by adding a dummy event to my template when pushing to AWS, as follows.
"SwaggerJson": {
"Type": "Api",
"Properties": {
"Path": "/swagger/v2/swagger.json",
"Method": "GET"
}
}
This works for just accessing the file normally; however, the external site runs into CORS "No 'Access-Control-Allow-Origin' header" issues when trying to load the JSON. Is there any way to force the generation to include "Access-Control-Allow-Origin" in this case? Or is this not feasible this way? I'm working off what another developer built previously, so I'm trying not to rewrite everything, but I'm open to another method as long as it produces swagger JSON that the external site can consume.
EDIT: I should note that I am using API Gateway; however, the swagger.json is only used for documentation purposes for the external site.
I attempted to use the UseCors() functionality, however that did not work. I was able to fix the issue by adding an anonymous function to handle the response before UseSwagger.
The following snippet is from the Configure function in my Startup class.
app.Use((context, next) =>
{
    context.Response.Headers["Access-Control-Allow-Origin"] = "*";
    return next.Invoke();
});
app.UseSwagger();
This doc says "With the Reference-Based Catalog Management API, you can create a custom slot type that references an external data source to get the slot type values. This API allows you to create and maintain a catalog of slot type values independent of your Alexa skill."
However, as you dig into it, it doesn't provide some needed details on how to actually set up the catalog on an endpoint like S3.
While this resource was provided as an answer in this similar question, it actually refers to content catalogs (like music playlists), not the Reference-Based Catalog Management API, so I assume that was in error and it is not applicable.
So, for the Reference-Based Catalog Management API: the docs say it needs to be in JSON format and offer ingredients.json as an example. However, I used this directly and it fails (see below). Also, they do not describe what the format should be to include synonyms. Please describe this.
I can successfully create the catalog with '/v1/skills/api/custom/interactionModel/catalogs/' and get a catalogId in return. However, creating the catalog version via '/skills/api/custom/interactionModel/catalogs/{catalogId}/versions' fails. I get "Website Temporarily Unavailable" when I issue the POST.
Here's the request body structure that I'm including with that post:
data: {
    "source": {
        "type": "URL",
        "url": "https://s3.amazonaws.com/..../ingredients.json"
    },
    "description": "test S3 bucket"
}
Also, does the S3 endpoint have to be made public? I tried it both ways, didn't seem to matter. If it does have to be public though, how did you handle security?
Thanks for the help.
While the API call fails, I did get this to work using the CLI approach.
ask api create-model-catalog-version -c {catalogID} -f {filename}
The file should be JSON with the following structure:
{
    "type": "URL",
    "url": "[your catalog url]"
}
It remains an open question how to get the API approach to work, so any answers are appreciated. Maybe it is a bug, because I specify the exact same 'source' definition in the data structure of the API call as I do in the JSON file used by the CLI command.
Here's what I learned as I got it to work with the CLI:
Yes, the S3 endpoint must be made public in order for the create-model-catalog-version job to succeed. This strikes me as a problem; I would like to see the ability to wrap some security around these endpoints.
Here is the format of the JSON that you will want to use, including the use of synonyms, which is not described in the official Amazon example. Note that you don't have to include an ID as shown in that example.
{
    "values": [
        {
            "name": {
                "value": "hair salon",
                "synonyms": [
                    "hairdresser",
                    "beauty parlor"
                ]
            }
        },
        {
            "name": {
                "value": "hospital",
                "synonyms": [
                    "emergency room",
                    "clinic"
                ]
            }
        }
    ]
}
I have a 90-day trial and am registered (Evaluation 2018-06-29).
But when I make a request with my correctly copied app ID and app code, I get the error below.
{
    "response": {
        "_type": "ns2:RoutingServiceErrorType",
        "type": "PermissionError",
        "subtype": "InvalidCredentials",
        "details": "This is not a valid app_id and app_code pair. Please verify that the values are not swapped between the app_id and app_code and the values provisioned by HERE (either by your customer representative or via http://developer.here.com/myapps) were copied correctly into the request.",
        "metaInfo": {
            "timestamp": "2018-08-15T18:52:35Z",
            "mapVersion": "8.30.86.153",
            "moduleVersion": "7.2.201832-36299",
            "interfaceVersion": "2.6.34"
        }
    }
}
Can anyone help, especially someone from the HERE API developer support team?
Go into your account projects and add a new project explicitly for the Freemium plan. Then you should be able to generate a new JavaScript/REST App ID and App Code. If you are using one of the mobile SDKs you would generate a new id / code there as well.
(1) Copy and Paste
I'm not certain this is what may be happening for you, but one of my codes had a leading underscore and it was very easy to copy and paste it incorrectly into my source code.
(2) Domain Protection
Also make sure that if you checked "Secure app credentials against a specific domain" that you are calling the routing service from the same domain.
(3) Shell Interpolation
Without more detail about how you are making the calls to the routing service (curl, Postman, JavaScript, iOS, Android, etc.) it is hard to know exactly what advice to offer.
For example, if you are using curl, make sure the URL has surrounding quotes, because & will be interpreted by the shell, so ?app_id=your-app-id&app_code=your-app-code is not passed properly. That could generate the response you saw: the shell took the app_code parameter away before curl made the request, so only the app_id was passed.
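As an illustration only (the endpoint and credentials below are placeholders, not real values):

# Unquoted: the shell treats "&" as a command separator, so app_code never reaches the service
curl https://example-routing-endpoint/calculateroute.json?app_id=your-app-id&app_code=your-app-code

# Quoted: the whole query string is passed through to the routing service intact
curl "https://example-routing-endpoint/calculateroute.json?app_id=your-app-id&app_code=your-app-code"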
I've run a query on a HAPI FHIR database which has returned a paged result back to me. I'm using the HAPI client in Java to actually do the search, as per the documentation here: http://hapifhir.io/doc_rest_client.html
Bundle bundle = client.search()
        .forResource(Basic.class)
        .returnBundle(ca.uhn.fhir.model.dstu2.resource.Bundle.class)
        .execute();

do {
    for (Entry entry : bundle.getEntry())
        System.out.println(entry.getFullUrl());

    if (bundle.getLink(Bundle.LINK_NEXT) != null)
        bundle = client.loadPage().next(bundle).execute();
    else
        bundle = null;
} while (bundle != null);
The code runs as far as getting the first bundle and prints out the URLs as expected; however, when it tries to execute the next bundle, I get a ConnectionException 'Connection refused: connect'.
The server still appears to be responsive, however, as I can rerun my program and get the exact same result returned.
Any idea why the connection would be refused? I get a similar issue when I try to run it manually from Postman.
What you're doing certainly looks correct. If you perform a search manually (say, using a browser or Postman or whatever), what does the next link look like? And does it work if you use that link directly in a browser too?
For example, if I run the CLI locally on my machine and execute a search, I see the following in the response:
"link": [
{
"relation": "self",
"url": "http://localhost:8080/baseDstu3/_history"
},
{
"relation": "next",
"url": "http://localhost:8080/baseDstu3?_getpages=d8454866-624d-4bb3-b7a0-0858e4870e7e&_getpagesoffset=10&_count=10&_pretty=true&_bundletype=history"
}
],
If I plug the next link (http://localhost:8080/baseDstu3?_getpages=d8454866-624d-4bb3-b7a0-0858e4870e7e&_getpagesoffset=10&_count=10&_pretty=true&_bundletype=history) into a browser, I get the next page.
Can you try this and see how it goes?
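If it is unclear what URL the client is actually following, a small variation of your loop (my addition, not part of the original code) can print the next link before loading it, so you can compare it with the link that works in a browser or Postman:

if (bundle.getLink(Bundle.LINK_NEXT) != null) {
    // Print the "next" URL the client is about to call so it can be compared
    // with the URL returned when the same search is run in a browser/Postman.
    String nextUrl = bundle.getLink(Bundle.LINK_NEXT).getUrl();
    System.out.println("Following next link: " + nextUrl);
    bundle = client.loadPage().next(bundle).execute();
} else {
    bundle = null;
}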
Just in case anyone stumbles across this: I had some sort of redirection going on (set up by another member of my team). Essentially my base URL was localhost:8080, but the next address was coming back as localhost:1080 (I don't entirely understand why).
He changed a config in the server to make it not redirect.
I'm writing a simple REST service in Node.js (just experimenting), trying to figure out if Node has matured enough yet. I'm also using NodeUnit for my unit testing.
Now, NodeUnit works fine as a testing framework for testing GET requests, using the HttpUtils; however, testing POST requests doesn't seem to be obvious.
Testing GET looks like this:
exports.testHelloWorld = function (test) {
    test.expect(1);
    httputil(app.cgi(), function (server, client) {
        client.fetch('GET', '/', {}, function (resp) {
            test.equals('hello world', resp.body);
            test.done();
        });
    });
};
But how do I test POST requests? I can change 'GET' to 'POST' and try to write something to 'client', but this doesn't work before .fetch is called because there's no connection yet. And it doesn't work in the .fetch callback function either, because at that time the request has already been executed.
I've looked into the nodeunit code, and there doesn't seem to be support for POSTing data at the moment. So here are my questions:
What does it take to test POST-requests?
Should I even test POST-requests in a unit test, or does that fall under an integration test and I should use another approach?
You could try this library instead of nodeunit: https://github.com/masylum/testosterone
It's built specifically to test web apps over http.
I've just written this library for testing HTTP servers with nodeunit:
https://github.com/powmedia/nodeunit-httpclient