Quarkus integration testing against running application with HTTPS - integration-testing

I would like to write integration tests using the Quarkus feature "testing against a running application", as described in the documentation.
I managed to make it work against locally running instances, but I would like to test against our server, which only accepts HTTPS requests.
Setting quarkus.http.test-host and quarkus.http.test-port as described in the documentation automatically leads to an HTTP request. I skimmed through the documentation of all configuration options, but could not find additional parameters that would help in this case.
Does anyone know how I could configure or customize the tests so that they would use HTTPS?
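One avenue that may be worth checking (an assumption on my part, not verified against an HTTPS-only remote server): the Quarkus HTTP configuration reference also lists quarkus.http.test-ssl-port, so a configuration along these lines might let the test client target the TLS port:

```properties
# Point the tests at the remote server (host and ports are placeholders)
quarkus.http.test-host=myserver.example.com
quarkus.http.test-port=443
# SSL test port from the Quarkus config reference; whether it switches
# the test client to https for a remote host is an assumption to verify
quarkus.http.test-ssl-port=443
```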

Related

Replace XCC calls with REST calls in MarkLogic

In our application, .NET XCC is used to communicate with a MarkLogic modules database to execute modules, functions, ad hoc queries, and so on.
I want to replace these XCC calls with REST calls so that we can run the application on MarkLogic 9, as .NET XCC has been deprecated in MarkLogic 9.
I have tried the built-in REST API in MarkLogic. It only allows executing modules that already exist in the modules database.
Is there any online resource or anything else that could help us?
Any help would be appreciated.
Thanks,
ArvindKr
There is /v1/invoke to invoke modules in the modules database attached to the REST app-server you are addressing, and also /v1/eval, which allows running ad hoc queries.
HTH!
If you're going to replace XCC.NET with RESTful calls, try out XQRS, it allows you to build services in XQuery in a manner similar to JAX-RS for Java.
I only suggest the following for cases such as yours, where compatibility with legacy code is useful or required and where other options are exhausted. This is not an elegant approach, but it may be useful in special cases.
The XDBC protocol (which is what XCC uses) is supported natively on exactly the same app servers and ports on which the REST API is exposed. You can see this on port 8000 in a default install. The server literally cannot tell a 'REST application' and an 'XCC application' apart except by the URI requested (and in some cases additional headers like cookies). REST and XDBC are both HTTP based, and at the HTTP layer they are similar enough to share the same ports and configurations.
XDBC is 'passed through' the REST processing via the XML rewriter. XDBC uses /eval and /invoke while REST uses /v1/eval and /v1/invoke. If you look at the default rewriter.xml for port 8000, you can see how the routing is done. While the XDBC protocol is not formally published, it's not difficult to 'reverse engineer' by looking at the XCC code (public Java source) and the rewriter. For example, it's not difficult to construct the URL and payload data for a basic eval or invoke call. You should be able to replicate existing XCC.NET client behaviour exactly by using the /eval and /invoke endpoints (look for the xdbc attribute set in rewriter.xml; it causes the request handling to use pure XDBC protocol and behaviour).
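To make the eval discussion concrete, here is a sketch of building a /v1/eval request with the JDK's built-in HTTP client. The host, port, and query are placeholders, and authentication is left out entirely (MarkLogic app servers typically require digest or basic auth), so treat this as a starting point rather than a working client:

```java
import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpRequest;
import java.nio.charset.StandardCharsets;

public class MarkLogicEval {

    // Build a POST to /v1/eval with a form-encoded "xquery" parameter,
    // which asks the server to evaluate the given ad hoc XQuery.
    static HttpRequest buildEvalRequest(String host, int port, String xquery) {
        String body = "xquery=" + URLEncoder.encode(xquery, StandardCharsets.UTF_8);
        return HttpRequest.newBuilder()
                .uri(URI.create("http://" + host + ":" + port + "/v1/eval"))
                .header("Content-Type", "application/x-www-form-urlencoded")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();
    }

    public static void main(String[] args) {
        // Placeholder host/port; send with java.net.http.HttpClient plus
        // your authenticator in a real setup.
        HttpRequest req = buildEvalRequest("localhost", 8000, "1 + 1");
        System.out.println(req.method() + " " + req.uri());
    }
}
```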
Another alternative, if you cannot solve the external-variables problem, is to write new 'REST-friendly' APIs that xdmp:invoke() the legacy APIs, passing in the appropriate namespaces. One option is to put the legacy code in an entirely separate modules DB and then replicate the module URIs exactly with the new code. If you don't need to maintain co-existing versions, you can modify the old code to remove the namespaces from the parameters or assign local variable aliases.

Spring Cloud Contract Debugging

I'm new to Spring Cloud Contract. I have written the Groovy contract, but the WireMock tests are failing. All I see in the console is
org.junit.ComparisonFailure: expected:<[2]00> but was:<[4]00>
Can anyone please guide me on how to enable more debugging, and is there a way to print the request and response sent by WireMock?
I have set logging.level.com.github.tomakehurst.wiremock=DEBUG in my Spring Boot app, but no luck.
If you use one of the latest versions of Spring Cloud Contract, WireMock should print exactly what wasn't matched. You can also read the documentation here: https://cloud.spring.io/spring-cloud-static/Finchley.RELEASE/single/spring-cloud.html#_how_can_i_debug_the_request_response_being_sent_by_the_generated_tests_client where we answer your questions in more depth. Let me copy that part for you:
86.8 How can I debug the request/response being sent by the generated tests client?
The generated tests all boil down to RestAssured in some form or fashion, which relies on Apache HttpClient. HttpClient has a facility called wire logging, which logs the entire request and response to HttpClient. Spring Boot has a common logging application property for doing this sort of thing; just add this to your application properties:
logging.level.org.apache.http.wire=DEBUG
86.8.1 How can I debug the mapping/request/response being sent by WireMock?
Starting from version 1.2.0, we turn WireMock logging up to info and set the WireMock notifier to verbose. Now you will know exactly what request was received by the WireMock server and which matching response definition was picked.
To turn off this feature, just bump WireMock logging to ERROR:
logging.level.com.github.tomakehurst.wiremock=ERROR
86.8.2 How can I see what got registered in the HTTP server stub?
You can use the mappingsOutputFolder property on @AutoConfigureStubRunner or StubRunnerRule to dump all mappings per artifact id. The port at which the given stub server was started will also be attached.
86.8.3 Can I reference text from file?
Yes! With version 1.2.0 we've added such a possibility. It's enough to call the file(…​) method in the DSL and provide a path relative to where the contract lies. If you're using YAML, just use the bodyFromFile property.

How to configure dynamic routing of gRPC requests with envoy, nomad and consul

We use nomad to deploy our applications - which provide gRPC endpoints - as tasks. The tasks are then registered to Consul, using nomad's service stanza.
The routing for our applications is achieved with envoy proxy. We are running central envoy instances loadbalanced at IP 10.1.2.2.
The decision about which endpoint/task to route to is currently based on the Host header, and every task is registered as a service under <$JOB>.our.cloud. This leads to two problems:
1. When accessing a service, the DNS name must be registered for the loadbalancer IP, which leads to /etc/hosts entries like
10.1.2.2 serviceA.our.cloud serviceB.our.cloud serviceC.our.cloud
This problem is partially mitigated by using dnsmasq, but it is still a bit annoying when we add new services.
2. It is not possible to have multiple services running at the same time which provide the same gRPC service. If we decide, for example, to test a new implementation of a service, we need to run it in the same job under the same name, and all services which are defined in a gRPC service file need to be implemented.
A possible solution we have been discussing is to use the tags of the service stanza to add tags which define the provided gRPC services, e.g.:
service {
  tags = ["grpc-my.company.firstpackage/ServiceA", "grpc-my.company.secondpackage/ServiceB"]
}
But this is discouraged by Consul:
Dots are not supported because Consul internally uses them to delimit service tags.
Now we are thinking about doing it with tags like grpc-my-company-firstpackage__ServiceA, ... This looks really disgusting, though :-(
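Since Consul reserves dots in tags, one way to make the grpc-my-company-firstpackage__ServiceA idea workable is a small, reversible encoding. This is a hypothetical scheme of my own, not anything Consul or Nomad prescribes, and it relies on package segments containing no hyphens (true for Java-style package names):

```java
public class GrpcTagCodec {

    // Encode "my.company.firstpackage/ServiceA"
    // as the Consul-safe tag "grpc-my-company-firstpackage__ServiceA".
    static String encode(String fullService) {
        int slash = fullService.lastIndexOf('/');
        String pkg = fullService.substring(0, slash).replace('.', '-');
        return "grpc-" + pkg + "__" + fullService.substring(slash + 1);
    }

    // Recover the original gRPC service name from the tag.
    static String decode(String tag) {
        String body = tag.substring("grpc-".length());
        int sep = body.lastIndexOf("__");
        return body.substring(0, sep).replace('-', '.') + "/" + body.substring(sep + 2);
    }

    public static void main(String[] args) {
        String tag = encode("my.company.firstpackage/ServiceA");
        // prints "grpc-my-company-firstpackage__ServiceA -> my.company.firstpackage/ServiceA"
        System.out.println(tag + " -> " + decode(tag));
    }
}
```

The router side would then decode the tags it discovers from Consul to rebuild the gRPC service-to-task mapping.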
So my questions are:
Has anyone ever done something like that?
If so, what are recommendations on how to route to gRPC services which are autodiscovered with Consul?
Does anyone have some other ideas or insights into this?
How is this accomplished in e.g. istio?
I think this is a fully supported use case for Istio. Istio will help you with service discovery via Consul, and you can use route rules to specify which deployment will provide the service. You can start exploring from https://istio.io/docs/tasks/traffic-management/
We do something similar to this, using our own product, Turbine Labs.
We're on a slightly different stack, but the idea is:
Pull service discovery information into our control plane. (We use Kubernetes but support Consul).
Organize this service discovery information by service and by version. We use the tbn_cluster, stage, and version tags.
Since version for us is the SHA of the release, we don't have formatting issues with it. Also, they don't have to be unique, because the tbn_cluster tag defines the first level of the hierarchy.
Once we have those, we use UI / API to define all the routes (e.g. app.turbinelabs.io/stats -> stats_service). These rules include the tags, so when we deploy a new version (deploy != release), no traffic is routed to it. Releases are done by updating the rules.
(There's even some nice UI affordances for updating those rules for the common case of "release 10% of traffic to the new version," like a slider!)
Hopefully that helps! You might check out LearnEnvoy.io -- lots of tutorials and best practices on what works with Envoy. The articles on Service Discovery Integration and Incremental Blue/Green Releases may be helpful.

Unit testing remote API? (in a Symfony2 bundle)

I've created a very small symfony2 bundle here: https://github.com/BranchBit/AirGramBundle
Just a simple service, which calls a remote url.
Now, unit-testing-wise, should I even be testing this? Really, all that can go wrong is the remote host not being available, but that's not my code's issue.
If this were your bundle to maintain, what would you suggest? I would like 100% code coverage; however, 50% of the code is just calling the remote URL.
Suggestions?
This test is unnecessary in my opinion. If you do want to test it, you can use WebTestCase. Your service in the bundle should accept a Symfony\Component\BrowserKit\Client implementation. You can choose between Goutte\Client for production (firing real requests with cURL) or Symfony\Component\HttpKernel\Client for testing. Then you have to dynamically add a route or a request listener.
It doesn't matter how, but the test ends up more complex than the implementation, so how can you be sure your test is okay?
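The injection idea in the answer above, sketched in Java rather than PHP just to show the shape: the service depends on a tiny client interface, and the unit test passes a canned stub so no network is involved. All names and the URL here are made up for illustration:

```java
// Abstraction over "call a remote URL" so tests can substitute a stub.
interface HttpFetcher {
    String fetch(String url);
}

class NotificationService {
    private final HttpFetcher fetcher;

    NotificationService(HttpFetcher fetcher) {
        this.fetcher = fetcher;
    }

    // Hypothetical service method that hits a remote endpoint
    // and interprets the response body.
    boolean notifyUser(String user) {
        String body = fetcher.fetch("https://api.example.com/notify?user=" + user);
        return body.contains("ok");
    }
}

public class NotificationServiceDemo {
    public static void main(String[] args) {
        // In a test, stub the fetcher instead of hitting the real host.
        NotificationService service = new NotificationService(url -> "{\"status\":\"ok\"}");
        System.out.println(service.notifyUser("alice")); // prints "true" with this stub
    }
}
```

With this shape, the 100%-coverage question resolves itself: your parsing and error-handling logic is covered by unit tests with stubs, while the real HTTP transport is covered (if at all) by a thin, separate integration test.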

Is there any JMX - REST bridge available?

I would like to monitor a Java application from the browser while utilising the existing JMX infrastructure.
I know that JMX provides an HTTP interface, but I think it provides a standard web GUI, and it's not possible to mash up its functionality with an existing system.
Are you aware of any REST interface for JMX?
My research on Google currently shows one project which does something similar. Is this the only option?
Jolokia is a new (at this time) JMX agent you can install in your JVM that exposes the MBeanServer over HTTP in JSON format.
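As a concrete illustration (assuming Jolokia's documented GET "read" syntax and the default /jolokia mount point), the URLs you would hit look like this; a small helper makes the pattern obvious:

```java
public class JolokiaUrl {

    // Build a Jolokia GET "read" URL for one MBean attribute, e.g.
    // http://host:8080/jolokia/read/java.lang:type=Memory/HeapMemoryUsage
    // The response is a JSON document describing the attribute value.
    static String readUrl(String host, int port, String mbean, String attribute) {
        return "http://" + host + ":" + port + "/jolokia/read/" + mbean + "/" + attribute;
    }

    public static void main(String[] args) {
        // prints "http://localhost:8080/jolokia/read/java.lang:type=Memory/HeapMemoryUsage"
        System.out.println(readUrl("localhost", 8080, "java.lang:type=Memory", "HeapMemoryUsage"));
    }
}
```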
Tomcat provides a JMX Proxy Servlet in its Manager Application. I don't think it's exactly REST, but it's stateless and is built from simple HTTP requests, so it should be close enough.
For posterity, I've recently added a little web server to my SimpleJMX package. It exposes beans from the platform MBeanServer over HTTP via Jetty, if Jetty is on the classpath. There are also text versions of all pages that make them easy to scrape.
// create a new JMX server listening on a specific port
JmxServer jmxServer = new JmxServer(8000);
jmxServer.start();
// register any beans to jmx as necessary
jmxServer.register(someObj);
// create a web server publisher listening on a specific port
JmxWebServer jmxWebServer = new JmxWebServer(8080);
jmxWebServer.start();
There's a little test program which shows it in operation. Here's an image of java.lang:type=Memory accessed from a browser. As you can see the output is very basic HTML.
You might want to have a look at jmx4perl. It comes with an agent servlet which proxies REST requests to local JMX calls and returns a JSON structure with the answers. It supports read, write, exec, list (list of MBeans) and search operations, and it knows how to dive into complex data structures via an XPath-like expression. Look at the protocol description for more details.
The forthcoming release can deal with bulk (i.e. multiple at once) requests as well, and adds the possibility to POST a JSON request as an alternative to a pure REST GET request.
One of the next releases will support a proxy mode, so that no agent servlet needs to be deployed on the target platform, but only on an intermediate proxy server.
MX4J is another alternative; quoting below from its home page:
MX4J is a project to build an Open Source implementation of the Java(TM) Management Extensions (JMX) and of the JMX Remote API (JSR 160) specifications, and to build tools relating to JMX.
