Using WireMock as a Proxy for a SOAP Service

Here is the scenario I'm trying to work on:
I'm writing contract-driven tests using Spring Cloud Contract. The tests for inter-communication between the microservices work fine.
Some microservices call SOAP-based services. As part of the integration tests, I'm trying to use WireMock as a proxy for those SOAP-based services. Basically, WireMock should intercept the call, forward the same request to the target live environment, and return that response to the test as a stub.
Unfortunately, I couldn't find any examples of how to proceed with that. These services use the HTTP protocol. Any examples or pointers on how to achieve this would be great. Thanks!

Firstly, you need to point your SOAP client at the WireMock base URL. For example, if you're using a Spring properties file, you might have something like this:
soap.api.host=wiremock-host.internal
soap.api.port=8888
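If your tests manage the WireMock server themselves, it could be started on that port with the Java API; a minimal sketch (the port is just the value assumed in the properties above):

import com.github.tomakehurst.wiremock.WireMockServer;
import static com.github.tomakehurst.wiremock.core.WireMockConfiguration.options;

// Start WireMock on the port the SOAP client is pointed at above.
WireMockServer wireMockServer = new WireMockServer(options().port(8888));
wireMockServer.start();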
Then you need to configure the WireMock server with a low-priority, broadly matching proxy stub (in WireMock, lower numbers mean higher priority, so a priority of 8 lets more specific stubs at the default priority win). Here's an example of how that would look in JSON form:
{
    "priority": 8,
    "response": {
        "proxyBaseUrl": "http://target.soap.endpoint"
    }
}
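If you prefer configuring stubs programmatically, here is a sketch of the same catch-all proxy stub with WireMock's Java DSL (same placeholder target URL as above):

import static com.github.tomakehurst.wiremock.client.WireMock.*;

// Low-priority catch-all: anything not matched by a more specific stub
// is proxied through to the live SOAP endpoint.
stubFor(any(anyUrl())
        .atPriority(8)
        .willReturn(aResponse().proxiedFrom("http://target.soap.endpoint")));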
Then, finally, you would create additional stubs (at the default priority) for each request you want to intercept, e.g.:
{
    "request": {
        "method": "POST",
        "urlPath": "/v1/some/thing",
        "headers": {
            "SOAPAction": {
                "contains": "MyAction"
            }
        }
    },
    "response": {
        "status": 200,
        "body": "<soap:Envelope ..."
    }
}
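And a sketch of that intercepting stub via the Java DSL; since the default priority beats the priority-8 proxy, matched requests get the canned SOAP response instead of the live one:

import static com.github.tomakehurst.wiremock.client.WireMock.*;

// Default-priority stub: this match wins over the catch-all proxy.
stubFor(post(urlPathEqualTo("/v1/some/thing"))
        .withHeader("SOAPAction", containing("MyAction"))
        .willReturn(aResponse()
                .withStatus(200)
                .withBody("<soap:Envelope ...")));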

Related

http.server.requests and http.client.requests metrics not appearing in Spring Actuator JSON

In my Spring Boot application I cannot see any HTTP metrics.
I can only see the metrics below:
// http://localhost:8081/actuator/metrics
{
"names": [
"application.ready.time",
"application.started.time",
"disk.free",
"disk.total",
"executor.active",
"executor.completed",
"executor.pool.core",
"executor.pool.max",
"executor.pool.size",
"executor.queue.remaining",
"executor.queued",
"jvm.buffer.count",
"jvm.buffer.memory.used",
"jvm.buffer.total.capacity",
"jvm.classes.loaded",
"jvm.classes.unloaded",
"jvm.gc.live.data.size",
"jvm.gc.max.data.size",
"jvm.gc.memory.allocated",
"jvm.gc.memory.promoted",
"jvm.gc.overhead",
"jvm.gc.pause",
"jvm.memory.committed",
"jvm.memory.max",
"jvm.memory.usage.after.gc",
"jvm.memory.used",
"jvm.threads.daemon",
"jvm.threads.live",
"jvm.threads.peak",
"jvm.threads.states",
"logback.events",
"process.cpu.usage",
"process.files.max",
"process.files.open",
"process.start.time",
"process.uptime",
"resilience4j.retry.calls",
"system.cpu.count",
"system.cpu.usage",
"system.load.average.1m",
"tomcat.sessions.active.current",
"tomcat.sessions.active.max",
"tomcat.sessions.alive.max",
"tomcat.sessions.created",
"tomcat.sessions.expired",
"tomcat.sessions.rejected"
]
}
Making a REST call to the API generates the http.server.requests and http.client.requests metrics: Micrometer registers these meters lazily, so they only appear under /actuator/metrics after the first request has actually been handled or sent.
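One thing worth checking on the client side: http.client.requests is only recorded for instrumented clients. A minimal sketch, assuming Spring Boot auto-configuration with Actuator and Micrometer on the classpath (the QuoteClient class and URL are made up for illustration):

import org.springframework.boot.web.client.RestTemplateBuilder;
import org.springframework.stereotype.Service;
import org.springframework.web.client.RestTemplate;

@Service
public class QuoteClient {

    private final RestTemplate restTemplate;

    // Building the RestTemplate from the auto-configured RestTemplateBuilder
    // applies Micrometer's metrics customizer, so calls made through it
    // produce http.client.requests. A bare `new RestTemplate()` would not.
    public QuoteClient(RestTemplateBuilder builder) {
        this.restTemplate = builder.build();
    }

    public String fetchQuote() {
        return restTemplate.getForObject("https://example.org/api/quote", String.class);
    }
}

After the first call through such a client, and the first incoming request handled by the app, both meters should show up in the names list.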

Using nginx to redirect dynamic requests

I have a Druid service running on my local machine on port 8082, as follows:
Method POST: http://localhost:8082/druid/v2/?pretty
Body:
{
    "queryType": "topN",
    "dataSource": "some_source",
    "intervals": ["2015-09-12/2015-09-13"],
    "granularity": "all",
    "dimension": "page",
    "metric": "edits",
    "threshold": 25,
    "filter": {
        "type": "and",
        "fields": [
            {
                "type": "selector",
                "dimension": "pix_id",
                "value": "1234"
            }
        ]
    }
}
Hitting this query gives me a list of records based on the value of the dimension pix_id.
Now, I want to set up nginx so that the external application has no clue about my Druid service. I just want the external application to hit the URL:
http://localhost:80/pix_id/98765
This URL should dynamically generate a JSON body with the above-mentioned pix_id, send the request to Druid, and return the response to the user.
Is it possible to do this in nginx?
Yes, you can do this, but I would rather suggest putting a small PHP or Python script in between to serve the results; see the sketch after this list.
So the setup would be:
Have a PHP page receive the request.
Make a cURL call from PHP to Druid, locally.
Get the result and pass on the response.
There are multiple benefits of doing this, e.g.:
You completely mask Druid, and you're not necessarily limited to Druid.
You can do more calculations in PHP before sending the request to Druid.
Caching on the PHP side.
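A minimal sketch of that middleman idea, written here as a Java Spring controller purely for illustration (the Druid URL and topN query template are taken from the question; the class and endpoint names are hypothetical):

import org.springframework.http.HttpEntity;
import org.springframework.http.HttpHeaders;
import org.springframework.http.MediaType;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.client.RestTemplate;

@RestController
public class PixIdProxyController {

    // Druid endpoint taken from the question.
    private static final String DRUID_URL = "http://localhost:8082/druid/v2/?pretty";

    private final RestTemplate restTemplate = new RestTemplate();

    // Handles e.g. GET /pix_id/98765: builds the topN query from the question
    // with the given pix_id and relays Druid's response to the caller.
    @GetMapping("/pix_id/{pixId}")
    public String queryByPixId(@PathVariable String pixId) {
        String query = """
                {
                    "queryType": "topN",
                    "dataSource": "some_source",
                    "intervals": ["2015-09-12/2015-09-13"],
                    "granularity": "all",
                    "dimension": "page",
                    "metric": "edits",
                    "threshold": 25,
                    "filter": {
                        "type": "and",
                        "fields": [
                            { "type": "selector", "dimension": "pix_id", "value": "%s" }
                        ]
                    }
                }
                """.formatted(pixId);

        HttpHeaders headers = new HttpHeaders();
        headers.setContentType(MediaType.APPLICATION_JSON);
        return restTemplate.postForObject(DRUID_URL, new HttpEntity<>(query, headers), String.class);
    }
}

nginx on port 80 would then simply proxy_pass to this service, so the external application never learns about Druid.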

GAE endpoints generates wrong discovery doc

I have upgraded to the latest Cloud Endpoints 2.0, as well as endpoints_proto_datastore at its latest commit. When I now try to generate the API discovery doc, I get the following error messages:
Method user.update specifies path parameters but you are not using a ResourceContainer This will fail in future releases; please switch to using ResourceContainer as soon as possible
Method position.update specifies path parameters but you are not using a ResourceContainer This will fail in future releases; please switch to using ResourceContainer as soon as possible
The only two available endpoints are the following two methods, which should update the User and the Position models:
@User.method(name='user.update', path='users/{id}', http_method='PUT')
def UserUpdate(self, user):
    """Update a user resource."""
    user.put()
    return user

@Position.method(name='position.update', path='positions/{id}', http_method='PUT')
def PositionUpdate(self, position):
    """Update a position resource."""
    position.put()
    return position
Before upgrading to Cloud Endpoints 2.0, everything worked fine. But now, if I look into the generated discovery file, both endpoints have a ProtorpcMessagesCombinedContainer in their request, and the combined container itself is defined with the properties of the Position model!
This is how both methods' request attributes are defined:
"request": {
"$ref": "ProtorpcMessagesCombinedContainer",
"parameterName": "resource"
},
And this is the definition of the combined container (which has the properties of the Position model):
"ProtorpcMessagesCombinedContainer": {
"id": "ProtorpcMessagesCombinedContainer",
"type": "object",
"properties": {
"displayName": {
"type": "string"
},
"shortName": {
"type": "string"
}
}
},
Has anyone else had this issue with GAE and Cloud Endpoints 2.0?
What am I doing wrong? Usually endpoints-proto-datastore should handle the ResourceContainer and the method's path parameters. Also, endpoints-proto-datastore hasn't been updated for years... I really don't know where the error comes from.
Thanks for your help!

Openstack API Authentication

OpenStack noob here. I have set up an Ubuntu VM with DevStack, and am trying to authenticate with Keystone to obtain a token for subsequent OpenStack API calls. The identity endpoint shown on the "API Access" page in Horizon is: http://<DEVSTACK_IP>/identity.
When I POST the JSON payload below to this endpoint, I get the error get_version_v3() got an unexpected keyword argument 'auth'.
{
    "auth": {
        "identity": {
            "methods": [
                "password"
            ],
            "password": {
                "user": {
                    "name": "admin",
                    "domain": {
                        "name": "Default"
                    },
                    "password": "AdminPassword"
                }
            }
        }
    }
}
Based on the OpenStack docs, I should be hitting http://<DEVSTACK_IP>/v3/auth/tokens to obtain a token, but when I hit that endpoint, I get 404 Not Found.
I'm currently using Postman to test this, but will eventually be doing it programmatically.
Does anybody with experience authenticating against the OpenStack API have any pointers?
Not sure whether you want to do it the Python way, but if you do, here is a way to do it:
from keystoneauth1.identity import v3
from keystoneauth1 import session

v3_auth = v3.Password(auth_url=V3_AUTH_URL,
                      username=USERNAME,
                      password=PASSWORD,
                      project_name=PROJECT_NAME,
                      project_domain_name="default",
                      user_domain_name="default")
v3_ses = session.Session(auth=v3_auth)
auth_token = v3_ses.get_token()
And your V3_AUTH_URL should be http://<DEVSTACK_IP>:5000/v3, since Keystone uses port 5000 by default.
If you have a multi-domain DevStack, you can change the domains; otherwise, they should be "default".
Just in case you don't have the client library installed: pip install python-keystoneclient
Here is a good doc for you to read about it:
https://docs.openstack.org/keystoneauth/latest/using-sessions.html
HTH
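If you'd rather hit the API directly over HTTP (which is effectively what Postman does), here is a sketch in Java using java.net.http; the payload mirrors the question's, and Keystone returns the token in the X-Subject-Token response header:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class KeystoneTokenExample {

    public static void main(String[] args) throws Exception {
        // Same payload as in the question.
        String payload = """
                {"auth": {"identity": {"methods": ["password"],
                    "password": {"user": {"name": "admin",
                        "domain": {"name": "Default"},
                        "password": "AdminPassword"}}}}}""";

        // Replace <DEVSTACK_IP> with your DevStack host; Keystone listens on 5000.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://<DEVSTACK_IP>:5000/v3/auth/tokens"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(payload))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // Keystone returns the token in a response header, not in the JSON body.
        String token = response.headers().firstValue("X-Subject-Token").orElseThrow();
        System.out.println("Token: " + token);
    }
}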

How can I consume Spring HATEOAS _embedded values?

I am studying Spring projects (Web, Security, Data JPA, HATEOAS) and developing a sample web service.
It consists of two modules:
Rest API server (provides data; Spring Data JPA, Spring Data REST, Spring HATEOAS)
Web server (provides the web pages; Spring MVC, Spring Security)
Actually, the "Rest API server" is simple and easy: I just defined some entity classes and used @RepositoryRestResource.
@RepositoryRestResource generates a RestController, right?
So when I call a REST API like localhost:8080/users, I receive the responses.
But there is a very critical issue. I get a response like the one below:
{
    "_links": {
        "self": {
            "href": "http://localhost:8888/users{?page,size,sort}",
            "templated": true
        },
        "search": {
            "href": "http://localhost:8888/users/search"
        }
    },
    "_embedded": {
        "users": [
            {
                "email": "test@gmail.com",
                "name": null,
                "isShowName": null,
                "nickName": null
            }
        ]
    },
    "page": {
        "size": 20,
        "totalElements": 5,
        "totalPages": 1,
        "number": 0
    }
}
When I print the response via toString, there are no "_embedded" values, just the "_links" values.
I need the "_embedded" values.
I tried googling, but there was no clear resolution.
So I am contemplating using a plain RestController instead of HATEOAS.
If I use a RestController, I can resolve this easily.
Can anyone help me?
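One common way to consume the _embedded values on the client is to deserialize into PagedModel using a HAL-aware Jackson converter. This is a sketch rather than a definitive fix, assuming Spring HATEOAS 1.x type names (older versions used PagedResources and Resource) and a hypothetical User DTO matching the fields above:

import com.fasterxml.jackson.databind.DeserializationFeature;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.springframework.core.ParameterizedTypeReference;
import org.springframework.hateoas.EntityModel;
import org.springframework.hateoas.MediaTypes;
import org.springframework.hateoas.PagedModel;
import org.springframework.hateoas.mediatype.hal.Jackson2HalModule;
import org.springframework.http.HttpMethod;
import org.springframework.http.converter.json.MappingJackson2HttpMessageConverter;
import org.springframework.web.client.RestTemplate;

import java.util.List;

public class UserClient {

    public static void main(String[] args) {
        // HAL-aware Jackson setup so _embedded and _links are understood.
        ObjectMapper mapper = new ObjectMapper();
        mapper.registerModule(new Jackson2HalModule());
        mapper.configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, false);

        MappingJackson2HttpMessageConverter converter = new MappingJackson2HttpMessageConverter();
        converter.setObjectMapper(mapper);
        converter.setSupportedMediaTypes(List.of(MediaTypes.HAL_JSON));

        RestTemplate restTemplate = new RestTemplate();
        restTemplate.getMessageConverters().add(0, converter);

        PagedModel<EntityModel<User>> page = restTemplate.exchange(
                "http://localhost:8888/users",
                HttpMethod.GET,
                null,
                new ParameterizedTypeReference<PagedModel<EntityModel<User>>>() {}).getBody();

        // The _embedded users are now available as regular objects.
        page.getContent().forEach(resource -> System.out.println(resource.getContent().email()));
    }

    // Hypothetical DTO mirroring the fields in the response above
    // (Jackson 2.12+ can deserialize records without annotations).
    public record User(String email, String name, Boolean isShowName, String nickName) {}
}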
