I have the following problem: I'm making a request to an external system using OSB. The external system gives me a response, and I would like to apply a series of transformations to it. How can I test this series of transformations on the response?
Basically, I would like to write an XML response by hand, apply all the transformations (in bulk), and get the result. So I don't want to create a mock of the external service and start with a request, since I would like to test only the response part: whether my transformations are valid and working properly.
Here is a screenshot of my message flow. The part I would like to test is highlighted in yellow.
Thank you!
You can create a separate callable pipeline that includes only the response transformations.
Then you would be able to test it by calling it from the service bus console.
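For example, you could paste a hand-written response like the sketch below into the test console as that pipeline's input (the element names are made up; use your external system's actual schema):

    <!-- hypothetical response payload; replace with the external system's real schema -->
    <ext:getStatusResponse xmlns:ext="http://example.com/external-system">
        <ext:result>
            <ext:code>0</ext:code>
            <ext:message>OK</ext:message>
        </ext:result>
    </ext:getStatusResponse>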
Right now, my HTTP call has 3 assertions. The reporting options for the results listener only provide a checkbox for "Assertion Results", which lumps all of my assertion results into one value in the CSV output.
The team would like to create a CSV output for each assertion. The problem is, you can't add a results listener under an assertion; it must be under the HTTP call. I can't think of a way to create separate reports besides making three separate HTTP calls, each with its own results listener writing a report. That is not ideal.
I tried a JTL file with XML output.
Below is the config (originally posted as a screenshot). Use only the minimum parameters required, as writing results consumes a lot of resources.
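Roughly, that config amounts to settings like these in user.properties (which flags were pictured is an assumption on my part; note that per-assertion results are only written in the XML format, not CSV):

    # Write results as XML so each assertion result is saved per sample
    jmeter.save.saveservice.output_format=xml
    # Record every assertion result, not just the first failure
    jmeter.save.saveservice.assertion_results=all
    jmeter.save.saveservice.assertion_results_failure_message=true
    # Keep the file lean: skip heavyweight fields
    jmeter.save.saveservice.response_data=false
    jmeter.save.saveservice.samplerData=false
    jmeter.save.saveservice.requestHeaders=false
    jmeter.save.saveservice.responseHeaders=false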
Below is the output: in the XML JTL, each sample then carries one assertionResult element per assertion.
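A trimmed, illustrative example of what that XML looks like (names and values are made up):

    <httpSample t="123" ts="1547048015593" s="true" lb="HTTP Request" rc="200" rm="OK">
        <assertionResult>
            <name>Response Code Assertion</name>
            <failure>false</failure>
            <error>false</error>
        </assertionResult>
        <assertionResult>
            <name>Body Contains Assertion</name>
            <failure>true</failure>
            <error>false</error>
            <failureMessage>Test failed: text expected to contain /foo/</failureMessage>
        </assertionResult>
    </httpSample>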
But I think that if you also include response time in the same output, the same entry will appear twice, and other calculations might be affected as well. So you can use two listeners: one configured like this, and another Simple Data Writer.
Hope this helps.
With Karate, I'm looking to simulate an end-to-end test structure where I do the following:
Make a GET request for specific data
Store a value as a def variable
Use that information for a separate scenario
This is what I have so far:
Scenario: Search for asset
Given url "https://foo.bar.buzz"
When method get
Then status 200
* def responseItem = $.items[0].id // variable initialized from the response
Scenario: Modify asset found
Given url "https://foo.bar.buzz/" + responseItem
// making request payload
When method put.....
I tried reading the documentation for reusing information, but that seemed to be for more in-depth testing.
Thoughts?
It is highly recommended to model flows like this as one scenario. Please refer to the documentation: https://github.com/intuit/karate#script-structure
Variables set using def in the Background will be re-set before every Scenario. If you are looking for a way to do something only once per Feature, take a look at callonce. On the other hand, if you are expecting a variable in the Background to be modified by one Scenario so that later ones can see the updated value - that is not how you should think of them, and you should combine your 'flow' into one scenario. Keep in mind that you should be able to comment-out a Scenario or skip some via tags without impacting any others. Note that the parallel runner will run Scenario-s in parallel, which means they can run in any order.
That said, maybe the Background or hooks are what you are looking for: https://github.com/intuit/karate#hooks
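Following that advice, here is a minimal sketch of the two steps combined into one Scenario, reusing the question's hypothetical URL (the PUT body is a placeholder):

    Scenario: Search for an asset, then modify it
        Given url 'https://foo.bar.buzz'
        When method get
        Then status 200
        * def responseItem = response.items[0].id

        # the request body below is a placeholder
        Given url 'https://foo.bar.buzz/' + responseItem
        And request { name: 'updated asset' }
        When method put
        Then status 200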
I have set up an IoT Hub that receives messages from a device. The hub is getting the messages, and I am able to see the information arriving and being processed in TSI.
(Screenshot: metrics from TSI in Azure)
However, when trying to view the data in the TSI environment, I get an error message saying there is no data.
I think the problem might have to do with setting up the model. I have created a hierarchy, types, and an instance.
(Screenshot: model view - instance)
As I understand it, the instance fields are what is needed to reference the set of data. In my case, the JSON message being pushed through the IoT Hub has a field called dvcid, where "1" is the ID of the only device sending values.
Am I doing something wrong?
How can I check the data being stored in TSI, like the rows and columns?
Is there a tutorial or example online where I can see the raw data going in and the model creation based on that data?
Thanks in advance
I also had a similar issue when I first tried using TSI. My problem was the timestamp I sent, which was not in a proper format (the formatter produced things like "/Date(1547048015593+0100)/", which is not a typical way of encoding dates). When I specified the 'o' (round-trip) date-to-string format, it worked fine afterwards:
message.Timestamp = DateTime.UtcNow.ToString("o");
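For context, a rough sketch of how the full message might be built and sent with the device SDK; the payload fields (dvcid, temperature) are assumptions based on the question, and deviceClient is an already-initialized DeviceClient:

    // assumes Microsoft.Azure.Devices.Client, Newtonsoft.Json and System.Text are in scope
    var telemetry = new
    {
        dvcid = "1",                              // must match the TSI instance field
        temperature = 21.5,                       // example telemetry value
        Timestamp = DateTime.UtcNow.ToString("o") // ISO 8601 round-trip format
    };
    var json = JsonConvert.SerializeObject(telemetry);
    var message = new Message(Encoding.UTF8.GetBytes(json));
    await deviceClient.SendEventAsync(message);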
Hope this helps
I've got a use case where I need to keep track of processing time metrics for a given component and use that as a feedback loop for tuning purposes within my spring-boot application. I thought I'd use a custom metric via an autowired GaugeService in the component I need to monitor, which is working fine and I can see my metrics in the /metrics endpoint. What I'm having trouble with is how to consume those metrics in application code. I would ideally like to receive every gauge submit result and compute a weighted moving average. Is this not a good use case for spring-boot-actuator metrics?
Reading the metrics information via code should be quite possible by injecting the MetricsEndpoint bean into your application code, just as we inject any other bean.
The MetricsEndpoint bean is defined by EndpointAutoConfiguration.
Every Endpoint defines methods annotated with @ReadOperation, @WriteOperation, or @DeleteOperation so that the endpoint can be adapted by an exposing technology such as JMX or Spring WebFlux.
In your case, you may only be interested in calling the @ReadOperation methods of MetricsEndpoint, which means two methods: listNames and metric. See the documentation for more information.
Now, when and how often you call this endpoint is up to you.
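A minimal sketch (assuming Spring Boot 2's actuator; the metric name is a placeholder, and the weighted moving average is left to the caller):

    import java.util.Collections;
    import org.springframework.boot.actuate.metrics.MetricsEndpoint;
    import org.springframework.stereotype.Component;

    @Component
    public class ProcessingTimeReader {

        private final MetricsEndpoint metricsEndpoint;

        public ProcessingTimeReader(MetricsEndpoint metricsEndpoint) {
            this.metricsEndpoint = metricsEndpoint;
        }

        // Reads the current value of a metric; "component.processing.time" is a placeholder name
        public Double currentProcessingTime() {
            MetricsEndpoint.MetricResponse response =
                    metricsEndpoint.metric("component.processing.time", Collections.emptyList());
            if (response == null || response.getMeasurements().isEmpty()) {
                return null; // metric not registered (yet)
            }
            return response.getMeasurements().get(0).getValue();
        }
    }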
Hope this helps.
Most examples of Flux use a todo or chat example. In all those examples, the data set you are storing is somewhat small and can be kept locally, so I'm not exactly sure whether my planned use of stores falls in line with the Flux "way".
The way I intend to use stores is somewhat like ORM repositories: a way to access data in multiple ways and persist data to the data service, whatever that might be.
Let's say I am building a project management system. I would probably have methods like these for data retrieval:
getIssueById
getIssuesByProject
getIssuesByAssignedUser
getIssueComments
getIssueCommentById
etc...
I would also have methods like this for persisting data to the data service:
addIssue
updateIssue
removeIssue
addIssueComment
etc...
The one main thing I would not do is locally store any issue data (and, for that matter, most store data that comes from a data service). It is important that most of the data is fresh, because the issue status may have been updated since I last retrieved it. All my data retrieval methods would probably always make an API request for the latest data.
Is this against the Flux "way"? Are there any issues with going about Flux like this?
I wouldn't get too hung up on the term "store". You need to create application state in some way if you want your components to render something. If you need to clear that state every time a different request is made, no problem. Here's how things would flow with getIssueById(), as an example:
component calls store.getIssueById(id)
the store returns an empty object, since the issue isn't in its cache
the store calls action.fetchIssue(id)
component renders empty state
server responds with issue data and calls action.receiveIssue(data)
store caches that data and dispatches a change event
component responds to event by calling store.getIssueById(id)
the issue data is returned
component renders data
Persisting changes would be similar, with only the most recent server response being held in the store.
user interaction in component triggers action.updateIssue(modifiedIssue)
store handles action, sending changes to server
server responds with updated issue and calls action.receiveIssue(data)
...and so on with the last 4 steps from above.
As you can see, it's not really about modeling your data, just controlling how it comes and goes.
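To make that concrete, here is a rough sketch of the read path in plain Flux; the store, dispatcher, and action modules are hypothetical:

    // IssueStore.js - hypothetical store following the flow above
    import { EventEmitter } from 'events';
    import dispatcher from './dispatcher';
    import { fetchIssue } from './issueActions';

    const cache = {}; // only the most recent server responses live here

    class IssueStore extends EventEmitter {
      getIssueById(id) {
        if (!cache[id]) {
          fetchIssue(id); // kick off the API request
          return {};      // the component renders an empty state for now
        }
        return cache[id];
      }
    }

    const store = new IssueStore();

    // action.receiveIssue(data) ends up dispatching something like this
    dispatcher.register((action) => {
      if (action.type === 'RECEIVE_ISSUE') {
        cache[action.issue.id] = action.issue; // replace, don't accumulate
        store.emit('change');
      }
    });

    export default store;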