Setting HTTP output header parameters when working with aggregation

I am trying to change the contentType of the response of an aggregated operation. Here is my example code:
interface MyAggregateInterface {
RequestResponse:
    op1( typeOp1Request )( typeOp1Response )
}

outputPort MyAggregatePort {
    Interfaces: MyAggregateInterface
}

embedded {
Jolie:
    "MyAggregatedCode.ol" in MyAggregatePort
}

inputPort MyInputPortHttp {
    Protocol: http {
        .debug = 1;
        .debug.showContent = 1;
        .format -> format;
        .contentType -> mime;
        .charset = "UTF-8";
        .default = "default";
        .compression = false
    }
    Location: "socket://localhost:8081"
    Interfaces: DefaultHttpInterface
    Aggregates: MyAggregatePort
}
I would like to change the return format for op1.

Well, I will try to answer your question.
First, we need to define your op1 response type:
type typeOp1Response: any {
    .format?: string
}
or, if you prefer:
type typeOp1Response: undefined
I personally prefer the first one, so that you can decide the MIME type in the aggregated service.
Now you need to add a courier session:
courier MyInputPortHttp {
    [ interface MyAggregateInterface( request )( response ) ] {
        forward( request )( response );
        if ( is_defined( response.format ) ) {
            mime = response.format;
            undef( response.format )
        }
    }
}
This implementation has the limitation that it can only return flat data in the root node.
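For instance, under the first typing, the aggregated service could reply along these lines (a hypothetical op1 body; the payload and MIME type are purely illustrative):
op1( request )( response ) {
    response = "raw text payload";   // flat value in the root node
    response.format = "text/plain"   // picked up by the courier and used as the contentType
}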
Another way is to use the input type to define your output format:
type typeOp1Request: void {
    .otherParameter1: string
    .format?: string
}
Then your courier becomes:
courier MyInputPortHttp {
    [ interface MyAggregateInterface( request )( response ) ] {
        forward( request )( response );
        if ( request.format == "json" ) {
            mime = "application/json"
        };
        if ( request.format == "xml" ) {
            mime = "application/xml"
        }
    }
}
Not sure if this answers your question.

As Balint pointed out, we are missing some information on the nature of the response.
However, it seems to me that the second example better covers the general case. We abstract from any information coming from the aggregated service (which ignores the fact it is aggregated) and we decide what to do with the response, based on local logic (within the aggregator).
Following Balint's example, we can wrap the aggregated operation with a courier and define the format of the output there. I include below a minimal working example.
Aggregated service (aggregated.ol)
type PersonRequestType: void {
    .name: string
}

type PersonResponseType: void {
    .name: string
    .surname: string
}

interface MyAggregatedInterface {
    RequestResponse: op1( PersonRequestType )( PersonResponseType ) throws RecordNotFound
}

inputPort IN {
    Location: "local"
    Interfaces: MyAggregatedInterface
}

execution { concurrent }

main
{
    op1( request )( response ) {
        if ( request.name == "Mario" ) {
            response.name = "Mario";
            response.surname = "Mario"
        } else {
            throw( RecordNotFound )
        }
    }
}
Aggregator service
include "aggregated.ol"
outputPort MyAggregatePort { Interfaces: MyAggregatedInterface }
embedded { Jolie: "aggregated.ol" in MyAggregatePort }
inputPort HttpPort {
Location: "socket://localhost:8000"
Protocol: http {
.format -> format
}
Aggregates: MyAggregatePort
}
courier HttpPort {
[ interface MyAggregatedInterface( request )( response ) ]{
forward( request )( response );
format = "json" // e.g., alternative xml
}
}
By changing the value set to format, e.g., from "json" to "xml", we change the format of the HTTP response.
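For instance, assuming the http protocol's usual mapping of query parameters onto the request tree, an invocation could look like this (hypothetical request and output):
curl "http://localhost:8000/op1?name=Mario"
{"name":"Mario","surname":"Mario"}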
References:
Courier sessions in the Jolie documentation
Reference introduction of couriers and a detailed example of their semantics (pre-print version): https://doi.org/10.1109/SOCA.2012.6449432

Related

Akka http unsupported method exception

I have a plain HTTP route defined like so:
delete {
  handleErrorsAndReport("delete_foo") {
    fooRepository.deleteFoo(fooId)
    complete(NoContent)
  }
}
I wanted to add some basic authentication to it, so now it looks like:
seal(
  authenticateBasic(
    realm = "Secure foo",
    scopeAuthenticator
  ) { scopes =>
    delete {
      handleErrorsAndReport("delete_foo") {
        fooRepository.deleteFoo(fooId, scopes)
        complete(NoContent)
      }
    }
  }
)
The full directive is:
concat(
  get {
    // something else which is working
  },
  seal(
    // something else which is working
  ),
  seal(
    authenticateBasic(
      realm = "Secure foo",
      scopeAuthenticator
    ) { scopes =>
      delete {
        handleErrorsAndReport("delete_foo") {
          fooRepository.deleteFoo(fooId, scopes)
          complete(NoContent)
        }
      }
    }
  )
)
Now I am getting the following exception when I am trying to delete foos:
Request DELETE http://builder/foos/fooId failed with response code 405 due to request error. Response body: HTTP method not allowed, supported methods: PUT
What could be the issue? The way I've been consuming the API has not changed but I'm afraid that something has changed with the introduction of the seal directive.
I have no idea why, but this worked:
concat(
  get {
    // something else which is working
  },
  seal(
    authenticateBasic(
      realm = "Secure foo",
      scopeAuthenticator
    ) { scopes =>
      concat(
        delete {
          handleErrorsAndReport("delete_foo") {
            fooRepository.deleteFoo(fooId, scopes)
            complete(NoContent)
          }
        },
        put {
          // other authenticated endpoint
        }
      )
    }
  )
)
In other words, I consolidated the two sealed directives which were previously separate. Maybe it's related to this. A plausible explanation: seal converts rejections into responses immediately, so a DELETE request reaching the earlier sealed block (which apparently contained only a PUT route) was turned into a 405 "supported methods: PUT" response instead of falling through to the later alternatives in concat.

Negative (database dependent) test cases in Spring Cloud Contract

I am writing a Spring Cloud Contract for a simple API that receives an account number and returns a "good" response
if this account exists in the database, or a "bad" response otherwise. How do I specify the "bad" response in the contract if the request that causes "bad" responses has the same format as a "good" request?
My Java classes:
@PostMapping("/postrequest")
public AccountDto postMethod(@RequestBody FindAccountRequest rq) {
    return service.findAccountByNumber(rq);
}

public class FindAccountRequest {
    String accountNumber;
}

public class AccountDto {
    Integer balance;
    Integer errorCode;
}
Contracts:
Contract.make {
    request {
        description("Existing account — good response")
        method POST()
        url '/postrequest'
        headers { contentType(applicationJson()) }
        body([accountNumber: $(regex('[0-9]{20}'))])
    }
    response {
        status 200
        headers { contentType(applicationJson()) }
        body([balance: anyInteger()])
    }
}

Contract.make {
    request {
        description("Nonexistent account — bad response")
        method POST()
        url '/postrequest'
        headers { contentType(applicationJson()) }
        body([accountNumber: $(regex('[0-9]{20}'))])
    }
    response {
        status 200
        headers { contentType(applicationJson()) }
        body([errorCode: anyInteger()])
    }
}
Request:
{
    "accountNumber": "12345678901234567890"
}
Prepare two different account numbers, one for the positive case and one for the negative one, and write two different contracts, as sketched below.
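For instance, the "bad" contract can pin a concrete account number that the producer's base test class guarantees is absent from the database, while the "good" contract pins a number the test setup inserts. A minimal sketch of the "bad" contract (the concrete number is an illustrative assumption):
Contract.make {
    request {
        description("Nonexistent account - bad response")
        method POST()
        url '/postrequest'
        headers { contentType(applicationJson()) }
        // An account number the producer's test setup guarantees is NOT in the database
        body([accountNumber: "00000000000000000000"])
    }
    response {
        status 200
        headers { contentType(applicationJson()) }
        body([errorCode: anyInteger()])
    }
}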

Cloudflare Workers - selectively cache HTML content

I want to create a Cloudflare Worker which selectively caches HTML page contents, equivalent to having a page rule with cache level = cache everything and edge cache TTL = 30 mins.
Requests going via the simplified worker code below never hit the cache, instead making a request to my origin every time.
Any idea what I'm missing here?
addEventListener('fetch', event => {
  event.respondWith(handleRequest(event.request))
})

async function handleRequest(request) {
  const url = new URL(request.url)
  if (request.method == "GET" && url.pathname == "/foo/bar") {
    // url.search already includes the leading "?"
    const newurl = url.protocol + "//" + url.hostname + url.pathname + url.search
    let response = await fetch(newurl, request, { cf: { cacheTtl: 1800 } })
    response = new Response(response.body, response)
    response.headers.delete("pragma")
    return response
  } else {
    const response = await fetch(request)
    return response
  }
}
This call is the problem:
fetch(newurl, request, { cf: { cacheTtl: 1800 } })
fetch() takes two parameters, not three. The JavaScript calling convention ignores extra parameters, so your { cf: { cacheTtl: 1800 } } is being ignored.
It looks like you aren't actually making changes to the URL, so perhaps you could simply do:
fetch(request, { cf: { cacheTtl: 1800 } })
If you really do want to rewrite the URL, then you'll need a two-step process:
request = new Request(newurl, request);
let response = await fetch(request, { cf: { cacheTtl: 1800 } });
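One more point worth verifying against the Workers documentation: cacheTtl alone only affects content Cloudflare already considers cacheable, and HTML responses are typically not cached by default, so emulating a "cache everything" page rule may also require the cacheEverything option:
let response = await fetch(request, { cf: { cacheTtl: 1800, cacheEverything: true } });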

How to issue a subsequent API request only after the first was done?

I need to call an API to upload a photo, this API returns an ID of the photo. Then, I need to get that ID and use it as a parameter for another API.
The problem is, the second API gets called before the first API has a chance to complete (and return the ID). What can be done about this?
I'm using Alamofire 4 and Swift 3.
My code:
// First API - Upload file
func uploadFile(url: String, image: UIImage, callback: @escaping (JSON) -> ()) {
    let imageData = UIImagePNGRepresentation(image)
    let URL2 = try! URLRequest(url: url, method: .post, headers: header)
    Alamofire.upload(multipartFormData: { (multipartFormData) in
        multipartFormData.append(imageData!, withName: "photo", fileName: "picture.png", mimeType: "image/png")
    }, with: URL2, encodingCompletion: { (result) in
        switch result {
        case .success(let upload, _, _):
            upload.responseJSON { response in
                switch response.result {
                case .success(let value):
                    let json = JSON(value)
                    callback(json)
                case .failure(let error):
                    print(error)
                }
            }
        case .failure(let encodingError):
            print(encodingError)
        }
    })
}
// Second API
func Post(url: String, parameters: [String: Any], callback: @escaping (JSON) -> ()) {
    Alamofire.request(url, method: .post, parameters: parameters.asParameters(), encoding: ArrayEncoding(), headers: header).responseData { (response) in
        switch response.result {
        case .success(let value):
            let json = JSON(data: value) // parse the raw Data into JSON
            callback(json)
        case .failure(let error):
            print(error)
        }
    }
}
// Calling the First API
var uploadedImage: [String: String]!
uploadFile(url: baseUrl, image: image, callback: { (json) in
    DispatchQueue.main.async {
        uploadedImage = ["filename": json["data"]["photo"]["filename"].stringValue,
                         "_id": json["data"]["photo"]["_id"].stringValue]
    }
})

// Calling the Second API
Post(url: baseUrl, parameters: uploadedImage) { (json) in
    DispatchQueue.main.async {
        self.activityIndicator.stopAnimating()
    }
}
In order to avoid the race condition that you're describing, you must be able to serialize the two calls somehow. The solutions that come to mind are either barriers, blocking calls, or callbacks. Since you're already using asynchronous (non-blocking) calls, I will focus on the last item.
I hope that a pseudocode solution would be helpful to you.
Assuming the 1st call is always performed before the 2nd, you would do something like:
firstCall(paramsForFirstOfTwo){
    onSuccess {
        secondCall(paramsForSuccess)
    }
    onFailure {
        secondCall(paramsForFailure)
    }
}
However, if the 1st call is optional, you could do:
if (someCondition){
    // The above example where 1 -> 2
} else {
    secondCall(paramsForNoFirstCall)
}
If you must perform the 1st call a certain number of times before the 2nd is performed, you can:
let n = 3; var v = 0; // assuming these are accessible from within completedCounter()

firstCall1(...){ /* after running code specific to onSuccess or onFailure, call completedCounter() */ }
firstCall2(...){ /* same as previous */ }
firstCall3(...){ /* same as previous */ }

function completedCounter(){
    // v must be locked (synchronized) or accessed atomically! See:
    // http://stackoverflow.com/q/30851339/3372061 (atomics)
    // http://stackoverflow.com/q/24045895/3372061 (locks)
    lock(v){
        if (v < n)
            v += 1
        else
            secondCall(paramsBasedOnResultsOfFirstCalls);
    }
}
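Applied to the code in the question, the callback approach might look like the sketch below. It reuses the question's own uploadFile and Post functions and assumes they live on the same class as activityIndicator:
uploadFile(url: baseUrl, image: image, callback: { (json) in
    // The ID only exists once the upload has completed, so build the
    // parameters here, inside the first call's callback
    let uploadedImage = ["filename": json["data"]["photo"]["filename"].stringValue,
                         "_id": json["data"]["photo"]["_id"].stringValue]
    // Issue the second request only now
    self.Post(url: self.baseUrl, parameters: uploadedImage) { (json) in
        DispatchQueue.main.async {
            self.activityIndicator.stopAnimating()
        }
    }
})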

Get the whole response body when the response is chunked?

I'm making an HTTP request and listening for "data":
response.on("data", function (data) { ... })
The problem is that the response is chunked so the "data" is just a piece of the body sent back.
How do I get the whole body sent back?
request.on('response', function (response) {
  var body = '';
  response.on('data', function (chunk) {
    body += chunk;
  });
  response.on('end', function () {
    console.log('BODY: ' + body);
  });
});
request.end();
Over at https://groups.google.com/forum/?fromgroups=#!topic/nodejs/75gfvfg6xuc, Tane Piper provides a good solution very similar to scriptfromscratch's, but for the case of a JSON response:
request.on('response', function (response) {
  var data = [];
  response.on('data', function (chunk) {
    data.push(chunk);
  });
  response.on('end', function () {
    var result = JSON.parse(data.join(''));
    return result;
  });
});
This addresses the issue that OP brought up in the comments section of scriptfromscratch's answer.
I have never worked with the HTTP client library, but since it works just like the server API, try something like this:
var data = '';
response.on('data', function (chunk) {
  // append chunk to your data
  data += chunk;
});
response.on('end', function () {
  // work with your data var
});
See the Node.js docs for reference.
In order to support the full spectrum of possible HTTP applications, the Node.js HTTP API is very low-level, so data is received chunk by chunk, not as a whole.
There are two approaches you can take to this problem:
1) Collect data across multiple "data" events and append the results together prior to printing the output. Use the "end" event to determine when the stream is finished, at which point you can write the output.
var http = require('http');

http.get('some/url', function (resp) {
  var respContent = '';
  resp.on('data', function (data) {
    respContent += data.toString(); // data is a Buffer instance
  });
  resp.on('end', function () {
    console.log(respContent);
  });
}).on('error', console.error);
2) Use a third-party package to abstract the difficulties involved in collecting an entire stream of data. Two different packages provide a useful API for solving this problem (there are likely more!): bl (Buffer List) and concat-stream; take your pick!
var http = require('http');
var bl = require('bl');

http.get('some/url', function (response) {
  response.pipe(bl(function (err, data) {
    if (err) {
      return console.error(err);
    }
    data = data.toString();
    console.log(data);
  }));
}).on('error', console.error);
The reason it's messed up is that you need to call JSON.parse(data.toString()): data is a Buffer, so you can't just parse it directly.
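For example, a minimal sketch that collects the Buffer chunks and parses them only once the stream has ended:
var chunks = [];
response.on('data', function (chunk) {
  chunks.push(chunk); // each chunk is a Buffer
});
response.on('end', function () {
  // Concatenate the Buffers first, then decode and parse in one go
  var result = JSON.parse(Buffer.concat(chunks).toString());
  console.log(result);
});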
If you don't mind using the request library:
var request = require('request');

request('http://www.google.com', function (error, response, body) {
  if (!error && response.statusCode == 200) {
    console.log(body); // Print the Google web page.
  }
});
If you are dealing with non-ASCII content (especially Chinese/Japanese/Korean characters, no matter what encoding they are in), you'd better not treat the chunk data passed to the response.on('data') event as a string directly.
Concatenate them as byte buffers and decode them only in response.on('end') to get the correct result.
// Snippet in TypeScript syntax:
//
// Assuming that the server side will accept the "test_string" you post, and
// respond with a string that concatenates the content of "test_string" many
// times, so that it triggers the on("data") event multiple times.
//
import * as http from "http";

const data2Post = '{"test_string": "swamps/沼泽/沼澤/沼地/늪"}';
const postOptions = {
    hostname: "localhost",
    port: 5000,
    path: "/testService",
    method: 'POST',
    headers: {
        'Content-Type': 'application/json',
        // Do not use data2Post.length on a CJK string; it returns an improper value for 'Content-Length'
        'Content-Length': Buffer.byteLength(data2Post)
    },
    timeout: 5000
};

let body: string = '';
let body_chunks: Array<Buffer> = [];
let body_chunks_bytelength: number = 0; // Used to terminate the connection if the POST response grows too large.

let postReq = http.request(postOptions, (res) => {
    console.log(`statusCode: ${res.statusCode}`);
    res.on('data', (chunk: Buffer) => {
        body_chunks.push(chunk);
        body_chunks_bytelength += chunk.byteLength;
        // Debug print. Note that a chunk may contain incomplete characters, so the decoding
        // here may be incorrect; it only serves to demonstrate the difference from the final
        // result in the res.on("end") event.
        console.log("Partial body: " + chunk.toString("utf8"));
        // Terminate the connection in case the POST response is too large. (10*1024*1024 = 10MB)
        if (body_chunks_bytelength > 10 * 1024 * 1024) {
            postReq.connection.destroy();
            console.error("Too large POST response. Connection terminated.");
        }
    });
    res.on('end', () => {
        // Decode the correctly concatenated response data
        let mergedBodyChunkBuffer: Buffer = Buffer.concat(body_chunks);
        body = mergedBodyChunkBuffer.toString("utf8");
        console.log("Body using chunk: " + body);
        console.log(`body_chunks_bytelength=${body_chunks_bytelength}`);
    });
});

// Actually send the request body
postReq.write(data2Post);
postReq.end();
What about HTTPS chunked responses? I've been trying to read a response from an API that responds over HTTPS with the header Transfer-Encoding: chunked. Each chunk is a Buffer, but when I concat them all together and try converting to a string with UTF-8, I get weird characters.
