Get the whole response body when the response is chunked? - http

I'm making an HTTP request and listening for "data":
response.on("data", function (data) { ... })
The problem is that the response is chunked, so "data" is just a piece of the body sent back.
How do I get the whole body sent back?

request.on('response', function (response) {
  var body = '';
  response.on('data', function (chunk) {
    body += chunk;
  });
  response.on('end', function () {
    console.log('BODY: ' + body);
  });
});
request.end();

Over at https://groups.google.com/forum/?fromgroups=#!topic/nodejs/75gfvfg6xuc, Tane Piper provides a good solution very similar to scriptfromscratch's, but for the case of a JSON response:
request.on('response', function (response) {
  var data = [];
  response.on('data', function (chunk) {
    data.push(chunk);
  });
  response.on('end', function () {
    var result = JSON.parse(data.join(''));
    return result;
  });
});
This addresses the issue that OP brought up in the comments section of scriptfromscratch's answer.
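One caveat: data.join('') converts each Buffer to a string individually, which can mangle a multibyte character that happens to be split across two chunks. A slightly safer variant (just a sketch along the same lines, not from the linked thread) concatenates the Buffers before decoding:
var chunks = [];
response.on('data', function (chunk) {
  chunks.push(chunk); // keep the raw Buffers
});
response.on('end', function () {
  // Concatenate the Buffers first, then decode once, so characters split
  // across chunk boundaries are decoded correctly.
  var result = JSON.parse(Buffer.concat(chunks).toString('utf8'));
  console.log(result);
});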

I've never worked with the HTTP client library, but since it works just like the server API, try something like this:
var data = '';
response.on('data', function(chunk) {
  // append chunk to your data
  data += chunk;
});
response.on('end', function() {
  // work with your data var
});
See node.js docs for reference.

In order to support the full spectrum of possible HTTP applications, Node.js's HTTP API is very low-level, so data is received chunk by chunk rather than as a whole.
There are two approaches you can take to this problem:
1) Collect data across multiple "data" events and append the results together prior to printing the output. Use the "end" event to determine when the stream is finished and you can write the output.
var http = require('http');
http.get('some/url', function (resp) {
  var respContent = '';
  resp.on('data', function (data) {
    respContent += data.toString(); // data is a Buffer instance
  });
  resp.on('end', function () {
    console.log(respContent);
  });
}).on('error', console.error);
2) Use a third-party package to abstract the difficulties involved in collecting an entire stream of data. Two different packages provide a useful API for solving this problem (there are likely more!): bl (Buffer List) and concat-stream; take your pick!
var http = require('http');
var bl = require('bl');
http.get('some/url', function (response) {
  response.pipe(bl(function (err, data) {
    if (err) {
      return console.error(err);
    }
    data = data.toString();
    console.log(data);
  }));
}).on('error', console.error);
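And with concat-stream the same idea looks roughly like this (a sketch assuming the concat-stream package is installed; its callback receives the fully buffered body):
var http = require('http');
var concat = require('concat-stream');
http.get('some/url', function (response) {
  response.pipe(concat(function (data) {
    // data is a single Buffer containing the whole response body
    console.log(data.toString());
  }));
}).on('error', console.error);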

The reason it comes out messed up is that you need to call JSON.parse(data.toString()): data is a Buffer, so you can't just parse it directly.
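In other words, decode the Buffer before parsing (a minimal sketch; body stands in for a Buffer collected from the response):
// body stands in for the Buffer collected from the response stream
var body = Buffer.from('{"answer": 42}');
var parsed = JSON.parse(body.toString('utf8')); // decode first, then parse
console.log(parsed.answer); // 42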

If you don't mind using the request library:
var request = require('request');
request('http://www.google.com', function (error, response, body) {
  if (!error && response.statusCode == 200) {
    console.log(body); // Print the Google web page.
  }
});

If you are dealing with non-ASCII content (especially Chinese/Japanese/Korean characters, whatever their encoding), you should not treat the chunks passed to the response.on('data') event as strings directly.
Concatenate them as byte buffers and decode them only in response.on('end') to get the correct result.
// Snippet in TypeScript syntax:
//
// Assuming that the server side accepts the "test_string" you post and
// responds with a string that repeats the content of "test_string" many
// times, so that the on("data") event is triggered multiple times.
//
import * as http from "http"; // Node's http module

const data2Post = '{"test_string": "swamps/沼泽/沼澤/沼地/늪"}';
const postOptions = {
  hostname: "localhost",
  port: 5000,
  path: "/testService",
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'Content-Length': Buffer.byteLength(data2Post) // Do not use data2Post.length on a CJK string; it returns the wrong value for 'Content-Length'
  },
  timeout: 5000
};
let body: string = '';
let body_chunks: Array<Buffer> = [];
let body_chunks_bytelength: number = 0; // Used to terminate the connection if the POST response grows too large.
let postReq = http.request(postOptions, (res) => {
  console.log(`statusCode: ${res.statusCode}`);
  res.on('data', (chunk: Buffer) => {
    body_chunks.push(chunk);
    body_chunks_bytelength += chunk.byteLength;
    // Debug print. A chunk may end in the middle of a multibyte character, so the decoding
    // here may be wrong; it only demonstrates the difference compared with the final
    // result in the res.on("end") event.
    console.log("Partial body: " + chunk.toString("utf8"));
    // Terminate the connection in case the POST response is too large. (10*1024*1024 = 10MB)
    if (body_chunks_bytelength > 10 * 1024 * 1024) {
      postReq.connection.destroy();
      console.error("Too large POST response. Connection terminated.");
    }
  });
  res.on('end', () => {
    // Decode the correctly concatenated response data
    let mergedBodyChunkBuffer: Buffer = Buffer.concat(body_chunks);
    body = mergedBodyChunkBuffer.toString("utf8");
    console.log("Body using chunk: " + body);
    console.log(`body_chunks_bytelength=${body_chunks_bytelength}`);
  });
});
// Send the request body, otherwise the server never receives the POST data.
postReq.write(data2Post);
postReq.end();

What about a chunked response over HTTPS? I've been trying to read a response from an API that responds over HTTPS with the header Transfer-Encoding: chunked. Each chunk is a Buffer, but when I concat them all together and try converting to a string with UTF-8, I get weird characters.

Related

Response of Requests made shown as empty in json report of newman

After running the collection using newman with the JSON reporter, a JSON file gets generated.
But the response portion is [] i.e. empty, while the other response-related attributes have proper values (e.g. responseTime, responseSize, etc.).
So how can I get the response body/data into this JSON report?
My actual requirement is to record the response for each request made, in either JSON or Excel/CSV format.
While I was not able to solve this problem directly, I used Newman as a JavaScript library and recorded the requests and responses in separate text files.
The generated files have names like request1, request2 and so on for the request files, and likewise for the response files, on every execution.
Below is the code for this solution:
const newman = require('newman'),
  fs = require('fs');
var rq = 1;
var rs = 1;
newman.run({
  collection: require('./ABC.postman_collection.json'),
  environment: require('./XYZ.postman_environment.json'),
  iterationData: './DataSet.csv',
  reporters: 'cli'
}).on('beforeRequest', function (error, args) {
  if (error) {
    console.error(error);
  } else {
    fs.writeFile('request' + rq++ + '.txt', args.request.body.raw, function (error) {
      if (error) {
        console.error(error);
      }
    });
  }
}).on('request', function (error, args) {
  if (error) {
    console.error(error);
  } else {
    fs.writeFile('response' + rs++ + '.txt', args.response.stream, function (error) {
      if (error) {
        console.error(error);
      }
    });
  }
});

Downloading Blob from Docusign Envelopes API

Using Meteor HTTP I'm able to get a response from DocuSign and convert it to a base64 buffer.
try {
  const response = HTTP.get(`${baseUrl}/envelopes/${envelopeId}/documents/1`, {
    headers: {
      "Authorization": `bearer ${token}`,
      "Content-Type": "application/json",
    },
  });
  const buffer = new Buffer(response.content).toString('base64');
  return buffer;
} catch(e) {
  console.log(e);
  throw new Meteor.Error(e.reason);
}
Then on the client I'm using FileSaver.js to saveAs a blob created from an ArrayBuffer via this function
function _base64ToArrayBuffer(base64) {
  const binary_string = window.atob(base64);
  const len = binary_string.length;
  const bytes = new Uint8Array(len);
  for (let i = 0; i < len; i++) {
    bytes[i] = binary_string.charCodeAt(i);
  }
  return bytes.buffer;
}
// template helper
'click [data-action="download"]'(e, tmpl) {
  const doc = this;
  return Meteor.call('downloadPDF', doc, (err, pdf) => {
    if (err) {
      return notify({
        message: err,
        timeout: 3000,
      });
    }
    const pdfBuffer = pdf && _base64ToArrayBuffer(pdf);
    console.log(pdfBuffer);
    return saveAs(new Blob([pdfBuffer], {type: 'application/pdf'}), `docusign_pdf.pdf`);
  });
},
The PDF is downloading with the correct size and page length, but all the pages are blank. Should I be encoding the buffer differently? Is there something else I'm missing?
When uploading documents into DocuSign you can either send the raw document bytes as part of a multipart/form-data request, or you can send the document as a base64-encoded file in the document node of your request body.
However, once the envelope is complete, the DocuSign platform converts the document into a PDF (if it wasn't one already) and serves that raw file. As such, you shouldn't have to base64-decode the document, so I would try removing that part of your code.
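A related pitfall is reading the raw PDF bytes into a string before encoding them. As just a sketch (using Node's https module rather than Meteor HTTP; the bearer-token handling mirrors the question and is not DocuSign-verified), you can keep the document as Buffers the whole way and only base64-encode once, for transport to the client:
var https = require('https');

function downloadDocument(url, token, callback) {
  https.get(url, { headers: { Authorization: `bearer ${token}` } }, function (res) {
    var chunks = [];
    res.on('data', function (chunk) {
      chunks.push(chunk); // keep raw Buffers, never convert individual chunks to strings
    });
    res.on('end', function () {
      var pdf = Buffer.concat(chunks);        // the raw PDF bytes
      callback(null, pdf.toString('base64')); // encode once, only for transport to the client
    });
  }).on('error', callback);
}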

Retrieving an External image and storing in CollectionFS

Alright, so I have an image at a URL stored in the variable path, and I want to download it and store it in CollectionFS. How can I get/create the image buffer from the response?
var request = Npm.require('request').defaults({encoding: null});
request.get(path, function(err, res, body) {
  return CollectionFS.storeBuffer(res.request.uri.path, body, {
  });
});
It appears to store successfully, with the correct size. But when I try to view the image it appears to be corrupted.
Answering my own question. I had two issues with the above code. Firstly, Meteor doesn't like it when you try to touch the database inside a callback, so I added a future to wait until the callback was done.
Secondly, the default encoding on storeBuffer is utf-8, which is a problem for images. Changing it to binary solved my problem.
var Future = Npm.require("fibers/future");
var pre = new Future();
var preOnComplete = pre.resolver();
var request = Npm.require('request').defaults({encoding: null});
request.get(path, function(err, res, body) {
  return preOnComplete(err, res, body);
});
Future.wait(pre);
return EventImages.storeBuffer(pre.value.request.href, pre.value.body, {
  contentType: 'image/jpeg',
  metadata: { eventId: eventId },
  encoding: 'binary'
});

How to handle http.get() when received no data in nodejs?

Let's say that I have a service that returns customer information by a given id. If it's not found I return null.
My app calls this services like this:
http.get('http://myserver/myservice/customer/123', function(res) {
  res.on('data', function(d) {
    callback(null, d);
  });
}).on('error', function(e) {
  callback(e);
});
Currently my service responds with 200 but no data.
How do I handle the case where no data is returned?
Should I change the service to return a different HTTP status code? If so, how do I handle that on the client? I tried many different approaches without success.
First off, it's important to remember that res is a stream of data; as it stands, your code is likely to call callback multiple times with chunks of data. (The data event is fired each time new data comes in.) If your method has been working up to this point, it's only because the responses have been small enough that the response payload wasn't broken into multiple chunks.
To make your life easier, use a library that handles buffering HTTP response bodies and allows you to get the complete response. A quick search on npm reveals the request package.
var request = require('request');
request('http://server/customer/123', function (error, response, body) {
  if (!error && response.statusCode == 200) {
    callback(null, body);
  } else {
    callback(error || response.statusCode);
  }
});
As far as your service goes, if a customer ID is not found, return a 404 – it's semantically invalid to return a 200 OK when in fact there is no such customer.
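On the service side that could look roughly like this (a sketch assuming an Express-style handler; the in-memory store is only for illustration):
var express = require('express');
var app = express();

// hypothetical in-memory store, just for illustration
var customers = { '123': { id: '123', name: 'Alice' } };

app.get('/customer/:id', function (req, res) {
  var customer = customers[req.params.id];
  if (!customer) {
    return res.status(404).send('Customer not found'); // not a 200 with an empty body
  }
  res.json(customer);
});

app.listen(3000);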
Use the Client Response 'end' event:
//...
res.on('data', function(d) {
  callback(null, d);
}).on('end', function() {
  console.log('RESPONSE COMPLETE!');
});
I finally found a way to solve my problem:
http.get('http://myserver/myservice/customer/123', function(response) {
  var data = '';
  response.on('data', function(d) {
    data += d;
  });
  response.on('end', function() {
    if (data != '') {
      callback(null, data);
    } else {
      callback('No customer was found');
    }
  });
}).on('error', function(e) {
  callback(e);
});
This solves the problem, but I'm also going to adopt josh3736's suggestion and return a 404 on my service when the customer is not found.
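With the 404 in place, the client can branch on the status code instead of sniffing for an empty body (same shape as the code above; callback is assumed to be defined by the caller):
http.get('http://myserver/myservice/customer/123', function(response) {
  var data = '';
  response.on('data', function(d) {
    data += d;
  });
  response.on('end', function() {
    if (response.statusCode === 404) {
      return callback('No customer was found');
    }
    callback(null, data);
  });
}).on('error', function(e) {
  callback(e);
});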

How to listen to node http-proxy traffic?

I am using node-http-proxy. However, in addition to relaying HTTP requests, I also need to listen to the incoming and outgoing data.
Intercepting the response data is where I'm struggling. Node's ServerResponse object (and more generically the WritableStream interface) doesn't broadcast a 'data' event. http-proxy seems to create its own internal request, which produces a ClientResponse object (which does broadcast the 'data' event), but this object is not exposed publicly outside the proxy.
Any ideas how to solve this without monkey-patching node-http-proxy or creating a wrapper around the response object?
A related issue in node-http-proxy's GitHub issues seems to imply this is not possible. For future attempts by others, here is how I hacked around it:
1) you'll quickly find out that the proxy only calls the writeHead(), write() and end() methods of the res object
2) since res is already an EventEmitter, you can start emitting new custom events
3) listen for these new events to assemble the response data and then use it
var eventifyResponse = function(res) {
  var methods = ['writeHead', 'write', 'end'];
  methods.forEach(function(method) {
    var oldMethod = res[method];          // remember original method
    res[method] = function() {            // replace with a wrapper
      oldMethod.apply(this, arguments);   // call original method
      arguments = Array.prototype.slice.call(arguments, 0);
      arguments.unshift("method_" + method);
      this.emit.apply(this, arguments);   // broadcast the event
    };
  });
  return res; // return the instrumented response so the assignment below works
};

res = eventifyResponse(res);
var outputData = '';
res.on('method_writeHead', function(statusCode, headers) { saveHeaders(); });
res.on('method_write', function(data) { outputData += data; });
res.on('method_end', function(data) { use_data(outputData + data); });
proxy.proxyRequest(req, res, options)
This is a simple proxy server sniffing the traffic and writing it to console:
var http = require('http'),
    httpProxy = require('http-proxy');
//
// Create a proxy server with custom application logic
//
var proxy = httpProxy.createProxyServer({});

// assign events
proxy.on('proxyRes', function (proxyRes, req, res) {
  // collect response data
  var proxyResData = '';
  proxyRes.on('data', function (chunk) {
    proxyResData += chunk;
  });
  proxyRes.on('end', function () {
    var snifferData = {
      request: {
        data: req.body,
        headers: req.headers,
        url: req.url,
        method: req.method
      },
      response: {
        data: proxyResData,
        headers: proxyRes.headers,
        statusCode: proxyRes.statusCode
      }
    };
    console.log(snifferData);
  });
  // console.log('RAW Response from the target', JSON.stringify(proxyRes.headers, true, 2));
});

proxy.on('proxyReq', function (proxyReq, req, res, options) {
  // collect request data
  req.body = '';
  req.on('data', function (chunk) {
    req.body += chunk;
  });
  req.on('end', function () {
  });
});

proxy.on('error', function (err) {
  console.error(err);
});

// run the proxy server
var server = http.createServer(function (req, res) {
  // every time a request comes in, proxy it:
  proxy.web(req, res, {
    target: 'http://localhost:4444'
  });
});
console.log("listening on port 5556");
server.listen(5556);
I tried your hack but it didn't work for me. My use case is simple: I want to log the incoming and outgoing traffic from an Android app to our staging server, which is secured by basic auth.
https://github.com/greim/hoxy/
was the solution for me. My node-http-proxy always returned 500 (while a direct request to the staging server did not); maybe the authorization headers were not being forwarded correctly.
Hoxy worked fine right from the start.
npm install hoxy [-g]
hoxy --port=<local-port> --stage=<your stage host>:<port>
As rules for logging I specified:
request: $aurl.log()
request: #log-headers()
request: $method.log()
request: $request-body.log()
response: $url.log()
response: $status-code.log()
response: $response-body.log()
Beware, this prints any binary content.
