Lambda binary payload encoding in Go - http

I'm trying to write a Lambda function that returns a .WAV file in chunks over HTTP. I've got my actual data in a byte slice (outputPayload []byte) and am trying to pass it back. While the request seems to run, the response received is a different length from what I expect and appears to be corrupted. Here's my code:
// Create the necessary headers
responseHeader := make(map[string]string)
responseHeader["Accept-Ranges"] = "bytes"
responseHeader["Content-Range"] = fmt.Sprintf("%s/%d", rangeRequired, fileSize)
responseHeader["Content-Type"] = fileType // this will be "audio/wav"
responseHeader["Content-Length"] = fmt.Sprintf("%d", returnedByteCount)

responseBody := string(outputPayload)
return events.APIGatewayProxyResponse{
    StatusCode: http.StatusPartialContent,
    Headers:    responseHeader,
    Body:       responseBody,
}, nil
As a basic check, using more at the command line, the start of the original file looks like this:
RIFF$^?^C^#WAVEfmt ^P^#^#^#^A^#^B^#D<AC>^#^#^P<B1>^B^#^D^#^P^#data^#^?^C^#ESC^#^Y^#
While the downloaded file looks like this:
RIFF$^?^C^#WAVEfmt ^P^#^#^#^A^#^B^#D�^#^#^P�^B^#^D^#^P^#data^#^?^C^#ESC^#^Y^#
I'm guessing I have an encoding issue somewhere. My hunch is that the string conversion is the problem, but that's the type the APIGatewayProxyResponse Body field requires. How do I change my code so the payload matches the original file?
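For what it's worth, the standard fix here (a sketch, assuming the API Gateway proxy integration with binary media types enabled for audio/wav) is to base64-encode the payload and set IsBase64Encoded, so API Gateway decodes it back to raw bytes instead of mangling the body as UTF-8 text:

// Sketch: requires "encoding/base64" in the import list.
// API Gateway decodes the body back to the original bytes when
// IsBase64Encoded is true and binary media types are configured.
responseBody := base64.StdEncoding.EncodeToString(outputPayload)
return events.APIGatewayProxyResponse{
    StatusCode:      http.StatusPartialContent,
    Headers:         responseHeader,
    Body:            responseBody,
    IsBase64Encoded: true,
}, nil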

Related

How to access field names in MultipartDecoder

I am decoding form fields submitted via an HTTP POST request using requests-toolbelt. I successfully instantiated a MultipartDecoder as described here. Now I would like to access the form fields by the name I gave them when sending the request.
I am able to get the name of a field like this:
from requests_toolbelt.multipart import decoder
multipart_string = b"--ce560532019a77d83195f9e9873e16a1\r\nContent-Disposition: form-data; name=\"author\"\r\n\r\nJohn Smith\r\n--ce560532019a77d83195f9e9873e16a1\r\nContent-Disposition: form-data; name=\"file\"; filename=\"example2.txt\"\r\nContent-Type: text/plain\r\nExpires: 0\r\n\r\nHello World\r\n--ce560532019a77d83195f9e9873e16a1--\r\n"
content_type = "multipart/form-data; boundary=ce560532019a77d83195f9e9873e16a1"
decoder = decoder.MultipartDecoder(multipart_string, content_type)
field_name = decoder.parts[0].headers[b'Content-Disposition'].decode().split(';')[1].split('=')[1]
But this seems quite wrong. What is the usual way to access the form field names?
This is how I currently use it to decode the parts:
lst = []
for part in decoder.MultipartDecoder(postdata.encode('utf-8'), content_type_header).parts:
    disposition = part.headers[b'Content-Disposition']
    params = {}
    for dispPart in str(disposition).split(';'):
        kv = dispPart.split('=', 2)
        params[str(kv[0]).strip()] = str(kv[1]).strip('\"\'\t \r\n') if len(kv) > 1 else str(kv[0]).strip()
    type = part.headers[b'Content-Type'] if b'Content-Type' in part.headers else None
    lst.append({'content': part.content, "type": type, "params": params})
I assume that, because it's a standard MIME header, there are functions that can do the same with less code.
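One way to avoid the hand-rolled splitting (a sketch using only the standard library; part is a BodyPart from the decoder above) is to let email.message.Message parse the Content-Disposition parameters:

from email.message import Message

def part_params(part):
    # Let the stdlib MIME machinery parse the header instead of
    # splitting on ';' and '=' by hand.
    msg = Message()
    msg['Content-Disposition'] = part.headers[b'Content-Disposition'].decode()
    return dict(msg.get_params(header='Content-Disposition'))

# e.g. part_params(decoder.parts[0]).get('name')  ->  'author'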

python-requests %2F characters

I am building a request containing a list of parameters: a list of endpoints read from a file, all of which contain "/" characters.
First the file is read as:
pointRef = []
with open("myfolder/" + scope, 'r') as f:
    for line in f:
        pointRef.append(line.strip())
then passing
params = {'endDate': endDate, 'startDate': startDate, 'pointRef': pointRef}
and executing
r = requests.get(url=url_ranged_multiple, headers=headers, params=params)
This gives an error (other requests I composed by hand work), but I noticed that the final URL built by requests.get contains "%2F" instead of "/". I wonder if this is the problem and, if so, how I can correct it.
Many thanks in advance
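%2F is simply the percent-encoding of "/" in a query-string value, and a compliant server decodes it transparently, so the encoding itself is usually not the bug. If this particular server insists on literal slashes, one workaround (a sketch, reusing the names from the question) is to build the query string yourself and pass it to requests pre-encoded:

from urllib.parse import urlencode

# Sketch: encode the query ourselves, leaving '/' unescaped (safe='/');
# doseq=True expands the pointRef list into repeated parameters.
# requests appends a pre-built string passed as params to the URL as-is.
params = {'endDate': endDate, 'startDate': startDate, 'pointRef': pointRef}
query = urlencode(params, doseq=True, safe='/')
r = requests.get(url=url_ranged_multiple, headers=headers, params=query)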

Concat 2 strings in Erlang and send over HTTP

I'm trying to concatenate two variables, Address and Payload, and then send them over HTTP to a server, but I have two problems. When I try to concatenate the two variables with a ';' delimiter it doesn't work, and sending the data of Payload or Address doesn't work either. This is my code:
handle_rx(Gateway, #link{devaddr=DevAddr}=Link, #rxdata{port=Port, data=RxData}, RxQ) ->
    Data = base64:encode(RxData),
    Devaddr = base64:encode(DevAddr),
    TextAddr = "Device address: ",
    TextPayload = "Payload: ",
    Address = string:concat(TextAddr, Devaddr),
    Payload = string:concat(TextPayload, Data),
    Json = string:join([Address, Payload], "; "),
    file:write_file("/tmp/foo.txt", io_lib:fwrite("~s.\n", [Json])),
    inets:start(),
    ssl:start(),
    httpc:request(post, {"http://192.168.0.121/apiv1/lorapacket/rx", [], "application/x-www-form-urlencoded", Address}, [], []),
    ok;
handle_rx(_Gateway, _Link, RxData, _RxQ) ->
    {error, {unexpected_data, RxData}}.
I have no errors that I can show you. When I write Address or Payload individually to the file it works but sending doesn't work...
Thank you for your help!
When I try to concat the 2 variables with a delimiter ';' it doesn't work.
5> string:join(["hello", <<"world">>], ";").
[104,101,108,108,111,59|<<"world">>]
6> string:join(["hello", "world"], ";").
"hello;world"
base64:encode() returns a binary, yet string:join() requires string arguments. You can do this:
7> string:join(["hello", binary_to_list(<<"world">>)], ";").
"hello;world"
Response to comment:
In Erlang, the string "abc" is equivalent to the list [97,98,99]. However, the binary syntax <<"abc">> is not equivalent to <<[97,98,99]>>; rather, the binary syntax <<"abc">> is special shorthand notation for the binary <<97,98,99>>.
Therefore, if you write:
Address = [97,98,99].
then the code:
Bin = <<Address>>.
after variable substitution becomes:
Bin = <<[97,98,99]>>.
and that isn't legal binary syntax.
If you need to convert a string/list contained in a variable, like Address, to a binary, you use list_to_binary(Address)--not <<Address>>.
In your code here:
Json = string:join([binary_to_list(<<Address>>),
                    binary_to_list(<<Payload>>)],
                   ";").
Address and Payload were previously assigned the return value of string:concat(), which returns a string, so there is no reason to (attempt to) convert Address to a binary with <<Address>> and then immediately convert it back to a string with binary_to_list(). Instead, you would just write:
Json = string:join([Address, Payload], ";")
The problem with your original code is that you called string:concat() with a string as the first argument and a binary as the second argument--yet string:concat() takes two string arguments. You can use binary_to_list() to convert a binary to the string that you need for the second argument.
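Applied to the code in the question, the concatenation step becomes (a sketch of just the changed lines):

Address = string:concat(TextAddr, binary_to_list(Devaddr)),
Payload = string:concat(TextPayload, binary_to_list(Data)),
Json = string:join([Address, Payload], "; "),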
Sorry I'm new to Erlang
As with any language, you have to study the basics and write numerous toy examples before you can start writing code that actually does something.
You don't have to concatenate strings. Erlang has iolists, one of the language's best features:
1> RxData = "Hello World!", DevAddr = "Earth",
1> Data = base64:encode(RxData), Devaddr = base64:encode(DevAddr),
1> TextAddr="Device address", TextPayload="Payload",
1> Json=["{'", TextAddr, "': '", Devaddr, "', '", TextPayload, "': '", Data, "'}"].
["{'","Device address","': '",<<"RWFydGg=">>,"', '",
"Payload","': '",<<"SGVsbG8gV29ybGQh">>,"'}"]
2> file:write_file("/tmp/foo.txt", Json).
ok
3> file:read_file("/tmp/foo.txt").
{ok,<<"{'Device address': 'RWFydGg=', 'Payload': 'SGVsbG8gV29ybGQh'}">>}
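The same iolist can also be used for the HTTP send. A sketch of the send step, flattening the iolist with iolist_to_binary/1 and reusing the URL and content type from the question:

inets:start(),
ssl:start(),
Body = iolist_to_binary(Json),
httpc:request(post,
              {"http://192.168.0.121/apiv1/lorapacket/rx", [],
               "application/x-www-form-urlencoded", Body},
              [], []).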

NIO HTTP file server - connection closed prematurely

I want to create an HTTP static file server using Java NIO. It works fine for small files, but seems to truncate the HTTP response for larger files (672 KB of a 3.8 MB image is returned according to my Chrome inspector, and my browser displays a partially corrupted image). Is the code below incorrect?
(I know there are existing libraries for this and eventually I will use one in my project. But initially I want to implement a basic one myself to see if my project concept is feasible.)
Iterator<SelectionKey> keys = selector.selectedKeys().iterator();
while (keys.hasNext()) {
    SelectionKey key = keys.next();
    keys.remove();
    if (key.isAcceptable()) {
        // New client encountered
        serverSocket.accept().configureBlocking(false)
                .register(selector, SelectionKey.OP_READ);
    } else if (key.isReadable()) {
        // Additional data for existing client encountered
        SocketChannel selectedClient = (SocketChannel) key.channel();
        ByteBuffer buffer = ByteBuffer.allocate(548);
        String requestedFile2 = getRequstedFile(key, selectedClient, buffer);
        buffer.clear();
        buffer.flip();
        FileChannel fc = FileChannel.open(Paths.get(requestedFile2));
        String string = "HTTP/1.1 200 Ok\nContent-Type: image/jpeg\nContent-Length: "
                + (Files.size(Paths.get(requestedFile2)) + "\n\n");
        selectedClient.write(ByteBuffer.wrap(string.getBytes()));
        while (fc.read(buffer) > -1) {
            buffer.flip(); // read from the buffer
            selectedClient.write(buffer);
            buffer.clear();
        }
        selectedClient.close();
    }
}
(Exception handling etc. omitted for brevity)
EDIT
I have a content-length-mismatch error message. So what is the right way to determine the HTTP response size when reading a file's contents using the NIO API?
buffer.clear();
That should be
buffer.compact();
and the loop should be
while (fc.read(buffer) > 0 || buffer.position() > 0)
You're assuming everything got written by the write.
Also you need to change the HTTP header line terminators to \r\n.
And you need to study RFC 2616 about the content length.
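A sketch of the corrected status line and headers (names taken from the question's code; note the \r\n terminators and the blank line that ends the header block):

String header = "HTTP/1.1 200 OK\r\n"
        + "Content-Type: image/jpeg\r\n"
        + "Content-Length: " + Files.size(Paths.get(requestedFile2)) + "\r\n"
        + "\r\n";
selectedClient.write(ByteBuffer.wrap(header.getBytes()));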
I guess you have to check the return value from selectedClient.write(); see the SocketChannel.write() documentation:
Unless otherwise specified, a write operation will return only after writing all of the r requested bytes. Some types of channels, depending upon their state, may write only some of the bytes or possibly none at all.
Which could be the case here. Either add another inner loop that keeps writing as long as bytes remain in the buffer, or amend the loop according to the example in ByteBuffer.compact(): http://docs.oracle.com/javase/7/docs/api/java/nio/ByteBuffer.html#compact()
while (buffer.position() > 0 || fc.read(buffer) > 0) {
    buffer.flip(); // read from the buffer
    selectedClient.write(buffer);
    buffer.compact();
}
And remember that this code assumes selectedClient is in blocking mode. If it weren't, you would need to invoke another select() waiting for selectedClient to become writable...

How to check actual content length against Content-Length header?

A user can POST a document to our web service. We stream it elsewhere. But, at the end of the streaming, we need to be sure they didn't lie about their Content-Length.
I assume if headerContentLength > realContentLength, the request will just wait for them to send the rest, eventually timing out. So that's probably OK.
What about if headerContentLength < realContentLength? I.e. what if they keep sending data after they said they were done?
Is this taken care of by Node.js in any way? If not, what is a good way to check? I suppose I could just count up the bytes inside a data event listener, i.e. req.on("data", function (chunk) { totalBytes += chunk.length; }). That seems like a kludge, though.
To check the actual length of the request, you have to add it up yourself. The data chunks are Buffers and they have a .length property that you can add up.
If you specify the encoding with request.setEncoding(), your data chunks will be Strings instead. In that case, call Buffer.byteLength(chunk) to get the length. (Buffer is a global object in node.)
Add up the total for each of your chunks and you'll know how much data was sent.
Here's a rough (untested) example:
https.createServer(function (req, res) {
    var expected_length = req.headers['content-length']; // I think this is a string ;)
    var actual_length = 0;
    req.on('data', function (chunk) {
        actual_length += chunk.length;
    });
    req.on('end', function () {
        console.log('expected: ' + expected_length + ', actual: ' + actual_length);
    });
});
Note: length refers to the maximum length of the Buffer's content, not the actual length. However, it works in this case because chunk buffers are always created at the exact correct length. Just be aware of that if you're working with buffers somewhere else.
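If you also want to enforce the header rather than just log the mismatch, a rough sketch (same req/res as above) would be to abort as soon as the client exceeds its declared length:

var expected = parseInt(req.headers['content-length'], 10);
var actual = 0;
req.on('data', function (chunk) {
    actual += chunk.length;
    if (actual > expected) {
        res.writeHead(400); // they sent more than they declared
        res.end();
        req.destroy();      // stop reading any further data
    }
});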
