A user can POST a document to our web service. We stream it elsewhere. But at the end of the stream, we need to be sure they didn't lie about their Content-Length.
I assume if headerContentLength > realContentLength, the request will just wait for them to send the rest, eventually timing out. So that's probably OK.
What about if headerContentLength < realContentLength? I.e. what if they keep sending data after they said they were done?
Is this taken care of by Node.js in any way? If not, what is a good way to check? I suppose I could just count up the bytes inside a data event listener, i.e. req.on("data", function (chunk) { totalBytes += chunk.length; }). That seems like a kludge, though.
To check the actual length of the request, you have to add it up yourself. The data chunks are Buffers and they have a .length property that you can add up.
If you specify the encoding with request.setEncoding(), your data chunks will be Strings instead. In that case, call Buffer.byteLength(chunk) to get the length. (Buffer is a global object in node.)
Add up the total for each of your chunks and you'll know how much data was sent.
Here's a rough (untested) example:
var https = require('https');

https.createServer(options, function (req, res) { // `options` holds your TLS key/cert
    var expected_length = req.headers['content-length']; // header values are strings
    var actual_length = 0;
    req.on('data', function (chunk) {
        actual_length += chunk.length;
    });
    req.on('end', function () {
        console.log('expected: ' + expected_length + ', actual: ' + actual_length);
    });
}).listen(443);
Note: a Buffer's length property is the size of the memory allocated for the buffer, not the length of its content. It works in this case because chunk buffers are always created at exactly the right length, but be aware of the difference if you're working with buffers somewhere else.
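For the string-chunk case mentioned above, a minimal (equally untested) sketch:

req.setEncoding('utf8');
req.on('data', function (chunk) {
    // chunk is now a String; Buffer.byteLength() counts bytes, not characters
    actual_length += Buffer.byteLength(chunk, 'utf8');
});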
I'm trying to use SequenceReader<T> in .NET Core Preview 8 to parse Guacamole protocol network traffic.
The traffic might look as follows:
5.error,14.some text here,1.0;
This is a single error instruction. There are 3 fields:
OpCode = error
Reason = some text here
Status = 0 (see Status Codes)
The fields are comma-delimited (and semicolon-terminated), but each field is also prefixed with its length. I presume that's so you can parse something like:
5.error,24.some, text, with, commas,1.0;
To produce Reason = some, text, with, commas.
Plain comma-delimited parsing is simple enough to do (with or without SequenceReader). However, to utilise the length prefix I've tried the following:
public static bool TryGetNextElement(this ref SerializationContext context, out ReadOnlySequence<byte> element)
{
    element = default;
    var start = context.Reader.Position;
    if (!context.Reader.TryReadTo(out ReadOnlySequence<byte> lengthSlice, Utf8Bytes.Period, advancePastDelimiter: true))
        return false;
    if (!lengthSlice.TryGetInt(out var length))
        return false;
    context.Reader.Advance(length);
    element = context.Reader.Sequence.Slice(start, context.Reader.Position);
    return true;
}
Based on my understanding of the initial proposal, this should work, though it could probably also be simplified; some of the methods in the proposal make life a bit easier than what is available in .NET Core Preview 8.
However, the problem with this code is that the SequenceReader does not seem to Advance as I would expect. Its Position and Consumed properties remain unchanged after advancing, so the element I slice at the end is always an empty sequence.
What do I need to do in order to parse this protocol correctly?
I'm guessing that .Reader here is a property; this is important because SequenceReader<T> is a mutable struct, but every time you access .SomeProperty you are working with an isolated copy of the reader. It is fine to hide it behind a property, but you'd need to make sure you work with a local and then push back when complete, i.e.
var reader = context.Reader;
var start = reader.Position;
if (!reader.TryReadTo(out ReadOnlySequence<byte> lengthSlice,
        Utf8Bytes.Period, advancePastDelimiter: true))
    return false;
if (!lengthSlice.TryGetInt(out var length))
    return false;
reader.Advance(length);
element = reader.Sequence.Slice(start, reader.Position);
context.Reader = reader; // update position
return true;
Note that a nice feature of this is that in the failure cases (return false), you won't have changed the state yet, because you've only been mutating your local standalone clone.
You could also consider a ref-return property for .Reader.
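For example, a minimal sketch of such a property; note this assumes the reader lives on a class, since a struct member cannot ref-return its own field (CS8170):

using System.Buffers;

public sealed class SerializationContext
{
    private SequenceReader<byte> _reader;

    // Callers mutate the context's own reader in place,
    // so the copy-back assignment is no longer needed.
    public ref SequenceReader<byte> Reader => ref _reader;
}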
I am using redis2-nginx-module to serve HTML content stored as a value in Redis. The following nginx config gets the value for a key from Redis:
redis2_query get $fullkey;
redis2_pass localhost:6379;
#default_type text/html;
When the URL is hit, the following unwanted response is rendered along with the value for that key:
$14
How do I remove this unwanted output? Also, if the key passed as an argument doesn't exist in Redis, how can I check for that condition and display some default page?
(Here's a similar question on ServerFault)
There's no way with just the redis2 module, as it always returns a raw Redis response.
If you only need GET and SET commands, you may try HttpRedisModule (redis_pass). If you need something fancier, like hashes, you should probably try filtering the raw response from Redis with Lua, e.g. something along the lines of:
content_by_lua '
    local res = ngx.location.capture("/redis",
        { args = { key = ngx.var.fullkey } }
    )
    local body = res.body
    local s, e = string.find(body, "\r\n", 1, true)
    ngx.print(string.sub(body, e + 1))
';
(Sorry, the code's untested, don't have an OpenResty instance at hand.)
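For the missing-key part of the question: Redis answers a GET for an absent key with the nil bulk reply $-1, so the same (also untested) Lua block could check for that before printing and fall back to a hypothetical /default_page location:

content_by_lua '
    local res = ngx.location.capture("/redis",
        { args = { key = ngx.var.fullkey } }
    )
    local body = res.body
    if string.sub(body, 1, 3) == "$-1" then
        -- key not found: serve a fallback page instead
        return ngx.exec("/default_page")
    end
    local s, e = string.find(body, "\r\n", 1, true)
    ngx.print(string.sub(body, e + 1))
';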
Is it possible to make HTTP GET requests from within a Node-RED "function" node?
If so, could somebody point me to some example code, please?
The problem I want to solve is the following:
I want to parse a msg.payload with custom commands. For each command I want to make an HTTP request and replace the command with the response of an HTTP GET request.
Example:
msg.payload = "Good day %name%. It's %Time% in the %TimeOfDay%. Time for your coffee";
The %name%, %TimeOfDay% and %Time% placeholders should be replaced by the content of a GET request to http://nodeserver/name, ..., http://nodeserver/Time.
Thanks Hardilb. After half a day of searching I found out that the http request node can also be configured by placing a node just before it that sets:
msg.url = "http://127.0.0.1:1880/" + msg.command ;
msg.method = "GET";
I used the following code to get a list of commands:
var parts = msg.payload.split('%'),
    len = parts.length,
    odd = function (num) { return num % 2; };

msg.txt = msg.payload;
msg.commands = [];
msg.nrOfCommands = 0;
for (var i = 0; i < len; i++) {
    if (odd(i)) {
        msg.commands.push(parts[i]);
        msg.nrOfCommands = msg.nrOfCommands + 1;
    }
}
return msg;
You should avoid doing asynchronous or blocking stuff in function nodes.
Don't try to do it all in one function node; chain multiple function nodes with multiple http request nodes to build the string up a part at a time.
You can do this by stashing the string in another variable off the msg object instead of payload.
One thing to look out for is that you should make sure you clear out msg.headers before each call to the next http request node.
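For example, a minimal sketch of a function node placed before each http request node, assuming msg.commands and msg.txt were set up as in the code above:

// Take the next placeholder and prepare the request for it.
var command = msg.commands.shift();
msg.url = "http://127.0.0.1:1880/" + command;
msg.method = "GET";
msg.headers = {}; // clear headers left over from the previous http request node
return msg;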
I want to create an HTTP static file server using Java NIO. It works fine for small files, but seems to truncate the HTTP response for larger files (672 KB out of a 3.8 MB image is returned, according to my Chrome inspector, and my browser displays a partially corrupted image). Is the code below incorrect?
(I know there are existing libraries for this and eventually I will use one in my project. But initially I want to implement a basic one myself to see if my project concept is feasible.)
Iterator<SelectionKey> keys = selector.selectedKeys().iterator();
while (keys.hasNext()) {
    SelectionKey key = keys.next();
    keys.remove();
    if (key.isAcceptable()) {
        // New client encountered
        serverSocket.accept().configureBlocking(false)
                .register(selector, SelectionKey.OP_READ);
    } else if (key.isReadable()) {
        // Additional data for existing client encountered
        SocketChannel selectedClient = (SocketChannel) key.channel();
        ByteBuffer buffer = ByteBuffer.allocate(548);
        String requestedFile2 = getRequstedFile(key, selectedClient, buffer);
        buffer.clear();
        buffer.flip();
        FileChannel fc = FileChannel.open(Paths.get(requestedFile2));
        String string = "HTTP/1.1 200 Ok\nContent-Type: image/jpeg\nContent-Length: "
                + (Files.size(Paths.get(requestedFile2)) + "\n\n");
        selectedClient.write(ByteBuffer.wrap(string.getBytes()));
        while (fc.read(buffer) > -1) {
            buffer.flip(); // read from the buffer
            selectedClient.write(buffer);
            buffer.clear();
        }
        selectedClient.close();
    }
}
(Exception handling etc. omitted for brevity)
EDIT
I get a content-length-mismatch error message. So what is the right way to determine the HTTP response size when reading a file's contents using the NIO API?
buffer.clear();
That should be
buffer.compact();
and the loop should be
while (fc.read(buffer) > 0 || buffer.position() > 0)
You're assuming everything got written by the write.
Also you need to change the HTTP header line terminators to \r\n.
And you need to study RFC 2616 about the content length.
I guess you have to check the return value of selectedClient.write(); see the SocketChannel.write() documentation:
Unless otherwise specified, a write operation will return only after writing all of the requested bytes. Some types of channels, depending upon their state, may write only some of the bytes or possibly none at all.
Which could be the case here. Either add another inner loop that writes to the output as long as there are bytes remaining in the buffer, or amend the loop according to the example in ByteBuffer.compact(): http://docs.oracle.com/javase/7/docs/api/java/nio/ByteBuffer.html#compact()
while (buffer.position() > 0 || fc.read(buffer) > 0) {
    buffer.flip(); // read from the buffer
    selectedClient.write(buffer);
    buffer.compact();
}
And remember that this code assumes selectedClient is blocking. If that weren't the case, you would need to invoke another select(), waiting for selectedClient to become writable...
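A minimal sketch of that non-blocking variant, reusing the key and buffer names from the question's code:

// On a short write, keep the leftover buffer and wait for OP_WRITE.
SocketChannel channel = (SocketChannel) key.channel();
channel.write(buffer);
if (buffer.hasRemaining()) {
    key.interestOps(SelectionKey.OP_WRITE); // resume when the socket is writable
    key.attach(buffer);                     // stash the unwritten bytes
} else {
    key.interestOps(SelectionKey.OP_READ);  // done writing; go back to reading
}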
I'm adding SQLite support to my Google Chrome extension, to store historical data.
When creating the database, you are required to set its maximum size (I used 5 MB, as suggested in many examples).
I'd like to know how much memory I'm really using (for example after adding 1000 records), to have an idea of when the 5MB limit will be reached, and act accordingly.
The Chrome console doesn't reveal such figures.
Thanks.
You can calculate those figures if you want to. Basically, the default limit for localStorage and Web SQL storage is 5 MB, and names and values are saved as UTF-16, so in terms of stored characters the limit is really half of that, about 2.5 million characters. For Web SQL storage, you can increase the limit by adding the "unlimitedStorage" permission to the manifest.
The same approach applies to Web SQL storage, but there you have to go through all the tables and figure out how many characters there are per row.
In localStorage, you can test that with a population script:
var row = 0;
localStorage.clear();
var populator = function () {
    localStorage[row] = '';
    var x = '';
    for (var i = 0; i < (1024 * 100); i++) {
        x += 'A';
    }
    localStorage[row] = x;
    row++;
    console.log('Populating row: ' + row);
    populator();
};
populator();
The above should crash at around row 25 when it runs out of space, which works out to roughly 2.5 million stored characters. You can do the inverse and count how many characters there are per row; that tells you how much space you have used.
Another way to do this is to always add a "payload" and check for the exception; if one is thrown, you know you're out of space.
try {
    localStorage['foo'] = 'SOME_DATA';
} catch (e) {
    console.log('LIMIT REACHED! Do something else');
}
Internet Explorer has something called "remainingSpace", but that doesn't work in Chrome/Safari:
http://msdn.microsoft.com/en-us/library/cc197016(v=VS.85).aspx
I'd like to add a suggestion.
If it is a Chrome extension, why not make use of Web SQL storage or IndexedDB?
http://html5doctor.com/introducing-web-sql-databases/
http://hacks.mozilla.org/2010/06/comparing-indexeddb-and-webdatabase/
Source: http://caniuse.com/