Has anyone had any success at all using Node's child_process.spawn() in Meteor on any platform? I've tried it on both OS X and Windows as follows, and the app crashes immediately:
if (Meteor.isServer) {
  Meteor.startup(function() {
    cmd = __meteor_bootstrap__.require('child_process').spawn('irb', [], {detached: true, stdio: 'pipe'});
    cmd.stdout.on('data', function(data) {
      Fiber(function() {
        Replies.remove({});
        Replies.insert({message: data});
      }).run();
    });
  });
}
In the console, I get the following message on OS X and a similar one on Windows:
Assertion failed: (handle->InternalFieldCount() > 0), function Unwrap, file ../src/node_object_wrap.h, line 61.
Exited from signal: SIGABRT
Does anyone have any thoughts?
Thanks!
-Greg
data is a node Buffer which can't be inserted into a collection; convert it to a string first.
Also note your data event callback will be called multiple times as data is streamed from the child process (unless the output is so small that you happen to get it all in one buffer). You'll want to accumulate the data in a buffer and then insert it in your collection when you get the stream end event.
If there is any chance that your child process will be outputting utf-8 (anything other than pure ASCII), make sure to accumulate the data in a node Buffer first, and then convert the entire Buffer to a string, rather than converting each chunk of data to a string and accumulating the data as a string. (utf-8 characters can span multiple bytes, so you can't chop a byte stream into arbitrary pieces and parse each piece as utf-8 separately.)
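For example, here is a minimal sketch of that accumulate-then-convert approach, reusing the cmd, Fiber and Replies names from the question (untested against your setup, so treat it as an illustration):

var chunks = [];

cmd.stdout.on('data', function (data) {
  // keep the raw Buffers; don't convert each chunk to a string separately
  chunks.push(data);
});

cmd.stdout.on('end', function () {
  // concatenate everything and decode the complete byte stream once
  var message = Buffer.concat(chunks).toString('utf8');
  Fiber(function () {
    Replies.remove({});
    Replies.insert({message: message});
  }).run();
});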
I want to add a column to a MongoDB collection via R. The collection has a tabular format and is already relatively big (14,000,000 entries, 140 columns).
The function I am currently using is
function (collection, name, value)
{
  mongolite::mongo(collection)$update("{}", paste0("{\"$set\":{\"",
    name, "\": ", value, "}}"), multiple = TRUE)
  invisible(NULL)
}
It does work so far. (It takes about 5-10 minutes, which is OK, although it would be nice if the speed could be improved somehow.)
However, it also persistently gives me the following error, which interrupts the execution of the rest of the script.
The error message reads:
Error: Failed to send "update" command with database "test": Failed to read 4 bytes: socket error or timeout
Any help on resolving this error would be appreciated. (If there are ways to improve the performance of the update itself, I'd also be more than happy for any advice.)
The default socket timeout is 5 minutes.
You can override the default by setting sockettimeoutms directly in your connection URI:
mongoURI <- paste0("mongodb://", user, ":", pass, "@", mongoHost, ":", mongoPort, "/", db, "?sockettimeoutms=<something large enough in milliseconds>")
mcon <- mongo(mongoCollection, url=mongoURI)
mcon$update(...)
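For example, with a 20-minute timeout (the value and the newColumn field below are just illustrations; pick whatever fits your workload):

# 20 minutes = 1,200,000 ms; adjust as needed
mongoURI <- paste0("mongodb://", user, ":", pass, "@", mongoHost, ":", mongoPort,
                   "/", db, "?sockettimeoutms=1200000")
mcon <- mongo(mongoCollection, url = mongoURI)
mcon$update("{}", '{"$set":{"newColumn": 0}}', multiple = TRUE)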
I have a Qt program that processes stdin data like this:
QTextStream qtin(stdin);
QString stdindata = qtin.readAll();
QByteArray ba;
ba = stdindata.toUtf8();
QJsonDocument exJSONDoc(QJsonDocument::fromJson(ba));
QJsonObject extRoot;
extRoot = exJSONDoc.object();
QStringList keys;
keys = extRoot.keys();
for (int n = 0; n < keys.count(); n++)
{
    qDebug() << extRoot.value(keys[n]).toString();
}
It works when I call my program like this:
myprogram < ./data.json
But if I call it without any "<" it hangs in qtin.readAll().
How can I check with Qt if the stdin is empty?
(I am assuming a Linux, or at least POSIX, operating system.)
QTextStream qtin(stdin);
QString stdindata = qtin.readAll();
This would read stdin until end-of-file is reached. So it works with a redirected input like
myprogram < ./data.json
But if I call it without any "<" it hangs ...
But then (that is, if you run myprogram alone) stdin is not empty. It is the same as your shell's stdin, and your program, being the foreground job, is waiting for input on the terminal you are typing at (see also tty(4)). Try (in that case) typing some input on the terminal (which you could end with Ctrl-D to make an end-of-file condition). Read about job control and the tty demystified, and see also termios(3).
Perhaps you could detect that situation with e.g. isatty(3) on STDIN_FILENO. But that won't detect a pipe(7) like
tail -55 somefile | myprogram
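As a rough sketch of that check inside your program (assumptions: a POSIX system, and that skipping the read is what you want when stdin is a terminal):

#include <unistd.h>      // isatty, STDIN_FILENO (POSIX)
#include <QTextStream>

// Sketch only: read stdin only when it has been redirected from a file or pipe.
QString readStdinIfRedirected()
{
    if (isatty(STDIN_FILENO)) {
        // stdin is the interactive terminal itself (no '<' redirection);
        // note that a pipe such as `tail -55 somefile | myprogram` would NOT end up here.
        return QString();
    }
    QTextStream qtin(stdin);
    return qtin.readAll();
}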
You need to define what an empty stdin is for you; I have no idea what that means in your case, and I would instead think of myprogram < /dev/null (see null(4)) as the way to get an empty stdin.
Perhaps you should design myprogram so that some program option (perhaps --ignore-stdin) avoids any read from stdin.
The problem here is readAll. See the documentation:
Reads the entire content of the stream, and returns it as a QString.
Avoid this function when working on large files, as it will consume a
significant amount of memory.
So it reads stdin until it encounters end-of-file, and since stdin is associated with the console, you have to signal end-of-file yourself. Usually that is Ctrl-D, then pressing Enter.
It is more probable that you want to read stdin line by line.
To allow the user to edit text, the console transfers data to the standard input of the application only line by line. This was designed this way ages ago, when computers had only a printer as the user interface (no screen).
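A minimal line-by-line sketch with QTextStream (how you decide that the input is complete is up to you):

QTextStream qtin(stdin);
for (;;) {
    QString line = qtin.readLine();   // returns one line once the user presses Enter
    if (line.isNull())                // null string signals end-of-file (Ctrl-D)
        break;
    // accumulate or process the line here
}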
Now the question is: how to read JSON from stdin when it is connected to a console, without any end-of-file information?
I would use some SAX parser, but this would be too complicated for you.
So, is there another way to detect the end of the JSON?
You can try this approach (this is the basic idea, not a final solution, so it has a couple of shortcomings):
QFile file;
file.open(stdin, QIODevice::ReadOnly);            // QFile has no FILE* constructor, but open() accepts one
QByteArray data = file.peek(largeNumber);         // peek leaves the data in the buffer
QJsonParseError error;
QJsonDocument doc = QJsonDocument::fromJson(data, &error);
while (doc.isNull() && JSonNotTerminatedError(error.error))
{
    // TODO: wait for new data - it would be best to use the readyRead signal
    data = file.peek(largeNumber);                // re-peek so the loop sees the new data
    doc = QJsonDocument::fromJson(data, &error);
}
Where JSonNotTerminatedError is a helper that returns true for the respective QJsonParseError::ParseError values (see the linked documentation) which are related to unterminated JSON data.
Note that QFile has no constructor taking a FILE*, hence the open(stdin, ...) call above, but the main concept should be clear: read data from stdin and check whether it is a valid JSON document yet.
I have a simple script which transfers an image file from one machine to another,
does image processing and returns a result as "dice count".
The problem is that, at random, the received image is partially grey. There seems to be no pattern as to when the transfer is incomplete, and I see no other issues in the code.
The client gives no error or any indication that sendall failed or was incomplete.
Thanks guys.
server code:
def reciveImage():
    # create local buffer file
    fo = open("C:/foo.jpg", "wb")
    print "inside reciveImage"
    while True:
        print "inside loop"
        data = client.recv(4096)
        print "data length: ", len(data)
        fo.write(data)
        print "data written"
        if (len(data) < 4096):
            break
            print "break"  # never reached; the loop exits on the break above
    fo.close()
    print "Image received"
And the (simplified) client code:
data = open("/home/nao/recordings/cameras/image1.jpg", "rb")
# prepare to send data
binary_data = data.read()
data.close()
sock.sendall(binary_data)
Normal server output:
Client Command: findBlob. Requesting image...
inside reciveImage
inside loop
data length: 4096
data written
#... This happens a bunch of times....
inside loop
data length: 4096
data written
inside loop
data length: 1861
data written
Image received
dice count: 0
Waiting for a connection
But randomly it will only loop a few times or less, like:
Client Command: findBlob. Requesting image...
inside reciveImage
inside loop
data length: 1448
data written
Image received
dice count: 0
Waiting for a connection
recv does not block until all data has been received; it only blocks until some data has arrived and then returns it. E.g. if the client sends 512 bytes first and, a second later, another 512 bytes, your recv may return just the first 512 bytes even though you asked for 4096. So you should only break when recv returns an empty result, meaning no more data is available (the connection was closed).
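A sketch of such a loop, keeping the variable and file names from the question (the only real change is the exit condition):

def reciveImage():
    # create local buffer file
    fo = open("C:/foo.jpg", "wb")
    while True:
        data = client.recv(4096)
        if not data:          # empty result: the peer closed the connection
            break
        fo.write(data)
    fo.close()
    print "Image received"

For this to terminate, the client has to call sock.shutdown(socket.SHUT_WR) or sock.close() after sock.sendall, so the server actually sees end-of-stream.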
I managed to establish a connection in R to the MtGox websocket with the following specs:
url: https://socketio.mtgox.com/mtgox?Currency=USD
port: 80
specs: https://en.bitcoin.it/wiki/MtGox/API/Streaming
I used the improved R library "websockets" downloaded from https://github.com/zeenogee/R-Websockets:
require("websockets")
con = websocket("https://socketio.mtgox.com/mtgox?Currency=USD")
and the connection was successfully established. However, it seems that the socket is not broadcasting. I made a simple function f
f = function(con) {
  Print("Test Test!", con)
}
set_callback("receive", f, con)
while(TRUE)
{
  service(con)
  Sys.sleep(0.05)
}
which should print some text whenever some data is received from the websocket. But the websocket doesn't seem to trigger the "receive" method, and nothing is displayed. The code ends up in an infinite loop with no output.
I know that the websocket is working, so there must be a mistake in the code. Do I have to "ping" the socket somehow to start broadcasting? Does anyone have an idea how to get it working?
Thanks!
Firstly, you have an infinite loop because you have defined an infinite loop:
while(TRUE)
It is worth noting that numerous R websocket implementations rely on such a loop, so this may not be a bug but rather an implementation detail causing what you are seeing.
It would appear that you need to subscribe to the 'message' event, not 'receive' (see https://en.bitcoin.it/wiki/MtGox/API/Streaming).
In JavaScript (from MtGox Spec):
conn.on('message', function(data) {
  // Handle incoming data object.
});
Or in R:
set_callback('message',f,con)
Failing that, I would also say that maybe the stream is returning data that you are not able to print implicitly with the Print function in R.
Sample:
{
  "op": "remark",
  "message": <MESSAGE FROM THE SERVER>,
  "success": <boolean>
}
If the data follows this format as defined in the spec, you may wish to examine how that data is being parsed, and the "op" which is being returned.
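If you want to look at what is actually arriving, something along these lines may help (assumptions: the jsonlite package is installed, and the callback receives the raw message as its first argument; check the signature your websockets version actually passes):

library(jsonlite)

f <- function(data, con) {
  if (is.raw(data)) data <- rawToChar(data)   # normalise raw bytes to a character string
  msg <- fromJSON(data)                       # parse the JSON message
  cat("op:", msg$op, "\n")                    # e.g. "remark"
}
set_callback("message", f, con)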
I'm trying to write a Lua script that reads input from other processes and analyzes it. For this purpose I'm using io.popen, and it works as expected on Windows, but on Unix (Solaris) reading from io.popen blocks, so the script just waits there until something comes along instead of returning immediately...
As far as I know I can't change the functionality of io.popen from within the script, and if at all possible I would rather not have to change the C code, because then the script would need to be bundled with the patched binary.
Does that leave me with any command-line solutions?
OK, no answers so far, but for posterity, if someone needs a similar solution, this is more or less what I did:
function my_popen(name, cmd)
  local process = {}
  process.__proc = assert(io.popen(cmd..">"..name..".tmp", 'r'))
  process.__file = assert(io.open(name..".tmp", 'r'))
  process.lines = function(self)
    return self.__file:lines()
  end
  process.close = function(self)
    self.__proc:close()
    self.__file:close()
  end
  return process
end
proc = my_popen("somename", "some command")
while true do
  -- do stuff
  for line in proc:lines() do
    print(line)
  end
  -- do stuff
end
Your problem seems to be related to buffering. For some reason the pipe is waiting for some data to be read before it allows the opened program to write more to it, and it seems to be less than a line. What you can do is use io.popen(cmd):read("*a") to read everything at once; this should avoid the buffering problem. Then you can split the returned string (call it output) into lines with for line in string.gmatch(output, "[^\n]+") do something_with(line) end.
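Put together, a sketch of that read-everything-then-split approach (the command is just a placeholder):

-- read the whole output at once to avoid the per-line buffering problem
local proc = assert(io.popen("some command", "r"))
local output = proc:read("*a")       -- blocks until the command finishes
proc:close()

-- then split the captured output into lines
for line in string.gmatch(output, "[^\n]+") do
  print(line)                        -- or analyze the line here
end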
Your solution consists of dumping the output of the process to a file and then reading that file. You can replace your use of io.popen with os.execute and discard the return value (just check that it is 0).
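Roughly, that simplification could look like this (file name and command are placeholders; note that os.execute returns 0 on success in Lua 5.1 and true in 5.2+):

-- run the command to completion, then read its output from the temporary file
local function run_and_read(name, cmd)
  local ok = os.execute(cmd .. " > " .. name .. ".tmp")
  assert(ok == 0 or ok == true, "command failed")
  return assert(io.open(name .. ".tmp", "r"))
end

local file = run_and_read("somename", "some command")
for line in file:lines() do
  print(line)
end
file:close()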