I'm running a Docker build as a subprocess in Deno and would like its stdout streamed to the parent's stdout (Deno.stdout) so it's output straight away.
How can I achieve this?
Currently I have the following, but it doesn't output anything until the subprocess has finished:
const p = Deno.run({
  cmd: ['docker', 'build', '.'],
  stdout: 'piped'
});
const stdout = await p.output();
await Deno.stdout.write(stdout);
You're not far off; you just need to start piping the output before you await the process. There are some other optimizations you can make, too, like using copy from the std lib to pipe the subprocess's output to the main process's stdout without buffering everything in memory.
import { copy } from "https://deno.land/std@0.104.0/io/util.ts";
const cat = Deno.run({
  cmd: ["docker", "build", "--no-cache", ".", "-t", "foobar:latest"],
  cwd: "/path/to/your/project",
  stdout: "piped",
  stderr: "piped",
});
copy(cat.stdout, Deno.stdout);
copy(cat.stderr, Deno.stderr);
await cat.status();
console.log("Done!");
If you want to prefix each line with the name of the process it came from (useful when you have multiple subprocesses running), you can write a small function that uses the std lib's readLines function and a text encoder to do that:
import { readLines } from "https://deno.land/std@0.104.0/io/mod.ts";
import { writeAll } from "https://deno.land/std@0.104.0/io/util.ts";
async function pipeThrough(
  prefix: string,
  reader: Deno.Reader,
  writer: Deno.Writer,
) {
  const encoder = new TextEncoder();
  for await (const line of readLines(reader)) {
    await writeAll(writer, encoder.encode(`[${prefix}] ${line}\n`));
  }
}
const cat = Deno.run({
  cmd: ["docker", "build", "--no-cache", ".", "-t", "foobar:latest"],
  cwd: "/path/to/your/project",
  stdout: "piped",
  stderr: "piped",
});
pipeThrough("docker", cat.stdout, Deno.stdout);
pipeThrough("docker", cat.stderr, Deno.stderr);
await cat.status();
console.log("Done!");
The p stands for process. Deno.run returns a process handle, and p.status() resolves with the process state (not the stdout):
console.log(await p.status());
// { success: true, code: 0 }
Since you are awaiting the status, stdout will not stream any output until the process has finished.
Try using the process like this:
const p = Deno.run({ cmd, stderr: 'piped', stdout: 'piped' });

const [status, stdout, stderr] = await Promise.all([
  p.status(),
  p.output(),
  p.stderrOutput()
]);

console.log(new TextDecoder().decode(stdout)); // since it's returned as a Uint8Array
p.close();
But it'll still wait until the subprocess is done.
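If you do need the output as it arrives, the same idea as the piping answers above can be written with readLines; a minimal sketch (the docker command is just an example):

import { readLines } from "https://deno.land/std@0.104.0/io/mod.ts";

const p = Deno.run({
  cmd: ["docker", "build", "."],
  stdout: "piped",
});

// Each line is printed as soon as the subprocess emits it,
// instead of buffering everything until the process exits.
for await (const line of readLines(p.stdout)) {
  console.log(line);
}

await p.status();
p.close();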
Related
I am currently trying to create a temp file from /api/sendEmail.js with fs.mkdirSync:
fs.mkdirSync(path.join(__dirname, "../../public"));
but on Vercel (where my app is running) all folders are read-only and I can't create any temp files.
Error:
ERROR
Error: EROFS: read-only file system, mkdir '/var/task/.next/server/public'
As far as I can see there are some questions about this but no clear answer. Has anyone managed to do this?
Vercel allows creation of files in the /tmp directory. However, there are limitations with this: https://github.com/vercel/vercel/discussions/5320
An example of an /api function that writes and reads files is:
import fs from 'fs';
export default async function handler(req, res) {
  const {
    method,
    body,
  } = req

  try {
    switch (method) {
      case 'GET': {
        // read
        // This line opens the file as a readable stream
        const readStream = fs.createReadStream('/tmp/text.txt')

        // This will wait until we know the readable stream is actually valid before piping
        readStream.on('open', function () {
          // This just pipes the read stream to the response object (which goes to the client)
          readStream.pipe(res)
        })

        // This catches any errors that happen while creating the readable stream (usually invalid names)
        readStream.on('error', function (err) {
          res.end(err)
        })
        return
      }
      case 'POST':
        // write to /tmp, the only writable location on Vercel
        fs.writeFileSync('/tmp/text.txt', JSON.stringify(body))
        break
      default:
        res.setHeader('Allow', ['GET', 'POST'])
        res.status(405).end(`Method ${method} Not Allowed`)
    }

    // send result
    return res.status(200).json({ message: 'Success' })
  } catch (error) {
    return res.status(500).json(error)
  }
}
Also see: https://vercel.com/support/articles/how-can-i-use-files-in-serverless-functions
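For completeness, a sketch of how a client might exercise the handler above; the /api/file route name and deployment URL are assumptions, not part of the answer. Note that /tmp is scoped to a single function instance, so a later GET is not guaranteed to hit the instance that handled the POST.

// Hypothetical client; adjust the route to wherever the handler is deployed.
const route = "https://your-app.vercel.app/api/file";

await fetch(route, {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ hello: "world" }),
});

const res = await fetch(route); // GET streams back /tmp/text.txt
console.log(await res.text());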
Suppose I have 2 script, father.ts and child.ts, how do I spawn child.ts from father.ts and periodically send message from father.ts to child.ts ?
You have to use the Worker API
father.ts
const worker = new Worker(new URL("./child.ts", import.meta.url).href, { type: "module", deno: true });
worker.postMessage({ filename: "./log.txt" });
child.ts
self.onmessage = async (e) => {
  const { filename } = e.data;
  const text = await Deno.readTextFile(filename);
  console.log(text);
  self.close();
};
You can send messages using .postMessage
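Since the question asks about sending messages periodically, here is a minimal sketch that posts to the worker on an interval; the { tick } payload and the five-message cutoff are made up for illustration:

// father.ts (sketch): post a message to the worker every second.
const worker = new Worker(new URL("./child.ts", import.meta.url).href, {
  type: "module",
});

let tick = 0;
const id = setInterval(() => {
  worker.postMessage({ tick }); // hypothetical payload
  tick++;
  if (tick === 5) {
    clearInterval(id);
    worker.terminate(); // stop the worker once we're done
  }
}, 1000);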
You can use child processes. Here is an example: proc with PushIterable
This lets you asynchronously send and receive multiple commands from non-Deno child processes as well as Deno child processes.
Be careful: this requires --allow-run to work, and that almost always breaks out of the sandbox, if you care about that.
I want to run any arbitrary bash command from Deno, like I would with a child_process in Node. Is that possible in Deno?
In order to run a shell command, you have to use Deno.run, which requires --allow-run permissions.
There's an ongoing discussion about requiring --allow-all instead for running a subprocess.
The following will output to stdout.
// --allow-run
const process = Deno.run({
  cmd: ["echo", "hello world"]
});

// Close to release Deno's resources associated with the process.
// The process will continue to run after close(). To wait for it to
// finish, use `await process.status()` or `await process.output()`.
process.close();
If you want to store the output, you'll have to set stdout/stderr to "piped":
const process = Deno.run({
  cmd: ["echo", "hello world"],
  stdout: "piped",
  stderr: "piped"
});

const output = await process.output(); // "piped" must be set
const outStr = new TextDecoder().decode(output);
/*
const error = await process.stderrOutput();
const errorStr = new TextDecoder().decode(error);
*/
process.close();
Deno 1.28.0 added a new (unstable) API to run a shell command: Deno.Command
let cmd = new Deno.Command("echo", { args: ["hello world"] });
let { stdout, stderr } = await cmd.output();
// stdout & stderr are a Uint8Array
console.log(new TextDecoder().decode(stdout)); // hello world
More advanced usage:
const c = new Deno.Command("cat", { stdin: "piped", stdout: "piped" });
const child = c.spawn();

// open a file and pipe the output of the `cat` program into it;
// don't await the pipe yet: `cat` only exits once its stdin is closed
const file = await Deno.open("output.txt", { write: true, create: true });
const piping = child.stdout.pipeTo(file.writable);

const stdin = child.stdin.getWriter();
await stdin.write(new TextEncoder().encode("foobar"));
await stdin.close();

await piping;
const s = await child.status;
console.log(s);
The --unstable flag is required to use Deno.Command.
This API will most likely replace Deno.run
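Tying this back to the first question on this page: with Deno.Command you can have the child inherit the parent's stdout/stderr, so a docker build streams straight to the terminal. A minimal sketch:

// Stream `docker build` output directly to the parent's stdout/stderr.
const command = new Deno.Command("docker", {
  args: ["build", "."],
  stdout: "inherit",
  stderr: "inherit",
});

const { code } = await command.output(); // resolves once the build finishes
console.log(`docker build exited with code ${code}`);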
Make sure to await the status or output of the child process created with Deno.run.
Otherwise, the process might be killed before having executed any code. For example:
deno run --allow-run main.ts
main.ts:
const p = Deno.run({
  cmd: ["deno", "run", "--allow-write", "child.ts"],
});

const { code } = await p.status(); // (*1); wait here for child to finish
p.close();
child.ts:
// If we don't wait at (*1), no file is written after 3 sec delay
setTimeout(async () => {
  await Deno.writeTextFile("file.txt", "Some content here");
  console.log("finished!");
}, 3000);
Pass arguments via stdin / stdout:
main.ts:
const p = Deno.run({
  cmd: ["deno", "run", "--allow-write", "child.ts"],
  // Enable pipe between processes
  stdin: "piped",
  stdout: "piped",
  stderr: "piped",
});

if (!p.stdin) throw Error();

// pass input to child
await p.stdin.write(new TextEncoder().encode("foo"));
await p.stdin.close();

const { code } = await p.status();
if (code === 0) {
  const rawOutput = await p.output();
  await Deno.stdout.write(rawOutput); // could do some processing with output
} else { /* error */ }
child.ts:
import { readLines } from "https://deno.land/std/io/bufio.ts"; // convenient wrapper
// read given input argument
let args = "";
for await (const line of readLines(Deno.stdin)) {
  args += line;
}

setTimeout(async () => {
  await Deno.writeTextFile("file.txt", `Some content here with ${args}`);
  console.log(`${args} finished!`); // prints "foo finished!"
}, 3000);
There is also a good example resource in the Deno docs.
You can do that with Deno.run like this:
// myscript.js
Deno.run({
  cmd: ["echo", "hello world"]
})
You'll have to pass --allow-run when running the script in order for this to work:
deno run --allow-run ./myscript.js
If your shell command prints some messages before the process ends, you really want to pipe its stdout and stderr to your own streams, and also throw an exception you can catch.
You can even alter the output while piping the process streams to your own streams:
// std version is illustrative; any release that exports
// readableStreamFromReader / mergeReadableStreams works
import * as streams from "https://deno.land/std@0.150.0/streams/mod.ts";

async function run(cwd, ...cmd) {
  const stdout = []
  const stderr = []
  cwd = cwd || Deno.cwd()
  const p = Deno.run({
    cmd,
    cwd,
    stdout: "piped",
    stderr: "piped"
  })
  console.debug(`$ ${cmd.join(" ")}`)
  const decoder = new TextDecoder()
  streams.readableStreamFromReader(p.stdout).pipeTo(new WritableStream({
    write(chunk) {
      for (const line of decoder.decode(chunk).split(/\r?\n/)) {
        stdout.push(line)
        console.info(`[ ${cmd[0]} ] ${line}`)
      }
    },
  }))
  streams.readableStreamFromReader(p.stderr).pipeTo(new WritableStream({
    write(chunk) {
      for (const line of decoder.decode(chunk).split(/\r?\n/)) {
        stderr.push(line)
        console.error(`[ ${cmd[0]} ] ${line}`)
      }
    },
  }))
  const status = await p.status()
  if (!status.success) {
    throw new Error(`[ ${cmd[0]} ] failed with exit code ${status.code}`)
  }
  return {
    status,
    stdout,
    stderr,
  }
}
If you don't have different logic for each writable stream, you can also combine them into one:
streams.mergeReadableStreams(
  streams.readableStreamFromReader(p.stdout),
  streams.readableStreamFromReader(p.stderr),
).pipeTo(new WritableStream({
  write(chunk): void {
    for (const line of decoder.decode(chunk).split(/\r?\n/)) {
      console.error(`[ ${cmd[0]} ] ${line}`)
    }
  },
}))
Alternatively, you can also invoke a shell command via a task runner such as drake, as below:
import { desc, run, task, sh } from "https://deno.land/x/drake@v1.5.0/mod.ts";
desc("Minimal Drake task");
task("hello", [], async function () {
  console.log("Hello World!");
  await sh("deno run --allow-env src/main.ts");
});
run();
$ deno run -A drakefile.ts hello
This is really a follow-on question from:
Dart: How to pass data from one process to another via streams
I am using Dart to spawn two processes.
Let's call these two processes 'lhs' and 'rhs'
(lhs - left hand side and rhs - right hand side).
The first process (lhs) writes to stdout and stderr.
I need to pipe all the data from the first process (lhs) to stdin of the second process (rhs).
In the above-noted Stack Overflow question, the answer was to use the 'pipe' method to stream data from lhs.stdout to rhs.stdin.
Given I now want to write data from both of lhs' streams (stdout and stderr), the pipe method doesn't work: it only supports a single stream, and you can't call pipe twice on rhs' stdin (an error is thrown stating, correctly, that you can't call addStream twice).
So I've tried the following code, which seems to partially work, but I only see the first character from lhs' stderr stream and then everything completes (the onDone methods are called on both of lhs' streams).
Some details to help understand what is going on here:
In the below code, 'lhs' is a call to 'dart --version'. When dart writes out its version string, it writes it to stderr, with nothing being written to stdout.
I use 'head' as the second process, 'rhs'. Its job is simply to receive the combined output of stdout and stderr from lhs and print it to the console.
The output from a run of the below code is:
lhs exitCode=0
done
listen: stderr
listen: stderr written
done err
import 'dart:async';
import 'dart:cli';
import 'dart:io';
Future<void> main() async {
  var dart = start('dart', ['--version']);
  var head = start('head', ['-n', '5']);

  pipeTo(dart, head);
}
void pipeTo(Future<Process> lhs, Future<Process> rhs) {
  var complete = Completer<void>();

  // wait for the lhs and rhs processes to
  // start and then start piping lhs
  // output to the rhs input.
  lhs.then((lhsProcess) {
    rhs.then((rhsProcess) {
      // write stdout from lhs to stdin of rhs
      lhsProcess.stdout.listen((datum) {
        print('listen');
        rhsProcess.stdin.add(datum);
        print('listen written');
      }, onDone: () {
        print('done');
        complete.complete();
      }, onError: (Object e, StackTrace s) => print('onError $e'),
          cancelOnError: true);

      // write stderr from lhs to stdin of rhs.
      lhsProcess.stderr.listen((datum) {
        print('listen: stderr');
        rhsProcess.stdin.add(datum);
        print('listen: stderr written');
      }, onDone: () {
        print('done err');
        if (!complete.isCompleted) complete.complete();
      }, onError: (Object e, StackTrace s) => print('onError $e'),
          cancelOnError: true);

      lhsProcess.exitCode.then((exitCode) {
        print('lhs exitCode=$exitCode');
      });

      rhsProcess.exitCode.then((exitCode) {
        print('rhs exitCode=$exitCode');
      });
    });
  });
  waitFor(complete.future);
}
Future<Process> start(String command, List<String> args) async {
  var process = Process.start(
    command,
    args,
  );
  return process;
}
I think your solution of using listen() to forward events to rhsProcess.stdin will work. You could clean it up by using await in place of the then() callbacks and removing the Completer. You could make pipeTo() return a Future<void> and alternatively use waitFor(pipeTo(...)) if desired.
Here's a condensed example:
import 'dart:async';
import 'dart:convert';
import 'dart:io';
void main() {
  var dart = Process.start('dart', ['--version']);
  var head = Process.start('head', ['-n', '1']);
  pipeTo(dart, head);
}
Future<void> pipeTo(Future<Process> lhs, Future<Process> rhs) async {
  var lhsProcess = await lhs;
  var rhsProcess = await rhs;

  lhsProcess.stdout.listen(rhsProcess.stdin.add);
  lhsProcess.stderr.listen(rhsProcess.stdin.add);
  rhsProcess.stdout.transform(utf8.decoder).listen(print);
}
Note: I pass -n 1 to head otherwise the program never exits.
Here is a fuller version that also closes rhs' stdin when lhs is done and suppresses the resulting broken pipe error:
Future<void> main() async {
  var dart = start('cat', ['/var/log/syslog']);
  var head = start('head', ['-n', '5']);

  await pipeTo2(dart, head);
}
Future<void> pipeTo2(Future<Process> lhs, Future<Process> rhs) async {
  // wait for both processes to start
  var lhsProcess = await lhs;
  var rhsProcess = await rhs;

  // send the lhs stdout to the rhs stdin
  lhsProcess.stdout.listen(rhsProcess.stdin.add);

  // send the lhs stderr to the rhs stdin
  lhsProcess.stderr.listen(rhsProcess.stdin.add).onDone(() {
    rhsProcess.stdin.close();
  });

  // send the rhs stdout and stderr to the console
  rhsProcess.stdout.listen(stdout.add);
  rhsProcess.stderr.listen(stderr.add);

  // wait for rhs to finish.
  // if it finishes before the lhs does we will get
  // a broken pipe error which is fine so we can
  // suppress it.
  await rhsProcess.stdin.done.catchError(
    (Object e) {
      // forget broken pipe after rhs terminates before lhs
    },
    test: (e) => e is SocketException && e.osError.message == 'Broken pipe',
  );
}
In the case where you have to wait for the first process to end (you should be able to adapt this easily), I use something like this:
import 'dart:convert';
import 'dart:io';
Future<void> main() async {
  var result = await Process.run('dart', ['--version']);
  if (result.exitCode != 0) {
    throw StateError(result.stderr);
  }

  var process = await Process.start('head', ['-n', '1']);

  var buffer = StringBuffer();
  var errBuffer = StringBuffer();

  // ignore: unawaited_futures
  process.stdout
      .transform(utf8.decoder)
      .forEach((_value) => buffer.write(_value));
  // ignore: unawaited_futures
  process.stderr
      .transform(utf8.decoder)
      .forEach((_value) => errBuffer.write(_value));

  process.stdin.writeln('${result.stdout}${result.stderr}');

  var exitCode = await process.exitCode;
  if (exitCode != 0) {
    throw StateError('$exitCode - $errBuffer');
  }

  print('$buffer');
}
This Firebase function should take a PDF in /test/testfile.pdf, convert it to grayscale, and save it somewhere. I want to use this function in a more complicated process, but the exec('convert') is really not helping me.
The issue is that the 'exec' command keeps failing. In the shell, the exact command line you see here works:
convert -colorspace GRAY -density 300 test/testfile.pdf /tmp/out.pdf
The error in the logs is this:
{ ChildProcessError: Command failed: convert -colorspace GRAY -density 300 test/testfile.pdf /tmp/out.pdf
convert: no images defined `/tmp/out.pdf' @ error/convert.c/ConvertImageCommand/3210.
`convert -colorspace GRAY -density 300 test/testfile.pdf /tmp/out.pdf` (exited with error code 1)
    at callback (/user_code/node_modules/child-process-promise/lib/index.js:33:27)
    at ChildProcess.exithandler (child_process.js:205:5)
    at emitTwo (events.js:106:13)
    at ChildProcess.emit (events.js:191:7)
    at maybeClose (internal/child_process.js:920:16)
    at Process.ChildProcess._handle.onexit (internal/child_process.js:230:5)
  name: 'ChildProcessError',
  code: 1,
  childProcess:
   { ChildProcess: { [Function: ChildProcess] super_: [Object] },
     fork: [Function],
     _forkChild: [Function],
     exec: [Function],
     execFile: [Function],
     spawn: [Function],
     spawnSync: [Function: spawnSync],
     execFileSync: [Function: execFileSync],
     execSync: [Function: execSync] },
  stdout: '',
  stderr: 'convert: no images defined `/tmp/out.pdf\' @ error/convert.c/ConvertImageCommand/3210.\n' }
This is the function:
const functions = require('firebase-functions');
const rp = require('request-promise');
const request = require('request');
const baseURL = "https://www.google.com/cloudprint/"
const exec = require('child-process-promise').exec;
const mkdirp = require('mkdirp-promise');
const path = require('path');
const os = require('os');
const fs = require('fs');
exports.convertPDF = functions.https.onRequest((req, res) => {
  const tempLocalThumbFile = path.join(os.tmpdir(), "out.pdf");
  try {
    let tempLocalFile = "test/testfile.pdf"
    exec('convert -colorspace GRAY -density 300 ' + tempLocalFile + ' ' + tempLocalThumbFile).then((a) => {
      console.log('Conversion created at', tempLocalThumbFile);
    }, function (err) {
      console.log(err)
    })
  } catch (err) {
    console.log(err)
  }
})
I am pretty stuck. How can I get convert to work in Firebase Functions?
Actually the problem is what @Nivco stated: Cloud Functions are missing the ghostscript package. There is an open feature request asking for the ghostscript package to be made available. You can go to the link and click on the star icon to get email notifications when there is news.
There is another Stack Overflow thread where a workaround is mentioned, consisting of getting the gs binaries on your own.
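A sketch of that workaround, assuming you vendor your own gs build at functions/bin/gs and deploy it alongside the function; the bin directory and the binary itself are assumptions, not something Firebase provides:

const path = require('path');
const exec = require('child-process-promise').exec;

// Prepend the vendored binary directory to PATH so ImageMagick's `convert`
// can find `gs` when it delegates PDF decoding to Ghostscript.
const binDir = path.join(__dirname, 'bin'); // hypothetical: contains your own `gs` build
const env = Object.assign({}, process.env, { PATH: binDir + ':' + process.env.PATH });

exec('convert -colorspace GRAY -density 300 test/testfile.pdf /tmp/out.pdf', { env })
  .then(() => console.log('Conversion created at /tmp/out.pdf'))
  .catch((err) => console.log(err));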