How do I run an arbitrary shell command from Deno?

I want to run any arbitrary bash command from Deno, like I would with a child_process in Node. Is that possible in Deno?

In order to run a shell command, you have to use Deno.run, which requires the --allow-run permission.
There's an ongoing discussion about requiring --allow-all instead for running a subprocess.
The following will output to stdout.
// --allow-run
const process = Deno.run({
  cmd: ["echo", "hello world"],
});

// Close to release Deno's resources associated with the process.
// The process will continue to run after close(). To wait for it to
// finish, `await process.status()` or `await process.output()`.
process.close();
If you want to store the output, you'll have to set stdout/stderr to "piped":
const process = Deno.run({
  cmd: ["echo", "hello world"],
  stdout: "piped",
  stderr: "piped",
});

const output = await process.output(); // "piped" must be set
const outStr = new TextDecoder().decode(output);

/*
const error = await process.stderrOutput();
const errorStr = new TextDecoder().decode(error);
*/

process.close();
Deno 1.28.0 added a new API (unstable) to run a shell command: Deno.Command
const cmd = new Deno.Command("echo", { args: ["hello world"] });
const { stdout, stderr } = await cmd.output();
// stdout & stderr are Uint8Arrays
console.log(new TextDecoder().decode(stdout)); // hello world
More advanced usage:
const c = new Deno.Command("cat", { stdin: "piped", stdout: "piped" });
const child = c.spawn();

// open a file and pipe the output of the `cat` program to the file
// (stdout defaults to "inherit" with spawn(), hence "piped" above)
const file = await Deno.open("output.txt", { write: true, create: true });
const piping = child.stdout.pipeTo(file.writable);

// write to the program's stdin, then close it so `cat` can finish
const stdin = child.stdin.getWriter();
await stdin.write(new TextEncoder().encode("foobar"));
await stdin.close();

await piping;
const s = await child.status;
console.log(s);
The --unstable flag was required to use Deno.Command until the API was stabilized in Deno 1.31.
This API is the replacement for Deno.run.
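If you need a genuinely arbitrary shell command (pipes, redirection, globs) rather than a single executable, one option is to hand the command string to the shell itself. A minimal sketch: the sh -c invocation is standard, but the pipeline string here is purely illustrative.
// Let `sh` interpret an arbitrary command string.
// Caution: the string is evaluated by the shell, so only pass trusted input.
const shell = new Deno.Command("sh", {
  args: ["-c", "ls -la | wc -l"], // illustrative pipeline
});
const { code, stdout } = await shell.output();
console.log("exit code:", code);
console.log(new TextDecoder().decode(stdout));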

Make sure to await the status or output of the child process created with Deno.run.
Otherwise, the process might be killed before having executed any code. For example:
deno run --allow-run main.ts
main.ts:
const p = Deno.run({
  cmd: ["deno", "run", "--allow-write", "child.ts"],
});

const { code } = await p.status(); // (*1); wait here for child to finish
p.close();
child.ts:
// If we don't wait at (*1), no file is written after the 3 sec delay
setTimeout(async () => {
  await Deno.writeTextFile("file.txt", "Some content here");
  console.log("finished!");
}, 3000);
Pass arguments via stdin / stdout:
main.ts:
const p = Deno.run({
  cmd: ["deno", "run", "--allow-write", "child.ts"],
  // Enable pipe between processes
  stdin: "piped",
  stdout: "piped",
  stderr: "piped",
});

if (!p.stdin) throw Error();

// pass input to child
await p.stdin.write(new TextEncoder().encode("foo"));
await p.stdin.close();

const { code } = await p.status();
if (code === 0) {
  const rawOutput = await p.output();
  await Deno.stdout.write(rawOutput); // could do some processing with output
} else { /* error */ }
child.ts:
// convenient wrapper; readLines lived in io/bufio.ts in std releases of this era
import { readLines } from "https://deno.land/std@0.104.0/io/bufio.ts";

// read given input argument
let args = "";
for await (const line of readLines(Deno.stdin)) {
  args += line;
}

setTimeout(async () => {
  await Deno.writeTextFile("file.txt", `Some content here with ${args}`);
  console.log(`${args} finished!`); // prints "foo finished!"
}, 3000);
There is also a good example resource in the Deno docs.
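For completeness, here is a hedged sketch of the same parent-side stdin/stdout pattern using the newer Deno.Command API (assuming a Deno version that has it; child.ts is unchanged):
const command = new Deno.Command("deno", {
  args: ["run", "--allow-write", "child.ts"],
  stdin: "piped",
  stdout: "piped",
});
const child = command.spawn();

// pass input to the child, then close stdin so the child's read loop ends
const writer = child.stdin.getWriter();
await writer.write(new TextEncoder().encode("foo"));
await writer.close();

// output() waits for the child to exit and collects its piped stdout
const { code, stdout } = await child.output();
if (code === 0) {
  await Deno.stdout.write(stdout);
}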

You can do that with Deno.run like this:
// myscript.js
Deno.run({
  cmd: ["echo", "hello world"],
});
You'll have to pass --allow-run when running the script in order for this to work:
deno run --allow-run ./myscript.js

If your shell command prints some messages before the process ends, you really want to pipe its stdout and stderr to your own streams, and also throw an exception you can catch.
You can even alter the output while piping the process streams to your own streams:
// std's streams module provides the helpers used below; the pinned version
// is assumed to be one of the releases that exports them
import * as streams from "https://deno.land/std@0.160.0/streams/mod.ts";

async function run(cwd, ...cmd) {
  const stdout = [];
  const stderr = [];
  cwd = cwd || Deno.cwd();
  const p = Deno.run({
    cmd,
    cwd,
    stdout: "piped",
    stderr: "piped",
  });
  console.debug(`$ ${cmd.join(" ")}`);
  const decoder = new TextDecoder();
  streams.readableStreamFromReader(p.stdout).pipeTo(
    new WritableStream({
      write(chunk) {
        for (const line of decoder.decode(chunk).split(/\r?\n/)) {
          stdout.push(line);
          console.info(`[ ${cmd[0]} ] ${line}`);
        }
      },
    }),
  );
  streams.readableStreamFromReader(p.stderr).pipeTo(
    new WritableStream({
      write(chunk) {
        for (const line of decoder.decode(chunk).split(/\r?\n/)) {
          stderr.push(line);
          console.error(`[ ${cmd[0]} ] ${line}`);
        }
      },
    }),
  );
  const status = await p.status();
  if (!status.success) {
    throw new Error(`[ ${cmd[0]} ] failed with exit code ${status.code}`);
  }
  return {
    status,
    stdout,
    stderr,
  };
}
If you don't have different logic for each writable stream, you can also merge them into one:
streams.mergeReadableStreams(
  streams.readableStreamFromReader(p.stdout),
  streams.readableStreamFromReader(p.stderr),
).pipeTo(
  new WritableStream({
    write(chunk) {
      for (const line of decoder.decode(chunk).split(/\r?\n/)) {
        console.error(`[ ${cmd[0]} ] ${line}`);
      }
    },
  }),
);
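A hypothetical invocation of the run() helper above; the command and working directory are purely illustrative:
const { status, stdout } = await run(Deno.cwd(), "git", "status", "--short");
console.log(`exit code ${status.code}; captured ${stdout.length} lines`);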

Alternatively, you can also invoke shell commands via a task runner such as drake, as below:
import { desc, run, sh, task } from "https://deno.land/x/drake@v1.5.0/mod.ts";

desc("Minimal Drake task");
task("hello", [], async function () {
  console.log("Hello World!");
  await sh("deno run --allow-env src/main.ts");
});

run();
$ deno run -A drakefile.ts hello

Related

Does Deno's oak wait for all in-flight requests to drain before returning its listen promise?

I'm setting up a new web project using Deno and oak.
I've passed an AbortSignal into the listen call and I'm listening for a SIGTERM from the OS and calling abort, in case this is not built-in behaviour.
Similar to setups described here: Deno and Docker how to listen for the SIGTERM signal and close the server
Question: Upon abort, will the await listen(...) call return immediately or after all remaining requests have completed?
If not, then I guess I will need to accurately count concurrent requests using Atomics and wait until that counter drops to zero before ending the process.
Rather than rely on second-hand information from someone else (which might not be correct), why not just do a test and find out for yourself (or review the source code)?
Here's a reproducible example which indicates that, when using Deno 1.28.2 with oak v11.1.0, the server gracefully shuts down: it still responds to a pending request even after the AbortSignal is aborted:
so-74600368.ts:
import {
  Application,
  type Context,
} from "https://deno.land/x/oak@v11.1.0/mod.ts";
import { delay } from "https://deno.land/std@0.166.0/async/delay.ts";

async function sendRequestAndLogResponseText(): Promise<void> {
  try {
    const response = await fetch("http://localhost:8000/");
    if (!response.ok) {
      throw new Error(`Response not OK (Status code: ${response.status})`);
    }
    const text = await response.text();
    console.log(performance.now(), text);
  } catch (ex) {
    console.error(ex);
  }
}

async function sendSequentialRequests(numOfRequests: number): Promise<void> {
  for (let i = 0; i < numOfRequests; i += 1) {
    await sendRequestAndLogResponseText();
  }
}

function printStartupMessage({ hostname, port, secure }: {
  hostname: string;
  port: number;
  secure?: boolean;
}): void {
  if (!hostname || hostname === "0.0.0.0") hostname = "localhost";
  const address =
    new URL(`http${secure ? "s" : ""}://${hostname}:${port}/`).href;
  console.log(`Listening at ${address}`);
  console.log("Use ctrl+c to stop");
}

async function main() {
  const log = new Map<Context, boolean>();

  const controller = new AbortController();
  controller.signal.addEventListener("abort", () => {
    console.log(performance.now(), "Abort method invoked");
  });

  const app = new Application();

  app.use(async (ctx) => {
    log.set(ctx, false);
    if (log.size > 2) {
      console.log(performance.now(), "Aborting");
      controller.abort(new Error("Received third request. Aborting now."));
    }
    // A bit of artificial delay, to ensure that no unaccounted-for latency
    // might cause a non-deterministic/unexpected result:
    await delay(300);
    ctx.response.body = `Response OK: (#${log.size})`;
    log.set(ctx, true);
  });

  app.addEventListener("listen", (ev) => {
    console.log(performance.now(), "Server starting");
    printStartupMessage(ev);
  });

  const listenerPromise = app.listen({
    hostname: "localhost",
    port: 8000,
    signal: controller.signal,
  })
    .then(() => {
      console.log(performance.now(), "Server stopped");
      return { type: "server", ok: true };
    })
    .catch((reason) => ({ type: "server", ok: false, reason }));

  const requestsPromise = sendSequentialRequests(3)
    .then(() => {
      console.log(performance.now(), "All responses OK");
      return { type: "requests", ok: true };
    })
    .catch((reason) => ({ type: "requests", ok: false, reason }));

  const results = await Promise.allSettled([listenerPromise, requestsPromise]);
  for (const result of results) console.log(result);

  const allResponsesSent = [...log.values()].every(Boolean);
  console.log({ allResponsesSent });
}

if (import.meta.main) main();
% deno --version
deno 1.28.2 (release, x86_64-apple-darwin)
v8 10.9.194.1
typescript 4.8.3
% deno run --allow-net=localhost so-74600368.ts
62 Server starting
Listening at http://127.0.0.1:8000/
Use ctrl+c to stop
378 Response OK: (#1)
682 Response OK: (#2)
682 Aborting
682 Abort method invoked
990 Server stopped
992 Response OK: (#3)
992 All responses OK
{ status: "fulfilled", value: { type: "server", ok: true } }
{ status: "fulfilled", value: { type: "requests", ok: true } }
{ allResponsesSent: true }
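Given that result, the shutdown wiring described in the question could look like the sketch below. Deno.addSignalListener and oak's Application are real APIs; the port and handler are illustrative (and note that SIGTERM listeners are not available on Windows):
import { Application } from "https://deno.land/x/oak@v11.1.0/mod.ts";

const controller = new AbortController();
Deno.addSignalListener("SIGTERM", () => controller.abort());

const app = new Application();
app.use((ctx) => {
  ctx.response.body = "OK";
});

await app.listen({ port: 8000, signal: controller.signal });
// Per the experiment above, listen() resolves only after pending
// requests have been answered, so it should be safe to exit here.
console.log("Server drained; exiting");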

Deno connection close event

How to detect in Deno that remote has closed (aborted) the TCP/IP (HTTP) connection?
const server = Deno.listen({ port: 8080 });

for await (const conn of server) {
  conn.on('abort', () => { // <-- API I expect but doesn't exist
    // ...
  });

  const httpConn = Deno.serveHttp(conn);
  for await (const requestEvent of httpConn) {
    //
  }
}
While Deno does not provide an API for knowing when a connection has been closed, the most reliable way to detect closure is to attempt to write to the connection, which throws an error if it is closed.
The following snippet, which periodically attempts a zero-length write, should solve your issue:
const server = Deno.listen({ port: 8080 });

for await (const conn of server) {
  const httpConn = Deno.serveHttp(conn);
  for await (const requestEvent of httpConn) {
    let interval;
    const stream = new ReadableStream({
      start(controller) {
        interval = setInterval(
          // attempt to write a 0-length buffer; it will fail if the
          // connection is closed
          () => controller.enqueue(new Uint8Array(0)),
          500, // tune interval depending on your needs
        );
      },
      async pull(controller) {
        /*
        const result = await someComputation();
        // in case you want to return some response
        controller.enqueue(result);
        // cleanup
        clearInterval(interval);
        controller.close();
        */
      },
    });
    requestEvent.respondWith(new Response(stream))
      .catch((err) => {
        clearInterval(interval);
        // check for <connection closed> error
        if (err.message.includes('connection closed before message completed')) {
          // stop your operation
          console.log('connection closed');
        }
      });
  }
}
The error logic can also be added to the underlying source's cancel method:
const stream = new ReadableStream({
  start(controller) {
    // ..
  },
  async pull(controller) {
    // ...
  },
  cancel(reason) {
    clearInterval(interval);
    if (reason && reason.message.includes('connection closed before message completed')) {
      // stop your operation
      console.log('connection closed');
    }
  },
});
AFAIK there's not an event-oriented API, but when the connection's ReadableStream closes, you'll know that the connection has closed. This will also be reflected in Deno's internal resource map. Consider the following self-contained example:
A TCP listener is started and is closed after 500ms. While it is open, three connections are created and closed (one every 100ms). When each connection is established:
The current TCP entries from Deno's resource map are printed to the console.
A reader is acquired on the connection's readable stream and a read is performed. Because the connection is closed from the client without any data being written, the first read is the final read (reflected in the read result's done property being true).
The reader's lock on the stream is released. The stream is closed.
The current TCP entries from Deno's resource map are printed to the console. Note that none appear at this point.
so-74228364.ts:
import { delay } from "https://deno.land/std@0.161.0/async/delay.ts";

function getTCPConnectionResources() {
  return Object.fromEntries(
    Object.entries(Deno.resources()).filter(([, type]) => type === "tcpStream"),
  );
}

async function startServer(options: Deno.ListenOptions, signal: AbortSignal) {
  const listener = Deno.listen(options);
  signal.addEventListener("abort", () => listener.close());

  for await (const conn of listener) {
    console.log("Resources after open:", getTCPConnectionResources());
    const reader = conn.readable.getReader();
    reader.read()
      .then(({ done }) => console.log({ done }))
      .then(() => {
        reader.releaseLock();
        console.log("Resources after final read:", getTCPConnectionResources());
      });
  }
}

const controller = new AbortController();
delay(500).then(() => controller.abort());

const options: Deno.ListenOptions = {
  hostname: "localhost",
  port: 8080,
};

startServer(options, controller.signal);

for (let i = 0; i < 3; i += 1) {
  await delay(100);
  (await Deno.connect(options)).close();
}
% deno --version
deno 1.27.0 (release, x86_64-apple-darwin)
v8 10.8.168.4
typescript 4.8.3
% deno run --allow-net=localhost so-74228364.ts
Resources after open: { "7": "tcpStream" }
{ done: true }
Resources after final read: {}
Resources after open: { "10": "tcpStream" }
{ done: true }
Resources after final read: {}
Resources after open: { "13": "tcpStream" }
{ done: true }
Resources after final read: {}

Stream stdout from subprocess

I'm running a Docker build as a subprocess in Deno and would like the stdout streamed to the parent stdout (Deno.stdout) so it's outputted straight away.
How can I achieve this?
Currently I have the following but it doesn't output anything until the subprocess has finished.
const p = Deno.run({
  cmd: ['docker', 'build', '.'],
  stdout: 'piped',
});
const stdout = await p.output();
await Deno.stdout.write(stdout);
You're not far off; you just need to start piping the output before you await the process. There are some other optimizations you can make, like using std's copy (the successor to the old Deno.copy) to pipe the subprocess's output to the main process's stdout without buffering everything in memory, for example:
import { copy } from "https://deno.land/std@0.104.0/io/util.ts";

const cat = Deno.run({
  cmd: ["docker", "build", "--no-cache", ".", "-t", "foobar:latest"],
  cwd: "/path/to/your/project",
  stdout: "piped",
  stderr: "piped",
});

copy(cat.stdout, Deno.stdout);
copy(cat.stderr, Deno.stderr);

await cat.status();
console.log("Done!");
If you want to prefix each line with the name of the process it came from (useful when you have multiple subprocesses running), you can write a simple function that uses the std lib's readLines function and a text encoder to do that:
import { readLines } from "https://deno.land/std@0.104.0/io/mod.ts";
import { writeAll } from "https://deno.land/std@0.104.0/io/util.ts";

async function pipeThrough(
  prefix: string,
  reader: Deno.Reader,
  writer: Deno.Writer,
) {
  const encoder = new TextEncoder();
  for await (const line of readLines(reader)) {
    await writeAll(writer, encoder.encode(`[${prefix}] ${line}\n`));
  }
}

const cat = Deno.run({
  cmd: ["docker", "build", "--no-cache", ".", "-t", "foobar:latest"],
  cwd: "/path/to/your/project",
  stdout: "piped",
  stderr: "piped",
});

pipeThrough("docker", cat.stdout, Deno.stdout);
pipeThrough("docker", cat.stderr, Deno.stderr);

await cat.status();
console.log("Done!");
The p stands for process; Deno.run returns a process handle, and awaiting p.status() gives you the process's exit state (not its stdout):
console.log(await p.status());
// { success: true, code: 0 }
Since you are awaiting the status, stdout will not stream any output until the process has finished.
Try using the process like this:
const p = Deno.run({ cmd, stderr: 'piped', stdout: 'piped' });

const [status, stdout, stderr] = await Promise.all([
  p.status(),
  p.output(),
  p.stderrOutput(),
]);

console.log(new TextDecoder().decode(stdout)); // since it's returned as a Uint8Array
p.close();
But it'll still wait until the subprocess is done.
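For what it's worth, the newer Deno.Command API makes live streaming more direct, because the child's stdout/stderr are web streams. A sketch, assuming a Deno version that has Deno.Command:
const command = new Deno.Command("docker", {
  args: ["build", "."],
  stdout: "piped",
  stderr: "piped",
});
const child = command.spawn();

// Pipe both streams to the parent as data arrives; preventClose keeps
// Deno.stdout / Deno.stderr usable after the subprocess exits.
const [status] = await Promise.all([
  child.status,
  child.stdout.pipeTo(Deno.stdout.writable, { preventClose: true }),
  child.stderr.pipeTo(Deno.stderr.writable, { preventClose: true }),
]);
console.log("Done!", status);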

dart pass data from stdin and stderr to stdin of another process

This is really a follow-on question from:
Dart: How to pass data from one process to another via streams
I am using Dart to spawn two processes.
Let's call these two processes 'lhs' and 'rhs'
(lhs: left-hand side, rhs: right-hand side).
The first process (lhs) writes to stdout and stderr.
I need to pipe all the data from the first process (lhs) to stdin of the second process (rhs).
In the above-noted Stack Overflow question, the answer was to use the 'pipe' method to stream data from lhs.stdout to rhs.stdin.
Given that I now want to write data from both of lhs' streams (stdout and stderr), the pipe method doesn't work: it only supports a single stream, and you can't call pipe twice on rhs' stdin (an error is thrown stating, correctly, that you can't call addStream twice).
So I've tried the following code, which seems to partially work, but I only see the first character from lhs' stderr stream and then everything completes (the onDone methods are called on both of lhs' streams).
Some detail to help understand what is going on here:
In the code below, 'lhs' is a call to 'dart --version'. When dart writes out its version string, it writes it to stderr, with nothing being written to stdout.
I use 'head' as the second process, 'rhs'. Its job is simply to receive the combined output of stdout and stderr from lhs and print it to the console.
The output from a run of the below code is:
lhs exitCode=0
done
listen: stderr
listen: stderr written
done err
import 'dart:async';
import 'dart:cli';
import 'dart:io';

Future<void> main() async {
  var dart = start('dart', ['--version']);
  var head = start('head', ['-n', '5']);

  pipeTo(dart, head);
}

void pipeTo(Future<Process> lhs, Future<Process> rhs) {
  var complete = Completer<void>();

  // wait for the lhs and rhs processes to
  // start and then start piping lhs
  // output to the rhs input.
  lhs.then((lhsProcess) {
    rhs.then((rhsProcess) {
      // write stdout from lhs to stdin of rhs
      lhsProcess.stdout.listen(
        (datum) {
          print('listen');
          rhsProcess.stdin.add(datum);
          print('listen written');
        },
        onDone: () {
          print('done');
          complete.complete();
        },
        onError: (Object e, StackTrace s) => print('onError $e'),
        cancelOnError: true,
      );

      // write stderr from lhs to stdin of rhs.
      lhsProcess.stderr.listen(
        (datum) {
          print('listen: stderr');
          rhsProcess.stdin.add(datum);
          print('listen: stderr written');
        },
        onDone: () {
          print('done err');
          if (!complete.isCompleted) complete.complete();
        },
        onError: (Object e, StackTrace s) => print('onError $e'),
        cancelOnError: true,
      );

      lhsProcess.exitCode.then((exitCode) {
        print('lhs exitCode=$exitCode');
      });

      rhsProcess.exitCode.then((exitCode) {
        print('rhs exitCode=$exitCode');
      });
    });
  });

  waitFor(complete.future);
}

Future<Process> start(String command, List<String> args) async {
  var process = Process.start(
    command,
    args,
  );
  return process;
}
I think your solution of using listen() to forward events to rhsProcess.stdin will work. You could clean it up by using await in place of the then() callbacks and remove the Completer. You could make pipeTo() return a Future<void> and alternatively use waitFor(pipeTo()) if desired.
Here's a condensed example:
import 'dart:async';
import 'dart:convert';
import 'dart:io';

void main() {
  var dart = Process.start('dart', ['--version']);
  var head = Process.start('head', ['-n', '1']);

  pipeTo(dart, head);
}

Future<void> pipeTo(Future<Process> lhs, Future<Process> rhs) async {
  var lhsProcess = await lhs;
  var rhsProcess = await rhs;

  lhsProcess.stdout.listen(rhsProcess.stdin.add);
  lhsProcess.stderr.listen(rhsProcess.stdin.add);
  rhsProcess.stdout.transform(utf8.decoder).listen(print);
}
Note: I pass -n 1 to head otherwise the program never exits.
A fuller variant of the same approach, which also closes rhs' stdin once lhs' stderr is done, forwards rhs' output to the console, and suppresses the broken-pipe error that occurs when rhs finishes first:
Future<void> main() async {
  var dart = start('cat', ['/var/log/syslog']);
  var head = start('head', ['-n', '5']);

  await pipeTo2(dart, head);
}

Future<void> pipeTo2(Future<Process> lhs, Future<Process> rhs) async {
  // wait for both processes to start
  var lhsProcess = await lhs;
  var rhsProcess = await rhs;

  // send the lhs stdout to the rhs stdin
  lhsProcess.stdout.listen(rhsProcess.stdin.add);

  // send the lhs stderr to the rhs stdin
  lhsProcess.stderr.listen(rhsProcess.stdin.add).onDone(() {
    rhsProcess.stdin.close();
  });

  // send the rhs stdout and stderr to the console
  rhsProcess.stdout.listen(stdout.add);
  rhsProcess.stderr.listen(stderr.add);

  // wait for rhs to finish.
  // if it finishes before the lhs does we will get
  // a broken pipe error, which is fine, so we can
  // suppress it.
  await rhsProcess.stdin.done.catchError(
    (Object e) {
      // ignore broken pipe: rhs terminated before lhs
    },
    test: (e) => e is SocketException && e.osError.message == 'Broken pipe',
  );
}
In the case where you have to wait for the first process to end (you should be able to adapt this easily), I use something like this:
import 'dart:convert';
import 'dart:io';

Future<void> main() async {
  var result = await Process.run('dart', ['--version']);
  if (result.exitCode != 0) {
    throw StateError(result.stderr);
  }

  var process = await Process.start('head', ['-n', '1']);

  var buffer = StringBuffer();
  var errBuffer = StringBuffer();

  // ignore: unawaited_futures
  process.stdout
      .transform(utf8.decoder)
      .forEach((_value) => buffer.write(_value));
  // ignore: unawaited_futures
  process.stderr
      .transform(utf8.decoder)
      .forEach((_value) => errBuffer.write(_value));

  process.stdin.writeln('${result.stdout}${result.stderr}');

  var exitCode = await process.exitCode;
  if (exitCode != 0) {
    throw StateError('$exitCode - $errBuffer');
  }

  print('$buffer');
}

How can I upload an FTP file to firebase storage using Cloud Functions for Firebase?

Within the same Firebase project, using a Cloud Function (written in Node.js), I first download an FTP file (using the npm ftp module) and then try to upload it into Firebase Storage.
Every attempt has failed so far, and the documentation doesn't help... any expert advice/tips would be greatly appreciated.
The following code uses two different approaches: fs.createWriteStream() and bucket.file().createWriteStream(). Both failed, but for different reasons (see the error messages in the code).
'use strict'

// [START import]
let admin = require('firebase-admin')
let functions = require('firebase-functions')
const gcpStorage = require('@google-cloud/storage')()
admin.initializeApp(functions.config().firebase)
var FtpClient = require('ftp')
var fs = require('fs')
// [END import]

// [START Configs]
// Firebase Storage is configured with the following rules and grants read/write access to everyone
/*
service firebase.storage {
  match /b/{bucket}/o {
    match /{allPaths=**} {
      allow read, write;
    }
  }
}
*/

// Replace this with your project id; it will be used by: const bucket = gcpStorage.bucket(firebaseProjectID)
const firebaseProjectID = 'your_project_id'

// Public FTP server; uploaded files are removed after 48 hours! Upload new ones when needed for testing
const CONFIG = {
  test_ftp: {
    source_path: '/48_hour',
    ftp: {
      host: 'ftp.uconn.edu'
    }
  }
}
const SOURCE_FTP = CONFIG.test_ftp
// [END Configs]

// [START saveFTPFileWithFSCreateWriteStream]
function saveFTPFileWithFSCreateWriteStream(file_name) {
  const ftpSource = new FtpClient()
  ftpSource.on('ready', function() {
    ftpSource.get(SOURCE_FTP.source_path + '/' + file_name, function(err, stream) {
      if (err) throw err
      stream.once('close', function() { ftpSource.end() })
      stream.pipe(fs.createWriteStream(file_name))
      console.log('File downloaded: ', file_name)
    })
  })
  ftpSource.connect(SOURCE_FTP.ftp)
}
// This fails with the following error in the Firebase console:
// Error: EROFS: read-only file system, open '20170601.tar.gz' at Error (native)
// [END saveFTPFileWithFSCreateWriteStream]

// [START saveFTPFileWithBucketUpload]
function saveFTPFileWithBucketUpload(file_name) {
  const bucket = gcpStorage.bucket(firebaseProjectID)
  const file = bucket.file(file_name)
  const ftpSource = new FtpClient()
  ftpSource.on('ready', function() {
    ftpSource.get(SOURCE_FTP.source_path + '/' + file_name, function(err, stream) {
      if (err) throw err
      stream.once('close', function() { ftpSource.end() })
      stream.pipe(file.createWriteStream())
      console.log('File downloaded: ', file_name)
    })
  })
  ftpSource.connect(SOURCE_FTP.ftp)
}
// [END saveFTPFileWithBucketUpload]

// [START database triggers]
// Listens for new triggers added to /ftp_fs_triggers/:pushId and calls the saveFTPFileWithFSCreateWriteStream
// function to save the file in the default project storage bucket
exports.dbTriggersFSCreateWriteStream = functions.database
  .ref('/ftp_fs_triggers/{pushId}')
  .onWrite(event => {
    const trigger = event.data.val()
    const fileName = trigger.file_name // e.g.: trigger.file_name = '20170601.tar.gz'
    return saveFTPFileWithFSCreateWriteStream(trigger.file_name)
    // This fails with the following error in the Firebase console:
    // Error: EROFS: read-only file system, open '20170601.tar.gz' at Error (native)
  })

// Listens for new triggers added to /ftp_bucket_triggers/:pushId and calls the saveFTPFileWithBucketUpload
// function to save the file in the default project storage bucket
exports.dbTriggersBucketUpload = functions.database
  .ref('/ftp_bucket_triggers/{pushId}')
  .onWrite(event => {
    const trigger = event.data.val()
    const fileName = trigger.file_name // e.g.: trigger.file_name = '20170601.tar.gz'
    return saveFTPFileWithBucketUpload(trigger.file_name)
    // This fails with the following error in the Firebase console:
    /*
    Error: Uncaught, unspecified "error" event. ([object Object])
        at Pumpify.emit (events.js:163:17)
        at Pumpify.onerror (_stream_readable.js:579:12)
        at emitOne (events.js:96:13)
        at Pumpify.emit (events.js:188:7)
        at Pumpify.Duplexify._destroy (/user_code/node_modules/@google-cloud/storage/node_modules/duplexify/index.js:184:15)
        at /user_code/node_modules/@google-cloud/storage/node_modules/duplexify/index.js:175:10
        at _combinedTickCallback (internal/process/next_tick.js:67:7)
        at process._tickDomainCallback (internal/process/next_tick.js:122:9)
    */
  })
// [END database triggers]
I've finally found the correct way to implement this.
1) Make sure the bucket is correctly referenced. Initially I just used my project_id, without the '.appspot.com' at the end:
const bucket = gcpStorage.bucket('<project_id>.appspot.com')
2) Create the bucket write stream first, then pipe the stream from the FTP get call to the bucketWriteStream. Note that file_name will be the name of the saved file (this file does not have to exist beforehand).
ftpSource.get(filePath, function(err, stream) {
  if (err) throw err
  stream.once('close', function() { ftpSource.end() })

  // This didn't work!
  // stream.pipe(fs.createWriteStream(fileName))

  // This works...
  let bucketWriteStream = bucket.file(fileName).createWriteStream()
  stream.pipe(bucketWriteStream)
})
Et voilà, works like a charm...
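One caveat worth adding for Cloud Functions: the exported function should return a promise that resolves only once the upload has finished, otherwise the instance may be torn down mid-upload. A hedged sketch of wrapping the write stream in a promise (uploadStreamToBucket is a hypothetical helper; bucket is the reference from step 1):
function uploadStreamToBucket(stream, fileName) {
  // resolve when the GCS write stream emits 'finish'; reject on 'error'
  return new Promise((resolve, reject) => {
    const bucketWriteStream = bucket.file(fileName).createWriteStream()
    stream.pipe(bucketWriteStream)
      .on('finish', resolve)
      .on('error', reject)
  })
}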
