How to create a file from /api in Next.js?

I am currently trying to create a temp file from /api/sendEmail.js with fs.mkdirSync
fs.mkdirSync(path.join(__dirname, "../../public"));
but on Vercel (where my app is running) all folders are read-only and I can't create any temp files.
Error:
Error: EROFS: read-only file system, mkdir '/var/task/.next/server/public'
As far as I can see there are some questions about this but no clear answer. Have any of you managed to do this?

Vercel allows creating files in the /tmp directory. However, there are limitations with this: https://github.com/vercel/vercel/discussions/5320
An example of an /api function that writes and reads files:
import fs from 'fs'

export default async function handler(req, res) {
  const { method, body } = req

  try {
    switch (method) {
      case 'GET': {
        // Open /tmp/text.txt as a readable stream
        const readStream = fs.createReadStream('/tmp/text.txt')
        // Wait until the readable stream is actually valid before piping
        readStream.on('open', function () {
          // Pipe the read stream to the response object (which goes to the client)
          readStream.pipe(res)
        })
        // Catch any errors that happen while creating the readable stream (usually invalid names)
        readStream.on('error', function (err) {
          res.end(String(err))
        })
        return
      }
      case 'POST':
        // Write the request body to /tmp, the only writable location on Vercel
        fs.writeFileSync('/tmp/text.txt', JSON.stringify(body))
        break
      default:
        res.setHeader('Allow', ['GET', 'POST'])
        return res.status(405).end(`Method ${method} Not Allowed`)
    }
    // send result
    return res.status(200).json({ message: 'Success' })
  } catch (error) {
    return res.status(500).json(error)
  }
}
Also see: https://vercel.com/support/articles/how-can-i-use-files-in-serverless-functions
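For completeness, here is a hedged sketch of how a client might exercise such a route, assuming the handler above is saved as pages/api/file.js (the route name is an assumption, not from the answer). Keep in mind that /tmp is per-instance and ephemeral, so a GET is only guaranteed to see the file if it lands on the same warm instance that handled the POST.
// Client-side sketch (the /api/file route name is an assumption)
async function writeAndReadTmpFile() {
  // POST: the handler writes the JSON body to /tmp/text.txt
  await fetch('/api/file', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ hello: 'world' }),
  })

  // GET: the handler streams /tmp/text.txt back
  const res = await fetch('/api/file')
  console.log(await res.text())
}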

Related

fetching mp3 file from MeteorJS and trying to convert it into a Blob so that I can play it

I am playing around with downloading and serving mp3 files in Meteor.
I am trying to download an MP3 file (https://www.sample-videos.com/audio/mp3/crowd-cheering.mp3) on my MeteorJS Server side (to circumvent CORS issues) and then pass it back to the client to play it in a AUDIO tag.
In Meteor you use the Meteor.call function to call a server method. There is not much to configure, it's just a method call and a callback.
When I run the method I receive this:
content:
"ID3���#K `�)�<H� e0�)������1������J}��e����2L����������fȹ\�CO��ȹ'�����}$A�Lݓ����3D/����fijw��+�LF�$?��`R�l�YA:A��#�0��pq����4�.W"�P���2.Iƭ5��_I�d7d����L��p0��0A��cA�xc��ٲR�BL8䝠4���T��..etc..", data:null,
headers: {
  accept-ranges: "bytes",
  connection: "close",
  content-length: "443926",
  content-type: "audio/mpeg",
  date: "Mon, 20 Aug 2018 13:36:11 GMT",
  last-modified: "Fri, 17 Jun 2016 18:16:53 GMT",
  server: "Apache",
  statusCode: 200
}
which is the working Mp3 file (the content-length is exactly the same as the file I write to disk on the MeteorJS Server side, and it is playable).
However, the following code doesn't let me convert the response into a Blob:
MeteorObservable.call('episode.download', episode.url.url).subscribe((result: any) => {
  console.log('response', result);
  let URL = window.URL;
  let blob = new Blob([result.content], {type: 'audio/mpeg'});
  console.log('blob', blob);
  let audioUrl = URL.createObjectURL(blob);
  let audioElement: any = document.getElementsByTagName('audio')[0];
  audioElement.setAttribute("src", audioUrl);
  audioElement.play();
})
When I run the code, the Blob has the wrong size and is not playable
Blob(769806) {size: 769806, type: "audio/mpeg"}
size:769806
type:"audio/mpeg"
__proto__:Blob
Uncaught (in promise) DOMException: Failed to load because no supported source was found.
On the backend I just run return HTTP.get(url); in the method, which uses import { HTTP } from 'meteor/http'.
I have been trying to use btoa or atob, but that doesn't work, and as far as I know it is already a base64 encoded file, right?
I am not sure why the Blob constructor creates a larger file than the source returned from the backend, and I am not sure why it is not playing.
Can anyone point me to the right direction?
Finally found a solution that uses request instead of Meteor's HTTP:
First you need to install request and request-promise-native in order to make it easy to return your result to clients.
$ meteor npm install --save request request-promise-native
Now you just return the promise of the request in a Meteor method:
server/request.js
import { Meteor } from 'meteor/meteor'
import request from 'request-promise-native'
Meteor.methods({
getAudio (url) {
return request.get({url, encoding: null})
}
})
Notice the encoding: null option, which causes the result to be returned as binary data. I found this in a comment on an answer about downloading binary data via Node. Instead of a string you get a binary representation of the data (presumably a Node Buffer).
Now it gets interesting. On the client you won't receive a complex result anymore, but either an Error or a Uint8Array. This makes sense because Meteor uses EJSON to send data over the wire with DDP, and the representation of binary data is a Uint8Array, as described in the documentation.
Because you can pass a Uint8Array straight into a Blob, you can now easily create the blob like so:
const blob = new Blob([uint8Array], {type: 'audio/mpeg'})
Summarizing all this into a small template, it could look like this:
client/fetch.html
<template name="fetch">
<button id="fetchbutton">Fetch Mp3</button>
{{#if source}}
<audio id="player" src={{source}} preload="none" content="audio/mpeg" controls></audio>
{{/if}}
</template>
client/fetch.js
import { Meteor } from 'meteor/meteor'
import { Template } from 'meteor/templating'
import { ReactiveVar } from 'meteor/reactive-var'
import './fetch.html'

Template.fetch.onCreated(function helloOnCreated () {
  // reactive var that will hold the object URL of the fetched mp3
  this.source = new ReactiveVar(null)
})

Template.fetch.helpers({
  source () {
    return Template.instance().source.get()
  },
})

Template.fetch.events({
  'click #fetchbutton' (event, instance) {
    Meteor.call('getAudio', 'https://www.sample-videos.com/audio/mp3/crowd-cheering.mp3', (err, uint8Array) => {
      const blob = new Blob([uint8Array], {type: 'audio/mpeg'})
      instance.source.set(window.URL.createObjectURL(blob))
    })
  },
})
An alternative solution is adding a REST endpoint (using Express) to your Meteor backend.
Instead of HTTP we use request and request-progress to send the data chunked, in case of large files.
On the frontend I catch the chunks using Angular's progress events (https://angular.io/guide/http#listening-to-progress-events) to show a loader and deal with the response.
I could listen to the download via
this.http.get('the URL to a mp3', { responseType: 'arraybuffer' }).subscribe((res: any) => {
  var blob = new Blob([res], { type: 'audio/mpeg' });
  var url = window.URL.createObjectURL(blob);
  window.open(url);
});
The above example doesn't show progress, by the way; you need to implement the progress events as explained in the Angular article (a sketch follows below). Happy to update the example with my final code when finished.
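For reference, a minimal sketch of what that progress listening could look like with Angular's HttpClient (an illustration based on the linked guide, not my final code): pass reportProgress: true and observe: 'events', then branch on the event type.
import { HttpEventType } from '@angular/common/http';

this.http.get('the URL to a mp3', {
  responseType: 'arraybuffer',
  reportProgress: true,
  observe: 'events'
}).subscribe((event) => {
  if (event.type === HttpEventType.DownloadProgress) {
    // event.total is only set when the server sends a content-length header
    console.log('downloaded', event.loaded, 'of', event.total, 'bytes');
  } else if (event.type === HttpEventType.Response) {
    const blob = new Blob([event.body], { type: 'audio/mpeg' });
    window.open(window.URL.createObjectURL(blob));
  }
});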
The Express setup on the Meteor Server:
/*
Source:http://www.mhurwi.com/meteor-with-express/
## api.class.ts
*/
import { WebApp } from 'meteor/webapp';
const express = require('express');
const trackRoute = express.Router();
const request = require('request');
const progress = require('request-progress');
export function api() {
  const app = express();

  app.use(function(req, res, next) {
    res.header("Access-Control-Allow-Origin", "*");
    res.header("Access-Control-Allow-Headers", "Origin, X-Requested-With, Content-Type, Accept");
    next();
  });

  app.use('/episodes', trackRoute);

  trackRoute.get('/:url', (req, res) => {
    res.set('content-type', 'audio/mp3');
    res.set('accept-ranges', 'bytes');
    // The options argument is optional so you can omit it
    progress(request(req.params.url), {
      // throttle: 2000,                    // Throttle the progress event to 2000ms, defaults to 1000ms
      // delay: 1000,                       // Only start to emit after 1000ms delay, defaults to 0ms
      // lengthHeader: 'x-transfer-length'  // Length header to use, defaults to content-length
    })
      .on('progress', function (state) {
        // The state is an object that looks like this:
        // {
        //   percent: 0.5,            // Overall percent (between 0 to 1)
        //   speed: 554732,           // The download speed in bytes/sec
        //   size: {
        //     total: 90044871,       // The total payload size in bytes
        //     transferred: 27610959  // The transferred payload size in bytes
        //   },
        //   time: {
        //     elapsed: 36.235,       // The total elapsed seconds since the start (3 decimals)
        //     remaining: 81.403      // The remaining seconds to finish (3 decimals)
        //   }
        // }
        console.log('progress', state);
      })
      .on('error', function (err) {
        // Do something with err
      })
      .on('end', function () {
        console.log('DONE');
        // Do something after request finishes
      })
      .pipe(res);
  });

  WebApp.connectHandlers.use(app);
}
and then add this to your meteor startup:
import { Meteor } from 'meteor/meteor';
import { api } from './imports/lib/api.class';
Meteor.startup( () => {
api();
});
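For clarity, a hedged usage sketch of how the frontend could hit this proxy endpoint (the host and port are assumptions): the :url parameter has to be URI-encoded because the source URL contains slashes.
const mp3Url = 'https://www.sample-videos.com/audio/mp3/crowd-cheering.mp3';
// Assumed host/port; adjust to wherever the Meteor/Express server is running
const endpoint = 'http://localhost:3000/episodes/' + encodeURIComponent(mp3Url);

this.http.get(endpoint, { responseType: 'arraybuffer' }).subscribe((res) => {
  const blob = new Blob([res], { type: 'audio/mpeg' });
  window.open(window.URL.createObjectURL(blob));
});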

How can I upload an FTP file to firebase storage using Cloud Functions for Firebase?

Within the same firebase project and using a cloud function (written in node.js), I first download an FTP file (using npm ftp module) and then try to upload it into the firebase storage.
Every attempt has failed so far and the documentation doesn't help... any expert advice/tips would be greatly appreciated.
The following code uses two different approaches : fs.createWriteStream() and bucket.file().createWriteStream(). Both failed but for different reasons (see error messages in the code).
'use strict'
// [START import]
let admin = require('firebase-admin')
let functions = require('firebase-functions')
const gcpStorage = require('@google-cloud/storage')()
admin.initializeApp(functions.config().firebase)
var FtpClient = require('ftp')
var fs = require('fs')
// [END import]
// [START Configs]
// Firebase Storage is configured with the following rules and grants read write access to everyone
/*
service firebase.storage {
  match /b/{bucket}/o {
    match /{allPaths=**} {
      allow read, write;
    }
  }
}
*/
// Replace this with your project id, will be used by: const bucket = gcpStorage.bucket(firebaseProjectID)
const firebaseProjectID = 'your_project_id'
// Public FTP server, uploaded files are removed after 48 hours ! Upload new ones when needed for testing
const CONFIG = {
  test_ftp: {
    source_path: '/48_hour',
    ftp: {
      host: 'ftp.uconn.edu'
    }
  }
}
const SOURCE_FTP = CONFIG.test_ftp
// [END Configs]
// [START saveFTPFileWithFSCreateWriteStream]
function saveFTPFileWithFSCreateWriteStream(file_name) {
  const ftpSource = new FtpClient()
  ftpSource.on('ready', function() {
    ftpSource.get(SOURCE_FTP.source_path + '/' + file_name, function(err, stream) {
      if (err) throw err
      stream.once('close', function() { ftpSource.end() })
      stream.pipe(fs.createWriteStream(file_name))
      console.log('File downloaded: ', file_name)
    })
  })
  ftpSource.connect(SOURCE_FTP.ftp)
}
// This fails with the following error in firebase console:
// Error: EROFS: read-only file system, open '20170601.tar.gz' at Error (native)
// [END saveFTPFileWithFSCreateWriteStream]
// [START saveFTPFileWithBucketUpload]
function saveFTPFileWithBucketUpload(file_name) {
  const bucket = gcpStorage.bucket(firebaseProjectID)
  const file = bucket.file(file_name)
  const ftpSource = new FtpClient()
  ftpSource.on('ready', function() {
    ftpSource.get(SOURCE_FTP.source_path + '/' + file_name, function(err, stream) {
      if (err) throw err
      stream.once('close', function() { ftpSource.end() })
      stream.pipe(file.createWriteStream())
      console.log('File downloaded: ', file_name)
    })
  })
  ftpSource.connect(SOURCE_FTP.ftp)
}
// [END saveFTPFileWithBucketUpload]
// [START database triggers]
// Listens for new triggers added to /ftp_fs_triggers/:pushId and calls the saveFTPFileWithFSCreateWriteStream
// function to save the file in the default project storage bucket
exports.dbTriggersFSCreateWriteStream = functions.database
.ref('/ftp_fs_triggers/{pushId}')
.onWrite(event => {
const trigger = event.data.val()
const fileName = trigger.file_name // i.e. : trigger.file_name = '20170601.tar.gz'
return saveFTPFileWithFSCreateWriteStream(trigger.file_name)
// This fails with the following error in firebase console:
// Error: EROFS: read-only file system, open '20170601.tar.gz' at Error (native)
})
// Listens for new triggers added to /ftp_bucket_triggers/:pushId and calls the saveFTPFileWithBucketUpload
// function to save the file in the default project storage bucket
exports.dbTriggersBucketUpload = functions.database
.ref('/ftp_bucket_triggers/{pushId}')
.onWrite(event => {
const trigger = event.data.val()
const fileName = trigger.file_name // i.e. : trigger.file_name = '20170601.tar.gz'
return saveFTPFileWithBucketUpload(trigger.file_name)
// This fails with the following error in firebase console:
/*
Error: Uncaught, unspecified "error" event. ([object Object])
at Pumpify.emit (events.js:163:17)
at Pumpify.onerror (_stream_readable.js:579:12)
at emitOne (events.js:96:13)
at Pumpify.emit (events.js:188:7)
at Pumpify.Duplexify._destroy (/user_code/node_modules/@google-cloud/storage/node_modules/duplexify/index.js:184:15)
at /user_code/node_modules/@google-cloud/storage/node_modules/duplexify/index.js:175:10
at _combinedTickCallback (internal/process/next_tick.js:67:7)
at process._tickDomainCallback (internal/process/next_tick.js:122:9)
*/
})
// [END database triggers]
I've finally found the correct way to implement this.
1) Make sure the bucket is correctly referenced. Initially I just used my project_id without the '.appspot.com' at the end.
const bucket = gcpStorage.bucket('<project_id>.appspot.com')
2) Create a bucket write stream first, then pipe the stream from the FTP get call into it. Note that file_name will be the name of the saved file (this file does not have to exist beforehand).
ftpSource.get(filePath, function(err, stream) {
  if (err) throw err
  stream.once('close', function() { ftpSource.end() })
  // This didn't work!
  // stream.pipe(fs.createWriteStream(fileName))
  // This works...
  let bucketWriteStream = bucket.file(fileName).createWriteStream()
  stream.pipe(bucketWriteStream)
})
Et voilà, works like a charm...
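One refinement worth mentioning, as a sketch on top of the answer above rather than part of the original code: the database trigger returns immediately while the streams are still running, so you can wrap the pipe in a Promise and return it, letting the Cloud Function wait for the upload to finish. The 'finish' and 'error' events used here are the standard writable-stream events emitted by bucket.file().createWriteStream(); the function name is illustrative.
function saveFTPFileToBucket(fileName) {
  const bucket = gcpStorage.bucket('<project_id>.appspot.com')
  return new Promise((resolve, reject) => {
    const ftpSource = new FtpClient()
    ftpSource.on('ready', function() {
      ftpSource.get(SOURCE_FTP.source_path + '/' + fileName, function(err, stream) {
        if (err) return reject(err)
        stream.once('close', function() { ftpSource.end() })
        // Pipe the FTP stream into the bucket and settle the promise on stream events
        stream.pipe(bucket.file(fileName).createWriteStream())
          .on('error', reject)
          .on('finish', resolve)
      })
    })
    ftpSource.connect(SOURCE_FTP.ftp)
  })
}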

Meteor method gets data as undefined

So I have this method in my component
uploadCallback (file) {
  // TODO: Integrate dropbox with its SDK
  // TODO: Pass the link to the editor
  return new Promise(
    (resolve, reject) => {
      console.log('uploadCallback promise')
      console.log('file', file)
      const dataObject = {
        file,
        resolve,
        reject
      }
      console.log('dataObject', dataObject)
      Meteor.call('uploadToDropbox', dataObject, function (error, result) {
        console.log('uploadToDropbox callback')
        if (error) {
          console.log('error', error)
        }
        if (result) {
          console.log('result', result)
        }
      })
    }
  )
}
In my dataObject I am getting everything as needed. Here is what the console logs
uploadCallback promise
file File {name: "nodejs-2560x1440.png", lastModified: 1485410804857, lastModifiedDate: Thu Jan 26 2017 10:06:44 GMT+0400 (+04), webkitRelativePath: "", size: 1699460…}
dataObject Object {file: File}
uploadToDropbox callback
So everything seems to be ok here.
And here is my server code
import { Meteor } from 'meteor/meteor'
import Dropbox from 'dropbox'

console.log('dropbox settings', Meteor.settings.dropbox)
const dbx = new Dropbox({accessToken: Meteor.settings.dropbox.accessToken})

Meteor.methods({
  'uploadToDropbox': function (dataObject) {
    console.log('dataObject', dataObject)
    const { file } = dataObject
    console.log('file', file)
    const { resolve, reject } = dataObject
    console.log('resolve', resolve)
    console.log('reject', reject)
    dbx.filesUpload({path: '/' + file.name, contents: file})
      .then(function (response) {
        console.log(response)
        resolve({ data: { link: 'http://dummy_image_src.com' } })
      })
      .catch(function (error) {
        console.error(error)
        reject('some error')
      })
    return false
  }
})
The problem is here. dataObject is being passed almost empty
This is what the server logs
I20170217-11:44:36.141(4)? dataObject { file: {} }
I20170217-11:44:36.143(4)? file {}
I20170217-11:44:36.143(4)? resolve undefined
I20170217-11:44:36.144(4)? reject undefined
W20170217-11:44:36.371(4)? (STDERR) [TypeError: first argument must be a string or Buffer]
So why is this happening?
I suspect that the File you're trying to pass to the method is a file handle. If true, then that's not going to work: even if the server did get that info, it has no access to your local filesystem to grab those bytes.
Your solution is going to take one of two forms:
1) client uploads to dropbox
   client reads bytes from file system into memory
   client uploads bytes to dropbox
   client receives some dropbox metadata about the uploaded file (e.g. location)
   client calls server with that metadata info
   server saves that info to db
2) server uploads to dropbox
   client reads bytes from file system into memory
   client formats that data into something that can be handled by JSON
   client calls server with that JSON object
   server uploads bytes to dropbox
   server receives some dropbox metadata about the uploaded file (e.g. location)
   server saves that info to db
Which to do? It depends on which Dropbox package/solution you're using and how you want to structure your app; a rough sketch of the second form follows below.
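A minimal client-side sketch of the "server uploads" form, reusing the uploadToDropbox method name from the question; the payload shape ({ name, bytes }) is illustrative, not the asker's code. The key point is that only EJSON-serializable data crosses the wire, and resolve/reject stay on the client.
function uploadCallback (file) {
  return new Promise((resolve, reject) => {
    const reader = new FileReader()
    reader.onload = () => {
      // A Uint8Array is EJSON-serializable, unlike the File handle itself
      const bytes = new Uint8Array(reader.result)
      Meteor.call('uploadToDropbox', { name: file.name, bytes }, (error, result) => {
        if (error) return reject(error)
        resolve(result)
      })
    }
    reader.onerror = reject
    reader.readAsArrayBuffer(file)
  })
}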
You are returning a promise, not data; you have to wait for the result and then return the data.
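As a hedged illustration, building on the client-side sketch above with the same assumed { name, bytes } payload, the server method could simply return the Dropbox promise chain: Meteor waits for a returned Promise and sends the resolved value back to the caller. The dummy link mirrors the one in the question.
Meteor.methods({
  'uploadToDropbox': function ({ name, bytes }) {
    // bytes arrives as a Uint8Array; Dropbox's Node SDK accepts a Buffer for contents
    return dbx.filesUpload({ path: '/' + name, contents: Buffer.from(bytes) })
      .then(function (response) {
        console.log(response)
        return { data: { link: 'http://dummy_image_src.com' } }
      })
  }
})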

Meteor reading data from a file stored on server

I want to show some data points on the client side, and I want to get the latitude, longitude and summary from a file stored on the server.
I have read a lot of posts saying to use papaParse using Meteor methods but I am not able to make it work.
Can you guys point me to right direction, my questions are:
In which folder should I store a .txt, .csv or .json file in Meteor?
How do I access it and return the read data to the client for display?
You can put your static files into the private folder on the server and get them through Assets.
For example, say you have a data.json file in your private folder.
A method to get this data:
Meteor.methods({
  getData() {
    return JSON.parse(Assets.getText('data.json'));
  }
});
You can now call this method on the client:
Meteor.call('getData', function(err, res) {
  console.log(res);
});
UPD
Ok, how to display it.
Meteor.call runs asynchronously, so we will use reactivity to update our view when the result arrives.
Here is how we can display data in the ourData template.
<template name="ourData">
<!-- Here you may want to use #each or whatever -->
<p>{{ourData}}</p>
</template>
Template.ourData.onCreated(function() {
this.ourData = new ReactiveVar();
Meteor.call('getData', (err, res) => {
if (err) {
console.error(err);
} else {
// Putting data in reactive var
this.ourData.set(res);
}
});
});
Template.ourData.helpers({
ourData: function() {
// Helper will automatically rerun on method res
return Template.instance().ourData.get();
}
});
The reactive-var package is required, or you can also use Session.
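For reference, a minimal sketch of the Session-based alternative (assuming the session package is installed):
Template.ourData.onCreated(function() {
  Meteor.call('getData', (err, res) => {
    if (err) {
      console.error(err);
    } else {
      Session.set('ourData', res);
    }
  });
});

Template.ourData.helpers({
  ourData: function() {
    // Session is reactive, so the helper reruns when the value arrives
    return Session.get('ourData');
  }
});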

[meteor][0.6.*] With meteorjs, how to download file with Meteor.http?

In my Meteor app, the server tries to download some files and store them on the filesystem.
I use the Meteor.http package to do that, but the files that are downloaded seem to be corrupted.
var fileUrl = 'http://cdn.sstatic.net/stackoverflow/img/sprites.png?v=5'; // for example

Meteor.http.call("GET", fileUrl, function funcStoreFile(error, result) {
  "use strict";
  if (!error) {
    var fstream = Npm.require('fs'),
        filename = './.meteor/public/storage/' + collectionId;
    fstream.writeFile(filename, result.content, function funcStoreFileWriteFS(err) {
      if (!err) {
        var Fiber = Npm.require('fibers');
        Fiber(function funcStoreImageSaveDb() {
          MyfileCollection.update({_id: collectionId}, {$set: {fileFsPath: filename}});
        }).run();
      } else {
        console.log('error during writing file', err);
      }
    });
  } else {
    console.log('dl file FAIL');
  }
});
I did a symlink from public/storage to ../.meteor/public/storage to enable direct download from url (http://localhost:3000/storage/myfileId)
When I compare the file downloaded with this system and the same file downloaded directly from a browser, they are different. What's wrong with my approach?
I had a similar problem and made a solution based on this discussion: https://github.com/meteor/meteor/issues/905
By using the request library, which Meteor uses under the hood as well, one can avoid the problem with binary downloads. Besides, I would recommend not saving small files to the filesystem but storing them base64 encoded in MongoDB directly. This is the easiest solution if you plan to deploy to meteor.com or other cloud services.
Another glitch I found when saving files to the public dir in development is that Meteor reloads the app for every change in the public dir. This can lead to data corruption as chunks of the file are still being downloaded. Here is some code I am using, based on the above discussion.
Future = Npm.require("fibers/future")
request = Npm.require 'request'

Meteor.methods
  downloadImage: (url) ->
    if url
      fut = new Future()
      options =
        url: url
        encoding: null
      # Get raw image binaries
      request.get options, (error, result, body) ->
        if error then return console.error error
        base64prefix = "data:" + result.headers["content-type"] + ";base64,"
        image = base64prefix + body.toString("base64")
        fut.ret image
      # pause until binaries are fully loaded
      return fut.wait()
    else false

Meteor.call 'downloadImage', url, (err, res) ->
  if res
    Movies.update({_id: id}, {$set: {image: res}})
Hope this is helpful.
