This Meteor server code gets called in various places; when it runs on my local development server it saves a file to the hard drive.
But now it is running in a Docker container on AWS EC2. How do I go about writing the file to an S3 bucket instead? Thanks.
'saveToFile': function (fileName, text) {
  if (env != 'development') return;
  const playPath = '/Users/localPath/' + fileName + '.html';
  fs.writeFile(playPath, text, (err) => {
    if (err) throw err;
    console.log('Saved to ' + fileName);
  });
}
You need to use the AWS SDK for Node.js:
https://aws.amazon.com/sdk-for-node-js/
Direct example:
https://gist.github.com/homam/8646090
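For the common case of writing a string to S3, a rough sketch of how the method above could look with the SDK; the bucket name and region are placeholders, and credentials are assumed to come from the environment or an instance role:
// Sketch only: adapts saveToFile to put the HTML into S3 instead of the local disk.
const AWS = require('aws-sdk');

const s3 = new AWS.S3({ region: 'us-east-1' }); // region is an assumption

Meteor.methods({
  'saveToFile': function (fileName, text) {
    s3.putObject({
      Bucket: 'my-bucket',        // placeholder bucket name
      Key: fileName + '.html',
      Body: text,
      ContentType: 'text/html'
    }, (err) => {
      if (err) throw err;
      console.log('Saved to ' + fileName);
    });
  }
});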
Contract deploys to address 0x47c5e40890bcE4a473A49D7501808b9633F29782
It looks like many other contracts were deployed to the same address by other people.
Shouldn't the address of the contract be unique, or is it deterministically generated or cached somehow by Hardhat?
Why would other people have deployed to the same address?
I am wondering if this is some bug with the Polygon/Mumbai testnet.
const { ethers } = require("hardhat");

async function main() {
  const SuperMarioWorld = await ethers.getContractFactory("Rilu");
  const superMarioWorld = await SuperMarioWorld.deploy("Rilu", "RILU");
  await superMarioWorld.deployed();
  console.log("Success contract was deployed to: ", superMarioWorld.address);
  await superMarioWorld.mint("https://ipfs.io/ipfs/QmZkUCDt5CVRWQjLDyRS4c8kU6UxRNdpsjMf6vomDcd7ep");
}

// We recommend this pattern to be able to use async/await everywhere
// and properly handle errors.
main()
  .then(() => process.exit(0))
  .catch((error) => {
    console.error(error);
    process.exit(1);
  });
Hardhat config (hardhat.config.js):
require('dotenv').config(); // loads MUMBAI_RPC and PRIVATE_KEY from the .env file below

module.exports = {
  solidity: '0.8.4',
  networks: {
    mumbai: {
      url: process.env.MUMBAI_RPC,
      accounts: [process.env.PRIVATE_KEY],
    },
  },
};
.env file (no problem with sharing the private key, just one I got from vanity-eth.tk and used for testing)
PRIVATE_KEY=84874e85685c95440e51d5edacf767f952f596cca6fd3da19b90035a20f57e37
MUMBAI_RPC=https://rpc-mumbai.maticvigil.com
Output
~/g/s/b/r/nft ❯❯❯ npx hardhat run scripts/deploy.js --network mumbai ✘ 1
Compiling 12 files with 0.8.4
Compilation finished successfully
Success contract was deployed to: 0x47c5e40890bcE4a473A49D7501808b9633F29782
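For reference: with an ordinary CREATE deployment, the contract address is derived deterministically from the deployer address and its nonce, so anyone deploying from the same widely shared test key at the same nonce lands on the same address. A minimal sketch to check the expected address, assuming ethers v5 (as bundled with Hardhat); the deployer address below is a placeholder:
// Sketch: compute the address a CREATE deployment would get.
const { ethers } = require("ethers");

const expected = ethers.utils.getContractAddress({
  from: "0x0000000000000000000000000000000000000000", // deployer address (placeholder)
  nonce: 0, // the deployer account's transaction count at deploy time
});
console.log("Expected contract address:", expected);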
I'm moving a MERN project into React + MongoDB Stitch after seeing it allows for easy user authentication, quick deployment, etc.
However, I am having a hard time understanding where and how I can call a site-scraping function. Previously, I web scraped in Express.js with cheerio like:
app.post("/api/getTitleAtURL", (req, res) => {
  if (req.body.url) {
    request(req.body.url, function(error, response, body) {
      if (!error && response.statusCode == 200) {
        const $ = cheerio.load(body);
        const webpageTitle = $("title").text();
        const metaDescription = $("meta[name=description]").attr("content");
        const webpage = {
          title: webpageTitle,
          metaDescription: metaDescription
        };
        res.send(webpage);
      } else {
        res.status(400).send({ message: "THIS IS AN ERROR" });
      }
    });
  }
});
But obviously, with Stitch, no Node and Express are needed. Is there a way to fetch another site's content without having to host a Node.js application just to serve that one function?
Thanks
Turns out you can build Functions in MongoDB Stitch that allow you to upload external dependencies.
However, there are limitations; for example, cheerio didn't work as an uploaded external dependency while request did. A solution, therefore, is to create a serverless function in AWS Lambda and then connect MongoDB Stitch to it (Stitch can connect to many third-party services, including AWS services like Lambda, S3, Kinesis, etc.).
AWS Lambda lets you upload any external dependencies; if MongoDB Stitch allowed that for everything, we wouldn't need Lambda, but Stitch still lacks support for many packages. In my case, I had a Node function with cheerio and request as external dependencies. To upload this to Lambda: make an account, create a new Lambda function, and pack your code and node_modules folder into a zip file and upload it. The handler file should sit at the root of the zip, next to node_modules,
and the file containing the function should look like:
const cheerio = require("cheerio");
const request = require("request");

exports.rss = function(event, context, callback) {
  request(event.requestURL, function(error, response, body) {
    if (!error && response.statusCode == 200) {
      const $ = cheerio.load(body);
      const webpageTitle = $("title").text();
      const metaDescription = $("meta[name=description]").attr("content");
      const webpage = {
        title: webpageTitle,
        metaDescription: metaDescription
      };
      callback(null, webpage);
      return webpage;
    } else {
      callback(null, {message: "THIS IS AN ERROR"});
      return {message: "THIS IS AN ERROR"};
    }
  });
};
And in MongoDB Stitch, connect to a third-party service: choose AWS and enter the access keys you got from creating an IAM user. In Rules -> Actions, choose Lambda as your API and allow all actions. Now, in your MongoDB Stitch functions, you can connect to Lambda; that function looked like this in my case:
exports = async function(requestURL) {
  const lambda = context.services.get('getTitleAtURL').lambda("us-east-1");
  const result = await lambda.Invoke({
    FunctionName: "getTitleAtURL",
    Payload: JSON.stringify({requestURL: requestURL})
  });
  console.log(result.Payload.text());
  return EJSON.parse(result.Payload.text());
};
Note: this slowed performance down significantly though; in general, calls took about twice as long to finish.
I have been trying to find a solution for this for a couple of days now. I want to back up my Realtime Database data more frequently than what Firebase provides, i.e. once every 24 hours. I have done a similar thing with Firestore: deployed an app and a cron job to back up every 8 hours and save the result to my GCS bucket. But I couldn't find anything similar for the Realtime Database.
Here's what I have tried:
I am using Firebase Functions here (I will add a cron job later). What I am doing is fetching every child of my db's root node and zipping it with zlib. Here's what my code looks like:
const functions = require('firebase-functions');
const admin = require("firebase-admin");
const cors = require('cors')({origin: true});
const zlib = require('zlib');

admin.initializeApp(); // initialize the Admin SDK so admin.database() works

exports.backup = functions.https.onRequest((request, response) => {
  cors(request, response, async () => {
    const db = admin.database();
    const ref_deb = db.ref("/");
    ref_deb.once('value').then(function (snapshot) {
      console.log("Database value " + snapshot.val());
      let jsonFileContent = snapshot.val();
      if (jsonFileContent != null) {
        console.log("getCompressedJSONFile function started", jsonFileContent);
        let bufferObject = Buffer.from(JSON.stringify(jsonFileContent)); // Buffer.from, not new Buffer.from
        zlib.gzip(bufferObject, function (err, zippedData) {
          if (err) {
            console.log("error in gzip compression using zlib module", err);
            response.status(500).send("Error"); // set the status before sending
          } else {
            console.log("zippedData", zippedData);
            response.status(200).send(zippedData);
          }
        });
      } else {
        response.status(500).send("Error");
      }
    }, function (error) {
      console.error(error);
    });
  });
});
I am getting the response (obviously encoded and not readable)
Now I want to upload it to my GS bucket. How can I do that? Is it possible to do it by passing the zippedContents from zlib without creating a file? Any help will be appreciated.
My database is less than 0.5 MB uncompressed so backing everything at once shouldn't be an issue.
There are several questions in your question.
First, you can't manually trigger a backup the way you can with Firestore.
Then, you could stream your file directly from your function to storage, but I don't recommend it. There is no CRC check when you stream data, and if an issue occurs in your ZIP file, the whole file is corrupted and you can't recover your data.
If your file is small, I recommend you store it in the /tmp directory, a writable in-memory file system (tmpfs), and then copy the file to the bucket.
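A minimal sketch of that approach, reusing the admin instance from the question's code, assuming the project's default Storage bucket and a hypothetical rtdb-backups/ object prefix; it would be called from the zlib.gzip callback:
// Sketch: write the gzipped snapshot to /tmp, then copy it into Cloud Storage.
const os = require('os');
const path = require('path');
const fs = require('fs');

function uploadBackup(zippedData) {
  const tmpPath = path.join(os.tmpdir(), 'backup.json.gz');
  fs.writeFileSync(tmpPath, zippedData);      // /tmp is the writable tmpfs in Cloud Functions
  const bucket = admin.storage().bucket();    // default bucket of the Firebase project
  return bucket
    .upload(tmpPath, { destination: 'rtdb-backups/' + Date.now() + '.json.gz' })
    .then(() => fs.unlinkSync(tmpPath));      // free the in-memory file system
}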
This Meteor server code (part of an app) running on the local machine downloads a file from the web and saves it to AWS S3.
This Meteor app also runs in an EC2 Docker container, but when the modifications below are made, it fails to run: docker ps does not show a running container.
The modified code runs fine on the local machine, where it downloads a file from the web and uploads it to AWS S3.
Any ideas how to fix it so that, when it runs in the EC2 Docker container, it downloads the file and saves it to AWS S3? Thanks.
// server
let AWS = require('aws-sdk');
let fs = require('fs');
let request = Npm.require('request');

Meteor.startup(() => {
  AWS.config.update({
    accessKeyId: 'abc',
    secretAccessKey: 'xyz'
  });

  let url = "http://techslides.com/demos/sample-videos/small.mp4";
  let fileArray = url.split("/");
  let file = fileArray[fileArray.length - 1];

  // (((it would be good if copying locally is avoided)))
  // let localFilePath = "/home/ec2-user/" + file; // <=== fails on EC2
  let localFilePath = "/local/path/" + file;       // <=== works locally

  request(url).pipe(fs.createWriteStream(localFilePath)).on("finish", function() {
    fs.readFile(localFilePath, function(err, data) {
      if (err) {
        console.log("file does not exist");
        throw err;
      }
      let base64data = new Buffer(data, 'binary');
      let s3 = new AWS.S3();
      s3.putObject({
        Bucket: 'myBucket',
        Key: file,
        Body: base64data,
      }, function(resp) {
        console.log(arguments);
        console.log('Successfully uploaded package.');
        fs.unlink(localFilePath);
      });
    });
  });
});
The reason is that the local Docker file system is read-only, so you can't save a file locally. See this answer to a similar question: Allow user to download file from public folder Meteor.js
There are several Meteor packages to help you with this, such as https://atmospherejs.com/ostrio/files. You can search Atmosphere to find a suitable package.
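If you would rather keep the aws-sdk approach from the question and simply avoid the local copy, s3.upload() accepts a readable stream as Body, so the download can be piped straight into S3. A rough sketch using the same placeholder bucket and credentials as the question:
// Sketch: pipe the HTTP download directly into S3; no local file is written.
let AWS = require('aws-sdk');
let request = Npm.require('request');

let s3 = new AWS.S3(); // credentials via AWS.config.update(...) as in the question

let url = "http://techslides.com/demos/sample-videos/small.mp4";
let file = url.split("/").pop();

s3.upload({
  Bucket: 'myBucket',   // same placeholder bucket as the question
  Key: file,
  Body: request(url),   // readable stream; s3.upload handles the buffering internally
}, function (err, data) {
  if (err) {
    console.log('Upload failed', err);
  } else {
    console.log('Successfully uploaded package.', data.Location);
  }
});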
In my Meteor app, the server tries to download some files and store them on the filesystem.
I use the Meteor.http package to do that, but the downloaded files seem to be corrupted.
var fileUrl = 'http://cdn.sstatic.net/stackoverflow/img/sprites.png?v=5'; //for example

Meteor.http.call("GET", fileUrl, function funcStoreFile(error, result) {
  "use strict";
  if (!error) {
    var fstream = Npm.require('fs'),
        filename = './.meteor/public/storage/' + collectionId;
    fstream.writeFile(filename, result.content, function funcStoreFileWriteFS(err) {
      if (!err) {
        var Fiber = Npm.require('fibers');
        Fiber(function funcStoreImageSaveDb() {
          MyfileCollection.update({_id: collectionId}, {$set: {fileFsPath: filename}});
        }).run();
      } else {
        console.log('error during writing file', err);
      }
    });
  } else {
    console.log('dl file FAIL');
  }
});
I did a symlink from public/storage to ../.meteor/public/storage to enable direct download from a URL (http://localhost:3000/storage/myfileId).
When I compare a file downloaded with this system to the same file downloaded directly from a browser, they are different. What's wrong with my approach?
I had a similar problem and made a solution based on this discussion:
https://github.com/meteor/meteor/issues/905
By using the request library, which Meteor is using under the hood as well, one can avoid the problem with binary downloads. Besides, I would recommend not saving small files to the filesystem but storing them base64-encoded in MongoDB directly. This is the easiest solution if you plan to deploy to meteor.com or other cloud services.
Another glitch I found when saving files to the public dir in development is that Meteor reloads the files for every change in the public dir. This can lead to data corruption as chunks of the file are being downloaded. Here is some code I am using, based on the above discussion.
Future = Npm.require("fibers/future")
request = Npm.require 'request'

Meteor.methods
  downloadImage: (url) ->
    if url
      fut = new Future()
      options =
        url: url
        encoding: null
      # Get raw image binaries
      request.get options, (error, result, body) ->
        if error then return console.error error
        base64prefix = "data:" + result.headers["content-type"] + ";base64,"
        image = base64prefix + body.toString("base64")
        fut.ret image
      # pause until binaries are fully loaded
      return fut.wait()
    else false

Meteor.call 'downloadImage', url, (err, res) ->
  if res
    Movies.update({_id: id}, {$set: {image: res}})
Hope this is helpful.