Fine Uploader - combining a scaled image with autoUpload: false and a custom filename not working

I'm using Fine Uploader 5.0.6 to upload images to my Amazon S3 account, with the following options set:
autoUpload: false
validation: max 10 MB, min 400 KB
a custom filename
as below:
var manualuploader = jQuery("#fine-uploader").fineUploaderS3({
    ...,
    autoUpload: false,
    validation: {
        allowedExtensions: ['jpeg', 'jpg', 'png'],
        sizeLimit: 10000000, // 10 MB
        minSizeLimit: 400000 // 400 KB
    },
    objectProperties: {
        key: function (fileId) {
            var filename = jQuery('#fine-uploader').fineUploader('getName', fileId);
            var uuid = jQuery('#fine-uploader').fineUploader('getUuid', fileId);
            var ext = filename.substr(filename.lastIndexOf('.') + 1);
            // design_name and artist are defined elsewhere in my code
            design_name = design_name.replace(/[^a-z0-9\s]/gi, '');
            folder_name = design_name.replace(/\s/g, '-');
            return artist + '/' + folder_name + '/original.' + ext;
        }
    }
    ...
});
jQuery('#triggerUpload').click(function() {
    manualuploader.fineUploaderS3('uploadStoredFiles');
});
This works as expected, but now I need to upload a 500px scaled image to S3 at the same time. I know I can do this with the scaling option, but it doesn't work with my code. Having tried several approaches, I've narrowed it down to these problems:
When I add the following:
scaling: {
    sizes: [
        {name: "web", maxSize: 500}
    ]
}
Only the original/main file gets uploaded to the S3 server, not both as intended. If I use sendOriginal: false, then the 'web' version does get sent to the server.
Also, where I change the filename in objectProperties.key so that it becomes
artist/folder/original.ext
I also need the smaller web version to follow the same structure, so it's
artist/folder/web.ext
Is this possible?

I'm not able to reproduce the issue you have reported with scaling or validation. You can change the key name of your file, as you are already doing, via the objectProperties.key option. You should be able to look at the original name of the file to determine if it is an original, or a scaled version.
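Building on that suggestion, the key callback can branch on the file name Fine Uploader reports for each upload. By default, Fine Uploader derives a scaled file's name from the original by appending the scale's name in parentheses (e.g. "photo (web).jpg" for the "web" size), so a small helper can map either variant to the desired key. This is only a sketch under that assumption; buildKey and its arguments are hypothetical names, not part of the library:

```javascript
// Hypothetical helper: map an uploaded file's name to the desired S3 key.
// Assumes Fine Uploader's default naming for scaled versions, which
// appends the scale's name in parentheses: "photo (web).jpg".
function buildKey(artist, folderName, filename) {
    var ext = filename.substr(filename.lastIndexOf('.') + 1);
    var isScaled = / \(web\)\.[^.]+$/.test(filename);
    var base = isScaled ? 'web' : 'original';
    return artist + '/' + folderName + '/' + base + '.' + ext;
}

// e.g. inside objectProperties.key:
//   return buildKey(artist, folder_name, filename);
```

With this, the original lands at artist/folder/original.ext and the scaled copy at artist/folder/web.ext.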

Updating records and their associations with sequelize

I have the following models defined in sequelize:
Library, MediaParentDir, MediaSubDir and MediaSubDirEpisodes
The first three hold information about directories on the system and the last one holds information about files in a particular directory on the system.
The associations are as follows:
Library.MediaParentDirs = Library.hasMany(models.MediaParentDir, {onDelete: 'CASCADE'});
Library.MediaSubDirs = Library.hasMany(models.MediaSubDir, {onDelete: 'CASCADE'});
MediaParentDir.MediaSubDirs = MediaParentDir.hasMany(models.MediaSubDir, {onDelete: 'CASCADE'});
MediaSubDir.Episodes = MediaSubDir.hasMany(models.Episode, {onDelete: 'CASCADE'});
And this is how I populate the database on first run:
db.Library.find({
    where: lib
}).then((existingLib) => {
    let includes = [{
        model: db.MediaParentDir,
        include: [{
            model: db.MediaSubDir,
            include: [db.Episode]
        }]
    },
    {
        model: db.MediaSubDir,
        include: [db.Episode]
    }];
    let mediaParentDirs = removeIgnored(library.getMediaParentDirsFrom(lib))
        .map((parentDir) => {
            parentDir.MediaSubDirs = removeIgnored(library.getMediaSubDirsFrom(parentDir));
            parentDir.MediaSubDirs.map((subDir) => {
                subDir.Episodes = removeIgnored(library.getMediaSubDirEpisodesFrom(subDir));
                return subDir;
            });
            return parentDir;
        });
    let mediaSubDirs = removeIgnored(library.getMediaSubDirsFrom(lib))
        .map((subDir) => {
            subDir.Episodes = removeIgnored(library.getMediaSubDirEpisodesFrom(subDir));
            return subDir;
        });
    let updatedLib = db.Library.build({
        name: lib.name,
        path: lib.path,
        type: lib.type,
        // Add all media parent dirs and child sub dirs under media parent dirs
        MediaParentDirs: mediaParentDirs,
        // Add all media sub dirs directly under library
        MediaSubDirs: mediaSubDirs,
    }, {
        include: includes
    });
    if (!existingLib)
        return updatedLib.save();
    // Record already exists. Update library data.
});
In the code above, I'm reading the library directory and gathering all the information for MediaParentDirs and the other models mentioned previously. Finally, I build a Library instance with all the nested associations defined.
Now, if a library already exists, I need to update the data associated with it and its models. I've already tried a few things:
Library.upsert(), but this doesn't update the associations.
Library.update(), same as above.
embed.update() from https://github.com/Wsiegenthaler/sequelize-embed, but this requires me to supply object IDs explicitly.
Is there any other way I could update the associated model instances?
Any help would be appreciated. Thanks.
Sequelize automatically adds setter methods to models that can be used to update associated data in the database.
Following is the code that I use to list these methods. Add it to index.js after the associations are defined and restart the node server.
// modelName comes from iterating over your defined models,
// e.g. Object.keys(db), skipping any non-model keys
for (let assoc of Object.keys(db[modelName].associations)) {
    for (let accessor of Object.keys(db[modelName].associations[assoc].accessors)) {
        console.log(db[modelName].name + '.' + db[modelName].associations[assoc].accessors[accessor] + '()');
    }
}
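Given the accessors listed above, the update branch can be sketched roughly like this. This is hypothetical and untested against a real schema: existingLib is the row found by the query, and savedParentDirs / savedSubDirs stand for already-persisted instances of the freshly scanned data (the setters generally expect saved instances or ids, not unsaved built objects):

```javascript
// Rough sketch: update an existing Library's own columns, then replace
// its associated rows via the generated set<Association> methods.
function updateExistingLibrary(existingLib, lib, savedParentDirs, savedSubDirs) {
    return existingLib
        .update({ name: lib.name, path: lib.path, type: lib.type })
        // setMediaParentDirs replaces the whole set of associated rows
        .then(() => existingLib.setMediaParentDirs(savedParentDirs))
        .then(() => existingLib.setMediaSubDirs(savedSubDirs));
}
```

Note that setMediaParentDirs severs associations to rows not in the list rather than merging, so this is a full-replace strategy.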

Change Firebase data coming from angularfire2

I have a database which stores a node like this:
-KdFlSK9eqzRDPd_I71I
address:
heats:
image: 2017-02-18T10:30:15.025Z.jpeg
title:
Inside the image field there is the filename of a file stored in Firebase Storage. What I would like to do is get the full path for fetching the file from Firebase Storage before 2017-02-18T10:30:15.025Z.jpeg is inserted into the image source. So basically, alter the data before render.
[Answer]
Very simple: I did not change the data from Firebase; I simply set the new data on a separate variable, which isn't displayed if it's not set.
You can do it as shown below. You just need to put the downloadURL on your image property after the image has been saved to Firebase Storage.
Note: this is just sample code from my own app; please adjust it to your situation.
takeBillPhoto(billId: string, imageURL: string) {
    const storageRef = firebase.storage().ref(this.userId);
    return storageRef.child(billId).child('billPicture')
        .putString(imageURL, 'base64', { contentType: 'image/png' })
        .then(pictureSnapshot => {
            this.billList.update(billId, { picture: pictureSnapshot.downloadURL });
        });
}

Cloud Functions for Firebase: write to database on fileupload

I have a cloud function (a modified version of the generateThumbnail sample function). I want to create a thumbnail, but I also want to get the image width and height and update the size value in the database.
To break this problem up:
I need to get a snapshot of the current database
Navigate to /projects in the database
Find the correct key using the filename (project.src == fileName)
Get the size of the image (done)
Update project.size to the new value
I did some research, but I only found the functions.database.DeltaSnapshot interface, which you get when you listen with functions.database.ref(...).onWrite(event => {})
projects.json:
[{
    "name": "lolipop",
    "src": "lolipop.jpg",
    "size": ""
}, {
    "name": "cookie",
    "src": "cookie.jpg",
    "size": ""
}]
Database interaction can be done using the firebase-admin package. Check out this sample to see how a function that is not triggered by a database write accesses the database.
Accessing child nodes by the value of one of their keys is a bit clunky in Firebase; more on that at the end.
For each concern:
1 & 2: create a reference to the projects key in your DB
3: find the project you're looking for by its src key
5: update the project
// create reference
const projectsRef = admin.database().ref('projects');
// create query
const srcProjectQuery = projectsRef.orderByChild('src').equalTo(fileName);
// read objects that fit the query
return srcProjectQuery.once('value').then(snapshot => {
    const updates = {};
    snapshot.forEach(childSnapshot => {
        updates[`${childSnapshot.key}/size`] = fileSize;
    });
    return projectsRef.update(updates);
});
Since it looks like you're treating the src values as unique, a lot of headache can be avoided by using the src as the key for each project object. This would simplify things to:
const projectsRef = admin.database().ref(`projects/${src}`);
projectsRef.update({'size': fileSize});
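One wrinkle with using src directly as the key: Realtime Database keys may not contain ".", "#", "$", "[", "]" or "/", and filenames like "lolipop.jpg" contain a dot. The re-keying step can be sketched like this (the dot-to-underscore encoding is my assumption, not from the original answer):

```javascript
// Sketch: re-key the projects array by each project's src value so a
// single path like `projects/${key}` can be updated without a query.
// Realtime Database keys cannot contain ".", so the dot in the
// filename is replaced with "_" here.
function keyProjectsBySrc(projects) {
    var bySrc = {};
    projects.forEach(function (p) {
        var key = p.src.replace(/\./g, '_');
        bySrc[key] = { name: p.name, size: p.size };
    });
    return bySrc;
}
```

Whatever encoding you pick, apply the same one when building the update path from the incoming fileName.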

Download an image from its url and directly upload it to AWS

I can't find a satisfying answer to my question. Given an image URL, I want to download it (without saving it to disk) and immediately upload it to an AWS bucket. Here is my code:
self.downloadImage = function(url) {
    let response = HTTP.get(url, {
        encoding: null // for binary
    })
    // note: the original check `!response.headers[...] == 'image'` negated
    // the string before comparing; it should be a !== comparison
    if (response.headers['content-type'].split('/')[0] !== 'image') {
        throw new Error("not an image")
    }
    return {
        data: response.content,
        contentType: response.headers['content-type']
    }
}
self.uploadImage = function(websiteName, imgUrl, callback) {
    // we retrieve the image
    let image = downloadImage(imgUrl)
    let imageName = self.extractImageName(imgUrl)
    let filename = 'img/' + websiteName + "/" + imageName
    let newUrl = `https://s3.${Meteor.settings.AWS.REGION}.amazonaws.com/${Meteor.settings.S3.BUCKET_NAME}/$(unknown)`
    // makes the async function sync-like
    let putObjectSync = Meteor.wrapAsync(s3Client.putObject, s3Client)
    try {
        let res = putObjectSync({
            Body: image.data,
            ContentType: image.contentType,
            ContentEncoding: 'base64',
            Key: filename,
            ACL: 'public-read',
            Bucket: Meteor.settings.S3.BUCKET_NAME
        })
        return newUrl
    } catch(e) {
        return null
    }
}
Everything works fine, except that the image seems corrupted. So far I have tried:
using aldeed:http in order to set the encoding to null when downloading, which seems a good strategy for images
not using it, and passing the text content of the response directly as the upload body
adding base64 as the ContentEncoding on the AWS upload
Still corrupted. I feel very close to the solution, as the image has the correct type and file size, but it still won't display in the browser or on my computer. Any idea how to correctly encode/retrieve the data?
Okay, I found the answer myself:
aldeed:http allows adding a responseType parameter to the GET request. We simply need to set this option to 'buffer' so that we get the data as a Buffer, and then pass that buffer, with no transformation, as the Body of the upload function.
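To see why the non-buffer attempts produced a corrupted file: decoding raw image bytes as text and re-encoding them mangles every byte above 0x7F. A minimal demonstration in plain Node, independent of Meteor or S3:

```javascript
// JPEG data starts with the bytes 0xFF 0xD8. Decoding them as UTF-8
// turns each invalid byte into the replacement character U+FFFD, which
// re-encodes as three bytes (0xEF 0xBF 0xBD) - the data is destroyed.
const original = Buffer.from([0xff, 0xd8, 0xff, 0xe0]);
const asText = original.toString('utf8');         // lossy decode
const roundTripped = Buffer.from(asText, 'utf8'); // re-encode
console.log(roundTripped.equals(original));       // false: bytes were mangled
```

Requesting the response as a buffer and passing it straight through as the upload Body avoids any decode/encode step entirely.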

Trouble reading sqlite3 database columns of type blob with sql.js

So I am using the sql.js library, i.e. the port of SQLite to JavaScript, which can be found here: https://github.com/kripken/sql.js.
This is my code to open and read a database that comes from a flat file stored locally.
First, a local file is selected via this HTML:
<input type="file" id="input" onchange="handleFiles(this.files)">
The JS code behind the scenes is as follows:
function handleFiles(files) {
    var file = files[0];
    var reader = new FileReader();
    reader.readAsBinaryString(file);
    openDbOnFileLoad(reader);
    function openDbOnFileLoad(reader) {
        setTimeout(function () {
            if (reader.readyState == reader.DONE) {
                //console.log(reader.result);
                db = SQL.open(bin2Array(reader.result));
                execute("SELECT * FROM table");
            } else {
                //console.log("Waiting for loading...");
                openDbOnFileLoad(reader);
            }
        }, 500);
    }
}
function execute(commands) {
    commands = commands.replace(/\n/g, '; ');
    try {
        var data = db.exec(commands);
        console.log(data);
    } catch (e) {
        console.log(e);
    }
}
function bin2Array(bin) {
    'use strict';
    var i, size = bin.length, ary = [];
    for (i = 0; i < size; i++) {
        ary.push(bin.charCodeAt(i) & 0xFF);
    }
    return ary;
}
Now this works and I can access all the columns and values in the database; however, there is one column of type blob, and it just shows up as empty. Any ideas how I can access the contents of this blob?
The correct answer!
So what I was trying to ask in this question is simply how to read the contents of a column of type blob using sql.js. The correct answer is to specify the column names in the query and, for the column that contains blob data, to get its contents using the hex function, i.e. select column1, hex(column2) from table. It was by no means a question about the most efficient way of doing this. I have also written a blog post about this.
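For completeness, the hex string returned by hex(column2) can be turned back into bytes on the JavaScript side with a small helper (not part of sql.js; the name is mine):

```javascript
// Decode the uppercase hex string produced by SQLite's hex() function
// back into an array of byte values (two hex digits per byte).
function hexToBytes(hex) {
    var bytes = [];
    for (var i = 0; i < hex.length; i += 2) {
        bytes.push(parseInt(hex.substr(i, 2), 16));
    }
    return bytes;
}

// e.g. var rows = db.exec("SELECT column1, hex(column2) FROM table");
//      then pass the hex(column2) value through hexToBytes
```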
Here is a slightly modified copy of the function responsible for initializing my sqlite database:
sqlite.prototype._initQueryDb = function(file, callback) {
    self = this;
    var reader = new FileReader();
    // Fires when the file blob is done loading to memory.
    reader.onload = function(event) {
        var arrayBuffer = event.target.result,
            eightBitArray = new Uint8Array(arrayBuffer),
            database = SQL.open(eightBitArray);
        self._queryDb = database;
        // Trigger the callback to the calling function
        callback();
    }
    // Start reading the file blob.
    reader.readAsArrayBuffer(file);
}
In this case, file is a local sqlite database handle that I get from an HTML input element. I specify a function to call when a change event happens to that input and get the blob from the resulting event.target.files[0] object.
For the sake of brevity on my part I left some things out but I can throw together a smaller and more simplified example if you are still struggling.
The answer is: with kripken's sql.js, which you mentioned above, you can't. At least as of today (May 2014). The original author doesn't maintain sql.js anymore.
However, I'm the author of a fork of sql.js, available here: https://github.com/lovasoa/sql.js .
This fork brings several improvements, including support for prepared statements, in which, contrary to the original version, values are handled in their natural JavaScript type, and not only as strings.
With this version you can handle BLOBs (both for reading and writing); they appear as Uint8Arrays (which you can, for instance, convert to an object URL to display the contents to your users).
Here is an example of how to read blob data from a database:
var db = new SQL.Database(eightBitArray); // eightBitArray can be a Uint8Array
var stmt = db.prepare("SELECT blob_column FROM your_table");
while (stmt.step()) { // Executed once for every row of result
    var my_blob = stmt.get()[0]; // Get the first column of result
    // my_blob is now a Uint8Array, do whatever you want with it
}
db.close(); // Free the memory used by the database
You can see the full documentation here: http://lovasoa.github.io/sql.js/documentation/
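As a concrete example of the "convert to an object URL" suggestion above, a Uint8Array blob can also be turned into a data: URL for small images (a sketch; blobToDataUrl is a made-up helper name, not part of sql.js):

```javascript
// Build a data: URL from blob bytes so an <img> element can display them.
// Suitable for small images; for larger ones prefer URL.createObjectURL.
function blobToDataUrl(bytes, mimeType) {
    var binary = '';
    for (var i = 0; i < bytes.length; i++) {
        binary += String.fromCharCode(bytes[i]);
    }
    return 'data:' + mimeType + ';base64,' + btoa(binary);
}

// e.g. imgElement.src = blobToDataUrl(my_blob, 'image/jpeg');
```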
