I have been building a video uploader with Meteor and have been using CollectionFS to store the files. Unfortunately, under heavy load it takes a very long time to load the videos for display; all of the video files are around 50MB. To relieve the strain on the collection, I want to save all files currently in the CollectionFS collection to disk so I can place them on a CDN.
I do not know how to save the files to the hard drive, so any enlightenment on the subject would be helpful. The strain on the server is forcing Meteor to run out of memory a little too often.
This is in fact a MongoDB question rather than a Meteor one. Taken from the official documentation, the following example shows how to do this for one file. Of course, you can use find instead of findOne and iterate over the result to write each individual file.
// open the GridFS bucket named "contracts"
GridFS myContracts = new GridFS(myDatabase, "contracts");
// retrieve GridFS object "smithco"
GridFSDBFile file = myContracts.findOne("smithco");
// saves the GridFS file to the file system
file.writeTo(new File("/tmp/smithco.pdf"));
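If you are doing this from Meteor (Node.js) instead, here is a rough, untested sketch of the same dump using the Node.js MongoDB driver's GridFSBucket; the connection string, database name, bucket name ("cfs_gridfs.videos") and output directory are assumptions you will need to adjust to your deployment:
const fs = require('fs');
const path = require('path');
const { MongoClient, GridFSBucket } = require('mongodb');

async function dumpAllFiles() {
  const client = await MongoClient.connect('mongodb://localhost:27017');
  const db = client.db('meteor');
  // CollectionFS typically prefixes its GridFS collections with "cfs_gridfs.<storeName>"
  const bucket = new GridFSBucket(db, { bucketName: 'cfs_gridfs.videos' });

  // Iterate over every file document and stream each one to disk.
  for await (const fileDoc of bucket.find({})) {
    const dest = path.join('/tmp/videos', fileDoc.filename);
    await new Promise((resolve, reject) => {
      bucket.openDownloadStream(fileDoc._id)
        .on('error', reject)
        .pipe(fs.createWriteStream(dest))
        .on('finish', resolve)
        .on('error', reject);
    });
  }

  await client.close();
}

dumpAllFiles().catch(console.error);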
There is also an official mongofiles utility that you can use for the same purpose:
mongofiles -d records get smithco.pdf
I'm using a GCS bucket for WordPress (the wp-stateless plugin).
After creating and uploading a media file to a bucket, I copy it to another bucket (a duplicate). But the generation number of each object changes (seemingly at random).
My question is: how do I keep the generation number in the destination bucket the same as in the source bucket?
Thanks in advance.
Basically, there is no official way of keeping the same version and generation numbers when copying files from one bucket to another. This is working as intended, and it is intuitive: the generation number refers to this object (which resides in this bucket), and when you copy it to another bucket it is not the same object (it's a copy), so it cannot keep the same generation number.
I can think of a workaround: keep your own record of the objects' versions somewhere, and then make an organized copy through the API. This would mean dumping the bucket, but you would need a list of all the objects and their versions and then add them in sequential order (which sounds like a lot of work). Alternatively, you could keep your own versioning (or mirror the same versioning) in the metadata of each object.
If your application depends on the objects' versioning, I would recommend using custom metadata. Basically, if you do your own versioning using custom metadata, that metadata is kept when copying the objects to a new bucket.
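For illustration, here is a hedged sketch of that custom-metadata workaround using the Node.js @google-cloud/storage client; the bucket names, object name and the "myVersion" metadata key are placeholders, not an official API for preserving generations:
const { Storage } = require('@google-cloud/storage');
const storage = new Storage();

async function copyWithOwnVersion(srcBucket, destBucket, objectName) {
  const src = storage.bucket(srcBucket).file(objectName);

  // Read the source object's metadata, including any custom fields.
  const [meta] = await src.getMetadata();
  const myVersion = (meta.metadata && meta.metadata.myVersion) || '1';

  // Copy the object; the copy still gets its own, new generation number.
  const [copy] = await src.copy(storage.bucket(destBucket).file(objectName));

  // Re-attach your own version so both objects share the same logical version.
  await copy.setMetadata({ metadata: { myVersion } });
}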
A feature request has already been filed about this, but it is marked as currently infeasible.
However, you can raise a new feature request here.
I'm still new to Flutter and Firebase. I understand how to store and retrieve images and display them in the app.
But how do I go about synchronizing the app's local asset folder with an asset folder stored in Firebase Storage? My intention is to check the cloud folder for a recently uploaded image, such as an icon, and download it to the app's local folder. If a file is removed from cloud storage, it should also be removed from the local assets folder, mirroring it.
I need a way to compare the local AssetManifest.json to the one on Firebase Storage. I just need a little direction/algorithm to start with. Thanks for the help!
There is nothing specific built into Cloud Storage's Firebase SDK for this, so you'll have to build it in your own application code.
Using only the Cloud Storage for Firebase API
If you're just using Cloud Storage, you'll have to:
List all files that you're interested in from Storage.
Loop over the files.
Get the metadata for each file and check when the file was last updated.
Compare that to when you wrote the local file.
Download the file if it is new or modified.
This approach will work, but it requires quite a few calls to the Storage API, because there is no API that gives you only the files that were modified since a specific date/time.
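To make that loop concrete, here is a rough sketch with the Firebase Web SDK (v8-style API; the Flutter plugin exposes equivalent calls). The "assets" folder, the lastSyncTime bookkeeping and the actual download step are assumptions for illustration only:
async function syncAssets(lastSyncTime) {
  const res = await firebase.storage().ref('assets').listAll();

  for (const itemRef of res.items) {
    const meta = await itemRef.getMetadata();
    // meta.updated is the timestamp of the object's last modification.
    if (new Date(meta.updated) > lastSyncTime) {
      const url = await itemRef.getDownloadURL();
      // ...fetch `url` here and overwrite the local copy...
    }
  }
}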
Storing metadata in a cloud database
You could also consider storing the metadata in a cloud database, like Firebase's Realtime Database or Cloud Firestore, and then use the query capabilities of that database to retrieve only files that were modified since your device last synchronized.
The recipe then becomes:
Determine when we last synchronized, which is a value you'll want to store in local storage/shared preferences.
Execute a query to the database to determine which files were added/modified since then.
Loop over the query results and...
Download each file that was modified.
Here, only steps 2 and 4 make calls to external APIs, so it is likely to be faster and cheaper to execute (but more work for you to write initially).
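As a hedged sketch of step 2 with Cloud Firestore (v8-style web API): the "assets" collection and its "updatedAt" field are hypothetical, and you would have to write those documents yourself whenever a file is uploaded:
async function getChangedAssets(lastSyncTime) {
  const snapshot = await firebase.firestore()
    .collection('assets')
    .where('updatedAt', '>', lastSyncTime)
    .get();

  // Each document is expected to hold something like { path: 'assets/icon.png', updatedAt: ... }
  return snapshot.docs.map((doc) => doc.data());
}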
I am trying to move files into separate folders in Firebase Storage once they have been uploaded. As it turns out, you cannot achieve this with the JavaScript Web Client SDK for Storage. However, it appears that you can do so with the Admin SDK for Storage using Firebase Functions. So that is what I am trying to do. I understand that you need to first download a file into your Firebase Function and then re-upload it into a new folder in Storage.
To download a file, I need to pass its reference from the client and here is where it gets confusing to me. I am currently getting all the uploaded files in the client via the listAll() function which returns items and prefixes. I am wondering whether or not I can use either the items or the prefixes to then download the files into Firebase Functions using them (items or prefixes). Alternatively, I can pass the URLs. However, the question is, which method do I use to get and download them in Functions afterwards?
I know of admin.storage.object as explained in https://firebase.google.com/docs/storage/extend-with-functions#trigger_a_function_on_changes. However, does it handle multiple files? In other words, the object, as I understand it, is one file that is uploaded to Storage, and you can use its attributes such as object.bucket or object.name to access more information. However, what if multiple files are uploaded at the same time; does it handle them one by one? Also, if I am passing the references or URLs of the files that need to be downloaded from the client, is admin.storage.object the right choice? It seems to simply process all the files uploaded to Storage, instead of getting any references from the client.
Further, there is a description of how to download a file (https://firebase.google.com/docs/storage/extend-with-functions#example_image_transformation) which is this code: await bucket.file(filePath).download({destination: tempFilePath});
I understand that the filepath is basically the name of the file that is already in Storage (ex. /someimage). But what if there are other files with the same name? Might the wrong file be downloaded? And how do I make sure that the filepath is the file that I passed from the client?
Let me know what your thoughts are and whether or not I am heading in the right direction. If you include a code in your answer, please write it in JavaScript for the Web. Thank you.
Here are some points that could help:
In GCP Storage there are technically no folders; GCS emulates a directory structure by using / in the names of objects.
When you set up a Cloud Function triggered by a GCS object change, each object change is an event, and each event triggers one invocation of the function (you might have a bucket for unprocessed files that triggers the function, and move the files to a different bucket once processed).
You might consider using the REST API (or the Admin SDK, as sketched below) to move/copy/rename the objects without having to download them.
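For example, here is a rough sketch of a callable Cloud Function that "moves" an object by renaming it server-side, with no download involved; the function name, the "processed/" prefix and the { path } payload are assumptions, not part of any official recipe:
const functions = require('firebase-functions');
const admin = require('firebase-admin');
admin.initializeApp();

exports.moveFile = functions.https.onCall(async (data) => {
  const bucket = admin.storage().bucket();
  // data.path is the full object name the client got from listAll(),
  // e.g. "uploads/video.mp4"; GCS "folders" are just name prefixes.
  const destination = 'processed/' + data.path.split('/').pop();
  await bucket.file(data.path).move(destination);
  return { moved: destination };
});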
As a side note, the question is a little too broad, but hopefully these points help clarify things for you.
I'm an ASP.NET beginner, currently working on an "upload/download file" project with ASP.NET and VB.NET as the code-behind language (like SkyDrive's web interface).
What I want to ask about is uploading files to the server: must we store the file path, size, and accessed or created dates in the database? As we know, we can use directory listing in System.IO.
Thanks for your help.
You definitely want to store the path of the file. You want a way to find the file ;) Maybe later you will have multiple servers, replication or other fancy things.
For the rest, it depends a bit on the type of website. If it's going to get high traffic, then store this information in the database; this will limit the number of I/O calls (which are very slow). It will also be a lot easier to handle sorting and queries (sort by date, pull only the read-only files, ...).
A database will also help if you want to show history or statistics.
You can save the file in a directory and save the path of that file in the database. You can also store the size and creation date of that file in the DB. Storing the file itself in the DB is a bit difficult, so rather than doing that, save the file in a directory and save its path in the DB.
You could store the file information in a database to build some extra features, like avoiding duplicate files, because searching the database is faster; searching the filesystem always starts a recursive function call.
I want to import a CSV file (already uploaded to blob storage) in Azure.
For example, I have uploaded test.csv to blob storage; now I just want to import that test.csv file in .NET (Azure), and after importing I will insert that data into an Azure database. I am using C# .NET. Please suggest how I can achieve this. I want to follow the steps below:
Create a CSV file with all rows.
Upload it as a blob.
Parse it with a Worker role and insert it into the SQL Azure DB.
Thanks.
A bit more clarification around your question would be helpful. Are you trying to upload a file to Azure blob storage? Download it from there for your app to consume? What language(s) are you using?
There are plenty of examples of loading files into and pulling them from Azure blob storage using .NET, and at least a handful for doing it with Java or PHP.
If you can clarify what you're trying to do, I'd be happy to point you at the appropriate ones. :)
-- answer based on comment update --
The steps for retrieving the blob are fairly straightforward:
1) Create your Azure storage client using your Azure storage credentials
add a using clause:
using Microsoft.WindowsAzure.StorageClient;
get a client for accessing blob storage:
CloudStorageAccount account = CloudStorageAccount.FromConfigurationSetting("<nameofyourconfigsetting>");
CloudBlobClient tmpClient = account.CreateCloudBlobClient();
get a reference to the blob you want to download:
CloudBlob myBlob = tmpClient.GetBlobReference("container/myblob.csv");
2) read the blob & save to a file
myBlob.DownloadToFile("<path>/myblob.csv");
The save location can be the %temp% location, or if it's a large file you may want to allocate some local storage space and put it there. The other thing to keep in mind is that if you are doing this in a role instance, you'll need measures in place to prevent two instances from concurrently trying to process the same file. If the file is small enough, you can probably even keep it as a memory stream and process it that way; in that case, you can use the DownloadToStream method of the CloudBlob object.
For additional reading, I'd recommend checking out the MSDN library for the details on the StorageClient namespace and what CloudBlob contains. Additionally, the Windows Azure Platform Training Kit has some good labs to help you get a better understanding of how Azure Storage works.