I received an existing Firebase project from another developer. I was setting up a bucket for making backups programmatically and I found a bucket named "backups" containing files with the extension data.json.gz that are created every day at 3 am, but I'm not sure what they are. Does anyone know what they could be? The client is asking me if there is a backup of the database, but as far as I know Firestore backups include a file with the extension overall_export_metadata.
As mentioned by Alexander N., (on Linux) run gzip -d filename in your terminal; this will create the uncompressed file, and you'll be able to read the data. In this case, these files are backups of one of the Firestore collections and the Firestore DB rules.
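If you'd rather peek inside one of those files without unzipping it on disk first, here is a minimal Python sketch (assuming each file is a single JSON document; the file name is just an example):

import gzip
import json

# Example file name -- use one of the files downloaded from the bucket.
with gzip.open("data.json.gz", "rt") as f:
    data = json.load(f)

# Peek at the top-level keys/items to see what was exported.
print(list(data)[:10])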
I have a backup of a DynamoDB table and I want to download it to my localhost in order to restore it on a local DynamoDB instance. I couldn't find any documentation, and every tool I found, like dynamodump, creates an on-demand backup and then downloads it. Can anyone help me?
Your best bet is to do an Export to S3 and then you’ll have direct access to the objects in S3. Hopefully that satisfies your need?
The backup you mention belongs to DynamoDB and is not directly accessible to you; its only purpose is to restore DynamoDB tables in the cloud.
You have two options:
1. Export to S3
As hunterhacker stated, you can do an export to S3 and then download the data from there.
2. Scan
A more cost-effective solution is to write a local script which does a Scan (or, if there is a large amount of data, a parallel Scan) and stores the data locally, as sketched below.
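A minimal sketch of such a script in Python with boto3 (the table and output file names are placeholders; a parallel Scan would add the Segment/TotalSegments parameters):

import json

import boto3

# Placeholder table name -- replace with your own.
TABLE_NAME = "my-table"

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table(TABLE_NAME)

# Scan pages through the table until LastEvaluatedKey is absent.
items = []
scan_kwargs = {}
while True:
    response = table.scan(**scan_kwargs)
    items.extend(response["Items"])
    last_key = response.get("LastEvaluatedKey")
    if not last_key:
        break
    scan_kwargs["ExclusiveStartKey"] = last_key

# default=str converts DynamoDB's Decimal values to plain strings.
with open("table-dump.json", "w") as f:
    json.dump(items, f, default=str)

From there you can replay the dump into your local DynamoDB instance with put_item or batch_write_item calls against a client created with endpoint_url pointed at the local endpoint.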
I currently have a Firebase database and exported it on a schedule according to the following manual.
https://firebase.google.com/docs/firestore/solutions/schedule-export#gcp-console
Inside the main function I have collectionIds: [] to export everything. Once I ran the backup, I noticed that my database, which was calculated to be 2.02 MiB, was backed up to a folder with size 96.21 KiB. This makes me wonder whether the export actually backed up the photos, or whether the compression is really that good. Is there a way to know for sure if the photos are contained? Thanks.
Firestore exports will contain all data in all documents with all fields present. The export will not attempt to crawl any URLs in those fields, or try to get any other external data to save as well. You will have to handle external data separately.
There is almost certainly no data missing in the export. The difference between the size of your database as reported by Firestore and the size reported in Cloud Storage can be attributed to the fact that the Firestore figure includes all of the indexes it builds for serving queries efficiently. Those indexes do not need to be exported; they can be rebuilt after import.
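For reference, the scheduled export in that guide boils down to a call to the Firestore Admin API's exportDocuments method; here is a minimal Python sketch of the same call (the project and bucket names are placeholders), where an empty collection_ids list means every collection is exported:

from google.cloud import firestore_admin_v1

# Placeholder project and bucket -- replace with your own.
client = firestore_admin_v1.FirestoreAdminClient()
name = client.database_path("my-project", "(default)")

operation = client.export_documents(
    request={
        "name": name,
        "output_uri_prefix": "gs://my-backup-bucket",
        "collection_ids": [],  # empty list = export all collections
    }
)

# Blocks until the export finishes, then prints where it landed.
print(operation.result().output_uri_prefix)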
As the title suggests, can I POST a single JSON file directly to my Firestore database with https://firestore.googleapis.com/v1beta1/{database}/documents:commit, and will it be processed, i.e. added to the collections etc.? Or should I go with POST projects.databases.documents.createDocument? I was reading this documentation.
I want to put JSON files from different sources into my Firestore database to build up my collection.
And where should I put the filename of the JSON file that I want to upload?
Thanks!!
You can see here [1] the usage of both calls:
documents.commit = Commits a transaction, while optionally updating documents
documents.createDocument = Creates a new document
To use the JSON in the API call you need to send a POST request; check this question [2].
Also, regarding your last comment, you can start collections and add documents using the Firestore UI, but also you can do that using client libraries in different languages (Python, Java, Go...). Here is a list of "How to"s regarding Firestore [3].
In case you think that some features are missing, you can always file a Feature Request following this link [4] (as Firestore is still not there, I would choose Datastore), but keep in mind that Firestore is still in Beta.
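To make the createDocument route concrete: the REST API will not ingest a raw JSON file as-is; each field has to be wrapped in a typed value object. Here is a minimal Python sketch (the project, collection, token, and field names are all placeholders):

import json

import requests

# Placeholder project, collection, and OAuth token -- replace with your own.
PROJECT = "my-project"
COLLECTION = "articles"
ACCESS_TOKEN = "ya29.placeholder"

url = (
    f"https://firestore.googleapis.com/v1beta1/projects/{PROJECT}"
    f"/databases/(default)/documents/{COLLECTION}"
)

# Firestore expects typed values (stringValue, integerValue, ...),
# so the contents of your JSON file must be mapped into this shape.
body = {
    "fields": {
        "title": {"stringValue": "hello"},
        "views": {"integerValue": "42"},
    }
}

resp = requests.post(
    url,
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    data=json.dumps(body),
)
print(resp.status_code, resp.json())

Note that the filename itself does not go into the request anywhere; you read the file locally, transform its contents into the fields map, and POST that.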
I am using the REST API of the Firebase Realtime Database from an App Engine Standard project with Java. I am able to successfully put data under different locations, but I don't know how to ensure atomic updates to different paths.
To put some data separately at a specific location I am doing:
requestFactory.buildPutRequest("dbUrl/path1/17/", new ByteArrayContent("application/json", json1.getBytes())).execute();
requestFactory.buildPutRequest("dbUrl/path2/1733455/", new ByteArrayContent("application/json", json2.getBytes())).execute();
Now, to ensure that when saving /path1/17/ a /path2/1733455/ is also saved, I've been looking into multi-path updates and batched writes (https://firebase.google.com/docs/firestore/manage-data/transactions#batched-writes, only available in Cloud Firestore?). However, I did not find whether this feature is also available for the REST API of the Firebase Realtime Database, or only through the Firebase Admin SDK.
The example here shows how to do a multi-path update at two locations under the "users" node.
curl -X PATCH -d '{
"alanisawesome/nickname": "Alan The Machine",
"gracehopper/nickname": "Amazing Grace"
}' \
'https://docs-examples.firebaseio.com/rest/saving-data/users.json'
But I don't have a common upper node for path1 and path2.
I tried setting the URL to the database URL without any nodes (https://db.firebaseio.com.json) and adding the nodes in the JSON object sent, but I get an error: nodename nor servname provided, or not known.
This would be possible with the Admin SDK I think, according to this blog post: https://firebase.googleblog.com/2015/09/introducing-multi-location-updates-and_86.html
Any ideas if these atomic writes can be achieved with the REST API?
Thank you!
If the updates are going to a single database, there is always a common path.
In your case you'll run the PATCH command against the root of the database:
curl -X PATCH -d '{
"path1/17": json1,
"path2/1733455": json2
}' 'https://yourdatabase.firebaseio.com/.json'
The key difference with your URL seems to be the / before .json. Without that you're trying to connect to a domain on the json TLD, which doesn't exist (yet) afaik.
Note that the documentation link you provide for Batched Updates is for Cloud Firestore, which is a completely separate database from the Firebase Realtime Database.
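Since any HTTP client can issue that root-level PATCH, here is the equivalent as a minimal Python sketch with requests (the database URL and payloads are placeholders); all paths in a single PATCH are applied atomically:

import json

import requests

# Placeholder database URL and payloads -- replace with your own.
DB_ROOT = "https://yourdatabase.firebaseio.com/.json"

update = {
    "path1/17": {"name": "example"},      # your json1
    "path2/1733455": {"ref": "17"},       # your json2
}

# Either both locations are written or neither is.
# Append ?auth=... or send an OAuth token if your rules require it.
resp = requests.patch(DB_ROOT, data=json.dumps(update))
resp.raise_for_status()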
I researched several places and could not find any direction on what options exist to archive old data from Cosmos DB into cold storage. I see that for DynamoDB in AWS it is mentioned that you can move DynamoDB data into S3, but I'm not sure what the options are for Cosmos DB. I understand there is a time-to-live option where the data will be deleted after a certain date, but I am interested in archiving rather than deleting. Any direction would be greatly appreciated. Thanks
I don't think there is a single-click built-in feature in CosmosDB to achieve that.
Still, as you mentioned appreciating any direction, I suggest you consider the DocumentDB Data Migration Tool.
Notes about the Data Migration Tool:
- you can specify a query to extract only the cold data (for example, by a creation date stored within documents),
- it supports exporting to various targets (JSON file, blob storage, DB, another Cosmos DB collection, etc.),
- it compacts the data in the process: it can merge documents into a single array document and zip it,
- once you have the configuration set up, you can script it to be triggered automatically using your favorite scheduling tool,
- you can easily reverse the source and target to restore the cold data to the active store (or to dev, test, backup, etc.).
To remove exported data you could use the mentioned TTL feature, but that could cause data loss should your export step fail. I would suggest writing and executing a Stored Procedure to query and delete all exported documents with a single call. That SP would not execute automatically but could be included in the automation script and executed only if the data was exported successfully first.
See: Azure Cosmos DB server-side programming: Stored procedures, database triggers, and UDFs.
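The same cleanup can also be done client-side rather than in a stored procedure; here is a minimal sketch with the azure-cosmos Python SDK (the endpoint, key, container names, and the createdAt/pk field names are all placeholders):

from azure.cosmos import CosmosClient

# Placeholder endpoint, key, and names -- replace with your own.
client = CosmosClient("https://my-account.documents.azure.com:443/", "my-key")
container = client.get_database_client("my-db").get_container_client("my-container")

# Delete documents older than the archival cutoff, assuming each
# document stores a creation timestamp in "createdAt" and is
# partitioned on "pk". Run this only after the export succeeded.
cutoff = "2020-01-01T00:00:00Z"
docs = container.query_items(
    query="SELECT c.id, c.pk FROM c WHERE c.createdAt < @cutoff",
    parameters=[{"name": "@cutoff", "value": cutoff}],
    enable_cross_partition_query=True,
)
for doc in docs:
    container.delete_item(item=doc["id"], partition_key=doc["pk"])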
UPDATE:
Cosmos DB has since added the Change feed. This really simplifies writing a carbon copy somewhere else.
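A minimal sketch of tailing the change feed with the azure-cosmos Python SDK and mirroring every document into a hypothetical archive container:

from azure.cosmos import CosmosClient

# Placeholder endpoint, key, and names -- replace with your own.
client = CosmosClient("https://my-account.documents.azure.com:443/", "my-key")
db = client.get_database_client("my-db")
live = db.get_container_client("my-container")
archive = db.get_container_client("my-archive")  # hypothetical cold store

# Read all changes from the beginning and copy them into the archive.
for doc in live.query_items_change_feed(is_start_from_beginning=True):
    archive.upsert_item(doc)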