Using firebase-import results in "killed" with no explanation - firebase

Although I am able to upload a 5MB JSON file, I can't upload a 50MB JSON file using firebase-import, the command-line tool for uploading to Firebase.
When I run the upload on the 50MB JSON file, it prints:
"
Reading ... [path to json]
Preparing JSON for import... (may take a minute)
Killed
"
It does not provide any more information. I tested this multiple times on a 5MB file and had no issues.
The CLI's documentation states that the tool has been tested with files up to 400MB, so I do not think the size itself should be a problem. However, as I said, the only difference between the file that fails to upload and the file that uploads is the size.
Has anyone seen anything like this? Does anyone have any suggestions for diagnosing this? Thank you.
I have searched the web and SO for any similar questions but found none.
firebase-import --database_url [my url] --path [my path] --json [smaller file works here but larger doesn't] --service_account [path]
Expected: An upload progress bar followed by my data being visible on the firebase GUI.
Actual Result: A simple "Killed" with no information as to why.

As it turns out, this is a memory limit issue. Since I am using an EC2 instance with little memory, preparing the JSON took more memory than was available. Uploading via the GUI works fine for the 50MB file.
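For anyone hitting the same thing: a bare "Killed" on Linux usually means the kernel's OOM killer terminated the process, which the kernel log can confirm. Below is a minimal sketch, assuming a typical Linux EC2 instance; firebase-import is a Node.js tool, so the heap flag shown is a generic Node option (and the 4096 MB value and the $(which firebase-import) invocation are assumptions), not a firebase-import flag.

# Check whether the OOM killer terminated the import (may require sudo)
$ dmesg | grep -iE 'killed process|out of memory'
# If the instance has spare memory, give Node a larger heap before importing
$ node --max-old-space-size=4096 "$(which firebase-import)" \
    --database_url [my url] --path [my path] --json [large file] --service_account [path]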

Related

Resume download of _.gstmp files after downloading Sentinel-2 SAFE products using the sen2r R package

I have downloaded a large number of Sentinel-2 SAFE files using the R package 'sen2r', which implements a Google Cloud download method to retrieve products stored in the Long Term Archive. This has worked for me, but after checking the files I found a fair number of empty files with the suffix _.gstmp, which according to this represent partially downloaded temporary files that are supposed to be resumed by gsutil. I have re-run the sen2r() command (with the server = "gcloud" setting), but it does not resume and correct the downloads because the folders are already there.
I would like to resume downloading just the _.gstmp files, as it took over a week to download all of the SAFE products and I don't want to start over from the beginning. I'm guessing I can fix this by using gsutil directly, but I'm a bit out of my element, as this is my first experience with Google Cloud, and I can't ask the sen2r author, who no longer has time to respond to issues on GitHub. If you have any tips for resuming these downloads manually with the gsutil command line, it would be much appreciated.
I have searched Stack Exchange as well as the sen2r manual and GitHub issues, and have not found any other reports of this problem.
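Not a definitive fix, but a sketch of how this might be approached with gsutil directly, assuming you can identify the bucket path sen2r was downloading from (the bucket, product path, and local folder below are placeholders): gsutil rsync only fetches objects that are missing locally or whose size/timestamp differs, so already-complete files are skipped, and the leftover _.gstmp temporaries can be cleaned up afterwards.

# <bucket>, <product_path> and the local .SAFE folder are placeholders for one affected product
$ gsutil -m rsync -r "gs://<bucket>/<product_path>.SAFE" "./<product>.SAFE"
# Remove the leftover partial-download temporaries once the real files are in place
$ find "./<product>.SAFE" -name "*_.gstmp" -delete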

How to download multiple files from the Firebase Storage Console?

I have an images directory in Firebase Storage and I am trying to download all the files in that directory from the console. It gives me the option to select all files, and a download button appears, but when I click it only one image is downloaded.
Is there a way to download all the images via the Firebase Console?
You can use the gsutil tool to download all files from your Firebase Storage bucket. Use the cp command:
gsutil -m cp -r gs://{{bucket_url}} {{local_path_to_save_downloads}}
-m performs the copy with multiple parallel threads/processes, which speeds things up when downloading a large number of files
-r copies an entire directory tree recursively
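For example, with a project's default Firebase Storage bucket the call might look like the sketch below; the project ID and local folder are placeholders.

# <project-id>.appspot.com is the default bucket name for a Firebase project; adjust to yours
$ gsutil -m cp -r gs://<project-id>.appspot.com/images ./firebase-images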
Just had the same issue. Turns out it's a known issue.
Here's the response from Firebase support:
This is a known bug when trying to download directly on the Storage Console. Our engineers are working on getting it fixed, however I can't provide any timelines right now. We apologize for the inconvenience this may have caused you. For the time being, you may download the file by right-clicking the image previewed, then choose "Save image as...".

Alfresco accepting only .txt files while uploading and giving 500 internal server error for other files

Today I found very weird behavior in Alfresco. When I upload any .txt file via the Share UI, it is uploaded successfully, but if I upload any other type of file, it gives a 500 internal server error, as shown in the attached screenshot; you can also see in the image that the .txt file was uploaded successfully.
The strange thing is that there is no error in the server logs.
Has anyone faced a similar issue?
Also, since it works for .txt files, is this a transformation issue?
Please suggest possible causes of the error.
Thanks in advance.
Error while uploading through CMIS workbench:
It does sound like a transformation problem to me, but it is hard to be sure.
Because TXT files are working, that means there isn't a problem with your repository being read-only or something like that. If these were Office files, especially large ones, you might be hitting a configurable transformer limit.
I would try uploading the problem files using something other than Share, such as:
Alfresco FTP
Alfresco WebDAV
Alfresco CIFS/SMB
Apache Chemistry Workbench
Using any (or all) of these will give you a clue as to whether the problem is in Share or lower in the stack.
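For instance, a quick test over WebDAV with curl might look like the sketch below; the host, port, credentials, site name, and file name are all assumptions to adapt to your installation.

# PUT the problem file via Alfresco's WebDAV endpoint instead of Share
$ curl -u admin:admin -T problem-file.pdf \
    "http://localhost:8080/alfresco/webdav/Sites/my-site/documentLibrary/problem-file.pdf"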

Uncompress file on Rackspace cloud files container for CDN

I created a Rackspace account earlier today to use its CDN to serve my OpenCart images.
I have created a container to which I will upload over 500,000 images, but I would prefer to upload them as a single compressed file, which feels more flexible.
If I upload all the images in a compressed file, how do I extract it once it is in the container, and what compression formats would work?
The answer may depend on how you are attempting to upload your file/files. Since this was not specified, I will answer your question using the CLI from a *nix environment.
Answer to your question (using curl)
Using curl, you can upload a compressed file and have it extracted using the extract-archive feature.
$ tar cf archive.tar directory_to_be_archived
$ curl -i -XPUT -H'x-auth-token: AUTH_TOKEN' https://storage101.iad3.clouddrive.com/v1/MossoCloudFS_aaa-aaa-aaa-aaa?extract-archive=tar -T ./archive.tar
You can find the documentation for this feature here: http://docs.rackspace.com/files/api/v1/cf-devguide/content/Extract_Archive-d1e2338.html
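If I remember correctly, the extract-archive feature also accepts gzip- and bzip2-compressed tarballs; a variant of the command above with a gzipped archive, using the same placeholder token and storage URL, would look like this:

$ tar czf archive.tar.gz directory_to_be_archived
$ curl -i -XPUT -H'x-auth-token: AUTH_TOKEN' \
    "https://storage101.iad3.clouddrive.com/v1/MossoCloudFS_aaa-aaa-aaa-aaa?extract-archive=tar.gz" \
    -T ./archive.tar.gz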
Recommended solution (using Swiftly)
Uploading and extracting that many objects using the above method might take a long time to complete. Additionally if there is a network interruption during that time, you will have to start over from the beginning.
I would recommend instead using a tool like Swiftly, which will allow you to upload your files concurrently. This way, if there is a problem during the upload, you don't have to re-upload objects that have already been successfully uploaded.
An example of how to do this is as follows:
$ swiftly --auth-url="https://identity.api.rackspacecloud.com/v2.0" \
--auth-user="{username}" --auth-key="{api_key}" --region="DFW" \
--concurrency=10 put container_name -i images/
If there is a network interruption while uploading, or you have to stop/restart uploading your files, you can add the "--different" option after the 'put' in the above command. This will tell Swiftly to HEAD the object first and only upload if the time or size of the local file does not match its corresponding object, skipping objects that have already been uploaded.
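For example, the resumed upload would look something like this, with the same placeholder credentials and container as above:

$ swiftly --auth-url="https://identity.api.rackspacecloud.com/v2.0" \
--auth-user="{username}" --auth-key="{api_key}" --region="DFW" \
--concurrency=10 put --different container_name -i images/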
Swiftly can be found on github here: https://github.com/gholt/swiftly
There are other clients that possibly do the same things, but I know Swiftly works, so I recommend it.

"sqlite3.OperationalError: database or disk is full" on Lustre

I have this error in my application log:
sqlite3.OperationalError: database or disk is full
As plenty of disk space is available and my SQLite database does not appear to be corrupted (integrity_check did not report any error), why is this happening and how can I debug it?
I am using the Lustre filesystem (with flock set), and until now, it worked perfectly.
Versions are:
Python 2.6.6
SQLite 3.3.6
It's probably too late for the original poster, but I just had this problem and couldn't find an answer so I'll document my findings in the hope that it will help others:
As it turns out, an SQLite database actually can get full even if there's plenty of disk space, because it has a limit for the number of pages in a database file:
http://www.sqlite.org/pragma.html#pragma_max_page_count
In my case the value was 1073741823, which meant that in combination with a page size of 1024 Bytes the database maxed out at 1 TB and returned the "database or disk is full" error.
The good news is that you can raise the limit; for example double it by issuing PRAGMA max_page_count = 2147483646;.
The limit doesn't seem to be saved in the database file, though, so you have to run it in your application every time you open the database.
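As a sketch using the sqlite3 command-line shell (the database file name is a placeholder), checking and raising the limit looks like this; note that the new value only lasts for that connection:

# Inspect the page size and current page limit; mydb.sqlite is a placeholder
$ sqlite3 mydb.sqlite "PRAGMA page_size; PRAGMA max_page_count;"
# Raise the limit for this connection only (it is not stored in the file)
$ sqlite3 mydb.sqlite "PRAGMA max_page_count = 2147483646; PRAGMA max_page_count;"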
By default, SQLite puts its temporary files in the /tmp directory (not in memory). If /tmp is too small, you will get a "disk full" error. In that case, change the temporary directory, e.g. export TMPDIR=<big file system>.
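For example, pointing the temporary directory at a filesystem with enough free space before launching the application; the path and script name below are placeholders:

# /scratch/tmp and my_app.py are placeholders
$ export TMPDIR=/scratch/tmp
$ python my_app.py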
I had the same problem too.
Your host's or PC's storage may simply be full; delete some files on the system and the problem goes away.
