I can upload a small JSON file (around 10 KB) to the Firebase database.
But when I try to upload a larger one, e.g. a 30 KB or 70 KB JSON file, it shows the error "There was a problem contacting the server. Try uploading your file again":
Please first refer to the status dashboard:
https://status.firebase.google.com/
At the time of the question, the console was experiencing a service disruption, which is why you could read/write to the database via your application but could not perform admin tasks via the console.
Note that on the status page, the affected entry is listed well above the Realtime Database (RTDB) entry.
I wish to use the Firebase Admin SDK, but for some reason I am getting a "project id is required to access Firestore" error.
I have downloaded an Admin SDK JSON file from the Firebase console and placed it in the same directory as the file that calls it.
opt := option.WithCredentialsFile("../<FILENAME>.json")
The credentials file contains the project id, but for some reason the opt variable is unable to extract data from the credentials file, so when I try to get the Firestore client this error occurs.
Thanks!
Problem solved: I was using the wrong absolute path.
I opened our Firebase Functions code today and typed firebase serve as I usually do, but when I called an HTTP function that touches our Realtime Database, I was greeted with this message:
#firebase/database: FIREBASE WARNING: {"code":"app/invalid-credential","message":"Credential implementation provided to initializeApp() via the \"credential\" property failed to fetch a valid Google OAuth2 access token with the following error: \"Error fetching access token: invalid_grant (Bad Request)\". There are two likely causes: (1) your server time is not properly synced or (2) your certificate key file has been revoked. To solve (1), re-sync the time on your server. To solve (2), make sure the key ID for your key file is still present at https://console.firebase.google.com/iam-admin/serviceaccounts/project. If not, generate a new key file at https://console.firebase.google.com/project/_/settings/serviceaccounts/adminsdk."}
Nothing has changed since I last ran this (a couple of weeks ago?), and my system time and timezone are set to automatic. I ran firebase logout and firebase login, and I'm using the parameterless admin.initializeApp();. Has something changed that I need to take into account?
I used the information found at https://firebase.google.com/docs/functions/local-emulator to generate and set up a key file at https://console.cloud.google.com/iam-admin/serviceaccounts/details/##################?authuser=0&project=my-project-name by clicking 'Create key' at the bottom.
Once the key file was downloaded, I set it with this command line in my project's directory:
set GOOGLE_APPLICATION_CREDENTIALS=path\to\key.json
With that done, I was able to run my functions as I expected locally.
I love you all.
I have created a brand new free-tier project, cloned the Puppeteer Firebase Functions demo repository, and only changed the default project name in the .firebaserc file.
When I run the simple test or version functions, I get the correct result. When I open the .com/screenshot page without any parameter, I get the correct ("Please provide a URL...") response.
But when I try any URL, e.g. .com/screenshot?url=https://en.wikipedia.org/wiki/Google, I get Error: net::ERR_NAME_RESOLUTION_FAILED at https://en.wikipedia.org/wiki/Google thrown in the response.
I tried looking for any name-resolution errors related to Puppeteer but could not find anything. Could this be a problem with using the free tier?
The free Spark plan restricts all outgoing connections except to API endpoints that are fully controlled by Google. As a result, I would expect that Puppeteer is unable to make outgoing connections to external web sites.
I've run into problems trying to run the Azure Storage Emulator on a newly installed computer.
At first it was returning
Cannot create database 'AzureStorageEmulatorDb56' : The database 'AzureStorageEmulatorDb56' does not exist. Supply a valid database name. To see available databases, use sys.databases..
However, when I ran sqllocaldb i I could see that there was a DB named 'AzureStorageEmulatorDb56'.
I eventually ran the command
AzureStorageEmulator init -server localhost -forcecreate
which returned
Granting database access to user AzureAD\[username elided].
Database access for user AzureAD\[username elided] was granted.
Initialization successful. The storage emulator is now ready for use.
The storage emulator was successfully initialized and is ready to use.
which looks promising.
However, when I right-click the emulator's icon in the system tray and select "Start Storage Emulator", nothing happens. And if I then look in the log files, I can see an error log (Error20-Jul-18-11-07.log) which contains...
7/20/2018 11:06:36 AM [Error] [ActivityId=00000000-0000-0000-0000-000000000000] Input string was not in a correct format.
There's also an Info20-Jul-18-11-07.log file which contains
7/20/2018 11:06:36 AM [Info] [ActivityId=00000000-0000-0000-0000-000000000000] Starting Service: Blob
7/20/2018 11:06:36 AM [Info] [ActivityId=00000000-0000-0000-0000-000000000000] Stopping Service: Blob
Can anyone explain what's going wrong and how I can get the local storage emulator up and running?
Try disabling logging; there seems to be a bug in the 5.5 release:
https://github.com/Azure/azure-storage-net/issues/728
I am just wondering what the best practice is for handling uploads and deletions across the Firebase database and storage.
For example, I have images uploaded into storage, and extra information uploaded into the database that points to those images in storage.
So currently, when I upload an image, this is the process I go through:
1. Upload the image to storage.
   - If it fails: let the user know.
   - If it succeeds: go to step 2.
2. Upload the image info to the database.
   - If it fails: let the user know and delete the previous image from storage.
   - If it succeeds: go to step 3.
3. Upload the image location into the geofire database.
   - If it fails: let the user know, delete the image info from the database, and delete the image from storage.
   - If it succeeds: let the user know.
So as you can see, I am doing a daisy-chain kind of upload, where each upload only occurs if the previous one succeeded. And if one step of the upload fails, I run code to revert/delete the uploads I just made.
My delete mechanism is pretty much the exact opposite of my upload: delete one step at a time, and if a step fails, go back and re-upload the previous data.
I am just wondering whether this sort of protection mechanism is necessary, or do most people just run all the uploads simultaneously? Such as:
- upload image to storage.
- upload image info to database.
- upload image location to geofire.
Thank you!