We're currently using Firebase Functions with the Node 14 public preview runtime.
We need to increase the memory of a function past 1 GB.
The Google Cloud Functions documentation specifies that max_old_space_size must be set for the newer runtimes, and shows this example:
gcloud functions deploy envVarMemory \
--runtime nodejs12 \
--set-env-vars NODE_OPTIONS="--max_old_space_size=8Gi" \
--memory 8Gi \
--trigger-http
However, the --set-env-vars option does not exist in firebase deploy.
Running
firebase deploy --only functions:myFunction --set-env-vars NODE_OPTIONS="--max_old_space_size=4Gi"
yields: unknown option '--set-env-vars'.
When running a memory-heavy function, I unsurprisingly get a heap out-of-memory error:
[1:0x29c51e07b7a0] 120101 ms: Mark-sweep (reduce) 1017.1 (1028.5) -> 1016.2 (1028.7) MB, 928.7 / 0.1 ms (average mu = 0.207, current mu = 0.209) allocation failure scavenge might not succeed
[1:0x29c51e07b7a0] 119169 ms: Scavenge (reduce) 1016.9 (1025.2) -> 1016.2 (1026.5) MB, 3.6 / 0.0 ms (average mu = 0.205, current mu = 0.191) allocation failure
From the log we can see the function only has about 1 GB (1028 MB) of heap, not 4 GB,
even though we asked it to deploy with 4 GB:
functions
  .runWith({ memory: '4GB', timeoutSeconds: 300 })
What is the key here?
We had exactly the same issue. It seems to happen when deploying functions with the Node 12 runtime or newer.
Here is how to solve it:
Find your function in the GCP console
Click "Edit"
Scroll down to "Runtime environment variables"
Add the key NODE_OPTIONS with the value --max_old_space_size=4096 (the flag takes a size in megabytes)
This is really annoying, and I did not find any way to set this while deploying from the command line.
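That said, since gcloud does accept --set-env-vars (as in the docs quoted in the question), one possible workaround, which I have not verified, is to redeploy the same function with gcloud instead of the Firebase CLI. The function name, source path, runtime and trigger below are assumptions based on the question:
# Unverified sketch: redeploy via gcloud so NODE_OPTIONS is set at deploy time.
# Note that V8's --max_old_space_size takes megabytes, hence 4096 rather than 4Gi.
gcloud functions deploy myFunction \
  --runtime nodejs14 \
  --memory 4096MB \
  --trigger-http \
  --source ./functions \
  --set-env-vars NODE_OPTIONS="--max_old_space_size=4096"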
I am new to GCP, and am trying to teach myself by deploying a simple R script in a Docker container that connects to BigQuery and writes the system time. I am following along with this great tutorial: https://arbenkqiku.github.io/create-docker-image-with-r-and-deploy-as-cron-job-on-google-cloud
So far, I have:
1.- Made the R script
library(bigrquery)
library(tidyverse)
bq_auth("/home/rstudio/xxxx-xxxx.json", email="xxxx@xxxx.com")
project = "xxxx-project"
dataset = "xxxx-dataset"
table = "xxxx-table"
job = insert_upload_job(project=project, data=dataset, table=table, write_disposition="WRITE_APPEND",
values=Sys.time() %>% as_tibble(), billing=project)
2.- Made the Dockerfile
FROM rocker/tidyverse:latest
RUN R -e "install.packages('bigrquery', repos='http://cran.us.r-project.org')"
ADD xxxx-xxxx.json /home/rstudio
ADD big-query-tutorial.R /home/rstudio
EXPOSE 8080
CMD ["Rscript", "/home/rstudio/big-query-tutorial.R", "--host", "0.0.0.0"]
3.- Successfully ran the container locally on my machine, with the system time being written to BigQuery
4.- Pushed the container to my container registry in Google Cloud
When I try to deploy the container to Cloud Run (fully managed) using the Google Cloud Console I get this error:
"Cloud Run error: Container failed to start. Failed to start and then listen on the port defined by the PORT environment variable. Logs for this revision might contain more information. Logs URL:https://console.cloud.google.com/logs/xxxxxx"
When I review the log, I find these noteworthy entries:
1.- A warning that says "Container called exit(0)"
2.- Two entries that say "Container Sandbox: Unsupported syscall setsockopt(0x8,0x0,0xb,0x3e78729eedc4,0x4,0x8). It is very likely that you can safely ignore this message and that this is not the cause of any error you might be troubleshooting. Please, refer to https://gvisor.dev/c/linux/amd64/setsockopt for more information."
When I check BigQuery, I find that the system time was written to the table, even though the container failed to deploy.
When I use the port specified in the tutorial (8787), Cloud Run throws an error about an "Invalid ENTRYPOINT".
What does this error mean? How can it be fixed? I'd greatly appreciate input as I'm totally stuck!
Thank you!
H.
John's comment points to the source of the error: you need to run a web server that listens on the port given by the $PORT environment variable and answers HTTP/1 or HTTP/2 requests; a script that simply runs and exits does not satisfy Cloud Run's contract.
However, I have a workaround: you can use Cloud Build for this instead. Simply define a build step with your container name and the args, if needed.
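For example, a minimal, untested sketch of such a Cloud Build configuration (the image name is a placeholder for the image you pushed to Container Registry) might be:
# Write a one-step build config that just runs your container, then submit it.
cat > cloudbuild.yaml <<'EOF'
steps:
- name: "gcr.io/YOUR_PROJECT/your-r-image"   # your pushed image
  # args: ["--some-flag"]                    # add args here if needed
EOF
gcloud builds submit --no-source --config cloudbuild.yaml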
Let me know if you need more guidance on this (strange) workaround.
The log message "Container Sandbox: Unsupported syscall setsockopt" from Cloud Run is documented as a known gVisor limitation and can be safely ignored.
Firebase Crashlytics console lists this file as missing:
D79D73EB-C9BE-3D0A-B3F5-4D6E5BF6E3A0 1.1.1 (1084) Optional
So I successfully downloaded the symbols from
https://appstoreconnect.apple.com/WebObjects/iTunesConnect.woa/ra/ng/app/***/activity/ios/builds/1.1.1/1084/details
The UUID is present:
$ dwarfdump -u ~/Downloads/appDsyms/* | grep 3A0
UUID: D79D73EB-C9BE-3D0A-B3F5-4D6E5BF6E3A0 (arm64) /Users/gene/Downloads/appDsyms/d79d73eb-c9be-3d0a-b3f5-4d6e5bf6e3a0.dSYM
Then I uploaded it to Crashlytics, probably for the 10th time:
$ Pods/Fabric/upload-symbols -gsp GoogleService-Info.plist \
-p ios ~/Downloads/appDsyms/d79d73eb-c9be-3d0a-b3f5-4d6e5bf6e3a0.dSYM
Successfully submitted symbols for architecture arm64 with UUID d79d73ebc9be3d0ab3f54d6e5bf6e3a0 in dSYM: /Users/gene/Downloads/appDsyms/d79d73eb-c9be-3d0a-b3f5-4d6e5bf6e3a0.dSYM
Now I go back to the Firebase Crashlytics console and still see the UUID listed as missing.
I tried uploading it as appDsyms.zip too, also successfully, and unzipping and uploading the individual files. I tried different ways of uploading maybe 10 times; all reported success, but the UUID is still listed as missing.
What am I doing wrong?
Apparently Crashlytics has been partially broken for at least 4 days:
https://groups.google.com/forum/?pli=1#!topic/firebase-talk/nO-NCq7p2iQ
I suppose they could not be bothered to tell us that it's been broken. The status dashboard shows "green" for the whole week.
Google, shame on you.
I have a Shiny app that runs as expected locally, and I am working to deploy it to Bluemix using Cloud Foundry. I am using this buildpack.
The default staging time for apps to build is 15 minutes, but that is not long enough to install R and the packages needed for my app. If I try to push my app with the defaults, I get an error about running out of time:
Error restarting application: sherlock-topics failed to stage within 15.000000 minutes
I changed my manifest.yml to increase the staging time:
applications:
- name: sherlock-topics
  memory: 728M
  instances: 1
  buildpack: git://github.com/beibeiyang/cf-buildpack-r.git
  env:
    CRAN_MIRROR: https://cran.rstudio.com
    CF_STAGING_TIMEOUT: 45
    CF_STARTUP_TIMEOUT: 9999
Then I also changed the staging time for the CLI before pushing:
cf set-env sherlock-topics CF_STAGING_TIMEOUT 45
cf push sherlock-topics
What happens then is that the app tries to stage: it installs R in the container and starts installing packages, but only for about 15 minutes (or a little longer). When it gets to the first new package after the 15-minute mark, it fails, but with a different and sadly uninformative error message.
Staging failed
Destroying container
Successfully destroyed container
FAILED
Error restarting application: StagingError
There is nothing in the logs but info about the libraries being installed and then Staging failed.
Any ideas about why it isn't continuing to stage past the 15-minute mark, even after I increase CF_STAGING_TIMEOUT?
The platform operator controls the hard limits for staging timeout and application startup timeout. CF_STAGING_TIMEOUT and CF_STARTUP_TIMEOUT are cf CLI configuration options that tell the CLI how long to wait for staging and app startup; they do not change the server-side limits.
See docs here for reference:
https://docs.cloudfoundry.org/devguide/deploy-apps/large-app-deploy.html#consid_limits
As an end user, it's not possible to exceed the hard limits put in place by your platform operator.
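Note also that these two variables are read by the cf CLI from your local shell, not from the app's environment, so cf set-env has no effect on them. A sketch of the intended usage, with the app name taken from the question:
# Set the CLI-side timeouts in the shell that runs cf; cf set-env only affects
# the app's own environment, which the CLI never reads for these values.
export CF_STAGING_TIMEOUT=45   # minutes the CLI waits for staging
export CF_STARTUP_TIMEOUT=15   # minutes the CLI waits for the app to start
cf push sherlock-topics
# Even so, the platform-side hard limit set by the operator still applies.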
I ran into the exact same issue; I was also trying to deploy a Shiny app with quite a number of dependencies.
The trick is to add the num_threads property to the r.yml file, which considerably speeds up package installation (and therefore staging).
Here's how one might look:
packages:
- packages:
  - name: bupaR
  - name: edeaR
num_threads: 8
See https://docs.cloudfoundry.org/buildpacks/r/index.html for more info
I am experimenting with Google App Engine's flexible Python 3 environment and Cloud Datastore. When testing locally, this (generally) calls for running your app in something like Gunicorn and accessing the Datastore API from gcloud.datastore. For example:
import gcloud.datastore as g_datastore
ds = g_datastore.Client(...)
entity = g_datastore.Entity(key=ds.key(...))
ds.put(entity)
When run locally (in dev mode), entities are persisted between runs. I can't for the life of me figure out where they are stored or how to clear the dev datastore that is created after creating/accessing gcloud.datastore.Client. As far as I can tell, it does not use the same place that ndb uses when run via dev_appserver.py.
I've tried to figure it out with something like this (when running OS X):
$ touch foo
$ GCLOUD_PROJECT=... python .../main.py
* Running on http://127.0.0.1:8080/ (Press CTRL+C to quit)
* Restarting with stat
* Debugger is active!
* Debugger pin code: ...
127.0.0.1 - - [04/Jul/2016 10:36:01] "GET / HTTP/1.1" 200 -
...
^C
$ sudo find /private/tmp /var/db /var/tmp ~/.config/gcloud ~/Library -newer foo
...
# nothing meaningful
I tried looking at the source code and found some unit-test cleanup code that: a) isn't distributed with pip install gcloud; and (more importantly for me) b) doesn't give any clue as to where that stuff is actually stored.
I've even tried this while Gunicorn was running:
$ sudo lsof | grep -Ei 'python'
# nothing meaningful
Where the foo does gcloud.datastore store its state between runs when run locally (in dev mode)?!
Boy do I feel silly! 😖 By default, gcloud.datastore connects to ... (wait for it) ... the Google Cloud Datastore. The real one. I don't know why I expected any different.
I didn't figure this out right away because my local gcloud configuration was already provisioned with my account credentials, and I had the GCLOUD_PROJECT environment variable set when running my local instance. Whoops! 😳 (No wonder I wasn't seeing any changes on local disk!)
So, if you want to have a "dev" Cloud Datastore running locally, you'll need to run the Datastore emulator. This is more complicated than running dev_appserver.py (which pretty much takes care of all this for you; see, e.g., this workflow for how to infer values for your index.yaml file from your app's Datastore calls). If you don't supply the --data-dir option to the start command, the default local storage location is ~/.config/gcloud/emulators/datastore/....
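For reference, a rough sketch of that emulator workflow (the project id, data directory, and entry-point script are placeholders; the commands come from the gcloud emulator docs):
# Start the emulator with an explicit data directory so you know where state lives.
gcloud beta emulators datastore start --data-dir=./local-datastore &
# Export DATASTORE_EMULATOR_HOST and related variables so the client library
# talks to the emulator instead of the real Datastore.
$(gcloud beta emulators datastore env-init)
GCLOUD_PROJECT=my-project python main.py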
Rather than delete the question, I'm leaving it here as a warning/explanation to numbskulls like myself.
I'm getting this error soon after running riak start, despite a config file that should be working correctly.
It turns out that this is a limitation of Riak's error messaging: you will get the above message if you run riak-admin test on your setup before the node has finished loading its configuration.
I encountered the same problem while starting new Riak clusters over and over again during automated testing. My solution was, in my test fixture setup, to execute code that keeps trying to put an object into a Riak bucket until it eventually succeeds.
Granted, my solution here is an Erlang snippet, but it solves the problem in lieu of any Riak-supplied admin/wait function, and since I've used it across a number of different Riak versions, the technique seems to work for all of them.
%% Blocks until a put into the <<"test">> bucket succeeds, retrying every second.
wait_for_riak() ->
    {ok, C} = riak:local_client(),
    io:format("Waiting for Riak..."),
    wait_for_riak(C),
    io:format("and had a successful put.~n").

wait_for_riak(C) ->
    Strawman = riak_object:new(<<"test">>, <<"strawman">>, []),
    case C:put(Strawman, 1) of
        ok ->
            ok;
        _Error ->
            receive after 1000 -> ok end,
            wait_for_riak(C)
    end.
Adding a sleep 4 like so:
brew install riak
riak start
sleep 4
riak-admin test
should help.
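If a fixed delay turns out to be flaky, a slightly more robust variant of the same idea (polling until the node answers instead of guessing the delay) might be:
brew install riak
riak start
# Poll until the node responds to pings instead of sleeping a fixed amount.
until riak ping >/dev/null 2>&1; do
  sleep 1
done
riak-admin test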