How to choose a zone for a Google Cloud Shell session? - google-cloud-shell

Since the echo-back of typed characters is very slow for me in Japan, it seems that Google Cloud Shell session instances are located in some US region.
How do I change the zone of an instance?

Cloud Shell is globally distributed across multiple GCP regions. When a user connects for the first time, the system assigns them to the geographically closest region that can accommodate new users. While users cannot manually choose their region, the system does its best to pick the closest region Cloud Shell operates in. If Cloud Shell does not initially pick the closest region, or if the user later connects from a location that is geographically closer to a different region, Cloud Shell will migrate the user to a closer region when the session ends.
Since Cloud Shell runs as a GCP VM, you can view your current zone (and thus region) by running
curl -H "Metadata-Flavor: Google" http://metadata.google.internal/computeMetadata/v1/instance/zone
inside the Cloud Shell session. This is documented at https://cloud.google.com/compute/docs/storing-retrieving-metadata.
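If you prefer to script this, here is a minimal Python sketch of the same metadata lookup, assuming the requests library is available in the session; the URL and header are the documented ones, and the region parsing is just string splitting:

# Minimal sketch: query the GCE metadata server from inside Cloud Shell.
import requests

METADATA_URL = "http://metadata.google.internal/computeMetadata/v1/instance/zone"

resp = requests.get(METADATA_URL, headers={"Metadata-Flavor": "Google"})
resp.raise_for_status()

# The response looks like "projects/<project-number>/zones/<zone>";
# the last path segment is the zone, and dropping the trailing "-a"/"-b"
# suffix gives the region.
zone = resp.text.rsplit("/", 1)[-1]
region = zone.rsplit("-", 1)[0]
print(f"zone={zone} region={region}")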

Related

How to redirect traffic for a Firebase Function to another region when the current one goes down?

Suppose I have a callable function that is deployed to multiple regions.
My client-side app does not specify a region (the default is us-central1), so in the event that the default region goes down, does Firebase/Google Cloud automatically redirect traffic to other regions that are up?
If that isn't the case, what should be done in such scenarios?
I'm sure there's something, but my searches haven't turned up anything.
No. Each deployed Cloud Function has its own URL that includes the region, and requests are routed to that function only. Cloud Functions don't provide load-balancer-like functionality by default; if the number of requests rises, Cloud Functions will simply create new instances to handle them.
You can check the user's location, find the nearest GCP region where your function is deployed, and call that. This should also reduce latency a bit and balance requests based on user region.
Alternatively, if you want to ensure requests are handled by functions in the same region, also check out the Global external HTTP(S) load balancer with Cloud Functions.
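A rough Python sketch of the "pick the nearest deployed region yourself" idea follows. Everything here is a placeholder or assumption: the project id, function name, region list and nearest_region() helper are made up, the function is assumed to be a 1st-gen callable exposed at the usual https://<region>-<project>.cloudfunctions.net/<name> URL, and unauthenticated invocation is assumed.

# Sketch: choose a region client-side and call that regional deployment
# of a callable function over HTTPS. All names below are hypothetical.
import requests

PROJECT_ID = "my-project"        # hypothetical project id
FUNCTION_NAME = "myCallable"     # hypothetical callable function name

def nearest_region(user_country: str) -> str:
    # Placeholder logic: map the user's location to the closest deployed region.
    mapping = {"JP": "asia-northeast1", "DE": "europe-west1"}
    return mapping.get(user_country, "us-central1")

region = nearest_region("JP")
url = f"https://{region}-{PROJECT_ID}.cloudfunctions.net/{FUNCTION_NAME}"

# Callable functions expect a JSON body of the form {"data": ...}.
resp = requests.post(url, json={"data": {"text": "hello"}}, timeout=10)
resp.raise_for_status()
print(resp.json())

In a real client you would also want retry/failover logic (for example, falling back to another region on timeout) and an ID token if the function requires authentication.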

To trigger a Databricks job in one region from an Airflow job running in another region

We plan to have one common AWS MWAA cluster in the US West region that triggers Databricks jobs in different regions.
Is there a way to trigger a Databricks job in one region from an Airflow job running in another region? I checked the Databricks connection documentation here, but it does not list any region parameter. How can this be achieved?
The Airflow operator doesn't know anything about regions: it just needs the URL of the workspace and a personal access token. That workspace can be in the same region or in another one; it makes no difference to the Databricks operator.
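For illustration, a minimal DAG sketch under these assumptions: the Databricks provider package is installed in MWAA, an Airflow connection named databricks_eu has been created whose host is the remote workspace URL and whose password is a personal access token, and 12345 is a placeholder job id.

# Sketch: trigger a Databricks job in another region from MWAA.
# The workspace's region never appears in the DAG; only the connection's
# workspace URL and token matter.
from datetime import datetime

from airflow import DAG
from airflow.providers.databricks.operators.databricks import DatabricksRunNowOperator

with DAG(
    dag_id="trigger_cross_region_databricks_job",
    start_date=datetime(2023, 1, 1),
    schedule_interval=None,
    catchup=False,
) as dag:
    run_job = DatabricksRunNowOperator(
        task_id="run_databricks_job",
        databricks_conn_id="databricks_eu",  # points at the other-region workspace
        job_id=12345,                        # placeholder Databricks job id
    )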

Is it possible to use the backup and/or restore function to create a new account in a new region?

I'm looking for options for failover in the case of a region outage that doesn't involve incurring the cost of having a hot second region and geo-redundancy. Is it possible to use the backup feature to manually create a new account in a new region and then upload the related documents?
I was looking at the restore feature and noticed these limitations:
"The restored account is created in the same region where your source account exists. You can't restore an account into a region where the source account did not exist."
I also saw the following limitation, which makes me wonder whether backups work at all in the case of a region failure, even with the geo-redundant backup storage option selected:
"The backups are not automatically geo-disaster resistant. You have to explicitly add another region to have resiliency for the account and the backup."
Does this mean that the backups will also go down if geo-redundancy and multiple regions aren't set up?
Thanks.
Backups can only be restored into the same region in which the account was created. Backups are not intended to, and cannot, provide resiliency to regional outages, and they certainly cannot provide anything approaching 99.999% availability. Regional replication is the only way to provide resiliency to regional events for mission-critical workloads.
Backup storage is geo-redundant by default, copied into the paired region for that region. However, backups are only restored into the source region, for cases where a regional event impacted that region's storage and prevented the account from recovering normally.

From the cluster, we cannot download a Python wheel from the storage account

1) We upload a Python wheel to the storage account associated with the workspace successfully.
2) We then submit an experiment that runs on the cluster and needs to download and run the package from step 1.
The experiment is able to download the package and run when the storage account is not associated with any VNet. However, when we associate the storage account with VNets, the experiment hangs and eventually fails. This storage account is in two VNets, a and b. The cluster is also in VNet a.
I don't know why the cluster cannot download the wheel package when the storage account is in a VNet. It is our policy to have storage accounts in VNets.
Is there something else we are missing? I also checked the container registry setting and it's set to 'allow from everywhere' (we are using the standard SKU).
Let me know if any further information is required. Thanks.

Can I use Firebase Cloud Functions for a search engine?

Firebase recently released an integration with Cloud Functions that allows us to upload JavaScript functions to run without needing our own servers.
Is it possible to build a search engine using those functions? My idea is to use the local disk (a tmpfs volume) to keep indexed data in memory, and on each write event I would index the new data. Does tmpfs keep data between function calls (instances)?
Can Cloud Functions be used for this purpose, or should I use a dedicated server for indexing data?
Another related question: when a Cloud Function gets data from the Firebase Realtime Database, does it consume network bandwidth or just disk reads? How is this accounted for in pricing?
Thanks
You could certainly try that. Cloud Functions have a local file system that typically is used to maintain state during a run. See this answer for more: Write temporary files from Google Cloud Function
But there are (as far as I know) no guarantees that state will be maintained between runs of your function. Or even that the function will be running on the same container next time. You may be running on a newly created container next time. Or when there's a spike in invocations, your function may be running on multiple containers at once. So you'd potentially have to rebuild the search index for every run of your function.
I would instead look at integrating an external dedicated search engine, such as Algolia in this example: https://github.com/firebase/functions-samples/tree/master/fulltext-search. Have a look at the code: even with comments and license it's only 55 lines!
Alternatively you could find a persistent storage service (Firebase Database and Firebase Storage being two examples) and use that to persist the search index. So you'd run the code to update the search index in Cloud Functions, but would store the resulting index files in a more persistent location.
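A rough Python sketch of that last approach, assuming a Cloud Storage bucket is used as the persistent location. The bucket and object names, the trigger wiring, and the event payload shape are all placeholders, and the sketch ignores concurrent-update concerns:

# Sketch: a background-style Cloud Function that keeps a search index
# in Cloud Storage instead of on the function's ephemeral local disk.
import json

from google.cloud import storage

BUCKET = "my-search-index-bucket"   # hypothetical bucket
OBJECT = "indexes/notes.json"       # hypothetical object path

def update_index(event, context):
    """Rebuild/update the index on a write event (trigger wiring omitted)."""
    client = storage.Client()
    blob = client.bucket(BUCKET).blob(OBJECT)

    # Load the existing index if it exists, otherwise start fresh.
    index = json.loads(blob.download_as_text()) if blob.exists() else {}

    # Index the newly written document (payload shape is illustrative only).
    doc_id = event.get("doc_id", "unknown")
    index[doc_id] = event.get("text", "")

    # Write the updated index back to durable storage.
    blob.upload_from_string(json.dumps(index), content_type="application/json")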
GCF team member and former Google Search team member here. Cloud Functions would not be suitable for an in-memory search engine, for a few reasons.
A search engine is very wise to separate its indexing and serving machines. At scale you'll want to worry about read and write hot-spotting differently.
As Frank alluded to, you're not guaranteed to get the same instance across multiple requests. I'd like to strengthen his concern: you will never get the same instance across two different Cloud Functions. Each Cloud Function has its own backend infrastructure that is provisioned and scaled independently.
I know it's tempting to cut dependencies, but cutting out persistent storage isn't the way. Your serving layer can use caching to speed up requests, but durable storage ensures you don't have to reindex the whole corpus if your Cloud Function crashes or you deploy an update (either of which guarantees the instance is scrapped and recreated).
