Is it possible to use the backup and/or restore function to create a new account in a new region? - azure-cosmosdb

I'm looking for options for failover in the case of a region outage that doesn't involve incurring the cost of having a hot second region and geo-redundancy. Is it possible to use the backup feature to manually create a new account in a new region and then upload the related documents?
I was looking at the restore feature and noticed these limitations:
"The restored account is created in the same region where your source account exists. You can't restore an account into a region where the source account did not exist."
I also saw this limitation, which makes me wonder whether backups work in the case of a region failure, even with the geo-redundant backup storage option selected:
"The backups are not automatically geo-disaster resistant. You have to explicitly add another region to have resiliency for the account and the backup."
Does this mean that the backups will also go down if geo-redundancy and multiple regions aren't set up?
Thanks.

Backups can only be restored into the same region in which the account was created. Backups are not intended to, and cannot, provide resiliency to regional outages, and they certainly cannot provide anything approaching 99.999% availability. Regional replication is the only means of providing resiliency to regional events for mission-critical workloads.
Backups are, by default, geo-replicated into the paired region for the account's region. However, they are only ever restored into the source region, and only in cases where a regional event impacted that region's storage in a way that prevents it from recovering normally.
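If the goal is to survive a regional outage, the supported approach is to add a region to the account (and optionally enable automatic failover) rather than rely on backups. A minimal sketch of adding a read region with the azure-mgmt-cosmosdb Python SDK; the subscription, resource group, account name, and region names are placeholders, and the exact operation and model shapes may differ between SDK versions:

```python
# pip install azure-identity azure-mgmt-cosmosdb
from azure.identity import DefaultAzureCredential
from azure.mgmt.cosmosdb import CosmosDBManagementClient

SUBSCRIPTION_ID = "<subscription-id>"   # placeholder
RESOURCE_GROUP = "my-resource-group"    # placeholder
ACCOUNT_NAME = "my-cosmos-account"      # placeholder

client = CosmosDBManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Add a second region to the existing account. failover_priority 0 is the
# write region; higher numbers are read regions used in failover order.
poller = client.database_accounts.begin_update(
    RESOURCE_GROUP,
    ACCOUNT_NAME,
    {
        "locations": [
            {"location_name": "East US", "failover_priority": 0},
            {"location_name": "West US", "failover_priority": 1},
        ]
    },
)
poller.result()  # block until the region add completes
```

Adding the region does incur the cost of a second region, which is the trade-off you were hoping to avoid, but it is the only mechanism that keeps the account available through a regional event.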

Related

Firebase limits me to setting my database to one region permanently; how do I serve people from other regions with low latency?

So I have set the location of my Firestore database to asia-south-1. Let's suppose I now start getting users from the US as well as from India; how will I serve both groups with low latency?
You can't change the properties of your project once it's set up, nor can you simply pay more for better service. If you want improved service in different regions, you will need to buy computing resources in each of those regions. This requires setting up multiple projects, with each project's Firestore configured for the region where you want improved service. Note that not all Firebase and GCP products are available in all regions.
If you want all users in all regions to be using the same set of data with improved latency, that's not possible to configure. You will have to build a way to mirror data between the different projects. There is nothing terribly easy about that, nor are there any existing solutions offered by Firebase or GCP to do this for you.
Doug nailed it for Firestore. So 👍
Note that Firebase's Realtime Database (unlike Firestore) can have multiple databases per project, which means you can set up a database instance in each region (it supports three right now) and redirect your users' traffic to the region closest to them. You may still have to replicate data between the database instances, similar to Doug's answer, but (unlike in Doug's answer) with Realtime Database this can happen within a single project.
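To make the multi-instance idea concrete, here is a rough sketch using the firebase-admin Python SDK. The per-region instance URLs and the region-picking logic are hypothetical, and you would still need your own replication between the instances:

```python
# pip install firebase-admin
import firebase_admin
from firebase_admin import credentials, db

cred = credentials.Certificate("service-account.json")  # placeholder path
firebase_admin.initialize_app(cred)

# Hypothetical Realtime Database instances, one per region, in one project.
REGION_URLS = {
    "us": "https://myapp-us.firebaseio.com",
    "eu": "https://myapp-eu.europe-west1.firebasedatabase.app",
    "asia": "https://myapp-asia.asia-southeast1.firebasedatabase.app",
}

def messages_ref(user_region: str):
    """Return a reference into the database instance closest to the user."""
    url = REGION_URLS.get(user_region, REGION_URLS["us"])
    return db.reference("/messages", url=url)

# Write to the instance nearest a European user.
messages_ref("eu").push({"text": "hello from Europe"})
```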

Is the Firebase Realtime Database available worldwide?

Quick question that I can't seem to find the answer to, although I am pretty sure it is an easy one. When creating a Realtime Database in Firebase and implementing it in my app, can I access the data worldwide? Or is it bound to a region? Say I register the database in the US and I go to Europe. Can I still access all my data?
The database location has nothing to do with the accessibility of data. It just specifies where your data will be stored.
To reduce latency and increase availability, store your data close to the users and services that need it.
That being said, you can access your project and database as long as you have internet connectivity (and your clients can access it if they pass the security rules).

firebase billing - kill switch consequences

The Firebase documentation includes the following warning about using a kill switch to stop using Firebase when a budget cap is exceeded:
Warning: This example removes Cloud Billing from your project,
shutting down all resources. Resources might not shut down gracefully,
and might be irretrievably deleted. There is no graceful recovery if
you disable Cloud Billing. You can re-enable Cloud Billing, but there
is no guarantee of service recovery and manual configuration is
required.
I'm trying to investigate what gets irretrievably deleted. Does the datastore get deleted when the kill switch is activated? Is there any opportunity to save data previously stored in cloud firestore, before the deletion takes place? Is there a way to download the database so that I can keep a back up in this scenario?
Please review the following reply from Firebase team member samstern to gain more clarity on this:
these things are handled on a per-product basis and each product has different thresholds for quota overages and different procedures for what happens to inactive resources in a bad state.
For instance I know in Realtime Database if your DB is too big for the
free plan after you downgrade from a paid plan we will not delete
your data automatically. Instead you'll just be stopped from using the
database until you restore your billing.
However that statement clearly says that some products may delete data
if you pull Cloud Billing. It could be for technical reasons, it could
be for policy reasons.
If you want to turn off a product without risking this pulling your
billing account is NOT the best way to do this. It's the nuclear
option if you get into a bad situation and want to get out at all
costs. Instead you should use per-product APIs to shut down the
individual product or service and prevent future usage. This could
include doing things like flipping off APIs in the APIs console,
changing security rules to prevent all further writes, deleting Cloud
Functions, etc
The best source of information I've been able to uncover in answer to this particular question is a discussion on Reddit which indicates that you can't recover access to your data until you pay the bill (including blow-out charges) - so maybe that buys some time, but if you don't pay, the project gets deleted. There may also be data loss for operations in flight when the kill switch activates.
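On the "is there a way to download the database" part of the question: exporting Firestore to a Cloud Storage bucket before touching billing is the usual safeguard. A rough sketch using the Firestore Admin client from google-cloud-firestore; the project ID and bucket are placeholders, and the bucket must already exist:

```python
# pip install google-cloud-firestore
from google.cloud import firestore_admin_v1

PROJECT_ID = "my-project"          # placeholder
BUCKET = "gs://my-backup-bucket"   # placeholder Cloud Storage bucket

client = firestore_admin_v1.FirestoreAdminClient()
database = f"projects/{PROJECT_ID}/databases/(default)"

# Kick off a managed export of all collections to Cloud Storage.
operation = client.export_documents(
    request={"name": database, "output_uri_prefix": BUCKET}
)
print("Export finished:", operation.result().output_uri_prefix)
```

Note that the export itself bills document reads and Cloud Storage usage, so it has to happen while billing is still enabled.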

Linking Google Analytics 360 to Big Query, permissions issue

I have linked GA360 to Big Query. I do have a service account added to GCP as per documentation. The account I used has Project Owner permissions as required to link to said project.
Can I remove the Project Owner permissions from the GCP account once the link has been established in GA360? I do not want that account to have such a high access level to the project.
I did run a test on a small scale and it worked but I am not willing to risk a transfer failure on all of the data in production.
Yes, you can remove the permissions from the account you used to link GA360 to BQ.
The permission is only required at the time of setting up the link.
Whether the account that set up a connection is still active or still has the same rights is not checked afterwards.
We have had multiple views linked by different accounts, most of which are no longer on the team and therefore no longer have "owner" rights. The exports still work, though (which makes sense, given that a company might keep using GA and the exports but part ways with the internal/external employee who set it up).
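If you want to script that cleanup, removing the owner binding afterwards can be done through the Cloud Resource Manager API. A sketch assuming google-api-python-client with Application Default Credentials; the project ID and member are placeholders:

```python
# pip install google-api-python-client google-auth
from googleapiclient import discovery

PROJECT_ID = "my-project"            # placeholder
MEMBER = "user:analyst@example.com"  # placeholder account to demote

crm = discovery.build("cloudresourcemanager", "v1")

# Read-modify-write of the project IAM policy.
policy = crm.projects().getIamPolicy(resource=PROJECT_ID, body={}).execute()
for binding in policy.get("bindings", []):
    if binding["role"] == "roles/owner" and MEMBER in binding.get("members", []):
        binding["members"].remove(MEMBER)

crm.projects().setIamPolicy(
    resource=PROJECT_ID, body={"policy": policy}
).execute()
```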

Couple of Queries on Google Cloud SQL

I'm setting up an application which is a set of microservices consuming a Cloud SQL DB in GCP. My queries are -
I want to set up HA for Cloud SQL across regions (a primary region and a secondary region with active replication enabled). I do not see any out-of-the-box setup from Google Cloud to achieve this. The out-of-the-box HA for Cloud SQL 2nd Gen is an HA instance in another zone within the same region. Please provide the best practice to achieve this.
All the microservices should be using private IP to do actions on this MySQL. How do I set this up?
Is there any native support from MySQL to enable active replication to another region?
Is it possible to set up manual backups as per customer requirements? I do understand automatic backups are available. To meet RPO/RTO requirements I want to customize the DB backup frequency - is that possible?
I want to set up HA for Cloud SQL across regions (a primary region and a secondary region with active replication enabled)
You can use the external master feature to replicate to an instance in another region.
All the microservices should be using private IP to do actions on this MySQL. How do I set this up?
Instructions for Private IP setup are here. In short, your services will need to be on the same VPC as the Cloud SQL instances.
Is it possible to set up manual backups as per customer requirements?
You can configure backups using the SQL Admin API.
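To illustrate driving backups from the SQL Admin API, here is a rough sketch with google-api-python-client; the project and instance names are placeholders, and you would schedule this yourself (for example with Cloud Scheduler) to meet a custom RPO:

```python
# pip install google-api-python-client google-auth
from googleapiclient import discovery

PROJECT = "my-project"        # placeholder
INSTANCE = "my-sql-instance"  # placeholder

sqladmin = discovery.build("sqladmin", "v1beta4")

# Trigger an on-demand backup run for the instance.
op = sqladmin.backupRuns().insert(
    project=PROJECT,
    instance=INSTANCE,
    body={"description": "manual backup to meet custom RPO"},
).execute()
print("Backup operation:", op.get("name"))
```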
Please let me list your questions along with their responses:
I want to set up HA for Cloud SQL across regions (a primary region and a secondary region with active replication enabled). I do not see any out-of-the-box setup from Google Cloud to achieve this. The out-of-the-box HA for Cloud SQL 2nd Gen is an HA instance in another zone within the same region. Please provide the best practice to achieve this.
-According to the documentation [1], the configuration is made up of a primary instance (master) in the primary zone and a failover replica in the secondary zone; at the moment HA for Cloud SQL across regions is not possible.
All the microservices should be using private IP to do actions on this MySQL. How do I set this up?
-You can set up a Cloud SQL instance to use private IP; please review the following information, which you may find helpful [2] (a small connection sketch follows the links below).
Is there any native support from MySQL to enable active replication to another region?
-I would recommend getting in contact with MySQL support [3] so that you get the help you need; in the meantime you could review the following link [4] and see if it fits your needs.
Is it possible to set up manual backups as per customer requirements? I do understand automatic backups are available. To meet RPO/RTO requirements I want to customize the DB backup frequency - is that possible?
-You can create a backup on demand; please review the following link [5], which helps illustrate how to set up this kind of backup.
Please let me know if this information helps to address your questions.
[1] https://cloud.google.com/sql/docs/mysql/high-availability
[2] https://cloud.google.com/sql/docs/mysql/private-ip
[3] https://www.mysql.com/support/
[4] https://dev.mysql.com/doc/mysql-cluster-excerpt/5.6/en/mysql-cluster-replication-conflict-resolution.html
[5] https://cloud.google.com/sql/docs/mysql/backup-recovery/backing-up#on-demand
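On the private IP point from both answers: once a microservice runs inside the same VPC, connecting is just an ordinary MySQL connection to the instance's private address. A minimal sketch with PyMySQL, where the host, credentials, and database name are placeholders:

```python
# pip install pymysql
import pymysql

# Private IP of the Cloud SQL instance, reachable only from inside the VPC.
conn = pymysql.connect(
    host="10.1.2.3",       # placeholder private IP
    user="app_user",       # placeholder
    password="change-me",  # placeholder
    database="orders",     # placeholder
)
with conn.cursor() as cur:
    cur.execute("SELECT NOW()")
    print(cur.fetchone())
conn.close()
```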
