How to atomically update and roll back a Firebase Hosting site + Cloud Run service?

Suppose we have a site on Google Firebase Hosting that routes some requests to a Google Cloud Run service. The service is considered entirely an implementation detail and its only client is the single website. The only reason for using a Cloud Run service is that it is the only suitable technical option within the Firebase platform.
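For reference, this kind of Hosting-to-Cloud Run routing is configured with a rewrite in firebase.json; a minimal sketch, where the service ID and region are placeholders:

```json
{
  "hosting": {
    "public": "public",
    "rewrites": [
      {
        "source": "/api/**",
        "run": {
          "serviceId": "my-service",
          "region": "us-central1"
        }
      }
    ]
  }
}
```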
Now, suppose that the API of the service may have a breaking change with every update, so the Firebase Hosting content must change too. How do you update or roll back both parts together so as to avoid incompatibilities?
The straightforward approach is to update the service and the site content in separate steps, but then some requests from the old revision of the site may reach the new revision of the service, or the other way around, causing errors due to the API mismatch. The same issue arises when rolling back the site content and the service.
One theoretical solution would be to deterministically route requests to different service revisions based on revision labels, but that is not supported on Cloud Run.
One realistic solution would be to create a new service for every update of the site content. However, that would result in unbounded accumulation of services which are not automatically deleted like service revisions are.
Another solution (proposed below) would be to maintain backwards compatibility in the service: it would support both the latest and the previous API version. However, this can be considered unnecessary overhead. Since the two parts (the static content and the service) have no real need to ever be updated independently, it would be very convenient to avoid the cost of maintaining backwards compatibility in the service.

As far as I know, there is no way to perform this update in a single transaction and avoid the behavior you mention, since Firebase Hosting and Cloud Run are different products.
Also, a good practice in API design is to allow for service evolution: updating the API should not break the apps consuming it, and new versions of the apps should be able to consume the current API.
When a new API version cannot remain backwards compatible, a common approach is to expose different endpoints; this is why some APIs use apiName/v1/method and apiName/v2/method. In that case, both versions of the API are deployed.
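For instance, a minimal sketch of such side-by-side versions in an Express service (route shapes and payloads are made up for illustration):

```ts
import express from "express";

const app = express();

// v1: the contract the previous revision of the site expects.
app.get("/api/v1/greeting", (_req, res) => {
  res.json({ message: "hello" });
});

// v2: the breaking change; only the new revision of the site calls this.
app.get("/api/v2/greeting", (_req, res) => {
  res.json({ greeting: { text: "hello", locale: "en" } });
});

app.listen(Number(process.env.PORT) || 8080);
```

With both paths deployed in one Cloud Run service, deploying or rolling back the Hosting content is safe in either order, since each site revision only calls the path it was built against.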


Add users for ASP.NET Core from internal website

Sorry, no code here, because I am looking for a better idea, or confirmation that I am on the right track.
I have two websites; let's call them A and B.
A is a website exposed to the internet that only users with a valid account can access.
B is an internal (intranet) website using Windows authentication against Active Directory. I want application B (intranet) to create users for application A.
Application A uses the built-in ASP.NET Core JWT token authentication.
My idea is to expose an API on the external website (A) and let B access this API. I can use CORS to make sure only B has access to the endpoint, but I am not sure whether that is good enough protection. A third-party company will perform security penetration tests, so this might fail them.
Or
I can use Entity Framework to update the AspNetUsers table directly. No idea if this is feasible or the right way of doing things.
Any other solution?
In my opinion, don't expose your internal operations through external solutions like public APIs.
Just make the database accessible to B. That way, server administration is the only security concern, and nobody outside knows how you work. In addition, it does not matter how each application implements user authentication (Windows Authentication or JWT); each keeps its own independent infrastructure.
There are multiple solutions to this one problem. In the end, it really depends on your specific criteria.
You could go with:
B (intranet) website reaching into the database and creating users as needed.
A (internet) website having an API that exposes the necessary endpoint to create users.
A (internet) website with a data migration running every now and then to insert users.
But they all come with their ups and downs; I'll try to break them down for you.
API solution
Ups:
Single responsibility: only one piece of code touches the database, which makes it easier to mitigate side effects.
It is "future proof": you could easily have more services using this API.
Downs:
Increased attack surface: the API is public, so third parties may try to play with it.
The API has to be maintained as the database model changes (one more piece to maintain).
Not the fastest solution to implement.
Database direct access
Ups:
Minimal attack surface.
Very quick to develop.
Downs:
The database model has to be maintained twice.
Migration and deployment have to be coordinated; hard to maintain.
Makes the system more error prone.
Migration on release
Ups:
Cheapest to develop.
Highest performance on inserts.
Downs:
Not flexible.
Very slow for the user.
Many deployments.
Manual work (will be costly over time).
In my opinion, you should go for the API and secure access to it with OAuth. If OAuth is too time-consuming to put in place, maybe you can try some simpler auth protocols.
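To illustrate, here is a rough sketch of the B-side call using the OAuth2 client-credentials grant; every URL, scope, and endpoint name here is hypothetical, and A would validate the bearer token on its end:

```ts
// Runs inside B (the intranet app). Assumes A validates bearer tokens
// issued by a token endpoint both sides trust; all names are illustrative.
async function createUserOnA(email: string, password: string): Promise<void> {
  // 1. Obtain an access token via the OAuth2 client-credentials grant.
  const tokenRes = await fetch("https://auth.example.com/oauth/token", {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams({
      grant_type: "client_credentials",
      client_id: process.env.CLIENT_ID!,
      client_secret: process.env.CLIENT_SECRET!,
      scope: "users:create",
    }),
  });
  if (!tokenRes.ok) throw new Error(`token request failed: ${tokenRes.status}`);
  const { access_token } = await tokenRes.json();

  // 2. Call A's (hypothetical) user-creation endpoint with the token.
  const res = await fetch("https://a.example.com/internal/users", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${access_token}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ email, password }),
  });
  if (!res.ok) throw new Error(`user creation failed: ${res.status}`);
}
```

Note that CORS only restricts browsers; it does nothing against arbitrary HTTP clients, which is why a penetration test will expect server-side authentication like this rather than CORS alone.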

Is there a way to prevent content caching or scraping from an API?

Imagine the following situation. I have an API, and a developer builds an application that retrieves new content from it on a daily basis. She stores this content and serves it to all the instances of an app she developed, so those apps do not have to call the API directly.
Is there a way to prevent this and force the apps (and therefore the end users) to use the API rather than only the application on her server?
I found many questions about how to cache API data, but not how to prevent that. I am fairly new to this, so maybe I am overlooking something, or maybe it is not possible to prevent this.
Thank you in advance!
Assuming you are using Apigee for API management, you have some options. First, consider the options available to you contractually, if this is that sort of business relationship and you can impose certain API behavior on a business partner through a contract.
Separate from the legal side of things, remember that you control your API and the credentials you issue to your API clients. You cannot, practically, control what a client developer does with the credentials you issue: she could promise to embed the credentials in the mobile apps' API client, then change her mind, use them centrally, and design her mobile client to call into her central cache. If you really insist that only mobile app clients should be calling your API, and not a hub/cache server, then you could consider applying constraint policies on your API (within the Apigee proxy, such as Access Control). For instance, you could blacklist your partner's hub/cache server IP address, although that is weak security at best. Or you could apply a constraint that only clients with certain identifying User-Agent strings (mobile OS, client) are allowed to connect to your API. Or use GeoIP filtering to allow only clients from certain regions, if that applies to your use case.
Finally, depending on the data model, you might be able to rate-limit such that a bulk cache becomes impractical: if your edge-client use case is to fetch a single record, but a cache would have to hold thousands of records, then you could impose a per-client rate limit (Quota policy) that is no bother to individual mobile clients but makes the work of a hub/cache server untenable.
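Outside of Apigee's built-in Quota policy, the same idea can be sketched as generic gateway middleware (an illustration only; the limit, header name, and in-memory store are made up):

```ts
import express from "express";

const app = express();

// Naive in-memory daily counter per API key; a real deployment would use
// a shared store (e.g. Redis) behind multiple gateway instances.
const DAILY_LIMIT = 50; // generous for one mobile user, useless for a bulk cache
const counts = new Map<string, { day: string; count: number }>();

app.use((req, res, next) => {
  const key = req.get("X-Api-Key") ?? "anonymous";
  const day = new Date().toISOString().slice(0, 10);
  const entry = counts.get(key);
  const count = entry && entry.day === day ? entry.count + 1 : 1;
  counts.set(key, { day, count });
  if (count > DAILY_LIMIT) {
    res.status(429).send("Quota exceeded");
    return;
  }
  next();
});

// Single-record endpoint: the intended edge-client use case.
app.get("/records/:id", (req, res) => {
  res.json({ id: req.params.id /* , ...record fields */ });
});

app.listen(8080);
```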

Can I add a domain to Firebase hosting via the API?

I want to be able to add domains to Firebase Hosting via the API instead of the web UI. Is that possible?
I want to add potentially hundreds of domains; is there a domain limit per project in Firebase?
As far as I can tell from the entire CLI documentation, there isn't any way to do this.
Let's take a step back and consider what the web UI process involves: generating a TXT record to add to your DNS records and then, after the presence of said TXT record on the domain is verified, providing A records that you (the authorized owner) add to point the domain at your Firebase-hosted site.
In my opinion, this very manual back-and-forth is necessary as a security measure. The only way to take it out of the equation via the CLI would be to provide a means for you to authenticate ownership of a domain (registered with any one of many domain registrars) and to be granted authorization to change your A records. Both are outside the scope of Firebase and could potentially introduce severe security flaws. Regardless, even if such a feature existed, it would still have to be step-by-step and somewhat manual via the CLI rather than the single command it sounds like you're looking for.
It is not possible to add custom domains automatically through an API at this time.
Nor would it allow you to create a reseller or multi-tenant project (i.e., connect a large number of domains or subdomains dynamically), since you cannot have more than about 36 domains connected to one project.
It's possible to add domains using the Firebase Hosting REST API. I am not sure why they didn't put it on their official website, but I checked today and it works: https://developers.google.com/resources/api-libraries/documentation/firebasehosting/v1beta1/java/latest/com/google/api/services/firebasehosting/v1beta1/FirebaseHosting.Sites.Domains.html
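Based on that generated client reference, the underlying REST call appears to be a POST to the site's domains collection; a sketch (the site ID and token acquisition are assumed, and the exact payload shape should be double-checked against the linked docs):

```ts
// Sketch of calling the v1beta1 Hosting REST API to attach a domain.
// An OAuth2 access token with the cloud-platform scope is assumed to
// already be available.
async function addDomain(siteId: string, domainName: string, accessToken: string) {
  const res = await fetch(
    `https://firebasehosting.googleapis.com/v1beta1/sites/${siteId}/domains`,
    {
      method: "POST",
      headers: {
        Authorization: `Bearer ${accessToken}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ domainName }),
    }
  );
  if (!res.ok) throw new Error(`domain creation failed: ${res.status}`);
  return res.json();
}
```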
Answer that I've received from Firebase support:
There is no API yet that would allow you to add custom domains. It was requested as a feature before, but unfortunately we have no more information on that - so for now, only the Console UI allows you to do it.
When it comes to the limits: in a project, a custom domain is attached to a site; there can be 36 sites per project, and for one site there is no hard limit, but we recommend not exceeding 20 custom domains. You can experience technical issues with SSL certs when you exceed 20 domains per site, which we won't be able to troubleshoot since the system was not designed for such use cases.

How to host a daemon process that aperiodically updates a Firebase database?

I have so far been very impressed with the Firebase platform for hosting a client-side single-page app and for data storage. However, I have one component that I don't know where to host...
I want to have a background process that aperiodically updates the database. Whether an update is needed depends on an external source, and although the general timeframe in which updates become available is known, the exact timing is not. My thinking was to have a background task running that has some smarts to determine when an update is needed, and to trigger an update at that time.
I don't know where I would host something like this. I considered running it in a loop in a Firebase Function, but since the pricing model is based on execution time, that would get very expensive, and Functions are not suited for daemon-type processes. The actual "database update" would be suitable for a Function, but not the triggering logic. I have also seen functions-cron, which does offload the triggering logic, but since my updates are not truly periodic, it doesn't seem exactly appropriate. I haven't looked much into App Engine and how it relates to the Firebase platform... so, basically, my question:
What are the options for "reasonably priced" hosting of an always-running background task?
Google App Engine Standard is something you want to look at more. It is reasonably priced, since what you are doing will likely fit into GAE Standard's free daily quota. In GAE Standard, you create a scheduled cron job: GAE will call your task as if it were an incoming web request.
See Firebase doc for integrating with GAE
See GAE doc for cron jobs
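If it helps, here is a minimal sketch of what the GAE Standard side could look like on the Node.js runtime, assuming Express and the Firebase Admin SDK; the path, schedule, and update-check logic are all illustrative:

```ts
import express from "express";
import * as admin from "firebase-admin";

// Uses GAE's default service account; the databaseURL may need to be
// passed explicitly depending on your project setup.
admin.initializeApp();

const app = express();

// Hypothetical check against the external source; returns null when
// there is nothing new.
async function checkExternalSourceForUpdate(): Promise<object | null> {
  return null; // placeholder: poll or query the external source here
}

// Target URL for the cron job declared in cron.yaml
// (e.g. schedule: "every 10 minutes").
app.get("/tasks/check-updates", async (req, res) => {
  // GAE sets this header on genuine cron requests; reject anything else.
  if (req.get("X-Appengine-Cron") !== "true") {
    res.status(403).send("Forbidden");
    return;
  }
  const update = await checkExternalSourceForUpdate();
  if (update) {
    await admin.database().ref("/data/latest").set(update);
  }
  res.status(200).send("ok");
});

app.listen(Number(process.env.PORT) || 8080);
```

Because the updates are aperiodic, the cron schedule only bounds how quickly an available update is noticed; the handler itself decides whether anything actually needs to be written.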

Sharing particular data between realms

I am about to start working on the back end for a mobile app (initially iOS/Android, later also a website), and I am wondering whether Realm could fulfill all my needs.
The basic idea is that there are two types of users: customers and service-providers. The customers send requests to the server once in a while and are subscribed (in real time) to any events that might occur in relation to such a request in the future. Each service-provider listens for specific requests from all customers and is the one who triggers the various events (sends data) for each of those requests.
From the Realm docs, it is obvious that the real-time data sync is not going to be a problem. What I am concerned about is how to model this scenario (customer/service-provider) in the Realm 'world'. Based on what I read, it is preferred to have one realm per user. Therefore, I suppose each user will register and be given a realm; whenever he makes a request, it is stored in his realm. Now the question is how to model the service-providers. There are going to be various service-providers, each responding (triggering various kinds of events up to one hour after the request) to different kinds of requests. (Each user can send any request and can therefore be served by any service-provider.)
I read that Realm supports data sharing among different realms, which could be a partial solution to this problem; however, I was not able to find out whether this 'sharing' can be limited to particular requests, meaning that each service-provider would get only the requests intended for him.
My question is whether this scenario is doable using Realm.
This sounds like a perfect fit for Realm's server-side event-handling. Put simply, Realm offers the ability through our Node SDK to listen for changes across Realms on the server.
So in your example, where each mobile user would have their own Realm, the URL for this would be /~/myRealm, in which the tilde represents the Realm user ID. The Node SDK's event-handling API allows you to register a JS function that will run in response to changes in Realms matched by a regex pattern over Realm URLs. In this case you could use ^/([0-9a-f]+)/myRealm, so that any time any user's myRealm is updated, the server can perform some logic.
In this manner, the server, via the Node SDK, really is a "super-user" or service-provider as you describe. When an event fires, the JS function that runs is provided with the Realm that was updated and a list of indexes pertaining to the objects that were inserted, deleted, or modified. You can then perform any logic in JS, such as using the changed data to call out to another API, or opening the Realm in question (or any other) and writing changes, which will get pushed back out to the respective clients.
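For illustration, a sketch of registering such a listener with the Node SDK (based on the Realm Object Server API of that era; exact signatures varied by version, and the server URLs, credentials, and the Request object type are placeholders):

```ts
const Realm = require("realm");

async function main() {
  // An admin user is required to observe other users' Realms.
  const adminUser = await Realm.Sync.User.login(
    "https://ros.example.com:9443", "admin", "secret");

  Realm.Sync.addListener(
    "realm://ros.example.com:9080", // sync server to observe
    adminUser,
    "^/([0-9a-f]+)/myRealm",        // any user's myRealm, as above
    "change",
    async (changeEvent: any) => {
      // changeEvent.path says whose Realm changed; changeEvent.changes maps
      // object types to lists of inserted/deleted/modified object indexes.
      const requests = changeEvent.realm.objects("Request");
      for (const i of changeEvent.changes.Request?.insertions ?? []) {
        // Service-provider logic: react to the new request here, e.g. by
        // writing a response event that syncs back to the customer.
        console.log("new request at", changeEvent.path, requests[i]);
      }
    });
}

main().catch(console.error);
```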
The full server-side event handling is part of the Realm Professional Edition, but we recently released another way to interact with this, called Realm Functions. This provides the ability, through the server's dashboard, to create the same JS functions that will run in response to changes across Realms. The Developer Edition supports 3 functions, so you can try it out immediately!
