Using the AWS IoT Rules functionality, I can define a rule that maps MQTT data to DynamoDB. Is it possible, instead of using a DynamoDB table in the same account, to use a third-party DynamoDB resource from a different account to achieve the same result? If so, how can this be achieved?
You can use cross-account credentials in an IoT rule to access resources belonging to another AWS account. The process is described in detail in an AWS blog post on cross-account actions in IoT rules.
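As a sketch (not the blog's exact recipe): the rule itself is created like any other, and the cross-account part lives in the IAM role referenced by roleArn, which must be allowed to write to the other account's table. All names and ARNs below are placeholders:

```typescript
import { IoTClient, CreateTopicRuleCommand } from "@aws-sdk/client-iot";

const iot = new IoTClient({ region: "us-east-1" });

async function createCrossAccountRule() {
  await iot.send(
    new CreateTopicRuleCommand({
      ruleName: "CrossAccountDynamoRule", // placeholder name
      topicRulePayload: {
        sql: "SELECT * FROM 'sensors/+/telemetry'",
        actions: [
          {
            dynamoDBv2: {
              // The table lives in the other account; this role is the
              // cross-account part: it must be granted write access to
              // that foreign table (see the blog post for the IAM setup).
              putItem: { tableName: "ThirdPartyTelemetryTable" },
              roleArn:
                "arn:aws:iam::111111111111:role/iot-cross-account-write",
            },
          },
        ],
      },
    })
  );
}

createCrossAccountRule();
```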
I have an idea to make my apps backend-driven, and for this reason I want to query Firebase Remote Config for values that carry condition properties describing when they should be applied (for example: the parameter show_banner should be applied only for Country: Portugal). I also want the possibility to use A/B testing for such scenarios.
In other words, I want my backend microservice to make requests to Firebase Remote Config (with the A/B testing feature) on behalf of mobile applications (I have information about the app version, country, mobile ID, etc. in the backend). Is there any REST API that can help me achieve this?
The REST API for Remote Config only allows managing the template, i.e. the equivalent of the operations you can perform in the Firebase console.
There is no public REST (or other server-side) API to get the set of Remote Config values for a specific device. That operation is only possible through the client-side SDKs.
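For illustration, the Firebase Admin SDK for Node.js exposes that same template-management surface; a minimal sketch, assuming application-default credentials:

```typescript
import { initializeApp, applicationDefault } from "firebase-admin/app";
import { getRemoteConfig } from "firebase-admin/remote-config";

// Assumes GOOGLE_APPLICATION_CREDENTIALS points at a service-account key.
initializeApp({ credential: applicationDefault() });

async function inspectTemplate() {
  // You can read and publish the template (parameters, conditions)...
  const template = await getRemoteConfig().getTemplate();
  console.log("conditions:", template.conditions.map((c) => c.name));
  console.log("parameters:", Object.keys(template.parameters));
  // ...but there is no call here that evaluates the template for a
  // specific device; that resolution happens in the client-side SDKs.
}

inspectTemplate();
```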
I am looking to potentially use Firebase Cloud Messaging to flexibly add push notifications to my platform. The issue I am running into is that Firebase advises against sharing actual Firebase "projects" between "logically distinct apps."
For a normal multitenant SaaS offering, this would be a single project representing all the client environments, with end users' data intermingled in a single location. In that case, we would logically (in our backend logic) prevent clients from accessing each other's customer data.
The wrinkle is that we actually run individual, nearly identical application servers (and databases) within our private AWS cluster for each customer. We think this would still work by programmatically controlling access via each individual backend, allowing only those servers to communicate with the Firebase Admin SDK, roughly as in the sketch below.
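A minimal sketch of what I mean, assuming the Node.js Admin SDK; the service-account path and the helper are hypothetical:

```typescript
import { initializeApp, cert } from "firebase-admin/app";
import { getMessaging } from "firebase-admin/messaging";

// Hypothetical path: each tenant's app server holds credentials for the
// single shared Firebase project.
const app = initializeApp({
  credential: cert("/secrets/firebase-service-account.json"),
});

// Hypothetical helper: a tenant backend only ever loads *its own*
// customers' device tokens before calling FCM, so isolation is enforced
// in our backend logic, not by Firebase.
async function notifyCustomer(deviceToken: string, body: string) {
  await getMessaging(app).send({
    token: deviceToken,
    notification: { title: "Update", body },
  });
}
```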
Does anyone know if this is within best practices, or does it violate the expected limitations?
I have to build an API using Firebase, and need some help with design choices. I want to be able to sell the API to users, who can then use it to build/integrate their own applications. Users will have both read and write privileges.
General information:
I'm using Firestore db with email & password authentication.
Only specifically assigned users may use the API
Each user may only access specific documents concerning them.
I've noticed 3 different ways in which an API can be provided to a user of my Firestore db:
HTTPS-triggered Cloud Functions (https://firebase.google.com/docs/functions/http-events)
Using the SDK (https://firebase.google.com/docs/firestore/client/libraries)
Using the REST API provided by Firebase (https://firebase.google.com/docs/firestore/use-rest-api)
API requirements:
Used only by users that I specifically grant access to (email & password login)
I want to limit these users to only a couple of read/write tasks that they're able to perform.
It needs to be safe.
My current approach is:
Use the 3rd option - the REST API provided by Firebase (thereby giving users the projectId and API key)
Add authorised users to the list of authorised accounts on Firebase, and limit access using custom claims and database rules (see the sketch below).
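For the custom-claims part of that approach, I imagine something like this minimal sketch with the Admin SDK (the claim names apiCustomer and customerId are placeholders I made up):

```typescript
import { initializeApp } from "firebase-admin/app";
import { getAuth } from "firebase-admin/auth";

initializeApp();

// Hypothetical claims: apiCustomer marks a user as an API customer, and
// customerId is referenced by security rules to scope document access,
// e.g. request.auth.token.customerId == resource.data.customerId.
async function grantApiAccess(uid: string, customerId: string): Promise<void> {
  await getAuth().setCustomUserClaims(uid, { apiCustomer: true, customerId });
}
```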
My questions:
It seems that HTTPS-triggered Cloud Functions (option 1) are normally used for API building. Are options 2 and 3 unsafe?
What are the normal use cases of the 3 options? When should each be used and when should each be avoided?
Are there any obvious flaws in my choice of option 3?
Any other useful information about making these design decisions will be much appreciated.
Thank you in advance
TL;DR: It depends on what you want to do with this API, and how many and what type of devices/users will be calling it.
Before answering your questions I will list below the advantages of each approach:
Cloud Functions:
Cloud Functions is a Functions-as-a-Service solution, so it is also a hosting service for your API: you won't have to provision, manage, or upgrade servers, and the API will automatically scale based on load. This option also inherits the pros of the SDKs and client libraries, since your function code will use them to connect to Firestore anyway.
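A minimal sketch of option 1 as an HTTPS endpoint, assuming the firebase-functions v2 API and a hypothetical customers/{uid} document per user:

```typescript
import { onRequest } from "firebase-functions/v2/https";
import { initializeApp } from "firebase-admin/app";
import { getAuth } from "firebase-admin/auth";
import { getFirestore } from "firebase-admin/firestore";

initializeApp();

// Hypothetical endpoint: verifies the caller's Firebase ID token, then
// returns only the document belonging to that caller. Whatever read/write
// tasks you allow your users are encoded here, in your own code.
export const getMyDoc = onRequest(async (req, res) => {
  const idToken = (req.headers.authorization ?? "").replace("Bearer ", "");
  try {
    const decoded = await getAuth().verifyIdToken(idToken);
    const snap = await getFirestore().doc(`customers/${decoded.uid}`).get();
    res.json(snap.data() ?? {});
  } catch {
    res.status(401).send("Unauthorized");
  }
});
```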
SDKs and client libraries:
This is the easiest and most optimized way to reach Firestore. However, environments where running a native library is not possible, such as IoT devices, will be left out of your solution, so consider this while evaluating this option.
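A minimal sketch of option 2 with the modular web SDK, assuming email & password sign-in and the same hypothetical customers/{uid} layout:

```typescript
import { initializeApp } from "firebase/app";
import { getAuth, signInWithEmailAndPassword } from "firebase/auth";
import { doc, getDoc, getFirestore } from "firebase/firestore";

// Placeholder config; the real values come from your project settings.
const app = initializeApp({ apiKey: "...", projectId: "my-project" });

async function readOwnDoc(email: string, password: string) {
  const { user } = await signInWithEmailAndPassword(
    getAuth(app),
    email,
    password
  );
  // Security rules (plus custom claims) must restrict each user
  // to the documents that concern them.
  const snap = await getDoc(doc(getFirestore(app), "customers", user.uid));
  return snap.data();
}
```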
Cloud Firestore REST API:
Every device properly authorized to access Firestore will be able to do so.
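A minimal sketch of option 3, signing in through the Identity Toolkit REST endpoint and then calling the Firestore REST API; the API key, project ID, and document path are placeholders:

```typescript
// Placeholders: the web API key and project ID come from project settings.
const API_KEY = "...";
const PROJECT_ID = "my-project";

async function restRead(email: string, password: string, docPath: string) {
  // 1. Exchange email & password for a Firebase ID token.
  const signIn = await fetch(
    `https://identitytoolkit.googleapis.com/v1/accounts:signInWithPassword?key=${API_KEY}`,
    {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ email, password, returnSecureToken: true }),
    }
  );
  const { idToken } = await signIn.json();

  // 2. Call the Firestore REST API; security rules still apply.
  const res = await fetch(
    `https://firestore.googleapis.com/v1/projects/${PROJECT_ID}/databases/(default)/documents/${docPath}`,
    { headers: { Authorization: `Bearer ${idToken}` } }
  );
  return res.json();
}
```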
NOTE: For both the SDK and the REST API you will need to consider hosting for your API, either on Cloud Functions (as mentioned), the App Engine standard environment, the App Engine flexible environment, or a Compute Engine instance.
All that being said, it's up to you, based on your API's usage and requirements, to decide which option is best considering the points above.
As for security, I'd say that all options can be secure if Firebase security rules and Firebase Auth are set up correctly.
I'm developing an application that follows Feldman's Elm SPA example quite closely, with the API hosted on AWS API Gateway. My problem is the following:
I need to sign my API calls with AWS Signature v4. This is a less trivial task in Elm than I initially thought:
There is no Elm AWS signature package, so I naturally looked at JS libraries to use via Ports.
Option 1: Use the AWS Amplify API, which does all the work => but then how do I process the result in the most Elm-like way (ideally with RemoteData)?
Option 2: Use a third-party JS library just to sign the request forged by Elm's Http.request, and send/process the HTTP request via Elm => so far I have found only buggy implementations of AWS SigV4, and I would prefer an official implementation anyway.
In both cases, I'm stuck on the Main Parent / Page Children communication: I can send the request (option 1 or 2) via a port from the Child, but then how can the Child receive the response to its request? Indeed, all responses come back into Elm through the same port subscription. Do I need to 'tag' the outgoing requests and then dispatch the responses based on the tag (roughly as in the sketch below)? I'm afraid it will look horrible and won't scale well.
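For reference, this is roughly the tag-and-dispatch idea on the JS side of the ports; apiRequest/apiResponse are hypothetical port names, and signRequest is a stub for whichever SigV4 signer ends up being used:

```typescript
// Each outgoing request carries a caller-chosen tag that is echoed back
// unchanged, so the Elm side can route the response to the page that
// asked for it.
interface PortRequest {
  tag: string; // e.g. "HomePage.FetchArticles"
  method: string;
  path: string;
  body?: unknown;
}

interface PortResponse {
  tag: string;
  status: number;
  body: unknown;
}

// Provided by the Elm runtime after Elm.Main.init(...).
declare const app: {
  ports: {
    apiRequest: { subscribe(cb: (req: PortRequest) => void): void };
    apiResponse: { send(msg: PortResponse): void };
  };
};

// Stub: replace with a real SigV4 signer (e.g. Amplify, or a signing lib).
async function signRequest(
  req: PortRequest
): Promise<{ url: string; init: RequestInit }> {
  throw new Error("plug a SigV4 signer in here");
}

app.ports.apiRequest.subscribe(async (req) => {
  const { url, init } = await signRequest(req);
  const res = await fetch(url, init);
  app.ports.apiResponse.send({
    tag: req.tag, // the echoed tag is what makes dispatch possible
    status: res.status,
    body: await res.json(),
  });
});
```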
Please note that it is a question about App pattern and architecture. It is not a basic question about Elm Ports (I already successfully call the API from Elm).
Any recommendations or pointers appreciated. Thanks!
Additional info about my setup (following the first comment)
I follow the AWS best practices (scenario #3: Access Resources with API Gateway and Lambda with a User Pool)
Front-end App users are managed by:
Cognito User Pool (signup, sign-in, etc...)
Cognito Identity Pool (map users with IAM role to access AWS resources, including the API Gateway)
Back-end is Serverless: API Gateway + Lambda functions
API Gateway: Lambda proxy integration + Authorization = IAM => this requires the AWS Signature
I don't use API keys because:
I don't want to provide any access to the back-end to unauthenticated users
I need to identify the user from the request headers
I don't want to rely on long-term secrets for authentication on client side
I'm new to AWS services and would like to understand whether it is possible to use DynamoDB and Cognito Sync in this specific scenario:
publish data from a company office to a few tables in one central DynamoDB
use Cognito Sync in a mobile app to periodically copy those tables to storage local to the mobile device (unidirectional sync from the central DynamoDB to the remote mobile devices)
It is my understanding that Cognito Sync is normally used to sync a user's profile data, but I would like to understand whether it is possible to use it in this different way (one DynamoDB repository for all the authorized mobile users).
Thank you,
Mario
No. Amazon Cognito Sync provides its own per-identity storage (i.e., storage not shared between your app users). But if you want a shared DynamoDB database, you can still use Amazon Cognito Identity with a role that gives your users read access to those tables.
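For example, an authenticated user could read the shared tables with credentials vended by Cognito Identity. A minimal sketch with the AWS SDK for JavaScript v3; the pool ID, login provider key, and table name are placeholders:

```typescript
import { DynamoDBClient, QueryCommand } from "@aws-sdk/client-dynamodb";
import { fromCognitoIdentityPool } from "@aws-sdk/credential-providers";

const client = new DynamoDBClient({
  region: "us-east-1",
  credentials: fromCognitoIdentityPool({
    clientConfig: { region: "us-east-1" },
    identityPoolId: "us-east-1:00000000-0000-0000-0000-000000000000",
    logins: {
      // Maps the user's sign-in (e.g. a User Pool ID token) to the identity.
      "cognito-idp.us-east-1.amazonaws.com/us-east-1_EXAMPLE": "ID_TOKEN",
    },
  }),
});

async function readSharedTable() {
  // The IAM role attached to authenticated identities only needs
  // dynamodb:Query / dynamodb:GetItem on the shared tables.
  return client.send(
    new QueryCommand({
      TableName: "CentralCompanyData",
      KeyConditionExpression: "pk = :pk",
      ExpressionAttributeValues: { ":pk": { S: "office#lisbon" } },
    })
  );
}
```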
Albert
You can use AWS Cognito Sync events to trigger an AWS Lambda function, then use the Cognito Sync data passed to the event to update the DynamoDB table.
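A hedged sketch of such a trigger in Node.js, assuming the documented Cognito Sync "SyncTrigger" event shape; the table name is a placeholder:

```typescript
import { DynamoDBClient, PutItemCommand } from "@aws-sdk/client-dynamodb";

const ddb = new DynamoDBClient({});

// Shape follows the documented Cognito Sync "SyncTrigger" Lambda event.
interface SyncTriggerEvent {
  eventType: string;
  identityId: string;
  datasetName: string;
  datasetRecords: Record<
    string,
    { oldValue?: string; newValue?: string; op?: string }
  >;
}

export const handler = async (event: SyncTriggerEvent) => {
  if (event.eventType === "SyncTrigger") {
    for (const [key, record] of Object.entries(event.datasetRecords)) {
      await ddb.send(
        new PutItemCommand({
          TableName: "UserDatasetMirror", // placeholder table name
          Item: {
            identityId: { S: event.identityId },
            recordKey: { S: `${event.datasetName}#${key}` },
            value: { S: record.newValue ?? "" },
          },
        })
      );
    }
  }
  // A sync trigger must return the event (optionally with modified records).
  return event;
};
```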