Grant one IAM role access to a large number of DynamoDB tables

I have an AppSync app defined using a master CloudFormation stack and more than a dozen nested stacks. Each nested stack defines a DynamoDB table, an AppSync DataSource for that table, and an IAM role for that DataSource to access that table. The DataSource depends on the role, which depends on the table.
I would like to consolidate these IAM roles, for three reasons:
The role definitions are very repetitive and boilerplate-y.
There are many copies of this app, and it adds up to a lot of IAM roles — enough that we're running close to the soft limits.
Some resolvers use DynamoDB batch operations to access multiple tables, so at least some of the IAM roles must grant access to multiple tables anyway.
I do not want to give the role blanket access to all DynamoDB tables in the account.
The simplest way to grant one role access to every required table would be to list them manually in the policy document. This has the obvious downside of requiring that the policy be manually kept in sync when new tables are added. However, there is also a dependency problem: the DataSource in a nested stack depends on a role in the master stack, which depends on tables in the nested stacks.
I would have liked to use tags: grant access to all DynamoDB tables that carry a certain tag, then set that tag on each table. That way, the IAM role would not need to be edited when a new table was added. But apparently DynamoDB does not support tag-based conditions in IAM policies.
Is there an easy way to grant a single IAM role access to many DynamoDB tables without granting access to all of DynamoDB and without individually listing the tables in the role?

If you can name your tables in a way that gives them a common prefix, you can use a wildcard in the resource ARN:
arn:aws:dynamodb:<Region>:<Account>:table/MyPrefix-*
That will match every table whose name starts with MyPrefix-.
If you are using generated names, you can probably use the AWS::StackName value in place of MyPrefix, but be aware that with nested stacks that value may get shortened.
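For example, here is a minimal sketch of a consolidated role in the master stack, assuming the tables share the MyPrefix- naming convention; the logical ID, policy name and action list are illustrative rather than taken from the original templates:

    AppSyncTablesRole:
      Type: AWS::IAM::Role
      Properties:
        AssumeRolePolicyDocument:
          Version: "2012-10-17"
          Statement:
            - Effect: Allow
              Principal:
                Service: appsync.amazonaws.com
              Action: sts:AssumeRole
        Policies:
          - PolicyName: AppTablesAccess
            PolicyDocument:
              Version: "2012-10-17"
              Statement:
                - Effect: Allow
                  Action:
                    - dynamodb:GetItem
                    - dynamodb:PutItem
                    - dynamodb:UpdateItem
                    - dynamodb:DeleteItem
                    - dynamodb:Query
                    - dynamodb:Scan
                    - dynamodb:BatchGetItem
                    - dynamodb:BatchWriteItem
                  Resource:
                    # Tables and their indexes that follow the MyPrefix- naming convention.
                    - !Sub "arn:aws:dynamodb:${AWS::Region}:${AWS::AccountId}:table/MyPrefix-*"
                    - !Sub "arn:aws:dynamodb:${AWS::Region}:${AWS::AccountId}:table/MyPrefix-*/index/*"

Each nested stack's DataSource can then point at this single role's ARN, and new tables pick up the permissions automatically as long as they follow the prefix.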

Related

Azure Cosmos DB secret connection strings per database

In my Azure Cosmos DB account, I can add multiple databases (each containing multiple collections).
However, I only seem to find account-level connection strings (secrets) that are valid for every database, differing only in the database name section.
I find this odd. Is this expected? If I want more granular control do I need to create separate accounts for each database?
PS: I'm using the Mongo API if it's somehow relevant.
Cheers
The account-level connection strings you mentioned in the question are master keys. Based on this document, Azure Cosmos DB uses two types of keys to authenticate users and provide access to its data and resources.
Master keys cannot be used to provide granular access to containers and documents.
If you want more granular control, please look into resource tokens, which provide access to specific containers, partition keys, documents, attachments, stored procedures, triggers, and UDFs. For more details, please refer to this link.

OpenEdge SQL DBA Account Setup

I'm setting up SQL access in a newly created OpenEdge 11.5 database.
In checking the contents of the sysdbauth table using "select * from sysprogress.sysdbauth", I see that there are two users set up by default: sysprogress and a user with the name of the Linux user account that was used to create the database.
I'm looking for recommendations as to how to handle these two accounts. Obviously I want to have an account to use for DBA tasks. Should I use one of these accounts for the purpose? If so, what should I do with the other account?
Is it possible (and safe) to be deleting either of these predefined accounts?
On page 175 of the Database Administration guide you can read about default users and why they are created:
Tables used from SQL only
An SQL database administrator (DBA) is a person assigned a sysdbauth record in the database. SQL DBAs have access to all meta data and data in the database. To support internal schema caching, every OpenEdge database begins with a DBA defined as "sysprogress." However, OpenEdge restricts the use of "sysprogress."
When you create an OpenEdge database using the PROCOPY or PRODB commands, and the database does not have any _User records defined (from the source database, for example), then a DBA is automatically designated with the login ID of the person who creates the database. This person can log into the database and use the GRANT statement to designate additional SQL DBAs, and use the CREATE USER and DROP USER statements to add and delete user IDs. When creating users, this DBA can also specify users as SQL-only users, who can only access the database through SQL.
There are several knowledge base entries around the task of deleting or disabling the default users.
http://knowledgebase.progress.com/articles/Article/P5094
http://knowledgebase.progress.com/articles/Article/P161411
This suggests that it's really safe to delete or disable these accounts but you should:
1) Create replacing accounts first.
2) As always: test in a separate environment first and not in production!
Yes, in fact Progress kind of expects you to do so. Create a root account and get rid of both. It's fine.
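For reference, a hedged sketch of the SQL side of that, using the GRANT / CREATE USER / DROP USER statements the manual quote mentions; the user name and password are placeholders, and you should confirm against the knowledge base articles above that dropping the defaults is safe on your OpenEdge version:

    -- Create the replacement DBA first (placeholder credentials).
    CREATE USER 'dbadmin', 'S3cretPassw0rd';
    COMMIT;
    GRANT DBA, RESOURCE TO 'dbadmin';
    COMMIT;
    -- Reconnect as dbadmin and verify it works, then remove the defaults.
    DROP USER 'sysprogress';
    DROP USER 'originalcreator';   -- the Linux account that created the database
    COMMIT;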

Maintain users data integrity across multiple databases for ASP.NET

I have 2 questions.
I am developing an ASP.NET web application that uses the standard ASP.NET membership. We intend to have the membership tables in 1 database. We have 2 other databases that store data for 2 different applications.
Shared - Membership info
DB1 - Application1
DB2 - Application2
Both applications uses the membership info in the "Shared" database.
The Shared database has a table called userdetals that will store additional user info such as name, phone and job title, for example.
However, DB1 also has a table called employees that stores the same fields, such as name, phone and job title. Each employee may be a user.
Also, for each table in DB1 and DB2 we keep an audit trail, i.e. which user updated the tables in the database. Hence, we need to store UserID in the tables of DB1 and DB2.
We thought of having a Users table added in DB1 and DB2, so every time a new user is created in Shared, the same user will be created in the Users table in DB1 and DB2.
Our questions are:
What is the best way to maintain database integrity given the above setup? E.g. each employee is assigned as a user; if any fields in DB1 such as username, name and phone are updated, then the same fields in the Shared DB should be updated, and vice versa.
Is it advisable to have the membership data in a separate database in our case? What is the best solution, since almost all the tables in DB1 and DB2 reference userID in the Shared database?
1.
The technology you are looking for is Merge Replication (http://bit.ly/KUtkPl). Essentially, you would create a common Users table on both databases, create a Merge Replication publisher on one application database, and then create a Merge Replication subscriber on the other application database. You could also set this up to synchronize the schema as well (which also means you only need to create the table once on the publishing database: it will push the table, schema with data, to the subscriber).
But if you are looking for more of a manual approach, I would not denormalize the user data to the employee(s) table, instead create a supplemental table and a view on each Application server. Kind of like inheritance in OOP: Any common data between the Employee table and Users table, leave on the shared user table. Any unique columns for the Employee, add to the supplemental table only and store on each database. The view would merge both the supplemental table and shared table. (http://bit.ly/9KPxt0)
Even if you do use Replication Services, I would still use this view design with the synchronized table.
You COULD update through the view, but I would not recommend that. It has been done before successfully in production, but there are too many constraints that could blow up (http://bit.ly/LJCJev). Instead update the table directly that holds the data.
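As a rough illustration of that view design (table and column names are made up for the example, and it assumes the shared data is reachable from the application database, either via replication or via a three-part name on the same instance):

    -- Supplemental employee columns live locally; common user columns come from Shared.
    CREATE VIEW dbo.vEmployee
    AS
    SELECT u.UserID,
           u.UserName,
           u.Name,
           u.Phone,
           u.JobTitle,
           e.EmployeeNumber,       -- example of a column unique to DB1
           e.Department
    FROM Shared.dbo.UserDetails AS u
    JOIN dbo.EmployeeSupplement AS e
      ON e.UserID = u.UserID;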
Absolutely avoid "triggers that synchronize". Too risky (they can cause an infinite loop on your SQL Server) and too much maintenance overhead.
2.
I would do the Merge Replication; it is just less for you to worry about and maintain after it is configured correctly. But your approach is OK if you want something more manual or if you are not familiar with Replication Services in SQL... just use the view noted above and you'll be set.
Easy way:
You can create a linked server to these databases.
Then create synonyms for easy access to the tables of each database.
Create triggers to update data when any data is updated in each table.
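A hedged sketch of that setup; the server, database and table names are placeholders, and the linked server step is only needed if Shared lives on a different SQL Server instance (on the same instance a three-part name is enough):

    -- Register the remote server that hosts the Shared database.
    EXEC sp_addlinkedserver
         @server     = N'SHAREDSRV',
         @srvproduct = N'',
         @provider   = N'SQLNCLI',
         @datasrc    = N'shared-sql-host';

    -- Synonym so application code can say dbo.SharedUsers instead of the four-part name.
    CREATE SYNONYM dbo.SharedUsers FOR SHAREDSRV.Shared.dbo.Users;

    SELECT UserID, UserName FROM dbo.SharedUsers;

Keep in mind the earlier answer's warning, though: synchronizing triggers can loop and are hard to maintain, so treat that last step with care.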

Forms authentication table locations

When using aspnet_regsql to create the base tables for forms authentication, is it recommended that these tables be stored inside the application database catalog, or should a database catalog just for authentication be created?
Thanks!
To clarify what jro said: you can join across DBs, but you'll lose some performance there.
Secondly, if you want to maintain referential constraints you'll need the tables in the same DB. What I mean is: if you have a new table for your app, say CustomerOrders, and you want to ensure the UserID column values exist in the Users table, you'll need those tables in the same DB.
It doesn't matter.
If you need to eventually join records on the membership tables with your own catalog, I would suggest using an application database.
Otherwise, use your preferences for database management.

Best Practice ASP.NET Membership: User tables in the same datastore?

Is it better to extend my business database with the tables of the ASP.NET Membership security model, or should I have a different datastore where I only manage Identities and Roles? Basically, 1 or 2 databases?
This can depend on scale. If it's an enterprise solution with different apps sharing one membership source the answer is simple - separate them. There might also be performance reasons why you would want to separate this data from the rest of the app. Arguably these tables do not belong in a data warehouse for example.
The only thing the 2 databases solution doesn't give you is referential integrity. If you extend your membership tables to hold more application specific details about the user, and these tables need to link into the main database then you might want to keep them together. Otherwise you would need some sort of replication job maintaining this for you.
This is quite subjective, but unless those users are going to be using more than one database, then I'd say keep them in the same db.
I would only use a separate database for users and roles if those users and roles were used in more than one database.
So no, I'd never use two. I might however use three.
Which database platform are you using? If one that supports schemas within a database, e.g. SQL Server 2008, then you can put your membership tables into their own schema, for neatness. You can also add cross-schema foreign keys if required.
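For instance, a small sketch of that on SQL Server (the schema name, CustomerOrders and the aspnet_Users/UserId names are illustrative; also note the aspnet_* stored procedures reference the dbo schema, so test before moving the standard tables):

    CREATE SCHEMA auth AUTHORIZATION dbo;
    GO
    -- Move a membership table into the auth schema (repeat for the others).
    ALTER SCHEMA auth TRANSFER dbo.aspnet_Users;
    GO
    -- Cross-schema foreign key from an application table to the membership table.
    ALTER TABLE dbo.CustomerOrders
        ADD CONSTRAINT FK_CustomerOrders_Users
        FOREIGN KEY (UserId) REFERENCES auth.aspnet_Users (UserId);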
