MariaDB table usage statistics by user

I have a MariaDB instance with a database shared by a number of users. I would like to track which tables within that database each user is accessing, preferably including the type of operation (i.e. read/write).
Is such a thing possible?
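One approach worth investigating is the MariaDB Audit Plugin (server_audit): its TABLE events log the connecting user, the table touched, and the kind of access (READ or WRITE). A minimal sketch, assuming the plugin is available in your MariaDB build:

    -- Load the audit plugin and log only table-access events.
    INSTALL SONAME 'server_audit';
    SET GLOBAL server_audit_events  = 'TABLE';
    SET GLOBAL server_audit_logging = ON;  -- defaults to server_audit.log in the datadir

Each log line includes the user, host, database.table, and operation, so per-user, per-table statistics can be aggregated from the log afterwards.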

Related

Archive Data from Cosmos DB - Mongo DB

In the project I am working on, we have a database per tenant, and each tenant consists of at least one department. One of our requirements is that when an admin user deletes a department using the custom frontend we've provided, the system should first archive that department's data to blob storage before the data is deleted. The same goes for the tenant: we need to archive the data before that tenant's database is removed from the account.
Now, my question: is there any best practice for doing this? We are planning to retrieve all the data from all collections with a Mongo query based on the department id (which is also the partition key) and then send it to blob storage. The challenge is executing that query: the amount of data can be huge, and the RUs required for the operation may hurt the performance of the system for other users who are using it while we remove the data.
I looked at mongodump and mongoexport, but these are standalone applications, so we can't execute them from our code.
Any ideas? Thanks a lot.
I think one way to solve this is by using the Change Feed, as it really helps and simplifies writing a carbon copy somewhere else.
However, as of now the change feed processor won't notify you about deleted documents, so you can't listen for them; that feature is planned.
Your best bet is to write a custom application that does the archiving using the SQL query language support.
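If you do go the custom-application route, the query itself can stay cheap because the department id is the partition key, so the archiver only ever reads a single partition. A sketch of the parameterized SQL API query it would page through (the departmentId property name is an assumption):

    SELECT * FROM c WHERE c.departmentId = @departmentId

Paging through the results with continuation tokens and a small page size (e.g. MaxItemCount in the .NET SDK) spreads the RU cost over time instead of spiking it, which addresses the performance concern.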

Grant one IAM role access to a large number of DynamoDB tables

I have an AppSync app defined using a master CloudFormation stack and more than a dozen nested stacks. Each nested stack defines a DynamoDB table, an AppSync DataSource for that table, and an IAM role for that DataSource to access that table. The DataSource depends on the role, which depends on the table.
I would like to consolidate these IAM roles, for three reasons:
The role definitions are very repetitive and boilerplate-y.
There are many copies of this app, and it adds up to a lot of IAM roles — enough that we're running close to the soft limits.
Some resolvers use DynamoDB batch operations to access multiple tables, so at least some of the IAM roles must grant access to multiple tables anyway.
I do not want to give the role blanket access to all DynamoDB tables in the account.
The simplest way to grant one role access to every required table would be to list them manually in the policy document. This has the obvious downside of requiring that the policy be manually kept in sync when new tables are added. However, there is also a dependency problem: the DataSource in a nested stack depends on a role in the master stack, which depends on tables in the nested stacks.
I would have liked to use tags: grant for all DynamoDB tables that have a certain tag, then set that tag for each table. This way, the IAM role would not need to be edited when a new table was added. But apparently DynamoDB does not support tag-based conditions.
Is there an easy way to grant a single IAM role access to many DynamoDB tables without granting access to all of DynamoDB and without individually listing the tables in the role?
If you can name your tables in a way that gives them a common prefix, you can use wildcards in the resource:
arn:aws:dynamodb:<Region>:<Account>:table/MyPrefix-*
That will match every table whose name starts with MyPrefix-.
If you are using generated names, you can probably use the AWS::StackName value in place of MyPrefix, but be aware that with nested stacks that value may get shortened.
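For illustration, a policy statement built on that wildcard might look like the following (the action list is only an example set; the second resource entry is there because secondary indexes have their own ARNs):

    {
      "Effect": "Allow",
      "Action": [
        "dynamodb:GetItem",
        "dynamodb:PutItem",
        "dynamodb:UpdateItem",
        "dynamodb:DeleteItem",
        "dynamodb:Query",
        "dynamodb:BatchGetItem",
        "dynamodb:BatchWriteItem"
      ],
      "Resource": [
        "arn:aws:dynamodb:<Region>:<Account>:table/MyPrefix-*",
        "arn:aws:dynamodb:<Region>:<Account>:table/MyPrefix-*/index/*"
      ]
    }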

OpenEdge SQL DBA Account Setup

I'm setting up SQL access in a newly created OpenEdge 11.5 database.
In checking the contents of the sysdbauth table using "select * from sysprogress.sysdbauth", I see that there are two users set up by default: sysprogress and a user with the name of the Linux account that was used to create the database.
I'm looking for recommendations as to how to handle these two accounts. Obviously I want to have an account to use for DBA tasks. Should I use one of these accounts for the purpose? If so, what should I do with the other account?
Is it possible (and safe) to delete either of these predefined accounts?
On page 175 of the Database Administration guide you can read about default users and why they are created:
Tables used from SQL only
An SQL database administrator (DBA) is a person assigned a sysdbauth record in the database. SQL DBAs have access to all meta data and data in the database. To support internal schema caching, every OpenEdge database begins with a DBA defined as "sysprogress." However, OpenEdge restricts the use of "sysprogress."
When you create an OpenEdge database using the PROCOPY or PRODB commands, and the database does not have any _User records defined (from the source database, for example), then a DBA is automatically designated with the login ID of the person who creates the database. This person can log into the database and use the GRANT statement to designate additional SQL DBAs, and use the CREATE USER and DROP USER statements to add and delete user IDs. When creating users, this DBA can also specify users as SQL-only users, who can only access the database through SQL.
There are several knowledge base entries around the task of deleting or disabling the default users.
http://knowledgebase.progress.com/articles/Article/P5094
http://knowledgebase.progress.com/articles/Article/P161411
These suggest that it is indeed safe to delete or disable these accounts, but you should:
1) Create replacement accounts first.
2) As always: test in a separate environment first and not in production!
Yes, in fact Progress kind of expects you to do so. Create a root account and get rid of both. It's fine.
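A minimal sketch of that sequence in OpenEdge SQL (the user name and password are placeholders; check the KB articles above before actually removing sysprogress):

    -- Create and empower the replacement DBA first.
    CREATE USER 'newdba', 'S3curePass';
    GRANT DBA TO 'newdba';
    -- Reconnect as newdba and verify access, then retire the defaults:
    DROP USER 'sysprogress';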

Oracle trigger audit - How to log App user

I'm building a website in which people will be able to manipulate the data in several DB tables.
Every time someone performs a CUD operation, I want to log it (like an audit).
The way I see it, I should use triggers for the CUD operations, but I can't figure out how to log the user, since triggers don't accept any input parameters.
NOTE: the user I want to log is the network user. I know this user when they access the website (the user to log <> the user logged into the DB).
ANOTHER NOTE: none of my tables stores creation date, creator, update date, or updater. I just don't know why they should when I have the audit tables.
So this is the basic problem with web apps. If you have a huge user base (more than, say, 500), then provisioning them in the database, while very easily doable, is something most web programmers, sadly, don't want to deal with; they want only ONE connection user for the database. You have already shot yourself in the foot because you don't have created_by, modified_by, created_date, and modified_date in the tables. To fix this you really only have one choice:
Put the columns on the tables and force the UI people to push the "network" user name through. The rest of the columns can be handled by one very simple trigger.
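A sketch of that trigger, assuming the web tier stamps each pooled session with the network user via DBMS_SESSION.SET_IDENTIFIER (table and column names are illustrative):

    CREATE OR REPLACE TRIGGER orders_audit_trg
      BEFORE INSERT OR UPDATE ON orders
      FOR EACH ROW
    BEGIN
      -- CLIENT_IDENTIFIER holds whatever the app set for this session:
      -- the network user, not the shared DB login.
      IF INSERTING THEN
        :NEW.created_by   := SYS_CONTEXT('USERENV', 'CLIENT_IDENTIFIER');
        :NEW.created_date := SYSDATE;
      END IF;
      :NEW.modified_by   := SYS_CONTEXT('USERENV', 'CLIENT_IDENTIFIER');
      :NEW.modified_date := SYSDATE;
    END;
    /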
Why DB audit will not help you:
The DB audit feature ONLY deals with users defined as actual users in the database; sorry, that is just the way it is.
Here are some things to look at when dealing with a front-end system.
You can write SPs or packages that execute as the schema owner but can be run by ANYONE who is defined in the database, and those can handle all the INSERT, UPDATE, and DELETE operations on the schema they are defined in; simply give other users the EXECUTE privilege on that set of SPs. This gives the DB fine-grained control over how tables are manipulated, and you only have to grant the SELECT privilege to all the users.
You can write an SP or package in the SYSTEM schema that allows a group of people to provision users on the system by granting them the EXECUTE privilege on that SP. Within that SP you define what ROLES they are assigned, and you can therefore control all their access.
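A sketch of the first idea, a definer-rights procedure owned by the schema owner, with callers granted only EXECUTE (all names are illustrative):

    -- Runs with the schema owner's rights; callers never receive direct
    -- INSERT/UPDATE/DELETE privileges on the table itself.
    CREATE OR REPLACE PROCEDURE set_order_status (
        p_order_id IN orders.order_id%TYPE,
        p_status   IN orders.status%TYPE
    ) AUTHID DEFINER
    AS
    BEGIN
      UPDATE orders
         SET status = p_status
       WHERE order_id = p_order_id;
    END;
    /
    GRANT EXECUTE ON set_order_status TO app_user_role;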

Maintain users data integrity across multiple databases for ASP.NET

I have 2 questions.
I am developing an ASP.NET web application that uses the standard ASP.NET membership. We intend to keep the membership tables in one database, and we have two other databases that store data for two different applications.
Shared - Membership info
DB1 - Application1
DB2 - Application2
Both applications use the membership info in the "Shared" database.
The Shared database has a table called userdetails that stores additional user info such as name, phone, and job title.
However, DB1 also has a table called employees that stores the same fields: name, phone, and job title. Each employee may be a user.
Also, for each table in DB1 and DB2 we keep an audit trail, i.e. which user updated the tables in the database. Hence, we need to store the UserID in the tables of DB1 and DB2.
We thought of adding a Users table to DB1 and DB2, so that every time a new user is created in Shared, the same user would be created in the Users table in DB1 and DB2.
Our questions are:
What is the best way to maintain database integrity given the above setup? E.g. each employee is also registered as a user; if fields in DB1 such as username, name, or phone are updated, then the same fields in the Shared DB should be updated, and vice versa.
Is it advisable to have the membership database as a separate database in our case? What is the best solution, given that almost all the tables in DB1 and DB2 reference the userID in the Shared database?
1.
The technology you are looking for is Merge Replication (http://bit.ly/KUtkPl). Essentially, you would create a common Users table on both databases, create a Merge Replication publisher on one application database, and then create a Merge Replication subscriber on the other application database. You can also set this up to synchronize the schema as well (which means you only need to create the table once, on the publishing database: it will push the table, schema along with data, to the subscriber).
But if you are looking for a more manual approach, I would not denormalize the user data into the employee(s) table; instead, create a supplemental table and a view on each application server. It's kind of like inheritance in OOP: any data common to the Employee table and the Users table stays in the shared Users table; any columns unique to the Employee go only in the supplemental table, stored on each database. The view then merges the supplemental table and the shared table. (http://bit.ly/9KPxt0)
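A sketch of that supplemental-table-plus-view layout in T-SQL (all names are illustrative; dbo.Users is the synchronized copy of the shared table):

    -- Common columns live in the shared/replicated Users table;
    -- application-specific columns live only in the local supplement.
    CREATE VIEW dbo.vEmployee
    AS
    SELECT u.UserID,
           u.UserName,
           u.Name,
           u.Phone,
           s.DeptCode,    -- examples of application-only columns
           s.HireDate
    FROM dbo.Users AS u
    JOIN dbo.EmployeeSupplement AS s
      ON s.UserID = u.UserID;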
Even if you do use Replication Services, I would still use this view design with the synchronized table.
You COULD update through the view, but I would not recommend that. It has been done successfully in production before, but there are too many constraints that could blow up (http://bit.ly/LJCJev). Instead, update the underlying table that holds the data directly.
Absolutely avoid "triggers that synchronize". Too risky (they can cause an infinite loop on your SQL Server) and too much maintenance overhead.
2.
I would go with Merge Replication; it is just less for you to worry about and maintain once it is configured correctly. But your approach is OK if you want something more manual or if you are not familiar with Replication Services in SQL... just use the view noted above and you'll be set.
Easy way:
You can create linked servers to these databases.
Then create synonyms for easy access to the tables of each database.
Finally, create triggers to update the data whenever data is updated in each table.
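A sketch of that setup in T-SQL (server and object names are placeholders):

    -- One-time setup on DB1: point at the server hosting Shared.
    EXEC sp_addlinkedserver
         @server     = N'SharedSrv',
         @srvproduct = N'',
         @provider   = N'SQLNCLI',
         @datasrc    = N'shared-host';

    -- Local alias so queries read as if the table were local.
    CREATE SYNONYM dbo.SharedUsers
        FOR SharedSrv.Shared.dbo.Users;

    SELECT UserID, UserName FROM dbo.SharedUsers;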
