Is there an Oracle data dictionary table/view that stores table referential integrity information?
I thought ALL_TAB_COLUMNS would show which column is the PK/FK.
The documentation has a section on Viewing Constraint Information:
Oracle Database provides the following views that enable you to see constraint definitions on tables and to identify columns that are specified in constraints:
DBA_CONSTRAINTS/ALL_CONSTRAINTS/USER_CONSTRAINTS - the DBA view describes all constraint definitions in the database, the ALL view describes constraint definitions accessible to the current user, and the USER view describes constraint definitions owned by the current user.
DBA_CONS_COLUMNS/ALL_CONS_COLUMNS/USER_CONS_COLUMNS - the DBA view describes all columns in the database that are specified in constraints, the ALL view describes only those columns accessible to the current user that are specified in constraints, and the USER view describes only those columns owned by the current user that are specified in constraints.
You can get more information on those in other sections of the documentation: ALL_CONSTRAINTS and ALL_CONS_COLUMNS.
You haven't said exactly what you're looking for, but this old answer has an example of looking at primary/unique and foreign keys.
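For instance, a query along these lines joins ALL_CONSTRAINTS to ALL_CONS_COLUMNS to list a table's foreign-key columns alongside the primary/unique key columns they reference; this is only a sketch, and EMPLOYEES is a placeholder table name:

    -- Foreign keys (constraint_type 'R') on a table, with the referenced
    -- key columns matched up by position for composite keys.
    SELECT fk.constraint_name,
           fkc.column_name,
           pk.table_name   AS referenced_table,
           pkc.column_name AS referenced_column
    FROM   all_constraints  fk
    JOIN   all_cons_columns fkc
           ON  fkc.owner = fk.owner
           AND fkc.constraint_name = fk.constraint_name
    JOIN   all_constraints  pk
           ON  pk.owner = fk.r_owner
           AND pk.constraint_name = fk.r_constraint_name
    JOIN   all_cons_columns pkc
           ON  pkc.owner = pk.owner
           AND pkc.constraint_name = pk.constraint_name
           AND pkc.position = fkc.position
    WHERE  fk.constraint_type = 'R'
    AND    fk.table_name = 'EMPLOYEES'
    ORDER  BY fk.constraint_name, fkc.position;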
Since you tagged your question with SQL Developer, if you view the table from the expanded Connections pane there is a Constraints tab that lists all the constraints on that table. If you select a constraint from the list you can see the columns it applies to. You can use the data modeller to see how tables are related.
I am new to Dataverse, moving from the SQL Server world, and just created my first Dataverse table (a Standard table). Upon creation, the table has lots of what I assume are automatically added columns, including "Owner", "Status", and "Version Number". I come from a SQL Server background, where new tables come "empty", with no columns. I do not think I need these automatically added columns (this is just going to be a small log table that holds datetime, action, etc. columns).
Would it break anything if these automatically added columns were deleted? Also, if anyone could provide information about why these columns are included, that would help. I have researched these questions online, but found very little. Thank you in advance.
They are standard, out of the box attributes that you can't remove.
You can change the Ownership within the Table Type to "Organization" when creating the table to remove the Owner column; however, the rest are created as part of every table.
There is some high-level detail in the docs:
https://learn.microsoft.com/en-us/powerapps/maker/data-platform/entity-overview
Dataverse (earlier called the Common Data Service) is Dynamics CRM under the hood. It's a SaaS-model CRM online offering that comes with some basic, fundamental components.
When you create a table (entity) it comes with columns (attributes), relationships, views, forms, dashboards, etc.
A UCI model-driven app can be built quickly to include these components, with all CRUD operations and no code, through simple configuration and customization.
To support this barebones functionality, the necessary attributes (like name, currency, statecode, statuscode, createdby, createdon, modifiedby, modifiedon), security fields (like owner, owning business unit, owning team) and change-tracking & concurrency fields (like row version) will be created.
You can leave them alone, as they are part of the platform, and do your customization as you need.
I'm an MSSQL developer who was recently tasked with building a new application using DynamoDB, since we use AWS and wanted a highly scalable database service.
My biggest concern is data integrity. For example, I have a table for all my users where every row needs to have a username, email, and name field, all strings, with a verified field that's an int. Is there any way to require all entries in that table to have those fields and to be of that particular type?
Since the application is in PHP, I'm using Kettle as my ORM, which should prevent me from messing up the data integrity, but another developer voiced a concern about what happens if we ever add another application or if someone manually changes some types via the console.
https://github.com/inouet/kettle
Currently, no, you are responsible for maintaining the integrity of your items with respect to the existence of attributes that are not keys on the base table. However, you can use LSI and GSI to enforce data types of attributes (notwithstanding my qualm that this is not a recommended pattern, as it could cause partition heat especially for attributes whose range of values is small). For example, verified seems like it might take only 0 or 1 as a value, so if you create a GSI with PK=verified where verified is a Number, writes to the base table may get throttled by the verified GSI.
I queried the USER_TABLES data dictionary view (a sibling of SYS.ALL_TABLES) and saw a column called LOGGING which is set to either YES or NO. This is an Oracle 11g database. I am not too familiar with the specifics of Oracle databases.
I just want to find out what that parameter does. What kind of logging are we talking about?
I am interested in finding out if there is any connection between this parameter and the CREATED and LAST_MODIFIED fields usually available in Oracle-based applications.
Also, does this logging parameter enable logging of data changes (INSERT, UPDATE, DELETE), including the old and new values of changed fields?
Appreciate your help folks!
Sort of. The documentation describes the column thusly:
Indicates whether or not changes to the table are logged; NULL for partitioned tables
This relates to the LOGGING clause of the CREATE TABLE statement:
Specify whether the creation of the table and of any indexes required because of constraints, partition, or LOB storage characteristics will be logged in the redo log file (LOGGING) or not (NOLOGGING).
This is separately documented, along with a lot more information. Simply put, this indicates whether changes made to the table are logged so that they can be recovered in the event of an instance failure. It is not so you can reference changes; you'll have to use triggers or a materialized view for that.
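As a rough illustration (DEMO_T is a made-up table name), the attribute is set when the table is created and can be changed later:

    -- Create a table with redo logging suppressed for eligible operations.
    CREATE TABLE demo_t (
      id  NUMBER PRIMARY KEY,
      val VARCHAR2(100)
    ) NOLOGGING;

    -- The dictionary reflects the setting.
    SELECT table_name, logging
    FROM   user_tables
    WHERE  table_name = 'DEMO_T';   -- LOGGING = 'NO'

    -- Switch logging back on.
    ALTER TABLE demo_t LOGGING;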
I am planning to create a website using ASP.NET and SQL Server. However, my plan for the database design leaves me wondering if there is a better way.
The website will serve as a repository of information for various users. I figure I would have two databases, a Membership and Profile database.
The profile database would contain user data for all users, where each user may have ~20 tables. I would create the tables when the user account is created and generate a key used to name the tables. The tables are not directly related.
For example, a set of tables for two different users could look like:
User1 Tables - TransactionTable_Key1, AssetTable_Key1, ResearchTable_Key1 ....;
User2 Tables - TransactionTable_Key2, AssetTable_Key2, ResearchTable_Key2 ....;
The Key1, Key2, etc. values would be retrieved based on the MembershipID data when the account was created. This could result in a very large number of tables over time. I'm not sure if setting up the database this way will limit scalability. Any recommendations?
Edit: I should mention that some of these tables would contain 20k+ rows.
Realistically it sounds like you only really need one database for this.
From the way you worded your question, it sounds like you're trying to dynamically create tables for users as they create accounts. I wouldn't recommend this method.
What you want to do is create a master table that contains a primary key for each individual user. I'm assuming this is the Membership table. Then create the ~20 tables that you need for the profiles of these members. Every record, no matter the number of users that you have, will go into these tables. These 20 tables would need to have a foreign key pointing to the unique identifier of the Membership table.
When you want to query a Member for their user information, just select from the tables where the membership table's primary Id matches the foreign key in the profile tables.
This results in only a few tables in the end, is easily maintainable, and follows good database design.
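A minimal sketch of that shape, using made-up names (Membership, TransactionData, MemberId), would be:

    -- One row per user in the master table; each profile table carries
    -- a foreign key back to it.
    CREATE TABLE Membership (
        MemberId INT IDENTITY(1,1) PRIMARY KEY,
        UserName NVARCHAR(100) NOT NULL UNIQUE
    );

    CREATE TABLE TransactionData (
        TransactionId INT IDENTITY(1,1) PRIMARY KEY,
        MemberId      INT NOT NULL REFERENCES Membership (MemberId),
        Amount        DECIMAL(18,2) NOT NULL,
        CreatedOn     DATETIME2 NOT NULL DEFAULT SYSUTCDATETIME()
    );

    -- Pull one member's rows by matching the foreign key.
    DECLARE @userName NVARCHAR(100) = N'neal';  -- hypothetical user
    SELECT t.TransactionId, t.Amount, t.CreatedOn
    FROM   TransactionData t
    JOIN   Membership m ON m.MemberId = t.MemberId
    WHERE  m.UserName = @userName;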
Your ORM layer (EF, LINQ, DAL code) will hate having to deal with one set of tables per tenant. It is much better to have either one set of tables for all tenants in a single database, or a separate database per tenant. The latter is only better if schema upgrades have to be vetted per tenant (as Salesforce.com does). If you can afford to upgrade all tenants to a new schema at once, then there is no reason for a database per tenant.
When you design a schema that holds multiple tenants, the important things to remember are (see the sketch below):
don't use heaps; every table must have a clustered index
add the tenant ID as the leftmost key of every clustered index
add the tenant ID as the leftmost key of every non-clustered index too
add the left.TenantID = right.TenantID predicate to every join
add the table.TenantID = @currentTenantID predicate to every query
These are fairly simple rules, and if you obey them (with no exceptions) you will get perfect partitioning per tenant of every query (no query will ever scan rows in a range belonging to a different tenant), so you eliminate contention between tenants. To be more thorough, you can disable lock escalation to make sure no tenant escalates to block every other tenant.
This design also lends itself to table partitioning and to sharding the database for scale-out.
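A minimal sketch of those rules, with invented names (Orders, TenantID):

    -- The clustered index (here the primary key) leads with the tenant ID.
    CREATE TABLE Orders (
        TenantID   INT NOT NULL,
        OrderID    INT NOT NULL,
        CustomerID INT NOT NULL,
        Total      DECIMAL(18,2) NOT NULL,
        CONSTRAINT PK_Orders PRIMARY KEY CLUSTERED (TenantID, OrderID)
    );

    -- Non-clustered indexes lead with the tenant ID too.
    CREATE NONCLUSTERED INDEX IX_Orders_Customer
        ON Orders (TenantID, CustomerID);

    -- Every query carries the tenant predicate.
    DECLARE @currentTenantID INT = 42;  -- hypothetical tenant
    SELECT o.OrderID, o.Total
    FROM   Orders o
    WHERE  o.TenantID = @currentTenantID;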
You definitely don't want to create a set of tables for each user, and you would want these all in one database. Even with SQL Server 2008's large capacity for tables (the limit is really on total objects in a database), it would quickly become unmanageable. Your best bet is to use 20 tables and separate them into user areas via a column. You might consider partitioning the tables by this user value, but that should be tested for performance reasons too.
Yes, since the tables only contain id, key, and value, why not make one single table?
Have the columns:
id, user ID, key, value
Put an index on the user ID field.
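A rough sketch of that single table (UserData is an invented name; Key is bracketed because it can collide with reserved words):

    CREATE TABLE UserData (
        Id     INT IDENTITY(1,1) PRIMARY KEY,
        UserId INT NOT NULL,
        [Key]  NVARCHAR(100) NOT NULL,
        Value  NVARCHAR(MAX) NULL
    );

    -- The index on the user ID field.
    CREATE INDEX IX_UserData_UserId ON UserData (UserId);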
A key idea behind a relational database is that the table structure does not change. You create a solid set of tables, and these are the "bones" of your application.
Cheers,
Daniel
Neal,
The solution really depends on your requirements. If security and data access are a concern and you have only a handful of users, you can set up a different DB for each user, with each user's access restricted to his/her own database.
Otherwise, what Daniel Williams suggested is a good alternative, where you have one DB and the tables laid out with an indexed column partitioning the users' data rows.
It's hard to tell from the summary, but it looks like you are designing for dynamic attribution by user. This design approach is called EAV (Entity-Attribute-Value) and consists of a simple base collection key (UserID, SiteID, ProductID...) and then rows consisting of name/value pairs. In a more complex version, categories are sometimes added as "super columns" to the tuple/row and provide sub-groupings for a set of name/value pairs.
Designing in this way moves responsibility for data type integrity, relational integrity and tuple integrity to the application layer.
The risk with doing this in a relational system involves the breaking of the tuple or row into a set of rows. Updates, deletes, missing values and the definition of a tuple are no longer easily accessible through human interaction. As your application evolves and the definition of a tuple changes, it becomes almost impossible to tell if a name/value pair is missing because it's part of an earlier-version tuple or because it was unintentionally deleted. Ad-hoc research also becomes harder to manage, as business analysts must keep an understanding of the virtual structure either in their heads or in documentation provided.
If you are looking to implement an EAV model, I would suggest you look at a non-relational solution (NoSQL) like MongoDB or CouchDB. These stores allow a developer to save and retrieve "documents", or JSON-formatted messages, that are essentially made up of a collection of name/value pairs and can look very much like a serialized object. The advantage here is that you can store dynamic attribution without breaking your tuple. You always know that you have a complete tuple because you can store and retrieve it as a single "blob" of information that can be serialized and deserialized at will. You can also update single attributes within the tuple, if that's a concern.
MongoDB also provides some database-like features such as multiple-attribute indexes, a query engine that is robust in comparison to other similar non-relational offerings and a sharding solution that is much less trouble than trying to do it with MySQL.
I hope this helps.
I went through a custom profile provider example a while ago and I am now revisiting it.
My database has all the dbo.aspnet_* tables created when I ran the aspnet registration wizard. In these tables I have aspnet_Profile, which has an FK constraint pointing to aspnet_Users.
I also have two tables in MyDB: the first, dbo.ProfileData, has a foreign key constraint pointing to dbo.Profile.
What I want to understand is how the tables in MyDB relate to those in dbo.aspnet_*. Shouldn't there be a foreign key constraint (or some kind of relationship) between the profile tables in MyDB and the aspnet tables? Some discussion of how my custom tables relate to those provided by aspnet would be wonderful.
Thanks in advance.
There are two options I can see, both of which will yield basically the same result:
FK from dbo.aspnet_User.UserID to dbo.Profile.UserID, then define a unique key on dbo.Profile.UserID (unless you use it as the PK column for dbo.Profile)
FK from dbo.aspnet_Profile.ProfileID to dbo.Profile.ProfileID
dbo.aspnet_User is logically 1 - 1 with dbo.aspnet_Profile, so it doesn't really matter which approach you use as you will still get the same relational integrity.
If you are replacing the standard profile data table with your own implementation then it makes more sense to use the first suggestion, otherwise if you are extending the Profile schema then use the second suggestion.
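For example, under the first suggestion the keys might be wired up like this; every column name other than UserId is an assumption, and aspnet_Users keys on a uniqueidentifier:

    -- dbo.Profile is 1-1 with dbo.aspnet_Users via a shared UserId.
    CREATE TABLE dbo.Profile (
        UserId      UNIQUEIDENTIFIER NOT NULL PRIMARY KEY
                    REFERENCES dbo.aspnet_Users (UserId),
        DisplayName NVARCHAR(256) NULL  -- hypothetical app-specific column
    );

    -- dbo.ProfileData rows hang off dbo.Profile.
    CREATE TABLE dbo.ProfileData (
        ProfileDataId INT IDENTITY(1,1) PRIMARY KEY,
        UserId        UNIQUEIDENTIFIER NOT NULL
                      REFERENCES dbo.Profile (UserId),
        PropertyName  NVARCHAR(100) NOT NULL,
        PropertyValue NVARCHAR(MAX) NULL
    );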
EDIT
aspnet_Profile is the standard table - the standard SqlProfileProvider stores the user's profile data as a serialized property bag in aspnet_Profile, which is why there is no separate aspnet_ProfileData table.
This approach allows the profile schema to be customized easily for different applications without requiring any changes to the underlying database, and it suits a framework such as .NET well. The drawback is that SQL Server does not have easy access to this data, so it is much more difficult to index, update and query the user's profile data using T-SQL and set-based logic.
The most common approach I have seen to remove this limitation is to extend the standard SqlProfileProvider to write to a custom profile data table which has specific columns for application-specific profile properties. This table naturally has a 1-1 relationship with the aspnet_Profile table, so it has a foreign key as indicated above.
The role of the extended provider is to promote specific profile properties to columns during profile writes, and read in the columns when the profile is retrieved.
This allows you to mix and match storage solutions on an as-needed basis, as long as your extended provider knows how to fall back to the standard implementation where it does not 'know' about a given property.
I always think it is best to leave the standard membership tables as-is, and extend where necessary using new tables with appropriate foreign keys, then subclass the appropriate provider and override the provider methods with your own implementation (calling into the base implementation wherever possible).