Good day.
I have a basic question on SQL and table structure.
What we have now: 17 tables. These tables include 1 admin table. The other 13 tables are all branched off 3 "main" tables: customers, CareWorkers, Staff.
If I want to adhere to the ACID ideology, I should then create tables that each house unique information.
My question, and what I'm trying to wrap my head around, is this: when I create each of these "nested-deeper" (not sure what to call them) tables, do I then simply use an inner join statement on the foreign key in my ASP.NET app to get the data back? Is that correct?
First, inner join is how you get your tables "back together", and #SpectralGhost's example is how you do it. But you might want to consider doing it in the database rather than in your ASP code. The way you do that is with views. If you create a view (the syntax is CREATE VIEW and there are plenty of examples out there) then you can make the database schema as complex as you need to without making it hard to use in your ASP application. You can even make views updatable (you define an "INSTEAD OF" trigger, again, many examples if you search).
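For instance, here's a minimal sketch of such a view (the table and column names are made up to roughly match the Customers branch of your schema); your ASP.NET app can then query the view as if it were one wide table:
CREATE VIEW dbo.vCustomerAddresses
AS
SELECT c.CustomerID,
       c.CustomerName,
       a.Street,
       a.City
FROM dbo.Customers AS c
INNER JOIN dbo.CustomerAddresses AS a ON a.CustomerID = c.CustomerID;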
But you probably don't want to update a view, or a table, directly from your ASP code. You probably want to define STORED PROCEDUREs that update your data, and call those from your ASP code. This allows you to restrict access to your tables and views to read only and force any writes to come through a stored procedure you can control better. That helps protect against SQL INJECTION, making your ASP application much more secure. If the service account that your ASP page's application pool runs under can pass raw queries to the database, then any compromise can do tremendous damage to your database. If all it can do is execute a stored procedure whose parameters can be changed but not its functionality, an attacker can only put some junk values in, or maybe not even that if you range-check well.
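As a rough sketch of that write path (the procedure, table and column names are hypothetical), the application account gets EXECUTE on the procedure and no direct INSERT/UPDATE rights on the table:
CREATE PROCEDURE dbo.usp_UpdateCustomerAddress
    @CustomerID INT,
    @Street NVARCHAR(100),
    @City NVARCHAR(50)
AS
BEGIN
    SET NOCOUNT ON;

    -- simple range/sanity check before touching the table
    IF @CustomerID <= 0
        RETURN;

    UPDATE dbo.CustomerAddresses
    SET Street = @Street,
        City = @City
    WHERE CustomerID = @CustomerID;
END;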
The last bit of advice is that you are not preserving "ACID", you are preserving "NORMALIZED". It's definitely a tough concept to wrap your head around; here's a resource that helped me out a great deal when I was starting out: http://www.marcrettig.com/data-normalization-poster/ I still have a copy on my wall. You shouldn't obsess over normalization, but you should definitely keep it in mind and stick to it when you reasonably can. Again, there are numerous resources a search will get you, but the basic benefit is that a normalized database is much more resistant to consistency problems and is more storage efficient. And since disk IO is slow, storage-efficient is usually query-efficient too.
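To make that concrete with a small, purely illustrative example (the Visits table and its columns are invented): instead of repeating a care worker's phone number on every row that mentions them, a normalized design stores it once and references it:
CREATE TABLE dbo.CareWorkers (
    CareWorkerID INT PRIMARY KEY,
    FullName NVARCHAR(100),
    Phone NVARCHAR(20)  -- stored exactly once
);

CREATE TABLE dbo.Visits (
    VisitID INT PRIMARY KEY,
    CareWorkerID INT NOT NULL REFERENCES dbo.CareWorkers (CareWorkerID),  -- foreign key, no duplicated phone numbers
    VisitDate DATE
);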
They are related tables. You should have at least one table with a primary key, and often several others that relate back to that table via a foreign key.
TableOne
    TableOneID
TableTwo
    TableTwoID
    TableOneID
TableTwo relates to TableOne via TableOneID. An inner join returns only the rows where matching records exist in both tables, based on your join condition. Example:
SELECT *
FROM TableOne t1
INNER JOIN TableTwo t2 ON t1.TableOneID=t2.TableOneID
Specifically how to do this in your application depends on your design. If you are using an ORM, then actual SQL is not terribly important. If you are using stored procedures, then it is.
Related
I'm creating (have created) an ASP.NET application (SQL Server backend) that allows the user (a business) to create their own tables and fields. They will all be child tables of a parent table (non-dynamic) and have proper PK/FK relationships (default fields when the table is created).
However, I don't like my current method of updating/inserting and selecting the fields. I was going to create an SP that was passed the proper keys and table names, then have it return the proper SQL statement. I'm thinking that it might make more sense to just pass the name/value pairs of fields/values and have an SP actually process them. Is this the best way to do it? If so, I'm not good at SPs, so any examples of how would be appreciated.
I don't have a lot of experience with the EAV model, but it does sound like it might be a good idea for implementing what you're trying to achieve. However, if you already have a system in place, an overhaul could be very expensive.
If the queries you're making against the user tables are basic CRUD operations, what about just creating CRUD stored procs for each table? E.g. -
Table:
acme_orders
Stored Procs:
acme_orders_insert
acme_orders_update
acme_orders_select
acme_orders_delete
... [other necessary procs]
I have no idea what the business needs are for these tables, but I imagine that whatever you're doing currently could be translated into doing the same thing with stored procs.
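As a sketch of what one of those procs might look like (the column names are invented; adjust to whatever acme_orders actually contains):
CREATE PROCEDURE dbo.acme_orders_insert
    @CustomerID INT,
    @OrderDate DATETIME,
    @Total DECIMAL(10, 2)
AS
BEGIN
    SET NOCOUNT ON;

    INSERT INTO dbo.acme_orders (CustomerID, OrderDate, Total)
    VALUES (@CustomerID, @OrderDate, @Total);

    SELECT SCOPE_IDENTITY() AS NewOrderID;  -- hand back the new key (assuming an identity column)
END;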
> I was going to create an SP that was passed the proper keys and table names, then have it return the proper SQL statement. I'm thinking that it might make more sense to just pass the name/value pairs of fields/values and have an SP actually process them.
Assuming you mean the proc would generate and then execute the SQL (sometimes known as dynamic SQL), this can work, but it will probably perform slower than static/compiled SQL, as in normal procs.
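If you do go the dynamic route, parameterize the values and validate the object names; a rough sketch (the procedure, the column names, and the existence check are hypothetical) using sp_executesql:
CREATE PROCEDURE dbo.usp_UpdateUserTableField
    @TableName SYSNAME,
    @RowID INT,
    @Value NVARCHAR(200)
AS
BEGIN
    SET NOCOUNT ON;

    -- only accept tables that actually exist (ideally also check a whitelist of user-created tables)
    IF OBJECT_ID(QUOTENAME(@TableName), 'U') IS NULL
        RETURN;

    DECLARE @sql NVARCHAR(MAX) =
        N'UPDATE ' + QUOTENAME(@TableName) +
        N' SET FieldValue = @Value WHERE RowID = @RowID;';

    -- values travel as real parameters, never concatenated into the string
    EXEC sp_executesql @sql,
        N'@Value NVARCHAR(200), @RowID INT',
        @Value = @Value, @RowID = @RowID;
END;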
Instead, create only stored procedures and call them from the code?
There is a place for dynamic SQL and/or ad hoc SQL, but it needs to be justified based on the particular usage needs.
Stored procedures are by far a best practice for almost all situations and should be strongly considered first.
This issue is a little bigger than just procs or ad hoc, because the database has a wide variety of tools to define its interface, including tables, views, functions and procedures.
People here have mentioned execution plans and parameterization but, by far, the most important thing in my mind is that any technique which exposes base tables to users means you lose any ability for the database to change its underlying implementation or to control security vertically or horizontally. At the very least, I would expose only views to a typical application/user/role.
In a scenario where the application or user's account only has access to EXEC SPs, that account has no hope of using a SQL injection of the form: "; SELECT name, password from USERS;" or "; DELETE FROM USERS;" or "; DROP TABLE USERS;" because the account doesn't have anything but EXEC (and certainly no DDL). You can control column visibility at the SP level and not have to deny select on an employee salary column, for example.
In other words, unless you are comfortable granting db_datareader to public (because that's effectively what you are doing when you LINQ-to-tables), then you need some sort of realistic security in your application, and SPs are the only way to go, with LINQ-to-views possibly being acceptable.
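A small sketch of that permission model (the role, proc and user names are made up): the application's account is a member of a role that can execute procedures but cannot read or write tables directly.
CREATE ROLE app_exec_only;

-- the role cannot touch tables or views directly...
DENY SELECT, INSERT, UPDATE, DELETE ON SCHEMA::dbo TO app_exec_only;
-- ...but it can execute the procedures that make up the application's interface
GRANT EXECUTE ON SCHEMA::dbo TO app_exec_only;
-- or, more narrowly, per procedure:
-- GRANT EXECUTE ON OBJECT::dbo.usp_GetEmployeeDirectory TO app_exec_only;

ALTER ROLE app_exec_only ADD MEMBER app_service_user;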
Depends entirely on what you're doing.
As a general rule a stored proc will have its query plan cached better than a dynamically generated SQL statement. It will also be slightly easier to maintain indexes for.
However, dynamically generated SQL statements can have their query plans cached, so the difference is marginal.
Dynamically generated SQL statements can also introduce security risks - always parameterise them.
That said, sprocs are a pain to maintain and update; they separate DB logic and .NET code in a way that makes it harder for developers to piece together what a data access method is doing.
Also, to fix or update a SQL string you just change code. To fix or update a sproc you have to change the database - often a much messier option.
So I wouldn't recommend that as a 'one size fits all' best practice.
There is no right or wrong answer here. There are benefits to both which can be easily obtained through a google search. Different projects with different requirements may lead you to different solutions. It's not as black or white as you might want it to be. You might as well throw ORMs into the mix. If you prefer sql queries in your data layer as opposed to stored procs, make sure you use parametrized queries.
SQL in an SP - easy to maintain; SQL in the app - a pain in the butt to maintain.
It's so much faster and easier to hop onto a SQL instance, modify an SP, test it, then deploy the SP, instead of having to modify the code in the app, test it, then deploy the app.
It depends on the data distribution in your table. Prepared query plans and stored procedures get cached, and the plan itself depends on the table statistics.
Suppose you're building a blog and that your posts table has a user_id. And suppose you're frequently doing stuff like:
select posts.* from posts where user_id = ? order by published desc limit 20;
Suppose there are indexes on posts (user_id) and posts (published desc).
Additionally, suppose you have two authors: author1, who wrote 3 posts a long time ago, and author2, who has written 10k posts since.
In this case, the query plan of the ad hoc query will be very different depending on whether you're fetching the author1 posts or the author2 posts:
For author1, the database will decide to use the index on user_id and sort the results.
For author2, the database will read the first 20 rows using the index on published.
If you prepare the statement, the planner will pick either of the two. Suppose the second (which I think is likely): applied to author1, this means going through the whole table by way of the index -- which is much slower than the optimal plan.
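For concreteness, the two indexes the scenario assumes could be defined as below (Postgres-style syntax, to match the LIMIT in the query above; the index names are made up). The sticky-plan problem described here is about which of these the cached plan commits to:
-- index the planner picks for rare authors: fetch their few posts, then sort
CREATE INDEX idx_posts_user_id ON posts (user_id);
-- index that wins for prolific authors: scan newest posts and stop after 20 matches
CREATE INDEX idx_posts_published ON posts (published DESC);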
If simplicity is your goal, then an ORM would be a good choice for your simple database operations.
ORMs like Entity Framework, nHibernate, LINQ to SQL, etc. will manage the code creation of the data access and repository layers and provide you with strongly typed objects representing your tables. This can lead to a cleaner, more maintainable architecture.
Save the stored procedures for your more complex queries. This is where you can take advantage of advanced SQL and cached query plans.
Dynamic SQL - Bad
Stored Procedures - Better
Linq-To-SQL or Linq-to-EF (or ORM tools) - Best
You do not want dynamic SQL inside your application since you do not have compile-time checking. Stored procedures will at least be checked, but they are still not part of a cohesive unit and they move business logic into the database layer. Linq-To-EF allows business logic to stay inside your application and gives you compile-time checking of syntax.
I have worked on a timesheet application in MVC 2 for internal use in our company. Now other small companies have shown interest in the application. I hadn't considered this use of the application, but it got me interested in what it might imply.
I believe I could make it work for several clients by modifying the database (Sql Server accessed by Entity Framework model). But I have read some people advocating multiple databases (one for each client).
Intuitively, this feels like a good idea, since I wouldn't risk having the data of various clients mixed up in the same database (which shouldn't happen of course, but what if it did...). But how would a multiple database solution be implemented specifically?
I.e. with a single database I could just have a client register and all the data needed would be added by the application the same way it is now when there's just one client (my own company).
But with a multiple database solution, how would I create a new database programmatically when a user registers? Please note that I have done all database stuff using Linq to Sql, and I am not very familiar with regular SQL programming...
I would really appreciate a clear detailed explanation of how this could be done (as well as input on whether it is a good idea or if a single database would be better for some reason).
EDIT:
I have also seen discussions about the single database alternative, suggesting that you would then add a ClientId to each table... But wouldn't that be hard to maintain in the code? I would have to add "where" conditions to a lot of Linq queries, I assume... And I assume having a ClientId on each table would mean that each table would need to have a many-to-one relationship to the Client table? Wouldn't that be a very complex database structure?
As it is right now (without the Client table) I have the following tables (1 -> * designates one to many relationship):
Customer 1 -> * Project 1 -> * Task 1 -> * TimeSegment 1 -> * Employee
Also, Customer has a one to many relationship directly with TimeSegment, for convenience to simplify some queries.
This has worked very well so far. Wouldn't it be possible to simply have a Client table (or UserCompany or whatever one might call it) with a one-to-many relationship to the Customer table? Wouldn't the data integrity be sufficient for the other tables, since the rest is handled by the relationships?
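A minimal sketch of that follow-up idea (the constraint names are invented, the table names mirror the question): one Client row per tenant company, with only Customer needing the extra foreign key, since everything else hangs off Customer:
CREATE TABLE Client (
    ClientID INT IDENTITY(1, 1) PRIMARY KEY,
    Name NVARCHAR(100) NOT NULL
);

ALTER TABLE Customer
    ADD ClientID INT NULL  -- back-fill existing rows, then make it NOT NULL
        CONSTRAINT FK_Customer_Client REFERENCES Client (ClientID);

-- Project, Task, TimeSegment and Employee stay as they are; filtering
-- Customer by ClientID scopes everything reached through it.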
As far as whether to use a single database or multiple databases, it really all depends on the use cases. More databases means more management needs, potentially more disk space needs, etc. There are a lot more things to consider here than just how to create the database, such as how you will automate the backup process, etc. I personally would use one database with a good authentication system that would filter the data to the appropriate client.
As to creating a database, check out this blog post. It describes how to use SMO (SQL Management Objects) in C#.NET to create a database. They are a really neat tool, and you'll definitely want to familiarize yourself with them.
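If you'd rather not go through SMO, the application can also just send plain T-SQL when a new client registers; a minimal sketch (the database name is hypothetical, and in practice you would build and validate it in code rather than concatenate raw user input):
CREATE DATABASE TimesheetClient42;
GO
USE TimesheetClient42;
GO
-- then run the same schema script you already use for your own company's database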
To deal with the follow-up question: yes, a single, top-level relationship between clients and customers should be enough to limit the new customers to their appropriate data.
Without any real knowledge about your application I can't say how complex adding that table will be, but assuming your data layer is up to snuff, I would assume you'd really only need to limit the customers class by the current client, and then get all the rest of your data based on the customers that are available.
Did that make any sense?
See my answer here; it applies to your case as well: c# database architecture
Ok, the basic situation: Due to a few mixed up starts, a project ends up with not one, but three separate databases, each containing a portion of the overall project data. All three databases are the same, it's just that, say 10% of the project was run into the first, then a new DB was made due to a code update and 15% of the project was run into the new one, then another code change required another new database for the rest of the project. Again, the pertinent tables are all exactly the same across all three databases.
Now, assume I wanted to take all three of those databases - bearing in mind that they can't just be combined into a single database due to primary key issues and so on - and run a single query that would look through all three of them, select a given set of data from each, then compile those three sets into one single result and return it to the reporting page I'm working on.
For reference, at its endpoint the data is output to an ASP.Net/VB.Net backed page, specifically a Gridview object. It doesn't need to be edited, fortunately, just displayed.
What would be the best way to approach this mess? I'm thinking that creating a temporary table would be my best bet, but honestly I'm stepping into a portion of SQL that I'm not familiar with here, and would appreciate any guidance somebody more experienced might have.
I'd say your best bet is to suck it up and combine the databases, even if it is a major pain to combine the primary keys. It may be a major pain now, but it is going to be 10x as painful over the life of the project.
You can do a union across multiple databases as Scott has pointed out, but you are in for a world of trouble as the application gets more complex. For example, even if you circumvent the technical limitations by having multiple tables/databases for the same entity, having duplicates in the PK for a logical entity is a world of trouble.
Implement the workaround solution if you must, but I guarantee you will hate yourself for it later.
Why not just use 3 part naming on the tables and union them all together?
select db1.dbo.Table1.Field1,
db1.dbo.Table1.Field2
from db1.dbo.Table1
UNION
select db2.dbo.Table1.Field1,
db2.dbo.Table1.Field2
from db2.dbo.Table1
UNION
select db3.dbo.Table1.Field1,
db3.dbo.Table1.Field2
from db3.dbo.Table1
-- where ...
-- order by ...
You should create what is called a Partitioned View for each of your tables of interest. These views do a union of the underlying base tables and add a synthetic column to make the rows unique:
CREATE VIEW vTableXDB
AS
SELECT 'DB1' as db_key, *
FROM DB1.dbo.table
UNION ALL
SELECT 'DB2' as db_key, *
FROM DB2.dbo.table
UNION ALL
SELECT 'DB3' as db_key, *
FROM DB3.dbo.table;
You create one such view for each table and then design your reports on these views, not on the base tables. You must add the db_key to your join conditions. The query optimizer has some understanding of partitioned views and might be able to create plans that do the right thing and avoid joins that span multiple DBs, but that is not guaranteed. If things go haywire and the optimizer does not recognize the partitioning, resulting in very bad execution times, you may have to move the db_key into the tables themselves and add some artificial check constraints on the base tables so that the optimizer can understand the partitioning (see the article I linked for details).
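A rough sketch of that fallback, materializing the db_key in each base table so the optimizer can prove which member holds which rows (the constraint names are invented; repeat in DB2 and DB3 with their own literals, and note that for a true partitioned view the partitioning column generally has to be part of the primary key as well):
-- run inside DB1
ALTER TABLE dbo.[table] ADD db_key CHAR(3) NOT NULL DEFAULT 'DB1';
ALTER TABLE dbo.[table] ADD CONSTRAINT CK_table_db_key CHECK (db_key = 'DB1');
-- the view then selects db_key from the tables instead of hard-coding the literals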
You can actually join tables on different database servers. If I remember right, the syntax changes from "tablename.columnName" to the fully qualified "Server.Database.Owner.tablename.columnName". You will need to run some stored procedures as an admin to allow this connectivity. It's also pretty slow, but the effort to get it working is low.
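A sketch of that approach, assuming the databases live on separate SQL Server instances (the server name is made up; if all three databases are on the same instance, the three-part db.dbo.Table1 names shown above are all you need):
-- run once, as an admin, to register the remote server and map the login
EXEC sp_addlinkedserver @server = N'OtherServer';
EXEC sp_addlinkedsrvlogin @rmtsrvname = N'OtherServer', @useself = 'TRUE';

-- four-part name: server.database.owner.table
SELECT t1.Field1, t2.Field2
FROM db1.dbo.Table1 AS t1
INNER JOIN OtherServer.db2.dbo.Table1 AS t2 ON t1.Field1 = t2.Field1;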
If you have time to do it right look at data warehouse concepts. That's basically a temp table that collects the data you need to report on.
Building on Scott Ivey's excellent example above,
Use table name aliasing to simplify your code
Use UNION ALL instead of UNION assuming that your data is unique between the three databases
Code:
select
d1t1.Field1,
d1t1.Field2
from db1.dbo.Table1 AS d1t1
UNION ALL
select
d2t1.Field1,
d2t1.Field2
from db2.dbo.Table1 AS d2t1
UNION ALL
select
d3t1.Field1,
d3t1.Field2
from db3.dbo.Table1 AS d3t1
-- where ...
-- order by ...
I am re-designing an application for an ASP.NET CMS that I really don't like. I have made some improvements in performance, only to discover that not only does this CMS use MS SQL, but some users "simply" use an MS Access database.
The problem is that I have some tables which I inner join that, in the MS Access version, are in two different files. I am not allowed to simply move the tables to the other mdb file.
I am now trying to figure out a good way to "inner join" across multiple Access db files.
It would really be a pity if I had to fetch all the data and then do the join programmatically!
Thanks
You don't need linked tables at all. There are two approaches to using data from different MDBs that can be used without a linked table. The first is to use "IN 'c:\MyDBs\Access.mdb'" in the FROM clause of your SQL. One of your saved queries would be like:
SELECT MyTable.*
FROM MyTable IN 'c:\MyDBs\Access.mdb'
and the other saved query would be:
SELECT OtherTable.*
FROM OtherTable IN 'c:\MyDBs\Other.mdb'
You could then save those queries, and then use the saved queries to join the two tables.
Alternatively, you can manage it all in a single SQL statement by specifying the path to the source MDB for each table in the FROM clause thus:
SELECT MyTable.ID, OtherTable.OtherField
FROM [c:\MyDBs\Access.mdb].MyTable
INNER JOIN [c:\MyDBs\Other.mdb].OtherTable ON MyTable.ID = OtherTable.ID
Keep one thing in mind, though:
The Jet query optimizer won't necessarily be able to use the indexes from these tables for the join (whether it will use them for criteria on individual fields is another question), so this could be extremely slow (in my tests, it's not, but I'm not using big datasets to test). But that performance issue applies to linked tables, too.
If you have access to the MDBs, and are able to change them, you might consider using Linked Tables. Access provides the ability to link to external data (in other MDBs, in Excel files, even in SQL Server or Oracle), and then you can perform your joins against the links.
I'd strongly encourage performance testing such an option. If it's feasible to migrate users of the Access databases to another system (even SQL Express), that would also be preferable -- last I checked, there are no 64-bit JET drivers for ODBC anymore, so if the app is ever hosted in a 64-bit environment, these users will be hosed.
Inside one access DB you can create "linked tables" that point to the other DB. You should (I think) be able to query the tables as if they both existed in the same DB.
It does mean you have to change one of the DBs to create the virtual table, but at least you're not actually moving the data, just making a pointer to it.
Within Access, you can add remote tables through the "Linked Table Manager". You could add the links to one Access file or the other, or you could create a new Access file that references the tables in both files. After this is done, the inner-join queries are no different than doing them in a single database.