Progress-OpenEdge query to show Column Relations of the table - openedge

I want a query that returns the column relations (references between columns) for a table, or for all databases.
Like in MySQL, we have a query
SELECT * FROM INFORMATION_SCHEMA.KEY_COLUMN_USAGE WHERE
TABLE_SCHEMA = 'database_name';
So, what is the equivalent query in Progress OpenEdge to get column relations?
Also, where are procedures, functions, and views stored in a Progress database?
And what query lists the database names?

To find relationships, views, or stored procedures, you must query the meta-schema. Stefan's link documents SYSTABLES, SYSCOLUMNS, SYSINDEXES, SYSPROCEDURES, and SYSVIEWS; these are the tables that define what you have asked for.
https://docs.progress.com/bundle/openedge-sql-reference-117/page/OpenEdge-SQL-System-Catalog-Tables.html
The Progress database does not explicitly store relationships. They are implied, by convention, when there are common field names between tables but this does not create any special relationship in the engine. You can parse the tables above and make some guesses but, ultimately, you probably need to refer to the documentation for the application that you are working with.
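As a starting point, here is a hedged sketch that looks for column names shared between tables in the SQL catalog - that shared-name convention is usually how relationships are implied. The column names (TBL, COL, OWNER) are taken from the catalog documentation linked above, so verify them against your OpenEdge version:
-- Find column names that appear in more than one table,
-- which is how relationships are conventionally implied
SELECT c1.TBL AS table1, c2.TBL AS table2, c1.COL AS shared_column
FROM SYSPROGRESS.SYSCOLUMNS c1
INNER JOIN SYSPROGRESS.SYSCOLUMNS c2
  ON c1.COL = c2.COL AND c1.TBL < c2.TBL
WHERE c1.OWNER = 'PUB' AND c2.OWNER = 'PUB';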
Most Progress databases were created to be used by Progress 4gl applications. SQL came later and is mostly used to support 3rd party reporting tools. As a result there are two personas - the 4gl and sql. They have many common capabilities but there are some things that they do not share. Stored procedures are one such feature. You can create them on the sql side but the 4gl side of things does not know about them and will not use them to enforce constraints or for any other purpose. Since, as I mentioned, most Progress databases are created to support a 4gl application, it is very unusual to have any sql stored procedures.
(To make matters even more complicated there is some old sql-89 syntax embedded within the 4gl engine. But this very old syntax is really just token sql support and is not available to non-4gl programs.)

Related

SQL: inner joins & foreign keys

Good day.
I have a basic question on SQL and table structure.
What we have now: 17 tables. These tables include 1 admin table. The other 13 tables are all branched off 3 "main" tables: customers, CareWorkers, Staff.
If I want to adhere to ACID ideology, I then want to create tables that each house unique information.
My question is, and what I'm trying to wrap my head around: when I create each of these "nested-deeper" (not sure what to call it) tables, I simply do an inner join statement in my ASP.NET app to join on the foreign key, correct?
First, inner join is how you get your tables "back together", and @SpectralGhost's example below is how you do it. But you might want to consider doing it in the database rather than in your ASP code. The way you do that is with views. If you create a view (the syntax is CREATE VIEW and there are plenty of examples out there) then you can make the database schema as complex as you need to without making it hard to use in your ASP application. You can even make views updatable (you define an "INSTEAD OF" trigger, again, many examples if you search).
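For instance, a minimal sketch of such a view, assuming hypothetical columns on the Customers and CareWorkers tables from the question:
-- Hide the join behind a view so the ASP code queries one object
CREATE VIEW CustomerCareWorkers AS
SELECT c.CustomerID, c.CustomerName, cw.CareWorkerID, cw.CareWorkerName
FROM Customers c
INNER JOIN CareWorkers cw ON cw.CustomerID = c.CustomerID;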
But you probably don't want to update a view, or a table, directly from your ASP code. You probably want to define STORED PROCEDUREs that update your data, and call those from your ASP code. This allows you to restrict access to your tables and views to read only and force any writes to come through a stored procedure you can control better. This prevents SQL INJECTION, making your ASP application much more secure. If the service account of the application pool your ASP page runs under can pass raw queries to the database, then any compromise can do tremendous damage to your database. If all it can do is execute a stored procedure where the parameters can be changed but not the functionality, an attacker can only put some junk values in, or maybe not even that if you range check well.
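A minimal sketch of such a controlled write path, using hypothetical table and column names:
CREATE PROCEDURE dbo.UpdateCustomerPhone
    @CustomerID int,
    @Phone varchar(20)
AS
BEGIN
    SET NOCOUNT ON;
    -- Reject junk values before they reach the table (range check)
    IF @Phone IS NULL OR LEN(@Phone) < 7 RETURN;
    UPDATE dbo.Customers
    SET Phone = @Phone
    WHERE CustomerID = @CustomerID;
END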
The last bit of advice is that you are not preserving "ACID", you are preserving "NORMALIZED". It's definitely a tough concept to wrap your head around; here's a resource that helped me out a great deal when I was starting out: http://www.marcrettig.com/data-normalization-poster/ - I still have a copy on my wall. You shouldn't obsess over normalization, but you should definitely keep it in mind and stick to it when you reasonably can. Again, there are numerous resources a search will get you, but the basic benefit is that a normalized database is much more resistant to consistency problems, and is more storage efficient. And since disk IO is slow, storage efficient is usually query efficient too.
They are related tables. You should have at least one table with a primary key, and often several others that relate back to that table via a foreign key.
TableOne
    TableOneID
TableTwo
    TableTwoID
    TableOneID
TableTwo relates to TableOne via TableOneID. An inner join would give you where there are records in both tables based on your join. Example:
SELECT *
FROM TableOne t1
INNER JOIN TableTwo t2 ON t1.TableOneID = t2.TableOneID
Specifically how to do this in your application depends on your design. If you are using an ORM, then actual SQL is not terribly important. If you are using stored procedures, then it is.

Reorganise Stored Procedures display in SQL Server Management Studio

I'm currently working with an ASP.NET web application which involves a lot of stored procedures.
Since I'm also using the ASP.NET Membership/Roles/Profile system, the list of stored procedures displayed in Microsoft SQL Server Management Studio is really becoming something of a pest to navigate. As soon as I open the Programmability/Stored Procedures tree, I have a long list of dbo.aspnet_spXXX stored procedures with my procedures loitering at the end.
What I would like to do is shuffle all those aspnet stored procedures into a folder of their own, leaving mine floating loose as they are now. I don't want to dispense with the current organisation, I just want to refine it a little.
Is it possible to do this?
I think the best you can do in SSMS is to use a filter to exclude the aspnet stored procedures.
Right click the Stored Procedures folder
Select Filter -> Filter Settings
Filter by Name, Does not contain, 'aspnet_sp'.
I would recommend Red Gate's SQL Search tool - handy for finding a particular proc rather than scrolling through a large list. It allows you to double-click and go to it:
http://www.red-gate.com/products/sql-development/sql-search/
Management Studio doesn't support the ability to sort these objects other than alphabetically.
I like the filter and 3rd party add-in ideas, but another idea you can explore is using a different schema for your objects. If you name the schema 'abc' or something more logical, they will always sort first and none of your users will have to apply the filter.
CREATE SCHEMA abc AUTHORIZATION dbo;
GO
ALTER SCHEMA abc TRANSFER dbo.proc1;
ALTER SCHEMA abc TRANSFER dbo.proc2;
ALTER SCHEMA abc TRANSFER dbo.proc3;
...
Of course you will need to update your code to reference this schema and you should also change all of your users' default schema.
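Changing a user's default schema is a one-liner per user (hypothetical user name):
-- Unqualified object names will now resolve to the abc schema first
ALTER USER app_user WITH DEFAULT_SCHEMA = abc;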
This isn't really one of the primary purposes of schemas, but short of putting your objects in a different database, this is one way to visually separate them.

What is the best way to CRUD dynamically created tables?

I'm creating (well, have created) an ASP.NET application (SQL Server backend) that allows the user (a business) to create their own tables and fields. They will all be child tables of a parent table (non-dynamic) and have proper PK/FK relationships (default fields when the table is created).
However, I don't like my current method of updating/inserting and selecting the fields. I was going to create an SP that was passed the proper keys and table names, then have it return the proper SQL statement. I'm thinking that it might make more sense to just pass the name/value pairs of fields/values and have an SP actually process them. Is this the best way to do it? If so - I'm not good at SPs, so any examples of how would be appreciated.
I don't have a lot of experience with the EAV model, but it does sound like it might be a good idea for implementing what you're trying to achieve. However, if you already have a system in place, an overhaul could be very expensive.
If the queries you're making against the user tables are basic CRUD operations, what about just creating CRUD stored procs for each table? E.g. -
Table:
acme_orders
Stored Procs:
acme_orders_insert
acme_orders_update
acme_orders_select
acme_orders_delete
... [other necessary procs]
I have no idea what the business needs are for these tables, but I imagine that whatever you're doing currently could be translated into doing the same thing with stored procs.
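For example, a minimal sketch of one such proc, assuming hypothetical columns on acme_orders:
-- One narrow, compiled entry point per operation per table
CREATE PROCEDURE acme_orders_insert
    @CustomerID int,
    @OrderDate datetime
AS
BEGIN
    SET NOCOUNT ON;
    INSERT INTO acme_orders (CustomerID, OrderDate)
    VALUES (@CustomerID, @OrderDate);
END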
"I was going to create an SP that was passed the proper keys and table names, then have it return the proper SQL statement. I'm thinking that it might make more sense to just pass the name/value pairs of fields/values and have an SP actually process them."
Assuming you mean the proc would generate and then execute the SQL (sometimes known as dynamic SQL) this can work, but it probably performs slower than static / compiled SQL, as in normal procs.
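Assuming SQL Server, here is a hedged sketch of that pattern (hypothetical proc and column names) which whitelists the table name against the catalog and still parameterises the values:
CREATE PROCEDURE dbo.SelectFromUserTable
    @TableName sysname,
    @ID int
AS
BEGIN
    SET NOCOUNT ON;
    -- Only run against tables that actually exist, blocking injection via the name
    IF NOT EXISTS (SELECT 1 FROM sys.tables WHERE name = @TableName)
        RETURN;
    DECLARE @sql nvarchar(max) =
        N'SELECT * FROM ' + QUOTENAME(@TableName) + N' WHERE ID = @ID;';
    -- Values are still passed as real parameters, never concatenated
    EXEC sp_executesql @sql, N'@ID int', @ID = @ID;
END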

Is it a good practice to avoid ad hoc sql altogether in ASP.NET applications?

Instead, create only stored procedures and call them from the code?
There is a place for dynamic SQL and/or ad hoc SQL, but it needs to be justified based on the particular usage needs.
Stored procedures are by far a best practice for almost all situations and should be strongly considered first.
This issue is a little bigger than just procs or ad hoc, because the database has a wide variety of tools to define its interface, including tables, views, functions and procedures.
People here have mentioned execution plans and parameterization but, by far, the most important thing in my mind is that any technique which relies on exposing base tables to users means that you lose any ability for the database to change its underlying implementation, or to control security vertically or horizontally. At the very least, I would expose only views to a typical application/user/role.
In a scenario where the application or user's account only has access to EXEC SPs, then there is no possibility of that account being able to even have a hope of using a SQL injection of the form: "; SELECT name, password from USERS;" or "; DELETE FROM USERS;" or "; DROP TABLE USERS;" because the account doesn't have anything but EXEC (and certainly no DDL). You can control column visibility at the SP level and not have to deny select on an employee salary column, for example.
In other words, unless you are comfortable granting db_datareader to public (because that's effectively what you are doing when you LINQ-to-tables), then you need some sort of realistic security in your application, and SPs are the only way to go, with LINQ-to-views possibly being acceptable.
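A minimal T-SQL sketch of that EXEC-only posture, with a hypothetical role name:
CREATE ROLE app_exec;
-- The application account can run procedures...
GRANT EXECUTE ON SCHEMA::dbo TO app_exec;
-- ...but cannot touch base tables directly (and has no DDL rights)
DENY SELECT, INSERT, UPDATE, DELETE ON SCHEMA::dbo TO app_exec;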
Depends entirely on what you're doing.
As a general rule a stored proc will have its query plan cached better than a dynamically generated SQL statement. It will also be slightly easier to maintain indexes for.
However, dynamically generated SQL statements can have their query plans cached too, so the difference is marginal.
Dynamically generated SQL statements can also introduce security risks - always parameterise them.
That said, sprocs are a pain to maintain and update; they separate DB logic and .NET code in a way that makes it harder for developers to piece together what a data access method is doing.
Also, to fix or update a SQL string you just change code. To fix or update a sproc you have to change the database - often a much messier option.
So I wouldn't recommend that as a 'one size fits all' best practice.
There is no right or wrong answer here. There are benefits to both which can be easily obtained through a google search. Different projects with different requirements may lead you to different solutions. It's not as black or white as you might want it to be. You might as well throw ORMs into the mix. If you prefer sql queries in your data layer as opposed to stored procs, make sure you use parametrized queries.
SQL in an SP - easy to maintain; SQL in the app - a pain in the butt to maintain.
It's so much faster and easier to hop onto a SQL instance, modify an SP, test it, then deploy the SP, instead of having to modify the code in the app, test it, then deploy the app.
It depends on the data distribution in your table. Prepared query plans and stored procedures get cached, and the plan itself depends on the table statistics.
Suppose you're building a blog and that your posts table has a user_id. And that you're frequently doing stuff like:
select posts.* from posts where user_id = ? order by published desc limit 20;
Suppose there are indexes on posts (user_id) and posts (published desc).
Additionally, suppose that you have two authors: author1, who wrote 3 posts a long time ago, and author2, who has written 10k posts since.
In this case, the query plan of the ad hoc query will be very different depending on whether you're fetching the author1 posts or the author2 posts:
For author1, the database will decide to use the index on user_id and sort the results.
For author2, the database will read the first 20 rows using the index on published.
If you prepare the statement, the planner will pick one of the two plans. Suppose it picks the second (which I think is likely): applied to author1, this means going through the whole table by way of the index on published -- which is much slower than the optimal plan.
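To make this concrete, a sketch in PostgreSQL-style syntax (an assumption - the ? placeholder from the query above becomes $1 here; the same plan-reuse behaviour applies in other engines):
PREPARE recent_posts (int) AS
    SELECT * FROM posts WHERE user_id = $1 ORDER BY published DESC LIMIT 20;
-- The plan chosen for the prepared statement can be reused for both authors,
-- even though the optimal plan differs between author1 and author2
EXECUTE recent_posts(1);
EXECUTE recent_posts(2);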
If simplicity is your goal, then an ORM would be a good practice for your simple database operations
ORMs like Entity Framework, nHibernate, LINQ to SQL, etc. will manage the code creation of the data access and repository layers and provide you with strongly typed objects representing your tables. This can lead to a cleaner, more maintainable architecture.
Save the stored procedures for your more complex queries. This is where you can take advantage of advanced SQL and cached query plans.
Dynamic SQL - Bad
Stored Procedures - Better
Linq-To-SQL or Linq-to-EF (or ORM tools) - Best
You do not want dynamic SQL inside your application since you do not have compile-time checking. Stored procedures will at least be checked, but they are still not part of a cohesive unit and move business logic into the database layer. Linq-To-EF will allow business logic to stay inside your application and allow you to have compile-time checking of syntax.

Inner join across multiple access db's

I am re-designing an application for an ASP.NET CMS that I really don't like. I have made some improvements in performance, only to discover that not only does this CMS use MS SQL, but some users "simply" use an MS Access database.
The problem is that I have some tables which I inner join that, in the MS Access version, live in two different files. I am not allowed to simply move the tables to the other mdb file.
I am now trying to figure out a good way to "inner join" across multiple Access db files.
It would really be a pity if I had to fetch all the data and then do it programmatically!
Thanks
You don't need linked tables at all. There are two approaches to using data from different MDBs that can be used without a linked table. The first is to use "IN 'c:\MyDBs\Access.mdb'" in the FROM clause of your SQL. One of your saved queries would be like:
SELECT MyTable.*
FROM MyTable IN 'c:\MyDBs\Access.mdb'
and the other saved query would be:
SELECT OtherTable.*
FROM OtherTable IN 'c:\MyDBs\Other.mdb'
You could then save those queries, and then use the saved queries to join the two tables.
Alternatively, you can manage it all in a single SQL statement by specifying the path to the source MDB for each table in the FROM clause thus:
SELECT MyTable.ID, OtherTable.OtherField
FROM [c:\MyDBs\Access.mdb].MyTable
INNER JOIN [c:\MyDBs\Other.mdb].OtherTable ON MyTable.ID = OtherTable.ID
Keep one thing in mind, though:
The Jet query optimizer won't necessarily be able to use the indexes from these tables for the join (whether it will use them for criteria on individual fields is another question), so this could be extremely slow (in my tests, it's not, but I'm not using big datasets to test). But that performance issue applies to linked tables, too.
If you have access to the MDBs, and are able to change them, you might consider using Linked Tables. Access provides the ability to link to external data (in other MDBs, in Excel files, even in SQL Server or Oracle), and then you can perform your joins against the links.
I'd strongly encourage performance testing such an option. If it's feasible to migrate users of the Access databases to another system (even SQL Express), that would also be preferable -- last I checked, there are no 64-bit JET drivers for ODBC anymore, so if the app is ever hosted in a 64-bit environment, these users will be hosed.
Inside one access DB you can create "linked tables" that point to the other DB. You should (I think) be able to query the tables as if they both existed in the same DB.
It does mean you have to change one of the DBs to create the virtual table, but at least you're not actually moving the data, just making a pointer to it.
Within Access, you can add remote tables through the "Linked Table Manager". You could add the links to one Access file or the other, or you could create a new Access file that references the tables in both files. After this is done, the inner-join queries are no different than doing them in a single database.
