I'm new to data analysis and have to perform an impact analysis for a refresh on a table. Is it required to refresh the parent tables and child tables as well? - oracle11g

The table has a set of parent tables and child tables, and also grandchildren, so would the refresh impact those tables as well?
I need to perform a 'data refresh', which here means getting all production data synced up to dev for a specific table where we made changes. It is basically for test data setup. We are looking at 'Append and Refresh' instead of 'Truncate and Refresh'.
Example - we have a table emp_sched that has a parent table emp and a set of child tables with emp provisions.
Here the emp_sched table has foreign keys referencing the primary keys of two tables (emp and emp_change).
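One way to see which tables a refresh would touch is to list the foreign-key relationships from the Oracle data dictionary. A rough sketch, assuming the table names above and that the constraints are visible to the querying account:

-- Parent tables that EMP_SCHED references through its foreign keys
SELECT c.constraint_name, p.owner, p.table_name AS parent_table
FROM   all_constraints c
JOIN   all_constraints p
  ON   p.owner = c.r_owner
 AND   p.constraint_name = c.r_constraint_name
WHERE  c.table_name = 'EMP_SCHED'
  AND  c.constraint_type = 'R';

-- Child tables whose foreign keys reference EMP_SCHED
SELECT c.owner, c.table_name AS child_table, c.constraint_name
FROM   all_constraints c
JOIN   all_constraints p
  ON   p.owner = c.r_owner
 AND   p.constraint_name = c.r_constraint_name
WHERE  p.table_name = 'EMP_SCHED'
  AND  c.constraint_type = 'R';

Running the second query again for each child table found walks the dependency down to the grandchildren.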

Related

Rollback/delete parent if child is null in Pentaho kettle

I'm using Pentaho Kettle 8.0 and I've created a transformation to migrate data between PostgreSQL databases. This transformation reads information about orders (parent) and their items (child) and inserts or updates the target database. But I'm having problems with orders that have no items or where the transformation fails to insert the items. What I need is: every order must have at least 1 item.
I've designed the transformation to lookup the order data and insert/update the target table and then lookup the items. If there is an error during these steps, how can I rollback/delete the parents?
The target tables are like this:
Orders - Order_ID, Value, Qty, Customer_ID
OrderItems - Item_ID, Value, Qty, Order_ID
I suggest you do it in two steps. First, do exactly what you do now: insert the parents and children without any concern about insertion errors. Once that is finished, a second transformation cleans up any parent without a child.
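A minimal sketch of that cleanup step, assuming the target table names from the question:

-- second pass: remove any order that ended up with no items
DELETE FROM Orders o
WHERE NOT EXISTS (
    SELECT 1
    FROM OrderItems i
    WHERE i.Order_ID = o.Order_ID
);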
If you need to do it in one step (for example, if the system is in production), I would produce the orders and items flows. Then, for each order, look up whether there is at least one item and filter out the orders without items before writing to the database.
You may also count the number of items per order before filtering out the orders without any items.

icCube join table with ETL

I have a Customers table which contains the salesRepEmployeeNumber which is in the Employees table.
How do I do something like
SELECT *
FROM Customers
JOIN Employees
ON Customers.salesRepEmployeeNumber = Employees.employeeNumber
with icCube ETL ?
As pointed out in another answer, you can add a table based on an SQL statement that does the job. In case your original datasource is not able to do a join:
We don't have a join transformation yet; it has been added to our todo list. In the meantime, here is what you can do.
Create a Union Table with your two tables. This creates a new table with the columns of both tables. Put the small one first, as we're going to cache it later on.
Create a Javascript view; you might need to activate Javascript in your icCube.xml configuration. In this view you can cache the first table and use a bit of JS to do the join. You can trigger the table change on a field being empty. Don't forget to set 'Table Row Ordering' to Keep Table Order.
Hope it helps.
No need to use the ETL.
With the designer, add a table with the + sign in the menu above DataSource. The next panel gives you the choice between reading data from an existing table or an SQL query.

Alternatives to a UNION query in Access 2010 Web Database

I need to assign one of multiple parent types to a single child item. The problem I encounter is that in an Access 2010 web database I cannot create a Union query to bring all the potential parents (from multiple tables) into a single drop down / listbox.
I'm a bit green to all this and could be going about it completely wrong. I'm very open to suggestions. Here is my example:
Contracts are the parent of Subcontracts.
Both Contracts and Subcontracts have a Statement of Work (SoW).
Contracts and Subcontracts can both be direct parents of a SoW.
Each SoW will have only one parent
SoWs are split into paragraphs (not overly consequential)
With a union query I would build the database this way:
Contracts table
Subcontracts table
Union table for contracts and subcontracts
Lookup to union table from SoW table in order to select either a contract or a subcontract as parent from a single data source.
The problem here is that I cannot create a union query in a web database.
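For reference, the union query that a desktop (non-web) Access database would allow looks roughly like this; the column names are hypothetical:

SELECT ContractID AS ParentID, ContractName AS ParentName, 'Contract' AS ParentType
FROM Contracts
UNION ALL
SELECT SubcontractID, SubcontractName, 'Subcontract'
FROM Subcontracts;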
My only other thought is to construct the database in this fashion:
Contracts table
Subcontracts table
Contracts SoW table
Subcontracts SoW table
This design (using two tables) might work more effectively for data entry as there could be issues with subforms when attempting to use a union table. I'm not sure as I haven't yet tried. With this method, the Access report should be able to bind the subcontract to the parent contract and display all data in a detail section. However, this design still means that I will use two separate tables to house identical data.
I would put the two contract tables together into one table that would look something like this:
CREATE TABLE ContractTable(
ContractID INTEGER NOT NULL PRIMARY KEY, -- Possibly an autonumber
[various contract columns],
ParentContract INTEGER -- NULL for a top-level contract, otherwise the parent's ContractID
);
Note, I know this is not Access friendly syntax. I usually use bigger DBs, but you should be able to get the idea.
Then your query to find parent contracts is SELECT ... FROM ContractTable WHERE ParentContract IS NULL.
To find sub contracts SELECT ... FROM ContractTable WHERE ParentContract IS NOT NULL.
My concern with this approach is that if you need to search through chains of contracts (i.e. A parent of B, B parent of C, C parent of D, and you need to go from A to D), you could run into recursive SQL, which I don't think Access can handle. You'd have to do it in VBA code.
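For a single level the chain is just a self-join; it is only arbitrary-depth traversal that becomes a problem. A sketch, reusing the hypothetical ContractTable above:

-- each subcontract with its immediate parent
SELECT child.ContractID, parent.ContractID AS ParentContractID
FROM ContractTable AS child
INNER JOIN ContractTable AS parent
  ON child.ParentContract = parent.ContractID;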

MS Access 2010 Triggers/data macros

I am developing a database to do an annual inventory count, with 32 tables in it, 33 including the Master.
We currently have 4000 SKUs, so the master table needed to be broken down into smaller tables so I can hand out a realistic amount of work to my counters.
What I am trying to achieve is that when my counters enter data in the smaller tables using the UI, it automatically populates the fields in the master table.
Any help would be greatly appreciated.
Michael
In Access, there is no way to apply a trigger to a table. What you can do is create a form that implements a grid. Have an After-Update event fire that does what you need. You can make the form look like a table by using the datasheet view.
While you can create a data macro* to update a table from an update on another, why would you want to do that in this case? You can either include the quantity field in the sub table and validate the data against the main table before running an update query, or the sub table (note: table, the employee ID will be sufficient to divide the data) could consist only of an employee ID and an SKU. The sub table can then be joined to the main table by SKU, and all updates use the quantity field from the main table:
SELECT MyTable1.SKU, MyTable.Quantity
FROM MyTable1
INNER JOIN MyTable
ON MyTable1.SKU = MyTable.SKU
WHERE MyTable1.EmployeeID = [Enter ID: ]
*Data Macro
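If you go with the first option instead (quantity captured in the sub table), the write-back is an ordinary Access update query. A sketch with the same hypothetical table names, assuming the counted quantity lives in MyTable1.Quantity:

UPDATE MyTable INNER JOIN MyTable1
  ON MyTable.SKU = MyTable1.SKU
SET MyTable.Quantity = MyTable1.Quantity
WHERE MyTable1.EmployeeID = [Enter ID: ];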

Teradata: Is there a way to generate DDL from a view or select statement?

I am using a global application user account to access database A. This user account does not have permissions to modify database A's schema (ie, create tables, modify tables, etc). This user also has access to database B, but only views. I need to run SQL to feed data from a view in database B into a table in database A.
In a perfect world, I would be able to use this SQL:
create table database_a.mytable as (select * from database_b.myview) with no data
However, the user can't create tables in database A. If I could get the DDL of the select statement then I could log in under my personal account (which doesn't have any access to database B) and run the DDL in database A to create the table.
The only other option is to manually write the SQL, but I don't want to do that, especially since this view I am wanting to copy has many columns of varying data types and sizes.
Edit: I may be getting closer. I just experimented with this:
show (select * from database_b.myview)
However, it generated the DDL of every single table that is used in the view itself, as well as the definition for the view. This doesn't really help me since I just want the schema of the select statement itself. In other words, I need what would be generated if I were to use the create table as statement mentioned above.
Edit for Rob: Perhaps "DDL" was the wrong term to use. Using show view db.myview just shows the definition of the view, not the schema it represents. In my above example of create table as, I show how you can create a table that mimics the schema of a result set returned in a select. It generates a DDL on the back end for creating a table and then executes that DDL to actually create the table. You can then say show table db.newtable and see the new table's DDL. I want to get that DDL directly from a select statement so that I can copy it, log out of the app account, into my personal account, and then execute the DDL to create the table.
This is only to save me the headache of having to type out the DDL manually by hand to save time and reduce typing errors, especially since the source view has so many columns. That said, I think hitting up the DBA or writing some snazzy stored procedure to do dynamic stuff would be a bit over the top for my needs. I think there has to be a way to get the DDL for creating a table schema directly from a select statement.
Generate DDL Statements for objects:
SHOW TABLE {DatabaseB}.{Table1};
SHOW VIEW {DatabaseB}.{View1};
Breakdown of columns in a view:
HELP VIEW {DatabaseB}.{View1};
However, without the ability to create the object in the target database DatabaseA, you don't have much leverage. Obviously, if the object already existed, INSERT INTO SELECT ... FROM DatabaseB.Table1 or MERGE INTO would be options that you have already explored.
Alternative Solution
Would it be possible to have a stored procedure created that dynamically creates the table based on the view name that is provided? The global application account would simply need the privilege to execute the procedure. Generally, the user creating the stored procedure would need the permissions to perform the actions contained within the stored procedure. (You have some additional flexibility with this in Teradata 13.10.)
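A rough sketch of such a procedure, assuming dynamic DDL through DBC.SysExecSQL is permitted and the procedure is created by an account that can create tables in DatabaseA (names here are placeholders):

REPLACE PROCEDURE DatabaseA.CreateTableFromView
    (IN src_view  VARCHAR(256),
     IN tgt_table VARCHAR(256))
BEGIN
    -- build and submit the CREATE TABLE ... AS (SELECT ...) WITH NO DATA dynamically
    CALL DBC.SysExecSQL('CREATE TABLE ' || :tgt_table ||
                        ' AS (SELECT * FROM ' || :src_view ||
                        ') WITH NO DATA NO PRIMARY INDEX');
END;

-- the application account would then only need:
CALL DatabaseA.CreateTableFromView('DatabaseB.MyView', 'DatabaseA.MyTable');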
There are some caveats with this approach. You are attempting to materialize views that could reference anywhere from hundreds to billions of records. These aren't simple 1:1 views that are put on top of the target tables. Trying to determine the required space in the target database to materialize the view will be difficult. Performance can and will vary depending on the complexity of the view and the data volumes. This will not be a fast-path or data block optimized operation.
As a DBA, I would be concerned with this approach being taken on by a global application account without fully understanding the intent. I trust you have an open line of communication with the DBA(s) involved for supporting this system. I'm sure there are reasons for your madness that can't be disclosed here.
Possible Solution - VOLATILE TABLE
Unless the implicit privilege for CREATE TABLE has been revoked from the global application account this solution should work.
Volatile tables do not require perm space. Their table definitions persist for the duration of the session, and any data inserted into them relies on the spool space of the user who instantiated them.
CREATE VOLATILE TABLE {Global Application UserID}.{TableA_Copy} AS
(
SELECT *
FROM {DatabaseB}.{TableA}
)
WITH NO DATA
NO PRIMARY INDEX
ON COMMIT PRESERVE ROWS;
SHOW TABLE {Global Application UserID}.{TableA_Copy};
I opted to use a Teradata 13.10 feature called NO PRIMARY INDEX. By default, CREATE TABLE AS will take the first column of the SELECT statement and make it the PRIMARY INDEX of the table. This could lead to skewing and perm space issues in your testing depending on the data demographics. You can specify an explicit PRIMARY INDEX on your own as you understand the underlying data. (See the DDL manuals for details on the syntax if you're uncertain.)
The use of ON COMMIT PRESERVE ROWS for the intent of this example is probably extraneous. But in reality, if you popped any data into that table for testing, this clause would be beneficial in Teradata mode, as the data would otherwise be lost immediately after the CREATE TABLE or any other data manipulation was performed against the volatile table.
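For example, if you did want to sanity-check the copied definition with some data in the same session (same placeholders as above), the rows only survive the implicit commit because of that clause:

INSERT INTO {TableA_Copy}
SELECT * FROM {DatabaseB}.{TableA};

SELECT COUNT(*) FROM {TableA_Copy};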
