I have a stored procedure in which temporary tables created with "on commit preserve rows" are built dynamically and data is inserted into them. When I try to execute any other dynamic SQL statement, the data in the temporary tables is deleted, but I need that data for further processing.
Can anyone tell me why the data is being lost, and what the solution is?
Thank you.
Three possible reasons for this:
There is an explicit commit.
There is an implicit commit (a DDL statement, typically).
You are closing the session and starting a new one.
If you cannot avoid these then you'll have to create a permanent table.
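To illustrate the second reason: in Oracle, for example, any DDL statement issues an implicit commit, and that commit empties a global temporary table declared with ON COMMIT DELETE ROWS. A minimal sketch (table and column names are hypothetical):

CREATE GLOBAL TEMPORARY TABLE tmp_data (id NUMBER) ON COMMIT DELETE ROWS;

INSERT INTO tmp_data VALUES (1);
SELECT COUNT(*) FROM tmp_data;     -- returns 1

CREATE TABLE other_t (id NUMBER);  -- any DDL triggers an implicit commit

SELECT COUNT(*) FROM tmp_data;     -- returns 0: the commit purged the rows

With ON COMMIT PRESERVE ROWS the rows survive commits, but they are still lost when the session ends, which is the third case above.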
I have a performance issue with multiple temporary tables that I'm trying to solve with RecordSortedList, but I'm getting strange results. I have a temporary table that has a couple hundred thousand records being inserted into it, which is then used elsewhere for joins to other temporary tables. The problem is that, after trace-parsing this solution, all the individual inserts are taking too long, and I was hoping to use a RecordSortedList to bulk-insert into the staging table. However, I can't find a handle to the temporary table after the RecordSortedList.insertDatabase() call.
I've tried something like this:
RecordSortedList tmpTableSortedList;
MyTempTable myTempTable;
AssetTrans assetTrans;
int i = 1;

tmpTableSortedList = new RecordSortedList(tableNum(MyTempTable));
tmpTableSortedList.sortOrder(fieldNum(MyTempTable, LineNum));

// The real scenario has much more complicated data gathering; this is just a sample.
while select * from assetTrans
{
    myTempTable.AssetGroup = assetTrans.AssetGroup;
    myTempTable.LineNum = i;
    tmpTableSortedList.ins(myTempTable);
    i++;
}

tmpTableSortedList.insertDatabase();
// Strange things happen here
MyTempTable myTempTableCopy;
AnotherTmpTable anotherTmpTable;

tmpTableSortedList.first(myTempTableCopy); // returns a buffer, but not a buffer usable in a join

// Does not work, I imagine because myTempTableCopy isn't actually pointing to the
// records inserted above; somehow the temp table is out of scope.
while select * from anotherTmpTable
    join myTempTableCopy
    where anotherTmpTable.id == myTempTableCopy.id
{
    // logic
}
Is there a way to get a pointer to the temp table after the call to RecordSortedList.insertDatabase()? I've also tried linkPhysicalTable() and a few other things, but maybe RecordSortedList was not supposed to be used with tempDb tables?
Edit: As Aliaksandr points out below, this works with RecordInsertList instead of RecordSortedList.
but maybe RecordSortedList was not supposed to be used with tempDb tables?
Error message when using TempDb tables:
RecordInsertList or RecordSortedList operations are not allowed with database temporary tables.
So it's not allowed, which might make sense because RecordSortedList is a memory-based object and TempDb tables are not. I would think you could, though, because I'm not sure there's a huge difference between a TempDb table and a regular table when they're both stored on disk.
If you wanted to use an InMemory table, look at \Classes\CustVendSettle, specifically the variable rslTmpOverUnderReverseTax, which uses an InMemory table.
If TempDb tables were allowed, you would use getPhysicalTableName() to get the handle, combined with useExistingTempDBTable().
Or did I misread your question?
does not work, I imagine because the myTempTableCopy isn't actually pointing to the inserted records above; somehow the temp table is out of scope.
The new method of RecordSortedList has an additional Common parameter where you should pass your tempDB table buffer.
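A minimal X++ sketch of that pattern, reusing the question's sample names (the tempDB buffer must already be instantiated, and the code must run on the server, as noted below):

MyTempTable myTempTable;            // tempDB table buffer
RecordSortedList tmpTableSortedList;

// Passing the buffer as the second argument ties the list to this specific
// tempDB instance, so the inserted records stay reachable through myTempTable
// after insertDatabase() is called.
tmpTableSortedList = new RecordSortedList(tableNum(MyTempTable), myTempTable);
tmpTableSortedList.sortOrder(fieldNum(MyTempTable, LineNum));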
Error message when using TempDb tables:
RecordInsertList or RecordSortedList operations are not allowed with database temporary tables.
So it's not allowed, which might make sense because RecordSortedList is a memory-based object and TempDb tables are not.
Although the message says we can't use temporary tables for such operations, we in fact can. We just need to be careful, because the code must be executed on the server.
RecordSortedList objects must be server-located before the insertDatabase method can be called. Otherwise, an exception is thrown.
I have a temporary table that has a couple hundred thousand records being inserted into it
There is no limit to the size of a RecordSortedList object, but it is completely memory-based, so there are potential memory consumption problems. This may not be the best solution in your case.
I have a massive database (~800 GB) with several indexed tables. I need to copy one table (including indexes) to a new database. Copying the table itself is pretty straightforward.
$ sqlite3 newDB
> attach database 'oldDB.db' as oldDB;
> create table newTable as select * from oldDB.oldTable;
But I can't seem to find any information on a way to also copy over an index. Is there any way to do this? Since the tables are so large I'd really like to avoid having to re-index them.
SQLite has no mechanism to copy index contents.
If this particular table would be the majority of the data in the database, the fastest way to copy it would be to copy the database file and then to drop all other tables.
But otherwise, you cannot avoid the reindex operation.
Please note that CREATE TABLE ... AS ... copies only the contents of the table, but not the complete table definition (such as column types or constraints).
Copying a large table in a single transaction is not a good idea. If you really have to, you should turn off journaling first (on the destination database):
PRAGMA journal_mode=OFF;
As the others have stated, the index cannot be broken out. I suspect that the time spent copying the database and then dropping a very large table would be longer than just: 1. creating the new destination database; 2. determining the original CREATE TABLE statement (from the SQLITE_MASTER table of the source database) and recreating the table in the destination database; then 3. ATTACHing your destination database to the source database and running INSERT INTO destinationdb.tablename SELECT * FROM sourcedb.tablename; to get the copy rolling. A sketch of these steps follows.
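A hedged sketch of steps 2 and 3 in the sqlite3 shell; the column list and index here are hypothetical stand-ins for the DDL you would recover from SQLITE_MASTER:

-- Step 2: recover the original table and index DDL from the source database:
--   sqlite3 oldDB.db "select sql from sqlite_master where tbl_name = 'oldTable';"

$ sqlite3 newDB
> attach database 'oldDB.db' as oldDB;
> create table newTable (id integer primary key, name text);  -- paste the recovered DDL here
> insert into newTable select * from oldDB.oldTable;
> create index idx_newTable_name on newTable(name);           -- each index is rebuilt, not copied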
I am using a global application user account to access database A. This user account does not have permissions to modify database A's schema (ie, create tables, modify tables, etc). This user also has access to database B, but only views. I need to run SQL to feed data from a view in database B into a table in database A.
In a perfect world, I would be able to use this SQL:
create table database_a.mytable as (select * from database_b.myview) with no data
However, the user can't create tables in database A. If I could get the DDL of the select statement then I could log in under my personal account (which doesn't have any access to database B) and run the DDL in database A to create the table.
The only other option is to manually write the SQL, but I don't want to do that, especially since this view I am wanting to copy has many columns of varying data types and sizes.
Edit: I may be getting closer. I just experimented with this:
show (select * from database_b.myview)
However, it generated the DDL of every single table that is used in the view itself, as well as the definition for the view. This doesn't really help me, since I just want the schema of the select statement itself. In other words, I need what would be generated if I were to use the create table as statement mentioned above.
Edit for Rob: Perhaps "DDL" was the wrong term to use. Using show view db.myview just shows the definition of the view, not the schema it represents. In my above example of create table as, I show how you can create a table that mimics the schema of a result set returned by a select. It generates DDL on the back end for creating a table and then executes that DDL to actually create the table. You can then say show table db.newtable and see the new table's DDL. I want to get that DDL directly from a select statement so that I can copy it, log out of the app account and into my personal account, and then execute the DDL to create the table.
This is only to save me the headache of having to type out the DDL manually by hand to save time and reduce typing errors, especially since the source view has so many columns. That said, I think hitting up the DBA or writing some snazzy stored procedure to do dynamic stuff would be a bit over the top for my needs. I think there has to be a way to get the DDL for creating a table schema directly from a select statement.
Generate DDL Statements for objects:
SHOW TABLE {DatabaseB}.{Table1};
SHOW VIEW {DatabaseB}.{View1};
Breakdown of columns in a view:
HELP VIEW {DatabaseB}.{View1};
However, without the ability to create the object in the target database DatabaseA, you don't have much leverage. Obviously, if the object already existed, INSERT INTO ... SELECT ... FROM DatabaseB.Table1 or MERGE INTO would be options that you have already explored.
Alternative Solution
Would it be possible to have a stored procedure created that dynamically created the table based on the view name that is provided? The global application account would simply need privilege to execute the procedure. Generally the user creating the stored procedure would need the permissions to perform the actions contained within the stored procedure. (You have some additional flexibility with this in Teradata 13.10.)
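A hypothetical sketch of such a procedure (the procedure name is made up, the source and target databases are hard-coded for brevity, and DBC.SysExecSQL runs the dynamically built DDL):

REPLACE PROCEDURE DatabaseA.MaterializeViewStructure (IN ViewName VARCHAR(128))
BEGIN
   -- Build and run the CREATE TABLE ... AS ... WITH NO DATA statement
   -- from the question for the supplied view name.
   CALL DBC.SysExecSQL(
      'CREATE TABLE DatabaseA.' || ViewName ||
      ' AS (SELECT * FROM DatabaseB.' || ViewName || ') WITH NO DATA');
END;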
There are some caveats with this approach. You are attempting to materialize views that could reference anywhere from hundreds to billions of records. These aren't simple 1:1 views that are put on top of the target tables. Trying to determine the required space in the target database to materialize the view will be difficult. Performance can and will vary depending on the complexity of the view and the data volumes. This will not be a fast-path or data block optimized operation.
As a DBA, I would be concerned with this approach being taken on by a global application account without fully understanding the intent. I trust you have an open line of communication with the DBA(s) involved for supporting this system. I'm sure there are reasons for your madness that can't be disclosed here.
Possible Solution - VOLATILE TABLE
Unless the implicit privilege for CREATE TABLE has been revoked from the global application account this solution should work.
Volatile tables do not require perm space. Their table definitions persist for the duration of the session, and any data inserted into them relies on the spool space of the user who instantiated them.
CREATE VOLATILE TABLE {Global Application UserID}.{TableA_Copy} AS
(
SELECT *
FROM {DatabaseB}.{TableA}
)
WITH NO DATA
NO PRIMARY INDEX
ON COMMIT PRESERVE ROWS;
SHOW TABLE {Global Application UserID}.{TableA_Copy};
I opted to use a Teradata 13.10 feature called NO PRIMARY INDEX. By default, CREATE TABLE AS will take the first column of the SELECT statement and make it the PRIMARY INDEX of the table. This could lead to skewing and perm space issues in your testing depending on the data demographics. You can specify an explicit PRIMARY INDEX on your own as you understand the underlying data. (See the DDL manuals for details on the syntax if you're uncertain.)
The use of ON COMMIT PRESERVE ROWS for the intent of this example is probably extraneous. But in reality, if you popped any data into that table for testing, this clause would be beneficial in Teradata mode, as the data would otherwise be lost immediately after the CREATE TABLE or any other data manipulation was performed against the volatile table.
The short version:
I have a grid view bound to a data source which has a SelectCommand with a left join in it, because the FK can be null. On Update I want to create a record in the FK table if the FK is null, and then update the parent table with the new record's ID. Is this possible to do with just SqlDataSources?
The detailed version:
I have two tables: Company and Address. The column Company.AddressId can be null. On my ascx page I am using a SqlDataSource to select a left join of Company and Address, and a GridView to display the results. By having the UpdateCommand and DeleteCommand of the SqlDataSource execute two statements separated by a semicolon, I am able to use the GridView's Edit and Delete functionality to update both tables simultaneously.
The problem I have is when Company.AddressId is null. What I need to happen is for the data source to create a record in the Address table, update the Company table with the new Address.ID, and then proceed with the update as usual. I would like to do this with just data sources if possible, for consistency's and simplicity's sake. Is it possible to have my data source do this, or perhaps to add a second data source to the page to handle some of this?
Once I have that working I can probably figure out how to make it work with the InsertCommand as well but if you are on a roll and have an answer for how to make that fly as well feel free to provide it.
Thanks.
execute two statements separated by a semi-colon
I don't see any reason why it wouldn't be possible to do both an INSERT and an UPDATE in two statements with SqlDataSource, just like you are doing here.
However, just so you know, if you have a lot of traffic or many users using the application at the same time, you can run into concurrency issues, where one user does something that affects another user and unexpected results can cascade and mess up your data. In general, for things like what you are doing (an INSERT and UPDATE involving primary or foreign keys), SQL TRANSACTIONs are usually used. But you must execute them as SQL stored procedures (or functions) on your SQL database. You are still able to call them from your SqlDataSource, however, by simply telling it that you are calling a stored procedure.
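A hedged T-SQL sketch of that stored-procedure approach, using the question's Company and Address tables; the key and street column names are assumptions for illustration:

CREATE PROCEDURE UpdateCompanyAddress
    @CompanyId INT,
    @Street NVARCHAR(100)
AS
BEGIN
    BEGIN TRANSACTION;

    DECLARE @AddressId INT;
    SELECT @AddressId = AddressId FROM Company WHERE Id = @CompanyId;

    IF @AddressId IS NULL
    BEGIN
        -- No address yet: create one, then point the company at it.
        INSERT INTO Address (Street) VALUES (@Street);
        UPDATE Company SET AddressId = SCOPE_IDENTITY() WHERE Id = @CompanyId;
    END
    ELSE
    BEGIN
        UPDATE Address SET Street = @Street WHERE Id = @AddressId;
    END

    COMMIT TRANSACTION;
END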
I have a table in a database on a server that is identical to a table on my local machine. I have a ClickOnce application that needs to download the version of the records on the server to my local machine.
At the moment I have a web service that pulls back the records on the server in batches, using ASP.NET DataSets as containers. How do I commit the whole DataSet to the table on my local machine? The local table is empty.
Cheers in advance!
If you already have a DataSet containing one or several DataTables, why don't you just use a SqlDataAdapter and call its .Update() method with your DataSet?
In the SqlDataAdapter, you can define an InsertCommand, an UpdateCommand, and a DeleteCommand, which will take care of the three basic insert/update/delete statements for your rows. All you need to do is define/write those three SQL statements once, and the SqlDataAdapter will do the rest for you (looping through the rows, figuring out whether to insert, update or delete, etc.).
If you want, you can even use your basic SELECT statement from the SelectCommand in your DataSet and use a SqlCommandBuilder to build the INSERT, UPDATE and DELETE statements based on your SELECT.
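A hedged C# sketch of that SqlCommandBuilder route; the connection string and table name are placeholders, and SetAdded() re-marks the downloaded rows so that Update() issues INSERTs against the empty local table (assuming they arrive with an Unchanged row state):

using System.Data;
using System.Data.SqlClient;

class DataSetLoader
{
    static void SaveToLocal(DataSet ds)
    {
        using (var conn = new SqlConnection("Data Source=.;Initial Catalog=LocalDb;Integrated Security=true"))
        {
            var adapter = new SqlDataAdapter("SELECT * FROM MyTable", conn);
            var builder = new SqlCommandBuilder(adapter); // derives INSERT/UPDATE/DELETE from the SELECT

            foreach (DataRow row in ds.Tables["MyTable"].Rows)
                row.SetAdded(); // ensure Update() treats each row as a new record

            adapter.Update(ds, "MyTable");
        }
    }
}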
MSDN Library doc on SqlDataAdapter
SQL Data Adapter without SqlCommandBuilder
MSDN Library doc on SqlCommandBuilder
Marc
There are several options. Here are the first two that come to mind.
One would be to loop through the DataTable, build an SQL INSERT statement on each iteration, and then execute that INSERT statement against the local database.
Another would be to use SqlBulkCopy to insert the data.
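A hedged C# sketch of the SqlBulkCopy option; the connection string and destination table name are placeholders:

using System.Data;
using System.Data.SqlClient;

class BulkLoader
{
    static void CopyToLocal(DataTable serverRows)
    {
        // Assumes the local table already exists with an identical schema.
        using (var bulk = new SqlBulkCopy("Data Source=.;Initial Catalog=LocalDb;Integrated Security=true"))
        {
            bulk.DestinationTableName = "MyTable";
            bulk.WriteToServer(serverRows); // one bulk operation instead of row-by-row inserts
        }
    }
}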