I have a table on which I drop and re-create an index each day, and I have another job that queries this table with an ACCESS lock.
Sometimes these jobs happen at the same time and then I get the following error:
2641 %DBID.%TVMID was restructured. Resubmit.
I found the following in the documentation:
Explanation:
A table was changed before a statement that references the table was processed.
(For example, an index may have been added or a field removed.)
Notes:
The statement may not have the intended result because of the change in the table.
Remedy:
Examine the table and resubmit the request.
https://docs.teradata.com/reader/8MhLDQBmL52OycrEKPuGqg/Ju5pqm9uRFO6VziQdcmA6w
I guess this is because the CREATE INDEX statement requests an EXCLUSIVE lock, so the SELECT statement is queued while the index is created; but when the SELECT is popped from the queue, the table has a different version number and the statement fails.
Maybe I am completely wrong, but is there any way to avoid this behaviour?
Something along the lines of making the SELECT statement re-evaluate the table when it finally gets the chance to execute.
Thank you!
It is up to the application to handle the 2641 and resubmit the request. There is no option to have the database do so automatically.
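For example, if the SELECT job runs as a BTEQ script, a single resubmission can be scripted with BTEQ's ERRORCODE built-in. A sketch, with a hypothetical table name (BTEQ's .GOTO only jumps forward, so the statement is repeated once rather than looped):

LOCKING TABLE mydb.mytable FOR ACCESS
SELECT col1 FROM mydb.mytable;
.IF ERRORCODE = 2641 THEN .GOTO resubmit
.GOTO done
.LABEL resubmit
LOCKING TABLE mydb.mytable FOR ACCESS
SELECT col1 FROM mydb.mytable;
.LABEL done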
I have a SQLite table in my application that periodically has INSERT/UPDATE statements executed against it. I would like to display a view in my application that reflects some query that is run against that table, and keep it continually updated as the table contents change. Since the table could be large, I would like to avoid having to re-run the query each time the table is updated so that I can update the view.
One idea I had was to use SQLite's Data Change Notification Callbacks to be notified whenever an INSERT/UPDATE occurs against the table in question. In my callback, I have the rowid of the newly-updated row, and I would like to see whether it matches the query. Assuming I have the query available as a prepared sqlite3_stmt, what would be the most efficient way to test whether the row would be matched by the query?
Aside: I know that I can't do anything in the callback itself that would affect the state of the database connection, and that's fine. I can defer the actual work of checking the query until later to ensure safety; I'm just trying to determine what the best mechanism for checking the query against the new row contents would be.
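One way to frame the per-row test (a sketch; the items table and its filter are hypothetical stand-ins for the real query): restrict the view's query to the rowid supplied by the callback, so SQLite tests a single row instead of re-running the query over the whole table:

-- the query backing the view (hypothetical):
SELECT * FROM items WHERE price > 10.0;

-- per-row membership test: prepare once, bind the rowid from the
-- change notification callback; returns 1 if the row matches, 0 if not
SELECT EXISTS (SELECT 1 FROM items WHERE price > 10.0 AND rowid = ?);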
I have a SET table in Teradata. When I load duplicate records through Informatica, the session fails because it tries to push duplicate records into the SET table.
I want Informatica to reject duplicate records whenever they are loaded, using a TPT or relational connection.
Can anyone help me with the properties I need to set?
Do you really need to keep track of what records are rejected due to duplication in the TPT logs? It seems like you are open to suggestions about TPT or relational connections, so I assume you don't really care about TPT level logs.
If this assumption is correct then you can simply put an Aggregator Transformation in the mapping and mark every field as Group By. As expected, this will add a group by clause in the generated query and eliminate duplicates in the source data.
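In SQL terms, grouping by every port is equivalent to something like the following (table and column names are hypothetical):

SELECT order_id, customer_id, amount
FROM   src_orders
GROUP BY order_id, customer_id, amount;  -- same effect as SELECT DISTINCT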
Please try the following things:
1. If you use FastLoad or the TPT fast load operator, the utility will implicitly remove duplicates, but it can only be used for loading into empty tables.
2. If you are trying to load data into a non-empty table, place a Sorter transformation in the mapping and de-dupe your data in Informatica.
3. Also try changing the "Stop on error" flag to 0 and the "Error limit" flag in the target to -1.
Please share your results with us.
I have an ASP.NET GridView that handles insert operations into a SQL database. Records are only permitted to be inserted if they meet a uniqueness criterion, and this constraint is being enforced using unique indexes in SQL Server. If the user attempts to insert a record that already exists, an error message is displayed.
I'm wondering what the best practice is for implementing this.
1. Check if the record exists SQL-side, using IF EXISTS and locking hints (UPDLOCK, HOLDLOCK, etc.). Return an error code to ASP.NET depending on whether the record was inserted.
2. Perform the INSERT operation inside a SQL Server TRY/CATCH block, relying on the unique index to prevent the insert from occurring if the record exists. Return an error code depending on whether an exception was thrown.
3. Perform the INSERT operation SQL-side, but without SQL TRY/CATCH. Handle the PK violation exception inside ASP.NET instead.
Normally I'd consider using exceptions to handle valid operations to be bad practice - i.e. software should not throw exceptions unless something is broken. However if the unique index on the table in SQL is going to implement the desired constraint, why bother performing a manual check for existence of the record?
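For concreteness, option 2 might look something like this in T-SQL (a sketch; the table name and the result-code convention are assumptions):

BEGIN TRY
    INSERT INTO dbo.Widgets (Name) VALUES (@Name);
    SELECT 0 AS ResultCode;           -- inserted
END TRY
BEGIN CATCH
    -- 2601 = duplicate key in a unique index, 2627 = unique constraint violation
    IF ERROR_NUMBER() IN (2601, 2627)
        SELECT 1 AS ResultCode;       -- record already exists
    ELSE
        THROW;                        -- anything else is a genuine error
END CATCH;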
I would make a separate call to check if the record already exists. If yes, show a message to the user; if no, make the insert. The reason I would do it this way is that I prefer keeping all the business logic in the application.
If you insist on making just one stored proc call:
I would check before I insert. I would also add an output parameter to the stored proc that returns a message if the insert was unsuccessful. In my application, if I see a message in the output parameter, I display it to the user.
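A sketch of that approach (object names are hypothetical; the locking hints and the explicit transaction keep the check and the insert atomic):

CREATE PROCEDURE dbo.InsertWidget
    @Name    NVARCHAR(100),
    @Message NVARCHAR(200) OUTPUT
AS
BEGIN
    SET NOCOUNT ON;
    BEGIN TRANSACTION;
    IF EXISTS (SELECT 1 FROM dbo.Widgets WITH (UPDLOCK, HOLDLOCK)
               WHERE Name = @Name)
        SET @Message = N'Record already exists.';
    ELSE
    BEGIN
        INSERT INTO dbo.Widgets (Name) VALUES (@Name);
        SET @Message = NULL;          -- success, nothing to report
    END
    COMMIT TRANSACTION;
END;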
I want to prevent my PL/SQL code from being executed repeatedly. That is, I have written a PL/SQL procedure with three input parameters, viz. Month, Year and a Flag. I have executed the procedure with the following values for the parameters:
Month: March
Year : 2011
Flag: Y
Now, if I try to execute the procedure with the same values for the parameters as above, I want to write some code in the PL/SQL to prevent the unwanted second execution. Can anyone help? I hope the question is not ambiguous.
You can use the function result cache: http://www.oracle-developer.net/display.php?id=504. That way, Oracle can do this for you.
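As a sketch of that suggestion (names are hypothetical): a function declared RESULT_CACHE returns its cached result instead of re-executing the body when called again with the same parameters. Note the cache can be aged out or invalidated, so this is not a hard guarantee against re-execution:

CREATE OR REPLACE FUNCTION load_month (
  p_month IN VARCHAR2,
  p_year  IN NUMBER,
  p_flag  IN CHAR
) RETURN NUMBER
  RESULT_CACHE
AS
BEGIN
  -- ... the actual processing would go here ...
  RETURN 0;
END;
/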
I would create another table that would store the 3 parameters of each request. When your procedure is called, it would first check the "parameter request" table to see if the calling parameters have been used before. If found, exit the procedure. If not found, save the parameters and execute the rest of the procedure.
You're going to need to keep "state" about the last call somewhere. I would recommend creating a table with a datetime column.
When your procedure is called, update this table. Then, the next time your procedure is called, check this table to see when the procedure was last run and proceed accordingly.
Why not set up a table to track what arguments you've already executed it with?
In your procedure, first check that table to see if similar parameters have already been processed. If so, exit (with or without an error).
If not, insert them and do the processing necessary.
Depending on how tight the requirements are, you'll need to get an exclusive lock on that table to prevent concurrent execution.
A nice plus would be an extra column with "in progress"/"done"/"error" status so that you can check if things are going on properly. (Maybe a timestamp too if that's important/interesting.)
This setup allows you to easily clear some of the executions (by deleting some rows) if you find things need to be re-done for whatever reason.
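A minimal sketch of that setup (all names are hypothetical; here a unique constraint, rather than an explicit table lock, makes the check-and-insert atomic under concurrent calls):

CREATE TABLE proc_runs (
  run_month VARCHAR2(10),
  run_year  NUMBER(4),
  run_flag  CHAR(1),
  status    VARCHAR2(12) DEFAULT 'IN PROGRESS',
  run_time  DATE         DEFAULT SYSDATE,
  CONSTRAINT proc_runs_uq UNIQUE (run_month, run_year, run_flag)
);

CREATE OR REPLACE PROCEDURE monthly_load (
  p_month IN VARCHAR2,
  p_year  IN NUMBER,
  p_flag  IN CHAR
) AS
BEGIN
  -- the unique constraint rejects a second call with the same parameters
  INSERT INTO proc_runs (run_month, run_year, run_flag)
  VALUES (p_month, p_year, p_flag);

  -- ... the real processing goes here ...

  UPDATE proc_runs
     SET status = 'DONE'
   WHERE run_month = p_month
     AND run_year  = p_year
     AND run_flag  = p_flag;
  COMMIT;
EXCEPTION
  WHEN DUP_VAL_ON_INDEX THEN
    RAISE_APPLICATION_ERROR(-20001,
      'Procedure already executed for these parameters.');
END;
/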
Make an insert at the beginning of the procedure, and do a SELECT FOR UPDATE to lock the table so no one else can process any data; if everything goes OK with the procedure, commit and release the table 😀
I would like to find out whether it is possible to determine which package, or which procedure in a package, is updating a table.
A certain project was handed over without proper documentation (the person who handed it over has since left), and data that we know we have updated always goes back to some strange source value.
We are guessing that this could be a database job or scheduler that is running the update command without our knowledge. I am hoping there is a way to find out where the code that is updating the table is being called from, perhaps by putting a trigger on the table we are monitoring.
Any ideas?
Thanks.
UPDATE: I poked around and found out how to trace a statement back to its owning PL/SQL object.
In combination with what Tony mentioned, you can create a logging table and a trigger that looks like this:
CREATE TABLE statement_tracker
( SID NUMBER
, serial# NUMBER
, date_run DATE
, program VARCHAR2(48) null
, module VARCHAR2(48) null
, machine VARCHAR2(64) null
, osuser VARCHAR2(30) null
, sql_text CLOB null
, program_id number
);
CREATE OR REPLACE TRIGGER smb_t_t
AFTER UPDATE
ON smb_test
BEGIN
INSERT
INTO statement_tracker
SELECT ss.SID
, ss.serial#
, sysdate
, ss.program
, ss.module
, ss.machine
, ss.osuser
, sq.sql_fulltext
, sq.program_id
FROM v$session ss
, v$sql sq
WHERE ss.sql_address = sq.address
AND ss.SID = USERENV('sid');
END;
/
In order for the trigger above to compile, you'll need to grant the owner of the trigger these permissions, when logged in as the SYS user:
grant select on V_$SESSION to <user>;
grant select on V_$SQL to <user>;
You will likely want to protect the insert statement in the trigger with some condition that only makes it log when the change you're interested in is occurring - on my test server this statement runs rather slowly (1 second), so I wouldn't want to be logging all these updates. Of course, in that case, you'd need to change the trigger to be a row-level one so that you could inspect the :new or :old values. If you are really concerned about the overhead of the select, you can change it to not join against v$sql, and instead just save the SQL_ADDRESS column, then schedule a job with DBMS_JOB to go off and update the sql_text column with a second update statement, thereby offloading the update into another session and not blocking your original update.
Unfortunately, this will only tell you half the story. The statement you're going to see logged is going to be the most proximal statement - in this case, an update - even if the original statement executed by the process that initiated it is a stored procedure. This is where the program_id column comes in. If the update statement is part of a procedure or trigger, program_id will point to the object_id of the code in question - you can resolve it thusly:
SELECT * FROM all_objects WHERE object_id = <program_id>;
In the case where the update statement was executed directly from the client, I don't know what program_id represents, but you wouldn't need it - you'd have the name of the executable in the "program" column of statement_tracker. If the update was executed from an anonymous PL/SQL block, I'm not sure how to track it back - you'll need to experiment further.
It may be, though, that the osuser/machine/program/module information may be enough to get you pointed in the right direction.
If it is a scheduled database job then you can find out what scheduled database jobs exist and look into what they do. Other things you can do are:
Look at the dependency views, e.g. ALL_DEPENDENCIES, to see what packages/triggers etc. use that table. Depending on the size of your system, that may return a lot of objects to trawl through.
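For example (MYTABLE is a placeholder; note this only finds static references, not dynamic SQL):

select owner, name, type
from all_dependencies
where referenced_name = 'MYTABLE'
and referenced_type = 'TABLE';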
Search all the database source code for references to the table like this:
select distinct type, name
from all_source
where lower(text) like lower('%mytable%');
Again that may return a lot of objects, and of course there will be some "false positives" where the search string appears but isn't actually a reference to that table. You could even try something more specific like:
select distinct type, name
from all_source
where lower(text) like lower('%insert into mytable%');
but of course that would miss cases where the command was formatted differently.
Additionally, could there be SQL scripts being run through "cron" jobs on the server?
Just write an "after update" trigger and, in this trigger, log the results of "DBMS_UTILITY.FORMAT_CALL_STACK" in a dedicated table.
The purpose of this function is exactly to give you the complete call stack of all the stored procedures and triggers that were fired to reach your code.
I am writing from a mobile app, so I can't give you more detailed examples, but if you Google for it you'll find many of them.
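A minimal sketch of that idea (the table, trigger, and monitored table names are hypothetical):

CREATE TABLE call_stack_log
( logged_at  DATE
, call_stack VARCHAR2(4000)
);

CREATE OR REPLACE TRIGGER mytable_log_stack
AFTER UPDATE
ON mytable
BEGIN
  -- FORMAT_CALL_STACK returns the chain of PL/SQL units (packages,
  -- procedures, triggers) that led to this point
  INSERT INTO call_stack_log (logged_at, call_stack)
  VALUES (SYSDATE, DBMS_UTILITY.FORMAT_CALL_STACK);
END;
/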
A quick and dirty option if you're working locally, and are only interested in the first thing that's altering the data, is to throw an error in the trigger instead of logging. That way, you get the usual stack trace and it's a lot less typing and you don't need to create a new table:
CREATE OR REPLACE TRIGGER whodunnit  -- trigger name is arbitrary
AFTER UPDATE ON table_of_interest
BEGIN
RAISE_APPLICATION_ERROR(-20001, 'something changed it');
END;
/