I have a UFT test case for a desktop application. Under Record and Run Settings in UFT, I set the application to be launched during test execution. The application does launch, but it launches twice in each test run.
A few suggestions:
Check the test's run iteration settings.
Use a For loop to iterate through the rows in the DataTable.
Get the row count and set the current row on each pass:
rowcount = DataTable.GetSheet("Global").GetRowCount
For i = 1 To rowcount
    DataTable.SetCurrentRow(i)  ' test steps for the current row go here
Next
In my Oracle APEX 19.3 application I have a SQL statement that needs to be used on several pages and that changes slightly based on the user that is logged in. So that I do not need to duplicate this code over and over on each page, I generate this statement as an application item called QUERY_BASED_ON_USER.
An application computation then statically sets it to SELECT j.* FROM table(pkg_jobstatus.report()) j WHERE j.id IN (:USERIDS)
(USERIDS is a separate application item)
I wish to use the application item QUERY_BASED_ON_USER as the SQL statement for a table. When I set the data source to PL/SQL and use the following code,
BEGIN
return :QUERY_BASED_ON_USER;
END;
I get this error: PL/SQL function body did not return a value.
I tried debugging this by setting a static page region to &QUERY_BASED_ON_USER. and it outputs the query correctly.
My assumption is that the code editor does not evaluate the application computation and thus it returns an empty string, which it then refuses to validate or save. But I do not know how to validate this or how to work around this.
How can I use the application item as the sql statement?
You need to set "Use Generic Column Names" to true and specify the number of columns your query will return.
Then the query is not parsed until runtime, when the item value is available.
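For illustration, a rough sketch of that region setup (the function body is the one from the question; the attribute labels are approximate and the column count is a placeholder you would adjust to your query):
-- Source Type: PL/SQL Function Body returning SQL Query
-- Use Generic Column Names: Yes, with a column count large enough for the query
BEGIN
  RETURN :QUERY_BASED_ON_USER;
END;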
A view returns 6 million records.
Earlier I used to export with SQL Developer (right-click and export; it takes more than 2 hours), but my senior said that if I do it like this the query will get executed every 500 rows, so we are getting warnings from the DBA about long-running queries.
While searching the internet I found that spooling works better for exporting. So my doubt is: with spooling, does the query execute once and then start exporting, or does it keep executing until the last row is exported?
It is executed once: Oracle opens a cursor, which is a consistent image of your query at the time of execution, and then starts fetching your data into your spool.
If you make any update after the execution starts, it will not appear in your spool.
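For example, a minimal SQL*Plus sketch (view, column, and file names are placeholders):
SET HEADING OFF
SET FEEDBACK OFF
SET PAGESIZE 0
SET TRIMSPOOL ON
-- the query is opened once; ARRAYSIZE only controls how many rows are fetched per round trip
SET ARRAYSIZE 5000
SPOOL /tmp/my_view_export.csv
SELECT col1 || ',' || col2 FROM my_view;
SPOOL OFF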
I’m trying to cause a ‘SELECT’ query to fail if the record it is trying to read is locked.
To simulate it I have added a trigger on UPDATE that sleeps for 20 seconds and then in one thread (Java application) I’m updating a record (oid=53) and in another thread I’m performing the following query:
“SET STATEMENT max_statement_time=1 FOR SELECT * FROM Jobs j WHERE j.oid =53”.
(Note: Since my mariadb server version is 10.2 I cannot use the “SELECT … NOWAIT” option and must use “SET STATEMENT max_statement_time=1 FOR ….” instead).
I would expect the SELECT to fail since the record is in the middle of an UPDATE and should be read/write locked, but the SELECT succeeds.
Only if I add ‘for update’ to the SELECT does the query fail. (But this is not a good option for me.)
I checked the INNODB_LOCKS table during this time and it was empty.
In the INNODB_TRX table I saw the transaction with isolation level REPEATABLE READ, but I don’t know if it is relevant here.
Any thoughts on how I can make the SELECT fail without making it 'for update'?
Normally, consistent (and dirty) reads are non-locking; they just read some sort of snapshot, depending on what your transaction isolation level is. If you want to make the read wait for a concurrent transaction to finish, you need to set the isolation level to SERIALIZABLE and turn off autocommit in the connection that performs the read. Something like
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
SET autocommit = 0;
SET STATEMENT max_statement_time=1 FOR ...
should do it.
Relevant page in MariaDB KB
Side note: my personal preference would be to use innodb_lock_wait_timeout=1 instead of max_statement_time=1. Both will make the statement fail, but innodb_lock_wait_timeout will cause an error code more suitable for the situation.
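For example, a sketch of that variant using the query from the question (assuming the concurrent UPDATE is still holding the row lock when the SELECT runs):
SET SESSION innodb_lock_wait_timeout = 1;
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
SET autocommit = 0;
SELECT * FROM Jobs j WHERE j.oid = 53;  -- now fails with a lock wait timeout instead of a statement timeout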
I have an MLOAD job that inserts data from an Oracle database into a Teradata database. One of the things it does is drop the destination table and recreate it. Our production website populates a dropdown list based on what's in the destination table.
If the MLOAD script is not run as a single transaction, then it's possible that the dropdown list could fail to populate properly if the binding occurs during the MLOAD job. If it is transactional, however, it would be a seamless process because the changes would not show until the transaction is committed.
I checked the dbc.DBQLogTbl and dbc.DBQLQryLogsql views after running the MLOAD job and it appears there are several transactions occurring within the job, so it would seem that the entire job is not done in a single transaction. However, I wanted to verify that this is indeed the case before I make assumptions.
A transaction in Teradata cannot include multiple DDL statements; each DDL statement must be committed separately.
An MLoad is logically treated as a single transaction even if you see multiple transactions in DBQL; those are steps to prepare and clean up.
When your application tries to select from the target table everything will be ok (unless it's doing a dirty read using LOCKING ROW FOR ACCESS).
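For reference, such an access-lock (dirty) read looks roughly like this (the table name is a placeholder):
LOCKING ROW FOR ACCESS
SELECT * FROM target_table;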
Btw, there might be another error message, "table doesn't exist", when the application tries to select while the table is dropped. Why do you drop/recreate the table instead of doing a simple DELETE?
Another solution would be to load a copy of the table and use view switching:
mload tab2;
replace view v as select * from tab2;
delete from tab1;
The next load will do:
mload tab1;
replace view v as select * from tab1;
delete from tab2;
And so on. Of course your load job needs to implement the switching logic.
I'm running an ASP.NET 2.0 web application. Suddenly this morning, ExecuteNonQuery started returning -1, even though the commands in the SP that ExecuteNonQuery is running are executing (I see items inserted and updated in the DB), and there is no exception thrown.
This happens only when I am connected to our production DB. When I'm connected to our development DB, it returns the correct number of rows affected.
Also, interestingly enough, ExecuteDataSet doesn't have problems - it returns DataSets beautifully.
So it doesn't seem like a connection problem. But then, what else could it be?
You can also check the procedure to see if this line of code is in there:
SET NOCOUNT ON
This will cause ExecuteNonQuery to always return -1.
-1 means "no rows affected".
A SELECT statement will also return -1.
Check the effect of the SP in both the dev and live environments.
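As a quick illustration (procedure and table names are hypothetical), a procedure like this makes ExecuteNonQuery return -1 even though a row is changed:
CREATE PROCEDURE dbo.UpdateJobStatus AS
BEGIN
    SET NOCOUNT ON;   -- suppresses the "rows affected" count sent back to the client
    UPDATE dbo.Jobs SET Status = 'Done' WHERE JobId = 1;
END;
Removing SET NOCOUNT ON (or issuing SET NOCOUNT OFF before the UPDATE) makes ExecuteNonQuery report the UPDATE's row count again.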
Taken from the SqlCommand.ExecuteNonQuery Method documentation:
For UPDATE, INSERT, and DELETE statements, the return value is the number of rows affected by the command. When a trigger exists on a table being inserted or updated, the return value includes the number of rows affected by both the insert or update operation and the number of rows affected by the trigger or triggers. For all other types of statements, the return value is -1. If a rollback occurs, the return value is also -1.
I encountered almost the same problem (in PostgreSQL, though) with NpgsqlDataAdapter; the Update method always returns 1.
Perhaps your connection string for your development DB is different from the one for your production DB. Or, if there's really no difference between the connection strings, perhaps your production DB's service pack is different from your development DB's.
ExecuteNonQuery returns -1 for all types of stored procedures, as per MSDN.
It returns the number of updated records only in the case of direct statements.