How to save a table when it is created in Oracle 11g?

I ran CREATE TABLE and then INSERT statements, but the changes were not saved. The next time I log in, the table is there but the inserted data is not: "no rows selected" is shown on screen.
Please help me save the changes in Oracle 11g.

Solution
First, set your Oracle Database client application to always implicitly commit or roll back transactions before termination (the behaviour recommended by Oracle). Then learn about database transactions, and either put your Oracle Database client application in autocommit mode or explicitly commit your D.M.L. transactions (with the COMMIT statement), because you should not rely on the commit behaviour of a client application.
Summary on database transactions
Database transactions are a fundamental concept of database management systems. The essential property of a transaction is its atomicity: a transaction is either
committed (database changes are all made permanent, that is stored on disk); or
rolled back (database changes are all discarded).
Lexically, a transaction consists of one or more S.Q.L. statements of these kinds:
data definition language (D.D.L.) statements (CREATE, DROP, ALTER, GRANT, REVOKE, …) that change the structure of the database;
data manipulation language (D.M.L.) statements (INSERT, DELETE, UPDATE, SELECT, …) that retrieve or change the contents of the database.
Database client applications can operate in two different modes:
autocommit mode, where implicit transactions are implicitly started before D.D.L. and D.M.L. statements and, after those statements, implicitly committed on success or rolled back on failure; explicit transactions are explicitly started at BEGIN statements, committed at COMMIT statements, and rolled back at ROLLBACK statements;
non-autocommit mode, where mixed transactions are implicitly started before D.D.L. or D.M.L. statements, explicitly committed at COMMIT statements, and rolled back at ROLLBACK statements.
In both modes, transactions are implicitly committed before the database client application terminates normally and implicitly rolled back before the database client application terminates abnormally. For database management systems that do not support rolling back D.D.L. statements (MariaDB, Oracle Database, SQL Server), transactions are also implicitly committed after D.D.L. statements.
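The difference between the two modes can be sketched with Python's sqlite3 module as a stand-in client (an assumption for illustration; the behaviour described above is database-agnostic, and the mode names below map onto sqlite3's `isolation_level` setting):

```python
import sqlite3

# Non-autocommit style (sqlite3's default): the driver implicitly opens
# a transaction before D.M.L.; nothing is permanent until COMMIT.
manual = sqlite3.connect(":memory:")
manual.execute("CREATE TABLE t (x INTEGER)")
manual.execute("INSERT INTO t VALUES (1)")
open_before_commit = manual.in_transaction   # implicit transaction is open
manual.commit()                              # explicit COMMIT
open_after_commit = manual.in_transaction    # no transaction any more

# Autocommit style: every statement is committed on success.
auto = sqlite3.connect(":memory:", isolation_level=None)
auto.execute("CREATE TABLE t (x INTEGER)")
auto.execute("INSERT INTO t VALUES (1)")
open_in_autocommit = auto.in_transaction     # the INSERT was already committed
```

Here `in_transaction` shows whether the driver has an implicit transaction open at each point.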
Explanation
When you issued your CREATE statement, since it is a D.D.L. statement and Oracle Database does not support rolling back D.D.L. statements, the transaction that created the table was implicitly committed. That is why the table structure was still there after you started a new session. But when you issued your INSERT statement, since it is a D.M.L. statement and you were not in autocommit mode, the transaction that populated the table was not implicitly committed. And since your Oracle Database client application was not set to implicitly commit transactions before termination, the table contents were gone after you started a new session.
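A minimal sketch of this outcome, using SQLite through Python rather than Oracle (an assumption for illustration; Oracle differs in that D.D.L. is always implicitly committed, so the explicit `commit()` after the CREATE below mirrors what Oracle does automatically):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")

conn = sqlite3.connect(path)
conn.execute("CREATE TABLE t (x INTEGER)")
conn.commit()                              # table structure made permanent
conn.execute("INSERT INTO t VALUES (1)")   # D.M.L., never committed
conn.close()                               # pending changes are discarded

# "Start a new session": the table survived, the uncommitted insert did not.
conn = sqlite3.connect(path)
rows = conn.execute("SELECT * FROM t").fetchall()
conn.close()
```

After reopening, `rows` is empty, which is exactly the "no rows selected" the question describes.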

Related

BizTalk send/receive - does it wait for completion of a called stored procedure?

I've setup a BizTalk design that chains a couple of send/receives to a SQL stored procedure (which inserts the data to relevant tables).
It's organised in a specific sequence, so data goes into Table A, and the tables that follow check at the stored-procedure level that the data exists in Table A (a simple IF EXISTS on Table A).
I've noticed, though, that the flow isn't consistent further down the chain, almost as if SQL is executing the stored procedure to insert/update the record more slowly than the BizTalk transaction is occurring.
I've made sure that my Biz design is send/receive, as I assumed the transaction wouldn't progress until Biz received a response from the stored procedure (which would indicate SQL has finished inserting the required data).
The below example highlights where the process writes data to the Person table, which is later called upon by the Student Programme/Student Module. Occasionally, it will dehydrate on the Programme or Module stored procedure (from what I can tell, because those stored procedures are looking to see whether a Person record created at the start of the flow exists).
Can anyone please confirm whether:
Send/Receive will wait for a SQL stored procedure to finish executing before progressing the BizTalk transaction further through the orchestration?
BizTalk Orchestrations have some smarts built into them: if the next shapes have no dependency on the response, then no, it might not wait for the response before executing the next shapes. What you can try is enabling Delivery Notification (set it to Transmitted) in the Logical Send Port settings.

SQLite parallel read/write operations

SQLite throws the following exception when using one transactional read/write operation and one non-transactional read/write operation: "Database is locked".
The SQLite default isolation level is Serialized and should, as far as I understood, lock all pages affected by an insert or update operation. (Hence, a select on those data sets would still be possible.)
The exception occurs in the following scenario: a connection with a transaction (serialized) creates temp tables and fills them with data. A second connection (no transaction applied) creates a table. Already at this point the exception gets thrown. However, the non-transactional connection doesn't read from or write to those temp tables and shouldn't interfere at all.
After some online searching, I found the WAL configuration option, introduced with version 3.7. Even though the journal_mode has been changed to WAL, the exception still occurs.
Why does the exception occur? The connections shouldn't acquire locks over the same pages. Furthermore, is there any solution for reading/writing different tables within different transactions?
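For reference, the concurrency WAL is supposed to provide can be demonstrated with a small Python sqlite3 sketch (an assumption for illustration; the same PRAGMA applies from any SQLite binding). In WAL mode, a reader no longer collides with an open write transaction; it simply sees the last committed snapshot:

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "wal.db")

conn = sqlite3.connect(path)
mode = conn.execute("PRAGMA journal_mode=WAL").fetchone()[0]  # should be 'wal'
conn.execute("CREATE TABLE t (x INTEGER)")
conn.commit()

writer = sqlite3.connect(path)
writer.execute("INSERT INTO t VALUES (1)")   # open, uncommitted write transaction

# In WAL mode this read succeeds instead of raising "database is locked",
# and it sees the snapshot from before the uncommitted insert.
reader = sqlite3.connect(path)
seen = reader.execute("SELECT COUNT(*) FROM t").fetchone()[0]

writer.commit()
seen_after = reader.execute("SELECT COUNT(*) FROM t").fetchone()[0]
```

If the "database is locked" error persists after switching to WAL, it is worth confirming that `PRAGMA journal_mode=WAL` actually returned `wal` on the connection in question, since the pragma can silently stay in the old mode (for example on in-memory databases).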

Locking transactions (SQL Server + EF5)

I am trying to debug a performance issue in an ASP.NET application using .NET 4.5, EF5 (with a 2nd level cache and lazy loaded navigation properties) and SQL Server 2014. We are experiencing a number of wait locks in the SQL server. When I look at the locking transactions, they contain a very quick UPDATE, and then a very large SELECT. The UPDATE is ostensibly a necessary one, but I am confused as to why the SELECT is being run in the same transaction (and why anything is being selected at all). The fundamental issue is that the table referenced in the UPDATE statement is locked for the duration of the SELECT statement.
We use repository pattern for getting data from the db, and DbContext.SaveChanges() for committing changes. I cannot figure out how it is possible that EF produces a transaction where there is both a write and a read, and I am not getting relevant results when I try to search Google.
We have a number of interfaces into the system, and a couple of console applications working on the database as well, but they all go through the same setup/versions of .NET and EF.
I figure that it must be through SaveChanges, since this is (AFAIK) the only time that things are written to the database.
Does anyone here have a hint as to how these locking transactions might be produced?
The fundamental issue is that the table referenced in the UPDATE
statement is locked for the duration of the SELECT statement.
The answer is in your question:
the SELECT is being run in the same transaction
An X (exclusive) lock is always held until the end of the transaction, i.e. until it commits or rolls back. So if a long SELECT follows your quick UPDATE, everything that the UPDATE locked in your table remains locked until your SELECT ends.
You can separate your UPDATE and SELECT into different transactions if your business rules permit, add an appropriate index on the updated table so that only some rows are locked instead of the whole table, or optimize your SELECT to execute faster.

Calling SP from asp.net gives error whereas it works fine from SSMS

I have two tables:
Property
Property_Localize
and a SP to delete record from Property:
(Delete_SP)
I have an INSTEAD OF DELETE trigger on Property: instead of deleting records from Property directly, it deletes the related records from the Property_Localize table first and then deletes the records from the Property table.
The Property table's primary key is also used as a foreign key in some other tables.
Now we are trying to delete records through Delete_SP.
If the Property table has an associated record in any other table, an exception is thrown; this exception is handled in a TRY..CATCH block, so in any case the SP returns a default value in its output parameter and executes successfully.
This works absolutely fine if we run this SP from back end (SQL Server Management Studio).
But when we execute this SP from asp.net it gives following error:
"Uncommittable transaction is detected at the end of the batch. The
transaction is rolled back."
Although it gives proper output parameter.
We also tried adding a TRY..CATCH and an explicit transaction block in the trigger, but it still gives the same error.
Any help would be appreciated.
I forgot to mention, I am using CodeSmith's generated database layer.
SQL Profiler
This will enable you to track down all phases of your queries during the request from ASP.Net to the database.
Your stored procedure should look like this (note the @ parameter prefixes, T-SQL comments, and the XACT_STATE check before rolling back, which avoids the uncommittable-transaction error):
Create Proc ProcedureName
    @UserName Varchar(50),
    @Password Varchar(50),
    @Email Varchar(50)
As
SET NOCOUNT ON
SET XACT_ABORT ON
Begin Try
    Begin Tran
    -- Your code
    Commit Tran
End Try
Begin Catch
    If Xact_State() <> 0
        Rollback Tran
End Catch
SSMS can have its own configuration settings that it uses when executing the sproc, and they are not necessarily the same as the settings used when the stored procedure is called from Ado.Net.
By default the connection from Ado.Net uses the default server settings, and if the two differ, that could account for the difference in behaviour.
Maybe this will help:
http://msdn.microsoft.com/en-us/library/ms175976.aspx
Uncommittable Transactions and XACT_STATE
If an error generated in a TRY block causes the state of the current transaction to be invalidated, the transaction is classified as an uncommittable transaction. An error that ordinarily ends a transaction outside a TRY block causes a transaction to enter an uncommittable state when the error occurs inside a TRY block. An uncommittable transaction can only perform read operations or a ROLLBACK TRANSACTION. The transaction cannot execute any Transact-SQL statements that would generate a write operation or a COMMIT TRANSACTION. The XACT_STATE function returns a value of -1 if a transaction has been classified as an uncommittable transaction. When a batch finishes, the Database Engine rolls back any active uncommittable transactions. If no error message was sent when the transaction entered an uncommittable state, when the batch finishes, an error message will be sent to the client application. This indicates that an uncommittable transaction was detected and rolled back.
For more information about uncommittable transactions and the XACT_STATE function, see XACT_STATE (Transact-SQL).

SQLite multiple insert issue

I'm working with SQLite for my Android application, and after some research I figured out how to do multiple inserts in a single statement using UNION.
But this is quite inefficient. From what I see at http://www.sqlite.org/speed.html, and in a lot of other forums, I can speed up the process using BEGIN - COMMIT statements. But when I use them I get this error:
Cannot start a transaction within a transaction.
Why? What is the most efficient way of doing multiple inserts?
Which JDBC driver are you using? Is there only one that's built into the Android distribution?
The problem is most likely with java.sql.Connection#setAutoCommit(). If the connection already has auto-commit enabled—which you can check with Connection#getAutoCommit()—then your JDBC driver is already issuing the SQL commands to start a transaction before your manual attempt to do so, which renders your manual command redundant and invalid.
If you're looking to control transaction extent, you need to disable auto-commit mode for the Connection by calling
connection.setAutoCommit(false);
and then later, after your individual DML statements have all been issued, either commit or roll back the active transaction via Connection#commit() or Connection#rollback().
I have noticed that some JDBC drivers have a hard time coordinating auto-commit mode with PreparedStatement's batch-related methods. In particular, the Xerial JDBC driver and the Zentus driver on which it's based both fight against a user controlling the auto-commit mode with batch statement execution.
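Both the error and the fix can be sketched with Python's sqlite3 module (an assumption for illustration; the Android API differs, but the driver-level semantics are the same): the driver has already begun an implicit transaction when the manual BEGIN arrives.

```python
import sqlite3

# Reproducing the error: the driver implicitly begins a transaction
# before the first INSERT, so a manual BEGIN then fails.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x INTEGER)")
conn.execute("INSERT INTO t VALUES (0)")      # implicit BEGIN happens here
try:
    conn.execute("BEGIN")
    error = None
except sqlite3.OperationalError as e:
    error = str(e)    # mentions "transaction within a transaction"
conn.rollback()

# The fix: take manual control (the analogue of setAutoCommit(false) off /
# isolation_level=None here), batch the inserts in one transaction, commit once.
conn2 = sqlite3.connect(":memory:", isolation_level=None)  # no implicit BEGIN
conn2.execute("CREATE TABLE t (x INTEGER)")
conn2.execute("BEGIN")
for i in range(100):
    conn2.execute("INSERT INTO t VALUES (?)", (i,))
conn2.execute("COMMIT")
total = conn2.execute("SELECT COUNT(*) FROM t").fetchone()[0]
```

Batching many inserts into one transaction is the speed-up the question is after: one journal sync per COMMIT instead of one per INSERT.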

Resources