I am using the Enterprise Library data access block and am not able to figure this out.
I am using an Oracle procedure to insert records into the database from my ASP.NET application, written in VB.NET.
Though it inserts records as it should, when I try to access the returned dataset I am not able to see the details of the just-inserted record.
My Oracle procedure has an output cursor which should return several column values from the just-inserted record.
Please help.
This is a bit of a workaround to what you're currently doing, but if you're still having issues with this, I'd suggest running ExecuteNonQuery for the insert and then ExecuteDataTable with the data you supplied to SELECT the row back.
Keep in mind, however, that this method may perform a bit slower (a DB call to insert, followed by a second DB call and return to select the data), but you will not need to worry about your cursor anymore (I'm not sure what kind of performance gain, if any, dropping it might have).
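For illustration, the SQL side of that pattern might look like this; the table and column names here are hypothetical, and the bind values are the same ones your application already supplies:

-- round trip 1: the insert, run via ExecuteNonQuery
INSERT INTO employees (emp_no, first_name, last_name, hired_on)
VALUES (:emp_no, :first_name, :last_name, SYSDATE);

-- round trip 2: read the row back, run via ExecuteDataTable,
-- keyed on the same values you just supplied
SELECT emp_no, first_name, last_name, hired_on
FROM employees
WHERE emp_no = :emp_no;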
I'm running a query to create a table using the Spatialite GUI on my Windows 7 machine. It has been going for days, and I would like to cancel it and try something different. Is there a way for me to view the results of the query so far? The .sqlite file has tripled in size and I'm curious about what is happening.
In SQLite, transactions are atomic and isolated. (And if you do not use explicit transactions, every command gets an automatic transaction.)
So there is no easy way to see partial results; the database goes to great efforts to ensure that the transaction either succeeds completely, or is rolled back.
If possible, try the command with a smaller data set, or write the query so that only a part of the data is processed.
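For example, one way to rehearse an expensive CREATE TABLE ... AS SELECT on a small slice of the data first (the table names here are made up), so that a failed or cancelled run costs minutes instead of days:

-- copy a small sample of the input into its own table
CREATE TABLE source_sample AS
SELECT * FROM source_table LIMIT 10000;

Then point your original query at source_sample instead of source_table and check that the output looks right before running it against the full table.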
I have a handful of records, 5-10, that I need to take from the user and run a SQL merge statement against. I can think of three ways of accomplishing this.
.NET loop processing one record at a time - I'm wondering how this would perform compared to the other options. I would think it is pretty good, given connection pooling?
SQL data table type (a table-valued parameter) - I have seen these used elsewhere in the project, but as I learned first-hand, they are a pain when the table definition needs updating, since you have to drop the entire object and recreate it.
XML variable - I have used this in the past. I like it because the definition of the object is flexible to change, and the .NET side is simple with XmlSerializer. But I am sure there is a performance hit in calling XmlSerializer, and then again on the SQL side in using the .nodes() function.
Does anyone know, from personal experience or some reference such as a white paper, which method is the most efficient for inserting/updating records in a database from a .NET application?
For 5-10 items you can use a "classic" INSERT with multiple rows:
INSERT INTO MyTable
(ColumnA, ColumnB, ColumnC)
VALUES
(@ColumnA_0, @ColumnB_0, @ColumnC_0),
(@ColumnA_1, @ColumnB_1, @ColumnC_1),
(@ColumnA_2, @ColumnB_2, @ColumnC_2)
This is MUCH faster than XML or a DataTable, and it is faster than isolated inserts in a loop.
The limit on the number of rows you can insert this way is 1000. If you want more, you need to execute multiple statements.
While generating the response schema for a typed stored procedure, I found that the stored procedure does some database updates before returning the final result set. The response schema generated by Visual Studio contains quite a bit of garbage.
Is there a way to force it to generate a cleaner schema?
The StoredProcedureResultset4 is the only one that matters.
Here are the same answers I gave on MSDN. Unfortunately, the marked answer will not work for you, since there is no way, or it's really, really hard, to capture and suppress result sets from a called stored procedure.
The cause is related to the Stored Procedure code.
The Wizard will only generate schema types for elements that are returned in the response from SQL Server. Meaning, the stored procedure is emitting results for those updates, so you're getting metadata for them.
The way to solve this is to modify the SP code so that it does not emit a result for any operation that shouldn't produce one. Basically, if you see it in the results window in SQL Server Management Studio, you will get schema for it.
status and message are presumably the result of another SP, so one way to suppress them is to insert the result into a temp table, thus redirecting it from the output stream.
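As a sketch, assuming a hypothetical inner procedure and columns, the INSERT ... EXEC pattern captures the inner procedure's result set so it never reaches the client:

-- capture the inner proc's rows so they are not emitted as a result set
CREATE TABLE #suppressed (status INT, message NVARCHAR(4000));
INSERT INTO #suppressed (status, message)
EXEC dbo.InnerProc @SomeParam = 1;
-- ...then continue and emit only the result set you actually want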
However, if StoredProcedureResultset4 is all that matters, that's all you have to use. There's nothing wrong with just ignoring all the other results provided they always appear in the same order.
Just to be clear, you still have to write the wrapper that suppresses the unwanted results; simply invoking the original SP from a new SP will not change the output, and you'll still get the extra result sets.
In fact, a wrapper would be the harder implementation, since you'd have to capture and examine all the result sets, which I don't think is possible.
The more correct way to do this in BizTalk would be a Port Map that strips the unwanted content.
When I open up TOAD and do a select * from a table, the results (the first 500 rows) come back almost instantly, but the explain plan shows a full table scan and the table is huge.
How come the results are so quick?
In general, Oracle does not need to materialize the entire result set before it starts returning the data (there are, of course, cases where Oracle has to materialize the result set in order to sort it before it can start returning data). Assuming that your query doesn't require the entire result set to be materialized, Oracle will start returning the data to the client process whether that client process is TOAD or SQL*Plus or a JDBC application you wrote. When the client requests more data, Oracle will continue executing the query and return the next page of results. This allows TOAD to return the first 500 rows relatively quickly even if it would ultimately take many hours for Oracle to execute the entire query and to return the last row to the client.
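A quick way to see the difference yourself (the table and column names are hypothetical):

-- Oracle can stream rows as the scan produces them, so the first page
-- of results appears quickly even though the full scan would take hours
SELECT * FROM big_table;

-- a blocking operation such as a sort forces the entire result set to be
-- materialized before the first row comes back
SELECT * FROM big_table ORDER BY unindexed_column;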
TOAD only fetches the first 500 rows for performance, but if you were to run that query through another Oracle interface, JDBC for example, it would return the entire result set. My best guess is that the explain plan shows the cost of running the query to completion, not just of fetching a subset of the records; that's how I use it. I don't have a source for this other than my own experience with it.
Background: I am using an SQLite database in my Flex application. The database is 4 MB in size and has 5 tables:
table 1 has 2500 records
table 2 has 8700 records
table 3 has 3000 records
table 4 has 5000 records
table 5 has 2000 records.
Problem: Whenever I run a select query on any table, it takes around 50 seconds to fetch data from the database tables. This has made the application quite slow and unresponsive while it fetches the data.
How can I improve the performance of the SQLite database so that the time taken to fetch data from the tables is reduced?
Thanks
As I said in a comment, without knowing what structures your database consists of and what queries you run against the data, there is nothing we can infer about why your queries take so much time.
However, here is an interesting read about indexes: Use the index, Luke!. It tells you what an index is, how you should design your indexes, and what benefits you can harvest.
Also, if you can post the queries and the table schemas and cardinalities (not the contents), it might help.
Are you using asynchronous or synchronous execution modes? The difference between them is that asynchronous execution runs in the background while your application continues to run. Your application will then have to listen for a dispatched event and then carry out any subsequent operations. In synchronous mode, however, the user will not be able to interact with the application until the database operation is complete since those operations run in the same execution sequence as the application. Synchronous mode is conceptually simpler to implement, but asynchronous mode will yield better usability.
The first time SQLStatement.execute() is called on a SQLStatement instance, the statement is automatically prepared before executing. Subsequent calls will execute faster as long as the SQLStatement.text property has not changed. Reusing the same SQLStatement instance is better than creating new instances again and again. If you need to change your queries, consider using parameterized statements, as sketched below.
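In SQL terms, keep the statement text constant and vary only the bound values; a hypothetical example:

-- the text never changes, so the prepared statement is reused;
-- only the value bound to :userId differs between executions
SELECT email, name FROM main.users WHERE id = :userId;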
You can also use techniques such as deferring what data you need at runtime. If you only need a subset of data, pull that back first and then retrieve other data as necessary. This may depend on your application scope and what needs you have to fulfill though.
Specifying the database along with the table names will prevent the runtime from checking each database to find a matching table if you have multiple databases. It also prevents the runtime from choosing the wrong database. Do SELECT email FROM main.users; instead of SELECT email FROM users; even if you only have one single database. (main is automatically assigned as the database name when you call SQLConnection.open.)
If you happen to be writing lots of changes to the database (multiple INSERT or UPDATE statements), consider wrapping them in a transaction. The runtime makes the changes in memory and then writes them to disk once at commit. If you don't use a transaction, each statement results in its own disk writes to the database file, which can be slow and consume a lot of time.
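A minimal sketch (the table and column names are made up):

-- one transaction, one write to disk at COMMIT,
-- instead of separate disk writes per statement
BEGIN TRANSACTION;
INSERT INTO main.users (email) VALUES ('a@example.com');
INSERT INTO main.users (email) VALUES ('b@example.com');
UPDATE main.users SET verified = 1 WHERE email = 'a@example.com';
COMMIT;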
Try to avoid schema changes. The table definition data is kept at the start of the database file, and the runtime loads these definitions when the database connection is opened. Data added to tables is kept after the table definition data in the database file. If changes are made, such as adding columns or tables, the new table definitions will be mixed in with the table data in the database file. The effect is that the runtime has to read the table definition data from different parts of the file rather than just from the beginning. The SQLConnection.compact() method restructures the file so the table definition data is back at the beginning, but its downside is that it can also consume a lot of time, more so if the database file is large.
Lastly, as Benoit pointed out in his comment, consider improving your own SQL queries and table structure. It would be helpful to know whether your database structure and queries are the actual cause of the slow performance. My guess is that you're using synchronous execution; if you switch to asynchronous mode, you'll see better performance, but that doesn't mean it has to stop there.
The Adobe Flex documentation online has more information on improving database performance and best practices working with local SQL databases.
You could try indexing some of the columns used in the WHERE clause of your SELECT statements. You might also try minimizing usage of the LIKE keyword.
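For example, assuming a hypothetical users table that your SELECTs filter by email:

-- lets SQLite seek directly to matching rows instead of scanning the table
CREATE INDEX idx_users_email ON users (email);

-- this predicate can use the index
SELECT * FROM users WHERE email = 'a@example.com';

-- a leading-wildcard LIKE cannot, and still forces a full scan
SELECT * FROM users WHERE email LIKE '%@example.com';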
If you are joining your tables together, you might try simplifying the table relationships.
Like others have said, it's hard to get specific without knowing more about your schema and the SQL you are using.