Oracle WITH clause create table taking too long - oracle11g

I am trying to create a table using the WITH clause in Oracle 11g. The query takes a very long time, but if I run the same query without the CREATE statement it executes quickly. Has anyone else come across this issue?

The trick here was to use query hints, specifically an index hint. It cut the query execution time by 98%: the CTAS now runs in only 5 minutes, down from over 12 hours.
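For illustration, a minimal sketch of what such a hinted CTAS can look like (the table, column, and index names here are hypothetical, since the original post doesn't show its query):

-- The INDEX hint asks the optimizer to use ix_src_status instead of a full scan.
CREATE TABLE active_copy AS
SELECT /*+ INDEX(s ix_src_status) */
       s.id, s.status, s.created_at
FROM   src_table s
WHERE  s.status = 'ACTIVE';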

Related

Running COUNT in clickhouse immediately after INSERT returns 0

I'm running an INSERT query into a Distributed table of ReplicatedMergeTree with 2 nodes (single shard).
After the INSERT, I want to check the number of INSERTED records, so I run a COUNT query on the Distributed table.
At first, the COUNT returns 0. After several seconds (it can take more than a minute) the count returns the correct number.
I've checked using SHOW PROCESSLIST that the INSERT query has finished running.
Is there a way to verify that everything is in order before executing the COUNT?
It seems you may need to use the FINAL keyword. The documentation mentions that one should try to avoid it, so you might be better off revisiting the table design and storage engine, but it could be a good interim solution.
https://clickhouse.com/docs/en/sql-reference/statements/select/from/
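As a rough sketch of what that looks like (the table name is hypothetical; FINAL forces merge-time transformations to be applied at query time, which is why the docs warn it can be slow):

-- Count only after parts have been fully merged/collapsed at read time.
SELECT count()
FROM   my_distributed_table FINAL;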

SQL Server Performance and Query Execution

A few days ago I had a hard time.
I had developed an application for the online admission process for college students, and it was quite successful.
Let me come to the problem I faced.
Two tables were involved: Student_AdmissionDetails (almost 30-35 fields, most of them nvarchar(70)) and StudentCategory.
A few days after the admission process started, Student_AdmissionDetails had about 300,000 records and StudentCategory had 4 records.
I had developed a dashboard that was supposed to show the number of students who applied in each category, and to achieve this I used the following query:
Select count(*)
from Student_AdmissionDetails
inner join StudentCategory
on Student_AdmissionDetails.Id=StudentCategory.Id
where CategoryTypeName=#ParameterValue
The above query gets fired 3 times on a single page, and there were 250-300 users accessing that page simultaneously. Along with that, 1,300-2,000 students were filling out the admission form at the same time.
The problem I got was that when the above query ran on the SQL server, about 1 time out of 5 it threw an error saying a deadlock had occurred while accessing an object in memory (forgive me for not remembering the exact error).
What I'm looking for from this post:
This time I was a bit lucky that my code didn't make anyone unhappy, but can anyone let me know what can be done to overcome such a scenario? What is the best way to handle large DBs?
I tried to figure it out with SQL Profiler, but since there were 5 more applications running that were similar to mine, I was not able to find out how many users were trying to access the same resource.
I guess the following points will be helpful for answering my question:
The application server and DB server are different machines.
The DB server was running on Windows XP (I guess!) and had 128 GB of RAM.
When I executed the query from SQL Server directly, it took 12-15 seconds on average.
Apologies for writing such a long post, but I really need help to learn this :)
Try updating your SELECT statement to add WITH (NOLOCK). This will make your results less precise, but it seems that's enough for your dashboard.
Also, it's better to use something like an integer CategoryTypeId than CategoryTypeName in the WHERE clause.
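A sketch of the suggested rewrite (the CategoryId join column and CategoryTypeId filter column are assumptions on my part; the original query joins on Id = Id and filters on CategoryTypeName):

-- NOLOCK reads uncommitted data, so the count may be slightly off while inserts are in flight.
SELECT COUNT(*)
FROM   Student_AdmissionDetails a WITH (NOLOCK)
INNER JOIN StudentCategory c WITH (NOLOCK)
       ON a.CategoryId = c.Id
WHERE  c.CategoryTypeId = @CategoryTypeId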

oracle does full table scan but returns results so quickly

When I open up TOAD and do a select * from a table, the results (the first 500 rows) come back almost instantly. But the explain plan shows a full table scan, and the table is very large.
How come the results are so quick?
In general, Oracle does not need to materialize the entire result set before it starts returning the data (there are, of course, cases where Oracle has to materialize the result set in order to sort it before it can start returning data). Assuming that your query doesn't require the entire result set to be materialized, Oracle will start returning the data to the client process whether that client process is TOAD or SQL*Plus or a JDBC application you wrote. When the client requests more data, Oracle will continue executing the query and return the next page of results. This allows TOAD to return the first 500 rows relatively quickly even if it would ultimately take many hours for Oracle to execute the entire query and to return the last row to the client.
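You can see this first-rows behavior directly: asking Oracle for only the first N rows of a full scan returns almost immediately, because execution stops once those rows have been fetched (the table name below is hypothetical):

-- ROWNUM acts as a stopkey: the scan halts after producing 500 rows.
SELECT *
FROM   very_large_table
WHERE  ROWNUM <= 500;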
Toad only fetches the first 500 rows for performance, but if you were to run that query through an Oracle interface, JDBC for example, it would return the entire result set. My best guess is that the explain plan shows you the plan for the case where it doesn't fetch just a subset of the records; that's how I use it. I don't have a source for this other than my own experience with it.

SQLite query delay

I have a very simple query:
SELECT count(id), min(id), max(id), sum(size), sum(frames), sum(catalog_size + file_size)
FROM table1
table1 holds around 3000 to 4000 records.
My problem is that it takes around 20 seconds for this query to run. And since it is called more than once, the delay is quite noticeable to the customer.
Is it normal to this query to take 20 seconds? Is there any way to improve the run time?
If I run this same query from SQLite Manager it takes milliseconds to execute. The delay only occurs when the query is called from our software. EXPLAIN and EXPLAIN QUERY PLAN didn't help much. We use SQLite version 3.7.3 on Windows XPe.
Any thoughts how to troubleshoot this issue or improve the performance of the query?
All the sums require that every single record in the table must be read.
If the table contains more columns than those shown above, then the data read from the disk contains both useful and useless values (especially if you have big blobs). In that case, you can try to reduce the data needed to be read for this query by creating a covering index over exactly those columns needed for this query.
In SQLite versions before 3.7.15, you need to add an ORDER BY for the first index field to force SQLite to use that index, but this doesn't work for all queries. (For your query, try updating to this beta, or wait for 3.7.15.)
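A sketch of such a covering index for this exact query (the index name is arbitrary):

-- Covers every column the query reads, so SQLite can scan the index
-- instead of the full rows (and skip any large blob columns).
CREATE INDEX idx_table1_totals
    ON table1(id, size, frames, catalog_size, file_size);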

The database's auto-deleting task problem

I'm trying to figure out how to develop an auto-deleting task on my database. E.g., there are some records in the database:
(screenshot of example records: http://img109.imageshack.us/img109/2962/datax.png)
So, if the actual DateTime on the server is 2010-01-09 12:12:12 the record no.1 must be deleted.
OK, but what if there are, e.g., 1,000,000 records in the database? Does the server have to search the database every second to check which rows must be deleted? That's not efficient at all.
I'm totally new to Microsoft SQL Server, so I'd be grateful for any kind of help.
There isn't a time-based trigger in SQL Server, so you are going to have to implement this as a job or through some other scheduled mechanism.
Most likely you will want an index on the StartDate (end date?) column so that your deletion query doesn't have to perform a full table scan to find the rows it needs to delete.
Usually you don't actually perform deletes every second. Instead, the app should be smart enough to query the table in a way that eliminates expired records from its result set. Then you can perform lazy deletes at some other time interval, such as once an hour or once a day, to do the cleanup. A sketch of both pieces follows below.
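A minimal sketch of both pieces, assuming a hypothetical Records table with an EndDate expiry column:

-- Index so neither the reads nor the cleanup need a full table scan.
CREATE INDEX IX_Records_EndDate ON Records (EndDate);

-- Reads simply exclude expired rows instead of relying on instant deletion.
SELECT * FROM Records WHERE EndDate > GETDATE();

-- Periodic cleanup, e.g. run hourly from a SQL Server Agent job.
DELETE FROM Records WHERE EndDate <= GETDATE();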
