SSMS 2014 Select Query taking too long and not giving any result or Error

I am trying to execute a SELECT statement in SSMS 2014, but it is taking too long and not returning any results. I waited for 30 minutes and still did not receive a result.
Can someone please suggest a solution?
Thank you.

Related

com.amazonaws.services.dynamodbv2.model.AmazonDynamoDBException: Cannot read from backfilling global secondary index

We keep getting this exception in our app, which has a scheduled job that reads from a global secondary index. It looks like the index keeps backfilling periodically even though there were no changes to the table. The volumes on our table are quite low, so I am a bit surprised to see this a few times a day.
This is not a new index, so I am wondering: shouldn't it backfill only on insert/update of records?
Has anyone seen this before?
It might still be creating that GSI. Wait for some time, depending on the amount of data in your DB, and this issue will go away.
I just waited 30 seconds and the error went away automatically. I had edited my DynamoDB table directly from the AWS console; I think this temporary error originated from that.
This error occurs when you have newly created a GSI on a DynamoDB table. Wait a while; once the index has been created, you will no longer see the error.
(In the console the index status initially shows as Creating...; once it becomes Active, hitting your function will no longer raise the error.)
Try Detecting and Correcting Index Key Violations; I guess it is due to an index key violation.
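The "wait until the index is ready" advice above can be automated by polling the table description until the GSI reports ACTIVE and is no longer backfilling. A minimal sketch: the table and index names are hypothetical, and with boto3 the `describe` callable below would wrap `client.describe_table(TableName=...)`, whose response has the shape shown.

```python
import time

def gsi_is_active(table_description, index_name):
    """Return True once the named GSI is ACTIVE and no longer backfilling."""
    for gsi in table_description["Table"].get("GlobalSecondaryIndexes", []):
        if gsi["IndexName"] == index_name:
            return (gsi["IndexStatus"] == "ACTIVE"
                    and not gsi.get("Backfilling", False))
    return False

def wait_for_gsi(describe, table_name, index_name, poll_seconds=5, attempts=60):
    """Poll until the GSI is queryable, or give up after `attempts` polls."""
    for _ in range(attempts):
        if gsi_is_active(describe(table_name), index_name):
            return True
        time.sleep(poll_seconds)
    return False

# Shape of a DescribeTable response while the index is still being built:
sample = {"Table": {"GlobalSecondaryIndexes": [
    {"IndexName": "by-status-index", "IndexStatus": "CREATING", "Backfilling": True},
]}}
print(gsi_is_active(sample, "by-status-index"))  # prints False: still backfilling
```

Querying the index only after `wait_for_gsi` returns True avoids the "Cannot read from backfilling global secondary index" exception during index creation.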

Oracle WITH clause create table taking too long

I am trying to create a table using the WITH clause in Oracle 11g, but the query takes a very long time; if I run the same query without the CREATE statement, it executes. Has anyone else ever come across this issue?
The trick here was to use query hints, specifically index hints. That cut the query execution time by 98%; the CTAS now runs in only 5 minutes, down from over 12 hours.

Date being identified sometimes as string, sometimes as date

I have a script that uploads a Date/Time field to Datastore. Everything worked perfectly for a while, but after some time I noticed a problem. I decided to investigate it using the following query:
SELECT DISTINCT data_subida FROM mynamespace ORDER BY data_subida DESC
The query results show that the data stops on January 10th, but I am sure I am sending more data; when I scrolled down, I found the newer values listed separately.
At some point, Datastore stopped storing my date as a Date/Time and started storing it as a String. Yet if I open one of those entities to visualize its data, the field still looks like a Date/Time.
So, is this a common issue? Am I doing something wrong? Is there a way to tell Datastore to CAST or CONVERT my field before ordering? Or at least to force it to query only the values interpreted as Date/Time, or only those interpreted as String? I need to use this timestamp as a watermark in an ETL process, and without proper ordering I will duplicate data.
Thanks in advance.
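Since Datastore cannot order across mixed property types, one workaround is to normalize the values client-side before computing the watermark. A minimal sketch, assuming the string values are ISO-8601 formatted (the property name `data_subida` is from the question; the sample rows are illustrative):

```python
from datetime import datetime

def as_datetime(value):
    """Coerce a timestamp property that may be stored as str or datetime."""
    if isinstance(value, datetime):
        return value
    # assumption: string-typed rows hold ISO-8601 text like "2019-01-12T08:30:00"
    return datetime.fromisoformat(value)

# data_subida values as they might come back from the mixed-type entities
rows = [
    datetime(2019, 1, 10, 12, 0),   # stored as Date/Time
    "2019-01-12T08:30:00",          # stored as String
    datetime(2019, 1, 9, 23, 59),   # stored as Date/Time
]

# ETL watermark: the latest timestamp regardless of stored type
watermark = max(as_datetime(v) for v in rows)
print(watermark.isoformat())  # 2019-01-12T08:30:00
```

This sidesteps the ordering problem for the watermark, though the root fix is to make the upload script always write the property with the same type.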

SQL Server Performance and Query Execution

A few days ago I had a hard time.
I had developed an application for the online admission process for college students, and it was quite successful.
Let me come to the problem I faced.
Two tables were involved: Student_AdmissionDetails (containing almost 30-35 fields, most of them of datatype nvarchar(70)) and StudentCategory.
A few days after the admission process started, Student_AdmissionDetails had about 300,000 records and StudentCategory had 4 records.
I had developed a dashboard that was supposed to show the number of students who applied in each category, and to achieve this I had the following query:
SELECT COUNT(*)
FROM Student_AdmissionDetails
INNER JOIN StudentCategory
    ON Student_AdmissionDetails.Id = StudentCategory.Id
WHERE CategoryTypeName = #ParameterValue
The above query fires three times on a single page, and 250-300 users were accessing that page simultaneously. On top of that, 1,300-2,000 students were filling in the admission form at the same time.
The problem I had was that when I ran the above query in SQL Server, it failed about 1 time out of 5, throwing an error that a deadlock had occurred while accessing an object in memory (forgive me for not remembering the exact error).
What I'm looking for from this post:
This time I was a bit lucky and my code did not make anyone unhappy, but can anyone let me know what can be done to overcome such a scenario? What is the best way to handle large DBs?
I tried to figure it out with SQL Profiler, but since there were 5 more applications running, similar to mine, I was not able to find out how many users were trying to access the same resource.
I guess the following points will be helpful for answering my question:
The application server and the DB server are different machines.
The DB server was running Windows (XP, I guess!) and had 128 GB of RAM.
When I executed the query from SQL Server directly, it took an average of 12-15 seconds.
Apologies for writing at such length, but I really need help to learn this :)
Try updating your SELECT statement by adding WITH (NOLOCK). This will make your results less precise, but that seems to be enough for your dashboard.
Also, it's better to filter on something like an integer CategoryTypeId than on CategoryTypeName in the WHERE clause.
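The second suggestion, filtering on an indexed integer key instead of an nvarchar name, can be sketched with sqlite3 standing in for SQL Server (the table names come from the question; the `CategoryTypeId` column and sample data are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE StudentCategory (
    Id INTEGER PRIMARY KEY,
    CategoryTypeName TEXT
);
CREATE TABLE Student_AdmissionDetails (
    Id INTEGER PRIMARY KEY,
    CategoryTypeId INTEGER REFERENCES StudentCategory(Id)
);
-- index the filter column so the dashboard count is a cheap index scan
CREATE INDEX ix_admission_category ON Student_AdmissionDetails(CategoryTypeId);
""")
conn.executemany("INSERT INTO StudentCategory VALUES (?, ?)",
                 [(1, "General"), (2, "Reserved")])
conn.executemany("INSERT INTO Student_AdmissionDetails VALUES (?, ?)",
                 [(i, 1 if i % 3 else 2) for i in range(1, 10)])

# count per category via the integer key; no join or string compare needed
row = conn.execute(
    "SELECT COUNT(*) FROM Student_AdmissionDetails WHERE CategoryTypeId = ?",
    (1,),
).fetchone()
print(row[0])  # prints 6
```

With only 4 category rows, resolving the name to its Id once and then counting on the indexed integer column keeps each of the repeated dashboard queries short, which also shrinks the window in which locks are held.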

Oracle does full table scan but returns results so quickly

When I open up TOAD and do a SELECT * FROM the table, the results (the first 500 rows) come back almost instantly. But the explain plan shows a full table scan, and the table is very large.
How come the results come back so quickly?
In general, Oracle does not need to materialize the entire result set before it starts returning the data (there are, of course, cases where Oracle has to materialize the result set in order to sort it before it can start returning data). Assuming that your query doesn't require the entire result set to be materialized, Oracle will start returning the data to the client process whether that client process is TOAD or SQL*Plus or a JDBC application you wrote. When the client requests more data, Oracle will continue executing the query and return the next page of results. This allows TOAD to return the first 500 rows relatively quickly even if it would ultimately take many hours for Oracle to execute the entire query and to return the last row to the client.
TOAD only fetches the first 500 rows for performance, but if you ran that query through another Oracle interface, JDBC for example, it would return the entire result set. My best guess is that the explain plan shows you the cost of the full query, not of fetching a subset of the records; that's how I use it. I don't have a source for this other than my own experience.
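The incremental-fetch behavior described above is how DB-API style cursors work in general, not just TOAD against Oracle. A minimal sketch with sqlite3 standing in for Oracle (the table and sizes are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE big (n INTEGER)")
conn.executemany("INSERT INTO big VALUES (?)", ((i,) for i in range(100_000)))

# A full-scan query: the cursor does not materialize all 100,000 rows up
# front; rows are produced as the client fetches them.
cur = conn.execute("SELECT n FROM big")
first_page = cur.fetchmany(500)   # TOAD's "first 500 rows" behaves like this
print(len(first_page))            # prints 500, without reading the rest
```

Because the client only asked for the first page, the engine can return it quickly even though completing the full scan, and fetching the last row, would take much longer on a huge table.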
