I have a query with 4 CASE statements, each having 35 WHEN expressions, that throws a runtime error (code: 14001, message: Query compilation failed for /rds/bin/padb.1.0.7152/data/). The explain plan doesn't show any errors.
However, when I reduce the total number of CASE statements to 82 by commenting a few out, the query runs fine.
Is there a maximum limit on the number of CASE statements in Redshift?
I'm currently running the query with some statements commented out that aren't necessary right now but might be in the future, so I need a more elegant solution to this issue.
Would appreciate any help here. Thanks!
I tried to run a Gremlin query that adds a property to a vertex through the Gremlin console.
g.V().hasLabel("user").has("status", "valid").property(single, "type", "valid")
I constantly get this error:
org.apache.tinkerpop.gremlin.jsr223.console.RemoteException: Connection to server is no longer active
This error happens after the query has been running for one or two minutes.
I tried some simple queries like g.V().limit(10) and they work fine.
Since the affected vertex count is more than 4 million, I'm not sure if it is failing due to a timeout issue.
I also tried to split it into small batches:
g.V().hasLabel("user").has("status", "valid").hasNot("type").limit(200000).property(single, "type", "valid")
It succeeded for the first few batches and then started failing again.
Are there any recommendations for updating millions of vertices?
The precise approach you take may vary depending on the backend graph database and storage you are using.
The capacity of the hardware where Gremlin Server is running, in terms of the number of CPUs and, most importantly, memory, will also be a factor, as will the setting of the query timeout value.
To do this in Gremlin, if you had a way to identify distinct ranges of vertices easily, you could split the work into multiple threads, each doing batches of updates. If the example you show is representative of your actual need, then that is likely not possible in this case.
Likewise, some graph databases provide a bulk load capability that is often a good way to do large batch updates, but that is probably not an option here because you essentially need a conditional update based on the current presence (or absence) of a property.
Without more information about your data model, hardware, etc., the best answer is probably to do two things:
Use smaller limits (a sketch of such a batching loop follows below). Maybe try 5K or even just 1K at first and work up from there until you find a reliable sweet spot.
Increase the query timeout settings.
You may need to experiment to find that sweet spot for your environment, as the capacity of the hardware will definitely play a role in situations like this, as will how you write your query.
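As a minimal sketch of that batching approach, assuming g is a traversal source bound with withRemote (so the loop runs on the client and each batch is its own request), and using an illustrative batch size of 1000, you can repeat the limited update until no unprocessed vertices remain; each pass then only has to finish within the query timeout:

while (g.V().hasLabel("user").has("status", "valid").hasNot("type").hasNext()) {
    // update the next batch of vertices that still lack the "type" property
    g.V().hasLabel("user").has("status", "valid").hasNot("type").
        limit(1000).property(single, "type", "valid").iterate()
}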
I have 2 questions on the case recorder.
1- I am not sure how to restart an optimization from where the recorder left off. I can read in the case recorder SQL file, etc., but cannot see how this can be fed into the Problem to restart.
2- This question may be due to my lack of knowledge of Python, but how can one access the iteration number from within an OpenMDAO component? (One way is to read the SQL file that is constantly being updated, but there should be a more efficient way.)
You can re-load a case via the load_case method on the Problem.
See the docs for it here.
I'm not completely sure what you mean by accessing the iteration count, but if you just want to know the number of times your components are called, you can add a counter to them yourself.
There is no programmatic API for accessing the iteration count in OpenMDAO as of version 2.3.
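A rough sketch of both points follows; the file name 'cases.sql' and the component are made up for illustration, and the CaseReader attribute names follow my reading of the OpenMDAO 2.x API, so adjust for your version:

from openmdao.api import CaseReader, ExplicitComponent

# Restarting: assume `prob` is an already-configured Problem (prob.setup() has run).
cr = CaseReader('cases.sql')              # the recorder file written by the earlier run
last_case = cr.driver_cases.get_case(-1)  # last recorded driver iteration
prob.load_case(last_case)                 # push the recorded values back into the model
prob.run_driver()                         # continue the optimization from there

# Counting executions: a component that simply counts its own compute() calls.
class CountingComp(ExplicitComponent):
    def setup(self):
        self.add_input('x', val=0.0)
        self.add_output('y', val=0.0)
        self.exec_count = 0

    def compute(self, inputs, outputs):
        self.exec_count += 1              # incremented every time the component runs
        outputs['y'] = 2.0 * inputs['x']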
I hope this is a simple question.
When doing query performance testing, running an identical, consecutive query will always return a response faster than the first attempt (generally, significantly faster).
What's the easiest/fastest method to 'reset' sqlite3 back to its default state?
Running VACUUM can take quite a while and is obviously doing more than simply 'resetting' things.
Thank you,
So, it seems sqlite3 doesn't have the ability to do this on its own. You can compensate by flushing the page cache and inodes in Linux, running the following as root:
echo 3 > /proc/sys/vm/drop_caches
For it to be effective for performance testing, you'll need to run this command between each iteration. The value in the file won't change (which is counterintuitive), but each time the value is written to it, the flush is triggered.
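For example, a rough benchmark loop might look like this (the database and query file names are placeholders; run as root, and /usr/bin/time -f assumes GNU time on Linux):

for i in 1 2 3 4 5; do
    sync                                  # flush dirty pages to disk first
    echo 3 > /proc/sys/vm/drop_caches     # drop the page cache, dentries and inodes
    /usr/bin/time -f "%e s" sqlite3 test.db < query.sql > /dev/null
done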
My client-side sensu metric is reporting a WARN and the data is not getting to my OpenTSDB.
It seems to be stuck, but I don't understand what the message is telling me. Can someone translate?
The command is a ruby script.
In /var/log/sensu/sensu-client.log:
{"timestamp":"2014-09-11T16:06:51.928219-0400",
"level":"warn",
"message":"previous check command execution in progress",
"check":{"handler":"metric_store","type":"metric",
"standalone":true,"command":"...",
"output_type":"json","auto_tag_host":"yes",
"interval":60,"description":"description here",
"subscribers"["system"],
"name":"foo_metric","issued":1410466011,"executed":1410465882
}
}
My questions:
What does this message mean?
What causes this?
Does it really mean we are waiting for the same check to run? If so, how do we clear it?
This error means that Sensu is (or thinks it is) currently executing this check:
https://github.com/sensu/sensu/blob/4c36d2684f2e89a9ce811ca53de10cc2eb98f82b/lib/sensu/client.rb#L115
This can be caused by stacked checks that take longer than their interval (60 seconds in this case) to run.
You can try setting the "timeout" option in the check definition:
https://github.com/sensu/sensu/blob/4c36d2684f2e89a9ce811ca53de10cc2eb98f82b/lib/sensu/client.rb#L101
to make Sensu time out on that check after a while. You could also add internal logic to your check so that it does not hang.
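For example, a check definition with a timeout might look like this (the command path and the 30-second value are placeholders; the other attributes mirror what your log shows):

{
  "checks": {
    "foo_metric": {
      "type": "metric",
      "standalone": true,
      "command": "/etc/sensu/plugins/your_metric_script.rb",
      "interval": 60,
      "timeout": 30,
      "handler": "metric_store",
      "subscribers": ["system"]
    }
  }
}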
In my case, I had accidentally configured two sensu-client instances to have the same name. I think that caused one of them to always think its checks were already running when in reality they were not. Giving them unique names solved the problem for me.
Hi,
We are getting timeouts in our ASP.NET application. We are using SQL Server 2005 as the DB.
The queries run very fast in Query Analyzer. However, when we check the time through the Profiler, it shows a time that is many times more than what we get in Query Analyzer.
(Parameter sniffing is not the cause.)
Any help is much appreciated
thanks
We are on a SAN
Cleared the counters. The new counters are:
ASYNC_NETWORK_IO 540 9812 375 78
WRITELOG 70 1828 328 0
The timeout happens only on a particular SP with a particular set of params. If we change the params and access the app, it works fine. We ran the Profiler and found that the SP's BatchCompleted event shows up in the Profiler after the timeout happens on the ASP.NET side. If we restart the server, everything works fine.
If we remove the plan from the cache, the app works fine. However, we have already taken parameter sniffing into consideration in the SP. What else could be the reason?
If I were to take a guess, I would assume that the background database load from the web server is escalating locks and causing the whole thing to slow down. Then you run a large-ish query and that causes lock (and resource) contention.
I see this ALL THE TIME with companies complaining of performance problems with their client-server applications when going from one SQL Server to a cluster. In the web world, we hit those issues much earlier.
The solution (most times) to lock issues is one of the following:
* Refactor your queries to work better (storing SCOPE_IDENTITY() once instead of calling it 5 times, for example)
* Use the NOLOCK table hint everywhere it makes sense, as sketched below.
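A hedged sketch of both points; the dbo.Orders table and its columns are made-up names for illustration only:

DECLARE @CustomerId INT = 42;
DECLARE @NewOrderId INT;

INSERT INTO dbo.Orders (CustomerId, CreatedAt)
VALUES (@CustomerId, GETDATE());

-- store SCOPE_IDENTITY() once in a variable instead of calling it repeatedly
SET @NewOrderId = SCOPE_IDENTITY();

-- read-only reporting query that can tolerate dirty reads
SELECT OrderId, CustomerId, CreatedAt
FROM dbo.Orders WITH (NOLOCK)
WHERE CustomerId = @CustomerId;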
EDIT:
Also, try viewing the server with the new 2008 SQL Management Studio 'Activity Monitor'. You can find it by right-clicking on your server and selecting 'Activity Monitor'.
Go to the Processes section and look at how many processes are 'waiting'. Your wait time should be near 0. If you see a lot of stuff under 'Wait Type', post a screenshot and I can give you an idea of what the next step is.
Go to the Resource Waits section and see what the numbers look like there. Your waiters should always be near-0.
And 'Recent Expensive Queries' is awesome to look at to find out what you can do to improve your general performance.
Edit #2:
How much slower is it? Your SAN seems to be taking up about 10 seconds' worth, but if you are talking 20 seconds vs. 360 seconds, then that would not be relevant, and there are no waits for locks, so I guess I am drawing a blank. If the difference is between 1 second and 10 seconds, then it seems to be network related.
Run the following script to create this stored proc:
CREATE PROC [dbo].[dba_SearchCachedPlans]
@StringToSearchFor VARCHAR(255)
AS
/*----------------------------------------------------------------------
Purpose: Inspects cached plans for a given string.
------------------------------------------------------------------------
Parameters: @StringToSearchFor - string to search for e.g. '%<MissingIndexes>%'.
Revision History:
03/06/2008 Ian_Stirk@yahoo.com Initial version
Example Usage:
1. exec dba_SearchCachedPlans '%<MissingIndexes>%'
2. exec dba_SearchCachedPlans '%<ColumnsWithNoStatistics>%'
3. exec dba_SearchCachedPlans '%<TableScan%'
4. exec dba_SearchCachedPlans '%CREATE PROC%MessageWrite%'
-----------------------------------------------------------------------*/
BEGIN
-- Do not lock anything, and do not get held up by any locks.
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
SELECT TOP 100
st.TEXT AS [SQL],
cp.cacheobjtype,
cp.objtype,
DB_NAME(st.dbid) AS [DatabaseName],
cp.usecounts AS [Plan usage],
qp.query_plan
FROM sys.dm_exec_cached_plans cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) st
CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) qp
WHERE CAST(qp.query_plan AS NVARCHAR(MAX)) LIKE @StringToSearchFor
ORDER BY cp.usecounts DESC
END
Then execute:
exec dba_SearchCachedPlans '%<MissingIndexes>%'
And see if you are missing any recommended indexes.
When SQL Server creates a plan, it saves it along with any recommended indexes. Just click on the query_plan column text to show the graphical plan; at the top, there will be recommended indexes you should implement.
I don't have the answer for you, because I'm not a guru. But I do remember reading on some SQL blogs recently that SQL 2008 has some extra things you can add to the query or stored procedure so it calculates things differently; one thing you could try searching for is 'hints'. How SQL Server uses the current 'statistics' makes a difference too, so look that up. Also, the execution plan is only generated on the first run; if that plan doesn't work well with different parameter values (because there would be a vast difference in what is searched or returned), it can produce this behavior, I think.
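For what it's worth, one example of that kind of hint is OPTION (RECOMPILE), which makes SQL Server build a fresh plan for each execution instead of reusing the one compiled for the first run's parameter values (the table and column names here are made up for illustration):

-- @CustomerId stands for a stored procedure parameter in this sketch
SELECT OrderId, CustomerId, CreatedAt
FROM dbo.Orders
WHERE CustomerId = @CustomerId
OPTION (RECOMPILE);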
Sorry I can't be more helpful. I'm just getting my feet wet with SQL Server performance at this level. I bet if you asked someone like Brent Ozar he could point you in the right direction.
I've had this exact same issue a couple of times before. It seemed to happen to me when a particular user was on the site when it was deployed. When that user would run certain stored procedures with their ID, it would time out. When others would run it, or I would run it from the DB, it would run in no time. We had our DBAs watch everything they could, and they never had an answer. In the end, everything was fixed whenever I re-deployed the site and the user was not already logged in.
I've had similar issues, and in my case it had to do with the SP recompiling. Specifically, it was my use of temp tables vs. table variables.