Problems with Impala. "Query waiting to be closed" - cloudera

I am having a problem with my Impala platform. When I try to run a query, I sometimes get a timeout, and when I reviewed the logs I saw almost 4000 queries waiting to be closed.
I am using CDH 5.14 and Impala 2.11.
Does anyone know whether there are any known bugs in the system related to queries, or do I need to configure some parameters on the system to solve this?

Can you decrease
--idle_query_timeout and --idle_session_timeout?
Comment: I get this same behavior from HUE: some connections are not closed when the tab is closed. I don't recall the exact state, but we have idle_query_timeout and idle_session_timeout set to 1 hour, and those queries are closed after that time. So if idle_query_timeout isn't working, try idle_session_timeout. If that doesn't work, there may be something going on with your specific setup.
Ref: https://community.cloudera.com/t5/Interactive-Short-cycle-SQL/Query-Cancel-and-idle-query-timeout-is-not-working/td-p/58104
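For reference, both of those are impalad startup flags with values in seconds; on CDH they are typically set through the Impala Daemon command-line options in Cloudera Manager. The values below are purely illustrative:
--idle_query_timeout=600      # cancel queries that have been idle for more than 10 minutes
--idle_session_timeout=1800   # expire sessions that have been idle for more than 30 minutes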

Related

IIS is hanging and needs to be recycled every time?

We have one IIS application server on which we have deployed an ASP.NET application.
The issue is that the server hangs very frequently, and each time we have to recycle or restart the application pool. Could you please help me fix it?
What I've tried:
1) Looked into the IIS logs and found that a few requests occasionally throw a 500 error. Apart from those few instances, the same requests work fine and return a 200 status.
2) A few resources are taking a long time, for example:
a) "Report.aspx" - State: SendResponse - Time elapsed: 2-3 minutes
I checked the query and found that the stored procedure returns its data in 00:00:02.
b) "NewReport.aspx" - State: ExecuteRequestHandler - Time elapsed: 1 minute
I checked the query and found that the stored procedure returns its data very quickly.
3) The default recycle time is 29 hours, and each application uses its own application pool.
4) In a few places in the code, database connections are opened but never closed.
5) We are using connection pooling, with two different connection strings, each with Max Pool Size=500 and Min Pool Size=50.
6) Most of the time there is no error in Event Viewer; occasionally we have received a TCP/IP error.
What I am looking for:
1) Do I need to make any changes in IIS to track this down? Please advise if any changes are required in IIS.
2) Is there any setting in SQL Server that could help me find or fix the problem?
3) Based on your previous experience, is there anything else I should do to fix this issue?
I think you should first capture a hang dump of w3wp.exe with ProcDump or the Debug Diagnostic Tool. Then check all the managed stack traces with WinDbg, sos.dll, and the MEX extension, so you can figure out which method and which module the worker threads are pending on. That should point toward how to fix the hang. If you can find a way to reliably reproduce the problem, it becomes much easier to diagnose.
This link explains how to capture a hang dump:
https://blogs.msdn.microsoft.com/asiatech/2014/01/09/debug-diagnostic-2-0-generate-a-manual-hang-dump-for-all-processes-owned-by-iis/
If you don't know how to analyze the dump file manually, the Debug Diagnostic analysis tool can do it for you.
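If you go the ProcDump route, a minimal capture might look like this (the output path is just an example; if several w3wp.exe processes are running, target the PID of the hung application pool instead of the image name):
procdump -ma w3wp.exe C:\dumps\w3wp_hang.dmp
Then open the dump in WinDbg, load SOS with .loadby sos clr, and run !threads followed by ~*e !clrstack to list the managed stack of every thread and see which method the worker threads are blocked in.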

How long does Firebase throttle you?

Even with debug enabled for RemoteConfig, I still managed to get the following:
Error fetching remote config values Optional(Error Domain=com.google.remoteconfig.ErrorDomain Code=8002 "(null)"
UserInfo={error_throttled_end_time_seconds=1483110267.054194})
Here is my debug code:
let debug = FIRRemoteConfigSettings(developerModeEnabled: true)
FIRRemoteConfig.remoteConfig().configSettings = debug!
Shouldn't the above prevent throttling?
How long will the throttle error remain in effect?
I've experienced the same error due to throttling. I was calling FIRRemoteConfig.remoteConfig().fetchWithExpirationDuration with an expiry that was less than 60 seconds.
To immediately get around this issue during testing, use an alternative device. The throttling occurs against a particular device. e.g. move from your simulator to a device.
The intention is not to have a single client flooding the server with fetch requests every second. Make sensible use of the caching it offers out of the box and fetch only when necessary.
When you receive this error, plug the value of error_throttled_end_time_seconds into an epoch converter (like this one at https://www.epochconverter.com) and it will tell you the time when throttling ends. I've tested this myself, and the throttling remains in effect for 1 hour from the first moment you are throttled. So either wait an hour or try some of the other recommendations given here.
UPDATE: Also, if you continue making config requests and receive the throttle error, the expire timeout does not increase (i.e. "you are not further penalized").
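As a small illustration of that epoch conversion in code (using the value from the error above; the variable names are just for the example):
import Foundation

// error_throttled_end_time_seconds is a Unix epoch, so converting it shows
// when Remote Config will accept fetches from this device again.
let throttledEndSeconds = 1483110267.054194
let throttleEndsAt = NSDate(timeIntervalSince1970: throttledEndSeconds)
print("Throttling ends at \(throttleEndsAt)")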
A quick and easy hack to get your app running again is to delete the application and reinstall it. Firebase identifies your device as a new device after reinstalling.
Hope this helps and saves you some time.

ReadConflictError during long transaction on plone 4.1

We have a long-running request that does a catalog search, calculates some information, and then stores it in another document. A call is made to index the document after the store.
In the logs we're getting errors such as INFO ZPublisher.Conflict ConflictError at /pooldb/csvExport/runAgent: database conflict error (oid 0x017f1eae, class BTrees.IIBTree.IISet, serial this txn started with 0x03a30beb4659b266 2013-11-27 23:07:16.488370, serial currently committed 0x03a30c0d2ca5b9aa 2013-11-27 23:41:10.464226) (19 conflicts (0 unresolved) since startup at Mon Nov 25 15:59:08 2013)
and the transaction is getting aborted and restarted.
All the documentation I've read says that, due to MVCC in ZODB, ReadConflictErrors should no longer occur. Since this is written in RestrictedPython, putting in a savepoint is not a simple option (but it is likely my only choice, I'm guessing).
Is there another way to avoid this conflict? If I do need to use savepoints, can anyone think of a security reason why I shouldn't whitelist the savepoint transaction method for use in PythonScripts?
The answer is that there is really no way to run a large transaction that involves writes while other small transactions are touching the same objects at the same time. BTrees are supposed to have special conflict-resolution code, but it doesn't seem to work in the cases I'm dealing with.
The only way I can think of to manage this is code that breaks the reindex into smaller transactions to reduce the chance of a conflict. There really should be something like this built into Zope/Plone.
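A rough sketch of that batching approach, for illustration only (the function name and batch_size are made up, and this assumes trusted filesystem code rather than a RestrictedPython script, since the latter cannot import transaction directly):
import transaction

def reindex_in_batches(catalog, brains, batch_size=100):
    # Committing every batch_size objects keeps each transaction small,
    # which shrinks the window in which a concurrent writer can conflict.
    for i, brain in enumerate(brains, start=1):
        catalog.reindexObject(brain.getObject())
        if i % batch_size == 0:
            transaction.commit()
    transaction.commit()  # commit whatever is left in the final partial batch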

What do I need to apply an RMAN backup from one server to another

Right now I have been given a controlfile, some backup files, and an spfile.ora.
How do I apply these to a database on a server that is not the one they were created on?
If further info is needed please let me know. I am eager to get this process laid out for the next time I have to accomplish it.
I have tried to look up the process but keep seeing things about catalogs and such that I don't feel apply to this situation.
Thank you in advance.
EDIT:
I believe I have the right files, but this is where I'm currently stuck:
RMAN> recover database;
Starting recover at 25-JUL-13
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=770 device type=DISK
starting media recovery
unable to find archived log
archived log thread=1 sequence=6173
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of recover command at 07/25/2013 20:49:59
RMAN-06054: media recovery requesting unknown archived log for thread 1 with sequence 6173 and starting SCN of 10866915410156
The answer to my question ended up being to run recover database with the noredo option, and then to open the database with alter database open resetlogs;
recover database noredo;
Hope this helps anyone who runs into the same issue, and sorry I couldn't lay out a better explanation.
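For anyone piecing together the full sequence, a rough outline of what this looks like follows; all paths are placeholders, and it assumes the backup pieces, controlfile, and spfile have already been copied to the new host and the instance uses the same database name:
RMAN> startup nomount;
RMAN> restore controlfile from '/backup/controlfile.ctl';
RMAN> alter database mount;
RMAN> catalog start with '/backup/';   # register the copied backup pieces with the control file
RMAN> restore database;
RMAN> recover database noredo;
RMAN> alter database open resetlogs;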

Umbraco Log table is getting filled with At /keepalive/ping.ashx (Referred by: ):

I am getting the following error log entry every few seconds in one of my websites' umbracoLog tables...
At /keepalive/ping.ashx (Referred by: ):
I have been trying to find a solution to this issue for a few days without any success. Please see my post on Our Umbraco:
http://our.umbraco.org/forum/using/ui-questions/33930-Error-in-umbracoLog-table-At-keepalivepingashx-%28Referred-by-%29?p=0#comment124646
So keepalives are generally used to stop the old CMS backend falling foul of the app pool going to sleep after 20 mins or so.
It sounds like something in your solution is firing every few seconds, which is a bit odd.
Do you have nopCommerce installed at all? This post describes the issue you are having and where to disable the keepalive if you need to:
http://www.nopcommerce.com/boards/t/6895/why-getting-pingashx-on-online-customer-module-.aspx
