How to configure the slow query log in TiDB? - distributed-database

I have already set slow_query_log to OFF, so why does tidb_slow_query.log still log queries with execution times of several hundred milliseconds?

In TiDB, the slow query log records queries whose execution time exceeds 300 milliseconds by default. You can modify this threshold in the TiDB configuration file as needed.
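For example, a minimal sketch of the relevant setting in the tidb.toml configuration file (the 1000 ms value is only an illustration; tidb-server needs a restart to pick it up):

[log]
# Queries slower than this threshold (in milliseconds) are written
# to tidb_slow_query.log; raise it to log fewer queries.
slow-threshold = 1000

Recent TiDB versions also expose a tidb_slow_log_threshold variable that can be set from a client session if a restart is inconvenient, e.g. SET tidb_slow_log_threshold = 1000;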

Related

Time Period of Firebase realtime profile operations

The official documentation of the Firebase Realtime Database profiler says:
The profiler tool logs all the activity in your database over a given period of time, then generates a detailed report.
But it doesn't say which specific time period, such as the last 24 hours.
My database usage shows that on a particular day the bandwidth consumed was X, so I want to specify a particular day or time duration, like the last 24 hours, in the Firebase Realtime Database profiler.
Q1. Is it possible to specify the duration in profile like last 24 hours?
Q2. How does profiler work?
I think the profiler just scans some log and keeps writing/streaming the operations to the user console until the user stops the profiling tool. Correct me if I am wrong here.
Q1. Is it possible to specify the duration in profile like last 24 hours?
No, it's not possible to profile the last hours. But you can profile the next 24. (I'll get to that in Q2.)
Q2. How does profiler work?
What the profiler does is log all the operations happening on your database from the time you run the command until the time you stop it. While it runs, the console shows how many operations have been logged so far, and you can press Enter to stop logging. It will then show you (or save to a file, if you prefer) speed and bandwidth reports.
But it also has an option to set the logging duration (in seconds). For example, if you want to log the next 24 hours you can use:
firebase database:profile -d 86400
But keep in mind that logging only happens while the computer that started it is still on. This means you'll need to keep your computer on for the next 24 hours.
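Putting that together, a sketch of an unattended 24-hour run (the output file name is an example, and the --output flag is per recent firebase-tools releases; nohup only keeps the process alive as long as the machine itself stays on):

nohup firebase database:profile --duration 86400 --output profile-24h.txt &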

.Exist doesn't wait as per the given timeout in UFT14

I am trying to play back my QTP11 scripts in UFT14 (trial), but for some reason .Exist doesn't wait for the given timeout. Instead, when the object doesn't exist, it waits for the Object synchronization timeout from the project settings. Any reason why?
For example, my project's Object synchronization timeout is set to 60 seconds. When I use something like If ErrorObject.Exist(10) Then ErrorObject.Close, it should wait for only 10 seconds, but UFT14 waits the full 60 seconds. Is this a bug, or is there an extra setting I have to apply in UFT14 for Exist to wait only for the given timeout?
Edit - On further inspection I found that this issue affects Java objects only, so it might be a bug in the Java add-in. Can anyone verify or provide a workaround?
Edit - HP acknowledged that this is an issue. Here is the link if anyone is interested:
https://softwaresupport.hpe.com/group/softwaresupport/search-result/-/facetsearch/document/KM02764499
This is because of the default timeout in UFT. You can change that default timeout as below:
Test Settings -> Run -> Object synchronization timeout
Change the "Object synchronization timeout" value (in seconds).
Or you can do this directly through VBScript code:
Setting("DefaultTimeout") = 5000 ' this value is in milliseconds
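Until that bug is fixed, one possible workaround is to temporarily shrink the global timeout around the Exist call and restore it afterwards. A sketch, not an official fix:

' Workaround sketch: lower the global timeout just for this check
oldTimeout = Setting("DefaultTimeout")
Setting("DefaultTimeout") = 10000 ' 10 seconds, in milliseconds
If ErrorObject.Exist(10) Then
    ErrorObject.Close
End If
Setting("DefaultTimeout") = oldTimeout ' restore the original value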

Session error in Teradata Fastload Script

My FastLoad script is scheduled to run every week, and when it starts, it fails because of an insufficient number of sessions. But when I restart the script manually, it executes with no session error.
I don't know what causes it to fail every week for the same insufficient-sessions reason. Can anyone tell me what the possible causes might be?
Check for:
1. The scheduled job's connection string, in case it points to a single Teradata node (IP address). Depending on concurrent sessions, you can exceed the PE session limit (120 sessions). Try using a DNS/VIP to achieve better load balancing.
2. The number of utilities running on the system at the scheduled time. If you exceed the utility threshold, use SLEEP and TENACITY to place your job in a queue instead of letting it fail (see the sketch below).
3. Limit the FastLoad session count using SESSIONS.
Thanks!!
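For points 2 and 3, a sketch of the relevant commands at the top of a FastLoad script (the values and logon placeholders are examples; TENACITY is in hours, SLEEP is in minutes, and all three must appear before LOGON):

SESSIONS 4;     /* cap the number of FastLoad sessions */
TENACITY 4;     /* keep retrying for up to 4 hours */
SLEEP 6;        /* wait 6 minutes between retries */
LOGON tdpid/username,password;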

Split SQLite file into chunks for appcfg.py

I have a 750 MB sql3 file that I want to load with appcfg.py, a program that can restore App Engine data. It's taking forever to load. Is there a way I could split it into smaller, totally separate chunks, to be loaded independently?
I don't need to run queries across the data, or maintain any other kind of relationship. I just need to copy a list of the records to my App Engine app.
Elaboration:
I'm trying to restore a 750 MB sql3 file I got from
appcfg.py download_data --appl=myapp --url=https://myapp.appspot.com/remote_api --file=backup.sql3
Now, I'm trying to restore the file with
appcfg.py upload_data --appl=restoreapp --url=https://restoreapp.appspot.com/remote_api --file=backup.sql3
I also set some parameters tweaking the default limits.
This prints out some initial logging information, repeating the parameters, etc. Then nothing happens for about 45 minutes, except that Python uses about 50% CPU for the duration. Then, finally, it starts to upload to App Engine.
From there, it seems to work. But if there's an error in the transmission, I have to wait the 45 minutes again, even after specifying the progress database. That's why I'm looking for a way to split up the file, or something similar.
FWIW, both the original app and the restore app use the Java SDK.
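One possible approach, assuming the dump is an ordinary SQLite database: copy row ranges into separate files with the sqlite3 CLI and upload each file on its own. The table name result and the chunk size below are assumptions, so inspect the actual schema first, and verify that appcfg.py accepts a small chunk before splitting everything:

sqlite3 backup.sql3 ".schema"
sqlite3 backup.sql3 "ATTACH 'backup_part1.sql3' AS part; CREATE TABLE part.result AS SELECT * FROM result LIMIT 100000 OFFSET 0;"
sqlite3 backup.sql3 "ATTACH 'backup_part2.sql3' AS part; CREATE TABLE part.result AS SELECT * FROM result LIMIT 100000 OFFSET 100000;"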

drupal maximum execution time of cron

What's the maximum execution time of cron? Is it possible to modify it, and if so, are there any side effects?
The accepted answer above is INCORRECT. Cron's time limit in Drupal is hard-coded to 240 seconds. See the drupal_cron_run function in includes/common.inc, specifically these lines:
drupal_set_time_limit(240);
and
if (!lock_acquire('cron', 240.0)) {
(based on the source of Drupal 7.12)
So, there is no way to change this globally without hacking core. I have heard it suggested to call drupal_set_time_limit yourself inside your hook_cron implementation, as doing so resets PHP's counter (see the sketch below). However, that won't help you when it's a third-party module implementing hook_cron.
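For the case where you control the module, a sketch of such a hook_cron implementation for Drupal 7 (mymodule is a placeholder module name):

<?php
/**
 * Implements hook_cron().
 */
function mymodule_cron() {
  // Resets PHP's execution-time counter for this cron run;
  // this only helps for work done inside this hook.
  drupal_set_time_limit(1200);
  // ... long-running work here ...
}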
Maximum execution time for Drupal's cron depends on your php.ini.
For example, if you use wget -O - -q -t 1 http://www.example.com/cron.php as your cron command, Apache's php.ini is used to determine the maximum execution time.
If you use php -f cron.php as your cron command, then php-cli's php.ini is used to determine the maximum execution time.
It is recommended to use php-cli for a higher execution time; you can set the maximum execution time in /etc/php5/cli/php.ini (if you use Debian Linux) with no side effects on Apache while cron runs.
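For example, a typical crontab entry that uses php-cli (the paths are examples; adjust for your system and docroot):

# Run Drupal's cron hourly via php-cli instead of wget/Apache
0 * * * * /usr/bin/php -f /var/www/drupal/cron.php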
I don't know if this is necessarily the case, as I've just run cron.php through my browser and I'm getting a max execution time error at 240 seconds while the max execution time in my php.ini is 1200 seconds. So somewhere besides my php.ini file, Drupal is grabbing the max execution time.
That somewhere is ./includes/common.inc or ./includes/locale.inc. Head in there and you'll find settings to adjust how long Drupal will allow cron to run before giving up.
This module can help you: Set Cron Time
