When I run the slow query log analyzer, I see huge request times, in seconds. When I execute these queries manually, they run very fast (0.01 sec). What could be the problem?
mysql Ver 15.1 Distrib 10.1.9-MariaDB, for Linux (x86_64) using readline 5.1
CREATE DEFINER = 'root'@'192.168.1.101' EVENT `DEL_EXPIRED_BANS`
ON SCHEDULE EVERY 10 MINUTE STARTS '2013-10-18 13:38:54'
ON COMPLETION NOT PRESERVE
ENABLE
COMMENT '' DO
BEGIN
update users set ban_type=0, ban_expire=null, ban_expire=null, ban_reason=null
where ban_type > 0 and ban_expire < CURRENT_TIMESTAMP();
delete from `flash_client_log` where TIMESTAMPADD(DAY,4, `dttm` ) < CURRENT_TIMESTAMP() and `log_type`=1;
delete from `flash_client_log` where TIMESTAMPADD(DAY,4, `dttm` ) < CURRENT_TIMESTAMP() and `log_type`=0;
END;
[root@xy1 GameServer]# mysqldumpslow -a -s t -t 15 /var/log/mysql_slow.log
Reading mysql slow query log from /var/log/mysql_slow.log
Count: 1344 Time=18446679593472.00s (24792337373626364s) Lock=0.00s (0s) Rows_sent=0.0 (0), Rows_examined=41408.5 (55653024), Rows_affected=0.0 (0), 2users@localhost
update users set ban_type=0, ban_expire=null, ban_expire=null, ban_reason=null
where ban_type > 0 and ban_expire < CURRENT_TIMESTAMP()
Count: 672 Time=18446679593471.92s (12396168686813130s) Lock=0.15s (98s) Rows_sent=0.0 (0), Rows_examined=33953.0 (22816416), Rows_affected=0.0 (0), root[root]@localhost
delete from `flash_client_log` where TIMESTAMPADD(DAY,1, `dttm` ) < CURRENT_TIMESTAMP() and `log_type`=1
Count: 672 Time=18446679593471.92s (12396168686813128s) Lock=0.15s (100s) Rows_sent=0.0 (0), Rows_examined=33953.0 (22816416), Rows_affected=0.0 (0), root[root]@localhost
delete from `flash_client_log` where TIMESTAMPADD(DAY,3, `dttm` ) < CURRENT_TIMESTAMP() and `log_type`=0
Count: 672 Time=18446679593471.91s (12396168686813120s) Lock=0.09s (63s) Rows_sent=0.0 (0), Rows_examined=14599.2 (9810684), Rows_affected=22.5 (15144), root[root]@192.168.1.101
delete from `flash_client_log` where TIMESTAMPADD(DAY,4, `dttm` ) < CURRENT_TIMESTAMP() and `log_type`=1
Count: 672 Time=18446679593470.33s (12396168686812064s) Lock=1.70s (1140s) Rows_sent=0.0 (0), Rows_examined=28865.1 (19397320), Rows_affected=0.4 (237), root[root]@192.168.1.101
delete from `flash_client_log` where TIMESTAMPADD(DAY,4, `dttm` ) < CURRENT_TIMESTAMP() and `log_type`=0
Count: 1 Time=18446679639052.95s (18446679639052s) Lock=0.00s (0s) Rows_sent=0.0 (0), Rows_examined=0.0 (0), Rows_affected=0.0 (0), billiards3d_net[billiards3d_net]@localhost
delete from guests_log WHERE dttm < DATE_SUB(CURDATE(), INTERVAL 1 WEEK)
As previously mentioned in the comments, a bug report was filed based on this question. The bug has now been fixed; the fix is in the 5.5 tree and will be included in the next MariaDB releases: 5.5.54, 10.0.29, 10.1.21, 10.2.3.
The clock runeth backward. You need to take it easy -- do not exceed the speed of light!
Seriously, ... I have seen this periodically for the past 15 years, in all versions of MySQL. The number you are seeing is probably -1 being treated as an UNSIGNED number.
Recommendation: think of it as zero, and move on.
OK, that is hard to do in this case, since you have a summary (mysqldumpslow). The source of the problem is somewhere in the slowlog. If it happens again tomorrow (over a different part of the slowlog), file a bug with http://bugs.mysql.com (assuming there are not already several there).
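To see where a number like 18446679593472 can come from: a slightly negative elapsed time, stored in an unsigned 64-bit field and later printed as seconds, wraps around to a value just under 2^64. Here is a minimal Python sketch of that arithmetic; the assumption that the server keeps this particular counter in microseconds is mine, and the point is only the order of magnitude:
def as_uint64(value):
    """Reinterpret a (possibly negative) integer as an unsigned 64-bit value."""
    return value % 2**64

# Assume the clock stepped backwards, so the measured elapsed time is negative.
elapsed_us = -1                      # -1 microsecond (the units are an assumption)
wrapped_us = as_uint64(elapsed_us)   # 18446744073709551615, i.e. 2**64 - 1
print(wrapped_us / 1_000_000)        # ~18446744073709.55 "seconds" -- the same magnitude the log shows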
For the sake of argument, assume that I have a very simple database:
CREATE TABLE test(idx integer primary key autoincrement, count integer);
This has one row. The database is accessed by a CGI script, which is called by Apache. The script reads the current value of count, increments it, and writes it back. I can run the script as
curl http://localhost/cgi-bin/test
and it tells me what the new value of count is. The script is actually C++; the basic stripped-down code looks like this:
// 'callback' sets 'count' to the current value of count
sqlite3_exec(con, "select count from test where idx=1", callback, &count, 0);
++count;
command << "update test set count=" << count << " where idx=1";
sqlite3_exec(con, command.str().c_str(), 0, 0, 0);
If I write a bash script that runs 20 instances of curl in the background, then I get lots of messages that the database is locked, and the counter is only incremented to 2 or 3, instead of 20. Ok, that's not very surprising, but how do I fix this?
After some experimenting, I've put both sqlite3_exec statements inside an exclusive transaction:
while(true) {
    rc = sqlite3_exec(con, "begin exclusive transaction", 0, 0, 0);
    if(rc != SQLITE_BUSY)
        break;
    std::this_thread::sleep_for(std::chrono::milliseconds(100));
}
if(rc != SQLITE_OK)
    error();
...the select and update code shown above, followed by:
sqlite3_exec(con, "end transaction", 0, 0, 0);
This appears to be rock-solid, but I can't make much sense of the relevant bits of the sqlite docs, and I'm not convinced. Is there anything else I need to think about? Note that I don't have any rollbacks or any other sqlite3 calls apart from sqlite3_open_v2, sqlite3_errmsg, and sqlite3_close, no WAL, and I only test for SQLITE_BUSY. For testing, I run the bash script below with $1 set to 1000 (i.e. 1000 curl instances all running the CGI code). This completes in 10 or 11 seconds, and every time I run it, it shows the final value of count as 1000, so it appears to be working.
Test script:
#!/bin/bash
sqlite3 /var/www/cgi-bin/test.db <<EOF
update test set count=0 where idx=1;
EOF
for ((c=0; c<$1; c++ ))
do
curl http://localhost/cgi-bin/test > /dev/null 2>&1 &
done
wait
sqlite3 /var/www/cgi-bin/test.db <<EOF
select count from test where idx=1;
EOF
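For comparison, here is a minimal sketch of the same pattern (begin an exclusive transaction, retry while the lock is held, then read-modify-write) using Python's standard sqlite3 module. The table layout matches the one above; the function name, retry delay and database path are illustrative, not a drop-in replacement for the C++ code:
import sqlite3
import time

def increment_counter(db_path="test.db", retry_delay=0.1):
    # timeout=0 disables sqlite3's built-in busy handler, so the manual retry
    # loop below is what deals with SQLITE_BUSY, mirroring the C++ code above.
    con = sqlite3.connect(db_path, timeout=0, isolation_level=None)
    try:
        while True:
            try:
                con.execute("begin exclusive transaction")
                break
            except sqlite3.OperationalError:   # "database is locked" (SQLITE_BUSY)
                time.sleep(retry_delay)
        count = con.execute("select count from test where idx=1").fetchone()[0]
        con.execute("update test set count=? where idx=1", (count + 1,))
        con.execute("end transaction")         # END TRANSACTION commits the write
        return count + 1
    finally:
        con.close()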
I have a table with a 1-day hot cache policy on it, and assume that cache utilization of the ADX cluster is less than 80%. Given that, what would be a reliable method to know exactly how much cache space (TB) the table actually occupies? I came up with the following two methods, but they return significantly different numbers:
.show table <TableName> extents hot | summarize sum(ExtentSize)/pow(1024,4)
.show table <TableName> extents | where MaxCreatedOn >= ago(1d) | summarize extent_size=sum(ExtentSize) | project size_in_TB=((extent_size)/pow(1024,4))
The second command returns a value more than 10 times higher than the first one. How can they be that different?
Both commands you ran should return the same value, assuming:
you ran them at the same time (or quickly one after the other)
the effective caching policy is indeed 1 day (have you verified that this is the case?)
Regardless - the most efficient way to get that data point is by using the following command:
.show table TABLENAME details
| project HotExtentSizeTb = HotExtentSize/exp2(40), CachingPolicy
Here's an example from a table of mine, which has a caching policy of 4 days (set at table level), and a retention policy with a soft delete period of 3650 days:
// option 1
// --------
.show table yonis_table extents hot
| summarize HotExtentSizeTb = sum(ExtentSize)/exp2(40)
// returns: HotExtentSizeTb: 0.723723856871402 <---
// option 2: least efficient
// -------------------------
.show table yonis_table extents
| where MaxCreatedOn >= ago(4d)
| summarize HotExtentSizeTb = sum(ExtentSize)/exp2(40)
// returns: HotExtentSizeTb: 0.723723856871402 <---
// option 3: most efficient
// ------------------------
.show table yonis_table details
| project HotExtentSizeTb = HotExtentSize/exp2(40), CachingPolicy, RetentionPolicy
// returns:
HotExtentSizeTb: 0.723723856871402, <---
CachingPolicy: {
"DataHotSpan": "4.00:00:00"
},
RetentionPolicy: {
"SoftDeletePeriod": "3650.00:00:00",
"Recoverability": "Enabled"
}
I've got a query that runs on a view that contains two modifiedBy dates. I need to return all records where either of these dates falls into a specified range.
From everything I've researched it seems I need something like this:
qbdsCustTableAddressView
.addRange(fieldNum(TCMCustTableAddressView, CustTableModified))
.value(
strFmt("(%1>='%2' AND %1<='%3') || (%4>='%2' AND %4<='%3')",
fieldstr(TCMCustTableAddressView, CustTableModified),
DateTimeUtil::toStr(contract.parmFromDateTime()),
DateTimeUtil::toStr(contract.parmToDateTime()),
fieldstr(TCMCustTableAddressView, EBillModified),
0
)
);
When I compare the resulting query to what is produced by:
qbdsCustTableAddressView
.addRange(fieldNum(TCMCustTableAddressView, CustTableModified))
.value(strFmt("%1..%2", contract.parmFromDateTime(), contract.parmtoDateTime()));
Then the above looks correct, but I'm getting a non-specific "Syntax error near 22".
You have a few issues with the parentheses, the single quotation marks, and using AND instead of &&.
This should work:
qbdsCustTableAddressView
.addRange(fieldNum(TCMCustTableAddressView, CustTableModified))
.value(
strFmt("(((%1 >= %2) && (%1 <= %3)) || ((%4 >= %2) && (%4 <= %3)))",
fieldstr(TCMCustTableAddressView, CustTableModified),
DateTimeUtil::toStr(contract.parmFromDateTime()),
DateTimeUtil::toStr(contract.parmToDateTime()),
fieldstr(TCMCustTableAddressView, EBillModified),
0
)
);
Try this:
qbdsCustTableAddressView
.addRange(fieldNum(TCMCustTableAddressView, CustTableModified))
.value(SysQuery::range(contract.parmFromDateTime(), contract.parmToDateTime()));
The difference is the use of SysQuery::range(<from>, <to>).
I don't see an obvious problem, but perhaps that might flush it out for you.
I am processing 1k records; however, I get a system violation error after 800 records. Could someone please suggest how this error can be resolved?
There are designated methods for using OQL; you should take care to:
Use a cursor variable
Declare a size that makes sense for your query
Open the cursor (allocates memory)
Close the cursor (disposes memory)
procedure ShowMoviesInCategory(theCategory : tCategory)
    var Curs : aOQLCursor
    var curMovie : aMovie
    Curs = Motor.OpenOQLCursor
    Curs.BatchSize = 50
    OQL select * from x in aMovie++ where x.Category = theCategory using Curs
    forEach curMovie in Curs
        WriteLn(curMovie)
    endFor
    Motor.CloseOQLCursor(Curs)
endProc
Please also refer to the eWAM Help under OQL and
wTECH 101 (week 1 - day 5, "101A - OQL - Search.pptx").
In Wynsure there is a designated variable for this; please refer to the Wynsure Development Rules.docx.
I'd like to know if there is any way to determine a terminal's background color.
In my case, I'm using gnome-terminal.
It might matter, since it's entirely up to the terminal application to draw the background of its windows, which may even be something other than a plain color.
There's an xterm control sequence for this:
\e]11;?\a
(\e and \a are the ESC and BEL characters, respectively.)
Xterm-compatible terminals should reply with the same sequence, with the question mark replaced by an X11 color specification, e.g. rgb:0000/0000/0000 for black.
I've come up with the following:
#!/bin/sh
#
# Query a property from the terminal, e.g. background color.
#
# XTerm Operating System Commands
# "ESC ] Ps;Pt ST"
oldstty=$(stty -g)
# What to query?
# 11: text background
Ps=${1:-11}
stty raw -echo min 0 time 0
# stty raw -echo min 0 time 1
printf "\033]$Ps;?\033\\"
# xterm needs the sleep (or "time 1", but that is 1/10th second).
sleep 0.00000001
read -r answer
# echo $answer | cat -A
result=${answer#*;}
stty $oldstty
# Remove escape at the end.
echo $result | sed 's/[^rgb:0-9a-f/]\+$//'
Source/Repo/Gist: https://gist.github.com/blueyed/c8470c2aad3381c33ea3
Some links:
Xterm escape code:
http://www.talisman.org/~erlkonig/documents/xterm-color-queries/
https://invisible-island.net/xterm/ctlseqs/ctlseqs.html (dynamic colors / Request Termcap/Terminfo String)
http://thrysoee.dk/xtermcontrol/ (xtermcontrol --get-bg)
https://gist.github.com/blueyed/c8470c2aad3381c33ea3
https://github.com/rocky/bash-term-background/blob/master/term-background.sh
COLORFGBG environment variable (used by Rxvt but not many others...):
It is set to something like <foreground-color>;[<other-setting>;]<background-color>
(the <other-setting>; part is optional), and if <background-color> is in {0,1,2,3,4,5,6,8}, then we have a dark background.
Vim:
code: https://github.com/vim/vim/blob/05c00c038bc16e862e17f9e5c8d5a72af6cf7788/src/option.c#L3974
How does Vim guess background color on xterm?
Emacs... (background-mode) (I think it uses the escape code)
Related questions / reports / discussions:
Is there a way to determine a terminal's background color?
How does Vim guess background color on xterm?
https://unix.stackexchange.com/questions/245378/common-environment-variable-to-set-dark-or-light-terminal-background
https://bugzilla.gnome.org/show_bug.cgi?id=733423
https://github.com/neovim/neovim/issues/2764
E.g. some related snippet from Neovim issue 2764:
/*
* Return "dark" or "light" depending on the kind of terminal.
* This is just guessing! Recognized are:
* "linux" Linux console
* "screen.linux" Linux console with screen
* "cygwin" Cygwin shell
* "putty" Putty program
* We also check the COLORFGBG environment variable, which is set by
* rxvt and derivatives. This variable contains either two or three
* values separated by semicolons; we want the last value in either
* case. If this value is 0-6 or 8, our background is dark.
*/
static char_u *term_bg_default(void)
{
    char_u *p;

    if (STRCMP(T_NAME, "linux") == 0
            || STRCMP(T_NAME, "screen.linux") == 0
            || STRCMP(T_NAME, "cygwin") == 0
            || STRCMP(T_NAME, "putty") == 0
            || ((p = (char_u *)os_getenv("COLORFGBG")) != NULL
                && (p = vim_strrchr(p, ';')) != NULL
                && ((p[1] >= '0' && p[1] <= '6') || p[1] == '8')
                && p[2] == NUL))
        return (char_u *)"dark";
    return (char_u *)"light";
}
About COLORFGBG env, from Gnome BugZilla 733423:
Out of quite a few terminals I've just tried on linux, only urxvt and konsole set it (the ones that don't: xterm, st, terminology, pterm). Konsole and Urxvt use different syntax and semantics, i.e. for me konsole sets it to "0;15" (even though I use the "Black on Light Yellow" color scheme - so why not "default" instead of "15"?), whereas my urxvt sets it to "0;default;15" (it's actually black on white - but why three fields?). So in neither of these two does the value match your specification.
Here is some code of my own that I'm using (via):
import os

def is_dark_terminal_background():
    """
    :return: Whether we have a dark Terminal background color, or None if unknown.
        We currently just check the env var COLORFGBG,
        which some terminals define like "<foreground-color>;<background-color>",
        and if <background-color> in {0,1,2,3,4,5,6,8}, then we have some dark background.
        There are many other complex heuristics we could do here, which work in some cases but not in others.
        See e.g. `here <https://stackoverflow.com/questions/2507337/terminals-background-color>`__.
        But instead of adding more heuristics, we think that explicitly setting COLORFGBG would be the best thing,
        in case it's not like you want it.
    :rtype: bool|None
    """
    if os.environ.get("COLORFGBG", None):
        parts = os.environ["COLORFGBG"].split(";")
        try:
            last_number = int(parts[-1])
            if 0 <= last_number <= 6 or last_number == 8:
                return True
            else:
                return False
        except ValueError:  # not an integer?
            pass
    return None  # unknown (and bool(None) == False, i.e. expect light by default)
Aside from apparently rxvt-only $COLORFGBG, I am not aware that anything else even exists. Mostly people seem to be referring to how vim does it, and even that is an educated guess at best.
As others have mentioned, you may use OSC 11 ? to query the terminal (although support varies).
In bash and gnome-terminal:
read -rs -d \\ -p $'\e]11;?\e\\' BG
echo "$BG" | xxd
00000000: 1b5d 3131 3b72 6762 3a30 3030 302f 3262 .]11;rgb:0000/2b
00000010: 3262 2f33 3633 361b 0a 2b/3636..
Note that Bash has some nice features for this (eg read -s disabling echo, $'' strings for escape codes), but it unfortunately eats the final backslash.
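Whichever way you query it, the OSC 11 reply still has to be turned into a light/dark decision. Here is a minimal Python sketch, assuming the terminal answers in the rgb:RRRR/GGGG/BBBB form shown in the hexdump above; the 0.5 luminance cut-off is an arbitrary choice of mine:
def is_dark_background(osc11_reply):
    """Parse a reply like ']11;rgb:0000/2b2b/3636' and guess dark vs. light."""
    rgb = osc11_reply.split("rgb:", 1)[1]
    # Each channel is 4 hex digits (0000..ffff); taking only the first 4 chars
    # also drops any trailing terminator characters.
    r, g, b = (int(c[:4], 16) / 0xFFFF for c in rgb.split("/")[:3])
    # Rough perceived luminance; below 0.5 we call the background "dark".
    luminance = 0.2126 * r + 0.7152 * g + 0.0722 * b
    return luminance < 0.5

print(is_dark_background("\x1b]11;rgb:0000/2b2b/3636\x1b\\"))  # True (dark)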
Do you mean a method to ascertain the terminal background colour, or set the terminal colour?
If the latter, you could query your terminal's PS1 environment variable to obtain the colour.
There's an article on setting (and so deriving) the terminal colours here:
http://www.ibm.com/developerworks/linux/library/l-tip-prompt/