HANDLER command failed after migrating MariaDB from 10.5.15 to 10.5.16

I have a HANDLER command which works in MariaDB 10.5.15 but fails on 10.5.16.
HANDLER INDEX_NAME READ INDEX_NAME = (....,'......') LIMIT 1;
The failure happens in 'mysql_store_result'.
Is there any change in 10.5.16 that could cause this failure?
From 10.6.x onward, it works again.
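For context, a minimal sketch of the HANDLER read pattern involved; the table name, index name, and key values below are placeholders, not the ones elided in the original statement:
HANDLER my_table OPEN;
HANDLER my_table READ my_index = ('key_part_1', 'key_part_2') LIMIT 1;
HANDLER my_table CLOSE;
When the statement is issued through the C API, the returned row is then fetched via mysql_store_result(), which is where the reported failure occurs.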

Related

How to deploy a text-index listener on NebulaGraph Database?

Here are the steps (and problems):
1. Stop NebulaGraph on 172.16.0.17:
sudo /usr/local/nebula/scripts/nebula.service stop all
and kill -9 the listener process to stop it.
2. Restart the service:
sudo /usr/local/nebula/scripts/nebula.service start all
3. Start the listener:
./bin/nebula-storaged --flagfile /usr/local/nebula/etc/nebula-storaged-listener.conf
4. On the 172.16.0.20 NebulaGraph node, create a new space and use this space. Then add the listener:
ADD LISTENER ELASTICSEARCH 172.16.0.17:9789
5. SHOW LISTENER. Here is the problem: it's offline.
The major reason is that one step is missing:
We must sign in to the text service before adding the listener, i.e., before step 4. Otherwise, an error occurs (a sketch of the missing step follows the log below):
Running on machine: k3s01
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
E1130 19:25:45.874213 20331 MetaClient.cpp:636] Send request to "172.16.0.17":9559, exceed retry limit
E1130 19:25:45.874900 20339 MetaClient.cpp:139] Heartbeat failed, status:RPC failure in MetaClient: N6apache6thrift9transport19TTransportExceptionE: AsyncSocketException: connect failed, type = Socket not open, errno = 111 (Connection refused): Connection refused
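A sketch of the missing step, run from the NebulaGraph console before ADD LISTENER. The Elasticsearch address 172.16.0.18:9200 is a placeholder, and the exact SIGN IN syntax can vary between NebulaGraph versions:
SIGN IN TEXT SERVICE (172.16.0.18:9200);
SHOW TEXT SEARCH CLIENTS;
ADD LISTENER ELASTICSEARCH 172.16.0.17:9789;
SHOW LISTENER;
SHOW TEXT SEARCH CLIENTS should list the registered Elasticsearch endpoint; only after that should SHOW LISTENER report the listener as online.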

Impossible to connect to a MySQL ClearDB database on Heroku

My last build failed with this error message:
Executing script cache:clear [KO]
[KO]
Script cache:clear returned with error code 255
!!
!! In ExceptionConverter.php line 117:
!!
!! An exception occurred in the driver: SQLSTATE[HY000] [1226] User 'xxxxxxxx
!! xxxxx' has exceeded the 'max_user_connections' resource (current value: 15)
And when I try to connect to my database via CLI,
mysql -u xxxxxxxxxx -pxxxxxxxx -h us-cdbr-east-xxx.cleardb.com
I am getting this error message:
ERROR 1040 (HY000): Too many connections
The problem is that the dashboard of the database says: No connections are currently established to the database. Therefore, I guess that a hidden process might be running.
Do you have an idea of how to fix this issue? I have already restarted all the dynos, but it didn't have any effect. Is restarting the dynos the same as restarting the app? I also read that I should stop background workers, but I have no clue how to do this...
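In case it helps, a sketch of the Heroku CLI commands for inspecting and stopping dynos; the app name is a placeholder, and whether a worker dyno exists at all depends on your Procfile:
heroku ps --app your-app-name                  # list running dynos (web, worker, ...)
heroku ps:scale worker=0 --app your-app-name   # stop background worker dynos, if any are defined
heroku restart --app your-app-name             # restart every dyno of the app
Once a connection to the database is possible again, running SHOW PROCESSLIST; from the mysql client shows which clients are actually holding connections.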

About expected timeouts in RobotFramework SSHLibrary

I’m a newb with respect to Robot Framework.
I’m writing a test procedure that is expected to:
1. connect to another machine,
2. perform an image update (which causes the unit to close all services and reboot itself),
3. re-connect to the unit,
4. run a command that returns a known string.
This is all supposed to happen within the __init__.robot module.
What I have noticed is that I must invoke the upgrade procedure in a synchronous, or blocking way, like so
Execute Command sysupgrade upgrade.img
This succeeds in upgrading the unit, but the Robot Framework script hangs executing the command. I suspect the upgrade works in this case because the SSH session stays alive long enough for the upgrade to reach the critical point where the remote host closes the session itself; the host expects this, the upgrade continues, and the upgrade does not fail.
But the remote host appears to close the SSH session in such a way that the Robot Framework script does not detect it, and the script hangs indefinitely.
Trying to work around this, I tried invoking the remote command like so
Execute Command sysupgrade upgrade.img &
But then the update fails because the connection appears to drop, leaving the upgrade procedure incomplete.
If instead I execute it like this
Execute Command sysupgrade upgrade.img &
Sleep 600
Then this also fails, for some reason I am unable to deduce.
However, if I invoke it like this
Execute Command sysupgrade upgrade.img timeout=600
Then the command succeeds in updating the unit, and after the set timeout period the Robot Framework script does indeed resume; but since it hit the timeout, the test has (from the point of view of Robot Framework) failed.
But this is actually an expected failure and should be ignored. The rest of the script would then reconnect to the host and continue the remaining test(s).
Is there a way to treat the timeout condition as non-fatal?
Here is the code. As explained above, the __init__.robot initialization module is expected to perform the upgrade and then reconnect, leaving the other xyz.robot files to be run and continue testing the applications.
The __init__.robot file:
*** Settings ***
Library           OperatingSystem
Library           SSHLibrary
Suite Setup       ValidationInit
Suite Teardown    ValidationTeardown

*** Keywords ***
ValidationInit
    Enable SSH Logging    validation.log
    Open Connection    ${host}
    Login    ${username}    ${password}
    # Upload the firmware to the unit.
    Put File    ${firmware}    upgrade.img    scp=ALL
    # Perform firmware upgrade on the unit.
    Log    "Launch upgrade on unit"
    Execute Command    sysupgrade upgrade.img    timeout=600
    Log    "Restart comms"
    Close All Connections
    Open Connection    ${host}
    Login    ${username}    ${password}

ValidationTeardown
    Close All Connections
    Log    "End tests"
This should work:
Comment    Change ssh client timeout configuration
Set Client Configuration    timeout=600
Comment    "Launch upgrade on unit"
SSHLibrary.Write    sysupgrade upgrade.img
SSHLibrary.Read Until    expectedResult
Close All Connections
You can use 'Run Keyword And Ignore Error' to ignore the failure. Or, as in the snippet above, use the Write keyword if you do not care about the execution result.
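A minimal sketch of the 'Run Keyword And Ignore Error' variant, reusing the keyword call from the question; the ${status} and ${output} variable names are just illustrative:
${status}    ${output}=    Run Keyword And Ignore Error
...    Execute Command    sysupgrade upgrade.img    timeout=600
Log    Upgrade command finished with status ${status}
Close All Connections
Open Connection    ${host}
Login    ${username}    ${password}
${status} is 'PASS' or 'FAIL', so the suite can log or branch on the timeout instead of aborting.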

OpenEdge DB startup error

proenv>proserve dbname -S 2098 -H hostname -B 10000
OpenEdge Release 11.6 as of Fri Oct 16 19:02:26 EDT 2015
11:00:35 BROKER This broker will terminate when session ends. (5405)
11:00:35 BROKER The startup of this database requires 46Mb of shared memory. Maximum segment size is 1024Mb.
11:00:35 BROKER 0: dbname is a void multi-volume database. (613)
11:00:35 BROKER : Removed shared memory with segment_id: 39714816 (16869)
11:00:35 BROKER ** This process terminated with exit code 1. (8619)
I am getting the above error when I try to start the Progress database...
This is the problem:
11:00:35 BROKER 0: dbname is a void multi-volume database. (613)
My guess is that you have just created the DB using prostrct create. You need to procopy an empty DB into your DB so that it has the schema tables.
procopy empty yourdbname
See: http://knowledgebase.progress.com/articles/Article/P7713
The database is void, which means it does not have any metaschema.
First create your database using a .st file (use prostrct create), then copy the metaschema tables from an empty database, e.g.:
procopy emptyn yourdbname
Then try to start your database.
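Putting the answers together, a sketch of the full sequence from proenv; dbname and dbname.st are placeholders, and emptyn stands for the empty1/empty2/empty4/empty8 copy that matches your database block size:
proenv> prostrct create dbname dbname.st
proenv> procopy emptyn dbname
proenv> proserve dbname -S 2098 -H hostname -B 10000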

Jmap - Error connecting to remote debug server

My requirement is to create a dump file of heap memory of a remote server using Jmap.
I did it this way.
jmap -dump:file=remoteDump.txt,format=b 3104
This worked fine, as 3104 is the PID of a process on my local machine.
How do I do the same with a remote server?
I tried
jmap -dump:file=remoteDump.txt,format=b 3104 54.197.228.33:8080
But it failed.
I tried creating a debug server using jsadebugd, as below.
1. Started rmiregistry:
rmiregistry -J-Xbootclasspath/p:$JAVA_HOME/lib/sa-jdi.jar
2. Ran jsadebugd:
>jsadebugd 11594 54.197.228.33:9009
But step 2 throws the following error:
Error attaching to process or starting server: sun.jvm.hotspot.debugger.DebuggerException: Windbg Error: WaitForEvent failed!
at sun.jvm.hotspot.debugger.windbg.WindbgDebuggerLocal.attach0(Native Method)
at sun.jvm.hotspot.debugger.windbg.WindbgDebuggerLocal.attach(WindbgDebuggerLocal.java:152)
at sun.jvm.hotspot.HotSpotAgent.attachDebugger(HotSpotAgent.java:...)
at sun.jvm.hotspot.HotSpotAgent.setupDebuggerWin32(HotSpotAgent.java:...)
at sun.jvm.hotspot.HotSpotAgent.setupDebugger(HotSpotAgent.java:3...)
at sun.jvm.hotspot.HotSpotAgent.go(HotSpotAgent.java:313)
at sun.jvm.hotspot.HotSpotAgent.startServer(HotSpotAgent.java:220)
at sun.jvm.hotspot.DebugServer.run(DebugServer.java:106)
at sun.jvm.hotspot.DebugServer.main(DebugServer.java:45)
at sun.jvm.hotspot.jdi.SADebugServer.main(SADebugServer.java:55)
Help me get out of it.
The reason you cannot attach to the process could be that it is already attached to some other debugger, or that it is running on a different virtual machine (JVM) than the one your jmap comes from.
Make sure the process is not attached to any debugger and that you attach with a jmap from the same JVM version.
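For reference, a sketch of how the SA-based remote attach is usually wired up on older JDKs (jsadebugd was removed in JDK 9). Here 'myserverid' is an arbitrary identifier I chose; the PID and IP are the ones from the question, and the key point is that jsadebugd must run on the machine where the target process lives.
On the remote server (54.197.228.33), next to the target JVM:
rmiregistry -J-Xbootclasspath/p:$JAVA_HOME/lib/sa-jdi.jar &
jsadebugd 3104 myserverid
Then from the local machine:
jmap -dump:file=remoteDump.bin,format=b myserverid@54.197.228.33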
