MariaDB 10.6 -- Waiting for log copy thread to read lsn 0

I am getting the message below, and the backup never completes.
===
[00] 2023-01-17 03:00:00 >> log scanned up to (6213618627472)
[01] 2023-01-17 03:00:01 ...done
[01] 2023-01-17 03:00:01 Compressing and streaming ./aria_log.00000001 to <STDOUT>
[00] 2023-01-17 03:00:01 >> log scanned up to (6213618660632)
[01] 2023-01-17 03:00:02 ...done
[00] 2023-01-17 03:00:02 Waiting for log copy thread to read lsn 0
I am using the command below to take the backup:
/var/lib/mysql/bin/mariabackup --backup --host=hostname --port=3166 --user=marbackup --password=xxx --close-files --open-files-limit 90000 --slave-info --stream=xbstream --no-lock --safe-slave-backup --no-backup-locks --compress --compress-threads=12 --compress-chunk-size=5M > /mysql/files/backups/nfs/hostname_data/backup.xb
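A quick way to tell whether a streamed backup actually finished is to capture mariabackup's stderr and check its final status line. A minimal sketch, assuming mariabackup still prints "completed OK!" on success (the marker it inherited from its xtrabackup lineage) and a hypothetical backup.log path:

```shell
#!/bin/sh
# Sketch: decide whether a mariabackup run completed, judging by its log.
# The "completed OK!" success marker and the log path are assumptions;
# adjust both for your system.
backup_succeeded() {
    # $1: file holding mariabackup's stderr output
    grep -q 'completed OK!' "$1"
}

# Usage (hypothetical paths):
#   mariabackup --backup ... 2>backup.log >backup.xb
#   backup_succeeded backup.log || echo "backup did not complete" >&2
```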

Error on Starting MySQL Cluster 8.0 Data Node on Ubuntu 22.04 LTS

When I start data node 1 (10.1.1.103) of MySQL Cluster 8.0 on Ubuntu 22.04 LTS, I get the following error:
# ndbd
Failed to open /sys/devices/system/cpu/cpu0/cache/index3/shared_cpu_list: No such file or directory
2023-01-02 17:16:55 [ndbd] INFO -- Angel connected to '10.1.1.102:1186'
2023-01-02 17:16:55 [ndbd] INFO -- Angel allocated nodeid: 2
When I start data nodeid 2 (10.1.1.105) I get the following error:
# ndbd
Failed to open /sys/devices/system/cpu/cpu0/cache/index3/shared_cpu_list: No such file or directory
2023-01-02 11:10:04 [ndbd] INFO -- Angel connected to '10.1.1.102:1186'
2023-01-02 11:10:04 [ndbd] ERROR -- Failed to allocate nodeid, error: 'Error: Could not alloc node id at 10.1.1.102:1186: Connection done from wrong host ip 10.1.1.105.'
The management node log file reports (on /var/lib/mysql-cluster/ndb_1_cluster.log):
2023-01-02 11:28:47 [MgmtSrvr] INFO -- Node 2: Initial start, waiting for 3 to connect, nodes [ all: 2 and 3 connected: 2 no-wait: ]
What is the relevance of the failure to open /sys/devices/system/cpu/cpu0/cache/index3/shared_cpu_list (No such file or directory)?
Why is data node on 10.1.1.105 unable to allocate a nodeid?
I initially installed a single Management Node on 10.1.1.102:
wget https://dev.mysql.com/get/Downloads/MySQL-Cluster-8.0/mysql-cluster_8.0.31-1ubuntu22.04_amd64.deb-bundle.tar
tar -xf mysql-cluster_8.0.31-1ubuntu22.04_amd64.deb-bundle.tar
dpkg -i mysql-cluster-community-management-server_8.0.31-1ubuntu22.04_amd64.deb
mkdir /var/lib/mysql-cluster
vi /var/lib/mysql-cluster/config.ini
The configuration set up in config.ini:
[ndbd default]
# Options affecting ndbd processes on all data nodes:
NoOfReplicas=2 # Number of replicas
[ndb_mgmd]
# Management process options:
hostname=10.1.1.102 # Hostname of the manager
datadir=/var/lib/mysql-cluster # Directory for the log files
[ndbd]
hostname=10.1.1.103 # Hostname/IP of the first data node
NodeId=2 # Node ID for this data node
datadir=/usr/local/mysql/data # Remote directory for the data files
[ndbd]
hostname=10.1.1.105 # Hostname/IP of the second data node
NodeId=3 # Node ID for this data node
datadir=/usr/local/mysql/data # Remote directory for the data files
[mysqld]
# SQL node options:
hostname=10.1.1.102 # In our case the MySQL server/client is on the same Droplet as the cluster manager
I then started and killed the running server and created a systemd unit file for the Cluster Manager:
ndb_mgmd -f /var/lib/mysql-cluster/config.ini
pkill -f ndb_mgmd
vi /etc/systemd/system/ndb_mgmd.service
Adding the following configuration:
[Unit]
Description=MySQL NDB Cluster Management Server
After=network.target auditd.service
[Service]
Type=forking
ExecStart=/usr/sbin/ndb_mgmd -f /var/lib/mysql-cluster/config.ini
ExecReload=/bin/kill -HUP $MAINPID
KillMode=process
Restart=on-failure
[Install]
WantedBy=multi-user.target
I then reloaded the systemd daemon to apply the changes, started and enabled the Cluster Manager and checked its active status:
systemctl daemon-reload
systemctl start ndb_mgmd
systemctl enable ndb_mgmd
Here is the status of the Cluster Manager:
# systemctl status ndb_mgmd
● ndb_mgmd.service - MySQL NDB Cluster Management Server
Loaded: loaded (/etc/systemd/system/ndb_mgmd.service; enabled; vendor preset: enabled)
Active: active (running) since Sun 2023-01-01 08:25:07 CST; 27min ago
Main PID: 320972 (ndb_mgmd)
Tasks: 12 (limit: 9273)
Memory: 2.5M
CPU: 35.467s
CGroup: /system.slice/ndb_mgmd.service
└─320972 /usr/sbin/ndb_mgmd -f /var/lib/mysql-cluster/config.ini
Jan 01 08:25:07 nuc systemd[1]: Starting MySQL NDB Cluster Management Server...
Jan 01 08:25:07 nuc ndb_mgmd[320971]: MySQL Cluster Management Server mysql-8.0.31 ndb-8.0.31
Jan 01 08:25:07 nuc systemd[1]: Started MySQL NDB Cluster Management Server.
I then set up a data node on 10.1.1.103, installing dependencies, downloading the data node and setting up its config:
apt update && apt -y install libclass-methodmaker-perl
wget https://dev.mysql.com/get/Downloads/MySQL-Cluster-8.0/mysql-cluster_8.0.31-1ubuntu22.04_amd64.deb-bundle.tar
tar -xf mysql-cluster_8.0.31-1ubuntu22.04_amd64.deb-bundle.tar
dpkg -i mysql-cluster-community-data-node_8.0.31-1ubuntu22.04_amd64.deb
vi /etc/my.cnf
I entered the address of the Cluster Management Node in the configuration:
[mysql_cluster]
# Options for NDB Cluster processes:
ndb-connectstring=10.1.1.102 # location of cluster manager
I then created a data directory and started the node:
mkdir -p /usr/local/mysql/data
ndbd
This is when I got the "Failed to open" error on data node 1 (10.1.1.103):
# ndbd
Failed to open /sys/devices/system/cpu/cpu0/cache/index3/shared_cpu_list: No such file or directory
2023-01-02 17:16:55 [ndbd] INFO -- Angel connected to '10.1.1.102:1186'
2023-01-02 17:16:55 [ndbd] INFO -- Angel allocated nodeid: 2
UPDATED (2023-01-02)
Thank you @MauritzSundell. I corrected the (private) IP addresses above and no longer got:
# ndbd
Failed to open /sys/devices/system/cpu/cpu0/cache/index3/shared_cpu_list: No such file or directory
ERROR: Unable to connect with connect string: nodeid=0,10.1.1.2:1186
Retrying every 5 seconds. Attempts left: 12 11 10 9 8 7 6 5 4 3 2 1, failed.
2023-01-01 14:41:57 [ndbd] ERROR -- Could not connect to management server, error: ''
Also @MauritzSundell, in order to use the ndbmtd process rather than the ndbd process, do any of the configuration files (e.g. /etc/systemd/system/ndb_mgmd.service) need to be altered?
What is the appropriate reference/tutorial documentation for MySQL Cluster 8.0? Is it "MySQL NDB Cluster 8.0" at:
https://downloads.mysql.com/docs/mysql-cluster-excerpt-8.0-en.pdf
Or is it "MySQL InnoDB Cluster" on:
https://dev.mysql.com/doc/refman/8.0/en/mysql-innodb-cluster-introduction.html
Not sure I understand the difference.

Webscrape and download the most recent RStudio build

Is it possible to scrape https://rstudio.com/products/rstudio/download/preview/ and download the most recent Ubuntu server version (RStudio Server 1.3.957 - Ubuntu 18/Debian 10 (64-bit))? The names keep changing, and I would like to build an R script that does the downloading and updating automatically once per day for me.
R script or bash script would be perfect.
Thanks
Yes, we have had that in littler for years as two scripts I run myself every once in a while. It could be generalized for different distro arguments, I suppose.
See
getRStudioDesktop.r
getRStudioServer.r
Demo
edd@rob:/tmp$ getRStudioDesktop.r
--2020-05-06 14:24:53-- https://s3.amazonaws.com/rstudio-ide-build/desktop/bionic/amd64/rstudio-1.4.315-amd64.deb
Resolving s3.amazonaws.com (s3.amazonaws.com)... 52.216.28.190
Connecting to s3.amazonaws.com (s3.amazonaws.com)|52.216.28.190|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 124609040 (119M) [application/x-debian-package]
Saving to: ‘rstudio-1.4.315-amd64.deb’
rstudio-1.4.315-amd64.deb 100%[=====================================>] 118.84M 3.65MB/s in 39s
2020-05-06 14:25:32 (3.08 MB/s) - ‘rstudio-1.4.315-amd64.deb’ saved [124609040/124609040]
edd@rob:/tmp$ getRStudioServer.r
--2020-05-06 14:25:40-- https://s3.amazonaws.com/rstudio-ide-build/server/bionic/amd64/rstudio-server-1.4.315-amd64.deb
Resolving s3.amazonaws.com (s3.amazonaws.com)... 52.216.200.197
Connecting to s3.amazonaws.com (s3.amazonaws.com)|52.216.200.197|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 60524622 (58M) [application/x-debian-package]
Saving to: ‘rstudio-server-1.4.315-amd64.deb’
rstudio-server-1.4.315-amd 100%[=====================================>] 57.72M 3.01MB/s in 22s
2020-05-06 14:26:02 (2.65 MB/s) - ‘rstudio-server-1.4.315-amd64.deb’ saved [60524622/60524622]
edd@rob:/tmp$
and of course
edd@rob:/tmp$ wajig install *1.4.315*deb # a dpkg/apt/... wrapper I like
(Reading database ... 478943 files and directories currently installed.)
Preparing to unpack rstudio-1.4.315-amd64.deb ...
Unpacking rstudio (1.4.315) over (1.4.200) ...
Preparing to unpack rstudio-server-1.4.315-amd64.deb ...
Unpacking rstudio-server (1.4.315) over (1.4.200) ...
Setting up rstudio (1.4.315) ...
Setting up rstudio-server (1.4.315) ...
useradd: user 'rstudio-server' already exists
Created symlink /etc/systemd/system/multi-user.target.wants/rstudio-server.service → /lib/systemd/system/rstudio-server.service.
● rstudio-server.service - RStudio Server
Loaded: loaded (/lib/systemd/system/rstudio-server.service; enabled; vendor preset: enabled)
Active: active (running) since Wed 2020-05-06 14:26:54 CDT; 1s ago
Process: 7315 ExecStart=/usr/lib/rstudio-server/bin/rserver (code=exited, status=0/SUCCESS)
Main PID: 7316 (rserver)
Tasks: 3 (limit: 4915)
Memory: 6.8M
CGroup: /system.slice/rstudio-server.service
└─7316 /usr/lib/rstudio-server/bin/rserver
May 06 14:26:54 rob systemd[1]: Starting RStudio Server...
May 06 14:26:54 rob systemd[1]: Started RStudio Server.
Processing triggers for desktop-file-utils (0.24-1ubuntu1) ...
Processing triggers for mime-support (3.63ubuntu1) ...
Processing triggers for gnome-menus (3.32.0-1ubuntu1) ...
Processing triggers for hicolor-icon-theme (0.17-2) ...
Processing triggers for shared-mime-info (1.10-1) ...
edd@rob:/tmp$
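If you'd rather roll your own than use the littler scripts, the scraping step itself is small. A minimal sketch in shell, assuming the preview page still links the server .deb builds directly (the URL pattern below is guessed from the wget output above, not taken from any official API):

```shell
#!/bin/sh
# Sketch: pull the first rstudio-server .deb link out of a page's HTML.
# The link pattern is an assumption based on the S3 URLs shown above.
extract_server_deb() {
    grep -oE 'https://[^"[:space:]]*rstudio-server-[^"[:space:]]*amd64\.deb' | head -n 1
}

# Usage (hypothetical; fetches the page and downloads the latest build):
#   curl -fsSL https://rstudio.com/products/rstudio/download/preview/ \
#     | extract_server_deb | xargs -r wget -N
```

Dropping the `-N` into a daily cron entry would give the "once per day" automation asked for.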

mpack command in ksh script, FTP file first from Windows

WORK_FILE=RetriesExceeded.csv
MAIL="test@test.org"
HOST=lawsonfax
$FTP -v "$HOST" << EOF
get RetriesExceeded.csv
quit
EOF
archive_file $WORK_FILE
/law/bin/mpack -s 'Fax Retries Exceeded' "$WORK_FILE" "$MAIL"
log_stop
exit 0
Newest error at the bottom, "No such file or directory": [dgftp@lawapp2]/lawif/bin$ get_lawson_fax.ksh
Connected to lawsonfax.phsi.promedica.org.
220 Microsoft FTP Service
331 Password required for dgftp.
230 User logged in.
200 PORT command successful.
125 Data connection already open; Transfer starting.
226 Transfer complete.
352 bytes received in 0.04171 seconds (8.242 Kbytes/s)
local: RetriesExceeded.csv remote: RetriesExceeded.csv
221 Goodbye.
RetriesExceeded.csv: No such file or directory
[dgftp@lawapp2]/lawif/bin$
The last command is now:
CMD="/law/bin/mpack -s 'Fax Retries Exceeded' $WORK_FILE $MAIL"
Suggested change:
/law/bin/mpack -s 'Fax Retries Exceeded' "$WORK_FILE" "$MAIL"
Of course, this only works if you actually have the /law/bin/mpack program.
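One other thing worth checking: archive_file runs before mpack in the script above, and if it moves RetriesExceeded.csv into an archive directory, the file is gone by the time mpack looks for it. A defensive sketch, assuming archive_file behaves that way (send first, archive after, and fail loudly if the download never landed; send_fax_report is a hypothetical wrapper):

```shell
#!/bin/sh
# Sketch: mail the file before archiving it, and check it exists first.
# send_fax_report is a hypothetical helper; /law/bin/mpack and
# archive_file are taken from the script above.
send_fax_report() {
    # $1: file to send   $2: recipient
    if [ ! -f "$1" ]; then
        echo "error: $1 not found after FTP transfer" >&2
        return 1
    fi
    /law/bin/mpack -s 'Fax Retries Exceeded' "$1" "$2"
}

# Usage (order matters: send, then archive):
#   send_fax_report "$WORK_FILE" "$MAIL" && archive_file "$WORK_FILE"
```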

Nginx configs for Dynatrace

nginx version: nginx/1.12.1
Dynatrace version: 7.0
uname -a ==> Linux xxx.elinux.xxx 3.10.0-693.2.2.el7.x86_64 #1 SMP Sat Sep 9 03:55:24 EDT 2017 x86_64 x86_64 x86_64 GNU/Linux
I have been trying, without success, to get nginx metrics sent across to Dynatrace.
When I start nginx, the dynatrace agent logs show -
2017-10-31 21:58:14 [ce68772b] info [native] Successfully loaded agent binary /opt/dynatrace/agent/downloads/appmon/native/7.0.8.1014/linux-x86-64/libdtnginxagent.so
==> dt_dtnginxagent_w1b-dev-et-rz-app-bb-web_26779.0.log <==
2017-10-31 21:58:19.808088 [26779/ce68772b] warning [native] generate_offsets.sh script generated
2017-10-31 21:58:19.808181 [26779/ce68772b] warning [native] Automatic Nginx agent configuration generation failed
2017-10-31 21:58:19.808192 [26779/ce68772b] warning [native] In order to run Nginx with dynaTrace Agent you need to perform this manual step:
2017-10-31 21:58:19.808199 [26779/ce68772b] warning [native] Please run the following command to generate Nginx Agent configuration and then restart Nginx:
2017-10-31 21:58:19.808208 [26779/ce68772b] warning [native] /opt/dynatrace/agent/conf/generate_offsets.sh -b /usr/sbin/nginx -o /opt/dynatrace/agent/conf/dtnginx_self_generated_offsets.json
2017-10-31 22:23:39.474132 [26751/b77f98d5] info [native] New subAgent registered successfully: db846b2b
2017-10-31 22:23:39.485625 [26751/b77f98d5] info [native] sub agent registered with id (sub/slave) db846b2b/dedfe76e9 (UDP queue size: 0)
2017-10-31 22:23:45.090885 [26751/b77f98d5] info [native] New subAgent registered successfully: b805cd43
2017-10-31 22:23:45.090913 [26751/b77f98d5] info [native] sub agent registered with id (sub/slave) b805cd43/d8ec7151e (UDP queue size: 0)
2017-10-31 22:23:52.102050 [26751/c69d78d5] info [native] License update received for sub agent db846b2b: license = ok, uem volume = not exhausted
2017-10-31 22:23:52.102104 [26751/c69d78d5] info [native] Subagent license is updated, id=[0xdb846b2b] = license ok;
2017-10-31 22:23:52.114657 [26751/c69d78d5] info [native] License update received for sub agent b805cd43: license = ok, uem volume = not exhausted
2017-10-31 22:23:52.114689 [26751/c69d78d5] info [native] Subagent license is updated, id=[0xb805cd43] = license ok;
2017-10-31 22:23:59.099111 [26751/c8c0f895] info [native] Unregistering subAgent db846b2b
2017-10-31 22:23:59.099191 [26751/c8c0f895] info [native] sub agent unregistered with id (sub/slave) db846b2b/dedfe76e9 (UDP queue size: 0)
2017-10-31 22:24:05.106795 [26751/c8c0f895] info [native] Unregistering subAgent b805cd43
2017-10-31 22:24:05.106863 [26751/c8c0f895] info [native] sub agent unregistered with id (sub/slave) b805cd43/d8ec7151e (UDP queue size: 0)
As per the documentation, I have updated the '/usr/lib/systemd/system/nginx.service' file as follows -
[Unit]
Description=nginx - high performance web server
Documentation=http://nginx.org/en/docs/
After=network-online.target remote-fs.target nss-lookup.target
Wants=network-online.target
[Service]
Type=forking
PIDFile=/run/nginx.pid
Environment=LD_PRELOAD=/opt/dynatrace/agent/lib64/libdtagent.so
ExecStartPre=/usr/sbin/nginx -t -c /etc/nginx/nginx.conf -p /opt/log/nginx
ExecStart=/usr/sbin/nginx -c /etc/nginx/nginx.conf -p /opt/log/nginx
ExecReload=/bin/kill -s HUP $MAINPID
ExecStop=/bin/kill -s QUIT $MAINPID
[Install]
WantedBy=multi-user.target
What am I missing?

Can I decide how much memory to allocate in an LSF queue

Is there any option to decide how much memory I can allocate in LSF?
I tried
bsub -R "rusage[mem=10000]" sleep 1000s
But when I checked resource usage with "bjobs -l", I got this:
Job <203180>, User <xxxxx>, Project <default>, Status <RUN>, Queue <medium>,
Job Priority <50>, Command <sleep 1000s>
Thu Apr 12 09:49:56: Submitted from host <xxxx>, CWD <xx>, Requested Resources <rusage[mem=10000]>;
Thu Apr 12 09:49:58: Started on <xxxx>, Execution Home <xxxx>, Execution CWD <xxxxx>;
Thu Apr 12 09:49:58: Resource usage collected.
MEM: 3 Mbytes; SWAP: 16 Mbytes; NTHREAD: 1
PGID: 28231; PIDs: 28231
Where am I going wrong?
bsub -R "rusage[mem=10000]" will initially reserve 10000 MB of memory.
Whereas:
"MEM: 3 Mbytes" is the total resident memory usage of all currently running processes in your job.
"SWAP: 16 Mbytes" is the total virtual memory usage of all currently running processes in your job.
The values "3 Mbytes" and "16 Mbytes" may change during the runtime.
On my system we use -M: for example, bsub -M 1 requests a 1 GB memory limit, and the job is killed if it goes above that limit.
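To get both behaviours at once (reserve memory for scheduling and kill the job on overrun), the two flags can be combined. A sketch that only builds the command line, assuming your cluster interprets -M in MB (the unit is site-configurable via LSF_UNIT_FOR_LIMITS in lsf.conf, which is why the answer above could use -M 1 for 1 GB):

```shell
#!/bin/sh
# Sketch: build a bsub invocation that both reserves memory (rusage)
# and enforces a hard limit (-M). Units assumed MB; check
# LSF_UNIT_FOR_LIMITS in lsf.conf for your site.
build_bsub() {
    # $1: memory in MB   $2...: command to run
    mem=$1; shift
    printf 'bsub -R "rusage[mem=%s]" -M %s %s\n' "$mem" "$mem" "$*"
}

# Usage: eval "$(build_bsub 10000 sleep 1000s)" on a host with LSF installed.
```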
