No Database Connection: MariaDB Won't Start

When I run the command sudo /opt/bitnami/ctlscript.sh restart, my Apache server starts but MariaDB does not. Below are the logs from journalctl -xe:
Aug 16 16:29:54 ip-172-26-6-254 bitnami[7264]: ## 2022-08-16 16:29:54+00:00 ## INFO ## Starting services...
Aug 16 16:29:55 ip-172-26-6-254 bitnami[7264]: 2022-08-16T16:29:55.550Z - info: Saving configuration info to disk
Aug 16 16:29:55 ip-172-26-6-254 bitnami[7264]: 2022-08-16T16:29:55.915Z - info: Performing service start operation for php
Aug 16 16:29:56 ip-172-26-6-254 bitnami[7264]: php 16:29:56.22 INFO ==> php-fpm is already running
Aug 16 16:29:56 ip-172-26-6-254 bitnami[7264]: 2022-08-16T16:29:56.225Z - info: Performing service start operation for apache
Aug 16 16:29:56 ip-172-26-6-254 bitnami[7264]: apache 16:29:56.50 INFO ==> apache is already running
Aug 16 16:29:56 ip-172-26-6-254 bitnami[7264]: 2022-08-16T16:29:56.503Z - info: Skipping service start operation for varnish
Aug 16 16:29:56 ip-172-26-6-254 bitnami[7264]: 2022-08-16T16:29:56.503Z - info: Performing service start operation for mariadb
Aug 16 16:30:56 ip-172-26-6-254 bitnami[7264]: mariadb 16:30:56.82 ERROR ==> mariadb did not start
Aug 16 16:30:56 ip-172-26-6-254 bitnami[7264]: 2022-08-16T16:30:56.831Z - error: Unable to perform start operation Export start for mariadb failed with exit code 1
Aug 16 16:30:56 ip-172-26-6-254 bitnami[7264]: ## 2022-08-16 16:30:56+00:00 ## INFO ## Running /opt/bitnami/var/init/post-start/010_bitnami_agent_extra...
Aug 16 16:30:56 ip-172-26-6-254 bitnami[7264]: ## 2022-08-16 16:30:56+00:00 ## INFO ## Running /opt/bitnami/var/init/post-start/020_bitnami_agent...
Aug 16 16:30:56 ip-172-26-6-254 bitnami[7264]: ## 2022-08-16 16:30:56+00:00 ## INFO ## Running /opt/bitnami/var/init/post-start/030_update_welcome_file...
Aug 16 16:30:56 ip-172-26-6-254 bitnami[7264]: ## 2022-08-16 16:30:56+00:00 ## INFO ## Running /opt/bitnami/var/init/post-start/040_bitnami_credentials_file...
Aug 16 16:30:56 ip-172-26-6-254 bitnami[7264]: ## 2022-08-16 16:30:56+00:00 ## INFO ## Running /opt/bitnami/var/init/post-start/050_clean_metadata...
Aug 16 16:30:56 ip-172-26-6-254 sudo[7255]: pam_unix(sudo:session): session closed for user root
Aug 16 16:30:56 ip-172-26-6-254 systemd[1]: bitnami.service: Control process exited, code=exited, status=1/FAILURE
-- Subject: Unit process exited
-- Defined-By: systemd
-- Support: https://www.debian.org/support
--
-- An ExecStart= process belonging to unit bitnami.service has exited.
--
-- The process' exit code is 'exited' and its exit status is 1.
When I try restarting just MariaDB (sudo /opt/bitnami/ctlscript.sh restart mariadb), the command hangs for about 1 minute and then I get the following (unhelpful) error:
Failed to restart mariadb: Failed to restart mariadb
I'm not sure what else to do here.
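In case it helps anyone hitting the same symptom: the ctlscript output hides the underlying reason, but MariaDB writes it to its own error log. A diagnostic sketch, assuming the typical Bitnami layout (the log paths are assumptions; adjust for your image). A full disk is a common cause of exactly this failure:

```shell
# Typical Bitnami MariaDB error-log locations (assumed -- adjust as needed):
for f in /opt/bitnami/mariadb/logs/mysqld.log /opt/bitnami/mysql/logs/mysqld.log; do
  [ -r "$f" ] && { echo "== $f =="; tail -n 50 "$f"; }
done

# MariaDB often fails to start silently when the disk is full:
DISK="$(df -h / | tail -n 1)"
echo "$DISK"
```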


Re-run airflow historical runs

I have a dag with the following parameters:
start_date=datetime(2020, 7, 6),
schedule_interval="0 12 * * *",
concurrency=2,
max_active_runs=6,
catchup=True
I had to re-process a year's historical data, so I reset the dag run status for the past year. In the middle of re-processing, I realised I needed to re-process a few of the latest days first due to a business priority change, but airflow seems to be somewhat random in picking which days to run (though it often favours older days), so the tree view of my dag runs is a bit messed up, and it's going to take quite a while to catch up on all runs.
I had two choices:
Set all old dag run days to failure
Delete old dag run days
To avoid generating an excessive number of failure notifications, I chose the second option.
Here is a quick illustration. Before I delete, in my tree view, I have:
1 Sep 2021 - 1 Jan 2022: dag run successful
2 Jan 2022 - 3 Jan 2022: dag running
4 Jan 2022 - 1 Aug 2022: dag scheduled
2 Aug 2022 - 6 Aug 2022: dag run successful
7 Aug 2022 - 1 Sep 2022: dag scheduled
To speed up the processing of the August data, I deleted the dag runs scheduled between 4 Jan and 1 Aug; the tree view now becomes:
1 Sep 2021 - 1 Jan 2022: dag run successful
2 Jan 2022 - 3 Jan 2022: dag running
2 Aug 2022 - 6 Aug 2022: dag run successful
7 Aug 2022 - 1 Sep 2022: dag scheduled
Note that dag runs between 4 Jan 2022 and 1 Aug 2022 are now completely gone from the tree view.
Unfortunately, because the latest dag run is 6 Aug 2022, airflow thinks only the runs from 7 Aug onwards need to catch up, so all the deleted runs between 4 Jan and 1 Aug are ignored.
So my question now is: given that I don't want to re-process the days that have already been re-processed, is there a way to tell airflow to re-run the days I have deleted?
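A sketch of one possible approach: as far as I know, the scheduler's catchup only looks past the latest run, but deleted runs can be re-created explicitly with the backfill CLI, which by default does not re-run task instances that already succeeded. The dag id below is a placeholder; the dates cover the deleted window:

```shell
# Hypothetical dag id -- substitute your own.
DAG_ID="my_dag"
CMD="airflow dags backfill --start-date 2022-01-04 --end-date 2022-08-01 $DAG_ID"
echo "$CMD"  # inspect first, then run it on the scheduler host
```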

OpenVPN Server TCP_CLIENT link local: (not bound)

I've been trying to set up an OpenVPN server on my Linux machine recently, but I continuously get the same error every time I try to connect to my server.
My settings are like this:
proto tcp
port 443
resolv-retry infinite
nobind
user nobody
group nogroup
cipher AES-256-CBC
auth SHA256
script-security 2
up /etc/openvpn/update-systemd-resolved
down /etc/openvpn/update-systemd-resolved
down-pre
dhcp-option DOMAIN-ROUTE .
I have checked the settings on my server and local computer many times and they match. I still don't know what to do about it. Thanks in advance! The client-side connection log:
Sat Nov 27 23:45:11 2021 OpenVPN 2.4.7 x86_64-pc-linux-gnu [SSL (OpenSSL)] [LZO] [LZ4] [EPOLL] [PKCS11] [MH/PKTINFO] [AEAD] built on Jul 19 2021
Sat Nov 27 23:45:11 2021 library versions: OpenSSL 1.1.1f 31 Mar 2020, LZO 2.10
Sat Nov 27 23:45:11 2021 NOTE: the current --script-security setting may allow this configuration to call user-defined scripts
Sat Nov 27 23:45:11 2021 TCP/UDP: Preserving recently used remote address: [AF_INET]myserverip:443
Sat Nov 27 23:45:11 2021 Socket Buffers: R=[131072->131072] S=[16384->16384]
Sat Nov 27 23:45:11 2021 Attempting to establish TCP connection with [AF_INET]myserverip:443 [nonblock]
Sat Nov 27 23:45:12 2021 TCP connection established with [AF_INET]myserverip:443
Sat Nov 27 23:45:12 2021 TCP_CLIENT link local: (not bound)
Sat Nov 27 23:45:12 2021 TCP_CLIENT link remote: [AF_INET]myserverip:443
Sat Nov 27 23:45:12 2021 NOTE: UID/GID downgrade will be delayed because of --client, --pull, or --up-delay
Sat Nov 27 23:45:12 2021 Connection reset, restarting [0]
Sat Nov 27 23:45:12 2021 SIGUSR1[soft,connection-reset] received, process restarting
Sat Nov 27 23:45:12 2021 Restart pause, 5 second(s)
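For what it's worth, the TCP_CLIENT link local: (not bound) line itself is normal when nobind is set; the actual failure is the Connection reset immediately after the TCP connection is established, which often means something other than OpenVPN answered on that port (port 443 is frequently occupied by a web server). A quick server-side check, as a sketch:

```shell
# On the server: see whether anything is listening on tcp/443
# (ss may be unavailable on minimal systems, hence the fallback message).
LISTENER="$(command -v ss >/dev/null && ss -tln | grep ':443 ' || echo 'nothing listening on :443')"
echo "$LISTENER"
```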

Not able to grep pattern between a date range

I am trying to fetch lines that fall within a specified date range. I have tried many online solutions, but they did not work.
Below is the log file I have:
Nov 21 03:31:28 Sample Log test
Nov 21 03:32:01 Sample Log test
Nov 21 03:33:01 Sample Log test
Nov 21 03:34:01 Sample Log test
Nov 21 03:35:02 Sample Log test
Nov 21 03:36:01 Sample Log test
Nov 21 03:37:01 Sample Log test
Nov 21 03:38:01 Sample Log test
Nov 21 03:39:01 Sample Log test
Nov 21 03:39:01 Sample Log test
Nov 21 03:39:01 Sample Log test
Nov 21 03:40:01 Sample Log test
Nov 21 03:40:01 Sample Log test
Nov 21 03:40:29 Sample Log test
Nov 21 03:40:29 Sample Log test
Nov 21 03:41:01 Sample Log test
Nov 21 03:41:22 Sample Log test
Nov 21 03:41:22 Sample Log test
Nov 21 03:41:43 Sample Log test
Nov 21 03:41:43 Sample Log test
Nov 21 03:42:01 Sample Log test
The awk command I am using is:
-bash-4.2$ b="03:31:28"
-bash-4.2$ e="03:41:00"
-bash-4.2$ awk -v "b=$b" -v "e=$e" -F ',' '$1 >= b && $1 <= e' ~/test
-bash-4.2$
It returns no output.
Using awk, comparing the timestamp as the third whitespace-delimited field, $3 (your command's -F ',' makes the whole line $1, because the log contains no commas, so the comparison never matches):
$ awk -v b=03:31:28 -v e=03:41:00 '$3 >= b && $3 <= e' input_file
Nov 21 03:31:28 Sample Log test
Nov 21 03:32:01 Sample Log test
Nov 21 03:33:01 Sample Log test
Nov 21 03:34:01 Sample Log test
Nov 21 03:35:02 Sample Log test
Nov 21 03:36:01 Sample Log test
Nov 21 03:37:01 Sample Log test
Nov 21 03:38:01 Sample Log test
Nov 21 03:39:01 Sample Log test
Nov 21 03:39:01 Sample Log test
Nov 21 03:39:01 Sample Log test
Nov 21 03:40:01 Sample Log test
Nov 21 03:40:01 Sample Log test
Nov 21 03:40:29 Sample Log test
Nov 21 03:40:29 Sample Log test
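A side note on why this works: the test is a plain string comparison, which is safe here because HH:MM:SS timestamps are fixed-width and zero-padded, so lexical and chronological order coincide within a single day. A self-contained check with made-up lines:

```shell
# Lines outside [b, e] are dropped; only the 03:35:02 line survives.
printf 'Nov 21 03:30:00 Sample Log test\nNov 21 03:35:02 Sample Log test\nNov 21 03:42:01 Sample Log test\n' |
  awk -v b=03:31:28 -v e=03:41:00 '$3 >= b && $3 <= e'
# -> Nov 21 03:35:02 Sample Log test
```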

Ironic conductor dies as soon as it starts

I have installed standalone ironic on a Vagrant CentOS 7 box using bifrost.
The ironic.conf is as below:
[DEFAULT]
debug = True
# NOTE(TheJulia): Until Bifrost supports neutron or some other network
# configuration besides a flat network where bifrost orchestrates the
# control instead of ironic, noop is the only available network driver.
enabled_network_interfaces = noop
default_deploy_interface = direct
enabled_inspect_interfaces = no-inspect,inspector,ilo
default_inspect_interface = inspector
enabled_boot_interfaces = ilo-virtual-media,ilo-pxe
enabled_management_interfaces = ilo,ipmitool,ucsm
enabled_power_interfaces = ilo,ipmitool,ucsm
enabled_console_interfaces = ilo,no-console
enabled_hardware_types = ipmi,ilo,cisco-ucs-managed
enabled_drivers = agent_ipmitool,agent_ilo,agent_ucs,pxe_ipmitool,pxe_ilo
rabbit_userid = ironic
rabbit_password = aSecretPassword473z
auth_strategy = noauth
[pxe]
pxe_append_params = systemd.journald.forward_to_console=yes
pxe_config_template = $pybasedir/drivers/modules/ipxe_config.template
tftp_server = 10.0.15.10
tftp_root = /tftpboot
pxe_bootfile_name = undionly.kpxe
ipxe_enabled = true
ipxe_boot_script = /etc/ironic/boot.ipxe
tftp_master_path = /var/lib/ironic/master_images
[deploy]
http_url = http://10.0.15.10:8080/
http_root = /httpboot
[conductor]
clean_nodes = false
automated_clean = false
[database]
connection = mysql+pymysql://ironic:aSecretPassword473z@localhost/ironic?charset=utf8
[dhcp]
dhcp_provider = none
[ilo]
use_web_server_for_images = true
[inspector]
enabled = true
auth_type=none
endpoint_override=http://127.0.0.1:5050
[service_catalog]
auth_type = none
endpoint_override = http://10.0.15.10:6385
I have also setup dhcp and tftp configurations for a pxe boot. But the ironic conductor keeps dying.
[root@mgmt group_vars]# systemctl restart ironic-conductor
[root@mgmt group_vars]# systemctl status ironic-conductor
● ironic-conductor.service - ironic-conductor service
Loaded: loaded (/usr/lib/systemd/system/ironic-conductor.service; enabled; vendor preset: disabled)
Active: active (running) since Wed 2018-06-27 07:34:29 UTC; 1s ago
Main PID: 4264 (ironic-conducto)
CGroup: /system.slice/ironic-conductor.service
└─4264 /usr/bin/python2 /bin/ironic-conductor --config-file /etc/ironic/ironic.conf
Jun 27 07:34:31 mgmt ironic-conductor[4264]: 2018-06-27 07:34:31.255 4264 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_host = localhost log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2908
Jun 27 07:34:31 mgmt ironic-conductor[4264]: 2018-06-27 07:34:31.255 4264 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_hosts = ['localhost:5672'] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2908
Jun 27 07:34:31 mgmt ironic-conductor[4264]: 2018-06-27 07:34:31.255 4264 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2908
Jun 27 07:34:31 mgmt ironic-conductor[4264]: 2018-06-27 07:34:31.255 4264 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2908
Jun 27 07:34:31 mgmt ironic-conductor[4264]: 2018-06-27 07:34:31.255 4264 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_max_retries = 0 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2908
Jun 27 07:34:31 mgmt ironic-conductor[4264]: 2018-06-27 07:34:31.255 4264 WARNING oslo_config.cfg [-] Option "rabbit_password" from group "DEFAULT" is deprecated. Use option "rabbit_password" from group "oslo_messaging_rabbit".
Jun 27 07:34:31 mgmt ironic-conductor[4264]: 2018-06-27 07:34:31.256 4264 WARNING oslo_config.cfg [-] Option "rabbit_password" from group "oslo_messaging_rabbit" is deprecated for removal (Replaced by [DEFAULT]/transport_url). Its value may be sile...nored in the future.
Jun 27 07:34:31 mgmt ironic-conductor[4264]: 2018-06-27 07:34:31.256 4264 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_password = **** log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2908
Jun 27 07:34:31 mgmt ironic-conductor[4264]: 2018-06-27 07:34:31.256 4264 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_port = 5672 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2908
Jun 27 07:34:31 mgmt ironic-conductor[4264]: 2018-06-27 07:34:31.256 4264 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2908
Hint: Some lines were ellipsized, use -l to show in full.
[root@mgmt group_vars]# systemctl status ironic-conductor
● ironic-conductor.service - ironic-conductor service
Loaded: loaded (/usr/lib/systemd/system/ironic-conductor.service; enabled; vendor preset: disabled)
Active: inactive (dead) since Wed 2018-06-27 07:34:33 UTC; 1s ago
Process: 4264 ExecStart=/bin/ironic-conductor --config-file /etc/ironic/ironic.conf (code=exited, status=0/SUCCESS)
Main PID: 4264 (code=exited, status=0/SUCCESS)
Jun 27 07:34:31 mgmt ironic-conductor[4264]: 2018-06-27 07:34:31.255 4264 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_host = localhost log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2908
Jun 27 07:34:31 mgmt ironic-conductor[4264]: 2018-06-27 07:34:31.255 4264 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_hosts = ['localhost:5672'] log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2908
Jun 27 07:34:31 mgmt ironic-conductor[4264]: 2018-06-27 07:34:31.255 4264 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2908
Jun 27 07:34:31 mgmt ironic-conductor[4264]: 2018-06-27 07:34:31.255 4264 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2908
Jun 27 07:34:31 mgmt ironic-conductor[4264]: 2018-06-27 07:34:31.255 4264 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_max_retries = 0 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2908
Jun 27 07:34:31 mgmt ironic-conductor[4264]: 2018-06-27 07:34:31.255 4264 WARNING oslo_config.cfg [-] Option "rabbit_password" from group "DEFAULT" is deprecated. Use option "rabbit_password" from group "oslo_messaging_rabbit".
Jun 27 07:34:31 mgmt ironic-conductor[4264]: 2018-06-27 07:34:31.256 4264 WARNING oslo_config.cfg [-] Option "rabbit_password" from group "oslo_messaging_rabbit" is deprecated for removal (Replaced by [DEFAULT]/transport_url). Its value may be sile...nored in the future.
Jun 27 07:34:31 mgmt ironic-conductor[4264]: 2018-06-27 07:34:31.256 4264 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_password = **** log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2908
Jun 27 07:34:31 mgmt ironic-conductor[4264]: 2018-06-27 07:34:31.256 4264 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_port = 5672 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2908
Jun 27 07:34:31 mgmt ironic-conductor[4264]: 2018-06-27 07:34:31.256 4264 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python2.7/site-packages/oslo_config/cfg.py:2908
Hint: Some lines were ellipsized, use -l to show in full.
As soon as I restart it, it dies within seconds, and I am not able to figure out why. Afterwards, ironic-conductor is no longer in the process list:
ps aux | grep ironic
ironic 565 1.6 7.6 224256 18396 ? Ss 09:59 0:44 /usr/bin/python2 /bin/ironic-inspector --config-file /etc/ironic-inspector/inspector.conf
ironic 5533 57.2 33.4 254396 80660 ? Rs 10:44 0:02 /usr/bin/python2 /bin/ironic-api --config-file /etc/ironic/ironic.conf
ironic 5545 1.0 31.5 254396 76112 ? S 10:44 0:00 /usr/bin/python2 /bin/ironic-api --config-file /etc/ironic/ironic.conf
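Since the unit exits with status=0/SUCCESS, systemd has no failure to report, and systemctl status ellipsizes the interesting lines. Two ways to see more, as a sketch (unit name and paths taken from the question):

```shell
# Full, un-ellipsized unit log (guarded: journalctl may be absent on minimal hosts):
command -v journalctl >/dev/null \
  && journalctl -u ironic-conductor.service -l --no-pager | tail -n 40 \
  || echo "journalctl not available"

# Alternatively, run the conductor in the foreground to see why it exits:
#   sudo -u ironic /usr/bin/python2 /bin/ironic-conductor --config-file /etc/ironic/ironic.conf
```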

Appending BSON arrays in MongoDB (rmongodb)

I found this information on how to use the $push operator to add new values to an array. However, I can't seem to get this working with rmongodb.
Suppose we have the following doc in the DB
_id : 7 51005201f8ab44f1690f9526
tags : 4
1 : 2 a
2 : 2 b
3 : 2 c
I'd like to add a value to the array tags. Here's what I tried:
q <- mongo.bson.from.list(list(tags="a"))
TRY 1
Here I tried using the $push operator
Code
bnew <- mongo.bson.from.list(list("$push"=list("tags"="d")))
> mongo.update(mongo=con, ns, criteria=q, objNew=bnew)
[1] FALSE
Logfile
Thu Jan 24 16:42:27 [initandlisten] MongoDB starting : pid=6260 port=27017 dbpath=\data\db\ 64-bit host=ASHB-109C-02
Thu Jan 24 16:42:27 [initandlisten] db version v2.2.2, pdfile version 4.5
Thu Jan 24 16:42:27 [initandlisten] git version: d1b43b61a5308c4ad0679d34b262c5af9d664267
Thu Jan 24 16:42:27 [initandlisten] build info: windows sys.getwindowsversion(major=6, minor=1, build=7601, platform=2, service_pack='Service Pack 1') BOOST_LIB_VERSION=1_49
Thu Jan 24 16:42:27 [initandlisten] options: { logpath: "log_1.txt" }
Thu Jan 24 16:42:27 [initandlisten] journal dir=/data/db/journal
Thu Jan 24 16:42:27 [initandlisten] recover : no journal files present, no recovery needed
Thu Jan 24 16:42:27 [initandlisten] waiting for connections on port 27017
Thu Jan 24 16:42:27 [websvr] admin web console waiting for connections on port 28017
Thu Jan 24 16:42:36 [initandlisten] connection accepted from 127.0.0.1:52419 #1 (1 connection now open)
Thu Jan 24 16:42:44 [conn1] __test.test Assertion failure x == _nfields src\mongo\db\jsobj.cpp 1250
Thu Jan 24 16:42:44 [conn1] mongod.exe ...\src\mongo\util\stacktrace.cpp(161) mongo::printStackTrace+0x3e
Thu Jan 24 16:42:44 [conn1] mongod.exe ...\src\mongo\util\assert_util.cpp(109) mongo::verifyFailed+0xdc
Thu Jan 24 16:42:44 [conn1] mongod.exe ...\src\mongo\db\jsobj.cpp(1250) mongo::BSONIteratorSorted::BSONIteratorSorted+0xf3
Thu Jan 24 16:42:44 [conn1] mongod.exe ...\src\mongo\db\ops\update_internal.cpp(906) mongo::ModSetState::createNewFromMods+0xa3
Thu Jan 24 16:42:44 [conn1] mongod.exe ...\src\mongo\db\ops\update.cpp(370) mongo::_updateObjects+0x15a2
Thu Jan 24 16:42:44 [conn1] mongod.exe ...\src\mongo\db\instance.cpp(573) mongo::receivedUpdate+0x60d
Thu Jan 24 16:42:44 [conn1] mongod.exe ...\src\mongo\db\instance.cpp(437) mongo::assembleResponse+0x626
Thu Jan 24 16:42:44 [conn1] mongod.exe ...\src\mongo\db\db.cpp(192) mongo::MyMessageHandler::process+0xf5
Thu Jan 24 16:42:44 [conn1] mongod.exe ...\src\mongo\util\net\message_server_port.cpp(86) mongo::pms::threadRun+0x59a
Thu Jan 24 16:42:44 [conn1] mongod.exe ...\src\third_party\boost\libs\thread\src\win32\thread.cpp(180) boost::`anonymous namespace'::thread_start_function+0x21
Thu Jan 24 16:42:44 [conn1] mongod.exe f:\dd\vctools\crt_bld\self_64_amd64\crt\src\threadex.c(314) _callthreadstartex+0x17
Thu Jan 24 16:42:44 [conn1] mongod.exe f:\dd\vctools\crt_bld\self_64_amd64\crt\src\threadex.c(292) _threadstartex+0x7f
Thu Jan 24 16:42:44 [conn1] kernel32.dll BaseThreadInitThunk+0xd
Thu Jan 24 16:42:44 [conn1] update __test.test query: { tags: "a" } update: { $push: { tags: "d" } } nscanned:1 keyUpdates:0 exception: assertion src\mongo\db\jsobj.cpp:1250 locks(micros) w:398335 399ms
Thu Jan 24 16:42:48 CTRL_CLOSE_EVENT signal
Thu Jan 24 16:42:48 [consoleTerminate] got CTRL_CLOSE_EVENT, will terminate after current cmd ends
Thu Jan 24 16:42:48 [consoleTerminate] now exiting
Thu Jan 24 16:42:48 dbexit:
Thu Jan 24 16:42:48 [consoleTerminate] shutdown: going to close listening sockets...
Thu Jan 24 16:42:48 [consoleTerminate] closing listening socket: 496
Thu Jan 24 16:42:48 [consoleTerminate] closing listening socket: 516
Thu Jan 24 16:42:48 [consoleTerminate] shutdown: going to flush diaglog...
Thu Jan 24 16:42:48 [consoleTerminate] shutdown: going to close sockets...
Thu Jan 24 16:42:48 [consoleTerminate] shutdown: waiting for fs preallocator...
Thu Jan 24 16:42:48 [consoleTerminate] shutdown: lock for final commit...
Thu Jan 24 16:42:48 [consoleTerminate] shutdown: final commit...
Thu Jan 24 16:42:48 [conn1] end connection 127.0.0.1:52419 (0 connections now open)
Thu Jan 24 16:42:48 [consoleTerminate] shutdown: closing all files...
Thu Jan 24 16:42:48 [consoleTerminate] closeAllFiles() finished
Thu Jan 24 16:42:48 [consoleTerminate] journalCleanup...
Thu Jan 24 16:42:48 [consoleTerminate] removeJournalFiles
Thu Jan 24 16:42:48 [consoleTerminate] shutdown: removing fs lock...
Thu Jan 24 16:42:48 dbexit: really exiting now
TRY 2
Here I tried using the $addToSet operator
Code
buf <- mongo.bson.buffer.create()
mongo.bson.buffer.start.object(buf, "$addToSet")
mongo.bson.buffer.start.object(buf, name="tags")
mongo.bson.buffer.start.array(buf, "$each")
values <- list("d", "e", "f")
for (ii in seq(along=values)) {
  mongo.bson.buffer.append(
    buf=buf,
    name=as.character(ii),
    value=values[[ii]]
  )
}
mongo.bson.buffer.finish.object(buf)
mongo.bson.buffer.finish.object(buf)
mongo.bson.buffer.finish.object(buf)
bnew <- mongo.bson.from.buffer(buf)
bnew
> mongo.update(mongo=con, ns, criteria=q, objNew=bnew)
[1] FALSE
Logfile
Thu Jan 24 16:43:52 [initandlisten] MongoDB starting : pid=4184 port=27017 dbpath=\data\db\ 64-bit host=ASHB-109C-02
Thu Jan 24 16:43:52 [initandlisten] db version v2.2.2, pdfile version 4.5
Thu Jan 24 16:43:52 [initandlisten] git version: d1b43b61a5308c4ad0679d34b262c5af9d664267
Thu Jan 24 16:43:52 [initandlisten] build info: windows sys.getwindowsversion(major=6, minor=1, build=7601, platform=2, service_pack='Service Pack 1') BOOST_LIB_VERSION=1_49
Thu Jan 24 16:43:52 [initandlisten] options: { logpath: "log_2.txt" }
Thu Jan 24 16:43:52 [initandlisten] journal dir=/data/db/journal
Thu Jan 24 16:43:52 [initandlisten] recover : no journal files present, no recovery needed
Thu Jan 24 16:43:52 [initandlisten] waiting for connections on port 27017
Thu Jan 24 16:43:52 [websvr] admin web console waiting for connections on port 28017
Thu Jan 24 16:43:57 [initandlisten] connection accepted from 127.0.0.1:52435 #1 (1 connection now open)
Thu Jan 24 16:44:27 [conn1] __test.test Assertion failure x == _nfields src\mongo\db\jsobj.cpp 1250
Thu Jan 24 16:44:28 [conn1] mongod.exe ...\src\mongo\util\stacktrace.cpp(161) mongo::printStackTrace+0x3e
Thu Jan 24 16:44:28 [conn1] mongod.exe ...\src\mongo\util\assert_util.cpp(109) mongo::verifyFailed+0xdc
Thu Jan 24 16:44:28 [conn1] mongod.exe ...\src\mongo\db\jsobj.cpp(1250) mongo::BSONIteratorSorted::BSONIteratorSorted+0xf3
Thu Jan 24 16:44:28 [conn1] mongod.exe ...\src\mongo\db\ops\update_internal.cpp(906) mongo::ModSetState::createNewFromMods+0xa3
Thu Jan 24 16:44:28 [conn1] mongod.exe ...\src\mongo\db\ops\update.cpp(370) mongo::_updateObjects+0x15a2
Thu Jan 24 16:44:28 [conn1] mongod.exe ...\src\mongo\db\instance.cpp(573) mongo::receivedUpdate+0x60d
Thu Jan 24 16:44:28 [conn1] mongod.exe ...\src\mongo\db\instance.cpp(437) mongo::assembleResponse+0x626
Thu Jan 24 16:44:28 [conn1] mongod.exe ...\src\mongo\db\db.cpp(192) mongo::MyMessageHandler::process+0xf5
Thu Jan 24 16:44:28 [conn1] mongod.exe ...\src\mongo\util\net\message_server_port.cpp(86) mongo::pms::threadRun+0x59a
Thu Jan 24 16:44:28 [conn1] mongod.exe ...\src\third_party\boost\libs\thread\src\win32\thread.cpp(180) boost::`anonymous namespace'::thread_start_function+0x21
Thu Jan 24 16:44:28 [conn1] mongod.exe f:\dd\vctools\crt_bld\self_64_amd64\crt\src\threadex.c(314) _callthreadstartex+0x17
Thu Jan 24 16:44:28 [conn1] mongod.exe f:\dd\vctools\crt_bld\self_64_amd64\crt\src\threadex.c(292) _threadstartex+0x7f
Thu Jan 24 16:44:28 [conn1] kernel32.dll BaseThreadInitThunk+0xd
Thu Jan 24 16:44:28 [conn1] update __test.test query: { tags: "a" } update: { $addToSet: { tags: { $each: [ "d", "e", "f" ] } } } nscanned:1 keyUpdates:0 exception: assertion src\mongo\db\jsobj.cpp:1250 locks(micros) w:390312 390ms
Thu Jan 24 16:44:33 [conn1] end connection 127.0.0.1:52435 (0 connections now open)
Thu Jan 24 16:44:37 CTRL_CLOSE_EVENT signal
Thu Jan 24 16:44:37 [consoleTerminate] got CTRL_CLOSE_EVENT, will terminate after current cmd ends
Thu Jan 24 16:44:37 [consoleTerminate] now exiting
Thu Jan 24 16:44:37 dbexit:
Thu Jan 24 16:44:37 [consoleTerminate] shutdown: going to close listening sockets...
Thu Jan 24 16:44:37 [consoleTerminate] closing listening socket: 496
Thu Jan 24 16:44:37 [consoleTerminate] closing listening socket: 500
Thu Jan 24 16:44:37 [consoleTerminate] shutdown: going to flush diaglog...
Thu Jan 24 16:44:37 [consoleTerminate] shutdown: going to close sockets...
Thu Jan 24 16:44:37 [consoleTerminate] shutdown: waiting for fs preallocator...
Thu Jan 24 16:44:37 [consoleTerminate] shutdown: lock for final commit...
Thu Jan 24 16:44:37 [consoleTerminate] shutdown: final commit...
Thu Jan 24 16:44:37 [consoleTerminate] shutdown: closing all files...
Thu Jan 24 16:44:37 [consoleTerminate] closeAllFiles() finished
Thu Jan 24 16:44:37 [consoleTerminate] journalCleanup...
Thu Jan 24 16:44:37 [consoleTerminate] removeJournalFiles
Thu Jan 24 16:44:37 [consoleTerminate] shutdown: removing fs lock...
Thu Jan 24 16:44:37 dbexit: really exiting now
What am I doing wrong here?
Additional information
For those interested: here's the code that produced the example doc
pkg <- "rmongodb"
lib <- file.path(R.home(), "library")
if (!suppressWarnings(require(pkg, lib.loc=lib, character.only=TRUE))) {
  install.packages(pkg, lib=lib)
  require(pkg, lib.loc=lib, character.only=TRUE)
}
# CONNECTION
db <- "__test"
ns <- paste(db, "test", sep=".")
con <- mongo.create(db=db)
# ENSURE EMPTY DB
mongo.remove(mongo=con, ns=ns)
# INSERT
buf <- mongo.bson.buffer.create()
mongo.bson.buffer.start.array(buf, name="tags")
values <- list("a", "b", "c")
for (ii in seq(along=values)) {
  mongo.bson.buffer.append(
    buf=buf,
    name=as.character(ii),
    value=values[[ii]]
  )
}
mongo.bson.buffer.finish.object(buf)
mongo.bson.buffer.finish.object(buf)
b <- mongo.bson.from.buffer(buf)
mongo.insert(mongo=con, ns=ns, b=b)
EDIT 2013-01-29
As suggested by Tad Marshall from 10gen in his comment on my bug report, I re-ran the code that inserts the document with the MongoDB server running in --objcheck mode (which validates BSON structures), and voilà: the server refuses the insert because an assertion fails. Without the --objcheck flag, the insert succeeds, but probably only because no validation takes place.
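For anyone re-running this check: the invocation is just the usual mongod start with the validation flag added (dbpath and logpath here follow this question's logfiles; the log name is made up):

```shell
# Start mongod with BSON validation of incoming client messages enabled.
CMD="mongod --objcheck --dbpath /data/db --logpath log_3.txt"
echo "$CMD"  # run this on the database host
```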
Note that I tried two different versions of building the tags array, as my initial code produced a doc that, IMHO, is not in line with MongoDB's array indexing conventions:
(Potentially) Invalid document
That's how I did it above; I didn't make sure the array index starts at 0. Insertion of this document fails (see logfile below):
buf <- mongo.bson.buffer.create()
mongo.bson.buffer.start.array(buf, name="tags")
values <- list("a", "b", "c")
for (ii in seq(along=values)) {
  mongo.bson.buffer.append(
    buf=buf,
    name=as.character(ii),
    value=values[[ii]]
  )
}
mongo.bson.buffer.finish.object(buf) # finish array 'tags'
mongo.bson.buffer.finish.object(buf) # finish buffer
b <- mongo.bson.from.buffer(buf)
> b
tags : 4
1 : 2 a
2 : 2 b
3 : 2 c
Valid document
I made sure the index starts at 0, so this should definitely be a valid BSON doc. But inserting this document fails too (see logfile below):
buf <- mongo.bson.buffer.create()
mongo.bson.buffer.start.array(buf, name="tags")
values <- list("a", "b", "c")
for (ii in seq(along=values)) {
  mongo.bson.buffer.append(
    buf=buf,
    name=as.character(ii-1),
    value=values[[ii]]
  )
}
mongo.bson.buffer.finish.object(buf) # finish array 'tags'
mongo.bson.buffer.finish.object(buf) # finish buffer
b <- mongo.bson.from.buffer(buf)
b
mongo.insert(mongo=con, ns=ns, b=b)
> b
tags : 4
0 : 2 a
1 : 2 b
2 : 2 c
Logfile
Tue Jan 29 14:20:46 [initandlisten] MongoDB starting : pid=6440 port=27017
[...]
Tue Jan 29 14:20:59 [initandlisten] connection accepted from 127.0.0.1:62137 #1 (1 connection now open)
Tue Jan 29 14:21:03 [conn1] Assertion: 10307:Client Error: bad object in message
Tue Jan 29 14:21:04 [conn1] mongod.exe ...\src\mongo\util\stacktrace.cpp(161) mongo::printStackTrace+0x3e
Tue Jan 29 14:21:04 [conn1] mongod.exe ...\src\mongo\util\assert_util.cpp(154) mongo::msgasserted+0xc1
Tue Jan 29 14:21:04 [conn1] mongod.exe ...\src\mongo\db\dbmessage.h(205) mongo::DbMessage::nextJsObj+0x103
Tue Jan 29 14:21:04 [conn1] mongod.exe ...\src\mongo\db\instance.cpp(784) mongo::receivedInsert+0xdb
Tue Jan 29 14:21:04 [conn1] mongod.exe ...\src\mongo\db\instance.cpp(434) mongo::assembleResponse+0x607
Tue Jan 29 14:21:04 [conn1] mongod.exe ...\src\mongo\db\db.cpp(192) mongo::MyMessageHandler::process+0xf5
Tue Jan 29 14:21:04 [conn1] mongod.exe ...\src\mongo\util\net\message_server_port.cpp(86) mongo::pms::threadRun+0x59a
Tue Jan 29 14:21:04 [conn1] mongod.exe ...\src\third_party\boost\libs\thread\src\win32\thread.cpp(180) boost::`anonymous namespace'::thread_start_function+0x21
Tue Jan 29 14:21:04 [conn1] mongod.exe f:\dd\vctools\crt_bld\self_64_amd64\crt\src\threadex.c(314) _callthreadstartex+0x17
Tue Jan 29 14:21:04 [conn1] mongod.exe f:\dd\vctools\crt_bld\self_64_amd64\crt\src\threadex.c(292) _threadstartex+0x7f
Tue Jan 29 14:21:04 [conn1] kernel32.dll BaseThreadInitThunk+0xd
Tue Jan 29 14:21:04 [conn1] insert __test.test keyUpdates:0 exception: Client Error: bad object in message code:10307 0ms
Tue Jan 29 14:21:07 [conn1] Assertion: 10307:Client Error: bad object in message
Tue Jan 29 14:21:07 [conn1] mongod.exe ...\src\mongo\util\stacktrace.cpp(161) mongo::printStackTrace+0x3e
Tue Jan 29 14:21:07 [conn1] mongod.exe ...\src\mongo\util\assert_util.cpp(154) mongo::msgasserted+0xc1
Tue Jan 29 14:21:07 [conn1] mongod.exe ...\src\mongo\db\dbmessage.h(205) mongo::DbMessage::nextJsObj+0x103
Tue Jan 29 14:21:07 [conn1] mongod.exe ...\src\mongo\db\instance.cpp(784) mongo::receivedInsert+0xdb
Tue Jan 29 14:21:07 [conn1] mongod.exe ...\src\mongo\db\instance.cpp(434) mongo::assembleResponse+0x607
Tue Jan 29 14:21:07 [conn1] mongod.exe ...\src\mongo\db\db.cpp(192) mongo::MyMessageHandler::process+0xf5
Tue Jan 29 14:21:07 [conn1] mongod.exe ...\src\mongo\util\net\message_server_port.cpp(86) mongo::pms::threadRun+0x59a
Tue Jan 29 14:21:07 [conn1] mongod.exe ...\src\third_party\boost\libs\thread\src\win32\thread.cpp(180) boost::`anonymous namespace'::thread_start_function+0x21
Tue Jan 29 14:21:07 [conn1] mongod.exe f:\dd\vctools\crt_bld\self_64_amd64\crt\src\threadex.c(314) _callthreadstartex+0x17
Tue Jan 29 14:21:07 [conn1] mongod.exe f:\dd\vctools\crt_bld\self_64_amd64\crt\src\threadex.c(292) _threadstartex+0x7f
Tue Jan 29 14:21:07 [conn1] kernel32.dll BaseThreadInitThunk+0xd
Tue Jan 29 14:21:07 [conn1] insert __test.test keyUpdates:0 exception: Client Error: bad object in message code:10307 0ms
Oh, I am smacking myself now. I didn't look closely at the way you were creating your document. You have two mongo.bson.buffer.finish.object() calls when you need only one, to finish off the array you started; you do not need to call it to finish the BSON itself, because mongo.bson.from.buffer() does the necessary housekeeping.

This is my fault for not reading your code closely enough. I thought it was your update failing, when the initial insert of the documents is the problem. For questions here in the future, it would help if your examples were a little easier to read. For instance, this will build the document:
library('rmongodb')
m = mongo.create()
ns = '__test.test'
mongo.insert(m, ns, list(tags=c('a', 'b', 'c')))
However, you are probably pasting in real-world code, so I understand where the complications come in. Everything's cool. I'm just beating myself up for missing this and sending you on a wild goose chase. Regards
Rappster, both of these examples worked for me on my development machine, but I am slightly out of date, running a debug build of mongod 2.1.0.
Since you are crashing the server with your example code, this is something the 10gen people will want to know about. Do you mind going to https://jira.mongodb.org/secure/Dashboard.jspa and reporting this bug?
Thanks,
Gerald Lindsly
