My C++ application running on Ubuntu 14.04 is having problems. I am using gRPC to communicate with a Go web server application, which serves up pages showing the status/configuration of the C++ application.
I had been using a roughly one-year-old version of gRPC (0.14 something), so before posting here I upgraded everything (gRPC 1.3.1, Go 1.8.1).
My C++ application still crashes quite often with gRPC 1.3.1 (and with 1.0.0, 1.2.0, 1.2.5, etc.).
I am getting a SIGABRT with a double-free warning. The application runs for a while, but after some period during which the web application is requesting data from the C++ application, it crashes. gdb output:
[New LWP 9908]
[New LWP 9881]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
Core was generated by `./bhio'.
Program terminated with signal SIGSEGV, Segmentation fault.
#0 _int_malloc (av=0x7fb3ac000020, bytes=16) at malloc.c:3351
3351 malloc.c: No such file or directory.
(gdb) where
#0 _int_malloc (av=0x7fb3ac000020, bytes=16) at malloc.c:3351
#1 0x00007fb4205db6c0 in __GI___libc_malloc (bytes=16) at malloc.c:2891
#2 0x000000000076bb2f in gpr_malloc ()
#3 0x000000000077678d in grpc_error_create ()
#4 0x000000000078ba94 in ?? ()
#5 0x000000000078dbee in grpc_chttp2_fail_pending_writes ()
#6 0x000000000078e19f in grpc_chttp2_mark_stream_closed ()
#7 0x000000000078e2eb in grpc_chttp2_cancel_stream ()
#8 0x000000000078ef1c in ?? ()
#9 0x000000000077597e in grpc_combiner_continue_exec_ctx ()
#10 0x0000000000777678 in grpc_exec_ctx_flush ()
#11 0x000000000078095f in grpc_call_cancel_with_status ()
#12 0x0000000000780be1 in grpc_call_destroy ()
#13 0x0000000000769bd7 in grpc::ServerContext::~ServerContext() ()
#14 0x0000000000768c7c in grpc::Server::SyncRequest::CallData::~CallData() ()
#15 0x00000000007691e3 in
grpc::Server::SyncRequestThreadManager::DoWork(void*, bool) ()
#16 0x000000000076aff1 in grpc::ThreadManager::MainWorkLoop() ()
#17 0x000000000076b04c in grpc::ThreadManager::WorkerThread::Run() ()
#18 0x00007fb420eebbf0 in ?? () from /usr/lib/x86_64-linux-gnu/libstdc++.so.6
#19 0x00007fb421146184 in start_thread (arg=0x7fb3ca686700)
at pthread_create.c:312
#20 0x00007fb42065337d in clone ()
at ../sysdeps/unix/sysv/linux/x86_64/clone.S:111
(gdb)
(gdb) quit
or here:
[Thread 0x7fff7d7fa700 (LWP 3521) exited]
Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x7fff7e7fc700 (LWP 3524)]
__GI___libc_free (mem=0xb5) at malloc.c:2929
2929 malloc.c: No such file or directory.
(gdb)
(gdb) where
#0 __GI___libc_free (mem=0xb5) at malloc.c:2929
#1 0x000000000077b7b5 in grpc_byte_buffer_destroy ()
#2 0x0000000000773ac3 in grpc::Server::SyncRequest::CallData::~CallData() ()
#3 0x000000000077405a in grpc::Server::SyncRequestThreadManager::DoWork(void*, bool) ()
#4 0x0000000000776111 in grpc::ThreadManager::MainWorkLoop() ()
#5 0x000000000077616c in grpc::ThreadManager::WorkerThread::Run() ()
#6 0x00007ffff6c9fbf0 in ?? () from /usr/lib/x86_64-linux-gnu/libstdc++.so.6
#7 0x00007ffff6efa184 in start_thread (arg=0x7fff7e7fc700)
at pthread_create.c:312
#8 0x00007ffff640737d in clone ()
at ../sysdeps/unix/sysv/linux/x86_64/clone.S:111
So, if it is OK to start multiple ServerBuilders, what else could the above errors point to that I might be doing wrong in my use of the gRPC library? I took this code over from someone else who wrote it, so my knowledge of gRPC is lacking. I don't think gRPC itself is that unstable, so it must be something I am doing incorrectly.
Any ideas would be helpful, as would any suggestions on how to debug this better.
To build gRPC, I am just doing the following:
$ git clone -b $(curl -L http://grpc.io/release) https://github.com/grpc/grpc
$ cd grpc
$ git submodule update --init
$ make
$ [sudo] make install
Are there options to compile it differently that might provide more information?
thanks in advance for the help/suggestions.
Bob
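Edit: on the compile-options question — in the gRPC versions I was building, the Makefile supports alternate build configurations, so a debug or AddressSanitizer build should look roughly like the following (treat the exact CONFIG values as an assumption and check the Makefile of your own checkout):
$ make CONFIG=dbg        # unoptimized build with debug symbols
$ make CONFIG=asan       # AddressSanitizer-instrumented build, useful for double-free hunting
$ [sudo] make install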
It turns out that a small resource leak (a socket file descriptor that was never closed) in the registered service function was causing the issue.
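For anyone hitting the same thing: the crash went away once the descriptor was closed on every path out of the handler. Below is a minimal sketch of that idea using RAII — the guard class and the handler name are made up for illustration, not the actual code:
#include <sys/socket.h>
#include <unistd.h>

// Hypothetical RAII guard: closes the descriptor on every return path,
// including early returns, so the handler cannot leak it.
class FdGuard {
 public:
  explicit FdGuard(int fd) : fd_(fd) {}
  ~FdGuard() { if (fd_ >= 0) ::close(fd_); }
  FdGuard(const FdGuard&) = delete;
  FdGuard& operator=(const FdGuard&) = delete;
  int get() const { return fd_; }
 private:
  int fd_;
};

// Shape of the fix inside a handler body (imagine this running inside the
// registered gRPC service method):
bool QueryDeviceStatus() {
  FdGuard sock(::socket(AF_INET, SOCK_STREAM, 0));
  if (sock.get() < 0) return false;   // early return: still no leak
  // ... connect, read the status, fill in the reply ...
  return true;                        // sock closed automatically here
}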
Related
I'm running CentOS 8.1 and my machine has kernel panic'd. I installed the kernel-debuginfo package and I am generally following the steps in Section 7.11: Analyzing a core dump.
Here is an abbreviated version of my debugging session:
# crash /usr/lib/debug/usr/lib/modules/4.18.0-147.el8.x86_64/vmlinux /var/crash/XXX/vmcore
.
.
.
WARNING: kernel relocated [336MB]: patching 93296 gdb minimal_symbol values
KERNEL: /usr/lib/debug/usr/lib/modules/4.18.0-147.el8.x86_64/vmlinux
DUMPFILE: /var/crash/XXX/vmcore [PARTIAL DUMP]
CPUS: 48
DATE: Sun Jan 10 13:36:04 2021
UPTIME: 23 days, 22:18:40
LOAD AVERAGE: 10.00, 10.01, 10.00
TASKS: 1966
NODENAME: YYY
RELEASE: 4.18.0-147.el8.x86_64
VERSION: #1 SMP Wed Dec 4 21:51:45 UTC 2019
MACHINE: x86_64 (2794 Mhz)
MEMORY: 2035.9 GB
PANIC: "Kernel panic - not syncing: Hard LOCKUP"
PID: 27666
COMMAND: "R"
TASK: ffff8ff017978000 [THREAD_INFO: ffff8ff017978000]
CPU: 2
STATE: TASK_RUNNING (PANIC)
crash> bt
PID: 27666 TASK: ffff8ff017978000 CPU: 2 COMMAND: "R"
#0 [ffffb5187c6c7a50] machine_kexec at ffffffff96057c4e
#1 [ffffb5187c6c7aa8] __crash_kexec at ffffffff96155b8d
#2 [ffffb5187c6c7b70] panic at ffffffff960b0578
#3 [ffffb5187c6c7bf8] watchdog_overflow_callback.cold.8 at ffffffff9618bb11
#4 [ffffb5187c6c7c08] __perf_event_overflow at ffffffff961f54f2
#5 [ffffb5187c6c7c38] x86_pmu_handle_irq at ffffffff96007a16
#6 [ffffb5187c6c7e88] amd_pmu_handle_irq at ffffffff96008b14
#7 [ffffb5187c6c7ea0] perf_event_nmi_handler at ffffffff960060cd
#8 [ffffb5187c6c7eb8] nmi_handle at ffffffff96021843
#9 [ffffb5187c6c7f10] default_do_nmi at ffffffff96021cce
#10 [ffffb5187c6c7f30] do_nmi at ffffffff96021ea8
#11 [ffffb5187c6c7f50] nmi at ffffffff96a01537
RIP: 0000146816a69d6e RSP: 00007ffc378f6270 RFLAGS: 00000216
RAX: 000000006655e8df RBX: 0000000000000003 RCX: 000000000003ad90
RDX: 0000000000000003 RSI: 0000000007ccb060 RDI: 000000076fe90140
RBP: 0000000000085fc6 R8: 0000000000b13039 R9: 00000007770c0758
R10: 0000000778ba8a38 R11: 0000000000000c36 R12: 000000000070e070
R13: 0000000000000000 R14: 00007ffc378f6410 R15: 00007ffc378f6430
ORIG_RAX: ffffffffffffffff CS: 0033 SS: 002b
Clearly the culprit is an R process which I was running (see COMMAND: "R"). Looking at the bt above, it seems like it only returns kernel-level functions. I want to know what line in my R code (or in the installed R libraries) is causing the issue. Trying
crash> gdb bt
No stack.
gdb: gdb request failed: bt
crash>
is not useful. Looking at man crash, it isn't directly obvious how to do that. Some of the R libraries that are possibly involved contain C++ code that was compiled with debug flags, and I have a vague hope that the cause of the error lies in one of them.
QUESTION:
How do I use the Linux crash utility to recover the line of code (either R or C++) that caused the kernel to panic?
What is a "Kernel panic - not syncing: Hard LOCKUP"?
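For what it's worth, a minimal sketch of crash commands that might help map the user-space RIP shown above back to a binary (these are standard crash built-ins; whether the relevant pages survived into this partial dump is an assumption):
crash> set 27666        # switch context to the panicking R process (PID from the panic banner)
crash> vm               # list its VMAs; find which mapping contains RIP 0x146816a69d6e
crash> bt -t            # scan the stack for any text symbols that plain bt missed
Once the RIP is known to fall inside a particular binary or shared object, the offset into that file can be fed to addr2line or gdb to get a source line. Note, though, that a hard lockup means the NMI watchdog caught a CPU that had not responded for too long; the user-space frame is just where that CPU happened to be interrupted, not necessarily the code responsible for the lockup.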
Error Message:
Here is some information that might be useful.
WP version
5.2.5
Plugins Active
Akismet Anti-Spam - 4.1.3
BigCommerce for WordPress - 3.12.0
Breadcrumb - 1.5.3
Coming Soon Page, Under Construction & Maintenance Mode by SeedProd - 5.1.0
Contact Form 7 - 5.1.6
Google Maps Easy - 1.9.27
Insert Headers and Footers - 1.4.4
Insert PHP Code Snippet - 1.3.1
Jetpack by WordPress.com - 7.8.1
LiteSpeed Cache - 2.9.9.2
Notification - 6.3.0
SiteOrigin CSS - 1.2.4
Sticky Side Buttons - 1.0.9
UpdraftPlus - Backup/Restore - 1.16.20
Yoast SEO - 12.8.1
Server Environment
PHP Version
7.1.33
Max Execution Time
30
Memory Limit
256M
Upload Max Filesize
256M
Post Max Size
8M
WP debug
No
WP debug display
Yes
WP debug log
No
Mysql Version
5.5.61
Web Server
Apache/2.4.27 (Unix) OpenSSL/1.0.1e-fips mod_bwlimited/1.4
ERROR LOG
[2020-01-17 03:22:27] BigCommerce.INFO: Starting import [] []
[2020-01-17 03:22:28] BigCommerce.INFO: Running import task {"state":"started","description":"Fetching store information"} []
[2020-01-17 03:22:28] BigCommerce.DEBUG: Requesting store settings [] []
[2020-01-17 03:22:30] BigCommerce.DEBUG: Retrieved store settings {"settings":{"bigcommerce_currency_code":"AUD","bigcommerce_currency_symbol":"$","bigcommerce_currency_symbol_position":"left","bigcommerce_decimal_units":2,"bigcommerce_integer_units":4,"bigcommerce_mass_unit":"kg","bigcommerce_length_unit":"cm","bigcommerce_wishlists_enabled":1,"bigcommerce_facebook_pixel_id":"","bigcommerce_google_analytics_id":""}} []
[2020-01-17 03:22:32] BigCommerce.INFO: Running import task {"state":"fetched_store","description":"Retrieving currency settings"} []
[2020-01-17 03:22:32] BigCommerce.DEBUG: Requesting currency settings [] []
[2020-01-17 03:22:35] BigCommerce.INFO: Running import task {"state":"fetched_currencies","description":"Removing Categories"} []
[2020-01-17 03:22:35] BigCommerce.DEBUG: Removing deleted terms for bigcommerce_category taxonomy {"page":1,"limit":50,"taxonomy":"bigcommerce_category"} []
[2020-01-17 03:22:50] BigCommerce.ERROR: API call to https://api.bigcommerce.com/stores/zby41x9gmk/v3/catalog/categories?id%3Ain=18%2C19%2C20%2C21%2C22%2C23%2C24%2C25%2C26%2C27%2C28%2C29%2C30%2C31%2C32%2C33%2C34%2C35%2C36%2C37%2C38%2C39%2C40%2C41%2C42%2C43%2C44%2C45%2C46%2C47%2C48%2C49%2C50%2C51%2C52%2C53%2C54%2C55%2C56%2C57%2C58%2C59%2C60%2C61%2C62%2C63%2C64%2C65%2C66%2C67&limit=50&include_fields=id failed: Connection timed out after 15001 milliseconds {"response":null,"headers":null} []
[2020-01-17 03:22:50] BigCommerce.DEBUG: #0 /home/piranhaoffroadco/public_html/wp-content/plugins/bigcommerce/src/BigCommerce/Api/Caching_Client.php(57): BigCommerce\Api\v3\ApiClient->callApi('/catalog/catego...', 'GET', Array, '', Array, '\\BigCommerce\\Ap...', '/catalog/catego...') #1 /home/piranhaoffroadco/public_html/wp-content/plugins/bigcommerce/vendor/moderntribe/bigcommerce-api-php-v3/src/Api/CatalogApi.php(6469): BigCommerce\Api\Caching_Client->callApi('/catalog/catego...', 'GET', Array, '', Array, '\\BigCommerce\\Ap...', '/catalog/catego...') #2 /home/piranhaoffroadco/public_html/wp-content/plugins/bigcommerce/vendor/moderntribe/bigcommerce-api-php-v3/src/Api/CatalogApi.php(6415): BigCommerce\Api\v3\Api\CatalogApi->getCategoriesWithHttpInfo(Array) #3 /home/piranhaoffroadco/public_html/wp-content/plugins/bigcommerce/src/BigCommerce/Import/Processors/Category_Purge.php(30): BigCommerce\Api\v3\Api\CatalogApi->getCategories(Array) #4 /home/piranhaoffroadco/public_html/wp-content/plugins/bigcommerce/src/BigCommerce/Import/Processors/Term_Purge.php(74): BigCommerce\Import\Processors\Category_Purge->get_remote_term_ids(Array) #5 /home/piranhaoffroadco/public_html/wp-content/plugins/bigcommerce/src/BigCommerce/Container/Import.php(211): BigCommerce\Import\Processors\Term_Purge->run() #6 /home/piranhaoffroadco/public_html/wp-content/plugins/bigcommerce/src/BigCommerce/Import/Task_Manager.php(94): BigCommerce\Container\Import->BigCommerce\Container\{closure}('fetched_currenc...') #7 /home/piranhaoffroadco/public_html/wp-content/plugins/bigcommerce/src/BigCommerce/Container/Import.php(280): BigCommerce\Import\Task_Manager->run_next('fetched_currenc...') #8 /home/piranhaoffroadco/public_html/wp-includes/class-wp-hook.php(286): BigCommerce\Container\Import->BigCommerce\Container\{closure}('fetched_currenc...') #9 /home/piranhaoffroadco/public_html/wp-includes/class-wp-hook.php(310): WP_Hook->apply_filters(NULL, Array) #10 /home/piranhaoffroadco/public_html/wp-includes/plugin.php(465): WP_Hook->do_action(Array) #11 /home/piranhaoffroadco/public_html/wp-content/plugins/bigcommerce/src/BigCommerce/Import/Runner/Cron_Runner.php(51): do_action('bigcommerce/imp...', 'fetched_currenc...') #12 /home/piranhaoffroadco/public_html/wp-content/plugins/bigcommerce/src/BigCommerce/Container/Import.php(104): BigCommerce\Import\Runner\Cron_Runner->continue_import() #13 /home/piranhaoffroadco/public_html/wp-includes/class-wp-hook.php(284): BigCommerce\Container\Import->BigCommerce\Container\{closure}() #14 /home/piranhaoffroadco/public_html/wp-includes/class-wp-hook.php(310): WP_Hook->apply_filters('', Array) #15 /home/piranhaoffroadco/public_html/wp-includes/plugin.php(465): WP_Hook->do_action(Array) #16 /home/piranhaoffroadco/public_html/wp-content/plugins/bigcommerce/src/BigCommerce/Import/Runner/Cron_Runner.php(73): do_action('bigcommerce_con...') #17 /home/piranhaoffroadco/public_html/wp-content/plugins/bigcommerce/src/BigCommerce/Container/Import.php(108): BigCommerce\Import\Runner\Cron_Runner->ajax_continue_import() #18 /home/piranhaoffroadco/public_html/wp-includes/class-wp-hook.php(284): BigCommerce\Container\Import->BigCommerce\Container\{closure}() #19 /home/piranhaoffroadco/public_html/wp-includes/class-wp-hook.php(310): WP_Hook->apply_filters(NULL, Array) #20 /home/piranhaoffroadco/public_html/wp-includes/plugin.php(465): WP_Hook->do_action(Array) #21 /home/piranhaoffroadco/public_html/wp-admin/admin-ajax.php(173): do_action('wp_ajax_bigcomm...') #22 {main} [] []
Thanks for sharing these logs! To better understand your BC4WP setup I have some follow-up questions:
Was the plugin working before and did it recently stop working? If so, have there been any updates to other plugins or the theme you're using? If you could share your WordPress site and BigCommerce store URLs, that can also help narrow down what's causing the issue :)
I'm running the developer edition of Realm Object Server v1.8.3 as a Mac app. I start it with the start-object-server.command. It had been running fine for a number of days and everything was working really well, but ROS is now crashing within seconds of starting it.
Clearly the issue is with the JavaScript element, but I am not sure what has led to this position, nor how best to recover from this error. I have not created any additional functions, so I am not introducing any Node.js issues of my own: it's just ROS with half a dozen realms.
The stack dump I get from the terminal session is as below. Any thoughts on recovery steps and how to prevent it happening again would be appreciated.
Last few GCs
607335 ms: Mark-sweep 1352.1 (1404.9) -> 1351.7 (1402.9) MB, 17.4 / 0.0 ms [allocation failure] [GC in old space requested].
607361 ms: Mark-sweep 1351.7 (1402.9) -> 1351.7 (1367.9) MB, 25.3 / 0.0 ms [last resort gc].
607376 ms: Mark-sweep 1351.7 (1367.9) -> 1351.6 (1367.9) MB, 15.3 / 0.0 ms [last resort gc].
JS stacktrace
Security context: 0x3eb4332cfb39
1: DoJoin(aka DoJoin) [native array.js:~129] [pc=0x1160420f24ad] (this=0x3eb433204381 ,w=0x129875f3a8b1 ,x=3,N=0x3eb4332043c1 ,J=0x3828ea25c11 ,I=0x3eb4332b46c9 )
2: Join(aka Join) [native array.js:180] [pc=0x116042067e32] (this=0x3eb433204381 ,w=0x129875f3a8b1
FATAL ERROR: CALL_AND_RETRY_LAST Allocation failed - JavaScript heap out of memory
1: node::Abort() [/Applications/realm-mobile-platform/realm-object-server/.prefix/bin/node]
2: node::FatalException(v8::Isolate*, v8::Local, v8::Local) [/Applications/realm-mobile-platform/realm-object-server/.prefix/bin/node]
3: v8::internal::V8::FatalProcessOutOfMemory(char const*, bool) [/Applications/realm-mobile-platform/realm-object-server/.prefix/bin/node]
4: v8::internal::Factory::NewRawTwoByteString(int, v8::internal::PretenureFlag) [/Applications/realm-mobile-platform/realm-object-server/.prefix/bin/node]
5: v8::internal::Runtime_StringBuilderJoin(int, v8::internal::Object**, v8::internal::Isolate*) [/Applications/realm-mobile-platform/realm-object-server/.prefix/bin/node]
6: 0x1160411092a7
/Applications/realm-mobile-platform/start-object-server.command: line 94: 39828 Abort trap: 6 node "$package/node_modules/.bin/realm-object-server" -c configuration.yml (wd: /Applications/realm-mobile-platform/realm-object-server/object-server)
Your ROS instance has run out of memory. To figure out why, it would be helpful to see the server's log file. Can you turn on the debug level for logging?
If you want to send a log file to Realm, it is better to open an issue for this at https://github.com/realm/realm-mobile-platform/issues.
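As a stop-gap (not a fix for whatever is actually growing), one could give the Node process a larger heap by editing the line in start-object-server.command that launches the server, shown in the error above; --max-old-space-size is a standard Node/V8 flag, and the 4096 MB value is only an example:
node --max-old-space-size=4096 "$package/node_modules/.bin/realm-object-server" -c configuration.yml
That only buys headroom, though — the debug log is still the way to find out what is consuming the memory.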
I'm trying to recover my admin password in Drupal with Drush. I've installed Drush successfully, but every time I run drush uli as well as similar commands I get this error:
Drupal\Core\Database\ConnectionNotDefinedException: The specified database connection is not defined: default in [error]
/Users/myusername/.kalabox/apps/canchascrd8/code/core/lib/Drupal/Core/Database/Database.php:361
Stack trace:
#0 /Users/myusername/.kalabox/apps/canchascrd8/code/core/lib/Drupal/Core/Database/Database.php(166):
Drupal\Core\Database\Database::openConnection('default', 'default')
#1 [internal function]: Drupal\Core\Database\Database::getConnection('default')
#2 /Users/myusername/.kalabox/apps/canchascrd8/code/core/lib/Drupal/Component/DependencyInjection/Container.php(254): call_user_func_array(Array,
Array)
#3 /Users/myusername/.kalabox/apps/canchascrd8/code/core/lib/Drupal/Component/DependencyInjection/Container.php(177):
Drupal\Component\DependencyInjection\Container->createService(Array, 'database')
#4 /Users/myusername/.kalabox/apps/canchascrd8/code/core/lib/Drupal/Component/DependencyInjection/Container.php(494):
Drupal\Component\DependencyInjection\Container->get('database', 1)
#5 /Users/myusername/.kalabox/apps/canchascrd8/code/core/lib/Drupal/Component/DependencyInjection/Container.php(236):
Drupal\Component\DependencyInjection\Container->resolveServicesAndParameters(Array)
#6 /Users/myusername/.kalabox/apps/canchascrd8/code/core/lib/Drupal/Component/DependencyInjection/Container.php(177):
Drupal\Component\DependencyInjection\Container->createService(Array, 'cache.backend.d...')
#7 /Users/myusername/.kalabox/apps/canchascrd8/code/core/lib/Drupal/Core/Cache/ChainedFastBackendFactory.php(85):
Drupal\Component\DependencyInjection\Container->get('cache.backend.d...')
#8 /Users/myusername/.kalabox/apps/canchascrd8/code/core/lib/Drupal/Core/Cache/CacheFactory.php(79):
Drupal\Core\Cache\ChainedFastBackendFactory->get('bootstrap')
#9 [internal function]: Drupal\Core\Cache\CacheFactory->get('bootstrap')
#10 /Users/myusername/.kalabox/apps/canchascrd8/code/core/lib/Drupal/Component/DependencyInjection/Container.php(254): call_user_func_array(Array,
Array)
#11 /Users/myusername/.kalabox/apps/canchascrd8/code/core/lib/Drupal/Component/DependencyInjection/Container.php(177):
Drupal\Component\DependencyInjection\Container->createService(Array, 'cache.bootstrap')
#12 /Users/myusername/.kalabox/apps/canchascrd8/code/core/lib/Drupal/Component/DependencyInjection/Container.php(494):
Drupal\Component\DependencyInjection\Container->get('cache.bootstrap', 1)
#13 /Users/myusername/.kalabox/apps/canchascrd8/code/core/lib/Drupal/Component/DependencyInjection/Container.php(236):
Drupal\Component\DependencyInjection\Container->resolveServicesAndParameters(Array)
#14 /Users/myusername/.kalabox/apps/canchascrd8/code/core/lib/Drupal/Component/DependencyInjection/Container.php(177):
Drupal\Component\DependencyInjection\Container->createService(Array, 'module_handler')
#15 /Users/myusername/.kalabox/apps/canchascrd8/code/core/lib/Drupal/Core/DrupalKernel.php(520):
Drupal\Component\DependencyInjection\Container->get('module_handler')
#16 /Users/myusername/.kalabox/apps/canchascrd8/code/core/lib/Drupal/Core/DrupalKernel.php(678):
Drupal\Core\DrupalKernel->preHandle(Object(Symfony\Component\HttpFoundation\Request))
#17 /usr/local/Cellar/drush/8.1.3/libexec/lib/Drush/Boot/DrupalBoot8.php(150):
Drupal\Core\DrupalKernel->prepareLegacyRequest(Object(Symfony\Component\HttpFoundation\Request))
#18 /usr/local/Cellar/drush/8.1.3/libexec/includes/bootstrap.inc(354): Drush\Boot\DrupalBoot8->bootstrap_drupal_full()
#19 /usr/local/Cellar/drush/8.1.3/libexec/commands/user/user.drush.inc(389): drush_bootstrap(5)
#20 /usr/local/Cellar/drush/8.1.3/libexec/includes/command.inc(373): drush_user_login()
#21 /usr/local/Cellar/drush/8.1.3/libexec/includes/command.inc(224): _drush_invoke_hooks(Array, Array)
#22 /usr/local/Cellar/drush/8.1.3/libexec/includes/command.inc(192): drush_command()
#23 /usr/local/Cellar/drush/8.1.3/libexec/lib/Drush/Boot/BaseBoot.php(67): drush_dispatch(Array)
#24 /usr/local/Cellar/drush/8.1.3/libexec/includes/preflight.inc(66): Drush\Boot\BaseBoot->bootstrap_and_dispatch()
#25 /usr/local/Cellar/drush/8.1.3/libexec/drush.php(12): drush_main()
I'm using Kalabox, and brand new to Drupal. Does anyone have any ideas?
If you are using Kalabox you need to use kbox drush uli (not drush uli) from somewhere inside of your apps folder.
You will also want to ensure that:
Your app is actually on (started).
You have actually set up the Drupal site (i.e., created the database).
You have not edited pantheon.settings.php to remove the logic that grabs your database connection info from the PRESSFLOW_SETTINGS envvar.
Might be worth either destroying and recreating the site in Kalabox or spinning up another site in Pantheon and pulling that down to troubleshoot.
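For example (the app path below is taken from the stack trace above; adjust it to wherever your Kalabox app actually lives):
$ cd ~/.kalabox/apps/canchascrd8
$ kbox drush uli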
I have cross-compiled Qt 5.1.1 for an i.MX6-powered Nitrogen6x board running Debian 7 (wheezy).
I have configured Qt with the -egl parameter, and eglfs is listed as a QPA backend in the configure output.
However, if I try to run a small example application with the -platform eglfs parameter, I run into this error:
stdin: is not a tty
[ 1] HAL user version 4.6.9 build 6622 Aug 15 2013 13:22:40
[ 2] HAL kernel version 4.6.9 build 1210
QML debugging is enabled. Only use this in a safe environment.
bash: line 1: 3673 Segmentation fault DISPLAY=:0.0 /opt/Test/bin/Test -platform eglfs
Remote application finished with exit code 139.
OpenGL ES2 and EGL are installed on the board and can be found in /usr/lib and /usr/include.
Sadly I couldn't find proper documentation for eglfs, so I am hoping that someone around here has some experience with it.
This is the backtrace output:
run Test -platform eglfs
Starting program: /opt/Test/bin/Test Test -platform eglfs
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/arm-linux-gnueabihf/libthread_db.so.1".
[ 1] HAL user version 4.6.9 build 6622 Aug 15 2013 13:31:17
[ 2] HAL kernel version 4.6.9 build 1210
QML debugging is enabled. Only use this in a safe environment.
[New Thread 0x2c6b7460 (LWP 4057)]
Program received signal SIGSEGV, Segmentation fault.
0x2bab6f48 in gcoHAL_QueryChipCount () from /usr/lib/libGAL.so
(gdb) backrace full
Undefined command: "backrace". Try "help".
(gdb) backtrace full
#0 0x2bab6f48 in gcoHAL_QueryChipCount () from /usr/lib/libGAL.so
No symbol table info available.
#1 0x2ba7ccbc in veglGetThreadData () from /usr/lib/libEGL.so.1
No symbol table info available.
#2 0x2ba74cd0 in eglBindAPI () from /usr/lib/libEGL.so.1
No symbol table info available.
#3 0x2be41934 in ?? () from /usr/local/Qt-Debian/plugins/platforms/libqeglfs.so
No symbol table info available.
#4 0x2be41934 in ?? () from /usr/local/Qt-Debian/plugins/platforms/libqeglfs.so
No symbol table info available.
Backtrace stopped: previous frame identical to this frame (corrupt stack?)
(gdb) info registers
r0 0x1 1
r1 0x23e54 147028
r2 0x738 1848
r3 0x0 0
r4 0x2bb67d84 733379972
r5 0x23e18 146968
r6 0x2e70c 190220
r7 0x2b430198 725811608
r8 0x7efff9e8 2130704872
r9 0x8 8
r10 0x2b0725c4 721888708
r11 0x7efffae0 2130705120
r12 0x2bab6f1c 732655388
sp 0x7efff8f0 0x7efff8f0
lr 0x2ba7ccbc 732417212
pc 0x2bab6f48 0x2bab6f48 <gcoHAL_QueryChipCount+44>
cpsr 0x80000010 -2147483632
(gdb) x/16i $pc
=> 0x2bab6f48 <gcoHAL_QueryChipCount+44>: ldr r3, [r3, #12]
0x2bab6f4c <gcoHAL_QueryChipCount+48>: sub r2, r3, #1
0x2bab6f50 <gcoHAL_QueryChipCount+52>: cmp r2, #2
0x2bab6f54 <gcoHAL_QueryChipCount+56>: bhi 0x2bab6f70 <gcoHAL_QueryChipCount+84>
0x2bab6f58 <gcoHAL_QueryChipCount+60>: ldr r2, [r4]
0x2bab6f5c <gcoHAL_QueryChipCount+64>: mov r0, #0
0x2bab6f60 <gcoHAL_QueryChipCount+68>: str r3, [r1]
0x2bab6f64 <gcoHAL_QueryChipCount+72>: add r3, r2, #1
0x2bab6f68 <gcoHAL_QueryChipCount+76>: str r3, [r4]
0x2bab6f6c <gcoHAL_QueryChipCount+80>: pop {r4, pc}
0x2bab6f70 <gcoHAL_QueryChipCount+84>: mvn r0, #8
0x2bab6f74 <gcoHAL_QueryChipCount+88>: bl 0x2baad5fc
0x2bab6f78 <gcoHAL_QueryChipCount+92>: ldr r3, [r4]
0x2bab6f7c <gcoHAL_QueryChipCount+96>: mvn r0, #8
0x2bab6f80 <gcoHAL_QueryChipCount+100>: add r3, r3, #1
0x2bab6f84 <gcoHAL_QueryChipCount+104>: str r3, [r4]
(gdb) thread apply all backtrace
Thread 2 (Thread 0x2c6b7460 (LWP 4057)):
#0 0x2b52ef96 in ?? () from /lib/arm-linux-gnueabihf/libc.so.6
#1 0x2b568634 in _IO_file_close () from /lib/arm-linux-gnueabihf/libc.so.6
#2 0x2b568ffe in _IO_file_close_it () from /lib/arm-linux-gnueabihf/libc.so.6
#3 0x2b56113a in fclose () from /lib/arm-linux-gnueabihf/libc.so.6
#4 0x2bea8d00 in udev_new () from /lib/arm-linux-gnueabihf/libudev.so.0
#5 0x2be7d2e4 in ?? () from /usr/local/Qt-Debian/plugins/platforms/libqeglfs.so
#6 0x2be7d2e4 in ?? () from /usr/local/Qt-Debian/plugins/platforms/libqeglfs.so
Backtrace stopped: previous frame identical to this frame (corrupt stack?)
Thread 1 (Thread 0x2bcb9220 (LWP 4056)):
#0 0x2bab6f48 in gcoHAL_QueryChipCount () from /usr/lib/libGAL.so
#1 0x2ba7ccbc in veglGetThreadData () from /usr/lib/libEGL.so.1
#2 0x2ba74cd0 in eglBindAPI () from /usr/lib/libEGL.so.1
#3 0x2be41934 in ?? () from /usr/local/Qt-Debian/plugins/platforms/libqeglfs.so
#4 0x2be41934 in ?? () from /usr/local/Qt-Debian/plugins/platforms/libqeglfs.so
Backtrace stopped: previous frame identical to this frame (corrupt stack?)
(gdb) quit
How could I possibly fix that error?
I have the exact same crash on a MarSBoard running an EGL framebuffer application on a Yocto image created with recipes from https://github.com/silmerusse/meta-robomind.
I had to copy the EGL/OpenGL-related stuff from http://repository.timesys.com/buildsources/g/gpu-viv-bin-mx6q/.
In my case galcore.ko is built in.
Edit:
Check that you have /dev/galcore and that its permissions are crw-rw-rw- (otherwise run sudo chmod 666 /dev/galcore).
If you don't have /dev/galcore, try insmod /lib/modules/..../kernel/drivers/mxc/gpu-viv/galcore.ko.
These steps fixed the crash for me on an Ubuntu image.
On the Yocto image the galcore driver is built in and seems to be there, but I still got the crash.
Edit:
The crash on the Yocto image was caused by the wrong version of the EGL/GAL .so libraries. Apparently the galcore driver built into the kernel is version 4.6.9.6622 and requires the libraries from gpu-viv-bin-mx6q-3.0.35-4.1.0. After manually copying those libraries into /usr/lib, my framebuffer application runs fine, using hardware OpenGL ES 2.0 and hardware decoding of an H.264 video.
I fixed this issue by switching to Yocto, thereby gaining access to the most recent releases of essential components.
If you're developing for an i.MX CPU, I strongly recommend having a look at https://github.com/Freescale/fsl-community-bsp-platform
It is very important to remove "x11" from default-distrovars and "wayland" from poky.conf, as these will cause you to run into errors.
Building Qt5 on such a setup works fine.
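For reference, an alternative to editing those Poky files directly — assuming a Yocto release where the _remove override is available — is to drop the features from conf/local.conf:
DISTRO_FEATURES_remove = " x11 wayland"
This keeps the change out of the poky layer itself, which makes later upgrades easier.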
I got a similar segfault when I forgot to load the galcore module. Here is the backtrace:
#0 0x766062b0 in gcoHAL_QueryChipCount (Hal=Hal#entry=0x0, Count=Count#entry=0x16494)
at gc_hal_user_query.c:1726
#1 0x766da244 in veglGetThreadData () at gc_egl.c:137
#2 0x766d3210 in eglfGetDisplay (display_id=0x16c08) at gc_egl_init.c:464
#3 eglGetDisplay (DisplayID=0x16c08) at gc_egl_init.c:565
Qt 5.3.2, kernel 3.10.17, Galcore version 4.6.9.9754