How to fix the coredump for 'ProfileInfoCommand' of CLIPS?

CLIPS version: 6.31
Language: C++, using the CLIPS C API.
I get a coredump when executing ProfileInfoCommand after EnvRun.
Why is this error happening? Is there a usage error here, or is this a bug?
The C++ code 1:
#define PROFILING_FUNCTIONS 1
// ...
EnvReset(clips);
// ...
EnvLoadFactsFromString(clips, facts.str().c_str(), -1);
// ...
EnvRun(clips, 100000);
ProfileInfoCommand(clips);
I know that if PROFILING_FUNCTIONS is defined as 1, the EnvRun function will start profiling automatically. So I call ProfileInfoCommand after EnvRun, but the coredump still occurs!
I also tried another method, but the process generated a core dump as well (with the same backtrace as C++ code 1).
The C++ code 2:
EnvReset(clips);
Profile(clips, "constructs");
// ...
EnvLoadFactsFromString(clips, facts.str().c_str(), -1);
// ...
EnvRun(clips, max_iters);
Profile(clips, "off");
ProfileInfoCommand(clips);
The coredump backtrace is the following:
Core was generated by `/mnt/home/worker/project/ae-arbiter/dist/bin/arbiter-8003 --flagfile=flags.'.
Program terminated with signal 11, Segmentation fault.
#0 0x0000000000bc6b80 in EnvRtnArgCount (theEnv=Cannot access memory at address 0x7f879c3f6af8
) at /mnt/home/worker/project/ae-arbiter/src/clips/argacces.cc:306
306 for (argPtr = EvaluationData(theEnv)->CurrentExpression->argList;
Missing separate debuginfos, use: debuginfo-install glibc-2.12-1.212.el6.x86_64
(gdb) bt
#0 0x0000000000bc6b80 in EnvRtnArgCount (theEnv=0x7f85e6454f70) at /mnt/home/worker/project/ae-arbiter/src/clips/argacces.cc:306
#1 0x0000000000bc6bcd in EnvArgCountCheck (theEnv=0x7f85e6454f70, functionName=0xda1188 "profile", countRelation=2, expectedNumber=1) at /mnt/home/worker/project/ae-arbiter/src/clips/argacces.cc:337
#2 0x0000000000c40803 in ProfileInfoCommand (theEnv=0x7f85e6454f70) at /mnt/home/worker/project/ae-arbiter/src/clips/proflfun.cc:245
#3 0x0000000000b62d12 in arbiter::lib::ClipsModuleExecute (clips=0x7f85e6454f70, features=..., max_iters=100000, result_func=..., module_name=..., halt=#0x7f879c3f6fdc)
at /mnt/home/worker/project/ae-arbiter/src/lib/clips-utils.cc:357
...
...

Setting PROFILING_FUNCTIONS to 1 does not automatically profile code when EnvRun is called; it only determines whether the profiling functions are included in the CLIPS executable. See Section 2.2 of the Advanced Programming Guide. The functions available for profiling from the CLIPS command prompt are documented in Section 13.15, Profiling Commands, of the Basic Programming Guide. ProfileInfoCommand can't be called directly: like other command implementations it expects to pull its arguments from the expression currently being evaluated, which is why the backtrace above dies inside EnvRtnArgCount while reading CurrentExpression->argList. If you want to call a CLIPS command/function that isn't part of the API described in the Advanced Programming Guide, use the EnvEval API:
DATA_OBJECT returnValue;
EnvEval(theEnv,"(profile-info)",&returnValue);
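For illustration, the second code path from the question could then be driven entirely through EnvEval. This is only a sketch, reusing the clips, facts and max_iters variables from the snippets above:
DATA_OBJECT returnValue;

EnvReset(clips);
EnvEval(clips, "(profile constructs)", &returnValue);   /* start profiling */
// ...
EnvLoadFactsFromString(clips, facts.str().c_str(), -1);
// ...
EnvRun(clips, max_iters);
EnvEval(clips, "(profile off)", &returnValue);          /* stop profiling */
EnvEval(clips, "(profile-info)", &returnValue);         /* print the profiling report */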

Related

OpenMP code in CUDA source file not compiling on Google Colab

I am trying to run a simple Hello World program with OpenMP directives on Google Colab, using the OpenMP library and CUDA. I have followed this tutorial, but I am getting an error even though I include %%cu in my code. This is my code-
%%cu
#include <stdio.h>
#include <stdlib.h>
#include <omp.h>

/* Main Program */
int main(int argc, char **argv)
{
    int Threadid, Noofthreads;

    printf("\n\t\t---------------------------------------------------------------------------");
    printf("\n\t\t Objective : OpenMP program to print \"Hello World\" using OpenMP PARALLEL directives\n ");
    printf("\n\t\t..........................................................................\n");

    /* Set the number of threads */
    /* omp_set_num_threads(4); */

    /* OpenMP Parallel Construct : Fork a team of threads */
    #pragma omp parallel private(Threadid)
    {
        /* Obtain the thread id */
        Threadid = omp_get_thread_num();
        printf("\n\t\t Hello World is being printed by the thread : %d\n", Threadid);

        /* Master Thread Has Its Threadid 0 */
        if (Threadid == 0) {
            Noofthreads = omp_get_num_threads();
            printf("\n\t\t Master thread printing total number of threads for this execution are : %d\n", Noofthreads);
        }
    } /* All thread join Master thread */

    return 0;
}
And this is the error I am getting-
/tmp/tmpxft_00003eb7_00000000-10_15fcc2da-f354-487a-8206-ea228a09c770.o: In function `main':
tmpxft_00003eb7_00000000-5_15fcc2da-f354-487a-8206-ea228a09c770.cudafe1.cpp:(.text+0x54): undefined reference to `omp_get_thread_num'
tmpxft_00003eb7_00000000-5_15fcc2da-f354-487a-8206-ea228a09c770.cudafe1.cpp:(.text+0x78): undefined reference to `omp_get_num_threads'
collect2: error: ld returned 1 exit status
Without OpenMP directives, a simple Hello World program runs perfectly, as can be seen below-
%%cu
#include <iostream>
int main()
{
std::cout << "Welcome To GeeksforGeeks\n";
return 0;
}
Output-
Welcome To GeeksforGeeks
There are two problems here:
nvcc doesn't enable or natively support OpenMP compilation. This has to be enabled by additional command line arguments passed through to the host compiler (gcc by default)
The standard Google Colab/Jupyter notebook plugin for nvcc doesn't allow passing of extra compilation arguments, meaning that even if you solve the first issue, it doesn't help in Colab or Jupyter.
You can solve the first problem as described here, and you can solve the second as described here and here.
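For reference, one common way to apply the first fix outside of a notebook is to forward the OpenMP flag to the host compiler, along these lines (the file name is just an example):
nvcc -Xcompiler -fopenmp hello.cu -o hello -lgomp
-Xcompiler hands -fopenmp through to the host gcc, and -lgomp links the OpenMP runtime, which is exactly what the undefined omp_get_thread_num / omp_get_num_threads references above are complaining about.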
Combining these in Colab got the cell compiling and running; the original answer showed screenshots of the working setup and its output at this point.

posix_fallocate() failed: Operation not permitted while opening .realm file

I get the error below when I try to open and download a .realm file in the /tmp directory of a serverless framework function.
{"errorType":"Runtime.UnhandledPromiseRejection","errorMessage":"Error: posix_fallocate() failed: Operation not permitted" }
Below is the code:
let realm = new Realm({path: '/tmp/custom.realm', schema: [schema1, schema2]});
realm.write(() => {
    console.log('completed==');
});
EDIT: this might soon be finally fixed in Realm-Core: see issue 4957.
In case you'll run into this problem elsewhere, here's a workaround.
This is caused by AWS Lambda not supporting the fallocate and fallocate64 system calls. Instead of returning the correct error code in this case, which would be EINVAL ("not supported on this file system"), Amazon has blocked the system call so that it returns EPERM. Realm-Core has code that handles an EINVAL return value correctly, but it is bewildered by the unexpected EPERM returned from the system call.
The solution is to add a small shared library as a layer to the Lambda: compile the following C file on a Linux machine or inside the lambda-ci Docker image:
#include <errno.h>
#include <fcntl.h>

/* Override posix_fallocate()/posix_fallocate64() so they report EINVAL
   ("not supported"), which Realm-Core handles, instead of Lambda's EPERM. */
int posix_fallocate(int __fd, off_t __offset, off_t __len) {
    return EINVAL;
}

int posix_fallocate64(int __fd, off_t __offset, off_t __len) {
    return EINVAL;
}
Now, compile this to a shared object with something like
gcc -shared fix.c -o fix.so
Then add it to the root of a ZIP file:
zip layer.zip fix.so
Create a new Lambda layer from this ZIP.
Add the Lambda layer to your Lambda function.
Finally, make the shared object load by setting the environment variable LD_PRELOAD to /opt/fix.so on your Lambda.
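If you deploy with the Serverless Framework, as in the question, the last two steps correspond to something like this serverless.yml fragment (the function name, handler, and layer ARN are placeholders, not taken from the question):
functions:
  openRealm:
    handler: handler.open
    layers:
      - arn:aws:lambda:us-east-1:123456789012:layer:realm-fallocate-fix:1
    environment:
      LD_PRELOAD: /opt/fix.so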
Enjoy.

QBS: Explicitly setting qbs.profiles inside Products causing build to fail

My use-case is this:
I have a static library which I want to be available for some profiles (e.g. "gcc", "arm-gcc", "mips-gcc").
I also have an application which links to this library, but this application should only be built using a specific profile (e.g. "arm-gcc").
For this I am modifying the app-and-lib QBS example.
The lib.qbs file:
import qbs 1.0

Product {
    qbs.profiles: ["gcc", "arm-gcc", "mips-gcc"] // I added only this line
    type: "staticlibrary"
    name: "mylib"
    files: [
        "lib.cpp",
        "lib.h",
    ]

    Depends { name: 'cpp' }
    cpp.defines: ['CRUCIAL_DEFINE']

    Export {
        Depends { name: "cpp" }
        cpp.includePaths: [product.sourceDirectory]
    }
}
The app.qbs file:
import qbs 1.0

Product {
    qbs.profiles: ["arm-gcc"] // I added only this line
    type: "application"
    consoleApplication: true
    files: [ "main.cpp" ]

    Depends { name: "cpp" }
    Depends { name: "mylib" }
}
The app build fails. Qbs wrongly tries to link to the "gcc" version of the library instead of the "arm-gcc" version, as you can see in the log:
Build graph does not yet exist for configuration 'default'. Starting from scratch.
Resolving project for configuration default
Setting up build graph for configuration default
Building for configuration default
compiling lib.cpp [mylib {"profile":"gcc"}]
compiling lib.cpp [mylib {"profile":"arm-gcc"}]
compiling lib.cpp [mylib {"profile":"mips-gcc"}]
compiling main.cpp [app]
creating libmylib.a [mylib {"profile":"gcc"}]
creating libmylib.a [mylib {"profile":"mips-gcc"}]
creating libmylib.a [mylib {"profile":"arm-gcc"}]
linking app [app]
ERROR: /usr/bin/arm-linux-gnueabihf-g++ -o /home/user/programs/qbs/usr/local/share/qbs/examples/app-and-lib/default/app.7d104347/app /home/user/programs/qbs/usr/local/share/qbs/examples/app-and-lib/default/app.7d104347/3a52ce780950d4d9/main.cpp.o /home/user/programs/qbs/usr/local/share/qbs/examples/app-and-lib/default/mylib.eyJwcm9maWxlIjoiZ2NjIn0-.792f47ec/libmylib.a
ERROR: /home/user/programs/qbs/usr/local/share/qbs/examples/app-and-lib/default/mylib.eyJwcm9maWxlIjoiZ2NjIn0-.792f47ec/libmylib.a: error adding symbols: File format not recognized
collect2: error: ld returned 1 exit status
ERROR: Process failed with exit code 1.
The following products could not be built for configuration default:
app
The build fails only when a single profile is selected in the app.qbs file and that profile is not the first one listed in the qbs.profiles line of lib.qbs.
When selecting two or more profiles, the build succeeds.
My analysis:
I think this problem is related to multiplexing:
The lib.qbs contains more than one profile. This turns on multiplexing when building the library, which, in turn, adds an additional 'multiplexConfigurationId' to the build-directory name (moduleloader.cpp).
The app.qbs contains only one profile, so multiplexing is not turned on and the build-directory name does not get the extra string.
The problem can be solved by changing the code (moduleloader.cpp) so that multiplexing is turned on even if there is only one profile, i.e. with the following patch:
--- moduleloader.cpp 2018-10-24 16:17:43.633527397 +0300
+++ moduleloader.cpp.new 2018-10-24 16:18:27.541370544 +0300
@@ -872,7 +872,7 @@
= callWithTemporaryBaseModule<const MultiplexInfo>(dummyContext,
extractMultiplexInfoFromProduct);
- if (multiplexInfo.table.size() > 1)
+ if (multiplexInfo.table.size() > 0)
productItem->setProperty(StringConstants::multiplexedProperty(), VariantValue::trueValue());
VariantValuePtr productNameValue = VariantValue::create(productName);
@@ -891,7 +891,7 @@
const QString multiplexConfigurationId = multiplexInfo.toIdString(row);
const VariantValuePtr multiplexConfigurationIdValue
= VariantValue::create(multiplexConfigurationId);
- if (multiplexInfo.table.size() > 1 || aggregator) {
+ if (multiplexInfo.table.size() > 0 || aggregator) {
multiplexConfigurationIdValues.push_back(multiplexConfigurationIdValue);
item->setProperty(StringConstants::multiplexConfigurationIdProperty(),
multiplexConfigurationIdValue);
This worked for my use case. I don't know if it makes sense in a broader view.
Finally, the questions:
Does it all make sense?
Is this a normal behavior?
Is this use-case simply not supported?
Is there a better solution?
Thanks in advance.
Yes, the default behavior with multiplexing is that a non-multiplexed product depends on all variants of the dependency. In general, there is no way for a user to change that behavior, but there should be.
However, luckily for you, profiles are special:
Depends { name: "mylib"; profiles: "arm-gcc" }
This should fix your problem.
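Applied to the app.qbs from the question, only the Depends line for mylib changes; everything else stays as posted (a sketch, not tested here):
import qbs 1.0

Product {
    qbs.profiles: ["arm-gcc"]
    type: "application"
    consoleApplication: true
    files: [ "main.cpp" ]

    Depends { name: "cpp" }
    Depends { name: "mylib"; profiles: "arm-gcc" }
}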

Setting the TCP keepalive interval on the Hiredis async context

I'm writing a wrapper around hiredis to enable publish / subscribe functionality with reconnects should a redis node go down.
I'm using the asynchronous redis API.
So I have a test harness that sets up a publisher and subscriber. The harness then shuts down the slave VM from which the subscriber is reading.
However, the disconnect callback isn't called until much later (when I'm destructing the Subscription object that contains the corresponding redisAsyncContext).
I thought that the solution to this might be using tcp keepalive.
So I found that there's a suitable redis function in net.h:
int redisKeepAlive (redisContext* c, int interval);
However, the following appears to show that the redisKeepAlive function has been omitted from the library on purpose:
$ nm libhiredis.a --demangle | grep redisKeepAlive
0000000000000030 T redisKeepAlive
U redisKeepAlive
$ nm libhiredis.a -u --demangle | grep redisKeepAlive
U redisKeepAlive
Certainly when I try to use the call, the linker complains:
Subscription.cpp:167: undefined reference to `redisKeepAlive(redisContext*, int)'
collect2: error: ld returned 1 exit status
Am I out of luck - is there a way to set the TCP keepalive interval on the Hiredis async context?
Update
I've found this:
int redisEnableKeepAlive(redisContext *c);
But setting this on the asyncContext->c and adjusting REDIS_KEEPALIVE_INTERVAL seems to have no effect.
I found that the implementation of redisKeepAlive contains code that shows how to get direct access to the underlying socket descriptor:
int redisKeepAlive(redisContext *c, int interval) {
    int val = 1;
    int fd = c->fd;

    if (setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &val, sizeof(val)) == -1) {
        __redisSetError(c, REDIS_ERR_OTHER, strerror(errno));
        return REDIS_ERR;
    }
    /* ... the rest of the function uses the interval argument to set the
       platform-specific TCP keepalive options on the same descriptor ... */
}
Maybe this'll help someone..
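Building on that, one workaround is to set the keepalive options yourself on the socket behind the async context; redisAsyncContext embeds its redisContext as the c member, so c.fd is the descriptor, just like in the excerpt above. A rough Linux-only sketch (the function name and the way the interval is reused for both options are my own choices, not hiredis code):
#include <hiredis/async.h>
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

/* Sketch: enable keepalive and set the probe timing on an async context's socket. */
static int setKeepAliveInterval(redisAsyncContext *ac, int interval) {
    int fd = ac->c.fd;   /* underlying socket descriptor */
    int on = 1;

    if (setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof(on)) == -1)
        return REDIS_ERR;

    /* Linux-specific options: idle time before the first probe and the
       interval between probes, both in seconds. */
    if (setsockopt(fd, IPPROTO_TCP, TCP_KEEPIDLE, &interval, sizeof(interval)) == -1 ||
        setsockopt(fd, IPPROTO_TCP, TCP_KEEPINTVL, &interval, sizeof(interval)) == -1)
        return REDIS_ERR;

    return REDIS_OK;
}
You would call this right after redisAsyncConnect hands you a context without an error, before attaching it to your event loop.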

How to generate a good crash log report for a Qt application on Windows

I got the following crash log from a Qt application. Any idea how the Qt application is implemented to get such a good crash report?
//=====================================================
Crash Time: Fri Aug 20 15:05:51 2010
Tool Version: 1.6 (08/21/09)
Exception code: C0000005 ACCESS_VIOLATION
Fault address: 06706000 00:01FED9E4
Registers:
EAX:067689A0
EBX:00000000
ECX:067689A0
EDX:026304C0
ESI:00000012
EDI:01FEF018
CS:EIP:001B:06706000
SS:ESP:0023:01FEE4B4 EBP:01FEE4CC
DS:0023 ES:0023 FS:003B GS:0000
Flags:00010206
Call stack:
Address Frame
06706000 01FEE4B0 0000:00000000
39DE15ED 01FEE4CC QWidget::reparent+2D
39E7F560 01FEE6F4 QMenuBar::calculateRects+2D0
39E7E4B1 01FEE708 QMenuBar::performDelayedContentsChanged+41
39E7E600 01FEE714 QMenuBar::performDelayedChanges+20
39E7FF24 01FEE88C QMenuBar::drawContents+24
39E4162E 01FEEA88 QFrame::paintEvent+3BE
39DE060E 01FEEB00 QWidget::event+39E
39D56AE8 01FEEB38 QApplication::internalNotify+1E8
39D56785 01FEEC68 QApplication::notify+865
39D0A89B 01FEEC7C QApplication::sendSpontaneousEvent+3B
39D13DEF 01FEED2C QApplication::winMouseButtonUp+20DF
39D1095E 01FEEFB0 QApplication::winFocus+FBE
7E418734 01FEEFDC GetDC+6D
7E418816 01FEF044 GetDC+14F
7E41B4C0 01FEF098 DefWindowProcW+184
7E41B50C 01FEF0C0 DefWindowProcW+1D0
7C90EAE3 01FEF12C KiUserCallbackDispatcher+13
7E418A10 01FEF13C DispatchMessageW+F
39D25EED 01FEF194 QEventLoop::processEvents+2AD
39D76814 01FEF1AC QEventLoop::enterLoop+74
39D766CD 01FEF1B8 QEventLoop::exec+1D
39D56CCC 01FEF1C8 QApplication::exec+1C
0044847A 01FEFED8 0001:0004747A c:\QtTest\too\main.exe
004465C0 01FEFEE8 0001:000455C0 c:\QtTest\too\main.exe
00F17729 01FEFF18 0001:00B16729 c:\QtTest\too\main.exe
00F148E7 01FEFFC0 0001:00B138E7 c:\QtTest\too\main.exe
7C816FD7 01FEFFF0 RegisterWaitForInputIdle+49
Crash Log:
( 54350.189194700) T> Compile_And_Link_User_Code_Body
( 54350.191276250) T> Create_Header_Expanded_Driver called for: testQt
( 54350.197348809) T> computil.adb:Get_Preprocessed_File called SOURCE_FILE=QtTest_TMP.8.20.54350.2.10156.40.c
( 54350.786834903) T> Create_Header_Expanded_Driver called for: testQt
( 54350.792789012) T> computil.adb:Get_Preprocessed_File called SOURCE_FILE=QtTest_TMP.8.20.54350.8.10156.49.c,
( 54351.056315801) G> add_all_testcase_user_code_to_harness failure COMPILER_PKG.COMPILE_ERROR_EXCEPTION
( 54351.056962252) G> HierarchyView::initCoverage() ...
( 54351.068467625) G> starting Parser::build_testobjectlist_from_xml()
There is no crash report creation provided as part of Qt.
The application in question probably just registers an exception filter with the operating system, e.g. SetUnhandledExceptionFilter on Windows, and compiles and writes this crash report out itself.
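A minimal sketch of that approach with the raw Win32 API, assuming all you want is the exception code and fault address (the filter and file name here are made up for illustration; a real reporter would also dump the registers and walk the stack, as the log above does):
#include <windows.h>
#include <cstdio>

// Called by the OS for exceptions nothing else handled.
static LONG WINAPI crashFilter(EXCEPTION_POINTERS *info)
{
    FILE *f = std::fopen("crash.log", "w");
    if (f) {
        std::fprintf(f, "Exception code: %08lX\nFault address: %p\n",
                     info->ExceptionRecord->ExceptionCode,
                     info->ExceptionRecord->ExceptionAddress);
        // A full reporter would also dump info->ContextRecord (the registers)
        // and walk the stack, e.g. with StackWalk64 from dbghelp.
        std::fclose(f);
    }
    return EXCEPTION_EXECUTE_HANDLER;  // let the process terminate afterwards
}

int main(int argc, char **argv)
{
    SetUnhandledExceptionFilter(crashFilter);
    // ... construct the QApplication and run the event loop as usual ...
    return 0;
}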
If you don't fancy rolling your own crash reporter, there are some nice open-source libraries you could use for generating verbose crash reports, like Google's Breakpad.
http://code.google.com/p/google-breakpad/
