Possible to disable asan using an environment variable? - address-sanitizer

Currently I'm running into a known problem with asan (see report):
==5097==Shadow memory range interleaves with an existing memory mapping. ASan cannot proceed correctly. ABORTING.
==5097==ASan shadow was supposed to be located in the [0x00007fff7000-0x10007fff7fff] range.
Is it possible to use an environment variable to stop asan from being used, so as to prevent this error?
Or, failing that, to at least stop this error from being fatal?
The reason I want to do this is that the failing command runs when generating code, but I'd like to use asan for the resulting binary. Having different CFLAGS for the generated binaries and the final binary is possible, but it would be hard to do without hard-coding it in for everyone else. So I'd like a way to disable asan during the build step, but use it afterwards.
Edit: in case it's useful, this occurs with an extremely simple program: Error, Code.

No, this is a fundamental error which prevents all later instrumentation by Asan from working correctly. E.g. stack poisoning in function prologues would end up causing segfaults or corrupting random memory.

The error you reported is not an addressability error found by AddressSanitizer but an issue with AddressSanitizer itself. Read the FAQ here. Quoting the part that is relevant to your case:
Q: I'm using dynamic ASan runtime and my program crashes at start with "Shadow memory range interleaves with an existing memory mapping. ASan cannot proceed correctly.".
A1: If you are using shared ASan DSO, try LD_PRELOAD'ing Asan runtime into your program.
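In practice that means launching the failing code-generation step with the shared ASan runtime preloaded. A minimal sketch, assuming GCC on Linux and a hypothetical tool name (with Clang the shared runtime is named libclang_rt.asan-*.so instead):
LD_PRELOAD=$(gcc -print-file-name=libasan.so) ./codegen-tool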

How does QProcess manage the resources when started?

Context
I'm trying to implement a sort-of orchestrator pattern for our applications.
Basically, we have three different and independent applications developed in Qt that communicate with each other using WebSockets. We'll call them "core", "business" and "ui". This is aimed at flexibility: we can simply develop a new application in a more suitable technology and connect it to the others via the same communication protocol.
Now the idea is to have a simple launcher that allows us to specify which part to start. We launch this "orchestrator-like" application and it starts all required processes from a configuration file.
Everything is done in Qt currently (QML for the UI interfaces).
Initial Issue
I've made a custom class oriented towards reading the configuration file, preparing the processes, and starting them with their respective arguments.
This uses a std::map of QProcess objects keyed by their name in the configuration file, and launches them using the QProcess::start(<process_path>) method.
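For reference, a minimal sketch of what such a class might look like; the class name, method names and ownership handling here are made up for illustration, not the actual implementation:
#include <map>
#include <QProcess>
#include <QString>
#include <QStringList>

// Hypothetical orchestrator sketch: one QProcess per configured application, keyed by name.
class ProcessLauncher
{
public:
    void add(const QString &name, const QString &path, const QStringList &args)
    {
        auto *process = new QProcess;        // ownership/parenting omitted for brevity
        process->setProgram(path);
        process->setArguments(args);
        m_processes[name] = process;
    }

    void startAll()
    {
        for (auto &entry : m_processes)
            entry.second->start();           // each child runs as a separate OS process
    }

private:
    std::map<QString, QProcess*> m_processes;
};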
The catch is that everything went smoothly until recently. The subprocesses are started and run perfectly; everything goes on as normal until we reach some point where the "ui" part crashes (usually with an LLVM memory error or a vector length error).
At first we thought about a memory leak or a code error but after much debugging we found that the application had no error whatsoever when we ran each part individually (without using the custom orchestrator class).
Question / Concerns
So, our question is: could it be that the QProcess::start() method actually shares the same stack with its parent? With three processes having the same parent, it would not be surprising that a vector of ~500 elements stored in each application could exceed the stack size when returned.
Information
We use macOS Big Sur; the IDE is Qt Creator, using Qt 5.15.0 and C++11.
We tried using Valgrind but, as read here and here, this seems to be a dead end for now. The errors below were seen in the .crash file following the application's exit.
libc++abi: terminating with uncaught exception of type std::length_error: vector
ui(2503,0x108215e00) malloc: can't allocate region
:*** mach_vm_map(size=140280206704640, flags: 100) failed (error code=3)
ui(2503,0x108215e00) malloc: *** set a breakpoint in malloc_error_break to debug
LLVM ERROR: out of memory
We also tried to redirect or completely remove the application's output: first by changing the setProcessChannelMode when starting the application, then by using startDetached instead of start. Then we commented out my Log method that dumps log info into the corresponding Qt output (info/warning/critical/fatal/debug).
As suggested by #stanislav888, we could rewrite the application manager part in bash scripts and it would probably do the trick, but I'd like to understand the root issue to avoid future mistakes.
It looks like a bad design. Running and orchestrating the applications through a bash or PowerShell script would look much better.
But anyway.
You could try to suppress the orchestrated applications' output and see what happens. The programs' output might flood memory and cause the crash.
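A sketch of what that could look like with the QProcess API (the helper name and program path are hypothetical); redirecting both channels to the null device means the parent never buffers the children's output:
#include <QProcess>
#include <QString>

// Hypothetical helper: start a child process and discard everything it writes.
void launchQuietly(const QString &path)
{
    auto *process = new QProcess;
    process->setProgram(path);
    process->setStandardOutputFile(QProcess::nullDevice());  // drop stdout
    process->setStandardErrorFile(QProcess::nullDevice());   // drop stderr
    process->start();
    // Alternatively, setProcessChannelMode(QProcess::ForwardedChannels) passes the
    // child's output straight through to the parent's own stdout/stderr.
}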
You must check what particular problem causes the crash. Use a memory core dump and the system error messages to understand all the details of the problem.
I'm sure the community needs those details, because "out of memory", "stack depth exceeded" and similar errors make a big difference.
Try to write a bash or PowerShell script which does the same workflow as the Qt application. Hopefully that is not hard, and it will help you figure out the issue.
At least you can run this script from the application.

Multiple RStudio sessions following the use of the parallel package

I've recently run into an issue when using RStudio Server where multiple sessions are spawned instead of a single session. In my case (see below), five sessions are created instead of one. This happens even after trying the normal solutions: deleting ~/.rstudio, clearing .GlobalEnv, and restarting R. Note that there is no spawning issue when using the R command prompt.
I believe the source of this problem is a prematurely terminated mclapply. Here are the relevant docs from the parallel package (discovered after the fact):
It is strongly discouraged to use these functions in GUI or embedded environments, because it leads to several processes sharing the same GUI which will likely cause chaos (and possibly crashes). Child processes should never use on-screen graphics devices.
At least one other person has had the same error as me but there is no documented solution that I can find. As the warning has already been ignored, I would appreciate any pointers that can help me get untangled.
Edit:
I am still encountering the error but was able to catch the ephemeral script sourcing issue that I believe is causing this problem. Unfortunately, I don't know what other files are being sourced and therefore what settings need to be changed. Grrrrr.....

ld warning: too many personality routines for compact unwind to encode

The linker for an iOS simulator target I have is reporting the following warning:
ld: warning: too many personality routines for compact unwind to encode
No line number is given, nor anything else that is actionable. Googling turned up some Apple open source code, but I'm not grokking it.
What does it mean and what can I do to address it?
I found some information in the C++ ABI for Itanium docs that sheds some light on what this means.
The personality routine is the function in the C++ (or other language) runtime library which serves as an interface between the system unwind library and language-specific exception handling semantics.
Extrapolating, this warning indicates that you've got too many kinds of exception handling in the same binary, and at least one of them may fail. Indeed this is exactly what is observed in this question.
Sadly, there's no clear way to fix this, only workarounds: you can suppress the warning, remove code, rewrite code in a different language, disable a language's exception handling, and possibly others.
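For instance, if one of the offending targets is C++ code that does not actually rely on exceptions, it can typically be built with exception handling turned off via the standard Clang/GCC flag (only safe if nothing in that target throws):
-fno-exceptions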
If you have a crash on exception with this warning, i.e. a ::terminate() call on every throw, the workaround is to use the old DWARF exception-handling tables. Add the following flags to Build Settings / Linking / Other Linker Flags:
-Wl,-keep_dwarf_unwind -Wl,-no_compact_unwind
You can try silencing the warning with -Wl,-no_compact_unwind for the Other Linker Flags setting. I think it's harmless.
I bumped into this issue as well when trying to run on the iOS simulator while my code was running correctly on the device. In my case, it was not a warning but a linker error.
Luckily, I remembered that I had added two flags for LuaJIT to run correctly, following the instructions on this page under the section Embedding LuaJIT:
-pagezero_size 10000 -image_base 100000000
My guess is that the image_base address is simply out of bounds on the host CPU.
Anyway, removing these settings made it work in my configuration.
So go to your project's settings and look for any hard-wired values of this kind, if not these exact ones.
Hope this helps,
M

Calling functions in Qt from third-party DLL works in debug mode, crashes in release

I use a third-party DLL (FTD2xx) to communicate with an external device. Using Qt4, in debug mode everything works fine, but the release build crashes silently after successfully completing a called function. It seems to crash at return, but if I write something to the console (with qDebug) at the end of the function, sometimes it does not crash there but a few, or a few dozen, lines later.
I suspect a not properly cleaned stack, which the debug build can survive but the release build chokes on. Did someone encounter a similar problem? The DLL itself cannot be changed, as the source is not available.
It seems that reducing the optimization level was the only workaround. The DLL itself might have problems, as a program which does nothing but call a single function from that DLL crashes the same way if optimization is turned on.
Fortunately, the size and speed lost by the change in optimization level is negligible.
Edit: for anyone with similar problems on Qt 5.0 or higher: If you change the optimization level (for example, to QMAKE_CXXFLAGS_RELEASE = -O0), it's usually not enough to just rebuild the application. A full "clean all" is required.
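If you're relying on qmake's default release flags, one way to do that (assuming the default set includes -O2) is to drop it and append -O0 in the .pro file:
QMAKE_CXXFLAGS_RELEASE -= -O2
QMAKE_CXXFLAGS_RELEASE += -O0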
Be warned: the EPANET library is not thread-safe; it contains a lot of global variables.
Are you calling two methods of that library from different threads?

Is it possible to have the entire contents of a class that tripped an error included in the stacktrace?

A lot of time can pass between the moment a stack trace is generated and the moment the stack trace is thoroughly investigated. During that time, a lot can happen to the file in question, sometimes obscuring the original error. The error might have been fixed in the meantime (overlapping bugs).
Is it possible to get stack traces that show the offending file as it was at the time of the error?
Not elegantly, and you normally don't want the user browsing through code that's throwing unexpected exceptions anyway (open door to an attacker).
Usually, what happens in a dev shop is that the user reports an error, stack trace, and the build it occurred on. As a tester, you can grab that build from your archives (you ARE keeping an archive of all supported releases somewhere handy, RIGHT?), install, run, and try to reproduce the error, working with the user to provide additional info as necessary. I've seen very few bugs that couldn't be reproduced EVENTUALLY, even if it required running the program against a backup of the user's production database to do it.
As a developer, you can download that build's source code from your version control repository (you ARE using version control, RIGHT?), and examine the lines in the stack trace to try to discover the problem by inspection, and/or build and run it to reproduce the error. Then, you go back to the latest source version, build, and run the same steps (a UI automation system can help out here), and if you don't get the error, someone else already found and fixed it. If you still get the error, you also got an updated stack trace with lines that match the current build, allowing you to set your breakpoints and step through.
What KeithS said, plus there are ways to capture more helpful state information at the time of the Exception using the Exception.Data property. See http://blog.abodit.com/2010/03/using-exception-data-to-add-additional-information-to-an-exception/
While KeithS' answer is pretty much correct, it can be easier and more elegant than you think. If you can collect a dump file (instead of just a stack trace), you can use a Symbol Server and Source Server in combination with your debugger to automatically pull the correct version of your code from source control.
For example: if you enable PDB output and source-server integration in MSBuild, and upload the resulting PDBs to a symbol server, Visual Studio can automatically load the correct source from a TFS or SourceSafe repository based on the information in a minidump.
