If the dynamic linker/loader is itself a shared object file, how is it properly loaded into a dynamically linked program's process image space if it's not already there? Is this some kind of catch 22 thing?
This answer provides some details (although there are technical mistakes in it).
Is this some kind of catch 22 thing?
Yes: ld.so is special -- it is a self-relocating binary.
It starts by carefully executing code that doesn't require any relocations. That code relocates ld.so itself. After this self-relocation / bootstrap process is done, ld.so continues just as a regular shared library.
Refer to the Oracle Solaris 11.1 Linkers and Libraries Guide.
It's the best linker reference that I have come across: concise, and it explains things well.
On page 89:
As part of the initialization and execution of a dynamic executable, an interpreter is called to complete the binding of the application to its dependencies. In the Oracle Solaris OS, this interpreter is referred to as the runtime linker.
During the link-editing of a dynamic executable, a special .interp section, together with an associated program header, are created. This section contains a path name specifying the program's interpreter. The default name supplied by the link-editor is the name of the runtime linker: /usr/lib/ld.so.1 for a 32–bit executable and /usr/lib/64/ld.so.1 for a 64–bit executable.
Note – ld.so.1 is a special case of a shared object. Here, a version number of 1 is used. However, later Oracle Solaris OS releases might provide higher version numbers.
During the process of executing a dynamic object, the kernel loads the file and reads the program header information. See “Program Header” on page 371. From this information, the kernel locates the name of the required interpreter. The kernel loads, and transfers control to this interpreter, passing sufficient information to enable the interpreter to continue executing the application.
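As a side note (this sketch assumes Linux/glibc rather than Solaris, so it is an illustration, not part of the guide): after the kernel has mapped both the executable and its interpreter, the interpreter's load address is visible to the process through the auxiliary vector, which a program can read back:

    /* Print where the kernel mapped the interpreter (ld.so) and the program's
       own entry point, as reported via the auxiliary vector. */
    #include <stdio.h>
    #include <sys/auxv.h>

    int main(void) {
        unsigned long base  = getauxval(AT_BASE);  /* interpreter's load address */
        unsigned long entry = getauxval(AT_ENTRY); /* this executable's entry point */
        printf("interpreter mapped at %#lx, program entry at %#lx\n", base, entry);
        return 0;
    }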
Context
I'm trying to implement a sort-of orchestrator pattern for our applications.
Basically, we have three different and independent applications developed in Qt that communicate with each other using WebSockets. We'll call them "core", "business" and "ui". This is for flexibility: we can develop a new application in a more suitable technology and connect it to the others via the same communication protocol.
Now the idea is to have a simple launcher that allows us to specify which part to start. We launch this "orchestrator-like" application and it starts all required processes from a configuration file.
Everything is done in Qt currently (QML for the UI interfaces).
Initial Issue
I've made a custom class that reads the configuration file, prepares the processes, and starts them with their respective arguments.
It uses a std::map of QProcess objects keyed by their name in the configuration file and launches them using the QProcess::start(<process_path>) method.
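A minimal sketch of that approach (class and member names here are assumptions for illustration, not the actual code):

    #include <QProcess>
    #include <QString>
    #include <QStringList>
    #include <map>
    #include <memory>
    #include <utility>

    // Launches each configured application as a separate child process.
    class Orchestrator {
    public:
        void add(const QString &name, const QString &path, const QStringList &args) {
            std::unique_ptr<QProcess> proc(new QProcess);  // C++11: no std::make_unique yet
            proc->setProgram(path);
            proc->setArguments(args);
            m_processes[name] = std::move(proc);
        }

        void startAll() {
            for (auto &entry : m_processes)
                entry.second->start();  // each start() creates an independent child process
        }

    private:
        std::map<QString, std::unique_ptr<QProcess>> m_processes;
    };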
The catch is that everything went smoothly until recently. The subprocesses start and run perfectly; everything goes on as normal until, at some point, the "ui" part crashes (usually with an LLVM memory error or a vector length error).
At first we thought it was a memory leak or a code error, but after much debugging we found that the application had no errors whatsoever when we ran each part individually (without using the custom orchestrator class).
Question / Concerns
So, our question is: could it be that the QProcess::start() method actually shares the same stack as its parent? With three processes having the same parent, it would not be surprising that a vector of ~500 elements stored in each application could exceed the stack size when returned.
Information
We use macOS Big Sur; the IDE is Qt Creator, with Qt 5.15.0 and C++11.
We tried using Valgrind, but as discussed here and here, this seems to be a dead end for now. The errors below were seen in the .crash file following the application's exit.
libc++abi: terminating with uncaught exception of type std::length_error: vector
ui(2503,0x108215e00) malloc: can't allocate region
*** mach_vm_map(size=140280206704640, flags: 100) failed (error code=3)
ui(2503,0x108215e00) malloc: *** set a breakpoint in malloc_error_break to debug
LLVM ERROR: out of memory
We also tried to redirect or completely remove the applications' output: first by changing the setProcessChannelMode when starting the applications, then by using startDetached instead of start. Finally, we commented out our Log method that dumps log info to the corresponding Qt output (info/warning/critical/fatal/debug).
As suggested by @stanislav888, we could rewrite the application-manager part in bash scripts and it would probably do the trick, but I'd like to understand the root issue to avoid future mistakes.
It looks like a bad design. Running and orchestrating the applications through a bash or PowerShell script looks much better.
But anyway.
You could try to suppress the orchestrated applications' output and see what happens. The programs' output might flood memory and cause the crash.
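If the launcher starts the children with QProcess, one way to do that (a sketch; "proc" stands for each child QProcess) is:

    // Forward the child's stdout/stderr to the parent's console instead of
    // buffering it inside the parent process...
    proc->setProcessChannelMode(QProcess::ForwardedChannels);

    // ...or discard the output entirely.
    proc->setStandardOutputFile(QProcess::nullDevice());
    proc->setStandardErrorFile(QProcess::nullDevice());

By default QProcess buffers each child's output inside the parent process until it is read, so long-running, chatty children can make the parent's memory usage grow.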
You must check what exactly causes the crash. Use a memory core dump and the system error messages to understand all the details of the problem.
I'm sure the community needs those details, because "out of memory", "stack depth exceeded" and similar errors make a big difference.
Try to write a bash or PowerShell script which does the same workflow as the Qt application. Hopefully that is not hard, and it will help you figure out the issue.
At least you can run this script from the application.
I am new to Common Lisp. This is how I develop programs in other languages, and also how I now develop programs in Common Lisp:
Open a text editor (e.g. vim or emacs) to create/edit a text file.
Write source code into the text file. (If unsure about the behavior of a snippet of code, and a REPL is available, then evaluate the snippet in the REPL, verify that it evaluates as expected, and then go back to writing more code.)
Save the text file.
Ask the compiler/interpreter to load and run the source code in the text file. (e.g. sbcl --script myprog.lisp)
Go to step 1 if needed.
This is the conventional write-compile-run development cycle for most programming languages. However, in the lisp world, I hear things like "interactive development" and "image-based development", and I feel that I am missing out on an important feature of Common Lisp. How do I do "image-based development" instead of "write-compile-run development"?
Can someone provide a step-by-step example of "image-based development" similar to how I described "write-compile-run development" above?
(Note: I am using SBCL)
In typical Common Lisp implementations the runtime, the compiler, parts of the development environment and the program you are developing reside in the same process and share the same object space. The compiler is always available while you develop the program, and the program can be developed incrementally. The development tools have access to all objects and can inspect their state. One can also undefine/remove, replace, or enhance functionality in the running program.
Thus:
don't restart the program you are developing; stay connected and update it, even for days, weeks, or months if possible
write code in such a way that the program can be replicated and built from scratch if necessary; build it from time to time and fix any build problems
once you use your program and there is an error, fix the error within the program while being able to inspect the full error state
creating a runnable program means either loading all code into a plain Lisp each time, or saving an executable image with the loaded code/data
fixes for program bugs can also be shipped to the user as compiled Lisp files, which get loaded into the delivered program and update the code in place
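For instance, such a fix could be prepared and applied roughly like this (the file names are made up):

    ;; on the development machine: compile the fix
    (compile-file "patch-001.lisp")   ; produces a compiled file such as patch-001.fasl

    ;; in the delivered, still-running program: load the compiled fix,
    ;; which redefines the affected functions in place
    (load "patch-001.fasl")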
Let's say that you are using SBCL with Emacs and SLIME (e. g. through Portacle).
Open Emacs
Start SLIME (M-x slime): this starts a "plain" Lisp process in the background, connects the editor functions provided by SLIME to it, and then gives you a REPL that is also connected to this process (image)
Open a text file (e. g. foo.lisp)
Type some code (a small example follows these steps)
Press C-c C-k to compile the file and load it into the running Lisp process
Switch to the REPL, try it out
Switch to the Lisp file (step 4).
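For example (the file contents are made up for illustration), foo.lisp could contain:

    ;; foo.lisp
    (defun greet (name)
      (format nil "Hello, ~a!" name))

After C-c C-k you can call (greet "world") in the REPL, then edit the format string, recompile just that definition with C-c C-c, and call it again: the new behaviour is picked up immediately, without restarting anything.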
This is just very basic usage. Further things to do/learn
You can also compile and load just a single toplevel form (C-c C-c)
Learn about packages
Learn about systems (ASDF)
Learn how to use Quicklisp to get the libraries you want
Learn how to access inline documentation from the REPL
Note that you never need to unload your program, you just modify it, even when downloading and loading new libraries. This makes the feedback cycle instantaneous in most cases. You also never need to switch away from the IDE (Emacs).
In gRPC, when building for ARM, I need to disable these three variables:
-DRUN_HAVE_STD_REGEX=OFF
-DRUN_HAVE_POSIX_REGEX=OFF
-DRUN_HAVE_STEADY_CLOCK=OFF
It is not super clear to me what they do, so I wonder:
Why is it that CMake cannot detect them automatically when cross-compiling?
What is the impact of disabling them, say on a system that does support them? Will it sometimes crash? Reduce performances in some situations?
Because they are not auto-detected by CMake, it would be easier for me to always disable them if that works everywhere without major issues for my use-case.
gRPC's build uses CMake's try_run to detect whether the target platform supports certain features. When cross-compiling, however, some of the resulting variables need to be supplied manually. From the documentation (emphasis added):
When cross compiling, the executable compiled in the first step usually cannot be run on the build host. The try_run command checks the CMAKE_CROSSCOMPILING variable to detect whether CMake is in cross-compiling mode. If that is the case, it will still try to compile the executable, but it will not try to run the executable unless the CMAKE_CROSSCOMPILING_EMULATOR variable is set. Instead it will create cache variables which must be filled by the user or by presetting them in some CMake script file to the values the executable would have produced if it had been run on its actual target platform.
Basically, it's saying that when cross-compiling, CMake will not run the compiled test executable on the build machine; instead, the test results (which would have been produced by running on the target machine) must be specified manually. The tests below will usually cause problems; one way to preset them is sketched after the list:
-DRUN_HAVE_STD_REGEX
-DRUN_HAVE_GNU_POSIX_REGEX
-DRUN_HAVE_POSIX_REGEX
-DRUN_HAVE_STEADY_CLOCK
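For example, the expected results can be preset in the toolchain file (or passed as -D options, as in the question). A sketch, where OFF is simply the conservative assumption about what the target would report:

    # arm-toolchain.cmake (sketch): preset try_run results that cannot be
    # obtained by running test binaries on the build host
    set(RUN_HAVE_STD_REGEX OFF CACHE INTERNAL "")
    set(RUN_HAVE_GNU_POSIX_REGEX OFF CACHE INTERNAL "")
    set(RUN_HAVE_POSIX_REGEX OFF CACHE INTERNAL "")
    set(RUN_HAVE_STEADY_CLOCK OFF CACHE INTERNAL "")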
Hopefully that answers your first question. I do not know how to answer your second question, as I have always just set those variables manually to match the features of whatever system I've compiled for.
This is mostly a stupid question, since UPX (a tool that wrings extra bytes out of your executable files) saves only a tiny amount of space over the built-in compression in the buildapp tool.
A very small demo application creates a 42 megabyte file. Understandable, since the SBCL environment isn't tiny.
Passing the --compress-core option to buildapp shrinks that down to 9.2MB.
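For reference, the invocation looks roughly like this (the file and entry-point names are made up; check buildapp --help for the exact options of your version):

    buildapp --output myapp \
             --load myprog.lisp \
             --entry myprog:main \
             --compress-core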
I thought I'd try throwing UPX at the resulting binary, and the savings only amounts to a few more bytes: 9994288 -> 9871360
However, the resulting file no longer runs anymore - it just jumps into the SBCL REPL (with no errors, it's as if I just ran sbcl by hand), and some poking around there reveals that the functions making up my test program no longer exist.
What did UPX do to the binary that resulted in this breakage?
This may not be the answer, but it may serve as a clue: I've found that if you add anything, even a single byte, to the end of an SBCL executable created with sb-ext:save-lisp-and-die, then all the definitions disappear, just as you described.
Perhaps SBCL creates executables by appending the core (which contains your definitions) to a copy of the SBCL ELF (or PE on Windows) binary, plus some metadata at the end so that SBCL can still find the beginning of the core even though it's appended to an executable.
If you hex-edit an executable created with save-lisp-and-die, you'll find that it ends with the string "LCBS" (SBCL backwards), which seems to support my theory. "LCBS" probably serves as a magic number, letting SBCL know that yes, this executable contains its own core.
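One quick way to check this (the executable name is a placeholder) is to read the trailing bytes of the file from Lisp and look for that marker:

    ;; print the last 8 bytes of the executable as a string
    (with-open-file (s "myapp" :element-type '(unsigned-byte 8))
      (file-position s (- (file-length s) 8))
      (let ((tail (make-array 8 :element-type '(unsigned-byte 8))))
        (read-sequence tail s)
        (map 'string #'code-char tail)))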
UPX compresses executables, probably including that magic number at the end. When SBCL opens its UPX-compressed self on disk, it won't find "LCBS" at the end, so it assumes that there is no core appended to the executable.
I can't explain why the standard library seems to still be there if this is the case. It could be that SBCL loads /usr/lib/sbcl/sbcl.core (or its equivalent on Windows) in that case. This could be tested by moving the executable to a machine where SBCL is not installed and seeing if it works at all, and if so, whether you still have car, cdr, list, etc.
I'm trying to get a better grasp of Thompson's Trojan Compiler (discussed in his 1984 ACM Turing Award lecture "Reflections on Trusting Trust"), and so far this is how I understand it:
"The original login program for Unix would accept whatever login and password the root instructed it to. It would only accept a certain password, known only by the man who wrote the system. This could let him log in to the system as root."
Is this the right concept? I'm not 100% sure if I understand the whole concept.
If someone could make it clearer, it would help.
(See also Bruce Schneier Countering "Trusting Trust")
The original login program accepts matching pairs of name and password from a file.
The modification is to add a super-powerful password, compiled into the login program, that allows root access. To ensure that this code isn't visible when reading the login program's source, there's a change to the compiler to recognize this section of the login program in its original form and compile it into the super-powerful-password binary. Then, to hide the existence of this code in the compiler itself, there needs to be another change to the compiler that recognizes the section of the compiler to which the first change was added and outputs the modified form.
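A toy sketch of those two self-recognizing changes (all names and the "pattern matching" are stand-ins for illustration; the real attack operated on the actual compiler and login sources):

    #include <stdio.h>
    #include <string.h>

    /* crude stand-ins for "does this source look like login.c / like the compiler?" */
    static int looks_like_login(const char *src)    { return strstr(src, "check_password") != NULL; }
    static int looks_like_compiler(const char *src) { return strstr(src, "compile_unit") != NULL; }

    static void compile_unit(const char *src) {
        if (looks_like_login(src))
            puts("emit login + hidden master password");   /* change 1 */
        else if (looks_like_compiler(src))
            puts("emit compiler + change 1 + change 2");    /* change 2 */
        else
            puts("emit a faithful translation");
    }

    int main(void) {
        compile_unit("int check_password(const char *pw) { /* ... */ }");        /* login source */
        compile_unit("static void compile_unit(const char *src) { /* ... */ }"); /* compiler source */
        compile_unit("int main(void) { return 0; }");                            /* anything else */
        return 0;
    }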
Once the changed compiler code exists, you can compile the compiler and install it in the standard place, and then revert the source code for both the login program and the compiler to their unmodified form. The installed compiled compiler will then take the unchanged login program and output the insecure form. Similarly, the installed compiler will compile the unmodified compiler source code into the devious variant. Anyone inspecting the source code for either one will agree that there's nothing unusual in them.
Of course, it only works until the source code for either program evolves far enough that the modified compiler no longer recognizes it. Since the modified compiler's source code is no longer present, it can't be maintained, and (assuming that the compiler and login continue to evolve) it will eventually stop producing the insecure output.
I had never encountered the concept before, but this is pretty interesting - I found a neat write-up at http://scienceblogs.com/goodmath/2007/04/strange_loops_dennis_ritchie_a.php
Yes, it is the right concept. There's more to it; the modified compiler must also compile the unmodified compiler source into a similarly modified copy of itself. In practice it can only recognize that source (and trivial variations of it); recognizing every possible rewrite would be undecidable, which is why the trick eventually breaks down as the source evolves, as described above.