Hello! I'm currently learning the basics of assembly. Earlier I was using TASM and Intel syntax, and there I had to initialize the stack myself.
Now I'm using the GNU Assembler and AT&T syntax. I've looked through lots of examples and saw no declaration or initialization of the stack. Do I have to do it? Or is it done without my help here? If so, how exactly is it initialized automatically? Is there a risk of clobbering important data in the data segment? I also didn't notice any directives concerning the stack.
Thanks in advance for your answers!
Oh, one more thing: are there any good books concerning programming in ASM (GAS) for Unix-like systems?
An OS with virtual memory handles the stack somewhat differently from an OS without it.
No VM (e.g. DOS, µClinux !MMU): you reserve some physical space for the stack. In DOS it depends on the memory model you use, for larger memory models you will allocate some memory and point SS (the stack segment) to it. In µClinux you will save the stack size in a field of the executable file format's header, see the bFLT format for an example.
VM → the stack grows dynamically, up to a configurable limit (see ulimit -s on Linux). Since each process has its own virtual address space, there is a lot of space between the stack and any other mapped virtual memory area.
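For what it's worth, here is a minimal C illustration (Linux-specific, not GAS code): the kernel has already set up the stack and pointed the stack pointer at it before your entry point runs, so nothing needs to be declared, and the growth limit is the soft RLIMIT_STACK value that ulimit -s reports.

    #include <stdio.h>
    #include <sys/resource.h>

    int main(void)
    {
        /* Report the stack growth limit the kernel enforces for this process. */
        struct rlimit rl;
        if (getrlimit(RLIMIT_STACK, &rl) == 0) {
            if (rl.rlim_cur == RLIM_INFINITY)
                printf("stack soft limit: unlimited\n");
            else
                printf("stack soft limit: %llu bytes\n",
                       (unsigned long long)rl.rlim_cur);
        }

        int on_stack = 42;                  /* lives on the already-set-up stack */
        printf("a stack address: %p\n", (void *)&on_stack);
        return 0;
    }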
Related
Is there a way you can identify whether an object is stored on the stack or heap solely from its memory address? I ask because it would be useful to know this when debugging and a memory address comes up in an error.
For instance:
If I have a memory address: 0x7fd8507c6
Can I determine anything about the object based on this address?
You don't mention which OS you are using. I'll answer for Microsoft Windows as that's the one I've been using for the last 25 years. Most of what I knew about Unix/Linux I've forgotten.
If you just have the address and no other information - for 32 bit Windows you can tell if it's user space (lower 2GB) or kernel space (upper 2GB), but that's about it (assuming you don't have the /3GB boot option).
If you have the address and you can run some code, you can use VirtualQuery() to get information about the address. If you get a non-zero return value you can use the data in the returned MEMORY_BASIC_INFORMATION structure.
The State, Type, and Protect values will tell you about the possible uses for the memory - whether it's memory mapped, a DLL (Type == MEM_IMAGE), etc. You can't infer from this information whether the memory is a thread's stack or part of a heap. You can, however, determine when the address is in memory that isn't heap or stack (memory in a DLL is not in a stack or heap; non-accessible memory isn't in a stack or a heap).
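A rough sketch of that VirtualQuery() check (the classification below is only what State/Type/Protect can tell you; the output formatting is just for illustration):

    #include <windows.h>
    #include <stdio.h>

    /* Query one address and report what can be inferred from the region info. */
    static void describe_address(const void *addr)
    {
        MEMORY_BASIC_INFORMATION mbi;
        if (VirtualQuery(addr, &mbi, sizeof(mbi)) == 0) {
            printf("%p: VirtualQuery failed (%lu)\n", addr, GetLastError());
            return;
        }
        printf("%p: base=%p size=%llu state=%#lx type=%#lx protect=%#lx\n",
               addr, mbi.BaseAddress, (unsigned long long)mbi.RegionSize,
               mbi.State, mbi.Type, mbi.Protect);

        if (mbi.State != MEM_COMMIT)
            printf("  not committed, so not a live stack or heap object\n");
        else if (mbi.Type == MEM_IMAGE)
            printf("  backed by an EXE/DLL image, so not stack or heap\n");
        else if (mbi.Type == MEM_MAPPED)
            printf("  a memory-mapped file/section\n");
        else    /* MEM_PRIVATE */
            printf("  private memory: could be heap, stack, or VirtualAlloc'd\n");
    }

    int main(void)
    {
        int on_stack = 0;
        describe_address(&on_stack);                         /* a stack address   */
        describe_address("a string literal in the image");   /* lives in the EXE  */
        return 0;
    }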
To determine where a thread stack is you could examine all pages in the application looking for a guard page at the end of a thread's stack. You can then infer the location of the stack space using the default stack size stored in the PE header (or if you can't do that, just use the default size of 1MB - few people change it) and the address you have (is it in the space you have inferred?).
To determine if the address is in a memory heap you'd need to enumerate the application heaps (GetProcessHeaps()) and then enumerate each heap (HeapWalk()) found checking the areas being managed by the heap. Not all Windows heaps can be enumerated.
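And a hedged sketch of that heap walk (it assumes the heaps are walkable, and real code would need more error handling):

    #include <windows.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Return 1 if addr lies inside a busy block of any walkable process heap. */
    static int address_in_some_heap(const void *addr)
    {
        DWORD count = GetProcessHeaps(0, NULL);          /* number of heaps */
        HANDLE *heaps = malloc(count * sizeof(HANDLE));
        if (!heaps || GetProcessHeaps(count, heaps) != count) {
            free(heaps);
            return 0;
        }

        int found = 0;
        for (DWORD i = 0; i < count && !found; ++i) {
            PROCESS_HEAP_ENTRY entry;
            entry.lpData = NULL;                         /* start a fresh walk */
            HeapLock(heaps[i]);
            while (HeapWalk(heaps[i], &entry)) {
                if ((entry.wFlags & PROCESS_HEAP_ENTRY_BUSY) &&
                    (const char *)addr >= (const char *)entry.lpData &&
                    (const char *)addr <  (const char *)entry.lpData + entry.cbData) {
                    found = 1;
                    break;
                }
            }
            HeapUnlock(heaps[i]);
        }
        free(heaps);
        return found;
    }

    int main(void)
    {
        void *p = malloc(64);
        int local = 0;
        printf("heap pointer in a heap?  %d\n", address_in_some_heap(p));
        printf("stack address in a heap? %d\n", address_in_some_heap(&local));
        free(p);
        return 0;
    }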
To get any further than this you'd need to have tracked allocations, deallocations, etc. for each heap and have all that information stored for use.
You could also track when threads are created/destroyed/exit and calculate the thread addresses that way.
That's a broad brush answer informed by my experience creating a memory leak detection tool (which needs this information) and numerous other tools.
I have an interest in writing a scheduler/RTOS project in XC8 using an enhanced MCU with access to the hardware stack.
I am trying to figure out how to control the creation of the software stacks so that each task's software stack gets a certain range in the general-purpose RAM.
Conceptually this is all easy to program in ASM but I want to be able to write C programs and have the software stacks for each task be put into the right address space.
There doesn't appear to be an option to create a separate software stack for a certain section of code or even create multiple software stacks - how do I do it?
Thanks
Stack switching is the responsibility of the scheduler, not the compiler, so you will not find a compiler option for it. You have to implement it in the scheduler you intend to write; that is in fact most of what a scheduler does.
In an RTOS, switching context involves storing all the registers relating to one thread of execution and replacing them with those of another. This includes replacing the stack pointer - that is how you switch stacks between threads. A context switch is completed when the program-counter register is loaded, effecting a jump to the new thread's last execution point (with all its registers, including the stack pointer, restored).
The context switch itself necessarily involves at least a small amount of assembler code, but much of it may still be written in C, and tasks themselves may be written in C. A good description of a simple RTOS scheduler is provided in Jean Labrosse's book on μC/OS-II, freely available in PDF. A PIC18 port of μC/OS-II is described here with a download.
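For a flavour of how the C side can look, here is a rough, heavily simplified sketch (names such as tcb_t, task_stacks and the two port_* routines are purely illustrative, not an XC8 or μC/OS-II API; the actual register save/restore and stack-pointer swap has to be supplied in a few lines of device-specific assembly):

    #include <stdint.h>

    #define NUM_TASKS   2
    #define STACK_BYTES 64

    typedef struct {
        uint8_t *sp;        /* this task's saved software stack pointer */
    } tcb_t;

    /* One private stack per task; a compiler/linker-specific attribute or a
     * custom linker section could pin these arrays to a chosen RAM range. */
    static uint8_t task_stacks[NUM_TASKS][STACK_BYTES];
    static tcb_t   tcb[NUM_TASKS];
    static uint8_t current_task;

    /* These two routines are the part that must be written in assembly for
     * the target device: push the working registers, save the stack pointer,
     * reload the other task's stack pointer, pop its registers. */
    extern void port_save_context(uint8_t **saved_sp);
    extern void port_restore_context(uint8_t *sp);

    /* Give each task its own stack area before it first runs. */
    void task_init(uint8_t i)
    {
        tcb[i].sp = &task_stacks[i][STACK_BYTES - 1];
    }

    /* Trivial round-robin scheduler: save the outgoing task's context, pick
     * the next task, restore its context (which also switches stacks). */
    void scheduler_switch(void)
    {
        port_save_context(&tcb[current_task].sp);
        current_task = (uint8_t)((current_task + 1) % NUM_TASKS);
        port_restore_context(tcb[current_task].sp);
    }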
All modern *nix operating systems use the concept of virtual memory (with paging). As far as I know, virtual memory puts a layer of abstraction between the programmer and the real physical memory: the programmer isn't limited by the RAM size and can see the program as one large contiguous space of data, instructions, heap and stack (and manipulate pointers according to that view). When we compile and link source code we get an executable file stored on disk, known as an ELF file; it contains all the data and instructions of the program, plus some additional information such as the stack and heap sizes (the stack and heap themselves are only created at runtime).
Now my questions:
1. How is this binary file (ELF) mapped to virtual memory?
2. Does every process have its own virtual memory (a page file?)?
3. What is the program's layout after being mapped to virtual memory?
4. What exactly is the preferred base address and how does it appear in virtual memory?
5. What is the difference between an RVA and an offset?
You don't have to answer all the questions or give detailed answers; instead, you can point me to good, thorough reading on the subject. Thanks.
How is this binary file (ELF) mapped to virtual memory?
The executable file contains instructions to the loader on how to lay out the address space. On some systems, parts of the executable can be mapped to memory and serve as a page file.
Does every process have its own virtual memory (a page file?)?
Every process has its own logical address space. Some areas within that address space may be shared with other processes.
What is the program's layout after being mapped to virtual memory ?
That depends upon the system and what the executable told the loader to do.
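On Linux you can see the resulting layout directly by dumping /proc/self/maps from inside the process (each line is one mapped region: the ELF text and data, the heap, shared libraries, the stack, the vdso, and so on):

    #include <stdio.h>

    int main(void)
    {
        /* Each line of /proc/self/maps shows one region of this process's own
         * virtual address space: address range, permissions, file offset and
         * the backing file (if any). */
        FILE *f = fopen("/proc/self/maps", "r");
        if (!f)
            return 1;

        char line[512];
        while (fgets(line, sizeof(line), f))
            fputs(line, stdout);

        fclose(f);
        return 0;
    }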
What exactly is the preferred base address and how does it appear in virtual memory?
That is just the desirable start location for loading something in memory. Most compilers generate relocatable code that is not tied to any specific logical address.
What is the difference between an RVA and an offset?
An RVA is just a fancy name for an offset. What is not clear in your question is what kind of offset you are talking about. There are byte offsets within pages. An RVA is usually an offset from a load address, and it can span pages.
Consider the following:
Where I'm getting confused is the step "child duplicate of parent". If you're running a process such as, say, Skype, and it forks, is it copying Skype and then overwriting that process copy with some other program? Moreover, what if the child process has memory requirements far different from the parent process? Wouldn't assigning it the same address space as the parent be a problem?
I feel like I'm thinking about this all wrong, perhaps because I'm imagining the processes to be entire programs in execution rather than some simple instruction like "copy data from X to Y".
All modern Unix implementations use virtual memory. That allows them to get away with not actually copying much when forking. Instead, the child's memory map points to the parent's memory until one of them starts modifying it.
When a child process exec's a program, that program is copied into memory (if it wasn't already there) and the process's memory map is updated to point to the new program.
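A minimal sketch of that sequence ("ls" here is just an arbitrary program to exec):

    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        pid_t pid = fork();               /* child starts as a copy-on-write copy */
        if (pid < 0) {
            perror("fork");
            return 1;
        }
        if (pid == 0) {
            /* child: replace the copied address space with a new program */
            execlp("ls", "ls", "-l", (char *)NULL);
            perror("execlp");             /* reached only if exec failed */
            _exit(127);
        }
        /* parent: the only visible difference so far is fork's return value */
        int status;
        waitpid(pid, &status, 0);
        printf("child %d finished\n", (int)pid);
        return 0;
    }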
fork(2) is difficult to understand. It has been explained a lot; read also the fork (system call) wikipage and several chapters of Advanced Linux Programming. Notice that fork does not copy the running program (i.e. the /usr/bin/skype ELF executable file); it lazily copies (using copy-on-write techniques, by configuring the MMU) the address space (in virtual memory) of the forking process. Each process has its own address space (but might share some segments with other processes; see mmap(2) and execve(2)). Since each process has its own address space, changes in the address space of the child do not (usually) affect the parent process. However, processes may set up shared memory, but then they need to synchronize: see shm_overview(7) & sem_overview(7).
By definition of fork, just after the fork syscall the parent and child processes have nearly identical state (in particular, the address space of the child is a copy of the address space of the parent). The only difference is the return value of fork.
And execve overwrites the address space and registers of the current process.
Notice that on Linux all processes (with a few exceptions, like kernel-started processes such as /sbin/modprobe) are obtained by fork-ing, ultimately from the initial /sbin/init process of pid 1.
Finally, system calls like fork (listed in syscalls(2)) are elementary operations from the application's point of view, since the real processing is done inside the Linux kernel. Play with strace(1). See also this answer and that one.
A process is often some machine state (registers) + its address space + some kernel state (e.g. file descriptors), etc... (but read about zombie processes).
Take time to follow all the links I gave you.
Suppose I'm starting two instances of the same program. Will the text region of both programs have same virtual addresses?
Depends. On most systems, if you run the same program twice in the same environment (same parameters, etc.), you'll find the same address mapping. This is simply because most of what the process does is deterministic, dependent only on the environment, command-line parameters, contents of files read, but not on changing data such as the date or process ID. This is very useful when debugging: if you restart your program, sometimes even after a small code change and recompilation, you have a chance that the memory layout remained the same. Of course, different instances of the program running concurrently may have the same virtual addresses, but they won't have the same physical addresses.
Some systems, such as OpenBSD, or Linux with various hardening settings, implement address space layout randomization (ASLR). ASLR means that each time a process starts, the virtual addresses of its code, data, stack(s) and heap(s) are determined at random. This is a security feature, designed to make exploits of security vulnerabilities harder: the exploit code can't just access known code at known addresses. However, as ASLR becomes more popular, exploits also become more sophisticated at working around it. ASLR remains useful because it increases the workload for the exploit writer without adding a lot of complexity.
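A quick way to see this for yourself: print a couple of addresses and run the binary twice. With ASLR disabled the values tend to repeat across runs; with ASLR enabled they change (the exact addresses are of course system-dependent):

    #include <stdio.h>

    static int in_data;                    /* lives in the data segment */

    int main(void)
    {
        int on_stack = 0;                  /* lives on the stack */
        printf("text : %p\n", (void *)&main);
        printf("data : %p\n", (void *)&in_data);
        printf("stack: %p\n", (void *)&on_stack);
        return 0;
    }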
Probably not, but it's possible that they could. Each process has its own independent memory space.