In the COFF file format, what is the significance of the relocation information section? - coff

I am reading about the COFF file format, which is commonly used to create executable files (it also has some variants).
While reading, I came across the relocation section of the format. How is this relocation section used to create an executable file?
It would be very useful if you could point me to some links that will help me.

Actually, with COFF there are two types of relocation information:
COFF Relocation records
The relocation section in an executable image.
They have similar, but different, purposes. The relocation information in an executable identifies things that need to be fixed up, at load time, should the executable image be loaded at a different address from its preferred address.
COFF relocation records identify things that need to be fixed up, at link time, when a section in an object file is assigned to an offset in an executable image.
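Concretely, the base relocation data in a PE image is a series of blocks, one per 4 KB page: an 8-byte header (page RVA, block size) followed by 16-bit entries whose top 4 bits are the fixup type and low 12 bits an offset within the page. A minimal sketch in Python, parsing a hand-built block rather than a real file (the byte layout follows the PE/COFF spec; the values are made up for illustration):

```python
import struct

# Hand-built ".reloc" block, little-endian: page RVA 0x1000, total size 12
# bytes (8-byte header + two 16-bit entries). Type 3 is
# IMAGE_REL_BASED_HIGHLOW, a full 32-bit address needing adjustment.
block = struct.pack("<IIHH", 0x1000, 12, (3 << 12) | 0x010, (3 << 12) | 0x154)

def parse_reloc_block(data):
    page_rva, size = struct.unpack_from("<II", data, 0)
    fixups = []
    for off in range(8, size, 2):
        (entry,) = struct.unpack_from("<H", data, off)
        # top 4 bits: fixup type; low 12 bits: offset within the page
        fixups.append((entry >> 12, page_rva + (entry & 0x0FFF)))
    return fixups

# Each fixup names an RVA where the loader adds (actual base - preferred
# base) when the image could not load at its preferred address.
for ftype, rva in parse_reloc_block(block):
    print(f"type {ftype}: fixup at RVA {rva:#x}")
```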

Relocation is used to place executable code in its own memory space in a process. For example, if you try to load two DLLs that both request the same base address (i.e., the same place in memory), then one of the DLLs will have to be relocated to another address.
NTCore is a useful site for exploring Portable Executable (PE) files, the Windows format that grew out of COFF. Here is another site that explains relocation pretty well.

An unintended additional use of relocation is (de-)obfuscating binaries at run time with no additional unpacking code. See this paper.

Related

How to use LibTiff.NET Tiff2Pdf in .NET 6

I want to provide support to convert single-page and multi-page tiff files into PDFs. There is an executable in Bit Miracle's LibTiff.NET called Tiff2Pdf.
How do I use Tiff2Pdf in my application to convert tiff data stream (not a file) into a pdf data stream (not a file)?
I do not know if there is an API exposed because the documentation only lists Tiff2Pdf as a tool. I also do not see any examples in the examples folder using it in a programmatic way to determine if it can handle data streams or how to use it in my own program.
The libtiff tools expect a filename, so the runs shown below simply convert X.tif to various destinations; the first uses the default invocation:
tiff2pdf x.tif
We can see that it writes the PDF stream to the console (standard output), but with no directory to write to it has nowhere to store the result. On a second run we can redirect the output:
tiff2pdf x.tif > a.pdf
or alternatively specify a destination:
tiff2pdf -o b.pdf x.tif
So in order to use these tools we need a file system to receive the output files; the destination folder or file can live on a memory file system (RAM disk) drive or folder. You need to set that up first.
NuGet is a package manager that simply bundles the library, and as I don't use .NET you're a bit out on a limb, since BitMiracle are not offering free support (hence pointing you at Stack Overflow, a very common tech support ploy: Pass Liability Over Yonder). However, looking at https://github.com/BitMiracle/libtiff.net/tree/master/Samples
some sample names suggest in-memory operation, such as https://github.com/BitMiracle/libtiff.net/tree/master/Samples/ConvertToSingleStripInMemory; perhaps you can get more ideas there.
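The stream-to-file bridging described above is language-agnostic; here is a sketch in Python (the same shape applies with System.Diagnostics.Process in .NET). The tiff2pdf argv in the comment is an assumption based on the command lines shown earlier:

```python
import os
import subprocess
import tempfile

def convert_via_tempfiles(data, make_argv):
    """Bridge an in-memory byte stream to a file-based CLI tool.
    make_argv(src, dst) builds the command line for the tool."""
    src = tempfile.NamedTemporaryFile(suffix=".tif", delete=False)
    dst = src.name + ".out"
    try:
        src.write(data)
        src.close()
        subprocess.run(make_argv(src.name, dst), check=True)
        with open(dst, "rb") as f:
            return f.read()   # the result comes back as an in-memory stream
    finally:
        os.unlink(src.name)
        if os.path.exists(dst):
            os.unlink(dst)

# With tiff2pdf installed (argv is an assumption, per the runs above):
# pdf_bytes = convert_via_tempfiles(tiff_bytes,
#                                   lambda s, d: ["tiff2pdf", "-o", d, s])
```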

Difference between unlink and rm on unix

What's the real difference between these two commands? Why is the system call to delete a file called unlink instead of delete?
You need to understand a bit about the original Unix file system to understand this very important question.
Unlike other operating systems of its era (late '60s, early '70s), Unix did not store the file name together with the actual directory information (that is, where the file was stored on the disk). Instead, Unix created a separate inode table to contain the directory information and identify the actual file, and then allowed separate text files to be directories of names and inodes. Originally, directory files were meant to be manipulated like all other files, as straight text files, using the same tools (cat, cut, sed, etc.) that shell programmers are familiar with to this day.
One important consequence of this architectural decision was that a single file could have more than one name! Each occurrence of the inode in a particular directory file was essentially a link to the inode, and that is what it was called. To connect a file name to the file's inode (the "actual" file), you "linked" it, and when you deleted the name from a directory you "unlinked" it.
Of course, unlinking a file name did not automatically mean that the file was deleted / removed from the disk, because the file might still be known by other names in other directories. The inode table also includes a link count to keep track of how many names an inode (a file) is known by; linking a name to a file adds one to the link count, and unlinking it removes one. When the link count drops to zero, the file is no longer referred to in any directory, is presumed to be "unwanted," and only then can it be deleted.
For this reason the "deletion" of a file by name unlinks it - hence the name of the system call - and there is also the very important ln command to create an additional link to a file (really, the file's inode,) and let it be known another way.
Other, newer operating systems and their file systems have to emulate / respect this behavior in order to comply with the Posix standard.
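The link-count behaviour described above is easy to observe; a small Python sketch using the `os` module (which wraps the `link`/`unlink` system calls):

```python
import os
import tempfile

d = tempfile.mkdtemp()
a, b = os.path.join(d, "a"), os.path.join(d, "b")

with open(a, "w") as f:
    f.write("data")
n1 = os.stat(a).st_nlink     # 1: one directory entry names this inode

os.link(a, b)                # like `ln a b`: a second name, same inode
n2 = os.stat(a).st_nlink     # 2

os.unlink(a)                 # remove one name; the data survives
n3 = os.stat(b).st_nlink     # 1: still reachable via b
print(n1, n2, n3)            # 1 2 1
```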

Executing 'mv A B': Will the 'inode' be changed?

If we execute a command:
mv A B
then what will happen to the fields in the inode of file A? Will it change?
I don't think that it should change just by changing the name of the file, but I'm not sure.
It depends at least partially on what A and B are. If you're moving between file systems, the inode will almost certainly be different.
Simply renaming the file on the same file system is more likely to keep the same inode, simply because the inode belongs to the data rather than the directory entry, and efficiency would lead to that design. However, it depends on the file system and is in no way mandated by standards.
For example, there may be a versioning file system with the inode concept that gives you a new inode because it wants to track the name change.
It depends.
There is a nice example on this site which shows that the inode may stay the same. But I would not rely on this behaviour; I doubt that it is specified in any standard.
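You can check this yourself; a quick Python sketch (on common local file systems such as ext4 the inode number survives a same-directory rename, though as noted above nothing guarantees it):

```python
import os
import tempfile

d = tempfile.mkdtemp()
a, b = os.path.join(d, "A"), os.path.join(d, "B")
open(a, "w").close()

ino_before = os.stat(a).st_ino
os.rename(a, b)                 # what `mv A B` does within one file system
ino_after = os.stat(b).st_ino
print(ino_before == ino_after)  # True here: the inode belongs to the data
```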

Offline symbol server over http

I'm currently trying to set up an offline symbols server that will serve up symbols much the same as Microsoft's own symbol server works.
In essence, I would like to have the ability to use 'srv*C:\symbolcache*http://my.symbol.server' as my symbol path in windbg. I have IIS up and running and added a MIME type for .pdb files.
I used symstore to add the symbols from various flavors of Windows and then updated my symbol path in windbg, to no avail.
Since I'm using several different OSes, the symbols directory just contains .PTR files to the actual location of the .pdb -- which I think is my problem.
As an example, in my symbols folder I have a folder called 'mswrd6.pdb' which contains 4 directories whose names are all 32-character hex strings; each one of those directories contains a "file.ptr" file which points to the correct location of the desired .pdb.
Long story short, does anyone know of a guide or documentation out there that goes through how to create a symbols server over http?
Thanks!
This is the guide I used when I set up my symbol server. It takes a fair amount of fiddling to get everything to work, but this guide along with Google should get you the whole way.
http://msdn.microsoft.com/en-us/library/ms681417(VS.85).aspx

Unix directory structure: managing file name collision

Usually, every time `make install' is run, files are not put in a program-specific directory like /usr/prog1. Instead, the files are put in directories where files from other programs already are, like /usr/lib and /usr/bin. I believe this has been common practice for a long time. This practice surely increases the probability of file name collisions.
Since my googling returned no good discussion on this matter, I am wondering what people do to manage file name collisions. Do they simply try this or that name and, if something goes wrong, a bug is filed by the user and the developer picks another name? Or do they simply prefix the names of their files? Is anyone aware of a good discussion on this matter?
Usually people choose the name they want and if something collides then the problem gets resolved by the distribution. That's what happened with ack (ack in Debian, Kanji converter) and ack (ack-grep in Debian, text search utility).
Collisions don't seem to be that common, though. A quick web search should tell you if the name is used somewhere. If it's not searchable, it's probably not included in many distributions, which means you're not likely to actually conflict.
When compiling programs, you can usually specify a prefix path like this: ./configure --prefix=/usr/local/prog1 or ./configure --prefix=/opt/prog1 (whether you use /usr/local or /opt doesn't really matter). Then when running make install it'll put the files under the specified prefix. You can then either add /opt/prog1/bin/ to your PATH, or make a symlink to the executable in /usr/local/bin, which should already be in your PATH.
Best thing is to use your distributions package manager though.
