Given that a pointer is a memory address in RAM that points to or identifies another memory location in RAM, and that polymorphism is the provision of a single interface to entities of different types, or the use of a single symbol to represent multiple different types, it seems like you could argue that RAM is an interface that uses a single symbol (a memory address, in this case) to represent multiple different types (other memory addresses in RAM that point to any type of data stored at that address). Is this a fair conclusion to draw?
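To make the premise concrete, here is a tiny C sketch (purely illustrative) of what I mean by one kind of symbol, an address, standing in for values of different types:

```c
#include <stdio.h>

int main(void) {
    int i = 42;
    double d = 3.14;
    const char *s = "hello";

    /* The same kind of symbol (an address held in a void *) can refer to
       values of entirely different types; only the cast says which one. */
    void *p = &i;
    printf("%d\n", *(int *)p);
    p = &d;
    printf("%f\n", *(double *)p);
    p = &s;
    printf("%s\n", *(const char **)p);
    return 0;
}
```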
Suppose you had a zoology textbook which said that the entities located at the following coordinates: (x1, y1), ... (xn, yn) are mammals, and defined mammals this way. Would you view this as legitimate? It is the opposite of "polymorphism".
Related
In game development, it is common to write "baked" assets that are ready to be used by reading their data into memory, using a relocation table to adjust all pointers by the base address of the buffer that caches the asset in memory, and then directly dereferencing a pointer into that buffer as a particular type. It is similar to relocatable addresses for executable code. This allows for zero-copy reads at runtime, which is significantly faster than parsing the asset into an in-memory format (e.g. parsing JSON into a struct with serde). For example, Halo: Combat Evolved loads all .map data in retail builds at the virtual address 0x40440000, and all pointers in the data on disk are already offset by that base address.
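To illustrate the idea (not my actual on-disk layout), a relocation pass in C might look roughly like this, assuming a hypothetical format: a u32 relocation count, followed by that many u32 offsets locating pointer-sized fields that hold buffer-relative offsets on disk:

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Rebase every pointer field: the value stored on disk is an offset into the
   buffer, which becomes a live pointer once the base address is added. */
static int relocate(uint8_t *buf, size_t len) {
    uint32_t count;
    if (len < sizeof count)
        return -1;
    memcpy(&count, buf, sizeof count);

    for (uint32_t i = 0; i < count; i++) {
        size_t entry = sizeof count + (size_t)i * sizeof(uint32_t);
        if (entry + sizeof(uint32_t) > len)
            return -1;                              /* relocation table truncated */

        uint32_t field_off;
        memcpy(&field_off, buf + entry, sizeof field_off);
        if ((size_t)field_off + sizeof(uintptr_t) > len)
            return -1;                              /* pointer field out of bounds */

        uintptr_t target;                           /* stored as an offset on disk */
        memcpy(&target, buf + field_off, sizeof target);
        if (target >= len)
            return -1;                              /* target outside the allocation */

        target = (uintptr_t)(buf + target);         /* rebase: offset -> pointer */
        memcpy(buf + field_off, &target, sizeof target);
    }
    return 0;
}
```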
While this is somewhat straightforward in C and C++, Rust's safety rules present a challenge: not every bit pattern is valid for every type, and there are stricter alignment requirements for structures. Given the following constraints,
All structures represented in the buffer are repr(C); all bit patterns for their non-pointer fields are considered valid representations, and they do not necessarily have alignment requirements (à la the zerocopy crate's FromBytes and Unaligned traits)
Magic fields can be used and checked to ensure the validity of the referenced structures from a higher struct
The pointer to the data is wrapped with std::pin::Pin to prevent the data from being moved
Access to the relocated data does not necessarily have to be safe Rust (e.g. dereferencing unchecked pointers), but the pointers themselves can be verified against the relocation offsets accompanying the asset data
How could I implement offset relocation within the loaded asset's buffer, to ensure that the internal pointers of the root structure and all its values are within the bounds of the allocation?
The 68k registers are divided into two groups of eight: eight data registers (D0 to D7) and eight address registers (A0 to A7). What is the purpose of this separation? Would it not be better if they were united?
The short answer is that this separation comes from the architectural limitations and design decisions made at the time.
The long answer:
The M68K implements quite a lot of addressing modes (especially when compared with RISC-based processors), with many of its instructions supporting most (if not all) of them. This gives a large variety of addressing-mode combinations within every instruction.
This also adds complexity in terms of opcode execution. Take the following example:
move.l $10(pc), -$20(a0,d0.l)
The instruction just copies a long-word from one location to another, simple enough. But in order to actually perform the operation, the processor needs to figure out the actual (raw) memory addresses to work with for both the source and destination operands. This process, in which the operands' addressing modes are decoded (resolved), is called effective address calculation.
For this example:
To calculate the source effective address, $10(pc), the processor loads the value of the PC (program counter) register and adds $10 to it.
To calculate the destination effective address, -$20(a0,d0.l), the processor loads the value of the A0 register, adds the value of the D0 register to it, then subtracts $20.
That is quite a lot of calculation for a single opcode, isn't it?
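Purely for illustration, the two effective-address computations expressed in C, with the registers modelled as plain 32-bit variables:

```c
#include <stdint.h>

/* Source operand $10(pc): the program counter plus a displacement. */
static uint32_t src_ea(uint32_t pc) {
    return pc + 0x10;
}

/* Destination operand -$20(a0,d0.l): an address register plus an index
   register plus a (negative) displacement. */
static uint32_t dst_ea(uint32_t a0, uint32_t d0) {
    return a0 + d0 - 0x20;
}
```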
But the M68K is quite fast in performing these calculations. In order to calculate effective addresses quickly, it implements a dedicated Address Unit (AU).
As a general rule, operations on data registers are handled by the ALU (Arithmetic Logical Unit) and operations involving address calculations are handled by the AU (Address Unit).
The AU is well optimized for 32-bit address operations: it performs a 32-bit addition/subtraction within one bus cycle (4 CPU ticks), which the ALU doesn't (it takes 2 bus cycles for 32-bit operations).
However, the AU is limited to just loads and basic addition/subtraction operations (as dictated by the addressing modes), and it is not connected to the CCR (Condition Code Register), which is why operations on address registers never update flags.
That said, the AU was there to speed up the calculation of complex addressing modes, but it simply couldn't replace the ALU completely (after all, there were only about 68,000 transistors in the M68K), hence there are two register sets (data and address registers), each with its own dedicated unit.
This is just based on a quick lookup, but a single set of 16 general-purpose registers would obviously be easier to program. The problem is that every instruction would then need to address any of the 16 registers, which requires an extra bit per register field and doubles the encoding space needed. Using half the registers for each purpose is not ideal, but it gives access to more registers overall.
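A quick back-of-the-envelope illustration of that encoding argument (not 68k-specific):

```c
#include <stdio.h>

int main(void) {
    /* A register field needs 3 bits to select one of 8 registers, but 4 bits
       to select one of 16; each extra bit doubles that field's encoding space. */
    printf("8 registers  -> 3-bit field, %d values\n", 1 << 3);
    printf("16 registers -> 4-bit field, %d values\n", 1 << 4);
    return 0;
}
```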
One thing I was recently thinking about is how a computer finds its variables. When we run a program, it creates multiple layers on the stack, one layer for every new scope it opens, and puts either the variable's value or, if the value is stored on the heap, a pointer to it into that scope. When the scope is done, it and all its variables are destroyed. But how does a computer know where its variables are? And which one does it use if the same variable name occurs more than once?
The way I imagine it, the computer searches the scope it is in like an array, and if it doesn't find the variable, it follows the stack downwards like a linked list and searches the next scope like an array.
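Roughly, the model I have in mind looks like this (a C sketch, with made-up names):

```c
#include <stddef.h>
#include <string.h>

/* Each scope is an array of name/value pairs linked to its enclosing scope. */
typedef struct { const char *name; double value; } Var;

typedef struct Scope {
    struct Scope *parent;   /* the next layer down the stack */
    Var vars[16];
    int count;
} Scope;

/* Search the current scope like an array; if the name isn't there,
   follow the chain downwards and search the next scope. */
static const Var *lookup(const Scope *scope, const char *name) {
    for (; scope != NULL; scope = scope->parent)
        for (int i = 0; i < scope->count; i++)
            if (strcmp(scope->vars[i].name, name) == 0)
                return &scope->vars[i];
    return NULL;   /* not found in any enclosing scope */
}
```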
That leads to the assumption that a global variable is the slowest to use, since the search has to traverse all the way back to the outermost scope. So its lookup time would be a * n (a = average number of variables per scope, n = number of scopes). If I now assume that my code is recursive and that the recursive function uses a global variable (say I have defined const PI = 3.1416 and use it in every recursion), then it would traverse backwards for every single call, and if my recursion goes 1000 levels deep, it does that 1000 times.
But on the other hand, while learning about recursion, I have never heard that referring to variables not found inside the recursive scope should be avoided if possible. Therefore I wonder whether my thinking is right. Can someone please shed some light on this?
You got it the other way around: scopes, frames, heaps don't make variables, variables make scopes, frames, heaps.
Both are a bit of a stretch, actually, but my point is to avoid focusing on the lifetime of a variable (that is what terms like heap and stack really refer to) and instead take a look under the hood.
Memory is a form of storage where each cell is assigned a number; the cell is called a word and the number is called an address.
The set of addresses is called the address space; an address space is usually a range of addresses or a union of ranges of addresses.
The compiler assumes the program data will be loaded at a specific address, say X, and that there is enough memory after X (i.e. X+1, X+2, X+3, ..., all exist) for all the data.
Variables are then laid out sequentially from X onward; it is the job of the compiler to keep the association between the address X+k and the variable instance.
Note that a variable may be instantiated more than once; calling a function twice and recursion are both examples of that.
In the first case, the two instances can share the same address X+k since they don't overlap in time (by the time the second instance is alive, the first is over).
In the second case, the two instances overlap in time and two addresses must be used.
So we see that it is the lifetime of a variable that affects how the mapping between the variable name and its address (a.k.a. the allocation of the variable) is done.
Two common strategies are:
A stack
We start from an address X+b and allocate new instances at successive addresses X+b+1, X+b+2, etc.
The current address (e.g. X+b+54) is stored somewhere (it is the stack pointer).
When we want to free a variable we set the stack pointer back (e.g. from X+b+54 to X+b+53).
We can see that it's impossible to free a variable that is not the last allocated.
This allows for very fast allocation/deallocation and naturally fits the needs of a function frame that holds the local variables: when a function is invoked its new variables are allocated; when it ends, they are removed.
From what we noted above, we see that if f calls g (i.e. f is the parent of g) then the variables of f cannot be deallocated before those of g.
This again naturally fits the semantics of functions.
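A toy sketch of the mechanism (names made up; no bounds checking): the whole allocator is just one pointer that moves forward on allocation and back on deallocation.

```c
#include <stddef.h>

static unsigned char memory[1024];   /* the cells at X+b .. X+b+1023 */
static size_t stack_pointer = 0;     /* current top of the stack */

/* Allocate n cells at the top of the stack and return their address. */
static void *stack_alloc(size_t n) {
    void *at = &memory[stack_pointer];
    stack_pointer += n;
    return at;
}

/* Free the most recently allocated n cells: just move the pointer back. */
static void stack_free(size_t n) {
    stack_pointer -= n;
}
```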
The heap
This strategy dynamically allocates a variable instance at an address X+o.
The runtime reserves a block of addresses and manages their status (free or occupied); when asked, it can hand out a free address and mark it occupied.
This is useful to allocate an object whose size depends on the user input, for example.
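In C this is what malloc/free expose; a minimal example where the size is only known at run time:

```c
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    size_t n;
    if (scanf("%zu", &n) != 1)       /* size depends on user input */
        return 1;

    double *samples = malloc(n * sizeof *samples);   /* ask the runtime for addresses */
    if (samples == NULL)
        return 1;

    for (size_t i = 0; i < n; i++)
        samples[i] = 0.0;

    free(samples);                    /* give the addresses back */
    return 0;
}
```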
The heap (static)
Some variables have the lifespan of the program, but their size and number are known at compile time.
In this case, the compiler simply assigns each instance a unique address X+i.
They cannot be deallocated; they are loaded into memory in a batch along with the program code and stay there until the program is unloaded.
I left out some details, like the fact that the stack more often than not grows from higher to lower addresses (so it can be put at the far edge of memory) and that variables may occupy more than one address.
Some programming languages, especially interpreted ones, don't associate addresses with variable instances; instead, they keep a map between the variable name (properly qualified) and the variable value. This way the lifespan of a variable can be controlled in more particular ways (see closures in JavaScript).
Global variables are allocated in the static heap; only one instance is present (only one address).
Each recursive function that uses such a variable always references the sole instance directly, because its unique address is known at compile time.
Local variables in a function are allocated in the stack and each invocation of a function (recursive or not) uses a new set of instances (the addresses don't need to be the same each time, but they could).
Simply put, there is no lookup: variables are allocated so that, once compiled, the code can access them directly (either relatively, on the stack, or absolutely, in the static heap).
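As a concrete illustration (a small C sketch; the printed addresses will of course vary by platform), a global constant has a single fixed address that every recursive call uses directly, while each call gets a fresh address for its local:

```c
#include <stdio.h>

static const double PI = 3.1416;     /* one instance, address fixed when the program is loaded */

static double spiral(int depth, double acc) {
    double local = acc + PI;         /* a new instance of `local` per call */
    printf("depth %d: &PI=%p  &local=%p\n", depth, (void *)&PI, (void *)&local);
    if (depth == 0)
        return local;
    return spiral(depth - 1, local);
}

int main(void) {
    spiral(3, 0.0);   /* &PI is identical every time; &local differs per frame */
    return 0;
}
```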
I'm trying to use several arrays of doubles in a kernel, all of the same length. Instead of passing each double* in as a separate argument, I know I can define a structure in the .cl file that holds several doubles and then just pass the kernel one pointer to an array of those structures instead.
Will the performance differ between the two approaches? Please correct me if I am wrong, but I think passing individual double pointers means the accesses can be coalesced. Will accesses to the structures also be coalesced?
As long as your structures don't contain any pointers, what you describe is absolutely possible. The primary impact is generally, as you've already considered, the effect this has on the coalescing of memory operations. How big an effect depends on your memory access pattern, the size of your struct, and the device you're running on. More details would be needed to describe this more fully.
That said, one instance where I've used a struct in this way very successfully is where the element being read is the same for all work items in a work group. In this case there is no penalty on my hardware (an NVIDIA GTX 570). Also, it is worth remembering that in some cases the added latency introduced by serialised memory operations can be hidden. In the CUDA world this would be achieved by having high occupancy for a problem with high arithmetic intensity.
Finally it is worth pointing out that the semantic clarity of using a struct can have a benefit in and of itself. You'll have to consider this against any performance cost for your particular problem. My advice is to try it and see; it is very difficult to predict the impact of these issues ahead of time.
Theoretically it is the same performance. However, if you access some of the members more often than others, using several segregated arrays will perform much better, due to cache locality. On the other hand, most operations become more awkward when you have several arrays.
The structures and the individual arrays will have exactly the same performance.
Suppose you have a big array of doubles, and the first work item uses elements 0, 100, 200, 300, ... while the next one uses 1, 101, 201, 301, ...
If you have a structure of 100 doubles, in memory the first structure will come first (elements 0-99), then the second (100-199), and so on. The kernels will access exactly the same memory in the same places; the only difference is how you define the memory abstraction.
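A plain-C sketch (field and type names made up) of why the two views touch the same addresses:

```c
/* Flat-array view: work item wi reads element wi + 100*k for k = 0, 1, 2, ... */
double flat_read(const double *data, int wi, int k) {
    return data[wi + 100 * k];
}

/* Struct view: the same bytes, just grouped 100 at a time. */
typedef struct { double f[100]; } Bundle;

double struct_read(const Bundle *data, int wi, int k) {
    return data[k].f[wi];            /* same address as data[wi + 100*k] above */
}
```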
In the more general case of a structure with different element types (char, int, double, bool, ...), the alignment may not work out as if it were a single array of data. But access will still be "semi-coalesced". I would even bet the performance is still the same.
In R. Kent Dybvig's paper "Three Implementation Models for Scheme" he speaks of "FFP languages" and "FFP machines". Apparently there is some connection between FFP machines and string reduction on multiple processors.
Googling doesn't really uncover much in terms of explanations or examples.
Can anyone shed some light on this topic?
Thanks.
Kent Dybvig's advisor, Gyula A. Mago, published a detailed description in 1987: "The FFP Machine", Technical Report 87-014, by Mago and Stanat.
As of this writing, the PDF is freely available at:
http://www.cs.unc.edu/techreports/87-014.pdf
The FFP Machine is a very fine-grained parallel computer architecture:
each processor holds a single symbol / atom / value.
It uses a string reduction model of computation in which
innermost function applications are found and replaced by their
equivalent result (eager evaluation).
Where a result is used in several places, it tends to be re-evaluated
instead of incurring the costs of accessing some global store
(but see Mago's paper on "Copying Operands vs Copying Results", or better yet Mago's "Data Sharing in an FFP Machine" in the 1982 Functional Programming Languages and Computer Architecture conference).
The L cells holding the FFP expression being reduced
communicate through a tree-structured arrangement of T cells.
Note that ICs are basically two-dimensional and, with wiring,
circuits can move towards being three-dimensional in physical space.
Interconnection networks that occupy higher dimensions
(such as the Hypercube, Omega, Banyan, Star, etc. networks)
will eventually be unable to perform near their theoretical limit.
This communication network is circuit-switched rather than being packet-switched.
Data packets contain no addresses and do not need routing.
Packets from distinct reductions cannot meet, cannot conflict
and cannot experience congestion with each other.
The configuring activity (called "Partitioning") is performed
in a single sweep upwards in the tree, using a handful
of logic operations on 3-bit messages, leaving "area machines" in its wake,
each created to advance at most a single reducible application.
While it is technically logarithmic in time,
the resulting area machines can begin communicating
in a pipelined fashion behind the partitioning wave,
practically costing a constant time penalty.
(The dismantling of area machines remains a logarithmic cost in time).
Packets within a single reduction should, and must, meet,
and thus provide an often-useful synchronization.
Sequences of packets are sorted and combined as they rise
within an area, to be broadcast from the root of the area machine.
Parallel Prefix and Parallel Suffix operations are provided
to reduce area traffic, since there remains a potential bottleneck
within an individual reducible application.
This is accomplished without the need exhibited in
the Ultracomputer (Jacob "Jack" Schwartz at NYU)
for a separate logarithmic-sized cache memory in each
communication node.
Each T cell (internal tree node) only needs a FIFO buffer
(for efficiency) of size greater than the pipeline path to
the top of the tree and back down.
(This latter is a conjecture of mine, but it seems reasonable).
Since the tree maintains the left-to-right order of data
(unlike some other combining networks), the system enables cells
to rotate their data in logarithmic rather than linear time,
avoiding the plausible congestion at the root of the area machine.
It's worth noting again that the parallelism within an area
machine is independent of the simultaneous parallelism in other
area machines, and has available to it a number of processors
proportional to the quantity of data in the operand.
Have you come across this yet: "Compiling APL for parallel execution on an FFP machine"?
"FFP" stands for Formal FP: similar to FP, but with a regular, sugarless syntax designed for machine execution. That is all I can offer you.
See Wikipedia's FP page.