32-bit operation vs 64-bit operation on a 64-bit machine/OS

Which operation, i.e. a 32-bit operation or a 64-bit operation (like masking a 32-bit flag or a 64-bit flag), would be cheaper on a 64-bit machine?

As you don't specify an architecture, I can only give a general answer, as it depends on the operation and on the processor architecture in question. Once you have the data in a CPU register, most operations will usually take the same amount of time regardless of whether the value was originally 32 or 64 bits.
However, there can be some differences on some architectures in how the data gets into a register. Here are some situations where a "native" value may be faster than a smaller value on some hardware:
Fetching data
Fetching a "native-sized" value may be faster than fetching a smaller value. That is, the processor may need to fetch 64 bits regardless, and then mask/shift off 32 bits of it to "load" a 32-bit value. This masking/shifting is not required when working on a 64-bit value, so the 64-bit value can possibly be loaded faster. (This goes against the intuitive idea that something twice as big might take twice as long to load.)
Alternatively, if the bus can handle half-width fetches, then 32 bits may be loaded in the same time as a 64 bit value.
To confuse matters more, the CPU caches change the picture as well. Usually when you read one value from memory, a "line" of several memory locations is read into the cache, so that subsequent reads can be served from fast cache memory instead of requiring a full fetch from RAM. In that case, using 32-bit values works out faster if you are accessing many values in sequence, as twice as many of them fit in the cache, resulting in fewer cache misses.
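To make the cache effect concrete, here is a minimal sketch (the function names are mine): scanning an array of 32-bit values touches half as many cache lines as scanning the same number of 64-bit values, so on a memory-bound sequential pass the 32-bit version can come out ahead.

#include <stddef.h>
#include <stdint.h>

/* With 64-byte cache lines, one line holds 16 uint32_t values but only
   8 uint64_t values, so the 32-bit scan suffers roughly half as many
   cache misses over a large, sequentially accessed array. */
uint64_t sum32(const uint32_t *a, size_t n) {
    uint64_t s = 0;
    for (size_t i = 0; i < n; i++) s += a[i];
    return s;
}

uint64_t sum64(const uint64_t *a, size_t n) {
    uint64_t s = 0;
    for (size_t i = 0; i < n; i++) s += a[i];
    return s;
}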
Computation
The processor hardware may be optimised for dealing with 64-bit values, so calculating with 32-bit values may cause it more trouble and thus slow things down. For example, it might be able to process a double (64-bit) value "natively" but have to convert a float (32-bit) value into a double before it can process it, then convert the result back to a float afterwards.
Alternatively, there may be 32-bit and 64-bit paths through the CPU, or the CPU may be able to do any conversions required in a way that does not affect the overall execution time of the instruction, in which case they may be calculated at the same speed.
This may affect complex operations (floating point) but is unlikely to be a problem with simple ops (AND, OR, etc.).
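As a small illustration of the float/double point (a sketch; whether any conversion actually happens depends entirely on the FPU):

/* On hardware that computes natively in double precision (the x87 FPU
   is the classic example), the float version conceptually widens both
   operands, adds, and narrows the result back to float, while the
   double version needs no conversion. On SSE2-style hardware, both
   widths are handled natively and run at the same speed. */
float  add_f(float a, float b)   { return a + b; }
double add_d(double a, double b) { return a + b; }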

Generally speaking, a 64-bit operation and a 32-bit operation have the same cost. The 32-bit operation might end up taking an extra instruction if the compiler needs to ensure that the upper 32 bits of a 64-bit register are cleared (or sign-extended), but that operation generally has little cost.
There might be a difference in instruction encoding that makes one take more space than the other, but that (and which way the advantage lies) depends on a number of factors.
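A minimal sketch of that point, assuming an x86-64 target (the function names and constants are mine):

#include <stdint.h>

/* On x86-64, writing a 32-bit register implicitly zeroes its upper 32
   bits, so mask32 typically compiles to a single AND with an immediate.
   mask64 may cost one extra instruction just to materialize the wide
   constant, since a 64-bit immediate doesn't fit in the AND encoding. */
uint32_t mask32(uint32_t v) { return v & 0x00FF0000u; }
uint64_t mask64(uint64_t v) { return v & 0x00FF000000000000ull; }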

It depends -- masking a flag will normally use an AND instruction, which will execute quickly (~1 cycle) once the data is in a register. Loading 64 bits of data from memory will generally be slower than loading 32 bits of data -- but if you're using more than 32 flags, you'll have to load more than 32 bits of data anyway, and handling the masking in one cycle will improve speed over doing it in two or three instructions. Whether any of this makes a difference to overall speed will generally depend on surrounding instructions -- for example, if the data is already in the cache anyway, you may not need to load it from memory.
In other words, it's difficult to make generalizations -- you just about have to look at a specific code sequence (not just one instruction, but a whole sequence) to say anything -- and the result for that sequence may not mean much about another sequence that initially looks almost identical.
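For example (a sketch; the flag names are invented), once you have more than 32 flags the flag word itself must be 64 bits, but testing a flag is still a single AND once the word is in a register:

#include <stdint.h>

#define FLAG_DIRTY  (1ULL << 3)   /* would fit in a 32-bit word */
#define FLAG_PINNED (1ULL << 40)  /* forces a 64-bit flag word */

int is_pinned(uint64_t flags) {
    return (flags & FLAG_PINNED) != 0;  /* one AND, ~1 cycle in a register */
}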

Related

How does the program read 32 bits from memory in a single clock cycle?

So, I have this assignment where I need to design a 32-bit RISC 5-stage pipeline. I must support at least 32 (32-bit) instructions and 32 (32-bit) data values. The memory must be read in one clock cycle. For this, I have used a word-addressable memory (one address contains 32 bits), but I want to make it byte-addressable.
One way of doing this is to make the external clock four times slower and feed that into the other stages of the pipeline, while passing the original clock to the memory. But this makes the simulation a bit hectic: I would have to run the clock 20 times (instead of 5).
Another way is to run a clock (attached to the memory) that is four times faster than the external clock. Then, by the time a single external clock cycle passes, the memory has been accessed four times, so the complete 32 bits have been fetched. But circuits for doubling/quadrupling the frequency of a clock seem too complicated.
Are there simpler frequency doubler circuits that can be implemented, or is there any other way to do this?
I am using logisim-evolution to simulate this, and for the memory part, I have used the in-built RAM.
The normal way to make a 32-bit byte-addressable memory is to have four 8-bit memory subsystems that are all fed the top N-2 bits of the byte address. When doing a 32-bit load or store, all four memory subsystems will be active. When doing a 16-bit load or store, the second-from-the-bottom address bit will be used to control whether to activate the first and second subsystems or the third and fourth. When doing an 8-bit load or store, the bottom address bit will select between the first and second, or between the third and fourth, subsystem.
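A rough C model of that arrangement (purely illustrative; the names and sizes are mine, and real hardware would use chip-select lines rather than array indexing):

#include <stdint.h>

#define ROWS 256  /* each bank stores one byte of every 32-bit word */

static uint8_t bank[4][ROWS];  /* four 8-bit memory subsystems */

uint32_t load32(uint32_t byte_addr) {
    uint32_t row = byte_addr >> 2;  /* top N-2 address bits feed all four banks */
    return  (uint32_t)bank[0][row]
          | (uint32_t)bank[1][row] << 8
          | (uint32_t)bank[2][row] << 16
          | (uint32_t)bank[3][row] << 24;
}

void store8(uint32_t byte_addr, uint8_t v) {
    bank[byte_addr & 3][byte_addr >> 2] = v;  /* bottom two bits pick the bank */
}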

Do NaN-boxing and tagged pointers have a future on 64-bit platforms?

On common 64-bit architectures like x86-64 and arm64, usually only 48 bits are used for memory addressing, while the remaining bits are copies of bit 47 (which is usually zero for user-space programs). Thus, the upper 16 bits can be used to store additional data like type tags, as long as those bits are masked off before dereferencing. Alternatively, the 48 bits fit into the NaN representation of a 64-bit float. Both techniques are often used by dynamic/interpreted languages.
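For concreteness, the pointer-tagging half of that looks roughly like this (a sketch; the constants and names are illustrative, not from any particular runtime):

#include <stdint.h>

#define TAG_SHIFT 48
#define ADDR_MASK ((1ULL << TAG_SHIFT) - 1)  /* low 48 bits hold the address */

uint64_t box(void *p, uint16_t tag) {
    return (uint64_t)(uintptr_t)p | ((uint64_t)tag << TAG_SHIFT);
}

void *unbox(uint64_t v) {
    /* mask the tag off before dereferencing; assumes a user-space
       pointer with bit 47 clear, so no sign extension is needed */
    return (void *)(uintptr_t)(v & ADDR_MASK);
}

uint16_t tag_of(uint64_t v) {
    return (uint16_t)(v >> TAG_SHIFT);
}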
I've read about Intel 5-level paging, which would extend the address range from 48 to 57 bits, thus significantly reducing the leftover bits and also rendering NaN-boxing impossible. The Linux kernel has already added support for this paging scheme.
Given that 48 bits correspond to 262,144 GiB of memory, we can assume that we won't need the 57-bit range anytime soon on consumer devices like PCs, laptops and phones. Thus one might assume that those devices will remain in 48-bit mode for a long time to come, with the above-mentioned techniques remaining viable, while 57-bit mode will only be used for servers/supercomputers.
Am I correct in making those assumptions? Or are there indicators that even consumer-scale devices will use 57-bit mode in the near future?
Even if memory-mapped persistent storage becomes widespread (NV-DIMM), it'll be a while before consumer PCs have more than 64 TiB or 128 TiB of storage + DRAM. Remember that high-half kernels want half the virtual address space for kernel use, and typically want to direct-map all physical memory into one big contiguous range of virtual addresses, as well as making other mappings in kernel space, I think. E.g. see https://www.kernel.org/doc/Documentation/x86/x86_64/mm.txt for what Linux does.
As you suspect, OSes wouldn't actually enable PML5 on computers that have far less than 256TiB of physical address space. There's no need for that much virtual address space and it has a performance cost (more expensive page-walks from another level of page tables). The page-walk hardware wouldn't always be able to keep the two actually-used top-level entries cached; invalidations of everything on CR3 changes can force flushing. (Page-walk hardware can in general cache upper levels of the radix tree to speed up TLB misses for nearby pages.)

Best way to shuffle 64-bit portions of two __m128i's

I have two __m128is, a and b, that I want to shuffle so that the upper 64 bits of a fall in the lower 64 bits of dst and the lower 64 bits of b fall in the upper 64 bits of dst, i.e.:
dst[ 0:63] = a[64:127]
dst[64:127] = b[0:63]
Equivalent to:
__m128i dst = _mm_unpacklo_epi64(_mm_srli_si128(a, 8), b);
or
__m128i dst = _mm_castpd_si128(_mm_shuffle_pd(_mm_castsi128_pd(a), _mm_castsi128_pd(b), 1));
Is there a better way to do this than the first method? The second one is just one instruction, but the switch to the floating-point SIMD domain is more costly than the extra instruction of the first.
Latency isn't always the worst thing ever. If it's not part of a loop-carried dep-chain, then just use the single instruction.
Also, there might not be any! Agner Fog's microarch doc says he found no extra latency in some cases when using the "wrong" type of shuffle or boolean, on Sandybridge. Blends still have the extra latency. On Haswell, he says there are no extra delays at all for mixing types of shuffle. (pg 140, Data Bypass Delays.)
So go ahead and use shufps, unless you care a lot about your code being fast on Nehalem. Earlier designs (Merom/Conroe and Penryn) didn't have extra bypass delays for using the wrong type of move or shuffle.
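In intrinsics, the shufps version of this shuffle would be something like (a sketch; note the round-trip casts, which generate no instructions):

__m128i dst = _mm_castps_si128(
    _mm_shuffle_ps(_mm_castsi128_ps(a), _mm_castsi128_ps(b),
                   _MM_SHUFFLE(1, 0, 3, 2)));  /* low half from a's high, high half from b's low */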
For AMD, shufps runs in the ivec domain, same as integer shuffles, so it's fine to use it. Like Intel, FP blends run in the FP domain, and thus have no bypass delay for FP data.
If you include multiple asm versions depending on which instruction sets are supported, without going completely nuts about having the optimal version of everything for every CPU like x264 does, you might use wrong-type ops in your version for AVX CPUs, but use multiple instructions in your non-AVX version. Nehalem has large penalties (2 cycle bypass delays for each domain transition), while Sandybridge is 0 or 1 cycle. SnB is the first generation with AVX.
Pre-Nehalem (no SSE4.2) hardware is so old that it's probably not worth tuning a version specifically for it, even though it doesn't have any penalties for "wrong type" shuffles. Nehalem is right on the cusp of being kinda slow, so software running on those systems will have the hardest time operating in real time or not feeling slow. Being bad on Nehalem would thus add to a bad user experience, since that system is already not the fastest.

Moving data from memory to memory in microcontrollers

Why can't we move data directly from one memory location into another?
Pardon me if I am asking a dumb question, but I think this is a real limitation, at least on the processors I've encountered (8085, 8086 and 80386).
I am not really looking for a way to move the data (e.g. using MOVS and the like), but for the actual reason behind this anomaly.
What about MOVS? It moves an 8/16/32-bit value addressed by ESI to the location addressed by EDI.
The basic reason is that most instruction sets allow one register operand and one memory operand, and sticking to this format makes designing the instruction decoder easier. It also makes the execution engine inside the CPU simpler, because an instruction typically issues at most one memory operation, to just one memory location, and at most one register-block read or write.
To do a memory-to-memory instruction directly requires two memory locations to be designated. This is awkward given a register/memory instruction format. Given the performance of the machines, there is little justification for modifying the instruction format just for this.
A hack used by more modern CPUs is to provide some type of block-move instruction, in which the source and destination locations are held in registers (for the x86 these are ESI and EDI respectively). Then an instruction need only designate two registers (or, in the case of the x86, the instruction simply knows which registers to use). That solves the instruction-decoding problem.
The instruction-execution problem is a little harder, but people have lots of transistors. Organizing an indirect read through one register, an indirect write through another, and an increment of both is awkward in silicon, but that just chews up some transistors.
Now you can have an instruction that moves from memory to memory, just as you asked.
One of the other posters noted that for the x86 there are instructions (MOVSB, MOVSW, MOVSD, ...) that do exactly this, one memory byte/word/dword at a time.
Moving a block of memory would be ideal because the CPU can generate high-bandwidth reads and writes. The x86 does this with a REP (repeat) prefix on MOVS- to move a larger block.
But if a single instruction can do this, you have the problem that it might take a long time to execute (how long to move 1 GB? --> millions of clock cycles!), and that ruins the interrupt response time of the CPU.
The x86 solves this by allowing REP MOVS- to be interrupted, with the PC set back to the beginning of the instruction. Because the registers are updated appropriately during the move, the instruction can be interrupted and restarted, giving both a fast block move and high interrupt response rates. More transistors down the tube.
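For reference, a block copy with the string instructions looks something like this (a sketch; src, dst and count are assumed symbols):

MOV ESI, src    ; source address
MOV EDI, dst    ; destination address
MOV ECX, count  ; number of 32-bit words to move
REP MOVSD       ; interruptible: ESI/EDI/ECX track progress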
The RISC guys figured out that all this complexity for a block-move instruction was mostly not worth it. You can code a dumb loop (even on the x86):
copy: MOV EAX,[ESI]   ; load a 32-bit word from the source
ADD ESI,4             ; advance the source pointer
MOV [EDI],EAX         ; store the word at the destination
ADD EDI,4             ; advance the destination pointer
DEC ECX               ; one fewer word left to copy
JNE copy              ; loop until ECX reaches zero
which does the same basic thing as REP MOVS-. Modern CPUs (x86 and others) pretty much execute this so fast (superscalar, etc.) that the bus is just as well utilized as with the custom move instruction, but now you don't need all those wasted transistors (or the corresponding heat).
Most CPU varieties don't allow memory-to-memory moves. Normally the CPU can access only one memory location at a time, which means you need a temporary spot to store the value when moving it (a general-purpose register, usually). If you think about it, moving directly from one memory location to another would require the CPU to access two different spots in RAM simultaneously - that means at least two full memory controllers, and even then, the chances they'd "play nice" enough to access the same RAM would be pretty bad. The chip designers might have been able to pull some tricks to allow direct copies from one RAM chip to another, but that would be a pretty special-application kind of feature that would just add cost and complexity to solve a very uncommon problem.
You might be able to use some special DMA hardware to make it look like memory is being moved without that temporary storage, at least from the perspective of your CPU.
You have one set of address lines, one set of data lines, and a few control lines between the CPU and RAM. You can't physically move directly from memory to memory without a second set of address lines and a whole bunch of complicated logic inside the RAM. Therefore, we have to store it temporarily in a register.
You could make an instruction that does the load and store together and looks like one instruction to the programmer, but there are other considerations like instruction size, non-duplication of effective address calculation logic, pipelining, etc. that make it desirable to keep it more simple.
Memory-memory machines turn out to be slower in general than load-store machines. This was deduced/figured out/invented by the RISC researchers around 1980. So older architectures (VAX, System/360) tend to be memory-memory architectures; newer machines do load-store.
Another interesting variant is stack machines; they seem to always be around as a minority.

Why 24-bit registers?

In my work I deal with various microcontrollers, microprocessors and DSP processors. Many of them have 24-bit registers and counters.
I know how to use them, this is not my question.
My question is: why do they have 24-bit registers? Why not make them 32-bit?
And as far as I know, it is not a problem of size, because the registers are already 32 bits wide, but have a maximum value of 0xFFFFFF.
Does this allow an easier hardware implementation? Faster calculations?
Or is it just "hmmm, let's put in 24-bit registers to make the programmers' job harder"?
My guess is that most DSP applications simply don't need 32 bits. Digital audio most commonly uses 24-bit samples. Implementing 32 bits would require more transistors and thus would result in higher cost.
Why would 32 bits be easier for the programmer?
Also, you state that the registers have a maximum of 0xFFFFFF, which makes them 24-bit by definition, not 32-bit as you suggest.
There is no particular reason for 8/16/32/64 bits. There are 24-bit DSPs, 18-bit PICs, 36-bit PDPs... Each bit costs time, money and power, so having enough bits is good enough; there is no need to overdo it. Just look at the original PCs with 20 address lines, even though memory pointers could be up to 32 bits.
Tagging onto Tomas's answer, some DSPs have a register mode where overflow saturates, locking the value at the highest representable state. If the data is 24-bit and it overflows into a 25th bit, it should lock there, not at the 32-bit rollover.
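In C, that saturating behavior looks roughly like this (a sketch; the function name is mine, and a signed 24-bit range is assumed):

#include <stdint.h>

/* Clamp an intermediate result to the signed 24-bit range, mimicking a
   DSP's saturating accumulator mode. */
int32_t sat24(int64_t x) {
    if (x >  0x7FFFFF) return  0x7FFFFF;   /* lock at the 24-bit maximum */
    if (x < -0x800000) return -0x800000;   /* lock at the 24-bit minimum */
    return (int32_t)x;
}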
For audio you typically want 16-bit output. Since you lose some precision during processing, they pick a reasonable size somewhat bigger than 16 bits, which happens to be 24 bits.
The reason not to go to a full 32 bits is that it would need substantially more hardware, especially for multiplication.
