/usr/bin/time file inputs / outputs - unix

I'm struggling to find any detailed information about exactly what the various outputs of /usr/bin/time -v mean. In particular, I'm confused about the meaning of "file inputs / outputs".
If anyone has some experience with /usr/bin/time, I'd be grateful if you could straighten this out for me.

What is your OS, is it Linux or BSD?
There is some description of the fields in the man page of the time utility (section 1), for example on Linux: http://man7.org/linux/man-pages/man1/time.1.html
But time itself uses some interface to the kernel to get the information, probably wait3 / wait4: http://man7.org/linux/man-pages/man2/wait3.2.html
The data printed by time -v comes from struct rusage, which is described in the getrusage(2) man page: http://man7.org/linux/man-pages/man2/getrusage.2.html
Linux has many fields in rusage, but not all of them are used:
The resource usages are returned in the structure pointed to by usage, which has the following form:

struct rusage {
    struct timeval ru_utime; /* user CPU time used */
    struct timeval ru_stime; /* system CPU time used */
    long ru_maxrss;          /* maximum resident set size */
    long ru_ixrss;           /* integral shared memory size */
    long ru_idrss;           /* integral unshared data size */
    long ru_isrss;           /* integral unshared stack size */
    long ru_minflt;          /* page reclaims (soft page faults) */
    long ru_majflt;          /* page faults (hard page faults) */
    long ru_nswap;           /* swaps */
    long ru_inblock;         /* block input operations */
    long ru_oublock;         /* block output operations */
    long ru_msgsnd;          /* IPC messages sent */
    long ru_msgrcv;          /* IPC messages received */
    long ru_nsignals;        /* signals received */
    long ru_nvcsw;           /* voluntary context switches */
    long ru_nivcsw;          /* involuntary context switches */
};
The getrusage(2) man page (http://man7.org/linux/man-pages/man2/getrusage.2.html) also gives a short description of each field and marks the unmaintained ones:
ru_utime
ru_stime
ru_maxrss (since Linux 2.6.32)
ru_ixrss (unmaintained)
ru_idrss (unmaintained)
ru_isrss (unmaintained)
ru_minflt
ru_majflt
ru_nswap (unmaintained)
ru_inblock (since Linux 2.6.22)
The number of times the filesystem had to perform input.
ru_oublock (since Linux 2.6.22)
The number of times the filesystem had to perform output.
ru_msgsnd (unmaintained)
ru_msgrcv (unmaintained)
ru_nsignals (unmaintained)
ru_nvcsw (since Linux 2.6)
ru_nivcsw (since Linux 2.6)
POSIX 2004 has no exact list of fields to implement, so the set is implementation-specific: http://pubs.opengroup.org/onlinepubs/009695399/functions/getrusage.html
The header shall define the rusage structure that includes at least the following members:
struct timeval ru_utime User time used.
struct timeval ru_stime System time used.
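
To see these counters for yourself, here is a minimal sketch along the lines of what time(1) does, assuming Linux/glibc and the non-standard wait4(2) (error handling omitted; run it as ./a.out <command> [args]):

#include <stdio.h>
#include <sys/resource.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
    pid_t pid = fork();
    if (pid == 0) {
        /* Child: run the command given on our command line. */
        execvp(argv[1], &argv[1]);
        _exit(127);
    }

    /* Parent: wait4() fills in the child's rusage, just as time does. */
    int status;
    struct rusage ru;
    wait4(pid, &status, 0, &ru);

    /* ru_inblock / ru_oublock are what time -v reports as
       "File system inputs" / "File system outputs". */
    printf("File system inputs:  %ld\n", ru.ru_inblock);
    printf("File system outputs: %ld\n", ru.ru_oublock);
    return 0;
}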

Related

XC8 builds font tables from top ROM

I wrote a bare-bones program template in XC8 (1.37) that I use to develop and test new GLCD functions for the 18F family. Programming is done via a PICkit3. Since I need to quickly reprogram the code several times, it is really important that programming is as fast as possible.
Typically, the code size is around 2K and it takes less than 10 seconds to program.
Everything is fine until I must use a font table, defined as:
const char font8[] = {....
Now, with just $400 bytes added, the compiler places the table at the end of ROM, and programming the full 64K of memory takes more than 1 minute.
Is there any way to avoid this?
I tried to manually limit the memory range in the MPLABX options, but this is annoying and a little unsafe (sometimes part of the code is truncated).
A while back I had to write some code for emissions testing, where I needed to copy data between extreme ends of RAM. To do that I needed to specify the exact memory addresses. You can also use the C extension __at() construct. http://ww1.microchip.com/downloads/en/DeviceDoc/50002053F.pdf#page=27
int scanMode __at(0x200);
const char keys[] __at(123) = { 'r', 's', 'u', 'd' };
int modify(int x) __at(0x1000) {
    return x * 2 + 3;
}
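
Applied to the font-table problem, one option might be to pin the table just above the end of your code, so that only a small contiguous range of ROM has to be programmed. A sketch (the address 0x800 and the glyph bytes are only placeholders; pick an address just past your ~2K of code):

/* Hypothetical placement: keep the table near the code instead of
   letting the linker put it at the top of ROM. */
const char font8[] __at(0x800) = {
    0x00, 0x3E, 0x51, 0x49, 0x45, 0x3E, /* ... rest of the glyph data */
};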

Terminating a process specially in QEMU

Example code:
#include <stdio.h>

int main() {
    printf("Hello world begin\n");
    // New instruction (unused opcode 0f d0)
    __asm__ __volatile__ (".byte 0x0f\n\t"
                          ".byte 0xd0\n\t"
                          :);
    printf("Hello world end\n");
    return 0;
}
My problem [TLDR] :
Make QEMU treat my new instruction like return 0; [sys_exit]
My purpose:
I am trying to customize QEMU's emulation of the x86 ISA to log some data (like the number of ring transitions) during CPU emulation between two points. Assume for simplicity that the start is after the first printf (or some syscall), and the end is marked by a new instruction I have added (say opcode 0f d0, which is unused in the Intel ISA). Its purpose is simply to print that log information and terminate the process right there, without executing the rest of the program. In other words, the behavior should be identical to placing return 0; immediately after my new instruction.
In my QEMU helper logic, I tried cpu_loop_exit() and various other combinations, but they all seem to loop infinitely. Can anyone point out how to achieve the behavior I want (modifying only QEMU, not my example code)?
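
One approach that should work in linux-user (user-mode) emulation: have the helper behind your 0f d0 opcode print its statistics and then call exit() directly. cpu_loop_exit() alone keeps looping because it merely unwinds back into the emulation loop rather than ending the guest. A rough sketch, with the helper name and the counter being hypothetical (the decoder hook for 0f d0 is assumed to exist already):

#include <stdio.h>
#include <stdlib.h>

extern unsigned long my_ring_transition_count; /* hypothetical counter */

/* Hypothetical helper wired to opcode 0f d0 in the x86 decoder. */
void helper_log_and_exit(void)
{
    /* Dump whatever was collected between the two points. */
    fprintf(stderr, "ring transitions: %lu\n", my_ring_transition_count);

    /* In user-mode emulation QEMU and the guest share one process,
       so a plain exit() ends the guest exactly like return 0 would. */
    exit(0);
}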

Why doesn't OpenCL Nvidia compiler (nvcc) use the registers twice?

I'm doing a small OpenCL benchmark using Nvidia drivers; my kernel performs 1024 fused multiply-adds and stores the result in an array:
#define FLOPS_MACRO_1(x) { (x) = (x) * 0.99f + 10.f; } // Multiply-add
#define FLOPS_MACRO_2(x) { FLOPS_MACRO_1(x) FLOPS_MACRO_1(x) }
#define FLOPS_MACRO_4(x) { FLOPS_MACRO_2(x) FLOPS_MACRO_2(x) }
#define FLOPS_MACRO_8(x) { FLOPS_MACRO_4(x) FLOPS_MACRO_4(x) }
// more recursive macros ...
#define FLOPS_MACRO_1024(x) { FLOPS_MACRO_512(x) FLOPS_MACRO_512(x) }
__kernel void ocl_Kernel_FLOPS(int iNbElts, __global float *pf)
{
    for (unsigned i = get_global_id(0); i < iNbElts; i += get_global_size(0))
    {
        float f = (float) i;
        FLOPS_MACRO_1024(f)
        pf[i] = f;
    }
}
But when I look at the generated PTX, I see this:
.entry ocl_Kernel_FLOPS(
    .param .u32 ocl_Kernel_FLOPS_param_0,
    .param .u32 .ptr .global .align 4 ocl_Kernel_FLOPS_param_1
)
{
    .reg .f32 %f<1026>; // 1026 float registers!
    .reg .pred %p<3>;
    .reg .s32 %r<19>;
    ld.param.u32 %r1, [ocl_Kernel_FLOPS_param_0];
    // some more code unrelated to the problem
    // ...
BB1_1:
    and.b32 %r13, %r18, 65535;
    cvt.rn.f32.u32 %f1, %r13;
    fma.rn.f32 %f2, %f1, 0f3F7D70A4, 0f41200000;
    fma.rn.f32 %f3, %f2, 0f3F7D70A4, 0f41200000;
    fma.rn.f32 %f4, %f3, 0f3F7D70A4, 0f41200000;
    fma.rn.f32 %f5, %f4, 0f3F7D70A4, 0f41200000;
    // etc
    // ...
If I am correct, the PTX uses 1026 float registers to perform the 1024 operations and never reuses a register, even though it could perform all the multiply-adds with only 2 registers. 1026 is far above the maximum number of registers a thread is allowed to have (according to the specs), so I guess this ends up spilling to memory.
Is it a compiler bug, or am I totally missing something?
I am using nvcc version 6.5 on a Quadro K2000 GPU.
EDIT
Actually I did miss something in the specs:
"Since PTX supports virtual registers, it is quite common for a compiler frontend to generate
a large number of register names. Rather than require explicit declaration of every name,
PTX supports a syntax for creating a set of variables having a common prefix string
appended with integer suffixes. For example, suppose a program uses a large number, say
one hundred, of .b32 variables, named %r0, %r1, ..., %r99"
The PTX file format is intended to describe a virtual machine and instruction set architecture:
PTX defines a virtual machine and ISA for general purpose parallel thread execution. PTX programs are translated at install time to the target hardware instruction set. The PTX-to-GPU translator and driver enable NVIDIA GPUs to be used as programmable parallel computers.
So the PTX output that you are obtaining is not a form of "GPU assembler". It is only an intermediate representation, intended to be capable of describing virtually any form of parallel computation.
The PTX representation is then compiled into an actual binary for the respective target GPU. This makes it possible to abstract from the actual architecture; specifically, regarding your example, it should be possible to use the same PTX representation of a program regardless of the number of registers that are available on a specific target machine. The 1026 "registers" that you see are virtual registers, and in the end they may be mapped to the (few) real hardware registers that are actually available. You may add the --ptxas-options=-v argument to NVCC during compilation to obtain additional information about the register usage.
(This is roughly the same idea as the one behind LLVM: to have a representation that can be optimized and reasoned about, abstracting both from the original source code and from the actual target architecture.)
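
Since an OpenCL program goes through the driver rather than nvcc, one way to inspect the intermediate code is to pull the program "binary" back out of the driver; on NVIDIA's OpenCL implementation that binary is the PTX text. A minimal sketch for a program built for a single device (error handling omitted):

#include <stdio.h>
#include <stdlib.h>
#include <CL/cl.h>

/* Dump the "binary" of a cl_program that was built with
   clBuildProgram() for exactly one device; on NVIDIA this is PTX. */
static void dump_ptx(cl_program program)
{
    size_t size;
    clGetProgramInfo(program, CL_PROGRAM_BINARY_SIZES,
                     sizeof(size), &size, NULL);

    /* CL_PROGRAM_BINARIES expects an array of buffer pointers,
       one per device; with one device, &ptx is that array. */
    unsigned char *ptx = malloc(size);
    clGetProgramInfo(program, CL_PROGRAM_BINARIES,
                     sizeof(ptx), &ptx, NULL);

    fwrite(ptx, 1, size, stdout);
    free(ptx);
}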

Queued kernels slower than expected on AMD gpus only

I am performing a benchmark as shown below:
CHECK( context = clCreateContext(props, 1, &device, NULL, NULL, &_err); );
CHECK( queue = clCreateCommandQueue(context, device, 0, &_err); );
#define SYNC() clFinish(queue)
#define LAUNCH(glob, loc, kernel) OCL(clEnqueueNDRangeKernel(queue, kernel, 2,\
NULL, glob, loc,\
0, NULL, NULL))
/* Build program, set arguments over here */

START;
for (int i = 0; i < iter; i++) {
    LAUNCH(global, local, plus_kernel);
}
SYNC();
STOP;
printf("Time taken (plus) : %lf\n", uSec / iter);

START;
for (int i = 0; i < iter; i++) {
    LAUNCH(global, local, minus_kernel);
}
SYNC();
STOP;
printf("Time taken (minus): %lf\n", uSec / iter);

START;
for (int i = 0; i < iter; i++) {
    LAUNCH(global, local, plus_kernel);
    LAUNCH(global, local, minus_kernel);
}
SYNC();
STOP;
printf("Time taken (both) : %lf\n", uSec / iter);
The results look weird:
Time taken (plus) : 31.450000
Time taken (minus): 28.120000
Time taken (both) : 2256.380000
START and STOP are just macros that start and stop a timer.
I am not sure why queuing up the kernels slows them down (and only on AMD GPUs)!
EDIT I am using Radeon 7970
EDIT Both kernels are operating on independent memory. Also here is the system information.
OS: Ubuntu 11.10
fglrxinfo:
display: :0 screen: 0
OpenGL vendor string: Advanced Micro Devices, Inc.
OpenGL renderer string: AMD Radeon HD 7900 Series
OpenGL version string: 4.2.11762 Compatibility Profile Context
I think the answer has to do with caching of data on newer GPUs (specifically the Radeon 7970, which uses the Graphics Core Next (GCN) architecture).
One of the advantages of this architecture is its caching capability (somewhat close to CPU caching at this point). If you perform calls like this:
PLUS
PLUS
PLUS
....
then the memory stays resident in the inner caches of the GPU. On the other hand, if you make calls like this:
PLUS
MINUS
PLUS
MINUS
...
where the two kernels have different memory objects associated with them, then the data is evicted from the caches on each CU and has to be brought back in from the very sluggish global memory.
Two easy ways to test if this is the case:
Run only Pluses with varying numbers of iterations. As the number of iterations increases, the average time will go down, because the cost of the first run (which brings the data in) is amortized. Also, you should notice that all calls after the first are roughly equal in cost.
Make the Plus and Minus kernels run on the same memory objects (see the sketch below). If the slowdown is caused by the caching of memory objects, then the overall run time should be the average of the individual running times of PLUS and MINUS (depending perhaps on experiment 1).
Let me know if you find out if this is actually the case!
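
For the second test, a minimal sketch of what "same memory objects" means here, reusing the LAUNCH/SYNC macros from the question (buf, plus_kernel, and minus_kernel are assumed to exist already, with the buffer as the first kernel argument; error checks omitted):

/* Point both kernels at the same cl_mem object, so the MINUS launch
   reuses whatever the PLUS launch left in the CU caches. */
clSetKernelArg(plus_kernel,  0, sizeof(cl_mem), &buf);
clSetKernelArg(minus_kernel, 0, sizeof(cl_mem), &buf);

for (int i = 0; i < iter; i++) {
    LAUNCH(global, local, plus_kernel);
    LAUNCH(global, local, minus_kernel);
}
SYNC();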

What's the equivalent of rdtsc opcode for PPC?

I have an assembly program that has the following code.
This code compiles fine for an Intel processor, but when I use a PPC (cross-)compiler, I get an error that the opcode is not recognized. I am trying to find out whether there is an equivalent opcode for the PPC architecture.
.file "assembly.s"
.text
.globl func64
.type func64,#function
func64:
rdtsc
ret
.size func64,.Lfe1-func64
.globl func
.type func,#function
func:
rdtsc
ret
PowerPC includes a "time base" register which is incremented regularly (although perhaps not at each clock cycle; it depends on the actual hardware and the operating system). The TB register is a 64-bit value, read as two 32-bit halves with mftb (low half) and mftbu (high half). The four least significant bits of TB are somewhat unreliable (they increment monotonically, but not necessarily at a fixed rate).
Some of the older PowerPC processors do not have the TB register (but the OS might emulate it, probably with questionable accuracy); however, the 603e already has it, so it is a fair bet that most if not all PowerPC systems actually in production have it. There is also an "alternate time base register".
For details, see the Power ISA specification, available from the power.org Web site. At the time of writing this answer, the current version was 2.06B, and the TB register and opcodes were documented on pages 703 to 706.
When you need a 64-bit value on a 32-bit architecture (not sure how it works on 64-bit) and you read the TB register, you can run into the problem of the lower half wrapping from 0xffffffff to 0. Granted, this doesn't happen often, but you can be sure it will happen when it will do the most damage ;)
I recommend you read the upper half first, then the lower, and finally the upper again. Compare the two upper values: if they are equal, no problemo. If they differ (the first should be one less than the last), you have to look at the lower half to see which upper it should be paired with: if its highest bit is set, it should be paired with the first; otherwise, with the last.
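
In C with GCC-style inline assembly, that read sequence might look like the sketch below; this variant simply retries when the upper half changed instead of patching the halves together, which is a common form of the same idea:

/* Read the 64-bit time base on 32-bit PowerPC. */
static unsigned long long read_timebase(void)
{
    unsigned int hi, lo, hi2;
    do {
        __asm__ __volatile__("mftbu %0" : "=r" (hi));  /* upper half  */
        __asm__ __volatile__("mftb  %0" : "=r" (lo));  /* lower half  */
        __asm__ __volatile__("mftbu %0" : "=r" (hi2)); /* upper again */
    } while (hi != hi2); /* retry if the lower half wrapped in between */
    return ((unsigned long long)hi << 32) | lo;
}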
Apple has three versions of mach_absolute_time() for the different types of code:
32-bit
64-bit kernel, 32-bit app
64-bit kernel, 64-bit app
Inspired by a comment from Peter Cordes and the disassembly of clang's __builtin_readcyclecounter:
mfspr 3, 268
blr
For gcc you can do the following (note that the operand must be the substituted %0 rather than a hard-coded r3, since the compiler is free to pick any register for rval):

unsigned long long rdtsc(){
    unsigned long long rval;
    /* SPR 268 is the time base; this assumes a 64-bit PPC target,
       where a single mfspr reads the full 64-bit value. */
    __asm__ __volatile__("mfspr %0, 268" : "=r" (rval));
    return rval;
}
Or For clang:
unsigned long long readTSC() {
    // _mm_lfence(); // optionally wait for earlier insns to retire before reading the clock
    return __builtin_readcyclecounter();
}
