How to run Klaus Dormann's 6502 test suite on real hardware with separate ROM and RAM (self-modifying code)

I would like to run the full 6502 test suite by Klaus Dormann to test my Kansas Lava 6502 implementation. However, the code uses self-modification (see all uses of range_adr), which, while trivial to implement in an emulator, doesn't bode well for a hardware implementation: the program image needs to be stored in ROM, so the write-backs will be silently dropped by whatever logic routes each access to the ROM- or RAM-backed part of the address space.
The same problem, of course, applies both to synthesizing it to a real FPGA and to running it in a simulator (either the low-level VHDL one or the high-level Kansas Lava one).
Is there a way to run the test suite without doing a lengthy (in terms of cycles) dance of suspending the CPU, copying the program from some unaddressable ROM into an all-RAM memory byte by byte, and then initializing the CPU and letting it run? I'd prefer not to do this because simulating these extra cycles at startup would slow down running the test considerably.

Knee-jerk observations:
Despite coming as a 64 KB image, the test contains just 14,093 bytes of actual content, from $0000 up to $370d, then a padded fill of $ffs up to the three vectors at $fffa–$ffff. So you'd need to copy at most 14,099 bytes rather than the prima facie 65,536.
Having set up that very test suite in an emulator I wrote yesterday (no, really), the full set of touched addresses — using [x, y] to denote the closed range, i.e. including both x and y — is:
[000a, 0012], [0100, 0101], [01f9, 01ff] (i.e. the stack and zero pages);
0200;
[0203, 0207];
04a8;
2cbb;
2cdc;
2eb1;
2ed2;
30a7;
30c8;
33f2;
3409;
353b; and
3552.
From the .lst version of the program, that means all you need to move are the variables with labels:
test_case;
ada2;
sba2;
range_adr;
... and either move or remove the routines that:
test AND immediate from 2cac down to 2cec;
test EOR immediate from 2ea2 to 2ee2;
test ORA immediate from 3098 to 30d8;
test decimal ADC/SBC immediate from 33e7 down to 3414 (specifically to include chkdadi and chksbi);
test binary ADC/SBC immediate from 3531 down to 355d.
All the immediate tests self-modify the operand. If you're happy leaving that one addressing mode untested, then it shouldn't be too troublesome.
So, I guess, edit those tests out of the original file, and you can safely relocate range_adr to the middle of the stack page if my simulation is accurate.
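For illustration, here is a minimal C sketch of the kind of address decode this implies, as one might use in a testbench or high-level simulator: the unmodified image stays in ROM and only the addresses listed above get a small writable overlay. The names (mem_read, mem_write, is_ram_backed) and the exact grouping of the ranges are mine, not from the test suite.

#include <stdint.h>
#include <stdbool.h>

static uint8_t rom[0x10000];    /* unmodified test image, loaded once          */
static uint8_t ram[0x10000];    /* overlay; only a handful of bytes are used   */
static bool    dirty[0x10000];  /* which bytes have been written               */

/* true for the addresses observed to be written by the suite */
static bool is_ram_backed(uint16_t a)
{
    return (a >= 0x000a && a <= 0x0012) ||   /* zero-page variables            */
           (a >= 0x0100 && a <= 0x01ff) ||   /* stack page (generalised)       */
           (a >= 0x0200 && a <= 0x0207) ||   /* $0200 and [$0203, $0207]       */
           a == 0x04a8 ||
           a == 0x2cbb || a == 0x2cdc ||     /* AND # operand                  */
           a == 0x2eb1 || a == 0x2ed2 ||     /* EOR # operand                  */
           a == 0x30a7 || a == 0x30c8 ||     /* ORA # operand                  */
           a == 0x33f2 || a == 0x3409 ||     /* decimal ADC/SBC # operands     */
           a == 0x353b || a == 0x3552;       /* binary ADC/SBC # operands      */
}

uint8_t mem_read(uint16_t a)
{
    return dirty[a] ? ram[a] : rom[a];
}

void mem_write(uint16_t a, uint8_t v)
{
    if (is_ram_backed(a)) { ram[a] = v; dirty[a] = true; }
    /* any other write is dropped, exactly as it would be on real ROM */
}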

Related

Why is FastR (i.e. the GraalVM version of R) 10x *slower* compared to normal R despite Oracle's claim of 40x *faster*?

Oracle claims that its GraalVM implementation of R (called "FastR") is up to 40x faster than normal R (https://www.graalvm.org/r/). However, I ran this super simple (but realistic) 4-line test program, and not only was GraalVM/FastR not 40x faster, it was actually 10x SLOWER!
x <- 1:300000/300000
mu <- exp(-400*(x-0.6)^2)+
  5*exp(-500*(x-0.75)^2)/3+2*exp(-500*(x-0.9)^2)
y <- mu+0.5*rnorm(300000)
t1 <- system.time(fit1 <- smooth.spline(x,y,spar=0.6))
t1
In FASTR, t1 returns this value:
user system elapsed
0.870 0.012 0.901
While in the original normal R, I get this result:
user system elapsed
0.112 0.000 0.113
As you can see, FastR is super slow even for this simple case (i.e. 4 lines of code, no extra/special libraries imported, etc.). I tested this on a 16-core VM on Google Cloud. Thoughts? (FYI: I did a quick peek at the smooth.spline code, and it does call Fortran, but according to the Oracle marketing site, GraalVM/FastR is faster than even Fortran-R code.)
====================================
EDIT:
Per the comments from Ben Bolker and user438383 below, I modified the code to include a for loop so that the code ran for much longer and I had time to monitor CPU usage. The modified code is below:
x <- 1:300000/300000
mu <- exp(-400*(x-0.6)^2)+
  5*exp(-500*(x-0.75)^2)/3+2*exp(-500*(x-0.9)^2)
y <- mu+0.5*rnorm(300000)
forloopfunction <- function(xTrain, yTrain) {
  for (x in 1:100) {
    smooth.spline(xTrain, yTrain, spar=0.6)
  }
}
t1 <- system.time(fit1 <- forloopfunction(x, y))
t1
Now, the normal R returns this for t1:
user system elapsed
19.665 0.008 19.667
while FastR returns this:
user system elapsed
76.570 0.210 77.918
So, now, FastR is only 4x slower, but that's still considerably slower. (I would be OK with a 5% to even 10% difference, but that's a 400% difference.) Moreover, I checked the CPU usage. Normal R used only 1 core (at 100%) for the entirety of the 19 seconds. However, surprisingly, FastR used between 100% and 300% of CPU (i.e. between 1 full core and 3 full cores) during the ~78 seconds. So, I think it's fairly reasonable to conclude that, at least for this test (which happens to be a realistic test for my very simple scenario), FastR is at least 4x slower while consuming 1 to 3 CPU cores.

Particularly given that I'm not importing any special libraries which the FastR team may not have had time to properly analyze (i.e. I'm using just vanilla R code that ships with R), I think that there's something not quite right with the FastR implementation, at least when it comes to speed. (I haven't tested accuracy, but that's now moot, I think.) Has anyone else experienced anything similar, or does anyone know of any "magic" configuration that one needs to apply to FastR to get its claimed speeds (or at least similar, i.e. +-5%, speeds to normal R)? (Or maybe there's some known limitation of FastR that I may be able to work around, i.e. don't use the normal Fortran binaries but use these special ones, etc.)
TL;DR: your example is indeed not the best use case for FastR, because it spends most of its time in R builtins and Fortran code. There is no reason for it to be slower on FastR, though, and we will work on fixing that. FastR may still be useful for your application overall, or just for some selected algorithms that run slowly on GNU-R but would be a good fit for FastR (loopy, "scalar" code; see the FastRCluster package).
As others have mentioned, when it comes to micro benchmarks one needs to repeat the benchmark multiple times to allow the system to warm up. This is important in any case, but more so for systems that rely on dynamic compilation, like FastR.
Dynamic just-in-time compilation works by first interpreting the program while recording a profile of the execution, i.e., learning how the program executes, and only then compiling the program using this knowledge to optimize it better(*). In the case of dynamic languages like R, this can be very beneficial, because we can observe types and other dynamic behavior that is hard, if not impossible, to determine statically without actually running the program.
It should now be clear why FastR needs a few iterations to show the best performance it can achieve. It is true that the interpretation mode of FastR has not been optimized very much, so the first few iterations are actually slower than GNU-R. This is not an inherent limitation of the technology that FastR is based on, but a tradeoff of where we put our resources. Our priority in FastR has been peak performance, i.e., after a sufficient warm-up, for micro benchmarks or for applications that run for a long enough time.
To your concrete example. I could also reproduce the issue and I analyzed it by running the program with builtin CPU sampler:
$GRAALVM_HOME/bin/Rscript --cpusampler --cpusampler.Delay=20000 --engine.TraceCompilation example.R
...
-----------------------------------------------------------------------------------------------------------
Thread[main,5,main]
Name || Total Time || Self Time || Location
-----------------------------------------------------------------------------------------------------------
order || 2190ms 81.4% || 2190ms 81.4% || order.r~1-42:0-1567
which || 70ms 2.6% || 70ms 2.6% || which.r~1-6:0-194
ifelse || 140ms 5.2% || 70ms 2.6% || ifelse.r~1-34:0-1109
...
--cpusampler.Delay=20000 delays the start of sampling by 20 seconds
--engine.TraceCompilation prints basic info about the JIT compilation
when the program finishes, it prints the table from CPU sampler
(example.R runs the micro benchmark in a loop)
One observation is that the Fortran routine called from smooth.spline is not to blame here. That makes sense, because FastR runs the very same native Fortran code as GNU-R. FastR does have to convert the data to native memory, but that is probably a small cost compared to the computation itself. Also, the transition between native and R code is in general more expensive on FastR, but here it does not play a role.
So the problem here seems to be the builtin function order. In GNU-R, builtin functions are implemented in C; they basically do a big switch on the type of the input (integer/real/...) and then just execute highly optimized C code doing the work on a plain C integer/double/... array. That is already the most effective thing one can do, and FastR cannot beat that, but there is no reason for it not to be as fast. Indeed, it turns out that there is a performance bug in FastR, and the fix is on its way to master. Thank you for bringing our attention to it.
Other points raised:
but according to the Oracle marketing site, GraalVM/FastR is faster than even Fortran-R code
YMMV. The concrete benchmark presented on our website does spend a considerable amount of time in R code, so the overhead of the R<->native transition does not skew the result as much. The best results come from translating the Fortran code to R, making the whole thing a pure R program. This shows that FastR can run the same algorithm in R as fast as, or quite close to, Fortran, and that is, performance-wise, the main benefit of FastR. There is no free lunch. Warm-up time and the cost of the R<->native transition are currently the price to pay.
FastR used between 100% and 300% of CPU usage
This is due to JIT compilations running on background threads. Again, no free lunch.
To summarize:
FastR can run R code faster by using dynamic just-in-time compilation and optimizing chunks of R code (functions, or possibly multiple functions inlined into one compilation unit) to the point that it can get close to or even match equivalent native code, i.e., significantly faster than GNU-R. This matters for "scalar" R code, i.e., code with loops. For code that spends the majority of its time in builtin R functions, like, e.g., sum((x - mean(x))^2) for large x, this doesn't gain that much, because that code already spends most of its time in optimized native code even on GNU-R.
What FastR cannot do is beat GNU-R on the execution of a single R builtin function, which is likely to be already highly optimized C code in GNU-R. For individual builtins we may beat GNU-R, because we happen to choose a slightly better algorithm or GNU-R has a performance bug somewhere, or it can be the other way around, like in this case.
What FastR also cannot do is speed up native code, like the Fortran routines that some R code may call. FastR runs the very same native code. On top of that, the transition between native and R code is more costly in FastR, so programs doing this transition too often may end up being slower on FastR.
Note: what FastR can do and is a work-in-progress is to run LLVM bitcode instead of the native code. GraalVM supports execution of LLVM bitcode and can optimize it together with other languages, which removes the cost of the R<->native transition and even gives more power to the compiler to optimize across this boundary.
Note: you can use FastR via the cluster package interface to execute only parts of your application.
(*) the first profiling tier may also be compiled, which gives different tradeoffs

Reusing FFTW wisdom on clusters

I'm running distributed MPI programs on clusters using multiple nodes, where I make use of the MPI FFT's of FFTW. To save time I reuse wisdom from one run to the next. To generate this wisdom, FFTW experiments with a lot of different algorithms and what not for the given problem. I am worried that because I am working on a cluster, the best solution stored as wisdom for one set of CPUs/nodes may not be the best solution for some other set of CPUs/nodes performing the same task, and so I should not reuse wisdom unless I am running on exactly the same CPUs/nodes as the run where the wisdom was gathered.
Is this correct, or is the wisdom somehow completely indifferent to the physical hardware on which it is generated?
If your cluster is homogeneous, the saved FFTW plans likely make sense, though the way the processes are connected may affect optimal plans for MPI-related operations. But if your cluster is not homogeneous, saving the FFTW plan can be suboptimal, and problems related to load balance could prove hard to solve.
Taking a look at the wisdom files produced by fftw and fftw_mpi for a 2D c2c transform, I can see additional lines likely related to phases like transposition, where MPI communications are required, such as:
(fftw_mpi_transpose_pairwise_register 0 #x1040 #x1040 #x0 #x394c59f5 #xf7d5729e #xe8cf4383 #xce624769)
Indeed, there are different algorithms for transposing the 2D (or 3D) array: in the mpi folder of the FFTW source, the files transpose-pairwise.c, transpose-alltoall.c and transpose-recurse.c implement these algorithms. When the flags FFTW_MEASURE or FFTW_EXHAUSTIVE are set, these algorithms are run to select the fastest, as stated here. The result might depend on the topology of the network of processes (how many processes on each node? how are these nodes connected?). If the optimal plan depends on where the processes are running and on the network topology, using the wisdom utility will not be decisive. Otherwise, using the wisdom feature can save some time as the plan is built.
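If the plans do turn out to be portable across runs, re-loading the saved wisdom before planning is straightforward. Here is a minimal C sketch, assuming the wisdom was previously written to wisdommpi.txt as in the export snippet further down:

#include <mpi.h>
#include <fftw3-mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    fftw_mpi_init();

    /* rank 0 reads the wisdom saved by a previous run, then it is
       broadcast to all processes before any plan is created */
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0) fftw_import_wisdom_from_filename("wisdommpi.txt");
    fftw_mpi_broadcast_wisdom(MPI_COMM_WORLD);

    /* ... create the plans (FFTW_MEASURE / FFTW_EXHAUSTIVE) and run the FFTs ... */

    fftw_mpi_cleanup();
    MPI_Finalize();
    return 0;
}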
To test whether the optimal plan changed, you can perform a couple of runs and save the resulting plan in files: a reproducibility test!
int rank;
MPI_Comm_rank(MPI_COMM_WORLD, &rank);

/* gather the wisdom of all processes on rank 0 and save it in a single file */
fftw_mpi_gather_wisdom(MPI_COMM_WORLD);
if (rank == 0) fftw_export_wisdom_to_filename("wisdommpi.txt");

/* also save the plan of each process! Depending on the file system of the
   cluster, performing communications can be required */
char filename[42];
sprintf(filename, "wisdom%d.txt", rank);
fftw_export_wisdom_to_filename(filename);
Finally, to compare the produced wisdom files, try in a bash script:
for filename in wis*.txt; do
    for filename2 in wis*.txt; do
        echo "."
        if grep -Fqvf "$filename" "$filename2"; then
            echo "$filename"
            echo "$filename2"
            echo "There are lines in $filename2 that do not occur in $filename."
        fi
    done
done
This script checks that all lines in each file are also present in the other files, following Check if all lines from one file are present somewhere in another file.
On my personal computer, using mpirun -np 4 main, all wisdom files are identical except for a permutation of lines.
If the files are different from one run to another, it could be attributed to the communication pattern between processes... or to the sequential performance of the DFT on each process. The piece of code above saves the optimal plan for each process. If lines related to sequential operations, without fftw_mpi in them, such as:
(fftw_codelet_n1fv_10_sse2 0 #x1440 #x1440 #x0 #xa9be7eee #x53354c26 #xc32b0044 #xb92f3bfd)
become different, it is a clue that the optimal sequential algorithm changes from one process to the other. In this case, the wall clock time of the sequential operations may also differ from one process to another. Hence, checking the load balance between processes could be instructive. As noted in the FFTW documentation about load balancing:
Load balancing is especially difficult when you are parallelizing over heterogeneous machines; ... FFTW does not deal with this problem, however—it assumes that your processes run on hardware of comparable speed, and that the goal is therefore to divide the problem as equally as possible.
This assumption is consistent with the operation performed by fftw_mpi_gather_wisdom():
(If the plans created for the same problem by different processes are not the same, fftw_mpi_gather_wisdom will arbitrarily choose one of the plans.) Both of these functions may result in suboptimal plans for different processes if the processes are running on non-identical hardware...
The transpose operation in 2D and 3D FFTs requires a lot of communication: one of the implementations is a call to MPI_Alltoall involving almost the whole array. Hence, good connectivity between nodes (InfiniBand...) can prove useful.
Let us know if you found different optimal plans from one run to another and how these plans differ!

How can I achieve something similar to Xilinx' RLOC in Altera FPGAs?

I have so far not found any way to do anything similar to Xilinx' RLOC constraints for Altera FPGAs.
Does anyone know a way to do this?
For example, place two FFs in the same or adjacent LABs.
So, to answer my own question: after consulting some Altera manuals and some trial and error, I found that this pretty much does what I want.
module synchronizer (input  wire dat_i,
                     input  wire out_clk,
                     output wire dat_o);

    // Identify these registers as a 2-deep synchronizer chain for metastability analysis
    (* altera_attribute = "-name SYNCHRONIZATION_REGISTER_CHAIN_LENGTH 2; -name SYNCHRONIZER_IDENTIFICATION \"FORCED IF ASYNCHRONOUS\"" *)
    logic [1:0] out_sync_reg;

    always_ff @(posedge out_clk) begin
        out_sync_reg <= {out_sync_reg[0], dat_i};
    end

    assign dat_o = out_sync_reg[1];
endmodule
I tested this by setting global synchronizer detection to off and observed that TimeQuest found and analysed the correct paths for metastability.
This works well even when dat_i is latched by clk_a and out_clk is driven by clk_b and where the two clocks are set as:
set_clock_groups -asynchronous -group {clk_a}
set_clock_groups -asynchronous -group {clk_b}
Thus creating false paths between all connections from registers clocked by clk_a to registers clocked by clk_b.
set_max_delay/set_min_delay won't work, since they are ignored (as stated by Altera) if the two clocks are in different asynchronous clock groups.
Altera does not support RLOC-style constraints. Apparently this has something to do with the underlying physical architecture. I believe they over-provision ALMs and fuse out columns during chip test to improve yield, therefore relative location constraints won't translate as expected to a given physical device.
If you are worried about a synchroniser chain placement, you can enable synchroniser chain detection using SYNCHRONIZATION_REGISTER_CHAIN_LENGTH and SYNCHRONIZER_IDENTIFICATION QSF settings (see also this answer).
If you just want to ensure particular timing properties then use set_max_delay and set_min_delay timing constraints on your path.

Give CPU more power to plot in Octave

I made this function in Octave which plots fractals. Now, it takes a long time to plot all the points I've calculated. I've made my function as efficient as possible, the only way I think I can make it plot faster is by having my CPU completely focus itself on the function or telling it somehow it should focus on my plot.
Is there a way I can do this or is this really the limit?
To determine how much CPU is being consumed by your plot, run your plot, and in a separate window (assuming you're on Linux/Unix), run the top command. (For Windows, launch the Task Manager and switch to the 'Processes' tab, then click on the CPU header to sort by CPU.)
(The rollover description for Octave on the tag on your question says that Octave is a scripting language. I would expect it's calling gnuplot to create the plots. Look for this as the highest CPU consumer).
You should see that your Octave/gnuplot command is near the top of the list, and in top there is a column labeled %CPU (or similar). This will show you how much CPU that process is consuming.
I would expect to see that process consuming 95% or more CPU. If you see a significantly lower number, then you need to check the processes below it: are they consuming the remaining CPU (some sort of virus scan (on a PC), or a DB or server)? If a competing program is the problem, then you'll have to decide if you can wait until it/they are finished, OR whether you can kill them and restart later. (For Linux, use kill -15 pid first, and only use kill -9 pid as a last resort. Search here for articles on the correct order of signals to try.)
If there are no competing processes AND Octave/gnuplot is using less than 95%, then you'll have to find other tools to see what is holding up the process. (This is unlikely; it's possible some part of your overall plotting process is either disk-I/O or network-I/O bound.)
So, it depends on the timescale you're currently experiencing versus the time you "want" to experience.
Does your system have multiple CPUs? Then you'll need to study the octave/gnuplot documentation to see if it supports a switch to indicate "use $n available CPUs for processing". (Or find a plotting program that does support using $n multiple CPUs).
Realistically, if your process now takes 10 minutes and you can, by eliminating competing processes, go from 60% to 90% CPU, that is a 50% increase in CPU but will only reduce the run time to roughly 6-7 minutes. Being able to divide the task over 5-10-?? CPUs will be the most certain path to faster turn-around times.
So, to go further with this, you'll need to edit your question with some data points. How long is your plot taking? How big is the file it's processing? Is there something especially math-intensive about the plotting you're doing? Could a pre-processed data file speed up the calcs? Also, if the results of top don't show gnuplot running at 99% CPU, then edit your posting to show the top output; that will help us understand your problem. (Paste in your top output, select it with your mouse, and then use the formatting tool {} at the top of the input box to keep the formatting and avoid having the output wrap in your posting.)
IHTH.
P.S. Note the # of followers for each of the tags you've assigned to your question by rolling over. You might get more useful "eyes" on your question by including a tag for the OS you're using, and a tag related to performance measurement/testing (Go to the tags tab and type in various terms to see how many followers you're getting. One bit of S.O. etiquette is to only specify 1 programming language (if appropriate) and that may apply to OS's too.)

What does the load-average used by parallel make represent?

Using GNU make on Windows, what exactly does the load-average value represent?
For example:
make -j --load-average=2.5
What does the 2.5 mean?
It means that make will not start any new job until the number of runnable processes, averaged over some period of time, is below 2.5.
Edit, following vines' remark
A runnable process, in Unix parlance, is a process that is either waiting for CPU time or currently running. Technically, it is a process in the TASK_RUNNING state.
However... this prompted me to re-read the original question, and note its "on Windows" part....
While my original answer is, loosely, correct for GNU Make on Unix-like hosts, it falls plainly short of factual on Windows. The discrepancy in behavior is due to the fact that the two operating systems provide very different metrics to describe their "current" CPU load. Consequently, Make's logic has to interpret these CPU load readings differently to serve its --load-average feature.
The purpose of the --load-average parameter is to provide guidance to Make as to when it can start new threads; causing Make to share CPU resources with other applications (and within itself) more elegantly.
In Linux, the semantics of this parameter are very close to its name: new Make jobs are allowed when the load average, as reported by the kernel (I'm assuming this is the "one minute" load average, though it could be the five-minute one), is less than the parameter value.
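As a toy illustration of that check (not Make's actual implementation), the 1-minute load average can be read with getloadavg(3) and compared against the threshold:

#define _DEFAULT_SOURCE
#include <stdio.h>
#include <stdlib.h>

/* Return non-zero if another job may be started under a -l style limit. */
static int below_load_limit(double limit)
{
    double avg[1];
    if (getloadavg(avg, 1) != 1)
        return 1;                    /* reading failed: don't throttle */
    return avg[0] < limit;
}

int main(void)
{
    printf("may start another job: %s\n", below_load_limit(2.5) ? "yes" : "no");
    return 0;
}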
In Windows, Make computes the load average from a weighted average of the CPU load (as reported by the GetSystemTimes function) and the memory load (e.g. from the GlobalMemoryStatusEx function).
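For illustration only, here is a sketch of how such a figure could be derived from those two Win32 calls; the 50/50 weighting and the sampling interval are made up for the example and are not Make's actual formula:

#include <windows.h>

static ULONGLONG ft_to_u64(FILETIME ft)
{
    return ((ULONGLONG)ft.dwHighDateTime << 32) | ft.dwLowDateTime;
}

/* Returns a load figure between 0.0 and 1.0 combining CPU and memory load. */
double sample_system_load(void)
{
    FILETIME idle1, kern1, user1, idle2, kern2, user2;
    GetSystemTimes(&idle1, &kern1, &user1);
    Sleep(250);                                   /* sample over a short interval */
    GetSystemTimes(&idle2, &kern2, &user2);

    ULONGLONG idle  = ft_to_u64(idle2) - ft_to_u64(idle1);
    ULONGLONG total = (ft_to_u64(kern2) + ft_to_u64(user2))
                    - (ft_to_u64(kern1) + ft_to_u64(user1));
    double cpu_load = total ? 1.0 - (double)idle / (double)total : 0.0;

    MEMORYSTATUSEX ms = { sizeof ms };            /* dwLength must be set first */
    GlobalMemoryStatusEx(&ms);
    double mem_load = ms.dwMemoryLoad / 100.0;    /* percentage of memory in use */

    return 0.5 * cpu_load + 0.5 * mem_load;       /* arbitrary illustrative weighting */
}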
On Windows - nothing, apparently. This is a UNIX term: http://en.wikipedia.org/wiki/Load_%28computing%29
My copy of Cygwin reports zero load averages when I run the uptime command. I don't think there is a quick way of calculating this on Windows; it was asked on the Cygwin mailing list in the past.
In other words: it's not implemented, so it's always zero.
Here's the implementation of getloadavg, directly from the GNU Make 3.81 sources:
# if !defined (LDAV_DONE) && (defined (__MSDOS__) || defined (WINDOWS32))
#  define LDAV_DONE
  /* A faithful emulation is going to have to be saved for a rainy day.  */
  for ( ; elem < nelem; elem++)
    {
      loadavg[elem] = 0.0;
    }
# endif /* __MSDOS__ || WINDOWS32 */
I haven't checked on newer versions of GNU make but I doubt it's changed.
